[kernel] r10295 - in dists/trunk/linux-2.6/debian/patches: bugfix/all series
Maximilian Attems
maks at alioth.debian.org
Wed Jan 30 21:52:13 UTC 2008
Author: maks
Date: Wed Jan 30 21:52:05 2008
New Revision: 10295
Log:
update to patch-2.6.24-git8
no further conflicts, huge diff due to x86 merge
Added:
dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git8
- copied, changed from r10294, /dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git7
Removed:
dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git7
Modified:
dists/trunk/linux-2.6/debian/patches/series/1~experimental.1
Copied: dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git8 (from r10294, /dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git7)
==============================================================================
--- /dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git7 (original)
+++ dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.24-git8 Wed Jan 30 21:52:05 2008
@@ -973,6 +973,191 @@
FURTHER INFORMATION
+diff --git a/Documentation/debugging-via-ohci1394.txt b/Documentation/debugging-via-ohci1394.txt
+new file mode 100644
+index 0000000..de4804e
+--- /dev/null
++++ b/Documentation/debugging-via-ohci1394.txt
+@@ -0,0 +1,179 @@
++
++ Using physical DMA provided by OHCI-1394 FireWire controllers for debugging
++ ---------------------------------------------------------------------------
++
++Introduction
++------------
++
++Basically all FireWire controllers in use today are compliant with the
++OHCI-1394 specification, which defines the controller to be a PCI bus
++master that uses DMA to offload data transfers from the CPU, and which has
++a "Physical Response Unit" that executes specific requests by employing
++PCI-bus master DMA after applying filters defined by the OHCI-1394 driver.
++
++Once properly configured, remote machines can send these requests to
++ask the OHCI-1394 controller to perform read and write requests on
++physical system memory and, for read requests, send the result of
++the physical memory read back to the requester.
++
++With that, it is possible to debug issues by reading interesting memory
++locations such as the printk buffer or the process table.
++
++Retrieving a full system memory dump is also possible over FireWire,
++at data transfer rates on the order of 10MB/s or more.
++
++Memory access is currently limited to the low 4G of physical address
++space. This can be a problem on IA64 machines where memory is located
++mostly above that limit, but it is rarely a problem on more common
++hardware such as x86, x86-64 and PowerPC.
++
++Together with an early initialization of the OHCI-1394 controller for
++debugging, this facility proved most useful for examining long debug logs
++in the printk buffer in order to debug early boot problems in areas like
++ACPI where the system fails to boot and other means of debugging (serial
++port) are either not available (notebooks) or too slow for extensive
++debug information (like ACPI).
++
++Drivers
++-------
++
++The OHCI-1394 drivers in drivers/firewire and drivers/ieee1394 initialize
++the OHCI-1394 controllers to a working state and can be used to enable
++physical DMA. You only have to load the driver; by default, physical DMA
++access will then be granted to all remote nodes. It can be turned off
++when using the ohci1394 driver.
++
++Because these drivers depend on the PCI enumeration to be completed, an
++initialization routine which runs pretty early (long before console_init(),
++which makes the printk buffer appear on the console, can be called) was
++written.
++
++To activate it, enable CONFIG_PROVIDE_OHCI1394_DMA_INIT (Kernel hacking menu:
++Provide code for enabling DMA over FireWire early on boot) and pass the
++parameter "ohci1394_dma=early" to the recompiled kernel on boot.
++
++Tools
++-----
++
++firescope - Originally developed by Benjamin Herrenschmidt; Andi Kleen
++ported it from PowerPC to x86 and x86_64 and added functionality. Firescope
++can now be used to view the printk buffer of a remote machine, even with
++live update.
++
++Bernhard Kaindl enhanced firescope to support accessing 64-bit machines
++from 32-bit firescope and vice versa:
++- ftp://ftp.suse.de/private/bk/firewire/tools/firescope-0.2.2.tar.bz2
++
++and he implemented fast system dump (alpha version - read README.txt):
++- ftp://ftp.suse.de/private/bk/firewire/tools/firedump-0.1.tar.bz2
++
++There is also a gdb proxy for FireWire which allows gdb to access
++data which can be referenced from symbols found by gdb in vmlinux:
++- ftp://ftp.suse.de/private/bk/firewire/tools/fireproxy-0.33.tar.bz2
++
++The latest version of this gdb proxy (fireproxy-0.34) can communicate (not
++yet stable) with kgdb over a memory-based communication module (kgdbom).
++
++Getting Started
++---------------
++
++The OHCI-1394 specification requires the OHCI-1394 controller to
++disable all physical DMA on each bus reset.
++
++This means that if you want to debug an issue in a system state where
++interrupts are disabled and where no polling of the OHCI-1394 controller
++for bus resets takes place, you have to establish any FireWire cable
++connections and fully initialize all FireWire hardware __before__ the
++system enters such a state.
++
++Step-by-step instructions for using firescope with early OHCI initialization:
++
++1) Verify that your hardware is supported:
++
++ Load the ohci1394 or the fw-ohci module and check your kernel logs.
++ You should see a line similar to
++
++ ohci1394: fw-host0: OHCI-1394 1.1 (PCI): IRQ=[18] MMIO=[fe9ff800-fe9fffff]
++ ... Max Packet=[2048] IR/IT contexts=[4/8]
++
++ when loading the driver. If you have no supported controller, many PCI,
++ CardBus and even some Express cards which are fully compliant with the
++ OHCI-1394 specification are available. If a card requires no driver on
++ Windows operating systems, it is most likely OHCI-1394 compliant. Only
++ specialized shops carry non-compliant cards; these are based on TI
++ PCILynx chips and require drivers on Windows operating systems.
++
++2) Establish a working FireWire cable connection:
++
++ Any FireWire cable will do, as long as it provides an electrically and
++ mechanically stable connection and has matching connectors (there are
++ small 4-pin and large 6-pin FireWire ports).
++
++ If a driver is running on both machines you should see a line like
++
++ ieee1394: Node added: ID:BUS[0-01:1023] GUID[0090270001b84bba]
++
++ on both machines in the kernel log when the cable is plugged in
++ and connects the two machines.
++
++3) Test physical DMA using firescope:
++
++ On the debug host,
++ - load the raw1394 module,
++ - make sure that /dev/raw1394 is accessible,
++ then start firescope:
++
++ $ firescope
++ Port 0 (ohci1394) opened, 2 nodes detected
++
++ FireScope
++ ---------
++ Target : <unspecified>
++ Gen : 1
++ [Ctrl-T] choose target
++ [Ctrl-H] this menu
++ [Ctrl-Q] quit
++
++ ------> Press Ctrl-T now, the output should be similar to:
++
++ 2 nodes available, local node is: 0
++ 0: ffc0, uuid: 00000000 00000000 [LOCAL]
++ 1: ffc1, uuid: 00279000 ba4bb801
++
++ Besides the [LOCAL] node, it must show another node without an error
++ message.
++
++4) Prepare for debugging with early OHCI-1394 initialization:
++
++ 4.1) Kernel compilation and installation on debug target
++
++ Compile the kernel to be debugged with CONFIG_PROVIDE_OHCI1394_DMA_INIT
++ (Kernel hacking: Provide code for enabling DMA over FireWire early on boot)
++ enabled and install it on the machine to be debugged (debug target).
++
++ 4.2) Transfer the System.map of the debugged kernel to the debug host
++
++ Copy the System.map of the kernel to be debugged to the debug host (the
++ host which is connected to the debugged machine over the FireWire cable).
++
++5) Retrieving the printk buffer contents:
++
++ With the FireWire cable connected and the OHCI-1394 driver loaded on the
++ debug host, reboot the debugged machine, booting the kernel which has
++ CONFIG_PROVIDE_OHCI1394_DMA_INIT enabled, with the option ohci1394_dma=early.
++
++ Then, on the debugging host, run firescope, for example by using -A:
++
++ firescope -A System.map-of-debug-target-kernel
++
++ Note: -A automatically attaches to the first non-local node. It only works
++ reliably if only two machines are connected using FireWire.
++
++ After having attached to the debug target, press Ctrl-D to view the
++ complete printk buffer or Ctrl-U to enter auto update mode and get an
++ updated live view of recent kernel messages logged on the debug target.
++
++ Call "firescope -h" to get more information on firescope's options.
++
++Notes
++-----
++Documentation and specifications: ftp://ftp.suse.de/private/bk/firewire/docs
++
++FireWire is a trademark of Apple Inc. - for more information please refer to:
++http://en.wikipedia.org/wiki/FireWire
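
For quick reference, the workflow described in the new document can be
sketched as a short shell transcript. This is a hypothetical sketch: the
module names, the ohci1394_dma=early parameter and the firescope -A flag
come from the text above, but the System.map filename is only an example,
and the hardware-touching commands are printed rather than executed here.

```shell
# Sketch of the firescope workflow from the document above (hypothetical
# System.map name; the real commands need FireWire hardware, so they are
# only printed, not run).
KERNEL_PARAM="ohci1394_dma=early"   # requires CONFIG_PROVIDE_OHCI1394_DMA_INIT
SYSMAP="System.map-debug-target"    # example filename

echo "target: boot the debug kernel with ${KERNEL_PARAM}"
echo "host:   modprobe ohci1394 && modprobe raw1394"
echo "host:   firescope -A ${SYSMAP}"
```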
diff --git a/Documentation/dontdiff b/Documentation/dontdiff
index f2d658a..c09a96b 100644
--- a/Documentation/dontdiff
@@ -1667,7 +1852,7 @@
+$ find . -name Kconfig\* | xargs grep -ns "depends on.*=.*||.*=" | grep -v orig
+
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
-index c417877..880f882 100644
+index c417877..5d171b7 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -34,6 +34,7 @@ parameter is applicable:
@@ -1701,7 +1886,84 @@
clock= [BUGS=X86-32, HW] gettimeofday clocksource override.
[Deprecated]
Forces specified clocksource (if available) to be used
-@@ -1123,6 +1131,10 @@ and is between 256 and 4096 characters. It is defined in the file
+@@ -408,8 +416,21 @@ and is between 256 and 4096 characters. It is defined in the file
+ [SPARC64] tick
+ [X86-64] hpet,tsc
+
+- code_bytes [IA32] How many bytes of object code to print in an
+- oops report.
++ clearcpuid=BITNUM [X86]
++ Disable CPUID feature X for the kernel. See
++ include/asm-x86/cpufeature.h for the valid bit numbers.
++ Note the Linux specific bits are not necessarily
++ stable over kernel options, but the vendor specific
++ ones should be.
++ Also note that user programs calling CPUID directly
++ or using the feature without checking anything
++ will still see it. This just prevents it from
++ being used by the kernel or shown in /proc/cpuinfo.
++ Also note the kernel might malfunction if you disable
++ some critical bits.
++
++ code_bytes [IA32/X86_64] How many bytes of object code to print
++ in an oops report.
+ Range: 0 - 8192
+ Default: 64
+
+@@ -562,6 +583,12 @@ and is between 256 and 4096 characters. It is defined in the file
+ See drivers/char/README.epca and
+ Documentation/digiepca.txt.
+
++ disable_mtrr_trim [X86, Intel and AMD only]
++ By default the kernel will trim any uncacheable
++ memory out of your available memory pool based on
++ MTRR settings. This parameter disables that behavior,
++ possibly causing your machine to run very slowly.
++
+ dmasound= [HW,OSS] Sound subsystem buffers
+
+ dscc4.setup= [NET]
+@@ -652,6 +679,10 @@ and is between 256 and 4096 characters. It is defined in the file
+
+ gamma= [HW,DRM]
+
++ gart_fix_e820= [X86_64] disable the fix e820 for K8 GART
++ Format: off | on
++ default: on
++
+ gdth= [HW,SCSI]
+ See header of drivers/scsi/gdth.c.
+
+@@ -786,6 +817,16 @@ and is between 256 and 4096 characters. It is defined in the file
+ for translation below 32 bit and if not available
+ then look in the higher range.
+
++ io_delay= [X86-32,X86-64] I/O delay method
++ 0x80
++ Standard port 0x80 based delay
++ 0xed
++ Alternate port 0xed based delay (needed on some systems)
++ udelay
++ Simple two microseconds delay
++ none
++ No delay
++
+ io7= [HW] IO7 for Marvel based alpha systems
+ See comment before marvel_specify_io7 in
+ arch/alpha/kernel/core_marvel.c.
+@@ -1051,6 +1092,11 @@ and is between 256 and 4096 characters. It is defined in the file
+ Multi-Function General Purpose Timers on AMD Geode
+ platforms.
+
++ mfgptfix [X86-32] Fix MFGPT timers on AMD Geode platforms when
++ the BIOS has incorrectly applied a workaround. TinyBIOS
++ version 0.98 is known to be affected, 0.99 fixes the
++ problem by letting the user disable the workaround.
++
+ mga= [HW,DRM]
+
+ mousedev.tap_time=
+@@ -1123,6 +1169,10 @@ and is between 256 and 4096 characters. It is defined in the file
of returning the full 64-bit number.
The default is to return 64-bit inode numbers.
@@ -1712,7 +1974,25 @@
nmi_watchdog= [KNL,BUGS=X86-32] Debugging features for SMP kernels
no387 [BUGS=X86-32] Tells the kernel to use the 387 maths
-@@ -1593,7 +1605,13 @@ and is between 256 and 4096 characters. It is defined in the file
+@@ -1147,6 +1197,8 @@ and is between 256 and 4096 characters. It is defined in the file
+
+ nodisconnect [HW,SCSI,M68K] Disables SCSI disconnects.
+
++ noefi [X86-32,X86-64] Disable EFI runtime services support.
++
+ noexec [IA-64]
+
+ noexec [X86-32,X86-64]
+@@ -1157,6 +1209,8 @@ and is between 256 and 4096 characters. It is defined in the file
+ register save and restore. The kernel will only save
+ legacy floating-point registers on task switch.
+
++ noclflush [BUGS=X86] Don't use the CLFLUSH instruction
++
+ nohlt [BUGS=ARM]
+
+ no-hlt [BUGS=X86-32] Tells the kernel that the hlt
+@@ -1593,7 +1647,13 @@ and is between 256 and 4096 characters. It is defined in the file
Format: <vendor>:<model>:<flags>
(flags are integer value)
@@ -1727,6 +2007,18 @@
scsi_mod.scan= [SCSI] sync (default) scans SCSI busses as they are
discovered. async scans them in kernel threads,
+@@ -1960,6 +2020,11 @@ and is between 256 and 4096 characters. It is defined in the file
+ vdso=1: enable VDSO (default)
+ vdso=0: disable VDSO mapping
+
++ vdso32= [X86-32,X86-64]
++ vdso32=2: enable compat VDSO (default with COMPAT_VDSO)
++ vdso32=1: enable 32-bit VDSO (default)
++ vdso32=0: disable 32-bit VDSO mapping
++
+ vector= [IA-64,SMP]
+ vector=percpu: enable percpu vector domain
+
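
Several of the parameters added above (io_delay=, disable_mtrr_trim,
vdso32=) are plain kernel command line tokens; as a hypothetical sketch
(the kernel image path and root device are examples, not from the patch),
a boot entry combining a few of them might look like:

```
# grub menu.lst entry (hypothetical paths and devices)
kernel /boot/vmlinuz-2.6.24 root=/dev/sda1 io_delay=0xed disable_mtrr_trim vdso32=0
```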
diff --git a/Documentation/kobject.txt b/Documentation/kobject.txt
index ca86a88..bf3256e 100644
--- a/Documentation/kobject.txt
@@ -5159,6 +5451,54 @@
Look at the writable files. Writing 1 to them will enable the
corresponding debug option. All options can be set on a slab that does
+diff --git a/Documentation/x86_64/boot-options.txt b/Documentation/x86_64/boot-options.txt
+index 9453118..34abae4 100644
+--- a/Documentation/x86_64/boot-options.txt
++++ b/Documentation/x86_64/boot-options.txt
+@@ -110,12 +110,18 @@ Idle loop
+
+ Rebooting
+
+- reboot=b[ios] | t[riple] | k[bd] [, [w]arm | [c]old]
++ reboot=b[ios] | t[riple] | k[bd] | a[cpi] | e[fi] [, [w]arm | [c]old]
+ bios Use the CPU reboot vector for warm reset
+ warm Don't set the cold reboot flag
+ cold Set the cold reboot flag
+ triple Force a triple fault (init)
+ kbd Use the keyboard controller. cold reset (default)
++ acpi Use the ACPI RESET_REG in the FADT. If ACPI is not configured or the
++ ACPI reset does not work, the reboot path attempts the reset using
++ the keyboard controller.
++ efi Use efi reset_system runtime service. If EFI is not configured or the
++ EFI reset does not work, the reboot path attempts the reset using
++ the keyboard controller.
+
+ Using warm reset will be much faster especially on big memory
+ systems because the BIOS will not go through the memory check.
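
As a usage sketch for the new reboot= values documented above (the image
name and label below are hypothetical), a boot entry might pass the option
like this:

```
# lilo/elilo-style entry (hypothetical names)
image=vmlinuz-2.6.24
    label=acpi-reboot
    append="reboot=acpi,warm"
```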
+diff --git a/Documentation/x86_64/uefi.txt b/Documentation/x86_64/uefi.txt
+index 91a98ed..7d77120 100644
+--- a/Documentation/x86_64/uefi.txt
++++ b/Documentation/x86_64/uefi.txt
+@@ -19,6 +19,10 @@ Mechanics:
+ - Build the kernel with the following configuration.
+ CONFIG_FB_EFI=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
++ If EFI runtime services are expected, the following configuration should
++ be selected.
++ CONFIG_EFI=y
++ CONFIG_EFI_VARS=y or m # optional
+ - Create a VFAT partition on the disk
+ - Copy the following to the VFAT partition:
+ elilo bootloader with x86_64 support, elilo configuration file,
+@@ -27,3 +31,8 @@ Mechanics:
+ can be found in the elilo sourceforge project.
+ - Boot to EFI shell and invoke elilo choosing the kernel image built
+ in first step.
++- If some or all EFI runtime services don't work, you can try the following
++  kernel command line parameters to turn off some or all EFI runtime
++  services.
++ noefi turn off all EFI runtime services
++ reboot_type=k turn off EFI reboot runtime service
diff --git a/Documentation/zh_CN/CodingStyle b/Documentation/zh_CN/CodingStyle
new file mode 100644
index 0000000..ecd9307
@@ -7225,10 +7565,22 @@
/* Slow path */
spin_lock(lock);
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
-index a04f507..de211ac 100644
+index a04f507..77201d3 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
-@@ -180,8 +180,8 @@ config ARCH_AT91
+@@ -91,6 +91,11 @@ config GENERIC_IRQ_PROBE
+ bool
+ default y
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_GENERIC_SPINLOCK
+ bool
+ default y
+@@ -180,8 +185,8 @@ config ARCH_AT91
bool "Atmel AT91"
select GENERIC_GPIO
help
@@ -7239,7 +7591,7 @@
config ARCH_CLPS7500
bool "Cirrus CL-PS7500FE"
-@@ -217,6 +217,7 @@ config ARCH_EP93XX
+@@ -217,6 +222,7 @@ config ARCH_EP93XX
bool "EP93xx-based"
select ARM_AMBA
select ARM_VIC
@@ -7247,7 +7599,7 @@
help
This enables support for the Cirrus EP93xx series of CPUs.
-@@ -333,6 +334,16 @@ config ARCH_MXC
+@@ -333,6 +339,16 @@ config ARCH_MXC
help
Support for Freescale MXC/iMX-based family of processors
@@ -7264,7 +7616,7 @@
config ARCH_PNX4008
bool "Philips Nexperia PNX4008 Mobile"
help
-@@ -345,6 +356,7 @@ config ARCH_PXA
+@@ -345,6 +361,7 @@ config ARCH_PXA
select GENERIC_GPIO
select GENERIC_TIME
select GENERIC_CLOCKEVENTS
@@ -7272,7 +7624,7 @@
help
Support for Intel/Marvell's PXA2xx/PXA3xx processor line.
-@@ -366,6 +378,7 @@ config ARCH_SA1100
+@@ -366,6 +383,7 @@ config ARCH_SA1100
select ARCH_DISCONTIGMEM_ENABLE
select ARCH_MTD_XIP
select GENERIC_GPIO
@@ -7280,7 +7632,7 @@
help
Support for StrongARM 11x0 based boards.
-@@ -409,6 +422,17 @@ config ARCH_OMAP
+@@ -409,6 +427,17 @@ config ARCH_OMAP
help
Support for TI's OMAP platform (OMAP1 and OMAP2).
@@ -7298,7 +7650,7 @@
endchoice
source "arch/arm/mach-clps711x/Kconfig"
-@@ -441,6 +465,8 @@ source "arch/arm/mach-omap1/Kconfig"
+@@ -441,6 +470,8 @@ source "arch/arm/mach-omap1/Kconfig"
source "arch/arm/mach-omap2/Kconfig"
@@ -7307,7 +7659,7 @@
source "arch/arm/plat-s3c24xx/Kconfig"
source "arch/arm/plat-s3c/Kconfig"
-@@ -477,6 +503,8 @@ source "arch/arm/mach-davinci/Kconfig"
+@@ -477,6 +508,8 @@ source "arch/arm/mach-davinci/Kconfig"
source "arch/arm/mach-ks8695/Kconfig"
@@ -7316,7 +7668,7 @@
# Definitions to make life easier
config ARCH_ACORN
bool
-@@ -657,6 +685,7 @@ config HZ
+@@ -657,6 +690,7 @@ config HZ
default 128 if ARCH_L7200
default 200 if ARCH_EBSA110 || ARCH_S3C2410
default OMAP_32K_TIMER_HZ if ARCH_OMAP && OMAP_32K_TIMER
@@ -7324,7 +7676,7 @@
default 100
config AEABI
-@@ -716,7 +745,7 @@ config LEDS
+@@ -716,7 +750,7 @@ config LEDS
ARCH_OMAP || ARCH_P720T || ARCH_PXA_IDP || \
ARCH_SA1100 || ARCH_SHARK || ARCH_VERSATILE || \
ARCH_AT91 || MACH_TRIZEPS4 || ARCH_DAVINCI || \
@@ -7333,7 +7685,7 @@
help
If you say Y here, the LEDs on your machine will be used
to provide useful information about your current system status.
-@@ -867,7 +896,7 @@ config KEXEC
+@@ -867,7 +901,7 @@ config KEXEC
endmenu
@@ -7342,7 +7694,7 @@
menu "CPU Frequency scaling"
-@@ -903,6 +932,12 @@ config CPU_FREQ_IMX
+@@ -903,6 +937,12 @@ config CPU_FREQ_IMX
If in doubt, say N.
@@ -7355,7 +7707,7 @@
endmenu
endif
-@@ -951,7 +986,7 @@ config FPE_FASTFPE
+@@ -951,7 +991,7 @@ config FPE_FASTFPE
config VFP
bool "VFP-format floating point maths"
@@ -7364,7 +7716,7 @@
help
Say Y to include VFP support code in the kernel. This is needed
if your hardware includes a VFP unit.
-@@ -961,6 +996,18 @@ config VFP
+@@ -961,6 +1001,18 @@ config VFP
Say N if your target does not have VFP hardware.
@@ -59044,6 +59396,32 @@
#if defined(CONFIG_BLK_DEV_INITRD)
. = ALIGN(4);
___initramfs_start = .;
+diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
+index bef4772..5a41e75 100644
+--- a/arch/ia64/Kconfig
++++ b/arch/ia64/Kconfig
+@@ -42,6 +42,11 @@ config MMU
+ config SWIOTLB
+ bool
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_XCHGADD_ALGORITHM
+ bool
+ default y
+@@ -75,6 +80,9 @@ config GENERIC_TIME_VSYSCALL
+ bool
+ default y
+
++config ARCH_SETS_UP_PER_CPU_AREA
++ def_bool y
++
+ config DMI
+ bool
+ default y
diff --git a/arch/ia64/hp/sim/simeth.c b/arch/ia64/hp/sim/simeth.c
index 08b117e..9898feb 100644
--- a/arch/ia64/hp/sim/simeth.c
@@ -59060,6 +59438,33 @@
/*
* very simple loop because we get interrupts only when receiving
*/
+diff --git a/arch/ia64/ia32/binfmt_elf32.c b/arch/ia64/ia32/binfmt_elf32.c
+index 3e35987..4f0c30c 100644
+--- a/arch/ia64/ia32/binfmt_elf32.c
++++ b/arch/ia64/ia32/binfmt_elf32.c
+@@ -222,7 +222,8 @@ elf32_set_personality (void)
+ }
+
+ static unsigned long
+-elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
++elf32_map(struct file *filep, unsigned long addr, struct elf_phdr *eppnt,
++ int prot, int type, unsigned long unused)
+ {
+ unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
+
+diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c
+index 1962879..e699eb6 100644
+--- a/arch/ia64/kernel/module.c
++++ b/arch/ia64/kernel/module.c
+@@ -947,7 +947,7 @@ percpu_modcopy (void *pcpudst, const void *src, unsigned long size)
+ {
+ unsigned int i;
+ for_each_possible_cpu(i) {
+- memcpy(pcpudst + __per_cpu_offset[i], src, size);
++ memcpy(pcpudst + per_cpu_offset(i), src, size);
+ }
+ }
+ #endif /* CONFIG_SMP */
diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index 4ac2b1f..86028c6 100644
--- a/arch/ia64/kernel/setup.c
@@ -59242,6 +59647,22 @@
printk("SGI SAL version %x.%02x\n", version >> 8, version & 0x00FF);
/*
+diff --git a/arch/m32r/Kconfig b/arch/m32r/Kconfig
+index ab9a264..f7237c5 100644
+--- a/arch/m32r/Kconfig
++++ b/arch/m32r/Kconfig
+@@ -235,6 +235,11 @@ config IRAM_SIZE
+ # Define implied options from the CPU selection here
+ #
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_GENERIC_SPINLOCK
+ bool
+ depends on M32R
diff --git a/arch/m32r/kernel/vmlinux.lds.S b/arch/m32r/kernel/vmlinux.lds.S
index 942a8c7..41b0785 100644
--- a/arch/m32r/kernel/vmlinux.lds.S
@@ -59365,7 +59786,7 @@
}
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
-index b22c043..6b0f85f 100644
+index b22c043..4fad0a3 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -37,16 +37,6 @@ config BASLER_EXCITE
@@ -59585,7 +60006,19 @@
source "arch/mips/jazz/Kconfig"
source "arch/mips/lasat/Kconfig"
source "arch/mips/pmc-sierra/Kconfig"
-@@ -797,10 +790,6 @@ config DMA_COHERENT
+@@ -701,6 +694,11 @@ source "arch/mips/vr41xx/Kconfig"
+
+ endmenu
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_GENERIC_SPINLOCK
+ bool
+ default y
+@@ -797,10 +795,6 @@ config DMA_COHERENT
config DMA_IP27
bool
@@ -59596,7 +60029,7 @@
config DMA_NONCOHERENT
bool
select DMA_NEED_PCI_MAP_STATE
-@@ -956,16 +945,40 @@ config EMMA2RH
+@@ -956,16 +950,40 @@ config EMMA2RH
config SERIAL_RM9000
bool
@@ -59638,7 +60071,7 @@
default "4" if PMC_MSP4200_EVAL
default "5"
-@@ -974,7 +987,7 @@ config HAVE_STD_PC_SERIAL_PORT
+@@ -974,7 +992,7 @@ config HAVE_STD_PC_SERIAL_PORT
config ARC_CONSOLE
bool "ARC console support"
@@ -59647,7 +60080,7 @@
config ARC_MEMORY
bool
-@@ -983,7 +996,7 @@ config ARC_MEMORY
+@@ -983,7 +1001,7 @@ config ARC_MEMORY
config ARC_PROMLIB
bool
@@ -59656,7 +60089,7 @@
default y
config ARC64
-@@ -1443,7 +1456,9 @@ config MIPS_MT_SMP
+@@ -1443,7 +1461,9 @@ config MIPS_MT_SMP
select MIPS_MT
select NR_CPUS_DEFAULT_2
select SMP
@@ -59666,7 +60099,7 @@
help
This is a kernel model which is also known a VSMP or lately
has been marketesed into SMVP.
-@@ -1460,6 +1475,7 @@ config MIPS_MT_SMTC
+@@ -1460,6 +1480,7 @@ config MIPS_MT_SMTC
select NR_CPUS_DEFAULT_8
select SMP
select SYS_SUPPORTS_SMP
@@ -59674,7 +60107,7 @@
help
This is a kernel model which is known a SMTC or lately has been
marketesed into SMVP.
-@@ -1469,6 +1485,19 @@ endchoice
+@@ -1469,6 +1490,19 @@ endchoice
config MIPS_MT
bool
@@ -59694,7 +60127,7 @@
config SYS_SUPPORTS_MULTITHREADING
bool
-@@ -1589,15 +1618,6 @@ config CPU_HAS_SMARTMIPS
+@@ -1589,15 +1623,6 @@ config CPU_HAS_SMARTMIPS
config CPU_HAS_WB
bool
@@ -59710,7 +60143,7 @@
#
# Vectored interrupt mode is an R2 feature
#
-@@ -1619,6 +1639,19 @@ config GENERIC_CLOCKEVENTS_BROADCAST
+@@ -1619,6 +1644,19 @@ config GENERIC_CLOCKEVENTS_BROADCAST
bool
#
@@ -59730,7 +60163,7 @@
# Use the generic interrupt handling code in kernel/irq/:
#
config GENERIC_HARDIRQS
-@@ -1721,6 +1754,9 @@ config SMP
+@@ -1721,6 +1759,9 @@ config SMP
If you don't know what to do here, say N.
@@ -59740,7 +60173,7 @@
config SYS_SUPPORTS_SMP
bool
-@@ -1978,9 +2014,6 @@ config MMU
+@@ -1978,9 +2019,6 @@ config MMU
config I8253
bool
@@ -62471,6 +62904,45 @@
MTC0 k0, CP0_EPC
/* I hope three instructions between MTC0 and ERET are enough... */
ori k1, _THREAD_MASK
+diff --git a/arch/mips/kernel/i8253.c b/arch/mips/kernel/i8253.c
+index c2d497c..fc4aa07 100644
+--- a/arch/mips/kernel/i8253.c
++++ b/arch/mips/kernel/i8253.c
+@@ -24,9 +24,7 @@ DEFINE_SPINLOCK(i8253_lock);
+ static void init_pit_timer(enum clock_event_mode mode,
+ struct clock_event_device *evt)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i8253_lock, flags);
++ spin_lock(&i8253_lock);
+
+ switch(mode) {
+ case CLOCK_EVT_MODE_PERIODIC:
+@@ -55,7 +53,7 @@ static void init_pit_timer(enum clock_event_mode mode,
+ /* Nothing to do here */
+ break;
+ }
+- spin_unlock_irqrestore(&i8253_lock, flags);
++ spin_unlock(&i8253_lock);
+ }
+
+ /*
+@@ -65,12 +63,10 @@ static void init_pit_timer(enum clock_event_mode mode,
+ */
+ static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i8253_lock, flags);
++ spin_lock(&i8253_lock);
+ outb_p(delta & 0xff , PIT_CH0); /* LSB */
+ outb(delta >> 8 , PIT_CH0); /* MSB */
+- spin_unlock_irqrestore(&i8253_lock, flags);
++ spin_unlock(&i8253_lock);
+
+ return 0;
+ }
diff --git a/arch/mips/kernel/i8259.c b/arch/mips/kernel/i8259.c
index 4710135..197d797 100644
--- a/arch/mips/kernel/i8259.c
@@ -70303,6 +70775,22 @@
#ifdef CONFIG_PCI
#ifdef CONFIG_ROCKHOPPER
ali_m5229_preinit();
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index b8ef178..2b649c4 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -19,6 +19,11 @@ config MMU
+ config STACK_GROWSUP
+ def_bool y
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_GENERIC_SPINLOCK
+ def_bool y
+
diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
index 40d0ff9..50b4a3a 100644
--- a/arch/parisc/kernel/vmlinux.lds.S
@@ -70334,6 +70822,32 @@
}
#ifdef CONFIG_BLK_DEV_INITRD
. = ALIGN(PAGE_SIZE);
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 232c298..fb85f6b 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -42,6 +42,9 @@ config GENERIC_HARDIRQS
+ bool
+ default y
+
++config ARCH_SETS_UP_PER_CPU_AREA
++ def_bool PPC64
++
+ config IRQ_PER_CPU
+ bool
+ default y
+@@ -53,6 +56,11 @@ config RWSEM_XCHGADD_ALGORITHM
+ bool
+ default y
+
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config ARCH_HAS_ILOG2_U32
+ bool
+ default y
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 18e3271..4b1d98b 100644
--- a/arch/powerpc/boot/Makefile
@@ -70347,6 +70861,90 @@
quiet_cmd_copy_zlibheader = COPY $@
cmd_copy_zlibheader = sed "s@<linux/\([^>]*\).*@\"\1\"@" $< > $@
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index 3e17d15..8b056d2 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -256,7 +256,7 @@ static int set_evrregs(struct task_struct *task, unsigned long *data)
+ #endif /* CONFIG_SPE */
+
+
+-static void set_single_step(struct task_struct *task)
++void user_enable_single_step(struct task_struct *task)
+ {
+ struct pt_regs *regs = task->thread.regs;
+
+@@ -271,7 +271,7 @@ static void set_single_step(struct task_struct *task)
+ set_tsk_thread_flag(task, TIF_SINGLESTEP);
+ }
+
+-static void clear_single_step(struct task_struct *task)
++void user_disable_single_step(struct task_struct *task)
+ {
+ struct pt_regs *regs = task->thread.regs;
+
+@@ -313,7 +313,7 @@ static int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
+ void ptrace_disable(struct task_struct *child)
+ {
+ /* make sure the single step bit is not set. */
+- clear_single_step(child);
++ user_disable_single_step(child);
+ }
+
+ /*
+@@ -445,52 +445,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
+ break;
+ }
+
+- case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+- case PTRACE_CONT: { /* restart after signal. */
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
+- if (request == PTRACE_SYSCALL)
+- set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- else
+- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- child->exit_code = data;
+- /* make sure the single step bit is not set. */
+- clear_single_step(child);
+- wake_up_process(child);
+- ret = 0;
+- break;
+- }
+-
+-/*
+- * make the child exit. Best I can do is send it a sigkill.
+- * perhaps it should be put in the status that it wants to
+- * exit.
+- */
+- case PTRACE_KILL: {
+- ret = 0;
+- if (child->exit_state == EXIT_ZOMBIE) /* already dead */
+- break;
+- child->exit_code = SIGKILL;
+- /* make sure the single step bit is not set. */
+- clear_single_step(child);
+- wake_up_process(child);
+- break;
+- }
+-
+- case PTRACE_SINGLESTEP: { /* set the trap flag. */
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
+- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- set_single_step(child);
+- child->exit_code = data;
+- /* give it a chance to run. */
+- wake_up_process(child);
+- ret = 0;
+- break;
+- }
+-
+ case PTRACE_GET_DEBUGREG: {
+ ret = -EINVAL;
+ /* We only support one DABR and no IABRS at the moment */
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 25d9a96..c8127f8 100644
--- a/arch/powerpc/kernel/sysfs.c
@@ -131142,6 +131740,32 @@
*(.exitcall.exit)
}
+diff --git a/arch/sparc64/Kconfig b/arch/sparc64/Kconfig
+index 10b212a..26f5791 100644
+--- a/arch/sparc64/Kconfig
++++ b/arch/sparc64/Kconfig
+@@ -66,6 +66,9 @@ config AUDIT_ARCH
+ bool
+ default y
+
++config ARCH_SETS_UP_PER_CPU_AREA
++ def_bool y
++
+ config ARCH_NO_VIRT_TO_BUS
+ def_bool y
+
+@@ -200,6 +203,11 @@ config US2E_FREQ
+ If in doubt, say N.
+
+ # Global things across all Sun machines.
++config GENERIC_LOCKBREAK
++ bool
++ default y
++ depends on SMP && PREEMPT
++
+ config RWSEM_GENERIC_SPINLOCK
+ bool
+
diff --git a/arch/sparc64/kernel/unaligned.c b/arch/sparc64/kernel/unaligned.c
index 953be81..dc7bf1b 100644
--- a/arch/sparc64/kernel/unaligned.c
@@ -131301,6 +131925,23 @@
/* Ensure the __preinit_array_start label is properly aligned. We
could instead move the label definition inside the section, but
+diff --git a/arch/um/kernel/ksyms.c b/arch/um/kernel/ksyms.c
+index 1b388b4..7c7142b 100644
+--- a/arch/um/kernel/ksyms.c
++++ b/arch/um/kernel/ksyms.c
+@@ -71,10 +71,10 @@ EXPORT_SYMBOL(dump_thread);
+
+ /* required for SMP */
+
+-extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
++extern void __write_lock_failed(rwlock_t *rw);
+ EXPORT_SYMBOL(__write_lock_failed);
+
+-extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
++extern void __read_lock_failed(rwlock_t *rw);
+ EXPORT_SYMBOL(__read_lock_failed);
+
+ #endif
diff --git a/arch/um/kernel/uml.lds.S b/arch/um/kernel/uml.lds.S
index 13df191..5828c1d 100644
--- a/arch/um/kernel/uml.lds.S
@@ -131323,6 +131964,197 @@
.data :
{
. = ALIGN(KERNEL_STACK_SIZE); /* init_task */
+diff --git a/arch/um/sys-i386/signal.c b/arch/um/sys-i386/signal.c
+index 0147227..19053d4 100644
+--- a/arch/um/sys-i386/signal.c
++++ b/arch/um/sys-i386/signal.c
+@@ -3,10 +3,10 @@
+ * Licensed under the GPL
+ */
+
+-#include "linux/ptrace.h"
+-#include "asm/unistd.h"
+-#include "asm/uaccess.h"
+-#include "asm/ucontext.h"
++#include <linux/ptrace.h>
++#include <asm/unistd.h>
++#include <asm/uaccess.h>
++#include <asm/ucontext.h>
+ #include "frame_kern.h"
+ #include "skas.h"
+
+@@ -18,17 +18,17 @@ void copy_sc(struct uml_pt_regs *regs, void *from)
+ REGS_FS(regs->gp) = sc->fs;
+ REGS_ES(regs->gp) = sc->es;
+ REGS_DS(regs->gp) = sc->ds;
+- REGS_EDI(regs->gp) = sc->edi;
+- REGS_ESI(regs->gp) = sc->esi;
+- REGS_EBP(regs->gp) = sc->ebp;
+- REGS_SP(regs->gp) = sc->esp;
+- REGS_EBX(regs->gp) = sc->ebx;
+- REGS_EDX(regs->gp) = sc->edx;
+- REGS_ECX(regs->gp) = sc->ecx;
+- REGS_EAX(regs->gp) = sc->eax;
+- REGS_IP(regs->gp) = sc->eip;
++ REGS_EDI(regs->gp) = sc->di;
++ REGS_ESI(regs->gp) = sc->si;
++ REGS_EBP(regs->gp) = sc->bp;
++ REGS_SP(regs->gp) = sc->sp;
++ REGS_EBX(regs->gp) = sc->bx;
++ REGS_EDX(regs->gp) = sc->dx;
++ REGS_ECX(regs->gp) = sc->cx;
++ REGS_EAX(regs->gp) = sc->ax;
++ REGS_IP(regs->gp) = sc->ip;
+ REGS_CS(regs->gp) = sc->cs;
+- REGS_EFLAGS(regs->gp) = sc->eflags;
++ REGS_EFLAGS(regs->gp) = sc->flags;
+ REGS_SS(regs->gp) = sc->ss;
+ }
+
+@@ -229,18 +229,18 @@ static int copy_sc_to_user(struct sigcontext __user *to,
+ sc.fs = REGS_FS(regs->regs.gp);
+ sc.es = REGS_ES(regs->regs.gp);
+ sc.ds = REGS_DS(regs->regs.gp);
+- sc.edi = REGS_EDI(regs->regs.gp);
+- sc.esi = REGS_ESI(regs->regs.gp);
+- sc.ebp = REGS_EBP(regs->regs.gp);
+- sc.esp = sp;
+- sc.ebx = REGS_EBX(regs->regs.gp);
+- sc.edx = REGS_EDX(regs->regs.gp);
+- sc.ecx = REGS_ECX(regs->regs.gp);
+- sc.eax = REGS_EAX(regs->regs.gp);
+- sc.eip = REGS_IP(regs->regs.gp);
++ sc.di = REGS_EDI(regs->regs.gp);
++ sc.si = REGS_ESI(regs->regs.gp);
++ sc.bp = REGS_EBP(regs->regs.gp);
++ sc.sp = sp;
++ sc.bx = REGS_EBX(regs->regs.gp);
++ sc.dx = REGS_EDX(regs->regs.gp);
++ sc.cx = REGS_ECX(regs->regs.gp);
++ sc.ax = REGS_EAX(regs->regs.gp);
++ sc.ip = REGS_IP(regs->regs.gp);
+ sc.cs = REGS_CS(regs->regs.gp);
+- sc.eflags = REGS_EFLAGS(regs->regs.gp);
+- sc.esp_at_signal = regs->regs.gp[UESP];
++ sc.flags = REGS_EFLAGS(regs->regs.gp);
++ sc.sp_at_signal = regs->regs.gp[UESP];
+ sc.ss = regs->regs.gp[SS];
+ sc.cr2 = fi->cr2;
+ sc.err = fi->error_code;
+diff --git a/arch/um/sys-x86_64/signal.c b/arch/um/sys-x86_64/signal.c
+index 1778d33..7457436 100644
+--- a/arch/um/sys-x86_64/signal.c
++++ b/arch/um/sys-x86_64/signal.c
+@@ -4,11 +4,11 @@
+ * Licensed under the GPL
+ */
+
+-#include "linux/personality.h"
+-#include "linux/ptrace.h"
+-#include "asm/unistd.h"
+-#include "asm/uaccess.h"
+-#include "asm/ucontext.h"
++#include <linux/personality.h>
++#include <linux/ptrace.h>
++#include <asm/unistd.h>
++#include <asm/uaccess.h>
++#include <asm/ucontext.h>
+ #include "frame_kern.h"
+ #include "skas.h"
+
+@@ -27,16 +27,16 @@ void copy_sc(struct uml_pt_regs *regs, void *from)
+ GETREG(regs, R13, sc, r13);
+ GETREG(regs, R14, sc, r14);
+ GETREG(regs, R15, sc, r15);
+- GETREG(regs, RDI, sc, rdi);
+- GETREG(regs, RSI, sc, rsi);
+- GETREG(regs, RBP, sc, rbp);
+- GETREG(regs, RBX, sc, rbx);
+- GETREG(regs, RDX, sc, rdx);
+- GETREG(regs, RAX, sc, rax);
+- GETREG(regs, RCX, sc, rcx);
+- GETREG(regs, RSP, sc, rsp);
+- GETREG(regs, RIP, sc, rip);
+- GETREG(regs, EFLAGS, sc, eflags);
++ GETREG(regs, RDI, sc, di);
++ GETREG(regs, RSI, sc, si);
++ GETREG(regs, RBP, sc, bp);
++ GETREG(regs, RBX, sc, bx);
++ GETREG(regs, RDX, sc, dx);
++ GETREG(regs, RAX, sc, ax);
++ GETREG(regs, RCX, sc, cx);
++ GETREG(regs, RSP, sc, sp);
++ GETREG(regs, RIP, sc, ip);
++ GETREG(regs, EFLAGS, sc, flags);
+ GETREG(regs, CS, sc, cs);
+
+ #undef GETREG
+@@ -61,16 +61,16 @@ static int copy_sc_from_user(struct pt_regs *regs,
+ err |= GETREG(regs, R13, from, r13);
+ err |= GETREG(regs, R14, from, r14);
+ err |= GETREG(regs, R15, from, r15);
+- err |= GETREG(regs, RDI, from, rdi);
+- err |= GETREG(regs, RSI, from, rsi);
+- err |= GETREG(regs, RBP, from, rbp);
+- err |= GETREG(regs, RBX, from, rbx);
+- err |= GETREG(regs, RDX, from, rdx);
+- err |= GETREG(regs, RAX, from, rax);
+- err |= GETREG(regs, RCX, from, rcx);
+- err |= GETREG(regs, RSP, from, rsp);
+- err |= GETREG(regs, RIP, from, rip);
+- err |= GETREG(regs, EFLAGS, from, eflags);
++ err |= GETREG(regs, RDI, from, di);
++ err |= GETREG(regs, RSI, from, si);
++ err |= GETREG(regs, RBP, from, bp);
++ err |= GETREG(regs, RBX, from, bx);
++ err |= GETREG(regs, RDX, from, dx);
++ err |= GETREG(regs, RAX, from, ax);
++ err |= GETREG(regs, RCX, from, cx);
++ err |= GETREG(regs, RSP, from, sp);
++ err |= GETREG(regs, RIP, from, ip);
++ err |= GETREG(regs, EFLAGS, from, flags);
+ err |= GETREG(regs, CS, from, cs);
+ if (err)
+ return 1;
+@@ -108,19 +108,19 @@ static int copy_sc_to_user(struct sigcontext __user *to,
+ __put_user((regs)->regs.gp[(regno) / sizeof(unsigned long)], \
+ &(sc)->regname)
+
+- err |= PUTREG(regs, RDI, to, rdi);
+- err |= PUTREG(regs, RSI, to, rsi);
+- err |= PUTREG(regs, RBP, to, rbp);
++ err |= PUTREG(regs, RDI, to, di);
++ err |= PUTREG(regs, RSI, to, si);
++ err |= PUTREG(regs, RBP, to, bp);
+ /*
+ * Must use orignal RSP, which is passed in, rather than what's in
+ * the pt_regs, because that's already been updated to point at the
+ * signal frame.
+ */
+- err |= __put_user(sp, &to->rsp);
+- err |= PUTREG(regs, RBX, to, rbx);
+- err |= PUTREG(regs, RDX, to, rdx);
+- err |= PUTREG(regs, RCX, to, rcx);
+- err |= PUTREG(regs, RAX, to, rax);
++ err |= __put_user(sp, &to->sp);
++ err |= PUTREG(regs, RBX, to, bx);
++ err |= PUTREG(regs, RDX, to, dx);
++ err |= PUTREG(regs, RCX, to, cx);
++ err |= PUTREG(regs, RAX, to, ax);
+ err |= PUTREG(regs, R8, to, r8);
+ err |= PUTREG(regs, R9, to, r9);
+ err |= PUTREG(regs, R10, to, r10);
+@@ -135,8 +135,8 @@ static int copy_sc_to_user(struct sigcontext __user *to,
+ err |= __put_user(fi->error_code, &to->err);
+ err |= __put_user(fi->trap_no, &to->trapno);
+
+- err |= PUTREG(regs, RIP, to, rip);
+- err |= PUTREG(regs, EFLAGS, to, eflags);
++ err |= PUTREG(regs, RIP, to, ip);
++ err |= PUTREG(regs, EFLAGS, to, flags);
+ #undef PUTREG
+
+ err |= __put_user(mask, &to->oldmask);
diff --git a/arch/v850/kernel/vmlinux.lds.S b/arch/v850/kernel/vmlinux.lds.S
index 6172599..d08cd1d 100644
--- a/arch/v850/kernel/vmlinux.lds.S
@@ -131366,6 +132198,3741 @@
_einittext = .; \
*(.text.init) /* 2.4 convention */ \
INITCALL_CONTENTS \
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 80b7ba4..fb3eea3 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -17,81 +17,69 @@ config X86_64
+
+ ### Arch settings
+ config X86
+- bool
+- default y
++ def_bool y
++
++config GENERIC_LOCKBREAK
++ def_bool n
+
+ config GENERIC_TIME
+- bool
+- default y
++ def_bool y
+
+ config GENERIC_CMOS_UPDATE
+- bool
+- default y
++ def_bool y
+
+ config CLOCKSOURCE_WATCHDOG
+- bool
+- default y
++ def_bool y
+
+ config GENERIC_CLOCKEVENTS
+- bool
+- default y
++ def_bool y
+
+ config GENERIC_CLOCKEVENTS_BROADCAST
+- bool
+- default y
++ def_bool y
+ depends on X86_64 || (X86_32 && X86_LOCAL_APIC)
+
+ config LOCKDEP_SUPPORT
+- bool
+- default y
++ def_bool y
+
+ config STACKTRACE_SUPPORT
+- bool
+- default y
++ def_bool y
+
+ config SEMAPHORE_SLEEPERS
+- bool
+- default y
++ def_bool y
+
+ config MMU
+- bool
+- default y
++ def_bool y
+
+ config ZONE_DMA
+- bool
+- default y
++ def_bool y
+
+ config QUICKLIST
+- bool
+- default X86_32
++ def_bool X86_32
+
+ config SBUS
+ bool
+
+ config GENERIC_ISA_DMA
+- bool
+- default y
++ def_bool y
+
+ config GENERIC_IOMAP
+- bool
+- default y
++ def_bool y
+
+ config GENERIC_BUG
+- bool
+- default y
++ def_bool y
+ depends on BUG
+
+ config GENERIC_HWEIGHT
+- bool
+- default y
++ def_bool y
++
++config GENERIC_GPIO
++ def_bool n
+
+ config ARCH_MAY_HAVE_PC_FDC
+- bool
+- default y
++ def_bool y
+
+ config DMI
+- bool
+- default y
++ def_bool y
+
+ config RWSEM_GENERIC_SPINLOCK
+ def_bool !X86_XADD
+@@ -112,6 +100,9 @@ config GENERIC_TIME_VSYSCALL
+ bool
+ default X86_64
+
++config HAVE_SETUP_PER_CPU_AREA
++ def_bool X86_64
++
+ config ARCH_SUPPORTS_OPROFILE
+ bool
+ default y
+@@ -144,9 +135,17 @@ config GENERIC_PENDING_IRQ
+
+ config X86_SMP
+ bool
+- depends on X86_32 && SMP && !X86_VOYAGER
++ depends on SMP && ((X86_32 && !X86_VOYAGER) || X86_64)
+ default y
+
++config X86_32_SMP
++ def_bool y
++ depends on X86_32 && SMP
++
++config X86_64_SMP
++ def_bool y
++ depends on X86_64 && SMP
++
+ config X86_HT
+ bool
+ depends on SMP
+@@ -292,6 +291,18 @@ config X86_ES7000
+ Only choose this option if you have such a system, otherwise you
+ should say N here.
+
++config X86_RDC321X
++ bool "RDC R-321x SoC"
++ depends on X86_32
++ select M486
++ select X86_REBOOTFIXUPS
++ select GENERIC_GPIO
++ select LEDS_GPIO
++ help
++ This option is needed for RDC R-321x system-on-chip, also known
++ as R-8610-(G).
++ If you don't have one of these chips, you should say N here.
++
+ config X86_VSMP
+ bool "Support for ScaleMP vSMP"
+ depends on X86_64 && PCI
+@@ -303,8 +314,8 @@ config X86_VSMP
+ endchoice
+
+ config SCHED_NO_NO_OMIT_FRAME_POINTER
+- bool "Single-depth WCHAN output"
+- default y
++ def_bool y
++ prompt "Single-depth WCHAN output"
+ depends on X86_32
+ help
+ Calculate simpler /proc/<PID>/wchan values. If this option
+@@ -314,18 +325,8 @@ config SCHED_NO_NO_OMIT_FRAME_POINTER
+
+ If in doubt, say "Y".
+
+-config PARAVIRT
+- bool
+- depends on X86_32 && !(X86_VISWS || X86_VOYAGER)
+- help
+- This changes the kernel so it can modify itself when it is run
+- under a hypervisor, potentially improving performance significantly
+- over full virtualization. However, when run without a hypervisor
+- the kernel is theoretically slower and slightly larger.
+-
+ menuconfig PARAVIRT_GUEST
+ bool "Paravirtualized guest support"
+- depends on X86_32
+ help
+ Say Y here to get to see options related to running Linux under
+ various hypervisors. This option alone does not add any kernel code.
+@@ -339,6 +340,7 @@ source "arch/x86/xen/Kconfig"
+ config VMI
+ bool "VMI Guest support"
+ select PARAVIRT
++ depends on X86_32
+ depends on !(X86_VISWS || X86_VOYAGER)
+ help
+ VMI provides a paravirtualized interface to the VMware ESX server
+@@ -348,40 +350,43 @@ config VMI
+
+ source "arch/x86/lguest/Kconfig"
+
++config PARAVIRT
++ bool "Enable paravirtualization code"
++ depends on !(X86_VISWS || X86_VOYAGER)
++ help
++ This changes the kernel so it can modify itself when it is run
++ under a hypervisor, potentially improving performance significantly
++ over full virtualization. However, when run without a hypervisor
++ the kernel is theoretically slower and slightly larger.
++
+ endif
+
+ config ACPI_SRAT
+- bool
+- default y
++ def_bool y
+ depends on X86_32 && ACPI && NUMA && (X86_SUMMIT || X86_GENERICARCH)
+ select ACPI_NUMA
+
+ config HAVE_ARCH_PARSE_SRAT
+- bool
+- default y
+- depends on ACPI_SRAT
++ def_bool y
++ depends on ACPI_SRAT
+
+ config X86_SUMMIT_NUMA
+- bool
+- default y
++ def_bool y
+ depends on X86_32 && NUMA && (X86_SUMMIT || X86_GENERICARCH)
+
+ config X86_CYCLONE_TIMER
+- bool
+- default y
++ def_bool y
+ depends on X86_32 && X86_SUMMIT || X86_GENERICARCH
+
+ config ES7000_CLUSTERED_APIC
+- bool
+- default y
++ def_bool y
+ depends on SMP && X86_ES7000 && MPENTIUMIII
+
+ source "arch/x86/Kconfig.cpu"
+
+ config HPET_TIMER
+- bool
++ def_bool X86_64
+ prompt "HPET Timer Support" if X86_32
+- default X86_64
+ help
+ Use the IA-PC HPET (High Precision Event Timer) to manage
+ time in preference to the PIT and RTC, if a HPET is
+@@ -399,9 +404,8 @@ config HPET_TIMER
+ Choose N to continue using the legacy 8254 timer.
+
+ config HPET_EMULATE_RTC
+- bool
+- depends on HPET_TIMER && RTC=y
+- default y
++ def_bool y
++ depends on HPET_TIMER && (RTC=y || RTC=m)
+
+ # Mark as embedded because too many people got it wrong.
+ # The code disables itself when not needed.
+@@ -441,8 +445,8 @@ config CALGARY_IOMMU
+ If unsure, say Y.
+
+ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
+- bool "Should Calgary be enabled by default?"
+- default y
++ def_bool y
++ prompt "Should Calgary be enabled by default?"
+ depends on CALGARY_IOMMU
+ help
+ Should Calgary be enabled by default? if you choose 'y', Calgary
+@@ -486,9 +490,9 @@ config SCHED_SMT
+ N here.
+
+ config SCHED_MC
+- bool "Multi-core scheduler support"
++ def_bool y
++ prompt "Multi-core scheduler support"
+ depends on (X86_64 && SMP) || (X86_32 && X86_HT)
+- default y
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+@@ -522,19 +526,16 @@ config X86_UP_IOAPIC
+ an IO-APIC, then the kernel will still run with no slowdown at all.
+
+ config X86_LOCAL_APIC
+- bool
++ def_bool y
+ depends on X86_64 || (X86_32 && (X86_UP_APIC || ((X86_VISWS || SMP) && !X86_VOYAGER) || X86_GENERICARCH))
+- default y
+
+ config X86_IO_APIC
+- bool
++ def_bool y
+ depends on X86_64 || (X86_32 && (X86_UP_IOAPIC || (SMP && !(X86_VISWS || X86_VOYAGER)) || X86_GENERICARCH))
+- default y
+
+ config X86_VISWS_APIC
+- bool
++ def_bool y
+ depends on X86_32 && X86_VISWS
+- default y
+
+ config X86_MCE
+ bool "Machine Check Exception"
+@@ -554,17 +555,17 @@ config X86_MCE
+ the 386 and 486, so nearly everyone can say Y here.
+
+ config X86_MCE_INTEL
+- bool "Intel MCE features"
++ def_bool y
++ prompt "Intel MCE features"
+ depends on X86_64 && X86_MCE && X86_LOCAL_APIC
+- default y
+ help
+ Additional support for intel specific MCE features such as
+ the thermal monitor.
+
+ config X86_MCE_AMD
+- bool "AMD MCE features"
++ def_bool y
++ prompt "AMD MCE features"
+ depends on X86_64 && X86_MCE && X86_LOCAL_APIC
+- default y
+ help
+ Additional support for AMD specific MCE features such as
+ the DRAM Error Threshold.
+@@ -637,9 +638,9 @@ config I8K
+ Say N otherwise.
+
+ config X86_REBOOTFIXUPS
+- bool "Enable X86 board specific fixups for reboot"
++ def_bool n
++ prompt "Enable X86 board specific fixups for reboot"
+ depends on X86_32 && X86
+- default n
+ ---help---
+ This enables chipset and/or board specific fixups to be done
+ in order to get reboot to work correctly. This is only needed on
+@@ -648,7 +649,7 @@ config X86_REBOOTFIXUPS
+ system.
+
+ Currently, the only fixup is for the Geode machines using
+- CS5530A and CS5536 chipsets.
++ CS5530A and CS5536 chipsets and the RDC R-321x SoC.
+
+ Say Y if you want to enable the fixup. Currently, it's safe to
+ enable this option even if you don't need it.
+@@ -672,9 +673,8 @@ config MICROCODE
+ module will be called microcode.
+
+ config MICROCODE_OLD_INTERFACE
+- bool
++ def_bool y
+ depends on MICROCODE
+- default y
+
+ config X86_MSR
+ tristate "/dev/cpu/*/msr - Model-specific register support"
+@@ -798,13 +798,12 @@ config PAGE_OFFSET
+ depends on X86_32
+
+ config HIGHMEM
+- bool
++ def_bool y
+ depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
+- default y
+
+ config X86_PAE
+- bool "PAE (Physical Address Extension) Support"
+- default n
++ def_bool n
++ prompt "PAE (Physical Address Extension) Support"
+ depends on X86_32 && !HIGHMEM4G
+ select RESOURCES_64BIT
+ help
+@@ -836,10 +835,10 @@ comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
+ depends on X86_32 && X86_SUMMIT && (!HIGHMEM64G || !ACPI)
+
+ config K8_NUMA
+- bool "Old style AMD Opteron NUMA detection"
+- depends on X86_64 && NUMA && PCI
+- default y
+- help
++ def_bool y
++ prompt "Old style AMD Opteron NUMA detection"
++ depends on X86_64 && NUMA && PCI
++ help
+ Enable K8 NUMA node topology detection. You should say Y here if
+ you have a multi processor AMD K8 system. This uses an old
+ method to read the NUMA configuration directly from the builtin
+@@ -847,10 +846,10 @@ config K8_NUMA
+ instead, which also takes priority if both are compiled in.
+
+ config X86_64_ACPI_NUMA
+- bool "ACPI NUMA detection"
++ def_bool y
++ prompt "ACPI NUMA detection"
+ depends on X86_64 && NUMA && ACPI && PCI
+ select ACPI_NUMA
+- default y
+ help
+ Enable ACPI SRAT based node topology detection.
+
+@@ -864,52 +863,53 @@ config NUMA_EMU
+
+ config NODES_SHIFT
+ int
++ range 1 15 if X86_64
+ default "6" if X86_64
+ default "4" if X86_NUMAQ
+ default "3"
+ depends on NEED_MULTIPLE_NODES
+
+ config HAVE_ARCH_BOOTMEM_NODE
+- bool
++ def_bool y
+ depends on X86_32 && NUMA
+- default y
+
+ config ARCH_HAVE_MEMORY_PRESENT
+- bool
++ def_bool y
+ depends on X86_32 && DISCONTIGMEM
+- default y
+
+ config NEED_NODE_MEMMAP_SIZE
+- bool
++ def_bool y
+ depends on X86_32 && (DISCONTIGMEM || SPARSEMEM)
+- default y
+
+ config HAVE_ARCH_ALLOC_REMAP
+- bool
++ def_bool y
+ depends on X86_32 && NUMA
+- default y
+
+ config ARCH_FLATMEM_ENABLE
+ def_bool y
+- depends on (X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC) || (X86_64 && !NUMA)
++ depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC && !NUMA
+
+ config ARCH_DISCONTIGMEM_ENABLE
+ def_bool y
+- depends on NUMA
++ depends on NUMA && X86_32
+
+ config ARCH_DISCONTIGMEM_DEFAULT
+ def_bool y
+- depends on NUMA
++ depends on NUMA && X86_32
++
++config ARCH_SPARSEMEM_DEFAULT
++ def_bool y
++ depends on X86_64
+
+ config ARCH_SPARSEMEM_ENABLE
+ def_bool y
+- depends on NUMA || (EXPERIMENTAL && (X86_PC || X86_64))
++ depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC)
+ select SPARSEMEM_STATIC if X86_32
+ select SPARSEMEM_VMEMMAP_ENABLE if X86_64
+
+ config ARCH_SELECT_MEMORY_MODEL
+ def_bool y
+- depends on X86_32 && ARCH_SPARSEMEM_ENABLE
++ depends on ARCH_SPARSEMEM_ENABLE
+
+ config ARCH_MEMORY_PROBE
+ def_bool X86_64
+@@ -987,42 +987,32 @@ config MTRR
+ See <file:Documentation/mtrr.txt> for more information.
+
+ config EFI
+- bool "Boot from EFI support"
+- depends on X86_32 && ACPI
+- default n
++ def_bool n
++ prompt "EFI runtime service support"
++ depends on ACPI
+ ---help---
+- This enables the kernel to boot on EFI platforms using
+- system configuration information passed to it from the firmware.
+- This also enables the kernel to use any EFI runtime services that are
++ This enables the kernel to use EFI runtime services that are
+ available (such as the EFI variable services).
+
+- This option is only useful on systems that have EFI firmware
+- and will result in a kernel image that is ~8k larger. In addition,
+- you must use the latest ELILO loader available at
+- <http://elilo.sourceforge.net> in order to take advantage of
+- kernel initialization using EFI information (neither GRUB nor LILO know
+- anything about EFI). However, even with this option, the resultant
+- kernel should continue to boot on existing non-EFI platforms.
++ This option is only useful on systems that have EFI firmware.
++ In addition, you should use the latest ELILO loader available
++ at <http://elilo.sourceforge.net> in order to take advantage
++ of EFI runtime services. However, even with this option, the
++ resultant kernel should continue to boot on existing non-EFI
++ platforms.
+
+ config IRQBALANCE
+- bool "Enable kernel irq balancing"
++ def_bool y
++ prompt "Enable kernel irq balancing"
+ depends on X86_32 && SMP && X86_IO_APIC
+- default y
+ help
+ The default yes will allow the kernel to do irq load balancing.
+ Saying no will keep the kernel from doing irq load balancing.
+
+-# turning this on wastes a bunch of space.
+-# Summit needs it only when NUMA is on
+-config BOOT_IOREMAP
+- bool
+- depends on X86_32 && (((X86_SUMMIT || X86_GENERICARCH) && NUMA) || (X86 && EFI))
+- default y
+-
+ config SECCOMP
+- bool "Enable seccomp to safely compute untrusted bytecode"
++ def_bool y
++ prompt "Enable seccomp to safely compute untrusted bytecode"
+ depends on PROC_FS
+- default y
+ help
+ This kernel feature is useful for number crunching applications
+ that may need to compute untrusted bytecode during their
+@@ -1189,11 +1179,11 @@ config HOTPLUG_CPU
+ suspend.
+
+ config COMPAT_VDSO
+- bool "Compat VDSO support"
+- default y
+- depends on X86_32
++ def_bool y
++ prompt "Compat VDSO support"
++ depends on X86_32 || IA32_EMULATION
+ help
+- Map the VDSO to the predictable old-style address too.
++ Map the 32-bit VDSO to the predictable old-style address too.
+ ---help---
+ Say N here if you are running a sufficiently recent glibc
+ version (2.3.3 or later), to remove the high-mapped
+@@ -1207,30 +1197,26 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
+ def_bool y
+ depends on X86_64 || (X86_32 && HIGHMEM)
+
+-config MEMORY_HOTPLUG_RESERVE
+- def_bool X86_64
+- depends on (MEMORY_HOTPLUG && DISCONTIGMEM)
+-
+ config HAVE_ARCH_EARLY_PFN_TO_NID
+ def_bool X86_64
+ depends on NUMA
+
+-config OUT_OF_LINE_PFN_TO_PAGE
+- def_bool X86_64
+- depends on DISCONTIGMEM
+-
+ menu "Power management options"
+ depends on !X86_VOYAGER
+
+ config ARCH_HIBERNATION_HEADER
+- bool
++ def_bool y
+ depends on X86_64 && HIBERNATION
+- default y
+
+ source "kernel/power/Kconfig"
+
+ source "drivers/acpi/Kconfig"
+
++config X86_APM_BOOT
++ bool
++ default y
++ depends on APM || APM_MODULE
++
+ menuconfig APM
+ tristate "APM (Advanced Power Management) BIOS support"
+ depends on X86_32 && PM_SLEEP && !X86_VISWS
+@@ -1371,7 +1357,7 @@ menu "Bus options (PCI etc.)"
+ config PCI
+ bool "PCI support" if !X86_VISWS
+ depends on !X86_VOYAGER
+- default y if X86_VISWS
++ default y
+ select ARCH_SUPPORTS_MSI if (X86_LOCAL_APIC && X86_IO_APIC)
+ help
+ Find out whether you have a PCI motherboard. PCI is the name of a
+@@ -1418,25 +1404,21 @@ config PCI_GOANY
+ endchoice
+
+ config PCI_BIOS
+- bool
++ def_bool y
+ depends on X86_32 && !X86_VISWS && PCI && (PCI_GOBIOS || PCI_GOANY)
+- default y
+
+ # x86-64 doesn't support PCI BIOS access from long mode so always go direct.
+ config PCI_DIRECT
+- bool
++ def_bool y
+ depends on PCI && (X86_64 || (PCI_GODIRECT || PCI_GOANY) || X86_VISWS)
+- default y
+
+ config PCI_MMCONFIG
+- bool
++ def_bool y
+ depends on X86_32 && PCI && ACPI && (PCI_GOMMCONFIG || PCI_GOANY)
+- default y
+
+ config PCI_DOMAINS
+- bool
++ def_bool y
+ depends on PCI
+- default y
+
+ config PCI_MMCONFIG
+ bool "Support mmconfig PCI config space access"
+@@ -1453,9 +1435,9 @@ config DMAR
+ remapping devices.
+
+ config DMAR_GFX_WA
+- bool "Support for Graphics workaround"
++ def_bool y
++ prompt "Support for Graphics workaround"
+ depends on DMAR
+- default y
+ help
+ Current Graphics drivers tend to use physical address
+ for DMA and avoid using DMA APIs. Setting this config
+@@ -1464,9 +1446,8 @@ config DMAR_GFX_WA
+ to use physical addresses for DMA.
+
+ config DMAR_FLOPPY_WA
+- bool
++ def_bool y
+ depends on DMAR
+- default y
+ help
+ Floppy disk drivers are know to bypass DMA API calls
+ thereby failing to work when IOMMU is enabled. This
+@@ -1479,8 +1460,7 @@ source "drivers/pci/Kconfig"
+
+ # x86_64 have no ISA slots, but do have ISA-style DMA.
+ config ISA_DMA_API
+- bool
+- default y
++ def_bool y
+
+ if X86_32
+
+@@ -1546,9 +1526,9 @@ config SCx200HR_TIMER
+ other workaround is idle=poll boot option.
+
+ config GEODE_MFGPT_TIMER
+- bool "Geode Multi-Function General Purpose Timer (MFGPT) events"
++ def_bool y
++ prompt "Geode Multi-Function General Purpose Timer (MFGPT) events"
+ depends on MGEODE_LX && GENERIC_TIME && GENERIC_CLOCKEVENTS
+- default y
+ help
+ This driver provides a clock event source based on the MFGPT
+ timer(s) in the CS5535 and CS5536 companion chip for the geode.
+@@ -1575,6 +1555,7 @@ source "fs/Kconfig.binfmt"
+ config IA32_EMULATION
+ bool "IA32 Emulation"
+ depends on X86_64
++ select COMPAT_BINFMT_ELF
+ help
+ Include code to run 32-bit programs under a 64-bit kernel. You should
+ likely turn this on, unless you're 100% sure that you don't have any
+@@ -1587,18 +1568,16 @@ config IA32_AOUT
+ Support old a.out binaries in the 32bit emulation.
+
+ config COMPAT
+- bool
++ def_bool y
+ depends on IA32_EMULATION
+- default y
+
+ config COMPAT_FOR_U64_ALIGNMENT
+ def_bool COMPAT
+ depends on X86_64
+
+ config SYSVIPC_COMPAT
+- bool
++ def_bool y
+ depends on X86_64 && COMPAT && SYSVIPC
+- default y
+
+ endmenu
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index c301622..e09a6b7 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -219,10 +219,10 @@ config MGEODEGX1
+ Select this for a Geode GX1 (Cyrix MediaGX) chip.
+
+ config MGEODE_LX
+- bool "Geode GX/LX"
++ bool "Geode GX/LX"
+ depends on X86_32
+- help
+- Select this for AMD Geode GX and LX processors.
++ help
++ Select this for AMD Geode GX and LX processors.
+
+ config MCYRIXIII
+ bool "CyrixIII/VIA-C3"
+@@ -258,7 +258,7 @@ config MPSC
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+ Xeon CPUs with Intel 64bit which is compatible with x86-64.
+ Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
+- Netburst core and shouldn't use this option. You can distinguish them
++ Netburst core and shouldn't use this option. You can distinguish them
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
+@@ -317,81 +317,75 @@ config X86_L1_CACHE_SHIFT
+ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7
+
+ config X86_XADD
+- bool
++ def_bool y
+ depends on X86_32 && !M386
+- default y
+
+ config X86_PPRO_FENCE
+- bool
++ bool "PentiumPro memory ordering errata workaround"
+ depends on M686 || M586MMX || M586TSC || M586 || M486 || M386 || MGEODEGX1
+- default y
++ help
++ Old PentiumPro multiprocessor systems had errata that could cause memory
++ operations to violate the x86 ordering standard in rare cases. Enabling this
++ option will attempt to work around some (but not all) occurrences of
++ this problem, at the cost of much heavier spinlock and memory barrier
++ operations.
++
++ If unsure, say n here. Even distro kernels should think twice before enabling
++ this: there are few systems, and an unlikely bug.
+
+ config X86_F00F_BUG
+- bool
++ def_bool y
+ depends on M586MMX || M586TSC || M586 || M486 || M386
+- default y
+
+ config X86_WP_WORKS_OK
+- bool
++ def_bool y
+ depends on X86_32 && !M386
+- default y
+
+ config X86_INVLPG
+- bool
++ def_bool y
+ depends on X86_32 && !M386
+- default y
+
+ config X86_BSWAP
+- bool
++ def_bool y
+ depends on X86_32 && !M386
+- default y
+
+ config X86_POPAD_OK
+- bool
++ def_bool y
+ depends on X86_32 && !M386
+- default y
+
+ config X86_ALIGNMENT_16
+- bool
++ def_bool y
+ depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || X86_ELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2 || MGEODEGX1
+- default y
+
+ config X86_GOOD_APIC
+- bool
++ def_bool y
+ depends on MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8 || MEFFICEON || MCORE2 || MVIAC7 || X86_64
+- default y
+
+ config X86_INTEL_USERCOPY
+- bool
++ def_bool y
+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
+- default y
+
+ config X86_USE_PPRO_CHECKSUM
+- bool
++ def_bool y
+ depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON || MGEODE_LX || MCORE2
+- default y
+
+ config X86_USE_3DNOW
+- bool
++ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+- default y
+
+ config X86_OOSTORE
+- bool
++ def_bool y
+ depends on (MWINCHIP3D || MWINCHIP2 || MWINCHIPC6) && MTRR
+- default y
+
+ config X86_TSC
+- bool
++ def_bool y
+ depends on ((MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
+- default y
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+- bool
++ def_bool y
+ depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7)
+- default y
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+@@ -399,3 +393,6 @@ config X86_MINIMUM_CPU_FAMILY
+ default "4" if X86_32 && (X86_XADD || X86_CMPXCHG || X86_BSWAP || X86_WP_WORKS_OK)
+ default "3"
+
++config X86_DEBUGCTLMSR
++ def_bool y
++ depends on !(M586MMX || M586TSC || M586 || M486 || M386)
+diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
+index 761ca7b..2e1e3af 100644
+--- a/arch/x86/Kconfig.debug
++++ b/arch/x86/Kconfig.debug
+@@ -6,7 +6,7 @@ config TRACE_IRQFLAGS_SUPPORT
+ source "lib/Kconfig.debug"
+
+ config EARLY_PRINTK
+- bool "Early printk" if EMBEDDED && DEBUG_KERNEL && X86_32
++ bool "Early printk" if EMBEDDED
+ default y
+ help
+ Write kernel log output directly into the VGA buffer or to a serial
+@@ -40,22 +40,49 @@ comment "Page alloc debug is incompatible with Software Suspend on i386"
+
+ config DEBUG_PAGEALLOC
+ bool "Debug page memory allocations"
+- depends on DEBUG_KERNEL && !HIBERNATION && !HUGETLBFS
+- depends on X86_32
++ depends on DEBUG_KERNEL && X86_32
+ help
+ Unmap pages from the kernel linear mapping after free_pages().
+ This results in a large slowdown, but helps to find certain types
+ of memory corruptions.
+
++config DEBUG_PER_CPU_MAPS
++ bool "Debug access to per_cpu maps"
++ depends on DEBUG_KERNEL
++ depends on X86_64_SMP
++ default n
++ help
++ Say Y to verify that the per_cpu map being accessed has
++ been set up. Adds a fair amount of code to kernel memory
++ and decreases performance.
++
++ Say N if unsure.
++
+ config DEBUG_RODATA
+ bool "Write protect kernel read-only data structures"
++ default y
+ depends on DEBUG_KERNEL
+ help
+ Mark the kernel read-only data as write-protected in the pagetables,
+ in order to catch accidental (and incorrect) writes to such const
+- data. This option may have a slight performance impact because a
+- portion of the kernel code won't be covered by a 2MB TLB anymore.
+- If in doubt, say "N".
++ data. This is recommended so that we can catch kernel bugs sooner.
++ If in doubt, say "Y".
++
++config DEBUG_RODATA_TEST
++ bool "Testcase for the DEBUG_RODATA feature"
++ depends on DEBUG_RODATA
++ help
++ This option enables a testcase for the DEBUG_RODATA
++ feature as well as for the change_page_attr() infrastructure.
++ If in doubt, say "N"
++
++config DEBUG_NX_TEST
++ tristate "Testcase for the NX non-executable stack feature"
++ depends on DEBUG_KERNEL && m
++ help
++ This option enables a testcase for the CPU NX capability
++ and the software setup of this feature.
++ If in doubt, say "N"
+
+ config 4KSTACKS
+ bool "Use 4Kb for kernel stacks instead of 8Kb"
+@@ -75,8 +102,7 @@ config X86_FIND_SMP_CONFIG
+
+ config X86_MPPARSE
+ def_bool y
+- depends on X86_LOCAL_APIC && !X86_VISWS
+- depends on X86_32
++ depends on (X86_32 && (X86_LOCAL_APIC && !X86_VISWS)) || X86_64
+
+ config DOUBLEFAULT
+ default y
+@@ -112,4 +138,91 @@ config IOMMU_LEAK
+ Add a simple leak tracer to the IOMMU code. This is useful when you
+ are debugging a buggy device driver that leaks IOMMU mappings.
+
++#
++# IO delay types:
++#
++
++config IO_DELAY_TYPE_0X80
++ int
++ default "0"
++
++config IO_DELAY_TYPE_0XED
++ int
++ default "1"
++
++config IO_DELAY_TYPE_UDELAY
++ int
++ default "2"
++
++config IO_DELAY_TYPE_NONE
++ int
++ default "3"
++
++choice
++ prompt "IO delay type"
++ default IO_DELAY_0XED
++
++config IO_DELAY_0X80
++ bool "port 0x80 based port-IO delay [recommended]"
++ help
++ This is the traditional Linux IO delay used for in/out_p.
++ It is the most tested and hence the safest selection here.
++
++config IO_DELAY_0XED
++ bool "port 0xed based port-IO delay"
++ help
++ Use port 0xed as the IO delay. This frees up port 0x80 which is
++ often used as a hardware-debug port.
++
++config IO_DELAY_UDELAY
++ bool "udelay based port-IO delay"
++ help
++ Use udelay(2) as the IO delay method. This provides the delay
++ while not having any side-effect on the IO port space.
++
++config IO_DELAY_NONE
++ bool "no port-IO delay"
++ help
++ No port-IO delay. Will break on old boxes that require port-IO
++ delay for certain operations. Should work on most new machines.
++
++endchoice
++
++if IO_DELAY_0X80
++config DEFAULT_IO_DELAY_TYPE
++ int
++ default IO_DELAY_TYPE_0X80
++endif
++
++if IO_DELAY_0XED
++config DEFAULT_IO_DELAY_TYPE
++ int
++ default IO_DELAY_TYPE_0XED
++endif
++
++if IO_DELAY_UDELAY
++config DEFAULT_IO_DELAY_TYPE
++ int
++ default IO_DELAY_TYPE_UDELAY
++endif
++
++if IO_DELAY_NONE
++config DEFAULT_IO_DELAY_TYPE
++ int
++ default IO_DELAY_TYPE_NONE
++endif
++
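[editor's note, not part of the patch] The four IO_DELAY_TYPE_* Kconfig ints above act as numeric constants that the choice block maps onto DEFAULT_IO_DELAY_TYPE. A hedged user-space sketch of how kernel code might consume such constants (the enum and function names here are illustrative, not taken from the patch):

```c
/* Illustrative only: mirrors the numeric values assigned by the
 * IO_DELAY_TYPE_* Kconfig symbols in the hunk above. */
enum io_delay_type {
	IO_DELAY_TYPE_0X80   = 0,	/* outb to port 0x80 (traditional) */
	IO_DELAY_TYPE_0XED   = 1,	/* outb to port 0xed */
	IO_DELAY_TYPE_UDELAY = 2,	/* udelay-based, no port side effects */
	IO_DELAY_TYPE_NONE   = 3,	/* no delay at all */
};

/* Hypothetical helper mapping the selected type to a printable name,
 * e.g. for a boot-time "io_delay=" parameter handler. */
static const char *io_delay_name(int type)
{
	switch (type) {
	case IO_DELAY_TYPE_0X80:	return "0x80";
	case IO_DELAY_TYPE_0XED:	return "0xed";
	case IO_DELAY_TYPE_UDELAY:	return "udelay";
	case IO_DELAY_TYPE_NONE:	return "none";
	}
	return "unknown";
}
```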
++config DEBUG_BOOT_PARAMS
++ bool "Debug boot parameters"
++ depends on DEBUG_KERNEL
++ depends on DEBUG_FS
++ help
++ This option will cause struct boot_params to be exported via debugfs.
++
++config CPA_DEBUG
++ bool "CPA self test code"
++ depends on DEBUG_KERNEL
++ help
++ Do change_page_attr self tests at boot.
++
+ endmenu
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 7aa1dc6..b08f182 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -7,13 +7,252 @@ else
+ KBUILD_DEFCONFIG := $(ARCH)_defconfig
+ endif
+
+-# No need to remake these files
+-$(srctree)/arch/x86/Makefile%: ;
++# BITS is used as extension for files which are available in a 32 bit
++# and a 64 bit version to simplify shared Makefiles.
++# e.g.: obj-y += foo_$(BITS).o
++export BITS
+
+ ifeq ($(CONFIG_X86_32),y)
++ BITS := 32
+ UTS_MACHINE := i386
+- include $(srctree)/arch/x86/Makefile_32
++ CHECKFLAGS += -D__i386__
++
++ biarch := $(call cc-option,-m32)
++ KBUILD_AFLAGS += $(biarch)
++ KBUILD_CFLAGS += $(biarch)
++
++ ifdef CONFIG_RELOCATABLE
++ LDFLAGS_vmlinux := --emit-relocs
++ endif
++
++ KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
++
++ # prevent gcc from keeping the stack 16 byte aligned
++ KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
++
++ # Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
++ # a lot more stack due to the lack of sharing of stacklots:
++ KBUILD_CFLAGS += $(shell if [ $(call cc-version) -lt 0400 ] ; then \
++ echo $(call cc-option,-fno-unit-at-a-time); fi ;)
++
++ # CPU-specific tuning. Anything which can be shared with UML should go here.
++ include $(srctree)/arch/x86/Makefile_32.cpu
++ KBUILD_CFLAGS += $(cflags-y)
++
++ # temporary until string.h is fixed
++ KBUILD_CFLAGS += -ffreestanding
+ else
++ BITS := 64
+ UTS_MACHINE := x86_64
+- include $(srctree)/arch/x86/Makefile_64
++ CHECKFLAGS += -D__x86_64__ -m64
++
++ KBUILD_AFLAGS += -m64
++ KBUILD_CFLAGS += -m64
++
++ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
++
++ cflags-$(CONFIG_MCORE2) += \
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
++ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
++ KBUILD_CFLAGS += $(cflags-y)
++
++ KBUILD_CFLAGS += -mno-red-zone
++ KBUILD_CFLAGS += -mcmodel=kernel
++
++ # -funit-at-a-time shrinks the kernel .text considerably
++ # unfortunately it makes reading oopses harder.
++ KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time)
++
++ # this works around some issues with generating unwind tables in older gccs
++ # newer gccs do it by default
++ KBUILD_CFLAGS += -maccumulate-outgoing-args
++
++ stackp := $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh
++ stackp-$(CONFIG_CC_STACKPROTECTOR) := $(shell $(stackp) \
++ "$(CC)" -fstack-protector )
++ stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(stackp) \
++ "$(CC)" -fstack-protector-all )
++
++ KBUILD_CFLAGS += $(stackp-y)
++endif
++
++# Stackpointer is addressed different for 32 bit and 64 bit x86
++sp-$(CONFIG_X86_32) := esp
++sp-$(CONFIG_X86_64) := rsp
++
++# do binutils support CFI?
++cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
++# is .cfi_signal_frame supported too?
++cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
++KBUILD_AFLAGS += $(cfi) $(cfi-sigframe)
++KBUILD_CFLAGS += $(cfi) $(cfi-sigframe)
++
++LDFLAGS := -m elf_$(UTS_MACHINE)
++OBJCOPYFLAGS := -O binary -R .note -R .comment -S
++
++# Speed up the build
++KBUILD_CFLAGS += -pipe
++# Workaround for a gcc prelease that unfortunately was shipped in a suse release
++KBUILD_CFLAGS += -Wno-sign-compare
++#
++KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
++# prevent gcc from generating any FP code by mistake
++KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
++
++###
++# Sub architecture support
++# fcore-y is linked before mcore-y files.
++
++# Default subarch .c files
++mcore-y := arch/x86/mach-default/
++
++# Voyager subarch support
++mflags-$(CONFIG_X86_VOYAGER) := -Iinclude/asm-x86/mach-voyager
++mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager/
++
++# VISWS subarch support
++mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
++mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws/
++
++# NUMAQ subarch support
++mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
++mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default/
++
++# BIGSMP subarch support
++mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
++mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default/
++
++#Summit subarch support
++mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
++mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default/
++
++# generic subarchitecture
++mflags-$(CONFIG_X86_GENERICARCH):= -Iinclude/asm-x86/mach-generic
++fcore-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
++mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default/
++
++
++# ES7000 subarch support
++mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
++fcore-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
++mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default/
++
++# RDC R-321x subarch support
++mflags-$(CONFIG_X86_RDC321X) := -Iinclude/asm-x86/mach-rdc321x
++mcore-$(CONFIG_X86_RDC321X) := arch/x86/mach-default
++core-$(CONFIG_X86_RDC321X) += arch/x86/mach-rdc321x/
++
++# default subarch .h files
++mflags-y += -Iinclude/asm-x86/mach-default
++
++# 64 bit does not support subarch support - clear sub arch variables
++fcore-$(CONFIG_X86_64) :=
++mcore-$(CONFIG_X86_64) :=
++mflags-$(CONFIG_X86_64) :=
++
++KBUILD_CFLAGS += $(mflags-y)
++KBUILD_AFLAGS += $(mflags-y)
++
++###
++# Kernel objects
++
++head-y := arch/x86/kernel/head_$(BITS).o
++head-$(CONFIG_X86_64) += arch/x86/kernel/head64.o
++head-y += arch/x86/kernel/init_task.o
++
++libs-y += arch/x86/lib/
++
++# Sub architecture files that needs linking first
++core-y += $(fcore-y)
++
++# Xen paravirtualization support
++core-$(CONFIG_XEN) += arch/x86/xen/
++
++# lguest paravirtualization support
++core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
++
++core-y += arch/x86/kernel/
++core-y += arch/x86/mm/
++
++# Remaining sub architecture files
++core-y += $(mcore-y)
++
++core-y += arch/x86/crypto/
++core-y += arch/x86/vdso/
++core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
++
++# drivers-y are linked after core-y
++drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
++drivers-$(CONFIG_PCI) += arch/x86/pci/
++
++# must be linked after kernel/
++drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
++
++ifeq ($(CONFIG_X86_32),y)
++drivers-$(CONFIG_PM) += arch/x86/power/
++drivers-$(CONFIG_FB) += arch/x86/video/
+ endif
++
++####
++# boot loader support. Several targets are kept for legacy purposes
++
++boot := arch/x86/boot
++
++PHONY += zImage bzImage compressed zlilo bzlilo \
++ zdisk bzdisk fdimage fdimage144 fdimage288 isoimage install
++
++# Default kernel to build
++all: bzImage
++
++# KBUILD_IMAGE specify target image being built
++ KBUILD_IMAGE := $(boot)/bzImage
++zImage zlilo zdisk: KBUILD_IMAGE := arch/x86/boot/zImage
++
++zImage bzImage: vmlinux
++ $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
++ $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
++ $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/bzImage
++
++compressed: zImage
++
++zlilo bzlilo: vmlinux
++ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
++
++zdisk bzdisk: vmlinux
++ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
++
++fdimage fdimage144 fdimage288 isoimage: vmlinux
++ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
++
++install: vdso_install
++ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install
++
++PHONY += vdso_install
++vdso_install:
++ $(Q)$(MAKE) $(build)=arch/x86/vdso $@
++
++archclean:
++ $(Q)rm -rf $(objtree)/arch/i386
++ $(Q)rm -rf $(objtree)/arch/x86_64
++ $(Q)$(MAKE) $(clean)=$(boot)
++
++define archhelp
++ echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
++ echo ' install - Install kernel using'
++ echo ' (your) ~/bin/installkernel or'
++ echo ' (distribution) /sbin/installkernel or'
++ echo ' install to $$(INSTALL_PATH) and run lilo'
++ echo ' fdimage - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
++ echo ' fdimage144 - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
++ echo ' fdimage288 - Create 2.8MB boot floppy image (arch/x86/boot/fdimage)'
++ echo ' isoimage - Create a boot CD-ROM image (arch/x86/boot/image.iso)'
++ echo ' bzdisk/fdimage*/isoimage also accept:'
++ echo ' FDARGS="..." arguments for the booted kernel'
++ echo ' FDINITRD=file initrd for the booted kernel'
++endef
++
++CLEAN_FILES += arch/x86/boot/fdimage \
++ arch/x86/boot/image.iso \
++ arch/x86/boot/mtools.conf
+diff --git a/arch/x86/Makefile_32 b/arch/x86/Makefile_32
+deleted file mode 100644
+index 50394da..0000000
+--- a/arch/x86/Makefile_32
++++ /dev/null
+@@ -1,175 +0,0 @@
+-#
+-# i386 Makefile
+-#
+-# This file is included by the global makefile so that you can add your own
+-# architecture-specific flags and dependencies. Remember to do have actions
+-# for "archclean" cleaning up for this architecture.
+-#
+-# This file is subject to the terms and conditions of the GNU General Public
+-# License. See the file "COPYING" in the main directory of this archive
+-# for more details.
+-#
+-# Copyright (C) 1994 by Linus Torvalds
+-#
+-# 19990713 Artur Skawina <skawina at geocities.com>
+-# Added '-march' and '-mpreferred-stack-boundary' support
+-#
+-# 20050320 Kianusch Sayah Karadji <kianusch at sk-tech.net>
+-# Added support for GEODE CPU
+-
+-# BITS is used as extension for files which are available in a 32 bit
+-# and a 64 bit version to simplify shared Makefiles.
+-# e.g.: obj-y += foo_$(BITS).o
+-BITS := 32
+-export BITS
+-
+-HAS_BIARCH := $(call cc-option-yn, -m32)
+-ifeq ($(HAS_BIARCH),y)
+-AS := $(AS) --32
+-LD := $(LD) -m elf_i386
+-CC := $(CC) -m32
+-endif
+-
+-LDFLAGS := -m elf_i386
+-OBJCOPYFLAGS := -O binary -R .note -R .comment -S
+-ifdef CONFIG_RELOCATABLE
+-LDFLAGS_vmlinux := --emit-relocs
+-endif
+-CHECKFLAGS += -D__i386__
+-
+-KBUILD_CFLAGS += -pipe -msoft-float -mregparm=3 -freg-struct-return
+-
+-# prevent gcc from keeping the stack 16 byte aligned
+-KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
+-
+-# CPU-specific tuning. Anything which can be shared with UML should go here.
+-include $(srctree)/arch/x86/Makefile_32.cpu
+-
+-# temporary until string.h is fixed
+-cflags-y += -ffreestanding
+-
+-# this works around some issues with generating unwind tables in older gccs
+-# newer gccs do it by default
+-cflags-y += -maccumulate-outgoing-args
+-
+-# Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
+-# a lot more stack due to the lack of sharing of stacklots:
+-KBUILD_CFLAGS += $(shell if [ $(call cc-version) -lt 0400 ] ; then echo $(call cc-option,-fno-unit-at-a-time); fi ;)
+-
+-# do binutils support CFI?
+-cflags-y += $(call as-instr,.cfi_startproc\n.cfi_rel_offset esp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
+-KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_rel_offset esp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
+-
+-# is .cfi_signal_frame supported too?
+-cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
+-KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
+-
+-KBUILD_CFLAGS += $(cflags-y)
+-
+-# Default subarch .c files
+-mcore-y := arch/x86/mach-default
+-
+-# Voyager subarch support
+-mflags-$(CONFIG_X86_VOYAGER) := -Iinclude/asm-x86/mach-voyager
+-mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager
+-
+-# VISWS subarch support
+-mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
+-mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws
+-
+-# NUMAQ subarch support
+-mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
+-mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default
+-
+-# BIGSMP subarch support
+-mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
+-mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default
+-
+-#Summit subarch support
+-mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
+-mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default
+-
+-# generic subarchitecture
+-mflags-$(CONFIG_X86_GENERICARCH) := -Iinclude/asm-x86/mach-generic
+-mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default
+-core-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
+-
+-# ES7000 subarch support
+-mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
+-mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default
+-core-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
+-
+-# Xen paravirtualization support
+-core-$(CONFIG_XEN) += arch/x86/xen/
+-
+-# lguest paravirtualization support
+-core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
+-
+-# default subarch .h files
+-mflags-y += -Iinclude/asm-x86/mach-default
+-
+-head-y := arch/x86/kernel/head_32.o arch/x86/kernel/init_task.o
+-
+-libs-y += arch/x86/lib/
+-core-y += arch/x86/kernel/ \
+- arch/x86/mm/ \
+- $(mcore-y)/ \
+- arch/x86/crypto/
+-drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
+-drivers-$(CONFIG_PCI) += arch/x86/pci/
+-# must be linked after kernel/
+-drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
+-drivers-$(CONFIG_PM) += arch/x86/power/
+-drivers-$(CONFIG_FB) += arch/x86/video/
+-
+-KBUILD_CFLAGS += $(mflags-y)
+-KBUILD_AFLAGS += $(mflags-y)
+-
+-boot := arch/x86/boot
+-
+-PHONY += zImage bzImage compressed zlilo bzlilo \
+- zdisk bzdisk fdimage fdimage144 fdimage288 isoimage install
+-
+-all: bzImage
+-
+-# KBUILD_IMAGE specify target image being built
+- KBUILD_IMAGE := $(boot)/bzImage
+-zImage zlilo zdisk: KBUILD_IMAGE := arch/x86/boot/zImage
+-
+-zImage bzImage: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
+- $(Q)mkdir -p $(objtree)/arch/i386/boot
+- $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/i386/boot/bzImage
+-
+-compressed: zImage
+-
+-zlilo bzlilo: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
+-
+-zdisk bzdisk: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
+-
+-fdimage fdimage144 fdimage288 isoimage: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
+-
+-install:
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install
+-
+-archclean:
+- $(Q)rm -rf $(objtree)/arch/i386/boot
+- $(Q)$(MAKE) $(clean)=arch/x86/boot
+-
+-define archhelp
+- echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
+- echo ' install - Install kernel using'
+- echo ' (your) ~/bin/installkernel or'
+- echo ' (distribution) /sbin/installkernel or'
+- echo ' install to $$(INSTALL_PATH) and run lilo'
+- echo ' bzdisk - Create a boot floppy in /dev/fd0'
+- echo ' fdimage - Create a boot floppy image'
+- echo ' isoimage - Create a boot CD-ROM image'
+-endef
+-
+-CLEAN_FILES += arch/x86/boot/fdimage \
+- arch/x86/boot/image.iso \
+- arch/x86/boot/mtools.conf
+diff --git a/arch/x86/Makefile_64 b/arch/x86/Makefile_64
+deleted file mode 100644
+index a804860..0000000
+--- a/arch/x86/Makefile_64
++++ /dev/null
+@@ -1,144 +0,0 @@
+-#
+-# x86_64 Makefile
+-#
+-# This file is included by the global makefile so that you can add your own
+-# architecture-specific flags and dependencies. Remember to do have actions
+-# for "archclean" and "archdep" for cleaning up and making dependencies for
+-# this architecture
+-#
+-# This file is subject to the terms and conditions of the GNU General Public
+-# License. See the file "COPYING" in the main directory of this archive
+-# for more details.
+-#
+-# Copyright (C) 1994 by Linus Torvalds
+-#
+-# 19990713 Artur Skawina <skawina at geocities.com>
+-# Added '-march' and '-mpreferred-stack-boundary' support
+-# 20000913 Pavel Machek <pavel at suse.cz>
+-# Converted for x86_64 architecture
+-# 20010105 Andi Kleen, add IA32 compiler.
+-# ....and later removed it again....
+-#
+-# $Id: Makefile,v 1.31 2002/03/22 15:56:07 ak Exp $
+-
+-# BITS is used as extension for files which are available in a 32 bit
+-# and a 64 bit version to simplify shared Makefiles.
+-# e.g.: obj-y += foo_$(BITS).o
+-BITS := 64
+-export BITS
+-
+-LDFLAGS := -m elf_x86_64
+-OBJCOPYFLAGS := -O binary -R .note -R .comment -S
+-LDFLAGS_vmlinux :=
+-CHECKFLAGS += -D__x86_64__ -m64
+-
+-cflags-y :=
+-cflags-kernel-y :=
+-cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+-cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-# gcc doesn't support -march=core2 yet as of gcc 4.3, but I hope it
+-# will eventually. Use -mtune=generic as fallback
+-cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+-
+-cflags-y += -m64
+-cflags-y += -mno-red-zone
+-cflags-y += -mcmodel=kernel
+-cflags-y += -pipe
+-cflags-y += -Wno-sign-compare
+-cflags-y += -fno-asynchronous-unwind-tables
+-ifneq ($(CONFIG_DEBUG_INFO),y)
+-# -fweb shrinks the kernel a bit, but the difference is very small
+-# it also messes up debugging, so don't use it for now.
+-#cflags-y += $(call cc-option,-fweb)
+-endif
+-# -funit-at-a-time shrinks the kernel .text considerably
+-# unfortunately it makes reading oopses harder.
+-cflags-y += $(call cc-option,-funit-at-a-time)
+-# prevent gcc from generating any FP code by mistake
+-cflags-y += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
+-# this works around some issues with generating unwind tables in older gccs
+-# newer gccs do it by default
+-cflags-y += -maccumulate-outgoing-args
+-
+-# do binutils support CFI?
+-cflags-y += $(call as-instr,.cfi_startproc\n.cfi_rel_offset rsp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
+-KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_rel_offset rsp${comma}0\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
+-
+-# is .cfi_signal_frame supported too?
+-cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
+-KBUILD_AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
+-
+-cflags-$(CONFIG_CC_STACKPROTECTOR) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh "$(CC)" -fstack-protector )
+-cflags-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh "$(CC)" -fstack-protector-all )
+-
+-KBUILD_CFLAGS += $(cflags-y)
+-CFLAGS_KERNEL += $(cflags-kernel-y)
+-KBUILD_AFLAGS += -m64
+-
+-head-y := arch/x86/kernel/head_64.o arch/x86/kernel/head64.o arch/x86/kernel/init_task.o
+-
+-libs-y += arch/x86/lib/
+-core-y += arch/x86/kernel/ \
+- arch/x86/mm/ \
+- arch/x86/crypto/ \
+- arch/x86/vdso/
+-core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
+-drivers-$(CONFIG_PCI) += arch/x86/pci/
+-drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
+-
+-boot := arch/x86/boot
+-
+-PHONY += bzImage bzlilo install archmrproper \
+- fdimage fdimage144 fdimage288 isoimage archclean
+-
+-#Default target when executing "make"
+-all: bzImage
+-
+-BOOTIMAGE := arch/x86/boot/bzImage
+-KBUILD_IMAGE := $(BOOTIMAGE)
+-
+-bzImage: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) $(BOOTIMAGE)
+- $(Q)mkdir -p $(objtree)/arch/x86_64/boot
+- $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/x86_64/boot/bzImage
+-
+-bzlilo: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zlilo
+-
+-bzdisk: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zdisk
+-
+-fdimage fdimage144 fdimage288 isoimage: vmlinux
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@
+-
+-install: vdso_install
+- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@
+-
+-vdso_install:
+-ifeq ($(CONFIG_IA32_EMULATION),y)
+- $(Q)$(MAKE) $(build)=arch/x86/ia32 $@
+-endif
+- $(Q)$(MAKE) $(build)=arch/x86/vdso $@
+-
+-archclean:
+- $(Q)rm -rf $(objtree)/arch/x86_64/boot
+- $(Q)$(MAKE) $(clean)=$(boot)
+-
+-define archhelp
+- echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
+- echo ' install - Install kernel using'
+- echo ' (your) ~/bin/installkernel or'
+- echo ' (distribution) /sbin/installkernel or'
+- echo ' install to $$(INSTALL_PATH) and run lilo'
+- echo ' bzdisk - Create a boot floppy in /dev/fd0'
+- echo ' fdimage - Create a boot floppy image'
+- echo ' isoimage - Create a boot CD-ROM image'
+-endef
+-
+-CLEAN_FILES += arch/x86/boot/fdimage \
+- arch/x86/boot/image.iso \
+- arch/x86/boot/mtools.conf
+-
+-
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index 7a3116c..349b81a 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -28,9 +28,11 @@ SVGA_MODE := -DSVGA_MODE=NORMAL_VGA
+ targets := vmlinux.bin setup.bin setup.elf zImage bzImage
+ subdir- := compressed
+
+-setup-y += a20.o apm.o cmdline.o copy.o cpu.o cpucheck.o edd.o
++setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
+ setup-y += header.o main.o mca.o memory.o pm.o pmjump.o
+-setup-y += printf.o string.o tty.o video.o version.o voyager.o
++setup-y += printf.o string.o tty.o video.o version.o
++setup-$(CONFIG_X86_APM_BOOT) += apm.o
++setup-$(CONFIG_X86_VOYAGER) += voyager.o
+
+ # The link order of the video-*.o modules can matter. In particular,
+ # video-vga.o *must* be listed first, followed by video-vesa.o.
+@@ -49,10 +51,7 @@ HOSTCFLAGS_build.o := $(LINUXINCLUDE)
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386,
+ # that way we can complain to the user if the CPU is insufficient.
+-cflags-$(CONFIG_X86_32) :=
+-cflags-$(CONFIG_X86_64) := -m32
+ KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
+- $(cflags-y) \
+ -Wall -Wstrict-prototypes \
+ -march=i386 -mregparm=3 \
+ -include $(srctree)/$(src)/code16gcc.h \
+@@ -62,6 +61,7 @@ KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
+ $(call cc-option, -fno-unit-at-a-time)) \
+ $(call cc-option, -fno-stack-protector) \
+ $(call cc-option, -mpreferred-stack-boundary=2)
++KBUILD_CFLAGS += $(call cc-option,-m32)
+ KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
+
+ $(obj)/zImage: IMAGE_OFFSET := 0x1000
+diff --git a/arch/x86/boot/apm.c b/arch/x86/boot/apm.c
+index eab50c5..c117c7f 100644
+--- a/arch/x86/boot/apm.c
++++ b/arch/x86/boot/apm.c
+@@ -19,8 +19,6 @@
+
+ #include "boot.h"
+
+-#if defined(CONFIG_APM) || defined(CONFIG_APM_MODULE)
+-
+ int query_apm_bios(void)
+ {
+ u16 ax, bx, cx, dx, di;
+@@ -95,4 +93,3 @@ int query_apm_bios(void)
+ return 0;
+ }
+
+-#endif
+diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
+index d2b5adf..7822a49 100644
+--- a/arch/x86/boot/boot.h
++++ b/arch/x86/boot/boot.h
+@@ -109,7 +109,7 @@ typedef unsigned int addr_t;
+ static inline u8 rdfs8(addr_t addr)
+ {
+ u8 v;
+- asm volatile("movb %%fs:%1,%0" : "=r" (v) : "m" (*(u8 *)addr));
++ asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
+ return v;
+ }
+ static inline u16 rdfs16(addr_t addr)
+@@ -127,21 +127,21 @@ static inline u32 rdfs32(addr_t addr)
+
+ static inline void wrfs8(u8 v, addr_t addr)
+ {
+- asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "r" (v));
++ asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
+ }
+ static inline void wrfs16(u16 v, addr_t addr)
+ {
+- asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "r" (v));
++ asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
+ }
+ static inline void wrfs32(u32 v, addr_t addr)
+ {
+- asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "r" (v));
++ asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
+ }
+
+ static inline u8 rdgs8(addr_t addr)
+ {
+ u8 v;
+- asm volatile("movb %%gs:%1,%0" : "=r" (v) : "m" (*(u8 *)addr));
++ asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
+ return v;
+ }
+ static inline u16 rdgs16(addr_t addr)
+@@ -159,15 +159,15 @@ static inline u32 rdgs32(addr_t addr)
+
+ static inline void wrgs8(u8 v, addr_t addr)
+ {
+- asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "r" (v));
++ asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
+ }
+ static inline void wrgs16(u16 v, addr_t addr)
+ {
+- asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "r" (v));
++ asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
+ }
+ static inline void wrgs32(u32 v, addr_t addr)
+ {
+- asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "r" (v));
++ asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
+ }
+
+ /* Note: these only return true/false, not a signed return value! */
+@@ -241,6 +241,7 @@ int query_apm_bios(void);
+
+ /* cmdline.c */
+ int cmdline_find_option(const char *option, char *buffer, int bufsize);
++int cmdline_find_option_bool(const char *option);
+
+ /* cpu.c, cpucheck.c */
+ int check_cpu(int *cpu_level_ptr, int *req_level_ptr, u32 **err_flags_ptr);
+diff --git a/arch/x86/boot/cmdline.c b/arch/x86/boot/cmdline.c
+index 34bb778..680408a 100644
+--- a/arch/x86/boot/cmdline.c
++++ b/arch/x86/boot/cmdline.c
+@@ -95,3 +95,68 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize)
+
+ return len;
+ }
++
++/*
++ * Find a boolean option (like quiet,noapic,nosmp....)
++ *
++ * Returns the position of that option (starts counting with 1)
++ * or 0 on not found
++ */
++int cmdline_find_option_bool(const char *option)
++{
++ u32 cmdline_ptr = boot_params.hdr.cmd_line_ptr;
++ addr_t cptr;
++ char c;
++ int pos = 0, wstart = 0;
++ const char *opptr = NULL;
++ enum {
++ st_wordstart, /* Start of word/after whitespace */
++ st_wordcmp, /* Comparing this word */
++ st_wordskip, /* Miscompare, skip */
++ } state = st_wordstart;
++
++ if (!cmdline_ptr || cmdline_ptr >= 0x100000)
++ return -1; /* No command line, or inaccessible */
++
++ cptr = cmdline_ptr & 0xf;
++ set_fs(cmdline_ptr >> 4);
++
++ while (cptr < 0x10000) {
++ c = rdfs8(cptr++);
++ pos++;
++
++ switch (state) {
++ case st_wordstart:
++ if (!c)
++ return 0;
++ else if (myisspace(c))
++ break;
++
++ state = st_wordcmp;
++ opptr = option;
++ wstart = pos;
++ /* fall through */
++
++ case st_wordcmp:
++ if (!*opptr)
++ if (!c || myisspace(c))
++ return wstart;
++ else
++ state = st_wordskip;
++ else if (!c)
++ return 0;
++ else if (c != *opptr++)
++ state = st_wordskip;
++ break;
++
++ case st_wordskip:
++ if (!c)
++ return 0;
++ else if (myisspace(c))
++ state = st_wordstart;
++ break;
++ }
++ }
++
++ return 0; /* Buffer overrun */
++}
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 52c1db8..fe24cea 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -1,5 +1,63 @@
++#
++# linux/arch/x86/boot/compressed/Makefile
++#
++# create a compressed vmlinux image from the original vmlinux
++#
++
++targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o
++
++KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
++KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
++cflags-$(CONFIG_X86_64) := -mcmodel=small
++KBUILD_CFLAGS += $(cflags-y)
++KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
++KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
++
++KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
++
++LDFLAGS := -m elf_$(UTS_MACHINE)
++LDFLAGS_vmlinux := -T
++
++$(obj)/vmlinux: $(src)/vmlinux_$(BITS).lds $(obj)/head_$(BITS).o $(obj)/misc.o $(obj)/piggy.o FORCE
++ $(call if_changed,ld)
++ @:
++
++$(obj)/vmlinux.bin: vmlinux FORCE
++ $(call if_changed,objcopy)
++
++
+ ifeq ($(CONFIG_X86_32),y)
+-include ${srctree}/arch/x86/boot/compressed/Makefile_32
++targets += vmlinux.bin.all vmlinux.relocs
++hostprogs-y := relocs
++
++quiet_cmd_relocs = RELOCS $@
++ cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<
++$(obj)/vmlinux.relocs: vmlinux $(obj)/relocs FORCE
++ $(call if_changed,relocs)
++
++vmlinux.bin.all-y := $(obj)/vmlinux.bin
++vmlinux.bin.all-$(CONFIG_RELOCATABLE) += $(obj)/vmlinux.relocs
++quiet_cmd_relocbin = BUILD $@
++ cmd_relocbin = cat $(filter-out FORCE,$^) > $@
++$(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE
++ $(call if_changed,relocbin)
++
++ifdef CONFIG_RELOCATABLE
++$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
++ $(call if_changed,gzip)
+ else
+-include ${srctree}/arch/x86/boot/compressed/Makefile_64
++$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
++ $(call if_changed,gzip)
+ endif
++LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
++
++else
++$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
++ $(call if_changed,gzip)
++
++LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
++endif
++
++
++$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
++ $(call if_changed,ld)
+diff --git a/arch/x86/boot/compressed/Makefile_32 b/arch/x86/boot/compressed/Makefile_32
+deleted file mode 100644
+index e43ff7c..0000000
+--- a/arch/x86/boot/compressed/Makefile_32
++++ /dev/null
+@@ -1,50 +0,0 @@
+-#
+-# linux/arch/x86/boot/compressed/Makefile
+-#
+-# create a compressed vmlinux image from the original vmlinux
+-#
+-
+-targets := vmlinux vmlinux.bin vmlinux.bin.gz head_32.o misc_32.o piggy.o \
+- vmlinux.bin.all vmlinux.relocs
+-EXTRA_AFLAGS := -traditional
+-
+-LDFLAGS_vmlinux := -T
+-hostprogs-y := relocs
+-
+-KBUILD_CFLAGS := -m32 -D__KERNEL__ $(LINUX_INCLUDE) -O2 \
+- -fno-strict-aliasing -fPIC \
+- $(call cc-option,-ffreestanding) \
+- $(call cc-option,-fno-stack-protector)
+-LDFLAGS := -m elf_i386
+-
+-$(obj)/vmlinux: $(src)/vmlinux_32.lds $(obj)/head_32.o $(obj)/misc_32.o $(obj)/piggy.o FORCE
+- $(call if_changed,ld)
+- @:
+-
+-$(obj)/vmlinux.bin: vmlinux FORCE
+- $(call if_changed,objcopy)
+-
+-quiet_cmd_relocs = RELOCS $@
+- cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<
+-$(obj)/vmlinux.relocs: vmlinux $(obj)/relocs FORCE
+- $(call if_changed,relocs)
+-
+-vmlinux.bin.all-y := $(obj)/vmlinux.bin
+-vmlinux.bin.all-$(CONFIG_RELOCATABLE) += $(obj)/vmlinux.relocs
+-quiet_cmd_relocbin = BUILD $@
+- cmd_relocbin = cat $(filter-out FORCE,$^) > $@
+-$(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE
+- $(call if_changed,relocbin)
+-
+-ifdef CONFIG_RELOCATABLE
+-$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
+- $(call if_changed,gzip)
+-else
+-$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+- $(call if_changed,gzip)
+-endif
+-
+-LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
+-
+-$(obj)/piggy.o: $(src)/vmlinux_32.scr $(obj)/vmlinux.bin.gz FORCE
+- $(call if_changed,ld)
+diff --git a/arch/x86/boot/compressed/Makefile_64 b/arch/x86/boot/compressed/Makefile_64
+deleted file mode 100644
+index 7801e8d..0000000
+--- a/arch/x86/boot/compressed/Makefile_64
++++ /dev/null
+@@ -1,30 +0,0 @@
+-#
+-# linux/arch/x86/boot/compressed/Makefile
+-#
+-# create a compressed vmlinux image from the original vmlinux
+-#
+-
+-targets := vmlinux vmlinux.bin vmlinux.bin.gz head_64.o misc_64.o piggy.o
+-
+-KBUILD_CFLAGS := -m64 -D__KERNEL__ $(LINUXINCLUDE) -O2 \
+- -fno-strict-aliasing -fPIC -mcmodel=small \
+- $(call cc-option, -ffreestanding) \
+- $(call cc-option, -fno-stack-protector)
+-KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
+-LDFLAGS := -m elf_x86_64
+-
+-LDFLAGS_vmlinux := -T
+-$(obj)/vmlinux: $(src)/vmlinux_64.lds $(obj)/head_64.o $(obj)/misc_64.o $(obj)/piggy.o FORCE
+- $(call if_changed,ld)
+- @:
+-
+-$(obj)/vmlinux.bin: vmlinux FORCE
+- $(call if_changed,objcopy)
+-
+-$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+- $(call if_changed,gzip)
+-
+-LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
+-
+-$(obj)/piggy.o: $(obj)/vmlinux_64.scr $(obj)/vmlinux.bin.gz FORCE
+- $(call if_changed,ld)
+diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
+new file mode 100644
+index 0000000..8182e32
+--- /dev/null
++++ b/arch/x86/boot/compressed/misc.c
+@@ -0,0 +1,413 @@
++/*
++ * misc.c
++ *
++ * This is a collection of several routines from gzip-1.0.3
++ * adapted for Linux.
++ *
++ * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
++ * puts by Nick Holloway 1993, better puts by Martin Mares 1995
++ * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
++ */
++
++/*
++ * we have to be careful, because no indirections are allowed here, and
++ * paravirt_ops is a kind of one. As it will only run in baremetal anyway,
++ * we just keep it from happening
++ */
++#undef CONFIG_PARAVIRT
++#ifdef CONFIG_X86_64
++#define _LINUX_STRING_H_ 1
++#define __LINUX_BITMAP_H 1
++#endif
++
++#include <linux/linkage.h>
++#include <linux/screen_info.h>
++#include <asm/io.h>
++#include <asm/page.h>
++#include <asm/boot.h>
++
++/* WARNING!!
++ * This code is compiled with -fPIC and it is relocated dynamically
++ * at run time, but no relocation processing is performed.
++ * This means that it is not safe to place pointers in static structures.
++ */
++
++/*
++ * Getting to provably safe in-place decompression is hard.
++ * Worst case behaviours need to be analyzed.
++ * Background information:
++ *
++ * The file layout is:
++ * magic[2]
++ * method[1]
++ * flags[1]
++ * timestamp[4]
++ * extraflags[1]
++ * os[1]
++ * compressed data blocks[N]
++ * crc[4] orig_len[4]
++ *
++ * resulting in 18 bytes of non compressed data overhead.
++ *
++ * Files divided into blocks
++ * 1 bit (last block flag)
++ * 2 bits (block type)
++ *
++ * 1 block occurs every 32K-1 bytes or when 50% compression has been achieved.
++ * The smallest block type encoding is always used.
++ *
++ * stored:
++ * 32 bits length in bytes.
++ *
++ * fixed:
++ * magic fixed tree.
++ * symbols.
++ *
++ * dynamic:
++ * dynamic tree encoding.
++ * symbols.
++ *
++ *
++ * The buffer for decompression in place is the length of the
++ * uncompressed data, plus a small amount extra to keep the algorithm safe.
++ * The compressed data is placed at the end of the buffer. The output
++ * pointer is placed at the start of the buffer and the input pointer
++ * is placed where the compressed data starts. Problems will occur
++ * when the output pointer overruns the input pointer.
++ *
++ * The output pointer can only overrun the input pointer if the input
++ * pointer is moving faster than the output pointer. A condition only
++ * triggered by data whose compressed form is larger than the uncompressed
++ * form.
++ *
++ * The worst case at the block level is a growth of the compressed data
++ * of 5 bytes per 32767 bytes.
++ *
++ * The worst case internal to a compressed block is very hard to figure.
++ * The worst case can at least be bounded by having one bit that represents
++ * 32764 bytes and then all of the rest of the bytes representing the very
++ * very last byte.
++ *
++ * All of which is enough to compute an amount of extra data that is required
++ * to be safe. To avoid problems at the block level allocating 5 extra bytes
++ * per 32767 bytes of data is sufficient. To avoid problems internal to a block
++ * adding an extra 32767 bytes (the worst case uncompressed block size) is
++ * sufficient, to ensure that in the worst case the decompressed data for a
++ * block will stop the byte before the compressed data for a block begins.
++ * To avoid problems with the compressed data's meta information an extra 18
++ * bytes are needed. Leading to the formula:
++ *
++ * extra_bytes = (uncompressed_size >> 12) + 32768 + 18 + decompressor_size.
++ *
++ * Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
++ * Adding 32768 instead of 32767 just makes for round numbers.
++ * Adding the decompressor_size is necessary as it must live after all
++ * of the data as well. Last I measured the decompressor is about 14K.
++ * 10K of actual data and 4K of bss.
++ *
++ */
++
++/*
++ * gzip declarations
++ */
++
++#define OF(args) args
++#define STATIC static
++
++#undef memset
++#undef memcpy
++#define memzero(s, n) memset ((s), 0, (n))
++
++typedef unsigned char uch;
++typedef unsigned short ush;
++typedef unsigned long ulg;
++
++#define WSIZE 0x80000000 /* Window size must be at least 32k,
++ * and a power of two
++ * We don't actually have a window just
++ * a huge output buffer so I report
++ * a 2GB window size, as that should
++ * always be larger than our output buffer.
++ */
++
++static uch *inbuf; /* input buffer */
++static uch *window; /* Sliding window buffer, (and final output buffer) */
++
++static unsigned insize; /* valid bytes in inbuf */
++static unsigned inptr; /* index of next byte to be processed in inbuf */
++static unsigned outcnt; /* bytes in output buffer */
++
++/* gzip flag byte */
++#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
++#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
++#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
++#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
++#define COMMENT 0x10 /* bit 4 set: file comment present */
++#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
++#define RESERVED 0xC0 /* bit 6,7: reserved */
++
++#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
++
++/* Diagnostic functions */
++#ifdef DEBUG
++# define Assert(cond,msg) {if(!(cond)) error(msg);}
++# define Trace(x) fprintf x
++# define Tracev(x) {if (verbose) fprintf x ;}
++# define Tracevv(x) {if (verbose>1) fprintf x ;}
++# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
++# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
++#else
++# define Assert(cond,msg)
++# define Trace(x)
++# define Tracev(x)
++# define Tracevv(x)
++# define Tracec(c,x)
++# define Tracecv(c,x)
++#endif
++
++static int fill_inbuf(void);
++static void flush_window(void);
++static void error(char *m);
++static void gzip_mark(void **);
++static void gzip_release(void **);
++
++/*
++ * This is set up by the setup-routine at boot-time
++ */
++static unsigned char *real_mode; /* Pointer to real-mode data */
++
++#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
++#ifndef STANDARD_MEMORY_BIOS_CALL
++#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
++#endif
++#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
++
++extern unsigned char input_data[];
++extern int input_len;
++
++static long bytes_out = 0;
++
++static void *malloc(int size);
++static void free(void *where);
++
++static void *memset(void *s, int c, unsigned n);
++static void *memcpy(void *dest, const void *src, unsigned n);
++
++static void putstr(const char *);
++
++#ifdef CONFIG_X86_64
++#define memptr long
++#else
++#define memptr unsigned
++#endif
++
++static memptr free_mem_ptr;
++static memptr free_mem_end_ptr;
++
++#ifdef CONFIG_X86_64
++#define HEAP_SIZE 0x7000
++#else
++#define HEAP_SIZE 0x4000
++#endif
++
++static char *vidmem = (char *)0xb8000;
++static int vidport;
++static int lines, cols;
++
++#ifdef CONFIG_X86_NUMAQ
++void *xquad_portio;
++#endif
++
++#include "../../../../lib/inflate.c"
++
++static void *malloc(int size)
++{
++ void *p;
++
++ if (size <0) error("Malloc error");
++ if (free_mem_ptr <= 0) error("Memory error");
++
++ free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
++
++ p = (void *)free_mem_ptr;
++ free_mem_ptr += size;
++
++ if (free_mem_ptr >= free_mem_end_ptr)
++ error("Out of memory");
++
++ return p;
++}
++
++static void free(void *where)
++{ /* Don't care */
++}
++
++static void gzip_mark(void **ptr)
++{
++ *ptr = (void *) free_mem_ptr;
++}
++
++static void gzip_release(void **ptr)
++{
++ free_mem_ptr = (memptr) *ptr;
++}
++
++static void scroll(void)
++{
++ int i;
++
++ memcpy ( vidmem, vidmem + cols * 2, ( lines - 1 ) * cols * 2 );
++ for ( i = ( lines - 1 ) * cols * 2; i < lines * cols * 2; i += 2 )
++ vidmem[i] = ' ';
++}
++
++static void putstr(const char *s)
++{
++ int x,y,pos;
++ char c;
++
++#ifdef CONFIG_X86_32
++ if (RM_SCREEN_INFO.orig_video_mode == 0 && lines == 0 && cols == 0)
++ return;
++#endif
++
++ x = RM_SCREEN_INFO.orig_x;
++ y = RM_SCREEN_INFO.orig_y;
++
++ while ( ( c = *s++ ) != '\0' ) {
++ if ( c == '\n' ) {
++ x = 0;
++ if ( ++y >= lines ) {
++ scroll();
++ y--;
++ }
++ } else {
++ vidmem [(x + cols * y) * 2] = c;
++ if ( ++x >= cols ) {
++ x = 0;
++ if ( ++y >= lines ) {
++ scroll();
++ y--;
++ }
++ }
++ }
++ }
++
++ RM_SCREEN_INFO.orig_x = x;
++ RM_SCREEN_INFO.orig_y = y;
++
++ pos = (x + cols * y) * 2; /* Update cursor position */
++ outb(14, vidport);
++ outb(0xff & (pos >> 9), vidport+1);
++ outb(15, vidport);
++ outb(0xff & (pos >> 1), vidport+1);
++}
++
++static void* memset(void* s, int c, unsigned n)
++{
++ int i;
++ char *ss = s;
++
++ for (i=0;i<n;i++) ss[i] = c;
++ return s;
++}
++
++static void* memcpy(void* dest, const void* src, unsigned n)
++{
++ int i;
++ const char *s = src;
++ char *d = dest;
++
++ for (i=0;i<n;i++) d[i] = s[i];
++ return dest;
++}
++
++/* ===========================================================================
++ * Fill the input buffer. This is called only when the buffer is empty
++ * and at least one byte is really needed.
++ */
++static int fill_inbuf(void)
++{
++ error("ran out of input data");
++ return 0;
++}
++
++/* ===========================================================================
++ * Write the output window window[0..outcnt-1] and update crc and bytes_out.
++ * (Used for the decompressed data only.)
++ */
++static void flush_window(void)
++{
++ /* With my window equal to my output buffer
++ * I only need to compute the crc here.
++ */
++ ulg c = crc; /* temporary variable */
++ unsigned n;
++ uch *in, ch;
++
++ in = window;
++ for (n = 0; n < outcnt; n++) {
++ ch = *in++;
++ c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
++ }
++ crc = c;
++ bytes_out += (ulg)outcnt;
++ outcnt = 0;
++}
++
++static void error(char *x)
++{
++ putstr("\n\n");
++ putstr(x);
++ putstr("\n\n -- System halted");
++
++ while (1)
++ asm("hlt");
++}
++
++asmlinkage void decompress_kernel(void *rmode, memptr heap,
++ uch *input_data, unsigned long input_len,
++ uch *output)
++{
++ real_mode = rmode;
++
++ if (RM_SCREEN_INFO.orig_video_mode == 7) {
++ vidmem = (char *) 0xb0000;
++ vidport = 0x3b4;
++ } else {
++ vidmem = (char *) 0xb8000;
++ vidport = 0x3d4;
++ }
++
++ lines = RM_SCREEN_INFO.orig_video_lines;
++ cols = RM_SCREEN_INFO.orig_video_cols;
++
++ window = output; /* Output buffer (Normally at 1M) */
++ free_mem_ptr = heap; /* Heap */
++ free_mem_end_ptr = heap + HEAP_SIZE;
++ inbuf = input_data; /* Input buffer */
++ insize = input_len;
++ inptr = 0;
++
++#ifdef CONFIG_X86_64
++ if ((ulg)output & (__KERNEL_ALIGN - 1))
++ error("Destination address not 2M aligned");
++ if ((ulg)output >= 0xffffffffffUL)
++ error("Destination address too large");
++#else
++ if ((u32)output & (CONFIG_PHYSICAL_ALIGN -1))
++ error("Destination address not CONFIG_PHYSICAL_ALIGN aligned");
++ if (heap > ((-__PAGE_OFFSET-(512<<20)-1) & 0x7fffffff))
++ error("Destination address too large");
++#ifndef CONFIG_RELOCATABLE
++ if ((u32)output != LOAD_PHYSICAL_ADDR)
++ error("Wrong destination address");
++#endif
++#endif
++
++ makecrc();
++ putstr("\nDecompressing Linux... ");
++ gunzip();
++ putstr("done.\nBooting the kernel.\n");
++ return;
++}
+diff --git a/arch/x86/boot/compressed/misc_32.c b/arch/x86/boot/compressed/misc_32.c
+deleted file mode 100644
+index b74d60d..0000000
+--- a/arch/x86/boot/compressed/misc_32.c
++++ /dev/null
+@@ -1,382 +0,0 @@
+-/*
+- * misc.c
+- *
+- * This is a collection of several routines from gzip-1.0.3
+- * adapted for Linux.
+- *
+- * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
+- * puts by Nick Holloway 1993, better puts by Martin Mares 1995
+- * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
+- */
+-
+-#undef CONFIG_PARAVIRT
+-#include <linux/linkage.h>
+-#include <linux/vmalloc.h>
+-#include <linux/screen_info.h>
+-#include <asm/io.h>
+-#include <asm/page.h>
+-#include <asm/boot.h>
+-
+-/* WARNING!!
+- * This code is compiled with -fPIC and it is relocated dynamically
+- * at run time, but no relocation processing is performed.
+- * This means that it is not safe to place pointers in static structures.
+- */
+-
+-/*
+- * Getting to provable safe in place decompression is hard.
+- * Worst case behaviours need to be analyzed.
+- * Background information:
+- *
+- * The file layout is:
+- * magic[2]
+- * method[1]
+- * flags[1]
+- * timestamp[4]
+- * extraflags[1]
+- * os[1]
+- * compressed data blocks[N]
+- * crc[4] orig_len[4]
+- *
+- * resulting in 18 bytes of non compressed data overhead.
+- *
+- * Files divided into blocks
+- * 1 bit (last block flag)
+- * 2 bits (block type)
+- *
+- * 1 block occurs every 32K -1 bytes or when there 50% compression has been achieved.
+- * The smallest block type encoding is always used.
+- *
+- * stored:
+- * 32 bits length in bytes.
+- *
+- * fixed:
+- * magic fixed tree.
+- * symbols.
+- *
+- * dynamic:
+- * dynamic tree encoding.
+- * symbols.
+- *
+- *
+- * The buffer for decompression in place is the length of the
+- * uncompressed data, plus a small amount extra to keep the algorithm safe.
+- * The compressed data is placed at the end of the buffer. The output
+- * pointer is placed at the start of the buffer and the input pointer
+- * is placed where the compressed data starts. Problems will occur
+- * when the output pointer overruns the input pointer.
+- *
+- * The output pointer can only overrun the input pointer if the input
+- * pointer is moving faster than the output pointer. A condition only
+- * triggered by data whose compressed form is larger than the uncompressed
+- * form.
+- *
+- * The worst case at the block level is a growth of the compressed data
+- * of 5 bytes per 32767 bytes.
+- *
+- * The worst case internal to a compressed block is very hard to figure.
+- * The worst case can at least be boundined by having one bit that represents
+- * 32764 bytes and then all of the rest of the bytes representing the very
+- * very last byte.
+- *
+- * All of which is enough to compute an amount of extra data that is required
+- * to be safe. To avoid problems at the block level allocating 5 extra bytes
+- * per 32767 bytes of data is sufficient. To avoind problems internal to a block
+- * adding an extra 32767 bytes (the worst case uncompressed block size) is
+- * sufficient, to ensure that in the worst case the decompressed data for
+- * block will stop the byte before the compressed data for a block begins.
+- * To avoid problems with the compressed data's meta information an extra 18
+- * bytes are needed. Leading to the formula:
+- *
+- * extra_bytes = (uncompressed_size >> 12) + 32768 + 18 + decompressor_size.
+- *
+- * Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
+- * Adding 32768 instead of 32767 just makes for round numbers.
+- * Adding the decompressor_size is necessary as it musht live after all
+- * of the data as well. Last I measured the decompressor is about 14K.
+- * 10K of actual data and 4K of bss.
+- *
+- */
+-
+-/*
+- * gzip declarations
+- */
+-
+-#define OF(args) args
+-#define STATIC static
+-
+-#undef memset
+-#undef memcpy
+-#define memzero(s, n) memset ((s), 0, (n))
+-
+-typedef unsigned char uch;
+-typedef unsigned short ush;
+-typedef unsigned long ulg;
+-
+-#define WSIZE 0x80000000 /* Window size must be at least 32k,
+- * and a power of two
+- * We don't actually have a window just
+- * a huge output buffer so I report
+- * a 2G windows size, as that should
+- * always be larger than our output buffer.
+- */
+-
+-static uch *inbuf; /* input buffer */
+-static uch *window; /* Sliding window buffer, (and final output buffer) */
+-
+-static unsigned insize; /* valid bytes in inbuf */
+-static unsigned inptr; /* index of next byte to be processed in inbuf */
+-static unsigned outcnt; /* bytes in output buffer */
+-
+-/* gzip flag byte */
+-#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
+-#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
+-#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
+-#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
+-#define COMMENT 0x10 /* bit 4 set: file comment present */
+-#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
+-#define RESERVED 0xC0 /* bit 6,7: reserved */
+-
+-#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
+-
+-/* Diagnostic functions */
+-#ifdef DEBUG
+-# define Assert(cond,msg) {if(!(cond)) error(msg);}
+-# define Trace(x) fprintf x
+-# define Tracev(x) {if (verbose) fprintf x ;}
+-# define Tracevv(x) {if (verbose>1) fprintf x ;}
+-# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
+-# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
+-#else
+-# define Assert(cond,msg)
+-# define Trace(x)
+-# define Tracev(x)
+-# define Tracevv(x)
+-# define Tracec(c,x)
+-# define Tracecv(c,x)
+-#endif
+-
+-static int fill_inbuf(void);
+-static void flush_window(void);
+-static void error(char *m);
+-static void gzip_mark(void **);
+-static void gzip_release(void **);
+-
+-/*
+- * This is set up by the setup-routine at boot-time
+- */
+-static unsigned char *real_mode; /* Pointer to real-mode data */
+-
+-#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
+-#ifndef STANDARD_MEMORY_BIOS_CALL
+-#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
+-#endif
+-#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
+-
+-extern unsigned char input_data[];
+-extern int input_len;
+-
+-static long bytes_out = 0;
+-
+-static void *malloc(int size);
+-static void free(void *where);
+-
+-static void *memset(void *s, int c, unsigned n);
+-static void *memcpy(void *dest, const void *src, unsigned n);
+-
+-static void putstr(const char *);
+-
+-static unsigned long free_mem_ptr;
+-static unsigned long free_mem_end_ptr;
+-
+-#define HEAP_SIZE 0x4000
+-
+-static char *vidmem = (char *)0xb8000;
+-static int vidport;
+-static int lines, cols;
+-
+-#ifdef CONFIG_X86_NUMAQ
+-void *xquad_portio;
+-#endif
+-
+-#include "../../../../lib/inflate.c"
+-
+-static void *malloc(int size)
+-{
+- void *p;
+-
+- if (size <0) error("Malloc error");
+- if (free_mem_ptr <= 0) error("Memory error");
+-
+- free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
+-
+- p = (void *)free_mem_ptr;
+- free_mem_ptr += size;
+-
+- if (free_mem_ptr >= free_mem_end_ptr)
+- error("Out of memory");
+-
+- return p;
+-}
+-
+-static void free(void *where)
+-{ /* Don't care */
+-}
+-
+-static void gzip_mark(void **ptr)
+-{
+- *ptr = (void *) free_mem_ptr;
+-}
+-
+-static void gzip_release(void **ptr)
+-{
+- free_mem_ptr = (unsigned long) *ptr;
+-}
+-
+-static void scroll(void)
+-{
+- int i;
+-
+- memcpy ( vidmem, vidmem + cols * 2, ( lines - 1 ) * cols * 2 );
+- for ( i = ( lines - 1 ) * cols * 2; i < lines * cols * 2; i += 2 )
+- vidmem[i] = ' ';
+-}
+-
+-static void putstr(const char *s)
+-{
+- int x,y,pos;
+- char c;
+-
+- if (RM_SCREEN_INFO.orig_video_mode == 0 && lines == 0 && cols == 0)
+- return;
+-
+- x = RM_SCREEN_INFO.orig_x;
+- y = RM_SCREEN_INFO.orig_y;
+-
+- while ( ( c = *s++ ) != '\0' ) {
+- if ( c == '\n' ) {
+- x = 0;
+- if ( ++y >= lines ) {
+- scroll();
+- y--;
+- }
+- } else {
+- vidmem [ ( x + cols * y ) * 2 ] = c;
+- if ( ++x >= cols ) {
+- x = 0;
+- if ( ++y >= lines ) {
+- scroll();
+- y--;
+- }
+- }
+- }
+- }
+-
+- RM_SCREEN_INFO.orig_x = x;
+- RM_SCREEN_INFO.orig_y = y;
+-
+- pos = (x + cols * y) * 2; /* Update cursor position */
+- outb_p(14, vidport);
+- outb_p(0xff & (pos >> 9), vidport+1);
+- outb_p(15, vidport);
+- outb_p(0xff & (pos >> 1), vidport+1);
+-}
+-
+-static void* memset(void* s, int c, unsigned n)
+-{
+- int i;
+- char *ss = (char*)s;
+-
+- for (i=0;i<n;i++) ss[i] = c;
+- return s;
+-}
+-
+-static void* memcpy(void* dest, const void* src, unsigned n)
+-{
+- int i;
+- char *d = (char *)dest, *s = (char *)src;
+-
+- for (i=0;i<n;i++) d[i] = s[i];
+- return dest;
+-}
+-
+-/* ===========================================================================
+- * Fill the input buffer. This is called only when the buffer is empty
+- * and at least one byte is really needed.
+- */
+-static int fill_inbuf(void)
+-{
+- error("ran out of input data");
+- return 0;
+-}
+-
+-/* ===========================================================================
+- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
+- * (Used for the decompressed data only.)
+- */
+-static void flush_window(void)
+-{
+- /* With my window equal to my output buffer
+- * I only need to compute the crc here.
+- */
+- ulg c = crc; /* temporary variable */
+- unsigned n;
+- uch *in, ch;
+-
+- in = window;
+- for (n = 0; n < outcnt; n++) {
+- ch = *in++;
+- c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
+- }
+- crc = c;
+- bytes_out += (ulg)outcnt;
+- outcnt = 0;
+-}
+-
+-static void error(char *x)
+-{
+- putstr("\n\n");
+- putstr(x);
+- putstr("\n\n -- System halted");
+-
+- while(1); /* Halt */
+-}
+-
+-asmlinkage void decompress_kernel(void *rmode, unsigned long end,
+- uch *input_data, unsigned long input_len, uch *output)
+-{
+- real_mode = rmode;
+-
+- if (RM_SCREEN_INFO.orig_video_mode == 7) {
+- vidmem = (char *) 0xb0000;
+- vidport = 0x3b4;
+- } else {
+- vidmem = (char *) 0xb8000;
+- vidport = 0x3d4;
+- }
+-
+- lines = RM_SCREEN_INFO.orig_video_lines;
+- cols = RM_SCREEN_INFO.orig_video_cols;
+-
+- window = output; /* Output buffer (Normally at 1M) */
+- free_mem_ptr = end; /* Heap */
+- free_mem_end_ptr = end + HEAP_SIZE;
+- inbuf = input_data; /* Input buffer */
+- insize = input_len;
+- inptr = 0;
+-
+- if ((u32)output & (CONFIG_PHYSICAL_ALIGN -1))
+- error("Destination address not CONFIG_PHYSICAL_ALIGN aligned");
+- if (end > ((-__PAGE_OFFSET-(512 <<20)-1) & 0x7fffffff))
+- error("Destination address too large");
+-#ifndef CONFIG_RELOCATABLE
+- if ((u32)output != LOAD_PHYSICAL_ADDR)
+- error("Wrong destination address");
+-#endif
+-
+- makecrc();
+- putstr("Uncompressing Linux... ");
+- gunzip();
+- putstr("Ok, booting the kernel.\n");
+- return;
+-}
+diff --git a/arch/x86/boot/compressed/misc_64.c b/arch/x86/boot/compressed/misc_64.c
+deleted file mode 100644
+index 6ea015a..0000000
+--- a/arch/x86/boot/compressed/misc_64.c
++++ /dev/null
+@@ -1,371 +0,0 @@
+-/*
+- * misc.c
+- *
+- * This is a collection of several routines from gzip-1.0.3
+- * adapted for Linux.
+- *
+- * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
+- * puts by Nick Holloway 1993, better puts by Martin Mares 1995
+- * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
+- */
+-
+-#define _LINUX_STRING_H_ 1
+-#define __LINUX_BITMAP_H 1
+-
+-#include <linux/linkage.h>
+-#include <linux/screen_info.h>
+-#include <asm/io.h>
+-#include <asm/page.h>
+-
+-/* WARNING!!
+- * This code is compiled with -fPIC and it is relocated dynamically
+- * at run time, but no relocation processing is performed.
+- * This means that it is not safe to place pointers in static structures.
+- */
+-
+-/*
+- * Getting to provable safe in place decompression is hard.
+- * Worst case behaviours need to be analyzed.
+- * Background information:
+- *
+- * The file layout is:
+- * magic[2]
+- * method[1]
+- * flags[1]
+- * timestamp[4]
+- * extraflags[1]
+- * os[1]
+- * compressed data blocks[N]
+- * crc[4] orig_len[4]
+- *
+- * resulting in 18 bytes of non compressed data overhead.
+- *
+- * Files divided into blocks
+- * 1 bit (last block flag)
+- * 2 bits (block type)
+- *
+- * 1 block occurs every 32K -1 bytes or when there 50% compression has been achieved.
+- * The smallest block type encoding is always used.
+- *
+- * stored:
+- * 32 bits length in bytes.
+- *
+- * fixed:
+- * magic fixed tree.
+- * symbols.
+- *
+- * dynamic:
+- * dynamic tree encoding.
+- * symbols.
+- *
+- *
+- * The buffer for decompression in place is the length of the
+- * uncompressed data, plus a small amount extra to keep the algorithm safe.
+- * The compressed data is placed at the end of the buffer. The output
+- * pointer is placed at the start of the buffer and the input pointer
+- * is placed where the compressed data starts. Problems will occur
+- * when the output pointer overruns the input pointer.
+- *
+- * The output pointer can only overrun the input pointer if the input
+- * pointer is moving faster than the output pointer. A condition only
+- * triggered by data whose compressed form is larger than the uncompressed
+- * form.
+- *
+- * The worst case at the block level is a growth of the compressed data
+- * of 5 bytes per 32767 bytes.
+- *
+- * The worst case internal to a compressed block is very hard to figure.
+- * The worst case can at least be boundined by having one bit that represents
+- * 32764 bytes and then all of the rest of the bytes representing the very
+- * very last byte.
+- *
+- * All of which is enough to compute an amount of extra data that is required
+- * to be safe. To avoid problems at the block level allocating 5 extra bytes
+- * per 32767 bytes of data is sufficient. To avoind problems internal to a block
+- * adding an extra 32767 bytes (the worst case uncompressed block size) is
+- * sufficient, to ensure that in the worst case the decompressed data for
+- * block will stop the byte before the compressed data for a block begins.
+- * To avoid problems with the compressed data's meta information an extra 18
+- * bytes are needed. Leading to the formula:
+- *
+- * extra_bytes = (uncompressed_size >> 12) + 32768 + 18 + decompressor_size.
+- *
+- * Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
+- * Adding 32768 instead of 32767 just makes for round numbers.
+- * Adding the decompressor_size is necessary as it musht live after all
+- * of the data as well. Last I measured the decompressor is about 14K.
+- * 10K of actual data and 4K of bss.
+- *
+- */
+-
+-/*
+- * gzip declarations
+- */
+-
+-#define OF(args) args
+-#define STATIC static
+-
+-#undef memset
+-#undef memcpy
+-#define memzero(s, n) memset ((s), 0, (n))
+-
+-typedef unsigned char uch;
+-typedef unsigned short ush;
+-typedef unsigned long ulg;
+-
+-#define WSIZE 0x80000000 /* Window size must be at least 32k,
+- * and a power of two
+- * We don't actually have a window just
+- * a huge output buffer so I report
+- * a 2G windows size, as that should
+- * always be larger than our output buffer.
+- */
+-
+-static uch *inbuf; /* input buffer */
+-static uch *window; /* Sliding window buffer, (and final output buffer) */
+-
+-static unsigned insize; /* valid bytes in inbuf */
+-static unsigned inptr; /* index of next byte to be processed in inbuf */
+-static unsigned outcnt; /* bytes in output buffer */
+-
+-/* gzip flag byte */
+-#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
+-#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
+-#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
+-#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
+-#define COMMENT 0x10 /* bit 4 set: file comment present */
+-#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
+-#define RESERVED 0xC0 /* bit 6,7: reserved */
+-
+-#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
+-
+-/* Diagnostic functions */
+-#ifdef DEBUG
+-# define Assert(cond,msg) {if(!(cond)) error(msg);}
+-# define Trace(x) fprintf x
+-# define Tracev(x) {if (verbose) fprintf x ;}
+-# define Tracevv(x) {if (verbose>1) fprintf x ;}
+-# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
+-# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
+-#else
+-# define Assert(cond,msg)
+-# define Trace(x)
+-# define Tracev(x)
+-# define Tracevv(x)
+-# define Tracec(c,x)
+-# define Tracecv(c,x)
+-#endif
+-
+-static int fill_inbuf(void);
+-static void flush_window(void);
+-static void error(char *m);
+-static void gzip_mark(void **);
+-static void gzip_release(void **);
+-
+-/*
+- * This is set up by the setup-routine at boot-time
+- */
+-static unsigned char *real_mode; /* Pointer to real-mode data */
+-
+-#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
+-#ifndef STANDARD_MEMORY_BIOS_CALL
+-#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
+-#endif
+-#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
+-
+-extern unsigned char input_data[];
+-extern int input_len;
+-
+-static long bytes_out = 0;
+-
+-static void *malloc(int size);
+-static void free(void *where);
+-
+-static void *memset(void *s, int c, unsigned n);
+-static void *memcpy(void *dest, const void *src, unsigned n);
+-
+-static void putstr(const char *);
+-
+-static long free_mem_ptr;
+-static long free_mem_end_ptr;
+-
+-#define HEAP_SIZE 0x7000
+-
+-static char *vidmem = (char *)0xb8000;
+-static int vidport;
+-static int lines, cols;
+-
+-#include "../../../../lib/inflate.c"
+-
+-static void *malloc(int size)
+-{
+- void *p;
+-
+- if (size <0) error("Malloc error");
+- if (free_mem_ptr <= 0) error("Memory error");
+-
+- free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
+-
+- p = (void *)free_mem_ptr;
+- free_mem_ptr += size;
+-
+- if (free_mem_ptr >= free_mem_end_ptr)
+- error("Out of memory");
+-
+- return p;
+-}
+-
+-static void free(void *where)
+-{ /* Don't care */
+-}
+-
+-static void gzip_mark(void **ptr)
+-{
+- *ptr = (void *) free_mem_ptr;
+-}
+-
+-static void gzip_release(void **ptr)
+-{
+- free_mem_ptr = (long) *ptr;
+-}
+-
+-static void scroll(void)
+-{
+- int i;
+-
+- memcpy ( vidmem, vidmem + cols * 2, ( lines - 1 ) * cols * 2 );
+- for ( i = ( lines - 1 ) * cols * 2; i < lines * cols * 2; i += 2 )
+- vidmem[i] = ' ';
+-}
+-
+-static void putstr(const char *s)
+-{
+- int x,y,pos;
+- char c;
+-
+- x = RM_SCREEN_INFO.orig_x;
+- y = RM_SCREEN_INFO.orig_y;
+-
+- while ( ( c = *s++ ) != '\0' ) {
+- if ( c == '\n' ) {
+- x = 0;
+- if ( ++y >= lines ) {
+- scroll();
+- y--;
+- }
+- } else {
+- vidmem [ ( x + cols * y ) * 2 ] = c;
+- if ( ++x >= cols ) {
+- x = 0;
+- if ( ++y >= lines ) {
+- scroll();
+- y--;
+- }
+- }
+- }
+- }
+-
+- RM_SCREEN_INFO.orig_x = x;
+- RM_SCREEN_INFO.orig_y = y;
+-
+- pos = (x + cols * y) * 2; /* Update cursor position */
+- outb_p(14, vidport);
+- outb_p(0xff & (pos >> 9), vidport+1);
+- outb_p(15, vidport);
+- outb_p(0xff & (pos >> 1), vidport+1);
+-}
+-
+-static void* memset(void* s, int c, unsigned n)
+-{
+- int i;
+- char *ss = (char*)s;
+-
+- for (i=0;i<n;i++) ss[i] = c;
+- return s;
+-}
+-
+-static void* memcpy(void* dest, const void* src, unsigned n)
+-{
+- int i;
+- char *d = (char *)dest, *s = (char *)src;
+-
+- for (i=0;i<n;i++) d[i] = s[i];
+- return dest;
+-}
+-
+-/* ===========================================================================
+- * Fill the input buffer. This is called only when the buffer is empty
+- * and at least one byte is really needed.
+- */
+-static int fill_inbuf(void)
+-{
+- error("ran out of input data");
+- return 0;
+-}
+-
+-/* ===========================================================================
+- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
+- * (Used for the decompressed data only.)
+- */
+-static void flush_window(void)
+-{
+- /* With my window equal to my output buffer
+- * I only need to compute the crc here.
+- */
+- ulg c = crc; /* temporary variable */
+- unsigned n;
+- uch *in, ch;
+-
+- in = window;
+- for (n = 0; n < outcnt; n++) {
+- ch = *in++;
+- c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
+- }
+- crc = c;
+- bytes_out += (ulg)outcnt;
+- outcnt = 0;
+-}
+-
+-static void error(char *x)
+-{
+- putstr("\n\n");
+- putstr(x);
+- putstr("\n\n -- System halted");
+-
+- while(1); /* Halt */
+-}
+-
+-asmlinkage void decompress_kernel(void *rmode, unsigned long heap,
+- uch *input_data, unsigned long input_len, uch *output)
+-{
+- real_mode = rmode;
+-
+- if (RM_SCREEN_INFO.orig_video_mode == 7) {
+- vidmem = (char *) 0xb0000;
+- vidport = 0x3b4;
+- } else {
+- vidmem = (char *) 0xb8000;
+- vidport = 0x3d4;
+- }
+-
+- lines = RM_SCREEN_INFO.orig_video_lines;
+- cols = RM_SCREEN_INFO.orig_video_cols;
+-
+- window = output; /* Output buffer (Normally at 1M) */
+- free_mem_ptr = heap; /* Heap */
+- free_mem_end_ptr = heap + HEAP_SIZE;
+- inbuf = input_data; /* Input buffer */
+- insize = input_len;
+- inptr = 0;
+-
+- if ((ulg)output & (__KERNEL_ALIGN - 1))
+- error("Destination address not 2M aligned");
+- if ((ulg)output >= 0xffffffffffUL)
+- error("Destination address too large");
+-
+- makecrc();
+- putstr(".\nDecompressing Linux...");
+- gunzip();
+- putstr("done.\nBooting the kernel.\n");
+- return;
+-}
+diff --git a/arch/x86/boot/compressed/relocs.c b/arch/x86/boot/compressed/relocs.c
+index 7a0d00b..d01ea42 100644
+--- a/arch/x86/boot/compressed/relocs.c
++++ b/arch/x86/boot/compressed/relocs.c
+@@ -27,11 +27,6 @@ static unsigned long *relocs;
+ * absolute relocations present w.r.t these symbols.
+ */
+ static const char* safe_abs_relocs[] = {
+- "__kernel_vsyscall",
+- "__kernel_rt_sigreturn",
+- "__kernel_sigreturn",
+- "SYSENTER_RETURN",
+- "VDSO_NOTE_MASK",
+ "xen_irq_disable_direct_reloc",
+ "xen_save_fl_direct_reloc",
+ };
+@@ -45,6 +40,8 @@ static int is_safe_abs_reloc(const char* sym_name)
+ /* Match found */
+ return 1;
+ }
++ if (strncmp(sym_name, "VDSO", 4) == 0)
++ return 1;
+ if (strncmp(sym_name, "__crc_", 6) == 0)
+ return 1;
+ return 0;
+diff --git a/arch/x86/boot/compressed/vmlinux.scr b/arch/x86/boot/compressed/vmlinux.scr
+new file mode 100644
+index 0000000..f02382a
+--- /dev/null
++++ b/arch/x86/boot/compressed/vmlinux.scr
+@@ -0,0 +1,10 @@
++SECTIONS
++{
++ .rodata.compressed : {
++ input_len = .;
++ LONG(input_data_end - input_data) input_data = .;
++ *(.data)
++ output_len = . - 4;
++ input_data_end = .;
++ }
++}
+diff --git a/arch/x86/boot/compressed/vmlinux_32.lds b/arch/x86/boot/compressed/vmlinux_32.lds
+index cc4854f..bb3c483 100644
+--- a/arch/x86/boot/compressed/vmlinux_32.lds
++++ b/arch/x86/boot/compressed/vmlinux_32.lds
+@@ -3,17 +3,17 @@ OUTPUT_ARCH(i386)
+ ENTRY(startup_32)
+ SECTIONS
+ {
+- /* Be careful parts of head.S assume startup_32 is at
+- * address 0.
++ /* Be careful parts of head_32.S assume startup_32 is at
++ * address 0.
+ */
+- . = 0 ;
++ . = 0;
+ .text.head : {
+ _head = . ;
+ *(.text.head)
+ _ehead = . ;
+ }
+- .data.compressed : {
+- *(.data.compressed)
++ .rodata.compressed : {
++ *(.rodata.compressed)
+ }
+ .text : {
+ _text = .; /* Text */
+diff --git a/arch/x86/boot/compressed/vmlinux_32.scr b/arch/x86/boot/compressed/vmlinux_32.scr
+deleted file mode 100644
+index 707a88f..0000000
+--- a/arch/x86/boot/compressed/vmlinux_32.scr
++++ /dev/null
+@@ -1,10 +0,0 @@
+-SECTIONS
+-{
+- .data.compressed : {
+- input_len = .;
+- LONG(input_data_end - input_data) input_data = .;
+- *(.data)
+- output_len = . - 4;
+- input_data_end = .;
+- }
+-}
+diff --git a/arch/x86/boot/compressed/vmlinux_64.lds b/arch/x86/boot/compressed/vmlinux_64.lds
+index 94c13e5..f6e5b44 100644
+--- a/arch/x86/boot/compressed/vmlinux_64.lds
++++ b/arch/x86/boot/compressed/vmlinux_64.lds
+@@ -3,15 +3,19 @@ OUTPUT_ARCH(i386:x86-64)
+ ENTRY(startup_64)
+ SECTIONS
+ {
+- /* Be careful parts of head.S assume startup_32 is at
+- * address 0.
++ /* Be careful parts of head_64.S assume startup_64 is at
++ * address 0.
+ */
+ . = 0;
+- .text : {
++ .text.head : {
+ _head = . ;
+ *(.text.head)
+ _ehead = . ;
+- *(.text.compressed)
++ }
++ .rodata.compressed : {
++ *(.rodata.compressed)
++ }
++ .text : {
+ _text = .; /* Text */
+ *(.text)
+ *(.text.*)
+diff --git a/arch/x86/boot/compressed/vmlinux_64.scr b/arch/x86/boot/compressed/vmlinux_64.scr
+deleted file mode 100644
+index bd1429c..0000000
+--- a/arch/x86/boot/compressed/vmlinux_64.scr
++++ /dev/null
+@@ -1,10 +0,0 @@
+-SECTIONS
+-{
+- .text.compressed : {
+- input_len = .;
+- LONG(input_data_end - input_data) input_data = .;
+- *(.data)
+- output_len = . - 4;
+- input_data_end = .;
+- }
+-}
+diff --git a/arch/x86/boot/edd.c b/arch/x86/boot/edd.c
+index bd138e4..8721dc4 100644
+--- a/arch/x86/boot/edd.c
++++ b/arch/x86/boot/edd.c
+@@ -129,6 +129,7 @@ void query_edd(void)
+ char eddarg[8];
+ int do_mbr = 1;
+ int do_edd = 1;
++ int be_quiet;
+ int devno;
+ struct edd_info ei, *edp;
+ u32 *mbrptr;
+@@ -140,12 +141,21 @@ void query_edd(void)
+ do_edd = 0;
+ }
+
++ be_quiet = cmdline_find_option_bool("quiet");
++
+ edp = boot_params.eddbuf;
+ mbrptr = boot_params.edd_mbr_sig_buffer;
+
+ if (!do_edd)
+ return;
+
++ /* Bugs in OnBoard or AddOnCards Bios may hang the EDD probe,
++ * so give a hint if this happens.
++ */
++
++ if (!be_quiet)
++ printf("Probing EDD (edd=off to disable)... ");
++
+ for (devno = 0x80; devno < 0x80+EDD_MBR_SIG_MAX; devno++) {
+ /*
+ * Scan the BIOS-supported hard disks and query EDD
+@@ -162,6 +172,9 @@ void query_edd(void)
+ if (do_mbr && !read_mbr_sig(devno, &ei, mbrptr++))
+ boot_params.edd_mbr_sig_buf_entries = devno-0x80+1;
+ }
++
++ if (!be_quiet)
++ printf("ok\n");
+ }
+
+ #endif
+diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
+index 4cc5b04..64ad901 100644
+--- a/arch/x86/boot/header.S
++++ b/arch/x86/boot/header.S
+@@ -195,10 +195,13 @@ cmd_line_ptr: .long 0 # (Header version 0x0202 or later)
+ # can be located anywhere in
+ # low memory 0x10000 or higher.
+
+-ramdisk_max: .long (-__PAGE_OFFSET-(512 << 20)-1) & 0x7fffffff
++ramdisk_max: .long 0x7fffffff
+ # (Header version 0x0203 or later)
+ # The highest safe address for
+ # the contents of an initrd
++ # The current kernel allows up to 4 GB,
++ # but leave it at 2 GB to avoid
++ # possible bootloader bugs.
+
+ kernel_alignment: .long CONFIG_PHYSICAL_ALIGN #physical addr alignment
+ #required for protected mode
+diff --git a/arch/x86/boot/main.c b/arch/x86/boot/main.c
+index 1f95750..7828da5 100644
+--- a/arch/x86/boot/main.c
++++ b/arch/x86/boot/main.c
+@@ -100,20 +100,32 @@ static void set_bios_mode(void)
+ #endif
+ }
+
+-void main(void)
++static void init_heap(void)
+ {
+- /* First, copy the boot header into the "zeropage" */
+- copy_boot_params();
++ char *stack_end;
+
+- /* End of heap check */
+ if (boot_params.hdr.loadflags & CAN_USE_HEAP) {
+- heap_end = (char *)(boot_params.hdr.heap_end_ptr
+- +0x200-STACK_SIZE);
++ asm("leal %P1(%%esp),%0"
++ : "=r" (stack_end) : "i" (-STACK_SIZE));
++
++ heap_end = (char *)
++ ((size_t)boot_params.hdr.heap_end_ptr + 0x200);
++ if (heap_end > stack_end)
++ heap_end = stack_end;
+ } else {
+ /* Boot protocol 2.00 only, no heap available */
+ puts("WARNING: Ancient bootloader, some functionality "
+ "may be limited!\n");
+ }
++}
++
++void main(void)
++{
++ /* First, copy the boot header into the "zeropage" */
++ copy_boot_params();
++
++ /* End of heap check */
++ init_heap();
+
+ /* Make sure we have all the proper CPU support */
+ if (validate_cpu()) {
+@@ -131,9 +143,6 @@ void main(void)
+ /* Set keyboard repeat rate (why?) */
+ keyboard_set_repeat();
+
+- /* Set the video mode */
+- set_video();
+-
+ /* Query MCA information */
+ query_mca();
+
+@@ -154,6 +163,10 @@ void main(void)
+ #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
+ query_edd();
+ #endif
++
++ /* Set the video mode */
++ set_video();
++
+ /* Do the last things and invoke protected mode */
+ go_to_protected_mode();
+ }
+diff --git a/arch/x86/boot/pm.c b/arch/x86/boot/pm.c
+index 09fb342..1a0f936 100644
+--- a/arch/x86/boot/pm.c
++++ b/arch/x86/boot/pm.c
+@@ -104,7 +104,7 @@ static void reset_coprocessor(void)
+ (((u64)(base & 0xff000000) << 32) | \
+ ((u64)flags << 40) | \
+ ((u64)(limit & 0x00ff0000) << 32) | \
+- ((u64)(base & 0x00ffff00) << 16) | \
++ ((u64)(base & 0x00ffffff) << 16) | \
+ ((u64)(limit & 0x0000ffff)))
+
+ struct gdt_ptr {
+@@ -121,6 +121,10 @@ static void setup_gdt(void)
+ [GDT_ENTRY_BOOT_CS] = GDT_ENTRY(0xc09b, 0, 0xfffff),
+ /* DS: data, read/write, 4 GB, base 0 */
+ [GDT_ENTRY_BOOT_DS] = GDT_ENTRY(0xc093, 0, 0xfffff),
++ /* TSS: 32-bit tss, 104 bytes, base 4096 */
++ /* We only have a TSS here to keep Intel VT happy;
++ we don't actually use it for anything. */
++ [GDT_ENTRY_BOOT_TSS] = GDT_ENTRY(0x0089, 4096, 103),
+ };
+ /* Xen HVM incorrectly stores a pointer to the gdt_ptr, instead
+ of the gdt_ptr contents. Thus, make it static so it will
+diff --git a/arch/x86/boot/pmjump.S b/arch/x86/boot/pmjump.S
+index fa6bed1..f5402d5 100644
+--- a/arch/x86/boot/pmjump.S
++++ b/arch/x86/boot/pmjump.S
+@@ -15,6 +15,7 @@
+ */
+
+ #include <asm/boot.h>
++#include <asm/processor-flags.h>
+ #include <asm/segment.h>
+
+ .text
+@@ -29,28 +30,55 @@
+ */
+ protected_mode_jump:
+ movl %edx, %esi # Pointer to boot_params table
+- movl %eax, 2f # Patch ljmpl instruction
++
++ xorl %ebx, %ebx
++ movw %cs, %bx
++ shll $4, %ebx
++ addl %ebx, 2f
+
+ movw $__BOOT_DS, %cx
+- xorl %ebx, %ebx # Per the 32-bit boot protocol
+- xorl %ebp, %ebp # Per the 32-bit boot protocol
+- xorl %edi, %edi # Per the 32-bit boot protocol
++ movw $__BOOT_TSS, %di
+
+ movl %cr0, %edx
+- orb $1, %dl # Protected mode (PE) bit
++ orb $X86_CR0_PE, %dl # Protected mode
+ movl %edx, %cr0
+ jmp 1f # Short jump to serialize on 386/486
+ 1:
+
+- movw %cx, %ds
+- movw %cx, %es
+- movw %cx, %fs
+- movw %cx, %gs
+- movw %cx, %ss
+-
+- # Jump to the 32-bit entrypoint
++ # Transition to 32-bit mode
+ .byte 0x66, 0xea # ljmpl opcode
+-2: .long 0 # offset
++2: .long in_pm32 # offset
+ .word __BOOT_CS # segment
+
+ .size protected_mode_jump, .-protected_mode_jump
++
++ .code32
++ .type in_pm32, @function
++in_pm32:
++ # Set up data segments for flat 32-bit mode
++ movl %ecx, %ds
++ movl %ecx, %es
++ movl %ecx, %fs
++ movl %ecx, %gs
++ movl %ecx, %ss
++ # The 32-bit code sets up its own stack, but this way we do have
++ # a valid stack if some debugging hack wants to use it.
++ addl %ebx, %esp
++
++ # Set up TR to make Intel VT happy
++ ltr %di
++
++ # Clear registers to allow for future extensions to the
++ # 32-bit boot protocol
++ xorl %ecx, %ecx
++ xorl %edx, %edx
++ xorl %ebx, %ebx
++ xorl %ebp, %ebp
++ xorl %edi, %edi
++
++ # Set up LDTR to make Intel VT happy
++ lldt %cx
++
++ jmpl *%eax # Jump to the 32-bit entrypoint
++
++ .size in_pm32, .-in_pm32
+diff --git a/arch/x86/boot/video-bios.c b/arch/x86/boot/video-bios.c
+index ed0672a..ff664a1 100644
+--- a/arch/x86/boot/video-bios.c
++++ b/arch/x86/boot/video-bios.c
+@@ -104,6 +104,7 @@ static int bios_probe(void)
+
+ mi = GET_HEAP(struct mode_info, 1);
+ mi->mode = VIDEO_FIRST_BIOS+mode;
++ mi->depth = 0; /* text */
+ mi->x = rdfs16(0x44a);
+ mi->y = rdfs8(0x484)+1;
+ nmodes++;
+@@ -116,7 +117,7 @@ static int bios_probe(void)
+
+ __videocard video_bios =
+ {
+- .card_name = "BIOS (scanned)",
++ .card_name = "BIOS",
+ .probe = bios_probe,
+ .set_mode = bios_set_mode,
+ .unsafe = 1,
+diff --git a/arch/x86/boot/video-vesa.c b/arch/x86/boot/video-vesa.c
+index 4716b9a..662dd2f 100644
+--- a/arch/x86/boot/video-vesa.c
++++ b/arch/x86/boot/video-vesa.c
+@@ -79,20 +79,28 @@ static int vesa_probe(void)
+ /* Text Mode, TTY BIOS supported,
+ supported by hardware */
+ mi = GET_HEAP(struct mode_info, 1);
+- mi->mode = mode + VIDEO_FIRST_VESA;
+- mi->x = vminfo.h_res;
+- mi->y = vminfo.v_res;
++ mi->mode = mode + VIDEO_FIRST_VESA;
++ mi->depth = 0; /* text */
++ mi->x = vminfo.h_res;
++ mi->y = vminfo.v_res;
+ nmodes++;
+- } else if ((vminfo.mode_attr & 0x99) == 0x99) {
++ } else if ((vminfo.mode_attr & 0x99) == 0x99 &&
++ (vminfo.memory_layout == 4 ||
++ vminfo.memory_layout == 6) &&
++ vminfo.memory_planes == 1) {
+ #ifdef CONFIG_FB
+ /* Graphics mode, color, linear frame buffer
+- supported -- register the mode but hide from
+- the menu. Only do this if framebuffer is
+- configured, however, otherwise the user will
+- be left without a screen. */
++ supported. Only register the mode if
++ if framebuffer is configured, however,
++ otherwise the user will be left without a screen.
++ We don't require CONFIG_FB_VESA, however, since
++ some of the other framebuffer drivers can use
++ this mode-setting, too. */
+ mi = GET_HEAP(struct mode_info, 1);
+ mi->mode = mode + VIDEO_FIRST_VESA;
+- mi->x = mi->y = 0;
++ mi->depth = vminfo.bpp;
++ mi->x = vminfo.h_res;
++ mi->y = vminfo.v_res;
+ nmodes++;
+ #endif
+ }
+diff --git a/arch/x86/boot/video-vga.c b/arch/x86/boot/video-vga.c
+index aef02f9..7259387 100644
+--- a/arch/x86/boot/video-vga.c
++++ b/arch/x86/boot/video-vga.c
+@@ -18,22 +18,22 @@
+ #include "video.h"
+
+ static struct mode_info vga_modes[] = {
+- { VIDEO_80x25, 80, 25 },
+- { VIDEO_8POINT, 80, 50 },
+- { VIDEO_80x43, 80, 43 },
+- { VIDEO_80x28, 80, 28 },
+- { VIDEO_80x30, 80, 30 },
+- { VIDEO_80x34, 80, 34 },
+- { VIDEO_80x60, 80, 60 },
++ { VIDEO_80x25, 80, 25, 0 },
++ { VIDEO_8POINT, 80, 50, 0 },
++ { VIDEO_80x43, 80, 43, 0 },
++ { VIDEO_80x28, 80, 28, 0 },
++ { VIDEO_80x30, 80, 30, 0 },
++ { VIDEO_80x34, 80, 34, 0 },
++ { VIDEO_80x60, 80, 60, 0 },
+ };
+
+ static struct mode_info ega_modes[] = {
+- { VIDEO_80x25, 80, 25 },
+- { VIDEO_8POINT, 80, 43 },
++ { VIDEO_80x25, 80, 25, 0 },
++ { VIDEO_8POINT, 80, 43, 0 },
+ };
+
+ static struct mode_info cga_modes[] = {
+- { VIDEO_80x25, 80, 25 },
++ { VIDEO_80x25, 80, 25, 0 },
+ };
+
+ __videocard video_vga;
+diff --git a/arch/x86/boot/video.c b/arch/x86/boot/video.c
+index ad9712f..696d08f 100644
+--- a/arch/x86/boot/video.c
++++ b/arch/x86/boot/video.c
+@@ -293,13 +293,28 @@ static void display_menu(void)
+ struct mode_info *mi;
+ char ch;
+ int i;
++ int nmodes;
++ int modes_per_line;
++ int col;
+
+- puts("Mode: COLSxROWS:\n");
++ nmodes = 0;
++ for (card = video_cards; card < video_cards_end; card++)
++ nmodes += card->nmodes;
+
++ modes_per_line = 1;
++ if (nmodes >= 20)
++ modes_per_line = 3;
++
++ for (col = 0; col < modes_per_line; col++)
++ puts("Mode: Resolution: Type: ");
++ putchar('\n');
++
++ col = 0;
+ ch = '0';
+ for (card = video_cards; card < video_cards_end; card++) {
+ mi = card->modes;
+ for (i = 0; i < card->nmodes; i++, mi++) {
++ char resbuf[32];
+ int visible = mi->x && mi->y;
+ u16 mode_id = mi->mode ? mi->mode :
+ (mi->y << 8)+mi->x;
+@@ -307,8 +322,18 @@ static void display_menu(void)
+ if (!visible)
+ continue; /* Hidden mode */
+
+- printf("%c %04X %3dx%-3d %s\n",
+- ch, mode_id, mi->x, mi->y, card->card_name);
++ if (mi->depth)
++ sprintf(resbuf, "%dx%d", mi->y, mi->depth);
++ else
++ sprintf(resbuf, "%d", mi->y);
++
++ printf("%c %03X %4dx%-7s %-6s",
++ ch, mode_id, mi->x, resbuf, card->card_name);
++ col++;
++ if (col >= modes_per_line) {
++ putchar('\n');
++ col = 0;
++ }
+
+ if (ch == '9')
+ ch = 'a';
+@@ -318,6 +343,8 @@ static void display_menu(void)
+ ch++;
+ }
+ }
++ if (col)
++ putchar('\n');
+ }
+
+ #define H(x) ((x)-'a'+10)
+diff --git a/arch/x86/boot/video.h b/arch/x86/boot/video.h
+index b92447d..d69347f 100644
+--- a/arch/x86/boot/video.h
++++ b/arch/x86/boot/video.h
+@@ -83,7 +83,8 @@ void store_screen(void);
+
+ struct mode_info {
+ u16 mode; /* Mode number (vga= style) */
+- u8 x, y; /* Width, height */
++ u16 x, y; /* Width, height */
++ u16 depth; /* Bits per pixel, 0 for text mode */
+ };
+
+ struct card_info {
+diff --git a/arch/x86/boot/voyager.c b/arch/x86/boot/voyager.c
+index 61c8fe0..6499e32 100644
+--- a/arch/x86/boot/voyager.c
++++ b/arch/x86/boot/voyager.c
+@@ -16,8 +16,6 @@
+
+ #include "boot.h"
+
+-#ifdef CONFIG_X86_VOYAGER
+-
+ int query_voyager(void)
+ {
+ u8 err;
+@@ -42,5 +40,3 @@ int query_voyager(void)
+ copy_from_fs(data_ptr, di, 7); /* Table is 7 bytes apparently */
+ return 0;
+ }
+-
+-#endif /* CONFIG_X86_VOYAGER */
+diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
+index 54ee176..77562e7 100644
+--- a/arch/x86/configs/i386_defconfig
++++ b/arch/x86/configs/i386_defconfig
+@@ -99,9 +99,9 @@ CONFIG_IOSCHED_NOOP=y
+ CONFIG_IOSCHED_AS=y
+ CONFIG_IOSCHED_DEADLINE=y
+ CONFIG_IOSCHED_CFQ=y
+-CONFIG_DEFAULT_AS=y
++# CONFIG_DEFAULT_AS is not set
+ # CONFIG_DEFAULT_DEADLINE is not set
+-# CONFIG_DEFAULT_CFQ is not set
++CONFIG_DEFAULT_CFQ=y
+ # CONFIG_DEFAULT_NOOP is not set
+ CONFIG_DEFAULT_IOSCHED="anticipatory"
+
+diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
+index 38a83f9..9e2b0ef 100644
+--- a/arch/x86/configs/x86_64_defconfig
++++ b/arch/x86/configs/x86_64_defconfig
+@@ -145,15 +145,6 @@ CONFIG_K8_NUMA=y
+ CONFIG_NODES_SHIFT=6
+ CONFIG_X86_64_ACPI_NUMA=y
+ CONFIG_NUMA_EMU=y
+-CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
+-CONFIG_ARCH_DISCONTIGMEM_DEFAULT=y
+-CONFIG_ARCH_SPARSEMEM_ENABLE=y
+-CONFIG_SELECT_MEMORY_MODEL=y
+-# CONFIG_FLATMEM_MANUAL is not set
+-CONFIG_DISCONTIGMEM_MANUAL=y
+-# CONFIG_SPARSEMEM_MANUAL is not set
+-CONFIG_DISCONTIGMEM=y
+-CONFIG_FLAT_NODE_MEM_MAP=y
+ CONFIG_NEED_MULTIPLE_NODES=y
+ # CONFIG_SPARSEMEM_STATIC is not set
+ CONFIG_SPLIT_PTLOCK_CPUS=4
diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 46bb609..3874c2d 100644
--- a/arch/x86/crypto/Makefile
@@ -135061,42110 +139628,109851 @@
+MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
+MODULE_ALIAS("twofish");
+MODULE_ALIAS("twofish-asm");
-diff --git a/arch/x86/kernel/apic_32.c b/arch/x86/kernel/apic_32.c
-index edb5108..a56c782 100644
---- a/arch/x86/kernel/apic_32.c
-+++ b/arch/x86/kernel/apic_32.c
-@@ -1530,7 +1530,7 @@ static int lapic_resume(struct sys_device *dev)
- */
+diff --git a/arch/x86/ia32/Makefile b/arch/x86/ia32/Makefile
+index e2edda2..52d0ccf 100644
+--- a/arch/x86/ia32/Makefile
++++ b/arch/x86/ia32/Makefile
+@@ -2,9 +2,7 @@
+ # Makefile for the ia32 kernel emulation subsystem.
+ #
+
+-obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o tls32.o \
+- ia32_binfmt.o fpu32.o ptrace32.o syscall32.o syscall32_syscall.o \
+- mmap32.o
++obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o
+
+ sysv-$(CONFIG_SYSVIPC) := ipc32.o
+ obj-$(CONFIG_IA32_EMULATION) += $(sysv-y)
+@@ -13,40 +11,3 @@ obj-$(CONFIG_IA32_AOUT) += ia32_aout.o
+
+ audit-class-$(CONFIG_AUDIT) := audit.o
+ obj-$(CONFIG_IA32_EMULATION) += $(audit-class-y)
+-
+-$(obj)/syscall32_syscall.o: \
+- $(foreach F,sysenter syscall,$(obj)/vsyscall-$F.so)
+-
+-# Teach kbuild about targets
+-targets := $(foreach F,$(addprefix vsyscall-,sysenter syscall),\
+- $F.o $F.so $F.so.dbg)
+-
+-# The DSO images are built using a special linker script
+-quiet_cmd_syscall = SYSCALL $@
+- cmd_syscall = $(CC) -m32 -nostdlib -shared \
+- $(call ld-option, -Wl$(comma)--hash-style=sysv) \
+- -Wl,-soname=linux-gate.so.1 -o $@ \
+- -Wl,-T,$(filter-out FORCE,$^)
+-
+-$(obj)/%.so: OBJCOPYFLAGS := -S
+-$(obj)/%.so: $(obj)/%.so.dbg FORCE
+- $(call if_changed,objcopy)
+-
+-$(obj)/vsyscall-sysenter.so.dbg $(obj)/vsyscall-syscall.so.dbg: \
+-$(obj)/vsyscall-%.so.dbg: $(src)/vsyscall.lds $(obj)/vsyscall-%.o FORCE
+- $(call if_changed,syscall)
+-
+-AFLAGS_vsyscall-sysenter.o = -m32 -Wa,-32
+-AFLAGS_vsyscall-syscall.o = -m32 -Wa,-32
+-
+-vdsos := vdso32-sysenter.so vdso32-syscall.so
+-
+-quiet_cmd_vdso_install = INSTALL $@
+- cmd_vdso_install = cp $(@:vdso32-%.so=$(obj)/vsyscall-%.so.dbg) \
+- $(MODLIB)/vdso/$@
+-
+-$(vdsos):
+- @mkdir -p $(MODLIB)/vdso
+- $(call cmd,vdso_install)
+-
+-vdso_install: $(vdsos)
+diff --git a/arch/x86/ia32/audit.c b/arch/x86/ia32/audit.c
+index 91b7b59..5d7b381 100644
+--- a/arch/x86/ia32/audit.c
++++ b/arch/x86/ia32/audit.c
+@@ -27,7 +27,7 @@ unsigned ia32_signal_class[] = {
+
+ int ia32_classify_syscall(unsigned syscall)
+ {
+- switch(syscall) {
++ switch (syscall) {
+ case __NR_open:
+ return 2;
+ case __NR_openat:
+diff --git a/arch/x86/ia32/fpu32.c b/arch/x86/ia32/fpu32.c
+deleted file mode 100644
+index 2c8209a..0000000
+--- a/arch/x86/ia32/fpu32.c
++++ /dev/null
+@@ -1,183 +0,0 @@
+-/*
+- * Copyright 2002 Andi Kleen, SuSE Labs.
+- * FXSAVE<->i387 conversion support. Based on code by Gareth Hughes.
+- * This is used for ptrace, signals and coredumps in 32bit emulation.
+- */
+-
+-#include <linux/sched.h>
+-#include <asm/sigcontext32.h>
+-#include <asm/processor.h>
+-#include <asm/uaccess.h>
+-#include <asm/i387.h>
+-
+-static inline unsigned short twd_i387_to_fxsr(unsigned short twd)
+-{
+- unsigned int tmp; /* to avoid 16 bit prefixes in the code */
+-
+- /* Transform each pair of bits into 01 (valid) or 00 (empty) */
+- tmp = ~twd;
+- tmp = (tmp | (tmp>>1)) & 0x5555; /* 0V0V0V0V0V0V0V0V */
+- /* and move the valid bits to the lower byte. */
+- tmp = (tmp | (tmp >> 1)) & 0x3333; /* 00VV00VV00VV00VV */
+- tmp = (tmp | (tmp >> 2)) & 0x0f0f; /* 0000VVVV0000VVVV */
+- tmp = (tmp | (tmp >> 4)) & 0x00ff; /* 00000000VVVVVVVV */
+- return tmp;
+-}
+-
+-static inline unsigned long twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave)
+-{
+- struct _fpxreg *st = NULL;
+- unsigned long tos = (fxsave->swd >> 11) & 7;
+- unsigned long twd = (unsigned long) fxsave->twd;
+- unsigned long tag;
+- unsigned long ret = 0xffff0000;
+- int i;
+-
+-#define FPREG_ADDR(f, n) ((void *)&(f)->st_space + (n) * 16);
+-
+- for (i = 0 ; i < 8 ; i++) {
+- if (twd & 0x1) {
+- st = FPREG_ADDR( fxsave, (i - tos) & 7 );
+-
+- switch (st->exponent & 0x7fff) {
+- case 0x7fff:
+- tag = 2; /* Special */
+- break;
+- case 0x0000:
+- if ( !st->significand[0] &&
+- !st->significand[1] &&
+- !st->significand[2] &&
+- !st->significand[3] ) {
+- tag = 1; /* Zero */
+- } else {
+- tag = 2; /* Special */
+- }
+- break;
+- default:
+- if (st->significand[3] & 0x8000) {
+- tag = 0; /* Valid */
+- } else {
+- tag = 2; /* Special */
+- }
+- break;
+- }
+- } else {
+- tag = 3; /* Empty */
+- }
+- ret |= (tag << (2 * i));
+- twd = twd >> 1;
+- }
+- return ret;
+-}
+-
+-
+-static inline int convert_fxsr_from_user(struct i387_fxsave_struct *fxsave,
+- struct _fpstate_ia32 __user *buf)
+-{
+- struct _fpxreg *to;
+- struct _fpreg __user *from;
+- int i;
+- u32 v;
+- int err = 0;
+-
+-#define G(num,val) err |= __get_user(val, num + (u32 __user *)buf)
+- G(0, fxsave->cwd);
+- G(1, fxsave->swd);
+- G(2, fxsave->twd);
+- fxsave->twd = twd_i387_to_fxsr(fxsave->twd);
+- G(3, fxsave->rip);
+- G(4, v);
+- fxsave->fop = v>>16; /* cs ignored */
+- G(5, fxsave->rdp);
+- /* 6: ds ignored */
+-#undef G
+- if (err)
+- return -1;
+-
+- to = (struct _fpxreg *)&fxsave->st_space[0];
+- from = &buf->_st[0];
+- for (i = 0 ; i < 8 ; i++, to++, from++) {
+- if (__copy_from_user(to, from, sizeof(*from)))
+- return -1;
+- }
+- return 0;
+-}
+-
+-
+-static inline int convert_fxsr_to_user(struct _fpstate_ia32 __user *buf,
+- struct i387_fxsave_struct *fxsave,
+- struct pt_regs *regs,
+- struct task_struct *tsk)
+-{
+- struct _fpreg __user *to;
+- struct _fpxreg *from;
+- int i;
+- u16 cs,ds;
+- int err = 0;
+-
+- if (tsk == current) {
+- /* should be actually ds/cs at fpu exception time,
+- but that information is not available in 64bit mode. */
+- asm("movw %%ds,%0 " : "=r" (ds));
+- asm("movw %%cs,%0 " : "=r" (cs));
+- } else { /* ptrace. task has stopped. */
+- ds = tsk->thread.ds;
+- cs = regs->cs;
+- }
+-
+-#define P(num,val) err |= __put_user(val, num + (u32 __user *)buf)
+- P(0, (u32)fxsave->cwd | 0xffff0000);
+- P(1, (u32)fxsave->swd | 0xffff0000);
+- P(2, twd_fxsr_to_i387(fxsave));
+- P(3, (u32)fxsave->rip);
+- P(4, cs | ((u32)fxsave->fop) << 16);
+- P(5, fxsave->rdp);
+- P(6, 0xffff0000 | ds);
+-#undef P
+-
+- if (err)
+- return -1;
+-
+- to = &buf->_st[0];
+- from = (struct _fpxreg *) &fxsave->st_space[0];
+- for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
+- if (__copy_to_user(to, from, sizeof(*to)))
+- return -1;
+- }
+- return 0;
+-}
+-
+-int restore_i387_ia32(struct task_struct *tsk, struct _fpstate_ia32 __user *buf, int fsave)
+-{
+- clear_fpu(tsk);
+- if (!fsave) {
+- if (__copy_from_user(&tsk->thread.i387.fxsave,
+- &buf->_fxsr_env[0],
+- sizeof(struct i387_fxsave_struct)))
+- return -1;
+- tsk->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
+- set_stopped_child_used_math(tsk);
+- }
+- return convert_fxsr_from_user(&tsk->thread.i387.fxsave, buf);
+-}
+-
+-int save_i387_ia32(struct task_struct *tsk,
+- struct _fpstate_ia32 __user *buf,
+- struct pt_regs *regs,
+- int fsave)
+-{
+- int err = 0;
+-
+- init_fpu(tsk);
+- if (convert_fxsr_to_user(buf, &tsk->thread.i387.fxsave, regs, tsk))
+- return -1;
+- if (fsave)
+- return 0;
+- err |= __put_user(tsk->thread.i387.fxsave.swd, &buf->status);
+- if (fsave)
+- return err ? -1 : 1;
+- err |= __put_user(X86_FXSR_MAGIC, &buf->magic);
+- err |= __copy_to_user(&buf->_fxsr_env[0], &tsk->thread.i387.fxsave,
+- sizeof(struct i387_fxsave_struct));
+- return err ? -1 : 1;
+-}
+diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c
+index f82e1a9..e4c1207 100644
+--- a/arch/x86/ia32/ia32_aout.c
++++ b/arch/x86/ia32/ia32_aout.c
+@@ -25,6 +25,7 @@
+ #include <linux/binfmts.h>
+ #include <linux/personality.h>
+ #include <linux/init.h>
++#include <linux/jiffies.h>
- static struct sysdev_class lapic_sysclass = {
-- set_kset_name("lapic"),
-+ .name = "lapic",
- .resume = lapic_resume,
- .suspend = lapic_suspend,
- };
-diff --git a/arch/x86/kernel/apic_64.c b/arch/x86/kernel/apic_64.c
-index f28ccb5..fa6cdee 100644
---- a/arch/x86/kernel/apic_64.c
-+++ b/arch/x86/kernel/apic_64.c
-@@ -639,7 +639,7 @@ static int lapic_resume(struct sys_device *dev)
+ #include <asm/system.h>
+ #include <asm/uaccess.h>
+@@ -36,61 +37,67 @@
+ #undef WARN_OLD
+ #undef CORE_DUMP /* probably broken */
+
+-static int load_aout_binary(struct linux_binprm *, struct pt_regs * regs);
+-static int load_aout_library(struct file*);
++static int load_aout_binary(struct linux_binprm *, struct pt_regs *regs);
++static int load_aout_library(struct file *);
+
+ #ifdef CORE_DUMP
+-static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file, unsigned long limit);
++static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
++ unsigned long limit);
+
+ /*
+ * fill in the user structure for a core dump..
+ */
+-static void dump_thread32(struct pt_regs * regs, struct user32 * dump)
++static void dump_thread32(struct pt_regs *regs, struct user32 *dump)
+ {
+- u32 fs,gs;
++ u32 fs, gs;
+
+ /* changed the size calculations - should hopefully work better. lbt */
+ dump->magic = CMAGIC;
+ dump->start_code = 0;
+- dump->start_stack = regs->rsp & ~(PAGE_SIZE - 1);
++ dump->start_stack = regs->sp & ~(PAGE_SIZE - 1);
+ dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
+- dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
++ dump->u_dsize = ((unsigned long)
++ (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
+ dump->u_dsize -= dump->u_tsize;
+ dump->u_ssize = 0;
+- dump->u_debugreg[0] = current->thread.debugreg0;
+- dump->u_debugreg[1] = current->thread.debugreg1;
+- dump->u_debugreg[2] = current->thread.debugreg2;
+- dump->u_debugreg[3] = current->thread.debugreg3;
+- dump->u_debugreg[4] = 0;
+- dump->u_debugreg[5] = 0;
+- dump->u_debugreg[6] = current->thread.debugreg6;
+- dump->u_debugreg[7] = current->thread.debugreg7;
+-
+- if (dump->start_stack < 0xc0000000)
+- dump->u_ssize = ((unsigned long) (0xc0000000 - dump->start_stack)) >> PAGE_SHIFT;
+-
+- dump->regs.ebx = regs->rbx;
+- dump->regs.ecx = regs->rcx;
+- dump->regs.edx = regs->rdx;
+- dump->regs.esi = regs->rsi;
+- dump->regs.edi = regs->rdi;
+- dump->regs.ebp = regs->rbp;
+- dump->regs.eax = regs->rax;
++ dump->u_debugreg[0] = current->thread.debugreg0;
++ dump->u_debugreg[1] = current->thread.debugreg1;
++ dump->u_debugreg[2] = current->thread.debugreg2;
++ dump->u_debugreg[3] = current->thread.debugreg3;
++ dump->u_debugreg[4] = 0;
++ dump->u_debugreg[5] = 0;
++ dump->u_debugreg[6] = current->thread.debugreg6;
++ dump->u_debugreg[7] = current->thread.debugreg7;
++
++ if (dump->start_stack < 0xc0000000) {
++ unsigned long tmp;
++
++ tmp = (unsigned long) (0xc0000000 - dump->start_stack);
++ dump->u_ssize = tmp >> PAGE_SHIFT;
++ }
++
++ dump->regs.bx = regs->bx;
++ dump->regs.cx = regs->cx;
++ dump->regs.dx = regs->dx;
++ dump->regs.si = regs->si;
++ dump->regs.di = regs->di;
++ dump->regs.bp = regs->bp;
++ dump->regs.ax = regs->ax;
+ dump->regs.ds = current->thread.ds;
+ dump->regs.es = current->thread.es;
+ asm("movl %%fs,%0" : "=r" (fs)); dump->regs.fs = fs;
+- asm("movl %%gs,%0" : "=r" (gs)); dump->regs.gs = gs;
+- dump->regs.orig_eax = regs->orig_rax;
+- dump->regs.eip = regs->rip;
++ asm("movl %%gs,%0" : "=r" (gs)); dump->regs.gs = gs;
++ dump->regs.orig_ax = regs->orig_ax;
++ dump->regs.ip = regs->ip;
+ dump->regs.cs = regs->cs;
+- dump->regs.eflags = regs->eflags;
+- dump->regs.esp = regs->rsp;
++ dump->regs.flags = regs->flags;
++ dump->regs.sp = regs->sp;
+ dump->regs.ss = regs->ss;
+
+ #if 1 /* FIXME */
+ dump->u_fpvalid = 0;
+ #else
+- dump->u_fpvalid = dump_fpu (regs, &dump->i387);
++ dump->u_fpvalid = dump_fpu(regs, &dump->i387);
+ #endif
}
- static struct sysdev_class lapic_sysclass = {
-- set_kset_name("lapic"),
-+ .name = "lapic",
- .resume = lapic_resume,
- .suspend = lapic_suspend,
- };
-diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
-index 9f530ff..8b4507b 100644
---- a/arch/x86/kernel/cpu/intel_cacheinfo.c
-+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
-@@ -733,10 +733,8 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
- if (unlikely(retval < 0))
- return retval;
+@@ -128,15 +135,19 @@ static int dump_write(struct file *file, const void *addr, int nr)
+ return file->f_op->write(file, addr, nr, &file->f_pos) == nr;
+ }
-- cache_kobject[cpu]->parent = &sys_dev->kobj;
-- kobject_set_name(cache_kobject[cpu], "%s", "cache");
-- cache_kobject[cpu]->ktype = &ktype_percpu_entry;
-- retval = kobject_register(cache_kobject[cpu]);
-+ retval = kobject_init_and_add(cache_kobject[cpu], &ktype_percpu_entry,
-+ &sys_dev->kobj, "%s", "cache");
- if (retval < 0) {
- cpuid4_cache_sysfs_exit(cpu);
+-#define DUMP_WRITE(addr, nr) \
++#define DUMP_WRITE(addr, nr) \
+ if (!dump_write(file, (void *)(addr), (nr))) \
+ goto end_coredump;
+
+-#define DUMP_SEEK(offset) \
+-if (file->f_op->llseek) { \
+- if (file->f_op->llseek(file,(offset),0) != (offset)) \
+- goto end_coredump; \
+-} else file->f_pos = (offset)
++#define DUMP_SEEK(offset) \
++ if (file->f_op->llseek) { \
++ if (file->f_op->llseek(file, (offset), 0) != (offset)) \
++ goto end_coredump; \
++ } else \
++ file->f_pos = (offset)
++
++#define START_DATA() (u.u_tsize << PAGE_SHIFT)
++#define START_STACK(u) (u.start_stack)
+
+ /*
+ * Routine writes a core dump image in the current directory.
+@@ -148,62 +159,70 @@ if (file->f_op->llseek) { \
+ * dumping of the process results in another error..
+ */
+
+-static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file, unsigned long limit)
++static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file,
++ unsigned long limit)
+ {
+ mm_segment_t fs;
+ int has_dumped = 0;
+ unsigned long dump_start, dump_size;
+ struct user32 dump;
+-# define START_DATA(u) (u.u_tsize << PAGE_SHIFT)
+-# define START_STACK(u) (u.start_stack)
+
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+ has_dumped = 1;
+ current->flags |= PF_DUMPCORE;
+- strncpy(dump.u_comm, current->comm, sizeof(current->comm));
+- dump.u_ar0 = (u32)(((unsigned long)(&dump.regs)) - ((unsigned long)(&dump)));
++ strncpy(dump.u_comm, current->comm, sizeof(current->comm));
++ dump.u_ar0 = (u32)(((unsigned long)(&dump.regs)) -
++ ((unsigned long)(&dump)));
+ dump.signal = signr;
+ dump_thread32(regs, &dump);
+
+-/* If the size of the dump file exceeds the rlimit, then see what would happen
+- if we wrote the stack, but not the data area. */
++ /*
++ * If the size of the dump file exceeds the rlimit, then see
++ * what would happen if we wrote the stack, but not the data
++ * area.
++ */
+ if ((dump.u_dsize + dump.u_ssize + 1) * PAGE_SIZE > limit)
+ dump.u_dsize = 0;
+
+-/* Make sure we have enough room to write the stack and data areas. */
++ /* Make sure we have enough room to write the stack and data areas. */
+ if ((dump.u_ssize + 1) * PAGE_SIZE > limit)
+ dump.u_ssize = 0;
+
+-/* make sure we actually have a data and stack area to dump */
++ /* make sure we actually have a data and stack area to dump */
+ set_fs(USER_DS);
+- if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_DATA(dump), dump.u_dsize << PAGE_SHIFT))
++ if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_DATA(dump),
++ dump.u_dsize << PAGE_SHIFT))
+ dump.u_dsize = 0;
+- if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_STACK(dump), dump.u_ssize << PAGE_SHIFT))
++ if (!access_ok(VERIFY_READ, (void *) (unsigned long)START_STACK(dump),
++ dump.u_ssize << PAGE_SHIFT))
+ dump.u_ssize = 0;
+
+ set_fs(KERNEL_DS);
+-/* struct user */
+- DUMP_WRITE(&dump,sizeof(dump));
+-/* Now dump all of the user data. Include malloced stuff as well */
++ /* struct user */
++ DUMP_WRITE(&dump, sizeof(dump));
++ /* Now dump all of the user data. Include malloced stuff as well */
+ DUMP_SEEK(PAGE_SIZE);
+-/* now we start writing out the user space info */
++ /* now we start writing out the user space info */
+ set_fs(USER_DS);
+-/* Dump the data area */
++ /* Dump the data area */
+ if (dump.u_dsize != 0) {
+ dump_start = START_DATA(dump);
+ dump_size = dump.u_dsize << PAGE_SHIFT;
+- DUMP_WRITE(dump_start,dump_size);
++ DUMP_WRITE(dump_start, dump_size);
+ }
+-/* Now prepare to dump the stack area */
++ /* Now prepare to dump the stack area */
+ if (dump.u_ssize != 0) {
+ dump_start = START_STACK(dump);
+ dump_size = dump.u_ssize << PAGE_SHIFT;
+- DUMP_WRITE(dump_start,dump_size);
++ DUMP_WRITE(dump_start, dump_size);
+ }
+-/* Finally dump the task struct. Not be used by gdb, but could be useful */
++ /*
++ * Finally dump the task struct. Not be used by gdb, but
++ * could be useful
++ */
+ set_fs(KERNEL_DS);
+- DUMP_WRITE(current,sizeof(*current));
++ DUMP_WRITE(current, sizeof(*current));
+ end_coredump:
+ set_fs(fs);
+ return has_dumped;
+@@ -217,35 +236,34 @@ end_coredump:
+ */
+ static u32 __user *create_aout_tables(char __user *p, struct linux_binprm *bprm)
+ {
+- u32 __user *argv;
+- u32 __user *envp;
+- u32 __user *sp;
+- int argc = bprm->argc;
+- int envc = bprm->envc;
++ u32 __user *argv, *envp, *sp;
++ int argc = bprm->argc, envc = bprm->envc;
+
+ sp = (u32 __user *) ((-(unsigned long)sizeof(u32)) & (unsigned long) p);
+ sp -= envc+1;
+ envp = sp;
+ sp -= argc+1;
+ argv = sp;
+- put_user((unsigned long) envp,--sp);
+- put_user((unsigned long) argv,--sp);
+- put_user(argc,--sp);
++ put_user((unsigned long) envp, --sp);
++ put_user((unsigned long) argv, --sp);
++ put_user(argc, --sp);
+ current->mm->arg_start = (unsigned long) p;
+- while (argc-->0) {
++ while (argc-- > 0) {
+ char c;
+- put_user((u32)(unsigned long)p,argv++);
++
++ put_user((u32)(unsigned long)p, argv++);
+ do {
+- get_user(c,p++);
++ get_user(c, p++);
+ } while (c);
+ }
+ put_user(0, argv);
+ current->mm->arg_end = current->mm->env_start = (unsigned long) p;
+- while (envc-->0) {
++ while (envc-- > 0) {
+ char c;
+- put_user((u32)(unsigned long)p,envp++);
++
++ put_user((u32)(unsigned long)p, envp++);
+ do {
+- get_user(c,p++);
++ get_user(c, p++);
+ } while (c);
+ }
+ put_user(0, envp);
+@@ -257,20 +275,18 @@ static u32 __user *create_aout_tables(char __user *p, struct linux_binprm *bprm)
+ * These are the functions used to load a.out style executables and shared
+ * libraries. There is no binary dependent code anywhere else.
+ */
+-
+-static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
++static int load_aout_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ {
++ unsigned long error, fd_offset, rlim;
+ struct exec ex;
+- unsigned long error;
+- unsigned long fd_offset;
+- unsigned long rlim;
+ int retval;
+
+ ex = *((struct exec *) bprm->buf); /* exec-header */
+ if ((N_MAGIC(ex) != ZMAGIC && N_MAGIC(ex) != OMAGIC &&
+ N_MAGIC(ex) != QMAGIC && N_MAGIC(ex) != NMAGIC) ||
+ N_TRSIZE(ex) || N_DRSIZE(ex) ||
+- i_size_read(bprm->file->f_path.dentry->d_inode) < ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
++ i_size_read(bprm->file->f_path.dentry->d_inode) <
++ ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
+ return -ENOEXEC;
+ }
+
+@@ -291,13 +307,13 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
+ if (retval)
return retval;
-@@ -746,23 +744,23 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
- this_object = INDEX_KOBJECT_PTR(cpu,i);
- this_object->cpu = cpu;
- this_object->index = i;
-- this_object->kobj.parent = cache_kobject[cpu];
-- kobject_set_name(&(this_object->kobj), "index%1lu", i);
-- this_object->kobj.ktype = &ktype_cache;
-- retval = kobject_register(&(this_object->kobj));
-+ retval = kobject_init_and_add(&(this_object->kobj),
-+ &ktype_cache, cache_kobject[cpu],
-+ "index%1lu", i);
- if (unlikely(retval)) {
- for (j = 0; j < i; j++) {
-- kobject_unregister(
-- &(INDEX_KOBJECT_PTR(cpu,j)->kobj));
-+ kobject_put(&(INDEX_KOBJECT_PTR(cpu,j)->kobj));
- }
-- kobject_unregister(cache_kobject[cpu]);
-+ kobject_put(cache_kobject[cpu]);
- cpuid4_cache_sysfs_exit(cpu);
- break;
+
+- regs->cs = __USER32_CS;
++ regs->cs = __USER32_CS;
+ regs->r8 = regs->r9 = regs->r10 = regs->r11 = regs->r12 =
+ regs->r13 = regs->r14 = regs->r15 = 0;
+
+ /* OK, This is the point of no return */
+ set_personality(PER_LINUX);
+- set_thread_flag(TIF_IA32);
++ set_thread_flag(TIF_IA32);
+ clear_thread_flag(TIF_ABI_PENDING);
+
+ current->mm->end_code = ex.a_text +
+@@ -311,7 +327,7 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
+
+ current->mm->mmap = NULL;
+ compute_creds(bprm);
+- current->flags &= ~PF_FORKNOEXEC;
++ current->flags &= ~PF_FORKNOEXEC;
+
+ if (N_MAGIC(ex) == OMAGIC) {
+ unsigned long text_addr, map_size;
+@@ -338,30 +354,31 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
+ send_sig(SIGKILL, current, 0);
+ return error;
}
-+ kobject_uevent(&(this_object->kobj), KOBJ_ADD);
+-
++
+ flush_icache_range(text_addr, text_addr+ex.a_text+ex.a_data);
+ } else {
+ #ifdef WARN_OLD
+ static unsigned long error_time, error_time2;
+ if ((ex.a_text & 0xfff || ex.a_data & 0xfff) &&
+- (N_MAGIC(ex) != NMAGIC) && (jiffies-error_time2) > 5*HZ)
+- {
++ (N_MAGIC(ex) != NMAGIC) &&
++ time_after(jiffies, error_time2 + 5*HZ)) {
+ printk(KERN_NOTICE "executable not page aligned\n");
+ error_time2 = jiffies;
+ }
+
+ if ((fd_offset & ~PAGE_MASK) != 0 &&
+- (jiffies-error_time) > 5*HZ)
+- {
+- printk(KERN_WARNING
+- "fd_offset is not page aligned. Please convert program: %s\n",
++ time_after(jiffies, error_time + 5*HZ)) {
++ printk(KERN_WARNING
++ "fd_offset is not page aligned. Please convert "
++ "program: %s\n",
+ bprm->file->f_path.dentry->d_name.name);
+ error_time = jiffies;
+ }
+ #endif
+
+- if (!bprm->file->f_op->mmap||((fd_offset & ~PAGE_MASK) != 0)) {
++ if (!bprm->file->f_op->mmap || (fd_offset & ~PAGE_MASK) != 0) {
+ loff_t pos = fd_offset;
++
+ down_write(&current->mm->mmap_sem);
+ do_brk(N_TXTADDR(ex), ex.a_text+ex.a_data);
+ up_write(&current->mm->mmap_sem);
+@@ -376,9 +393,10 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
+
+ down_write(&current->mm->mmap_sem);
+ error = do_mmap(bprm->file, N_TXTADDR(ex), ex.a_text,
+- PROT_READ | PROT_EXEC,
+- MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE | MAP_EXECUTABLE | MAP_32BIT,
+- fd_offset);
++ PROT_READ | PROT_EXEC,
++ MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE |
++ MAP_EXECUTABLE | MAP_32BIT,
++ fd_offset);
+ up_write(&current->mm->mmap_sem);
+
+ if (error != N_TXTADDR(ex)) {
+@@ -387,9 +405,10 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
+ }
+
+ down_write(&current->mm->mmap_sem);
+- error = do_mmap(bprm->file, N_DATADDR(ex), ex.a_data,
++ error = do_mmap(bprm->file, N_DATADDR(ex), ex.a_data,
+ PROT_READ | PROT_WRITE | PROT_EXEC,
+- MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE | MAP_EXECUTABLE | MAP_32BIT,
++ MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE |
++ MAP_EXECUTABLE | MAP_32BIT,
+ fd_offset + ex.a_text);
+ up_write(&current->mm->mmap_sem);
+ if (error != N_DATADDR(ex)) {
+@@ -403,9 +422,9 @@ beyond_if:
+ set_brk(current->mm->start_brk, current->mm->brk);
+
+ retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT);
+- if (retval < 0) {
+- /* Someone check-me: is this error path enough? */
+- send_sig(SIGKILL, current, 0);
++ if (retval < 0) {
++ /* Someone check-me: is this error path enough? */
++ send_sig(SIGKILL, current, 0);
+ return retval;
}
- if (!retval)
- cpu_set(cpu, cache_dev_map);
-+ kobject_uevent(cache_kobject[cpu], KOBJ_ADD);
- return retval;
- }
+@@ -414,10 +433,10 @@ beyond_if:
+ /* start thread */
+ asm volatile("movl %0,%%fs" :: "r" (0)); \
+ asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS));
+- load_gs_index(0);
+- (regs)->rip = ex.a_entry;
+- (regs)->rsp = current->mm->start_stack;
+- (regs)->eflags = 0x200;
++ load_gs_index(0);
++ (regs)->ip = ex.a_entry;
++ (regs)->sp = current->mm->start_stack;
++ (regs)->flags = 0x200;
+ (regs)->cs = __USER32_CS;
+ (regs)->ss = __USER32_DS;
+ regs->r8 = regs->r9 = regs->r10 = regs->r11 =
+@@ -425,7 +444,7 @@ beyond_if:
+ set_fs(USER_DS);
+ if (unlikely(current->ptrace & PT_PTRACED)) {
+ if (current->ptrace & PT_TRACE_EXEC)
+- ptrace_notify ((PTRACE_EVENT_EXEC << 8) | SIGTRAP);
++ ptrace_notify((PTRACE_EVENT_EXEC << 8) | SIGTRAP);
+ else
+ send_sig(SIGTRAP, current, 0);
+ }
+@@ -434,9 +453,8 @@ beyond_if:
-@@ -778,8 +776,8 @@ static void __cpuinit cache_remove_dev(struct sys_device * sys_dev)
- cpu_clear(cpu, cache_dev_map);
+ static int load_aout_library(struct file *file)
+ {
+- struct inode * inode;
+- unsigned long bss, start_addr, len;
+- unsigned long error;
++ struct inode *inode;
++ unsigned long bss, start_addr, len, error;
+ int retval;
+ struct exec ex;
- for (i = 0; i < num_cache_leaves; i++)
-- kobject_unregister(&(INDEX_KOBJECT_PTR(cpu,i)->kobj));
-- kobject_unregister(cache_kobject[cpu]);
-+ kobject_put(&(INDEX_KOBJECT_PTR(cpu,i)->kobj));
-+ kobject_put(cache_kobject[cpu]);
- cpuid4_cache_sysfs_exit(cpu);
- }
+@@ -450,7 +468,8 @@ static int load_aout_library(struct file *file)
+ /* We come in here for the regular a.out style of shared libraries */
+ if ((N_MAGIC(ex) != ZMAGIC && N_MAGIC(ex) != QMAGIC) || N_TRSIZE(ex) ||
+ N_DRSIZE(ex) || ((ex.a_entry & 0xfff) && N_MAGIC(ex) == ZMAGIC) ||
+- i_size_read(inode) < ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
++ i_size_read(inode) <
++ ex.a_text+ex.a_data+N_SYMSIZE(ex)+N_TXTOFF(ex)) {
+ goto out;
+ }
-diff --git a/arch/x86/kernel/cpu/mcheck/mce_64.c b/arch/x86/kernel/cpu/mcheck/mce_64.c
-index 4b21d29..242e866 100644
---- a/arch/x86/kernel/cpu/mcheck/mce_64.c
-+++ b/arch/x86/kernel/cpu/mcheck/mce_64.c
-@@ -745,7 +745,7 @@ static void mce_restart(void)
+@@ -467,10 +486,10 @@ static int load_aout_library(struct file *file)
- static struct sysdev_class mce_sysclass = {
- .resume = mce_resume,
-- set_kset_name("machinecheck"),
-+ .name = "machinecheck",
- };
+ #ifdef WARN_OLD
+ static unsigned long error_time;
+- if ((jiffies-error_time) > 5*HZ)
+- {
+- printk(KERN_WARNING
+- "N_TXTOFF is not page aligned. Please convert library: %s\n",
++ if (time_after(jiffies, error_time + 5*HZ)) {
++ printk(KERN_WARNING
++ "N_TXTOFF is not page aligned. Please convert "
++ "library: %s\n",
+ file->f_path.dentry->d_name.name);
+ error_time = jiffies;
+ }
+@@ -478,11 +497,12 @@ static int load_aout_library(struct file *file)
+ down_write(&current->mm->mmap_sem);
+ do_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss);
+ up_write(&current->mm->mmap_sem);
+-
++
+ file->f_op->read(file, (char __user *)start_addr,
+ ex.a_text + ex.a_data, &pos);
+ flush_icache_range((unsigned long) start_addr,
+- (unsigned long) start_addr + ex.a_text + ex.a_data);
++ (unsigned long) start_addr + ex.a_text +
++ ex.a_data);
- DEFINE_PER_CPU(struct sys_device, device_mce);
-diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
-index 752fb16..7535887 100644
---- a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
-+++ b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
-@@ -65,7 +65,7 @@ static struct threshold_block threshold_defaults = {
- };
+ retval = 0;
+ goto out;
+diff --git a/arch/x86/ia32/ia32_binfmt.c b/arch/x86/ia32/ia32_binfmt.c
+deleted file mode 100644
+index 55822d2..0000000
+--- a/arch/x86/ia32/ia32_binfmt.c
++++ /dev/null
+@@ -1,285 +0,0 @@
+-/*
+- * Written 2000,2002 by Andi Kleen.
+- *
+- * Loosely based on the sparc64 and IA64 32bit emulation loaders.
+- * This tricks binfmt_elf.c into loading 32bit binaries using lots
+- * of ugly preprocessor tricks. Talk about very very poor man's inheritance.
+- */
+-
+-#include <linux/types.h>
+-#include <linux/stddef.h>
+-#include <linux/rwsem.h>
+-#include <linux/sched.h>
+-#include <linux/compat.h>
+-#include <linux/string.h>
+-#include <linux/binfmts.h>
+-#include <linux/mm.h>
+-#include <linux/security.h>
+-#include <linux/elfcore-compat.h>
+-
+-#include <asm/segment.h>
+-#include <asm/ptrace.h>
+-#include <asm/processor.h>
+-#include <asm/user32.h>
+-#include <asm/sigcontext32.h>
+-#include <asm/fpu32.h>
+-#include <asm/i387.h>
+-#include <asm/uaccess.h>
+-#include <asm/ia32.h>
+-#include <asm/vsyscall32.h>
+-
+-#undef ELF_ARCH
+-#undef ELF_CLASS
+-#define ELF_CLASS ELFCLASS32
+-#define ELF_ARCH EM_386
+-
+-#undef elfhdr
+-#undef elf_phdr
+-#undef elf_note
+-#undef elf_addr_t
+-#define elfhdr elf32_hdr
+-#define elf_phdr elf32_phdr
+-#define elf_note elf32_note
+-#define elf_addr_t Elf32_Off
+-
+-#define ELF_NAME "elf/i386"
+-
+-#define AT_SYSINFO 32
+-#define AT_SYSINFO_EHDR 33
+-
+-int sysctl_vsyscall32 = 1;
+-
+-#undef ARCH_DLINFO
+-#define ARCH_DLINFO do { \
+- if (sysctl_vsyscall32) { \
+- current->mm->context.vdso = (void *)VSYSCALL32_BASE; \
+- NEW_AUX_ENT(AT_SYSINFO, (u32)(u64)VSYSCALL32_VSYSCALL); \
+- NEW_AUX_ENT(AT_SYSINFO_EHDR, VSYSCALL32_BASE); \
+- } \
+-} while(0)
+-
+-struct file;
+-
+-#define IA32_EMULATOR 1
+-
+-#undef ELF_ET_DYN_BASE
+-
+-#define ELF_ET_DYN_BASE (TASK_UNMAPPED_BASE + 0x1000000)
+-
+-#define jiffies_to_timeval(a,b) do { (b)->tv_usec = 0; (b)->tv_sec = (a)/HZ; }while(0)
+-
+-#define _GET_SEG(x) \
+- ({ __u32 seg; asm("movl %%" __stringify(x) ",%0" : "=r"(seg)); seg; })
+-
+-/* Assumes current==process to be dumped */
+-#undef ELF_CORE_COPY_REGS
+-#define ELF_CORE_COPY_REGS(pr_reg, regs) \
+- pr_reg[0] = regs->rbx; \
+- pr_reg[1] = regs->rcx; \
+- pr_reg[2] = regs->rdx; \
+- pr_reg[3] = regs->rsi; \
+- pr_reg[4] = regs->rdi; \
+- pr_reg[5] = regs->rbp; \
+- pr_reg[6] = regs->rax; \
+- pr_reg[7] = _GET_SEG(ds); \
+- pr_reg[8] = _GET_SEG(es); \
+- pr_reg[9] = _GET_SEG(fs); \
+- pr_reg[10] = _GET_SEG(gs); \
+- pr_reg[11] = regs->orig_rax; \
+- pr_reg[12] = regs->rip; \
+- pr_reg[13] = regs->cs; \
+- pr_reg[14] = regs->eflags; \
+- pr_reg[15] = regs->rsp; \
+- pr_reg[16] = regs->ss;
+-
+-
+-#define elf_prstatus compat_elf_prstatus
+-#define elf_prpsinfo compat_elf_prpsinfo
+-#define elf_fpregset_t struct user_i387_ia32_struct
+-#define elf_fpxregset_t struct user32_fxsr_struct
+-#define user user32
+-
+-#undef elf_read_implies_exec
+-#define elf_read_implies_exec(ex, executable_stack) (executable_stack != EXSTACK_DISABLE_X)
+-
+-#define elf_core_copy_regs elf32_core_copy_regs
+-static inline void elf32_core_copy_regs(compat_elf_gregset_t *elfregs,
+- struct pt_regs *regs)
+-{
+- ELF_CORE_COPY_REGS((&elfregs->ebx), regs)
+-}
+-
+-#define elf_core_copy_task_regs elf32_core_copy_task_regs
+-static inline int elf32_core_copy_task_regs(struct task_struct *t,
+- compat_elf_gregset_t* elfregs)
+-{
+- struct pt_regs *pp = task_pt_regs(t);
+- ELF_CORE_COPY_REGS((&elfregs->ebx), pp);
+- /* fix wrong segments */
+- elfregs->ds = t->thread.ds;
+- elfregs->fs = t->thread.fsindex;
+- elfregs->gs = t->thread.gsindex;
+- elfregs->es = t->thread.es;
+- return 1;
+-}
+-
+-#define elf_core_copy_task_fpregs elf32_core_copy_task_fpregs
+-static inline int
+-elf32_core_copy_task_fpregs(struct task_struct *tsk, struct pt_regs *regs,
+- elf_fpregset_t *fpu)
+-{
+- struct _fpstate_ia32 *fpstate = (void*)fpu;
+- mm_segment_t oldfs = get_fs();
+-
+- if (!tsk_used_math(tsk))
+- return 0;
+- if (!regs)
+- regs = task_pt_regs(tsk);
+- if (tsk == current)
+- unlazy_fpu(tsk);
+- set_fs(KERNEL_DS);
+- save_i387_ia32(tsk, fpstate, regs, 1);
+- /* Correct for i386 bug. It puts the fop into the upper 16bits of
+- the tag word (like FXSAVE), not into the fcs*/
+- fpstate->cssel |= fpstate->tag & 0xffff0000;
+- set_fs(oldfs);
+- return 1;
+-}
+-
+-#define ELF_CORE_COPY_XFPREGS 1
+-#define ELF_CORE_XFPREG_TYPE NT_PRXFPREG
+-#define elf_core_copy_task_xfpregs elf32_core_copy_task_xfpregs
+-static inline int
+-elf32_core_copy_task_xfpregs(struct task_struct *t, elf_fpxregset_t *xfpu)
+-{
+- struct pt_regs *regs = task_pt_regs(t);
+- if (!tsk_used_math(t))
+- return 0;
+- if (t == current)
+- unlazy_fpu(t);
+- memcpy(xfpu, &t->thread.i387.fxsave, sizeof(elf_fpxregset_t));
+- xfpu->fcs = regs->cs;
+- xfpu->fos = t->thread.ds; /* right? */
+- return 1;
+-}
+-
+-#undef elf_check_arch
+-#define elf_check_arch(x) \
+- ((x)->e_machine == EM_386)
+-
+-extern int force_personality32;
+-
+-#undef ELF_EXEC_PAGESIZE
+-#undef ELF_HWCAP
+-#undef ELF_PLATFORM
+-#undef SET_PERSONALITY
+-#define ELF_EXEC_PAGESIZE PAGE_SIZE
+-#define ELF_HWCAP (boot_cpu_data.x86_capability[0])
+-#define ELF_PLATFORM ("i686")
+-#define SET_PERSONALITY(ex, ibcs2) \
+-do { \
+- unsigned long new_flags = 0; \
+- if ((ex).e_ident[EI_CLASS] == ELFCLASS32) \
+- new_flags = _TIF_IA32; \
+- if ((current_thread_info()->flags & _TIF_IA32) \
+- != new_flags) \
+- set_thread_flag(TIF_ABI_PENDING); \
+- else \
+- clear_thread_flag(TIF_ABI_PENDING); \
+- /* XXX This overwrites the user set personality */ \
+- current->personality |= force_personality32; \
+-} while (0)
+-
+-/* Override some function names */
+-#define elf_format elf32_format
+-
+-#define init_elf_binfmt init_elf32_binfmt
+-#define exit_elf_binfmt exit_elf32_binfmt
+-
+-#define load_elf_binary load_elf32_binary
+-
+-#undef ELF_PLAT_INIT
+-#define ELF_PLAT_INIT(r, load_addr) elf32_init(r)
+-
+-#undef start_thread
+-#define start_thread(regs,new_rip,new_rsp) do { \
+- asm volatile("movl %0,%%fs" :: "r" (0)); \
+- asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS)); \
+- load_gs_index(0); \
+- (regs)->rip = (new_rip); \
+- (regs)->rsp = (new_rsp); \
+- (regs)->eflags = 0x200; \
+- (regs)->cs = __USER32_CS; \
+- (regs)->ss = __USER32_DS; \
+- set_fs(USER_DS); \
+-} while(0)
+-
+-
+-#include <linux/module.h>
+-
+-MODULE_DESCRIPTION("Binary format loader for compatibility with IA32 ELF binaries.");
+-MODULE_AUTHOR("Eric Youngdale, Andi Kleen");
+-
+-#undef MODULE_DESCRIPTION
+-#undef MODULE_AUTHOR
+-
+-static void elf32_init(struct pt_regs *);
+-
+-#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
+-#define arch_setup_additional_pages syscall32_setup_pages
+-extern int syscall32_setup_pages(struct linux_binprm *, int exstack);
+-
+-#include "../../../fs/binfmt_elf.c"
+-
+-static void elf32_init(struct pt_regs *regs)
+-{
+- struct task_struct *me = current;
+- regs->rdi = 0;
+- regs->rsi = 0;
+- regs->rdx = 0;
+- regs->rcx = 0;
+- regs->rax = 0;
+- regs->rbx = 0;
+- regs->rbp = 0;
+- regs->r8 = regs->r9 = regs->r10 = regs->r11 = regs->r12 =
+- regs->r13 = regs->r14 = regs->r15 = 0;
+- me->thread.fs = 0;
+- me->thread.gs = 0;
+- me->thread.fsindex = 0;
+- me->thread.gsindex = 0;
+- me->thread.ds = __USER_DS;
+- me->thread.es = __USER_DS;
+-}
+-
+-#ifdef CONFIG_SYSCTL
+-/* Register vsyscall32 into the ABI table */
+-#include <linux/sysctl.h>
+-
+-static ctl_table abi_table2[] = {
+- {
+- .procname = "vsyscall32",
+- .data = &sysctl_vsyscall32,
+- .maxlen = sizeof(int),
+- .mode = 0644,
+- .proc_handler = proc_dointvec
+- },
+- {}
+-};
+-
+-static ctl_table abi_root_table2[] = {
+- {
+- .ctl_name = CTL_ABI,
+- .procname = "abi",
+- .mode = 0555,
+- .child = abi_table2
+- },
+- {}
+-};
+-
+-static __init int ia32_binfmt_init(void)
+-{
+- register_sysctl_table(abi_root_table2);
+- return 0;
+-}
+-__initcall(ia32_binfmt_init);
+-#endif
+diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
+index 6ea19c2..1c0503b 100644
+--- a/arch/x86/ia32/ia32_signal.c
++++ b/arch/x86/ia32/ia32_signal.c
+@@ -29,9 +29,8 @@
+ #include <asm/ia32_unistd.h>
+ #include <asm/user32.h>
+ #include <asm/sigcontext32.h>
+-#include <asm/fpu32.h>
+ #include <asm/proto.h>
+-#include <asm/vsyscall32.h>
++#include <asm/vdso.h>
- struct threshold_bank {
-- struct kobject kobj;
-+ struct kobject *kobj;
- struct threshold_block *blocks;
- cpumask_t cpus;
- };
-@@ -432,10 +432,9 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu,
- else
- per_cpu(threshold_banks, cpu)[bank]->blocks = b;
+ #define DEBUG_SIG 0
-- kobject_set_name(&b->kobj, "misc%i", block);
-- b->kobj.parent = &per_cpu(threshold_banks, cpu)[bank]->kobj;
-- b->kobj.ktype = &threshold_ktype;
-- err = kobject_register(&b->kobj);
-+ err = kobject_init_and_add(&b->kobj, &threshold_ktype,
-+ per_cpu(threshold_banks, cpu)[bank]->kobj,
-+ "misc%i", block);
- if (err)
- goto out_free;
- recurse:
-@@ -451,11 +450,13 @@ recurse:
- if (err)
- goto out_free;
+@@ -43,7 +42,8 @@ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ {
+ int err;
+- if (!access_ok (VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
++
++ if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
+ return -EFAULT;
-+ kobject_uevent(&b->kobj, KOBJ_ADD);
+ /* If you change siginfo_t structure, please make sure that
+@@ -53,16 +53,19 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ 3 ints plus the relevant union member. */
+ err = __put_user(from->si_signo, &to->si_signo);
+ err |= __put_user(from->si_errno, &to->si_errno);
+- err |= __put_user((short)from->si_code, &to->si_code);
++ err |= __put_user((short)from->si_code, &to->si_code);
+
+ if (from->si_code < 0) {
+ err |= __put_user(from->si_pid, &to->si_pid);
+- err |= __put_user(from->si_uid, &to->si_uid);
+- err |= __put_user(ptr_to_compat(from->si_ptr), &to->si_ptr);
++ err |= __put_user(from->si_uid, &to->si_uid);
++ err |= __put_user(ptr_to_compat(from->si_ptr), &to->si_ptr);
+ } else {
+- /* First 32bits of unions are always present:
+- * si_pid === si_band === si_tid === si_addr(LS half) */
+- err |= __put_user(from->_sifields._pad[0], &to->_sifields._pad[0]);
++ /*
++ * First 32bits of unions are always present:
++ * si_pid === si_band === si_tid === si_addr(LS half)
++ */
++ err |= __put_user(from->_sifields._pad[0],
++ &to->_sifields._pad[0]);
+ switch (from->si_code >> 16) {
+ case __SI_FAULT >> 16:
+ break;
+@@ -76,14 +79,15 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ err |= __put_user(from->si_uid, &to->si_uid);
+ break;
+ case __SI_POLL >> 16:
+- err |= __put_user(from->si_fd, &to->si_fd);
++ err |= __put_user(from->si_fd, &to->si_fd);
+ break;
+ case __SI_TIMER >> 16:
+- err |= __put_user(from->si_overrun, &to->si_overrun);
++ err |= __put_user(from->si_overrun, &to->si_overrun);
+ err |= __put_user(ptr_to_compat(from->si_ptr),
+- &to->si_ptr);
++ &to->si_ptr);
+ break;
+- case __SI_RT >> 16: /* This is not generated by the kernel as of now. */
++ /* This is not generated by the kernel as of now. */
++ case __SI_RT >> 16:
+ case __SI_MESGQ >> 16:
+ err |= __put_user(from->si_uid, &to->si_uid);
+ err |= __put_user(from->si_int, &to->si_int);
+@@ -97,7 +101,8 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
+ {
+ int err;
+ u32 ptr32;
+- if (!access_ok (VERIFY_READ, from, sizeof(compat_siginfo_t)))
+
++ if (!access_ok(VERIFY_READ, from, sizeof(compat_siginfo_t)))
+ return -EFAULT;
+
+ err = __get_user(to->si_signo, &from->si_signo);
+@@ -112,8 +117,7 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
return err;
+ }
- out_free:
- if (b) {
-- kobject_unregister(&b->kobj);
-+ kobject_put(&b->kobj);
- kfree(b);
+-asmlinkage long
+-sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
++asmlinkage long sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
+ {
+ mask &= _BLOCKABLE;
+ spin_lock_irq(&current->sighand->siglock);
+@@ -128,36 +132,37 @@ sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
+ return -ERESTARTNOHAND;
+ }
+
+-asmlinkage long
+-sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
+- stack_ia32_t __user *uoss_ptr,
+- struct pt_regs *regs)
++asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
++ stack_ia32_t __user *uoss_ptr,
++ struct pt_regs *regs)
+ {
+- stack_t uss,uoss;
++ stack_t uss, uoss;
+ int ret;
+- mm_segment_t seg;
+- if (uss_ptr) {
++ mm_segment_t seg;
++
++ if (uss_ptr) {
+ u32 ptr;
+- memset(&uss,0,sizeof(stack_t));
+- if (!access_ok(VERIFY_READ,uss_ptr,sizeof(stack_ia32_t)) ||
++
++ memset(&uss, 0, sizeof(stack_t));
++ if (!access_ok(VERIFY_READ, uss_ptr, sizeof(stack_ia32_t)) ||
+ __get_user(ptr, &uss_ptr->ss_sp) ||
+ __get_user(uss.ss_flags, &uss_ptr->ss_flags) ||
+ __get_user(uss.ss_size, &uss_ptr->ss_size))
+ return -EFAULT;
+ uss.ss_sp = compat_ptr(ptr);
}
- return err;
-@@ -489,7 +490,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
- goto out;
+- seg = get_fs();
+- set_fs(KERNEL_DS);
+- ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss, regs->rsp);
+- set_fs(seg);
++ seg = get_fs();
++ set_fs(KERNEL_DS);
++ ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss, regs->sp);
++ set_fs(seg);
+ if (ret >= 0 && uoss_ptr) {
+- if (!access_ok(VERIFY_WRITE,uoss_ptr,sizeof(stack_ia32_t)) ||
++ if (!access_ok(VERIFY_WRITE, uoss_ptr, sizeof(stack_ia32_t)) ||
+ __put_user(ptr_to_compat(uoss.ss_sp), &uoss_ptr->ss_sp) ||
+ __put_user(uoss.ss_flags, &uoss_ptr->ss_flags) ||
+ __put_user(uoss.ss_size, &uoss_ptr->ss_size))
+ ret = -EFAULT;
+- }
+- return ret;
++ }
++ return ret;
+ }
- err = sysfs_create_link(&per_cpu(device_mce, cpu).kobj,
-- &b->kobj, name);
-+ b->kobj, name);
- if (err)
- goto out;
+ /*
+@@ -186,87 +191,85 @@ struct rt_sigframe
+ char retcode[8];
+ };
-@@ -505,16 +506,15 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
- goto out;
+-static int
+-ia32_restore_sigcontext(struct pt_regs *regs, struct sigcontext_ia32 __user *sc, unsigned int *peax)
++#define COPY(x) { \
++ unsigned int reg; \
++ err |= __get_user(reg, &sc->x); \
++ regs->x = reg; \
++}
++
++#define RELOAD_SEG(seg,mask) \
++ { unsigned int cur; \
++ unsigned short pre; \
++ err |= __get_user(pre, &sc->seg); \
++ asm volatile("movl %%" #seg ",%0" : "=r" (cur)); \
++ pre |= mask; \
++ if (pre != cur) loadsegment(seg, pre); }
++
++static int ia32_restore_sigcontext(struct pt_regs *regs,
++ struct sigcontext_ia32 __user *sc,
++ unsigned int *peax)
+ {
+- unsigned int err = 0;
+-
++ unsigned int tmpflags, gs, oldgs, err = 0;
++ struct _fpstate_ia32 __user *buf;
++ u32 tmp;
++
+ /* Always make any pending restarted system calls return -EINTR */
+ current_thread_info()->restart_block.fn = do_no_restart_syscall;
+
+ #if DEBUG_SIG
+- printk("SIG restore_sigcontext: sc=%p err(%x) eip(%x) cs(%x) flg(%x)\n",
+- sc, sc->err, sc->eip, sc->cs, sc->eflags);
++ printk(KERN_DEBUG "SIG restore_sigcontext: "
++ "sc=%p err(%x) eip(%x) cs(%x) flg(%x)\n",
++ sc, sc->err, sc->ip, sc->cs, sc->flags);
+ #endif
+-#define COPY(x) { \
+- unsigned int reg; \
+- err |= __get_user(reg, &sc->e ##x); \
+- regs->r ## x = reg; \
+-}
+
+-#define RELOAD_SEG(seg,mask) \
+- { unsigned int cur; \
+- unsigned short pre; \
+- err |= __get_user(pre, &sc->seg); \
+- asm volatile("movl %%" #seg ",%0" : "=r" (cur)); \
+- pre |= mask; \
+- if (pre != cur) loadsegment(seg,pre); }
+-
+- /* Reload fs and gs if they have changed in the signal handler.
+- This does not handle long fs/gs base changes in the handler, but
+- does not clobber them at least in the normal case. */
+-
+- {
+- unsigned gs, oldgs;
+- err |= __get_user(gs, &sc->gs);
+- gs |= 3;
+- asm("movl %%gs,%0" : "=r" (oldgs));
+- if (gs != oldgs)
+- load_gs_index(gs);
+- }
+- RELOAD_SEG(fs,3);
+- RELOAD_SEG(ds,3);
+- RELOAD_SEG(es,3);
++ /*
++ * Reload fs and gs if they have changed in the signal
++ * handler. This does not handle long fs/gs base changes in
++ * the handler, but does not clobber them at least in the
++ * normal case.
++ */
++ err |= __get_user(gs, &sc->gs);
++ gs |= 3;
++ asm("movl %%gs,%0" : "=r" (oldgs));
++ if (gs != oldgs)
++ load_gs_index(gs);
++
++ RELOAD_SEG(fs, 3);
++ RELOAD_SEG(ds, 3);
++ RELOAD_SEG(es, 3);
+
+ COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
+ COPY(dx); COPY(cx); COPY(ip);
+- /* Don't touch extended registers */
+-
+- err |= __get_user(regs->cs, &sc->cs);
+- regs->cs |= 3;
+- err |= __get_user(regs->ss, &sc->ss);
+- regs->ss |= 3;
+-
+- {
+- unsigned int tmpflags;
+- err |= __get_user(tmpflags, &sc->eflags);
+- regs->eflags = (regs->eflags & ~0x40DD5) | (tmpflags & 0x40DD5);
+- regs->orig_rax = -1; /* disable syscall checks */
+- }
++ /* Don't touch extended registers */
++
++ err |= __get_user(regs->cs, &sc->cs);
++ regs->cs |= 3;
++ err |= __get_user(regs->ss, &sc->ss);
++ regs->ss |= 3;
++
++ err |= __get_user(tmpflags, &sc->flags);
++ regs->flags = (regs->flags & ~0x40DD5) | (tmpflags & 0x40DD5);
++ /* disable syscall checks */
++ regs->orig_ax = -1;
++
++ err |= __get_user(tmp, &sc->fpstate);
++ buf = compat_ptr(tmp);
++ if (buf) {
++ if (!access_ok(VERIFY_READ, buf, sizeof(*buf)))
++ goto badframe;
++ err |= restore_i387_ia32(buf);
++ } else {
++ struct task_struct *me = current;
+
+- {
+- u32 tmp;
+- struct _fpstate_ia32 __user * buf;
+- err |= __get_user(tmp, &sc->fpstate);
+- buf = compat_ptr(tmp);
+- if (buf) {
+- if (!access_ok(VERIFY_READ, buf, sizeof(*buf)))
+- goto badframe;
+- err |= restore_i387_ia32(current, buf, 0);
+- } else {
+- struct task_struct *me = current;
+- if (used_math()) {
+- clear_fpu(me);
+- clear_used_math();
+- }
++ if (used_math()) {
++ clear_fpu(me);
++ clear_used_math();
+ }
}
-- kobject_set_name(&b->kobj, "threshold_bank%i", bank);
-- b->kobj.parent = &per_cpu(device_mce, cpu).kobj;
-+ b->kobj = kobject_create_and_add(name, &per_cpu(device_mce, cpu).kobj);
-+ if (!b->kobj)
-+ goto out_free;
+- {
+- u32 tmp;
+- err |= __get_user(tmp, &sc->eax);
+- *peax = tmp;
+- }
++ err |= __get_user(tmp, &sc->ax);
++ *peax = tmp;
+
- #ifndef CONFIG_SMP
- b->cpus = CPU_MASK_ALL;
- #else
- b->cpus = per_cpu(cpu_core_map, cpu);
- #endif
-- err = kobject_register(&b->kobj);
-- if (err)
-- goto out_free;
+ return err;
- per_cpu(threshold_banks, cpu)[bank] = b;
+ badframe:
+@@ -275,15 +278,16 @@ badframe:
-@@ -531,7 +531,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
- continue;
+ asmlinkage long sys32_sigreturn(struct pt_regs *regs)
+ {
+- struct sigframe __user *frame = (struct sigframe __user *)(regs->rsp-8);
++ struct sigframe __user *frame = (struct sigframe __user *)(regs->sp-8);
+ sigset_t set;
+- unsigned int eax;
++ unsigned int ax;
+
+ if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_COMPAT_NSIG_WORDS > 1
+- && __copy_from_user((((char *) &set.sig) + 4), &frame->extramask,
++ && __copy_from_user((((char *) &set.sig) + 4),
++ &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+@@ -292,24 +296,24 @@ asmlinkage long sys32_sigreturn(struct pt_regs *regs)
+ current->blocked = set;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+-
+- if (ia32_restore_sigcontext(regs, &frame->sc, &eax))
++
++ if (ia32_restore_sigcontext(regs, &frame->sc, &ax))
+ goto badframe;
+- return eax;
++ return ax;
- err = sysfs_create_link(&per_cpu(device_mce, i).kobj,
-- &b->kobj, name);
-+ b->kobj, name);
- if (err)
- goto out;
+ badframe:
+ signal_fault(regs, frame, "32bit sigreturn");
+ return 0;
+-}
++}
-@@ -581,7 +581,7 @@ static void deallocate_threshold_block(unsigned int cpu,
- return;
+ asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
+ {
+ struct rt_sigframe __user *frame;
+ sigset_t set;
+- unsigned int eax;
++ unsigned int ax;
+ struct pt_regs tregs;
+
+- frame = (struct rt_sigframe __user *)(regs->rsp - 4);
++ frame = (struct rt_sigframe __user *)(regs->sp - 4);
+
+ if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+@@ -321,28 +325,28 @@ asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
+ current->blocked = set;
+ recalc_sigpending();
+ 	spin_unlock_irq(&current->sighand->siglock);
+-
+- if (ia32_restore_sigcontext(regs, &frame->uc.uc_mcontext, &eax))
++
++ if (ia32_restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax))
+ goto badframe;
- list_for_each_entry_safe(pos, tmp, &head->blocks->miscj, miscj) {
-- kobject_unregister(&pos->kobj);
-+ kobject_put(&pos->kobj);
- list_del(&pos->miscj);
- kfree(pos);
- }
-@@ -627,7 +627,7 @@ static void threshold_remove_bank(unsigned int cpu, int bank)
- deallocate_threshold_block(cpu, bank);
+ tregs = *regs;
+ if (sys32_sigaltstack(&frame->uc.uc_stack, NULL, &tregs) == -EFAULT)
+ goto badframe;
- free_out:
-- kobject_unregister(&b->kobj);
-+ kobject_put(b->kobj);
- kfree(b);
- per_cpu(threshold_banks, cpu)[bank] = NULL;
- }
-diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
-index 3b20613..beb45c9 100644
---- a/arch/x86/kernel/cpu/mtrr/main.c
-+++ b/arch/x86/kernel/cpu/mtrr/main.c
-@@ -349,7 +349,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
- replace = -1;
+- return eax;
++ return ax;
- /* No CPU hotplug when we change MTRR entries */
-- lock_cpu_hotplug();
-+ get_online_cpus();
- /* Search for existing MTRR */
- mutex_lock(&mtrr_mutex);
- for (i = 0; i < num_var_ranges; ++i) {
-@@ -405,7 +405,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
- error = i;
- out:
- mutex_unlock(&mtrr_mutex);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
- return error;
- }
+ badframe:
+- signal_fault(regs,frame,"32bit rt sigreturn");
++ signal_fault(regs, frame, "32bit rt sigreturn");
+ return 0;
+-}
++}
-@@ -495,7 +495,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
+ /*
+ * Set up a signal frame.
+ */
- max = num_var_ranges;
- /* No CPU hotplug when we change MTRR entries */
-- lock_cpu_hotplug();
-+ get_online_cpus();
- mutex_lock(&mtrr_mutex);
- if (reg < 0) {
- /* Search for existing MTRR */
-@@ -536,7 +536,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
- error = reg;
- out:
- mutex_unlock(&mtrr_mutex);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
- return error;
- }
- /**
-diff --git a/arch/x86/kernel/cpuid.c b/arch/x86/kernel/cpuid.c
-index 05c9936..d387c77 100644
---- a/arch/x86/kernel/cpuid.c
-+++ b/arch/x86/kernel/cpuid.c
-@@ -157,15 +157,15 @@ static int __cpuinit cpuid_class_cpu_callback(struct notifier_block *nfb,
+-static int
+-ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __user *fpstate,
+- struct pt_regs *regs, unsigned int mask)
++static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
++ struct _fpstate_ia32 __user *fpstate,
++ struct pt_regs *regs, unsigned int mask)
+ {
+ int tmp, err = 0;
+
+@@ -356,26 +360,26 @@ ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __
+ __asm__("movl %%es,%0" : "=r"(tmp): "0"(tmp));
+ err |= __put_user(tmp, (unsigned int __user *)&sc->es);
+
+- err |= __put_user((u32)regs->rdi, &sc->edi);
+- err |= __put_user((u32)regs->rsi, &sc->esi);
+- err |= __put_user((u32)regs->rbp, &sc->ebp);
+- err |= __put_user((u32)regs->rsp, &sc->esp);
+- err |= __put_user((u32)regs->rbx, &sc->ebx);
+- err |= __put_user((u32)regs->rdx, &sc->edx);
+- err |= __put_user((u32)regs->rcx, &sc->ecx);
+- err |= __put_user((u32)regs->rax, &sc->eax);
++ err |= __put_user((u32)regs->di, &sc->di);
++ err |= __put_user((u32)regs->si, &sc->si);
++ err |= __put_user((u32)regs->bp, &sc->bp);
++ err |= __put_user((u32)regs->sp, &sc->sp);
++ err |= __put_user((u32)regs->bx, &sc->bx);
++ err |= __put_user((u32)regs->dx, &sc->dx);
++ err |= __put_user((u32)regs->cx, &sc->cx);
++ err |= __put_user((u32)regs->ax, &sc->ax);
+ err |= __put_user((u32)regs->cs, &sc->cs);
+ err |= __put_user((u32)regs->ss, &sc->ss);
+ err |= __put_user(current->thread.trap_no, &sc->trapno);
+ err |= __put_user(current->thread.error_code, &sc->err);
+- err |= __put_user((u32)regs->rip, &sc->eip);
+- err |= __put_user((u32)regs->eflags, &sc->eflags);
+- err |= __put_user((u32)regs->rsp, &sc->esp_at_signal);
++ err |= __put_user((u32)regs->ip, &sc->ip);
++ err |= __put_user((u32)regs->flags, &sc->flags);
++ err |= __put_user((u32)regs->sp, &sc->sp_at_signal);
+
+- tmp = save_i387_ia32(current, fpstate, regs, 0);
++ tmp = save_i387_ia32(fpstate);
+ if (tmp < 0)
+ err = -EFAULT;
+- else {
++ else {
+ clear_used_math();
+ stts();
+ err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL),
+@@ -392,40 +396,53 @@ ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, struct _fpstate_ia32 __
+ /*
+ * Determine which stack to use..
+ */
+-static void __user *
+-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
++static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
++ size_t frame_size)
+ {
+- unsigned long rsp;
++ unsigned long sp;
- switch (action) {
- case CPU_UP_PREPARE:
-- case CPU_UP_PREPARE_FROZEN:
- err = cpuid_device_create(cpu);
- break;
- case CPU_UP_CANCELED:
-- case CPU_UP_CANCELED_FROZEN:
- case CPU_DEAD:
-- case CPU_DEAD_FROZEN:
- cpuid_device_destroy(cpu);
- break;
-+ case CPU_UP_CANCELED_FROZEN:
-+ destroy_suspended_device(cpuid_class, MKDEV(CPUID_MAJOR, cpu));
-+ break;
+ /* Default to using normal stack */
+- rsp = regs->rsp;
++ sp = regs->sp;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+- if (sas_ss_flags(rsp) == 0)
+- rsp = current->sas_ss_sp + current->sas_ss_size;
++ if (sas_ss_flags(sp) == 0)
++ sp = current->sas_ss_sp + current->sas_ss_size;
}
- return err ? NOTIFY_BAD : NOTIFY_OK;
- }
-diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
-index 3a058bb..e70f388 100644
---- a/arch/x86/kernel/entry_64.S
-+++ b/arch/x86/kernel/entry_64.S
-@@ -283,7 +283,7 @@ sysret_careful:
- sysret_signal:
- TRACE_IRQS_ON
- sti
-- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
-+ testl $_TIF_DO_NOTIFY_MASK,%edx
- jz 1f
- /* Really a signal */
-@@ -377,7 +377,7 @@ int_very_careful:
- jmp int_restore_rest
-
- int_signal:
-- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
-+ testl $_TIF_DO_NOTIFY_MASK,%edx
- jz 1f
- movq %rsp,%rdi # &ptregs -> arg1
- xorl %esi,%esi # oldset -> arg2
-@@ -603,7 +603,7 @@ retint_careful:
- jmp retint_check
-
- retint_signal:
-- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
-+ testl $_TIF_DO_NOTIFY_MASK,%edx
- jz retint_swapgs
+ /* This is the legacy signal stack switching. */
+ else if ((regs->ss & 0xffff) != __USER_DS &&
+ !(ka->sa.sa_flags & SA_RESTORER) &&
+- ka->sa.sa_restorer) {
+- rsp = (unsigned long) ka->sa.sa_restorer;
+- }
++ ka->sa.sa_restorer)
++ sp = (unsigned long) ka->sa.sa_restorer;
+
+- rsp -= frame_size;
++ sp -= frame_size;
+ /* Align the stack pointer according to the i386 ABI,
+ * i.e. so that on function entry ((sp + 4) & 15) == 0. */
+- rsp = ((rsp + 4) & -16ul) - 4;
+- return (void __user *) rsp;
++ sp = ((sp + 4) & -16ul) - 4;
++ return (void __user *) sp;
+ }
+
+ int ia32_setup_frame(int sig, struct k_sigaction *ka,
+- compat_sigset_t *set, struct pt_regs * regs)
++ compat_sigset_t *set, struct pt_regs *regs)
+ {
+ struct sigframe __user *frame;
++ void __user *restorer;
+ int err = 0;
+
++ /* copy_to_user optimizes that into a single 8 byte store */
++ static const struct {
++ u16 poplmovl;
++ u32 val;
++ u16 int80;
++ u16 pad;
++ } __attribute__((packed)) code = {
++ 0xb858, /* popl %eax ; movl $...,%eax */
++ __NR_ia32_sigreturn,
++ 0x80cd, /* int $0x80 */
++ 0,
++ };
++
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+@@ -443,64 +460,53 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
+ if (_COMPAT_NSIG_WORDS > 1) {
+ err |= __copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask));
++ if (err)
++ goto give_sigsegv;
+ }
+- if (err)
+- goto give_sigsegv;
+
+- /* Return stub is in 32bit vsyscall page */
+- {
+- void __user *restorer;
++ if (ka->sa.sa_flags & SA_RESTORER) {
++ restorer = ka->sa.sa_restorer;
++ } else {
++ /* Return stub is in 32bit vsyscall page */
+ if (current->binfmt->hasvdso)
+- restorer = VSYSCALL32_SIGRETURN;
++ restorer = VDSO32_SYMBOL(current->mm->context.vdso,
++ sigreturn);
+ else
+- restorer = (void *)&frame->retcode;
+- if (ka->sa.sa_flags & SA_RESTORER)
+- restorer = ka->sa.sa_restorer;
+- err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
+- }
+- /* These are actually not used anymore, but left because some
+- gdb versions depend on them as a marker. */
+- {
+- /* copy_to_user optimizes that into a single 8 byte store */
+- static const struct {
+- u16 poplmovl;
+- u32 val;
+- u16 int80;
+- u16 pad;
+- } __attribute__((packed)) code = {
+- 0xb858, /* popl %eax ; movl $...,%eax */
+- __NR_ia32_sigreturn,
+- 0x80cd, /* int $0x80 */
+- 0,
+- };
+- err |= __copy_to_user(frame->retcode, &code, 8);
++ restorer = &frame->retcode;
+ }
++ err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
++
++ /*
++ * These are actually not used anymore, but left because some
++ * gdb versions depend on them as a marker.
++ */
++ err |= __copy_to_user(frame->retcode, &code, 8);
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+- regs->rsp = (unsigned long) frame;
+- regs->rip = (unsigned long) ka->sa.sa_handler;
++ regs->sp = (unsigned long) frame;
++ regs->ip = (unsigned long) ka->sa.sa_handler;
+
+ /* Make -mregparm=3 work */
+- regs->rax = sig;
+- regs->rdx = 0;
+- regs->rcx = 0;
++ regs->ax = sig;
++ regs->dx = 0;
++ regs->cx = 0;
+
+- asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
+- asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
++ asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
++ asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+
+- regs->cs = __USER32_CS;
+- regs->ss = __USER32_DS;
++ regs->cs = __USER32_CS;
++ regs->ss = __USER32_DS;
+
+ set_fs(USER_DS);
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~X86_EFLAGS_TF;
+ if (test_thread_flag(TIF_SINGLESTEP))
+ ptrace_notify(SIGTRAP);
+
+ #if DEBUG_SIG
+- printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
+- current->comm, current->pid, frame, regs->rip, frame->pretcode);
++ printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
++ current->comm, current->pid, frame, regs->ip, frame->pretcode);
+ #endif
+
+ return 0;
+@@ -511,25 +517,34 @@ give_sigsegv:
+ }
+
+ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+- compat_sigset_t *set, struct pt_regs * regs)
++ compat_sigset_t *set, struct pt_regs *regs)
+ {
+ struct rt_sigframe __user *frame;
++ struct exec_domain *ed = current_thread_info()->exec_domain;
++ void __user *restorer;
+ int err = 0;
+
++ /* __copy_to_user optimizes that into a single 8 byte store */
++ static const struct {
++ u8 movl;
++ u32 val;
++ u16 int80;
++ u16 pad;
++ u8 pad2;
++ } __attribute__((packed)) code = {
++ 0xb8,
++ __NR_ia32_rt_sigreturn,
++ 0x80cd,
++ 0,
++ };
++
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+- {
+- struct exec_domain *ed = current_thread_info()->exec_domain;
+- err |= __put_user((ed
+- && ed->signal_invmap
+- && sig < 32
+- ? ed->signal_invmap[sig]
+- : sig),
+- &frame->sig);
+- }
++ err |= __put_user((ed && ed->signal_invmap && sig < 32
++ ? ed->signal_invmap[sig] : sig), &frame->sig);
+ err |= __put_user(ptr_to_compat(&frame->info), &frame->pinfo);
+ err |= __put_user(ptr_to_compat(&frame->uc), &frame->puc);
+ err |= copy_siginfo_to_user32(&frame->info, info);
+@@ -540,73 +555,58 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+- err |= __put_user(sas_ss_flags(regs->rsp),
++ err |= __put_user(sas_ss_flags(regs->sp),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= ia32_setup_sigcontext(&frame->uc.uc_mcontext, &frame->fpstate,
+- regs, set->sig[0]);
++ regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+ if (err)
+ goto give_sigsegv;
+
+-
+- {
+- void __user *restorer = VSYSCALL32_RTSIGRETURN;
+- if (ka->sa.sa_flags & SA_RESTORER)
+- restorer = ka->sa.sa_restorer;
+- err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
+- }
+-
+- /* This is movl $,%eax ; int $0x80 */
+- /* Not actually used anymore, but left because some gdb versions
+- need it. */
+- {
+- /* __copy_to_user optimizes that into a single 8 byte store */
+- static const struct {
+- u8 movl;
+- u32 val;
+- u16 int80;
+- u16 pad;
+- u8 pad2;
+- } __attribute__((packed)) code = {
+- 0xb8,
+- __NR_ia32_rt_sigreturn,
+- 0x80cd,
+- 0,
+- };
+- err |= __copy_to_user(frame->retcode, &code, 8);
+- }
++ if (ka->sa.sa_flags & SA_RESTORER)
++ restorer = ka->sa.sa_restorer;
++ else
++ restorer = VDSO32_SYMBOL(current->mm->context.vdso,
++ rt_sigreturn);
++ err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
++
++ /*
++ * Not actually used anymore, but left because some gdb
++ * versions need it.
++ */
++ err |= __copy_to_user(frame->retcode, &code, 8);
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+- regs->rsp = (unsigned long) frame;
+- regs->rip = (unsigned long) ka->sa.sa_handler;
++ regs->sp = (unsigned long) frame;
++ regs->ip = (unsigned long) ka->sa.sa_handler;
+
+ /* Make -mregparm=3 work */
+- regs->rax = sig;
+- regs->rdx = (unsigned long) &frame->info;
+- regs->rcx = (unsigned long) &frame->uc;
++ regs->ax = sig;
++ regs->dx = (unsigned long) &frame->info;
++ regs->cx = (unsigned long) &frame->uc;
+
+ /* Make -mregparm=3 work */
+- regs->rax = sig;
+- regs->rdx = (unsigned long) &frame->info;
+- regs->rcx = (unsigned long) &frame->uc;
++ regs->ax = sig;
++ regs->dx = (unsigned long) &frame->info;
++ regs->cx = (unsigned long) &frame->uc;
++
++ asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
++ asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+
+- asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
+- asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+-
+- regs->cs = __USER32_CS;
+- regs->ss = __USER32_DS;
++ regs->cs = __USER32_CS;
++ regs->ss = __USER32_DS;
+
+ set_fs(USER_DS);
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~X86_EFLAGS_TF;
+ if (test_thread_flag(TIF_SINGLESTEP))
+ ptrace_notify(SIGTRAP);
+
+ #if DEBUG_SIG
+- printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
+- current->comm, current->pid, frame, regs->rip, frame->pretcode);
++ printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
++ current->comm, current->pid, frame, regs->ip, frame->pretcode);
+ #endif
+
+ return 0;
+diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
+index df588f0..0db0a62 100644
+--- a/arch/x86/ia32/ia32entry.S
++++ b/arch/x86/ia32/ia32entry.S
+@@ -12,7 +12,6 @@
+ #include <asm/ia32_unistd.h>
+ #include <asm/thread_info.h>
+ #include <asm/segment.h>
+-#include <asm/vsyscall32.h>
+ #include <asm/irqflags.h>
+ #include <linux/linkage.h>
+
+@@ -104,7 +103,7 @@ ENTRY(ia32_sysenter_target)
+ pushfq
+ CFI_ADJUST_CFA_OFFSET 8
+ /*CFI_REL_OFFSET rflags,0*/
+- movl $VSYSCALL32_SYSEXIT, %r10d
++ movl 8*3-THREAD_SIZE+threadinfo_sysenter_return(%rsp), %r10d
+ CFI_REGISTER rip,r10
+ pushq $__USER32_CS
+ CFI_ADJUST_CFA_OFFSET 8
+@@ -142,6 +141,8 @@ sysenter_do_call:
+ andl $~TS_COMPAT,threadinfo_status(%r10)
+ /* clear IF, that popfq doesn't enable interrupts early */
+ andl $~0x200,EFLAGS-R11(%rsp)
++ movl RIP-R11(%rsp),%edx /* User %eip */
++ CFI_REGISTER rip,rdx
+ RESTORE_ARGS 1,24,1,1,1,1
+ popfq
+ CFI_ADJUST_CFA_OFFSET -8
+@@ -149,8 +150,6 @@ sysenter_do_call:
+ popq %rcx /* User %esp */
+ CFI_ADJUST_CFA_OFFSET -8
+ CFI_REGISTER rsp,rcx
+- movl $VSYSCALL32_SYSEXIT,%edx /* User %eip */
+- CFI_REGISTER rip,rdx
TRACE_IRQS_ON
- sti
-diff --git a/arch/x86/kernel/i8237.c b/arch/x86/kernel/i8237.c
-index 2931383..dbd6c1d 100644
---- a/arch/x86/kernel/i8237.c
-+++ b/arch/x86/kernel/i8237.c
-@@ -51,7 +51,7 @@ static int i8237A_suspend(struct sys_device *dev, pm_message_t state)
+ swapgs
+ sti /* sti only takes effect after the next instruction */
+@@ -644,8 +643,8 @@ ia32_sys_call_table:
+ .quad compat_sys_futex /* 240 */
+ .quad compat_sys_sched_setaffinity
+ .quad compat_sys_sched_getaffinity
+- .quad sys32_set_thread_area
+- .quad sys32_get_thread_area
++ .quad sys_set_thread_area
++ .quad sys_get_thread_area
+ .quad compat_sys_io_setup /* 245 */
+ .quad sys_io_destroy
+ .quad compat_sys_io_getevents
+diff --git a/arch/x86/ia32/ipc32.c b/arch/x86/ia32/ipc32.c
+index 7b3342e..d21991c 100644
+--- a/arch/x86/ia32/ipc32.c
++++ b/arch/x86/ia32/ipc32.c
+@@ -9,9 +9,8 @@
+ #include <linux/ipc.h>
+ #include <linux/compat.h>
+
+-asmlinkage long
+-sys32_ipc(u32 call, int first, int second, int third,
+- compat_uptr_t ptr, u32 fifth)
++asmlinkage long sys32_ipc(u32 call, int first, int second, int third,
++ compat_uptr_t ptr, u32 fifth)
+ {
+ int version;
+
+@@ -19,36 +18,35 @@ sys32_ipc(u32 call, int first, int second, int third,
+ call &= 0xffff;
+
+ switch (call) {
+- case SEMOP:
++ case SEMOP:
+ /* struct sembuf is the same on 32 and 64bit :)) */
+ return sys_semtimedop(first, compat_ptr(ptr), second, NULL);
+- case SEMTIMEDOP:
++ case SEMTIMEDOP:
+ return compat_sys_semtimedop(first, compat_ptr(ptr), second,
+ compat_ptr(fifth));
+- case SEMGET:
++ case SEMGET:
+ return sys_semget(first, second, third);
+- case SEMCTL:
++ case SEMCTL:
+ return compat_sys_semctl(first, second, third, compat_ptr(ptr));
+
+- case MSGSND:
++ case MSGSND:
+ return compat_sys_msgsnd(first, second, third, compat_ptr(ptr));
+- case MSGRCV:
++ case MSGRCV:
+ return compat_sys_msgrcv(first, second, fifth, third,
+ version, compat_ptr(ptr));
+- case MSGGET:
++ case MSGGET:
+ return sys_msgget((key_t) first, second);
+- case MSGCTL:
++ case MSGCTL:
+ return compat_sys_msgctl(first, second, compat_ptr(ptr));
+
+- case SHMAT:
++ case SHMAT:
+ return compat_sys_shmat(first, second, third, version,
+ compat_ptr(ptr));
+- break;
+- case SHMDT:
++ case SHMDT:
+ return sys_shmdt(compat_ptr(ptr));
+- case SHMGET:
++ case SHMGET:
+ return sys_shmget(first, (unsigned)second, third);
+- case SHMCTL:
++ case SHMCTL:
+ return compat_sys_shmctl(first, second, compat_ptr(ptr));
+ }
+ return -ENOSYS;
+diff --git a/arch/x86/ia32/mmap32.c b/arch/x86/ia32/mmap32.c
+deleted file mode 100644
+index e4b84b4..0000000
+--- a/arch/x86/ia32/mmap32.c
++++ /dev/null
+@@ -1,79 +0,0 @@
+-/*
+- * linux/arch/x86_64/ia32/mm/mmap.c
+- *
+- * flexible mmap layout support
+- *
+- * Based on the i386 version which was
+- *
+- * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+- * All Rights Reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- *
+- *
+- * Started by Ingo Molnar <mingo at elte.hu>
+- */
+-
+-#include <linux/personality.h>
+-#include <linux/mm.h>
+-#include <linux/random.h>
+-#include <linux/sched.h>
+-
+-/*
+- * Top of mmap area (just below the process stack).
+- *
+- * Leave an at least ~128 MB hole.
+- */
+-#define MIN_GAP (128*1024*1024)
+-#define MAX_GAP (TASK_SIZE/6*5)
+-
+-static inline unsigned long mmap_base(struct mm_struct *mm)
+-{
+- unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+- unsigned long random_factor = 0;
+-
+- if (current->flags & PF_RANDOMIZE)
+- random_factor = get_random_int() % (1024*1024);
+-
+- if (gap < MIN_GAP)
+- gap = MIN_GAP;
+- else if (gap > MAX_GAP)
+- gap = MAX_GAP;
+-
+- return PAGE_ALIGN(TASK_SIZE - gap - random_factor);
+-}
+-
+-/*
+- * This function, called very early during the creation of a new
+- * process VM image, sets up which VM layout function to use:
+- */
+-void ia32_pick_mmap_layout(struct mm_struct *mm)
+-{
+- /*
+- * Fall back to the standard layout if the personality
+- * bit is set, or if the expected stack growth is unlimited:
+- */
+- if (sysctl_legacy_va_layout ||
+- (current->personality & ADDR_COMPAT_LAYOUT) ||
+- current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY) {
+- mm->mmap_base = TASK_UNMAPPED_BASE;
+- mm->get_unmapped_area = arch_get_unmapped_area;
+- mm->unmap_area = arch_unmap_area;
+- } else {
+- mm->mmap_base = mmap_base(mm);
+- mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+- mm->unmap_area = arch_unmap_area_topdown;
+- }
+-}
+diff --git a/arch/x86/ia32/ptrace32.c b/arch/x86/ia32/ptrace32.c
+deleted file mode 100644
+index 4a233ad..0000000
+--- a/arch/x86/ia32/ptrace32.c
++++ /dev/null
+@@ -1,404 +0,0 @@
+-/*
+- * 32bit ptrace for x86-64.
+- *
+- * Copyright 2001,2002 Andi Kleen, SuSE Labs.
+- * Some parts copied from arch/i386/kernel/ptrace.c. See that file for earlier
+- * copyright.
+- *
+- * This allows to access 64bit processes too; but there is no way to see the extended
+- * register contents.
+- */
+-
+-#include <linux/kernel.h>
+-#include <linux/stddef.h>
+-#include <linux/sched.h>
+-#include <linux/syscalls.h>
+-#include <linux/unistd.h>
+-#include <linux/mm.h>
+-#include <linux/err.h>
+-#include <linux/ptrace.h>
+-#include <asm/ptrace.h>
+-#include <asm/compat.h>
+-#include <asm/uaccess.h>
+-#include <asm/user32.h>
+-#include <asm/user.h>
+-#include <asm/errno.h>
+-#include <asm/debugreg.h>
+-#include <asm/i387.h>
+-#include <asm/fpu32.h>
+-#include <asm/ia32.h>
+-
+-/*
+- * Determines which flags the user has access to [1 = access, 0 = no access].
+- * Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
+- * Also masks reserved bits (31-22, 15, 5, 3, 1).
+- */
+-#define FLAG_MASK 0x54dd5UL
+-
+-#define R32(l,q) \
+- case offsetof(struct user32, regs.l): stack[offsetof(struct pt_regs, q)/8] = val; break
+-
+-static int putreg32(struct task_struct *child, unsigned regno, u32 val)
+-{
+- int i;
+- __u64 *stack = (__u64 *)task_pt_regs(child);
+-
+- switch (regno) {
+- case offsetof(struct user32, regs.fs):
+- if (val && (val & 3) != 3) return -EIO;
+- child->thread.fsindex = val & 0xffff;
+- break;
+- case offsetof(struct user32, regs.gs):
+- if (val && (val & 3) != 3) return -EIO;
+- child->thread.gsindex = val & 0xffff;
+- break;
+- case offsetof(struct user32, regs.ds):
+- if (val && (val & 3) != 3) return -EIO;
+- child->thread.ds = val & 0xffff;
+- break;
+- case offsetof(struct user32, regs.es):
+- child->thread.es = val & 0xffff;
+- break;
+- case offsetof(struct user32, regs.ss):
+- if ((val & 3) != 3) return -EIO;
+- stack[offsetof(struct pt_regs, ss)/8] = val & 0xffff;
+- break;
+- case offsetof(struct user32, regs.cs):
+- if ((val & 3) != 3) return -EIO;
+- stack[offsetof(struct pt_regs, cs)/8] = val & 0xffff;
+- break;
+-
+- R32(ebx, rbx);
+- R32(ecx, rcx);
+- R32(edx, rdx);
+- R32(edi, rdi);
+- R32(esi, rsi);
+- R32(ebp, rbp);
+- R32(eax, rax);
+- R32(orig_eax, orig_rax);
+- R32(eip, rip);
+- R32(esp, rsp);
+-
+- case offsetof(struct user32, regs.eflags): {
+- __u64 *flags = &stack[offsetof(struct pt_regs, eflags)/8];
+- val &= FLAG_MASK;
+- *flags = val | (*flags & ~FLAG_MASK);
+- break;
+- }
+-
+- case offsetof(struct user32, u_debugreg[4]):
+- case offsetof(struct user32, u_debugreg[5]):
+- return -EIO;
+-
+- case offsetof(struct user32, u_debugreg[0]):
+- child->thread.debugreg0 = val;
+- break;
+-
+- case offsetof(struct user32, u_debugreg[1]):
+- child->thread.debugreg1 = val;
+- break;
+-
+- case offsetof(struct user32, u_debugreg[2]):
+- child->thread.debugreg2 = val;
+- break;
+-
+- case offsetof(struct user32, u_debugreg[3]):
+- child->thread.debugreg3 = val;
+- break;
+-
+- case offsetof(struct user32, u_debugreg[6]):
+- child->thread.debugreg6 = val;
+- break;
+-
+- case offsetof(struct user32, u_debugreg[7]):
+- val &= ~DR_CONTROL_RESERVED;
+- /* See arch/i386/kernel/ptrace.c for an explanation of
+- * this awkward check.*/
+- for(i=0; i<4; i++)
+- if ((0x5454 >> ((val >> (16 + 4*i)) & 0xf)) & 1)
+- return -EIO;
+- child->thread.debugreg7 = val;
+- if (val)
+- set_tsk_thread_flag(child, TIF_DEBUG);
+- else
+- clear_tsk_thread_flag(child, TIF_DEBUG);
+- break;
+-
+- default:
+- if (regno > sizeof(struct user32) || (regno & 3))
+- return -EIO;
+-
+- /* Other dummy fields in the virtual user structure are ignored */
+- break;
+- }
+- return 0;
+-}
+-
+-#undef R32
+-
+-#define R32(l,q) \
+- case offsetof(struct user32, regs.l): *val = stack[offsetof(struct pt_regs, q)/8]; break
+-
+-static int getreg32(struct task_struct *child, unsigned regno, u32 *val)
+-{
+- __u64 *stack = (__u64 *)task_pt_regs(child);
+-
+- switch (regno) {
+- case offsetof(struct user32, regs.fs):
+- *val = child->thread.fsindex;
+- break;
+- case offsetof(struct user32, regs.gs):
+- *val = child->thread.gsindex;
+- break;
+- case offsetof(struct user32, regs.ds):
+- *val = child->thread.ds;
+- break;
+- case offsetof(struct user32, regs.es):
+- *val = child->thread.es;
+- break;
+-
+- R32(cs, cs);
+- R32(ss, ss);
+- R32(ebx, rbx);
+- R32(ecx, rcx);
+- R32(edx, rdx);
+- R32(edi, rdi);
+- R32(esi, rsi);
+- R32(ebp, rbp);
+- R32(eax, rax);
+- R32(orig_eax, orig_rax);
+- R32(eip, rip);
+- R32(eflags, eflags);
+- R32(esp, rsp);
+-
+- case offsetof(struct user32, u_debugreg[0]):
+- *val = child->thread.debugreg0;
+- break;
+- case offsetof(struct user32, u_debugreg[1]):
+- *val = child->thread.debugreg1;
+- break;
+- case offsetof(struct user32, u_debugreg[2]):
+- *val = child->thread.debugreg2;
+- break;
+- case offsetof(struct user32, u_debugreg[3]):
+- *val = child->thread.debugreg3;
+- break;
+- case offsetof(struct user32, u_debugreg[6]):
+- *val = child->thread.debugreg6;
+- break;
+- case offsetof(struct user32, u_debugreg[7]):
+- *val = child->thread.debugreg7;
+- break;
+-
+- default:
+- if (regno > sizeof(struct user32) || (regno & 3))
+- return -EIO;
+-
+- /* Other dummy fields in the virtual user structure are ignored */
+- *val = 0;
+- break;
+- }
+- return 0;
+-}
+-
+-#undef R32
+-
+-static long ptrace32_siginfo(unsigned request, u32 pid, u32 addr, u32 data)
+-{
+- int ret;
+- compat_siginfo_t __user *si32 = compat_ptr(data);
+- siginfo_t ssi;
+- siginfo_t __user *si = compat_alloc_user_space(sizeof(siginfo_t));
+- if (request == PTRACE_SETSIGINFO) {
+- memset(&ssi, 0, sizeof(siginfo_t));
+- ret = copy_siginfo_from_user32(&ssi, si32);
+- if (ret)
+- return ret;
+- if (copy_to_user(si, &ssi, sizeof(siginfo_t)))
+- return -EFAULT;
+- }
+- ret = sys_ptrace(request, pid, addr, (unsigned long)si);
+- if (ret)
+- return ret;
+- if (request == PTRACE_GETSIGINFO) {
+- if (copy_from_user(&ssi, si, sizeof(siginfo_t)))
+- return -EFAULT;
+- ret = copy_siginfo_to_user32(si32, &ssi);
+- }
+- return ret;
+-}
+-
+-asmlinkage long sys32_ptrace(long request, u32 pid, u32 addr, u32 data)
+-{
+- struct task_struct *child;
+- struct pt_regs *childregs;
+- void __user *datap = compat_ptr(data);
+- int ret;
+- __u32 val;
+-
+- switch (request) {
+- case PTRACE_TRACEME:
+- case PTRACE_ATTACH:
+- case PTRACE_KILL:
+- case PTRACE_CONT:
+- case PTRACE_SINGLESTEP:
+- case PTRACE_DETACH:
+- case PTRACE_SYSCALL:
+- case PTRACE_OLDSETOPTIONS:
+- case PTRACE_SETOPTIONS:
+- case PTRACE_SET_THREAD_AREA:
+- case PTRACE_GET_THREAD_AREA:
+- return sys_ptrace(request, pid, addr, data);
+-
+- default:
+- return -EINVAL;
+-
+- case PTRACE_PEEKTEXT:
+- case PTRACE_PEEKDATA:
+- case PTRACE_POKEDATA:
+- case PTRACE_POKETEXT:
+- case PTRACE_POKEUSR:
+- case PTRACE_PEEKUSR:
+- case PTRACE_GETREGS:
+- case PTRACE_SETREGS:
+- case PTRACE_SETFPREGS:
+- case PTRACE_GETFPREGS:
+- case PTRACE_SETFPXREGS:
+- case PTRACE_GETFPXREGS:
+- case PTRACE_GETEVENTMSG:
+- break;
+-
+- case PTRACE_SETSIGINFO:
+- case PTRACE_GETSIGINFO:
+- return ptrace32_siginfo(request, pid, addr, data);
+- }
+-
+- child = ptrace_get_task_struct(pid);
+- if (IS_ERR(child))
+- return PTR_ERR(child);
+-
+- ret = ptrace_check_attach(child, request == PTRACE_KILL);
+- if (ret < 0)
+- goto out;
+-
+- childregs = task_pt_regs(child);
+-
+- switch (request) {
+- case PTRACE_PEEKDATA:
+- case PTRACE_PEEKTEXT:
+- ret = 0;
+- if (access_process_vm(child, addr, &val, sizeof(u32), 0)!=sizeof(u32))
+- ret = -EIO;
+- else
+- ret = put_user(val, (unsigned int __user *)datap);
+- break;
+-
+- case PTRACE_POKEDATA:
+- case PTRACE_POKETEXT:
+- ret = 0;
+- if (access_process_vm(child, addr, &data, sizeof(u32), 1)!=sizeof(u32))
+- ret = -EIO;
+- break;
+-
+- case PTRACE_PEEKUSR:
+- ret = getreg32(child, addr, &val);
+- if (ret == 0)
+- ret = put_user(val, (__u32 __user *)datap);
+- break;
+-
+- case PTRACE_POKEUSR:
+- ret = putreg32(child, addr, data);
+- break;
+-
+- case PTRACE_GETREGS: { /* Get all gp regs from the child. */
+- int i;
+- if (!access_ok(VERIFY_WRITE, datap, 16*4)) {
+- ret = -EIO;
+- break;
+- }
+- ret = 0;
+- for ( i = 0; i <= 16*4 ; i += sizeof(__u32) ) {
+- getreg32(child, i, &val);
+- ret |= __put_user(val,(u32 __user *)datap);
+- datap += sizeof(u32);
+- }
+- break;
+- }
+-
+- case PTRACE_SETREGS: { /* Set all gp regs in the child. */
+- unsigned long tmp;
+- int i;
+- if (!access_ok(VERIFY_READ, datap, 16*4)) {
+- ret = -EIO;
+- break;
+- }
+- ret = 0;
+- for ( i = 0; i <= 16*4; i += sizeof(u32) ) {
+- ret |= __get_user(tmp, (u32 __user *)datap);
+- putreg32(child, i, tmp);
+- datap += sizeof(u32);
+- }
+- break;
+- }
+-
+- case PTRACE_GETFPREGS:
+- ret = -EIO;
+- if (!access_ok(VERIFY_READ, compat_ptr(data),
+- sizeof(struct user_i387_struct)))
+- break;
+- save_i387_ia32(child, datap, childregs, 1);
+- ret = 0;
+- break;
+-
+- case PTRACE_SETFPREGS:
+- ret = -EIO;
+- if (!access_ok(VERIFY_WRITE, datap,
+- sizeof(struct user_i387_struct)))
+- break;
+- ret = 0;
+- /* don't check EFAULT to be bug-to-bug compatible to i386 */
+- restore_i387_ia32(child, datap, 1);
+- break;
+-
+- case PTRACE_GETFPXREGS: {
+- struct user32_fxsr_struct __user *u = datap;
+- init_fpu(child);
+- ret = -EIO;
+- if (!access_ok(VERIFY_WRITE, u, sizeof(*u)))
+- break;
+- ret = -EFAULT;
+- if (__copy_to_user(u, &child->thread.i387.fxsave, sizeof(*u)))
+- break;
+- ret = __put_user(childregs->cs, &u->fcs);
+- ret |= __put_user(child->thread.ds, &u->fos);
+- break;
+- }
+- case PTRACE_SETFPXREGS: {
+- struct user32_fxsr_struct __user *u = datap;
+- unlazy_fpu(child);
+- ret = -EIO;
+- if (!access_ok(VERIFY_READ, u, sizeof(*u)))
+- break;
+- /* no checking to be bug-to-bug compatible with i386. */
+- /* but silence warning */
+- if (__copy_from_user(&child->thread.i387.fxsave, u, sizeof(*u)))
+- ;
+- set_stopped_child_used_math(child);
+- child->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
+- ret = 0;
+- break;
+- }
+-
+- case PTRACE_GETEVENTMSG:
+- ret = put_user(child->ptrace_message,(unsigned int __user *)compat_ptr(data));
+- break;
+-
+- default:
+- BUG();
+- }
+-
+- out:
+- put_task_struct(child);
+- return ret;
+-}
+-
+diff --git a/arch/x86/ia32/sys_ia32.c b/arch/x86/ia32/sys_ia32.c
+index bee96d6..abf71d2 100644
+--- a/arch/x86/ia32/sys_ia32.c
++++ b/arch/x86/ia32/sys_ia32.c
+@@ -1,29 +1,29 @@
+ /*
+ * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
+- * sys_sparc32
++ * sys_sparc32
+ *
+ * Copyright (C) 2000 VA Linux Co
+ * Copyright (C) 2000 Don Dugger <n0ano at valinux.com>
+- * Copyright (C) 1999 Arun Sharma <arun.sharma at intel.com>
+- * Copyright (C) 1997,1998 Jakub Jelinek (jj at sunsite.mff.cuni.cz)
+- * Copyright (C) 1997 David S. Miller (davem at caip.rutgers.edu)
++ * Copyright (C) 1999 Arun Sharma <arun.sharma at intel.com>
++ * Copyright (C) 1997,1998 Jakub Jelinek (jj at sunsite.mff.cuni.cz)
++ * Copyright (C) 1997 David S. Miller (davem at caip.rutgers.edu)
+ * Copyright (C) 2000 Hewlett-Packard Co.
+ * Copyright (C) 2000 David Mosberger-Tang <davidm at hpl.hp.com>
+- * Copyright (C) 2000,2001,2002 Andi Kleen, SuSE Labs (x86-64 port)
++ * Copyright (C) 2000,2001,2002 Andi Kleen, SuSE Labs (x86-64 port)
+ *
+ * These routines maintain argument size conversion between 32bit and 64bit
+- * environment. In 2.5 most of this should be moved to a generic directory.
++ * environment. In 2.5 most of this should be moved to a generic directory.
+ *
+ * This file assumes that there is a hole at the end of user address space.
+- *
+- * Some of the functions are LE specific currently. These are hopefully all marked.
+- * This should be fixed.
++ *
++ * Some of the functions are LE specific currently. These are
++ * hopefully all marked. This should be fixed.
+ */
+
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+-#include <linux/fs.h>
+-#include <linux/file.h>
++#include <linux/fs.h>
++#include <linux/file.h>
+ #include <linux/signal.h>
+ #include <linux/syscalls.h>
+ #include <linux/resource.h>
+@@ -90,43 +90,44 @@ int cp_compat_stat(struct kstat *kbuf, struct compat_stat __user *ubuf)
+ if (sizeof(ino) < sizeof(kbuf->ino) && ino != kbuf->ino)
+ return -EOVERFLOW;
+ if (!access_ok(VERIFY_WRITE, ubuf, sizeof(struct compat_stat)) ||
+- __put_user (old_encode_dev(kbuf->dev), &ubuf->st_dev) ||
+- __put_user (ino, &ubuf->st_ino) ||
+- __put_user (kbuf->mode, &ubuf->st_mode) ||
+- __put_user (kbuf->nlink, &ubuf->st_nlink) ||
+- __put_user (uid, &ubuf->st_uid) ||
+- __put_user (gid, &ubuf->st_gid) ||
+- __put_user (old_encode_dev(kbuf->rdev), &ubuf->st_rdev) ||
+- __put_user (kbuf->size, &ubuf->st_size) ||
+- __put_user (kbuf->atime.tv_sec, &ubuf->st_atime) ||
+- __put_user (kbuf->atime.tv_nsec, &ubuf->st_atime_nsec) ||
+- __put_user (kbuf->mtime.tv_sec, &ubuf->st_mtime) ||
+- __put_user (kbuf->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
+- __put_user (kbuf->ctime.tv_sec, &ubuf->st_ctime) ||
+- __put_user (kbuf->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
+- __put_user (kbuf->blksize, &ubuf->st_blksize) ||
+- __put_user (kbuf->blocks, &ubuf->st_blocks))
++ __put_user(old_encode_dev(kbuf->dev), &ubuf->st_dev) ||
++ __put_user(ino, &ubuf->st_ino) ||
++ __put_user(kbuf->mode, &ubuf->st_mode) ||
++ __put_user(kbuf->nlink, &ubuf->st_nlink) ||
++ __put_user(uid, &ubuf->st_uid) ||
++ __put_user(gid, &ubuf->st_gid) ||
++ __put_user(old_encode_dev(kbuf->rdev), &ubuf->st_rdev) ||
++ __put_user(kbuf->size, &ubuf->st_size) ||
++ __put_user(kbuf->atime.tv_sec, &ubuf->st_atime) ||
++ __put_user(kbuf->atime.tv_nsec, &ubuf->st_atime_nsec) ||
++ __put_user(kbuf->mtime.tv_sec, &ubuf->st_mtime) ||
++ __put_user(kbuf->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
++ __put_user(kbuf->ctime.tv_sec, &ubuf->st_ctime) ||
++ __put_user(kbuf->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
++ __put_user(kbuf->blksize, &ubuf->st_blksize) ||
++ __put_user(kbuf->blocks, &ubuf->st_blocks))
+ return -EFAULT;
+ return 0;
}
- static struct sysdev_class i8237_sysdev_class = {
-- set_kset_name("i8237"),
-+ .name = "i8237",
- .suspend = i8237A_suspend,
- .resume = i8237A_resume,
- };
-diff --git a/arch/x86/kernel/i8259_32.c b/arch/x86/kernel/i8259_32.c
-index f634fc7..5f3496d 100644
---- a/arch/x86/kernel/i8259_32.c
-+++ b/arch/x86/kernel/i8259_32.c
-@@ -258,7 +258,7 @@ static int i8259A_shutdown(struct sys_device *dev)
+-asmlinkage long
+-sys32_truncate64(char __user * filename, unsigned long offset_low, unsigned long offset_high)
++asmlinkage long sys32_truncate64(char __user *filename,
++ unsigned long offset_low,
++ unsigned long offset_high)
+ {
+ return sys_truncate(filename, ((loff_t) offset_high << 32) | offset_low);
}
- static struct sysdev_class i8259_sysdev_class = {
-- set_kset_name("i8259"),
-+ .name = "i8259",
- .suspend = i8259A_suspend,
- .resume = i8259A_resume,
- .shutdown = i8259A_shutdown,
-diff --git a/arch/x86/kernel/i8259_64.c b/arch/x86/kernel/i8259_64.c
-index 3f27ea0..ba6d572 100644
---- a/arch/x86/kernel/i8259_64.c
-+++ b/arch/x86/kernel/i8259_64.c
-@@ -370,7 +370,7 @@ static int i8259A_shutdown(struct sys_device *dev)
+-asmlinkage long
+-sys32_ftruncate64(unsigned int fd, unsigned long offset_low, unsigned long offset_high)
++asmlinkage long sys32_ftruncate64(unsigned int fd, unsigned long offset_low,
++ unsigned long offset_high)
+ {
+ return sys_ftruncate(fd, ((loff_t) offset_high << 32) | offset_low);
}
- static struct sysdev_class i8259_sysdev_class = {
-- set_kset_name("i8259"),
-+ .name = "i8259",
- .suspend = i8259A_suspend,
- .resume = i8259A_resume,
- .shutdown = i8259A_shutdown,
-diff --git a/arch/x86/kernel/io_apic_32.c b/arch/x86/kernel/io_apic_32.c
-index a6b1490..ab77f19 100644
---- a/arch/x86/kernel/io_apic_32.c
-+++ b/arch/x86/kernel/io_apic_32.c
-@@ -2401,7 +2401,7 @@ static int ioapic_resume(struct sys_device *dev)
+-/* Another set for IA32/LFS -- x86_64 struct stat is different due to
+- support for 64bit inode numbers. */
+-
+-static int
+-cp_stat64(struct stat64 __user *ubuf, struct kstat *stat)
++/*
++ * Another set for IA32/LFS -- x86_64 struct stat is different due to
++ * support for 64bit inode numbers.
++ */
++static int cp_stat64(struct stat64 __user *ubuf, struct kstat *stat)
+ {
+ typeof(ubuf->st_uid) uid = 0;
+ typeof(ubuf->st_gid) gid = 0;
+@@ -134,38 +135,39 @@ cp_stat64(struct stat64 __user *ubuf, struct kstat *stat)
+ SET_GID(gid, stat->gid);
+ if (!access_ok(VERIFY_WRITE, ubuf, sizeof(struct stat64)) ||
+ __put_user(huge_encode_dev(stat->dev), &ubuf->st_dev) ||
+- __put_user (stat->ino, &ubuf->__st_ino) ||
+- __put_user (stat->ino, &ubuf->st_ino) ||
+- __put_user (stat->mode, &ubuf->st_mode) ||
+- __put_user (stat->nlink, &ubuf->st_nlink) ||
+- __put_user (uid, &ubuf->st_uid) ||
+- __put_user (gid, &ubuf->st_gid) ||
+- __put_user (huge_encode_dev(stat->rdev), &ubuf->st_rdev) ||
+- __put_user (stat->size, &ubuf->st_size) ||
+- __put_user (stat->atime.tv_sec, &ubuf->st_atime) ||
+- __put_user (stat->atime.tv_nsec, &ubuf->st_atime_nsec) ||
+- __put_user (stat->mtime.tv_sec, &ubuf->st_mtime) ||
+- __put_user (stat->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
+- __put_user (stat->ctime.tv_sec, &ubuf->st_ctime) ||
+- __put_user (stat->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
+- __put_user (stat->blksize, &ubuf->st_blksize) ||
+- __put_user (stat->blocks, &ubuf->st_blocks))
++ __put_user(stat->ino, &ubuf->__st_ino) ||
++ __put_user(stat->ino, &ubuf->st_ino) ||
++ __put_user(stat->mode, &ubuf->st_mode) ||
++ __put_user(stat->nlink, &ubuf->st_nlink) ||
++ __put_user(uid, &ubuf->st_uid) ||
++ __put_user(gid, &ubuf->st_gid) ||
++ __put_user(huge_encode_dev(stat->rdev), &ubuf->st_rdev) ||
++ __put_user(stat->size, &ubuf->st_size) ||
++ __put_user(stat->atime.tv_sec, &ubuf->st_atime) ||
++ __put_user(stat->atime.tv_nsec, &ubuf->st_atime_nsec) ||
++ __put_user(stat->mtime.tv_sec, &ubuf->st_mtime) ||
++ __put_user(stat->mtime.tv_nsec, &ubuf->st_mtime_nsec) ||
++ __put_user(stat->ctime.tv_sec, &ubuf->st_ctime) ||
++ __put_user(stat->ctime.tv_nsec, &ubuf->st_ctime_nsec) ||
++ __put_user(stat->blksize, &ubuf->st_blksize) ||
++ __put_user(stat->blocks, &ubuf->st_blocks))
+ return -EFAULT;
+ return 0;
}
- static struct sysdev_class ioapic_sysdev_class = {
-- set_kset_name("ioapic"),
-+ .name = "ioapic",
- .suspend = ioapic_suspend,
- .resume = ioapic_resume,
- };
-diff --git a/arch/x86/kernel/io_apic_64.c b/arch/x86/kernel/io_apic_64.c
-index cbac167..23a3ac0 100644
---- a/arch/x86/kernel/io_apic_64.c
-+++ b/arch/x86/kernel/io_apic_64.c
-@@ -1850,7 +1850,7 @@ static int ioapic_resume(struct sys_device *dev)
+-asmlinkage long
+-sys32_stat64(char __user * filename, struct stat64 __user *statbuf)
++asmlinkage long sys32_stat64(char __user *filename,
++ struct stat64 __user *statbuf)
+ {
+ struct kstat stat;
+ int ret = vfs_stat(filename, &stat);
++
+ if (!ret)
+ ret = cp_stat64(statbuf, &stat);
+ return ret;
}
- static struct sysdev_class ioapic_sysdev_class = {
-- set_kset_name("ioapic"),
-+ .name = "ioapic",
- .suspend = ioapic_suspend,
- .resume = ioapic_resume,
+-asmlinkage long
+-sys32_lstat64(char __user * filename, struct stat64 __user *statbuf)
++asmlinkage long sys32_lstat64(char __user *filename,
++ struct stat64 __user *statbuf)
+ {
+ struct kstat stat;
+ int ret = vfs_lstat(filename, &stat);
+@@ -174,8 +176,7 @@ sys32_lstat64(char __user * filename, struct stat64 __user *statbuf)
+ return ret;
+ }
+
+-asmlinkage long
+-sys32_fstat64(unsigned int fd, struct stat64 __user *statbuf)
++asmlinkage long sys32_fstat64(unsigned int fd, struct stat64 __user *statbuf)
+ {
+ struct kstat stat;
+ int ret = vfs_fstat(fd, &stat);
+@@ -184,9 +185,8 @@ sys32_fstat64(unsigned int fd, struct stat64 __user *statbuf)
+ return ret;
+ }
+
+-asmlinkage long
+-sys32_fstatat(unsigned int dfd, char __user *filename,
+- struct stat64 __user* statbuf, int flag)
++asmlinkage long sys32_fstatat(unsigned int dfd, char __user *filename,
++ struct stat64 __user *statbuf, int flag)
+ {
+ struct kstat stat;
+ int error = -EINVAL;
+@@ -221,8 +221,7 @@ struct mmap_arg_struct {
+ unsigned int offset;
};
-diff --git a/arch/x86/kernel/microcode.c b/arch/x86/kernel/microcode.c
-index 09c3152..40cfd54 100644
---- a/arch/x86/kernel/microcode.c
-+++ b/arch/x86/kernel/microcode.c
-@@ -436,7 +436,7 @@ static ssize_t microcode_write (struct file *file, const char __user *buf, size_
- return -EINVAL;
+
+-asmlinkage long
+-sys32_mmap(struct mmap_arg_struct __user *arg)
++asmlinkage long sys32_mmap(struct mmap_arg_struct __user *arg)
+ {
+ struct mmap_arg_struct a;
+ struct file *file = NULL;
+@@ -233,33 +232,33 @@ sys32_mmap(struct mmap_arg_struct __user *arg)
+ return -EFAULT;
+
+ if (a.offset & ~PAGE_MASK)
+- return -EINVAL;
++ return -EINVAL;
+
+ if (!(a.flags & MAP_ANONYMOUS)) {
+ file = fget(a.fd);
+ if (!file)
+ return -EBADF;
}
+-
+- mm = current->mm;
+- down_write(&mm->mmap_sem);
+- retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset>>PAGE_SHIFT);
++
++ mm = current->mm;
++ down_write(&mm->mmap_sem);
++ retval = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags,
++ a.offset>>PAGE_SHIFT);
+ if (file)
+ fput(file);
-- lock_cpu_hotplug();
-+ get_online_cpus();
- mutex_lock(µcode_mutex);
+- up_write(&mm->mmap_sem);
++ up_write(&mm->mmap_sem);
- user_buffer = (void __user *) buf;
-@@ -447,7 +447,7 @@ static ssize_t microcode_write (struct file *file, const char __user *buf, size_
- ret = (ssize_t)len;
+ return retval;
+ }
- mutex_unlock(µcode_mutex);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
+-asmlinkage long
+-sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
++asmlinkage long sys32_mprotect(unsigned long start, size_t len,
++ unsigned long prot)
+ {
+- return sys_mprotect(start,len,prot);
++ return sys_mprotect(start, len, prot);
+ }
+
+-asmlinkage long
+-sys32_pipe(int __user *fd)
++asmlinkage long sys32_pipe(int __user *fd)
+ {
+ int retval;
+ int fds[2];
+@@ -269,13 +268,13 @@ sys32_pipe(int __user *fd)
+ goto out;
+ if (copy_to_user(fd, fds, sizeof(fds)))
+ retval = -EFAULT;
+- out:
++out:
+ return retval;
+ }
+
+-asmlinkage long
+-sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
+- struct sigaction32 __user *oact, unsigned int sigsetsize)
++asmlinkage long sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
++ struct sigaction32 __user *oact,
++ unsigned int sigsetsize)
+ {
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+@@ -291,12 +290,17 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
+ if (!access_ok(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags) ||
+- __get_user(restorer, &act->sa_restorer)||
+- __copy_from_user(&set32, &act->sa_mask, sizeof(compat_sigset_t)))
++ __get_user(restorer, &act->sa_restorer) ||
++ __copy_from_user(&set32, &act->sa_mask,
++ sizeof(compat_sigset_t)))
+ return -EFAULT;
+ new_ka.sa.sa_handler = compat_ptr(handler);
+ new_ka.sa.sa_restorer = compat_ptr(restorer);
+- /* FIXME: here we rely on _COMPAT_NSIG_WORS to be >= than _NSIG_WORDS << 1 */
++
++ /*
++ * FIXME: here we rely on _COMPAT_NSIG_WORS to be >=
++ * than _NSIG_WORDS << 1
++ */
+ switch (_NSIG_WORDS) {
+ case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
+ | (((long)set32.sig[7]) << 32);
+@@ -312,7 +316,10 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+- /* FIXME: here we rely on _COMPAT_NSIG_WORS to be >= than _NSIG_WORDS << 1 */
++ /*
++ * FIXME: here we rely on _COMPAT_NSIG_WORS to be >=
++ * than _NSIG_WORDS << 1
++ */
+ switch (_NSIG_WORDS) {
+ case 4:
+ set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
+@@ -328,23 +335,26 @@ sys32_rt_sigaction(int sig, struct sigaction32 __user *act,
+ set32.sig[0] = old_ka.sa.sa_mask.sig[0];
+ }
+ if (!access_ok(VERIFY_WRITE, oact, sizeof(*oact)) ||
+- __put_user(ptr_to_compat(old_ka.sa.sa_handler), &oact->sa_handler) ||
+- __put_user(ptr_to_compat(old_ka.sa.sa_restorer), &oact->sa_restorer) ||
++ __put_user(ptr_to_compat(old_ka.sa.sa_handler),
++ &oact->sa_handler) ||
++ __put_user(ptr_to_compat(old_ka.sa.sa_restorer),
++ &oact->sa_restorer) ||
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags) ||
+- __copy_to_user(&oact->sa_mask, &set32, sizeof(compat_sigset_t)))
++ __copy_to_user(&oact->sa_mask, &set32,
++ sizeof(compat_sigset_t)))
+ return -EFAULT;
+ }
return ret;
}
-@@ -658,14 +658,14 @@ static ssize_t reload_store(struct sys_device *dev, const char *buf, size_t sz)
- old = current->cpus_allowed;
+-asmlinkage long
+-sys32_sigaction (int sig, struct old_sigaction32 __user *act, struct old_sigaction32 __user *oact)
++asmlinkage long sys32_sigaction(int sig, struct old_sigaction32 __user *act,
++ struct old_sigaction32 __user *oact)
+ {
+- struct k_sigaction new_ka, old_ka;
+- int ret;
++ struct k_sigaction new_ka, old_ka;
++ int ret;
-- lock_cpu_hotplug();
-+ get_online_cpus();
- set_cpus_allowed(current, cpumask_of_cpu(cpu));
+- if (act) {
++ if (act) {
+ compat_old_sigset_t mask;
+ compat_uptr_t handler, restorer;
- mutex_lock(µcode_mutex);
- if (uci->valid)
- err = cpu_request_microcode(cpu);
- mutex_unlock(µcode_mutex);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
- set_cpus_allowed(current, old);
- }
- if (err)
-@@ -817,9 +817,9 @@ static int __init microcode_init (void)
- return PTR_ERR(microcode_pdev);
- }
+@@ -359,33 +369,35 @@ sys32_sigaction (int sig, struct old_sigaction32 __user *act, struct old_sigacti
+ new_ka.sa.sa_restorer = compat_ptr(restorer);
-- lock_cpu_hotplug();
-+ get_online_cpus();
- error = sysdev_driver_register(&cpu_sysdev_class, &mc_sysdev_driver);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
- if (error) {
- microcode_dev_exit();
- platform_device_unregister(microcode_pdev);
-@@ -839,9 +839,9 @@ static void __exit microcode_exit (void)
+ siginitset(&new_ka.sa.sa_mask, mask);
+- }
++ }
- unregister_hotcpu_notifier(&mc_cpu_notifier);
+- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
++ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
-- lock_cpu_hotplug();
-+ get_online_cpus();
- sysdev_driver_unregister(&cpu_sysdev_class, &mc_sysdev_driver);
-- unlock_cpu_hotplug();
-+ put_online_cpus();
+ if (!ret && oact) {
+ if (!access_ok(VERIFY_WRITE, oact, sizeof(*oact)) ||
+- __put_user(ptr_to_compat(old_ka.sa.sa_handler), &oact->sa_handler) ||
+- __put_user(ptr_to_compat(old_ka.sa.sa_restorer), &oact->sa_restorer) ||
++ __put_user(ptr_to_compat(old_ka.sa.sa_handler),
++ &oact->sa_handler) ||
++ __put_user(ptr_to_compat(old_ka.sa.sa_restorer),
++ &oact->sa_restorer) ||
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags) ||
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask))
+ return -EFAULT;
+- }
++ }
- platform_device_unregister(microcode_pdev);
+ return ret;
}
-diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
-index ee6eba4..21f6e3c 100644
---- a/arch/x86/kernel/msr.c
-+++ b/arch/x86/kernel/msr.c
-@@ -155,15 +155,15 @@ static int __cpuinit msr_class_cpu_callback(struct notifier_block *nfb,
- switch (action) {
- case CPU_UP_PREPARE:
-- case CPU_UP_PREPARE_FROZEN:
- err = msr_device_create(cpu);
- break;
- case CPU_UP_CANCELED:
-- case CPU_UP_CANCELED_FROZEN:
- case CPU_DEAD:
-- case CPU_DEAD_FROZEN:
- msr_device_destroy(cpu);
- break;
-+ case CPU_UP_CANCELED_FROZEN:
-+ destroy_suspended_device(msr_class, MKDEV(MSR_MAJOR, cpu));
-+ break;
+-asmlinkage long
+-sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
+- compat_sigset_t __user *oset, unsigned int sigsetsize)
++asmlinkage long sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
++ compat_sigset_t __user *oset,
++ unsigned int sigsetsize)
+ {
+ sigset_t s;
+ compat_sigset_t s32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+-
++
+ if (set) {
+- if (copy_from_user (&s32, set, sizeof(compat_sigset_t)))
++ if (copy_from_user(&s32, set, sizeof(compat_sigset_t)))
+ return -EFAULT;
+ switch (_NSIG_WORDS) {
+ case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
+@@ -394,13 +406,14 @@ sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
+ case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
+ }
}
- return err ? NOTIFY_BAD : NOTIFY_OK;
+- set_fs (KERNEL_DS);
++ set_fs(KERNEL_DS);
+ ret = sys_rt_sigprocmask(how,
+ set ? (sigset_t __user *)&s : NULL,
+ oset ? (sigset_t __user *)&s : NULL,
+- sigsetsize);
+- set_fs (old_fs);
+- if (ret) return ret;
++ sigsetsize);
++ set_fs(old_fs);
++ if (ret)
++ return ret;
+ if (oset) {
+ switch (_NSIG_WORDS) {
+ case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
+@@ -408,52 +421,49 @@ sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
+ case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
+ case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
+ }
+- if (copy_to_user (oset, &s32, sizeof(compat_sigset_t)))
++ if (copy_to_user(oset, &s32, sizeof(compat_sigset_t)))
+ return -EFAULT;
+ }
+ return 0;
}
-diff --git a/arch/x86/kernel/nmi_32.c b/arch/x86/kernel/nmi_32.c
-index 852db29..4f4bfd3 100644
---- a/arch/x86/kernel/nmi_32.c
-+++ b/arch/x86/kernel/nmi_32.c
-@@ -176,7 +176,7 @@ static int lapic_nmi_resume(struct sys_device *dev)
+-static inline long
+-get_tv32(struct timeval *o, struct compat_timeval __user *i)
++static inline long get_tv32(struct timeval *o, struct compat_timeval __user *i)
+ {
+- int err = -EFAULT;
+- if (access_ok(VERIFY_READ, i, sizeof(*i))) {
++ int err = -EFAULT;
++
++ if (access_ok(VERIFY_READ, i, sizeof(*i))) {
+ err = __get_user(o->tv_sec, &i->tv_sec);
+ err |= __get_user(o->tv_usec, &i->tv_usec);
+ }
+- return err;
++ return err;
+ }
- static struct sysdev_class nmi_sysclass = {
-- set_kset_name("lapic_nmi"),
-+ .name = "lapic_nmi",
- .resume = lapic_nmi_resume,
- .suspend = lapic_nmi_suspend,
- };
-diff --git a/arch/x86/kernel/nmi_64.c b/arch/x86/kernel/nmi_64.c
-index 4253c4e..c3d1476 100644
---- a/arch/x86/kernel/nmi_64.c
-+++ b/arch/x86/kernel/nmi_64.c
-@@ -211,7 +211,7 @@ static int lapic_nmi_resume(struct sys_device *dev)
+-static inline long
+-put_tv32(struct compat_timeval __user *o, struct timeval *i)
++static inline long put_tv32(struct compat_timeval __user *o, struct timeval *i)
+ {
+ int err = -EFAULT;
+- if (access_ok(VERIFY_WRITE, o, sizeof(*o))) {
++
++ if (access_ok(VERIFY_WRITE, o, sizeof(*o))) {
+ err = __put_user(i->tv_sec, &o->tv_sec);
+ err |= __put_user(i->tv_usec, &o->tv_usec);
+- }
+- return err;
++ }
++ return err;
}
- static struct sysdev_class nmi_sysclass = {
-- set_kset_name("lapic_nmi"),
-+ .name = "lapic_nmi",
- .resume = lapic_nmi_resume,
- .suspend = lapic_nmi_suspend,
- };
-diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
-index 9bdd830..20f29e4 100644
---- a/arch/x86/kernel/signal_32.c
-+++ b/arch/x86/kernel/signal_32.c
-@@ -658,6 +658,9 @@ void do_notify_resume(struct pt_regs *regs, void *_unused,
- /* deal with pending signal delivery */
- if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
- do_signal(regs);
-+
-+ if (thread_info_flags & _TIF_HRTICK_RESCHED)
-+ hrtick_resched();
-
- clear_thread_flag(TIF_IRET);
+-extern unsigned int alarm_setitimer(unsigned int seconds);
+-
+-asmlinkage long
+-sys32_alarm(unsigned int seconds)
++asmlinkage long sys32_alarm(unsigned int seconds)
+ {
+ return alarm_setitimer(seconds);
}
-diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
-index ab086b0..38d8064 100644
---- a/arch/x86/kernel/signal_64.c
-+++ b/arch/x86/kernel/signal_64.c
-@@ -480,6 +480,9 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
- /* deal with pending signal delivery */
- if (thread_info_flags & (_TIF_SIGPENDING|_TIF_RESTORE_SIGMASK))
- do_signal(regs);
+
+-/* Translations due to time_t size differences. Which affects all
+- sorts of things, like timeval and itimerval. */
+-
+-extern struct timezone sys_tz;
+-
+-asmlinkage long
+-sys32_gettimeofday(struct compat_timeval __user *tv, struct timezone __user *tz)
++/*
++ * Translations due to time_t size differences. Which affects all
++ * sorts of things, like timeval and itimerval.
++ */
++asmlinkage long sys32_gettimeofday(struct compat_timeval __user *tv,
++ struct timezone __user *tz)
+ {
+ if (tv) {
+ struct timeval ktv;
+
-+ if (thread_info_flags & _TIF_HRTICK_RESCHED)
-+ hrtick_resched();
+ do_gettimeofday(&ktv);
+ if (put_tv32(tv, &ktv))
+ return -EFAULT;
+@@ -465,14 +475,14 @@ sys32_gettimeofday(struct compat_timeval __user *tv, struct timezone __user *tz)
+ return 0;
}
- void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
-diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
-index 6fa6cf0..55771fd 100644
---- a/arch/x86/kernel/stacktrace.c
-+++ b/arch/x86/kernel/stacktrace.c
-@@ -33,6 +33,19 @@ static void save_stack_address(void *data, unsigned long addr)
- trace->entries[trace->nr_entries++] = addr;
- }
+-asmlinkage long
+-sys32_settimeofday(struct compat_timeval __user *tv, struct timezone __user *tz)
++asmlinkage long sys32_settimeofday(struct compat_timeval __user *tv,
++ struct timezone __user *tz)
+ {
+ struct timeval ktv;
+ struct timespec kts;
+ struct timezone ktz;
-+static void save_stack_address_nosched(void *data, unsigned long addr)
-+{
-+ struct stack_trace *trace = (struct stack_trace *)data;
-+ if (in_sched_functions(addr))
-+ return;
-+ if (trace->skip > 0) {
-+ trace->skip--;
-+ return;
-+ }
-+ if (trace->nr_entries < trace->max_entries)
-+ trace->entries[trace->nr_entries++] = addr;
-+}
-+
- static const struct stacktrace_ops save_stack_ops = {
- .warning = save_stack_warning,
- .warning_symbol = save_stack_warning_symbol,
-@@ -40,6 +53,13 @@ static const struct stacktrace_ops save_stack_ops = {
- .address = save_stack_address,
+- if (tv) {
++ if (tv) {
+ if (get_tv32(&ktv, tv))
+ return -EFAULT;
+ kts.tv_sec = ktv.tv_sec;
+@@ -494,8 +504,7 @@ struct sel_arg_struct {
+ unsigned int tvp;
};
-+static const struct stacktrace_ops save_stack_ops_nosched = {
-+ .warning = save_stack_warning,
-+ .warning_symbol = save_stack_warning_symbol,
-+ .stack = save_stack_stack,
-+ .address = save_stack_address_nosched,
-+};
-+
- /*
- * Save stack-backtrace addresses into a stack_trace buffer.
- */
-@@ -50,3 +70,10 @@ void save_stack_trace(struct stack_trace *trace)
- trace->entries[trace->nr_entries++] = ULONG_MAX;
+-asmlinkage long
+-sys32_old_select(struct sel_arg_struct __user *arg)
++asmlinkage long sys32_old_select(struct sel_arg_struct __user *arg)
+ {
+ struct sel_arg_struct a;
+
+@@ -505,50 +514,45 @@ sys32_old_select(struct sel_arg_struct __user *arg)
+ compat_ptr(a.exp), compat_ptr(a.tvp));
}
- EXPORT_SYMBOL(save_stack_trace);
-+
-+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-+{
-+ dump_trace(tsk, NULL, NULL, &save_stack_ops_nosched, trace);
-+ if (trace->nr_entries < trace->max_entries)
-+ trace->entries[trace->nr_entries++] = ULONG_MAX;
-+}
-diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
-index 7d72cce..84c913f 100644
---- a/arch/x86/kernel/vmlinux_32.lds.S
-+++ b/arch/x86/kernel/vmlinux_32.lds.S
-@@ -131,10 +131,12 @@ SECTIONS
- .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
- __init_begin = .;
- _sinittext = .;
-- *(.init.text)
-+ INIT_TEXT
- _einittext = .;
- }
-- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { *(.init.data) }
-+ .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
-+ INIT_DATA
-+ }
- . = ALIGN(16);
- .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
- __setup_start = .;
-@@ -169,8 +171,12 @@ SECTIONS
- }
- /* .exit.text is discard at runtime, not link time, to deal with references
- from .altinstructions and .eh_frame */
-- .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { *(.exit.text) }
-- .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) { *(.exit.data) }
-+ .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
-+ EXIT_TEXT
-+ }
-+ .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
-+ EXIT_DATA
-+ }
- #if defined(CONFIG_BLK_DEV_INITRD)
- . = ALIGN(4096);
- .init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
-diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
-index ba8ea97..ea53869 100644
---- a/arch/x86/kernel/vmlinux_64.lds.S
-+++ b/arch/x86/kernel/vmlinux_64.lds.S
-@@ -155,12 +155,15 @@ SECTIONS
- __init_begin = .;
- .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
- _sinittext = .;
-- *(.init.text)
-+ INIT_TEXT
- _einittext = .;
- }
-- __initdata_begin = .;
-- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { *(.init.data) }
-- __initdata_end = .;
-+ .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
-+ __initdata_begin = .;
-+ INIT_DATA
-+ __initdata_end = .;
-+ }
+
+-extern asmlinkage long
+-compat_sys_wait4(compat_pid_t pid, compat_uint_t * stat_addr, int options,
+- struct compat_rusage *ru);
+-
+-asmlinkage long
+-sys32_waitpid(compat_pid_t pid, unsigned int *stat_addr, int options)
++asmlinkage long sys32_waitpid(compat_pid_t pid, unsigned int *stat_addr,
++ int options)
+ {
+ return compat_sys_wait4(pid, stat_addr, options, NULL);
+ }
+
+ /* 32-bit timeval and related flotsam. */
+
+-asmlinkage long
+-sys32_sysfs(int option, u32 arg1, u32 arg2)
++asmlinkage long sys32_sysfs(int option, u32 arg1, u32 arg2)
+ {
+ return sys_sysfs(option, arg1, arg2);
+ }
+
+-asmlinkage long
+-sys32_sched_rr_get_interval(compat_pid_t pid, struct compat_timespec __user *interval)
++asmlinkage long sys32_sched_rr_get_interval(compat_pid_t pid,
++ struct compat_timespec __user *interval)
+ {
+ struct timespec t;
+ int ret;
+- mm_segment_t old_fs = get_fs ();
+-
+- set_fs (KERNEL_DS);
++ mm_segment_t old_fs = get_fs();
+
- . = ALIGN(16);
- __setup_start = .;
- .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) { *(.init.setup) }
-@@ -187,8 +190,12 @@ SECTIONS
- }
- /* .exit.text is discard at runtime, not link time, to deal with references
- from .altinstructions and .eh_frame */
-- .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { *(.exit.text) }
-- .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) { *(.exit.data) }
-+ .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
-+ EXIT_TEXT
-+ }
-+ .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
-+ EXIT_DATA
-+ }
++ set_fs(KERNEL_DS);
+ ret = sys_sched_rr_get_interval(pid, (struct timespec __user *)&t);
+- set_fs (old_fs);
++ set_fs(old_fs);
+ if (put_compat_timespec(&t, interval))
+ return -EFAULT;
+ return ret;
+ }
- /* vdso blob that is mapped into user space */
- vdso_start = . ;
-diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
-index 944bbcd..c8ab79e 100644
---- a/arch/x86/oprofile/nmi_int.c
-+++ b/arch/x86/oprofile/nmi_int.c
-@@ -51,7 +51,7 @@ static int nmi_resume(struct sys_device *dev)
+-asmlinkage long
+-sys32_rt_sigpending(compat_sigset_t __user *set, compat_size_t sigsetsize)
++asmlinkage long sys32_rt_sigpending(compat_sigset_t __user *set,
++ compat_size_t sigsetsize)
+ {
+ sigset_t s;
+ compat_sigset_t s32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+-
+- set_fs (KERNEL_DS);
++
++ set_fs(KERNEL_DS);
+ ret = sys_rt_sigpending((sigset_t __user *)&s, sigsetsize);
+- set_fs (old_fs);
++ set_fs(old_fs);
+ if (!ret) {
+ switch (_NSIG_WORDS) {
+ case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
+@@ -556,30 +560,29 @@ sys32_rt_sigpending(compat_sigset_t __user *set, compat_size_t sigsetsize)
+ case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
+ case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
+ }
+- if (copy_to_user (set, &s32, sizeof(compat_sigset_t)))
++ if (copy_to_user(set, &s32, sizeof(compat_sigset_t)))
+ return -EFAULT;
+ }
+ return ret;
+ }
+-asmlinkage long
+-sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo)
++asmlinkage long sys32_rt_sigqueueinfo(int pid, int sig,
++ compat_siginfo_t __user *uinfo)
+ {
+ siginfo_t info;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+-
++
+ if (copy_siginfo_from_user32(&info, uinfo))
+ return -EFAULT;
+- set_fs (KERNEL_DS);
++ set_fs(KERNEL_DS);
+ ret = sys_rt_sigqueueinfo(pid, sig, (siginfo_t __user *)&info);
+- set_fs (old_fs);
++ set_fs(old_fs);
+ return ret;
+ }
- static struct sysdev_class oprofile_sysclass = {
-- set_kset_name("oprofile"),
-+ .name = "oprofile",
- .resume = nmi_resume,
- .suspend = nmi_suspend,
+ /* These are here just in case some old ia32 binary calls it. */
+-asmlinkage long
+-sys32_pause(void)
++asmlinkage long sys32_pause(void)
+ {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+@@ -599,25 +602,25 @@ struct sysctl_ia32 {
};
-diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
-index ac4ed52..7d0f55a 100644
---- a/arch/xtensa/kernel/vmlinux.lds.S
-+++ b/arch/xtensa/kernel/vmlinux.lds.S
-@@ -136,13 +136,13 @@ SECTIONS
- __init_begin = .;
- .init.text : {
- _sinittext = .;
-- *(.init.literal) *(.init.text)
-+ *(.init.literal) INIT_TEXT
- _einittext = .;
- }
- .init.data :
- {
-- *(.init.data)
-+ INIT_DATA
- . = ALIGN(0x4);
- __tagtable_begin = .;
- *(.taglist)
-@@ -278,8 +278,9 @@ SECTIONS
- /* Sections to be discarded */
- /DISCARD/ :
- {
-- *(.exit.literal .exit.text)
-- *(.exit.data)
-+ *(.exit.literal)
-+ EXIT_TEXT
-+ EXIT_DATA
- *(.exitcall.exit)
- }
-diff --git a/arch/xtensa/mm/Makefile b/arch/xtensa/mm/Makefile
-index 10aec22..64e304a 100644
---- a/arch/xtensa/mm/Makefile
-+++ b/arch/xtensa/mm/Makefile
-@@ -1,9 +1,5 @@
- #
- # Makefile for the Linux/Xtensa-specific parts of the memory manager.
- #
--# Note! Dependencies are done automagically by 'make dep', which also
--# removes any old dependencies. DON'T put your own dependencies here
--# unless it's something special (ie not a .c file).
--#
+-asmlinkage long
+-sys32_sysctl(struct sysctl_ia32 __user *args32)
++asmlinkage long sys32_sysctl(struct sysctl_ia32 __user *args32)
+ {
+ struct sysctl_ia32 a32;
+- mm_segment_t old_fs = get_fs ();
++ mm_segment_t old_fs = get_fs();
+ void __user *oldvalp, *newvalp;
+ size_t oldlen;
+ int __user *namep;
+ long ret;
- obj-y := init.o fault.o tlb.o misc.o cache.o
-diff --git a/arch/xtensa/platform-iss/Makefile b/arch/xtensa/platform-iss/Makefile
-index 5b394e9..af96e31 100644
---- a/arch/xtensa/platform-iss/Makefile
-+++ b/arch/xtensa/platform-iss/Makefile
-@@ -3,11 +3,6 @@
- # Makefile for the Xtensa Instruction Set Simulator (ISS)
- # "prom monitor" library routines under Linux.
- #
--# Note! Dependencies are done automagically by 'make dep', which also
--# removes any old dependencies. DON'T put your own dependencies here
--# unless it's something special (ie not a .c file).
--#
--# Note 2! The CFLAGS definitions are in the main makefile...
+- if (copy_from_user(&a32, args32, sizeof (a32)))
++ if (copy_from_user(&a32, args32, sizeof(a32)))
+ return -EFAULT;
- obj-y = io.o console.o setup.o network.o
+ /*
+- * We need to pre-validate these because we have to disable address checking
+- * before calling do_sysctl() because of OLDLEN but we can't run the risk of the
+- * user specifying bad addresses here. Well, since we're dealing with 32 bit
+- * addresses, we KNOW that access_ok() will always succeed, so this is an
+- * expensive NOP, but so what...
++ * We need to pre-validate these because we have to disable
++ * address checking before calling do_sysctl() because of
++ * OLDLEN but we can't run the risk of the user specifying bad
++ * addresses here. Well, since we're dealing with 32 bit
++ * addresses, we KNOW that access_ok() will always succeed, so
++ * this is an expensive NOP, but so what...
+ */
+ namep = compat_ptr(a32.name);
+ oldvalp = compat_ptr(a32.oldval);
+@@ -636,34 +639,34 @@ sys32_sysctl(struct sysctl_ia32 __user *args32)
+ unlock_kernel();
+ set_fs(old_fs);
-diff --git a/block/Makefile b/block/Makefile
-index 8261081..5a43c7d 100644
---- a/block/Makefile
-+++ b/block/Makefile
-@@ -2,7 +2,9 @@
- # Makefile for the kernel block layer
- #
+- if (oldvalp && put_user (oldlen, (int __user *)compat_ptr(a32.oldlenp)))
++ if (oldvalp && put_user(oldlen, (int __user *)compat_ptr(a32.oldlenp)))
+ return -EFAULT;
--obj-$(CONFIG_BLOCK) := elevator.o ll_rw_blk.o ioctl.o genhd.o scsi_ioctl.o
-+obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
-+ blk-barrier.o blk-settings.o blk-ioc.o blk-map.o \
-+ blk-exec.o blk-merge.o ioctl.o genhd.o scsi_ioctl.o
+ return ret;
+ }
+ #endif
- obj-$(CONFIG_BLK_DEV_BSG) += bsg.o
- obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
-diff --git a/block/as-iosched.c b/block/as-iosched.c
-index cb5e53b..b201d16 100644
---- a/block/as-iosched.c
-+++ b/block/as-iosched.c
-@@ -170,9 +170,11 @@ static void free_as_io_context(struct as_io_context *aic)
+-/* warning: next two assume little endian */
+-asmlinkage long
+-sys32_pread(unsigned int fd, char __user *ubuf, u32 count, u32 poslo, u32 poshi)
++/* warning: next two assume little endian */
++asmlinkage long sys32_pread(unsigned int fd, char __user *ubuf, u32 count,
++ u32 poslo, u32 poshi)
+ {
+ return sys_pread64(fd, ubuf, count,
+ ((loff_t)AA(poshi) << 32) | AA(poslo));
+ }
- static void as_trim(struct io_context *ioc)
+-asmlinkage long
+-sys32_pwrite(unsigned int fd, char __user *ubuf, u32 count, u32 poslo, u32 poshi)
++asmlinkage long sys32_pwrite(unsigned int fd, char __user *ubuf, u32 count,
++ u32 poslo, u32 poshi)
{
-+ spin_lock(&ioc->lock);
- if (ioc->aic)
- free_as_io_context(ioc->aic);
- ioc->aic = NULL;
-+ spin_unlock(&ioc->lock);
+ return sys_pwrite64(fd, ubuf, count,
+ ((loff_t)AA(poshi) << 32) | AA(poslo));
}
- /* Called when the task exits */
-@@ -462,7 +464,9 @@ static void as_antic_timeout(unsigned long data)
- spin_lock_irqsave(q->queue_lock, flags);
- if (ad->antic_status == ANTIC_WAIT_REQ
- || ad->antic_status == ANTIC_WAIT_NEXT) {
-- struct as_io_context *aic = ad->io_context->aic;
-+ struct as_io_context *aic;
-+ spin_lock(&ad->io_context->lock);
-+ aic = ad->io_context->aic;
- ad->antic_status = ANTIC_FINISHED;
- kblockd_schedule_work(&ad->antic_work);
-@@ -475,6 +479,7 @@ static void as_antic_timeout(unsigned long data)
- /* process not "saved" by a cooperating request */
- ad->exit_no_coop = (7*ad->exit_no_coop + 256)/8;
- }
-+ spin_unlock(&ad->io_context->lock);
- }
- spin_unlock_irqrestore(q->queue_lock, flags);
+-asmlinkage long
+-sys32_personality(unsigned long personality)
++asmlinkage long sys32_personality(unsigned long personality)
+ {
+ int ret;
+- if (personality(current->personality) == PER_LINUX32 &&
++
++ if (personality(current->personality) == PER_LINUX32 &&
+ personality == PER_LINUX)
+ personality = PER_LINUX32;
+ ret = sys_personality(personality);
+@@ -672,34 +675,33 @@ sys32_personality(unsigned long personality)
+ return ret;
}
-@@ -635,9 +640,11 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
- ioc = ad->io_context;
- BUG_ON(!ioc);
-+ spin_lock(&ioc->lock);
+-asmlinkage long
+-sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset, s32 count)
++asmlinkage long sys32_sendfile(int out_fd, int in_fd,
++ compat_off_t __user *offset, s32 count)
+ {
+ mm_segment_t old_fs = get_fs();
+ int ret;
+ off_t of;
+-
++
+ if (offset && get_user(of, offset))
+ return -EFAULT;
+-
++
+ set_fs(KERNEL_DS);
+ ret = sys_sendfile(out_fd, in_fd, offset ? (off_t __user *)&of : NULL,
+ count);
+ set_fs(old_fs);
+-
++
+ if (offset && put_user(of, offset))
+ return -EFAULT;
+-
+ return ret;
+ }
- if (rq && ioc == RQ_IOC(rq)) {
- /* request from same process */
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+ asmlinkage long sys32_mmap2(unsigned long addr, unsigned long len,
+- unsigned long prot, unsigned long flags,
+- unsigned long fd, unsigned long pgoff)
++ unsigned long prot, unsigned long flags,
++ unsigned long fd, unsigned long pgoff)
+ {
+ struct mm_struct *mm = current->mm;
+ unsigned long error;
+- struct file * file = NULL;
++ struct file *file = NULL;
-@@ -646,20 +653,25 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
- * In this situation status should really be FINISHED,
- * however the timer hasn't had the chance to run yet.
- */
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+@@ -717,36 +719,35 @@ asmlinkage long sys32_mmap2(unsigned long addr, unsigned long len,
+ return error;
+ }
- aic = ioc->aic;
-- if (!aic)
-+ if (!aic) {
-+ spin_unlock(&ioc->lock);
- return 0;
-+ }
+-asmlinkage long sys32_olduname(struct oldold_utsname __user * name)
++asmlinkage long sys32_olduname(struct oldold_utsname __user *name)
+ {
++ char *arch = "x86_64";
+ int err;
- if (atomic_read(&aic->nr_queued) > 0) {
- /* process has more requests queued */
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+ if (!name)
+ return -EFAULT;
+ if (!access_ok(VERIFY_WRITE, name, sizeof(struct oldold_utsname)))
+ return -EFAULT;
+-
+- down_read(&uts_sem);
+-
+- err = __copy_to_user(&name->sysname,&utsname()->sysname,
+- __OLD_UTS_LEN);
+- err |= __put_user(0,name->sysname+__OLD_UTS_LEN);
+- err |= __copy_to_user(&name->nodename,&utsname()->nodename,
+- __OLD_UTS_LEN);
+- err |= __put_user(0,name->nodename+__OLD_UTS_LEN);
+- err |= __copy_to_user(&name->release,&utsname()->release,
+- __OLD_UTS_LEN);
+- err |= __put_user(0,name->release+__OLD_UTS_LEN);
+- err |= __copy_to_user(&name->version,&utsname()->version,
+- __OLD_UTS_LEN);
+- err |= __put_user(0,name->version+__OLD_UTS_LEN);
+- {
+- char *arch = "x86_64";
+- if (personality(current->personality) == PER_LINUX32)
+- arch = "i686";
+-
+- err |= __copy_to_user(&name->machine, arch, strlen(arch)+1);
+- }
++
++ down_read(&uts_sem);
++
++ err = __copy_to_user(&name->sysname, &utsname()->sysname,
++ __OLD_UTS_LEN);
++ err |= __put_user(0, name->sysname+__OLD_UTS_LEN);
++ err |= __copy_to_user(&name->nodename, &utsname()->nodename,
++ __OLD_UTS_LEN);
++ err |= __put_user(0, name->nodename+__OLD_UTS_LEN);
++ err |= __copy_to_user(&name->release, &utsname()->release,
++ __OLD_UTS_LEN);
++ err |= __put_user(0, name->release+__OLD_UTS_LEN);
++ err |= __copy_to_user(&name->version, &utsname()->version,
++ __OLD_UTS_LEN);
++ err |= __put_user(0, name->version+__OLD_UTS_LEN);
++
++ if (personality(current->personality) == PER_LINUX32)
++ arch = "i686";
++
++ err |= __copy_to_user(&name->machine, arch, strlen(arch) + 1);
- if (atomic_read(&aic->nr_dispatched) > 0) {
- /* process has more requests dispatched */
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+ up_read(&uts_sem);
-@@ -680,6 +692,7 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
- }
+@@ -755,17 +756,19 @@ asmlinkage long sys32_olduname(struct oldold_utsname __user * name)
+ return err;
+ }
- as_update_iohist(ad, aic, rq);
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+-long sys32_uname(struct old_utsname __user * name)
++long sys32_uname(struct old_utsname __user *name)
+ {
+ int err;
++
+ if (!name)
+ return -EFAULT;
+ down_read(&uts_sem);
+- err = copy_to_user(name, utsname(), sizeof (*name));
++ err = copy_to_user(name, utsname(), sizeof(*name));
+ up_read(&uts_sem);
+- if (personality(current->personality) == PER_LINUX32)
++ if (personality(current->personality) == PER_LINUX32)
+ err |= copy_to_user(&name->machine, "i686", 5);
+- return err?-EFAULT:0;
++
++ return err ? -EFAULT : 0;
+ }
-@@ -688,20 +701,27 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
- if (aic->ttime_samples == 0)
- ad->exit_prob = (7*ad->exit_prob + 256)/8;
+ long sys32_ustat(unsigned dev, struct ustat32 __user *u32p)
+@@ -773,27 +776,28 @@ long sys32_ustat(unsigned dev, struct ustat32 __user *u32p)
+ struct ustat u;
+ mm_segment_t seg;
+ int ret;
+-
+- seg = get_fs();
+- set_fs(KERNEL_DS);
++
++ seg = get_fs();
++ set_fs(KERNEL_DS);
+ ret = sys_ustat(dev, (struct ustat __user *)&u);
+ set_fs(seg);
+- if (ret >= 0) {
+- if (!access_ok(VERIFY_WRITE,u32p,sizeof(struct ustat32)) ||
+- __put_user((__u32) u.f_tfree, &u32p->f_tfree) ||
+- __put_user((__u32) u.f_tinode, &u32p->f_tfree) ||
+- __copy_to_user(&u32p->f_fname, u.f_fname, sizeof(u.f_fname)) ||
+- __copy_to_user(&u32p->f_fpack, u.f_fpack, sizeof(u.f_fpack)))
+- ret = -EFAULT;
+- }
++ if (ret < 0)
++ return ret;
++
++ if (!access_ok(VERIFY_WRITE, u32p, sizeof(struct ustat32)) ||
++ __put_user((__u32) u.f_tfree, &u32p->f_tfree) ||
++ __put_user((__u32) u.f_tinode, &u32p->f_tfree) ||
++ __copy_to_user(&u32p->f_fname, u.f_fname, sizeof(u.f_fname)) ||
++ __copy_to_user(&u32p->f_fpack, u.f_fpack, sizeof(u.f_fpack)))
++ ret = -EFAULT;
+ return ret;
+-}
++}
-- if (ad->exit_no_coop > 128)
-+ if (ad->exit_no_coop > 128) {
-+ spin_unlock(&ioc->lock);
- return 1;
-+ }
- }
+ asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
+ compat_uptr_t __user *envp, struct pt_regs *regs)
+ {
+ long error;
+- char * filename;
++ char *filename;
- if (aic->ttime_samples == 0) {
-- if (ad->new_ttime_mean > ad->antic_expire)
-+ if (ad->new_ttime_mean > ad->antic_expire) {
-+ spin_unlock(&ioc->lock);
- return 1;
-- if (ad->exit_prob * ad->exit_no_coop > 128*256)
-+ }
-+ if (ad->exit_prob * ad->exit_no_coop > 128*256) {
-+ spin_unlock(&ioc->lock);
- return 1;
-+ }
- } else if (aic->ttime_mean > ad->antic_expire) {
- /* the process thinks too much between requests */
-+ spin_unlock(&ioc->lock);
- return 1;
- }
+ filename = getname(name);
+ error = PTR_ERR(filename);
+@@ -812,18 +816,19 @@ asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
+ asmlinkage long sys32_clone(unsigned int clone_flags, unsigned int newsp,
+ struct pt_regs *regs)
+ {
+- void __user *parent_tid = (void __user *)regs->rdx;
+- void __user *child_tid = (void __user *)regs->rdi;
++ void __user *parent_tid = (void __user *)regs->dx;
++ void __user *child_tid = (void __user *)regs->di;
++
+ if (!newsp)
+- newsp = regs->rsp;
+- return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
++ newsp = regs->sp;
++ return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
+ }
+
+ /*
+- * Some system calls that need sign extended arguments. This could be done by a generic wrapper.
+- */
-
-+ spin_unlock(&ioc->lock);
- return 0;
+-long sys32_lseek (unsigned int fd, int offset, unsigned int whence)
++ * Some system calls that need sign extended arguments. This could be
++ * done by a generic wrapper.
++ */
++long sys32_lseek(unsigned int fd, int offset, unsigned int whence)
+ {
+ return sys_lseek(fd, offset, whence);
+ }
+@@ -832,49 +837,52 @@ long sys32_kill(int pid, int sig)
+ {
+ return sys_kill(pid, sig);
}
+-
+-long sys32_fadvise64_64(int fd, __u32 offset_low, __u32 offset_high,
++
++long sys32_fadvise64_64(int fd, __u32 offset_low, __u32 offset_high,
+ __u32 len_low, __u32 len_high, int advice)
+-{
++{
+ return sys_fadvise64_64(fd,
+ (((u64)offset_high)<<32) | offset_low,
+ (((u64)len_high)<<32) | len_low,
+- advice);
+-}
++ advice);
++}
-@@ -1255,7 +1275,9 @@ static void as_merged_requests(struct request_queue *q, struct request *req,
- * Don't copy here but swap, because when anext is
- * removed below, it must contain the unused context
- */
-+ double_spin_lock(&rioc->lock, &nioc->lock, rioc < nioc);
- swap_io_context(&rioc, &nioc);
-+ double_spin_unlock(&rioc->lock, &nioc->lock, rioc < nioc);
- }
- }
+ long sys32_vm86_warning(void)
+-{
++{
+ struct task_struct *me = current;
+ static char lastcomm[sizeof(me->comm)];
++
+ if (strncmp(lastcomm, me->comm, sizeof(lastcomm))) {
+- compat_printk(KERN_INFO "%s: vm86 mode not supported on 64 bit kernel\n",
+- me->comm);
++ compat_printk(KERN_INFO
++ "%s: vm86 mode not supported on 64 bit kernel\n",
++ me->comm);
+ strncpy(lastcomm, me->comm, sizeof(lastcomm));
+- }
++ }
+ return -ENOSYS;
+-}
++}
-diff --git a/block/blk-barrier.c b/block/blk-barrier.c
+ long sys32_lookup_dcookie(u32 addr_low, u32 addr_high,
+- char __user * buf, size_t len)
++ char __user *buf, size_t len)
+ {
+ return sys_lookup_dcookie(((u64)addr_high << 32) | addr_low, buf, len);
+ }
+
+-asmlinkage ssize_t sys32_readahead(int fd, unsigned off_lo, unsigned off_hi, size_t count)
++asmlinkage ssize_t sys32_readahead(int fd, unsigned off_lo, unsigned off_hi,
++ size_t count)
+ {
+ return sys_readahead(fd, ((u64)off_hi << 32) | off_lo, count);
+ }
+
+ asmlinkage long sys32_sync_file_range(int fd, unsigned off_low, unsigned off_hi,
+- unsigned n_low, unsigned n_hi, int flags)
++ unsigned n_low, unsigned n_hi, int flags)
+ {
+ return sys_sync_file_range(fd,
+ ((u64)off_hi << 32) | off_low,
+ ((u64)n_hi << 32) | n_low, flags);
+ }
+
+-asmlinkage long sys32_fadvise64(int fd, unsigned offset_lo, unsigned offset_hi, size_t len,
+- int advice)
++asmlinkage long sys32_fadvise64(int fd, unsigned offset_lo, unsigned offset_hi,
++ size_t len, int advice)
+ {
+ return sys_fadvise64_64(fd, ((u64)offset_hi << 32) | offset_lo,
+ len, advice);
+diff --git a/arch/x86/ia32/syscall32.c b/arch/x86/ia32/syscall32.c
+deleted file mode 100644
+index 15013ba..0000000
+--- a/arch/x86/ia32/syscall32.c
++++ /dev/null
+@@ -1,83 +0,0 @@
+-/* Copyright 2002,2003 Andi Kleen, SuSE Labs */
+-
+-/* vsyscall handling for 32bit processes. Map a stub page into it
+- on demand because 32bit cannot reach the kernel's fixmaps */
+-
+-#include <linux/mm.h>
+-#include <linux/string.h>
+-#include <linux/kernel.h>
+-#include <linux/gfp.h>
+-#include <linux/init.h>
+-#include <linux/stringify.h>
+-#include <linux/security.h>
+-#include <asm/proto.h>
+-#include <asm/tlbflush.h>
+-#include <asm/ia32_unistd.h>
+-#include <asm/vsyscall32.h>
+-
+-extern unsigned char syscall32_syscall[], syscall32_syscall_end[];
+-extern unsigned char syscall32_sysenter[], syscall32_sysenter_end[];
+-extern int sysctl_vsyscall32;
+-
+-static struct page *syscall32_pages[1];
+-static int use_sysenter = -1;
+-
+-struct linux_binprm;
+-
+-/* Setup a VMA at program startup for the vsyscall page */
+-int syscall32_setup_pages(struct linux_binprm *bprm, int exstack)
+-{
+- struct mm_struct *mm = current->mm;
+- int ret;
+-
+- down_write(&mm->mmap_sem);
+- /*
+- * MAYWRITE to allow gdb to COW and set breakpoints
+- *
+- * Make sure the vDSO gets into every core dump.
+- * Dumping its contents makes post-mortem fully interpretable later
+- * without matching up the same kernel and hardware config to see
+- * what PC values meant.
+- */
+- /* Could randomize here */
+- ret = install_special_mapping(mm, VSYSCALL32_BASE, PAGE_SIZE,
+- VM_READ|VM_EXEC|
+- VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC|
+- VM_ALWAYSDUMP,
+- syscall32_pages);
+- up_write(&mm->mmap_sem);
+- return ret;
+-}
+-
+-static int __init init_syscall32(void)
+-{
+- char *syscall32_page = (void *)get_zeroed_page(GFP_KERNEL);
+- if (!syscall32_page)
+- panic("Cannot allocate syscall32 page");
+- syscall32_pages[0] = virt_to_page(syscall32_page);
+- if (use_sysenter > 0) {
+- memcpy(syscall32_page, syscall32_sysenter,
+- syscall32_sysenter_end - syscall32_sysenter);
+- } else {
+- memcpy(syscall32_page, syscall32_syscall,
+- syscall32_syscall_end - syscall32_syscall);
+- }
+- return 0;
+-}
+-
+-__initcall(init_syscall32);
+-
+-/* May not be __init: called during resume */
+-void syscall32_cpu_init(void)
+-{
+- if (use_sysenter < 0)
+- use_sysenter = (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL);
+-
+- /* Load these always in case some future AMD CPU supports
+- SYSENTER from compat mode too. */
+- checking_wrmsrl(MSR_IA32_SYSENTER_CS, (u64)__KERNEL_CS);
+- checking_wrmsrl(MSR_IA32_SYSENTER_ESP, 0ULL);
+- checking_wrmsrl(MSR_IA32_SYSENTER_EIP, (u64)ia32_sysenter_target);
+-
+- wrmsrl(MSR_CSTAR, ia32_cstar_target);
+-}
+diff --git a/arch/x86/ia32/syscall32_syscall.S b/arch/x86/ia32/syscall32_syscall.S
+deleted file mode 100644
+index 933f0f0..0000000
+--- a/arch/x86/ia32/syscall32_syscall.S
++++ /dev/null
+@@ -1,17 +0,0 @@
+-/* 32bit VDSOs mapped into user space. */
+-
+- .section ".init.data","aw"
+-
+- .globl syscall32_syscall
+- .globl syscall32_syscall_end
+-
+-syscall32_syscall:
+- .incbin "arch/x86/ia32/vsyscall-syscall.so"
+-syscall32_syscall_end:
+-
+- .globl syscall32_sysenter
+- .globl syscall32_sysenter_end
+-
+-syscall32_sysenter:
+- .incbin "arch/x86/ia32/vsyscall-sysenter.so"
+-syscall32_sysenter_end:
+diff --git a/arch/x86/ia32/tls32.c b/arch/x86/ia32/tls32.c
+deleted file mode 100644
+index 1cc4340..0000000
+--- a/arch/x86/ia32/tls32.c
++++ /dev/null
+@@ -1,163 +0,0 @@
+-#include <linux/kernel.h>
+-#include <linux/errno.h>
+-#include <linux/sched.h>
+-#include <linux/user.h>
+-
+-#include <asm/uaccess.h>
+-#include <asm/desc.h>
+-#include <asm/system.h>
+-#include <asm/ldt.h>
+-#include <asm/processor.h>
+-#include <asm/proto.h>
+-
+-/*
+- * sys_alloc_thread_area: get a yet unused TLS descriptor index.
+- */
+-static int get_free_idx(void)
+-{
+- struct thread_struct *t = &current->thread;
+- int idx;
+-
+- for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
+- if (desc_empty((struct n_desc_struct *)(t->tls_array) + idx))
+- return idx + GDT_ENTRY_TLS_MIN;
+- return -ESRCH;
+-}
+-
+-/*
+- * Set a given TLS descriptor:
+- * When you want addresses > 32bit use arch_prctl()
+- */
+-int do_set_thread_area(struct thread_struct *t, struct user_desc __user *u_info)
+-{
+- struct user_desc info;
+- struct n_desc_struct *desc;
+- int cpu, idx;
+-
+- if (copy_from_user(&info, u_info, sizeof(info)))
+- return -EFAULT;
+-
+- idx = info.entry_number;
+-
+- /*
+- * index -1 means the kernel should try to find and
+- * allocate an empty descriptor:
+- */
+- if (idx == -1) {
+- idx = get_free_idx();
+- if (idx < 0)
+- return idx;
+- if (put_user(idx, &u_info->entry_number))
+- return -EFAULT;
+- }
+-
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
+-
+- desc = ((struct n_desc_struct *)t->tls_array) + idx - GDT_ENTRY_TLS_MIN;
+-
+- /*
+- * We must not get preempted while modifying the TLS.
+- */
+- cpu = get_cpu();
+-
+- if (LDT_empty(&info)) {
+- desc->a = 0;
+- desc->b = 0;
+- } else {
+- desc->a = LDT_entry_a(&info);
+- desc->b = LDT_entry_b(&info);
+- }
+- if (t == &current->thread)
+- load_TLS(t, cpu);
+-
+- put_cpu();
+- return 0;
+-}
+-
+-asmlinkage long sys32_set_thread_area(struct user_desc __user *u_info)
+-{
+- return do_set_thread_area(&current->thread, u_info);
+-}
+-
+-
+-/*
+- * Get the current Thread-Local Storage area:
+- */
+-
+-#define GET_BASE(desc) ( \
+- (((desc)->a >> 16) & 0x0000ffff) | \
+- (((desc)->b << 16) & 0x00ff0000) | \
+- ( (desc)->b & 0xff000000) )
+-
+-#define GET_LIMIT(desc) ( \
+- ((desc)->a & 0x0ffff) | \
+- ((desc)->b & 0xf0000) )
+-
+-#define GET_32BIT(desc) (((desc)->b >> 22) & 1)
+-#define GET_CONTENTS(desc) (((desc)->b >> 10) & 3)
+-#define GET_WRITABLE(desc) (((desc)->b >> 9) & 1)
+-#define GET_LIMIT_PAGES(desc) (((desc)->b >> 23) & 1)
+-#define GET_PRESENT(desc) (((desc)->b >> 15) & 1)
+-#define GET_USEABLE(desc) (((desc)->b >> 20) & 1)
+-#define GET_LONGMODE(desc) (((desc)->b >> 21) & 1)
+-
+-int do_get_thread_area(struct thread_struct *t, struct user_desc __user *u_info)
+-{
+- struct user_desc info;
+- struct n_desc_struct *desc;
+- int idx;
+-
+- if (get_user(idx, &u_info->entry_number))
+- return -EFAULT;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
+-
+- desc = ((struct n_desc_struct *)t->tls_array) + idx - GDT_ENTRY_TLS_MIN;
+-
+- memset(&info, 0, sizeof(struct user_desc));
+- info.entry_number = idx;
+- info.base_addr = GET_BASE(desc);
+- info.limit = GET_LIMIT(desc);
+- info.seg_32bit = GET_32BIT(desc);
+- info.contents = GET_CONTENTS(desc);
+- info.read_exec_only = !GET_WRITABLE(desc);
+- info.limit_in_pages = GET_LIMIT_PAGES(desc);
+- info.seg_not_present = !GET_PRESENT(desc);
+- info.useable = GET_USEABLE(desc);
+- info.lm = GET_LONGMODE(desc);
+-
+- if (copy_to_user(u_info, &info, sizeof(info)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-asmlinkage long sys32_get_thread_area(struct user_desc __user *u_info)
+-{
+- return do_get_thread_area(&current->thread, u_info);
+-}
+-
+-
+-int ia32_child_tls(struct task_struct *p, struct pt_regs *childregs)
+-{
+- struct n_desc_struct *desc;
+- struct user_desc info;
+- struct user_desc __user *cp;
+- int idx;
+-
+- cp = (void __user *)childregs->rsi;
+- if (copy_from_user(&info, cp, sizeof(info)))
+- return -EFAULT;
+- if (LDT_empty(&info))
+- return -EINVAL;
+-
+- idx = info.entry_number;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
+-
+- desc = (struct n_desc_struct *)(p->thread.tls_array) + idx - GDT_ENTRY_TLS_MIN;
+- desc->a = LDT_entry_a(&info);
+- desc->b = LDT_entry_b(&info);
+-
+- return 0;
+-}
+diff --git a/arch/x86/ia32/vsyscall-sigreturn.S b/arch/x86/ia32/vsyscall-sigreturn.S
+deleted file mode 100644
+index b383be0..0000000
+--- a/arch/x86/ia32/vsyscall-sigreturn.S
++++ /dev/null
+@@ -1,143 +0,0 @@
+-/*
+- * Common code for the sigreturn entry points on the vsyscall page.
+- * This code uses SYSCALL_ENTER_KERNEL (either syscall or int $0x80)
+- * to enter the kernel.
+- * This file is #include'd by vsyscall-*.S to define them after the
+- * vsyscall entry point. The addresses we get for these entry points
+- * by doing ".balign 32" must match in both versions of the page.
+- */
+-
+- .code32
+- .section .text.sigreturn,"ax"
+- .balign 32
+- .globl __kernel_sigreturn
+- .type __kernel_sigreturn,@function
+-__kernel_sigreturn:
+-.LSTART_sigreturn:
+- popl %eax
+- movl $__NR_ia32_sigreturn, %eax
+- SYSCALL_ENTER_KERNEL
+-.LEND_sigreturn:
+- .size __kernel_sigreturn,.-.LSTART_sigreturn
+-
+- .section .text.rtsigreturn,"ax"
+- .balign 32
+- .globl __kernel_rt_sigreturn
+- .type __kernel_rt_sigreturn,@function
+-__kernel_rt_sigreturn:
+-.LSTART_rt_sigreturn:
+- movl $__NR_ia32_rt_sigreturn, %eax
+- SYSCALL_ENTER_KERNEL
+-.LEND_rt_sigreturn:
+- .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
+-
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAMES:
+- .long .LENDCIES-.LSTARTCIES
+-.LSTARTCIES:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zRS" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0x0c /* DW_CFA_def_cfa */
+- .uleb128 4
+- .uleb128 4
+- .byte 0x88 /* DW_CFA_offset, column 0x8 */
+- .uleb128 1
+- .align 4
+-.LENDCIES:
+-
+- .long .LENDFDE2-.LSTARTFDE2 /* Length FDE */
+-.LSTARTFDE2:
+- .long .LSTARTFDE2-.LSTARTFRAMES /* CIE pointer */
+- /* HACK: The dwarf2 unwind routines will subtract 1 from the
+- return address to get an address in the middle of the
+- presumed call instruction. Since we didn't get here via
+- a call, we need to include the nop before the real start
+- to make up for it. */
+- .long .LSTART_sigreturn-1-. /* PC-relative start address */
+- .long .LEND_sigreturn-.LSTART_sigreturn+1
+- .uleb128 0 /* Augmentation length */
+- /* What follows are the instructions for the table generation.
+- We record the locations of each register saved. This is
+- complicated by the fact that the "CFA" is always assumed to
+- be the value of the stack pointer in the caller. This means
+- that we must define the CFA of this body of code to be the
+- saved value of the stack pointer in the sigcontext. Which
+- also means that there is no fixed relation to the other
+- saved registers, which means that we must use DW_CFA_expression
+- to compute their addresses. It also means that when we
+- adjust the stack with the popl, we have to do it all over again. */
+-
+-#define do_cfa_expr(offset) \
+- .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
+- .uleb128 1f-0f; /* length */ \
+-0: .byte 0x74; /* DW_OP_breg4 */ \
+- .sleb128 offset; /* offset */ \
+- .byte 0x06; /* DW_OP_deref */ \
+-1:
+-
+-#define do_expr(regno, offset) \
+- .byte 0x10; /* DW_CFA_expression */ \
+- .uleb128 regno; /* regno */ \
+- .uleb128 1f-0f; /* length */ \
+-0: .byte 0x74; /* DW_OP_breg4 */ \
+- .sleb128 offset; /* offset */ \
+-1:
+-
+- do_cfa_expr(IA32_SIGCONTEXT_esp+4)
+- do_expr(0, IA32_SIGCONTEXT_eax+4)
+- do_expr(1, IA32_SIGCONTEXT_ecx+4)
+- do_expr(2, IA32_SIGCONTEXT_edx+4)
+- do_expr(3, IA32_SIGCONTEXT_ebx+4)
+- do_expr(5, IA32_SIGCONTEXT_ebp+4)
+- do_expr(6, IA32_SIGCONTEXT_esi+4)
+- do_expr(7, IA32_SIGCONTEXT_edi+4)
+- do_expr(8, IA32_SIGCONTEXT_eip+4)
+-
+- .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
+-
+- do_cfa_expr(IA32_SIGCONTEXT_esp)
+- do_expr(0, IA32_SIGCONTEXT_eax)
+- do_expr(1, IA32_SIGCONTEXT_ecx)
+- do_expr(2, IA32_SIGCONTEXT_edx)
+- do_expr(3, IA32_SIGCONTEXT_ebx)
+- do_expr(5, IA32_SIGCONTEXT_ebp)
+- do_expr(6, IA32_SIGCONTEXT_esi)
+- do_expr(7, IA32_SIGCONTEXT_edi)
+- do_expr(8, IA32_SIGCONTEXT_eip)
+-
+- .align 4
+-.LENDFDE2:
+-
+- .long .LENDFDE3-.LSTARTFDE3 /* Length FDE */
+-.LSTARTFDE3:
+- .long .LSTARTFDE3-.LSTARTFRAMES /* CIE pointer */
+- /* HACK: See above wrt unwind library assumptions. */
+- .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
+- .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
+- .uleb128 0 /* Augmentation */
+- /* What follows are the instructions for the table generation.
+- We record the locations of each register saved. This is
+- slightly less complicated than the above, since we don't
+- modify the stack pointer in the process. */
+-
+- do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_esp)
+- do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_eax)
+- do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ecx)
+- do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_edx)
+- do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ebx)
+- do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ebp)
+- do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_esi)
+- do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_edi)
+- do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_eip)
+-
+- .align 4
+-.LENDFDE3:
+-
+-#include "../../x86/kernel/vsyscall-note_32.S"
+-
+diff --git a/arch/x86/ia32/vsyscall-syscall.S b/arch/x86/ia32/vsyscall-syscall.S
+deleted file mode 100644
+index cf9ef67..0000000
+--- a/arch/x86/ia32/vsyscall-syscall.S
++++ /dev/null
+@@ -1,69 +0,0 @@
+-/*
+- * Code for the vsyscall page. This version uses the syscall instruction.
+- */
+-
+-#include <asm/ia32_unistd.h>
+-#include <asm/asm-offsets.h>
+-#include <asm/segment.h>
+-
+- .code32
+- .text
+- .section .text.vsyscall,"ax"
+- .globl __kernel_vsyscall
+- .type __kernel_vsyscall,@function
+-__kernel_vsyscall:
+-.LSTART_vsyscall:
+- push %ebp
+-.Lpush_ebp:
+- movl %ecx, %ebp
+- syscall
+- movl $__USER32_DS, %ecx
+- movl %ecx, %ss
+- movl %ebp, %ecx
+- popl %ebp
+-.Lpop_ebp:
+- ret
+-.LEND_vsyscall:
+- .size __kernel_vsyscall,.-.LSTART_vsyscall
+-
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAME:
+- .long .LENDCIE-.LSTARTCIE
+-.LSTARTCIE:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zR" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0x0c /* DW_CFA_def_cfa */
+- .uleb128 4
+- .uleb128 4
+- .byte 0x88 /* DW_CFA_offset, column 0x8 */
+- .uleb128 1
+- .align 4
+-.LENDCIE:
+-
+- .long .LENDFDE1-.LSTARTFDE1 /* Length FDE */
+-.LSTARTFDE1:
+- .long .LSTARTFDE1-.LSTARTFRAME /* CIE pointer */
+- .long .LSTART_vsyscall-. /* PC-relative start address */
+- .long .LEND_vsyscall-.LSTART_vsyscall
+- .uleb128 0 /* Augmentation length */
+- /* What follows are the instructions for the table generation.
+- We have to record all changes of the stack pointer. */
+- .byte 0x40 + .Lpush_ebp-.LSTART_vsyscall /* DW_CFA_advance_loc */
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .uleb128 8
+- .byte 0x85, 0x02 /* DW_CFA_offset %ebp -8 */
+- .byte 0x40 + .Lpop_ebp-.Lpush_ebp /* DW_CFA_advance_loc */
+- .byte 0xc5 /* DW_CFA_restore %ebp */
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .uleb128 4
+- .align 4
+-.LENDFDE1:
+-
+-#define SYSCALL_ENTER_KERNEL syscall
+-#include "vsyscall-sigreturn.S"
+diff --git a/arch/x86/ia32/vsyscall-sysenter.S b/arch/x86/ia32/vsyscall-sysenter.S
+deleted file mode 100644
+index ae056e5..0000000
+--- a/arch/x86/ia32/vsyscall-sysenter.S
++++ /dev/null
+@@ -1,95 +0,0 @@
+-/*
+- * Code for the vsyscall page. This version uses the sysenter instruction.
+- */
+-
+-#include <asm/ia32_unistd.h>
+-#include <asm/asm-offsets.h>
+-
+- .code32
+- .text
+- .section .text.vsyscall,"ax"
+- .globl __kernel_vsyscall
+- .type __kernel_vsyscall,@function
+-__kernel_vsyscall:
+-.LSTART_vsyscall:
+- push %ecx
+-.Lpush_ecx:
+- push %edx
+-.Lpush_edx:
+- push %ebp
+-.Lenter_kernel:
+- movl %esp,%ebp
+- sysenter
+- .space 7,0x90
+- jmp .Lenter_kernel
+- /* 16: System call normal return point is here! */
+- pop %ebp
+-.Lpop_ebp:
+- pop %edx
+-.Lpop_edx:
+- pop %ecx
+-.Lpop_ecx:
+- ret
+-.LEND_vsyscall:
+- .size __kernel_vsyscall,.-.LSTART_vsyscall
+-
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAME:
+- .long .LENDCIE-.LSTARTCIE
+-.LSTARTCIE:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zR" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0x0c /* DW_CFA_def_cfa */
+- .uleb128 4
+- .uleb128 4
+- .byte 0x88 /* DW_CFA_offset, column 0x8 */
+- .uleb128 1
+- .align 4
+-.LENDCIE:
+-
+- .long .LENDFDE1-.LSTARTFDE1 /* Length FDE */
+-.LSTARTFDE1:
+- .long .LSTARTFDE1-.LSTARTFRAME /* CIE pointer */
+- .long .LSTART_vsyscall-. /* PC-relative start address */
+- .long .LEND_vsyscall-.LSTART_vsyscall
+- .uleb128 0 /* Augmentation length */
+- /* What follows are the instructions for the table generation.
+- We have to record all changes of the stack pointer. */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpush_ecx-.LSTART_vsyscall
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x08 /* RA at offset 8 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpush_edx-.Lpush_ecx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x0c /* RA at offset 12 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lenter_kernel-.Lpush_edx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x10 /* RA at offset 16 now */
+- .byte 0x85, 0x04 /* DW_CFA_offset %ebp -16 */
+- /* Finally the epilogue. */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_ebp-.Lenter_kernel
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x12 /* RA at offset 12 now */
+- .byte 0xc5 /* DW_CFA_restore %ebp */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_edx-.Lpop_ebp
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x08 /* RA at offset 8 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_ecx-.Lpop_edx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x04 /* RA at offset 4 now */
+- .align 4
+-.LENDFDE1:
+-
+-#define SYSCALL_ENTER_KERNEL int $0x80
+-#include "vsyscall-sigreturn.S"
+diff --git a/arch/x86/ia32/vsyscall.lds b/arch/x86/ia32/vsyscall.lds
+deleted file mode 100644
+index 1dc86ff..0000000
+--- a/arch/x86/ia32/vsyscall.lds
++++ /dev/null
+@@ -1,80 +0,0 @@
+-/*
+- * Linker script for vsyscall DSO. The vsyscall page is an ELF shared
+- * object prelinked to its virtual address. This script controls its layout.
+- */
+-
+-/* This must match <asm/fixmap.h>. */
+-VSYSCALL_BASE = 0xffffe000;
+-
+-SECTIONS
+-{
+- . = VSYSCALL_BASE + SIZEOF_HEADERS;
+-
+- .hash : { *(.hash) } :text
+- .gnu.hash : { *(.gnu.hash) }
+- .dynsym : { *(.dynsym) }
+- .dynstr : { *(.dynstr) }
+- .gnu.version : { *(.gnu.version) }
+- .gnu.version_d : { *(.gnu.version_d) }
+- .gnu.version_r : { *(.gnu.version_r) }
+-
+- /* This linker script is used both with -r and with -shared.
+- For the layouts to match, we need to skip more than enough
+- space for the dynamic symbol table et al. If this amount
+- is insufficient, ld -shared will barf. Just increase it here. */
+- . = VSYSCALL_BASE + 0x400;
+-
+- .text.vsyscall : { *(.text.vsyscall) } :text =0x90909090
+-
+- /* This is an 32bit object and we cannot easily get the offsets
+- into the 64bit kernel. Just hardcode them here. This assumes
+- that all the stubs don't need more than 0x100 bytes. */
+- . = VSYSCALL_BASE + 0x500;
+-
+- .text.sigreturn : { *(.text.sigreturn) } :text =0x90909090
+-
+- . = VSYSCALL_BASE + 0x600;
+-
+- .text.rtsigreturn : { *(.text.rtsigreturn) } :text =0x90909090
+-
+- .note : { *(.note.*) } :text :note
+- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+- .eh_frame : { KEEP (*(.eh_frame)) } :text
+- .dynamic : { *(.dynamic) } :text :dynamic
+- .useless : {
+- *(.got.plt) *(.got)
+- *(.data .data.* .gnu.linkonce.d.*)
+- *(.dynbss)
+- *(.bss .bss.* .gnu.linkonce.b.*)
+- } :text
+-}
+-
+-/*
+- * We must supply the ELF program headers explicitly to get just one
+- * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+- */
+-PHDRS
+-{
+- text PT_LOAD FILEHDR PHDRS FLAGS(5); /* PF_R|PF_X */
+- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+- note PT_NOTE FLAGS(4); /* PF_R */
+- eh_frame_hdr 0x6474e550; /* PT_GNU_EH_FRAME, but ld doesn't match the name */
+-}
+-
+-/*
+- * This controls what symbols we export from the DSO.
+- */
+-VERSION
+-{
+- LINUX_2.5 {
+- global:
+- __kernel_vsyscall;
+- __kernel_sigreturn;
+- __kernel_rt_sigreturn;
+-
+- local: *;
+- };
+-}
+-
+-/* The ELF entry point can be used to set the AT_SYSINFO value. */
+-ENTRY(__kernel_vsyscall);
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index 3857334..6f81300 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -1,9 +1,91 @@
+-ifeq ($(CONFIG_X86_32),y)
+-include ${srctree}/arch/x86/kernel/Makefile_32
+-else
+-include ${srctree}/arch/x86/kernel/Makefile_64
++#
++# Makefile for the linux kernel.
++#
++
++extra-y := head_$(BITS).o init_task.o vmlinux.lds
++extra-$(CONFIG_X86_64) += head64.o
++
++CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
++CFLAGS_vsyscall_64.o := $(PROFILING) -g0
++
++obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
++obj-y += traps_$(BITS).o irq_$(BITS).o
++obj-y += time_$(BITS).o ioport.o ldt.o
++obj-y += setup_$(BITS).o i8259_$(BITS).o
++obj-$(CONFIG_X86_32) += sys_i386_32.o i386_ksyms_32.o
++obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o
++obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o setup64.o
++obj-y += pci-dma_$(BITS).o bootflag.o e820_$(BITS).o
++obj-y += quirks.o i8237.o topology.o kdebugfs.o
++obj-y += alternative.o i8253.o
++obj-$(CONFIG_X86_64) += pci-nommu_64.o bugs_64.o
++obj-y += tsc_$(BITS).o io_delay.o rtc.o
++
++obj-y += i387.o
++obj-y += ptrace.o
++obj-y += ds.o
++obj-$(CONFIG_X86_32) += tls.o
++obj-$(CONFIG_IA32_EMULATION) += tls.o
++obj-y += step.o
++obj-$(CONFIG_STACKTRACE) += stacktrace.o
++obj-y += cpu/
++obj-y += acpi/
++obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o
++obj-$(CONFIG_X86_64) += reboot.o
++obj-$(CONFIG_MCA) += mca_32.o
++obj-$(CONFIG_X86_MSR) += msr.o
++obj-$(CONFIG_X86_CPUID) += cpuid.o
++obj-$(CONFIG_MICROCODE) += microcode.o
++obj-$(CONFIG_PCI) += early-quirks.o
++obj-$(CONFIG_APM) += apm_32.o
++obj-$(CONFIG_X86_SMP) += smp_$(BITS).o smpboot_$(BITS).o tsc_sync.o
++obj-$(CONFIG_X86_32_SMP) += smpcommon_32.o
++obj-$(CONFIG_X86_64_SMP) += smp_64.o smpboot_64.o tsc_sync.o
++obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_$(BITS).o
++obj-$(CONFIG_X86_MPPARSE) += mpparse_$(BITS).o
++obj-$(CONFIG_X86_LOCAL_APIC) += apic_$(BITS).o nmi_$(BITS).o
++obj-$(CONFIG_X86_IO_APIC) += io_apic_$(BITS).o
++obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
++obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
++obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
++obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
++obj-$(CONFIG_X86_NUMAQ) += numaq_32.o
++obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
++obj-$(CONFIG_X86_VSMP) += vsmp_64.o
++obj-$(CONFIG_KPROBES) += kprobes.o
++obj-$(CONFIG_MODULES) += module_$(BITS).o
++obj-$(CONFIG_ACPI_SRAT) += srat_32.o
++obj-$(CONFIG_EFI) += efi.o efi_$(BITS).o efi_stub_$(BITS).o
++obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
++obj-$(CONFIG_VM86) += vm86_32.o
++obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
++
++obj-$(CONFIG_HPET_TIMER) += hpet.o
++
++obj-$(CONFIG_K8_NB) += k8.o
++obj-$(CONFIG_MGEODE_LX) += geode_32.o mfgpt_32.o
++obj-$(CONFIG_DEBUG_RODATA_TEST) += test_rodata.o
++obj-$(CONFIG_DEBUG_NX_TEST) += test_nx.o
++
++obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o
++obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o
++
++ifdef CONFIG_INPUT_PCSPKR
++obj-y += pcspeaker.o
+ endif
+
+-# Workaround to delete .lds files with make clean
+-# The problem is that we do not enter Makefile_32 with make clean.
+-clean-files := vsyscall*.lds vsyscall*.so
++obj-$(CONFIG_SCx200) += scx200_32.o
++
++###
++# 64 bit specific files
++ifeq ($(CONFIG_X86_64),y)
++ obj-y += genapic_64.o genapic_flat_64.o
++ obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
++ obj-$(CONFIG_AUDIT) += audit_64.o
++ obj-$(CONFIG_PM) += suspend_64.o
++ obj-$(CONFIG_HIBERNATION) += suspend_asm_64.o
++
++ obj-$(CONFIG_GART_IOMMU) += pci-gart_64.o aperture_64.o
++ obj-$(CONFIG_CALGARY_IOMMU) += pci-calgary_64.o tce_64.o
++ obj-$(CONFIG_SWIOTLB) += pci-swiotlb_64.o
++endif
+diff --git a/arch/x86/kernel/Makefile_32 b/arch/x86/kernel/Makefile_32
+deleted file mode 100644
+index a7bc93c..0000000
+--- a/arch/x86/kernel/Makefile_32
++++ /dev/null
+@@ -1,88 +0,0 @@
+-#
+-# Makefile for the linux kernel.
+-#
+-
+-extra-y := head_32.o init_task.o vmlinux.lds
+-CPPFLAGS_vmlinux.lds += -Ui386
+-
+-obj-y := process_32.o signal_32.o entry_32.o traps_32.o irq_32.o \
+- ptrace_32.o time_32.o ioport_32.o ldt_32.o setup_32.o i8259_32.o sys_i386_32.o \
+- pci-dma_32.o i386_ksyms_32.o i387_32.o bootflag.o e820_32.o\
+- quirks.o i8237.o topology.o alternative.o i8253.o tsc_32.o
+-
+-obj-$(CONFIG_STACKTRACE) += stacktrace.o
+-obj-y += cpu/
+-obj-y += acpi/
+-obj-$(CONFIG_X86_BIOS_REBOOT) += reboot_32.o
+-obj-$(CONFIG_MCA) += mca_32.o
+-obj-$(CONFIG_X86_MSR) += msr.o
+-obj-$(CONFIG_X86_CPUID) += cpuid.o
+-obj-$(CONFIG_MICROCODE) += microcode.o
+-obj-$(CONFIG_PCI) += early-quirks.o
+-obj-$(CONFIG_APM) += apm_32.o
+-obj-$(CONFIG_X86_SMP) += smp_32.o smpboot_32.o tsc_sync.o
+-obj-$(CONFIG_SMP) += smpcommon_32.o
+-obj-$(CONFIG_X86_TRAMPOLINE) += trampoline_32.o
+-obj-$(CONFIG_X86_MPPARSE) += mpparse_32.o
+-obj-$(CONFIG_X86_LOCAL_APIC) += apic_32.o nmi_32.o
+-obj-$(CONFIG_X86_IO_APIC) += io_apic_32.o
+-obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
+-obj-$(CONFIG_KEXEC) += machine_kexec_32.o relocate_kernel_32.o crash.o
+-obj-$(CONFIG_CRASH_DUMP) += crash_dump_32.o
+-obj-$(CONFIG_X86_NUMAQ) += numaq_32.o
+-obj-$(CONFIG_X86_SUMMIT_NUMA) += summit_32.o
+-obj-$(CONFIG_KPROBES) += kprobes_32.o
+-obj-$(CONFIG_MODULES) += module_32.o
+-obj-y += sysenter_32.o vsyscall_32.o
+-obj-$(CONFIG_ACPI_SRAT) += srat_32.o
+-obj-$(CONFIG_EFI) += efi_32.o efi_stub_32.o
+-obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o
+-obj-$(CONFIG_VM86) += vm86_32.o
+-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+-obj-$(CONFIG_HPET_TIMER) += hpet.o
+-obj-$(CONFIG_K8_NB) += k8.o
+-obj-$(CONFIG_MGEODE_LX) += geode_32.o mfgpt_32.o
+-
+-obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o
+-obj-$(CONFIG_PARAVIRT) += paravirt_32.o
+-obj-y += pcspeaker.o
+-
+-obj-$(CONFIG_SCx200) += scx200_32.o
+-
+-# vsyscall_32.o contains the vsyscall DSO images as __initdata.
+-# We must build both images before we can assemble it.
+-# Note: kbuild does not track this dependency due to usage of .incbin
+-$(obj)/vsyscall_32.o: $(obj)/vsyscall-int80_32.so $(obj)/vsyscall-sysenter_32.so
+-targets += $(foreach F,int80 sysenter,vsyscall-$F_32.o vsyscall-$F_32.so)
+-targets += vsyscall-note_32.o vsyscall_32.lds
+-
+-# The DSO images are built using a special linker script.
+-quiet_cmd_syscall = SYSCALL $@
+- cmd_syscall = $(CC) -m elf_i386 -nostdlib $(SYSCFLAGS_$(@F)) \
+- -Wl,-T,$(filter-out FORCE,$^) -o $@
+-
+-export CPPFLAGS_vsyscall_32.lds += -P -C -Ui386
+-
+-vsyscall-flags = -shared -s -Wl,-soname=linux-gate.so.1 \
+- $(call ld-option, -Wl$(comma)--hash-style=sysv)
+-SYSCFLAGS_vsyscall-sysenter_32.so = $(vsyscall-flags)
+-SYSCFLAGS_vsyscall-int80_32.so = $(vsyscall-flags)
+-
+-$(obj)/vsyscall-int80_32.so $(obj)/vsyscall-sysenter_32.so: \
+-$(obj)/vsyscall-%.so: $(src)/vsyscall_32.lds \
+- $(obj)/vsyscall-%.o $(obj)/vsyscall-note_32.o FORCE
+- $(call if_changed,syscall)
+-
+-# We also create a special relocatable object that should mirror the symbol
+-# table and layout of the linked DSO. With ld -R we can then refer to
+-# these symbols in the kernel code rather than hand-coded addresses.
+-extra-y += vsyscall-syms.o
+-$(obj)/built-in.o: $(obj)/vsyscall-syms.o
+-$(obj)/built-in.o: ld_flags += -R $(obj)/vsyscall-syms.o
+-
+-SYSCFLAGS_vsyscall-syms.o = -r
+-$(obj)/vsyscall-syms.o: $(src)/vsyscall_32.lds \
+- $(obj)/vsyscall-sysenter_32.o $(obj)/vsyscall-note_32.o FORCE
+- $(call if_changed,syscall)
+-
+-
+diff --git a/arch/x86/kernel/Makefile_64 b/arch/x86/kernel/Makefile_64
+deleted file mode 100644
+index 5a88890..0000000
+--- a/arch/x86/kernel/Makefile_64
++++ /dev/null
+@@ -1,45 +0,0 @@
+-#
+-# Makefile for the linux kernel.
+-#
+-
+-extra-y := head_64.o head64.o init_task.o vmlinux.lds
+-CPPFLAGS_vmlinux.lds += -Ux86_64
+-EXTRA_AFLAGS := -traditional
+-
+-obj-y := process_64.o signal_64.o entry_64.o traps_64.o irq_64.o \
+- ptrace_64.o time_64.o ioport_64.o ldt_64.o setup_64.o i8259_64.o sys_x86_64.o \
+- x8664_ksyms_64.o i387_64.o syscall_64.o vsyscall_64.o \
+- setup64.o bootflag.o e820_64.o reboot_64.o quirks.o i8237.o \
+- pci-dma_64.o pci-nommu_64.o alternative.o hpet.o tsc_64.o bugs_64.o \
+- i8253.o
+-
+-obj-$(CONFIG_STACKTRACE) += stacktrace.o
+-obj-y += cpu/
+-obj-y += acpi/
+-obj-$(CONFIG_X86_MSR) += msr.o
+-obj-$(CONFIG_MICROCODE) += microcode.o
+-obj-$(CONFIG_X86_CPUID) += cpuid.o
+-obj-$(CONFIG_SMP) += smp_64.o smpboot_64.o trampoline_64.o tsc_sync.o
+-obj-y += apic_64.o nmi_64.o
+-obj-y += io_apic_64.o mpparse_64.o genapic_64.o genapic_flat_64.o
+-obj-$(CONFIG_KEXEC) += machine_kexec_64.o relocate_kernel_64.o crash.o
+-obj-$(CONFIG_CRASH_DUMP) += crash_dump_64.o
+-obj-$(CONFIG_PM) += suspend_64.o
+-obj-$(CONFIG_HIBERNATION) += suspend_asm_64.o
+-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+-obj-$(CONFIG_GART_IOMMU) += pci-gart_64.o aperture_64.o
+-obj-$(CONFIG_CALGARY_IOMMU) += pci-calgary_64.o tce_64.o
+-obj-$(CONFIG_SWIOTLB) += pci-swiotlb_64.o
+-obj-$(CONFIG_KPROBES) += kprobes_64.o
+-obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
+-obj-$(CONFIG_X86_VSMP) += vsmp_64.o
+-obj-$(CONFIG_K8_NB) += k8.o
+-obj-$(CONFIG_AUDIT) += audit_64.o
+-
+-obj-$(CONFIG_MODULES) += module_64.o
+-obj-$(CONFIG_PCI) += early-quirks.o
+-
+-obj-y += topology.o
+-obj-y += pcspeaker.o
+-
+-CFLAGS_vsyscall_64.o := $(PROFILING) -g0
+diff --git a/arch/x86/kernel/acpi/Makefile b/arch/x86/kernel/acpi/Makefile
+index 1351c39..19d3d6e 100644
+--- a/arch/x86/kernel/acpi/Makefile
++++ b/arch/x86/kernel/acpi/Makefile
+@@ -1,5 +1,5 @@
+ obj-$(CONFIG_ACPI) += boot.o
+-obj-$(CONFIG_ACPI_SLEEP) += sleep_$(BITS).o wakeup_$(BITS).o
++obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup_$(BITS).o
+
+ ifneq ($(CONFIG_ACPI_PROCESSOR),)
+ obj-y += cstate.o processor.o
+diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
new file mode 100644
-index 0000000..5f74fec
+index 0000000..6bc815c
--- /dev/null
-+++ b/block/blk-barrier.c
-@@ -0,0 +1,319 @@
++++ b/arch/x86/kernel/acpi/sleep.c
+@@ -0,0 +1,87 @@
+/*
-+ * Functions related to barrier IO handling
++ * sleep.c - x86-specific ACPI sleep support.
++ *
++ * Copyright (C) 2001-2003 Patrick Mochel
++ * Copyright (C) 2001-2003 Pavel Machek <pavel@suse.cz>
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
+
-+#include "blk.h"
++#include <linux/acpi.h>
++#include <linux/bootmem.h>
++#include <linux/dmi.h>
++#include <linux/cpumask.h>
++
++#include <asm/smp.h>
++
++/* address in low memory of the wakeup routine. */
++unsigned long acpi_wakeup_address = 0;
++unsigned long acpi_realmode_flags;
++extern char wakeup_start, wakeup_end;
++
++extern unsigned long acpi_copy_wakeup_routine(unsigned long);
+
+/**
-+ * blk_queue_ordered - does this queue support ordered writes
-+ * @q: the request queue
-+ * @ordered: one of QUEUE_ORDERED_*
-+ * @prepare_flush_fn: rq setup helper for cache flush ordered writes
++ * acpi_save_state_mem - save kernel state
+ *
-+ * Description:
-+ * For journalled file systems, doing ordered writes on a commit
-+ * block instead of explicitly doing wait_on_buffer (which is bad
-+ * for performance) can be a big win. Block drivers supporting this
-+ * feature should call this function and indicate so.
-+ *
-+ **/
-+int blk_queue_ordered(struct request_queue *q, unsigned ordered,
-+ prepare_flush_fn *prepare_flush_fn)
++ * Create an identity mapped page table and copy the wakeup routine to
++ * low memory.
++ */
++int acpi_save_state_mem(void)
+{
-+ if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) &&
-+ prepare_flush_fn == NULL) {
-+ printk(KERN_ERR "blk_queue_ordered: prepare_flush_fn required\n");
-+ return -EINVAL;
-+ }
-+
-+ if (ordered != QUEUE_ORDERED_NONE &&
-+ ordered != QUEUE_ORDERED_DRAIN &&
-+ ordered != QUEUE_ORDERED_DRAIN_FLUSH &&
-+ ordered != QUEUE_ORDERED_DRAIN_FUA &&
-+ ordered != QUEUE_ORDERED_TAG &&
-+ ordered != QUEUE_ORDERED_TAG_FLUSH &&
-+ ordered != QUEUE_ORDERED_TAG_FUA) {
-+ printk(KERN_ERR "blk_queue_ordered: bad value %d\n", ordered);
-+ return -EINVAL;
++ if (!acpi_wakeup_address) {
++ printk(KERN_ERR "Could not allocate memory during boot, S3 disabled\n");
++ return -ENOMEM;
+ }
-+
-+ q->ordered = ordered;
-+ q->next_ordered = ordered;
-+ q->prepare_flush_fn = prepare_flush_fn;
++ memcpy((void *)acpi_wakeup_address, &wakeup_start,
++ &wakeup_end - &wakeup_start);
++ acpi_copy_wakeup_routine(acpi_wakeup_address);
+
+ return 0;
+}
+
-+EXPORT_SYMBOL(blk_queue_ordered);
-+
+/*
-+ * Cache flushing for ordered writes handling
++ * acpi_restore_state - undo effects of acpi_save_state_mem
+ */
-+inline unsigned blk_ordered_cur_seq(struct request_queue *q)
++void acpi_restore_state_mem(void)
+{
-+ if (!q->ordseq)
-+ return 0;
-+ return 1 << ffz(q->ordseq);
+}
+
-+unsigned blk_ordered_req_seq(struct request *rq)
++
++/**
++ * acpi_reserve_bootmem - do _very_ early ACPI initialisation
++ *
++ * We allocate a page from the first 1MB of memory for the wakeup
++ * routine for when we come back from a sleep state. The
++ * runtime allocator allows specification of <16MB pages, but not
++ * <1MB pages.
++ */
++void __init acpi_reserve_bootmem(void)
+{
-+ struct request_queue *q = rq->q;
++ if ((&wakeup_end - &wakeup_start) > PAGE_SIZE*2) {
++ printk(KERN_ERR
++ "ACPI: Wakeup code way too big, S3 disabled.\n");
++ return;
++ }
+
-+ BUG_ON(q->ordseq == 0);
++ acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE*2);
++ if (!acpi_wakeup_address)
++ printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
++}
+
-+ if (rq == &q->pre_flush_rq)
-+ return QUEUE_ORDSEQ_PREFLUSH;
-+ if (rq == &q->bar_rq)
-+ return QUEUE_ORDSEQ_BAR;
-+ if (rq == &q->post_flush_rq)
-+ return QUEUE_ORDSEQ_POSTFLUSH;
+
-+ /*
-+ * !fs requests don't need to follow barrier ordering. Always
-+ * put them at the front. This fixes the following deadlock.
++static int __init acpi_sleep_setup(char *str)
++{
++ while ((str != NULL) && (*str != '\0')) {
++ if (strncmp(str, "s3_bios", 7) == 0)
++ acpi_realmode_flags |= 1;
++ if (strncmp(str, "s3_mode", 7) == 0)
++ acpi_realmode_flags |= 2;
++ if (strncmp(str, "s3_beep", 7) == 0)
++ acpi_realmode_flags |= 4;
++ str = strchr(str, ',');
++ if (str != NULL)
++ str += strspn(str, ", \t");
++ }
++ return 1;
++}
++
++__setup("acpi_sleep=", acpi_sleep_setup);
+diff --git a/arch/x86/kernel/acpi/sleep_32.c b/arch/x86/kernel/acpi/sleep_32.c
+index 1069948..63fe552 100644
+--- a/arch/x86/kernel/acpi/sleep_32.c
++++ b/arch/x86/kernel/acpi/sleep_32.c
+@@ -12,76 +12,6 @@
+
+ #include <asm/smp.h>
+
+-/* address in low memory of the wakeup routine. */
+-unsigned long acpi_wakeup_address = 0;
+-unsigned long acpi_realmode_flags;
+-extern char wakeup_start, wakeup_end;
+-
+-extern unsigned long FASTCALL(acpi_copy_wakeup_routine(unsigned long));
+-
+-/**
+- * acpi_save_state_mem - save kernel state
+- *
+- * Create an identity mapped page table and copy the wakeup routine to
+- * low memory.
+- */
+-int acpi_save_state_mem(void)
+-{
+- if (!acpi_wakeup_address)
+- return 1;
+- memcpy((void *)acpi_wakeup_address, &wakeup_start,
+- &wakeup_end - &wakeup_start);
+- acpi_copy_wakeup_routine(acpi_wakeup_address);
+-
+- return 0;
+-}
+-
+-/*
+- * acpi_restore_state - undo effects of acpi_save_state_mem
+- */
+-void acpi_restore_state_mem(void)
+-{
+-}
+-
+-/**
+- * acpi_reserve_bootmem - do _very_ early ACPI initialisation
+- *
+- * We allocate a page from the first 1MB of memory for the wakeup
+- * routine for when we come back from a sleep state. The
+- * runtime allocator allows specification of <16MB pages, but not
+- * <1MB pages.
+- */
+-void __init acpi_reserve_bootmem(void)
+-{
+- if ((&wakeup_end - &wakeup_start) > PAGE_SIZE) {
+- printk(KERN_ERR
+- "ACPI: Wakeup code way too big, S3 disabled.\n");
+- return;
+- }
+-
+- acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE);
+- if (!acpi_wakeup_address)
+- printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
+-}
+-
+-static int __init acpi_sleep_setup(char *str)
+-{
+- while ((str != NULL) && (*str != '\0')) {
+- if (strncmp(str, "s3_bios", 7) == 0)
+- acpi_realmode_flags |= 1;
+- if (strncmp(str, "s3_mode", 7) == 0)
+- acpi_realmode_flags |= 2;
+- if (strncmp(str, "s3_beep", 7) == 0)
+- acpi_realmode_flags |= 4;
+- str = strchr(str, ',');
+- if (str != NULL)
+- str += strspn(str, ", \t");
+- }
+- return 1;
+-}
+-
+-__setup("acpi_sleep=", acpi_sleep_setup);
+-
+ /* Ouch, we want to delete this. We already have better version in userspace, in
+ s2ram from suspend.sf.net project */
+ static __init int reset_videomode_after_s3(const struct dmi_system_id *d)
+diff --git a/arch/x86/kernel/acpi/sleep_64.c b/arch/x86/kernel/acpi/sleep_64.c
+deleted file mode 100644
+index da42de2..0000000
+--- a/arch/x86/kernel/acpi/sleep_64.c
++++ /dev/null
+@@ -1,117 +0,0 @@
+-/*
+- * acpi.c - Architecture-Specific Low-Level ACPI Support
+- *
+- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+- * Copyright (C) 2001 Jun Nakajima <jun.nakajima@intel.com>
+- * Copyright (C) 2001 Patrick Mochel <mochel@osdl.org>
+- * Copyright (C) 2002 Andi Kleen, SuSE Labs (x86-64 port)
+- * Copyright (C) 2003 Pavel Machek, SuSE Labs
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- */
+-
+-#include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/types.h>
+-#include <linux/stddef.h>
+-#include <linux/slab.h>
+-#include <linux/pci.h>
+-#include <linux/bootmem.h>
+-#include <linux/acpi.h>
+-#include <linux/cpumask.h>
+-
+-#include <asm/mpspec.h>
+-#include <asm/io.h>
+-#include <asm/apic.h>
+-#include <asm/apicdef.h>
+-#include <asm/page.h>
+-#include <asm/pgtable.h>
+-#include <asm/pgalloc.h>
+-#include <asm/io_apic.h>
+-#include <asm/proto.h>
+-#include <asm/tlbflush.h>
+-
+-/* --------------------------------------------------------------------------
+- Low-Level Sleep Support
+- -------------------------------------------------------------------------- */
+-
+-/* address in low memory of the wakeup routine. */
+-unsigned long acpi_wakeup_address = 0;
+-unsigned long acpi_realmode_flags;
+-extern char wakeup_start, wakeup_end;
+-
+-extern unsigned long acpi_copy_wakeup_routine(unsigned long);
+-
+-/**
+- * acpi_save_state_mem - save kernel state
+- *
+- * Create an identity mapped page table and copy the wakeup routine to
+- * low memory.
+- */
+-int acpi_save_state_mem(void)
+-{
+- memcpy((void *)acpi_wakeup_address, &wakeup_start,
+- &wakeup_end - &wakeup_start);
+- acpi_copy_wakeup_routine(acpi_wakeup_address);
+-
+- return 0;
+-}
+-
+-/*
+- * acpi_restore_state
+- */
+-void acpi_restore_state_mem(void)
+-{
+-}
+-
+-/**
+- * acpi_reserve_bootmem - do _very_ early ACPI initialisation
+- *
+- * We allocate a page in low memory for the wakeup
+- * routine for when we come back from a sleep state. The
+- * runtime allocator allows specification of <16M pages, but not
+- * <1M pages.
+- */
+-void __init acpi_reserve_bootmem(void)
+-{
+- acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE*2);
+- if ((&wakeup_end - &wakeup_start) > (PAGE_SIZE*2))
+- printk(KERN_CRIT
+- "ACPI: Wakeup code way too big, will crash on attempt"
+- " to suspend\n");
+-}
+-
+-static int __init acpi_sleep_setup(char *str)
+-{
+- while ((str != NULL) && (*str != '\0')) {
+- if (strncmp(str, "s3_bios", 7) == 0)
+- acpi_realmode_flags |= 1;
+- if (strncmp(str, "s3_mode", 7) == 0)
+- acpi_realmode_flags |= 2;
+- if (strncmp(str, "s3_beep", 7) == 0)
+- acpi_realmode_flags |= 4;
+- str = strchr(str, ',');
+- if (str != NULL)
+- str += strspn(str, ", \t");
+- }
+- return 1;
+-}
+-
+-__setup("acpi_sleep=", acpi_sleep_setup);
+-
+diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
+index 1e931aa..f53e327 100644
+--- a/arch/x86/kernel/acpi/wakeup_32.S
++++ b/arch/x86/kernel/acpi/wakeup_32.S
+@@ -1,4 +1,4 @@
+-.text
++ .section .text.page_aligned
+ #include <linux/linkage.h>
+ #include <asm/segment.h>
+ #include <asm/page.h>
+diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
+index 5ed3bc5..2e1b9e0 100644
+--- a/arch/x86/kernel/acpi/wakeup_64.S
++++ b/arch/x86/kernel/acpi/wakeup_64.S
+@@ -344,13 +344,13 @@ do_suspend_lowlevel:
+ call save_processor_state
+
+ movq $saved_context, %rax
+- movq %rsp, pt_regs_rsp(%rax)
+- movq %rbp, pt_regs_rbp(%rax)
+- movq %rsi, pt_regs_rsi(%rax)
+- movq %rdi, pt_regs_rdi(%rax)
+- movq %rbx, pt_regs_rbx(%rax)
+- movq %rcx, pt_regs_rcx(%rax)
+- movq %rdx, pt_regs_rdx(%rax)
++ movq %rsp, pt_regs_sp(%rax)
++ movq %rbp, pt_regs_bp(%rax)
++ movq %rsi, pt_regs_si(%rax)
++ movq %rdi, pt_regs_di(%rax)
++ movq %rbx, pt_regs_bx(%rax)
++ movq %rcx, pt_regs_cx(%rax)
++ movq %rdx, pt_regs_dx(%rax)
+ movq %r8, pt_regs_r8(%rax)
+ movq %r9, pt_regs_r9(%rax)
+ movq %r10, pt_regs_r10(%rax)
+@@ -360,7 +360,7 @@ do_suspend_lowlevel:
+ movq %r14, pt_regs_r14(%rax)
+ movq %r15, pt_regs_r15(%rax)
+ pushfq
+- popq pt_regs_eflags(%rax)
++ popq pt_regs_flags(%rax)
+
+ movq $.L97, saved_rip(%rip)
+
+@@ -391,15 +391,15 @@ do_suspend_lowlevel:
+ movq %rbx, %cr2
+ movq saved_context_cr0(%rax), %rbx
+ movq %rbx, %cr0
+- pushq pt_regs_eflags(%rax)
++ pushq pt_regs_flags(%rax)
+ popfq
+- movq pt_regs_rsp(%rax), %rsp
+- movq pt_regs_rbp(%rax), %rbp
+- movq pt_regs_rsi(%rax), %rsi
+- movq pt_regs_rdi(%rax), %rdi
+- movq pt_regs_rbx(%rax), %rbx
+- movq pt_regs_rcx(%rax), %rcx
+- movq pt_regs_rdx(%rax), %rdx
++ movq pt_regs_sp(%rax), %rsp
++ movq pt_regs_bp(%rax), %rbp
++ movq pt_regs_si(%rax), %rsi
++ movq pt_regs_di(%rax), %rdi
++ movq pt_regs_bx(%rax), %rbx
++ movq pt_regs_cx(%rax), %rcx
++ movq pt_regs_dx(%rax), %rdx
+ movq pt_regs_r8(%rax), %r8
+ movq pt_regs_r9(%rax), %r9
+ movq pt_regs_r10(%rax), %r10
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index d6405e0..45d79ea 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -273,6 +273,7 @@ struct smp_alt_module {
+ };
+ static LIST_HEAD(smp_alt_modules);
+ static DEFINE_SPINLOCK(smp_alt);
++static int smp_mode = 1; /* protected by smp_alt */
+
+ void alternatives_smp_module_add(struct module *mod, char *name,
+ void *locks, void *locks_end,
+@@ -341,12 +342,13 @@ void alternatives_smp_switch(int smp)
+
+ #ifdef CONFIG_LOCKDEP
+ /*
+- * A not yet fixed binutils section handling bug prevents
+- * alternatives-replacement from working reliably, so turn
+- * it off:
++ * Older binutils section handling bug prevented
++ * alternatives-replacement from working reliably.
+ *
-+ * http://thread.gmane.org/gmane.linux.kernel/537473
-+ */
-+ if (!blk_fs_request(rq))
-+ return QUEUE_ORDSEQ_DRAIN;
++ * If this still occurs then you should see a hang
++ * or crash shortly after this line:
+ */
+- printk("lockdep: not fixing up alternatives.\n");
+- return;
++ printk("lockdep: fixing up alternatives.\n");
+ #endif
+
+ if (noreplace_smp || smp_alt_once)
+@@ -354,21 +356,29 @@ void alternatives_smp_switch(int smp)
+ BUG_ON(!smp && (num_online_cpus() > 1));
+
+ spin_lock_irqsave(&smp_alt, flags);
+- if (smp) {
++
++ /*
++ * Avoid unnecessary switches because it forces JIT based VMs to
++ * throw away all cached translations, which can be quite costly.
++ */
++ if (smp == smp_mode) {
++ /* nothing */
++ } else if (smp) {
+ printk(KERN_INFO "SMP alternatives: switching to SMP code\n");
+- clear_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
+- clear_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
++ clear_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
+ list_for_each_entry(mod, &smp_alt_modules, next)
+ alternatives_smp_lock(mod->locks, mod->locks_end,
+ mod->text, mod->text_end);
+ } else {
+ printk(KERN_INFO "SMP alternatives: switching to UP code\n");
+- set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
+- set_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
++ set_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
++ set_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
+ list_for_each_entry(mod, &smp_alt_modules, next)
+ alternatives_smp_unlock(mod->locks, mod->locks_end,
+ mod->text, mod->text_end);
+ }
++ smp_mode = smp;
+ spin_unlock_irqrestore(&smp_alt, flags);
+ }
+
+@@ -431,8 +441,9 @@ void __init alternative_instructions(void)
+ if (smp_alt_once) {
+ if (1 == num_possible_cpus()) {
+ printk(KERN_INFO "SMP alternatives: switching to UP code\n");
+- set_bit(X86_FEATURE_UP, boot_cpu_data.x86_capability);
+- set_bit(X86_FEATURE_UP, cpu_data(0).x86_capability);
++ set_cpu_cap(&boot_cpu_data, X86_FEATURE_UP);
++ set_cpu_cap(&cpu_data(0), X86_FEATURE_UP);
++
+ alternatives_smp_unlock(__smp_locks, __smp_locks_end,
+ _text, _etext);
+ }
+@@ -440,7 +451,10 @@ void __init alternative_instructions(void)
+ alternatives_smp_module_add(NULL, "core kernel",
+ __smp_locks, __smp_locks_end,
+ _text, _etext);
+- alternatives_smp_switch(0);
++
++ /* Only switch to UP mode if we don't immediately boot others */
++ if (num_possible_cpus() == 1 || setup_max_cpus <= 1)
++ alternatives_smp_switch(0);
+ }
+ #endif
+ apply_paravirt(__parainstructions, __parainstructions_end);
+diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c
+index 5b69927..608152a 100644
+--- a/arch/x86/kernel/aperture_64.c
++++ b/arch/x86/kernel/aperture_64.c
+@@ -1,12 +1,12 @@
+-/*
++/*
+ * Firmware replacement code.
+- *
++ *
+ * Work around broken BIOSes that don't set an aperture or only set the
+- * aperture in the AGP bridge.
+- * If all fails map the aperture over some low memory. This is cheaper than
+- * doing bounce buffering. The memory is lost. This is done at early boot
+- * because only the bootmem allocator can allocate 32+MB.
+- *
++ * aperture in the AGP bridge.
++ * If all fails map the aperture over some low memory. This is cheaper than
++ * doing bounce buffering. The memory is lost. This is done at early boot
++ * because only the bootmem allocator can allocate 32+MB.
++ *
+ * Copyright 2002 Andi Kleen, SuSE Labs.
+ */
+ #include <linux/kernel.h>
+@@ -30,7 +30,7 @@ int gart_iommu_aperture_disabled __initdata = 0;
+ int gart_iommu_aperture_allowed __initdata = 0;
+
+ int fallback_aper_order __initdata = 1; /* 64MB */
+-int fallback_aper_force __initdata = 0;
++int fallback_aper_force __initdata = 0;
+
+ int fix_aperture __initdata = 1;
+
+@@ -49,167 +49,270 @@ static void __init insert_aperture_resource(u32 aper_base, u32 aper_size)
+ /* This code runs before the PCI subsystem is initialized, so just
+ access the northbridge directly. */
+
+-static u32 __init allocate_aperture(void)
++static u32 __init allocate_aperture(void)
+ {
+ u32 aper_size;
+- void *p;
++ void *p;
+
+- if (fallback_aper_order > 7)
+- fallback_aper_order = 7;
+- aper_size = (32 * 1024 * 1024) << fallback_aper_order;
++ if (fallback_aper_order > 7)
++ fallback_aper_order = 7;
++ aper_size = (32 * 1024 * 1024) << fallback_aper_order;
+
+- /*
+- * Aperture has to be naturally aligned. This means an 2GB aperture won't
+- * have much chance of finding a place in the lower 4GB of memory.
+- * Unfortunately we cannot move it up because that would make the
+- * IOMMU useless.
++ /*
++ * Aperture has to be naturally aligned. This means a 2GB aperture
++ * won't have much chance of finding a place in the lower 4GB of
++ * memory. Unfortunately we cannot move it up because that would
++ * make the IOMMU useless.
+ */
+ p = __alloc_bootmem_nopanic(aper_size, aper_size, 0);
+ if (!p || __pa(p)+aper_size > 0xffffffff) {
+- printk("Cannot allocate aperture memory hole (%p,%uK)\n",
+- p, aper_size>>10);
++ printk(KERN_ERR
++ "Cannot allocate aperture memory hole (%p,%uK)\n",
++ p, aper_size>>10);
+ if (p)
+ free_bootmem(__pa(p), aper_size);
+ return 0;
+ }
+- printk("Mapping aperture over %d KB of RAM @ %lx\n",
+- aper_size >> 10, __pa(p));
++ printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
++ aper_size >> 10, __pa(p));
+ insert_aperture_resource((u32)__pa(p), aper_size);
+- return (u32)__pa(p);
+
-+ if ((rq->cmd_flags & REQ_ORDERED_COLOR) ==
-+ (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR))
-+ return QUEUE_ORDSEQ_DRAIN;
-+ else
-+ return QUEUE_ORDSEQ_DONE;
-+}
++ return (u32)__pa(p);
+ }
+
+ static int __init aperture_valid(u64 aper_base, u32 aper_size)
+-{
+- if (!aper_base)
+- return 0;
+- if (aper_size < 64*1024*1024) {
+- printk("Aperture too small (%d MB)\n", aper_size>>20);
++{
++ if (!aper_base)
+ return 0;
+- }
+
-+void blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
+ if (aper_base + aper_size > 0x100000000UL) {
+- printk("Aperture beyond 4GB. Ignoring.\n");
+- return 0;
++ printk(KERN_ERR "Aperture beyond 4GB. Ignoring.\n");
++ return 0;
+ }
+ if (e820_any_mapped(aper_base, aper_base + aper_size, E820_RAM)) {
+- printk("Aperture pointing to e820 RAM. Ignoring.\n");
+- return 0;
+- }
++ printk(KERN_ERR "Aperture pointing to e820 RAM. Ignoring.\n");
++ return 0;
++ }
++ if (aper_size < 64*1024*1024) {
++ printk(KERN_ERR "Aperture too small (%d MB)\n", aper_size>>20);
++ return 0;
++ }
++
+ return 1;
+-}
++}
+
+ /* Find a PCI capability */
+-static __u32 __init find_cap(int num, int slot, int func, int cap)
+-{
+- u8 pos;
++static __u32 __init find_cap(int num, int slot, int func, int cap)
+{
-+ struct request *rq;
+ int bytes;
+- if (!(read_pci_config_16(num,slot,func,PCI_STATUS) & PCI_STATUS_CAP_LIST))
++ u8 pos;
+
-+ if (error && !q->orderr)
-+ q->orderr = error;
++ if (!(read_pci_config_16(num, slot, func, PCI_STATUS) &
++ PCI_STATUS_CAP_LIST))
+ return 0;
+- pos = read_pci_config_byte(num,slot,func,PCI_CAPABILITY_LIST);
+- for (bytes = 0; bytes < 48 && pos >= 0x40; bytes++) {
+
-+ BUG_ON(q->ordseq & seq);
-+ q->ordseq |= seq;
++ pos = read_pci_config_byte(num, slot, func, PCI_CAPABILITY_LIST);
++ for (bytes = 0; bytes < 48 && pos >= 0x40; bytes++) {
+ u8 id;
+- pos &= ~3;
+- id = read_pci_config_byte(num,slot,func,pos+PCI_CAP_LIST_ID);
++
++ pos &= ~3;
++ id = read_pci_config_byte(num, slot, func, pos+PCI_CAP_LIST_ID);
+ if (id == 0xff)
+ break;
+- if (id == cap)
+- return pos;
+- pos = read_pci_config_byte(num,slot,func,pos+PCI_CAP_LIST_NEXT);
+- }
++ if (id == cap)
++ return pos;
++ pos = read_pci_config_byte(num, slot, func,
++ pos+PCI_CAP_LIST_NEXT);
++ }
+ return 0;
+-}
++}
+
+ /* Read a standard AGPv3 bridge header */
+ static __u32 __init read_agp(int num, int slot, int func, int cap, u32 *order)
+-{
++{
+ u32 apsize;
+ u32 apsizereg;
+ int nbits;
+ u32 aper_low, aper_hi;
+ u64 aper;
+
+- printk("AGP bridge at %02x:%02x:%02x\n", num, slot, func);
+- apsizereg = read_pci_config_16(num,slot,func, cap + 0x14);
++ printk(KERN_INFO "AGP bridge at %02x:%02x:%02x\n", num, slot, func);
++ apsizereg = read_pci_config_16(num, slot, func, cap + 0x14);
+ if (apsizereg == 0xffffffff) {
+- printk("APSIZE in AGP bridge unreadable\n");
++ printk(KERN_ERR "APSIZE in AGP bridge unreadable\n");
+ return 0;
+ }
+
+ apsize = apsizereg & 0xfff;
+ /* Some BIOS use weird encodings not in the AGPv3 table. */
+- if (apsize & 0xff)
+- apsize |= 0xf00;
++ if (apsize & 0xff)
++ apsize |= 0xf00;
+ nbits = hweight16(apsize);
+ *order = 7 - nbits;
+ if ((int)*order < 0) /* < 32MB */
+ *order = 0;
+-
+- aper_low = read_pci_config(num,slot,func, 0x10);
+- aper_hi = read_pci_config(num,slot,func,0x14);
+
-+ if (blk_ordered_cur_seq(q) != QUEUE_ORDSEQ_DONE)
-+ return;
++ aper_low = read_pci_config(num, slot, func, 0x10);
++ aper_hi = read_pci_config(num, slot, func, 0x14);
+ aper = (aper_low & ~((1<<22)-1)) | ((u64)aper_hi << 32);
+
+- printk("Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
+- aper, 32 << *order, apsizereg);
++ printk(KERN_INFO "Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
++ aper, 32 << *order, apsizereg);
+
+ if (!aperture_valid(aper, (32*1024*1024) << *order))
+- return 0;
+- return (u32)aper;
+-}
+-
+-/* Look for an AGP bridge. Windows only expects the aperture in the
+- AGP bridge and some BIOS forget to initialize the Northbridge too.
+- Work around this here.
+-
+- Do an PCI bus scan by hand because we're running before the PCI
+- subsystem.
++ return 0;
++ return (u32)aper;
++}
+
+- All K8 AGP bridges are AGPv3 compliant, so we can do this scan
+- generically. It's probably overkill to always scan all slots because
+- the AGP bridges should be always an own bus on the HT hierarchy,
+- but do it here for future safety. */
++/*
++ * Look for an AGP bridge. Windows only expects the aperture in the
++ * AGP bridge and some BIOS forget to initialize the Northbridge too.
++ * Work around this here.
++ *
++ * Do a PCI bus scan by hand because we're running before the PCI
++ * subsystem.
++ *
++ * All K8 AGP bridges are AGPv3 compliant, so we can do this scan
++ * generically. It's probably overkill to always scan all slots because
++ * the AGP bridges should always be on their own bus in the HT
++ * but do it here for future safety.
++ */
+ static __u32 __init search_agp_bridge(u32 *order, int *valid_agp)
+ {
+ int num, slot, func;
+
+ /* Poor man's PCI discovery */
+- for (num = 0; num < 256; num++) {
+- for (slot = 0; slot < 32; slot++) {
+- for (func = 0; func < 8; func++) {
++ for (num = 0; num < 256; num++) {
++ for (slot = 0; slot < 32; slot++) {
++ for (func = 0; func < 8; func++) {
+ u32 class, cap;
+ u8 type;
+- class = read_pci_config(num,slot,func,
++ class = read_pci_config(num, slot, func,
+ PCI_CLASS_REVISION);
+ if (class == 0xffffffff)
+- break;
+-
+- switch (class >> 16) {
++ break;
+
-+ /*
-+ * Okay, sequence complete.
-+ */
-+ q->ordseq = 0;
-+ rq = q->orig_bar_rq;
++ switch (class >> 16) {
+ case PCI_CLASS_BRIDGE_HOST:
+ case PCI_CLASS_BRIDGE_OTHER: /* needed? */
+ /* AGP bridge? */
+- cap = find_cap(num,slot,func,PCI_CAP_ID_AGP);
++ cap = find_cap(num, slot, func,
++ PCI_CAP_ID_AGP);
+ if (!cap)
+ break;
+- *valid_agp = 1;
+- return read_agp(num,slot,func,cap,order);
+- }
+-
++ *valid_agp = 1;
++ return read_agp(num, slot, func, cap,
++ order);
++ }
+
-+ if (__blk_end_request(rq, q->orderr, blk_rq_bytes(rq)))
-+ BUG();
-+}
+ /* No multi-function device? */
+- type = read_pci_config_byte(num,slot,func,
++ type = read_pci_config_byte(num, slot, func,
+ PCI_HEADER_TYPE);
+ if (!(type & 0x80))
+ break;
+- }
+- }
++ }
++ }
+ }
+- printk("No AGP bridge found\n");
++ printk(KERN_INFO "No AGP bridge found\n");
+
-+static void pre_flush_end_io(struct request *rq, int error)
-+{
-+ elv_completed_request(rq->q, rq);
-+ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_PREFLUSH, error);
-+}
+ return 0;
+ }
+
++static int gart_fix_e820 __initdata = 1;
+
-+static void bar_end_io(struct request *rq, int error)
++static int __init parse_gart_mem(char *p)
+{
-+ elv_completed_request(rq->q, rq);
-+ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_BAR, error);
-+}
++ if (!p)
++ return -EINVAL;
+
-+static void post_flush_end_io(struct request *rq, int error)
-+{
-+ elv_completed_request(rq->q, rq);
-+ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error);
++ if (!strncmp(p, "off", 3))
++ gart_fix_e820 = 0;
++ else if (!strncmp(p, "on", 2))
++ gart_fix_e820 = 1;
++
++ return 0;
+}
++early_param("gart_fix_e820", parse_gart_mem);
+
-+static void queue_flush(struct request_queue *q, unsigned which)
++void __init early_gart_iommu_check(void)
+{
-+ struct request *rq;
-+ rq_end_io_fn *end_io;
++ /*
++ * The GART may already be enabled by a previous kernel,
++ * especially for kexec/kdump. In that case the memset done by
++ * allocate_aperture/__alloc_bootmem_nopanic can cause a restart,
++ * or the second kernel may place the GART hole at a different
++ * position and use as RAM a region that the first kernel's GART
++ * still covers, or the BIOS may have forgotten to mark the
++ * region reserved.
++ * Try to update e820 to mark that region as reserved.
++ */
++ int fix, num;
++ u32 ctl;
++ u32 aper_size = 0, aper_order = 0, last_aper_order = 0;
++ u64 aper_base = 0, last_aper_base = 0;
++ int aper_enabled = 0, last_aper_enabled = 0;
+
-+ if (which == QUEUE_ORDERED_PREFLUSH) {
-+ rq = &q->pre_flush_rq;
-+ end_io = pre_flush_end_io;
-+ } else {
-+ rq = &q->post_flush_rq;
-+ end_io = post_flush_end_io;
-+ }
++ if (!early_pci_allowed())
++ return;
+
-+ rq->cmd_flags = REQ_HARDBARRIER;
-+ rq_init(q, rq);
-+ rq->elevator_private = NULL;
-+ rq->elevator_private2 = NULL;
-+ rq->rq_disk = q->bar_rq.rq_disk;
-+ rq->end_io = end_io;
-+ q->prepare_flush_fn(q, rq);
++ fix = 0;
++ for (num = 24; num < 32; num++) {
++ if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
++ continue;
+
-+ elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
-+}
++ ctl = read_pci_config(0, num, 3, 0x90);
++ aper_enabled = ctl & 1;
++ aper_order = (ctl >> 1) & 7;
++ aper_size = (32 * 1024 * 1024) << aper_order;
++ aper_base = read_pci_config(0, num, 3, 0x94) & 0x7fff;
++ aper_base <<= 25;
+
-+static inline struct request *start_ordered(struct request_queue *q,
-+ struct request *rq)
-+{
-+ q->orderr = 0;
-+ q->ordered = q->next_ordered;
-+ q->ordseq |= QUEUE_ORDSEQ_STARTED;
++ if ((last_aper_order && aper_order != last_aper_order) ||
++ (last_aper_base && aper_base != last_aper_base) ||
++ (last_aper_enabled && aper_enabled != last_aper_enabled)) {
++ fix = 1;
++ break;
++ }
++ last_aper_order = aper_order;
++ last_aper_base = aper_base;
++ last_aper_enabled = aper_enabled;
++ }
+
-+ /*
-+ * Prep proxy barrier request.
-+ */
-+ blkdev_dequeue_request(rq);
-+ q->orig_bar_rq = rq;
-+ rq = &q->bar_rq;
-+ rq->cmd_flags = 0;
-+ rq_init(q, rq);
-+ if (bio_data_dir(q->orig_bar_rq->bio) == WRITE)
-+ rq->cmd_flags |= REQ_RW;
-+ if (q->ordered & QUEUE_ORDERED_FUA)
-+ rq->cmd_flags |= REQ_FUA;
-+ rq->elevator_private = NULL;
-+ rq->elevator_private2 = NULL;
-+ init_request_from_bio(rq, q->orig_bar_rq->bio);
-+ rq->end_io = bar_end_io;
++ if (!fix && !aper_enabled)
++ return;
+
-+ /*
-+ * Queue ordered sequence. As we stack them at the head, we
-+ * need to queue in reverse order. Note that we rely on that
-+ * no fs request uses ELEVATOR_INSERT_FRONT and thus no fs
-+ * request gets inbetween ordered sequence. If this request is
-+ * an empty barrier, we don't need to do a postflush ever since
-+ * there will be no data written between the pre and post flush.
-+ * Hence a single flush will suffice.
-+ */
-+ if ((q->ordered & QUEUE_ORDERED_POSTFLUSH) && !blk_empty_barrier(rq))
-+ queue_flush(q, QUEUE_ORDERED_POSTFLUSH);
-+ else
-+ q->ordseq |= QUEUE_ORDSEQ_POSTFLUSH;
++ if (!aper_base || !aper_size || aper_base + aper_size > 0x100000000UL)
++ fix = 1;
+
-+ elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
++ if (gart_fix_e820 && !fix && aper_enabled) {
++ if (e820_any_mapped(aper_base, aper_base + aper_size,
++ E820_RAM)) {
++ /* reserve it, so we can reuse it in the second kernel */
++ printk(KERN_INFO "update e820 for GART\n");
++ add_memory_region(aper_base, aper_size, E820_RESERVED);
++ update_e820();
++ }
++ return;
++ }
+
-+ if (q->ordered & QUEUE_ORDERED_PREFLUSH) {
-+ queue_flush(q, QUEUE_ORDERED_PREFLUSH);
-+ rq = &q->pre_flush_rq;
-+ } else
-+ q->ordseq |= QUEUE_ORDSEQ_PREFLUSH;
++ /* different nodes have different settings; disable them all first */
++ for (num = 24; num < 32; num++) {
++ if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
++ continue;
+
-+ if ((q->ordered & QUEUE_ORDERED_TAG) || q->in_flight == 0)
-+ q->ordseq |= QUEUE_ORDSEQ_DRAIN;
-+ else
-+ rq = NULL;
++ ctl = read_pci_config(0, num, 3, 0x90);
++ ctl &= ~1;
++ write_pci_config(0, num, 3, 0x90, ctl);
++ }
+
-+ return rq;
+}
+
-+int blk_do_ordered(struct request_queue *q, struct request **rqp)
+ void __init gart_iommu_hole_init(void)
+-{
+- int fix, num;
+{
-+ struct request *rq = *rqp;
-+ const int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq);
+ u32 aper_size, aper_alloc = 0, aper_order = 0, last_aper_order = 0;
+ u64 aper_base, last_aper_base = 0;
+- int valid_agp = 0;
++ int fix, num, valid_agp = 0;
++ int node;
+
+ if (gart_iommu_aperture_disabled || !fix_aperture ||
+ !early_pci_allowed())
+@@ -218,24 +321,26 @@ void __init gart_iommu_hole_init(void)
+ printk(KERN_INFO "Checking aperture...\n");
+
+ fix = 0;
+- for (num = 24; num < 32; num++) {
++ node = 0;
++ for (num = 24; num < 32; num++) {
+ if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
+ continue;
+
+ iommu_detected = 1;
+ gart_iommu_aperture = 1;
+
+- aper_order = (read_pci_config(0, num, 3, 0x90) >> 1) & 7;
+- aper_size = (32 * 1024 * 1024) << aper_order;
++ aper_order = (read_pci_config(0, num, 3, 0x90) >> 1) & 7;
++ aper_size = (32 * 1024 * 1024) << aper_order;
+ aper_base = read_pci_config(0, num, 3, 0x94) & 0x7fff;
+- aper_base <<= 25;
++ aper_base <<= 25;
++
++ printk(KERN_INFO "Node %d: aperture @ %Lx size %u MB\n",
++ node, aper_base, aper_size >> 20);
++ node++;
+
+- printk("CPU %d: aperture @ %Lx size %u MB\n", num-24,
+- aper_base, aper_size>>20);
+-
+ if (!aperture_valid(aper_base, aper_size)) {
+- fix = 1;
+- break;
++ fix = 1;
++ break;
+ }
+
+ if ((last_aper_order && aper_order != last_aper_order) ||
+@@ -245,55 +350,64 @@ void __init gart_iommu_hole_init(void)
+ }
+ last_aper_order = aper_order;
+ last_aper_base = aper_base;
+- }
++ }
+
+ if (!fix && !fallback_aper_force) {
+ if (last_aper_base) {
+ unsigned long n = (32 * 1024 * 1024) << last_aper_order;
+
-+ if (!q->ordseq) {
-+ if (!is_barrier)
-+ return 1;
+ insert_aperture_resource((u32)last_aper_base, n);
+ }
+- return;
++ return;
+ }
+
+ if (!fallback_aper_force)
+- aper_alloc = search_agp_bridge(&aper_order, &valid_agp);
+-
+- if (aper_alloc) {
++ aper_alloc = search_agp_bridge(&aper_order, &valid_agp);
+
-+ if (q->next_ordered != QUEUE_ORDERED_NONE) {
-+ *rqp = start_ordered(q, rq);
-+ return 1;
-+ } else {
++ if (aper_alloc) {
+ /* Got the aperture from the AGP bridge */
+ } else if (swiotlb && !valid_agp) {
+ /* Do nothing */
+ } else if ((!no_iommu && end_pfn > MAX_DMA32_PFN) ||
+ force_iommu ||
+ valid_agp ||
+- fallback_aper_force) {
+- printk("Your BIOS doesn't leave a aperture memory hole\n");
+- printk("Please enable the IOMMU option in the BIOS setup\n");
+- printk("This costs you %d MB of RAM\n",
+- 32 << fallback_aper_order);
++ fallback_aper_force) {
++ printk(KERN_ERR
++ "Your BIOS doesn't leave an aperture memory hole\n");
++ printk(KERN_ERR
++ "Please enable the IOMMU option in the BIOS setup\n");
++ printk(KERN_ERR
++ "This costs you %d MB of RAM\n",
++ 32 << fallback_aper_order);
+
+ aper_order = fallback_aper_order;
+ aper_alloc = allocate_aperture();
+- if (!aper_alloc) {
+- /* Could disable AGP and IOMMU here, but it's probably
+- not worth it. But the later users cannot deal with
+- bad apertures and turning on the aperture over memory
+- causes very strange problems, so it's better to
+- panic early. */
++ if (!aper_alloc) {
+ /*
-+ * This can happen when the queue switches to
-+ * ORDERED_NONE while this request is on it.
++ * Could disable AGP and IOMMU here, but it's
++ * probably not worth it. But the later users
++ * cannot deal with bad apertures and turning
++ * on the aperture over memory causes very
++ * strange problems, so it's better to panic
++ * early.
+ */
-+ blkdev_dequeue_request(rq);
-+ if (__blk_end_request(rq, -EOPNOTSUPP,
-+ blk_rq_bytes(rq)))
-+ BUG();
-+ *rqp = NULL;
-+ return 0;
-+ }
+ panic("Not enough memory for aperture");
+ }
+- } else {
+- return;
+- }
++ } else {
++ return;
++ }
+
+ /* Fix up the north bridges */
+- for (num = 24; num < 32; num++) {
++ for (num = 24; num < 32; num++) {
+ if (!early_is_k8_nb(read_pci_config(0, num, 3, 0x00)))
+- continue;
+-
+- /* Don't enable translation yet. That is done later.
+- Assume this BIOS didn't initialise the GART so
+- just overwrite all previous bits */
+- write_pci_config(0, num, 3, 0x90, aper_order<<1);
+- write_pci_config(0, num, 3, 0x94, aper_alloc>>25);
+- }
+-}
++ continue;
++
++ /*
++ * Don't enable translation yet. That is done later.
++ * Assume this BIOS didn't initialise the GART so
++ * just overwrite all previous bits
++ */
++ write_pci_config(0, num, 3, 0x90, aper_order<<1);
++ write_pci_config(0, num, 3, 0x94, aper_alloc>>25);
+ }
++}
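[Aside, not part of the patch: the aperture decoding in the hunks above reads the K8 northbridge control register (0x90: enable bit plus order bits) and base register (0x94: base in 32MB units, shifted left by 25). A minimal C sketch of that bit layout follows; the helper names are ours, not kernel APIs, and the layout is taken from the reads in the patch.]

```c
#include <stdint.h>

/* Illustrative helpers (not kernel code): decode the AMD K8
 * northbridge GART registers the way the patch reads them. */

/* Register 0x90: bit 0 = enable, bits 3:1 = aperture order. */
uint32_t gart_aper_size(uint32_t ctl)
{
    uint32_t order = (ctl >> 1) & 7;
    return (32u * 1024 * 1024) << order;   /* 32MB << order */
}

/* Register 0x94: low 15 bits hold the base in 32MB units. */
uint64_t gart_aper_base(uint32_t reg94)
{
    return ((uint64_t)(reg94 & 0x7fff)) << 25;
}
```

[With order 0 this yields the minimum 32MB aperture; each order step doubles it, up to the 4GB boundary the validity checks enforce.]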
+diff --git a/arch/x86/kernel/apic_32.c b/arch/x86/kernel/apic_32.c
+index edb5108..35a568e 100644
+--- a/arch/x86/kernel/apic_32.c
++++ b/arch/x86/kernel/apic_32.c
+@@ -43,12 +43,10 @@
+ #include <mach_apicdef.h>
+ #include <mach_ipi.h>
+
+-#include "io_ports.h"
+-
+ /*
+ * Sanity check
+ */
+-#if (SPURIOUS_APIC_VECTOR & 0x0F) != 0x0F
++#if ((SPURIOUS_APIC_VECTOR & 0x0F) != 0x0F)
+ # error SPURIOUS_APIC_VECTOR definition error
+ #endif
+
+@@ -57,7 +55,7 @@
+ *
+ * -1=force-disable, +1=force-enable
+ */
+-static int enable_local_apic __initdata = 0;
++static int enable_local_apic __initdata;
+
+ /* Local APIC timer verification ok */
+ static int local_apic_timer_verify_ok;
+@@ -101,6 +99,8 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+ /* Local APIC was disabled by the BIOS and enabled by the kernel */
+ static int enabled_via_apicbase;
+
++static unsigned long apic_phys;
+
+ /*
+ * Get the LAPIC version
+ */
+@@ -110,7 +110,7 @@ static inline int lapic_get_version(void)
+ }
+
+ /*
+- * Check, if the APIC is integrated or a seperate chip
++ * Check, if the APIC is integrated or a separate chip
+ */
+ static inline int lapic_is_integrated(void)
+ {
+@@ -135,9 +135,9 @@ void apic_wait_icr_idle(void)
+ cpu_relax();
+ }
+
+-unsigned long safe_apic_wait_icr_idle(void)
++u32 safe_apic_wait_icr_idle(void)
+ {
+- unsigned long send_status;
++ u32 send_status;
+ int timeout;
+
+ timeout = 0;
+@@ -154,7 +154,7 @@ unsigned long safe_apic_wait_icr_idle(void)
+ /**
+ * enable_NMI_through_LVT0 - enable NMI through local vector table 0
+ */
+-void enable_NMI_through_LVT0 (void * dummy)
++void __cpuinit enable_NMI_through_LVT0(void)
+ {
+ unsigned int v = APIC_DM_NMI;
+
+@@ -379,8 +379,10 @@ void __init setup_boot_APIC_clock(void)
+ */
+ if (local_apic_timer_disabled) {
+ /* No broadcast on UP ! */
+- if (num_possible_cpus() > 1)
++ if (num_possible_cpus() > 1) {
++ lapic_clockevent.mult = 1;
+ setup_APIC_timer();
++ }
+ return;
+ }
+
+@@ -434,7 +436,7 @@ void __init setup_boot_APIC_clock(void)
+ "with PM Timer: %ldms instead of 100ms\n",
+ (long)res);
+ /* Correct the lapic counter value */
+- res = (((u64) delta ) * pm_100ms);
++ res = (((u64) delta) * pm_100ms);
+ do_div(res, deltapm);
+ printk(KERN_INFO "APIC delta adjusted to PM-Timer: "
+ "%lu (%ld)\n", (unsigned long) res, delta);
+@@ -472,6 +474,19 @@ void __init setup_boot_APIC_clock(void)
+
+ local_apic_timer_verify_ok = 1;
+
+ /*
-+ * Ordered sequence in progress
++ * Do a sanity check on the APIC calibration result
+ */
++ if (calibration_result < (1000000 / HZ)) {
++ local_irq_enable();
++ printk(KERN_WARNING
++ "APIC frequency too slow, disabling apic timer\n");
++ /* No broadcast on UP ! */
++ if (num_possible_cpus() > 1)
++ setup_APIC_timer();
++ return;
++ }
+
-+ /* Special requests are not subject to ordering rules. */
-+ if (!blk_fs_request(rq) &&
-+ rq != &q->pre_flush_rq && rq != &q->post_flush_rq)
-+ return 1;
+ /* We trust the pm timer based calibration */
+ if (!pm_referenced) {
+ apic_printk(APIC_VERBOSE, "... verify APIC timer\n");
+@@ -563,6 +578,9 @@ static void local_apic_timer_interrupt(void)
+ return;
+ }
+
++ /*
++ * the NMI deadlock-detector uses this.
++ */
+ per_cpu(irq_stat, cpu).apic_timer_irqs++;
+
+ evt->event_handler(evt);
+@@ -576,8 +594,7 @@ static void local_apic_timer_interrupt(void)
+ * [ if a single-CPU system runs an SMP kernel then we call the local
+ * interrupt as well. Thus we cannot inline the local irq ... ]
+ */
+-
+-void fastcall smp_apic_timer_interrupt(struct pt_regs *regs)
++void smp_apic_timer_interrupt(struct pt_regs *regs)
+ {
+ struct pt_regs *old_regs = set_irq_regs(regs);
+
+@@ -616,9 +633,14 @@ int setup_profiling_timer(unsigned int multiplier)
+ */
+ void clear_local_APIC(void)
+ {
+- int maxlvt = lapic_get_maxlvt();
+- unsigned long v;
++ int maxlvt;
++ u32 v;
+
-+ if (q->ordered & QUEUE_ORDERED_TAG) {
-+ /* Ordered by tag. Blocking the next barrier is enough. */
-+ if (is_barrier && rq != &q->bar_rq)
-+ *rqp = NULL;
-+ } else {
-+ /* Ordered by draining. Wait for turn. */
-+ WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
-+ if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
-+ *rqp = NULL;
-+ }
++ /* APIC hasn't been mapped yet */
++ if (!apic_phys)
++ return;
+
++ maxlvt = lapic_get_maxlvt();
+ /*
+ * Masking an LVT entry can trigger a local APIC error
+ * if the vector is zero. Mask LVTERR first to prevent this.
+@@ -976,7 +998,8 @@ void __cpuinit setup_local_APIC(void)
+ value |= APIC_LVT_LEVEL_TRIGGER;
+ apic_write_around(APIC_LVT1, value);
+
+- if (integrated && !esr_disable) { /* !82489DX */
++ if (integrated && !esr_disable) {
++ /* !82489DX */
+ maxlvt = lapic_get_maxlvt();
+ if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
+ apic_write(APIC_ESR, 0);
+@@ -1020,7 +1043,7 @@ void __cpuinit setup_local_APIC(void)
+ /*
+ * Detect and initialize APIC
+ */
+-static int __init detect_init_APIC (void)
++static int __init detect_init_APIC(void)
+ {
+ u32 h, l, features;
+
+@@ -1077,7 +1100,7 @@ static int __init detect_init_APIC (void)
+ printk(KERN_WARNING "Could not enable APIC!\n");
+ return -1;
+ }
+- set_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
++ set_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
+ mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
+
+ /* The BIOS may have set up the APIC at some other address */
+@@ -1104,8 +1127,6 @@ no_apic:
+ */
+ void __init init_apic_mappings(void)
+ {
+- unsigned long apic_phys;
+-
+ /*
+ * If no local APIC can be found then set up a fake all
+ * zeroes page to simulate the local APIC and another
+@@ -1164,10 +1185,10 @@ fake_ioapic_page:
+ * This initializes the IO-APIC and APIC hardware if this is
+ * a UP kernel.
+ */
+-int __init APIC_init_uniprocessor (void)
++int __init APIC_init_uniprocessor(void)
+ {
+ if (enable_local_apic < 0)
+- clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
+
+ if (!smp_found_config && !cpu_has_apic)
+ return -1;
+@@ -1179,7 +1200,7 @@ int __init APIC_init_uniprocessor (void)
+ APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
+ printk(KERN_ERR "BIOS bug, local APIC #%d not detected!...\n",
+ boot_cpu_physical_apicid);
+- clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
+ return -1;
+ }
+
+@@ -1210,50 +1231,6 @@ int __init APIC_init_uniprocessor (void)
+ }
+
+ /*
+- * APIC command line parameters
+- */
+-static int __init parse_lapic(char *arg)
+-{
+- enable_local_apic = 1;
+- return 0;
+-}
+-early_param("lapic", parse_lapic);
+-
+-static int __init parse_nolapic(char *arg)
+-{
+- enable_local_apic = -1;
+- clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
+- return 0;
+-}
+-early_param("nolapic", parse_nolapic);
+-
+-static int __init parse_disable_lapic_timer(char *arg)
+-{
+- local_apic_timer_disabled = 1;
+- return 0;
+-}
+-early_param("nolapic_timer", parse_disable_lapic_timer);
+-
+-static int __init parse_lapic_timer_c2_ok(char *arg)
+-{
+- local_apic_timer_c2_ok = 1;
+- return 0;
+-}
+-early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
+-
+-static int __init apic_set_verbosity(char *str)
+-{
+- if (strcmp("debug", str) == 0)
+- apic_verbosity = APIC_DEBUG;
+- else if (strcmp("verbose", str) == 0)
+- apic_verbosity = APIC_VERBOSE;
+- return 1;
+-}
+-
+-__setup("apic=", apic_set_verbosity);
+-
+-
+-/*
+ * Local APIC interrupts
+ */
+
+@@ -1306,7 +1283,7 @@ void smp_error_interrupt(struct pt_regs *regs)
+ 6: Received illegal vector
+ 7: Illegal register address
+ */
+- printk (KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx)\n",
++ printk(KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx)\n",
+ smp_processor_id(), v , v1);
+ irq_exit();
+ }
+@@ -1393,7 +1370,7 @@ void disconnect_bsp_APIC(int virt_wire_setup)
+ value = apic_read(APIC_LVT0);
+ value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
+ APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
+- APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED );
++ APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
+ value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
+ value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
+ apic_write_around(APIC_LVT0, value);
+@@ -1530,7 +1507,7 @@ static int lapic_resume(struct sys_device *dev)
+ */
+
+ static struct sysdev_class lapic_sysclass = {
+- set_kset_name("lapic"),
++ .name = "lapic",
+ .resume = lapic_resume,
+ .suspend = lapic_suspend,
+ };
+@@ -1565,3 +1542,46 @@ device_initcall(init_lapic_sysfs);
+ static void apic_pm_activate(void) { }
+
+ #endif /* CONFIG_PM */
+
-+ return 1;
++/*
++ * APIC command line parameters
++ */
++static int __init parse_lapic(char *arg)
++{
++ enable_local_apic = 1;
++ return 0;
+}
++early_param("lapic", parse_lapic);
+
-+static void bio_end_empty_barrier(struct bio *bio, int err)
++static int __init parse_nolapic(char *arg)
+{
-+ if (err)
-+ clear_bit(BIO_UPTODATE, &bio->bi_flags);
-+
-+ complete(bio->bi_private);
++ enable_local_apic = -1;
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
++ return 0;
+}
++early_param("nolapic", parse_nolapic);
+
-+/**
-+ * blkdev_issue_flush - queue a flush
-+ * @bdev: blockdev to issue flush for
-+ * @error_sector: error sector
-+ *
-+ * Description:
-+ * Issue a flush for the block device in question. Caller can supply
-+ * room for storing the error offset in case of a flush error, if they
-+ * wish to. Caller must run wait_for_completion() on its own.
-+ */
-+int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
++static int __init parse_disable_lapic_timer(char *arg)
+{
-+ DECLARE_COMPLETION_ONSTACK(wait);
-+ struct request_queue *q;
-+ struct bio *bio;
-+ int ret;
-+
-+ if (bdev->bd_disk == NULL)
-+ return -ENXIO;
-+
-+ q = bdev_get_queue(bdev);
-+ if (!q)
-+ return -ENXIO;
-+
-+ bio = bio_alloc(GFP_KERNEL, 0);
-+ if (!bio)
-+ return -ENOMEM;
-+
-+ bio->bi_end_io = bio_end_empty_barrier;
-+ bio->bi_private = &wait;
-+ bio->bi_bdev = bdev;
-+ submit_bio(1 << BIO_RW_BARRIER, bio);
-+
-+ wait_for_completion(&wait);
-+
-+ /*
-+ * The driver must store the error location in ->bi_sector, if
-+ * it supports it. For non-stacked drivers, this should be copied
-+ * from rq->sector.
-+ */
-+ if (error_sector)
-+ *error_sector = bio->bi_sector;
-+
-+ ret = 0;
-+ if (!bio_flagged(bio, BIO_UPTODATE))
-+ ret = -EIO;
++ local_apic_timer_disabled = 1;
++ return 0;
++}
++early_param("nolapic_timer", parse_disable_lapic_timer);
+
-+ bio_put(bio);
-+ return ret;
++static int __init parse_lapic_timer_c2_ok(char *arg)
++{
++ local_apic_timer_c2_ok = 1;
++ return 0;
+}
++early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
+
-+EXPORT_SYMBOL(blkdev_issue_flush);
-diff --git a/block/blk-core.c b/block/blk-core.c
-new file mode 100644
-index 0000000..8ff9944
---- /dev/null
-+++ b/block/blk-core.c
-@@ -0,0 +1,2034 @@
-+/*
-+ * Copyright (C) 1991, 1992 Linus Torvalds
-+ * Copyright (C) 1994, Karl Keyte: Added support for disk statistics
-+ * Elevator latency, (C) 2000 Andrea Arcangeli <andrea at suse.de> SuSE
-+ * Queue request tables / lock, selectable elevator, Jens Axboe <axboe at suse.de>
-+ * kernel-doc documentation started by NeilBrown <neilb at cse.unsw.edu.au> - July2000
-+ * bio rewrite, highmem i/o, etc, Jens Axboe <axboe at suse.de> - may 2001
-+ */
++static int __init apic_set_verbosity(char *str)
++{
++ if (strcmp("debug", str) == 0)
++ apic_verbosity = APIC_DEBUG;
++ else if (strcmp("verbose", str) == 0)
++ apic_verbosity = APIC_VERBOSE;
++ return 1;
++}
++__setup("apic=", apic_set_verbosity);
+
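[Aside, not part of the patch: the calibration sanity check added to setup_boot_APIC_clock() above rejects results below 1000000/HZ ticks. A toy version of that bound, assuming HZ=250 (a common config; the function name and macro are ours):]

```c
/* Illustrative check mirroring the patch's bound in
 * setup_boot_APIC_clock(); HZ is assumed to be 250 here. */
#define EXAMPLE_HZ 250

int apic_freq_too_slow(unsigned long calibration_result)
{
    /* fewer than 1e6/EXAMPLE_HZ timer ticks per calibration window
     * means the APIC timer is too coarse to drive the tick */
    return calibration_result < (1000000UL / EXAMPLE_HZ);
}
```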
+diff --git a/arch/x86/kernel/apic_64.c b/arch/x86/kernel/apic_64.c
+index f28ccb5..d8d03e0 100644
+--- a/arch/x86/kernel/apic_64.c
++++ b/arch/x86/kernel/apic_64.c
+@@ -23,32 +23,37 @@
+ #include <linux/mc146818rtc.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/sysdev.h>
+-#include <linux/module.h>
+ #include <linux/ioport.h>
+ #include <linux/clockchips.h>
++#include <linux/acpi_pmtmr.h>
++#include <linux/module.h>
+
+ #include <asm/atomic.h>
+ #include <asm/smp.h>
+ #include <asm/mtrr.h>
+ #include <asm/mpspec.h>
++#include <asm/hpet.h>
+ #include <asm/pgalloc.h>
+ #include <asm/mach_apic.h>
+ #include <asm/nmi.h>
+ #include <asm/idle.h>
+ #include <asm/proto.h>
+ #include <asm/timex.h>
+-#include <asm/hpet.h>
+ #include <asm/apic.h>
+
+-int apic_verbosity;
+ int disable_apic_timer __cpuinitdata;
+ static int apic_calibrate_pmtmr __initdata;
++int disable_apic;
+
+-/* Local APIC timer works in C2? */
++/* Local APIC timer works in C2 */
+ int local_apic_timer_c2_ok;
+ EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
+
+-static struct resource *ioapic_resources;
+/*
-+ * This handles all read/write requests to block devices
++ * Debug level, exported for io_apic.c
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/backing-dev.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/highmem.h>
-+#include <linux/mm.h>
-+#include <linux/kernel_stat.h>
-+#include <linux/string.h>
-+#include <linux/init.h>
-+#include <linux/completion.h>
-+#include <linux/slab.h>
-+#include <linux/swap.h>
-+#include <linux/writeback.h>
-+#include <linux/task_io_accounting_ops.h>
-+#include <linux/interrupt.h>
-+#include <linux/cpu.h>
-+#include <linux/blktrace_api.h>
-+#include <linux/fault-inject.h>
-+
-+#include "blk.h"
++int apic_verbosity;
+
-+static int __make_request(struct request_queue *q, struct bio *bio);
+ static struct resource lapic_resource = {
+ .name = "Local APIC",
+ .flags = IORESOURCE_MEM | IORESOURCE_BUSY,
+@@ -60,10 +65,8 @@ static int lapic_next_event(unsigned long delta,
+ struct clock_event_device *evt);
+ static void lapic_timer_setup(enum clock_event_mode mode,
+ struct clock_event_device *evt);
+-
+ static void lapic_timer_broadcast(cpumask_t mask);
+-
+-static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen);
++static void apic_pm_activate(void);
+
+ static struct clock_event_device lapic_clockevent = {
+ .name = "lapic",
+@@ -78,6 +81,150 @@ static struct clock_event_device lapic_clockevent = {
+ };
+ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+
++static unsigned long apic_phys;
+
+/*
-+ * For the allocated request tables
++ * Get the LAPIC version
+ */
-+struct kmem_cache *request_cachep;
++static inline int lapic_get_version(void)
++{
++ return GET_APIC_VERSION(apic_read(APIC_LVR));
++}
+
+/*
-+ * For queue allocation
++ * Check, if the APIC is integrated or a separate chip
+ */
-+struct kmem_cache *blk_requestq_cachep = NULL;
++static inline int lapic_is_integrated(void)
++{
++ return 1;
++}
+
+/*
-+ * Controlling structure to kblockd
++ * Check, whether this is a modern or a first generation APIC
+ */
-+static struct workqueue_struct *kblockd_workqueue;
-+
-+static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
-+
-+static void drive_stat_acct(struct request *rq, int new_io)
++static int modern_apic(void)
+{
-+ int rw = rq_data_dir(rq);
-+
-+ if (!blk_fs_request(rq) || !rq->rq_disk)
-+ return;
++ /* AMD systems use old APIC versions, so check the CPU */
++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
++ boot_cpu_data.x86 >= 0xf)
++ return 1;
++ return lapic_get_version() >= 0x14;
++}
+
-+ if (!new_io) {
-+ __disk_stat_inc(rq->rq_disk, merges[rw]);
-+ } else {
-+ disk_round_stats(rq->rq_disk);
-+ rq->rq_disk->in_flight++;
-+ }
++void apic_wait_icr_idle(void)
++{
++ while (apic_read(APIC_ICR) & APIC_ICR_BUSY)
++ cpu_relax();
+}
+
-+void blk_queue_congestion_threshold(struct request_queue *q)
++u32 safe_apic_wait_icr_idle(void)
+{
-+ int nr;
++ u32 send_status;
++ int timeout;
+
-+ nr = q->nr_requests - (q->nr_requests / 8) + 1;
-+ if (nr > q->nr_requests)
-+ nr = q->nr_requests;
-+ q->nr_congestion_on = nr;
++ timeout = 0;
++ do {
++ send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
++ if (!send_status)
++ break;
++ udelay(100);
++ } while (timeout++ < 1000);
+
-+ nr = q->nr_requests - (q->nr_requests / 8) - (q->nr_requests / 16) - 1;
-+ if (nr < 1)
-+ nr = 1;
-+ q->nr_congestion_off = nr;
++ return send_status;
+}
+
+/**
-+ * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
-+ * @bdev: device
-+ *
-+ * Locates the passed device's request queue and returns the address of its
-+ * backing_dev_info
-+ *
-+ * Will return NULL if the request queue cannot be located.
++ * enable_NMI_through_LVT0 - enable NMI through local vector table 0
+ */
-+struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
++void __cpuinit enable_NMI_through_LVT0(void)
+{
-+ struct backing_dev_info *ret = NULL;
-+ struct request_queue *q = bdev_get_queue(bdev);
++ unsigned int v;
+
-+ if (q)
-+ ret = &q->backing_dev_info;
-+ return ret;
++ /* unmask and set to NMI */
++ v = APIC_DM_NMI;
++ apic_write(APIC_LVT0, v);
+}
-+EXPORT_SYMBOL(blk_get_backing_dev_info);
+
-+void rq_init(struct request_queue *q, struct request *rq)
++/**
++ * lapic_get_maxlvt - get the maximum number of local vector table entries
++ */
++int lapic_get_maxlvt(void)
+{
-+ INIT_LIST_HEAD(&rq->queuelist);
-+ INIT_LIST_HEAD(&rq->donelist);
++ unsigned int v, maxlvt;
+
-+ rq->errors = 0;
-+ rq->bio = rq->biotail = NULL;
-+ INIT_HLIST_NODE(&rq->hash);
-+ RB_CLEAR_NODE(&rq->rb_node);
-+ rq->ioprio = 0;
-+ rq->buffer = NULL;
-+ rq->ref_count = 1;
-+ rq->q = q;
-+ rq->special = NULL;
-+ rq->data_len = 0;
-+ rq->data = NULL;
-+ rq->nr_phys_segments = 0;
-+ rq->sense = NULL;
-+ rq->end_io = NULL;
-+ rq->end_io_data = NULL;
-+ rq->completion_data = NULL;
-+ rq->next_rq = NULL;
++ v = apic_read(APIC_LVR);
++ maxlvt = GET_APIC_MAXLVT(v);
++ return maxlvt;
+}
+
-+static void req_bio_endio(struct request *rq, struct bio *bio,
-+ unsigned int nbytes, int error)
++/*
++ * This function sets up the local APIC timer, with a timeout of
++ * 'clocks' APIC bus clock. During calibration we actually call
++ * this function twice on the boot CPU, once with a bogus timeout
++ * value, second time for real. The other (noncalibrating) CPUs
++ * call this function only once, with the real, calibrated value.
++ *
++ * We do reads before writes even if unnecessary, to get around the
++ * P5 APIC double write bug.
++ */
++
++static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
+{
-+ struct request_queue *q = rq->q;
++ unsigned int lvtt_value, tmp_value;
+
-+ if (&q->bar_rq != rq) {
-+ if (error)
-+ clear_bit(BIO_UPTODATE, &bio->bi_flags);
-+ else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
-+ error = -EIO;
++ lvtt_value = LOCAL_TIMER_VECTOR;
++ if (!oneshot)
++ lvtt_value |= APIC_LVT_TIMER_PERIODIC;
++ if (!irqen)
++ lvtt_value |= APIC_LVT_MASKED;
+
-+ if (unlikely(nbytes > bio->bi_size)) {
-+ printk("%s: want %u bytes done, only %u left\n",
-+ __FUNCTION__, nbytes, bio->bi_size);
-+ nbytes = bio->bi_size;
-+ }
++ apic_write(APIC_LVTT, lvtt_value);
+
-+ bio->bi_size -= nbytes;
-+ bio->bi_sector += (nbytes >> 9);
-+ if (bio->bi_size == 0)
-+ bio_endio(bio, error);
-+ } else {
++ /*
++ * Divide PICLK by 16
++ */
++ tmp_value = apic_read(APIC_TDCR);
++ apic_write(APIC_TDCR, (tmp_value
++ & ~(APIC_TDR_DIV_1 | APIC_TDR_DIV_TMBASE))
++ | APIC_TDR_DIV_16);
+
-+ /*
-+ * Okay, this is the barrier request in progress, just
-+ * record the error;
-+ */
-+ if (error && !q->orderr)
-+ q->orderr = error;
-+ }
++ if (!oneshot)
++ apic_write(APIC_TMICT, clocks);
+}
+
-+void blk_dump_rq_flags(struct request *rq, char *msg)
-+{
-+ int bit;
++/*
++ * Setup extended LVT, AMD specific (K8, family 10h)
++ *
++ * Vector mappings are hard coded. On K8 only offset 0 (APIC500) and
++ * MCE interrupts are supported. Thus MCE offset must be set to 0.
++ */
+
-+ printk("%s: dev %s: type=%x, flags=%x\n", msg,
-+ rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->cmd_type,
-+ rq->cmd_flags);
++#define APIC_EILVT_LVTOFF_MCE 0
++#define APIC_EILVT_LVTOFF_IBS 1
+
-+ printk("\nsector %llu, nr/cnr %lu/%u\n", (unsigned long long)rq->sector,
-+ rq->nr_sectors,
-+ rq->current_nr_sectors);
-+ printk("bio %p, biotail %p, buffer %p, data %p, len %u\n", rq->bio, rq->biotail, rq->buffer, rq->data, rq->data_len);
++static void setup_APIC_eilvt(u8 lvt_off, u8 vector, u8 msg_type, u8 mask)
++{
++ unsigned long reg = (lvt_off << 4) + APIC_EILVT0;
++ unsigned int v = (mask << 16) | (msg_type << 8) | vector;
+
-+ if (blk_pc_request(rq)) {
-+ printk("cdb: ");
-+ for (bit = 0; bit < sizeof(rq->cmd); bit++)
-+ printk("%02x ", rq->cmd[bit]);
-+ printk("\n");
-+ }
++ apic_write(reg, v);
+}
+
-+EXPORT_SYMBOL(blk_dump_rq_flags);
++u8 setup_APIC_eilvt_mce(u8 vector, u8 msg_type, u8 mask)
++{
++ setup_APIC_eilvt(APIC_EILVT_LVTOFF_MCE, vector, msg_type, mask);
++ return APIC_EILVT_LVTOFF_MCE;
++}
++
++u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask)
++{
++ setup_APIC_eilvt(APIC_EILVT_LVTOFF_IBS, vector, msg_type, mask);
++ return APIC_EILVT_LVTOFF_IBS;
++}
+
+/*
-+ * "plug" the device if there are no outstanding requests: this will
-+ * force the transfer to start only after we have put all the requests
-+ * on the list.
-+ *
-+ * This is called with interrupts off and no requests on the queue and
-+ * with the queue lock held.
++ * Program the next event, relative to now
+ */
-+void blk_plug_device(struct request_queue *q)
+ static int lapic_next_event(unsigned long delta,
+ struct clock_event_device *evt)
+ {
+@@ -85,6 +232,9 @@ static int lapic_next_event(unsigned long delta,
+ return 0;
+ }
+
++/*
++ * Setup the lapic timer in periodic or oneshot mode
++ */
+ static void lapic_timer_setup(enum clock_event_mode mode,
+ struct clock_event_device *evt)
+ {
+@@ -127,75 +277,261 @@ static void lapic_timer_broadcast(cpumask_t mask)
+ #endif
+ }
+
+-static void apic_pm_activate(void);
++/*
++ * Set up the local APIC timer for this CPU. Copy the initialized values

++ * of the boot CPU and register the clock event in the framework.
++ */
++static void setup_APIC_timer(void)
+{
-+ WARN_ON(!irqs_disabled());
++ struct clock_event_device *levt = &__get_cpu_var(lapic_events);
+
+-void apic_wait_icr_idle(void)
++ memcpy(levt, &lapic_clockevent, sizeof(*levt));
++ levt->cpumask = cpumask_of_cpu(smp_processor_id());
++
++ clockevents_register_device(levt);
++}
++
++/*
++ * In this function we calibrate APIC bus clocks to the external
++ * timer. Unfortunately we cannot use jiffies and the timer irq
++ * to calibrate, since some later bootup code depends on getting
++ * the first irq? Ugh.
++ *
++ * We want to do the calibration only once since we
++ * want to have local timer irqs in sync. CPUs connected
++ * by the same APIC bus have the very same bus frequency.
++ * And we want to have irqs off anyway, no accidental
++ * APIC irq that way.
++ */
++
++#define TICK_COUNT 100000000
++
++static void __init calibrate_APIC_clock(void)
+ {
+- while (apic_read(APIC_ICR) & APIC_ICR_BUSY)
+- cpu_relax();
++ unsigned apic, apic_start;
++ unsigned long tsc, tsc_start;
++ int result;
++
++ local_irq_disable();
+
+ /*
-+ * don't plug a stopped queue, it must be paired with blk_start_queue()
-+ * which will restart the queueing
++ * Put whatever arbitrary (but long enough) timeout
++ * value into the APIC clock, we just want to get the
++ * counter running for calibration.
++ *
++ * No interrupt enable !
+ */
-+ if (blk_queue_stopped(q))
-+ return;
++ __setup_APIC_LVTT(250000000, 0, 0);
+
-+ if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
-+ mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-+ blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
++ apic_start = apic_read(APIC_TMCCT);
++#ifdef CONFIG_X86_PM_TIMER
++ if (apic_calibrate_pmtmr && pmtmr_ioport) {
++ pmtimer_wait(5000); /* 5ms wait */
++ apic = apic_read(APIC_TMCCT);
++ result = (apic_start - apic) * 1000L / 5;
++ } else
++#endif
++ {
++ rdtscll(tsc_start);
++
++ do {
++ apic = apic_read(APIC_TMCCT);
++ rdtscll(tsc);
++ } while ((tsc - tsc_start) < TICK_COUNT &&
++ (apic_start - apic) < TICK_COUNT);
++
++ result = (apic_start - apic) * 1000L * tsc_khz /
++ (tsc - tsc_start);
+ }
-+}
+
-+EXPORT_SYMBOL(blk_plug_device);
++ local_irq_enable();
++
++ printk(KERN_DEBUG "APIC timer calibration result %d\n", result);
++
++ printk(KERN_INFO "Detected %d.%03d MHz APIC timer.\n",
++ result / 1000 / 1000, result / 1000 % 1000);
+
++ /* Calculate the scaled math multiplication factor */
++ lapic_clockevent.mult = div_sc(result, NSEC_PER_SEC, 32);
++ lapic_clockevent.max_delta_ns =
++ clockevent_delta2ns(0x7FFFFF, &lapic_clockevent);
++ lapic_clockevent.min_delta_ns =
++ clockevent_delta2ns(0xF, &lapic_clockevent);
++
++ calibration_result = result / HZ;
+ }
+
+-unsigned int safe_apic_wait_icr_idle(void)
+/*
-+ * remove the queue from the plugged list, if present. called with
-+ * queue lock held and interrupts disabled.
++ * Setup the boot APIC
++ *
++ * Calibrate and verify the result.
+ */
-+int blk_remove_plug(struct request_queue *q)
-+{
-+ WARN_ON(!irqs_disabled());
-+
-+ if (!test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
-+ return 0;
++void __init setup_boot_APIC_clock(void)
+ {
+- unsigned int send_status;
+- int timeout;
++ /*
++ * The local apic timer can be disabled via the kernel command line.
++ * Register the lapic timer as a dummy clock event source on SMP
++ * systems, so the broadcast mechanism is used. On UP systems simply
++ * ignore it.
++ */
++ if (disable_apic_timer) {
++ printk(KERN_INFO "Disabling APIC timer\n");
++ /* No broadcast on UP ! */
++ if (num_possible_cpus() > 1) {
++ lapic_clockevent.mult = 1;
++ setup_APIC_timer();
++ }
++ return;
++ }
+
+- timeout = 0;
+- do {
+- send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
+- if (!send_status)
+- break;
+- udelay(100);
+- } while (timeout++ < 1000);
++ printk(KERN_INFO "Using local APIC timer interrupts.\n");
++ calibrate_APIC_clock();
+
+- return send_status;
++ /*
++ * Do a sanity check on the APIC calibration result
++ */
++ if (calibration_result < (1000000 / HZ)) {
++ printk(KERN_WARNING
++ "APIC frequency too slow, disabling apic timer\n");
++ /* No broadcast on UP ! */
++ if (num_possible_cpus() > 1)
++ setup_APIC_timer();
++ return;
++ }
+
-+ del_timer(&q->unplug_timer);
-+ return 1;
-+}
++ /*
++ * If nmi_watchdog is set to IO_APIC, we need the
++ * PIT/HPET going. Otherwise register lapic as a dummy
++ * device.
++ */
++ if (nmi_watchdog != NMI_IO_APIC)
++ lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
++ else
++ printk(KERN_WARNING "APIC timer registered as dummy,"
++ " due to nmi_watchdog=1!\n");
+
-+EXPORT_SYMBOL(blk_remove_plug);
++ setup_APIC_timer();
+ }
+
+-void enable_NMI_through_LVT0 (void * dummy)
++/*
++ * AMD C1E enabled CPUs have a real nasty problem: Some BIOSes set the
++ * C1E flag only in the secondary CPU, so when we detect the wreckage
++ * we already have enabled the boot CPU local apic timer. Check if
++ * disable_apic_timer is set and the DUMMY flag is cleared. If yes,
++ * set the DUMMY flag again and force the broadcast mode in the
++ * clockevents layer.
++ */
++void __cpuinit check_boot_apic_timer_broadcast(void)
+ {
+- unsigned int v;
++ if (!disable_apic_timer ||
++ (lapic_clockevent.features & CLOCK_EVT_FEAT_DUMMY))
++ return;
+
+- /* unmask and set to NMI */
+- v = APIC_DM_NMI;
+- apic_write(APIC_LVT0, v);
++ printk(KERN_INFO "AMD C1E detected late. Force timer broadcast.\n");
++ lapic_clockevent.features |= CLOCK_EVT_FEAT_DUMMY;
+
++ local_irq_enable();
++ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_FORCE, &boot_cpu_id);
++ local_irq_disable();
+ }
+
+-int get_maxlvt(void)
++void __cpuinit setup_secondary_APIC_clock(void)
+ {
+- unsigned int v, maxlvt;
++ check_boot_apic_timer_broadcast();
++ setup_APIC_timer();
++}
+
+- v = apic_read(APIC_LVR);
+- maxlvt = GET_APIC_MAXLVT(v);
+- return maxlvt;
+/*
-+ * remove the plug and let it rip..
++ * The guts of the apic timer interrupt
+ */
-+void __generic_unplug_device(struct request_queue *q)
++static void local_apic_timer_interrupt(void)
+{
-+ if (unlikely(blk_queue_stopped(q)))
-+ return;
++ int cpu = smp_processor_id();
++ struct clock_event_device *evt = &per_cpu(lapic_events, cpu);
+
-+ if (!blk_remove_plug(q))
++ /*
++ * Normally we should not be here till LAPIC has been initialized but
++ * in some cases like kdump, it's possible that there is a pending LAPIC
++ * timer interrupt from previous kernel's context and is delivered in
++ * new kernel the moment interrupts are enabled.
++ *
++ * Interrupts are enabled early and LAPIC is set up much later, hence
++ * it's possible that when we get here evt->event_handler is NULL.
++ * Check for event_handler being NULL and discard the interrupt as
++ * spurious.
++ */
++ if (!evt->event_handler) {
++ printk(KERN_WARNING
++ "Spurious LAPIC timer interrupt on cpu %d\n", cpu);
++ /* Switch it off */
++ lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, evt);
+ return;
++ }
+
-+ q->request_fn(q);
-+}
-+EXPORT_SYMBOL(__generic_unplug_device);
++ /*
++ * the NMI deadlock-detector uses this.
++ */
++ add_pda(apic_timer_irqs, 1);
+
-+/**
-+ * generic_unplug_device - fire a request queue
-+ * @q: The &struct request_queue in question
++ evt->event_handler(evt);
+ }
+
+ /*
+- * 'what should we do if we get a hw irq event on an illegal vector'.
+- * each architecture has to answer this themselves.
++ * Local APIC timer interrupt. This is the most natural way for doing
++ * local interrupts, but local timer interrupts can be emulated by
++ * broadcast interrupts too. [in case the hw doesn't support APIC timers]
+ *
-+ * Description:
-+ * Linux uses plugging to build bigger requests queues before letting
-+ * the device have at them. If a queue is plugged, the I/O scheduler
-+ * is still adding and merging requests on the queue. Once the queue
-+ * gets unplugged, the request_fn defined for the queue is invoked and
-+ * transfers started.
-+ **/
-+void generic_unplug_device(struct request_queue *q)
-+{
-+ spin_lock_irq(q->queue_lock);
-+ __generic_unplug_device(q);
-+ spin_unlock_irq(q->queue_lock);
-+}
-+EXPORT_SYMBOL(generic_unplug_device);
-+
-+static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
-+ struct page *page)
-+{
-+ struct request_queue *q = bdi->unplug_io_data;
++ * [ if a single-CPU system runs an SMP kernel then we call the local
++ * interrupt as well. Thus we cannot inline the local irq ... ]
+ */
+-void ack_bad_irq(unsigned int irq)
++void smp_apic_timer_interrupt(struct pt_regs *regs)
+ {
+- printk("unexpected IRQ trap at vector %02x\n", irq);
++ struct pt_regs *old_regs = set_irq_regs(regs);
+
-+ blk_unplug(q);
+ /*
+- * Currently unexpected vectors happen only on SMP and APIC.
+- * We _must_ ack these because every local APIC has only N
+- * irq slots per priority level, and a 'hanging, unacked' IRQ
+- * holds up an irq slot - in excessive cases (when multiple
+- * unexpected vectors occur) that might lock up the APIC
+- * completely.
+- * But don't ack when the APIC is disabled. -AK
++ * NOTE! We'd better ACK the irq immediately,
++ * because timer handling can be slow.
+ */
+- if (!disable_apic)
+- ack_APIC_irq();
++ ack_APIC_irq();
++ /*
++ * update_process_times() expects us to have done irq_enter().
++ * Besides, if we don't, timer interrupts ignore the global
++ * interrupt lock, which is the WrongThing (tm) to do.
++ */
++ exit_idle();
++ irq_enter();
++ local_apic_timer_interrupt();
++ irq_exit();
++ set_irq_regs(old_regs);
+}
+
-+void blk_unplug_work(struct work_struct *work)
++int setup_profiling_timer(unsigned int multiplier)
+{
-+ struct request_queue *q =
-+ container_of(work, struct request_queue, unplug_work);
++ return -EINVAL;
+ }
+
+
-+ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
-+ q->rq.count[READ] + q->rq.count[WRITE]);
++/*
++ * Local APIC start and shutdown
++ */
+
-+ q->unplug_fn(q);
++/**
++ * clear_local_APIC - shutdown the local APIC
++ *
++ * This is called when a CPU is disabled and before rebooting, so the state of
++ * the local APIC has no dangling leftovers. Also used to clean out any BIOS
++ * leftovers during boot.
++ */
+ void clear_local_APIC(void)
+ {
+- int maxlvt;
+- unsigned int v;
++ int maxlvt = lapic_get_maxlvt();
++ u32 v;
+
+- maxlvt = get_maxlvt();
++ /* APIC hasn't been mapped yet */
++ if (!apic_phys)
++ return;
+
++ maxlvt = lapic_get_maxlvt();
+ /*
+ * Masking an LVT entry can trigger a local APIC error
+ * if the vector is zero. Mask LVTERR first to prevent this.
+@@ -233,45 +569,9 @@ void clear_local_APIC(void)
+ apic_read(APIC_ESR);
+ }
+
+-void disconnect_bsp_APIC(int virt_wire_setup)
+-{
+- /* Go back to Virtual Wire compatibility mode */
+- unsigned long value;
+-
+- /* For the spurious interrupt use vector F, and enable it */
+- value = apic_read(APIC_SPIV);
+- value &= ~APIC_VECTOR_MASK;
+- value |= APIC_SPIV_APIC_ENABLED;
+- value |= 0xf;
+- apic_write(APIC_SPIV, value);
+-
+- if (!virt_wire_setup) {
+- /*
+- * For LVT0 make it edge triggered, active high,
+- * external and enabled
+- */
+- value = apic_read(APIC_LVT0);
+- value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
+- APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
+- APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED );
+- value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
+- value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
+- apic_write(APIC_LVT0, value);
+- } else {
+- /* Disable LVT0 */
+- apic_write(APIC_LVT0, APIC_LVT_MASKED);
+- }
+-
+- /* For LVT1 make it edge triggered, active high, nmi and enabled */
+- value = apic_read(APIC_LVT1);
+- value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
+- APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
+- APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
+- value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
+- value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_NMI);
+- apic_write(APIC_LVT1, value);
+-}
+-
++/**
++ * disable_local_APIC - clear and disable the local APIC
++ */
+ void disable_local_APIC(void)
+ {
+ unsigned int value;
+@@ -333,7 +633,7 @@ int __init verify_local_APIC(void)
+ reg1 = GET_APIC_VERSION(reg0);
+ if (reg1 == 0x00 || reg1 == 0xff)
+ return 0;
+- reg1 = get_maxlvt();
++ reg1 = lapic_get_maxlvt();
+ if (reg1 < 0x02 || reg1 == 0xff)
+ return 0;
+
+@@ -355,18 +655,20 @@ int __init verify_local_APIC(void)
+ * compatibility mode, but most boxes are anymore.
+ */
+ reg0 = apic_read(APIC_LVT0);
+- apic_printk(APIC_DEBUG,"Getting LVT0: %x\n", reg0);
++ apic_printk(APIC_DEBUG, "Getting LVT0: %x\n", reg0);
+ reg1 = apic_read(APIC_LVT1);
+ apic_printk(APIC_DEBUG, "Getting LVT1: %x\n", reg1);
+
+ return 1;
+ }
+
++/**
++ * sync_Arb_IDs - synchronize APIC bus arbitration IDs
++ */
+ void __init sync_Arb_IDs(void)
+ {
+ /* Unsupported on P4 - see Intel Dev. Manual Vol. 3, Ch. 8.6.1 */
+- unsigned int ver = GET_APIC_VERSION(apic_read(APIC_LVR));
+- if (ver >= 0x14) /* P4 or higher */
++ if (modern_apic())
+ return;
+
+ /*
+@@ -418,9 +720,12 @@ void __init init_bsp_APIC(void)
+ apic_write(APIC_LVT1, value);
+ }
+
+-void __cpuinit setup_local_APIC (void)
++/**
++ * setup_local_APIC - setup the local APIC
++ */
++void __cpuinit setup_local_APIC(void)
+ {
+- unsigned int value, maxlvt;
++ unsigned int value;
+ int i, j;
+
+ value = apic_read(APIC_LVR);
+@@ -516,30 +821,217 @@ void __cpuinit setup_local_APIC (void)
+ else
+ value = APIC_DM_NMI | APIC_LVT_MASKED;
+ apic_write(APIC_LVT1, value);
+}
-+
-+void blk_unplug_timeout(unsigned long data)
+
+- {
+- unsigned oldvalue;
+- maxlvt = get_maxlvt();
+- oldvalue = apic_read(APIC_ESR);
+- value = ERROR_APIC_VECTOR; // enables sending errors
+- apic_write(APIC_LVTERR, value);
+- /*
+- * spec says clear errors after enabling vector.
+- */
+- if (maxlvt > 3)
+- apic_write(APIC_ESR, 0);
+- value = apic_read(APIC_ESR);
+- if (value != oldvalue)
+- apic_printk(APIC_VERBOSE,
+- "ESR value after enabling vector: %08x, after %08x\n",
+- oldvalue, value);
+- }
++void __cpuinit lapic_setup_esr(void)
+{
-+ struct request_queue *q = (struct request_queue *)data;
++ unsigned maxlvt = lapic_get_maxlvt();
+
-+ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
-+ q->rq.count[READ] + q->rq.count[WRITE]);
++ apic_write(APIC_LVTERR, ERROR_APIC_VECTOR);
++ /*
++ * spec says clear errors after enabling vector.
++ */
++ if (maxlvt > 3)
++ apic_write(APIC_ESR, 0);
++}
+
++void __cpuinit end_local_APIC_setup(void)
++{
++ lapic_setup_esr();
+ nmi_watchdog_default();
+ setup_apic_nmi_watchdog(NULL);
+ apic_pm_activate();
+ }
+
++/*
++ * Detect and enable local APICs on non-SMP boards.
++ * Original code written by Keir Fraser.
++ * On AMD64 we trust the BIOS - if it says no APIC it is likely
++ * not correctly set up (usually the APIC timer won't work etc.)
++ */
++static int __init detect_init_APIC(void)
++{
++ if (!cpu_has_apic) {
++ printk(KERN_INFO "No local APIC present\n");
++ return -1;
++ }
+
-+ kblockd_schedule_work(&q->unplug_work);
++ mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
++ boot_cpu_id = 0;
++ return 0;
+}
+
-+void blk_unplug(struct request_queue *q)
++/**
++ * init_apic_mappings - initialize APIC mappings
++ */
++void __init init_apic_mappings(void)
+{
+ /*
-+ * devices don't necessarily have an ->unplug_fn defined
++ * If no local APIC can be found then set up a fake all
++ * zeroes page to simulate the local APIC and another
++ * one for the IO-APIC.
+ */
-+ if (q->unplug_fn) {
-+ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
-+ q->rq.count[READ] + q->rq.count[WRITE]);
++ if (!smp_found_config && detect_init_APIC()) {
++ apic_phys = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
++ apic_phys = __pa(apic_phys);
++ } else
++ apic_phys = mp_lapic_addr;
+
-+ q->unplug_fn(q);
-+ }
++ set_fixmap_nocache(FIX_APIC_BASE, apic_phys);
++ apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
++ APIC_BASE, apic_phys);
++
++ /* Put local APIC into the resource map. */
++ lapic_resource.start = apic_phys;
++ lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
++ insert_resource(&iomem_resource, &lapic_resource);
++
++ /*
++ * Fetch the APIC ID of the BSP in case we have a
++ * default configuration (or the MP table is broken).
++ */
++ boot_cpu_id = GET_APIC_ID(apic_read(APIC_ID));
+}
-+EXPORT_SYMBOL(blk_unplug);
+
-+/**
-+ * blk_start_queue - restart a previously stopped queue
-+ * @q: The &struct request_queue in question
-+ *
-+ * Description:
-+ * blk_start_queue() will clear the stop flag on the queue, and call
-+ * the request_fn for the queue if it was in a stopped state when
-+ * entered. Also see blk_stop_queue(). Queue lock must be held.
-+ **/
-+void blk_start_queue(struct request_queue *q)
++/*
++ * This initializes the IO-APIC and APIC hardware if this is
++ * a UP kernel.
++ */
++int __init APIC_init_uniprocessor(void)
+{
-+ WARN_ON(!irqs_disabled());
++ if (disable_apic) {
++ printk(KERN_INFO "Apic disabled\n");
++ return -1;
++ }
++ if (!cpu_has_apic) {
++ disable_apic = 1;
++ printk(KERN_INFO "Apic disabled by BIOS\n");
++ return -1;
++ }
+
-+ clear_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
++ verify_local_APIC();
++
++ phys_cpu_present_map = physid_mask_of_physid(boot_cpu_id);
++ apic_write(APIC_ID, SET_APIC_ID(boot_cpu_id));
++
++ setup_local_APIC();
+
+ /*
-+ * one level of recursion is ok and is much faster than kicking
-+ * the unplug handling
++ * Now enable IO-APICs, actually call clear_IO_APIC
++ * We need clear_IO_APIC before enabling vector on BP
+ */
-+ if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
-+ q->request_fn(q);
-+ clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
-+ } else {
-+ blk_plug_device(q);
-+ kblockd_schedule_work(&q->unplug_work);
-+ }
++ if (!skip_ioapic_setup && nr_ioapics)
++ enable_IO_APIC();
++
++ end_local_APIC_setup();
++
++ if (smp_found_config && !skip_ioapic_setup && nr_ioapics)
++ setup_IO_APIC();
++ else
++ nr_ioapics = 0;
++ setup_boot_APIC_clock();
++ check_nmi_watchdog();
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_start_queue);
++/*
++ * Local APIC interrupts
++ */
+
-+/**
-+ * blk_stop_queue - stop a queue
-+ * @q: The &struct request_queue in question
-+ *
-+ * Description:
-+ * The Linux block layer assumes that a block driver will consume all
-+ * entries on the request queue when the request_fn strategy is called.
-+ * Often this will not happen, because of hardware limitations (queue
-+ * depth settings). If a device driver gets a 'queue full' response,
-+ * or if it simply chooses not to queue more I/O at one point, it can
-+ * call this function to prevent the request_fn from being called until
-+ * the driver has signalled it's ready to go again. This happens by calling
-+ * blk_start_queue() to restart queue operations. Queue lock must be held.
-+ **/
-+void blk_stop_queue(struct request_queue *q)
++/*
++ * This interrupt should _never_ happen with our APIC/SMP architecture
++ */
++asmlinkage void smp_spurious_interrupt(void)
+{
-+ blk_remove_plug(q);
-+ set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
++ unsigned int v;
++ exit_idle();
++ irq_enter();
++ /*
++ * Check if this really is a spurious interrupt and ACK it
++ * if it is a vectored one. Just in case...
++ * Spurious interrupts should not be ACKed.
++ */
++ v = apic_read(APIC_ISR + ((SPURIOUS_APIC_VECTOR & ~0x1f) >> 1));
++ if (v & (1 << (SPURIOUS_APIC_VECTOR & 0x1f)))
++ ack_APIC_irq();
++
++ add_pda(irq_spurious_count, 1);
++ irq_exit();
+}
-+EXPORT_SYMBOL(blk_stop_queue);
+
-+/**
-+ * blk_sync_queue - cancel any pending callbacks on a queue
-+ * @q: the queue
-+ *
-+ * Description:
-+ * The block layer may perform asynchronous callback activity
-+ * on a queue, such as calling the unplug function after a timeout.
-+ * A block device may call blk_sync_queue to ensure that any
-+ * such activity is cancelled, thus allowing it to release resources
-+ * that the callbacks might use. The caller must already have made sure
-+ * that its ->make_request_fn will not re-add plugging prior to calling
-+ * this function.
-+ *
-+ */
-+void blk_sync_queue(struct request_queue *q)
++/*
++ * This interrupt should never happen with our APIC/SMP architecture
++ */
++asmlinkage void smp_error_interrupt(void)
++{
++ unsigned int v, v1;
++
++ exit_idle();
++ irq_enter();
++ /* First tickle the hardware, only then report what went on. -- REW */
++ v = apic_read(APIC_ESR);
++ apic_write(APIC_ESR, 0);
++ v1 = apic_read(APIC_ESR);
++ ack_APIC_irq();
++ atomic_inc(&irq_err_count);
++
++ /* Here is what the APIC error bits mean:
++ 0: Send CS error
++ 1: Receive CS error
++ 2: Send accept error
++ 3: Receive accept error
++ 4: Reserved
++ 5: Send illegal vector
++ 6: Received illegal vector
++ 7: Illegal register address
++ */
++ printk(KERN_DEBUG "APIC error on CPU%d: %02x(%02x)\n",
++ smp_processor_id(), v, v1);
++ irq_exit();
++}
++
++void disconnect_bsp_APIC(int virt_wire_setup)
+{
-+ del_timer_sync(&q->unplug_timer);
-+ kblockd_flush_work(&q->unplug_work);
++ /* Go back to Virtual Wire compatibility mode */
++ unsigned long value;
++
++ /* For the spurious interrupt use vector F, and enable it */
++ value = apic_read(APIC_SPIV);
++ value &= ~APIC_VECTOR_MASK;
++ value |= APIC_SPIV_APIC_ENABLED;
++ value |= 0xf;
++ apic_write(APIC_SPIV, value);
++
++ if (!virt_wire_setup) {
++ /*
++ * For LVT0 make it edge triggered, active high,
++ * external and enabled
++ */
++ value = apic_read(APIC_LVT0);
++ value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
++ APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
++ APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
++ value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
++ value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
++ apic_write(APIC_LVT0, value);
++ } else {
++ /* Disable LVT0 */
++ apic_write(APIC_LVT0, APIC_LVT_MASKED);
++ }
++
++ /* For LVT1 make it edge triggered, active high, nmi and enabled */
++ value = apic_read(APIC_LVT1);
++ value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
++ APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
++ APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
++ value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
++ value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_NMI);
++ apic_write(APIC_LVT1, value);
+}
-+EXPORT_SYMBOL(blk_sync_queue);
+
-+/**
-+ * blk_run_queue - run a single device queue
-+ * @q: The queue to run
++/*
++ * Power management
+ */
-+void blk_run_queue(struct request_queue *q)
-+{
-+ unsigned long flags;
+ #ifdef CONFIG_PM
+
+ static struct {
+@@ -571,7 +1063,7 @@ static int lapic_suspend(struct sys_device *dev, pm_message_t state)
+ if (!apic_pm_state.active)
+ return 0;
+
+- maxlvt = get_maxlvt();
++ maxlvt = lapic_get_maxlvt();
+
+ apic_pm_state.apic_id = apic_read(APIC_ID);
+ apic_pm_state.apic_taskpri = apic_read(APIC_TASKPRI);
+@@ -605,7 +1097,7 @@ static int lapic_resume(struct sys_device *dev)
+ if (!apic_pm_state.active)
+ return 0;
+
+- maxlvt = get_maxlvt();
++ maxlvt = lapic_get_maxlvt();
+
+ local_irq_save(flags);
+ rdmsr(MSR_IA32_APICBASE, l, h);
+@@ -639,14 +1131,14 @@ static int lapic_resume(struct sys_device *dev)
+ }
+
+ static struct sysdev_class lapic_sysclass = {
+- set_kset_name("lapic"),
++ .name = "lapic",
+ .resume = lapic_resume,
+ .suspend = lapic_suspend,
+ };
+
+ static struct sys_device device_lapic = {
+- .id = 0,
+- .cls = &lapic_sysclass,
++ .id = 0,
++ .cls = &lapic_sysclass,
+ };
+
+ static void __cpuinit apic_pm_activate(void)
+@@ -657,9 +1149,11 @@ static void __cpuinit apic_pm_activate(void)
+ static int __init init_lapic_sysfs(void)
+ {
+ int error;
+
-+ spin_lock_irqsave(q->queue_lock, flags);
-+ blk_remove_plug(q);
+ if (!cpu_has_apic)
+ return 0;
+ /* XXX: remove suspend/resume procs if !apic_pm_state.active? */
+
-+ /*
-+ * Only recurse once to avoid overrunning the stack, let the unplug
-+ * handling reinvoke the handler shortly if we already got there.
-+ */
-+ if (!elv_queue_empty(q)) {
-+ if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
-+ q->request_fn(q);
-+ clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
-+ } else {
-+ blk_plug_device(q);
-+ kblockd_schedule_work(&q->unplug_work);
+ error = sysdev_class_register(&lapic_sysclass);
+ if (!error)
+ error = sysdev_register(&device_lapic);
+@@ -673,423 +1167,6 @@ static void apic_pm_activate(void) { }
+
+ #endif /* CONFIG_PM */
+
+-static int __init apic_set_verbosity(char *str)
+-{
+- if (str == NULL) {
+- skip_ioapic_setup = 0;
+- ioapic_force = 1;
+- return 0;
+- }
+- if (strcmp("debug", str) == 0)
+- apic_verbosity = APIC_DEBUG;
+- else if (strcmp("verbose", str) == 0)
+- apic_verbosity = APIC_VERBOSE;
+- else {
+- printk(KERN_WARNING "APIC Verbosity level %s not recognised"
+- " use apic=verbose or apic=debug\n", str);
+- return -EINVAL;
+- }
+-
+- return 0;
+-}
+-early_param("apic", apic_set_verbosity);
+-
+-/*
+- * Detect and enable local APICs on non-SMP boards.
+- * Original code written by Keir Fraser.
+- * On AMD64 we trust the BIOS - if it says no APIC it is likely
+- * not correctly set up (usually the APIC timer won't work etc.)
+- */
+-
+-static int __init detect_init_APIC (void)
+-{
+- if (!cpu_has_apic) {
+- printk(KERN_INFO "No local APIC present\n");
+- return -1;
+- }
+-
+- mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
+- boot_cpu_id = 0;
+- return 0;
+-}
+-
+-#ifdef CONFIG_X86_IO_APIC
+-static struct resource * __init ioapic_setup_resources(void)
+-{
+-#define IOAPIC_RESOURCE_NAME_SIZE 11
+- unsigned long n;
+- struct resource *res;
+- char *mem;
+- int i;
+-
+- if (nr_ioapics <= 0)
+- return NULL;
+-
+- n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource);
+- n *= nr_ioapics;
+-
+- mem = alloc_bootmem(n);
+- res = (void *)mem;
+-
+- if (mem != NULL) {
+- memset(mem, 0, n);
+- mem += sizeof(struct resource) * nr_ioapics;
+-
+- for (i = 0; i < nr_ioapics; i++) {
+- res[i].name = mem;
+- res[i].flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+- sprintf(mem, "IOAPIC %u", i);
+- mem += IOAPIC_RESOURCE_NAME_SIZE;
+- }
+- }
+-
+- ioapic_resources = res;
+-
+- return res;
+-}
+-
+-static int __init ioapic_insert_resources(void)
+-{
+- int i;
+- struct resource *r = ioapic_resources;
+-
+- if (!r) {
+- printk("IO APIC resources could be not be allocated.\n");
+- return -1;
+- }
+-
+- for (i = 0; i < nr_ioapics; i++) {
+- insert_resource(&iomem_resource, r);
+- r++;
+- }
+-
+- return 0;
+-}
+-
+-/* Insert the IO APIC resources after PCI initialization has occured to handle
+- * IO APICS that are mapped in on a BAR in PCI space. */
+-late_initcall(ioapic_insert_resources);
+-#endif
+-
+-void __init init_apic_mappings(void)
+-{
+- unsigned long apic_phys;
+-
+- /*
+- * If no local APIC can be found then set up a fake all
+- * zeroes page to simulate the local APIC and another
+- * one for the IO-APIC.
+- */
+- if (!smp_found_config && detect_init_APIC()) {
+- apic_phys = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
+- apic_phys = __pa(apic_phys);
+- } else
+- apic_phys = mp_lapic_addr;
+-
+- set_fixmap_nocache(FIX_APIC_BASE, apic_phys);
+- apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
+- APIC_BASE, apic_phys);
+-
+- /* Put local APIC into the resource map. */
+- lapic_resource.start = apic_phys;
+- lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
+- insert_resource(&iomem_resource, &lapic_resource);
+-
+- /*
+- * Fetch the APIC ID of the BSP in case we have a
+- * default configuration (or the MP table is broken).
+- */
+- boot_cpu_id = GET_APIC_ID(apic_read(APIC_ID));
+-
+- {
+- unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0;
+- int i;
+- struct resource *ioapic_res;
+-
+- ioapic_res = ioapic_setup_resources();
+- for (i = 0; i < nr_ioapics; i++) {
+- if (smp_found_config) {
+- ioapic_phys = mp_ioapics[i].mpc_apicaddr;
+- } else {
+- ioapic_phys = (unsigned long)
+- alloc_bootmem_pages(PAGE_SIZE);
+- ioapic_phys = __pa(ioapic_phys);
+- }
+- set_fixmap_nocache(idx, ioapic_phys);
+- apic_printk(APIC_VERBOSE,
+- "mapped IOAPIC to %016lx (%016lx)\n",
+- __fix_to_virt(idx), ioapic_phys);
+- idx++;
+-
+- if (ioapic_res != NULL) {
+- ioapic_res->start = ioapic_phys;
+- ioapic_res->end = ioapic_phys + (4 * 1024) - 1;
+- ioapic_res++;
+- }
+- }
+- }
+-}
+-
+-/*
+- * This function sets up the local APIC timer, with a timeout of
+- * 'clocks' APIC bus clock. During calibration we actually call
+- * this function twice on the boot CPU, once with a bogus timeout
+- * value, second time for real. The other (noncalibrating) CPUs
+- * call this function only once, with the real, calibrated value.
+- *
+- * We do reads before writes even if unnecessary, to get around the
+- * P5 APIC double write bug.
+- */
+-
+-static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
+-{
+- unsigned int lvtt_value, tmp_value;
+-
+- lvtt_value = LOCAL_TIMER_VECTOR;
+- if (!oneshot)
+- lvtt_value |= APIC_LVT_TIMER_PERIODIC;
+- if (!irqen)
+- lvtt_value |= APIC_LVT_MASKED;
+-
+- apic_write(APIC_LVTT, lvtt_value);
+-
+- /*
+- * Divide PICLK by 16
+- */
+- tmp_value = apic_read(APIC_TDCR);
+- apic_write(APIC_TDCR, (tmp_value
+- & ~(APIC_TDR_DIV_1 | APIC_TDR_DIV_TMBASE))
+- | APIC_TDR_DIV_16);
+-
+- if (!oneshot)
+- apic_write(APIC_TMICT, clocks);
+-}
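__setup_APIC_LVTT() above composes the LVT timer register from the vector plus mode and mask flags. A minimal sketch of that flag composition; the bit positions match the kernel's `APIC_LVT_TIMER_PERIODIC`/`APIC_LVT_MASKED` constants at the time of this patch, but treat them as illustrative rather than authoritative:

```c
#include <stdint.h>

/* Illustrative LVT bit positions (periodic = bit 17, masked = bit 16). */
#define LVT_TIMER_PERIODIC (1u << 17)
#define LVT_MASKED         (1u << 16)

/* Vector in the low byte, periodic unless one-shot, masked unless irqen,
   mirroring how the removed function builds lvtt_value. */
uint32_t make_lvtt(uint8_t vector, int oneshot, int irqen)
{
    uint32_t v = vector;

    if (!oneshot)
        v |= LVT_TIMER_PERIODIC;
    if (!irqen)
        v |= LVT_MASKED;
    return v;
}
```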
+-
+-static void setup_APIC_timer(void)
+-{
+- struct clock_event_device *levt = &__get_cpu_var(lapic_events);
+-
+- memcpy(levt, &lapic_clockevent, sizeof(*levt));
+- levt->cpumask = cpumask_of_cpu(smp_processor_id());
+-
+- clockevents_register_device(levt);
+-}
+-
+-/*
+- * In this function we calibrate APIC bus clocks to the external
+- * timer. Unfortunately we cannot use jiffies and the timer irq
+- * to calibrate, since some later bootup code depends on getting
+- * the first irq? Ugh.
+- *
+- * We want to do the calibration only once since we
+- * want to have local timer irqs syncron. CPUs connected
+- * by the same APIC bus have the very same bus frequency.
+- * And we want to have irqs off anyways, no accidental
+- * APIC irq that way.
+- */
+-
+-#define TICK_COUNT 100000000
+-
+-static void __init calibrate_APIC_clock(void)
+-{
+- unsigned apic, apic_start;
+- unsigned long tsc, tsc_start;
+- int result;
+-
+- local_irq_disable();
+-
+- /*
+- * Put whatever arbitrary (but long enough) timeout
+- * value into the APIC clock, we just want to get the
+- * counter running for calibration.
+- *
+- * No interrupt enable !
+- */
+- __setup_APIC_LVTT(250000000, 0, 0);
+-
+- apic_start = apic_read(APIC_TMCCT);
+-#ifdef CONFIG_X86_PM_TIMER
+- if (apic_calibrate_pmtmr && pmtmr_ioport) {
+- pmtimer_wait(5000); /* 5ms wait */
+- apic = apic_read(APIC_TMCCT);
+- result = (apic_start - apic) * 1000L / 5;
+- } else
+-#endif
+- {
+- rdtscll(tsc_start);
+-
+- do {
+- apic = apic_read(APIC_TMCCT);
+- rdtscll(tsc);
+- } while ((tsc - tsc_start) < TICK_COUNT &&
+- (apic_start - apic) < TICK_COUNT);
+-
+- result = (apic_start - apic) * 1000L * tsc_khz /
+- (tsc - tsc_start);
+- }
+-
+- local_irq_enable();
+-
+- printk(KERN_DEBUG "APIC timer calibration result %d\n", result);
+-
+- printk(KERN_INFO "Detected %d.%03d MHz APIC timer.\n",
+- result / 1000 / 1000, result / 1000 % 1000);
+-
+- /* Calculate the scaled math multiplication factor */
+- lapic_clockevent.mult = div_sc(result, NSEC_PER_SEC, 32);
+- lapic_clockevent.max_delta_ns =
+- clockevent_delta2ns(0x7FFFFF, &lapic_clockevent);
+- lapic_clockevent.min_delta_ns =
+- clockevent_delta2ns(0xF, &lapic_clockevent);
+-
+- calibration_result = result / HZ;
+-}
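The TSC branch of calibrate_APIC_clock() above derives the APIC timer frequency from two deltas sampled over the same interval. A sketch of just that arithmetic (the function name is mine; the formula is the one in the removed code):

```c
#include <stdint.h>

/* APIC timer frequency in Hz from the counts the calibration loop
   collects: apic_delta countdown ticks observed over tsc_delta TSC
   cycles, with the TSC frequency supplied in kHz (tsc_khz). */
uint64_t apic_timer_hz(uint64_t apic_delta, uint64_t tsc_delta,
                       uint64_t tsc_khz)
{
    return apic_delta * 1000u * tsc_khz / tsc_delta;
}
```

Dividing the result by HZ then yields the per-jiffy `calibration_result` the removed function stores.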
+-
+-void __init setup_boot_APIC_clock (void)
+-{
+- /*
+- * The local apic timer can be disabled via the kernel commandline.
+- * Register the lapic timer as a dummy clock event source on SMP
+- * systems, so the broadcast mechanism is used. On UP systems simply
+- * ignore it.
+- */
+- if (disable_apic_timer) {
+- printk(KERN_INFO "Disabling APIC timer\n");
+- /* No broadcast on UP ! */
+- if (num_possible_cpus() > 1)
+- setup_APIC_timer();
+- return;
+- }
+-
+- printk(KERN_INFO "Using local APIC timer interrupts.\n");
+- calibrate_APIC_clock();
+-
+- /*
+- * If nmi_watchdog is set to IO_APIC, we need the
+- * PIT/HPET going. Otherwise register lapic as a dummy
+- * device.
+- */
+- if (nmi_watchdog != NMI_IO_APIC)
+- lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
+- else
+- printk(KERN_WARNING "APIC timer registered as dummy,"
+- " due to nmi_watchdog=1!\n");
+-
+- setup_APIC_timer();
+-}
+-
+-/*
+- * AMD C1E enabled CPUs have a real nasty problem: Some BIOSes set the
+- * C1E flag only in the secondary CPU, so when we detect the wreckage
+- * we already have enabled the boot CPU local apic timer. Check, if
+- * disable_apic_timer is set and the DUMMY flag is cleared. If yes,
+- * set the DUMMY flag again and force the broadcast mode in the
+- * clockevents layer.
+- */
+-void __cpuinit check_boot_apic_timer_broadcast(void)
+-{
+- if (!disable_apic_timer ||
+- (lapic_clockevent.features & CLOCK_EVT_FEAT_DUMMY))
+- return;
+-
+- printk(KERN_INFO "AMD C1E detected late. Force timer broadcast.\n");
+- lapic_clockevent.features |= CLOCK_EVT_FEAT_DUMMY;
+-
+- local_irq_enable();
+- clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_FORCE, &boot_cpu_id);
+- local_irq_disable();
+-}
+-
+-void __cpuinit setup_secondary_APIC_clock(void)
+-{
+- check_boot_apic_timer_broadcast();
+- setup_APIC_timer();
+-}
+-
+-int setup_profiling_timer(unsigned int multiplier)
+-{
+- return -EINVAL;
+-}
+-
+-void setup_APIC_extended_lvt(unsigned char lvt_off, unsigned char vector,
+- unsigned char msg_type, unsigned char mask)
+-{
+- unsigned long reg = (lvt_off << 4) + K8_APIC_EXT_LVT_BASE;
+- unsigned int v = (mask << 16) | (msg_type << 8) | vector;
+- apic_write(reg, v);
+-}
+-
+-/*
+- * Local timer interrupt handler. It does both profiling and
+- * process statistics/rescheduling.
+- *
+- * We do profiling in every local tick, statistics/rescheduling
+- * happen only every 'profiling multiplier' ticks. The default
+- * multiplier is 1 and it can be changed by writing the new multiplier
+- * value into /proc/profile.
+- */
+-
+-void smp_local_timer_interrupt(void)
+-{
+- int cpu = smp_processor_id();
+- struct clock_event_device *evt = &per_cpu(lapic_events, cpu);
+-
+- /*
+- * Normally we should not be here till LAPIC has been initialized but
+- * in some cases like kdump, its possible that there is a pending LAPIC
+- * timer interrupt from previous kernel's context and is delivered in
+- * new kernel the moment interrupts are enabled.
+- *
+- * Interrupts are enabled early and LAPIC is setup much later, hence
+- * its possible that when we get here evt->event_handler is NULL.
+- * Check for event_handler being NULL and discard the interrupt as
+- * spurious.
+- */
+- if (!evt->event_handler) {
+- printk(KERN_WARNING
+- "Spurious LAPIC timer interrupt on cpu %d\n", cpu);
+- /* Switch it off */
+- lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, evt);
+- return;
+- }
+-
+- /*
+- * the NMI deadlock-detector uses this.
+- */
+- add_pda(apic_timer_irqs, 1);
+-
+- evt->event_handler(evt);
+-}
+-
+-/*
+- * Local APIC timer interrupt. This is the most natural way for doing
+- * local interrupts, but local timer interrupts can be emulated by
+- * broadcast interrupts too. [in case the hw doesn't support APIC timers]
+- *
+- * [ if a single-CPU system runs an SMP kernel then we call the local
+- * interrupt as well. Thus we cannot inline the local irq ... ]
+- */
+-void smp_apic_timer_interrupt(struct pt_regs *regs)
+-{
+- struct pt_regs *old_regs = set_irq_regs(regs);
+-
+- /*
+- * NOTE! We'd better ACK the irq immediately,
+- * because timer handling can be slow.
+- */
+- ack_APIC_irq();
+- /*
+- * update_process_times() expects us to have done irq_enter().
+- * Besides, if we don't timer interrupts ignore the global
+- * interrupt lock, which is the WrongThing (tm) to do.
+- */
+- exit_idle();
+- irq_enter();
+- smp_local_timer_interrupt();
+- irq_exit();
+- set_irq_regs(old_regs);
+-}
+-
+ /*
+ * apic_is_clustered_box() -- Check if we can expect good TSC
+ *
+@@ -1103,21 +1180,34 @@ __cpuinit int apic_is_clustered_box(void)
+ {
+ int i, clusters, zeros;
+ unsigned id;
++ u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
+ DECLARE_BITMAP(clustermap, NUM_APIC_CLUSTERS);
+
+ bitmap_zero(clustermap, NUM_APIC_CLUSTERS);
+
+ for (i = 0; i < NR_CPUS; i++) {
+- id = bios_cpu_apicid[i];
++ /* are we being called early in kernel startup? */
++ if (bios_cpu_apicid) {
++ id = bios_cpu_apicid[i];
++ }
++ else if (i < nr_cpu_ids) {
++ if (cpu_present(i))
++ id = per_cpu(x86_bios_cpu_apicid, i);
++ else
++ continue;
+ }
-+ }
++ else
++ break;
+
-+ spin_unlock_irqrestore(q->queue_lock, flags);
-+}
-+EXPORT_SYMBOL(blk_run_queue);
+ if (id != BAD_APICID)
+ __set_bit(APIC_CLUSTERID(id), clustermap);
+ }
+
+ /* Problem: Partially populated chassis may not have CPUs in some of
+ * the APIC clusters they have been allocated. Only present CPUs have
+- * bios_cpu_apicid entries, thus causing zeroes in the bitmap. Since
+- * clusters are allocated sequentially, count zeros only if they are
+- * bounded by ones.
++ * x86_bios_cpu_apicid entries, thus causing zeroes in the bitmap.
++ * Since clusters are allocated sequentially, count zeros only if
++ * they are bounded by ones.
+ */
+ clusters = 0;
+ zeros = 0;
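The counting loop itself falls outside this hunk, but the comment above describes it: count set bits as clusters, and count zero-gaps only when they are bounded by set bits (so holes from partially populated chassis are not lost, while trailing zeros are). A reconstruction under that assumption, not a copy of the kernel code:

```c
/* Count clusters in a bitmap, including zero-gaps bounded by set bits,
   per the partially-populated-chassis heuristic described above. */
int count_clusters(const unsigned char *map, int nbits)
{
    int clusters = 0, zeros = 0;

    for (int i = 0; i < nbits; i++) {
        if (map[i / 8] & (1u << (i % 8))) {
            clusters += 1 + zeros;   /* the bit itself plus any bounded gap */
            zeros = 0;
        } else {
            zeros++;                 /* trailing zeros are never added */
        }
    }
    return clusters;
}
```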
+@@ -1138,96 +1228,33 @@ __cpuinit int apic_is_clustered_box(void)
+ }
+
+ /*
+- * This interrupt should _never_ happen with our APIC/SMP architecture
+- */
+-asmlinkage void smp_spurious_interrupt(void)
+-{
+- unsigned int v;
+- exit_idle();
+- irq_enter();
+- /*
+- * Check if this really is a spurious interrupt and ACK it
+- * if it is a vectored one. Just in case...
+- * Spurious interrupts should not be ACKed.
+- */
+- v = apic_read(APIC_ISR + ((SPURIOUS_APIC_VECTOR & ~0x1f) >> 1));
+- if (v & (1 << (SPURIOUS_APIC_VECTOR & 0x1f)))
+- ack_APIC_irq();
+-
+- add_pda(irq_spurious_count, 1);
+- irq_exit();
+-}
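The ISR read in the removed smp_spurious_interrupt() looks cryptic but is just index math: the APIC ISR is 256 bits spread across eight 32-bit registers spaced 0x10 bytes apart, so `(vector & ~0x1f) >> 1` is `(vector / 32) * 0x10`. Helper names here are mine:

```c
/* Byte offset of the ISR register holding a vector's bit, and the bit
   position within it, reproducing the math in smp_spurious_interrupt(). */
unsigned int isr_reg_offset(unsigned int vector)
{
    return (vector & ~0x1fu) >> 1;   /* (vector / 32) * 0x10 */
}

unsigned int isr_bit(unsigned int vector)
{
    return vector & 0x1fu;           /* bit within that 32-bit register */
}
```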
+-
+-/*
+- * This interrupt should never happen with our APIC/SMP architecture
++ * APIC command line parameters
+ */
+-
+-asmlinkage void smp_error_interrupt(void)
+-{
+- unsigned int v, v1;
+-
+- exit_idle();
+- irq_enter();
+- /* First tickle the hardware, only then report what went on. -- REW */
+- v = apic_read(APIC_ESR);
+- apic_write(APIC_ESR, 0);
+- v1 = apic_read(APIC_ESR);
+- ack_APIC_irq();
+- atomic_inc(&irq_err_count);
+-
+- /* Here is what the APIC error bits mean:
+- 0: Send CS error
+- 1: Receive CS error
+- 2: Send accept error
+- 3: Receive accept error
+- 4: Reserved
+- 5: Send illegal vector
+- 6: Received illegal vector
+- 7: Illegal register address
+- */
+- printk (KERN_DEBUG "APIC error on CPU%d: %02x(%02x)\n",
+- smp_processor_id(), v , v1);
+- irq_exit();
+-}
+-
+-int disable_apic;
+-
+-/*
+- * This initializes the IO-APIC and APIC hardware if this is
+- * a UP kernel.
+- */
+-int __init APIC_init_uniprocessor (void)
++static int __init apic_set_verbosity(char *str)
+ {
+- if (disable_apic) {
+- printk(KERN_INFO "Apic disabled\n");
+- return -1;
++ if (str == NULL) {
++ skip_ioapic_setup = 0;
++ ioapic_force = 1;
++ return 0;
+ }
+- if (!cpu_has_apic) {
+- disable_apic = 1;
+- printk(KERN_INFO "Apic disabled by BIOS\n");
+- return -1;
++ if (strcmp("debug", str) == 0)
++ apic_verbosity = APIC_DEBUG;
++ else if (strcmp("verbose", str) == 0)
++ apic_verbosity = APIC_VERBOSE;
++ else {
++ printk(KERN_WARNING "APIC Verbosity level %s not recognised"
++ " use apic=verbose or apic=debug\n", str);
++ return -EINVAL;
+ }
+
+- verify_local_APIC();
+-
+- phys_cpu_present_map = physid_mask_of_physid(boot_cpu_id);
+- apic_write(APIC_ID, SET_APIC_ID(boot_cpu_id));
+-
+- setup_local_APIC();
+-
+- if (smp_found_config && !skip_ioapic_setup && nr_ioapics)
+- setup_IO_APIC();
+- else
+- nr_ioapics = 0;
+- setup_boot_APIC_clock();
+- check_nmi_watchdog();
+ return 0;
+ }
++early_param("apic", apic_set_verbosity);
+
+ static __init int setup_disableapic(char *str)
+ {
+ disable_apic = 1;
+- clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
+ return 0;
+ }
+ early_param("disableapic", setup_disableapic);
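The new apic_set_verbosity() registered via early_param() above maps the `apic=` command-line argument onto a verbosity level, with a bare `apic` (NULL argument) keeping defaults while forcing the IO-APIC on. A standalone sketch of that mapping, with the level encoding chosen here for illustration:

```c
#include <string.h>

/* Map an "apic=" argument as the new apic_set_verbosity() does:
   0 = default (bare "apic"), 1 = verbose, 2 = debug, -1 = unrecognised. */
int apic_verbosity_level(const char *str)
{
    if (str == NULL)
        return 0;                    /* bare "apic": keep defaults */
    if (strcmp(str, "debug") == 0)
        return 2;
    if (strcmp(str, "verbose") == 0)
        return 1;
    return -1;                       /* warn and reject, as the patch does */
}
```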
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index af045ca..d4438ef 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -227,6 +227,7 @@
+ #include <linux/dmi.h>
+ #include <linux/suspend.h>
+ #include <linux/kthread.h>
++#include <linux/jiffies.h>
+
+ #include <asm/system.h>
+ #include <asm/uaccess.h>
+@@ -235,8 +236,6 @@
+ #include <asm/paravirt.h>
+ #include <asm/reboot.h>
+
+-#include "io_ports.h"
+-
+ #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
+ extern int (*console_blank_hook)(int);
+ #endif
+@@ -324,7 +323,7 @@ extern int (*console_blank_hook)(int);
+ /*
+ * Ignore suspend events for this amount of time after a resume
+ */
+-#define DEFAULT_BOUNCE_INTERVAL (3 * HZ)
++#define DEFAULT_BOUNCE_INTERVAL (3 * HZ)
+
+ /*
+ * Maximum number of events stored
+@@ -336,7 +335,7 @@ extern int (*console_blank_hook)(int);
+ */
+ struct apm_user {
+ int magic;
+- struct apm_user * next;
++ struct apm_user *next;
+ unsigned int suser: 1;
+ unsigned int writer: 1;
+ unsigned int reader: 1;
+@@ -372,44 +371,44 @@ struct apm_user {
+ static struct {
+ unsigned long offset;
+ unsigned short segment;
+-} apm_bios_entry;
+-static int clock_slowed;
+-static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD;
+-static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD;
+-static int set_pm_idle;
+-static int suspends_pending;
+-static int standbys_pending;
+-static int ignore_sys_suspend;
+-static int ignore_normal_resume;
+-static int bounce_interval __read_mostly = DEFAULT_BOUNCE_INTERVAL;
+-
+-static int debug __read_mostly;
+-static int smp __read_mostly;
+-static int apm_disabled = -1;
++} apm_bios_entry;
++static int clock_slowed;
++static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD;
++static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD;
++static int set_pm_idle;
++static int suspends_pending;
++static int standbys_pending;
++static int ignore_sys_suspend;
++static int ignore_normal_resume;
++static int bounce_interval __read_mostly = DEFAULT_BOUNCE_INTERVAL;
++
++static int debug __read_mostly;
++static int smp __read_mostly;
++static int apm_disabled = -1;
+ #ifdef CONFIG_SMP
+-static int power_off;
++static int power_off;
+ #else
+-static int power_off = 1;
++static int power_off = 1;
+ #endif
+ #ifdef CONFIG_APM_REAL_MODE_POWER_OFF
+-static int realmode_power_off = 1;
++static int realmode_power_off = 1;
+ #else
+-static int realmode_power_off;
++static int realmode_power_off;
+ #endif
+ #ifdef CONFIG_APM_ALLOW_INTS
+-static int allow_ints = 1;
++static int allow_ints = 1;
+ #else
+-static int allow_ints;
++static int allow_ints;
+ #endif
+-static int broken_psr;
++static int broken_psr;
+
+ static DECLARE_WAIT_QUEUE_HEAD(apm_waitqueue);
+ static DECLARE_WAIT_QUEUE_HEAD(apm_suspend_waitqueue);
+-static struct apm_user * user_list;
++static struct apm_user *user_list;
+ static DEFINE_SPINLOCK(user_list_lock);
+-static const struct desc_struct bad_bios_desc = { 0, 0x00409200 };
++static const struct desc_struct bad_bios_desc = { { { 0, 0x00409200 } } };
+
+-static const char driver_version[] = "1.16ac"; /* no spaces */
++static const char driver_version[] = "1.16ac"; /* no spaces */
+
+ static struct task_struct *kapmd_task;
+
+@@ -417,7 +416,7 @@ static struct task_struct *kapmd_task;
+ * APM event names taken from the APM 1.2 specification. These are
+ * the message codes that the BIOS uses to tell us about events
+ */
+-static const char * const apm_event_name[] = {
++static const char * const apm_event_name[] = {
+ "system standby",
+ "system suspend",
+ "normal resume",
+@@ -435,14 +434,14 @@ static const char * const apm_event_name[] = {
+
+ typedef struct lookup_t {
+ int key;
+- char * msg;
++ char *msg;
+ } lookup_t;
+
+ /*
+ * The BIOS returns a set of standard error codes in AX when the
+ * carry flag is set.
+ */
+-
+
-+void blk_put_queue(struct request_queue *q)
-+{
-+ kobject_put(&q->kobj);
-+}
-+EXPORT_SYMBOL(blk_put_queue);
+ static const lookup_t error_table[] = {
+ /* N/A { APM_SUCCESS, "Operation succeeded" }, */
+ { APM_DISABLED, "Power management disabled" },
+@@ -472,24 +471,25 @@ static const lookup_t error_table[] = {
+ * Write a meaningful log entry to the kernel log in the event of
+ * an APM error.
+ */
+-
+
-+void blk_cleanup_queue(struct request_queue * q)
-+{
-+ mutex_lock(&q->sysfs_lock);
-+ set_bit(QUEUE_FLAG_DEAD, &q->queue_flags);
-+ mutex_unlock(&q->sysfs_lock);
+ static void apm_error(char *str, int err)
+ {
+- int i;
++ int i;
+
+ for (i = 0; i < ERROR_COUNT; i++)
+- if (error_table[i].key == err) break;
++ if (error_table[i].key == err)
++ break;
+ if (i < ERROR_COUNT)
+ printk(KERN_NOTICE "apm: %s: %s\n", str, error_table[i].msg);
+ else
+ printk(KERN_NOTICE "apm: %s: unknown error code %#2.2x\n",
+- str, err);
++ str, err);
+ }
+
+ /*
+ * Lock APM functionality to physical CPU 0
+ */
+-
+
-+ if (q->elevator)
-+ elevator_exit(q->elevator);
+ #ifdef CONFIG_SMP
+
+ static cpumask_t apm_save_cpus(void)
+@@ -511,7 +511,7 @@ static inline void apm_restore_cpus(cpumask_t mask)
+ /*
+ * No CPU lockdown needed on a uniprocessor
+ */
+-
+
-+ blk_put_queue(q);
-+}
+ #define apm_save_cpus() (current->cpus_allowed)
+ #define apm_restore_cpus(x) (void)(x)
+
+@@ -590,7 +590,7 @@ static inline void apm_irq_restore(unsigned long flags)
+ * code is returned in AH (bits 8-15 of eax) and this function
+ * returns non-zero.
+ */
+-
+
-+EXPORT_SYMBOL(blk_cleanup_queue);
+ static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
+ u32 *eax, u32 *ebx, u32 *ecx, u32 *edx, u32 *esi)
+ {
+@@ -602,7 +602,7 @@ static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
+ struct desc_struct *gdt;
+
+ cpus = apm_save_cpus();
+-
+
-+static int blk_init_free_list(struct request_queue *q)
-+{
-+ struct request_list *rl = &q->rq;
+ cpu = get_cpu();
+ gdt = get_cpu_gdt_table(cpu);
+ save_desc_40 = gdt[0x40 / 8];
+@@ -616,7 +616,7 @@ static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
+ gdt[0x40 / 8] = save_desc_40;
+ put_cpu();
+ apm_restore_cpus(cpus);
+-
+
-+ rl->count[READ] = rl->count[WRITE] = 0;
-+ rl->starved[READ] = rl->starved[WRITE] = 0;
-+ rl->elvpriv = 0;
-+ init_waitqueue_head(&rl->wait[READ]);
-+ init_waitqueue_head(&rl->wait[WRITE]);
+ return *eax & 0xff;
+ }
+
+@@ -645,7 +645,7 @@ static u8 apm_bios_call_simple(u32 func, u32 ebx_in, u32 ecx_in, u32 *eax)
+ struct desc_struct *gdt;
+
+ cpus = apm_save_cpus();
+-
+
-+ rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
-+ mempool_free_slab, request_cachep, q->node);
+ cpu = get_cpu();
+ gdt = get_cpu_gdt_table(cpu);
+ save_desc_40 = gdt[0x40 / 8];
+@@ -680,7 +680,7 @@ static u8 apm_bios_call_simple(u32 func, u32 ebx_in, u32 ecx_in, u32 *eax)
+
+ static int apm_driver_version(u_short *val)
+ {
+- u32 eax;
++ u32 eax;
+
+ if (apm_bios_call_simple(APM_FUNC_VERSION, 0, *val, &eax))
+ return (eax >> 8) & 0xff;
+@@ -704,16 +704,16 @@ static int apm_driver_version(u_short *val)
+ * that APM 1.2 is in use. If no messges are pending the value 0x80
+ * is returned (No power management events pending).
+ */
+-
+
-+ if (!rl->rq_pool)
-+ return -ENOMEM;
+ static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info)
+ {
+- u32 eax;
+- u32 ebx;
+- u32 ecx;
+- u32 dummy;
++ u32 eax;
++ u32 ebx;
++ u32 ecx;
++ u32 dummy;
+
+ if (apm_bios_call(APM_FUNC_GET_EVENT, 0, 0, &eax, &ebx, &ecx,
+- &dummy, &dummy))
++ &dummy, &dummy))
+ return (eax >> 8) & 0xff;
+ *event = ebx;
+ if (apm_info.connection_version < 0x0102)
+@@ -736,10 +736,10 @@ static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info)
+ * The state holds the state to transition to, which may in fact
+ * be an acceptance of a BIOS requested state change.
+ */
+-
+
-+ return 0;
-+}
+ static int set_power_state(u_short what, u_short state)
+ {
+- u32 eax;
++ u32 eax;
+
+ if (apm_bios_call_simple(APM_FUNC_SET_STATE, what, state, &eax))
+ return (eax >> 8) & 0xff;
+@@ -752,7 +752,7 @@ static int set_power_state(u_short what, u_short state)
+ *
+ * Transition the entire system into a new APM power state.
+ */
+-
+
-+struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
-+{
-+ return blk_alloc_queue_node(gfp_mask, -1);
-+}
-+EXPORT_SYMBOL(blk_alloc_queue);
+ static int set_system_power_state(u_short state)
+ {
+ return set_power_state(APM_DEVICE_ALL, state);
+@@ -766,13 +766,13 @@ static int set_system_power_state(u_short state)
+ * to handle the idle request. On a success the function returns 1
+ * if the BIOS did clock slowing or 0 otherwise.
+ */
+-
+
-+struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
-+{
-+ struct request_queue *q;
-+ int err;
+ static int apm_do_idle(void)
+ {
+- u32 eax;
+- u8 ret = 0;
+- int idled = 0;
+- int polling;
++ u32 eax;
++ u8 ret = 0;
++ int idled = 0;
++ int polling;
+
+ polling = !!(current_thread_info()->status & TS_POLLING);
+ if (polling) {
+@@ -799,10 +799,9 @@ static int apm_do_idle(void)
+ /* This always fails on some SMP boards running UP kernels.
+ * Only report the failure the first 5 times.
+ */
+- if (++t < 5)
+- {
++ if (++t < 5) {
+ printk(KERN_DEBUG "apm_do_idle failed (%d)\n",
+- (eax >> 8) & 0xff);
++ (eax >> 8) & 0xff);
+ t = jiffies;
+ }
+ return -1;
+@@ -814,15 +813,15 @@ static int apm_do_idle(void)
+ /**
+ * apm_do_busy - inform the BIOS the CPU is busy
+ *
+- * Request that the BIOS brings the CPU back to full performance.
++ * Request that the BIOS brings the CPU back to full performance.
+ */
+-
+
-+ q = kmem_cache_alloc_node(blk_requestq_cachep,
-+ gfp_mask | __GFP_ZERO, node_id);
-+ if (!q)
-+ return NULL;
+ static void apm_do_busy(void)
+ {
+- u32 dummy;
++ u32 dummy;
+
+ if (clock_slowed || ALWAYS_CALL_BUSY) {
+- (void) apm_bios_call_simple(APM_FUNC_BUSY, 0, 0, &dummy);
++ (void)apm_bios_call_simple(APM_FUNC_BUSY, 0, 0, &dummy);
+ clock_slowed = 0;
+ }
+ }
+@@ -833,15 +832,15 @@ static void apm_do_busy(void)
+ * power management - we probably want
+ * to conserve power.
+ */
+-#define IDLE_CALC_LIMIT (HZ * 100)
+-#define IDLE_LEAKY_MAX 16
++#define IDLE_CALC_LIMIT (HZ * 100)
++#define IDLE_LEAKY_MAX 16
+
+ static void (*original_pm_idle)(void) __read_mostly;
+
+ /**
+ * apm_cpu_idle - cpu idling for APM capable Linux
+ *
+- * This is the idling function the kernel executes when APM is available. It
++ * This is the idling function the kernel executes when APM is available. It
+ * tries to do BIOS powermanagement based on the average system idle time.
+ * Furthermore it calls the system default idle routine.
+ */
+@@ -882,7 +881,8 @@ recalc:
+
+ t = jiffies;
+ switch (apm_do_idle()) {
+- case 0: apm_idle_done = 1;
++ case 0:
++ apm_idle_done = 1;
+ if (t != jiffies) {
+ if (bucket) {
+ bucket = IDLE_LEAKY_MAX;
+@@ -893,7 +893,8 @@ recalc:
+ continue;
+ }
+ break;
+- case 1: apm_idle_done = 1;
++ case 1:
++ apm_idle_done = 1;
+ break;
+ default: /* BIOS refused */
+ break;
+@@ -921,10 +922,10 @@ recalc:
+ * the SMP call on CPU0 as some systems will only honour this call
+ * on their first cpu.
+ */
+-
+
-+ q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
-+ q->backing_dev_info.unplug_io_data = q;
-+ err = bdi_init(&q->backing_dev_info);
-+ if (err) {
-+ kmem_cache_free(blk_requestq_cachep, q);
-+ return NULL;
-+ }
+ static void apm_power_off(void)
+ {
+- unsigned char po_bios_call[] = {
++ unsigned char po_bios_call[] = {
+ 0xb8, 0x00, 0x10, /* movw $0x1000,ax */
+ 0x8e, 0xd0, /* movw ax,ss */
+ 0xbc, 0x00, 0xf0, /* movw $0xf000,sp */
+@@ -935,13 +936,12 @@ static void apm_power_off(void)
+ };
+
+ /* Some bioses don't like being called from CPU != 0 */
+- if (apm_info.realmode_power_off)
+- {
++ if (apm_info.realmode_power_off) {
+ (void)apm_save_cpus();
+ machine_real_restart(po_bios_call, sizeof(po_bios_call));
++ } else {
++ (void)set_system_power_state(APM_STATE_OFF);
+ }
+- else
+- (void) set_system_power_state(APM_STATE_OFF);
+ }
+
+ #ifdef CONFIG_APM_DO_ENABLE
+@@ -950,17 +950,17 @@ static void apm_power_off(void)
+ * apm_enable_power_management - enable BIOS APM power management
+ * @enable: enable yes/no
+ *
+- * Enable or disable the APM BIOS power services.
++ * Enable or disable the APM BIOS power services.
+ */
+-
+
-+ init_timer(&q->unplug_timer);
+ static int apm_enable_power_management(int enable)
+ {
+- u32 eax;
++ u32 eax;
+
+ if ((enable == 0) && (apm_info.bios.flags & APM_BIOS_DISENGAGED))
+ return APM_NOT_ENGAGED;
+ if (apm_bios_call_simple(APM_FUNC_ENABLE_PM, APM_DEVICE_BALL,
+- enable, &eax))
++ enable, &eax))
+ return (eax >> 8) & 0xff;
+ if (enable)
+ apm_info.bios.flags &= ~APM_BIOS_DISABLED;
+@@ -983,19 +983,19 @@ static int apm_enable_power_management(int enable)
+ * if reported is a lifetime in secodnds/minutes at current powwer
+ * consumption.
+ */
+-
+
-+ kobject_init(&q->kobj, &blk_queue_ktype);
+ static int apm_get_power_status(u_short *status, u_short *bat, u_short *life)
+ {
+- u32 eax;
+- u32 ebx;
+- u32 ecx;
+- u32 edx;
+- u32 dummy;
++ u32 eax;
++ u32 ebx;
++ u32 ecx;
++ u32 edx;
++ u32 dummy;
+
+ if (apm_info.get_power_status_broken)
+ return APM_32_UNSUPPORTED;
+ if (apm_bios_call(APM_FUNC_GET_STATUS, APM_DEVICE_ALL, 0,
+- &eax, &ebx, &ecx, &edx, &dummy))
++ &eax, &ebx, &ecx, &edx, &dummy))
+ return (eax >> 8) & 0xff;
+ *status = ebx;
+ *bat = ecx;
+@@ -1011,11 +1011,11 @@ static int apm_get_power_status(u_short *status, u_short *bat, u_short *life)
+ static int apm_get_battery_status(u_short which, u_short *status,
+ u_short *bat, u_short *life, u_short *nbat)
+ {
+- u32 eax;
+- u32 ebx;
+- u32 ecx;
+- u32 edx;
+- u32 esi;
++ u32 eax;
++ u32 ebx;
++ u32 ecx;
++ u32 edx;
++ u32 esi;
+
+ if (apm_info.connection_version < 0x0102) {
+ /* pretend we only have one battery. */
+@@ -1026,7 +1026,7 @@ static int apm_get_battery_status(u_short which, u_short *status,
+ }
+
+ if (apm_bios_call(APM_FUNC_GET_STATUS, (0x8000 | (which)), 0, &eax,
+- &ebx, &ecx, &edx, &esi))
++ &ebx, &ecx, &edx, &esi))
+ return (eax >> 8) & 0xff;
+ *status = ebx;
+ *bat = ecx;
+@@ -1044,10 +1044,10 @@ static int apm_get_battery_status(u_short which, u_short *status,
+ * Activate or deactive power management on either a specific device
+ * or the entire system (%APM_DEVICE_ALL).
+ */
+-
+
-+ mutex_init(&q->sysfs_lock);
+ static int apm_engage_power_management(u_short device, int enable)
+ {
+- u32 eax;
++ u32 eax;
+
+ if ((enable == 0) && (device == APM_DEVICE_ALL)
+ && (apm_info.bios.flags & APM_BIOS_DISABLED))
+@@ -1074,7 +1074,7 @@ static int apm_engage_power_management(u_short device, int enable)
+ * all video devices. Typically the BIOS will do laptop backlight and
+ * monitor powerdown for us.
+ */
+-
+
-+ return q;
-+}
-+EXPORT_SYMBOL(blk_alloc_queue_node);
+ static int apm_console_blank(int blank)
+ {
+ int error = APM_NOT_ENGAGED; /* silence gcc */
+@@ -1126,7 +1126,7 @@ static apm_event_t get_queued_event(struct apm_user *as)
+
+ static void queue_event(apm_event_t event, struct apm_user *sender)
+ {
+- struct apm_user * as;
++ struct apm_user *as;
+
+ spin_lock(&user_list_lock);
+ if (user_list == NULL)
+@@ -1174,11 +1174,11 @@ static void reinit_timer(void)
+
+ spin_lock_irqsave(&i8253_lock, flags);
+ /* set the clock to HZ */
+- outb_p(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */
++ outb_pit(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */
+ udelay(10);
+- outb_p(LATCH & 0xff, PIT_CH0); /* LSB */
++ outb_pit(LATCH & 0xff, PIT_CH0); /* LSB */
+ udelay(10);
+- outb(LATCH >> 8, PIT_CH0); /* MSB */
++ outb_pit(LATCH >> 8, PIT_CH0); /* MSB */
+ udelay(10);
+ spin_unlock_irqrestore(&i8253_lock, flags);
+ #endif
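The reinit_timer() hunk above reprograms PIT channel 0 back to HZ, writing the reload value (`LATCH`) LSB first, then MSB. The value itself is the PIT input clock divided by the desired rate, rounded to nearest; a sketch of that computation (the 1193182 Hz input clock is the standard 8254 rate):

```c
#include <stdint.h>

#define PIT_TICK_RATE 1193182u   /* 8254 PIT input clock, in Hz */

/* Channel-0 reload value for a given tick rate, rounded to nearest;
   this is the LATCH split into LSB/MSB writes in the hunk above. */
uint16_t pit_latch(unsigned int hz)
{
    return (uint16_t)((PIT_TICK_RATE + hz / 2) / hz);
}
```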
+@@ -1186,7 +1186,7 @@ static void reinit_timer(void)
+
+ static int suspend(int vetoable)
+ {
+- int err;
++ int err;
+ struct apm_user *as;
+
+ if (pm_send_all(PM_SUSPEND, (void *)3)) {
+@@ -1239,7 +1239,7 @@ static int suspend(int vetoable)
+
+ static void standby(void)
+ {
+- int err;
++ int err;
+
+ local_irq_disable();
+ device_power_down(PMSG_SUSPEND);
+@@ -1256,8 +1256,8 @@ static void standby(void)
+
+ static apm_event_t get_event(void)
+ {
+- int error;
+- apm_event_t event = APM_NO_EVENTS; /* silence gcc */
++ int error;
++ apm_event_t event = APM_NO_EVENTS; /* silence gcc */
+ apm_eventinfo_t info;
+
+ static int notified;
+@@ -1275,9 +1275,9 @@ static apm_event_t get_event(void)
+
+ static void check_events(void)
+ {
+- apm_event_t event;
+- static unsigned long last_resume;
+- static int ignore_bounce;
++ apm_event_t event;
++ static unsigned long last_resume;
++ static int ignore_bounce;
+
+ while ((event = get_event()) != 0) {
+ if (debug) {
+@@ -1289,7 +1289,7 @@ static void check_events(void)
+ "event 0x%02x\n", event);
+ }
+ if (ignore_bounce
+- && ((jiffies - last_resume) > bounce_interval))
++ && (time_after(jiffies, last_resume + bounce_interval)))
+ ignore_bounce = 0;
+
+ switch (event) {
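The bounce-interval hunk above swaps an open-coded jiffies comparison for the kernel's time_after(), which makes the wraparound handling explicit. A minimal sketch of the wraparound-safe "a is after b" test it performs on a free-running 32-bit counter:

```c
#include <stdint.h>

/* True when counter value a is after b, even across a 32-bit wrap,
   in the spirit of the kernel's time_after() adopted by the hunk. */
int time_after_u32(uint32_t a, uint32_t b)
{
    return (int32_t)(b - a) < 0;   /* signed view of the unsigned delta */
}
```

The trick is that as long as the two timestamps are within 2^31 ticks of each other, the signed difference has the right sign regardless of wraparound.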
+@@ -1357,7 +1357,7 @@ static void check_events(void)
+ /*
+ * We are not allowed to reject a critical suspend.
+ */
+- (void) suspend(0);
++ (void)suspend(0);
+ break;
+ }
+ }
+@@ -1365,12 +1365,12 @@ static void check_events(void)
+
+ static void apm_event_handler(void)
+ {
+- static int pending_count = 4;
+- int err;
++ static int pending_count = 4;
++ int err;
+
+ if ((standbys_pending > 0) || (suspends_pending > 0)) {
+ if ((apm_info.connection_version > 0x100) &&
+- (pending_count-- <= 0)) {
++ (pending_count-- <= 0)) {
+ pending_count = 4;
+ if (debug)
+ printk(KERN_DEBUG "apm: setting state busy\n");
+@@ -1418,9 +1418,9 @@ static int check_apm_user(struct apm_user *as, const char *func)
+
+ static ssize_t do_read(struct file *fp, char __user *buf, size_t count, loff_t *ppos)
+ {
+- struct apm_user * as;
+- int i;
+- apm_event_t event;
++ struct apm_user *as;
++ int i;
++ apm_event_t event;
+
+ as = fp->private_data;
+ if (check_apm_user(as, "read"))
+@@ -1459,9 +1459,9 @@ static ssize_t do_read(struct file *fp, char __user *buf, size_t count, loff_t *
+ return 0;
+ }
+
+-static unsigned int do_poll(struct file *fp, poll_table * wait)
++static unsigned int do_poll(struct file *fp, poll_table *wait)
+ {
+- struct apm_user * as;
++ struct apm_user *as;
+
+ as = fp->private_data;
+ if (check_apm_user(as, "poll"))
+@@ -1472,10 +1472,10 @@ static unsigned int do_poll(struct file *fp, poll_table * wait)
+ return 0;
+ }
+
+-static int do_ioctl(struct inode * inode, struct file *filp,
++static int do_ioctl(struct inode *inode, struct file *filp,
+ u_int cmd, u_long arg)
+ {
+- struct apm_user * as;
++ struct apm_user *as;
+
+ as = filp->private_data;
+ if (check_apm_user(as, "ioctl"))
+@@ -1515,9 +1515,9 @@ static int do_ioctl(struct inode * inode, struct file *filp,
+ return 0;
+ }
+
+-static int do_release(struct inode * inode, struct file * filp)
++static int do_release(struct inode *inode, struct file *filp)
+ {
+- struct apm_user * as;
++ struct apm_user *as;
+
+ as = filp->private_data;
+ if (check_apm_user(as, "release"))
+@@ -1533,11 +1533,11 @@ static int do_release(struct inode * inode, struct file * filp)
+ if (suspends_pending <= 0)
+ (void) suspend(1);
+ }
+- spin_lock(&user_list_lock);
++ spin_lock(&user_list_lock);
+ if (user_list == as)
+ user_list = as->next;
+ else {
+- struct apm_user * as1;
++ struct apm_user *as1;
+
+ for (as1 = user_list;
+ (as1 != NULL) && (as1->next != as);
+@@ -1553,9 +1553,9 @@ static int do_release(struct inode * inode, struct file * filp)
+ return 0;
+ }
+
+-static int do_open(struct inode * inode, struct file * filp)
++static int do_open(struct inode *inode, struct file *filp)
+ {
+- struct apm_user * as;
++ struct apm_user *as;
+
+ as = kmalloc(sizeof(*as), GFP_KERNEL);
+ if (as == NULL) {
+@@ -1569,7 +1569,7 @@ static int do_open(struct inode * inode, struct file * filp)
+ as->suspends_read = as->standbys_read = 0;
+ /*
+ * XXX - this is a tiny bit broken, when we consider BSD
+- * process accounting. If the device is opened by root, we
++ * process accounting. If the device is opened by root, we
+ * instantly flag that we used superuser privs. Who knows,
+ * we might close the device immediately without doing a
+ * privileged operation -- cevans
+@@ -1652,16 +1652,16 @@ static int proc_apm_show(struct seq_file *m, void *v)
+ 8) min = minutes; sec = seconds */
+
+ seq_printf(m, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
+- driver_version,
+- (apm_info.bios.version >> 8) & 0xff,
+- apm_info.bios.version & 0xff,
+- apm_info.bios.flags,
+- ac_line_status,
+- battery_status,
+- battery_flag,
+- percentage,
+- time_units,
+- units);
++ driver_version,
++ (apm_info.bios.version >> 8) & 0xff,
++ apm_info.bios.version & 0xff,
++ apm_info.bios.flags,
++ ac_line_status,
++ battery_status,
++ battery_flag,
++ percentage,
++ time_units,
++ units);
+ return 0;
+ }
+
+@@ -1684,8 +1684,8 @@ static int apm(void *unused)
+ unsigned short cx;
+ unsigned short dx;
+ int error;
+- char * power_stat;
+- char * bat_stat;
++ char *power_stat;
++ char *bat_stat;
+
+ #ifdef CONFIG_SMP
+ /* 2002/08/01 - WT
+@@ -1744,23 +1744,41 @@ static int apm(void *unused)
+ }
+ }
+
+- if (debug && (num_online_cpus() == 1 || smp )) {
++ if (debug && (num_online_cpus() == 1 || smp)) {
+ error = apm_get_power_status(&bx, &cx, &dx);
+ if (error)
+ printk(KERN_INFO "apm: power status not available\n");
+ else {
+ switch ((bx >> 8) & 0xff) {
+- case 0: power_stat = "off line"; break;
+- case 1: power_stat = "on line"; break;
+- case 2: power_stat = "on backup power"; break;
+- default: power_stat = "unknown"; break;
++ case 0:
++ power_stat = "off line";
++ break;
++ case 1:
++ power_stat = "on line";
++ break;
++ case 2:
++ power_stat = "on backup power";
++ break;
++ default:
++ power_stat = "unknown";
++ break;
+ }
+ switch (bx & 0xff) {
+- case 0: bat_stat = "high"; break;
+- case 1: bat_stat = "low"; break;
+- case 2: bat_stat = "critical"; break;
+- case 3: bat_stat = "charging"; break;
+- default: bat_stat = "unknown"; break;
++ case 0:
++ bat_stat = "high";
++ break;
++ case 1:
++ bat_stat = "low";
++ break;
++ case 2:
++ bat_stat = "critical";
++ break;
++ case 3:
++ bat_stat = "charging";
++ break;
++ default:
++ bat_stat = "unknown";
++ break;
+ }
+ printk(KERN_INFO
+ "apm: AC %s, battery status %s, battery life ",
+@@ -1777,8 +1795,8 @@ static int apm(void *unused)
+ printk("unknown\n");
+ else
+ printk("%d %s\n", dx & 0x7fff,
+- (dx & 0x8000) ?
+- "minutes" : "seconds");
++ (dx & 0x8000) ?
++ "minutes" : "seconds");
+ }
+ }
+ }
+@@ -1803,7 +1821,7 @@ static int apm(void *unused)
+ #ifndef MODULE
+ static int __init apm_setup(char *str)
+ {
+- int invert;
++ int invert;
+
+ while ((str != NULL) && (*str != '\0')) {
+ if (strncmp(str, "off", 3) == 0)
+@@ -1828,14 +1846,13 @@ static int __init apm_setup(char *str)
+ if ((strncmp(str, "power-off", 9) == 0) ||
+ (strncmp(str, "power_off", 9) == 0))
+ power_off = !invert;
+- if (strncmp(str, "smp", 3) == 0)
+- {
++ if (strncmp(str, "smp", 3) == 0) {
+ smp = !invert;
+ idle_threshold = 100;
+ }
+ if ((strncmp(str, "allow-ints", 10) == 0) ||
+ (strncmp(str, "allow_ints", 10) == 0))
+- apm_info.allow_ints = !invert;
++ apm_info.allow_ints = !invert;
+ if ((strncmp(str, "broken-psr", 10) == 0) ||
+ (strncmp(str, "broken_psr", 10) == 0))
+ apm_info.get_power_status_broken = !invert;
+@@ -1881,7 +1898,8 @@ static int __init print_if_true(const struct dmi_system_id *d)
+ */
+ static int __init broken_ps2_resume(const struct dmi_system_id *d)
+ {
+- printk(KERN_INFO "%s machine detected. Mousepad Resume Bug workaround hopefully not needed.\n", d->ident);
++ printk(KERN_INFO "%s machine detected. Mousepad Resume Bug "
++ "workaround hopefully not needed.\n", d->ident);
+ return 0;
+ }
+
+@@ -1890,7 +1908,8 @@ static int __init set_realmode_power_off(const struct dmi_system_id *d)
+ {
+ if (apm_info.realmode_power_off == 0) {
+ apm_info.realmode_power_off = 1;
+- printk(KERN_INFO "%s bios detected. Using realmode poweroff only.\n", d->ident);
++ printk(KERN_INFO "%s bios detected. "
++ "Using realmode poweroff only.\n", d->ident);
+ }
+ return 0;
+ }
+@@ -1900,7 +1919,8 @@ static int __init set_apm_ints(const struct dmi_system_id *d)
+ {
+ if (apm_info.allow_ints == 0) {
+ apm_info.allow_ints = 1;
+- printk(KERN_INFO "%s machine detected. Enabling interrupts during APM calls.\n", d->ident);
++ printk(KERN_INFO "%s machine detected. "
++ "Enabling interrupts during APM calls.\n", d->ident);
+ }
+ return 0;
+ }
+@@ -1910,7 +1930,8 @@ static int __init apm_is_horked(const struct dmi_system_id *d)
+ {
+ if (apm_info.disabled == 0) {
+ apm_info.disabled = 1;
+- printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
++ printk(KERN_INFO "%s machine detected. "
++ "Disabling APM.\n", d->ident);
+ }
+ return 0;
+ }
+@@ -1919,7 +1940,8 @@ static int __init apm_is_horked_d850md(const struct dmi_system_id *d)
+ {
+ if (apm_info.disabled == 0) {
+ apm_info.disabled = 1;
+- printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
++ printk(KERN_INFO "%s machine detected. "
++ "Disabling APM.\n", d->ident);
+ printk(KERN_INFO "This bug is fixed in bios P15 which is available for \n");
+ printk(KERN_INFO "download from support.intel.com \n");
+ }
+@@ -1931,7 +1953,8 @@ static int __init apm_likes_to_melt(const struct dmi_system_id *d)
+ {
+ if (apm_info.forbid_idle == 0) {
+ apm_info.forbid_idle = 1;
+- printk(KERN_INFO "%s machine detected. Disabling APM idle calls.\n", d->ident);
++ printk(KERN_INFO "%s machine detected. "
++ "Disabling APM idle calls.\n", d->ident);
+ }
+ return 0;
+ }
+@@ -1954,7 +1977,8 @@ static int __init apm_likes_to_melt(const struct dmi_system_id *d)
+ static int __init broken_apm_power(const struct dmi_system_id *d)
+ {
+ apm_info.get_power_status_broken = 1;
+- printk(KERN_WARNING "BIOS strings suggest APM bugs, disabling power status reporting.\n");
++ printk(KERN_WARNING "BIOS strings suggest APM bugs, "
++ "disabling power status reporting.\n");
+ return 0;
+ }
+
+@@ -1965,7 +1989,8 @@ static int __init broken_apm_power(const struct dmi_system_id *d)
+ static int __init swab_apm_power_in_minutes(const struct dmi_system_id *d)
+ {
+ apm_info.get_power_status_swabinminutes = 1;
+- printk(KERN_WARNING "BIOS strings suggest APM reports battery life in minutes and wrong byte order.\n");
++ printk(KERN_WARNING "BIOS strings suggest APM reports battery life "
++ "in minutes and wrong byte order.\n");
+ return 0;
+ }
+
+@@ -1990,8 +2015,8 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
+ apm_is_horked, "Dell Inspiron 2500",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
+- DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
+- DMI_MATCH(DMI_BIOS_VERSION,"A11"), },
++ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
++ DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
+ },
+ { /* Allow interrupts during suspend on Dell Inspiron laptops*/
+ set_apm_ints, "Dell Inspiron", {
+@@ -2014,15 +2039,15 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
+ apm_is_horked, "Dell Dimension 4100",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS-Z"),
+- DMI_MATCH(DMI_BIOS_VENDOR,"Intel Corp."),
+- DMI_MATCH(DMI_BIOS_VERSION,"A11"), },
++ DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
++ DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
+ },
+ { /* Allow interrupts during suspend on Compaq Laptops*/
+ set_apm_ints, "Compaq 12XL125",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Compaq PC"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+- DMI_MATCH(DMI_BIOS_VERSION,"4.06"), },
++ DMI_MATCH(DMI_BIOS_VERSION, "4.06"), },
+ },
+ { /* Allow interrupts during APM or the clock goes slow */
+ set_apm_ints, "ASUSTeK",
+@@ -2064,15 +2089,15 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
+ apm_is_horked, "Sharp PC-PJ/AX",
+ { DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PC-PJ/AX"),
+- DMI_MATCH(DMI_BIOS_VENDOR,"SystemSoft"),
+- DMI_MATCH(DMI_BIOS_VERSION,"Version R2.08"), },
++ DMI_MATCH(DMI_BIOS_VENDOR, "SystemSoft"),
++ DMI_MATCH(DMI_BIOS_VERSION, "Version R2.08"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Dell Inspiron 2500",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
+- DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
+- DMI_MATCH(DMI_BIOS_VERSION,"A11"), },
++ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
++ DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
+ },
+ { /* APM idle hangs */
+ apm_likes_to_melt, "Jabil AMD",
+@@ -2203,11 +2228,11 @@ static int __init apm_init(void)
+ return -ENODEV;
+ }
+ printk(KERN_INFO
+- "apm: BIOS version %d.%d Flags 0x%02x (Driver version %s)\n",
+- ((apm_info.bios.version >> 8) & 0xff),
+- (apm_info.bios.version & 0xff),
+- apm_info.bios.flags,
+- driver_version);
++ "apm: BIOS version %d.%d Flags 0x%02x (Driver version %s)\n",
++ ((apm_info.bios.version >> 8) & 0xff),
++ (apm_info.bios.version & 0xff),
++ apm_info.bios.flags,
++ driver_version);
+ if ((apm_info.bios.flags & APM_32_BIT_SUPPORT) == 0) {
+ printk(KERN_INFO "apm: no 32 bit BIOS support\n");
+ return -ENODEV;
+@@ -2312,9 +2337,9 @@ static int __init apm_init(void)
+ }
+ wake_up_process(kapmd_task);
+
+- if (num_online_cpus() > 1 && !smp ) {
++ if (num_online_cpus() > 1 && !smp) {
+ printk(KERN_NOTICE
+- "apm: disabled - APM is not SMP safe (power off active).\n");
++ "apm: disabled - APM is not SMP safe (power off active).\n");
+ return 0;
+ }
+
+@@ -2339,7 +2364,7 @@ static int __init apm_init(void)
+
+ static void __exit apm_exit(void)
+ {
+- int error;
++ int error;
+
+ if (set_pm_idle) {
+ pm_idle = original_pm_idle;
+diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
+index 0e45981..afd8446 100644
+--- a/arch/x86/kernel/asm-offsets_32.c
++++ b/arch/x86/kernel/asm-offsets_32.c
+@@ -38,15 +38,15 @@ void foo(void);
+
+ void foo(void)
+ {
+- OFFSET(SIGCONTEXT_eax, sigcontext, eax);
+- OFFSET(SIGCONTEXT_ebx, sigcontext, ebx);
+- OFFSET(SIGCONTEXT_ecx, sigcontext, ecx);
+- OFFSET(SIGCONTEXT_edx, sigcontext, edx);
+- OFFSET(SIGCONTEXT_esi, sigcontext, esi);
+- OFFSET(SIGCONTEXT_edi, sigcontext, edi);
+- OFFSET(SIGCONTEXT_ebp, sigcontext, ebp);
+- OFFSET(SIGCONTEXT_esp, sigcontext, esp);
+- OFFSET(SIGCONTEXT_eip, sigcontext, eip);
++ OFFSET(IA32_SIGCONTEXT_ax, sigcontext, ax);
++ OFFSET(IA32_SIGCONTEXT_bx, sigcontext, bx);
++ OFFSET(IA32_SIGCONTEXT_cx, sigcontext, cx);
++ OFFSET(IA32_SIGCONTEXT_dx, sigcontext, dx);
++ OFFSET(IA32_SIGCONTEXT_si, sigcontext, si);
++ OFFSET(IA32_SIGCONTEXT_di, sigcontext, di);
++ OFFSET(IA32_SIGCONTEXT_bp, sigcontext, bp);
++ OFFSET(IA32_SIGCONTEXT_sp, sigcontext, sp);
++ OFFSET(IA32_SIGCONTEXT_ip, sigcontext, ip);
+ BLANK();
+
+ OFFSET(CPUINFO_x86, cpuinfo_x86, x86);
+@@ -70,39 +70,38 @@ void foo(void)
+ OFFSET(TI_cpu, thread_info, cpu);
+ BLANK();
+
+- OFFSET(GDS_size, Xgt_desc_struct, size);
+- OFFSET(GDS_address, Xgt_desc_struct, address);
+- OFFSET(GDS_pad, Xgt_desc_struct, pad);
++ OFFSET(GDS_size, desc_ptr, size);
++ OFFSET(GDS_address, desc_ptr, address);
+ BLANK();
+
+- OFFSET(PT_EBX, pt_regs, ebx);
+- OFFSET(PT_ECX, pt_regs, ecx);
+- OFFSET(PT_EDX, pt_regs, edx);
+- OFFSET(PT_ESI, pt_regs, esi);
+- OFFSET(PT_EDI, pt_regs, edi);
+- OFFSET(PT_EBP, pt_regs, ebp);
+- OFFSET(PT_EAX, pt_regs, eax);
+- OFFSET(PT_DS, pt_regs, xds);
+- OFFSET(PT_ES, pt_regs, xes);
+- OFFSET(PT_FS, pt_regs, xfs);
+- OFFSET(PT_ORIG_EAX, pt_regs, orig_eax);
+- OFFSET(PT_EIP, pt_regs, eip);
+- OFFSET(PT_CS, pt_regs, xcs);
+- OFFSET(PT_EFLAGS, pt_regs, eflags);
+- OFFSET(PT_OLDESP, pt_regs, esp);
+- OFFSET(PT_OLDSS, pt_regs, xss);
++ OFFSET(PT_EBX, pt_regs, bx);
++ OFFSET(PT_ECX, pt_regs, cx);
++ OFFSET(PT_EDX, pt_regs, dx);
++ OFFSET(PT_ESI, pt_regs, si);
++ OFFSET(PT_EDI, pt_regs, di);
++ OFFSET(PT_EBP, pt_regs, bp);
++ OFFSET(PT_EAX, pt_regs, ax);
++ OFFSET(PT_DS, pt_regs, ds);
++ OFFSET(PT_ES, pt_regs, es);
++ OFFSET(PT_FS, pt_regs, fs);
++ OFFSET(PT_ORIG_EAX, pt_regs, orig_ax);
++ OFFSET(PT_EIP, pt_regs, ip);
++ OFFSET(PT_CS, pt_regs, cs);
++ OFFSET(PT_EFLAGS, pt_regs, flags);
++ OFFSET(PT_OLDESP, pt_regs, sp);
++ OFFSET(PT_OLDSS, pt_regs, ss);
+ BLANK();
+
+ OFFSET(EXEC_DOMAIN_handler, exec_domain, handler);
+- OFFSET(RT_SIGFRAME_sigcontext, rt_sigframe, uc.uc_mcontext);
++ OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe, uc.uc_mcontext);
+ BLANK();
+
+ OFFSET(pbe_address, pbe, address);
+ OFFSET(pbe_orig_address, pbe, orig_address);
+ OFFSET(pbe_next, pbe, next);
+
+- /* Offset from the sysenter stack to tss.esp0 */
+- DEFINE(TSS_sysenter_esp0, offsetof(struct tss_struct, x86_tss.esp0) -
++ /* Offset from the sysenter stack to tss.sp0 */
++ DEFINE(TSS_sysenter_sp0, offsetof(struct tss_struct, x86_tss.sp0) -
+ sizeof(struct tss_struct));
+
+ DEFINE(PAGE_SIZE_asm, PAGE_SIZE);
+@@ -111,8 +110,6 @@ void foo(void)
+ DEFINE(PTRS_PER_PMD, PTRS_PER_PMD);
+ DEFINE(PTRS_PER_PGD, PTRS_PER_PGD);
+
+- DEFINE(VDSO_PRELINK_asm, VDSO_PRELINK);
+-
+ OFFSET(crypto_tfm_ctx_offset, crypto_tfm, __crt_ctx);
+
+ #ifdef CONFIG_PARAVIRT
+@@ -123,7 +120,7 @@ void foo(void)
+ OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
+ OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
+ OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
+- OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit);
++ OFFSET(PV_CPU_irq_enable_syscall_ret, pv_cpu_ops, irq_enable_syscall_ret);
+ OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
+ #endif
+
+diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
+index d1b6ed9..494e1e0 100644
+--- a/arch/x86/kernel/asm-offsets_64.c
++++ b/arch/x86/kernel/asm-offsets_64.c
+@@ -38,7 +38,6 @@ int main(void)
+ #define ENTRY(entry) DEFINE(tsk_ ## entry, offsetof(struct task_struct, entry))
+ ENTRY(state);
+ ENTRY(flags);
+- ENTRY(thread);
+ ENTRY(pid);
+ BLANK();
+ #undef ENTRY
+@@ -47,6 +46,9 @@ int main(void)
+ ENTRY(addr_limit);
+ ENTRY(preempt_count);
+ ENTRY(status);
++#ifdef CONFIG_IA32_EMULATION
++ ENTRY(sysenter_return);
++#endif
+ BLANK();
+ #undef ENTRY
+ #define ENTRY(entry) DEFINE(pda_ ## entry, offsetof(struct x8664_pda, entry))
+@@ -59,17 +61,31 @@ int main(void)
+ ENTRY(data_offset);
+ BLANK();
+ #undef ENTRY
++#ifdef CONFIG_PARAVIRT
++ BLANK();
++ OFFSET(PARAVIRT_enabled, pv_info, paravirt_enabled);
++ OFFSET(PARAVIRT_PATCH_pv_cpu_ops, paravirt_patch_template, pv_cpu_ops);
++ OFFSET(PARAVIRT_PATCH_pv_irq_ops, paravirt_patch_template, pv_irq_ops);
++ OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
++ OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
++ OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
++ OFFSET(PV_CPU_irq_enable_syscall_ret, pv_cpu_ops, irq_enable_syscall_ret);
++ OFFSET(PV_CPU_swapgs, pv_cpu_ops, swapgs);
++ OFFSET(PV_MMU_read_cr2, pv_mmu_ops, read_cr2);
++#endif
++
++
+ #ifdef CONFIG_IA32_EMULATION
+ #define ENTRY(entry) DEFINE(IA32_SIGCONTEXT_ ## entry, offsetof(struct sigcontext_ia32, entry))
+- ENTRY(eax);
+- ENTRY(ebx);
+- ENTRY(ecx);
+- ENTRY(edx);
+- ENTRY(esi);
+- ENTRY(edi);
+- ENTRY(ebp);
+- ENTRY(esp);
+- ENTRY(eip);
++ ENTRY(ax);
++ ENTRY(bx);
++ ENTRY(cx);
++ ENTRY(dx);
++ ENTRY(si);
++ ENTRY(di);
++ ENTRY(bp);
++ ENTRY(sp);
++ ENTRY(ip);
+ BLANK();
+ #undef ENTRY
+ DEFINE(IA32_RT_SIGFRAME_sigcontext,
+@@ -81,14 +97,14 @@ int main(void)
+ DEFINE(pbe_next, offsetof(struct pbe, next));
+ BLANK();
+ #define ENTRY(entry) DEFINE(pt_regs_ ## entry, offsetof(struct pt_regs, entry))
+- ENTRY(rbx);
+- ENTRY(rbx);
+- ENTRY(rcx);
+- ENTRY(rdx);
+- ENTRY(rsp);
+- ENTRY(rbp);
+- ENTRY(rsi);
+- ENTRY(rdi);
++ ENTRY(bx);
++ ENTRY(bx);
++ ENTRY(cx);
++ ENTRY(dx);
++ ENTRY(sp);
++ ENTRY(bp);
++ ENTRY(si);
++ ENTRY(di);
+ ENTRY(r8);
+ ENTRY(r9);
+ ENTRY(r10);
+@@ -97,7 +113,7 @@ int main(void)
+ ENTRY(r13);
+ ENTRY(r14);
+ ENTRY(r15);
+- ENTRY(eflags);
++ ENTRY(flags);
+ BLANK();
+ #undef ENTRY
+ #define ENTRY(entry) DEFINE(saved_context_ ## entry, offsetof(struct saved_context, entry))
+@@ -108,7 +124,7 @@ int main(void)
+ ENTRY(cr8);
+ BLANK();
+ #undef ENTRY
+- DEFINE(TSS_ist, offsetof(struct tss_struct, ist));
++ DEFINE(TSS_ist, offsetof(struct tss_struct, x86_tss.ist));
+ BLANK();
+ DEFINE(crypto_tfm_ctx_offset, offsetof(struct crypto_tfm, __crt_ctx));
+ BLANK();
+diff --git a/arch/x86/kernel/bootflag.c b/arch/x86/kernel/bootflag.c
+index 0b98605..30f25a7 100644
+--- a/arch/x86/kernel/bootflag.c
++++ b/arch/x86/kernel/bootflag.c
+@@ -1,8 +1,6 @@
+ /*
+ * Implement 'Simple Boot Flag Specification 2.0'
+ */
+-
+-
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+@@ -14,40 +12,38 @@
+
+ #include <linux/mc146818rtc.h>
+
+-
+ #define SBF_RESERVED (0x78)
+ #define SBF_PNPOS (1<<0)
+ #define SBF_BOOTING (1<<1)
+ #define SBF_DIAG (1<<2)
+ #define SBF_PARITY (1<<7)
+
+-
+ int sbf_port __initdata = -1; /* set via acpi_boot_init() */
+
+-
+ static int __init parity(u8 v)
+ {
+ int x = 0;
+ int i;
+-
+- for(i=0;i<8;i++)
+- {
+- x^=(v&1);
+- v>>=1;
+
-+/**
-+ * blk_init_queue - prepare a request queue for use with a block device
-+ * @rfn: The function to be called to process requests that have been
-+ * placed on the queue.
-+ * @lock: Request queue spin lock
-+ *
-+ * Description:
-+ * If a block device wishes to use the standard request handling procedures,
-+ * which sorts requests and coalesces adjacent requests, then it must
-+ * call blk_init_queue(). The function @rfn will be called when there
-+ * are requests on the queue that need to be processed. If the device
-+ * supports plugging, then @rfn may not be called immediately when requests
-+ * are available on the queue, but may be called at some time later instead.
-+ * Plugged queues are generally unplugged when a buffer belonging to one
-+ * of the requests on the queue is needed, or due to memory pressure.
-+ *
-+ * @rfn is not required, or even expected, to remove all requests off the
-+ * queue, but only as many as it can handle at a time. If it does leave
-+ * requests on the queue, it is responsible for arranging that the requests
-+ * get dealt with eventually.
-+ *
-+ * The queue spin lock must be held while manipulating the requests on the
-+ * request queue; this lock will be taken also from interrupt context, so irq
-+ * disabling is needed for it.
-+ *
-+ * Function returns a pointer to the initialized request queue, or NULL if
-+ * it didn't succeed.
-+ *
-+ * Note:
-+ * blk_init_queue() must be paired with a blk_cleanup_queue() call
-+ * when the block device is deactivated (such as at module unload).
-+ **/
++ for (i = 0; i < 8; i++) {
++ x ^= (v & 1);
++ v >>= 1;
+ }
+
-+struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
-+{
-+ return blk_init_queue_node(rfn, lock, -1);
-+}
-+EXPORT_SYMBOL(blk_init_queue);
+ return x;
+ }
+
+ static void __init sbf_write(u8 v)
+ {
+ unsigned long flags;
+- if(sbf_port != -1)
+- {
+
-+struct request_queue *
-+blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
-+{
-+ struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
++ if (sbf_port != -1) {
+ v &= ~SBF_PARITY;
+- if(!parity(v))
+- v|=SBF_PARITY;
++ if (!parity(v))
++ v |= SBF_PARITY;
+
+- printk(KERN_INFO "Simple Boot Flag at 0x%x set to 0x%x\n", sbf_port, v);
++ printk(KERN_INFO "Simple Boot Flag at 0x%x set to 0x%x\n",
++ sbf_port, v);
+
+ spin_lock_irqsave(&rtc_lock, flags);
+ CMOS_WRITE(v, sbf_port);
+@@ -57,33 +53,41 @@ static void __init sbf_write(u8 v)
+
+ static u8 __init sbf_read(void)
+ {
+- u8 v;
+ unsigned long flags;
+- if(sbf_port == -1)
++ u8 v;
+
-+ if (!q)
-+ return NULL;
++ if (sbf_port == -1)
+ return 0;
+
-+ q->node = node_id;
-+ if (blk_init_free_list(q)) {
-+ kmem_cache_free(blk_requestq_cachep, q);
-+ return NULL;
+ spin_lock_irqsave(&rtc_lock, flags);
+ v = CMOS_READ(sbf_port);
+ spin_unlock_irqrestore(&rtc_lock, flags);
++
+ return v;
+ }
+
+ static int __init sbf_value_valid(u8 v)
+ {
+- if(v&SBF_RESERVED) /* Reserved bits */
++ if (v & SBF_RESERVED) /* Reserved bits */
+ return 0;
+- if(!parity(v))
++ if (!parity(v))
+ return 0;
++
+ return 1;
+ }
+
+ static int __init sbf_init(void)
+ {
+ u8 v;
+- if(sbf_port == -1)
++
++ if (sbf_port == -1)
+ return 0;
++
+ v = sbf_read();
+- if(!sbf_value_valid(v))
+- printk(KERN_WARNING "Simple Boot Flag value 0x%x read from CMOS RAM was invalid\n",v);
++ if (!sbf_value_valid(v)) {
++ printk(KERN_WARNING "Simple Boot Flag value 0x%x read from "
++ "CMOS RAM was invalid\n", v);
+ }
+
+ v &= ~SBF_RESERVED;
+ v &= ~SBF_BOOTING;
+@@ -92,7 +96,7 @@ static int __init sbf_init(void)
+ v |= SBF_PNPOS;
+ #endif
+ sbf_write(v);
+
-+ /*
-+ * if caller didn't supply a lock, they get per-queue locking with
-+ * our embedded lock
-+ */
-+ if (!lock) {
-+ spin_lock_init(&q->__queue_lock);
-+ lock = &q->__queue_lock;
+ return 0;
+ }
+-
+ module_init(sbf_init);
+diff --git a/arch/x86/kernel/bugs_64.c b/arch/x86/kernel/bugs_64.c
+index 9a189ce..8f520f9 100644
+--- a/arch/x86/kernel/bugs_64.c
++++ b/arch/x86/kernel/bugs_64.c
+@@ -13,7 +13,6 @@
+ void __init check_bugs(void)
+ {
+ identify_cpu(&boot_cpu_data);
+- mtrr_bp_init();
+ #if !defined(CONFIG_SMP)
+ printk("CPU: ");
+ print_cpu_info(&boot_cpu_data);
+diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
+index 3e91d3e..238468a 100644
+--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
++++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
+@@ -45,6 +45,6 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
+ 			    &regs[CR_ECX], &regs[CR_EDX]);
+
+ if (regs[cb->reg] & (1 << cb->bit))
+- set_bit(cb->feature, c->x86_capability);
++ set_cpu_cap(c, cb->feature);
+ }
+ }
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 1ff88c7..06fa159 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -63,6 +63,15 @@ static __cpuinit int amd_apic_timer_broken(void)
+
+ int force_mwait __cpuinitdata;
+
++void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
++{
++ if (cpuid_eax(0x80000000) >= 0x80000007) {
++ c->x86_power = cpuid_edx(0x80000007);
++ if (c->x86_power & (1<<8))
++ set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+ }
++}
+
-+ q->request_fn = rfn;
-+ q->prep_rq_fn = NULL;
-+ q->unplug_fn = generic_unplug_device;
-+ q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
-+ q->queue_lock = lock;
+ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ {
+ u32 l, h;
+@@ -85,6 +94,8 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ }
+ #endif
+
++ early_init_amd(c);
+
-+ blk_queue_segment_boundary(q, 0xffffffff);
+ /*
+ * FIXME: We should handle the K5 here. Set up the write
+ * range and also turn on MSR 83 bits 4 and 31 (write alloc,
+@@ -257,12 +268,6 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
+ }
+
+- if (cpuid_eax(0x80000000) >= 0x80000007) {
+- c->x86_power = cpuid_edx(0x80000007);
+- if (c->x86_power & (1<<8))
+- set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+- }
+-
+ #ifdef CONFIG_X86_HT
+ /*
+ * On a AMD multi core setup the lower bits of the APIC id
+@@ -295,12 +300,12 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ local_apic_timer_disabled = 1;
+ #endif
+
+- if (c->x86 == 0x10 && !force_mwait)
+- clear_bit(X86_FEATURE_MWAIT, c->x86_capability);
+-
+ /* K6s reports MCEs but don't actually have all the MSRs */
+ if (c->x86 < 6)
+ clear_bit(X86_FEATURE_MCE, c->x86_capability);
+
-+ blk_queue_make_request(q, __make_request);
-+ blk_queue_max_segment_size(q, MAX_SEGMENT_SIZE);
++ if (cpu_has_xmm)
++ set_bit(X86_FEATURE_MFENCE_RDTSC, c->x86_capability);
+ }
+
+ static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 205fd5b..9b95edc 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -11,6 +11,7 @@
+ #include <linux/utsname.h>
+ #include <asm/bugs.h>
+ #include <asm/processor.h>
++#include <asm/processor-flags.h>
+ #include <asm/i387.h>
+ #include <asm/msr.h>
+ #include <asm/paravirt.h>
+@@ -35,7 +36,7 @@ __setup("mca-pentium", mca_pentium);
+ static int __init no_387(char *s)
+ {
+ boot_cpu_data.hard_math = 0;
+- write_cr0(0xE | read_cr0());
++ write_cr0(X86_CR0_TS | X86_CR0_EM | X86_CR0_MP | read_cr0());
+ return 1;
+ }
+
+@@ -153,7 +154,7 @@ static void __init check_config(void)
+ * If we configured ourselves for a TSC, we'd better have one!
+ */
+ #ifdef CONFIG_X86_TSC
+- if (!cpu_has_tsc && !tsc_disable)
++ if (!cpu_has_tsc)
+ panic("Kernel compiled for Pentium+, requires TSC feature!");
+ #endif
+
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index e2fcf20..db28aa9 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -22,43 +22,48 @@
+ #include "cpu.h"
+
+ DEFINE_PER_CPU(struct gdt_page, gdt_page) = { .gdt = {
+- [GDT_ENTRY_KERNEL_CS] = { 0x0000ffff, 0x00cf9a00 },
+- [GDT_ENTRY_KERNEL_DS] = { 0x0000ffff, 0x00cf9200 },
+- [GDT_ENTRY_DEFAULT_USER_CS] = { 0x0000ffff, 0x00cffa00 },
+- [GDT_ENTRY_DEFAULT_USER_DS] = { 0x0000ffff, 0x00cff200 },
++ [GDT_ENTRY_KERNEL_CS] = { { { 0x0000ffff, 0x00cf9a00 } } },
++ [GDT_ENTRY_KERNEL_DS] = { { { 0x0000ffff, 0x00cf9200 } } },
++ [GDT_ENTRY_DEFAULT_USER_CS] = { { { 0x0000ffff, 0x00cffa00 } } },
++ [GDT_ENTRY_DEFAULT_USER_DS] = { { { 0x0000ffff, 0x00cff200 } } },
+ /*
+ * Segments used for calling PnP BIOS have byte granularity.
+ * They code segments and data segments have fixed 64k limits,
+ * the transfer segment sizes are set at run time.
+ */
+- [GDT_ENTRY_PNPBIOS_CS32] = { 0x0000ffff, 0x00409a00 },/* 32-bit code */
+- [GDT_ENTRY_PNPBIOS_CS16] = { 0x0000ffff, 0x00009a00 },/* 16-bit code */
+- [GDT_ENTRY_PNPBIOS_DS] = { 0x0000ffff, 0x00009200 }, /* 16-bit data */
+- [GDT_ENTRY_PNPBIOS_TS1] = { 0x00000000, 0x00009200 },/* 16-bit data */
+- [GDT_ENTRY_PNPBIOS_TS2] = { 0x00000000, 0x00009200 },/* 16-bit data */
++ /* 32-bit code */
++ [GDT_ENTRY_PNPBIOS_CS32] = { { { 0x0000ffff, 0x00409a00 } } },
++ /* 16-bit code */
++ [GDT_ENTRY_PNPBIOS_CS16] = { { { 0x0000ffff, 0x00009a00 } } },
++ /* 16-bit data */
++ [GDT_ENTRY_PNPBIOS_DS] = { { { 0x0000ffff, 0x00009200 } } },
++ /* 16-bit data */
++ [GDT_ENTRY_PNPBIOS_TS1] = { { { 0x00000000, 0x00009200 } } },
++ /* 16-bit data */
++ [GDT_ENTRY_PNPBIOS_TS2] = { { { 0x00000000, 0x00009200 } } },
+ /*
+ * The APM segments have byte granularity and their bases
+ * are set at run time. All have 64k limits.
+ */
+- [GDT_ENTRY_APMBIOS_BASE] = { 0x0000ffff, 0x00409a00 },/* 32-bit code */
++ /* 32-bit code */
++ [GDT_ENTRY_APMBIOS_BASE] = { { { 0x0000ffff, 0x00409a00 } } },
+ /* 16-bit code */
+- [GDT_ENTRY_APMBIOS_BASE+1] = { 0x0000ffff, 0x00009a00 },
+- [GDT_ENTRY_APMBIOS_BASE+2] = { 0x0000ffff, 0x00409200 }, /* data */
++ [GDT_ENTRY_APMBIOS_BASE+1] = { { { 0x0000ffff, 0x00009a00 } } },
++ /* data */
++ [GDT_ENTRY_APMBIOS_BASE+2] = { { { 0x0000ffff, 0x00409200 } } },
+
+- [GDT_ENTRY_ESPFIX_SS] = { 0x00000000, 0x00c09200 },
+- [GDT_ENTRY_PERCPU] = { 0x00000000, 0x00000000 },
++ [GDT_ENTRY_ESPFIX_SS] = { { { 0x00000000, 0x00c09200 } } },
++ [GDT_ENTRY_PERCPU] = { { { 0x00000000, 0x00000000 } } },
+ } };
+ EXPORT_PER_CPU_SYMBOL_GPL(gdt_page);
+
++__u32 cleared_cpu_caps[NCAPINTS] __cpuinitdata;
+
-+ blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
-+ blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
+ static int cachesize_override __cpuinitdata = -1;
+-static int disable_x86_fxsr __cpuinitdata;
+ static int disable_x86_serial_nr __cpuinitdata = 1;
+-static int disable_x86_sep __cpuinitdata;
+
+ struct cpu_dev * cpu_devs[X86_VENDOR_NUM] = {};
+
+-extern int disable_pse;
+-
+ static void __cpuinit default_init(struct cpuinfo_x86 * c)
+ {
+ /* Not much we can do here... */
+@@ -207,16 +212,8 @@ static void __cpuinit get_cpu_vendor(struct cpuinfo_x86 *c, int early)
+
+ static int __init x86_fxsr_setup(char * s)
+ {
+- /* Tell all the other CPUs to not use it... */
+- disable_x86_fxsr = 1;
+-
+- /*
+- * ... and clear the bits early in the boot_cpu_data
+- * so that the bootup process doesn't try to do this
+- * either.
+- */
+- clear_bit(X86_FEATURE_FXSR, boot_cpu_data.x86_capability);
+- clear_bit(X86_FEATURE_XMM, boot_cpu_data.x86_capability);
++ setup_clear_cpu_cap(X86_FEATURE_FXSR);
++ setup_clear_cpu_cap(X86_FEATURE_XMM);
+ return 1;
+ }
+ __setup("nofxsr", x86_fxsr_setup);
+@@ -224,7 +221,7 @@ __setup("nofxsr", x86_fxsr_setup);
+
+ static int __init x86_sep_setup(char * s)
+ {
+- disable_x86_sep = 1;
++ setup_clear_cpu_cap(X86_FEATURE_SEP);
+ return 1;
+ }
+ __setup("nosep", x86_sep_setup);
+@@ -281,6 +278,33 @@ void __init cpu_detect(struct cpuinfo_x86 *c)
+ c->x86_cache_alignment = ((misc >> 8) & 0xff) * 8;
+ }
+ }
++static void __cpuinit early_get_cap(struct cpuinfo_x86 *c)
++{
++ u32 tfms, xlvl;
++ int ebx;
+
-+ q->sg_reserved_size = INT_MAX;
++ memset(&c->x86_capability, 0, sizeof c->x86_capability);
++ if (have_cpuid_p()) {
++ /* Intel-defined flags: level 0x00000001 */
++ if (c->cpuid_level >= 0x00000001) {
++ u32 capability, excap;
++ cpuid(0x00000001, &tfms, &ebx, &excap, &capability);
++ c->x86_capability[0] = capability;
++ c->x86_capability[4] = excap;
++ }
++
++ /* AMD-defined flags: level 0x80000001 */
++ xlvl = cpuid_eax(0x80000000);
++ if ((xlvl & 0xffff0000) == 0x80000000) {
++ if (xlvl >= 0x80000001) {
++ c->x86_capability[1] = cpuid_edx(0x80000001);
++ c->x86_capability[6] = cpuid_ecx(0x80000001);
++ }
++ }
+
-+ /*
-+ * all done
-+ */
-+ if (!elevator_init(q, NULL)) {
-+ blk_queue_congestion_threshold(q);
-+ return q;
+ }
+
-+ blk_put_queue(q);
-+ return NULL;
+}
-+EXPORT_SYMBOL(blk_init_queue_node);
+
+ /* Do minimum CPU detection early.
+ Fields really needed: vendor, cpuid_level, family, model, mask, cache alignment.
+@@ -300,6 +324,17 @@ static void __init early_cpu_detect(void)
+ cpu_detect(c);
+
+ get_cpu_vendor(c, 1);
+
-+int blk_get_queue(struct request_queue *q)
-+{
-+ if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
-+ kobject_get(&q->kobj);
-+ return 0;
++ switch (c->x86_vendor) {
++ case X86_VENDOR_AMD:
++ early_init_amd(c);
++ break;
++ case X86_VENDOR_INTEL:
++ early_init_intel(c);
++ break;
+ }
+
++ early_get_cap(c);
+ }
+
+ static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
+@@ -357,8 +392,6 @@ static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
+ init_scattered_cpuid_features(c);
+ }
+
+- early_intel_workaround(c);
+-
+ #ifdef CONFIG_X86_HT
+ c->phys_proc_id = (cpuid_ebx(1) >> 24) & 0xff;
+ #endif
+@@ -392,7 +425,7 @@ __setup("serialnumber", x86_serial_nr_setup);
+ /*
+ * This does the hard work of actually picking apart the CPU stuff...
+ */
+-static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
++void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ {
+ int i;
+
+@@ -418,20 +451,9 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+
+ generic_identify(c);
+
+- printk(KERN_DEBUG "CPU: After generic identify, caps:");
+- for (i = 0; i < NCAPINTS; i++)
+- printk(" %08lx", c->x86_capability[i]);
+- printk("\n");
+-
+- if (this_cpu->c_identify) {
++ if (this_cpu->c_identify)
+ this_cpu->c_identify(c);
+
+- printk(KERN_DEBUG "CPU: After vendor identify, caps:");
+- for (i = 0; i < NCAPINTS; i++)
+- printk(" %08lx", c->x86_capability[i]);
+- printk("\n");
+- }
+-
+ /*
+ * Vendor-specific initialization. In this section we
+ * canonicalize the feature flags, meaning if there are
+@@ -453,23 +475,6 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ * we do "generic changes."
+ */
+
+- /* TSC disabled? */
+- if ( tsc_disable )
+- clear_bit(X86_FEATURE_TSC, c->x86_capability);
+-
+- /* FXSR disabled? */
+- if (disable_x86_fxsr) {
+- clear_bit(X86_FEATURE_FXSR, c->x86_capability);
+- clear_bit(X86_FEATURE_XMM, c->x86_capability);
+- }
+-
+- /* SEP disabled? */
+- if (disable_x86_sep)
+- clear_bit(X86_FEATURE_SEP, c->x86_capability);
+-
+- if (disable_pse)
+- clear_bit(X86_FEATURE_PSE, c->x86_capability);
+-
+ /* If the model name is still unset, do table lookup. */
+ if ( !c->x86_model_id[0] ) {
+ char *p;
+@@ -482,13 +487,6 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ c->x86, c->x86_model);
+ }
+
+- /* Now the feature flags better reflect actual CPU features! */
+-
+- printk(KERN_DEBUG "CPU: After all inits, caps:");
+- for (i = 0; i < NCAPINTS; i++)
+- printk(" %08lx", c->x86_capability[i]);
+- printk("\n");
+-
+ /*
+ * On SMP, boot_cpu_data holds the common feature set between
+ * all CPUs; so make sure that we indicate which features are
+@@ -501,8 +499,14 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ boot_cpu_data.x86_capability[i] &= c->x86_capability[i];
+ }
+
++ /* Clear all flags overridden by options */
++ for (i = 0; i < NCAPINTS; i++)
++ c->x86_capability[i] ^= cleared_cpu_caps[i];
++
+ /* Init Machine Check Exception if available. */
+ mcheck_init(c);
++
++ select_idle_routine(c);
+ }
+
+ void __init identify_boot_cpu(void)
+@@ -510,7 +514,6 @@ void __init identify_boot_cpu(void)
+ identify_cpu(&boot_cpu_data);
+ sysenter_setup();
+ enable_sep_cpu();
+- mtrr_bp_init();
+ }
+
+ void __cpuinit identify_secondary_cpu(struct cpuinfo_x86 *c)
+@@ -567,6 +570,13 @@ void __cpuinit detect_ht(struct cpuinfo_x86 *c)
+ }
+ #endif
+
++static __init int setup_noclflush(char *arg)
++{
++ setup_clear_cpu_cap(X86_FEATURE_CLFLSH);
+ return 1;
+}
++__setup("noclflush", setup_noclflush);
+
-+EXPORT_SYMBOL(blk_get_queue);
-+
-+static inline void blk_free_request(struct request_queue *q, struct request *rq)
+ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
+ {
+ char *vendor = NULL;
+@@ -590,6 +600,17 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
+ printk("\n");
+ }
+
++static __init int setup_disablecpuid(char *arg)
+{
-+ if (rq->cmd_flags & REQ_ELVPRIV)
-+ elv_put_request(q, rq);
-+ mempool_free(rq, q->rq.rq_pool);
++ int bit;
++ if (get_option(&arg, &bit) && bit < NCAPINTS*32)
++ setup_clear_cpu_cap(bit);
++ else
++ return 0;
++ return 1;
+}
++__setup("clearcpuid=", setup_disablecpuid);
+
-+static struct request *
-+blk_alloc_request(struct request_queue *q, int rw, int priv, gfp_t gfp_mask)
+ cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
+
+ /* This is hacky. :)
+@@ -620,21 +641,13 @@ void __init early_cpu_init(void)
+ nexgen_init_cpu();
+ umc_init_cpu();
+ early_cpu_detect();
+-
+-#ifdef CONFIG_DEBUG_PAGEALLOC
+- /* pse is not compatible with on-the-fly unmapping,
+- * disable it even if the cpus claim to support it.
+- */
+- clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
+- disable_pse = 1;
+-#endif
+ }
+
+ /* Make sure %fs is initialized properly in idle threads */
+ struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
+ {
+ memset(regs, 0, sizeof(struct pt_regs));
+- regs->xfs = __KERNEL_PERCPU;
++ regs->fs = __KERNEL_PERCPU;
+ return regs;
+ }
+
+@@ -642,7 +655,7 @@ struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
+ * it's on the real one. */
+ void switch_to_new_gdt(void)
+ {
+- struct Xgt_desc_struct gdt_descr;
++ struct desc_ptr gdt_descr;
+
+ gdt_descr.address = (long)get_cpu_gdt_table(smp_processor_id());
+ gdt_descr.size = GDT_SIZE - 1;
+@@ -672,12 +685,6 @@ void __cpuinit cpu_init(void)
+
+ if (cpu_has_vme || cpu_has_tsc || cpu_has_de)
+ clear_in_cr4(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
+- if (tsc_disable && cpu_has_tsc) {
+- printk(KERN_NOTICE "Disabling TSC...\n");
+- /**** FIX-HPA: DOES THIS REALLY BELONG HERE? ****/
+- clear_bit(X86_FEATURE_TSC, boot_cpu_data.x86_capability);
+- set_in_cr4(X86_CR4_TSD);
+- }
+
+ load_idt(&idt_descr);
+ switch_to_new_gdt();
+@@ -691,7 +698,7 @@ void __cpuinit cpu_init(void)
+ BUG();
+ enter_lazy_tlb(&init_mm, curr);
+
+- load_esp0(t, thread);
++ load_sp0(t, thread);
+ set_tss_desc(cpu,t);
+ load_TR_desc();
+ load_LDT(&init_mm.context);
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 2f6432c..ad6527a 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -24,5 +24,6 @@ extern struct cpu_dev * cpu_devs [X86_VENDOR_NUM];
+ extern int get_model_name(struct cpuinfo_x86 *c);
+ extern void display_cacheinfo(struct cpuinfo_x86 *c);
+
+-extern void early_intel_workaround(struct cpuinfo_x86 *c);
++extern void early_init_intel(struct cpuinfo_x86 *c);
++extern void early_init_amd(struct cpuinfo_x86 *c);
+
+diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+index fea0af0..a962dcb 100644
+--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
++++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+@@ -67,7 +67,8 @@ struct acpi_cpufreq_data {
+ unsigned int cpu_feature;
+ };
+
+-static struct acpi_cpufreq_data *drv_data[NR_CPUS];
++static DEFINE_PER_CPU(struct acpi_cpufreq_data *, drv_data);
++
+ /* acpi_perf_data is a pointer to percpu data. */
+ static struct acpi_processor_performance *acpi_perf_data;
+
+@@ -218,14 +219,14 @@ static u32 get_cur_val(cpumask_t mask)
+ if (unlikely(cpus_empty(mask)))
+ return 0;
+
+- switch (drv_data[first_cpu(mask)]->cpu_feature) {
++ switch (per_cpu(drv_data, first_cpu(mask))->cpu_feature) {
+ case SYSTEM_INTEL_MSR_CAPABLE:
+ cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
+ cmd.addr.msr.reg = MSR_IA32_PERF_STATUS;
+ break;
+ case SYSTEM_IO_CAPABLE:
+ cmd.type = SYSTEM_IO_CAPABLE;
+- perf = drv_data[first_cpu(mask)]->acpi_data;
++ perf = per_cpu(drv_data, first_cpu(mask))->acpi_data;
+ cmd.addr.io.port = perf->control_register.address;
+ cmd.addr.io.bit_width = perf->control_register.bit_width;
+ break;
+@@ -325,7 +326,7 @@ static unsigned int get_measured_perf(unsigned int cpu)
+
+ #endif
+
+- retval = drv_data[cpu]->max_freq * perf_percent / 100;
++ retval = per_cpu(drv_data, cpu)->max_freq * perf_percent / 100;
+
+ put_cpu();
+ set_cpus_allowed(current, saved_mask);
+@@ -336,7 +337,7 @@ static unsigned int get_measured_perf(unsigned int cpu)
+
+ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
+ {
+- struct acpi_cpufreq_data *data = drv_data[cpu];
++ struct acpi_cpufreq_data *data = per_cpu(drv_data, cpu);
+ unsigned int freq;
+
+ dprintk("get_cur_freq_on_cpu (%d)\n", cpu);
+@@ -370,7 +371,7 @@ static unsigned int check_freqs(cpumask_t mask, unsigned int freq,
+ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
+ unsigned int target_freq, unsigned int relation)
+ {
+- struct acpi_cpufreq_data *data = drv_data[policy->cpu];
++ struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+ struct acpi_processor_performance *perf;
+ struct cpufreq_freqs freqs;
+ cpumask_t online_policy_cpus;
+@@ -466,7 +467,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
+
+ static int acpi_cpufreq_verify(struct cpufreq_policy *policy)
+ {
+- struct acpi_cpufreq_data *data = drv_data[policy->cpu];
++ struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+
+ dprintk("acpi_cpufreq_verify\n");
+
+@@ -570,7 +571,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ return -ENOMEM;
+
+ data->acpi_data = percpu_ptr(acpi_perf_data, cpu);
+- drv_data[cpu] = data;
++ per_cpu(drv_data, cpu) = data;
+
+ if (cpu_has(c, X86_FEATURE_CONSTANT_TSC))
+ acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS;
+@@ -714,20 +715,20 @@ err_unreg:
+ acpi_processor_unregister_performance(perf, cpu);
+ err_free:
+ kfree(data);
+- drv_data[cpu] = NULL;
++ per_cpu(drv_data, cpu) = NULL;
+
+ return result;
+ }
+
+ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+ {
+- struct acpi_cpufreq_data *data = drv_data[policy->cpu];
++ struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+
+ dprintk("acpi_cpufreq_cpu_exit\n");
+
+ if (data) {
+ cpufreq_frequency_table_put_attr(policy->cpu);
+- drv_data[policy->cpu] = NULL;
++ per_cpu(drv_data, policy->cpu) = NULL;
+ acpi_processor_unregister_performance(data->acpi_data,
+ policy->cpu);
+ kfree(data);
+@@ -738,7 +739,7 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+
+ static int acpi_cpufreq_resume(struct cpufreq_policy *policy)
+ {
+- struct acpi_cpufreq_data *data = drv_data[policy->cpu];
++ struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
+
+ dprintk("acpi_cpufreq_resume\n");
+
+diff --git a/arch/x86/kernel/cpu/cpufreq/longhaul.c b/arch/x86/kernel/cpu/cpufreq/longhaul.c
+index 749d00c..06fcce5 100644
+--- a/arch/x86/kernel/cpu/cpufreq/longhaul.c
++++ b/arch/x86/kernel/cpu/cpufreq/longhaul.c
+@@ -694,7 +694,7 @@ static acpi_status longhaul_walk_callback(acpi_handle obj_handle,
+ if ( acpi_bus_get_device(obj_handle, &d) ) {
+ return 0;
+ }
+- *return_value = (void *)acpi_driver_data(d);
++ *return_value = acpi_driver_data(d);
+ return 1;
+ }
+
+diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+index 99e1ef9..a052273 100644
+--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
++++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+@@ -52,7 +52,7 @@
+ /* serialize freq changes */
+ static DEFINE_MUTEX(fidvid_mutex);
+
+-static struct powernow_k8_data *powernow_data[NR_CPUS];
++static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
+
+ static int cpu_family = CPU_OPTERON;
+
+@@ -1018,7 +1018,7 @@ static int transition_frequency_pstate(struct powernow_k8_data *data, unsigned i
+ static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsigned relation)
+ {
+ cpumask_t oldmask = CPU_MASK_ALL;
+- struct powernow_k8_data *data = powernow_data[pol->cpu];
++ struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
+ u32 checkfid;
+ u32 checkvid;
+ unsigned int newstate;
+@@ -1094,7 +1094,7 @@ err_out:
+ /* Driver entry point to verify the policy and range of frequencies */
+ static int powernowk8_verify(struct cpufreq_policy *pol)
+ {
+- struct powernow_k8_data *data = powernow_data[pol->cpu];
++ struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
+
+ if (!data)
+ return -EINVAL;
+@@ -1202,7 +1202,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
+ dprintk("cpu_init done, current fid 0x%x, vid 0x%x\n",
+ data->currfid, data->currvid);
+
+- powernow_data[pol->cpu] = data;
++ per_cpu(powernow_data, pol->cpu) = data;
+
+ return 0;
+
+@@ -1216,7 +1216,7 @@ err_out:
+
+ static int __devexit powernowk8_cpu_exit (struct cpufreq_policy *pol)
+ {
+- struct powernow_k8_data *data = powernow_data[pol->cpu];
++ struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
+
+ if (!data)
+ return -EINVAL;
+@@ -1237,7 +1237,7 @@ static unsigned int powernowk8_get (unsigned int cpu)
+ cpumask_t oldmask = current->cpus_allowed;
+ unsigned int khz = 0;
+
+- data = powernow_data[first_cpu(per_cpu(cpu_core_map, cpu))];
++ data = per_cpu(powernow_data, first_cpu(per_cpu(cpu_core_map, cpu)));
+
+ if (!data)
+ return -EINVAL;
+diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
+index 88d66fb..404a6a2 100644
+--- a/arch/x86/kernel/cpu/cyrix.c
++++ b/arch/x86/kernel/cpu/cyrix.c
+@@ -5,6 +5,7 @@
+ #include <asm/dma.h>
+ #include <asm/io.h>
+ #include <asm/processor-cyrix.h>
++#include <asm/processor-flags.h>
+ #include <asm/timer.h>
+ #include <asm/pci-direct.h>
+ #include <asm/tsc.h>
+@@ -126,15 +127,12 @@ static void __cpuinit set_cx86_reorder(void)
+
+ static void __cpuinit set_cx86_memwb(void)
+ {
+- u32 cr0;
+-
+ printk(KERN_INFO "Enable Memory-Write-back mode on Cyrix/NSC processor.\n");
+
+ /* CCR2 bit 2: unlock NW bit */
+ setCx86(CX86_CCR2, getCx86(CX86_CCR2) & ~0x04);
+ /* set 'Not Write-through' */
+- cr0 = 0x20000000;
+- write_cr0(read_cr0() | cr0);
++ write_cr0(read_cr0() | X86_CR0_NW);
+ /* CCR2 bit 2: lock NW bit and set WT1 */
+ setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x14 );
+ }
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index cc8c501..d1c372b 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -11,6 +11,8 @@
+ #include <asm/pgtable.h>
+ #include <asm/msr.h>
+ #include <asm/uaccess.h>
++#include <asm/ptrace.h>
++#include <asm/ds.h>
+
+ #include "cpu.h"
+
+@@ -27,13 +29,14 @@
+ struct movsl_mask movsl_mask __read_mostly;
+ #endif
+
+-void __cpuinit early_intel_workaround(struct cpuinfo_x86 *c)
++void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
+ {
+- if (c->x86_vendor != X86_VENDOR_INTEL)
+- return;
+ /* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
+ if (c->x86 == 15 && c->x86_cache_alignment == 64)
+ c->x86_cache_alignment = 128;
++ if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
++ (c->x86 == 0x6 && c->x86_model >= 0x0e))
++ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ }
+
+ /*
+@@ -113,6 +116,8 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ unsigned int l2 = 0;
+ char *p = NULL;
+
++ early_init_intel(c);
++
+ #ifdef CONFIG_X86_F00F_BUG
+ /*
+ * All current models of Pentium and Pentium with MMX technology CPUs
+@@ -132,7 +137,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ }
+ #endif
+
+- select_idle_routine(c);
+ l2 = init_intel_cacheinfo(c);
+ if (c->cpuid_level > 9 ) {
+ unsigned eax = cpuid_eax(10);
+@@ -201,16 +205,13 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ }
+ #endif
+
++ if (cpu_has_xmm2)
++ set_bit(X86_FEATURE_LFENCE_RDTSC, c->x86_capability);
+ if (c->x86 == 15) {
+ set_bit(X86_FEATURE_P4, c->x86_capability);
+- set_bit(X86_FEATURE_SYNC_RDTSC, c->x86_capability);
+ }
+ if (c->x86 == 6)
+ set_bit(X86_FEATURE_P3, c->x86_capability);
+- if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
+- (c->x86 == 0x6 && c->x86_model >= 0x0e))
+- set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+-
+ if (cpu_has_ds) {
+ unsigned int l1;
+ rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
+@@ -219,6 +220,9 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ if (!(l1 & (1<<12)))
+ set_bit(X86_FEATURE_PEBS, c->x86_capability);
+ }
++
++ if (cpu_has_bts)
++ ds_init_intel(c);
+ }
+
+ static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
+@@ -342,5 +346,22 @@ unsigned long cmpxchg_386_u32(volatile void *ptr, u32 old, u32 new)
+ EXPORT_SYMBOL(cmpxchg_386_u32);
+ #endif
+
++#ifndef CONFIG_X86_CMPXCHG64
++unsigned long long cmpxchg_486_u64(volatile void *ptr, u64 old, u64 new)
+{
-+ struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
++ u64 prev;
++ unsigned long flags;
+
-+ if (!rq)
-+ return NULL;
++ /* Poor man's cmpxchg8b for 386 and 486. Unsuitable for SMP */
++ local_irq_save(flags);
++ prev = *(u64 *)ptr;
++ if (prev == old)
++ *(u64 *)ptr = new;
++ local_irq_restore(flags);
++ return prev;
++}
++EXPORT_SYMBOL(cmpxchg_486_u64);
++#endif
+
-+ /*
-+ * first three bits are identical in rq->cmd_flags and bio->bi_rw,
-+ * see bio.h and blkdev.h
-+ */
-+ rq->cmd_flags = rw | REQ_ALLOCED;
+ // arch_initcall(intel_cpu_init);
+
+diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
+index 9f530ff..8b4507b 100644
+--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
++++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
+@@ -733,10 +733,8 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
+ if (unlikely(retval < 0))
+ return retval;
+
+- cache_kobject[cpu]->parent = &sys_dev->kobj;
+- kobject_set_name(cache_kobject[cpu], "%s", "cache");
+- cache_kobject[cpu]->ktype = &ktype_percpu_entry;
+- retval = kobject_register(cache_kobject[cpu]);
++ retval = kobject_init_and_add(cache_kobject[cpu], &ktype_percpu_entry,
++ &sys_dev->kobj, "%s", "cache");
+ if (retval < 0) {
+ cpuid4_cache_sysfs_exit(cpu);
+ return retval;
+@@ -746,23 +744,23 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
+ this_object = INDEX_KOBJECT_PTR(cpu,i);
+ this_object->cpu = cpu;
+ this_object->index = i;
+- this_object->kobj.parent = cache_kobject[cpu];
+- kobject_set_name(&(this_object->kobj), "index%1lu", i);
+- this_object->kobj.ktype = &ktype_cache;
+- retval = kobject_register(&(this_object->kobj));
++ retval = kobject_init_and_add(&(this_object->kobj),
++ &ktype_cache, cache_kobject[cpu],
++ "index%1lu", i);
+ if (unlikely(retval)) {
+ for (j = 0; j < i; j++) {
+- kobject_unregister(
+- &(INDEX_KOBJECT_PTR(cpu,j)->kobj));
++ kobject_put(&(INDEX_KOBJECT_PTR(cpu,j)->kobj));
+ }
+- kobject_unregister(cache_kobject[cpu]);
++ kobject_put(cache_kobject[cpu]);
+ cpuid4_cache_sysfs_exit(cpu);
+ break;
+ }
++ kobject_uevent(&(this_object->kobj), KOBJ_ADD);
+ }
+ if (!retval)
+ cpu_set(cpu, cache_dev_map);
+
++ kobject_uevent(cache_kobject[cpu], KOBJ_ADD);
+ return retval;
+ }
+
+@@ -778,8 +776,8 @@ static void __cpuinit cache_remove_dev(struct sys_device * sys_dev)
+ cpu_clear(cpu, cache_dev_map);
+
+ for (i = 0; i < num_cache_leaves; i++)
+- kobject_unregister(&(INDEX_KOBJECT_PTR(cpu,i)->kobj));
+- kobject_unregister(cache_kobject[cpu]);
++ kobject_put(&(INDEX_KOBJECT_PTR(cpu,i)->kobj));
++ kobject_put(cache_kobject[cpu]);
+ cpuid4_cache_sysfs_exit(cpu);
+ }
+
+diff --git a/arch/x86/kernel/cpu/mcheck/k7.c b/arch/x86/kernel/cpu/mcheck/k7.c
+index eef63e3..e633c9c 100644
+--- a/arch/x86/kernel/cpu/mcheck/k7.c
++++ b/arch/x86/kernel/cpu/mcheck/k7.c
+@@ -16,7 +16,7 @@
+ #include "mce.h"
+
+ /* Machine Check Handler For AMD Athlon/Duron */
+-static fastcall void k7_machine_check(struct pt_regs * regs, long error_code)
++static void k7_machine_check(struct pt_regs * regs, long error_code)
+ {
+ int recover=1;
+ u32 alow, ahigh, high, low;
+@@ -27,29 +27,32 @@ static fastcall void k7_machine_check(struct pt_regs * regs, long error_code)
+ if (mcgstl & (1<<0)) /* Recoverable ? */
+ recover=0;
+
+- printk (KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
++ printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
+ smp_processor_id(), mcgsth, mcgstl);
+
+- for (i=1; i<nr_mce_banks; i++) {
+- rdmsr (MSR_IA32_MC0_STATUS+i*4,low, high);
++ for (i = 1; i < nr_mce_banks; i++) {
++ rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
+ if (high&(1<<31)) {
++ char misc[20];
++ char addr[24];
++ misc[0] = addr[0] = '\0';
+ if (high & (1<<29))
+ recover |= 1;
+ if (high & (1<<25))
+ recover |= 2;
+- printk (KERN_EMERG "Bank %d: %08x%08x", i, high, low);
+ high &= ~(1<<31);
+ if (high & (1<<27)) {
+- rdmsr (MSR_IA32_MC0_MISC+i*4, alow, ahigh);
+- printk ("[%08x%08x]", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_MISC+i*4, alow, ahigh);
++ snprintf(misc, 20, "[%08x%08x]", ahigh, alow);
+ }
+ if (high & (1<<26)) {
+- rdmsr (MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
+- printk (" at %08x%08x", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
++ snprintf(addr, 24, " at %08x%08x", ahigh, alow);
+ }
+- printk ("\n");
++ printk(KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
++ smp_processor_id(), i, high, low, misc, addr);
+ /* Clear it */
+- wrmsr (MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
++ wrmsr(MSR_IA32_MC0_STATUS+i*4, 0UL, 0UL);
+ /* Serialize */
+ wmb();
+ add_taint(TAINT_MACHINE_CHECK);
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.h b/arch/x86/kernel/cpu/mcheck/mce.h
+index 81fb6e2..ae9f628 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.h
++++ b/arch/x86/kernel/cpu/mcheck/mce.h
+@@ -8,7 +8,7 @@ void intel_p6_mcheck_init(struct cpuinfo_x86 *c);
+ void winchip_mcheck_init(struct cpuinfo_x86 *c);
+
+ /* Call the installed machine check handler for this CPU setup. */
+-extern fastcall void (*machine_check_vector)(struct pt_regs *, long error_code);
++extern void (*machine_check_vector)(struct pt_regs *, long error_code);
+
+ extern int nr_mce_banks;
+
+diff --git a/arch/x86/kernel/cpu/mcheck/mce_32.c b/arch/x86/kernel/cpu/mcheck/mce_32.c
+index 34c781e..a5182dc 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce_32.c
++++ b/arch/x86/kernel/cpu/mcheck/mce_32.c
+@@ -22,13 +22,13 @@ int nr_mce_banks;
+ EXPORT_SYMBOL_GPL(nr_mce_banks); /* non-fatal.o */
+
+ /* Handle unconfigured int18 (should never happen) */
+-static fastcall void unexpected_machine_check(struct pt_regs * regs, long error_code)
++static void unexpected_machine_check(struct pt_regs * regs, long error_code)
+ {
+ printk(KERN_ERR "CPU#%d: Unexpected int18 (Machine Check).\n", smp_processor_id());
+ }
+
+ /* Call the installed machine check handler for this CPU setup. */
+-void fastcall (*machine_check_vector)(struct pt_regs *, long error_code) = unexpected_machine_check;
++void (*machine_check_vector)(struct pt_regs *, long error_code) = unexpected_machine_check;
+
+ /* This has to be run for each processor */
+ void mcheck_init(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/mcheck/mce_64.c b/arch/x86/kernel/cpu/mcheck/mce_64.c
+index 4b21d29..9a699ed 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce_64.c
++++ b/arch/x86/kernel/cpu/mcheck/mce_64.c
+@@ -63,7 +63,7 @@ static DECLARE_WAIT_QUEUE_HEAD(mce_wait);
+ * separate MCEs from kernel messages to avoid bogus bug reports.
+ */
+
+-struct mce_log mcelog = {
++static struct mce_log mcelog = {
+ MCE_LOG_SIGNATURE,
+ MCE_LOG_LEN,
+ };
+@@ -80,7 +80,7 @@ void mce_log(struct mce *mce)
+ /* When the buffer fills up discard new entries. Assume
+ that the earlier errors are the more interesting. */
+ if (entry >= MCE_LOG_LEN) {
+- set_bit(MCE_OVERFLOW, &mcelog.flags);
++ set_bit(MCE_OVERFLOW, (unsigned long *)&mcelog.flags);
+ return;
+ }
+ /* Old left over entry. Skip. */
+@@ -110,12 +110,12 @@ static void print_mce(struct mce *m)
+ KERN_EMERG
+ "CPU %d: Machine Check Exception: %16Lx Bank %d: %016Lx\n",
+ m->cpu, m->mcgstatus, m->bank, m->status);
+- if (m->rip) {
++ if (m->ip) {
+ printk(KERN_EMERG "RIP%s %02x:<%016Lx> ",
+ !(m->mcgstatus & MCG_STATUS_EIPV) ? " !INEXACT!" : "",
+- m->cs, m->rip);
++ m->cs, m->ip);
+ if (m->cs == __KERNEL_CS)
+- print_symbol("{%s}", m->rip);
++ print_symbol("{%s}", m->ip);
+ printk("\n");
+ }
+ printk(KERN_EMERG "TSC %Lx ", m->tsc);
+@@ -156,16 +156,16 @@ static int mce_available(struct cpuinfo_x86 *c)
+ static inline void mce_get_rip(struct mce *m, struct pt_regs *regs)
+ {
+ if (regs && (m->mcgstatus & MCG_STATUS_RIPV)) {
+- m->rip = regs->rip;
++ m->ip = regs->ip;
+ m->cs = regs->cs;
+ } else {
+- m->rip = 0;
++ m->ip = 0;
+ m->cs = 0;
+ }
+ if (rip_msr) {
+ /* Assume the RIP in the MSR is exact. Is this true? */
+ m->mcgstatus |= MCG_STATUS_EIPV;
+- rdmsrl(rip_msr, m->rip);
++ rdmsrl(rip_msr, m->ip);
+ m->cs = 0;
+ }
+ }
+@@ -192,10 +192,10 @@ void do_machine_check(struct pt_regs * regs, long error_code)
+
+ atomic_inc(&mce_entry);
+
+- if (regs)
+- notify_die(DIE_NMI, "machine check", regs, error_code, 18,
+- SIGKILL);
+- if (!banks)
++ if ((regs
++ && notify_die(DIE_NMI, "machine check", regs, error_code,
++ 18, SIGKILL) == NOTIFY_STOP)
++ || !banks)
+ goto out2;
+
+ memset(&m, 0, sizeof(struct mce));
+@@ -288,7 +288,7 @@ void do_machine_check(struct pt_regs * regs, long error_code)
+ * instruction which caused the MCE.
+ */
+ if (m.mcgstatus & MCG_STATUS_EIPV)
+- user_space = panicm.rip && (panicm.cs & 3);
++ user_space = panicm.ip && (panicm.cs & 3);
+
+ /*
+ * If we know that the error was in user space, send a
+@@ -564,7 +564,7 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
+ loff_t *off)
+ {
+ unsigned long *cpu_tsc;
+- static DECLARE_MUTEX(mce_read_sem);
++ static DEFINE_MUTEX(mce_read_mutex);
+ unsigned next;
+ char __user *buf = ubuf;
+ int i, err;
+@@ -573,12 +573,12 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
+ if (!cpu_tsc)
+ return -ENOMEM;
+
+- down(&mce_read_sem);
++ mutex_lock(&mce_read_mutex);
+ next = rcu_dereference(mcelog.next);
+
+ /* Only supports full reads right now */
+ if (*off != 0 || usize < MCE_LOG_LEN*sizeof(struct mce)) {
+- up(&mce_read_sem);
++ mutex_unlock(&mce_read_mutex);
+ kfree(cpu_tsc);
+ return -EINVAL;
+ }
+@@ -621,7 +621,7 @@ static ssize_t mce_read(struct file *filp, char __user *ubuf, size_t usize,
+ memset(&mcelog.entry[i], 0, sizeof(struct mce));
+ }
+ }
+- up(&mce_read_sem);
++ mutex_unlock(&mce_read_mutex);
+ kfree(cpu_tsc);
+ return err ? -EFAULT : buf - ubuf;
+ }
+@@ -634,8 +634,7 @@ static unsigned int mce_poll(struct file *file, poll_table *wait)
+ return 0;
+ }
+
+-static int mce_ioctl(struct inode *i, struct file *f,unsigned int cmd,
+- unsigned long arg)
++static long mce_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ {
+ int __user *p = (int __user *)arg;
+
+@@ -664,7 +663,7 @@ static const struct file_operations mce_chrdev_ops = {
+ .release = mce_release,
+ .read = mce_read,
+ .poll = mce_poll,
+- .ioctl = mce_ioctl,
++ .unlocked_ioctl = mce_ioctl,
+ };
+
+ static struct miscdevice mce_log_device = {
+@@ -745,7 +744,7 @@ static void mce_restart(void)
+
+ static struct sysdev_class mce_sysclass = {
+ .resume = mce_resume,
+- set_kset_name("machinecheck"),
++ .name = "machinecheck",
+ };
+
+ DEFINE_PER_CPU(struct sys_device, device_mce);
+@@ -855,8 +854,8 @@ static void mce_remove_device(unsigned int cpu)
+ }
+
+ /* Get notified when a cpu comes on/off. Be hotplug friendly. */
+-static int
+-mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
++static int __cpuinit mce_cpu_callback(struct notifier_block *nfb,
++ unsigned long action, void *hcpu)
+ {
+ unsigned int cpu = (unsigned long)hcpu;
+
+@@ -873,7 +872,7 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+ return NOTIFY_OK;
+ }
+
+-static struct notifier_block mce_cpu_notifier = {
++static struct notifier_block mce_cpu_notifier __cpuinitdata = {
+ .notifier_call = mce_cpu_callback,
+ };
+
+diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+index 752fb16..32671da 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
++++ b/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+@@ -65,7 +65,7 @@ static struct threshold_block threshold_defaults = {
+ };
+
+ struct threshold_bank {
+- struct kobject kobj;
++ struct kobject *kobj;
+ struct threshold_block *blocks;
+ cpumask_t cpus;
+ };
+@@ -118,6 +118,7 @@ void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c)
+ {
+ unsigned int bank, block;
+ unsigned int cpu = smp_processor_id();
++ u8 lvt_off;
+ u32 low = 0, high = 0, address = 0;
+
+ for (bank = 0; bank < NR_BANKS; ++bank) {
+@@ -153,14 +154,13 @@ void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c)
+ if (shared_bank[bank] && c->cpu_core_id)
+ break;
+ #endif
++ lvt_off = setup_APIC_eilvt_mce(THRESHOLD_APIC_VECTOR,
++ APIC_EILVT_MSG_FIX, 0);
+
-+ if (priv) {
-+ if (unlikely(elv_set_request(q, rq, gfp_mask))) {
-+ mempool_free(rq, q->rq.rq_pool);
-+ return NULL;
-+ }
-+ rq->cmd_flags |= REQ_ELVPRIV;
-+ }
+ high &= ~MASK_LVTOFF_HI;
+- high |= K8_APIC_EXT_LVT_ENTRY_THRESHOLD << 20;
++ high |= lvt_off << 20;
+ wrmsr(address, low, high);
+
+- setup_APIC_extended_lvt(K8_APIC_EXT_LVT_ENTRY_THRESHOLD,
+- THRESHOLD_APIC_VECTOR,
+- K8_APIC_EXT_INT_MSG_FIX, 0);
+-
+ threshold_defaults.address = address;
+ threshold_restart_bank(&threshold_defaults, 0, 0);
+ }
+@@ -432,10 +432,9 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu,
+ else
+ per_cpu(threshold_banks, cpu)[bank]->blocks = b;
+
+- kobject_set_name(&b->kobj, "misc%i", block);
+- b->kobj.parent = &per_cpu(threshold_banks, cpu)[bank]->kobj;
+- b->kobj.ktype = &threshold_ktype;
+- err = kobject_register(&b->kobj);
++ err = kobject_init_and_add(&b->kobj, &threshold_ktype,
++ per_cpu(threshold_banks, cpu)[bank]->kobj,
++ "misc%i", block);
+ if (err)
+ goto out_free;
+ recurse:
+@@ -451,11 +450,14 @@ recurse:
+ if (err)
+ goto out_free;
+
++ if (b)
++ kobject_uevent(&b->kobj, KOBJ_ADD);
+
-+ return rq;
+ return err;
+
+ out_free:
+ if (b) {
+- kobject_unregister(&b->kobj);
++ kobject_put(&b->kobj);
+ kfree(b);
+ }
+ return err;
+@@ -489,7 +491,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
+ goto out;
+
+ err = sysfs_create_link(&per_cpu(device_mce, cpu).kobj,
+- &b->kobj, name);
++ b->kobj, name);
+ if (err)
+ goto out;
+
+@@ -505,16 +507,15 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
+ goto out;
+ }
+
+- kobject_set_name(&b->kobj, "threshold_bank%i", bank);
+- b->kobj.parent = &per_cpu(device_mce, cpu).kobj;
++ b->kobj = kobject_create_and_add(name, &per_cpu(device_mce, cpu).kobj);
++ if (!b->kobj)
++ goto out_free;
++
+ #ifndef CONFIG_SMP
+ b->cpus = CPU_MASK_ALL;
+ #else
+ b->cpus = per_cpu(cpu_core_map, cpu);
+ #endif
+- err = kobject_register(&b->kobj);
+- if (err)
+- goto out_free;
+
+ per_cpu(threshold_banks, cpu)[bank] = b;
+
+@@ -531,7 +532,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
+ continue;
+
+ err = sysfs_create_link(&per_cpu(device_mce, i).kobj,
+- &b->kobj, name);
++ b->kobj, name);
+ if (err)
+ goto out;
+
+@@ -554,7 +555,7 @@ static __cpuinit int threshold_create_device(unsigned int cpu)
+ int err = 0;
+
+ for (bank = 0; bank < NR_BANKS; ++bank) {
+- if (!(per_cpu(bank_map, cpu) & 1 << bank))
++ if (!(per_cpu(bank_map, cpu) & (1 << bank)))
+ continue;
+ err = threshold_create_bank(cpu, bank);
+ if (err)
+@@ -581,7 +582,7 @@ static void deallocate_threshold_block(unsigned int cpu,
+ return;
+
+ list_for_each_entry_safe(pos, tmp, &head->blocks->miscj, miscj) {
+- kobject_unregister(&pos->kobj);
++ kobject_put(&pos->kobj);
+ list_del(&pos->miscj);
+ kfree(pos);
+ }
+@@ -627,7 +628,7 @@ static void threshold_remove_bank(unsigned int cpu, int bank)
+ deallocate_threshold_block(cpu, bank);
+
+ free_out:
+- kobject_unregister(&b->kobj);
++ kobject_put(b->kobj);
+ kfree(b);
+ per_cpu(threshold_banks, cpu)[bank] = NULL;
+ }
+@@ -637,14 +638,14 @@ static void threshold_remove_device(unsigned int cpu)
+ unsigned int bank;
+
+ for (bank = 0; bank < NR_BANKS; ++bank) {
+- if (!(per_cpu(bank_map, cpu) & 1 << bank))
++ if (!(per_cpu(bank_map, cpu) & (1 << bank)))
+ continue;
+ threshold_remove_bank(cpu, bank);
+ }
+ }
+
+ /* get notified when a cpu comes on/off */
+-static int threshold_cpu_callback(struct notifier_block *nfb,
++static int __cpuinit threshold_cpu_callback(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+ {
+ /* cpu was unsigned int to begin with */
+@@ -669,7 +670,7 @@ static int threshold_cpu_callback(struct notifier_block *nfb,
+ return NOTIFY_OK;
+ }
+
+-static struct notifier_block threshold_cpu_notifier = {
++static struct notifier_block threshold_cpu_notifier __cpuinitdata = {
+ .notifier_call = threshold_cpu_callback,
+ };
+
+diff --git a/arch/x86/kernel/cpu/mcheck/p4.c b/arch/x86/kernel/cpu/mcheck/p4.c
+index be4dabf..cb03345 100644
+--- a/arch/x86/kernel/cpu/mcheck/p4.c
++++ b/arch/x86/kernel/cpu/mcheck/p4.c
+@@ -57,7 +57,7 @@ static void intel_thermal_interrupt(struct pt_regs *regs)
+ /* Thermal interrupt handler for this CPU setup */
+ static void (*vendor_thermal_interrupt)(struct pt_regs *regs) = unexpected_thermal_interrupt;
+
+-fastcall void smp_thermal_interrupt(struct pt_regs *regs)
++void smp_thermal_interrupt(struct pt_regs *regs)
+ {
+ irq_enter();
+ vendor_thermal_interrupt(regs);
+@@ -141,7 +141,7 @@ static inline void intel_get_extended_msrs(struct intel_mce_extended_msrs *r)
+ rdmsr (MSR_IA32_MCG_EIP, r->eip, h);
+ }
+
+-static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
++static void intel_machine_check(struct pt_regs * regs, long error_code)
+ {
+ int recover=1;
+ u32 alow, ahigh, high, low;
+@@ -152,38 +152,41 @@ static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
+ if (mcgstl & (1<<0)) /* Recoverable ? */
+ recover=0;
+
+- printk (KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
++ printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
+ smp_processor_id(), mcgsth, mcgstl);
+
+ if (mce_num_extended_msrs > 0) {
+ struct intel_mce_extended_msrs dbg;
+ intel_get_extended_msrs(&dbg);
+- printk (KERN_DEBUG "CPU %d: EIP: %08x EFLAGS: %08x\n",
+- smp_processor_id(), dbg.eip, dbg.eflags);
+- printk (KERN_DEBUG "\teax: %08x ebx: %08x ecx: %08x edx: %08x\n",
+- dbg.eax, dbg.ebx, dbg.ecx, dbg.edx);
+- printk (KERN_DEBUG "\tesi: %08x edi: %08x ebp: %08x esp: %08x\n",
++ printk(KERN_DEBUG "CPU %d: EIP: %08x EFLAGS: %08x\n"
++ "\teax: %08x ebx: %08x ecx: %08x edx: %08x\n"
++ "\tesi: %08x edi: %08x ebp: %08x esp: %08x\n",
++ smp_processor_id(), dbg.eip, dbg.eflags,
++ dbg.eax, dbg.ebx, dbg.ecx, dbg.edx,
+ dbg.esi, dbg.edi, dbg.ebp, dbg.esp);
+ }
+
+- for (i=0; i<nr_mce_banks; i++) {
+- rdmsr (MSR_IA32_MC0_STATUS+i*4,low, high);
++ for (i = 0; i < nr_mce_banks; i++) {
++ rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
+ if (high & (1<<31)) {
++ char misc[20];
++ char addr[24];
++ misc[0] = addr[0] = '\0';
+ if (high & (1<<29))
+ recover |= 1;
+ if (high & (1<<25))
+ recover |= 2;
+- printk (KERN_EMERG "Bank %d: %08x%08x", i, high, low);
+ high &= ~(1<<31);
+ if (high & (1<<27)) {
+- rdmsr (MSR_IA32_MC0_MISC+i*4, alow, ahigh);
+- printk ("[%08x%08x]", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_MISC+i*4, alow, ahigh);
++ snprintf(misc, 20, "[%08x%08x]", ahigh, alow);
+ }
+ if (high & (1<<26)) {
+- rdmsr (MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
+- printk (" at %08x%08x", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
++ snprintf(addr, 24, " at %08x%08x", ahigh, alow);
+ }
+- printk ("\n");
++ printk(KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
++ smp_processor_id(), i, high, low, misc, addr);
+ }
+ }
+
+diff --git a/arch/x86/kernel/cpu/mcheck/p5.c b/arch/x86/kernel/cpu/mcheck/p5.c
+index 94bc43d..a18310a 100644
+--- a/arch/x86/kernel/cpu/mcheck/p5.c
++++ b/arch/x86/kernel/cpu/mcheck/p5.c
+@@ -16,7 +16,7 @@
+ #include "mce.h"
+
+ /* Machine check handler for Pentium class Intel */
+-static fastcall void pentium_machine_check(struct pt_regs * regs, long error_code)
++static void pentium_machine_check(struct pt_regs * regs, long error_code)
+ {
+ u32 loaddr, hi, lotype;
+ rdmsr(MSR_IA32_P5_MC_ADDR, loaddr, hi);
+diff --git a/arch/x86/kernel/cpu/mcheck/p6.c b/arch/x86/kernel/cpu/mcheck/p6.c
+index deeae42..7434260 100644
+--- a/arch/x86/kernel/cpu/mcheck/p6.c
++++ b/arch/x86/kernel/cpu/mcheck/p6.c
+@@ -16,7 +16,7 @@
+ #include "mce.h"
+
+ /* Machine Check Handler For PII/PIII */
+-static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
++static void intel_machine_check(struct pt_regs * regs, long error_code)
+ {
+ int recover=1;
+ u32 alow, ahigh, high, low;
+@@ -27,27 +27,30 @@ static fastcall void intel_machine_check(struct pt_regs * regs, long error_code)
+ if (mcgstl & (1<<0)) /* Recoverable ? */
+ recover=0;
+
+- printk (KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
++ printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n",
+ smp_processor_id(), mcgsth, mcgstl);
+
+- for (i=0; i<nr_mce_banks; i++) {
+- rdmsr (MSR_IA32_MC0_STATUS+i*4,low, high);
++ for (i = 0; i < nr_mce_banks; i++) {
++ rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
+ if (high & (1<<31)) {
++ char misc[20];
++ char addr[24];
++ misc[0] = addr[0] = '\0';
+ if (high & (1<<29))
+ recover |= 1;
+ if (high & (1<<25))
+ recover |= 2;
+- printk (KERN_EMERG "Bank %d: %08x%08x", i, high, low);
+ high &= ~(1<<31);
+ if (high & (1<<27)) {
+- rdmsr (MSR_IA32_MC0_MISC+i*4, alow, ahigh);
+- printk ("[%08x%08x]", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_MISC+i*4, alow, ahigh);
++ snprintf(misc, 20, "[%08x%08x]", ahigh, alow);
+ }
+ if (high & (1<<26)) {
+- rdmsr (MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
+- printk (" at %08x%08x", ahigh, alow);
++ rdmsr(MSR_IA32_MC0_ADDR+i*4, alow, ahigh);
++ snprintf(addr, 24, " at %08x%08x", ahigh, alow);
+ }
+- printk ("\n");
++ printk(KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
++ smp_processor_id(), i, high, low, misc, addr);
+ }
+ }
+
+diff --git a/arch/x86/kernel/cpu/mcheck/winchip.c b/arch/x86/kernel/cpu/mcheck/winchip.c
+index 9e424b6..3d428d5 100644
+--- a/arch/x86/kernel/cpu/mcheck/winchip.c
++++ b/arch/x86/kernel/cpu/mcheck/winchip.c
+@@ -15,7 +15,7 @@
+ #include "mce.h"
+
+ /* Machine check handler for WinChip C6 */
+-static fastcall void winchip_machine_check(struct pt_regs * regs, long error_code)
++static void winchip_machine_check(struct pt_regs * regs, long error_code)
+ {
+ printk(KERN_EMERG "CPU0: Machine Check Exception.\n");
+ add_taint(TAINT_MACHINE_CHECK);
+diff --git a/arch/x86/kernel/cpu/mtrr/amd.c b/arch/x86/kernel/cpu/mtrr/amd.c
+index 0949cdb..ee2331b 100644
+--- a/arch/x86/kernel/cpu/mtrr/amd.c
++++ b/arch/x86/kernel/cpu/mtrr/amd.c
+@@ -53,8 +53,6 @@ static void amd_set_mtrr(unsigned int reg, unsigned long base,
+ <base> The base address of the region.
+ <size> The size of the region. If this is 0 the region is disabled.
+ <type> The type of the region.
+- <do_safe> If TRUE, do the change safely. If FALSE, safety measures should
+- be done externally.
+ [RETURNS] Nothing.
+ */
+ {
+diff --git a/arch/x86/kernel/cpu/mtrr/cyrix.c b/arch/x86/kernel/cpu/mtrr/cyrix.c
+index 9964be3..8e139c7 100644
+--- a/arch/x86/kernel/cpu/mtrr/cyrix.c
++++ b/arch/x86/kernel/cpu/mtrr/cyrix.c
+@@ -4,6 +4,7 @@
+ #include <asm/msr.h>
+ #include <asm/io.h>
+ #include <asm/processor-cyrix.h>
++#include <asm/processor-flags.h>
+ #include "mtrr.h"
+
+ int arr3_protected;
+@@ -142,7 +143,7 @@ static void prepare_set(void)
+
+ /* Disable and flush caches. Note that wbinvd flushes the TLBs as
+ a side-effect */
+- cr0 = read_cr0() | 0x40000000;
++ cr0 = read_cr0() | X86_CR0_CD;
+ wbinvd();
+ write_cr0(cr0);
+ wbinvd();
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index 992f08d..103d61a 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -9,11 +9,12 @@
+ #include <asm/msr.h>
+ #include <asm/system.h>
+ #include <asm/cpufeature.h>
++#include <asm/processor-flags.h>
+ #include <asm/tlbflush.h>
+ #include "mtrr.h"
+
+ struct mtrr_state {
+- struct mtrr_var_range *var_ranges;
++ struct mtrr_var_range var_ranges[MAX_VAR_RANGES];
+ mtrr_type fixed_ranges[NUM_FIXED_RANGES];
+ unsigned char enabled;
+ unsigned char have_fixed;
+@@ -85,12 +86,6 @@ void __init get_mtrr_state(void)
+ struct mtrr_var_range *vrs;
+ unsigned lo, dummy;
+
+- if (!mtrr_state.var_ranges) {
+- mtrr_state.var_ranges = kmalloc(num_var_ranges * sizeof (struct mtrr_var_range),
+- GFP_KERNEL);
+- if (!mtrr_state.var_ranges)
+- return;
+- }
+ vrs = mtrr_state.var_ranges;
+
+ rdmsr(MTRRcap_MSR, lo, dummy);
+@@ -188,7 +183,7 @@ static inline void k8_enable_fixed_iorrs(void)
+ * \param changed pointer which indicates whether the MTRR needed to be changed
+ * \param msrwords pointer to the MSR values which the MSR should have
+ */
+-static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
++static void set_fixed_range(int msr, bool *changed, unsigned int *msrwords)
+ {
+ unsigned lo, hi;
+
+@@ -200,7 +195,7 @@ static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
+ ((msrwords[0] | msrwords[1]) & K8_MTRR_RDMEM_WRMEM_MASK))
+ k8_enable_fixed_iorrs();
+ mtrr_wrmsr(msr, msrwords[0], msrwords[1]);
+- *changed = TRUE;
++ *changed = true;
+ }
+ }
+
+@@ -260,7 +255,7 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ static int set_fixed_ranges(mtrr_type * frs)
+ {
+ unsigned long long *saved = (unsigned long long *) frs;
+- int changed = FALSE;
++ bool changed = false;
+ int block=-1, range;
+
+ while (fixed_range_blocks[++block].ranges)
+@@ -273,17 +268,17 @@ static int set_fixed_ranges(mtrr_type * frs)
+
+ /* Set the MSR pair relating to a var range. Returns TRUE if
+ changes are made */
+-static int set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
++static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ {
+ unsigned int lo, hi;
+- int changed = FALSE;
++ bool changed = false;
+
+ rdmsr(MTRRphysBase_MSR(index), lo, hi);
+ if ((vr->base_lo & 0xfffff0ffUL) != (lo & 0xfffff0ffUL)
+ || (vr->base_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+ (hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
+ mtrr_wrmsr(MTRRphysBase_MSR(index), vr->base_lo, vr->base_hi);
+- changed = TRUE;
++ changed = true;
+ }
+
+ rdmsr(MTRRphysMask_MSR(index), lo, hi);
+@@ -292,7 +287,7 @@ static int set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ || (vr->mask_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+ (hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
+ mtrr_wrmsr(MTRRphysMask_MSR(index), vr->mask_lo, vr->mask_hi);
+- changed = TRUE;
++ changed = true;
+ }
+ return changed;
+ }
+@@ -350,7 +345,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
+ spin_lock(&set_atomicity_lock);
+
+ /* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+- cr0 = read_cr0() | 0x40000000; /* set CD flag */
++ cr0 = read_cr0() | X86_CR0_CD;
+ write_cr0(cr0);
+ wbinvd();
+
+@@ -417,8 +412,6 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
+ <base> The base address of the region.
+ <size> The size of the region. If this is 0 the region is disabled.
+ <type> The type of the region.
+- <do_safe> If TRUE, do the change safely. If FALSE, safety measures should
+- be done externally.
+ [RETURNS] Nothing.
+ */
+ {
+diff --git a/arch/x86/kernel/cpu/mtrr/if.c b/arch/x86/kernel/cpu/mtrr/if.c
+index c7d8f17..91e150a 100644
+--- a/arch/x86/kernel/cpu/mtrr/if.c
++++ b/arch/x86/kernel/cpu/mtrr/if.c
+@@ -11,10 +11,6 @@
+ #include <asm/mtrr.h>
+ #include "mtrr.h"
+
+-/* RED-PEN: this is accessed without any locking */
+-extern unsigned int *usage_table;
+-
+-
+ #define FILE_FCOUNT(f) (((struct seq_file *)((f)->private_data))->private)
+
+ static const char *const mtrr_strings[MTRR_NUM_TYPES] =
+@@ -37,7 +33,7 @@ const char *mtrr_attrib_to_str(int x)
+
+ static int
+ mtrr_file_add(unsigned long base, unsigned long size,
+- unsigned int type, char increment, struct file *file, int page)
++ unsigned int type, bool increment, struct file *file, int page)
+ {
+ int reg, max;
+ unsigned int *fcount = FILE_FCOUNT(file);
+@@ -55,7 +51,7 @@ mtrr_file_add(unsigned long base, unsigned long size,
+ base >>= PAGE_SHIFT;
+ size >>= PAGE_SHIFT;
+ }
+- reg = mtrr_add_page(base, size, type, 1);
++ reg = mtrr_add_page(base, size, type, true);
+ if (reg >= 0)
+ ++fcount[reg];
+ return reg;
+@@ -141,7 +137,7 @@ mtrr_write(struct file *file, const char __user *buf, size_t len, loff_t * ppos)
+ size >>= PAGE_SHIFT;
+ err =
+ mtrr_add_page((unsigned long) base, (unsigned long) size, i,
+- 1);
++ true);
+ if (err < 0)
+ return err;
+ return len;
+@@ -217,7 +213,7 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ err =
+- mtrr_file_add(sentry.base, sentry.size, sentry.type, 1,
++ mtrr_file_add(sentry.base, sentry.size, sentry.type, true,
+ file, 0);
+ break;
+ case MTRRIOC_SET_ENTRY:
+@@ -226,7 +222,7 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
+ #endif
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+- err = mtrr_add(sentry.base, sentry.size, sentry.type, 0);
++ err = mtrr_add(sentry.base, sentry.size, sentry.type, false);
+ break;
+ case MTRRIOC_DEL_ENTRY:
+ #ifdef CONFIG_COMPAT
+@@ -270,7 +266,7 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ err =
+- mtrr_file_add(sentry.base, sentry.size, sentry.type, 1,
++ mtrr_file_add(sentry.base, sentry.size, sentry.type, true,
+ file, 1);
+ break;
+ case MTRRIOC_SET_PAGE_ENTRY:
+@@ -279,7 +275,8 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
+ #endif
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+- err = mtrr_add_page(sentry.base, sentry.size, sentry.type, 0);
++ err =
++ mtrr_add_page(sentry.base, sentry.size, sentry.type, false);
+ break;
+ case MTRRIOC_DEL_PAGE_ENTRY:
+ #ifdef CONFIG_COMPAT
+@@ -396,7 +393,7 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
+ for (i = 0; i < max; i++) {
+ mtrr_if->get(i, &base, &size, &type);
+ if (size == 0)
+- usage_table[i] = 0;
++ mtrr_usage_table[i] = 0;
+ else {
+ if (size < (0x100000 >> PAGE_SHIFT)) {
+ /* less than 1MB */
+@@ -410,7 +407,7 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
+ len += seq_printf(seq,
+ "reg%02i: base=0x%05lx000 (%4luMB), size=%4lu%cB: %s, count=%d\n",
+ i, base, base >> (20 - PAGE_SHIFT), size, factor,
+- mtrr_attrib_to_str(type), usage_table[i]);
++ mtrr_attrib_to_str(type), mtrr_usage_table[i]);
+ }
+ }
+ return 0;
+diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
+index 3b20613..7159195 100644
+--- a/arch/x86/kernel/cpu/mtrr/main.c
++++ b/arch/x86/kernel/cpu/mtrr/main.c
+@@ -38,8 +38,8 @@
+ #include <linux/cpu.h>
+ #include <linux/mutex.h>
+
++#include <asm/e820.h>
+ #include <asm/mtrr.h>
+-
+ #include <asm/uaccess.h>
+ #include <asm/processor.h>
+ #include <asm/msr.h>
+@@ -47,7 +47,7 @@
+
+ u32 num_var_ranges = 0;
+
+-unsigned int *usage_table;
++unsigned int mtrr_usage_table[MAX_VAR_RANGES];
+ static DEFINE_MUTEX(mtrr_mutex);
+
+ u64 size_or_mask, size_and_mask;
+@@ -121,13 +121,8 @@ static void __init init_table(void)
+ int i, max;
+
+ max = num_var_ranges;
+- if ((usage_table = kmalloc(max * sizeof *usage_table, GFP_KERNEL))
+- == NULL) {
+- printk(KERN_ERR "mtrr: could not allocate\n");
+- return;
+- }
+ for (i = 0; i < max; i++)
+- usage_table[i] = 1;
++ mtrr_usage_table[i] = 1;
+ }
+
+ struct set_mtrr_data {
+@@ -311,7 +306,7 @@ static void set_mtrr(unsigned int reg, unsigned long base,
+ */
+
+ int mtrr_add_page(unsigned long base, unsigned long size,
+- unsigned int type, char increment)
++ unsigned int type, bool increment)
+ {
+ int i, replace, error;
+ mtrr_type ltype;
+@@ -349,7 +344,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
+ replace = -1;
+
+ /* No CPU hotplug when we change MTRR entries */
+- lock_cpu_hotplug();
++ get_online_cpus();
+ /* Search for existing MTRR */
+ mutex_lock(&mtrr_mutex);
+ for (i = 0; i < num_var_ranges; ++i) {
+@@ -383,7 +378,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
+ goto out;
+ }
+ if (increment)
+- ++usage_table[i];
++ ++mtrr_usage_table[i];
+ error = i;
+ goto out;
+ }
+@@ -391,13 +386,15 @@ int mtrr_add_page(unsigned long base, unsigned long size,
+ i = mtrr_if->get_free_region(base, size, replace);
+ if (i >= 0) {
+ set_mtrr(i, base, size, type);
+- if (likely(replace < 0))
+- usage_table[i] = 1;
+- else {
+- usage_table[i] = usage_table[replace] + !!increment;
++ if (likely(replace < 0)) {
++ mtrr_usage_table[i] = 1;
++ } else {
++ mtrr_usage_table[i] = mtrr_usage_table[replace];
++ if (increment)
++ mtrr_usage_table[i]++;
+ if (unlikely(replace != i)) {
+ set_mtrr(replace, 0, 0, 0);
+- usage_table[replace] = 0;
++ mtrr_usage_table[replace] = 0;
+ }
+ }
+ } else
+@@ -405,7 +402,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
+ error = i;
+ out:
+ mutex_unlock(&mtrr_mutex);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+ return error;
+ }
+
+@@ -460,7 +457,7 @@ static int mtrr_check(unsigned long base, unsigned long size)
+
+ int
+ mtrr_add(unsigned long base, unsigned long size, unsigned int type,
+- char increment)
++ bool increment)
+ {
+ if (mtrr_check(base, size))
+ return -EINVAL;
+@@ -495,7 +492,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
+
+ max = num_var_ranges;
+ /* No CPU hotplug when we change MTRR entries */
+- lock_cpu_hotplug();
++ get_online_cpus();
+ mutex_lock(&mtrr_mutex);
+ if (reg < 0) {
+ /* Search for existing MTRR */
+@@ -527,16 +524,16 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
+ printk(KERN_WARNING "mtrr: MTRR %d not used\n", reg);
+ goto out;
+ }
+- if (usage_table[reg] < 1) {
++ if (mtrr_usage_table[reg] < 1) {
+ printk(KERN_WARNING "mtrr: reg: %d has count=0\n", reg);
+ goto out;
+ }
+- if (--usage_table[reg] < 1)
++ if (--mtrr_usage_table[reg] < 1)
+ set_mtrr(reg, 0, 0, 0);
+ error = reg;
+ out:
+ mutex_unlock(&mtrr_mutex);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+ return error;
+ }
+ /**
+@@ -591,16 +588,11 @@ struct mtrr_value {
+ unsigned long lsize;
+ };
+
+-static struct mtrr_value * mtrr_state;
++static struct mtrr_value mtrr_state[MAX_VAR_RANGES];
+
+ static int mtrr_save(struct sys_device * sysdev, pm_message_t state)
+ {
+ int i;
+- int size = num_var_ranges * sizeof(struct mtrr_value);
+-
+- mtrr_state = kzalloc(size,GFP_ATOMIC);
+- if (!mtrr_state)
+- return -ENOMEM;
+
+ for (i = 0; i < num_var_ranges; i++) {
+ mtrr_if->get(i,
+@@ -622,7 +614,6 @@ static int mtrr_restore(struct sys_device * sysdev)
+ mtrr_state[i].lsize,
+ mtrr_state[i].ltype);
+ }
+- kfree(mtrr_state);
+ return 0;
+ }
+
+@@ -633,6 +624,112 @@ static struct sysdev_driver mtrr_sysdev_driver = {
+ .resume = mtrr_restore,
+ };
+
++static int disable_mtrr_trim;
++
++static int __init disable_mtrr_trim_setup(char *str)
++{
++ disable_mtrr_trim = 1;
++ return 0;
+}
++early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
+
+/*
-+ * ioc_batching returns true if the ioc is a valid batching request and
-+ * should be given priority access to a request.
++ * Newer AMD K8s and later CPUs have a special magic MSR way to force WB
++ * for memory >4GB. Check for that here.
++ * Note this won't check if the MTRRs < 4GB where the magic bit doesn't
++ * apply to are wrong, but so far we don't know of any such case in the wild.
+ */
-+static inline int ioc_batching(struct request_queue *q, struct io_context *ioc)
++#define Tom2Enabled (1U << 21)
++#define Tom2ForceMemTypeWB (1U << 22)
++
++static __init int amd_special_default_mtrr(void)
+{
-+ if (!ioc)
-+ return 0;
++ u32 l, h;
+
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
++ return 0;
++ if (boot_cpu_data.x86 < 0xf || boot_cpu_data.x86 > 0x11)
++ return 0;
++ /* In case some hypervisor doesn't pass SYSCFG through */
++ if (rdmsr_safe(MSR_K8_SYSCFG, &l, &h) < 0)
++ return 0;
+ /*
-+ * Make sure the process is able to allocate at least 1 request
-+ * even if the batch times out, otherwise we could theoretically
-+ * lose wakeups.
++ * Memory between 4GB and top of mem is forced WB by this magic bit.
++ * Reserved before K8RevF, but should be zero there.
+ */
-+ return ioc->nr_batch_requests == q->nr_batching ||
-+ (ioc->nr_batch_requests > 0
-+ && time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME));
++ if ((l & (Tom2Enabled | Tom2ForceMemTypeWB)) ==
++ (Tom2Enabled | Tom2ForceMemTypeWB))
++ return 1;
++ return 0;
+}
+
-+/*
-+ * ioc_set_batching sets ioc to be a new "batcher" if it is not one. This
-+ * will cause the process to be a "batcher" on all queues in the system. This
-+ * is the behaviour we want though - once it gets a wakeup it should be given
-+ * a nice run.
++/**
++ * mtrr_trim_uncached_memory - trim RAM not covered by MTRRs
++ *
++ * Some buggy BIOSes don't setup the MTRRs properly for systems with certain
++ * memory configurations. This routine checks that the highest MTRR matches
++ * the end of memory, to make sure the MTRRs having a write back type cover
++ * all of the memory the kernel is intending to use. If not, it'll trim any
++ * memory off the end by adjusting end_pfn, removing it from the kernel's
++ * allocation pools, warning the user with an obnoxious message.
+ */
-+static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
++int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
+{
-+ if (!ioc || ioc_batching(q, ioc))
-+ return;
++ unsigned long i, base, size, highest_addr = 0, def, dummy;
++ mtrr_type type;
++ u64 trim_start, trim_size;
+
-+ ioc->nr_batch_requests = q->nr_batching;
-+ ioc->last_waited = jiffies;
-+}
++ /*
++ * Make sure we only trim uncachable memory on machines that
++ * support the Intel MTRR architecture:
++ */
++ if (!is_cpu(INTEL) || disable_mtrr_trim)
++ return 0;
++ rdmsr(MTRRdefType_MSR, def, dummy);
++ def &= 0xff;
++ if (def != MTRR_TYPE_UNCACHABLE)
++ return 0;
+
-+static void __freed_request(struct request_queue *q, int rw)
-+{
-+ struct request_list *rl = &q->rq;
++ if (amd_special_default_mtrr())
++ return 0;
+
-+ if (rl->count[rw] < queue_congestion_off_threshold(q))
-+ blk_clear_queue_congested(q, rw);
++ /* Find highest cached pfn */
++ for (i = 0; i < num_var_ranges; i++) {
++ mtrr_if->get(i, &base, &size, &type);
++ if (type != MTRR_TYPE_WRBACK)
++ continue;
++ base <<= PAGE_SHIFT;
++ size <<= PAGE_SHIFT;
++ if (highest_addr < base + size)
++ highest_addr = base + size;
++ }
+
-+ if (rl->count[rw] + 1 <= q->nr_requests) {
-+ if (waitqueue_active(&rl->wait[rw]))
-+ wake_up(&rl->wait[rw]);
++ /* kvm/qemu doesn't have mtrr set right, don't trim them all */
++ if (!highest_addr) {
++ printk(KERN_WARNING "WARNING: strange, CPU MTRRs all blank?\n");
++ WARN_ON(1);
++ return 0;
++ }
+
-+ blk_clear_queue_full(q, rw);
++ if ((highest_addr >> PAGE_SHIFT) < end_pfn) {
++ printk(KERN_WARNING "WARNING: BIOS bug: CPU MTRRs don't cover"
++ " all of memory, losing %LdMB of RAM.\n",
++ (((u64)end_pfn << PAGE_SHIFT) - highest_addr) >> 20);
++
++ WARN_ON(1);
++
++ printk(KERN_INFO "update e820 for mtrr\n");
++ trim_start = highest_addr;
++ trim_size = end_pfn;
++ trim_size <<= PAGE_SHIFT;
++ trim_size -= trim_start;
++ add_memory_region(trim_start, trim_size, E820_RESERVED);
++ update_e820();
++ return 1;
+ }
++
++ return 0;
+}
+
+ /**
+ * mtrr_bp_init - initialize mtrrs on the boot CPU
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
+index 289dfe6..fb74a2c 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
+@@ -2,10 +2,8 @@
+ * local mtrr defines.
+ */
+
+-#ifndef TRUE
+-#define TRUE 1
+-#define FALSE 0
+-#endif
++#include <linux/types.h>
++#include <linux/stddef.h>
+
+ #define MTRRcap_MSR 0x0fe
+ #define MTRRdefType_MSR 0x2ff
+@@ -14,6 +12,7 @@
+ #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
+
+ #define NUM_FIXED_RANGES 88
++#define MAX_VAR_RANGES 256
+ #define MTRRfix64K_00000_MSR 0x250
+ #define MTRRfix16K_80000_MSR 0x258
+ #define MTRRfix16K_A0000_MSR 0x259
+@@ -34,6 +33,8 @@
+ an 8 bit field: */
+ typedef u8 mtrr_type;
+
++extern unsigned int mtrr_usage_table[MAX_VAR_RANGES];
++
+ struct mtrr_ops {
+ u32 vendor;
+ u32 use_intel_if;
+diff --git a/arch/x86/kernel/cpu/mtrr/state.c b/arch/x86/kernel/cpu/mtrr/state.c
+index 49e20c2..9f8ba92 100644
+--- a/arch/x86/kernel/cpu/mtrr/state.c
++++ b/arch/x86/kernel/cpu/mtrr/state.c
+@@ -4,6 +4,7 @@
+ #include <asm/mtrr.h>
+ #include <asm/msr.h>
+ #include <asm/processor-cyrix.h>
++#include <asm/processor-flags.h>
+ #include "mtrr.h"
+
+
+@@ -25,7 +26,7 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt)
+
+ /* Disable and flush caches. Note that wbinvd flushes the TLBs as
+ a side-effect */
+- cr0 = read_cr0() | 0x40000000;
++ cr0 = read_cr0() | X86_CR0_CD;
+ wbinvd();
+ write_cr0(cr0);
+ wbinvd();
+diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c
+index c02541e..9b83832 100644
+--- a/arch/x86/kernel/cpu/perfctr-watchdog.c
++++ b/arch/x86/kernel/cpu/perfctr-watchdog.c
+@@ -167,7 +167,6 @@ void release_evntsel_nmi(unsigned int msr)
+ clear_bit(counter, evntsel_nmi_owner);
+ }
+
+-EXPORT_SYMBOL(avail_to_resrv_perfctr_nmi);
+ EXPORT_SYMBOL(avail_to_resrv_perfctr_nmi_bit);
+ EXPORT_SYMBOL(reserve_perfctr_nmi);
+ EXPORT_SYMBOL(release_perfctr_nmi);
+diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
+index 3900e46..0282132 100644
+--- a/arch/x86/kernel/cpu/proc.c
++++ b/arch/x86/kernel/cpu/proc.c
+@@ -188,7 +188,7 @@ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+ static void c_stop(struct seq_file *m, void *v)
+ {
+ }
+-struct seq_operations cpuinfo_op = {
++const struct seq_operations cpuinfo_op = {
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+diff --git a/arch/x86/kernel/cpuid.c b/arch/x86/kernel/cpuid.c
+index 05c9936..dec66e4 100644
+--- a/arch/x86/kernel/cpuid.c
++++ b/arch/x86/kernel/cpuid.c
+@@ -50,7 +50,7 @@ struct cpuid_command {
+
+ static void cpuid_smp_cpuid(void *cmd_block)
+ {
+- struct cpuid_command *cmd = (struct cpuid_command *)cmd_block;
++ struct cpuid_command *cmd = cmd_block;
+
+ cpuid(cmd->reg, &cmd->data[0], &cmd->data[1], &cmd->data[2],
+ &cmd->data[3]);
+@@ -157,15 +157,15 @@ static int __cpuinit cpuid_class_cpu_callback(struct notifier_block *nfb,
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+- case CPU_UP_PREPARE_FROZEN:
+ err = cpuid_device_create(cpu);
+ break;
+ case CPU_UP_CANCELED:
+- case CPU_UP_CANCELED_FROZEN:
+ case CPU_DEAD:
+- case CPU_DEAD_FROZEN:
+ cpuid_device_destroy(cpu);
+ break;
++ case CPU_UP_CANCELED_FROZEN:
++ destroy_suspended_device(cpuid_class, MKDEV(CPUID_MAJOR, cpu));
++ break;
+ }
+ return err ? NOTIFY_BAD : NOTIFY_OK;
+ }
+diff --git a/arch/x86/kernel/doublefault_32.c b/arch/x86/kernel/doublefault_32.c
+index 40978af..a47798b 100644
+--- a/arch/x86/kernel/doublefault_32.c
++++ b/arch/x86/kernel/doublefault_32.c
+@@ -17,7 +17,7 @@ static unsigned long doublefault_stack[DOUBLEFAULT_STACKSIZE];
+
+ static void doublefault_fn(void)
+ {
+- struct Xgt_desc_struct gdt_desc = {0, 0};
++ struct desc_ptr gdt_desc = {0, 0};
+ unsigned long gdt, tss;
+
+ store_gdt(&gdt_desc);
+@@ -33,14 +33,15 @@ static void doublefault_fn(void)
+ printk(KERN_EMERG "double fault, tss at %08lx\n", tss);
+
+ if (ptr_ok(tss)) {
+- struct i386_hw_tss *t = (struct i386_hw_tss *)tss;
++ struct x86_hw_tss *t = (struct x86_hw_tss *)tss;
+
+- printk(KERN_EMERG "eip = %08lx, esp = %08lx\n", t->eip, t->esp);
++ printk(KERN_EMERG "eip = %08lx, esp = %08lx\n",
++ t->ip, t->sp);
+
+ printk(KERN_EMERG "eax = %08lx, ebx = %08lx, ecx = %08lx, edx = %08lx\n",
+- t->eax, t->ebx, t->ecx, t->edx);
++ t->ax, t->bx, t->cx, t->dx);
+ printk(KERN_EMERG "esi = %08lx, edi = %08lx\n",
+- t->esi, t->edi);
++ t->si, t->di);
+ }
+ }
+
+@@ -50,15 +51,15 @@ static void doublefault_fn(void)
+
+ struct tss_struct doublefault_tss __cacheline_aligned = {
+ .x86_tss = {
+- .esp0 = STACK_START,
++ .sp0 = STACK_START,
+ .ss0 = __KERNEL_DS,
+ .ldt = 0,
+ .io_bitmap_base = INVALID_IO_BITMAP_OFFSET,
+
+- .eip = (unsigned long) doublefault_fn,
++ .ip = (unsigned long) doublefault_fn,
+ /* 0x2 bit is always set */
+- .eflags = X86_EFLAGS_SF | 0x2,
+- .esp = STACK_START,
++ .flags = X86_EFLAGS_SF | 0x2,
++ .sp = STACK_START,
+ .es = __USER_DS,
+ .cs = __KERNEL_CS,
+ .ss = __KERNEL_DS,
+diff --git a/arch/x86/kernel/ds.c b/arch/x86/kernel/ds.c
+new file mode 100644
+index 0000000..1c5ca4d
+--- /dev/null
++++ b/arch/x86/kernel/ds.c
+@@ -0,0 +1,464 @@
++/*
++ * Debug Store support
++ *
++ * This provides a low-level interface to the hardware's Debug Store
++ * feature that is used for last branch recording (LBR) and
++ * precise-event based sampling (PEBS).
++ *
++ * Different architectures use a different DS layout/pointer size.
++ * The below functions therefore work on a void*.
++ *
++ *
++ * Since there is no user for PEBS, yet, only LBR (or branch
++ * trace store, BTS) is supported.
++ *
++ *
++ * Copyright (C) 2007 Intel Corporation.
++ * Markus Metzger <markus.t.metzger at intel.com>, Dec 2007
++ */
++
++#include <asm/ds.h>
++
++#include <linux/errno.h>
++#include <linux/string.h>
++#include <linux/slab.h>
++
+
+/*
-+ * A request has just been released. Account for it, update the full and
-+ * congestion status, wake up any waiters. Called under q->queue_lock.
++ * Debug Store (DS) save area configuration (see Intel64 and IA32
++ * Architectures Software Developer's Manual, section 18.5)
++ *
++ * The DS configuration consists of the following fields; different
++ * architectures vary in the size of those fields.
++ * - double-word aligned base linear address of the BTS buffer
++ * - write pointer into the BTS buffer
++ * - end linear address of the BTS buffer (one byte beyond the end of
++ * the buffer)
++ * - interrupt pointer into BTS buffer
++ * (interrupt occurs when write pointer passes interrupt pointer)
++ * - double-word aligned base linear address of the PEBS buffer
++ * - write pointer into the PEBS buffer
++ * - end linear address of the PEBS buffer (one byte beyond the end of
++ * the buffer)
++ * - interrupt pointer into PEBS buffer
++ * (interrupt occurs when write pointer passes interrupt pointer)
++ * - value to which counter is reset following counter overflow
++ *
++ * On later architectures, the last branch recording hardware uses
++ * 64bit pointers even in 32bit mode.
++ *
++ *
++ * Branch Trace Store (BTS) records store information about control
++ * flow changes. They at least provide the following information:
++ * - source linear address
++ * - destination linear address
++ *
++ * Netburst supported a predicated bit that had been dropped in later
++ * architectures. We do not support it.
++ *
++ *
++ * In order to abstract from the actual DS and BTS layout, we describe
++ * the access to the relevant fields.
++ * Thanks to Andi Kleen for proposing this design.
++ *
++ * The implementation, however, is not as general as it might seem. In
++ * order to stay somewhat simple and efficient, we assume an
++ * underlying unsigned type (mostly a pointer type) and we expect the
++ * field to be at least as big as that type.
+ */
-+static void freed_request(struct request_queue *q, int rw, int priv)
-+{
-+ struct request_list *rl = &q->rq;
+
-+ rl->count[rw]--;
-+ if (priv)
-+ rl->elvpriv--;
++/*
++ * A special from_ip address to indicate that the BTS record is an
++ * info record that needs to be interpreted or skipped.
++ */
++#define BTS_ESCAPE_ADDRESS (-1)
+
-+ __freed_request(q, rw);
++/*
++ * A field access descriptor
++ */
++struct access_desc {
++ unsigned char offset;
++ unsigned char size;
++};
+
-+ if (unlikely(rl->starved[rw ^ 1]))
-+ __freed_request(q, rw ^ 1);
-+}
++/*
++ * The configuration for a particular DS/BTS hardware implementation.
++ */
++struct ds_configuration {
++ /* the DS configuration */
++ unsigned char sizeof_ds;
++ struct access_desc bts_buffer_base;
++ struct access_desc bts_index;
++ struct access_desc bts_absolute_maximum;
++ struct access_desc bts_interrupt_threshold;
++ /* the BTS configuration */
++ unsigned char sizeof_bts;
++ struct access_desc from_ip;
++ struct access_desc to_ip;
++ /* BTS variants used to store additional information like
++ timestamps */
++ struct access_desc info_type;
++ struct access_desc info_data;
++ unsigned long debugctl_mask;
++};
+
-+#define blkdev_free_rq(list) list_entry((list)->next, struct request, queuelist)
+/*
-+ * Get a free request, queue_lock must be held.
-+ * Returns NULL on failure, with queue_lock held.
-+ * Returns !NULL on success, with queue_lock *not held*.
++ * The global configuration used by the below accessor functions
+ */
-+static struct request *get_request(struct request_queue *q, int rw_flags,
-+ struct bio *bio, gfp_t gfp_mask)
++static struct ds_configuration ds_cfg;
++
++/*
++ * Accessor functions for some DS and BTS fields using the above
++ * global ptrace_bts_cfg.
++ */
++static inline unsigned long get_bts_buffer_base(char *base)
+{
-+ struct request *rq = NULL;
-+ struct request_list *rl = &q->rq;
-+ struct io_context *ioc = NULL;
-+ const int rw = rw_flags & 0x01;
-+ int may_queue, priv;
++ return *(unsigned long *)(base + ds_cfg.bts_buffer_base.offset);
++}
++static inline void set_bts_buffer_base(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.bts_buffer_base.offset)) = value;
++}
++static inline unsigned long get_bts_index(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.bts_index.offset);
++}
++static inline void set_bts_index(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.bts_index.offset)) = value;
++}
++static inline unsigned long get_bts_absolute_maximum(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.bts_absolute_maximum.offset);
++}
++static inline void set_bts_absolute_maximum(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.bts_absolute_maximum.offset)) = value;
++}
++static inline unsigned long get_bts_interrupt_threshold(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.bts_interrupt_threshold.offset);
++}
++static inline void set_bts_interrupt_threshold(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.bts_interrupt_threshold.offset)) = value;
++}
++static inline unsigned long get_from_ip(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.from_ip.offset);
++}
++static inline void set_from_ip(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.from_ip.offset)) = value;
++}
++static inline unsigned long get_to_ip(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.to_ip.offset);
++}
++static inline void set_to_ip(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.to_ip.offset)) = value;
++}
++static inline unsigned char get_info_type(char *base)
++{
++ return *(unsigned char *)(base + ds_cfg.info_type.offset);
++}
++static inline void set_info_type(char *base, unsigned char value)
++{
++ (*(unsigned char *)(base + ds_cfg.info_type.offset)) = value;
++}
++static inline unsigned long get_info_data(char *base)
++{
++ return *(unsigned long *)(base + ds_cfg.info_data.offset);
++}
++static inline void set_info_data(char *base, unsigned long value)
++{
++ (*(unsigned long *)(base + ds_cfg.info_data.offset)) = value;
++}
+
-+ may_queue = elv_may_queue(q, rw_flags);
-+ if (may_queue == ELV_MQUEUE_NO)
-+ goto rq_starved;
+
-+ if (rl->count[rw]+1 >= queue_congestion_on_threshold(q)) {
-+ if (rl->count[rw]+1 >= q->nr_requests) {
-+ ioc = current_io_context(GFP_ATOMIC, q->node);
-+ /*
-+ * The queue will fill after this allocation, so set
-+ * it as full, and mark this process as "batching".
-+ * This process will be allowed to complete a batch of
-+ * requests, others will be blocked.
-+ */
-+ if (!blk_queue_full(q, rw)) {
-+ ioc_set_batching(q, ioc);
-+ blk_set_queue_full(q, rw);
-+ } else {
-+ if (may_queue != ELV_MQUEUE_MUST
-+ && !ioc_batching(q, ioc)) {
-+ /*
-+ * The queue is full and the allocating
-+ * process is not a "batcher", and not
-+ * exempted by the IO scheduler
-+ */
-+ goto out;
-+ }
-+ }
-+ }
-+ blk_set_queue_congested(q, rw);
-+ }
++int ds_allocate(void **dsp, size_t bts_size_in_bytes)
++{
++ size_t bts_size_in_records;
++ unsigned long bts;
++ void *ds;
+
-+ /*
-+ * Only allow batching queuers to allocate up to 50% over the defined
-+ * limit of requests, otherwise we could have thousands of requests
-+ * allocated with any setting of ->nr_requests
-+ */
-+ if (rl->count[rw] >= (3 * q->nr_requests / 2))
-+ goto out;
++ if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts)
++ return -EOPNOTSUPP;
+
-+ rl->count[rw]++;
-+ rl->starved[rw] = 0;
++	if (bts_size_in_bytes < ds_cfg.sizeof_bts)
++		return -EINVAL;
+
-+ priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
-+ if (priv)
-+ rl->elvpriv++;
++ bts_size_in_records =
++ bts_size_in_bytes / ds_cfg.sizeof_bts;
++ bts_size_in_bytes =
++ bts_size_in_records * ds_cfg.sizeof_bts;
+
-+ spin_unlock_irq(q->queue_lock);
++ if (bts_size_in_bytes <= 0)
++ return -EINVAL;
+
-+ rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
-+ if (unlikely(!rq)) {
-+ /*
-+ * Allocation failed presumably due to memory. Undo anything
-+ * we might have messed up.
-+ *
-+ * Allocating task should really be put onto the front of the
-+ * wait queue, but this is pretty rare.
-+ */
-+ spin_lock_irq(q->queue_lock);
-+ freed_request(q, rw, priv);
++ bts = (unsigned long)kzalloc(bts_size_in_bytes, GFP_KERNEL);
+
-+ /*
-+ * in the very unlikely event that allocation failed and no
-+ * requests for this direction was pending, mark us starved
-+ * so that freeing of a request in the other direction will
-+ * notice us. another possible fix would be to split the
-+ * rq mempool into READ and WRITE
-+ */
-+rq_starved:
-+ if (unlikely(rl->count[rw] == 0))
-+ rl->starved[rw] = 1;
++ if (!bts)
++ return -ENOMEM;
+
-+ goto out;
++ ds = kzalloc(ds_cfg.sizeof_ds, GFP_KERNEL);
++
++ if (!ds) {
++ kfree((void *)bts);
++ return -ENOMEM;
+ }
+
-+ /*
-+ * ioc may be NULL here, and ioc_batching will be false. That's
-+ * OK, if the queue is under the request limit then requests need
-+ * not count toward the nr_batch_requests limit. There will always
-+ * be some limit enforced by BLK_BATCH_TIME.
-+ */
-+ if (ioc_batching(q, ioc))
-+ ioc->nr_batch_requests--;
-+
-+ rq_init(q, rq);
++ set_bts_buffer_base(ds, bts);
++ set_bts_index(ds, bts);
++ set_bts_absolute_maximum(ds, bts + bts_size_in_bytes);
++ set_bts_interrupt_threshold(ds, bts + bts_size_in_bytes + 1);
+
-+ blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
-+out:
-+ return rq;
++ *dsp = ds;
++ return 0;
+}
+
-+/*
-+ * No available requests for this queue, unplug the device and wait for some
-+ * requests to become available.
-+ *
-+ * Called with q->queue_lock held, and returns with it unlocked.
-+ */
-+static struct request *get_request_wait(struct request_queue *q, int rw_flags,
-+ struct bio *bio)
++int ds_free(void **dsp)
+{
-+ const int rw = rw_flags & 0x01;
-+ struct request *rq;
-+
-+ rq = get_request(q, rw_flags, bio, GFP_NOIO);
-+ while (!rq) {
-+ DEFINE_WAIT(wait);
-+ struct request_list *rl = &q->rq;
++ if (*dsp)
++ kfree((void *)get_bts_buffer_base(*dsp));
++ kfree(*dsp);
++ *dsp = 0;
+
-+ prepare_to_wait_exclusive(&rl->wait[rw], &wait,
-+ TASK_UNINTERRUPTIBLE);
++ return 0;
++}
+
-+ rq = get_request(q, rw_flags, bio, GFP_NOIO);
++int ds_get_bts_size(void *ds)
++{
++ int size_in_bytes;
+
-+ if (!rq) {
-+ struct io_context *ioc;
++ if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts)
++ return -EOPNOTSUPP;
+
-+ blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
++ if (!ds)
++ return 0;
+
-+ __generic_unplug_device(q);
-+ spin_unlock_irq(q->queue_lock);
-+ io_schedule();
++ size_in_bytes =
++ get_bts_absolute_maximum(ds) -
++ get_bts_buffer_base(ds);
++ return size_in_bytes;
++}
+
-+ /*
-+ * After sleeping, we become a "batching" process and
-+ * will be able to allocate at least one request, and
-+ * up to a big batch of them for a small period time.
-+ * See ioc_batching, ioc_set_batching
-+ */
-+ ioc = current_io_context(GFP_NOIO, q->node);
-+ ioc_set_batching(q, ioc);
++int ds_get_bts_end(void *ds)
++{
++ int size_in_bytes = ds_get_bts_size(ds);
+
-+ spin_lock_irq(q->queue_lock);
-+ }
-+ finish_wait(&rl->wait[rw], &wait);
-+ }
++ if (size_in_bytes <= 0)
++ return size_in_bytes;
+
-+ return rq;
++ return size_in_bytes / ds_cfg.sizeof_bts;
+}
+
-+struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
++int ds_get_bts_index(void *ds)
+{
-+ struct request *rq;
++ int index_offset_in_bytes;
+
-+ BUG_ON(rw != READ && rw != WRITE);
++ if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts)
++ return -EOPNOTSUPP;
+
-+ spin_lock_irq(q->queue_lock);
-+ if (gfp_mask & __GFP_WAIT) {
-+ rq = get_request_wait(q, rw, NULL);
-+ } else {
-+ rq = get_request(q, rw, NULL, gfp_mask);
-+ if (!rq)
-+ spin_unlock_irq(q->queue_lock);
-+ }
-+ /* q->queue_lock is unlocked at this point */
++ index_offset_in_bytes =
++ get_bts_index(ds) -
++ get_bts_buffer_base(ds);
+
-+ return rq;
++ return index_offset_in_bytes / ds_cfg.sizeof_bts;
+}
-+EXPORT_SYMBOL(blk_get_request);
+
-+/**
-+ * blk_start_queueing - initiate dispatch of requests to device
-+ * @q: request queue to kick into gear
-+ *
-+ * This is basically a helper to remove the need to know whether a queue
-+ * is plugged or not if someone just wants to initiate dispatch of requests
-+ * for this queue.
-+ *
-+ * The queue lock must be held with interrupts disabled.
-+ */
-+void blk_start_queueing(struct request_queue *q)
++int ds_set_overflow(void *ds, int method)
+{
-+ if (!blk_queue_plugged(q))
-+ q->request_fn(q);
-+ else
-+ __generic_unplug_device(q);
++ switch (method) {
++ case DS_O_SIGNAL:
++ return -EOPNOTSUPP;
++ case DS_O_WRAP:
++ return 0;
++ default:
++ return -EINVAL;
++ }
+}
-+EXPORT_SYMBOL(blk_start_queueing);
+
-+/**
-+ * blk_requeue_request - put a request back on queue
-+ * @q: request queue where request should be inserted
-+ * @rq: request to be inserted
-+ *
-+ * Description:
-+ * Drivers often keep queueing requests until the hardware cannot accept
-+ * more, when that condition happens we need to put the request back
-+ * on the queue. Must be called with queue lock held.
-+ */
-+void blk_requeue_request(struct request_queue *q, struct request *rq)
++int ds_get_overflow(void *ds)
+{
-+ blk_add_trace_rq(q, rq, BLK_TA_REQUEUE);
++ return DS_O_WRAP;
++}
+
-+ if (blk_rq_tagged(rq))
-+ blk_queue_end_tag(q, rq);
++int ds_clear(void *ds)
++{
++ int bts_size = ds_get_bts_size(ds);
++ unsigned long bts_base;
+
-+ elv_requeue_request(q, rq);
++ if (bts_size <= 0)
++ return bts_size;
++
++ bts_base = get_bts_buffer_base(ds);
++ memset((void *)bts_base, 0, bts_size);
++
++ set_bts_index(ds, bts_base);
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_requeue_request);
++int ds_read_bts(void *ds, int index, struct bts_struct *out)
++{
++ void *bts;
+
-+/**
-+ * blk_insert_request - insert a special request in to a request queue
-+ * @q: request queue where request should be inserted
-+ * @rq: request to be inserted
-+ * @at_head: insert request at head or tail of queue
-+ * @data: private data
-+ *
-+ * Description:
-+ * Many block devices need to execute commands asynchronously, so they don't
-+ * block the whole kernel from preemption during request execution. This is
-+ * accomplished normally by inserting artificial requests tagged as
-+ * REQ_SPECIAL in to the corresponding request queue, and letting them be
-+ * scheduled for actual execution by the request queue.
-+ *
-+ * We have the option of inserting the head or the tail of the queue.
-+ * Typically we use the tail for new ioctls and so forth. We use the head
-+ * of the queue for things like a QUEUE_FULL message from a device, or a
-+ * host that is unable to accept a particular command.
-+ */
-+void blk_insert_request(struct request_queue *q, struct request *rq,
-+ int at_head, void *data)
++ if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts)
++ return -EOPNOTSUPP;
++
++ if (index < 0)
++ return -EINVAL;
++
++	if (index >= ds_get_bts_end(ds))
++ return -EINVAL;
++
++ bts = (void *)(get_bts_buffer_base(ds) + (index * ds_cfg.sizeof_bts));
++
++ memset(out, 0, sizeof(*out));
++ if (get_from_ip(bts) == BTS_ESCAPE_ADDRESS) {
++ out->qualifier = get_info_type(bts);
++ out->variant.jiffies = get_info_data(bts);
++ } else {
++ out->qualifier = BTS_BRANCH;
++ out->variant.lbr.from_ip = get_from_ip(bts);
++ out->variant.lbr.to_ip = get_to_ip(bts);
++ }
++
++	return sizeof(*out);
++}
++
++int ds_write_bts(void *ds, const struct bts_struct *in)
+{
-+ int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
-+ unsigned long flags;
++ unsigned long bts;
+
-+ /*
-+ * tell I/O scheduler that this isn't a regular read/write (ie it
-+ * must not attempt merges on this) and that it acts as a soft
-+ * barrier
-+ */
-+ rq->cmd_type = REQ_TYPE_SPECIAL;
-+ rq->cmd_flags |= REQ_SOFTBARRIER;
++ if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts)
++ return -EOPNOTSUPP;
+
-+ rq->special = data;
++ if (ds_get_bts_size(ds) <= 0)
++ return -ENXIO;
+
-+ spin_lock_irqsave(q->queue_lock, flags);
++ bts = get_bts_index(ds);
+
-+ /*
-+ * If command is tagged, release the tag
-+ */
-+ if (blk_rq_tagged(rq))
-+ blk_queue_end_tag(q, rq);
++ memset((void *)bts, 0, ds_cfg.sizeof_bts);
++ switch (in->qualifier) {
++ case BTS_INVALID:
++ break;
+
-+ drive_stat_acct(rq, 1);
-+ __elv_add_request(q, rq, where, 0);
-+ blk_start_queueing(q);
-+ spin_unlock_irqrestore(q->queue_lock, flags);
++ case BTS_BRANCH:
++ set_from_ip((void *)bts, in->variant.lbr.from_ip);
++ set_to_ip((void *)bts, in->variant.lbr.to_ip);
++ break;
++
++ case BTS_TASK_ARRIVES:
++ case BTS_TASK_DEPARTS:
++ set_from_ip((void *)bts, BTS_ESCAPE_ADDRESS);
++ set_info_type((void *)bts, in->qualifier);
++ set_info_data((void *)bts, in->variant.jiffies);
++ break;
++
++ default:
++ return -EINVAL;
++ }
++
++ bts = bts + ds_cfg.sizeof_bts;
++ if (bts >= get_bts_absolute_maximum(ds))
++ bts = get_bts_buffer_base(ds);
++ set_bts_index(ds, bts);
++
++ return ds_cfg.sizeof_bts;
++}
++
++unsigned long ds_debugctl_mask(void)
++{
++ return ds_cfg.debugctl_mask;
++}
++
++#ifdef __i386__
++static const struct ds_configuration ds_cfg_netburst = {
++ .sizeof_ds = 9 * 4,
++ .bts_buffer_base = { 0, 4 },
++ .bts_index = { 4, 4 },
++ .bts_absolute_maximum = { 8, 4 },
++ .bts_interrupt_threshold = { 12, 4 },
++ .sizeof_bts = 3 * 4,
++ .from_ip = { 0, 4 },
++ .to_ip = { 4, 4 },
++ .info_type = { 4, 1 },
++ .info_data = { 8, 4 },
++ .debugctl_mask = (1<<2)|(1<<3)
++};
++
++static const struct ds_configuration ds_cfg_pentium_m = {
++ .sizeof_ds = 9 * 4,
++ .bts_buffer_base = { 0, 4 },
++ .bts_index = { 4, 4 },
++ .bts_absolute_maximum = { 8, 4 },
++ .bts_interrupt_threshold = { 12, 4 },
++ .sizeof_bts = 3 * 4,
++ .from_ip = { 0, 4 },
++ .to_ip = { 4, 4 },
++ .info_type = { 4, 1 },
++ .info_data = { 8, 4 },
++ .debugctl_mask = (1<<6)|(1<<7)
++};
++#endif /* __i386__ */
++
++static const struct ds_configuration ds_cfg_core2 = {
++ .sizeof_ds = 9 * 8,
++ .bts_buffer_base = { 0, 8 },
++ .bts_index = { 8, 8 },
++ .bts_absolute_maximum = { 16, 8 },
++ .bts_interrupt_threshold = { 24, 8 },
++ .sizeof_bts = 3 * 8,
++ .from_ip = { 0, 8 },
++ .to_ip = { 8, 8 },
++ .info_type = { 8, 1 },
++ .info_data = { 16, 8 },
++ .debugctl_mask = (1<<6)|(1<<7)|(1<<9)
++};
++
++static inline void
++ds_configure(const struct ds_configuration *cfg)
++{
++ ds_cfg = *cfg;
+}
+
-+EXPORT_SYMBOL(blk_insert_request);
++void __cpuinit ds_init_intel(struct cpuinfo_x86 *c)
++{
++ switch (c->x86) {
++ case 0x6:
++ switch (c->x86_model) {
++#ifdef __i386__
++ case 0xD:
++ case 0xE: /* Pentium M */
++ ds_configure(&ds_cfg_pentium_m);
++ break;
++#endif /* __i386__ */
++ case 0xF: /* Core2 */
++ ds_configure(&ds_cfg_core2);
++ break;
++ default:
++ /* sorry, don't know about them */
++ break;
++ }
++ break;
++ case 0xF:
++ switch (c->x86_model) {
++#ifdef __i386__
++ case 0x0:
++ case 0x1:
++ case 0x2: /* Netburst */
++ ds_configure(&ds_cfg_netburst);
++ break;
++#endif /* __i386__ */
++ default:
++ /* sorry, don't know about them */
++ break;
++ }
++ break;
++ default:
++ /* sorry, don't know about them */
++ break;
++ }
++}
+diff --git a/arch/x86/kernel/e820_32.c b/arch/x86/kernel/e820_32.c
+index 18f500d..4e16ef4 100644
+--- a/arch/x86/kernel/e820_32.c
++++ b/arch/x86/kernel/e820_32.c
+@@ -7,7 +7,6 @@
+ #include <linux/kexec.h>
+ #include <linux/module.h>
+ #include <linux/mm.h>
+-#include <linux/efi.h>
+ #include <linux/pfn.h>
+ #include <linux/uaccess.h>
+ #include <linux/suspend.h>
+@@ -17,11 +16,6 @@
+ #include <asm/e820.h>
+ #include <asm/setup.h>
+
+-#ifdef CONFIG_EFI
+-int efi_enabled = 0;
+-EXPORT_SYMBOL(efi_enabled);
+-#endif
+-
+ struct e820map e820;
+ struct change_member {
+ struct e820entry *pbios; /* pointer to original bios entry */
+@@ -37,26 +31,6 @@ unsigned long pci_mem_start = 0x10000000;
+ EXPORT_SYMBOL(pci_mem_start);
+ #endif
+ extern int user_defined_memmap;
+-struct resource data_resource = {
+- .name = "Kernel data",
+- .start = 0,
+- .end = 0,
+- .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+-};
+-
+-struct resource code_resource = {
+- .name = "Kernel code",
+- .start = 0,
+- .end = 0,
+- .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+-};
+-
+-struct resource bss_resource = {
+- .name = "Kernel bss",
+- .start = 0,
+- .end = 0,
+- .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+-};
+
+ static struct resource system_rom_resource = {
+ .name = "System ROM",
+@@ -111,60 +85,6 @@ static struct resource video_rom_resource = {
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+ };
+
+-static struct resource video_ram_resource = {
+- .name = "Video RAM area",
+- .start = 0xa0000,
+- .end = 0xbffff,
+- .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+-};
+-
+-static struct resource standard_io_resources[] = { {
+- .name = "dma1",
+- .start = 0x0000,
+- .end = 0x001f,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "pic1",
+- .start = 0x0020,
+- .end = 0x0021,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "timer0",
+- .start = 0x0040,
+- .end = 0x0043,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "timer1",
+- .start = 0x0050,
+- .end = 0x0053,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "keyboard",
+- .start = 0x0060,
+- .end = 0x006f,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "dma page reg",
+- .start = 0x0080,
+- .end = 0x008f,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "pic2",
+- .start = 0x00a0,
+- .end = 0x00a1,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "dma2",
+- .start = 0x00c0,
+- .end = 0x00df,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-}, {
+- .name = "fpu",
+- .start = 0x00f0,
+- .end = 0x00ff,
+- .flags = IORESOURCE_BUSY | IORESOURCE_IO
+-} };
+-
+ #define ROMSIGNATURE 0xaa55
+
+ static int __init romsignature(const unsigned char *rom)
+@@ -260,10 +180,9 @@ static void __init probe_roms(void)
+ * Request address space for all standard RAM and ROM resources
+ * and also for regions reported as reserved by the e820.
+ */
+-static void __init
+-legacy_init_iomem_resources(struct resource *code_resource,
+- struct resource *data_resource,
+- struct resource *bss_resource)
++void __init init_iomem_resources(struct resource *code_resource,
++ struct resource *data_resource,
++ struct resource *bss_resource)
+ {
+ int i;
+
+@@ -305,35 +224,6 @@ legacy_init_iomem_resources(struct resource *code_resource,
+ }
+ }
+
+-/*
+- * Request address space for all standard resources
+- *
+- * This is called just before pcibios_init(), which is also a
+- * subsys_initcall, but is linked in later (in arch/i386/pci/common.c).
+- */
+-static int __init request_standard_resources(void)
+-{
+- int i;
+-
+- printk("Setting up standard PCI resources\n");
+- if (efi_enabled)
+- efi_initialize_iomem_resources(&code_resource,
+- &data_resource, &bss_resource);
+- else
+- legacy_init_iomem_resources(&code_resource,
+- &data_resource, &bss_resource);
+-
+- /* EFI systems may still have VGA */
+- request_resource(&iomem_resource, &video_ram_resource);
+-
+- /* request I/O space for devices used on all i[345]86 PCs */
+- for (i = 0; i < ARRAY_SIZE(standard_io_resources); i++)
+- request_resource(&ioport_resource, &standard_io_resources[i]);
+- return 0;
+-}
+-
+-subsys_initcall(request_standard_resources);
+-
+ #if defined(CONFIG_PM) && defined(CONFIG_HIBERNATION)
+ /**
+ * e820_mark_nosave_regions - Find the ranges of physical addresses that do not
+@@ -370,19 +260,17 @@ void __init add_memory_region(unsigned long long start,
+ {
+ int x;
+
+- if (!efi_enabled) {
+- x = e820.nr_map;
+-
+- if (x == E820MAX) {
+- printk(KERN_ERR "Ooops! Too many entries in the memory map!\n");
+- return;
+- }
++ x = e820.nr_map;
+
+- e820.map[x].addr = start;
+- e820.map[x].size = size;
+- e820.map[x].type = type;
+- e820.nr_map++;
++ if (x == E820MAX) {
++ printk(KERN_ERR "Ooops! Too many entries in the memory map!\n");
++ return;
+ }
+
-+/*
-+ * add-request adds a request to the linked list.
-+ * queue lock is held and interrupts disabled, as we muck with the
-+ * request queue list.
-+ */
-+static inline void add_request(struct request_queue * q, struct request * req)
++ e820.map[x].addr = start;
++ e820.map[x].size = size;
++ e820.map[x].type = type;
++ e820.nr_map++;
+ } /* add_memory_region */
+
+ /*
+@@ -598,29 +486,6 @@ int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
+ }
+
+ /*
+- * Callback for efi_memory_walk.
+- */
+-static int __init
+-efi_find_max_pfn(unsigned long start, unsigned long end, void *arg)
+-{
+- unsigned long *max_pfn = arg, pfn;
+-
+- if (start < end) {
+- pfn = PFN_UP(end -1);
+- if (pfn > *max_pfn)
+- *max_pfn = pfn;
+- }
+- return 0;
+-}
+-
+-static int __init
+-efi_memory_present_wrapper(unsigned long start, unsigned long end, void *arg)
+-{
+- memory_present(0, PFN_UP(start), PFN_DOWN(end));
+- return 0;
+-}
+-
+-/*
+ * Find the highest page frame number we have available
+ */
+ void __init find_max_pfn(void)
+@@ -628,11 +493,6 @@ void __init find_max_pfn(void)
+ int i;
+
+ max_pfn = 0;
+- if (efi_enabled) {
+- efi_memmap_walk(efi_find_max_pfn, &max_pfn);
+- efi_memmap_walk(efi_memory_present_wrapper, NULL);
+- return;
+- }
+
+ for (i = 0; i < e820.nr_map; i++) {
+ unsigned long start, end;
+@@ -650,34 +510,12 @@ void __init find_max_pfn(void)
+ }
+
+ /*
+- * Free all available memory for boot time allocation. Used
+- * as a callback function by efi_memory_walk()
+- */
+-
+-static int __init
+-free_available_memory(unsigned long start, unsigned long end, void *arg)
+-{
+- /* check max_low_pfn */
+- if (start >= (max_low_pfn << PAGE_SHIFT))
+- return 0;
+- if (end >= (max_low_pfn << PAGE_SHIFT))
+- end = max_low_pfn << PAGE_SHIFT;
+- if (start < end)
+- free_bootmem(start, end - start);
+-
+- return 0;
+-}
+-/*
+ * Register fully available low RAM pages with the bootmem allocator.
+ */
+ void __init register_bootmem_low_pages(unsigned long max_low_pfn)
+ {
+ int i;
+
+- if (efi_enabled) {
+- efi_memmap_walk(free_available_memory, NULL);
+- return;
+- }
+ for (i = 0; i < e820.nr_map; i++) {
+ unsigned long curr_pfn, last_pfn, size;
+ /*
+@@ -785,56 +623,12 @@ void __init print_memory_map(char *who)
+ }
+ }
+
+-static __init __always_inline void efi_limit_regions(unsigned long long size)
+-{
+- unsigned long long current_addr = 0;
+- efi_memory_desc_t *md, *next_md;
+- void *p, *p1;
+- int i, j;
+-
+- j = 0;
+- p1 = memmap.map;
+- for (p = p1, i = 0; p < memmap.map_end; p += memmap.desc_size, i++) {
+- md = p;
+- next_md = p1;
+- current_addr = md->phys_addr +
+- PFN_PHYS(md->num_pages);
+- if (is_available_memory(md)) {
+- if (md->phys_addr >= size) continue;
+- memcpy(next_md, md, memmap.desc_size);
+- if (current_addr >= size) {
+- next_md->num_pages -=
+- PFN_UP(current_addr-size);
+- }
+- p1 += memmap.desc_size;
+- next_md = p1;
+- j++;
+- } else if ((md->attribute & EFI_MEMORY_RUNTIME) ==
+- EFI_MEMORY_RUNTIME) {
+- /* In order to make runtime services
+- * available we have to include runtime
+- * memory regions in memory map */
+- memcpy(next_md, md, memmap.desc_size);
+- p1 += memmap.desc_size;
+- next_md = p1;
+- j++;
+- }
+- }
+- memmap.nr_map = j;
+- memmap.map_end = memmap.map +
+- (memmap.nr_map * memmap.desc_size);
+-}
+-
+ void __init limit_regions(unsigned long long size)
+ {
+ unsigned long long current_addr;
+ int i;
+
+ print_memory_map("limit_regions start");
+- if (efi_enabled) {
+- efi_limit_regions(size);
+- return;
+- }
+ for (i = 0; i < e820.nr_map; i++) {
+ current_addr = e820.map[i].addr + e820.map[i].size;
+ if (current_addr < size)
+@@ -955,3 +749,14 @@ static int __init parse_memmap(char *arg)
+ return 0;
+ }
+ early_param("memmap", parse_memmap);
++void __init update_e820(void)
+{
-+ drive_stat_acct(req, 1);
++ u8 nr_map;
+
-+ /*
-+ * elevator indicated where it wants this request to be
-+ * inserted at elevator_merge time
-+ */
-+ __elv_add_request(q, req, ELEVATOR_INSERT_SORT, 0);
++ nr_map = e820.nr_map;
++ if (sanitize_e820_map(e820.map, &nr_map))
++ return;
++ e820.nr_map = nr_map;
++ printk(KERN_INFO "modified physical RAM map:\n");
++ print_memory_map("modified");
+}
-+
+diff --git a/arch/x86/kernel/e820_64.c b/arch/x86/kernel/e820_64.c
+index 04698e0..c617174 100644
+--- a/arch/x86/kernel/e820_64.c
++++ b/arch/x86/kernel/e820_64.c
+@@ -1,4 +1,4 @@
+-/*
+/*
-+ * disk_round_stats() - Round off the performance stats on a struct
-+ * disk_stats.
-+ *
-+ * The average IO queue length and utilisation statistics are maintained
-+ * by observing the current state of the queue length and the amount of
-+ * time it has been in this state for.
-+ *
-+ * Normally, that accounting is done on IO completion, but that can result
-+ * in more than a second's worth of IO being accounted for within any one
-+ * second, leading to >100% utilisation. To deal with that, we call this
-+ * function to do a round-off before returning the results when reading
-+ * /proc/diskstats. This accounts immediately for all queue usage up to
-+ * the current jiffies and restarts the counters again.
+ * Handle the memory map.
+ * The functions here do the job until bootmem takes over.
+ *
+@@ -26,80 +26,87 @@
+ #include <asm/proto.h>
+ #include <asm/setup.h>
+ #include <asm/sections.h>
++#include <asm/kdebug.h>
+
+ struct e820map e820;
+
+-/*
++/*
+ * PFN of last memory page.
+ */
+-unsigned long end_pfn;
+-EXPORT_SYMBOL(end_pfn);
++unsigned long end_pfn;
+
+-/*
++/*
+ * end_pfn only includes RAM, while end_pfn_map includes all e820 entries.
+ * The direct mapping extends to end_pfn_map, so that we can directly access
+ * apertures, ACPI and other tables without having to play with fixmaps.
+- */
+-unsigned long end_pfn_map;
+ */
-+void disk_round_stats(struct gendisk *disk)
++unsigned long end_pfn_map;
+
+-/*
++/*
+ * Last pfn which the user wants to use.
+ */
+ static unsigned long __initdata end_user_pfn = MAXMEM>>PAGE_SHIFT;
+
+-extern struct resource code_resource, data_resource, bss_resource;
+-
+-/* Check for some hardcoded bad areas that early boot is not allowed to touch */
+-static inline int bad_addr(unsigned long *addrp, unsigned long size)
+-{
+- unsigned long addr = *addrp, last = addr + size;
+-
+- /* various gunk below that needed for SMP startup */
+- if (addr < 0x8000) {
+- *addrp = PAGE_ALIGN(0x8000);
+- return 1;
+- }
+-
+- /* direct mapping tables of the kernel */
+- if (last >= table_start<<PAGE_SHIFT && addr < table_end<<PAGE_SHIFT) {
+- *addrp = PAGE_ALIGN(table_end << PAGE_SHIFT);
+- return 1;
+- }
+-
+- /* initrd */
+-#ifdef CONFIG_BLK_DEV_INITRD
+- if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
+- unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
+- unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
+- unsigned long ramdisk_end = ramdisk_image+ramdisk_size;
+-
+- if (last >= ramdisk_image && addr < ramdisk_end) {
+- *addrp = PAGE_ALIGN(ramdisk_end);
+- return 1;
+- }
+- }
++/*
++ * Early reserved memory areas.
++ */
++#define MAX_EARLY_RES 20
++
++struct early_res {
++ unsigned long start, end;
++};
++static struct early_res early_res[MAX_EARLY_RES] __initdata = {
++ { 0, PAGE_SIZE }, /* BIOS data page */
++#ifdef CONFIG_SMP
++ { SMP_TRAMPOLINE_BASE, SMP_TRAMPOLINE_BASE + 2*PAGE_SIZE },
+ #endif
+- /* kernel code */
+- if (last >= __pa_symbol(&_text) && addr < __pa_symbol(&_end)) {
+- *addrp = PAGE_ALIGN(__pa_symbol(&_end));
+- return 1;
++ {}
++};
++
++void __init reserve_early(unsigned long start, unsigned long end)
+{
-+ unsigned long now = jiffies;
++ int i;
++ struct early_res *r;
++ for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
++ r = &early_res[i];
++ if (end > r->start && start < r->end)
++ panic("Overlapping early reservations %lx-%lx to %lx-%lx\n",
++ start, end, r->start, r->end);
+ }
++ if (i >= MAX_EARLY_RES)
++ panic("Too many early reservations");
++ r = &early_res[i];
++ r->start = start;
++ r->end = end;
++}
+
+- if (last >= ebda_addr && addr < ebda_addr + ebda_size) {
+- *addrp = PAGE_ALIGN(ebda_addr + ebda_size);
+- return 1;
++void __init early_res_to_bootmem(void)
++{
++ int i;
++ for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
++ struct early_res *r = &early_res[i];
++ reserve_bootmem_generic(r->start, r->end - r->start);
+ }
++}
+
+-#ifdef CONFIG_NUMA
+- /* NUMA memory to node map */
+- if (last >= nodemap_addr && addr < nodemap_addr + nodemap_size) {
+- *addrp = nodemap_addr + nodemap_size;
+- return 1;
++/* Check for already reserved areas */
++static inline int bad_addr(unsigned long *addrp, unsigned long size)
++{
++ int i;
++ unsigned long addr = *addrp, last;
++ int changed = 0;
++again:
++ last = addr + size;
++ for (i = 0; i < MAX_EARLY_RES && early_res[i].end; i++) {
++ struct early_res *r = &early_res[i];
++ if (last >= r->start && addr < r->end) {
++ *addrp = addr = r->end;
++ changed = 1;
++ goto again;
++ }
+ }
+-#endif
+- /* XXX ramdisk image here? */
+- return 0;
+-}
++ return changed;
++}
+
+ /*
+ * This function checks if any part of the range <start,end> is mapped
+@@ -107,16 +114,18 @@ static inline int bad_addr(unsigned long *addrp, unsigned long size)
+ */
+ int
+ e820_any_mapped(unsigned long start, unsigned long end, unsigned type)
+-{
++{
+ int i;
+- for (i = 0; i < e820.nr_map; i++) {
+- struct e820entry *ei = &e820.map[i];
+- if (type && ei->type != type)
+
-+ if (now == disk->stamp)
-+ return;
++ for (i = 0; i < e820.nr_map; i++) {
++ struct e820entry *ei = &e820.map[i];
+
-+ if (disk->in_flight) {
-+ __disk_stat_add(disk, time_in_queue,
-+ disk->in_flight * (now - disk->stamp));
-+ __disk_stat_add(disk, io_ticks, (now - disk->stamp));
++ if (type && ei->type != type)
+ continue;
+ if (ei->addr >= end || ei->addr + ei->size <= start)
+- continue;
+- return 1;
+- }
++ continue;
++ return 1;
+ }
-+ disk->stamp = now;
-+}
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(e820_any_mapped);
+@@ -127,11 +136,14 @@ EXPORT_SYMBOL_GPL(e820_any_mapped);
+ * Note: this function only works correct if the e820 table is sorted and
+ * not-overlapping, which is the case
+ */
+-int __init e820_all_mapped(unsigned long start, unsigned long end, unsigned type)
++int __init e820_all_mapped(unsigned long start, unsigned long end,
++ unsigned type)
+ {
+ int i;
+
-+EXPORT_SYMBOL_GPL(disk_round_stats);
+ for (i = 0; i < e820.nr_map; i++) {
+ struct e820entry *ei = &e820.map[i];
+
+ if (type && ei->type != type)
+ continue;
+ /* is the region (part) in overlap with the current region ?*/
+@@ -143,65 +155,73 @@ int __init e820_all_mapped(unsigned long start, unsigned long end, unsigned type
+ */
+ if (ei->addr <= start)
+ start = ei->addr + ei->size;
+- /* if start is now at or beyond end, we're done, full coverage */
++ /*
++ * if start is now at or beyond end, we're done, full
++ * coverage
++ */
+ if (start >= end)
+- return 1; /* we're done */
++ return 1;
+ }
+ return 0;
+ }
+
+-/*
+- * Find a free area in a specific range.
+- */
+-unsigned long __init find_e820_area(unsigned long start, unsigned long end, unsigned size)
+-{
+- int i;
+- for (i = 0; i < e820.nr_map; i++) {
+- struct e820entry *ei = &e820.map[i];
+- unsigned long addr = ei->addr, last;
+- if (ei->type != E820_RAM)
+- continue;
+- if (addr < start)
+/*
-+ * queue lock must be held
++ * Find a free area in a specific range.
+ */
-+void __blk_put_request(struct request_queue *q, struct request *req)
++unsigned long __init find_e820_area(unsigned long start, unsigned long end,
++ unsigned size)
+{
-+ if (unlikely(!q))
-+ return;
-+ if (unlikely(--req->ref_count))
-+ return;
-+
-+ elv_completed_request(q, req);
-+
-+ /*
-+ * Request may not have originated from ll_rw_blk. if not,
-+ * it didn't come out of our reserved rq pools
-+ */
-+ if (req->cmd_flags & REQ_ALLOCED) {
-+ int rw = rq_data_dir(req);
-+ int priv = req->cmd_flags & REQ_ELVPRIV;
++ int i;
+
-+ BUG_ON(!list_empty(&req->queuelist));
-+ BUG_ON(!hlist_unhashed(&req->hash));
++ for (i = 0; i < e820.nr_map; i++) {
++ struct e820entry *ei = &e820.map[i];
++ unsigned long addr = ei->addr, last;
+
-+ blk_free_request(q, req);
-+ freed_request(q, rw, priv);
++ if (ei->type != E820_RAM)
++ continue;
++ if (addr < start)
+ addr = start;
+- if (addr > ei->addr + ei->size)
+- continue;
++ if (addr > ei->addr + ei->size)
++ continue;
+ while (bad_addr(&addr, size) && addr+size <= ei->addr+ei->size)
+ ;
+ last = PAGE_ALIGN(addr) + size;
+ if (last > ei->addr + ei->size)
+ continue;
+- if (last > end)
++ if (last > end)
+ continue;
+- return addr;
+- }
+- return -1UL;
+-}
++ return addr;
+ }
++ return -1UL;
+}
+
+ /*
+ * Find the highest page frame number we have available
+ */
+ unsigned long __init e820_end_of_ram(void)
+ {
+- unsigned long end_pfn = 0;
++ unsigned long end_pfn;
+
-+EXPORT_SYMBOL_GPL(__blk_put_request);
+ end_pfn = find_max_pfn_with_active_regions();
+-
+- if (end_pfn > end_pfn_map)
+
-+void blk_put_request(struct request *req)
-+{
-+ unsigned long flags;
-+ struct request_queue *q = req->q;
++ if (end_pfn > end_pfn_map)
+ end_pfn_map = end_pfn;
+ if (end_pfn_map > MAXMEM>>PAGE_SHIFT)
+ end_pfn_map = MAXMEM>>PAGE_SHIFT;
+ if (end_pfn > end_user_pfn)
+ end_pfn = end_user_pfn;
+- if (end_pfn > end_pfn_map)
+- end_pfn = end_pfn_map;
++ if (end_pfn > end_pfn_map)
++ end_pfn = end_pfn_map;
+
+- printk("end_pfn_map = %lu\n", end_pfn_map);
+- return end_pfn;
++ printk(KERN_INFO "end_pfn_map = %lu\n", end_pfn_map);
++ return end_pfn;
+ }
+
+ /*
+ * Mark e820 reserved areas as busy for the resource manager.
+ */
+-void __init e820_reserve_resources(void)
++void __init e820_reserve_resources(struct resource *code_resource,
++ struct resource *data_resource, struct resource *bss_resource)
+ {
+ int i;
+ for (i = 0; i < e820.nr_map; i++) {
+@@ -219,13 +239,13 @@ void __init e820_reserve_resources(void)
+ request_resource(&iomem_resource, res);
+ if (e820.map[i].type == E820_RAM) {
+ /*
+- * We don't know which RAM region contains kernel data,
+- * so we try it repeatedly and let the resource manager
+- * test it.
++ * We don't know which RAM region contains kernel data,
++ * so we try it repeatedly and let the resource manager
++ * test it.
+ */
+- request_resource(res, &code_resource);
+- request_resource(res, &data_resource);
+- request_resource(res, &bss_resource);
++ request_resource(res, code_resource);
++ request_resource(res, data_resource);
++ request_resource(res, bss_resource);
+ #ifdef CONFIG_KEXEC
+ if (crashk_res.start != crashk_res.end)
+ request_resource(res, &crashk_res);
+@@ -322,9 +342,9 @@ e820_register_active_regions(int nid, unsigned long start_pfn,
+ add_active_range(nid, ei_startpfn, ei_endpfn);
+ }
+
+-/*
++/*
+ * Add a memory region to the kernel e820 map.
+- */
++ */
+ void __init add_memory_region(unsigned long start, unsigned long size, int type)
+ {
+ int x = e820.nr_map;
+@@ -349,9 +369,7 @@ unsigned long __init e820_hole_size(unsigned long start, unsigned long end)
+ {
+ unsigned long start_pfn = start >> PAGE_SHIFT;
+ unsigned long end_pfn = end >> PAGE_SHIFT;
+- unsigned long ei_startpfn;
+- unsigned long ei_endpfn;
+- unsigned long ram = 0;
++ unsigned long ei_startpfn, ei_endpfn, ram = 0;
+ int i;
+
+ for (i = 0; i < e820.nr_map; i++) {
+@@ -363,28 +381,31 @@ unsigned long __init e820_hole_size(unsigned long start, unsigned long end)
+ return end - start - (ram << PAGE_SHIFT);
+ }
+
+-void __init e820_print_map(char *who)
++static void __init e820_print_map(char *who)
+ {
+ int i;
+
+ for (i = 0; i < e820.nr_map; i++) {
+ printk(KERN_INFO " %s: %016Lx - %016Lx ", who,
+- (unsigned long long) e820.map[i].addr,
+- (unsigned long long) (e820.map[i].addr + e820.map[i].size));
++ (unsigned long long) e820.map[i].addr,
++ (unsigned long long)
++ (e820.map[i].addr + e820.map[i].size));
+ switch (e820.map[i].type) {
+- case E820_RAM: printk("(usable)\n");
+- break;
++ case E820_RAM:
++ printk(KERN_CONT "(usable)\n");
++ break;
+ case E820_RESERVED:
+- printk("(reserved)\n");
+- break;
++ printk(KERN_CONT "(reserved)\n");
++ break;
+ case E820_ACPI:
+- printk("(ACPI data)\n");
+- break;
++ printk(KERN_CONT "(ACPI data)\n");
++ break;
+ case E820_NVS:
+- printk("(ACPI NVS)\n");
+- break;
+- default: printk("type %u\n", e820.map[i].type);
+- break;
++ printk(KERN_CONT "(ACPI NVS)\n");
++ break;
++ default:
++ printk(KERN_CONT "type %u\n", e820.map[i].type);
++ break;
+ }
+ }
+ }
+@@ -392,11 +413,11 @@ void __init e820_print_map(char *who)
+ /*
+ * Sanitize the BIOS e820 map.
+ *
+- * Some e820 responses include overlapping entries. The following
++ * Some e820 responses include overlapping entries. The following
+ * replaces the original e820 map with a new one, removing overlaps.
+ *
+ */
+-static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
++static int __init sanitize_e820_map(struct e820entry *biosmap, char *pnr_map)
+ {
+ struct change_member {
+ struct e820entry *pbios; /* pointer to original bios entry */
+@@ -416,7 +437,8 @@ static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
+ int i;
+
+ /*
+- Visually we're performing the following (1,2,3,4 = memory types)...
++ Visually we're performing the following
++ (1,2,3,4 = memory types)...
+
+ Sample memory map (w/overlaps):
+ ____22__________________
+@@ -458,22 +480,23 @@ static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
+ old_nr = *pnr_map;
+
+ /* bail out if we find any unreasonable addresses in bios map */
+- for (i=0; i<old_nr; i++)
++ for (i = 0; i < old_nr; i++)
+ if (biosmap[i].addr + biosmap[i].size < biosmap[i].addr)
+ return -1;
+
+ /* create pointers for initial change-point information (for sorting) */
+- for (i=0; i < 2*old_nr; i++)
++ for (i = 0; i < 2 * old_nr; i++)
+ change_point[i] = &change_point_list[i];
+
+ /* record all known change-points (starting and ending addresses),
+ omitting those that are for empty memory regions */
+ chgidx = 0;
+- for (i=0; i < old_nr; i++) {
++ for (i = 0; i < old_nr; i++) {
+ if (biosmap[i].size != 0) {
+ change_point[chgidx]->addr = biosmap[i].addr;
+ change_point[chgidx++]->pbios = &biosmap[i];
+- change_point[chgidx]->addr = biosmap[i].addr + biosmap[i].size;
++ change_point[chgidx]->addr = biosmap[i].addr +
++ biosmap[i].size;
+ change_point[chgidx++]->pbios = &biosmap[i];
+ }
+ }
+@@ -483,75 +506,106 @@ static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
+ still_changing = 1;
+ while (still_changing) {
+ still_changing = 0;
+- for (i=1; i < chg_nr; i++) {
+- /* if <current_addr> > <last_addr>, swap */
+- /* or, if current=<start_addr> & last=<end_addr>, swap */
+- if ((change_point[i]->addr < change_point[i-1]->addr) ||
+- ((change_point[i]->addr == change_point[i-1]->addr) &&
+- (change_point[i]->addr == change_point[i]->pbios->addr) &&
+- (change_point[i-1]->addr != change_point[i-1]->pbios->addr))
+- )
+- {
++ for (i = 1; i < chg_nr; i++) {
++ unsigned long long curaddr, lastaddr;
++ unsigned long long curpbaddr, lastpbaddr;
++
++ curaddr = change_point[i]->addr;
++ lastaddr = change_point[i - 1]->addr;
++ curpbaddr = change_point[i]->pbios->addr;
++ lastpbaddr = change_point[i - 1]->pbios->addr;
+
-+ /*
-+ * Gee, IDE calls in w/ NULL q. Fix IDE and remove the
-+ * following if (q) test.
-+ */
-+ if (q) {
-+ spin_lock_irqsave(q->queue_lock, flags);
-+ __blk_put_request(q, req);
-+ spin_unlock_irqrestore(q->queue_lock, flags);
-+ }
-+}
++ /*
++ * swap entries, when:
++ *
++ * curaddr > lastaddr or
++ * curaddr == lastaddr and curaddr == curpbaddr and
++ * lastaddr != lastpbaddr
++ */
++ if (curaddr < lastaddr ||
++ (curaddr == lastaddr && curaddr == curpbaddr &&
++ lastaddr != lastpbaddr)) {
+ change_tmp = change_point[i];
+ change_point[i] = change_point[i-1];
+ change_point[i-1] = change_tmp;
+- still_changing=1;
++ still_changing = 1;
+ }
+ }
+ }
+
+ /* create a new bios memory map, removing overlaps */
+- overlap_entries=0; /* number of entries in the overlap table */
+- new_bios_entry=0; /* index for creating new bios map entries */
++ overlap_entries = 0; /* number of entries in the overlap table */
++ new_bios_entry = 0; /* index for creating new bios map entries */
+ last_type = 0; /* start with undefined memory type */
+ last_addr = 0; /* start with 0 as last starting address */
+
-+EXPORT_SYMBOL(blk_put_request);
+ /* loop through change-points, determining affect on the new bios map */
+- for (chgidx=0; chgidx < chg_nr; chgidx++)
+- {
++ for (chgidx = 0; chgidx < chg_nr; chgidx++) {
+ /* keep track of all overlapping bios entries */
+- if (change_point[chgidx]->addr == change_point[chgidx]->pbios->addr)
+- {
+- /* add map entry to overlap list (> 1 entry implies an overlap) */
+- overlap_list[overlap_entries++]=change_point[chgidx]->pbios;
+- }
+- else
+- {
+- /* remove entry from list (order independent, so swap with last) */
+- for (i=0; i<overlap_entries; i++)
+- {
+- if (overlap_list[i] == change_point[chgidx]->pbios)
+- overlap_list[i] = overlap_list[overlap_entries-1];
++ if (change_point[chgidx]->addr ==
++ change_point[chgidx]->pbios->addr) {
++ /*
++ * add map entry to overlap list (> 1 entry
++ * implies an overlap)
++ */
++ overlap_list[overlap_entries++] =
++ change_point[chgidx]->pbios;
++ } else {
++ /*
++ * remove entry from list (order independent,
++ * so swap with last)
++ */
++ for (i = 0; i < overlap_entries; i++) {
++ if (overlap_list[i] ==
++ change_point[chgidx]->pbios)
++ overlap_list[i] =
++ overlap_list[overlap_entries-1];
+ }
+ overlap_entries--;
+ }
+- /* if there are overlapping entries, decide which "type" to use */
+- /* (larger value takes precedence -- 1=usable, 2,3,4,4+=unusable) */
++ /*
++ * if there are overlapping entries, decide which
++ * "type" to use (larger value takes precedence --
++ * 1=usable, 2,3,4,4+=unusable)
++ */
+ current_type = 0;
+- for (i=0; i<overlap_entries; i++)
++ for (i = 0; i < overlap_entries; i++)
+ if (overlap_list[i]->type > current_type)
+ current_type = overlap_list[i]->type;
+- /* continue building up new bios map based on this information */
++ /*
++ * continue building up new bios map based on this
++ * information
++ */
+ if (current_type != last_type) {
+ if (last_type != 0) {
+ new_bios[new_bios_entry].size =
+ change_point[chgidx]->addr - last_addr;
+- /* move forward only if the new size was non-zero */
++ /*
++ * move forward only if the new size
++ * was non-zero
++ */
+ if (new_bios[new_bios_entry].size != 0)
++ /*
++ * no more space left for new
++ * bios entries ?
++ */
+ if (++new_bios_entry >= E820MAX)
+- break; /* no more space left for new bios entries */
++ break;
+ }
+ if (current_type != 0) {
+- new_bios[new_bios_entry].addr = change_point[chgidx]->addr;
++ new_bios[new_bios_entry].addr =
++ change_point[chgidx]->addr;
+ new_bios[new_bios_entry].type = current_type;
+- last_addr=change_point[chgidx]->addr;
++ last_addr = change_point[chgidx]->addr;
+ }
+ last_type = current_type;
+ }
+ }
+- new_nr = new_bios_entry; /* retain count for new bios entries */
++ /* retain count for new bios entries */
++ new_nr = new_bios_entry;
+
+ /* copy new bios mapping into original location */
+- memcpy(biosmap, new_bios, new_nr*sizeof(struct e820entry));
++ memcpy(biosmap, new_bios, new_nr * sizeof(struct e820entry));
+ *pnr_map = new_nr;
+
+ return 0;
+@@ -566,7 +620,7 @@ static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
+ * will have given us a memory map that we can use to properly
+ * set up memory. If we aren't, we'll fake a memory map.
+ */
+-static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
++static int __init copy_e820_map(struct e820entry *biosmap, int nr_map)
+ {
+ /* Only one memory region (or negative)? Ignore it */
+ if (nr_map < 2)
+@@ -583,18 +637,20 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
+ return -1;
+
+ add_memory_region(start, size, type);
+- } while (biosmap++,--nr_map);
++ } while (biosmap++, --nr_map);
+ return 0;
+ }
+
+-void early_panic(char *msg)
++static void early_panic(char *msg)
+ {
+ early_printk(msg);
+ panic(msg);
+ }
+
+-void __init setup_memory_region(void)
++/* We're not void only for x86 32-bit compat */
++char * __init machine_specific_memory_setup(void)
+ {
++ char *who = "BIOS-e820";
+ /*
+ * Try to copy the BIOS-supplied E820-map.
+ *
+@@ -605,7 +661,10 @@ void __init setup_memory_region(void)
+ if (copy_e820_map(boot_params.e820_map, boot_params.e820_entries) < 0)
+ early_panic("Cannot find a valid memory map");
+ printk(KERN_INFO "BIOS-provided physical RAM map:\n");
+- e820_print_map("BIOS-e820");
++ e820_print_map(who);
+
-+void init_request_from_bio(struct request *req, struct bio *bio)
-+{
-+ req->cmd_type = REQ_TYPE_FS;
++ /* In case someone cares... */
++ return who;
+ }
+
+ static int __init parse_memopt(char *p)
+@@ -613,9 +672,9 @@ static int __init parse_memopt(char *p)
+ if (!p)
+ return -EINVAL;
+ end_user_pfn = memparse(p, &p);
+- end_user_pfn >>= PAGE_SHIFT;
++ end_user_pfn >>= PAGE_SHIFT;
+ return 0;
+-}
++}
+ early_param("mem", parse_memopt);
+
+ static int userdef __initdata;
+@@ -627,9 +686,9 @@ static int __init parse_memmap_opt(char *p)
+
+ if (!strcmp(p, "exactmap")) {
+ #ifdef CONFIG_CRASH_DUMP
+- /* If we are doing a crash dump, we
+- * still need to know the real mem
+- * size before original memory map is
++ /*
++ * If we are doing a crash dump, we still need to know
++ * the real mem size before original memory map is
+ * reset.
+ */
+ e820_register_active_regions(0, 0, -1UL);
+@@ -646,6 +705,8 @@ static int __init parse_memmap_opt(char *p)
+ mem_size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
-+ /*
-+ * inherit FAILFAST from bio (for read-ahead, and explicit FAILFAST)
-+ */
-+ if (bio_rw_ahead(bio) || bio_failfast(bio))
-+ req->cmd_flags |= REQ_FAILFAST;
++ userdef = 1;
+ if (*p == '@') {
+ start_at = memparse(p+1, &p);
+ add_memory_region(start_at, mem_size, E820_RAM);
+@@ -665,11 +726,29 @@ early_param("memmap", parse_memmap_opt);
+ void __init finish_e820_parsing(void)
+ {
+ if (userdef) {
++ char nr = e820.nr_map;
+
-+ /*
-+ * REQ_BARRIER implies no merging, but lets make it explicit
-+ */
-+ if (unlikely(bio_barrier(bio)))
-+ req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
++ if (sanitize_e820_map(e820.map, &nr) < 0)
++ early_panic("Invalid user supplied memory map");
++ e820.nr_map = nr;
+
-+ if (bio_sync(bio))
-+ req->cmd_flags |= REQ_RW_SYNC;
-+ if (bio_rw_meta(bio))
-+ req->cmd_flags |= REQ_RW_META;
+ printk(KERN_INFO "user-defined physical RAM map:\n");
+ e820_print_map("user");
+ }
+ }
+
++void __init update_e820(void)
++{
++ u8 nr_map;
+
-+ req->errors = 0;
-+ req->hard_sector = req->sector = bio->bi_sector;
-+ req->ioprio = bio_prio(bio);
-+ req->start_time = jiffies;
-+ blk_rq_bio_prep(req->q, req, bio);
++ nr_map = e820.nr_map;
++ if (sanitize_e820_map(e820.map, &nr_map))
++ return;
++ e820.nr_map = nr_map;
++ printk(KERN_INFO "modified physical RAM map:\n");
++ e820_print_map("modified");
+}
+
-+static int __make_request(struct request_queue *q, struct bio *bio)
+ unsigned long pci_mem_start = 0xaeedbabe;
+ EXPORT_SYMBOL(pci_mem_start);
+
+@@ -713,8 +792,10 @@ __init void e820_setup_gap(void)
+
+ if (!found) {
+ gapstart = (end_pfn << PAGE_SHIFT) + 1024*1024;
+- printk(KERN_ERR "PCI: Warning: Cannot find a gap in the 32bit address range\n"
+- KERN_ERR "PCI: Unassigned devices with 32bit resource registers may break!\n");
++ printk(KERN_ERR "PCI: Warning: Cannot find a gap in the 32bit "
++ "address range\n"
++ KERN_ERR "PCI: Unassigned devices with 32bit resource "
++ "registers may break!\n");
+ }
+
+ /*
+@@ -727,8 +808,9 @@ __init void e820_setup_gap(void)
+ /* Fun with two's complement */
+ pci_mem_start = (gapstart + round) & -round;
+
+- printk(KERN_INFO "Allocating PCI resources starting at %lx (gap: %lx:%lx)\n",
+- pci_mem_start, gapstart, gapsize);
++ printk(KERN_INFO
++ "Allocating PCI resources starting at %lx (gap: %lx:%lx)\n",
++ pci_mem_start, gapstart, gapsize);
+ }
+
+ int __init arch_get_ram_range(int slot, u64 *addr, u64 *size)
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 88bb83e..9f51e1e 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -21,7 +21,33 @@
+ #include <asm/gart.h>
+ #endif
+
+-static void __init via_bugs(void)
++static void __init fix_hypertransport_config(int num, int slot, int func)
+{
-+ struct request *req;
-+ int el_ret, nr_sectors, barrier, err;
-+ const unsigned short prio = bio_prio(bio);
-+ const int sync = bio_sync(bio);
-+ int rw_flags;
-+
-+ nr_sectors = bio_sectors(bio);
-+
++ u32 htcfg;
+ /*
-+ * low level driver can indicate that it wants pages above a
-+ * certain limit bounced to low memory (ie for highmem, or even
-+ * ISA dma in theory)
++ * we found a hypertransport bus
++ * make sure that we are broadcasting
++ * interrupts to all cpus on the ht bus
++ * if we're using extended apic ids
+ */
-+ blk_queue_bounce(q, &bio);
-+
-+ barrier = bio_barrier(bio);
-+ if (unlikely(barrier) && (q->next_ordered == QUEUE_ORDERED_NONE)) {
-+ err = -EOPNOTSUPP;
-+ goto end_io;
++ htcfg = read_pci_config(num, slot, func, 0x68);
++ if (htcfg & (1 << 18)) {
++ printk(KERN_INFO "Detected use of extended apic ids "
++ "on hypertransport bus\n");
++ if ((htcfg & (1 << 17)) == 0) {
++ printk(KERN_INFO "Enabling hypertransport extended "
++ "apic interrupt broadcast\n");
++ printk(KERN_INFO "Note this is a bios bug, "
++ "please contact your hw vendor\n");
++ htcfg |= (1 << 17);
++ write_pci_config(num, slot, func, 0x68, htcfg);
++ }
+ }
+
-+ spin_lock_irq(q->queue_lock);
+
-+ if (unlikely(barrier) || elv_queue_empty(q))
-+ goto get_rq;
++}
+
-+ el_ret = elv_merge(q, &req, bio);
-+ switch (el_ret) {
-+ case ELEVATOR_BACK_MERGE:
-+ BUG_ON(!rq_mergeable(req));
++static void __init via_bugs(int num, int slot, int func)
+ {
+ #ifdef CONFIG_GART_IOMMU
+ if ((end_pfn > MAX_DMA32_PFN || force_iommu) &&
+@@ -44,7 +70,7 @@ static int __init nvidia_hpet_check(struct acpi_table_header *header)
+ #endif /* CONFIG_X86_IO_APIC */
+ #endif /* CONFIG_ACPI */
+
+-static void __init nvidia_bugs(void)
++static void __init nvidia_bugs(int num, int slot, int func)
+ {
+ #ifdef CONFIG_ACPI
+ #ifdef CONFIG_X86_IO_APIC
+@@ -72,7 +98,7 @@ static void __init nvidia_bugs(void)
+
+ }
+
+-static void __init ati_bugs(void)
++static void __init ati_bugs(int num, int slot, int func)
+ {
+ #ifdef CONFIG_X86_IO_APIC
+ if (timer_over_8254 == 1) {
+@@ -83,18 +109,67 @@ static void __init ati_bugs(void)
+ #endif
+ }
+
++#define QFLAG_APPLY_ONCE 0x1
++#define QFLAG_APPLIED 0x2
++#define QFLAG_DONE (QFLAG_APPLY_ONCE|QFLAG_APPLIED)
+ struct chipset {
+- u16 vendor;
+- void (*f)(void);
++ u32 vendor;
++ u32 device;
++ u32 class;
++ u32 class_mask;
++ u32 flags;
++ void (*f)(int num, int slot, int func);
+ };
+
+ static struct chipset early_qrk[] __initdata = {
+- { PCI_VENDOR_ID_NVIDIA, nvidia_bugs },
+- { PCI_VENDOR_ID_VIA, via_bugs },
+- { PCI_VENDOR_ID_ATI, ati_bugs },
++ { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
++ PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, nvidia_bugs },
++ { PCI_VENDOR_ID_VIA, PCI_ANY_ID,
++ PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, via_bugs },
++ { PCI_VENDOR_ID_ATI, PCI_ANY_ID,
++ PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, ati_bugs },
++ { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB,
++ PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, fix_hypertransport_config },
+ {}
+ };
+
++static void __init check_dev_quirk(int num, int slot, int func)
++{
++ u16 class;
++ u16 vendor;
++ u16 device;
++ u8 type;
++ int i;
+
-+ if (!ll_back_merge_fn(q, req, bio))
-+ break;
++ class = read_pci_config_16(num, slot, func, PCI_CLASS_DEVICE);
+
-+ blk_add_trace_bio(q, bio, BLK_TA_BACKMERGE);
++ if (class == 0xffff)
++ return;
+
-+ req->biotail->bi_next = bio;
-+ req->biotail = bio;
-+ req->nr_sectors = req->hard_nr_sectors += nr_sectors;
-+ req->ioprio = ioprio_best(req->ioprio, prio);
-+ drive_stat_acct(req, 0);
-+ if (!attempt_back_merge(q, req))
-+ elv_merged_request(q, req, el_ret);
-+ goto out;
++ vendor = read_pci_config_16(num, slot, func, PCI_VENDOR_ID);
+
-+ case ELEVATOR_FRONT_MERGE:
-+ BUG_ON(!rq_mergeable(req));
++ device = read_pci_config_16(num, slot, func, PCI_DEVICE_ID);
+
-+ if (!ll_front_merge_fn(q, req, bio))
-+ break;
++ for (i = 0; early_qrk[i].f != NULL; i++) {
++ if (((early_qrk[i].vendor == PCI_ANY_ID) ||
++ (early_qrk[i].vendor == vendor)) &&
++ ((early_qrk[i].device == PCI_ANY_ID) ||
++ (early_qrk[i].device == device)) &&
++ (!((early_qrk[i].class ^ class) &
++ early_qrk[i].class_mask))) {
++ if ((early_qrk[i].flags &
++ QFLAG_DONE) != QFLAG_DONE)
++ early_qrk[i].f(num, slot, func);
++ early_qrk[i].flags |= QFLAG_APPLIED;
++ }
++ }
+
-+ blk_add_trace_bio(q, bio, BLK_TA_FRONTMERGE);
++ type = read_pci_config_byte(num, slot, func,
++ PCI_HEADER_TYPE);
++ if (!(type & 0x80))
++ return;
++}
+
-+ bio->bi_next = req->bio;
-+ req->bio = bio;
+ void __init early_quirks(void)
+ {
+ int num, slot, func;
+@@ -103,36 +178,8 @@ void __init early_quirks(void)
+ return;
+
+ /* Poor man's PCI discovery */
+- for (num = 0; num < 32; num++) {
+- for (slot = 0; slot < 32; slot++) {
+- for (func = 0; func < 8; func++) {
+- u32 class;
+- u32 vendor;
+- u8 type;
+- int i;
+- class = read_pci_config(num,slot,func,
+- PCI_CLASS_REVISION);
+- if (class == 0xffffffff)
+- break;
+-
+- if ((class >> 16) != PCI_CLASS_BRIDGE_PCI)
+- continue;
+-
+- vendor = read_pci_config(num, slot, func,
+- PCI_VENDOR_ID);
+- vendor &= 0xffff;
+-
+- for (i = 0; early_qrk[i].f; i++)
+- if (early_qrk[i].vendor == vendor) {
+- early_qrk[i].f();
+- return;
+- }
+-
+- type = read_pci_config_byte(num, slot, func,
+- PCI_HEADER_TYPE);
+- if (!(type & 0x80))
+- break;
+- }
+- }
+- }
++ for (num = 0; num < 32; num++)
++ for (slot = 0; slot < 32; slot++)
++ for (func = 0; func < 8; func++)
++ check_dev_quirk(num, slot, func);
+ }
+diff --git a/arch/x86/kernel/efi.c b/arch/x86/kernel/efi.c
+new file mode 100644
+index 0000000..1411324
+--- /dev/null
++++ b/arch/x86/kernel/efi.c
+@@ -0,0 +1,512 @@
++/*
++ * Common EFI (Extensible Firmware Interface) support functions
++ * Based on Extensible Firmware Interface Specification version 1.0
++ *
++ * Copyright (C) 1999 VA Linux Systems
++ * Copyright (C) 1999 Walt Drummond <drummond at valinux.com>
++ * Copyright (C) 1999-2002 Hewlett-Packard Co.
++ * David Mosberger-Tang <davidm at hpl.hp.com>
++ * Stephane Eranian <eranian at hpl.hp.com>
++ * Copyright (C) 2005-2008 Intel Co.
++ * Fenghua Yu <fenghua.yu at intel.com>
++ * Bibo Mao <bibo.mao at intel.com>
++ * Chandramouli Narayanan <mouli at linux.intel.com>
++ * Huang Ying <ying.huang at intel.com>
++ *
++ * Copied from efi_32.c to eliminate the duplicated code between EFI
++ * 32/64 support code. --ying 2007-10-26
++ *
++ * All EFI Runtime Services are not implemented yet as EFI only
++ * supports physical mode addressing on SoftSDV. This is to be fixed
++ * in a future version. --drummond 1999-07-20
++ *
++ * Implemented EFI runtime services and virtual mode calls. --davidm
++ *
++ * Goutham Rao: <goutham.rao at intel.com>
++ * Skip non-WB memory and ignore empty memory ranges.
++ */
+
-+ /*
-+ * may not be valid. if the low level driver said
-+ * it didn't need a bounce buffer then it better
-+ * not touch req->buffer either...
-+ */
-+ req->buffer = bio_data(bio);
-+ req->current_nr_sectors = bio_cur_sectors(bio);
-+ req->hard_cur_sectors = req->current_nr_sectors;
-+ req->sector = req->hard_sector = bio->bi_sector;
-+ req->nr_sectors = req->hard_nr_sectors += nr_sectors;
-+ req->ioprio = ioprio_best(req->ioprio, prio);
-+ drive_stat_acct(req, 0);
-+ if (!attempt_front_merge(q, req))
-+ elv_merged_request(q, req, el_ret);
-+ goto out;
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/efi.h>
++#include <linux/bootmem.h>
++#include <linux/spinlock.h>
++#include <linux/uaccess.h>
++#include <linux/time.h>
++#include <linux/io.h>
++#include <linux/reboot.h>
++#include <linux/bcd.h>
+
-+ /* ELV_NO_MERGE: elevator says don't/can't merge. */
-+ default:
-+ ;
-+ }
++#include <asm/setup.h>
++#include <asm/efi.h>
++#include <asm/time.h>
++#include <asm/cacheflush.h>
++#include <asm/tlbflush.h>
+
-+get_rq:
-+ /*
-+ * This sync check and mask will be re-done in init_request_from_bio(),
-+ * but we need to set it earlier to expose the sync flag to the
-+ * rq allocator and io schedulers.
-+ */
-+ rw_flags = bio_data_dir(bio);
-+ if (sync)
-+ rw_flags |= REQ_RW_SYNC;
++#define EFI_DEBUG 1
++#define PFX "EFI: "
+
-+ /*
-+ * Grab a free request. This is might sleep but can not fail.
-+ * Returns with the queue unlocked.
-+ */
-+ req = get_request_wait(q, rw_flags, bio);
++int efi_enabled;
++EXPORT_SYMBOL(efi_enabled);
+
-+ /*
-+ * After dropping the lock and possibly sleeping here, our request
-+ * may now be mergeable after it had proven unmergeable (above).
-+ * We don't worry about that case for efficiency. It won't happen
-+ * often, and the elevators are able to handle it.
-+ */
-+ init_request_from_bio(req, bio);
++struct efi efi;
++EXPORT_SYMBOL(efi);
+
-+ spin_lock_irq(q->queue_lock);
-+ if (elv_queue_empty(q))
-+ blk_plug_device(q);
-+ add_request(q, req);
-+out:
-+ if (sync)
-+ __generic_unplug_device(q);
++struct efi_memory_map memmap;
+
-+ spin_unlock_irq(q->queue_lock);
-+ return 0;
++struct efi efi_phys __initdata;
++static efi_system_table_t efi_systab __initdata;
+
-+end_io:
-+ bio_endio(bio, err);
++static int __init setup_noefi(char *arg)
++{
++ efi_enabled = 0;
+ return 0;
+}
++early_param("noefi", setup_noefi);
+
-+/*
-+ * If bio->bi_dev is a partition, remap the location
-+ */
-+static inline void blk_partition_remap(struct bio *bio)
++static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+{
-+ struct block_device *bdev = bio->bi_bdev;
-+
-+ if (bio_sectors(bio) && bdev != bdev->bd_contains) {
-+ struct hd_struct *p = bdev->bd_part;
-+ const int rw = bio_data_dir(bio);
-+
-+ p->sectors[rw] += bio_sectors(bio);
-+ p->ios[rw]++;
-+
-+ bio->bi_sector += p->start_sect;
-+ bio->bi_bdev = bdev->bd_contains;
-+
-+ blk_add_trace_remap(bdev_get_queue(bio->bi_bdev), bio,
-+ bdev->bd_dev, bio->bi_sector,
-+ bio->bi_sector - p->start_sect);
-+ }
++ return efi_call_virt2(get_time, tm, tc);
+}
+
-+static void handle_bad_sector(struct bio *bio)
++static efi_status_t virt_efi_set_time(efi_time_t *tm)
+{
-+ char b[BDEVNAME_SIZE];
-+
-+ printk(KERN_INFO "attempt to access beyond end of device\n");
-+ printk(KERN_INFO "%s: rw=%ld, want=%Lu, limit=%Lu\n",
-+ bdevname(bio->bi_bdev, b),
-+ bio->bi_rw,
-+ (unsigned long long)bio->bi_sector + bio_sectors(bio),
-+ (long long)(bio->bi_bdev->bd_inode->i_size >> 9));
-+
-+ set_bit(BIO_EOF, &bio->bi_flags);
++ return efi_call_virt1(set_time, tm);
+}
+
-+#ifdef CONFIG_FAIL_MAKE_REQUEST
-+
-+static DECLARE_FAULT_ATTR(fail_make_request);
-+
-+static int __init setup_fail_make_request(char *str)
++static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
++ efi_bool_t *pending,
++ efi_time_t *tm)
+{
-+ return setup_fault_attr(&fail_make_request, str);
++ return efi_call_virt3(get_wakeup_time,
++ enabled, pending, tm);
+}
-+__setup("fail_make_request=", setup_fail_make_request);
+
-+static int should_fail_request(struct bio *bio)
++static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm)
+{
-+ if ((bio->bi_bdev->bd_disk->flags & GENHD_FL_FAIL) ||
-+ (bio->bi_bdev->bd_part && bio->bi_bdev->bd_part->make_it_fail))
-+ return should_fail(&fail_make_request, bio->bi_size);
++ return efi_call_virt2(set_wakeup_time,
++ enabled, tm);
++}
+
-+ return 0;
++static efi_status_t virt_efi_get_variable(efi_char16_t *name,
++ efi_guid_t *vendor,
++ u32 *attr,
++ unsigned long *data_size,
++ void *data)
++{
++ return efi_call_virt5(get_variable,
++ name, vendor, attr,
++ data_size, data);
+}
+
-+static int __init fail_make_request_debugfs(void)
++static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
++ efi_char16_t *name,
++ efi_guid_t *vendor)
+{
-+ return init_fault_attr_dentries(&fail_make_request,
-+ "fail_make_request");
++ return efi_call_virt3(get_next_variable,
++ name_size, name, vendor);
+}
+
-+late_initcall(fail_make_request_debugfs);
++static efi_status_t virt_efi_set_variable(efi_char16_t *name,
++ efi_guid_t *vendor,
++ unsigned long attr,
++ unsigned long data_size,
++ void *data)
++{
++ return efi_call_virt5(set_variable,
++ name, vendor, attr,
++ data_size, data);
++}
+
-+#else /* CONFIG_FAIL_MAKE_REQUEST */
++static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
++{
++ return efi_call_virt1(get_next_high_mono_count, count);
++}
+
-+static inline int should_fail_request(struct bio *bio)
++static void virt_efi_reset_system(int reset_type,
++ efi_status_t status,
++ unsigned long data_size,
++ efi_char16_t *data)
+{
-+ return 0;
++ efi_call_virt4(reset_system, reset_type, status,
++ data_size, data);
+}
+
-+#endif /* CONFIG_FAIL_MAKE_REQUEST */
++static efi_status_t virt_efi_set_virtual_address_map(
++ unsigned long memory_map_size,
++ unsigned long descriptor_size,
++ u32 descriptor_version,
++ efi_memory_desc_t *virtual_map)
++{
++ return efi_call_virt4(set_virtual_address_map,
++ memory_map_size, descriptor_size,
++ descriptor_version, virtual_map);
++}
+
-+/*
-+ * Check whether this bio extends beyond the end of the device.
-+ */
-+static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors)
++static efi_status_t __init phys_efi_set_virtual_address_map(
++ unsigned long memory_map_size,
++ unsigned long descriptor_size,
++ u32 descriptor_version,
++ efi_memory_desc_t *virtual_map)
+{
-+ sector_t maxsector;
++ efi_status_t status;
+
-+ if (!nr_sectors)
-+ return 0;
++ efi_call_phys_prelog();
++ status = efi_call_phys4(efi_phys.set_virtual_address_map,
++ memory_map_size, descriptor_size,
++ descriptor_version, virtual_map);
++ efi_call_phys_epilog();
++ return status;
++}
+
-+ /* Test device or partition size, when known. */
-+ maxsector = bio->bi_bdev->bd_inode->i_size >> 9;
-+ if (maxsector) {
-+ sector_t sector = bio->bi_sector;
++static efi_status_t __init phys_efi_get_time(efi_time_t *tm,
++ efi_time_cap_t *tc)
++{
++ efi_status_t status;
+
-+ if (maxsector < nr_sectors || maxsector - nr_sectors < sector) {
-+ /*
-+ * This may well happen - the kernel calls bread()
-+ * without checking the size of the device, e.g., when
-+ * mounting a device.
-+ */
-+ handle_bad_sector(bio);
-+ return 1;
-+ }
++ efi_call_phys_prelog();
++ status = efi_call_phys2(efi_phys.get_time, tm, tc);
++ efi_call_phys_epilog();
++ return status;
++}
++
++int efi_set_rtc_mmss(unsigned long nowtime)
++{
++ int real_seconds, real_minutes;
++ efi_status_t status;
++ efi_time_t eft;
++ efi_time_cap_t cap;
++
++ status = efi.get_time(&eft, &cap);
++ if (status != EFI_SUCCESS) {
++ printk(KERN_ERR "Oops: efitime: can't read time!\n");
++ return -1;
+ }
+
++ real_seconds = nowtime % 60;
++ real_minutes = nowtime / 60;
++ if (((abs(real_minutes - eft.minute) + 15)/30) & 1)
++ real_minutes += 30;
++ real_minutes %= 60;
++ eft.minute = real_minutes;
++ eft.second = real_seconds;
++
++ status = efi.set_time(&eft);
++ if (status != EFI_SUCCESS) {
++ printk(KERN_ERR "Oops: efitime: can't write time!\n");
++ return -1;
++ }
+ return 0;
+}
+
-+/**
-+ * generic_make_request: hand a buffer to its device driver for I/O
-+ * @bio: The bio describing the location in memory and on the device.
-+ *
-+ * generic_make_request() is used to make I/O requests of block
-+ * devices. It is passed a &struct bio, which describes the I/O that needs
-+ * to be done.
-+ *
-+ * generic_make_request() does not return any status. The
-+ * success/failure status of the request, along with notification of
-+ * completion, is delivered asynchronously through the bio->bi_end_io
-+ * function described (one day) else where.
-+ *
-+ * The caller of generic_make_request must make sure that bi_io_vec
-+ * are set to describe the memory buffer, and that bi_dev and bi_sector are
-+ * set to describe the device address, and the
-+ * bi_end_io and optionally bi_private are set to describe how
-+ * completion notification should be signaled.
-+ *
-+ * generic_make_request and the drivers it calls may use bi_next if this
-+ * bio happens to be merged with someone else, and may change bi_dev and
-+ * bi_sector for remaps as it sees fit. So the values of these fields
-+ * should NOT be depended on after the call to generic_make_request.
-+ */
-+static inline void __generic_make_request(struct bio *bio)
++unsigned long efi_get_time(void)
+{
-+ struct request_queue *q;
-+ sector_t old_sector;
-+ int ret, nr_sectors = bio_sectors(bio);
-+ dev_t old_dev;
-+ int err = -EIO;
-+
-+ might_sleep();
++ efi_status_t status;
++ efi_time_t eft;
++ efi_time_cap_t cap;
+
-+ if (bio_check_eod(bio, nr_sectors))
-+ goto end_io;
++ status = efi.get_time(&eft, &cap);
++ if (status != EFI_SUCCESS)
++ printk(KERN_ERR "Oops: efitime: can't read time!\n");
+
-+ /*
-+ * Resolve the mapping until finished. (drivers are
-+ * still free to implement/resolve their own stacking
-+ * by explicitly returning 0)
-+ *
-+ * NOTE: we don't repeat the blk_size check for each new device.
-+ * Stacking drivers are expected to know what they are doing.
-+ */
-+ old_sector = -1;
-+ old_dev = 0;
-+ do {
-+ char b[BDEVNAME_SIZE];
++ return mktime(eft.year, eft.month, eft.day, eft.hour,
++ eft.minute, eft.second);
++}
+
-+ q = bdev_get_queue(bio->bi_bdev);
-+ if (!q) {
-+ printk(KERN_ERR
-+ "generic_make_request: Trying to access "
-+ "nonexistent block-device %s (%Lu)\n",
-+ bdevname(bio->bi_bdev, b),
-+ (long long) bio->bi_sector);
-+end_io:
-+ bio_endio(bio, err);
-+ break;
-+ }
++#if EFI_DEBUG
++static void __init print_efi_memmap(void)
++{
++ efi_memory_desc_t *md;
++ void *p;
++ int i;
+
-+ if (unlikely(nr_sectors > q->max_hw_sectors)) {
-+ printk("bio too big device %s (%u > %u)\n",
-+ bdevname(bio->bi_bdev, b),
-+ bio_sectors(bio),
-+ q->max_hw_sectors);
-+ goto end_io;
-+ }
++ for (p = memmap.map, i = 0;
++ p < memmap.map_end;
++ p += memmap.desc_size, i++) {
++ md = p;
++ printk(KERN_INFO PFX "mem%02u: type=%u, attr=0x%llx, "
++ "range=[0x%016llx-0x%016llx) (%lluMB)\n",
++ i, md->type, md->attribute, md->phys_addr,
++ md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT),
++ (md->num_pages >> (20 - EFI_PAGE_SHIFT)));
++ }
++}
++#endif /* EFI_DEBUG */
++
++void __init efi_init(void)
++{
++ efi_config_table_t *config_tables;
++ efi_runtime_services_t *runtime;
++ efi_char16_t *c16;
++ char vendor[100] = "unknown";
++ int i = 0;
++ void *tmp;
+
-+ if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
-+ goto end_io;
++#ifdef CONFIG_X86_32
++ efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
++ memmap.phys_map = (void *)boot_params.efi_info.efi_memmap;
++#else
++ efi_phys.systab = (efi_system_table_t *)
++ (boot_params.efi_info.efi_systab |
++ ((__u64)boot_params.efi_info.efi_systab_hi<<32));
++ memmap.phys_map = (void *)
++ (boot_params.efi_info.efi_memmap |
++ ((__u64)boot_params.efi_info.efi_memmap_hi<<32));
++#endif
++ memmap.nr_map = boot_params.efi_info.efi_memmap_size /
++ boot_params.efi_info.efi_memdesc_size;
++ memmap.desc_version = boot_params.efi_info.efi_memdesc_version;
++ memmap.desc_size = boot_params.efi_info.efi_memdesc_size;
++
++ efi.systab = early_ioremap((unsigned long)efi_phys.systab,
++ sizeof(efi_system_table_t));
++ if (efi.systab == NULL)
++ printk(KERN_ERR "Couldn't map the EFI system table!\n");
++ memcpy(&efi_systab, efi.systab, sizeof(efi_system_table_t));
++ early_iounmap(efi.systab, sizeof(efi_system_table_t));
++ efi.systab = &efi_systab;
++
++ /*
++ * Verify the EFI Table
++ */
++ if (efi.systab->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
++ printk(KERN_ERR "EFI system table signature incorrect!\n");
++ if ((efi.systab->hdr.revision >> 16) == 0)
++ printk(KERN_ERR "Warning: EFI system table version "
++ "%d.%02d, expected 1.00 or greater!\n",
++ efi.systab->hdr.revision >> 16,
++ efi.systab->hdr.revision & 0xffff);
++
++ /*
++ * Show what we know for posterity
++ */
++ c16 = tmp = early_ioremap(efi.systab->fw_vendor, 2);
++ if (c16) {
++ for (i = 0; i < sizeof(vendor) && *c16; ++i)
++ vendor[i] = *c16++;
++ vendor[i] = '\0';
++ } else
++ printk(KERN_ERR PFX "Could not map the firmware vendor!\n");
++ early_iounmap(tmp, 2);
+
-+ if (should_fail_request(bio))
-+ goto end_io;
++ printk(KERN_INFO "EFI v%u.%.02u by %s \n",
++ efi.systab->hdr.revision >> 16,
++ efi.systab->hdr.revision & 0xffff, vendor);
++
++ /*
++ * Let's see what config tables the firmware passed to us.
++ */
++ config_tables = early_ioremap(
++ efi.systab->tables,
++ efi.systab->nr_tables * sizeof(efi_config_table_t));
++ if (config_tables == NULL)
++ printk(KERN_ERR "Could not map EFI Configuration Table!\n");
++
++ printk(KERN_INFO);
++ for (i = 0; i < efi.systab->nr_tables; i++) {
++ if (!efi_guidcmp(config_tables[i].guid, MPS_TABLE_GUID)) {
++ efi.mps = config_tables[i].table;
++ printk(" MPS=0x%lx ", config_tables[i].table);
++ } else if (!efi_guidcmp(config_tables[i].guid,
++ ACPI_20_TABLE_GUID)) {
++ efi.acpi20 = config_tables[i].table;
++ printk(" ACPI 2.0=0x%lx ", config_tables[i].table);
++ } else if (!efi_guidcmp(config_tables[i].guid,
++ ACPI_TABLE_GUID)) {
++ efi.acpi = config_tables[i].table;
++ printk(" ACPI=0x%lx ", config_tables[i].table);
++ } else if (!efi_guidcmp(config_tables[i].guid,
++ SMBIOS_TABLE_GUID)) {
++ efi.smbios = config_tables[i].table;
++ printk(" SMBIOS=0x%lx ", config_tables[i].table);
++ } else if (!efi_guidcmp(config_tables[i].guid,
++ HCDP_TABLE_GUID)) {
++ efi.hcdp = config_tables[i].table;
++ printk(" HCDP=0x%lx ", config_tables[i].table);
++ } else if (!efi_guidcmp(config_tables[i].guid,
++ UGA_IO_PROTOCOL_GUID)) {
++ efi.uga = config_tables[i].table;
++ printk(" UGA=0x%lx ", config_tables[i].table);
++ }
++ }
++ printk("\n");
++ early_iounmap(config_tables,
++ efi.systab->nr_tables * sizeof(efi_config_table_t));
+
++ /*
++ * Check out the runtime services table. We need to map
++ * the runtime services table so that we can grab the physical
++ * address of several of the EFI runtime functions, needed to
++ * set the firmware into virtual mode.
++ */
++ runtime = early_ioremap((unsigned long)efi.systab->runtime,
++ sizeof(efi_runtime_services_t));
++ if (runtime != NULL) {
+ /*
-+ * If this device has partitions, remap block n
-+ * of partition p to block n+start(p) of the disk.
++ * We will only need *early* access to the following
++ * two EFI runtime services before set_virtual_address_map
++ * is invoked.
++ */
++ efi_phys.get_time = (efi_get_time_t *)runtime->get_time;
++ efi_phys.set_virtual_address_map =
++ (efi_set_virtual_address_map_t *)
++ runtime->set_virtual_address_map;
++ /*
++ * Make efi_get_time can be called before entering
++ * virtual mode.
+ */
-+ blk_partition_remap(bio);
++ efi.get_time = phys_efi_get_time;
++ } else
++ printk(KERN_ERR "Could not map the EFI runtime service "
++ "table!\n");
++ early_iounmap(runtime, sizeof(efi_runtime_services_t));
+
-+ if (old_sector != -1)
-+ blk_add_trace_remap(q, bio, old_dev, bio->bi_sector,
-+ old_sector);
++ /* Map the EFI memory map */
++ memmap.map = early_ioremap((unsigned long)memmap.phys_map,
++ memmap.nr_map * memmap.desc_size);
++ if (memmap.map == NULL)
++ printk(KERN_ERR "Could not map the EFI memory map!\n");
++ memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size);
++ if (memmap.desc_size != sizeof(efi_memory_desc_t))
++ printk(KERN_WARNING "Kernel-defined memdesc"
++ "doesn't match the one from EFI!\n");
+
-+ blk_add_trace_bio(q, bio, BLK_TA_QUEUE);
++ /* Setup for EFI runtime service */
++ reboot_type = BOOT_EFI;
+
-+ old_sector = bio->bi_sector;
-+ old_dev = bio->bi_bdev->bd_dev;
++#if EFI_DEBUG
++ print_efi_memmap();
++#endif
++}
+
-+ if (bio_check_eod(bio, nr_sectors))
-+ goto end_io;
-+ if (bio_empty_barrier(bio) && !q->prepare_flush_fn) {
-+ err = -EOPNOTSUPP;
-+ goto end_io;
-+ }
++#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
++static void __init runtime_code_page_mkexec(void)
++{
++ efi_memory_desc_t *md;
++ unsigned long end;
++ void *p;
+
-+ ret = q->make_request_fn(q, bio);
-+ } while (ret);
++ if (!(__supported_pte_mask & _PAGE_NX))
++ return;
++
++ /* Make EFI runtime service code area executable */
++ for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ md = p;
++ end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT);
++ if (md->type == EFI_RUNTIME_SERVICES_CODE &&
++ (end >> PAGE_SHIFT) <= max_pfn_mapped) {
++ set_memory_x(md->virt_addr, md->num_pages);
++ set_memory_uc(md->virt_addr, md->num_pages);
++ }
++ }
++ __flush_tlb_all();
+}
++#else
++static inline void __init runtime_code_page_mkexec(void) { }
++#endif
+
+/*
-+ * We only want one ->make_request_fn to be active at a time,
-+ * else stack usage with stacked devices could be a problem.
-+ * So use current->bio_{list,tail} to keep a list of requests
-+ * submited by a make_request_fn function.
-+ * current->bio_tail is also used as a flag to say if
-+ * generic_make_request is currently active in this task or not.
-+ * If it is NULL, then no make_request is active. If it is non-NULL,
-+ * then a make_request is active, and new requests should be added
-+ * at the tail
++ * This function will switch the EFI runtime services to virtual mode.
++ * Essentially, look through the EFI memmap and map every region that
++ * has the runtime attribute bit set in its memory descriptor and update
++ * that memory descriptor with the virtual address obtained from ioremap().
++ * This enables the runtime services to be called without having to
++ * thunk back into physical mode for every invocation.
+ */
-+void generic_make_request(struct bio *bio)
++void __init efi_enter_virtual_mode(void)
+{
-+ if (current->bio_tail) {
-+ /* make_request is active */
-+ *(current->bio_tail) = bio;
-+ bio->bi_next = NULL;
-+ current->bio_tail = &bio->bi_next;
-+ return;
++ efi_memory_desc_t *md;
++ efi_status_t status;
++ unsigned long end;
++ void *p;
++
++ efi.systab = NULL;
++ for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ md = p;
++ if (!(md->attribute & EFI_MEMORY_RUNTIME))
++ continue;
++ end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT);
++ if ((md->attribute & EFI_MEMORY_WB) &&
++ ((end >> PAGE_SHIFT) <= max_pfn_mapped))
++ md->virt_addr = (unsigned long)__va(md->phys_addr);
++ else
++ md->virt_addr = (unsigned long)
++ efi_ioremap(md->phys_addr,
++ md->num_pages << EFI_PAGE_SHIFT);
++ if (!md->virt_addr)
++ printk(KERN_ERR PFX "ioremap of 0x%llX failed!\n",
++ (unsigned long long)md->phys_addr);
++ if ((md->phys_addr <= (unsigned long)efi_phys.systab) &&
++ ((unsigned long)efi_phys.systab < end))
++ efi.systab = (efi_system_table_t *)(unsigned long)
++ (md->virt_addr - md->phys_addr +
++ (unsigned long)efi_phys.systab);
++ }
++
++ BUG_ON(!efi.systab);
++
++ status = phys_efi_set_virtual_address_map(
++ memmap.desc_size * memmap.nr_map,
++ memmap.desc_size,
++ memmap.desc_version,
++ memmap.phys_map);
++
++ if (status != EFI_SUCCESS) {
++ printk(KERN_ALERT "Unable to switch EFI into virtual mode "
++ "(status=%lx)!\n", status);
++ panic("EFI call to SetVirtualAddressMap() failed!");
+ }
-+ /* following loop may be a bit non-obvious, and so deserves some
-+ * explanation.
-+ * Before entering the loop, bio->bi_next is NULL (as all callers
-+ * ensure that) so we have a list with a single bio.
-+ * We pretend that we have just taken it off a longer list, so
-+ * we assign bio_list to the next (which is NULL) and bio_tail
-+ * to &bio_list, thus initialising the bio_list of new bios to be
-+ * added. __generic_make_request may indeed add some more bios
-+ * through a recursive call to generic_make_request. If it
-+ * did, we find a non-NULL value in bio_list and re-enter the loop
-+ * from the top. In this case we really did just take the bio
-+ * of the top of the list (no pretending) and so fixup bio_list and
-+ * bio_tail or bi_next, and call into __generic_make_request again.
++
++ /*
++ * Now that EFI is in virtual mode, update the function
++ * pointers in the runtime service table to the new virtual addresses.
+ *
-+ * The loop was structured like this to make only one call to
-+ * __generic_make_request (which is important as it is large and
-+ * inlined) and to keep the structure simple.
++ * Call EFI services through wrapper functions.
+ */
-+ BUG_ON(bio->bi_next);
-+ do {
-+ current->bio_list = bio->bi_next;
-+ if (bio->bi_next == NULL)
-+ current->bio_tail = &current->bio_list;
-+ else
-+ bio->bi_next = NULL;
-+ __generic_make_request(bio);
-+ bio = current->bio_list;
-+ } while (bio);
-+ current->bio_tail = NULL; /* deactivate */
++ efi.get_time = virt_efi_get_time;
++ efi.set_time = virt_efi_set_time;
++ efi.get_wakeup_time = virt_efi_get_wakeup_time;
++ efi.set_wakeup_time = virt_efi_set_wakeup_time;
++ efi.get_variable = virt_efi_get_variable;
++ efi.get_next_variable = virt_efi_get_next_variable;
++ efi.set_variable = virt_efi_set_variable;
++ efi.get_next_high_mono_count = virt_efi_get_next_high_mono_count;
++ efi.reset_system = virt_efi_reset_system;
++ efi.set_virtual_address_map = virt_efi_set_virtual_address_map;
++ runtime_code_page_mkexec();
++ early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
++ memmap.map = NULL;
+}
+
-+EXPORT_SYMBOL(generic_make_request);
++/*
++ * Convenience functions to obtain memory types and attributes
++ */
++u32 efi_mem_type(unsigned long phys_addr)
++{
++ efi_memory_desc_t *md;
++ void *p;
+
-+/**
-+ * submit_bio: submit a bio to the block device layer for I/O
-+ * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
-+ * @bio: The &struct bio which describes the I/O
++ for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ md = p;
++ if ((md->phys_addr <= phys_addr) &&
++ (phys_addr < (md->phys_addr +
++ (md->num_pages << EFI_PAGE_SHIFT))))
++ return md->type;
++ }
++ return 0;
++}
++
++u64 efi_mem_attributes(unsigned long phys_addr)
++{
++ efi_memory_desc_t *md;
++ void *p;
++
++ for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ md = p;
++ if ((md->phys_addr <= phys_addr) &&
++ (phys_addr < (md->phys_addr +
++ (md->num_pages << EFI_PAGE_SHIFT))))
++ return md->attribute;
++ }
++ return 0;
++}
+diff --git a/arch/x86/kernel/efi_32.c b/arch/x86/kernel/efi_32.c
+index e2be78f..cb91f98 100644
+--- a/arch/x86/kernel/efi_32.c
++++ b/arch/x86/kernel/efi_32.c
+@@ -20,40 +20,15 @@
+ */
+
+ #include <linux/kernel.h>
+-#include <linux/init.h>
+-#include <linux/mm.h>
+ #include <linux/types.h>
+-#include <linux/time.h>
+-#include <linux/spinlock.h>
+-#include <linux/bootmem.h>
+ #include <linux/ioport.h>
+-#include <linux/module.h>
+ #include <linux/efi.h>
+-#include <linux/kexec.h>
+
+-#include <asm/setup.h>
+ #include <asm/io.h>
+ #include <asm/page.h>
+ #include <asm/pgtable.h>
+-#include <asm/processor.h>
+-#include <asm/desc.h>
+ #include <asm/tlbflush.h>
+
+-#define EFI_DEBUG 0
+-#define PFX "EFI: "
+-
+-extern efi_status_t asmlinkage efi_call_phys(void *, ...);
+-
+-struct efi efi;
+-EXPORT_SYMBOL(efi);
+-static struct efi efi_phys;
+-struct efi_memory_map memmap;
+-
+-/*
+- * We require an early boot_ioremap mapping mechanism initially
+- */
+-extern void * boot_ioremap(unsigned long, unsigned long);
+-
+ /*
+ * To make EFI call EFI runtime service in physical addressing mode we need
+ * prelog/epilog before/after the invocation to disable interrupt, to
+@@ -62,16 +37,14 @@ extern void * boot_ioremap(unsigned long, unsigned long);
+ */
+
+ static unsigned long efi_rt_eflags;
+-static DEFINE_SPINLOCK(efi_rt_lock);
+ static pgd_t efi_bak_pg_dir_pointer[2];
+
+-static void efi_call_phys_prelog(void) __acquires(efi_rt_lock)
++void efi_call_phys_prelog(void)
+ {
+ unsigned long cr4;
+ unsigned long temp;
+- struct Xgt_desc_struct gdt_descr;
++ struct desc_ptr gdt_descr;
+
+- spin_lock(&efi_rt_lock);
+ local_irq_save(efi_rt_eflags);
+
+ /*
+@@ -101,17 +74,17 @@ static void efi_call_phys_prelog(void) __acquires(efi_rt_lock)
+ /*
+ * After the lock is released, the original page table is restored.
+ */
+- local_flush_tlb();
++ __flush_tlb_all();
+
+ gdt_descr.address = __pa(get_cpu_gdt_table(0));
+ gdt_descr.size = GDT_SIZE - 1;
+ load_gdt(&gdt_descr);
+ }
+
+-static void efi_call_phys_epilog(void) __releases(efi_rt_lock)
++void efi_call_phys_epilog(void)
+ {
+ unsigned long cr4;
+- struct Xgt_desc_struct gdt_descr;
++ struct desc_ptr gdt_descr;
+
+ gdt_descr.address = (unsigned long)get_cpu_gdt_table(0);
+ gdt_descr.size = GDT_SIZE - 1;
+@@ -132,586 +105,7 @@ static void efi_call_phys_epilog(void) __releases(efi_rt_lock)
+ /*
+ * After the lock is released, the original page table is restored.
+ */
+- local_flush_tlb();
++ __flush_tlb_all();
+
+ local_irq_restore(efi_rt_eflags);
+- spin_unlock(&efi_rt_lock);
+-}
+-
+-static efi_status_t
+-phys_efi_set_virtual_address_map(unsigned long memory_map_size,
+- unsigned long descriptor_size,
+- u32 descriptor_version,
+- efi_memory_desc_t *virtual_map)
+-{
+- efi_status_t status;
+-
+- efi_call_phys_prelog();
+- status = efi_call_phys(efi_phys.set_virtual_address_map,
+- memory_map_size, descriptor_size,
+- descriptor_version, virtual_map);
+- efi_call_phys_epilog();
+- return status;
+-}
+-
+-static efi_status_t
+-phys_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+-{
+- efi_status_t status;
+-
+- efi_call_phys_prelog();
+- status = efi_call_phys(efi_phys.get_time, tm, tc);
+- efi_call_phys_epilog();
+- return status;
+-}
+-
+-inline int efi_set_rtc_mmss(unsigned long nowtime)
+-{
+- int real_seconds, real_minutes;
+- efi_status_t status;
+- efi_time_t eft;
+- efi_time_cap_t cap;
+-
+- spin_lock(&efi_rt_lock);
+- status = efi.get_time(&eft, &cap);
+- spin_unlock(&efi_rt_lock);
+- if (status != EFI_SUCCESS)
+- panic("Ooops, efitime: can't read time!\n");
+- real_seconds = nowtime % 60;
+- real_minutes = nowtime / 60;
+-
+- if (((abs(real_minutes - eft.minute) + 15)/30) & 1)
+- real_minutes += 30;
+- real_minutes %= 60;
+-
+- eft.minute = real_minutes;
+- eft.second = real_seconds;
+-
+- if (status != EFI_SUCCESS) {
+- printk("Ooops: efitime: can't read time!\n");
+- return -1;
+- }
+- return 0;
+-}
+-/*
+- * This is used during kernel init before runtime
+- * services have been remapped and also during suspend, therefore,
+- * we'll need to call both in physical and virtual modes.
+- */
+-inline unsigned long efi_get_time(void)
+-{
+- efi_status_t status;
+- efi_time_t eft;
+- efi_time_cap_t cap;
+-
+- if (efi.get_time) {
+- /* if we are in virtual mode use remapped function */
+- status = efi.get_time(&eft, &cap);
+- } else {
+- /* we are in physical mode */
+- status = phys_efi_get_time(&eft, &cap);
+- }
+-
+- if (status != EFI_SUCCESS)
+- printk("Oops: efitime: can't read time status: 0x%lx\n",status);
+-
+- return mktime(eft.year, eft.month, eft.day, eft.hour,
+- eft.minute, eft.second);
+-}
+-
+-int is_available_memory(efi_memory_desc_t * md)
+-{
+- if (!(md->attribute & EFI_MEMORY_WB))
+- return 0;
+-
+- switch (md->type) {
+- case EFI_LOADER_CODE:
+- case EFI_LOADER_DATA:
+- case EFI_BOOT_SERVICES_CODE:
+- case EFI_BOOT_SERVICES_DATA:
+- case EFI_CONVENTIONAL_MEMORY:
+- return 1;
+- }
+- return 0;
+-}
+-
+-/*
+- * We need to map the EFI memory map again after paging_init().
+- */
+-void __init efi_map_memmap(void)
+-{
+- memmap.map = NULL;
+-
+- memmap.map = bt_ioremap((unsigned long) memmap.phys_map,
+- (memmap.nr_map * memmap.desc_size));
+- if (memmap.map == NULL)
+- printk(KERN_ERR PFX "Could not remap the EFI memmap!\n");
+-
+- memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size);
+-}
+-
+-#if EFI_DEBUG
+-static void __init print_efi_memmap(void)
+-{
+- efi_memory_desc_t *md;
+- void *p;
+- int i;
+-
+- for (p = memmap.map, i = 0; p < memmap.map_end; p += memmap.desc_size, i++) {
+- md = p;
+- printk(KERN_INFO "mem%02u: type=%u, attr=0x%llx, "
+- "range=[0x%016llx-0x%016llx) (%lluMB)\n",
+- i, md->type, md->attribute, md->phys_addr,
+- md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT),
+- (md->num_pages >> (20 - EFI_PAGE_SHIFT)));
+- }
+-}
+-#endif /* EFI_DEBUG */
+-
+-/*
+- * Walks the EFI memory map and calls CALLBACK once for each EFI
+- * memory descriptor that has memory that is available for kernel use.
+- */
+-void efi_memmap_walk(efi_freemem_callback_t callback, void *arg)
+-{
+- int prev_valid = 0;
+- struct range {
+- unsigned long start;
+- unsigned long end;
+- } uninitialized_var(prev), curr;
+- efi_memory_desc_t *md;
+- unsigned long start, end;
+- void *p;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+-
+- if ((md->num_pages == 0) || (!is_available_memory(md)))
+- continue;
+-
+- curr.start = md->phys_addr;
+- curr.end = curr.start + (md->num_pages << EFI_PAGE_SHIFT);
+-
+- if (!prev_valid) {
+- prev = curr;
+- prev_valid = 1;
+- } else {
+- if (curr.start < prev.start)
+- printk(KERN_INFO PFX "Unordered memory map\n");
+- if (prev.end == curr.start)
+- prev.end = curr.end;
+- else {
+- start =
+- (unsigned long) (PAGE_ALIGN(prev.start));
+- end = (unsigned long) (prev.end & PAGE_MASK);
+- if ((end > start)
+- && (*callback) (start, end, arg) < 0)
+- return;
+- prev = curr;
+- }
+- }
+- }
+- if (prev_valid) {
+- start = (unsigned long) PAGE_ALIGN(prev.start);
+- end = (unsigned long) (prev.end & PAGE_MASK);
+- if (end > start)
+- (*callback) (start, end, arg);
+- }
+-}
+-
+-void __init efi_init(void)
+-{
+- efi_config_table_t *config_tables;
+- efi_runtime_services_t *runtime;
+- efi_char16_t *c16;
+- char vendor[100] = "unknown";
+- unsigned long num_config_tables;
+- int i = 0;
+-
+- memset(&efi, 0, sizeof(efi) );
+- memset(&efi_phys, 0, sizeof(efi_phys));
+-
+- efi_phys.systab =
+- (efi_system_table_t *)boot_params.efi_info.efi_systab;
+- memmap.phys_map = (void *)boot_params.efi_info.efi_memmap;
+- memmap.nr_map = boot_params.efi_info.efi_memmap_size/
+- boot_params.efi_info.efi_memdesc_size;
+- memmap.desc_version = boot_params.efi_info.efi_memdesc_version;
+- memmap.desc_size = boot_params.efi_info.efi_memdesc_size;
+-
+- efi.systab = (efi_system_table_t *)
+- boot_ioremap((unsigned long) efi_phys.systab,
+- sizeof(efi_system_table_t));
+- /*
+- * Verify the EFI Table
+- */
+- if (efi.systab == NULL)
+- printk(KERN_ERR PFX "Woah! Couldn't map the EFI system table.\n");
+- if (efi.systab->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
+- printk(KERN_ERR PFX "Woah! EFI system table signature incorrect\n");
+- if ((efi.systab->hdr.revision >> 16) == 0)
+- printk(KERN_ERR PFX "Warning: EFI system table version "
+- "%d.%02d, expected 1.00 or greater\n",
+- efi.systab->hdr.revision >> 16,
+- efi.systab->hdr.revision & 0xffff);
+-
+- /*
+- * Grab some details from the system table
+- */
+- num_config_tables = efi.systab->nr_tables;
+- config_tables = (efi_config_table_t *)efi.systab->tables;
+- runtime = efi.systab->runtime;
+-
+- /*
+- * Show what we know for posterity
+- */
+- c16 = (efi_char16_t *) boot_ioremap(efi.systab->fw_vendor, 2);
+- if (c16) {
+- for (i = 0; i < (sizeof(vendor) - 1) && *c16; ++i)
+- vendor[i] = *c16++;
+- vendor[i] = '\0';
+- } else
+- printk(KERN_ERR PFX "Could not map the firmware vendor!\n");
+-
+- printk(KERN_INFO PFX "EFI v%u.%.02u by %s \n",
+- efi.systab->hdr.revision >> 16,
+- efi.systab->hdr.revision & 0xffff, vendor);
+-
+- /*
+- * Let's see what config tables the firmware passed to us.
+- */
+- config_tables = (efi_config_table_t *)
+- boot_ioremap((unsigned long) config_tables,
+- num_config_tables * sizeof(efi_config_table_t));
+-
+- if (config_tables == NULL)
+- printk(KERN_ERR PFX "Could not map EFI Configuration Table!\n");
+-
+- efi.mps = EFI_INVALID_TABLE_ADDR;
+- efi.acpi = EFI_INVALID_TABLE_ADDR;
+- efi.acpi20 = EFI_INVALID_TABLE_ADDR;
+- efi.smbios = EFI_INVALID_TABLE_ADDR;
+- efi.sal_systab = EFI_INVALID_TABLE_ADDR;
+- efi.boot_info = EFI_INVALID_TABLE_ADDR;
+- efi.hcdp = EFI_INVALID_TABLE_ADDR;
+- efi.uga = EFI_INVALID_TABLE_ADDR;
+-
+- for (i = 0; i < num_config_tables; i++) {
+- if (efi_guidcmp(config_tables[i].guid, MPS_TABLE_GUID) == 0) {
+- efi.mps = config_tables[i].table;
+- printk(KERN_INFO " MPS=0x%lx ", config_tables[i].table);
+- } else
+- if (efi_guidcmp(config_tables[i].guid, ACPI_20_TABLE_GUID) == 0) {
+- efi.acpi20 = config_tables[i].table;
+- printk(KERN_INFO " ACPI 2.0=0x%lx ", config_tables[i].table);
+- } else
+- if (efi_guidcmp(config_tables[i].guid, ACPI_TABLE_GUID) == 0) {
+- efi.acpi = config_tables[i].table;
+- printk(KERN_INFO " ACPI=0x%lx ", config_tables[i].table);
+- } else
+- if (efi_guidcmp(config_tables[i].guid, SMBIOS_TABLE_GUID) == 0) {
+- efi.smbios = config_tables[i].table;
+- printk(KERN_INFO " SMBIOS=0x%lx ", config_tables[i].table);
+- } else
+- if (efi_guidcmp(config_tables[i].guid, HCDP_TABLE_GUID) == 0) {
+- efi.hcdp = config_tables[i].table;
+- printk(KERN_INFO " HCDP=0x%lx ", config_tables[i].table);
+- } else
+- if (efi_guidcmp(config_tables[i].guid, UGA_IO_PROTOCOL_GUID) == 0) {
+- efi.uga = config_tables[i].table;
+- printk(KERN_INFO " UGA=0x%lx ", config_tables[i].table);
+- }
+- }
+- printk("\n");
+-
+- /*
+- * Check out the runtime services table. We need to map
+- * the runtime services table so that we can grab the physical
+- * address of several of the EFI runtime functions, needed to
+- * set the firmware into virtual mode.
+- */
+-
+- runtime = (efi_runtime_services_t *) boot_ioremap((unsigned long)
+- runtime,
+- sizeof(efi_runtime_services_t));
+- if (runtime != NULL) {
+- /*
+- * We will only need *early* access to the following
+- * two EFI runtime services before set_virtual_address_map
+- * is invoked.
+- */
+- efi_phys.get_time = (efi_get_time_t *) runtime->get_time;
+- efi_phys.set_virtual_address_map =
+- (efi_set_virtual_address_map_t *)
+- runtime->set_virtual_address_map;
+- } else
+- printk(KERN_ERR PFX "Could not map the runtime service table!\n");
+-
+- /* Map the EFI memory map for use until paging_init() */
+- memmap.map = boot_ioremap(boot_params.efi_info.efi_memmap,
+- boot_params.efi_info.efi_memmap_size);
+- if (memmap.map == NULL)
+- printk(KERN_ERR PFX "Could not map the EFI memory map!\n");
+-
+- memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size);
+-
+-#if EFI_DEBUG
+- print_efi_memmap();
+-#endif
+-}
+-
+-static inline void __init check_range_for_systab(efi_memory_desc_t *md)
+-{
+- if (((unsigned long)md->phys_addr <= (unsigned long)efi_phys.systab) &&
+- ((unsigned long)efi_phys.systab < md->phys_addr +
+- ((unsigned long)md->num_pages << EFI_PAGE_SHIFT))) {
+- unsigned long addr;
+-
+- addr = md->virt_addr - md->phys_addr +
+- (unsigned long)efi_phys.systab;
+- efi.systab = (efi_system_table_t *)addr;
+- }
+-}
+-
+-/*
+- * Wrap all the virtual calls in a way that forces the parameters on the stack.
+- */
+-
+-#define efi_call_virt(f, args...) \
+- ((efi_##f##_t __attribute__((regparm(0)))*)efi.systab->runtime->f)(args)
+-
+-static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+-{
+- return efi_call_virt(get_time, tm, tc);
+-}
+-
+-static efi_status_t virt_efi_set_time (efi_time_t *tm)
+-{
+- return efi_call_virt(set_time, tm);
+-}
+-
+-static efi_status_t virt_efi_get_wakeup_time (efi_bool_t *enabled,
+- efi_bool_t *pending,
+- efi_time_t *tm)
+-{
+- return efi_call_virt(get_wakeup_time, enabled, pending, tm);
+-}
+-
+-static efi_status_t virt_efi_set_wakeup_time (efi_bool_t enabled,
+- efi_time_t *tm)
+-{
+- return efi_call_virt(set_wakeup_time, enabled, tm);
+-}
+-
+-static efi_status_t virt_efi_get_variable (efi_char16_t *name,
+- efi_guid_t *vendor, u32 *attr,
+- unsigned long *data_size, void *data)
+-{
+- return efi_call_virt(get_variable, name, vendor, attr, data_size, data);
+-}
+-
+-static efi_status_t virt_efi_get_next_variable (unsigned long *name_size,
+- efi_char16_t *name,
+- efi_guid_t *vendor)
+-{
+- return efi_call_virt(get_next_variable, name_size, name, vendor);
+-}
+-
+-static efi_status_t virt_efi_set_variable (efi_char16_t *name,
+- efi_guid_t *vendor,
+- unsigned long attr,
+- unsigned long data_size, void *data)
+-{
+- return efi_call_virt(set_variable, name, vendor, attr, data_size, data);
+-}
+-
+-static efi_status_t virt_efi_get_next_high_mono_count (u32 *count)
+-{
+- return efi_call_virt(get_next_high_mono_count, count);
+-}
+-
+-static void virt_efi_reset_system (int reset_type, efi_status_t status,
+- unsigned long data_size,
+- efi_char16_t *data)
+-{
+- efi_call_virt(reset_system, reset_type, status, data_size, data);
+-}
+-
+-/*
+- * This function will switch the EFI runtime services to virtual mode.
+- * Essentially, look through the EFI memmap and map every region that
+- * has the runtime attribute bit set in its memory descriptor and update
+- * that memory descriptor with the virtual address obtained from ioremap().
+- * This enables the runtime services to be called without having to
+- * thunk back into physical mode for every invocation.
+- */
+-
+-void __init efi_enter_virtual_mode(void)
+-{
+- efi_memory_desc_t *md;
+- efi_status_t status;
+- void *p;
+-
+- efi.systab = NULL;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+-
+- if (!(md->attribute & EFI_MEMORY_RUNTIME))
+- continue;
+-
+- md->virt_addr = (unsigned long)ioremap(md->phys_addr,
+- md->num_pages << EFI_PAGE_SHIFT);
+- if (!(unsigned long)md->virt_addr) {
+- printk(KERN_ERR PFX "ioremap of 0x%lX failed\n",
+- (unsigned long)md->phys_addr);
+- }
+- /* update the virtual address of the EFI system table */
+- check_range_for_systab(md);
+- }
+-
+- BUG_ON(!efi.systab);
+-
+- status = phys_efi_set_virtual_address_map(
+- memmap.desc_size * memmap.nr_map,
+- memmap.desc_size,
+- memmap.desc_version,
+- memmap.phys_map);
+-
+- if (status != EFI_SUCCESS) {
+- printk (KERN_ALERT "You are screwed! "
+- "Unable to switch EFI into virtual mode "
+- "(status=%lx)\n", status);
+- panic("EFI call to SetVirtualAddressMap() failed!");
+- }
+-
+- /*
+- * Now that EFI is in virtual mode, update the function
+- * pointers in the runtime service table to the new virtual addresses.
+- */
+-
+- efi.get_time = virt_efi_get_time;
+- efi.set_time = virt_efi_set_time;
+- efi.get_wakeup_time = virt_efi_get_wakeup_time;
+- efi.set_wakeup_time = virt_efi_set_wakeup_time;
+- efi.get_variable = virt_efi_get_variable;
+- efi.get_next_variable = virt_efi_get_next_variable;
+- efi.set_variable = virt_efi_set_variable;
+- efi.get_next_high_mono_count = virt_efi_get_next_high_mono_count;
+- efi.reset_system = virt_efi_reset_system;
+-}
+-
+-void __init
+-efi_initialize_iomem_resources(struct resource *code_resource,
+- struct resource *data_resource,
+- struct resource *bss_resource)
+-{
+- struct resource *res;
+- efi_memory_desc_t *md;
+- void *p;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+-
+- if ((md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT)) >
+- 0x100000000ULL)
+- continue;
+- res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
+- switch (md->type) {
+- case EFI_RESERVED_TYPE:
+- res->name = "Reserved Memory";
+- break;
+- case EFI_LOADER_CODE:
+- res->name = "Loader Code";
+- break;
+- case EFI_LOADER_DATA:
+- res->name = "Loader Data";
+- break;
+- case EFI_BOOT_SERVICES_DATA:
+- res->name = "BootServices Data";
+- break;
+- case EFI_BOOT_SERVICES_CODE:
+- res->name = "BootServices Code";
+- break;
+- case EFI_RUNTIME_SERVICES_CODE:
+- res->name = "Runtime Service Code";
+- break;
+- case EFI_RUNTIME_SERVICES_DATA:
+- res->name = "Runtime Service Data";
+- break;
+- case EFI_CONVENTIONAL_MEMORY:
+- res->name = "Conventional Memory";
+- break;
+- case EFI_UNUSABLE_MEMORY:
+- res->name = "Unusable Memory";
+- break;
+- case EFI_ACPI_RECLAIM_MEMORY:
+- res->name = "ACPI Reclaim";
+- break;
+- case EFI_ACPI_MEMORY_NVS:
+- res->name = "ACPI NVS";
+- break;
+- case EFI_MEMORY_MAPPED_IO:
+- res->name = "Memory Mapped IO";
+- break;
+- case EFI_MEMORY_MAPPED_IO_PORT_SPACE:
+- res->name = "Memory Mapped IO Port Space";
+- break;
+- default:
+- res->name = "Reserved";
+- break;
+- }
+- res->start = md->phys_addr;
+- res->end = res->start + ((md->num_pages << EFI_PAGE_SHIFT) - 1);
+- res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+- if (request_resource(&iomem_resource, res) < 0)
+- printk(KERN_ERR PFX "Failed to allocate res %s : "
+- "0x%llx-0x%llx\n", res->name,
+- (unsigned long long)res->start,
+- (unsigned long long)res->end);
+- /*
+- * We don't know which region contains kernel data so we try
+- * it repeatedly and let the resource manager test it.
+- */
+- if (md->type == EFI_CONVENTIONAL_MEMORY) {
+- request_resource(res, code_resource);
+- request_resource(res, data_resource);
+- request_resource(res, bss_resource);
+-#ifdef CONFIG_KEXEC
+- request_resource(res, &crashk_res);
+-#endif
+- }
+- }
+-}
+-
+-/*
+- * Convenience functions to obtain memory types and attributes
+- */
+-
+-u32 efi_mem_type(unsigned long phys_addr)
+-{
+- efi_memory_desc_t *md;
+- void *p;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+- if ((md->phys_addr <= phys_addr) && (phys_addr <
+- (md->phys_addr + (md-> num_pages << EFI_PAGE_SHIFT)) ))
+- return md->type;
+- }
+- return 0;
+-}
+-
+-u64 efi_mem_attributes(unsigned long phys_addr)
+-{
+- efi_memory_desc_t *md;
+- void *p;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+- if ((md->phys_addr <= phys_addr) && (phys_addr <
+- (md->phys_addr + (md-> num_pages << EFI_PAGE_SHIFT)) ))
+- return md->attribute;
+- }
+- return 0;
+ }
+diff --git a/arch/x86/kernel/efi_64.c b/arch/x86/kernel/efi_64.c
+new file mode 100644
+index 0000000..4b73992
+--- /dev/null
++++ b/arch/x86/kernel/efi_64.c
+@@ -0,0 +1,134 @@
++/*
++ * x86_64 specific EFI support functions
++ * Based on Extensible Firmware Interface Specification version 1.0
+ *
-+ * submit_bio() is very similar in purpose to generic_make_request(), and
-+ * uses that function to do most of the work. Both are fairly rough
-+ * interfaces, @bio must be presetup and ready for I/O.
++ * Copyright (C) 2005-2008 Intel Co.
++ * Fenghua Yu <fenghua.yu at intel.com>
++ * Bibo Mao <bibo.mao at intel.com>
++ * Chandramouli Narayanan <mouli at linux.intel.com>
++ * Huang Ying <ying.huang at intel.com>
++ *
++ * Code to convert EFI to E820 map has been implemented in elilo bootloader
++ * based on a EFI patch by Edgar Hucek. Based on the E820 map, the page table
++ * is setup appropriately for EFI runtime code.
++ * - mouli 06/14/2007.
+ *
+ */
-+void submit_bio(int rw, struct bio *bio)
-+{
-+ int count = bio_sectors(bio);
+
-+ bio->bi_rw |= rw;
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/mm.h>
++#include <linux/types.h>
++#include <linux/spinlock.h>
++#include <linux/bootmem.h>
++#include <linux/ioport.h>
++#include <linux/module.h>
++#include <linux/efi.h>
++#include <linux/uaccess.h>
++#include <linux/io.h>
++#include <linux/reboot.h>
+
-+ /*
-+ * If it's a regular read/write or a barrier with data attached,
-+ * go through the normal accounting stuff before submission.
-+ */
-+ if (!bio_empty_barrier(bio)) {
++#include <asm/setup.h>
++#include <asm/page.h>
++#include <asm/e820.h>
++#include <asm/pgtable.h>
++#include <asm/tlbflush.h>
++#include <asm/proto.h>
++#include <asm/efi.h>
+
-+ BIO_BUG_ON(!bio->bi_size);
-+ BIO_BUG_ON(!bio->bi_io_vec);
++static pgd_t save_pgd __initdata;
++static unsigned long efi_flags __initdata;
+
-+ if (rw & WRITE) {
-+ count_vm_events(PGPGOUT, count);
-+ } else {
-+ task_io_account_read(bio->bi_size);
-+ count_vm_events(PGPGIN, count);
-+ }
++static void __init early_mapping_set_exec(unsigned long start,
++ unsigned long end,
++ int executable)
++{
++ pte_t *kpte;
++ int level;
++
++ while (start < end) {
++ kpte = lookup_address((unsigned long)__va(start), &level);
++ BUG_ON(!kpte);
++ if (executable)
++ set_pte(kpte, pte_mkexec(*kpte));
++ else
++ set_pte(kpte, __pte((pte_val(*kpte) | _PAGE_NX) & \
++ __supported_pte_mask));
++ if (level == 4)
++ start = (start + PMD_SIZE) & PMD_MASK;
++ else
++ start = (start + PAGE_SIZE) & PAGE_MASK;
++ }
++}
+
-+ if (unlikely(block_dump)) {
-+ char b[BDEVNAME_SIZE];
-+ printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n",
-+ current->comm, task_pid_nr(current),
-+ (rw & WRITE) ? "WRITE" : "READ",
-+ (unsigned long long)bio->bi_sector,
-+ bdevname(bio->bi_bdev,b));
++static void __init early_runtime_code_mapping_set_exec(int executable)
++{
++ efi_memory_desc_t *md;
++ void *p;
++
++ if (!(__supported_pte_mask & _PAGE_NX))
++ return;
++
++ /* Make EFI runtime service code area executable */
++ for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ md = p;
++ if (md->type == EFI_RUNTIME_SERVICES_CODE) {
++ unsigned long end;
++ end = md->phys_addr + (md->num_pages << PAGE_SHIFT);
++ early_mapping_set_exec(md->phys_addr, end, executable);
+ }
+ }
-+
-+ generic_make_request(bio);
+}
+
-+EXPORT_SYMBOL(submit_bio);
-+
-+/**
-+ * __end_that_request_first - end I/O on a request
-+ * @req: the request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete
-+ *
-+ * Description:
-+ * Ends I/O on a number of bytes attached to @req, and sets it up
-+ * for the next range of segments (if any) in the cluster.
-+ *
-+ * Return:
-+ * 0 - we are done with this request, call end_that_request_last()
-+ * 1 - still buffers pending for this request
-+ **/
-+static int __end_that_request_first(struct request *req, int error,
-+ int nr_bytes)
++void __init efi_call_phys_prelog(void)
+{
-+ int total_bytes, bio_nbytes, next_idx = 0;
-+ struct bio *bio;
++ unsigned long vaddress;
+
-+ blk_add_trace_rq(req->q, req, BLK_TA_COMPLETE);
++ local_irq_save(efi_flags);
++ early_runtime_code_mapping_set_exec(1);
++ vaddress = (unsigned long)__va(0x0UL);
++ save_pgd = *pgd_offset_k(0x0UL);
++ set_pgd(pgd_offset_k(0x0UL), *pgd_offset_k(vaddress));
++ __flush_tlb_all();
++}
+
++void __init efi_call_phys_epilog(void)
++{
+ /*
-+ * for a REQ_BLOCK_PC request, we want to carry any eventual
-+ * sense key with us all the way through
++ * After the lock is released, the original page table is restored.
+ */
-+ if (!blk_pc_request(req))
-+ req->errors = 0;
++ set_pgd(pgd_offset_k(0x0UL), save_pgd);
++ early_runtime_code_mapping_set_exec(0);
++ __flush_tlb_all();
++ local_irq_restore(efi_flags);
++}
+
-+ if (error) {
-+ if (blk_fs_request(req) && !(req->cmd_flags & REQ_QUIET))
-+ printk("end_request: I/O error, dev %s, sector %llu\n",
-+ req->rq_disk ? req->rq_disk->disk_name : "?",
-+ (unsigned long long)req->sector);
-+ }
++void __init efi_reserve_bootmem(void)
++{
++ reserve_bootmem_generic((unsigned long)memmap.phys_map,
++ memmap.nr_map * memmap.desc_size);
++}
+
-+ if (blk_fs_request(req) && req->rq_disk) {
-+ const int rw = rq_data_dir(req);
++void __iomem * __init efi_ioremap(unsigned long offset,
++ unsigned long size)
++{
++ static unsigned pages_mapped;
++ unsigned long last_addr;
++ unsigned i, pages;
+
-+ disk_stat_add(req->rq_disk, sectors[rw], nr_bytes >> 9);
++ last_addr = offset + size - 1;
++ offset &= PAGE_MASK;
++ pages = (PAGE_ALIGN(last_addr) - offset) >> PAGE_SHIFT;
++ if (pages_mapped + pages > MAX_EFI_IO_PAGES)
++ return NULL;
++
++ for (i = 0; i < pages; i++) {
++ __set_fixmap(FIX_EFI_IO_MAP_FIRST_PAGE - pages_mapped,
++ offset, PAGE_KERNEL_EXEC_NOCACHE);
++ offset += PAGE_SIZE;
++ pages_mapped++;
+ }
+
-+ total_bytes = bio_nbytes = 0;
-+ while ((bio = req->bio) != NULL) {
-+ int nbytes;
++ return (void __iomem *)__fix_to_virt(FIX_EFI_IO_MAP_FIRST_PAGE - \
++ (pages_mapped - pages));
++}
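Editorial note, not part of the patch: the `efi_ioremap()` added above maps a physical range into fixmap slots by first rounding the region out to whole pages. A hedged C sketch of just that page-count arithmetic (macro values assumed for a 4 KiB-page build; names are illustrative):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* Mirrors the computation in efi_ioremap(): the byte range
 * [offset, offset + size) is widened to page granularity, and the
 * number of fixmap pages needed is derived from the aligned span. */
unsigned long efi_map_page_count(unsigned long offset, unsigned long size)
{
    unsigned long last_addr = offset + size - 1;

    offset &= PAGE_MASK;                 /* round start down */
    return (PAGE_ALIGN(last_addr) - offset) >> PAGE_SHIFT;
}
```

For example, a 16-byte region at 0x1234 needs one page, while a 0x20-byte region starting at 0xFF0 straddles a page boundary and needs two.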
+diff --git a/arch/x86/kernel/efi_stub_64.S b/arch/x86/kernel/efi_stub_64.S
+new file mode 100644
+index 0000000..99b47d4
+--- /dev/null
++++ b/arch/x86/kernel/efi_stub_64.S
+@@ -0,0 +1,109 @@
++/*
++ * Function calling ABI conversion from Linux to EFI for x86_64
++ *
++ * Copyright (C) 2007 Intel Corp
++ * Bibo Mao <bibo.mao at intel.com>
++ * Huang Ying <ying.huang at intel.com>
++ */
+
-+ /*
-+ * For an empty barrier request, the low level driver must
-+ * store a potential error location in ->sector. We pass
-+ * that back up in ->bi_sector.
-+ */
-+ if (blk_empty_barrier(req))
-+ bio->bi_sector = req->sector;
++#include <linux/linkage.h>
+
-+ if (nr_bytes >= bio->bi_size) {
-+ req->bio = bio->bi_next;
-+ nbytes = bio->bi_size;
-+ req_bio_endio(req, bio, nbytes, error);
-+ next_idx = 0;
-+ bio_nbytes = 0;
-+ } else {
-+ int idx = bio->bi_idx + next_idx;
++#define SAVE_XMM \
++ mov %rsp, %rax; \
++ subq $0x70, %rsp; \
++ and $~0xf, %rsp; \
++ mov %rax, (%rsp); \
++ mov %cr0, %rax; \
++ clts; \
++ mov %rax, 0x8(%rsp); \
++ movaps %xmm0, 0x60(%rsp); \
++ movaps %xmm1, 0x50(%rsp); \
++ movaps %xmm2, 0x40(%rsp); \
++ movaps %xmm3, 0x30(%rsp); \
++ movaps %xmm4, 0x20(%rsp); \
++ movaps %xmm5, 0x10(%rsp)
++
++#define RESTORE_XMM \
++ movaps 0x60(%rsp), %xmm0; \
++ movaps 0x50(%rsp), %xmm1; \
++ movaps 0x40(%rsp), %xmm2; \
++ movaps 0x30(%rsp), %xmm3; \
++ movaps 0x20(%rsp), %xmm4; \
++ movaps 0x10(%rsp), %xmm5; \
++ mov 0x8(%rsp), %rsi; \
++ mov %rsi, %cr0; \
++ mov (%rsp), %rsp
++
++ENTRY(efi_call0)
++ SAVE_XMM
++ subq $32, %rsp
++ call *%rdi
++ addq $32, %rsp
++ RESTORE_XMM
++ ret
+
-+ if (unlikely(bio->bi_idx >= bio->bi_vcnt)) {
-+ blk_dump_rq_flags(req, "__end_that");
-+ printk("%s: bio idx %d >= vcnt %d\n",
-+ __FUNCTION__,
-+ bio->bi_idx, bio->bi_vcnt);
-+ break;
-+ }
++ENTRY(efi_call1)
++ SAVE_XMM
++ subq $32, %rsp
++ mov %rsi, %rcx
++ call *%rdi
++ addq $32, %rsp
++ RESTORE_XMM
++ ret
+
-+ nbytes = bio_iovec_idx(bio, idx)->bv_len;
-+ BIO_BUG_ON(nbytes > bio->bi_size);
++ENTRY(efi_call2)
++ SAVE_XMM
++ subq $32, %rsp
++ mov %rsi, %rcx
++ call *%rdi
++ addq $32, %rsp
++ RESTORE_XMM
++ ret
+
-+ /*
-+ * not a complete bvec done
-+ */
-+ if (unlikely(nbytes > nr_bytes)) {
-+ bio_nbytes += nr_bytes;
-+ total_bytes += nr_bytes;
-+ break;
-+ }
++ENTRY(efi_call3)
++ SAVE_XMM
++ subq $32, %rsp
++ mov %rcx, %r8
++ mov %rsi, %rcx
++ call *%rdi
++ addq $32, %rsp
++ RESTORE_XMM
++ ret
+
-+ /*
-+ * advance to the next vector
-+ */
-+ next_idx++;
-+ bio_nbytes += nbytes;
-+ }
++ENTRY(efi_call4)
++ SAVE_XMM
++ subq $32, %rsp
++ mov %r8, %r9
++ mov %rcx, %r8
++ mov %rsi, %rcx
++ call *%rdi
++ addq $32, %rsp
++ RESTORE_XMM
++ ret
+
-+ total_bytes += nbytes;
-+ nr_bytes -= nbytes;
++ENTRY(efi_call5)
++ SAVE_XMM
++ subq $48, %rsp
++ mov %r9, 32(%rsp)
++ mov %r8, %r9
++ mov %rcx, %r8
++ mov %rsi, %rcx
++ call *%rdi
++ addq $48, %rsp
++ RESTORE_XMM
++ ret
+
-+ if ((bio = req->bio)) {
-+ /*
-+ * end more in this run, or just return 'not-done'
-+ */
-+ if (unlikely(nr_bytes <= 0))
-+ break;
-+ }
++ENTRY(efi_call6)
++ SAVE_XMM
++ mov (%rsp), %rax
++ mov 8(%rax), %rax
++ subq $48, %rsp
++ mov %r9, 32(%rsp)
++ mov %rax, 40(%rsp)
++ mov %r8, %r9
++ mov %rcx, %r8
++ mov %rsi, %rcx
++ call *%rdi
++ addq $48, %rsp
++ RESTORE_XMM
++ ret
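Editorial note, not part of the patch: the `efi_callN` stubs above convert from the Linux (System V AMD64) calling convention, which passes arguments in rdi/rsi/rdx/rcx/r8/r9, to the Microsoft x64 convention EFI firmware expects, which passes the first four arguments in rcx/rdx/r8/r9 and reserves a 32-byte "shadow space" on the stack for the callee. That is why every stub subtracts 32 from rsp before the indirect call, and why the five- and six-argument stubs subtract 48: the extra stack arguments plus 16-byte alignment across the call. A small sketch of that stack-adjustment rule (function name is illustrative):

```c
#include <assert.h>

/* Stack bytes each efi_callN stub reserves before `call *%rdi`:
 * 32 bytes of MS-x64 shadow space always; arguments 5 and 6 go in
 * stack slots at 32(%rsp)/40(%rsp), padded to keep rsp 16-byte
 * aligned, giving 48 bytes for the 5- and 6-argument stubs. */
int efi_stub_stack_bytes(int nargs)
{
    return nargs <= 4 ? 32 : 48;
}
```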
+diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
+index dc7f938..be5c31d 100644
+--- a/arch/x86/kernel/entry_32.S
++++ b/arch/x86/kernel/entry_32.S
+@@ -58,7 +58,7 @@
+ * for paravirtualization. The following will never clobber any registers:
+ * INTERRUPT_RETURN (aka. "iret")
+ * GET_CR0_INTO_EAX (aka. "movl %cr0, %eax")
+- * ENABLE_INTERRUPTS_SYSEXIT (aka "sti; sysexit").
++ * ENABLE_INTERRUPTS_SYSCALL_RET (aka "sti; sysexit").
+ *
+ * For DISABLE_INTERRUPTS/ENABLE_INTERRUPTS (aka "cli"/"sti"), you must
+ * specify what registers can be overwritten (CLBR_NONE, CLBR_EAX/EDX/ECX/ANY).
+@@ -283,12 +283,12 @@ END(resume_kernel)
+ the vsyscall page. See vsyscall-sysentry.S, which defines the symbol. */
+
+ # sysenter call handler stub
+-ENTRY(sysenter_entry)
++ENTRY(ia32_sysenter_target)
+ CFI_STARTPROC simple
+ CFI_SIGNAL_FRAME
+ CFI_DEF_CFA esp, 0
+ CFI_REGISTER esp, ebp
+- movl TSS_sysenter_esp0(%esp),%esp
++ movl TSS_sysenter_sp0(%esp),%esp
+ sysenter_past_esp:
+ /*
+ * No need to follow this irqs on/off section: the syscall
+@@ -351,7 +351,7 @@ sysenter_past_esp:
+ xorl %ebp,%ebp
+ TRACE_IRQS_ON
+ 1: mov PT_FS(%esp), %fs
+- ENABLE_INTERRUPTS_SYSEXIT
++ ENABLE_INTERRUPTS_SYSCALL_RET
+ CFI_ENDPROC
+ .pushsection .fixup,"ax"
+ 2: movl $0,PT_FS(%esp)
+@@ -360,7 +360,7 @@ sysenter_past_esp:
+ .align 4
+ .long 1b,2b
+ .popsection
+-ENDPROC(sysenter_entry)
++ENDPROC(ia32_sysenter_target)
+
+ # system call handler stub
+ ENTRY(system_call)
+@@ -583,7 +583,7 @@ END(syscall_badsys)
+ * Build the entry stubs and pointer table with
+ * some assembler magic.
+ */
+-.data
++.section .rodata,"a"
+ ENTRY(interrupt)
+ .text
+
+@@ -743,7 +743,7 @@ END(device_not_available)
+ * that sets up the real kernel stack. Check here, since we can't
+ * allow the wrong stack to be used.
+ *
+- * "TSS_sysenter_esp0+12" is because the NMI/debug handler will have
++ * "TSS_sysenter_sp0+12" is because the NMI/debug handler will have
+ * already pushed 3 words if it hits on the sysenter instruction:
+ * eflags, cs and eip.
+ *
+@@ -755,7 +755,7 @@ END(device_not_available)
+ cmpw $__KERNEL_CS,4(%esp); \
+ jne ok; \
+ label: \
+- movl TSS_sysenter_esp0+offset(%esp),%esp; \
++ movl TSS_sysenter_sp0+offset(%esp),%esp; \
+ CFI_DEF_CFA esp, 0; \
+ CFI_UNDEFINED eip; \
+ pushfl; \
+@@ -768,7 +768,7 @@ label: \
+
+ KPROBE_ENTRY(debug)
+ RING0_INT_FRAME
+- cmpl $sysenter_entry,(%esp)
++ cmpl $ia32_sysenter_target,(%esp)
+ jne debug_stack_correct
+ FIX_STACK(12, debug_stack_correct, debug_esp_fix_insn)
+ debug_stack_correct:
+@@ -799,7 +799,7 @@ KPROBE_ENTRY(nmi)
+ popl %eax
+ CFI_ADJUST_CFA_OFFSET -4
+ je nmi_espfix_stack
+- cmpl $sysenter_entry,(%esp)
++ cmpl $ia32_sysenter_target,(%esp)
+ je nmi_stack_fixup
+ pushl %eax
+ CFI_ADJUST_CFA_OFFSET 4
+@@ -812,7 +812,7 @@ KPROBE_ENTRY(nmi)
+ popl %eax
+ CFI_ADJUST_CFA_OFFSET -4
+ jae nmi_stack_correct
+- cmpl $sysenter_entry,12(%esp)
++ cmpl $ia32_sysenter_target,12(%esp)
+ je nmi_debug_stack_check
+ nmi_stack_correct:
+ /* We have a RING0_INT_FRAME here */
+@@ -882,10 +882,10 @@ ENTRY(native_iret)
+ .previous
+ END(native_iret)
+
+-ENTRY(native_irq_enable_sysexit)
++ENTRY(native_irq_enable_syscall_ret)
+ sti
+ sysexit
+-END(native_irq_enable_sysexit)
++END(native_irq_enable_syscall_ret)
+ #endif
+
+ KPROBE_ENTRY(int3)
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 3a058bb..bea8474 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -50,6 +50,7 @@
+ #include <asm/hw_irq.h>
+ #include <asm/page.h>
+ #include <asm/irqflags.h>
++#include <asm/paravirt.h>
+
+ .code64
+
+@@ -57,6 +58,13 @@
+ #define retint_kernel retint_restore_args
+ #endif
+
++#ifdef CONFIG_PARAVIRT
++ENTRY(native_irq_enable_syscall_ret)
++ movq %gs:pda_oldrsp,%rsp
++ swapgs
++ sysretq
++#endif /* CONFIG_PARAVIRT */
++
+
+ .macro TRACE_IRQS_IRETQ offset=ARGOFFSET
+ #ifdef CONFIG_TRACE_IRQFLAGS
+@@ -216,14 +224,21 @@ ENTRY(system_call)
+ CFI_DEF_CFA rsp,PDA_STACKOFFSET
+ CFI_REGISTER rip,rcx
+ /*CFI_REGISTER rflags,r11*/
+- swapgs
++ SWAPGS_UNSAFE_STACK
++ /*
++ * A hypervisor implementation might want to use a label
++ * after the swapgs, so that it can do the swapgs
++ * for the guest and jump here on syscall.
++ */
++ENTRY(system_call_after_swapgs)
++
+ movq %rsp,%gs:pda_oldrsp
+ movq %gs:pda_kernelstack,%rsp
+ /*
+ * No need to follow this irqs off/on section - it's straight
+ * and short:
+ */
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ SAVE_ARGS 8,1
+ movq %rax,ORIG_RAX-ARGOFFSET(%rsp)
+ movq %rcx,RIP-ARGOFFSET(%rsp)
+@@ -246,7 +261,7 @@ ret_from_sys_call:
+ sysret_check:
+ LOCKDEP_SYS_EXIT
+ GET_THREAD_INFO(%rcx)
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ movl threadinfo_flags(%rcx),%edx
+ andl %edi,%edx
+@@ -260,9 +275,7 @@ sysret_check:
+ CFI_REGISTER rip,rcx
+ RESTORE_ARGS 0,-ARG_SKIP,1
+ /*CFI_REGISTER rflags,r11*/
+- movq %gs:pda_oldrsp,%rsp
+- swapgs
+- sysretq
++ ENABLE_INTERRUPTS_SYSCALL_RET
+
+ CFI_RESTORE_STATE
+ /* Handle reschedules */
+@@ -271,7 +284,7 @@ sysret_careful:
+ bt $TIF_NEED_RESCHED,%edx
+ jnc sysret_signal
+ TRACE_IRQS_ON
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ pushq %rdi
+ CFI_ADJUST_CFA_OFFSET 8
+ call schedule
+@@ -282,8 +295,8 @@ sysret_careful:
+ /* Handle a signal */
+ sysret_signal:
+ TRACE_IRQS_ON
+- sti
+- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
++ ENABLE_INTERRUPTS(CLBR_NONE)
++ testl $_TIF_DO_NOTIFY_MASK,%edx
+ jz 1f
+
+ /* Really a signal */
+@@ -295,7 +308,7 @@ sysret_signal:
+ 1: movl $_TIF_NEED_RESCHED,%edi
+ /* Use IRET because user could have changed frame. This
+ works because ptregscall_common has called FIXUP_TOP_OF_STACK. */
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ jmp int_with_check
+
+@@ -327,7 +340,7 @@ tracesys:
+ */
+ .globl int_ret_from_sys_call
+ int_ret_from_sys_call:
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ testl $3,CS-ARGOFFSET(%rsp)
+ je retint_restore_args
+@@ -349,20 +362,20 @@ int_careful:
+ bt $TIF_NEED_RESCHED,%edx
+ jnc int_very_careful
+ TRACE_IRQS_ON
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ pushq %rdi
+ CFI_ADJUST_CFA_OFFSET 8
+ call schedule
+ popq %rdi
+ CFI_ADJUST_CFA_OFFSET -8
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ jmp int_with_check
+
+ /* handle signals and tracing -- both require a full stack frame */
+ int_very_careful:
+ TRACE_IRQS_ON
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ SAVE_REST
+ /* Check for syscall exit trace */
+ testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP),%edx
+@@ -377,7 +390,7 @@ int_very_careful:
+ jmp int_restore_rest
+
+ int_signal:
+- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
++ testl $_TIF_DO_NOTIFY_MASK,%edx
+ jz 1f
+ movq %rsp,%rdi # &ptregs -> arg1
+ xorl %esi,%esi # oldset -> arg2
+@@ -385,7 +398,7 @@ int_signal:
+ 1: movl $_TIF_NEED_RESCHED,%edi
+ int_restore_rest:
+ RESTORE_REST
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ jmp int_with_check
+ CFI_ENDPROC
+@@ -506,7 +519,7 @@ END(stub_rt_sigreturn)
+ CFI_DEF_CFA_REGISTER rbp
+ testl $3,CS(%rdi)
+ je 1f
+- swapgs
++ SWAPGS
+ /* irqcount is used to check if a CPU is already on an interrupt
+ stack or not. While this is essentially redundant with preempt_count
+ it is a little cheaper to use a separate counter in the PDA
+@@ -527,7 +540,7 @@ ENTRY(common_interrupt)
+ interrupt do_IRQ
+ /* 0(%rsp): oldrsp-ARGOFFSET */
+ ret_from_intr:
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ decl %gs:pda_irqcount
+ leaveq
+@@ -556,13 +569,13 @@ retint_swapgs: /* return to user-space */
+ /*
+ * The iretq could re-enable interrupts:
+ */
+- cli
++ DISABLE_INTERRUPTS(CLBR_ANY)
+ TRACE_IRQS_IRETQ
+- swapgs
++ SWAPGS
+ jmp restore_args
+
+ retint_restore_args: /* return to kernel space */
+- cli
++ DISABLE_INTERRUPTS(CLBR_ANY)
+ /*
+ * The iretq could re-enable interrupts:
+ */
+@@ -570,10 +583,14 @@ retint_restore_args: /* return to kernel space */
+ restore_args:
+ RESTORE_ARGS 0,8,0
+ iret_label:
++#ifdef CONFIG_PARAVIRT
++ INTERRUPT_RETURN
++#endif
++ENTRY(native_iret)
+ iretq
+
+ .section __ex_table,"a"
+- .quad iret_label,bad_iret
++ .quad native_iret, bad_iret
+ .previous
+ .section .fixup,"ax"
+ /* force a signal here? this matches i386 behaviour */
+@@ -581,39 +598,39 @@ iret_label:
+ bad_iret:
+ movq $11,%rdi /* SIGSEGV */
+ TRACE_IRQS_ON
+- sti
+- jmp do_exit
+- .previous
+-
++ ENABLE_INTERRUPTS(CLBR_ANY | ~(CLBR_RDI))
++ jmp do_exit
++ .previous
++
+ /* edi: workmask, edx: work */
+ retint_careful:
+ CFI_RESTORE_STATE
+ bt $TIF_NEED_RESCHED,%edx
+ jnc retint_signal
+ TRACE_IRQS_ON
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ pushq %rdi
+ CFI_ADJUST_CFA_OFFSET 8
+ call schedule
+ popq %rdi
+ CFI_ADJUST_CFA_OFFSET -8
+ GET_THREAD_INFO(%rcx)
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ jmp retint_check
+
+ retint_signal:
+- testl $(_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY),%edx
++ testl $_TIF_DO_NOTIFY_MASK,%edx
+ jz retint_swapgs
+ TRACE_IRQS_ON
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ SAVE_REST
+ movq $-1,ORIG_RAX(%rsp)
+ xorl %esi,%esi # oldset
+ movq %rsp,%rdi # &pt_regs
+ call do_notify_resume
+ RESTORE_REST
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ movl $_TIF_NEED_RESCHED,%edi
+ GET_THREAD_INFO(%rcx)
+@@ -731,7 +748,7 @@ END(spurious_interrupt)
+ rdmsr
+ testl %edx,%edx
+ js 1f
+- swapgs
++ SWAPGS
+ xorl %ebx,%ebx
+ 1:
+ .if \ist
+@@ -747,7 +764,7 @@ END(spurious_interrupt)
+ .if \ist
+ addq $EXCEPTION_STKSZ, per_cpu__init_tss + TSS_ist + (\ist - 1) * 8(%rbp)
+ .endif
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ .if \irqtrace
+ TRACE_IRQS_OFF
+ .endif
+@@ -776,10 +793,10 @@ paranoid_swapgs\trace:
+ .if \trace
+ TRACE_IRQS_IRETQ 0
+ .endif
+- swapgs
++ SWAPGS_UNSAFE_STACK
+ paranoid_restore\trace:
+ RESTORE_ALL 8
+- iretq
++ INTERRUPT_RETURN
+ paranoid_userspace\trace:
+ GET_THREAD_INFO(%rcx)
+ movl threadinfo_flags(%rcx),%ebx
+@@ -794,11 +811,11 @@ paranoid_userspace\trace:
+ .if \trace
+ TRACE_IRQS_ON
+ .endif
+- sti
++ ENABLE_INTERRUPTS(CLBR_NONE)
+ xorl %esi,%esi /* arg2: oldset */
+ movq %rsp,%rdi /* arg1: &pt_regs */
+ call do_notify_resume
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ .if \trace
+ TRACE_IRQS_OFF
+ .endif
+@@ -807,9 +824,9 @@ paranoid_schedule\trace:
+ .if \trace
+ TRACE_IRQS_ON
+ .endif
+- sti
++ ENABLE_INTERRUPTS(CLBR_ANY)
+ call schedule
+- cli
++ DISABLE_INTERRUPTS(CLBR_ANY)
+ .if \trace
+ TRACE_IRQS_OFF
+ .endif
+@@ -862,7 +879,7 @@ KPROBE_ENTRY(error_entry)
+ testl $3,CS(%rsp)
+ je error_kernelspace
+ error_swapgs:
+- swapgs
++ SWAPGS
+ error_sti:
+ movq %rdi,RDI(%rsp)
+ CFI_REL_OFFSET rdi,RDI
+@@ -874,7 +891,7 @@ error_sti:
+ error_exit:
+ movl %ebx,%eax
+ RESTORE_REST
+- cli
++ DISABLE_INTERRUPTS(CLBR_NONE)
+ TRACE_IRQS_OFF
+ GET_THREAD_INFO(%rcx)
+ testl %eax,%eax
+@@ -911,12 +928,12 @@ ENTRY(load_gs_index)
+ CFI_STARTPROC
+ pushf
+ CFI_ADJUST_CFA_OFFSET 8
+- cli
+- swapgs
++ DISABLE_INTERRUPTS(CLBR_ANY | ~(CLBR_RDI))
++ SWAPGS
+ gs_change:
+ movl %edi,%gs
+ 2: mfence /* workaround */
+- swapgs
++ SWAPGS
+ popf
+ CFI_ADJUST_CFA_OFFSET -8
+ ret
+@@ -930,7 +947,7 @@ ENDPROC(load_gs_index)
+ .section .fixup,"ax"
+ /* running with kernelgs */
+ bad_gs:
+- swapgs /* switch back to user gs */
++ SWAPGS /* switch back to user gs */
+ xorl %eax,%eax
+ movl %eax,%gs
+ jmp 2b
+diff --git a/arch/x86/kernel/genapic_64.c b/arch/x86/kernel/genapic_64.c
+index ce703e2..4ae7b64 100644
+--- a/arch/x86/kernel/genapic_64.c
++++ b/arch/x86/kernel/genapic_64.c
+@@ -24,18 +24,11 @@
+ #include <acpi/acpi_bus.h>
+ #endif
+
+-/*
+- * which logical CPU number maps to which CPU (physical APIC ID)
+- *
+- * The following static array is used during kernel startup
+- * and the x86_cpu_to_apicid_ptr contains the address of the
+- * array during this time. Is it zeroed when the per_cpu
+- * data area is removed.
+- */
+-u8 x86_cpu_to_apicid_init[NR_CPUS] __initdata
++/* which logical CPU number maps to which CPU (physical APIC ID) */
++u16 x86_cpu_to_apicid_init[NR_CPUS] __initdata
+ = { [0 ... NR_CPUS-1] = BAD_APICID };
+-void *x86_cpu_to_apicid_ptr;
+-DEFINE_PER_CPU(u8, x86_cpu_to_apicid) = BAD_APICID;
++void *x86_cpu_to_apicid_early_ptr;
++DEFINE_PER_CPU(u16, x86_cpu_to_apicid) = BAD_APICID;
+ EXPORT_PER_CPU_SYMBOL(x86_cpu_to_apicid);
+
+ struct genapic __read_mostly *genapic = &apic_flat;
+diff --git a/arch/x86/kernel/geode_32.c b/arch/x86/kernel/geode_32.c
+index f12d8c5..9c7f7d3 100644
+--- a/arch/x86/kernel/geode_32.c
++++ b/arch/x86/kernel/geode_32.c
+@@ -1,6 +1,7 @@
+ /*
+ * AMD Geode southbridge support code
+ * Copyright (C) 2006, Advanced Micro Devices, Inc.
++ * Copyright (C) 2007, Andres Salomon <dilinger at debian.org>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public License
+@@ -51,45 +52,62 @@ EXPORT_SYMBOL_GPL(geode_get_dev_base);
+
+ /* === GPIO API === */
+
+-void geode_gpio_set(unsigned int gpio, unsigned int reg)
++void geode_gpio_set(u32 gpio, unsigned int reg)
+ {
+ u32 base = geode_get_dev_base(GEODE_DEV_GPIO);
+
+ if (!base)
+ return;
+
+- if (gpio < 16)
+- outl(1 << gpio, base + reg);
+- else
+- outl(1 << (gpio - 16), base + 0x80 + reg);
++ /* low bank register */
++ if (gpio & 0xFFFF)
++ outl(gpio & 0xFFFF, base + reg);
++ /* high bank register */
++ gpio >>= 16;
++ if (gpio)
++ outl(gpio, base + 0x80 + reg);
+ }
+ EXPORT_SYMBOL_GPL(geode_gpio_set);
+
+-void geode_gpio_clear(unsigned int gpio, unsigned int reg)
++void geode_gpio_clear(u32 gpio, unsigned int reg)
+ {
+ u32 base = geode_get_dev_base(GEODE_DEV_GPIO);
+
+ if (!base)
+ return;
+
+- if (gpio < 16)
+- outl(1 << (gpio + 16), base + reg);
+- else
+- outl(1 << gpio, base + 0x80 + reg);
++ /* low bank register */
++ if (gpio & 0xFFFF)
++ outl((gpio & 0xFFFF) << 16, base + reg);
++ /* high bank register */
++ gpio &= (0xFFFF << 16);
++ if (gpio)
++ outl(gpio, base + 0x80 + reg);
+ }
+ EXPORT_SYMBOL_GPL(geode_gpio_clear);
+
+-int geode_gpio_isset(unsigned int gpio, unsigned int reg)
++int geode_gpio_isset(u32 gpio, unsigned int reg)
+ {
+ u32 base = geode_get_dev_base(GEODE_DEV_GPIO);
++ u32 val;
+
+ if (!base)
+ return 0;
+
+- if (gpio < 16)
+- return (inl(base + reg) & (1 << gpio)) ? 1 : 0;
+- else
+- return (inl(base + 0x80 + reg) & (1 << (gpio - 16))) ? 1 : 0;
++ /* low bank register */
++ if (gpio & 0xFFFF) {
++ val = inl(base + reg) & (gpio & 0xFFFF);
++ if ((gpio & 0xFFFF) == val)
++ return 1;
++ }
++ /* high bank register */
++ gpio >>= 16;
++ if (gpio) {
++ val = inl(base + 0x80 + reg) & gpio;
++ if (gpio == val)
++ return 1;
+ }
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(geode_gpio_isset);
+
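Editorial note, not part of the patch: the Geode GPIO rework above changes the API from taking a pin index to taking a bitmask, so one call can act on several pins at once. The low 16 bits of the mask address the low-bank register at `base + reg`, and the high 16 bits the high-bank register at `base + 0x80 + reg`. A hedged sketch of that bank split (helper names are illustrative):

```c
#include <assert.h>

/* Callers now build masks like (1 << pin) rather than passing the pin
 * number itself; these helpers model how the mask is split between the
 * two bank registers in geode_gpio_set()/clear()/isset(). */
unsigned low_bank_bits(unsigned gpio)  { return gpio & 0xFFFF; }
unsigned high_bank_bits(unsigned gpio) { return gpio >> 16; }
```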
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 6b34693..a317336 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/string.h>
+ #include <linux/percpu.h>
++#include <linux/start_kernel.h>
+
+ #include <asm/processor.h>
+ #include <asm/proto.h>
+@@ -19,12 +20,14 @@
+ #include <asm/pgtable.h>
+ #include <asm/tlbflush.h>
+ #include <asm/sections.h>
++#include <asm/kdebug.h>
++#include <asm/e820.h>
+
+ static void __init zap_identity_mappings(void)
+ {
+ pgd_t *pgd = pgd_offset_k(0UL);
+ pgd_clear(pgd);
+- __flush_tlb();
++ __flush_tlb_all();
+ }
+
+ /* Don't add a printk in there. printk relies on the PDA which is not initialized
+@@ -46,6 +49,35 @@ static void __init copy_bootdata(char *real_mode_data)
+ }
+ }
+
++#define EBDA_ADDR_POINTER 0x40E
+
-+ /*
-+ * completely done
-+ */
-+ if (!req->bio)
-+ return 0;
++static __init void reserve_ebda(void)
++{
++ unsigned ebda_addr, ebda_size;
+
+ /*
-+ * if the request wasn't completed, update state
++ * there is a real-mode segmented pointer pointing to the
++ * 4K EBDA area at 0x40E
+ */
-+ if (bio_nbytes) {
-+ req_bio_endio(req, bio, bio_nbytes, error);
-+ bio->bi_idx += next_idx;
-+ bio_iovec(bio)->bv_offset += nr_bytes;
-+ bio_iovec(bio)->bv_len -= nr_bytes;
-+ }
++ ebda_addr = *(unsigned short *)__va(EBDA_ADDR_POINTER);
++ ebda_addr <<= 4;
+
-+ blk_recalc_rq_sectors(req, total_bytes >> 9);
-+ blk_recalc_rq_segments(req);
-+ return 1;
-+}
++ if (!ebda_addr)
++ return;
+
-+/*
-+ * splice the completion data to a local structure and hand off to
-+ * process_completion_queue() to complete the requests
-+ */
-+static void blk_done_softirq(struct softirq_action *h)
-+{
-+ struct list_head *cpu_list, local_list;
++ ebda_size = *(unsigned short *)__va(ebda_addr);
+
-+ local_irq_disable();
-+ cpu_list = &__get_cpu_var(blk_cpu_done);
-+ list_replace_init(cpu_list, &local_list);
-+ local_irq_enable();
++ /* Round EBDA up to pages */
++ if (ebda_size == 0)
++ ebda_size = 1;
++ ebda_size <<= 10;
++ ebda_size = round_up(ebda_size + (ebda_addr & ~PAGE_MASK), PAGE_SIZE);
++ if (ebda_size > 64*1024)
++ ebda_size = 64*1024;
+
-+ while (!list_empty(&local_list)) {
-+ struct request *rq = list_entry(local_list.next, struct request, donelist);
++ reserve_early(ebda_addr, ebda_addr + ebda_size);
++}
+
-+ list_del_init(&rq->donelist);
-+ rq->q->softirq_done_fn(rq);
+ void __init x86_64_start_kernel(char * real_mode_data)
+ {
+ int i;
+@@ -56,8 +88,13 @@ void __init x86_64_start_kernel(char * real_mode_data)
+ /* Make NULL pointers segfault */
+ zap_identity_mappings();
+
+- for (i = 0; i < IDT_ENTRIES; i++)
++ for (i = 0; i < IDT_ENTRIES; i++) {
++#ifdef CONFIG_EARLY_PRINTK
++ set_intr_gate(i, &early_idt_handlers[i]);
++#else
+ set_intr_gate(i, early_idt_handler);
++#endif
+ }
-+}
+ load_idt((const struct desc_ptr *)&idt_descr);
+
+ early_printk("Kernel alive\n");
+@@ -67,8 +104,24 @@ void __init x86_64_start_kernel(char * real_mode_data)
+
+ pda_init(0);
+ copy_bootdata(__va(real_mode_data));
+-#ifdef CONFIG_SMP
+- cpu_set(0, cpu_online_map);
+-#endif
++
++ reserve_early(__pa_symbol(&_text), __pa_symbol(&_end));
++
++ /* Reserve INITRD */
++ if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
++ unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
++ unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
++ unsigned long ramdisk_end = ramdisk_image + ramdisk_size;
++ reserve_early(ramdisk_image, ramdisk_end);
++ }
++
++ reserve_ebda();
+
-+static int __cpuinit blk_cpu_notify(struct notifier_block *self, unsigned long action,
-+ void *hcpu)
-+{
+ /*
-+ * If a CPU goes away, splice its entries to the current CPU
-+ * and trigger a run of the softirq
++ * At this point everything still needed from the boot loader
++ * or BIOS or kernel text should be early reserved or marked not
++ * RAM in e820. All other memory is free game.
+ */
-+ if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
-+ int cpu = (unsigned long) hcpu;
+
-+ local_irq_disable();
-+ list_splice_init(&per_cpu(blk_cpu_done, cpu),
-+ &__get_cpu_var(blk_cpu_done));
-+ raise_softirq_irqoff(BLOCK_SOFTIRQ);
-+ local_irq_enable();
-+ }
+ start_kernel();
+ }
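Editorial note, not part of the patch: the new `reserve_ebda()` above decodes a real-mode segmented pointer — the 16-bit word at physical 0x40E is a segment value, and shifting it left by 4 yields the physical base of the Extended BIOS Data Area. A minimal sketch of that decoding step:

```c
#include <assert.h>

/* Real-mode segment -> physical address, as used for the EBDA pointer
 * stored at 0x40E in the BIOS data area. */
unsigned long ebda_phys_base(unsigned short segment)
{
    return (unsigned long)segment << 4;
}
```

A typical firmware value of 0x9FC0 places the EBDA at 0x9FC00, just under 640 KiB.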
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index fbad51f..5d8c573 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -9,6 +9,7 @@
+
+ .text
+ #include <linux/threads.h>
++#include <linux/init.h>
+ #include <linux/linkage.h>
+ #include <asm/segment.h>
+ #include <asm/page.h>
+@@ -151,7 +152,9 @@ WEAK(xen_entry)
+ /* Unknown implementation; there's really
+ nothing we can do at this point. */
+ ud2a
+-.data
+
-+ return NOTIFY_OK;
-+}
++ __INITDATA
+
+ subarch_entries:
+ .long default_entry /* normal x86/PC */
+ .long lguest_entry /* lguest hypervisor */
+@@ -199,7 +202,6 @@ default_entry:
+ addl $0x67, %eax /* 0x67 == _PAGE_TABLE */
+ movl %eax, 4092(%edx)
+
+- xorl %ebx,%ebx /* This is the boot CPU (BSP) */
+ jmp 3f
+ /*
+ * Non-boot CPU entry point; entered from trampoline.S
+@@ -222,6 +224,8 @@ ENTRY(startup_32_smp)
+ movl %eax,%es
+ movl %eax,%fs
+ movl %eax,%gs
++#endif /* CONFIG_SMP */
++3:
+
+ /*
+ * New page tables may be in 4Mbyte page mode and may
+@@ -268,12 +272,6 @@ ENTRY(startup_32_smp)
+ wrmsr
+
+ 6:
+- /* This is a secondary processor (AP) */
+- xorl %ebx,%ebx
+- incl %ebx
+-
+-#endif /* CONFIG_SMP */
+-3:
+
+ /*
+ * Enable paging
+@@ -297,7 +295,7 @@ ENTRY(startup_32_smp)
+ popfl
+
+ #ifdef CONFIG_SMP
+- andl %ebx,%ebx
++ cmpb $0, ready
+ jz 1f /* Initial CPU cleans BSS */
+ jmp checkCPUtype
+ 1:
+@@ -502,6 +500,7 @@ early_fault:
+ call printk
+ #endif
+ #endif
++ call dump_stack
+ hlt_loop:
+ hlt
+ jmp hlt_loop
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index b6167fe..1d5a7a3 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -19,6 +19,13 @@
+ #include <asm/msr.h>
+ #include <asm/cache.h>
+
++#ifdef CONFIG_PARAVIRT
++#include <asm/asm-offsets.h>
++#include <asm/paravirt.h>
++#else
++#define GET_CR2_INTO_RCX movq %cr2, %rcx
++#endif
+
-+static struct notifier_block blk_cpu_notifier __cpuinitdata = {
-+ .notifier_call = blk_cpu_notify,
-+};
+ /* we are not able to switch in one step to the final KERNEL ADRESS SPACE
+ * because we need identity-mapped pages.
+ *
+@@ -260,14 +267,43 @@ init_rsp:
+ bad_address:
+ jmp bad_address
+
++#ifdef CONFIG_EARLY_PRINTK
++.macro early_idt_tramp first, last
++ .ifgt \last-\first
++ early_idt_tramp \first, \last-1
++ .endif
++ movl $\last,%esi
++ jmp early_idt_handler
++.endm
++
++ .globl early_idt_handlers
++early_idt_handlers:
++ early_idt_tramp 0, 63
++ early_idt_tramp 64, 127
++ early_idt_tramp 128, 191
++ early_idt_tramp 192, 255
++#endif
+
-+/**
-+ * blk_complete_request - end I/O on a request
-+ * @req: the request being processed
-+ *
-+ * Description:
-+ * Ends all I/O on a request. It does not handle partial completions,
-+ * unless the driver actually implements this in its completion callback
-+ * through requeueing. The actual completion happens out-of-order,
-+ * through a softirq handler. The user must have registered a completion
-+ * callback through blk_queue_softirq_done().
-+ **/
+ ENTRY(early_idt_handler)
++#ifdef CONFIG_EARLY_PRINTK
+ cmpl $2,early_recursion_flag(%rip)
+ jz 1f
+ incl early_recursion_flag(%rip)
++ GET_CR2_INTO_RCX
++ movq %rcx,%r9
++ xorl %r8d,%r8d # zero for error code
++ movl %esi,%ecx # get vector number
++ # Test %ecx against mask of vectors that push error code.
++ cmpl $31,%ecx
++ ja 0f
++ movl $1,%eax
++ salq %cl,%rax
++ testl $0x27d00,%eax
++ je 0f
++ popq %r8 # get error code
++0: movq 0(%rsp),%rcx # get ip
++ movq 8(%rsp),%rdx # get cs
+ xorl %eax,%eax
+- movq 8(%rsp),%rsi # get rip
+- movq (%rsp),%rdx
+- movq %cr2,%rcx
+ leaq early_idt_msg(%rip),%rdi
+ call early_printk
+ cmpl $2,early_recursion_flag(%rip)
+@@ -278,15 +314,19 @@ ENTRY(early_idt_handler)
+ movq 8(%rsp),%rsi # get rip again
+ call __print_symbol
+ #endif
++#endif /* EARLY_PRINTK */
+ 1: hlt
+ jmp 1b
+
-+void blk_complete_request(struct request *req)
-+{
-+ struct list_head *cpu_list;
-+ unsigned long flags;
++#ifdef CONFIG_EARLY_PRINTK
+ early_recursion_flag:
+ .long 0
+
+ early_idt_msg:
+- .asciz "PANIC: early exception rip %lx error %lx cr2 %lx\n"
++ .asciz "PANIC: early exception %02lx rip %lx:%lx error %lx cr2 %lx\n"
+ early_idt_ripmsg:
+ .asciz "RIP %s\n"
++#endif /* CONFIG_EARLY_PRINTK */
+
+ .balign PAGE_SIZE
+
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 2f99ee2..429d084 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -6,7 +6,6 @@
+ #include <linux/init.h>
+ #include <linux/sysdev.h>
+ #include <linux/pm.h>
+-#include <linux/delay.h>
+
+ #include <asm/fixmap.h>
+ #include <asm/hpet.h>
+@@ -16,7 +15,8 @@
+ #define HPET_MASK CLOCKSOURCE_MASK(32)
+ #define HPET_SHIFT 22
+
+-/* FSEC = 10^-15 NSEC = 10^-9 */
++/* FSEC = 10^-15
++ NSEC = 10^-9 */
+ #define FSEC_PER_NSEC 1000000
+
+ /*
+@@ -107,6 +107,7 @@ int is_hpet_enabled(void)
+ {
+ return is_hpet_capable() && hpet_legacy_int_enabled;
+ }
++EXPORT_SYMBOL_GPL(is_hpet_enabled);
+
+ /*
+ * When the hpet driver (/dev/hpet) is enabled, we need to reserve
+@@ -132,16 +133,13 @@ static void hpet_reserve_platform_timers(unsigned long id)
+ #ifdef CONFIG_HPET_EMULATE_RTC
+ hpet_reserve_timer(&hd, 1);
+ #endif
+-
+ hd.hd_irq[0] = HPET_LEGACY_8254;
+ hd.hd_irq[1] = HPET_LEGACY_RTC;
+
+- for (i = 2; i < nrtimers; timer++, i++)
+- hd.hd_irq[i] = (timer->hpet_config & Tn_INT_ROUTE_CNF_MASK) >>
+- Tn_INT_ROUTE_CNF_SHIFT;
+-
++ for (i = 2; i < nrtimers; timer++, i++)
++ hd.hd_irq[i] = (timer->hpet_config & Tn_INT_ROUTE_CNF_MASK) >>
++ Tn_INT_ROUTE_CNF_SHIFT;
+ hpet_alloc(&hd);
+-
+ }
+ #else
+ static void hpet_reserve_platform_timers(unsigned long id) { }
+@@ -478,6 +476,7 @@ void hpet_disable(void)
+ */
+ #include <linux/mc146818rtc.h>
+ #include <linux/rtc.h>
++#include <asm/rtc.h>
+
+ #define DEFAULT_RTC_INT_FREQ 64
+ #define DEFAULT_RTC_SHIFT 6
+@@ -492,6 +491,38 @@ static unsigned long hpet_default_delta;
+ static unsigned long hpet_pie_delta;
+ static unsigned long hpet_pie_limit;
+
++static rtc_irq_handler irq_handler;
+
-+ BUG_ON(!req->q->softirq_done_fn);
-+
-+ local_irq_save(flags);
++/*
++ * Registers an IRQ handler.
++ */
++int hpet_register_irq_handler(rtc_irq_handler handler)
++{
++ if (!is_hpet_enabled())
++ return -ENODEV;
++ if (irq_handler)
++ return -EBUSY;
+
-+ cpu_list = &__get_cpu_var(blk_cpu_done);
-+ list_add_tail(&req->donelist, cpu_list);
-+ raise_softirq_irqoff(BLOCK_SOFTIRQ);
++ irq_handler = handler;
+
-+ local_irq_restore(flags);
++ return 0;
+}
++EXPORT_SYMBOL_GPL(hpet_register_irq_handler);
+
-+EXPORT_SYMBOL(blk_complete_request);
-+
+/*
-+ * queue lock must be held
++ * Deregisters the IRQ handler registered with hpet_register_irq_handler()
++ * and does cleanup.
+ */
-+static void end_that_request_last(struct request *req, int error)
++void hpet_unregister_irq_handler(rtc_irq_handler handler)
+{
-+ struct gendisk *disk = req->rq_disk;
++ if (!is_hpet_enabled())
++ return;
+
-+ if (blk_rq_tagged(req))
-+ blk_queue_end_tag(req->q, req);
++ irq_handler = NULL;
++ hpet_rtc_flags = 0;
++}
++EXPORT_SYMBOL_GPL(hpet_unregister_irq_handler);
+
-+ if (blk_queued_rq(req))
-+ blkdev_dequeue_request(req);
+ /*
+ * Timer 1 for RTC emulation. We use one shot mode, as periodic mode
+ * is not supported by all HPET implementations for timer 1.
+@@ -533,6 +564,7 @@ int hpet_rtc_timer_init(void)
+
+ return 1;
+ }
++EXPORT_SYMBOL_GPL(hpet_rtc_timer_init);
+
+ /*
+ * The functions below are called from rtc driver.
+@@ -547,6 +579,7 @@ int hpet_mask_rtc_irq_bit(unsigned long bit_mask)
+ hpet_rtc_flags &= ~bit_mask;
+ return 1;
+ }
++EXPORT_SYMBOL_GPL(hpet_mask_rtc_irq_bit);
+
+ int hpet_set_rtc_irq_bit(unsigned long bit_mask)
+ {
+@@ -562,6 +595,7 @@ int hpet_set_rtc_irq_bit(unsigned long bit_mask)
+
+ return 1;
+ }
++EXPORT_SYMBOL_GPL(hpet_set_rtc_irq_bit);
+
+ int hpet_set_alarm_time(unsigned char hrs, unsigned char min,
+ unsigned char sec)
+@@ -575,6 +609,7 @@ int hpet_set_alarm_time(unsigned char hrs, unsigned char min,
+
+ return 1;
+ }
++EXPORT_SYMBOL_GPL(hpet_set_alarm_time);
+
+ int hpet_set_periodic_freq(unsigned long freq)
+ {
+@@ -593,11 +628,13 @@ int hpet_set_periodic_freq(unsigned long freq)
+ }
+ return 1;
+ }
++EXPORT_SYMBOL_GPL(hpet_set_periodic_freq);
+
+ int hpet_rtc_dropped_irq(void)
+ {
+ return is_hpet_enabled();
+ }
++EXPORT_SYMBOL_GPL(hpet_rtc_dropped_irq);
+
+ static void hpet_rtc_timer_reinit(void)
+ {
+@@ -641,9 +678,10 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ unsigned long rtc_int_flag = 0;
+
+ hpet_rtc_timer_reinit();
++ memset(&curr_time, 0, sizeof(struct rtc_time));
+
+ if (hpet_rtc_flags & (RTC_UIE | RTC_AIE))
+- rtc_get_rtc_time(&curr_time);
++ get_rtc_time(&curr_time);
+
+ if (hpet_rtc_flags & RTC_UIE &&
+ curr_time.tm_sec != hpet_prev_update_sec) {
+@@ -665,8 +703,10 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+
+ if (rtc_int_flag) {
+ rtc_int_flag |= (RTC_IRQF | (RTC_NUM_INTS << 8));
+- rtc_interrupt(rtc_int_flag, dev_id);
++ if (irq_handler)
++ irq_handler(rtc_int_flag, dev_id);
+ }
+ return IRQ_HANDLED;
+ }
++EXPORT_SYMBOL_GPL(hpet_rtc_interrupt);
+ #endif
+diff --git a/arch/x86/kernel/i386_ksyms_32.c b/arch/x86/kernel/i386_ksyms_32.c
+index 02112fc..0616278 100644
+--- a/arch/x86/kernel/i386_ksyms_32.c
++++ b/arch/x86/kernel/i386_ksyms_32.c
+@@ -22,12 +22,5 @@ EXPORT_SYMBOL(__put_user_8);
+
+ EXPORT_SYMBOL(strstr);
+
+-#ifdef CONFIG_SMP
+-extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
+-extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
+-EXPORT_SYMBOL(__write_lock_failed);
+-EXPORT_SYMBOL(__read_lock_failed);
+-#endif
+-
+ EXPORT_SYMBOL(csum_partial);
+ EXPORT_SYMBOL(empty_zero_page);
+diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
+new file mode 100644
+index 0000000..26719bd
+--- /dev/null
++++ b/arch/x86/kernel/i387.c
+@@ -0,0 +1,479 @@
++/*
++ * Copyright (C) 1994 Linus Torvalds
++ *
++ * Pentium III FXSR, SSE support
++ * General FPU state handling cleanups
++ * Gareth Hughes <gareth at valinux.com>, May 2000
++ */
+
-+ if (unlikely(laptop_mode) && blk_fs_request(req))
-+ laptop_io_completion();
++#include <linux/sched.h>
++#include <linux/module.h>
++#include <linux/regset.h>
++#include <asm/processor.h>
++#include <asm/i387.h>
++#include <asm/math_emu.h>
++#include <asm/sigcontext.h>
++#include <asm/user.h>
++#include <asm/ptrace.h>
++#include <asm/uaccess.h>
+
-+ /*
-+ * Account IO completion. bar_rq isn't accounted as a normal
-+ * IO on queueing nor completion. Accounting the containing
-+ * request is enough.
-+ */
-+ if (disk && blk_fs_request(req) && req != &req->q->bar_rq) {
-+ unsigned long duration = jiffies - req->start_time;
-+ const int rw = rq_data_dir(req);
++#ifdef CONFIG_X86_64
+
-+ __disk_stat_inc(disk, ios[rw]);
-+ __disk_stat_add(disk, ticks[rw], duration);
-+ disk_round_stats(disk);
-+ disk->in_flight--;
-+ }
++#include <asm/sigcontext32.h>
++#include <asm/user32.h>
+
-+ if (req->end_io)
-+ req->end_io(req, error);
-+ else {
-+ if (blk_bidi_rq(req))
-+ __blk_put_request(req->next_rq->q, req->next_rq);
++#else
+
-+ __blk_put_request(req->q, req);
++#define save_i387_ia32 save_i387
++#define restore_i387_ia32 restore_i387
++
++#define _fpstate_ia32 _fpstate
++#define user_i387_ia32_struct user_i387_struct
++#define user32_fxsr_struct user_fxsr_struct
++
++#endif
++
++#ifdef CONFIG_MATH_EMULATION
++#define HAVE_HWFP (boot_cpu_data.hard_math)
++#else
++#define HAVE_HWFP 1
++#endif
++
++unsigned int mxcsr_feature_mask __read_mostly = 0xffffffffu;
++
++void mxcsr_feature_mask_init(void)
++{
++ unsigned long mask = 0;
++ clts();
++ if (cpu_has_fxsr) {
++ memset(&current->thread.i387.fxsave, 0,
++ sizeof(struct i387_fxsave_struct));
++ asm volatile("fxsave %0" : : "m" (current->thread.i387.fxsave));
++ mask = current->thread.i387.fxsave.mxcsr_mask;
++ if (mask == 0)
++ mask = 0x0000ffbf;
+ }
++ mxcsr_feature_mask &= mask;
++ stts();
+}
+
-+static inline void __end_request(struct request *rq, int uptodate,
-+ unsigned int nr_bytes)
++#ifdef CONFIG_X86_64
++/*
++ * Called at bootup to set up the initial FPU state that is later cloned
++ * into all processes.
++ */
++void __cpuinit fpu_init(void)
+{
-+ int error = 0;
++ unsigned long oldcr0 = read_cr0();
++ extern void __bad_fxsave_alignment(void);
+
-+ if (uptodate <= 0)
-+ error = uptodate ? uptodate : -EIO;
++ if (offsetof(struct task_struct, thread.i387.fxsave) & 15)
++ __bad_fxsave_alignment();
++ set_in_cr4(X86_CR4_OSFXSR);
++ set_in_cr4(X86_CR4_OSXMMEXCPT);
+
-+ __blk_end_request(rq, error, nr_bytes);
++ write_cr0(oldcr0 & ~((1UL<<3)|(1UL<<2))); /* clear TS and EM */
++
++ mxcsr_feature_mask_init();
++ /* clean state in init */
++ current_thread_info()->status = 0;
++ clear_used_math();
+}
++#endif /* CONFIG_X86_64 */
+
-+/**
-+ * blk_rq_bytes - Returns bytes left to complete in the entire request
-+ **/
-+unsigned int blk_rq_bytes(struct request *rq)
++/*
++ * The _current_ task is using the FPU for the first time
++ * so initialize it and set the mxcsr to its default
++ * value at reset if we support XMM instructions and then
++ * remember the current task has used the FPU.
++ */
++void init_fpu(struct task_struct *tsk)
+{
-+ if (blk_fs_request(rq))
-+ return rq->hard_nr_sectors << 9;
++ if (tsk_used_math(tsk)) {
++ if (tsk == current)
++ unlazy_fpu(tsk);
++ return;
++ }
+
-+ return rq->data_len;
++ if (cpu_has_fxsr) {
++ memset(&tsk->thread.i387.fxsave, 0,
++ sizeof(struct i387_fxsave_struct));
++ tsk->thread.i387.fxsave.cwd = 0x37f;
++ if (cpu_has_xmm)
++ tsk->thread.i387.fxsave.mxcsr = MXCSR_DEFAULT;
++ } else {
++ memset(&tsk->thread.i387.fsave, 0,
++ sizeof(struct i387_fsave_struct));
++ tsk->thread.i387.fsave.cwd = 0xffff037fu;
++ tsk->thread.i387.fsave.swd = 0xffff0000u;
++ tsk->thread.i387.fsave.twd = 0xffffffffu;
++ tsk->thread.i387.fsave.fos = 0xffff0000u;
++ }
++ /*
++ * Only the device not available exception or ptrace can call init_fpu.
++ */
++ set_stopped_child_used_math(tsk);
+}
-+EXPORT_SYMBOL_GPL(blk_rq_bytes);
+
-+/**
-+ * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
-+ **/
-+unsigned int blk_rq_cur_bytes(struct request *rq)
++int fpregs_active(struct task_struct *target, const struct user_regset *regset)
+{
-+ if (blk_fs_request(rq))
-+ return rq->current_nr_sectors << 9;
-+
-+ if (rq->bio)
-+ return rq->bio->bi_size;
-+
-+ return rq->data_len;
++ return tsk_used_math(target) ? regset->n : 0;
+}
-+EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
+
-+/**
-+ * end_queued_request - end all I/O on a queued request
-+ * @rq: the request being processed
-+ * @uptodate: error value or 0/1 uptodate flag
-+ *
-+ * Description:
-+ * Ends all I/O on a request, and removes it from the block layer queues.
-+ * Not suitable for normal IO completion, unless the driver still has
-+ * the request attached to the block layer.
-+ *
-+ **/
-+void end_queued_request(struct request *rq, int uptodate)
++int xfpregs_active(struct task_struct *target, const struct user_regset *regset)
+{
-+ __end_request(rq, uptodate, blk_rq_bytes(rq));
++ return (cpu_has_fxsr && tsk_used_math(target)) ? regset->n : 0;
+}
-+EXPORT_SYMBOL(end_queued_request);
+
-+/**
-+ * end_dequeued_request - end all I/O on a dequeued request
-+ * @rq: the request being processed
-+ * @uptodate: error value or 0/1 uptodate flag
-+ *
-+ * Description:
-+ * Ends all I/O on a request. The request must already have been
-+ * dequeued using blkdev_dequeue_request(), as is normally the case
-+ * for most drivers.
-+ *
-+ **/
-+void end_dequeued_request(struct request *rq, int uptodate)
++int xfpregs_get(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
+{
-+ __end_request(rq, uptodate, blk_rq_bytes(rq));
-+}
-+EXPORT_SYMBOL(end_dequeued_request);
++ if (!cpu_has_fxsr)
++ return -ENODEV;
+
++ unlazy_fpu(target);
+
-+/**
-+ * end_request - end I/O on the current segment of the request
-+ * @req: the request being processed
-+ * @uptodate: error value or 0/1 uptodate flag
-+ *
-+ * Description:
-+ * Ends I/O on the current segment of a request. If that is the only
-+ * remaining segment, the request is also completed and freed.
-+ *
-+ * This is a remnant of how older block drivers handled IO completions.
-+ * Modern drivers typically end IO on the full request in one go, unless
-+ * they have a residual value to account for. For that case this function
-+ * isn't really useful, unless the residual just happens to be the
-+ * full current segment. In other words, don't use this function in new
-+ * code. Either use end_request_completely(), or the
-+ * end_that_request_chunk() (along with end_that_request_last()) for
-+ * partial completions.
-+ *
-+ **/
-+void end_request(struct request *req, int uptodate)
-+{
-+ __end_request(req, uptodate, req->hard_cur_sectors << 9);
++ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ &target->thread.i387.fxsave, 0, -1);
+}
-+EXPORT_SYMBOL(end_request);
+
-+/**
-+ * blk_end_io - Generic end_io function to complete a request.
-+ * @rq: the request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete @rq
-+ * @bidi_bytes: number of bytes to complete @rq->next_rq
-+ * @drv_callback: function called between completion of bios in the request
-+ * and completion of the request.
-+ * If the callback returns non 0, this helper returns without
-+ * completion of the request.
-+ *
-+ * Description:
-+ * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
-+ * If @rq has leftover, sets it up for the next range of segments.
-+ *
-+ * Return:
-+ * 0 - we are done with this request
-+ * 1 - this request is not freed yet, it still has pending buffers.
-+ **/
-+static int blk_end_io(struct request *rq, int error, int nr_bytes,
-+ int bidi_bytes, int (drv_callback)(struct request *))
++int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
+{
-+ struct request_queue *q = rq->q;
-+ unsigned long flags = 0UL;
-+
-+ if (blk_fs_request(rq) || blk_pc_request(rq)) {
-+ if (__end_that_request_first(rq, error, nr_bytes))
-+ return 1;
++ int ret;
+
-+ /* Bidi request must be completed as a whole */
-+ if (blk_bidi_rq(rq) &&
-+ __end_that_request_first(rq->next_rq, error, bidi_bytes))
-+ return 1;
-+ }
++ if (!cpu_has_fxsr)
++ return -ENODEV;
+
-+ /* Special feature for tricky drivers */
-+ if (drv_callback && drv_callback(rq))
-+ return 1;
++ unlazy_fpu(target);
++ set_stopped_child_used_math(target);
+
-+ add_disk_randomness(rq->rq_disk);
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &target->thread.i387.fxsave, 0, -1);
+
-+ spin_lock_irqsave(q->queue_lock, flags);
-+ end_that_request_last(rq, error);
-+ spin_unlock_irqrestore(q->queue_lock, flags);
++ /*
++ * mxcsr reserved bits must be masked to zero for security reasons.
++ */
++ target->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
+
-+ return 0;
++ return ret;
+}
+
-+/**
-+ * blk_end_request - Helper function for drivers to complete the request.
-+ * @rq: the request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete
-+ *
-+ * Description:
-+ * Ends I/O on a number of bytes attached to @rq.
-+ * If @rq has leftover, sets it up for the next range of segments.
-+ *
-+ * Return:
-+ * 0 - we are done with this request
-+ * 1 - still buffers pending for this request
-+ **/
-+int blk_end_request(struct request *rq, int error, int nr_bytes)
-+{
-+ return blk_end_io(rq, error, nr_bytes, 0, NULL);
-+}
-+EXPORT_SYMBOL_GPL(blk_end_request);
++#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
+
-+/**
-+ * __blk_end_request - Helper function for drivers to complete the request.
-+ * @rq: the request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete
-+ *
-+ * Description:
-+ * Must be called with queue lock held unlike blk_end_request().
-+ *
-+ * Return:
-+ * 0 - we are done with this request
-+ * 1 - still buffers pending for this request
-+ **/
-+int __blk_end_request(struct request *rq, int error, int nr_bytes)
++/*
++ * FPU tag word conversions.
++ */
++
++static inline unsigned short twd_i387_to_fxsr(unsigned short twd)
+{
-+ if (blk_fs_request(rq) || blk_pc_request(rq)) {
-+ if (__end_that_request_first(rq, error, nr_bytes))
-+ return 1;
-+ }
++ unsigned int tmp; /* to avoid 16 bit prefixes in the code */
+
-+ add_disk_randomness(rq->rq_disk);
++ /* Transform each pair of bits into 01 (valid) or 00 (empty) */
++ tmp = ~twd;
++ tmp = (tmp | (tmp>>1)) & 0x5555; /* 0V0V0V0V0V0V0V0V */
++ /* and move the valid bits to the lower byte. */
++ tmp = (tmp | (tmp >> 1)) & 0x3333; /* 00VV00VV00VV00VV */
++ tmp = (tmp | (tmp >> 2)) & 0x0f0f; /* 0000VVVV0000VVVV */
++ tmp = (tmp | (tmp >> 4)) & 0x00ff; /* 00000000VVVVVVVV */
++ return tmp;
++}
+
-+ end_that_request_last(rq, error);
++#define FPREG_ADDR(f, n) ((void *)&(f)->st_space + (n) * 16);
++#define FP_EXP_TAG_VALID 0
++#define FP_EXP_TAG_ZERO 1
++#define FP_EXP_TAG_SPECIAL 2
++#define FP_EXP_TAG_EMPTY 3
++
++static inline u32 twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave)
++{
++ struct _fpxreg *st;
++ u32 tos = (fxsave->swd >> 11) & 7;
++ u32 twd = (unsigned long) fxsave->twd;
++ u32 tag;
++ u32 ret = 0xffff0000u;
++ int i;
+
-+ return 0;
++ for (i = 0; i < 8; i++, twd >>= 1) {
++ if (twd & 0x1) {
++ st = FPREG_ADDR(fxsave, (i - tos) & 7);
++
++ switch (st->exponent & 0x7fff) {
++ case 0x7fff:
++ tag = FP_EXP_TAG_SPECIAL;
++ break;
++ case 0x0000:
++ if (!st->significand[0] &&
++ !st->significand[1] &&
++ !st->significand[2] &&
++ !st->significand[3])
++ tag = FP_EXP_TAG_ZERO;
++ else
++ tag = FP_EXP_TAG_SPECIAL;
++ break;
++ default:
++ if (st->significand[3] & 0x8000)
++ tag = FP_EXP_TAG_VALID;
++ else
++ tag = FP_EXP_TAG_SPECIAL;
++ break;
++ }
++ } else {
++ tag = FP_EXP_TAG_EMPTY;
++ }
++ ret |= tag << (2 * i);
++ }
++ return ret;
+}
-+EXPORT_SYMBOL_GPL(__blk_end_request);
+
-+/**
-+ * blk_end_bidi_request - Helper function for drivers to complete bidi request.
-+ * @rq: the bidi request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete @rq
-+ * @bidi_bytes: number of bytes to complete @rq->next_rq
-+ *
-+ * Description:
-+ * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
-+ *
-+ * Return:
-+ * 0 - we are done with this request
-+ * 1 - still buffers pending for this request
-+ **/
-+int blk_end_bidi_request(struct request *rq, int error, int nr_bytes,
-+ int bidi_bytes)
-+{
-+ return blk_end_io(rq, error, nr_bytes, bidi_bytes, NULL);
-+}
-+EXPORT_SYMBOL_GPL(blk_end_bidi_request);
++/*
++ * FXSR floating point environment conversions.
++ */
+
-+/**
-+ * blk_end_request_callback - Special helper function for tricky drivers
-+ * @rq: the request being processed
-+ * @error: 0 for success, < 0 for error
-+ * @nr_bytes: number of bytes to complete
-+ * @drv_callback: function called between completion of bios in the request
-+ * and completion of the request.
-+ * If the callback returns non 0, this helper returns without
-+ * completion of the request.
-+ *
-+ * Description:
-+ * Ends I/O on a number of bytes attached to @rq.
-+ * If @rq has leftover, sets it up for the next range of segments.
-+ *
-+ * This special helper function is used only for existing tricky drivers.
-+ * (e.g. cdrom_newpc_intr() of ide-cd)
-+ * This interface will be removed when such drivers are rewritten.
-+ * Don't use this interface in other places anymore.
-+ *
-+ * Return:
-+ * 0 - we are done with this request
-+ * 1 - this request is not freed yet.
-+ * this request still has pending buffers or
-+ * the driver doesn't want to finish this request yet.
-+ **/
-+int blk_end_request_callback(struct request *rq, int error, int nr_bytes,
-+ int (drv_callback)(struct request *))
++static void convert_from_fxsr(struct user_i387_ia32_struct *env,
++ struct task_struct *tsk)
+{
-+ return blk_end_io(rq, error, nr_bytes, 0, drv_callback);
++ struct i387_fxsave_struct *fxsave = &tsk->thread.i387.fxsave;
++ struct _fpreg *to = (struct _fpreg *) &env->st_space[0];
++ struct _fpxreg *from = (struct _fpxreg *) &fxsave->st_space[0];
++ int i;
++
++ env->cwd = fxsave->cwd | 0xffff0000u;
++ env->swd = fxsave->swd | 0xffff0000u;
++ env->twd = twd_fxsr_to_i387(fxsave);
++
++#ifdef CONFIG_X86_64
++ env->fip = fxsave->rip;
++ env->foo = fxsave->rdp;
++ if (tsk == current) {
++ /*
++ * should be actually ds/cs at fpu exception time, but
++ * that information is not available in 64bit mode.
++ */
++ asm("mov %%ds,%0" : "=r" (env->fos));
++ asm("mov %%cs,%0" : "=r" (env->fcs));
++ } else {
++ struct pt_regs *regs = task_pt_regs(tsk);
++ env->fos = 0xffff0000 | tsk->thread.ds;
++ env->fcs = regs->cs;
++ }
++#else
++ env->fip = fxsave->fip;
++ env->fcs = fxsave->fcs;
++ env->foo = fxsave->foo;
++ env->fos = fxsave->fos;
++#endif
++
++ for (i = 0; i < 8; ++i)
++ memcpy(&to[i], &from[i], sizeof(to[0]));
+}
-+EXPORT_SYMBOL_GPL(blk_end_request_callback);
+
-+void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-+ struct bio *bio)
-+{
-+ /* first two bits are identical in rq->cmd_flags and bio->bi_rw */
-+ rq->cmd_flags |= (bio->bi_rw & 3);
++static void convert_to_fxsr(struct task_struct *tsk,
++ const struct user_i387_ia32_struct *env)
+
-+ rq->nr_phys_segments = bio_phys_segments(q, bio);
-+ rq->nr_hw_segments = bio_hw_segments(q, bio);
-+ rq->current_nr_sectors = bio_cur_sectors(bio);
-+ rq->hard_cur_sectors = rq->current_nr_sectors;
-+ rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
-+ rq->buffer = bio_data(bio);
-+ rq->data_len = bio->bi_size;
++{
++ struct i387_fxsave_struct *fxsave = &tsk->thread.i387.fxsave;
++ struct _fpreg *from = (struct _fpreg *) &env->st_space[0];
++ struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0];
++ int i;
+
-+ rq->bio = rq->biotail = bio;
++ fxsave->cwd = env->cwd;
++ fxsave->swd = env->swd;
++ fxsave->twd = twd_i387_to_fxsr(env->twd);
++ fxsave->fop = (u16) ((u32) env->fcs >> 16);
++#ifdef CONFIG_X86_64
++ fxsave->rip = env->fip;
++ fxsave->rdp = env->foo;
++ /* cs and ds ignored */
++#else
++ fxsave->fip = env->fip;
++ fxsave->fcs = (env->fcs & 0xffff);
++ fxsave->foo = env->foo;
++ fxsave->fos = env->fos;
++#endif
+
-+ if (bio->bi_bdev)
-+ rq->rq_disk = bio->bi_bdev->bd_disk;
++ for (i = 0; i < 8; ++i)
++ memcpy(&to[i], &from[i], sizeof(from[0]));
+}
+
-+int kblockd_schedule_work(struct work_struct *work)
++int fpregs_get(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
+{
-+ return queue_work(kblockd_workqueue, work);
-+}
++ struct user_i387_ia32_struct env;
+
-+EXPORT_SYMBOL(kblockd_schedule_work);
++ if (!HAVE_HWFP)
++ return fpregs_soft_get(target, regset, pos, count, kbuf, ubuf);
+
-+void kblockd_flush_work(struct work_struct *work)
-+{
-+ cancel_work_sync(work);
++ unlazy_fpu(target);
++
++ if (!cpu_has_fxsr)
++ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ &target->thread.i387.fsave, 0, -1);
++
++ if (kbuf && pos == 0 && count == sizeof(env)) {
++ convert_from_fxsr(kbuf, target);
++ return 0;
++ }
++
++ convert_from_fxsr(&env, target);
++ return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &env, 0, -1);
+}
-+EXPORT_SYMBOL(kblockd_flush_work);
+
-+int __init blk_dev_init(void)
++int fpregs_set(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
+{
-+ int i;
++ struct user_i387_ia32_struct env;
++ int ret;
+
-+ kblockd_workqueue = create_workqueue("kblockd");
-+ if (!kblockd_workqueue)
-+ panic("Failed to create kblockd\n");
++ if (!HAVE_HWFP)
++ return fpregs_soft_set(target, regset, pos, count, kbuf, ubuf);
+
-+ request_cachep = kmem_cache_create("blkdev_requests",
-+ sizeof(struct request), 0, SLAB_PANIC, NULL);
++ unlazy_fpu(target);
++ set_stopped_child_used_math(target);
+
-+ blk_requestq_cachep = kmem_cache_create("blkdev_queue",
-+ sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
++ if (!cpu_has_fxsr)
++ return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &target->thread.i387.fsave, 0, -1);
+
-+ for_each_possible_cpu(i)
-+ INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));
++ if (pos > 0 || count < sizeof(env))
++ convert_from_fxsr(&env, target);
+
-+ open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
-+ register_hotcpu_notifier(&blk_cpu_notifier);
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &env, 0, -1);
++ if (!ret)
++ convert_to_fxsr(target, &env);
+
-+ return 0;
++ return ret;
+}
+
-diff --git a/block/blk-exec.c b/block/blk-exec.c
-new file mode 100644
-index 0000000..ebfb44e
---- /dev/null
-+++ b/block/blk-exec.c
-@@ -0,0 +1,105 @@
+/*
-+ * Functions related to setting various queue properties from drivers
++ * Signal frame handlers.
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
+
-+#include "blk.h"
++static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf)
++{
++ struct task_struct *tsk = current;
+
-+/*
-+ * for max sense size
-+ */
-+#include <scsi/scsi_cmnd.h>
++ unlazy_fpu(tsk);
++ tsk->thread.i387.fsave.status = tsk->thread.i387.fsave.swd;
++ if (__copy_to_user(buf, &tsk->thread.i387.fsave,
++ sizeof(struct i387_fsave_struct)))
++ return -1;
++ return 1;
++}
+
-+/**
-+ * blk_end_sync_rq - executes a completion event on a request
-+ * @rq: request to complete
-+ * @error: end io status of the request
-+ */
-+void blk_end_sync_rq(struct request *rq, int error)
++static int save_i387_fxsave(struct _fpstate_ia32 __user *buf)
+{
-+ struct completion *waiting = rq->end_io_data;
++ struct task_struct *tsk = current;
++ struct user_i387_ia32_struct env;
++ int err = 0;
+
-+ rq->end_io_data = NULL;
-+ __blk_put_request(rq->q, rq);
++ unlazy_fpu(tsk);
+
-+ /*
-+ * complete last, if this is a stack request the process (and thus
-+ * the rq pointer) could be invalid right after this complete()
-+ */
-+ complete(waiting);
-+}
-+EXPORT_SYMBOL(blk_end_sync_rq);
++ convert_from_fxsr(&env, tsk);
++ if (__copy_to_user(buf, &env, sizeof(env)))
++ return -1;
+
-+/**
-+ * blk_execute_rq_nowait - insert a request into queue for execution
-+ * @q: queue to insert the request in
-+ * @bd_disk: matching gendisk
-+ * @rq: request to insert
-+ * @at_head: insert request at head or tail of queue
-+ * @done: I/O completion handler
-+ *
-+ * Description:
-+ * Insert a fully prepared request at the back of the io scheduler queue
-+ * for execution. Don't wait for completion.
-+ */
-+void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
-+ struct request *rq, int at_head,
-+ rq_end_io_fn *done)
-+{
-+ int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
++ err |= __put_user(tsk->thread.i387.fxsave.swd, &buf->status);
++ err |= __put_user(X86_FXSR_MAGIC, &buf->magic);
++ if (err)
++ return -1;
+
-+ rq->rq_disk = bd_disk;
-+ rq->cmd_flags |= REQ_NOMERGE;
-+ rq->end_io = done;
-+ WARN_ON(irqs_disabled());
-+ spin_lock_irq(q->queue_lock);
-+ __elv_add_request(q, rq, where, 1);
-+ __generic_unplug_device(q);
-+ spin_unlock_irq(q->queue_lock);
++ if (__copy_to_user(&buf->_fxsr_env[0], &tsk->thread.i387.fxsave,
++ sizeof(struct i387_fxsave_struct)))
++ return -1;
++ return 1;
+}
-+EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
+
-+/**
-+ * blk_execute_rq - insert a request into queue for execution
-+ * @q: queue to insert the request in
-+ * @bd_disk: matching gendisk
-+ * @rq: request to insert
-+ * @at_head: insert request at head or tail of queue
-+ *
-+ * Description:
-+ * Insert a fully prepared request at the back of the io scheduler queue
-+ * for execution and wait for completion.
-+ */
-+int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
-+ struct request *rq, int at_head)
++int save_i387_ia32(struct _fpstate_ia32 __user *buf)
+{
-+ DECLARE_COMPLETION_ONSTACK(wait);
-+ char sense[SCSI_SENSE_BUFFERSIZE];
-+ int err = 0;
++ if (!used_math())
++ return 0;
+
-+ /*
-+ * we need an extra reference to the request, so we can look at
-+ * it after io completion
++ /* This will cause a "finit" to be triggered by the next
++ * attempted FPU operation by the 'current' process.
+ */
-+ rq->ref_count++;
++ clear_used_math();
+
-+ if (!rq->sense) {
-+ memset(sense, 0, sizeof(sense));
-+ rq->sense = sense;
-+ rq->sense_len = 0;
++ if (HAVE_HWFP) {
++ if (cpu_has_fxsr) {
++ return save_i387_fxsave(buf);
++ } else {
++ return save_i387_fsave(buf);
++ }
++ } else {
++ return fpregs_soft_get(current, NULL,
++ 0, sizeof(struct user_i387_ia32_struct),
++ NULL, buf) ? -1 : 1;
+ }
++}
+
-+ rq->end_io_data = &wait;
-+ blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);
-+ wait_for_completion(&wait);
++static inline int restore_i387_fsave(struct _fpstate_ia32 __user *buf)
++{
++ struct task_struct *tsk = current;
++ clear_fpu(tsk);
++ return __copy_from_user(&tsk->thread.i387.fsave, buf,
++ sizeof(struct i387_fsave_struct));
++}
+
-+ if (rq->errors)
-+ err = -EIO;
++static int restore_i387_fxsave(struct _fpstate_ia32 __user *buf)
++{
++ int err;
++ struct task_struct *tsk = current;
++ struct user_i387_ia32_struct env;
++ clear_fpu(tsk);
++ err = __copy_from_user(&tsk->thread.i387.fxsave, &buf->_fxsr_env[0],
++ sizeof(struct i387_fxsave_struct));
++ /* mxcsr reserved bits must be masked to zero for security reasons */
++ tsk->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
++ if (err || __copy_from_user(&env, buf, sizeof(env)))
++ return 1;
++ convert_to_fxsr(tsk, &env);
++ return 0;
++}
++
++int restore_i387_ia32(struct _fpstate_ia32 __user *buf)
++{
++ int err;
+
++ if (HAVE_HWFP) {
++ if (cpu_has_fxsr) {
++ err = restore_i387_fxsave(buf);
++ } else {
++ err = restore_i387_fsave(buf);
++ }
++ } else {
++ err = fpregs_soft_set(current, NULL,
++ 0, sizeof(struct user_i387_ia32_struct),
++ NULL, buf) != 0;
++ }
++ set_used_math();
+ return err;
+}
+
-+EXPORT_SYMBOL(blk_execute_rq);
-diff --git a/block/blk-ioc.c b/block/blk-ioc.c
-new file mode 100644
-index 0000000..6d16755
---- /dev/null
-+++ b/block/blk-ioc.c
-@@ -0,0 +1,194 @@
+/*
-+ * Functions related to io context handling
++ * FPU state for core dumps.
++ * This is only used for a.out dumps now.
++ * It is declared generically using elf_fpregset_t (which is
++ * struct user_i387_struct) but is in fact only used for 32-bit
++ * dumps, so on 64-bit it is really struct user_i387_ia32_struct.
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/init.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
++int dump_fpu(struct pt_regs *regs, struct user_i387_struct *fpu)
++{
++ int fpvalid;
++ struct task_struct *tsk = current;
+
-+#include "blk.h"
++ fpvalid = !!used_math();
++ if (fpvalid)
++ fpvalid = !fpregs_get(tsk, NULL,
++ 0, sizeof(struct user_i387_ia32_struct),
++ fpu, NULL);
+
-+/*
-+ * For io context allocations
-+ */
-+static struct kmem_cache *iocontext_cachep;
++ return fpvalid;
++}
++EXPORT_SYMBOL(dump_fpu);
+
-+static void cfq_dtor(struct io_context *ioc)
-+{
-+ struct cfq_io_context *cic[1];
-+ int r;
++#endif /* CONFIG_X86_32 || CONFIG_IA32_EMULATION */
+diff --git a/arch/x86/kernel/i387_32.c b/arch/x86/kernel/i387_32.c
+deleted file mode 100644
+index 7d2e12f..0000000
+--- a/arch/x86/kernel/i387_32.c
++++ /dev/null
+@@ -1,544 +0,0 @@
+-/*
+- * Copyright (C) 1994 Linus Torvalds
+- *
+- * Pentium III FXSR, SSE support
+- * General FPU state handling cleanups
+- * Gareth Hughes <gareth at valinux.com>, May 2000
+- */
+-
+-#include <linux/sched.h>
+-#include <linux/module.h>
+-#include <asm/processor.h>
+-#include <asm/i387.h>
+-#include <asm/math_emu.h>
+-#include <asm/sigcontext.h>
+-#include <asm/user.h>
+-#include <asm/ptrace.h>
+-#include <asm/uaccess.h>
+-
+-#ifdef CONFIG_MATH_EMULATION
+-#define HAVE_HWFP (boot_cpu_data.hard_math)
+-#else
+-#define HAVE_HWFP 1
+-#endif
+-
+-static unsigned long mxcsr_feature_mask __read_mostly = 0xffffffff;
+-
+-void mxcsr_feature_mask_init(void)
+-{
+- unsigned long mask = 0;
+- clts();
+- if (cpu_has_fxsr) {
+- memset(&current->thread.i387.fxsave, 0, sizeof(struct i387_fxsave_struct));
+- asm volatile("fxsave %0" : : "m" (current->thread.i387.fxsave));
+- mask = current->thread.i387.fxsave.mxcsr_mask;
+- if (mask == 0) mask = 0x0000ffbf;
+- }
+- mxcsr_feature_mask &= mask;
+- stts();
+-}
+-
+-/*
+- * The _current_ task is using the FPU for the first time
+- * so initialize it and set the mxcsr to its default
+- * value at reset if we support XMM instructions and then
+- * remember the current task has used the FPU.
+- */
+-void init_fpu(struct task_struct *tsk)
+-{
+- if (cpu_has_fxsr) {
+- memset(&tsk->thread.i387.fxsave, 0, sizeof(struct i387_fxsave_struct));
+- tsk->thread.i387.fxsave.cwd = 0x37f;
+- if (cpu_has_xmm)
+- tsk->thread.i387.fxsave.mxcsr = 0x1f80;
+- } else {
+- memset(&tsk->thread.i387.fsave, 0, sizeof(struct i387_fsave_struct));
+- tsk->thread.i387.fsave.cwd = 0xffff037fu;
+- tsk->thread.i387.fsave.swd = 0xffff0000u;
+- tsk->thread.i387.fsave.twd = 0xffffffffu;
+- tsk->thread.i387.fsave.fos = 0xffff0000u;
+- }
+- /* only the device not available exception or ptrace can call init_fpu */
+- set_stopped_child_used_math(tsk);
+-}
+-
+-/*
+- * FPU lazy state save handling.
+- */
+-
+-void kernel_fpu_begin(void)
+-{
+- struct thread_info *thread = current_thread_info();
+-
+- preempt_disable();
+- if (thread->status & TS_USEDFPU) {
+- __save_init_fpu(thread->task);
+- return;
+- }
+- clts();
+-}
+-EXPORT_SYMBOL_GPL(kernel_fpu_begin);
+-
+-/*
+- * FPU tag word conversions.
+- */
+-
+-static inline unsigned short twd_i387_to_fxsr( unsigned short twd )
+-{
+- unsigned int tmp; /* to avoid 16 bit prefixes in the code */
+-
+- /* Transform each pair of bits into 01 (valid) or 00 (empty) */
+- tmp = ~twd;
+- tmp = (tmp | (tmp>>1)) & 0x5555; /* 0V0V0V0V0V0V0V0V */
+- /* and move the valid bits to the lower byte. */
+- tmp = (tmp | (tmp >> 1)) & 0x3333; /* 00VV00VV00VV00VV */
+- tmp = (tmp | (tmp >> 2)) & 0x0f0f; /* 0000VVVV0000VVVV */
+- tmp = (tmp | (tmp >> 4)) & 0x00ff; /* 00000000VVVVVVVV */
+- return tmp;
+-}
+-
+-static inline unsigned long twd_fxsr_to_i387( struct i387_fxsave_struct *fxsave )
+-{
+- struct _fpxreg *st = NULL;
+- unsigned long tos = (fxsave->swd >> 11) & 7;
+- unsigned long twd = (unsigned long) fxsave->twd;
+- unsigned long tag;
+- unsigned long ret = 0xffff0000u;
+- int i;
+-
+-#define FPREG_ADDR(f, n) ((void *)&(f)->st_space + (n) * 16);
+-
+- for ( i = 0 ; i < 8 ; i++ ) {
+- if ( twd & 0x1 ) {
+- st = FPREG_ADDR( fxsave, (i - tos) & 7 );
+-
+- switch ( st->exponent & 0x7fff ) {
+- case 0x7fff:
+- tag = 2; /* Special */
+- break;
+- case 0x0000:
+- if ( !st->significand[0] &&
+- !st->significand[1] &&
+- !st->significand[2] &&
+- !st->significand[3] ) {
+- tag = 1; /* Zero */
+- } else {
+- tag = 2; /* Special */
+- }
+- break;
+- default:
+- if ( st->significand[3] & 0x8000 ) {
+- tag = 0; /* Valid */
+- } else {
+- tag = 2; /* Special */
+- }
+- break;
+- }
+- } else {
+- tag = 3; /* Empty */
+- }
+- ret |= (tag << (2 * i));
+- twd = twd >> 1;
+- }
+- return ret;
+-}
+-
+-/*
+- * FPU state interaction.
+- */
+-
+-unsigned short get_fpu_cwd( struct task_struct *tsk )
+-{
+- if ( cpu_has_fxsr ) {
+- return tsk->thread.i387.fxsave.cwd;
+- } else {
+- return (unsigned short)tsk->thread.i387.fsave.cwd;
+- }
+-}
+-
+-unsigned short get_fpu_swd( struct task_struct *tsk )
+-{
+- if ( cpu_has_fxsr ) {
+- return tsk->thread.i387.fxsave.swd;
+- } else {
+- return (unsigned short)tsk->thread.i387.fsave.swd;
+- }
+-}
+-
+-#if 0
+-unsigned short get_fpu_twd( struct task_struct *tsk )
+-{
+- if ( cpu_has_fxsr ) {
+- return tsk->thread.i387.fxsave.twd;
+- } else {
+- return (unsigned short)tsk->thread.i387.fsave.twd;
+- }
+-}
+-#endif /* 0 */
+-
+-unsigned short get_fpu_mxcsr( struct task_struct *tsk )
+-{
+- if ( cpu_has_xmm ) {
+- return tsk->thread.i387.fxsave.mxcsr;
+- } else {
+- return 0x1f80;
+- }
+-}
+-
+-#if 0
+-
+-void set_fpu_cwd( struct task_struct *tsk, unsigned short cwd )
+-{
+- if ( cpu_has_fxsr ) {
+- tsk->thread.i387.fxsave.cwd = cwd;
+- } else {
+- tsk->thread.i387.fsave.cwd = ((long)cwd | 0xffff0000u);
+- }
+-}
+-
+-void set_fpu_swd( struct task_struct *tsk, unsigned short swd )
+-{
+- if ( cpu_has_fxsr ) {
+- tsk->thread.i387.fxsave.swd = swd;
+- } else {
+- tsk->thread.i387.fsave.swd = ((long)swd | 0xffff0000u);
+- }
+-}
+-
+-void set_fpu_twd( struct task_struct *tsk, unsigned short twd )
+-{
+- if ( cpu_has_fxsr ) {
+- tsk->thread.i387.fxsave.twd = twd_i387_to_fxsr(twd);
+- } else {
+- tsk->thread.i387.fsave.twd = ((long)twd | 0xffff0000u);
+- }
+-}
+-
+-#endif /* 0 */
+-
+-/*
+- * FXSR floating point environment conversions.
+- */
+-
+-static int convert_fxsr_to_user( struct _fpstate __user *buf,
+- struct i387_fxsave_struct *fxsave )
+-{
+- unsigned long env[7];
+- struct _fpreg __user *to;
+- struct _fpxreg *from;
+- int i;
+-
+- env[0] = (unsigned long)fxsave->cwd | 0xffff0000ul;
+- env[1] = (unsigned long)fxsave->swd | 0xffff0000ul;
+- env[2] = twd_fxsr_to_i387(fxsave);
+- env[3] = fxsave->fip;
+- env[4] = fxsave->fcs | ((unsigned long)fxsave->fop << 16);
+- env[5] = fxsave->foo;
+- env[6] = fxsave->fos;
+-
+- if ( __copy_to_user( buf, env, 7 * sizeof(unsigned long) ) )
+- return 1;
+-
+- to = &buf->_st[0];
+- from = (struct _fpxreg *) &fxsave->st_space[0];
+- for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
+- unsigned long __user *t = (unsigned long __user *)to;
+- unsigned long *f = (unsigned long *)from;
+-
+- if (__put_user(*f, t) ||
+- __put_user(*(f + 1), t + 1) ||
+- __put_user(from->exponent, &to->exponent))
+- return 1;
+- }
+- return 0;
+-}
+-
+-static int convert_fxsr_from_user( struct i387_fxsave_struct *fxsave,
+- struct _fpstate __user *buf )
+-{
+- unsigned long env[7];
+- struct _fpxreg *to;
+- struct _fpreg __user *from;
+- int i;
+-
+- if ( __copy_from_user( env, buf, 7 * sizeof(long) ) )
+- return 1;
+-
+- fxsave->cwd = (unsigned short)(env[0] & 0xffff);
+- fxsave->swd = (unsigned short)(env[1] & 0xffff);
+- fxsave->twd = twd_i387_to_fxsr((unsigned short)(env[2] & 0xffff));
+- fxsave->fip = env[3];
+- fxsave->fop = (unsigned short)((env[4] & 0xffff0000ul) >> 16);
+- fxsave->fcs = (env[4] & 0xffff);
+- fxsave->foo = env[5];
+- fxsave->fos = env[6];
+-
+- to = (struct _fpxreg *) &fxsave->st_space[0];
+- from = &buf->_st[0];
+- for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
+- unsigned long *t = (unsigned long *)to;
+- unsigned long __user *f = (unsigned long __user *)from;
+-
+- if (__get_user(*t, f) ||
+- __get_user(*(t + 1), f + 1) ||
+- __get_user(to->exponent, &from->exponent))
+- return 1;
+- }
+- return 0;
+-}
+-
+-/*
+- * Signal frame handlers.
+- */
+-
+-static inline int save_i387_fsave( struct _fpstate __user *buf )
+-{
+- struct task_struct *tsk = current;
+-
+- unlazy_fpu( tsk );
+- tsk->thread.i387.fsave.status = tsk->thread.i387.fsave.swd;
+- if ( __copy_to_user( buf, &tsk->thread.i387.fsave,
+- sizeof(struct i387_fsave_struct) ) )
+- return -1;
+- return 1;
+-}
+-
+-static int save_i387_fxsave( struct _fpstate __user *buf )
+-{
+- struct task_struct *tsk = current;
+- int err = 0;
+-
+- unlazy_fpu( tsk );
+-
+- if ( convert_fxsr_to_user( buf, &tsk->thread.i387.fxsave ) )
+- return -1;
+-
+- err |= __put_user( tsk->thread.i387.fxsave.swd, &buf->status );
+- err |= __put_user( X86_FXSR_MAGIC, &buf->magic );
+- if ( err )
+- return -1;
+-
+- if ( __copy_to_user( &buf->_fxsr_env[0], &tsk->thread.i387.fxsave,
+- sizeof(struct i387_fxsave_struct) ) )
+- return -1;
+- return 1;
+-}
+-
+-int save_i387( struct _fpstate __user *buf )
+-{
+- if ( !used_math() )
+- return 0;
+-
+- /* This will cause a "finit" to be triggered by the next
+- * attempted FPU operation by the 'current' process.
+- */
+- clear_used_math();
+-
+- if ( HAVE_HWFP ) {
+- if ( cpu_has_fxsr ) {
+- return save_i387_fxsave( buf );
+- } else {
+- return save_i387_fsave( buf );
+- }
+- } else {
+- return save_i387_soft( &current->thread.i387.soft, buf );
+- }
+-}
+-
+-static inline int restore_i387_fsave( struct _fpstate __user *buf )
+-{
+- struct task_struct *tsk = current;
+- clear_fpu( tsk );
+- return __copy_from_user( &tsk->thread.i387.fsave, buf,
+- sizeof(struct i387_fsave_struct) );
+-}
+-
+-static int restore_i387_fxsave( struct _fpstate __user *buf )
+-{
+- int err;
+- struct task_struct *tsk = current;
+- clear_fpu( tsk );
+- err = __copy_from_user( &tsk->thread.i387.fxsave, &buf->_fxsr_env[0],
+- sizeof(struct i387_fxsave_struct) );
+- /* mxcsr reserved bits must be masked to zero for security reasons */
+- tsk->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
+- return err ? 1 : convert_fxsr_from_user( &tsk->thread.i387.fxsave, buf );
+-}
+-
+-int restore_i387( struct _fpstate __user *buf )
+-{
+- int err;
+-
+- if ( HAVE_HWFP ) {
+- if ( cpu_has_fxsr ) {
+- err = restore_i387_fxsave( buf );
+- } else {
+- err = restore_i387_fsave( buf );
+- }
+- } else {
+- err = restore_i387_soft( &current->thread.i387.soft, buf );
+- }
+- set_used_math();
+- return err;
+-}
+-
+-/*
+- * ptrace request handlers.
+- */
+-
+-static inline int get_fpregs_fsave( struct user_i387_struct __user *buf,
+- struct task_struct *tsk )
+-{
+- return __copy_to_user( buf, &tsk->thread.i387.fsave,
+- sizeof(struct user_i387_struct) );
+-}
+-
+-static inline int get_fpregs_fxsave( struct user_i387_struct __user *buf,
+- struct task_struct *tsk )
+-{
+- return convert_fxsr_to_user( (struct _fpstate __user *)buf,
+- &tsk->thread.i387.fxsave );
+-}
+-
+-int get_fpregs( struct user_i387_struct __user *buf, struct task_struct *tsk )
+-{
+- if ( HAVE_HWFP ) {
+- if ( cpu_has_fxsr ) {
+- return get_fpregs_fxsave( buf, tsk );
+- } else {
+- return get_fpregs_fsave( buf, tsk );
+- }
+- } else {
+- return save_i387_soft( &tsk->thread.i387.soft,
+- (struct _fpstate __user *)buf );
+- }
+-}
+-
+-static inline int set_fpregs_fsave( struct task_struct *tsk,
+- struct user_i387_struct __user *buf )
+-{
+- return __copy_from_user( &tsk->thread.i387.fsave, buf,
+- sizeof(struct user_i387_struct) );
+-}
+-
+-static inline int set_fpregs_fxsave( struct task_struct *tsk,
+- struct user_i387_struct __user *buf )
+-{
+- return convert_fxsr_from_user( &tsk->thread.i387.fxsave,
+- (struct _fpstate __user *)buf );
+-}
+-
+-int set_fpregs( struct task_struct *tsk, struct user_i387_struct __user *buf )
+-{
+- if ( HAVE_HWFP ) {
+- if ( cpu_has_fxsr ) {
+- return set_fpregs_fxsave( tsk, buf );
+- } else {
+- return set_fpregs_fsave( tsk, buf );
+- }
+- } else {
+- return restore_i387_soft( &tsk->thread.i387.soft,
+- (struct _fpstate __user *)buf );
+- }
+-}
+-
+-int get_fpxregs( struct user_fxsr_struct __user *buf, struct task_struct *tsk )
+-{
+- if ( cpu_has_fxsr ) {
+- if (__copy_to_user( buf, &tsk->thread.i387.fxsave,
+- sizeof(struct user_fxsr_struct) ))
+- return -EFAULT;
+- return 0;
+- } else {
+- return -EIO;
+- }
+-}
+-
+-int set_fpxregs( struct task_struct *tsk, struct user_fxsr_struct __user *buf )
+-{
+- int ret = 0;
+-
+- if ( cpu_has_fxsr ) {
+- if (__copy_from_user( &tsk->thread.i387.fxsave, buf,
+- sizeof(struct user_fxsr_struct) ))
+- ret = -EFAULT;
+- /* mxcsr reserved bits must be masked to zero for security reasons */
+- tsk->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
+- } else {
+- ret = -EIO;
+- }
+- return ret;
+-}
+-
+-/*
+- * FPU state for core dumps.
+- */
+-
+-static inline void copy_fpu_fsave( struct task_struct *tsk,
+- struct user_i387_struct *fpu )
+-{
+- memcpy( fpu, &tsk->thread.i387.fsave,
+- sizeof(struct user_i387_struct) );
+-}
+-
+-static inline void copy_fpu_fxsave( struct task_struct *tsk,
+- struct user_i387_struct *fpu )
+-{
+- unsigned short *to;
+- unsigned short *from;
+- int i;
+-
+- memcpy( fpu, &tsk->thread.i387.fxsave, 7 * sizeof(long) );
+-
+- to = (unsigned short *)&fpu->st_space[0];
+- from = (unsigned short *)&tsk->thread.i387.fxsave.st_space[0];
+- for ( i = 0 ; i < 8 ; i++, to += 5, from += 8 ) {
+- memcpy( to, from, 5 * sizeof(unsigned short) );
+- }
+-}
+-
+-int dump_fpu( struct pt_regs *regs, struct user_i387_struct *fpu )
+-{
+- int fpvalid;
+- struct task_struct *tsk = current;
+-
+- fpvalid = !!used_math();
+- if ( fpvalid ) {
+- unlazy_fpu( tsk );
+- if ( cpu_has_fxsr ) {
+- copy_fpu_fxsave( tsk, fpu );
+- } else {
+- copy_fpu_fsave( tsk, fpu );
+- }
+- }
+-
+- return fpvalid;
+-}
+-EXPORT_SYMBOL(dump_fpu);
+-
+-int dump_task_fpu(struct task_struct *tsk, struct user_i387_struct *fpu)
+-{
+- int fpvalid = !!tsk_used_math(tsk);
+-
+- if (fpvalid) {
+- if (tsk == current)
+- unlazy_fpu(tsk);
+- if (cpu_has_fxsr)
+- copy_fpu_fxsave(tsk, fpu);
+- else
+- copy_fpu_fsave(tsk, fpu);
+- }
+- return fpvalid;
+-}
+-
+-int dump_task_extended_fpu(struct task_struct *tsk, struct user_fxsr_struct *fpu)
+-{
+- int fpvalid = tsk_used_math(tsk) && cpu_has_fxsr;
+-
+- if (fpvalid) {
+- if (tsk == current)
+- unlazy_fpu(tsk);
+- memcpy(fpu, &tsk->thread.i387.fxsave, sizeof(*fpu));
+- }
+- return fpvalid;
+-}
+diff --git a/arch/x86/kernel/i387_64.c b/arch/x86/kernel/i387_64.c
+deleted file mode 100644
+index bfaff28..0000000
+--- a/arch/x86/kernel/i387_64.c
++++ /dev/null
+@@ -1,150 +0,0 @@
+-/*
+- * Copyright (C) 1994 Linus Torvalds
+- * Copyright (C) 2002 Andi Kleen, SuSE Labs
+- *
+- * Pentium III FXSR, SSE support
+- * General FPU state handling cleanups
+- * Gareth Hughes <gareth at valinux.com>, May 2000
+- *
+- * x86-64 rework 2002 Andi Kleen.
+- * Does direct fxsave in and out of user space now for signal handlers.
+- * All the FSAVE<->FXSAVE conversion code has been moved to the 32bit emulation,
+- * the 64bit user space sees a FXSAVE frame directly.
+- */
+-
+-#include <linux/sched.h>
+-#include <linux/init.h>
+-#include <asm/processor.h>
+-#include <asm/i387.h>
+-#include <asm/sigcontext.h>
+-#include <asm/user.h>
+-#include <asm/ptrace.h>
+-#include <asm/uaccess.h>
+-
+-unsigned int mxcsr_feature_mask __read_mostly = 0xffffffff;
+-
+-void mxcsr_feature_mask_init(void)
+-{
+- unsigned int mask;
+- clts();
+- memset(&current->thread.i387.fxsave, 0, sizeof(struct i387_fxsave_struct));
+- asm volatile("fxsave %0" : : "m" (current->thread.i387.fxsave));
+- mask = current->thread.i387.fxsave.mxcsr_mask;
+- if (mask == 0) mask = 0x0000ffbf;
+- mxcsr_feature_mask &= mask;
+- stts();
+-}
+-
+-/*
+- * Called at bootup to set up the initial FPU state that is later cloned
+- * into all processes.
+- */
+-void __cpuinit fpu_init(void)
+-{
+- unsigned long oldcr0 = read_cr0();
+- extern void __bad_fxsave_alignment(void);
+-
+- if (offsetof(struct task_struct, thread.i387.fxsave) & 15)
+- __bad_fxsave_alignment();
+- set_in_cr4(X86_CR4_OSFXSR);
+- set_in_cr4(X86_CR4_OSXMMEXCPT);
+-
+- write_cr0(oldcr0 & ~((1UL<<3)|(1UL<<2))); /* clear TS and EM */
+-
+- mxcsr_feature_mask_init();
+- /* clean state in init */
+- current_thread_info()->status = 0;
+- clear_used_math();
+-}
+-
+-void init_fpu(struct task_struct *child)
+-{
+- if (tsk_used_math(child)) {
+- if (child == current)
+- unlazy_fpu(child);
+- return;
+- }
+- memset(&child->thread.i387.fxsave, 0, sizeof(struct i387_fxsave_struct));
+- child->thread.i387.fxsave.cwd = 0x37f;
+- child->thread.i387.fxsave.mxcsr = 0x1f80;
+- /* only the device not available exception or ptrace can call init_fpu */
+- set_stopped_child_used_math(child);
+-}
+-
+-/*
+- * Signal frame handlers.
+- */
+-
+-int save_i387(struct _fpstate __user *buf)
+-{
+- struct task_struct *tsk = current;
+- int err = 0;
+-
+- BUILD_BUG_ON(sizeof(struct user_i387_struct) !=
+- sizeof(tsk->thread.i387.fxsave));
+-
+- if ((unsigned long)buf % 16)
+- printk("save_i387: bad fpstate %p\n",buf);
+-
+- if (!used_math())
+- return 0;
+- clear_used_math(); /* trigger finit */
+- if (task_thread_info(tsk)->status & TS_USEDFPU) {
+- err = save_i387_checking((struct i387_fxsave_struct __user *)buf);
+- if (err) return err;
+- task_thread_info(tsk)->status &= ~TS_USEDFPU;
+- stts();
+- } else {
+- if (__copy_to_user(buf, &tsk->thread.i387.fxsave,
+- sizeof(struct i387_fxsave_struct)))
+- return -1;
+- }
+- return 1;
+-}
+-
+-/*
+- * ptrace request handlers.
+- */
+-
+-int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *tsk)
+-{
+- init_fpu(tsk);
+- return __copy_to_user(buf, &tsk->thread.i387.fxsave,
+- sizeof(struct user_i387_struct)) ? -EFAULT : 0;
+-}
+-
+-int set_fpregs(struct task_struct *tsk, struct user_i387_struct __user *buf)
+-{
+- if (__copy_from_user(&tsk->thread.i387.fxsave, buf,
+- sizeof(struct user_i387_struct)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-/*
+- * FPU state for core dumps.
+- */
+-
+-int dump_fpu( struct pt_regs *regs, struct user_i387_struct *fpu )
+-{
+- struct task_struct *tsk = current;
+-
+- if (!used_math())
+- return 0;
+-
+- unlazy_fpu(tsk);
+- memcpy(fpu, &tsk->thread.i387.fxsave, sizeof(struct user_i387_struct));
+- return 1;
+-}
+-
+-int dump_task_fpu(struct task_struct *tsk, struct user_i387_struct *fpu)
+-{
+- int fpvalid = !!tsk_used_math(tsk);
+-
+- if (fpvalid) {
+- if (tsk == current)
+- unlazy_fpu(tsk);
+- memcpy(fpu, &tsk->thread.i387.fxsave, sizeof(struct user_i387_struct));
+-}
+- return fpvalid;
+-}
+diff --git a/arch/x86/kernel/i8237.c b/arch/x86/kernel/i8237.c
+index 2931383..dbd6c1d 100644
+--- a/arch/x86/kernel/i8237.c
++++ b/arch/x86/kernel/i8237.c
+@@ -51,7 +51,7 @@ static int i8237A_suspend(struct sys_device *dev, pm_message_t state)
+ }
+
+ static struct sysdev_class i8237_sysdev_class = {
+- set_kset_name("i8237"),
++ .name = "i8237",
+ .suspend = i8237A_suspend,
+ .resume = i8237A_resume,
+ };
+diff --git a/arch/x86/kernel/i8253.c b/arch/x86/kernel/i8253.c
+index a42c807..ef62b07 100644
+--- a/arch/x86/kernel/i8253.c
++++ b/arch/x86/kernel/i8253.c
+@@ -13,10 +13,17 @@
+ #include <asm/delay.h>
+ #include <asm/i8253.h>
+ #include <asm/io.h>
++#include <asm/hpet.h>
+
+ DEFINE_SPINLOCK(i8253_lock);
+ EXPORT_SYMBOL(i8253_lock);
+
++#ifdef CONFIG_X86_32
++static void pit_disable_clocksource(void);
++#else
++static inline void pit_disable_clocksource(void) { }
++#endif
+
+ /*
+ * HPET replaces the PIT, when enabled. So we need to know, which of
+ * the two timers is used
+@@ -31,38 +38,38 @@ struct clock_event_device *global_clock_event;
+ static void init_pit_timer(enum clock_event_mode mode,
+ struct clock_event_device *evt)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i8253_lock, flags);
++ spin_lock(&i8253_lock);
+
+ switch(mode) {
+ case CLOCK_EVT_MODE_PERIODIC:
+ /* binary, mode 2, LSB/MSB, ch 0 */
+- outb_p(0x34, PIT_MODE);
+- outb_p(LATCH & 0xff , PIT_CH0); /* LSB */
+- outb(LATCH >> 8 , PIT_CH0); /* MSB */
++ outb_pit(0x34, PIT_MODE);
++ outb_pit(LATCH & 0xff , PIT_CH0); /* LSB */
++ outb_pit(LATCH >> 8 , PIT_CH0); /* MSB */
+ break;
+
+ case CLOCK_EVT_MODE_SHUTDOWN:
+ case CLOCK_EVT_MODE_UNUSED:
+ if (evt->mode == CLOCK_EVT_MODE_PERIODIC ||
+ evt->mode == CLOCK_EVT_MODE_ONESHOT) {
+- outb_p(0x30, PIT_MODE);
+- outb_p(0, PIT_CH0);
+- outb_p(0, PIT_CH0);
++ outb_pit(0x30, PIT_MODE);
++ outb_pit(0, PIT_CH0);
++ outb_pit(0, PIT_CH0);
+ }
++ pit_disable_clocksource();
+ break;
+
+ case CLOCK_EVT_MODE_ONESHOT:
+ /* One shot setup */
+- outb_p(0x38, PIT_MODE);
++ pit_disable_clocksource();
++ outb_pit(0x38, PIT_MODE);
+ break;
+
+ case CLOCK_EVT_MODE_RESUME:
+ /* Nothing to do here */
+ break;
+ }
+- spin_unlock_irqrestore(&i8253_lock, flags);
++ spin_unlock(&i8253_lock);
+ }
+
+ /*
+@@ -72,12 +79,10 @@ static void init_pit_timer(enum clock_event_mode mode,
+ */
+ static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i8253_lock, flags);
+- outb_p(delta & 0xff , PIT_CH0); /* LSB */
+- outb(delta >> 8 , PIT_CH0); /* MSB */
+- spin_unlock_irqrestore(&i8253_lock, flags);
++ spin_lock(&i8253_lock);
++ outb_pit(delta & 0xff , PIT_CH0); /* LSB */
++ outb_pit(delta >> 8 , PIT_CH0); /* MSB */
++ spin_unlock(&i8253_lock);
+
+ return 0;
+ }
+@@ -148,15 +153,15 @@ static cycle_t pit_read(void)
+ * count), it cannot be newer.
+ */
+ jifs = jiffies;
+- outb_p(0x00, PIT_MODE); /* latch the count ASAP */
+- count = inb_p(PIT_CH0); /* read the latched count */
+- count |= inb_p(PIT_CH0) << 8;
++ outb_pit(0x00, PIT_MODE); /* latch the count ASAP */
++ count = inb_pit(PIT_CH0); /* read the latched count */
++ count |= inb_pit(PIT_CH0) << 8;
+
+ /* VIA686a test code... reset the latch if count > max + 1 */
+ if (count > LATCH) {
+- outb_p(0x34, PIT_MODE);
+- outb_p(LATCH & 0xff, PIT_CH0);
+- outb(LATCH >> 8, PIT_CH0);
++ outb_pit(0x34, PIT_MODE);
++ outb_pit(LATCH & 0xff, PIT_CH0);
++ outb_pit(LATCH >> 8, PIT_CH0);
+ count = LATCH - 1;
+ }
+
+@@ -195,9 +200,28 @@ static struct clocksource clocksource_pit = {
+ .shift = 20,
+ };
+
++static void pit_disable_clocksource(void)
++{
+ /*
-+ * We don't have a specific key to lookup with, so use the gang
-+ * lookup to just retrieve the first item stored. The cfq exit
-+ * function will iterate the full tree, so any member will do.
++ * Use mult to check whether it is registered or not
+ */
-+ r = radix_tree_gang_lookup(&ioc->radix_root, (void **) cic, 0, 1);
-+ if (r > 0)
-+ cic[0]->dtor(ioc);
++ if (clocksource_pit.mult) {
++ clocksource_unregister(&clocksource_pit);
++ clocksource_pit.mult = 0;
++ }
+}
+
-+/*
-+ * IO Context helper functions. put_io_context() returns 1 if there are no
-+ * more users of this io context, 0 otherwise.
-+ */
-+int put_io_context(struct io_context *ioc)
+ static int __init init_pit_clocksource(void)
+ {
+- if (num_possible_cpus() > 1) /* PIT does not scale! */
++ /*
++ * Several reasons not to register PIT as a clocksource:
++ *
++ * - On SMP PIT does not scale due to i8253_lock
++ * - when HPET is enabled
++ * - when local APIC timer is active (PIT is switched off)
++ */
++ if (num_possible_cpus() > 1 || is_hpet_enabled() ||
++ pit_clockevent.mode != CLOCK_EVT_MODE_PERIODIC)
+ return 0;
+
+ clocksource_pit.mult = clocksource_hz2mult(CLOCK_TICK_RATE, 20);
+diff --git a/arch/x86/kernel/i8259_32.c b/arch/x86/kernel/i8259_32.c
+index f634fc7..2d25b77 100644
+--- a/arch/x86/kernel/i8259_32.c
++++ b/arch/x86/kernel/i8259_32.c
+@@ -21,8 +21,6 @@
+ #include <asm/arch_hooks.h>
+ #include <asm/i8259.h>
+
+-#include <io_ports.h>
+-
+ /*
+ * This is the 'legacy' 8259A Programmable Interrupt Controller,
+ * present in the majority of PC/AT boxes.
+@@ -258,7 +256,7 @@ static int i8259A_shutdown(struct sys_device *dev)
+ }
+
+ static struct sysdev_class i8259_sysdev_class = {
+- set_kset_name("i8259"),
++ .name = "i8259",
+ .suspend = i8259A_suspend,
+ .resume = i8259A_resume,
+ .shutdown = i8259A_shutdown,
+@@ -291,20 +289,20 @@ void init_8259A(int auto_eoi)
+ outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-2 */
+
+ /*
+- * outb_p - this has to work on a wide range of PC hardware.
++ * outb_pic - this has to work on a wide range of PC hardware.
+ */
+- outb_p(0x11, PIC_MASTER_CMD); /* ICW1: select 8259A-1 init */
+- outb_p(0x20 + 0, PIC_MASTER_IMR); /* ICW2: 8259A-1 IR0-7 mapped to 0x20-0x27 */
+- outb_p(1U << PIC_CASCADE_IR, PIC_MASTER_IMR); /* 8259A-1 (the master) has a slave on IR2 */
++ outb_pic(0x11, PIC_MASTER_CMD); /* ICW1: select 8259A-1 init */
++ outb_pic(0x20 + 0, PIC_MASTER_IMR); /* ICW2: 8259A-1 IR0-7 mapped to 0x20-0x27 */
++ outb_pic(1U << PIC_CASCADE_IR, PIC_MASTER_IMR); /* 8259A-1 (the master) has a slave on IR2 */
+ if (auto_eoi) /* master does Auto EOI */
+- outb_p(MASTER_ICW4_DEFAULT | PIC_ICW4_AEOI, PIC_MASTER_IMR);
++ outb_pic(MASTER_ICW4_DEFAULT | PIC_ICW4_AEOI, PIC_MASTER_IMR);
+ else /* master expects normal EOI */
+- outb_p(MASTER_ICW4_DEFAULT, PIC_MASTER_IMR);
++ outb_pic(MASTER_ICW4_DEFAULT, PIC_MASTER_IMR);
+
+- outb_p(0x11, PIC_SLAVE_CMD); /* ICW1: select 8259A-2 init */
+- outb_p(0x20 + 8, PIC_SLAVE_IMR); /* ICW2: 8259A-2 IR0-7 mapped to 0x28-0x2f */
+- outb_p(PIC_CASCADE_IR, PIC_SLAVE_IMR); /* 8259A-2 is a slave on master's IR2 */
+- outb_p(SLAVE_ICW4_DEFAULT, PIC_SLAVE_IMR); /* (slave's support for AEOI in flat mode is to be investigated) */
++ outb_pic(0x11, PIC_SLAVE_CMD); /* ICW1: select 8259A-2 init */
++ outb_pic(0x20 + 8, PIC_SLAVE_IMR); /* ICW2: 8259A-2 IR0-7 mapped to 0x28-0x2f */
++ outb_pic(PIC_CASCADE_IR, PIC_SLAVE_IMR); /* 8259A-2 is a slave on master's IR2 */
++ outb_pic(SLAVE_ICW4_DEFAULT, PIC_SLAVE_IMR); /* (slave's support for AEOI in flat mode is to be investigated) */
+ if (auto_eoi)
+ /*
+ * In AEOI mode we just have to mask the interrupt
+@@ -341,7 +339,7 @@ static irqreturn_t math_error_irq(int cpl, void *dev_id)
+ outb(0,0xF0);
+ if (ignore_fpu_irq || !boot_cpu_data.hard_math)
+ return IRQ_NONE;
+- math_error((void __user *)get_irq_regs()->eip);
++ math_error((void __user *)get_irq_regs()->ip);
+ return IRQ_HANDLED;
+ }
+
+diff --git a/arch/x86/kernel/i8259_64.c b/arch/x86/kernel/i8259_64.c
+index 3f27ea0..fa57a15 100644
+--- a/arch/x86/kernel/i8259_64.c
++++ b/arch/x86/kernel/i8259_64.c
+@@ -21,6 +21,7 @@
+ #include <asm/delay.h>
+ #include <asm/desc.h>
+ #include <asm/apic.h>
++#include <asm/i8259.h>
+
+ /*
+ * Common place to define all x86 IRQ vectors
+@@ -48,7 +49,7 @@
+ */
+
+ /*
+- * The IO-APIC gives us many more interrupt sources. Most of these
++ * The IO-APIC gives us many more interrupt sources. Most of these
+ * are unused but an SMP system is supposed to have enough memory ...
+ * sometimes (mostly wrt. hw bugs) we get corrupted vectors all
+ * across the spectrum, so we really want to be prepared to get all
+@@ -76,7 +77,7 @@ BUILD_16_IRQS(0xc) BUILD_16_IRQS(0xd) BUILD_16_IRQS(0xe) BUILD_16_IRQS(0xf)
+ IRQ(x,c), IRQ(x,d), IRQ(x,e), IRQ(x,f)
+
+ /* for the irq vectors */
+-static void (*interrupt[NR_VECTORS - FIRST_EXTERNAL_VECTOR])(void) = {
++static void (*__initdata interrupt[NR_VECTORS - FIRST_EXTERNAL_VECTOR])(void) = {
+ IRQLIST_16(0x2), IRQLIST_16(0x3),
+ IRQLIST_16(0x4), IRQLIST_16(0x5), IRQLIST_16(0x6), IRQLIST_16(0x7),
+ IRQLIST_16(0x8), IRQLIST_16(0x9), IRQLIST_16(0xa), IRQLIST_16(0xb),
+@@ -114,11 +115,7 @@ static struct irq_chip i8259A_chip = {
+ /*
+ * This contains the irq mask for both 8259A irq controllers,
+ */
+-static unsigned int cached_irq_mask = 0xffff;
+-
+-#define __byte(x,y) (((unsigned char *)&(y))[x])
+-#define cached_21 (__byte(0,cached_irq_mask))
+-#define cached_A1 (__byte(1,cached_irq_mask))
++unsigned int cached_irq_mask = 0xffff;
+
+ /*
+ * Not all IRQs can be routed through the IO-APIC, eg. on certain (older)
+@@ -139,9 +136,9 @@ void disable_8259A_irq(unsigned int irq)
+ spin_lock_irqsave(&i8259A_lock, flags);
+ cached_irq_mask |= mask;
+ if (irq & 8)
+- outb(cached_A1,0xA1);
++ outb(cached_slave_mask, PIC_SLAVE_IMR);
+ else
+- outb(cached_21,0x21);
++ outb(cached_master_mask, PIC_MASTER_IMR);
+ spin_unlock_irqrestore(&i8259A_lock, flags);
+ }
+
+@@ -153,9 +150,9 @@ void enable_8259A_irq(unsigned int irq)
+ spin_lock_irqsave(&i8259A_lock, flags);
+ cached_irq_mask &= mask;
+ if (irq & 8)
+- outb(cached_A1,0xA1);
++ outb(cached_slave_mask, PIC_SLAVE_IMR);
+ else
+- outb(cached_21,0x21);
++ outb(cached_master_mask, PIC_MASTER_IMR);
+ spin_unlock_irqrestore(&i8259A_lock, flags);
+ }
+
+@@ -167,9 +164,9 @@ int i8259A_irq_pending(unsigned int irq)
+
+ spin_lock_irqsave(&i8259A_lock, flags);
+ if (irq < 8)
+- ret = inb(0x20) & mask;
++ ret = inb(PIC_MASTER_CMD) & mask;
+ else
+- ret = inb(0xA0) & (mask >> 8);
++ ret = inb(PIC_SLAVE_CMD) & (mask >> 8);
+ spin_unlock_irqrestore(&i8259A_lock, flags);
+
+ return ret;
+@@ -196,14 +193,14 @@ static inline int i8259A_irq_real(unsigned int irq)
+ int irqmask = 1<<irq;
+
+ if (irq < 8) {
+- outb(0x0B,0x20); /* ISR register */
+- value = inb(0x20) & irqmask;
+- outb(0x0A,0x20); /* back to the IRR register */
++ outb(0x0B,PIC_MASTER_CMD); /* ISR register */
++ value = inb(PIC_MASTER_CMD) & irqmask;
++ outb(0x0A,PIC_MASTER_CMD); /* back to the IRR register */
+ return value;
+ }
+- outb(0x0B,0xA0); /* ISR register */
+- value = inb(0xA0) & (irqmask >> 8);
+- outb(0x0A,0xA0); /* back to the IRR register */
++ outb(0x0B,PIC_SLAVE_CMD); /* ISR register */
++ value = inb(PIC_SLAVE_CMD) & (irqmask >> 8);
++ outb(0x0A,PIC_SLAVE_CMD); /* back to the IRR register */
+ return value;
+ }
+
+@@ -240,14 +237,17 @@ static void mask_and_ack_8259A(unsigned int irq)
+
+ handle_real_irq:
+ if (irq & 8) {
+- inb(0xA1); /* DUMMY - (do we need this?) */
+- outb(cached_A1,0xA1);
+- outb(0x60+(irq&7),0xA0);/* 'Specific EOI' to slave */
+- outb(0x62,0x20); /* 'Specific EOI' to master-IRQ2 */
++ inb(PIC_SLAVE_IMR); /* DUMMY - (do we need this?) */
++ outb(cached_slave_mask, PIC_SLAVE_IMR);
++ /* 'Specific EOI' to slave */
++ outb(0x60+(irq&7),PIC_SLAVE_CMD);
++ /* 'Specific EOI' to master-IRQ2 */
++ outb(0x60+PIC_CASCADE_IR,PIC_MASTER_CMD);
+ } else {
+- inb(0x21); /* DUMMY - (do we need this?) */
+- outb(cached_21,0x21);
+- outb(0x60+irq,0x20); /* 'Specific EOI' to master */
++ inb(PIC_MASTER_IMR); /* DUMMY - (do we need this?) */
++ outb(cached_master_mask, PIC_MASTER_IMR);
++ /* 'Specific EOI' to master */
++ outb(0x60+irq,PIC_MASTER_CMD);
+ }
+ spin_unlock_irqrestore(&i8259A_lock, flags);
+ return;
+@@ -270,7 +270,8 @@ spurious_8259A_irq:
+ * lets ACK and report it. [once per IRQ]
+ */
+ if (!(spurious_irq_mask & irqmask)) {
+- printk(KERN_DEBUG "spurious 8259A interrupt: IRQ%d.\n", irq);
++ printk(KERN_DEBUG
++ "spurious 8259A interrupt: IRQ%d.\n", irq);
+ spurious_irq_mask |= irqmask;
+ }
+ atomic_inc(&irq_err_count);
+@@ -283,51 +284,6 @@ spurious_8259A_irq:
+ }
+ }
+
+-void init_8259A(int auto_eoi)
+-{
+- unsigned long flags;
+-
+- i8259A_auto_eoi = auto_eoi;
+-
+- spin_lock_irqsave(&i8259A_lock, flags);
+-
+- outb(0xff, 0x21); /* mask all of 8259A-1 */
+- outb(0xff, 0xA1); /* mask all of 8259A-2 */
+-
+- /*
+- * outb_p - this has to work on a wide range of PC hardware.
+- */
+- outb_p(0x11, 0x20); /* ICW1: select 8259A-1 init */
+- outb_p(IRQ0_VECTOR, 0x21); /* ICW2: 8259A-1 IR0-7 mapped to 0x30-0x37 */
+- outb_p(0x04, 0x21); /* 8259A-1 (the master) has a slave on IR2 */
+- if (auto_eoi)
+- outb_p(0x03, 0x21); /* master does Auto EOI */
+- else
+- outb_p(0x01, 0x21); /* master expects normal EOI */
+-
+- outb_p(0x11, 0xA0); /* ICW1: select 8259A-2 init */
+- outb_p(IRQ8_VECTOR, 0xA1); /* ICW2: 8259A-2 IR0-7 mapped to 0x38-0x3f */
+- outb_p(0x02, 0xA1); /* 8259A-2 is a slave on master's IR2 */
+- outb_p(0x01, 0xA1); /* (slave's support for AEOI in flat mode
+- is to be investigated) */
+-
+- if (auto_eoi)
+- /*
+- * in AEOI mode we just have to mask the interrupt
+- * when acking.
+- */
+- i8259A_chip.mask_ack = disable_8259A_irq;
+- else
+- i8259A_chip.mask_ack = mask_and_ack_8259A;
+-
+- udelay(100); /* wait for 8259A to initialize */
+-
+- outb(cached_21, 0x21); /* restore master IRQ mask */
+- outb(cached_A1, 0xA1); /* restore slave IRQ mask */
+-
+- spin_unlock_irqrestore(&i8259A_lock, flags);
+-}
+-
+ static char irq_trigger[2];
+ /**
+ * ELCR registers (0x4d0, 0x4d1) control edge/level of IRQ
+@@ -364,13 +320,13 @@ static int i8259A_shutdown(struct sys_device *dev)
+ * the kernel initialization code can get it
+ * out of.
+ */
+- outb(0xff, 0x21); /* mask all of 8259A-1 */
+- outb(0xff, 0xA1); /* mask all of 8259A-1 */
++ outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */
++ outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-1 */
+ return 0;
+ }
+
+ static struct sysdev_class i8259_sysdev_class = {
+- set_kset_name("i8259"),
++ .name = "i8259",
+ .suspend = i8259A_suspend,
+ .resume = i8259A_resume,
+ .shutdown = i8259A_shutdown,
+@@ -391,6 +347,58 @@ static int __init i8259A_init_sysfs(void)
+
+ device_initcall(i8259A_init_sysfs);
+
++void init_8259A(int auto_eoi)
+{
-+ if (ioc == NULL)
-+ return 1;
++ unsigned long flags;
+
-+ BUG_ON(atomic_read(&ioc->refcount) == 0);
++ i8259A_auto_eoi = auto_eoi;
+
-+ if (atomic_dec_and_test(&ioc->refcount)) {
-+ rcu_read_lock();
-+ if (ioc->aic && ioc->aic->dtor)
-+ ioc->aic->dtor(ioc->aic);
-+ rcu_read_unlock();
-+ cfq_dtor(ioc);
++ spin_lock_irqsave(&i8259A_lock, flags);
+
-+ kmem_cache_free(iocontext_cachep, ioc);
-+ return 1;
-+ }
-+ return 0;
++ outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */
++ outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-2 */
++
++ /*
++ * outb_pic - this has to work on a wide range of PC hardware.
++ */
++ outb_pic(0x11, PIC_MASTER_CMD); /* ICW1: select 8259A-1 init */
++ /* ICW2: 8259A-1 IR0-7 mapped to 0x30-0x37 */
++ outb_pic(IRQ0_VECTOR, PIC_MASTER_IMR);
++ /* 8259A-1 (the master) has a slave on IR2 */
++ outb_pic(0x04, PIC_MASTER_IMR);
++ if (auto_eoi) /* master does Auto EOI */
++ outb_pic(MASTER_ICW4_DEFAULT | PIC_ICW4_AEOI, PIC_MASTER_IMR);
++ else /* master expects normal EOI */
++ outb_pic(MASTER_ICW4_DEFAULT, PIC_MASTER_IMR);
++
++ outb_pic(0x11, PIC_SLAVE_CMD); /* ICW1: select 8259A-2 init */
++ /* ICW2: 8259A-2 IR0-7 mapped to 0x38-0x3f */
++ outb_pic(IRQ8_VECTOR, PIC_SLAVE_IMR);
++ /* 8259A-2 is a slave on master's IR2 */
++ outb_pic(PIC_CASCADE_IR, PIC_SLAVE_IMR);
++ /* (slave's support for AEOI in flat mode is to be investigated) */
++ outb_pic(SLAVE_ICW4_DEFAULT, PIC_SLAVE_IMR);
++
++ if (auto_eoi)
++ /*
++ * In AEOI mode we just have to mask the interrupt
++ * when acking.
++ */
++ i8259A_chip.mask_ack = disable_8259A_irq;
++ else
++ i8259A_chip.mask_ack = mask_and_ack_8259A;
++
++ udelay(100); /* wait for 8259A to initialize */
++
++ outb(cached_master_mask, PIC_MASTER_IMR); /* restore master IRQ mask */
++ outb(cached_slave_mask, PIC_SLAVE_IMR); /* restore slave IRQ mask */
++
++ spin_unlock_irqrestore(&i8259A_lock, flags);
+}
-+EXPORT_SYMBOL(put_io_context);
+
-+static void cfq_exit(struct io_context *ioc)
-+{
-+ struct cfq_io_context *cic[1];
-+ int r;
+
-+ rcu_read_lock();
++
++
+ /*
+ * IRQ2 is cascade interrupt to second interrupt controller
+ */
+@@ -448,7 +456,9 @@ void __init init_ISA_irqs (void)
+ }
+ }
+
+-void __init init_IRQ(void)
++void init_IRQ(void) __attribute__((weak, alias("native_init_IRQ")));
++
++void __init native_init_IRQ(void)
+ {
+ int i;
+
+diff --git a/arch/x86/kernel/init_task.c b/arch/x86/kernel/init_task.c
+index 468c9c4..5b3ce79 100644
+--- a/arch/x86/kernel/init_task.c
++++ b/arch/x86/kernel/init_task.c
+@@ -15,7 +15,6 @@ static struct files_struct init_files = INIT_FILES;
+ static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
+ static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
+ struct mm_struct init_mm = INIT_MM(init_mm);
+-EXPORT_SYMBOL(init_mm);
+
+ /*
+ * Initial thread structure.
+diff --git a/arch/x86/kernel/io_apic_32.c b/arch/x86/kernel/io_apic_32.c
+index a6b1490..4ca5486 100644
+--- a/arch/x86/kernel/io_apic_32.c
++++ b/arch/x86/kernel/io_apic_32.c
+@@ -35,6 +35,7 @@
+ #include <linux/htirq.h>
+ #include <linux/freezer.h>
+ #include <linux/kthread.h>
++#include <linux/jiffies.h> /* time_after() */
+
+ #include <asm/io.h>
+ #include <asm/smp.h>
+@@ -48,8 +49,6 @@
+ #include <mach_apic.h>
+ #include <mach_apicdef.h>
+
+-#include "io_ports.h"
+-
+ int (*ioapic_renumber_irq)(int ioapic, int irq);
+ atomic_t irq_mis_count;
+
+@@ -351,7 +350,7 @@ static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t cpumask)
+ # include <asm/processor.h> /* kernel_thread() */
+ # include <linux/kernel_stat.h> /* kstat */
+ # include <linux/slab.h> /* kmalloc() */
+-# include <linux/timer.h> /* time_after() */
++# include <linux/timer.h>
+
+ #define IRQBALANCE_CHECK_ARCH -999
+ #define MAX_BALANCED_IRQ_INTERVAL (5*HZ)
+@@ -727,7 +726,7 @@ late_initcall(balanced_irq_init);
+ #endif /* CONFIG_SMP */
+
+ #ifndef CONFIG_SMP
+-void fastcall send_IPI_self(int vector)
++void send_IPI_self(int vector)
+ {
+ unsigned int cfg;
+
+@@ -1900,7 +1899,7 @@ static int __init timer_irq_works(void)
+ * might have cached one ExtINT interrupt. Finally, at
+ * least one tick may be lost due to delays.
+ */
+- if (jiffies - t1 > 4)
++ if (time_after(jiffies, t1 + 4))
+ return 1;
+
+ return 0;
+@@ -2080,7 +2079,7 @@ static struct irq_chip lapic_chip __read_mostly = {
+ .eoi = ack_apic,
+ };
+
+-static void setup_nmi (void)
++static void __init setup_nmi(void)
+ {
+ /*
+ * Dirty trick to enable the NMI watchdog ...
+@@ -2093,7 +2092,7 @@ static void setup_nmi (void)
+ */
+ apic_printk(APIC_VERBOSE, KERN_INFO "activating NMI Watchdog ...");
+
+- on_each_cpu(enable_NMI_through_LVT0, NULL, 1, 1);
++ enable_NMI_through_LVT0();
+
+ apic_printk(APIC_VERBOSE, " done.\n");
+ }
+@@ -2401,7 +2400,7 @@ static int ioapic_resume(struct sys_device *dev)
+ }
+
+ static struct sysdev_class ioapic_sysdev_class = {
+- set_kset_name("ioapic"),
++ .name = "ioapic",
+ .suspend = ioapic_suspend,
+ .resume = ioapic_resume,
+ };
+diff --git a/arch/x86/kernel/io_apic_64.c b/arch/x86/kernel/io_apic_64.c
+index cbac167..1627c0d 100644
+--- a/arch/x86/kernel/io_apic_64.c
++++ b/arch/x86/kernel/io_apic_64.c
+@@ -32,9 +32,11 @@
+ #include <linux/msi.h>
+ #include <linux/htirq.h>
+ #include <linux/dmar.h>
++#include <linux/jiffies.h>
+ #ifdef CONFIG_ACPI
+ #include <acpi/acpi_bus.h>
+ #endif
++#include <linux/bootmem.h>
+
+ #include <asm/idle.h>
+ #include <asm/io.h>
+@@ -1069,7 +1071,7 @@ void __apicdebuginit print_local_APIC(void * dummy)
+ v = apic_read(APIC_LVR);
+ printk(KERN_INFO "... APIC VERSION: %08x\n", v);
+ ver = GET_APIC_VERSION(v);
+- maxlvt = get_maxlvt();
++ maxlvt = lapic_get_maxlvt();
+
+ v = apic_read(APIC_TASKPRI);
+ printk(KERN_DEBUG "... APIC TASKPRI: %08x (%02x)\n", v, v & APIC_TPRI_MASK);
+@@ -1171,7 +1173,7 @@ void __apicdebuginit print_PIC(void)
+
+ #endif /* 0 */
+
+-static void __init enable_IO_APIC(void)
++void __init enable_IO_APIC(void)
+ {
+ union IO_APIC_reg_01 reg_01;
+ int i8259_apic, i8259_pin;
+@@ -1298,7 +1300,7 @@ static int __init timer_irq_works(void)
+ */
+
+ /* jiffies wrap? */
+- if (jiffies - t1 > 4)
++ if (time_after(jiffies, t1 + 4))
+ return 1;
+ return 0;
+ }
+@@ -1411,7 +1413,7 @@ static void irq_complete_move(unsigned int irq)
+ if (likely(!cfg->move_in_progress))
+ return;
+
+- vector = ~get_irq_regs()->orig_rax;
++ vector = ~get_irq_regs()->orig_ax;
+ me = smp_processor_id();
+ if ((vector == cfg->vector) && cpu_isset(me, cfg->domain)) {
+ cpumask_t cleanup_mask;
+@@ -1438,7 +1440,7 @@ static void ack_apic_level(unsigned int irq)
+ int do_unmask_irq = 0;
+
+ irq_complete_move(irq);
+-#if defined(CONFIG_GENERIC_PENDING_IRQ) || defined(CONFIG_IRQBALANCE)
++#ifdef CONFIG_GENERIC_PENDING_IRQ
+ /* If we are moving the irq we need to mask it */
+ if (unlikely(irq_desc[irq].status & IRQ_MOVE_PENDING)) {
+ do_unmask_irq = 1;
+@@ -1565,7 +1567,7 @@ static struct hw_interrupt_type lapic_irq_type __read_mostly = {
+ .end = end_lapic_irq,
+ };
+
+-static void setup_nmi (void)
++static void __init setup_nmi(void)
+ {
+ /*
+ * Dirty trick to enable the NMI watchdog ...
+@@ -1578,7 +1580,7 @@ static void setup_nmi (void)
+ */
+ printk(KERN_INFO "activating NMI Watchdog ...");
+
+- enable_NMI_through_LVT0(NULL);
++ enable_NMI_through_LVT0();
+
+ printk(" done.\n");
+ }
+@@ -1654,7 +1656,7 @@ static inline void unlock_ExtINT_logic(void)
+ *
+ * FIXME: really need to revamp this for modern platforms only.
+ */
+-static inline void check_timer(void)
++static inline void __init check_timer(void)
+ {
+ struct irq_cfg *cfg = irq_cfg + 0;
+ int apic1, pin1, apic2, pin2;
+@@ -1788,7 +1790,10 @@ __setup("no_timer_check", notimercheck);
+
+ void __init setup_IO_APIC(void)
+ {
+- enable_IO_APIC();
++
+ /*
-+ * See comment for cfq_dtor()
++ * calling enable_IO_APIC() is moved to setup_local_APIC for BP
+ */
-+ r = radix_tree_gang_lookup(&ioc->radix_root, (void **) cic, 0, 1);
-+ rcu_read_unlock();
+
+ if (acpi_ioapic)
+ io_apic_irqs = ~0; /* all IRQs go through IOAPIC */
+@@ -1850,7 +1855,7 @@ static int ioapic_resume(struct sys_device *dev)
+ }
+
+ static struct sysdev_class ioapic_sysdev_class = {
+- set_kset_name("ioapic"),
++ .name = "ioapic",
+ .suspend = ioapic_suspend,
+ .resume = ioapic_resume,
+ };
+@@ -2288,3 +2293,92 @@ void __init setup_ioapic_dest(void)
+ }
+ #endif
+
++#define IOAPIC_RESOURCE_NAME_SIZE 11
+
-+ if (r > 0)
-+ cic[0]->exit(ioc);
-+}
++static struct resource *ioapic_resources;
+
-+/* Called by the exiting task */
-+void exit_io_context(void)
++static struct resource * __init ioapic_setup_resources(void)
+{
-+ struct io_context *ioc;
++ unsigned long n;
++ struct resource *res;
++ char *mem;
++ int i;
+
-+ task_lock(current);
-+ ioc = current->io_context;
-+ current->io_context = NULL;
-+ task_unlock(current);
++ if (nr_ioapics <= 0)
++ return NULL;
+
-+ if (atomic_dec_and_test(&ioc->nr_tasks)) {
-+ if (ioc->aic && ioc->aic->exit)
-+ ioc->aic->exit(ioc->aic);
-+ cfq_exit(ioc);
++ n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource);
++ n *= nr_ioapics;
+
-+ put_io_context(ioc);
++ mem = alloc_bootmem(n);
++ res = (void *)mem;
++
++ if (mem != NULL) {
++ memset(mem, 0, n);
++ mem += sizeof(struct resource) * nr_ioapics;
++
++ for (i = 0; i < nr_ioapics; i++) {
++ res[i].name = mem;
++ res[i].flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++ sprintf(mem, "IOAPIC %u", i);
++ mem += IOAPIC_RESOURCE_NAME_SIZE;
++ }
+ }
++
++ ioapic_resources = res;
++
++ return res;
+}
+
-+struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
++void __init ioapic_init_mappings(void)
+{
-+ struct io_context *ret;
++ unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0;
++ struct resource *ioapic_res;
++ int i;
+
-+ ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
-+ if (ret) {
-+ atomic_set(&ret->refcount, 1);
-+ atomic_set(&ret->nr_tasks, 1);
-+ spin_lock_init(&ret->lock);
-+ ret->ioprio_changed = 0;
-+ ret->ioprio = 0;
-+ ret->last_waited = jiffies; /* doesn't matter... */
-+ ret->nr_batch_requests = 0; /* because this is 0 */
-+ ret->aic = NULL;
-+ INIT_RADIX_TREE(&ret->radix_root, GFP_ATOMIC | __GFP_HIGH);
-+ ret->ioc_data = NULL;
-+ }
++ ioapic_res = ioapic_setup_resources();
++ for (i = 0; i < nr_ioapics; i++) {
++ if (smp_found_config) {
++ ioapic_phys = mp_ioapics[i].mpc_apicaddr;
++ } else {
++ ioapic_phys = (unsigned long)
++ alloc_bootmem_pages(PAGE_SIZE);
++ ioapic_phys = __pa(ioapic_phys);
++ }
++ set_fixmap_nocache(idx, ioapic_phys);
++ apic_printk(APIC_VERBOSE,
++ "mapped IOAPIC to %016lx (%016lx)\n",
++ __fix_to_virt(idx), ioapic_phys);
++ idx++;
+
-+ return ret;
++ if (ioapic_res != NULL) {
++ ioapic_res->start = ioapic_phys;
++ ioapic_res->end = ioapic_phys + (4 * 1024) - 1;
++ ioapic_res++;
++ }
++ }
+}
+
-+/*
-+ * If the current task has no IO context then create one and initialise it.
-+ * Otherwise, return its existing IO context.
-+ *
-+ * This returned IO context doesn't have a specifically elevated refcount,
-+ * but since the current task itself holds a reference, the context can be
-+ * used in general code, so long as it stays within `current` context.
-+ */
-+struct io_context *current_io_context(gfp_t gfp_flags, int node)
++static int __init ioapic_insert_resources(void)
+{
-+ struct task_struct *tsk = current;
-+ struct io_context *ret;
++ int i;
++ struct resource *r = ioapic_resources;
+
-+ ret = tsk->io_context;
-+ if (likely(ret))
-+ return ret;
++ if (!r) {
++ printk(KERN_ERR
++ "IO APIC resources could not be allocated.\n");
++ return -1;
++ }
+
-+ ret = alloc_io_context(gfp_flags, node);
-+ if (ret) {
-+ /* make sure set_task_ioprio() sees the settings above */
-+ smp_wmb();
-+ tsk->io_context = ret;
++ for (i = 0; i < nr_ioapics; i++) {
++ insert_resource(&iomem_resource, r);
++ r++;
+ }
+
-+ return ret;
++ return 0;
+}
+
++/* Insert the IO APIC resources after PCI initialization has occurred to handle
++ * IO APICS that are mapped in on a BAR in PCI space. */
++late_initcall(ioapic_insert_resources);
++
+diff --git a/arch/x86/kernel/io_delay.c b/arch/x86/kernel/io_delay.c
+new file mode 100644
+index 0000000..bd49321
+--- /dev/null
++++ b/arch/x86/kernel/io_delay.c
+@@ -0,0 +1,114 @@
+/*
-+ * If the current task has no IO context then create one and initialise it.
-+ * If it does have a context, take a ref on it.
++ * I/O delay strategies for inb_p/outb_p
+ *
-+ * This is always called in the context of the task which submitted the I/O.
++ * Allow for a DMI based override of port 0x80, needed for certain HP laptops
++ * and possibly other systems. Also allow for the gradual elimination of
++ * outb_p/inb_p API uses.
+ */
-+struct io_context *get_io_context(gfp_t gfp_flags, int node)
-+{
-+ struct io_context *ret = NULL;
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/init.h>
++#include <linux/delay.h>
++#include <linux/dmi.h>
++#include <asm/io.h>
+
-+ /*
-+ * Check for unlikely race with exiting task. ioc ref count is
-+ * zero when ioc is being detached.
-+ */
-+ do {
-+ ret = current_io_context(gfp_flags, node);
-+ if (unlikely(!ret))
-+ break;
-+ } while (!atomic_inc_not_zero(&ret->refcount));
++int io_delay_type __read_mostly = CONFIG_DEFAULT_IO_DELAY_TYPE;
++EXPORT_SYMBOL_GPL(io_delay_type);
+
-+ return ret;
-+}
-+EXPORT_SYMBOL(get_io_context);
++static int __initdata io_delay_override;
+
-+void copy_io_context(struct io_context **pdst, struct io_context **psrc)
++/*
++ * Paravirt wants native_io_delay to be a constant.
++ */
++void native_io_delay(void)
+{
-+ struct io_context *src = *psrc;
-+ struct io_context *dst = *pdst;
++ switch (io_delay_type) {
++ default:
++ case CONFIG_IO_DELAY_TYPE_0X80:
++ asm volatile ("outb %al, $0x80");
++ break;
++ case CONFIG_IO_DELAY_TYPE_0XED:
++ asm volatile ("outb %al, $0xed");
++ break;
++ case CONFIG_IO_DELAY_TYPE_UDELAY:
++ /*
++ * 2 usecs is an upper-bound for the outb delay but
++ * note that udelay doesn't have the bus-level
++ * side-effects that outb does, nor does udelay() have
++ * precise timings during very early bootup (the delays
++ * are shorter until calibrated):
++ */
++ udelay(2);
++ case CONFIG_IO_DELAY_TYPE_NONE:
++ break;
++ }
++}
++EXPORT_SYMBOL(native_io_delay);
+
-+ if (src) {
-+ BUG_ON(atomic_read(&src->refcount) == 0);
-+ atomic_inc(&src->refcount);
-+ put_io_context(dst);
-+ *pdst = src;
++static int __init dmi_io_delay_0xed_port(const struct dmi_system_id *id)
++{
++ if (io_delay_type == CONFIG_IO_DELAY_TYPE_0X80) {
++ printk(KERN_NOTICE "%s: using 0xed I/O delay port\n",
++ id->ident);
++ io_delay_type = CONFIG_IO_DELAY_TYPE_0XED;
+ }
++
++ return 0;
+}
-+EXPORT_SYMBOL(copy_io_context);
+
-+void swap_io_context(struct io_context **ioc1, struct io_context **ioc2)
++/*
++ * Quirk table for systems that misbehave (lock up, etc.) if port
++ * 0x80 is used:
++ */
++static struct dmi_system_id __initdata io_delay_0xed_port_dmi_table[] = {
++ {
++ .callback = dmi_io_delay_0xed_port,
++ .ident = "Compaq Presario V6000",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Quanta"),
++ DMI_MATCH(DMI_BOARD_NAME, "30B7")
++ }
++ },
++ {
++ .callback = dmi_io_delay_0xed_port,
++ .ident = "HP Pavilion dv9000z",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Quanta"),
++ DMI_MATCH(DMI_BOARD_NAME, "30B9")
++ }
++ },
++ {
++ .callback = dmi_io_delay_0xed_port,
++ .ident = "HP Pavilion tx1000",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Quanta"),
++ DMI_MATCH(DMI_BOARD_NAME, "30BF")
++ }
++ },
++ { }
++};
++
++void __init io_delay_init(void)
+{
-+ struct io_context *temp;
-+ temp = *ioc1;
-+ *ioc1 = *ioc2;
-+ *ioc2 = temp;
++ if (!io_delay_override)
++ dmi_check_system(io_delay_0xed_port_dmi_table);
+}
-+EXPORT_SYMBOL(swap_io_context);
+
-+int __init blk_ioc_init(void)
++static int __init io_delay_param(char *s)
+{
-+ iocontext_cachep = kmem_cache_create("blkdev_ioc",
-+ sizeof(struct io_context), 0, SLAB_PANIC, NULL);
++ if (!strcmp(s, "0x80"))
++ io_delay_type = CONFIG_IO_DELAY_TYPE_0X80;
++ else if (!strcmp(s, "0xed"))
++ io_delay_type = CONFIG_IO_DELAY_TYPE_0XED;
++ else if (!strcmp(s, "udelay"))
++ io_delay_type = CONFIG_IO_DELAY_TYPE_UDELAY;
++ else if (!strcmp(s, "none"))
++ io_delay_type = CONFIG_IO_DELAY_TYPE_NONE;
++ else
++ return -EINVAL;
++
++ io_delay_override = 1;
+ return 0;
+}
-+subsys_initcall(blk_ioc_init);
-diff --git a/block/blk-map.c b/block/blk-map.c
++
++early_param("io_delay", io_delay_param);
+diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
new file mode 100644
-index 0000000..916cfc9
+index 0000000..50e5e4a
--- /dev/null
-+++ b/block/blk-map.c
-@@ -0,0 +1,264 @@
++++ b/arch/x86/kernel/ioport.c
+@@ -0,0 +1,154 @@
+/*
-+ * Functions related to mapping data to requests
++ * This contains the io-permission bitmap code - written by obz, with changes
++ * by Linus. 32/64 bits code unification by Miguel Botón.
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
+
-+#include "blk.h"
++#include <linux/sched.h>
++#include <linux/kernel.h>
++#include <linux/capability.h>
++#include <linux/errno.h>
++#include <linux/types.h>
++#include <linux/ioport.h>
++#include <linux/smp.h>
++#include <linux/stddef.h>
++#include <linux/slab.h>
++#include <linux/thread_info.h>
++#include <linux/syscalls.h>
+
-+int blk_rq_append_bio(struct request_queue *q, struct request *rq,
-+ struct bio *bio)
++/* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */
++static void set_bitmap(unsigned long *bitmap, unsigned int base,
++ unsigned int extent, int new_value)
+{
-+ if (!rq->bio)
-+ blk_rq_bio_prep(q, rq, bio);
-+ else if (!ll_back_merge_fn(q, rq, bio))
-+ return -EINVAL;
-+ else {
-+ rq->biotail->bi_next = bio;
-+ rq->biotail = bio;
++ unsigned int i;
+
-+ rq->data_len += bio->bi_size;
++ for (i = base; i < base + extent; i++) {
++ if (new_value)
++ __set_bit(i, bitmap);
++ else
++ __clear_bit(i, bitmap);
+ }
-+ return 0;
+}
-+EXPORT_SYMBOL(blk_rq_append_bio);
+
-+static int __blk_rq_unmap_user(struct bio *bio)
++/*
++ * this changes the io permissions bitmap in the current task.
++ */
++asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
+{
-+ int ret = 0;
++ struct thread_struct * t = &current->thread;
++ struct tss_struct * tss;
++ unsigned int i, max_long, bytes, bytes_updated;
+
-+ if (bio) {
-+ if (bio_flagged(bio, BIO_USER_MAPPED))
-+ bio_unmap_user(bio);
-+ else
-+ ret = bio_uncopy_user(bio);
-+ }
++ if ((from + num <= from) || (from + num > IO_BITMAP_BITS))
++ return -EINVAL;
++ if (turn_on && !capable(CAP_SYS_RAWIO))
++ return -EPERM;
+
-+ return ret;
-+}
++ /*
++ * If it's the first ioperm() call in this thread's lifetime, set the
++ * IO bitmap up. ioperm() is much less timing critical than clone(),
++ * this is why we delay this operation until now:
++ */
++ if (!t->io_bitmap_ptr) {
++ unsigned long *bitmap = kmalloc(IO_BITMAP_BYTES, GFP_KERNEL);
+
-+static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
-+ void __user *ubuf, unsigned int len)
-+{
-+ unsigned long uaddr;
-+ struct bio *bio, *orig_bio;
-+ int reading, ret;
++ if (!bitmap)
++ return -ENOMEM;
+
-+ reading = rq_data_dir(rq) == READ;
++ memset(bitmap, 0xff, IO_BITMAP_BYTES);
++ t->io_bitmap_ptr = bitmap;
++ set_thread_flag(TIF_IO_BITMAP);
++ }
+
+ /*
-+ * if alignment requirement is satisfied, map in user pages for
-+ * direct dma. else, set up kernel bounce buffers
++ * do it in the per-thread copy and in the TSS ...
++ *
++ * Disable preemption via get_cpu() - we must not switch away
++ * because the ->io_bitmap_max value must match the bitmap
++ * contents:
+ */
-+ uaddr = (unsigned long) ubuf;
-+ if (!(uaddr & queue_dma_alignment(q)) && !(len & queue_dma_alignment(q)))
-+ bio = bio_map_user(q, NULL, uaddr, len, reading);
-+ else
-+ bio = bio_copy_user(q, uaddr, len, reading);
++ tss = &per_cpu(init_tss, get_cpu());
+
-+ if (IS_ERR(bio))
-+ return PTR_ERR(bio);
++ set_bitmap(t->io_bitmap_ptr, from, num, !turn_on);
+
-+ orig_bio = bio;
-+ blk_queue_bounce(q, &bio);
++ /*
++ * Search for a (possibly new) maximum. This is simple and stupid,
++ * to keep it obviously correct:
++ */
++ max_long = 0;
++ for (i = 0; i < IO_BITMAP_LONGS; i++)
++ if (t->io_bitmap_ptr[i] != ~0UL)
++ max_long = i;
++
++ bytes = (max_long + 1) * sizeof(unsigned long);
++ bytes_updated = max(bytes, t->io_bitmap_max);
++
++ t->io_bitmap_max = bytes;
+
++#ifdef CONFIG_X86_32
+ /*
-+ * We link the bounce buffer in and could have to traverse it
-+ * later so we have to get a ref to prevent it from being freed
++ * Sets the lazy trigger so that the next I/O operation will
++ * reload the correct bitmap.
++ * Reset the owner so that a process switch will not set
++ * tss->io_bitmap_base to IO_BITMAP_OFFSET.
+ */
-+ bio_get(bio);
++ tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
++ tss->io_bitmap_owner = NULL;
++#else
++ /* Update the TSS: */
++ memcpy(tss->io_bitmap, t->io_bitmap_ptr, bytes_updated);
++#endif
+
-+ ret = blk_rq_append_bio(q, rq, bio);
-+ if (!ret)
-+ return bio->bi_size;
++ put_cpu();
+
-+ /* if it was bounced we must call the end io function */
-+ bio_endio(bio, 0);
-+ __blk_rq_unmap_user(orig_bio);
-+ bio_put(bio);
-+ return ret;
++ return 0;
+}
+
-+/**
-+ * blk_rq_map_user - map user data to a request, for REQ_BLOCK_PC usage
-+ * @q: request queue where request should be inserted
-+ * @rq: request structure to fill
-+ * @ubuf: the user buffer
-+ * @len: length of user data
-+ *
-+ * Description:
-+ * Data will be mapped directly for zero copy io, if possible. Otherwise
-+ * a kernel bounce buffer is used.
-+ *
-+ * A matching blk_rq_unmap_user() must be issued at the end of io, while
-+ * still in process context.
++/*
++ * sys_iopl has to be used when you want to access the IO ports
++ * beyond the 0x3ff range: to get the full 65536 ports bitmapped
++ * you'd need 8kB of bitmaps/process, which is a bit excessive.
+ *
-+ * Note: The mapped bio may need to be bounced through blk_queue_bounce()
-+ * before being submitted to the device, as pages mapped may be out of
-+ * reach. It's the caller's responsibility to make sure this happens. The
-+ * original bio must be passed back in to blk_rq_unmap_user() for proper
-+ * unmapping.
++ * Here we just change the flags value on the stack: we allow
++ * only the super-user to do it. This depends on the stack-layout
++ * on system-call entry - see also fork() and the signal handling
++ * code.
+ */
-+int blk_rq_map_user(struct request_queue *q, struct request *rq,
-+ void __user *ubuf, unsigned long len)
++static int do_iopl(unsigned int level, struct pt_regs *regs)
+{
-+ unsigned long bytes_read = 0;
-+ struct bio *bio = NULL;
-+ int ret;
++ unsigned int old = (regs->flags >> 12) & 3;
+
-+ if (len > (q->max_hw_sectors << 9))
++ if (level > 3)
+ return -EINVAL;
-+ if (!len || !ubuf)
-+ return -EINVAL;
-+
-+ while (bytes_read != len) {
-+ unsigned long map_len, end, start;
-+
-+ map_len = min_t(unsigned long, len - bytes_read, BIO_MAX_SIZE);
-+ end = ((unsigned long)ubuf + map_len + PAGE_SIZE - 1)
-+ >> PAGE_SHIFT;
-+ start = (unsigned long)ubuf >> PAGE_SHIFT;
-+
-+ /*
-+ * A bad offset could cause us to require BIO_MAX_PAGES + 1
-+ * pages. If this happens we just lower the requested
-+ * mapping len by a page so that we can fit
-+ */
-+ if (end - start > BIO_MAX_PAGES)
-+ map_len -= PAGE_SIZE;
-+
-+ ret = __blk_rq_map_user(q, rq, ubuf, map_len);
-+ if (ret < 0)
-+ goto unmap_rq;
-+ if (!bio)
-+ bio = rq->bio;
-+ bytes_read += ret;
-+ ubuf += ret;
++ /* Trying to gain more privileges? */
++ if (level > old) {
++ if (!capable(CAP_SYS_RAWIO))
++ return -EPERM;
+ }
++ regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) | (level << 12);
+
-+ rq->buffer = rq->data = NULL;
+ return 0;
-+unmap_rq:
-+ blk_rq_unmap_user(bio);
-+ return ret;
+}
+
-+EXPORT_SYMBOL(blk_rq_map_user);
++#ifdef CONFIG_X86_32
++asmlinkage long sys_iopl(unsigned long regsp)
++{
++ struct pt_regs *regs = (struct pt_regs *)&regsp;
++ unsigned int level = regs->bx;
++ struct thread_struct *t = &current->thread;
++ int rc;
+
-+/**
-+ * blk_rq_map_user_iov - map user data to a request, for REQ_BLOCK_PC usage
-+ * @q: request queue where request should be inserted
-+ * @rq: request to map data to
-+ * @iov: pointer to the iovec
-+ * @iov_count: number of elements in the iovec
-+ * @len: I/O byte count
-+ *
-+ * Description:
-+ * Data will be mapped directly for zero copy io, if possible. Otherwise
-+ * a kernel bounce buffer is used.
++ rc = do_iopl(level, regs);
++ if (rc < 0)
++ goto out;
++
++ t->iopl = level << 12;
++ set_iopl_mask(t->iopl);
++out:
++ return rc;
++}
++#else
++asmlinkage long sys_iopl(unsigned int level, struct pt_regs *regs)
++{
++ return do_iopl(level, regs);
++}
++#endif
+diff --git a/arch/x86/kernel/ioport_32.c b/arch/x86/kernel/ioport_32.c
+deleted file mode 100644
+index 4ed48dc..0000000
+--- a/arch/x86/kernel/ioport_32.c
++++ /dev/null
+@@ -1,151 +0,0 @@
+-/*
+- * This contains the io-permission bitmap code - written by obz, with changes
+- * by Linus.
+- */
+-
+-#include <linux/sched.h>
+-#include <linux/kernel.h>
+-#include <linux/capability.h>
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/ioport.h>
+-#include <linux/smp.h>
+-#include <linux/stddef.h>
+-#include <linux/slab.h>
+-#include <linux/thread_info.h>
+-#include <linux/syscalls.h>
+-
+-/* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */
+-static void set_bitmap(unsigned long *bitmap, unsigned int base, unsigned int extent, int new_value)
+-{
+- unsigned long mask;
+- unsigned long *bitmap_base = bitmap + (base / BITS_PER_LONG);
+- unsigned int low_index = base & (BITS_PER_LONG-1);
+- int length = low_index + extent;
+-
+- if (low_index != 0) {
+- mask = (~0UL << low_index);
+- if (length < BITS_PER_LONG)
+- mask &= ~(~0UL << length);
+- if (new_value)
+- *bitmap_base++ |= mask;
+- else
+- *bitmap_base++ &= ~mask;
+- length -= BITS_PER_LONG;
+- }
+-
+- mask = (new_value ? ~0UL : 0UL);
+- while (length >= BITS_PER_LONG) {
+- *bitmap_base++ = mask;
+- length -= BITS_PER_LONG;
+- }
+-
+- if (length > 0) {
+- mask = ~(~0UL << length);
+- if (new_value)
+- *bitmap_base++ |= mask;
+- else
+- *bitmap_base++ &= ~mask;
+- }
+-}
+-
+-
+-/*
+- * this changes the io permissions bitmap in the current task.
+- */
+-asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
+-{
+- unsigned long i, max_long, bytes, bytes_updated;
+- struct thread_struct * t = &current->thread;
+- struct tss_struct * tss;
+- unsigned long *bitmap;
+-
+- if ((from + num <= from) || (from + num > IO_BITMAP_BITS))
+- return -EINVAL;
+- if (turn_on && !capable(CAP_SYS_RAWIO))
+- return -EPERM;
+-
+- /*
+- * If it's the first ioperm() call in this thread's lifetime, set the
+- * IO bitmap up. ioperm() is much less timing critical than clone(),
+- * this is why we delay this operation until now:
+- */
+- if (!t->io_bitmap_ptr) {
+- bitmap = kmalloc(IO_BITMAP_BYTES, GFP_KERNEL);
+- if (!bitmap)
+- return -ENOMEM;
+-
+- memset(bitmap, 0xff, IO_BITMAP_BYTES);
+- t->io_bitmap_ptr = bitmap;
+- set_thread_flag(TIF_IO_BITMAP);
+- }
+-
+- /*
+- * do it in the per-thread copy and in the TSS ...
+- *
+- * Disable preemption via get_cpu() - we must not switch away
+- * because the ->io_bitmap_max value must match the bitmap
+- * contents:
+- */
+- tss = &per_cpu(init_tss, get_cpu());
+-
+- set_bitmap(t->io_bitmap_ptr, from, num, !turn_on);
+-
+- /*
+- * Search for a (possibly new) maximum. This is simple and stupid,
+- * to keep it obviously correct:
+- */
+- max_long = 0;
+- for (i = 0; i < IO_BITMAP_LONGS; i++)
+- if (t->io_bitmap_ptr[i] != ~0UL)
+- max_long = i;
+-
+- bytes = (max_long + 1) * sizeof(long);
+- bytes_updated = max(bytes, t->io_bitmap_max);
+-
+- t->io_bitmap_max = bytes;
+-
+- /*
+- * Sets the lazy trigger so that the next I/O operation will
+- * reload the correct bitmap.
+- * Reset the owner so that a process switch will not set
+- * tss->io_bitmap_base to IO_BITMAP_OFFSET.
+- */
+- tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
+- tss->io_bitmap_owner = NULL;
+-
+- put_cpu();
+-
+- return 0;
+-}
+-
+-/*
+- * sys_iopl has to be used when you want to access the IO ports
+- * beyond the 0x3ff range: to get the full 65536 ports bitmapped
+- * you'd need 8kB of bitmaps/process, which is a bit excessive.
+- *
+- * Here we just change the eflags value on the stack: we allow
+- * only the super-user to do it. This depends on the stack-layout
+- * on system-call entry - see also fork() and the signal handling
+- * code.
+- */
+-
+-asmlinkage long sys_iopl(unsigned long unused)
+-{
+- volatile struct pt_regs * regs = (struct pt_regs *) &unused;
+- unsigned int level = regs->ebx;
+- unsigned int old = (regs->eflags >> 12) & 3;
+- struct thread_struct *t = &current->thread;
+-
+- if (level > 3)
+- return -EINVAL;
+- /* Trying to gain more privileges? */
+- if (level > old) {
+- if (!capable(CAP_SYS_RAWIO))
+- return -EPERM;
+- }
+- t->iopl = level << 12;
+- regs->eflags = (regs->eflags & ~X86_EFLAGS_IOPL) | t->iopl;
+- set_iopl_mask(t->iopl);
+- return 0;
+-}
+diff --git a/arch/x86/kernel/ioport_64.c b/arch/x86/kernel/ioport_64.c
+deleted file mode 100644
+index 5f62fad..0000000
+--- a/arch/x86/kernel/ioport_64.c
++++ /dev/null
+@@ -1,117 +0,0 @@
+-/*
+- * This contains the io-permission bitmap code - written by obz, with changes
+- * by Linus.
+- */
+-
+-#include <linux/sched.h>
+-#include <linux/kernel.h>
+-#include <linux/capability.h>
+-#include <linux/errno.h>
+-#include <linux/types.h>
+-#include <linux/ioport.h>
+-#include <linux/smp.h>
+-#include <linux/stddef.h>
+-#include <linux/slab.h>
+-#include <linux/thread_info.h>
+-#include <linux/syscalls.h>
+-
+-/* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */
+-static void set_bitmap(unsigned long *bitmap, unsigned int base, unsigned int extent, int new_value)
+-{
+- int i;
+- if (new_value)
+- for (i = base; i < base + extent; i++)
+- __set_bit(i, bitmap);
+- else
+- for (i = base; i < base + extent; i++)
+- clear_bit(i, bitmap);
+-}
+-
+-/*
+- * this changes the io permissions bitmap in the current task.
+- */
+-asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
+-{
+- unsigned int i, max_long, bytes, bytes_updated;
+- struct thread_struct * t = &current->thread;
+- struct tss_struct * tss;
+- unsigned long *bitmap;
+-
+- if ((from + num <= from) || (from + num > IO_BITMAP_BITS))
+- return -EINVAL;
+- if (turn_on && !capable(CAP_SYS_RAWIO))
+- return -EPERM;
+-
+- /*
+- * If it's the first ioperm() call in this thread's lifetime, set the
+- * IO bitmap up. ioperm() is much less timing critical than clone(),
+- * this is why we delay this operation until now:
+- */
+- if (!t->io_bitmap_ptr) {
+- bitmap = kmalloc(IO_BITMAP_BYTES, GFP_KERNEL);
+- if (!bitmap)
+- return -ENOMEM;
+-
+- memset(bitmap, 0xff, IO_BITMAP_BYTES);
+- t->io_bitmap_ptr = bitmap;
+- set_thread_flag(TIF_IO_BITMAP);
+- }
+-
+- /*
+- * do it in the per-thread copy and in the TSS ...
+- *
+- * Disable preemption via get_cpu() - we must not switch away
+- * because the ->io_bitmap_max value must match the bitmap
+- * contents:
+- */
+- tss = &per_cpu(init_tss, get_cpu());
+-
+- set_bitmap(t->io_bitmap_ptr, from, num, !turn_on);
+-
+- /*
+- * Search for a (possibly new) maximum. This is simple and stupid,
+- * to keep it obviously correct:
+- */
+- max_long = 0;
+- for (i = 0; i < IO_BITMAP_LONGS; i++)
+- if (t->io_bitmap_ptr[i] != ~0UL)
+- max_long = i;
+-
+- bytes = (max_long + 1) * sizeof(long);
+- bytes_updated = max(bytes, t->io_bitmap_max);
+-
+- t->io_bitmap_max = bytes;
+-
+- /* Update the TSS: */
+- memcpy(tss->io_bitmap, t->io_bitmap_ptr, bytes_updated);
+-
+- put_cpu();
+-
+- return 0;
+-}
+-
+-/*
+- * sys_iopl has to be used when you want to access the IO ports
+- * beyond the 0x3ff range: to get the full 65536 ports bitmapped
+- * you'd need 8kB of bitmaps/process, which is a bit excessive.
+- *
+- * Here we just change the eflags value on the stack: we allow
+- * only the super-user to do it. This depends on the stack-layout
+- * on system-call entry - see also fork() and the signal handling
+- * code.
+- */
+-
+-asmlinkage long sys_iopl(unsigned int level, struct pt_regs *regs)
+-{
+- unsigned int old = (regs->eflags >> 12) & 3;
+-
+- if (level > 3)
+- return -EINVAL;
+- /* Trying to gain more privileges? */
+- if (level > old) {
+- if (!capable(CAP_SYS_RAWIO))
+- return -EPERM;
+- }
+- regs->eflags = (regs->eflags &~ X86_EFLAGS_IOPL) | (level << 12);
+- return 0;
+-}
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index d3fde94..cef054b 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -66,11 +66,11 @@ static union irq_ctx *softirq_ctx[NR_CPUS] __read_mostly;
+ * SMP cross-CPU interrupts have their own specific
+ * handlers).
+ */
+-fastcall unsigned int do_IRQ(struct pt_regs *regs)
++unsigned int do_IRQ(struct pt_regs *regs)
+ {
+ struct pt_regs *old_regs;
+ /* high bit used in ret_from_ code */
+- int irq = ~regs->orig_eax;
++ int irq = ~regs->orig_ax;
+ struct irq_desc *desc = irq_desc + irq;
+ #ifdef CONFIG_4KSTACKS
+ union irq_ctx *curctx, *irqctx;
+@@ -88,13 +88,13 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs)
+ #ifdef CONFIG_DEBUG_STACKOVERFLOW
+ /* Debugging check for stack overflow: is there less than 1KB free? */
+ {
+- long esp;
++ long sp;
+
+ __asm__ __volatile__("andl %%esp,%0" :
+- "=r" (esp) : "0" (THREAD_SIZE - 1));
+- if (unlikely(esp < (sizeof(struct thread_info) + STACK_WARN))) {
++ "=r" (sp) : "0" (THREAD_SIZE - 1));
++ if (unlikely(sp < (sizeof(struct thread_info) + STACK_WARN))) {
+ printk("do_IRQ: stack overflow: %ld\n",
+- esp - sizeof(struct thread_info));
++ sp - sizeof(struct thread_info));
+ dump_stack();
+ }
+ }
+@@ -112,7 +112,7 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs)
+ * current stack (which is the irq stack already after all)
+ */
+ if (curctx != irqctx) {
+- int arg1, arg2, ebx;
++ int arg1, arg2, bx;
+
+ /* build the stack frame on the IRQ stack */
+ isp = (u32*) ((char*)irqctx + sizeof(*irqctx));
+@@ -128,10 +128,10 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs)
+ (curctx->tinfo.preempt_count & SOFTIRQ_MASK);
+
+ asm volatile(
+- " xchgl %%ebx,%%esp \n"
+- " call *%%edi \n"
+- " movl %%ebx,%%esp \n"
+- : "=a" (arg1), "=d" (arg2), "=b" (ebx)
++ " xchgl %%ebx,%%esp \n"
++ " call *%%edi \n"
++ " movl %%ebx,%%esp \n"
++ : "=a" (arg1), "=d" (arg2), "=b" (bx)
+ : "0" (irq), "1" (desc), "2" (isp),
+ "D" (desc->handle_irq)
+ : "memory", "cc"
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 6b5c730..3aac154 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -20,6 +20,26 @@
+
+ atomic_t irq_err_count;
+
++/*
++ * 'what should we do if we get a hw irq event on an illegal vector'.
++ * each architecture has to answer this themselves.
++ */
++void ack_bad_irq(unsigned int irq)
++{
++ printk(KERN_WARNING "unexpected IRQ trap at vector %02x\n", irq);
++ /*
++ * Currently unexpected vectors happen only on SMP and APIC.
++ * We _must_ ack these because every local APIC has only N
++ * irq slots per priority level, and a 'hanging, unacked' IRQ
++ * holds up an irq slot - in excessive cases (when multiple
++ * unexpected vectors occur) that might lock up the APIC
++ * completely.
++ * But don't ack when the APIC is disabled. -AK
++ */
++ if (!disable_apic)
++ ack_APIC_irq();
++}
++
+ #ifdef CONFIG_DEBUG_STACKOVERFLOW
+ /*
+ * Probabilistic stack overflow check:
+@@ -33,11 +53,11 @@ static inline void stack_overflow_check(struct pt_regs *regs)
+ u64 curbase = (u64)task_stack_page(current);
+ static unsigned long warned = -60*HZ;
+
+- if (regs->rsp >= curbase && regs->rsp <= curbase + THREAD_SIZE &&
+- regs->rsp < curbase + sizeof(struct thread_info) + 128 &&
++ if (regs->sp >= curbase && regs->sp <= curbase + THREAD_SIZE &&
++ regs->sp < curbase + sizeof(struct thread_info) + 128 &&
+ time_after(jiffies, warned + 60*HZ)) {
+- printk("do_IRQ: %s near stack overflow (cur:%Lx,rsp:%lx)\n",
+- current->comm, curbase, regs->rsp);
++ printk("do_IRQ: %s near stack overflow (cur:%Lx,sp:%lx)\n",
++ current->comm, curbase, regs->sp);
+ show_stack(NULL,NULL);
+ warned = jiffies;
+ }
+@@ -142,7 +162,7 @@ asmlinkage unsigned int do_IRQ(struct pt_regs *regs)
+ struct pt_regs *old_regs = set_irq_regs(regs);
+
+ /* high bit used in ret_from_ code */
+- unsigned vector = ~regs->orig_rax;
++ unsigned vector = ~regs->orig_ax;
+ unsigned irq;
+
+ exit_idle();
+diff --git a/arch/x86/kernel/kdebugfs.c b/arch/x86/kernel/kdebugfs.c
+new file mode 100644
+index 0000000..7335430
+--- /dev/null
++++ b/arch/x86/kernel/kdebugfs.c
+@@ -0,0 +1,65 @@
++/*
++ * Architecture specific debugfs files
+ *
-+ * A matching blk_rq_unmap_user() must be issued at the end of io, while
-+ * still in process context.
++ * Copyright (C) 2007, Intel Corp.
++ * Huang Ying <ying.huang at intel.com>
+ *
-+ * Note: The mapped bio may need to be bounced through blk_queue_bounce()
-+ * before being submitted to the device, as pages mapped may be out of
-+ * reach. It's the callers responsibility to make sure this happens. The
-+ * original bio must be passed back in to blk_rq_unmap_user() for proper
-+ * unmapping.
++ * This file is released under the GPLv2.
+ */
-+int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
-+ struct sg_iovec *iov, int iov_count, unsigned int len)
-+{
-+ struct bio *bio;
+
-+ if (!iov || iov_count <= 0)
-+ return -EINVAL;
++#include <linux/debugfs.h>
++#include <linux/stat.h>
++#include <linux/init.h>
+
-+ /* we don't allow misaligned data like bio_map_user() does. If the
-+ * user is using sg, they're expected to know the alignment constraints
-+ * and respect them accordingly */
-+ bio = bio_map_user_iov(q, NULL, iov, iov_count, rq_data_dir(rq)== READ);
-+ if (IS_ERR(bio))
-+ return PTR_ERR(bio);
++#include <asm/setup.h>
+
-+ if (bio->bi_size != len) {
-+ bio_endio(bio, 0);
-+ bio_unmap_user(bio);
-+ return -EINVAL;
-+ }
++#ifdef CONFIG_DEBUG_BOOT_PARAMS
++static struct debugfs_blob_wrapper boot_params_blob = {
++ .data = &boot_params,
++ .size = sizeof(boot_params),
++};
+
-+ bio_get(bio);
-+ blk_rq_bio_prep(q, rq, bio);
-+ rq->buffer = rq->data = NULL;
++static int __init boot_params_kdebugfs_init(void)
++{
++ int error;
++ struct dentry *dbp, *version, *data;
++
++ dbp = debugfs_create_dir("boot_params", NULL);
++ if (!dbp) {
++ error = -ENOMEM;
++ goto err_return;
++ }
++ version = debugfs_create_x16("version", S_IRUGO, dbp,
++ &boot_params.hdr.version);
++ if (!version) {
++ error = -ENOMEM;
++ goto err_dir;
++ }
++ data = debugfs_create_blob("data", S_IRUGO, dbp,
++ &boot_params_blob);
++ if (!data) {
++ error = -ENOMEM;
++ goto err_version;
++ }
+ return 0;
++err_version:
++ debugfs_remove(version);
++err_dir:
++ debugfs_remove(dbp);
++err_return:
++ return error;
+}
++#endif
+
-+EXPORT_SYMBOL(blk_rq_map_user_iov);
++static int __init arch_kdebugfs_init(void)
++{
++ int error = 0;
+
-+/**
-+ * blk_rq_unmap_user - unmap a request with user data
-+ * @bio: start of bio list
++#ifdef CONFIG_DEBUG_BOOT_PARAMS
++ error = boot_params_kdebugfs_init();
++#endif
++
++ return error;
++}
++
++arch_initcall(arch_kdebugfs_init);
+diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
+new file mode 100644
+index 0000000..a99e764
+--- /dev/null
++++ b/arch/x86/kernel/kprobes.c
+@@ -0,0 +1,1066 @@
++/*
++ * Kernel Probes (KProbes)
+ *
-+ * Description:
-+ * Unmap a rq previously mapped by blk_rq_map_user(). The caller must
-+ * supply the original rq->bio from the blk_rq_map_user() return, since
-+ * the io completion may have changed rq->bio.
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++ *
++ * Copyright (C) IBM Corporation, 2002, 2004
++ *
++ * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna at in.ibm.com> Kernel
++ * Probes initial implementation ( includes contributions from
++ * Rusty Russell).
++ * 2004-July Suparna Bhattacharya <suparna at in.ibm.com> added jumper probes
++ * interface to access function arguments.
++ * 2004-Oct Jim Keniston <jkenisto at us.ibm.com> and Prasanna S Panchamukhi
++ * <prasanna at in.ibm.com> adapted for x86_64 from i386.
++ * 2005-Mar Roland McGrath <roland at redhat.com>
++ * Fixed to handle %rip-relative addressing mode correctly.
++ * 2005-May Hien Nguyen <hien at us.ibm.com>, Jim Keniston
++ * <jkenisto at us.ibm.com> and Prasanna S Panchamukhi
++ * <prasanna at in.ibm.com> added function-return probes.
++ * 2005-May Rusty Lynch <rusty.lynch at intel.com>
++ * Added function return probes functionality
++ * 2006-Feb Masami Hiramatsu <hiramatu at sdl.hitachi.co.jp> added
++ * kprobe-booster and kretprobe-booster for i386.
++ * 2007-Dec Masami Hiramatsu <mhiramat at redhat.com> added kprobe-booster
++ * and kretprobe-booster for x86-64
++ * 2007-Dec Masami Hiramatsu <mhiramat at redhat.com>, Arjan van de Ven
++ * <arjan at infradead.org> and Jim Keniston <jkenisto at us.ibm.com>
++ * unified x86 kprobes code.
+ */
-+int blk_rq_unmap_user(struct bio *bio)
-+{
-+ struct bio *mapped_bio;
-+ int ret = 0, ret2;
+
-+ while (bio) {
-+ mapped_bio = bio;
-+ if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
-+ mapped_bio = bio->bi_private;
++#include <linux/kprobes.h>
++#include <linux/ptrace.h>
++#include <linux/string.h>
++#include <linux/slab.h>
++#include <linux/hardirq.h>
++#include <linux/preempt.h>
++#include <linux/module.h>
++#include <linux/kdebug.h>
+
-+ ret2 = __blk_rq_unmap_user(mapped_bio);
-+ if (ret2 && !ret)
-+ ret = ret2;
++#include <asm/cacheflush.h>
++#include <asm/desc.h>
++#include <asm/pgtable.h>
++#include <asm/uaccess.h>
++#include <asm/alternative.h>
+
-+ mapped_bio = bio;
-+ bio = bio->bi_next;
-+ bio_put(mapped_bio);
-+ }
++void jprobe_return_end(void);
+
-+ return ret;
-+}
++DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
++DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+
-+EXPORT_SYMBOL(blk_rq_unmap_user);
++#ifdef CONFIG_X86_64
++#define stack_addr(regs) ((unsigned long *)regs->sp)
++#else
++/*
++ * "&regs->sp" looks wrong, but it's correct for x86_32. x86_32 CPUs
++ * don't save the ss and esp registers if the CPU is already in kernel
++ * mode when it traps. So for kprobes, regs->sp and regs->ss are not
++ * the [nonexistent] saved stack pointer and ss register, but rather
++ * the top 8 bytes of the pre-int3 stack. So &regs->sp happens to
++ * point to the top of the pre-int3 stack.
++ */
++#define stack_addr(regs) ((unsigned long *)&regs->sp)
++#endif
++
++#define W(row, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf)\
++ (((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) | \
++ (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) | \
++ (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) | \
++ (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf)) \
++ << (row % 32))
++ /*
++ * Undefined/reserved opcodes, conditional jump, Opcode Extension
++ * Groups, and some special opcodes can not boost.
++ */
++static const u32 twobyte_is_boostable[256 / 32] = {
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++ /* ---------------------------------------------- */
++ W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
++ W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 10 */
++ W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
++ W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
++ W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
++ W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
++ W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1) | /* 60 */
++ W(0x70, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) , /* 70 */
++ W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 80 */
++ W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 90 */
++ W(0xa0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* a0 */
++ W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) , /* b0 */
++ W(0xc0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* c0 */
++ W(0xd0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) , /* d0 */
++ W(0xe0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* e0 */
++ W(0xf0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0) /* f0 */
++ /* ----------------------------------------------- */
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++};
++static const u32 onebyte_has_modrm[256 / 32] = {
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++ /* ----------------------------------------------- */
++ W(0x00, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) | /* 00 */
++ W(0x10, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) , /* 10 */
++ W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) | /* 20 */
++ W(0x30, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0) , /* 30 */
++ W(0x40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 40 */
++ W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
++ W(0x60, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0) | /* 60 */
++ W(0x70, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 70 */
++ W(0x80, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 80 */
++ W(0x90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 90 */
++ W(0xa0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* a0 */
++ W(0xb0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* b0 */
++ W(0xc0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0) | /* c0 */
++ W(0xd0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) , /* d0 */
++ W(0xe0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* e0 */
++ W(0xf0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) /* f0 */
++ /* ----------------------------------------------- */
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++};
++static const u32 twobyte_has_modrm[256 / 32] = {
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++ /* ----------------------------------------------- */
++ W(0x00, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1) | /* 0f */
++ W(0x10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0) , /* 1f */
++ W(0x20, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* 2f */
++ W(0x30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 3f */
++ W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 4f */
++ W(0x50, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 5f */
++ W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 6f */
++ W(0x70, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1) , /* 7f */
++ W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 8f */
++ W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 9f */
++ W(0xa0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) | /* af */
++ W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1) , /* bf */
++ W(0xc0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0) | /* cf */
++ W(0xd0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* df */
++ W(0xe0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* ef */
++ W(0xf0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0) /* ff */
++ /* ----------------------------------------------- */
++ /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
++};
++#undef W
++
++struct kretprobe_blackpoint kretprobe_blacklist[] = {
++ {"__switch_to", }, /* This function switches only current task, but
++ doesn't switch kernel stack.*/
++ {NULL, NULL} /* Terminator */
++};
++const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
++
++/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
++static void __kprobes set_jmp_op(void *from, void *to)
++{
++ struct __arch_jmp_op {
++ char op;
++ s32 raddr;
++ } __attribute__((packed)) * jop;
++ jop = (struct __arch_jmp_op *)from;
++ jop->raddr = (s32)((long)(to) - ((long)(from) + 5));
++ jop->op = RELATIVEJUMP_INSTRUCTION;
++}
+
-+/**
-+ * blk_rq_map_kern - map kernel data to a request, for REQ_BLOCK_PC usage
-+ * @q: request queue where request should be inserted
-+ * @rq: request to fill
-+ * @kbuf: the kernel buffer
-+ * @len: length of user data
-+ * @gfp_mask: memory allocation flags
++/*
++ * Check for the REX prefix which can only exist on X86_64
++ * X86_32 always returns 0
+ */
-+int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
-+ unsigned int len, gfp_t gfp_mask)
++static int __kprobes is_REX_prefix(kprobe_opcode_t *insn)
+{
-+ struct bio *bio;
++#ifdef CONFIG_X86_64
++ if ((*insn & 0xf0) == 0x40)
++ return 1;
++#endif
++ return 0;
++}
+
-+ if (len > (q->max_hw_sectors << 9))
-+ return -EINVAL;
-+ if (!len || !kbuf)
-+ return -EINVAL;
++/*
++ * Returns non-zero if opcode is boostable.
++ * RIP relative instructions are adjusted at copying time in 64 bits mode
++ */
++static int __kprobes can_boost(kprobe_opcode_t *opcodes)
++{
++ kprobe_opcode_t opcode;
++ kprobe_opcode_t *orig_opcodes = opcodes;
+
-+ bio = bio_map_kern(q, kbuf, len, gfp_mask);
-+ if (IS_ERR(bio))
-+ return PTR_ERR(bio);
++retry:
++ if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
++ return 0;
++ opcode = *(opcodes++);
+
-+ if (rq_data_dir(rq) == WRITE)
-+ bio->bi_rw |= (1 << BIO_RW);
++ /* 2nd-byte opcode */
++ if (opcode == 0x0f) {
++ if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
++ return 0;
++ return test_bit(*opcodes,
++ (unsigned long *)twobyte_is_boostable);
++ }
+
-+ blk_rq_bio_prep(q, rq, bio);
-+ blk_queue_bounce(q, &rq->bio);
-+ rq->buffer = rq->data = NULL;
-+ return 0;
++ switch (opcode & 0xf0) {
++#ifdef CONFIG_X86_64
++ case 0x40:
++ goto retry; /* REX prefix is boostable */
++#endif
++ case 0x60:
++ if (0x63 < opcode && opcode < 0x67)
++ goto retry; /* prefixes */
++ /* can't boost Address-size override and bound */
++ return (opcode != 0x62 && opcode != 0x67);
++ case 0x70:
++ return 0; /* can't boost conditional jump */
++ case 0xc0:
++ /* can't boost software-interruptions */
++ return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
++ case 0xd0:
++ /* can boost AA* and XLAT */
++ return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
++ case 0xe0:
++ /* can boost in/out and absolute jmps */
++ return ((opcode & 0x04) || opcode == 0xea);
++ case 0xf0:
++ if ((opcode & 0x0c) == 0 && opcode != 0xf1)
++ goto retry; /* lock/rep(ne) prefix */
++ /* clear and set flags are boostable */
++ return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
++ default:
++ /* segment override prefixes are boostable */
++ if (opcode == 0x26 || opcode == 0x36 || opcode == 0x3e)
++ goto retry; /* prefixes */
++ /* CS override prefix and call are not boostable */
++ return (opcode != 0x2e && opcode != 0x9a);
++ }
+}
+
-+EXPORT_SYMBOL(blk_rq_map_kern);
-diff --git a/block/blk-merge.c b/block/blk-merge.c
-new file mode 100644
-index 0000000..5023f0b
---- /dev/null
-+++ b/block/blk-merge.c
-@@ -0,0 +1,485 @@
+/*
-+ * Functions related to segment and merge handling
++ * Returns non-zero if opcode modifies the interrupt flag.
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/scatterlist.h>
++static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
++{
++ switch (*insn) {
++ case 0xfa: /* cli */
++ case 0xfb: /* sti */
++ case 0xcf: /* iret/iretd */
++ case 0x9d: /* popf/popfd */
++ return 1;
++ }
+
-+#include "blk.h"
++ /*
++ * on X86_64, 0x40-0x4f are REX prefixes so we need to look
++ * at the next byte instead.. but of course not recurse infinitely
++ */
++ if (is_REX_prefix(insn))
++ return is_IF_modifier(++insn);
+
-+void blk_recalc_rq_sectors(struct request *rq, int nsect)
++ return 0;
++}
++
++/*
++ * Adjust the displacement if the instruction uses the %rip-relative
++ * addressing mode.
++ * If it does, Return the address of the 32-bit displacement word.
++ * If not, return null.
++ * Only applicable to 64-bit x86.
++ */
++static void __kprobes fix_riprel(struct kprobe *p)
+{
-+ if (blk_fs_request(rq)) {
-+ rq->hard_sector += nsect;
-+ rq->hard_nr_sectors -= nsect;
++#ifdef CONFIG_X86_64
++ u8 *insn = p->ainsn.insn;
++ s64 disp;
++ int need_modrm;
+
-+ /*
-+ * Move the I/O submission pointers ahead if required.
-+ */
-+ if ((rq->nr_sectors >= rq->hard_nr_sectors) &&
-+ (rq->sector <= rq->hard_sector)) {
-+ rq->sector = rq->hard_sector;
-+ rq->nr_sectors = rq->hard_nr_sectors;
-+ rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
-+ rq->current_nr_sectors = rq->hard_cur_sectors;
-+ rq->buffer = bio_data(rq->bio);
++ /* Skip legacy instruction prefixes. */
++ while (1) {
++ switch (*insn) {
++ case 0x66:
++ case 0x67:
++ case 0x2e:
++ case 0x3e:
++ case 0x26:
++ case 0x64:
++ case 0x65:
++ case 0x36:
++ case 0xf0:
++ case 0xf3:
++ case 0xf2:
++ ++insn;
++ continue;
+ }
++ break;
++ }
+
-+ /*
-+ * if total number of sectors is less than the first segment
-+ * size, something has gone terribly wrong
-+ */
-+ if (rq->nr_sectors < rq->current_nr_sectors) {
-+ printk("blk: request botched\n");
-+ rq->nr_sectors = rq->current_nr_sectors;
++ /* Skip REX instruction prefix. */
++ if (is_REX_prefix(insn))
++ ++insn;
++
++ if (*insn == 0x0f) {
++ /* Two-byte opcode. */
++ ++insn;
++ need_modrm = test_bit(*insn,
++ (unsigned long *)twobyte_has_modrm);
++ } else
++ /* One-byte opcode. */
++ need_modrm = test_bit(*insn,
++ (unsigned long *)onebyte_has_modrm);
++
++ if (need_modrm) {
++ u8 modrm = *++insn;
++ if ((modrm & 0xc7) == 0x05) {
++ /* %rip+disp32 addressing mode */
++ /* Displacement follows ModRM byte. */
++ ++insn;
++ /*
++ * The copied instruction uses the %rip-relative
++ * addressing mode. Adjust the displacement for the
++ * difference between the original location of this
++ * instruction and the location of the copy that will
++ * actually be run. The tricky bit here is making sure
++ * that the sign extension happens correctly in this
++ * calculation, since we need a signed 32-bit result to
++ * be sign-extended to 64 bits when it's added to the
++ * %rip value and yield the same 64-bit result that the
++ * sign-extension of the original signed 32-bit
++ * displacement would have given.
++ */
++ disp = (u8 *) p->addr + *((s32 *) insn) -
++ (u8 *) p->ainsn.insn;
++ BUG_ON((s64) (s32) disp != disp); /* Sanity check. */
++ *(s32 *)insn = (s32) disp;
+ }
+ }
++#endif
+}
+
-+void blk_recalc_rq_segments(struct request *rq)
++static void __kprobes arch_copy_kprobe(struct kprobe *p)
+{
-+ int nr_phys_segs;
-+ int nr_hw_segs;
-+ unsigned int phys_size;
-+ unsigned int hw_size;
-+ struct bio_vec *bv, *bvprv = NULL;
-+ int seg_size;
-+ int hw_seg_size;
-+ int cluster;
-+ struct req_iterator iter;
-+ int high, highprv = 1;
-+ struct request_queue *q = rq->q;
++ memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+
-+ if (!rq->bio)
-+ return;
++ fix_riprel(p);
+
-+ cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
-+ hw_seg_size = seg_size = 0;
-+ phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
-+ rq_for_each_segment(bv, rq, iter) {
-+ /*
-+ * the trick here is making sure that a high page is never
-+ * considered part of another segment, since that might
-+ * change with the bounce page.
-+ */
-+ high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
-+ if (high || highprv)
-+ goto new_hw_segment;
-+ if (cluster) {
-+ if (seg_size + bv->bv_len > q->max_segment_size)
-+ goto new_segment;
-+ if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
-+ goto new_segment;
-+ if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
-+ goto new_segment;
-+ if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
-+ goto new_hw_segment;
++ if (can_boost(p->addr))
++ p->ainsn.boostable = 0;
++ else
++ p->ainsn.boostable = -1;
+
-+ seg_size += bv->bv_len;
-+ hw_seg_size += bv->bv_len;
-+ bvprv = bv;
-+ continue;
-+ }
-+new_segment:
-+ if (BIOVEC_VIRT_MERGEABLE(bvprv, bv) &&
-+ !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
-+ hw_seg_size += bv->bv_len;
-+ else {
-+new_hw_segment:
-+ if (nr_hw_segs == 1 &&
-+ hw_seg_size > rq->bio->bi_hw_front_size)
-+ rq->bio->bi_hw_front_size = hw_seg_size;
-+ hw_seg_size = BIOVEC_VIRT_START_SIZE(bv) + bv->bv_len;
-+ nr_hw_segs++;
-+ }
++ p->opcode = *p->addr;
++}
+
-+ nr_phys_segs++;
-+ bvprv = bv;
-+ seg_size = bv->bv_len;
-+ highprv = high;
-+ }
++int __kprobes arch_prepare_kprobe(struct kprobe *p)
++{
++ /* insn: must be on special executable page on x86. */
++ p->ainsn.insn = get_insn_slot();
++ if (!p->ainsn.insn)
++ return -ENOMEM;
++ arch_copy_kprobe(p);
++ return 0;
++}
+
-+ if (nr_hw_segs == 1 &&
-+ hw_seg_size > rq->bio->bi_hw_front_size)
-+ rq->bio->bi_hw_front_size = hw_seg_size;
-+ if (hw_seg_size > rq->biotail->bi_hw_back_size)
-+ rq->biotail->bi_hw_back_size = hw_seg_size;
-+ rq->nr_phys_segments = nr_phys_segs;
-+ rq->nr_hw_segments = nr_hw_segs;
++void __kprobes arch_arm_kprobe(struct kprobe *p)
++{
++ text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
+}
+
-+void blk_recount_segments(struct request_queue *q, struct bio *bio)
++void __kprobes arch_disarm_kprobe(struct kprobe *p)
+{
-+ struct request rq;
-+ struct bio *nxt = bio->bi_next;
-+ rq.q = q;
-+ rq.bio = rq.biotail = bio;
-+ bio->bi_next = NULL;
-+ blk_recalc_rq_segments(&rq);
-+ bio->bi_next = nxt;
-+ bio->bi_phys_segments = rq.nr_phys_segments;
-+ bio->bi_hw_segments = rq.nr_hw_segments;
-+ bio->bi_flags |= (1 << BIO_SEG_VALID);
++ text_poke(p->addr, &p->opcode, 1);
+}
-+EXPORT_SYMBOL(blk_recount_segments);
+
-+static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
-+ struct bio *nxt)
++void __kprobes arch_remove_kprobe(struct kprobe *p)
+{
-+ if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
-+ return 0;
++ mutex_lock(&kprobe_mutex);
++ free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
++ mutex_unlock(&kprobe_mutex);
++}
+
-+ if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)))
-+ return 0;
-+ if (bio->bi_size + nxt->bi_size > q->max_segment_size)
-+ return 0;
++static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
++{
++ kcb->prev_kprobe.kp = kprobe_running();
++ kcb->prev_kprobe.status = kcb->kprobe_status;
++ kcb->prev_kprobe.old_flags = kcb->kprobe_old_flags;
++ kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags;
++}
+
-+ /*
-+ * bio and nxt are contigous in memory, check if the queue allows
-+ * these two to be merged into one
-+ */
-+ if (BIO_SEG_BOUNDARY(q, bio, nxt))
-+ return 1;
++static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
++{
++ __get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
++ kcb->kprobe_status = kcb->prev_kprobe.status;
++ kcb->kprobe_old_flags = kcb->prev_kprobe.old_flags;
++ kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags;
++}
+
-+ return 0;
++static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
++ struct kprobe_ctlblk *kcb)
++{
++ __get_cpu_var(current_kprobe) = p;
++ kcb->kprobe_saved_flags = kcb->kprobe_old_flags
++ = (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
++ if (is_IF_modifier(p->ainsn.insn))
++ kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
+}
+
-+static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
-+ struct bio *nxt)
++static void __kprobes clear_btf(void)
+{
-+ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-+ blk_recount_segments(q, bio);
-+ if (unlikely(!bio_flagged(nxt, BIO_SEG_VALID)))
-+ blk_recount_segments(q, nxt);
-+ if (!BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)) ||
-+ BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size + nxt->bi_hw_front_size))
-+ return 0;
-+ if (bio->bi_hw_back_size + nxt->bi_hw_front_size > q->max_segment_size)
++ if (test_thread_flag(TIF_DEBUGCTLMSR))
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, 0);
++}
++
++static void __kprobes restore_btf(void)
++{
++ if (test_thread_flag(TIF_DEBUGCTLMSR))
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, current->thread.debugctlmsr);
++}
++
++static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
++{
++ clear_btf();
++ regs->flags |= X86_EFLAGS_TF;
++ regs->flags &= ~X86_EFLAGS_IF;
++ /* single step inline if the instruction is an int3 */
++ if (p->opcode == BREAKPOINT_INSTRUCTION)
++ regs->ip = (unsigned long)p->addr;
++ else
++ regs->ip = (unsigned long)p->ainsn.insn;
++}
++
++/* Called with kretprobe_lock held */
++void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
++ struct pt_regs *regs)
++{
++ unsigned long *sara = stack_addr(regs);
++
++ ri->ret_addr = (kprobe_opcode_t *) *sara;
++
++ /* Replace the return addr with trampoline addr */
++ *sara = (unsigned long) &kretprobe_trampoline;
++}
++
++static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
++ struct kprobe_ctlblk *kcb)
++{
++#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
++ if (p->ainsn.boostable == 1 && !p->post_handler) {
++ /* Boost up -- we can execute copied instructions directly */
++ reset_current_kprobe();
++ regs->ip = (unsigned long)p->ainsn.insn;
++ preempt_enable_no_resched();
++ return;
++ }
++#endif
++ prepare_singlestep(p, regs);
++ kcb->kprobe_status = KPROBE_HIT_SS;
++}
++
++/*
++ * We have reentered the kprobe_handler(), since another probe was hit while
++ * within the handler. We save the original kprobes variables and just single
++ * step on the instruction of the new probe without calling any user handlers.
++ */
++static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
++ struct kprobe_ctlblk *kcb)
++{
++ switch (kcb->kprobe_status) {
++ case KPROBE_HIT_SSDONE:
++#ifdef CONFIG_X86_64
++ /* TODO: Provide re-entrancy from post_kprobes_handler() and
++ * avoid exception stack corruption while single-stepping on
++ * the instruction of the new probe.
++ */
++ arch_disarm_kprobe(p);
++ regs->ip = (unsigned long)p->addr;
++ reset_current_kprobe();
++ preempt_enable_no_resched();
++ break;
++#endif
++ case KPROBE_HIT_ACTIVE:
++ save_previous_kprobe(kcb);
++ set_current_kprobe(p, regs, kcb);
++ kprobes_inc_nmissed_count(p);
++ prepare_singlestep(p, regs);
++ kcb->kprobe_status = KPROBE_REENTER;
++ break;
++ case KPROBE_HIT_SS:
++ if (p == kprobe_running()) {
++ regs->flags &= ~TF_MASK;
++ regs->flags |= kcb->kprobe_saved_flags;
++ return 0;
++ } else {
++ /* A probe has been hit in the codepath leading up
++ * to, or just after, single-stepping of a probed
++ * instruction. This entire codepath should strictly
++ * reside in .kprobes.text section. Raise a warning
++ * to highlight this peculiar case.
++ */
++ }
++ default:
++ /* impossible cases */
++ WARN_ON(1);
+ return 0;
++ }
+
+ return 1;
+}
+
+/*
-+ * map a request to scatterlist, return number of sg entries setup. Caller
-+ * must make sure sg can hold rq->nr_phys_segments entries
++ * Interrupts are disabled on entry as trap3 is an interrupt gate and they
++ * remain disabled throughout this function.
+ */
-+int blk_rq_map_sg(struct request_queue *q, struct request *rq,
-+ struct scatterlist *sglist)
++static int __kprobes kprobe_handler(struct pt_regs *regs)
+{
-+ struct bio_vec *bvec, *bvprv;
-+ struct req_iterator iter;
-+ struct scatterlist *sg;
-+ int nsegs, cluster;
++ kprobe_opcode_t *addr;
++ struct kprobe *p;
++ struct kprobe_ctlblk *kcb;
+
-+ nsegs = 0;
-+ cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
++ addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
++ if (*addr != BREAKPOINT_INSTRUCTION) {
++ /*
++ * The breakpoint instruction was removed right
++ * after we hit it. Another cpu has removed
++ * either a probepoint or a debugger breakpoint
++ * at this address. In either case, no further
++ * handling of this interrupt is appropriate.
++ * Back up over the (now missing) int3 and run
++ * the original instruction.
++ */
++ regs->ip = (unsigned long)addr;
++ return 1;
++ }
+
+ /*
-+ * for each bio in rq
++ * We don't want to be preempted for the entire
++ * duration of kprobe processing. We conditionally
++ * re-enable preemption at the end of this function,
++ * and also in reenter_kprobe() and setup_singlestep().
+ */
-+ bvprv = NULL;
-+ sg = NULL;
-+ rq_for_each_segment(bvec, rq, iter) {
-+ int nbytes = bvec->bv_len;
-+
-+ if (bvprv && cluster) {
-+ if (sg->length + nbytes > q->max_segment_size)
-+ goto new_segment;
++ preempt_disable();
+
-+ if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
-+ goto new_segment;
-+ if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
-+ goto new_segment;
++ kcb = get_kprobe_ctlblk();
++ p = get_kprobe(addr);
+
-+ sg->length += nbytes;
++ if (p) {
++ if (kprobe_running()) {
++ if (reenter_kprobe(p, regs, kcb))
++ return 1;
+ } else {
-+new_segment:
-+ if (!sg)
-+ sg = sglist;
-+ else {
-+ /*
-+ * If the driver previously mapped a shorter
-+ * list, we could see a termination bit
-+ * prematurely unless it fully inits the sg
-+ * table on each mapping. We KNOW that there
-+ * must be more entries here or the driver
-+ * would be buggy, so force clear the
-+ * termination bit to avoid doing a full
-+ * sg_init_table() in drivers for each command.
-+ */
-+ sg->page_link &= ~0x02;
-+ sg = sg_next(sg);
-+ }
++ set_current_kprobe(p, regs, kcb);
++ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
-+ sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset);
-+ nsegs++;
++ /*
++ * If we have no pre-handler or it returned 0, we
++ * continue with normal processing. If we have a
++ * pre-handler and it returned non-zero, it prepped
++ * for calling the break_handler below on re-entry
++ * for jprobe processing, so get out doing nothing
++ * more here.
++ */
++ if (!p->pre_handler || !p->pre_handler(p, regs))
++ setup_singlestep(p, regs, kcb);
++ return 1;
+ }
-+ bvprv = bvec;
-+ } /* segments in rq */
-+
-+ if (q->dma_drain_size) {
-+ sg->page_link &= ~0x02;
-+ sg = sg_next(sg);
-+ sg_set_page(sg, virt_to_page(q->dma_drain_buffer),
-+ q->dma_drain_size,
-+ ((unsigned long)q->dma_drain_buffer) &
-+ (PAGE_SIZE - 1));
-+ nsegs++;
-+ }
-+
-+ if (sg)
-+ sg_mark_end(sg);
++ } else if (kprobe_running()) {
++ p = __get_cpu_var(current_kprobe);
++ if (p->break_handler && p->break_handler(p, regs)) {
++ setup_singlestep(p, regs, kcb);
++ return 1;
++ }
++ } /* else: not a kprobe fault; let the kernel handle it */
+
-+ return nsegs;
++ preempt_enable_no_resched();
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_rq_map_sg);
++/*
++ * When a retprobed function returns, this code saves registers and
++ * calls trampoline_handler(), which in turn calls the kretprobe's handler.
++ */
++void __kprobes kretprobe_trampoline_holder(void)
++{
++ asm volatile (
++ ".global kretprobe_trampoline\n"
++ "kretprobe_trampoline: \n"
++#ifdef CONFIG_X86_64
++ /* We don't bother saving the ss register */
++ " pushq %rsp\n"
++ " pushfq\n"
++ /*
++ * Skip cs, ip, orig_ax.
++ * trampoline_handler() will plug in these values
++ */
++ " subq $24, %rsp\n"
++ " pushq %rdi\n"
++ " pushq %rsi\n"
++ " pushq %rdx\n"
++ " pushq %rcx\n"
++ " pushq %rax\n"
++ " pushq %r8\n"
++ " pushq %r9\n"
++ " pushq %r10\n"
++ " pushq %r11\n"
++ " pushq %rbx\n"
++ " pushq %rbp\n"
++ " pushq %r12\n"
++ " pushq %r13\n"
++ " pushq %r14\n"
++ " pushq %r15\n"
++ " movq %rsp, %rdi\n"
++ " call trampoline_handler\n"
++ /* Replace saved sp with true return address. */
++ " movq %rax, 152(%rsp)\n"
++ " popq %r15\n"
++ " popq %r14\n"
++ " popq %r13\n"
++ " popq %r12\n"
++ " popq %rbp\n"
++ " popq %rbx\n"
++ " popq %r11\n"
++ " popq %r10\n"
++ " popq %r9\n"
++ " popq %r8\n"
++ " popq %rax\n"
++ " popq %rcx\n"
++ " popq %rdx\n"
++ " popq %rsi\n"
++ " popq %rdi\n"
++ /* Skip orig_ax, ip, cs */
++ " addq $24, %rsp\n"
++ " popfq\n"
++#else
++ " pushf\n"
++ /*
++ * Skip cs, ip, orig_ax.
++ * trampoline_handler() will plug in these values
++ */
++ " subl $12, %esp\n"
++ " pushl %fs\n"
++ " pushl %ds\n"
++ " pushl %es\n"
++ " pushl %eax\n"
++ " pushl %ebp\n"
++ " pushl %edi\n"
++ " pushl %esi\n"
++ " pushl %edx\n"
++ " pushl %ecx\n"
++ " pushl %ebx\n"
++ " movl %esp, %eax\n"
++ " call trampoline_handler\n"
++ /* Move flags to cs */
++ " movl 52(%esp), %edx\n"
++ " movl %edx, 48(%esp)\n"
++ /* Replace saved flags with true return address. */
++ " movl %eax, 52(%esp)\n"
++ " popl %ebx\n"
++ " popl %ecx\n"
++ " popl %edx\n"
++ " popl %esi\n"
++ " popl %edi\n"
++ " popl %ebp\n"
++ " popl %eax\n"
++ /* Skip ip, orig_ax, es, ds, fs */
++ " addl $20, %esp\n"
++ " popf\n"
++#endif
++ " ret\n");
++}
+
-+static inline int ll_new_mergeable(struct request_queue *q,
-+ struct request *req,
-+ struct bio *bio)
++/*
++ * Called from kretprobe_trampoline
++ */
++void * __kprobes trampoline_handler(struct pt_regs *regs)
+{
-+ int nr_phys_segs = bio_phys_segments(q, bio);
++ struct kretprobe_instance *ri = NULL;
++ struct hlist_head *head, empty_rp;
++ struct hlist_node *node, *tmp;
++ unsigned long flags, orig_ret_address = 0;
++ unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
+
-+ if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
-+ req->cmd_flags |= REQ_NOMERGE;
-+ if (req == q->last_merge)
-+ q->last_merge = NULL;
-+ return 0;
-+ }
++ INIT_HLIST_HEAD(&empty_rp);
++ spin_lock_irqsave(&kretprobe_lock, flags);
++ head = kretprobe_inst_table_head(current);
++ /* fixup registers */
++#ifdef CONFIG_X86_64
++ regs->cs = __KERNEL_CS;
++#else
++ regs->cs = __KERNEL_CS | get_kernel_rpl();
++#endif
++ regs->ip = trampoline_address;
++ regs->orig_ax = ~0UL;
+
+ /*
-+ * A hw segment is just getting larger, bump just the phys
-+ * counter.
++ * It is possible to have multiple instances associated with a given
++ * task either because multiple functions in the call path have
++ * return probes installed on them, and/or more than one
++ * return probe was registered for a target function.
++ *
++ * We can handle this because:
++ * - instances are always pushed into the head of the list
++ * - when multiple return probes are registered for the same
++ * function, the (chronologically) first instance's ret_addr
++ * will be the real return address, and all the rest will
++ * point to kretprobe_trampoline.
+ */
-+ req->nr_phys_segments += nr_phys_segs;
-+ return 1;
-+}
++ hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
++ if (ri->task != current)
++ /* another task is sharing our hash bucket */
++ continue;
+
-+static inline int ll_new_hw_segment(struct request_queue *q,
-+ struct request *req,
-+ struct bio *bio)
-+{
-+ int nr_hw_segs = bio_hw_segments(q, bio);
-+ int nr_phys_segs = bio_phys_segments(q, bio);
++ if (ri->rp && ri->rp->handler) {
++ __get_cpu_var(current_kprobe) = &ri->rp->kp;
++ get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
++ ri->rp->handler(ri, regs);
++ __get_cpu_var(current_kprobe) = NULL;
++ }
+
-+ if (req->nr_hw_segments + nr_hw_segs > q->max_hw_segments
-+ || req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
-+ req->cmd_flags |= REQ_NOMERGE;
-+ if (req == q->last_merge)
-+ q->last_merge = NULL;
-+ return 0;
++ orig_ret_address = (unsigned long)ri->ret_addr;
++ recycle_rp_inst(ri, &empty_rp);
++
++ if (orig_ret_address != trampoline_address)
++ /*
++ * This is the real return address. Any other
++ * instances associated with this task are for
++ * other calls deeper on the call stack
++ */
++ break;
+ }
+
-+ /*
-+ * This will form the start of a new hw segment. Bump both
-+ * counters.
-+ */
-+ req->nr_hw_segments += nr_hw_segs;
-+ req->nr_phys_segments += nr_phys_segs;
-+ return 1;
-+}
++ kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
-+int ll_back_merge_fn(struct request_queue *q, struct request *req,
-+ struct bio *bio)
-+{
-+ unsigned short max_sectors;
-+ int len;
++ spin_unlock_irqrestore(&kretprobe_lock, flags);
+
-+ if (unlikely(blk_pc_request(req)))
-+ max_sectors = q->max_hw_sectors;
-+ else
-+ max_sectors = q->max_sectors;
++ hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
++ hlist_del(&ri->hlist);
++ kfree(ri);
++ }
++ return (void *)orig_ret_address;
++}
+
-+ if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
-+ req->cmd_flags |= REQ_NOMERGE;
-+ if (req == q->last_merge)
-+ q->last_merge = NULL;
-+ return 0;
++/*
++ * Called after single-stepping. p->addr is the address of the
++ * instruction whose first byte has been replaced by the "int 3"
++ * instruction. To avoid the SMP problems that can occur when we
++ * temporarily put back the original opcode to single-step, we
++ * single-stepped a copy of the instruction. The address of this
++ * copy is p->ainsn.insn.
++ *
++ * This function prepares to return from the post-single-step
++ * interrupt. We have to fix up the stack as follows:
++ *
++ * 0) Except in the case of absolute or indirect jump or call instructions,
++ * the new ip is relative to the copied instruction. We need to make
++ * it relative to the original instruction.
++ *
++ * 1) If the single-stepped instruction was pushfl, then the TF and IF
++ * flags are set in the just-pushed flags, and may need to be cleared.
++ *
++ * 2) If the single-stepped instruction was a call, the return address
++ * that is atop the stack is the address following the copied instruction.
++ * We need to make it the address following the original instruction.
++ *
++ * If this is the first time we've single-stepped the instruction at
++ * this probepoint, and the instruction is boostable, boost it: add a
++ * jump instruction after the copied instruction, that jumps to the next
++ * instruction after the probepoint.
++ */
++static void __kprobes resume_execution(struct kprobe *p,
++ struct pt_regs *regs, struct kprobe_ctlblk *kcb)
++{
++ unsigned long *tos = stack_addr(regs);
++ unsigned long copy_ip = (unsigned long)p->ainsn.insn;
++ unsigned long orig_ip = (unsigned long)p->addr;
++ kprobe_opcode_t *insn = p->ainsn.insn;
++
++ /* Skip the REX prefix */
++ if (is_REX_prefix(insn))
++ insn++;
++
++ regs->flags &= ~X86_EFLAGS_TF;
++ switch (*insn) {
++ case 0x9c: /* pushfl */
++ *tos &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF);
++ *tos |= kcb->kprobe_old_flags;
++ break;
++ case 0xc2: /* iret/ret/lret */
++ case 0xc3:
++ case 0xca:
++ case 0xcb:
++ case 0xcf:
++ case 0xea: /* jmp absolute -- ip is correct */
++ /* ip is already adjusted, no more changes required */
++ p->ainsn.boostable = 1;
++ goto no_change;
++ case 0xe8: /* call relative - Fix return addr */
++ *tos = orig_ip + (*tos - copy_ip);
++ break;
++#ifdef CONFIG_X86_32
++ case 0x9a: /* call absolute -- handled like call absolute, indirect */
++ *tos = orig_ip + (*tos - copy_ip);
++ goto no_change;
++#endif
++ case 0xff:
++ if ((insn[1] & 0x30) == 0x10) {
++ /*
++ * call absolute, indirect
++ * Fix return addr; ip is correct.
++ * But this is not boostable
++ */
++ *tos = orig_ip + (*tos - copy_ip);
++ goto no_change;
++ } else if (((insn[1] & 0x31) == 0x20) ||
++ ((insn[1] & 0x31) == 0x21)) {
++ /*
++ * jmp near and far, absolute indirect
++ * ip is correct. And this is boostable
++ */
++ p->ainsn.boostable = 1;
++ goto no_change;
++ }
++ default:
++ break;
+ }
-+ if (unlikely(!bio_flagged(req->biotail, BIO_SEG_VALID)))
-+ blk_recount_segments(q, req->biotail);
-+ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-+ blk_recount_segments(q, bio);
-+ len = req->biotail->bi_hw_back_size + bio->bi_hw_front_size;
-+ if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(req->biotail), __BVEC_START(bio)) &&
-+ !BIOVEC_VIRT_OVERSIZE(len)) {
-+ int mergeable = ll_new_mergeable(q, req, bio);
+
-+ if (mergeable) {
-+ if (req->nr_hw_segments == 1)
-+ req->bio->bi_hw_front_size = len;
-+ if (bio->bi_hw_segments == 1)
-+ bio->bi_hw_back_size = len;
++ if (p->ainsn.boostable == 0) {
++ if ((regs->ip > copy_ip) &&
++ (regs->ip - copy_ip) + 5 < MAX_INSN_SIZE) {
++ /*
++ * This instruction can be executed directly if it
++ * jumps back to the correct address.
++ */
++ set_jmp_op((void *)regs->ip,
++ (void *)orig_ip + (regs->ip - copy_ip));
++ p->ainsn.boostable = 1;
++ } else {
++ p->ainsn.boostable = -1;
+ }
-+ return mergeable;
+ }
+
-+ return ll_new_hw_segment(q, req, bio);
++ regs->ip += orig_ip - copy_ip;
++
++no_change:
++ restore_btf();
+}
+
-+int ll_front_merge_fn(struct request_queue *q, struct request *req,
-+ struct bio *bio)
++/*
++ * Interrupts are disabled on entry as trap1 is an interrupt gate and they
++ * remain disabled throughout this function.
++ */
++static int __kprobes post_kprobe_handler(struct pt_regs *regs)
+{
-+ unsigned short max_sectors;
-+ int len;
-+
-+ if (unlikely(blk_pc_request(req)))
-+ max_sectors = q->max_hw_sectors;
-+ else
-+ max_sectors = q->max_sectors;
-+
++ struct kprobe *cur = kprobe_running();
++ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
-+ if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
-+ req->cmd_flags |= REQ_NOMERGE;
-+ if (req == q->last_merge)
-+ q->last_merge = NULL;
++ if (!cur)
+ return 0;
-+ }
-+ len = bio->bi_hw_back_size + req->bio->bi_hw_front_size;
-+ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-+ blk_recount_segments(q, bio);
-+ if (unlikely(!bio_flagged(req->bio, BIO_SEG_VALID)))
-+ blk_recount_segments(q, req->bio);
-+ if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(req->bio)) &&
-+ !BIOVEC_VIRT_OVERSIZE(len)) {
-+ int mergeable = ll_new_mergeable(q, req, bio);
+
-+ if (mergeable) {
-+ if (bio->bi_hw_segments == 1)
-+ bio->bi_hw_front_size = len;
-+ if (req->nr_hw_segments == 1)
-+ req->biotail->bi_hw_back_size = len;
-+ }
-+ return mergeable;
++ if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
++ kcb->kprobe_status = KPROBE_HIT_SSDONE;
++ cur->post_handler(cur, regs, 0);
+ }
+
-+ return ll_new_hw_segment(q, req, bio);
-+}
++ resume_execution(cur, regs, kcb);
++ regs->flags |= kcb->kprobe_saved_flags;
++ trace_hardirqs_fixup_flags(regs->flags);
+
-+static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
-+ struct request *next)
-+{
-+ int total_phys_segments;
-+ int total_hw_segments;
++ /* Restore back the original saved kprobes variables and continue. */
++ if (kcb->kprobe_status == KPROBE_REENTER) {
++ restore_previous_kprobe(kcb);
++ goto out;
++ }
++ reset_current_kprobe();
++out:
++ preempt_enable_no_resched();
+
+ /*
-+ * First check if the either of the requests are re-queued
-+ * requests. Can't merge them if they are.
++ * If somebody else is single-stepping across a probe point, flags
++ * will have TF set, in which case we continue the remaining processing
++ * of do_debug, as if this were not a probe hit.
+ */
-+ if (req->special || next->special)
++ if (regs->flags & X86_EFLAGS_TF)
+ return 0;
+
-+ /*
-+ * Will it become too large?
-+ */
-+ if ((req->nr_sectors + next->nr_sectors) > q->max_sectors)
-+ return 0;
++ return 1;
++}
+
-+ total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
-+ if (blk_phys_contig_segment(q, req->biotail, next->bio))
-+ total_phys_segments--;
++int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
++{
++ struct kprobe *cur = kprobe_running();
++ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
-+ if (total_phys_segments > q->max_phys_segments)
-+ return 0;
++ switch (kcb->kprobe_status) {
++ case KPROBE_HIT_SS:
++ case KPROBE_REENTER:
++ /*
++ * We are here because the instruction being single-
++ * stepped caused a page fault. We reset the current
++ * kprobe, point the ip back to the probe address,
++ * and allow the page fault handler to continue as a
++ * normal page fault.
++ */
++ regs->ip = (unsigned long)cur->addr;
++ regs->flags |= kcb->kprobe_old_flags;
++ if (kcb->kprobe_status == KPROBE_REENTER)
++ restore_previous_kprobe(kcb);
++ else
++ reset_current_kprobe();
++ preempt_enable_no_resched();
++ break;
++ case KPROBE_HIT_ACTIVE:
++ case KPROBE_HIT_SSDONE:
++ /*
++ * We increment the nmissed count for accounting;
++ * the npre/npostfault counts could also be used to
++ * account for these specific fault cases.
++ */
++ kprobes_inc_nmissed_count(cur);
+
-+ total_hw_segments = req->nr_hw_segments + next->nr_hw_segments;
-+ if (blk_hw_contig_segment(q, req->biotail, next->bio)) {
-+ int len = req->biotail->bi_hw_back_size + next->bio->bi_hw_front_size;
+ /*
-+ * propagate the combined length to the end of the requests
++ * We come here because instructions in the pre/post
++ * handler caused the page fault. This could happen
++ * if the handler tries to access user space, e.g. via
++ * copy_from_user() or get_user(). Let the
++ * user-specified handler try to fix it first.
+ */
-+ if (req->nr_hw_segments == 1)
-+ req->bio->bi_hw_front_size = len;
-+ if (next->nr_hw_segments == 1)
-+ next->biotail->bi_hw_back_size = len;
-+ total_hw_segments--;
-+ }
++ if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
++ return 1;
+
-+ if (total_hw_segments > q->max_hw_segments)
-+ return 0;
++ /*
++ * In case the user-specified fault handler returned
++ * zero, try to fix up.
++ */
++ if (fixup_exception(regs))
++ return 1;
+
-+ /* Merge is OK... */
-+ req->nr_phys_segments = total_phys_segments;
-+ req->nr_hw_segments = total_hw_segments;
-+ return 1;
++ /*
++ * The fixup routine could not handle it;
++ * let do_page_fault() fix it.
++ */
++ break;
++ default:
++ break;
++ }
++ return 0;
+}
+
+/*
-+ * Has to be called with the request spinlock acquired
++ * Wrapper routine for handling exceptions.
+ */
-+static int attempt_merge(struct request_queue *q, struct request *req,
-+ struct request *next)
++int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
++ unsigned long val, void *data)
+{
-+ if (!rq_mergeable(req) || !rq_mergeable(next))
-+ return 0;
-+
-+ /*
-+ * not contiguous
-+ */
-+ if (req->sector + req->nr_sectors != next->sector)
-+ return 0;
-+
-+ if (rq_data_dir(req) != rq_data_dir(next)
-+ || req->rq_disk != next->rq_disk
-+ || next->special)
-+ return 0;
-+
-+ /*
-+ * If we are allowed to merge, then append bio list
-+ * from next to rq and release next. merge_requests_fn
-+ * will have updated segment counts, update sector
-+ * counts here.
-+ */
-+ if (!ll_merge_requests_fn(q, req, next))
-+ return 0;
-+
-+ /*
-+ * At this point we have either done a back merge
-+ * or front merge. We need the smaller start_time of
-+ * the merged requests to be the current request
-+ * for accounting purposes.
-+ */
-+ if (time_after(req->start_time, next->start_time))
-+ req->start_time = next->start_time;
-+
-+ req->biotail->bi_next = next->bio;
-+ req->biotail = next->biotail;
-+
-+ req->nr_sectors = req->hard_nr_sectors += next->hard_nr_sectors;
++ struct die_args *args = data;
++ int ret = NOTIFY_DONE;
+
-+ elv_merge_requests(q, req, next);
++ if (args->regs && user_mode_vm(args->regs))
++ return ret;
+
-+ if (req->rq_disk) {
-+ disk_round_stats(req->rq_disk);
-+ req->rq_disk->in_flight--;
++ switch (val) {
++ case DIE_INT3:
++ if (kprobe_handler(args->regs))
++ ret = NOTIFY_STOP;
++ break;
++ case DIE_DEBUG:
++ if (post_kprobe_handler(args->regs))
++ ret = NOTIFY_STOP;
++ break;
++ case DIE_GPF:
++ /*
++ * To be potentially processing a kprobe fault and to
++ * trust the result from kprobe_running(), we have to
++ * be non-preemptible.
++ */
++ if (!preemptible() && kprobe_running() &&
++ kprobe_fault_handler(args->regs, args->trapnr))
++ ret = NOTIFY_STOP;
++ break;
++ default:
++ break;
+ }
++ return ret;
++}
+
-+ req->ioprio = ioprio_best(req->ioprio, next->ioprio);
++int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
++{
++ struct jprobe *jp = container_of(p, struct jprobe, kp);
++ unsigned long addr;
++ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
-+ __blk_put_request(q, next);
++ kcb->jprobe_saved_regs = *regs;
++ kcb->jprobe_saved_sp = stack_addr(regs);
++ addr = (unsigned long)(kcb->jprobe_saved_sp);
++
++ /*
++ * As Linus pointed out, gcc assumes that the callee
++ * owns the argument space and could overwrite it, e.g.
++ * tailcall optimization. So, to be absolutely safe
++ * we also save and restore enough stack bytes to cover
++ * the argument area.
++ */
++ memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
++ MIN_STACK_SIZE(addr));
++ regs->flags &= ~X86_EFLAGS_IF;
++ trace_hardirqs_off();
++ regs->ip = (unsigned long)(jp->entry);
+ return 1;
+}
+
-+int attempt_back_merge(struct request_queue *q, struct request *rq)
++void __kprobes jprobe_return(void)
+{
-+ struct request *next = elv_latter_request(q, rq);
-+
-+ if (next)
-+ return attempt_merge(q, rq, next);
++ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
-+ return 0;
++ asm volatile (
++#ifdef CONFIG_X86_64
++ " xchg %%rbx,%%rsp \n"
++#else
++ " xchgl %%ebx,%%esp \n"
++#endif
++ " int3 \n"
++ " .globl jprobe_return_end\n"
++ " jprobe_return_end: \n"
++ " nop \n"::"b"
++ (kcb->jprobe_saved_sp):"memory");
+}
+
-+int attempt_front_merge(struct request_queue *q, struct request *rq)
++int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+{
-+ struct request *prev = elv_former_request(q, rq);
-+
-+ if (prev)
-+ return attempt_merge(q, prev, rq);
++ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
++ u8 *addr = (u8 *) (regs->ip - 1);
++ struct jprobe *jp = container_of(p, struct jprobe, kp);
+
++ if ((addr > (u8 *) jprobe_return) &&
++ (addr < (u8 *) jprobe_return_end)) {
++ if (stack_addr(regs) != kcb->jprobe_saved_sp) {
++ struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
++ printk(KERN_ERR
++ "current sp %p does not match saved sp %p\n",
++ stack_addr(regs), kcb->jprobe_saved_sp);
++ printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
++ show_registers(saved_regs);
++ printk(KERN_ERR "Current registers\n");
++ show_registers(regs);
++ BUG();
++ }
++ *regs = kcb->jprobe_saved_regs;
++ memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
++ kcb->jprobes_stack,
++ MIN_STACK_SIZE(kcb->jprobe_saved_sp));
++ preempt_enable_no_resched();
++ return 1;
++ }
+ return 0;
+}
-diff --git a/block/blk-settings.c b/block/blk-settings.c
-new file mode 100644
-index 0000000..4df09a1
---- /dev/null
-+++ b/block/blk-settings.c
-@@ -0,0 +1,402 @@
-+/*
-+ * Functions related to setting various queue properties from drivers
-+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/init.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
-+
-+#include "blk.h"
-+
-+unsigned long blk_max_low_pfn, blk_max_pfn;
-+EXPORT_SYMBOL(blk_max_low_pfn);
-+EXPORT_SYMBOL(blk_max_pfn);
+
-+/**
-+ * blk_queue_prep_rq - set a prepare_request function for queue
-+ * @q: queue
-+ * @pfn: prepare_request function
-+ *
-+ * It's possible for a queue to register a prepare_request callback which
-+ * is invoked before the request is handed to the request_fn. The goal of
-+ * the function is to prepare a request for I/O, it can be used to build a
-+ * cdb from the request data for instance.
-+ *
-+ */
-+void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn)
++int __init arch_init_kprobes(void)
+{
-+ q->prep_rq_fn = pfn;
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_queue_prep_rq);
-+
-+/**
-+ * blk_queue_merge_bvec - set a merge_bvec function for queue
-+ * @q: queue
-+ * @mbfn: merge_bvec_fn
-+ *
-+ * Usually queues have static limitations on the max sectors or segments that
-+ * we can put in a request. Stacking drivers may have some settings that
-+ * are dynamic, and thus we have to query the queue whether it is ok to
-+ * add a new bio_vec to a bio at a given offset or not. If the block device
-+ * has such limitations, it needs to register a merge_bvec_fn to control
-+ * the size of bio's sent to it. Note that a block device *must* allow a
-+ * single page to be added to an empty bio. The block device driver may want
-+ * to use the bio_split() function to deal with these bio's. By default
-+ * no merge_bvec_fn is defined for a queue, and only the fixed limits are
-+ * honored.
-+ */
-+void blk_queue_merge_bvec(struct request_queue *q, merge_bvec_fn *mbfn)
++int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+{
-+ q->merge_bvec_fn = mbfn;
++ return 0;
+}
+diff --git a/arch/x86/kernel/kprobes_32.c b/arch/x86/kernel/kprobes_32.c
+deleted file mode 100644
+index 3a020f7..0000000
+--- a/arch/x86/kernel/kprobes_32.c
++++ /dev/null
+@@ -1,756 +0,0 @@
+-/*
+- * Kernel Probes (KProbes)
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+- *
+- * Copyright (C) IBM Corporation, 2002, 2004
+- *
+- * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna at in.ibm.com> Kernel
+- * Probes initial implementation ( includes contributions from
+- * Rusty Russell).
+- * 2004-July Suparna Bhattacharya <suparna at in.ibm.com> added jumper probes
+- * interface to access function arguments.
+- * 2005-May Hien Nguyen <hien at us.ibm.com>, Jim Keniston
+- * <jkenisto at us.ibm.com> and Prasanna S Panchamukhi
+- * <prasanna at in.ibm.com> added function-return probes.
+- */
+-
+-#include <linux/kprobes.h>
+-#include <linux/ptrace.h>
+-#include <linux/preempt.h>
+-#include <linux/kdebug.h>
+-#include <asm/cacheflush.h>
+-#include <asm/desc.h>
+-#include <asm/uaccess.h>
+-#include <asm/alternative.h>
+-
+-void jprobe_return_end(void);
+-
+-DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+-DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+-
+-struct kretprobe_blackpoint kretprobe_blacklist[] = {
+- {"__switch_to", }, /* This function switches only current task, but
+- doesn't switch kernel stack.*/
+- {NULL, NULL} /* Terminator */
+-};
+-const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
+-
+-/* insert a jmp code */
+-static __always_inline void set_jmp_op(void *from, void *to)
+-{
+- struct __arch_jmp_op {
+- char op;
+- long raddr;
+- } __attribute__((packed)) *jop;
+- jop = (struct __arch_jmp_op *)from;
+- jop->raddr = (long)(to) - ((long)(from) + 5);
+- jop->op = RELATIVEJUMP_INSTRUCTION;
+-}
+-
+-/*
+- * returns non-zero if opcodes can be boosted.
+- */
+-static __always_inline int can_boost(kprobe_opcode_t *opcodes)
+-{
+-#define W(row,b0,b1,b2,b3,b4,b5,b6,b7,b8,b9,ba,bb,bc,bd,be,bf) \
+- (((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) | \
+- (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) | \
+- (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) | \
+- (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf)) \
+- << (row % 32))
+- /*
+- * Undefined/reserved opcodes, conditional jump, Opcode Extension
+- * Groups, and some special opcodes can not be boost.
+- */
+- static const unsigned long twobyte_is_boostable[256 / 32] = {
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- /* ------------------------------- */
+- W(0x00, 0,0,1,1,0,0,1,0,1,1,0,0,0,0,0,0)| /* 00 */
+- W(0x10, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* 10 */
+- W(0x20, 1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0)| /* 20 */
+- W(0x30, 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* 30 */
+- W(0x40, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 40 */
+- W(0x50, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* 50 */
+- W(0x60, 1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1)| /* 60 */
+- W(0x70, 0,0,0,0,1,1,1,1,0,0,0,0,0,0,1,1), /* 70 */
+- W(0x80, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* 80 */
+- W(0x90, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1), /* 90 */
+- W(0xa0, 1,1,0,1,1,1,0,0,1,1,0,1,1,1,0,1)| /* a0 */
+- W(0xb0, 1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1), /* b0 */
+- W(0xc0, 1,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1)| /* c0 */
+- W(0xd0, 0,1,1,1,0,1,0,0,1,1,0,1,1,1,0,1), /* d0 */
+- W(0xe0, 0,1,1,0,0,1,0,0,1,1,0,1,1,1,0,1)| /* e0 */
+- W(0xf0, 0,1,1,1,0,1,0,0,1,1,1,0,1,1,1,0) /* f0 */
+- /* ------------------------------- */
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- };
+-#undef W
+- kprobe_opcode_t opcode;
+- kprobe_opcode_t *orig_opcodes = opcodes;
+-retry:
+- if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
+- return 0;
+- opcode = *(opcodes++);
+-
+- /* 2nd-byte opcode */
+- if (opcode == 0x0f) {
+- if (opcodes - orig_opcodes > MAX_INSN_SIZE - 1)
+- return 0;
+- return test_bit(*opcodes, twobyte_is_boostable);
+- }
+-
+- switch (opcode & 0xf0) {
+- case 0x60:
+- if (0x63 < opcode && opcode < 0x67)
+- goto retry; /* prefixes */
+- /* can't boost Address-size override and bound */
+- return (opcode != 0x62 && opcode != 0x67);
+- case 0x70:
+- return 0; /* can't boost conditional jump */
+- case 0xc0:
+- /* can't boost software-interruptions */
+- return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
+- case 0xd0:
+- /* can boost AA* and XLAT */
+- return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
+- case 0xe0:
+- /* can boost in/out and absolute jmps */
+- return ((opcode & 0x04) || opcode == 0xea);
+- case 0xf0:
+- if ((opcode & 0x0c) == 0 && opcode != 0xf1)
+- goto retry; /* lock/rep(ne) prefix */
+- /* clear and set flags can be boost */
+- return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
+- default:
+- if (opcode == 0x26 || opcode == 0x36 || opcode == 0x3e)
+- goto retry; /* prefixes */
+- /* can't boost CS override and call */
+- return (opcode != 0x2e && opcode != 0x9a);
+- }
+-}
+-
+-/*
+- * returns non-zero if opcode modifies the interrupt flag.
+- */
+-static int __kprobes is_IF_modifier(kprobe_opcode_t opcode)
+-{
+- switch (opcode) {
+- case 0xfa: /* cli */
+- case 0xfb: /* sti */
+- case 0xcf: /* iret/iretd */
+- case 0x9d: /* popf/popfd */
+- return 1;
+- }
+- return 0;
+-}
+-
+-int __kprobes arch_prepare_kprobe(struct kprobe *p)
+-{
+- /* insn: must be on special executable page on i386. */
+- p->ainsn.insn = get_insn_slot();
+- if (!p->ainsn.insn)
+- return -ENOMEM;
+-
+- memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+- p->opcode = *p->addr;
+- if (can_boost(p->addr)) {
+- p->ainsn.boostable = 0;
+- } else {
+- p->ainsn.boostable = -1;
+- }
+- return 0;
+-}
+-
+-void __kprobes arch_arm_kprobe(struct kprobe *p)
+-{
+- text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
+-}
+-
+-void __kprobes arch_disarm_kprobe(struct kprobe *p)
+-{
+- text_poke(p->addr, &p->opcode, 1);
+-}
+-
+-void __kprobes arch_remove_kprobe(struct kprobe *p)
+-{
+- mutex_lock(&kprobe_mutex);
+- free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
+- mutex_unlock(&kprobe_mutex);
+-}
+-
+-static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+-{
+- kcb->prev_kprobe.kp = kprobe_running();
+- kcb->prev_kprobe.status = kcb->kprobe_status;
+- kcb->prev_kprobe.old_eflags = kcb->kprobe_old_eflags;
+- kcb->prev_kprobe.saved_eflags = kcb->kprobe_saved_eflags;
+-}
+-
+-static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+-{
+- __get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
+- kcb->kprobe_status = kcb->prev_kprobe.status;
+- kcb->kprobe_old_eflags = kcb->prev_kprobe.old_eflags;
+- kcb->kprobe_saved_eflags = kcb->prev_kprobe.saved_eflags;
+-}
+-
+-static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
+- struct kprobe_ctlblk *kcb)
+-{
+- __get_cpu_var(current_kprobe) = p;
+- kcb->kprobe_saved_eflags = kcb->kprobe_old_eflags
+- = (regs->eflags & (TF_MASK | IF_MASK));
+- if (is_IF_modifier(p->opcode))
+- kcb->kprobe_saved_eflags &= ~IF_MASK;
+-}
+-
+-static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
+-{
+- regs->eflags |= TF_MASK;
+- regs->eflags &= ~IF_MASK;
+- /*single step inline if the instruction is an int3*/
+- if (p->opcode == BREAKPOINT_INSTRUCTION)
+- regs->eip = (unsigned long)p->addr;
+- else
+- regs->eip = (unsigned long)p->ainsn.insn;
+-}
+-
+-/* Called with kretprobe_lock held */
+-void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+- struct pt_regs *regs)
+-{
+-	unsigned long *sara = (unsigned long *)&regs->esp;
+-
+- ri->ret_addr = (kprobe_opcode_t *) *sara;
+-
+- /* Replace the return addr with trampoline addr */
+- *sara = (unsigned long) &kretprobe_trampoline;
+-}
+-
+-/*
+- * Interrupts are disabled on entry as trap3 is an interrupt gate and they
+- * remain disabled throughout this function.
+- */
+-static int __kprobes kprobe_handler(struct pt_regs *regs)
+-{
+- struct kprobe *p;
+- int ret = 0;
+- kprobe_opcode_t *addr;
+- struct kprobe_ctlblk *kcb;
+-
+- addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
+-
+- /*
+- * We don't want to be preempted for the entire
+- * duration of kprobe processing
+- */
+- preempt_disable();
+- kcb = get_kprobe_ctlblk();
+-
+- /* Check we're not actually recursing */
+- if (kprobe_running()) {
+- p = get_kprobe(addr);
+- if (p) {
+- if (kcb->kprobe_status == KPROBE_HIT_SS &&
+- *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
+- regs->eflags &= ~TF_MASK;
+- regs->eflags |= kcb->kprobe_saved_eflags;
+- goto no_kprobe;
+- }
+- /* We have reentered the kprobe_handler(), since
+- * another probe was hit while within the handler.
+- * We here save the original kprobes variables and
+- * just single step on the instruction of the new probe
+- * without calling any user handlers.
+- */
+- save_previous_kprobe(kcb);
+- set_current_kprobe(p, regs, kcb);
+- kprobes_inc_nmissed_count(p);
+- prepare_singlestep(p, regs);
+- kcb->kprobe_status = KPROBE_REENTER;
+- return 1;
+- } else {
+- if (*addr != BREAKPOINT_INSTRUCTION) {
+- /* The breakpoint instruction was removed by
+- * another cpu right after we hit, no further
+- * handling of this interrupt is appropriate
+- */
+- regs->eip -= sizeof(kprobe_opcode_t);
+- ret = 1;
+- goto no_kprobe;
+- }
+- p = __get_cpu_var(current_kprobe);
+- if (p->break_handler && p->break_handler(p, regs)) {
+- goto ss_probe;
+- }
+- }
+- goto no_kprobe;
+- }
+-
+- p = get_kprobe(addr);
+- if (!p) {
+- if (*addr != BREAKPOINT_INSTRUCTION) {
+- /*
+- * The breakpoint instruction was removed right
+- * after we hit it. Another cpu has removed
+- * either a probepoint or a debugger breakpoint
+- * at this address. In either case, no further
+- * handling of this interrupt is appropriate.
+- * Back up over the (now missing) int3 and run
+- * the original instruction.
+- */
+- regs->eip -= sizeof(kprobe_opcode_t);
+- ret = 1;
+- }
+- /* Not one of ours: let kernel handle it */
+- goto no_kprobe;
+- }
+-
+- set_current_kprobe(p, regs, kcb);
+- kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+-
+- if (p->pre_handler && p->pre_handler(p, regs))
+- /* handler has already set things up, so skip ss setup */
+- return 1;
+-
+-ss_probe:
+-#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
+- if (p->ainsn.boostable == 1 && !p->post_handler){
+- /* Boost up -- we can execute copied instructions directly */
+- reset_current_kprobe();
+- regs->eip = (unsigned long)p->ainsn.insn;
+- preempt_enable_no_resched();
+- return 1;
+- }
+-#endif
+- prepare_singlestep(p, regs);
+- kcb->kprobe_status = KPROBE_HIT_SS;
+- return 1;
+-
+-no_kprobe:
+- preempt_enable_no_resched();
+- return ret;
+-}
+-
+-/*
+- * For function-return probes, init_kprobes() establishes a probepoint
+- * here. When a retprobed function returns, this probe is hit and
+- * trampoline_probe_handler() runs, calling the kretprobe's handler.
+- */
+- void __kprobes kretprobe_trampoline_holder(void)
+- {
+- asm volatile ( ".global kretprobe_trampoline\n"
+- "kretprobe_trampoline: \n"
+- " pushf\n"
+- /* skip cs, eip, orig_eax */
+- " subl $12, %esp\n"
+- " pushl %fs\n"
+- " pushl %ds\n"
+- " pushl %es\n"
+- " pushl %eax\n"
+- " pushl %ebp\n"
+- " pushl %edi\n"
+- " pushl %esi\n"
+- " pushl %edx\n"
+- " pushl %ecx\n"
+- " pushl %ebx\n"
+- " movl %esp, %eax\n"
+- " call trampoline_handler\n"
+- /* move eflags to cs */
+- " movl 52(%esp), %edx\n"
+- " movl %edx, 48(%esp)\n"
+- /* save true return address on eflags */
+- " movl %eax, 52(%esp)\n"
+- " popl %ebx\n"
+- " popl %ecx\n"
+- " popl %edx\n"
+- " popl %esi\n"
+- " popl %edi\n"
+- " popl %ebp\n"
+- " popl %eax\n"
+- /* skip eip, orig_eax, es, ds, fs */
+- " addl $20, %esp\n"
+- " popf\n"
+- " ret\n");
+-}
+-
+-/*
+- * Called from kretprobe_trampoline
+- */
+-fastcall void *__kprobes trampoline_handler(struct pt_regs *regs)
+-{
+- struct kretprobe_instance *ri = NULL;
+- struct hlist_head *head, empty_rp;
+- struct hlist_node *node, *tmp;
+- unsigned long flags, orig_ret_address = 0;
+- unsigned long trampoline_address =(unsigned long)&kretprobe_trampoline;
+-
+- INIT_HLIST_HEAD(&empty_rp);
+- spin_lock_irqsave(&kretprobe_lock, flags);
+- head = kretprobe_inst_table_head(current);
+- /* fixup registers */
+- regs->xcs = __KERNEL_CS | get_kernel_rpl();
+- regs->eip = trampoline_address;
+- regs->orig_eax = 0xffffffff;
+-
+- /*
+- * It is possible to have multiple instances associated with a given
+- * task either because multiple functions in the call path
+- * have a return probe installed on them, and/or more than one
+- * return probe was registered for a target function.
+- *
+- * We can handle this because:
+- * - instances are always inserted at the head of the list
+- * - when multiple return probes are registered for the same
+- * function, the first instance's ret_addr will point to the
+- * real return address, and all the rest will point to
+- * kretprobe_trampoline
+- */
+- hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
+- if (ri->task != current)
+- /* another task is sharing our hash bucket */
+- continue;
+-
+- if (ri->rp && ri->rp->handler){
+- __get_cpu_var(current_kprobe) = &ri->rp->kp;
+- get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+- ri->rp->handler(ri, regs);
+- __get_cpu_var(current_kprobe) = NULL;
+- }
+-
+- orig_ret_address = (unsigned long)ri->ret_addr;
+- recycle_rp_inst(ri, &empty_rp);
+-
+- if (orig_ret_address != trampoline_address)
+- /*
+- * This is the real return address. Any other
+- * instances associated with this task are for
+- * other calls deeper on the call stack
+- */
+- break;
+- }
+-
+- kretprobe_assert(ri, orig_ret_address, trampoline_address);
+- spin_unlock_irqrestore(&kretprobe_lock, flags);
+-
+- hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
+- hlist_del(&ri->hlist);
+- kfree(ri);
+- }
+- return (void*)orig_ret_address;
+-}
+-
+-/*
+- * Called after single-stepping. p->addr is the address of the
+- * instruction whose first byte has been replaced by the "int 3"
+- * instruction. To avoid the SMP problems that can occur when we
+- * temporarily put back the original opcode to single-step, we
+- * single-stepped a copy of the instruction. The address of this
+- * copy is p->ainsn.insn.
+- *
+- * This function prepares to return from the post-single-step
+- * interrupt. We have to fix up the stack as follows:
+- *
+- * 0) Except in the case of absolute or indirect jump or call instructions,
+- * the new eip is relative to the copied instruction. We need to make
+- * it relative to the original instruction.
+- *
+- * 1) If the single-stepped instruction was pushfl, then the TF and IF
+- * flags are set in the just-pushed eflags, and may need to be cleared.
+- *
+- * 2) If the single-stepped instruction was a call, the return address
+- * that is atop the stack is the address following the copied instruction.
+- * We need to make it the address following the original instruction.
+- *
+- * This function also checks instruction size for preparing direct execution.
+- */
+-static void __kprobes resume_execution(struct kprobe *p,
+- struct pt_regs *regs, struct kprobe_ctlblk *kcb)
+-{
+-	unsigned long *tos = (unsigned long *)&regs->esp;
+- unsigned long copy_eip = (unsigned long)p->ainsn.insn;
+- unsigned long orig_eip = (unsigned long)p->addr;
+-
+- regs->eflags &= ~TF_MASK;
+- switch (p->ainsn.insn[0]) {
+- case 0x9c: /* pushfl */
+- *tos &= ~(TF_MASK | IF_MASK);
+- *tos |= kcb->kprobe_old_eflags;
+- break;
+- case 0xc2: /* iret/ret/lret */
+- case 0xc3:
+- case 0xca:
+- case 0xcb:
+- case 0xcf:
+- case 0xea: /* jmp absolute -- eip is correct */
+- /* eip is already adjusted, no more changes required */
+- p->ainsn.boostable = 1;
+- goto no_change;
+- case 0xe8: /* call relative - Fix return addr */
+- *tos = orig_eip + (*tos - copy_eip);
+- break;
+- case 0x9a: /* call absolute -- same as call absolute, indirect */
+- *tos = orig_eip + (*tos - copy_eip);
+- goto no_change;
+- case 0xff:
+- if ((p->ainsn.insn[1] & 0x30) == 0x10) {
+- /*
+- * call absolute, indirect
+- * Fix return addr; eip is correct.
+- * But this is not boostable
+- */
+- *tos = orig_eip + (*tos - copy_eip);
+- goto no_change;
+- } else if (((p->ainsn.insn[1] & 0x31) == 0x20) || /* jmp near, absolute indirect */
+- ((p->ainsn.insn[1] & 0x31) == 0x21)) { /* jmp far, absolute indirect */
+- /* eip is correct. And this is boostable */
+- p->ainsn.boostable = 1;
+- goto no_change;
+- }
+- default:
+- break;
+- }
+-
+- if (p->ainsn.boostable == 0) {
+- if ((regs->eip > copy_eip) &&
+- (regs->eip - copy_eip) + 5 < MAX_INSN_SIZE) {
+- /*
+- * These instructions can be executed directly if it
+- * jumps back to correct address.
+- */
+- set_jmp_op((void *)regs->eip,
+- (void *)orig_eip + (regs->eip - copy_eip));
+- p->ainsn.boostable = 1;
+- } else {
+- p->ainsn.boostable = -1;
+- }
+- }
+-
+- regs->eip = orig_eip + (regs->eip - copy_eip);
+-
+-no_change:
+- return;
+-}
+-
+-/*
+- * Interrupts are disabled on entry as trap1 is an interrupt gate and they
+- * remain disabled throughout this function.
+- */
+-static int __kprobes post_kprobe_handler(struct pt_regs *regs)
+-{
+- struct kprobe *cur = kprobe_running();
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- if (!cur)
+- return 0;
+-
+- if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
+- kcb->kprobe_status = KPROBE_HIT_SSDONE;
+- cur->post_handler(cur, regs, 0);
+- }
+-
+- resume_execution(cur, regs, kcb);
+- regs->eflags |= kcb->kprobe_saved_eflags;
+- trace_hardirqs_fixup_flags(regs->eflags);
+-
+- /*Restore back the original saved kprobes variables and continue. */
+- if (kcb->kprobe_status == KPROBE_REENTER) {
+- restore_previous_kprobe(kcb);
+- goto out;
+- }
+- reset_current_kprobe();
+-out:
+- preempt_enable_no_resched();
+-
+- /*
+- * if somebody else is singlestepping across a probe point, eflags
+- * will have TF set, in which case, continue the remaining processing
+- * of do_debug, as if this is not a probe hit.
+- */
+- if (regs->eflags & TF_MASK)
+- return 0;
+-
+- return 1;
+-}
+-
+-int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+-{
+- struct kprobe *cur = kprobe_running();
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- switch(kcb->kprobe_status) {
+- case KPROBE_HIT_SS:
+- case KPROBE_REENTER:
+- /*
+- * We are here because the instruction being single
+- * stepped caused a page fault. We reset the current
+- * kprobe and the eip points back to the probe address
+- * and allow the page fault handler to continue as a
+- * normal page fault.
+- */
+- regs->eip = (unsigned long)cur->addr;
+- regs->eflags |= kcb->kprobe_old_eflags;
+- if (kcb->kprobe_status == KPROBE_REENTER)
+- restore_previous_kprobe(kcb);
+- else
+- reset_current_kprobe();
+- preempt_enable_no_resched();
+- break;
+- case KPROBE_HIT_ACTIVE:
+- case KPROBE_HIT_SSDONE:
+- /*
+- * We increment the nmissed count for accounting,
+- * we can also use npre/npostfault count for accounting
+- * these specific fault cases.
+- */
+- kprobes_inc_nmissed_count(cur);
+-
+- /*
+- * We come here because instructions in the pre/post
+- * handler caused the page_fault, this could happen
+- * if handler tries to access user space by
+- * copy_from_user(), get_user() etc. Let the
+- * user-specified handler try to fix it first.
+- */
+- if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
+- return 1;
+-
+- /*
+- * In case the user-specified fault handler returned
+- * zero, try to fix up.
+- */
+- if (fixup_exception(regs))
+- return 1;
+-
+- /*
+- * fixup_exception() could not handle it,
+- * Let do_page_fault() fix it.
+- */
+- break;
+- default:
+- break;
+- }
+- return 0;
+-}
+-
+-/*
+- * Wrapper routine to for handling exceptions.
+- */
+-int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
+- unsigned long val, void *data)
+-{
+- struct die_args *args = (struct die_args *)data;
+- int ret = NOTIFY_DONE;
+-
+- if (args->regs && user_mode_vm(args->regs))
+- return ret;
+-
+- switch (val) {
+- case DIE_INT3:
+- if (kprobe_handler(args->regs))
+- ret = NOTIFY_STOP;
+- break;
+- case DIE_DEBUG:
+- if (post_kprobe_handler(args->regs))
+- ret = NOTIFY_STOP;
+- break;
+- case DIE_GPF:
+- /* kprobe_running() needs smp_processor_id() */
+- preempt_disable();
+- if (kprobe_running() &&
+- kprobe_fault_handler(args->regs, args->trapnr))
+- ret = NOTIFY_STOP;
+- preempt_enable();
+- break;
+- default:
+- break;
+- }
+- return ret;
+-}
+-
+-int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
+-{
+- struct jprobe *jp = container_of(p, struct jprobe, kp);
+- unsigned long addr;
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- kcb->jprobe_saved_regs = *regs;
+- kcb->jprobe_saved_esp = &regs->esp;
+- addr = (unsigned long)(kcb->jprobe_saved_esp);
+-
+- /*
+- * TBD: As Linus pointed out, gcc assumes that the callee
+- * owns the argument space and could overwrite it, e.g.
+- * tailcall optimization. So, to be absolutely safe
+- * we also save and restore enough stack bytes to cover
+- * the argument area.
+- */
+- memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
+- MIN_STACK_SIZE(addr));
+- regs->eflags &= ~IF_MASK;
+- trace_hardirqs_off();
+- regs->eip = (unsigned long)(jp->entry);
+- return 1;
+-}
+-
+-void __kprobes jprobe_return(void)
+-{
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- asm volatile (" xchgl %%ebx,%%esp \n"
+- " int3 \n"
+- " .globl jprobe_return_end \n"
+- " jprobe_return_end: \n"
+- " nop \n"::"b"
+- (kcb->jprobe_saved_esp):"memory");
+-}
+-
+-int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+-{
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+- u8 *addr = (u8 *) (regs->eip - 1);
+- unsigned long stack_addr = (unsigned long)(kcb->jprobe_saved_esp);
+- struct jprobe *jp = container_of(p, struct jprobe, kp);
+-
+- if ((addr > (u8 *) jprobe_return) && (addr < (u8 *) jprobe_return_end)) {
+- if (&regs->esp != kcb->jprobe_saved_esp) {
+- struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
+- printk("current esp %p does not match saved esp %p\n",
+- &regs->esp, kcb->jprobe_saved_esp);
+- printk("Saved registers for jprobe %p\n", jp);
+- show_registers(saved_regs);
+- printk("Current registers\n");
+- show_registers(regs);
+- BUG();
+- }
+- *regs = kcb->jprobe_saved_regs;
+- memcpy((kprobe_opcode_t *) stack_addr, kcb->jprobes_stack,
+- MIN_STACK_SIZE(stack_addr));
+- preempt_enable_no_resched();
+- return 1;
+- }
+- return 0;
+-}
+-
+-int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+-{
+- return 0;
+-}
+-
+-int __init arch_init_kprobes(void)
+-{
+- return 0;
+-}
+diff --git a/arch/x86/kernel/kprobes_64.c b/arch/x86/kernel/kprobes_64.c
+deleted file mode 100644
+index 5df19a9..0000000
+--- a/arch/x86/kernel/kprobes_64.c
++++ /dev/null
+@@ -1,749 +0,0 @@
+-/*
+- * Kernel Probes (KProbes)
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+- *
+- * Copyright (C) IBM Corporation, 2002, 2004
+- *
+- * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna at in.ibm.com> Kernel
+- * Probes initial implementation ( includes contributions from
+- * Rusty Russell).
+- * 2004-July Suparna Bhattacharya <suparna at in.ibm.com> added jumper probes
+- * interface to access function arguments.
+- * 2004-Oct Jim Keniston <kenistoj at us.ibm.com> and Prasanna S Panchamukhi
+- * <prasanna at in.ibm.com> adapted for x86_64
+- * 2005-Mar Roland McGrath <roland at redhat.com>
+- * Fixed to handle %rip-relative addressing mode correctly.
+- * 2005-May Rusty Lynch <rusty.lynch at intel.com>
+- * Added function return probes functionality
+- */
+-
+-#include <linux/kprobes.h>
+-#include <linux/ptrace.h>
+-#include <linux/string.h>
+-#include <linux/slab.h>
+-#include <linux/preempt.h>
+-#include <linux/module.h>
+-#include <linux/kdebug.h>
+-
+-#include <asm/pgtable.h>
+-#include <asm/uaccess.h>
+-#include <asm/alternative.h>
+-
+-void jprobe_return_end(void);
+-static void __kprobes arch_copy_kprobe(struct kprobe *p);
+-
+-DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+-DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+-
+-struct kretprobe_blackpoint kretprobe_blacklist[] = {
+- {"__switch_to", }, /* This function switches only current task, but
+- doesn't switch kernel stack.*/
+- {NULL, NULL} /* Terminator */
+-};
+-const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
+-
+-/*
+- * returns non-zero if opcode modifies the interrupt flag.
+- */
+-static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
+-{
+- switch (*insn) {
+- case 0xfa: /* cli */
+- case 0xfb: /* sti */
+- case 0xcf: /* iret/iretd */
+- case 0x9d: /* popf/popfd */
+- return 1;
+- }
+-
+- if (*insn >= 0x40 && *insn <= 0x4f && *++insn == 0xcf)
+- return 1;
+- return 0;
+-}
+-
+-int __kprobes arch_prepare_kprobe(struct kprobe *p)
+-{
+- /* insn: must be on special executable page on x86_64. */
+- p->ainsn.insn = get_insn_slot();
+- if (!p->ainsn.insn) {
+- return -ENOMEM;
+- }
+- arch_copy_kprobe(p);
+- return 0;
+-}
+-
+-/*
+- * Determine if the instruction uses the %rip-relative addressing mode.
+- * If it does, return the address of the 32-bit displacement word.
+- * If not, return null.
+- */
+-static s32 __kprobes *is_riprel(u8 *insn)
+-{
+-#define W(row,b0,b1,b2,b3,b4,b5,b6,b7,b8,b9,ba,bb,bc,bd,be,bf) \
+- (((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) | \
+- (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) | \
+- (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) | \
+- (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf)) \
+- << (row % 64))
+- static const u64 onebyte_has_modrm[256 / 64] = {
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- /* ------------------------------- */
+- W(0x00, 1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0)| /* 00 */
+- W(0x10, 1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0)| /* 10 */
+- W(0x20, 1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0)| /* 20 */
+- W(0x30, 1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0), /* 30 */
+- W(0x40, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* 40 */
+- W(0x50, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* 50 */
+- W(0x60, 0,0,1,1,0,0,0,0,0,1,0,1,0,0,0,0)| /* 60 */
+- W(0x70, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* 70 */
+- W(0x80, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 80 */
+- W(0x90, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* 90 */
+- W(0xa0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* a0 */
+- W(0xb0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* b0 */
+- W(0xc0, 1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0)| /* c0 */
+- W(0xd0, 1,1,1,1,0,0,0,0,1,1,1,1,1,1,1,1)| /* d0 */
+- W(0xe0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* e0 */
+- W(0xf0, 0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,1) /* f0 */
+- /* ------------------------------- */
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- };
+- static const u64 twobyte_has_modrm[256 / 64] = {
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- /* ------------------------------- */
+- W(0x00, 1,1,1,1,0,0,0,0,0,0,0,0,0,1,0,1)| /* 0f */
+- W(0x10, 1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0)| /* 1f */
+- W(0x20, 1,1,1,1,1,0,1,0,1,1,1,1,1,1,1,1)| /* 2f */
+- W(0x30, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), /* 3f */
+- W(0x40, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 4f */
+- W(0x50, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 5f */
+- W(0x60, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 6f */
+- W(0x70, 1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1), /* 7f */
+- W(0x80, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)| /* 8f */
+- W(0x90, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* 9f */
+- W(0xa0, 0,0,0,1,1,1,1,1,0,0,0,1,1,1,1,1)| /* af */
+- W(0xb0, 1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1), /* bf */
+- W(0xc0, 1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0)| /* cf */
+- W(0xd0, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* df */
+- W(0xe0, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)| /* ef */
+- W(0xf0, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0) /* ff */
+- /* ------------------------------- */
+- /* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
+- };
+-#undef W
+- int need_modrm;
+-
+- /* Skip legacy instruction prefixes. */
+- while (1) {
+- switch (*insn) {
+- case 0x66:
+- case 0x67:
+- case 0x2e:
+- case 0x3e:
+- case 0x26:
+- case 0x64:
+- case 0x65:
+- case 0x36:
+- case 0xf0:
+- case 0xf3:
+- case 0xf2:
+- ++insn;
+- continue;
+- }
+- break;
+- }
+-
+- /* Skip REX instruction prefix. */
+- if ((*insn & 0xf0) == 0x40)
+- ++insn;
+-
+- if (*insn == 0x0f) { /* Two-byte opcode. */
+- ++insn;
+- need_modrm = test_bit(*insn, twobyte_has_modrm);
+- } else { /* One-byte opcode. */
+- need_modrm = test_bit(*insn, onebyte_has_modrm);
+- }
+-
+- if (need_modrm) {
+- u8 modrm = *++insn;
+- if ((modrm & 0xc7) == 0x05) { /* %rip+disp32 addressing mode */
+- /* Displacement follows ModRM byte. */
+- return (s32 *) ++insn;
+- }
+- }
+-
+- /* No %rip-relative addressing mode here. */
+- return NULL;
+-}
+-
+-static void __kprobes arch_copy_kprobe(struct kprobe *p)
+-{
+- s32 *ripdisp;
+- memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE);
+- ripdisp = is_riprel(p->ainsn.insn);
+- if (ripdisp) {
+- /*
+- * The copied instruction uses the %rip-relative
+- * addressing mode. Adjust the displacement for the
+- * difference between the original location of this
+- * instruction and the location of the copy that will
+- * actually be run. The tricky bit here is making sure
+- * that the sign extension happens correctly in this
+- * calculation, since we need a signed 32-bit result to
+- * be sign-extended to 64 bits when it's added to the
+- * %rip value and yield the same 64-bit result that the
+- * sign-extension of the original signed 32-bit
+- * displacement would have given.
+- */
+- s64 disp = (u8 *) p->addr + *ripdisp - (u8 *) p->ainsn.insn;
+- BUG_ON((s64) (s32) disp != disp); /* Sanity check. */
+- *ripdisp = disp;
+- }
+- p->opcode = *p->addr;
+-}
+-
+-void __kprobes arch_arm_kprobe(struct kprobe *p)
+-{
+- text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
+-}
+-
+-void __kprobes arch_disarm_kprobe(struct kprobe *p)
+-{
+- text_poke(p->addr, &p->opcode, 1);
+-}
+-
+-void __kprobes arch_remove_kprobe(struct kprobe *p)
+-{
+- mutex_lock(&kprobe_mutex);
+- free_insn_slot(p->ainsn.insn, 0);
+- mutex_unlock(&kprobe_mutex);
+-}
+-
+-static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+-{
+- kcb->prev_kprobe.kp = kprobe_running();
+- kcb->prev_kprobe.status = kcb->kprobe_status;
+- kcb->prev_kprobe.old_rflags = kcb->kprobe_old_rflags;
+- kcb->prev_kprobe.saved_rflags = kcb->kprobe_saved_rflags;
+-}
+-
+-static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+-{
+- __get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
+- kcb->kprobe_status = kcb->prev_kprobe.status;
+- kcb->kprobe_old_rflags = kcb->prev_kprobe.old_rflags;
+- kcb->kprobe_saved_rflags = kcb->prev_kprobe.saved_rflags;
+-}
+-
+-static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
+- struct kprobe_ctlblk *kcb)
+-{
+- __get_cpu_var(current_kprobe) = p;
+- kcb->kprobe_saved_rflags = kcb->kprobe_old_rflags
+- = (regs->eflags & (TF_MASK | IF_MASK));
+- if (is_IF_modifier(p->ainsn.insn))
+- kcb->kprobe_saved_rflags &= ~IF_MASK;
+-}
+-
+-static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
+-{
+- regs->eflags |= TF_MASK;
+- regs->eflags &= ~IF_MASK;
+- /*single step inline if the instruction is an int3*/
+- if (p->opcode == BREAKPOINT_INSTRUCTION)
+- regs->rip = (unsigned long)p->addr;
+- else
+- regs->rip = (unsigned long)p->ainsn.insn;
+-}
+-
+-/* Called with kretprobe_lock held */
+-void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+- struct pt_regs *regs)
+-{
+- unsigned long *sara = (unsigned long *)regs->rsp;
+-
+- ri->ret_addr = (kprobe_opcode_t *) *sara;
+- /* Replace the return addr with trampoline addr */
+- *sara = (unsigned long) &kretprobe_trampoline;
+-}
+-
+-int __kprobes kprobe_handler(struct pt_regs *regs)
+-{
+- struct kprobe *p;
+- int ret = 0;
+- kprobe_opcode_t *addr = (kprobe_opcode_t *)(regs->rip - sizeof(kprobe_opcode_t));
+- struct kprobe_ctlblk *kcb;
+-
+- /*
+- * We don't want to be preempted for the entire
+- * duration of kprobe processing
+- */
+- preempt_disable();
+- kcb = get_kprobe_ctlblk();
+-
+- /* Check we're not actually recursing */
+- if (kprobe_running()) {
+- p = get_kprobe(addr);
+- if (p) {
+- if (kcb->kprobe_status == KPROBE_HIT_SS &&
+- *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
+- regs->eflags &= ~TF_MASK;
+- regs->eflags |= kcb->kprobe_saved_rflags;
+- goto no_kprobe;
+- } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
+- /* TODO: Provide re-entrancy from
+- * post_kprobes_handler() and avoid exception
+- * stack corruption while single-stepping on
+- * the instruction of the new probe.
+- */
+- arch_disarm_kprobe(p);
+- regs->rip = (unsigned long)p->addr;
+- reset_current_kprobe();
+- ret = 1;
+- } else {
+- /* We have reentered the kprobe_handler(), since
+- * another probe was hit while within the
+- * handler. We here save the original kprobe
+- * variables and just single step on instruction
+- * of the new probe without calling any user
+- * handlers.
+- */
+- save_previous_kprobe(kcb);
+- set_current_kprobe(p, regs, kcb);
+- kprobes_inc_nmissed_count(p);
+- prepare_singlestep(p, regs);
+- kcb->kprobe_status = KPROBE_REENTER;
+- return 1;
+- }
+- } else {
+- if (*addr != BREAKPOINT_INSTRUCTION) {
+- /* The breakpoint instruction was removed by
+- * another cpu right after we hit, no further
+- * handling of this interrupt is appropriate
+- */
+- regs->rip = (unsigned long)addr;
+- ret = 1;
+- goto no_kprobe;
+- }
+- p = __get_cpu_var(current_kprobe);
+- if (p->break_handler && p->break_handler(p, regs)) {
+- goto ss_probe;
+- }
+- }
+- goto no_kprobe;
+- }
+-
+- p = get_kprobe(addr);
+- if (!p) {
+- if (*addr != BREAKPOINT_INSTRUCTION) {
+- /*
+- * The breakpoint instruction was removed right
+- * after we hit it. Another cpu has removed
+- * either a probepoint or a debugger breakpoint
+- * at this address. In either case, no further
+- * handling of this interrupt is appropriate.
+- * Back up over the (now missing) int3 and run
+- * the original instruction.
+- */
+- regs->rip = (unsigned long)addr;
+- ret = 1;
+- }
+- /* Not one of ours: let kernel handle it */
+- goto no_kprobe;
+- }
+-
+- set_current_kprobe(p, regs, kcb);
+- kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+-
+- if (p->pre_handler && p->pre_handler(p, regs))
+- /* handler has already set things up, so skip ss setup */
+- return 1;
+-
+-ss_probe:
+- prepare_singlestep(p, regs);
+- kcb->kprobe_status = KPROBE_HIT_SS;
+- return 1;
+-
+-no_kprobe:
+- preempt_enable_no_resched();
+- return ret;
+-}
+-
+-/*
+- * For function-return probes, init_kprobes() establishes a probepoint
+- * here. When a retprobed function returns, this probe is hit and
+- * trampoline_probe_handler() runs, calling the kretprobe's handler.
+- */
+- void kretprobe_trampoline_holder(void)
+- {
+- asm volatile ( ".global kretprobe_trampoline\n"
+- "kretprobe_trampoline: \n"
+- "nop\n");
+- }
+-
+-/*
+- * Called when we hit the probe point at kretprobe_trampoline
+- */
+-int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+-{
+- struct kretprobe_instance *ri = NULL;
+- struct hlist_head *head, empty_rp;
+- struct hlist_node *node, *tmp;
+- unsigned long flags, orig_ret_address = 0;
+- unsigned long trampoline_address =(unsigned long)&kretprobe_trampoline;
+-
+- INIT_HLIST_HEAD(&empty_rp);
+- spin_lock_irqsave(&kretprobe_lock, flags);
+- head = kretprobe_inst_table_head(current);
+-
+- /*
+- * It is possible to have multiple instances associated with a given
+- * task either because multiple functions in the call path
+- * have a return probe installed on them, and/or more than one
+- * return probe was registered for a target function.
+- *
+- * We can handle this because:
+- * - instances are always inserted at the head of the list
+- * - when multiple return probes are registered for the same
+- * function, the first instance's ret_addr will point to the
+- * real return address, and all the rest will point to
+- * kretprobe_trampoline
+- */
+- hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
+- if (ri->task != current)
+- /* another task is sharing our hash bucket */
+- continue;
+-
+- if (ri->rp && ri->rp->handler)
+- ri->rp->handler(ri, regs);
+-
+- orig_ret_address = (unsigned long)ri->ret_addr;
+- recycle_rp_inst(ri, &empty_rp);
+-
+- if (orig_ret_address != trampoline_address)
+- /*
+- * This is the real return address. Any other
+- * instances associated with this task are for
+- * other calls deeper on the call stack
+- */
+- break;
+- }
+-
+- kretprobe_assert(ri, orig_ret_address, trampoline_address);
+- regs->rip = orig_ret_address;
+-
+- reset_current_kprobe();
+- spin_unlock_irqrestore(&kretprobe_lock, flags);
+- preempt_enable_no_resched();
+-
+- hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
+- hlist_del(&ri->hlist);
+- kfree(ri);
+- }
+- /*
+- * By returning a non-zero value, we are telling
+- * kprobe_handler() that we don't want the post_handler
+- * to run (and have re-enabled preemption)
+- */
+- return 1;
+-}
+-
+-/*
+- * Called after single-stepping. p->addr is the address of the
+- * instruction whose first byte has been replaced by the "int 3"
+- * instruction. To avoid the SMP problems that can occur when we
+- * temporarily put back the original opcode to single-step, we
+- * single-stepped a copy of the instruction. The address of this
+- * copy is p->ainsn.insn.
+- *
+- * This function prepares to return from the post-single-step
+- * interrupt. We have to fix up the stack as follows:
+- *
+- * 0) Except in the case of absolute or indirect jump or call instructions,
+- * the new rip is relative to the copied instruction. We need to make
+- * it relative to the original instruction.
+- *
+- * 1) If the single-stepped instruction was pushfl, then the TF and IF
+- * flags are set in the just-pushed eflags, and may need to be cleared.
+- *
+- * 2) If the single-stepped instruction was a call, the return address
+- * that is atop the stack is the address following the copied instruction.
+- * We need to make it the address following the original instruction.
+- */
+-static void __kprobes resume_execution(struct kprobe *p,
+- struct pt_regs *regs, struct kprobe_ctlblk *kcb)
+-{
+- unsigned long *tos = (unsigned long *)regs->rsp;
+- unsigned long copy_rip = (unsigned long)p->ainsn.insn;
+- unsigned long orig_rip = (unsigned long)p->addr;
+- kprobe_opcode_t *insn = p->ainsn.insn;
+-
+- /*skip the REX prefix*/
+- if (*insn >= 0x40 && *insn <= 0x4f)
+- insn++;
+-
+- regs->eflags &= ~TF_MASK;
+- switch (*insn) {
+- case 0x9c: /* pushfl */
+- *tos &= ~(TF_MASK | IF_MASK);
+- *tos |= kcb->kprobe_old_rflags;
+- break;
+- case 0xc2: /* iret/ret/lret */
+- case 0xc3:
+- case 0xca:
+- case 0xcb:
+- case 0xcf:
+- case 0xea: /* jmp absolute -- ip is correct */
+- /* ip is already adjusted, no more changes required */
+- goto no_change;
+- case 0xe8: /* call relative - Fix return addr */
+- *tos = orig_rip + (*tos - copy_rip);
+- break;
+- case 0xff:
+- if ((insn[1] & 0x30) == 0x10) {
+- /* call absolute, indirect */
+- /* Fix return addr; ip is correct. */
+- *tos = orig_rip + (*tos - copy_rip);
+- goto no_change;
+- } else if (((insn[1] & 0x31) == 0x20) || /* jmp near, absolute indirect */
+- ((insn[1] & 0x31) == 0x21)) { /* jmp far, absolute indirect */
+- /* ip is correct. */
+- goto no_change;
+- }
+- default:
+- break;
+- }
+-
+- regs->rip = orig_rip + (regs->rip - copy_rip);
+-no_change:
+-
+- return;
+-}
+-
+-int __kprobes post_kprobe_handler(struct pt_regs *regs)
+-{
+- struct kprobe *cur = kprobe_running();
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- if (!cur)
+- return 0;
+-
+- if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
+- kcb->kprobe_status = KPROBE_HIT_SSDONE;
+- cur->post_handler(cur, regs, 0);
+- }
+-
+- resume_execution(cur, regs, kcb);
+- regs->eflags |= kcb->kprobe_saved_rflags;
+- trace_hardirqs_fixup_flags(regs->eflags);
+-
+- /* Restore the original saved kprobes variables and continue. */
+- if (kcb->kprobe_status == KPROBE_REENTER) {
+- restore_previous_kprobe(kcb);
+- goto out;
+- }
+- reset_current_kprobe();
+-out:
+- preempt_enable_no_resched();
+-
+- /*
+- * if somebody else is singlestepping across a probe point, eflags
+- * will have TF set, in which case, continue the remaining processing
+- * of do_debug, as if this is not a probe hit.
+- */
+- if (regs->eflags & TF_MASK)
+- return 0;
+-
+- return 1;
+-}
+-
+-int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+-{
+- struct kprobe *cur = kprobe_running();
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+- const struct exception_table_entry *fixup;
+-
+- switch(kcb->kprobe_status) {
+- case KPROBE_HIT_SS:
+- case KPROBE_REENTER:
+- /*
+- * We are here because the instruction being single
+- * stepped caused a page fault. We reset the current
+- * kprobe and the rip points back to the probe address
+- * and allow the page fault handler to continue as a
+- * normal page fault.
+- */
+- regs->rip = (unsigned long)cur->addr;
+- regs->eflags |= kcb->kprobe_old_rflags;
+- if (kcb->kprobe_status == KPROBE_REENTER)
+- restore_previous_kprobe(kcb);
+- else
+- reset_current_kprobe();
+- preempt_enable_no_resched();
+- break;
+- case KPROBE_HIT_ACTIVE:
+- case KPROBE_HIT_SSDONE:
+- /*
+- * We increment the nmissed count for accounting,
+-	 * we can also use npre/npostfault count for accounting
+- * these specific fault cases.
+- */
+- kprobes_inc_nmissed_count(cur);
+-
+- /*
+- * We come here because instructions in the pre/post
+- * handler caused the page_fault, this could happen
+- * if handler tries to access user space by
+- * copy_from_user(), get_user() etc. Let the
+- * user-specified handler try to fix it first.
+- */
+- if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
+- return 1;
+-
+- /*
+- * In case the user-specified fault handler returned
+- * zero, try to fix up.
+- */
+- fixup = search_exception_tables(regs->rip);
+- if (fixup) {
+- regs->rip = fixup->fixup;
+- return 1;
+- }
+-
+- /*
+- * fixup() could not handle it,
+- * Let do_page_fault() fix it.
+- */
+- break;
+- default:
+- break;
+- }
+- return 0;
+-}
+-
+-/*
+- * Wrapper routine for handling exceptions.
+- */
+-int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
+- unsigned long val, void *data)
+-{
+- struct die_args *args = (struct die_args *)data;
+- int ret = NOTIFY_DONE;
+-
+- if (args->regs && user_mode(args->regs))
+- return ret;
+-
+- switch (val) {
+- case DIE_INT3:
+- if (kprobe_handler(args->regs))
+- ret = NOTIFY_STOP;
+- break;
+- case DIE_DEBUG:
+- if (post_kprobe_handler(args->regs))
+- ret = NOTIFY_STOP;
+- break;
+- case DIE_GPF:
+- /* kprobe_running() needs smp_processor_id() */
+- preempt_disable();
+- if (kprobe_running() &&
+- kprobe_fault_handler(args->regs, args->trapnr))
+- ret = NOTIFY_STOP;
+- preempt_enable();
+- break;
+- default:
+- break;
+- }
+- return ret;
+-}
+-
+-int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
+-{
+- struct jprobe *jp = container_of(p, struct jprobe, kp);
+- unsigned long addr;
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- kcb->jprobe_saved_regs = *regs;
+- kcb->jprobe_saved_rsp = (long *) regs->rsp;
+- addr = (unsigned long)(kcb->jprobe_saved_rsp);
+- /*
+- * As Linus pointed out, gcc assumes that the callee
+- * owns the argument space and could overwrite it, e.g.
+- * tailcall optimization. So, to be absolutely safe
+- * we also save and restore enough stack bytes to cover
+- * the argument area.
+- */
+- memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
+- MIN_STACK_SIZE(addr));
+- regs->eflags &= ~IF_MASK;
+- trace_hardirqs_off();
+- regs->rip = (unsigned long)(jp->entry);
+- return 1;
+-}
+-
+-void __kprobes jprobe_return(void)
+-{
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+-
+- asm volatile (" xchg %%rbx,%%rsp \n"
+- " int3 \n"
+- " .globl jprobe_return_end \n"
+- " jprobe_return_end: \n"
+- " nop \n"::"b"
+- (kcb->jprobe_saved_rsp):"memory");
+-}
+-
+-int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+-{
+- struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+- u8 *addr = (u8 *) (regs->rip - 1);
+- unsigned long stack_addr = (unsigned long)(kcb->jprobe_saved_rsp);
+- struct jprobe *jp = container_of(p, struct jprobe, kp);
+-
+- if ((addr > (u8 *) jprobe_return) && (addr < (u8 *) jprobe_return_end)) {
+- if ((unsigned long *)regs->rsp != kcb->jprobe_saved_rsp) {
+- struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
+- printk("current rsp %p does not match saved rsp %p\n",
+- (long *)regs->rsp, kcb->jprobe_saved_rsp);
+- printk("Saved registers for jprobe %p\n", jp);
+- show_registers(saved_regs);
+- printk("Current registers\n");
+- show_registers(regs);
+- BUG();
+- }
+- *regs = kcb->jprobe_saved_regs;
+- memcpy((kprobe_opcode_t *) stack_addr, kcb->jprobes_stack,
+- MIN_STACK_SIZE(stack_addr));
+- preempt_enable_no_resched();
+- return 1;
+- }
+- return 0;
+-}
+-
+-static struct kprobe trampoline_p = {
+- .addr = (kprobe_opcode_t *) &kretprobe_trampoline,
+- .pre_handler = trampoline_probe_handler
+-};
+-
+-int __init arch_init_kprobes(void)
+-{
+- return register_kprobe(&trampoline_p);
+-}
+-
+-int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+-{
+- if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
+- return 1;
+-
+- return 0;
+-}
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+new file mode 100644
+index 0000000..8a7660c
+--- /dev/null
++++ b/arch/x86/kernel/ldt.c
+@@ -0,0 +1,260 @@
++/*
++ * Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
++ * Copyright (C) 1999 Ingo Molnar <mingo at redhat.com>
++ * Copyright (C) 2002 Andi Kleen
++ *
++ * This handles calls from both 32bit and 64bit mode.
++ */
+
-+EXPORT_SYMBOL(blk_queue_merge_bvec);
++#include <linux/errno.h>
++#include <linux/sched.h>
++#include <linux/string.h>
++#include <linux/mm.h>
++#include <linux/smp.h>
++#include <linux/vmalloc.h>
+
-+void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn)
++#include <asm/uaccess.h>
++#include <asm/system.h>
++#include <asm/ldt.h>
++#include <asm/desc.h>
++#include <asm/mmu_context.h>
++
++#ifdef CONFIG_SMP
++static void flush_ldt(void *null)
+{
-+ q->softirq_done_fn = fn;
++	if (current->active_mm)
++		load_LDT(&current->active_mm->context);
++	if (current->active_mm)
++		load_LDT(&current->active_mm->context);
+}
++#endif
+
-+EXPORT_SYMBOL(blk_queue_softirq_done);
-+
-+/**
-+ * blk_queue_make_request - define an alternate make_request function for a device
-+ * @q: the request queue for the device to be affected
-+ * @mfn: the alternate make_request function
-+ *
-+ * Description:
-+ * The normal way for &struct bios to be passed to a device
-+ * driver is for them to be collected into requests on a request
-+ * queue, and then to allow the device driver to select requests
-+ * off that queue when it is ready. This works well for many block
-+ * devices. However some block devices (typically virtual devices
-+ * such as md or lvm) do not benefit from the processing on the
-+ * request queue, and are served best by having the requests passed
-+ * directly to them. This can be achieved by providing a function
-+ * to blk_queue_make_request().
-+ *
-+ * Caveat:
-+ * The driver that does this *must* be able to deal appropriately
-+ * with buffers in "highmemory". This can be accomplished by either calling
-+ * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
-+ * blk_queue_bounce() to create a buffer in normal memory.
-+ **/
-+void blk_queue_make_request(struct request_queue * q, make_request_fn * mfn)
++static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
+{
-+ /*
-+ * set defaults
-+ */
-+ q->nr_requests = BLKDEV_MAX_RQ;
-+ blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
-+ blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
-+ q->make_request_fn = mfn;
-+ q->backing_dev_info.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
-+ q->backing_dev_info.state = 0;
-+ q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
-+ blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
-+ blk_queue_hardsect_size(q, 512);
-+ blk_queue_dma_alignment(q, 511);
-+ blk_queue_congestion_threshold(q);
-+ q->nr_batching = BLK_BATCH_REQ;
-+
-+ q->unplug_thresh = 4; /* hmm */
-+ q->unplug_delay = (3 * HZ) / 1000; /* 3 milliseconds */
-+ if (q->unplug_delay == 0)
-+ q->unplug_delay = 1;
++ void *oldldt, *newldt;
++ int oldsize;
+
-+ INIT_WORK(&q->unplug_work, blk_unplug_work);
++ if (mincount <= pc->size)
++ return 0;
++ oldsize = pc->size;
++ mincount = (mincount + 511) & (~511);
++ if (mincount * LDT_ENTRY_SIZE > PAGE_SIZE)
++ newldt = vmalloc(mincount * LDT_ENTRY_SIZE);
++ else
++ newldt = (void *)__get_free_page(GFP_KERNEL);
+
-+ q->unplug_timer.function = blk_unplug_timeout;
-+ q->unplug_timer.data = (unsigned long)q;
++ if (!newldt)
++ return -ENOMEM;
+
-+ /*
-+ * by default assume old behaviour and bounce for any highmem page
-+ */
-+ blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
-+}
++ if (oldsize)
++ memcpy(newldt, pc->ldt, oldsize * LDT_ENTRY_SIZE);
++ oldldt = pc->ldt;
++ memset(newldt + oldsize * LDT_ENTRY_SIZE, 0,
++ (mincount - oldsize) * LDT_ENTRY_SIZE);
+
-+EXPORT_SYMBOL(blk_queue_make_request);
++#ifdef CONFIG_X86_64
++ /* CHECKME: Do we really need this ? */
++ wmb();
++#endif
++ pc->ldt = newldt;
++ wmb();
++ pc->size = mincount;
++ wmb();
+
-+/**
-+ * blk_queue_bounce_limit - set bounce buffer limit for queue
-+ * @q: the request queue for the device
-+ * @dma_addr: bus address limit
-+ *
-+ * Description:
-+ * Different hardware can have different requirements as to what pages
-+ * it can do I/O directly to. A low level driver can call
-+ * blk_queue_bounce_limit to have lower memory pages allocated as bounce
-+ * buffers for doing I/O to pages residing above @page.
-+ **/
-+void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
-+{
-+ unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT;
-+ int dma = 0;
++ if (reload) {
++#ifdef CONFIG_SMP
++ cpumask_t mask;
+
-+ q->bounce_gfp = GFP_NOIO;
-+#if BITS_PER_LONG == 64
-+ /* Assume anything <= 4GB can be handled by IOMMU.
-+ Actually some IOMMUs can handle everything, but I don't
-+ know of a way to test this here. */
-+ if (bounce_pfn < (min_t(u64,0xffffffff,BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
-+ dma = 1;
-+ q->bounce_pfn = max_low_pfn;
++ preempt_disable();
++ load_LDT(pc);
++ mask = cpumask_of_cpu(smp_processor_id());
++ if (!cpus_equal(current->mm->cpu_vm_mask, mask))
++ smp_call_function(flush_ldt, NULL, 1, 1);
++ preempt_enable();
+#else
-+ if (bounce_pfn < blk_max_low_pfn)
-+ dma = 1;
-+ q->bounce_pfn = bounce_pfn;
++ load_LDT(pc);
+#endif
-+ if (dma) {
-+ init_emergency_isa_pool();
-+ q->bounce_gfp = GFP_NOIO | GFP_DMA;
-+ q->bounce_pfn = bounce_pfn;
+ }
++ if (oldsize) {
++ if (oldsize * LDT_ENTRY_SIZE > PAGE_SIZE)
++ vfree(oldldt);
++ else
++ put_page(virt_to_page(oldldt));
++ }
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_queue_bounce_limit);
-+
-+/**
-+ * blk_queue_max_sectors - set max sectors for a request for this queue
-+ * @q: the request queue for the device
-+ * @max_sectors: max sectors in the usual 512b unit
-+ *
-+ * Description:
-+ * Enables a low level driver to set an upper limit on the size of
-+ * received requests.
-+ **/
-+void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
++static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
+{
-+ if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
-+ max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
-+ printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
-+ }
++ int err = alloc_ldt(new, old->size, 0);
+
-+ if (BLK_DEF_MAX_SECTORS > max_sectors)
-+ q->max_hw_sectors = q->max_sectors = max_sectors;
-+ else {
-+ q->max_sectors = BLK_DEF_MAX_SECTORS;
-+ q->max_hw_sectors = max_sectors;
-+ }
++ if (err < 0)
++ return err;
++ memcpy(new->ldt, old->ldt, old->size * LDT_ENTRY_SIZE);
++ return 0;
+}
+
-+EXPORT_SYMBOL(blk_queue_max_sectors);
-+
-+/**
-+ * blk_queue_max_phys_segments - set max phys segments for a request for this queue
-+ * @q: the request queue for the device
-+ * @max_segments: max number of segments
-+ *
-+ * Description:
-+ * Enables a low level driver to set an upper limit on the number of
-+ * physical data segments in a request. This would be the largest sized
-+ * scatter list the driver could handle.
-+ **/
-+void blk_queue_max_phys_segments(struct request_queue *q,
-+ unsigned short max_segments)
++/*
++ * we do not have to muck with descriptors here, that is
++ * done in switch_mm() as needed.
++ */
++int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
-+ if (!max_segments) {
-+ max_segments = 1;
-+ printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
-+ }
++ struct mm_struct *old_mm;
++ int retval = 0;
+
-+ q->max_phys_segments = max_segments;
++ mutex_init(&mm->context.lock);
++ mm->context.size = 0;
++ old_mm = current->mm;
++ if (old_mm && old_mm->context.size > 0) {
++ mutex_lock(&old_mm->context.lock);
++ retval = copy_ldt(&mm->context, &old_mm->context);
++ mutex_unlock(&old_mm->context.lock);
++ }
++ return retval;
+}
+
-+EXPORT_SYMBOL(blk_queue_max_phys_segments);
-+
-+/**
-+ * blk_queue_max_hw_segments - set max hw segments for a request for this queue
-+ * @q: the request queue for the device
-+ * @max_segments: max number of segments
++/*
++ * No need to lock the MM as we are the last user
+ *
-+ * Description:
-+ * Enables a low level driver to set an upper limit on the number of
-+ * hw data segments in a request. This would be the largest number of
-+ *   address/length pairs the host adapter can actually give at once
-+ * to the device.
-+ **/
-+void blk_queue_max_hw_segments(struct request_queue *q,
-+ unsigned short max_segments)
++ * 64bit: Don't touch the LDT register - we're already in the next thread.
++ */
++void destroy_context(struct mm_struct *mm)
+{
-+ if (!max_segments) {
-+ max_segments = 1;
-+ printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
++ if (mm->context.size) {
++#ifdef CONFIG_X86_32
++ /* CHECKME: Can this ever happen ? */
++ if (mm == current->active_mm)
++ clear_LDT();
++#endif
++ if (mm->context.size * LDT_ENTRY_SIZE > PAGE_SIZE)
++ vfree(mm->context.ldt);
++ else
++ put_page(virt_to_page(mm->context.ldt));
++ mm->context.size = 0;
+ }
-+
-+ q->max_hw_segments = max_segments;
+}
+
-+EXPORT_SYMBOL(blk_queue_max_hw_segments);
-+
-+/**
-+ * blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
-+ * @q: the request queue for the device
-+ * @max_size: max size of segment in bytes
-+ *
-+ * Description:
-+ * Enables a low level driver to set an upper limit on the size of a
-+ * coalesced segment
-+ **/
-+void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
++static int read_ldt(void __user *ptr, unsigned long bytecount)
+{
-+ if (max_size < PAGE_CACHE_SIZE) {
-+ max_size = PAGE_CACHE_SIZE;
-+ printk("%s: set to minimum %d\n", __FUNCTION__, max_size);
-+ }
++ int err;
++ unsigned long size;
++ struct mm_struct *mm = current->mm;
+
-+ q->max_segment_size = max_size;
-+}
++ if (!mm->context.size)
++ return 0;
++ if (bytecount > LDT_ENTRY_SIZE * LDT_ENTRIES)
++ bytecount = LDT_ENTRY_SIZE * LDT_ENTRIES;
+
-+EXPORT_SYMBOL(blk_queue_max_segment_size);
++ mutex_lock(&mm->context.lock);
++ size = mm->context.size * LDT_ENTRY_SIZE;
++ if (size > bytecount)
++ size = bytecount;
+
-+/**
-+ * blk_queue_hardsect_size - set hardware sector size for the queue
-+ * @q: the request queue for the device
-+ * @size: the hardware sector size, in bytes
-+ *
-+ * Description:
-+ * This should typically be set to the lowest possible sector size
-+ * that the hardware can operate on (possible without reverting to
-+ * even internal read-modify-write operations). Usually the default
-+ * of 512 covers most hardware.
-+ **/
-+void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
-+{
-+ q->hardsect_size = size;
++ err = 0;
++ if (copy_to_user(ptr, mm->context.ldt, size))
++ err = -EFAULT;
++ mutex_unlock(&mm->context.lock);
++ if (err < 0)
++ goto error_return;
++ if (size != bytecount) {
++ /* zero-fill the rest */
++ if (clear_user(ptr + size, bytecount - size) != 0) {
++ err = -EFAULT;
++ goto error_return;
++ }
++ }
++ return bytecount;
++error_return:
++ return err;
+}
+
-+EXPORT_SYMBOL(blk_queue_hardsect_size);
-+
-+/*
-+ * Returns the minimum that is _not_ zero, unless both are zero.
-+ */
-+#define min_not_zero(l, r) (l == 0) ? r : ((r == 0) ? l : min(l, r))
-+
-+/**
-+ * blk_queue_stack_limits - inherit underlying queue limits for stacked drivers
-+ * @t: the stacking driver (top)
-+ * @b: the underlying device (bottom)
-+ **/
-+void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
++static int read_default_ldt(void __user *ptr, unsigned long bytecount)
+{
-+ /* zero is "infinity" */
-+ t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
-+ t->max_hw_sectors = min_not_zero(t->max_hw_sectors,b->max_hw_sectors);
-+
-+ t->max_phys_segments = min(t->max_phys_segments,b->max_phys_segments);
-+ t->max_hw_segments = min(t->max_hw_segments,b->max_hw_segments);
-+ t->max_segment_size = min(t->max_segment_size,b->max_segment_size);
-+ t->hardsect_size = max(t->hardsect_size,b->hardsect_size);
-+ if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags))
-+ clear_bit(QUEUE_FLAG_CLUSTER, &t->queue_flags);
++ /* CHECKME: Can we use _one_ random number ? */
++#ifdef CONFIG_X86_32
++ unsigned long size = 5 * sizeof(struct desc_struct);
++#else
++ unsigned long size = 128;
++#endif
++ if (bytecount > size)
++ bytecount = size;
++ if (clear_user(ptr, bytecount))
++ return -EFAULT;
++ return bytecount;
+}
+
-+EXPORT_SYMBOL(blk_queue_stack_limits);
-+
-+/**
-+ * blk_queue_dma_drain - Set up a drain buffer for excess dma.
-+ *
-+ * @q: the request queue for the device
-+ * @buf: physically contiguous buffer
-+ * @size: size of the buffer in bytes
-+ *
-+ * Some devices have excess DMA problems and can't simply discard (or
-+ * zero fill) the unwanted piece of the transfer. They have to have a
-+ * real area of memory to transfer it into. The use case for this is
-+ * ATAPI devices in DMA mode. If the packet command causes a transfer
-+ * bigger than the transfer size some HBAs will lock up if there
-+ * aren't DMA elements to contain the excess transfer. What this API
-+ * does is adjust the queue so that the buf is always appended
-+ * silently to the scatterlist.
-+ *
-+ * Note: This routine adjusts max_hw_segments to make room for
-+ * appending the drain buffer. If you call
-+ * blk_queue_max_hw_segments() or blk_queue_max_phys_segments() after
-+ * calling this routine, you must set the limit to one fewer than your
-+ * device can support otherwise there won't be room for the drain
-+ * buffer.
-+ */
-+int blk_queue_dma_drain(struct request_queue *q, void *buf,
-+ unsigned int size)
++static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+{
-+ if (q->max_hw_segments < 2 || q->max_phys_segments < 2)
-+ return -EINVAL;
-+ /* make room for appending the drain */
-+ --q->max_hw_segments;
-+ --q->max_phys_segments;
-+ q->dma_drain_buffer = buf;
-+ q->dma_drain_size = size;
++ struct mm_struct *mm = current->mm;
++ struct desc_struct ldt;
++ int error;
++ struct user_desc ldt_info;
+
-+ return 0;
-+}
++ error = -EINVAL;
++ if (bytecount != sizeof(ldt_info))
++ goto out;
++ error = -EFAULT;
++ if (copy_from_user(&ldt_info, ptr, sizeof(ldt_info)))
++ goto out;
+
-+EXPORT_SYMBOL_GPL(blk_queue_dma_drain);
++ error = -EINVAL;
++ if (ldt_info.entry_number >= LDT_ENTRIES)
++ goto out;
++ if (ldt_info.contents == 3) {
++ if (oldmode)
++ goto out;
++ if (ldt_info.seg_not_present == 0)
++ goto out;
++ }
+
-+/**
-+ * blk_queue_segment_boundary - set boundary rules for segment merging
-+ * @q: the request queue for the device
-+ * @mask: the memory boundary mask
-+ **/
-+void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
-+{
-+ if (mask < PAGE_CACHE_SIZE - 1) {
-+ mask = PAGE_CACHE_SIZE - 1;
-+ printk("%s: set to minimum %lx\n", __FUNCTION__, mask);
++ mutex_lock(&mm->context.lock);
++ if (ldt_info.entry_number >= mm->context.size) {
++		error = alloc_ldt(&current->mm->context,
++				  ldt_info.entry_number + 1, 1);
++ if (error < 0)
++ goto out_unlock;
+ }
+
-+ q->seg_boundary_mask = mask;
-+}
++ /* Allow LDTs to be cleared by the user. */
++ if (ldt_info.base_addr == 0 && ldt_info.limit == 0) {
++ if (oldmode || LDT_empty(&ldt_info)) {
++ memset(&ldt, 0, sizeof(ldt));
++ goto install;
++ }
++ }
+
-+EXPORT_SYMBOL(blk_queue_segment_boundary);
++ fill_ldt(&ldt, &ldt_info);
++ if (oldmode)
++ ldt.avl = 0;
+
-+/**
-+ * blk_queue_dma_alignment - set dma length and memory alignment
-+ * @q: the request queue for the device
-+ * @mask: alignment mask
-+ *
-+ * description:
-+ *    set required memory and length alignment for direct dma transactions.
-+ *    this is used when building direct io requests for the queue.
-+ *
-+ **/
-+void blk_queue_dma_alignment(struct request_queue *q, int mask)
-+{
-+ q->dma_alignment = mask;
++ /* Install the new entry ... */
++install:
++ write_ldt_entry(mm->context.ldt, ldt_info.entry_number, &ldt);
++ error = 0;
++
++out_unlock:
++ mutex_unlock(&mm->context.lock);
++out:
++ return error;
+}
+
-+EXPORT_SYMBOL(blk_queue_dma_alignment);
++asmlinkage int sys_modify_ldt(int func, void __user *ptr,
++ unsigned long bytecount)
++{
++ int ret = -ENOSYS;
+
-+/**
-+ * blk_queue_update_dma_alignment - update dma length and memory alignment
-+ * @q: the request queue for the device
-+ * @mask: alignment mask
-+ *
-+ * description:
-+ *    update required memory and length alignment for direct dma transactions.
-+ * If the requested alignment is larger than the current alignment, then
-+ * the current queue alignment is updated to the new value, otherwise it
-+ * is left alone. The design of this is to allow multiple objects
-+ * (driver, device, transport etc) to set their respective
-+ * alignments without having them interfere.
-+ *
-+ **/
-+void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
++ switch (func) {
++ case 0:
++ ret = read_ldt(ptr, bytecount);
++ break;
++ case 1:
++ ret = write_ldt(ptr, bytecount, 1);
++ break;
++ case 2:
++ ret = read_default_ldt(ptr, bytecount);
++ break;
++ case 0x11:
++ ret = write_ldt(ptr, bytecount, 0);
++ break;
++ }
++ return ret;
++}
+diff --git a/arch/x86/kernel/ldt_32.c b/arch/x86/kernel/ldt_32.c
+deleted file mode 100644
+index 9ff90a2..0000000
+--- a/arch/x86/kernel/ldt_32.c
++++ /dev/null
+@@ -1,248 +0,0 @@
+-/*
+- * Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
+- * Copyright (C) 1999 Ingo Molnar <mingo at redhat.com>
+- */
+-
+-#include <linux/errno.h>
+-#include <linux/sched.h>
+-#include <linux/string.h>
+-#include <linux/mm.h>
+-#include <linux/smp.h>
+-#include <linux/vmalloc.h>
+-#include <linux/slab.h>
+-
+-#include <asm/uaccess.h>
+-#include <asm/system.h>
+-#include <asm/ldt.h>
+-#include <asm/desc.h>
+-#include <asm/mmu_context.h>
+-
+-#ifdef CONFIG_SMP /* avoids "defined but not used" warning */
+-static void flush_ldt(void *null)
+-{
+- if (current->active_mm)
+-		load_LDT(&current->active_mm->context);
+-}
+-#endif
+-
+-static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
+-{
+- void *oldldt;
+- void *newldt;
+- int oldsize;
+-
+- if (mincount <= pc->size)
+- return 0;
+- oldsize = pc->size;
+- mincount = (mincount+511)&(~511);
+- if (mincount*LDT_ENTRY_SIZE > PAGE_SIZE)
+- newldt = vmalloc(mincount*LDT_ENTRY_SIZE);
+- else
+- newldt = kmalloc(mincount*LDT_ENTRY_SIZE, GFP_KERNEL);
+-
+- if (!newldt)
+- return -ENOMEM;
+-
+- if (oldsize)
+- memcpy(newldt, pc->ldt, oldsize*LDT_ENTRY_SIZE);
+- oldldt = pc->ldt;
+- memset(newldt+oldsize*LDT_ENTRY_SIZE, 0, (mincount-oldsize)*LDT_ENTRY_SIZE);
+- pc->ldt = newldt;
+- wmb();
+- pc->size = mincount;
+- wmb();
+-
+- if (reload) {
+-#ifdef CONFIG_SMP
+- cpumask_t mask;
+- preempt_disable();
+- load_LDT(pc);
+- mask = cpumask_of_cpu(smp_processor_id());
+- if (!cpus_equal(current->mm->cpu_vm_mask, mask))
+- smp_call_function(flush_ldt, NULL, 1, 1);
+- preempt_enable();
+-#else
+- load_LDT(pc);
+-#endif
+- }
+- if (oldsize) {
+- if (oldsize*LDT_ENTRY_SIZE > PAGE_SIZE)
+- vfree(oldldt);
+- else
+- kfree(oldldt);
+- }
+- return 0;
+-}
+-
+-static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
+-{
+- int err = alloc_ldt(new, old->size, 0);
+- if (err < 0)
+- return err;
+- memcpy(new->ldt, old->ldt, old->size*LDT_ENTRY_SIZE);
+- return 0;
+-}
+-
+-/*
+- * we do not have to muck with descriptors here, that is
+- * done in switch_mm() as needed.
+- */
+-int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+-{
+- struct mm_struct * old_mm;
+- int retval = 0;
+-
+- mutex_init(&mm->context.lock);
+- mm->context.size = 0;
+- old_mm = current->mm;
+- if (old_mm && old_mm->context.size > 0) {
+- mutex_lock(&old_mm->context.lock);
+- retval = copy_ldt(&mm->context, &old_mm->context);
+- mutex_unlock(&old_mm->context.lock);
+- }
+- return retval;
+-}
+-
+-/*
+- * No need to lock the MM as we are the last user
+- */
+-void destroy_context(struct mm_struct *mm)
+-{
+- if (mm->context.size) {
+- if (mm == current->active_mm)
+- clear_LDT();
+- if (mm->context.size*LDT_ENTRY_SIZE > PAGE_SIZE)
+- vfree(mm->context.ldt);
+- else
+- kfree(mm->context.ldt);
+- mm->context.size = 0;
+- }
+-}
+-
+-static int read_ldt(void __user * ptr, unsigned long bytecount)
+-{
+- int err;
+- unsigned long size;
+- struct mm_struct * mm = current->mm;
+-
+- if (!mm->context.size)
+- return 0;
+- if (bytecount > LDT_ENTRY_SIZE*LDT_ENTRIES)
+- bytecount = LDT_ENTRY_SIZE*LDT_ENTRIES;
+-
+- mutex_lock(&mm->context.lock);
+- size = mm->context.size*LDT_ENTRY_SIZE;
+- if (size > bytecount)
+- size = bytecount;
+-
+- err = 0;
+- if (copy_to_user(ptr, mm->context.ldt, size))
+- err = -EFAULT;
+- mutex_unlock(&mm->context.lock);
+- if (err < 0)
+- goto error_return;
+- if (size != bytecount) {
+- /* zero-fill the rest */
+- if (clear_user(ptr+size, bytecount-size) != 0) {
+- err = -EFAULT;
+- goto error_return;
+- }
+- }
+- return bytecount;
+-error_return:
+- return err;
+-}
+-
+-static int read_default_ldt(void __user * ptr, unsigned long bytecount)
+-{
+- int err;
+- unsigned long size;
+-
+- err = 0;
+- size = 5*sizeof(struct desc_struct);
+- if (size > bytecount)
+- size = bytecount;
+-
+- err = size;
+- if (clear_user(ptr, size))
+- err = -EFAULT;
+-
+- return err;
+-}
+-
+-static int write_ldt(void __user * ptr, unsigned long bytecount, int oldmode)
+-{
+- struct mm_struct * mm = current->mm;
+- __u32 entry_1, entry_2;
+- int error;
+- struct user_desc ldt_info;
+-
+- error = -EINVAL;
+- if (bytecount != sizeof(ldt_info))
+- goto out;
+- error = -EFAULT;
+- if (copy_from_user(&ldt_info, ptr, sizeof(ldt_info)))
+- goto out;
+-
+- error = -EINVAL;
+- if (ldt_info.entry_number >= LDT_ENTRIES)
+- goto out;
+- if (ldt_info.contents == 3) {
+- if (oldmode)
+- goto out;
+- if (ldt_info.seg_not_present == 0)
+- goto out;
+- }
+-
+- mutex_lock(&mm->context.lock);
+- if (ldt_info.entry_number >= mm->context.size) {
+-		error = alloc_ldt(&current->mm->context, ldt_info.entry_number+1, 1);
+- if (error < 0)
+- goto out_unlock;
+- }
+-
+- /* Allow LDTs to be cleared by the user. */
+- if (ldt_info.base_addr == 0 && ldt_info.limit == 0) {
+- if (oldmode || LDT_empty(&ldt_info)) {
+- entry_1 = 0;
+- entry_2 = 0;
+- goto install;
+- }
+- }
+-
+- entry_1 = LDT_entry_a(&ldt_info);
+- entry_2 = LDT_entry_b(&ldt_info);
+- if (oldmode)
+- entry_2 &= ~(1 << 20);
+-
+- /* Install the new entry ... */
+-install:
+- write_ldt_entry(mm->context.ldt, ldt_info.entry_number, entry_1, entry_2);
+- error = 0;
+-
+-out_unlock:
+- mutex_unlock(&mm->context.lock);
+-out:
+- return error;
+-}
+-
+-asmlinkage int sys_modify_ldt(int func, void __user *ptr, unsigned long bytecount)
+-{
+- int ret = -ENOSYS;
+-
+- switch (func) {
+- case 0:
+- ret = read_ldt(ptr, bytecount);
+- break;
+- case 1:
+- ret = write_ldt(ptr, bytecount, 1);
+- break;
+- case 2:
+- ret = read_default_ldt(ptr, bytecount);
+- break;
+- case 0x11:
+- ret = write_ldt(ptr, bytecount, 0);
+- break;
+- }
+- return ret;
+-}
+diff --git a/arch/x86/kernel/ldt_64.c b/arch/x86/kernel/ldt_64.c
+deleted file mode 100644
+index 60e57ab..0000000
+--- a/arch/x86/kernel/ldt_64.c
++++ /dev/null
+@@ -1,250 +0,0 @@
+-/*
+- * Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
+- * Copyright (C) 1999 Ingo Molnar <mingo at redhat.com>
+- * Copyright (C) 2002 Andi Kleen
+- *
+- * This handles calls from both 32bit and 64bit mode.
+- */
+-
+-#include <linux/errno.h>
+-#include <linux/sched.h>
+-#include <linux/string.h>
+-#include <linux/mm.h>
+-#include <linux/smp.h>
+-#include <linux/vmalloc.h>
+-#include <linux/slab.h>
+-
+-#include <asm/uaccess.h>
+-#include <asm/system.h>
+-#include <asm/ldt.h>
+-#include <asm/desc.h>
+-#include <asm/proto.h>
+-
+-#ifdef CONFIG_SMP /* avoids "defined but not used" warning */
+-static void flush_ldt(void *null)
+-{
+- if (current->active_mm)
+-		load_LDT(&current->active_mm->context);
+-}
+-#endif
+-
+-static int alloc_ldt(mm_context_t *pc, unsigned mincount, int reload)
+-{
+- void *oldldt;
+- void *newldt;
+- unsigned oldsize;
+-
+- if (mincount <= (unsigned)pc->size)
+- return 0;
+- oldsize = pc->size;
+- mincount = (mincount+511)&(~511);
+- if (mincount*LDT_ENTRY_SIZE > PAGE_SIZE)
+- newldt = vmalloc(mincount*LDT_ENTRY_SIZE);
+- else
+- newldt = kmalloc(mincount*LDT_ENTRY_SIZE, GFP_KERNEL);
+-
+- if (!newldt)
+- return -ENOMEM;
+-
+- if (oldsize)
+- memcpy(newldt, pc->ldt, oldsize*LDT_ENTRY_SIZE);
+- oldldt = pc->ldt;
+- memset(newldt+oldsize*LDT_ENTRY_SIZE, 0, (mincount-oldsize)*LDT_ENTRY_SIZE);
+- wmb();
+- pc->ldt = newldt;
+- wmb();
+- pc->size = mincount;
+- wmb();
+- if (reload) {
+-#ifdef CONFIG_SMP
+- cpumask_t mask;
+-
+- preempt_disable();
+- mask = cpumask_of_cpu(smp_processor_id());
+- load_LDT(pc);
+- if (!cpus_equal(current->mm->cpu_vm_mask, mask))
+- smp_call_function(flush_ldt, NULL, 1, 1);
+- preempt_enable();
+-#else
+- load_LDT(pc);
+-#endif
+- }
+- if (oldsize) {
+- if (oldsize*LDT_ENTRY_SIZE > PAGE_SIZE)
+- vfree(oldldt);
+- else
+- kfree(oldldt);
+- }
+- return 0;
+-}
+-
+-static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
+-{
+- int err = alloc_ldt(new, old->size, 0);
+- if (err < 0)
+- return err;
+- memcpy(new->ldt, old->ldt, old->size*LDT_ENTRY_SIZE);
+- return 0;
+-}
+-
+-/*
+- * we do not have to muck with descriptors here, that is
+- * done in switch_mm() as needed.
+- */
+-int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+-{
+- struct mm_struct * old_mm;
+- int retval = 0;
+-
+- mutex_init(&mm->context.lock);
+- mm->context.size = 0;
+- old_mm = current->mm;
+- if (old_mm && old_mm->context.size > 0) {
+- mutex_lock(&old_mm->context.lock);
+- retval = copy_ldt(&mm->context, &old_mm->context);
+- mutex_unlock(&old_mm->context.lock);
+- }
+- return retval;
+-}
+-
+-/*
+- *
+- * Don't touch the LDT register - we're already in the next thread.
+- */
+-void destroy_context(struct mm_struct *mm)
+-{
+- if (mm->context.size) {
+- if ((unsigned)mm->context.size*LDT_ENTRY_SIZE > PAGE_SIZE)
+- vfree(mm->context.ldt);
+- else
+- kfree(mm->context.ldt);
+- mm->context.size = 0;
+- }
+-}
+-
+-static int read_ldt(void __user * ptr, unsigned long bytecount)
+-{
+- int err;
+- unsigned long size;
+- struct mm_struct * mm = current->mm;
+-
+- if (!mm->context.size)
+- return 0;
+- if (bytecount > LDT_ENTRY_SIZE*LDT_ENTRIES)
+- bytecount = LDT_ENTRY_SIZE*LDT_ENTRIES;
+-
+- mutex_lock(&mm->context.lock);
+- size = mm->context.size*LDT_ENTRY_SIZE;
+- if (size > bytecount)
+- size = bytecount;
+-
+- err = 0;
+- if (copy_to_user(ptr, mm->context.ldt, size))
+- err = -EFAULT;
+- mutex_unlock(&mm->context.lock);
+- if (err < 0)
+- goto error_return;
+- if (size != bytecount) {
+- /* zero-fill the rest */
+- if (clear_user(ptr+size, bytecount-size) != 0) {
+- err = -EFAULT;
+- goto error_return;
+- }
+- }
+- return bytecount;
+-error_return:
+- return err;
+-}
+-
+-static int read_default_ldt(void __user * ptr, unsigned long bytecount)
+-{
+- /* Arbitrary number */
+- /* x86-64 default LDT is all zeros */
+- if (bytecount > 128)
+- bytecount = 128;
+- if (clear_user(ptr, bytecount))
+- return -EFAULT;
+- return bytecount;
+-}
+-
+-static int write_ldt(void __user * ptr, unsigned long bytecount, int oldmode)
+-{
+- struct task_struct *me = current;
+- struct mm_struct * mm = me->mm;
+- __u32 entry_1, entry_2, *lp;
+- int error;
+- struct user_desc ldt_info;
+-
+- error = -EINVAL;
+-
+- if (bytecount != sizeof(ldt_info))
+- goto out;
+- error = -EFAULT;
+- if (copy_from_user(&ldt_info, ptr, bytecount))
+- goto out;
+-
+- error = -EINVAL;
+- if (ldt_info.entry_number >= LDT_ENTRIES)
+- goto out;
+- if (ldt_info.contents == 3) {
+- if (oldmode)
+- goto out;
+- if (ldt_info.seg_not_present == 0)
+- goto out;
+- }
+-
+- mutex_lock(&mm->context.lock);
+- if (ldt_info.entry_number >= (unsigned)mm->context.size) {
+-		error = alloc_ldt(&current->mm->context, ldt_info.entry_number+1, 1);
+- if (error < 0)
+- goto out_unlock;
+- }
+-
+- lp = (__u32 *) ((ldt_info.entry_number << 3) + (char *) mm->context.ldt);
+-
+- /* Allow LDTs to be cleared by the user. */
+- if (ldt_info.base_addr == 0 && ldt_info.limit == 0) {
+- if (oldmode || LDT_empty(&ldt_info)) {
+- entry_1 = 0;
+- entry_2 = 0;
+- goto install;
+- }
+- }
+-
+- entry_1 = LDT_entry_a(&ldt_info);
+- entry_2 = LDT_entry_b(&ldt_info);
+- if (oldmode)
+- entry_2 &= ~(1 << 20);
+-
+- /* Install the new entry ... */
+-install:
+- *lp = entry_1;
+- *(lp+1) = entry_2;
+- error = 0;
+-
+-out_unlock:
+- mutex_unlock(&mm->context.lock);
+-out:
+- return error;
+-}
+-
+-asmlinkage int sys_modify_ldt(int func, void __user *ptr, unsigned long bytecount)
+-{
+- int ret = -ENOSYS;
+-
+- switch (func) {
+- case 0:
+- ret = read_ldt(ptr, bytecount);
+- break;
+- case 1:
+- ret = write_ldt(ptr, bytecount, 1);
+- break;
+- case 2:
+- ret = read_default_ldt(ptr, bytecount);
+- break;
+- case 0x11:
+- ret = write_ldt(ptr, bytecount, 0);
+- break;
+- }
+- return ret;
+-}
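(Aside, not part of the patch: the removed `alloc_ldt()` above grows the LDT in chunks by rounding the requested entry count up with `mincount = (mincount+511)&(~511)`. A minimal standalone sketch of that rounding, with a hypothetical helper name, assuming the same 512-entry granularity:)

```python
def round_up_ldt_entries(mincount, align=512):
    # Mirrors (mincount + 511) & ~511 from the removed alloc_ldt():
    # round mincount up to the next multiple of align (a power of two).
    return (mincount + align - 1) & ~(align - 1)

# With LDT_ENTRY_SIZE == 8, one 512-entry chunk is 4096 bytes, which is
# why alloc_ldt() switches from kmalloc() to vmalloc() past PAGE_SIZE.
```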
+diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c
+index 11b935f..c1cfd60 100644
+--- a/arch/x86/kernel/machine_kexec_32.c
++++ b/arch/x86/kernel/machine_kexec_32.c
+@@ -32,7 +32,7 @@ static u32 kexec_pte1[1024] PAGE_ALIGNED;
+
+ static void set_idt(void *newidt, __u16 limit)
+ {
+- struct Xgt_desc_struct curidt;
++ struct desc_ptr curidt;
+
+ /* ia32 supports unaliged loads & stores */
+ curidt.size = limit;
+@@ -44,7 +44,7 @@ static void set_idt(void *newidt, __u16 limit)
+
+ static void set_gdt(void *newgdt, __u16 limit)
+ {
+- struct Xgt_desc_struct curgdt;
++ struct desc_ptr curgdt;
+
+ /* ia32 supports unaligned loads & stores */
+ curgdt.size = limit;
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index aa3d2c8..a1fef42 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -234,10 +234,5 @@ NORET_TYPE void machine_kexec(struct kimage *image)
+ void arch_crash_save_vmcoreinfo(void)
+ {
+ VMCOREINFO_SYMBOL(init_level4_pgt);
+-
+-#ifdef CONFIG_ARCH_DISCONTIGMEM_ENABLE
+- VMCOREINFO_SYMBOL(node_data);
+- VMCOREINFO_LENGTH(node_data, MAX_NUMNODES);
+-#endif
+ }
+
+diff --git a/arch/x86/kernel/mfgpt_32.c b/arch/x86/kernel/mfgpt_32.c
+index 3960ab7..219f86e 100644
+--- a/arch/x86/kernel/mfgpt_32.c
++++ b/arch/x86/kernel/mfgpt_32.c
+@@ -63,6 +63,21 @@ static int __init mfgpt_disable(char *s)
+ }
+ __setup("nomfgpt", mfgpt_disable);
+
++/* Reset the MFGPT timers. This is required by some broken BIOSes which already
++ * do the same and leave the system in an unstable state. TinyBIOS 0.98 is
++ * affected at least (0.99 is OK with MFGPT workaround left to off).
++ */
++static int __init mfgpt_fix(char *s)
+{
-+ BUG_ON(mask > PAGE_SIZE);
++ u32 val, dummy;
+
-+ if (mask > q->dma_alignment)
-+ q->dma_alignment = mask;
++ /* The following udocumented bit resets the MFGPT timers */
++ val = 0xFF; dummy = 0;
++ wrmsr(0x5140002B, val, dummy);
++ return 1;
+}
++__setup("mfgptfix", mfgpt_fix);
+
-+EXPORT_SYMBOL(blk_queue_update_dma_alignment);
+ /*
+ * Check whether any MFGPTs are available for the kernel to use. In most
+ * cases, firmware that uses AMD's VSA code will claim all timers during
+diff --git a/arch/x86/kernel/microcode.c b/arch/x86/kernel/microcode.c
+index 09c3152..6ff447f 100644
+--- a/arch/x86/kernel/microcode.c
++++ b/arch/x86/kernel/microcode.c
+@@ -244,8 +244,8 @@ static int microcode_sanity_check(void *mc)
+ return 0;
+ /* check extended signature checksum */
+ for (i = 0; i < ext_sigcount; i++) {
+- ext_sig = (struct extended_signature *)((void *)ext_header
+- + EXT_HEADER_SIZE + EXT_SIGNATURE_SIZE * i);
++ ext_sig = (void *)ext_header + EXT_HEADER_SIZE +
++ EXT_SIGNATURE_SIZE * i;
+ sum = orig_sum
+ - (mc_header->sig + mc_header->pf + mc_header->cksum)
+ + (ext_sig->sig + ext_sig->pf + ext_sig->cksum);
+@@ -279,11 +279,9 @@ static int get_maching_microcode(void *mc, int cpu)
+ if (total_size <= get_datasize(mc_header) + MC_HEADER_SIZE)
+ return 0;
+
+- ext_header = (struct extended_sigtable *)(mc +
+- get_datasize(mc_header) + MC_HEADER_SIZE);
++ ext_header = mc + get_datasize(mc_header) + MC_HEADER_SIZE;
+ ext_sigcount = ext_header->count;
+- ext_sig = (struct extended_signature *)((void *)ext_header
+- + EXT_HEADER_SIZE);
++ ext_sig = (void *)ext_header + EXT_HEADER_SIZE;
+ for (i = 0; i < ext_sigcount; i++) {
+ if (microcode_update_match(cpu, mc_header,
+ ext_sig->sig, ext_sig->pf))
+@@ -436,7 +434,7 @@ static ssize_t microcode_write (struct file *file, const char __user *buf, size_
+ return -EINVAL;
+ }
+
+- lock_cpu_hotplug();
++ get_online_cpus();
+ 	mutex_lock(&microcode_mutex);
+
+ user_buffer = (void __user *) buf;
+@@ -447,7 +445,7 @@ static ssize_t microcode_write (struct file *file, const char __user *buf, size_
+ ret = (ssize_t)len;
+
+ 	mutex_unlock(&microcode_mutex);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+
+ return ret;
+ }
+@@ -539,7 +537,7 @@ static int cpu_request_microcode(int cpu)
+ pr_debug("ucode data file %s load failed\n", name);
+ return error;
+ }
+- buf = (void *)firmware->data;
++ buf = firmware->data;
+ size = firmware->size;
+ while ((offset = get_next_ucode_from_buffer(&mc, buf, size, offset))
+ > 0) {
+@@ -658,14 +656,14 @@ static ssize_t reload_store(struct sys_device *dev, const char *buf, size_t sz)
+
+ old = current->cpus_allowed;
+
+- lock_cpu_hotplug();
++ get_online_cpus();
+ set_cpus_allowed(current, cpumask_of_cpu(cpu));
+
+ 	mutex_lock(&microcode_mutex);
+ if (uci->valid)
+ err = cpu_request_microcode(cpu);
+ 	mutex_unlock(&microcode_mutex);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+ set_cpus_allowed(current, old);
+ }
+ if (err)
+@@ -817,9 +815,9 @@ static int __init microcode_init (void)
+ return PTR_ERR(microcode_pdev);
+ }
+
+- lock_cpu_hotplug();
++ get_online_cpus();
+ error = sysdev_driver_register(&cpu_sysdev_class, &mc_sysdev_driver);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+ if (error) {
+ microcode_dev_exit();
+ platform_device_unregister(microcode_pdev);
+@@ -839,9 +837,9 @@ static void __exit microcode_exit (void)
+
+ unregister_hotcpu_notifier(&mc_cpu_notifier);
+
+- lock_cpu_hotplug();
++ get_online_cpus();
+ sysdev_driver_unregister(&cpu_sysdev_class, &mc_sysdev_driver);
+- unlock_cpu_hotplug();
++ put_online_cpus();
+
+ platform_device_unregister(microcode_pdev);
+ }
+diff --git a/arch/x86/kernel/mpparse_32.c b/arch/x86/kernel/mpparse_32.c
+index 7a05a7f..67009cd 100644
+--- a/arch/x86/kernel/mpparse_32.c
++++ b/arch/x86/kernel/mpparse_32.c
+@@ -68,7 +68,7 @@ unsigned int def_to_bigsmp = 0;
+ /* Processor that is doing the boot up */
+ unsigned int boot_cpu_physical_apicid = -1U;
+ /* Internal processor count */
+-unsigned int __cpuinitdata num_processors;
++unsigned int num_processors;
+
+ /* Bitmask of physically existing CPUs */
+ physid_mask_t phys_cpu_present_map;
+@@ -258,7 +258,7 @@ static void __init MP_ioapic_info (struct mpc_config_ioapic *m)
+ if (!(m->mpc_flags & MPC_APIC_USABLE))
+ return;
+
+- printk(KERN_INFO "I/O APIC #%d Version %d at 0x%lX.\n",
++ printk(KERN_INFO "I/O APIC #%d Version %d at 0x%X.\n",
+ m->mpc_apicid, m->mpc_apicver, m->mpc_apicaddr);
+ if (nr_ioapics >= MAX_IO_APICS) {
+ printk(KERN_CRIT "Max # of I/O APICs (%d) exceeded (found %d).\n",
+@@ -405,9 +405,9 @@ static int __init smp_read_mpc(struct mp_config_table *mpc)
+
+ mps_oem_check(mpc, oem, str);
+
+- printk("APIC at: 0x%lX\n",mpc->mpc_lapic);
++ printk("APIC at: 0x%X\n", mpc->mpc_lapic);
+
+- /*
++ /*
+ * Save the local APIC address (it might be non-default) -- but only
+ * if we're not using ACPI.
+ */
+@@ -721,7 +721,7 @@ static int __init smp_scan_config (unsigned long base, unsigned long length)
+ unsigned long *bp = phys_to_virt(base);
+ struct intel_mp_floating *mpf;
+
+- Dprintk("Scan SMP from %p for %ld bytes.\n", bp,length);
++ printk(KERN_INFO "Scan SMP from %p for %ld bytes.\n", bp,length);
+ if (sizeof(*mpf) != 16)
+ printk("Error: MPF size\n");
+
+@@ -734,8 +734,8 @@ static int __init smp_scan_config (unsigned long base, unsigned long length)
+ || (mpf->mpf_specification == 4)) ) {
+
+ smp_found_config = 1;
+- printk(KERN_INFO "found SMP MP-table at %08lx\n",
+- virt_to_phys(mpf));
++ printk(KERN_INFO "found SMP MP-table at [%p] %08lx\n",
++ mpf, virt_to_phys(mpf));
+ reserve_bootmem(virt_to_phys(mpf), PAGE_SIZE);
+ if (mpf->mpf_physptr) {
+ /*
+@@ -918,14 +918,14 @@ void __init mp_register_ioapic(u8 id, u32 address, u32 gsi_base)
+ */
+ mp_ioapic_routing[idx].apic_id = mp_ioapics[idx].mpc_apicid;
+ mp_ioapic_routing[idx].gsi_base = gsi_base;
+- mp_ioapic_routing[idx].gsi_end = gsi_base +
++ mp_ioapic_routing[idx].gsi_end = gsi_base +
+ io_apic_get_redir_entries(idx);
+
+- printk("IOAPIC[%d]: apic_id %d, version %d, address 0x%lx, "
+- "GSI %d-%d\n", idx, mp_ioapics[idx].mpc_apicid,
+- mp_ioapics[idx].mpc_apicver, mp_ioapics[idx].mpc_apicaddr,
+- mp_ioapic_routing[idx].gsi_base,
+- mp_ioapic_routing[idx].gsi_end);
++ printk("IOAPIC[%d]: apic_id %d, version %d, address 0x%x, "
++ "GSI %d-%d\n", idx, mp_ioapics[idx].mpc_apicid,
++ mp_ioapics[idx].mpc_apicver, mp_ioapics[idx].mpc_apicaddr,
++ mp_ioapic_routing[idx].gsi_base,
++ mp_ioapic_routing[idx].gsi_end);
+ }
+
+ void __init
+@@ -1041,15 +1041,16 @@ void __init mp_config_acpi_legacy_irqs (void)
+ }
+
+ #define MAX_GSI_NUM 4096
++#define IRQ_COMPRESSION_START 64
+
+ int mp_register_gsi(u32 gsi, int triggering, int polarity)
+ {
+ int ioapic = -1;
+ int ioapic_pin = 0;
+ int idx, bit = 0;
+- static int pci_irq = 16;
++ static int pci_irq = IRQ_COMPRESSION_START;
+ /*
+- * Mapping between Global System Interrups, which
++ * Mapping between Global System Interrupts, which
+ * represent all possible interrupts, and IRQs
+ * assigned to actual devices.
+ */
+@@ -1086,12 +1087,16 @@ int mp_register_gsi(u32 gsi, int triggering, int polarity)
+ if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) {
+ Dprintk(KERN_DEBUG "Pin %d-%d already programmed\n",
+ mp_ioapic_routing[ioapic].apic_id, ioapic_pin);
+- return gsi_to_irq[gsi];
++ return (gsi < IRQ_COMPRESSION_START ? gsi : gsi_to_irq[gsi]);
+ }
+
+ mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
+
+- if (triggering == ACPI_LEVEL_SENSITIVE) {
++ /*
++ * For GSI >= 64, use IRQ compression
++ */
++ if ((gsi >= IRQ_COMPRESSION_START)
++ && (triggering == ACPI_LEVEL_SENSITIVE)) {
+ /*
+ * For PCI devices assign IRQs in order, avoiding gaps
+ * due to unused I/O APIC pins.
+diff --git a/arch/x86/kernel/mpparse_64.c b/arch/x86/kernel/mpparse_64.c
+index ef4aab1..72ab140 100644
+--- a/arch/x86/kernel/mpparse_64.c
++++ b/arch/x86/kernel/mpparse_64.c
+@@ -60,14 +60,18 @@ unsigned int boot_cpu_id = -1U;
+ EXPORT_SYMBOL(boot_cpu_id);
+
+ /* Internal processor count */
+-unsigned int num_processors __cpuinitdata = 0;
++unsigned int num_processors;
+
+ unsigned disabled_cpus __cpuinitdata;
+
+ /* Bitmask of physically existing CPUs */
+ physid_mask_t phys_cpu_present_map = PHYSID_MASK_NONE;
+
+-u8 bios_cpu_apicid[NR_CPUS] = { [0 ... NR_CPUS-1] = BAD_APICID };
++u16 x86_bios_cpu_apicid_init[NR_CPUS] __initdata
++ = { [0 ... NR_CPUS-1] = BAD_APICID };
++void *x86_bios_cpu_apicid_early_ptr;
++DEFINE_PER_CPU(u16, x86_bios_cpu_apicid) = BAD_APICID;
++EXPORT_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
+
+
+ /*
+@@ -118,24 +122,22 @@ static void __cpuinit MP_processor_info(struct mpc_config_processor *m)
+ physid_set(m->mpc_apicid, phys_cpu_present_map);
+ if (m->mpc_cpuflag & CPU_BOOTPROCESSOR) {
+ /*
+- * bios_cpu_apicid is required to have processors listed
++ * x86_bios_cpu_apicid is required to have processors listed
+ * in same order as logical cpu numbers. Hence the first
+ * entry is BSP, and so on.
+ */
+ cpu = 0;
+ }
+- bios_cpu_apicid[cpu] = m->mpc_apicid;
+- /*
+- * We get called early in the the start_kernel initialization
+- * process when the per_cpu data area is not yet setup, so we
+- * use a static array that is removed after the per_cpu data
+- * area is created.
+- */
+- if (x86_cpu_to_apicid_ptr) {
+- u8 *x86_cpu_to_apicid = (u8 *)x86_cpu_to_apicid_ptr;
+- x86_cpu_to_apicid[cpu] = m->mpc_apicid;
++ /* are we being called early in kernel startup? */
++ if (x86_cpu_to_apicid_early_ptr) {
++ u16 *cpu_to_apicid = x86_cpu_to_apicid_early_ptr;
++ u16 *bios_cpu_apicid = x86_bios_cpu_apicid_early_ptr;
+
-+int __init blk_settings_init(void)
++ cpu_to_apicid[cpu] = m->mpc_apicid;
++ bios_cpu_apicid[cpu] = m->mpc_apicid;
+ } else {
+ per_cpu(x86_cpu_to_apicid, cpu) = m->mpc_apicid;
++ per_cpu(x86_bios_cpu_apicid, cpu) = m->mpc_apicid;
+ }
+
+ cpu_set(cpu, cpu_possible_map);
+diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
+index ee6eba4..21f6e3c 100644
+--- a/arch/x86/kernel/msr.c
++++ b/arch/x86/kernel/msr.c
+@@ -155,15 +155,15 @@ static int __cpuinit msr_class_cpu_callback(struct notifier_block *nfb,
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+- case CPU_UP_PREPARE_FROZEN:
+ err = msr_device_create(cpu);
+ break;
+ case CPU_UP_CANCELED:
+- case CPU_UP_CANCELED_FROZEN:
+ case CPU_DEAD:
+- case CPU_DEAD_FROZEN:
+ msr_device_destroy(cpu);
+ break;
++ case CPU_UP_CANCELED_FROZEN:
++ destroy_suspended_device(msr_class, MKDEV(MSR_MAJOR, cpu));
++ break;
+ }
+ return err ? NOTIFY_BAD : NOTIFY_OK;
+ }
+diff --git a/arch/x86/kernel/nmi_32.c b/arch/x86/kernel/nmi_32.c
+index 852db29..edd4136 100644
+--- a/arch/x86/kernel/nmi_32.c
++++ b/arch/x86/kernel/nmi_32.c
+@@ -51,13 +51,13 @@ static int unknown_nmi_panic_callback(struct pt_regs *regs, int cpu);
+
+ static int endflag __initdata = 0;
+
++#ifdef CONFIG_SMP
+ /* The performance counters used by NMI_LOCAL_APIC don't trigger when
+ * the CPU is idle. To make sure the NMI watchdog really ticks on all
+ * CPUs during the test make them busy.
+ */
+ static __init void nmi_cpu_busy(void *data)
+ {
+-#ifdef CONFIG_SMP
+ local_irq_enable_in_hardirq();
+ /* Intentionally don't use cpu_relax here. This is
+ to make sure that the performance counter really ticks,
+@@ -67,8 +67,8 @@ static __init void nmi_cpu_busy(void *data)
+ care if they get somewhat less cycles. */
+ while (endflag == 0)
+ mb();
+-#endif
+ }
++#endif
+
+ static int __init check_nmi_watchdog(void)
+ {
+@@ -87,11 +87,13 @@ static int __init check_nmi_watchdog(void)
+
+ printk(KERN_INFO "Testing NMI watchdog ... ");
+
++#ifdef CONFIG_SMP
+ if (nmi_watchdog == NMI_LOCAL_APIC)
+ smp_call_function(nmi_cpu_busy, (void *)&endflag, 0, 0);
++#endif
+
+ for_each_possible_cpu(cpu)
+- prev_nmi_count[cpu] = per_cpu(irq_stat, cpu).__nmi_count;
++ prev_nmi_count[cpu] = nmi_count(cpu);
+ local_irq_enable();
+ mdelay((20*1000)/nmi_hz); // wait 20 ticks
+
+@@ -176,7 +178,7 @@ static int lapic_nmi_resume(struct sys_device *dev)
+
+
+ static struct sysdev_class nmi_sysclass = {
+- set_kset_name("lapic_nmi"),
++ .name = "lapic_nmi",
+ .resume = lapic_nmi_resume,
+ .suspend = lapic_nmi_suspend,
+ };
+@@ -237,10 +239,10 @@ void acpi_nmi_disable(void)
+ on_each_cpu(__acpi_nmi_disable, NULL, 0, 1);
+ }
+
+-void setup_apic_nmi_watchdog (void *unused)
++void setup_apic_nmi_watchdog(void *unused)
+ {
+ if (__get_cpu_var(wd_enabled))
+- return;
++ return;
+
+ /* cheap hack to support suspend/resume */
+ /* if cpu0 is not active neither should the other cpus */
+@@ -329,7 +331,7 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
+ unsigned int sum;
+ int touched = 0;
+ int cpu = smp_processor_id();
+- int rc=0;
++ int rc = 0;
+
+ /* check for other users first */
+ if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT)
+diff --git a/arch/x86/kernel/nmi_64.c b/arch/x86/kernel/nmi_64.c
+index 4253c4e..fb99484 100644
+--- a/arch/x86/kernel/nmi_64.c
++++ b/arch/x86/kernel/nmi_64.c
+@@ -39,7 +39,7 @@ static cpumask_t backtrace_mask = CPU_MASK_NONE;
+ * 0: the lapic NMI watchdog is disabled, but can be enabled
+ */
+ atomic_t nmi_active = ATOMIC_INIT(0); /* oprofile uses this */
+-int panic_on_timeout;
++static int panic_on_timeout;
+
+ unsigned int nmi_watchdog = NMI_DEFAULT;
+ static unsigned int nmi_hz = HZ;
+@@ -78,22 +78,22 @@ static __init void nmi_cpu_busy(void *data)
+ }
+ #endif
+
+-int __init check_nmi_watchdog (void)
++int __init check_nmi_watchdog(void)
+ {
+- int *counts;
++ int *prev_nmi_count;
+ int cpu;
+
+- if ((nmi_watchdog == NMI_NONE) || (nmi_watchdog == NMI_DISABLED))
++ if ((nmi_watchdog == NMI_NONE) || (nmi_watchdog == NMI_DISABLED))
+ return 0;
+
+ if (!atomic_read(&nmi_active))
+ return 0;
+
+- counts = kmalloc(NR_CPUS * sizeof(int), GFP_KERNEL);
+- if (!counts)
++ prev_nmi_count = kmalloc(NR_CPUS * sizeof(int), GFP_KERNEL);
++ if (!prev_nmi_count)
+ return -1;
+
+- printk(KERN_INFO "testing NMI watchdog ... ");
++ printk(KERN_INFO "Testing NMI watchdog ... ");
+
+ #ifdef CONFIG_SMP
+ if (nmi_watchdog == NMI_LOCAL_APIC)
+@@ -101,30 +101,29 @@ int __init check_nmi_watchdog (void)
+ #endif
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++)
+- counts[cpu] = cpu_pda(cpu)->__nmi_count;
++ prev_nmi_count[cpu] = cpu_pda(cpu)->__nmi_count;
+ local_irq_enable();
+ mdelay((20*1000)/nmi_hz); // wait 20 ticks
+
+ for_each_online_cpu(cpu) {
+ if (!per_cpu(wd_enabled, cpu))
+ continue;
+- if (cpu_pda(cpu)->__nmi_count - counts[cpu] <= 5) {
++ if (cpu_pda(cpu)->__nmi_count - prev_nmi_count[cpu] <= 5) {
+ printk(KERN_WARNING "WARNING: CPU#%d: NMI "
+ "appears to be stuck (%d->%d)!\n",
+- cpu,
+- counts[cpu],
+- cpu_pda(cpu)->__nmi_count);
++ cpu,
++ prev_nmi_count[cpu],
++ cpu_pda(cpu)->__nmi_count);
+ per_cpu(wd_enabled, cpu) = 0;
+ atomic_dec(&nmi_active);
+ }
+ }
++ endflag = 1;
+ if (!atomic_read(&nmi_active)) {
+- kfree(counts);
++ kfree(prev_nmi_count);
+ atomic_set(&nmi_active, -1);
+- endflag = 1;
+ return -1;
+ }
+- endflag = 1;
+ printk("OK.\n");
+
+ /* now that we know it works we can reduce NMI frequency to
+@@ -132,11 +131,11 @@ int __init check_nmi_watchdog (void)
+ if (nmi_watchdog == NMI_LOCAL_APIC)
+ nmi_hz = lapic_adjust_nmi_hz(1);
+
+- kfree(counts);
++ kfree(prev_nmi_count);
+ return 0;
+ }
+
+-int __init setup_nmi_watchdog(char *str)
++static int __init setup_nmi_watchdog(char *str)
+ {
+ int nmi;
+
+@@ -159,34 +158,6 @@ int __init setup_nmi_watchdog(char *str)
+
+ __setup("nmi_watchdog=", setup_nmi_watchdog);
+
+-
+-static void __acpi_nmi_disable(void *__unused)
+-{
+- apic_write(APIC_LVT0, APIC_DM_NMI | APIC_LVT_MASKED);
+-}
+-
+-/*
+- * Disable timer based NMIs on all CPUs:
+- */
+-void acpi_nmi_disable(void)
+-{
+- if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
+- on_each_cpu(__acpi_nmi_disable, NULL, 0, 1);
+-}
+-
+-static void __acpi_nmi_enable(void *__unused)
+-{
+- apic_write(APIC_LVT0, APIC_DM_NMI);
+-}
+-
+-/*
+- * Enable timer based NMIs on all CPUs:
+- */
+-void acpi_nmi_enable(void)
+-{
+- if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
+- on_each_cpu(__acpi_nmi_enable, NULL, 0, 1);
+-}
+ #ifdef CONFIG_PM
+
+ static int nmi_pm_active; /* nmi_active before suspend */
+@@ -211,13 +182,13 @@ static int lapic_nmi_resume(struct sys_device *dev)
+ }
+
+ static struct sysdev_class nmi_sysclass = {
+- set_kset_name("lapic_nmi"),
++ .name = "lapic_nmi",
+ .resume = lapic_nmi_resume,
+ .suspend = lapic_nmi_suspend,
+ };
+
+ static struct sys_device device_lapic_nmi = {
+- .id = 0,
++ .id = 0,
+ .cls = &nmi_sysclass,
+ };
+
+@@ -231,7 +202,7 @@ static int __init init_lapic_nmi_sysfs(void)
+ if (nmi_watchdog != NMI_LOCAL_APIC)
+ return 0;
+
+- if ( atomic_read(&nmi_active) < 0 )
++ if (atomic_read(&nmi_active) < 0)
+ return 0;
+
+ error = sysdev_class_register(&nmi_sysclass);
+@@ -244,9 +215,37 @@ late_initcall(init_lapic_nmi_sysfs);
+
+ #endif /* CONFIG_PM */
+
++static void __acpi_nmi_enable(void *__unused)
+{
-+ blk_max_low_pfn = max_low_pfn - 1;
-+ blk_max_pfn = max_pfn - 1;
-+ return 0;
++ apic_write(APIC_LVT0, APIC_DM_NMI);
+}
-+subsys_initcall(blk_settings_init);
-diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
-new file mode 100644
-index 0000000..bc28776
---- /dev/null
-+++ b/block/blk-sysfs.c
-@@ -0,0 +1,309 @@
++
+/*
-+ * Functions related to sysfs handling
++ * Enable timer based NMIs on all CPUs:
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/blktrace_api.h>
-+
-+#include "blk.h"
-+
-+struct queue_sysfs_entry {
-+ struct attribute attr;
-+ ssize_t (*show)(struct request_queue *, char *);
-+ ssize_t (*store)(struct request_queue *, const char *, size_t);
-+};
-+
-+static ssize_t
-+queue_var_show(unsigned int var, char *page)
++void acpi_nmi_enable(void)
+{
-+ return sprintf(page, "%d\n", var);
++ if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
++ on_each_cpu(__acpi_nmi_enable, NULL, 0, 1);
+}
+
-+static ssize_t
-+queue_var_store(unsigned long *var, const char *page, size_t count)
++static void __acpi_nmi_disable(void *__unused)
+{
-+ char *p = (char *) page;
-+
-+ *var = simple_strtoul(p, &p, 10);
-+ return count;
++ apic_write(APIC_LVT0, APIC_DM_NMI | APIC_LVT_MASKED);
+}
+
-+static ssize_t queue_requests_show(struct request_queue *q, char *page)
++/*
++ * Disable timer based NMIs on all CPUs:
++ */
++void acpi_nmi_disable(void)
+{
-+ return queue_var_show(q->nr_requests, (page));
++ if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
++ on_each_cpu(__acpi_nmi_disable, NULL, 0, 1);
+}
+
-+static ssize_t
-+queue_requests_store(struct request_queue *q, const char *page, size_t count)
-+{
-+ struct request_list *rl = &q->rq;
-+ unsigned long nr;
-+ int ret = queue_var_store(&nr, page, count);
-+ if (nr < BLKDEV_MIN_RQ)
-+ nr = BLKDEV_MIN_RQ;
-+
-+ spin_lock_irq(q->queue_lock);
-+ q->nr_requests = nr;
-+ blk_queue_congestion_threshold(q);
+ void setup_apic_nmi_watchdog(void *unused)
+ {
+- if (__get_cpu_var(wd_enabled) == 1)
++ if (__get_cpu_var(wd_enabled))
+ return;
+
+ /* cheap hack to support suspend/resume */
+@@ -311,8 +310,9 @@ void touch_nmi_watchdog(void)
+ }
+ }
+
+- touch_softlockup_watchdog();
++ touch_softlockup_watchdog();
+ }
++EXPORT_SYMBOL(touch_nmi_watchdog);
+
+ int __kprobes nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
+ {
+@@ -479,4 +479,3 @@ void __trigger_all_cpu_backtrace(void)
+
+ EXPORT_SYMBOL(nmi_active);
+ EXPORT_SYMBOL(nmi_watchdog);
+-EXPORT_SYMBOL(touch_nmi_watchdog);
+diff --git a/arch/x86/kernel/numaq_32.c b/arch/x86/kernel/numaq_32.c
+index 9000d82..e65281b 100644
+--- a/arch/x86/kernel/numaq_32.c
++++ b/arch/x86/kernel/numaq_32.c
+@@ -82,7 +82,7 @@ static int __init numaq_tsc_disable(void)
+ {
+ if (num_online_nodes() > 1) {
+ printk(KERN_DEBUG "NUMAQ: disabling TSC\n");
+- tsc_disable = 1;
++ setup_clear_cpu_cap(X86_FEATURE_TSC);
+ }
+ return 0;
+ }
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+new file mode 100644
+index 0000000..075962c
+--- /dev/null
++++ b/arch/x86/kernel/paravirt.c
+@@ -0,0 +1,440 @@
++/* Paravirtualization interfaces
++ Copyright (C) 2006 Rusty Russell IBM Corporation
+
-+ if (rl->count[READ] >= queue_congestion_on_threshold(q))
-+ blk_set_queue_congested(q, READ);
-+ else if (rl->count[READ] < queue_congestion_off_threshold(q))
-+ blk_clear_queue_congested(q, READ);
++ This program is free software; you can redistribute it and/or modify
++ it under the terms of the GNU General Public License as published by
++ the Free Software Foundation; either version 2 of the License, or
++ (at your option) any later version.
+
-+ if (rl->count[WRITE] >= queue_congestion_on_threshold(q))
-+ blk_set_queue_congested(q, WRITE);
-+ else if (rl->count[WRITE] < queue_congestion_off_threshold(q))
-+ blk_clear_queue_congested(q, WRITE);
++ This program is distributed in the hope that it will be useful,
++ but WITHOUT ANY WARRANTY; without even the implied warranty of
++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ GNU General Public License for more details.
+
-+ if (rl->count[READ] >= q->nr_requests) {
-+ blk_set_queue_full(q, READ);
-+ } else if (rl->count[READ]+1 <= q->nr_requests) {
-+ blk_clear_queue_full(q, READ);
-+ wake_up(&rl->wait[READ]);
-+ }
++ You should have received a copy of the GNU General Public License
++ along with this program; if not, write to the Free Software
++ Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+
-+ if (rl->count[WRITE] >= q->nr_requests) {
-+ blk_set_queue_full(q, WRITE);
-+ } else if (rl->count[WRITE]+1 <= q->nr_requests) {
-+ blk_clear_queue_full(q, WRITE);
-+ wake_up(&rl->wait[WRITE]);
-+ }
-+ spin_unlock_irq(q->queue_lock);
-+ return ret;
-+}
++ 2007 - x86_64 support added by Glauber de Oliveira Costa, Red Hat Inc
++*/
+
-+static ssize_t queue_ra_show(struct request_queue *q, char *page)
-+{
-+ int ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10);
++#include <linux/errno.h>
++#include <linux/module.h>
++#include <linux/efi.h>
++#include <linux/bcd.h>
++#include <linux/highmem.h>
+
-+ return queue_var_show(ra_kb, (page));
-+}
++#include <asm/bug.h>
++#include <asm/paravirt.h>
++#include <asm/desc.h>
++#include <asm/setup.h>
++#include <asm/arch_hooks.h>
++#include <asm/time.h>
++#include <asm/irq.h>
++#include <asm/delay.h>
++#include <asm/fixmap.h>
++#include <asm/apic.h>
++#include <asm/tlbflush.h>
++#include <asm/timer.h>
+
-+static ssize_t
-+queue_ra_store(struct request_queue *q, const char *page, size_t count)
++/* nop stub */
++void _paravirt_nop(void)
+{
-+ unsigned long ra_kb;
-+ ssize_t ret = queue_var_store(&ra_kb, page, count);
-+
-+ spin_lock_irq(q->queue_lock);
-+ q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
-+ spin_unlock_irq(q->queue_lock);
-+
-+ return ret;
+}
+
-+static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
++static void __init default_banner(void)
+{
-+ int max_sectors_kb = q->max_sectors >> 1;
-+
-+ return queue_var_show(max_sectors_kb, (page));
++ printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
++ pv_info.name);
+}
+
-+static ssize_t queue_hw_sector_size_show(struct request_queue *q, char *page)
++char *memory_setup(void)
+{
-+ return queue_var_show(q->hardsect_size, page);
++ return pv_init_ops.memory_setup();
+}
+
-+static ssize_t
-+queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
-+{
-+ unsigned long max_sectors_kb,
-+ max_hw_sectors_kb = q->max_hw_sectors >> 1,
-+ page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
-+ ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
++/* Simple instruction patching code. */
++#define DEF_NATIVE(ops, name, code) \
++ extern const char start_##ops##_##name[], end_##ops##_##name[]; \
++ asm("start_" #ops "_" #name ": " code "; end_" #ops "_" #name ":")
+
-+ if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
-+ return -EINVAL;
-+ /*
-+ * Take the queue lock to update the readahead and max_sectors
-+ * values synchronously:
-+ */
-+ spin_lock_irq(q->queue_lock);
-+ q->max_sectors = max_sectors_kb << 1;
-+ spin_unlock_irq(q->queue_lock);
++/* Undefined instruction for dealing with missing ops pointers. */
++static const unsigned char ud2a[] = { 0x0f, 0x0b };
+
-+ return ret;
++unsigned paravirt_patch_nop(void)
++{
++ return 0;
+}
+
-+static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
++unsigned paravirt_patch_ignore(unsigned len)
+{
-+ int max_hw_sectors_kb = q->max_hw_sectors >> 1;
-+
-+ return queue_var_show(max_hw_sectors_kb, (page));
++ return len;
+}
+
++struct branch {
++ unsigned char opcode;
++ u32 delta;
++} __attribute__((packed));
+
-+static struct queue_sysfs_entry queue_requests_entry = {
-+ .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
-+ .show = queue_requests_show,
-+ .store = queue_requests_store,
-+};
++unsigned paravirt_patch_call(void *insnbuf,
++ const void *target, u16 tgt_clobbers,
++ unsigned long addr, u16 site_clobbers,
++ unsigned len)
++{
++ struct branch *b = insnbuf;
++ unsigned long delta = (unsigned long)target - (addr+5);
+
-+static struct queue_sysfs_entry queue_ra_entry = {
-+ .attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
-+ .show = queue_ra_show,
-+ .store = queue_ra_store,
-+};
++ if (tgt_clobbers & ~site_clobbers)
++ return len; /* target would clobber too much for this site */
++ if (len < 5)
++ return len; /* call too long for patch site */
+
-+static struct queue_sysfs_entry queue_max_sectors_entry = {
-+ .attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
-+ .show = queue_max_sectors_show,
-+ .store = queue_max_sectors_store,
-+};
++ b->opcode = 0xe8; /* call */
++ b->delta = delta;
++ BUILD_BUG_ON(sizeof(*b) != 5);
+
-+static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
-+ .attr = {.name = "max_hw_sectors_kb", .mode = S_IRUGO },
-+ .show = queue_max_hw_sectors_show,
-+};
++ return 5;
++}
+
-+static struct queue_sysfs_entry queue_iosched_entry = {
-+ .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR },
-+ .show = elv_iosched_show,
-+ .store = elv_iosched_store,
-+};
++unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
++ unsigned long addr, unsigned len)
++{
++ struct branch *b = insnbuf;
++ unsigned long delta = (unsigned long)target - (addr+5);
+
-+static struct queue_sysfs_entry queue_hw_sector_size_entry = {
-+ .attr = {.name = "hw_sector_size", .mode = S_IRUGO },
-+ .show = queue_hw_sector_size_show,
-+};
++ if (len < 5)
++ return len; /* call too long for patch site */
+
-+static struct attribute *default_attrs[] = {
-+ &queue_requests_entry.attr,
-+ &queue_ra_entry.attr,
-+ &queue_max_hw_sectors_entry.attr,
-+ &queue_max_sectors_entry.attr,
-+ &queue_iosched_entry.attr,
-+ &queue_hw_sector_size_entry.attr,
-+ NULL,
-+};
++ b->opcode = 0xe9; /* jmp */
++ b->delta = delta;
+
-+#define to_queue(atr) container_of((atr), struct queue_sysfs_entry, attr)
++ return 5;
++}
+
-+static ssize_t
-+queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
++/* Neat trick to map patch type back to the call within the
++ * corresponding structure. */
++static void *get_call_destination(u8 type)
+{
-+ struct queue_sysfs_entry *entry = to_queue(attr);
-+ struct request_queue *q =
-+ container_of(kobj, struct request_queue, kobj);
-+ ssize_t res;
-+
-+ if (!entry->show)
-+ return -EIO;
-+ mutex_lock(&q->sysfs_lock);
-+ if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
-+ mutex_unlock(&q->sysfs_lock);
-+ return -ENOENT;
-+ }
-+ res = entry->show(q, page);
-+ mutex_unlock(&q->sysfs_lock);
-+ return res;
++ struct paravirt_patch_template tmpl = {
++ .pv_init_ops = pv_init_ops,
++ .pv_time_ops = pv_time_ops,
++ .pv_cpu_ops = pv_cpu_ops,
++ .pv_irq_ops = pv_irq_ops,
++ .pv_apic_ops = pv_apic_ops,
++ .pv_mmu_ops = pv_mmu_ops,
++ };
++ return *((void **)&tmpl + type);
+}
+
-+static ssize_t
-+queue_attr_store(struct kobject *kobj, struct attribute *attr,
-+ const char *page, size_t length)
++unsigned paravirt_patch_default(u8 type, u16 clobbers, void *insnbuf,
++ unsigned long addr, unsigned len)
+{
-+ struct queue_sysfs_entry *entry = to_queue(attr);
-+ struct request_queue *q = container_of(kobj, struct request_queue, kobj);
++ void *opfunc = get_call_destination(type);
++ unsigned ret;
+
-+ ssize_t res;
++ if (opfunc == NULL)
++ /* If there's no function, patch it with a ud2a (BUG) */
++ ret = paravirt_patch_insns(insnbuf, len, ud2a, ud2a+sizeof(ud2a));
++ else if (opfunc == paravirt_nop)
++ /* If the operation is a nop, then nop the callsite */
++ ret = paravirt_patch_nop();
++ else if (type == PARAVIRT_PATCH(pv_cpu_ops.iret) ||
++ type == PARAVIRT_PATCH(pv_cpu_ops.irq_enable_syscall_ret))
++ /* If operation requires a jmp, then jmp */
++ ret = paravirt_patch_jmp(insnbuf, opfunc, addr, len);
++ else
++ /* Otherwise call the function; assume target could
++ clobber any caller-save reg */
++ ret = paravirt_patch_call(insnbuf, opfunc, CLBR_ANY,
++ addr, clobbers, len);
+
-+ if (!entry->store)
-+ return -EIO;
-+ mutex_lock(&q->sysfs_lock);
-+ if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
-+ mutex_unlock(&q->sysfs_lock);
-+ return -ENOENT;
-+ }
-+ res = entry->store(q, page, length);
-+ mutex_unlock(&q->sysfs_lock);
-+ return res;
++ return ret;
+}
+
-+/**
-+ * blk_cleanup_queue: - release a &struct request_queue when it is no longer needed
-+ * @kobj: the kobj belonging of the request queue to be released
-+ *
-+ * Description:
-+ * blk_cleanup_queue is the pair to blk_init_queue() or
-+ * blk_queue_make_request(). It should be called when a request queue is
-+ * being released; typically when a block device is being de-registered.
-+ * Currently, its primary task it to free all the &struct request
-+ * structures that were allocated to the queue and the queue itself.
-+ *
-+ * Caveat:
-+ * Hopefully the low level driver will have finished any
-+ * outstanding requests first...
-+ **/
-+static void blk_release_queue(struct kobject *kobj)
++unsigned paravirt_patch_insns(void *insnbuf, unsigned len,
++ const char *start, const char *end)
+{
-+ struct request_queue *q =
-+ container_of(kobj, struct request_queue, kobj);
-+ struct request_list *rl = &q->rq;
-+
-+ blk_sync_queue(q);
++ unsigned insn_len = end - start;
+
-+ if (rl->rq_pool)
-+ mempool_destroy(rl->rq_pool);
-+
-+ if (q->queue_tags)
-+ __blk_queue_free_tags(q);
-+
-+ blk_trace_shutdown(q);
++ if (insn_len > len || start == NULL)
++ insn_len = len;
++ else
++ memcpy(insnbuf, start, insn_len);
+
-+ bdi_destroy(&q->backing_dev_info);
-+ kmem_cache_free(blk_requestq_cachep, q);
++ return insn_len;
+}
+
-+static struct sysfs_ops queue_sysfs_ops = {
-+ .show = queue_attr_show,
-+ .store = queue_attr_store,
-+};
-+
-+struct kobj_type blk_queue_ktype = {
-+ .sysfs_ops = &queue_sysfs_ops,
-+ .default_attrs = default_attrs,
-+ .release = blk_release_queue,
-+};
-+
-+int blk_register_queue(struct gendisk *disk)
++void init_IRQ(void)
+{
-+ int ret;
-+
-+ struct request_queue *q = disk->queue;
-+
-+ if (!q || !q->request_fn)
-+ return -ENXIO;
-+
-+ ret = kobject_add(&q->kobj, kobject_get(&disk->dev.kobj),
-+ "%s", "queue");
-+ if (ret < 0)
-+ return ret;
-+
-+ kobject_uevent(&q->kobj, KOBJ_ADD);
-+
-+ ret = elv_register_queue(q);
-+ if (ret) {
-+ kobject_uevent(&q->kobj, KOBJ_REMOVE);
-+ kobject_del(&q->kobj);
-+ return ret;
-+ }
-+
-+ return 0;
++ pv_irq_ops.init_IRQ();
+}
+
-+void blk_unregister_queue(struct gendisk *disk)
++static void native_flush_tlb(void)
+{
-+ struct request_queue *q = disk->queue;
-+
-+ if (q && q->request_fn) {
-+ elv_unregister_queue(q);
-+
-+ kobject_uevent(&q->kobj, KOBJ_REMOVE);
-+ kobject_del(&q->kobj);
-+ kobject_put(&disk->dev.kobj);
-+ }
++ __native_flush_tlb();
+}
-diff --git a/block/blk-tag.c b/block/blk-tag.c
-new file mode 100644
-index 0000000..d1fd300
---- /dev/null
-+++ b/block/blk-tag.c
-@@ -0,0 +1,396 @@
++
+/*
-+ * Functions related to tagged command queuing
++ * Global pages have to be flushed a bit differently. Not a real
++ * performance problem because this does not happen often.
+ */
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
++static void native_flush_tlb_global(void)
++{
++ __native_flush_tlb_global();
++}
+
-+/**
-+ * blk_queue_find_tag - find a request by its tag and queue
-+ * @q: The request queue for the device
-+ * @tag: The tag of the request
-+ *
-+ * Notes:
-+ * Should be used when a device returns a tag and you want to match
-+ * it with a request.
-+ *
-+ * no locks need be held.
-+ **/
-+struct request *blk_queue_find_tag(struct request_queue *q, int tag)
++static void native_flush_tlb_single(unsigned long addr)
+{
-+ return blk_map_queue_find_tag(q->queue_tags, tag);
++ __native_flush_tlb_single(addr);
+}
+
-+EXPORT_SYMBOL(blk_queue_find_tag);
++/* These are in entry.S */
++extern void native_iret(void);
++extern void native_irq_enable_syscall_ret(void);
+
-+/**
-+ * __blk_free_tags - release a given set of tag maintenance info
-+ * @bqt: the tag map to free
-+ *
-+ * Tries to free the specified @bqt at . Returns true if it was
-+ * actually freed and false if there are still references using it
-+ */
-+static int __blk_free_tags(struct blk_queue_tag *bqt)
++static int __init print_banner(void)
+{
-+ int retval;
++ pv_init_ops.banner();
++ return 0;
++}
++core_initcall(print_banner);
+
-+ retval = atomic_dec_and_test(&bqt->refcnt);
-+ if (retval) {
-+ BUG_ON(bqt->busy);
++static struct resource reserve_ioports = {
++ .start = 0,
++ .end = IO_SPACE_LIMIT,
++ .name = "paravirt-ioport",
++ .flags = IORESOURCE_IO | IORESOURCE_BUSY,
++};
+
-+ kfree(bqt->tag_index);
-+ bqt->tag_index = NULL;
++static struct resource reserve_iomem = {
++ .start = 0,
++ .end = -1,
++ .name = "paravirt-iomem",
++ .flags = IORESOURCE_MEM | IORESOURCE_BUSY,
++};
+
-+ kfree(bqt->tag_map);
-+ bqt->tag_map = NULL;
++/*
++ * Reserve the whole legacy IO space to prevent any legacy drivers
++ * from wasting time probing for their hardware. This is a fairly
++ * brute-force approach to disabling all non-virtual drivers.
++ *
++ * Note that this must be called very early to have any effect.
++ */
++int paravirt_disable_iospace(void)
++{
++ int ret;
+
-+ kfree(bqt);
++ ret = request_resource(&ioport_resource, &reserve_ioports);
++ if (ret == 0) {
++ ret = request_resource(&iomem_resource, &reserve_iomem);
++ if (ret)
++ release_resource(&reserve_ioports);
+ }
+
-+ return retval;
++ return ret;
+}
+
-+/**
-+ * __blk_queue_free_tags - release tag maintenance info
-+ * @q: the request queue for the device
-+ *
-+ * Notes:
-+ * blk_cleanup_queue() will take care of calling this function, if tagging
-+ * has been used. So there's no need to call this directly.
-+ **/
-+void __blk_queue_free_tags(struct request_queue *q)
-+{
-+ struct blk_queue_tag *bqt = q->queue_tags;
-+
-+ if (!bqt)
-+ return;
++static DEFINE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
+
-+ __blk_free_tags(bqt);
++static inline void enter_lazy(enum paravirt_lazy_mode mode)
++{
++ BUG_ON(__get_cpu_var(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE);
++ BUG_ON(preemptible());
+
-+ q->queue_tags = NULL;
-+ q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
++ __get_cpu_var(paravirt_lazy_mode) = mode;
+}
+
-+/**
-+ * blk_free_tags - release a given set of tag maintenance info
-+ * @bqt: the tag map to free
-+ *
-+ * For externally managed @bqt@ frees the map. Callers of this
-+ * function must guarantee to have released all the queues that
-+ * might have been using this tag map.
-+ */
-+void blk_free_tags(struct blk_queue_tag *bqt)
++void paravirt_leave_lazy(enum paravirt_lazy_mode mode)
+{
-+ if (unlikely(!__blk_free_tags(bqt)))
-+ BUG();
++ BUG_ON(__get_cpu_var(paravirt_lazy_mode) != mode);
++ BUG_ON(preemptible());
++
++ __get_cpu_var(paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
+}
-+EXPORT_SYMBOL(blk_free_tags);
+
-+/**
-+ * blk_queue_free_tags - release tag maintenance info
-+ * @q: the request queue for the device
-+ *
-+ * Notes:
-+ * This is used to disabled tagged queuing to a device, yet leave
-+ * queue in function.
-+ **/
-+void blk_queue_free_tags(struct request_queue *q)
++void paravirt_enter_lazy_mmu(void)
+{
-+ clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
++ enter_lazy(PARAVIRT_LAZY_MMU);
+}
+
-+EXPORT_SYMBOL(blk_queue_free_tags);
-+
-+static int
-+init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth)
++void paravirt_leave_lazy_mmu(void)
+{
-+ struct request **tag_index;
-+ unsigned long *tag_map;
-+ int nr_ulongs;
-+
-+ if (q && depth > q->nr_requests * 2) {
-+ depth = q->nr_requests * 2;
-+ printk(KERN_ERR "%s: adjusted depth to %d\n",
-+ __FUNCTION__, depth);
-+ }
-+
-+ tag_index = kzalloc(depth * sizeof(struct request *), GFP_ATOMIC);
-+ if (!tag_index)
-+ goto fail;
-+
-+ nr_ulongs = ALIGN(depth, BITS_PER_LONG) / BITS_PER_LONG;
-+ tag_map = kzalloc(nr_ulongs * sizeof(unsigned long), GFP_ATOMIC);
-+ if (!tag_map)
-+ goto fail;
-+
-+ tags->real_max_depth = depth;
-+ tags->max_depth = depth;
-+ tags->tag_index = tag_index;
-+ tags->tag_map = tag_map;
-+
-+ return 0;
-+fail:
-+ kfree(tag_index);
-+ return -ENOMEM;
++ paravirt_leave_lazy(PARAVIRT_LAZY_MMU);
+}
+
-+static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q,
-+ int depth)
++void paravirt_enter_lazy_cpu(void)
+{
-+ struct blk_queue_tag *tags;
-+
-+ tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
-+ if (!tags)
-+ goto fail;
-+
-+ if (init_tag_map(q, tags, depth))
-+ goto fail;
-+
-+ tags->busy = 0;
-+ atomic_set(&tags->refcnt, 1);
-+ return tags;
-+fail:
-+ kfree(tags);
-+ return NULL;
++ enter_lazy(PARAVIRT_LAZY_CPU);
+}
+
-+/**
-+ * blk_init_tags - initialize the tag info for an external tag map
-+ * @depth: the maximum queue depth supported
-+ * @tags: the tag to use
-+ **/
-+struct blk_queue_tag *blk_init_tags(int depth)
++void paravirt_leave_lazy_cpu(void)
+{
-+ return __blk_queue_init_tags(NULL, depth);
++ paravirt_leave_lazy(PARAVIRT_LAZY_CPU);
+}
-+EXPORT_SYMBOL(blk_init_tags);
+
-+/**
-+ * blk_queue_init_tags - initialize the queue tag info
-+ * @q: the request queue for the device
-+ * @depth: the maximum queue depth supported
-+ * @tags: the tag to use
-+ **/
-+int blk_queue_init_tags(struct request_queue *q, int depth,
-+ struct blk_queue_tag *tags)
++enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
+{
-+ int rc;
-+
-+ BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
-+
-+ if (!tags && !q->queue_tags) {
-+ tags = __blk_queue_init_tags(q, depth);
++ return __get_cpu_var(paravirt_lazy_mode);
++}
+
-+ if (!tags)
-+ goto fail;
-+ } else if (q->queue_tags) {
-+ if ((rc = blk_queue_resize_tags(q, depth)))
-+ return rc;
-+ set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
-+ return 0;
-+ } else
-+ atomic_inc(&tags->refcnt);
++struct pv_info pv_info = {
++ .name = "bare hardware",
++ .paravirt_enabled = 0,
++ .kernel_rpl = 0,
++ .shared_kernel_pmd = 1, /* Only used when CONFIG_X86_PAE is set */
++};
+
-+ /*
-+ * assign it, all done
-+ */
-+ q->queue_tags = tags;
-+ q->queue_flags |= (1 << QUEUE_FLAG_QUEUED);
-+ INIT_LIST_HEAD(&q->tag_busy_list);
-+ return 0;
-+fail:
-+ kfree(tags);
-+ return -ENOMEM;
-+}
++struct pv_init_ops pv_init_ops = {
++ .patch = native_patch,
++ .banner = default_banner,
++ .arch_setup = paravirt_nop,
++ .memory_setup = machine_specific_memory_setup,
++};
+
-+EXPORT_SYMBOL(blk_queue_init_tags);
++struct pv_time_ops pv_time_ops = {
++ .time_init = hpet_time_init,
++ .get_wallclock = native_get_wallclock,
++ .set_wallclock = native_set_wallclock,
++ .sched_clock = native_sched_clock,
++ .get_cpu_khz = native_calculate_cpu_khz,
++};
+
-+/**
-+ * blk_queue_resize_tags - change the queueing depth
-+ * @q: the request queue for the device
-+ * @new_depth: the new max command queueing depth
-+ *
-+ * Notes:
-+ * Must be called with the queue lock held.
-+ **/
-+int blk_queue_resize_tags(struct request_queue *q, int new_depth)
-+{
-+ struct blk_queue_tag *bqt = q->queue_tags;
-+ struct request **tag_index;
-+ unsigned long *tag_map;
-+ int max_depth, nr_ulongs;
++struct pv_irq_ops pv_irq_ops = {
++ .init_IRQ = native_init_IRQ,
++ .save_fl = native_save_fl,
++ .restore_fl = native_restore_fl,
++ .irq_disable = native_irq_disable,
++ .irq_enable = native_irq_enable,
++ .safe_halt = native_safe_halt,
++ .halt = native_halt,
++};
+
-+ if (!bqt)
-+ return -ENXIO;
++struct pv_cpu_ops pv_cpu_ops = {
++ .cpuid = native_cpuid,
++ .get_debugreg = native_get_debugreg,
++ .set_debugreg = native_set_debugreg,
++ .clts = native_clts,
++ .read_cr0 = native_read_cr0,
++ .write_cr0 = native_write_cr0,
++ .read_cr4 = native_read_cr4,
++ .read_cr4_safe = native_read_cr4_safe,
++ .write_cr4 = native_write_cr4,
++#ifdef CONFIG_X86_64
++ .read_cr8 = native_read_cr8,
++ .write_cr8 = native_write_cr8,
++#endif
++ .wbinvd = native_wbinvd,
++ .read_msr = native_read_msr_safe,
++ .write_msr = native_write_msr_safe,
++ .read_tsc = native_read_tsc,
++ .read_pmc = native_read_pmc,
++ .read_tscp = native_read_tscp,
++ .load_tr_desc = native_load_tr_desc,
++ .set_ldt = native_set_ldt,
++ .load_gdt = native_load_gdt,
++ .load_idt = native_load_idt,
++ .store_gdt = native_store_gdt,
++ .store_idt = native_store_idt,
++ .store_tr = native_store_tr,
++ .load_tls = native_load_tls,
++ .write_ldt_entry = native_write_ldt_entry,
++ .write_gdt_entry = native_write_gdt_entry,
++ .write_idt_entry = native_write_idt_entry,
++ .load_sp0 = native_load_sp0,
+
-+ /*
-+ * if we already have large enough real_max_depth. just
-+ * adjust max_depth. *NOTE* as requests with tag value
-+ * between new_depth and real_max_depth can be in-flight, tag
-+ * map can not be shrunk blindly here.
-+ */
-+ if (new_depth <= bqt->real_max_depth) {
-+ bqt->max_depth = new_depth;
-+ return 0;
-+ }
++ .irq_enable_syscall_ret = native_irq_enable_syscall_ret,
++ .iret = native_iret,
++ .swapgs = native_swapgs,
+
-+ /*
-+ * Currently cannot replace a shared tag map with a new
-+ * one, so error out if this is the case
-+ */
-+ if (atomic_read(&bqt->refcnt) != 1)
-+ return -EBUSY;
++ .set_iopl_mask = native_set_iopl_mask,
++ .io_delay = native_io_delay,
+
-+ /*
-+ * save the old state info, so we can copy it back
-+ */
-+ tag_index = bqt->tag_index;
-+ tag_map = bqt->tag_map;
-+ max_depth = bqt->real_max_depth;
++ .lazy_mode = {
++ .enter = paravirt_nop,
++ .leave = paravirt_nop,
++ },
++};
+
-+ if (init_tag_map(q, bqt, new_depth))
-+ return -ENOMEM;
++struct pv_apic_ops pv_apic_ops = {
++#ifdef CONFIG_X86_LOCAL_APIC
++ .apic_write = native_apic_write,
++ .apic_write_atomic = native_apic_write_atomic,
++ .apic_read = native_apic_read,
++ .setup_boot_clock = setup_boot_APIC_clock,
++ .setup_secondary_clock = setup_secondary_APIC_clock,
++ .startup_ipi_hook = paravirt_nop,
++#endif
++};
+
-+ memcpy(bqt->tag_index, tag_index, max_depth * sizeof(struct request *));
-+ nr_ulongs = ALIGN(max_depth, BITS_PER_LONG) / BITS_PER_LONG;
-+ memcpy(bqt->tag_map, tag_map, nr_ulongs * sizeof(unsigned long));
++struct pv_mmu_ops pv_mmu_ops = {
++#ifndef CONFIG_X86_64
++ .pagetable_setup_start = native_pagetable_setup_start,
++ .pagetable_setup_done = native_pagetable_setup_done,
++#endif
+
-+ kfree(tag_index);
-+ kfree(tag_map);
-+ return 0;
-+}
++ .read_cr2 = native_read_cr2,
++ .write_cr2 = native_write_cr2,
++ .read_cr3 = native_read_cr3,
++ .write_cr3 = native_write_cr3,
+
-+EXPORT_SYMBOL(blk_queue_resize_tags);
++ .flush_tlb_user = native_flush_tlb,
++ .flush_tlb_kernel = native_flush_tlb_global,
++ .flush_tlb_single = native_flush_tlb_single,
++ .flush_tlb_others = native_flush_tlb_others,
+
-+/**
-+ * blk_queue_end_tag - end tag operations for a request
-+ * @q: the request queue for the device
-+ * @rq: the request that has completed
-+ *
-+ * Description:
-+ * Typically called when end_that_request_first() returns 0, meaning
-+ * all transfers have been done for a request. It's important to call
-+ * this function before end_that_request_last(), as that will put the
-+ * request back on the free list thus corrupting the internal tag list.
-+ *
-+ * Notes:
-+ * queue lock must be held.
-+ **/
-+void blk_queue_end_tag(struct request_queue *q, struct request *rq)
-+{
-+ struct blk_queue_tag *bqt = q->queue_tags;
-+ int tag = rq->tag;
++ .alloc_pt = paravirt_nop,
++ .alloc_pd = paravirt_nop,
++ .alloc_pd_clone = paravirt_nop,
++ .release_pt = paravirt_nop,
++ .release_pd = paravirt_nop,
+
-+ BUG_ON(tag == -1);
++ .set_pte = native_set_pte,
++ .set_pte_at = native_set_pte_at,
++ .set_pmd = native_set_pmd,
++ .pte_update = paravirt_nop,
++ .pte_update_defer = paravirt_nop,
+
-+ if (unlikely(tag >= bqt->real_max_depth))
-+ /*
-+ * This can happen after tag depth has been reduced.
-+ * FIXME: how about a warning or info message here?
-+ */
-+ return;
++#ifdef CONFIG_HIGHPTE
++ .kmap_atomic_pte = kmap_atomic,
++#endif
+
-+ list_del_init(&rq->queuelist);
-+ rq->cmd_flags &= ~REQ_QUEUED;
-+ rq->tag = -1;
++#if PAGETABLE_LEVELS >= 3
++#ifdef CONFIG_X86_PAE
++ .set_pte_atomic = native_set_pte_atomic,
++ .set_pte_present = native_set_pte_present,
++ .pte_clear = native_pte_clear,
++ .pmd_clear = native_pmd_clear,
++#endif
++ .set_pud = native_set_pud,
++ .pmd_val = native_pmd_val,
++ .make_pmd = native_make_pmd,
+
-+ if (unlikely(bqt->tag_index[tag] == NULL))
-+ printk(KERN_ERR "%s: tag %d is missing\n",
-+ __FUNCTION__, tag);
++#if PAGETABLE_LEVELS == 4
++ .pud_val = native_pud_val,
++ .make_pud = native_make_pud,
++ .set_pgd = native_set_pgd,
++#endif
++#endif /* PAGETABLE_LEVELS >= 3 */
+
-+ bqt->tag_index[tag] = NULL;
++ .pte_val = native_pte_val,
++ .pgd_val = native_pgd_val,
+
-+ if (unlikely(!test_bit(tag, bqt->tag_map))) {
-+ printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
-+ __FUNCTION__, tag);
-+ return;
-+ }
-+ /*
-+ * The tag_map bit acts as a lock for tag_index[bit], so we need
-+ * unlock memory barrier semantics.
-+ */
-+ clear_bit_unlock(tag, bqt->tag_map);
-+ bqt->busy--;
-+}
++ .make_pte = native_make_pte,
++ .make_pgd = native_make_pgd,
+
-+EXPORT_SYMBOL(blk_queue_end_tag);
++ .dup_mmap = paravirt_nop,
++ .exit_mmap = paravirt_nop,
++ .activate_mm = paravirt_nop,
+
-+/**
-+ * blk_queue_start_tag - find a free tag and assign it
-+ * @q: the request queue for the device
-+ * @rq: the block request that needs tagging
-+ *
-+ * Description:
-+ * This can either be used as a stand-alone helper, or possibly be
-+ * assigned as the queue &prep_rq_fn (in which case &struct request
-+ * automagically gets a tag assigned). Note that this function
-+ * assumes that any type of request can be queued! if this is not
-+ * true for your device, you must check the request type before
-+ * calling this function. The request will also be removed from
-+ * the request queue, so it's the drivers responsibility to readd
-+ * it if it should need to be restarted for some reason.
-+ *
-+ * Notes:
-+ * queue lock must be held.
-+ **/
-+int blk_queue_start_tag(struct request_queue *q, struct request *rq)
-+{
-+ struct blk_queue_tag *bqt = q->queue_tags;
-+ int tag;
++ .lazy_mode = {
++ .enter = paravirt_nop,
++ .leave = paravirt_nop,
++ },
++};
+
-+ if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
-+ printk(KERN_ERR
-+ "%s: request %p for device [%s] already tagged %d",
-+ __FUNCTION__, rq,
-+ rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->tag);
-+ BUG();
-+ }
-+
-+ /*
-+ * Protect against shared tag maps, as we may not have exclusive
-+ * access to the tag map.
-+ */
-+ do {
-+ tag = find_first_zero_bit(bqt->tag_map, bqt->max_depth);
-+ if (tag >= bqt->max_depth)
-+ return 1;
++EXPORT_SYMBOL_GPL(pv_time_ops);
++EXPORT_SYMBOL (pv_cpu_ops);
++EXPORT_SYMBOL (pv_mmu_ops);
++EXPORT_SYMBOL_GPL(pv_apic_ops);
++EXPORT_SYMBOL_GPL(pv_info);
++EXPORT_SYMBOL (pv_irq_ops);
+diff --git a/arch/x86/kernel/paravirt_32.c b/arch/x86/kernel/paravirt_32.c
+deleted file mode 100644
+index f500079..0000000
+--- a/arch/x86/kernel/paravirt_32.c
++++ /dev/null
+@@ -1,472 +0,0 @@
+-/* Paravirtualization interfaces
+- Copyright (C) 2006 Rusty Russell IBM Corporation
+-
+- This program is free software; you can redistribute it and/or modify
+- it under the terms of the GNU General Public License as published by
+- the Free Software Foundation; either version 2 of the License, or
+- (at your option) any later version.
+-
+- This program is distributed in the hope that it will be useful,
+- but WITHOUT ANY WARRANTY; without even the implied warranty of
+- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- GNU General Public License for more details.
+-
+- You should have received a copy of the GNU General Public License
+- along with this program; if not, write to the Free Software
+- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+-*/
+-#include <linux/errno.h>
+-#include <linux/module.h>
+-#include <linux/efi.h>
+-#include <linux/bcd.h>
+-#include <linux/highmem.h>
+-
+-#include <asm/bug.h>
+-#include <asm/paravirt.h>
+-#include <asm/desc.h>
+-#include <asm/setup.h>
+-#include <asm/arch_hooks.h>
+-#include <asm/time.h>
+-#include <asm/irq.h>
+-#include <asm/delay.h>
+-#include <asm/fixmap.h>
+-#include <asm/apic.h>
+-#include <asm/tlbflush.h>
+-#include <asm/timer.h>
+-
+-/* nop stub */
+-void _paravirt_nop(void)
+-{
+-}
+-
+-static void __init default_banner(void)
+-{
+- printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
+- pv_info.name);
+-}
+-
+-char *memory_setup(void)
+-{
+- return pv_init_ops.memory_setup();
+-}
+-
+-/* Simple instruction patching code. */
+-#define DEF_NATIVE(ops, name, code) \
+- extern const char start_##ops##_##name[], end_##ops##_##name[]; \
+- asm("start_" #ops "_" #name ": " code "; end_" #ops "_" #name ":")
+-
+-DEF_NATIVE(pv_irq_ops, irq_disable, "cli");
+-DEF_NATIVE(pv_irq_ops, irq_enable, "sti");
+-DEF_NATIVE(pv_irq_ops, restore_fl, "push %eax; popf");
+-DEF_NATIVE(pv_irq_ops, save_fl, "pushf; pop %eax");
+-DEF_NATIVE(pv_cpu_ops, iret, "iret");
+-DEF_NATIVE(pv_cpu_ops, irq_enable_sysexit, "sti; sysexit");
+-DEF_NATIVE(pv_mmu_ops, read_cr2, "mov %cr2, %eax");
+-DEF_NATIVE(pv_mmu_ops, write_cr3, "mov %eax, %cr3");
+-DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %cr3, %eax");
+-DEF_NATIVE(pv_cpu_ops, clts, "clts");
+-DEF_NATIVE(pv_cpu_ops, read_tsc, "rdtsc");
+-
+-/* Undefined instruction for dealing with missing ops pointers. */
+-static const unsigned char ud2a[] = { 0x0f, 0x0b };
+-
+-static unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
+- unsigned long addr, unsigned len)
+-{
+- const unsigned char *start, *end;
+- unsigned ret;
+-
+- switch(type) {
+-#define SITE(ops, x) \
+- case PARAVIRT_PATCH(ops.x): \
+- start = start_##ops##_##x; \
+- end = end_##ops##_##x; \
+- goto patch_site
+-
+- SITE(pv_irq_ops, irq_disable);
+- SITE(pv_irq_ops, irq_enable);
+- SITE(pv_irq_ops, restore_fl);
+- SITE(pv_irq_ops, save_fl);
+- SITE(pv_cpu_ops, iret);
+- SITE(pv_cpu_ops, irq_enable_sysexit);
+- SITE(pv_mmu_ops, read_cr2);
+- SITE(pv_mmu_ops, read_cr3);
+- SITE(pv_mmu_ops, write_cr3);
+- SITE(pv_cpu_ops, clts);
+- SITE(pv_cpu_ops, read_tsc);
+-#undef SITE
+-
+- patch_site:
+- ret = paravirt_patch_insns(ibuf, len, start, end);
+- break;
+-
+- default:
+- ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
+- break;
+- }
+-
+- return ret;
+-}
+-
+-unsigned paravirt_patch_nop(void)
+-{
+- return 0;
+-}
+-
+-unsigned paravirt_patch_ignore(unsigned len)
+-{
+- return len;
+-}
+-
+-struct branch {
+- unsigned char opcode;
+- u32 delta;
+-} __attribute__((packed));
+-
+-unsigned paravirt_patch_call(void *insnbuf,
+- const void *target, u16 tgt_clobbers,
+- unsigned long addr, u16 site_clobbers,
+- unsigned len)
+-{
+- struct branch *b = insnbuf;
+- unsigned long delta = (unsigned long)target - (addr+5);
+-
+- if (tgt_clobbers & ~site_clobbers)
+- return len; /* target would clobber too much for this site */
+- if (len < 5)
+- return len; /* call too long for patch site */
+-
+- b->opcode = 0xe8; /* call */
+- b->delta = delta;
+- BUILD_BUG_ON(sizeof(*b) != 5);
+-
+- return 5;
+-}
+-
+-unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+- unsigned long addr, unsigned len)
+-{
+- struct branch *b = insnbuf;
+- unsigned long delta = (unsigned long)target - (addr+5);
+-
+- if (len < 5)
+- return len; /* call too long for patch site */
+-
+- b->opcode = 0xe9; /* jmp */
+- b->delta = delta;
+-
+- return 5;
+-}
+-
+-/* Neat trick to map patch type back to the call within the
+- * corresponding structure. */
+-static void *get_call_destination(u8 type)
+-{
+- struct paravirt_patch_template tmpl = {
+- .pv_init_ops = pv_init_ops,
+- .pv_time_ops = pv_time_ops,
+- .pv_cpu_ops = pv_cpu_ops,
+- .pv_irq_ops = pv_irq_ops,
+- .pv_apic_ops = pv_apic_ops,
+- .pv_mmu_ops = pv_mmu_ops,
+- };
+- return *((void **)&tmpl + type);
+-}
+-
+-unsigned paravirt_patch_default(u8 type, u16 clobbers, void *insnbuf,
+- unsigned long addr, unsigned len)
+-{
+- void *opfunc = get_call_destination(type);
+- unsigned ret;
+-
+- if (opfunc == NULL)
+- /* If there's no function, patch it with a ud2a (BUG) */
+- ret = paravirt_patch_insns(insnbuf, len, ud2a, ud2a+sizeof(ud2a));
+- else if (opfunc == paravirt_nop)
+- /* If the operation is a nop, then nop the callsite */
+- ret = paravirt_patch_nop();
+- else if (type == PARAVIRT_PATCH(pv_cpu_ops.iret) ||
+- type == PARAVIRT_PATCH(pv_cpu_ops.irq_enable_sysexit))
+- /* If operation requires a jmp, then jmp */
+- ret = paravirt_patch_jmp(insnbuf, opfunc, addr, len);
+- else
+- /* Otherwise call the function; assume target could
+- clobber any caller-save reg */
+- ret = paravirt_patch_call(insnbuf, opfunc, CLBR_ANY,
+- addr, clobbers, len);
+-
+- return ret;
+-}
+-
+-unsigned paravirt_patch_insns(void *insnbuf, unsigned len,
+- const char *start, const char *end)
+-{
+- unsigned insn_len = end - start;
+-
+- if (insn_len > len || start == NULL)
+- insn_len = len;
+- else
+- memcpy(insnbuf, start, insn_len);
+-
+- return insn_len;
+-}
+-
+-void init_IRQ(void)
+-{
+- pv_irq_ops.init_IRQ();
+-}
+-
+-static void native_flush_tlb(void)
+-{
+- __native_flush_tlb();
+-}
+-
+-/*
+- * Global pages have to be flushed a bit differently. Not a real
+- * performance problem because this does not happen often.
+- */
+-static void native_flush_tlb_global(void)
+-{
+- __native_flush_tlb_global();
+-}
+-
+-static void native_flush_tlb_single(unsigned long addr)
+-{
+- __native_flush_tlb_single(addr);
+-}
+-
+-/* These are in entry.S */
+-extern void native_iret(void);
+-extern void native_irq_enable_sysexit(void);
+-
+-static int __init print_banner(void)
+-{
+- pv_init_ops.banner();
+- return 0;
+-}
+-core_initcall(print_banner);
+-
+-static struct resource reserve_ioports = {
+- .start = 0,
+- .end = IO_SPACE_LIMIT,
+- .name = "paravirt-ioport",
+- .flags = IORESOURCE_IO | IORESOURCE_BUSY,
+-};
+-
+-static struct resource reserve_iomem = {
+- .start = 0,
+- .end = -1,
+- .name = "paravirt-iomem",
+- .flags = IORESOURCE_MEM | IORESOURCE_BUSY,
+-};
+-
+-/*
+- * Reserve the whole legacy IO space to prevent any legacy drivers
+- * from wasting time probing for their hardware. This is a fairly
+- * brute-force approach to disabling all non-virtual drivers.
+- *
+- * Note that this must be called very early to have any effect.
+- */
+-int paravirt_disable_iospace(void)
+-{
+- int ret;
+-
+- ret = request_resource(&ioport_resource, &reserve_ioports);
+- if (ret == 0) {
+- ret = request_resource(&iomem_resource, &reserve_iomem);
+- if (ret)
+- release_resource(&reserve_ioports);
+- }
+-
+- return ret;
+-}
+-
+-static DEFINE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode) = PARAVIRT_LAZY_NONE;
+-
+-static inline void enter_lazy(enum paravirt_lazy_mode mode)
+-{
+- BUG_ON(x86_read_percpu(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE);
+- BUG_ON(preemptible());
+-
+- x86_write_percpu(paravirt_lazy_mode, mode);
+-}
+-
+-void paravirt_leave_lazy(enum paravirt_lazy_mode mode)
+-{
+- BUG_ON(x86_read_percpu(paravirt_lazy_mode) != mode);
+- BUG_ON(preemptible());
+-
+- x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
+-}
+-
+-void paravirt_enter_lazy_mmu(void)
+-{
+- enter_lazy(PARAVIRT_LAZY_MMU);
+-}
+-
+-void paravirt_leave_lazy_mmu(void)
+-{
+- paravirt_leave_lazy(PARAVIRT_LAZY_MMU);
+-}
+-
+-void paravirt_enter_lazy_cpu(void)
+-{
+- enter_lazy(PARAVIRT_LAZY_CPU);
+-}
+-
+-void paravirt_leave_lazy_cpu(void)
+-{
+- paravirt_leave_lazy(PARAVIRT_LAZY_CPU);
+-}
+-
+-enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
+-{
+- return x86_read_percpu(paravirt_lazy_mode);
+-}
+-
+-struct pv_info pv_info = {
+- .name = "bare hardware",
+- .paravirt_enabled = 0,
+- .kernel_rpl = 0,
+- .shared_kernel_pmd = 1, /* Only used when CONFIG_X86_PAE is set */
+-};
+-
+-struct pv_init_ops pv_init_ops = {
+- .patch = native_patch,
+- .banner = default_banner,
+- .arch_setup = paravirt_nop,
+- .memory_setup = machine_specific_memory_setup,
+-};
+-
+-struct pv_time_ops pv_time_ops = {
+- .time_init = hpet_time_init,
+- .get_wallclock = native_get_wallclock,
+- .set_wallclock = native_set_wallclock,
+- .sched_clock = native_sched_clock,
+- .get_cpu_khz = native_calculate_cpu_khz,
+-};
+-
+-struct pv_irq_ops pv_irq_ops = {
+- .init_IRQ = native_init_IRQ,
+- .save_fl = native_save_fl,
+- .restore_fl = native_restore_fl,
+- .irq_disable = native_irq_disable,
+- .irq_enable = native_irq_enable,
+- .safe_halt = native_safe_halt,
+- .halt = native_halt,
+-};
+-
+-struct pv_cpu_ops pv_cpu_ops = {
+- .cpuid = native_cpuid,
+- .get_debugreg = native_get_debugreg,
+- .set_debugreg = native_set_debugreg,
+- .clts = native_clts,
+- .read_cr0 = native_read_cr0,
+- .write_cr0 = native_write_cr0,
+- .read_cr4 = native_read_cr4,
+- .read_cr4_safe = native_read_cr4_safe,
+- .write_cr4 = native_write_cr4,
+- .wbinvd = native_wbinvd,
+- .read_msr = native_read_msr_safe,
+- .write_msr = native_write_msr_safe,
+- .read_tsc = native_read_tsc,
+- .read_pmc = native_read_pmc,
+- .load_tr_desc = native_load_tr_desc,
+- .set_ldt = native_set_ldt,
+- .load_gdt = native_load_gdt,
+- .load_idt = native_load_idt,
+- .store_gdt = native_store_gdt,
+- .store_idt = native_store_idt,
+- .store_tr = native_store_tr,
+- .load_tls = native_load_tls,
+- .write_ldt_entry = write_dt_entry,
+- .write_gdt_entry = write_dt_entry,
+- .write_idt_entry = write_dt_entry,
+- .load_esp0 = native_load_esp0,
+-
+- .irq_enable_sysexit = native_irq_enable_sysexit,
+- .iret = native_iret,
+-
+- .set_iopl_mask = native_set_iopl_mask,
+- .io_delay = native_io_delay,
+-
+- .lazy_mode = {
+- .enter = paravirt_nop,
+- .leave = paravirt_nop,
+- },
+-};
+-
+-struct pv_apic_ops pv_apic_ops = {
+-#ifdef CONFIG_X86_LOCAL_APIC
+- .apic_write = native_apic_write,
+- .apic_write_atomic = native_apic_write_atomic,
+- .apic_read = native_apic_read,
+- .setup_boot_clock = setup_boot_APIC_clock,
+- .setup_secondary_clock = setup_secondary_APIC_clock,
+- .startup_ipi_hook = paravirt_nop,
+-#endif
+-};
+-
+-struct pv_mmu_ops pv_mmu_ops = {
+- .pagetable_setup_start = native_pagetable_setup_start,
+- .pagetable_setup_done = native_pagetable_setup_done,
+-
+- .read_cr2 = native_read_cr2,
+- .write_cr2 = native_write_cr2,
+- .read_cr3 = native_read_cr3,
+- .write_cr3 = native_write_cr3,
+-
+- .flush_tlb_user = native_flush_tlb,
+- .flush_tlb_kernel = native_flush_tlb_global,
+- .flush_tlb_single = native_flush_tlb_single,
+- .flush_tlb_others = native_flush_tlb_others,
+-
+- .alloc_pt = paravirt_nop,
+- .alloc_pd = paravirt_nop,
+- .alloc_pd_clone = paravirt_nop,
+- .release_pt = paravirt_nop,
+- .release_pd = paravirt_nop,
+-
+- .set_pte = native_set_pte,
+- .set_pte_at = native_set_pte_at,
+- .set_pmd = native_set_pmd,
+- .pte_update = paravirt_nop,
+- .pte_update_defer = paravirt_nop,
+-
+-#ifdef CONFIG_HIGHPTE
+- .kmap_atomic_pte = kmap_atomic,
+-#endif
+-
+-#ifdef CONFIG_X86_PAE
+- .set_pte_atomic = native_set_pte_atomic,
+- .set_pte_present = native_set_pte_present,
+- .set_pud = native_set_pud,
+- .pte_clear = native_pte_clear,
+- .pmd_clear = native_pmd_clear,
+-
+- .pmd_val = native_pmd_val,
+- .make_pmd = native_make_pmd,
+-#endif
+-
+- .pte_val = native_pte_val,
+- .pgd_val = native_pgd_val,
+-
+- .make_pte = native_make_pte,
+- .make_pgd = native_make_pgd,
+-
+- .dup_mmap = paravirt_nop,
+- .exit_mmap = paravirt_nop,
+- .activate_mm = paravirt_nop,
+-
+- .lazy_mode = {
+- .enter = paravirt_nop,
+- .leave = paravirt_nop,
+- },
+-};
+-
+-EXPORT_SYMBOL_GPL(pv_time_ops);
+-EXPORT_SYMBOL (pv_cpu_ops);
+-EXPORT_SYMBOL (pv_mmu_ops);
+-EXPORT_SYMBOL_GPL(pv_apic_ops);
+-EXPORT_SYMBOL_GPL(pv_info);
+-EXPORT_SYMBOL (pv_irq_ops);
+diff --git a/arch/x86/kernel/paravirt_patch_32.c b/arch/x86/kernel/paravirt_patch_32.c
+new file mode 100644
+index 0000000..82fc5fc
+--- /dev/null
++++ b/arch/x86/kernel/paravirt_patch_32.c
+@@ -0,0 +1,49 @@
++#include <asm/paravirt.h>
++
++DEF_NATIVE(pv_irq_ops, irq_disable, "cli");
++DEF_NATIVE(pv_irq_ops, irq_enable, "sti");
++DEF_NATIVE(pv_irq_ops, restore_fl, "push %eax; popf");
++DEF_NATIVE(pv_irq_ops, save_fl, "pushf; pop %eax");
++DEF_NATIVE(pv_cpu_ops, iret, "iret");
++DEF_NATIVE(pv_cpu_ops, irq_enable_syscall_ret, "sti; sysexit");
++DEF_NATIVE(pv_mmu_ops, read_cr2, "mov %cr2, %eax");
++DEF_NATIVE(pv_mmu_ops, write_cr3, "mov %eax, %cr3");
++DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %cr3, %eax");
++DEF_NATIVE(pv_cpu_ops, clts, "clts");
++DEF_NATIVE(pv_cpu_ops, read_tsc, "rdtsc");
++
++unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
++ unsigned long addr, unsigned len)
++{
++ const unsigned char *start, *end;
++ unsigned ret;
++
++#define PATCH_SITE(ops, x) \
++ case PARAVIRT_PATCH(ops.x): \
++ start = start_##ops##_##x; \
++ end = end_##ops##_##x; \
++ goto patch_site
++ switch(type) {
++ PATCH_SITE(pv_irq_ops, irq_disable);
++ PATCH_SITE(pv_irq_ops, irq_enable);
++ PATCH_SITE(pv_irq_ops, restore_fl);
++ PATCH_SITE(pv_irq_ops, save_fl);
++ PATCH_SITE(pv_cpu_ops, iret);
++ PATCH_SITE(pv_cpu_ops, irq_enable_syscall_ret);
++ PATCH_SITE(pv_mmu_ops, read_cr2);
++ PATCH_SITE(pv_mmu_ops, read_cr3);
++ PATCH_SITE(pv_mmu_ops, write_cr3);
++ PATCH_SITE(pv_cpu_ops, clts);
++ PATCH_SITE(pv_cpu_ops, read_tsc);
+
-+ } while (test_and_set_bit_lock(tag, bqt->tag_map));
-+ /*
-+ * We need lock ordering semantics given by test_and_set_bit_lock.
-+ * See blk_queue_end_tag for details.
-+ */
++ patch_site:
++ ret = paravirt_patch_insns(ibuf, len, start, end);
++ break;
+
-+ rq->cmd_flags |= REQ_QUEUED;
-+ rq->tag = tag;
-+ bqt->tag_index[tag] = rq;
-+ blkdev_dequeue_request(rq);
-+ list_add(&rq->queuelist, &q->tag_busy_list);
-+ bqt->busy++;
-+ return 0;
++ default:
++ ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
++ break;
++ }
++#undef PATCH_SITE
++ return ret;
+}
+diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
+new file mode 100644
+index 0000000..7d904e1
+--- /dev/null
++++ b/arch/x86/kernel/paravirt_patch_64.c
+@@ -0,0 +1,57 @@
++#include <asm/paravirt.h>
++#include <asm/asm-offsets.h>
++#include <linux/stringify.h>
+
-+EXPORT_SYMBOL(blk_queue_start_tag);
++DEF_NATIVE(pv_irq_ops, irq_disable, "cli");
++DEF_NATIVE(pv_irq_ops, irq_enable, "sti");
++DEF_NATIVE(pv_irq_ops, restore_fl, "pushq %rdi; popfq");
++DEF_NATIVE(pv_irq_ops, save_fl, "pushfq; popq %rax");
++DEF_NATIVE(pv_cpu_ops, iret, "iretq");
++DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
++DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
++DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
++DEF_NATIVE(pv_mmu_ops, flush_tlb_single, "invlpg (%rdi)");
++DEF_NATIVE(pv_cpu_ops, clts, "clts");
++DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
++
++/* the three commands give us more control to how to return from a syscall */
++DEF_NATIVE(pv_cpu_ops, irq_enable_syscall_ret, "movq %gs:" __stringify(pda_oldrsp) ", %rsp; swapgs; sysretq;");
++DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs");
++
++unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
++ unsigned long addr, unsigned len)
++{
++ const unsigned char *start, *end;
++ unsigned ret;
++
++#define PATCH_SITE(ops, x) \
++ case PARAVIRT_PATCH(ops.x): \
++ start = start_##ops##_##x; \
++ end = end_##ops##_##x; \
++ goto patch_site
++ switch(type) {
++ PATCH_SITE(pv_irq_ops, restore_fl);
++ PATCH_SITE(pv_irq_ops, save_fl);
++ PATCH_SITE(pv_irq_ops, irq_enable);
++ PATCH_SITE(pv_irq_ops, irq_disable);
++ PATCH_SITE(pv_cpu_ops, iret);
++ PATCH_SITE(pv_cpu_ops, irq_enable_syscall_ret);
++ PATCH_SITE(pv_cpu_ops, swapgs);
++ PATCH_SITE(pv_mmu_ops, read_cr2);
++ PATCH_SITE(pv_mmu_ops, read_cr3);
++ PATCH_SITE(pv_mmu_ops, write_cr3);
++ PATCH_SITE(pv_cpu_ops, clts);
++ PATCH_SITE(pv_mmu_ops, flush_tlb_single);
++ PATCH_SITE(pv_cpu_ops, wbinvd);
+
-+/**
-+ * blk_queue_invalidate_tags - invalidate all pending tags
-+ * @q: the request queue for the device
++ patch_site:
++ ret = paravirt_patch_insns(ibuf, len, start, end);
++ break;
++
++ default:
++ ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
++ break;
++ }
++#undef PATCH_SITE
++ return ret;
++}
+diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
+index 6bf1f71..21f34db 100644
+--- a/arch/x86/kernel/pci-calgary_64.c
++++ b/arch/x86/kernel/pci-calgary_64.c
+@@ -30,7 +30,6 @@
+ #include <linux/spinlock.h>
+ #include <linux/string.h>
+ #include <linux/dma-mapping.h>
+-#include <linux/init.h>
+ #include <linux/bitops.h>
+ #include <linux/pci_ids.h>
+ #include <linux/pci.h>
+@@ -183,7 +182,7 @@ static struct calgary_bus_info bus_info[MAX_PHB_BUS_NUM] = { { NULL, 0, 0 }, };
+
+ /* enable this to stress test the chip's TCE cache */
+ #ifdef CONFIG_IOMMU_DEBUG
+-int debugging __read_mostly = 1;
++static int debugging = 1;
+
+ static inline unsigned long verify_bit_range(unsigned long* bitmap,
+ int expected, unsigned long start, unsigned long end)
+@@ -202,7 +201,7 @@ static inline unsigned long verify_bit_range(unsigned long* bitmap,
+ return ~0UL;
+ }
+ #else /* debugging is disabled */
+-int debugging __read_mostly = 0;
++static int debugging;
+
+ static inline unsigned long verify_bit_range(unsigned long* bitmap,
+ int expected, unsigned long start, unsigned long end)
+diff --git a/arch/x86/kernel/pci-dma_64.c b/arch/x86/kernel/pci-dma_64.c
+index 5552d23..a82473d 100644
+--- a/arch/x86/kernel/pci-dma_64.c
++++ b/arch/x86/kernel/pci-dma_64.c
+@@ -13,7 +13,6 @@
+ #include <asm/calgary.h>
+
+ int iommu_merge __read_mostly = 0;
+-EXPORT_SYMBOL(iommu_merge);
+
+ dma_addr_t bad_dma_address __read_mostly;
+ EXPORT_SYMBOL(bad_dma_address);
+@@ -230,7 +229,7 @@ EXPORT_SYMBOL(dma_set_mask);
+ * See <Documentation/x86_64/boot-options.txt> for the iommu kernel parameter
+ * documentation.
+ */
+-__init int iommu_setup(char *p)
++static __init int iommu_setup(char *p)
+ {
+ iommu_merge = 1;
+
+diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
+index 06bcba5..4d5cc71 100644
+--- a/arch/x86/kernel/pci-gart_64.c
++++ b/arch/x86/kernel/pci-gart_64.c
+@@ -1,12 +1,12 @@
+ /*
+ * Dynamic DMA mapping support for AMD Hammer.
+- *
+ *
-+ * Description:
-+ * Hardware conditions may dictate a need to stop all pending requests.
-+ * In this case, we will safely clear the block side of the tag queue and
-+ * readd all requests to the request queue in the right order.
+ * Use the integrated AGP GART in the Hammer northbridge as an IOMMU for PCI.
+ * This allows to use PCI devices that only support 32bit addresses on systems
+- * with more than 4GB.
++ * with more than 4GB.
+ *
+ * See Documentation/DMA-mapping.txt for the interface specification.
+- *
+ *
-+ * Notes:
-+ * queue lock must be held.
-+ **/
-+void blk_queue_invalidate_tags(struct request_queue *q)
-+{
-+ struct list_head *tmp, *n;
+ * Copyright 2002 Andi Kleen, SuSE Labs.
+ * Subject to the GNU General Public License v2 only.
+ */
+@@ -37,23 +37,26 @@
+ #include <asm/k8.h>
+
+ static unsigned long iommu_bus_base; /* GART remapping area (physical) */
+-static unsigned long iommu_size; /* size of remapping area bytes */
++static unsigned long iommu_size; /* size of remapping area bytes */
+ static unsigned long iommu_pages; /* .. and in pages */
+
+-static u32 *iommu_gatt_base; /* Remapping table */
++static u32 *iommu_gatt_base; /* Remapping table */
+
+-/* If this is disabled the IOMMU will use an optimized flushing strategy
+- of only flushing when an mapping is reused. With it true the GART is flushed
+- for every mapping. Problem is that doing the lazy flush seems to trigger
+- bugs with some popular PCI cards, in particular 3ware (but has been also
+- also seen with Qlogic at least). */
++/*
++ * If this is disabled the IOMMU will use an optimized flushing strategy
++ * of only flushing when an mapping is reused. With it true the GART is
++ * flushed for every mapping. Problem is that doing the lazy flush seems
++ * to trigger bugs with some popular PCI cards, in particular 3ware (but
++ * has been also also seen with Qlogic at least).
++ */
+ int iommu_fullflush = 1;
+
+-/* Allocation bitmap for the remapping area */
++/* Allocation bitmap for the remapping area: */
+ static DEFINE_SPINLOCK(iommu_bitmap_lock);
+-static unsigned long *iommu_gart_bitmap; /* guarded by iommu_bitmap_lock */
++/* Guarded by iommu_bitmap_lock: */
++static unsigned long *iommu_gart_bitmap;
+
+-static u32 gart_unmapped_entry;
++static u32 gart_unmapped_entry;
+
+ #define GPTE_VALID 1
+ #define GPTE_COHERENT 2
+@@ -61,10 +64,10 @@ static u32 gart_unmapped_entry;
+ (((x) & 0xfffff000) | (((x) >> 32) << 4) | GPTE_VALID | GPTE_COHERENT)
+ #define GPTE_DECODE(x) (((x) & 0xfffff000) | (((u64)(x) & 0xff0) << 28))
+
+-#define to_pages(addr,size) \
++#define to_pages(addr, size) \
+ (round_up(((addr) & ~PAGE_MASK) + (size), PAGE_SIZE) >> PAGE_SHIFT)
+
+-#define EMERGENCY_PAGES 32 /* = 128KB */
++#define EMERGENCY_PAGES 32 /* = 128KB */
+
+ #ifdef CONFIG_AGP
+ #define AGPEXTERN extern
+@@ -77,130 +80,152 @@ AGPEXTERN int agp_memory_reserved;
+ AGPEXTERN __u32 *agp_gatt_table;
+
+ static unsigned long next_bit; /* protected by iommu_bitmap_lock */
+-static int need_flush; /* global flush state. set for each gart wrap */
++static int need_flush; /* global flush state. set for each gart wrap */
+
+-static unsigned long alloc_iommu(int size)
+-{
++static unsigned long alloc_iommu(int size)
++{
+ unsigned long offset, flags;
+
+- spin_lock_irqsave(&iommu_bitmap_lock, flags);
+- offset = find_next_zero_string(iommu_gart_bitmap,next_bit,iommu_pages,size);
++ spin_lock_irqsave(&iommu_bitmap_lock, flags);
++ offset = find_next_zero_string(iommu_gart_bitmap, next_bit,
++ iommu_pages, size);
+ if (offset == -1) {
+ need_flush = 1;
+- offset = find_next_zero_string(iommu_gart_bitmap,0,iommu_pages,size);
++ offset = find_next_zero_string(iommu_gart_bitmap, 0,
++ iommu_pages, size);
+ }
+- if (offset != -1) {
+- set_bit_string(iommu_gart_bitmap, offset, size);
+- next_bit = offset+size;
+- if (next_bit >= iommu_pages) {
++ if (offset != -1) {
++ set_bit_string(iommu_gart_bitmap, offset, size);
++ next_bit = offset+size;
++ if (next_bit >= iommu_pages) {
+ next_bit = 0;
+ need_flush = 1;
+- }
+- }
++ }
++ }
+ if (iommu_fullflush)
+ need_flush = 1;
+- spin_unlock_irqrestore(&iommu_bitmap_lock, flags);
++ spin_unlock_irqrestore(&iommu_bitmap_lock, flags);
+
-+ list_for_each_safe(tmp, n, &q->tag_busy_list)
-+ blk_requeue_request(q, list_entry_rq(tmp));
+ return offset;
+-}
+}
+
+ static void free_iommu(unsigned long offset, int size)
+-{
++{
+ unsigned long flags;
+
-+EXPORT_SYMBOL(blk_queue_invalidate_tags);
-diff --git a/block/blk.h b/block/blk.h
-new file mode 100644
-index 0000000..ec898dd
---- /dev/null
-+++ b/block/blk.h
-@@ -0,0 +1,53 @@
-+#ifndef BLK_INTERNAL_H
-+#define BLK_INTERNAL_H
-+
-+/* Amount of time in which a process may batch requests */
-+#define BLK_BATCH_TIME (HZ/50UL)
-+
-+/* Number of requests a "batching" process may submit */
-+#define BLK_BATCH_REQ 32
+ spin_lock_irqsave(&iommu_bitmap_lock, flags);
+ __clear_bit_string(iommu_gart_bitmap, offset, size);
+ spin_unlock_irqrestore(&iommu_bitmap_lock, flags);
+-}
++}
+
+-/*
++/*
+ * Use global flush state to avoid races with multiple flushers.
+ */
+ static void flush_gart(void)
+-{
++{
+ unsigned long flags;
+
-+extern struct kmem_cache *blk_requestq_cachep;
-+extern struct kobj_type blk_queue_ktype;
+ spin_lock_irqsave(&iommu_bitmap_lock, flags);
+ if (need_flush) {
+ k8_flush_garts();
+ need_flush = 0;
+- }
++ }
+ spin_unlock_irqrestore(&iommu_bitmap_lock, flags);
+-}
++}
+
+ #ifdef CONFIG_IOMMU_LEAK
+
+-#define SET_LEAK(x) if (iommu_leak_tab) \
+- iommu_leak_tab[x] = __builtin_return_address(0);
+-#define CLEAR_LEAK(x) if (iommu_leak_tab) \
+- iommu_leak_tab[x] = NULL;
++#define SET_LEAK(x) \
++ do { \
++ if (iommu_leak_tab) \
++ iommu_leak_tab[x] = __builtin_return_address(0);\
++ } while (0)
+
-+void rq_init(struct request_queue *q, struct request *rq);
-+void init_request_from_bio(struct request *req, struct bio *bio);
-+void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-+ struct bio *bio);
-+void __blk_queue_free_tags(struct request_queue *q);
++#define CLEAR_LEAK(x) \
++ do { \
++ if (iommu_leak_tab) \
++ iommu_leak_tab[x] = NULL; \
++ } while (0)
+
+ /* Debugging aid for drivers that don't free their IOMMU tables */
+-static void **iommu_leak_tab;
++static void **iommu_leak_tab;
+ static int leak_trace;
+ static int iommu_leak_pages = 20;
+
-+void blk_unplug_work(struct work_struct *work);
-+void blk_unplug_timeout(unsigned long data);
+ static void dump_leak(void)
+ {
+ int i;
+- static int dump;
+- if (dump || !iommu_leak_tab) return;
++ static int dump;
++
++ if (dump || !iommu_leak_tab)
++ return;
+ dump = 1;
+- show_stack(NULL,NULL);
+- /* Very crude. dump some from the end of the table too */
+- printk("Dumping %d pages from end of IOMMU:\n", iommu_leak_pages);
+- for (i = 0; i < iommu_leak_pages; i+=2) {
+- printk("%lu: ", iommu_pages-i);
+- printk_address((unsigned long) iommu_leak_tab[iommu_pages-i]);
+- printk("%c", (i+1)%2 == 0 ? '\n' : ' ');
+- }
+- printk("\n");
++ show_stack(NULL, NULL);
+
-+struct io_context *current_io_context(gfp_t gfp_flags, int node);
++ /* Very crude. dump some from the end of the table too */
++ printk(KERN_DEBUG "Dumping %d pages from end of IOMMU:\n",
++ iommu_leak_pages);
++ for (i = 0; i < iommu_leak_pages; i += 2) {
++ printk(KERN_DEBUG "%lu: ", iommu_pages-i);
++ printk_address((unsigned long) iommu_leak_tab[iommu_pages-i], 0);
++ printk(KERN_CONT "%c", (i+1)%2 == 0 ? '\n' : ' ');
++ }
++ printk(KERN_DEBUG "\n");
+ }
+ #else
+-#define SET_LEAK(x)
+-#define CLEAR_LEAK(x)
++# define SET_LEAK(x)
++# define CLEAR_LEAK(x)
+ #endif
+
+ static void iommu_full(struct device *dev, size_t size, int dir)
+ {
+- /*
++ /*
+ * Ran out of IOMMU space for this operation. This is very bad.
+ * Unfortunately the drivers cannot handle this operation properly.
+- * Return some non mapped prereserved space in the aperture and
++ * Return some non mapped prereserved space in the aperture and
+ * let the Northbridge deal with it. This will result in garbage
+ * in the IO operation. When the size exceeds the prereserved space
+- * memory corruption will occur or random memory will be DMAed
++ * memory corruption will occur or random memory will be DMAed
+ * out. Hopefully no network devices use single mappings that big.
+- */
+-
+- printk(KERN_ERR
+- "PCI-DMA: Out of IOMMU space for %lu bytes at device %s\n",
+- size, dev->bus_id);
++ */
+
-+int ll_back_merge_fn(struct request_queue *q, struct request *req,
-+ struct bio *bio);
-+int ll_front_merge_fn(struct request_queue *q, struct request *req,
-+ struct bio *bio);
-+int attempt_back_merge(struct request_queue *q, struct request *rq);
-+int attempt_front_merge(struct request_queue *q, struct request *rq);
-+void blk_recalc_rq_segments(struct request *rq);
-+void blk_recalc_rq_sectors(struct request *rq, int nsect);
++ printk(KERN_ERR
++ "PCI-DMA: Out of IOMMU space for %lu bytes at device %s\n",
++ size, dev->bus_id);
+
+ if (size > PAGE_SIZE*EMERGENCY_PAGES) {
+ if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+ panic("PCI-DMA: Memory would be corrupted\n");
+- if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+- panic(KERN_ERR "PCI-DMA: Random memory would be DMAed\n");
+- }
+-
++ if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
++ panic(KERN_ERR
++ "PCI-DMA: Random memory would be DMAed\n");
++ }
+ #ifdef CONFIG_IOMMU_LEAK
+- dump_leak();
++ dump_leak();
+ #endif
+-}
++}
+
+-static inline int need_iommu(struct device *dev, unsigned long addr, size_t size)
+-{
++static inline int
++need_iommu(struct device *dev, unsigned long addr, size_t size)
++{
+ u64 mask = *dev->dma_mask;
+ int high = addr + size > mask;
+ int mmu = high;
+- if (force_iommu)
+- mmu = 1;
+- return mmu;
+
-+void blk_queue_congestion_threshold(struct request_queue *q);
++ if (force_iommu)
++ mmu = 1;
+
-+/*
-+ * Return the threshold (number of used requests) at which the queue is
-+ * considered to be congested. It include a little hysteresis to keep the
-+ * context switch rate down.
-+ */
-+static inline int queue_congestion_on_threshold(struct request_queue *q)
++ return mmu;
+ }
+
+-static inline int nonforced_iommu(struct device *dev, unsigned long addr, size_t size)
+-{
++static inline int
++nonforced_iommu(struct device *dev, unsigned long addr, size_t size)
+{
-+ return q->nr_congestion_on;
-+}
+ u64 mask = *dev->dma_mask;
+ int high = addr + size > mask;
+ int mmu = high;
+- return mmu;
+
-+/*
-+ * The threshold at which a queue is considered to be uncongested
-+ */
-+static inline int queue_congestion_off_threshold(struct request_queue *q)
++ return mmu;
+ }
+
+ /* Map a single continuous physical area into the IOMMU.
+@@ -208,13 +233,14 @@ static inline int nonforced_iommu(struct device *dev, unsigned long addr, size_t
+ */
+ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
+ size_t size, int dir)
+-{
+{
-+ return q->nr_congestion_off;
-+}
+ unsigned long npages = to_pages(phys_mem, size);
+ unsigned long iommu_page = alloc_iommu(npages);
+ int i;
+
-+#endif
-diff --git a/block/blktrace.c b/block/blktrace.c
-index 9b4da4a..568588c 100644
---- a/block/blktrace.c
-+++ b/block/blktrace.c
-@@ -235,7 +235,7 @@ static void blk_trace_cleanup(struct blk_trace *bt)
- kfree(bt);
+ if (iommu_page == -1) {
+ if (!nonforced_iommu(dev, phys_mem, size))
+- return phys_mem;
++ return phys_mem;
+ if (panic_on_overflow)
+ panic("dma_map_area overflow %lu bytes\n", size);
+ iommu_full(dev, size, dir);
+@@ -229,35 +255,39 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
+ return iommu_bus_base + iommu_page*PAGE_SIZE + (phys_mem & ~PAGE_MASK);
}
--static int blk_trace_remove(struct request_queue *q)
-+int blk_trace_remove(struct request_queue *q)
+-static dma_addr_t gart_map_simple(struct device *dev, char *buf,
+- size_t size, int dir)
++static dma_addr_t
++gart_map_simple(struct device *dev, char *buf, size_t size, int dir)
{
- struct blk_trace *bt;
+ dma_addr_t map = dma_map_area(dev, virt_to_bus(buf), size, dir);
++
+ flush_gart();
++
+ return map;
+ }
-@@ -249,6 +249,7 @@ static int blk_trace_remove(struct request_queue *q)
+ /* Map a single area into the IOMMU */
+-static dma_addr_t gart_map_single(struct device *dev, void *addr, size_t size, int dir)
++static dma_addr_t
++gart_map_single(struct device *dev, void *addr, size_t size, int dir)
+ {
+ unsigned long phys_mem, bus;
- return 0;
+ if (!dev)
+ dev = &fallback_dev;
+
+- phys_mem = virt_to_phys(addr);
++ phys_mem = virt_to_phys(addr);
+ if (!need_iommu(dev, phys_mem, size))
+- return phys_mem;
++ return phys_mem;
+
+ bus = gart_map_simple(dev, addr, size, dir);
+- return bus;
++
++ return bus;
}
-+EXPORT_SYMBOL_GPL(blk_trace_remove);
- static int blk_dropped_open(struct inode *inode, struct file *filp)
+ /*
+ * Free a DMA mapping.
+ */
+ static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
+- size_t size, int direction)
++ size_t size, int direction)
{
-@@ -316,18 +317,17 @@ static struct rchan_callbacks blk_relay_callbacks = {
+ unsigned long iommu_page;
+ int npages;
+@@ -266,6 +296,7 @@ static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
+ if (dma_addr < iommu_bus_base + EMERGENCY_PAGES*PAGE_SIZE ||
+ dma_addr >= iommu_bus_base + iommu_size)
+ return;
++
+ iommu_page = (dma_addr - iommu_bus_base)>>PAGE_SHIFT;
+ npages = to_pages(dma_addr, size);
+ for (i = 0; i < npages; i++) {
+@@ -278,7 +309,8 @@ static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
/*
- * Setup everything required to start tracing
+ * Wrapper for pci_unmap_single working with scatterlists.
*/
--int do_blk_trace_setup(struct request_queue *q, struct block_device *bdev,
-+int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
- struct blk_user_trace_setup *buts)
+-static void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
++static void
++gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
{
- struct blk_trace *old_bt, *bt = NULL;
- struct dentry *dir = NULL;
-- char b[BDEVNAME_SIZE];
- int ret, i;
-
- if (!buts->buf_size || !buts->buf_nr)
- return -EINVAL;
-
-- strcpy(buts->name, bdevname(bdev, b));
-+ strcpy(buts->name, name);
-
- /*
- * some device names have larger paths - convert the slashes
-@@ -352,7 +352,7 @@ int do_blk_trace_setup(struct request_queue *q, struct block_device *bdev,
- goto err;
-
- bt->dir = dir;
-- bt->dev = bdev->bd_dev;
-+ bt->dev = dev;
- atomic_set(&bt->dropped, 0);
+ struct scatterlist *s;
+ int i;
+@@ -303,12 +335,13 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
- ret = -EIO;
-@@ -399,8 +399,8 @@ err:
- return ret;
+ for_each_sg(sg, s, nents, i) {
+ unsigned long addr = sg_phys(s);
+- if (nonforced_iommu(dev, addr, s->length)) {
++
++ if (nonforced_iommu(dev, addr, s->length)) {
+ addr = dma_map_area(dev, addr, s->length, dir);
+- if (addr == bad_dma_address) {
+- if (i > 0)
++ if (addr == bad_dma_address) {
++ if (i > 0)
+ gart_unmap_sg(dev, sg, i, dir);
+- nents = 0;
++ nents = 0;
+ sg[0].dma_length = 0;
+ break;
+ }
+@@ -317,15 +350,16 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
+ s->dma_length = s->length;
+ }
+ flush_gart();
++
+ return nents;
}
--static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
-- char __user *arg)
-+int blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
-+ char __user *arg)
+ /* Map multiple scatterlist entries continuous into the first. */
+ static int __dma_map_cont(struct scatterlist *start, int nelems,
+- struct scatterlist *sout, unsigned long pages)
++ struct scatterlist *sout, unsigned long pages)
{
- struct blk_user_trace_setup buts;
- int ret;
-@@ -409,7 +409,7 @@ static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
- if (ret)
- return -EFAULT;
-
-- ret = do_blk_trace_setup(q, bdev, &buts);
-+ ret = do_blk_trace_setup(q, name, dev, &buts);
- if (ret)
- return ret;
+ unsigned long iommu_start = alloc_iommu(pages);
+- unsigned long iommu_page = iommu_start;
++ unsigned long iommu_page = iommu_start;
+ struct scatterlist *s;
+ int i;
-@@ -418,8 +418,9 @@ static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
+@@ -335,32 +369,33 @@ static int __dma_map_cont(struct scatterlist *start, int nelems,
+ for_each_sg(start, s, nelems, i) {
+ unsigned long pages, addr;
+ unsigned long phys_addr = s->dma_address;
+-
++
+ BUG_ON(s != start && s->offset);
+ if (s == start) {
+ sout->dma_address = iommu_bus_base;
+ sout->dma_address += iommu_page*PAGE_SIZE + s->offset;
+ sout->dma_length = s->length;
+- } else {
+- sout->dma_length += s->length;
++ } else {
++ sout->dma_length += s->length;
+ }
+ addr = phys_addr;
+- pages = to_pages(s->offset, s->length);
+- while (pages--) {
+- iommu_gatt_base[iommu_page] = GPTE_ENCODE(addr);
++ pages = to_pages(s->offset, s->length);
++ while (pages--) {
++ iommu_gatt_base[iommu_page] = GPTE_ENCODE(addr);
+ SET_LEAK(iommu_page);
+ addr += PAGE_SIZE;
+ iommu_page++;
+ }
+- }
+- BUG_ON(iommu_page - iommu_start != pages);
++ }
++ BUG_ON(iommu_page - iommu_start != pages);
++
return 0;
}
-+EXPORT_SYMBOL_GPL(blk_trace_setup);
--static int blk_trace_startstop(struct request_queue *q, int start)
-+int blk_trace_startstop(struct request_queue *q, int start)
+-static inline int dma_map_cont(struct scatterlist *start, int nelems,
+- struct scatterlist *sout,
+- unsigned long pages, int need)
++static inline int
++dma_map_cont(struct scatterlist *start, int nelems, struct scatterlist *sout,
++ unsigned long pages, int need)
{
- struct blk_trace *bt;
- int ret;
-@@ -452,6 +453,7 @@ static int blk_trace_startstop(struct request_queue *q, int start)
-
- return ret;
+ if (!need) {
+ BUG_ON(nelems != 1);
+@@ -370,22 +405,19 @@ static inline int dma_map_cont(struct scatterlist *start, int nelems,
+ }
+ return __dma_map_cont(start, nelems, sout, pages);
}
-+EXPORT_SYMBOL_GPL(blk_trace_startstop);
-
- /**
- * blk_trace_ioctl: - handle the ioctls associated with tracing
-@@ -464,6 +466,7 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
+-
++
+ /*
+ * DMA map all entries in a scatterlist.
+- * Merge chunks that have page aligned sizes into a continuous mapping.
++ * Merge chunks that have page aligned sizes into a continuous mapping.
+ */
+-static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+- int dir)
++static int
++gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
{
- struct request_queue *q;
- int ret, start = 0;
-+ char b[BDEVNAME_SIZE];
-
- q = bdev_get_queue(bdev);
- if (!q)
-@@ -473,7 +476,8 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
+- int i;
+- int out;
+- int start;
+- unsigned long pages = 0;
+- int need = 0, nextneed;
+ struct scatterlist *s, *ps, *start_sg, *sgmap;
++ int need = 0, nextneed, i, out, start;
++ unsigned long pages = 0;
- switch (cmd) {
- case BLKTRACESETUP:
-- ret = blk_trace_setup(q, bdev, arg);
-+ strcpy(b, bdevname(bdev, b));
-+ ret = blk_trace_setup(q, b, bdev->bd_dev, arg);
- break;
- case BLKTRACESTART:
- start = 1;
-diff --git a/block/bsg.c b/block/bsg.c
-index 8e181ab..69b0a9d 100644
---- a/block/bsg.c
-+++ b/block/bsg.c
-@@ -445,6 +445,15 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
- else
- hdr->dout_resid = rq->data_len;
+- if (nents == 0)
++ if (nents == 0)
+ return 0;
-+ /*
-+ * If the request generated a negative error number, return it
-+ * (providing we aren't already returning an error); if it's
-+ * just a protocol response (i.e. non negative), that gets
-+ * processed above.
-+ */
-+ if (!ret && rq->errors < 0)
-+ ret = rq->errors;
+ if (!dev)
+@@ -397,15 +429,19 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ ps = NULL; /* shut up gcc */
+ for_each_sg(sg, s, nents, i) {
+ dma_addr_t addr = sg_phys(s);
++
+ s->dma_address = addr;
+- BUG_ON(s->length == 0);
++ BUG_ON(s->length == 0);
+
+- nextneed = need_iommu(dev, addr, s->length);
++ nextneed = need_iommu(dev, addr, s->length);
+
+ /* Handle the previous not yet processed entries */
+ if (i > start) {
+- /* Can only merge when the last chunk ends on a page
+- boundary and the new one doesn't have an offset. */
++ /*
++ * Can only merge when the last chunk ends on a
++ * page boundary and the new one doesn't have an
++ * offset.
++ */
+ if (!iommu_merge || !nextneed || !need || s->offset ||
+ (ps->offset + ps->length) % PAGE_SIZE) {
+ if (dma_map_cont(start_sg, i - start, sgmap,
+@@ -436,6 +472,7 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ error:
+ flush_gart();
+ gart_unmap_sg(dev, sg, out, dir);
+
- blk_rq_unmap_user(bio);
- blk_put_request(rq);
+ /* When it was forced or merged try again in a dumb way */
+ if (force_iommu || iommu_merge) {
+ out = dma_map_sg_nonforce(dev, sg, nents, dir);
+@@ -444,64 +481,68 @@ error:
+ }
+ if (panic_on_overflow)
+ panic("dma_map_sg: overflow on %lu pages\n", pages);
++
+ iommu_full(dev, pages << PAGE_SHIFT, dir);
+ for_each_sg(sg, s, nents, i)
+ s->dma_address = bad_dma_address;
+ return 0;
+-}
++}
-@@ -837,6 +846,7 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
- {
- struct bsg_device *bd = file->private_data;
- int __user *uarg = (int __user *) arg;
-+ int ret;
+ static int no_agp;
- switch (cmd) {
- /*
-@@ -889,12 +899,12 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
- if (rq->next_rq)
- bidi_bio = rq->next_rq->bio;
- blk_execute_rq(bd->queue, NULL, rq, 0);
-- blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);
-+ ret = blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);
+ static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
+-{
+- unsigned long a;
+- if (!iommu_size) {
+- iommu_size = aper_size;
+- if (!no_agp)
+- iommu_size /= 2;
+- }
+-
+- a = aper + iommu_size;
++{
++ unsigned long a;
++
++ if (!iommu_size) {
++ iommu_size = aper_size;
++ if (!no_agp)
++ iommu_size /= 2;
++ }
++
++ a = aper + iommu_size;
+ iommu_size -= round_up(a, LARGE_PAGE_SIZE) - a;
- if (copy_to_user(uarg, &hdr, sizeof(hdr)))
- return -EFAULT;
+- if (iommu_size < 64*1024*1024)
++ if (iommu_size < 64*1024*1024) {
+ printk(KERN_WARNING
+- "PCI-DMA: Warning: Small IOMMU %luMB. Consider increasing the AGP aperture in BIOS\n",iommu_size>>20);
+-
++ "PCI-DMA: Warning: Small IOMMU %luMB."
++ " Consider increasing the AGP aperture in BIOS\n",
++ iommu_size >> 20);
++ }
++
+ return iommu_size;
+-}
++}
-- return 0;
-+ return ret;
- }
- /*
- * block device ioctls
-diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
-index 13553e0..f28d1fb 100644
---- a/block/cfq-iosched.c
-+++ b/block/cfq-iosched.c
-@@ -26,9 +26,9 @@ static const int cfq_slice_async_rq = 2;
- static int cfq_slice_idle = HZ / 125;
+-static __init unsigned read_aperture(struct pci_dev *dev, u32 *size)
+-{
+- unsigned aper_size = 0, aper_base_32;
++static __init unsigned read_aperture(struct pci_dev *dev, u32 *size)
++{
++ unsigned aper_size = 0, aper_base_32, aper_order;
+ u64 aper_base;
+- unsigned aper_order;
+
+- pci_read_config_dword(dev, 0x94, &aper_base_32);
++ pci_read_config_dword(dev, 0x94, &aper_base_32);
+ pci_read_config_dword(dev, 0x90, &aper_order);
+- aper_order = (aper_order >> 1) & 7;
++ aper_order = (aper_order >> 1) & 7;
+
+- aper_base = aper_base_32 & 0x7fff;
++ aper_base = aper_base_32 & 0x7fff;
+ aper_base <<= 25;
+
+- aper_size = (32 * 1024 * 1024) << aper_order;
+- if (aper_base + aper_size > 0x100000000UL || !aper_size)
++ aper_size = (32 * 1024 * 1024) << aper_order;
++ if (aper_base + aper_size > 0x100000000UL || !aper_size)
+ aper_base = 0;
- /*
-- * grace period before allowing idle class to get disk access
-+ * offset from end of service tree
+ *size = aper_size;
+ return aper_base;
+-}
++}
+
+-/*
++/*
+ * Private Northbridge GATT initialization in case we cannot use the
+- * AGP driver for some reason.
++ * AGP driver for some reason.
*/
--#define CFQ_IDLE_GRACE (HZ / 10)
-+#define CFQ_IDLE_DELAY (HZ / 5)
+ static __init int init_k8_gatt(struct agp_kern_info *info)
+-{
++{
++ unsigned aper_size, gatt_size, new_aper_size;
++ unsigned aper_base, new_aper_base;
+ struct pci_dev *dev;
+ void *gatt;
+- unsigned aper_base, new_aper_base;
+- unsigned aper_size, gatt_size, new_aper_size;
+ int i;
- /*
- * below this threshold, we consider thinktime immediate
-@@ -98,8 +98,6 @@ struct cfq_data {
- struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
- struct cfq_queue *async_idle_cfqq;
+ printk(KERN_INFO "PCI-DMA: Disabling AGP.\n");
+@@ -509,75 +550,75 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
+ dev = NULL;
+ for (i = 0; i < num_k8_northbridges; i++) {
+ dev = k8_northbridges[i];
+- new_aper_base = read_aperture(dev, &new_aper_size);
+- if (!new_aper_base)
+- goto nommu;
+-
+- if (!aper_base) {
++ new_aper_base = read_aperture(dev, &new_aper_size);
++ if (!new_aper_base)
++ goto nommu;
++
++ if (!aper_base) {
+ aper_size = new_aper_size;
+ aper_base = new_aper_base;
+- }
+- if (aper_size != new_aper_size || aper_base != new_aper_base)
++ }
++ if (aper_size != new_aper_size || aper_base != new_aper_base)
+ goto nommu;
+ }
+ if (!aper_base)
+- goto nommu;
++ goto nommu;
+ info->aper_base = aper_base;
+- info->aper_size = aper_size>>20;
++ info->aper_size = aper_size >> 20;
+
+- gatt_size = (aper_size >> PAGE_SHIFT) * sizeof(u32);
+- gatt = (void *)__get_free_pages(GFP_KERNEL, get_order(gatt_size));
+- if (!gatt)
++ gatt_size = (aper_size >> PAGE_SHIFT) * sizeof(u32);
++ gatt = (void *)__get_free_pages(GFP_KERNEL, get_order(gatt_size));
++ if (!gatt)
+ panic("Cannot allocate GATT table");
+- if (change_page_attr_addr((unsigned long)gatt, gatt_size >> PAGE_SHIFT, PAGE_KERNEL_NOCACHE))
++ if (set_memory_uc((unsigned long)gatt, gatt_size >> PAGE_SHIFT))
+ panic("Could not set GART PTEs to uncacheable pages");
+- global_flush_tlb();
+
+- memset(gatt, 0, gatt_size);
++ memset(gatt, 0, gatt_size);
+ agp_gatt_table = gatt;
+
+ for (i = 0; i < num_k8_northbridges; i++) {
+- u32 ctl;
+- u32 gatt_reg;
++ u32 gatt_reg;
++ u32 ctl;
+
+ dev = k8_northbridges[i];
+- gatt_reg = __pa(gatt) >> 12;
+- gatt_reg <<= 4;
++ gatt_reg = __pa(gatt) >> 12;
++ gatt_reg <<= 4;
+ pci_write_config_dword(dev, 0x98, gatt_reg);
+- pci_read_config_dword(dev, 0x90, &ctl);
++ pci_read_config_dword(dev, 0x90, &ctl);
-- struct timer_list idle_class_timer;
--
- sector_t last_position;
- unsigned long last_end_request;
+ ctl |= 1;
+ ctl &= ~((1<<4) | (1<<5));
-@@ -199,8 +197,8 @@ CFQ_CFQQ_FNS(sync);
+- pci_write_config_dword(dev, 0x90, ctl);
++ pci_write_config_dword(dev, 0x90, ctl);
+ }
+ flush_gart();
+-
+- printk("PCI-DMA: aperture base @ %x size %u KB\n",aper_base, aper_size>>10);
++
++ printk(KERN_INFO "PCI-DMA: aperture base @ %x size %u KB\n",
++ aper_base, aper_size>>10);
+ return 0;
- static void cfq_dispatch_insert(struct request_queue *, struct request *);
- static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
-- struct task_struct *, gfp_t);
--static struct cfq_io_context *cfq_cic_rb_lookup(struct cfq_data *,
-+ struct io_context *, gfp_t);
-+static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
- struct io_context *);
+ nommu:
+- /* Should not happen anymore */
++ /* Should not happen anymore */
+ printk(KERN_ERR "PCI-DMA: More than 4GB of RAM and no IOMMU\n"
+ KERN_ERR "PCI-DMA: 32bit PCI IO may malfunction.\n");
+- return -1;
+-}
++ return -1;
++}
- static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
-@@ -384,12 +382,15 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
- /*
- * The below is leftmost cache rbtree addon
- */
--static struct rb_node *cfq_rb_first(struct cfq_rb_root *root)
-+static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root)
- {
- if (!root->left)
- root->left = rb_first(&root->rb);
+ extern int agp_amd64_init(void);
-- return root->left;
-+ if (root->left)
-+ return rb_entry(root->left, struct cfq_queue, rb_node);
-+
-+ return NULL;
- }
+ static const struct dma_mapping_ops gart_dma_ops = {
+- .mapping_error = NULL,
+- .map_single = gart_map_single,
+- .map_simple = gart_map_simple,
+- .unmap_single = gart_unmap_single,
+- .sync_single_for_cpu = NULL,
+- .sync_single_for_device = NULL,
+- .sync_single_range_for_cpu = NULL,
+- .sync_single_range_for_device = NULL,
+- .sync_sg_for_cpu = NULL,
+- .sync_sg_for_device = NULL,
+- .map_sg = gart_map_sg,
+- .unmap_sg = gart_unmap_sg,
++ .mapping_error = NULL,
++ .map_single = gart_map_single,
++ .map_simple = gart_map_simple,
++ .unmap_single = gart_unmap_single,
++ .sync_single_for_cpu = NULL,
++ .sync_single_for_device = NULL,
++ .sync_single_range_for_cpu = NULL,
++ .sync_single_range_for_device = NULL,
++ .sync_sg_for_cpu = NULL,
++ .sync_sg_for_device = NULL,
++ .map_sg = gart_map_sg,
++ .unmap_sg = gart_unmap_sg,
+ };
+
+ void gart_iommu_shutdown(void)
+@@ -588,23 +629,23 @@ void gart_iommu_shutdown(void)
+ if (no_agp && (dma_ops != &gart_dma_ops))
+ return;
+
+- for (i = 0; i < num_k8_northbridges; i++) {
+- u32 ctl;
++ for (i = 0; i < num_k8_northbridges; i++) {
++ u32 ctl;
+
+- dev = k8_northbridges[i];
+- pci_read_config_dword(dev, 0x90, &ctl);
++ dev = k8_northbridges[i];
++ pci_read_config_dword(dev, 0x90, &ctl);
- static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
-@@ -446,12 +447,20 @@ static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
- static void cfq_service_tree_add(struct cfq_data *cfqd,
- struct cfq_queue *cfqq, int add_front)
- {
-- struct rb_node **p = &cfqd->service_tree.rb.rb_node;
-- struct rb_node *parent = NULL;
-+ struct rb_node **p, *parent;
-+ struct cfq_queue *__cfqq;
- unsigned long rb_key;
- int left;
+- ctl &= ~1;
++ ctl &= ~1;
-- if (!add_front) {
-+ if (cfq_class_idle(cfqq)) {
-+ rb_key = CFQ_IDLE_DELAY;
-+ parent = rb_last(&cfqd->service_tree.rb);
-+ if (parent && parent != &cfqq->rb_node) {
-+ __cfqq = rb_entry(parent, struct cfq_queue, rb_node);
-+ rb_key += __cfqq->rb_key;
-+ } else
-+ rb_key += jiffies;
-+ } else if (!add_front) {
- rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
- rb_key += cfqq->slice_resid;
- cfqq->slice_resid = 0;
-@@ -469,8 +478,9 @@ static void cfq_service_tree_add(struct cfq_data *cfqd,
+- pci_write_config_dword(dev, 0x90, ctl);
+- }
++ pci_write_config_dword(dev, 0x90, ctl);
++ }
+ }
+
+ void __init gart_iommu_init(void)
+-{
++{
+ struct agp_kern_info info;
+- unsigned long aper_size;
+ unsigned long iommu_start;
++ unsigned long aper_size;
+ unsigned long scratch;
+ long i;
+
+@@ -614,14 +655,14 @@ void __init gart_iommu_init(void)
}
- left = 1;
-+ parent = NULL;
-+ p = &cfqd->service_tree.rb.rb_node;
- while (*p) {
-- struct cfq_queue *__cfqq;
- struct rb_node **n;
+ #ifndef CONFIG_AGP_AMD64
+- no_agp = 1;
++ no_agp = 1;
+ #else
+ /* Makefile puts PCI initialization via subsys_initcall first. */
+ /* Add other K8 AGP bridge drivers here */
+- no_agp = no_agp ||
+- (agp_amd64_init() < 0) ||
++ no_agp = no_agp ||
++ (agp_amd64_init() < 0) ||
+ (agp_copy_info(agp_bridge, &info) < 0);
+-#endif
++#endif
- parent = *p;
-@@ -524,8 +534,7 @@ static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq)
- * add to busy list of queues for service, trying to be fair in ordering
- * the pending list according to last request service
- */
--static inline void
--cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-+static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
- {
- BUG_ON(cfq_cfqq_on_rr(cfqq));
- cfq_mark_cfqq_on_rr(cfqq);
-@@ -538,8 +547,7 @@ cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
- * Called when the cfqq no longer has requests pending, remove it from
- * the service tree.
- */
--static inline void
--cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-+static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
- {
- BUG_ON(!cfq_cfqq_on_rr(cfqq));
- cfq_clear_cfqq_on_rr(cfqq);
-@@ -554,7 +562,7 @@ cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
- /*
- * rb tree support functions
- */
--static inline void cfq_del_rq_rb(struct request *rq)
-+static void cfq_del_rq_rb(struct request *rq)
- {
- struct cfq_queue *cfqq = RQ_CFQQ(rq);
- struct cfq_data *cfqd = cfqq->cfqd;
-@@ -594,8 +602,7 @@ static void cfq_add_rq_rb(struct request *rq)
- BUG_ON(!cfqq->next_rq);
- }
+ if (swiotlb)
+ return;
+@@ -643,77 +684,78 @@ void __init gart_iommu_init(void)
+ }
--static inline void
--cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq)
-+static void cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq)
- {
- elv_rb_del(&cfqq->sort_list, rq);
- cfqq->queued[rq_is_sync(rq)]--;
-@@ -609,7 +616,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
- struct cfq_io_context *cic;
- struct cfq_queue *cfqq;
+ printk(KERN_INFO "PCI-DMA: using GART IOMMU.\n");
+- aper_size = info.aper_size * 1024 * 1024;
+- iommu_size = check_iommu_size(info.aper_base, aper_size);
+- iommu_pages = iommu_size >> PAGE_SHIFT;
+-
+- iommu_gart_bitmap = (void*)__get_free_pages(GFP_KERNEL,
+- get_order(iommu_pages/8));
+- if (!iommu_gart_bitmap)
+- panic("Cannot allocate iommu bitmap\n");
++ aper_size = info.aper_size * 1024 * 1024;
++ iommu_size = check_iommu_size(info.aper_base, aper_size);
++ iommu_pages = iommu_size >> PAGE_SHIFT;
++
++ iommu_gart_bitmap = (void *) __get_free_pages(GFP_KERNEL,
++ get_order(iommu_pages/8));
++ if (!iommu_gart_bitmap)
++ panic("Cannot allocate iommu bitmap\n");
+ memset(iommu_gart_bitmap, 0, iommu_pages/8);
+
+ #ifdef CONFIG_IOMMU_LEAK
+- if (leak_trace) {
+- iommu_leak_tab = (void *)__get_free_pages(GFP_KERNEL,
++ if (leak_trace) {
++ iommu_leak_tab = (void *)__get_free_pages(GFP_KERNEL,
+ get_order(iommu_pages*sizeof(void *)));
+- if (iommu_leak_tab)
+- memset(iommu_leak_tab, 0, iommu_pages * 8);
++ if (iommu_leak_tab)
++ memset(iommu_leak_tab, 0, iommu_pages * 8);
+ else
+- printk("PCI-DMA: Cannot allocate leak trace area\n");
+- }
++ printk(KERN_DEBUG
++ "PCI-DMA: Cannot allocate leak trace area\n");
++ }
+ #endif
-- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
-+ cic = cfq_cic_lookup(cfqd, tsk->io_context);
- if (!cic)
- return NULL;
+- /*
++ /*
+ * Out of IOMMU space handling.
+- * Reserve some invalid pages at the beginning of the GART.
+- */
+- set_bit_string(iommu_gart_bitmap, 0, EMERGENCY_PAGES);
++ * Reserve some invalid pages at the beginning of the GART.
++ */
++ set_bit_string(iommu_gart_bitmap, 0, EMERGENCY_PAGES);
-@@ -721,7 +728,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
- * Lookup the cfqq that this bio will be queued with. Allow
- * merge only if rq is queued there.
+- agp_memory_reserved = iommu_size;
++ agp_memory_reserved = iommu_size;
+ printk(KERN_INFO
+ "PCI-DMA: Reserving %luMB of IOMMU area in the AGP aperture\n",
+- iommu_size>>20);
++ iommu_size >> 20);
+
+- iommu_start = aper_size - iommu_size;
+- iommu_bus_base = info.aper_base + iommu_start;
++ iommu_start = aper_size - iommu_size;
++ iommu_bus_base = info.aper_base + iommu_start;
+ bad_dma_address = iommu_bus_base;
+ iommu_gatt_base = agp_gatt_table + (iommu_start>>PAGE_SHIFT);
+
+- /*
++ /*
+ * Unmap the IOMMU part of the GART. The alias of the page is
+ * always mapped with cache enabled and there is no full cache
+ * coherency across the GART remapping. The unmapping avoids
+ * automatic prefetches from the CPU allocating cache lines in
+ * there. All CPU accesses are done via the direct mapping to
+ * the backing memory. The GART address is only used by PCI
+- * devices.
++ * devices.
*/
-- cic = cfq_cic_rb_lookup(cfqd, current->io_context);
-+ cic = cfq_cic_lookup(cfqd, current->io_context);
- if (!cic)
- return 0;
+ clear_kernel_mapping((unsigned long)__va(iommu_bus_base), iommu_size);
-@@ -732,15 +739,10 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
- return 0;
- }
+- /*
+- * Try to workaround a bug (thanks to BenH)
+- * Set unmapped entries to a scratch page instead of 0.
++ /*
++ * Try to workaround a bug (thanks to BenH)
++ * Set unmapped entries to a scratch page instead of 0.
+ * Any prefetches that hit unmapped entries won't get an bus abort
+ * then.
+ */
+- scratch = get_zeroed_page(GFP_KERNEL);
+- if (!scratch)
++ scratch = get_zeroed_page(GFP_KERNEL);
++ if (!scratch)
+ panic("Cannot allocate iommu scratch page");
+ gart_unmapped_entry = GPTE_ENCODE(__pa(scratch));
+- for (i = EMERGENCY_PAGES; i < iommu_pages; i++)
++ for (i = EMERGENCY_PAGES; i < iommu_pages; i++)
+ iommu_gatt_base[i] = gart_unmapped_entry;
--static inline void
--__cfq_set_active_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq)
-+static void __cfq_set_active_queue(struct cfq_data *cfqd,
-+ struct cfq_queue *cfqq)
+ flush_gart();
+ dma_ops = &gart_dma_ops;
+-}
++}
+
+ void __init gart_parse_options(char *p)
{
- if (cfqq) {
-- /*
-- * stop potential idle class queues waiting service
-- */
-- del_timer(&cfqd->idle_class_timer);
--
- cfqq->slice_end = 0;
- cfq_clear_cfqq_must_alloc_slice(cfqq);
- cfq_clear_cfqq_fifo_expire(cfqq);
-@@ -789,47 +791,16 @@ static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
- __cfq_slice_expired(cfqd, cfqq, timed_out);
- }
+ int arg;
--static int start_idle_class_timer(struct cfq_data *cfqd)
--{
-- unsigned long end = cfqd->last_end_request + CFQ_IDLE_GRACE;
-- unsigned long now = jiffies;
--
-- if (time_before(now, end) &&
-- time_after_eq(now, cfqd->last_end_request)) {
-- mod_timer(&cfqd->idle_class_timer, end);
-- return 1;
-- }
--
-- return 0;
--}
+ #ifdef CONFIG_IOMMU_LEAK
+- if (!strncmp(p,"leak",4)) {
++ if (!strncmp(p, "leak", 4)) {
+ leak_trace = 1;
+ p += 4;
+ if (*p == '=') ++p;
+@@ -723,18 +765,18 @@ void __init gart_parse_options(char *p)
+ #endif
+ if (isdigit(*p) && get_option(&p, &arg))
+ iommu_size = arg;
+- if (!strncmp(p, "fullflush",8))
++ if (!strncmp(p, "fullflush", 8))
+ iommu_fullflush = 1;
+- if (!strncmp(p, "nofullflush",11))
++ if (!strncmp(p, "nofullflush", 11))
+ iommu_fullflush = 0;
+- if (!strncmp(p,"noagp",5))
++ if (!strncmp(p, "noagp", 5))
+ no_agp = 1;
+- if (!strncmp(p, "noaperture",10))
++ if (!strncmp(p, "noaperture", 10))
+ fix_aperture = 0;
+ /* duplicated from pci-dma.c */
+- if (!strncmp(p,"force",5))
++ if (!strncmp(p, "force", 5))
+ gart_iommu_aperture_allowed = 1;
+- if (!strncmp(p,"allowed",7))
++ if (!strncmp(p, "allowed", 7))
+ gart_iommu_aperture_allowed = 1;
+ if (!strncmp(p, "memaper", 7)) {
+ fallback_aper_force = 1;
+diff --git a/arch/x86/kernel/pci-swiotlb_64.c b/arch/x86/kernel/pci-swiotlb_64.c
+index 102866d..82a0a67 100644
+--- a/arch/x86/kernel/pci-swiotlb_64.c
++++ b/arch/x86/kernel/pci-swiotlb_64.c
+@@ -10,7 +10,6 @@
+ #include <asm/dma.h>
+
+ int swiotlb __read_mostly;
+-EXPORT_SYMBOL(swiotlb);
+
+ const struct dma_mapping_ops swiotlb_dma_ops = {
+ .mapping_error = swiotlb_dma_mapping_error,
+diff --git a/arch/x86/kernel/pmtimer_64.c b/arch/x86/kernel/pmtimer_64.c
+index ae8f912..b112406 100644
+--- a/arch/x86/kernel/pmtimer_64.c
++++ b/arch/x86/kernel/pmtimer_64.c
+@@ -19,13 +19,13 @@
+ #include <linux/time.h>
+ #include <linux/init.h>
+ #include <linux/cpumask.h>
++#include <linux/acpi_pmtmr.h>
++
+ #include <asm/io.h>
+ #include <asm/proto.h>
+ #include <asm/msr.h>
+ #include <asm/vsyscall.h>
+
+-#define ACPI_PM_MASK 0xFFFFFF /* limit it to 24 bits */
-
- /*
- * Get next queue for service. Unless we have a queue preemption,
- * we'll simply select the first cfqq in the service tree.
- */
- static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd)
+ static inline u32 cyc2us(u32 cycles)
{
-- struct cfq_queue *cfqq;
-- struct rb_node *n;
--
- if (RB_EMPTY_ROOT(&cfqd->service_tree.rb))
- return NULL;
+ /* The Power Management Timer ticks at 3.579545 ticks per microsecond.
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 46d391d..968371a 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -55,6 +55,7 @@
-- n = cfq_rb_first(&cfqd->service_tree);
-- cfqq = rb_entry(n, struct cfq_queue, rb_node);
--
-- if (cfq_class_idle(cfqq)) {
-- /*
-- * if we have idle queues and no rt or be queues had
-- * pending requests, either allow immediate service if
-- * the grace period has passed or arm the idle grace
-- * timer
-- */
-- if (start_idle_class_timer(cfqd))
-- cfqq = NULL;
-- }
--
-- return cfqq;
-+ return cfq_rb_first(&cfqd->service_tree);
- }
+ #include <asm/tlbflush.h>
+ #include <asm/cpu.h>
++#include <asm/kdebug.h>
- /*
-@@ -895,7 +866,7 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
- * task has exited, don't wait
- */
- cic = cfqd->active_cic;
-- if (!cic || !cic->ioc->task)
-+ if (!cic || !atomic_read(&cic->ioc->nr_tasks))
- return;
+ asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
- /*
-@@ -939,7 +910,7 @@ static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
- /*
- * return expired entry, or NULL to just start from scratch in rbtree
+@@ -74,7 +75,7 @@ EXPORT_PER_CPU_SYMBOL(cpu_number);
*/
--static inline struct request *cfq_check_fifo(struct cfq_queue *cfqq)
-+static struct request *cfq_check_fifo(struct cfq_queue *cfqq)
+ unsigned long thread_saved_pc(struct task_struct *tsk)
{
- struct cfq_data *cfqd = cfqq->cfqd;
- struct request *rq;
-@@ -1068,7 +1039,7 @@ __cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq,
- return dispatched;
+- return ((unsigned long *)tsk->thread.esp)[3];
++ return ((unsigned long *)tsk->thread.sp)[3];
}
--static inline int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
-+static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
- {
- int dispatched = 0;
+ /*
+@@ -113,10 +114,19 @@ void default_idle(void)
+ smp_mb();
-@@ -1087,14 +1058,11 @@ static inline int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
+ local_irq_disable();
+- if (!need_resched())
++ if (!need_resched()) {
++ ktime_t t0, t1;
++ u64 t0n, t1n;
++
++ t0 = ktime_get();
++ t0n = ktime_to_ns(t0);
+ safe_halt(); /* enables interrupts racelessly */
+- else
+- local_irq_enable();
++ local_irq_disable();
++ t1 = ktime_get();
++ t1n = ktime_to_ns(t1);
++ sched_clock_idle_wakeup_event(t1n - t0n);
++ }
++ local_irq_enable();
+ current_thread_info()->status |= TS_POLLING;
+ } else {
+ /* loop is done by the caller */
+@@ -132,7 +142,7 @@ EXPORT_SYMBOL(default_idle);
+ * to poll the ->work.need_resched flag instead of waiting for the
+ * cross-CPU IPI to arrive. Use this option with caution.
*/
- static int cfq_forced_dispatch(struct cfq_data *cfqd)
+-static void poll_idle (void)
++static void poll_idle(void)
{
-+ struct cfq_queue *cfqq;
- int dispatched = 0;
-- struct rb_node *n;
--
-- while ((n = cfq_rb_first(&cfqd->service_tree)) != NULL) {
-- struct cfq_queue *cfqq = rb_entry(n, struct cfq_queue, rb_node);
-
-+ while ((cfqq = cfq_rb_first(&cfqd->service_tree)) != NULL)
- dispatched += __cfq_forced_dispatch_cfqq(cfqq);
-- }
-
- cfq_slice_expired(cfqd, 0);
-
-@@ -1170,20 +1138,69 @@ static void cfq_put_queue(struct cfq_queue *cfqq)
- kmem_cache_free(cfq_pool, cfqq);
+ cpu_relax();
}
+@@ -188,6 +198,9 @@ void cpu_idle(void)
+ rmb();
+ idle = pm_idle;
--static void cfq_free_io_context(struct io_context *ioc)
-+/*
-+ * Call func for each cic attached to this ioc. Returns number of cic's seen.
-+ */
-+#define CIC_GANG_NR 16
-+static unsigned int
-+call_for_each_cic(struct io_context *ioc,
-+ void (*func)(struct io_context *, struct cfq_io_context *))
++ if (rcu_pending(cpu))
++ rcu_check_callbacks(cpu, 0);
++
+ if (!idle)
+ idle = default_idle;
+
+@@ -255,13 +268,13 @@ EXPORT_SYMBOL_GPL(cpu_idle_wait);
+ * New with Core Duo processors, MWAIT can take some hints based on CPU
+ * capability.
+ */
+-void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
++void mwait_idle_with_hints(unsigned long ax, unsigned long cx)
{
-- struct cfq_io_context *__cic;
-- struct rb_node *n;
-- int freed = 0;
-+ struct cfq_io_context *cics[CIC_GANG_NR];
-+ unsigned long index = 0;
-+ unsigned int called = 0;
-+ int nr;
+ if (!need_resched()) {
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+ smp_mb();
+ if (!need_resched())
+- __mwait(eax, ecx);
++ __mwait(ax, cx);
+ }
+ }
-- ioc->ioc_data = NULL;
-+ rcu_read_lock();
+@@ -272,19 +285,37 @@ static void mwait_idle(void)
+ mwait_idle_with_hints(0, 0);
+ }
-- while ((n = rb_first(&ioc->cic_root)) != NULL) {
-- __cic = rb_entry(n, struct cfq_io_context, rb_node);
-- rb_erase(&__cic->rb_node, &ioc->cic_root);
-- kmem_cache_free(cfq_ioc_pool, __cic);
-- freed++;
-- }
-+ do {
-+ int i;
-+
-+ /*
-+ * Perhaps there's a better way - this just gang lookups from
-+ * 0 to the end, restarting after each CIC_GANG_NR from the
-+ * last key + 1.
-+ */
-+ nr = radix_tree_gang_lookup(&ioc->radix_root, (void **) cics,
-+ index, CIC_GANG_NR);
-+ if (!nr)
-+ break;
-+
-+ called += nr;
-+ index = 1 + (unsigned long) cics[nr - 1]->key;
-+
-+ for (i = 0; i < nr; i++)
-+ func(ioc, cics[i]);
-+ } while (nr == CIC_GANG_NR);
-+
-+ rcu_read_unlock();
-+
-+ return called;
-+}
-+
-+static void cic_free_func(struct io_context *ioc, struct cfq_io_context *cic)
++static int __cpuinit mwait_usable(const struct cpuinfo_x86 *c)
+{
-+ unsigned long flags;
-+
-+ BUG_ON(!cic->dead_key);
-+
-+ spin_lock_irqsave(&ioc->lock, flags);
-+ radix_tree_delete(&ioc->radix_root, cic->dead_key);
-+ spin_unlock_irqrestore(&ioc->lock, flags);
-+
-+ kmem_cache_free(cfq_ioc_pool, cic);
++ if (force_mwait)
++ return 1;
++ /* Any C1 states supported? */
++ return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
+}
+
-+static void cfq_free_io_context(struct io_context *ioc)
-+{
-+ int freed;
+ void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
+ {
+- if (cpu_has(c, X86_FEATURE_MWAIT)) {
+- printk("monitor/mwait feature present.\n");
++ static int selected;
+
-+ /*
-+ * ioc->refcount is zero here, so no more cic's are allowed to be
-+ * linked into this ioc. So it should be ok to iterate over the known
-+ * list, we will see all cic's since no new ones are added.
-+ */
-+ freed = call_for_each_cic(ioc, cic_free_func);
++ if (selected)
++ return;
++#ifdef CONFIG_X86_SMP
++ if (pm_idle == poll_idle && smp_num_siblings > 1) {
++ printk(KERN_WARNING "WARNING: polling idle and HT enabled,"
++ " performance may degrade.\n");
++ }
++#endif
++ if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
+ /*
+ * Skip, if setup has overridden idle.
+ * One CPU supports mwait => All CPUs supports mwait
+ */
+ if (!pm_idle) {
+- printk("using mwait in idle threads.\n");
++ printk(KERN_INFO "using mwait in idle threads.\n");
+ pm_idle = mwait_idle;
+ }
+ }
++ selected = 1;
+ }
+
+ static int __init idle_setup(char *str)
+@@ -292,10 +323,6 @@ static int __init idle_setup(char *str)
+ if (!strcmp(str, "poll")) {
+ printk("using polling idle threads.\n");
+ pm_idle = poll_idle;
+-#ifdef CONFIG_X86_SMP
+- if (smp_num_siblings > 1)
+- printk("WARNING: polling idle and HT enabled, performance may degrade.\n");
+-#endif
+ } else if (!strcmp(str, "mwait"))
+ force_mwait = 1;
+ else
+@@ -310,15 +337,15 @@ void __show_registers(struct pt_regs *regs, int all)
+ {
+ unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
+ unsigned long d0, d1, d2, d3, d6, d7;
+- unsigned long esp;
++ unsigned long sp;
+ unsigned short ss, gs;
- elv_ioc_count_mod(ioc_count, -freed);
+ if (user_mode_vm(regs)) {
+- esp = regs->esp;
+- ss = regs->xss & 0xffff;
++ sp = regs->sp;
++ ss = regs->ss & 0xffff;
+ savesegment(gs, gs);
+ } else {
+- esp = (unsigned long) (&regs->esp);
++ sp = (unsigned long) (&regs->sp);
+ savesegment(ss, ss);
+ savesegment(gs, gs);
+ }
+@@ -331,17 +358,17 @@ void __show_registers(struct pt_regs *regs, int all)
+ init_utsname()->version);
+
+ printk("EIP: %04x:[<%08lx>] EFLAGS: %08lx CPU: %d\n",
+- 0xffff & regs->xcs, regs->eip, regs->eflags,
++ 0xffff & regs->cs, regs->ip, regs->flags,
+ smp_processor_id());
+- print_symbol("EIP is at %s\n", regs->eip);
++ print_symbol("EIP is at %s\n", regs->ip);
+
+ printk("EAX: %08lx EBX: %08lx ECX: %08lx EDX: %08lx\n",
+- regs->eax, regs->ebx, regs->ecx, regs->edx);
++ regs->ax, regs->bx, regs->cx, regs->dx);
+ printk("ESI: %08lx EDI: %08lx EBP: %08lx ESP: %08lx\n",
+- regs->esi, regs->edi, regs->ebp, esp);
++ regs->si, regs->di, regs->bp, sp);
+ printk(" DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x\n",
+- regs->xds & 0xffff, regs->xes & 0xffff,
+- regs->xfs & 0xffff, gs, ss);
++ regs->ds & 0xffff, regs->es & 0xffff,
++ regs->fs & 0xffff, gs, ss);
-@@ -1205,7 +1222,12 @@ static void __cfq_exit_single_io_context(struct cfq_data *cfqd,
- struct cfq_io_context *cic)
+ if (!all)
+ return;
+@@ -369,12 +396,12 @@ void __show_registers(struct pt_regs *regs, int all)
+ void show_regs(struct pt_regs *regs)
{
- list_del_init(&cic->queue_list);
-+
-+ /*
-+ * Make sure key == NULL is seen for dead queues
-+ */
- smp_wmb();
-+ cic->dead_key = (unsigned long) cic->key;
- cic->key = NULL;
-
- if (cic->cfqq[ASYNC]) {
-@@ -1219,16 +1241,18 @@ static void __cfq_exit_single_io_context(struct cfq_data *cfqd,
- }
+ __show_registers(regs, 1);
+- show_trace(NULL, regs, &regs->esp);
++ show_trace(NULL, regs, &regs->sp, regs->bp);
}
--static void cfq_exit_single_io_context(struct cfq_io_context *cic)
-+static void cfq_exit_single_io_context(struct io_context *ioc,
-+ struct cfq_io_context *cic)
+ /*
+- * This gets run with %ebx containing the
+- * function to call, and %edx containing
++ * This gets run with %bx containing the
++ * function to call, and %dx containing
+ * the "args".
+ */
+ extern void kernel_thread_helper(void);
+@@ -388,16 +415,16 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+
+ memset(&regs, 0, sizeof(regs));
+
+- regs.ebx = (unsigned long) fn;
+- regs.edx = (unsigned long) arg;
++ regs.bx = (unsigned long) fn;
++ regs.dx = (unsigned long) arg;
+
+- regs.xds = __USER_DS;
+- regs.xes = __USER_DS;
+- regs.xfs = __KERNEL_PERCPU;
+- regs.orig_eax = -1;
+- regs.eip = (unsigned long) kernel_thread_helper;
+- regs.xcs = __KERNEL_CS | get_kernel_rpl();
+- regs.eflags = X86_EFLAGS_IF | X86_EFLAGS_SF | X86_EFLAGS_PF | 0x2;
++ regs.ds = __USER_DS;
++ regs.es = __USER_DS;
++ regs.fs = __KERNEL_PERCPU;
++ regs.orig_ax = -1;
++ regs.ip = (unsigned long) kernel_thread_helper;
++ regs.cs = __KERNEL_CS | get_kernel_rpl();
++ regs.flags = X86_EFLAGS_IF | X86_EFLAGS_SF | X86_EFLAGS_PF | 0x2;
+
+ /* Ok, create the new process.. */
+ return do_fork(flags | CLONE_VM | CLONE_UNTRACED, 0, &regs, 0, NULL, NULL);
+@@ -435,7 +462,12 @@ void flush_thread(void)
{
- struct cfq_data *cfqd = cic->key;
+ struct task_struct *tsk = current;
- if (cfqd) {
- struct request_queue *q = cfqd->queue;
-+ unsigned long flags;
+- memset(tsk->thread.debugreg, 0, sizeof(unsigned long)*8);
++ tsk->thread.debugreg0 = 0;
++ tsk->thread.debugreg1 = 0;
++ tsk->thread.debugreg2 = 0;
++ tsk->thread.debugreg3 = 0;
++ tsk->thread.debugreg6 = 0;
++ tsk->thread.debugreg7 = 0;
+ memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
+ clear_tsk_thread_flag(tsk, TIF_DEBUG);
+ /*
+@@ -460,7 +492,7 @@ void prepare_to_copy(struct task_struct *tsk)
+ unlazy_fpu(tsk);
+ }
+
+-int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
++int copy_thread(int nr, unsigned long clone_flags, unsigned long sp,
+ unsigned long unused,
+ struct task_struct * p, struct pt_regs * regs)
+ {
+@@ -470,15 +502,15 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
+
+ childregs = task_pt_regs(p);
+ *childregs = *regs;
+- childregs->eax = 0;
+- childregs->esp = esp;
++ childregs->ax = 0;
++ childregs->sp = sp;
-- spin_lock_irq(q->queue_lock);
-+ spin_lock_irqsave(q->queue_lock, flags);
- __cfq_exit_single_io_context(cfqd, cic);
-- spin_unlock_irq(q->queue_lock);
-+ spin_unlock_irqrestore(q->queue_lock, flags);
+- p->thread.esp = (unsigned long) childregs;
+- p->thread.esp0 = (unsigned long) (childregs+1);
++ p->thread.sp = (unsigned long) childregs;
++ p->thread.sp0 = (unsigned long) (childregs+1);
+
+- p->thread.eip = (unsigned long) ret_from_fork;
++ p->thread.ip = (unsigned long) ret_from_fork;
+
+- savesegment(gs,p->thread.gs);
++ savesegment(gs, p->thread.gs);
+
+ tsk = current;
+ if (unlikely(test_tsk_thread_flag(tsk, TIF_IO_BITMAP))) {
+@@ -491,32 +523,15 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
+ set_tsk_thread_flag(p, TIF_IO_BITMAP);
}
- }
-@@ -1238,21 +1262,8 @@ static void cfq_exit_single_io_context(struct cfq_io_context *cic)
- */
- static void cfq_exit_io_context(struct io_context *ioc)
- {
-- struct cfq_io_context *__cic;
-- struct rb_node *n;
++ err = 0;
++
+ /*
+ * Set a new TLS for the child thread?
+ */
+- if (clone_flags & CLONE_SETTLS) {
+- struct desc_struct *desc;
+- struct user_desc info;
+- int idx;
-
-- ioc->ioc_data = NULL;
+- err = -EFAULT;
+- if (copy_from_user(&info, (void __user *)childregs->esi, sizeof(info)))
+- goto out;
+- err = -EINVAL;
+- if (LDT_empty(&info))
+- goto out;
-
-- /*
-- * put the reference this task is holding to the various queues
-- */
-- n = rb_first(&ioc->cic_root);
-- while (n != NULL) {
-- __cic = rb_entry(n, struct cfq_io_context, rb_node);
+- idx = info.entry_number;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- goto out;
-
-- cfq_exit_single_io_context(__cic);
-- n = rb_next(n);
+- desc = p->thread.tls_array + idx - GDT_ENTRY_TLS_MIN;
+- desc->a = LDT_entry_a(&info);
+- desc->b = LDT_entry_b(&info);
- }
-+ rcu_assign_pointer(ioc->ioc_data, NULL);
-+ call_for_each_cic(ioc, cfq_exit_single_io_context);
- }
-
- static struct cfq_io_context *
-@@ -1273,7 +1284,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
- return cic;
- }
++ if (clone_flags & CLONE_SETTLS)
++ err = do_set_thread_area(p, -1,
++ (struct user_desc __user *)childregs->si, 0);
--static void cfq_init_prio_data(struct cfq_queue *cfqq)
-+static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
+- err = 0;
+- out:
+ if (err && p->thread.io_bitmap_ptr) {
+ kfree(p->thread.io_bitmap_ptr);
+ p->thread.io_bitmap_max = 0;
+@@ -529,62 +544,52 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
+ */
+ void dump_thread(struct pt_regs * regs, struct user * dump)
{
- struct task_struct *tsk = current;
- int ioprio_class;
-@@ -1281,7 +1292,7 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
- if (!cfq_cfqq_prio_changed(cfqq))
- return;
+- int i;
++ u16 gs;
-- ioprio_class = IOPRIO_PRIO_CLASS(tsk->ioprio);
-+ ioprio_class = IOPRIO_PRIO_CLASS(ioc->ioprio);
- switch (ioprio_class) {
- default:
- printk(KERN_ERR "cfq: bad prio %x\n", ioprio_class);
-@@ -1293,11 +1304,11 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
- cfqq->ioprio_class = IOPRIO_CLASS_BE;
- break;
- case IOPRIO_CLASS_RT:
-- cfqq->ioprio = task_ioprio(tsk);
-+ cfqq->ioprio = task_ioprio(ioc);
- cfqq->ioprio_class = IOPRIO_CLASS_RT;
- break;
- case IOPRIO_CLASS_BE:
-- cfqq->ioprio = task_ioprio(tsk);
-+ cfqq->ioprio = task_ioprio(ioc);
- cfqq->ioprio_class = IOPRIO_CLASS_BE;
- break;
- case IOPRIO_CLASS_IDLE:
-@@ -1316,7 +1327,7 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
- cfq_clear_cfqq_prio_changed(cfqq);
- }
+ /* changed the size calculations - should hopefully work better. lbt */
+ dump->magic = CMAGIC;
+ dump->start_code = 0;
+- dump->start_stack = regs->esp & ~(PAGE_SIZE - 1);
++ dump->start_stack = regs->sp & ~(PAGE_SIZE - 1);
+ dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
+ dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
+ dump->u_dsize -= dump->u_tsize;
+ dump->u_ssize = 0;
+- for (i = 0; i < 8; i++)
+- dump->u_debugreg[i] = current->thread.debugreg[i];
++ dump->u_debugreg[0] = current->thread.debugreg0;
++ dump->u_debugreg[1] = current->thread.debugreg1;
++ dump->u_debugreg[2] = current->thread.debugreg2;
++ dump->u_debugreg[3] = current->thread.debugreg3;
++ dump->u_debugreg[4] = 0;
++ dump->u_debugreg[5] = 0;
++ dump->u_debugreg[6] = current->thread.debugreg6;
++ dump->u_debugreg[7] = current->thread.debugreg7;
+
+ if (dump->start_stack < TASK_SIZE)
+ dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> PAGE_SHIFT;
+
+- dump->regs.ebx = regs->ebx;
+- dump->regs.ecx = regs->ecx;
+- dump->regs.edx = regs->edx;
+- dump->regs.esi = regs->esi;
+- dump->regs.edi = regs->edi;
+- dump->regs.ebp = regs->ebp;
+- dump->regs.eax = regs->eax;
+- dump->regs.ds = regs->xds;
+- dump->regs.es = regs->xes;
+- dump->regs.fs = regs->xfs;
+- savesegment(gs,dump->regs.gs);
+- dump->regs.orig_eax = regs->orig_eax;
+- dump->regs.eip = regs->eip;
+- dump->regs.cs = regs->xcs;
+- dump->regs.eflags = regs->eflags;
+- dump->regs.esp = regs->esp;
+- dump->regs.ss = regs->xss;
++ dump->regs.bx = regs->bx;
++ dump->regs.cx = regs->cx;
++ dump->regs.dx = regs->dx;
++ dump->regs.si = regs->si;
++ dump->regs.di = regs->di;
++ dump->regs.bp = regs->bp;
++ dump->regs.ax = regs->ax;
++ dump->regs.ds = (u16)regs->ds;
++ dump->regs.es = (u16)regs->es;
++ dump->regs.fs = (u16)regs->fs;
++ savesegment(gs,gs);
++ dump->regs.orig_ax = regs->orig_ax;
++ dump->regs.ip = regs->ip;
++ dump->regs.cs = (u16)regs->cs;
++ dump->regs.flags = regs->flags;
++ dump->regs.sp = regs->sp;
++ dump->regs.ss = (u16)regs->ss;
--static inline void changed_ioprio(struct cfq_io_context *cic)
-+static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
- {
- struct cfq_data *cfqd = cic->key;
- struct cfq_queue *cfqq;
-@@ -1330,8 +1341,7 @@ static inline void changed_ioprio(struct cfq_io_context *cic)
- cfqq = cic->cfqq[ASYNC];
- if (cfqq) {
- struct cfq_queue *new_cfqq;
-- new_cfqq = cfq_get_queue(cfqd, ASYNC, cic->ioc->task,
-- GFP_ATOMIC);
-+ new_cfqq = cfq_get_queue(cfqd, ASYNC, cic->ioc, GFP_ATOMIC);
- if (new_cfqq) {
- cic->cfqq[ASYNC] = new_cfqq;
- cfq_put_queue(cfqq);
-@@ -1347,29 +1357,19 @@ static inline void changed_ioprio(struct cfq_io_context *cic)
+ dump->u_fpvalid = dump_fpu (regs, &dump->i387);
+ }
+ EXPORT_SYMBOL(dump_thread);
- static void cfq_ioc_set_ioprio(struct io_context *ioc)
- {
-- struct cfq_io_context *cic;
-- struct rb_node *n;
+-/*
+- * Capture the user space registers if the task is not running (in user space)
+- */
+-int dump_task_regs(struct task_struct *tsk, elf_gregset_t *regs)
+-{
+- struct pt_regs ptregs = *task_pt_regs(tsk);
+- ptregs.xcs &= 0xffff;
+- ptregs.xds &= 0xffff;
+- ptregs.xes &= 0xffff;
+- ptregs.xss &= 0xffff;
-
-+ call_for_each_cic(ioc, changed_ioprio);
- ioc->ioprio_changed = 0;
+- elf_core_copy_regs(regs, &ptregs);
-
-- n = rb_first(&ioc->cic_root);
-- while (n != NULL) {
-- cic = rb_entry(n, struct cfq_io_context, rb_node);
+- return 1;
+-}
-
-- changed_ioprio(cic);
-- n = rb_next(n);
-- }
+ #ifdef CONFIG_SECCOMP
+-void hard_disable_TSC(void)
++static void hard_disable_TSC(void)
+ {
+ write_cr4(read_cr4() | X86_CR4_TSD);
}
-
- static struct cfq_queue *
- cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
-- struct task_struct *tsk, gfp_t gfp_mask)
-+ struct io_context *ioc, gfp_t gfp_mask)
+@@ -599,7 +604,7 @@ void disable_TSC(void)
+ hard_disable_TSC();
+ preempt_enable();
+ }
+-void hard_enable_TSC(void)
++static void hard_enable_TSC(void)
{
- struct cfq_queue *cfqq, *new_cfqq = NULL;
- struct cfq_io_context *cic;
-
- retry:
-- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
-+ cic = cfq_cic_lookup(cfqd, ioc);
- /* cic always exists here */
- cfqq = cic_to_cfqq(cic, is_sync);
-
-@@ -1404,15 +1404,16 @@ retry:
- atomic_set(&cfqq->ref, 0);
- cfqq->cfqd = cfqd;
-
-- if (is_sync) {
-- cfq_mark_cfqq_idle_window(cfqq);
-- cfq_mark_cfqq_sync(cfqq);
-- }
--
- cfq_mark_cfqq_prio_changed(cfqq);
- cfq_mark_cfqq_queue_new(cfqq);
-
-- cfq_init_prio_data(cfqq);
-+ cfq_init_prio_data(cfqq, ioc);
-+
-+ if (is_sync) {
-+ if (!cfq_class_idle(cfqq))
-+ cfq_mark_cfqq_idle_window(cfqq);
-+ cfq_mark_cfqq_sync(cfqq);
-+ }
- }
-
- if (new_cfqq)
-@@ -1439,11 +1440,11 @@ cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
+ write_cr4(read_cr4() & ~X86_CR4_TSD);
}
-
- static struct cfq_queue *
--cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
-+cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
- gfp_t gfp_mask)
+@@ -609,18 +614,32 @@ static noinline void
+ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ struct tss_struct *tss)
{
-- const int ioprio = task_ioprio(tsk);
-- const int ioprio_class = task_ioprio_class(tsk);
-+ const int ioprio = task_ioprio(ioc);
-+ const int ioprio_class = task_ioprio_class(ioc);
- struct cfq_queue **async_cfqq = NULL;
- struct cfq_queue *cfqq = NULL;
+- struct thread_struct *next;
++ struct thread_struct *prev, *next;
++ unsigned long debugctl;
-@@ -1453,7 +1454,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
++ prev = &prev_p->thread;
+ next = &next_p->thread;
+
++ debugctl = prev->debugctlmsr;
++ if (next->ds_area_msr != prev->ds_area_msr) {
++ /* we clear debugctl to make sure DS
++ * is not in use when we change it */
++ debugctl = 0;
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, 0);
++ wrmsr(MSR_IA32_DS_AREA, next->ds_area_msr, 0);
++ }
++
++ if (next->debugctlmsr != debugctl)
++ wrmsr(MSR_IA32_DEBUGCTLMSR, next->debugctlmsr, 0);
++
+ if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
+- set_debugreg(next->debugreg[0], 0);
+- set_debugreg(next->debugreg[1], 1);
+- set_debugreg(next->debugreg[2], 2);
+- set_debugreg(next->debugreg[3], 3);
++ set_debugreg(next->debugreg0, 0);
++ set_debugreg(next->debugreg1, 1);
++ set_debugreg(next->debugreg2, 2);
++ set_debugreg(next->debugreg3, 3);
+ /* no 4 and 5 */
+- set_debugreg(next->debugreg[6], 6);
+- set_debugreg(next->debugreg[7], 7);
++ set_debugreg(next->debugreg6, 6);
++ set_debugreg(next->debugreg7, 7);
}
- if (!cfqq) {
-- cfqq = cfq_find_alloc_queue(cfqd, is_sync, tsk, gfp_mask);
-+ cfqq = cfq_find_alloc_queue(cfqd, is_sync, ioc, gfp_mask);
- if (!cfqq)
- return NULL;
+ #ifdef CONFIG_SECCOMP
+@@ -634,6 +653,13 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
}
-@@ -1470,28 +1471,42 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
- return cfqq;
- }
+ #endif
-+static void cfq_cic_free(struct cfq_io_context *cic)
-+{
-+ kmem_cache_free(cfq_ioc_pool, cic);
-+ elv_ioc_count_dec(ioc_count);
-+
-+ if (ioc_gone && !elv_ioc_count_read(ioc_count))
-+ complete(ioc_gone);
-+}
-+
- /*
- * We drop cfq io contexts lazily, so we may find a dead one.
- */
- static void
--cfq_drop_dead_cic(struct io_context *ioc, struct cfq_io_context *cic)
-+cfq_drop_dead_cic(struct cfq_data *cfqd, struct io_context *ioc,
-+ struct cfq_io_context *cic)
- {
-+ unsigned long flags;
++ if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS))
++ ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS);
+
- WARN_ON(!list_empty(&cic->queue_list));
-
-+ spin_lock_irqsave(&ioc->lock, flags);
++ if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS))
++ ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES);
+
- if (ioc->ioc_data == cic)
-- ioc->ioc_data = NULL;
-+ rcu_assign_pointer(ioc->ioc_data, NULL);
-
-- rb_erase(&cic->rb_node, &ioc->cic_root);
-- kmem_cache_free(cfq_ioc_pool, cic);
-- elv_ioc_count_dec(ioc_count);
-+ radix_tree_delete(&ioc->radix_root, (unsigned long) cfqd);
-+ spin_unlock_irqrestore(&ioc->lock, flags);
+
-+ cfq_cic_free(cic);
- }
-
- static struct cfq_io_context *
--cfq_cic_rb_lookup(struct cfq_data *cfqd, struct io_context *ioc)
-+cfq_cic_lookup(struct cfq_data *cfqd, struct io_context *ioc)
+ if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
+ /*
+ * Disable the bitmap via an invalid offset. We still cache
+@@ -687,11 +713,11 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ * More important, however, is the fact that this allows us much
+ * more flexibility.
+ *
+- * The return value (in %eax) will be the "prev" task after
++ * The return value (in %ax) will be the "prev" task after
+ * the task-switch, and shows up in ret_from_fork in entry.S,
+ * for example.
+ */
+-struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
++struct task_struct * __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
-- struct rb_node *n;
- struct cfq_io_context *cic;
-- void *k, *key = cfqd;
-+ void *k;
-
- if (unlikely(!ioc))
- return NULL;
-@@ -1499,74 +1514,64 @@ cfq_cic_rb_lookup(struct cfq_data *cfqd, struct io_context *ioc)
+ struct thread_struct *prev = &prev_p->thread,
+ *next = &next_p->thread;
+@@ -710,7 +736,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
/*
- * we maintain a last-hit cache, to avoid browsing over the tree
+ * Reload esp0.
*/
-- cic = ioc->ioc_data;
-+ cic = rcu_dereference(ioc->ioc_data);
- if (cic && cic->key == cfqd)
- return cic;
-
--restart:
-- n = ioc->cic_root.rb_node;
-- while (n) {
-- cic = rb_entry(n, struct cfq_io_context, rb_node);
-+ do {
-+ rcu_read_lock();
-+ cic = radix_tree_lookup(&ioc->radix_root, (unsigned long) cfqd);
-+ rcu_read_unlock();
-+ if (!cic)
-+ break;
- /* ->key must be copied to avoid race with cfq_exit_queue() */
- k = cic->key;
- if (unlikely(!k)) {
-- cfq_drop_dead_cic(ioc, cic);
-- goto restart;
-+ cfq_drop_dead_cic(cfqd, ioc, cic);
-+ continue;
- }
-
-- if (key < k)
-- n = n->rb_left;
-- else if (key > k)
-- n = n->rb_right;
-- else {
-- ioc->ioc_data = cic;
-- return cic;
-- }
-- }
-+ rcu_assign_pointer(ioc->ioc_data, cic);
-+ break;
-+ } while (1);
+- load_esp0(tss, next);
++ load_sp0(tss, next);
-- return NULL;
-+ return cic;
- }
+ /*
+ * Save away %gs. No need to save %fs, as it was saved on the
+@@ -774,7 +800,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
--static inline void
--cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
-- struct cfq_io_context *cic)
-+/*
-+ * Add cic into ioc, using cfqd as the search key. This enables us to lookup
-+ * the process specific cfq io context when entered from the block layer.
-+ * Also adds the cic to a per-cfqd list, used when this queue is removed.
-+ */
-+static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
-+ struct cfq_io_context *cic, gfp_t gfp_mask)
+ asmlinkage int sys_fork(struct pt_regs regs)
{
-- struct rb_node **p;
-- struct rb_node *parent;
-- struct cfq_io_context *__cic;
- unsigned long flags;
-- void *k;
-+ int ret;
-
-- cic->ioc = ioc;
-- cic->key = cfqd;
-+ ret = radix_tree_preload(gfp_mask);
-+ if (!ret) {
-+ cic->ioc = ioc;
-+ cic->key = cfqd;
-
--restart:
-- parent = NULL;
-- p = &ioc->cic_root.rb_node;
-- while (*p) {
-- parent = *p;
-- __cic = rb_entry(parent, struct cfq_io_context, rb_node);
-- /* ->key must be copied to avoid race with cfq_exit_queue() */
-- k = __cic->key;
-- if (unlikely(!k)) {
-- cfq_drop_dead_cic(ioc, __cic);
-- goto restart;
-- }
-+ spin_lock_irqsave(&ioc->lock, flags);
-+ ret = radix_tree_insert(&ioc->radix_root,
-+ (unsigned long) cfqd, cic);
-+ spin_unlock_irqrestore(&ioc->lock, flags);
+- return do_fork(SIGCHLD, regs.esp, &regs, 0, NULL, NULL);
++ return do_fork(SIGCHLD, regs.sp, &regs, 0, NULL, NULL);
+ }
-- if (cic->key < k)
-- p = &(*p)->rb_left;
-- else if (cic->key > k)
-- p = &(*p)->rb_right;
-- else
-- BUG();
-+ radix_tree_preload_end();
-+
-+ if (!ret) {
-+ spin_lock_irqsave(cfqd->queue->queue_lock, flags);
-+ list_add(&cic->queue_list, &cfqd->cic_list);
-+ spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
-+ }
- }
+ asmlinkage int sys_clone(struct pt_regs regs)
+@@ -783,12 +809,12 @@ asmlinkage int sys_clone(struct pt_regs regs)
+ unsigned long newsp;
+ int __user *parent_tidptr, *child_tidptr;
-- rb_link_node(&cic->rb_node, parent, p);
-- rb_insert_color(&cic->rb_node, &ioc->cic_root);
-+ if (ret)
-+ printk(KERN_ERR "cfq: cic link failed!\n");
+- clone_flags = regs.ebx;
+- newsp = regs.ecx;
+- parent_tidptr = (int __user *)regs.edx;
+- child_tidptr = (int __user *)regs.edi;
++ clone_flags = regs.bx;
++ newsp = regs.cx;
++ parent_tidptr = (int __user *)regs.dx;
++ child_tidptr = (int __user *)regs.di;
+ if (!newsp)
+- newsp = regs.esp;
++ newsp = regs.sp;
+ return do_fork(clone_flags, newsp, &regs, 0, parent_tidptr, child_tidptr);
+ }
-- spin_lock_irqsave(cfqd->queue->queue_lock, flags);
-- list_add(&cic->queue_list, &cfqd->cic_list);
-- spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
-+ return ret;
+@@ -804,7 +830,7 @@ asmlinkage int sys_clone(struct pt_regs regs)
+ */
+ asmlinkage int sys_vfork(struct pt_regs regs)
+ {
+- return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.esp, &regs, 0, NULL, NULL);
++ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.sp, &regs, 0, NULL, NULL);
}
/*
-@@ -1586,7 +1591,7 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
- if (!ioc)
- return NULL;
+@@ -815,18 +841,15 @@ asmlinkage int sys_execve(struct pt_regs regs)
+ int error;
+ char * filename;
-- cic = cfq_cic_rb_lookup(cfqd, ioc);
-+ cic = cfq_cic_lookup(cfqd, ioc);
- if (cic)
+- filename = getname((char __user *) regs.ebx);
++ filename = getname((char __user *) regs.bx);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
goto out;
+ error = do_execve(filename,
+- (char __user * __user *) regs.ecx,
+- (char __user * __user *) regs.edx,
++ (char __user * __user *) regs.cx,
++ (char __user * __user *) regs.dx,
+ &regs);
+ if (error == 0) {
+- task_lock(current);
+- current->ptrace &= ~PT_DTRACE;
+- task_unlock(current);
+ /* Make sure we don't return using sysenter.. */
+ set_thread_flag(TIF_IRET);
+ }
+@@ -840,145 +863,37 @@ out:
-@@ -1594,13 +1599,17 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
- if (cic == NULL)
- goto err;
-
-- cfq_cic_link(cfqd, ioc, cic);
-+ if (cfq_cic_link(cfqd, ioc, cic, gfp_mask))
-+ goto err_free;
-+
- out:
- smp_read_barrier_depends();
- if (unlikely(ioc->ioprio_changed))
- cfq_ioc_set_ioprio(ioc);
-
- return cic;
-+err_free:
-+ cfq_cic_free(cic);
- err:
- put_io_context(ioc);
- return NULL;
-@@ -1655,12 +1664,15 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
- {
- int enable_idle;
-
-- if (!cfq_cfqq_sync(cfqq))
-+ /*
-+ * Don't idle for async or idle io prio class
-+ */
-+ if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq))
- return;
-
- enable_idle = cfq_cfqq_idle_window(cfqq);
-
-- if (!cic->ioc->task || !cfqd->cfq_slice_idle ||
-+ if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
- (cfqd->hw_tag && CIC_SEEKY(cic)))
- enable_idle = 0;
- else if (sample_valid(cic->ttime_samples)) {
-@@ -1793,7 +1805,7 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
- struct cfq_data *cfqd = q->elevator->elevator_data;
- struct cfq_queue *cfqq = RQ_CFQQ(rq);
-
-- cfq_init_prio_data(cfqq);
-+ cfq_init_prio_data(cfqq, RQ_CIC(rq)->ioc);
-
- cfq_add_rq_rb(rq);
-
-@@ -1834,7 +1846,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
- cfq_set_prio_slice(cfqd, cfqq);
- cfq_clear_cfqq_slice_new(cfqq);
- }
-- if (cfq_slice_used(cfqq))
-+ if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
- cfq_slice_expired(cfqd, 1);
- else if (sync && RB_EMPTY_ROOT(&cfqq->sort_list))
- cfq_arm_slice_timer(cfqd);
-@@ -1894,13 +1906,13 @@ static int cfq_may_queue(struct request_queue *q, int rw)
- * so just lookup a possibly existing queue, or return 'may queue'
- * if that fails
- */
-- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
-+ cic = cfq_cic_lookup(cfqd, tsk->io_context);
- if (!cic)
- return ELV_MQUEUE_MAY;
-
- cfqq = cic_to_cfqq(cic, rw & REQ_RW_SYNC);
- if (cfqq) {
-- cfq_init_prio_data(cfqq);
-+ cfq_init_prio_data(cfqq, cic->ioc);
- cfq_prio_boost(cfqq);
-
- return __cfq_may_queue(cfqq);
-@@ -1938,7 +1950,6 @@ static int
- cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+ unsigned long get_wchan(struct task_struct *p)
{
- struct cfq_data *cfqd = q->elevator->elevator_data;
-- struct task_struct *tsk = current;
- struct cfq_io_context *cic;
- const int rw = rq_data_dir(rq);
- const int is_sync = rq_is_sync(rq);
-@@ -1956,7 +1967,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
-
- cfqq = cic_to_cfqq(cic, is_sync);
- if (!cfqq) {
-- cfqq = cfq_get_queue(cfqd, is_sync, tsk, gfp_mask);
-+ cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask);
-
- if (!cfqq)
- goto queue_fail;
-@@ -2039,29 +2050,9 @@ out_cont:
- spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+- unsigned long ebp, esp, eip;
++ unsigned long bp, sp, ip;
+ unsigned long stack_page;
+ int count = 0;
+ if (!p || p == current || p->state == TASK_RUNNING)
+ return 0;
+ stack_page = (unsigned long)task_stack_page(p);
+- esp = p->thread.esp;
+- if (!stack_page || esp < stack_page || esp > top_esp+stack_page)
++ sp = p->thread.sp;
++ if (!stack_page || sp < stack_page || sp > top_esp+stack_page)
+ return 0;
+- /* include/asm-i386/system.h:switch_to() pushes ebp last. */
+- ebp = *(unsigned long *) esp;
++ /* include/asm-i386/system.h:switch_to() pushes bp last. */
++ bp = *(unsigned long *) sp;
+ do {
+- if (ebp < stack_page || ebp > top_ebp+stack_page)
++ if (bp < stack_page || bp > top_ebp+stack_page)
+ return 0;
+- eip = *(unsigned long *) (ebp+4);
+- if (!in_sched_functions(eip))
+- return eip;
+- ebp = *(unsigned long *) ebp;
++ ip = *(unsigned long *) (bp+4);
++ if (!in_sched_functions(ip))
++ return ip;
++ bp = *(unsigned long *) bp;
+ } while (count++ < 16);
+ return 0;
}
-/*
-- * Timer running if an idle class queue is waiting for service
+- * sys_alloc_thread_area: get a yet unused TLS descriptor index.
- */
--static void cfq_idle_class_timer(unsigned long data)
+-static int get_free_idx(void)
-{
-- struct cfq_data *cfqd = (struct cfq_data *) data;
-- unsigned long flags;
+- struct thread_struct *t = &current->thread;
+- int idx;
-
-- spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+- for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
+- if (desc_empty(t->tls_array + idx))
+- return idx + GDT_ENTRY_TLS_MIN;
+- return -ESRCH;
+-}
+-
+-/*
+- * Set a given TLS descriptor:
+- */
+-asmlinkage int sys_set_thread_area(struct user_desc __user *u_info)
+-{
+- struct thread_struct *t = &current->thread;
+- struct user_desc info;
+- struct desc_struct *desc;
+- int cpu, idx;
+-
+- if (copy_from_user(&info, u_info, sizeof(info)))
+- return -EFAULT;
+- idx = info.entry_number;
-
- /*
-- * race with a non-idle queue, reset timer
+- * index -1 means the kernel should try to find and
+- * allocate an empty descriptor:
- */
-- if (!start_idle_class_timer(cfqd))
-- cfq_schedule_dispatch(cfqd);
+- if (idx == -1) {
+- idx = get_free_idx();
+- if (idx < 0)
+- return idx;
+- if (put_user(idx, &u_info->entry_number))
+- return -EFAULT;
+- }
-
-- spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
+-
+- desc = t->tls_array + idx - GDT_ENTRY_TLS_MIN;
+-
+- /*
+- * We must not get preempted while modifying the TLS.
+- */
+- cpu = get_cpu();
+-
+- if (LDT_empty(&info)) {
+- desc->a = 0;
+- desc->b = 0;
+- } else {
+- desc->a = LDT_entry_a(&info);
+- desc->b = LDT_entry_b(&info);
+- }
+- load_TLS(t, cpu);
+-
+- put_cpu();
+-
+- return 0;
-}
-
- static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)
+-/*
+- * Get the current Thread-Local Storage area:
+- */
+-
+-#define GET_BASE(desc) ( \
+- (((desc)->a >> 16) & 0x0000ffff) | \
+- (((desc)->b << 16) & 0x00ff0000) | \
+- ( (desc)->b & 0xff000000) )
+-
+-#define GET_LIMIT(desc) ( \
+- ((desc)->a & 0x0ffff) | \
+- ((desc)->b & 0xf0000) )
+-
+-#define GET_32BIT(desc) (((desc)->b >> 22) & 1)
+-#define GET_CONTENTS(desc) (((desc)->b >> 10) & 3)
+-#define GET_WRITABLE(desc) (((desc)->b >> 9) & 1)
+-#define GET_LIMIT_PAGES(desc) (((desc)->b >> 23) & 1)
+-#define GET_PRESENT(desc) (((desc)->b >> 15) & 1)
+-#define GET_USEABLE(desc) (((desc)->b >> 20) & 1)
+-
+-asmlinkage int sys_get_thread_area(struct user_desc __user *u_info)
+-{
+- struct user_desc info;
+- struct desc_struct *desc;
+- int idx;
+-
+- if (get_user(idx, &u_info->entry_number))
+- return -EFAULT;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
+-
+- memset(&info, 0, sizeof(info));
+-
+- desc = current->thread.tls_array + idx - GDT_ENTRY_TLS_MIN;
+-
+- info.entry_number = idx;
+- info.base_addr = GET_BASE(desc);
+- info.limit = GET_LIMIT(desc);
+- info.seg_32bit = GET_32BIT(desc);
+- info.contents = GET_CONTENTS(desc);
+- info.read_exec_only = !GET_WRITABLE(desc);
+- info.limit_in_pages = GET_LIMIT_PAGES(desc);
+- info.seg_not_present = !GET_PRESENT(desc);
+- info.useable = GET_USEABLE(desc);
+-
+- if (copy_to_user(u_info, &info, sizeof(info)))
+- return -EFAULT;
+- return 0;
+-}
+-
+ unsigned long arch_align_stack(unsigned long sp)
{
- del_timer_sync(&cfqd->idle_slice_timer);
-- del_timer_sync(&cfqd->idle_class_timer);
- kblockd_flush_work(&cfqd->unplug_work);
+ if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
+ sp -= get_random_int() % 8192;
+ return sp & ~0xf;
}
-
-@@ -2126,10 +2117,6 @@ static void *cfq_init_queue(struct request_queue *q)
- cfqd->idle_slice_timer.function = cfq_idle_slice_timer;
- cfqd->idle_slice_timer.data = (unsigned long) cfqd;
-
-- init_timer(&cfqd->idle_class_timer);
-- cfqd->idle_class_timer.function = cfq_idle_class_timer;
-- cfqd->idle_class_timer.data = (unsigned long) cfqd;
++
++unsigned long arch_randomize_brk(struct mm_struct *mm)
++{
++ unsigned long range_end = mm->brk + 0x02000000;
++ return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
++}
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index ab79e1d..137a861 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -3,7 +3,7 @@
+ *
+ * Pentium III FXSR, SSE support
+ * Gareth Hughes <gareth at valinux.com>, May 2000
+- *
++ *
+ * X86-64 port
+ * Andi Kleen.
+ *
+@@ -19,19 +19,19 @@
+ #include <linux/cpu.h>
+ #include <linux/errno.h>
+ #include <linux/sched.h>
++#include <linux/fs.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+-#include <linux/fs.h>
+ #include <linux/elfcore.h>
+ #include <linux/smp.h>
+ #include <linux/slab.h>
+ #include <linux/user.h>
+-#include <linux/module.h>
+ #include <linux/a.out.h>
+ #include <linux/interrupt.h>
++#include <linux/utsname.h>
+ #include <linux/delay.h>
++#include <linux/module.h>
+ #include <linux/ptrace.h>
+-#include <linux/utsname.h>
+ #include <linux/random.h>
+ #include <linux/notifier.h>
+ #include <linux/kprobes.h>
+@@ -72,13 +72,6 @@ void idle_notifier_register(struct notifier_block *n)
+ {
+ atomic_notifier_chain_register(&idle_notifier, n);
+ }
+-EXPORT_SYMBOL_GPL(idle_notifier_register);
-
- INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);
-
- cfqd->last_end_request = jiffies;
-@@ -2160,7 +2147,7 @@ static int __init cfq_slab_setup(void)
- if (!cfq_pool)
- goto fail;
-
-- cfq_ioc_pool = KMEM_CACHE(cfq_io_context, 0);
-+ cfq_ioc_pool = KMEM_CACHE(cfq_io_context, SLAB_DESTROY_BY_RCU);
- if (!cfq_ioc_pool)
- goto fail;
+-void idle_notifier_unregister(struct notifier_block *n)
+-{
+- atomic_notifier_chain_unregister(&idle_notifier, n);
+-}
+-EXPORT_SYMBOL(idle_notifier_unregister);
-diff --git a/block/compat_ioctl.c b/block/compat_ioctl.c
-index cae0a85..b733732 100644
---- a/block/compat_ioctl.c
-+++ b/block/compat_ioctl.c
-@@ -545,6 +545,7 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
- struct blk_user_trace_setup buts;
- struct compat_blk_user_trace_setup cbuts;
- struct request_queue *q;
-+ char b[BDEVNAME_SIZE];
- int ret;
+ void enter_idle(void)
+ {
+@@ -106,7 +99,7 @@ void exit_idle(void)
+ * We use this if we don't have any better
+ * idle routine..
+ */
+-static void default_idle(void)
++void default_idle(void)
+ {
+ current_thread_info()->status &= ~TS_POLLING;
+ /*
+@@ -116,11 +109,18 @@ static void default_idle(void)
+ smp_mb();
+ local_irq_disable();
+ if (!need_resched()) {
+- /* Enables interrupts one instruction before HLT.
+- x86 special cases this so there is no race. */
+- safe_halt();
+- } else
+- local_irq_enable();
++ ktime_t t0, t1;
++ u64 t0n, t1n;
++
++ t0 = ktime_get();
++ t0n = ktime_to_ns(t0);
++ safe_halt(); /* enables interrupts racelessly */
++ local_irq_disable();
++ t1 = ktime_get();
++ t1n = ktime_to_ns(t1);
++ sched_clock_idle_wakeup_event(t1n - t0n);
++ }
++ local_irq_enable();
+ current_thread_info()->status |= TS_POLLING;
+ }
- q = bdev_get_queue(bdev);
-@@ -554,6 +555,8 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
- if (copy_from_user(&cbuts, arg, sizeof(cbuts)))
- return -EFAULT;
+@@ -129,54 +129,12 @@ static void default_idle(void)
+ * to poll the ->need_resched flag instead of waiting for the
+ * cross-CPU IPI to arrive. Use this option with caution.
+ */
+-static void poll_idle (void)
++static void poll_idle(void)
+ {
+ local_irq_enable();
+ cpu_relax();
+ }
-+ strcpy(b, bdevname(bdev, b));
-+
- buts = (struct blk_user_trace_setup) {
- .act_mask = cbuts.act_mask,
- .buf_size = cbuts.buf_size,
-@@ -565,7 +568,7 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
- memcpy(&buts.name, &cbuts.name, 32);
+-static void do_nothing(void *unused)
+-{
+-}
+-
+-void cpu_idle_wait(void)
+-{
+- unsigned int cpu, this_cpu = get_cpu();
+- cpumask_t map, tmp = current->cpus_allowed;
+-
+- set_cpus_allowed(current, cpumask_of_cpu(this_cpu));
+- put_cpu();
+-
+- cpus_clear(map);
+- for_each_online_cpu(cpu) {
+- per_cpu(cpu_idle_state, cpu) = 1;
+- cpu_set(cpu, map);
+- }
+-
+- __get_cpu_var(cpu_idle_state) = 0;
+-
+- wmb();
+- do {
+- ssleep(1);
+- for_each_online_cpu(cpu) {
+- if (cpu_isset(cpu, map) &&
+- !per_cpu(cpu_idle_state, cpu))
+- cpu_clear(cpu, map);
+- }
+- cpus_and(map, map, cpu_online_map);
+- /*
+- * We waited 1 sec, if a CPU still did not call idle
+- * it may be because it is in idle and not waking up
+- * because it has nothing to do.
+- * Give all the remaining CPUS a kick.
+- */
+- smp_call_function_mask(map, do_nothing, 0, 0);
+- } while (!cpus_empty(map));
+-
+- set_cpus_allowed(current, tmp);
+-}
+-EXPORT_SYMBOL_GPL(cpu_idle_wait);
+-
+ #ifdef CONFIG_HOTPLUG_CPU
+ DECLARE_PER_CPU(int, cpu_state);
- mutex_lock(&bdev->bd_mutex);
-- ret = do_blk_trace_setup(q, bdev, &buts);
-+ ret = do_blk_trace_setup(q, b, bdev->bd_dev, &buts);
- mutex_unlock(&bdev->bd_mutex);
- if (ret)
- return ret;
-diff --git a/block/elevator.c b/block/elevator.c
-index e452deb..8cd5775 100644
---- a/block/elevator.c
-+++ b/block/elevator.c
-@@ -185,9 +185,7 @@ static elevator_t *elevator_alloc(struct request_queue *q,
+@@ -207,19 +165,18 @@ static inline void play_dead(void)
+ * low exit latency (ie sit in a loop waiting for
+ * somebody to say that they'd like to reschedule)
+ */
+-void cpu_idle (void)
++void cpu_idle(void)
+ {
+ current_thread_info()->status |= TS_POLLING;
+ /* endless idle loop with no priority at all */
+ while (1) {
++ tick_nohz_stop_sched_tick();
+ while (!need_resched()) {
+ void (*idle)(void);
- eq->ops = &e->ops;
- eq->elevator_type = e;
-- kobject_init(&eq->kobj);
-- kobject_set_name(&eq->kobj, "%s", "iosched");
-- eq->kobj.ktype = &elv_ktype;
-+ kobject_init(&eq->kobj, &elv_ktype);
- mutex_init(&eq->sysfs_lock);
+ if (__get_cpu_var(cpu_idle_state))
+ __get_cpu_var(cpu_idle_state) = 0;
- eq->hash = kmalloc_node(sizeof(struct hlist_head) * ELV_HASH_ENTRIES,
-@@ -743,7 +741,21 @@ struct request *elv_next_request(struct request_queue *q)
- q->boundary_rq = NULL;
- }
+- tick_nohz_stop_sched_tick();
+-
+ rmb();
+ idle = pm_idle;
+ if (!idle)
+@@ -247,6 +204,47 @@ void cpu_idle (void)
+ }
+ }
-- if ((rq->cmd_flags & REQ_DONTPREP) || !q->prep_rq_fn)
-+ if (rq->cmd_flags & REQ_DONTPREP)
-+ break;
++static void do_nothing(void *unused)
++{
++}
+
-+ if (q->dma_drain_size && rq->data_len) {
-+ /*
-+ * make sure space for the drain appears we
-+ * know we can do this because max_hw_segments
-+ * has been adjusted to be one fewer than the
-+ * device can handle
-+ */
-+ rq->nr_phys_segments++;
-+ rq->nr_hw_segments++;
++void cpu_idle_wait(void)
++{
++ unsigned int cpu, this_cpu = get_cpu();
++ cpumask_t map, tmp = current->cpus_allowed;
++
++ set_cpus_allowed(current, cpumask_of_cpu(this_cpu));
++ put_cpu();
++
++ cpus_clear(map);
++ for_each_online_cpu(cpu) {
++ per_cpu(cpu_idle_state, cpu) = 1;
++ cpu_set(cpu, map);
++ }
++
++ __get_cpu_var(cpu_idle_state) = 0;
++
++ wmb();
++ do {
++ ssleep(1);
++ for_each_online_cpu(cpu) {
++ if (cpu_isset(cpu, map) && !per_cpu(cpu_idle_state, cpu))
++ cpu_clear(cpu, map);
+ }
++ cpus_and(map, map, cpu_online_map);
++ /*
++ * We waited 1 sec, if a CPU still did not call idle
++ * it may be because it is in idle and not waking up
++ * because it has nothing to do.
++ * Give all the remaining CPUS a kick.
++ */
++ smp_call_function_mask(map, do_nothing, 0, 0);
++ } while (!cpus_empty(map));
+
-+ if (!q->prep_rq_fn)
- break;
-
- ret = q->prep_rq_fn(q, rq);
-@@ -756,6 +768,16 @@ struct request *elv_next_request(struct request_queue *q)
- * avoid resource deadlock. REQ_STARTED will
- * prevent other fs requests from passing this one.
- */
-+ if (q->dma_drain_size && rq->data_len &&
-+ !(rq->cmd_flags & REQ_DONTPREP)) {
-+ /*
-+ * remove the space for the drain we added
-+ * so that we don't add it again
-+ */
-+ --rq->nr_phys_segments;
-+ --rq->nr_hw_segments;
-+ }
++ set_cpus_allowed(current, tmp);
++}
++EXPORT_SYMBOL_GPL(cpu_idle_wait);
+
- rq = NULL;
- break;
- } else if (ret == BLKPREP_KILL) {
-@@ -931,9 +953,7 @@ int elv_register_queue(struct request_queue *q)
- elevator_t *e = q->elevator;
- int error;
-
-- e->kobj.parent = &q->kobj;
--
-- error = kobject_add(&e->kobj);
-+ error = kobject_add(&e->kobj, &q->kobj, "%s", "iosched");
- if (!error) {
- struct elv_fs_entry *attr = e->elevator_type->elevator_attrs;
- if (attr) {
-diff --git a/block/genhd.c b/block/genhd.c
-index f2ac914..de2ebb2 100644
---- a/block/genhd.c
-+++ b/block/genhd.c
-@@ -17,8 +17,10 @@
- #include <linux/buffer_head.h>
- #include <linux/mutex.h>
-
--struct kset block_subsys;
--static DEFINE_MUTEX(block_subsys_lock);
-+static DEFINE_MUTEX(block_class_lock);
-+#ifndef CONFIG_SYSFS_DEPRECATED
-+struct kobject *block_depr;
-+#endif
-
/*
- * Can be deleted altogether. Later.
-@@ -37,19 +39,17 @@ static inline int major_to_index(int major)
- }
-
- #ifdef CONFIG_PROC_FS
--
- void blkdev_show(struct seq_file *f, off_t offset)
+ * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
+ * which can obviate IPI to trigger checking of need_resched.
+@@ -257,13 +255,13 @@ void cpu_idle (void)
+ * New with Core Duo processors, MWAIT can take some hints based on CPU
+ * capability.
+ */
+-void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
++void mwait_idle_with_hints(unsigned long ax, unsigned long cx)
{
- struct blk_major_name *dp;
-
- if (offset < BLKDEV_MAJOR_HASH_SIZE) {
-- mutex_lock(&block_subsys_lock);
-+ mutex_lock(&block_class_lock);
- for (dp = major_names[offset]; dp; dp = dp->next)
- seq_printf(f, "%3d %s\n", dp->major, dp->name);
-- mutex_unlock(&block_subsys_lock);
-+ mutex_unlock(&block_class_lock);
+ if (!need_resched()) {
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+ smp_mb();
+ if (!need_resched())
+- __mwait(eax, ecx);
++ __mwait(ax, cx);
}
}
--
- #endif /* CONFIG_PROC_FS */
-
- int register_blkdev(unsigned int major, const char *name)
-@@ -57,7 +57,7 @@ int register_blkdev(unsigned int major, const char *name)
- struct blk_major_name **n, *p;
- int index, ret = 0;
-
-- mutex_lock(&block_subsys_lock);
-+ mutex_lock(&block_class_lock);
-
- /* temporary */
- if (major == 0) {
-@@ -102,7 +102,7 @@ int register_blkdev(unsigned int major, const char *name)
- kfree(p);
- }
- out:
-- mutex_unlock(&block_subsys_lock);
-+ mutex_unlock(&block_class_lock);
- return ret;
- }
-
-@@ -114,7 +114,7 @@ void unregister_blkdev(unsigned int major, const char *name)
- struct blk_major_name *p = NULL;
- int index = major_to_index(major);
-- mutex_lock(&block_subsys_lock);
-+ mutex_lock(&block_class_lock);
- for (n = &major_names[index]; *n; n = &(*n)->next)
- if ((*n)->major == major)
- break;
-@@ -124,7 +124,7 @@ void unregister_blkdev(unsigned int major, const char *name)
- p = *n;
- *n = p->next;
+@@ -282,25 +280,41 @@ static void mwait_idle(void)
}
-- mutex_unlock(&block_subsys_lock);
-+ mutex_unlock(&block_class_lock);
- kfree(p);
- }
-
-@@ -137,29 +137,30 @@ static struct kobj_map *bdev_map;
- * range must be nonzero
- * The hash chain is sorted on range, so that subranges can override.
- */
--void blk_register_region(dev_t dev, unsigned long range, struct module *module,
-+void blk_register_region(dev_t devt, unsigned long range, struct module *module,
- struct kobject *(*probe)(dev_t, int *, void *),
- int (*lock)(dev_t, void *), void *data)
- {
-- kobj_map(bdev_map, dev, range, module, probe, lock, data);
-+ kobj_map(bdev_map, devt, range, module, probe, lock, data);
}
- EXPORT_SYMBOL(blk_register_region);
-
--void blk_unregister_region(dev_t dev, unsigned long range)
-+void blk_unregister_region(dev_t devt, unsigned long range)
++
++static int __cpuinit mwait_usable(const struct cpuinfo_x86 *c)
++{
++ if (force_mwait)
++ return 1;
++ /* Any C1 states supported? */
++ return c->cpuid_level >= 5 && ((cpuid_edx(5) >> 4) & 0xf) > 0;
++}
++
+ void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
{
-- kobj_unmap(bdev_map, dev, range);
-+ kobj_unmap(bdev_map, devt, range);
+- static int printed;
+- if (cpu_has(c, X86_FEATURE_MWAIT)) {
++ static int selected;
++
++ if (selected)
++ return;
++#ifdef CONFIG_X86_SMP
++ if (pm_idle == poll_idle && smp_num_siblings > 1) {
++ printk(KERN_WARNING "WARNING: polling idle and HT enabled,"
++ " performance may degrade.\n");
++ }
++#endif
++ if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
+ /*
+ * Skip, if setup has overridden idle.
+ * One CPU supports mwait => All CPUs supports mwait
+ */
+ if (!pm_idle) {
+- if (!printed) {
+- printk(KERN_INFO "using mwait in idle threads.\n");
+- printed = 1;
+- }
++ printk(KERN_INFO "using mwait in idle threads.\n");
+ pm_idle = mwait_idle;
+ }
+ }
++ selected = 1;
}
- EXPORT_SYMBOL(blk_unregister_region);
-
--static struct kobject *exact_match(dev_t dev, int *part, void *data)
-+static struct kobject *exact_match(dev_t devt, int *part, void *data)
+-static int __init idle_setup (char *str)
++static int __init idle_setup(char *str)
{
- struct gendisk *p = data;
-- return &p->kobj;
-+
-+ return &p->dev.kobj;
+ if (!strcmp(str, "poll")) {
+ printk("using polling idle threads.\n");
+@@ -315,13 +329,13 @@ static int __init idle_setup (char *str)
}
+ early_param("idle", idle_setup);
--static int exact_lock(dev_t dev, void *data)
-+static int exact_lock(dev_t devt, void *data)
+-/* Prints also some state that isn't saved in the pt_regs */
++/* Prints also some state that isn't saved in the pt_regs */
+ void __show_regs(struct pt_regs * regs)
{
- struct gendisk *p = data;
+ unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs;
+ unsigned long d0, d1, d2, d3, d6, d7;
+- unsigned int fsindex,gsindex;
+- unsigned int ds,cs,es;
++ unsigned int fsindex, gsindex;
++ unsigned int ds, cs, es;
-@@ -194,8 +195,6 @@ void unlink_gendisk(struct gendisk *disk)
- disk->minors);
+ printk("\n");
+ print_modules();
+@@ -330,16 +344,16 @@ void __show_regs(struct pt_regs * regs)
+ init_utsname()->release,
+ (int)strcspn(init_utsname()->version, " "),
+ init_utsname()->version);
+- printk("RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->rip);
+- printk_address(regs->rip);
+- printk("RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, regs->rsp,
+- regs->eflags);
++ printk("RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->ip);
++ printk_address(regs->ip, 1);
++ printk("RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, regs->sp,
++ regs->flags);
+ printk("RAX: %016lx RBX: %016lx RCX: %016lx\n",
+- regs->rax, regs->rbx, regs->rcx);
++ regs->ax, regs->bx, regs->cx);
+ printk("RDX: %016lx RSI: %016lx RDI: %016lx\n",
+- regs->rdx, regs->rsi, regs->rdi);
++ regs->dx, regs->si, regs->di);
+ printk("RBP: %016lx R08: %016lx R09: %016lx\n",
+- regs->rbp, regs->r8, regs->r9);
++ regs->bp, regs->r8, regs->r9);
+ printk("R10: %016lx R11: %016lx R12: %016lx\n",
+ regs->r10, regs->r11, regs->r12);
+ printk("R13: %016lx R14: %016lx R15: %016lx\n",
+@@ -379,7 +393,7 @@ void show_regs(struct pt_regs *regs)
+ {
+ printk("CPU %d:", smp_processor_id());
+ __show_regs(regs);
+- show_trace(NULL, regs, (void *)(regs + 1));
++ show_trace(NULL, regs, (void *)(regs + 1), regs->bp);
}
--#define to_disk(obj) container_of(obj,struct gendisk,kobj)
--
- /**
- * get_gendisk - get partitioning information for a given device
- * @dev: device to get partitioning information for
-@@ -203,10 +202,12 @@ void unlink_gendisk(struct gendisk *disk)
- * This function gets the structure containing partitioning
- * information for the given device @dev.
- */
--struct gendisk *get_gendisk(dev_t dev, int *part)
-+struct gendisk *get_gendisk(dev_t devt, int *part)
- {
-- struct kobject *kobj = kobj_lookup(bdev_map, dev, part);
-- return kobj ? to_disk(kobj) : NULL;
-+ struct kobject *kobj = kobj_lookup(bdev_map, devt, part);
-+ struct device *dev = kobj_to_dev(kobj);
-+
-+ return kobj ? dev_to_disk(dev) : NULL;
+ /*
+@@ -390,7 +404,7 @@ void exit_thread(void)
+ struct task_struct *me = current;
+ struct thread_struct *t = &me->thread;
+
+- if (me->thread.io_bitmap_ptr) {
++ if (me->thread.io_bitmap_ptr) {
+ struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
+
+ kfree(t->io_bitmap_ptr);
+@@ -426,7 +440,7 @@ void flush_thread(void)
+ tsk->thread.debugreg3 = 0;
+ tsk->thread.debugreg6 = 0;
+ tsk->thread.debugreg7 = 0;
+- memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
++ memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
+ /*
+ * Forget coprocessor state..
+ */
+@@ -449,26 +463,21 @@ void release_thread(struct task_struct *dead_task)
+
+ static inline void set_32bit_tls(struct task_struct *t, int tls, u32 addr)
+ {
+- struct user_desc ud = {
++ struct user_desc ud = {
+ .base_addr = addr,
+ .limit = 0xfffff,
+ .seg_32bit = 1,
+ .limit_in_pages = 1,
+ .useable = 1,
+ };
+- struct n_desc_struct *desc = (void *)t->thread.tls_array;
++ struct desc_struct *desc = t->thread.tls_array;
+ desc += tls;
+- desc->a = LDT_entry_a(&ud);
+- desc->b = LDT_entry_b(&ud);
++ fill_ldt(desc, &ud);
+ }
+
+ static inline u32 read_32bit_tls(struct task_struct *t, int tls)
+ {
+- struct desc_struct *desc = (void *)t->thread.tls_array;
+- desc += tls;
+- return desc->base0 |
+- (((u32)desc->base1) << 16) |
+- (((u32)desc->base2) << 24);
++ return get_desc_base(&t->thread.tls_array[tls]);
}
/*
-@@ -216,13 +217,17 @@ struct gendisk *get_gendisk(dev_t dev, int *part)
- */
- void __init printk_all_partitions(void)
- {
-- int n;
-+ struct device *dev;
- struct gendisk *sgp;
-+ char buf[BDEVNAME_SIZE];
-+ int n;
+@@ -480,7 +489,7 @@ void prepare_to_copy(struct task_struct *tsk)
+ unlazy_fpu(tsk);
+ }
+
+-int copy_thread(int nr, unsigned long clone_flags, unsigned long rsp,
++int copy_thread(int nr, unsigned long clone_flags, unsigned long sp,
+ unsigned long unused,
+ struct task_struct * p, struct pt_regs * regs)
+ {
+@@ -492,14 +501,14 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long rsp,
+ (THREAD_SIZE + task_stack_page(p))) - 1;
+ *childregs = *regs;
+
+- childregs->rax = 0;
+- childregs->rsp = rsp;
+- if (rsp == ~0UL)
+- childregs->rsp = (unsigned long)childregs;
++ childregs->ax = 0;
++ childregs->sp = sp;
++ if (sp == ~0UL)
++ childregs->sp = (unsigned long)childregs;
+
+- p->thread.rsp = (unsigned long) childregs;
+- p->thread.rsp0 = (unsigned long) (childregs+1);
+- p->thread.userrsp = me->thread.userrsp;
++ p->thread.sp = (unsigned long) childregs;
++ p->thread.sp0 = (unsigned long) (childregs+1);
++ p->thread.usersp = me->thread.usersp;
-- mutex_lock(&block_subsys_lock);
-+ mutex_lock(&block_class_lock);
- /* For each block device... */
-- list_for_each_entry(sgp, &block_subsys.list, kobj.entry) {
-- char buf[BDEVNAME_SIZE];
-+ list_for_each_entry(dev, &block_class.devices, node) {
-+ if (dev->type != &disk_type)
-+ continue;
-+ sgp = dev_to_disk(dev);
- /*
- * Don't show empty devices or things that have been surpressed
- */
-@@ -255,38 +260,46 @@ void __init printk_all_partitions(void)
- sgp->major, n + 1 + sgp->first_minor,
- (unsigned long long)sgp->part[n]->nr_sects >> 1,
- disk_name(sgp, n + 1, buf));
-- } /* partition subloop */
-- } /* Block device loop */
-+ }
+ set_tsk_thread_flag(p, TIF_FORK);
+
+@@ -520,7 +529,7 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long rsp,
+ memcpy(p->thread.io_bitmap_ptr, me->thread.io_bitmap_ptr,
+ IO_BITMAP_BYTES);
+ set_tsk_thread_flag(p, TIF_IO_BITMAP);
+- }
+ }
-- mutex_unlock(&block_subsys_lock);
-- return;
-+ mutex_unlock(&block_class_lock);
+ /*
+ * Set a new TLS for the child thread?
+@@ -528,7 +537,8 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long rsp,
+ if (clone_flags & CLONE_SETTLS) {
+ #ifdef CONFIG_IA32_EMULATION
+ if (test_thread_flag(TIF_IA32))
+- err = ia32_child_tls(p, childregs);
++ err = do_set_thread_area(p, -1,
++ (struct user_desc __user *)childregs->si, 0);
+ else
+ #endif
+ err = do_arch_prctl(p, ARCH_SET_FS, childregs->r8);
+@@ -547,17 +557,30 @@ out:
+ /*
+ * This special macro can be used to load a debugging register
+ */
+-#define loaddebug(thread,r) set_debugreg(thread->debugreg ## r, r)
++#define loaddebug(thread, r) set_debugreg(thread->debugreg ## r, r)
+
+ static inline void __switch_to_xtra(struct task_struct *prev_p,
+- struct task_struct *next_p,
+- struct tss_struct *tss)
++ struct task_struct *next_p,
++ struct tss_struct *tss)
+ {
+ struct thread_struct *prev, *next;
++ unsigned long debugctl;
+
+ prev = &prev_p->thread,
+ next = &next_p->thread;
+
++ debugctl = prev->debugctlmsr;
++ if (next->ds_area_msr != prev->ds_area_msr) {
++ /* we clear debugctl to make sure DS
++ * is not in use when we change it */
++ debugctl = 0;
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, 0);
++ wrmsrl(MSR_IA32_DS_AREA, next->ds_area_msr);
++ }
++
++ if (next->debugctlmsr != debugctl)
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, next->debugctlmsr);
++
+ if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
+ loaddebug(next, 0);
+ loaddebug(next, 1);
+@@ -581,12 +604,18 @@ static inline void __switch_to_xtra(struct task_struct *prev_p,
+ */
+ memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
+ }
++
++ if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS))
++ ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS);
++
++ if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS))
++ ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES);
}
- #ifdef CONFIG_PROC_FS
- /* iterator */
- static void *part_start(struct seq_file *part, loff_t *pos)
+ /*
+ * switch_to(x,y) should switch tasks from x to y.
+ *
+- * This could still be optimized:
++ * This could still be optimized:
+ * - fold all the options into a flag word and test it with a single test.
+ * - could test fs/gs bitsliced
+ *
+@@ -597,7 +626,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
-- struct list_head *p;
-- loff_t l = *pos;
-+ loff_t k = *pos;
-+ struct device *dev;
+ struct thread_struct *prev = &prev_p->thread,
+ *next = &next_p->thread;
+- int cpu = smp_processor_id();
++ int cpu = smp_processor_id();
+ struct tss_struct *tss = &per_cpu(init_tss, cpu);
-- mutex_lock(&block_subsys_lock);
-- list_for_each(p, &block_subsys.list)
-- if (!l--)
-- return list_entry(p, struct gendisk, kobj.entry);
-+ mutex_lock(&block_class_lock);
-+ list_for_each_entry(dev, &block_class.devices, node) {
-+ if (dev->type != &disk_type)
-+ continue;
-+ if (!k--)
-+ return dev_to_disk(dev);
-+ }
- return NULL;
- }
+ /* we're going to use this soon, after a few expensive things */
+@@ -607,7 +636,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ /*
+ * Reload esp0, LDT and the page table pointer:
+ */
+- tss->rsp0 = next->rsp0;
++ load_sp0(tss, next);
- static void *part_next(struct seq_file *part, void *v, loff_t *pos)
- {
-- struct list_head *p = ((struct gendisk *)v)->kobj.entry.next;
-+ struct gendisk *gp = v;
-+ struct device *dev;
- ++*pos;
-- return p==&block_subsys.list ? NULL :
-- list_entry(p, struct gendisk, kobj.entry);
-+ list_for_each_entry(dev, &gp->dev.node, node) {
-+ if (&dev->node == &block_class.devices)
-+ return NULL;
-+ if (dev->type == &disk_type)
-+ return dev_to_disk(dev);
-+ }
-+ return NULL;
- }
+ /*
+ * Switch DS and ES.
+@@ -666,8 +695,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ /*
+ * Switch the PDA and FPU contexts.
+ */
+- prev->userrsp = read_pda(oldrsp);
+- write_pda(oldrsp, next->userrsp);
++ prev->usersp = read_pda(oldrsp);
++ write_pda(oldrsp, next->usersp);
+ write_pda(pcurrent, next_p);
- static void part_stop(struct seq_file *part, void *v)
+ write_pda(kernelstack,
+@@ -684,8 +713,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ /*
+ * Now maybe reload the debug registers and handle I/O bitmaps
+ */
+- if (unlikely((task_thread_info(next_p)->flags & _TIF_WORK_CTXSW))
+- || test_tsk_thread_flag(prev_p, TIF_IO_BITMAP))
++ if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
++ task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
+ __switch_to_xtra(prev_p, next_p, tss);
+
+ /* If the task has used fpu the last 5 timeslices, just do a full
+@@ -700,7 +729,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ /*
+ * sys_execve() executes a new program.
+ */
+-asmlinkage
++asmlinkage
+ long sys_execve(char __user *name, char __user * __user *argv,
+ char __user * __user *envp, struct pt_regs regs)
{
-- mutex_unlock(&block_subsys_lock);
-+ mutex_unlock(&block_class_lock);
+@@ -712,11 +741,6 @@ long sys_execve(char __user *name, char __user * __user *argv,
+ if (IS_ERR(filename))
+ return error;
+ error = do_execve(filename, argv, envp, &regs);
+- if (error == 0) {
+- task_lock(current);
+- current->ptrace &= ~PT_DTRACE;
+- task_unlock(current);
+- }
+ putname(filename);
+ return error;
}
+@@ -726,18 +750,18 @@ void set_personality_64bit(void)
+ /* inherit personality from parent */
- static int show_partition(struct seq_file *part, void *v)
-@@ -295,7 +308,7 @@ static int show_partition(struct seq_file *part, void *v)
- int n;
- char buf[BDEVNAME_SIZE];
-
-- if (&sgp->kobj.entry == block_subsys.list.next)
-+ if (&sgp->dev.node == block_class.devices.next)
- seq_puts(part, "major minor #blocks name\n\n");
+ /* Make sure to be in 64bit mode */
+- clear_thread_flag(TIF_IA32);
++ clear_thread_flag(TIF_IA32);
- /* Don't show non-partitionable removeable devices or empty devices */
-@@ -324,111 +337,82 @@ static int show_partition(struct seq_file *part, void *v)
- return 0;
+ /* TBD: overwrites user setup. Should have two bits.
+ But 64bit processes have always behaved this way,
+ so it's not too bad. The main problem is just that
+- 32bit childs are affected again. */
++ 32bit childs are affected again. */
+ current->personality &= ~READ_IMPLIES_EXEC;
}
--struct seq_operations partitions_op = {
-- .start =part_start,
-- .next = part_next,
-- .stop = part_stop,
-- .show = show_partition
-+const struct seq_operations partitions_op = {
-+ .start = part_start,
-+ .next = part_next,
-+ .stop = part_stop,
-+ .show = show_partition
- };
- #endif
-
-
- extern int blk_dev_init(void);
-
--static struct kobject *base_probe(dev_t dev, int *part, void *data)
-+static struct kobject *base_probe(dev_t devt, int *part, void *data)
+ asmlinkage long sys_fork(struct pt_regs *regs)
{
-- if (request_module("block-major-%d-%d", MAJOR(dev), MINOR(dev)) > 0)
-+ if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
- /* Make old-style 2.4 aliases work */
-- request_module("block-major-%d", MAJOR(dev));
-+ request_module("block-major-%d", MAJOR(devt));
- return NULL;
+- return do_fork(SIGCHLD, regs->rsp, regs, 0, NULL, NULL);
++ return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
- static int __init genhd_device_init(void)
+ asmlinkage long
+@@ -745,7 +769,7 @@ sys_clone(unsigned long clone_flags, unsigned long newsp,
+ void __user *parent_tid, void __user *child_tid, struct pt_regs *regs)
{
-- int err;
--
-- bdev_map = kobj_map_init(base_probe, &block_subsys_lock);
-+ class_register(&block_class);
-+ bdev_map = kobj_map_init(base_probe, &block_class_lock);
- blk_dev_init();
-- err = subsystem_register(&block_subsys);
-- if (err < 0)
-- printk(KERN_WARNING "%s: subsystem_register error: %d\n",
-- __FUNCTION__, err);
-- return err;
-+
-+#ifndef CONFIG_SYSFS_DEPRECATED
-+ /* create top-level block dir */
-+ block_depr = kobject_create_and_add("block", NULL);
-+#endif
-+ return 0;
+ if (!newsp)
+- newsp = regs->rsp;
++ newsp = regs->sp;
+ return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
}
- subsys_initcall(genhd_device_init);
-
--
--
--/*
-- * kobject & sysfs bindings for block devices
-- */
--static ssize_t disk_attr_show(struct kobject *kobj, struct attribute *attr,
-- char *page)
-+static ssize_t disk_range_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
+@@ -761,29 +785,29 @@ sys_clone(unsigned long clone_flags, unsigned long newsp,
+ */
+ asmlinkage long sys_vfork(struct pt_regs *regs)
{
-- struct gendisk *disk = to_disk(kobj);
-- struct disk_attribute *disk_attr =
-- container_of(attr,struct disk_attribute,attr);
-- ssize_t ret = -EIO;
-+ struct gendisk *disk = dev_to_disk(dev);
-
-- if (disk_attr->show)
-- ret = disk_attr->show(disk,page);
-- return ret;
-+ return sprintf(buf, "%d\n", disk->minors);
+- return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->rsp, regs, 0,
++ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0,
+ NULL, NULL);
}
--static ssize_t disk_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char *page, size_t count)
-+static ssize_t disk_removable_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
+ unsigned long get_wchan(struct task_struct *p)
{
-- struct gendisk *disk = to_disk(kobj);
-- struct disk_attribute *disk_attr =
-- container_of(attr,struct disk_attribute,attr);
-- ssize_t ret = 0;
-+ struct gendisk *disk = dev_to_disk(dev);
+ unsigned long stack;
+- u64 fp,rip;
++ u64 fp,ip;
+ int count = 0;
-- if (disk_attr->store)
-- ret = disk_attr->store(disk, page, count);
-- return ret;
-+ return sprintf(buf, "%d\n",
-+ (disk->flags & GENHD_FL_REMOVABLE ? 1 : 0));
- }
+ if (!p || p == current || p->state==TASK_RUNNING)
+ return 0;
+ stack = (unsigned long)task_stack_page(p);
+- if (p->thread.rsp < stack || p->thread.rsp > stack+THREAD_SIZE)
++ if (p->thread.sp < stack || p->thread.sp > stack+THREAD_SIZE)
+ return 0;
+- fp = *(u64 *)(p->thread.rsp);
++ fp = *(u64 *)(p->thread.sp);
+ do {
+ if (fp < (unsigned long)stack ||
+ fp > (unsigned long)stack+THREAD_SIZE)
+ return 0;
+- rip = *(u64 *)(fp+8);
+- if (!in_sched_functions(rip))
+- return rip;
++ ip = *(u64 *)(fp+8);
++ if (!in_sched_functions(ip))
++ return ip;
+ fp = *(u64 *)fp;
+ } while (count++ < 16);
+ return 0;
+@@ -824,19 +848,19 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
+ /* Not strictly needed for fs, but do it for symmetry
+ with gs */
+ if (addr >= TASK_SIZE_OF(task))
+- return -EPERM;
++ return -EPERM;
+ cpu = get_cpu();
+- /* handle small bases via the GDT because that's faster to
++ /* handle small bases via the GDT because that's faster to
+ switch. */
+- if (addr <= 0xffffffff) {
++ if (addr <= 0xffffffff) {
+ set_32bit_tls(task, FS_TLS, addr);
+- if (doit) {
+- load_TLS(&task->thread, cpu);
++ if (doit) {
++ load_TLS(&task->thread, cpu);
+ asm volatile("movl %0,%%fs" :: "r"(FS_TLS_SEL));
+ }
+ task->thread.fsindex = FS_TLS_SEL;
+ task->thread.fs = 0;
+- } else {
++ } else {
+ task->thread.fsindex = 0;
+ task->thread.fs = addr;
+ if (doit) {
+@@ -848,24 +872,24 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
+ }
+ put_cpu();
+ break;
+- case ARCH_GET_FS: {
+- unsigned long base;
++ case ARCH_GET_FS: {
++ unsigned long base;
+ if (task->thread.fsindex == FS_TLS_SEL)
+ base = read_32bit_tls(task, FS_TLS);
+ else if (doit)
+ rdmsrl(MSR_FS_BASE, base);
+ else
+ base = task->thread.fs;
+- ret = put_user(base, (unsigned long __user *)addr);
+- break;
++ ret = put_user(base, (unsigned long __user *)addr);
++ break;
+ }
+- case ARCH_GET_GS: {
++ case ARCH_GET_GS: {
+ unsigned long base;
+ unsigned gsindex;
+ if (task->thread.gsindex == GS_TLS_SEL)
+ base = read_32bit_tls(task, GS_TLS);
+ else if (doit) {
+- asm("movl %%gs,%0" : "=r" (gsindex));
++ asm("movl %%gs,%0" : "=r" (gsindex));
+ if (gsindex)
+ rdmsrl(MSR_KERNEL_GS_BASE, base);
+ else
+@@ -873,39 +897,21 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
+ }
+ else
+ base = task->thread.gs;
+- ret = put_user(base, (unsigned long __user *)addr);
++ ret = put_user(base, (unsigned long __user *)addr);
+ break;
+ }
--static struct sysfs_ops disk_sysfs_ops = {
-- .show = &disk_attr_show,
-- .store = &disk_attr_store,
--};
--
--static ssize_t disk_uevent_store(struct gendisk * disk,
-- const char *buf, size_t count)
--{
-- kobject_uevent(&disk->kobj, KOBJ_ADD);
-- return count;
--}
--static ssize_t disk_dev_read(struct gendisk * disk, char *page)
--{
-- dev_t base = MKDEV(disk->major, disk->first_minor);
-- return print_dev_t(page, base);
--}
--static ssize_t disk_range_read(struct gendisk * disk, char *page)
-+static ssize_t disk_size_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
+ default:
+ ret = -EINVAL;
+ break;
+- }
++ }
+
+- return ret;
+-}
++ return ret;
++}
+
+ long sys_arch_prctl(int code, unsigned long addr)
{
-- return sprintf(page, "%d\n", disk->minors);
--}
--static ssize_t disk_removable_read(struct gendisk * disk, char *page)
+ return do_arch_prctl(current, code, addr);
+-}
+-
+-/*
+- * Capture the user space registers if the task is not running (in user space)
+- */
+-int dump_task_regs(struct task_struct *tsk, elf_gregset_t *regs)
-{
-- return sprintf(page, "%d\n",
-- (disk->flags & GENHD_FL_REMOVABLE ? 1 : 0));
-+ struct gendisk *disk = dev_to_disk(dev);
+- struct pt_regs *pp, ptregs;
+-
+- pp = task_pt_regs(tsk);
+-
+- ptregs = *pp;
+- ptregs.cs &= 0xffff;
+- ptregs.ss &= 0xffff;
+-
+- elf_core_copy_regs(regs, &ptregs);
+-
+- return 1;
+ }
-+ return sprintf(buf, "%llu\n", (unsigned long long)get_capacity(disk));
+ unsigned long arch_align_stack(unsigned long sp)
+@@ -914,3 +920,9 @@ unsigned long arch_align_stack(unsigned long sp)
+ sp -= get_random_int() % 8192;
+ return sp & ~0xf;
}
--static ssize_t disk_size_read(struct gendisk * disk, char *page)
--{
-- return sprintf(page, "%llu\n", (unsigned long long)get_capacity(disk));
--}
--static ssize_t disk_capability_read(struct gendisk *disk, char *page)
+
-+static ssize_t disk_capability_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-- return sprintf(page, "%x\n", disk->flags);
-+ struct gendisk *disk = dev_to_disk(dev);
++unsigned long arch_randomize_brk(struct mm_struct *mm)
++{
++ unsigned long range_end = mm->brk + 0x02000000;
++ return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
++}
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+new file mode 100644
+index 0000000..96286df
+--- /dev/null
++++ b/arch/x86/kernel/ptrace.c
+@@ -0,0 +1,1545 @@
++/* By Ross Biro 1/23/92 */
++/*
++ * Pentium III FXSR, SSE support
++ * Gareth Hughes <gareth at valinux.com>, May 2000
++ *
++ * BTS tracing
++ * Markus Metzger <markus.t.metzger at intel.com>, Dec 2007
++ */
+
-+ return sprintf(buf, "%x\n", disk->flags);
- }
--static ssize_t disk_stats_read(struct gendisk * disk, char *page)
++#include <linux/kernel.h>
++#include <linux/sched.h>
++#include <linux/mm.h>
++#include <linux/smp.h>
++#include <linux/errno.h>
++#include <linux/ptrace.h>
++#include <linux/regset.h>
++#include <linux/user.h>
++#include <linux/elf.h>
++#include <linux/security.h>
++#include <linux/audit.h>
++#include <linux/seccomp.h>
++#include <linux/signal.h>
+
-+static ssize_t disk_stat_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
++#include <asm/uaccess.h>
++#include <asm/pgtable.h>
++#include <asm/system.h>
++#include <asm/processor.h>
++#include <asm/i387.h>
++#include <asm/debugreg.h>
++#include <asm/ldt.h>
++#include <asm/desc.h>
++#include <asm/prctl.h>
++#include <asm/proto.h>
++#include <asm/ds.h>
++
++#include "tls.h"
++
++enum x86_regset {
++ REGSET_GENERAL,
++ REGSET_FP,
++ REGSET_XFP,
++ REGSET_TLS,
++};
+
- preempt_disable();
- disk_round_stats(disk);
- preempt_enable();
-- return sprintf(page,
-+ return sprintf(buf,
- "%8lu %8lu %8llu %8u "
- "%8lu %8lu %8llu %8u "
- "%8u %8u %8u"
-@@ -445,40 +429,21 @@ static ssize_t disk_stats_read(struct gendisk * disk, char *page)
- jiffies_to_msecs(disk_stat_read(disk, io_ticks)),
- jiffies_to_msecs(disk_stat_read(disk, time_in_queue)));
- }
--static struct disk_attribute disk_attr_uevent = {
-- .attr = {.name = "uevent", .mode = S_IWUSR },
-- .store = disk_uevent_store
--};
--static struct disk_attribute disk_attr_dev = {
-- .attr = {.name = "dev", .mode = S_IRUGO },
-- .show = disk_dev_read
--};
--static struct disk_attribute disk_attr_range = {
-- .attr = {.name = "range", .mode = S_IRUGO },
-- .show = disk_range_read
--};
--static struct disk_attribute disk_attr_removable = {
-- .attr = {.name = "removable", .mode = S_IRUGO },
-- .show = disk_removable_read
--};
--static struct disk_attribute disk_attr_size = {
-- .attr = {.name = "size", .mode = S_IRUGO },
-- .show = disk_size_read
--};
--static struct disk_attribute disk_attr_capability = {
-- .attr = {.name = "capability", .mode = S_IRUGO },
-- .show = disk_capability_read
--};
--static struct disk_attribute disk_attr_stat = {
-- .attr = {.name = "stat", .mode = S_IRUGO },
-- .show = disk_stats_read
--};
-
- #ifdef CONFIG_FAIL_MAKE_REQUEST
-+static ssize_t disk_fail_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
++/*
++ * does not yet catch signals sent when the child dies.
++ * in exit.c or in signal.c.
++ */
++
++/*
++ * Determines which flags the user has access to [1 = access, 0 = no access].
++ */
++#define FLAG_MASK_32 ((unsigned long) \
++ (X86_EFLAGS_CF | X86_EFLAGS_PF | \
++ X86_EFLAGS_AF | X86_EFLAGS_ZF | \
++ X86_EFLAGS_SF | X86_EFLAGS_TF | \
++ X86_EFLAGS_DF | X86_EFLAGS_OF | \
++ X86_EFLAGS_RF | X86_EFLAGS_AC))
++
++/*
++ * Determines whether a value may be installed in a segment register.
++ */
++static inline bool invalid_selector(u16 value)
+{
-+ struct gendisk *disk = dev_to_disk(dev);
++ return unlikely(value != 0 && (value & SEGMENT_RPL_MASK) != USER_RPL);
++}
+
-+ return sprintf(buf, "%d\n", disk->flags & GENHD_FL_FAIL ? 1 : 0);
++#ifdef CONFIG_X86_32
++
++#define FLAG_MASK FLAG_MASK_32
++
++static long *pt_regs_access(struct pt_regs *regs, unsigned long regno)
++{
++ BUILD_BUG_ON(offsetof(struct pt_regs, bx) != 0);
++ regno >>= 2;
++ if (regno > FS)
++ --regno;
++ return &regs->bx + regno;
+}
-
--static ssize_t disk_fail_store(struct gendisk * disk,
-+static ssize_t disk_fail_store(struct device *dev,
-+ struct device_attribute *attr,
- const char *buf, size_t count)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
- int i;
-
- if (count > 0 && sscanf(buf, "%d", &i) > 0) {
-@@ -490,136 +455,100 @@ static ssize_t disk_fail_store(struct gendisk * disk,
-
- return count;
- }
--static ssize_t disk_fail_read(struct gendisk * disk, char *page)
--{
-- return sprintf(page, "%d\n", disk->flags & GENHD_FL_FAIL ? 1 : 0);
--}
--static struct disk_attribute disk_attr_fail = {
-- .attr = {.name = "make-it-fail", .mode = S_IRUGO | S_IWUSR },
-- .store = disk_fail_store,
-- .show = disk_fail_read
--};
-
- #endif
-
--static struct attribute * default_attrs[] = {
-- &disk_attr_uevent.attr,
-- &disk_attr_dev.attr,
-- &disk_attr_range.attr,
-- &disk_attr_removable.attr,
-- &disk_attr_size.attr,
-- &disk_attr_stat.attr,
-- &disk_attr_capability.attr,
-+static DEVICE_ATTR(range, S_IRUGO, disk_range_show, NULL);
-+static DEVICE_ATTR(removable, S_IRUGO, disk_removable_show, NULL);
-+static DEVICE_ATTR(size, S_IRUGO, disk_size_show, NULL);
-+static DEVICE_ATTR(capability, S_IRUGO, disk_capability_show, NULL);
-+static DEVICE_ATTR(stat, S_IRUGO, disk_stat_show, NULL);
-+#ifdef CONFIG_FAIL_MAKE_REQUEST
-+static struct device_attribute dev_attr_fail =
-+ __ATTR(make-it-fail, S_IRUGO|S_IWUSR, disk_fail_show, disk_fail_store);
++
++static u16 get_segment_reg(struct task_struct *task, unsigned long offset)
++{
++ /*
++ * Returning the value truncates it to 16 bits.
++ */
++ unsigned int retval;
++ if (offset != offsetof(struct user_regs_struct, gs))
++ retval = *pt_regs_access(task_pt_regs(task), offset);
++ else {
++ retval = task->thread.gs;
++ if (task == current)
++ savesegment(gs, retval);
++ }
++ return retval;
++}
++
++static int set_segment_reg(struct task_struct *task,
++ unsigned long offset, u16 value)
++{
++ /*
++ * The value argument was already truncated to 16 bits.
++ */
++ if (invalid_selector(value))
++ return -EIO;
++
++ if (offset != offsetof(struct user_regs_struct, gs))
++ *pt_regs_access(task_pt_regs(task), offset) = value;
++ else {
++ task->thread.gs = value;
++ if (task == current)
++ /*
++ * The user-mode %gs is not affected by
++ * kernel entry, so we must update the CPU.
++ */
++ loadsegment(gs, value);
++ }
++
++ return 0;
++}
++
++static unsigned long debugreg_addr_limit(struct task_struct *task)
++{
++ return TASK_SIZE - 3;
++}
++
++#else /* CONFIG_X86_64 */
++
++#define FLAG_MASK (FLAG_MASK_32 | X86_EFLAGS_NT)
++
++static unsigned long *pt_regs_access(struct pt_regs *regs, unsigned long offset)
++{
++ BUILD_BUG_ON(offsetof(struct pt_regs, r15) != 0);
++ return &regs->r15 + (offset / sizeof(regs->r15));
++}
++
++static u16 get_segment_reg(struct task_struct *task, unsigned long offset)
++{
++ /*
++ * Returning the value truncates it to 16 bits.
++ */
++ unsigned int seg;
++
++ switch (offset) {
++ case offsetof(struct user_regs_struct, fs):
++ if (task == current) {
++ /* Older gas can't assemble movq %?s,%r?? */
++ asm("movl %%fs,%0" : "=r" (seg));
++ return seg;
++ }
++ return task->thread.fsindex;
++ case offsetof(struct user_regs_struct, gs):
++ if (task == current) {
++ asm("movl %%gs,%0" : "=r" (seg));
++ return seg;
++ }
++ return task->thread.gsindex;
++ case offsetof(struct user_regs_struct, ds):
++ if (task == current) {
++ asm("movl %%ds,%0" : "=r" (seg));
++ return seg;
++ }
++ return task->thread.ds;
++ case offsetof(struct user_regs_struct, es):
++ if (task == current) {
++ asm("movl %%es,%0" : "=r" (seg));
++ return seg;
++ }
++ return task->thread.es;
++
++ case offsetof(struct user_regs_struct, cs):
++ case offsetof(struct user_regs_struct, ss):
++ break;
++ }
++ return *pt_regs_access(task_pt_regs(task), offset);
++}
++
++static int set_segment_reg(struct task_struct *task,
++ unsigned long offset, u16 value)
++{
++ /*
++ * The value argument was already truncated to 16 bits.
++ */
++ if (invalid_selector(value))
++ return -EIO;
++
++ switch (offset) {
++ case offsetof(struct user_regs_struct,fs):
++ /*
++ * If this is setting fs as for normal 64-bit use but
++ * setting fs_base has implicitly changed it, leave it.
++ */
++ if ((value == FS_TLS_SEL && task->thread.fsindex == 0 &&
++ task->thread.fs != 0) ||
++ (value == 0 && task->thread.fsindex == FS_TLS_SEL &&
++ task->thread.fs == 0))
++ break;
++ task->thread.fsindex = value;
++ if (task == current)
++ loadsegment(fs, task->thread.fsindex);
++ break;
++ case offsetof(struct user_regs_struct,gs):
++ /*
++ * If this is setting gs as for normal 64-bit use but
++ * setting gs_base has implicitly changed it, leave it.
++ */
++ if ((value == GS_TLS_SEL && task->thread.gsindex == 0 &&
++ task->thread.gs != 0) ||
++ (value == 0 && task->thread.gsindex == GS_TLS_SEL &&
++ task->thread.gs == 0))
++ break;
++ task->thread.gsindex = value;
++ if (task == current)
++ load_gs_index(task->thread.gsindex);
++ break;
++ case offsetof(struct user_regs_struct,ds):
++ task->thread.ds = value;
++ if (task == current)
++ loadsegment(ds, task->thread.ds);
++ break;
++ case offsetof(struct user_regs_struct,es):
++ task->thread.es = value;
++ if (task == current)
++ loadsegment(es, task->thread.es);
++ break;
++
++ /*
++ * Can't actually change these in 64-bit mode.
++ */
++ case offsetof(struct user_regs_struct,cs):
++#ifdef CONFIG_IA32_EMULATION
++ if (test_tsk_thread_flag(task, TIF_IA32))
++ task_pt_regs(task)->cs = value;
++#endif
++ break;
++ case offsetof(struct user_regs_struct,ss):
++#ifdef CONFIG_IA32_EMULATION
++ if (test_tsk_thread_flag(task, TIF_IA32))
++ task_pt_regs(task)->ss = value;
++#endif
++ break;
++ }
+
-+static struct attribute *disk_attrs[] = {
-+ &dev_attr_range.attr,
-+ &dev_attr_removable.attr,
-+ &dev_attr_size.attr,
-+ &dev_attr_capability.attr,
-+ &dev_attr_stat.attr,
- #ifdef CONFIG_FAIL_MAKE_REQUEST
-- &disk_attr_fail.attr,
-+ &dev_attr_fail.attr,
- #endif
-- NULL,
-+ NULL
-+};
++ return 0;
++}
+
-+static struct attribute_group disk_attr_group = {
-+ .attrs = disk_attrs,
- };
-
--static void disk_release(struct kobject * kobj)
-+static struct attribute_group *disk_attr_groups[] = {
-+ &disk_attr_group,
-+ NULL
-+};
++static unsigned long debugreg_addr_limit(struct task_struct *task)
++{
++#ifdef CONFIG_IA32_EMULATION
++ if (test_tsk_thread_flag(task, TIF_IA32))
++ return IA32_PAGE_OFFSET - 3;
++#endif
++ return TASK_SIZE64 - 7;
++}
+
-+static void disk_release(struct device *dev)
- {
-- struct gendisk *disk = to_disk(kobj);
-+ struct gendisk *disk = dev_to_disk(dev);
++#endif /* CONFIG_X86_32 */
+
- kfree(disk->random);
- kfree(disk->part);
- free_disk_stats(disk);
- kfree(disk);
- }
--
--static struct kobj_type ktype_block = {
-- .release = disk_release,
-- .sysfs_ops = &disk_sysfs_ops,
-- .default_attrs = default_attrs,
-+struct class block_class = {
-+ .name = "block",
- };
-
--extern struct kobj_type ktype_part;
--
--static int block_uevent_filter(struct kset *kset, struct kobject *kobj)
--{
-- struct kobj_type *ktype = get_ktype(kobj);
--
-- return ((ktype == &ktype_block) || (ktype == &ktype_part));
--}
--
--static int block_uevent(struct kset *kset, struct kobject *kobj,
-- struct kobj_uevent_env *env)
--{
-- struct kobj_type *ktype = get_ktype(kobj);
-- struct device *physdev;
-- struct gendisk *disk;
-- struct hd_struct *part;
--
-- if (ktype == &ktype_block) {
-- disk = container_of(kobj, struct gendisk, kobj);
-- add_uevent_var(env, "MINOR=%u", disk->first_minor);
-- } else if (ktype == &ktype_part) {
-- disk = container_of(kobj->parent, struct gendisk, kobj);
-- part = container_of(kobj, struct hd_struct, kobj);
-- add_uevent_var(env, "MINOR=%u",
-- disk->first_minor + part->partno);
-- } else
-- return 0;
--
-- add_uevent_var(env, "MAJOR=%u", disk->major);
--
-- /* add physical device, backing this device */
-- physdev = disk->driverfs_dev;
-- if (physdev) {
-- char *path = kobject_get_path(&physdev->kobj, GFP_KERNEL);
--
-- add_uevent_var(env, "PHYSDEVPATH=%s", path);
-- kfree(path);
--
-- if (physdev->bus)
-- add_uevent_var(env, "PHYSDEVBUS=%s", physdev->bus->name);
--
-- if (physdev->driver)
-- add_uevent_var(env, physdev->driver->name);
-- }
--
-- return 0;
--}
--
--static struct kset_uevent_ops block_uevent_ops = {
-- .filter = block_uevent_filter,
-- .uevent = block_uevent,
-+struct device_type disk_type = {
-+ .name = "disk",
-+ .groups = disk_attr_groups,
-+ .release = disk_release,
- };
-
--decl_subsys(block, &ktype_block, &block_uevent_ops);
--
- /*
- * aggregate disk stat collector. Uses the same stats that the sysfs
- * entries do, above, but makes them available through one seq_file.
-- * Watching a few disks may be efficient through sysfs, but watching
-- * all of them will be more efficient through this interface.
- *
- * The output looks suspiciously like /proc/partitions with a bunch of
- * extra fields.
- */
-
--/* iterator */
- static void *diskstats_start(struct seq_file *part, loff_t *pos)
- {
- loff_t k = *pos;
-- struct list_head *p;
-+ struct device *dev;
-
-- mutex_lock(&block_subsys_lock);
-- list_for_each(p, &block_subsys.list)
-+ mutex_lock(&block_class_lock);
-+ list_for_each_entry(dev, &block_class.devices, node) {
-+ if (dev->type != &disk_type)
-+ continue;
- if (!k--)
-- return list_entry(p, struct gendisk, kobj.entry);
-+ return dev_to_disk(dev);
++static unsigned long get_flags(struct task_struct *task)
++{
++ unsigned long retval = task_pt_regs(task)->flags;
++
++ /*
++ * If the debugger set TF, hide it from the readout.
++ */
++ if (test_tsk_thread_flag(task, TIF_FORCED_TF))
++ retval &= ~X86_EFLAGS_TF;
++
++ return retval;
++}
++
++static int set_flags(struct task_struct *task, unsigned long value)
++{
++ struct pt_regs *regs = task_pt_regs(task);
++
++ /*
++ * If the user value contains TF, mark that
++ * it was not "us" (the debugger) that set it.
++ * If not, make sure it stays set if we had.
++ */
++ if (value & X86_EFLAGS_TF)
++ clear_tsk_thread_flag(task, TIF_FORCED_TF);
++ else if (test_tsk_thread_flag(task, TIF_FORCED_TF))
++ value |= X86_EFLAGS_TF;
++
++ regs->flags = (regs->flags & ~FLAG_MASK) | (value & FLAG_MASK);
++
++ return 0;
++}
++
++static int putreg(struct task_struct *child,
++ unsigned long offset, unsigned long value)
++{
++ switch (offset) {
++ case offsetof(struct user_regs_struct, cs):
++ case offsetof(struct user_regs_struct, ds):
++ case offsetof(struct user_regs_struct, es):
++ case offsetof(struct user_regs_struct, fs):
++ case offsetof(struct user_regs_struct, gs):
++ case offsetof(struct user_regs_struct, ss):
++ return set_segment_reg(child, offset, value);
++
++ case offsetof(struct user_regs_struct, flags):
++ return set_flags(child, value);
++
++#ifdef CONFIG_X86_64
++ case offsetof(struct user_regs_struct,fs_base):
++ if (value >= TASK_SIZE_OF(child))
++ return -EIO;
++ /*
++ * When changing the segment base, use do_arch_prctl
++ * to set either thread.fs or thread.fsindex and the
++ * corresponding GDT slot.
++ */
++ if (child->thread.fs != value)
++ return do_arch_prctl(child, ARCH_SET_FS, value);
++ return 0;
++ case offsetof(struct user_regs_struct,gs_base):
++ /*
++ * Exactly the same here as the %fs handling above.
++ */
++ if (value >= TASK_SIZE_OF(child))
++ return -EIO;
++ if (child->thread.gs != value)
++ return do_arch_prctl(child, ARCH_SET_GS, value);
++ return 0;
++#endif
+ }
- return NULL;
- }
-
- static void *diskstats_next(struct seq_file *part, void *v, loff_t *pos)
- {
-- struct list_head *p = ((struct gendisk *)v)->kobj.entry.next;
-+ struct gendisk *gp = v;
-+ struct device *dev;
+
- ++*pos;
-- return p==&block_subsys.list ? NULL :
-- list_entry(p, struct gendisk, kobj.entry);
-+ list_for_each_entry(dev, &gp->dev.node, node) {
-+ if (&dev->node == &block_class.devices)
-+ return NULL;
-+ if (dev->type == &disk_type)
-+ return dev_to_disk(dev);
++ *pt_regs_access(task_pt_regs(child), offset) = value;
++ return 0;
++}
++
++static unsigned long getreg(struct task_struct *task, unsigned long offset)
++{
++ switch (offset) {
++ case offsetof(struct user_regs_struct, cs):
++ case offsetof(struct user_regs_struct, ds):
++ case offsetof(struct user_regs_struct, es):
++ case offsetof(struct user_regs_struct, fs):
++ case offsetof(struct user_regs_struct, gs):
++ case offsetof(struct user_regs_struct, ss):
++ return get_segment_reg(task, offset);
++
++ case offsetof(struct user_regs_struct, flags):
++ return get_flags(task);
++
++#ifdef CONFIG_X86_64
++ case offsetof(struct user_regs_struct, fs_base): {
++ /*
++ * do_arch_prctl may have used a GDT slot instead of
++ * the MSR. To userland, it appears the same either
++ * way, except the %fs segment selector might not be 0.
++ */
++ unsigned int seg = task->thread.fsindex;
++ if (task->thread.fs != 0)
++ return task->thread.fs;
++ if (task == current)
++ asm("movl %%fs,%0" : "=r" (seg));
++ if (seg != FS_TLS_SEL)
++ return 0;
++ return get_desc_base(&task->thread.tls_array[FS_TLS]);
+ }
-+ return NULL;
- }
-
- static void diskstats_stop(struct seq_file *part, void *v)
- {
-- mutex_unlock(&block_subsys_lock);
-+ mutex_unlock(&block_class_lock);
- }
-
- static int diskstats_show(struct seq_file *s, void *v)
-@@ -629,7 +558,7 @@ static int diskstats_show(struct seq_file *s, void *v)
- int n = 0;
-
- /*
-- if (&sgp->kobj.entry == block_subsys.kset.list.next)
-+ if (&gp->dev.kobj.entry == block_class.devices.next)
- seq_puts(s, "major minor name"
- " rio rmerge rsect ruse wio wmerge "
- "wsect wuse running use aveq"
-@@ -666,7 +595,7 @@ static int diskstats_show(struct seq_file *s, void *v)
- return 0;
- }
-
--struct seq_operations diskstats_op = {
-+const struct seq_operations diskstats_op = {
- .start = diskstats_start,
- .next = diskstats_next,
- .stop = diskstats_stop,
-@@ -683,7 +612,7 @@ static void media_change_notify_thread(struct work_struct *work)
- * set enviroment vars to indicate which event this is for
- * so that user space will know to go check the media status.
- */
-- kobject_uevent_env(&gd->kobj, KOBJ_CHANGE, envp);
-+ kobject_uevent_env(&gd->dev.kobj, KOBJ_CHANGE, envp);
- put_device(gd->driverfs_dev);
- }
-
-@@ -694,6 +623,25 @@ void genhd_media_change_notify(struct gendisk *disk)
- }
- EXPORT_SYMBOL_GPL(genhd_media_change_notify);
-
-+dev_t blk_lookup_devt(const char *name)
++ case offsetof(struct user_regs_struct, gs_base): {
++ /*
++ * Exactly the same here as the %fs handling above.
++ */
++ unsigned int seg = task->thread.gsindex;
++ if (task->thread.gs != 0)
++ return task->thread.gs;
++ if (task == current)
++ asm("movl %%gs,%0" : "=r" (seg));
++ if (seg != GS_TLS_SEL)
++ return 0;
++ return get_desc_base(&task->thread.tls_array[GS_TLS]);
++ }
++#endif
++ }
++
++ return *pt_regs_access(task_pt_regs(task), offset);
++}
++
++static int genregs_get(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
+{
-+ struct device *dev;
-+ dev_t devt = MKDEV(0, 0);
++ if (kbuf) {
++ unsigned long *k = kbuf;
++ while (count > 0) {
++ *k++ = getreg(target, pos);
++ count -= sizeof(*k);
++ pos += sizeof(*k);
++ }
++ } else {
++ unsigned long __user *u = ubuf;
++ while (count > 0) {
++ if (__put_user(getreg(target, pos), u++))
++ return -EFAULT;
++ count -= sizeof(*u);
++ pos += sizeof(*u);
++ }
++ }
+
-+ mutex_lock(&block_class_lock);
-+ list_for_each_entry(dev, &block_class.devices, node) {
-+ if (strcmp(dev->bus_id, name) == 0) {
-+ devt = dev->devt;
++ return 0;
++}
++
++static int genregs_set(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
++{
++ int ret = 0;
++ if (kbuf) {
++ const unsigned long *k = kbuf;
++ while (count > 0 && !ret) {
++ ret = putreg(target, pos, *k++);
++ count -= sizeof(*k);
++ pos += sizeof(*k);
++ }
++ } else {
++ const unsigned long __user *u = ubuf;
++ while (count > 0 && !ret) {
++ unsigned long word;
++ ret = __get_user(word, u++);
++ if (ret)
++ break;
++ ret = putreg(target, pos, word);
++ count -= sizeof(*u);
++ pos += sizeof(*u);
++ }
++ }
++ return ret;
++}
++
++/*
++ * This function is trivial and will be inlined by the compiler.
++ * Having it separates the implementation details of debug
++ * registers from the interface details of ptrace.
++ */
++static unsigned long ptrace_get_debugreg(struct task_struct *child, int n)
++{
++ switch (n) {
++ case 0: return child->thread.debugreg0;
++ case 1: return child->thread.debugreg1;
++ case 2: return child->thread.debugreg2;
++ case 3: return child->thread.debugreg3;
++ case 6: return child->thread.debugreg6;
++ case 7: return child->thread.debugreg7;
++ }
++ return 0;
++}
++
++static int ptrace_set_debugreg(struct task_struct *child,
++ int n, unsigned long data)
++{
++ int i;
++
++ if (unlikely(n == 4 || n == 5))
++ return -EIO;
++
++ if (n < 4 && unlikely(data >= debugreg_addr_limit(child)))
++ return -EIO;
++
++ switch (n) {
++ case 0: child->thread.debugreg0 = data; break;
++ case 1: child->thread.debugreg1 = data; break;
++ case 2: child->thread.debugreg2 = data; break;
++ case 3: child->thread.debugreg3 = data; break;
++
++ case 6:
++ if ((data & ~0xffffffffUL) != 0)
++ return -EIO;
++ child->thread.debugreg6 = data;
++ break;
++
++ case 7:
++ /*
++ * Sanity-check data. Take one half-byte at once with
++ * check = (val >> (16 + 4*i)) & 0xf. It contains the
++ * R/Wi and LENi bits; bits 0 and 1 are R/Wi, and bits
++ * 2 and 3 are LENi. Given a list of invalid values,
++ * we do mask |= 1 << invalid_value, so that
++ * (mask >> check) & 1 is a correct test for invalid
++ * values.
++ *
++ * R/Wi contains the type of the breakpoint /
++ * watchpoint, LENi contains the length of the watched
++ * data in the watchpoint case.
++ *
++ * The invalid values are:
++ * - LENi == 0x10 (undefined), so mask |= 0x0f00. [32-bit]
++ * - R/Wi == 0x10 (break on I/O reads or writes), so
++ * mask |= 0x4444.
++ * - R/Wi == 0x00 && LENi != 0x00, so we have mask |=
++ * 0x1110.
++ *
++ * Finally, mask = 0x0f00 | 0x4444 | 0x1110 == 0x5f54.
++ *
++ * See the Intel Manual "System Programming Guide",
++ * 15.2.4
++ *
++ * Note that LENi == 0x10 is defined on x86_64 in long
++ * mode (i.e. even for 32-bit userspace software, but
++ * 64-bit kernel), so the x86_64 mask value is 0x5454.
++ * See the AMD manual no. 24593 (AMD64 System Programming)
++ */
++#ifdef CONFIG_X86_32
++#define DR7_MASK 0x5f54
++#else
++#define DR7_MASK 0x5554
++#endif
++ data &= ~DR_CONTROL_RESERVED;
++ for (i = 0; i < 4; i++)
++ if ((DR7_MASK >> ((data >> (16 + 4*i)) & 0xf)) & 1)
++ return -EIO;
++ child->thread.debugreg7 = data;
++ if (data)
++ set_tsk_thread_flag(child, TIF_DEBUG);
++ else
++ clear_tsk_thread_flag(child, TIF_DEBUG);
++ break;
++ }
++
++ return 0;
++}
++
++static int ptrace_bts_get_size(struct task_struct *child)
++{
++ if (!child->thread.ds_area_msr)
++ return -ENXIO;
++
++ return ds_get_bts_index((void *)child->thread.ds_area_msr);
++}
++
++static int ptrace_bts_read_record(struct task_struct *child,
++ long index,
++ struct bts_struct __user *out)
++{
++ struct bts_struct ret;
++ int retval;
++ int bts_end;
++ int bts_index;
++
++ if (!child->thread.ds_area_msr)
++ return -ENXIO;
++
++ if (index < 0)
++ return -EINVAL;
++
++ bts_end = ds_get_bts_end((void *)child->thread.ds_area_msr);
++ if (bts_end <= index)
++ return -EINVAL;
++
++ /* translate the ptrace bts index into the ds bts index */
++ bts_index = ds_get_bts_index((void *)child->thread.ds_area_msr);
++ bts_index -= (index + 1);
++ if (bts_index < 0)
++ bts_index += bts_end;
++
++ retval = ds_read_bts((void *)child->thread.ds_area_msr,
++ bts_index, &ret);
++ if (retval < 0)
++ return retval;
++
++ if (copy_to_user(out, &ret, sizeof(ret)))
++ return -EFAULT;
++
++ return sizeof(ret);
++}
++
++static int ptrace_bts_write_record(struct task_struct *child,
++ const struct bts_struct *in)
++{
++ int retval;
++
++ if (!child->thread.ds_area_msr)
++ return -ENXIO;
++
++ retval = ds_write_bts((void *)child->thread.ds_area_msr, in);
++ if (retval)
++ return retval;
++
++ return sizeof(*in);
++}
++
++static int ptrace_bts_clear(struct task_struct *child)
++{
++ if (!child->thread.ds_area_msr)
++ return -ENXIO;
++
++ return ds_clear((void *)child->thread.ds_area_msr);
++}
++
++static int ptrace_bts_drain(struct task_struct *child,
++ long size,
++ struct bts_struct __user *out)
++{
++ int end, i;
++ void *ds = (void *)child->thread.ds_area_msr;
++
++ if (!ds)
++ return -ENXIO;
++
++ end = ds_get_bts_index(ds);
++ if (end <= 0)
++ return end;
++
++ if (size < (end * sizeof(struct bts_struct)))
++ return -EIO;
++
++ for (i = 0; i < end; i++, out++) {
++ struct bts_struct ret;
++ int retval;
++
++ retval = ds_read_bts(ds, i, &ret);
++ if (retval < 0)
++ return retval;
++
++ if (copy_to_user(out, &ret, sizeof(ret)))
++ return -EFAULT;
++ }
++
++ ds_clear(ds);
++
++ return end;
++}
++
++static int ptrace_bts_realloc(struct task_struct *child,
++ int size, int reduce_size)
++{
++ unsigned long rlim, vm;
++ int ret, old_size;
++
++ if (size < 0)
++ return -EINVAL;
++
++ old_size = ds_get_bts_size((void *)child->thread.ds_area_msr);
++ if (old_size < 0)
++ return old_size;
++
++ ret = ds_free((void **)&child->thread.ds_area_msr);
++ if (ret < 0)
++ goto out;
++
++ size >>= PAGE_SHIFT;
++ old_size >>= PAGE_SHIFT;
++
++ current->mm->total_vm -= old_size;
++ current->mm->locked_vm -= old_size;
++
++ if (size == 0)
++ goto out;
++
++ rlim = current->signal->rlim[RLIMIT_AS].rlim_cur >> PAGE_SHIFT;
++ vm = current->mm->total_vm + size;
++ if (rlim < vm) {
++ ret = -ENOMEM;
++
++ if (!reduce_size)
++ goto out;
++
++ size = rlim - current->mm->total_vm;
++ if (size <= 0)
++ goto out;
++ }
++
++ rlim = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT;
++ vm = current->mm->locked_vm + size;
++ if (rlim < vm) {
++ ret = -ENOMEM;
++
++ if (!reduce_size)
++ goto out;
++
++ size = rlim - current->mm->locked_vm;
++ if (size <= 0)
++ goto out;
++ }
++
++ ret = ds_allocate((void **)&child->thread.ds_area_msr,
++ size << PAGE_SHIFT);
++ if (ret < 0)
++ goto out;
++
++ current->mm->total_vm += size;
++ current->mm->locked_vm += size;
++
++out:
++ if (child->thread.ds_area_msr)
++ set_tsk_thread_flag(child, TIF_DS_AREA_MSR);
++ else
++ clear_tsk_thread_flag(child, TIF_DS_AREA_MSR);
++
++ return ret;
++}
++
++static int ptrace_bts_config(struct task_struct *child,
++ long cfg_size,
++ const struct ptrace_bts_config __user *ucfg)
++{
++ struct ptrace_bts_config cfg;
++ int bts_size, ret = 0;
++ void *ds;
++
++ if (cfg_size < sizeof(cfg))
++ return -EIO;
++
++ if (copy_from_user(&cfg, ucfg, sizeof(cfg)))
++ return -EFAULT;
++
++ if ((int)cfg.size < 0)
++ return -EINVAL;
++
++ bts_size = 0;
++ ds = (void *)child->thread.ds_area_msr;
++ if (ds) {
++ bts_size = ds_get_bts_size(ds);
++ if (bts_size < 0)
++ return bts_size;
++ }
++ cfg.size = PAGE_ALIGN(cfg.size);
++
++ if (bts_size != cfg.size) {
++ ret = ptrace_bts_realloc(child, cfg.size,
++ cfg.flags & PTRACE_BTS_O_CUT_SIZE);
++ if (ret < 0)
++ goto errout;
++
++ ds = (void *)child->thread.ds_area_msr;
++ }
++
++ if (cfg.flags & PTRACE_BTS_O_SIGNAL)
++ ret = ds_set_overflow(ds, DS_O_SIGNAL);
++ else
++ ret = ds_set_overflow(ds, DS_O_WRAP);
++ if (ret < 0)
++ goto errout;
++
++ if (cfg.flags & PTRACE_BTS_O_TRACE)
++ child->thread.debugctlmsr |= ds_debugctl_mask();
++ else
++ child->thread.debugctlmsr &= ~ds_debugctl_mask();
++
++ if (cfg.flags & PTRACE_BTS_O_SCHED)
++ set_tsk_thread_flag(child, TIF_BTS_TRACE_TS);
++ else
++ clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS);
++
++ ret = sizeof(cfg);
++
++out:
++ if (child->thread.debugctlmsr)
++ set_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++ else
++ clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++
++ return ret;
++
++errout:
++ child->thread.debugctlmsr &= ~ds_debugctl_mask();
++ clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS);
++ goto out;
++}
++
++static int ptrace_bts_status(struct task_struct *child,
++ long cfg_size,
++ struct ptrace_bts_config __user *ucfg)
++{
++ void *ds = (void *)child->thread.ds_area_msr;
++ struct ptrace_bts_config cfg;
++
++ if (cfg_size < sizeof(cfg))
++ return -EIO;
++
++ memset(&cfg, 0, sizeof(cfg));
++
++ if (ds) {
++ cfg.size = ds_get_bts_size(ds);
++
++ if (ds_get_overflow(ds) == DS_O_SIGNAL)
++ cfg.flags |= PTRACE_BTS_O_SIGNAL;
++
++ if (test_tsk_thread_flag(child, TIF_DEBUGCTLMSR) &&
++ child->thread.debugctlmsr & ds_debugctl_mask())
++ cfg.flags |= PTRACE_BTS_O_TRACE;
++
++ if (test_tsk_thread_flag(child, TIF_BTS_TRACE_TS))
++ cfg.flags |= PTRACE_BTS_O_SCHED;
++ }
++
++ cfg.bts_size = sizeof(struct bts_struct);
++
++ if (copy_to_user(ucfg, &cfg, sizeof(cfg)))
++ return -EFAULT;
++
++ return sizeof(cfg);
++}
++
++void ptrace_bts_take_timestamp(struct task_struct *tsk,
++ enum bts_qualifier qualifier)
++{
++ struct bts_struct rec = {
++ .qualifier = qualifier,
++ .variant.jiffies = jiffies_64
++ };
++
++ ptrace_bts_write_record(tsk, &rec);
++}
++
++/*
++ * Called by kernel/ptrace.c when detaching..
++ *
++ * Make sure the single step bit is not set.
++ */
++void ptrace_disable(struct task_struct *child)
++{
++ user_disable_single_step(child);
++#ifdef TIF_SYSCALL_EMU
++ clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
++#endif
++ if (child->thread.ds_area_msr) {
++ ptrace_bts_realloc(child, 0, 0);
++ child->thread.debugctlmsr &= ~ds_debugctl_mask();
++ if (!child->thread.debugctlmsr)
++ clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++ clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS);
++ }
++}
++
++#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
++static const struct user_regset_view user_x86_32_view; /* Initialized below. */
++#endif
++
++long arch_ptrace(struct task_struct *child, long request, long addr, long data)
++{
++ int ret;
++ unsigned long __user *datap = (unsigned long __user *)data;
++
++ switch (request) {
++ /* read the word at location addr in the USER area. */
++ case PTRACE_PEEKUSR: {
++ unsigned long tmp;
++
++ ret = -EIO;
++ if ((addr & (sizeof(data) - 1)) || addr < 0 ||
++ addr >= sizeof(struct user))
+ break;
++
++ tmp = 0; /* Default return condition */
++ if (addr < sizeof(struct user_regs_struct))
++ tmp = getreg(child, addr);
++ else if (addr >= offsetof(struct user, u_debugreg[0]) &&
++ addr <= offsetof(struct user, u_debugreg[7])) {
++ addr -= offsetof(struct user, u_debugreg[0]);
++ tmp = ptrace_get_debugreg(child, addr / sizeof(data));
+ }
++ ret = put_user(tmp, datap);
++ break;
+ }
-+ mutex_unlock(&block_class_lock);
+
-+ return devt;
++ case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
++ ret = -EIO;
++ if ((addr & (sizeof(data) - 1)) || addr < 0 ||
++ addr >= sizeof(struct user))
++ break;
++
++ if (addr < sizeof(struct user_regs_struct))
++ ret = putreg(child, addr, data);
++ else if (addr >= offsetof(struct user, u_debugreg[0]) &&
++ addr <= offsetof(struct user, u_debugreg[7])) {
++ addr -= offsetof(struct user, u_debugreg[0]);
++ ret = ptrace_set_debugreg(child,
++ addr / sizeof(data), data);
++ }
++ break;
++
++ case PTRACE_GETREGS: /* Get all gp regs from the child. */
++ return copy_regset_to_user(child,
++ task_user_regset_view(current),
++ REGSET_GENERAL,
++ 0, sizeof(struct user_regs_struct),
++ datap);
++
++ case PTRACE_SETREGS: /* Set all gp regs in the child. */
++ return copy_regset_from_user(child,
++ task_user_regset_view(current),
++ REGSET_GENERAL,
++ 0, sizeof(struct user_regs_struct),
++ datap);
++
++ case PTRACE_GETFPREGS: /* Get the child FPU state. */
++ return copy_regset_to_user(child,
++ task_user_regset_view(current),
++ REGSET_FP,
++ 0, sizeof(struct user_i387_struct),
++ datap);
++
++ case PTRACE_SETFPREGS: /* Set the child FPU state. */
++ return copy_regset_from_user(child,
++ task_user_regset_view(current),
++ REGSET_FP,
++ 0, sizeof(struct user_i387_struct),
++ datap);
++
++#ifdef CONFIG_X86_32
++ case PTRACE_GETFPXREGS: /* Get the child extended FPU state. */
++ return copy_regset_to_user(child, &user_x86_32_view,
++ REGSET_XFP,
++ 0, sizeof(struct user_fxsr_struct),
++ datap);
++
++ case PTRACE_SETFPXREGS: /* Set the child extended FPU state. */
++ return copy_regset_from_user(child, &user_x86_32_view,
++ REGSET_XFP,
++ 0, sizeof(struct user_fxsr_struct),
++ datap);
++#endif
++
++#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
++ case PTRACE_GET_THREAD_AREA:
++ if (addr < 0)
++ return -EIO;
++ ret = do_get_thread_area(child, addr,
++ (struct user_desc __user *) data);
++ break;
++
++ case PTRACE_SET_THREAD_AREA:
++ if (addr < 0)
++ return -EIO;
++ ret = do_set_thread_area(child, addr,
++ (struct user_desc __user *) data, 0);
++ break;
++#endif
++
++#ifdef CONFIG_X86_64
++ /* normal 64bit interface to access TLS data.
++ Works just like arch_prctl, except that the arguments
++ are reversed. */
++ case PTRACE_ARCH_PRCTL:
++ ret = do_arch_prctl(child, data, addr);
++ break;
++#endif
++
++ case PTRACE_BTS_CONFIG:
++ ret = ptrace_bts_config
++ (child, data, (struct ptrace_bts_config __user *)addr);
++ break;
++
++ case PTRACE_BTS_STATUS:
++ ret = ptrace_bts_status
++ (child, data, (struct ptrace_bts_config __user *)addr);
++ break;
++
++ case PTRACE_BTS_SIZE:
++ ret = ptrace_bts_get_size(child);
++ break;
++
++ case PTRACE_BTS_GET:
++ ret = ptrace_bts_read_record
++ (child, data, (struct bts_struct __user *) addr);
++ break;
++
++ case PTRACE_BTS_CLEAR:
++ ret = ptrace_bts_clear(child);
++ break;
++
++ case PTRACE_BTS_DRAIN:
++ ret = ptrace_bts_drain
++ (child, data, (struct bts_struct __user *) addr);
++ break;
++
++ default:
++ ret = ptrace_request(child, request, addr, data);
++ break;
++ }
++
++ return ret;
+}
+
-+EXPORT_SYMBOL(blk_lookup_devt);
++#ifdef CONFIG_IA32_EMULATION
+
- struct gendisk *alloc_disk(int minors)
- {
- return alloc_disk_node(minors, -1);
-@@ -721,9 +669,10 @@ struct gendisk *alloc_disk_node(int minors, int node_id)
- }
- }
- disk->minors = minors;
-- kobj_set_kset_s(disk,block_subsys);
-- kobject_init(&disk->kobj);
- rand_initialize_disk(disk);
-+ disk->dev.class = &block_class;
-+ disk->dev.type = &disk_type;
-+ device_initialize(&disk->dev);
- INIT_WORK(&disk->async_notify,
- media_change_notify_thread);
- }
-@@ -743,7 +692,7 @@ struct kobject *get_disk(struct gendisk *disk)
- owner = disk->fops->owner;
- if (owner && !try_module_get(owner))
- return NULL;
-- kobj = kobject_get(&disk->kobj);
-+ kobj = kobject_get(&disk->dev.kobj);
- if (kobj == NULL) {
- module_put(owner);
- return NULL;
-@@ -757,7 +706,7 @@ EXPORT_SYMBOL(get_disk);
- void put_disk(struct gendisk *disk)
- {
- if (disk)
-- kobject_put(&disk->kobj);
-+ kobject_put(&disk->dev.kobj);
- }
-
- EXPORT_SYMBOL(put_disk);
-diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
++#include <linux/compat.h>
++#include <linux/syscalls.h>
++#include <asm/ia32.h>
++#include <asm/user32.h>
++
++#define R32(l,q) \
++ case offsetof(struct user32, regs.l): \
++ regs->q = value; break
++
++#define SEG32(rs) \
++ case offsetof(struct user32, regs.rs): \
++ return set_segment_reg(child, \
++ offsetof(struct user_regs_struct, rs), \
++ value); \
++ break
++
++static int putreg32(struct task_struct *child, unsigned regno, u32 value)
++{
++ struct pt_regs *regs = task_pt_regs(child);
++
++ switch (regno) {
++
++ SEG32(cs);
++ SEG32(ds);
++ SEG32(es);
++ SEG32(fs);
++ SEG32(gs);
++ SEG32(ss);
++
++ R32(ebx, bx);
++ R32(ecx, cx);
++ R32(edx, dx);
++ R32(edi, di);
++ R32(esi, si);
++ R32(ebp, bp);
++ R32(eax, ax);
++ R32(orig_eax, orig_ax);
++ R32(eip, ip);
++ R32(esp, sp);
++
++ case offsetof(struct user32, regs.eflags):
++ return set_flags(child, value);
++
++ case offsetof(struct user32, u_debugreg[0]) ...
++ offsetof(struct user32, u_debugreg[7]):
++ regno -= offsetof(struct user32, u_debugreg[0]);
++ return ptrace_set_debugreg(child, regno / 4, value);
++
++ default:
++ if (regno > sizeof(struct user32) || (regno & 3))
++ return -EIO;
++
++ /*
++ * Other dummy fields in the virtual user structure
++ * are ignored
++ */
++ break;
++ }
++ return 0;
++}
++
++#undef R32
++#undef SEG32
++
++#define R32(l,q) \
++ case offsetof(struct user32, regs.l): \
++ *val = regs->q; break
++
++#define SEG32(rs) \
++ case offsetof(struct user32, regs.rs): \
++ *val = get_segment_reg(child, \
++ offsetof(struct user_regs_struct, rs)); \
++ break
++
++static int getreg32(struct task_struct *child, unsigned regno, u32 *val)
++{
++ struct pt_regs *regs = task_pt_regs(child);
++
++ switch (regno) {
++
++ SEG32(ds);
++ SEG32(es);
++ SEG32(fs);
++ SEG32(gs);
++
++ R32(cs, cs);
++ R32(ss, ss);
++ R32(ebx, bx);
++ R32(ecx, cx);
++ R32(edx, dx);
++ R32(edi, di);
++ R32(esi, si);
++ R32(ebp, bp);
++ R32(eax, ax);
++ R32(orig_eax, orig_ax);
++ R32(eip, ip);
++ R32(esp, sp);
++
++ case offsetof(struct user32, regs.eflags):
++ *val = get_flags(child);
++ break;
++
++ case offsetof(struct user32, u_debugreg[0]) ...
++ offsetof(struct user32, u_debugreg[7]):
++ regno -= offsetof(struct user32, u_debugreg[0]);
++ *val = ptrace_get_debugreg(child, regno / 4);
++ break;
++
++ default:
++ if (regno > sizeof(struct user32) || (regno & 3))
++ return -EIO;
++
++ /*
++ * Other dummy fields in the virtual user structure
++ * are ignored
++ */
++ *val = 0;
++ break;
++ }
++ return 0;
++}
++
++#undef R32
++#undef SEG32
++
++static int genregs32_get(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
++{
++ if (kbuf) {
++ compat_ulong_t *k = kbuf;
++ while (count > 0) {
++ getreg32(target, pos, k++);
++ count -= sizeof(*k);
++ pos += sizeof(*k);
++ }
++ } else {
++ compat_ulong_t __user *u = ubuf;
++ while (count > 0) {
++ compat_ulong_t word;
++ getreg32(target, pos, &word);
++ if (__put_user(word, u++))
++ return -EFAULT;
++ count -= sizeof(*u);
++ pos += sizeof(*u);
++ }
++ }
++
++ return 0;
++}
++
++static int genregs32_set(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
++{
++ int ret = 0;
++ if (kbuf) {
++ const compat_ulong_t *k = kbuf;
++ while (count > 0 && !ret) {
++ ret = putreg(target, pos, *k++);
++ count -= sizeof(*k);
++ pos += sizeof(*k);
++ }
++ } else {
++ const compat_ulong_t __user *u = ubuf;
++ while (count > 0 && !ret) {
++ compat_ulong_t word;
++ ret = __get_user(word, u++);
++ if (ret)
++ break;
++ ret = putreg(target, pos, word);
++ count -= sizeof(*u);
++ pos += sizeof(*u);
++ }
++ }
++ return ret;
++}
++
++static long ptrace32_siginfo(unsigned request, u32 pid, u32 addr, u32 data)
++{
++ siginfo_t __user *si = compat_alloc_user_space(sizeof(siginfo_t));
++ compat_siginfo_t __user *si32 = compat_ptr(data);
++ siginfo_t ssi;
++ int ret;
++
++ if (request == PTRACE_SETSIGINFO) {
++ memset(&ssi, 0, sizeof(siginfo_t));
++ ret = copy_siginfo_from_user32(&ssi, si32);
++ if (ret)
++ return ret;
++ if (copy_to_user(si, &ssi, sizeof(siginfo_t)))
++ return -EFAULT;
++ }
++ ret = sys_ptrace(request, pid, addr, (unsigned long)si);
++ if (ret)
++ return ret;
++ if (request == PTRACE_GETSIGINFO) {
++ if (copy_from_user(&ssi, si, sizeof(siginfo_t)))
++ return -EFAULT;
++ ret = copy_siginfo_to_user32(si32, &ssi);
++ }
++ return ret;
++}
++
++asmlinkage long sys32_ptrace(long request, u32 pid, u32 addr, u32 data)
++{
++ struct task_struct *child;
++ struct pt_regs *childregs;
++ void __user *datap = compat_ptr(data);
++ int ret;
++ __u32 val;
++
++ switch (request) {
++ case PTRACE_TRACEME:
++ case PTRACE_ATTACH:
++ case PTRACE_KILL:
++ case PTRACE_CONT:
++ case PTRACE_SINGLESTEP:
++ case PTRACE_SINGLEBLOCK:
++ case PTRACE_DETACH:
++ case PTRACE_SYSCALL:
++ case PTRACE_OLDSETOPTIONS:
++ case PTRACE_SETOPTIONS:
++ case PTRACE_SET_THREAD_AREA:
++ case PTRACE_GET_THREAD_AREA:
++ case PTRACE_BTS_CONFIG:
++ case PTRACE_BTS_STATUS:
++ case PTRACE_BTS_SIZE:
++ case PTRACE_BTS_GET:
++ case PTRACE_BTS_CLEAR:
++ case PTRACE_BTS_DRAIN:
++ return sys_ptrace(request, pid, addr, data);
++
++ default:
++ return -EINVAL;
++
++ case PTRACE_PEEKTEXT:
++ case PTRACE_PEEKDATA:
++ case PTRACE_POKEDATA:
++ case PTRACE_POKETEXT:
++ case PTRACE_POKEUSR:
++ case PTRACE_PEEKUSR:
++ case PTRACE_GETREGS:
++ case PTRACE_SETREGS:
++ case PTRACE_SETFPREGS:
++ case PTRACE_GETFPREGS:
++ case PTRACE_SETFPXREGS:
++ case PTRACE_GETFPXREGS:
++ case PTRACE_GETEVENTMSG:
++ break;
++
++ case PTRACE_SETSIGINFO:
++ case PTRACE_GETSIGINFO:
++ return ptrace32_siginfo(request, pid, addr, data);
++ }
++
++ child = ptrace_get_task_struct(pid);
++ if (IS_ERR(child))
++ return PTR_ERR(child);
++
++ ret = ptrace_check_attach(child, request == PTRACE_KILL);
++ if (ret < 0)
++ goto out;
++
++ childregs = task_pt_regs(child);
++
++ switch (request) {
++ case PTRACE_PEEKUSR:
++ ret = getreg32(child, addr, &val);
++ if (ret == 0)
++ ret = put_user(val, (__u32 __user *)datap);
++ break;
++
++ case PTRACE_POKEUSR:
++ ret = putreg32(child, addr, data);
++ break;
++
++ case PTRACE_GETREGS: /* Get all gp regs from the child. */
++ return copy_regset_to_user(child, &user_x86_32_view,
++ REGSET_GENERAL,
++ 0, sizeof(struct user_regs_struct32),
++ datap);
++
++ case PTRACE_SETREGS: /* Set all gp regs in the child. */
++ return copy_regset_from_user(child, &user_x86_32_view,
++ REGSET_GENERAL, 0,
++ sizeof(struct user_regs_struct32),
++ datap);
++
++ case PTRACE_GETFPREGS: /* Get the child FPU state. */
++ return copy_regset_to_user(child, &user_x86_32_view,
++ REGSET_FP, 0,
++ sizeof(struct user_i387_ia32_struct),
++ datap);
++
++ case PTRACE_SETFPREGS: /* Set the child FPU state. */
++ return copy_regset_from_user(
++ child, &user_x86_32_view, REGSET_FP,
++ 0, sizeof(struct user_i387_ia32_struct), datap);
++
++ case PTRACE_GETFPXREGS: /* Get the child extended FPU state. */
++ return copy_regset_to_user(child, &user_x86_32_view,
++ REGSET_XFP, 0,
++ sizeof(struct user32_fxsr_struct),
++ datap);
++
++ case PTRACE_SETFPXREGS: /* Set the child extended FPU state. */
++ return copy_regset_from_user(child, &user_x86_32_view,
++ REGSET_XFP, 0,
++ sizeof(struct user32_fxsr_struct),
++ datap);
++
++ default:
++ return compat_ptrace_request(child, request, addr, data);
++ }
++
++ out:
++ put_task_struct(child);
++ return ret;
++}
++
++#endif /* CONFIG_IA32_EMULATION */
++
++#ifdef CONFIG_X86_64
++
++static const struct user_regset x86_64_regsets[] = {
++ [REGSET_GENERAL] = {
++ .core_note_type = NT_PRSTATUS,
++ .n = sizeof(struct user_regs_struct) / sizeof(long),
++ .size = sizeof(long), .align = sizeof(long),
++ .get = genregs_get, .set = genregs_set
++ },
++ [REGSET_FP] = {
++ .core_note_type = NT_PRFPREG,
++ .n = sizeof(struct user_i387_struct) / sizeof(long),
++ .size = sizeof(long), .align = sizeof(long),
++ .active = xfpregs_active, .get = xfpregs_get, .set = xfpregs_set
++ },
++};
++
++static const struct user_regset_view user_x86_64_view = {
++ .name = "x86_64", .e_machine = EM_X86_64,
++ .regsets = x86_64_regsets, .n = ARRAY_SIZE(x86_64_regsets)
++};
++
++#else /* CONFIG_X86_32 */
++
++#define user_regs_struct32 user_regs_struct
++#define genregs32_get genregs_get
++#define genregs32_set genregs_set
++
++#endif /* CONFIG_X86_64 */
++
++#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
++static const struct user_regset x86_32_regsets[] = {
++ [REGSET_GENERAL] = {
++ .core_note_type = NT_PRSTATUS,
++ .n = sizeof(struct user_regs_struct32) / sizeof(u32),
++ .size = sizeof(u32), .align = sizeof(u32),
++ .get = genregs32_get, .set = genregs32_set
++ },
++ [REGSET_FP] = {
++ .core_note_type = NT_PRFPREG,
++ .n = sizeof(struct user_i387_struct) / sizeof(u32),
++ .size = sizeof(u32), .align = sizeof(u32),
++ .active = fpregs_active, .get = fpregs_get, .set = fpregs_set
++ },
++ [REGSET_XFP] = {
++ .core_note_type = NT_PRXFPREG,
++ .n = sizeof(struct user_i387_struct) / sizeof(u32),
++ .size = sizeof(u32), .align = sizeof(u32),
++ .active = xfpregs_active, .get = xfpregs_get, .set = xfpregs_set
++ },
++ [REGSET_TLS] = {
++ .core_note_type = NT_386_TLS,
++ .n = GDT_ENTRY_TLS_ENTRIES, .bias = GDT_ENTRY_TLS_MIN,
++ .size = sizeof(struct user_desc),
++ .align = sizeof(struct user_desc),
++ .active = regset_tls_active,
++ .get = regset_tls_get, .set = regset_tls_set
++ },
++};
++
++static const struct user_regset_view user_x86_32_view = {
++ .name = "i386", .e_machine = EM_386,
++ .regsets = x86_32_regsets, .n = ARRAY_SIZE(x86_32_regsets)
++};
++#endif
++
++const struct user_regset_view *task_user_regset_view(struct task_struct *task)
++{
++#ifdef CONFIG_IA32_EMULATION
++ if (test_tsk_thread_flag(task, TIF_IA32))
++#endif
++#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
++ return &user_x86_32_view;
++#endif
++#ifdef CONFIG_X86_64
++ return &user_x86_64_view;
++#endif
++}
++
++#ifdef CONFIG_X86_32
++
++void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code)
++{
++ struct siginfo info;
++
++ tsk->thread.trap_no = 1;
++ tsk->thread.error_code = error_code;
++
++ memset(&info, 0, sizeof(info));
++ info.si_signo = SIGTRAP;
++ info.si_code = TRAP_BRKPT;
++
++ /* User-mode ip? */
++ info.si_addr = user_mode_vm(regs) ? (void __user *) regs->ip : NULL;
++
++ /* Send us the fake SIGTRAP */
++ force_sig_info(SIGTRAP, &info, tsk);
++}
++
++/* notification of system call entry/exit
++ * - triggered by current->work.syscall_trace
++ */
++__attribute__((regparm(3)))
++int do_syscall_trace(struct pt_regs *regs, int entryexit)
++{
++ int is_sysemu = test_thread_flag(TIF_SYSCALL_EMU);
++ /*
++ * With TIF_SYSCALL_EMU set we want to ignore TIF_SINGLESTEP for syscall
++ * interception
++ */
++ int is_singlestep = !is_sysemu && test_thread_flag(TIF_SINGLESTEP);
++ int ret = 0;
++
++ /* do the secure computing check first */
++ if (!entryexit)
++ secure_computing(regs->orig_ax);
++
++ if (unlikely(current->audit_context)) {
++ if (entryexit)
++ audit_syscall_exit(AUDITSC_RESULT(regs->ax),
++ regs->ax);
++ /* Debug traps, when using PTRACE_SINGLESTEP, must be sent only
++ * on the syscall exit path. Normally, when TIF_SYSCALL_AUDIT is
++ * not used, entry.S will call us only on syscall exit, not
++ * entry; so when TIF_SYSCALL_AUDIT is used we must avoid
++ * calling send_sigtrap() on syscall entry.
++ *
++ * Note that when PTRACE_SYSEMU_SINGLESTEP is used,
++ * is_singlestep is false, despite his name, so we will still do
++ * the correct thing.
++ */
++ else if (is_singlestep)
++ goto out;
++ }
++
++ if (!(current->ptrace & PT_PTRACED))
++ goto out;
++
++ /* If a process stops on the 1st tracepoint with SYSCALL_TRACE
++ * and then is resumed with SYSEMU_SINGLESTEP, it will come in
++ * here. We have to check this and return */
++ if (is_sysemu && entryexit)
++ return 0;
++
++ /* Fake a debug trap */
++ if (is_singlestep)
++ send_sigtrap(current, regs, 0);
++
++ if (!test_thread_flag(TIF_SYSCALL_TRACE) && !is_sysemu)
++ goto out;
++
++ /* the 0x80 provides a way for the tracing parent to distinguish
++ between a syscall stop and SIGTRAP delivery */
++ /* Note that the debugger could change the result of test_thread_flag!*/
++ ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD) ? 0x80:0));
++
++ /*
++ * this isn't the same as continuing with a signal, but it will do
++ * for normal use. strace only continues with a signal if the
++ * stopping signal is not SIGTRAP. -brl
++ */
++ if (current->exit_code) {
++ send_sig(current->exit_code, current, 1);
++ current->exit_code = 0;
++ }
++ ret = is_sysemu;
++out:
++ if (unlikely(current->audit_context) && !entryexit)
++ audit_syscall_entry(AUDIT_ARCH_I386, regs->orig_ax,
++ regs->bx, regs->cx, regs->dx, regs->si);
++ if (ret == 0)
++ return 0;
++
++ regs->orig_ax = -1; /* force skip of syscall restarting */
++ if (unlikely(current->audit_context))
++ audit_syscall_exit(AUDITSC_RESULT(regs->ax), regs->ax);
++ return 1;
++}
++
++#else /* CONFIG_X86_64 */
++
++static void syscall_trace(struct pt_regs *regs)
++{
++
++#if 0
++ printk("trace %s ip %lx sp %lx ax %d origrax %d caller %lx tiflags %x ptrace %x\n",
++ current->comm,
++ regs->ip, regs->sp, regs->ax, regs->orig_ax, __builtin_return_address(0),
++ current_thread_info()->flags, current->ptrace);
++#endif
++
++ ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
++ ? 0x80 : 0));
++ /*
++ * this isn't the same as continuing with a signal, but it will do
++ * for normal use. strace only continues with a signal if the
++ * stopping signal is not SIGTRAP. -brl
++ */
++ if (current->exit_code) {
++ send_sig(current->exit_code, current, 1);
++ current->exit_code = 0;
++ }
++}
++
++asmlinkage void syscall_trace_enter(struct pt_regs *regs)
++{
++ /* do the secure computing check first */
++ secure_computing(regs->orig_ax);
++
++ if (test_thread_flag(TIF_SYSCALL_TRACE)
++ && (current->ptrace & PT_PTRACED))
++ syscall_trace(regs);
++
++ if (unlikely(current->audit_context)) {
++ if (test_thread_flag(TIF_IA32)) {
++ audit_syscall_entry(AUDIT_ARCH_I386,
++ regs->orig_ax,
++ regs->bx, regs->cx,
++ regs->dx, regs->si);
++ } else {
++ audit_syscall_entry(AUDIT_ARCH_X86_64,
++ regs->orig_ax,
++ regs->di, regs->si,
++ regs->dx, regs->r10);
++ }
++ }
++}
++
++asmlinkage void syscall_trace_leave(struct pt_regs *regs)
++{
++ if (unlikely(current->audit_context))
++ audit_syscall_exit(AUDITSC_RESULT(regs->ax), regs->ax);
++
++ if ((test_thread_flag(TIF_SYSCALL_TRACE)
++ || test_thread_flag(TIF_SINGLESTEP))
++ && (current->ptrace & PT_PTRACED))
++ syscall_trace(regs);
++}
++
++#endif /* CONFIG_X86_32 */
+diff --git a/arch/x86/kernel/ptrace_32.c b/arch/x86/kernel/ptrace_32.c
deleted file mode 100644
-index 8b91994..0000000
---- a/block/ll_rw_blk.c
+index ff5431c..0000000
+--- a/arch/x86/kernel/ptrace_32.c
+++ /dev/null
-@@ -1,4214 +0,0 @@
+@@ -1,717 +0,0 @@
+-/* By Ross Biro 1/23/92 */
-/*
-- * Copyright (C) 1991, 1992 Linus Torvalds
-- * Copyright (C) 1994, Karl Keyte: Added support for disk statistics
-- * Elevator latency, (C) 2000 Andrea Arcangeli <andrea at suse.de> SuSE
-- * Queue request tables / lock, selectable elevator, Jens Axboe <axboe at suse.de>
-- * kernel-doc documentation started by NeilBrown <neilb at cse.unsw.edu.au> - July2000
-- * bio rewrite, highmem i/o, etc, Jens Axboe <axboe at suse.de> - may 2001
+- * Pentium III FXSR, SSE support
+- * Gareth Hughes <gareth at valinux.com>, May 2000
- */
-
--/*
-- * This handles all read/write requests to block devices
-- */
-#include <linux/kernel.h>
--#include <linux/module.h>
--#include <linux/backing-dev.h>
--#include <linux/bio.h>
--#include <linux/blkdev.h>
--#include <linux/highmem.h>
+-#include <linux/sched.h>
-#include <linux/mm.h>
--#include <linux/kernel_stat.h>
--#include <linux/string.h>
--#include <linux/init.h>
--#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
--#include <linux/completion.h>
--#include <linux/slab.h>
--#include <linux/swap.h>
--#include <linux/writeback.h>
--#include <linux/task_io_accounting_ops.h>
--#include <linux/interrupt.h>
--#include <linux/cpu.h>
--#include <linux/blktrace_api.h>
--#include <linux/fault-inject.h>
--#include <linux/scatterlist.h>
--
--/*
-- * for max sense size
-- */
--#include <scsi/scsi_cmnd.h>
+-#include <linux/smp.h>
+-#include <linux/errno.h>
+-#include <linux/ptrace.h>
+-#include <linux/user.h>
+-#include <linux/security.h>
+-#include <linux/audit.h>
+-#include <linux/seccomp.h>
+-#include <linux/signal.h>
-
--static void blk_unplug_work(struct work_struct *work);
--static void blk_unplug_timeout(unsigned long data);
--static void drive_stat_acct(struct request *rq, int new_io);
--static void init_request_from_bio(struct request *req, struct bio *bio);
--static int __make_request(struct request_queue *q, struct bio *bio);
--static struct io_context *current_io_context(gfp_t gfp_flags, int node);
--static void blk_recalc_rq_segments(struct request *rq);
--static void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-- struct bio *bio);
+-#include <asm/uaccess.h>
+-#include <asm/pgtable.h>
+-#include <asm/system.h>
+-#include <asm/processor.h>
+-#include <asm/i387.h>
+-#include <asm/debugreg.h>
+-#include <asm/ldt.h>
+-#include <asm/desc.h>
-
-/*
-- * For the allocated request tables
+- * does not yet catch signals sent when the child dies.
+- * in exit.c or in signal.c.
- */
--static struct kmem_cache *request_cachep;
-
-/*
-- * For queue allocation
+- * Determines which flags the user has access to [1 = access, 0 = no access].
+- * Prohibits changing ID(21), VIP(20), VIF(19), VM(17), NT(14), IOPL(12-13), IF(9).
+- * Also masks reserved bits (31-22, 15, 5, 3, 1).
- */
--static struct kmem_cache *requestq_cachep;
+-#define FLAG_MASK 0x00050dd5
-
--/*
-- * For io context allocations
-- */
--static struct kmem_cache *iocontext_cachep;
+-/* set's the trap flag. */
+-#define TRAP_FLAG 0x100
-
-/*
-- * Controlling structure to kblockd
+- * Offset of eflags on child stack..
- */
--static struct workqueue_struct *kblockd_workqueue;
--
--unsigned long blk_max_low_pfn, blk_max_pfn;
+-#define EFL_OFFSET offsetof(struct pt_regs, eflags)
-
--EXPORT_SYMBOL(blk_max_low_pfn);
--EXPORT_SYMBOL(blk_max_pfn);
--
--static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
--
--/* Amount of time in which a process may batch requests */
--#define BLK_BATCH_TIME (HZ/50UL)
--
--/* Number of requests a "batching" process may submit */
--#define BLK_BATCH_REQ 32
--
--/*
-- * Return the threshold (number of used requests) at which the queue is
-- * considered to be congested. It include a little hysteresis to keep the
-- * context switch rate down.
-- */
--static inline int queue_congestion_on_threshold(struct request_queue *q)
+-static inline struct pt_regs *get_child_regs(struct task_struct *task)
-{
-- return q->nr_congestion_on;
+- void *stack_top = (void *)task->thread.esp0;
+- return stack_top - sizeof(struct pt_regs);
-}
-
-/*
-- * The threshold at which a queue is considered to be uncongested
-- */
--static inline int queue_congestion_off_threshold(struct request_queue *q)
--{
-- return q->nr_congestion_off;
--}
--
--static void blk_queue_congestion_threshold(struct request_queue *q)
+- * This routine will get a word off of the processes privileged stack.
+- * the offset is bytes into the pt_regs structure on the stack.
+- * This routine assumes that all the privileged stacks are in our
+- * data space.
+- */
+-static inline int get_stack_long(struct task_struct *task, int offset)
-{
-- int nr;
--
-- nr = q->nr_requests - (q->nr_requests / 8) + 1;
-- if (nr > q->nr_requests)
-- nr = q->nr_requests;
-- q->nr_congestion_on = nr;
+- unsigned char *stack;
-
-- nr = q->nr_requests - (q->nr_requests / 8) - (q->nr_requests / 16) - 1;
-- if (nr < 1)
-- nr = 1;
-- q->nr_congestion_off = nr;
+- stack = (unsigned char *)task->thread.esp0 - sizeof(struct pt_regs);
+- stack += offset;
+- return (*((int *)stack));
-}
-
--/**
-- * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
-- * @bdev: device
-- *
-- * Locates the passed device's request queue and returns the address of its
-- * backing_dev_info
-- *
-- * Will return NULL if the request queue cannot be located.
+-/*
+- * This routine will put a word on the processes privileged stack.
+- * the offset is bytes into the pt_regs structure on the stack.
+- * This routine assumes that all the privileged stacks are in our
+- * data space.
- */
--struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
+-static inline int put_stack_long(struct task_struct *task, int offset,
+- unsigned long data)
-{
-- struct backing_dev_info *ret = NULL;
-- struct request_queue *q = bdev_get_queue(bdev);
+- unsigned char * stack;
-
-- if (q)
-- ret = &q->backing_dev_info;
-- return ret;
+- stack = (unsigned char *)task->thread.esp0 - sizeof(struct pt_regs);
+- stack += offset;
+- *(unsigned long *) stack = data;
+- return 0;
-}
--EXPORT_SYMBOL(blk_get_backing_dev_info);
-
--/**
-- * blk_queue_prep_rq - set a prepare_request function for queue
-- * @q: queue
-- * @pfn: prepare_request function
-- *
-- * It's possible for a queue to register a prepare_request callback which
-- * is invoked before the request is handed to the request_fn. The goal of
-- * the function is to prepare a request for I/O, it can be used to build a
-- * cdb from the request data for instance.
-- *
-- */
--void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn)
+-static int putreg(struct task_struct *child,
+- unsigned long regno, unsigned long value)
-{
-- q->prep_rq_fn = pfn;
+- switch (regno >> 2) {
+- case GS:
+- if (value && (value & 3) != 3)
+- return -EIO;
+- child->thread.gs = value;
+- return 0;
+- case DS:
+- case ES:
+- case FS:
+- if (value && (value & 3) != 3)
+- return -EIO;
+- value &= 0xffff;
+- break;
+- case SS:
+- case CS:
+- if ((value & 3) != 3)
+- return -EIO;
+- value &= 0xffff;
+- break;
+- case EFL:
+- value &= FLAG_MASK;
+- value |= get_stack_long(child, EFL_OFFSET) & ~FLAG_MASK;
+- break;
+- }
+- if (regno > FS*4)
+- regno -= 1*4;
+- put_stack_long(child, regno, value);
+- return 0;
-}
-
--EXPORT_SYMBOL(blk_queue_prep_rq);
--
--/**
-- * blk_queue_merge_bvec - set a merge_bvec function for queue
-- * @q: queue
-- * @mbfn: merge_bvec_fn
-- *
-- * Usually queues have static limitations on the max sectors or segments that
-- * we can put in a request. Stacking drivers may have some settings that
-- * are dynamic, and thus we have to query the queue whether it is ok to
-- * add a new bio_vec to a bio at a given offset or not. If the block device
-- * has such limitations, it needs to register a merge_bvec_fn to control
-- * the size of bio's sent to it. Note that a block device *must* allow a
-- * single page to be added to an empty bio. The block device driver may want
-- * to use the bio_split() function to deal with these bio's. By default
-- * no merge_bvec_fn is defined for a queue, and only the fixed limits are
-- * honored.
-- */
--void blk_queue_merge_bvec(struct request_queue *q, merge_bvec_fn *mbfn)
+-static unsigned long getreg(struct task_struct *child,
+- unsigned long regno)
-{
-- q->merge_bvec_fn = mbfn;
+- unsigned long retval = ~0UL;
+-
+- switch (regno >> 2) {
+- case GS:
+- retval = child->thread.gs;
+- break;
+- case DS:
+- case ES:
+- case FS:
+- case SS:
+- case CS:
+- retval = 0xffff;
+- /* fall through */
+- default:
+- if (regno > FS*4)
+- regno -= 1*4;
+- retval &= get_stack_long(child, regno);
+- }
+- return retval;
-}
-
--EXPORT_SYMBOL(blk_queue_merge_bvec);
+-#define LDT_SEGMENT 4
-
--void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn)
+-static unsigned long convert_eip_to_linear(struct task_struct *child, struct pt_regs *regs)
-{
-- q->softirq_done_fn = fn;
--}
+- unsigned long addr, seg;
-
--EXPORT_SYMBOL(blk_queue_softirq_done);
+- addr = regs->eip;
+- seg = regs->xcs & 0xffff;
+- if (regs->eflags & VM_MASK) {
+- addr = (addr & 0xffff) + (seg << 4);
+- return addr;
+- }
-
--/**
-- * blk_queue_make_request - define an alternate make_request function for a device
-- * @q: the request queue for the device to be affected
-- * @mfn: the alternate make_request function
-- *
-- * Description:
-- * The normal way for &struct bios to be passed to a device
-- * driver is for them to be collected into requests on a request
-- * queue, and then to allow the device driver to select requests
-- * off that queue when it is ready. This works well for many block
-- * devices. However some block devices (typically virtual devices
-- * such as md or lvm) do not benefit from the processing on the
-- * request queue, and are served best by having the requests passed
-- * directly to them. This can be achieved by providing a function
-- * to blk_queue_make_request().
-- *
-- * Caveat:
-- * The driver that does this *must* be able to deal appropriately
-- * with buffers in "highmemory". This can be accomplished by either calling
-- * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
-- * blk_queue_bounce() to create a buffer in normal memory.
-- **/
--void blk_queue_make_request(struct request_queue * q, make_request_fn * mfn)
--{
- /*
-- * set defaults
-- */
-- q->nr_requests = BLKDEV_MAX_RQ;
-- blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
-- blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
-- q->make_request_fn = mfn;
-- q->backing_dev_info.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
-- q->backing_dev_info.state = 0;
-- q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
-- blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
-- blk_queue_hardsect_size(q, 512);
-- blk_queue_dma_alignment(q, 511);
-- blk_queue_congestion_threshold(q);
-- q->nr_batching = BLK_BATCH_REQ;
+- * We'll assume that the code segments in the GDT
+- * are all zero-based. That is largely true: the
+- * TLS segments are used for data, and the PNPBIOS
+- * and APM bios ones we just ignore here.
+- */
+- if (seg & LDT_SEGMENT) {
+- u32 *desc;
+- unsigned long base;
+-
+- seg &= ~7UL;
+-
+- mutex_lock(&child->mm->context.lock);
+- if (unlikely((seg >> 3) >= child->mm->context.size))
+- addr = -1L; /* bogus selector, access would fault */
+- else {
+- desc = child->mm->context.ldt + seg;
+- base = ((desc[0] >> 16) |
+- ((desc[1] & 0xff) << 16) |
+- (desc[1] & 0xff000000));
+-
+- /* 16-bit code segment? */
+- if (!((desc[1] >> 22) & 1))
+- addr &= 0xffff;
+- addr += base;
+- }
+- mutex_unlock(&child->mm->context.lock);
+- }
+- return addr;
+-}
-
-- q->unplug_thresh = 4; /* hmm */
-- q->unplug_delay = (3 * HZ) / 1000; /* 3 milliseconds */
-- if (q->unplug_delay == 0)
-- q->unplug_delay = 1;
+-static inline int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
+-{
+- int i, copied;
+- unsigned char opcode[15];
+- unsigned long addr = convert_eip_to_linear(child, regs);
+-
+- copied = access_process_vm(child, addr, opcode, sizeof(opcode), 0);
+- for (i = 0; i < copied; i++) {
+- switch (opcode[i]) {
+- /* popf and iret */
+- case 0x9d: case 0xcf:
+- return 1;
+- /* opcode and address size prefixes */
+- case 0x66: case 0x67:
+- continue;
+- /* irrelevant prefixes (segment overrides and repeats) */
+- case 0x26: case 0x2e:
+- case 0x36: case 0x3e:
+- case 0x64: case 0x65:
+- case 0xf0: case 0xf2: case 0xf3:
+- continue;
-
-- INIT_WORK(&q->unplug_work, blk_unplug_work);
+- /*
+- * pushf: NOTE! We should probably not let
+- * the user see the TF bit being set. But
+- * it's more pain than it's worth to avoid
+- * it, and a debugger could emulate this
+- * all in user space if it _really_ cares.
+- */
+- case 0x9c:
+- default:
+- return 0;
+- }
+- }
+- return 0;
+-}
-
-- q->unplug_timer.function = blk_unplug_timeout;
-- q->unplug_timer.data = (unsigned long)q;
+-static void set_singlestep(struct task_struct *child)
+-{
+- struct pt_regs *regs = get_child_regs(child);
-
- /*
-- * by default assume old behaviour and bounce for any highmem page
+- * Always set TIF_SINGLESTEP - this guarantees that
+- * we single-step system calls etc.. This will also
+- * cause us to set TF when returning to user mode.
- */
-- blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
--}
+- set_tsk_thread_flag(child, TIF_SINGLESTEP);
-
--EXPORT_SYMBOL(blk_queue_make_request);
+- /*
+- * If TF was already set, don't do anything else
+- */
+- if (regs->eflags & TRAP_FLAG)
+- return;
-
--static void rq_init(struct request_queue *q, struct request *rq)
--{
-- INIT_LIST_HEAD(&rq->queuelist);
-- INIT_LIST_HEAD(&rq->donelist);
+- /* Set TF on the kernel stack.. */
+- regs->eflags |= TRAP_FLAG;
-
-- rq->errors = 0;
-- rq->bio = rq->biotail = NULL;
-- INIT_HLIST_NODE(&rq->hash);
-- RB_CLEAR_NODE(&rq->rb_node);
-- rq->ioprio = 0;
-- rq->buffer = NULL;
-- rq->ref_count = 1;
-- rq->q = q;
-- rq->special = NULL;
-- rq->data_len = 0;
-- rq->data = NULL;
-- rq->nr_phys_segments = 0;
-- rq->sense = NULL;
-- rq->end_io = NULL;
-- rq->end_io_data = NULL;
-- rq->completion_data = NULL;
-- rq->next_rq = NULL;
+- /*
+- * ..but if TF is changed by the instruction we will trace,
+- * don't mark it as being "us" that set it, so that we
+- * won't clear it by hand later.
+- */
+- if (is_setting_trap_flag(child, regs))
+- return;
+-
+- child->ptrace |= PT_DTRACE;
-}
-
--/**
-- * blk_queue_ordered - does this queue support ordered writes
-- * @q: the request queue
-- * @ordered: one of QUEUE_ORDERED_*
-- * @prepare_flush_fn: rq setup helper for cache flush ordered writes
-- *
-- * Description:
-- * For journalled file systems, doing ordered writes on a commit
-- * block instead of explicitly doing wait_on_buffer (which is bad
-- * for performance) can be a big win. Block drivers supporting this
-- * feature should call this function and indicate so.
-- *
-- **/
--int blk_queue_ordered(struct request_queue *q, unsigned ordered,
-- prepare_flush_fn *prepare_flush_fn)
+-static void clear_singlestep(struct task_struct *child)
-{
-- if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) &&
-- prepare_flush_fn == NULL) {
-- printk(KERN_ERR "blk_queue_ordered: prepare_flush_fn required\n");
-- return -EINVAL;
-- }
+- /* Always clear TIF_SINGLESTEP... */
+- clear_tsk_thread_flag(child, TIF_SINGLESTEP);
-
-- if (ordered != QUEUE_ORDERED_NONE &&
-- ordered != QUEUE_ORDERED_DRAIN &&
-- ordered != QUEUE_ORDERED_DRAIN_FLUSH &&
-- ordered != QUEUE_ORDERED_DRAIN_FUA &&
-- ordered != QUEUE_ORDERED_TAG &&
-- ordered != QUEUE_ORDERED_TAG_FLUSH &&
-- ordered != QUEUE_ORDERED_TAG_FUA) {
-- printk(KERN_ERR "blk_queue_ordered: bad value %d\n", ordered);
-- return -EINVAL;
+- /* But touch TF only if it was set by us.. */
+- if (child->ptrace & PT_DTRACE) {
+- struct pt_regs *regs = get_child_regs(child);
+- regs->eflags &= ~TRAP_FLAG;
+- child->ptrace &= ~PT_DTRACE;
- }
--
-- q->ordered = ordered;
-- q->next_ordered = ordered;
-- q->prepare_flush_fn = prepare_flush_fn;
--
-- return 0;
-}
-
--EXPORT_SYMBOL(blk_queue_ordered);
--
-/*
-- * Cache flushing for ordered writes handling
+- * Called by kernel/ptrace.c when detaching..
+- *
+- * Make sure the single step bit is not set.
- */
--inline unsigned blk_ordered_cur_seq(struct request_queue *q)
--{
-- if (!q->ordseq)
-- return 0;
-- return 1 << ffz(q->ordseq);
+-void ptrace_disable(struct task_struct *child)
+-{
+- clear_singlestep(child);
+- clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
-}
-
--unsigned blk_ordered_req_seq(struct request *rq)
+-/*
+- * Perform get_thread_area on behalf of the traced child.
+- */
+-static int
+-ptrace_get_thread_area(struct task_struct *child,
+- int idx, struct user_desc __user *user_desc)
-{
-- struct request_queue *q = rq->q;
--
-- BUG_ON(q->ordseq == 0);
--
-- if (rq == &q->pre_flush_rq)
-- return QUEUE_ORDSEQ_PREFLUSH;
-- if (rq == &q->bar_rq)
-- return QUEUE_ORDSEQ_BAR;
-- if (rq == &q->post_flush_rq)
-- return QUEUE_ORDSEQ_POSTFLUSH;
+- struct user_desc info;
+- struct desc_struct *desc;
-
-- /*
-- * !fs requests don't need to follow barrier ordering. Always
-- * put them at the front. This fixes the following deadlock.
-- *
-- * http://thread.gmane.org/gmane.linux.kernel/537473
-- */
-- if (!blk_fs_request(rq))
-- return QUEUE_ORDSEQ_DRAIN;
--
-- if ((rq->cmd_flags & REQ_ORDERED_COLOR) ==
-- (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR))
-- return QUEUE_ORDSEQ_DRAIN;
-- else
-- return QUEUE_ORDSEQ_DONE;
--}
+-/*
+- * Get the current Thread-Local Storage area:
+- */
-
--void blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
--{
-- struct request *rq;
-- int uptodate;
+-#define GET_BASE(desc) ( \
+- (((desc)->a >> 16) & 0x0000ffff) | \
+- (((desc)->b << 16) & 0x00ff0000) | \
+- ( (desc)->b & 0xff000000) )
-
-- if (error && !q->orderr)
-- q->orderr = error;
+-#define GET_LIMIT(desc) ( \
+- ((desc)->a & 0x0ffff) | \
+- ((desc)->b & 0xf0000) )
-
-- BUG_ON(q->ordseq & seq);
-- q->ordseq |= seq;
+-#define GET_32BIT(desc) (((desc)->b >> 22) & 1)
+-#define GET_CONTENTS(desc) (((desc)->b >> 10) & 3)
+-#define GET_WRITABLE(desc) (((desc)->b >> 9) & 1)
+-#define GET_LIMIT_PAGES(desc) (((desc)->b >> 23) & 1)
+-#define GET_PRESENT(desc) (((desc)->b >> 15) & 1)
+-#define GET_USEABLE(desc) (((desc)->b >> 20) & 1)
-
-- if (blk_ordered_cur_seq(q) != QUEUE_ORDSEQ_DONE)
-- return;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
-
-- /*
-- * Okay, sequence complete.
-- */
-- uptodate = 1;
-- if (q->orderr)
-- uptodate = q->orderr;
+- desc = child->thread.tls_array + idx - GDT_ENTRY_TLS_MIN;
-
-- q->ordseq = 0;
-- rq = q->orig_bar_rq;
+- info.entry_number = idx;
+- info.base_addr = GET_BASE(desc);
+- info.limit = GET_LIMIT(desc);
+- info.seg_32bit = GET_32BIT(desc);
+- info.contents = GET_CONTENTS(desc);
+- info.read_exec_only = !GET_WRITABLE(desc);
+- info.limit_in_pages = GET_LIMIT_PAGES(desc);
+- info.seg_not_present = !GET_PRESENT(desc);
+- info.useable = GET_USEABLE(desc);
-
-- end_that_request_first(rq, uptodate, rq->hard_nr_sectors);
-- end_that_request_last(rq, uptodate);
--}
+- if (copy_to_user(user_desc, &info, sizeof(info)))
+- return -EFAULT;
-
--static void pre_flush_end_io(struct request *rq, int error)
--{
-- elv_completed_request(rq->q, rq);
-- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_PREFLUSH, error);
+- return 0;
-}
-
--static void bar_end_io(struct request *rq, int error)
+-/*
+- * Perform set_thread_area on behalf of the traced child.
+- */
+-static int
+-ptrace_set_thread_area(struct task_struct *child,
+- int idx, struct user_desc __user *user_desc)
-{
-- elv_completed_request(rq->q, rq);
-- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_BAR, error);
--}
+- struct user_desc info;
+- struct desc_struct *desc;
-
--static void post_flush_end_io(struct request *rq, int error)
--{
-- elv_completed_request(rq->q, rq);
-- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error);
--}
+- if (copy_from_user(&info, user_desc, sizeof(info)))
+- return -EFAULT;
-
--static void queue_flush(struct request_queue *q, unsigned which)
--{
-- struct request *rq;
-- rq_end_io_fn *end_io;
+- if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+- return -EINVAL;
-
-- if (which == QUEUE_ORDERED_PREFLUSH) {
-- rq = &q->pre_flush_rq;
-- end_io = pre_flush_end_io;
+- desc = child->thread.tls_array + idx - GDT_ENTRY_TLS_MIN;
+- if (LDT_empty(&info)) {
+- desc->a = 0;
+- desc->b = 0;
- } else {
-- rq = &q->post_flush_rq;
-- end_io = post_flush_end_io;
+- desc->a = LDT_entry_a(&info);
+- desc->b = LDT_entry_b(&info);
- }
-
-- rq->cmd_flags = REQ_HARDBARRIER;
-- rq_init(q, rq);
-- rq->elevator_private = NULL;
-- rq->elevator_private2 = NULL;
-- rq->rq_disk = q->bar_rq.rq_disk;
-- rq->end_io = end_io;
-- q->prepare_flush_fn(q, rq);
--
-- elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
+- return 0;
-}
-
--static inline struct request *start_ordered(struct request_queue *q,
-- struct request *rq)
+-long arch_ptrace(struct task_struct *child, long request, long addr, long data)
-{
-- q->orderr = 0;
-- q->ordered = q->next_ordered;
-- q->ordseq |= QUEUE_ORDSEQ_STARTED;
--
-- /*
-- * Prep proxy barrier request.
-- */
-- blkdev_dequeue_request(rq);
-- q->orig_bar_rq = rq;
-- rq = &q->bar_rq;
-- rq->cmd_flags = 0;
-- rq_init(q, rq);
-- if (bio_data_dir(q->orig_bar_rq->bio) == WRITE)
-- rq->cmd_flags |= REQ_RW;
-- if (q->ordered & QUEUE_ORDERED_FUA)
-- rq->cmd_flags |= REQ_FUA;
-- rq->elevator_private = NULL;
-- rq->elevator_private2 = NULL;
-- init_request_from_bio(rq, q->orig_bar_rq->bio);
-- rq->end_io = bar_end_io;
+- struct user * dummy = NULL;
+- int i, ret;
+- unsigned long __user *datap = (unsigned long __user *)data;
-
-- /*
-- * Queue ordered sequence. As we stack them at the head, we
-- * need to queue in reverse order. Note that we rely on that
-- * no fs request uses ELEVATOR_INSERT_FRONT and thus no fs
-- * request gets inbetween ordered sequence. If this request is
-- * an empty barrier, we don't need to do a postflush ever since
-- * there will be no data written between the pre and post flush.
-- * Hence a single flush will suffice.
-- */
-- if ((q->ordered & QUEUE_ORDERED_POSTFLUSH) && !blk_empty_barrier(rq))
-- queue_flush(q, QUEUE_ORDERED_POSTFLUSH);
-- else
-- q->ordseq |= QUEUE_ORDSEQ_POSTFLUSH;
+- switch (request) {
+- /* when I and D space are separate, these will need to be fixed. */
+- case PTRACE_PEEKTEXT: /* read word at location addr. */
+- case PTRACE_PEEKDATA:
+- ret = generic_ptrace_peekdata(child, addr, data);
+- break;
-
-- elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
+- /* read the word at location addr in the USER area. */
+- case PTRACE_PEEKUSR: {
+- unsigned long tmp;
-
-- if (q->ordered & QUEUE_ORDERED_PREFLUSH) {
-- queue_flush(q, QUEUE_ORDERED_PREFLUSH);
-- rq = &q->pre_flush_rq;
-- } else
-- q->ordseq |= QUEUE_ORDSEQ_PREFLUSH;
+- ret = -EIO;
+- if ((addr & 3) || addr < 0 ||
+- addr > sizeof(struct user) - 3)
+- break;
-
-- if ((q->ordered & QUEUE_ORDERED_TAG) || q->in_flight == 0)
-- q->ordseq |= QUEUE_ORDSEQ_DRAIN;
-- else
-- rq = NULL;
+- tmp = 0; /* Default return condition */
+- if(addr < FRAME_SIZE*sizeof(long))
+- tmp = getreg(child, addr);
+- if(addr >= (long) &dummy->u_debugreg[0] &&
+- addr <= (long) &dummy->u_debugreg[7]){
+- addr -= (long) &dummy->u_debugreg[0];
+- addr = addr >> 2;
+- tmp = child->thread.debugreg[addr];
+- }
+- ret = put_user(tmp, datap);
+- break;
+- }
-
-- return rq;
--}
+- /* when I and D space are separate, this will have to be fixed. */
+- case PTRACE_POKETEXT: /* write the word at location addr. */
+- case PTRACE_POKEDATA:
+- ret = generic_ptrace_pokedata(child, addr, data);
+- break;
-
--int blk_do_ordered(struct request_queue *q, struct request **rqp)
--{
-- struct request *rq = *rqp;
-- const int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq);
+- case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
+- ret = -EIO;
+- if ((addr & 3) || addr < 0 ||
+- addr > sizeof(struct user) - 3)
+- break;
-
-- if (!q->ordseq) {
-- if (!is_barrier)
-- return 1;
+- if (addr < FRAME_SIZE*sizeof(long)) {
+- ret = putreg(child, addr, data);
+- break;
+- }
+- /* We need to be very careful here. We implicitly
+- want to modify a portion of the task_struct, and we
+- have to be selective about what portions we allow someone
+- to modify. */
+-
+- ret = -EIO;
+- if(addr >= (long) &dummy->u_debugreg[0] &&
+- addr <= (long) &dummy->u_debugreg[7]){
+-
+- if(addr == (long) &dummy->u_debugreg[4]) break;
+- if(addr == (long) &dummy->u_debugreg[5]) break;
+- if(addr < (long) &dummy->u_debugreg[4] &&
+- ((unsigned long) data) >= TASK_SIZE-3) break;
+-
+- /* Sanity-check data. Take one half-byte at once with
+- * check = (val >> (16 + 4*i)) & 0xf. It contains the
+- * R/Wi and LENi bits; bits 0 and 1 are R/Wi, and bits
+- * 2 and 3 are LENi. Given a list of invalid values,
+- * we do mask |= 1 << invalid_value, so that
+- * (mask >> check) & 1 is a correct test for invalid
+- * values.
+- *
+- * R/Wi contains the type of the breakpoint /
+- * watchpoint, LENi contains the length of the watched
+- * data in the watchpoint case.
+- *
+- * The invalid values are:
+- * - LENi == 0x10 (undefined), so mask |= 0x0f00.
+- * - R/Wi == 0x10 (break on I/O reads or writes), so
+- * mask |= 0x4444.
+- * - R/Wi == 0x00 && LENi != 0x00, so we have mask |=
+- * 0x1110.
+- *
+- * Finally, mask = 0x0f00 | 0x4444 | 0x1110 == 0x5f54.
+- *
+- * See the Intel Manual "System Programming Guide",
+- * 15.2.4
+- *
+- * Note that LENi == 0x10 is defined on x86_64 in long
+- * mode (i.e. even for 32-bit userspace software, but
+- * 64-bit kernel), so the x86_64 mask value is 0x5454.
+- * See the AMD manual no. 24593 (AMD64 System
+- * Programming)*/
+-
+- if(addr == (long) &dummy->u_debugreg[7]) {
+- data &= ~DR_CONTROL_RESERVED;
+- for(i=0; i<4; i++)
+- if ((0x5f54 >> ((data >> (16 + 4*i)) & 0xf)) & 1)
+- goto out_tsk;
+- if (data)
+- set_tsk_thread_flag(child, TIF_DEBUG);
+- else
+- clear_tsk_thread_flag(child, TIF_DEBUG);
+- }
+- addr -= (long) &dummy->u_debugreg;
+- addr = addr >> 2;
+- child->thread.debugreg[addr] = data;
+- ret = 0;
+- }
+- break;
-
-- if (q->next_ordered != QUEUE_ORDERED_NONE) {
-- *rqp = start_ordered(q, rq);
-- return 1;
+- case PTRACE_SYSEMU: /* continue and stop at next syscall, which will not be executed */
+- case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+- case PTRACE_CONT: /* restart after signal. */
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
+- if (request == PTRACE_SYSEMU) {
+- set_tsk_thread_flag(child, TIF_SYSCALL_EMU);
+- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- } else if (request == PTRACE_SYSCALL) {
+- set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
- } else {
-- /*
-- * This can happen when the queue switches to
-- * ORDERED_NONE while this request is on it.
-- */
-- blkdev_dequeue_request(rq);
-- end_that_request_first(rq, -EOPNOTSUPP,
-- rq->hard_nr_sectors);
-- end_that_request_last(rq, -EOPNOTSUPP);
-- *rqp = NULL;
-- return 0;
+- clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
+- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
- }
-- }
+- child->exit_code = data;
+- /* make sure the single step bit is not set. */
+- clear_singlestep(child);
+- wake_up_process(child);
+- ret = 0;
+- break;
-
-- /*
-- * Ordered sequence in progress
-- */
+-/*
+- * make the child exit. Best I can do is send it a sigkill.
+- * perhaps it should be put in the status that it wants to
+- * exit.
+- */
+- case PTRACE_KILL:
+- ret = 0;
+- if (child->exit_state == EXIT_ZOMBIE) /* already dead */
+- break;
+- child->exit_code = SIGKILL;
+- /* make sure the single step bit is not set. */
+- clear_singlestep(child);
+- wake_up_process(child);
+- break;
-
-- /* Special requests are not subject to ordering rules. */
-- if (!blk_fs_request(rq) &&
-- rq != &q->pre_flush_rq && rq != &q->post_flush_rq)
-- return 1;
+- case PTRACE_SYSEMU_SINGLESTEP: /* Same as SYSEMU, but singlestep if not syscall */
+- case PTRACE_SINGLESTEP: /* set the trap flag. */
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
-
-- if (q->ordered & QUEUE_ORDERED_TAG) {
-- /* Ordered by tag. Blocking the next barrier is enough. */
-- if (is_barrier && rq != &q->bar_rq)
-- *rqp = NULL;
-- } else {
-- /* Ordered by draining. Wait for turn. */
-- WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
-- if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
-- *rqp = NULL;
+- if (request == PTRACE_SYSEMU_SINGLESTEP)
+- set_tsk_thread_flag(child, TIF_SYSCALL_EMU);
+- else
+- clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
+-
+- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+- set_singlestep(child);
+- child->exit_code = data;
+- /* give it a chance to run. */
+- wake_up_process(child);
+- ret = 0;
+- break;
+-
+- case PTRACE_GETREGS: { /* Get all gp regs from the child. */
+- if (!access_ok(VERIFY_WRITE, datap, FRAME_SIZE*sizeof(long))) {
+- ret = -EIO;
+- break;
+- }
+- for ( i = 0; i < FRAME_SIZE*sizeof(long); i += sizeof(long) ) {
+- __put_user(getreg(child, i), datap);
+- datap++;
+- }
+- ret = 0;
+- break;
- }
-
-- return 1;
--}
+- case PTRACE_SETREGS: { /* Set all gp regs in the child. */
+- unsigned long tmp;
+- if (!access_ok(VERIFY_READ, datap, FRAME_SIZE*sizeof(long))) {
+- ret = -EIO;
+- break;
+- }
+- for ( i = 0; i < FRAME_SIZE*sizeof(long); i += sizeof(long) ) {
+- __get_user(tmp, datap);
+- putreg(child, i, tmp);
+- datap++;
+- }
+- ret = 0;
+- break;
+- }
-
--static void req_bio_endio(struct request *rq, struct bio *bio,
-- unsigned int nbytes, int error)
--{
-- struct request_queue *q = rq->q;
+- case PTRACE_GETFPREGS: { /* Get the child FPU state. */
+- if (!access_ok(VERIFY_WRITE, datap,
+- sizeof(struct user_i387_struct))) {
+- ret = -EIO;
+- break;
+- }
+- ret = 0;
+- if (!tsk_used_math(child))
+- init_fpu(child);
+- get_fpregs((struct user_i387_struct __user *)data, child);
+- break;
+- }
-
-- if (&q->bar_rq != rq) {
-- if (error)
-- clear_bit(BIO_UPTODATE, &bio->bi_flags);
-- else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
-- error = -EIO;
+- case PTRACE_SETFPREGS: { /* Set the child FPU state. */
+- if (!access_ok(VERIFY_READ, datap,
+- sizeof(struct user_i387_struct))) {
+- ret = -EIO;
+- break;
+- }
+- set_stopped_child_used_math(child);
+- set_fpregs(child, (struct user_i387_struct __user *)data);
+- ret = 0;
+- break;
+- }
-
-- if (unlikely(nbytes > bio->bi_size)) {
-- printk("%s: want %u bytes done, only %u left\n",
-- __FUNCTION__, nbytes, bio->bi_size);
-- nbytes = bio->bi_size;
+- case PTRACE_GETFPXREGS: { /* Get the child extended FPU state. */
+- if (!access_ok(VERIFY_WRITE, datap,
+- sizeof(struct user_fxsr_struct))) {
+- ret = -EIO;
+- break;
- }
+- if (!tsk_used_math(child))
+- init_fpu(child);
+- ret = get_fpxregs((struct user_fxsr_struct __user *)data, child);
+- break;
+- }
-
-- bio->bi_size -= nbytes;
-- bio->bi_sector += (nbytes >> 9);
-- if (bio->bi_size == 0)
-- bio_endio(bio, error);
-- } else {
+- case PTRACE_SETFPXREGS: { /* Set the child extended FPU state. */
+- if (!access_ok(VERIFY_READ, datap,
+- sizeof(struct user_fxsr_struct))) {
+- ret = -EIO;
+- break;
+- }
+- set_stopped_child_used_math(child);
+- ret = set_fpxregs(child, (struct user_fxsr_struct __user *)data);
+- break;
+- }
-
-- /*
-- * Okay, this is the barrier request in progress, just
-- * record the error;
-- */
-- if (error && !q->orderr)
-- q->orderr = error;
+- case PTRACE_GET_THREAD_AREA:
+- ret = ptrace_get_thread_area(child, addr,
+- (struct user_desc __user *) data);
+- break;
+-
+- case PTRACE_SET_THREAD_AREA:
+- ret = ptrace_set_thread_area(child, addr,
+- (struct user_desc __user *) data);
+- break;
+-
+- default:
+- ret = ptrace_request(child, request, addr, data);
+- break;
- }
+- out_tsk:
+- return ret;
-}
-
--/**
-- * blk_queue_bounce_limit - set bounce buffer limit for queue
-- * @q: the request queue for the device
-- * @dma_addr: bus address limit
-- *
-- * Description:
-- * Different hardware can have different requirements as to what pages
-- * it can do I/O directly to. A low level driver can call
-- * blk_queue_bounce_limit to have lower memory pages allocated as bounce
-- * buffers for doing I/O to pages residing above @page.
-- **/
--void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
+-void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code)
-{
-- unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT;
-- int dma = 0;
+- struct siginfo info;
-
-- q->bounce_gfp = GFP_NOIO;
--#if BITS_PER_LONG == 64
-- /* Assume anything <= 4GB can be handled by IOMMU.
-- Actually some IOMMUs can handle everything, but I don't
-- know of a way to test this here. */
-- if (bounce_pfn < (min_t(u64,0xffffffff,BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
-- dma = 1;
-- q->bounce_pfn = max_low_pfn;
--#else
-- if (bounce_pfn < blk_max_low_pfn)
-- dma = 1;
-- q->bounce_pfn = bounce_pfn;
--#endif
-- if (dma) {
-- init_emergency_isa_pool();
-- q->bounce_gfp = GFP_NOIO | GFP_DMA;
-- q->bounce_pfn = bounce_pfn;
-- }
--}
+- tsk->thread.trap_no = 1;
+- tsk->thread.error_code = error_code;
-
--EXPORT_SYMBOL(blk_queue_bounce_limit);
+- memset(&info, 0, sizeof(info));
+- info.si_signo = SIGTRAP;
+- info.si_code = TRAP_BRKPT;
-
--/**
-- * blk_queue_max_sectors - set max sectors for a request for this queue
-- * @q: the request queue for the device
-- * @max_sectors: max sectors in the usual 512b unit
-- *
-- * Description:
-- * Enables a low level driver to set an upper limit on the size of
-- * received requests.
-- **/
--void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
--{
-- if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
-- max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
-- printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
-- }
+- /* User-mode eip? */
+- info.si_addr = user_mode_vm(regs) ? (void __user *) regs->eip : NULL;
-
-- if (BLK_DEF_MAX_SECTORS > max_sectors)
-- q->max_hw_sectors = q->max_sectors = max_sectors;
-- else {
-- q->max_sectors = BLK_DEF_MAX_SECTORS;
-- q->max_hw_sectors = max_sectors;
-- }
+- /* Send us the fake SIGTRAP */
+- force_sig_info(SIGTRAP, &info, tsk);
-}
-
--EXPORT_SYMBOL(blk_queue_max_sectors);
--
--/**
-- * blk_queue_max_phys_segments - set max phys segments for a request for this queue
-- * @q: the request queue for the device
-- * @max_segments: max number of segments
-- *
-- * Description:
-- * Enables a low level driver to set an upper limit on the number of
-- * physical data segments in a request. This would be the largest sized
-- * scatter list the driver could handle.
-- **/
--void blk_queue_max_phys_segments(struct request_queue *q,
-- unsigned short max_segments)
+-/* notification of system call entry/exit
+- * - triggered by current->work.syscall_trace
+- */
+-__attribute__((regparm(3)))
+-int do_syscall_trace(struct pt_regs *regs, int entryexit)
-{
-- if (!max_segments) {
-- max_segments = 1;
-- printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
+- int is_sysemu = test_thread_flag(TIF_SYSCALL_EMU);
+- /*
+- * With TIF_SYSCALL_EMU set we want to ignore TIF_SINGLESTEP for syscall
+- * interception
+- */
+- int is_singlestep = !is_sysemu && test_thread_flag(TIF_SINGLESTEP);
+- int ret = 0;
+-
+- /* do the secure computing check first */
+- if (!entryexit)
+- secure_computing(regs->orig_eax);
+-
+- if (unlikely(current->audit_context)) {
+- if (entryexit)
+- audit_syscall_exit(AUDITSC_RESULT(regs->eax),
+- regs->eax);
+- /* Debug traps, when using PTRACE_SINGLESTEP, must be sent only
+- * on the syscall exit path. Normally, when TIF_SYSCALL_AUDIT is
+- * not used, entry.S will call us only on syscall exit, not
+- * entry; so when TIF_SYSCALL_AUDIT is used we must avoid
+- * calling send_sigtrap() on syscall entry.
+- *
+- * Note that when PTRACE_SYSEMU_SINGLESTEP is used,
+- * is_singlestep is false, despite his name, so we will still do
+- * the correct thing.
+- */
+- else if (is_singlestep)
+- goto out;
- }
-
-- q->max_phys_segments = max_segments;
--}
+- if (!(current->ptrace & PT_PTRACED))
+- goto out;
-
--EXPORT_SYMBOL(blk_queue_max_phys_segments);
+- /* If a process stops on the 1st tracepoint with SYSCALL_TRACE
+- * and then is resumed with SYSEMU_SINGLESTEP, it will come in
+- * here. We have to check this and return */
+- if (is_sysemu && entryexit)
+- return 0;
-
--/**
-- * blk_queue_max_hw_segments - set max hw segments for a request for this queue
-- * @q: the request queue for the device
-- * @max_segments: max number of segments
-- *
-- * Description:
-- * Enables a low level driver to set an upper limit on the number of
-- * hw data segments in a request. This would be the largest number of
-- * address/length pairs the host adapter can actually give as once
-- * to the device.
-- **/
--void blk_queue_max_hw_segments(struct request_queue *q,
-- unsigned short max_segments)
--{
-- if (!max_segments) {
-- max_segments = 1;
-- printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
-- }
+- /* Fake a debug trap */
+- if (is_singlestep)
+- send_sigtrap(current, regs, 0);
-
-- q->max_hw_segments = max_segments;
--}
+- if (!test_thread_flag(TIF_SYSCALL_TRACE) && !is_sysemu)
+- goto out;
-
--EXPORT_SYMBOL(blk_queue_max_hw_segments);
+- /* the 0x80 provides a way for the tracing parent to distinguish
+- between a syscall stop and SIGTRAP delivery */
+- /* Note that the debugger could change the result of test_thread_flag!*/
+- ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD) ? 0x80:0));
-
--/**
-- * blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
-- * @q: the request queue for the device
-- * @max_size: max size of segment in bytes
-- *
-- * Description:
-- * Enables a low level driver to set an upper limit on the size of a
-- * coalesced segment
-- **/
--void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
--{
-- if (max_size < PAGE_CACHE_SIZE) {
-- max_size = PAGE_CACHE_SIZE;
-- printk("%s: set to minimum %d\n", __FUNCTION__, max_size);
+- /*
+- * this isn't the same as continuing with a signal, but it will do
+- * for normal use. strace only continues with a signal if the
+- * stopping signal is not SIGTRAP. -brl
+- */
+- if (current->exit_code) {
+- send_sig(current->exit_code, current, 1);
+- current->exit_code = 0;
- }
+- ret = is_sysemu;
+-out:
+- if (unlikely(current->audit_context) && !entryexit)
+- audit_syscall_entry(AUDIT_ARCH_I386, regs->orig_eax,
+- regs->ebx, regs->ecx, regs->edx, regs->esi);
+- if (ret == 0)
+- return 0;
-
-- q->max_segment_size = max_size;
+- regs->orig_eax = -1; /* force skip of syscall restarting */
+- if (unlikely(current->audit_context))
+- audit_syscall_exit(AUDITSC_RESULT(regs->eax), regs->eax);
+- return 1;
-}
+diff --git a/arch/x86/kernel/ptrace_64.c b/arch/x86/kernel/ptrace_64.c
+deleted file mode 100644
+index 607085f..0000000
+--- a/arch/x86/kernel/ptrace_64.c
++++ /dev/null
+@@ -1,621 +0,0 @@
+-/* By Ross Biro 1/23/92 */
+-/*
+- * Pentium III FXSR, SSE support
+- * Gareth Hughes <gareth at valinux.com>, May 2000
+- *
+- * x86-64 port 2000-2002 Andi Kleen
+- */
-
--EXPORT_SYMBOL(blk_queue_max_segment_size);
+-#include <linux/kernel.h>
+-#include <linux/sched.h>
+-#include <linux/mm.h>
+-#include <linux/smp.h>
+-#include <linux/errno.h>
+-#include <linux/ptrace.h>
+-#include <linux/user.h>
+-#include <linux/security.h>
+-#include <linux/audit.h>
+-#include <linux/seccomp.h>
+-#include <linux/signal.h>
-
--/**
-- * blk_queue_hardsect_size - set hardware sector size for the queue
-- * @q: the request queue for the device
-- * @size: the hardware sector size, in bytes
-- *
-- * Description:
-- * This should typically be set to the lowest possible sector size
-- * that the hardware can operate on (possible without reverting to
-- * even internal read-modify-write operations). Usually the default
-- * of 512 covers most hardware.
-- **/
--void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
--{
-- q->hardsect_size = size;
--}
+-#include <asm/uaccess.h>
+-#include <asm/pgtable.h>
+-#include <asm/system.h>
+-#include <asm/processor.h>
+-#include <asm/i387.h>
+-#include <asm/debugreg.h>
+-#include <asm/ldt.h>
+-#include <asm/desc.h>
+-#include <asm/proto.h>
+-#include <asm/ia32.h>
-
--EXPORT_SYMBOL(blk_queue_hardsect_size);
+-/*
+- * does not yet catch signals sent when the child dies.
+- * in exit.c or in signal.c.
+- */
-
-/*
-- * Returns the minimum that is _not_ zero, unless both are zero.
+- * Determines which flags the user has access to [1 = access, 0 = no access].
+- * Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
+- * Also masks reserved bits (63-22, 15, 5, 3, 1).
- */
--#define min_not_zero(l, r) (l == 0) ? r : ((r == 0) ? l : min(l, r))
+-#define FLAG_MASK 0x54dd5UL
-
--/**
-- * blk_queue_stack_limits - inherit underlying queue limits for stacked drivers
-- * @t: the stacking driver (top)
-- * @b: the underlying device (bottom)
-- **/
--void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
+-/* set's the trap flag. */
+-#define TRAP_FLAG 0x100UL
+-
+-/*
+- * eflags and offset of eflags on child stack..
+- */
+-#define EFLAGS offsetof(struct pt_regs, eflags)
+-#define EFL_OFFSET ((int)(EFLAGS-sizeof(struct pt_regs)))
+-
+-/*
+- * this routine will get a word off of the processes privileged stack.
+- * the offset is how far from the base addr as stored in the TSS.
+- * this routine assumes that all the privileged stacks are in our
+- * data space.
+- */
+-static inline unsigned long get_stack_long(struct task_struct *task, int offset)
-{
-- /* zero is "infinity" */
-- t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
-- t->max_hw_sectors = min_not_zero(t->max_hw_sectors,b->max_hw_sectors);
+- unsigned char *stack;
-
-- t->max_phys_segments = min(t->max_phys_segments,b->max_phys_segments);
-- t->max_hw_segments = min(t->max_hw_segments,b->max_hw_segments);
-- t->max_segment_size = min(t->max_segment_size,b->max_segment_size);
-- t->hardsect_size = max(t->hardsect_size,b->hardsect_size);
-- if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags))
-- clear_bit(QUEUE_FLAG_CLUSTER, &t->queue_flags);
+- stack = (unsigned char *)task->thread.rsp0;
+- stack += offset;
+- return (*((unsigned long *)stack));
-}
-
--EXPORT_SYMBOL(blk_queue_stack_limits);
--
--/**
-- * blk_queue_segment_boundary - set boundary rules for segment merging
-- * @q: the request queue for the device
-- * @mask: the memory boundary mask
-- **/
--void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
+-/*
+- * this routine will put a word on the processes privileged stack.
+- * the offset is how far from the base addr as stored in the TSS.
+- * this routine assumes that all the privileged stacks are in our
+- * data space.
+- */
+-static inline long put_stack_long(struct task_struct *task, int offset,
+- unsigned long data)
-{
-- if (mask < PAGE_CACHE_SIZE - 1) {
-- mask = PAGE_CACHE_SIZE - 1;
-- printk("%s: set to minimum %lx\n", __FUNCTION__, mask);
-- }
+- unsigned char * stack;
-
-- q->seg_boundary_mask = mask;
+- stack = (unsigned char *) task->thread.rsp0;
+- stack += offset;
+- *(unsigned long *) stack = data;
+- return 0;
-}
-
--EXPORT_SYMBOL(blk_queue_segment_boundary);
+-#define LDT_SEGMENT 4
-
--/**
-- * blk_queue_dma_alignment - set dma length and memory alignment
-- * @q: the request queue for the device
-- * @mask: alignment mask
-- *
-- * description:
-- * set required memory and length aligment for direct dma transactions.
-- * this is used when buiding direct io requests for the queue.
-- *
-- **/
--void blk_queue_dma_alignment(struct request_queue *q, int mask)
+-unsigned long convert_rip_to_linear(struct task_struct *child, struct pt_regs *regs)
-{
-- q->dma_alignment = mask;
--}
+- unsigned long addr, seg;
-
--EXPORT_SYMBOL(blk_queue_dma_alignment);
+- addr = regs->rip;
+- seg = regs->cs & 0xffff;
-
--/**
-- * blk_queue_find_tag - find a request by its tag and queue
-- * @q: The request queue for the device
-- * @tag: The tag of the request
-- *
-- * Notes:
-- * Should be used when a device returns a tag and you want to match
-- * it with a request.
-- *
-- * no locks need be held.
-- **/
--struct request *blk_queue_find_tag(struct request_queue *q, int tag)
--{
-- return blk_map_queue_find_tag(q->queue_tags, tag);
--}
+- /*
+- * We'll assume that the code segments in the GDT
+- * are all zero-based. That is largely true: the
+- * TLS segments are used for data, and the PNPBIOS
+- * and APM bios ones we just ignore here.
+- */
+- if (seg & LDT_SEGMENT) {
+- u32 *desc;
+- unsigned long base;
+-
+- seg &= ~7UL;
+-
+- mutex_lock(&child->mm->context.lock);
+- if (unlikely((seg >> 3) >= child->mm->context.size))
+- addr = -1L; /* bogus selector, access would fault */
+- else {
+- desc = child->mm->context.ldt + seg;
+- base = ((desc[0] >> 16) |
+- ((desc[1] & 0xff) << 16) |
+- (desc[1] & 0xff000000));
+-
+- /* 16-bit code segment? */
+- if (!((desc[1] >> 22) & 1))
+- addr &= 0xffff;
+- addr += base;
+- }
+- mutex_unlock(&child->mm->context.lock);
+- }
-
--EXPORT_SYMBOL(blk_queue_find_tag);
+- return addr;
+-}
-
--/**
-- * __blk_free_tags - release a given set of tag maintenance info
-- * @bqt: the tag map to free
-- *
-- * Tries to free the specified @bqt at . Returns true if it was
-- * actually freed and false if there are still references using it
-- */
--static int __blk_free_tags(struct blk_queue_tag *bqt)
+-static int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
-{
-- int retval;
+- int i, copied;
+- unsigned char opcode[15];
+- unsigned long addr = convert_rip_to_linear(child, regs);
+-
+- copied = access_process_vm(child, addr, opcode, sizeof(opcode), 0);
+- for (i = 0; i < copied; i++) {
+- switch (opcode[i]) {
+- /* popf and iret */
+- case 0x9d: case 0xcf:
+- return 1;
-
-- retval = atomic_dec_and_test(&bqt->refcnt);
-- if (retval) {
-- BUG_ON(bqt->busy);
+- /* CHECKME: 64 65 */
-
-- kfree(bqt->tag_index);
-- bqt->tag_index = NULL;
+- /* opcode and address size prefixes */
+- case 0x66: case 0x67:
+- continue;
+- /* irrelevant prefixes (segment overrides and repeats) */
+- case 0x26: case 0x2e:
+- case 0x36: case 0x3e:
+- case 0x64: case 0x65:
+- case 0xf2: case 0xf3:
+- continue;
-
-- kfree(bqt->tag_map);
-- bqt->tag_map = NULL;
+- case 0x40 ... 0x4f:
+- if (regs->cs != __USER_CS)
+- /* 32-bit mode: register increment */
+- return 0;
+- /* 64-bit mode: REX prefix */
+- continue;
-
-- kfree(bqt);
+- /* CHECKME: f2, f3 */
-
+- /*
+- * pushf: NOTE! We should probably not let
+- * the user see the TF bit being set. But
+- * it's more pain than it's worth to avoid
+- * it, and a debugger could emulate this
+- * all in user space if it _really_ cares.
+- */
+- case 0x9c:
+- default:
+- return 0;
+- }
- }
--
-- return retval;
+- return 0;
-}
-
--/**
-- * __blk_queue_free_tags - release tag maintenance info
-- * @q: the request queue for the device
-- *
-- * Notes:
-- * blk_cleanup_queue() will take care of calling this function, if tagging
-- * has been used. So there's no need to call this directly.
-- **/
--static void __blk_queue_free_tags(struct request_queue *q)
+-static void set_singlestep(struct task_struct *child)
-{
-- struct blk_queue_tag *bqt = q->queue_tags;
+- struct pt_regs *regs = task_pt_regs(child);
-
-- if (!bqt)
+- /*
+- * Always set TIF_SINGLESTEP - this guarantees that
+- * we single-step system calls etc.. This will also
+- * cause us to set TF when returning to user mode.
+- */
+- set_tsk_thread_flag(child, TIF_SINGLESTEP);
+-
+- /*
+- * If TF was already set, don't do anything else
+- */
+- if (regs->eflags & TRAP_FLAG)
- return;
-
-- __blk_free_tags(bqt);
+- /* Set TF on the kernel stack.. */
+- regs->eflags |= TRAP_FLAG;
-
-- q->queue_tags = NULL;
-- q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
--}
+- /*
+- * ..but if TF is changed by the instruction we will trace,
+- * don't mark it as being "us" that set it, so that we
+- * won't clear it by hand later.
+- */
+- if (is_setting_trap_flag(child, regs))
+- return;
-
+- child->ptrace |= PT_DTRACE;
+-}
-
--/**
-- * blk_free_tags - release a given set of tag maintenance info
-- * @bqt: the tag map to free
-- *
-- * For externally managed @bqt@ frees the map. Callers of this
-- * function must guarantee to have released all the queues that
-- * might have been using this tag map.
-- */
--void blk_free_tags(struct blk_queue_tag *bqt)
+-static void clear_singlestep(struct task_struct *child)
-{
-- if (unlikely(!__blk_free_tags(bqt)))
-- BUG();
+- /* Always clear TIF_SINGLESTEP... */
+- clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+-
+- /* But touch TF only if it was set by us.. */
+- if (child->ptrace & PT_DTRACE) {
+- struct pt_regs *regs = task_pt_regs(child);
+- regs->eflags &= ~TRAP_FLAG;
+- child->ptrace &= ~PT_DTRACE;
+- }
-}
--EXPORT_SYMBOL(blk_free_tags);
-
--/**
-- * blk_queue_free_tags - release tag maintenance info
-- * @q: the request queue for the device
+-/*
+- * Called by kernel/ptrace.c when detaching..
- *
-- * Notes:
-- * This is used to disabled tagged queuing to a device, yet leave
-- * queue in function.
-- **/
--void blk_queue_free_tags(struct request_queue *q)
--{
-- clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+- * Make sure the single step bit is not set.
+- */
+-void ptrace_disable(struct task_struct *child)
+-{
+- clear_singlestep(child);
-}
-
--EXPORT_SYMBOL(blk_queue_free_tags);
--
--static int
--init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth)
+-static int putreg(struct task_struct *child,
+- unsigned long regno, unsigned long value)
-{
-- struct request **tag_index;
-- unsigned long *tag_map;
-- int nr_ulongs;
--
-- if (q && depth > q->nr_requests * 2) {
-- depth = q->nr_requests * 2;
-- printk(KERN_ERR "%s: adjusted depth to %d\n",
-- __FUNCTION__, depth);
+- unsigned long tmp;
+-
+- switch (regno) {
+- case offsetof(struct user_regs_struct,fs):
+- if (value && (value & 3) != 3)
+- return -EIO;
+- child->thread.fsindex = value & 0xffff;
+- return 0;
+- case offsetof(struct user_regs_struct,gs):
+- if (value && (value & 3) != 3)
+- return -EIO;
+- child->thread.gsindex = value & 0xffff;
+- return 0;
+- case offsetof(struct user_regs_struct,ds):
+- if (value && (value & 3) != 3)
+- return -EIO;
+- child->thread.ds = value & 0xffff;
+- return 0;
+- case offsetof(struct user_regs_struct,es):
+- if (value && (value & 3) != 3)
+- return -EIO;
+- child->thread.es = value & 0xffff;
+- return 0;
+- case offsetof(struct user_regs_struct,ss):
+- if ((value & 3) != 3)
+- return -EIO;
+- value &= 0xffff;
+- return 0;
+- case offsetof(struct user_regs_struct,fs_base):
+- if (value >= TASK_SIZE_OF(child))
+- return -EIO;
+- child->thread.fs = value;
+- return 0;
+- case offsetof(struct user_regs_struct,gs_base):
+- if (value >= TASK_SIZE_OF(child))
+- return -EIO;
+- child->thread.gs = value;
+- return 0;
+- case offsetof(struct user_regs_struct, eflags):
+- value &= FLAG_MASK;
+- tmp = get_stack_long(child, EFL_OFFSET);
+- tmp &= ~FLAG_MASK;
+- value |= tmp;
+- break;
+- case offsetof(struct user_regs_struct,cs):
+- if ((value & 3) != 3)
+- return -EIO;
+- value &= 0xffff;
+- break;
- }
--
-- tag_index = kzalloc(depth * sizeof(struct request *), GFP_ATOMIC);
-- if (!tag_index)
-- goto fail;
--
-- nr_ulongs = ALIGN(depth, BITS_PER_LONG) / BITS_PER_LONG;
-- tag_map = kzalloc(nr_ulongs * sizeof(unsigned long), GFP_ATOMIC);
-- if (!tag_map)
-- goto fail;
--
-- tags->real_max_depth = depth;
-- tags->max_depth = depth;
-- tags->tag_index = tag_index;
-- tags->tag_map = tag_map;
--
+- put_stack_long(child, regno - sizeof(struct pt_regs), value);
- return 0;
--fail:
-- kfree(tag_index);
-- return -ENOMEM;
-}
-
--static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q,
-- int depth)
+-static unsigned long getreg(struct task_struct *child, unsigned long regno)
-{
-- struct blk_queue_tag *tags;
--
-- tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
-- if (!tags)
-- goto fail;
--
-- if (init_tag_map(q, tags, depth))
-- goto fail;
--
-- tags->busy = 0;
-- atomic_set(&tags->refcnt, 1);
-- return tags;
--fail:
-- kfree(tags);
-- return NULL;
--}
+- unsigned long val;
+- switch (regno) {
+- case offsetof(struct user_regs_struct, fs):
+- return child->thread.fsindex;
+- case offsetof(struct user_regs_struct, gs):
+- return child->thread.gsindex;
+- case offsetof(struct user_regs_struct, ds):
+- return child->thread.ds;
+- case offsetof(struct user_regs_struct, es):
+- return child->thread.es;
+- case offsetof(struct user_regs_struct, fs_base):
+- return child->thread.fs;
+- case offsetof(struct user_regs_struct, gs_base):
+- return child->thread.gs;
+- default:
+- regno = regno - sizeof(struct pt_regs);
+- val = get_stack_long(child, regno);
+- if (test_tsk_thread_flag(child, TIF_IA32))
+- val &= 0xffffffff;
+- return val;
+- }
-
--/**
-- * blk_init_tags - initialize the tag info for an external tag map
-- * @depth: the maximum queue depth supported
-- * @tags: the tag to use
-- **/
--struct blk_queue_tag *blk_init_tags(int depth)
--{
-- return __blk_queue_init_tags(NULL, depth);
-}
--EXPORT_SYMBOL(blk_init_tags);
-
--/**
-- * blk_queue_init_tags - initialize the queue tag info
-- * @q: the request queue for the device
-- * @depth: the maximum queue depth supported
-- * @tags: the tag to use
-- **/
--int blk_queue_init_tags(struct request_queue *q, int depth,
-- struct blk_queue_tag *tags)
+-long arch_ptrace(struct task_struct *child, long request, long addr, long data)
-{
-- int rc;
+- long i, ret;
+- unsigned ui;
-
-- BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
+- switch (request) {
+- /* when I and D space are separate, these will need to be fixed. */
+- case PTRACE_PEEKTEXT: /* read word at location addr. */
+- case PTRACE_PEEKDATA:
+- ret = generic_ptrace_peekdata(child, addr, data);
+- break;
-
-- if (!tags && !q->queue_tags) {
-- tags = __blk_queue_init_tags(q, depth);
+- /* read the word at location addr in the USER area. */
+- case PTRACE_PEEKUSR: {
+- unsigned long tmp;
-
-- if (!tags)
-- goto fail;
-- } else if (q->queue_tags) {
-- if ((rc = blk_queue_resize_tags(q, depth)))
-- return rc;
-- set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
-- return 0;
-- } else
-- atomic_inc(&tags->refcnt);
+- ret = -EIO;
+- if ((addr & 7) ||
+- addr > sizeof(struct user) - 7)
+- break;
-
-- /*
-- * assign it, all done
-- */
-- q->queue_tags = tags;
-- q->queue_flags |= (1 << QUEUE_FLAG_QUEUED);
-- INIT_LIST_HEAD(&q->tag_busy_list);
-- return 0;
--fail:
-- kfree(tags);
-- return -ENOMEM;
--}
+- switch (addr) {
+- case 0 ... sizeof(struct user_regs_struct) - sizeof(long):
+- tmp = getreg(child, addr);
+- break;
+- case offsetof(struct user, u_debugreg[0]):
+- tmp = child->thread.debugreg0;
+- break;
+- case offsetof(struct user, u_debugreg[1]):
+- tmp = child->thread.debugreg1;
+- break;
+- case offsetof(struct user, u_debugreg[2]):
+- tmp = child->thread.debugreg2;
+- break;
+- case offsetof(struct user, u_debugreg[3]):
+- tmp = child->thread.debugreg3;
+- break;
+- case offsetof(struct user, u_debugreg[6]):
+- tmp = child->thread.debugreg6;
+- break;
+- case offsetof(struct user, u_debugreg[7]):
+- tmp = child->thread.debugreg7;
+- break;
+- default:
+- tmp = 0;
+- break;
+- }
+- ret = put_user(tmp,(unsigned long __user *) data);
+- break;
+- }
-
--EXPORT_SYMBOL(blk_queue_init_tags);
+- /* when I and D space are separate, this will have to be fixed. */
+- case PTRACE_POKETEXT: /* write the word at location addr. */
+- case PTRACE_POKEDATA:
+- ret = generic_ptrace_pokedata(child, addr, data);
+- break;
-
--/**
-- * blk_queue_resize_tags - change the queueing depth
-- * @q: the request queue for the device
-- * @new_depth: the new max command queueing depth
-- *
-- * Notes:
-- * Must be called with the queue lock held.
-- **/
--int blk_queue_resize_tags(struct request_queue *q, int new_depth)
--{
-- struct blk_queue_tag *bqt = q->queue_tags;
-- struct request **tag_index;
-- unsigned long *tag_map;
-- int max_depth, nr_ulongs;
+- case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
+- {
+- int dsize = test_tsk_thread_flag(child, TIF_IA32) ? 3 : 7;
+- ret = -EIO;
+- if ((addr & 7) ||
+- addr > sizeof(struct user) - 7)
+- break;
-
-- if (!bqt)
-- return -ENXIO;
+- switch (addr) {
+- case 0 ... sizeof(struct user_regs_struct) - sizeof(long):
+- ret = putreg(child, addr, data);
+- break;
+- /* Disallows to set a breakpoint into the vsyscall */
+- case offsetof(struct user, u_debugreg[0]):
+- if (data >= TASK_SIZE_OF(child) - dsize) break;
+- child->thread.debugreg0 = data;
+- ret = 0;
+- break;
+- case offsetof(struct user, u_debugreg[1]):
+- if (data >= TASK_SIZE_OF(child) - dsize) break;
+- child->thread.debugreg1 = data;
+- ret = 0;
+- break;
+- case offsetof(struct user, u_debugreg[2]):
+- if (data >= TASK_SIZE_OF(child) - dsize) break;
+- child->thread.debugreg2 = data;
+- ret = 0;
+- break;
+- case offsetof(struct user, u_debugreg[3]):
+- if (data >= TASK_SIZE_OF(child) - dsize) break;
+- child->thread.debugreg3 = data;
+- ret = 0;
+- break;
+- case offsetof(struct user, u_debugreg[6]):
+- if (data >> 32)
+- break;
+- child->thread.debugreg6 = data;
+- ret = 0;
+- break;
+- case offsetof(struct user, u_debugreg[7]):
+- /* See arch/i386/kernel/ptrace.c for an explanation of
+- * this awkward check.*/
+- data &= ~DR_CONTROL_RESERVED;
+- for(i=0; i<4; i++)
+- if ((0x5554 >> ((data >> (16 + 4*i)) & 0xf)) & 1)
+- break;
+- if (i == 4) {
+- child->thread.debugreg7 = data;
+- if (data)
+- set_tsk_thread_flag(child, TIF_DEBUG);
+- else
+- clear_tsk_thread_flag(child, TIF_DEBUG);
+- ret = 0;
+- }
+- break;
+- }
+- break;
+- }
+- case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+- case PTRACE_CONT: /* restart after signal. */
-
-- /*
-- * if we already have large enough real_max_depth. just
-- * adjust max_depth. *NOTE* as requests with tag value
-- * between new_depth and real_max_depth can be in-flight, tag
-- * map can not be shrunk blindly here.
-- */
-- if (new_depth <= bqt->real_max_depth) {
-- bqt->max_depth = new_depth;
-- return 0;
-- }
--
-- /*
-- * Currently cannot replace a shared tag map with a new
-- * one, so error out if this is the case
-- */
-- if (atomic_read(&bqt->refcnt) != 1)
-- return -EBUSY;
--
-- /*
-- * save the old state info, so we can copy it back
-- */
-- tag_index = bqt->tag_index;
-- tag_map = bqt->tag_map;
-- max_depth = bqt->real_max_depth;
--
-- if (init_tag_map(q, bqt, new_depth))
-- return -ENOMEM;
--
-- memcpy(bqt->tag_index, tag_index, max_depth * sizeof(struct request *));
-- nr_ulongs = ALIGN(max_depth, BITS_PER_LONG) / BITS_PER_LONG;
-- memcpy(bqt->tag_map, tag_map, nr_ulongs * sizeof(unsigned long));
--
-- kfree(tag_index);
-- kfree(tag_map);
-- return 0;
--}
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
+- if (request == PTRACE_SYSCALL)
+- set_tsk_thread_flag(child,TIF_SYSCALL_TRACE);
+- else
+- clear_tsk_thread_flag(child,TIF_SYSCALL_TRACE);
+- clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+- child->exit_code = data;
+- /* make sure the single step bit is not set. */
+- clear_singlestep(child);
+- wake_up_process(child);
+- ret = 0;
+- break;
-
--EXPORT_SYMBOL(blk_queue_resize_tags);
+-#ifdef CONFIG_IA32_EMULATION
+- /* This makes only sense with 32bit programs. Allow a
+- 64bit debugger to fully examine them too. Better
+- don't use it against 64bit processes, use
+- PTRACE_ARCH_PRCTL instead. */
+- case PTRACE_SET_THREAD_AREA: {
+- struct user_desc __user *p;
+- int old;
+- p = (struct user_desc __user *)data;
+- get_user(old, &p->entry_number);
+- put_user(addr, &p->entry_number);
+- ret = do_set_thread_area(&child->thread, p);
+- put_user(old, &p->entry_number);
+- break;
+- case PTRACE_GET_THREAD_AREA:
+- p = (struct user_desc __user *)data;
+- get_user(old, &p->entry_number);
+- put_user(addr, &p->entry_number);
+- ret = do_get_thread_area(&child->thread, p);
+- put_user(old, &p->entry_number);
+- break;
+- }
+-#endif
+- /* normal 64bit interface to access TLS data.
+- Works just like arch_prctl, except that the arguments
+- are reversed. */
+- case PTRACE_ARCH_PRCTL:
+- ret = do_arch_prctl(child, data, addr);
+- break;
-
--/**
-- * blk_queue_end_tag - end tag operations for a request
-- * @q: the request queue for the device
-- * @rq: the request that has completed
-- *
-- * Description:
-- * Typically called when end_that_request_first() returns 0, meaning
-- * all transfers have been done for a request. It's important to call
-- * this function before end_that_request_last(), as that will put the
-- * request back on the free list thus corrupting the internal tag list.
-- *
-- * Notes:
-- * queue lock must be held.
-- **/
--void blk_queue_end_tag(struct request_queue *q, struct request *rq)
--{
-- struct blk_queue_tag *bqt = q->queue_tags;
-- int tag = rq->tag;
+-/*
+- * make the child exit. Best I can do is send it a sigkill.
+- * perhaps it should be put in the status that it wants to
+- * exit.
+- */
+- case PTRACE_KILL:
+- ret = 0;
+- if (child->exit_state == EXIT_ZOMBIE) /* already dead */
+- break;
+- clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+- child->exit_code = SIGKILL;
+- /* make sure the single step bit is not set. */
+- clear_singlestep(child);
+- wake_up_process(child);
+- break;
-
-- BUG_ON(tag == -1);
+- case PTRACE_SINGLESTEP: /* set the trap flag. */
+- ret = -EIO;
+- if (!valid_signal(data))
+- break;
+- clear_tsk_thread_flag(child,TIF_SYSCALL_TRACE);
+- set_singlestep(child);
+- child->exit_code = data;
+- /* give it a chance to run. */
+- wake_up_process(child);
+- ret = 0;
+- break;
-
-- if (unlikely(tag >= bqt->real_max_depth))
-- /*
-- * This can happen after tag depth has been reduced.
-- * FIXME: how about a warning or info message here?
-- */
-- return;
+- case PTRACE_GETREGS: { /* Get all gp regs from the child. */
+- if (!access_ok(VERIFY_WRITE, (unsigned __user *)data,
+- sizeof(struct user_regs_struct))) {
+- ret = -EIO;
+- break;
+- }
+- ret = 0;
+- for (ui = 0; ui < sizeof(struct user_regs_struct); ui += sizeof(long)) {
+- ret |= __put_user(getreg(child, ui),(unsigned long __user *) data);
+- data += sizeof(long);
+- }
+- break;
+- }
-
-- list_del_init(&rq->queuelist);
-- rq->cmd_flags &= ~REQ_QUEUED;
-- rq->tag = -1;
+- case PTRACE_SETREGS: { /* Set all gp regs in the child. */
+- unsigned long tmp;
+- if (!access_ok(VERIFY_READ, (unsigned __user *)data,
+- sizeof(struct user_regs_struct))) {
+- ret = -EIO;
+- break;
+- }
+- ret = 0;
+- for (ui = 0; ui < sizeof(struct user_regs_struct); ui += sizeof(long)) {
+- ret = __get_user(tmp, (unsigned long __user *) data);
+- if (ret)
+- break;
+- ret = putreg(child, ui, tmp);
+- if (ret)
+- break;
+- data += sizeof(long);
+- }
+- break;
+- }
-
-- if (unlikely(bqt->tag_index[tag] == NULL))
-- printk(KERN_ERR "%s: tag %d is missing\n",
-- __FUNCTION__, tag);
+- case PTRACE_GETFPREGS: { /* Get the child extended FPU state. */
+- if (!access_ok(VERIFY_WRITE, (unsigned __user *)data,
+- sizeof(struct user_i387_struct))) {
+- ret = -EIO;
+- break;
+- }
+- ret = get_fpregs((struct user_i387_struct __user *)data, child);
+- break;
+- }
-
-- bqt->tag_index[tag] = NULL;
+- case PTRACE_SETFPREGS: { /* Set the child extended FPU state. */
+- if (!access_ok(VERIFY_READ, (unsigned __user *)data,
+- sizeof(struct user_i387_struct))) {
+- ret = -EIO;
+- break;
+- }
+- set_stopped_child_used_math(child);
+- ret = set_fpregs(child, (struct user_i387_struct __user *)data);
+- break;
+- }
-
-- if (unlikely(!test_bit(tag, bqt->tag_map))) {
-- printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
-- __FUNCTION__, tag);
-- return;
+- default:
+- ret = ptrace_request(child, request, addr, data);
+- break;
- }
-- /*
-- * The tag_map bit acts as a lock for tag_index[bit], so we need
-- * unlock memory barrier semantics.
-- */
-- clear_bit_unlock(tag, bqt->tag_map);
-- bqt->busy--;
+- return ret;
-}
-
--EXPORT_SYMBOL(blk_queue_end_tag);
--
--/**
-- * blk_queue_start_tag - find a free tag and assign it
-- * @q: the request queue for the device
-- * @rq: the block request that needs tagging
-- *
-- * Description:
-- * This can either be used as a stand-alone helper, or possibly be
-- * assigned as the queue &prep_rq_fn (in which case &struct request
-- * automagically gets a tag assigned). Note that this function
-- * assumes that any type of request can be queued! if this is not
-- * true for your device, you must check the request type before
-- * calling this function. The request will also be removed from
-- * the request queue, so it's the drivers responsibility to readd
-- * it if it should need to be restarted for some reason.
-- *
-- * Notes:
-- * queue lock must be held.
-- **/
--int blk_queue_start_tag(struct request_queue *q, struct request *rq)
+-static void syscall_trace(struct pt_regs *regs)
-{
-- struct blk_queue_tag *bqt = q->queue_tags;
-- int tag;
-
-- if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
-- printk(KERN_ERR
-- "%s: request %p for device [%s] already tagged %d",
-- __FUNCTION__, rq,
-- rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->tag);
-- BUG();
-- }
--
-- /*
-- * Protect against shared tag maps, as we may not have exclusive
-- * access to the tag map.
-- */
-- do {
-- tag = find_first_zero_bit(bqt->tag_map, bqt->max_depth);
-- if (tag >= bqt->max_depth)
-- return 1;
+-#if 0
+- printk("trace %s rip %lx rsp %lx rax %d origrax %d caller %lx tiflags %x ptrace %x\n",
+- current->comm,
+- regs->rip, regs->rsp, regs->rax, regs->orig_rax, __builtin_return_address(0),
+- current_thread_info()->flags, current->ptrace);
+-#endif
-
-- } while (test_and_set_bit_lock(tag, bqt->tag_map));
+- ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
+- ? 0x80 : 0));
- /*
-- * We need lock ordering semantics given by test_and_set_bit_lock.
-- * See blk_queue_end_tag for details.
+- * this isn't the same as continuing with a signal, but it will do
+- * for normal use. strace only continues with a signal if the
+- * stopping signal is not SIGTRAP. -brl
- */
--
-- rq->cmd_flags |= REQ_QUEUED;
-- rq->tag = tag;
-- bqt->tag_index[tag] = rq;
-- blkdev_dequeue_request(rq);
-- list_add(&rq->queuelist, &q->tag_busy_list);
-- bqt->busy++;
-- return 0;
--}
--
--EXPORT_SYMBOL(blk_queue_start_tag);
--
--/**
-- * blk_queue_invalidate_tags - invalidate all pending tags
-- * @q: the request queue for the device
-- *
-- * Description:
-- * Hardware conditions may dictate a need to stop all pending requests.
-- * In this case, we will safely clear the block side of the tag queue and
-- * readd all requests to the request queue in the right order.
-- *
-- * Notes:
-- * queue lock must be held.
-- **/
--void blk_queue_invalidate_tags(struct request_queue *q)
--{
-- struct list_head *tmp, *n;
--
-- list_for_each_safe(tmp, n, &q->tag_busy_list)
-- blk_requeue_request(q, list_entry_rq(tmp));
--}
--
--EXPORT_SYMBOL(blk_queue_invalidate_tags);
--
--void blk_dump_rq_flags(struct request *rq, char *msg)
--{
-- int bit;
--
-- printk("%s: dev %s: type=%x, flags=%x\n", msg,
-- rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->cmd_type,
-- rq->cmd_flags);
--
-- printk("\nsector %llu, nr/cnr %lu/%u\n", (unsigned long long)rq->sector,
-- rq->nr_sectors,
-- rq->current_nr_sectors);
-- printk("bio %p, biotail %p, buffer %p, data %p, len %u\n", rq->bio, rq->biotail, rq->buffer, rq->data, rq->data_len);
--
-- if (blk_pc_request(rq)) {
-- printk("cdb: ");
-- for (bit = 0; bit < sizeof(rq->cmd); bit++)
-- printk("%02x ", rq->cmd[bit]);
-- printk("\n");
+- if (current->exit_code) {
+- send_sig(current->exit_code, current, 1);
+- current->exit_code = 0;
- }
-}
-
--EXPORT_SYMBOL(blk_dump_rq_flags);
--
--void blk_recount_segments(struct request_queue *q, struct bio *bio)
--{
-- struct request rq;
-- struct bio *nxt = bio->bi_next;
-- rq.q = q;
-- rq.bio = rq.biotail = bio;
-- bio->bi_next = NULL;
-- blk_recalc_rq_segments(&rq);
-- bio->bi_next = nxt;
-- bio->bi_phys_segments = rq.nr_phys_segments;
-- bio->bi_hw_segments = rq.nr_hw_segments;
-- bio->bi_flags |= (1 << BIO_SEG_VALID);
--}
--EXPORT_SYMBOL(blk_recount_segments);
--
--static void blk_recalc_rq_segments(struct request *rq)
+-asmlinkage void syscall_trace_enter(struct pt_regs *regs)
-{
-- int nr_phys_segs;
-- int nr_hw_segs;
-- unsigned int phys_size;
-- unsigned int hw_size;
-- struct bio_vec *bv, *bvprv = NULL;
-- int seg_size;
-- int hw_seg_size;
-- int cluster;
-- struct req_iterator iter;
-- int high, highprv = 1;
-- struct request_queue *q = rq->q;
--
-- if (!rq->bio)
-- return;
+- /* do the secure computing check first */
+- secure_computing(regs->orig_rax);
-
-- cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
-- hw_seg_size = seg_size = 0;
-- phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
-- rq_for_each_segment(bv, rq, iter) {
-- /*
-- * the trick here is making sure that a high page is never
-- * considered part of another segment, since that might
-- * change with the bounce page.
-- */
-- high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
-- if (high || highprv)
-- goto new_hw_segment;
-- if (cluster) {
-- if (seg_size + bv->bv_len > q->max_segment_size)
-- goto new_segment;
-- if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
-- goto new_segment;
-- if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
-- goto new_segment;
-- if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
-- goto new_hw_segment;
+- if (test_thread_flag(TIF_SYSCALL_TRACE)
+- && (current->ptrace & PT_PTRACED))
+- syscall_trace(regs);
-
-- seg_size += bv->bv_len;
-- hw_seg_size += bv->bv_len;
-- bvprv = bv;
-- continue;
-- }
--new_segment:
-- if (BIOVEC_VIRT_MERGEABLE(bvprv, bv) &&
-- !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
-- hw_seg_size += bv->bv_len;
-- else {
--new_hw_segment:
-- if (nr_hw_segs == 1 &&
-- hw_seg_size > rq->bio->bi_hw_front_size)
-- rq->bio->bi_hw_front_size = hw_seg_size;
-- hw_seg_size = BIOVEC_VIRT_START_SIZE(bv) + bv->bv_len;
-- nr_hw_segs++;
+- if (unlikely(current->audit_context)) {
+- if (test_thread_flag(TIF_IA32)) {
+- audit_syscall_entry(AUDIT_ARCH_I386,
+- regs->orig_rax,
+- regs->rbx, regs->rcx,
+- regs->rdx, regs->rsi);
+- } else {
+- audit_syscall_entry(AUDIT_ARCH_X86_64,
+- regs->orig_rax,
+- regs->rdi, regs->rsi,
+- regs->rdx, regs->r10);
- }
--
-- nr_phys_segs++;
-- bvprv = bv;
-- seg_size = bv->bv_len;
-- highprv = high;
- }
--
-- if (nr_hw_segs == 1 &&
-- hw_seg_size > rq->bio->bi_hw_front_size)
-- rq->bio->bi_hw_front_size = hw_seg_size;
-- if (hw_seg_size > rq->biotail->bi_hw_back_size)
-- rq->biotail->bi_hw_back_size = hw_seg_size;
-- rq->nr_phys_segments = nr_phys_segs;
-- rq->nr_hw_segments = nr_hw_segs;
-}
-
--static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
-- struct bio *nxt)
--{
-- if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
-- return 0;
--
-- if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)))
-- return 0;
-- if (bio->bi_size + nxt->bi_size > q->max_segment_size)
-- return 0;
--
-- /*
-- * bio and nxt are contigous in memory, check if the queue allows
-- * these two to be merged into one
-- */
-- if (BIO_SEG_BOUNDARY(q, bio, nxt))
-- return 1;
--
-- return 0;
--}
--
--static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
-- struct bio *nxt)
+-asmlinkage void syscall_trace_leave(struct pt_regs *regs)
-{
-- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-- blk_recount_segments(q, bio);
-- if (unlikely(!bio_flagged(nxt, BIO_SEG_VALID)))
-- blk_recount_segments(q, nxt);
-- if (!BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)) ||
-- BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size + nxt->bi_hw_front_size))
-- return 0;
-- if (bio->bi_hw_back_size + nxt->bi_hw_front_size > q->max_segment_size)
-- return 0;
+- if (unlikely(current->audit_context))
+- audit_syscall_exit(AUDITSC_RESULT(regs->rax), regs->rax);
-
-- return 1;
+- if ((test_thread_flag(TIF_SYSCALL_TRACE)
+- || test_thread_flag(TIF_SINGLESTEP))
+- && (current->ptrace & PT_PTRACED))
+- syscall_trace(regs);
-}
+diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c
+index fab30e1..150ba29 100644
+--- a/arch/x86/kernel/quirks.c
++++ b/arch/x86/kernel/quirks.c
+@@ -162,6 +162,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH7_31,
+ ich_force_enable_hpet);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1,
+ ich_force_enable_hpet);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7,
++ ich_force_enable_hpet);
+
+
+ static struct pci_dev *cached_dev;
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+new file mode 100644
+index 0000000..5818dc2
+--- /dev/null
++++ b/arch/x86/kernel/reboot.c
+@@ -0,0 +1,451 @@
++#include <linux/module.h>
++#include <linux/init.h>
++#include <linux/reboot.h>
++#include <linux/init.h>
++#include <linux/pm.h>
++#include <linux/efi.h>
++#include <acpi/reboot.h>
++#include <asm/io.h>
++#include <asm/apic.h>
++#include <asm/desc.h>
++#include <asm/hpet.h>
++#include <asm/reboot_fixups.h>
++#include <asm/reboot.h>
++
++#ifdef CONFIG_X86_32
++# include <linux/dmi.h>
++# include <linux/ctype.h>
++# include <linux/mc146818rtc.h>
++# include <asm/pgtable.h>
++#else
++# include <asm/iommu.h>
++#endif
++
++/*
++ * Power off function, if any
++ */
++void (*pm_power_off)(void);
++EXPORT_SYMBOL(pm_power_off);
++
++static long no_idt[3];
++static int reboot_mode;
++enum reboot_type reboot_type = BOOT_KBD;
++int reboot_force;
++
++#if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
++static int reboot_cpu = -1;
++#endif
++
++/* reboot=b[ios] | s[mp] | t[riple] | k[bd] | e[fi] [, [w]arm | [c]old]
++ warm Don't set the cold reboot flag
++ cold Set the cold reboot flag
++ bios Reboot by jumping through the BIOS (only for X86_32)
++ smp Reboot by executing reset on BSP or other CPU (only for X86_32)
++ triple Force a triple fault (init)
++ kbd Use the keyboard controller. cold reset (default)
++ acpi Use the RESET_REG in the FADT
++ efi Use efi reset_system runtime service
++ force Avoid anything that could hang.
++ */
++static int __init reboot_setup(char *str)
++{
++ for (;;) {
++ switch (*str) {
++ case 'w':
++ reboot_mode = 0x1234;
++ break;
++
++ case 'c':
++ reboot_mode = 0;
++ break;
++
++#ifdef CONFIG_X86_32
++#ifdef CONFIG_SMP
++ case 's':
++ if (isdigit(*(str+1))) {
++ reboot_cpu = (int) (*(str+1) - '0');
++ if (isdigit(*(str+2)))
++ reboot_cpu = reboot_cpu*10 + (int)(*(str+2) - '0');
++ }
++ /* we will leave sorting out the final value
++ when we are ready to reboot, since we might not
++ have set up boot_cpu_id or smp_num_cpu */
++ break;
++#endif /* CONFIG_SMP */
++
++ case 'b':
++#endif
++ case 'a':
++ case 'k':
++ case 't':
++ case 'e':
++ reboot_type = *str;
++ break;
++
++ case 'f':
++ reboot_force = 1;
++ break;
++ }
++
++ str = strchr(str, ',');
++ if (str)
++ str++;
++ else
++ break;
++ }
++ return 1;
++}
++
++__setup("reboot=", reboot_setup);
++
++
++#ifdef CONFIG_X86_32
++/*
++ * Reboot options and system auto-detection code provided by
++ * Dell Inc. so their systems "just work". :-)
++ */
++
++/*
++ * Some machines require the "reboot=b" commandline option,
++ * this quirk makes that automatic.
++ */
++static int __init set_bios_reboot(const struct dmi_system_id *d)
++{
++ if (reboot_type != BOOT_BIOS) {
++ reboot_type = BOOT_BIOS;
++ printk(KERN_INFO "%s series board detected. Selecting BIOS-method for reboots.\n", d->ident);
++ }
++ return 0;
++}
++
++static struct dmi_system_id __initdata reboot_dmi_table[] = {
++ { /* Handle problems with rebooting on Dell E520's */
++ .callback = set_bios_reboot,
++ .ident = "Dell E520",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Dell DM061"),
++ },
++ },
++ { /* Handle problems with rebooting on Dell 1300's */
++ .callback = set_bios_reboot,
++ .ident = "Dell PowerEdge 1300",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 1300/"),
++ },
++ },
++ { /* Handle problems with rebooting on Dell 300's */
++ .callback = set_bios_reboot,
++ .ident = "Dell PowerEdge 300",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 300/"),
++ },
++ },
++ { /* Handle problems with rebooting on Dell Optiplex 745's SFF*/
++ .callback = set_bios_reboot,
++ .ident = "Dell OptiPlex 745",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 745"),
++ DMI_MATCH(DMI_BOARD_NAME, "0WF810"),
++ },
++ },
++ { /* Handle problems with rebooting on Dell 2400's */
++ .callback = set_bios_reboot,
++ .ident = "Dell PowerEdge 2400",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 2400"),
++ },
++ },
++ { /* Handle problems with rebooting on HP laptops */
++ .callback = set_bios_reboot,
++ .ident = "HP Compaq Laptop",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq"),
++ },
++ },
++ { }
++};
++
++static int __init reboot_init(void)
++{
++ dmi_check_system(reboot_dmi_table);
++ return 0;
++}
++core_initcall(reboot_init);
++
++/* The following code and data reboots the machine by switching to real
++ mode and jumping to the BIOS reset entry point, as if the CPU has
++ really been reset. The previous version asked the keyboard
++ controller to pulse the CPU reset line, which is more thorough, but
++ doesn't work with at least one type of 486 motherboard. It is easy
++ to stop this code working; hence the copious comments. */
++static unsigned long long
++real_mode_gdt_entries [3] =
++{
++ 0x0000000000000000ULL, /* Null descriptor */
++ 0x00009a000000ffffULL, /* 16-bit real-mode 64k code at 0x00000000 */
++ 0x000092000100ffffULL /* 16-bit real-mode 64k data at 0x00000100 */
++};
++
++static struct desc_ptr
++real_mode_gdt = { sizeof (real_mode_gdt_entries) - 1, (long)real_mode_gdt_entries },
++real_mode_idt = { 0x3ff, 0 };
++
++/* This is 16-bit protected mode code to disable paging and the cache,
++ switch to real mode and jump to the BIOS reset code.
++
++ The instruction that switches to real mode by writing to CR0 must be
++ followed immediately by a far jump instruction, which set CS to a
++ valid value for real mode, and flushes the prefetch queue to avoid
++ running instructions that have already been decoded in protected
++ mode.
++
++ Clears all the flags except ET, especially PG (paging), PE
++ (protected-mode enable) and TS (task switch for coprocessor state
++ save). Flushes the TLB after paging has been disabled. Sets CD and
++ NW, to disable the cache on a 486, and invalidates the cache. This
++ is more like the state of a 486 after reset. I don't know if
++ something else should be done for other chips.
++
++ More could be done here to set up the registers as if a CPU reset had
++ occurred; hopefully real BIOSs don't assume much. */
++static unsigned char real_mode_switch [] =
++{
++ 0x66, 0x0f, 0x20, 0xc0, /* movl %cr0,%eax */
++ 0x66, 0x83, 0xe0, 0x11, /* andl $0x00000011,%eax */
++ 0x66, 0x0d, 0x00, 0x00, 0x00, 0x60, /* orl $0x60000000,%eax */
++ 0x66, 0x0f, 0x22, 0xc0, /* movl %eax,%cr0 */
++ 0x66, 0x0f, 0x22, 0xd8, /* movl %eax,%cr3 */
++ 0x66, 0x0f, 0x20, 0xc3, /* movl %cr0,%ebx */
++ 0x66, 0x81, 0xe3, 0x00, 0x00, 0x00, 0x60, /* andl $0x60000000,%ebx */
++ 0x74, 0x02, /* jz f */
++ 0x0f, 0x09, /* wbinvd */
++ 0x24, 0x10, /* f: andb $0x10,al */
++ 0x66, 0x0f, 0x22, 0xc0 /* movl %eax,%cr0 */
++};
++static unsigned char jump_to_bios [] =
++{
++ 0xea, 0x00, 0x00, 0xff, 0xff /* ljmp $0xffff,$0x0000 */
++};
++
++/*
++ * Switch to real mode and then execute the code
++ * specified by the code and length parameters.
++ * We assume that length will aways be less that 100!
++ */
++void machine_real_restart(unsigned char *code, int length)
++{
++ local_irq_disable();
++
++ /* Write zero to CMOS register number 0x0f, which the BIOS POST
++ routine will recognize as telling it to do a proper reboot. (Well
++ that's what this book in front of me says -- it may only apply to
++ the Phoenix BIOS though, it's not clear). At the same time,
++ disable NMIs by setting the top bit in the CMOS address register,
++ as we're about to do peculiar things to the CPU. I'm not sure if
++ `outb_p' is needed instead of just `outb'. Use it to be on the
++ safe side. (Yes, CMOS_WRITE does outb_p's. - Paul G.)
++ */
++ spin_lock(&rtc_lock);
++ CMOS_WRITE(0x00, 0x8f);
++ spin_unlock(&rtc_lock);
++
++ /* Remap the kernel at virtual address zero, as well as offset zero
++ from the kernel segment. This assumes the kernel segment starts at
++ virtual address PAGE_OFFSET. */
++ memcpy(swapper_pg_dir, swapper_pg_dir + USER_PGD_PTRS,
++ sizeof(swapper_pg_dir [0]) * KERNEL_PGD_PTRS);
++
++ /*
++ * Use `swapper_pg_dir' as our page directory.
++ */
++ load_cr3(swapper_pg_dir);
++
++ /* Write 0x1234 to absolute memory location 0x472. The BIOS reads
++ this on booting to tell it to "Bypass memory test (also warm
++ boot)". This seems like a fairly standard thing that gets set by
++ REBOOT.COM programs, and the previous reset routine did this
++ too. */
++ *((unsigned short *)0x472) = reboot_mode;
++
++ /* For the switch to real mode, copy some code to low memory. It has
++ to be in the first 64k because it is running in 16-bit mode, and it
++ has to have the same physical and virtual address, because it turns
++ off paging. Copy it near the end of the first page, out of the way
++ of BIOS variables. */
++ memcpy((void *)(0x1000 - sizeof(real_mode_switch) - 100),
++ real_mode_switch, sizeof (real_mode_switch));
++ memcpy((void *)(0x1000 - 100), code, length);
++
++ /* Set up the IDT for real mode. */
++ load_idt(&real_mode_idt);
++
++ /* Set up a GDT from which we can load segment descriptors for real
++ mode. The GDT is not used in real mode; it is just needed here to
++ prepare the descriptors. */
++ load_gdt(&real_mode_gdt);
++
++ /* Load the data segment registers, and thus the descriptors ready for
++ real mode. The base address of each segment is 0x100, 16 times the
++ selector value being loaded here. This is so that the segment
++ registers don't have to be reloaded after switching to real mode:
++ the values are consistent for real mode operation already. */
++ __asm__ __volatile__ ("movl $0x0010,%%eax\n"
++ "\tmovl %%eax,%%ds\n"
++ "\tmovl %%eax,%%es\n"
++ "\tmovl %%eax,%%fs\n"
++ "\tmovl %%eax,%%gs\n"
++ "\tmovl %%eax,%%ss" : : : "eax");
++
++ /* Jump to the 16-bit code that we copied earlier. It disables paging
++ and the cache, switches to real mode, and jumps to the BIOS reset
++ entry point. */
++ __asm__ __volatile__ ("ljmp $0x0008,%0"
++ :
++ : "i" ((void *)(0x1000 - sizeof (real_mode_switch) - 100)));
++}
++#ifdef CONFIG_APM_MODULE
++EXPORT_SYMBOL(machine_real_restart);
++#endif
++
++#endif /* CONFIG_X86_32 */
++
++static inline void kb_wait(void)
++{
++ int i;
++
++ for (i = 0; i < 0x10000; i++) {
++ if ((inb(0x64) & 0x02) == 0)
++ break;
++ udelay(2);
++ }
++}
++
++void machine_emergency_restart(void)
++{
++ int i;
++
++ /* Tell the BIOS if we want cold or warm reboot */
++ *((unsigned short *)__va(0x472)) = reboot_mode;
++
++ for (;;) {
++ /* Could also try the reset bit in the Hammer NB */
++ switch (reboot_type) {
++ case BOOT_KBD:
++ for (i = 0; i < 10; i++) {
++ kb_wait();
++ udelay(50);
++ outb(0xfe, 0x64); /* pulse reset low */
++ udelay(50);
++ }
++
++ case BOOT_TRIPLE:
++ load_idt((const struct desc_ptr *)&no_idt);
++ __asm__ __volatile__("int3");
++
++ reboot_type = BOOT_KBD;
++ break;
++
++#ifdef CONFIG_X86_32
++ case BOOT_BIOS:
++ machine_real_restart(jump_to_bios, sizeof(jump_to_bios));
++
++ reboot_type = BOOT_KBD;
++ break;
++#endif
++
++ case BOOT_ACPI:
++ acpi_reboot();
++ reboot_type = BOOT_KBD;
++ break;
++
++
++ case BOOT_EFI:
++ if (efi_enabled)
++ efi.reset_system(reboot_mode ? EFI_RESET_WARM : EFI_RESET_COLD,
++ EFI_SUCCESS, 0, NULL);
++
++ reboot_type = BOOT_KBD;
++ break;
++ }
++ }
++}
++
++void machine_shutdown(void)
++{
++ /* Stop the cpus and apics */
++#ifdef CONFIG_SMP
++ int reboot_cpu_id;
++
++ /* The boot cpu is always logical cpu 0 */
++ reboot_cpu_id = 0;
++
++#ifdef CONFIG_X86_32
++ /* See if there has been given a command line override */
++ if ((reboot_cpu != -1) && (reboot_cpu < NR_CPUS) &&
++ cpu_isset(reboot_cpu, cpu_online_map))
++ reboot_cpu_id = reboot_cpu;
++#endif
++
++ /* Make certain the cpu I'm about to reboot on is online */
++ if (!cpu_isset(reboot_cpu_id, cpu_online_map))
++ reboot_cpu_id = smp_processor_id();
++
++ /* Make certain I only run on the appropriate processor */
++ set_cpus_allowed(current, cpumask_of_cpu(reboot_cpu_id));
++
++ /* O.K Now that I'm on the appropriate processor,
++ * stop all of the others.
++ */
++ smp_send_stop();
++#endif
++
++ lapic_shutdown();
++
++#ifdef CONFIG_X86_IO_APIC
++ disable_IO_APIC();
++#endif
++
++#ifdef CONFIG_HPET_TIMER
++ hpet_disable();
++#endif
++
++#ifdef CONFIG_X86_64
++ pci_iommu_shutdown();
++#endif
++}
++
++void machine_restart(char *__unused)
++{
++ printk("machine restart\n");
++
++ if (!reboot_force)
++ machine_shutdown();
++ machine_emergency_restart();
++}
++
++void machine_halt(void)
++{
++}
++
++void machine_power_off(void)
++{
++ if (pm_power_off) {
++ if (!reboot_force)
++ machine_shutdown();
++ pm_power_off();
++ }
++}
++
++struct machine_ops machine_ops = {
++ .power_off = machine_power_off,
++ .shutdown = machine_shutdown,
++ .emergency_restart = machine_emergency_restart,
++ .restart = machine_restart,
++ .halt = machine_halt
++};
+diff --git a/arch/x86/kernel/reboot_32.c b/arch/x86/kernel/reboot_32.c
+deleted file mode 100644
+index bb1a0f8..0000000
+--- a/arch/x86/kernel/reboot_32.c
++++ /dev/null
+@@ -1,413 +0,0 @@
+-#include <linux/mm.h>
+-#include <linux/module.h>
+-#include <linux/delay.h>
+-#include <linux/init.h>
+-#include <linux/interrupt.h>
+-#include <linux/mc146818rtc.h>
+-#include <linux/efi.h>
+-#include <linux/dmi.h>
+-#include <linux/ctype.h>
+-#include <linux/pm.h>
+-#include <linux/reboot.h>
+-#include <asm/uaccess.h>
+-#include <asm/apic.h>
+-#include <asm/hpet.h>
+-#include <asm/desc.h>
+-#include "mach_reboot.h"
+-#include <asm/reboot_fixups.h>
+-#include <asm/reboot.h>
-
-/*
-- * map a request to scatterlist, return number of sg entries setup. Caller
-- * must make sure sg can hold rq->nr_phys_segments entries
+- * Power off function, if any
- */
--int blk_rq_map_sg(struct request_queue *q, struct request *rq,
-- struct scatterlist *sglist)
--{
-- struct bio_vec *bvec, *bvprv;
-- struct req_iterator iter;
-- struct scatterlist *sg;
-- int nsegs, cluster;
--
-- nsegs = 0;
-- cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
--
-- /*
-- * for each bio in rq
-- */
-- bvprv = NULL;
-- sg = NULL;
-- rq_for_each_segment(bvec, rq, iter) {
-- int nbytes = bvec->bv_len;
--
-- if (bvprv && cluster) {
-- if (sg->length + nbytes > q->max_segment_size)
-- goto new_segment;
+-void (*pm_power_off)(void);
+-EXPORT_SYMBOL(pm_power_off);
-
-- if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
-- goto new_segment;
-- if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
-- goto new_segment;
+-static int reboot_mode;
+-static int reboot_thru_bios;
-
-- sg->length += nbytes;
-- } else {
--new_segment:
-- if (!sg)
-- sg = sglist;
-- else {
-- /*
-- * If the driver previously mapped a shorter
-- * list, we could see a termination bit
-- * prematurely unless it fully inits the sg
-- * table on each mapping. We KNOW that there
-- * must be more entries here or the driver
-- * would be buggy, so force clear the
-- * termination bit to avoid doing a full
-- * sg_init_table() in drivers for each command.
-- */
-- sg->page_link &= ~0x02;
-- sg = sg_next(sg);
+-#ifdef CONFIG_SMP
+-static int reboot_cpu = -1;
+-#endif
+-static int __init reboot_setup(char *str)
+-{
+- while(1) {
+- switch (*str) {
+- case 'w': /* "warm" reboot (no memory testing etc) */
+- reboot_mode = 0x1234;
+- break;
+- case 'c': /* "cold" reboot (with memory testing etc) */
+- reboot_mode = 0x0;
+- break;
+- case 'b': /* "bios" reboot by jumping through the BIOS */
+- reboot_thru_bios = 1;
+- break;
+- case 'h': /* "hard" reboot by toggling RESET and/or crashing the CPU */
+- reboot_thru_bios = 0;
+- break;
+-#ifdef CONFIG_SMP
+- case 's': /* "smp" reboot by executing reset on BSP or other CPU*/
+- if (isdigit(*(str+1))) {
+- reboot_cpu = (int) (*(str+1) - '0');
+- if (isdigit(*(str+2)))
+- reboot_cpu = reboot_cpu*10 + (int)(*(str+2) - '0');
- }
--
-- sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset);
-- nsegs++;
+- /* we will leave sorting out the final value
+- when we are ready to reboot, since we might not
+- have set up boot_cpu_id or smp_num_cpu */
+- break;
+-#endif
- }
-- bvprv = bvec;
-- } /* segments in rq */
--
-- if (sg)
-- sg_mark_end(sg);
--
-- return nsegs;
+- if((str = strchr(str,',')) != NULL)
+- str++;
+- else
+- break;
+- }
+- return 1;
-}
-
--EXPORT_SYMBOL(blk_rq_map_sg);
+-__setup("reboot=", reboot_setup);
-
-/*
-- * the standard queue merge functions, can be overridden with device
-- * specific ones if so desired
+- * Reboot options and system auto-detection code provided by
+- * Dell Inc. so their systems "just work". :-)
- */
-
--static inline int ll_new_mergeable(struct request_queue *q,
-- struct request *req,
-- struct bio *bio)
+-/*
+- * Some machines require the "reboot=b" commandline option, this quirk makes that automatic.
+- */
+-static int __init set_bios_reboot(const struct dmi_system_id *d)
-{
-- int nr_phys_segs = bio_phys_segments(q, bio);
--
-- if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
-- req->cmd_flags |= REQ_NOMERGE;
-- if (req == q->last_merge)
-- q->last_merge = NULL;
-- return 0;
+- if (!reboot_thru_bios) {
+- reboot_thru_bios = 1;
+- printk(KERN_INFO "%s series board detected. Selecting BIOS-method for reboots.\n", d->ident);
- }
--
-- /*
-- * A hw segment is just getting larger, bump just the phys
-- * counter.
-- */
-- req->nr_phys_segments += nr_phys_segs;
-- return 1;
+- return 0;
-}
-
--static inline int ll_new_hw_segment(struct request_queue *q,
-- struct request *req,
-- struct bio *bio)
--{
-- int nr_hw_segs = bio_hw_segments(q, bio);
-- int nr_phys_segs = bio_phys_segments(q, bio);
--
-- if (req->nr_hw_segments + nr_hw_segs > q->max_hw_segments
-- || req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
-- req->cmd_flags |= REQ_NOMERGE;
-- if (req == q->last_merge)
-- q->last_merge = NULL;
-- return 0;
-- }
--
-- /*
-- * This will form the start of a new hw segment. Bump both
-- * counters.
-- */
-- req->nr_hw_segments += nr_hw_segs;
-- req->nr_phys_segments += nr_phys_segs;
-- return 1;
--}
+-static struct dmi_system_id __initdata reboot_dmi_table[] = {
+- { /* Handle problems with rebooting on Dell E520's */
+- .callback = set_bios_reboot,
+- .ident = "Dell E520",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "Dell DM061"),
+- },
+- },
+- { /* Handle problems with rebooting on Dell 1300's */
+- .callback = set_bios_reboot,
+- .ident = "Dell PowerEdge 1300",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 1300/"),
+- },
+- },
+- { /* Handle problems with rebooting on Dell 300's */
+- .callback = set_bios_reboot,
+- .ident = "Dell PowerEdge 300",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 300/"),
+- },
+- },
+- { /* Handle problems with rebooting on Dell Optiplex 745's SFF*/
+- .callback = set_bios_reboot,
+- .ident = "Dell OptiPlex 745",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 745"),
+- DMI_MATCH(DMI_BOARD_NAME, "0WF810"),
+- },
+- },
+- { /* Handle problems with rebooting on Dell 2400's */
+- .callback = set_bios_reboot,
+- .ident = "Dell PowerEdge 2400",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 2400"),
+- },
+- },
+- { /* Handle problems with rebooting on HP laptops */
+- .callback = set_bios_reboot,
+- .ident = "HP Compaq Laptop",
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq"),
+- },
+- },
+- { }
+-};
-
--static int ll_back_merge_fn(struct request_queue *q, struct request *req,
-- struct bio *bio)
+-static int __init reboot_init(void)
-{
-- unsigned short max_sectors;
-- int len;
--
-- if (unlikely(blk_pc_request(req)))
-- max_sectors = q->max_hw_sectors;
-- else
-- max_sectors = q->max_sectors;
--
-- if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
-- req->cmd_flags |= REQ_NOMERGE;
-- if (req == q->last_merge)
-- q->last_merge = NULL;
-- return 0;
-- }
-- if (unlikely(!bio_flagged(req->biotail, BIO_SEG_VALID)))
-- blk_recount_segments(q, req->biotail);
-- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-- blk_recount_segments(q, bio);
-- len = req->biotail->bi_hw_back_size + bio->bi_hw_front_size;
-- if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(req->biotail), __BVEC_START(bio)) &&
-- !BIOVEC_VIRT_OVERSIZE(len)) {
-- int mergeable = ll_new_mergeable(q, req, bio);
--
-- if (mergeable) {
-- if (req->nr_hw_segments == 1)
-- req->bio->bi_hw_front_size = len;
-- if (bio->bi_hw_segments == 1)
-- bio->bi_hw_back_size = len;
-- }
-- return mergeable;
-- }
--
-- return ll_new_hw_segment(q, req, bio);
+- dmi_check_system(reboot_dmi_table);
+- return 0;
-}
-
--static int ll_front_merge_fn(struct request_queue *q, struct request *req,
-- struct bio *bio)
--{
-- unsigned short max_sectors;
-- int len;
--
-- if (unlikely(blk_pc_request(req)))
-- max_sectors = q->max_hw_sectors;
-- else
-- max_sectors = q->max_sectors;
--
--
-- if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
-- req->cmd_flags |= REQ_NOMERGE;
-- if (req == q->last_merge)
-- q->last_merge = NULL;
-- return 0;
-- }
-- len = bio->bi_hw_back_size + req->bio->bi_hw_front_size;
-- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-- blk_recount_segments(q, bio);
-- if (unlikely(!bio_flagged(req->bio, BIO_SEG_VALID)))
-- blk_recount_segments(q, req->bio);
-- if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(req->bio)) &&
-- !BIOVEC_VIRT_OVERSIZE(len)) {
-- int mergeable = ll_new_mergeable(q, req, bio);
--
-- if (mergeable) {
-- if (bio->bi_hw_segments == 1)
-- bio->bi_hw_front_size = len;
-- if (req->nr_hw_segments == 1)
-- req->biotail->bi_hw_back_size = len;
-- }
-- return mergeable;
-- }
+-core_initcall(reboot_init);
-
-- return ll_new_hw_segment(q, req, bio);
--}
+-/* The following code and data reboots the machine by switching to real
+- mode and jumping to the BIOS reset entry point, as if the CPU has
+- really been reset. The previous version asked the keyboard
+- controller to pulse the CPU reset line, which is more thorough, but
+- doesn't work with at least one type of 486 motherboard. It is easy
+- to stop this code working; hence the copious comments. */
-
--static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
-- struct request *next)
+-static unsigned long long
+-real_mode_gdt_entries [3] =
-{
-- int total_phys_segments;
-- int total_hw_segments;
--
-- /*
-- * First check if the either of the requests are re-queued
-- * requests. Can't merge them if they are.
-- */
-- if (req->special || next->special)
-- return 0;
+- 0x0000000000000000ULL, /* Null descriptor */
+- 0x00009a000000ffffULL, /* 16-bit real-mode 64k code at 0x00000000 */
+- 0x000092000100ffffULL /* 16-bit real-mode 64k data at 0x00000100 */
+-};
-
-- /*
-- * Will it become too large?
-- */
-- if ((req->nr_sectors + next->nr_sectors) > q->max_sectors)
-- return 0;
+-static struct Xgt_desc_struct
+-real_mode_gdt = { sizeof (real_mode_gdt_entries) - 1, (long)real_mode_gdt_entries },
+-real_mode_idt = { 0x3ff, 0 },
+-no_idt = { 0, 0 };
-
-- total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
-- if (blk_phys_contig_segment(q, req->biotail, next->bio))
-- total_phys_segments--;
-
-- if (total_phys_segments > q->max_phys_segments)
-- return 0;
+-/* This is 16-bit protected mode code to disable paging and the cache,
+- switch to real mode and jump to the BIOS reset code.
-
-- total_hw_segments = req->nr_hw_segments + next->nr_hw_segments;
-- if (blk_hw_contig_segment(q, req->biotail, next->bio)) {
-- int len = req->biotail->bi_hw_back_size + next->bio->bi_hw_front_size;
-- /*
-- * propagate the combined length to the end of the requests
-- */
-- if (req->nr_hw_segments == 1)
-- req->bio->bi_hw_front_size = len;
-- if (next->nr_hw_segments == 1)
-- next->biotail->bi_hw_back_size = len;
-- total_hw_segments--;
-- }
+- The instruction that switches to real mode by writing to CR0 must be
+- followed immediately by a far jump instruction, which set CS to a
+- valid value for real mode, and flushes the prefetch queue to avoid
+- running instructions that have already been decoded in protected
+- mode.
-
-- if (total_hw_segments > q->max_hw_segments)
-- return 0;
+- Clears all the flags except ET, especially PG (paging), PE
+- (protected-mode enable) and TS (task switch for coprocessor state
+- save). Flushes the TLB after paging has been disabled. Sets CD and
+- NW, to disable the cache on a 486, and invalidates the cache. This
+- is more like the state of a 486 after reset. I don't know if
+- something else should be done for other chips.
-
-- /* Merge is OK... */
-- req->nr_phys_segments = total_phys_segments;
-- req->nr_hw_segments = total_hw_segments;
-- return 1;
--}
+- More could be done here to set up the registers as if a CPU reset had
+- occurred; hopefully real BIOSs don't assume much. */
-
--/*
-- * "plug" the device if there are no outstanding requests: this will
-- * force the transfer to start only after we have put all the requests
-- * on the list.
-- *
-- * This is called with interrupts off and no requests on the queue and
-- * with the queue lock held.
-- */
--void blk_plug_device(struct request_queue *q)
+-static unsigned char real_mode_switch [] =
-{
-- WARN_ON(!irqs_disabled());
--
-- /*
-- * don't plug a stopped queue, it must be paired with blk_start_queue()
-- * which will restart the queueing
-- */
-- if (blk_queue_stopped(q))
-- return;
--
-- if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
-- mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
-- blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
-- }
--}
--
--EXPORT_SYMBOL(blk_plug_device);
+- 0x66, 0x0f, 0x20, 0xc0, /* movl %cr0,%eax */
+- 0x66, 0x83, 0xe0, 0x11, /* andl $0x00000011,%eax */
+- 0x66, 0x0d, 0x00, 0x00, 0x00, 0x60, /* orl $0x60000000,%eax */
+- 0x66, 0x0f, 0x22, 0xc0, /* movl %eax,%cr0 */
+- 0x66, 0x0f, 0x22, 0xd8, /* movl %eax,%cr3 */
+- 0x66, 0x0f, 0x20, 0xc3, /* movl %cr0,%ebx */
+- 0x66, 0x81, 0xe3, 0x00, 0x00, 0x00, 0x60, /* andl $0x60000000,%ebx */
+- 0x74, 0x02, /* jz f */
+- 0x0f, 0x09, /* wbinvd */
+- 0x24, 0x10, /* f: andb $0x10,al */
+- 0x66, 0x0f, 0x22, 0xc0 /* movl %eax,%cr0 */
+-};
+-static unsigned char jump_to_bios [] =
+-{
+- 0xea, 0x00, 0x00, 0xff, 0xff /* ljmp $0xffff,$0x0000 */
+-};
-
-/*
-- * remove the queue from the plugged list, if present. called with
-- * queue lock held and interrupts disabled.
+- * Switch to real mode and then execute the code
+- * specified by the code and length parameters.
+- * We assume that length will aways be less that 100!
- */
--int blk_remove_plug(struct request_queue *q)
+-void machine_real_restart(unsigned char *code, int length)
-{
-- WARN_ON(!irqs_disabled());
+- local_irq_disable();
-
-- if (!test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
-- return 0;
+- /* Write zero to CMOS register number 0x0f, which the BIOS POST
+- routine will recognize as telling it to do a proper reboot. (Well
+- that's what this book in front of me says -- it may only apply to
+- the Phoenix BIOS though, it's not clear). At the same time,
+- disable NMIs by setting the top bit in the CMOS address register,
+- as we're about to do peculiar things to the CPU. I'm not sure if
+- `outb_p' is needed instead of just `outb'. Use it to be on the
+- safe side. (Yes, CMOS_WRITE does outb_p's. - Paul G.)
+- */
+-
+- spin_lock(&rtc_lock);
+- CMOS_WRITE(0x00, 0x8f);
+- spin_unlock(&rtc_lock);
+-
+- /* Remap the kernel at virtual address zero, as well as offset zero
+- from the kernel segment. This assumes the kernel segment starts at
+- virtual address PAGE_OFFSET. */
-
-- del_timer(&q->unplug_timer);
-- return 1;
--}
+- memcpy (swapper_pg_dir, swapper_pg_dir + USER_PGD_PTRS,
+- sizeof (swapper_pg_dir [0]) * KERNEL_PGD_PTRS);
-
--EXPORT_SYMBOL(blk_remove_plug);
+- /*
+- * Use `swapper_pg_dir' as our page directory.
+- */
+- load_cr3(swapper_pg_dir);
-
--/*
-- * remove the plug and let it rip..
-- */
--void __generic_unplug_device(struct request_queue *q)
--{
-- if (unlikely(blk_queue_stopped(q)))
-- return;
+- /* Write 0x1234 to absolute memory location 0x472. The BIOS reads
+- this on booting to tell it to "Bypass memory test (also warm
+- boot)". This seems like a fairly standard thing that gets set by
+- REBOOT.COM programs, and the previous reset routine did this
+- too. */
-
-- if (!blk_remove_plug(q))
-- return;
+- *((unsigned short *)0x472) = reboot_mode;
-
-- q->request_fn(q);
--}
--EXPORT_SYMBOL(__generic_unplug_device);
+- /* For the switch to real mode, copy some code to low memory. It has
+- to be in the first 64k because it is running in 16-bit mode, and it
+- has to have the same physical and virtual address, because it turns
+- off paging. Copy it near the end of the first page, out of the way
+- of BIOS variables. */
-
--/**
-- * generic_unplug_device - fire a request queue
-- * @q: The &struct request_queue in question
-- *
-- * Description:
-- * Linux uses plugging to build bigger requests queues before letting
-- * the device have at them. If a queue is plugged, the I/O scheduler
-- * is still adding and merging requests on the queue. Once the queue
-- * gets unplugged, the request_fn defined for the queue is invoked and
-- * transfers started.
-- **/
--void generic_unplug_device(struct request_queue *q)
--{
-- spin_lock_irq(q->queue_lock);
-- __generic_unplug_device(q);
-- spin_unlock_irq(q->queue_lock);
--}
--EXPORT_SYMBOL(generic_unplug_device);
+- memcpy ((void *) (0x1000 - sizeof (real_mode_switch) - 100),
+- real_mode_switch, sizeof (real_mode_switch));
+- memcpy ((void *) (0x1000 - 100), code, length);
-
--static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
-- struct page *page)
--{
-- struct request_queue *q = bdi->unplug_io_data;
+- /* Set up the IDT for real mode. */
-
-- blk_unplug(q);
--}
+- load_idt(&real_mode_idt);
-
--static void blk_unplug_work(struct work_struct *work)
--{
-- struct request_queue *q =
-- container_of(work, struct request_queue, unplug_work);
+- /* Set up a GDT from which we can load segment descriptors for real
+- mode. The GDT is not used in real mode; it is just needed here to
+- prepare the descriptors. */
-
-- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
-- q->rq.count[READ] + q->rq.count[WRITE]);
+- load_gdt(&real_mode_gdt);
-
-- q->unplug_fn(q);
--}
+- /* Load the data segment registers, and thus the descriptors ready for
+- real mode. The base address of each segment is 0x100, 16 times the
+- selector value being loaded here. This is so that the segment
+- registers don't have to be reloaded after switching to real mode:
+- the values are consistent for real mode operation already. */
-
--static void blk_unplug_timeout(unsigned long data)
--{
-- struct request_queue *q = (struct request_queue *)data;
+- __asm__ __volatile__ ("movl $0x0010,%%eax\n"
+- "\tmovl %%eax,%%ds\n"
+- "\tmovl %%eax,%%es\n"
+- "\tmovl %%eax,%%fs\n"
+- "\tmovl %%eax,%%gs\n"
+- "\tmovl %%eax,%%ss" : : : "eax");
-
-- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
-- q->rq.count[READ] + q->rq.count[WRITE]);
+- /* Jump to the 16-bit code that we copied earlier. It disables paging
+- and the cache, switches to real mode, and jumps to the BIOS reset
+- entry point. */
-
-- kblockd_schedule_work(&q->unplug_work);
+- __asm__ __volatile__ ("ljmp $0x0008,%0"
+- :
+- : "i" ((void *) (0x1000 - sizeof (real_mode_switch) - 100)));
-}
+-#ifdef CONFIG_APM_MODULE
+-EXPORT_SYMBOL(machine_real_restart);
+-#endif
-
--void blk_unplug(struct request_queue *q)
+-static void native_machine_shutdown(void)
-{
-- /*
-- * devices don't necessarily have an ->unplug_fn defined
-- */
-- if (q->unplug_fn) {
-- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
-- q->rq.count[READ] + q->rq.count[WRITE]);
+-#ifdef CONFIG_SMP
+- int reboot_cpu_id;
-
-- q->unplug_fn(q);
+- /* The boot cpu is always logical cpu 0 */
+- reboot_cpu_id = 0;
+-
+- /* See if there has been given a command line override */
+- if ((reboot_cpu != -1) && (reboot_cpu < NR_CPUS) &&
+- cpu_isset(reboot_cpu, cpu_online_map)) {
+- reboot_cpu_id = reboot_cpu;
- }
--}
--EXPORT_SYMBOL(blk_unplug);
-
--/**
-- * blk_start_queue - restart a previously stopped queue
-- * @q: The &struct request_queue in question
-- *
-- * Description:
-- * blk_start_queue() will clear the stop flag on the queue, and call
-- * the request_fn for the queue if it was in a stopped state when
-- * entered. Also see blk_stop_queue(). Queue lock must be held.
-- **/
--void blk_start_queue(struct request_queue *q)
--{
-- WARN_ON(!irqs_disabled());
+- /* Make certain the cpu I'm rebooting on is online */
+- if (!cpu_isset(reboot_cpu_id, cpu_online_map)) {
+- reboot_cpu_id = smp_processor_id();
+- }
-
-- clear_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
+- /* Make certain I only run on the appropriate processor */
+- set_cpus_allowed(current, cpumask_of_cpu(reboot_cpu_id));
-
-- /*
-- * one level of recursion is ok and is much faster than kicking
-- * the unplug handling
+- /* O.K. Now that I'm on the appropriate processor, stop
+- * all of the others, and disable their local APICs.
- */
-- if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
-- q->request_fn(q);
-- clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
-- } else {
-- blk_plug_device(q);
-- kblockd_schedule_work(&q->unplug_work);
-- }
--}
-
--EXPORT_SYMBOL(blk_start_queue);
+- smp_send_stop();
+-#endif /* CONFIG_SMP */
-
--/**
-- * blk_stop_queue - stop a queue
-- * @q: The &struct request_queue in question
-- *
-- * Description:
-- * The Linux block layer assumes that a block driver will consume all
-- * entries on the request queue when the request_fn strategy is called.
-- * Often this will not happen, because of hardware limitations (queue
-- * depth settings). If a device driver gets a 'queue full' response,
-- * or if it simply chooses not to queue more I/O at one point, it can
-- * call this function to prevent the request_fn from being called until
-- * the driver has signalled it's ready to go again. This happens by calling
-- * blk_start_queue() to restart queue operations. Queue lock must be held.
-- **/
--void blk_stop_queue(struct request_queue *q)
--{
-- blk_remove_plug(q);
-- set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
+- lapic_shutdown();
+-
+-#ifdef CONFIG_X86_IO_APIC
+- disable_IO_APIC();
+-#endif
+-#ifdef CONFIG_HPET_TIMER
+- hpet_disable();
+-#endif
-}
--EXPORT_SYMBOL(blk_stop_queue);
-
--/**
-- * blk_sync_queue - cancel any pending callbacks on a queue
-- * @q: the queue
-- *
-- * Description:
-- * The block layer may perform asynchronous callback activity
-- * on a queue, such as calling the unplug function after a timeout.
-- * A block device may call blk_sync_queue to ensure that any
-- * such activity is cancelled, thus allowing it to release resources
-- * that the callbacks might use. The caller must already have made sure
-- * that its ->make_request_fn will not re-add plugging prior to calling
-- * this function.
-- *
-- */
--void blk_sync_queue(struct request_queue *q)
+-void __attribute__((weak)) mach_reboot_fixups(void)
-{
-- del_timer_sync(&q->unplug_timer);
-- kblockd_flush_work(&q->unplug_work);
-}
--EXPORT_SYMBOL(blk_sync_queue);
-
--/**
-- * blk_run_queue - run a single device queue
-- * @q: The queue to run
-- */
--void blk_run_queue(struct request_queue *q)
+-static void native_machine_emergency_restart(void)
-{
-- unsigned long flags;
--
-- spin_lock_irqsave(q->queue_lock, flags);
-- blk_remove_plug(q);
--
-- /*
-- * Only recurse once to avoid overrunning the stack, let the unplug
-- * handling reinvoke the handler shortly if we already got there.
-- */
-- if (!elv_queue_empty(q)) {
-- if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
-- q->request_fn(q);
-- clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
-- } else {
-- blk_plug_device(q);
-- kblockd_schedule_work(&q->unplug_work);
+- if (!reboot_thru_bios) {
+- if (efi_enabled) {
+- efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, NULL);
+- load_idt(&no_idt);
+- __asm__ __volatile__("int3");
+- }
+- /* rebooting needs to touch the page at absolute addr 0 */
+- *((unsigned short *)__va(0x472)) = reboot_mode;
+- for (;;) {
+- mach_reboot_fixups(); /* for board specific fixups */
+- mach_reboot();
+- /* That didn't work - force a triple fault.. */
+- load_idt(&no_idt);
+- __asm__ __volatile__("int3");
- }
- }
+- if (efi_enabled)
+- efi.reset_system(EFI_RESET_WARM, EFI_SUCCESS, 0, NULL);
-
-- spin_unlock_irqrestore(q->queue_lock, flags);
+- machine_real_restart(jump_to_bios, sizeof(jump_to_bios));
-}
--EXPORT_SYMBOL(blk_run_queue);
-
--/**
-- * blk_cleanup_queue: - release a &struct request_queue when it is no longer needed
-- * @kobj: the kobj belonging of the request queue to be released
-- *
-- * Description:
-- * blk_cleanup_queue is the pair to blk_init_queue() or
-- * blk_queue_make_request(). It should be called when a request queue is
-- * being released; typically when a block device is being de-registered.
-- * Currently, its primary task it to free all the &struct request
-- * structures that were allocated to the queue and the queue itself.
-- *
-- * Caveat:
-- * Hopefully the low level driver will have finished any
-- * outstanding requests first...
-- **/
--static void blk_release_queue(struct kobject *kobj)
+-static void native_machine_restart(char * __unused)
-{
-- struct request_queue *q =
-- container_of(kobj, struct request_queue, kobj);
-- struct request_list *rl = &q->rq;
--
-- blk_sync_queue(q);
--
-- if (rl->rq_pool)
-- mempool_destroy(rl->rq_pool);
--
-- if (q->queue_tags)
-- __blk_queue_free_tags(q);
--
-- blk_trace_shutdown(q);
--
-- bdi_destroy(&q->backing_dev_info);
-- kmem_cache_free(requestq_cachep, q);
+- machine_shutdown();
+- machine_emergency_restart();
-}
-
--void blk_put_queue(struct request_queue *q)
+-static void native_machine_halt(void)
-{
-- kobject_put(&q->kobj);
-}
--EXPORT_SYMBOL(blk_put_queue);
-
--void blk_cleanup_queue(struct request_queue * q)
+-static void native_machine_power_off(void)
-{
-- mutex_lock(&q->sysfs_lock);
-- set_bit(QUEUE_FLAG_DEAD, &q->queue_flags);
-- mutex_unlock(&q->sysfs_lock);
--
-- if (q->elevator)
-- elevator_exit(q->elevator);
--
-- blk_put_queue(q);
+- if (pm_power_off) {
+- machine_shutdown();
+- pm_power_off();
+- }
-}
-
--EXPORT_SYMBOL(blk_cleanup_queue);
--
--static int blk_init_free_list(struct request_queue *q)
--{
-- struct request_list *rl = &q->rq;
--
-- rl->count[READ] = rl->count[WRITE] = 0;
-- rl->starved[READ] = rl->starved[WRITE] = 0;
-- rl->elvpriv = 0;
-- init_waitqueue_head(&rl->wait[READ]);
-- init_waitqueue_head(&rl->wait[WRITE]);
--
-- rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
-- mempool_free_slab, request_cachep, q->node);
--
-- if (!rl->rq_pool)
-- return -ENOMEM;
-
-- return 0;
--}
+-struct machine_ops machine_ops = {
+- .power_off = native_machine_power_off,
+- .shutdown = native_machine_shutdown,
+- .emergency_restart = native_machine_emergency_restart,
+- .restart = native_machine_restart,
+- .halt = native_machine_halt,
+-};
-
--struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
+-void machine_power_off(void)
-{
-- return blk_alloc_queue_node(gfp_mask, -1);
+- machine_ops.power_off();
-}
--EXPORT_SYMBOL(blk_alloc_queue);
--
--static struct kobj_type queue_ktype;
-
--struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
+-void machine_shutdown(void)
-{
-- struct request_queue *q;
-- int err;
--
-- q = kmem_cache_alloc_node(requestq_cachep,
-- gfp_mask | __GFP_ZERO, node_id);
-- if (!q)
-- return NULL;
--
-- q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
-- q->backing_dev_info.unplug_io_data = q;
-- err = bdi_init(&q->backing_dev_info);
-- if (err) {
-- kmem_cache_free(requestq_cachep, q);
-- return NULL;
-- }
--
-- init_timer(&q->unplug_timer);
--
-- kobject_set_name(&q->kobj, "%s", "queue");
-- q->kobj.ktype = &queue_ktype;
-- kobject_init(&q->kobj);
--
-- mutex_init(&q->sysfs_lock);
--
-- return q;
+- machine_ops.shutdown();
-}
--EXPORT_SYMBOL(blk_alloc_queue_node);
--
--/**
-- * blk_init_queue - prepare a request queue for use with a block device
-- * @rfn: The function to be called to process requests that have been
-- * placed on the queue.
-- * @lock: Request queue spin lock
-- *
-- * Description:
-- * If a block device wishes to use the standard request handling procedures,
-- * which sorts requests and coalesces adjacent requests, then it must
-- * call blk_init_queue(). The function @rfn will be called when there
-- * are requests on the queue that need to be processed. If the device
-- * supports plugging, then @rfn may not be called immediately when requests
-- * are available on the queue, but may be called at some time later instead.
-- * Plugged queues are generally unplugged when a buffer belonging to one
-- * of the requests on the queue is needed, or due to memory pressure.
-- *
-- * @rfn is not required, or even expected, to remove all requests off the
-- * queue, but only as many as it can handle at a time. If it does leave
-- * requests on the queue, it is responsible for arranging that the requests
-- * get dealt with eventually.
-- *
-- * The queue spin lock must be held while manipulating the requests on the
-- * request queue; this lock will be taken also from interrupt context, so irq
-- * disabling is needed for it.
-- *
-- * Function returns a pointer to the initialized request queue, or NULL if
-- * it didn't succeed.
-- *
-- * Note:
-- * blk_init_queue() must be paired with a blk_cleanup_queue() call
-- * when the block device is deactivated (such as at module unload).
-- **/
-
--struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
+-void machine_emergency_restart(void)
-{
-- return blk_init_queue_node(rfn, lock, -1);
+- machine_ops.emergency_restart();
-}
--EXPORT_SYMBOL(blk_init_queue);
-
--struct request_queue *
--blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
+-void machine_restart(char *cmd)
-{
-- struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
--
-- if (!q)
-- return NULL;
--
-- q->node = node_id;
-- if (blk_init_free_list(q)) {
-- kmem_cache_free(requestq_cachep, q);
-- return NULL;
-- }
--
-- /*
-- * if caller didn't supply a lock, they get per-queue locking with
-- * our embedded lock
-- */
-- if (!lock) {
-- spin_lock_init(&q->__queue_lock);
-- lock = &q->__queue_lock;
-- }
--
-- q->request_fn = rfn;
-- q->prep_rq_fn = NULL;
-- q->unplug_fn = generic_unplug_device;
-- q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
-- q->queue_lock = lock;
--
-- blk_queue_segment_boundary(q, 0xffffffff);
--
-- blk_queue_make_request(q, __make_request);
-- blk_queue_max_segment_size(q, MAX_SEGMENT_SIZE);
--
-- blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
-- blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
--
-- q->sg_reserved_size = INT_MAX;
--
-- /*
-- * all done
-- */
-- if (!elevator_init(q, NULL)) {
-- blk_queue_congestion_threshold(q);
-- return q;
-- }
--
-- blk_put_queue(q);
-- return NULL;
+- machine_ops.restart(cmd);
-}
--EXPORT_SYMBOL(blk_init_queue_node);
-
--int blk_get_queue(struct request_queue *q)
+-void machine_halt(void)
-{
-- if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
-- kobject_get(&q->kobj);
-- return 0;
-- }
--
-- return 1;
+- machine_ops.halt();
-}
+diff --git a/arch/x86/kernel/reboot_64.c b/arch/x86/kernel/reboot_64.c
+deleted file mode 100644
+index 53620a9..0000000
+--- a/arch/x86/kernel/reboot_64.c
++++ /dev/null
+@@ -1,176 +0,0 @@
+-/* Various gunk just to reboot the machine. */
+-#include <linux/module.h>
+-#include <linux/reboot.h>
+-#include <linux/init.h>
+-#include <linux/smp.h>
+-#include <linux/kernel.h>
+-#include <linux/ctype.h>
+-#include <linux/string.h>
+-#include <linux/pm.h>
+-#include <linux/kdebug.h>
+-#include <linux/sched.h>
+-#include <asm/io.h>
+-#include <asm/delay.h>
+-#include <asm/desc.h>
+-#include <asm/hw_irq.h>
+-#include <asm/system.h>
+-#include <asm/pgtable.h>
+-#include <asm/tlbflush.h>
+-#include <asm/apic.h>
+-#include <asm/hpet.h>
+-#include <asm/gart.h>
-
--EXPORT_SYMBOL(blk_get_queue);
--
--static inline void blk_free_request(struct request_queue *q, struct request *rq)
--{
-- if (rq->cmd_flags & REQ_ELVPRIV)
-- elv_put_request(q, rq);
-- mempool_free(rq, q->rq.rq_pool);
--}
+-/*
+- * Power off function, if any
+- */
+-void (*pm_power_off)(void);
+-EXPORT_SYMBOL(pm_power_off);
-
--static struct request *
--blk_alloc_request(struct request_queue *q, int rw, int priv, gfp_t gfp_mask)
+-static long no_idt[3];
+-static enum {
+- BOOT_TRIPLE = 't',
+- BOOT_KBD = 'k'
+-} reboot_type = BOOT_KBD;
+-static int reboot_mode = 0;
+-int reboot_force;
+-
+-/* reboot=t[riple] | k[bd] [, [w]arm | [c]old]
+- warm Don't set the cold reboot flag
+- cold Set the cold reboot flag
+- triple Force a triple fault (init)
+- kbd Use the keyboard controller. cold reset (default)
+- force Avoid anything that could hang.
+- */
+-static int __init reboot_setup(char *str)
-{
-- struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
--
-- if (!rq)
-- return NULL;
+- for (;;) {
+- switch (*str) {
+- case 'w':
+- reboot_mode = 0x1234;
+- break;
-
-- /*
-- * first three bits are identical in rq->cmd_flags and bio->bi_rw,
-- * see bio.h and blkdev.h
-- */
-- rq->cmd_flags = rw | REQ_ALLOCED;
+- case 'c':
+- reboot_mode = 0;
+- break;
-
-- if (priv) {
-- if (unlikely(elv_set_request(q, rq, gfp_mask))) {
-- mempool_free(rq, q->rq.rq_pool);
-- return NULL;
+- case 't':
+- case 'b':
+- case 'k':
+- reboot_type = *str;
+- break;
+- case 'f':
+- reboot_force = 1;
+- break;
- }
-- rq->cmd_flags |= REQ_ELVPRIV;
+- if((str = strchr(str,',')) != NULL)
+- str++;
+- else
+- break;
- }
--
-- return rq;
+- return 1;
-}
-
--/*
-- * ioc_batching returns true if the ioc is a valid batching request and
-- * should be given priority access to a request.
-- */
--static inline int ioc_batching(struct request_queue *q, struct io_context *ioc)
--{
-- if (!ioc)
-- return 0;
--
-- /*
-- * Make sure the process is able to allocate at least 1 request
-- * even if the batch times out, otherwise we could theoretically
-- * lose wakeups.
-- */
-- return ioc->nr_batch_requests == q->nr_batching ||
-- (ioc->nr_batch_requests > 0
-- && time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME));
--}
+-__setup("reboot=", reboot_setup);
-
--/*
-- * ioc_set_batching sets ioc to be a new "batcher" if it is not one. This
-- * will cause the process to be a "batcher" on all queues in the system. This
-- * is the behaviour we want though - once it gets a wakeup it should be given
-- * a nice run.
-- */
--static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
+-static inline void kb_wait(void)
-{
-- if (!ioc || ioc_batching(q, ioc))
-- return;
+- int i;
-
-- ioc->nr_batch_requests = q->nr_batching;
-- ioc->last_waited = jiffies;
+- for (i=0; i<0x10000; i++)
+- if ((inb_p(0x64) & 0x02) == 0)
+- break;
-}
-
--static void __freed_request(struct request_queue *q, int rw)
+-void machine_shutdown(void)
-{
-- struct request_list *rl = &q->rq;
+- unsigned long flags;
-
-- if (rl->count[rw] < queue_congestion_off_threshold(q))
-- blk_clear_queue_congested(q, rw);
+- /* Stop the cpus and apics */
+-#ifdef CONFIG_SMP
+- int reboot_cpu_id;
-
-- if (rl->count[rw] + 1 <= q->nr_requests) {
-- if (waitqueue_active(&rl->wait[rw]))
-- wake_up(&rl->wait[rw]);
+- /* The boot cpu is always logical cpu 0 */
+- reboot_cpu_id = 0;
-
-- blk_clear_queue_full(q, rw);
+- /* Make certain the cpu I'm about to reboot on is online */
+- if (!cpu_isset(reboot_cpu_id, cpu_online_map)) {
+- reboot_cpu_id = smp_processor_id();
- }
--}
--
--/*
-- * A request has just been released. Account for it, update the full and
-- * congestion status, wake up any waiters. Called under q->queue_lock.
-- */
--static void freed_request(struct request_queue *q, int rw, int priv)
--{
-- struct request_list *rl = &q->rq;
--
-- rl->count[rw]--;
-- if (priv)
-- rl->elvpriv--;
--
-- __freed_request(q, rw);
--
-- if (unlikely(rl->starved[rw ^ 1]))
-- __freed_request(q, rw ^ 1);
--}
--
--#define blkdev_free_rq(list) list_entry((list)->next, struct request, queuelist)
--/*
-- * Get a free request, queue_lock must be held.
-- * Returns NULL on failure, with queue_lock held.
-- * Returns !NULL on success, with queue_lock *not held*.
-- */
--static struct request *get_request(struct request_queue *q, int rw_flags,
-- struct bio *bio, gfp_t gfp_mask)
--{
-- struct request *rq = NULL;
-- struct request_list *rl = &q->rq;
-- struct io_context *ioc = NULL;
-- const int rw = rw_flags & 0x01;
-- int may_queue, priv;
--
-- may_queue = elv_may_queue(q, rw_flags);
-- if (may_queue == ELV_MQUEUE_NO)
-- goto rq_starved;
-
-- if (rl->count[rw]+1 >= queue_congestion_on_threshold(q)) {
-- if (rl->count[rw]+1 >= q->nr_requests) {
-- ioc = current_io_context(GFP_ATOMIC, q->node);
-- /*
-- * The queue will fill after this allocation, so set
-- * it as full, and mark this process as "batching".
-- * This process will be allowed to complete a batch of
-- * requests, others will be blocked.
-- */
-- if (!blk_queue_full(q, rw)) {
-- ioc_set_batching(q, ioc);
-- blk_set_queue_full(q, rw);
-- } else {
-- if (may_queue != ELV_MQUEUE_MUST
-- && !ioc_batching(q, ioc)) {
-- /*
-- * The queue is full and the allocating
-- * process is not a "batcher", and not
-- * exempted by the IO scheduler
-- */
-- goto out;
-- }
-- }
-- }
-- blk_set_queue_congested(q, rw);
-- }
+- /* Make certain I only run on the appropriate processor */
+- set_cpus_allowed(current, cpumask_of_cpu(reboot_cpu_id));
-
-- /*
-- * Only allow batching queuers to allocate up to 50% over the defined
-- * limit of requests, otherwise we could have thousands of requests
-- * allocated with any setting of ->nr_requests
+- /* O.K Now that I'm on the appropriate processor,
+- * stop all of the others.
- */
-- if (rl->count[rw] >= (3 * q->nr_requests / 2))
-- goto out;
--
-- rl->count[rw]++;
-- rl->starved[rw] = 0;
--
-- priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
-- if (priv)
-- rl->elvpriv++;
--
-- spin_unlock_irq(q->queue_lock);
+- smp_send_stop();
+-#endif
-
-- rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
-- if (unlikely(!rq)) {
-- /*
-- * Allocation failed presumably due to memory. Undo anything
-- * we might have messed up.
-- *
-- * Allocating task should really be put onto the front of the
-- * wait queue, but this is pretty rare.
-- */
-- spin_lock_irq(q->queue_lock);
-- freed_request(q, rw, priv);
+- local_irq_save(flags);
-
-- /*
-- * in the very unlikely event that allocation failed and no
-- * requests for this direction was pending, mark us starved
-- * so that freeing of a request in the other direction will
-- * notice us. another possible fix would be to split the
-- * rq mempool into READ and WRITE
-- */
--rq_starved:
-- if (unlikely(rl->count[rw] == 0))
-- rl->starved[rw] = 1;
+-#ifndef CONFIG_SMP
+- disable_local_APIC();
+-#endif
-
-- goto out;
-- }
+- disable_IO_APIC();
-
-- /*
-- * ioc may be NULL here, and ioc_batching will be false. That's
-- * OK, if the queue is under the request limit then requests need
-- * not count toward the nr_batch_requests limit. There will always
-- * be some limit enforced by BLK_BATCH_TIME.
-- */
-- if (ioc_batching(q, ioc))
-- ioc->nr_batch_requests--;
--
-- rq_init(q, rq);
+-#ifdef CONFIG_HPET_TIMER
+- hpet_disable();
+-#endif
+- local_irq_restore(flags);
-
-- blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
--out:
-- return rq;
+- pci_iommu_shutdown();
-}
-
--/*
-- * No available requests for this queue, unplug the device and wait for some
-- * requests to become available.
-- *
-- * Called with q->queue_lock held, and returns with it unlocked.
-- */
--static struct request *get_request_wait(struct request_queue *q, int rw_flags,
-- struct bio *bio)
+-void machine_emergency_restart(void)
-{
-- const int rw = rw_flags & 0x01;
-- struct request *rq;
--
-- rq = get_request(q, rw_flags, bio, GFP_NOIO);
-- while (!rq) {
-- DEFINE_WAIT(wait);
-- struct request_list *rl = &q->rq;
--
-- prepare_to_wait_exclusive(&rl->wait[rw], &wait,
-- TASK_UNINTERRUPTIBLE);
--
-- rq = get_request(q, rw_flags, bio, GFP_NOIO);
--
-- if (!rq) {
-- struct io_context *ioc;
--
-- blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
--
-- __generic_unplug_device(q);
-- spin_unlock_irq(q->queue_lock);
-- io_schedule();
--
-- /*
-- * After sleeping, we become a "batching" process and
-- * will be able to allocate at least one request, and
-- * up to a big batch of them for a small period time.
-- * See ioc_batching, ioc_set_batching
-- */
-- ioc = current_io_context(GFP_NOIO, q->node);
-- ioc_set_batching(q, ioc);
+- int i;
-
-- spin_lock_irq(q->queue_lock);
-- }
-- finish_wait(&rl->wait[rw], &wait);
-- }
+- /* Tell the BIOS if we want cold or warm reboot */
+- *((unsigned short *)__va(0x472)) = reboot_mode;
+-
+- for (;;) {
+- /* Could also try the reset bit in the Hammer NB */
+- switch (reboot_type) {
+- case BOOT_KBD:
+- for (i=0; i<10; i++) {
+- kb_wait();
+- udelay(50);
+- outb(0xfe,0x64); /* pulse reset low */
+- udelay(50);
+- }
+-
+- case BOOT_TRIPLE:
+- load_idt((const struct desc_ptr *)&no_idt);
+- __asm__ __volatile__("int3");
-
-- return rq;
+- reboot_type = BOOT_KBD;
+- break;
+- }
+- }
-}
-
--struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
+-void machine_restart(char * __unused)
-{
-- struct request *rq;
--
-- BUG_ON(rw != READ && rw != WRITE);
+- printk("machine restart\n");
-
-- spin_lock_irq(q->queue_lock);
-- if (gfp_mask & __GFP_WAIT) {
-- rq = get_request_wait(q, rw, NULL);
-- } else {
-- rq = get_request(q, rw, NULL, gfp_mask);
-- if (!rq)
-- spin_unlock_irq(q->queue_lock);
+- if (!reboot_force) {
+- machine_shutdown();
- }
-- /* q->queue_lock is unlocked at this point */
--
-- return rq;
+- machine_emergency_restart();
-}
--EXPORT_SYMBOL(blk_get_request);
-
--/**
-- * blk_start_queueing - initiate dispatch of requests to device
-- * @q: request queue to kick into gear
-- *
-- * This is basically a helper to remove the need to know whether a queue
-- * is plugged or not if someone just wants to initiate dispatch of requests
-- * for this queue.
-- *
-- * The queue lock must be held with interrupts disabled.
-- */
--void blk_start_queueing(struct request_queue *q)
+-void machine_halt(void)
-{
-- if (!blk_queue_plugged(q))
-- q->request_fn(q);
-- else
-- __generic_unplug_device(q);
-}
--EXPORT_SYMBOL(blk_start_queueing);
-
--/**
-- * blk_requeue_request - put a request back on queue
-- * @q: request queue where request should be inserted
-- * @rq: request to be inserted
-- *
-- * Description:
-- * Drivers often keep queueing requests until the hardware cannot accept
-- * more, when that condition happens we need to put the request back
-- * on the queue. Must be called with queue lock held.
-- */
--void blk_requeue_request(struct request_queue *q, struct request *rq)
+-void machine_power_off(void)
-{
-- blk_add_trace_rq(q, rq, BLK_TA_REQUEUE);
--
-- if (blk_rq_tagged(rq))
-- blk_queue_end_tag(q, rq);
--
-- elv_requeue_request(q, rq);
+- if (pm_power_off) {
+- if (!reboot_force) {
+- machine_shutdown();
+- }
+- pm_power_off();
+- }
-}
-
--EXPORT_SYMBOL(blk_requeue_request);
+diff --git a/arch/x86/kernel/reboot_fixups_32.c b/arch/x86/kernel/reboot_fixups_32.c
+index f452726..dec0b5e 100644
+--- a/arch/x86/kernel/reboot_fixups_32.c
++++ b/arch/x86/kernel/reboot_fixups_32.c
+@@ -30,6 +30,19 @@ static void cs5536_warm_reset(struct pci_dev *dev)
+ udelay(50); /* shouldn't get here but be safe and spin a while */
+ }
+
++static void rdc321x_reset(struct pci_dev *dev)
++{
++ unsigned i;
++	/* Voluntarily reset the watchdog timer */
++ outl(0x80003840, 0xCF8);
++ /* Generate a CPU reset on next tick */
++ i = inl(0xCFC);
++ /* Use the minimum timer resolution */
++ i |= 0x1600;
++ outl(i, 0xCFC);
++ outb(1, 0x92);
++}
++
+ struct device_fixup {
+ unsigned int vendor;
+ unsigned int device;
+@@ -40,6 +53,7 @@ static struct device_fixup fixups_table[] = {
+ { PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5530_LEGACY, cs5530a_warm_reset },
+ { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_ISA, cs5536_warm_reset },
+ { PCI_VENDOR_ID_NS, PCI_DEVICE_ID_NS_SC1100_BRIDGE, cs5530a_warm_reset },
++{ PCI_VENDOR_ID_RDC, PCI_DEVICE_ID_RDC_R6030, rdc321x_reset },
+ };
+
+ /*
+diff --git a/arch/x86/kernel/rtc.c b/arch/x86/kernel/rtc.c
+new file mode 100644
+index 0000000..eb9b1a1
+--- /dev/null
++++ b/arch/x86/kernel/rtc.c
+@@ -0,0 +1,204 @@
++/*
++ * RTC related functions
++ */
++#include <linux/acpi.h>
++#include <linux/bcd.h>
++#include <linux/mc146818rtc.h>
++
++#include <asm/time.h>
++#include <asm/vsyscall.h>
++
++#ifdef CONFIG_X86_32
++# define CMOS_YEARS_OFFS 1900
++/*
++ * This is a special lock that is owned by the CPU and holds the index
++ * register we are working with. It is required for NMI access to the
++ * CMOS/RTC registers. See include/asm-i386/mc146818rtc.h for details.
++ */
++volatile unsigned long cmos_lock = 0;
++EXPORT_SYMBOL(cmos_lock);
++#else
++/*
++ * x86-64 systems have only existed since 2002.
++ * This will work up to Dec 31, 2100
++ */
++# define CMOS_YEARS_OFFS 2000
++#endif
++
++DEFINE_SPINLOCK(rtc_lock);
++EXPORT_SYMBOL(rtc_lock);
++
++/*
++ * In order to set the CMOS clock precisely, set_rtc_mmss has to be
++ * called 500 ms after the second nowtime has started, because when
++ * nowtime is written into the registers of the CMOS clock, it will
++ * jump to the next second precisely 500 ms later. Check the Motorola
++ * MC146818A or Dallas DS12887 data sheet for details.
++ *
++ * BUG: This routine does not handle hour overflow properly; it just
++ * sets the minutes. Usually you'll only notice that after reboot!
++ */
++int mach_set_rtc_mmss(unsigned long nowtime)
++{
++ int retval = 0;
++ int real_seconds, real_minutes, cmos_minutes;
++ unsigned char save_control, save_freq_select;
++
++ /* tell the clock it's being set */
++ save_control = CMOS_READ(RTC_CONTROL);
++ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
++
++ /* stop and reset prescaler */
++ save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
++ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
++
++ cmos_minutes = CMOS_READ(RTC_MINUTES);
++ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
++ BCD_TO_BIN(cmos_minutes);
++
++ /*
++ * since we're only adjusting minutes and seconds,
++ * don't interfere with hour overflow. This avoids
++ * messing with unknown time zones but requires your
++ * RTC not to be off by more than 15 minutes
++ */
++ real_seconds = nowtime % 60;
++ real_minutes = nowtime / 60;
++ /* correct for half hour time zone */
++ if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
++ real_minutes += 30;
++ real_minutes %= 60;
++
++ if (abs(real_minutes - cmos_minutes) < 30) {
++ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
++ BIN_TO_BCD(real_seconds);
++ BIN_TO_BCD(real_minutes);
++ }
++ CMOS_WRITE(real_seconds,RTC_SECONDS);
++ CMOS_WRITE(real_minutes,RTC_MINUTES);
++ } else {
++ printk(KERN_WARNING
++ "set_rtc_mmss: can't update from %d to %d\n",
++ cmos_minutes, real_minutes);
++ retval = -1;
++ }
++
++ /* The following flags have to be released exactly in this order,
++ * otherwise the DS12887 (popular MC146818A clone with integrated
++ * battery and quartz) will not reset the oscillator and will not
++ * update precisely 500 ms later. You won't find this mentioned in
++ * the Dallas Semiconductor data sheets, but who believes data
++ * sheets anyway ... -- Markus Kuhn
++ */
++ CMOS_WRITE(save_control, RTC_CONTROL);
++ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
++
++ return retval;
++}
++
++unsigned long mach_get_cmos_time(void)
++{
++ unsigned int year, mon, day, hour, min, sec, century = 0;
++
++ /*
++ * If UIP is clear, then we have >= 244 microseconds before
++ * RTC registers will be updated. Spec sheet says that this
++ * is the reliable way to read RTC registers. If UIP is set
++ * then the register access might be invalid.
++ */
++ while ((CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
++ cpu_relax();
++
++ sec = CMOS_READ(RTC_SECONDS);
++ min = CMOS_READ(RTC_MINUTES);
++ hour = CMOS_READ(RTC_HOURS);
++ day = CMOS_READ(RTC_DAY_OF_MONTH);
++ mon = CMOS_READ(RTC_MONTH);
++ year = CMOS_READ(RTC_YEAR);
++
++#if defined(CONFIG_ACPI) && defined(CONFIG_X86_64)
++ /* CHECKME: Is this really 64bit only ??? */
++ if (acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID &&
++ acpi_gbl_FADT.century)
++ century = CMOS_READ(acpi_gbl_FADT.century);
++#endif
++
++ if (RTC_ALWAYS_BCD || !(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY)) {
++ BCD_TO_BIN(sec);
++ BCD_TO_BIN(min);
++ BCD_TO_BIN(hour);
++ BCD_TO_BIN(day);
++ BCD_TO_BIN(mon);
++ BCD_TO_BIN(year);
++ }
++
++ if (century) {
++ BCD_TO_BIN(century);
++ year += century * 100;
++ printk(KERN_INFO "Extended CMOS year: %d\n", century * 100);
++ } else {
++ year += CMOS_YEARS_OFFS;
++ if (year < 1970)
++ year += 100;
++ }
++
++ return mktime(year, mon, day, hour, min, sec);
++}
++
++/* Routines for accessing the CMOS RAM/RTC. */
++unsigned char rtc_cmos_read(unsigned char addr)
++{
++ unsigned char val;
++
++ lock_cmos_prefix(addr);
++ outb_p(addr, RTC_PORT(0));
++ val = inb_p(RTC_PORT(1));
++ lock_cmos_suffix(addr);
++ return val;
++}
++EXPORT_SYMBOL(rtc_cmos_read);
++
++void rtc_cmos_write(unsigned char val, unsigned char addr)
++{
++ lock_cmos_prefix(addr);
++ outb_p(addr, RTC_PORT(0));
++ outb_p(val, RTC_PORT(1));
++ lock_cmos_suffix(addr);
++}
++EXPORT_SYMBOL(rtc_cmos_write);
++
++static int set_rtc_mmss(unsigned long nowtime)
++{
++ int retval;
++ unsigned long flags;
++
++ spin_lock_irqsave(&rtc_lock, flags);
++ retval = set_wallclock(nowtime);
++ spin_unlock_irqrestore(&rtc_lock, flags);
++
++ return retval;
++}
++
++/* not static: needed by APM */
++unsigned long read_persistent_clock(void)
++{
++ unsigned long retval, flags;
++
++ spin_lock_irqsave(&rtc_lock, flags);
++ retval = get_wallclock();
++ spin_unlock_irqrestore(&rtc_lock, flags);
++
++ return retval;
++}
++
++int update_persistent_clock(struct timespec now)
++{
++ return set_rtc_mmss(now.tv_sec);
++}
++
++unsigned long long native_read_tsc(void)
++{
++ return __native_read_tsc();
++}
++EXPORT_SYMBOL(native_read_tsc);
++
+diff --git a/arch/x86/kernel/setup64.c b/arch/x86/kernel/setup64.c
+index 3558ac7..309366f 100644
+--- a/arch/x86/kernel/setup64.c
++++ b/arch/x86/kernel/setup64.c
+@@ -24,7 +24,11 @@
+ #include <asm/sections.h>
+ #include <asm/setup.h>
+
++#ifndef CONFIG_DEBUG_BOOT_PARAMS
+ struct boot_params __initdata boot_params;
++#else
++struct boot_params boot_params;
++#endif
+
+ cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
+
+@@ -37,6 +41,8 @@ struct desc_ptr idt_descr = { 256 * 16 - 1, (unsigned long) idt_table };
+ char boot_cpu_stack[IRQSTACKSIZE] __attribute__((section(".bss.page_aligned")));
+
+ unsigned long __supported_pte_mask __read_mostly = ~0UL;
++EXPORT_SYMBOL_GPL(__supported_pte_mask);
++
+ static int do_not_nx __cpuinitdata = 0;
+
+ /* noexec=on|off
+@@ -80,6 +86,43 @@ static int __init nonx32_setup(char *str)
+ __setup("noexec32=", nonx32_setup);
+
+ /*
++ * Copy data used in early init routines from the initial arrays to the
++ * per cpu data areas. These arrays then become expendable and the
++ * *_early_ptr's are zeroed indicating that the static arrays are gone.
++ */
++static void __init setup_per_cpu_maps(void)
++{
++ int cpu;
++
++ for_each_possible_cpu(cpu) {
++#ifdef CONFIG_SMP
++ if (per_cpu_offset(cpu)) {
++#endif
++ per_cpu(x86_cpu_to_apicid, cpu) =
++ x86_cpu_to_apicid_init[cpu];
++ per_cpu(x86_bios_cpu_apicid, cpu) =
++ x86_bios_cpu_apicid_init[cpu];
++#ifdef CONFIG_NUMA
++ per_cpu(x86_cpu_to_node_map, cpu) =
++ x86_cpu_to_node_map_init[cpu];
++#endif
++#ifdef CONFIG_SMP
++ }
++ else
++ printk(KERN_NOTICE "per_cpu_offset zero for cpu %d\n",
++ cpu);
++#endif
++ }
++
++ /* indicate the early static arrays will soon be gone */
++ x86_cpu_to_apicid_early_ptr = NULL;
++ x86_bios_cpu_apicid_early_ptr = NULL;
++#ifdef CONFIG_NUMA
++ x86_cpu_to_node_map_early_ptr = NULL;
++#endif
++}
++
++/*
+ * Great future plan:
+ * Declare PDA itself and support (irqstack,tss,pgd) as per cpu data.
+ * Always point %gs to its beginning
+@@ -100,18 +143,21 @@ void __init setup_per_cpu_areas(void)
+ for_each_cpu_mask (i, cpu_possible_map) {
+ char *ptr;
+
+- if (!NODE_DATA(cpu_to_node(i))) {
++ if (!NODE_DATA(early_cpu_to_node(i))) {
+ printk("cpu with no node %d, num_online_nodes %d\n",
+ i, num_online_nodes());
+ ptr = alloc_bootmem_pages(size);
+ } else {
+- ptr = alloc_bootmem_pages_node(NODE_DATA(cpu_to_node(i)), size);
++ ptr = alloc_bootmem_pages_node(NODE_DATA(early_cpu_to_node(i)), size);
+ }
+ if (!ptr)
+ panic("Cannot allocate cpu data for CPU %d\n", i);
+ cpu_pda(i)->data_offset = ptr - __per_cpu_start;
+ memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
+ }
++
++ /* setup percpu data maps early */
++ setup_per_cpu_maps();
+ }
+
+ void pda_init(int cpu)
+@@ -169,7 +215,8 @@ void syscall_init(void)
+ #endif
+
+ /* Flags to clear on syscall */
+- wrmsrl(MSR_SYSCALL_MASK, EF_TF|EF_DF|EF_IE|0x3000);
++ wrmsrl(MSR_SYSCALL_MASK,
++ X86_EFLAGS_TF|X86_EFLAGS_DF|X86_EFLAGS_IF|X86_EFLAGS_IOPL);
+ }
+
+ void __cpuinit check_efer(void)
+@@ -227,7 +274,7 @@ void __cpuinit cpu_init (void)
+ * and set up the GDT descriptor:
+ */
+ if (cpu)
+- memcpy(cpu_gdt(cpu), cpu_gdt_table, GDT_SIZE);
++ memcpy(get_cpu_gdt_table(cpu), cpu_gdt_table, GDT_SIZE);
+
+ cpu_gdt_descr[cpu].size = GDT_SIZE;
+ load_gdt((const struct desc_ptr *)&cpu_gdt_descr[cpu]);
+@@ -257,10 +304,10 @@ void __cpuinit cpu_init (void)
+ v, cpu);
+ }
+ estacks += PAGE_SIZE << order[v];
+- orig_ist->ist[v] = t->ist[v] = (unsigned long)estacks;
++ orig_ist->ist[v] = t->x86_tss.ist[v] = (unsigned long)estacks;
+ }
+
+- t->io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
++ t->x86_tss.io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
+ /*
+ * <= is required because the CPU will access up to
+ * 8 bits beyond the end of the IO permission bitmap.
+diff --git a/arch/x86/kernel/setup_32.c b/arch/x86/kernel/setup_32.c
+index 9c24b45..62adc5f 100644
+--- a/arch/x86/kernel/setup_32.c
++++ b/arch/x86/kernel/setup_32.c
+@@ -44,9 +44,12 @@
+ #include <linux/crash_dump.h>
+ #include <linux/dmi.h>
+ #include <linux/pfn.h>
++#include <linux/pci.h>
++#include <linux/init_ohci1394_dma.h>
+
+ #include <video/edid.h>
+
++#include <asm/mtrr.h>
+ #include <asm/apic.h>
+ #include <asm/e820.h>
+ #include <asm/mpspec.h>
+@@ -67,14 +70,83 @@
+ address, and must not be in the .bss segment! */
+ unsigned long init_pg_tables_end __initdata = ~0UL;
+
+-int disable_pse __cpuinitdata = 0;
+-
+ /*
+ * Machine setup..
+ */
+-extern struct resource code_resource;
+-extern struct resource data_resource;
+-extern struct resource bss_resource;
++static struct resource data_resource = {
++ .name = "Kernel data",
++ .start = 0,
++ .end = 0,
++ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
++};
++
++static struct resource code_resource = {
++ .name = "Kernel code",
++ .start = 0,
++ .end = 0,
++ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
++};
++
++static struct resource bss_resource = {
++ .name = "Kernel bss",
++ .start = 0,
++ .end = 0,
++ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
++};
++
++static struct resource video_ram_resource = {
++ .name = "Video RAM area",
++ .start = 0xa0000,
++ .end = 0xbffff,
++ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
++};
++
++static struct resource standard_io_resources[] = { {
++ .name = "dma1",
++ .start = 0x0000,
++ .end = 0x001f,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "pic1",
++ .start = 0x0020,
++ .end = 0x0021,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "timer0",
++ .start = 0x0040,
++ .end = 0x0043,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "timer1",
++ .start = 0x0050,
++ .end = 0x0053,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "keyboard",
++ .start = 0x0060,
++ .end = 0x006f,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "dma page reg",
++ .start = 0x0080,
++ .end = 0x008f,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "pic2",
++ .start = 0x00a0,
++ .end = 0x00a1,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "dma2",
++ .start = 0x00c0,
++ .end = 0x00df,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++}, {
++ .name = "fpu",
++ .start = 0x00f0,
++ .end = 0x00ff,
++ .flags = IORESOURCE_BUSY | IORESOURCE_IO
++} };
+
+ /* cpu data as detected by the assembly code in head.S */
+ struct cpuinfo_x86 new_cpu_data __cpuinitdata = { 0, 0, 0, 0, -1, 1, 0, 0, -1 };
+@@ -116,13 +188,17 @@ extern int root_mountflags;
+
+ unsigned long saved_videomode;
+
+-#define RAMDISK_IMAGE_START_MASK 0x07FF
++#define RAMDISK_IMAGE_START_MASK 0x07FF
+ #define RAMDISK_PROMPT_FLAG 0x8000
+-#define RAMDISK_LOAD_FLAG 0x4000
++#define RAMDISK_LOAD_FLAG 0x4000
+
+ static char __initdata command_line[COMMAND_LINE_SIZE];
+
++#ifndef CONFIG_DEBUG_BOOT_PARAMS
+ struct boot_params __initdata boot_params;
++#else
++struct boot_params boot_params;
++#endif
+
+ #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
+ struct edd edd;
+@@ -166,8 +242,7 @@ static int __init parse_mem(char *arg)
+ return -EINVAL;
+
+ if (strcmp(arg, "nopentium") == 0) {
+- clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
+- disable_pse = 1;
++ setup_clear_cpu_cap(X86_FEATURE_PSE);
+ } else {
+ /* If the user specifies memory size, we
+ * limit the BIOS-provided memory map to
+@@ -176,7 +251,7 @@ static int __init parse_mem(char *arg)
+ * trim the existing memory map.
+ */
+ unsigned long long mem_size;
+-
++
+ mem_size = memparse(arg, &arg);
+ limit_regions(mem_size);
+ user_defined_memmap = 1;
+@@ -315,7 +390,7 @@ static void __init reserve_ebda_region(void)
+ unsigned int addr;
+ addr = get_bios_ebda();
+ if (addr)
+- reserve_bootmem(addr, PAGE_SIZE);
++ reserve_bootmem(addr, PAGE_SIZE);
+ }
+
+ #ifndef CONFIG_NEED_MULTIPLE_NODES
+@@ -420,6 +495,100 @@ static inline void __init reserve_crashkernel(void)
+ {}
+ #endif
+
++#ifdef CONFIG_BLK_DEV_INITRD
++
++static bool do_relocate_initrd = false;
++
++static void __init reserve_initrd(void)
++{
++ unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
++ unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
++ unsigned long ramdisk_end = ramdisk_image + ramdisk_size;
++ unsigned long end_of_lowmem = max_low_pfn << PAGE_SHIFT;
++ unsigned long ramdisk_here;
++
++ initrd_start = 0;
++
++ if (!boot_params.hdr.type_of_loader ||
++ !ramdisk_image || !ramdisk_size)
++ return; /* No initrd provided by bootloader */
++
++ if (ramdisk_end < ramdisk_image) {
++ printk(KERN_ERR "initrd wraps around end of memory, "
++ "disabling initrd\n");
++ return;
++ }
++ if (ramdisk_size >= end_of_lowmem/2) {
++ printk(KERN_ERR "initrd too large to handle, "
++ "disabling initrd\n");
++ return;
++ }
++ if (ramdisk_end <= end_of_lowmem) {
++ /* All in lowmem, easy case */
++ reserve_bootmem(ramdisk_image, ramdisk_size);
++ initrd_start = ramdisk_image + PAGE_OFFSET;
++ initrd_end = initrd_start+ramdisk_size;
++ return;
++ }
++
++ /* We need to move the initrd down into lowmem */
++ ramdisk_here = (end_of_lowmem - ramdisk_size) & PAGE_MASK;
++
++ /* Note: this includes all the lowmem currently occupied by
++ the initrd, we rely on that fact to keep the data intact. */
++ reserve_bootmem(ramdisk_here, ramdisk_size);
++ initrd_start = ramdisk_here + PAGE_OFFSET;
++ initrd_end = initrd_start + ramdisk_size;
++
++ do_relocate_initrd = true;
++}
++
++#define MAX_MAP_CHUNK (NR_FIX_BTMAPS << PAGE_SHIFT)
++
++static void __init relocate_initrd(void)
++{
++ unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
++ unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
++ unsigned long end_of_lowmem = max_low_pfn << PAGE_SHIFT;
++ unsigned long ramdisk_here;
++ unsigned long slop, clen, mapaddr;
++ char *p, *q;
++
++ if (!do_relocate_initrd)
++ return;
++
++ ramdisk_here = initrd_start - PAGE_OFFSET;
++
++ q = (char *)initrd_start;
++
++ /* Copy any lowmem portion of the initrd */
++ if (ramdisk_image < end_of_lowmem) {
++ clen = end_of_lowmem - ramdisk_image;
++ p = (char *)__va(ramdisk_image);
++ memcpy(q, p, clen);
++ q += clen;
++ ramdisk_image += clen;
++ ramdisk_size -= clen;
++ }
++
++ /* Copy the highmem portion of the initrd */
++ while (ramdisk_size) {
++ slop = ramdisk_image & ~PAGE_MASK;
++ clen = ramdisk_size;
++ if (clen > MAX_MAP_CHUNK-slop)
++ clen = MAX_MAP_CHUNK-slop;
++ mapaddr = ramdisk_image & PAGE_MASK;
++ p = early_ioremap(mapaddr, clen+slop);
++ memcpy(q, p+slop, clen);
++ early_iounmap(p, clen+slop);
++ q += clen;
++ ramdisk_image += clen;
++ ramdisk_size -= clen;
++ }
++}
++
++#endif /* CONFIG_BLK_DEV_INITRD */
++
+ void __init setup_bootmem_allocator(void)
+ {
+ unsigned long bootmap_size;
+@@ -475,26 +644,10 @@ void __init setup_bootmem_allocator(void)
+ */
+ find_smp_config();
+ #endif
+- numa_kva_reserve();
+ #ifdef CONFIG_BLK_DEV_INITRD
+- if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
+- unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
+- unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
+- unsigned long ramdisk_end = ramdisk_image + ramdisk_size;
+- unsigned long end_of_lowmem = max_low_pfn << PAGE_SHIFT;
+-
+- if (ramdisk_end <= end_of_lowmem) {
+- reserve_bootmem(ramdisk_image, ramdisk_size);
+- initrd_start = ramdisk_image + PAGE_OFFSET;
+- initrd_end = initrd_start+ramdisk_size;
+- } else {
+- printk(KERN_ERR "initrd extends beyond end of memory "
+- "(0x%08lx > 0x%08lx)\ndisabling initrd\n",
+- ramdisk_end, end_of_lowmem);
+- initrd_start = 0;
+- }
+- }
++ reserve_initrd();
+ #endif
++ numa_kva_reserve();
+ reserve_crashkernel();
+ }
+
+@@ -545,17 +698,11 @@ void __init setup_arch(char **cmdline_p)
+ memcpy(&boot_cpu_data, &new_cpu_data, sizeof(new_cpu_data));
+ pre_setup_arch_hook();
+ early_cpu_init();
++ early_ioremap_init();
+
+- /*
+- * FIXME: This isn't an official loader_type right
+- * now but does currently work with elilo.
+- * If we were configured as an EFI kernel, check to make
+- * sure that we were loaded correctly from elilo and that
+- * the system table is valid. If not, then initialize normally.
+- */
+ #ifdef CONFIG_EFI
+- if ((boot_params.hdr.type_of_loader == 0x50) &&
+- boot_params.efi_info.efi_systab)
++ if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
++ "EL32", 4))
+ efi_enabled = 1;
+ #endif
+
+@@ -579,12 +726,9 @@ void __init setup_arch(char **cmdline_p)
+ rd_doload = ((boot_params.hdr.ram_size & RAMDISK_LOAD_FLAG) != 0);
+ #endif
+ ARCH_SETUP
+- if (efi_enabled)
+- efi_init();
+- else {
+- printk(KERN_INFO "BIOS-provided physical RAM map:\n");
+- print_memory_map(memory_setup());
+- }
++
++ printk(KERN_INFO "BIOS-provided physical RAM map:\n");
++ print_memory_map(memory_setup());
+
+ copy_edd();
+
+@@ -612,8 +756,16 @@ void __init setup_arch(char **cmdline_p)
+ strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ *cmdline_p = command_line;
+
++ if (efi_enabled)
++ efi_init();
++
+ max_low_pfn = setup_memory();
+
++ /* update e820 for memory not covered by WB MTRRs */
++ mtrr_bp_init();
++ if (mtrr_trim_uncached_memory(max_pfn))
++ max_low_pfn = setup_memory();
++
+ #ifdef CONFIG_VMI
+ /*
+ * Must be after max_low_pfn is determined, and before kernel
+@@ -636,6 +788,16 @@ void __init setup_arch(char **cmdline_p)
+ smp_alloc_memory(); /* AP processor realmode stacks in low memory*/
+ #endif
+ paging_init();
++
++ /*
++ * NOTE: On x86-32, only from this point on, fixmaps are ready for use.
++ */
++
++#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
++ if (init_ohci1394_dma_early)
++ init_ohci1394_dma_on_all_controllers();
++#endif
++
+ remapped_pgdat_init();
+ sparse_init();
+ zone_sizes_init();
+@@ -644,15 +806,19 @@ void __init setup_arch(char **cmdline_p)
+ * NOTE: at this point the bootmem allocator is fully available.
+ */
+
++#ifdef CONFIG_BLK_DEV_INITRD
++ relocate_initrd();
++#endif
++
+ paravirt_post_allocator_init();
+
+ dmi_scan_machine();
+
++ io_delay_init();
++
+ #ifdef CONFIG_X86_GENERICARCH
+ generic_apic_probe();
+-#endif
+- if (efi_enabled)
+- efi_map_memmap();
++#endif
+
+ #ifdef CONFIG_ACPI
+ /*
+@@ -661,9 +827,7 @@ void __init setup_arch(char **cmdline_p)
+ acpi_boot_table_init();
+ #endif
+
+-#ifdef CONFIG_PCI
+ early_quirks();
+-#endif
+
+ #ifdef CONFIG_ACPI
+ acpi_boot_init();
+@@ -692,3 +856,26 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+ #endif
+ }
++
++/*
++ * Request address space for all standard resources
++ *
++ * This is called just before pcibios_init(), which is also a
++ * subsys_initcall, but is linked in later (in arch/i386/pci/common.c).
++ */
++static int __init request_standard_resources(void)
++{
++ int i;
++
++ printk(KERN_INFO "Setting up standard PCI resources\n");
++ init_iomem_resources(&code_resource, &data_resource, &bss_resource);
++
++ request_resource(&iomem_resource, &video_ram_resource);
++
++ /* request I/O space for devices used on all i[345]86 PCs */
++ for (i = 0; i < ARRAY_SIZE(standard_io_resources); i++)
++ request_resource(&ioport_resource, &standard_io_resources[i]);
++ return 0;
++}
++
++subsys_initcall(request_standard_resources);
+diff --git a/arch/x86/kernel/setup_64.c b/arch/x86/kernel/setup_64.c
+index 30d94d1..77fb87b 100644
+--- a/arch/x86/kernel/setup_64.c
++++ b/arch/x86/kernel/setup_64.c
+@@ -30,6 +30,7 @@
+ #include <linux/crash_dump.h>
+ #include <linux/root_dev.h>
+ #include <linux/pci.h>
++#include <linux/efi.h>
+ #include <linux/acpi.h>
+ #include <linux/kallsyms.h>
+ #include <linux/edd.h>
+@@ -39,10 +40,13 @@
+ #include <linux/dmi.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/ctype.h>
++#include <linux/uaccess.h>
++#include <linux/init_ohci1394_dma.h>
+
+ #include <asm/mtrr.h>
+ #include <asm/uaccess.h>
+ #include <asm/system.h>
++#include <asm/vsyscall.h>
+ #include <asm/io.h>
+ #include <asm/smp.h>
+ #include <asm/msr.h>
+@@ -50,6 +54,7 @@
+ #include <video/edid.h>
+ #include <asm/e820.h>
+ #include <asm/dma.h>
++#include <asm/gart.h>
+ #include <asm/mpspec.h>
+ #include <asm/mmu_context.h>
+ #include <asm/proto.h>
+@@ -59,6 +64,15 @@
+ #include <asm/sections.h>
+ #include <asm/dmi.h>
+ #include <asm/cacheflush.h>
++#include <asm/mce.h>
++#include <asm/ds.h>
++#include <asm/topology.h>
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#define ARCH_SETUP
++#endif
+
+ /*
+ * Machine setup..
+@@ -67,6 +81,8 @@
+ struct cpuinfo_x86 boot_cpu_data __read_mostly;
+ EXPORT_SYMBOL(boot_cpu_data);
+
++__u32 cleared_cpu_caps[NCAPINTS] __cpuinitdata;
++
+ unsigned long mmu_cr4_features;
+
+ /* Boot loader ID as an integer, for the benefit of proc_dointvec */
+@@ -76,7 +92,7 @@ unsigned long saved_video_mode;
+
+ int force_mwait __cpuinitdata;
+
+-/*
++/*
+ * Early DMI memory
+ */
+ int dmi_alloc_index;
+@@ -122,25 +138,27 @@ struct resource standard_io_resources[] = {
+
+ #define IORESOURCE_RAM (IORESOURCE_BUSY | IORESOURCE_MEM)
+
+-struct resource data_resource = {
++static struct resource data_resource = {
+ .name = "Kernel data",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_RAM,
+ };
+-struct resource code_resource = {
++static struct resource code_resource = {
+ .name = "Kernel code",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_RAM,
+ };
+-struct resource bss_resource = {
++static struct resource bss_resource = {
+ .name = "Kernel bss",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_RAM,
+ };
+
++static void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c);
++
+ #ifdef CONFIG_PROC_VMCORE
+ /* elfcorehdr= specifies the location of elf core header
+ * stored by the crashed kernel. This option will be passed
+@@ -166,12 +184,12 @@ contig_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
+ bootmap_size = bootmem_bootmap_pages(end_pfn)<<PAGE_SHIFT;
+ bootmap = find_e820_area(0, end_pfn<<PAGE_SHIFT, bootmap_size);
+ if (bootmap == -1L)
+- panic("Cannot find bootmem map of size %ld\n",bootmap_size);
++ panic("Cannot find bootmem map of size %ld\n", bootmap_size);
+ bootmap_size = init_bootmem(bootmap >> PAGE_SHIFT, end_pfn);
+ e820_register_active_regions(0, start_pfn, end_pfn);
+ free_bootmem_with_active_regions(0, end_pfn);
+ reserve_bootmem(bootmap, bootmap_size);
+-}
++}
+ #endif
+
+ #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
+@@ -205,7 +223,8 @@ static void __init reserve_crashkernel(void)
+ unsigned long long crash_size, crash_base;
+ int ret;
+
+- free_mem = ((unsigned long long)max_low_pfn - min_low_pfn) << PAGE_SHIFT;
++ free_mem =
++ ((unsigned long long)max_low_pfn - min_low_pfn) << PAGE_SHIFT;
+
+ ret = parse_crashkernel(boot_command_line, free_mem,
+ &crash_size, &crash_base);
+@@ -229,33 +248,21 @@ static inline void __init reserve_crashkernel(void)
+ {}
+ #endif
+
+-#define EBDA_ADDR_POINTER 0x40E
-
--/**
-- * blk_insert_request - insert a special request in to a request queue
-- * @q: request queue where request should be inserted
-- * @rq: request to be inserted
-- * @at_head: insert request at head or tail of queue
-- * @data: private data
-- *
-- * Description:
-- * Many block devices need to execute commands asynchronously, so they don't
-- * block the whole kernel from preemption during request execution. This is
-- * accomplished normally by inserting aritficial requests tagged as
-- * REQ_SPECIAL in to the corresponding request queue, and letting them be
-- * scheduled for actual execution by the request queue.
-- *
-- * We have the option of inserting the head or the tail of the queue.
-- * Typically we use the tail for new ioctls and so forth. We use the head
-- * of the queue for things like a QUEUE_FULL message from a device, or a
-- * host that is unable to accept a particular command.
-- */
--void blk_insert_request(struct request_queue *q, struct request *rq,
-- int at_head, void *data)
--{
-- int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
-- unsigned long flags;
+-unsigned __initdata ebda_addr;
+-unsigned __initdata ebda_size;
-
+-static void discover_ebda(void)
++/* Overridden in paravirt.c if CONFIG_PARAVIRT */
++void __attribute__((weak)) __init memory_setup(void)
+ {
- /*
-- * tell I/O scheduler that this isn't a regular read/write (ie it
-- * must not attempt merges on this) and that it acts as a soft
-- * barrier
+- * there is a real-mode segmented pointer pointing to the
+- * 4K EBDA area at 0x40E
- */
-- rq->cmd_type = REQ_TYPE_SPECIAL;
-- rq->cmd_flags |= REQ_SOFTBARRIER;
+- ebda_addr = *(unsigned short *)__va(EBDA_ADDR_POINTER);
+- ebda_addr <<= 4;
-
-- rq->special = data;
+- ebda_size = *(unsigned short *)__va(ebda_addr);
-
-- spin_lock_irqsave(q->queue_lock, flags);
+- /* Round EBDA up to pages */
+- if (ebda_size == 0)
+- ebda_size = 1;
+- ebda_size <<= 10;
+- ebda_size = round_up(ebda_size + (ebda_addr & ~PAGE_MASK), PAGE_SIZE);
+- if (ebda_size > 64*1024)
+- ebda_size = 64*1024;
++ machine_specific_memory_setup();
+ }
+
++/*
++ * setup_arch - architecture-specific boot-time initializations
++ *
++ * Note: On x86_64, fixmaps are ready for use even before this is called.
++ */
+ void __init setup_arch(char **cmdline_p)
+ {
++ unsigned i;
++
+ printk(KERN_INFO "Command line: %s\n", boot_command_line);
+
+ ROOT_DEV = old_decode_dev(boot_params.hdr.root_dev);
+@@ -269,7 +276,15 @@ void __init setup_arch(char **cmdline_p)
+ rd_prompt = ((boot_params.hdr.ram_size & RAMDISK_PROMPT_FLAG) != 0);
+ rd_doload = ((boot_params.hdr.ram_size & RAMDISK_LOAD_FLAG) != 0);
+ #endif
+- setup_memory_region();
++#ifdef CONFIG_EFI
++ if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
++ "EL64", 4))
++ efi_enabled = 1;
++#endif
++
++ ARCH_SETUP
++
++ memory_setup();
+ copy_edd();
+
+ if (!boot_params.hdr.root_flags)
+@@ -293,27 +308,47 @@ void __init setup_arch(char **cmdline_p)
+
+ parse_early_param();
+
++#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
++ if (init_ohci1394_dma_early)
++ init_ohci1394_dma_on_all_controllers();
++#endif
++
+ finish_e820_parsing();
+
++ early_gart_iommu_check();
++
+ e820_register_active_regions(0, 0, -1UL);
+ /*
+ * partially used pages are not usable - thus
+ * we are rounding upwards:
+ */
+ end_pfn = e820_end_of_ram();
++ /* update e820 for memory not covered by WB MTRRs */
++ mtrr_bp_init();
++ if (mtrr_trim_uncached_memory(end_pfn)) {
++ e820_register_active_regions(0, 0, -1UL);
++ end_pfn = e820_end_of_ram();
++ }
++
+ num_physpages = end_pfn;
+
+ check_efer();
+
+- discover_ebda();
-
+ init_memory_mapping(0, (end_pfn_map << PAGE_SHIFT));
++ if (efi_enabled)
++ efi_init();
+
+ dmi_scan_machine();
+
++ io_delay_init();
++
+ #ifdef CONFIG_SMP
+- /* setup to use the static apicid table during kernel startup */
+- x86_cpu_to_apicid_ptr = (void *)&x86_cpu_to_apicid_init;
++ /* setup to use the early static init tables during kernel startup */
++ x86_cpu_to_apicid_early_ptr = (void *)x86_cpu_to_apicid_init;
++ x86_bios_cpu_apicid_early_ptr = (void *)x86_bios_cpu_apicid_init;
++#ifdef CONFIG_NUMA
++ x86_cpu_to_node_map_early_ptr = (void *)x86_cpu_to_node_map_init;
++#endif
+ #endif
+
+ #ifdef CONFIG_ACPI
+@@ -340,48 +375,26 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+
+ #ifdef CONFIG_NUMA
+- numa_initmem_init(0, end_pfn);
++ numa_initmem_init(0, end_pfn);
+ #else
+ contig_initmem_init(0, end_pfn);
+ #endif
+
+- /* Reserve direct mapping */
+- reserve_bootmem_generic(table_start << PAGE_SHIFT,
+- (table_end - table_start) << PAGE_SHIFT);
+-
+- /* reserve kernel */
+- reserve_bootmem_generic(__pa_symbol(&_text),
+- __pa_symbol(&_end) - __pa_symbol(&_text));
++ early_res_to_bootmem();
+
++#ifdef CONFIG_ACPI_SLEEP
+ /*
+- * reserve physical page 0 - it's a special BIOS page on many boxes,
+- * enabling clean reboots, SMP operation, laptop functions.
++ * Reserve low memory region for sleep support.
+ */
+- reserve_bootmem_generic(0, PAGE_SIZE);
+-
+- /* reserve ebda region */
+- if (ebda_addr)
+- reserve_bootmem_generic(ebda_addr, ebda_size);
+-#ifdef CONFIG_NUMA
+- /* reserve nodemap region */
+- if (nodemap_addr)
+- reserve_bootmem_generic(nodemap_addr, nodemap_size);
++ acpi_reserve_bootmem();
+ #endif
+
+-#ifdef CONFIG_SMP
+- /* Reserve SMP trampoline */
+- reserve_bootmem_generic(SMP_TRAMPOLINE_BASE, 2*PAGE_SIZE);
+-#endif
++ if (efi_enabled)
++ efi_reserve_bootmem();
+
+-#ifdef CONFIG_ACPI_SLEEP
+ /*
+- * Reserve low memory region for sleep support.
+- */
+- acpi_reserve_bootmem();
+-#endif
- /*
-- * If command is tagged, release the tag
+- * Find and reserve possible boot-time SMP configuration:
- */
-- if (blk_rq_tagged(rq))
-- blk_queue_end_tag(q, rq);
--
-- drive_stat_acct(rq, 1);
-- __elv_add_request(q, rq, where, 0);
-- blk_start_queueing(q);
-- spin_unlock_irqrestore(q->queue_lock, flags);
--}
--
--EXPORT_SYMBOL(blk_insert_request);
--
--static int __blk_rq_unmap_user(struct bio *bio)
--{
-- int ret = 0;
--
-- if (bio) {
-- if (bio_flagged(bio, BIO_USER_MAPPED))
-- bio_unmap_user(bio);
-- else
-- ret = bio_uncopy_user(bio);
++ * Find and reserve possible boot-time SMP configuration:
++ */
+ find_smp_config();
+ #ifdef CONFIG_BLK_DEV_INITRD
+ if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
+@@ -395,6 +408,8 @@ void __init setup_arch(char **cmdline_p)
+ initrd_start = ramdisk_image + PAGE_OFFSET;
+ initrd_end = initrd_start+ramdisk_size;
+ } else {
++ /* Assumes everything on node 0 */
++ free_bootmem(ramdisk_image, ramdisk_size);
+ printk(KERN_ERR "initrd extends beyond end of memory "
+ "(0x%08lx > 0x%08lx)\ndisabling initrd\n",
+ ramdisk_end, end_of_mem);
+@@ -404,17 +419,10 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+ reserve_crashkernel();
+ paging_init();
++ map_vsyscall();
+
+-#ifdef CONFIG_PCI
+ early_quirks();
+-#endif
+
+- /*
+- * set this early, so we dont allocate cpu0
+- * if MADT list doesnt list BSP first
+- * mpparse.c/MP_processor_info() allocates logical cpu numbers.
+- */
+- cpu_set(0, cpu_present_map);
+ #ifdef CONFIG_ACPI
+ /*
+ * Read APIC and some other early information from ACPI tables.
+@@ -430,25 +438,24 @@ void __init setup_arch(char **cmdline_p)
+ if (smp_found_config)
+ get_smp_config();
+ init_apic_mappings();
++ ioapic_init_mappings();
+
+ /*
+ * We trust e820 completely. No explicit ROM probing in memory.
+- */
+- e820_reserve_resources();
++ */
++ e820_reserve_resources(&code_resource, &data_resource, &bss_resource);
+ e820_mark_nosave_regions();
+
+- {
+- unsigned i;
+ /* request I/O space for devices used on all i[345]86 PCs */
+ for (i = 0; i < ARRAY_SIZE(standard_io_resources); i++)
+ request_resource(&ioport_resource, &standard_io_resources[i]);
- }
+
+ e820_setup_gap();
+
+ #ifdef CONFIG_VT
+ #if defined(CONFIG_VGA_CONSOLE)
+- conswitchp = &vga_con;
++ if (!efi_enabled || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
++ conswitchp = &vga_con;
+ #elif defined(CONFIG_DUMMY_CONSOLE)
+ conswitchp = &dummy_con;
+ #endif
+@@ -479,9 +486,10 @@ static void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
+
+ if (n >= 0x80000005) {
+ cpuid(0x80000005, &dummy, &ebx, &ecx, &edx);
+- printk(KERN_INFO "CPU: L1 I Cache: %dK (%d bytes/line), D cache %dK (%d bytes/line)\n",
+- edx>>24, edx&0xFF, ecx>>24, ecx&0xFF);
+- c->x86_cache_size=(ecx>>24)+(edx>>24);
++ printk(KERN_INFO "CPU: L1 I Cache: %dK (%d bytes/line), "
++ "D cache %dK (%d bytes/line)\n",
++ edx>>24, edx&0xFF, ecx>>24, ecx&0xFF);
++ c->x86_cache_size = (ecx>>24) + (edx>>24);
+ /* On K8 L1 TLB is inclusive, so don't count it */
+ c->x86_tlbsize = 0;
+ }
+@@ -495,11 +503,8 @@ static void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
+ printk(KERN_INFO "CPU: L2 Cache: %dK (%d bytes/line)\n",
+ c->x86_cache_size, ecx & 0xFF);
+ }
+-
+- if (n >= 0x80000007)
+- cpuid(0x80000007, &dummy, &dummy, &dummy, &c->x86_power);
+ if (n >= 0x80000008) {
+- cpuid(0x80000008, &eax, &dummy, &dummy, &dummy);
++ cpuid(0x80000008, &eax, &dummy, &dummy, &dummy);
+ c->x86_virt_bits = (eax >> 8) & 0xff;
+ c->x86_phys_bits = eax & 0xff;
+ }
+@@ -508,14 +513,15 @@ static void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
+ #ifdef CONFIG_NUMA
+ static int nearby_node(int apicid)
+ {
+- int i;
++ int i, node;
++
+ for (i = apicid - 1; i >= 0; i--) {
+- int node = apicid_to_node[i];
++ node = apicid_to_node[i];
+ if (node != NUMA_NO_NODE && node_online(node))
+ return node;
+ }
+ for (i = apicid + 1; i < MAX_LOCAL_APIC; i++) {
+- int node = apicid_to_node[i];
++ node = apicid_to_node[i];
+ if (node != NUMA_NO_NODE && node_online(node))
+ return node;
+ }
+@@ -527,7 +533,7 @@ static int nearby_node(int apicid)
+ * On a AMD dual core setup the lower bits of the APIC id distingush the cores.
+ * Assumes number of cores is a power of two.
+ */
+-static void __init amd_detect_cmp(struct cpuinfo_x86 *c)
++static void __cpuinit amd_detect_cmp(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+ unsigned bits;
+@@ -536,7 +542,54 @@ static void __init amd_detect_cmp(struct cpuinfo_x86 *c)
+ int node = 0;
+ unsigned apicid = hard_smp_processor_id();
+ #endif
+- unsigned ecx = cpuid_ecx(0x80000008);
++ bits = c->x86_coreid_bits;
++
++ /* Low order bits define the core id (index of core in socket) */
++ c->cpu_core_id = c->phys_proc_id & ((1 << bits)-1);
++ /* Convert the APIC ID into the socket ID */
++ c->phys_proc_id = phys_pkg_id(bits);
++
++#ifdef CONFIG_NUMA
++ node = c->phys_proc_id;
++ if (apicid_to_node[apicid] != NUMA_NO_NODE)
++ node = apicid_to_node[apicid];
++ if (!node_online(node)) {
++ /* Two possibilities here:
++ - The CPU is missing memory and no node was created.
++ In that case try picking one from a nearby CPU
++ - The APIC IDs differ from the HyperTransport node IDs
++ which the K8 northbridge parsing fills in.
++ Assume they are all increased by a constant offset,
++ but in the same order as the HT nodeids.
++ If that doesn't result in a usable node fall back to the
++ path for the previous case. */
++
++ int ht_nodeid = apicid - (cpu_data(0).phys_proc_id << bits);
++
++ if (ht_nodeid >= 0 &&
++ apicid_to_node[ht_nodeid] != NUMA_NO_NODE)
++ node = apicid_to_node[ht_nodeid];
++ /* Pick a nearby node */
++ if (!node_online(node))
++ node = nearby_node(apicid);
++ }
++ numa_set_node(cpu, node);
++
++ printk(KERN_INFO "CPU %d/%x -> Node %d\n", cpu, apicid, node);
++#endif
++#endif
++}
++
++static void __cpuinit early_init_amd_mc(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++ unsigned bits, ecx;
++
++ /* Multi core CPU? */
++ if (c->extended_cpuid_level < 0x80000008)
++ return;
++
++ ecx = cpuid_ecx(0x80000008);
+
+ c->x86_max_cores = (ecx & 0xff) + 1;
+
+@@ -549,37 +602,8 @@ static void __init amd_detect_cmp(struct cpuinfo_x86 *c)
+ bits++;
+ }
+
+- /* Low order bits define the core id (index of core in socket) */
+- c->cpu_core_id = c->phys_proc_id & ((1 << bits)-1);
+- /* Convert the APIC ID into the socket ID */
+- c->phys_proc_id = phys_pkg_id(bits);
+-
+-#ifdef CONFIG_NUMA
+- node = c->phys_proc_id;
+- if (apicid_to_node[apicid] != NUMA_NO_NODE)
+- node = apicid_to_node[apicid];
+- if (!node_online(node)) {
+- /* Two possibilities here:
+- - The CPU is missing memory and no node was created.
+- In that case try picking one from a nearby CPU
+- - The APIC IDs differ from the HyperTransport node IDs
+- which the K8 northbridge parsing fills in.
+- Assume they are all increased by a constant offset,
+- but in the same order as the HT nodeids.
+- If that doesn't result in a usable node fall back to the
+- path for the previous case. */
+- int ht_nodeid = apicid - (cpu_data(0).phys_proc_id << bits);
+- if (ht_nodeid >= 0 &&
+- apicid_to_node[ht_nodeid] != NUMA_NO_NODE)
+- node = apicid_to_node[ht_nodeid];
+- /* Pick a nearby node */
+- if (!node_online(node))
+- node = nearby_node(apicid);
+- }
+- numa_set_node(cpu, node);
++ c->x86_coreid_bits = bits;
+
+- printk(KERN_INFO "CPU %d/%x -> Node %d\n", cpu, apicid, node);
+-#endif
+ #endif
+ }
+
+@@ -595,8 +619,8 @@ static void __init amd_detect_cmp(struct cpuinfo_x86 *c)
+ /* AMD systems with C1E don't have a working lAPIC timer. Check for that. */
+ static __cpuinit int amd_apic_timer_broken(void)
+ {
+- u32 lo, hi;
+- u32 eax = cpuid_eax(CPUID_PROCESSOR_SIGNATURE);
++ u32 lo, hi, eax = cpuid_eax(CPUID_PROCESSOR_SIGNATURE);
++
+ switch (eax & CPUID_XFAM) {
+ case CPUID_XFAM_K8:
+ if ((eax & CPUID_XMOD) < CPUID_XMOD_REV_F)
+@@ -614,6 +638,15 @@ static __cpuinit int amd_apic_timer_broken(void)
+ return 0;
+ }
+
++static void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
++{
++ early_init_amd_mc(c);
++
++ /* c->x86_power is 8000_0007 edx. Bit 8 is constant TSC */
++ if (c->x86_power & (1<<8))
++ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
++}
++
+ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ {
+ unsigned level;
+@@ -624,7 +657,7 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ /*
+ * Disable TLB flush filter by setting HWCR.FFDIS on K8
+ * bit 6 of msr C001_0015
+- *
++ *
+ * Errata 63 for SH-B3 steppings
+ * Errata 122 for all steppings (F+ have it disabled by default)
+ */
+@@ -637,35 +670,32 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+
+ /* Bit 31 in normal CPUID used for nonstandard 3DNow ID;
+ 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway */
+- clear_bit(0*32+31, &c->x86_capability);
+-
++ clear_bit(0*32+31, (unsigned long *)&c->x86_capability);
++
+ /* On C+ stepping K8 rep microcode works well for copy/memset */
+ level = cpuid_eax(1);
+- if (c->x86 == 15 && ((level >= 0x0f48 && level < 0x0f50) || level >= 0x0f58))
+- set_bit(X86_FEATURE_REP_GOOD, &c->x86_capability);
++ if (c->x86 == 15 && ((level >= 0x0f48 && level < 0x0f50) ||
++ level >= 0x0f58))
++ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+ if (c->x86 == 0x10 || c->x86 == 0x11)
+- set_bit(X86_FEATURE_REP_GOOD, &c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+
+ /* Enable workaround for FXSAVE leak */
+ if (c->x86 >= 6)
+- set_bit(X86_FEATURE_FXSAVE_LEAK, &c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_FXSAVE_LEAK);
+
+ level = get_model_name(c);
+ if (!level) {
+- switch (c->x86) {
++ switch (c->x86) {
+ case 15:
+ /* Should distinguish Models here, but this is only
+ a fallback anyways. */
+ strcpy(c->x86_model_id, "Hammer");
+- break;
+- }
+- }
++ break;
++ }
++ }
+ display_cacheinfo(c);
+
+- /* c->x86_power is 8000_0007 edx. Bit 8 is constant TSC */
+- if (c->x86_power & (1<<8))
+- set_bit(X86_FEATURE_CONSTANT_TSC, &c->x86_capability);
-
-- return ret;
--}
--
--int blk_rq_append_bio(struct request_queue *q, struct request *rq,
-- struct bio *bio)
--{
-- if (!rq->bio)
-- blk_rq_bio_prep(q, rq, bio);
-- else if (!ll_back_merge_fn(q, rq, bio))
-- return -EINVAL;
-- else {
-- rq->biotail->bi_next = bio;
-- rq->biotail = bio;
+ /* Multi core CPU? */
+ if (c->extended_cpuid_level >= 0x80000008)
+ amd_detect_cmp(c);
+@@ -677,41 +707,38 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
+ num_cache_leaves = 3;
+
+ if (c->x86 == 0xf || c->x86 == 0x10 || c->x86 == 0x11)
+- set_bit(X86_FEATURE_K8, &c->x86_capability);
-
-- rq->data_len += bio->bi_size;
-- }
-- return 0;
+- /* RDTSC can be speculated around */
+- clear_bit(X86_FEATURE_SYNC_RDTSC, &c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_K8);
+
+- /* Family 10 doesn't support C states in MWAIT so don't use it */
+- if (c->x86 == 0x10 && !force_mwait)
+- clear_bit(X86_FEATURE_MWAIT, &c->x86_capability);
++ /* MFENCE stops RDTSC speculation */
++ set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
+
+ if (amd_apic_timer_broken())
+ disable_apic_timer = 1;
+ }
+
+-static void __cpuinit detect_ht(struct cpuinfo_x86 *c)
++void __cpuinit detect_ht(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+- u32 eax, ebx, ecx, edx;
+- int index_msb, core_bits;
++ u32 eax, ebx, ecx, edx;
++ int index_msb, core_bits;
+
+ cpuid(1, &eax, &ebx, &ecx, &edx);
+
+
+ if (!cpu_has(c, X86_FEATURE_HT))
+ return;
+- if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
++ if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+ goto out;
+
+ smp_num_siblings = (ebx & 0xff0000) >> 16;
+
+ if (smp_num_siblings == 1) {
+ printk(KERN_INFO "CPU: Hyper-Threading is disabled\n");
+- } else if (smp_num_siblings > 1 ) {
++ } else if (smp_num_siblings > 1) {
+
+ if (smp_num_siblings > NR_CPUS) {
+- printk(KERN_WARNING "CPU: Unsupported number of the siblings %d", smp_num_siblings);
++ printk(KERN_WARNING "CPU: Unsupported number of "
++ "siblings %d", smp_num_siblings);
+ smp_num_siblings = 1;
+ return;
+ }
+@@ -721,7 +748,7 @@ static void __cpuinit detect_ht(struct cpuinfo_x86 *c)
+
+ smp_num_siblings = smp_num_siblings / c->x86_max_cores;
+
+- index_msb = get_count_order(smp_num_siblings) ;
++ index_msb = get_count_order(smp_num_siblings);
+
+ core_bits = get_count_order(c->x86_max_cores);
+
+@@ -730,8 +757,10 @@ static void __cpuinit detect_ht(struct cpuinfo_x86 *c)
+ }
+ out:
+ if ((c->x86_max_cores * smp_num_siblings) > 1) {
+- printk(KERN_INFO "CPU: Physical Processor ID: %d\n", c->phys_proc_id);
+- printk(KERN_INFO "CPU: Processor Core ID: %d\n", c->cpu_core_id);
++ printk(KERN_INFO "CPU: Physical Processor ID: %d\n",
++ c->phys_proc_id);
++ printk(KERN_INFO "CPU: Processor Core ID: %d\n",
++ c->cpu_core_id);
+ }
+
+ #endif
+@@ -773,28 +802,39 @@ static void srat_detect_node(void)
+ #endif
+ }
+
++static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
++{
++ if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
++ (c->x86 == 0x6 && c->x86_model >= 0x0e))
++ set_bit(X86_FEATURE_CONSTANT_TSC, &c->x86_capability);
++}
++
+ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ {
+ /* Cache sizes */
+ unsigned n;
+
+ init_intel_cacheinfo(c);
+- if (c->cpuid_level > 9 ) {
++ if (c->cpuid_level > 9) {
+ unsigned eax = cpuid_eax(10);
+ /* Check for version and the number of counters */
+ if ((eax & 0xff) && (((eax>>8) & 0xff) > 1))
+- set_bit(X86_FEATURE_ARCH_PERFMON, &c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
+ }
+
+ if (cpu_has_ds) {
+ unsigned int l1, l2;
+ rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
+ if (!(l1 & (1<<11)))
+- set_bit(X86_FEATURE_BTS, c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_BTS);
+ if (!(l1 & (1<<12)))
+- set_bit(X86_FEATURE_PEBS, c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_PEBS);
+ }
+
++
++ if (cpu_has_bts)
++ ds_init_intel(c);
++
+ n = c->extended_cpuid_level;
+ if (n >= 0x80000008) {
+ unsigned eax = cpuid_eax(0x80000008);
+@@ -811,14 +851,11 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
+ c->x86_cache_alignment = c->x86_clflush_size * 2;
+ if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
+ (c->x86 == 0x6 && c->x86_model >= 0x0e))
+- set_bit(X86_FEATURE_CONSTANT_TSC, &c->x86_capability);
++ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ if (c->x86 == 6)
+- set_bit(X86_FEATURE_REP_GOOD, &c->x86_capability);
+- if (c->x86 == 15)
+- set_bit(X86_FEATURE_SYNC_RDTSC, &c->x86_capability);
+- else
+- clear_bit(X86_FEATURE_SYNC_RDTSC, &c->x86_capability);
+- c->x86_max_cores = intel_num_cpu_cores(c);
++ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
++ set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
++ c->x86_max_cores = intel_num_cpu_cores(c);
+
+ srat_detect_node();
+ }
+@@ -835,18 +872,12 @@ static void __cpuinit get_cpu_vendor(struct cpuinfo_x86 *c)
+ c->x86_vendor = X86_VENDOR_UNKNOWN;
+ }
+
+-struct cpu_model_info {
+- int vendor;
+- int family;
+- char *model_names[16];
+-};
+-
+ /* Do some early cpuid on the boot CPU to get some parameter that are
+ needed before check_bugs. Everything advanced is in identify_cpu
+ below. */
+-void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
++static void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
+ {
+- u32 tfms;
++ u32 tfms, xlvl;
+
+ c->loops_per_jiffy = loops_per_jiffy;
+ c->x86_cache_size = -1;
+@@ -857,6 +888,7 @@ void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
+ c->x86_clflush_size = 64;
+ c->x86_cache_alignment = c->x86_clflush_size;
+ c->x86_max_cores = 1;
++ c->x86_coreid_bits = 0;
+ c->extended_cpuid_level = 0;
+ memset(&c->x86_capability, 0, sizeof c->x86_capability);
+
+@@ -865,7 +897,7 @@ void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
+ (unsigned int *)&c->x86_vendor_id[0],
+ (unsigned int *)&c->x86_vendor_id[8],
+ (unsigned int *)&c->x86_vendor_id[4]);
+-
++
+ get_cpu_vendor(c);
+
+ /* Initialize the standard set of capabilities */
+@@ -883,7 +915,7 @@ void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
+ c->x86 += (tfms >> 20) & 0xff;
+ if (c->x86 >= 0x6)
+ c->x86_model += ((tfms >> 16) & 0xF) << 4;
+- if (c->x86_capability[0] & (1<<19))
++ if (c->x86_capability[0] & (1<<19))
+ c->x86_clflush_size = ((misc >> 8) & 0xff) * 8;
+ } else {
+ /* Have CPUID level 0 only - unheard of */
+@@ -893,18 +925,6 @@ void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
+ #ifdef CONFIG_SMP
+ c->phys_proc_id = (cpuid_ebx(1) >> 24) & 0xff;
+ #endif
-}
--EXPORT_SYMBOL(blk_rq_append_bio);
-
--static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
-- void __user *ubuf, unsigned int len)
+-/*
+- * This does the hard work of actually picking apart the CPU stuff...
+- */
+-void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
-{
-- unsigned long uaddr;
-- struct bio *bio, *orig_bio;
-- int reading, ret;
--
-- reading = rq_data_dir(rq) == READ;
--
-- /*
-- * if alignment requirement is satisfied, map in user pages for
-- * direct dma. else, set up kernel bounce buffers
-- */
-- uaddr = (unsigned long) ubuf;
-- if (!(uaddr & queue_dma_alignment(q)) && !(len & queue_dma_alignment(q)))
-- bio = bio_map_user(q, NULL, uaddr, len, reading);
-- else
-- bio = bio_copy_user(q, uaddr, len, reading);
--
-- if (IS_ERR(bio))
-- return PTR_ERR(bio);
--
-- orig_bio = bio;
-- blk_queue_bounce(q, &bio);
--
-- /*
-- * We link the bounce buffer in and could have to traverse it
-- * later so we have to get a ref to prevent it from being freed
-- */
-- bio_get(bio);
--
-- ret = blk_rq_append_bio(q, rq, bio);
-- if (!ret)
-- return bio->bi_size;
+- int i;
+- u32 xlvl;
-
-- /* if it was boucned we must call the end io function */
-- bio_endio(bio, 0);
-- __blk_rq_unmap_user(orig_bio);
-- bio_put(bio);
-- return ret;
--}
+- early_identify_cpu(c);
-
--/**
-- * blk_rq_map_user - map user data to a request, for REQ_BLOCK_PC usage
-- * @q: request queue where request should be inserted
-- * @rq: request structure to fill
-- * @ubuf: the user buffer
-- * @len: length of user data
-- *
-- * Description:
-- * Data will be mapped directly for zero copy io, if possible. Otherwise
-- * a kernel bounce buffer is used.
-- *
-- * A matching blk_rq_unmap_user() must be issued at the end of io, while
-- * still in process context.
-- *
-- * Note: The mapped bio may need to be bounced through blk_queue_bounce()
-- * before being submitted to the device, as pages mapped may be out of
-- * reach. It's the callers responsibility to make sure this happens. The
-- * original bio must be passed back in to blk_rq_unmap_user() for proper
-- * unmapping.
-- */
--int blk_rq_map_user(struct request_queue *q, struct request *rq,
-- void __user *ubuf, unsigned long len)
--{
-- unsigned long bytes_read = 0;
-- struct bio *bio = NULL;
-- int ret;
--
-- if (len > (q->max_hw_sectors << 9))
-- return -EINVAL;
-- if (!len || !ubuf)
-- return -EINVAL;
--
-- while (bytes_read != len) {
-- unsigned long map_len, end, start;
--
-- map_len = min_t(unsigned long, len - bytes_read, BIO_MAX_SIZE);
-- end = ((unsigned long)ubuf + map_len + PAGE_SIZE - 1)
-- >> PAGE_SHIFT;
-- start = (unsigned long)ubuf >> PAGE_SHIFT;
--
-- /*
-- * A bad offset could cause us to require BIO_MAX_PAGES + 1
-- * pages. If this happens we just lower the requested
-- * mapping len by a page so that we can fit
-- */
-- if (end - start > BIO_MAX_PAGES)
-- map_len -= PAGE_SIZE;
--
-- ret = __blk_rq_map_user(q, rq, ubuf, map_len);
-- if (ret < 0)
-- goto unmap_rq;
-- if (!bio)
-- bio = rq->bio;
-- bytes_read += ret;
-- ubuf += ret;
+ /* AMD-defined flags: level 0x80000001 */
+ xlvl = cpuid_eax(0x80000000);
+ c->extended_cpuid_level = xlvl;
+@@ -925,6 +945,30 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ c->x86_capability[2] = cpuid_edx(0x80860001);
+ }
+
++ c->extended_cpuid_level = cpuid_eax(0x80000000);
++ if (c->extended_cpuid_level >= 0x80000007)
++ c->x86_power = cpuid_edx(0x80000007);
++
++ switch (c->x86_vendor) {
++ case X86_VENDOR_AMD:
++ early_init_amd(c);
++ break;
++ case X86_VENDOR_INTEL:
++ early_init_intel(c);
++ break;
++ }
++
++}
++
++/*
++ * This does the hard work of actually picking apart the CPU stuff...
++ */
++void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
++{
++ int i;
++
++ early_identify_cpu(c);
++
+ init_scattered_cpuid_features(c);
+
+ c->apicid = phys_pkg_id(0);
+@@ -954,8 +998,7 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ break;
+ }
+
+- select_idle_routine(c);
+- detect_ht(c);
++ detect_ht(c);
+
+ /*
+ * On SMP, boot_cpu_data holds the common feature set between
+@@ -965,32 +1008,56 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ */
+ if (c != &boot_cpu_data) {
+ /* AND the already accumulated flags with these */
+- for (i = 0 ; i < NCAPINTS ; i++)
++ for (i = 0; i < NCAPINTS; i++)
+ boot_cpu_data.x86_capability[i] &= c->x86_capability[i];
+ }
+
++ /* Clear all flags overriden by options */
++ for (i = 0; i < NCAPINTS; i++)
++ c->x86_capability[i] ^= cleared_cpu_caps[i];
++
+ #ifdef CONFIG_X86_MCE
+ mcheck_init(c);
+ #endif
++ select_idle_routine(c);
++
+ if (c != &boot_cpu_data)
+ mtrr_ap_init();
+ #ifdef CONFIG_NUMA
+ numa_add_cpu(smp_processor_id());
+ #endif
++
++}
++
++static __init int setup_noclflush(char *arg)
++{
++ setup_clear_cpu_cap(X86_FEATURE_CLFLSH);
++ return 1;
+ }
+-
++__setup("noclflush", setup_noclflush);
+
+ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
+ {
+ if (c->x86_model_id[0])
+- printk("%s", c->x86_model_id);
++ printk(KERN_INFO "%s", c->x86_model_id);
+
+- if (c->x86_mask || c->cpuid_level >= 0)
+- printk(" stepping %02x\n", c->x86_mask);
++ if (c->x86_mask || c->cpuid_level >= 0)
++ printk(KERN_CONT " stepping %02x\n", c->x86_mask);
+ else
+- printk("\n");
++ printk(KERN_CONT "\n");
+ }
+
++static __init int setup_disablecpuid(char *arg)
++{
++ int bit;
++ if (get_option(&arg, &bit) && bit < NCAPINTS*32)
++ setup_clear_cpu_cap(bit);
++ else
++ return 0;
++ return 1;
++}
++__setup("clearcpuid=", setup_disablecpuid);
++
+ /*
+ * Get CPU information for use by the procfs.
+ */
+@@ -998,9 +1065,9 @@ void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
+ static int show_cpuinfo(struct seq_file *m, void *v)
+ {
+ struct cpuinfo_x86 *c = v;
+- int cpu = 0;
++ int cpu = 0, i;
+
+- /*
++ /*
+ * These flag bits must match the definitions in <asm/cpufeature.h>.
+ * NULL means this bit is undefined or reserved; either way it doesn't
+ * have meaning as far as Linux is concerned. Note that it's important
+@@ -1010,10 +1077,10 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ */
+ static const char *const x86_cap_flags[] = {
+ /* Intel-defined */
+- "fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce",
+- "cx8", "apic", NULL, "sep", "mtrr", "pge", "mca", "cmov",
+- "pat", "pse36", "pn", "clflush", NULL, "dts", "acpi", "mmx",
+- "fxsr", "sse", "sse2", "ss", "ht", "tm", "ia64", "pbe",
++ "fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce",
++ "cx8", "apic", NULL, "sep", "mtrr", "pge", "mca", "cmov",
++ "pat", "pse36", "pn", "clflush", NULL, "dts", "acpi", "mmx",
++ "fxsr", "sse", "sse2", "ss", "ht", "tm", "ia64", "pbe",
+
+ /* AMD-defined */
+ NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+@@ -1080,34 +1147,35 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ cpu = c->cpu_index;
+ #endif
+
+- seq_printf(m,"processor\t: %u\n"
+- "vendor_id\t: %s\n"
+- "cpu family\t: %d\n"
+- "model\t\t: %d\n"
+- "model name\t: %s\n",
+- (unsigned)cpu,
+- c->x86_vendor_id[0] ? c->x86_vendor_id : "unknown",
+- c->x86,
+- (int)c->x86_model,
+- c->x86_model_id[0] ? c->x86_model_id : "unknown");
+-
++ seq_printf(m, "processor\t: %u\n"
++ "vendor_id\t: %s\n"
++ "cpu family\t: %d\n"
++ "model\t\t: %d\n"
++ "model name\t: %s\n",
++ (unsigned)cpu,
++ c->x86_vendor_id[0] ? c->x86_vendor_id : "unknown",
++ c->x86,
++ (int)c->x86_model,
++ c->x86_model_id[0] ? c->x86_model_id : "unknown");
++
+ if (c->x86_mask || c->cpuid_level >= 0)
+ seq_printf(m, "stepping\t: %d\n", c->x86_mask);
+ else
+ seq_printf(m, "stepping\t: unknown\n");
+-
+- if (cpu_has(c,X86_FEATURE_TSC)) {
++
++ if (cpu_has(c, X86_FEATURE_TSC)) {
+ unsigned int freq = cpufreq_quick_get((unsigned)cpu);
++
+ if (!freq)
+ freq = cpu_khz;
+ seq_printf(m, "cpu MHz\t\t: %u.%03u\n",
+- freq / 1000, (freq % 1000));
++ freq / 1000, (freq % 1000));
+ }
+
+ /* Cache size */
+- if (c->x86_cache_size >= 0)
++ if (c->x86_cache_size >= 0)
+ seq_printf(m, "cache size\t: %d KB\n", c->x86_cache_size);
+-
++
+ #ifdef CONFIG_SMP
+ if (smp_num_siblings * c->x86_max_cores > 1) {
+ seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
+@@ -1116,48 +1184,43 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
+ seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
+ }
+-#endif
++#endif
+
+ seq_printf(m,
+- "fpu\t\t: yes\n"
+- "fpu_exception\t: yes\n"
+- "cpuid level\t: %d\n"
+- "wp\t\t: yes\n"
+- "flags\t\t:",
++ "fpu\t\t: yes\n"
++ "fpu_exception\t: yes\n"
++ "cpuid level\t: %d\n"
++ "wp\t\t: yes\n"
++ "flags\t\t:",
+ c->cpuid_level);
+
+- {
+- int i;
+- for ( i = 0 ; i < 32*NCAPINTS ; i++ )
+- if (cpu_has(c, i) && x86_cap_flags[i] != NULL)
+- seq_printf(m, " %s", x86_cap_flags[i]);
- }
+-
++ for (i = 0; i < 32*NCAPINTS; i++)
++ if (cpu_has(c, i) && x86_cap_flags[i] != NULL)
++ seq_printf(m, " %s", x86_cap_flags[i]);
++
+ seq_printf(m, "\nbogomips\t: %lu.%02lu\n",
+ c->loops_per_jiffy/(500000/HZ),
+ (c->loops_per_jiffy/(5000/HZ)) % 100);
+
+- if (c->x86_tlbsize > 0)
++ if (c->x86_tlbsize > 0)
+ seq_printf(m, "TLB size\t: %d 4K pages\n", c->x86_tlbsize);
+ seq_printf(m, "clflush size\t: %d\n", c->x86_clflush_size);
+ seq_printf(m, "cache_alignment\t: %d\n", c->x86_cache_alignment);
+
+- seq_printf(m, "address sizes\t: %u bits physical, %u bits virtual\n",
++ seq_printf(m, "address sizes\t: %u bits physical, %u bits virtual\n",
+ c->x86_phys_bits, c->x86_virt_bits);
+
+ seq_printf(m, "power management:");
+- {
+- unsigned i;
+- for (i = 0; i < 32; i++)
+- if (c->x86_power & (1 << i)) {
+- if (i < ARRAY_SIZE(x86_power_flags) &&
+- x86_power_flags[i])
+- seq_printf(m, "%s%s",
+- x86_power_flags[i][0]?" ":"",
+- x86_power_flags[i]);
+- else
+- seq_printf(m, " [%d]", i);
+- }
++ for (i = 0; i < 32; i++) {
++ if (c->x86_power & (1 << i)) {
++ if (i < ARRAY_SIZE(x86_power_flags) &&
++ x86_power_flags[i])
++ seq_printf(m, "%s%s",
++ x86_power_flags[i][0]?" ":"",
++ x86_power_flags[i]);
++ else
++ seq_printf(m, " [%d]", i);
++ }
+ }
+
+ seq_printf(m, "\n\n");
+@@ -1184,8 +1247,8 @@ static void c_stop(struct seq_file *m, void *v)
+ {
+ }
+
+-struct seq_operations cpuinfo_op = {
+- .start =c_start,
++const struct seq_operations cpuinfo_op = {
++ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = show_cpuinfo,
+diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
+index 9bdd830..caee1f0 100644
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -23,6 +23,7 @@
+ #include <asm/ucontext.h>
+ #include <asm/uaccess.h>
+ #include <asm/i387.h>
++#include <asm/vdso.h>
+ #include "sigframe_32.h"
+
+ #define DEBUG_SIG 0
+@@ -81,14 +82,14 @@ sys_sigaction(int sig, const struct old_sigaction __user *act,
+ }
+
+ asmlinkage int
+-sys_sigaltstack(unsigned long ebx)
++sys_sigaltstack(unsigned long bx)
+ {
+ /* This is needed to make gcc realize it doesn't own the "struct pt_regs" */
+- struct pt_regs *regs = (struct pt_regs *)&ebx;
+- const stack_t __user *uss = (const stack_t __user *)ebx;
+- stack_t __user *uoss = (stack_t __user *)regs->ecx;
++ struct pt_regs *regs = (struct pt_regs *)&bx;
++ const stack_t __user *uss = (const stack_t __user *)bx;
++ stack_t __user *uoss = (stack_t __user *)regs->cx;
+
+- return do_sigaltstack(uss, uoss, regs->esp);
++ return do_sigaltstack(uss, uoss, regs->sp);
+ }
+
+
+@@ -109,12 +110,12 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
+ #define COPY_SEG(seg) \
+ { unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+- regs->x##seg = tmp; }
++ regs->seg = tmp; }
+
+ #define COPY_SEG_STRICT(seg) \
+ { unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+- regs->x##seg = tmp|3; }
++ regs->seg = tmp|3; }
+
+ #define GET_SEG(seg) \
+ { unsigned short tmp; \
+@@ -130,22 +131,22 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
+ COPY_SEG(fs);
+ COPY_SEG(es);
+ COPY_SEG(ds);
+- COPY(edi);
+- COPY(esi);
+- COPY(ebp);
+- COPY(esp);
+- COPY(ebx);
+- COPY(edx);
+- COPY(ecx);
+- COPY(eip);
++ COPY(di);
++ COPY(si);
++ COPY(bp);
++ COPY(sp);
++ COPY(bx);
++ COPY(dx);
++ COPY(cx);
++ COPY(ip);
+ COPY_SEG_STRICT(cs);
+ COPY_SEG_STRICT(ss);
+
+ {
+ unsigned int tmpflags;
+- err |= __get_user(tmpflags, &sc->eflags);
+- regs->eflags = (regs->eflags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
+- regs->orig_eax = -1; /* disable syscall checks */
++ err |= __get_user(tmpflags, &sc->flags);
++ regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
++ regs->orig_ax = -1; /* disable syscall checks */
+ }
+
+ {
+@@ -164,7 +165,7 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
+ }
+ }
+
+- err |= __get_user(*peax, &sc->eax);
++ err |= __get_user(*peax, &sc->ax);
+ return err;
+
+ badframe:
+@@ -174,9 +175,9 @@ badframe:
+ asmlinkage int sys_sigreturn(unsigned long __unused)
+ {
+ struct pt_regs *regs = (struct pt_regs *) &__unused;
+- struct sigframe __user *frame = (struct sigframe __user *)(regs->esp - 8);
++ struct sigframe __user *frame = (struct sigframe __user *)(regs->sp - 8);
+ sigset_t set;
+- int eax;
++ int ax;
+
+ if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+@@ -192,17 +193,20 @@ asmlinkage int sys_sigreturn(unsigned long __unused)
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+- if (restore_sigcontext(regs, &frame->sc, &eax))
++ if (restore_sigcontext(regs, &frame->sc, &ax))
+ goto badframe;
+- return eax;
++ return ax;
+
+ badframe:
+- if (show_unhandled_signals && printk_ratelimit())
+- printk("%s%s[%d] bad frame in sigreturn frame:%p eip:%lx"
+- " esp:%lx oeax:%lx\n",
++ if (show_unhandled_signals && printk_ratelimit()) {
++ printk("%s%s[%d] bad frame in sigreturn frame:%p ip:%lx"
++ " sp:%lx oeax:%lx",
+ task_pid_nr(current) > 1 ? KERN_INFO : KERN_EMERG,
+- current->comm, task_pid_nr(current), frame, regs->eip,
+- regs->esp, regs->orig_eax);
++ current->comm, task_pid_nr(current), frame, regs->ip,
++ regs->sp, regs->orig_ax);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
+ force_sig(SIGSEGV, current);
+ return 0;
+@@ -211,9 +215,9 @@ badframe:
+ asmlinkage int sys_rt_sigreturn(unsigned long __unused)
+ {
+ struct pt_regs *regs = (struct pt_regs *) &__unused;
+- struct rt_sigframe __user *frame = (struct rt_sigframe __user *)(regs->esp - 4);
++ struct rt_sigframe __user *frame = (struct rt_sigframe __user *)(regs->sp - 4);
+ sigset_t set;
+- int eax;
++ int ax;
+
+ if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+@@ -226,13 +230,13 @@ asmlinkage int sys_rt_sigreturn(unsigned long __unused)
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+- if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &eax))
++ if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax))
+ goto badframe;
+
+- if (do_sigaltstack(&frame->uc.uc_stack, NULL, regs->esp) == -EFAULT)
++ if (do_sigaltstack(&frame->uc.uc_stack, NULL, regs->sp) == -EFAULT)
+ goto badframe;
+
+- return eax;
++ return ax;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+@@ -249,27 +253,27 @@ setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate,
+ {
+ int tmp, err = 0;
+
+- err |= __put_user(regs->xfs, (unsigned int __user *)&sc->fs);
++ err |= __put_user(regs->fs, (unsigned int __user *)&sc->fs);
+ savesegment(gs, tmp);
+ err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
+
+- err |= __put_user(regs->xes, (unsigned int __user *)&sc->es);
+- err |= __put_user(regs->xds, (unsigned int __user *)&sc->ds);
+- err |= __put_user(regs->edi, &sc->edi);
+- err |= __put_user(regs->esi, &sc->esi);
+- err |= __put_user(regs->ebp, &sc->ebp);
+- err |= __put_user(regs->esp, &sc->esp);
+- err |= __put_user(regs->ebx, &sc->ebx);
+- err |= __put_user(regs->edx, &sc->edx);
+- err |= __put_user(regs->ecx, &sc->ecx);
+- err |= __put_user(regs->eax, &sc->eax);
++ err |= __put_user(regs->es, (unsigned int __user *)&sc->es);
++ err |= __put_user(regs->ds, (unsigned int __user *)&sc->ds);
++ err |= __put_user(regs->di, &sc->di);
++ err |= __put_user(regs->si, &sc->si);
++ err |= __put_user(regs->bp, &sc->bp);
++ err |= __put_user(regs->sp, &sc->sp);
++ err |= __put_user(regs->bx, &sc->bx);
++ err |= __put_user(regs->dx, &sc->dx);
++ err |= __put_user(regs->cx, &sc->cx);
++ err |= __put_user(regs->ax, &sc->ax);
+ err |= __put_user(current->thread.trap_no, &sc->trapno);
+ err |= __put_user(current->thread.error_code, &sc->err);
+- err |= __put_user(regs->eip, &sc->eip);
+- err |= __put_user(regs->xcs, (unsigned int __user *)&sc->cs);
+- err |= __put_user(regs->eflags, &sc->eflags);
+- err |= __put_user(regs->esp, &sc->esp_at_signal);
+- err |= __put_user(regs->xss, (unsigned int __user *)&sc->ss);
++ err |= __put_user(regs->ip, &sc->ip);
++ err |= __put_user(regs->cs, (unsigned int __user *)&sc->cs);
++ err |= __put_user(regs->flags, &sc->flags);
++ err |= __put_user(regs->sp, &sc->sp_at_signal);
++ err |= __put_user(regs->ss, (unsigned int __user *)&sc->ss);
+
+ tmp = save_i387(fpstate);
+ if (tmp < 0)
+@@ -290,29 +294,36 @@ setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate,
+ static inline void __user *
+ get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+ {
+- unsigned long esp;
++ unsigned long sp;
+
+ /* Default to using normal stack */
+- esp = regs->esp;
++ sp = regs->sp;
++
++ /*
++ * If we are on the alternate signal stack and would overflow it, don't.
++ * Return an always-bogus address instead so we will die with SIGSEGV.
++ */
++ if (on_sig_stack(sp) && !likely(on_sig_stack(sp - frame_size)))
++ return (void __user *) -1L;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+- if (sas_ss_flags(esp) == 0)
+- esp = current->sas_ss_sp + current->sas_ss_size;
++ if (sas_ss_flags(sp) == 0)
++ sp = current->sas_ss_sp + current->sas_ss_size;
+ }
+
+ /* This is the legacy signal stack switching. */
+- else if ((regs->xss & 0xffff) != __USER_DS &&
++ else if ((regs->ss & 0xffff) != __USER_DS &&
+ !(ka->sa.sa_flags & SA_RESTORER) &&
+ ka->sa.sa_restorer) {
+- esp = (unsigned long) ka->sa.sa_restorer;
++ sp = (unsigned long) ka->sa.sa_restorer;
+ }
+
+- esp -= frame_size;
++ sp -= frame_size;
+ /* Align the stack pointer according to the i386 ABI,
+ * i.e. so that on function entry ((sp + 4) & 15) == 0. */
+- esp = ((esp + 4) & -16ul) - 4;
+- return (void __user *) esp;
++ sp = ((sp + 4) & -16ul) - 4;
++ return (void __user *) sp;
+ }
+
+ /* These symbols are defined with the addresses in the vsyscall page.
+@@ -355,9 +366,9 @@ static int setup_frame(int sig, struct k_sigaction *ka,
+ }
+
+ if (current->binfmt->hasvdso)
+- restorer = (void *)VDSO_SYM(&__kernel_sigreturn);
++ restorer = VDSO32_SYMBOL(current->mm->context.vdso, sigreturn);
+ else
+- restorer = (void *)&frame->retcode;
++ restorer = &frame->retcode;
+ if (ka->sa.sa_flags & SA_RESTORER)
+ restorer = ka->sa.sa_restorer;
+
+@@ -379,16 +390,16 @@ static int setup_frame(int sig, struct k_sigaction *ka,
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+- regs->esp = (unsigned long) frame;
+- regs->eip = (unsigned long) ka->sa.sa_handler;
+- regs->eax = (unsigned long) sig;
+- regs->edx = (unsigned long) 0;
+- regs->ecx = (unsigned long) 0;
++ regs->sp = (unsigned long) frame;
++ regs->ip = (unsigned long) ka->sa.sa_handler;
++ regs->ax = (unsigned long) sig;
++ regs->dx = (unsigned long) 0;
++ regs->cx = (unsigned long) 0;
+
+- regs->xds = __USER_DS;
+- regs->xes = __USER_DS;
+- regs->xss = __USER_DS;
+- regs->xcs = __USER_CS;
++ regs->ds = __USER_DS;
++ regs->es = __USER_DS;
++ regs->ss = __USER_DS;
++ regs->cs = __USER_CS;
+
+ /*
+ * Clear TF when entering the signal handler, but
+@@ -396,13 +407,13 @@ static int setup_frame(int sig, struct k_sigaction *ka,
+ * The tracer may want to single-step inside the
+ * handler too.
+ */
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~TF_MASK;
+ if (test_thread_flag(TIF_SINGLESTEP))
+ ptrace_notify(SIGTRAP);
+
+ #if DEBUG_SIG
+ printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
+- current->comm, current->pid, frame, regs->eip, frame->pretcode);
++ current->comm, current->pid, frame, regs->ip, frame->pretcode);
+ #endif
+
+ return 0;
+@@ -442,7 +453,7 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+- err |= __put_user(sas_ss_flags(regs->esp),
++ err |= __put_user(sas_ss_flags(regs->sp),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext(&frame->uc.uc_mcontext, &frame->fpstate,
+@@ -452,13 +463,13 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. */
+- restorer = (void *)VDSO_SYM(&__kernel_rt_sigreturn);
++ restorer = VDSO32_SYMBOL(current->mm->context.vdso, rt_sigreturn);
+ if (ka->sa.sa_flags & SA_RESTORER)
+ restorer = ka->sa.sa_restorer;
+ err |= __put_user(restorer, &frame->pretcode);
+
+ /*
+- * This is movl $,%eax ; int $0x80
++ * This is movl $,%ax ; int $0x80
+ *
+ * WE DO NOT USE IT ANY MORE! It's only left here for historical
+ * reasons and because gdb uses it as a signature to notice
+@@ -472,16 +483,16 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+- regs->esp = (unsigned long) frame;
+- regs->eip = (unsigned long) ka->sa.sa_handler;
+- regs->eax = (unsigned long) usig;
+- regs->edx = (unsigned long) &frame->info;
+- regs->ecx = (unsigned long) &frame->uc;
++ regs->sp = (unsigned long) frame;
++ regs->ip = (unsigned long) ka->sa.sa_handler;
++ regs->ax = (unsigned long) usig;
++ regs->dx = (unsigned long) &frame->info;
++ regs->cx = (unsigned long) &frame->uc;
+
+- regs->xds = __USER_DS;
+- regs->xes = __USER_DS;
+- regs->xss = __USER_DS;
+- regs->xcs = __USER_CS;
++ regs->ds = __USER_DS;
++ regs->es = __USER_DS;
++ regs->ss = __USER_DS;
++ regs->cs = __USER_CS;
+
+ /*
+ * Clear TF when entering the signal handler, but
+@@ -489,13 +500,13 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ * The tracer may want to single-step inside the
+ * handler too.
+ */
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~TF_MASK;
+ if (test_thread_flag(TIF_SINGLESTEP))
+ ptrace_notify(SIGTRAP);
+
+ #if DEBUG_SIG
+ printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
+- current->comm, current->pid, frame, regs->eip, frame->pretcode);
++ current->comm, current->pid, frame, regs->ip, frame->pretcode);
+ #endif
+
+ return 0;
+@@ -516,35 +527,33 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
+ int ret;
+
+ /* Are we from a system call? */
+- if (regs->orig_eax >= 0) {
++ if (regs->orig_ax >= 0) {
+ /* If so, check system call restarting.. */
+- switch (regs->eax) {
++ switch (regs->ax) {
+ case -ERESTART_RESTARTBLOCK:
+ case -ERESTARTNOHAND:
+- regs->eax = -EINTR;
++ regs->ax = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+- regs->eax = -EINTR;
++ regs->ax = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+- regs->eax = regs->orig_eax;
+- regs->eip -= 2;
++ regs->ax = regs->orig_ax;
++ regs->ip -= 2;
+ }
+ }
+
+ /*
+- * If TF is set due to a debugger (PT_DTRACE), clear the TF flag so
+- * that register information in the sigcontext is correct.
++ * If TF is set due to a debugger (TIF_FORCED_TF), clear the TF
++ * flag so that register information in the sigcontext is correct.
+ */
+- if (unlikely(regs->eflags & TF_MASK)
+- && likely(current->ptrace & PT_DTRACE)) {
+- current->ptrace &= ~PT_DTRACE;
+- regs->eflags &= ~TF_MASK;
+- }
++ if (unlikely(regs->flags & X86_EFLAGS_TF) &&
++ likely(test_and_clear_thread_flag(TIF_FORCED_TF)))
++ regs->flags &= ~X86_EFLAGS_TF;
+
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+@@ -569,7 +578,7 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
+ * want to handle. Thus you cannot kill init even with a SIGKILL even by
+ * mistake.
+ */
+-static void fastcall do_signal(struct pt_regs *regs)
++static void do_signal(struct pt_regs *regs)
+ {
+ siginfo_t info;
+ int signr;
+@@ -599,8 +608,8 @@ static void fastcall do_signal(struct pt_regs *regs)
+ * have been cleared if the watchpoint triggered
+ * inside the kernel.
+ */
+- if (unlikely(current->thread.debugreg[7]))
+- set_debugreg(current->thread.debugreg[7], 7);
++ if (unlikely(current->thread.debugreg7))
++ set_debugreg(current->thread.debugreg7, 7);
+
+ /* Whee! Actually deliver the signal. */
+ if (handle_signal(signr, &info, &ka, oldset, regs) == 0) {
+@@ -616,19 +625,19 @@ static void fastcall do_signal(struct pt_regs *regs)
+ }
+
+ /* Did we come from a system call? */
+- if (regs->orig_eax >= 0) {
++ if (regs->orig_ax >= 0) {
+ /* Restart the system call - no handlers present */
+- switch (regs->eax) {
++ switch (regs->ax) {
+ case -ERESTARTNOHAND:
+ case -ERESTARTSYS:
+ case -ERESTARTNOINTR:
+- regs->eax = regs->orig_eax;
+- regs->eip -= 2;
++ regs->ax = regs->orig_ax;
++ regs->ip -= 2;
+ break;
+
+ case -ERESTART_RESTARTBLOCK:
+- regs->eax = __NR_restart_syscall;
+- regs->eip -= 2;
++ regs->ax = __NR_restart_syscall;
++ regs->ip -= 2;
+ break;
+ }
+ }
+@@ -651,13 +660,16 @@ void do_notify_resume(struct pt_regs *regs, void *_unused,
+ {
+ /* Pending single-step? */
+ if (thread_info_flags & _TIF_SINGLESTEP) {
+- regs->eflags |= TF_MASK;
++ regs->flags |= TF_MASK;
+ clear_thread_flag(TIF_SINGLESTEP);
+ }
+
+ /* deal with pending signal delivery */
+ if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
+ do_signal(regs);
++
++ if (thread_info_flags & _TIF_HRTICK_RESCHED)
++ hrtick_resched();
+
+ clear_thread_flag(TIF_IRET);
+ }
+diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
+index ab086b0..7347bb1 100644
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -39,7 +39,7 @@ asmlinkage long
+ sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
+ struct pt_regs *regs)
+ {
+- return do_sigaltstack(uss, uoss, regs->rsp);
++ return do_sigaltstack(uss, uoss, regs->sp);
+ }
+
+
+@@ -64,8 +64,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, unsigned
+
+ #define COPY(x) err |= __get_user(regs->x, &sc->x)
+
+- COPY(rdi); COPY(rsi); COPY(rbp); COPY(rsp); COPY(rbx);
+- COPY(rdx); COPY(rcx); COPY(rip);
++ COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
++ COPY(dx); COPY(cx); COPY(ip);
+ COPY(r8);
+ COPY(r9);
+ COPY(r10);
+@@ -86,9 +86,9 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, unsigned
+
+ {
+ unsigned int tmpflags;
+- err |= __get_user(tmpflags, &sc->eflags);
+- regs->eflags = (regs->eflags & ~0x40DD5) | (tmpflags & 0x40DD5);
+- regs->orig_rax = -1; /* disable syscall checks */
++ err |= __get_user(tmpflags, &sc->flags);
++ regs->flags = (regs->flags & ~0x40DD5) | (tmpflags & 0x40DD5);
++ regs->orig_ax = -1; /* disable syscall checks */
+ }
+
+ {
+@@ -108,7 +108,7 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, unsigned
+ }
+ }
+
+- err |= __get_user(*prax, &sc->rax);
++ err |= __get_user(*prax, &sc->ax);
+ return err;
+
+ badframe:
+@@ -119,9 +119,9 @@ asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
+ {
+ struct rt_sigframe __user *frame;
+ sigset_t set;
+- unsigned long eax;
++ unsigned long ax;
+
+- frame = (struct rt_sigframe __user *)(regs->rsp - 8);
++ frame = (struct rt_sigframe __user *)(regs->sp - 8);
+ if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) {
+ goto badframe;
+ }
+@@ -135,17 +135,17 @@ asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+- if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &eax))
++ if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax))
+ goto badframe;
+
+ #ifdef DEBUG_SIG
+- printk("%d sigreturn rip:%lx rsp:%lx frame:%p rax:%lx\n",current->pid,regs->rip,regs->rsp,frame,eax);
++ printk("%d sigreturn ip:%lx sp:%lx frame:%p ax:%lx\n",current->pid,regs->ip,regs->sp,frame,ax);
+ #endif
+
+- if (do_sigaltstack(&frame->uc.uc_stack, NULL, regs->rsp) == -EFAULT)
++ if (do_sigaltstack(&frame->uc.uc_stack, NULL, regs->sp) == -EFAULT)
+ goto badframe;
+
+- return eax;
++ return ax;
+
+ badframe:
+ signal_fault(regs,frame,"sigreturn");
+@@ -165,14 +165,14 @@ setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned lo
+ err |= __put_user(0, &sc->gs);
+ err |= __put_user(0, &sc->fs);
+
+- err |= __put_user(regs->rdi, &sc->rdi);
+- err |= __put_user(regs->rsi, &sc->rsi);
+- err |= __put_user(regs->rbp, &sc->rbp);
+- err |= __put_user(regs->rsp, &sc->rsp);
+- err |= __put_user(regs->rbx, &sc->rbx);
+- err |= __put_user(regs->rdx, &sc->rdx);
+- err |= __put_user(regs->rcx, &sc->rcx);
+- err |= __put_user(regs->rax, &sc->rax);
++ err |= __put_user(regs->di, &sc->di);
++ err |= __put_user(regs->si, &sc->si);
++ err |= __put_user(regs->bp, &sc->bp);
++ err |= __put_user(regs->sp, &sc->sp);
++ err |= __put_user(regs->bx, &sc->bx);
++ err |= __put_user(regs->dx, &sc->dx);
++ err |= __put_user(regs->cx, &sc->cx);
++ err |= __put_user(regs->ax, &sc->ax);
+ err |= __put_user(regs->r8, &sc->r8);
+ err |= __put_user(regs->r9, &sc->r9);
+ err |= __put_user(regs->r10, &sc->r10);
+@@ -183,8 +183,8 @@ setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned lo
+ err |= __put_user(regs->r15, &sc->r15);
+ err |= __put_user(me->thread.trap_no, &sc->trapno);
+ err |= __put_user(me->thread.error_code, &sc->err);
+- err |= __put_user(regs->rip, &sc->rip);
+- err |= __put_user(regs->eflags, &sc->eflags);
++ err |= __put_user(regs->ip, &sc->ip);
++ err |= __put_user(regs->flags, &sc->flags);
+ err |= __put_user(mask, &sc->oldmask);
+ err |= __put_user(me->thread.cr2, &sc->cr2);
+
+@@ -198,18 +198,18 @@ setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned lo
+ static void __user *
+ get_stack(struct k_sigaction *ka, struct pt_regs *regs, unsigned long size)
+ {
+- unsigned long rsp;
++ unsigned long sp;
+
+ /* Default to using normal stack - redzone*/
+- rsp = regs->rsp - 128;
++ sp = regs->sp - 128;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+- if (sas_ss_flags(rsp) == 0)
+- rsp = current->sas_ss_sp + current->sas_ss_size;
++ if (sas_ss_flags(sp) == 0)
++ sp = current->sas_ss_sp + current->sas_ss_size;
+ }
+
+- return (void __user *)round_down(rsp - size, 16);
++ return (void __user *)round_down(sp - size, 16);
+ }
+
+ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+@@ -246,7 +246,7 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(me->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+- err |= __put_user(sas_ss_flags(regs->rsp),
++ err |= __put_user(sas_ss_flags(regs->sp),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(me->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0], me);
+@@ -271,21 +271,21 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ goto give_sigsegv;
+
+ #ifdef DEBUG_SIG
+- printk("%d old rip %lx old rsp %lx old rax %lx\n", current->pid,regs->rip,regs->rsp,regs->rax);
++ printk("%d old ip %lx old sp %lx old ax %lx\n", current->pid,regs->ip,regs->sp,regs->ax);
+ #endif
+
+ /* Set up registers for signal handler */
+- regs->rdi = sig;
++ regs->di = sig;
+ /* In case the signal handler was declared without prototypes */
+- regs->rax = 0;
++ regs->ax = 0;
+
+ /* This also works for non SA_SIGINFO handlers because they expect the
+ next argument after the signal number on the stack. */
+- regs->rsi = (unsigned long)&frame->info;
+- regs->rdx = (unsigned long)&frame->uc;
+- regs->rip = (unsigned long) ka->sa.sa_handler;
++ regs->si = (unsigned long)&frame->info;
++ regs->dx = (unsigned long)&frame->uc;
++ regs->ip = (unsigned long) ka->sa.sa_handler;
+
+- regs->rsp = (unsigned long)frame;
++ regs->sp = (unsigned long)frame;
+
+ /* Set up the CS register to run signal handlers in 64-bit mode,
+ even if the handler happens to be interrupting 32-bit code. */
+@@ -295,12 +295,12 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ see include/asm-x86_64/uaccess.h for details. */
+ set_fs(USER_DS);
+
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~X86_EFLAGS_TF;
+ if (test_thread_flag(TIF_SINGLESTEP))
+ ptrace_notify(SIGTRAP);
+ #ifdef DEBUG_SIG
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%p\n",
+- current->comm, current->pid, frame, regs->rip, frame->pretcode);
++ current->comm, current->pid, frame, regs->ip, frame->pretcode);
+ #endif
+
+ return 0;
+@@ -321,44 +321,40 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
+ int ret;
+
+ #ifdef DEBUG_SIG
+- printk("handle_signal pid:%d sig:%lu rip:%lx rsp:%lx regs=%p\n",
++ printk("handle_signal pid:%d sig:%lu ip:%lx sp:%lx regs=%p\n",
+ current->pid, sig,
+- regs->rip, regs->rsp, regs);
++ regs->ip, regs->sp, regs);
+ #endif
+
+ /* Are we from a system call? */
+- if ((long)regs->orig_rax >= 0) {
++ if ((long)regs->orig_ax >= 0) {
+ /* If so, check system call restarting.. */
+- switch (regs->rax) {
++ switch (regs->ax) {
+ case -ERESTART_RESTARTBLOCK:
+ case -ERESTARTNOHAND:
+- regs->rax = -EINTR;
++ regs->ax = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+- regs->rax = -EINTR;
++ regs->ax = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+- regs->rax = regs->orig_rax;
+- regs->rip -= 2;
++ regs->ax = regs->orig_ax;
++ regs->ip -= 2;
+ break;
+ }
+ }
+
+ /*
+- * If TF is set due to a debugger (PT_DTRACE), clear the TF
+- * flag so that register information in the sigcontext is
+- * correct.
++ * If TF is set due to a debugger (TIF_FORCED_TF), clear the TF
++ * flag so that register information in the sigcontext is correct.
+ */
+- if (unlikely(regs->eflags & TF_MASK)) {
+- if (likely(current->ptrace & PT_DTRACE)) {
+- current->ptrace &= ~PT_DTRACE;
+- regs->eflags &= ~TF_MASK;
+- }
+- }
++ if (unlikely(regs->flags & X86_EFLAGS_TF) &&
++ likely(test_and_clear_thread_flag(TIF_FORCED_TF)))
++ regs->flags &= ~X86_EFLAGS_TF;
+
+ #ifdef CONFIG_IA32_EMULATION
+ if (test_thread_flag(TIF_IA32)) {
+@@ -430,21 +426,21 @@ static void do_signal(struct pt_regs *regs)
+ }
+
+ /* Did we come from a system call? */
+- if ((long)regs->orig_rax >= 0) {
++ if ((long)regs->orig_ax >= 0) {
+ /* Restart the system call - no handlers present */
+- long res = regs->rax;
++ long res = regs->ax;
+ switch (res) {
+ case -ERESTARTNOHAND:
+ case -ERESTARTSYS:
+ case -ERESTARTNOINTR:
+- regs->rax = regs->orig_rax;
+- regs->rip -= 2;
++ regs->ax = regs->orig_ax;
++ regs->ip -= 2;
+ break;
+ case -ERESTART_RESTARTBLOCK:
+- regs->rax = test_thread_flag(TIF_IA32) ?
++ regs->ax = test_thread_flag(TIF_IA32) ?
+ __NR_ia32_restart_syscall :
+ __NR_restart_syscall;
+- regs->rip -= 2;
++ regs->ip -= 2;
+ break;
+ }
+ }
+@@ -461,13 +457,13 @@ void
+ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
+ {
+ #ifdef DEBUG_SIG
+- printk("do_notify_resume flags:%x rip:%lx rsp:%lx caller:%p pending:%x\n",
+- thread_info_flags, regs->rip, regs->rsp, __builtin_return_address(0),signal_pending(current));
++ printk("do_notify_resume flags:%x ip:%lx sp:%lx caller:%p pending:%x\n",
++ thread_info_flags, regs->ip, regs->sp, __builtin_return_address(0),signal_pending(current));
+ #endif
+
+ /* Pending single-step? */
+ if (thread_info_flags & _TIF_SINGLESTEP) {
+- regs->eflags |= TF_MASK;
++ regs->flags |= X86_EFLAGS_TF;
+ clear_thread_flag(TIF_SINGLESTEP);
+ }
+
+@@ -480,14 +476,20 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
+ /* deal with pending signal delivery */
+ if (thread_info_flags & (_TIF_SIGPENDING|_TIF_RESTORE_SIGMASK))
+ do_signal(regs);
++
++ if (thread_info_flags & _TIF_HRTICK_RESCHED)
++ hrtick_resched();
+ }
+
+ void signal_fault(struct pt_regs *regs, void __user *frame, char *where)
+ {
+ struct task_struct *me = current;
+- if (show_unhandled_signals && printk_ratelimit())
+- printk("%s[%d] bad frame in %s frame:%p rip:%lx rsp:%lx orax:%lx\n",
+- me->comm,me->pid,where,frame,regs->rip,regs->rsp,regs->orig_rax);
++ if (show_unhandled_signals && printk_ratelimit()) {
++ printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx",
++ me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
+ force_sig(SIGSEGV, me);
+ }
+diff --git a/arch/x86/kernel/smp_32.c b/arch/x86/kernel/smp_32.c
+index fcaa026..dc0cde9 100644
+--- a/arch/x86/kernel/smp_32.c
++++ b/arch/x86/kernel/smp_32.c
+@@ -159,7 +159,7 @@ void __send_IPI_shortcut(unsigned int shortcut, int vector)
+ apic_write_around(APIC_ICR, cfg);
+ }
+
+-void fastcall send_IPI_self(int vector)
++void send_IPI_self(int vector)
+ {
+ __send_IPI_shortcut(APIC_DEST_SELF, vector);
+ }
+@@ -223,7 +223,7 @@ void send_IPI_mask_sequence(cpumask_t mask, int vector)
+ */
+
+ local_irq_save(flags);
+- for (query_cpu = 0; query_cpu < NR_CPUS; ++query_cpu) {
++ for_each_possible_cpu(query_cpu) {
+ if (cpu_isset(query_cpu, mask)) {
+ __send_IPI_dest_field(cpu_to_logical_apicid(query_cpu),
+ vector);
+@@ -256,13 +256,14 @@ static DEFINE_SPINLOCK(tlbstate_lock);
+ * We need to reload %cr3 since the page tables may be going
+ * away from under us..
+ */
+-void leave_mm(unsigned long cpu)
++void leave_mm(int cpu)
+ {
+ if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK)
+ BUG();
+ cpu_clear(cpu, per_cpu(cpu_tlbstate, cpu).active_mm->cpu_vm_mask);
+ load_cr3(swapper_pg_dir);
+ }
++EXPORT_SYMBOL_GPL(leave_mm);
+
+ /*
+ *
+@@ -310,7 +311,7 @@ void leave_mm(unsigned long cpu)
+ * 2) Leave the mm if we are in the lazy tlb mode.
+ */
+
+-fastcall void smp_invalidate_interrupt(struct pt_regs *regs)
++void smp_invalidate_interrupt(struct pt_regs *regs)
+ {
+ unsigned long cpu;
+
+@@ -638,13 +639,13 @@ static void native_smp_send_stop(void)
+ * all the work is done automatically when
+ * we return from the interrupt.
+ */
+-fastcall void smp_reschedule_interrupt(struct pt_regs *regs)
++void smp_reschedule_interrupt(struct pt_regs *regs)
+ {
+ ack_APIC_irq();
+ __get_cpu_var(irq_stat).irq_resched_count++;
+ }
+
+-fastcall void smp_call_function_interrupt(struct pt_regs *regs)
++void smp_call_function_interrupt(struct pt_regs *regs)
+ {
+ void (*func) (void *info) = call_data->func;
+ void *info = call_data->info;
+@@ -675,7 +676,7 @@ static int convert_apicid_to_cpu(int apic_id)
+ {
+ int i;
+
+- for (i = 0; i < NR_CPUS; i++) {
++ for_each_possible_cpu(i) {
+ if (per_cpu(x86_cpu_to_apicid, i) == apic_id)
+ return i;
+ }
+diff --git a/arch/x86/kernel/smp_64.c b/arch/x86/kernel/smp_64.c
+index 03fa6ed..2fd74b0 100644
+--- a/arch/x86/kernel/smp_64.c
++++ b/arch/x86/kernel/smp_64.c
+@@ -29,7 +29,7 @@
+ #include <asm/idle.h>
+
+ /*
+- * Smarter SMP flushing macros.
++ * Smarter SMP flushing macros.
+ * c/o Linus Torvalds.
+ *
+ * These mean you can really definitely utterly forget about
+@@ -37,15 +37,15 @@
+ *
+ * Optimizations Manfred Spraul <manfred at colorfullife.com>
+ *
+- * More scalable flush, from Andi Kleen
++ * More scalable flush, from Andi Kleen
+ *
+- * To avoid global state use 8 different call vectors.
+- * Each CPU uses a specific vector to trigger flushes on other
+- * CPUs. Depending on the received vector the target CPUs look into
++ * To avoid global state use 8 different call vectors.
++ * Each CPU uses a specific vector to trigger flushes on other
++ * CPUs. Depending on the received vector the target CPUs look into
+ * the right per cpu variable for the flush data.
+ *
+- * With more than 8 CPUs they are hashed to the 8 available
+- * vectors. The limited global vector space forces us to this right now.
++ * With more than 8 CPUs they are hashed to the 8 available
++ * vectors. The limited global vector space forces us to this right now.
+ * In future when interrupts are split into per CPU domains this could be
+ * fixed, at the cost of triggering multiple IPIs in some cases.
+ */
+@@ -55,7 +55,6 @@ union smp_flush_state {
+ cpumask_t flush_cpumask;
+ struct mm_struct *flush_mm;
+ unsigned long flush_va;
+-#define FLUSH_ALL -1ULL
+ spinlock_t tlbstate_lock;
+ };
+ char pad[SMP_CACHE_BYTES];
+@@ -67,16 +66,17 @@ union smp_flush_state {
+ static DEFINE_PER_CPU(union smp_flush_state, flush_state);
+
+ /*
+- * We cannot call mmdrop() because we are in interrupt context,
++ * We cannot call mmdrop() because we are in interrupt context,
+ * instead update mm->cpu_vm_mask.
+ */
+-static inline void leave_mm(int cpu)
++void leave_mm(int cpu)
+ {
+ if (read_pda(mmu_state) == TLBSTATE_OK)
+ BUG();
+ cpu_clear(cpu, read_pda(active_mm)->cpu_vm_mask);
+ load_cr3(swapper_pg_dir);
+ }
++EXPORT_SYMBOL_GPL(leave_mm);
+
+ /*
+ *
+@@ -85,25 +85,25 @@ static inline void leave_mm(int cpu)
+ * 1) switch_mm() either 1a) or 1b)
+ * 1a) thread switch to a different mm
+ * 1a1) cpu_clear(cpu, old_mm->cpu_vm_mask);
+- * Stop ipi delivery for the old mm. This is not synchronized with
+- * the other cpus, but smp_invalidate_interrupt ignore flush ipis
+- * for the wrong mm, and in the worst case we perform a superfluous
+- * tlb flush.
++ * Stop ipi delivery for the old mm. This is not synchronized with
++ * the other cpus, but smp_invalidate_interrupt ignore flush ipis
++ * for the wrong mm, and in the worst case we perform a superfluous
++ * tlb flush.
+ * 1a2) set cpu mmu_state to TLBSTATE_OK
+- * Now the smp_invalidate_interrupt won't call leave_mm if cpu0
++ * Now the smp_invalidate_interrupt won't call leave_mm if cpu0
+ * was in lazy tlb mode.
+ * 1a3) update cpu active_mm
+- * Now cpu0 accepts tlb flushes for the new mm.
++ * Now cpu0 accepts tlb flushes for the new mm.
+ * 1a4) cpu_set(cpu, new_mm->cpu_vm_mask);
+- * Now the other cpus will send tlb flush ipis.
++ * Now the other cpus will send tlb flush ipis.
+ * 1a4) change cr3.
+ * 1b) thread switch without mm change
+ * cpu active_mm is correct, cpu0 already handles
+ * flush ipis.
+ * 1b1) set cpu mmu_state to TLBSTATE_OK
+ * 1b2) test_and_set the cpu bit in cpu_vm_mask.
+- * Atomically set the bit [other cpus will start sending flush ipis],
+- * and test the bit.
++ * Atomically set the bit [other cpus will start sending flush ipis],
++ * and test the bit.
+ * 1b3) if the bit was 0: leave_mm was called, flush the tlb.
+ * 2) switch %%esp, ie current
+ *
+@@ -137,12 +137,12 @@ asmlinkage void smp_invalidate_interrupt(struct pt_regs *regs)
+ * orig_rax contains the negated interrupt vector.
+ * Use that to determine where the sender put the data.
+ */
+- sender = ~regs->orig_rax - INVALIDATE_TLB_VECTOR_START;
++ sender = ~regs->orig_ax - INVALIDATE_TLB_VECTOR_START;
+ f = &per_cpu(flush_state, sender);
+
+ if (!cpu_isset(cpu, f->flush_cpumask))
+ goto out;
+- /*
++ /*
+ * This was a BUG() but until someone can quote me the
+ * line from the intel manual that guarantees an IPI to
+ * multiple CPUs is retried _only_ on the erroring CPUs
+@@ -150,10 +150,10 @@ asmlinkage void smp_invalidate_interrupt(struct pt_regs *regs)
+ *
+ * BUG();
+ */
+-
++
+ if (f->flush_mm == read_pda(active_mm)) {
+ if (read_pda(mmu_state) == TLBSTATE_OK) {
+- if (f->flush_va == FLUSH_ALL)
++ if (f->flush_va == TLB_FLUSH_ALL)
+ local_flush_tlb();
+ else
+ __flush_tlb_one(f->flush_va);
+@@ -166,19 +166,22 @@ out:
+ add_pda(irq_tlb_count, 1);
+ }
+
+-static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
+- unsigned long va)
++void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
++ unsigned long va)
+ {
+ int sender;
+ union smp_flush_state *f;
++ cpumask_t cpumask = *cpumaskp;
+
+ /* Caller has disabled preemption */
+ sender = smp_processor_id() % NUM_INVALIDATE_TLB_VECTORS;
+ f = &per_cpu(flush_state, sender);
+
+- /* Could avoid this lock when
+- num_online_cpus() <= NUM_INVALIDATE_TLB_VECTORS, but it is
+- probably not worth checking this for a cache-hot lock. */
++ /*
++ * Could avoid this lock when
++ * num_online_cpus() <= NUM_INVALIDATE_TLB_VECTORS, but it is
++ * probably not worth checking this for a cache-hot lock.
++ */
+ spin_lock(&f->tlbstate_lock);
+
+ f->flush_mm = mm;
+@@ -202,14 +205,14 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
+ int __cpuinit init_smp_flush(void)
+ {
+ int i;
++
+ for_each_cpu_mask(i, cpu_possible_map) {
+ spin_lock_init(&per_cpu(flush_state, i).tlbstate_lock);
+ }
+ return 0;
+ }
-
-- rq->buffer = rq->data = NULL;
-- return 0;
--unmap_rq:
-- blk_rq_unmap_user(bio);
-- return ret;
--}
--
--EXPORT_SYMBOL(blk_rq_map_user);
+ core_initcall(init_smp_flush);
+-
++
+ void flush_tlb_current_task(void)
+ {
+ struct mm_struct *mm = current->mm;
+@@ -221,10 +224,9 @@ void flush_tlb_current_task(void)
+
+ local_flush_tlb();
+ if (!cpus_empty(cpu_mask))
+- flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
++ flush_tlb_others(cpu_mask, mm, TLB_FLUSH_ALL);
+ preempt_enable();
+ }
+-EXPORT_SYMBOL(flush_tlb_current_task);
+
+ void flush_tlb_mm (struct mm_struct * mm)
+ {
+@@ -241,11 +243,10 @@ void flush_tlb_mm (struct mm_struct * mm)
+ leave_mm(smp_processor_id());
+ }
+ if (!cpus_empty(cpu_mask))
+- flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
++ flush_tlb_others(cpu_mask, mm, TLB_FLUSH_ALL);
+
+ preempt_enable();
+ }
+-EXPORT_SYMBOL(flush_tlb_mm);
+
+ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
+ {
+@@ -259,8 +260,8 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
+ if (current->active_mm == mm) {
+ if(current->mm)
+ __flush_tlb_one(va);
+- else
+- leave_mm(smp_processor_id());
++ else
++ leave_mm(smp_processor_id());
+ }
+
+ if (!cpus_empty(cpu_mask))
+@@ -268,7 +269,6 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
+
+ preempt_enable();
+ }
+-EXPORT_SYMBOL(flush_tlb_page);
+
+ static void do_flush_tlb_all(void* info)
+ {
+@@ -325,11 +325,9 @@ void unlock_ipi_call_lock(void)
+ * this function sends a 'generic call function' IPI to all other CPU
+ * of the system defined in the mask.
+ */
-
--/**
-- * blk_rq_map_user_iov - map user data to a request, for REQ_BLOCK_PC usage
-- * @q: request queue where request should be inserted
-- * @rq: request to map data to
-- * @iov: pointer to the iovec
-- * @iov_count: number of elements in the iovec
-- * @len: I/O byte count
-- *
-- * Description:
-- * Data will be mapped directly for zero copy io, if possible. Otherwise
-- * a kernel bounce buffer is used.
-- *
-- * A matching blk_rq_unmap_user() must be issued at the end of io, while
-- * still in process context.
-- *
-- * Note: The mapped bio may need to be bounced through blk_queue_bounce()
-- * before being submitted to the device, as pages mapped may be out of
-- * reach. It's the callers responsibility to make sure this happens. The
-- * original bio must be passed back in to blk_rq_unmap_user() for proper
-- * unmapping.
+-static int
+-__smp_call_function_mask(cpumask_t mask,
+- void (*func)(void *), void *info,
+- int wait)
++static int __smp_call_function_mask(cpumask_t mask,
++ void (*func)(void *), void *info,
++ int wait)
+ {
+ struct call_data_struct data;
+ cpumask_t allbutself;
+@@ -417,11 +415,10 @@ EXPORT_SYMBOL(smp_call_function_mask);
+ */
+
+ int smp_call_function_single (int cpu, void (*func) (void *info), void *info,
+- int nonatomic, int wait)
++ int nonatomic, int wait)
+ {
+ /* prevent preemption and reschedule on another processor */
+- int ret;
+- int me = get_cpu();
++ int ret, me = get_cpu();
+
+ /* Can deadlock when called with interrupts disabled */
+ WARN_ON(irqs_disabled());
+@@ -471,9 +468,9 @@ static void stop_this_cpu(void *dummy)
+ */
+ cpu_clear(smp_processor_id(), cpu_online_map);
+ disable_local_APIC();
+- for (;;)
++ for (;;)
+ halt();
+-}
++}
+
+ void smp_send_stop(void)
+ {
+diff --git a/arch/x86/kernel/smpboot_32.c b/arch/x86/kernel/smpboot_32.c
+index 4ea80cb..5787a0c 100644
+--- a/arch/x86/kernel/smpboot_32.c
++++ b/arch/x86/kernel/smpboot_32.c
+@@ -83,7 +83,6 @@ EXPORT_SYMBOL(cpu_online_map);
+
+ cpumask_t cpu_callin_map;
+ cpumask_t cpu_callout_map;
+-EXPORT_SYMBOL(cpu_callout_map);
+ cpumask_t cpu_possible_map;
+ EXPORT_SYMBOL(cpu_possible_map);
+ static cpumask_t smp_commenced_mask;
+@@ -92,15 +91,10 @@ static cpumask_t smp_commenced_mask;
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct cpuinfo_x86, cpu_info);
+ EXPORT_PER_CPU_SYMBOL(cpu_info);
+
+-/*
+- * The following static array is used during kernel startup
+- * and the x86_cpu_to_apicid_ptr contains the address of the
+- * array during this time. Is it zeroed when the per_cpu
+- * data area is removed.
- */
--int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
-- struct sg_iovec *iov, int iov_count, unsigned int len)
--{
-- struct bio *bio;
--
-- if (!iov || iov_count <= 0)
-- return -EINVAL;
--
-- /* we don't allow misaligned data like bio_map_user() does. If the
-- * user is using sg, they're expected to know the alignment constraints
-- * and respect them accordingly */
-- bio = bio_map_user_iov(q, NULL, iov, iov_count, rq_data_dir(rq)== READ);
-- if (IS_ERR(bio))
-- return PTR_ERR(bio);
++/* which logical CPU number maps to which CPU (physical APIC ID) */
+ u8 x86_cpu_to_apicid_init[NR_CPUS] __initdata =
+ { [0 ... NR_CPUS-1] = BAD_APICID };
+-void *x86_cpu_to_apicid_ptr;
++void *x86_cpu_to_apicid_early_ptr;
+ DEFINE_PER_CPU(u8, x86_cpu_to_apicid) = BAD_APICID;
+ EXPORT_PER_CPU_SYMBOL(x86_cpu_to_apicid);
+
+@@ -113,7 +107,6 @@ u8 apicid_2_node[MAX_APICID];
+ extern const unsigned char trampoline_data [];
+ extern const unsigned char trampoline_end [];
+ static unsigned char *trampoline_base;
+-static int trampoline_exec;
+
+ static void map_cpu_to_logical_apicid(void);
+
+@@ -138,17 +131,13 @@ static unsigned long __cpuinit setup_trampoline(void)
+ */
+ void __init smp_alloc_memory(void)
+ {
+- trampoline_base = (void *) alloc_bootmem_low_pages(PAGE_SIZE);
++ trampoline_base = alloc_bootmem_low_pages(PAGE_SIZE);
+ /*
+ * Has to be in very low memory so we can execute
+ * real-mode AP code.
+ */
+ if (__pa(trampoline_base) >= 0x9F000)
+ BUG();
+- /*
+- * Make the SMP trampoline executable:
+- */
+- trampoline_exec = set_kernel_exec((unsigned long)trampoline_base, 1);
+ }
+
+ /*
+@@ -405,7 +394,7 @@ static void __cpuinit start_secondary(void *unused)
+ setup_secondary_clock();
+ if (nmi_watchdog == NMI_IO_APIC) {
+ disable_8259A_irq(0);
+- enable_NMI_through_LVT0(NULL);
++ enable_NMI_through_LVT0();
+ enable_8259A_irq(0);
+ }
+ /*
+@@ -448,38 +437,38 @@ void __devinit initialize_secondary(void)
+ {
+ /*
+ * We don't actually need to load the full TSS,
+- * basically just the stack pointer and the eip.
++ * basically just the stack pointer and the ip.
+ */
+
+ asm volatile(
+ "movl %0,%%esp\n\t"
+ "jmp *%1"
+ :
+- :"m" (current->thread.esp),"m" (current->thread.eip));
++ :"m" (current->thread.sp),"m" (current->thread.ip));
+ }
+
+ /* Static state in head.S used to set up a CPU */
+ extern struct {
+- void * esp;
++ void * sp;
+ unsigned short ss;
+ } stack_start;
+
+ #ifdef CONFIG_NUMA
+
+ /* which logical CPUs are on which nodes */
+-cpumask_t node_2_cpu_mask[MAX_NUMNODES] __read_mostly =
++cpumask_t node_to_cpumask_map[MAX_NUMNODES] __read_mostly =
+ { [0 ... MAX_NUMNODES-1] = CPU_MASK_NONE };
+-EXPORT_SYMBOL(node_2_cpu_mask);
++EXPORT_SYMBOL(node_to_cpumask_map);
+ /* which node each logical CPU is on */
+-int cpu_2_node[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = 0 };
+-EXPORT_SYMBOL(cpu_2_node);
++int cpu_to_node_map[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = 0 };
++EXPORT_SYMBOL(cpu_to_node_map);
+
+ /* set up a mapping between cpu and node. */
+ static inline void map_cpu_to_node(int cpu, int node)
+ {
+ printk("Mapping cpu %d to node %d\n", cpu, node);
+- cpu_set(cpu, node_2_cpu_mask[node]);
+- cpu_2_node[cpu] = node;
++ cpu_set(cpu, node_to_cpumask_map[node]);
++ cpu_to_node_map[cpu] = node;
+ }
+
+ /* undo a mapping between cpu and node. */
+@@ -489,8 +478,8 @@ static inline void unmap_cpu_to_node(int cpu)
+
+ printk("Unmapping cpu %d from all nodes\n", cpu);
+ for (node = 0; node < MAX_NUMNODES; node ++)
+- cpu_clear(cpu, node_2_cpu_mask[node]);
+- cpu_2_node[cpu] = 0;
++ cpu_clear(cpu, node_to_cpumask_map[node]);
++ cpu_to_node_map[cpu] = 0;
+ }
+ #else /* !CONFIG_NUMA */
+
+@@ -668,7 +657,7 @@ wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
+ * target processor state.
+ */
+ startup_ipi_hook(phys_apicid, (unsigned long) start_secondary,
+- (unsigned long) stack_start.esp);
++ (unsigned long) stack_start.sp);
+
+ /*
+ * Run STARTUP IPI loop.
+@@ -754,7 +743,7 @@ static inline struct task_struct * __cpuinit alloc_idle_task(int cpu)
+ /* initialize thread_struct. we really want to avoid destroy
+ * idle tread
+ */
+- idle->thread.esp = (unsigned long)task_pt_regs(idle);
++ idle->thread.sp = (unsigned long)task_pt_regs(idle);
+ init_idle(idle, cpu);
+ return idle;
+ }
+@@ -799,7 +788,7 @@ static int __cpuinit do_boot_cpu(int apicid, int cpu)
+ per_cpu(current_task, cpu) = idle;
+ early_gdt_descr.address = (unsigned long)get_cpu_gdt_table(cpu);
+
+- idle->thread.eip = (unsigned long) start_secondary;
++ idle->thread.ip = (unsigned long) start_secondary;
+ /* start_eip had better be page-aligned! */
+ start_eip = setup_trampoline();
+
+@@ -807,9 +796,9 @@ static int __cpuinit do_boot_cpu(int apicid, int cpu)
+ alternatives_smp_switch(1);
+
+ /* So we see what's up */
+- printk("Booting processor %d/%d eip %lx\n", cpu, apicid, start_eip);
++ printk("Booting processor %d/%d ip %lx\n", cpu, apicid, start_eip);
+ /* Stack for startup_32 can be just as for start_secondary onwards */
+- stack_start.esp = (void *) idle->thread.esp;
++ stack_start.sp = (void *) idle->thread.sp;
+
+ irq_ctx_init(cpu);
+
+@@ -1091,7 +1080,7 @@ static void __init smp_boot_cpus(unsigned int max_cpus)
+ * Allow the user to impress friends.
+ */
+ Dprintk("Before bogomips.\n");
+- for (cpu = 0; cpu < NR_CPUS; cpu++)
++ for_each_possible_cpu(cpu)
+ if (cpu_isset(cpu, cpu_callout_map))
+ bogosum += cpu_data(cpu).loops_per_jiffy;
+ printk(KERN_INFO
+@@ -1122,7 +1111,7 @@ static void __init smp_boot_cpus(unsigned int max_cpus)
+ * construct cpu_sibling_map, so that we can tell sibling CPUs
+ * efficiently.
+ */
+- for (cpu = 0; cpu < NR_CPUS; cpu++) {
++ for_each_possible_cpu(cpu) {
+ cpus_clear(per_cpu(cpu_sibling_map, cpu));
+ cpus_clear(per_cpu(cpu_core_map, cpu));
+ }
+@@ -1296,12 +1285,6 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
+ setup_ioapic_dest();
+ #endif
+ zap_low_mappings();
+-#ifndef CONFIG_HOTPLUG_CPU
+- /*
+- * Disable executability of the SMP trampoline:
+- */
+- set_kernel_exec((unsigned long)trampoline_base, trampoline_exec);
+-#endif
+ }
+
+ void __init smp_intr_init(void)
+diff --git a/arch/x86/kernel/smpboot_64.c b/arch/x86/kernel/smpboot_64.c
+index aaf4e12..cc64b80 100644
+--- a/arch/x86/kernel/smpboot_64.c
++++ b/arch/x86/kernel/smpboot_64.c
+@@ -65,7 +65,7 @@ int smp_num_siblings = 1;
+ EXPORT_SYMBOL(smp_num_siblings);
+
+ /* Last level cache ID of each logical CPU */
+-DEFINE_PER_CPU(u8, cpu_llc_id) = BAD_APICID;
++DEFINE_PER_CPU(u16, cpu_llc_id) = BAD_APICID;
+
+ /* Bitmask of currently online CPUs */
+ cpumask_t cpu_online_map __read_mostly;
+@@ -78,8 +78,6 @@ EXPORT_SYMBOL(cpu_online_map);
+ */
+ cpumask_t cpu_callin_map;
+ cpumask_t cpu_callout_map;
+-EXPORT_SYMBOL(cpu_callout_map);
-
-- if (bio->bi_size != len) {
-- bio_endio(bio, 0);
-- bio_unmap_user(bio);
-- return -EINVAL;
-- }
+ cpumask_t cpu_possible_map;
+ EXPORT_SYMBOL(cpu_possible_map);
+
+@@ -113,10 +111,20 @@ DEFINE_PER_CPU(int, cpu_state) = { 0 };
+ * a new thread. Also avoids complicated thread destroy functionality
+ * for idle threads.
+ */
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Needed only for CONFIG_HOTPLUG_CPU because __cpuinitdata is
++ * removed after init for !CONFIG_HOTPLUG_CPU.
++ */
++static DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
++#define get_idle_for_cpu(x) (per_cpu(idle_thread_array, x))
++#define set_idle_for_cpu(x,p) (per_cpu(idle_thread_array, x) = (p))
++#else
+ struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ;
-
-- bio_get(bio);
-- blk_rq_bio_prep(q, rq, bio);
-- rq->buffer = rq->data = NULL;
-- return 0;
--}
+ #define get_idle_for_cpu(x) (idle_thread_array[(x)])
+ #define set_idle_for_cpu(x,p) (idle_thread_array[(x)] = (p))
++#endif
++
+
+ /*
+ * Currently trivial. Write the real->protected mode
+@@ -212,6 +220,7 @@ void __cpuinit smp_callin(void)
+
+ Dprintk("CALLIN, before setup_local_APIC().\n");
+ setup_local_APIC();
++ end_local_APIC_setup();
+
+ /*
+ * Get our bogomips.
+@@ -338,7 +347,7 @@ void __cpuinit start_secondary(void)
+
+ if (nmi_watchdog == NMI_IO_APIC) {
+ disable_8259A_irq(0);
+- enable_NMI_through_LVT0(NULL);
++ enable_NMI_through_LVT0();
+ enable_8259A_irq(0);
+ }
+
+@@ -370,7 +379,7 @@ void __cpuinit start_secondary(void)
+
+ unlock_ipi_call_lock();
+
+- setup_secondary_APIC_clock();
++ setup_secondary_clock();
+
+ cpu_idle();
+ }
+@@ -384,19 +393,20 @@ static void inquire_remote_apic(int apicid)
+ unsigned i, regs[] = { APIC_ID >> 4, APIC_LVR >> 4, APIC_SPIV >> 4 };
+ char *names[] = { "ID", "VERSION", "SPIV" };
+ int timeout;
+- unsigned int status;
++ u32 status;
+
+ printk(KERN_INFO "Inquiring remote APIC #%d...\n", apicid);
+
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
+- printk("... APIC #%d %s: ", apicid, names[i]);
++ printk(KERN_INFO "... APIC #%d %s: ", apicid, names[i]);
+
+ /*
+ * Wait for idle.
+ */
+ status = safe_apic_wait_icr_idle();
+ if (status)
+- printk("a previous APIC delivery may have failed\n");
++ printk(KERN_CONT
++ "a previous APIC delivery may have failed\n");
+
+ apic_write(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
+ apic_write(APIC_ICR, APIC_DM_REMRD | regs[i]);
+@@ -410,10 +420,10 @@ static void inquire_remote_apic(int apicid)
+ switch (status) {
+ case APIC_ICR_RR_VALID:
+ status = apic_read(APIC_RRR);
+- printk("%08x\n", status);
++ printk(KERN_CONT "%08x\n", status);
+ break;
+ default:
+- printk("failed\n");
++ printk(KERN_CONT "failed\n");
+ }
+ }
+ }
+@@ -466,7 +476,7 @@ static int __cpuinit wakeup_secondary_via_INIT(int phys_apicid, unsigned int sta
+ */
+ Dprintk("#startup loops: %d.\n", num_starts);
+
+- maxlvt = get_maxlvt();
++ maxlvt = lapic_get_maxlvt();
+
+ for (j = 1; j <= num_starts; j++) {
+ Dprintk("Sending STARTUP #%d.\n",j);
+@@ -577,7 +587,7 @@ static int __cpuinit do_boot_cpu(int cpu, int apicid)
+ c_idle.idle = get_idle_for_cpu(cpu);
+
+ if (c_idle.idle) {
+- c_idle.idle->thread.rsp = (unsigned long) (((struct pt_regs *)
++ c_idle.idle->thread.sp = (unsigned long) (((struct pt_regs *)
+ (THREAD_SIZE + task_stack_page(c_idle.idle))) - 1);
+ init_idle(c_idle.idle, cpu);
+ goto do_rest;
+@@ -613,8 +623,8 @@ do_rest:
+
+ start_rip = setup_trampoline();
+
+- init_rsp = c_idle.idle->thread.rsp;
+- per_cpu(init_tss,cpu).rsp0 = init_rsp;
++ init_rsp = c_idle.idle->thread.sp;
++ load_sp0(&per_cpu(init_tss, cpu), &c_idle.idle->thread);
+ initial_code = start_secondary;
+ clear_tsk_thread_flag(c_idle.idle, TIF_FORK);
+
+@@ -691,7 +701,7 @@ do_rest:
+ }
+ if (boot_error) {
+ cpu_clear(cpu, cpu_callout_map); /* was set here (do_boot_cpu()) */
+- clear_bit(cpu, &cpu_initialized); /* was set by cpu_init() */
++ clear_bit(cpu, (unsigned long *)&cpu_initialized); /* was set by cpu_init() */
+ clear_node_cpumask(cpu); /* was set by numa_add_cpu */
+ cpu_clear(cpu, cpu_present_map);
+ cpu_clear(cpu, cpu_possible_map);
+@@ -841,24 +851,16 @@ static int __init smp_sanity_check(unsigned max_cpus)
+ return 0;
+ }
+
+-/*
+- * Copy apicid's found by MP_processor_info from initial array to the per cpu
+- * data area. The x86_cpu_to_apicid_init array is then expendable and the
+- * x86_cpu_to_apicid_ptr is zeroed indicating that the static array is no
+- * longer available.
+- */
+-void __init smp_set_apicids(void)
++static void __init smp_cpu_index_default(void)
+ {
+- int cpu;
++ int i;
++ struct cpuinfo_x86 *c;
+
+- for_each_cpu_mask(cpu, cpu_possible_map) {
+- if (per_cpu_offset(cpu))
+- per_cpu(x86_cpu_to_apicid, cpu) =
+- x86_cpu_to_apicid_init[cpu];
++ for_each_cpu_mask(i, cpu_possible_map) {
++ c = &cpu_data(i);
++ /* mark all to hotplug */
++ c->cpu_index = NR_CPUS;
+ }
-
--EXPORT_SYMBOL(blk_rq_map_user_iov);
+- /* indicate the static array will be going away soon */
+- x86_cpu_to_apicid_ptr = NULL;
+ }
+
+ /*
+@@ -868,9 +870,9 @@ void __init smp_set_apicids(void)
+ void __init smp_prepare_cpus(unsigned int max_cpus)
+ {
+ nmi_watchdog_default();
++ smp_cpu_index_default();
+ current_cpu_data = boot_cpu_data;
+ current_thread_info()->cpu = 0; /* needed? */
+- smp_set_apicids();
+ set_cpu_sibling_map(0);
+
+ if (smp_sanity_check(max_cpus) < 0) {
+@@ -885,6 +887,13 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ */
+ setup_local_APIC();
+
++ /*
++ * Enable IO APIC before setting up error vector
++ */
++ if (!skip_ioapic_setup && nr_ioapics)
++ enable_IO_APIC();
++ end_local_APIC_setup();
++
+ if (GET_APIC_ID(apic_read(APIC_ID)) != boot_cpu_id) {
+ panic("Boot APIC ID in local APIC unexpected (%d vs %d)",
+ GET_APIC_ID(apic_read(APIC_ID)), boot_cpu_id);
+@@ -903,7 +912,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ * Set up local APIC timer on boot CPU.
+ */
+
+- setup_boot_APIC_clock();
++ setup_boot_clock();
+ }
+
+ /*
+@@ -912,7 +921,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ void __init smp_prepare_boot_cpu(void)
+ {
+ int me = smp_processor_id();
+- cpu_set(me, cpu_online_map);
++ /* already set me in cpu_online_map in boot_cpu_init() */
+ cpu_set(me, cpu_callout_map);
+ per_cpu(cpu_state, me) = CPU_ONLINE;
+ }
+@@ -1016,7 +1025,7 @@ void remove_cpu_from_maps(void)
+
+ cpu_clear(cpu, cpu_callout_map);
+ cpu_clear(cpu, cpu_callin_map);
+- clear_bit(cpu, &cpu_initialized); /* was set by cpu_init() */
++ clear_bit(cpu, (unsigned long *)&cpu_initialized); /* was set by cpu_init() */
+ clear_node_cpumask(cpu);
+ }
+
+diff --git a/arch/x86/kernel/smpcommon_32.c b/arch/x86/kernel/smpcommon_32.c
+index bbfe85a..8bc38af 100644
+--- a/arch/x86/kernel/smpcommon_32.c
++++ b/arch/x86/kernel/smpcommon_32.c
+@@ -14,10 +14,11 @@ __cpuinit void init_gdt(int cpu)
+ {
+ struct desc_struct *gdt = get_cpu_gdt_table(cpu);
+
+- pack_descriptor((u32 *)&gdt[GDT_ENTRY_PERCPU].a,
+- (u32 *)&gdt[GDT_ENTRY_PERCPU].b,
++ pack_descriptor(&gdt[GDT_ENTRY_PERCPU],
+ __per_cpu_offset[cpu], 0xFFFFF,
+- 0x80 | DESCTYPE_S | 0x2, 0x8);
++ 0x2 | DESCTYPE_S, 0x8);
++
++ gdt[GDT_ENTRY_PERCPU].s = 1;
+
+ per_cpu(this_cpu_off, cpu) = __per_cpu_offset[cpu];
+ per_cpu(cpu_number, cpu) = cpu;
+diff --git a/arch/x86/kernel/srat_32.c b/arch/x86/kernel/srat_32.c
+index 2a8713e..2bf6903 100644
+--- a/arch/x86/kernel/srat_32.c
++++ b/arch/x86/kernel/srat_32.c
+@@ -57,8 +57,6 @@ static struct node_memory_chunk_s node_memory_chunk[MAXCHUNKS];
+ static int num_memory_chunks; /* total number of memory chunks */
+ static u8 __initdata apicid_to_pxm[MAX_APICID];
+
+-extern void * boot_ioremap(unsigned long, unsigned long);
-
--/**
-- * blk_rq_unmap_user - unmap a request with user data
-- * @bio: start of bio list
+ /* Identify CPU proximity domains */
+ static void __init parse_cpu_affinity_structure(char *p)
+ {
+@@ -299,7 +297,7 @@ int __init get_memcfg_from_srat(void)
+ }
+
+ rsdt = (struct acpi_table_rsdt *)
+- boot_ioremap(rsdp->rsdt_physical_address, sizeof(struct acpi_table_rsdt));
++ early_ioremap(rsdp->rsdt_physical_address, sizeof(struct acpi_table_rsdt));
+
+ if (!rsdt) {
+ printk(KERN_WARNING
+@@ -339,11 +337,11 @@ int __init get_memcfg_from_srat(void)
+ for (i = 0; i < tables; i++) {
+ /* Map in header, then map in full table length. */
+ header = (struct acpi_table_header *)
+- boot_ioremap(saved_rsdt.table.table_offset_entry[i], sizeof(struct acpi_table_header));
++ early_ioremap(saved_rsdt.table.table_offset_entry[i], sizeof(struct acpi_table_header));
+ if (!header)
+ break;
+ header = (struct acpi_table_header *)
+- boot_ioremap(saved_rsdt.table.table_offset_entry[i], header->length);
++ early_ioremap(saved_rsdt.table.table_offset_entry[i], header->length);
+ if (!header)
+ break;
+
+diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
+index 6fa6cf0..02f0f61 100644
+--- a/arch/x86/kernel/stacktrace.c
++++ b/arch/x86/kernel/stacktrace.c
+@@ -22,9 +22,23 @@ static int save_stack_stack(void *data, char *name)
+ return -1;
+ }
+
+-static void save_stack_address(void *data, unsigned long addr)
++static void save_stack_address(void *data, unsigned long addr, int reliable)
++{
++ struct stack_trace *trace = data;
++ if (trace->skip > 0) {
++ trace->skip--;
++ return;
++ }
++ if (trace->nr_entries < trace->max_entries)
++ trace->entries[trace->nr_entries++] = addr;
++}
++
++static void
++save_stack_address_nosched(void *data, unsigned long addr, int reliable)
+ {
+ struct stack_trace *trace = (struct stack_trace *)data;
++ if (in_sched_functions(addr))
++ return;
+ if (trace->skip > 0) {
+ trace->skip--;
+ return;
+@@ -40,13 +54,26 @@ static const struct stacktrace_ops save_stack_ops = {
+ .address = save_stack_address,
+ };
+
++static const struct stacktrace_ops save_stack_ops_nosched = {
++ .warning = save_stack_warning,
++ .warning_symbol = save_stack_warning_symbol,
++ .stack = save_stack_stack,
++ .address = save_stack_address_nosched,
++};
++
+ /*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+ void save_stack_trace(struct stack_trace *trace)
+ {
+- dump_trace(current, NULL, NULL, &save_stack_ops, trace);
++ dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
++ if (trace->nr_entries < trace->max_entries)
++ trace->entries[trace->nr_entries++] = ULONG_MAX;
++}
++
++void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
++{
++ dump_trace(tsk, NULL, NULL, 0, &save_stack_ops_nosched, trace);
+ if (trace->nr_entries < trace->max_entries)
+ trace->entries[trace->nr_entries++] = ULONG_MAX;
+ }
+-EXPORT_SYMBOL(save_stack_trace);
+diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
+new file mode 100644
+index 0000000..2ef1a5f
+--- /dev/null
++++ b/arch/x86/kernel/step.c
+@@ -0,0 +1,203 @@
++/*
++ * x86 single-step support code, common to 32-bit and 64-bit.
++ */
++#include <linux/sched.h>
++#include <linux/mm.h>
++#include <linux/ptrace.h>
++
++unsigned long convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs)
++{
++ unsigned long addr, seg;
++
++ addr = regs->ip;
++ seg = regs->cs & 0xffff;
++ if (v8086_mode(regs)) {
++ addr = (addr & 0xffff) + (seg << 4);
++ return addr;
++ }
++
++ /*
++ * We'll assume that the code segments in the GDT
++ * are all zero-based. That is largely true: the
++ * TLS segments are used for data, and the PNPBIOS
++ * and APM bios ones we just ignore here.
++ */
++ if ((seg & SEGMENT_TI_MASK) == SEGMENT_LDT) {
++ u32 *desc;
++ unsigned long base;
++
++ seg &= ~7UL;
++
++ mutex_lock(&child->mm->context.lock);
++ if (unlikely((seg >> 3) >= child->mm->context.size))
++ addr = -1L; /* bogus selector, access would fault */
++ else {
++ desc = child->mm->context.ldt + seg;
++ base = ((desc[0] >> 16) |
++ ((desc[1] & 0xff) << 16) |
++ (desc[1] & 0xff000000));
++
++ /* 16-bit code segment? */
++ if (!((desc[1] >> 22) & 1))
++ addr &= 0xffff;
++ addr += base;
++ }
++ mutex_unlock(&child->mm->context.lock);
++ }
++
++ return addr;
++}
++
++static int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
++{
++ int i, copied;
++ unsigned char opcode[15];
++ unsigned long addr = convert_ip_to_linear(child, regs);
++
++ copied = access_process_vm(child, addr, opcode, sizeof(opcode), 0);
++ for (i = 0; i < copied; i++) {
++ switch (opcode[i]) {
++ /* popf and iret */
++ case 0x9d: case 0xcf:
++ return 1;
++
++ /* CHECKME: 64 65 */
++
++ /* opcode and address size prefixes */
++ case 0x66: case 0x67:
++ continue;
++ /* irrelevant prefixes (segment overrides and repeats) */
++ case 0x26: case 0x2e:
++ case 0x36: case 0x3e:
++ case 0x64: case 0x65:
++ case 0xf0: case 0xf2: case 0xf3:
++ continue;
++
++#ifdef CONFIG_X86_64
++ case 0x40 ... 0x4f:
++ if (regs->cs != __USER_CS)
++ /* 32-bit mode: register increment */
++ return 0;
++ /* 64-bit mode: REX prefix */
++ continue;
++#endif
++
++ /* CHECKME: f2, f3 */
++
++ /*
++ * pushf: NOTE! We should probably not let
++ * the user see the TF bit being set. But
++ * it's more pain than it's worth to avoid
++ * it, and a debugger could emulate this
++ * all in user space if it _really_ cares.
++ */
++ case 0x9c:
++ default:
++ return 0;
++ }
++ }
++ return 0;
++}
++
++/*
++ * Enable single-stepping. Return nonzero if user mode is not using TF itself.
++ */
++static int enable_single_step(struct task_struct *child)
++{
++ struct pt_regs *regs = task_pt_regs(child);
++
++ /*
++ * Always set TIF_SINGLESTEP - this guarantees that
++ * we single-step system calls etc.. This will also
++ * cause us to set TF when returning to user mode.
++ */
++ set_tsk_thread_flag(child, TIF_SINGLESTEP);
++
++ /*
++ * If TF was already set, don't do anything else
++ */
++ if (regs->flags & X86_EFLAGS_TF)
++ return 0;
++
++ /* Set TF on the kernel stack.. */
++ regs->flags |= X86_EFLAGS_TF;
++
++ /*
++ * ..but if TF is changed by the instruction we will trace,
++ * don't mark it as being "us" that set it, so that we
++ * won't clear it by hand later.
++ */
++ if (is_setting_trap_flag(child, regs))
++ return 0;
++
++ set_tsk_thread_flag(child, TIF_FORCED_TF);
++
++ return 1;
++}
++
++/*
++ * Install this value in MSR_IA32_DEBUGCTLMSR whenever child is running.
++ */
++static void write_debugctlmsr(struct task_struct *child, unsigned long val)
++{
++ child->thread.debugctlmsr = val;
++
++ if (child != current)
++ return;
++
++ wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
++}
++
++/*
++ * Enable single or block step.
++ */
++static void enable_step(struct task_struct *child, bool block)
++{
++ /*
++ * Make sure block stepping (BTF) is not enabled unless it should be.
++ * Note that we don't try to worry about any is_setting_trap_flag()
++ * instructions after the first when using block stepping.
++ * So noone should try to use debugger block stepping in a program
++ * that uses user-mode single stepping itself.
++ */
++ if (enable_single_step(child) && block) {
++ set_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++ write_debugctlmsr(child,
++ child->thread.debugctlmsr | DEBUGCTLMSR_BTF);
++ } else {
++ write_debugctlmsr(child,
++ child->thread.debugctlmsr & ~TIF_DEBUGCTLMSR);
++
++ if (!child->thread.debugctlmsr)
++ clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++ }
++}
++
++void user_enable_single_step(struct task_struct *child)
++{
++ enable_step(child, 0);
++}
++
++void user_enable_block_step(struct task_struct *child)
++{
++ enable_step(child, 1);
++}
++
++void user_disable_single_step(struct task_struct *child)
++{
++ /*
++ * Make sure block stepping (BTF) is disabled.
++ */
++ write_debugctlmsr(child,
++ child->thread.debugctlmsr & ~TIF_DEBUGCTLMSR);
++
++ if (!child->thread.debugctlmsr)
++ clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
++
++ /* Always clear TIF_SINGLESTEP... */
++ clear_tsk_thread_flag(child, TIF_SINGLESTEP);
++
++ /* But touch TF only if it was set by us.. */
++ if (test_and_clear_tsk_thread_flag(child, TIF_FORCED_TF))
++ task_pt_regs(child)->flags &= ~X86_EFLAGS_TF;
++}
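The `user_enable_single_step()`/`user_disable_single_step()` pair above is what backs the ptrace `PTRACE_SINGLESTEP` request: the kernel sets TF on the child's stack frame, and the tracer gets a SIGTRAP after each instruction. A minimal user-space sketch of that interface (assuming a Linux host where ptrace is permitted; `trace_child_demo` and the step cap of 25 are illustrative, not from the patch):

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns the number of instructions single-stepped, or -1 on error. */
static int trace_child_demo(void)
{
    int status, steps = 0;
    pid_t child = fork();

    if (child < 0)
        return -1;
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        raise(SIGSTOP);                 /* wait for the tracer to attach */
        for (volatile int i = 0; i < 1000; i++)
            ;                           /* plenty of instructions to step */
        _exit(0);
    }

    waitpid(child, &status, 0);         /* child stopped itself */
    while (steps < 25 && WIFSTOPPED(status)) {
        /* Each request sets TF via user_enable_single_step() and
         * resumes the child for exactly one instruction. */
        if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
            break;
        waitpid(child, &status, 0);
        steps++;
    }
    kill(child, SIGKILL);               /* done stepping */
    waitpid(child, &status, 0);
    return steps;
}
```

`PTRACE_SINGLEBLOCK` (the block-step variant driven by `user_enable_block_step()` and the BTF bit) is requested the same way, but traps only on branches.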
+diff --git a/arch/x86/kernel/suspend_64.c b/arch/x86/kernel/suspend_64.c
+index 2e5efaa..0919951 100644
+--- a/arch/x86/kernel/suspend_64.c
++++ b/arch/x86/kernel/suspend_64.c
+@@ -17,9 +17,26 @@
+ /* References to section boundaries */
+ extern const void __nosave_begin, __nosave_end;
+
++static void fix_processor_context(void);
++
+ struct saved_context saved_context;
+
+-void __save_processor_state(struct saved_context *ctxt)
++/**
++ * __save_processor_state - save CPU registers before creating a
++ * hibernation image and before restoring the memory state from it
++ * @ctxt - structure to store the registers contents in
++ *
++ * NOTE: If there is a CPU register the modification of which by the
++ * boot kernel (ie. the kernel used for loading the hibernation image)
++ * might affect the operations of the restored target kernel (ie. the one
++ * saved in the hibernation image), then its contents must be saved by this
++ * function. In other words, if kernel A is hibernated and different
++ * kernel B is used for loading the hibernation image into memory, the
++ * kernel A's __save_processor_state() function must save all registers
++ * needed by kernel A, so that it can operate correctly after the resume
++ * regardless of what kernel B does in the meantime.
++ */
++static void __save_processor_state(struct saved_context *ctxt)
+ {
+ kernel_fpu_begin();
+
+@@ -69,7 +86,12 @@ static void do_fpu_end(void)
+ kernel_fpu_end();
+ }
+
+-void __restore_processor_state(struct saved_context *ctxt)
++/**
++ * __restore_processor_state - restore the contents of CPU registers saved
++ * by __save_processor_state()
++ * @ctxt - structure to load the registers contents from
++ */
++static void __restore_processor_state(struct saved_context *ctxt)
+ {
+ /*
+ * control registers
+@@ -113,14 +135,14 @@ void restore_processor_state(void)
+ __restore_processor_state(&saved_context);
+ }
+
+-void fix_processor_context(void)
++static void fix_processor_context(void)
+ {
+ int cpu = smp_processor_id();
+ struct tss_struct *t = &per_cpu(init_tss, cpu);
+
+ set_tss_desc(cpu,t); /* This just modifies memory; should not be necessary. But... This is necessary, because 386 hardware has concept of busy TSS or some similar stupidity. */
+
+- cpu_gdt(cpu)[GDT_ENTRY_TSS].type = 9;
++ get_cpu_gdt_table(cpu)[GDT_ENTRY_TSS].type = 9;
+
+ syscall_init(); /* This sets MSR_*STAR and related */
+ load_TR_desc(); /* This does ltr */
+diff --git a/arch/x86/kernel/suspend_asm_64.S b/arch/x86/kernel/suspend_asm_64.S
+index 72f9521..aeb9a4d 100644
+--- a/arch/x86/kernel/suspend_asm_64.S
++++ b/arch/x86/kernel/suspend_asm_64.S
+@@ -18,13 +18,13 @@
+
+ ENTRY(swsusp_arch_suspend)
+ movq $saved_context, %rax
+- movq %rsp, pt_regs_rsp(%rax)
+- movq %rbp, pt_regs_rbp(%rax)
+- movq %rsi, pt_regs_rsi(%rax)
+- movq %rdi, pt_regs_rdi(%rax)
+- movq %rbx, pt_regs_rbx(%rax)
+- movq %rcx, pt_regs_rcx(%rax)
+- movq %rdx, pt_regs_rdx(%rax)
++ movq %rsp, pt_regs_sp(%rax)
++ movq %rbp, pt_regs_bp(%rax)
++ movq %rsi, pt_regs_si(%rax)
++ movq %rdi, pt_regs_di(%rax)
++ movq %rbx, pt_regs_bx(%rax)
++ movq %rcx, pt_regs_cx(%rax)
++ movq %rdx, pt_regs_dx(%rax)
+ movq %r8, pt_regs_r8(%rax)
+ movq %r9, pt_regs_r9(%rax)
+ movq %r10, pt_regs_r10(%rax)
+@@ -34,7 +34,7 @@ ENTRY(swsusp_arch_suspend)
+ movq %r14, pt_regs_r14(%rax)
+ movq %r15, pt_regs_r15(%rax)
+ pushfq
+- popq pt_regs_eflags(%rax)
++ popq pt_regs_flags(%rax)
+
+ /* save the address of restore_registers */
+ movq $restore_registers, %rax
+@@ -115,13 +115,13 @@ ENTRY(restore_registers)
+
+ /* We don't restore %rax, it must be 0 anyway */
+ movq $saved_context, %rax
+- movq pt_regs_rsp(%rax), %rsp
+- movq pt_regs_rbp(%rax), %rbp
+- movq pt_regs_rsi(%rax), %rsi
+- movq pt_regs_rdi(%rax), %rdi
+- movq pt_regs_rbx(%rax), %rbx
+- movq pt_regs_rcx(%rax), %rcx
+- movq pt_regs_rdx(%rax), %rdx
++ movq pt_regs_sp(%rax), %rsp
++ movq pt_regs_bp(%rax), %rbp
++ movq pt_regs_si(%rax), %rsi
++ movq pt_regs_di(%rax), %rdi
++ movq pt_regs_bx(%rax), %rbx
++ movq pt_regs_cx(%rax), %rcx
++ movq pt_regs_dx(%rax), %rdx
+ movq pt_regs_r8(%rax), %r8
+ movq pt_regs_r9(%rax), %r9
+ movq pt_regs_r10(%rax), %r10
+@@ -130,7 +130,7 @@ ENTRY(restore_registers)
+ movq pt_regs_r13(%rax), %r13
+ movq pt_regs_r14(%rax), %r14
+ movq pt_regs_r15(%rax), %r15
+- pushq pt_regs_eflags(%rax)
++ pushq pt_regs_flags(%rax)
+ popfq
+
+ xorq %rax, %rax
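The assembly above hand-saves every callee register into `saved_context` on suspend and reloads it on resume, continuing after the original save point. The closest portable analogue is `setjmp()`/`longjmp()`; this sketch (names `hibernate_demo`/`restore_registers_demo` are illustrative only) mirrors the save/restore round trip:

```c
#include <setjmp.h>

static jmp_buf saved_context;   /* plays the role of struct saved_context */
static int resume_count;

static void restore_registers_demo(void)
{
    longjmp(saved_context, 1);  /* reload saved registers, "resume" */
}

/* Returns 1 after a successful save/restore round trip. */
static int hibernate_demo(void)
{
    if (setjmp(saved_context) == 0) {   /* first pass: context saved */
        restore_registers_demo();
        return -1;                       /* never reached */
    }
    resume_count++;                      /* second pass: after "resume" */
    return resume_count;
}
```

The real code must also save state `setjmp()` never touches (control registers, MSRs, the TSS), which is why `__save_processor_state()` exists alongside the register spill.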
+diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
+index 907942e..bd802a5 100644
+--- a/arch/x86/kernel/sys_x86_64.c
++++ b/arch/x86/kernel/sys_x86_64.c
+@@ -12,6 +12,7 @@
+ #include <linux/file.h>
+ #include <linux/utsname.h>
+ #include <linux/personality.h>
++#include <linux/random.h>
+
+ #include <asm/uaccess.h>
+ #include <asm/ia32.h>
+@@ -65,6 +66,7 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
+ unsigned long *end)
+ {
+ if (!test_thread_flag(TIF_IA32) && (flags & MAP_32BIT)) {
++ unsigned long new_begin;
+ /* This is usually needed to map code in small
+ model, so it needs to be in the first 31bit. Limit
+ it to that. This means we need to move the
+@@ -74,6 +76,11 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
+ of playground for now. -AK */
+ *begin = 0x40000000;
+ *end = 0x80000000;
++ if (current->flags & PF_RANDOMIZE) {
++ new_begin = randomize_range(*begin, *begin + 0x02000000, 0);
++ if (new_begin)
++ *begin = new_begin;
++ }
+ } else {
+ *begin = TASK_UNMAPPED_BASE;
+ *end = TASK_SIZE;
+@@ -143,6 +150,97 @@ full_search:
+ }
+ }
+
++
++unsigned long
++arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
++ const unsigned long len, const unsigned long pgoff,
++ const unsigned long flags)
++{
++ struct vm_area_struct *vma;
++ struct mm_struct *mm = current->mm;
++ unsigned long addr = addr0;
++
++ /* requested length too big for entire address space */
++ if (len > TASK_SIZE)
++ return -ENOMEM;
++
++ if (flags & MAP_FIXED)
++ return addr;
++
++ /* for MAP_32BIT mappings we force the legacy mmap base */
++ if (!test_thread_flag(TIF_IA32) && (flags & MAP_32BIT))
++ goto bottomup;
++
++ /* requesting a specific address */
++ if (addr) {
++ addr = PAGE_ALIGN(addr);
++ vma = find_vma(mm, addr);
++ if (TASK_SIZE - len >= addr &&
++ (!vma || addr + len <= vma->vm_start))
++ return addr;
++ }
++
++ /* check if free_area_cache is useful for us */
++ if (len <= mm->cached_hole_size) {
++ mm->cached_hole_size = 0;
++ mm->free_area_cache = mm->mmap_base;
++ }
++
++ /* either no address requested or can't fit in requested address hole */
++ addr = mm->free_area_cache;
++
++ /* make sure it can fit in the remaining address space */
++ if (addr > len) {
++ vma = find_vma(mm, addr-len);
++ if (!vma || addr <= vma->vm_start)
++ /* remember the address as a hint for next time */
++ return (mm->free_area_cache = addr-len);
++ }
++
++ if (mm->mmap_base < len)
++ goto bottomup;
++
++ addr = mm->mmap_base-len;
++
++ do {
++ /*
++ * Lookup failure means no vma is above this address,
++ * else if new region fits below vma->vm_start,
++ * return with success:
++ */
++ vma = find_vma(mm, addr);
++ if (!vma || addr+len <= vma->vm_start)
++ /* remember the address as a hint for next time */
++ return (mm->free_area_cache = addr);
++
++ /* remember the largest hole we saw so far */
++ if (addr + mm->cached_hole_size < vma->vm_start)
++ mm->cached_hole_size = vma->vm_start - addr;
++
++ /* try just below the current vma->vm_start */
++ addr = vma->vm_start-len;
++ } while (len < vma->vm_start);
++
++bottomup:
++ /*
++ * A failed mmap() very likely causes application failure,
++ * so fall back to the bottom-up function here. This scenario
++ * can happen with large stack limits and large mmap()
++ * allocations.
++ */
++ mm->cached_hole_size = ~0UL;
++ mm->free_area_cache = TASK_UNMAPPED_BASE;
++ addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
++ /*
++ * Restore the topdown base:
++ */
++ mm->free_area_cache = mm->mmap_base;
++ mm->cached_hole_size = ~0UL;
++
++ return addr;
++}
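The core of `arch_get_unmapped_area_topdown()` is the do/while loop: start just below `mmap_base`, and as long as the candidate range collides with a VMA, retry immediately below that VMA. A minimal user-space model of that walk, with a sorted array standing in for the VMA tree (all names here are illustrative, not kernel API):

```c
#include <stddef.h>

struct region { unsigned long start, end; }; /* occupied [start, end) */

/* Returns a start address for a len-byte mapping below base, or 0. */
static unsigned long topdown_find(const struct region *vmas, size_t n,
                                  unsigned long base, unsigned long len)
{
    unsigned long addr = base - len;

    for (;;) {
        /* find_vma(): lowest region whose end lies above addr */
        const struct region *vma = NULL;
        for (size_t i = 0; i < n; i++) {
            if (vmas[i].end > addr) {
                vma = &vmas[i];
                break;
            }
        }

        if (!vma || addr + len <= vma->start)
            return addr;              /* hole found below every region */

        if (vma->start < len)
            return 0;                 /* no room left: caller falls back */
        addr = vma->start - len;      /* retry just below this region */
    }
}
```

The kernel version layers `free_area_cache`/`cached_hole_size` on top of this walk to avoid rescanning, and falls back to the bottom-up allocator when the search runs out of room, exactly as the `bottomup:` label does above.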
++
++
+ asmlinkage long sys_uname(struct new_utsname __user * name)
+ {
+ int err;
+diff --git a/arch/x86/kernel/sysenter_32.c b/arch/x86/kernel/sysenter_32.c
+deleted file mode 100644
+index 5a2d951..0000000
+--- a/arch/x86/kernel/sysenter_32.c
++++ /dev/null
+@@ -1,346 +0,0 @@
+-/*
+- * (C) Copyright 2002 Linus Torvalds
+- * Portions based on the vdso-randomization code from exec-shield:
+- * Copyright(C) 2005-2006, Red Hat, Inc., Ingo Molnar
- *
-- * Description:
-- * Unmap a rq previously mapped by blk_rq_map_user(). The caller must
-- * supply the original rq->bio from the blk_rq_map_user() return, since
-- * the io completion may have changed rq->bio.
+- * This file contains the needed initializations to support sysenter.
- */
--int blk_rq_unmap_user(struct bio *bio)
--{
-- struct bio *mapped_bio;
-- int ret = 0, ret2;
--
-- while (bio) {
-- mapped_bio = bio;
-- if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
-- mapped_bio = bio->bi_private;
-
-- ret2 = __blk_rq_unmap_user(mapped_bio);
-- if (ret2 && !ret)
-- ret = ret2;
+-#include <linux/init.h>
+-#include <linux/smp.h>
+-#include <linux/thread_info.h>
+-#include <linux/sched.h>
+-#include <linux/gfp.h>
+-#include <linux/string.h>
+-#include <linux/elf.h>
+-#include <linux/mm.h>
+-#include <linux/err.h>
+-#include <linux/module.h>
-
-- mapped_bio = bio;
-- bio = bio->bi_next;
-- bio_put(mapped_bio);
-- }
+-#include <asm/cpufeature.h>
+-#include <asm/msr.h>
+-#include <asm/pgtable.h>
+-#include <asm/unistd.h>
+-#include <asm/elf.h>
+-#include <asm/tlbflush.h>
-
-- return ret;
--}
+-enum {
+- VDSO_DISABLED = 0,
+- VDSO_ENABLED = 1,
+- VDSO_COMPAT = 2,
+-};
-
--EXPORT_SYMBOL(blk_rq_unmap_user);
+-#ifdef CONFIG_COMPAT_VDSO
+-#define VDSO_DEFAULT VDSO_COMPAT
+-#else
+-#define VDSO_DEFAULT VDSO_ENABLED
+-#endif
-
--/**
-- * blk_rq_map_kern - map kernel data to a request, for REQ_BLOCK_PC usage
-- * @q: request queue where request should be inserted
-- * @rq: request to fill
-- * @kbuf: the kernel buffer
-- * @len: length of user data
-- * @gfp_mask: memory allocation flags
+-/*
+- * Should the kernel map a VDSO page into processes and pass its
+- * address down to glibc upon exec()?
- */
--int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
-- unsigned int len, gfp_t gfp_mask)
--{
-- struct bio *bio;
--
-- if (len > (q->max_hw_sectors << 9))
-- return -EINVAL;
-- if (!len || !kbuf)
-- return -EINVAL;
--
-- bio = bio_map_kern(q, kbuf, len, gfp_mask);
-- if (IS_ERR(bio))
-- return PTR_ERR(bio);
--
-- if (rq_data_dir(rq) == WRITE)
-- bio->bi_rw |= (1 << BIO_RW);
--
-- blk_rq_bio_prep(q, rq, bio);
-- blk_queue_bounce(q, &rq->bio);
-- rq->buffer = rq->data = NULL;
-- return 0;
--}
+-unsigned int __read_mostly vdso_enabled = VDSO_DEFAULT;
-
--EXPORT_SYMBOL(blk_rq_map_kern);
+-EXPORT_SYMBOL_GPL(vdso_enabled);
-
--/**
-- * blk_execute_rq_nowait - insert a request into queue for execution
-- * @q: queue to insert the request in
-- * @bd_disk: matching gendisk
-- * @rq: request to insert
-- * @at_head: insert request at head or tail of queue
-- * @done: I/O completion handler
-- *
-- * Description:
-- * Insert a fully prepared request at the back of the io scheduler queue
-- * for execution. Don't wait for completion.
-- */
--void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
-- struct request *rq, int at_head,
-- rq_end_io_fn *done)
+-static int __init vdso_setup(char *s)
-{
-- int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+- vdso_enabled = simple_strtoul(s, NULL, 0);
-
-- rq->rq_disk = bd_disk;
-- rq->cmd_flags |= REQ_NOMERGE;
-- rq->end_io = done;
-- WARN_ON(irqs_disabled());
-- spin_lock_irq(q->queue_lock);
-- __elv_add_request(q, rq, where, 1);
-- __generic_unplug_device(q);
-- spin_unlock_irq(q->queue_lock);
+- return 1;
-}
--EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
-
--/**
-- * blk_execute_rq - insert a request into queue for execution
-- * @q: queue to insert the request in
-- * @bd_disk: matching gendisk
-- * @rq: request to insert
-- * @at_head: insert request at head or tail of queue
-- *
-- * Description:
-- * Insert a fully prepared request at the back of the io scheduler queue
-- * for execution and wait for completion.
-- */
--int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
-- struct request *rq, int at_head)
--{
-- DECLARE_COMPLETION_ONSTACK(wait);
-- char sense[SCSI_SENSE_BUFFERSIZE];
-- int err = 0;
+-__setup("vdso=", vdso_setup);
-
-- /*
-- * we need an extra reference to the request, so we can look at
-- * it after io completion
-- */
-- rq->ref_count++;
+-extern asmlinkage void sysenter_entry(void);
-
-- if (!rq->sense) {
-- memset(sense, 0, sizeof(sense));
-- rq->sense = sense;
-- rq->sense_len = 0;
-- }
+-static __init void reloc_symtab(Elf32_Ehdr *ehdr,
+- unsigned offset, unsigned size)
+-{
+- Elf32_Sym *sym = (void *)ehdr + offset;
+- unsigned nsym = size / sizeof(*sym);
+- unsigned i;
-
-- rq->end_io_data = &wait;
-- blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);
-- wait_for_completion(&wait);
+- for(i = 0; i < nsym; i++, sym++) {
+- if (sym->st_shndx == SHN_UNDEF ||
+- sym->st_shndx == SHN_ABS)
+- continue; /* skip */
-
-- if (rq->errors)
-- err = -EIO;
+- if (sym->st_shndx > SHN_LORESERVE) {
+- printk(KERN_INFO "VDSO: unexpected st_shndx %x\n",
+- sym->st_shndx);
+- continue;
+- }
-
-- return err;
+- switch(ELF_ST_TYPE(sym->st_info)) {
+- case STT_OBJECT:
+- case STT_FUNC:
+- case STT_SECTION:
+- case STT_FILE:
+- sym->st_value += VDSO_HIGH_BASE;
+- }
+- }
-}
-
--EXPORT_SYMBOL(blk_execute_rq);
--
--static void bio_end_empty_barrier(struct bio *bio, int err)
--{
-- if (err)
-- clear_bit(BIO_UPTODATE, &bio->bi_flags);
+-static __init void reloc_dyn(Elf32_Ehdr *ehdr, unsigned offset)
+-{
+- Elf32_Dyn *dyn = (void *)ehdr + offset;
+-
+- for(; dyn->d_tag != DT_NULL; dyn++)
+- switch(dyn->d_tag) {
+- case DT_PLTGOT:
+- case DT_HASH:
+- case DT_STRTAB:
+- case DT_SYMTAB:
+- case DT_RELA:
+- case DT_INIT:
+- case DT_FINI:
+- case DT_REL:
+- case DT_DEBUG:
+- case DT_JMPREL:
+- case DT_VERSYM:
+- case DT_VERDEF:
+- case DT_VERNEED:
+- case DT_ADDRRNGLO ... DT_ADDRRNGHI:
+- /* definitely pointers needing relocation */
+- dyn->d_un.d_ptr += VDSO_HIGH_BASE;
+- break;
+-
+- case DT_ENCODING ... OLD_DT_LOOS-1:
+- case DT_LOOS ... DT_HIOS-1:
+- /* Tags above DT_ENCODING are pointers if
+- they're even */
+- if (dyn->d_tag >= DT_ENCODING &&
+- (dyn->d_tag & 1) == 0)
+- dyn->d_un.d_ptr += VDSO_HIGH_BASE;
+- break;
+-
+- case DT_VERDEFNUM:
+- case DT_VERNEEDNUM:
+- case DT_FLAGS_1:
+- case DT_RELACOUNT:
+- case DT_RELCOUNT:
+- case DT_VALRNGLO ... DT_VALRNGHI:
+- /* definitely not pointers */
+- break;
-
-- complete(bio->bi_private);
+- case OLD_DT_LOOS ... DT_LOOS-1:
+- case DT_HIOS ... DT_VALRNGLO-1:
+- default:
+- if (dyn->d_tag > DT_ENCODING)
+- printk(KERN_INFO "VDSO: unexpected DT_tag %x\n",
+- dyn->d_tag);
+- break;
+- }
-}
-
--/**
-- * blkdev_issue_flush - queue a flush
-- * @bdev: blockdev to issue flush for
-- * @error_sector: error sector
-- *
-- * Description:
-- * Issue a flush for the block device in question. Caller can supply
-- * room for storing the error offset in case of a flush error, if they
-- * wish to. Caller must run wait_for_completion() on its own.
-- */
--int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
+-static __init void relocate_vdso(Elf32_Ehdr *ehdr)
-{
-- DECLARE_COMPLETION_ONSTACK(wait);
-- struct request_queue *q;
-- struct bio *bio;
-- int ret;
--
-- if (bdev->bd_disk == NULL)
-- return -ENXIO;
--
-- q = bdev_get_queue(bdev);
-- if (!q)
-- return -ENXIO;
--
-- bio = bio_alloc(GFP_KERNEL, 0);
-- if (!bio)
-- return -ENOMEM;
--
-- bio->bi_end_io = bio_end_empty_barrier;
-- bio->bi_private = &wait;
-- bio->bi_bdev = bdev;
-- submit_bio(1 << BIO_RW_BARRIER, bio);
--
-- wait_for_completion(&wait);
+- Elf32_Phdr *phdr;
+- Elf32_Shdr *shdr;
+- int i;
-
-- /*
-- * The driver must store the error location in ->bi_sector, if
-- * it supports it. For non-stacked drivers, this should be copied
-- * from rq->sector.
-- */
-- if (error_sector)
-- *error_sector = bio->bi_sector;
+- BUG_ON(memcmp(ehdr->e_ident, ELFMAG, 4) != 0 ||
+- !elf_check_arch(ehdr) ||
+- ehdr->e_type != ET_DYN);
+-
+- ehdr->e_entry += VDSO_HIGH_BASE;
+-
+- /* rebase phdrs */
+- phdr = (void *)ehdr + ehdr->e_phoff;
+- for (i = 0; i < ehdr->e_phnum; i++) {
+- phdr[i].p_vaddr += VDSO_HIGH_BASE;
+-
+- /* relocate dynamic stuff */
+- if (phdr[i].p_type == PT_DYNAMIC)
+- reloc_dyn(ehdr, phdr[i].p_offset);
+- }
+-
+- /* rebase sections */
+- shdr = (void *)ehdr + ehdr->e_shoff;
+- for(i = 0; i < ehdr->e_shnum; i++) {
+- if (!(shdr[i].sh_flags & SHF_ALLOC))
+- continue;
-
-- ret = 0;
-- if (!bio_flagged(bio, BIO_UPTODATE))
-- ret = -EIO;
+- shdr[i].sh_addr += VDSO_HIGH_BASE;
-
-- bio_put(bio);
-- return ret;
+- if (shdr[i].sh_type == SHT_SYMTAB ||
+- shdr[i].sh_type == SHT_DYNSYM)
+- reloc_symtab(ehdr, shdr[i].sh_offset,
+- shdr[i].sh_size);
+- }
-}
-
--EXPORT_SYMBOL(blkdev_issue_flush);
--
--static void drive_stat_acct(struct request *rq, int new_io)
+-void enable_sep_cpu(void)
-{
-- int rw = rq_data_dir(rq);
+- int cpu = get_cpu();
+- struct tss_struct *tss = &per_cpu(init_tss, cpu);
-
-- if (!blk_fs_request(rq) || !rq->rq_disk)
+- if (!boot_cpu_has(X86_FEATURE_SEP)) {
+- put_cpu();
- return;
--
-- if (!new_io) {
-- __disk_stat_inc(rq->rq_disk, merges[rw]);
-- } else {
-- disk_round_stats(rq->rq_disk);
-- rq->rq_disk->in_flight++;
- }
--}
--
--/*
-- * add-request adds a request to the linked list.
-- * queue lock is held and interrupts disabled, as we muck with the
-- * request queue list.
-- */
--static inline void add_request(struct request_queue * q, struct request * req)
--{
-- drive_stat_acct(req, 1);
-
+- tss->x86_tss.ss1 = __KERNEL_CS;
+- tss->x86_tss.esp1 = sizeof(struct tss_struct) + (unsigned long) tss;
+- wrmsr(MSR_IA32_SYSENTER_CS, __KERNEL_CS, 0);
+- wrmsr(MSR_IA32_SYSENTER_ESP, tss->x86_tss.esp1, 0);
+- wrmsr(MSR_IA32_SYSENTER_EIP, (unsigned long) sysenter_entry, 0);
+- put_cpu();
+-}
+-
+-static struct vm_area_struct gate_vma;
+-
+-static int __init gate_vma_init(void)
+-{
+- gate_vma.vm_mm = NULL;
+- gate_vma.vm_start = FIXADDR_USER_START;
+- gate_vma.vm_end = FIXADDR_USER_END;
+- gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
+- gate_vma.vm_page_prot = __P101;
- /*
-- * elevator indicated where it wants this request to be
-- * inserted at elevator_merge time
+- * Make sure the vDSO gets into every core dump.
+- * Dumping its contents makes post-mortem fully interpretable later
+- * without matching up the same kernel and hardware config to see
+- * what PC values meant.
- */
-- __elv_add_request(q, req, ELEVATOR_INSERT_SORT, 0);
+- gate_vma.vm_flags |= VM_ALWAYSDUMP;
+- return 0;
-}
--
+-
-/*
-- * disk_round_stats() - Round off the performance stats on a struct
-- * disk_stats.
-- *
-- * The average IO queue length and utilisation statistics are maintained
-- * by observing the current state of the queue length and the amount of
-- * time it has been in this state for.
-- *
-- * Normally, that accounting is done on IO completion, but that can result
-- * in more than a second's worth of IO being accounted for within any one
-- * second, leading to >100% utilisation. To deal with that, we call this
-- * function to do a round-off before returning the results when reading
-- * /proc/diskstats. This accounts immediately for all queue usage up to
-- * the current jiffies and restarts the counters again.
+- * These symbols are defined by vsyscall.o to mark the bounds
+- * of the ELF DSO images included therein.
- */
--void disk_round_stats(struct gendisk *disk)
+-extern const char vsyscall_int80_start, vsyscall_int80_end;
+-extern const char vsyscall_sysenter_start, vsyscall_sysenter_end;
+-static struct page *syscall_pages[1];
+-
+-static void map_compat_vdso(int map)
-{
-- unsigned long now = jiffies;
+- static int vdso_mapped;
-
-- if (now == disk->stamp)
+- if (map == vdso_mapped)
- return;
-
-- if (disk->in_flight) {
-- __disk_stat_add(disk, time_in_queue,
-- disk->in_flight * (now - disk->stamp));
-- __disk_stat_add(disk, io_ticks, (now - disk->stamp));
-- }
-- disk->stamp = now;
--}
+- vdso_mapped = map;
-
--EXPORT_SYMBOL_GPL(disk_round_stats);
+- __set_fixmap(FIX_VDSO, page_to_pfn(syscall_pages[0]) << PAGE_SHIFT,
+- map ? PAGE_READONLY_EXEC : PAGE_NONE);
-
--/*
-- * queue lock must be held
-- */
--void __blk_put_request(struct request_queue *q, struct request *req)
+- /* flush stray tlbs */
+- flush_tlb_all();
+-}
+-
+-int __init sysenter_setup(void)
-{
-- if (unlikely(!q))
-- return;
-- if (unlikely(--req->ref_count))
-- return;
+- void *syscall_page = (void *)get_zeroed_page(GFP_ATOMIC);
+- const void *vsyscall;
+- size_t vsyscall_len;
-
-- elv_completed_request(q, req);
+- syscall_pages[0] = virt_to_page(syscall_page);
-
-- /*
-- * Request may not have originated from ll_rw_blk. if not,
-- * it didn't come out of our reserved rq pools
-- */
-- if (req->cmd_flags & REQ_ALLOCED) {
-- int rw = rq_data_dir(req);
-- int priv = req->cmd_flags & REQ_ELVPRIV;
+- gate_vma_init();
-
-- BUG_ON(!list_empty(&req->queuelist));
-- BUG_ON(!hlist_unhashed(&req->hash));
+- printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));
-
-- blk_free_request(q, req);
-- freed_request(q, rw, priv);
+- if (!boot_cpu_has(X86_FEATURE_SEP)) {
+- vsyscall = &vsyscall_int80_start;
+- vsyscall_len = &vsyscall_int80_end - &vsyscall_int80_start;
+- } else {
+- vsyscall = &vsyscall_sysenter_start;
+- vsyscall_len = &vsyscall_sysenter_end - &vsyscall_sysenter_start;
- }
--}
--
--EXPORT_SYMBOL_GPL(__blk_put_request);
-
--void blk_put_request(struct request *req)
--{
-- unsigned long flags;
-- struct request_queue *q = req->q;
+- memcpy(syscall_page, vsyscall, vsyscall_len);
+- relocate_vdso(syscall_page);
-
-- /*
-- * Gee, IDE calls in w/ NULL q. Fix IDE and remove the
-- * following if (q) test.
-- */
-- if (q) {
-- spin_lock_irqsave(q->queue_lock, flags);
-- __blk_put_request(q, req);
-- spin_unlock_irqrestore(q->queue_lock, flags);
-- }
+- return 0;
-}
-
--EXPORT_SYMBOL(blk_put_request);
--
--/**
-- * blk_end_sync_rq - executes a completion event on a request
-- * @rq: request to complete
-- * @error: end io status of the request
-- */
--void blk_end_sync_rq(struct request *rq, int error)
--{
-- struct completion *waiting = rq->end_io_data;
--
-- rq->end_io_data = NULL;
-- __blk_put_request(rq->q, rq);
--
-- /*
-- * complete last, if this is a stack request the process (and thus
-- * the rq pointer) could be invalid right after this complete()
-- */
-- complete(waiting);
--}
--EXPORT_SYMBOL(blk_end_sync_rq);
+-/* Defined in vsyscall-sysenter.S */
+-extern void SYSENTER_RETURN;
-
--/*
-- * Has to be called with the request spinlock acquired
-- */
--static int attempt_merge(struct request_queue *q, struct request *req,
-- struct request *next)
+-/* Setup a VMA at program startup for the vsyscall page */
+-int arch_setup_additional_pages(struct linux_binprm *bprm, int exstack)
-{
-- if (!rq_mergeable(req) || !rq_mergeable(next))
-- return 0;
--
-- /*
-- * not contiguous
-- */
-- if (req->sector + req->nr_sectors != next->sector)
-- return 0;
--
-- if (rq_data_dir(req) != rq_data_dir(next)
-- || req->rq_disk != next->rq_disk
-- || next->special)
-- return 0;
+- struct mm_struct *mm = current->mm;
+- unsigned long addr;
+- int ret = 0;
+- bool compat;
-
-- /*
-- * If we are allowed to merge, then append bio list
-- * from next to rq and release next. merge_requests_fn
-- * will have updated segment counts, update sector
-- * counts here.
-- */
-- if (!ll_merge_requests_fn(q, req, next))
-- return 0;
+- down_write(&mm->mmap_sem);
-
-- /*
-- * At this point we have either done a back merge
-- * or front merge. We need the smaller start_time of
-- * the merged requests to be the current request
-- * for accounting purposes.
-- */
-- if (time_after(req->start_time, next->start_time))
-- req->start_time = next->start_time;
+- /* Test compat mode once here, in case someone
+- changes it via sysctl */
+- compat = (vdso_enabled == VDSO_COMPAT);
-
-- req->biotail->bi_next = next->bio;
-- req->biotail = next->biotail;
+- map_compat_vdso(compat);
-
-- req->nr_sectors = req->hard_nr_sectors += next->hard_nr_sectors;
+- if (compat)
+- addr = VDSO_HIGH_BASE;
+- else {
+- addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+- if (IS_ERR_VALUE(addr)) {
+- ret = addr;
+- goto up_fail;
+- }
-
-- elv_merge_requests(q, req, next);
+- /*
+- * MAYWRITE to allow gdb to COW and set breakpoints
+- *
+- * Make sure the vDSO gets into every core dump.
+- * Dumping its contents makes post-mortem fully
+- * interpretable later without matching up the same
+- * kernel and hardware config to see what PC values
+- * meant.
+- */
+- ret = install_special_mapping(mm, addr, PAGE_SIZE,
+- VM_READ|VM_EXEC|
+- VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC|
+- VM_ALWAYSDUMP,
+- syscall_pages);
-
-- if (req->rq_disk) {
-- disk_round_stats(req->rq_disk);
-- req->rq_disk->in_flight--;
+- if (ret)
+- goto up_fail;
- }
-
-- req->ioprio = ioprio_best(req->ioprio, next->ioprio);
+- current->mm->context.vdso = (void *)addr;
+- current_thread_info()->sysenter_return =
+- (void *)VDSO_SYM(&SYSENTER_RETURN);
-
-- __blk_put_request(q, next);
-- return 1;
+- up_fail:
+- up_write(&mm->mmap_sem);
+-
+- return ret;
-}
-
--static inline int attempt_back_merge(struct request_queue *q,
-- struct request *rq)
+-const char *arch_vma_name(struct vm_area_struct *vma)
-{
-- struct request *next = elv_latter_request(q, rq);
--
-- if (next)
-- return attempt_merge(q, rq, next);
--
-- return 0;
+- if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
+- return "[vdso]";
+- return NULL;
-}
-
--static inline int attempt_front_merge(struct request_queue *q,
-- struct request *rq)
+-struct vm_area_struct *get_gate_vma(struct task_struct *tsk)
-{
-- struct request *prev = elv_former_request(q, rq);
+- struct mm_struct *mm = tsk->mm;
-
-- if (prev)
-- return attempt_merge(q, prev, rq);
--
-- return 0;
+- /* Check to see if this task was created in compat vdso mode */
+- if (mm && mm->context.vdso == (void *)VDSO_HIGH_BASE)
+- return &gate_vma;
+- return NULL;
-}
-
--static void init_request_from_bio(struct request *req, struct bio *bio)
+-int in_gate_area(struct task_struct *task, unsigned long addr)
-{
-- req->cmd_type = REQ_TYPE_FS;
--
-- /*
-- * inherit FAILFAST from bio (for read-ahead, and explicit FAILFAST)
-- */
-- if (bio_rw_ahead(bio) || bio_failfast(bio))
-- req->cmd_flags |= REQ_FAILFAST;
--
-- /*
-- * REQ_BARRIER implies no merging, but lets make it explicit
-- */
-- if (unlikely(bio_barrier(bio)))
-- req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
--
-- if (bio_sync(bio))
-- req->cmd_flags |= REQ_RW_SYNC;
-- if (bio_rw_meta(bio))
-- req->cmd_flags |= REQ_RW_META;
+- const struct vm_area_struct *vma = get_gate_vma(task);
-
-- req->errors = 0;
-- req->hard_sector = req->sector = bio->bi_sector;
-- req->ioprio = bio_prio(bio);
-- req->start_time = jiffies;
-- blk_rq_bio_prep(req->q, req, bio);
+- return vma && addr >= vma->vm_start && addr < vma->vm_end;
-}
-
--static int __make_request(struct request_queue *q, struct bio *bio)
+-int in_gate_area_no_task(unsigned long addr)
-{
-- struct request *req;
-- int el_ret, nr_sectors, barrier, err;
-- const unsigned short prio = bio_prio(bio);
-- const int sync = bio_sync(bio);
-- int rw_flags;
--
-- nr_sectors = bio_sectors(bio);
--
-- /*
-- * low level driver can indicate that it wants pages above a
-- * certain limit bounced to low memory (ie for highmem, or even
-- * ISA dma in theory)
-- */
-- blk_queue_bounce(q, &bio);
--
-- barrier = bio_barrier(bio);
-- if (unlikely(barrier) && (q->next_ordered == QUEUE_ORDERED_NONE)) {
-- err = -EOPNOTSUPP;
-- goto end_io;
-- }
--
-- spin_lock_irq(q->queue_lock);
--
-- if (unlikely(barrier) || elv_queue_empty(q))
-- goto get_rq;
--
-- el_ret = elv_merge(q, &req, bio);
-- switch (el_ret) {
-- case ELEVATOR_BACK_MERGE:
-- BUG_ON(!rq_mergeable(req));
--
-- if (!ll_back_merge_fn(q, req, bio))
-- break;
--
-- blk_add_trace_bio(q, bio, BLK_TA_BACKMERGE);
--
-- req->biotail->bi_next = bio;
-- req->biotail = bio;
-- req->nr_sectors = req->hard_nr_sectors += nr_sectors;
-- req->ioprio = ioprio_best(req->ioprio, prio);
-- drive_stat_acct(req, 0);
-- if (!attempt_back_merge(q, req))
-- elv_merged_request(q, req, el_ret);
-- goto out;
--
-- case ELEVATOR_FRONT_MERGE:
-- BUG_ON(!rq_mergeable(req));
--
-- if (!ll_front_merge_fn(q, req, bio))
-- break;
--
-- blk_add_trace_bio(q, bio, BLK_TA_FRONTMERGE);
--
-- bio->bi_next = req->bio;
-- req->bio = bio;
--
-- /*
-- * may not be valid. if the low level driver said
-- * it didn't need a bounce buffer then it better
-- * not touch req->buffer either...
-- */
-- req->buffer = bio_data(bio);
-- req->current_nr_sectors = bio_cur_sectors(bio);
-- req->hard_cur_sectors = req->current_nr_sectors;
-- req->sector = req->hard_sector = bio->bi_sector;
-- req->nr_sectors = req->hard_nr_sectors += nr_sectors;
-- req->ioprio = ioprio_best(req->ioprio, prio);
-- drive_stat_acct(req, 0);
-- if (!attempt_front_merge(q, req))
-- elv_merged_request(q, req, el_ret);
-- goto out;
--
-- /* ELV_NO_MERGE: elevator says don't/can't merge. */
-- default:
-- ;
-- }
--
--get_rq:
-- /*
-- * This sync check and mask will be re-done in init_request_from_bio(),
-- * but we need to set it earlier to expose the sync flag to the
-- * rq allocator and io schedulers.
-- */
-- rw_flags = bio_data_dir(bio);
-- if (sync)
-- rw_flags |= REQ_RW_SYNC;
--
-- /*
-- * Grab a free request. This is might sleep but can not fail.
-- * Returns with the queue unlocked.
-- */
-- req = get_request_wait(q, rw_flags, bio);
--
-- /*
-- * After dropping the lock and possibly sleeping here, our request
-- * may now be mergeable after it had proven unmergeable (above).
-- * We don't worry about that case for efficiency. It won't happen
-- * often, and the elevators are able to handle it.
-- */
-- init_request_from_bio(req, bio);
--
-- spin_lock_irq(q->queue_lock);
-- if (elv_queue_empty(q))
-- blk_plug_device(q);
-- add_request(q, req);
--out:
-- if (sync)
-- __generic_unplug_device(q);
--
-- spin_unlock_irq(q->queue_lock);
-- return 0;
--
--end_io:
-- bio_endio(bio, err);
- return 0;
-}
+diff --git a/arch/x86/kernel/test_nx.c b/arch/x86/kernel/test_nx.c
+new file mode 100644
+index 0000000..6d7ef11
+--- /dev/null
++++ b/arch/x86/kernel/test_nx.c
+@@ -0,0 +1,176 @@
++/*
++ * test_nx.c: functional test for NX functionality
++ *
++ * (C) Copyright 2008 Intel Corporation
++ * Author: Arjan van de Ven <arjan at linux.intel.com>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * as published by the Free Software Foundation; version 2
++ * of the License.
++ */
++#include <linux/module.h>
++#include <linux/sort.h>
++#include <asm/uaccess.h>
++
++extern int rodata_test_data;
++
++/*
++ * This file checks 4 things:
++ * 1) Check if the stack is not executable
++ * 2) Check if kmalloc memory is not executable
++ * 3) Check if the .rodata section is not executable
++ * 4) Check if the .data section of a module is not executable
++ *
++ * To do this, the test code tries to execute memory in stack/kmalloc/etc,
++ * and then checks if the expected trap happens.
++ *
++ * Sadly, this implies having a dynamic exception handling table entry.
++ * ... which can be done (and will make Rusty cry)... but it can only
++ * be done in a stand-alone module with only 1 entry total.
++ * (otherwise we'd have to sort and that's just too messy)
++ */
++
++
++
++/*
++ * We want to set up an exception handling point on our stack,
++ * which means a variable value. This function is rather dirty
++ * and walks the exception table of the module, looking for a magic
++ * marker and replaces it with a specific function.
++ */
++static void fudze_exception_table(void *marker, void *new)
++{
++ struct module *mod = THIS_MODULE;
++ struct exception_table_entry *extable;
++
++ /*
++ * Note: This module has only 1 exception table entry,
++ * so searching and sorting is not needed. If that changes,
++ * this would be the place to search and re-sort the exception
++ * table.
++ */
++ if (mod->num_exentries > 1) {
++ printk(KERN_ERR "test_nx: too many exception table entries!\n");
++ printk(KERN_ERR "test_nx: test results are not reliable.\n");
++ return;
++ }
++ extable = (struct exception_table_entry *)mod->extable;
++ extable[0].insn = (unsigned long)new;
++}
++
++
++/*
++ * exception tables get their symbols translated so we need
++ * to use a fake function to put in there, which we can then
++ * replace at runtime.
++ */
++void foo_label(void);
++
++/*
++ * returns 0 for not-executable, negative for executable
++ *
++ * Note: we cannot allow this function to be inlined, because
++ * that would give us more than 1 exception table entry.
++ * This in turn would break the assumptions above.
++ */
++static noinline int test_address(void *address)
++{
++ unsigned long result;
++
++ /* Set up an exception table entry for our address */
++ fudze_exception_table(&foo_label, address);
++ result = 1;
++ asm volatile(
++ "foo_label:\n"
++ "0: call *%[fake_code]\n"
++ "1:\n"
++ ".section .fixup,\"ax\"\n"
++ "2: mov %[zero], %[rslt]\n"
++ " ret\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .align 8\n"
++ " .quad 0b\n"
++ " .quad 2b\n"
++ ".previous\n"
++ : [rslt] "=r" (result)
++ : [fake_code] "r" (address), [zero] "r" (0UL), "0" (result)
++ );
++ /* change the exception table back for the next round */
++ fudze_exception_table(address, &foo_label);
++
++ if (result)
++ return -ENODEV;
++ return 0;
++}
++
++static unsigned char test_data = 0xC3; /* 0xC3 is the opcode for "ret" */
++
++static int test_NX(void)
++{
++ int ret = 0;
++ /* 0xC3 is the opcode for "ret" */
++ char stackcode[] = {0xC3, 0x90, 0 };
++ char *heap;
++
++ test_data = 0xC3;
++
++ printk(KERN_INFO "Testing NX protection\n");
++
++ /* Test 1: check if the stack is not executable */
++ if (test_address(&stackcode)) {
++ printk(KERN_ERR "test_nx: stack was executable\n");
++ ret = -ENODEV;
++ }
++
++
++ /* Test 2: Check if the heap is executable */
++ heap = kmalloc(64, GFP_KERNEL);
++ if (!heap)
++ return -ENOMEM;
++ heap[0] = 0xC3; /* opcode for "ret" */
++
++ if (test_address(heap)) {
++ printk(KERN_ERR "test_nx: heap was executable\n");
++ ret = -ENODEV;
++ }
++ kfree(heap);
++
++ /*
++ * The following 2 tests currently fail, this needs to get fixed
++ * Until then, don't run them to avoid too many people getting scared
++ * by the error message
++ */
++#if 0
++
++#ifdef CONFIG_DEBUG_RODATA
++ /* Test 3: Check if the .rodata section is executable */
++ if (rodata_test_data != 0xC3) {
++ printk(KERN_ERR "test_nx: .rodata marker has invalid value\n");
++ ret = -ENODEV;
++ } else if (test_address(&rodata_test_data)) {
++ printk(KERN_ERR "test_nx: .rodata section is executable\n");
++ ret = -ENODEV;
++ }
++#endif
++
++ /* Test 4: Check if the .data section of a module is executable */
++ if (test_address(&test_data)) {
++ printk(KERN_ERR "test_nx: .data section is executable\n");
++ ret = -ENODEV;
++ }
++
++#endif
++ return 0;
++}
++
++static void test_exit(void)
++{
++}
++
++module_init(test_NX);
++module_exit(test_exit);
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Testcase for the NX infrastructure");
++MODULE_AUTHOR("Arjan van de Ven <arjan at linux.intel.com>");
+diff --git a/arch/x86/kernel/test_rodata.c b/arch/x86/kernel/test_rodata.c
+new file mode 100644
+index 0000000..4c16377
+--- /dev/null
++++ b/arch/x86/kernel/test_rodata.c
+@@ -0,0 +1,86 @@
++/*
++ * test_rodata.c: functional test for mark_rodata_ro function
++ *
++ * (C) Copyright 2008 Intel Corporation
++ * Author: Arjan van de Ven <arjan at linux.intel.com>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * as published by the Free Software Foundation; version 2
++ * of the License.
++ */
++#include <linux/module.h>
++#include <asm/sections.h>
++extern int rodata_test_data;
++
++int rodata_test(void)
++{
++ unsigned long result;
++ unsigned long start, end;
++
++ /* test 1: read the value */
++ /* If this test fails, some previous testrun has clobbered the state */
++ if (!rodata_test_data) {
++ printk(KERN_ERR "rodata_test: test 1 fails (start data)\n");
++ return -ENODEV;
++ }
++
++ /* test 2: write to the variable; this should fault */
++ /*
++ * If this test fails, we managed to overwrite the data
++ *
++ * This is written in assembly to be able to catch the
++ * exception that is supposed to happen in the correct
++ * case
++ */
++
++ result = 1;
++ asm volatile(
++ "0: mov %[zero],(%[rodata_test])\n"
++ " mov %[zero], %[rslt]\n"
++ "1:\n"
++ ".section .fixup,\"ax\"\n"
++ "2: jmp 1b\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .align 16\n"
++#ifdef CONFIG_X86_32
++ " .long 0b,2b\n"
++#else
++ " .quad 0b,2b\n"
++#endif
++ ".previous"
++ : [rslt] "=r" (result)
++ : [rodata_test] "r" (&rodata_test_data), [zero] "r" (0UL)
++ );
++
++
++ if (!result) {
++ printk(KERN_ERR "rodata_test: test data was not read only\n");
++ return -ENODEV;
++ }
++
++ /* test 3: check the value hasn't changed */
++ /* If this test fails, we managed to overwrite the data */
++ if (!rodata_test_data) {
++ printk(KERN_ERR "rodata_test: Test 3 failes (end data)\n");
++ return -ENODEV;
++ }
++ /* test 4: check if the rodata section is 4Kb aligned */
++ start = (unsigned long)__start_rodata;
++ end = (unsigned long)__end_rodata;
++ if (start & (PAGE_SIZE - 1)) {
++ printk(KERN_ERR "rodata_test: .rodata is not 4k aligned\n");
++ return -ENODEV;
++ }
++ if (end & (PAGE_SIZE - 1)) {
++ printk(KERN_ERR "rodata_test: .rodata end is not 4k aligned\n");
++ return -ENODEV;
++ }
++
++ return 0;
++}
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Testcase for the DEBUG_RODATA infrastructure");
++MODULE_AUTHOR("Arjan van de Ven <arjan at linux.intel.com>");
+diff --git a/arch/x86/kernel/time_32.c b/arch/x86/kernel/time_32.c
+index 8a322c9..1a89e93 100644
+--- a/arch/x86/kernel/time_32.c
++++ b/arch/x86/kernel/time_32.c
+@@ -28,98 +28,20 @@
+ * serialize accesses to xtime/lost_ticks).
+ */
+
+-#include <linux/errno.h>
+-#include <linux/sched.h>
+-#include <linux/kernel.h>
+-#include <linux/param.h>
+-#include <linux/string.h>
+-#include <linux/mm.h>
++#include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/time.h>
+-#include <linux/delay.h>
+-#include <linux/init.h>
+-#include <linux/smp.h>
+-#include <linux/module.h>
+-#include <linux/sysdev.h>
+-#include <linux/bcd.h>
+-#include <linux/efi.h>
+ #include <linux/mca.h>
+
+-#include <asm/io.h>
+-#include <asm/smp.h>
+-#include <asm/irq.h>
+-#include <asm/msr.h>
+-#include <asm/delay.h>
+-#include <asm/mpspec.h>
+-#include <asm/uaccess.h>
+-#include <asm/processor.h>
+-#include <asm/timer.h>
+-#include <asm/time.h>
-
--/*
-- * If bio->bi_dev is a partition, remap the location
-- */
--static inline void blk_partition_remap(struct bio *bio)
--{
-- struct block_device *bdev = bio->bi_bdev;
--
-- if (bio_sectors(bio) && bdev != bdev->bd_contains) {
-- struct hd_struct *p = bdev->bd_part;
-- const int rw = bio_data_dir(bio);
--
-- p->sectors[rw] += bio_sectors(bio);
-- p->ios[rw]++;
--
-- bio->bi_sector += p->start_sect;
-- bio->bi_bdev = bdev->bd_contains;
+-#include "mach_time.h"
-
-- blk_add_trace_remap(bdev_get_queue(bio->bi_bdev), bio,
-- bdev->bd_dev, bio->bi_sector,
-- bio->bi_sector - p->start_sect);
-- }
--}
+-#include <linux/timex.h>
-
--static void handle_bad_sector(struct bio *bio)
--{
-- char b[BDEVNAME_SIZE];
+-#include <asm/hpet.h>
-
-- printk(KERN_INFO "attempt to access beyond end of device\n");
-- printk(KERN_INFO "%s: rw=%ld, want=%Lu, limit=%Lu\n",
-- bdevname(bio->bi_bdev, b),
-- bio->bi_rw,
-- (unsigned long long)bio->bi_sector + bio_sectors(bio),
-- (long long)(bio->bi_bdev->bd_inode->i_size >> 9));
+ #include <asm/arch_hooks.h>
-
-- set_bit(BIO_EOF, &bio->bi_flags);
--}
+-#include "io_ports.h"
-
--#ifdef CONFIG_FAIL_MAKE_REQUEST
+-#include <asm/i8259.h>
++#include <asm/hpet.h>
++#include <asm/time.h>
+
+ #include "do_timer.h"
+
+ unsigned int cpu_khz; /* Detected as we calibrate the TSC */
+ EXPORT_SYMBOL(cpu_khz);
+
+-DEFINE_SPINLOCK(rtc_lock);
+-EXPORT_SYMBOL(rtc_lock);
-
--static DECLARE_FAULT_ATTR(fail_make_request);
+-/*
+- * This is a special lock that is owned by the CPU and holds the index
+- * register we are working with. It is required for NMI access to the
+- * CMOS/RTC registers. See include/asm-i386/mc146818rtc.h for details.
+- */
+-volatile unsigned long cmos_lock = 0;
+-EXPORT_SYMBOL(cmos_lock);
-
--static int __init setup_fail_make_request(char *str)
+-/* Routines for accessing the CMOS RAM/RTC. */
+-unsigned char rtc_cmos_read(unsigned char addr)
-{
-- return setup_fault_attr(&fail_make_request, str);
+- unsigned char val;
+- lock_cmos_prefix(addr);
+- outb_p(addr, RTC_PORT(0));
+- val = inb_p(RTC_PORT(1));
+- lock_cmos_suffix(addr);
+- return val;
-}
--__setup("fail_make_request=", setup_fail_make_request);
+-EXPORT_SYMBOL(rtc_cmos_read);
-
--static int should_fail_request(struct bio *bio)
+-void rtc_cmos_write(unsigned char val, unsigned char addr)
-{
-- if ((bio->bi_bdev->bd_disk->flags & GENHD_FL_FAIL) ||
-- (bio->bi_bdev->bd_part && bio->bi_bdev->bd_part->make_it_fail))
-- return should_fail(&fail_make_request, bio->bi_size);
--
-- return 0;
+- lock_cmos_prefix(addr);
+- outb_p(addr, RTC_PORT(0));
+- outb_p(val, RTC_PORT(1));
+- lock_cmos_suffix(addr);
-}
+-EXPORT_SYMBOL(rtc_cmos_write);
-
--static int __init fail_make_request_debugfs(void)
+-static int set_rtc_mmss(unsigned long nowtime)
-{
-- return init_fault_attr_dentries(&fail_make_request,
-- "fail_make_request");
--}
--
--late_initcall(fail_make_request_debugfs);
+- int retval;
+- unsigned long flags;
-
--#else /* CONFIG_FAIL_MAKE_REQUEST */
+- /* gets recalled with irq locally disabled */
+- /* XXX - does irqsave resolve this? -johnstul */
+- spin_lock_irqsave(&rtc_lock, flags);
+- retval = set_wallclock(nowtime);
+- spin_unlock_irqrestore(&rtc_lock, flags);
-
--static inline int should_fail_request(struct bio *bio)
--{
-- return 0;
+- return retval;
-}
-
--#endif /* CONFIG_FAIL_MAKE_REQUEST */
-
--/*
-- * Check whether this bio extends beyond the end of the device.
-- */
--static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors)
+ int timer_ack;
+
+ unsigned long profile_pc(struct pt_regs *regs)
+@@ -127,17 +49,17 @@ unsigned long profile_pc(struct pt_regs *regs)
+ unsigned long pc = instruction_pointer(regs);
+
+ #ifdef CONFIG_SMP
+- if (!v8086_mode(regs) && SEGMENT_IS_KERNEL_CODE(regs->xcs) &&
++ if (!v8086_mode(regs) && SEGMENT_IS_KERNEL_CODE(regs->cs) &&
+ in_lock_functions(pc)) {
+ #ifdef CONFIG_FRAME_POINTER
+- return *(unsigned long *)(regs->ebp + 4);
++ return *(unsigned long *)(regs->bp + 4);
+ #else
+- unsigned long *sp = (unsigned long *)&regs->esp;
++ unsigned long *sp = (unsigned long *)&regs->sp;
+
+ /* Return address is either directly at stack pointer
+- or above a saved eflags. Eflags has bits 22-31 zero,
++ or above a saved flags. Eflags has bits 22-31 zero,
+ kernel addresses don't. */
+- if (sp[0] >> 22)
++ if (sp[0] >> 22)
+ return sp[0];
+ if (sp[1] >> 22)
+ return sp[1];
+@@ -193,26 +115,6 @@ irqreturn_t timer_interrupt(int irq, void *dev_id)
+ return IRQ_HANDLED;
+ }
+
+-/* not static: needed by APM */
+-unsigned long read_persistent_clock(void)
-{
-- sector_t maxsector;
+- unsigned long retval;
+- unsigned long flags;
-
-- if (!nr_sectors)
-- return 0;
+- spin_lock_irqsave(&rtc_lock, flags);
-
-- /* Test device or partition size, when known. */
-- maxsector = bio->bi_bdev->bd_inode->i_size >> 9;
-- if (maxsector) {
-- sector_t sector = bio->bi_sector;
+- retval = get_wallclock();
-
-- if (maxsector < nr_sectors || maxsector - nr_sectors < sector) {
-- /*
-- * This may well happen - the kernel calls bread()
-- * without checking the size of the device, e.g., when
-- * mounting a device.
-- */
-- handle_bad_sector(bio);
-- return 1;
-- }
-- }
+- spin_unlock_irqrestore(&rtc_lock, flags);
-
-- return 0;
+- return retval;
-}
-
--/**
-- * generic_make_request: hand a buffer to its device driver for I/O
-- * @bio: The bio describing the location in memory and on the device.
-- *
-- * generic_make_request() is used to make I/O requests of block
-- * devices. It is passed a &struct bio, which describes the I/O that needs
-- * to be done.
-- *
-- * generic_make_request() does not return any status. The
-- * success/failure status of the request, along with notification of
-- * completion, is delivered asynchronously through the bio->bi_end_io
-- * function described (one day) else where.
-- *
-- * The caller of generic_make_request must make sure that bi_io_vec
-- * are set to describe the memory buffer, and that bi_dev and bi_sector are
-- * set to describe the device address, and the
-- * bi_end_io and optionally bi_private are set to describe how
-- * completion notification should be signaled.
-- *
-- * generic_make_request and the drivers it calls may use bi_next if this
-- * bio happens to be merged with someone else, and may change bi_dev and
-- * bi_sector for remaps as it sees fit. So the values of these fields
-- * should NOT be depended on after the call to generic_make_request.
-- */
--static inline void __generic_make_request(struct bio *bio)
+-int update_persistent_clock(struct timespec now)
-{
-- struct request_queue *q;
-- sector_t old_sector;
-- int ret, nr_sectors = bio_sectors(bio);
-- dev_t old_dev;
-- int err = -EIO;
+- return set_rtc_mmss(now.tv_sec);
+-}
-
-- might_sleep();
+ extern void (*late_time_init)(void);
+ /* Duplicate of time_init() below, with hpet_enable part added */
+ void __init hpet_time_init(void)
+diff --git a/arch/x86/kernel/time_64.c b/arch/x86/kernel/time_64.c
+index 368b194..0380795 100644
+--- a/arch/x86/kernel/time_64.c
++++ b/arch/x86/kernel/time_64.c
+@@ -11,43 +11,18 @@
+ * RTC support code taken from arch/i386/kernel/timers/time_hpet.c
+ */
+
+-#include <linux/kernel.h>
+-#include <linux/sched.h>
+-#include <linux/interrupt.h>
++#include <linux/clockchips.h>
+ #include <linux/init.h>
+-#include <linux/mc146818rtc.h>
+-#include <linux/time.h>
+-#include <linux/ioport.h>
++#include <linux/interrupt.h>
+ #include <linux/module.h>
+-#include <linux/device.h>
+-#include <linux/sysdev.h>
+-#include <linux/bcd.h>
+-#include <linux/notifier.h>
+-#include <linux/cpu.h>
+-#include <linux/kallsyms.h>
+-#include <linux/acpi.h>
+-#include <linux/clockchips.h>
++#include <linux/time.h>
+
+-#ifdef CONFIG_ACPI
+-#include <acpi/achware.h> /* for PM timer frequency */
+-#include <acpi/acpi_bus.h>
+-#endif
+ #include <asm/i8253.h>
+-#include <asm/pgtable.h>
+-#include <asm/vsyscall.h>
+-#include <asm/timex.h>
+-#include <asm/proto.h>
+-#include <asm/hpet.h>
+-#include <asm/sections.h>
+-#include <linux/hpet.h>
+-#include <asm/apic.h>
+ #include <asm/hpet.h>
+-#include <asm/mpspec.h>
+ #include <asm/nmi.h>
+ #include <asm/vgtod.h>
-
-- if (bio_check_eod(bio, nr_sectors))
-- goto end_io;
+-DEFINE_SPINLOCK(rtc_lock);
+-EXPORT_SYMBOL(rtc_lock);
++#include <asm/time.h>
++#include <asm/timer.h>
+
+ volatile unsigned long __jiffies __section_jiffies = INITIAL_JIFFIES;
+
+@@ -56,10 +31,10 @@ unsigned long profile_pc(struct pt_regs *regs)
+ unsigned long pc = instruction_pointer(regs);
+
+ /* Assume the lock function has either no stack frame or a copy
+- of eflags from PUSHF
++ of flags from PUSHF
+ Eflags always has bits 22 and up cleared unlike kernel addresses. */
+ if (!user_mode(regs) && in_lock_functions(pc)) {
+- unsigned long *sp = (unsigned long *)regs->rsp;
++ unsigned long *sp = (unsigned long *)regs->sp;
+ if (sp[0] >> 22)
+ return sp[0];
+ if (sp[1] >> 22)
+@@ -69,82 +44,6 @@ unsigned long profile_pc(struct pt_regs *regs)
+ }
+ EXPORT_SYMBOL(profile_pc);
+
+-/*
+- * In order to set the CMOS clock precisely, set_rtc_mmss has to be called 500
+- * ms after the second nowtime has started, because when nowtime is written
+- * into the registers of the CMOS clock, it will jump to the next second
+- * precisely 500 ms later. Check the Motorola MC146818A or Dallas DS12887 data
+- * sheet for details.
+- */
-
-- /*
-- * Resolve the mapping until finished. (drivers are
-- * still free to implement/resolve their own stacking
-- * by explicitly returning 0)
-- *
-- * NOTE: we don't repeat the blk_size check for each new device.
-- * Stacking drivers are expected to know what they are doing.
-- */
-- old_sector = -1;
-- old_dev = 0;
-- do {
-- char b[BDEVNAME_SIZE];
+-static int set_rtc_mmss(unsigned long nowtime)
+-{
+- int retval = 0;
+- int real_seconds, real_minutes, cmos_minutes;
+- unsigned char control, freq_select;
+- unsigned long flags;
-
-- q = bdev_get_queue(bio->bi_bdev);
-- if (!q) {
-- printk(KERN_ERR
-- "generic_make_request: Trying to access "
-- "nonexistent block-device %s (%Lu)\n",
-- bdevname(bio->bi_bdev, b),
-- (long long) bio->bi_sector);
--end_io:
-- bio_endio(bio, err);
-- break;
-- }
+-/*
+- * set_rtc_mmss is called when irqs are enabled, so disable irqs here
+- */
+- spin_lock_irqsave(&rtc_lock, flags);
+-/*
+- * Tell the clock it's being set and stop it.
+- */
+- control = CMOS_READ(RTC_CONTROL);
+- CMOS_WRITE(control | RTC_SET, RTC_CONTROL);
-
-- if (unlikely(nr_sectors > q->max_hw_sectors)) {
-- printk("bio too big device %s (%u > %u)\n",
-- bdevname(bio->bi_bdev, b),
-- bio_sectors(bio),
-- q->max_hw_sectors);
-- goto end_io;
-- }
+- freq_select = CMOS_READ(RTC_FREQ_SELECT);
+- CMOS_WRITE(freq_select | RTC_DIV_RESET2, RTC_FREQ_SELECT);
-
-- if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
-- goto end_io;
+- cmos_minutes = CMOS_READ(RTC_MINUTES);
+- BCD_TO_BIN(cmos_minutes);
-
-- if (should_fail_request(bio))
-- goto end_io;
+-/*
+- * since we're only adjusting minutes and seconds, don't interfere with hour
+- * overflow. This avoids messing with unknown time zones but requires your RTC
+- * not to be off by more than 15 minutes. Since we're calling it only when
+- * our clock is externally synchronized using NTP, this shouldn't be a problem.
+- */
-
-- /*
-- * If this device has partitions, remap block n
-- * of partition p to block n+start(p) of the disk.
-- */
-- blk_partition_remap(bio);
+- real_seconds = nowtime % 60;
+- real_minutes = nowtime / 60;
+- if (((abs(real_minutes - cmos_minutes) + 15) / 30) & 1)
+- real_minutes += 30; /* correct for half hour time zone */
+- real_minutes %= 60;
-
-- if (old_sector != -1)
-- blk_add_trace_remap(q, bio, old_dev, bio->bi_sector,
-- old_sector);
+- if (abs(real_minutes - cmos_minutes) >= 30) {
+- printk(KERN_WARNING "time.c: can't update CMOS clock "
+- "from %d to %d\n", cmos_minutes, real_minutes);
+- retval = -1;
+- } else {
+- BIN_TO_BCD(real_seconds);
+- BIN_TO_BCD(real_minutes);
+- CMOS_WRITE(real_seconds, RTC_SECONDS);
+- CMOS_WRITE(real_minutes, RTC_MINUTES);
+- }
-
-- blk_add_trace_bio(q, bio, BLK_TA_QUEUE);
+-/*
+- * The following flags have to be released exactly in this order, otherwise the
+- * DS12887 (popular MC146818A clone with integrated battery and quartz) will
+- * not reset the oscillator and will not update precisely 500 ms later. You
+- * won't find this mentioned in the Dallas Semiconductor data sheets, but who
+- * believes data sheets anyway ... -- Markus Kuhn
+- */
-
-- old_sector = bio->bi_sector;
-- old_dev = bio->bi_bdev->bd_dev;
+- CMOS_WRITE(control, RTC_CONTROL);
+- CMOS_WRITE(freq_select, RTC_FREQ_SELECT);
-
-- if (bio_check_eod(bio, nr_sectors))
-- goto end_io;
-- if (bio_empty_barrier(bio) && !q->prepare_flush_fn) {
-- err = -EOPNOTSUPP;
-- goto end_io;
-- }
+- spin_unlock_irqrestore(&rtc_lock, flags);
-
-- ret = q->make_request_fn(q, bio);
-- } while (ret);
+- return retval;
-}
-
--/*
-- * We only want one ->make_request_fn to be active at a time,
-- * else stack usage with stacked devices could be a problem.
-- * So use current->bio_{list,tail} to keep a list of requests
-- * submited by a make_request_fn function.
-- * current->bio_tail is also used as a flag to say if
-- * generic_make_request is currently active in this task or not.
-- * If it is NULL, then no make_request is active. If it is non-NULL,
-- * then a make_request is active, and new requests should be added
-- * at the tail
-- */
--void generic_make_request(struct bio *bio)
+-int update_persistent_clock(struct timespec now)
-{
-- if (current->bio_tail) {
-- /* make_request is active */
-- *(current->bio_tail) = bio;
-- bio->bi_next = NULL;
-- current->bio_tail = &bio->bi_next;
-- return;
-- }
-- /* following loop may be a bit non-obvious, and so deserves some
-- * explanation.
-- * Before entering the loop, bio->bi_next is NULL (as all callers
-- * ensure that) so we have a list with a single bio.
-- * We pretend that we have just taken it off a longer list, so
-- * we assign bio_list to the next (which is NULL) and bio_tail
-- * to &bio_list, thus initialising the bio_list of new bios to be
-- * added. __generic_make_request may indeed add some more bios
-- * through a recursive call to generic_make_request. If it
-- * did, we find a non-NULL value in bio_list and re-enter the loop
-- * from the top. In this case we really did just take the bio
-- * of the top of the list (no pretending) and so fixup bio_list and
-- * bio_tail or bi_next, and call into __generic_make_request again.
-- *
-- * The loop was structured like this to make only one call to
-- * __generic_make_request (which is important as it is large and
-- * inlined) and to keep the structure simple.
-- */
-- BUG_ON(bio->bi_next);
-- do {
-- current->bio_list = bio->bi_next;
-- if (bio->bi_next == NULL)
-- current->bio_tail = &current->bio_list;
-- else
-- bio->bi_next = NULL;
-- __generic_make_request(bio);
-- bio = current->bio_list;
-- } while (bio);
-- current->bio_tail = NULL; /* deactivate */
+- return set_rtc_mmss(now.tv_sec);
-}
-
--EXPORT_SYMBOL(generic_make_request);
--
--/**
-- * submit_bio: submit a bio to the block device layer for I/O
-- * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
-- * @bio: The &struct bio which describes the I/O
-- *
-- * submit_bio() is very similar in purpose to generic_make_request(), and
-- * uses that function to do most of the work. Both are fairly rough
-- * interfaces, @bio must be presetup and ready for I/O.
-- *
-- */
--void submit_bio(int rw, struct bio *bio)
+ static irqreturn_t timer_event_interrupt(int irq, void *dev_id)
+ {
+ add_pda(irq0_irqs, 1);
+@@ -154,67 +53,10 @@ static irqreturn_t timer_event_interrupt(int irq, void *dev_id)
+ return IRQ_HANDLED;
+ }
+
+-unsigned long read_persistent_clock(void)
-{
-- int count = bio_sectors(bio);
--
-- bio->bi_rw |= rw;
+- unsigned int year, mon, day, hour, min, sec;
+- unsigned long flags;
+- unsigned century = 0;
-
+- spin_lock_irqsave(&rtc_lock, flags);
- /*
-- * If it's a regular read/write or a barrier with data attached,
-- * go through the normal accounting stuff before submission.
+- * if UIP is clear, then we have >= 244 microseconds before RTC
+- * registers will be updated. Spec sheet says that this is the
+- * reliable way to read RTC - registers invalid (off bus) during update
- */
-- if (!bio_empty_barrier(bio)) {
--
-- BIO_BUG_ON(!bio->bi_size);
-- BIO_BUG_ON(!bio->bi_io_vec);
--
-- if (rw & WRITE) {
-- count_vm_events(PGPGOUT, count);
-- } else {
-- task_io_account_read(bio->bi_size);
-- count_vm_events(PGPGIN, count);
-- }
--
-- if (unlikely(block_dump)) {
-- char b[BDEVNAME_SIZE];
-- printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n",
-- current->comm, task_pid_nr(current),
-- (rw & WRITE) ? "WRITE" : "READ",
-- (unsigned long long)bio->bi_sector,
-- bdevname(bio->bi_bdev,b));
-- }
-- }
--
-- generic_make_request(bio);
--}
--
--EXPORT_SYMBOL(submit_bio);
--
--static void blk_recalc_rq_sectors(struct request *rq, int nsect)
--{
-- if (blk_fs_request(rq)) {
-- rq->hard_sector += nsect;
-- rq->hard_nr_sectors -= nsect;
--
-- /*
-- * Move the I/O submission pointers ahead if required.
-- */
-- if ((rq->nr_sectors >= rq->hard_nr_sectors) &&
-- (rq->sector <= rq->hard_sector)) {
-- rq->sector = rq->hard_sector;
-- rq->nr_sectors = rq->hard_nr_sectors;
-- rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
-- rq->current_nr_sectors = rq->hard_cur_sectors;
-- rq->buffer = bio_data(rq->bio);
-- }
--
-- /*
-- * if total number of sectors is less than the first segment
-- * size, something has gone terribly wrong
-- */
-- if (rq->nr_sectors < rq->current_nr_sectors) {
-- printk("blk: request botched\n");
-- rq->nr_sectors = rq->current_nr_sectors;
-- }
-- }
--}
+- while ((CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
+- cpu_relax();
-
--static int __end_that_request_first(struct request *req, int uptodate,
-- int nr_bytes)
--{
-- int total_bytes, bio_nbytes, error, next_idx = 0;
-- struct bio *bio;
-
-- blk_add_trace_rq(req->q, req, BLK_TA_COMPLETE);
+- /* now read all RTC registers while stable with interrupts disabled */
+- sec = CMOS_READ(RTC_SECONDS);
+- min = CMOS_READ(RTC_MINUTES);
+- hour = CMOS_READ(RTC_HOURS);
+- day = CMOS_READ(RTC_DAY_OF_MONTH);
+- mon = CMOS_READ(RTC_MONTH);
+- year = CMOS_READ(RTC_YEAR);
+-#ifdef CONFIG_ACPI
+- if (acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID &&
+- acpi_gbl_FADT.century)
+- century = CMOS_READ(acpi_gbl_FADT.century);
+-#endif
+- spin_unlock_irqrestore(&rtc_lock, flags);
-
- /*
-- * extend uptodate bool to allow < 0 value to be direct io error
+- * We know that x86-64 always uses BCD format, no need to check the
+- * config register.
- */
-- error = 0;
-- if (end_io_error(uptodate))
-- error = !uptodate ? -EIO : uptodate;
-
-- /*
-- * for a REQ_BLOCK_PC request, we want to carry any eventual
-- * sense key with us all the way through
-- */
-- if (!blk_pc_request(req))
-- req->errors = 0;
+- BCD_TO_BIN(sec);
+- BCD_TO_BIN(min);
+- BCD_TO_BIN(hour);
+- BCD_TO_BIN(day);
+- BCD_TO_BIN(mon);
+- BCD_TO_BIN(year);
-
-- if (!uptodate) {
-- if (blk_fs_request(req) && !(req->cmd_flags & REQ_QUIET))
-- printk("end_request: I/O error, dev %s, sector %llu\n",
-- req->rq_disk ? req->rq_disk->disk_name : "?",
-- (unsigned long long)req->sector);
+- if (century) {
+- BCD_TO_BIN(century);
+- year += century * 100;
+- printk(KERN_INFO "Extended CMOS year: %d\n", century * 100);
+- } else {
+- /*
+- * x86-64 systems only exists since 2002.
+- * This will work up to Dec 31, 2100
+- */
+- year += 2000;
- }
-
-- if (blk_fs_request(req) && req->rq_disk) {
-- const int rw = rq_data_dir(req);
--
-- disk_stat_add(req->rq_disk, sectors[rw], nr_bytes >> 9);
-- }
+- return mktime(year, mon, day, hour, min, sec);
+-}
-
-- total_bytes = bio_nbytes = 0;
-- while ((bio = req->bio) != NULL) {
-- int nbytes;
+ /* calibrate_cpu is used on systems with fixed rate TSCs to determine
+ * processor frequency */
+ #define TICK_COUNT 100000000
+-static unsigned int __init tsc_calibrate_cpu_khz(void)
++unsigned long __init native_calculate_cpu_khz(void)
+ {
+ int tsc_start, tsc_now;
+ int i, no_ctr_free;
+@@ -241,7 +83,7 @@ static unsigned int __init tsc_calibrate_cpu_khz(void)
+ rdtscl(tsc_start);
+ do {
+ rdmsrl(MSR_K7_PERFCTR0 + i, pmc_now);
+- tsc_now = get_cycles_sync();
++ tsc_now = get_cycles();
+ } while ((tsc_now - tsc_start) < TICK_COUNT);
+
+ local_irq_restore(flags);
+@@ -264,20 +106,22 @@ static struct irqaction irq0 = {
+ .name = "timer"
+ };
+
+-void __init time_init(void)
++void __init hpet_time_init(void)
+ {
+ if (!hpet_enable())
+ setup_pit_timer();
+
+ setup_irq(0, &irq0);
++}
+
++void __init time_init(void)
++{
+ tsc_calibrate();
+
+ cpu_khz = tsc_khz;
+ if (cpu_has(&boot_cpu_data, X86_FEATURE_CONSTANT_TSC) &&
+- boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
+- boot_cpu_data.x86 == 16)
+- cpu_khz = tsc_calibrate_cpu_khz();
++ (boot_cpu_data.x86_vendor == X86_VENDOR_AMD))
++ cpu_khz = calculate_cpu_khz();
+
+ if (unsynchronized_tsc())
+ mark_tsc_unstable("TSCs unsynchronized");
+@@ -290,4 +134,5 @@ void __init time_init(void)
+ printk(KERN_INFO "time.c: Detected %d.%03d MHz processor.\n",
+ cpu_khz / 1000, cpu_khz % 1000);
+ init_tsc_clocksource();
++ late_time_init = choose_time_init();
+ }
+diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
+new file mode 100644
+index 0000000..6dfd4e7
+--- /dev/null
++++ b/arch/x86/kernel/tls.c
+@@ -0,0 +1,213 @@
++#include <linux/kernel.h>
++#include <linux/errno.h>
++#include <linux/sched.h>
++#include <linux/user.h>
++#include <linux/regset.h>
++
++#include <asm/uaccess.h>
++#include <asm/desc.h>
++#include <asm/system.h>
++#include <asm/ldt.h>
++#include <asm/processor.h>
++#include <asm/proto.h>
++
++#include "tls.h"
++
++/*
++ * sys_alloc_thread_area: get a yet unused TLS descriptor index.
++ */
++static int get_free_idx(void)
++{
++ struct thread_struct *t = &current->thread;
++ int idx;
++
++ for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++)
++ if (desc_empty(&t->tls_array[idx]))
++ return idx + GDT_ENTRY_TLS_MIN;
++ return -ESRCH;
++}
++
++static void set_tls_desc(struct task_struct *p, int idx,
++ const struct user_desc *info, int n)
++{
++ struct thread_struct *t = &p->thread;
++ struct desc_struct *desc = &t->tls_array[idx - GDT_ENTRY_TLS_MIN];
++ int cpu;
++
++ /*
++ * We must not get preempted while modifying the TLS.
++ */
++ cpu = get_cpu();
++
++ while (n-- > 0) {
++ if (LDT_empty(info))
++ desc->a = desc->b = 0;
++ else
++ fill_ldt(desc, info);
++ ++info;
++ ++desc;
++ }
++
++ if (t == &current->thread)
++ load_TLS(t, cpu);
++
++ put_cpu();
++}
++
++/*
++ * Set a given TLS descriptor:
++ */
++int do_set_thread_area(struct task_struct *p, int idx,
++ struct user_desc __user *u_info,
++ int can_allocate)
++{
++ struct user_desc info;
++
++ if (copy_from_user(&info, u_info, sizeof(info)))
++ return -EFAULT;
++
++ if (idx == -1)
++ idx = info.entry_number;
++
++ /*
++ * index -1 means the kernel should try to find and
++ * allocate an empty descriptor:
++ */
++ if (idx == -1 && can_allocate) {
++ idx = get_free_idx();
++ if (idx < 0)
++ return idx;
++ if (put_user(idx, &u_info->entry_number))
++ return -EFAULT;
++ }
++
++ if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
++ return -EINVAL;
++
++ set_tls_desc(p, idx, &info, 1);
++
++ return 0;
++}
++
++asmlinkage int sys_set_thread_area(struct user_desc __user *u_info)
++{
++ return do_set_thread_area(current, -1, u_info, 1);
++}
++
++
++/*
++ * Get the current Thread-Local Storage area:
++ */
++
++static void fill_user_desc(struct user_desc *info, int idx,
++ const struct desc_struct *desc)
++
++{
++ memset(info, 0, sizeof(*info));
++ info->entry_number = idx;
++ info->base_addr = get_desc_base(desc);
++ info->limit = get_desc_limit(desc);
++ info->seg_32bit = desc->d;
++ info->contents = desc->type >> 2;
++ info->read_exec_only = !(desc->type & 2);
++ info->limit_in_pages = desc->g;
++ info->seg_not_present = !desc->p;
++ info->useable = desc->avl;
++#ifdef CONFIG_X86_64
++ info->lm = desc->l;
++#endif
++}
++
++int do_get_thread_area(struct task_struct *p, int idx,
++ struct user_desc __user *u_info)
++{
++ struct user_desc info;
++
++ if (idx == -1 && get_user(idx, &u_info->entry_number))
++ return -EFAULT;
++
++ if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
++ return -EINVAL;
++
++ fill_user_desc(&info, idx,
++ &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]);
++
++ if (copy_to_user(u_info, &info, sizeof(info)))
++ return -EFAULT;
++ return 0;
++}
++
++asmlinkage int sys_get_thread_area(struct user_desc __user *u_info)
++{
++ return do_get_thread_area(current, -1, u_info);
++}
++
++int regset_tls_active(struct task_struct *target,
++ const struct user_regset *regset)
++{
++ struct thread_struct *t = &target->thread;
++ int n = GDT_ENTRY_TLS_ENTRIES;
++ while (n > 0 && desc_empty(&t->tls_array[n - 1]))
++ --n;
++ return n;
++}
++
++int regset_tls_get(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
++{
++ const struct desc_struct *tls;
++
++ if (pos > GDT_ENTRY_TLS_ENTRIES * sizeof(struct user_desc) ||
++ (pos % sizeof(struct user_desc)) != 0 ||
++ (count % sizeof(struct user_desc)) != 0)
++ return -EINVAL;
++
++ pos /= sizeof(struct user_desc);
++ count /= sizeof(struct user_desc);
++
++ tls = &target->thread.tls_array[pos];
++
++ if (kbuf) {
++ struct user_desc *info = kbuf;
++ while (count-- > 0)
++ fill_user_desc(info++, GDT_ENTRY_TLS_MIN + pos++,
++ tls++);
++ } else {
++ struct user_desc __user *u_info = ubuf;
++ while (count-- > 0) {
++ struct user_desc info;
++ fill_user_desc(&info, GDT_ENTRY_TLS_MIN + pos++, tls++);
++ if (__copy_to_user(u_info++, &info, sizeof(info)))
++ return -EFAULT;
++ }
++ }
++
++ return 0;
++}
++
++int regset_tls_set(struct task_struct *target, const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
++{
++ struct user_desc infobuf[GDT_ENTRY_TLS_ENTRIES];
++ const struct user_desc *info;
++
++ if (pos > GDT_ENTRY_TLS_ENTRIES * sizeof(struct user_desc) ||
++ (pos % sizeof(struct user_desc)) != 0 ||
++ (count % sizeof(struct user_desc)) != 0)
++ return -EINVAL;
++
++ if (kbuf)
++ info = kbuf;
++ else if (__copy_from_user(infobuf, ubuf, count))
++ return -EFAULT;
++ else
++ info = infobuf;
++
++ set_tls_desc(target,
++ GDT_ENTRY_TLS_MIN + (pos / sizeof(struct user_desc)),
++ info, count / sizeof(struct user_desc));
++
++ return 0;
++}
+diff --git a/arch/x86/kernel/tls.h b/arch/x86/kernel/tls.h
+new file mode 100644
+index 0000000..2f083a2
+--- /dev/null
++++ b/arch/x86/kernel/tls.h
+@@ -0,0 +1,21 @@
++/*
++ * Internal declarations for x86 TLS implementation functions.
++ *
++ * Copyright (C) 2007 Red Hat, Inc. All rights reserved.
++ *
++ * This copyrighted material is made available to anyone wishing to use,
++ * modify, copy, or redistribute it subject to the terms and conditions
++ * of the GNU General Public License v.2.
++ *
++ * Red Hat Author: Roland McGrath.
++ */
++
++#ifndef _ARCH_X86_KERNEL_TLS_H
++
++#include <linux/regset.h>
++
++extern user_regset_active_fn regset_tls_active;
++extern user_regset_get_fn regset_tls_get;
++extern user_regset_set_fn regset_tls_set;
++
++#endif /* _ARCH_X86_KERNEL_TLS_H */
+diff --git a/arch/x86/kernel/topology.c b/arch/x86/kernel/topology.c
+index 7e16d67..78cbb65 100644
+--- a/arch/x86/kernel/topology.c
++++ b/arch/x86/kernel/topology.c
+@@ -31,9 +31,10 @@
+ #include <linux/mmzone.h>
+ #include <asm/cpu.h>
+
+-static struct i386_cpu cpu_devices[NR_CPUS];
++static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
+
+-int __cpuinit arch_register_cpu(int num)
++#ifdef CONFIG_HOTPLUG_CPU
++int arch_register_cpu(int num)
+ {
+ /*
+ * CPU0 cannot be offlined due to several
+@@ -44,21 +45,23 @@ int __cpuinit arch_register_cpu(int num)
+ * Also certain PCI quirks require not to enable hotplug control
+ * for all CPU's.
+ */
+-#ifdef CONFIG_HOTPLUG_CPU
+ if (num)
+- cpu_devices[num].cpu.hotpluggable = 1;
+-#endif
-
+- return register_cpu(&cpu_devices[num].cpu, num);
++ per_cpu(cpu_devices, num).cpu.hotpluggable = 1;
++ return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
+ }
++EXPORT_SYMBOL(arch_register_cpu);
+
+-#ifdef CONFIG_HOTPLUG_CPU
+ void arch_unregister_cpu(int num)
+ {
+- return unregister_cpu(&cpu_devices[num].cpu);
++ return unregister_cpu(&per_cpu(cpu_devices, num).cpu);
+ }
+-EXPORT_SYMBOL(arch_register_cpu);
+ EXPORT_SYMBOL(arch_unregister_cpu);
++#else
++int arch_register_cpu(int num)
++{
++ return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
++}
++EXPORT_SYMBOL(arch_register_cpu);
+ #endif /*CONFIG_HOTPLUG_CPU*/
+
+ static int __init topology_init(void)
+diff --git a/arch/x86/kernel/traps_32.c b/arch/x86/kernel/traps_32.c
+index 02d1e1e..3cf7297 100644
+--- a/arch/x86/kernel/traps_32.c
++++ b/arch/x86/kernel/traps_32.c
+@@ -76,7 +76,8 @@ char ignore_fpu_irq = 0;
+ * F0 0F bug workaround.. We have a special link segment
+ * for this.
+ */
+-struct desc_struct idt_table[256] __attribute__((__section__(".data.idt"))) = { {0, 0}, };
++gate_desc idt_table[256]
++ __attribute__((__section__(".data.idt"))) = { { { { 0, 0 } } }, };
+
+ asmlinkage void divide_error(void);
+ asmlinkage void debug(void);
+@@ -101,6 +102,34 @@ asmlinkage void machine_check(void);
+ int kstack_depth_to_print = 24;
+ static unsigned int code_bytes = 64;
+
++void printk_address(unsigned long address, int reliable)
++{
++#ifdef CONFIG_KALLSYMS
++ unsigned long offset = 0, symsize;
++ const char *symname;
++ char *modname;
++ char *delim = ":";
++ char namebuf[128];
++ char reliab[4] = "";
++
++ symname = kallsyms_lookup(address, &symsize, &offset,
++ &modname, namebuf);
++ if (!symname) {
++ printk(" [<%08lx>]\n", address);
++ return;
++ }
++ if (!reliable)
++ strcpy(reliab, "? ");
++
++ if (!modname)
++ modname = delim = "";
++ printk(" [<%08lx>] %s%s%s%s%s+0x%lx/0x%lx\n",
++ address, reliab, delim, modname, delim, symname, offset, symsize);
++#else
++ printk(" [<%08lx>]\n", address);
++#endif
++}
++
+ static inline int valid_stack_ptr(struct thread_info *tinfo, void *p, unsigned size)
+ {
+ return p > (void *)tinfo &&
+@@ -114,48 +143,35 @@ struct stack_frame {
+ };
+
+ static inline unsigned long print_context_stack(struct thread_info *tinfo,
+- unsigned long *stack, unsigned long ebp,
++ unsigned long *stack, unsigned long bp,
+ const struct stacktrace_ops *ops, void *data)
+ {
+-#ifdef CONFIG_FRAME_POINTER
+- struct stack_frame *frame = (struct stack_frame *)ebp;
+- while (valid_stack_ptr(tinfo, frame, sizeof(*frame))) {
+- struct stack_frame *next;
+- unsigned long addr;
++ struct stack_frame *frame = (struct stack_frame *)bp;
+
+- addr = frame->return_address;
+- ops->address(data, addr);
- /*
-- * For an empty barrier request, the low level driver must
-- * store a potential error location in ->sector. We pass
-- * that back up in ->bi_sector.
+- * break out of recursive entries (such as
+- * end_of_stack_stop_unwind_function). Also,
+- * we can never allow a frame pointer to
+- * move downwards!
- */
-- if (blk_empty_barrier(req))
-- bio->bi_sector = req->sector;
--
-- if (nr_bytes >= bio->bi_size) {
-- req->bio = bio->bi_next;
-- nbytes = bio->bi_size;
-- req_bio_endio(req, bio, nbytes, error);
-- next_idx = 0;
-- bio_nbytes = 0;
-- } else {
-- int idx = bio->bi_idx + next_idx;
--
-- if (unlikely(bio->bi_idx >= bio->bi_vcnt)) {
-- blk_dump_rq_flags(req, "__end_that");
-- printk("%s: bio idx %d >= vcnt %d\n",
-- __FUNCTION__,
-- bio->bi_idx, bio->bi_vcnt);
-- break;
-- }
--
-- nbytes = bio_iovec_idx(bio, idx)->bv_len;
-- BIO_BUG_ON(nbytes > bio->bi_size);
+- next = frame->next_frame;
+- if (next <= frame)
+- break;
+- frame = next;
+- }
+-#else
+ while (valid_stack_ptr(tinfo, stack, sizeof(*stack))) {
+ unsigned long addr;
+
+- addr = *stack++;
+- if (__kernel_text_address(addr))
+- ops->address(data, addr);
++ addr = *stack;
++ if (__kernel_text_address(addr)) {
++ if ((unsigned long) stack == bp + 4) {
++ ops->address(data, addr, 1);
++ frame = frame->next_frame;
++ bp = (unsigned long) frame;
++ } else {
++ ops->address(data, addr, bp == 0);
++ }
++ }
++ stack++;
+ }
+-#endif
+- return ebp;
++ return bp;
+ }
+
+ #define MSG(msg) ops->warning(data, msg)
+
+ void dump_trace(struct task_struct *task, struct pt_regs *regs,
+- unsigned long *stack,
++ unsigned long *stack, unsigned long bp,
+ const struct stacktrace_ops *ops, void *data)
+ {
+- unsigned long ebp = 0;
-
-- /*
-- * not a complete bvec done
-- */
-- if (unlikely(nbytes > nr_bytes)) {
-- bio_nbytes += nr_bytes;
-- total_bytes += nr_bytes;
-- break;
-- }
--
-- /*
-- * advance to the next vector
-- */
-- next_idx++;
-- bio_nbytes += nbytes;
-- }
+ if (!task)
+ task = current;
+
+@@ -163,17 +179,17 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
+ unsigned long dummy;
+ stack = &dummy;
+ if (task != current)
+- stack = (unsigned long *)task->thread.esp;
++ stack = (unsigned long *)task->thread.sp;
+ }
+
+ #ifdef CONFIG_FRAME_POINTER
+- if (!ebp) {
++ if (!bp) {
+ if (task == current) {
+- /* Grab ebp right from our regs */
+- asm ("movl %%ebp, %0" : "=r" (ebp) : );
++ /* Grab bp right from our regs */
++ asm ("movl %%ebp, %0" : "=r" (bp) : );
+ } else {
+- /* ebp is the last reg pushed by switch_to */
+- ebp = *(unsigned long *) task->thread.esp;
++ /* bp is the last reg pushed by switch_to */
++ bp = *(unsigned long *) task->thread.sp;
+ }
+ }
+ #endif
+@@ -182,7 +198,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
+ struct thread_info *context;
+ context = (struct thread_info *)
+ ((unsigned long)stack & (~(THREAD_SIZE - 1)));
+- ebp = print_context_stack(context, stack, ebp, ops, data);
++ bp = print_context_stack(context, stack, bp, ops, data);
+ /* Should be after the line below, but somewhere
+ in early boot context comes out corrupted and we
+ can't reference it -AK */
+@@ -217,9 +233,11 @@ static int print_trace_stack(void *data, char *name)
+ /*
+ * Print one address/symbol entries per line.
+ */
+-static void print_trace_address(void *data, unsigned long addr)
++static void print_trace_address(void *data, unsigned long addr, int reliable)
+ {
+ printk("%s [<%08lx>] ", (char *)data, addr);
++ if (!reliable)
++ printk("? ");
+ print_symbol("%s\n", addr);
+ touch_nmi_watchdog();
+ }
+@@ -233,32 +251,32 @@ static const struct stacktrace_ops print_trace_ops = {
+
+ static void
+ show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+- unsigned long * stack, char *log_lvl)
++ unsigned long *stack, unsigned long bp, char *log_lvl)
+ {
+- dump_trace(task, regs, stack, &print_trace_ops, log_lvl);
++ dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl);
+ printk("%s =======================\n", log_lvl);
+ }
+
+ void show_trace(struct task_struct *task, struct pt_regs *regs,
+- unsigned long * stack)
++ unsigned long *stack, unsigned long bp)
+ {
+- show_trace_log_lvl(task, regs, stack, "");
++ show_trace_log_lvl(task, regs, stack, bp, "");
+ }
+
+ static void show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
+- unsigned long *esp, char *log_lvl)
++ unsigned long *sp, unsigned long bp, char *log_lvl)
+ {
+ unsigned long *stack;
+ int i;
+
+- if (esp == NULL) {
++ if (sp == NULL) {
+ if (task)
+- esp = (unsigned long*)task->thread.esp;
++ sp = (unsigned long*)task->thread.sp;
+ else
+- esp = (unsigned long *)&esp;
++ sp = (unsigned long *)&sp;
+ }
+
+- stack = esp;
++ stack = sp;
+ for(i = 0; i < kstack_depth_to_print; i++) {
+ if (kstack_end(stack))
+ break;
+@@ -267,13 +285,13 @@ static void show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ printk("%08lx ", *stack++);
+ }
+ printk("\n%sCall Trace:\n", log_lvl);
+- show_trace_log_lvl(task, regs, esp, log_lvl);
++ show_trace_log_lvl(task, regs, sp, bp, log_lvl);
+ }
+
+-void show_stack(struct task_struct *task, unsigned long *esp)
++void show_stack(struct task_struct *task, unsigned long *sp)
+ {
+ printk(" ");
+- show_stack_log_lvl(task, NULL, esp, "");
++ show_stack_log_lvl(task, NULL, sp, 0, "");
+ }
+
+ /*
+@@ -282,13 +300,19 @@ void show_stack(struct task_struct *task, unsigned long *esp)
+ void dump_stack(void)
+ {
+ unsigned long stack;
++ unsigned long bp = 0;
++
++#ifdef CONFIG_FRAME_POINTER
++ if (!bp)
++ asm("movl %%ebp, %0" : "=r" (bp):);
++#endif
+
+ printk("Pid: %d, comm: %.20s %s %s %.*s\n",
+ current->pid, current->comm, print_tainted(),
+ init_utsname()->release,
+ (int)strcspn(init_utsname()->version, " "),
+ init_utsname()->version);
+- show_trace(current, NULL, &stack);
++ show_trace(current, NULL, &stack, bp);
+ }
+
+ EXPORT_SYMBOL(dump_stack);
+@@ -307,30 +331,30 @@ void show_registers(struct pt_regs *regs)
+ * time of the fault..
+ */
+ if (!user_mode_vm(regs)) {
+- u8 *eip;
++ u8 *ip;
+ unsigned int code_prologue = code_bytes * 43 / 64;
+ unsigned int code_len = code_bytes;
+ unsigned char c;
+
+ printk("\n" KERN_EMERG "Stack: ");
+- show_stack_log_lvl(NULL, regs, &regs->esp, KERN_EMERG);
++ show_stack_log_lvl(NULL, regs, &regs->sp, 0, KERN_EMERG);
+
+ printk(KERN_EMERG "Code: ");
+
+- eip = (u8 *)regs->eip - code_prologue;
+- if (eip < (u8 *)PAGE_OFFSET ||
+- probe_kernel_address(eip, c)) {
++ ip = (u8 *)regs->ip - code_prologue;
++ if (ip < (u8 *)PAGE_OFFSET ||
++ probe_kernel_address(ip, c)) {
+ /* try starting at EIP */
+- eip = (u8 *)regs->eip;
++ ip = (u8 *)regs->ip;
+ code_len = code_len - code_prologue + 1;
+ }
+- for (i = 0; i < code_len; i++, eip++) {
+- if (eip < (u8 *)PAGE_OFFSET ||
+- probe_kernel_address(eip, c)) {
++ for (i = 0; i < code_len; i++, ip++) {
++ if (ip < (u8 *)PAGE_OFFSET ||
++ probe_kernel_address(ip, c)) {
+ printk(" Bad EIP value.");
+ break;
+ }
+- if (eip == (u8 *)regs->eip)
++ if (ip == (u8 *)regs->ip)
+ printk("<%02x> ", c);
+ else
+ printk("%02x ", c);
+@@ -339,18 +363,57 @@ void show_registers(struct pt_regs *regs)
+ printk("\n");
+ }
+
+-int is_valid_bugaddr(unsigned long eip)
++int is_valid_bugaddr(unsigned long ip)
+ {
+ unsigned short ud2;
+
+- if (eip < PAGE_OFFSET)
++ if (ip < PAGE_OFFSET)
+ return 0;
+- if (probe_kernel_address((unsigned short *)eip, ud2))
++ if (probe_kernel_address((unsigned short *)ip, ud2))
+ return 0;
+
+ return ud2 == 0x0b0f;
+ }
+
++static int die_counter;
++
++int __kprobes __die(const char * str, struct pt_regs * regs, long err)
++{
++ unsigned long sp;
++ unsigned short ss;
++
++ printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
++#ifdef CONFIG_PREEMPT
++ printk("PREEMPT ");
++#endif
++#ifdef CONFIG_SMP
++ printk("SMP ");
++#endif
++#ifdef CONFIG_DEBUG_PAGEALLOC
++ printk("DEBUG_PAGEALLOC");
++#endif
++ printk("\n");
++
++ if (notify_die(DIE_OOPS, str, regs, err,
++ current->thread.trap_no, SIGSEGV) !=
++ NOTIFY_STOP) {
++ show_registers(regs);
++ /* Executive summary in case the oops scrolled away */
++ sp = (unsigned long) (&regs->sp);
++ savesegment(ss, ss);
++ if (user_mode(regs)) {
++ sp = regs->sp;
++ ss = regs->ss & 0xffff;
++ }
++ printk(KERN_EMERG "EIP: [<%08lx>] ", regs->ip);
++ print_symbol("%s", regs->ip);
++ printk(" SS:ESP %04x:%08lx\n", ss, sp);
++ return 0;
++ } else {
++ return 1;
++ }
++}
++
+ /*
+ * This is gone through when something in the kernel has done something bad and
+ * is about to be terminated.
+@@ -366,7 +429,6 @@ void die(const char * str, struct pt_regs * regs, long err)
+ .lock_owner = -1,
+ .lock_owner_depth = 0
+ };
+- static int die_counter;
+ unsigned long flags;
+
+ oops_enter();
+@@ -382,43 +444,13 @@ void die(const char * str, struct pt_regs * regs, long err)
+ raw_local_irq_save(flags);
+
+ if (++die.lock_owner_depth < 3) {
+- unsigned long esp;
+- unsigned short ss;
++ report_bug(regs->ip, regs);
+
+- report_bug(regs->eip, regs);
-
-- total_bytes += nbytes;
-- nr_bytes -= nbytes;
+- printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff,
+- ++die_counter);
+-#ifdef CONFIG_PREEMPT
+- printk("PREEMPT ");
+-#endif
+-#ifdef CONFIG_SMP
+- printk("SMP ");
+-#endif
+-#ifdef CONFIG_DEBUG_PAGEALLOC
+- printk("DEBUG_PAGEALLOC");
+-#endif
+- printk("\n");
-
-- if ((bio = req->bio)) {
-- /*
-- * end more in this run, or just return 'not-done'
-- */
-- if (unlikely(nr_bytes <= 0))
-- break;
+- if (notify_die(DIE_OOPS, str, regs, err,
+- current->thread.trap_no, SIGSEGV) !=
+- NOTIFY_STOP) {
+- show_registers(regs);
+- /* Executive summary in case the oops scrolled away */
+- esp = (unsigned long) (&regs->esp);
+- savesegment(ss, ss);
+- if (user_mode(regs)) {
+- esp = regs->esp;
+- ss = regs->xss & 0xffff;
+- }
+- printk(KERN_EMERG "EIP: [<%08lx>] ", regs->eip);
+- print_symbol("%s", regs->eip);
+- printk(" SS:ESP %04x:%08lx\n", ss, esp);
- }
-- }
--
-- /*
-- * completely done
-- */
-- if (!req->bio)
-- return 0;
--
-- /*
-- * if the request wasn't completed, update state
-- */
-- if (bio_nbytes) {
-- req_bio_endio(req, bio, bio_nbytes, error);
-- bio->bi_idx += next_idx;
-- bio_iovec(bio)->bv_offset += nr_bytes;
-- bio_iovec(bio)->bv_len -= nr_bytes;
-- }
--
-- blk_recalc_rq_sectors(req, total_bytes >> 9);
-- blk_recalc_rq_segments(req);
-- return 1;
--}
--
--/**
-- * end_that_request_first - end I/O on a request
-- * @req: the request being processed
-- * @uptodate: 1 for success, 0 for I/O error, < 0 for specific error
-- * @nr_sectors: number of sectors to end I/O on
-- *
-- * Description:
-- * Ends I/O on a number of sectors attached to @req, and sets it up
-- * for the next range of segments (if any) in the cluster.
-- *
-- * Return:
-- * 0 - we are done with this request, call end_that_request_last()
-- * 1 - still buffers pending for this request
-- **/
--int end_that_request_first(struct request *req, int uptodate, int nr_sectors)
--{
-- return __end_that_request_first(req, uptodate, nr_sectors << 9);
--}
--
--EXPORT_SYMBOL(end_that_request_first);
--
--/**
-- * end_that_request_chunk - end I/O on a request
-- * @req: the request being processed
-- * @uptodate: 1 for success, 0 for I/O error, < 0 for specific error
-- * @nr_bytes: number of bytes to complete
-- *
-- * Description:
-- * Ends I/O on a number of bytes attached to @req, and sets it up
-- * for the next range of segments (if any). Like end_that_request_first(),
-- * but deals with bytes instead of sectors.
-- *
-- * Return:
-- * 0 - we are done with this request, call end_that_request_last()
-- * 1 - still buffers pending for this request
-- **/
--int end_that_request_chunk(struct request *req, int uptodate, int nr_bytes)
--{
-- return __end_that_request_first(req, uptodate, nr_bytes);
--}
--
--EXPORT_SYMBOL(end_that_request_chunk);
--
+- else
++ if (__die(str, regs, err))
+ regs = NULL;
+- } else
++ } else {
+ printk(KERN_EMERG "Recursive die() failure, output suppressed\n");
++ }
+
+ bust_spinlocks(0);
+ die.lock_owner = -1;
+@@ -454,7 +486,7 @@ static void __kprobes do_trap(int trapnr, int signr, char *str, int vm86,
+ {
+ struct task_struct *tsk = current;
+
+- if (regs->eflags & VM_MASK) {
++ if (regs->flags & VM_MASK) {
+ if (vm86)
+ goto vm86_trap;
+ goto trap_signal;
+@@ -500,7 +532,7 @@ static void __kprobes do_trap(int trapnr, int signr, char *str, int vm86,
+ }
+
+ #define DO_ERROR(trapnr, signr, str, name) \
+-fastcall void do_##name(struct pt_regs * regs, long error_code) \
++void do_##name(struct pt_regs * regs, long error_code) \
+ { \
+ if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
+ == NOTIFY_STOP) \
+@@ -509,7 +541,7 @@ fastcall void do_##name(struct pt_regs * regs, long error_code) \
+ }
+
+ #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr, irq) \
+-fastcall void do_##name(struct pt_regs * regs, long error_code) \
++void do_##name(struct pt_regs * regs, long error_code) \
+ { \
+ siginfo_t info; \
+ if (irq) \
+@@ -525,7 +557,7 @@ fastcall void do_##name(struct pt_regs * regs, long error_code) \
+ }
+
+ #define DO_VM86_ERROR(trapnr, signr, str, name) \
+-fastcall void do_##name(struct pt_regs * regs, long error_code) \
++void do_##name(struct pt_regs * regs, long error_code) \
+ { \
+ if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
+ == NOTIFY_STOP) \
+@@ -534,7 +566,7 @@ fastcall void do_##name(struct pt_regs * regs, long error_code) \
+ }
+
+ #define DO_VM86_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
+-fastcall void do_##name(struct pt_regs * regs, long error_code) \
++void do_##name(struct pt_regs * regs, long error_code) \
+ { \
+ siginfo_t info; \
+ info.si_signo = signr; \
+@@ -548,13 +580,13 @@ fastcall void do_##name(struct pt_regs * regs, long error_code) \
+ do_trap(trapnr, signr, str, 1, regs, error_code, &info); \
+ }
+
+-DO_VM86_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->eip)
++DO_VM86_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
+ #ifndef CONFIG_KPROBES
+ DO_VM86_ERROR( 3, SIGTRAP, "int3", int3)
+ #endif
+ DO_VM86_ERROR( 4, SIGSEGV, "overflow", overflow)
+ DO_VM86_ERROR( 5, SIGSEGV, "bounds", bounds)
+-DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->eip, 0)
++DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip, 0)
+ DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
+ DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
+ DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
+@@ -562,7 +594,7 @@ DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
+ DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0, 0)
+ DO_ERROR_INFO(32, SIGSEGV, "iret exception", iret_error, ILL_BADSTK, 0, 1)
+
+-fastcall void __kprobes do_general_protection(struct pt_regs * regs,
++void __kprobes do_general_protection(struct pt_regs * regs,
+ long error_code)
+ {
+ int cpu = get_cpu();
+@@ -596,7 +628,7 @@ fastcall void __kprobes do_general_protection(struct pt_regs * regs,
+ }
+ put_cpu();
+
+- if (regs->eflags & VM_MASK)
++ if (regs->flags & VM_MASK)
+ goto gp_in_vm86;
+
+ if (!user_mode(regs))
+@@ -605,11 +637,14 @@ fastcall void __kprobes do_general_protection(struct pt_regs * regs,
+ current->thread.error_code = error_code;
+ current->thread.trap_no = 13;
+ if (show_unhandled_signals && unhandled_signal(current, SIGSEGV) &&
+- printk_ratelimit())
++ printk_ratelimit()) {
+ printk(KERN_INFO
+- "%s[%d] general protection eip:%lx esp:%lx error:%lx\n",
++ "%s[%d] general protection ip:%lx sp:%lx error:%lx",
+ current->comm, task_pid_nr(current),
+- regs->eip, regs->esp, error_code);
++ regs->ip, regs->sp, error_code);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
+ force_sig(SIGSEGV, current);
+ return;
+@@ -705,8 +740,8 @@ void __kprobes die_nmi(struct pt_regs *regs, const char *msg)
+ */
+ bust_spinlocks(1);
+ printk(KERN_EMERG "%s", msg);
+- printk(" on CPU%d, eip %08lx, registers:\n",
+- smp_processor_id(), regs->eip);
++ printk(" on CPU%d, ip %08lx, registers:\n",
++ smp_processor_id(), regs->ip);
+ show_registers(regs);
+ console_silent();
+ spin_unlock(&nmi_print_lock);
+@@ -763,7 +798,7 @@ static __kprobes void default_do_nmi(struct pt_regs * regs)
+
+ static int ignore_nmis;
+
+-fastcall __kprobes void do_nmi(struct pt_regs * regs, long error_code)
++__kprobes void do_nmi(struct pt_regs * regs, long error_code)
+ {
+ int cpu;
+
+@@ -792,7 +827,7 @@ void restart_nmi(void)
+ }
+
+ #ifdef CONFIG_KPROBES
+-fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code)
++void __kprobes do_int3(struct pt_regs *regs, long error_code)
+ {
+ trace_hardirqs_fixup();
+
+@@ -828,7 +863,7 @@ fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code)
+ * find every occurrence of the TF bit that could be saved away even
+ * by user code)
+ */
+-fastcall void __kprobes do_debug(struct pt_regs * regs, long error_code)
++void __kprobes do_debug(struct pt_regs * regs, long error_code)
+ {
+ unsigned int condition;
+ struct task_struct *tsk = current;
+@@ -837,24 +872,30 @@ fastcall void __kprobes do_debug(struct pt_regs * regs, long error_code)
+
+ get_debugreg(condition, 6);
+
++ /*
++ * The processor cleared BTF, so don't mark that we need it set.
++ */
++ clear_tsk_thread_flag(tsk, TIF_DEBUGCTLMSR);
++ tsk->thread.debugctlmsr = 0;
++
+ if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
+ SIGTRAP) == NOTIFY_STOP)
+ return;
+ /* It's safe to allow irq's after DR6 has been saved */
+- if (regs->eflags & X86_EFLAGS_IF)
++ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_enable();
+
+ /* Mask out spurious debug traps due to lazy DR7 setting */
+ if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
+- if (!tsk->thread.debugreg[7])
++ if (!tsk->thread.debugreg7)
+ goto clear_dr7;
+ }
+
+- if (regs->eflags & VM_MASK)
++ if (regs->flags & VM_MASK)
+ goto debug_vm86;
+
+ /* Save debug status register where ptrace can see it */
+- tsk->thread.debugreg[6] = condition;
++ tsk->thread.debugreg6 = condition;
+
+ /*
+ * Single-stepping through TF: make sure we ignore any events in
+@@ -886,7 +927,7 @@ debug_vm86:
+
+ clear_TF_reenable:
+ set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~TF_MASK;
+ return;
+ }
+
+@@ -895,7 +936,7 @@ clear_TF_reenable:
+ * the correct behaviour even in the presence of the asynchronous
+ * IRQ13 behaviour
+ */
+-void math_error(void __user *eip)
++void math_error(void __user *ip)
+ {
+ struct task_struct * task;
+ siginfo_t info;
+@@ -911,7 +952,7 @@ void math_error(void __user *eip)
+ info.si_signo = SIGFPE;
+ info.si_errno = 0;
+ info.si_code = __SI_FAULT;
+- info.si_addr = eip;
++ info.si_addr = ip;
+ /*
+ * (~cwd & swd) will mask out exceptions that are not set to unmasked
+ * status. 0x3f is the exception bits in these regs, 0x200 is the
+@@ -954,13 +995,13 @@ void math_error(void __user *eip)
+ force_sig_info(SIGFPE, &info, task);
+ }
+
+-fastcall void do_coprocessor_error(struct pt_regs * regs, long error_code)
++void do_coprocessor_error(struct pt_regs * regs, long error_code)
+ {
+ ignore_fpu_irq = 1;
+- math_error((void __user *)regs->eip);
++ math_error((void __user *)regs->ip);
+ }
+
+-static void simd_math_error(void __user *eip)
++static void simd_math_error(void __user *ip)
+ {
+ struct task_struct * task;
+ siginfo_t info;
+@@ -976,7 +1017,7 @@ static void simd_math_error(void __user *eip)
+ info.si_signo = SIGFPE;
+ info.si_errno = 0;
+ info.si_code = __SI_FAULT;
+- info.si_addr = eip;
++ info.si_addr = ip;
+ /*
+ * The SIMD FPU exceptions are handled a little differently, as there
+ * is only a single status/control register. Thus, to determine which
+@@ -1008,19 +1049,19 @@ static void simd_math_error(void __user *eip)
+ force_sig_info(SIGFPE, &info, task);
+ }
+
+-fastcall void do_simd_coprocessor_error(struct pt_regs * regs,
++void do_simd_coprocessor_error(struct pt_regs * regs,
+ long error_code)
+ {
+ if (cpu_has_xmm) {
+ /* Handle SIMD FPU exceptions on PIII+ processors. */
+ ignore_fpu_irq = 1;
+- simd_math_error((void __user *)regs->eip);
++ simd_math_error((void __user *)regs->ip);
+ } else {
+ /*
+ * Handle strange cache flush from user space exception
+ * in all other cases. This is undocumented behaviour.
+ */
+- if (regs->eflags & VM_MASK) {
++ if (regs->flags & VM_MASK) {
+ handle_vm86_fault((struct kernel_vm86_regs *)regs,
+ error_code);
+ return;
+@@ -1032,7 +1073,7 @@ fastcall void do_simd_coprocessor_error(struct pt_regs * regs,
+ }
+ }
+
+-fastcall void do_spurious_interrupt_bug(struct pt_regs * regs,
++void do_spurious_interrupt_bug(struct pt_regs * regs,
+ long error_code)
+ {
+ #if 0
+@@ -1041,7 +1082,7 @@ fastcall void do_spurious_interrupt_bug(struct pt_regs * regs,
+ #endif
+ }
+
+-fastcall unsigned long patch_espfix_desc(unsigned long uesp,
++unsigned long patch_espfix_desc(unsigned long uesp,
+ unsigned long kesp)
+ {
+ struct desc_struct *gdt = __get_cpu_var(gdt_page).gdt;
+@@ -1095,51 +1136,17 @@ asmlinkage void math_emulate(long arg)
+
+ #endif /* CONFIG_MATH_EMULATION */
+
-/*
-- * splice the completion data to a local structure and hand off to
-- * process_completion_queue() to complete the requests
+- * This needs to use 'idt_table' rather than 'idt', and
+- * thus use the _nonmapped_ version of the IDT, as the
+- * Pentium F0 0F bugfix can have resulted in the mapped
+- * IDT being write-protected.
- */
--static void blk_done_softirq(struct softirq_action *h)
--{
-- struct list_head *cpu_list, local_list;
--
-- local_irq_disable();
-- cpu_list = &__get_cpu_var(blk_cpu_done);
-- list_replace_init(cpu_list, &local_list);
-- local_irq_enable();
--
-- while (!list_empty(&local_list)) {
-- struct request *rq = list_entry(local_list.next, struct request, donelist);
--
-- list_del_init(&rq->donelist);
-- rq->q->softirq_done_fn(rq);
-- }
--}
--
--static int __cpuinit blk_cpu_notify(struct notifier_block *self, unsigned long action,
-- void *hcpu)
--{
-- /*
-- * If a CPU goes away, splice its entries to the current CPU
-- * and trigger a run of the softirq
-- */
-- if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
-- int cpu = (unsigned long) hcpu;
--
-- local_irq_disable();
-- list_splice_init(&per_cpu(blk_cpu_done, cpu),
-- &__get_cpu_var(blk_cpu_done));
-- raise_softirq_irqoff(BLOCK_SOFTIRQ);
-- local_irq_enable();
-- }
--
-- return NOTIFY_OK;
--}
--
--
--static struct notifier_block blk_cpu_notifier __cpuinitdata = {
-- .notifier_call = blk_cpu_notify,
--};
--
--/**
-- * blk_complete_request - end I/O on a request
-- * @req: the request being processed
-- *
-- * Description:
-- * Ends all I/O on a request. It does not handle partial completions,
-- * unless the driver actually implements this in its completion callback
-- * through requeueing. The actual completion happens out-of-order,
-- * through a softirq handler. The user must have registered a completion
-- * callback through blk_queue_softirq_done().
-- **/
--
--void blk_complete_request(struct request *req)
+-void set_intr_gate(unsigned int n, void *addr)
-{
-- struct list_head *cpu_list;
-- unsigned long flags;
--
-- BUG_ON(!req->q->softirq_done_fn);
--
-- local_irq_save(flags);
--
-- cpu_list = &__get_cpu_var(blk_cpu_done);
-- list_add_tail(&req->donelist, cpu_list);
-- raise_softirq_irqoff(BLOCK_SOFTIRQ);
--
-- local_irq_restore(flags);
+- _set_gate(n, DESCTYPE_INT, addr, __KERNEL_CS);
-}
-
--EXPORT_SYMBOL(blk_complete_request);
--
-/*
-- * queue lock must be held
+- * This routine sets up an interrupt gate at directory privilege level 3.
- */
--void end_that_request_last(struct request *req, int uptodate)
--{
-- struct gendisk *disk = req->rq_disk;
-- int error;
--
-- /*
-- * extend uptodate bool to allow < 0 value to be direct io error
-- */
-- error = 0;
-- if (end_io_error(uptodate))
-- error = !uptodate ? -EIO : uptodate;
--
-- if (unlikely(laptop_mode) && blk_fs_request(req))
-- laptop_io_completion();
--
-- /*
-- * Account IO completion. bar_rq isn't accounted as a normal
-- * IO on queueing nor completion. Accounting the containing
-- * request is enough.
-- */
-- if (disk && blk_fs_request(req) && req != &req->q->bar_rq) {
-- unsigned long duration = jiffies - req->start_time;
-- const int rw = rq_data_dir(req);
--
-- __disk_stat_inc(disk, ios[rw]);
-- __disk_stat_add(disk, ticks[rw], duration);
-- disk_round_stats(disk);
-- disk->in_flight--;
-- }
-- if (req->end_io)
-- req->end_io(req, error);
-- else
-- __blk_put_request(req->q, req);
--}
--
--EXPORT_SYMBOL(end_that_request_last);
--
--static inline void __end_request(struct request *rq, int uptodate,
-- unsigned int nr_bytes, int dequeue)
+-static inline void set_system_intr_gate(unsigned int n, void *addr)
-{
-- if (!end_that_request_chunk(rq, uptodate, nr_bytes)) {
-- if (dequeue)
-- blkdev_dequeue_request(rq);
-- add_disk_randomness(rq->rq_disk);
-- end_that_request_last(rq, uptodate);
-- }
+- _set_gate(n, DESCTYPE_INT | DESCTYPE_DPL3, addr, __KERNEL_CS);
-}
-
--static unsigned int rq_byte_size(struct request *rq)
+-static void __init set_trap_gate(unsigned int n, void *addr)
-{
-- if (blk_fs_request(rq))
-- return rq->hard_nr_sectors << 9;
--
-- return rq->data_len;
+- _set_gate(n, DESCTYPE_TRAP, addr, __KERNEL_CS);
-}
-
--/**
-- * end_queued_request - end all I/O on a queued request
-- * @rq: the request being processed
-- * @uptodate: error value or 0/1 uptodate flag
-- *
-- * Description:
-- * Ends all I/O on a request, and removes it from the block layer queues.
-- * Not suitable for normal IO completion, unless the driver still has
-- * the request attached to the block layer.
-- *
-- **/
--void end_queued_request(struct request *rq, int uptodate)
+-static void __init set_system_gate(unsigned int n, void *addr)
-{
-- __end_request(rq, uptodate, rq_byte_size(rq), 1);
+- _set_gate(n, DESCTYPE_TRAP | DESCTYPE_DPL3, addr, __KERNEL_CS);
-}
--EXPORT_SYMBOL(end_queued_request);
-
--/**
-- * end_dequeued_request - end all I/O on a dequeued request
-- * @rq: the request being processed
-- * @uptodate: error value or 0/1 uptodate flag
-- *
-- * Description:
-- * Ends all I/O on a request. The request must already have been
-- * dequeued using blkdev_dequeue_request(), as is normally the case
-- * for most drivers.
-- *
-- **/
--void end_dequeued_request(struct request *rq, int uptodate)
+-static void __init set_task_gate(unsigned int n, unsigned int gdt_entry)
-{
-- __end_request(rq, uptodate, rq_byte_size(rq), 0);
+- _set_gate(n, DESCTYPE_TASK, (void *)0, (gdt_entry<<3));
-}
--EXPORT_SYMBOL(end_dequeued_request);
--
-
--/**
-- * end_request - end I/O on the current segment of the request
-- * @req: the request being processed
-- * @uptodate: error value or 0/1 uptodate flag
-- *
-- * Description:
-- * Ends I/O on the current segment of a request. If that is the only
-- * remaining segment, the request is also completed and freed.
-- *
-- * This is a remnant of how older block drivers handled IO completions.
-- * Modern drivers typically end IO on the full request in one go, unless
-- * they have a residual value to account for. For that case this function
-- * isn't really useful, unless the residual just happens to be the
-- * full current segment. In other words, don't use this function in new
-- * code. Either use end_request_completely(), or the
-- * end_that_request_chunk() (along with end_that_request_last()) for
-- * partial completions.
-- *
-- **/
--void end_request(struct request *req, int uptodate)
--{
-- __end_request(req, uptodate, req->hard_cur_sectors << 9, 1);
+
+ void __init trap_init(void)
+ {
+ int i;
+
+ #ifdef CONFIG_EISA
+- void __iomem *p = ioremap(0x0FFFD9, 4);
++ void __iomem *p = early_ioremap(0x0FFFD9, 4);
+ if (readl(p) == 'E'+('I'<<8)+('S'<<16)+('A'<<24)) {
+ EISA_bus = 1;
+ }
+- iounmap(p);
++ early_iounmap(p, 4);
+ #endif
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+diff --git a/arch/x86/kernel/traps_64.c b/arch/x86/kernel/traps_64.c
+index cc68b92..efc66df 100644
+--- a/arch/x86/kernel/traps_64.c
++++ b/arch/x86/kernel/traps_64.c
+@@ -74,22 +74,24 @@ asmlinkage void alignment_check(void);
+ asmlinkage void machine_check(void);
+ asmlinkage void spurious_interrupt_bug(void);
+
++static unsigned int code_bytes = 64;
++
+ static inline void conditional_sti(struct pt_regs *regs)
+ {
+- if (regs->eflags & X86_EFLAGS_IF)
++ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_enable();
+ }
+
+ static inline void preempt_conditional_sti(struct pt_regs *regs)
+ {
+ preempt_disable();
+- if (regs->eflags & X86_EFLAGS_IF)
++ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_enable();
+ }
+
+ static inline void preempt_conditional_cli(struct pt_regs *regs)
+ {
+- if (regs->eflags & X86_EFLAGS_IF)
++ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_disable();
+ /* Make sure to not schedule here because we could be running
+ on an exception stack. */
+@@ -98,14 +100,15 @@ static inline void preempt_conditional_cli(struct pt_regs *regs)
+
+ int kstack_depth_to_print = 12;
+
+-#ifdef CONFIG_KALLSYMS
+-void printk_address(unsigned long address)
++void printk_address(unsigned long address, int reliable)
+ {
++#ifdef CONFIG_KALLSYMS
+ unsigned long offset = 0, symsize;
+ const char *symname;
+ char *modname;
+ char *delim = ":";
+- char namebuf[128];
++ char namebuf[KSYM_NAME_LEN];
++ char reliab[4] = "";
+
+ symname = kallsyms_lookup(address, &symsize, &offset,
+ &modname, namebuf);
+@@ -113,17 +116,17 @@ void printk_address(unsigned long address)
+ printk(" [<%016lx>]\n", address);
+ return;
+ }
++ if (!reliable)
++ strcpy(reliab, "? ");
++
+ if (!modname)
+- modname = delim = "";
+- printk(" [<%016lx>] %s%s%s%s+0x%lx/0x%lx\n",
+- address, delim, modname, delim, symname, offset, symsize);
-}
--EXPORT_SYMBOL(end_request);
--
--static void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-- struct bio *bio)
++ modname = delim = "";
++ printk(" [<%016lx>] %s%s%s%s%s+0x%lx/0x%lx\n",
++ address, reliab, delim, modname, delim, symname, offset, symsize);
+ #else
+-void printk_address(unsigned long address)
-{
-- /* first two bits are identical in rq->cmd_flags and bio->bi_rw */
-- rq->cmd_flags |= (bio->bi_rw & 3);
--
-- rq->nr_phys_segments = bio_phys_segments(q, bio);
-- rq->nr_hw_segments = bio_hw_segments(q, bio);
-- rq->current_nr_sectors = bio_cur_sectors(bio);
-- rq->hard_cur_sectors = rq->current_nr_sectors;
-- rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
-- rq->buffer = bio_data(bio);
-- rq->data_len = bio->bi_size;
--
-- rq->bio = rq->biotail = bio;
--
-- if (bio->bi_bdev)
-- rq->rq_disk = bio->bi_bdev->bd_disk;
+ printk(" [<%016lx>]\n", address);
-}
+ #endif
++}
+
+ static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
+ unsigned *usedp, char **idp)
+@@ -208,14 +211,53 @@ static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
+ * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack
+ */
+
+-static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
++static inline int valid_stack_ptr(struct thread_info *tinfo,
++ void *p, unsigned int size, void *end)
++{
++ void *t = tinfo;
++ if (end) {
++ if (p < end && p >= (end-THREAD_SIZE))
++ return 1;
++ else
++ return 0;
++ }
++ return p > t && p < t + THREAD_SIZE - size;
++}
++
++/* The form of the top of the frame on the stack */
++struct stack_frame {
++ struct stack_frame *next_frame;
++ unsigned long return_address;
++};
++
++
++static inline unsigned long print_context_stack(struct thread_info *tinfo,
++ unsigned long *stack, unsigned long bp,
++ const struct stacktrace_ops *ops, void *data,
++ unsigned long *end)
+ {
+- void *t = (void *)tinfo;
+- return p > t && p < t + THREAD_SIZE - 3;
++ struct stack_frame *frame = (struct stack_frame *)bp;
++
++ while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
++ unsigned long addr;
++
++ addr = *stack;
++ if (__kernel_text_address(addr)) {
++ if ((unsigned long) stack == bp + 8) {
++ ops->address(data, addr, 1);
++ frame = frame->next_frame;
++ bp = (unsigned long) frame;
++ } else {
++ ops->address(data, addr, bp == 0);
++ }
++ }
++ stack++;
++ }
++ return bp;
+ }
+
+ void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
+- unsigned long *stack,
++ unsigned long *stack, unsigned long bp,
+ const struct stacktrace_ops *ops, void *data)
+ {
+ const unsigned cpu = get_cpu();
+@@ -225,36 +267,28 @@ void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
+
+ if (!tsk)
+ tsk = current;
++ tinfo = task_thread_info(tsk);
+
+ if (!stack) {
+ unsigned long dummy;
+ stack = &dummy;
+ if (tsk && tsk != current)
+- stack = (unsigned long *)tsk->thread.rsp;
++ stack = (unsigned long *)tsk->thread.sp;
+ }
+
+- /*
+- * Print function call entries within a stack. 'cond' is the
+- * "end of stackframe" condition, that the 'stack++'
+- * iteration will eventually trigger.
+- */
+-#define HANDLE_STACK(cond) \
+- do while (cond) { \
+- unsigned long addr = *stack++; \
+- /* Use unlocked access here because except for NMIs \
+- we should be already protected against module unloads */ \
+- if (__kernel_text_address(addr)) { \
+- /* \
+- * If the address is either in the text segment of the \
+- * kernel, or in the region which contains vmalloc'ed \
+- * memory, it *may* be the address of a calling \
+- * routine; if so, print it so that someone tracing \
+- * down the cause of the crash will be able to figure \
+- * out the call path that was taken. \
+- */ \
+- ops->address(data, addr); \
+- } \
+- } while (0)
++#ifdef CONFIG_FRAME_POINTER
++ if (!bp) {
++ if (tsk == current) {
++ /* Grab bp right from our regs */
++ asm("movq %%rbp, %0" : "=r" (bp):);
++ } else {
++ /* bp is the last reg pushed by switch_to */
++ bp = *(unsigned long *) tsk->thread.sp;
++ }
++ }
++#endif
++
++
+
+ /*
+ * Print function call entries in all stacks, starting at the
+@@ -270,7 +304,9 @@ void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
+ if (estack_end) {
+ if (ops->stack(data, id) < 0)
+ break;
+- HANDLE_STACK (stack < estack_end);
++
++ bp = print_context_stack(tinfo, stack, bp, ops,
++ data, estack_end);
+ ops->stack(data, "<EOE>");
+ /*
+ * We link to the next stack via the
+@@ -288,7 +324,8 @@ void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
+ if (stack >= irqstack && stack < irqstack_end) {
+ if (ops->stack(data, "IRQ") < 0)
+ break;
+- HANDLE_STACK (stack < irqstack_end);
++ bp = print_context_stack(tinfo, stack, bp,
++ ops, data, irqstack_end);
+ /*
+ * We link to the next stack (which would be
+ * the process stack normally) the last
+@@ -306,9 +343,7 @@ void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
+ /*
+ * This handles the process stack:
+ */
+- tinfo = task_thread_info(tsk);
+- HANDLE_STACK (valid_stack_ptr(tinfo, stack));
+-#undef HANDLE_STACK
++ bp = print_context_stack(tinfo, stack, bp, ops, data, NULL);
+ put_cpu();
+ }
+ EXPORT_SYMBOL(dump_trace);
+@@ -331,10 +366,10 @@ static int print_trace_stack(void *data, char *name)
+ return 0;
+ }
+
+-static void print_trace_address(void *data, unsigned long addr)
++static void print_trace_address(void *data, unsigned long addr, int reliable)
+ {
+ touch_nmi_watchdog();
+- printk_address(addr);
++ printk_address(addr, reliable);
+ }
+
+ static const struct stacktrace_ops print_trace_ops = {
+@@ -345,15 +380,17 @@ static const struct stacktrace_ops print_trace_ops = {
+ };
+
+ void
+-show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack)
++show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack,
++ unsigned long bp)
+ {
+ printk("\nCall Trace:\n");
+- dump_trace(tsk, regs, stack, &print_trace_ops, NULL);
++ dump_trace(tsk, regs, stack, bp, &print_trace_ops, NULL);
+ printk("\n");
+ }
+
+ static void
+-_show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp)
++_show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *sp,
++ unsigned long bp)
+ {
+ unsigned long *stack;
+ int i;
+@@ -364,14 +401,14 @@ _show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp)
+ // debugging aid: "show_stack(NULL, NULL);" prints the
+ // back trace for this cpu.
+
+- if (rsp == NULL) {
++ if (sp == NULL) {
+ if (tsk)
+- rsp = (unsigned long *)tsk->thread.rsp;
++ sp = (unsigned long *)tsk->thread.sp;
+ else
+- rsp = (unsigned long *)&rsp;
++ sp = (unsigned long *)&sp;
+ }
+
+- stack = rsp;
++ stack = sp;
+ for(i=0; i < kstack_depth_to_print; i++) {
+ if (stack >= irqstack && stack <= irqstack_end) {
+ if (stack == irqstack_end) {
+@@ -387,12 +424,12 @@ _show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp)
+ printk(" %016lx", *stack++);
+ touch_nmi_watchdog();
+ }
+- show_trace(tsk, regs, rsp);
++ show_trace(tsk, regs, sp, bp);
+ }
+
+-void show_stack(struct task_struct *tsk, unsigned long * rsp)
++void show_stack(struct task_struct *tsk, unsigned long * sp)
+ {
+- _show_stack(tsk, NULL, rsp);
++ _show_stack(tsk, NULL, sp, 0);
+ }
+
+ /*
+@@ -401,13 +438,19 @@ void show_stack(struct task_struct *tsk, unsigned long * rsp)
+ void dump_stack(void)
+ {
+ unsigned long dummy;
++ unsigned long bp = 0;
++
++#ifdef CONFIG_FRAME_POINTER
++ if (!bp)
++ asm("movq %%rbp, %0" : "=r" (bp):);
++#endif
+
+ printk("Pid: %d, comm: %.20s %s %s %.*s\n",
+ current->pid, current->comm, print_tainted(),
+ init_utsname()->release,
+ (int)strcspn(init_utsname()->version, " "),
+ init_utsname()->version);
+- show_trace(NULL, NULL, &dummy);
++ show_trace(NULL, NULL, &dummy, bp);
+ }
+
+ EXPORT_SYMBOL(dump_stack);
+@@ -415,12 +458,15 @@ EXPORT_SYMBOL(dump_stack);
+ void show_registers(struct pt_regs *regs)
+ {
+ int i;
+- int in_kernel = !user_mode(regs);
+- unsigned long rsp;
++ unsigned long sp;
+ const int cpu = smp_processor_id();
+ struct task_struct *cur = cpu_pda(cpu)->pcurrent;
++ u8 *ip;
++ unsigned int code_prologue = code_bytes * 43 / 64;
++ unsigned int code_len = code_bytes;
+
+- rsp = regs->rsp;
++ sp = regs->sp;
++ ip = (u8 *) regs->ip - code_prologue;
+ printk("CPU %d ", cpu);
+ __show_regs(regs);
+ printk("Process %s (pid: %d, threadinfo %p, task %p)\n",
+@@ -430,45 +476,43 @@ void show_registers(struct pt_regs *regs)
+ * When in-kernel, we also print out the stack and code at the
+ * time of the fault..
+ */
+- if (in_kernel) {
++ if (!user_mode(regs)) {
++ unsigned char c;
+ printk("Stack: ");
+- _show_stack(NULL, regs, (unsigned long*)rsp);
+-
+- printk("\nCode: ");
+- if (regs->rip < PAGE_OFFSET)
+- goto bad;
+-
+- for (i=0; i<20; i++) {
+- unsigned char c;
+- if (__get_user(c, &((unsigned char*)regs->rip)[i])) {
+-bad:
++ _show_stack(NULL, regs, (unsigned long *)sp, regs->bp);
++ printk("\n");
++
++ printk(KERN_EMERG "Code: ");
++ if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
++ /* try starting at RIP */
++ ip = (u8 *) regs->ip;
++ code_len = code_len - code_prologue + 1;
++ }
++ for (i = 0; i < code_len; i++, ip++) {
++ if (ip < (u8 *)PAGE_OFFSET ||
++ probe_kernel_address(ip, c)) {
+ printk(" Bad RIP value.");
+ break;
+ }
+- printk("%02x ", c);
++ if (ip == (u8 *)regs->ip)
++ printk("<%02x> ", c);
++ else
++ printk("%02x ", c);
+ }
+ }
+ printk("\n");
+ }
+
+-int is_valid_bugaddr(unsigned long rip)
++int is_valid_bugaddr(unsigned long ip)
+ {
+ unsigned short ud2;
+
+- if (__copy_from_user(&ud2, (const void __user *) rip, sizeof(ud2)))
++ if (__copy_from_user(&ud2, (const void __user *) ip, sizeof(ud2)))
+ return 0;
+
+ return ud2 == 0x0b0f;
+ }
+
+-#ifdef CONFIG_BUG
+-void out_of_line_bug(void)
+-{
+- BUG();
+-}
+-EXPORT_SYMBOL(out_of_line_bug);
+-#endif
-
--int kblockd_schedule_work(struct work_struct *work)
--{
-- return queue_work(kblockd_workqueue, work);
--}
+ static raw_spinlock_t die_lock = __RAW_SPIN_LOCK_UNLOCKED;
+ static int die_owner = -1;
+ static unsigned int die_nest_count;
+@@ -496,7 +540,7 @@ unsigned __kprobes long oops_begin(void)
+ return flags;
+ }
+
+-void __kprobes oops_end(unsigned long flags)
++void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ {
+ die_owner = -1;
+ bust_spinlocks(0);
+@@ -505,12 +549,17 @@ void __kprobes oops_end(unsigned long flags)
+ /* Nest count reaches zero, release the lock. */
+ __raw_spin_unlock(&die_lock);
+ raw_local_irq_restore(flags);
++ if (!regs) {
++ oops_exit();
++ return;
++ }
+ if (panic_on_oops)
+ panic("Fatal exception");
+ oops_exit();
++ do_exit(signr);
+ }
+
+-void __kprobes __die(const char * str, struct pt_regs * regs, long err)
++int __kprobes __die(const char * str, struct pt_regs * regs, long err)
+ {
+ static int die_counter;
+ printk(KERN_EMERG "%s: %04lx [%u] ", str, err & 0xffff,++die_counter);
+@@ -524,15 +573,17 @@ void __kprobes __die(const char * str, struct pt_regs * regs, long err)
+ printk("DEBUG_PAGEALLOC");
+ #endif
+ printk("\n");
+- notify_die(DIE_OOPS, str, regs, err, current->thread.trap_no, SIGSEGV);
++ if (notify_die(DIE_OOPS, str, regs, err, current->thread.trap_no, SIGSEGV) == NOTIFY_STOP)
++ return 1;
+ show_registers(regs);
+ add_taint(TAINT_DIE);
+ /* Executive summary in case the oops scrolled away */
+ printk(KERN_ALERT "RIP ");
+- printk_address(regs->rip);
+- printk(" RSP <%016lx>\n", regs->rsp);
++ printk_address(regs->ip, 1);
++ printk(" RSP <%016lx>\n", regs->sp);
+ if (kexec_should_crash(current))
+ crash_kexec(regs);
++ return 0;
+ }
+
+ void die(const char * str, struct pt_regs * regs, long err)
+@@ -540,11 +591,11 @@ void die(const char * str, struct pt_regs * regs, long err)
+ unsigned long flags = oops_begin();
+
+ if (!user_mode(regs))
+- report_bug(regs->rip, regs);
++ report_bug(regs->ip, regs);
+
+- __die(str, regs, err);
+- oops_end(flags);
+- do_exit(SIGSEGV);
++ if (__die(str, regs, err))
++ regs = NULL;
++ oops_end(flags, regs, SIGSEGV);
+ }
+
+ void __kprobes die_nmi(char *str, struct pt_regs *regs, int do_panic)
+@@ -561,10 +612,10 @@ void __kprobes die_nmi(char *str, struct pt_regs *regs, int do_panic)
+ crash_kexec(regs);
+ if (do_panic || panic_on_oops)
+ panic("Non maskable interrupt");
+- oops_end(flags);
++ oops_end(flags, NULL, SIGBUS);
+ nmi_exit();
+ local_irq_enable();
+- do_exit(SIGSEGV);
++ do_exit(SIGBUS);
+ }
+
+ static void __kprobes do_trap(int trapnr, int signr, char *str,
+@@ -588,11 +639,14 @@ static void __kprobes do_trap(int trapnr, int signr, char *str,
+ tsk->thread.trap_no = trapnr;
+
+ if (show_unhandled_signals && unhandled_signal(tsk, signr) &&
+- printk_ratelimit())
++ printk_ratelimit()) {
+ printk(KERN_INFO
+- "%s[%d] trap %s rip:%lx rsp:%lx error:%lx\n",
++ "%s[%d] trap %s ip:%lx sp:%lx error:%lx",
+ tsk->comm, tsk->pid, str,
+- regs->rip, regs->rsp, error_code);
++ regs->ip, regs->sp, error_code);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
+ if (info)
+ force_sig_info(signr, info, tsk);
+@@ -602,19 +656,12 @@ static void __kprobes do_trap(int trapnr, int signr, char *str,
+ }
+
+
+- /* kernel trap */
+- {
+- const struct exception_table_entry *fixup;
+- fixup = search_exception_tables(regs->rip);
+- if (fixup)
+- regs->rip = fixup->fixup;
+- else {
+- tsk->thread.error_code = error_code;
+- tsk->thread.trap_no = trapnr;
+- die(str, regs, error_code);
+- }
+- return;
++ if (!fixup_exception(regs)) {
++ tsk->thread.error_code = error_code;
++ tsk->thread.trap_no = trapnr;
++ die(str, regs, error_code);
+ }
++ return;
+ }
+
+ #define DO_ERROR(trapnr, signr, str, name) \
+@@ -643,10 +690,10 @@ asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+ do_trap(trapnr, signr, str, regs, error_code, &info); \
+ }
+
+-DO_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->rip)
++DO_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->ip)
+ DO_ERROR( 4, SIGSEGV, "overflow", overflow)
+ DO_ERROR( 5, SIGSEGV, "bounds", bounds)
+-DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->rip)
++DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip)
+ DO_ERROR( 7, SIGSEGV, "device not available", device_not_available)
+ DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
+ DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
+@@ -694,32 +741,28 @@ asmlinkage void __kprobes do_general_protection(struct pt_regs * regs,
+ tsk->thread.trap_no = 13;
+
+ if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
+- printk_ratelimit())
++ printk_ratelimit()) {
+ printk(KERN_INFO
+- "%s[%d] general protection rip:%lx rsp:%lx error:%lx\n",
++ "%s[%d] general protection ip:%lx sp:%lx error:%lx",
+ tsk->comm, tsk->pid,
+- regs->rip, regs->rsp, error_code);
++ regs->ip, regs->sp, error_code);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
+ force_sig(SIGSEGV, tsk);
+ return;
+ }
+
+- /* kernel gp */
+- {
+- const struct exception_table_entry *fixup;
+- fixup = search_exception_tables(regs->rip);
+- if (fixup) {
+- regs->rip = fixup->fixup;
+- return;
+- }
++ if (fixup_exception(regs))
++ return;
+
+- tsk->thread.error_code = error_code;
+- tsk->thread.trap_no = 13;
+- if (notify_die(DIE_GPF, "general protection fault", regs,
+- error_code, 13, SIGSEGV) == NOTIFY_STOP)
+- return;
+- die("general protection fault", regs, error_code);
+- }
++ tsk->thread.error_code = error_code;
++ tsk->thread.trap_no = 13;
++ if (notify_die(DIE_GPF, "general protection fault", regs,
++ error_code, 13, SIGSEGV) == NOTIFY_STOP)
++ return;
++ die("general protection fault", regs, error_code);
+ }
+
+ static __kprobes void
+@@ -832,15 +875,15 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
+ {
+ struct pt_regs *regs = eregs;
+ /* Did already sync */
+- if (eregs == (struct pt_regs *)eregs->rsp)
++ if (eregs == (struct pt_regs *)eregs->sp)
+ ;
+ /* Exception from user space */
+ else if (user_mode(eregs))
+ regs = task_pt_regs(current);
+ /* Exception from kernel and interrupts are enabled. Move to
+ kernel process stack. */
+- else if (eregs->eflags & X86_EFLAGS_IF)
+- regs = (struct pt_regs *)(eregs->rsp -= sizeof(struct pt_regs));
++ else if (eregs->flags & X86_EFLAGS_IF)
++ regs = (struct pt_regs *)(eregs->sp -= sizeof(struct pt_regs));
+ if (eregs != regs)
+ *regs = *eregs;
+ return regs;
+@@ -858,6 +901,12 @@ asmlinkage void __kprobes do_debug(struct pt_regs * regs,
+
+ get_debugreg(condition, 6);
+
++ /*
++ * The processor cleared BTF, so don't mark that we need it set.
++ */
++ clear_tsk_thread_flag(tsk, TIF_DEBUGCTLMSR);
++ tsk->thread.debugctlmsr = 0;
++
+ if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
+ SIGTRAP) == NOTIFY_STOP)
+ return;
+@@ -873,27 +922,14 @@ asmlinkage void __kprobes do_debug(struct pt_regs * regs,
+
+ tsk->thread.debugreg6 = condition;
+
+- /* Mask out spurious TF errors due to lazy TF clearing */
++
++ /*
++ * Single-stepping through TF: make sure we ignore any events in
++ * kernel space (but re-enable TF when returning to user mode).
++ */
+ if (condition & DR_STEP) {
+- /*
+- * The TF error should be masked out only if the current
+- * process is not traced and if the TRAP flag has been set
+- * previously by a tracing process (condition detected by
+- * the PT_DTRACE flag); remember that the i386 TRAP flag
+- * can be modified by the process itself in user mode,
+- * allowing programs to debug themselves without the ptrace()
+- * interface.
+- */
+ if (!user_mode(regs))
+ goto clear_TF_reenable;
+- /*
+- * Was the TF flag set by a debugger? If so, clear it now,
+- * so that register information is correct.
+- */
+- if (tsk->ptrace & PT_DTRACE) {
+- regs->eflags &= ~TF_MASK;
+- tsk->ptrace &= ~PT_DTRACE;
+- }
+ }
+
+ /* Ok, finally something we can handle */
+@@ -902,7 +938,7 @@ asmlinkage void __kprobes do_debug(struct pt_regs * regs,
+ info.si_signo = SIGTRAP;
+ info.si_errno = 0;
+ info.si_code = TRAP_BRKPT;
+- info.si_addr = user_mode(regs) ? (void __user *)regs->rip : NULL;
++ info.si_addr = user_mode(regs) ? (void __user *)regs->ip : NULL;
+ force_sig_info(SIGTRAP, &info, tsk);
+
+ clear_dr7:
+@@ -912,18 +948,15 @@ clear_dr7:
+
+ clear_TF_reenable:
+ set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
+- regs->eflags &= ~TF_MASK;
++ regs->flags &= ~X86_EFLAGS_TF;
+ preempt_conditional_cli(regs);
+ }
+
+ static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr)
+ {
+- const struct exception_table_entry *fixup;
+- fixup = search_exception_tables(regs->rip);
+- if (fixup) {
+- regs->rip = fixup->fixup;
++ if (fixup_exception(regs))
+ return 1;
+- }
++
+ notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE);
+ /* Illegal floating point operation in the kernel */
+ current->thread.trap_no = trapnr;
+@@ -938,7 +971,7 @@ static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr)
+ */
+ asmlinkage void do_coprocessor_error(struct pt_regs *regs)
+ {
+- void __user *rip = (void __user *)(regs->rip);
++ void __user *ip = (void __user *)(regs->ip);
+ struct task_struct * task;
+ siginfo_t info;
+ unsigned short cwd, swd;
+@@ -958,7 +991,7 @@ asmlinkage void do_coprocessor_error(struct pt_regs *regs)
+ info.si_signo = SIGFPE;
+ info.si_errno = 0;
+ info.si_code = __SI_FAULT;
+- info.si_addr = rip;
++ info.si_addr = ip;
+ /*
+ * (~cwd & swd) will mask out exceptions that are not set to unmasked
+ * status. 0x3f is the exception bits in these regs, 0x200 is the
+@@ -1007,7 +1040,7 @@ asmlinkage void bad_intr(void)
+
+ asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
+ {
+- void __user *rip = (void __user *)(regs->rip);
++ void __user *ip = (void __user *)(regs->ip);
+ struct task_struct * task;
+ siginfo_t info;
+ unsigned short mxcsr;
+@@ -1027,7 +1060,7 @@ asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
+ info.si_signo = SIGFPE;
+ info.si_errno = 0;
+ info.si_code = __SI_FAULT;
+- info.si_addr = rip;
++ info.si_addr = ip;
+ /*
+ * The SIMD FPU exceptions are handled a little differently, as there
+ * is only a single status/control register. Thus, to determine which
+@@ -1089,6 +1122,7 @@ asmlinkage void math_state_restore(void)
+ task_thread_info(me)->status |= TS_USEDFPU;
+ me->fpu_counter++;
+ }
++EXPORT_SYMBOL_GPL(math_state_restore);
+
+ void __init trap_init(void)
+ {
+@@ -1144,3 +1178,14 @@ static int __init kstack_setup(char *s)
+ return 0;
+ }
+ early_param("kstack", kstack_setup);
++
++
++static int __init code_bytes_setup(char *s)
++{
++ code_bytes = simple_strtoul(s, NULL, 0);
++ if (code_bytes > 8192)
++ code_bytes = 8192;
++
++ return 1;
++}
++__setup("code_bytes=", code_bytes_setup);
+diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
+index 9ebc0da..43517e3 100644
+--- a/arch/x86/kernel/tsc_32.c
++++ b/arch/x86/kernel/tsc_32.c
+@@ -5,6 +5,7 @@
+ #include <linux/jiffies.h>
+ #include <linux/init.h>
+ #include <linux/dmi.h>
++#include <linux/percpu.h>
+
+ #include <asm/delay.h>
+ #include <asm/tsc.h>
+@@ -23,8 +24,6 @@ static int tsc_enabled;
+ unsigned int tsc_khz;
+ EXPORT_SYMBOL_GPL(tsc_khz);
+
+-int tsc_disable;
-
--EXPORT_SYMBOL(kblockd_schedule_work);
+ #ifdef CONFIG_X86_TSC
+ static int __init tsc_setup(char *str)
+ {
+@@ -39,8 +38,7 @@ static int __init tsc_setup(char *str)
+ */
+ static int __init tsc_setup(char *str)
+ {
+- tsc_disable = 1;
-
--void kblockd_flush_work(struct work_struct *work)
--{
-- cancel_work_sync(work);
++ setup_clear_cpu_cap(X86_FEATURE_TSC);
+ return 1;
+ }
+ #endif
+@@ -80,13 +78,31 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
+ *
+ * -johnstul at us.ibm.com "math is hard, lets go shopping!"
+ */
+-unsigned long cyc2ns_scale __read_mostly;
+
+-#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
++DEFINE_PER_CPU(unsigned long, cyc2ns);
+
+-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
++static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
+ {
+- cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
++ unsigned long flags, prev_scale, *scale;
++ unsigned long long tsc_now, ns_now;
++
++ local_irq_save(flags);
++ sched_clock_idle_sleep_event();
++
++ scale = &per_cpu(cyc2ns, cpu);
++
++ rdtscll(tsc_now);
++ ns_now = __cycles_2_ns(tsc_now);
++
++ prev_scale = *scale;
++ if (cpu_khz)
++ *scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
++
++ /*
++ * Start smoothly with the new frequency:
++ */
++ sched_clock_idle_wakeup_event(0);
++ local_irq_restore(flags);
+ }
+
+ /*
+@@ -239,7 +255,9 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, void *data)
+ ref_freq, freq->new);
+ if (!(freq->flags & CPUFREQ_CONST_LOOPS)) {
+ tsc_khz = cpu_khz;
+- set_cyc2ns_scale(cpu_khz);
++ preempt_disable();
++ set_cyc2ns_scale(cpu_khz, smp_processor_id());
++ preempt_enable();
+ /*
+ * TSC based sched_clock turns
+ * to junk w/ cpufreq
+@@ -333,6 +351,11 @@ __cpuinit int unsynchronized_tsc(void)
+ {
+ if (!cpu_has_tsc || tsc_unstable)
+ return 1;
++
++ /* Anything with constant TSC should be synchronized */
++ if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
++ return 0;
++
+ /*
+ * Intel systems are normally all synchronized.
+ * Exceptions must mark TSC as unstable:
+@@ -367,7 +390,9 @@ static inline void check_geode_tsc_reliable(void) { }
+
+ void __init tsc_init(void)
+ {
+- if (!cpu_has_tsc || tsc_disable)
++ int cpu;
++
++ if (!cpu_has_tsc)
+ goto out_no_tsc;
+
+ cpu_khz = calculate_cpu_khz();
+@@ -380,7 +405,15 @@ void __init tsc_init(void)
+ (unsigned long)cpu_khz / 1000,
+ (unsigned long)cpu_khz % 1000);
+
+- set_cyc2ns_scale(cpu_khz);
++ /*
++ * Secondary CPUs do not run through tsc_init(), so set up
++ * all the scale factors for all CPUs, assuming the same
++ * speed as the bootup CPU. (cpufreq notifiers will fix this
++ * up if their speed diverges)
++ */
++ for_each_possible_cpu(cpu)
++ set_cyc2ns_scale(cpu_khz, cpu);
++
+ use_tsc_delay();
+
+ /* Check and install the TSC clocksource */
+@@ -403,10 +436,5 @@ void __init tsc_init(void)
+ return;
+
+ out_no_tsc:
+- /*
+- * Set the tsc_disable flag if there's no TSC support, this
+- * makes it a fast flag for the kernel to see whether it
+- * should be using the TSC.
+- */
+- tsc_disable = 1;
++ setup_clear_cpu_cap(X86_FEATURE_TSC);
+ }
+diff --git a/arch/x86/kernel/tsc_64.c b/arch/x86/kernel/tsc_64.c
+index 9c70af4..947554d 100644
+--- a/arch/x86/kernel/tsc_64.c
++++ b/arch/x86/kernel/tsc_64.c
+@@ -10,6 +10,7 @@
+
+ #include <asm/hpet.h>
+ #include <asm/timex.h>
++#include <asm/timer.h>
+
+ static int notsc __initdata = 0;
+
+@@ -18,19 +19,51 @@ EXPORT_SYMBOL(cpu_khz);
+ unsigned int tsc_khz;
+ EXPORT_SYMBOL(tsc_khz);
+
+-static unsigned int cyc2ns_scale __read_mostly;
++/* Accelerators for sched_clock()
++ * convert from cycles(64bits) => nanoseconds (64bits)
++ * basic equation:
++ * ns = cycles / (freq / ns_per_sec)
++ * ns = cycles * (ns_per_sec / freq)
++ * ns = cycles * (10^9 / (cpu_khz * 10^3))
++ * ns = cycles * (10^6 / cpu_khz)
++ *
++ * Then we use scaling math (suggested by george at mvista.com) to get:
++ * ns = cycles * (10^6 * SC / cpu_khz) / SC
++ * ns = cycles * cyc2ns_scale / SC
++ *
++ * And since SC is a constant power of two, we can convert the div
++ * into a shift.
++ *
++ * We can use khz divisor instead of mhz to keep a better precision, since
++ * cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
++ * (mathieu.desnoyers at polymtl.ca)
++ *
++ * -johnstul at us.ibm.com "math is hard, lets go shopping!"
++ */
++DEFINE_PER_CPU(unsigned long, cyc2ns);
+
+-static inline void set_cyc2ns_scale(unsigned long khz)
++static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
+ {
+- cyc2ns_scale = (NSEC_PER_MSEC << NS_SCALE) / khz;
-}
--EXPORT_SYMBOL(kblockd_flush_work);
--
--int __init blk_dev_init(void)
++ unsigned long flags, prev_scale, *scale;
++ unsigned long long tsc_now, ns_now;
+
+-static unsigned long long cycles_2_ns(unsigned long long cyc)
-{
-- int i;
--
-- kblockd_workqueue = create_workqueue("kblockd");
-- if (!kblockd_workqueue)
-- panic("Failed to create kblockd\n");
+- return (cyc * cyc2ns_scale) >> NS_SCALE;
++ local_irq_save(flags);
++ sched_clock_idle_sleep_event();
++
++ scale = &per_cpu(cyc2ns, cpu);
++
++ rdtscll(tsc_now);
++ ns_now = __cycles_2_ns(tsc_now);
++
++ prev_scale = *scale;
++ if (cpu_khz)
++ *scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
++
++ sched_clock_idle_wakeup_event(0);
++ local_irq_restore(flags);
+ }
+
+-unsigned long long sched_clock(void)
++unsigned long long native_sched_clock(void)
+ {
+ unsigned long a = 0;
+
+@@ -44,12 +77,27 @@ unsigned long long sched_clock(void)
+ return cycles_2_ns(a);
+ }
+
++/* We need to define a real function for sched_clock, to override the
++ weak default version */
++#ifdef CONFIG_PARAVIRT
++unsigned long long sched_clock(void)
++{
++ return paravirt_sched_clock();
++}
++#else
++unsigned long long
++sched_clock(void) __attribute__((alias("native_sched_clock")));
++#endif
++
++
+ static int tsc_unstable;
+
+-inline int check_tsc_unstable(void)
++int check_tsc_unstable(void)
+ {
+ return tsc_unstable;
+ }
++EXPORT_SYMBOL_GPL(check_tsc_unstable);
++
+ #ifdef CONFIG_CPU_FREQ
+
+ /* Frequency scaling support. Adjust the TSC based timer when the cpu frequency
+@@ -100,7 +148,9 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
+ mark_tsc_unstable("cpufreq changes");
+ }
+
+- set_cyc2ns_scale(tsc_khz_ref);
++ preempt_disable();
++ set_cyc2ns_scale(tsc_khz_ref, smp_processor_id());
++ preempt_enable();
+
+ return 0;
+ }
+@@ -133,12 +183,12 @@ static unsigned long __init tsc_read_refs(unsigned long *pm,
+ int i;
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+- t1 = get_cycles_sync();
++ t1 = get_cycles();
+ if (hpet)
+ *hpet = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF;
+ else
+ *pm = acpi_pm_read_early();
+- t2 = get_cycles_sync();
++ t2 = get_cycles();
+ if ((t2 - t1) < SMI_TRESHOLD)
+ return t2;
+ }
+@@ -151,7 +201,7 @@ static unsigned long __init tsc_read_refs(unsigned long *pm,
+ void __init tsc_calibrate(void)
+ {
+ unsigned long flags, tsc1, tsc2, tr1, tr2, pm1, pm2, hpet1, hpet2;
+- int hpet = is_hpet_enabled();
++ int hpet = is_hpet_enabled(), cpu;
+
+ local_irq_save(flags);
+
+@@ -162,9 +212,9 @@ void __init tsc_calibrate(void)
+ outb(0xb0, 0x43);
+ outb((CLOCK_TICK_RATE / (1000 / 50)) & 0xff, 0x42);
+ outb((CLOCK_TICK_RATE / (1000 / 50)) >> 8, 0x42);
+- tr1 = get_cycles_sync();
++ tr1 = get_cycles();
+ while ((inb(0x61) & 0x20) == 0);
+- tr2 = get_cycles_sync();
++ tr2 = get_cycles();
+
+ tsc2 = tsc_read_refs(&pm2, hpet ? &hpet2 : NULL);
+
+@@ -206,7 +256,9 @@ void __init tsc_calibrate(void)
+ }
+
+ tsc_khz = tsc2 / tsc1;
+- set_cyc2ns_scale(tsc_khz);
++
++ for_each_possible_cpu(cpu)
++ set_cyc2ns_scale(tsc_khz, cpu);
+ }
+
+ /*
+@@ -222,17 +274,9 @@ __cpuinit int unsynchronized_tsc(void)
+ if (apic_is_clustered_box())
+ return 1;
+ #endif
+- /* Most intel systems have synchronized TSCs except for
+- multi node systems */
+- if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+-#ifdef CONFIG_ACPI
+- /* But TSC doesn't tick in C3 so don't use it there */
+- if (acpi_gbl_FADT.header.length > 0 &&
+- acpi_gbl_FADT.C3latency < 1000)
+- return 1;
+-#endif
++
++ if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ return 0;
+- }
+
+ /* Assume multi socket systems are not synchronized */
+ return num_present_cpus() > 1;
+@@ -250,13 +294,13 @@ __setup("notsc", notsc_setup);
+ /* clock source code: */
+ static cycle_t read_tsc(void)
+ {
+- cycle_t ret = (cycle_t)get_cycles_sync();
++ cycle_t ret = (cycle_t)get_cycles();
+ return ret;
+ }
+
+ static cycle_t __vsyscall_fn vread_tsc(void)
+ {
+- cycle_t ret = (cycle_t)get_cycles_sync();
++ cycle_t ret = (cycle_t)vget_cycles();
+ return ret;
+ }
+
+diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
+index 9125efe..0577825 100644
+--- a/arch/x86/kernel/tsc_sync.c
++++ b/arch/x86/kernel/tsc_sync.c
+@@ -46,7 +46,7 @@ static __cpuinit void check_tsc_warp(void)
+ cycles_t start, now, prev, end;
+ int i;
+
+- start = get_cycles_sync();
++ start = get_cycles();
+ /*
+ * The measurement runs for 20 msecs:
+ */
+@@ -61,18 +61,18 @@ static __cpuinit void check_tsc_warp(void)
+ */
+ __raw_spin_lock(&sync_lock);
+ prev = last_tsc;
+- now = get_cycles_sync();
++ now = get_cycles();
+ last_tsc = now;
+ __raw_spin_unlock(&sync_lock);
+
+ /*
+ * Be nice every now and then (and also check whether
+- * measurement is done [we also insert a 100 million
++ * measurement is done [we also insert a 10 million
+ * loops safety exit, so we dont lock up in case the
+ * TSC readout is totally broken]):
+ */
+ if (unlikely(!(i & 7))) {
+- if (now > end || i > 100000000)
++ if (now > end || i > 10000000)
+ break;
+ cpu_relax();
+ touch_nmi_watchdog();
+@@ -87,7 +87,11 @@ static __cpuinit void check_tsc_warp(void)
+ nr_warps++;
+ __raw_spin_unlock(&sync_lock);
+ }
-
-- request_cachep = kmem_cache_create("blkdev_requests",
-- sizeof(struct request), 0, SLAB_PANIC, NULL);
++ }
++ if (!(now-start)) {
++ printk("Warning: zero tsc calibration delta: %Ld [max: %Ld]\n",
++ now-start, end-start);
++ WARN_ON(1);
+ }
+ }
+
+@@ -129,24 +133,24 @@ void __cpuinit check_tsc_sync_source(int cpu)
+ while (atomic_read(&stop_count) != cpus-1)
+ cpu_relax();
+
+- /*
+- * Reset it - just in case we boot another CPU later:
+- */
+- atomic_set(&start_count, 0);
-
-- requestq_cachep = kmem_cache_create("blkdev_queue",
-- sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
+ if (nr_warps) {
+ printk("\n");
+ printk(KERN_WARNING "Measured %Ld cycles TSC warp between CPUs,"
+ " turning off TSC clock.\n", max_warp);
+ mark_tsc_unstable("check_tsc_sync_source failed");
+- nr_warps = 0;
+- max_warp = 0;
+- last_tsc = 0;
+ } else {
+ printk(" passed.\n");
+ }
+
+ /*
++ * Reset it - just in case we boot another CPU later:
++ */
++ atomic_set(&start_count, 0);
++ nr_warps = 0;
++ max_warp = 0;
++ last_tsc = 0;
++
++ /*
+ * Let the target continue with the bootup:
+ */
+ atomic_inc(&stop_count);
+diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
+index 157e4be..738c210 100644
+--- a/arch/x86/kernel/vm86_32.c
++++ b/arch/x86/kernel/vm86_32.c
+@@ -70,10 +70,10 @@
+ /*
+ * 8- and 16-bit register defines..
+ */
+-#define AL(regs) (((unsigned char *)&((regs)->pt.eax))[0])
+-#define AH(regs) (((unsigned char *)&((regs)->pt.eax))[1])
+-#define IP(regs) (*(unsigned short *)&((regs)->pt.eip))
+-#define SP(regs) (*(unsigned short *)&((regs)->pt.esp))
++#define AL(regs) (((unsigned char *)&((regs)->pt.ax))[0])
++#define AH(regs) (((unsigned char *)&((regs)->pt.ax))[1])
++#define IP(regs) (*(unsigned short *)&((regs)->pt.ip))
++#define SP(regs) (*(unsigned short *)&((regs)->pt.sp))
+
+ /*
+ * virtual flags (16 and 32-bit versions)
+@@ -93,12 +93,12 @@ static int copy_vm86_regs_to_user(struct vm86_regs __user *user,
+ {
+ int ret = 0;
+
+- /* kernel_vm86_regs is missing xgs, so copy everything up to
++ /* kernel_vm86_regs is missing gs, so copy everything up to
+ (but not including) orig_eax, and then rest including orig_eax. */
+- ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.orig_eax));
+- ret += copy_to_user(&user->orig_eax, &regs->pt.orig_eax,
++ ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.orig_ax));
++ ret += copy_to_user(&user->orig_eax, &regs->pt.orig_ax,
+ sizeof(struct kernel_vm86_regs) -
+- offsetof(struct kernel_vm86_regs, pt.orig_eax));
++ offsetof(struct kernel_vm86_regs, pt.orig_ax));
+
+ return ret;
+ }
+@@ -110,18 +110,17 @@ static int copy_vm86_regs_from_user(struct kernel_vm86_regs *regs,
+ {
+ int ret = 0;
+
+- /* copy eax-xfs inclusive */
+- ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.orig_eax));
+- /* copy orig_eax-__gsh+extra */
+- ret += copy_from_user(&regs->pt.orig_eax, &user->orig_eax,
++ /* copy ax-fs inclusive */
++ ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.orig_ax));
++ /* copy orig_ax-__gsh+extra */
++ ret += copy_from_user(&regs->pt.orig_ax, &user->orig_eax,
+ sizeof(struct kernel_vm86_regs) -
+- offsetof(struct kernel_vm86_regs, pt.orig_eax) +
++ offsetof(struct kernel_vm86_regs, pt.orig_ax) +
+ extra);
+ return ret;
+ }
+
+-struct pt_regs * FASTCALL(save_v86_state(struct kernel_vm86_regs * regs));
+-struct pt_regs * fastcall save_v86_state(struct kernel_vm86_regs * regs)
++struct pt_regs * save_v86_state(struct kernel_vm86_regs * regs)
+ {
+ struct tss_struct *tss;
+ struct pt_regs *ret;
+@@ -138,7 +137,7 @@ struct pt_regs * fastcall save_v86_state(struct kernel_vm86_regs * regs)
+ printk("no vm86_info: BAD\n");
+ do_exit(SIGSEGV);
+ }
+- set_flags(regs->pt.eflags, VEFLAGS, VIF_MASK | current->thread.v86mask);
++ set_flags(regs->pt.flags, VEFLAGS, VIF_MASK | current->thread.v86mask);
+ tmp = copy_vm86_regs_to_user(&current->thread.vm86_info->regs,regs);
+ tmp += put_user(current->thread.screen_bitmap,&current->thread.vm86_info->screen_bitmap);
+ if (tmp) {
+@@ -147,15 +146,15 @@ struct pt_regs * fastcall save_v86_state(struct kernel_vm86_regs * regs)
+ }
+
+ tss = &per_cpu(init_tss, get_cpu());
+- current->thread.esp0 = current->thread.saved_esp0;
++ current->thread.sp0 = current->thread.saved_sp0;
+ current->thread.sysenter_cs = __KERNEL_CS;
+- load_esp0(tss, ¤t->thread);
+- current->thread.saved_esp0 = 0;
++ load_sp0(tss, ¤t->thread);
++ current->thread.saved_sp0 = 0;
+ put_cpu();
+
+ ret = KVM86->regs32;
+
+- ret->xfs = current->thread.saved_fs;
++ ret->fs = current->thread.saved_fs;
+ loadsegment(gs, current->thread.saved_gs);
+
+ return ret;
+@@ -197,7 +196,7 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
+
+ asmlinkage int sys_vm86old(struct pt_regs regs)
+ {
+- struct vm86_struct __user *v86 = (struct vm86_struct __user *)regs.ebx;
++ struct vm86_struct __user *v86 = (struct vm86_struct __user *)regs.bx;
+ struct kernel_vm86_struct info; /* declare this _on top_,
+ * this avoids wasting of stack space.
+ * This remains on the stack until we
+@@ -207,7 +206,7 @@ asmlinkage int sys_vm86old(struct pt_regs regs)
+ int tmp, ret = -EPERM;
+
+ tsk = current;
+- if (tsk->thread.saved_esp0)
++ if (tsk->thread.saved_sp0)
+ goto out;
+ tmp = copy_vm86_regs_from_user(&info.regs, &v86->regs,
+ offsetof(struct kernel_vm86_struct, vm86plus) -
+@@ -237,12 +236,12 @@ asmlinkage int sys_vm86(struct pt_regs regs)
+ struct vm86plus_struct __user *v86;
+
+ tsk = current;
+- switch (regs.ebx) {
++ switch (regs.bx) {
+ case VM86_REQUEST_IRQ:
+ case VM86_FREE_IRQ:
+ case VM86_GET_IRQ_BITS:
+ case VM86_GET_AND_RESET_IRQ:
+- ret = do_vm86_irq_handling(regs.ebx, (int)regs.ecx);
++ ret = do_vm86_irq_handling(regs.bx, (int)regs.cx);
+ goto out;
+ case VM86_PLUS_INSTALL_CHECK:
+ /* NOTE: on old vm86 stuff this will return the error
+@@ -256,9 +255,9 @@ asmlinkage int sys_vm86(struct pt_regs regs)
+
+ /* we come here only for functions VM86_ENTER, VM86_ENTER_NO_BYPASS */
+ ret = -EPERM;
+- if (tsk->thread.saved_esp0)
++ if (tsk->thread.saved_sp0)
+ goto out;
+- v86 = (struct vm86plus_struct __user *)regs.ecx;
++ v86 = (struct vm86plus_struct __user *)regs.cx;
+ tmp = copy_vm86_regs_from_user(&info.regs, &v86->regs,
+ offsetof(struct kernel_vm86_struct, regs32) -
+ sizeof(info.regs));
+@@ -281,23 +280,23 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
+ /*
+ * make sure the vm86() system call doesn't try to do anything silly
+ */
+- info->regs.pt.xds = 0;
+- info->regs.pt.xes = 0;
+- info->regs.pt.xfs = 0;
++ info->regs.pt.ds = 0;
++ info->regs.pt.es = 0;
++ info->regs.pt.fs = 0;
+
+ /* we are clearing gs later just before "jmp resume_userspace",
+ * because it is not saved/restored.
+ */
+
+ /*
+- * The eflags register is also special: we cannot trust that the user
++ * The flags register is also special: we cannot trust that the user
+ * has set it up safely, so this makes sure interrupt etc flags are
+ * inherited from protected mode.
+ */
+- VEFLAGS = info->regs.pt.eflags;
+- info->regs.pt.eflags &= SAFE_MASK;
+- info->regs.pt.eflags |= info->regs32->eflags & ~SAFE_MASK;
+- info->regs.pt.eflags |= VM_MASK;
++ VEFLAGS = info->regs.pt.flags;
++ info->regs.pt.flags &= SAFE_MASK;
++ info->regs.pt.flags |= info->regs32->flags & ~SAFE_MASK;
++ info->regs.pt.flags |= VM_MASK;
+
+ switch (info->cpu_type) {
+ case CPU_286:
+@@ -315,18 +314,18 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
+ }
+
+ /*
+- * Save old state, set default return value (%eax) to 0
++ * Save old state, set default return value (%ax) to 0
+ */
+- info->regs32->eax = 0;
+- tsk->thread.saved_esp0 = tsk->thread.esp0;
+- tsk->thread.saved_fs = info->regs32->xfs;
++ info->regs32->ax = 0;
++ tsk->thread.saved_sp0 = tsk->thread.sp0;
++ tsk->thread.saved_fs = info->regs32->fs;
+ savesegment(gs, tsk->thread.saved_gs);
+
+ tss = &per_cpu(init_tss, get_cpu());
+- tsk->thread.esp0 = (unsigned long) &info->VM86_TSS_ESP0;
++ tsk->thread.sp0 = (unsigned long) &info->VM86_TSS_ESP0;
+ if (cpu_has_sep)
+ tsk->thread.sysenter_cs = 0;
+- load_esp0(tss, &tsk->thread);
++ load_sp0(tss, &tsk->thread);
+ put_cpu();
+
+ tsk->thread.screen_bitmap = info->screen_bitmap;
+@@ -352,7 +351,7 @@ static inline void return_to_32bit(struct kernel_vm86_regs * regs16, int retval)
+ struct pt_regs * regs32;
+
+ regs32 = save_v86_state(regs16);
+- regs32->eax = retval;
++ regs32->ax = retval;
+ __asm__ __volatile__("movl %0,%%esp\n\t"
+ "movl %1,%%ebp\n\t"
+ "jmp resume_userspace"
+@@ -373,30 +372,30 @@ static inline void clear_IF(struct kernel_vm86_regs * regs)
+
+ static inline void clear_TF(struct kernel_vm86_regs * regs)
+ {
+- regs->pt.eflags &= ~TF_MASK;
++ regs->pt.flags &= ~TF_MASK;
+ }
+
+ static inline void clear_AC(struct kernel_vm86_regs * regs)
+ {
+- regs->pt.eflags &= ~AC_MASK;
++ regs->pt.flags &= ~AC_MASK;
+ }
+
+ /* It is correct to call set_IF(regs) from the set_vflags_*
+ * functions. However someone forgot to call clear_IF(regs)
+ * in the opposite case.
+ * After the command sequence CLI PUSHF STI POPF you should
+- * end up with interrups disabled, but you ended up with
++ * end up with interrupts disabled, but you ended up with
+ * interrupts enabled.
+ * ( I was testing my own changes, but the only bug I
+ * could find was in a function I had not changed. )
+ * [KD]
+ */
+
+-static inline void set_vflags_long(unsigned long eflags, struct kernel_vm86_regs * regs)
++static inline void set_vflags_long(unsigned long flags, struct kernel_vm86_regs * regs)
+ {
+- set_flags(VEFLAGS, eflags, current->thread.v86mask);
+- set_flags(regs->pt.eflags, eflags, SAFE_MASK);
+- if (eflags & IF_MASK)
++ set_flags(VEFLAGS, flags, current->thread.v86mask);
++ set_flags(regs->pt.flags, flags, SAFE_MASK);
++ if (flags & IF_MASK)
+ set_IF(regs);
+ else
+ clear_IF(regs);
+@@ -405,7 +404,7 @@ static inline void set_vflags_long(unsigned long eflags, struct kernel_vm86_regs
+ static inline void set_vflags_short(unsigned short flags, struct kernel_vm86_regs * regs)
+ {
+ set_flags(VFLAGS, flags, current->thread.v86mask);
+- set_flags(regs->pt.eflags, flags, SAFE_MASK);
++ set_flags(regs->pt.flags, flags, SAFE_MASK);
+ if (flags & IF_MASK)
+ set_IF(regs);
+ else
+@@ -414,7 +413,7 @@ static inline void set_vflags_short(unsigned short flags, struct kernel_vm86_reg
+
+ static inline unsigned long get_vflags(struct kernel_vm86_regs * regs)
+ {
+- unsigned long flags = regs->pt.eflags & RETURN_MASK;
++ unsigned long flags = regs->pt.flags & RETURN_MASK;
+
+ if (VEFLAGS & VIF_MASK)
+ flags |= IF_MASK;
+@@ -518,7 +517,7 @@ static void do_int(struct kernel_vm86_regs *regs, int i,
+ unsigned long __user *intr_ptr;
+ unsigned long segoffs;
+
+- if (regs->pt.xcs == BIOSSEG)
++ if (regs->pt.cs == BIOSSEG)
+ goto cannot_handle;
+ if (is_revectored(i, &KVM86->int_revectored))
+ goto cannot_handle;
+@@ -530,9 +529,9 @@ static void do_int(struct kernel_vm86_regs *regs, int i,
+ if ((segoffs >> 16) == BIOSSEG)
+ goto cannot_handle;
+ pushw(ssp, sp, get_vflags(regs), cannot_handle);
+- pushw(ssp, sp, regs->pt.xcs, cannot_handle);
++ pushw(ssp, sp, regs->pt.cs, cannot_handle);
+ pushw(ssp, sp, IP(regs), cannot_handle);
+- regs->pt.xcs = segoffs >> 16;
++ regs->pt.cs = segoffs >> 16;
+ SP(regs) -= 6;
+ IP(regs) = segoffs & 0xffff;
+ clear_TF(regs);
+@@ -549,7 +548,7 @@ int handle_vm86_trap(struct kernel_vm86_regs * regs, long error_code, int trapno
+ if (VMPI.is_vm86pus) {
+ if ( (trapno==3) || (trapno==1) )
+ return_to_32bit(regs, VM86_TRAP + (trapno << 8));
+- do_int(regs, trapno, (unsigned char __user *) (regs->pt.xss << 4), SP(regs));
++ do_int(regs, trapno, (unsigned char __user *) (regs->pt.ss << 4), SP(regs));
+ return 0;
+ }
+ if (trapno !=1)
+@@ -585,10 +584,10 @@ void handle_vm86_fault(struct kernel_vm86_regs * regs, long error_code)
+ handle_vm86_trap(regs, 0, 1); \
+ return; } while (0)
+
+- orig_flags = *(unsigned short *)&regs->pt.eflags;
++ orig_flags = *(unsigned short *)&regs->pt.flags;
+
+- csp = (unsigned char __user *) (regs->pt.xcs << 4);
+- ssp = (unsigned char __user *) (regs->pt.xss << 4);
++ csp = (unsigned char __user *) (regs->pt.cs << 4);
++ ssp = (unsigned char __user *) (regs->pt.ss << 4);
+ sp = SP(regs);
+ ip = IP(regs);
+
+@@ -675,7 +674,7 @@ void handle_vm86_fault(struct kernel_vm86_regs * regs, long error_code)
+ SP(regs) += 6;
+ }
+ IP(regs) = newip;
+- regs->pt.xcs = newcs;
++ regs->pt.cs = newcs;
+ CHECK_IF_IN_TRAP;
+ if (data32) {
+ set_vflags_long(newflags, regs);
+diff --git a/arch/x86/kernel/vmi_32.c b/arch/x86/kernel/vmi_32.c
+index f02bad6..4525bc2 100644
+--- a/arch/x86/kernel/vmi_32.c
++++ b/arch/x86/kernel/vmi_32.c
+@@ -62,7 +62,10 @@ static struct {
+ void (*cpuid)(void /* non-c */);
+ void (*_set_ldt)(u32 selector);
+ void (*set_tr)(u32 selector);
+- void (*set_kernel_stack)(u32 selector, u32 esp0);
++ void (*write_idt_entry)(struct desc_struct *, int, u32, u32);
++ void (*write_gdt_entry)(struct desc_struct *, int, u32, u32);
++ void (*write_ldt_entry)(struct desc_struct *, int, u32, u32);
++ void (*set_kernel_stack)(u32 selector, u32 sp0);
+ void (*allocate_page)(u32, u32, u32, u32, u32);
+ void (*release_page)(u32, u32);
+ void (*set_pte)(pte_t, pte_t *, unsigned);
+@@ -88,13 +91,13 @@ struct vmi_timer_ops vmi_timer_ops;
+ #define IRQ_PATCH_DISABLE 5
+
+ static inline void patch_offset(void *insnbuf,
+- unsigned long eip, unsigned long dest)
++ unsigned long ip, unsigned long dest)
+ {
+- *(unsigned long *)(insnbuf+1) = dest-eip-5;
++ *(unsigned long *)(insnbuf+1) = dest-ip-5;
+ }
+
+ static unsigned patch_internal(int call, unsigned len, void *insnbuf,
+- unsigned long eip)
++ unsigned long ip)
+ {
+ u64 reloc;
+ struct vmi_relocation_info *const rel = (struct vmi_relocation_info *)&reloc;
+@@ -103,13 +106,13 @@ static unsigned patch_internal(int call, unsigned len, void *insnbuf,
+ case VMI_RELOCATION_CALL_REL:
+ BUG_ON(len < 5);
+ *(char *)insnbuf = MNEM_CALL;
+- patch_offset(insnbuf, eip, (unsigned long)rel->eip);
++ patch_offset(insnbuf, ip, (unsigned long)rel->eip);
+ return 5;
+
+ case VMI_RELOCATION_JUMP_REL:
+ BUG_ON(len < 5);
+ *(char *)insnbuf = MNEM_JMP;
+- patch_offset(insnbuf, eip, (unsigned long)rel->eip);
++ patch_offset(insnbuf, ip, (unsigned long)rel->eip);
+ return 5;
+
+ case VMI_RELOCATION_NOP:
+@@ -131,25 +134,25 @@ static unsigned patch_internal(int call, unsigned len, void *insnbuf,
+ * sequence. The callee does nop padding for us.
+ */
+ static unsigned vmi_patch(u8 type, u16 clobbers, void *insns,
+- unsigned long eip, unsigned len)
++ unsigned long ip, unsigned len)
+ {
+ switch (type) {
+ case PARAVIRT_PATCH(pv_irq_ops.irq_disable):
+ return patch_internal(VMI_CALL_DisableInterrupts, len,
+- insns, eip);
++ insns, ip);
+ case PARAVIRT_PATCH(pv_irq_ops.irq_enable):
+ return patch_internal(VMI_CALL_EnableInterrupts, len,
+- insns, eip);
++ insns, ip);
+ case PARAVIRT_PATCH(pv_irq_ops.restore_fl):
+ return patch_internal(VMI_CALL_SetInterruptMask, len,
+- insns, eip);
++ insns, ip);
+ case PARAVIRT_PATCH(pv_irq_ops.save_fl):
+ return patch_internal(VMI_CALL_GetInterruptMask, len,
+- insns, eip);
++ insns, ip);
+ case PARAVIRT_PATCH(pv_cpu_ops.iret):
+- return patch_internal(VMI_CALL_IRET, len, insns, eip);
+- case PARAVIRT_PATCH(pv_cpu_ops.irq_enable_sysexit):
+- return patch_internal(VMI_CALL_SYSEXIT, len, insns, eip);
++ return patch_internal(VMI_CALL_IRET, len, insns, ip);
++ case PARAVIRT_PATCH(pv_cpu_ops.irq_enable_syscall_ret):
++ return patch_internal(VMI_CALL_SYSEXIT, len, insns, ip);
+ default:
+ break;
+ }
+@@ -157,36 +160,36 @@ static unsigned vmi_patch(u8 type, u16 clobbers, void *insns,
+ }
+
+ /* CPUID has non-C semantics, and paravirt-ops API doesn't match hardware ISA */
+-static void vmi_cpuid(unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
++static void vmi_cpuid(unsigned int *ax, unsigned int *bx,
++ unsigned int *cx, unsigned int *dx)
+ {
+ int override = 0;
+- if (*eax == 1)
++ if (*ax == 1)
+ override = 1;
+ asm volatile ("call *%6"
+- : "=a" (*eax),
+- "=b" (*ebx),
+- "=c" (*ecx),
+- "=d" (*edx)
+- : "0" (*eax), "2" (*ecx), "r" (vmi_ops.cpuid));
++ : "=a" (*ax),
++ "=b" (*bx),
++ "=c" (*cx),
++ "=d" (*dx)
++ : "0" (*ax), "2" (*cx), "r" (vmi_ops.cpuid));
+ if (override) {
+ if (disable_pse)
+- *edx &= ~X86_FEATURE_PSE;
++ *dx &= ~X86_FEATURE_PSE;
+ if (disable_pge)
+- *edx &= ~X86_FEATURE_PGE;
++ *dx &= ~X86_FEATURE_PGE;
+ if (disable_sep)
+- *edx &= ~X86_FEATURE_SEP;
++ *dx &= ~X86_FEATURE_SEP;
+ if (disable_tsc)
+- *edx &= ~X86_FEATURE_TSC;
++ *dx &= ~X86_FEATURE_TSC;
+ if (disable_mtrr)
+- *edx &= ~X86_FEATURE_MTRR;
++ *dx &= ~X86_FEATURE_MTRR;
+ }
+ }
+
+ static inline void vmi_maybe_load_tls(struct desc_struct *gdt, int nr, struct desc_struct *new)
+ {
+ if (gdt[nr].a != new->a || gdt[nr].b != new->b)
+- write_gdt_entry(gdt, nr, new->a, new->b);
++ write_gdt_entry(gdt, nr, new, 0);
+ }
+
+ static void vmi_load_tls(struct thread_struct *t, unsigned int cpu)
+@@ -200,12 +203,12 @@ static void vmi_load_tls(struct thread_struct *t, unsigned int cpu)
+ static void vmi_set_ldt(const void *addr, unsigned entries)
+ {
+ unsigned cpu = smp_processor_id();
+- u32 low, high;
++ struct desc_struct desc;
+
+- pack_descriptor(&low, &high, (unsigned long)addr,
++ pack_descriptor(&desc, (unsigned long)addr,
+ entries * sizeof(struct desc_struct) - 1,
+- DESCTYPE_LDT, 0);
+- write_gdt_entry(get_cpu_gdt_table(cpu), GDT_ENTRY_LDT, low, high);
++ DESC_LDT, 0);
++ write_gdt_entry(get_cpu_gdt_table(cpu), GDT_ENTRY_LDT, &desc, DESC_LDT);
+ vmi_ops._set_ldt(entries ? GDT_ENTRY_LDT*sizeof(struct desc_struct) : 0);
+ }
+
+@@ -214,17 +217,37 @@ static void vmi_set_tr(void)
+ vmi_ops.set_tr(GDT_ENTRY_TSS*sizeof(struct desc_struct));
+ }
+
+-static void vmi_load_esp0(struct tss_struct *tss,
++static void vmi_write_idt_entry(gate_desc *dt, int entry, const gate_desc *g)
++{
++ u32 *idt_entry = (u32 *)g;
++ vmi_ops.write_idt_entry(dt, entry, idt_entry[0], idt_entry[2]);
++}
++
++static void vmi_write_gdt_entry(struct desc_struct *dt, int entry,
++ const void *desc, int type)
++{
++ u32 *gdt_entry = (u32 *)desc;
++ vmi_ops.write_gdt_entry(dt, entry, gdt_entry[0], gdt_entry[2]);
++}
++
++static void vmi_write_ldt_entry(struct desc_struct *dt, int entry,
++ const void *desc)
++{
++ u32 *ldt_entry = (u32 *)desc;
++ vmi_ops.write_idt_entry(dt, entry, ldt_entry[0], ldt_entry[2]);
++}
++
++static void vmi_load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+ {
+- tss->x86_tss.esp0 = thread->esp0;
++ tss->x86_tss.sp0 = thread->sp0;
+
+ /* This can only happen when SEP is enabled, no need to test "SEP"arately */
+ if (unlikely(tss->x86_tss.ss1 != thread->sysenter_cs)) {
+ tss->x86_tss.ss1 = thread->sysenter_cs;
+ wrmsr(MSR_IA32_SYSENTER_CS, thread->sysenter_cs, 0);
+ }
+- vmi_ops.set_kernel_stack(__KERNEL_DS, tss->x86_tss.esp0);
++ vmi_ops.set_kernel_stack(__KERNEL_DS, tss->x86_tss.sp0);
+ }
+
+ static void vmi_flush_tlb_user(void)
+@@ -375,7 +398,7 @@ static void vmi_allocate_pt(struct mm_struct *mm, u32 pfn)
+ vmi_ops.allocate_page(pfn, VMI_PAGE_L1, 0, 0, 0);
+ }
+
+-static void vmi_allocate_pd(u32 pfn)
++static void vmi_allocate_pd(struct mm_struct *mm, u32 pfn)
+ {
+ /*
+ * This call comes in very early, before mem_map is setup.
+@@ -452,7 +475,7 @@ static void vmi_set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep
+ static void vmi_set_pmd(pmd_t *pmdp, pmd_t pmdval)
+ {
+ #ifdef CONFIG_X86_PAE
+- const pte_t pte = { pmdval.pmd, pmdval.pmd >> 32 };
++ const pte_t pte = { .pte = pmdval.pmd };
+ vmi_check_page_type(__pa(pmdp) >> PAGE_SHIFT, VMI_PAGE_PMD);
+ #else
+ const pte_t pte = { pmdval.pud.pgd.pgd };
+@@ -485,21 +508,21 @@ static void vmi_set_pte_present(struct mm_struct *mm, unsigned long addr, pte_t
+ static void vmi_set_pud(pud_t *pudp, pud_t pudval)
+ {
+ /* Um, eww */
+- const pte_t pte = { pudval.pgd.pgd, pudval.pgd.pgd >> 32 };
++ const pte_t pte = { .pte = pudval.pgd.pgd };
+ vmi_check_page_type(__pa(pudp) >> PAGE_SHIFT, VMI_PAGE_PGD);
+ vmi_ops.set_pte(pte, (pte_t *)pudp, VMI_PAGE_PDP);
+ }
+
+ static void vmi_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+- const pte_t pte = { 0 };
++ const pte_t pte = { .pte = 0 };
+ vmi_check_page_type(__pa(ptep) >> PAGE_SHIFT, VMI_PAGE_PTE);
+ vmi_ops.set_pte(pte, ptep, vmi_flags_addr(mm, addr, VMI_PAGE_PT, 0));
+ }
+
+ static void vmi_pmd_clear(pmd_t *pmd)
+ {
+- const pte_t pte = { 0 };
++ const pte_t pte = { .pte = 0 };
+ vmi_check_page_type(__pa(pmd) >> PAGE_SHIFT, VMI_PAGE_PMD);
+ vmi_ops.set_pte(pte, (pte_t *)pmd, VMI_PAGE_PD);
+ }
+@@ -790,10 +813,13 @@ static inline int __init activate_vmi(void)
+ para_fill(pv_cpu_ops.store_idt, GetIDT);
+ para_fill(pv_cpu_ops.store_tr, GetTR);
+ pv_cpu_ops.load_tls = vmi_load_tls;
+- para_fill(pv_cpu_ops.write_ldt_entry, WriteLDTEntry);
+- para_fill(pv_cpu_ops.write_gdt_entry, WriteGDTEntry);
+- para_fill(pv_cpu_ops.write_idt_entry, WriteIDTEntry);
+- para_wrap(pv_cpu_ops.load_esp0, vmi_load_esp0, set_kernel_stack, UpdateKernelStack);
++ para_wrap(pv_cpu_ops.write_ldt_entry, vmi_write_ldt_entry,
++ write_ldt_entry, WriteLDTEntry);
++ para_wrap(pv_cpu_ops.write_gdt_entry, vmi_write_gdt_entry,
++ write_gdt_entry, WriteGDTEntry);
++ para_wrap(pv_cpu_ops.write_idt_entry, vmi_write_idt_entry,
++ write_idt_entry, WriteIDTEntry);
++ para_wrap(pv_cpu_ops.load_sp0, vmi_load_sp0, set_kernel_stack, UpdateKernelStack);
+ para_fill(pv_cpu_ops.set_iopl_mask, SetIOPLMask);
+ para_fill(pv_cpu_ops.io_delay, IODelay);
+
+@@ -870,7 +896,7 @@ static inline int __init activate_vmi(void)
+ * the backend. They are performance critical anyway, so requiring
+ * a patch is not a big problem.
+ */
+- pv_cpu_ops.irq_enable_sysexit = (void *)0xfeedbab0;
++ pv_cpu_ops.irq_enable_syscall_ret = (void *)0xfeedbab0;
+ pv_cpu_ops.iret = (void *)0xbadbab0;
+
+ #ifdef CONFIG_SMP
+@@ -963,19 +989,19 @@ static int __init parse_vmi(char *arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "disable_pge")) {
+- clear_bit(X86_FEATURE_PGE, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_PGE);
+ disable_pge = 1;
+ } else if (!strcmp(arg, "disable_pse")) {
+- clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_PSE);
+ disable_pse = 1;
+ } else if (!strcmp(arg, "disable_sep")) {
+- clear_bit(X86_FEATURE_SEP, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_SEP);
+ disable_sep = 1;
+ } else if (!strcmp(arg, "disable_tsc")) {
+- clear_bit(X86_FEATURE_TSC, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_TSC);
+ disable_tsc = 1;
+ } else if (!strcmp(arg, "disable_mtrr")) {
+- clear_bit(X86_FEATURE_MTRR, boot_cpu_data.x86_capability);
++ clear_cpu_cap(&boot_cpu_data, X86_FEATURE_MTRR);
+ disable_mtrr = 1;
+ } else if (!strcmp(arg, "disable_timer")) {
+ disable_vmi_timer = 1;
+diff --git a/arch/x86/kernel/vmiclock_32.c b/arch/x86/kernel/vmiclock_32.c
+index b1b5ab0..a2b0307 100644
+--- a/arch/x86/kernel/vmiclock_32.c
++++ b/arch/x86/kernel/vmiclock_32.c
+@@ -35,7 +35,6 @@
+ #include <asm/i8253.h>
+
+ #include <irq_vectors.h>
+-#include "io_ports.h"
+
+ #define VMI_ONESHOT (VMI_ALARM_IS_ONESHOT | VMI_CYCLES_REAL | vmi_get_alarm_wiring())
+ #define VMI_PERIODIC (VMI_ALARM_IS_PERIODIC | VMI_CYCLES_REAL | vmi_get_alarm_wiring())
+@@ -238,7 +237,7 @@ static void __devinit vmi_time_init_clockevent(void)
+ void __init vmi_time_init(void)
+ {
+ /* Disable PIT: BIOSes start PIT CH0 with 18.2hz peridic. */
+- outb_p(0x3a, PIT_MODE); /* binary, mode 5, LSB/MSB, ch 0 */
++ outb_pit(0x3a, PIT_MODE); /* binary, mode 5, LSB/MSB, ch 0 */
+
+ vmi_time_init_clockevent();
+ setup_irq(0, &vmi_clock_action);
+diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
+index 7d72cce..f1148ac 100644
+--- a/arch/x86/kernel/vmlinux_32.lds.S
++++ b/arch/x86/kernel/vmlinux_32.lds.S
+@@ -8,12 +8,6 @@
+ * put it inside the section definition.
+ */
+
+-/* Don't define absolute symbols until and unless you know that symbol
+- * value is should remain constant even if kernel image is relocated
+- * at run time. Absolute symbols are not relocated. If symbol value should
+- * change if kernel is relocated, make the symbol section relative and
+- * put it inside the section definition.
+- */
+ #define LOAD_OFFSET __PAGE_OFFSET
+
+ #include <asm-generic/vmlinux.lds.h>
+@@ -44,6 +38,8 @@ SECTIONS
+
+ /* read-only */
+ .text : AT(ADDR(.text) - LOAD_OFFSET) {
++ . = ALIGN(4096); /* not really needed, already page aligned */
++ *(.text.page_aligned)
+ TEXT_TEXT
+ SCHED_TEXT
+ LOCK_TEXT
+@@ -131,10 +127,12 @@ SECTIONS
+ .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
+ __init_begin = .;
+ _sinittext = .;
+- *(.init.text)
++ INIT_TEXT
+ _einittext = .;
+ }
+- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { *(.init.data) }
++ .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
++ INIT_DATA
++ }
+ . = ALIGN(16);
+ .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
+ __setup_start = .;
+@@ -169,8 +167,12 @@ SECTIONS
+ }
+ /* .exit.text is discard at runtime, not link time, to deal with references
+ from .altinstructions and .eh_frame */
+- .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { *(.exit.text) }
+- .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) { *(.exit.data) }
++ .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
++ EXIT_TEXT
++ }
++ .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
++ EXIT_DATA
++ }
+ #if defined(CONFIG_BLK_DEV_INITRD)
+ . = ALIGN(4096);
+ .init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
+diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
+index ba8ea97..0992b99 100644
+--- a/arch/x86/kernel/vmlinux_64.lds.S
++++ b/arch/x86/kernel/vmlinux_64.lds.S
+@@ -37,16 +37,15 @@ SECTIONS
+ KPROBES_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+- } :text = 0x9090
+- /* out-of-line lock text */
+- .text.lock : AT(ADDR(.text.lock) - LOAD_OFFSET) { *(.text.lock) }
-
-- iocontext_cachep = kmem_cache_create("blkdev_ioc",
-- sizeof(struct io_context), 0, SLAB_PANIC, NULL);
+- _etext = .; /* End of text section */
++ _etext = .; /* End of text section */
++ } :text = 0x9090
+
+ . = ALIGN(16); /* Exception table */
+- __start___ex_table = .;
+- __ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) { *(__ex_table) }
+- __stop___ex_table = .;
++ __ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
++ __start___ex_table = .;
++ *(__ex_table)
++ __stop___ex_table = .;
++ }
+
+ NOTES :text :note
+
+@@ -155,12 +154,15 @@ SECTIONS
+ __init_begin = .;
+ .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
+ _sinittext = .;
+- *(.init.text)
++ INIT_TEXT
+ _einittext = .;
+ }
+- __initdata_begin = .;
+- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { *(.init.data) }
+- __initdata_end = .;
++ .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
++ __initdata_begin = .;
++ INIT_DATA
++ __initdata_end = .;
++ }
++
+ . = ALIGN(16);
+ __setup_start = .;
+ .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) { *(.init.setup) }
+@@ -176,6 +178,14 @@ SECTIONS
+ }
+ __con_initcall_end = .;
+ SECURITY_INIT
++
++ . = ALIGN(8);
++ .parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
++ __parainstructions = .;
++ *(.parainstructions)
++ __parainstructions_end = .;
++ }
++
+ . = ALIGN(8);
+ __alt_instructions = .;
+ .altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
+@@ -187,8 +197,12 @@ SECTIONS
+ }
+ /* .exit.text is discard at runtime, not link time, to deal with references
+ from .altinstructions and .eh_frame */
+- .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { *(.exit.text) }
+- .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) { *(.exit.data) }
++ .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
++ EXIT_TEXT
++ }
++ .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
++ EXIT_DATA
++ }
+
+ /* vdso blob that is mapped into user space */
+ vdso_start = . ;
+diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
+index 414caf0..d971210 100644
+--- a/arch/x86/kernel/vsmp_64.c
++++ b/arch/x86/kernel/vsmp_64.c
+@@ -25,21 +25,24 @@ static int __init vsmp_init(void)
+ return 0;
+
+ /* Check if we are running on a ScaleMP vSMP box */
+- if ((read_pci_config_16(0, 0x1f, 0, PCI_VENDOR_ID) != PCI_VENDOR_ID_SCALEMP) ||
+- (read_pci_config_16(0, 0x1f, 0, PCI_DEVICE_ID) != PCI_DEVICE_ID_SCALEMP_VSMP_CTL))
++ if ((read_pci_config_16(0, 0x1f, 0, PCI_VENDOR_ID) !=
++ PCI_VENDOR_ID_SCALEMP) ||
++ (read_pci_config_16(0, 0x1f, 0, PCI_DEVICE_ID) !=
++ PCI_DEVICE_ID_SCALEMP_VSMP_CTL))
+ return 0;
+
+ /* set vSMP magic bits to indicate vSMP capable kernel */
+ address = ioremap(read_pci_config(0, 0x1f, 0, PCI_BASE_ADDRESS_0), 8);
+ cap = readl(address);
+ ctl = readl(address + 4);
+- printk("vSMP CTL: capabilities:0x%08x control:0x%08x\n", cap, ctl);
++ printk(KERN_INFO "vSMP CTL: capabilities:0x%08x control:0x%08x\n",
++ cap, ctl);
+ if (cap & ctl & (1 << 4)) {
+ /* Turn on vSMP IRQ fastpath handling (see system.h) */
+ ctl &= ~(1 << 4);
+ writel(ctl, address + 4);
+ ctl = readl(address + 4);
+- printk("vSMP CTL: control set to:0x%08x\n", ctl);
++ printk(KERN_INFO "vSMP CTL: control set to:0x%08x\n", ctl);
+ }
+
+ iounmap(address);
+diff --git a/arch/x86/kernel/vsyscall-int80_32.S b/arch/x86/kernel/vsyscall-int80_32.S
+deleted file mode 100644
+index 103cab6..0000000
+--- a/arch/x86/kernel/vsyscall-int80_32.S
++++ /dev/null
+@@ -1,53 +0,0 @@
+-/*
+- * Code for the vsyscall page. This version uses the old int $0x80 method.
+- *
+- * NOTE:
+- * 1) __kernel_vsyscall _must_ be first in this page.
+- * 2) there are alignment constraints on this stub, see vsyscall-sigreturn.S
+- * for details.
+- */
-
-- for_each_possible_cpu(i)
-- INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));
+- .text
+- .globl __kernel_vsyscall
+- .type __kernel_vsyscall, at function
+-__kernel_vsyscall:
+-.LSTART_vsyscall:
+- int $0x80
+- ret
+-.LEND_vsyscall:
+- .size __kernel_vsyscall,.-.LSTART_vsyscall
+- .previous
-
-- open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
-- register_hotcpu_notifier(&blk_cpu_notifier);
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAMEDLSI:
+- .long .LENDCIEDLSI-.LSTARTCIEDLSI
+-.LSTARTCIEDLSI:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zR" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0x0c /* DW_CFA_def_cfa */
+- .uleb128 4
+- .uleb128 4
+- .byte 0x88 /* DW_CFA_offset, column 0x8 */
+- .uleb128 1
+- .align 4
+-.LENDCIEDLSI:
+- .long .LENDFDEDLSI-.LSTARTFDEDLSI /* Length FDE */
+-.LSTARTFDEDLSI:
+- .long .LSTARTFDEDLSI-.LSTARTFRAMEDLSI /* CIE pointer */
+- .long .LSTART_vsyscall-. /* PC-relative start address */
+- .long .LEND_vsyscall-.LSTART_vsyscall
+- .uleb128 0
+- .align 4
+-.LENDFDEDLSI:
+- .previous
-
-- blk_max_low_pfn = max_low_pfn - 1;
-- blk_max_pfn = max_pfn - 1;
+-/*
+- * Get the common code for the sigreturn entry points.
+- */
+-#include "vsyscall-sigreturn_32.S"
+diff --git a/arch/x86/kernel/vsyscall-note_32.S b/arch/x86/kernel/vsyscall-note_32.S
+deleted file mode 100644
+index fcf376a..0000000
+--- a/arch/x86/kernel/vsyscall-note_32.S
++++ /dev/null
+@@ -1,45 +0,0 @@
+-/*
+- * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
+- * Here we can supply some information useful to userland.
+- */
-
-- return 0;
--}
+-#include <linux/version.h>
+-#include <linux/elfnote.h>
-
+-/* Ideally this would use UTS_NAME, but using a quoted string here
+- doesn't work. Remember to change this when changing the
+- kernel's name. */
+-ELFNOTE_START(Linux, 0, "a")
+- .long LINUX_VERSION_CODE
+-ELFNOTE_END
+-
+-#ifdef CONFIG_XEN
+-/*
+- * Add a special note telling glibc's dynamic linker a fake hardware
+- * flavor that it will use to choose the search path for libraries in the
+- * same way it uses real hardware capabilities like "mmx".
+- * We supply "nosegneg" as the fake capability, to indicate that we
+- * do not like negative offsets in instructions using segment overrides,
+- * since we implement those inefficiently. This makes it possible to
+- * install libraries optimized to avoid those access patterns in someplace
+- * like /lib/i686/tls/nosegneg. Note that an /etc/ld.so.conf.d/file
+- * corresponding to the bits here is needed to make ldconfig work right.
+- * It should contain:
+- * hwcap 1 nosegneg
+- * to match the mapping of bit to name that we give here.
+- *
+- * At runtime, the fake hardware feature will be considered to be present
+- * if its bit is set in the mask word. So, we start with the mask 0, and
+- * at boot time we set VDSO_NOTE_NONEGSEG_BIT if running under Xen.
+- */
+-
+-#include "../../x86/xen/vdso.h" /* Defines VDSO_NOTE_NONEGSEG_BIT. */
+-
+- .globl VDSO_NOTE_MASK
+-ELFNOTE_START(GNU, 2, "a")
+- .long 1 /* ncaps */
+-VDSO_NOTE_MASK:
+- .long 0 /* mask */
+- .byte VDSO_NOTE_NONEGSEG_BIT; .asciz "nosegneg" /* bit, name */
+-ELFNOTE_END
+-#endif
+diff --git a/arch/x86/kernel/vsyscall-sigreturn_32.S b/arch/x86/kernel/vsyscall-sigreturn_32.S
+deleted file mode 100644
+index a92262f..0000000
+--- a/arch/x86/kernel/vsyscall-sigreturn_32.S
++++ /dev/null
+@@ -1,143 +0,0 @@
-/*
-- * IO Context helper functions
+- * Common code for the sigreturn entry points on the vsyscall page.
+- * So far this code is the same for both int80 and sysenter versions.
+- * This file is #include'd by vsyscall-*.S to define them after the
+- * vsyscall entry point. The kernel assumes that the addresses of these
+- * routines are constant for all vsyscall implementations.
- */
--void put_io_context(struct io_context *ioc)
--{
-- if (ioc == NULL)
-- return;
--
-- BUG_ON(atomic_read(&ioc->refcount) == 0);
-
-- if (atomic_dec_and_test(&ioc->refcount)) {
-- struct cfq_io_context *cic;
+-#include <asm/unistd.h>
+-#include <asm/asm-offsets.h>
-
-- rcu_read_lock();
-- if (ioc->aic && ioc->aic->dtor)
-- ioc->aic->dtor(ioc->aic);
-- if (ioc->cic_root.rb_node != NULL) {
-- struct rb_node *n = rb_first(&ioc->cic_root);
-
-- cic = rb_entry(n, struct cfq_io_context, rb_node);
-- cic->dtor(ioc);
-- }
-- rcu_read_unlock();
+-/* XXX
+- Should these be named "_sigtramp" or something?
+-*/
-
-- kmem_cache_free(iocontext_cachep, ioc);
-- }
--}
--EXPORT_SYMBOL(put_io_context);
+- .text
+- .org __kernel_vsyscall+32,0x90
+- .globl __kernel_sigreturn
+- .type __kernel_sigreturn, at function
+-__kernel_sigreturn:
+-.LSTART_sigreturn:
+- popl %eax /* XXX does this mean it needs unwind info? */
+- movl $__NR_sigreturn, %eax
+- int $0x80
+-.LEND_sigreturn:
+- .size __kernel_sigreturn,.-.LSTART_sigreturn
-
--/* Called by the exitting task */
--void exit_io_context(void)
--{
-- struct io_context *ioc;
-- struct cfq_io_context *cic;
+- .balign 32
+- .globl __kernel_rt_sigreturn
+- .type __kernel_rt_sigreturn, at function
+-__kernel_rt_sigreturn:
+-.LSTART_rt_sigreturn:
+- movl $__NR_rt_sigreturn, %eax
+- int $0x80
+-.LEND_rt_sigreturn:
+- .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
+- .balign 32
+- .previous
-
-- task_lock(current);
-- ioc = current->io_context;
-- current->io_context = NULL;
-- task_unlock(current);
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAMEDLSI1:
+- .long .LENDCIEDLSI1-.LSTARTCIEDLSI1
+-.LSTARTCIEDLSI1:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zRS" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0 /* DW_CFA_nop */
+- .align 4
+-.LENDCIEDLSI1:
+- .long .LENDFDEDLSI1-.LSTARTFDEDLSI1 /* Length FDE */
+-.LSTARTFDEDLSI1:
+- .long .LSTARTFDEDLSI1-.LSTARTFRAMEDLSI1 /* CIE pointer */
+- /* HACK: The dwarf2 unwind routines will subtract 1 from the
+- return address to get an address in the middle of the
+- presumed call instruction. Since we didn't get here via
+- a call, we need to include the nop before the real start
+- to make up for it. */
+- .long .LSTART_sigreturn-1-. /* PC-relative start address */
+- .long .LEND_sigreturn-.LSTART_sigreturn+1
+- .uleb128 0 /* Augmentation */
+- /* What follows are the instructions for the table generation.
+- We record the locations of each register saved. This is
+- complicated by the fact that the "CFA" is always assumed to
+- be the value of the stack pointer in the caller. This means
+- that we must define the CFA of this body of code to be the
+- saved value of the stack pointer in the sigcontext. Which
+- also means that there is no fixed relation to the other
+- saved registers, which means that we must use DW_CFA_expression
+- to compute their addresses. It also means that when we
+- adjust the stack with the popl, we have to do it all over again. */
+-
+-#define do_cfa_expr(offset) \
+- .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
+- .uleb128 1f-0f; /* length */ \
+-0: .byte 0x74; /* DW_OP_breg4 */ \
+- .sleb128 offset; /* offset */ \
+- .byte 0x06; /* DW_OP_deref */ \
+-1:
-
-- ioc->task = NULL;
-- if (ioc->aic && ioc->aic->exit)
-- ioc->aic->exit(ioc->aic);
-- if (ioc->cic_root.rb_node != NULL) {
-- cic = rb_entry(rb_first(&ioc->cic_root), struct cfq_io_context, rb_node);
-- cic->exit(ioc);
-- }
+-#define do_expr(regno, offset) \
+- .byte 0x10; /* DW_CFA_expression */ \
+- .uleb128 regno; /* regno */ \
+- .uleb128 1f-0f; /* length */ \
+-0: .byte 0x74; /* DW_OP_breg4 */ \
+- .sleb128 offset; /* offset */ \
+-1:
-
-- put_io_context(ioc);
--}
+- do_cfa_expr(SIGCONTEXT_esp+4)
+- do_expr(0, SIGCONTEXT_eax+4)
+- do_expr(1, SIGCONTEXT_ecx+4)
+- do_expr(2, SIGCONTEXT_edx+4)
+- do_expr(3, SIGCONTEXT_ebx+4)
+- do_expr(5, SIGCONTEXT_ebp+4)
+- do_expr(6, SIGCONTEXT_esi+4)
+- do_expr(7, SIGCONTEXT_edi+4)
+- do_expr(8, SIGCONTEXT_eip+4)
+-
+- .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
+-
+- do_cfa_expr(SIGCONTEXT_esp)
+- do_expr(0, SIGCONTEXT_eax)
+- do_expr(1, SIGCONTEXT_ecx)
+- do_expr(2, SIGCONTEXT_edx)
+- do_expr(3, SIGCONTEXT_ebx)
+- do_expr(5, SIGCONTEXT_ebp)
+- do_expr(6, SIGCONTEXT_esi)
+- do_expr(7, SIGCONTEXT_edi)
+- do_expr(8, SIGCONTEXT_eip)
+-
+- .align 4
+-.LENDFDEDLSI1:
+-
+- .long .LENDFDEDLSI2-.LSTARTFDEDLSI2 /* Length FDE */
+-.LSTARTFDEDLSI2:
+- .long .LSTARTFDEDLSI2-.LSTARTFRAMEDLSI1 /* CIE pointer */
+- /* HACK: See above wrt unwind library assumptions. */
+- .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
+- .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
+- .uleb128 0 /* Augmentation */
+- /* What follows are the instructions for the table generation.
+- We record the locations of each register saved. This is
+- slightly less complicated than the above, since we don't
+- modify the stack pointer in the process. */
+-
+- do_cfa_expr(RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_esp)
+- do_expr(0, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_eax)
+- do_expr(1, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_ecx)
+- do_expr(2, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_edx)
+- do_expr(3, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_ebx)
+- do_expr(5, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_ebp)
+- do_expr(6, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_esi)
+- do_expr(7, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_edi)
+- do_expr(8, RT_SIGFRAME_sigcontext-4 + SIGCONTEXT_eip)
-
+- .align 4
+-.LENDFDEDLSI2:
+- .previous
+diff --git a/arch/x86/kernel/vsyscall-sysenter_32.S b/arch/x86/kernel/vsyscall-sysenter_32.S
+deleted file mode 100644
+index ed879bf..0000000
+--- a/arch/x86/kernel/vsyscall-sysenter_32.S
++++ /dev/null
+@@ -1,122 +0,0 @@
-/*
-- * If the current task has no IO context then create one and initialise it.
-- * Otherwise, return its existing IO context.
+- * Code for the vsyscall page. This version uses the sysenter instruction.
- *
-- * This returned IO context doesn't have a specifically elevated refcount,
-- * but since the current task itself holds a reference, the context can be
-- * used in general code, so long as it stays within `current` context.
+- * NOTE:
+- * 1) __kernel_vsyscall _must_ be first in this page.
+- * 2) there are alignment constraints on this stub, see vsyscall-sigreturn.S
+- * for details.
- */
--static struct io_context *current_io_context(gfp_t gfp_flags, int node)
--{
-- struct task_struct *tsk = current;
-- struct io_context *ret;
--
-- ret = tsk->io_context;
-- if (likely(ret))
-- return ret;
--
-- ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
-- if (ret) {
-- atomic_set(&ret->refcount, 1);
-- ret->task = current;
-- ret->ioprio_changed = 0;
-- ret->last_waited = jiffies; /* doesn't matter... */
-- ret->nr_batch_requests = 0; /* because this is 0 */
-- ret->aic = NULL;
-- ret->cic_root.rb_node = NULL;
-- ret->ioc_data = NULL;
-- /* make sure set_task_ioprio() sees the settings above */
-- smp_wmb();
-- tsk->io_context = ret;
-- }
--
-- return ret;
--}
-
-/*
-- * If the current task has no IO context then create one and initialise it.
-- * If it does have a context, take a ref on it.
+- * The caller puts arg2 in %ecx, which gets pushed. The kernel will use
+- * %ecx itself for arg2. The pushing is because the sysexit instruction
+- * (found in entry.S) requires that we clobber %ecx with the desired %esp.
+- * User code might expect that %ecx is unclobbered though, as it would be
+- * for returning via the iret instruction, so we must push and pop.
- *
-- * This is always called in the context of the task which submitted the I/O.
+- * The caller puts arg3 in %edx, which the sysexit instruction requires
+- * for %eip. Thus, exactly as for arg2, we must push and pop.
+- *
+- * Arg6 is different. The caller puts arg6 in %ebp. Since the sysenter
+- * instruction clobbers %esp, the user's %esp won't even survive entry
+- * into the kernel. We store %esp in %ebp. Code in entry.S must fetch
+- * arg6 from the stack.
+- *
+- * You can not use this vsyscall for the clone() syscall because the
+- * three dwords on the parent stack do not get copied to the child.
- */
--struct io_context *get_io_context(gfp_t gfp_flags, int node)
--{
-- struct io_context *ret;
-- ret = current_io_context(gfp_flags, node);
-- if (likely(ret))
-- atomic_inc(&ret->refcount);
-- return ret;
--}
--EXPORT_SYMBOL(get_io_context);
--
--void copy_io_context(struct io_context **pdst, struct io_context **psrc)
--{
-- struct io_context *src = *psrc;
-- struct io_context *dst = *pdst;
--
-- if (src) {
-- BUG_ON(atomic_read(&src->refcount) == 0);
-- atomic_inc(&src->refcount);
-- put_io_context(dst);
-- *pdst = src;
-- }
--}
--EXPORT_SYMBOL(copy_io_context);
+- .text
+- .globl __kernel_vsyscall
+- .type __kernel_vsyscall, at function
+-__kernel_vsyscall:
+-.LSTART_vsyscall:
+- push %ecx
+-.Lpush_ecx:
+- push %edx
+-.Lpush_edx:
+- push %ebp
+-.Lenter_kernel:
+- movl %esp,%ebp
+- sysenter
+-
+- /* 7: align return point with nop's to make disassembly easier */
+- .space 7,0x90
+-
+- /* 14: System call restart point is here! (SYSENTER_RETURN-2) */
+- jmp .Lenter_kernel
+- /* 16: System call normal return point is here! */
+- .globl SYSENTER_RETURN /* Symbol used by sysenter.c */
+-SYSENTER_RETURN:
+- pop %ebp
+-.Lpop_ebp:
+- pop %edx
+-.Lpop_edx:
+- pop %ecx
+-.Lpop_ecx:
+- ret
+-.LEND_vsyscall:
+- .size __kernel_vsyscall,.-.LSTART_vsyscall
+- .previous
-
--void swap_io_context(struct io_context **ioc1, struct io_context **ioc2)
--{
-- struct io_context *temp;
-- temp = *ioc1;
-- *ioc1 = *ioc2;
-- *ioc2 = temp;
--}
--EXPORT_SYMBOL(swap_io_context);
+- .section .eh_frame,"a", at progbits
+-.LSTARTFRAMEDLSI:
+- .long .LENDCIEDLSI-.LSTARTCIEDLSI
+-.LSTARTCIEDLSI:
+- .long 0 /* CIE ID */
+- .byte 1 /* Version number */
+- .string "zR" /* NUL-terminated augmentation string */
+- .uleb128 1 /* Code alignment factor */
+- .sleb128 -4 /* Data alignment factor */
+- .byte 8 /* Return address register column */
+- .uleb128 1 /* Augmentation value length */
+- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
+- .byte 0x0c /* DW_CFA_def_cfa */
+- .uleb128 4
+- .uleb128 4
+- .byte 0x88 /* DW_CFA_offset, column 0x8 */
+- .uleb128 1
+- .align 4
+-.LENDCIEDLSI:
+- .long .LENDFDEDLSI-.LSTARTFDEDLSI /* Length FDE */
+-.LSTARTFDEDLSI:
+- .long .LSTARTFDEDLSI-.LSTARTFRAMEDLSI /* CIE pointer */
+- .long .LSTART_vsyscall-. /* PC-relative start address */
+- .long .LEND_vsyscall-.LSTART_vsyscall
+- .uleb128 0
+- /* What follows are the instructions for the table generation.
+- We have to record all changes of the stack pointer. */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpush_ecx-.LSTART_vsyscall
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x08 /* RA at offset 8 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpush_edx-.Lpush_ecx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x0c /* RA at offset 12 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lenter_kernel-.Lpush_edx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x10 /* RA at offset 16 now */
+- .byte 0x85, 0x04 /* DW_CFA_offset %ebp -16 */
+- /* Finally the epilogue. */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_ebp-.Lenter_kernel
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x0c /* RA at offset 12 now */
+- .byte 0xc5 /* DW_CFA_restore %ebp */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_edx-.Lpop_ebp
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x08 /* RA at offset 8 now */
+- .byte 0x04 /* DW_CFA_advance_loc4 */
+- .long .Lpop_ecx-.Lpop_edx
+- .byte 0x0e /* DW_CFA_def_cfa_offset */
+- .byte 0x04 /* RA at offset 4 now */
+- .align 4
+-.LENDFDEDLSI:
+- .previous
-
-/*
-- * sysfs parts below
+- * Get the common code for the sigreturn entry points.
- */
--struct queue_sysfs_entry {
-- struct attribute attr;
-- ssize_t (*show)(struct request_queue *, char *);
-- ssize_t (*store)(struct request_queue *, const char *, size_t);
--};
--
--static ssize_t
--queue_var_show(unsigned int var, char *page)
--{
-- return sprintf(page, "%d\n", var);
--}
--
--static ssize_t
--queue_var_store(unsigned long *var, const char *page, size_t count)
--{
-- char *p = (char *) page;
--
-- *var = simple_strtoul(p, &p, 10);
-- return count;
--}
--
--static ssize_t queue_requests_show(struct request_queue *q, char *page)
--{
-- return queue_var_show(q->nr_requests, (page));
--}
--
--static ssize_t
--queue_requests_store(struct request_queue *q, const char *page, size_t count)
--{
-- struct request_list *rl = &q->rq;
-- unsigned long nr;
-- int ret = queue_var_store(&nr, page, count);
-- if (nr < BLKDEV_MIN_RQ)
-- nr = BLKDEV_MIN_RQ;
--
-- spin_lock_irq(q->queue_lock);
-- q->nr_requests = nr;
-- blk_queue_congestion_threshold(q);
--
-- if (rl->count[READ] >= queue_congestion_on_threshold(q))
-- blk_set_queue_congested(q, READ);
-- else if (rl->count[READ] < queue_congestion_off_threshold(q))
-- blk_clear_queue_congested(q, READ);
--
-- if (rl->count[WRITE] >= queue_congestion_on_threshold(q))
-- blk_set_queue_congested(q, WRITE);
-- else if (rl->count[WRITE] < queue_congestion_off_threshold(q))
-- blk_clear_queue_congested(q, WRITE);
--
-- if (rl->count[READ] >= q->nr_requests) {
-- blk_set_queue_full(q, READ);
-- } else if (rl->count[READ]+1 <= q->nr_requests) {
-- blk_clear_queue_full(q, READ);
-- wake_up(&rl->wait[READ]);
-- }
--
-- if (rl->count[WRITE] >= q->nr_requests) {
-- blk_set_queue_full(q, WRITE);
-- } else if (rl->count[WRITE]+1 <= q->nr_requests) {
-- blk_clear_queue_full(q, WRITE);
-- wake_up(&rl->wait[WRITE]);
-- }
-- spin_unlock_irq(q->queue_lock);
-- return ret;
--}
--
--static ssize_t queue_ra_show(struct request_queue *q, char *page)
--{
-- int ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10);
--
-- return queue_var_show(ra_kb, (page));
--}
--
--static ssize_t
--queue_ra_store(struct request_queue *q, const char *page, size_t count)
--{
-- unsigned long ra_kb;
-- ssize_t ret = queue_var_store(&ra_kb, page, count);
--
-- spin_lock_irq(q->queue_lock);
-- q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
-- spin_unlock_irq(q->queue_lock);
+-#include "vsyscall-sigreturn_32.S"
+diff --git a/arch/x86/kernel/vsyscall_32.S b/arch/x86/kernel/vsyscall_32.S
+deleted file mode 100644
+index a5ab3dc..0000000
+--- a/arch/x86/kernel/vsyscall_32.S
++++ /dev/null
+@@ -1,15 +0,0 @@
+-#include <linux/init.h>
-
-- return ret;
--}
+-__INITDATA
-
--static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
--{
-- int max_sectors_kb = q->max_sectors >> 1;
+- .globl vsyscall_int80_start, vsyscall_int80_end
+-vsyscall_int80_start:
+- .incbin "arch/x86/kernel/vsyscall-int80_32.so"
+-vsyscall_int80_end:
+-
+- .globl vsyscall_sysenter_start, vsyscall_sysenter_end
+-vsyscall_sysenter_start:
+- .incbin "arch/x86/kernel/vsyscall-sysenter_32.so"
+-vsyscall_sysenter_end:
-
-- return queue_var_show(max_sectors_kb, (page));
--}
+-__FINIT
+diff --git a/arch/x86/kernel/vsyscall_32.lds.S b/arch/x86/kernel/vsyscall_32.lds.S
+deleted file mode 100644
+index 4a8b0ed..0000000
+--- a/arch/x86/kernel/vsyscall_32.lds.S
++++ /dev/null
+@@ -1,67 +0,0 @@
+-/*
+- * Linker script for vsyscall DSO. The vsyscall page is an ELF shared
+- * object prelinked to its virtual address, and with only one read-only
+- * segment (that fits in one page). This script controls its layout.
+- */
+-#include <asm/asm-offsets.h>
-
--static ssize_t
--queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
+-SECTIONS
-{
-- unsigned long max_sectors_kb,
-- max_hw_sectors_kb = q->max_hw_sectors >> 1,
-- page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
-- ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
--
-- if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
-- return -EINVAL;
-- /*
-- * Take the queue lock to update the readahead and max_sectors
-- * values synchronously:
-- */
-- spin_lock_irq(q->queue_lock);
-- q->max_sectors = max_sectors_kb << 1;
-- spin_unlock_irq(q->queue_lock);
+- . = VDSO_PRELINK_asm + SIZEOF_HEADERS;
-
-- return ret;
--}
+- .hash : { *(.hash) } :text
+- .gnu.hash : { *(.gnu.hash) }
+- .dynsym : { *(.dynsym) }
+- .dynstr : { *(.dynstr) }
+- .gnu.version : { *(.gnu.version) }
+- .gnu.version_d : { *(.gnu.version_d) }
+- .gnu.version_r : { *(.gnu.version_r) }
-
--static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
--{
-- int max_hw_sectors_kb = q->max_hw_sectors >> 1;
+- /* This linker script is used both with -r and with -shared.
+- For the layouts to match, we need to skip more than enough
+- space for the dynamic symbol table et al. If this amount
+- is insufficient, ld -shared will barf. Just increase it here. */
+- . = VDSO_PRELINK_asm + 0x400;
-
-- return queue_var_show(max_hw_sectors_kb, (page));
+- .text : { *(.text) } :text =0x90909090
+- .note : { *(.note.*) } :text :note
+- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+- .eh_frame : { KEEP (*(.eh_frame)) } :text
+- .dynamic : { *(.dynamic) } :text :dynamic
+- .useless : {
+- *(.got.plt) *(.got)
+- *(.data .data.* .gnu.linkonce.d.*)
+- *(.dynbss)
+- *(.bss .bss.* .gnu.linkonce.b.*)
+- } :text
-}
-
--
--static struct queue_sysfs_entry queue_requests_entry = {
-- .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
-- .show = queue_requests_show,
-- .store = queue_requests_store,
--};
--
--static struct queue_sysfs_entry queue_ra_entry = {
-- .attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
-- .show = queue_ra_show,
-- .store = queue_ra_store,
--};
--
--static struct queue_sysfs_entry queue_max_sectors_entry = {
-- .attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
-- .show = queue_max_sectors_show,
-- .store = queue_max_sectors_store,
--};
--
--static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
-- .attr = {.name = "max_hw_sectors_kb", .mode = S_IRUGO },
-- .show = queue_max_hw_sectors_show,
--};
--
--static struct queue_sysfs_entry queue_iosched_entry = {
-- .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR },
-- .show = elv_iosched_show,
-- .store = elv_iosched_store,
--};
--
--static struct attribute *default_attrs[] = {
-- &queue_requests_entry.attr,
-- &queue_ra_entry.attr,
-- &queue_max_hw_sectors_entry.attr,
-- &queue_max_sectors_entry.attr,
-- &queue_iosched_entry.attr,
-- NULL,
--};
--
--#define to_queue(atr) container_of((atr), struct queue_sysfs_entry, attr)
--
--static ssize_t
--queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
+-/*
+- * We must supply the ELF program headers explicitly to get just one
+- * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+- */
+-PHDRS
-{
-- struct queue_sysfs_entry *entry = to_queue(attr);
-- struct request_queue *q =
-- container_of(kobj, struct request_queue, kobj);
-- ssize_t res;
--
-- if (!entry->show)
-- return -EIO;
-- mutex_lock(&q->sysfs_lock);
-- if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
-- mutex_unlock(&q->sysfs_lock);
-- return -ENOENT;
-- }
-- res = entry->show(q, page);
-- mutex_unlock(&q->sysfs_lock);
-- return res;
+- text PT_LOAD FILEHDR PHDRS FLAGS(5); /* PF_R|PF_X */
+- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+- note PT_NOTE FLAGS(4); /* PF_R */
+- eh_frame_hdr 0x6474e550; /* PT_GNU_EH_FRAME, but ld doesn't match the name */
-}
-
--static ssize_t
--queue_attr_store(struct kobject *kobj, struct attribute *attr,
-- const char *page, size_t length)
+-/*
+- * This controls what symbols we export from the DSO.
+- */
+-VERSION
-{
-- struct queue_sysfs_entry *entry = to_queue(attr);
-- struct request_queue *q = container_of(kobj, struct request_queue, kobj);
--
-- ssize_t res;
+- LINUX_2.5 {
+- global:
+- __kernel_vsyscall;
+- __kernel_sigreturn;
+- __kernel_rt_sigreturn;
-
-- if (!entry->store)
-- return -EIO;
-- mutex_lock(&q->sysfs_lock);
-- if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
-- mutex_unlock(&q->sysfs_lock);
-- return -ENOENT;
-- }
-- res = entry->store(q, page, length);
-- mutex_unlock(&q->sysfs_lock);
-- return res;
+- local: *;
+- };
-}
-
--static struct sysfs_ops queue_sysfs_ops = {
-- .show = queue_attr_show,
-- .store = queue_attr_store,
--};
--
--static struct kobj_type queue_ktype = {
-- .sysfs_ops = &queue_sysfs_ops,
-- .default_attrs = default_attrs,
-- .release = blk_release_queue,
--};
--
--int blk_register_queue(struct gendisk *disk)
--{
-- int ret;
--
-- struct request_queue *q = disk->queue;
--
-- if (!q || !q->request_fn)
-- return -ENXIO;
--
-- q->kobj.parent = kobject_get(&disk->kobj);
--
-- ret = kobject_add(&q->kobj);
-- if (ret < 0)
-- return ret;
--
-- kobject_uevent(&q->kobj, KOBJ_ADD);
+-/* The ELF entry point can be used to set the AT_SYSINFO value. */
+-ENTRY(__kernel_vsyscall);
+diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
+index ad4005c..3f82427 100644
+--- a/arch/x86/kernel/vsyscall_64.c
++++ b/arch/x86/kernel/vsyscall_64.c
+@@ -43,7 +43,7 @@
+ #include <asm/vgtod.h>
+
+ #define __vsyscall(nr) __attribute__ ((unused,__section__(".vsyscall_" #nr)))
+-#define __syscall_clobber "r11","rcx","memory"
++#define __syscall_clobber "r11","cx","memory"
+ #define __pa_vsymbol(x) \
+ ({unsigned long v; \
+ extern char __vsyscall_0; \
+@@ -190,7 +190,7 @@ time_t __vsyscall(1) vtime(time_t *t)
+ long __vsyscall(2)
+ vgetcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache)
+ {
+- unsigned int dummy, p;
++ unsigned int p;
+ unsigned long j = 0;
+
+ /* Fast cache - only recompute value once per jiffies and avoid
+@@ -205,7 +205,7 @@ vgetcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache)
+ p = tcache->blob[1];
+ } else if (__vgetcpu_mode == VGETCPU_RDTSCP) {
+ /* Load per CPU data from RDTSCP */
+- rdtscp(dummy, dummy, p);
++ native_read_tscp(&p);
+ } else {
+ /* Load per CPU data from GDT */
+ asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+@@ -297,7 +297,7 @@ static void __cpuinit vsyscall_set_cpu(int cpu)
+ /* Store cpu number in limit so that it can be loaded quickly
+ in user space in vgetcpu.
+ 12 bits for the CPU and 8 bits for the node. */
+- d = (unsigned long *)(cpu_gdt(cpu) + GDT_ENTRY_PER_CPU);
++ d = (unsigned long *)(get_cpu_gdt_table(cpu) + GDT_ENTRY_PER_CPU);
+ *d = 0x0f40000000000ULL;
+ *d |= cpu;
+ *d |= (node & 0xf) << 12;
+@@ -319,7 +319,7 @@ cpu_vsyscall_notifier(struct notifier_block *n, unsigned long action, void *arg)
+ return NOTIFY_DONE;
+ }
+
+-static void __init map_vsyscall(void)
++void __init map_vsyscall(void)
+ {
+ extern char __vsyscall_0;
+ unsigned long physaddr_page0 = __pa_symbol(&__vsyscall_0);
+@@ -335,7 +335,6 @@ static int __init vsyscall_init(void)
+ BUG_ON((unsigned long) &vtime != VSYSCALL_ADDR(__NR_vtime));
+ BUG_ON((VSYSCALL_ADDR(0) != __fix_to_virt(VSYSCALL_FIRST_PAGE)));
+ BUG_ON((unsigned long) &vgetcpu != VSYSCALL_ADDR(__NR_vgetcpu));
+- map_vsyscall();
+ #ifdef CONFIG_SYSCTL
+ register_sysctl_table(kernel_root_table2);
+ #endif
+diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
+index 77c25b3..a66e9c1 100644
+--- a/arch/x86/kernel/x8664_ksyms_64.c
++++ b/arch/x86/kernel/x8664_ksyms_64.c
+@@ -8,6 +8,7 @@
+ #include <asm/processor.h>
+ #include <asm/uaccess.h>
+ #include <asm/pgtable.h>
++#include <asm/desc.h>
+
+ EXPORT_SYMBOL(kernel_thread);
+
+@@ -34,13 +35,6 @@ EXPORT_SYMBOL(__copy_from_user_inatomic);
+ EXPORT_SYMBOL(copy_page);
+ EXPORT_SYMBOL(clear_page);
+
+-#ifdef CONFIG_SMP
+-extern void __write_lock_failed(rwlock_t *rw);
+-extern void __read_lock_failed(rwlock_t *rw);
+-EXPORT_SYMBOL(__write_lock_failed);
+-EXPORT_SYMBOL(__read_lock_failed);
+-#endif
+-
+ /* Export string functions. We normally rely on gcc builtin for most of these,
+ but gcc sometimes decides not to inline them. */
+ #undef memcpy
+@@ -60,3 +54,8 @@ EXPORT_SYMBOL(init_level4_pgt);
+ EXPORT_SYMBOL(load_gs_index);
+
+ EXPORT_SYMBOL(_proxy_pda);
++
++#ifdef CONFIG_PARAVIRT
++/* Virtualized guests may want to use it */
++EXPORT_SYMBOL_GPL(cpu_gdt_descr);
++#endif
+diff --git a/arch/x86/lguest/Kconfig b/arch/x86/lguest/Kconfig
+index 19626ac..964dfa3 100644
+--- a/arch/x86/lguest/Kconfig
++++ b/arch/x86/lguest/Kconfig
+@@ -1,6 +1,7 @@
+ config LGUEST_GUEST
+ bool "Lguest guest support"
+ select PARAVIRT
++ depends on X86_32
+ depends on !X86_PAE
+ depends on !(X86_VISWS || X86_VOYAGER)
+ select VIRTIO
+diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
+index 92c5611..a633737 100644
+--- a/arch/x86/lguest/boot.c
++++ b/arch/x86/lguest/boot.c
+@@ -175,8 +175,8 @@ static void lguest_leave_lazy_mode(void)
+ * check there when it wants to deliver an interrupt.
+ */
+
+-/* save_flags() is expected to return the processor state (ie. "eflags"). The
+- * eflags word contains all kind of stuff, but in practice Linux only cares
++/* save_flags() is expected to return the processor state (ie. "flags"). The
++ * flags word contains all kind of stuff, but in practice Linux only cares
+ * about the interrupt flag. Our "save_flags()" just returns that. */
+ static unsigned long save_fl(void)
+ {
+@@ -217,19 +217,20 @@ static void irq_enable(void)
+ * address of the handler, and... well, who cares? The Guest just asks the
+ * Host to make the change anyway, because the Host controls the real IDT.
+ */
+-static void lguest_write_idt_entry(struct desc_struct *dt,
+- int entrynum, u32 low, u32 high)
++static void lguest_write_idt_entry(gate_desc *dt,
++ int entrynum, const gate_desc *g)
+ {
++ u32 *desc = (u32 *)g;
+ /* Keep the local copy up to date. */
+- write_dt_entry(dt, entrynum, low, high);
++ native_write_idt_entry(dt, entrynum, g);
+ /* Tell Host about this new entry. */
+- hcall(LHCALL_LOAD_IDT_ENTRY, entrynum, low, high);
++ hcall(LHCALL_LOAD_IDT_ENTRY, entrynum, desc[0], desc[1]);
+ }
+
+ /* Changing to a different IDT is very rare: we keep the IDT up-to-date every
+ * time it is written, so we can simply loop through all entries and tell the
+ * Host about them. */
+-static void lguest_load_idt(const struct Xgt_desc_struct *desc)
++static void lguest_load_idt(const struct desc_ptr *desc)
+ {
+ unsigned int i;
+ struct desc_struct *idt = (void *)desc->address;
+@@ -252,7 +253,7 @@ static void lguest_load_idt(const struct Xgt_desc_struct *desc)
+ * hypercall and use that repeatedly to load a new IDT. I don't think it
+ * really matters, but wouldn't it be nice if they were the same?
+ */
+-static void lguest_load_gdt(const struct Xgt_desc_struct *desc)
++static void lguest_load_gdt(const struct desc_ptr *desc)
+ {
+ BUG_ON((desc->size+1)/8 != GDT_ENTRIES);
+ hcall(LHCALL_LOAD_GDT, __pa(desc->address), GDT_ENTRIES, 0);
+@@ -261,10 +262,10 @@ static void lguest_load_gdt(const struct Xgt_desc_struct *desc)
+ /* For a single GDT entry which changes, we do the lazy thing: alter our GDT,
+ * then tell the Host to reload the entire thing. This operation is so rare
+ * that this naive implementation is reasonable. */
+-static void lguest_write_gdt_entry(struct desc_struct *dt,
+- int entrynum, u32 low, u32 high)
++static void lguest_write_gdt_entry(struct desc_struct *dt, int entrynum,
++ const void *desc, int type)
+ {
+- write_dt_entry(dt, entrynum, low, high);
++ native_write_gdt_entry(dt, entrynum, desc, type);
+ hcall(LHCALL_LOAD_GDT, __pa(dt), GDT_ENTRIES, 0);
+ }
+
+@@ -323,30 +324,30 @@ static void lguest_load_tr_desc(void)
+ * anyone (including userspace) can just use the raw "cpuid" instruction and
+ * the Host won't even notice since it isn't privileged. So we try not to get
+ * too worked up about it. */
+-static void lguest_cpuid(unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
++static void lguest_cpuid(unsigned int *ax, unsigned int *bx,
++ unsigned int *cx, unsigned int *dx)
+ {
+- int function = *eax;
++ int function = *ax;
+
+- native_cpuid(eax, ebx, ecx, edx);
++ native_cpuid(ax, bx, cx, dx);
+ switch (function) {
+ case 1: /* Basic feature request. */
+ /* We only allow kernel to see SSE3, CMPXCHG16B and SSSE3 */
+- *ecx &= 0x00002201;
++ *cx &= 0x00002201;
+ /* SSE, SSE2, FXSR, MMX, CMOV, CMPXCHG8B, FPU. */
+- *edx &= 0x07808101;
++ *dx &= 0x07808101;
+ /* The Host can do a nice optimization if it knows that the
+ * kernel mappings (addresses above 0xC0000000 or whatever
+ * PAGE_OFFSET is set to) haven't changed. But Linux calls
+ * flush_tlb_user() for both user and kernel mappings unless
+ * the Page Global Enable (PGE) feature bit is set. */
+- *edx |= 0x00002000;
++ *dx |= 0x00002000;
+ break;
+ case 0x80000000:
+ /* Futureproof this a little: if they ask how much extended
+ * processor information there is, limit it to known fields. */
+- if (*eax > 0x80000008)
+- *eax = 0x80000008;
++ if (*ax > 0x80000008)
++ *ax = 0x80000008;
+ break;
+ }
+ }
+@@ -755,10 +756,10 @@ static void lguest_time_init(void)
+ * segment), the privilege level (we're privilege level 1, the Host is 0 and
+ * will not tolerate us trying to use that), the stack pointer, and the number
+ * of pages in the stack. */
+-static void lguest_load_esp0(struct tss_struct *tss,
++static void lguest_load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+ {
+- lazy_hcall(LHCALL_SET_STACK, __KERNEL_DS|0x1, thread->esp0,
++ lazy_hcall(LHCALL_SET_STACK, __KERNEL_DS|0x1, thread->sp0,
+ THREAD_SIZE/PAGE_SIZE);
+ }
+
+@@ -788,11 +789,11 @@ static void lguest_wbinvd(void)
+ * code qualifies for Advanced. It will also never interrupt anything. It
+ * does, however, allow us to get through the Linux boot code. */
+ #ifdef CONFIG_X86_LOCAL_APIC
+-static void lguest_apic_write(unsigned long reg, unsigned long v)
++static void lguest_apic_write(unsigned long reg, u32 v)
+ {
+ }
+
+-static unsigned long lguest_apic_read(unsigned long reg)
++static u32 lguest_apic_read(unsigned long reg)
+ {
+ return 0;
+ }
+@@ -957,7 +958,7 @@ __init void lguest_init(void)
+ pv_cpu_ops.cpuid = lguest_cpuid;
+ pv_cpu_ops.load_idt = lguest_load_idt;
+ pv_cpu_ops.iret = lguest_iret;
+- pv_cpu_ops.load_esp0 = lguest_load_esp0;
++ pv_cpu_ops.load_sp0 = lguest_load_sp0;
+ pv_cpu_ops.load_tr_desc = lguest_load_tr_desc;
+ pv_cpu_ops.set_ldt = lguest_set_ldt;
+ pv_cpu_ops.load_tls = lguest_load_tls;
+diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
+index 329da27..4876182 100644
+--- a/arch/x86/lib/Makefile
++++ b/arch/x86/lib/Makefile
+@@ -1,5 +1,27 @@
++#
++# Makefile for x86 specific library files.
++#
++
++obj-$(CONFIG_SMP) := msr-on-cpu.o
++
++lib-y := delay_$(BITS).o
++lib-y += usercopy_$(BITS).o getuser_$(BITS).o putuser_$(BITS).o
++lib-y += memcpy_$(BITS).o
++
+ ifeq ($(CONFIG_X86_32),y)
+-include ${srctree}/arch/x86/lib/Makefile_32
++ lib-y += checksum_32.o
++ lib-y += strstr_32.o
++ lib-y += bitops_32.o semaphore_32.o string_32.o
++
++ lib-$(CONFIG_X86_USE_3DNOW) += mmx_32.o
+ else
+-include ${srctree}/arch/x86/lib/Makefile_64
++ obj-y += io_64.o iomap_copy_64.o
++
++ CFLAGS_csum-partial_64.o := -funroll-loops
++
++ lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o
++ lib-y += thunk_64.o clear_page_64.o copy_page_64.o
++ lib-y += bitstr_64.o bitops_64.o
++ lib-y += memmove_64.o memset_64.o
++ lib-y += copy_user_64.o rwlock_64.o copy_user_nocache_64.o
+ endif
+diff --git a/arch/x86/lib/Makefile_32 b/arch/x86/lib/Makefile_32
+deleted file mode 100644
+index 98d1f1e..0000000
+--- a/arch/x86/lib/Makefile_32
++++ /dev/null
+@@ -1,11 +0,0 @@
+-#
+-# Makefile for i386-specific library files..
+-#
-
-- ret = elv_register_queue(q);
-- if (ret) {
-- kobject_uevent(&q->kobj, KOBJ_REMOVE);
-- kobject_del(&q->kobj);
-- return ret;
-- }
-
-- return 0;
--}
+-lib-y = checksum_32.o delay_32.o usercopy_32.o getuser_32.o putuser_32.o memcpy_32.o strstr_32.o \
+- bitops_32.o semaphore_32.o string_32.o
-
--void blk_unregister_queue(struct gendisk *disk)
--{
-- struct request_queue *q = disk->queue;
+-lib-$(CONFIG_X86_USE_3DNOW) += mmx_32.o
-
-- if (q && q->request_fn) {
-- elv_unregister_queue(q);
+-obj-$(CONFIG_SMP) += msr-on-cpu.o
+diff --git a/arch/x86/lib/Makefile_64 b/arch/x86/lib/Makefile_64
+deleted file mode 100644
+index bbabad3..0000000
+--- a/arch/x86/lib/Makefile_64
++++ /dev/null
+@@ -1,13 +0,0 @@
+-#
+-# Makefile for x86_64-specific library files.
+-#
-
-- kobject_uevent(&q->kobj, KOBJ_REMOVE);
-- kobject_del(&q->kobj);
-- kobject_put(&disk->kobj);
-- }
--}
-diff --git a/crypto/Kconfig b/crypto/Kconfig
-index 083d2e1..c3166a1 100644
---- a/crypto/Kconfig
-+++ b/crypto/Kconfig
-@@ -24,10 +24,6 @@ config CRYPTO_ALGAPI
- help
- This option provides the API for cryptographic algorithms.
-
--config CRYPTO_ABLKCIPHER
-- tristate
-- select CRYPTO_BLKCIPHER
+-CFLAGS_csum-partial_64.o := -funroll-loops
-
- config CRYPTO_AEAD
- tristate
- select CRYPTO_ALGAPI
-@@ -36,6 +32,15 @@ config CRYPTO_BLKCIPHER
- tristate
- select CRYPTO_ALGAPI
-
-+config CRYPTO_SEQIV
-+ tristate "Sequence Number IV Generator"
-+ select CRYPTO_AEAD
-+ select CRYPTO_BLKCIPHER
-+ help
-+ This IV generator generates an IV based on a sequence number by
-+ xoring it with a salt. This algorithm is mainly useful for CTR
-+ and similar modes.
-+
- config CRYPTO_HASH
- tristate
- select CRYPTO_ALGAPI
-@@ -91,7 +96,7 @@ config CRYPTO_SHA1
- SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
-
- config CRYPTO_SHA256
-- tristate "SHA256 digest algorithm"
-+ tristate "SHA224 and SHA256 digest algorithm"
- select CRYPTO_ALGAPI
- help
- SHA256 secure hash standard (DFIPS 180-2).
-@@ -99,6 +104,9 @@ config CRYPTO_SHA256
- This version of SHA implements a 256 bit hash with 128 bits of
- security against collision attacks.
-
-+ This code also includes SHA-224, a 224 bit hash with 112 bits
-+ of security against collision attacks.
-+
- config CRYPTO_SHA512
- tristate "SHA384 and SHA512 digest algorithms"
- select CRYPTO_ALGAPI
-@@ -195,9 +203,34 @@ config CRYPTO_XTS
- key size 256, 384 or 512 bits. This implementation currently
- can't handle a sectorsize which is not a multiple of 16 bytes.
+-obj-y := io_64.o iomap_copy_64.o
+-obj-$(CONFIG_SMP) += msr-on-cpu.o
+-
+-lib-y := csum-partial_64.o csum-copy_64.o csum-wrappers_64.o delay_64.o \
+- usercopy_64.o getuser_64.o putuser_64.o \
+- thunk_64.o clear_page_64.o copy_page_64.o bitstr_64.o bitops_64.o
+-lib-y += memcpy_64.o memmove_64.o memset_64.o copy_user_64.o rwlock_64.o copy_user_nocache_64.o
+diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
+index 8ac51b8..37756b6 100644
+--- a/arch/x86/lib/memcpy_32.c
++++ b/arch/x86/lib/memcpy_32.c
+@@ -34,8 +34,8 @@ void *memmove(void *dest, const void *src, size_t n)
+ "cld"
+ : "=&c" (d0), "=&S" (d1), "=&D" (d2)
+ :"0" (n),
+- "1" (n-1+(const char *)src),
+- "2" (n-1+(char *)dest)
++ "1" (n-1+src),
++ "2" (n-1+dest)
+ :"memory");
+ }
+ return dest;
+diff --git a/arch/x86/lib/memmove_64.c b/arch/x86/lib/memmove_64.c
+index 751ebae..80175e4 100644
+--- a/arch/x86/lib/memmove_64.c
++++ b/arch/x86/lib/memmove_64.c
+@@ -11,8 +11,8 @@ void *memmove(void * dest,const void *src,size_t count)
+ if (dest < src) {
+ return memcpy(dest,src,count);
+ } else {
+- char *p = (char *) dest + count;
+- char *s = (char *) src + count;
++ char *p = dest + count;
++ const char *s = src + count;
+ while (count--)
+ *--p = *--s;
+ }
+diff --git a/arch/x86/lib/semaphore_32.S b/arch/x86/lib/semaphore_32.S
+index 444fba4..3899bd3 100644
+--- a/arch/x86/lib/semaphore_32.S
++++ b/arch/x86/lib/semaphore_32.S
+@@ -29,7 +29,7 @@
+ * registers (%eax, %edx and %ecx) except %eax whish is either a return
+ * value or just clobbered..
+ */
+- .section .sched.text
++ .section .sched.text, "ax"
+ ENTRY(__down_failed)
+ CFI_STARTPROC
+ FRAME
+@@ -49,7 +49,7 @@ ENTRY(__down_failed)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__down_failed)
++ ENDPROC(__down_failed)
+
+ ENTRY(__down_failed_interruptible)
+ CFI_STARTPROC
+@@ -70,7 +70,7 @@ ENTRY(__down_failed_interruptible)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__down_failed_interruptible)
++ ENDPROC(__down_failed_interruptible)
+
+ ENTRY(__down_failed_trylock)
+ CFI_STARTPROC
+@@ -91,7 +91,7 @@ ENTRY(__down_failed_trylock)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__down_failed_trylock)
++ ENDPROC(__down_failed_trylock)
+
+ ENTRY(__up_wakeup)
+ CFI_STARTPROC
+@@ -112,7 +112,7 @@ ENTRY(__up_wakeup)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__up_wakeup)
++ ENDPROC(__up_wakeup)
+
+ /*
+ * rw spinlock fallbacks
+@@ -132,7 +132,7 @@ ENTRY(__write_lock_failed)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__write_lock_failed)
++ ENDPROC(__write_lock_failed)
+
+ ENTRY(__read_lock_failed)
+ CFI_STARTPROC
+@@ -148,7 +148,7 @@ ENTRY(__read_lock_failed)
+ ENDFRAME
+ ret
+ CFI_ENDPROC
+- END(__read_lock_failed)
++ ENDPROC(__read_lock_failed)
+
+ #endif
+
+@@ -170,7 +170,7 @@ ENTRY(call_rwsem_down_read_failed)
+ CFI_ADJUST_CFA_OFFSET -4
+ ret
+ CFI_ENDPROC
+- END(call_rwsem_down_read_failed)
++ ENDPROC(call_rwsem_down_read_failed)
+
+ ENTRY(call_rwsem_down_write_failed)
+ CFI_STARTPROC
+@@ -182,7 +182,7 @@ ENTRY(call_rwsem_down_write_failed)
+ CFI_ADJUST_CFA_OFFSET -4
+ ret
+ CFI_ENDPROC
+- END(call_rwsem_down_write_failed)
++ ENDPROC(call_rwsem_down_write_failed)
+
+ ENTRY(call_rwsem_wake)
+ CFI_STARTPROC
+@@ -196,7 +196,7 @@ ENTRY(call_rwsem_wake)
+ CFI_ADJUST_CFA_OFFSET -4
+ 1: ret
+ CFI_ENDPROC
+- END(call_rwsem_wake)
++ ENDPROC(call_rwsem_wake)
+
+ /* Fix up special calling conventions */
+ ENTRY(call_rwsem_downgrade_wake)
+@@ -214,6 +214,6 @@ ENTRY(call_rwsem_downgrade_wake)
+ CFI_ADJUST_CFA_OFFSET -4
+ ret
+ CFI_ENDPROC
+- END(call_rwsem_downgrade_wake)
++ ENDPROC(call_rwsem_downgrade_wake)
+
+ #endif
+diff --git a/arch/x86/lib/thunk_64.S b/arch/x86/lib/thunk_64.S
+index 6ea73f3..8b92d42 100644
+--- a/arch/x86/lib/thunk_64.S
++++ b/arch/x86/lib/thunk_64.S
+@@ -33,7 +33,7 @@
+ .endm
+
-+config CRYPTO_CTR
-+ tristate "CTR support"
-+ select CRYPTO_BLKCIPHER
-+ select CRYPTO_SEQIV
-+ select CRYPTO_MANAGER
-+ help
-+ CTR: Counter mode
-+ This block cipher algorithm is required for IPSec.
+- .section .sched.text
++ .section .sched.text, "ax"
+ #ifdef CONFIG_RWSEM_XCHGADD_ALGORITHM
+ thunk rwsem_down_read_failed_thunk,rwsem_down_read_failed
+ thunk rwsem_down_write_failed_thunk,rwsem_down_write_failed
+diff --git a/arch/x86/mach-rdc321x/Makefile b/arch/x86/mach-rdc321x/Makefile
+new file mode 100644
+index 0000000..1faac81
+--- /dev/null
++++ b/arch/x86/mach-rdc321x/Makefile
+@@ -0,0 +1,5 @@
++#
++# Makefile for the RDC321x specific parts of the kernel
++#
++obj-$(CONFIG_X86_RDC321X) := gpio.o platform.o wdt.o
+
-+config CRYPTO_GCM
-+ tristate "GCM/GMAC support"
-+ select CRYPTO_CTR
-+ select CRYPTO_AEAD
-+ select CRYPTO_GF128MUL
-+ help
-+ Support for Galois/Counter Mode (GCM) and Galois Message
-+ Authentication Code (GMAC). Required for IPSec.
+diff --git a/arch/x86/mach-rdc321x/gpio.c b/arch/x86/mach-rdc321x/gpio.c
+new file mode 100644
+index 0000000..0312691
+--- /dev/null
++++ b/arch/x86/mach-rdc321x/gpio.c
+@@ -0,0 +1,91 @@
++/*
++ * Copyright (C) 2007, OpenWrt.org, Florian Fainelli <florian at openwrt.org>
++ * RDC321x architecture specific GPIO support
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the
++ * Free Software Foundation; either version 2 of the License, or (at your
++ * option) any later version.
++ */
+
-+config CRYPTO_CCM
-+ tristate "CCM support"
-+ select CRYPTO_CTR
-+ select CRYPTO_AEAD
-+ help
-+ Support for Counter with CBC MAC. Required for IPsec.
++#include <linux/autoconf.h>
++#include <linux/init.h>
++#include <linux/io.h>
++#include <linux/types.h>
++#include <linux/module.h>
++#include <linux/delay.h>
+
- config CRYPTO_CRYPTD
- tristate "Software async crypto daemon"
-- select CRYPTO_ABLKCIPHER
-+ select CRYPTO_BLKCIPHER
- select CRYPTO_MANAGER
- help
- This is a generic software asynchronous crypto daemon that
-@@ -320,6 +353,7 @@ config CRYPTO_AES_586
- tristate "AES cipher algorithms (i586)"
- depends on (X86 || UML_X86) && !64BIT
- select CRYPTO_ALGAPI
-+ select CRYPTO_AES
- help
- AES cipher algorithms (FIPS-197). AES uses the Rijndael
- algorithm.
-@@ -341,6 +375,7 @@ config CRYPTO_AES_X86_64
- tristate "AES cipher algorithms (x86_64)"
- depends on (X86 || UML_X86) && 64BIT
- select CRYPTO_ALGAPI
-+ select CRYPTO_AES
- help
- AES cipher algorithms (FIPS-197). AES uses the Rijndael
- algorithm.
-@@ -441,6 +476,46 @@ config CRYPTO_SEED
- See also:
- <http://www.kisa.or.kr/kisa/seed/jsp/seed_eng.jsp>
-
-+config CRYPTO_SALSA20
-+ tristate "Salsa20 stream cipher algorithm (EXPERIMENTAL)"
-+ depends on EXPERIMENTAL
-+ select CRYPTO_BLKCIPHER
-+ help
-+ Salsa20 stream cipher algorithm.
++#include <asm/mach-rdc321x/rdc321x_defs.h>
+
-+ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
-+ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++static inline int rdc_gpio_is_valid(unsigned gpio)
++{
++ return (gpio <= RDC_MAX_GPIO);
++}
+
-+ The Salsa20 stream cipher algorithm is designed by Daniel J.
-+ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
++static unsigned int rdc_gpio_read(unsigned gpio)
++{
++ unsigned int val;
+
-+config CRYPTO_SALSA20_586
-+ tristate "Salsa20 stream cipher algorithm (i586) (EXPERIMENTAL)"
-+ depends on (X86 || UML_X86) && !64BIT
-+ depends on EXPERIMENTAL
-+ select CRYPTO_BLKCIPHER
-+ help
-+ Salsa20 stream cipher algorithm.
++ val = 0x80000000 | (7 << 11) | ((gpio&0x20?0x84:0x48));
++ outl(val, RDC3210_CFGREG_ADDR);
++ udelay(10);
++ val = inl(RDC3210_CFGREG_DATA);
++ val |= (0x1 << (gpio & 0x1F));
++ outl(val, RDC3210_CFGREG_DATA);
++ udelay(10);
++ val = 0x80000000 | (7 << 11) | ((gpio&0x20?0x88:0x4C));
++ outl(val, RDC3210_CFGREG_ADDR);
++ udelay(10);
++ val = inl(RDC3210_CFGREG_DATA);
+
-+ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
-+ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++ return val;
++}
+
-+ The Salsa20 stream cipher algorithm is designed by Daniel J.
-+ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
++static void rdc_gpio_write(unsigned int val)
++{
++ if (val) {
++ outl(val, RDC3210_CFGREG_DATA);
++ udelay(10);
++ }
++}
+
-+config CRYPTO_SALSA20_X86_64
-+ tristate "Salsa20 stream cipher algorithm (x86_64) (EXPERIMENTAL)"
-+ depends on (X86 || UML_X86) && 64BIT
-+ depends on EXPERIMENTAL
-+ select CRYPTO_BLKCIPHER
-+ help
-+ Salsa20 stream cipher algorithm.
++int rdc_gpio_get_value(unsigned gpio)
++{
++ if (rdc_gpio_is_valid(gpio))
++ return (int)rdc_gpio_read(gpio);
++ else
++ return -EINVAL;
++}
++EXPORT_SYMBOL(rdc_gpio_get_value);
+
-+ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
-+ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++void rdc_gpio_set_value(unsigned gpio, int value)
++{
++ unsigned int val;
+
-+ The Salsa20 stream cipher algorithm is designed by Daniel J.
-+ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
-
- config CRYPTO_DEFLATE
- tristate "Deflate compression algorithm"
-@@ -491,6 +566,7 @@ config CRYPTO_TEST
- tristate "Testing module"
- depends on m
- select CRYPTO_ALGAPI
-+ select CRYPTO_AEAD
- help
- Quick & dirty crypto test module.
-
-@@ -498,10 +574,19 @@ config CRYPTO_AUTHENC
- tristate "Authenc support"
- select CRYPTO_AEAD
- select CRYPTO_MANAGER
-+ select CRYPTO_HASH
- help
- Authenc: Combined mode wrapper for IPsec.
- This is required for IPSec.
-
-+config CRYPTO_LZO
-+ tristate "LZO compression algorithm"
-+ select CRYPTO_ALGAPI
-+ select LZO_COMPRESS
-+ select LZO_DECOMPRESS
-+ help
-+ This is the LZO algorithm.
++ if (!rdc_gpio_is_valid(gpio))
++ return;
+
- source "drivers/crypto/Kconfig"
-
- endif # if CRYPTO
-diff --git a/crypto/Makefile b/crypto/Makefile
-index 43c2a0d..48c7583 100644
---- a/crypto/Makefile
-+++ b/crypto/Makefile
-@@ -8,9 +8,14 @@ crypto_algapi-$(CONFIG_PROC_FS) += proc.o
- crypto_algapi-objs := algapi.o scatterwalk.o $(crypto_algapi-y)
- obj-$(CONFIG_CRYPTO_ALGAPI) += crypto_algapi.o
-
--obj-$(CONFIG_CRYPTO_ABLKCIPHER) += ablkcipher.o
- obj-$(CONFIG_CRYPTO_AEAD) += aead.o
--obj-$(CONFIG_CRYPTO_BLKCIPHER) += blkcipher.o
++ val = rdc_gpio_read(gpio);
+
-+crypto_blkcipher-objs := ablkcipher.o
-+crypto_blkcipher-objs += blkcipher.o
-+obj-$(CONFIG_CRYPTO_BLKCIPHER) += crypto_blkcipher.o
-+obj-$(CONFIG_CRYPTO_BLKCIPHER) += chainiv.o
-+obj-$(CONFIG_CRYPTO_BLKCIPHER) += eseqiv.o
-+obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
-
- crypto_hash-objs := hash.o
- obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o
-@@ -32,6 +37,9 @@ obj-$(CONFIG_CRYPTO_CBC) += cbc.o
- obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
- obj-$(CONFIG_CRYPTO_LRW) += lrw.o
- obj-$(CONFIG_CRYPTO_XTS) += xts.o
-+obj-$(CONFIG_CRYPTO_CTR) += ctr.o
-+obj-$(CONFIG_CRYPTO_GCM) += gcm.o
-+obj-$(CONFIG_CRYPTO_CCM) += ccm.o
- obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
- obj-$(CONFIG_CRYPTO_DES) += des_generic.o
- obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
-@@ -48,10 +56,12 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
- obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
- obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
- obj-$(CONFIG_CRYPTO_SEED) += seed.o
-+obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
- obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
- obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
- obj-$(CONFIG_CRYPTO_CRC32C) += crc32c.o
- obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o
-+obj-$(CONFIG_CRYPTO_LZO) += lzo.o
-
- obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
-
-diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
-index 2731acb..3bcb099 100644
---- a/crypto/ablkcipher.c
-+++ b/crypto/ablkcipher.c
-@@ -13,14 +13,18 @@
- *
- */
-
--#include <crypto/algapi.h>
--#include <linux/errno.h>
-+#include <crypto/internal/skcipher.h>
-+#include <linux/err.h>
- #include <linux/init.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-+#include <linux/rtnetlink.h>
-+#include <linux/sched.h>
- #include <linux/slab.h>
- #include <linux/seq_file.h>
-
-+#include "internal.h"
++ if (value)
++ val &= ~(0x1 << (gpio & 0x1F));
++ else
++ val |= (0x1 << (gpio & 0x1F));
+
- static int setkey_unaligned(struct crypto_ablkcipher *tfm, const u8 *key,
- unsigned int keylen)
- {
-@@ -66,6 +70,16 @@ static unsigned int crypto_ablkcipher_ctxsize(struct crypto_alg *alg, u32 type,
- return alg->cra_ctxsize;
- }
-
-+int skcipher_null_givencrypt(struct skcipher_givcrypt_request *req)
-+{
-+ return crypto_ablkcipher_encrypt(&req->creq);
++ rdc_gpio_write(val);
+}
++EXPORT_SYMBOL(rdc_gpio_set_value);
+
-+int skcipher_null_givdecrypt(struct skcipher_givcrypt_request *req)
++int rdc_gpio_direction_input(unsigned gpio)
+{
-+ return crypto_ablkcipher_decrypt(&req->creq);
++ return 0;
+}
++EXPORT_SYMBOL(rdc_gpio_direction_input);
+
- static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
- u32 mask)
- {
-@@ -78,6 +92,11 @@ static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
- crt->setkey = setkey;
- crt->encrypt = alg->encrypt;
- crt->decrypt = alg->decrypt;
-+ if (!alg->ivsize) {
-+ crt->givencrypt = skcipher_null_givencrypt;
-+ crt->givdecrypt = skcipher_null_givdecrypt;
-+ }
-+ crt->base = __crypto_ablkcipher_cast(tfm);
- crt->ivsize = alg->ivsize;
-
- return 0;
-@@ -90,10 +109,13 @@ static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg *alg)
- struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
-
- seq_printf(m, "type : ablkcipher\n");
-+ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
-+ "yes" : "no");
- seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
- seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize);
- seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize);
- seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize);
-+ seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: "<default>");
- }
-
- const struct crypto_type crypto_ablkcipher_type = {
-@@ -105,5 +127,220 @@ const struct crypto_type crypto_ablkcipher_type = {
- };
- EXPORT_SYMBOL_GPL(crypto_ablkcipher_type);
-
-+static int no_givdecrypt(struct skcipher_givcrypt_request *req)
++int rdc_gpio_direction_output(unsigned gpio, int value)
+{
-+ return -ENOSYS;
++ return 0;
+}
++EXPORT_SYMBOL(rdc_gpio_direction_output);
+
-+static int crypto_init_givcipher_ops(struct crypto_tfm *tfm, u32 type,
-+ u32 mask)
-+{
-+ struct ablkcipher_alg *alg = &tfm->__crt_alg->cra_ablkcipher;
-+ struct ablkcipher_tfm *crt = &tfm->crt_ablkcipher;
+
-+ if (alg->ivsize > PAGE_SIZE / 8)
-+ return -EINVAL;
+diff --git a/arch/x86/mach-rdc321x/platform.c b/arch/x86/mach-rdc321x/platform.c
+new file mode 100644
+index 0000000..dda6024
+--- /dev/null
++++ b/arch/x86/mach-rdc321x/platform.c
+@@ -0,0 +1,68 @@
++/*
++ * Generic RDC321x platform devices
++ *
++ * Copyright (C) 2007 Florian Fainelli <florian at openwrt.org>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * as published by the Free Software Foundation; either version 2
++ * of the License, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the
++ * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
++ * Boston, MA 02110-1301, USA.
++ *
++ */
+
-+ crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ?
-+ alg->setkey : setkey;
-+ crt->encrypt = alg->encrypt;
-+ crt->decrypt = alg->decrypt;
-+ crt->givencrypt = alg->givencrypt;
-+ crt->givdecrypt = alg->givdecrypt ?: no_givdecrypt;
-+ crt->base = __crypto_ablkcipher_cast(tfm);
-+ crt->ivsize = alg->ivsize;
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/list.h>
++#include <linux/device.h>
++#include <linux/platform_device.h>
++#include <linux/version.h>
++#include <linux/leds.h>
+
-+ return 0;
-+}
++#include <asm/gpio.h>
+
-+static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
-+ __attribute__ ((unused));
-+static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
-+{
-+ struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
++/* LEDS */
++static struct gpio_led default_leds[] = {
++ { .name = "rdc:dmz", .gpio = 1, },
++};
+
-+ seq_printf(m, "type : givcipher\n");
-+ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
-+ "yes" : "no");
-+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
-+ seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize);
-+ seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize);
-+ seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize);
-+ seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: "<built-in>");
-+}
++static struct gpio_led_platform_data rdc321x_led_data = {
++ .num_leds = ARRAY_SIZE(default_leds),
++ .leds = default_leds,
++};
+
-+const struct crypto_type crypto_givcipher_type = {
-+ .ctxsize = crypto_ablkcipher_ctxsize,
-+ .init = crypto_init_givcipher_ops,
-+#ifdef CONFIG_PROC_FS
-+ .show = crypto_givcipher_show,
-+#endif
++static struct platform_device rdc321x_leds = {
++ .name = "leds-gpio",
++ .id = -1,
++ .dev = {
++ .platform_data = &rdc321x_led_data,
++ }
+};
-+EXPORT_SYMBOL_GPL(crypto_givcipher_type);
+
-+const char *crypto_default_geniv(const struct crypto_alg *alg)
++/* Watchdog */
++static struct platform_device rdc321x_wdt = {
++ .name = "rdc321x-wdt",
++ .id = -1,
++ .num_resources = 0,
++};
++
++static struct platform_device *rdc321x_devs[] = {
++ &rdc321x_leds,
++ &rdc321x_wdt
++};
++
++static int __init rdc_board_setup(void)
+{
-+ return alg->cra_flags & CRYPTO_ALG_ASYNC ? "eseqiv" : "chainiv";
++ return platform_add_devices(rdc321x_devs, ARRAY_SIZE(rdc321x_devs));
+}
+
-+static int crypto_givcipher_default(struct crypto_alg *alg, u32 type, u32 mask)
-+{
-+ struct rtattr *tb[3];
-+ struct {
-+ struct rtattr attr;
-+ struct crypto_attr_type data;
-+ } ptype;
-+ struct {
-+ struct rtattr attr;
-+ struct crypto_attr_alg data;
-+ } palg;
-+ struct crypto_template *tmpl;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *larval;
-+ const char *geniv;
-+ int err;
++arch_initcall(rdc_board_setup);
+diff --git a/arch/x86/mach-rdc321x/wdt.c b/arch/x86/mach-rdc321x/wdt.c
+new file mode 100644
+index 0000000..ec5625a
+--- /dev/null
++++ b/arch/x86/mach-rdc321x/wdt.c
+@@ -0,0 +1,275 @@
++/*
++ * RDC321x watchdog driver
++ *
++ * Copyright (C) 2007 Florian Fainelli <florian at openwrt.org>
++ *
++ * This driver is highly inspired from the cpu5_wdt driver
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
++ *
++ */
+
-+ larval = crypto_larval_lookup(alg->cra_driver_name,
-+ CRYPTO_ALG_TYPE_GIVCIPHER,
-+ CRYPTO_ALG_TYPE_MASK);
-+ err = PTR_ERR(larval);
-+ if (IS_ERR(larval))
-+ goto out;
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/types.h>
++#include <linux/errno.h>
++#include <linux/miscdevice.h>
++#include <linux/fs.h>
++#include <linux/init.h>
++#include <linux/ioport.h>
++#include <linux/timer.h>
++#include <linux/completion.h>
++#include <linux/jiffies.h>
++#include <linux/platform_device.h>
++#include <linux/watchdog.h>
++#include <linux/io.h>
++#include <linux/uaccess.h>
+
-+ err = -EAGAIN;
-+ if (!crypto_is_larval(larval))
-+ goto drop_larval;
++#include <asm/mach-rdc321x/rdc321x_defs.h>
+
-+ ptype.attr.rta_len = sizeof(ptype);
-+ ptype.attr.rta_type = CRYPTOA_TYPE;
-+ ptype.data.type = type | CRYPTO_ALG_GENIV;
-+ /* GENIV tells the template that we're making a default geniv. */
-+ ptype.data.mask = mask | CRYPTO_ALG_GENIV;
-+ tb[0] = &ptype.attr;
++#define RDC_WDT_MASK 0x80000000 /* Mask */
++#define RDC_WDT_EN 0x00800000 /* Enable bit */
++#define RDC_WDT_WTI 0x00200000 /* Generate CPU reset/NMI/WDT on timeout */
++#define RDC_WDT_RST 0x00100000 /* Reset bit */
++#define RDC_WDT_WIF 0x00040000 /* WDT IRQ Flag */
++#define RDC_WDT_IRT 0x00000100 /* IRQ Routing table */
++#define RDC_WDT_CNT 0x00000001 /* WDT count */
+
-+ palg.attr.rta_len = sizeof(palg);
-+ palg.attr.rta_type = CRYPTOA_ALG;
-+ /* Must use the exact name to locate ourselves. */
-+ memcpy(palg.data.name, alg->cra_driver_name, CRYPTO_MAX_ALG_NAME);
-+ tb[1] = &palg.attr;
++#define RDC_CLS_TMR 0x80003844 /* Clear timer */
+
-+ tb[2] = NULL;
++#define RDC_WDT_INTERVAL (HZ/10+1)
+
-+ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-+ CRYPTO_ALG_TYPE_BLKCIPHER)
-+ geniv = alg->cra_blkcipher.geniv;
-+ else
-+ geniv = alg->cra_ablkcipher.geniv;
++int nowayout = WATCHDOG_NOWAYOUT;
++module_param(nowayout, int, 0);
++MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
+
-+ if (!geniv)
-+ geniv = crypto_default_geniv(alg);
++static int ticks = 1000;
+
-+ tmpl = crypto_lookup_template(geniv);
-+ err = -ENOENT;
-+ if (!tmpl)
-+ goto kill_larval;
++/* some device data */
+
-+ inst = tmpl->alloc(tb);
-+ err = PTR_ERR(inst);
-+ if (IS_ERR(inst))
-+ goto put_tmpl;
++static struct {
++ struct completion stop;
++ volatile int running;
++ struct timer_list timer;
++ volatile int queue;
++ int default_ticks;
++ unsigned long inuse;
++} rdc321x_wdt_device;
+
-+ if ((err = crypto_register_instance(tmpl, inst))) {
-+ tmpl->free(inst);
-+ goto put_tmpl;
++/* generic helper functions */
++
++static void rdc321x_wdt_trigger(unsigned long unused)
++{
++ if (rdc321x_wdt_device.running)
++ ticks--;
++
++ /* keep watchdog alive */
++ outl(RDC_WDT_EN|inl(RDC3210_CFGREG_DATA), RDC3210_CFGREG_DATA);
++
++ /* requeue?? */
++ if (rdc321x_wdt_device.queue && ticks)
++ mod_timer(&rdc321x_wdt_device.timer,
++ jiffies + RDC_WDT_INTERVAL);
++ else {
++ /* ticks doesn't matter anyway */
++ complete(&rdc321x_wdt_device.stop);
+ }
+
-+ /* Redo the lookup to use the instance we just registered. */
-+ err = -EAGAIN;
++}
+
-+put_tmpl:
-+ crypto_tmpl_put(tmpl);
-+kill_larval:
-+ crypto_larval_kill(larval);
-+drop_larval:
-+ crypto_mod_put(larval);
-+out:
-+ crypto_mod_put(alg);
-+ return err;
++static void rdc321x_wdt_reset(void)
++{
++ ticks = rdc321x_wdt_device.default_ticks;
+}
+
-+static struct crypto_alg *crypto_lookup_skcipher(const char *name, u32 type,
-+ u32 mask)
++static void rdc321x_wdt_start(void)
+{
-+ struct crypto_alg *alg;
++ if (!rdc321x_wdt_device.queue) {
++ rdc321x_wdt_device.queue = 1;
+
-+ alg = crypto_alg_mod_lookup(name, type, mask);
-+ if (IS_ERR(alg))
-+ return alg;
++ /* Clear the timer */
++ outl(RDC_CLS_TMR, RDC3210_CFGREG_ADDR);
+
-+ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-+ CRYPTO_ALG_TYPE_GIVCIPHER)
-+ return alg;
++ /* Enable watchdog and set the timeout to 81.92 us */
++ outl(RDC_WDT_EN|RDC_WDT_CNT, RDC3210_CFGREG_DATA);
+
-+ if (!((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-+ CRYPTO_ALG_TYPE_BLKCIPHER ? alg->cra_blkcipher.ivsize :
-+ alg->cra_ablkcipher.ivsize))
-+ return alg;
++ mod_timer(&rdc321x_wdt_device.timer,
++ jiffies + RDC_WDT_INTERVAL);
++ }
+
-+ return ERR_PTR(crypto_givcipher_default(alg, type, mask));
++ /* if process dies, counter is not decremented */
++ rdc321x_wdt_device.running++;
+}
+
-+int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn, const char *name,
-+ u32 type, u32 mask)
++static int rdc321x_wdt_stop(void)
+{
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ type = crypto_skcipher_type(type);
-+ mask = crypto_skcipher_mask(mask);
++ if (rdc321x_wdt_device.running)
++ rdc321x_wdt_device.running = 0;
+
-+ alg = crypto_lookup_skcipher(name, type, mask);
-+ if (IS_ERR(alg))
-+ return PTR_ERR(alg);
++ ticks = rdc321x_wdt_device.default_ticks;
+
-+ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
-+ crypto_mod_put(alg);
-+ return err;
++ return -EIO;
+}
-+EXPORT_SYMBOL_GPL(crypto_grab_skcipher);
-+
-+struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
-+ u32 type, u32 mask)
-+{
-+ struct crypto_tfm *tfm;
-+ int err;
+
-+ type = crypto_skcipher_type(type);
-+ mask = crypto_skcipher_mask(mask);
++/* filesystem operations */
+
-+ for (;;) {
-+ struct crypto_alg *alg;
++static int rdc321x_wdt_open(struct inode *inode, struct file *file)
++{
++ if (test_and_set_bit(0, &rdc321x_wdt_device.inuse))
++ return -EBUSY;
+
-+ alg = crypto_lookup_skcipher(alg_name, type, mask);
-+ if (IS_ERR(alg)) {
-+ err = PTR_ERR(alg);
-+ goto err;
-+ }
++ return nonseekable_open(inode, file);
++}
+
-+ tfm = __crypto_alloc_tfm(alg, type, mask);
-+ if (!IS_ERR(tfm))
-+ return __crypto_ablkcipher_cast(tfm);
++static int rdc321x_wdt_release(struct inode *inode, struct file *file)
++{
++ clear_bit(0, &rdc321x_wdt_device.inuse);
++ return 0;
++}
+
-+ crypto_mod_put(alg);
-+ err = PTR_ERR(tfm);
++static int rdc321x_wdt_ioctl(struct inode *inode, struct file *file,
++ unsigned int cmd, unsigned long arg)
++{
++ void __user *argp = (void __user *)arg;
++ unsigned int value;
++ static struct watchdog_info ident = {
++ .options = WDIOF_CARDRESET,
++ .identity = "RDC321x WDT",
++ };
+
-+err:
-+ if (err != -EAGAIN)
-+ break;
-+ if (signal_pending(current)) {
-+ err = -EINTR;
++ switch (cmd) {
++ case WDIOC_KEEPALIVE:
++ rdc321x_wdt_reset();
++ break;
++ case WDIOC_GETSTATUS:
++ /* Read the value from the DATA register */
++ value = inl(RDC3210_CFGREG_DATA);
++ if (copy_to_user(argp, &value, sizeof(int)))
++ return -EFAULT;
++ break;
++ case WDIOC_GETSUPPORT:
++ if (copy_to_user(argp, &ident, sizeof(ident)))
++ return -EFAULT;
++ break;
++ case WDIOC_SETOPTIONS:
++ if (copy_from_user(&value, argp, sizeof(int)))
++ return -EFAULT;
++ switch (value) {
++ case WDIOS_ENABLECARD:
++ rdc321x_wdt_start();
+ break;
++ case WDIOS_DISABLECARD:
++ return rdc321x_wdt_stop();
++ default:
++ return -EINVAL;
+ }
++ break;
++ default:
++ return -ENOTTY;
+ }
++ return 0;
++}
+
-+ return ERR_PTR(err);
++static ssize_t rdc321x_wdt_write(struct file *file, const char __user *buf,
++ size_t count, loff_t *ppos)
++{
++ if (!count)
++ return -EIO;
++
++ rdc321x_wdt_reset();
++
++ return count;
+}
-+EXPORT_SYMBOL_GPL(crypto_alloc_ablkcipher);
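The removed `crypto_alloc_ablkcipher()` above loops forever, retrying the algorithm lookup whenever transform allocation fails with `-EAGAIN` (instantiation still in progress) and bailing out only on a hard error or a pending signal. The control flow can be sketched outside the kernel like this; the lookup/alloc stubs are hypothetical stand-ins, and only the retry skeleton mirrors the code above:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for the lookup + __crypto_alloc_tfm() pair:
 * fail with -EAGAIN a fixed number of times, then succeed. */
static int attempts_left = 2;

static int try_alloc(void **tfm)
{
	if (attempts_left-- > 0)
		return -EAGAIN;		/* instantiation still in progress */
	*tfm = (void *)0x1;		/* pretend allocation succeeded */
	return 0;
}

static int signal_pending_stub(void) { return 0; }	/* no signal here */

/* Mirrors the for(;;) retry skeleton of the removed crypto_alloc_ablkcipher(). */
static int alloc_with_retry(void **tfm)
{
	int err;

	for (;;) {
		err = try_alloc(tfm);
		if (!err)
			return 0;	/* success: stop retrying */
		if (err != -EAGAIN)
			break;		/* hard error: give up */
		if (signal_pending_stub()) {
			err = -EINTR;	/* interrupted by a signal */
			break;
		}
	}
	return err;
}
```

The signal check is what keeps the loop killable: without it, a lookup that keeps yielding `-EAGAIN` would spin unconditionally.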
+
- MODULE_LICENSE("GPL");
- MODULE_DESCRIPTION("Asynchronous block chaining cipher type");
-diff --git a/crypto/aead.c b/crypto/aead.c
-index 84a3501..3a6f3f5 100644
---- a/crypto/aead.c
-+++ b/crypto/aead.c
-@@ -12,14 +12,17 @@
- *
- */
-
--#include <crypto/algapi.h>
--#include <linux/errno.h>
-+#include <crypto/internal/aead.h>
-+#include <linux/err.h>
- #include <linux/init.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-+#include <linux/rtnetlink.h>
- #include <linux/slab.h>
- #include <linux/seq_file.h>
-
-+#include "internal.h"
-+
- static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
- unsigned int keylen)
- {
-@@ -53,25 +56,54 @@ static int setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
- return aead->setkey(tfm, key, keylen);
- }
-
-+int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
-+{
-+ struct aead_tfm *crt = crypto_aead_crt(tfm);
-+ int err;
-+
-+ if (authsize > crypto_aead_alg(tfm)->maxauthsize)
-+ return -EINVAL;
-+
-+ if (crypto_aead_alg(tfm)->setauthsize) {
-+ err = crypto_aead_alg(tfm)->setauthsize(crt->base, authsize);
-+ if (err)
-+ return err;
-+ }
-+
-+ crypto_aead_crt(crt->base)->authsize = authsize;
-+ crt->authsize = authsize;
-+ return 0;
-+}
-+EXPORT_SYMBOL_GPL(crypto_aead_setauthsize);
-+
- static unsigned int crypto_aead_ctxsize(struct crypto_alg *alg, u32 type,
- u32 mask)
- {
- return alg->cra_ctxsize;
- }
-
-+static int no_givcrypt(struct aead_givcrypt_request *req)
-+{
-+ return -ENOSYS;
-+}
-+
- static int crypto_init_aead_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
- {
- struct aead_alg *alg = &tfm->__crt_alg->cra_aead;
- struct aead_tfm *crt = &tfm->crt_aead;
-
-- if (max(alg->authsize, alg->ivsize) > PAGE_SIZE / 8)
-+ if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
- return -EINVAL;
-
-- crt->setkey = setkey;
-+ crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ?
-+ alg->setkey : setkey;
- crt->encrypt = alg->encrypt;
- crt->decrypt = alg->decrypt;
-+ crt->givencrypt = alg->givencrypt ?: no_givcrypt;
-+ crt->givdecrypt = alg->givdecrypt ?: no_givcrypt;
-+ crt->base = __crypto_aead_cast(tfm);
- crt->ivsize = alg->ivsize;
-- crt->authsize = alg->authsize;
-+ crt->authsize = alg->maxauthsize;
-
- return 0;
- }
-@@ -83,9 +115,12 @@ static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
- struct aead_alg *aead = &alg->cra_aead;
-
- seq_printf(m, "type : aead\n");
-+ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
-+ "yes" : "no");
- seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
- seq_printf(m, "ivsize : %u\n", aead->ivsize);
-- seq_printf(m, "authsize : %u\n", aead->authsize);
-+ seq_printf(m, "maxauthsize : %u\n", aead->maxauthsize);
-+ seq_printf(m, "geniv : %s\n", aead->geniv ?: "<built-in>");
- }
-
- const struct crypto_type crypto_aead_type = {
-@@ -97,5 +132,358 @@ const struct crypto_type crypto_aead_type = {
- };
- EXPORT_SYMBOL_GPL(crypto_aead_type);
-
-+static int aead_null_givencrypt(struct aead_givcrypt_request *req)
-+{
-+ return crypto_aead_encrypt(&req->areq);
-+}
-+
-+static int aead_null_givdecrypt(struct aead_givcrypt_request *req)
-+{
-+ return crypto_aead_decrypt(&req->areq);
-+}
-+
-+static int crypto_init_nivaead_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
-+{
-+ struct aead_alg *alg = &tfm->__crt_alg->cra_aead;
-+ struct aead_tfm *crt = &tfm->crt_aead;
-+
-+ if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
-+ return -EINVAL;
-+
-+ crt->setkey = setkey;
-+ crt->encrypt = alg->encrypt;
-+ crt->decrypt = alg->decrypt;
-+ if (!alg->ivsize) {
-+ crt->givencrypt = aead_null_givencrypt;
-+ crt->givdecrypt = aead_null_givdecrypt;
-+ }
-+ crt->base = __crypto_aead_cast(tfm);
-+ crt->ivsize = alg->ivsize;
-+ crt->authsize = alg->maxauthsize;
-+
-+ return 0;
-+}
-+
-+static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
-+ __attribute__ ((unused));
-+static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
-+{
-+ struct aead_alg *aead = &alg->cra_aead;
-+
-+ seq_printf(m, "type : nivaead\n");
-+ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
-+ "yes" : "no");
-+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
-+ seq_printf(m, "ivsize : %u\n", aead->ivsize);
-+ seq_printf(m, "maxauthsize : %u\n", aead->maxauthsize);
-+ seq_printf(m, "geniv : %s\n", aead->geniv);
-+}
-+
-+const struct crypto_type crypto_nivaead_type = {
-+ .ctxsize = crypto_aead_ctxsize,
-+ .init = crypto_init_nivaead_ops,
-+#ifdef CONFIG_PROC_FS
-+ .show = crypto_nivaead_show,
-+#endif
++static const struct file_operations rdc321x_wdt_fops = {
++ .owner = THIS_MODULE,
++ .llseek = no_llseek,
++ .ioctl = rdc321x_wdt_ioctl,
++ .open = rdc321x_wdt_open,
++ .write = rdc321x_wdt_write,
++ .release = rdc321x_wdt_release,
+};
-+EXPORT_SYMBOL_GPL(crypto_nivaead_type);
-+
-+static int crypto_grab_nivaead(struct crypto_aead_spawn *spawn,
-+ const char *name, u32 type, u32 mask)
-+{
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ type |= CRYPTO_ALG_TYPE_AEAD;
-+ mask |= CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV;
-+
-+ alg = crypto_alg_mod_lookup(name, type, mask);
-+ if (IS_ERR(alg))
-+ return PTR_ERR(alg);
+
-+ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
-+ crypto_mod_put(alg);
-+ return err;
-+}
++static struct miscdevice rdc321x_wdt_misc = {
++ .minor = WATCHDOG_MINOR,
++ .name = "watchdog",
++ .fops = &rdc321x_wdt_fops,
++};
+
-+struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
-+ struct rtattr **tb, u32 type,
-+ u32 mask)
++static int __devinit rdc321x_wdt_probe(struct platform_device *pdev)
+{
-+ const char *name;
-+ struct crypto_aead_spawn *spawn;
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *alg;
+ int err;
+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
-+
-+ if ((algt->type ^ (CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV)) &
-+ algt->mask)
-+ return ERR_PTR(-EINVAL);
-+
-+ name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(name);
-+ if (IS_ERR(name))
-+ return ERR_PTR(err);
-+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-+ if (!inst)
-+ return ERR_PTR(-ENOMEM);
-+
-+ spawn = crypto_instance_ctx(inst);
-+
-+ /* Ignore async algorithms if necessary. */
-+ mask |= crypto_requires_sync(algt->type, algt->mask);
-+
-+ crypto_set_aead_spawn(spawn, inst);
-+ err = crypto_grab_nivaead(spawn, name, type, mask);
-+ if (err)
-+ goto err_free_inst;
-+
-+ alg = crypto_aead_spawn_alg(spawn);
-+
-+ err = -EINVAL;
-+ if (!alg->cra_aead.ivsize)
-+ goto err_drop_alg;
-+
-+ /*
-+ * This is only true if we're constructing an algorithm with its
-+ * default IV generator. For the default generator we elide the
-+ * template name and double-check the IV generator.
-+ */
-+ if (algt->mask & CRYPTO_ALG_GENIV) {
-+ if (strcmp(tmpl->name, alg->cra_aead.geniv))
-+ goto err_drop_alg;
-+
-+ memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
-+ memcpy(inst->alg.cra_driver_name, alg->cra_driver_name,
-+ CRYPTO_MAX_ALG_NAME);
-+ } else {
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-+ "%s(%s)", tmpl->name, alg->cra_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_alg;
-+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "%s(%s)", tmpl->name, alg->cra_driver_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_alg;
++ err = misc_register(&rdc321x_wdt_misc);
++ if (err < 0) {
++ printk(KERN_ERR PFX "watchdog misc_register failed\n");
++ return err;
+ }
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV;
-+ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = alg->cra_blocksize;
-+ inst->alg.cra_alignmask = alg->cra_alignmask;
-+ inst->alg.cra_type = &crypto_aead_type;
-+
-+ inst->alg.cra_aead.ivsize = alg->cra_aead.ivsize;
-+ inst->alg.cra_aead.maxauthsize = alg->cra_aead.maxauthsize;
-+ inst->alg.cra_aead.geniv = alg->cra_aead.geniv;
-+
-+ inst->alg.cra_aead.setkey = alg->cra_aead.setkey;
-+ inst->alg.cra_aead.setauthsize = alg->cra_aead.setauthsize;
-+ inst->alg.cra_aead.encrypt = alg->cra_aead.encrypt;
-+ inst->alg.cra_aead.decrypt = alg->cra_aead.decrypt;
-+
-+out:
-+ return inst;
++ /* Reset the watchdog */
++ outl(RDC_WDT_RST, RDC3210_CFGREG_DATA);
+
-+err_drop_alg:
-+ crypto_drop_aead(spawn);
-+err_free_inst:
-+ kfree(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
-+}
-+EXPORT_SYMBOL_GPL(aead_geniv_alloc);
++ init_completion(&rdc321x_wdt_device.stop);
++ rdc321x_wdt_device.queue = 0;
+
-+void aead_geniv_free(struct crypto_instance *inst)
-+{
-+ crypto_drop_aead(crypto_instance_ctx(inst));
-+ kfree(inst);
-+}
-+EXPORT_SYMBOL_GPL(aead_geniv_free);
++ clear_bit(0, &rdc321x_wdt_device.inuse);
+
-+int aead_geniv_init(struct crypto_tfm *tfm)
-+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_aead *aead;
++ setup_timer(&rdc321x_wdt_device.timer, rdc321x_wdt_trigger, 0);
+
-+ aead = crypto_spawn_aead(crypto_instance_ctx(inst));
-+ if (IS_ERR(aead))
-+ return PTR_ERR(aead);
++ rdc321x_wdt_device.default_ticks = ticks;
+
-+ tfm->crt_aead.base = aead;
-+ tfm->crt_aead.reqsize += crypto_aead_reqsize(aead);
++ printk(KERN_INFO PFX "watchdog init success\n");
+
+ return 0;
+}
-+EXPORT_SYMBOL_GPL(aead_geniv_init);
-+
-+void aead_geniv_exit(struct crypto_tfm *tfm)
-+{
-+ crypto_free_aead(tfm->crt_aead.base);
-+}
-+EXPORT_SYMBOL_GPL(aead_geniv_exit);
+
-+static int crypto_nivaead_default(struct crypto_alg *alg, u32 type, u32 mask)
++static int rdc321x_wdt_remove(struct platform_device *pdev)
+{
-+ struct rtattr *tb[3];
-+ struct {
-+ struct rtattr attr;
-+ struct crypto_attr_type data;
-+ } ptype;
-+ struct {
-+ struct rtattr attr;
-+ struct crypto_attr_alg data;
-+ } palg;
-+ struct crypto_template *tmpl;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *larval;
-+ const char *geniv;
-+ int err;
-+
-+ larval = crypto_larval_lookup(alg->cra_driver_name,
-+ CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV,
-+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ err = PTR_ERR(larval);
-+ if (IS_ERR(larval))
-+ goto out;
-+
-+ err = -EAGAIN;
-+ if (!crypto_is_larval(larval))
-+ goto drop_larval;
-+
-+ ptype.attr.rta_len = sizeof(ptype);
-+ ptype.attr.rta_type = CRYPTOA_TYPE;
-+ ptype.data.type = type | CRYPTO_ALG_GENIV;
-+ /* GENIV tells the template that we're making a default geniv. */
-+ ptype.data.mask = mask | CRYPTO_ALG_GENIV;
-+ tb[0] = &ptype.attr;
-+
-+ palg.attr.rta_len = sizeof(palg);
-+ palg.attr.rta_type = CRYPTOA_ALG;
-+ /* Must use the exact name to locate ourselves. */
-+ memcpy(palg.data.name, alg->cra_driver_name, CRYPTO_MAX_ALG_NAME);
-+ tb[1] = &palg.attr;
-+
-+ tb[2] = NULL;
-+
-+ geniv = alg->cra_aead.geniv;
-+
-+ tmpl = crypto_lookup_template(geniv);
-+ err = -ENOENT;
-+ if (!tmpl)
-+ goto kill_larval;
-+
-+ inst = tmpl->alloc(tb);
-+ err = PTR_ERR(inst);
-+ if (IS_ERR(inst))
-+ goto put_tmpl;
-+
-+ if ((err = crypto_register_instance(tmpl, inst))) {
-+ tmpl->free(inst);
-+ goto put_tmpl;
++ if (rdc321x_wdt_device.queue) {
++ rdc321x_wdt_device.queue = 0;
++ wait_for_completion(&rdc321x_wdt_device.stop);
+ }
+
-+ /* Redo the lookup to use the instance we just registered. */
-+ err = -EAGAIN;
++ misc_deregister(&rdc321x_wdt_misc);
+
-+put_tmpl:
-+ crypto_tmpl_put(tmpl);
-+kill_larval:
-+ crypto_larval_kill(larval);
-+drop_larval:
-+ crypto_mod_put(larval);
-+out:
-+ crypto_mod_put(alg);
-+ return err;
++ return 0;
+}
+
-+static struct crypto_alg *crypto_lookup_aead(const char *name, u32 type,
-+ u32 mask)
-+{
-+ struct crypto_alg *alg;
-+
-+ alg = crypto_alg_mod_lookup(name, type, mask);
-+ if (IS_ERR(alg))
-+ return alg;
-+
-+ if (alg->cra_type == &crypto_aead_type)
-+ return alg;
-+
-+ if (!alg->cra_aead.ivsize)
-+ return alg;
-+
-+ return ERR_PTR(crypto_nivaead_default(alg, type, mask));
-+}
++static struct platform_driver rdc321x_wdt_driver = {
++ .probe = rdc321x_wdt_probe,
++ .remove = rdc321x_wdt_remove,
++ .driver = {
++ .owner = THIS_MODULE,
++ .name = "rdc321x-wdt",
++ },
++};
+
-+int crypto_grab_aead(struct crypto_aead_spawn *spawn, const char *name,
-+ u32 type, u32 mask)
++static int __init rdc321x_wdt_init(void)
+{
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ type |= CRYPTO_ALG_TYPE_AEAD;
-+ mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ mask |= CRYPTO_ALG_TYPE_MASK;
-+
-+ alg = crypto_lookup_aead(name, type, mask);
-+ if (IS_ERR(alg))
-+ return PTR_ERR(alg);
-+
-+ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
-+ crypto_mod_put(alg);
-+ return err;
++ return platform_driver_register(&rdc321x_wdt_driver);
+}
-+EXPORT_SYMBOL_GPL(crypto_grab_aead);
+
-+struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
++static void __exit rdc321x_wdt_exit(void)
+{
-+ struct crypto_tfm *tfm;
-+ int err;
-+
-+ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ type |= CRYPTO_ALG_TYPE_AEAD;
-+ mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
-+ mask |= CRYPTO_ALG_TYPE_MASK;
-+
-+ for (;;) {
-+ struct crypto_alg *alg;
-+
-+ alg = crypto_lookup_aead(alg_name, type, mask);
-+ if (IS_ERR(alg)) {
-+ err = PTR_ERR(alg);
-+ goto err;
-+ }
-+
-+ tfm = __crypto_alloc_tfm(alg, type, mask);
-+ if (!IS_ERR(tfm))
-+ return __crypto_aead_cast(tfm);
-+
-+ crypto_mod_put(alg);
-+ err = PTR_ERR(tfm);
++ platform_driver_unregister(&rdc321x_wdt_driver);
++}
+
-+err:
-+ if (err != -EAGAIN)
-+ break;
-+ if (signal_pending(current)) {
-+ err = -EINTR;
-+ break;
-+ }
-+ }
++module_init(rdc321x_wdt_init);
++module_exit(rdc321x_wdt_exit);
+
-+ return ERR_PTR(err);
-+}
-+EXPORT_SYMBOL_GPL(crypto_alloc_aead);
++MODULE_AUTHOR("Florian Fainelli <florian@openwrt.org>");
++MODULE_DESCRIPTION("RDC321x watchdog driver");
++MODULE_LICENSE("GPL");
++MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
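The `rdc321x_wdt_open()`/`rdc321x_wdt_release()` pair above uses `test_and_set_bit`/`clear_bit` so that only one process can hold the watchdog device at a time. A minimal user-space sketch of the same guard, substituting a C11 `atomic_flag` for the kernel bitops (the `-EBUSY` return mirrors the driver):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

static atomic_flag wdt_inuse = ATOMIC_FLAG_INIT;

/* Mirrors rdc321x_wdt_open(): first caller wins, later callers get -EBUSY. */
static int wdt_open(void)
{
	if (atomic_flag_test_and_set(&wdt_inuse))
		return -EBUSY;
	return 0;
}

/* Mirrors rdc321x_wdt_release(): drop the single-opener claim. */
static void wdt_release(void)
{
	atomic_flag_clear(&wdt_inuse);
}
```

The test-and-set is the important part: checking the bit and then setting it in two steps would leave a window in which two openers both see the device as free.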
+diff --git a/arch/x86/mach-visws/mpparse.c b/arch/x86/mach-visws/mpparse.c
+index f3c74fa..2a8456a 100644
+--- a/arch/x86/mach-visws/mpparse.c
++++ b/arch/x86/mach-visws/mpparse.c
+@@ -36,19 +36,19 @@ unsigned int __initdata maxcpus = NR_CPUS;
+
+ static void __init MP_processor_info (struct mpc_config_processor *m)
+ {
+- int ver, logical_apicid;
++ int ver, logical_apicid;
+ physid_mask_t apic_cpus;
+-
+
- MODULE_LICENSE("GPL");
- MODULE_DESCRIPTION("Authenticated Encryption with Associated Data (AEAD)");
-diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
-index 9401dca..cf30af7 100644
---- a/crypto/aes_generic.c
-+++ b/crypto/aes_generic.c
-@@ -47,11 +47,7 @@
- * ---------------------------------------------------------------------------
- */
+ if (!(m->mpc_cpuflag & CPU_ENABLED))
+ return;
--/* Some changes from the Gladman version:
-- s/RIJNDAEL(e_key)/E_KEY/g
-- s/RIJNDAEL(d_key)/D_KEY/g
--*/
--
-+#include <crypto/aes.h>
- #include <linux/module.h>
- #include <linux/init.h>
- #include <linux/types.h>
-@@ -59,88 +55,46 @@
- #include <linux/crypto.h>
- #include <asm/byteorder.h>
+ logical_apicid = m->mpc_apicid;
+- printk(KERN_INFO "%sCPU #%d %ld:%ld APIC version %d\n",
+- m->mpc_cpuflag & CPU_BOOTPROCESSOR ? "Bootup " : "",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver);
++ printk(KERN_INFO "%sCPU #%d %u:%u APIC version %d\n",
++ m->mpc_cpuflag & CPU_BOOTPROCESSOR ? "Bootup " : "",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver);
+
+ if (m->mpc_cpuflag & CPU_BOOTPROCESSOR)
+ boot_cpu_physical_apicid = m->mpc_apicid;
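The hunk above also fixes the printk conversions: `(m->mpc_cpufeature & MASK) >> shift` is an `unsigned int` expression, so `%u` is the matching specifier, not `%ld`. A small stand-alone sketch of the same family/model extraction (the mask values are assumed to follow the usual mpspec layout, family in bits 8..11 and model in bits 4..7, and are not taken from this hunk):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CPU_FAMILY_MASK	0x0f00u	/* assumed layout: family in bits 8..11 */
#define CPU_MODEL_MASK	0x00f0u	/* assumed layout: model in bits 4..7 */

/* Format "family:model" the way the fixed printk does, with %u for the
 * unsigned masked-and-shifted values. */
static void format_cpu(unsigned int cpufeature, char *buf, size_t len)
{
	snprintf(buf, len, "%u:%u",
		 (cpufeature & CPU_FAMILY_MASK) >> 8,
		 (cpufeature & CPU_MODEL_MASK) >> 4);
}
```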
+diff --git a/arch/x86/mach-voyager/setup.c b/arch/x86/mach-voyager/setup.c
+index 3bef977..5ae5466 100644
+--- a/arch/x86/mach-voyager/setup.c
++++ b/arch/x86/mach-voyager/setup.c
+@@ -37,14 +37,14 @@ void __init pre_setup_arch_hook(void)
+ {
+ /* Voyagers run their CPUs from independent clocks, so disable
+ * the TSC code because we can't sync them */
+- tsc_disable = 1;
++ setup_clear_cpu_cap(X86_FEATURE_TSC);
+ }
--#define AES_MIN_KEY_SIZE 16
--#define AES_MAX_KEY_SIZE 32
--
--#define AES_BLOCK_SIZE 16
--
--/*
-- * #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
-- */
--static inline u8
--byte(const u32 x, const unsigned n)
-+static inline u8 byte(const u32 x, const unsigned n)
+ void __init trap_init_hook(void)
{
- return x >> (n << 3);
}
--struct aes_ctx {
-- int key_length;
-- u32 buf[120];
--};
--
--#define E_KEY (&ctx->buf[0])
--#define D_KEY (&ctx->buf[60])
--
- static u8 pow_tab[256] __initdata;
- static u8 log_tab[256] __initdata;
- static u8 sbx_tab[256] __initdata;
- static u8 isb_tab[256] __initdata;
- static u32 rco_tab[10];
--static u32 ft_tab[4][256];
--static u32 it_tab[4][256];
+-static struct irqaction irq0 = {
++static struct irqaction irq0 = {
+ .handler = timer_interrupt,
+ .flags = IRQF_DISABLED | IRQF_NOBALANCING | IRQF_IRQPOLL,
+ .mask = CPU_MASK_NONE,
+@@ -59,44 +59,47 @@ void __init time_init_hook(void)
--static u32 fl_tab[4][256];
--static u32 il_tab[4][256];
-+u32 crypto_ft_tab[4][256];
-+u32 crypto_fl_tab[4][256];
-+u32 crypto_it_tab[4][256];
-+u32 crypto_il_tab[4][256];
+ /* Hook for machine specific memory setup. */
--static inline u8 __init
--f_mult (u8 a, u8 b)
-+EXPORT_SYMBOL_GPL(crypto_ft_tab);
-+EXPORT_SYMBOL_GPL(crypto_fl_tab);
-+EXPORT_SYMBOL_GPL(crypto_it_tab);
-+EXPORT_SYMBOL_GPL(crypto_il_tab);
-+
-+static inline u8 __init f_mult(u8 a, u8 b)
+-char * __init machine_specific_memory_setup(void)
++char *__init machine_specific_memory_setup(void)
{
- u8 aa = log_tab[a], cc = aa + log_tab[b];
+ char *who;
- return pow_tab[cc + (cc < aa ? 1 : 0)];
+ who = "NOT VOYAGER";
+
+- if(voyager_level == 5) {
++ if (voyager_level == 5) {
+ __u32 addr, length;
+ int i;
+
+ who = "Voyager-SUS";
+
+ e820.nr_map = 0;
+- for(i=0; voyager_memory_detect(i, &addr, &length); i++) {
++ for (i = 0; voyager_memory_detect(i, &addr, &length); i++) {
+ add_memory_region(addr, length, E820_RAM);
+ }
+ return who;
+- } else if(voyager_level == 4) {
++ } else if (voyager_level == 4) {
+ __u32 tom;
+- __u16 catbase = inb(VOYAGER_SSPB_RELOCATION_PORT)<<8;
++ __u16 catbase = inb(VOYAGER_SSPB_RELOCATION_PORT) << 8;
+ /* select the DINO config space */
+ outb(VOYAGER_DINO, VOYAGER_CAT_CONFIG_PORT);
+ /* Read DINO top of memory register */
+ tom = ((inb(catbase + 0x4) & 0xf0) << 16)
+- + ((inb(catbase + 0x5) & 0x7f) << 24);
++ + ((inb(catbase + 0x5) & 0x7f) << 24);
+
+- if(inb(catbase) != VOYAGER_DINO) {
+- printk(KERN_ERR "Voyager: Failed to get DINO for L4, setting tom to EXT_MEM_K\n");
+- tom = (boot_params.screen_info.ext_mem_k)<<10;
++ if (inb(catbase) != VOYAGER_DINO) {
++ printk(KERN_ERR
++ "Voyager: Failed to get DINO for L4, setting tom to EXT_MEM_K\n");
++ tom = (boot_params.screen_info.ext_mem_k) << 10;
+ }
+ who = "Voyager-TOM";
+ add_memory_region(0, 0x9f000, E820_RAM);
+ /* map from 1M to top of memory */
+- add_memory_region(1*1024*1024, tom - 1*1024*1024, E820_RAM);
++ add_memory_region(1 * 1024 * 1024, tom - 1 * 1024 * 1024,
++ E820_RAM);
+ /* FIXME: Should check the ASICs to see if I need to
+ * take out the 8M window. Just do it at the moment
+ * */
+- add_memory_region(8*1024*1024, 8*1024*1024, E820_RESERVED);
++ add_memory_region(8 * 1024 * 1024, 8 * 1024 * 1024,
++ E820_RESERVED);
+ return who;
+ }
+
+@@ -114,8 +117,7 @@ char * __init machine_specific_memory_setup(void)
+ unsigned long mem_size;
+
+ /* compare results from other methods and take the greater */
+- if (boot_params.alt_mem_k
+- < boot_params.screen_info.ext_mem_k) {
++ if (boot_params.alt_mem_k < boot_params.screen_info.ext_mem_k) {
+ mem_size = boot_params.screen_info.ext_mem_k;
+ who = "BIOS-88";
+ } else {
+@@ -126,6 +128,6 @@ char * __init machine_specific_memory_setup(void)
+ e820.nr_map = 0;
+ add_memory_region(0, LOWMEMSIZE(), E820_RAM);
+ add_memory_region(HIGH_MEMORY, mem_size << 10, E820_RAM);
+- }
++ }
+ return who;
+ }
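The fallback path in `machine_specific_memory_setup()` above compares two BIOS extended-memory reports, `alt_mem_k` and `ext_mem_k`, and keeps the larger. The selection logic as a stand-alone sketch (parameter names mirror `boot_params`; the `"BIOS-e801"` label on the else branch is assumed from the standard x86 setup code, since that side of the hunk is truncated here):

```c
#include <assert.h>
#include <string.h>

/* Pick the larger of two BIOS extended-memory reports, like the fallback
 * path in machine_specific_memory_setup(). Returns the chosen size in KiB
 * and tags *who with the method that won. */
static unsigned long pick_mem_size(unsigned long alt_mem_k,
				   unsigned long ext_mem_k,
				   const char **who)
{
	if (alt_mem_k < ext_mem_k) {
		*who = "BIOS-88";	/* int 15h, ah=0x88 report won */
		return ext_mem_k;
	}
	*who = "BIOS-e801";		/* assumed label for the e801 report */
	return alt_mem_k;
}
```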
+diff --git a/arch/x86/mach-voyager/voyager_basic.c b/arch/x86/mach-voyager/voyager_basic.c
+index 9b77b39..6a949e4 100644
+--- a/arch/x86/mach-voyager/voyager_basic.c
++++ b/arch/x86/mach-voyager/voyager_basic.c
+@@ -35,7 +35,7 @@
+ /*
+ * Power off function, if any
+ */
+-void (*pm_power_off)(void);
++void (*pm_power_off) (void);
+ EXPORT_SYMBOL(pm_power_off);
+
+ int voyager_level = 0;
+@@ -43,39 +43,38 @@ int voyager_level = 0;
+ struct voyager_SUS *voyager_SUS = NULL;
+
+ #ifdef CONFIG_SMP
+-static void
+-voyager_dump(int dummy1, struct tty_struct *dummy3)
++static void voyager_dump(int dummy1, struct tty_struct *dummy3)
+ {
+ /* get here via a sysrq */
+ voyager_smp_dump();
}
--#define ff_mult(a,b) (a && b ? f_mult(a, b) : 0)
--
--#define f_rn(bo, bi, n, k) \
-- bo[n] = ft_tab[0][byte(bi[n],0)] ^ \
-- ft_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
-- ft_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
-- ft_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
--
--#define i_rn(bo, bi, n, k) \
-- bo[n] = it_tab[0][byte(bi[n],0)] ^ \
-- it_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
-- it_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
-- it_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
--
--#define ls_box(x) \
-- ( fl_tab[0][byte(x, 0)] ^ \
-- fl_tab[1][byte(x, 1)] ^ \
-- fl_tab[2][byte(x, 2)] ^ \
-- fl_tab[3][byte(x, 3)] )
--
--#define f_rl(bo, bi, n, k) \
-- bo[n] = fl_tab[0][byte(bi[n],0)] ^ \
-- fl_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
-- fl_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
-- fl_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
--
--#define i_rl(bo, bi, n, k) \
-- bo[n] = il_tab[0][byte(bi[n],0)] ^ \
-- il_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
-- il_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
-- il_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+ static struct sysrq_key_op sysrq_voyager_dump_op = {
+- .handler = voyager_dump,
+- .help_msg = "Voyager",
+- .action_msg = "Dump Voyager Status",
++ .handler = voyager_dump,
++ .help_msg = "Voyager",
++ .action_msg = "Dump Voyager Status",
+ };
+ #endif
+
+-void
+-voyager_detect(struct voyager_bios_info *bios)
++void voyager_detect(struct voyager_bios_info *bios)
+ {
+- if(bios->len != 0xff) {
+- int class = (bios->class_1 << 8)
+- | (bios->class_2 & 0xff);
++ if (bios->len != 0xff) {
++ int class = (bios->class_1 << 8)
++ | (bios->class_2 & 0xff);
+
+ printk("Voyager System detected.\n"
+ " Class %x, Revision %d.%d\n",
+ class, bios->major, bios->minor);
+- if(class == VOYAGER_LEVEL4)
++ if (class == VOYAGER_LEVEL4)
+ voyager_level = 4;
+- else if(class < VOYAGER_LEVEL5_AND_ABOVE)
++ else if (class < VOYAGER_LEVEL5_AND_ABOVE)
+ voyager_level = 3;
+ else
+ voyager_level = 5;
+ printk(" Architecture Level %d\n", voyager_level);
+- if(voyager_level < 4)
+- printk("\n**WARNING**: Voyager HAL only supports Levels 4 and 5 Architectures at the moment\n\n");
++ if (voyager_level < 4)
++ printk
++ ("\n**WARNING**: Voyager HAL only supports Levels 4 and 5 Architectures at the moment\n\n");
+ /* install the power off handler */
+ pm_power_off = voyager_power_off;
+ #ifdef CONFIG_SMP
+@@ -86,15 +85,13 @@ voyager_detect(struct voyager_bios_info *bios)
+ }
+ }
+
+-void
+-voyager_system_interrupt(int cpl, void *dev_id)
++void voyager_system_interrupt(int cpl, void *dev_id)
+ {
+ printk("Voyager: detected system interrupt\n");
+ }
+
+ /* Routine to read information from the extended CMOS area */
+-__u8
+-voyager_extended_cmos_read(__u16 addr)
++__u8 voyager_extended_cmos_read(__u16 addr)
+ {
+ outb(addr & 0xff, 0x74);
+ outb((addr >> 8) & 0xff, 0x75);
+@@ -108,12 +105,11 @@ voyager_extended_cmos_read(__u16 addr)
+
+ typedef struct ClickMap {
+ struct Entry {
+- __u32 Address;
+- __u32 Length;
++ __u32 Address;
++ __u32 Length;
+ } Entry[CLICK_ENTRIES];
+ } ClickMap_t;
+
-
--static void __init
--gen_tabs (void)
-+#define ff_mult(a, b) (a && b ? f_mult(a, b) : 0)
+ /* This routine is pretty much an awful hack to read the bios clickmap by
+ * mapping it into page 0. There are usually three regions in the map:
+ * Base Memory
+@@ -122,8 +118,7 @@ typedef struct ClickMap {
+ *
+ * Returns are 0 for failure and 1 for success on extracting region.
+ */
+-int __init
+-voyager_memory_detect(int region, __u32 *start, __u32 *length)
++int __init voyager_memory_detect(int region, __u32 * start, __u32 * length)
+ {
+ int i;
+ int retval = 0;
+@@ -132,13 +127,14 @@ voyager_memory_detect(int region, __u32 *start, __u32 *length)
+ unsigned long map_addr;
+ unsigned long old;
+
+- if(region >= CLICK_ENTRIES) {
++ if (region >= CLICK_ENTRIES) {
+ printk("Voyager: Illegal ClickMap region %d\n", region);
+ return 0;
+ }
+
+- for(i = 0; i < sizeof(cmos); i++)
+- cmos[i] = voyager_extended_cmos_read(VOYAGER_MEMORY_CLICKMAP + i);
++ for (i = 0; i < sizeof(cmos); i++)
++ cmos[i] =
++ voyager_extended_cmos_read(VOYAGER_MEMORY_CLICKMAP + i);
+
+ map_addr = *(unsigned long *)cmos;
+
+@@ -147,10 +143,10 @@ voyager_memory_detect(int region, __u32 *start, __u32 *length)
+ pg0[0] = ((map_addr & PAGE_MASK) | _PAGE_RW | _PAGE_PRESENT);
+ local_flush_tlb();
+ /* now clear everything out but page 0 */
+- map = (ClickMap_t *)(map_addr & (~PAGE_MASK));
++ map = (ClickMap_t *) (map_addr & (~PAGE_MASK));
+
+ /* zero length is the end of the clickmap */
+- if(map->Entry[region].Length != 0) {
++ if (map->Entry[region].Length != 0) {
+ *length = map->Entry[region].Length * CLICK_SIZE;
+ *start = map->Entry[region].Address;
+ retval = 1;
+@@ -165,10 +161,9 @@ voyager_memory_detect(int region, __u32 *start, __u32 *length)
+ /* voyager specific handling code for timer interrupts. Used to hand
+ * off the timer tick to the SMP code, since the VIC doesn't have an
+ * internal timer (The QIC does, but that's another story). */
+-void
+-voyager_timer_interrupt(void)
++void voyager_timer_interrupt(void)
+ {
+- if((jiffies & 0x3ff) == 0) {
++ if ((jiffies & 0x3ff) == 0) {
+
+ /* There seems to be something flaky in either
+ * hardware or software that is resetting the timer 0
+@@ -186,18 +181,20 @@ voyager_timer_interrupt(void)
+ __u16 val;
+
+ spin_lock(&i8253_lock);
+-
+
-+static void __init gen_tabs(void)
+ outb_p(0x00, 0x43);
+ val = inb_p(0x40);
+ val |= inb(0x40) << 8;
+ spin_unlock(&i8253_lock);
+
+- if(val > LATCH) {
+- printk("\nVOYAGER: countdown timer value too high (%d), resetting\n\n", val);
++ if (val > LATCH) {
++ printk
++ ("\nVOYAGER: countdown timer value too high (%d), resetting\n\n",
++ val);
+ spin_lock(&i8253_lock);
+- outb(0x34,0x43);
+- outb_p(LATCH & 0xff , 0x40); /* LSB */
+- outb(LATCH >> 8 , 0x40); /* MSB */
++ outb(0x34, 0x43);
++ outb_p(LATCH & 0xff, 0x40); /* LSB */
++ outb(LATCH >> 8, 0x40); /* MSB */
+ spin_unlock(&i8253_lock);
+ }
+ }
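The sanity check in `voyager_timer_interrupt()` above reloads PIT channel 0 whenever the latched countdown exceeds `LATCH`. `LATCH` itself is the per-tick reload value: the PIT input clock divided by `HZ`, rounded to nearest. A sketch of that check with the usual i8253 constants (the specific rate and `HZ` values below are conventional, not taken from this patch):

```c
#include <assert.h>

/* Usual i8253 constants: 1.193182 MHz input clock, 100 Hz tick. */
#define PIT_TICK_RATE	1193182UL
#define HZ		100UL

/* Per-tick reload value, rounded to nearest (mirrors the kernel's LATCH). */
#define LATCH		((PIT_TICK_RATE + HZ / 2) / HZ)

/* The check from voyager_timer_interrupt(): a latched count above LATCH
 * means timer 0 was reprogrammed behind our back and needs a reload. */
static int needs_reload(unsigned int val)
{
	return val > LATCH;
}
```

A count can never legitimately exceed the reload value, so `val > LATCH` is a cheap way to detect that something else rewrote the timer.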
+@@ -206,14 +203,13 @@ voyager_timer_interrupt(void)
+ #endif
+ }
+
+-void
+-voyager_power_off(void)
++void voyager_power_off(void)
{
- u32 i, t;
- u8 p, q;
+ printk("VOYAGER Power Off\n");
-- /* log and power tables for GF(2**8) finite field with
-- 0x011b as modular polynomial - the simplest primitive
-- root is 0x03, used here to generate the tables */
-+ /*
-+ * log and power tables for GF(2**8) finite field with
-+ * 0x011b as modular polynomial - the simplest primitive
-+ * root is 0x03, used here to generate the tables
-+ */
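The reflowed comment above describes the table build: successive powers of the primitive element 0x03 in GF(2^8) mod 0x11b fill `pow_tab`, with `log_tab` as the inverse map, so a field multiply becomes an addition of logs. A self-contained sketch of that construction, with multiplication by 3 done as `p ^ xtime(p)` in the spirit of the kernel's `gen_tabs()` (the modular-reduction arithmetic is standard AES field math, not copied from this patch):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t pow_tab[256], log_tab[256];

/* Build log/power tables for GF(2^8) with modular polynomial 0x11b,
 * using the primitive root 0x03: pow_tab[i] = 3^i, log_tab[3^i] = i. */
static void gen_tabs(void)
{
	uint8_t p = 1;
	for (int i = 0; i < 256; ++i) {
		pow_tab[i] = p;
		log_tab[p] = (uint8_t)i;
		/* p *= 3  ==  p ^ xtime(p): shift left, reduce by 0x1b
		 * whenever the high bit overflows. */
		p ^= (uint8_t)((p << 1) ^ ((p & 0x80) ? 0x1b : 0));
	}
}

/* Field multiply via the tables, the ff_mult() idea: add the logs,
 * wrap mod 255 (the multiplicative group order), look the power back up. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	if (!a || !b)
		return 0;
	return pow_tab[(log_tab[a] + log_tab[b]) % 255];
}
```

The kernel version avoids the `% 255` with a carry trick on u8 sums; the modulo form here is the same arithmetic written for clarity.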
+- if(voyager_level == 5) {
++ if (voyager_level == 5) {
+ voyager_cat_power_off();
+- } else if(voyager_level == 4) {
++ } else if (voyager_level == 4) {
+ /* This doesn't apparently work on most L4 machines,
+ * but the specs say to do this to get automatic power
+ * off. Unfortunately, if it doesn't power off the
+@@ -222,10 +218,8 @@ voyager_power_off(void)
+ #if 0
+ int port;
- for (i = 0, p = 1; i < 256; ++i) {
- pow_tab[i] = (u8) p;
-@@ -169,92 +123,119 @@ gen_tabs (void)
- p = sbx_tab[i];
+-
+ /* enable the voyager Configuration Space */
+- outb((inb(VOYAGER_MC_SETUP) & 0xf0) | 0x8,
+- VOYAGER_MC_SETUP);
++ outb((inb(VOYAGER_MC_SETUP) & 0xf0) | 0x8, VOYAGER_MC_SETUP);
+ /* the port for the power off flag is an offset from the
+ floating base */
+ port = (inb(VOYAGER_SSPB_RELOCATION_PORT) << 8) + 0x21;
+@@ -235,62 +229,57 @@ voyager_power_off(void)
+ }
+ /* and wait for it to happen */
+ local_irq_disable();
+- for(;;)
++ for (;;)
+ halt();
+ }
- t = p;
-- fl_tab[0][i] = t;
-- fl_tab[1][i] = rol32(t, 8);
-- fl_tab[2][i] = rol32(t, 16);
-- fl_tab[3][i] = rol32(t, 24);
-+ crypto_fl_tab[0][i] = t;
-+ crypto_fl_tab[1][i] = rol32(t, 8);
-+ crypto_fl_tab[2][i] = rol32(t, 16);
-+ crypto_fl_tab[3][i] = rol32(t, 24);
+ /* copied from process.c */
+-static inline void
+-kb_wait(void)
++static inline void kb_wait(void)
+ {
+ int i;
-- t = ((u32) ff_mult (2, p)) |
-+ t = ((u32) ff_mult(2, p)) |
- ((u32) p << 8) |
-- ((u32) p << 16) | ((u32) ff_mult (3, p) << 24);
-+ ((u32) p << 16) | ((u32) ff_mult(3, p) << 24);
+- for (i=0; i<0x10000; i++)
++ for (i = 0; i < 0x10000; i++)
+ if ((inb_p(0x64) & 0x02) == 0)
+ break;
+ }
-- ft_tab[0][i] = t;
-- ft_tab[1][i] = rol32(t, 8);
-- ft_tab[2][i] = rol32(t, 16);
-- ft_tab[3][i] = rol32(t, 24);
-+ crypto_ft_tab[0][i] = t;
-+ crypto_ft_tab[1][i] = rol32(t, 8);
-+ crypto_ft_tab[2][i] = rol32(t, 16);
-+ crypto_ft_tab[3][i] = rol32(t, 24);
+-void
+-machine_shutdown(void)
++void machine_shutdown(void)
+ {
+ /* Architecture specific shutdown needed before a kexec */
+ }
- p = isb_tab[i];
+-void
+-machine_restart(char *cmd)
++void machine_restart(char *cmd)
+ {
+ printk("Voyager Warm Restart\n");
+ kb_wait();
- t = p;
-- il_tab[0][i] = t;
-- il_tab[1][i] = rol32(t, 8);
-- il_tab[2][i] = rol32(t, 16);
-- il_tab[3][i] = rol32(t, 24);
--
-- t = ((u32) ff_mult (14, p)) |
-- ((u32) ff_mult (9, p) << 8) |
-- ((u32) ff_mult (13, p) << 16) |
-- ((u32) ff_mult (11, p) << 24);
--
-- it_tab[0][i] = t;
-- it_tab[1][i] = rol32(t, 8);
-- it_tab[2][i] = rol32(t, 16);
-- it_tab[3][i] = rol32(t, 24);
-+ crypto_il_tab[0][i] = t;
-+ crypto_il_tab[1][i] = rol32(t, 8);
-+ crypto_il_tab[2][i] = rol32(t, 16);
-+ crypto_il_tab[3][i] = rol32(t, 24);
+- if(voyager_level == 5) {
++ if (voyager_level == 5) {
+ /* write magic values to the RTC to inform system that
+ * shutdown is beginning */
+ outb(0x8f, 0x70);
+- outb(0x5 , 0x71);
+-
++ outb(0x5, 0x71);
+
-+ t = ((u32) ff_mult(14, p)) |
-+ ((u32) ff_mult(9, p) << 8) |
-+ ((u32) ff_mult(13, p) << 16) |
-+ ((u32) ff_mult(11, p) << 24);
+ udelay(50);
+- outb(0xfe,0x64); /* pull reset low */
+- } else if(voyager_level == 4) {
+- __u16 catbase = inb(VOYAGER_SSPB_RELOCATION_PORT)<<8;
++ outb(0xfe, 0x64); /* pull reset low */
++ } else if (voyager_level == 4) {
++ __u16 catbase = inb(VOYAGER_SSPB_RELOCATION_PORT) << 8;
+ __u8 basebd = inb(VOYAGER_MC_SETUP);
+-
+
-+ crypto_it_tab[0][i] = t;
-+ crypto_it_tab[1][i] = rol32(t, 8);
-+ crypto_it_tab[2][i] = rol32(t, 16);
-+ crypto_it_tab[3][i] = rol32(t, 24);
+ outb(basebd | 0x08, VOYAGER_MC_SETUP);
+ outb(0x02, catbase + 0x21);
}
+ local_irq_disable();
+- for(;;)
++ for (;;)
+ halt();
}
--#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
--
--#define imix_col(y,x) \
-- u = star_x(x); \
-- v = star_x(u); \
-- w = star_x(v); \
-- t = w ^ (x); \
-- (y) = u ^ v ^ w; \
-- (y) ^= ror32(u ^ t, 8) ^ \
-- ror32(v ^ t, 16) ^ \
-- ror32(t,24)
--
- /* initialise the key schedule from the user supplied key */
+-void
+-machine_emergency_restart(void)
++void machine_emergency_restart(void)
+ {
+ /*for now, just hook this to a warm restart */
+ machine_restart(NULL);
+ }
+
+-void
+-mca_nmi_hook(void)
++void mca_nmi_hook(void)
+ {
+ __u8 dumpval __maybe_unused = inb(0xf823);
+ __u8 swnmi __maybe_unused = inb(0xf813);
+@@ -301,8 +290,8 @@ mca_nmi_hook(void)
+ /* clear swnmi */
+ outb(0xff, 0xf813);
+ /* tell SUS to ignore dump */
+- if(voyager_level == 5 && voyager_SUS != NULL) {
+- if(voyager_SUS->SUS_mbox == VOYAGER_DUMP_BUTTON_NMI) {
++ if (voyager_level == 5 && voyager_SUS != NULL) {
++ if (voyager_SUS->SUS_mbox == VOYAGER_DUMP_BUTTON_NMI) {
+ voyager_SUS->kernel_mbox = VOYAGER_NO_COMMAND;
+ voyager_SUS->kernel_flags |= VOYAGER_OS_IN_PROGRESS;
+ udelay(1000);
+@@ -310,15 +299,14 @@ mca_nmi_hook(void)
+ voyager_SUS->kernel_flags &= ~VOYAGER_OS_IN_PROGRESS;
+ }
+ }
+- printk(KERN_ERR "VOYAGER: Dump switch pressed, printing CPU%d tracebacks\n", smp_processor_id());
++ printk(KERN_ERR
++ "VOYAGER: Dump switch pressed, printing CPU%d tracebacks\n",
++ smp_processor_id());
+ show_stack(NULL, NULL);
+ show_state();
+ }
--#define loop4(i) \
--{ t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
-- t ^= E_KEY[4 * i]; E_KEY[4 * i + 4] = t; \
-- t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t; \
-- t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t; \
-- t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t; \
--}
-
--#define loop6(i) \
--{ t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
-- t ^= E_KEY[6 * i]; E_KEY[6 * i + 6] = t; \
-- t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t; \
-- t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t; \
-- t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t; \
-- t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t; \
-- t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t; \
--}
-
--#define loop8(i) \
--{ t = ror32(t, 8); ; t = ls_box(t) ^ rco_tab[i]; \
-- t ^= E_KEY[8 * i]; E_KEY[8 * i + 8] = t; \
-- t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t; \
-- t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t; \
-- t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t; \
-- t = E_KEY[8 * i + 4] ^ ls_box(t); \
-- E_KEY[8 * i + 12] = t; \
-- t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t; \
-- t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t; \
-- t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t; \
--}
-+#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+-void
+-machine_halt(void)
++void machine_halt(void)
+ {
+ /* treat a halt like a power off */
+ machine_power_off();
+diff --git a/arch/x86/mach-voyager/voyager_cat.c b/arch/x86/mach-voyager/voyager_cat.c
+index 2132ca6..17a7904 100644
+--- a/arch/x86/mach-voyager/voyager_cat.c
++++ b/arch/x86/mach-voyager/voyager_cat.c
+@@ -39,34 +39,32 @@
+ #define CAT_DATA (sspb + 0xd)
+
+ /* the internal cat functions */
+-static void cat_pack(__u8 *msg, __u16 start_bit, __u8 *data,
+- __u16 num_bits);
+-static void cat_unpack(__u8 *msg, __u16 start_bit, __u8 *data,
++static void cat_pack(__u8 * msg, __u16 start_bit, __u8 * data, __u16 num_bits);
++static void cat_unpack(__u8 * msg, __u16 start_bit, __u8 * data,
+ __u16 num_bits);
+-static void cat_build_header(__u8 *header, const __u16 len,
++static void cat_build_header(__u8 * header, const __u16 len,
+ const __u16 smallest_reg_bits,
+ const __u16 longest_reg_bits);
+-static int cat_sendinst(voyager_module_t *modp, voyager_asic_t *asicp,
++static int cat_sendinst(voyager_module_t * modp, voyager_asic_t * asicp,
+ __u8 reg, __u8 op);
+-static int cat_getdata(voyager_module_t *modp, voyager_asic_t *asicp,
+- __u8 reg, __u8 *value);
+-static int cat_shiftout(__u8 *data, __u16 data_bytes, __u16 header_bytes,
++static int cat_getdata(voyager_module_t * modp, voyager_asic_t * asicp,
++ __u8 reg, __u8 * value);
++static int cat_shiftout(__u8 * data, __u16 data_bytes, __u16 header_bytes,
+ __u8 pad_bits);
+-static int cat_write(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
++static int cat_write(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg,
+ __u8 value);
+-static int cat_read(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
+- __u8 *value);
+-static int cat_subread(voyager_module_t *modp, voyager_asic_t *asicp,
++static int cat_read(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg,
++ __u8 * value);
++static int cat_subread(voyager_module_t * modp, voyager_asic_t * asicp,
+ __u16 offset, __u16 len, void *buf);
+-static int cat_senddata(voyager_module_t *modp, voyager_asic_t *asicp,
++static int cat_senddata(voyager_module_t * modp, voyager_asic_t * asicp,
+ __u8 reg, __u8 value);
+-static int cat_disconnect(voyager_module_t *modp, voyager_asic_t *asicp);
+-static int cat_connect(voyager_module_t *modp, voyager_asic_t *asicp);
++static int cat_disconnect(voyager_module_t * modp, voyager_asic_t * asicp);
++static int cat_connect(voyager_module_t * modp, voyager_asic_t * asicp);
+
+-static inline const char *
+-cat_module_name(int module_id)
++static inline const char *cat_module_name(int module_id)
+ {
+- switch(module_id) {
++ switch (module_id) {
+ case 0x10:
+ return "Processor Slot 0";
+ case 0x11:
+@@ -105,14 +103,14 @@ voyager_module_t *voyager_cat_list;
+
+ /* the I/O port assignments for the VIC and QIC */
+ static struct resource vic_res = {
+- .name = "Voyager Interrupt Controller",
+- .start = 0xFC00,
+- .end = 0xFC6F
++ .name = "Voyager Interrupt Controller",
++ .start = 0xFC00,
++ .end = 0xFC6F
+ };
+ static struct resource qic_res = {
+- .name = "Quad Interrupt Controller",
+- .start = 0xFC70,
+- .end = 0xFCFF
++ .name = "Quad Interrupt Controller",
++ .start = 0xFC70,
++ .end = 0xFCFF
+ };
+
+ /* This function is used to pack a data bit stream inside a message.
+@@ -120,7 +118,7 @@ static struct resource qic_res = {
+ * Note: This function assumes that any unused bit in the data stream
+ * is set to zero so that the ors will work correctly */
+ static void
+-cat_pack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
++cat_pack(__u8 * msg, const __u16 start_bit, __u8 * data, const __u16 num_bits)
+ {
+ /* compute initial shift needed */
+ const __u16 offset = start_bit % BITS_PER_BYTE;
+@@ -130,7 +128,7 @@ cat_pack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
+ int i;
--static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-- unsigned int key_len)
-+#define imix_col(y,x) do { \
-+ u = star_x(x); \
-+ v = star_x(u); \
-+ w = star_x(v); \
-+ t = w ^ (x); \
-+ (y) = u ^ v ^ w; \
-+ (y) ^= ror32(u ^ t, 8) ^ \
-+ ror32(v ^ t, 16) ^ \
-+ ror32(t, 24); \
-+} while (0)
-+
-+#define ls_box(x) \
-+ crypto_fl_tab[0][byte(x, 0)] ^ \
-+ crypto_fl_tab[1][byte(x, 1)] ^ \
-+ crypto_fl_tab[2][byte(x, 2)] ^ \
-+ crypto_fl_tab[3][byte(x, 3)]
-+
-+#define loop4(i) do { \
-+ t = ror32(t, 8); \
-+ t = ls_box(t) ^ rco_tab[i]; \
-+ t ^= ctx->key_enc[4 * i]; \
-+ ctx->key_enc[4 * i + 4] = t; \
-+ t ^= ctx->key_enc[4 * i + 1]; \
-+ ctx->key_enc[4 * i + 5] = t; \
-+ t ^= ctx->key_enc[4 * i + 2]; \
-+ ctx->key_enc[4 * i + 6] = t; \
-+ t ^= ctx->key_enc[4 * i + 3]; \
-+ ctx->key_enc[4 * i + 7] = t; \
-+} while (0)
-+
-+#define loop6(i) do { \
-+ t = ror32(t, 8); \
-+ t = ls_box(t) ^ rco_tab[i]; \
-+ t ^= ctx->key_enc[6 * i]; \
-+ ctx->key_enc[6 * i + 6] = t; \
-+ t ^= ctx->key_enc[6 * i + 1]; \
-+ ctx->key_enc[6 * i + 7] = t; \
-+ t ^= ctx->key_enc[6 * i + 2]; \
-+ ctx->key_enc[6 * i + 8] = t; \
-+ t ^= ctx->key_enc[6 * i + 3]; \
-+ ctx->key_enc[6 * i + 9] = t; \
-+ t ^= ctx->key_enc[6 * i + 4]; \
-+ ctx->key_enc[6 * i + 10] = t; \
-+ t ^= ctx->key_enc[6 * i + 5]; \
-+ ctx->key_enc[6 * i + 11] = t; \
-+} while (0)
+ /* adjust if we have more than a byte of residue */
+- if(residue >= BITS_PER_BYTE) {
++ if (residue >= BITS_PER_BYTE) {
+ residue -= BITS_PER_BYTE;
+ len++;
+ }
+@@ -138,24 +136,25 @@ cat_pack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
+ /* clear out the bits. We assume here that if len==0 then
+ * residue >= offset. This is always true for the catbus
+ * operations */
+- msg[byte] &= 0xff << (BITS_PER_BYTE - offset);
++ msg[byte] &= 0xff << (BITS_PER_BYTE - offset);
+ msg[byte++] |= data[0] >> offset;
+- if(len == 0)
++ if (len == 0)
+ return;
+- for(i = 1; i < len; i++)
+- msg[byte++] = (data[i-1] << (BITS_PER_BYTE - offset))
+- | (data[i] >> offset);
+- if(residue != 0) {
++ for (i = 1; i < len; i++)
++ msg[byte++] = (data[i - 1] << (BITS_PER_BYTE - offset))
++ | (data[i] >> offset);
++ if (residue != 0) {
+ __u8 mask = 0xff >> residue;
+- __u8 last_byte = data[i-1] << (BITS_PER_BYTE - offset)
+- | (data[i] >> offset);
+-
++ __u8 last_byte = data[i - 1] << (BITS_PER_BYTE - offset)
++ | (data[i] >> offset);
+
-+#define loop8(i) do { \
-+ t = ror32(t, 8); \
-+ t = ls_box(t) ^ rco_tab[i]; \
-+ t ^= ctx->key_enc[8 * i]; \
-+ ctx->key_enc[8 * i + 8] = t; \
-+ t ^= ctx->key_enc[8 * i + 1]; \
-+ ctx->key_enc[8 * i + 9] = t; \
-+ t ^= ctx->key_enc[8 * i + 2]; \
-+ ctx->key_enc[8 * i + 10] = t; \
-+ t ^= ctx->key_enc[8 * i + 3]; \
-+ ctx->key_enc[8 * i + 11] = t; \
-+ t = ctx->key_enc[8 * i + 4] ^ ls_box(t); \
-+ ctx->key_enc[8 * i + 12] = t; \
-+ t ^= ctx->key_enc[8 * i + 5]; \
-+ ctx->key_enc[8 * i + 13] = t; \
-+ t ^= ctx->key_enc[8 * i + 6]; \
-+ ctx->key_enc[8 * i + 14] = t; \
-+ t ^= ctx->key_enc[8 * i + 7]; \
-+ ctx->key_enc[8 * i + 15] = t; \
-+} while (0)
+ last_byte &= ~mask;
+ msg[byte] &= mask;
+ msg[byte] |= last_byte;
+ }
+ return;
+ }
+
-+int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-+ unsigned int key_len)
+ /* unpack the data again (same arguments as cat_pack()). data buffer
+ * must be zero populated.
+ *
+@@ -163,7 +162,7 @@ cat_pack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
+ * data (starting at bit 0 in data).
+ */
+ static void
+-cat_unpack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
++cat_unpack(__u8 * msg, const __u16 start_bit, __u8 * data, const __u16 num_bits)
{
-- struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
- const __le32 *key = (const __le32 *)in_key;
- u32 *flags = &tfm->crt_flags;
-- u32 i, t, u, v, w;
-+ u32 i, t, u, v, w, j;
+ /* compute initial shift needed */
+ const __u16 offset = start_bit % BITS_PER_BYTE;
+@@ -172,97 +171,97 @@ cat_unpack(__u8 *msg, const __u16 start_bit, __u8 *data, const __u16 num_bits)
+ __u16 byte = start_bit / BITS_PER_BYTE;
+ int i;
- if (key_len % 8) {
- *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-@@ -263,95 +244,113 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+- if(last_bits != 0)
++ if (last_bits != 0)
+ len++;
+
+ /* special case: want < 8 bits from msg and we can get it from
+ * a single byte of the msg */
+- if(len == 0 && BITS_PER_BYTE - offset >= num_bits) {
++ if (len == 0 && BITS_PER_BYTE - offset >= num_bits) {
+ data[0] = msg[byte] << offset;
+ data[0] &= 0xff >> (BITS_PER_BYTE - num_bits);
+ return;
+ }
+- for(i = 0; i < len; i++) {
++ for (i = 0; i < len; i++) {
+ /* this annoying if has to be done just in case a read of
+ * msg one beyond the array causes a panic */
+- if(offset != 0) {
++ if (offset != 0) {
+ data[i] = msg[byte++] << offset;
+ data[i] |= msg[byte] >> (BITS_PER_BYTE - offset);
+- }
+- else {
++ } else {
+ data[i] = msg[byte++];
+ }
+ }
+ /* do we need to truncate the final byte */
+- if(last_bits != 0) {
+- data[i-1] &= 0xff << (BITS_PER_BYTE - last_bits);
++ if (last_bits != 0) {
++ data[i - 1] &= 0xff << (BITS_PER_BYTE - last_bits);
+ }
+ return;
+ }
- ctx->key_length = key_len;
+ static void
+-cat_build_header(__u8 *header, const __u16 len, const __u16 smallest_reg_bits,
++cat_build_header(__u8 * header, const __u16 len, const __u16 smallest_reg_bits,
+ const __u16 longest_reg_bits)
+ {
+ int i;
+ __u16 start_bit = (smallest_reg_bits - 1) % BITS_PER_BYTE;
+ __u8 *last_byte = &header[len - 1];
-- E_KEY[0] = le32_to_cpu(key[0]);
-- E_KEY[1] = le32_to_cpu(key[1]);
-- E_KEY[2] = le32_to_cpu(key[2]);
-- E_KEY[3] = le32_to_cpu(key[3]);
-+ ctx->key_dec[key_len + 24] = ctx->key_enc[0] = le32_to_cpu(key[0]);
-+ ctx->key_dec[key_len + 25] = ctx->key_enc[1] = le32_to_cpu(key[1]);
-+ ctx->key_dec[key_len + 26] = ctx->key_enc[2] = le32_to_cpu(key[2]);
-+ ctx->key_dec[key_len + 27] = ctx->key_enc[3] = le32_to_cpu(key[3]);
+- if(start_bit == 0)
++ if (start_bit == 0)
+ start_bit = 1; /* must have at least one bit in the hdr */
+-
+- for(i=0; i < len; i++)
++
++ for (i = 0; i < len; i++)
+ header[i] = 0;
- switch (key_len) {
- case 16:
-- t = E_KEY[3];
-+ t = ctx->key_enc[3];
- for (i = 0; i < 10; ++i)
-- loop4 (i);
-+ loop4(i);
- break;
+- for(i = start_bit; i > 0; i--)
++ for (i = start_bit; i > 0; i--)
+ *last_byte = ((*last_byte) << 1) + 1;
- case 24:
-- E_KEY[4] = le32_to_cpu(key[4]);
-- t = E_KEY[5] = le32_to_cpu(key[5]);
-+ ctx->key_enc[4] = le32_to_cpu(key[4]);
-+ t = ctx->key_enc[5] = le32_to_cpu(key[5]);
- for (i = 0; i < 8; ++i)
-- loop6 (i);
-+ loop6(i);
- break;
+ }
- case 32:
-- E_KEY[4] = le32_to_cpu(key[4]);
-- E_KEY[5] = le32_to_cpu(key[5]);
-- E_KEY[6] = le32_to_cpu(key[6]);
-- t = E_KEY[7] = le32_to_cpu(key[7]);
-+ ctx->key_enc[4] = le32_to_cpu(key[4]);
-+ ctx->key_enc[5] = le32_to_cpu(key[5]);
-+ ctx->key_enc[6] = le32_to_cpu(key[6]);
-+ t = ctx->key_enc[7] = le32_to_cpu(key[7]);
- for (i = 0; i < 7; ++i)
-- loop8 (i);
-+ loop8(i);
- break;
+ static int
+-cat_sendinst(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg, __u8 op)
++cat_sendinst(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg, __u8 op)
+ {
+ __u8 parity, inst, inst_buf[4] = { 0 };
+ __u8 iseq[VOYAGER_MAX_SCAN_PATH], hseq[VOYAGER_MAX_REG_SIZE];
+ __u16 ibytes, hbytes, padbits;
+ int i;
+-
++
+ /*
+ * Parity is the parity of the register number + 1 (READ_REGISTER
+ * and WRITE_REGISTER always add '1' to the number of bits == 1)
+ */
+- parity = (__u8)(1 + (reg & 0x01) +
+- ((__u8)(reg & 0x02) >> 1) +
+- ((__u8)(reg & 0x04) >> 2) +
+- ((__u8)(reg & 0x08) >> 3)) % 2;
++ parity = (__u8) (1 + (reg & 0x01) +
++ ((__u8) (reg & 0x02) >> 1) +
++ ((__u8) (reg & 0x04) >> 2) +
++ ((__u8) (reg & 0x08) >> 3)) % 2;
+
+ inst = ((parity << 7) | (reg << 2) | op);
+
+ outb(VOYAGER_CAT_IRCYC, CAT_CMD);
+- if(!modp->scan_path_connected) {
+- if(asicp->asic_id != VOYAGER_CAT_ID) {
+- printk("**WARNING***: cat_sendinst has disconnected scan path not to CAT asic\n");
++ if (!modp->scan_path_connected) {
++ if (asicp->asic_id != VOYAGER_CAT_ID) {
++ printk
++ ("**WARNING***: cat_sendinst has disconnected scan path not to CAT asic\n");
+ return 1;
+ }
+ outb(VOYAGER_CAT_HEADER, CAT_DATA);
+ outb(inst, CAT_DATA);
+- if(inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
++ if (inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
+ CDEBUG(("VOYAGER CAT: cat_sendinst failed to get CAT_HEADER\n"));
+ return 1;
+ }
+ return 0;
+ }
+ ibytes = modp->inst_bits / BITS_PER_BYTE;
+- if((padbits = modp->inst_bits % BITS_PER_BYTE) != 0) {
++ if ((padbits = modp->inst_bits % BITS_PER_BYTE) != 0) {
+ padbits = BITS_PER_BYTE - padbits;
+ ibytes++;
+ }
+ hbytes = modp->largest_reg / BITS_PER_BYTE;
+- if(modp->largest_reg % BITS_PER_BYTE)
++ if (modp->largest_reg % BITS_PER_BYTE)
+ hbytes++;
+ CDEBUG(("cat_sendinst: ibytes=%d, hbytes=%d\n", ibytes, hbytes));
+ /* initialise the instruction sequence to 0xff */
+- for(i=0; i < ibytes + hbytes; i++)
++ for (i = 0; i < ibytes + hbytes; i++)
+ iseq[i] = 0xff;
+ cat_build_header(hseq, hbytes, modp->smallest_reg, modp->largest_reg);
+ cat_pack(iseq, modp->inst_bits, hseq, hbytes * BITS_PER_BYTE);
+@@ -271,11 +270,11 @@ cat_sendinst(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg, __u8 op)
+ cat_pack(iseq, asicp->bit_location, inst_buf, asicp->ireg_length);
+ #ifdef VOYAGER_CAT_DEBUG
+ printk("ins = 0x%x, iseq: ", inst);
+- for(i=0; i< ibytes + hbytes; i++)
++ for (i = 0; i < ibytes + hbytes; i++)
+ printk("0x%x ", iseq[i]);
+ printk("\n");
+ #endif
+- if(cat_shiftout(iseq, ibytes, hbytes, padbits)) {
++ if (cat_shiftout(iseq, ibytes, hbytes, padbits)) {
+ CDEBUG(("VOYAGER CAT: cat_sendinst: cat_shiftout failed\n"));
+ return 1;
}
+@@ -284,72 +283,74 @@ cat_sendinst(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg, __u8 op)
+ }
-- D_KEY[0] = E_KEY[0];
-- D_KEY[1] = E_KEY[1];
-- D_KEY[2] = E_KEY[2];
-- D_KEY[3] = E_KEY[3];
-+ ctx->key_dec[0] = ctx->key_enc[key_len + 24];
-+ ctx->key_dec[1] = ctx->key_enc[key_len + 25];
-+ ctx->key_dec[2] = ctx->key_enc[key_len + 26];
-+ ctx->key_dec[3] = ctx->key_enc[key_len + 27];
+ static int
+-cat_getdata(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
+- __u8 *value)
++cat_getdata(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg,
++ __u8 * value)
+ {
+- if(!modp->scan_path_connected) {
+- if(asicp->asic_id != VOYAGER_CAT_ID) {
++ if (!modp->scan_path_connected) {
++ if (asicp->asic_id != VOYAGER_CAT_ID) {
+ CDEBUG(("VOYAGER CAT: ERROR: cat_getdata to CAT asic with scan path connected\n"));
+ return 1;
+ }
+- if(reg > VOYAGER_SUBADDRHI)
++ if (reg > VOYAGER_SUBADDRHI)
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ outb(VOYAGER_CAT_DRCYC, CAT_CMD);
+ outb(VOYAGER_CAT_HEADER, CAT_DATA);
+ *value = inb(CAT_DATA);
+ outb(0xAA, CAT_DATA);
+- if(inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
++ if (inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
+ CDEBUG(("cat_getdata: failed to get VOYAGER_CAT_HEADER\n"));
+ return 1;
+ }
+ return 0;
+- }
+- else {
+- __u16 sbits = modp->num_asics -1 + asicp->ireg_length;
++ } else {
++ __u16 sbits = modp->num_asics - 1 + asicp->ireg_length;
+ __u16 sbytes = sbits / BITS_PER_BYTE;
+ __u16 tbytes;
+- __u8 string[VOYAGER_MAX_SCAN_PATH], trailer[VOYAGER_MAX_REG_SIZE];
++ __u8 string[VOYAGER_MAX_SCAN_PATH],
++ trailer[VOYAGER_MAX_REG_SIZE];
+ __u8 padbits;
+ int i;
+-
++
+ outb(VOYAGER_CAT_DRCYC, CAT_CMD);
- for (i = 4; i < key_len + 24; ++i) {
-- imix_col (D_KEY[i], E_KEY[i]);
-+ j = key_len + 24 - (i & ~3) + (i & 3);
-+ imix_col(ctx->key_dec[j], ctx->key_enc[i]);
- }
--
- return 0;
+- if((padbits = sbits % BITS_PER_BYTE) != 0) {
++ if ((padbits = sbits % BITS_PER_BYTE) != 0) {
+ padbits = BITS_PER_BYTE - padbits;
+ sbytes++;
+ }
+ tbytes = asicp->ireg_length / BITS_PER_BYTE;
+- if(asicp->ireg_length % BITS_PER_BYTE)
++ if (asicp->ireg_length % BITS_PER_BYTE)
+ tbytes++;
+ CDEBUG(("cat_getdata: tbytes = %d, sbytes = %d, padbits = %d\n",
+- tbytes, sbytes, padbits));
++ tbytes, sbytes, padbits));
+ cat_build_header(trailer, tbytes, 1, asicp->ireg_length);
+
+-
+- for(i = tbytes - 1; i >= 0; i--) {
++ for (i = tbytes - 1; i >= 0; i--) {
+ outb(trailer[i], CAT_DATA);
+ string[sbytes + i] = inb(CAT_DATA);
+ }
+
+- for(i = sbytes - 1; i >= 0; i--) {
++ for (i = sbytes - 1; i >= 0; i--) {
+ outb(0xaa, CAT_DATA);
+ string[i] = inb(CAT_DATA);
+ }
+ *value = 0;
+- cat_unpack(string, padbits + (tbytes * BITS_PER_BYTE) + asicp->asic_location, value, asicp->ireg_length);
++ cat_unpack(string,
++ padbits + (tbytes * BITS_PER_BYTE) +
++ asicp->asic_location, value, asicp->ireg_length);
+ #ifdef VOYAGER_CAT_DEBUG
+ printk("value=0x%x, string: ", *value);
+- for(i=0; i< tbytes+sbytes; i++)
++ for (i = 0; i < tbytes + sbytes; i++)
+ printk("0x%x ", string[i]);
+ printk("\n");
+ #endif
+-
++
+ /* sanity check the rest of the return */
+- for(i=0; i < tbytes; i++) {
++ for (i = 0; i < tbytes; i++) {
+ __u8 input = 0;
+
+- cat_unpack(string, padbits + (i * BITS_PER_BYTE), &input, BITS_PER_BYTE);
+- if(trailer[i] != input) {
++ cat_unpack(string, padbits + (i * BITS_PER_BYTE),
++ &input, BITS_PER_BYTE);
++ if (trailer[i] != input) {
+ CDEBUG(("cat_getdata: failed to sanity check rest of ret(%d) 0x%x != 0x%x\n", i, input, trailer[i]));
+ return 1;
+ }
+@@ -360,14 +361,14 @@ cat_getdata(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
}
-+EXPORT_SYMBOL_GPL(crypto_aes_set_key);
- /* encrypt a block of text */
+ static int
+-cat_shiftout(__u8 *data, __u16 data_bytes, __u16 header_bytes, __u8 pad_bits)
++cat_shiftout(__u8 * data, __u16 data_bytes, __u16 header_bytes, __u8 pad_bits)
+ {
+ int i;
+-
+- for(i = data_bytes + header_bytes - 1; i >= header_bytes; i--)
++
++ for (i = data_bytes + header_bytes - 1; i >= header_bytes; i--)
+ outb(data[i], CAT_DATA);
--#define f_nround(bo, bi, k) \
-- f_rn(bo, bi, 0, k); \
-- f_rn(bo, bi, 1, k); \
-- f_rn(bo, bi, 2, k); \
-- f_rn(bo, bi, 3, k); \
-- k += 4
--
--#define f_lround(bo, bi, k) \
-- f_rl(bo, bi, 0, k); \
-- f_rl(bo, bi, 1, k); \
-- f_rl(bo, bi, 2, k); \
-- f_rl(bo, bi, 3, k)
-+#define f_rn(bo, bi, n, k) do { \
-+ bo[n] = crypto_ft_tab[0][byte(bi[n], 0)] ^ \
-+ crypto_ft_tab[1][byte(bi[(n + 1) & 3], 1)] ^ \
-+ crypto_ft_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
-+ crypto_ft_tab[3][byte(bi[(n + 3) & 3], 3)] ^ *(k + n); \
-+} while (0)
+- for(i = header_bytes - 1; i >= 0; i--) {
++ for (i = header_bytes - 1; i >= 0; i--) {
+ __u8 header = 0;
+ __u8 input;
+
+@@ -376,7 +377,7 @@ cat_shiftout(__u8 *data, __u16 data_bytes, __u16 header_bytes, __u8 pad_bits)
+ CDEBUG(("cat_shiftout: returned 0x%x\n", input));
+ cat_unpack(data, ((data_bytes + i) * BITS_PER_BYTE) - pad_bits,
+ &header, BITS_PER_BYTE);
+- if(input != header) {
++ if (input != header) {
+ CDEBUG(("VOYAGER CAT: cat_shiftout failed to return header 0x%x != 0x%x\n", input, header));
+ return 1;
+ }
+@@ -385,57 +386,57 @@ cat_shiftout(__u8 *data, __u16 data_bytes, __u16 header_bytes, __u8 pad_bits)
+ }
+
+ static int
+-cat_senddata(voyager_module_t *modp, voyager_asic_t *asicp,
++cat_senddata(voyager_module_t * modp, voyager_asic_t * asicp,
+ __u8 reg, __u8 value)
+ {
+ outb(VOYAGER_CAT_DRCYC, CAT_CMD);
+- if(!modp->scan_path_connected) {
+- if(asicp->asic_id != VOYAGER_CAT_ID) {
++ if (!modp->scan_path_connected) {
++ if (asicp->asic_id != VOYAGER_CAT_ID) {
+ CDEBUG(("VOYAGER CAT: ERROR: scan path disconnected when asic != CAT\n"));
+ return 1;
+ }
+ outb(VOYAGER_CAT_HEADER, CAT_DATA);
+ outb(value, CAT_DATA);
+- if(inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
++ if (inb(CAT_DATA) != VOYAGER_CAT_HEADER) {
+ CDEBUG(("cat_senddata: failed to get correct header response to sent data\n"));
+ return 1;
+ }
+- if(reg > VOYAGER_SUBADDRHI) {
++ if (reg > VOYAGER_SUBADDRHI) {
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ }
+-
+
-+#define f_nround(bo, bi, k) do {\
-+ f_rn(bo, bi, 0, k); \
-+ f_rn(bo, bi, 1, k); \
-+ f_rn(bo, bi, 2, k); \
-+ f_rn(bo, bi, 3, k); \
-+ k += 4; \
-+} while (0)
+ return 0;
+- }
+- else {
++ } else {
+ __u16 hbytes = asicp->ireg_length / BITS_PER_BYTE;
+- __u16 dbytes = (modp->num_asics - 1 + asicp->ireg_length)/BITS_PER_BYTE;
+- __u8 padbits, dseq[VOYAGER_MAX_SCAN_PATH],
+- hseq[VOYAGER_MAX_REG_SIZE];
++ __u16 dbytes =
++ (modp->num_asics - 1 + asicp->ireg_length) / BITS_PER_BYTE;
++ __u8 padbits, dseq[VOYAGER_MAX_SCAN_PATH],
++ hseq[VOYAGER_MAX_REG_SIZE];
+ int i;
+
+- if((padbits = (modp->num_asics - 1
+- + asicp->ireg_length) % BITS_PER_BYTE) != 0) {
++ if ((padbits = (modp->num_asics - 1
++ + asicp->ireg_length) % BITS_PER_BYTE) != 0) {
+ padbits = BITS_PER_BYTE - padbits;
+ dbytes++;
+ }
+- if(asicp->ireg_length % BITS_PER_BYTE)
++ if (asicp->ireg_length % BITS_PER_BYTE)
+ hbytes++;
+-
+
-+#define f_rl(bo, bi, n, k) do { \
-+ bo[n] = crypto_fl_tab[0][byte(bi[n], 0)] ^ \
-+ crypto_fl_tab[1][byte(bi[(n + 1) & 3], 1)] ^ \
-+ crypto_fl_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
-+ crypto_fl_tab[3][byte(bi[(n + 3) & 3], 3)] ^ *(k + n); \
-+} while (0)
+ cat_build_header(hseq, hbytes, 1, asicp->ireg_length);
+-
+- for(i = 0; i < dbytes + hbytes; i++)
+
-+#define f_lround(bo, bi, k) do {\
-+ f_rl(bo, bi, 0, k); \
-+ f_rl(bo, bi, 1, k); \
-+ f_rl(bo, bi, 2, k); \
-+ f_rl(bo, bi, 3, k); \
-+} while (0)
++ for (i = 0; i < dbytes + hbytes; i++)
+ dseq[i] = 0xff;
+ CDEBUG(("cat_senddata: dbytes=%d, hbytes=%d, padbits=%d\n",
+ dbytes, hbytes, padbits));
+ cat_pack(dseq, modp->num_asics - 1 + asicp->ireg_length,
+ hseq, hbytes * BITS_PER_BYTE);
+- cat_pack(dseq, asicp->asic_location, &value,
++ cat_pack(dseq, asicp->asic_location, &value,
+ asicp->ireg_length);
+ #ifdef VOYAGER_CAT_DEBUG
+ printk("dseq ");
+- for(i=0; i<hbytes+dbytes; i++) {
++ for (i = 0; i < hbytes + dbytes; i++) {
+ printk("0x%x ", dseq[i]);
+ }
+ printk("\n");
+@@ -445,121 +446,125 @@ cat_senddata(voyager_module_t *modp, voyager_asic_t *asicp,
+ }
- static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ static int
+-cat_write(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
+- __u8 value)
++cat_write(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg, __u8 value)
{
-- const struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
-+ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
- const __le32 *src = (const __le32 *)in;
- __le32 *dst = (__le32 *)out;
- u32 b0[4], b1[4];
-- const u32 *kp = E_KEY + 4;
-+ const u32 *kp = ctx->key_enc + 4;
-+ const int key_len = ctx->key_length;
+- if(cat_sendinst(modp, asicp, reg, VOYAGER_WRITE_CONFIG))
++ if (cat_sendinst(modp, asicp, reg, VOYAGER_WRITE_CONFIG))
+ return 1;
+ return cat_senddata(modp, asicp, reg, value);
+ }
-- b0[0] = le32_to_cpu(src[0]) ^ E_KEY[0];
-- b0[1] = le32_to_cpu(src[1]) ^ E_KEY[1];
-- b0[2] = le32_to_cpu(src[2]) ^ E_KEY[2];
-- b0[3] = le32_to_cpu(src[3]) ^ E_KEY[3];
-+ b0[0] = le32_to_cpu(src[0]) ^ ctx->key_enc[0];
-+ b0[1] = le32_to_cpu(src[1]) ^ ctx->key_enc[1];
-+ b0[2] = le32_to_cpu(src[2]) ^ ctx->key_enc[2];
-+ b0[3] = le32_to_cpu(src[3]) ^ ctx->key_enc[3];
+ static int
+-cat_read(voyager_module_t *modp, voyager_asic_t *asicp, __u8 reg,
+- __u8 *value)
++cat_read(voyager_module_t * modp, voyager_asic_t * asicp, __u8 reg,
++ __u8 * value)
+ {
+- if(cat_sendinst(modp, asicp, reg, VOYAGER_READ_CONFIG))
++ if (cat_sendinst(modp, asicp, reg, VOYAGER_READ_CONFIG))
+ return 1;
+ return cat_getdata(modp, asicp, reg, value);
+ }
-- if (ctx->key_length > 24) {
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-+ if (key_len > 24) {
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
+ static int
+-cat_subaddrsetup(voyager_module_t *modp, voyager_asic_t *asicp, __u16 offset,
++cat_subaddrsetup(voyager_module_t * modp, voyager_asic_t * asicp, __u16 offset,
+ __u16 len)
+ {
+ __u8 val;
+
+- if(len > 1) {
++ if (len > 1) {
+ /* set auto increment */
+ __u8 newval;
+-
+- if(cat_read(modp, asicp, VOYAGER_AUTO_INC_REG, &val)) {
++
++ if (cat_read(modp, asicp, VOYAGER_AUTO_INC_REG, &val)) {
+ CDEBUG(("cat_subaddrsetup: read of VOYAGER_AUTO_INC_REG failed\n"));
+ return 1;
+ }
+- CDEBUG(("cat_subaddrsetup: VOYAGER_AUTO_INC_REG = 0x%x\n", val));
++ CDEBUG(("cat_subaddrsetup: VOYAGER_AUTO_INC_REG = 0x%x\n",
++ val));
+ newval = val | VOYAGER_AUTO_INC;
+- if(newval != val) {
+- if(cat_write(modp, asicp, VOYAGER_AUTO_INC_REG, val)) {
++ if (newval != val) {
++ if (cat_write(modp, asicp, VOYAGER_AUTO_INC_REG, val)) {
+ CDEBUG(("cat_subaddrsetup: write to VOYAGER_AUTO_INC_REG failed\n"));
+ return 1;
+ }
+ }
}
-
-- if (ctx->key_length > 16) {
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-+ if (key_len > 16) {
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
+- if(cat_write(modp, asicp, VOYAGER_SUBADDRLO, (__u8)(offset &0xff))) {
++ if (cat_write(modp, asicp, VOYAGER_SUBADDRLO, (__u8) (offset & 0xff))) {
+ CDEBUG(("cat_subaddrsetup: write to SUBADDRLO failed\n"));
+ return 1;
+ }
+- if(asicp->subaddr > VOYAGER_SUBADDR_LO) {
+- if(cat_write(modp, asicp, VOYAGER_SUBADDRHI, (__u8)(offset >> 8))) {
++ if (asicp->subaddr > VOYAGER_SUBADDR_LO) {
++ if (cat_write
++ (modp, asicp, VOYAGER_SUBADDRHI, (__u8) (offset >> 8))) {
+ CDEBUG(("cat_subaddrsetup: write to SUBADDRHI failed\n"));
+ return 1;
+ }
+ cat_read(modp, asicp, VOYAGER_SUBADDRHI, &val);
+- CDEBUG(("cat_subaddrsetup: offset = %d, hi = %d\n", offset, val));
++ CDEBUG(("cat_subaddrsetup: offset = %d, hi = %d\n", offset,
++ val));
+ }
+ cat_read(modp, asicp, VOYAGER_SUBADDRLO, &val);
+ CDEBUG(("cat_subaddrsetup: offset = %d, lo = %d\n", offset, val));
+ return 0;
+ }
+-
++
+ static int
+-cat_subwrite(voyager_module_t *modp, voyager_asic_t *asicp, __u16 offset,
+- __u16 len, void *buf)
++cat_subwrite(voyager_module_t * modp, voyager_asic_t * asicp, __u16 offset,
++ __u16 len, void *buf)
+ {
+ int i, retval;
+
+ /* FIXME: need special actions for VOYAGER_CAT_ID here */
+- if(asicp->asic_id == VOYAGER_CAT_ID) {
++ if (asicp->asic_id == VOYAGER_CAT_ID) {
+ CDEBUG(("cat_subwrite: ATTEMPT TO WRITE TO CAT ASIC\n"));
+ /* FIXME -- This is supposed to be handled better
+ * There is a problem writing to the cat asic in the
+ * PSI. The 30us delay seems to work, though */
+ udelay(30);
+ }
+-
+- if((retval = cat_subaddrsetup(modp, asicp, offset, len)) != 0) {
++
++ if ((retval = cat_subaddrsetup(modp, asicp, offset, len)) != 0) {
+ printk("cat_subwrite: cat_subaddrsetup FAILED\n");
+ return retval;
+ }
+-
+- if(cat_sendinst(modp, asicp, VOYAGER_SUBADDRDATA, VOYAGER_WRITE_CONFIG)) {
++
++ if (cat_sendinst
++ (modp, asicp, VOYAGER_SUBADDRDATA, VOYAGER_WRITE_CONFIG)) {
+ printk("cat_subwrite: cat_sendinst FAILED\n");
+ return 1;
+ }
+- for(i = 0; i < len; i++) {
+- if(cat_senddata(modp, asicp, 0xFF, ((__u8 *)buf)[i])) {
+- printk("cat_subwrite: cat_sendata element at %d FAILED\n", i);
++ for (i = 0; i < len; i++) {
++ if (cat_senddata(modp, asicp, 0xFF, ((__u8 *) buf)[i])) {
++ printk
++ ("cat_subwrite: cat_sendata element at %d FAILED\n",
++ i);
+ return 1;
+ }
+ }
+ return 0;
+ }
+ static int
+-cat_subread(voyager_module_t *modp, voyager_asic_t *asicp, __u16 offset,
++cat_subread(voyager_module_t * modp, voyager_asic_t * asicp, __u16 offset,
+ __u16 len, void *buf)
+ {
+ int i, retval;
+
+- if((retval = cat_subaddrsetup(modp, asicp, offset, len)) != 0) {
++ if ((retval = cat_subaddrsetup(modp, asicp, offset, len)) != 0) {
+ CDEBUG(("cat_subread: cat_subaddrsetup FAILED\n"));
+ return retval;
}
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-- f_nround (b1, b0, kp);
-- f_nround (b0, b1, kp);
-- f_nround (b1, b0, kp);
-- f_lround (b0, b1, kp);
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
-+ f_nround(b1, b0, kp);
-+ f_nround(b0, b1, kp);
-+ f_nround(b1, b0, kp);
-+ f_lround(b0, b1, kp);
-
- dst[0] = cpu_to_le32(b0[0]);
- dst[1] = cpu_to_le32(b0[1]);
-@@ -361,53 +360,69 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
-
- /* decrypt a block of text */
+- if(cat_sendinst(modp, asicp, VOYAGER_SUBADDRDATA, VOYAGER_READ_CONFIG)) {
++ if (cat_sendinst(modp, asicp, VOYAGER_SUBADDRDATA, VOYAGER_READ_CONFIG)) {
+ CDEBUG(("cat_subread: cat_sendinst failed\n"));
+ return 1;
+ }
+- for(i = 0; i < len; i++) {
+- if(cat_getdata(modp, asicp, 0xFF,
+- &((__u8 *)buf)[i])) {
+- CDEBUG(("cat_subread: cat_getdata element %d failed\n", i));
++ for (i = 0; i < len; i++) {
++ if (cat_getdata(modp, asicp, 0xFF, &((__u8 *) buf)[i])) {
++ CDEBUG(("cat_subread: cat_getdata element %d failed\n",
++ i));
+ return 1;
+ }
+ }
+ return 0;
+ }
--#define i_nround(bo, bi, k) \
-- i_rn(bo, bi, 0, k); \
-- i_rn(bo, bi, 1, k); \
-- i_rn(bo, bi, 2, k); \
-- i_rn(bo, bi, 3, k); \
-- k -= 4
-
--#define i_lround(bo, bi, k) \
-- i_rl(bo, bi, 0, k); \
-- i_rl(bo, bi, 1, k); \
-- i_rl(bo, bi, 2, k); \
-- i_rl(bo, bi, 3, k)
-+#define i_rn(bo, bi, n, k) do { \
-+ bo[n] = crypto_it_tab[0][byte(bi[n], 0)] ^ \
-+ crypto_it_tab[1][byte(bi[(n + 3) & 3], 1)] ^ \
-+ crypto_it_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
-+ crypto_it_tab[3][byte(bi[(n + 1) & 3], 3)] ^ *(k + n); \
-+} while (0)
+ /* buffer for storing EPROM data read in during initialisation */
+ static __initdata __u8 eprom_buf[0xFFFF];
+ static voyager_module_t *voyager_initial_module;
+@@ -568,8 +573,7 @@ static voyager_module_t *voyager_initial_module;
+ * boot cpu *after* all memory initialisation has been done (so we can
+ * use kmalloc) but before smp initialisation, so we can probe the SMP
+ * configuration and pick up necessary information. */
+-void __init
+-voyager_cat_init(void)
++void __init voyager_cat_init(void)
+ {
+ voyager_module_t **modpp = &voyager_initial_module;
+ voyager_asic_t **asicpp;
+@@ -578,27 +582,29 @@ voyager_cat_init(void)
+ unsigned long qic_addr = 0;
+ __u8 qabc_data[0x20];
+ __u8 num_submodules, val;
+- voyager_eprom_hdr_t *eprom_hdr = (voyager_eprom_hdr_t *)&eprom_buf[0];
+-
++ voyager_eprom_hdr_t *eprom_hdr = (voyager_eprom_hdr_t *) & eprom_buf[0];
+
-+#define i_nround(bo, bi, k) do {\
-+ i_rn(bo, bi, 0, k); \
-+ i_rn(bo, bi, 1, k); \
-+ i_rn(bo, bi, 2, k); \
-+ i_rn(bo, bi, 3, k); \
-+ k += 4; \
-+} while (0)
+ __u8 cmos[4];
+ unsigned long addr;
+-
+
-+#define i_rl(bo, bi, n, k) do { \
-+ bo[n] = crypto_il_tab[0][byte(bi[n], 0)] ^ \
-+ crypto_il_tab[1][byte(bi[(n + 3) & 3], 1)] ^ \
-+ crypto_il_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
-+ crypto_il_tab[3][byte(bi[(n + 1) & 3], 3)] ^ *(k + n); \
-+} while (0)
+ /* initiallise the SUS mailbox */
+- for(i=0; i<sizeof(cmos); i++)
++ for (i = 0; i < sizeof(cmos); i++)
+ cmos[i] = voyager_extended_cmos_read(VOYAGER_DUMP_LOCATION + i);
+ addr = *(unsigned long *)cmos;
+- if((addr & 0xff000000) != 0xff000000) {
+- printk(KERN_ERR "Voyager failed to get SUS mailbox (addr = 0x%lx\n", addr);
++ if ((addr & 0xff000000) != 0xff000000) {
++ printk(KERN_ERR
++ "Voyager failed to get SUS mailbox (addr = 0x%lx\n",
++ addr);
+ } else {
+ static struct resource res;
+-
+
-+#define i_lround(bo, bi, k) do {\
-+ i_rl(bo, bi, 0, k); \
-+ i_rl(bo, bi, 1, k); \
-+ i_rl(bo, bi, 2, k); \
-+ i_rl(bo, bi, 3, k); \
-+} while (0)
+ res.name = "voyager SUS";
+ res.start = addr;
+- res.end = addr+0x3ff;
+-
++ res.end = addr + 0x3ff;
++
+ request_resource(&iomem_resource, &res);
+ voyager_SUS = (struct voyager_SUS *)
+- ioremap(addr, 0x400);
++ ioremap(addr, 0x400);
+ printk(KERN_NOTICE "Voyager SUS mailbox version 0x%x\n",
+ voyager_SUS->SUS_version);
+ voyager_SUS->kernel_version = VOYAGER_MAILBOX_VERSION;
+@@ -609,8 +615,6 @@ voyager_cat_init(void)
+ voyager_extended_vic_processors = 0;
+ voyager_quad_processors = 0;
- static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- {
-- const struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
-+ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
- const __le32 *src = (const __le32 *)in;
- __le32 *dst = (__le32 *)out;
- u32 b0[4], b1[4];
- const int key_len = ctx->key_length;
-- const u32 *kp = D_KEY + key_len + 20;
-+ const u32 *kp = ctx->key_dec + 4;
+-
+-
+ printk("VOYAGER: beginning CAT bus probe\n");
+ /* set up the SuperSet Port Block which tells us where the
+ * CAT communication port is */
+@@ -618,14 +622,14 @@ voyager_cat_init(void)
+ VDEBUG(("VOYAGER DEBUG: sspb = 0x%x\n", sspb));
+
+ /* now find out if were 8 slot or normal */
+- if((inb(VIC_PROC_WHO_AM_I) & EIGHT_SLOT_IDENTIFIER)
+- == EIGHT_SLOT_IDENTIFIER) {
++ if ((inb(VIC_PROC_WHO_AM_I) & EIGHT_SLOT_IDENTIFIER)
++ == EIGHT_SLOT_IDENTIFIER) {
+ voyager_8slot = 1;
+- printk(KERN_NOTICE "Voyager: Eight slot 51xx configuration detected\n");
++ printk(KERN_NOTICE
++ "Voyager: Eight slot 51xx configuration detected\n");
+ }
+
+- for(i = VOYAGER_MIN_MODULE;
+- i <= VOYAGER_MAX_MODULE; i++) {
++ for (i = VOYAGER_MIN_MODULE; i <= VOYAGER_MAX_MODULE; i++) {
+ __u8 input;
+ int asic;
+ __u16 eprom_size;
+@@ -643,21 +647,21 @@ voyager_cat_init(void)
+ outb(0xAA, CAT_DATA);
+ input = inb(CAT_DATA);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+- if(input != VOYAGER_CAT_HEADER) {
++ if (input != VOYAGER_CAT_HEADER) {
+ continue;
+ }
+ CDEBUG(("VOYAGER DEBUG: found module id 0x%x, %s\n", i,
+ cat_module_name(i)));
+- *modpp = kmalloc(sizeof(voyager_module_t), GFP_KERNEL); /*&voyager_module_storage[cat_count++];*/
+- if(*modpp == NULL) {
++ *modpp = kmalloc(sizeof(voyager_module_t), GFP_KERNEL); /*&voyager_module_storage[cat_count++]; */
++ if (*modpp == NULL) {
+ printk("**WARNING** kmalloc failure in cat_init\n");
+ continue;
+ }
+ memset(*modpp, 0, sizeof(voyager_module_t));
+ /* need temporary asic for cat_subread. It will be
+ * filled in correctly later */
+- (*modpp)->asic = kmalloc(sizeof(voyager_asic_t), GFP_KERNEL); /*&voyager_asic_storage[asic_count];*/
+- if((*modpp)->asic == NULL) {
++ (*modpp)->asic = kmalloc(sizeof(voyager_asic_t), GFP_KERNEL); /*&voyager_asic_storage[asic_count]; */
++ if ((*modpp)->asic == NULL) {
+ printk("**WARNING** kmalloc failure in cat_init\n");
+ continue;
+ }
+@@ -666,47 +670,52 @@ voyager_cat_init(void)
+ (*modpp)->asic->subaddr = VOYAGER_SUBADDR_HI;
+ (*modpp)->module_addr = i;
+ (*modpp)->scan_path_connected = 0;
+- if(i == VOYAGER_PSI) {
++ if (i == VOYAGER_PSI) {
+ /* Exception leg for modules with no EEPROM */
+ printk("Module \"%s\"\n", cat_module_name(i));
+ continue;
+ }
+-
++
+ CDEBUG(("cat_init: Reading eeprom for module 0x%x at offset %d\n", i, VOYAGER_XSUM_END_OFFSET));
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ cat_disconnect(*modpp, (*modpp)->asic);
+- if(cat_subread(*modpp, (*modpp)->asic,
+- VOYAGER_XSUM_END_OFFSET, sizeof(eprom_size),
+- &eprom_size)) {
+- printk("**WARNING**: Voyager couldn't read EPROM size for module 0x%x\n", i);
++ if (cat_subread(*modpp, (*modpp)->asic,
++ VOYAGER_XSUM_END_OFFSET, sizeof(eprom_size),
++ &eprom_size)) {
++ printk
++ ("**WARNING**: Voyager couldn't read EPROM size for module 0x%x\n",
++ i);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+- if(eprom_size > sizeof(eprom_buf)) {
+- printk("**WARNING**: Voyager insufficient size to read EPROM data, module 0x%x. Need %d\n", i, eprom_size);
++ if (eprom_size > sizeof(eprom_buf)) {
++ printk
++ ("**WARNING**: Voyager insufficient size to read EPROM data, module 0x%x. Need %d\n",
++ i, eprom_size);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+- CDEBUG(("cat_init: module 0x%x, eeprom_size %d\n", i, eprom_size));
+- if(cat_subread(*modpp, (*modpp)->asic, 0,
+- eprom_size, eprom_buf)) {
++ CDEBUG(("cat_init: module 0x%x, eeprom_size %d\n", i,
++ eprom_size));
++ if (cat_subread
++ (*modpp, (*modpp)->asic, 0, eprom_size, eprom_buf)) {
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ printk("Module \"%s\", version 0x%x, tracer 0x%x, asics %d\n",
+ cat_module_name(i), eprom_hdr->version_id,
+- *((__u32 *)eprom_hdr->tracer), eprom_hdr->num_asics);
++ *((__u32 *) eprom_hdr->tracer), eprom_hdr->num_asics);
+ (*modpp)->ee_size = eprom_hdr->ee_size;
+ (*modpp)->num_asics = eprom_hdr->num_asics;
+ asicpp = &((*modpp)->asic);
+ sp_offset = eprom_hdr->scan_path_offset;
+ /* All we really care about are the Quad cards. We
+- * identify them because they are in a processor slot
+- * and have only four asics */
+- if((i < 0x10 || (i>=0x14 && i < 0x1c) || i>0x1f)) {
++ * identify them because they are in a processor slot
++ * and have only four asics */
++ if ((i < 0x10 || (i >= 0x14 && i < 0x1c) || i > 0x1f)) {
+ modpp = &((*modpp)->next);
+ continue;
+ }
+@@ -717,16 +726,17 @@ voyager_cat_init(void)
+ &num_submodules);
+ /* lowest two bits, active low */
+ num_submodules = ~(0xfc | num_submodules);
+- CDEBUG(("VOYAGER CAT: %d submodules present\n", num_submodules));
+- if(num_submodules == 0) {
++ CDEBUG(("VOYAGER CAT: %d submodules present\n",
++ num_submodules));
++ if (num_submodules == 0) {
+ /* fill in the dyadic extended processors */
+ __u8 cpu = i & 0x07;
+
+ printk("Module \"%s\": Dyadic Processor Card\n",
+ cat_module_name(i));
+- voyager_extended_vic_processors |= (1<<cpu);
++ voyager_extended_vic_processors |= (1 << cpu);
+ cpu += 4;
+- voyager_extended_vic_processors |= (1<<cpu);
++ voyager_extended_vic_processors |= (1 << cpu);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+@@ -740,28 +750,32 @@ voyager_cat_init(void)
+ cat_write(*modpp, (*modpp)->asic, VOYAGER_SUBMODSELECT, val);
-- b0[0] = le32_to_cpu(src[0]) ^ E_KEY[key_len + 24];
-- b0[1] = le32_to_cpu(src[1]) ^ E_KEY[key_len + 25];
-- b0[2] = le32_to_cpu(src[2]) ^ E_KEY[key_len + 26];
-- b0[3] = le32_to_cpu(src[3]) ^ E_KEY[key_len + 27];
-+ b0[0] = le32_to_cpu(src[0]) ^ ctx->key_dec[0];
-+ b0[1] = le32_to_cpu(src[1]) ^ ctx->key_dec[1];
-+ b0[2] = le32_to_cpu(src[2]) ^ ctx->key_dec[2];
-+ b0[3] = le32_to_cpu(src[3]) ^ ctx->key_dec[3];
+ outb(VOYAGER_CAT_END, CAT_CMD);
+-
- if (key_len > 24) {
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
- }
+ CDEBUG(("cat_init: Reading eeprom for module 0x%x at offset %d\n", i, VOYAGER_XSUM_END_OFFSET));
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ cat_disconnect(*modpp, (*modpp)->asic);
+- if(cat_subread(*modpp, (*modpp)->asic,
+- VOYAGER_XSUM_END_OFFSET, sizeof(eprom_size),
+- &eprom_size)) {
+- printk("**WARNING**: Voyager couldn't read EPROM size for module 0x%x\n", i);
++ if (cat_subread(*modpp, (*modpp)->asic,
++ VOYAGER_XSUM_END_OFFSET, sizeof(eprom_size),
++ &eprom_size)) {
++ printk
++ ("**WARNING**: Voyager couldn't read EPROM size for module 0x%x\n",
++ i);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+- if(eprom_size > sizeof(eprom_buf)) {
+- printk("**WARNING**: Voyager insufficient size to read EPROM data, module 0x%x. Need %d\n", i, eprom_size);
++ if (eprom_size > sizeof(eprom_buf)) {
++ printk
++ ("**WARNING**: Voyager insufficient size to read EPROM data, module 0x%x. Need %d\n",
++ i, eprom_size);
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+- CDEBUG(("cat_init: module 0x%x, eeprom_size %d\n", i, eprom_size));
+- if(cat_subread(*modpp, (*modpp)->asic, 0,
+- eprom_size, eprom_buf)) {
++ CDEBUG(("cat_init: module 0x%x, eeprom_size %d\n", i,
++ eprom_size));
++ if (cat_subread
++ (*modpp, (*modpp)->asic, 0, eprom_size, eprom_buf)) {
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ continue;
+ }
+@@ -773,30 +787,35 @@ voyager_cat_init(void)
+ sp_offset = eprom_hdr->scan_path_offset;
+ /* get rid of the dummy CAT asic and read the real one */
+ kfree((*modpp)->asic);
+- for(asic=0; asic < (*modpp)->num_asics; asic++) {
++ for (asic = 0; asic < (*modpp)->num_asics; asic++) {
+ int j;
+- voyager_asic_t *asicp = *asicpp
+- = kzalloc(sizeof(voyager_asic_t), GFP_KERNEL); /*&voyager_asic_storage[asic_count++];*/
++ voyager_asic_t *asicp = *asicpp = kzalloc(sizeof(voyager_asic_t), GFP_KERNEL); /*&voyager_asic_storage[asic_count++]; */
+ voyager_sp_table_t *sp_table;
+ voyager_at_t *asic_table;
+ voyager_jtt_t *jtag_table;
+
+- if(asicp == NULL) {
+- printk("**WARNING** kmalloc failure in cat_init\n");
++ if (asicp == NULL) {
++ printk
++ ("**WARNING** kmalloc failure in cat_init\n");
+ continue;
+ }
+ asicpp = &(asicp->next);
+ asicp->asic_location = asic;
+- sp_table = (voyager_sp_table_t *)(eprom_buf + sp_offset);
++ sp_table =
++ (voyager_sp_table_t *) (eprom_buf + sp_offset);
+ asicp->asic_id = sp_table->asic_id;
+- asic_table = (voyager_at_t *)(eprom_buf + sp_table->asic_data_offset);
+- for(j=0; j<4; j++)
++ asic_table =
++ (voyager_at_t *) (eprom_buf +
++ sp_table->asic_data_offset);
++ for (j = 0; j < 4; j++)
+ asicp->jtag_id[j] = asic_table->jtag_id[j];
+- jtag_table = (voyager_jtt_t *)(eprom_buf + asic_table->jtag_offset);
++ jtag_table =
++ (voyager_jtt_t *) (eprom_buf +
++ asic_table->jtag_offset);
+ asicp->ireg_length = jtag_table->ireg_len;
+ asicp->bit_location = (*modpp)->inst_bits;
+ (*modpp)->inst_bits += asicp->ireg_length;
+- if(asicp->ireg_length > (*modpp)->largest_reg)
++ if (asicp->ireg_length > (*modpp)->largest_reg)
+ (*modpp)->largest_reg = asicp->ireg_length;
+ if (asicp->ireg_length < (*modpp)->smallest_reg ||
+ (*modpp)->smallest_reg == 0)
+@@ -804,15 +823,13 @@ voyager_cat_init(void)
+ CDEBUG(("asic 0x%x, ireg_length=%d, bit_location=%d\n",
+ asicp->asic_id, asicp->ireg_length,
+ asicp->bit_location));
+- if(asicp->asic_id == VOYAGER_QUAD_QABC) {
++ if (asicp->asic_id == VOYAGER_QUAD_QABC) {
+ CDEBUG(("VOYAGER CAT: QABC ASIC found\n"));
+ qabc_asic = asicp;
+ }
+ sp_offset += sizeof(voyager_sp_table_t);
+ }
+- CDEBUG(("Module inst_bits = %d, largest_reg = %d, smallest_reg=%d\n",
+- (*modpp)->inst_bits, (*modpp)->largest_reg,
+- (*modpp)->smallest_reg));
++ CDEBUG(("Module inst_bits = %d, largest_reg = %d, smallest_reg=%d\n", (*modpp)->inst_bits, (*modpp)->largest_reg, (*modpp)->smallest_reg));
+ /* OK, now we have the QUAD ASICs set up, use them.
+ * we need to:
+ *
+@@ -828,10 +845,11 @@ voyager_cat_init(void)
+ qic_addr = qabc_data[5] << 8;
+ qic_addr = (qic_addr | qabc_data[6]) << 8;
+ qic_addr = (qic_addr | qabc_data[7]) << 8;
+- printk("Module \"%s\": Quad Processor Card; CPI 0x%lx, SET=0x%x\n",
+- cat_module_name(i), qic_addr, qabc_data[8]);
++ printk
++ ("Module \"%s\": Quad Processor Card; CPI 0x%lx, SET=0x%x\n",
++ cat_module_name(i), qic_addr, qabc_data[8]);
+ #if 0 /* plumbing fails---FIXME */
+- if((qabc_data[8] & 0xf0) == 0) {
++ if ((qabc_data[8] & 0xf0) == 0) {
+ /* FIXME: 32 way 8 CPU slot monster cannot be
+ * plumbed this way---need to check for it */
+
+@@ -842,94 +860,97 @@ voyager_cat_init(void)
+ #ifdef VOYAGER_CAT_DEBUG
+ /* verify plumbing */
+ cat_subread(*modpp, qabc_asic, 8, 1, &qabc_data[8]);
+- if((qabc_data[8] & 0xf0) == 0) {
+- CDEBUG(("PLUMBING FAILED: 0x%x\n", qabc_data[8]));
++ if ((qabc_data[8] & 0xf0) == 0) {
++ CDEBUG(("PLUMBING FAILED: 0x%x\n",
++ qabc_data[8]));
+ }
+ #endif
+ }
+ #endif
- if (key_len > 16) {
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
- }
+ {
+- struct resource *res = kzalloc(sizeof(struct resource),GFP_KERNEL);
++ struct resource *res =
++ kzalloc(sizeof(struct resource), GFP_KERNEL);
+ res->name = kmalloc(128, GFP_KERNEL);
+- sprintf((char *)res->name, "Voyager %s Quad CPI", cat_module_name(i));
++ sprintf((char *)res->name, "Voyager %s Quad CPI",
++ cat_module_name(i));
+ res->start = qic_addr;
+ res->end = qic_addr + 0x3ff;
+ request_resource(&iomem_resource, res);
+ }
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-- i_nround (b1, b0, kp);
-- i_nround (b0, b1, kp);
-- i_nround (b1, b0, kp);
-- i_lround (b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_nround(b0, b1, kp);
-+ i_nround(b1, b0, kp);
-+ i_lround(b0, b1, kp);
+ qic_addr = (unsigned long)ioremap(qic_addr, 0x400);
+-
+- for(j = 0; j < 4; j++) {
++
++ for (j = 0; j < 4; j++) {
+ __u8 cpu;
- dst[0] = cpu_to_le32(b0[0]);
- dst[1] = cpu_to_le32(b0[1]);
-@@ -415,14 +430,13 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- dst[3] = cpu_to_le32(b0[3]);
- }
+- if(voyager_8slot) {
++ if (voyager_8slot) {
+ /* 8 slot has a different mapping,
+ * each slot has only one vic line, so
+ * 1 cpu in each slot must be < 8 */
+- cpu = (i & 0x07) + j*8;
++ cpu = (i & 0x07) + j * 8;
+ } else {
+- cpu = (i & 0x03) + j*4;
++ cpu = (i & 0x03) + j * 4;
+ }
+- if( (qabc_data[8] & (1<<j))) {
+- voyager_extended_vic_processors |= (1<<cpu);
++ if ((qabc_data[8] & (1 << j))) {
++ voyager_extended_vic_processors |= (1 << cpu);
+ }
+- if(qabc_data[8] & (1<<(j+4)) ) {
++ if (qabc_data[8] & (1 << (j + 4))) {
+ /* Second SET register plumbed: Quad
+ * card has two VIC connected CPUs.
+ * Secondary cannot be booted as a VIC
+ * CPU */
+- voyager_extended_vic_processors |= (1<<cpu);
+- voyager_allowed_boot_processors &= (~(1<<cpu));
++ voyager_extended_vic_processors |= (1 << cpu);
++ voyager_allowed_boot_processors &=
++ (~(1 << cpu));
+ }
--
- static struct crypto_alg aes_alg = {
- .cra_name = "aes",
- .cra_driver_name = "aes-generic",
- .cra_priority = 100,
- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
- .cra_blocksize = AES_BLOCK_SIZE,
-- .cra_ctxsize = sizeof(struct aes_ctx),
-+ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
- .cra_alignmask = 3,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
-@@ -430,9 +444,9 @@ static struct crypto_alg aes_alg = {
- .cipher = {
- .cia_min_keysize = AES_MIN_KEY_SIZE,
- .cia_max_keysize = AES_MAX_KEY_SIZE,
-- .cia_setkey = aes_set_key,
-- .cia_encrypt = aes_encrypt,
-- .cia_decrypt = aes_decrypt
-+ .cia_setkey = crypto_aes_set_key,
-+ .cia_encrypt = aes_encrypt,
-+ .cia_decrypt = aes_decrypt
+- voyager_quad_processors |= (1<<cpu);
++ voyager_quad_processors |= (1 << cpu);
+ voyager_quad_cpi_addr[cpu] = (struct voyager_qic_cpi *)
+- (qic_addr+(j<<8));
++ (qic_addr + (j << 8));
+ CDEBUG(("CPU%d: CPI address 0x%lx\n", cpu,
+ (unsigned long)voyager_quad_cpi_addr[cpu]));
}
+ outb(VOYAGER_CAT_END, CAT_CMD);
+
+-
+-
+ *asicpp = NULL;
+ modpp = &((*modpp)->next);
}
- };
-diff --git a/crypto/algapi.c b/crypto/algapi.c
-index 8383282..e65cb50 100644
---- a/crypto/algapi.c
-+++ b/crypto/algapi.c
-@@ -472,7 +472,7 @@ int crypto_check_attr_type(struct rtattr **tb, u32 type)
+ *modpp = NULL;
+- printk("CAT Bus Initialisation finished: extended procs 0x%x, quad procs 0x%x, allowed vic boot = 0x%x\n", voyager_extended_vic_processors, voyager_quad_processors, voyager_allowed_boot_processors);
++ printk
++ ("CAT Bus Initialisation finished: extended procs 0x%x, quad procs 0x%x, allowed vic boot = 0x%x\n",
++ voyager_extended_vic_processors, voyager_quad_processors,
++ voyager_allowed_boot_processors);
+ request_resource(&ioport_resource, &vic_res);
+- if(voyager_quad_processors)
++ if (voyager_quad_processors)
+ request_resource(&ioport_resource, &qic_res);
+ /* set up the front power switch */
}
- EXPORT_SYMBOL_GPL(crypto_check_attr_type);
--struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
-+const char *crypto_attr_alg_name(struct rtattr *rta)
+-int
+-voyager_cat_readb(__u8 module, __u8 asic, int reg)
++int voyager_cat_readb(__u8 module, __u8 asic, int reg)
{
- struct crypto_attr_alg *alga;
+ return 0;
+ }
-@@ -486,7 +486,21 @@ struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
- alga = RTA_DATA(rta);
- alga->name[CRYPTO_MAX_ALG_NAME - 1] = 0;
+-static int
+-cat_disconnect(voyager_module_t *modp, voyager_asic_t *asicp)
++static int cat_disconnect(voyager_module_t * modp, voyager_asic_t * asicp)
+ {
+ __u8 val;
+ int err = 0;
-- return crypto_alg_mod_lookup(alga->name, type, mask);
-+ return alga->name;
-+}
-+EXPORT_SYMBOL_GPL(crypto_attr_alg_name);
-+
-+struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
-+{
-+ const char *name;
-+ int err;
-+
-+ name = crypto_attr_alg_name(rta);
-+ err = PTR_ERR(name);
-+ if (IS_ERR(name))
-+ return ERR_PTR(err);
-+
-+ return crypto_alg_mod_lookup(name, type, mask);
+- if(!modp->scan_path_connected)
++ if (!modp->scan_path_connected)
+ return 0;
+- if(asicp->asic_id != VOYAGER_CAT_ID) {
++ if (asicp->asic_id != VOYAGER_CAT_ID) {
+ CDEBUG(("cat_disconnect: ASIC is not CAT\n"));
+ return 1;
+ }
+ err = cat_read(modp, asicp, VOYAGER_SCANPATH, &val);
+- if(err) {
++ if (err) {
+ CDEBUG(("cat_disconnect: failed to read SCANPATH\n"));
+ return err;
+ }
+ val &= VOYAGER_DISCONNECT_ASIC;
+ err = cat_write(modp, asicp, VOYAGER_SCANPATH, val);
+- if(err) {
++ if (err) {
+ CDEBUG(("cat_disconnect: failed to write SCANPATH\n"));
+ return err;
+ }
+@@ -940,27 +961,26 @@ cat_disconnect(voyager_module_t *modp, voyager_asic_t *asicp)
+ return 0;
}
- EXPORT_SYMBOL_GPL(crypto_attr_alg);
-@@ -605,6 +619,53 @@ int crypto_tfm_in_queue(struct crypto_queue *queue, struct crypto_tfm *tfm)
+-static int
+-cat_connect(voyager_module_t *modp, voyager_asic_t *asicp)
++static int cat_connect(voyager_module_t * modp, voyager_asic_t * asicp)
+ {
+ __u8 val;
+ int err = 0;
+
+- if(modp->scan_path_connected)
++ if (modp->scan_path_connected)
+ return 0;
+- if(asicp->asic_id != VOYAGER_CAT_ID) {
++ if (asicp->asic_id != VOYAGER_CAT_ID) {
+ CDEBUG(("cat_connect: ASIC is not CAT\n"));
+ return 1;
+ }
+
+ err = cat_read(modp, asicp, VOYAGER_SCANPATH, &val);
+- if(err) {
++ if (err) {
+ CDEBUG(("cat_connect: failed to read SCANPATH\n"));
+ return err;
+ }
+ val |= VOYAGER_CONNECT_ASIC;
+ err = cat_write(modp, asicp, VOYAGER_SCANPATH, val);
+- if(err) {
++ if (err) {
+ CDEBUG(("cat_connect: failed to write SCANPATH\n"));
+ return err;
+ }
+@@ -971,11 +991,10 @@ cat_connect(voyager_module_t *modp, voyager_asic_t *asicp)
+ return 0;
}
- EXPORT_SYMBOL_GPL(crypto_tfm_in_queue);
-+static inline void crypto_inc_byte(u8 *a, unsigned int size)
-+{
-+ u8 *b = (a + size);
-+ u8 c;
-+
-+ for (; size; size--) {
-+ c = *--b + 1;
-+ *b = c;
-+ if (c)
-+ break;
-+ }
-+}
-+
-+void crypto_inc(u8 *a, unsigned int size)
-+{
-+ __be32 *b = (__be32 *)(a + size);
-+ u32 c;
-+
-+ for (; size >= 4; size -= 4) {
-+ c = be32_to_cpu(*--b) + 1;
-+ *b = cpu_to_be32(c);
-+ if (c)
-+ return;
-+ }
-+
-+ crypto_inc_byte(a, size);
-+}
-+EXPORT_SYMBOL_GPL(crypto_inc);
-+
-+static inline void crypto_xor_byte(u8 *a, const u8 *b, unsigned int size)
-+{
-+ for (; size; size--)
-+ *a++ ^= *b++;
-+}
-+
-+void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
-+{
-+ u32 *a = (u32 *)dst;
-+ u32 *b = (u32 *)src;
-+
-+ for (; size >= 4; size -= 4)
-+ *a++ ^= *b++;
-+
-+ crypto_xor_byte((u8 *)a, (u8 *)b, size);
-+}
-+EXPORT_SYMBOL_GPL(crypto_xor);
+-void
+-voyager_cat_power_off(void)
++void voyager_cat_power_off(void)
+ {
+ /* Power the machine off by writing to the PSI over the CAT
+- * bus */
++ * bus */
+ __u8 data;
+ voyager_module_t psi = { 0 };
+ voyager_asic_t psi_asic = { 0 };
+@@ -1009,8 +1028,7 @@ voyager_cat_power_off(void)
+
+ struct voyager_status voyager_status = { 0 };
+
+-void
+-voyager_cat_psi(__u8 cmd, __u16 reg, __u8 *data)
++void voyager_cat_psi(__u8 cmd, __u16 reg, __u8 * data)
+ {
+ voyager_module_t psi = { 0 };
+ voyager_asic_t psi_asic = { 0 };
+@@ -1027,7 +1045,7 @@ voyager_cat_psi(__u8 cmd, __u16 reg, __u8 *data)
+ outb(VOYAGER_PSI, VOYAGER_CAT_CONFIG_PORT);
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ cat_disconnect(&psi, &psi_asic);
+- switch(cmd) {
++ switch (cmd) {
+ case VOYAGER_PSI_READ:
+ cat_read(&psi, &psi_asic, reg, data);
+ break;
+@@ -1047,8 +1065,7 @@ voyager_cat_psi(__u8 cmd, __u16 reg, __u8 *data)
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ }
+
+-void
+-voyager_cat_do_common_interrupt(void)
++void voyager_cat_do_common_interrupt(void)
+ {
+ /* This is caused either by a memory parity error or something
+ * in the PSI */
+@@ -1057,7 +1074,7 @@ voyager_cat_do_common_interrupt(void)
+ voyager_asic_t psi_asic = { 0 };
+ struct voyager_psi psi_reg;
+ int i;
+- re_read:
++ re_read:
+ psi.asic = &psi_asic;
+ psi.asic->asic_id = VOYAGER_CAT_ID;
+ psi.asic->subaddr = VOYAGER_SUBADDR_HI;
+@@ -1072,43 +1089,45 @@ voyager_cat_do_common_interrupt(void)
+ cat_disconnect(&psi, &psi_asic);
+ /* Read the status. NOTE: Need to read *all* the PSI regs here
+ * otherwise the cmn int will be reasserted */
+- for(i = 0; i < sizeof(psi_reg.regs); i++) {
+- cat_read(&psi, &psi_asic, i, &((__u8 *)&psi_reg.regs)[i]);
++ for (i = 0; i < sizeof(psi_reg.regs); i++) {
++ cat_read(&psi, &psi_asic, i, &((__u8 *) & psi_reg.regs)[i]);
+ }
+ outb(VOYAGER_CAT_END, CAT_CMD);
+- if((psi_reg.regs.checkbit & 0x02) == 0) {
++ if ((psi_reg.regs.checkbit & 0x02) == 0) {
+ psi_reg.regs.checkbit |= 0x02;
+ cat_write(&psi, &psi_asic, 5, psi_reg.regs.checkbit);
+ printk("VOYAGER RE-READ PSI\n");
+ goto re_read;
+ }
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+- for(i = 0; i < sizeof(psi_reg.subregs); i++) {
++ for (i = 0; i < sizeof(psi_reg.subregs); i++) {
+ /* This looks strange, but the PSI doesn't do auto increment
+ * correctly */
+- cat_subread(&psi, &psi_asic, VOYAGER_PSI_SUPPLY_REG + i,
+- 1, &((__u8 *)&psi_reg.subregs)[i]);
++ cat_subread(&psi, &psi_asic, VOYAGER_PSI_SUPPLY_REG + i,
++ 1, &((__u8 *) & psi_reg.subregs)[i]);
+ }
+ outb(VOYAGER_CAT_END, CAT_CMD);
+ #ifdef VOYAGER_CAT_DEBUG
+ printk("VOYAGER PSI: ");
+- for(i=0; i<sizeof(psi_reg.regs); i++)
+- printk("%02x ", ((__u8 *)&psi_reg.regs)[i]);
++ for (i = 0; i < sizeof(psi_reg.regs); i++)
++ printk("%02x ", ((__u8 *) & psi_reg.regs)[i]);
+ printk("\n ");
+- for(i=0; i<sizeof(psi_reg.subregs); i++)
+- printk("%02x ", ((__u8 *)&psi_reg.subregs)[i]);
++ for (i = 0; i < sizeof(psi_reg.subregs); i++)
++ printk("%02x ", ((__u8 *) & psi_reg.subregs)[i]);
+ printk("\n");
+ #endif
+- if(psi_reg.regs.intstatus & PSI_MON) {
++ if (psi_reg.regs.intstatus & PSI_MON) {
+ /* switch off or power fail */
+
+- if(psi_reg.subregs.supply & PSI_SWITCH_OFF) {
+- if(voyager_status.switch_off) {
+- printk(KERN_ERR "Voyager front panel switch turned off again---Immediate power off!\n");
++ if (psi_reg.subregs.supply & PSI_SWITCH_OFF) {
++ if (voyager_status.switch_off) {
++ printk(KERN_ERR
++ "Voyager front panel switch turned off again---Immediate power off!\n");
+ voyager_cat_power_off();
+ /* not reached */
+ } else {
+- printk(KERN_ERR "Voyager front panel switch turned off\n");
++ printk(KERN_ERR
++ "Voyager front panel switch turned off\n");
+ voyager_status.switch_off = 1;
+ voyager_status.request_from_kernel = 1;
+ wake_up_process(voyager_thread);
+@@ -1127,7 +1146,7 @@ voyager_cat_do_common_interrupt(void)
+
+ VDEBUG(("Voyager ac fail reg 0x%x\n",
+ psi_reg.subregs.ACfail));
+- if((psi_reg.subregs.ACfail & AC_FAIL_STAT_CHANGE) == 0) {
++ if ((psi_reg.subregs.ACfail & AC_FAIL_STAT_CHANGE) == 0) {
+ /* No further update */
+ return;
+ }
+@@ -1135,20 +1154,20 @@ voyager_cat_do_common_interrupt(void)
+ /* Don't bother trying to find out who failed.
+ * FIXME: This probably makes the code incorrect on
+ * anything other than a 345x */
+- for(i=0; i< 5; i++) {
+- if( psi_reg.subregs.ACfail &(1<<i)) {
++ for (i = 0; i < 5; i++) {
++ if (psi_reg.subregs.ACfail & (1 << i)) {
+ break;
+ }
+ }
+ printk(KERN_NOTICE "AC FAIL IN SUPPLY %d\n", i);
+ #endif
+ /* DON'T do this: it shuts down the AC PSI
+- outb(VOYAGER_CAT_RUN, CAT_CMD);
+- data = PSI_MASK_MASK | i;
+- cat_subwrite(&psi, &psi_asic, VOYAGER_PSI_MASK,
+- 1, &data);
+- outb(VOYAGER_CAT_END, CAT_CMD);
+- */
++ outb(VOYAGER_CAT_RUN, CAT_CMD);
++ data = PSI_MASK_MASK | i;
++ cat_subwrite(&psi, &psi_asic, VOYAGER_PSI_MASK,
++ 1, &data);
++ outb(VOYAGER_CAT_END, CAT_CMD);
++ */
+ printk(KERN_ERR "Voyager AC power failure\n");
+ outb(VOYAGER_CAT_RUN, CAT_CMD);
+ data = PSI_COLD_START;
+@@ -1159,16 +1178,16 @@ voyager_cat_do_common_interrupt(void)
+ voyager_status.request_from_kernel = 1;
+ wake_up_process(voyager_thread);
+ }
+-
+-
+- } else if(psi_reg.regs.intstatus & PSI_FAULT) {
+
- static int __init crypto_algapi_init(void)
++ } else if (psi_reg.regs.intstatus & PSI_FAULT) {
+ /* Major fault! */
+- printk(KERN_ERR "Voyager PSI Detected major fault, immediate power off!\n");
++ printk(KERN_ERR
++ "Voyager PSI Detected major fault, immediate power off!\n");
+ voyager_cat_power_off();
+ /* not reached */
+- } else if(psi_reg.regs.intstatus & (PSI_DC_FAIL | PSI_ALARM
+- | PSI_CURRENT | PSI_DVM
+- | PSI_PSCFAULT | PSI_STAT_CHG)) {
++ } else if (psi_reg.regs.intstatus & (PSI_DC_FAIL | PSI_ALARM
++ | PSI_CURRENT | PSI_DVM
++ | PSI_PSCFAULT | PSI_STAT_CHG)) {
+ /* other psi fault */
+
+ printk(KERN_WARNING "Voyager PSI status 0x%x\n", data);
+diff --git a/arch/x86/mach-voyager/voyager_smp.c b/arch/x86/mach-voyager/voyager_smp.c
+index 88124dd..dffa786 100644
+--- a/arch/x86/mach-voyager/voyager_smp.c
++++ b/arch/x86/mach-voyager/voyager_smp.c
+@@ -32,7 +32,8 @@
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = { &init_mm, 0 };
+
+ /* CPU IRQ affinity -- set to all ones initially */
+-static unsigned long cpu_irq_affinity[NR_CPUS] __cacheline_aligned = { [0 ... NR_CPUS-1] = ~0UL };
++static unsigned long cpu_irq_affinity[NR_CPUS] __cacheline_aligned =
++ {[0 ... NR_CPUS-1] = ~0UL };
+
+ /* per CPU data structure (for /proc/cpuinfo et al), visible externally
+ * indexed physically */
+@@ -76,7 +77,6 @@ EXPORT_SYMBOL(cpu_online_map);
+ * by scheduler but indexed physically */
+ cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
+
+-
+ /* The internal functions */
+ static void send_CPI(__u32 cpuset, __u8 cpi);
+ static void ack_CPI(__u8 cpi);
+@@ -101,94 +101,86 @@ int hard_smp_processor_id(void);
+ int safe_smp_processor_id(void);
+
+ /* Inline functions */
+-static inline void
+-send_one_QIC_CPI(__u8 cpu, __u8 cpi)
++static inline void send_one_QIC_CPI(__u8 cpu, __u8 cpi)
{
- crypto_init_proc();
-diff --git a/crypto/api.c b/crypto/api.c
-index 1f5c724..a2496d1 100644
---- a/crypto/api.c
-+++ b/crypto/api.c
-@@ -137,7 +137,7 @@ static struct crypto_alg *crypto_larval_alloc(const char *name, u32 type,
- return alg;
+ voyager_quad_cpi_addr[cpu]->qic_cpi[cpi].cpi =
+- (smp_processor_id() << 16) + cpi;
++ (smp_processor_id() << 16) + cpi;
}
--static void crypto_larval_kill(struct crypto_alg *alg)
-+void crypto_larval_kill(struct crypto_alg *alg)
+-static inline void
+-send_QIC_CPI(__u32 cpuset, __u8 cpi)
++static inline void send_QIC_CPI(__u32 cpuset, __u8 cpi)
{
- struct crypto_larval *larval = (void *)alg;
+ int cpu;
-@@ -147,6 +147,7 @@ static void crypto_larval_kill(struct crypto_alg *alg)
- complete_all(&larval->completion);
- crypto_alg_put(alg);
+ for_each_online_cpu(cpu) {
+- if(cpuset & (1<<cpu)) {
++ if (cpuset & (1 << cpu)) {
+ #ifdef VOYAGER_DEBUG
+- if(!cpu_isset(cpu, cpu_online_map))
+- VDEBUG(("CPU%d sending cpi %d to CPU%d not in cpu_online_map\n", hard_smp_processor_id(), cpi, cpu));
++ if (!cpu_isset(cpu, cpu_online_map))
++ VDEBUG(("CPU%d sending cpi %d to CPU%d not in "
++ "cpu_online_map\n",
++ hard_smp_processor_id(), cpi, cpu));
+ #endif
+ send_one_QIC_CPI(cpu, cpi - QIC_CPI_OFFSET);
+ }
+ }
}
-+EXPORT_SYMBOL_GPL(crypto_larval_kill);
- static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
+-static inline void
+-wrapper_smp_local_timer_interrupt(void)
++static inline void wrapper_smp_local_timer_interrupt(void)
{
-@@ -176,11 +177,9 @@ static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
- return alg;
+ irq_enter();
+ smp_local_timer_interrupt();
+ irq_exit();
}
--struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
-+struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask)
+-static inline void
+-send_one_CPI(__u8 cpu, __u8 cpi)
++static inline void send_one_CPI(__u8 cpu, __u8 cpi)
{
- struct crypto_alg *alg;
-- struct crypto_alg *larval;
-- int ok;
+- if(voyager_quad_processors & (1<<cpu))
++ if (voyager_quad_processors & (1 << cpu))
+ send_one_QIC_CPI(cpu, cpi - QIC_CPI_OFFSET);
+ else
+- send_CPI(1<<cpu, cpi);
++ send_CPI(1 << cpu, cpi);
+ }
- if (!name)
- return ERR_PTR(-ENOENT);
-@@ -193,7 +192,17 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
- if (alg)
- return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg;
+-static inline void
+-send_CPI_allbutself(__u8 cpi)
++static inline void send_CPI_allbutself(__u8 cpi)
+ {
+ __u8 cpu = smp_processor_id();
+ __u32 mask = cpus_addr(cpu_online_map)[0] & ~(1 << cpu);
+ send_CPI(mask, cpi);
+ }
-- larval = crypto_larval_alloc(name, type, mask);
-+ return crypto_larval_alloc(name, type, mask);
-+}
-+EXPORT_SYMBOL_GPL(crypto_larval_lookup);
-+
-+struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
-+{
-+ struct crypto_alg *alg;
-+ struct crypto_alg *larval;
-+ int ok;
-+
-+ larval = crypto_larval_lookup(name, type, mask);
- if (IS_ERR(larval) || !crypto_is_larval(larval))
- return larval;
+-static inline int
+-is_cpu_quad(void)
++static inline int is_cpu_quad(void)
+ {
+ __u8 cpumask = inb(VIC_PROC_WHO_AM_I);
+ return ((cpumask & QUAD_IDENTIFIER) == QUAD_IDENTIFIER);
+ }
-diff --git a/crypto/authenc.c b/crypto/authenc.c
-index 126a529..ed8ac5a 100644
---- a/crypto/authenc.c
-+++ b/crypto/authenc.c
-@@ -10,22 +10,21 @@
- *
- */
+-static inline int
+-is_cpu_extended(void)
++static inline int is_cpu_extended(void)
+ {
+ __u8 cpu = hard_smp_processor_id();
--#include <crypto/algapi.h>
-+#include <crypto/aead.h>
-+#include <crypto/internal/skcipher.h>
-+#include <crypto/authenc.h>
-+#include <crypto/scatterwalk.h>
- #include <linux/err.h>
- #include <linux/init.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-+#include <linux/rtnetlink.h>
- #include <linux/slab.h>
- #include <linux/spinlock.h>
+- return(voyager_extended_vic_processors & (1<<cpu));
++ return (voyager_extended_vic_processors & (1 << cpu));
+ }
--#include "scatterwalk.h"
--
- struct authenc_instance_ctx {
- struct crypto_spawn auth;
-- struct crypto_spawn enc;
--
-- unsigned int authsize;
-- unsigned int enckeylen;
-+ struct crypto_skcipher_spawn enc;
- };
+-static inline int
+-is_cpu_vic_boot(void)
++static inline int is_cpu_vic_boot(void)
+ {
+ __u8 cpu = hard_smp_processor_id();
- struct crypto_authenc_ctx {
-@@ -37,19 +36,31 @@ struct crypto_authenc_ctx {
- static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
- unsigned int keylen)
+- return(voyager_extended_vic_processors
+- & voyager_allowed_boot_processors & (1<<cpu));
++ return (voyager_extended_vic_processors
++ & voyager_allowed_boot_processors & (1 << cpu));
+ }
+
+-
+-static inline void
+-ack_CPI(__u8 cpi)
++static inline void ack_CPI(__u8 cpi)
{
-- struct authenc_instance_ctx *ictx =
-- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
-- unsigned int enckeylen = ictx->enckeylen;
- unsigned int authkeylen;
-+ unsigned int enckeylen;
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct crypto_hash *auth = ctx->auth;
- struct crypto_ablkcipher *enc = ctx->enc;
-+ struct rtattr *rta = (void *)key;
-+ struct crypto_authenc_key_param *param;
- int err = -EINVAL;
+- switch(cpi) {
++ switch (cpi) {
+ case VIC_CPU_BOOT_CPI:
+- if(is_cpu_quad() && !is_cpu_vic_boot())
++ if (is_cpu_quad() && !is_cpu_vic_boot())
+ ack_QIC_CPI(cpi);
+ else
+ ack_VIC_CPI(cpi);
+ break;
+ case VIC_SYS_INT:
+- case VIC_CMN_INT:
++ case VIC_CMN_INT:
+ /* These are slightly strange. Even on the Quad card,
+ * They are vectored as VIC CPIs */
+- if(is_cpu_quad())
++ if (is_cpu_quad())
+ ack_special_QIC_CPI(cpi);
+ else
+ ack_VIC_CPI(cpi);
+@@ -205,11 +197,11 @@ ack_CPI(__u8 cpi)
+ * 8259 IRQs except that masks and things must be kept per processor
+ */
+ static struct irq_chip vic_chip = {
+- .name = "VIC",
+- .startup = startup_vic_irq,
+- .mask = mask_vic_irq,
+- .unmask = unmask_vic_irq,
+- .set_affinity = set_vic_irq_affinity,
++ .name = "VIC",
++ .startup = startup_vic_irq,
++ .mask = mask_vic_irq,
++ .unmask = unmask_vic_irq,
++ .set_affinity = set_vic_irq_affinity,
+ };
+
+ /* used to count up as CPUs are brought on line (starts at 0) */
+@@ -223,7 +215,7 @@ static __u32 trampoline_base;
+ /* The per cpu profile stuff - used in smp_local_timer_interrupt */
+ static DEFINE_PER_CPU(int, prof_multiplier) = 1;
+ static DEFINE_PER_CPU(int, prof_old_multiplier) = 1;
+-static DEFINE_PER_CPU(int, prof_counter) = 1;
++static DEFINE_PER_CPU(int, prof_counter) = 1;
+
+ /* the map used to check if a CPU has booted */
+ static __u32 cpu_booted_map;
+@@ -235,7 +227,6 @@ static cpumask_t smp_commenced_mask = CPU_MASK_NONE;
+ /* This is for the new dynamic CPU boot code */
+ cpumask_t cpu_callin_map = CPU_MASK_NONE;
+ cpumask_t cpu_callout_map = CPU_MASK_NONE;
+-EXPORT_SYMBOL(cpu_callout_map);
+ cpumask_t cpu_possible_map = CPU_MASK_NONE;
+ EXPORT_SYMBOL(cpu_possible_map);
-- if (keylen < enckeylen) {
-- crypto_aead_set_flags(authenc, CRYPTO_TFM_RES_BAD_KEY_LEN);
-- goto out;
-- }
-+ if (!RTA_OK(rta, keylen))
-+ goto badkey;
-+ if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
-+ goto badkey;
-+ if (RTA_PAYLOAD(rta) < sizeof(*param))
-+ goto badkey;
-+
-+ param = RTA_DATA(rta);
-+ enckeylen = be32_to_cpu(param->enckeylen);
-+
-+ key += RTA_ALIGN(rta->rta_len);
-+ keylen -= RTA_ALIGN(rta->rta_len);
-+
-+ if (keylen < enckeylen)
-+ goto badkey;
-+
- authkeylen = keylen - enckeylen;
+@@ -246,9 +237,9 @@ static __u16 vic_irq_mask[NR_CPUS] __cacheline_aligned;
+ static __u16 vic_irq_enable_mask[NR_CPUS] __cacheline_aligned = { 0 };
- crypto_hash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
-@@ -71,21 +82,38 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+ /* Lock for enable/disable of VIC interrupts */
+-static __cacheline_aligned DEFINE_SPINLOCK(vic_irq_lock);
++static __cacheline_aligned DEFINE_SPINLOCK(vic_irq_lock);
+
+-/* The boot processor is correctly set up in PC mode when it
++/* The boot processor is correctly set up in PC mode when it
+ * comes up, but the secondaries need their master/slave 8259
+ * pairs initializing correctly */
+
+@@ -262,8 +253,7 @@ static unsigned long vic_tick[NR_CPUS] __cacheline_aligned = { 0 };
+ static unsigned long vic_cpi_mailbox[NR_CPUS] __cacheline_aligned;
+
+ /* debugging routine to read the isr of the cpu's pic */
+-static inline __u16
+-vic_read_isr(void)
++static inline __u16 vic_read_isr(void)
+ {
+ __u16 isr;
- out:
- return err;
-+
-+badkey:
-+ crypto_aead_set_flags(authenc, CRYPTO_TFM_RES_BAD_KEY_LEN);
-+ goto out;
+@@ -275,17 +265,16 @@ vic_read_isr(void)
+ return isr;
}
--static int crypto_authenc_hash(struct aead_request *req)
-+static void authenc_chain(struct scatterlist *head, struct scatterlist *sg,
-+ int chain)
-+{
-+ if (chain) {
-+ head->length += sg->length;
-+ sg = scatterwalk_sg_next(sg);
-+ }
-+
-+ if (sg)
-+ scatterwalk_sg_chain(head, 2, sg);
-+ else
-+ sg_mark_end(head);
-+}
+-static __init void
+-qic_setup(void)
++static __init void qic_setup(void)
+ {
+- if(!is_cpu_quad()) {
++ if (!is_cpu_quad()) {
+ /* not a quad, no setup */
+ return;
+ }
+ outb(QIC_DEFAULT_MASK0, QIC_MASK_REGISTER0);
+ outb(QIC_CPI_ENABLE, QIC_MASK_REGISTER1);
+-
+- if(is_cpu_extended()) {
+
-+static u8 *crypto_authenc_hash(struct aead_request *req, unsigned int flags,
-+ struct scatterlist *cipher,
-+ unsigned int cryptlen)
++ if (is_cpu_extended()) {
+ /* the QIC duplicate of the VIC base register */
+ outb(VIC_DEFAULT_CPI_BASE, QIC_VIC_CPI_BASE_REGISTER);
+ outb(QIC_DEFAULT_CPI_BASE, QIC_CPI_BASE_REGISTER);
+@@ -295,8 +284,7 @@ qic_setup(void)
+ }
+ }
+
+-static __init void
+-vic_setup_pic(void)
++static __init void vic_setup_pic(void)
{
- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
-- struct authenc_instance_ctx *ictx =
-- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct crypto_hash *auth = ctx->auth;
- struct hash_desc desc = {
- .tfm = auth,
-+ .flags = aead_request_flags(req) & flags,
- };
- u8 *hash = aead_request_ctx(req);
-- struct scatterlist *dst = req->dst;
-- unsigned int cryptlen = req->cryptlen;
- int err;
+ outb(1, VIC_REDIRECT_REGISTER_1);
+ /* clear the claim registers for dynamic routing */
+@@ -333,7 +321,7 @@ vic_setup_pic(void)
- hash = (u8 *)ALIGN((unsigned long)hash + crypto_hash_alignmask(auth),
-@@ -100,7 +128,7 @@ static int crypto_authenc_hash(struct aead_request *req)
- if (err)
- goto auth_unlock;
+ /* ICW2: slave vector base */
+ outb(FIRST_EXTERNAL_VECTOR + 8, 0xA1);
+-
++
+ /* ICW3: slave ID */
+ outb(0x02, 0xA1);
-- err = crypto_hash_update(&desc, dst, cryptlen);
-+ err = crypto_hash_update(&desc, cipher, cryptlen);
- if (err)
- goto auth_unlock;
+@@ -341,19 +329,18 @@ vic_setup_pic(void)
+ outb(0x01, 0xA1);
+ }
-@@ -109,17 +137,53 @@ auth_unlock:
- spin_unlock_bh(&ctx->auth_lock);
+-static void
+-do_quad_bootstrap(void)
++static void do_quad_bootstrap(void)
+ {
+- if(is_cpu_quad() && is_cpu_vic_boot()) {
++ if (is_cpu_quad() && is_cpu_vic_boot()) {
+ int i;
+ unsigned long flags;
+ __u8 cpuid = hard_smp_processor_id();
- if (err)
-- return err;
-+ return ERR_PTR(err);
-+
-+ return hash;
-+}
+ local_irq_save(flags);
-- scatterwalk_map_and_copy(hash, dst, cryptlen, ictx->authsize, 1);
-+static int crypto_authenc_genicv(struct aead_request *req, u8 *iv,
-+ unsigned int flags)
-+{
-+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
-+ struct scatterlist *dst = req->dst;
-+ struct scatterlist cipher[2];
-+ struct page *dstp;
-+ unsigned int ivsize = crypto_aead_ivsize(authenc);
-+ unsigned int cryptlen;
-+ u8 *vdst;
-+ u8 *hash;
-+
-+ dstp = sg_page(dst);
-+ vdst = PageHighMem(dstp) ? NULL : page_address(dstp) + dst->offset;
-+
-+ sg_init_table(cipher, 2);
-+ sg_set_buf(cipher, iv, ivsize);
-+ authenc_chain(cipher, dst, vdst == iv + ivsize);
-+
-+ cryptlen = req->cryptlen + ivsize;
-+ hash = crypto_authenc_hash(req, flags, cipher, cryptlen);
-+ if (IS_ERR(hash))
-+ return PTR_ERR(hash);
-+
-+ scatterwalk_map_and_copy(hash, cipher, cryptlen,
-+ crypto_aead_authsize(authenc), 1);
- return 0;
+- for(i = 0; i<4; i++) {
++ for (i = 0; i < 4; i++) {
+ /* FIXME: this would be >>3 &0x7 on the 32 way */
+- if(((cpuid >> 2) & 0x03) == i)
++ if (((cpuid >> 2) & 0x03) == i)
+ /* don't lower our own mask! */
+ continue;
+
+@@ -368,12 +355,10 @@ do_quad_bootstrap(void)
+ }
}
- static void crypto_authenc_encrypt_done(struct crypto_async_request *req,
- int err)
+-
+ /* Set up all the basic stuff: read the SMP config and make all the
+ * SMP information reflect only the boot cpu. All others will be
+ * brought on-line later. */
+-void __init
+-find_smp_config(void)
++void __init find_smp_config(void)
{
-- if (!err)
-- err = crypto_authenc_hash(req->data);
-+ if (!err) {
-+ struct aead_request *areq = req->data;
-+ struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
-+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
-+ struct ablkcipher_request *abreq = aead_request_ctx(areq);
-+ u8 *iv = (u8 *)(abreq + 1) +
-+ crypto_ablkcipher_reqsize(ctx->enc);
-+
-+ err = crypto_authenc_genicv(areq, iv, 0);
-+ }
+ int i;
- aead_request_complete(req->data, err);
- }
-@@ -129,72 +193,99 @@ static int crypto_authenc_encrypt(struct aead_request *req)
- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct ablkcipher_request *abreq = aead_request_ctx(req);
-+ struct crypto_ablkcipher *enc = ctx->enc;
-+ struct scatterlist *dst = req->dst;
-+ unsigned int cryptlen = req->cryptlen;
-+ u8 *iv = (u8 *)(abreq + 1) + crypto_ablkcipher_reqsize(enc);
- int err;
+@@ -382,24 +367,31 @@ find_smp_config(void)
+ printk("VOYAGER SMP: Boot cpu is %d\n", boot_cpu_id);
-- ablkcipher_request_set_tfm(abreq, ctx->enc);
-+ ablkcipher_request_set_tfm(abreq, enc);
- ablkcipher_request_set_callback(abreq, aead_request_flags(req),
- crypto_authenc_encrypt_done, req);
-- ablkcipher_request_set_crypt(abreq, req->src, req->dst, req->cryptlen,
-- req->iv);
-+ ablkcipher_request_set_crypt(abreq, req->src, dst, cryptlen, req->iv);
-+
-+ memcpy(iv, req->iv, crypto_aead_ivsize(authenc));
+ /* initialize the CPU structures (moved from smp_boot_cpus) */
+- for(i=0; i<NR_CPUS; i++) {
++ for (i = 0; i < NR_CPUS; i++) {
+ cpu_irq_affinity[i] = ~0;
+ }
+ cpu_online_map = cpumask_of_cpu(boot_cpu_id);
- err = crypto_ablkcipher_encrypt(abreq);
- if (err)
- return err;
+ /* The boot CPU must be extended */
+- voyager_extended_vic_processors = 1<<boot_cpu_id;
++ voyager_extended_vic_processors = 1 << boot_cpu_id;
+ /* initially, all of the first 8 CPUs can boot */
+ voyager_allowed_boot_processors = 0xff;
+ /* set up everything for just this CPU, we can alter
+ * this as we start the other CPUs later */
+ /* now get the CPU disposition from the extended CMOS */
+- cpus_addr(phys_cpu_present_map)[0] = voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK);
+- cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 1) << 8;
+- cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 2) << 16;
+- cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 3) << 24;
++ cpus_addr(phys_cpu_present_map)[0] =
++ voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK);
++ cpus_addr(phys_cpu_present_map)[0] |=
++ voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 1) << 8;
++ cpus_addr(phys_cpu_present_map)[0] |=
++ voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK +
++ 2) << 16;
++ cpus_addr(phys_cpu_present_map)[0] |=
++ voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK +
++ 3) << 24;
+ cpu_possible_map = phys_cpu_present_map;
+- printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n", cpus_addr(phys_cpu_present_map)[0]);
++ printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n",
++ cpus_addr(phys_cpu_present_map)[0]);
+ /* Here we set up the VIC to enable SMP */
+ /* enable the CPIs by writing the base vector to their register */
+ outb(VIC_DEFAULT_CPI_BASE, VIC_CPI_BASE_REGISTER);
+@@ -427,8 +419,7 @@ find_smp_config(void)
+ /*
+ * The bootstrap kernel entry code has set these up. Save them
+ * for a given CPU, id is physical */
+-void __init
+-smp_store_cpu_info(int id)
++void __init smp_store_cpu_info(int id)
+ {
+ struct cpuinfo_x86 *c = &cpu_data(id);
-- return crypto_authenc_hash(req);
-+ return crypto_authenc_genicv(req, iv, CRYPTO_TFM_REQ_MAY_SLEEP);
+@@ -438,21 +429,19 @@ smp_store_cpu_info(int id)
}
--static int crypto_authenc_verify(struct aead_request *req)
-+static void crypto_authenc_givencrypt_done(struct crypto_async_request *req,
-+ int err)
+ /* set up the trampoline and return the physical address of the code */
+-static __u32 __init
+-setup_trampoline(void)
++static __u32 __init setup_trampoline(void)
{
-- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
-- struct authenc_instance_ctx *ictx =
-- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
-+ if (!err) {
-+ struct aead_givcrypt_request *greq = req->data;
-+
-+ err = crypto_authenc_genicv(&greq->areq, greq->giv, 0);
-+ }
-+
-+ aead_request_complete(req->data, err);
-+}
-+
-+static int crypto_authenc_givencrypt(struct aead_givcrypt_request *req)
-+{
-+ struct crypto_aead *authenc = aead_givcrypt_reqtfm(req);
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
-- struct crypto_hash *auth = ctx->auth;
-- struct hash_desc desc = {
-- .tfm = auth,
-- .flags = aead_request_flags(req),
-- };
-- u8 *ohash = aead_request_ctx(req);
-- u8 *ihash;
-- struct scatterlist *src = req->src;
-- unsigned int cryptlen = req->cryptlen;
-- unsigned int authsize;
-+ struct aead_request *areq = &req->areq;
-+ struct skcipher_givcrypt_request *greq = aead_request_ctx(areq);
-+ u8 *iv = req->giv;
- int err;
+ /* these two are global symbols in trampoline.S */
+ extern const __u8 trampoline_end[];
+ extern const __u8 trampoline_data[];
-- ohash = (u8 *)ALIGN((unsigned long)ohash + crypto_hash_alignmask(auth),
-- crypto_hash_alignmask(auth) + 1);
-- ihash = ohash + crypto_hash_digestsize(auth);
--
-- spin_lock_bh(&ctx->auth_lock);
-- err = crypto_hash_init(&desc);
-- if (err)
-- goto auth_unlock;
-+ skcipher_givcrypt_set_tfm(greq, ctx->enc);
-+ skcipher_givcrypt_set_callback(greq, aead_request_flags(areq),
-+ crypto_authenc_givencrypt_done, areq);
-+ skcipher_givcrypt_set_crypt(greq, areq->src, areq->dst, areq->cryptlen,
-+ areq->iv);
-+ skcipher_givcrypt_set_giv(greq, iv, req->seq);
+- memcpy((__u8 *)trampoline_base, trampoline_data,
++ memcpy((__u8 *) trampoline_base, trampoline_data,
+ trampoline_end - trampoline_data);
+- return virt_to_phys((__u8 *)trampoline_base);
++ return virt_to_phys((__u8 *) trampoline_base);
+ }
-- err = crypto_hash_update(&desc, req->assoc, req->assoclen);
-+ err = crypto_skcipher_givencrypt(greq);
- if (err)
-- goto auth_unlock;
-+ return err;
+ /* Routine initially called when a non-boot CPU is brought online */
+-static void __init
+-start_secondary(void *unused)
++static void __init start_secondary(void *unused)
+ {
+ __u8 cpuid = hard_smp_processor_id();
+ /* external functions not defined in the headers */
+@@ -464,17 +453,18 @@ start_secondary(void *unused)
+ ack_CPI(VIC_CPU_BOOT_CPI);
-- err = crypto_hash_update(&desc, src, cryptlen);
-- if (err)
-- goto auth_unlock;
-+ return crypto_authenc_genicv(areq, iv, CRYPTO_TFM_REQ_MAY_SLEEP);
-+}
+ /* setup the 8259 master slave pair belonging to this CPU ---
+- * we won't actually receive any until the boot CPU
+- * relinquishes it's static routing mask */
++ * we won't actually receive any until the boot CPU
++ * relinquishes it's static routing mask */
+ vic_setup_pic();
-- err = crypto_hash_final(&desc, ohash);
--auth_unlock:
-- spin_unlock_bh(&ctx->auth_lock);
-+static int crypto_authenc_verify(struct aead_request *req,
-+ struct scatterlist *cipher,
-+ unsigned int cryptlen)
-+{
-+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
-+ u8 *ohash;
-+ u8 *ihash;
-+ unsigned int authsize;
+ qic_setup();
-- if (err)
-- return err;
-+ ohash = crypto_authenc_hash(req, CRYPTO_TFM_REQ_MAY_SLEEP, cipher,
-+ cryptlen);
-+ if (IS_ERR(ohash))
-+ return PTR_ERR(ohash);
+- if(is_cpu_quad() && !is_cpu_vic_boot()) {
++ if (is_cpu_quad() && !is_cpu_vic_boot()) {
+ /* clear the boot CPI */
+ __u8 dummy;
-- authsize = ictx->authsize;
-- scatterwalk_map_and_copy(ihash, src, cryptlen, authsize, 0);
-- return memcmp(ihash, ohash, authsize) ? -EINVAL : 0;
-+ authsize = crypto_aead_authsize(authenc);
-+ ihash = ohash + authsize;
-+ scatterwalk_map_and_copy(ihash, cipher, cryptlen, authsize, 0);
-+ return memcmp(ihash, ohash, authsize) ? -EBADMSG: 0;
+- dummy = voyager_quad_cpi_addr[cpuid]->qic_cpi[VIC_CPU_BOOT_CPI].cpi;
++ dummy =
++ voyager_quad_cpi_addr[cpuid]->qic_cpi[VIC_CPU_BOOT_CPI].cpi;
+ printk("read dummy %d\n", dummy);
+ }
+
+@@ -516,7 +506,6 @@ start_secondary(void *unused)
+ cpu_idle();
}
--static void crypto_authenc_decrypt_done(struct crypto_async_request *req,
-- int err)
-+static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
-+ unsigned int cryptlen)
+-
+ /* Routine to kick start the given CPU and wait for it to report ready
+ * (or timeout in startup). When this routine returns, the requested
+ * CPU is either fully running and configured or known to be dead.
+@@ -524,29 +513,28 @@ start_secondary(void *unused)
+ * We call this routine sequentially 1 CPU at a time, so no need for
+ * locking */
+
+-static void __init
+-do_boot_cpu(__u8 cpu)
++static void __init do_boot_cpu(__u8 cpu)
{
-- aead_request_complete(req->data, err);
-+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
-+ struct scatterlist *src = req->src;
-+ struct scatterlist cipher[2];
-+ struct page *srcp;
-+ unsigned int ivsize = crypto_aead_ivsize(authenc);
-+ u8 *vsrc;
+ struct task_struct *idle;
+ int timeout;
+ unsigned long flags;
+- int quad_boot = (1<<cpu) & voyager_quad_processors
+- & ~( voyager_extended_vic_processors
+- & voyager_allowed_boot_processors);
++ int quad_boot = (1 << cpu) & voyager_quad_processors
++ & ~(voyager_extended_vic_processors
++ & voyager_allowed_boot_processors);
+
+ /* This is an area in head.S which was used to set up the
+ * initial kernel stack. We need to alter this to give the
+ * booting CPU a new stack (taken from its idle process) */
+ extern struct {
+- __u8 *esp;
++ __u8 *sp;
+ unsigned short ss;
+ } stack_start;
+ /* This is the format of the CPI IDT gate (in real mode) which
+ * we're hijacking to boot the CPU */
+- union IDTFormat {
++ union IDTFormat {
+ struct seg {
+- __u16 Offset;
+- __u16 Segment;
++ __u16 Offset;
++ __u16 Segment;
+ } idt;
+ __u32 val;
+ } hijack_source;
+@@ -565,37 +553,44 @@ do_boot_cpu(__u8 cpu)
+ alternatives_smp_switch(1);
+
+ idle = fork_idle(cpu);
+- if(IS_ERR(idle))
++ if (IS_ERR(idle))
+ panic("failed fork for CPU%d", cpu);
+- idle->thread.eip = (unsigned long) start_secondary;
++ idle->thread.ip = (unsigned long)start_secondary;
+ /* init_tasks (in sched.c) is indexed logically */
+- stack_start.esp = (void *) idle->thread.esp;
++ stack_start.sp = (void *)idle->thread.sp;
+
+ init_gdt(cpu);
+- per_cpu(current_task, cpu) = idle;
++ per_cpu(current_task, cpu) = idle;
+ early_gdt_descr.address = (unsigned long)get_cpu_gdt_table(cpu);
+ irq_ctx_init(cpu);
+
+ /* Note: Don't modify initial ss override */
+- VDEBUG(("VOYAGER SMP: Booting CPU%d at 0x%lx[%x:%x], stack %p\n", cpu,
++ VDEBUG(("VOYAGER SMP: Booting CPU%d at 0x%lx[%x:%x], stack %p\n", cpu,
+ (unsigned long)hijack_source.val, hijack_source.idt.Segment,
+- hijack_source.idt.Offset, stack_start.esp));
++ hijack_source.idt.Offset, stack_start.sp));
+
+ /* init lowmem identity mapping */
+ clone_pgd_range(swapper_pg_dir, swapper_pg_dir + USER_PGD_PTRS,
+ min_t(unsigned long, KERNEL_PGD_PTRS, USER_PGD_PTRS));
+ flush_tlb_all();
+
+- if(quad_boot) {
++ if (quad_boot) {
+ printk("CPU %d: non extended Quad boot\n", cpu);
+- hijack_vector = (__u32 *)phys_to_virt((VIC_CPU_BOOT_CPI + QIC_DEFAULT_CPI_BASE)*4);
++ hijack_vector =
++ (__u32 *)
++ phys_to_virt((VIC_CPU_BOOT_CPI + QIC_DEFAULT_CPI_BASE) * 4);
+ *hijack_vector = hijack_source.val;
+ } else {
+ printk("CPU%d: extended VIC boot\n", cpu);
+- hijack_vector = (__u32 *)phys_to_virt((VIC_CPU_BOOT_CPI + VIC_DEFAULT_CPI_BASE)*4);
++ hijack_vector =
++ (__u32 *)
++ phys_to_virt((VIC_CPU_BOOT_CPI + VIC_DEFAULT_CPI_BASE) * 4);
+ *hijack_vector = hijack_source.val;
+ /* VIC errata, may also receive interrupt at this address */
+- hijack_vector = (__u32 *)phys_to_virt((VIC_CPU_BOOT_ERRATA_CPI + VIC_DEFAULT_CPI_BASE)*4);
++ hijack_vector =
++ (__u32 *)
++ phys_to_virt((VIC_CPU_BOOT_ERRATA_CPI +
++ VIC_DEFAULT_CPI_BASE) * 4);
+ *hijack_vector = hijack_source.val;
+ }
+ /* All non-boot CPUs start with interrupts fully masked. Need
+@@ -603,73 +598,76 @@ do_boot_cpu(__u8 cpu)
+ * this in the VIC by masquerading as the processor we're
+ * about to boot and lowering its interrupt mask */
+ local_irq_save(flags);
+- if(quad_boot) {
++ if (quad_boot) {
+ send_one_QIC_CPI(cpu, VIC_CPU_BOOT_CPI);
+ } else {
+ outb(VIC_CPU_MASQUERADE_ENABLE | cpu, VIC_PROCESSOR_ID);
+ /* here we're altering registers belonging to `cpu' */
+-
+
-+ srcp = sg_page(src);
-+ vsrc = PageHighMem(srcp) ? NULL : page_address(srcp) + src->offset;
+ outb(VIC_BOOT_INTERRUPT_MASK, 0x21);
+ /* now go back to our original identity */
+ outb(boot_cpu_id, VIC_PROCESSOR_ID);
+
+ /* and boot the CPU */
+
+- send_CPI((1<<cpu), VIC_CPU_BOOT_CPI);
++ send_CPI((1 << cpu), VIC_CPU_BOOT_CPI);
+ }
+ cpu_booted_map = 0;
+ local_irq_restore(flags);
+
+ /* now wait for it to become ready (or timeout) */
+- for(timeout = 0; timeout < 50000; timeout++) {
+- if(cpu_booted_map)
++ for (timeout = 0; timeout < 50000; timeout++) {
++ if (cpu_booted_map)
+ break;
+ udelay(100);
+ }
+ /* reset the page table */
+ zap_low_mappings();
+-
+
-+ sg_init_table(cipher, 2);
-+ sg_set_buf(cipher, iv, ivsize);
-+ authenc_chain(cipher, src, vsrc == iv + ivsize);
+ if (cpu_booted_map) {
+ VDEBUG(("CPU%d: Booted successfully, back in CPU %d\n",
+ cpu, smp_processor_id()));
+-
+
-+ return crypto_authenc_verify(req, cipher, cryptlen + ivsize);
+ printk("CPU%d: ", cpu);
+ print_cpu_info(&cpu_data(cpu));
+ wmb();
+ cpu_set(cpu, cpu_callout_map);
+ cpu_set(cpu, cpu_present_map);
+- }
+- else {
++ } else {
+ printk("CPU%d FAILED TO BOOT: ", cpu);
+- if (*((volatile unsigned char *)phys_to_virt(start_phys_address))==0xA5)
++ if (*
++ ((volatile unsigned char *)phys_to_virt(start_phys_address))
++ == 0xA5)
+ printk("Stuck.\n");
+ else
+ printk("Not responding.\n");
+-
++
+ cpucount--;
+ }
}
- static int crypto_authenc_decrypt(struct aead_request *req)
-@@ -202,17 +293,23 @@ static int crypto_authenc_decrypt(struct aead_request *req)
- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct ablkcipher_request *abreq = aead_request_ctx(req);
-+ unsigned int cryptlen = req->cryptlen;
-+ unsigned int authsize = crypto_aead_authsize(authenc);
-+ u8 *iv = req->iv;
- int err;
+-void __init
+-smp_boot_cpus(void)
++void __init smp_boot_cpus(void)
+ {
+ int i;
-- err = crypto_authenc_verify(req);
-+ if (cryptlen < authsize)
-+ return -EINVAL;
-+ cryptlen -= authsize;
+ /* CAT BUS initialisation must be done after the memory */
+ /* FIXME: The L4 has a catbus too, it just needs to be
+ * accessed in a totally different way */
+- if(voyager_level == 5) {
++ if (voyager_level == 5) {
+ voyager_cat_init();
+
+ /* now that the cat has probed the Voyager System Bus, sanity
+ * check the cpu map */
+- if( ((voyager_quad_processors | voyager_extended_vic_processors)
+- & cpus_addr(phys_cpu_present_map)[0]) != cpus_addr(phys_cpu_present_map)[0]) {
++ if (((voyager_quad_processors | voyager_extended_vic_processors)
++ & cpus_addr(phys_cpu_present_map)[0]) !=
++ cpus_addr(phys_cpu_present_map)[0]) {
+ /* should panic */
+- printk("\n\n***WARNING*** Sanity check of CPU present map FAILED\n");
++ printk("\n\n***WARNING*** "
++ "Sanity check of CPU present map FAILED\n");
+ }
+- } else if(voyager_level == 4)
+- voyager_extended_vic_processors = cpus_addr(phys_cpu_present_map)[0];
++ } else if (voyager_level == 4)
++ voyager_extended_vic_processors =
++ cpus_addr(phys_cpu_present_map)[0];
+
+ /* this sets up the idle task to run on the current cpu */
+ voyager_extended_cpus = 1;
+@@ -678,14 +676,14 @@ smp_boot_cpus(void)
+ //global_irq_holder = boot_cpu_id;
+
+ /* FIXME: Need to do something about this but currently only works
+- * on CPUs with a tsc which none of mine have.
+- smp_tune_scheduling();
++ * on CPUs with a tsc which none of mine have.
++ smp_tune_scheduling();
+ */
+ smp_store_cpu_info(boot_cpu_id);
+ printk("CPU%d: ", boot_cpu_id);
+ print_cpu_info(&cpu_data(boot_cpu_id));
+
+- if(is_cpu_quad()) {
++ if (is_cpu_quad()) {
+ /* booting on a Quad CPU */
+ printk("VOYAGER SMP: Boot CPU is Quad\n");
+ qic_setup();
+@@ -697,11 +695,11 @@ smp_boot_cpus(void)
+
+ cpu_set(boot_cpu_id, cpu_online_map);
+ cpu_set(boot_cpu_id, cpu_callout_map);
+-
+- /* loop over all the extended VIC CPUs and boot them. The
+
-+ err = crypto_authenc_iverify(req, iv, cryptlen);
- if (err)
- return err;
++ /* loop over all the extended VIC CPUs and boot them. The
+ * Quad CPUs must be bootstrapped by their extended VIC cpu */
+- for(i = 0; i < NR_CPUS; i++) {
+- if(i == boot_cpu_id || !cpu_isset(i, phys_cpu_present_map))
++ for (i = 0; i < NR_CPUS; i++) {
++ if (i == boot_cpu_id || !cpu_isset(i, phys_cpu_present_map))
+ continue;
+ do_boot_cpu(i);
+ /* This udelay seems to be needed for the Quad boots
+@@ -715,25 +713,26 @@ smp_boot_cpus(void)
+ for (i = 0; i < NR_CPUS; i++)
+ if (cpu_isset(i, cpu_online_map))
+ bogosum += cpu_data(i).loops_per_jiffy;
+- printk(KERN_INFO "Total of %d processors activated (%lu.%02lu BogoMIPS).\n",
+- cpucount+1,
+- bogosum/(500000/HZ),
+- (bogosum/(5000/HZ))%100);
++ printk(KERN_INFO "Total of %d processors activated "
++ "(%lu.%02lu BogoMIPS).\n",
++ cpucount + 1, bogosum / (500000 / HZ),
++ (bogosum / (5000 / HZ)) % 100);
+ }
+ voyager_extended_cpus = hweight32(voyager_extended_vic_processors);
+- printk("VOYAGER: Extended (interrupt handling CPUs): %d, non-extended: %d\n", voyager_extended_cpus, num_booting_cpus() - voyager_extended_cpus);
++ printk("VOYAGER: Extended (interrupt handling CPUs): "
++ "%d, non-extended: %d\n", voyager_extended_cpus,
++ num_booting_cpus() - voyager_extended_cpus);
+ /* that's it, switch to symmetric mode */
+ outb(0, VIC_PRIORITY_REGISTER);
+ outb(0, VIC_CLAIM_REGISTER_0);
+ outb(0, VIC_CLAIM_REGISTER_1);
+-
++
+ VDEBUG(("VOYAGER SMP: Booted with %d CPUs\n", num_booting_cpus()));
+ }
- ablkcipher_request_set_tfm(abreq, ctx->enc);
- ablkcipher_request_set_callback(abreq, aead_request_flags(req),
-- crypto_authenc_decrypt_done, req);
-- ablkcipher_request_set_crypt(abreq, req->src, req->dst, req->cryptlen,
-- req->iv);
-+ req->base.complete, req->base.data);
-+ ablkcipher_request_set_crypt(abreq, req->src, req->dst, cryptlen, iv);
+ /* Reload the secondary CPUs task structure (this function does not
+ * return ) */
+-void __init
+-initialize_secondary(void)
++void __init initialize_secondary(void)
+ {
+ #if 0
+ // AC kernels only
+@@ -745,11 +744,9 @@ initialize_secondary(void)
+ * basically just the stack pointer and the eip.
+ */
- return crypto_ablkcipher_decrypt(abreq);
+- asm volatile(
+- "movl %0,%%esp\n\t"
+- "jmp *%1"
+- :
+- :"r" (current->thread.esp),"r" (current->thread.eip));
++ asm volatile ("movl %0,%%esp\n\t"
++ "jmp *%1"::"r" (current->thread.sp),
++ "r"(current->thread.ip));
+ }
+
+ /* handle a Voyager SYS_INT -- If we don't, the base board will
+@@ -758,25 +755,23 @@ initialize_secondary(void)
+ * System interrupts occur because some problem was detected on the
+ * various busses. To find out what you have to probe all the
+ * hardware via the CAT bus. FIXME: At the moment we do nothing. */
+-fastcall void
+-smp_vic_sys_interrupt(struct pt_regs *regs)
++void smp_vic_sys_interrupt(struct pt_regs *regs)
+ {
+ ack_CPI(VIC_SYS_INT);
+- printk("Voyager SYSTEM INTERRUPT\n");
++ printk("Voyager SYSTEM INTERRUPT\n");
+ }
+
+ /* Handle a voyager CMN_INT; These interrupts occur either because of
+ * a system status change or because a single bit memory error
+ * occurred. FIXME: At the moment, ignore all this. */
+-fastcall void
+-smp_vic_cmn_interrupt(struct pt_regs *regs)
++void smp_vic_cmn_interrupt(struct pt_regs *regs)
+ {
+ static __u8 in_cmn_int = 0;
+ static DEFINE_SPINLOCK(cmn_int_lock);
+
+ /* common ints are broadcast, so make sure we only do this once */
+ _raw_spin_lock(&cmn_int_lock);
+- if(in_cmn_int)
++ if (in_cmn_int)
+ goto unlock_end;
+
+ in_cmn_int++;
+@@ -784,12 +779,12 @@ smp_vic_cmn_interrupt(struct pt_regs *regs)
+
+ VDEBUG(("Voyager COMMON INTERRUPT\n"));
+
+- if(voyager_level == 5)
++ if (voyager_level == 5)
+ voyager_cat_do_common_interrupt();
+
+ _raw_spin_lock(&cmn_int_lock);
+ in_cmn_int = 0;
+- unlock_end:
++ unlock_end:
+ _raw_spin_unlock(&cmn_int_lock);
+ ack_CPI(VIC_CMN_INT);
+ }
+@@ -797,26 +792,23 @@ smp_vic_cmn_interrupt(struct pt_regs *regs)
+ /*
+ * Reschedule call back. Nothing to do, all the work is done
+ * automatically when we return from the interrupt. */
+-static void
+-smp_reschedule_interrupt(void)
++static void smp_reschedule_interrupt(void)
+ {
+ /* do nothing */
}
-@@ -224,19 +321,13 @@ static int crypto_authenc_init_tfm(struct crypto_tfm *tfm)
- struct crypto_authenc_ctx *ctx = crypto_tfm_ctx(tfm);
- struct crypto_hash *auth;
- struct crypto_ablkcipher *enc;
-- unsigned int digestsize;
- int err;
- auth = crypto_spawn_hash(&ictx->auth);
- if (IS_ERR(auth))
- return PTR_ERR(auth);
+-static struct mm_struct * flush_mm;
++static struct mm_struct *flush_mm;
+ static unsigned long flush_va;
+ static DEFINE_SPINLOCK(tlbstate_lock);
+-#define FLUSH_ALL 0xffffffff
+
+ /*
+- * We cannot call mmdrop() because we are in interrupt context,
++ * We cannot call mmdrop() because we are in interrupt context,
+ * instead update mm->cpu_vm_mask.
+ *
+ * We need to reload %cr3 since the page tables may be going
+ * away from under us..
+ */
+-static inline void
+-leave_mm (unsigned long cpu)
++static inline void voyager_leave_mm(unsigned long cpu)
+ {
+ if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK)
+ BUG();
+@@ -824,12 +816,10 @@ leave_mm (unsigned long cpu)
+ load_cr3(swapper_pg_dir);
+ }
-- err = -EINVAL;
-- digestsize = crypto_hash_digestsize(auth);
-- if (ictx->authsize > digestsize)
-- goto err_free_hash;
-
-- enc = crypto_spawn_ablkcipher(&ictx->enc);
-+ enc = crypto_spawn_skcipher(&ictx->enc);
- err = PTR_ERR(enc);
- if (IS_ERR(enc))
- goto err_free_hash;
-@@ -246,9 +337,10 @@ static int crypto_authenc_init_tfm(struct crypto_tfm *tfm)
- tfm->crt_aead.reqsize = max_t(unsigned int,
- (crypto_hash_alignmask(auth) &
- ~(crypto_tfm_ctx_alignment() - 1)) +
-- digestsize * 2,
-- sizeof(struct ablkcipher_request) +
-- crypto_ablkcipher_reqsize(enc));
-+ crypto_hash_digestsize(auth) * 2,
-+ sizeof(struct skcipher_givcrypt_request) +
-+ crypto_ablkcipher_reqsize(enc) +
-+ crypto_ablkcipher_ivsize(enc));
+ /*
+ * Invalidate call-back
+ */
+-static void
+-smp_invalidate_interrupt(void)
++static void smp_invalidate_interrupt(void)
+ {
+ __u8 cpu = smp_processor_id();
- spin_lock_init(&ctx->auth_lock);
+@@ -837,18 +827,18 @@ smp_invalidate_interrupt(void)
+ return;
+ /* This will flood messages. Don't uncomment unless you see
+ * Problems with cross cpu invalidation
+- VDEBUG(("VOYAGER SMP: CPU%d received INVALIDATE_CPI\n",
+- smp_processor_id()));
+- */
++ VDEBUG(("VOYAGER SMP: CPU%d received INVALIDATE_CPI\n",
++ smp_processor_id()));
++ */
-@@ -269,75 +361,74 @@ static void crypto_authenc_exit_tfm(struct crypto_tfm *tfm)
+ if (flush_mm == per_cpu(cpu_tlbstate, cpu).active_mm) {
+ if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK) {
+- if (flush_va == FLUSH_ALL)
++ if (flush_va == TLB_FLUSH_ALL)
+ local_flush_tlb();
+ else
+ __flush_tlb_one(flush_va);
+ } else
+- leave_mm(cpu);
++ voyager_leave_mm(cpu);
+ }
+ smp_mb__before_clear_bit();
+ clear_bit(cpu, &smp_invalidate_needed);
+@@ -857,11 +847,10 @@ smp_invalidate_interrupt(void)
- static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
+ /* All the new flush operations for 2.4 */
+
+-
+ /* This routine is called with a physical cpu mask */
+ static void
+-voyager_flush_tlb_others (unsigned long cpumask, struct mm_struct *mm,
+- unsigned long va)
++voyager_flush_tlb_others(unsigned long cpumask, struct mm_struct *mm,
++ unsigned long va)
{
-+ struct crypto_attr_type *algt;
- struct crypto_instance *inst;
- struct crypto_alg *auth;
- struct crypto_alg *enc;
- struct authenc_instance_ctx *ctx;
-- unsigned int authsize;
-- unsigned int enckeylen;
-+ const char *enc_name;
- int err;
+ int stuck = 50000;
-- err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD);
-- if (err)
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
- return ERR_PTR(err);
+@@ -875,7 +864,7 @@ voyager_flush_tlb_others (unsigned long cpumask, struct mm_struct *mm,
+ BUG();
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
-+ return ERR_PTR(-EINVAL);
+ spin_lock(&tlbstate_lock);
+-
+
- auth = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
- CRYPTO_ALG_TYPE_HASH_MASK);
- if (IS_ERR(auth))
- return ERR_PTR(PTR_ERR(auth));
+ flush_mm = mm;
+ flush_va = va;
+ atomic_set_mask(cpumask, &smp_invalidate_needed);
+@@ -887,23 +876,23 @@ voyager_flush_tlb_others (unsigned long cpumask, struct mm_struct *mm,
-- err = crypto_attr_u32(tb[2], &authsize);
-- inst = ERR_PTR(err);
-- if (err)
-- goto out_put_auth;
--
-- enc = crypto_attr_alg(tb[3], CRYPTO_ALG_TYPE_BLKCIPHER,
-- CRYPTO_ALG_TYPE_MASK);
-- inst = ERR_PTR(PTR_ERR(enc));
-- if (IS_ERR(enc))
-+ enc_name = crypto_attr_alg_name(tb[2]);
-+ err = PTR_ERR(enc_name);
-+ if (IS_ERR(enc_name))
- goto out_put_auth;
+ while (smp_invalidate_needed) {
+ mb();
+- if(--stuck == 0) {
+- printk("***WARNING*** Stuck doing invalidate CPI (CPU%d)\n", smp_processor_id());
++ if (--stuck == 0) {
++ printk("***WARNING*** Stuck doing invalidate CPI "
++ "(CPU%d)\n", smp_processor_id());
+ break;
+ }
+ }
-- err = crypto_attr_u32(tb[4], &enckeylen);
-- if (err)
-- goto out_put_enc;
--
- inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
- err = -ENOMEM;
- if (!inst)
-- goto out_put_enc;
--
-- err = -ENAMETOOLONG;
-- if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-- "authenc(%s,%u,%s,%u)", auth->cra_name, authsize,
-- enc->cra_name, enckeylen) >= CRYPTO_MAX_ALG_NAME)
-- goto err_free_inst;
--
-- if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-- "authenc(%s,%u,%s,%u)", auth->cra_driver_name,
-- authsize, enc->cra_driver_name, enckeylen) >=
-- CRYPTO_MAX_ALG_NAME)
-- goto err_free_inst;
-+ goto out_put_auth;
+ /* Uncomment only to debug invalidation problems
+- VDEBUG(("VOYAGER SMP: Completed invalidate CPI (CPU%d)\n", cpu));
+- */
++ VDEBUG(("VOYAGER SMP: Completed invalidate CPI (CPU%d)\n", cpu));
++ */
- ctx = crypto_instance_ctx(inst);
-- ctx->authsize = authsize;
-- ctx->enckeylen = enckeylen;
+ flush_mm = NULL;
+ flush_va = 0;
+ spin_unlock(&tlbstate_lock);
+ }
- err = crypto_init_spawn(&ctx->auth, auth, inst, CRYPTO_ALG_TYPE_MASK);
- if (err)
- goto err_free_inst;
+-void
+-flush_tlb_current_task(void)
++void flush_tlb_current_task(void)
+ {
+ struct mm_struct *mm = current->mm;
+ unsigned long cpu_mask;
+@@ -913,14 +902,12 @@ flush_tlb_current_task(void)
+ cpu_mask = cpus_addr(mm->cpu_vm_mask)[0] & ~(1 << smp_processor_id());
+ local_flush_tlb();
+ if (cpu_mask)
+- voyager_flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
++ voyager_flush_tlb_others(cpu_mask, mm, TLB_FLUSH_ALL);
-- err = crypto_init_spawn(&ctx->enc, enc, inst, CRYPTO_ALG_TYPE_MASK);
-+ crypto_set_skcipher_spawn(&ctx->enc, inst);
-+ err = crypto_grab_skcipher(&ctx->enc, enc_name, 0,
-+ crypto_requires_sync(algt->type,
-+ algt->mask));
- if (err)
- goto err_drop_auth;
+ preempt_enable();
+ }
-- inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
-+ enc = crypto_skcipher_spawn_alg(&ctx->enc);
-+
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-+ "authenc(%s,%s)", auth->cra_name, enc->cra_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_enc;
-+
-+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "authenc(%s,%s)", auth->cra_driver_name,
-+ enc->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_enc;
-+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
-+ inst->alg.cra_flags |= enc->cra_flags & CRYPTO_ALG_ASYNC;
- inst->alg.cra_priority = enc->cra_priority * 10 + auth->cra_priority;
- inst->alg.cra_blocksize = enc->cra_blocksize;
-- inst->alg.cra_alignmask = max(auth->cra_alignmask, enc->cra_alignmask);
-+ inst->alg.cra_alignmask = auth->cra_alignmask | enc->cra_alignmask;
- inst->alg.cra_type = &crypto_aead_type;
+-
+-void
+-flush_tlb_mm (struct mm_struct * mm)
++void flush_tlb_mm(struct mm_struct *mm)
+ {
+ unsigned long cpu_mask;
-- inst->alg.cra_aead.ivsize = enc->cra_blkcipher.ivsize;
-- inst->alg.cra_aead.authsize = authsize;
-+ inst->alg.cra_aead.ivsize = enc->cra_ablkcipher.ivsize;
-+ inst->alg.cra_aead.maxauthsize = auth->cra_type == &crypto_hash_type ?
-+ auth->cra_hash.digestsize :
-+ auth->cra_digest.dia_digestsize;
+@@ -932,15 +919,15 @@ flush_tlb_mm (struct mm_struct * mm)
+ if (current->mm)
+ local_flush_tlb();
+ else
+- leave_mm(smp_processor_id());
++ voyager_leave_mm(smp_processor_id());
+ }
+ if (cpu_mask)
+- voyager_flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
++ voyager_flush_tlb_others(cpu_mask, mm, TLB_FLUSH_ALL);
- inst->alg.cra_ctxsize = sizeof(struct crypto_authenc_ctx);
+ preempt_enable();
+ }
-@@ -347,18 +438,19 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
- inst->alg.cra_aead.setkey = crypto_authenc_setkey;
- inst->alg.cra_aead.encrypt = crypto_authenc_encrypt;
- inst->alg.cra_aead.decrypt = crypto_authenc_decrypt;
-+ inst->alg.cra_aead.givencrypt = crypto_authenc_givencrypt;
+-void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
++void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
+ {
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long cpu_mask;
+@@ -949,10 +936,10 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
+
+ cpu_mask = cpus_addr(mm->cpu_vm_mask)[0] & ~(1 << smp_processor_id());
+ if (current->active_mm == mm) {
+- if(current->mm)
++ if (current->mm)
+ __flush_tlb_one(va);
+- else
+- leave_mm(smp_processor_id());
++ else
++ voyager_leave_mm(smp_processor_id());
+ }
- out:
-- crypto_mod_put(enc);
--out_put_auth:
- crypto_mod_put(auth);
- return inst;
+ if (cpu_mask)
+@@ -960,21 +947,21 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
-+err_drop_enc:
-+ crypto_drop_skcipher(&ctx->enc);
- err_drop_auth:
- crypto_drop_spawn(&ctx->auth);
- err_free_inst:
- kfree(inst);
--out_put_enc:
-+out_put_auth:
- inst = ERR_PTR(err);
- goto out;
+ preempt_enable();
}
-@@ -367,7 +459,7 @@ static void crypto_authenc_free(struct crypto_instance *inst)
++
+ EXPORT_SYMBOL(flush_tlb_page);
+
+ /* enable the requested IRQs */
+-static void
+-smp_enable_irq_interrupt(void)
++static void smp_enable_irq_interrupt(void)
{
- struct authenc_instance_ctx *ctx = crypto_instance_ctx(inst);
+ __u8 irq;
+ __u8 cpu = get_cpu();
-- crypto_drop_spawn(&ctx->enc);
-+ crypto_drop_skcipher(&ctx->enc);
- crypto_drop_spawn(&ctx->auth);
- kfree(inst);
+ VDEBUG(("VOYAGER SMP: CPU%d enabling irq mask 0x%x\n", cpu,
+- vic_irq_enable_mask[cpu]));
++ vic_irq_enable_mask[cpu]));
+
+ spin_lock(&vic_irq_lock);
+- for(irq = 0; irq < 16; irq++) {
+- if(vic_irq_enable_mask[cpu] & (1<<irq))
++ for (irq = 0; irq < 16; irq++) {
++ if (vic_irq_enable_mask[cpu] & (1 << irq))
+ enable_local_vic_irq(irq);
+ }
+ vic_irq_enable_mask[cpu] = 0;
+@@ -982,17 +969,16 @@ smp_enable_irq_interrupt(void)
+
+ put_cpu_no_resched();
}
-diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
-index f6c67f9..4a7e65c 100644
---- a/crypto/blkcipher.c
-+++ b/crypto/blkcipher.c
-@@ -14,7 +14,8 @@
- *
+-
++
+ /*
+ * CPU halt call-back
*/
+-static void
+-smp_stop_cpu_function(void *dummy)
++static void smp_stop_cpu_function(void *dummy)
+ {
+ VDEBUG(("VOYAGER SMP: CPU%d is STOPPING\n", smp_processor_id()));
+ cpu_clear(smp_processor_id(), cpu_online_map);
+ local_irq_disable();
+- for(;;)
++ for (;;)
+ halt();
+ }
--#include <linux/crypto.h>
-+#include <crypto/internal/skcipher.h>
-+#include <crypto/scatterwalk.h>
- #include <linux/errno.h>
- #include <linux/hardirq.h>
- #include <linux/kernel.h>
-@@ -25,7 +26,6 @@
- #include <linux/string.h>
-
- #include "internal.h"
--#include "scatterwalk.h"
+@@ -1006,14 +992,13 @@ struct call_data_struct {
+ int wait;
+ };
- enum {
- BLKCIPHER_WALK_PHYS = 1 << 0,
-@@ -433,9 +433,8 @@ static unsigned int crypto_blkcipher_ctxsize(struct crypto_alg *alg, u32 type,
- struct blkcipher_alg *cipher = &alg->cra_blkcipher;
- unsigned int len = alg->cra_ctxsize;
+-static struct call_data_struct * call_data;
++static struct call_data_struct *call_data;
-- type ^= CRYPTO_ALG_ASYNC;
-- mask &= CRYPTO_ALG_ASYNC;
-- if ((type & mask) && cipher->ivsize) {
-+ if ((mask & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_MASK &&
-+ cipher->ivsize) {
- len = ALIGN(len, (unsigned long)alg->cra_alignmask + 1);
- len += cipher->ivsize;
+ /* execute a thread on a new CPU. The function to be called must be
+ * previously set up. This is used to schedule a function for
+ * execution on all CPUs - set up the function then broadcast a
+ * function_interrupt CPI to come here on each CPU */
+-static void
+-smp_call_function_interrupt(void)
++static void smp_call_function_interrupt(void)
+ {
+ void (*func) (void *info) = call_data->func;
+ void *info = call_data->info;
+@@ -1027,16 +1012,17 @@ smp_call_function_interrupt(void)
+ * about to execute the function
+ */
+ mb();
+- if(!test_and_clear_bit(cpu, &call_data->started)) {
++ if (!test_and_clear_bit(cpu, &call_data->started)) {
+ /* If the bit wasn't set, this could be a replay */
+- printk(KERN_WARNING "VOYAGER SMP: CPU %d received call funtion with no call pending\n", cpu);
++ printk(KERN_WARNING "VOYAGER SMP: CPU %d received call funtion"
++ " with no call pending\n", cpu);
+ return;
}
-@@ -451,6 +450,11 @@ static int crypto_init_blkcipher_ops_async(struct crypto_tfm *tfm)
- crt->setkey = async_setkey;
- crt->encrypt = async_encrypt;
- crt->decrypt = async_decrypt;
-+ if (!alg->ivsize) {
-+ crt->givencrypt = skcipher_null_givencrypt;
-+ crt->givdecrypt = skcipher_null_givdecrypt;
-+ }
-+ crt->base = __crypto_ablkcipher_cast(tfm);
- crt->ivsize = alg->ivsize;
+ /*
+ * At this point the info structure may be out of scope unless wait==1
+ */
+ irq_enter();
+- (*func)(info);
++ (*func) (info);
+ __get_cpu_var(irq_stat).irq_call_count++;
+ irq_exit();
+ if (wait) {
+@@ -1046,14 +1032,13 @@ smp_call_function_interrupt(void)
+ }
- return 0;
-@@ -482,9 +486,7 @@ static int crypto_init_blkcipher_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
- if (alg->ivsize > PAGE_SIZE / 8)
- return -EINVAL;
+ static int
+-voyager_smp_call_function_mask (cpumask_t cpumask,
+- void (*func) (void *info), void *info,
+- int wait)
++voyager_smp_call_function_mask(cpumask_t cpumask,
++ void (*func) (void *info), void *info, int wait)
+ {
+ struct call_data_struct data;
+ u32 mask = cpus_addr(cpumask)[0];
-- type ^= CRYPTO_ALG_ASYNC;
-- mask &= CRYPTO_ALG_ASYNC;
-- if (type & mask)
-+ if ((mask & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_MASK)
- return crypto_init_blkcipher_ops_sync(tfm);
+- mask &= ~(1<<smp_processor_id());
++ mask &= ~(1 << smp_processor_id());
+
+ if (!mask)
+ return 0;
+@@ -1093,7 +1078,7 @@ voyager_smp_call_function_mask (cpumask_t cpumask,
+ * so we use the system clock to interrupt one processor, which in
+ * turn, broadcasts a timer CPI to all the others --- we receive that
+ * CPI here. We don't use this actually for counting so losing
+- * ticks doesn't matter
++ * ticks doesn't matter
+ *
+ * FIXME: For those CPUs which actually have a local APIC, we could
+ * try to use it to trigger this interrupt instead of having to
+@@ -1101,8 +1086,7 @@ voyager_smp_call_function_mask (cpumask_t cpumask,
+ * no local APIC, so I can't do this
+ *
+ * This function is currently a placeholder and is unused in the code */
+-fastcall void
+-smp_apic_timer_interrupt(struct pt_regs *regs)
++void smp_apic_timer_interrupt(struct pt_regs *regs)
+ {
+ struct pt_regs *old_regs = set_irq_regs(regs);
+ wrapper_smp_local_timer_interrupt();
+@@ -1110,8 +1094,7 @@ smp_apic_timer_interrupt(struct pt_regs *regs)
+ }
+
+ /* All of the QUAD interrupt GATES */
+-fastcall void
+-smp_qic_timer_interrupt(struct pt_regs *regs)
++void smp_qic_timer_interrupt(struct pt_regs *regs)
+ {
+ struct pt_regs *old_regs = set_irq_regs(regs);
+ ack_QIC_CPI(QIC_TIMER_CPI);
+@@ -1119,127 +1102,112 @@ smp_qic_timer_interrupt(struct pt_regs *regs)
+ set_irq_regs(old_regs);
+ }
+
+-fastcall void
+-smp_qic_invalidate_interrupt(struct pt_regs *regs)
++void smp_qic_invalidate_interrupt(struct pt_regs *regs)
+ {
+ ack_QIC_CPI(QIC_INVALIDATE_CPI);
+ smp_invalidate_interrupt();
+ }
+
+-fastcall void
+-smp_qic_reschedule_interrupt(struct pt_regs *regs)
++void smp_qic_reschedule_interrupt(struct pt_regs *regs)
+ {
+ ack_QIC_CPI(QIC_RESCHEDULE_CPI);
+ smp_reschedule_interrupt();
+ }
+
+-fastcall void
+-smp_qic_enable_irq_interrupt(struct pt_regs *regs)
++void smp_qic_enable_irq_interrupt(struct pt_regs *regs)
+ {
+ ack_QIC_CPI(QIC_ENABLE_IRQ_CPI);
+ smp_enable_irq_interrupt();
+ }
+
+-fastcall void
+-smp_qic_call_function_interrupt(struct pt_regs *regs)
++void smp_qic_call_function_interrupt(struct pt_regs *regs)
+ {
+ ack_QIC_CPI(QIC_CALL_FUNCTION_CPI);
+ smp_call_function_interrupt();
+ }
+
+-fastcall void
+-smp_vic_cpi_interrupt(struct pt_regs *regs)
++void smp_vic_cpi_interrupt(struct pt_regs *regs)
+ {
+ struct pt_regs *old_regs = set_irq_regs(regs);
+ __u8 cpu = smp_processor_id();
+
+- if(is_cpu_quad())
++ if (is_cpu_quad())
+ ack_QIC_CPI(VIC_CPI_LEVEL0);
else
- return crypto_init_blkcipher_ops_async(tfm);
-@@ -499,6 +501,8 @@ static void crypto_blkcipher_show(struct seq_file *m, struct crypto_alg *alg)
- seq_printf(m, "min keysize : %u\n", alg->cra_blkcipher.min_keysize);
- seq_printf(m, "max keysize : %u\n", alg->cra_blkcipher.max_keysize);
- seq_printf(m, "ivsize : %u\n", alg->cra_blkcipher.ivsize);
-+ seq_printf(m, "geniv : %s\n", alg->cra_blkcipher.geniv ?:
-+ "<default>");
+ ack_VIC_CPI(VIC_CPI_LEVEL0);
+
+- if(test_and_clear_bit(VIC_TIMER_CPI, &vic_cpi_mailbox[cpu]))
++ if (test_and_clear_bit(VIC_TIMER_CPI, &vic_cpi_mailbox[cpu]))
+ wrapper_smp_local_timer_interrupt();
+- if(test_and_clear_bit(VIC_INVALIDATE_CPI, &vic_cpi_mailbox[cpu]))
++ if (test_and_clear_bit(VIC_INVALIDATE_CPI, &vic_cpi_mailbox[cpu]))
+ smp_invalidate_interrupt();
+- if(test_and_clear_bit(VIC_RESCHEDULE_CPI, &vic_cpi_mailbox[cpu]))
++ if (test_and_clear_bit(VIC_RESCHEDULE_CPI, &vic_cpi_mailbox[cpu]))
+ smp_reschedule_interrupt();
+- if(test_and_clear_bit(VIC_ENABLE_IRQ_CPI, &vic_cpi_mailbox[cpu]))
++ if (test_and_clear_bit(VIC_ENABLE_IRQ_CPI, &vic_cpi_mailbox[cpu]))
+ smp_enable_irq_interrupt();
+- if(test_and_clear_bit(VIC_CALL_FUNCTION_CPI, &vic_cpi_mailbox[cpu]))
++ if (test_and_clear_bit(VIC_CALL_FUNCTION_CPI, &vic_cpi_mailbox[cpu]))
+ smp_call_function_interrupt();
+ set_irq_regs(old_regs);
}
- const struct crypto_type crypto_blkcipher_type = {
-@@ -510,5 +514,187 @@ const struct crypto_type crypto_blkcipher_type = {
- };
- EXPORT_SYMBOL_GPL(crypto_blkcipher_type);
+-static void
+-do_flush_tlb_all(void* info)
++static void do_flush_tlb_all(void *info)
+ {
+ unsigned long cpu = smp_processor_id();
-+static int crypto_grab_nivcipher(struct crypto_skcipher_spawn *spawn,
-+ const char *name, u32 type, u32 mask)
-+{
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ type = crypto_skcipher_type(type);
-+ mask = crypto_skcipher_mask(mask) | CRYPTO_ALG_GENIV;
-+
-+ alg = crypto_alg_mod_lookup(name, type, mask);
-+ if (IS_ERR(alg))
-+ return PTR_ERR(alg);
-+
-+ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
-+ crypto_mod_put(alg);
-+ return err;
-+}
+ __flush_tlb_all();
+ if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_LAZY)
+- leave_mm(cpu);
++ voyager_leave_mm(cpu);
+ }
+
+-
+ /* flush the TLB of every active CPU in the system */
+-void
+-flush_tlb_all(void)
++void flush_tlb_all(void)
+ {
+ on_each_cpu(do_flush_tlb_all, 0, 1, 1);
+ }
+
+ /* used to set up the trampoline for other CPUs when the memory manager
+ * is sorted out */
+-void __init
+-smp_alloc_memory(void)
++void __init smp_alloc_memory(void)
+ {
+- trampoline_base = (__u32)alloc_bootmem_low_pages(PAGE_SIZE);
+- if(__pa(trampoline_base) >= 0x93000)
++ trampoline_base = (__u32) alloc_bootmem_low_pages(PAGE_SIZE);
++ if (__pa(trampoline_base) >= 0x93000)
+ BUG();
+ }
+
+ /* send a reschedule CPI to one CPU by physical CPU number*/
+-static void
+-voyager_smp_send_reschedule(int cpu)
++static void voyager_smp_send_reschedule(int cpu)
+ {
+ send_one_CPI(cpu, VIC_RESCHEDULE_CPI);
+ }
+
+-
+-int
+-hard_smp_processor_id(void)
++int hard_smp_processor_id(void)
+ {
+ __u8 i;
+ __u8 cpumask = inb(VIC_PROC_WHO_AM_I);
+- if((cpumask & QUAD_IDENTIFIER) == QUAD_IDENTIFIER)
++ if ((cpumask & QUAD_IDENTIFIER) == QUAD_IDENTIFIER)
+ return cpumask & 0x1F;
+
+- for(i = 0; i < 8; i++) {
+- if(cpumask & (1<<i))
++ for (i = 0; i < 8; i++) {
++ if (cpumask & (1 << i))
+ return i;
+ }
+ printk("** WARNING ** Illegal cpuid returned by VIC: %d", cpumask);
+ return 0;
+ }
+
+-int
+-safe_smp_processor_id(void)
++int safe_smp_processor_id(void)
+ {
+ return hard_smp_processor_id();
+ }
+
+ /* broadcast a halt to all other CPUs */
+-static void
+-voyager_smp_send_stop(void)
++static void voyager_smp_send_stop(void)
+ {
+ smp_call_function(smp_stop_cpu_function, NULL, 1, 1);
+ }
+
+ /* this function is triggered in time.c when a clock tick fires
+ * we need to re-broadcast the tick to all CPUs */
+-void
+-smp_vic_timer_interrupt(void)
++void smp_vic_timer_interrupt(void)
+ {
+ send_CPI_allbutself(VIC_TIMER_CPI);
+ smp_local_timer_interrupt();
+@@ -1253,8 +1221,7 @@ smp_vic_timer_interrupt(void)
+ * multiplier is 1 and it can be changed by writing the new multiplier
+ * value into /proc/profile.
+ */
+-void
+-smp_local_timer_interrupt(void)
++void smp_local_timer_interrupt(void)
+ {
+ int cpu = smp_processor_id();
+ long weight;
+@@ -1269,18 +1236,18 @@ smp_local_timer_interrupt(void)
+ *
+ * Interrupts are already masked off at this point.
+ */
+- per_cpu(prof_counter,cpu) = per_cpu(prof_multiplier, cpu);
++ per_cpu(prof_counter, cpu) = per_cpu(prof_multiplier, cpu);
+ if (per_cpu(prof_counter, cpu) !=
+- per_cpu(prof_old_multiplier, cpu)) {
++ per_cpu(prof_old_multiplier, cpu)) {
+ /* FIXME: need to update the vic timer tick here */
+ per_cpu(prof_old_multiplier, cpu) =
+- per_cpu(prof_counter, cpu);
++ per_cpu(prof_counter, cpu);
+ }
+
+ update_process_times(user_mode_vm(get_irq_regs()));
+ }
+
+- if( ((1<<cpu) & voyager_extended_vic_processors) == 0)
++ if (((1 << cpu) & voyager_extended_vic_processors) == 0)
+ /* only extended VIC processors participate in
+ * interrupt distribution */
+ return;
+@@ -1296,12 +1263,12 @@ smp_local_timer_interrupt(void)
+ * we can take more than 100K local irqs per second on a 100 MHz P5.
+ */
+
+- if((++vic_tick[cpu] & 0x7) != 0)
++ if ((++vic_tick[cpu] & 0x7) != 0)
+ return;
+ /* get here every 16 ticks (about every 1/6 of a second) */
+
+ /* Change our priority to give someone else a chance at getting
+- * the IRQ. The algorithm goes like this:
++ * the IRQ. The algorithm goes like this:
+ *
+ * In the VIC, the dynamically routed interrupt is always
+ * handled by the lowest priority eligible (i.e. receiving
+@@ -1325,18 +1292,18 @@ smp_local_timer_interrupt(void)
+ * affinity code since we now try to even up the interrupt
+ * counts when an affinity binding is keeping them on a
+ * particular CPU*/
+- weight = (vic_intr_count[cpu]*voyager_extended_cpus
++ weight = (vic_intr_count[cpu] * voyager_extended_cpus
+ - vic_intr_total) >> 4;
+ weight += 4;
+- if(weight > 7)
++ if (weight > 7)
+ weight = 7;
+- if(weight < 0)
++ if (weight < 0)
+ weight = 0;
+-
+- outb((__u8)weight, VIC_PRIORITY_REGISTER);
+
-+struct crypto_instance *skcipher_geniv_alloc(struct crypto_template *tmpl,
-+ struct rtattr **tb, u32 type,
-+ u32 mask)
++ outb((__u8) weight, VIC_PRIORITY_REGISTER);
+
+ #ifdef VOYAGER_DEBUG
+- if((vic_tick[cpu] & 0xFFF) == 0) {
++ if ((vic_tick[cpu] & 0xFFF) == 0) {
+ /* print this message roughly every 25 secs */
+ printk("VOYAGER SMP: vic_tick[%d] = %lu, weight = %ld\n",
+ cpu, vic_tick[cpu], weight);
+@@ -1345,15 +1312,14 @@ smp_local_timer_interrupt(void)
+ }
+
+ /* setup the profiling timer */
+-int
+-setup_profiling_timer(unsigned int multiplier)
++int setup_profiling_timer(unsigned int multiplier)
+ {
+ int i;
+
+- if ( (!multiplier))
++ if ((!multiplier))
+ return -EINVAL;
+
+- /*
++ /*
+ * Set the new multiplier for each CPU. CPUs don't start using the
+ * new values until the next timer interrupt in which they do process
+ * accounting.
+@@ -1367,15 +1333,13 @@ setup_profiling_timer(unsigned int multiplier)
+ /* This is a bit of a mess, but forced on us by the genirq changes
+ * there's no genirq handler that really does what voyager wants
+ * so hack it up with the simple IRQ handler */
+-static void fastcall
+-handle_vic_irq(unsigned int irq, struct irq_desc *desc)
++static void handle_vic_irq(unsigned int irq, struct irq_desc *desc)
+ {
+ before_handle_vic_irq(irq);
+ handle_simple_irq(irq, desc);
+ after_handle_vic_irq(irq);
+ }
+
+-
+ /* The CPIs are handled in the per cpu 8259s, so they must be
+ * enabled to be received: FIX: enabling the CPIs in the early
+ * boot sequence interferes with bug checking; enable them later
+@@ -1385,13 +1349,12 @@ handle_vic_irq(unsigned int irq, struct irq_desc *desc)
+ #define QIC_SET_GATE(cpi, vector) \
+ set_intr_gate((cpi) + QIC_DEFAULT_CPI_BASE, (vector))
+
+-void __init
+-smp_intr_init(void)
++void __init smp_intr_init(void)
+ {
+ int i;
+
+ /* initialize the per cpu irq mask to all disabled */
+- for(i = 0; i < NR_CPUS; i++)
++ for (i = 0; i < NR_CPUS; i++)
+ vic_irq_mask[i] = 0xFFFF;
+
+ VIC_SET_GATE(VIC_CPI_LEVEL0, vic_cpi_interrupt);
+@@ -1404,42 +1367,40 @@ smp_intr_init(void)
+ QIC_SET_GATE(QIC_RESCHEDULE_CPI, qic_reschedule_interrupt);
+ QIC_SET_GATE(QIC_ENABLE_IRQ_CPI, qic_enable_irq_interrupt);
+ QIC_SET_GATE(QIC_CALL_FUNCTION_CPI, qic_call_function_interrupt);
+-
+
+- /* now put the VIC descriptor into the first 48 IRQs
++ /* now put the VIC descriptor into the first 48 IRQs
+ *
+ * This is for later: first 16 correspond to PC IRQs; next 16
+ * are Primary MC IRQs and final 16 are Secondary MC IRQs */
+- for(i = 0; i < 48; i++)
++ for (i = 0; i < 48; i++)
+ set_irq_chip_and_handler(i, &vic_chip, handle_vic_irq);
+ }
+
+ /* send a CPI at level cpi to a set of cpus in cpuset (set 1 bit per
+ * processor to receive CPI */
+-static void
+-send_CPI(__u32 cpuset, __u8 cpi)
++static void send_CPI(__u32 cpuset, __u8 cpi)
+ {
+ int cpu;
+ __u32 quad_cpuset = (cpuset & voyager_quad_processors);
+
+- if(cpi < VIC_START_FAKE_CPI) {
+- /* fake CPI are only used for booting, so send to the
++ if (cpi < VIC_START_FAKE_CPI) {
++ /* fake CPI are only used for booting, so send to the
+ * extended quads as well---Quads must be VIC booted */
+- outb((__u8)(cpuset), VIC_CPI_Registers[cpi]);
++ outb((__u8) (cpuset), VIC_CPI_Registers[cpi]);
+ return;
+ }
+- if(quad_cpuset)
++ if (quad_cpuset)
+ send_QIC_CPI(quad_cpuset, cpi);
+ cpuset &= ~quad_cpuset;
+ cpuset &= 0xff; /* only first 8 CPUs vaild for VIC CPI */
+- if(cpuset == 0)
++ if (cpuset == 0)
+ return;
+ for_each_online_cpu(cpu) {
+- if(cpuset & (1<<cpu))
++ if (cpuset & (1 << cpu))
+ set_bit(cpi, &vic_cpi_mailbox[cpu]);
+ }
+- if(cpuset)
+- outb((__u8)cpuset, VIC_CPI_Registers[VIC_CPI_LEVEL0]);
++ if (cpuset)
++ outb((__u8) cpuset, VIC_CPI_Registers[VIC_CPI_LEVEL0]);
+ }
+
+ /* Acknowledge receipt of CPI in the QIC, clear in QIC hardware and
+@@ -1448,20 +1409,19 @@ send_CPI(__u32 cpuset, __u8 cpi)
+ * DON'T make this inline otherwise the cache line read will be
+ * optimised away
+ * */
+-static int
+-ack_QIC_CPI(__u8 cpi) {
++static int ack_QIC_CPI(__u8 cpi)
+{
-+ struct {
-+ int (*setkey)(struct crypto_ablkcipher *tfm, const u8 *key,
-+ unsigned int keylen);
-+ int (*encrypt)(struct ablkcipher_request *req);
-+ int (*decrypt)(struct ablkcipher_request *req);
-+
-+ unsigned int min_keysize;
-+ unsigned int max_keysize;
-+ unsigned int ivsize;
-+
-+ const char *geniv;
-+ } balg;
-+ const char *name;
-+ struct crypto_skcipher_spawn *spawn;
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
-+
-+ if ((algt->type ^ (CRYPTO_ALG_TYPE_GIVCIPHER | CRYPTO_ALG_GENIV)) &
-+ algt->mask)
-+ return ERR_PTR(-EINVAL);
+ __u8 cpu = hard_smp_processor_id();
+
+ cpi &= 7;
+
+- outb(1<<cpi, QIC_INTERRUPT_CLEAR1);
++ outb(1 << cpi, QIC_INTERRUPT_CLEAR1);
+ return voyager_quad_cpi_addr[cpu]->qic_cpi[cpi].cpi;
+ }
+
+-static void
+-ack_special_QIC_CPI(__u8 cpi)
++static void ack_special_QIC_CPI(__u8 cpi)
+ {
+- switch(cpi) {
++ switch (cpi) {
+ case VIC_CMN_INT:
+ outb(QIC_CMN_INT, QIC_INTERRUPT_CLEAR0);
+ break;
+@@ -1474,8 +1434,7 @@ ack_special_QIC_CPI(__u8 cpi)
+ }
+
+ /* Acknowledge receipt of CPI in the VIC (essentially an EOI) */
+-static void
+-ack_VIC_CPI(__u8 cpi)
++static void ack_VIC_CPI(__u8 cpi)
+ {
+ #ifdef VOYAGER_DEBUG
+ unsigned long flags;
+@@ -1484,17 +1443,17 @@ ack_VIC_CPI(__u8 cpi)
+
+ local_irq_save(flags);
+ isr = vic_read_isr();
+- if((isr & (1<<(cpi &7))) == 0) {
++ if ((isr & (1 << (cpi & 7))) == 0) {
+ printk("VOYAGER SMP: CPU%d lost CPI%d\n", cpu, cpi);
+ }
+ #endif
+ /* send specific EOI; the two system interrupts have
+ * bit 4 set for a separate vector but behave as the
+ * corresponding 3 bit intr */
+- outb_p(0x60|(cpi & 7),0x20);
++ outb_p(0x60 | (cpi & 7), 0x20);
+
+ #ifdef VOYAGER_DEBUG
+- if((vic_read_isr() & (1<<(cpi &7))) != 0) {
++ if ((vic_read_isr() & (1 << (cpi & 7))) != 0) {
+ printk("VOYAGER SMP: CPU%d still asserting CPI%d\n", cpu, cpi);
+ }
+ local_irq_restore(flags);
+@@ -1502,12 +1461,11 @@ ack_VIC_CPI(__u8 cpi)
+ }
+
+ /* cribbed with thanks from irq.c */
+-#define __byte(x,y) (((unsigned char *)&(y))[x])
++#define __byte(x,y) (((unsigned char *)&(y))[x])
+ #define cached_21(cpu) (__byte(0,vic_irq_mask[cpu]))
+ #define cached_A1(cpu) (__byte(1,vic_irq_mask[cpu]))
+
+-static unsigned int
+-startup_vic_irq(unsigned int irq)
++static unsigned int startup_vic_irq(unsigned int irq)
+ {
+ unmask_vic_irq(irq);
+
+@@ -1535,13 +1493,12 @@ startup_vic_irq(unsigned int irq)
+ * broadcast an Interrupt enable CPI which causes all other CPUs to
+ * adjust their masks accordingly. */
+
+-static void
+-unmask_vic_irq(unsigned int irq)
++static void unmask_vic_irq(unsigned int irq)
+ {
+ /* linux doesn't to processor-irq affinity, so enable on
+ * all CPUs we know about */
+ int cpu = smp_processor_id(), real_cpu;
+- __u16 mask = (1<<irq);
++ __u16 mask = (1 << irq);
+ __u32 processorList = 0;
+ unsigned long flags;
+
+@@ -1549,78 +1506,72 @@ unmask_vic_irq(unsigned int irq)
+ irq, cpu, cpu_irq_affinity[cpu]));
+ spin_lock_irqsave(&vic_irq_lock, flags);
+ for_each_online_cpu(real_cpu) {
+- if(!(voyager_extended_vic_processors & (1<<real_cpu)))
++ if (!(voyager_extended_vic_processors & (1 << real_cpu)))
+ continue;
+- if(!(cpu_irq_affinity[real_cpu] & mask)) {
++ if (!(cpu_irq_affinity[real_cpu] & mask)) {
+ /* irq has no affinity for this CPU, ignore */
+ continue;
+ }
+- if(real_cpu == cpu) {
++ if (real_cpu == cpu) {
+ enable_local_vic_irq(irq);
+- }
+- else if(vic_irq_mask[real_cpu] & mask) {
++ } else if (vic_irq_mask[real_cpu] & mask) {
+ vic_irq_enable_mask[real_cpu] |= mask;
+- processorList |= (1<<real_cpu);
++ processorList |= (1 << real_cpu);
+ }
+ }
+ spin_unlock_irqrestore(&vic_irq_lock, flags);
+- if(processorList)
++ if (processorList)
+ send_CPI(processorList, VIC_ENABLE_IRQ_CPI);
+ }
+
+-static void
+-mask_vic_irq(unsigned int irq)
++static void mask_vic_irq(unsigned int irq)
+ {
+ /* lazy disable, do nothing */
+ }
+
+-static void
+-enable_local_vic_irq(unsigned int irq)
++static void enable_local_vic_irq(unsigned int irq)
+ {
+ __u8 cpu = smp_processor_id();
+ __u16 mask = ~(1 << irq);
+ __u16 old_mask = vic_irq_mask[cpu];
+
+ vic_irq_mask[cpu] &= mask;
+- if(vic_irq_mask[cpu] == old_mask)
++ if (vic_irq_mask[cpu] == old_mask)
+ return;
+
+ VDEBUG(("VOYAGER DEBUG: Enabling irq %d in hardware on CPU %d\n",
+ irq, cpu));
+
+ if (irq & 8) {
+- outb_p(cached_A1(cpu),0xA1);
++ outb_p(cached_A1(cpu), 0xA1);
+ (void)inb_p(0xA1);
+- }
+- else {
+- outb_p(cached_21(cpu),0x21);
++ } else {
++ outb_p(cached_21(cpu), 0x21);
+ (void)inb_p(0x21);
+ }
+ }
+
+-static void
+-disable_local_vic_irq(unsigned int irq)
++static void disable_local_vic_irq(unsigned int irq)
+ {
+ __u8 cpu = smp_processor_id();
+ __u16 mask = (1 << irq);
+ __u16 old_mask = vic_irq_mask[cpu];
+
+- if(irq == 7)
++ if (irq == 7)
+ return;
+
+ vic_irq_mask[cpu] |= mask;
+- if(old_mask == vic_irq_mask[cpu])
++ if (old_mask == vic_irq_mask[cpu])
+ return;
+
+ VDEBUG(("VOYAGER DEBUG: Disabling irq %d in hardware on CPU %d\n",
+ irq, cpu));
+
+ if (irq & 8) {
+- outb_p(cached_A1(cpu),0xA1);
++ outb_p(cached_A1(cpu), 0xA1);
+ (void)inb_p(0xA1);
+- }
+- else {
+- outb_p(cached_21(cpu),0x21);
++ } else {
++ outb_p(cached_21(cpu), 0x21);
+ (void)inb_p(0x21);
+ }
+ }
+@@ -1631,8 +1582,7 @@ disable_local_vic_irq(unsigned int irq)
+ * interrupt in the vic, so we merely set a flag (IRQ_DISABLED). If
+ * this interrupt actually comes in, then we mask and ack here to push
+ * the interrupt off to another CPU */
+-static void
+-before_handle_vic_irq(unsigned int irq)
++static void before_handle_vic_irq(unsigned int irq)
+ {
+ irq_desc_t *desc = irq_desc + irq;
+ __u8 cpu = smp_processor_id();
+@@ -1641,16 +1591,16 @@ before_handle_vic_irq(unsigned int irq)
+ vic_intr_total++;
+ vic_intr_count[cpu]++;
+
+- if(!(cpu_irq_affinity[cpu] & (1<<irq))) {
++ if (!(cpu_irq_affinity[cpu] & (1 << irq))) {
+ /* The irq is not in our affinity mask, push it off
+ * onto another CPU */
+- VDEBUG(("VOYAGER DEBUG: affinity triggered disable of irq %d on cpu %d\n",
+- irq, cpu));
++ VDEBUG(("VOYAGER DEBUG: affinity triggered disable of irq %d "
++ "on cpu %d\n", irq, cpu));
+ disable_local_vic_irq(irq);
+ /* set IRQ_INPROGRESS to prevent the handler in irq.c from
+ * actually calling the interrupt routine */
+ desc->status |= IRQ_REPLAY | IRQ_INPROGRESS;
+- } else if(desc->status & IRQ_DISABLED) {
++ } else if (desc->status & IRQ_DISABLED) {
+ /* Damn, the interrupt actually arrived, do the lazy
+ * disable thing. The interrupt routine in irq.c will
+ * not handle a IRQ_DISABLED interrupt, so nothing more
+@@ -1667,8 +1617,7 @@ before_handle_vic_irq(unsigned int irq)
+ }
+
+ /* Finish the VIC interrupt: basically mask */
+-static void
+-after_handle_vic_irq(unsigned int irq)
++static void after_handle_vic_irq(unsigned int irq)
+ {
+ irq_desc_t *desc = irq_desc + irq;
+
+@@ -1685,11 +1634,11 @@ after_handle_vic_irq(unsigned int irq)
+ #ifdef VOYAGER_DEBUG
+ /* DEBUG: before we ack, check what's in progress */
+ isr = vic_read_isr();
+- if((isr & (1<<irq) && !(status & IRQ_REPLAY)) == 0) {
++ if ((isr & (1 << irq) && !(status & IRQ_REPLAY)) == 0) {
+ int i;
+ __u8 cpu = smp_processor_id();
+ __u8 real_cpu;
+- int mask; /* Um... initialize me??? --RR */
++ int mask; /* Um... initialize me??? --RR */
+
+ printk("VOYAGER SMP: CPU%d lost interrupt %d\n",
+ cpu, irq);
+@@ -1698,9 +1647,10 @@ after_handle_vic_irq(unsigned int irq)
+ outb(VIC_CPU_MASQUERADE_ENABLE | real_cpu,
+ VIC_PROCESSOR_ID);
+ isr = vic_read_isr();
+- if(isr & (1<<irq)) {
+- printk("VOYAGER SMP: CPU%d ack irq %d\n",
+- real_cpu, irq);
++ if (isr & (1 << irq)) {
++ printk
++ ("VOYAGER SMP: CPU%d ack irq %d\n",
++ real_cpu, irq);
+ ack_vic_irq(irq);
+ }
+ outb(cpu, VIC_PROCESSOR_ID);
+@@ -1711,7 +1661,7 @@ after_handle_vic_irq(unsigned int irq)
+ * receipt by another CPU so everything must be in
+ * order here */
+ ack_vic_irq(irq);
+- if(status & IRQ_REPLAY) {
++ if (status & IRQ_REPLAY) {
+ /* replay is set if we disable the interrupt
+ * in the before_handle_vic_irq() routine, so
+ * clear the in progress bit here to allow the
+@@ -1720,9 +1670,9 @@ after_handle_vic_irq(unsigned int irq)
+ }
+ #ifdef VOYAGER_DEBUG
+ isr = vic_read_isr();
+- if((isr & (1<<irq)) != 0)
+- printk("VOYAGER SMP: after_handle_vic_irq() after ack irq=%d, isr=0x%x\n",
+- irq, isr);
++ if ((isr & (1 << irq)) != 0)
++ printk("VOYAGER SMP: after_handle_vic_irq() after "
++ "ack irq=%d, isr=0x%x\n", irq, isr);
+ #endif /* VOYAGER_DEBUG */
+ }
+ _raw_spin_unlock(&vic_irq_lock);
+@@ -1731,7 +1681,6 @@ after_handle_vic_irq(unsigned int irq)
+ * may be intercepted by another CPU if reasserted */
+ }
+
+-
+ /* Linux processor - interrupt affinity manipulations.
+ *
+ * For each processor, we maintain a 32 bit irq affinity mask.
+@@ -1748,8 +1697,7 @@ after_handle_vic_irq(unsigned int irq)
+ * change the mask and then do an interrupt enable CPI to re-enable on
+ * the selected processors */
+
+-void
+-set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
++void set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
+ {
+ /* Only extended processors handle interrupts */
+ unsigned long real_mask;
+@@ -1757,13 +1705,13 @@ set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
+ int cpu;
+
+ real_mask = cpus_addr(mask)[0] & voyager_extended_vic_processors;
+-
+- if(cpus_addr(mask)[0] == 0)
+
-+ name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(name);
-+ if (IS_ERR(name))
-+ return ERR_PTR(err);
++ if (cpus_addr(mask)[0] == 0)
+ /* can't have no CPUs to accept the interrupt -- extremely
+ * bad things will happen */
+ return;
+
+- if(irq == 0)
++ if (irq == 0)
+ /* can't change the affinity of the timer IRQ. This
+ * is due to the constraint in the voyager
+	 * architecture that the CPI also comes in on an IRQ
+@@ -1772,7 +1720,7 @@ set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
+	 * will no longer be able to accept VIC CPIs */
+ return;
+
+- if(irq >= 32)
++ if (irq >= 32)
+ /* You can only have 32 interrupts in a voyager system
+ * (and 32 only if you have a secondary microchannel
+ * bus) */
+@@ -1780,8 +1728,8 @@ set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
+
+ for_each_online_cpu(cpu) {
+ unsigned long cpu_mask = 1 << cpu;
+-
+- if(cpu_mask & real_mask) {
+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-+ if (!inst)
-+ return ERR_PTR(-ENOMEM);
++ if (cpu_mask & real_mask) {
+ /* enable the interrupt for this cpu */
+ cpu_irq_affinity[cpu] |= irq_mask;
+ } else {
+@@ -1800,25 +1748,23 @@ set_vic_irq_affinity(unsigned int irq, cpumask_t mask)
+ unmask_vic_irq(irq);
+ }
+
+-static void
+-ack_vic_irq(unsigned int irq)
++static void ack_vic_irq(unsigned int irq)
+ {
+ if (irq & 8) {
+- outb(0x62,0x20); /* Specific EOI to cascade */
+- outb(0x60|(irq & 7),0xA0);
++ outb(0x62, 0x20); /* Specific EOI to cascade */
++ outb(0x60 | (irq & 7), 0xA0);
+ } else {
+- outb(0x60 | (irq & 7),0x20);
++ outb(0x60 | (irq & 7), 0x20);
+ }
+ }
+
+ /* enable the CPIs. In the VIC, the CPIs are delivered by the 8259
+ * but are not vectored by it. This means that the 8259 mask must be
+ * lowered to receive them */
+-static __init void
+-vic_enable_cpi(void)
++static __init void vic_enable_cpi(void)
+ {
+ __u8 cpu = smp_processor_id();
+-
+
-+ spawn = crypto_instance_ctx(inst);
+ /* just take a copy of the current mask (nop for boot cpu) */
+ vic_irq_mask[cpu] = vic_irq_mask[boot_cpu_id];
+
+@@ -1827,7 +1773,7 @@ vic_enable_cpi(void)
+ /* for sys int and cmn int */
+ enable_local_vic_irq(7);
+
+- if(is_cpu_quad()) {
++ if (is_cpu_quad()) {
+ outb(QIC_DEFAULT_MASK0, QIC_MASK_REGISTER0);
+ outb(QIC_CPI_ENABLE, QIC_MASK_REGISTER1);
+ VDEBUG(("VOYAGER SMP: QIC ENABLE CPI: CPU%d: MASK 0x%x\n",
+@@ -1838,8 +1784,7 @@ vic_enable_cpi(void)
+ cpu, vic_irq_mask[cpu]));
+ }
+
+-void
+-voyager_smp_dump()
++void voyager_smp_dump()
+ {
+ int old_cpu = smp_processor_id(), cpu;
+
+@@ -1865,10 +1810,10 @@ voyager_smp_dump()
+ cpu, vic_irq_mask[cpu], imr, irr, isr);
+ #if 0
+	/* These lines are put in to try to unstick an un-ack'd irq */
+- if(isr != 0) {
++ if (isr != 0) {
+ int irq;
+- for(irq=0; irq<16; irq++) {
+- if(isr & (1<<irq)) {
++ for (irq = 0; irq < 16; irq++) {
++ if (isr & (1 << irq)) {
+ printk("\tCPU%d: ack irq %d\n",
+ cpu, irq);
+ local_irq_save(flags);
+@@ -1884,17 +1829,15 @@ voyager_smp_dump()
+ }
+ }
+
+-void
+-smp_voyager_power_off(void *dummy)
++void smp_voyager_power_off(void *dummy)
+ {
+- if(smp_processor_id() == boot_cpu_id)
++ if (smp_processor_id() == boot_cpu_id)
+ voyager_power_off();
+ else
+ smp_stop_cpu_function(NULL);
+ }
+
+-static void __init
+-voyager_smp_prepare_cpus(unsigned int max_cpus)
++static void __init voyager_smp_prepare_cpus(unsigned int max_cpus)
+ {
+ /* FIXME: ignore max_cpus for now */
+ smp_boot_cpus();
+@@ -1911,8 +1854,7 @@ static void __cpuinit voyager_smp_prepare_boot_cpu(void)
+ cpu_set(smp_processor_id(), cpu_present_map);
+ }
+
+-static int __cpuinit
+-voyager_cpu_up(unsigned int cpu)
++static int __cpuinit voyager_cpu_up(unsigned int cpu)
+ {
+ /* This only works at boot for x86. See "rewrite" above. */
+ if (cpu_isset(cpu, smp_commenced_mask))
+@@ -1928,14 +1870,12 @@ voyager_cpu_up(unsigned int cpu)
+ return 0;
+ }
+
+-static void __init
+-voyager_smp_cpus_done(unsigned int max_cpus)
++static void __init voyager_smp_cpus_done(unsigned int max_cpus)
+ {
+ zap_low_mappings();
+ }
+
+-void __init
+-smp_setup_processor_id(void)
++void __init smp_setup_processor_id(void)
+ {
+ current_thread_info()->cpu = hard_smp_processor_id();
+ x86_write_percpu(cpu_number, hard_smp_processor_id());
+diff --git a/arch/x86/mach-voyager/voyager_thread.c b/arch/x86/mach-voyager/voyager_thread.c
+index 50f9366..c69c931 100644
+--- a/arch/x86/mach-voyager/voyager_thread.c
++++ b/arch/x86/mach-voyager/voyager_thread.c
+@@ -30,12 +30,10 @@
+ #include <asm/mtrr.h>
+ #include <asm/msr.h>
+
+-
+ struct task_struct *voyager_thread;
+ static __u8 set_timeout;
+
+-static int
+-execute(const char *string)
++static int execute(const char *string)
+ {
+ int ret;
+
+@@ -52,48 +50,48 @@ execute(const char *string)
+ NULL,
+ };
+
+- if ((ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC)) != 0) {
+- printk(KERN_ERR "Voyager failed to run \"%s\": %i\n",
+- string, ret);
++ if ((ret =
++ call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC)) != 0) {
++ printk(KERN_ERR "Voyager failed to run \"%s\": %i\n", string,
++ ret);
+ }
+ return ret;
+ }
+
+-static void
+-check_from_kernel(void)
++static void check_from_kernel(void)
+ {
+- if(voyager_status.switch_off) {
+-
++ if (voyager_status.switch_off) {
+
-+ /* Ignore async algorithms if necessary. */
-+ mask |= crypto_requires_sync(algt->type, algt->mask);
+ /* FIXME: This should be configurable via proc */
+ execute("umask 600; echo 0 > /etc/initrunlvl; kill -HUP 1");
+- } else if(voyager_status.power_fail) {
++ } else if (voyager_status.power_fail) {
+ VDEBUG(("Voyager daemon detected AC power failure\n"));
+-
+
-+ crypto_set_skcipher_spawn(spawn, inst);
-+ err = crypto_grab_nivcipher(spawn, name, type, mask);
-+ if (err)
-+ goto err_free_inst;
+	/* FIXME: This should be configurable via proc */
+ execute("umask 600; echo F > /etc/powerstatus; kill -PWR 1");
+ set_timeout = 1;
+ }
+ }
+
+-static void
+-check_continuing_condition(void)
++static void check_continuing_condition(void)
+ {
+- if(voyager_status.power_fail) {
++ if (voyager_status.power_fail) {
+ __u8 data;
+- voyager_cat_psi(VOYAGER_PSI_SUBREAD,
++ voyager_cat_psi(VOYAGER_PSI_SUBREAD,
+ VOYAGER_PSI_AC_FAIL_REG, &data);
+- if((data & 0x1f) == 0) {
++ if ((data & 0x1f) == 0) {
+ /* all power restored */
+- printk(KERN_NOTICE "VOYAGER AC power restored, cancelling shutdown\n");
++ printk(KERN_NOTICE
++ "VOYAGER AC power restored, cancelling shutdown\n");
+ /* FIXME: should be user configureable */
+- execute("umask 600; echo O > /etc/powerstatus; kill -PWR 1");
++ execute
++ ("umask 600; echo O > /etc/powerstatus; kill -PWR 1");
+ set_timeout = 0;
+ }
+ }
+ }
+
+-static int
+-thread(void *unused)
++static int thread(void *unused)
+ {
+ printk(KERN_NOTICE "Voyager starting monitor thread\n");
+
+@@ -102,7 +100,7 @@ thread(void *unused)
+ schedule_timeout(set_timeout ? HZ : MAX_SCHEDULE_TIMEOUT);
+
+ VDEBUG(("Voyager Daemon awoken\n"));
+- if(voyager_status.request_from_kernel == 0) {
++ if (voyager_status.request_from_kernel == 0) {
+ /* probably awoken from timeout */
+ check_continuing_condition();
+ } else {
+@@ -112,20 +110,18 @@ thread(void *unused)
+ }
+ }
+
+-static int __init
+-voyager_thread_start(void)
++static int __init voyager_thread_start(void)
+ {
+ voyager_thread = kthread_run(thread, NULL, "kvoyagerd");
+ if (IS_ERR(voyager_thread)) {
+- printk(KERN_ERR "Voyager: Failed to create system monitor thread.\n");
++ printk(KERN_ERR
++ "Voyager: Failed to create system monitor thread.\n");
+ return PTR_ERR(voyager_thread);
+ }
+ return 0;
+ }
+
+-
+-static void __exit
+-voyager_thread_stop(void)
++static void __exit voyager_thread_stop(void)
+ {
+ kthread_stop(voyager_thread);
+ }
+diff --git a/arch/x86/math-emu/errors.c b/arch/x86/math-emu/errors.c
+index a1b0d22..59d353d 100644
+--- a/arch/x86/math-emu/errors.c
++++ b/arch/x86/math-emu/errors.c
+@@ -33,45 +33,41 @@
+ #undef PRINT_MESSAGES
+ /* */
+
+-
+ #if 0
+ void Un_impl(void)
+ {
+- u_char byte1, FPU_modrm;
+- unsigned long address = FPU_ORIG_EIP;
+-
+- RE_ENTRANT_CHECK_OFF;
+- /* No need to check access_ok(), we have previously fetched these bytes. */
+- printk("Unimplemented FPU Opcode at eip=%p : ", (void __user *) address);
+- if ( FPU_CS == __USER_CS )
+- {
+- while ( 1 )
+- {
+- FPU_get_user(byte1, (u_char __user *) address);
+- if ( (byte1 & 0xf8) == 0xd8 ) break;
+- printk("[%02x]", byte1);
+- address++;
++ u_char byte1, FPU_modrm;
++ unsigned long address = FPU_ORIG_EIP;
++
++ RE_ENTRANT_CHECK_OFF;
++ /* No need to check access_ok(), we have previously fetched these bytes. */
++ printk("Unimplemented FPU Opcode at eip=%p : ", (void __user *)address);
++ if (FPU_CS == __USER_CS) {
++ while (1) {
++ FPU_get_user(byte1, (u_char __user *) address);
++ if ((byte1 & 0xf8) == 0xd8)
++ break;
++ printk("[%02x]", byte1);
++ address++;
++ }
++ printk("%02x ", byte1);
++ FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
+
-+ alg = crypto_skcipher_spawn_alg(spawn);
++ if (FPU_modrm >= 0300)
++ printk("%02x (%02x+%d)\n", FPU_modrm, FPU_modrm & 0xf8,
++ FPU_modrm & 7);
++ else
++ printk("/%d\n", (FPU_modrm >> 3) & 7);
++ } else {
++ printk("cs selector = %04x\n", FPU_CS);
+ }
+- printk("%02x ", byte1);
+- FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
+-
+- if (FPU_modrm >= 0300)
+- printk("%02x (%02x+%d)\n", FPU_modrm, FPU_modrm & 0xf8, FPU_modrm & 7);
+- else
+- printk("/%d\n", (FPU_modrm >> 3) & 7);
+- }
+- else
+- {
+- printk("cs selector = %04x\n", FPU_CS);
+- }
+-
+- RE_ENTRANT_CHECK_ON;
+-
+- EXCEPTION(EX_Invalid);
+
+-}
+-#endif /* 0 */
++ RE_ENTRANT_CHECK_ON;
+
++ EXCEPTION(EX_Invalid);
+
-+ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-+ CRYPTO_ALG_TYPE_BLKCIPHER) {
-+ balg.ivsize = alg->cra_blkcipher.ivsize;
-+ balg.min_keysize = alg->cra_blkcipher.min_keysize;
-+ balg.max_keysize = alg->cra_blkcipher.max_keysize;
++}
++#endif /* 0 */
+
+ /*
+ Called for opcodes which are illegal and which are known to result in a
+@@ -79,139 +75,152 @@ void Un_impl(void)
+ */
+ void FPU_illegal(void)
+ {
+- math_abort(FPU_info,SIGILL);
++ math_abort(FPU_info, SIGILL);
+ }
+
+-
+-
+ void FPU_printall(void)
+ {
+- int i;
+- static const char *tag_desc[] = { "Valid", "Zero", "ERROR", "Empty",
+- "DeNorm", "Inf", "NaN" };
+- u_char byte1, FPU_modrm;
+- unsigned long address = FPU_ORIG_EIP;
+-
+- RE_ENTRANT_CHECK_OFF;
+- /* No need to check access_ok(), we have previously fetched these bytes. */
+- printk("At %p:", (void *) address);
+- if ( FPU_CS == __USER_CS )
+- {
++ int i;
++ static const char *tag_desc[] = { "Valid", "Zero", "ERROR", "Empty",
++ "DeNorm", "Inf", "NaN"
++ };
++ u_char byte1, FPU_modrm;
++ unsigned long address = FPU_ORIG_EIP;
+
-+ balg.setkey = async_setkey;
-+ balg.encrypt = async_encrypt;
-+ balg.decrypt = async_decrypt;
++ RE_ENTRANT_CHECK_OFF;
++ /* No need to check access_ok(), we have previously fetched these bytes. */
++ printk("At %p:", (void *)address);
++ if (FPU_CS == __USER_CS) {
+ #define MAX_PRINTED_BYTES 20
+- for ( i = 0; i < MAX_PRINTED_BYTES; i++ )
+- {
+- FPU_get_user(byte1, (u_char __user *) address);
+- if ( (byte1 & 0xf8) == 0xd8 )
+- {
+- printk(" %02x", byte1);
+- break;
+- }
+- printk(" [%02x]", byte1);
+- address++;
+- }
+- if ( i == MAX_PRINTED_BYTES )
+- printk(" [more..]\n");
+- else
+- {
+- FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
+-
+- if (FPU_modrm >= 0300)
+- printk(" %02x (%02x+%d)\n", FPU_modrm, FPU_modrm & 0xf8, FPU_modrm & 7);
+- else
+- printk(" /%d, mod=%d rm=%d\n",
+- (FPU_modrm >> 3) & 7, (FPU_modrm >> 6) & 3, FPU_modrm & 7);
++ for (i = 0; i < MAX_PRINTED_BYTES; i++) {
++ FPU_get_user(byte1, (u_char __user *) address);
++ if ((byte1 & 0xf8) == 0xd8) {
++ printk(" %02x", byte1);
++ break;
++ }
++ printk(" [%02x]", byte1);
++ address++;
++ }
++ if (i == MAX_PRINTED_BYTES)
++ printk(" [more..]\n");
++ else {
++ FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
+
-+ balg.geniv = alg->cra_blkcipher.geniv;
++ if (FPU_modrm >= 0300)
++ printk(" %02x (%02x+%d)\n", FPU_modrm,
++ FPU_modrm & 0xf8, FPU_modrm & 7);
++ else
++ printk(" /%d, mod=%d rm=%d\n",
++ (FPU_modrm >> 3) & 7,
++ (FPU_modrm >> 6) & 3, FPU_modrm & 7);
++ }
+ } else {
-+ balg.ivsize = alg->cra_ablkcipher.ivsize;
-+ balg.min_keysize = alg->cra_ablkcipher.min_keysize;
-+ balg.max_keysize = alg->cra_ablkcipher.max_keysize;
-+
-+ balg.setkey = alg->cra_ablkcipher.setkey;
-+ balg.encrypt = alg->cra_ablkcipher.encrypt;
-+ balg.decrypt = alg->cra_ablkcipher.decrypt;
++ printk("%04x\n", FPU_CS);
+ }
+- }
+- else
+- {
+- printk("%04x\n", FPU_CS);
+- }
+
+- partial_status = status_word();
++ partial_status = status_word();
+
+ #ifdef DEBUGGING
+-if ( partial_status & SW_Backward ) printk("SW: backward compatibility\n");
+-if ( partial_status & SW_C3 ) printk("SW: condition bit 3\n");
+-if ( partial_status & SW_C2 ) printk("SW: condition bit 2\n");
+-if ( partial_status & SW_C1 ) printk("SW: condition bit 1\n");
+-if ( partial_status & SW_C0 ) printk("SW: condition bit 0\n");
+-if ( partial_status & SW_Summary ) printk("SW: exception summary\n");
+-if ( partial_status & SW_Stack_Fault ) printk("SW: stack fault\n");
+-if ( partial_status & SW_Precision ) printk("SW: loss of precision\n");
+-if ( partial_status & SW_Underflow ) printk("SW: underflow\n");
+-if ( partial_status & SW_Overflow ) printk("SW: overflow\n");
+-if ( partial_status & SW_Zero_Div ) printk("SW: divide by zero\n");
+-if ( partial_status & SW_Denorm_Op ) printk("SW: denormalized operand\n");
+-if ( partial_status & SW_Invalid ) printk("SW: invalid operation\n");
++ if (partial_status & SW_Backward)
++ printk("SW: backward compatibility\n");
++ if (partial_status & SW_C3)
++ printk("SW: condition bit 3\n");
++ if (partial_status & SW_C2)
++ printk("SW: condition bit 2\n");
++ if (partial_status & SW_C1)
++ printk("SW: condition bit 1\n");
++ if (partial_status & SW_C0)
++ printk("SW: condition bit 0\n");
++ if (partial_status & SW_Summary)
++ printk("SW: exception summary\n");
++ if (partial_status & SW_Stack_Fault)
++ printk("SW: stack fault\n");
++ if (partial_status & SW_Precision)
++ printk("SW: loss of precision\n");
++ if (partial_status & SW_Underflow)
++ printk("SW: underflow\n");
++ if (partial_status & SW_Overflow)
++ printk("SW: overflow\n");
++ if (partial_status & SW_Zero_Div)
++ printk("SW: divide by zero\n");
++ if (partial_status & SW_Denorm_Op)
++ printk("SW: denormalized operand\n");
++ if (partial_status & SW_Invalid)
++ printk("SW: invalid operation\n");
+ #endif /* DEBUGGING */
+
+- printk(" SW: b=%d st=%ld es=%d sf=%d cc=%d%d%d%d ef=%d%d%d%d%d%d\n",
+- partial_status & 0x8000 ? 1 : 0, /* busy */
+- (partial_status & 0x3800) >> 11, /* stack top pointer */
+- partial_status & 0x80 ? 1 : 0, /* Error summary status */
+- partial_status & 0x40 ? 1 : 0, /* Stack flag */
+- partial_status & SW_C3?1:0, partial_status & SW_C2?1:0, /* cc */
+- partial_status & SW_C1?1:0, partial_status & SW_C0?1:0, /* cc */
+- partial_status & SW_Precision?1:0, partial_status & SW_Underflow?1:0,
+- partial_status & SW_Overflow?1:0, partial_status & SW_Zero_Div?1:0,
+- partial_status & SW_Denorm_Op?1:0, partial_status & SW_Invalid?1:0);
+-
+-printk(" CW: ic=%d rc=%ld%ld pc=%ld%ld iem=%d ef=%d%d%d%d%d%d\n",
+- control_word & 0x1000 ? 1 : 0,
+- (control_word & 0x800) >> 11, (control_word & 0x400) >> 10,
+- (control_word & 0x200) >> 9, (control_word & 0x100) >> 8,
+- control_word & 0x80 ? 1 : 0,
+- control_word & SW_Precision?1:0, control_word & SW_Underflow?1:0,
+- control_word & SW_Overflow?1:0, control_word & SW_Zero_Div?1:0,
+- control_word & SW_Denorm_Op?1:0, control_word & SW_Invalid?1:0);
+-
+- for ( i = 0; i < 8; i++ )
+- {
+- FPU_REG *r = &st(i);
+- u_char tagi = FPU_gettagi(i);
+- switch (tagi)
+- {
+- case TAG_Empty:
+- continue;
+- break;
+- case TAG_Zero:
+- case TAG_Special:
+- tagi = FPU_Special(r);
+- case TAG_Valid:
+- printk("st(%d) %c .%04lx %04lx %04lx %04lx e%+-6d ", i,
+- getsign(r) ? '-' : '+',
+- (long)(r->sigh >> 16),
+- (long)(r->sigh & 0xFFFF),
+- (long)(r->sigl >> 16),
+- (long)(r->sigl & 0xFFFF),
+- exponent(r) - EXP_BIAS + 1);
+- break;
+- default:
+- printk("Whoops! Error in errors.c: tag%d is %d ", i, tagi);
+- continue;
+- break;
++ printk(" SW: b=%d st=%d es=%d sf=%d cc=%d%d%d%d ef=%d%d%d%d%d%d\n", partial_status & 0x8000 ? 1 : 0, /* busy */
++ (partial_status & 0x3800) >> 11, /* stack top pointer */
++ partial_status & 0x80 ? 1 : 0, /* Error summary status */
++ partial_status & 0x40 ? 1 : 0, /* Stack flag */
++ partial_status & SW_C3 ? 1 : 0, partial_status & SW_C2 ? 1 : 0, /* cc */
++ partial_status & SW_C1 ? 1 : 0, partial_status & SW_C0 ? 1 : 0, /* cc */
++ partial_status & SW_Precision ? 1 : 0,
++ partial_status & SW_Underflow ? 1 : 0,
++ partial_status & SW_Overflow ? 1 : 0,
++ partial_status & SW_Zero_Div ? 1 : 0,
++ partial_status & SW_Denorm_Op ? 1 : 0,
++ partial_status & SW_Invalid ? 1 : 0);
++
++ printk(" CW: ic=%d rc=%d%d pc=%d%d iem=%d ef=%d%d%d%d%d%d\n",
++ control_word & 0x1000 ? 1 : 0,
++ (control_word & 0x800) >> 11, (control_word & 0x400) >> 10,
++ (control_word & 0x200) >> 9, (control_word & 0x100) >> 8,
++ control_word & 0x80 ? 1 : 0,
++ control_word & SW_Precision ? 1 : 0,
++ control_word & SW_Underflow ? 1 : 0,
++ control_word & SW_Overflow ? 1 : 0,
++ control_word & SW_Zero_Div ? 1 : 0,
++ control_word & SW_Denorm_Op ? 1 : 0,
++ control_word & SW_Invalid ? 1 : 0);
+
-+ balg.geniv = alg->cra_ablkcipher.geniv;
-+ }
++ for (i = 0; i < 8; i++) {
++ FPU_REG *r = &st(i);
++ u_char tagi = FPU_gettagi(i);
++ switch (tagi) {
++ case TAG_Empty:
++ continue;
++ break;
++ case TAG_Zero:
++ case TAG_Special:
++ tagi = FPU_Special(r);
++ case TAG_Valid:
++ printk("st(%d) %c .%04lx %04lx %04lx %04lx e%+-6d ", i,
++ getsign(r) ? '-' : '+',
++ (long)(r->sigh >> 16),
++ (long)(r->sigh & 0xFFFF),
++ (long)(r->sigl >> 16),
++ (long)(r->sigl & 0xFFFF),
++ exponent(r) - EXP_BIAS + 1);
++ break;
++ default:
++ printk("Whoops! Error in errors.c: tag%d is %d ", i,
++ tagi);
++ continue;
++ break;
++ }
++ printk("%s\n", tag_desc[(int)(unsigned)tagi]);
+ }
+- printk("%s\n", tag_desc[(int) (unsigned) tagi]);
+- }
+
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_ON;
+
+ }
+
+ static struct {
+- int type;
+- const char *name;
++ int type;
++ const char *name;
+ } exception_names[] = {
+- { EX_StackOver, "stack overflow" },
+- { EX_StackUnder, "stack underflow" },
+- { EX_Precision, "loss of precision" },
+- { EX_Underflow, "underflow" },
+- { EX_Overflow, "overflow" },
+- { EX_ZeroDiv, "divide by zero" },
+- { EX_Denormal, "denormalized operand" },
+- { EX_Invalid, "invalid operation" },
+- { EX_INTERNAL, "INTERNAL BUG in "FPU_VERSION },
+- { 0, NULL }
++ {
++ EX_StackOver, "stack overflow"}, {
++ EX_StackUnder, "stack underflow"}, {
++ EX_Precision, "loss of precision"}, {
++ EX_Underflow, "underflow"}, {
++ EX_Overflow, "overflow"}, {
++ EX_ZeroDiv, "divide by zero"}, {
++ EX_Denormal, "denormalized operand"}, {
++ EX_Invalid, "invalid operation"}, {
++ EX_INTERNAL, "INTERNAL BUG in " FPU_VERSION}, {
++ 0, NULL}
+ };
+
+ /*
+@@ -295,445 +304,386 @@ static struct {
+
+ asmlinkage void FPU_exception(int n)
+ {
+- int i, int_type;
+-
+- int_type = 0; /* Needed only to stop compiler warnings */
+- if ( n & EX_INTERNAL )
+- {
+- int_type = n - EX_INTERNAL;
+- n = EX_INTERNAL;
+- /* Set lots of exception bits! */
+- partial_status |= (SW_Exc_Mask | SW_Summary | SW_Backward);
+- }
+- else
+- {
+- /* Extract only the bits which we use to set the status word */
+- n &= (SW_Exc_Mask);
+- /* Set the corresponding exception bit */
+- partial_status |= n;
+- /* Set summary bits iff exception isn't masked */
+- if ( partial_status & ~control_word & CW_Exceptions )
+- partial_status |= (SW_Summary | SW_Backward);
+- if ( n & (SW_Stack_Fault | EX_Precision) )
+- {
+- if ( !(n & SW_C1) )
+- /* This bit distinguishes over- from underflow for a stack fault,
+- and roundup from round-down for precision loss. */
+- partial_status &= ~SW_C1;
++ int i, int_type;
++
++ int_type = 0; /* Needed only to stop compiler warnings */
++ if (n & EX_INTERNAL) {
++ int_type = n - EX_INTERNAL;
++ n = EX_INTERNAL;
++ /* Set lots of exception bits! */
++ partial_status |= (SW_Exc_Mask | SW_Summary | SW_Backward);
++ } else {
++ /* Extract only the bits which we use to set the status word */
++ n &= (SW_Exc_Mask);
++ /* Set the corresponding exception bit */
++ partial_status |= n;
++ /* Set summary bits iff exception isn't masked */
++ if (partial_status & ~control_word & CW_Exceptions)
++ partial_status |= (SW_Summary | SW_Backward);
++ if (n & (SW_Stack_Fault | EX_Precision)) {
++ if (!(n & SW_C1))
++ /* This bit distinguishes over- from underflow for a stack fault,
++ and roundup from round-down for precision loss. */
++ partial_status &= ~SW_C1;
++ }
+ }
+- }
+
+- RE_ENTRANT_CHECK_OFF;
+- if ( (~control_word & n & CW_Exceptions) || (n == EX_INTERNAL) )
+- {
++ RE_ENTRANT_CHECK_OFF;
++ if ((~control_word & n & CW_Exceptions) || (n == EX_INTERNAL)) {
+ #ifdef PRINT_MESSAGES
+- /* My message from the sponsor */
+- printk(FPU_VERSION" "__DATE__" (C) W. Metzenthen.\n");
++ /* My message from the sponsor */
++ printk(FPU_VERSION " " __DATE__ " (C) W. Metzenthen.\n");
+ #endif /* PRINT_MESSAGES */
+-
+- /* Get a name string for error reporting */
+- for (i=0; exception_names[i].type; i++)
+- if ( (exception_names[i].type & n) == exception_names[i].type )
+- break;
+-
+- if (exception_names[i].type)
+- {
+
-+ err = -EINVAL;
-+ if (!balg.ivsize)
-+ goto err_drop_alg;
++ /* Get a name string for error reporting */
++ for (i = 0; exception_names[i].type; i++)
++ if ((exception_names[i].type & n) ==
++ exception_names[i].type)
++ break;
+
-+ /*
-+ * This is only true if we're constructing an algorithm with its
-+ * default IV generator. For the default generator we elide the
-+ * template name and double-check the IV generator.
-+ */
-+ if (algt->mask & CRYPTO_ALG_GENIV) {
-+ if (!balg.geniv)
-+ balg.geniv = crypto_default_geniv(alg);
-+ err = -EAGAIN;
-+ if (strcmp(tmpl->name, balg.geniv))
-+ goto err_drop_alg;
++ if (exception_names[i].type) {
+ #ifdef PRINT_MESSAGES
+- printk("FP Exception: %s!\n", exception_names[i].name);
++ printk("FP Exception: %s!\n", exception_names[i].name);
+ #endif /* PRINT_MESSAGES */
+- }
+- else
+- printk("FPU emulator: Unknown Exception: 0x%04x!\n", n);
+-
+- if ( n == EX_INTERNAL )
+- {
+- printk("FPU emulator: Internal error type 0x%04x\n", int_type);
+- FPU_printall();
+- }
++ } else
++ printk("FPU emulator: Unknown Exception: 0x%04x!\n", n);
+
-+ memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
-+ memcpy(inst->alg.cra_driver_name, alg->cra_driver_name,
-+ CRYPTO_MAX_ALG_NAME);
-+ } else {
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-+ "%s(%s)", tmpl->name, alg->cra_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_alg;
-+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "%s(%s)", tmpl->name, alg->cra_driver_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_alg;
++ if (n == EX_INTERNAL) {
++ printk("FPU emulator: Internal error type 0x%04x\n",
++ int_type);
++ FPU_printall();
++ }
+ #ifdef PRINT_MESSAGES
+- else
+- FPU_printall();
++ else
++ FPU_printall();
+ #endif /* PRINT_MESSAGES */
+
+- /*
+- * The 80486 generates an interrupt on the next non-control FPU
+- * instruction. So we need some means of flagging it.
+- * We use the ES (Error Summary) bit for this.
+- */
+- }
+- RE_ENTRANT_CHECK_ON;
++ /*
++ * The 80486 generates an interrupt on the next non-control FPU
++ * instruction. So we need some means of flagging it.
++ * We use the ES (Error Summary) bit for this.
++ */
+ }
++ RE_ENTRANT_CHECK_ON;
+
+ #ifdef __DEBUG__
+- math_abort(FPU_info,SIGFPE);
++ math_abort(FPU_info, SIGFPE);
+ #endif /* __DEBUG__ */
+
+ }
+
+-
+ /* Real operation attempted on a NaN. */
+ /* Returns < 0 if the exception is unmasked */
+ int real_1op_NaN(FPU_REG *a)
+ {
+- int signalling, isNaN;
+-
+- isNaN = (exponent(a) == EXP_OVER) && (a->sigh & 0x80000000);
+-
+- /* The default result for the case of two "equal" NaNs (signs may
+- differ) is chosen to reproduce 80486 behaviour */
+- signalling = isNaN && !(a->sigh & 0x40000000);
+-
+- if ( !signalling )
+- {
+- if ( !isNaN ) /* pseudo-NaN, or other unsupported? */
+- {
+- if ( control_word & CW_Invalid )
+- {
+- /* Masked response */
+- reg_copy(&CONST_QNaN, a);
+- }
+- EXCEPTION(EX_Invalid);
+- return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
++ int signalling, isNaN;
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_GIVCIPHER | CRYPTO_ALG_GENIV;
-+ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = alg->cra_blocksize;
-+ inst->alg.cra_alignmask = alg->cra_alignmask;
-+ inst->alg.cra_type = &crypto_givcipher_type;
-+
-+ inst->alg.cra_ablkcipher.ivsize = balg.ivsize;
-+ inst->alg.cra_ablkcipher.min_keysize = balg.min_keysize;
-+ inst->alg.cra_ablkcipher.max_keysize = balg.max_keysize;
-+ inst->alg.cra_ablkcipher.geniv = balg.geniv;
++ isNaN = (exponent(a) == EXP_OVER) && (a->sigh & 0x80000000);
+
-+ inst->alg.cra_ablkcipher.setkey = balg.setkey;
-+ inst->alg.cra_ablkcipher.encrypt = balg.encrypt;
-+ inst->alg.cra_ablkcipher.decrypt = balg.decrypt;
++ /* The default result for the case of two "equal" NaNs (signs may
++ differ) is chosen to reproduce 80486 behaviour */
++ signalling = isNaN && !(a->sigh & 0x40000000);
+
-+out:
-+ return inst;
++ if (!signalling) {
++ if (!isNaN) { /* pseudo-NaN, or other unsupported? */
++ if (control_word & CW_Invalid) {
++ /* Masked response */
++ reg_copy(&CONST_QNaN, a);
++ }
++ EXCEPTION(EX_Invalid);
++ return (!(control_word & CW_Invalid) ? FPU_Exception :
++ 0) | TAG_Special;
++ }
++ return TAG_Special;
+ }
+- return TAG_Special;
+- }
+
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- if ( !(a->sigh & 0x80000000) ) /* pseudo-NaN ? */
+- {
+- reg_copy(&CONST_QNaN, a);
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ if (!(a->sigh & 0x80000000)) { /* pseudo-NaN ? */
++ reg_copy(&CONST_QNaN, a);
++ }
++ /* ensure a Quiet NaN */
++ a->sigh |= 0x40000000;
+ }
+- /* ensure a Quiet NaN */
+- a->sigh |= 0x40000000;
+- }
+
+- EXCEPTION(EX_Invalid);
++ EXCEPTION(EX_Invalid);
+
+- return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
++ return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
+ }
+
+-
+ /* Real operation attempted on two operands, one a NaN. */
+ /* Returns < 0 if the exception is unmasked */
+ int real_2op_NaN(FPU_REG const *b, u_char tagb,
+- int deststnr,
+- FPU_REG const *defaultNaN)
++ int deststnr, FPU_REG const *defaultNaN)
+ {
+- FPU_REG *dest = &st(deststnr);
+- FPU_REG const *a = dest;
+- u_char taga = FPU_gettagi(deststnr);
+- FPU_REG const *x;
+- int signalling, unsupported;
+-
+- if ( taga == TAG_Special )
+- taga = FPU_Special(a);
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
+-
+- /* TW_NaN is also used for unsupported data types. */
+- unsupported = ((taga == TW_NaN)
+- && !((exponent(a) == EXP_OVER) && (a->sigh & 0x80000000)))
+- || ((tagb == TW_NaN)
+- && !((exponent(b) == EXP_OVER) && (b->sigh & 0x80000000)));
+- if ( unsupported )
+- {
+- if ( control_word & CW_Invalid )
+- {
+- /* Masked response */
+- FPU_copy_to_regi(&CONST_QNaN, TAG_Special, deststnr);
+- }
+- EXCEPTION(EX_Invalid);
+- return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
+- }
+-
+- if (taga == TW_NaN)
+- {
+- x = a;
+- if (tagb == TW_NaN)
+- {
+- signalling = !(a->sigh & b->sigh & 0x40000000);
+- if ( significand(b) > significand(a) )
+- x = b;
+- else if ( significand(b) == significand(a) )
+- {
+- /* The default result for the case of two "equal" NaNs (signs may
+- differ) is chosen to reproduce 80486 behaviour */
+- x = defaultNaN;
+- }
+- }
+- else
+- {
+- /* return the quiet version of the NaN in a */
+- signalling = !(a->sigh & 0x40000000);
++ FPU_REG *dest = &st(deststnr);
++ FPU_REG const *a = dest;
++ u_char taga = FPU_gettagi(deststnr);
++ FPU_REG const *x;
++ int signalling, unsupported;
++
++ if (taga == TAG_Special)
++ taga = FPU_Special(a);
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
++
++ /* TW_NaN is also used for unsupported data types. */
++ unsupported = ((taga == TW_NaN)
++ && !((exponent(a) == EXP_OVER)
++ && (a->sigh & 0x80000000)))
++ || ((tagb == TW_NaN)
++ && !((exponent(b) == EXP_OVER) && (b->sigh & 0x80000000)));
++ if (unsupported) {
++ if (control_word & CW_Invalid) {
++ /* Masked response */
++ FPU_copy_to_regi(&CONST_QNaN, TAG_Special, deststnr);
++ }
++ EXCEPTION(EX_Invalid);
++ return (!(control_word & CW_Invalid) ? FPU_Exception : 0) |
++ TAG_Special;
+ }
+- }
+- else
+
-+err_drop_alg:
-+ crypto_drop_skcipher(spawn);
-+err_free_inst:
-+ kfree(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
-+}
-+EXPORT_SYMBOL_GPL(skcipher_geniv_alloc);
++ if (taga == TW_NaN) {
++ x = a;
++ if (tagb == TW_NaN) {
++ signalling = !(a->sigh & b->sigh & 0x40000000);
++ if (significand(b) > significand(a))
++ x = b;
++ else if (significand(b) == significand(a)) {
++ /* The default result for the case of two "equal" NaNs (signs may
++ differ) is chosen to reproduce 80486 behaviour */
++ x = defaultNaN;
++ }
++ } else {
++ /* return the quiet version of the NaN in a */
++ signalling = !(a->sigh & 0x40000000);
++ }
++ } else
+ #ifdef PARANOID
+- if (tagb == TW_NaN)
++ if (tagb == TW_NaN)
+ #endif /* PARANOID */
+- {
+- signalling = !(b->sigh & 0x40000000);
+- x = b;
+- }
++ {
++ signalling = !(b->sigh & 0x40000000);
++ x = b;
++ }
+ #ifdef PARANOID
+- else
+- {
+- signalling = 0;
+- EXCEPTION(EX_INTERNAL|0x113);
+- x = &CONST_QNaN;
+- }
++ else {
++ signalling = 0;
++ EXCEPTION(EX_INTERNAL | 0x113);
++ x = &CONST_QNaN;
++ }
+ #endif /* PARANOID */
+
+- if ( (!signalling) || (control_word & CW_Invalid) )
+- {
+- if ( ! x )
+- x = b;
++ if ((!signalling) || (control_word & CW_Invalid)) {
++ if (!x)
++ x = b;
+
+- if ( !(x->sigh & 0x80000000) ) /* pseudo-NaN ? */
+- x = &CONST_QNaN;
++ if (!(x->sigh & 0x80000000)) /* pseudo-NaN ? */
++ x = &CONST_QNaN;
+
+- FPU_copy_to_regi(x, TAG_Special, deststnr);
++ FPU_copy_to_regi(x, TAG_Special, deststnr);
+
+- if ( !signalling )
+- return TAG_Special;
++ if (!signalling)
++ return TAG_Special;
+
+- /* ensure a Quiet NaN */
+- dest->sigh |= 0x40000000;
+- }
++ /* ensure a Quiet NaN */
++ dest->sigh |= 0x40000000;
++ }
+
+- EXCEPTION(EX_Invalid);
++ EXCEPTION(EX_Invalid);
+
+- return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
++ return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Special;
+ }
+
+-
+ /* Invalid arith operation on Valid registers */
+ /* Returns < 0 if the exception is unmasked */
+ asmlinkage int arith_invalid(int deststnr)
+ {
+
+- EXCEPTION(EX_Invalid);
+-
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_copy_to_regi(&CONST_QNaN, TAG_Special, deststnr);
+- }
+-
+- return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Valid;
++ EXCEPTION(EX_Invalid);
+
+-}
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_copy_to_regi(&CONST_QNaN, TAG_Special, deststnr);
++ }
+
++ return (!(control_word & CW_Invalid) ? FPU_Exception : 0) | TAG_Valid;
+
-+void skcipher_geniv_free(struct crypto_instance *inst)
-+{
-+ crypto_drop_skcipher(crypto_instance_ctx(inst));
-+ kfree(inst);
+}
-+EXPORT_SYMBOL_GPL(skcipher_geniv_free);
-+
-+int skcipher_geniv_init(struct crypto_tfm *tfm)
-+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_ablkcipher *cipher;
-+
-+ cipher = crypto_spawn_skcipher(crypto_instance_ctx(inst));
-+ if (IS_ERR(cipher))
-+ return PTR_ERR(cipher);
+
+ /* Divide a finite number by zero */
+ asmlinkage int FPU_divide_by_zero(int deststnr, u_char sign)
+ {
+- FPU_REG *dest = &st(deststnr);
+- int tag = TAG_Valid;
++ FPU_REG *dest = &st(deststnr);
++ int tag = TAG_Valid;
+
-+ tfm->crt_ablkcipher.base = cipher;
-+ tfm->crt_ablkcipher.reqsize += crypto_ablkcipher_reqsize(cipher);
++ if (control_word & CW_ZeroDiv) {
++ /* The masked response */
++ FPU_copy_to_regi(&CONST_INF, TAG_Special, deststnr);
++ setsign(dest, sign);
++ tag = TAG_Special;
++ }
+
+- if ( control_word & CW_ZeroDiv )
+- {
+- /* The masked response */
+- FPU_copy_to_regi(&CONST_INF, TAG_Special, deststnr);
+- setsign(dest, sign);
+- tag = TAG_Special;
+- }
+-
+- EXCEPTION(EX_ZeroDiv);
++ EXCEPTION(EX_ZeroDiv);
+
+- return (!(control_word & CW_ZeroDiv) ? FPU_Exception : 0) | tag;
++ return (!(control_word & CW_ZeroDiv) ? FPU_Exception : 0) | tag;
+
+ }
+
+-
+ /* This may be called often, so keep it lean */
+ int set_precision_flag(int flags)
+ {
+- if ( control_word & CW_Precision )
+- {
+- partial_status &= ~(SW_C1 & flags);
+- partial_status |= flags; /* The masked response */
+- return 0;
+- }
+- else
+- {
+- EXCEPTION(flags);
+- return 1;
+- }
++ if (control_word & CW_Precision) {
++ partial_status &= ~(SW_C1 & flags);
++ partial_status |= flags; /* The masked response */
++ return 0;
++ } else {
++ EXCEPTION(flags);
++ return 1;
++ }
+ }
+
+-
+ /* This may be called often, so keep it lean */
+ asmlinkage void set_precision_flag_up(void)
+ {
+- if ( control_word & CW_Precision )
+- partial_status |= (SW_Precision | SW_C1); /* The masked response */
+- else
+- EXCEPTION(EX_Precision | SW_C1);
++ if (control_word & CW_Precision)
++ partial_status |= (SW_Precision | SW_C1); /* The masked response */
++ else
++ EXCEPTION(EX_Precision | SW_C1);
+ }
+
+-
+ /* This may be called often, so keep it lean */
+ asmlinkage void set_precision_flag_down(void)
+ {
+- if ( control_word & CW_Precision )
+- { /* The masked response */
+- partial_status &= ~SW_C1;
+- partial_status |= SW_Precision;
+- }
+- else
+- EXCEPTION(EX_Precision);
++ if (control_word & CW_Precision) { /* The masked response */
++ partial_status &= ~SW_C1;
++ partial_status |= SW_Precision;
++ } else
++ EXCEPTION(EX_Precision);
+ }
+
+-
+ asmlinkage int denormal_operand(void)
+ {
+- if ( control_word & CW_Denormal )
+- { /* The masked response */
+- partial_status |= SW_Denorm_Op;
+- return TAG_Special;
+- }
+- else
+- {
+- EXCEPTION(EX_Denormal);
+- return TAG_Special | FPU_Exception;
+- }
++ if (control_word & CW_Denormal) { /* The masked response */
++ partial_status |= SW_Denorm_Op;
++ return TAG_Special;
++ } else {
++ EXCEPTION(EX_Denormal);
++ return TAG_Special | FPU_Exception;
++ }
+ }
+
+-
+ asmlinkage int arith_overflow(FPU_REG *dest)
+ {
+- int tag = TAG_Valid;
++ int tag = TAG_Valid;
+
+- if ( control_word & CW_Overflow )
+- {
+- /* The masked response */
++ if (control_word & CW_Overflow) {
++ /* The masked response */
+ /* ###### The response here depends upon the rounding mode */
+- reg_copy(&CONST_INF, dest);
+- tag = TAG_Special;
+- }
+- else
+- {
+- /* Subtract the magic number from the exponent */
+- addexponent(dest, (-3 * (1 << 13)));
+- }
+-
+- EXCEPTION(EX_Overflow);
+- if ( control_word & CW_Overflow )
+- {
+- /* The overflow exception is masked. */
+- /* By definition, precision is lost.
+- The roundup bit (C1) is also set because we have
+- "rounded" upwards to Infinity. */
+- EXCEPTION(EX_Precision | SW_C1);
+- return tag;
+- }
+-
+- return tag;
++ reg_copy(&CONST_INF, dest);
++ tag = TAG_Special;
++ } else {
++ /* Subtract the magic number from the exponent */
++ addexponent(dest, (-3 * (1 << 13)));
++ }
+
+-}
++ EXCEPTION(EX_Overflow);
++ if (control_word & CW_Overflow) {
++ /* The overflow exception is masked. */
++ /* By definition, precision is lost.
++ The roundup bit (C1) is also set because we have
++ "rounded" upwards to Infinity. */
++ EXCEPTION(EX_Precision | SW_C1);
++ return tag;
++ }
+
-+ return 0;
++ return tag;
+
+}
-+EXPORT_SYMBOL_GPL(skcipher_geniv_init);
+
+ asmlinkage int arith_underflow(FPU_REG *dest)
+ {
+- int tag = TAG_Valid;
+-
+- if ( control_word & CW_Underflow )
+- {
+- /* The masked response */
+- if ( exponent16(dest) <= EXP_UNDER - 63 )
+- {
+- reg_copy(&CONST_Z, dest);
+- partial_status &= ~SW_C1; /* Round down. */
+- tag = TAG_Zero;
++ int tag = TAG_Valid;
++
++ if (control_word & CW_Underflow) {
++ /* The masked response */
++ if (exponent16(dest) <= EXP_UNDER - 63) {
++ reg_copy(&CONST_Z, dest);
++ partial_status &= ~SW_C1; /* Round down. */
++ tag = TAG_Zero;
++ } else {
++ stdexp(dest);
++ }
++ } else {
++ /* Add the magic number to the exponent. */
++ addexponent(dest, (3 * (1 << 13)) + EXTENDED_Ebias);
+ }
+- else
+- {
+- stdexp(dest);
+
-+void skcipher_geniv_exit(struct crypto_tfm *tfm)
-+{
-+ crypto_free_ablkcipher(tfm->crt_ablkcipher.base);
++ EXCEPTION(EX_Underflow);
++ if (control_word & CW_Underflow) {
++ /* The underflow exception is masked. */
++ EXCEPTION(EX_Precision);
++ return tag;
+ }
+- }
+- else
+- {
+- /* Add the magic number to the exponent. */
+- addexponent(dest, (3 * (1 << 13)) + EXTENDED_Ebias);
+- }
+-
+- EXCEPTION(EX_Underflow);
+- if ( control_word & CW_Underflow )
+- {
+- /* The underflow exception is masked. */
+- EXCEPTION(EX_Precision);
+- return tag;
+- }
+-
+- return tag;
+
+-}
++ return tag;
+
+}
-+EXPORT_SYMBOL_GPL(skcipher_geniv_exit);
-+
- MODULE_LICENSE("GPL");
- MODULE_DESCRIPTION("Generic block chaining cipher type");
-diff --git a/crypto/camellia.c b/crypto/camellia.c
-index 6877ecf..493fee7 100644
---- a/crypto/camellia.c
-+++ b/crypto/camellia.c
-@@ -36,176 +36,6 @@
- #include <linux/kernel.h>
- #include <linux/module.h>
+
+ void FPU_stack_overflow(void)
+ {
+
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- top--;
+- FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
+- }
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ top--;
++ FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
++ }
+
+- EXCEPTION(EX_StackOver);
++ EXCEPTION(EX_StackOver);
+
+- return;
++ return;
+
+ }
-
--#define CAMELLIA_MIN_KEY_SIZE 16
--#define CAMELLIA_MAX_KEY_SIZE 32
--#define CAMELLIA_BLOCK_SIZE 16
--#define CAMELLIA_TABLE_BYTE_LEN 272
--#define CAMELLIA_TABLE_WORD_LEN (CAMELLIA_TABLE_BYTE_LEN / 4)
+ void FPU_stack_underflow(void)
+ {
+
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
+- }
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
++ }
+
+- EXCEPTION(EX_StackUnder);
++ EXCEPTION(EX_StackUnder);
+
+- return;
++ return;
+
+ }
+
-
--typedef u32 KEY_TABLE_TYPE[CAMELLIA_TABLE_WORD_LEN];
+ void FPU_stack_underflow_i(int i)
+ {
+
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_copy_to_regi(&CONST_QNaN, TAG_Special, i);
+- }
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_copy_to_regi(&CONST_QNaN, TAG_Special, i);
++ }
+
+- EXCEPTION(EX_StackUnder);
++ EXCEPTION(EX_StackUnder);
+
+- return;
++ return;
+
+ }
+
-
+ void FPU_stack_underflow_pop(int i)
+ {
+
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_copy_to_regi(&CONST_QNaN, TAG_Special, i);
+- FPU_pop();
+- }
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_copy_to_regi(&CONST_QNaN, TAG_Special, i);
++ FPU_pop();
++ }
+
+- EXCEPTION(EX_StackUnder);
++ EXCEPTION(EX_StackUnder);
+
+- return;
++ return;
+
+ }
-
--/* key constants */
+diff --git a/arch/x86/math-emu/exception.h b/arch/x86/math-emu/exception.h
+index b463f21..67f43a4 100644
+--- a/arch/x86/math-emu/exception.h
++++ b/arch/x86/math-emu/exception.h
+@@ -9,7 +9,6 @@
+ #ifndef _EXCEPTION_H_
+ #define _EXCEPTION_H_
+
-
--#define CAMELLIA_SIGMA1L (0xA09E667FL)
--#define CAMELLIA_SIGMA1R (0x3BCC908BL)
--#define CAMELLIA_SIGMA2L (0xB67AE858L)
--#define CAMELLIA_SIGMA2R (0x4CAA73B2L)
--#define CAMELLIA_SIGMA3L (0xC6EF372FL)
--#define CAMELLIA_SIGMA3R (0xE94F82BEL)
--#define CAMELLIA_SIGMA4L (0x54FF53A5L)
--#define CAMELLIA_SIGMA4R (0xF1D36F1CL)
--#define CAMELLIA_SIGMA5L (0x10E527FAL)
--#define CAMELLIA_SIGMA5R (0xDE682D1DL)
--#define CAMELLIA_SIGMA6L (0xB05688C2L)
--#define CAMELLIA_SIGMA6R (0xB3E6C1FDL)
+ #ifdef __ASSEMBLY__
+ #define Const_(x) $##x
+ #else
+@@ -20,8 +19,8 @@
+ #include "fpu_emu.h"
+ #endif /* SW_C1 */
+
+-#define FPU_BUSY Const_(0x8000) /* FPU busy bit (8087 compatibility) */
+-#define EX_ErrorSummary Const_(0x0080) /* Error summary status */
++#define FPU_BUSY Const_(0x8000) /* FPU busy bit (8087 compatibility) */
++#define EX_ErrorSummary Const_(0x0080) /* Error summary status */
+ /* Special exceptions: */
+ #define EX_INTERNAL Const_(0x8000) /* Internal error in wm-FPU-emu */
+ #define EX_StackOver Const_(0x0041|SW_C1) /* stack overflow */
+@@ -34,11 +33,9 @@
+ #define EX_Denormal Const_(0x0002) /* denormalized operand */
+ #define EX_Invalid Const_(0x0001) /* invalid operation */
+
-
--struct camellia_ctx {
-- int key_length;
-- KEY_TABLE_TYPE key_table;
--};
+ #define PRECISION_LOST_UP Const_((EX_Precision | SW_C1))
+ #define PRECISION_LOST_DOWN Const_(EX_Precision)
+
-
+ #ifndef __ASSEMBLY__
+
+ #ifdef DEBUG
+@@ -48,6 +45,6 @@
+ #define EXCEPTION(x) FPU_exception(x)
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLY__ */
+
+ #endif /* _EXCEPTION_H_ */
+diff --git a/arch/x86/math-emu/fpu_arith.c b/arch/x86/math-emu/fpu_arith.c
+index 6972dec..aeab24e 100644
+--- a/arch/x86/math-emu/fpu_arith.c
++++ b/arch/x86/math-emu/fpu_arith.c
+@@ -15,160 +15,138 @@
+ #include "control_w.h"
+ #include "status_w.h"
+
-
--/*
-- * macros
-- */
+ void fadd__(void)
+ {
+- /* fadd st,st(i) */
+- int i = FPU_rm;
+- clear_C1();
+- FPU_add(&st(i), FPU_gettagi(i), 0, control_word);
++ /* fadd st,st(i) */
++ int i = FPU_rm;
++ clear_C1();
++ FPU_add(&st(i), FPU_gettagi(i), 0, control_word);
+ }
+
-
+ void fmul__(void)
+ {
+- /* fmul st,st(i) */
+- int i = FPU_rm;
+- clear_C1();
+- FPU_mul(&st(i), FPU_gettagi(i), 0, control_word);
++ /* fmul st,st(i) */
++ int i = FPU_rm;
++ clear_C1();
++ FPU_mul(&st(i), FPU_gettagi(i), 0, control_word);
+ }
+
-
--# define GETU32(pt) (((u32)(pt)[0] << 24) \
-- ^ ((u32)(pt)[1] << 16) \
-- ^ ((u32)(pt)[2] << 8) \
-- ^ ((u32)(pt)[3]))
-
--#define COPY4WORD(dst, src) \
-- do { \
-- (dst)[0]=(src)[0]; \
-- (dst)[1]=(src)[1]; \
-- (dst)[2]=(src)[2]; \
-- (dst)[3]=(src)[3]; \
-- }while(0)
+ void fsub__(void)
+ {
+- /* fsub st,st(i) */
+- clear_C1();
+- FPU_sub(0, FPU_rm, control_word);
++ /* fsub st,st(i) */
++ clear_C1();
++ FPU_sub(0, FPU_rm, control_word);
+ }
+
-
--#define SWAP4WORD(word) \
-- do { \
-- CAMELLIA_SWAP4((word)[0]); \
-- CAMELLIA_SWAP4((word)[1]); \
-- CAMELLIA_SWAP4((word)[2]); \
-- CAMELLIA_SWAP4((word)[3]); \
-- }while(0)
+ void fsubr_(void)
+ {
+- /* fsubr st,st(i) */
+- clear_C1();
+- FPU_sub(REV, FPU_rm, control_word);
++ /* fsubr st,st(i) */
++ clear_C1();
++ FPU_sub(REV, FPU_rm, control_word);
+ }
+
-
--#define XOR4WORD(a, b)/* a = a ^ b */ \
-- do { \
-- (a)[0]^=(b)[0]; \
-- (a)[1]^=(b)[1]; \
-- (a)[2]^=(b)[2]; \
-- (a)[3]^=(b)[3]; \
-- }while(0)
+ void fdiv__(void)
+ {
+- /* fdiv st,st(i) */
+- clear_C1();
+- FPU_div(0, FPU_rm, control_word);
++ /* fdiv st,st(i) */
++ clear_C1();
++ FPU_div(0, FPU_rm, control_word);
+ }
+
-
--#define XOR4WORD2(a, b, c)/* a = b ^ c */ \
-- do { \
-- (a)[0]=(b)[0]^(c)[0]; \
-- (a)[1]=(b)[1]^(c)[1]; \
-- (a)[2]=(b)[2]^(c)[2]; \
-- (a)[3]=(b)[3]^(c)[3]; \
-- }while(0)
+ void fdivr_(void)
+ {
+- /* fdivr st,st(i) */
+- clear_C1();
+- FPU_div(REV, FPU_rm, control_word);
++ /* fdivr st,st(i) */
++ clear_C1();
++ FPU_div(REV, FPU_rm, control_word);
+ }
+
-
--#define CAMELLIA_SUBKEY_L(INDEX) (subkey[(INDEX)*2])
--#define CAMELLIA_SUBKEY_R(INDEX) (subkey[(INDEX)*2 + 1])
-
--/* rotation right shift 1byte */
--#define CAMELLIA_RR8(x) (((x) >> 8) + ((x) << 24))
--/* rotation left shift 1bit */
--#define CAMELLIA_RL1(x) (((x) << 1) + ((x) >> 31))
--/* rotation left shift 1byte */
--#define CAMELLIA_RL8(x) (((x) << 8) + ((x) >> 24))
+ void fadd_i(void)
+ {
+- /* fadd st(i),st */
+- int i = FPU_rm;
+- clear_C1();
+- FPU_add(&st(i), FPU_gettagi(i), i, control_word);
++ /* fadd st(i),st */
++ int i = FPU_rm;
++ clear_C1();
++ FPU_add(&st(i), FPU_gettagi(i), i, control_word);
+ }
+
-
--#define CAMELLIA_ROLDQ(ll, lr, rl, rr, w0, w1, bits) \
-- do { \
-- w0 = ll; \
-- ll = (ll << bits) + (lr >> (32 - bits)); \
-- lr = (lr << bits) + (rl >> (32 - bits)); \
-- rl = (rl << bits) + (rr >> (32 - bits)); \
-- rr = (rr << bits) + (w0 >> (32 - bits)); \
-- } while(0)
+ void fmul_i(void)
+ {
+- /* fmul st(i),st */
+- clear_C1();
+- FPU_mul(&st(0), FPU_gettag0(), FPU_rm, control_word);
++ /* fmul st(i),st */
++ clear_C1();
++ FPU_mul(&st(0), FPU_gettag0(), FPU_rm, control_word);
+ }
+
-
--#define CAMELLIA_ROLDQo32(ll, lr, rl, rr, w0, w1, bits) \
-- do { \
-- w0 = ll; \
-- w1 = lr; \
-- ll = (lr << (bits - 32)) + (rl >> (64 - bits)); \
-- lr = (rl << (bits - 32)) + (rr >> (64 - bits)); \
-- rl = (rr << (bits - 32)) + (w0 >> (64 - bits)); \
-- rr = (w0 << (bits - 32)) + (w1 >> (64 - bits)); \
-- } while(0)
+ void fsubri(void)
+ {
+- /* fsubr st(i),st */
+- clear_C1();
+- FPU_sub(DEST_RM, FPU_rm, control_word);
++ /* fsubr st(i),st */
++ clear_C1();
++ FPU_sub(DEST_RM, FPU_rm, control_word);
+ }
+
-
--#define CAMELLIA_SP1110(INDEX) (camellia_sp1110[(INDEX)])
--#define CAMELLIA_SP0222(INDEX) (camellia_sp0222[(INDEX)])
--#define CAMELLIA_SP3033(INDEX) (camellia_sp3033[(INDEX)])
--#define CAMELLIA_SP4404(INDEX) (camellia_sp4404[(INDEX)])
+ void fsub_i(void)
+ {
+- /* fsub st(i),st */
+- clear_C1();
+- FPU_sub(REV|DEST_RM, FPU_rm, control_word);
++ /* fsub st(i),st */
++ clear_C1();
++ FPU_sub(REV | DEST_RM, FPU_rm, control_word);
+ }
+
-
--#define CAMELLIA_F(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
-- do { \
-- il = xl ^ kl; \
-- ir = xr ^ kr; \
-- t0 = il >> 16; \
-- t1 = ir >> 16; \
-- yl = CAMELLIA_SP1110(ir & 0xff) \
-- ^ CAMELLIA_SP0222((t1 >> 8) & 0xff) \
-- ^ CAMELLIA_SP3033(t1 & 0xff) \
-- ^ CAMELLIA_SP4404((ir >> 8) & 0xff); \
-- yr = CAMELLIA_SP1110((t0 >> 8) & 0xff) \
-- ^ CAMELLIA_SP0222(t0 & 0xff) \
-- ^ CAMELLIA_SP3033((il >> 8) & 0xff) \
-- ^ CAMELLIA_SP4404(il & 0xff); \
-- yl ^= yr; \
-- yr = CAMELLIA_RR8(yr); \
-- yr ^= yl; \
-- } while(0)
+ void fdivri(void)
+ {
+- /* fdivr st(i),st */
+- clear_C1();
+- FPU_div(DEST_RM, FPU_rm, control_word);
++ /* fdivr st(i),st */
++ clear_C1();
++ FPU_div(DEST_RM, FPU_rm, control_word);
+ }
+
-
+ void fdiv_i(void)
+ {
+- /* fdiv st(i),st */
+- clear_C1();
+- FPU_div(REV|DEST_RM, FPU_rm, control_word);
++ /* fdiv st(i),st */
++ clear_C1();
++ FPU_div(REV | DEST_RM, FPU_rm, control_word);
+ }
+
-
--/*
-- * for speed up
-- *
-- */
--#define CAMELLIA_FLS(ll, lr, rl, rr, kll, klr, krl, krr, t0, t1, t2, t3) \
-- do { \
-- t0 = kll; \
-- t2 = krr; \
-- t0 &= ll; \
-- t2 |= rr; \
-- rl ^= t2; \
-- lr ^= CAMELLIA_RL1(t0); \
-- t3 = krl; \
-- t1 = klr; \
-- t3 &= rl; \
-- t1 |= lr; \
-- ll ^= t1; \
-- rr ^= CAMELLIA_RL1(t3); \
-- } while(0)
-
--#define CAMELLIA_ROUNDSM(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
-- do { \
-- ir = CAMELLIA_SP1110(xr & 0xff); \
-- il = CAMELLIA_SP1110((xl>>24) & 0xff); \
-- ir ^= CAMELLIA_SP0222((xr>>24) & 0xff); \
-- il ^= CAMELLIA_SP0222((xl>>16) & 0xff); \
-- ir ^= CAMELLIA_SP3033((xr>>16) & 0xff); \
-- il ^= CAMELLIA_SP3033((xl>>8) & 0xff); \
-- ir ^= CAMELLIA_SP4404((xr>>8) & 0xff); \
-- il ^= CAMELLIA_SP4404(xl & 0xff); \
-- il ^= kl; \
-- ir ^= il ^ kr; \
-- yl ^= ir; \
-- yr ^= CAMELLIA_RR8(il) ^ ir; \
-- } while(0)
+ void faddp_(void)
+ {
+- /* faddp st(i),st */
+- int i = FPU_rm;
+- clear_C1();
+- if ( FPU_add(&st(i), FPU_gettagi(i), i, control_word) >= 0 )
+- FPU_pop();
++ /* faddp st(i),st */
++ int i = FPU_rm;
++ clear_C1();
++ if (FPU_add(&st(i), FPU_gettagi(i), i, control_word) >= 0)
++ FPU_pop();
+ }
+
-
--/**
-- * Stuff related to the Camellia key schedule
-- */
--#define SUBL(x) subL[(x)]
--#define SUBR(x) subR[(x)]
+ void fmulp_(void)
+ {
+- /* fmulp st(i),st */
+- clear_C1();
+- if ( FPU_mul(&st(0), FPU_gettag0(), FPU_rm, control_word) >= 0 )
+- FPU_pop();
++ /* fmulp st(i),st */
++ clear_C1();
++ if (FPU_mul(&st(0), FPU_gettag0(), FPU_rm, control_word) >= 0)
++ FPU_pop();
+ }
+
-
-
- static const u32 camellia_sp1110[256] = {
- 0x70707000,0x82828200,0x2c2c2c00,0xececec00,
- 0xb3b3b300,0x27272700,0xc0c0c000,0xe5e5e500,
-@@ -475,67 +305,348 @@ static const u32 camellia_sp4404[256] = {
+ void fsubrp(void)
+ {
+- /* fsubrp st(i),st */
+- clear_C1();
+- if ( FPU_sub(DEST_RM, FPU_rm, control_word) >= 0 )
+- FPU_pop();
++ /* fsubrp st(i),st */
++ clear_C1();
++ if (FPU_sub(DEST_RM, FPU_rm, control_word) >= 0)
++ FPU_pop();
+ }
+
+-
+ void fsubp_(void)
+ {
+- /* fsubp st(i),st */
+- clear_C1();
+- if ( FPU_sub(REV|DEST_RM, FPU_rm, control_word) >= 0 )
+- FPU_pop();
++ /* fsubp st(i),st */
++ clear_C1();
++ if (FPU_sub(REV | DEST_RM, FPU_rm, control_word) >= 0)
++ FPU_pop();
+ }
+
+-
+ void fdivrp(void)
+ {
+- /* fdivrp st(i),st */
+- clear_C1();
+- if ( FPU_div(DEST_RM, FPU_rm, control_word) >= 0 )
+- FPU_pop();
++ /* fdivrp st(i),st */
++ clear_C1();
++ if (FPU_div(DEST_RM, FPU_rm, control_word) >= 0)
++ FPU_pop();
+ }
+
+-
+ void fdivp_(void)
+ {
+- /* fdivp st(i),st */
+- clear_C1();
+- if ( FPU_div(REV|DEST_RM, FPU_rm, control_word) >= 0 )
+- FPU_pop();
++ /* fdivp st(i),st */
++ clear_C1();
++ if (FPU_div(REV | DEST_RM, FPU_rm, control_word) >= 0)
++ FPU_pop();
+ }
+diff --git a/arch/x86/math-emu/fpu_asm.h b/arch/x86/math-emu/fpu_asm.h
+index 9ba1241..955b932 100644
+--- a/arch/x86/math-emu/fpu_asm.h
++++ b/arch/x86/math-emu/fpu_asm.h
+@@ -14,7 +14,6 @@
+
+ #define EXCEPTION FPU_exception
+
+-
+ #define PARAM1 8(%ebp)
+ #define PARAM2 12(%ebp)
+ #define PARAM3 16(%ebp)
+diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c
+index 20886cf..491e737 100644
+--- a/arch/x86/math-emu/fpu_aux.c
++++ b/arch/x86/math-emu/fpu_aux.c
+@@ -16,34 +16,34 @@
+ #include "status_w.h"
+ #include "control_w.h"
+
+-
+ static void fnop(void)
+ {
+ }
+
+ static void fclex(void)
+ {
+- partial_status &= ~(SW_Backward|SW_Summary|SW_Stack_Fault|SW_Precision|
+- SW_Underflow|SW_Overflow|SW_Zero_Div|SW_Denorm_Op|
+- SW_Invalid);
+- no_ip_update = 1;
++ partial_status &=
++ ~(SW_Backward | SW_Summary | SW_Stack_Fault | SW_Precision |
++ SW_Underflow | SW_Overflow | SW_Zero_Div | SW_Denorm_Op |
++ SW_Invalid);
++ no_ip_update = 1;
+ }
+
+ /* Needs to be externally visible */
+ void finit(void)
+ {
+- control_word = 0x037f;
+- partial_status = 0;
+- top = 0; /* We don't keep top in the status word internally. */
+- fpu_tag_word = 0xffff;
+- /* The behaviour is different from that detailed in
+- Section 15.1.6 of the Intel manual */
+- operand_address.offset = 0;
+- operand_address.selector = 0;
+- instruction_address.offset = 0;
+- instruction_address.selector = 0;
+- instruction_address.opcode = 0;
+- no_ip_update = 1;
++ control_word = 0x037f;
++ partial_status = 0;
++ top = 0; /* We don't keep top in the status word internally. */
++ fpu_tag_word = 0xffff;
++ /* The behaviour is different from that detailed in
++ Section 15.1.6 of the Intel manual */
++ operand_address.offset = 0;
++ operand_address.selector = 0;
++ instruction_address.offset = 0;
++ instruction_address.selector = 0;
++ instruction_address.opcode = 0;
++ no_ip_update = 1;
+ }
+
+ /*
+@@ -54,151 +54,134 @@ void finit(void)
+ #define fsetpm fnop
+
+ static FUNC const finit_table[] = {
+- feni, fdisi, fclex, finit,
+- fsetpm, FPU_illegal, FPU_illegal, FPU_illegal
++ feni, fdisi, fclex, finit,
++ fsetpm, FPU_illegal, FPU_illegal, FPU_illegal
};
+ void finit_(void)
+ {
+- (finit_table[FPU_rm])();
++ (finit_table[FPU_rm]) ();
+ }
-+#define CAMELLIA_MIN_KEY_SIZE 16
-+#define CAMELLIA_MAX_KEY_SIZE 32
-+#define CAMELLIA_BLOCK_SIZE 16
-+#define CAMELLIA_TABLE_BYTE_LEN 272
+-
+ static void fstsw_ax(void)
+ {
+- *(short *) &FPU_EAX = status_word();
+- no_ip_update = 1;
++ *(short *)&FPU_EAX = status_word();
++ no_ip_update = 1;
+ }
+
+ static FUNC const fstsw_table[] = {
+- fstsw_ax, FPU_illegal, FPU_illegal, FPU_illegal,
+- FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
++ fstsw_ax, FPU_illegal, FPU_illegal, FPU_illegal,
++ FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
+ };
+
+ void fstsw_(void)
+ {
+- (fstsw_table[FPU_rm])();
++ (fstsw_table[FPU_rm]) ();
+ }
+
+-
+ static FUNC const fp_nop_table[] = {
+- fnop, FPU_illegal, FPU_illegal, FPU_illegal,
+- FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
++ fnop, FPU_illegal, FPU_illegal, FPU_illegal,
++ FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
+ };
+
+ void fp_nop(void)
+ {
+- (fp_nop_table[FPU_rm])();
++ (fp_nop_table[FPU_rm]) ();
+ }
+
+-
+ void fld_i_(void)
+ {
+- FPU_REG *st_new_ptr;
+- int i;
+- u_char tag;
+-
+- if ( STACK_OVERFLOW )
+- { FPU_stack_overflow(); return; }
+-
+- /* fld st(i) */
+- i = FPU_rm;
+- if ( NOT_EMPTY(i) )
+- {
+- reg_copy(&st(i), st_new_ptr);
+- tag = FPU_gettagi(i);
+- push();
+- FPU_settag0(tag);
+- }
+- else
+- {
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_stack_underflow();
++ FPU_REG *st_new_ptr;
++ int i;
++ u_char tag;
+
-+/*
-+ * NB: L and R below stand for 'left' and 'right' as in written numbers.
-+ * That is, in (xxxL,xxxR) pair xxxL holds most significant digits,
-+ * _not_ least significant ones!
-+ */
++ if (STACK_OVERFLOW) {
++ FPU_stack_overflow();
++ return;
+ }
+- else
+- EXCEPTION(EX_StackUnder);
+- }
+
+-}
++ /* fld st(i) */
++ i = FPU_rm;
++ if (NOT_EMPTY(i)) {
++ reg_copy(&st(i), st_new_ptr);
++ tag = FPU_gettagi(i);
++ push();
++ FPU_settag0(tag);
++ } else {
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_stack_underflow();
++ } else
++ EXCEPTION(EX_StackUnder);
++ }
+
++}
+
+ void fxch_i(void)
+ {
+- /* fxch st(i) */
+- FPU_REG t;
+- int i = FPU_rm;
+- FPU_REG *st0_ptr = &st(0), *sti_ptr = &st(i);
+- long tag_word = fpu_tag_word;
+- int regnr = top & 7, regnri = ((regnr + i) & 7);
+- u_char st0_tag = (tag_word >> (regnr*2)) & 3;
+- u_char sti_tag = (tag_word >> (regnri*2)) & 3;
+-
+- if ( st0_tag == TAG_Empty )
+- {
+- if ( sti_tag == TAG_Empty )
+- {
+- FPU_stack_underflow();
+- FPU_stack_underflow_i(i);
+- return;
++ /* fxch st(i) */
++ FPU_REG t;
++ int i = FPU_rm;
++ FPU_REG *st0_ptr = &st(0), *sti_ptr = &st(i);
++ long tag_word = fpu_tag_word;
++ int regnr = top & 7, regnri = ((regnr + i) & 7);
++ u_char st0_tag = (tag_word >> (regnr * 2)) & 3;
++ u_char sti_tag = (tag_word >> (regnri * 2)) & 3;
++
++ if (st0_tag == TAG_Empty) {
++ if (sti_tag == TAG_Empty) {
++ FPU_stack_underflow();
++ FPU_stack_underflow_i(i);
++ return;
++ }
++ if (control_word & CW_Invalid) {
++ /* Masked response */
++ FPU_copy_to_reg0(sti_ptr, sti_tag);
++ }
++ FPU_stack_underflow_i(i);
++ return;
+ }
+- if ( control_word & CW_Invalid )
+- {
+- /* Masked response */
+- FPU_copy_to_reg0(sti_ptr, sti_tag);
++ if (sti_tag == TAG_Empty) {
++ if (control_word & CW_Invalid) {
++ /* Masked response */
++ FPU_copy_to_regi(st0_ptr, st0_tag, i);
++ }
++ FPU_stack_underflow();
++ return;
+ }
+- FPU_stack_underflow_i(i);
+- return;
+- }
+- if ( sti_tag == TAG_Empty )
+- {
+- if ( control_word & CW_Invalid )
+- {
+- /* Masked response */
+- FPU_copy_to_regi(st0_ptr, st0_tag, i);
+- }
+- FPU_stack_underflow();
+- return;
+- }
+- clear_C1();
+-
+- reg_copy(st0_ptr, &t);
+- reg_copy(sti_ptr, st0_ptr);
+- reg_copy(&t, sti_ptr);
+-
+- tag_word &= ~(3 << (regnr*2)) & ~(3 << (regnri*2));
+- tag_word |= (sti_tag << (regnr*2)) | (st0_tag << (regnri*2));
+- fpu_tag_word = tag_word;
+-}
++ clear_C1();
+
++ reg_copy(st0_ptr, &t);
++ reg_copy(sti_ptr, st0_ptr);
++ reg_copy(&t, sti_ptr);
+
++ tag_word &= ~(3 << (regnr * 2)) & ~(3 << (regnri * 2));
++ tag_word |= (sti_tag << (regnr * 2)) | (st0_tag << (regnri * 2));
++ fpu_tag_word = tag_word;
++}
+
+ void ffree_(void)
+ {
+- /* ffree st(i) */
+- FPU_settagi(FPU_rm, TAG_Empty);
++ /* ffree st(i) */
++ FPU_settagi(FPU_rm, TAG_Empty);
+ }
+
+-
+ void ffreep(void)
+ {
+- /* ffree st(i) + pop - unofficial code */
+- FPU_settagi(FPU_rm, TAG_Empty);
+- FPU_pop();
++ /* ffree st(i) + pop - unofficial code */
++ FPU_settagi(FPU_rm, TAG_Empty);
++ FPU_pop();
+ }
+
+-
+ void fst_i_(void)
+ {
+- /* fst st(i) */
+- FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
++ /* fst st(i) */
++ FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
+ }
+
+-
+ void fstp_i(void)
+ {
+- /* fstp st(i) */
+- FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
+- FPU_pop();
++ /* fstp st(i) */
++ FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
++ FPU_pop();
+ }
+-
+diff --git a/arch/x86/math-emu/fpu_emu.h b/arch/x86/math-emu/fpu_emu.h
+index 65120f5..4dae511 100644
+--- a/arch/x86/math-emu/fpu_emu.h
++++ b/arch/x86/math-emu/fpu_emu.h
+@@ -7,7 +7,6 @@
+ | |
+ +---------------------------------------------------------------------------*/
+
+-
+ #ifndef _FPU_EMU_H_
+ #define _FPU_EMU_H_
+
+@@ -28,15 +27,15 @@
+ #endif
+
+ #define EXP_BIAS Const(0)
+-#define EXP_OVER Const(0x4000) /* smallest invalid large exponent */
+-#define EXP_UNDER Const(-0x3fff) /* largest invalid small exponent */
+-#define EXP_WAY_UNDER Const(-0x6000) /* Below the smallest denormal, but
+- still a 16 bit nr. */
++#define EXP_OVER Const(0x4000) /* smallest invalid large exponent */
++#define EXP_UNDER Const(-0x3fff) /* largest invalid small exponent */
++#define EXP_WAY_UNDER Const(-0x6000) /* Below the smallest denormal, but
++ still a 16 bit nr. */
+ #define EXP_Infinity EXP_OVER
+ #define EXP_NaN EXP_OVER
+
+ #define EXTENDED_Ebias Const(0x3fff)
+-#define EXTENDED_Emin (-0x3ffe) /* smallest valid exponent */
++#define EXTENDED_Emin (-0x3ffe) /* smallest valid exponent */
+
+ #define SIGN_POS Const(0)
+ #define SIGN_NEG Const(0x80)
+@@ -44,10 +43,9 @@
+ #define SIGN_Positive Const(0)
+ #define SIGN_Negative Const(0x8000)
+
+-
+ /* Keep the order TAG_Valid, TAG_Zero, TW_Denormal */
+ /* The following fold to 2 (Special) in the Tag Word */
+-#define TW_Denormal Const(4) /* De-normal */
++#define TW_Denormal Const(4) /* De-normal */
+ #define TW_Infinity Const(5) /* + or - infinity */
+ #define TW_NaN Const(6) /* Not a Number */
+ #define TW_Unsupported Const(7) /* Not supported by an 80486 */
+@@ -67,14 +65,13 @@
+ #define DEST_RM 0x20
+ #define LOADED 0x40
+
+-#define FPU_Exception Const(0x80000000) /* Added to tag returns. */
+-
++#define FPU_Exception Const(0x80000000) /* Added to tag returns. */
+
+ #ifndef __ASSEMBLY__
+
+ #include "fpu_system.h"
+
+-#include <asm/sigcontext.h> /* for struct _fpstate */
++#include <asm/sigcontext.h> /* for struct _fpstate */
+ #include <asm/math_emu.h>
+ #include <linux/linkage.h>
+
+@@ -112,30 +109,33 @@ extern u_char emulating;
+ #define PREFIX_DEFAULT 7
+
+ struct address {
+- unsigned int offset;
+- unsigned int selector:16;
+- unsigned int opcode:11;
+- unsigned int empty:5;
++ unsigned int offset;
++ unsigned int selector:16;
++ unsigned int opcode:11;
++ unsigned int empty:5;
+ };
+ struct fpu__reg {
+- unsigned sigl;
+- unsigned sigh;
+- short exp;
++ unsigned sigl;
++ unsigned sigh;
++ short exp;
+ };
+
+-typedef void (*FUNC)(void);
++typedef void (*FUNC) (void);
+ typedef struct fpu__reg FPU_REG;
+-typedef void (*FUNC_ST0)(FPU_REG *st0_ptr, u_char st0_tag);
+-typedef struct { u_char address_size, operand_size, segment; }
+- overrides;
++typedef void (*FUNC_ST0) (FPU_REG *st0_ptr, u_char st0_tag);
++typedef struct {
++ u_char address_size, operand_size, segment;
++} overrides;
+ /* This structure is 32 bits: */
+-typedef struct { overrides override;
+- u_char default_mode; } fpu_addr_modes;
++typedef struct {
++ overrides override;
++ u_char default_mode;
++} fpu_addr_modes;
+ /* PROTECTED has a restricted meaning in the emulator; it is used
+ to signal that the emulator needs to do special things to ensure
+ that protection is respected in a segmented model. */
+ #define PROTECTED 4
+-#define SIXTEEN 1 /* We rely upon this being 1 (true) */
++#define SIXTEEN 1 /* We rely upon this being 1 (true) */
+ #define VM86 SIXTEEN
+ #define PM16 (SIXTEEN | PROTECTED)
+ #define SEG32 PROTECTED
+@@ -168,8 +168,8 @@ extern u_char const data_sizes_16[32];
+
+ static inline void reg_copy(FPU_REG const *x, FPU_REG *y)
+ {
+- *(short *)&(y->exp) = *(const short *)&(x->exp);
+- *(long long *)&(y->sigl) = *(const long long *)&(x->sigl);
++ *(short *)&(y->exp) = *(const short *)&(x->exp);
++ *(long long *)&(y->sigl) = *(const long long *)&(x->sigl);
+ }
+
+ #define exponent(x) (((*(short *)&((x)->exp)) & 0x7fff) - EXTENDED_Ebias)
+@@ -184,27 +184,26 @@ static inline void reg_copy(FPU_REG const *x, FPU_REG *y)
+
+ #define significand(x) ( ((unsigned long long *)&((x)->sigl))[0] )
+
+-
+ /*----- Prototypes for functions written in assembler -----*/
+ /* extern void reg_move(FPU_REG *a, FPU_REG *b); */
+
+ asmlinkage int FPU_normalize(FPU_REG *x);
+ asmlinkage int FPU_normalize_nuo(FPU_REG *x);
+ asmlinkage int FPU_u_sub(FPU_REG const *arg1, FPU_REG const *arg2,
+- FPU_REG *answ, unsigned int control_w, u_char sign,
++ FPU_REG * answ, unsigned int control_w, u_char sign,
+ int expa, int expb);
+ asmlinkage int FPU_u_mul(FPU_REG const *arg1, FPU_REG const *arg2,
+- FPU_REG *answ, unsigned int control_w, u_char sign,
++ FPU_REG * answ, unsigned int control_w, u_char sign,
+ int expon);
+ asmlinkage int FPU_u_div(FPU_REG const *arg1, FPU_REG const *arg2,
+- FPU_REG *answ, unsigned int control_w, u_char sign);
++ FPU_REG * answ, unsigned int control_w, u_char sign);
+ asmlinkage int FPU_u_add(FPU_REG const *arg1, FPU_REG const *arg2,
+- FPU_REG *answ, unsigned int control_w, u_char sign,
++ FPU_REG * answ, unsigned int control_w, u_char sign,
+ int expa, int expb);
+ asmlinkage int wm_sqrt(FPU_REG *n, int dummy1, int dummy2,
+ unsigned int control_w, u_char sign);
+-asmlinkage unsigned FPU_shrx(void *l, unsigned x);
+-asmlinkage unsigned FPU_shrxs(void *v, unsigned x);
++asmlinkage unsigned FPU_shrx(void *l, unsigned x);
++asmlinkage unsigned FPU_shrxs(void *v, unsigned x);
+ asmlinkage unsigned long FPU_div_small(unsigned long long *x, unsigned long y);
+ asmlinkage int FPU_round(FPU_REG *arg, unsigned int extent, int dummy,
+ unsigned int control_w, u_char sign);
+diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c
+index 1853524..760baee 100644
+--- a/arch/x86/math-emu/fpu_entry.c
++++ b/arch/x86/math-emu/fpu_entry.c
+@@ -25,10 +25,11 @@
+ +---------------------------------------------------------------------------*/
+
+ #include <linux/signal.h>
+-#include <linux/ptrace.h>
++#include <linux/regset.h>
+
+ #include <asm/uaccess.h>
+ #include <asm/desc.h>
++#include <asm/user.h>
+
+ #include "fpu_system.h"
+ #include "fpu_emu.h"
+@@ -36,726 +37,727 @@
+ #include "control_w.h"
+ #include "status_w.h"
+
+-#define __BAD__ FPU_illegal /* Illegal on an 80486, causes SIGILL */
++#define __BAD__ FPU_illegal /* Illegal on an 80486, causes SIGILL */
+
+-#ifndef NO_UNDOC_CODE /* Un-documented FPU op-codes supported by default. */
++#ifndef NO_UNDOC_CODE /* Un-documented FPU op-codes supported by default. */
+
+ /* WARNING: These codes are not documented by Intel in their 80486 manual
+ and may not work on FPU clones or later Intel FPUs. */
+
+ /* Changes to support the un-doc codes provided by Linus Torvalds. */
+
+-#define _d9_d8_ fstp_i /* unofficial code (19) */
+-#define _dc_d0_ fcom_st /* unofficial code (14) */
+-#define _dc_d8_ fcompst /* unofficial code (1c) */
+-#define _dd_c8_ fxch_i /* unofficial code (0d) */
+-#define _de_d0_ fcompst /* unofficial code (16) */
+-#define _df_c0_ ffreep /* unofficial code (07) ffree + pop */
+-#define _df_c8_ fxch_i /* unofficial code (0f) */
+-#define _df_d0_ fstp_i /* unofficial code (17) */
+-#define _df_d8_ fstp_i /* unofficial code (1f) */
++#define _d9_d8_ fstp_i /* unofficial code (19) */
++#define _dc_d0_ fcom_st /* unofficial code (14) */
++#define _dc_d8_ fcompst /* unofficial code (1c) */
++#define _dd_c8_ fxch_i /* unofficial code (0d) */
++#define _de_d0_ fcompst /* unofficial code (16) */
++#define _df_c0_ ffreep /* unofficial code (07) ffree + pop */
++#define _df_c8_ fxch_i /* unofficial code (0f) */
++#define _df_d0_ fstp_i /* unofficial code (17) */
++#define _df_d8_ fstp_i /* unofficial code (1f) */
+
+ static FUNC const st_instr_table[64] = {
+- fadd__, fld_i_, __BAD__, __BAD__, fadd_i, ffree_, faddp_, _df_c0_,
+- fmul__, fxch_i, __BAD__, __BAD__, fmul_i, _dd_c8_, fmulp_, _df_c8_,
+- fcom_st, fp_nop, __BAD__, __BAD__, _dc_d0_, fst_i_, _de_d0_, _df_d0_,
+- fcompst, _d9_d8_, __BAD__, __BAD__, _dc_d8_, fstp_i, fcompp, _df_d8_,
+- fsub__, FPU_etc, __BAD__, finit_, fsubri, fucom_, fsubrp, fstsw_,
+- fsubr_, fconst, fucompp, __BAD__, fsub_i, fucomp, fsubp_, __BAD__,
+- fdiv__, FPU_triga, __BAD__, __BAD__, fdivri, __BAD__, fdivrp, __BAD__,
+- fdivr_, FPU_trigb, __BAD__, __BAD__, fdiv_i, __BAD__, fdivp_, __BAD__,
++ fadd__, fld_i_, __BAD__, __BAD__, fadd_i, ffree_, faddp_, _df_c0_,
++ fmul__, fxch_i, __BAD__, __BAD__, fmul_i, _dd_c8_, fmulp_, _df_c8_,
++ fcom_st, fp_nop, __BAD__, __BAD__, _dc_d0_, fst_i_, _de_d0_, _df_d0_,
++ fcompst, _d9_d8_, __BAD__, __BAD__, _dc_d8_, fstp_i, fcompp, _df_d8_,
++ fsub__, FPU_etc, __BAD__, finit_, fsubri, fucom_, fsubrp, fstsw_,
++ fsubr_, fconst, fucompp, __BAD__, fsub_i, fucomp, fsubp_, __BAD__,
++ fdiv__, FPU_triga, __BAD__, __BAD__, fdivri, __BAD__, fdivrp, __BAD__,
++ fdivr_, FPU_trigb, __BAD__, __BAD__, fdiv_i, __BAD__, fdivp_, __BAD__,
+ };
+
+-#else /* Support only documented FPU op-codes */
++#else /* Support only documented FPU op-codes */
+
+ static FUNC const st_instr_table[64] = {
+- fadd__, fld_i_, __BAD__, __BAD__, fadd_i, ffree_, faddp_, __BAD__,
+- fmul__, fxch_i, __BAD__, __BAD__, fmul_i, __BAD__, fmulp_, __BAD__,
+- fcom_st, fp_nop, __BAD__, __BAD__, __BAD__, fst_i_, __BAD__, __BAD__,
+- fcompst, __BAD__, __BAD__, __BAD__, __BAD__, fstp_i, fcompp, __BAD__,
+- fsub__, FPU_etc, __BAD__, finit_, fsubri, fucom_, fsubrp, fstsw_,
+- fsubr_, fconst, fucompp, __BAD__, fsub_i, fucomp, fsubp_, __BAD__,
+- fdiv__, FPU_triga, __BAD__, __BAD__, fdivri, __BAD__, fdivrp, __BAD__,
+- fdivr_, FPU_trigb, __BAD__, __BAD__, fdiv_i, __BAD__, fdivp_, __BAD__,
++ fadd__, fld_i_, __BAD__, __BAD__, fadd_i, ffree_, faddp_, __BAD__,
++ fmul__, fxch_i, __BAD__, __BAD__, fmul_i, __BAD__, fmulp_, __BAD__,
++ fcom_st, fp_nop, __BAD__, __BAD__, __BAD__, fst_i_, __BAD__, __BAD__,
++ fcompst, __BAD__, __BAD__, __BAD__, __BAD__, fstp_i, fcompp, __BAD__,
++ fsub__, FPU_etc, __BAD__, finit_, fsubri, fucom_, fsubrp, fstsw_,
++ fsubr_, fconst, fucompp, __BAD__, fsub_i, fucomp, fsubp_, __BAD__,
++ fdiv__, FPU_triga, __BAD__, __BAD__, fdivri, __BAD__, fdivrp, __BAD__,
++ fdivr_, FPU_trigb, __BAD__, __BAD__, fdiv_i, __BAD__, fdivp_, __BAD__,
+ };
+
+ #endif /* NO_UNDOC_CODE */
+
+-
+-#define _NONE_ 0 /* Take no special action */
+-#define _REG0_ 1 /* Need to check for not empty st(0) */
+-#define _REGI_ 2 /* Need to check for not empty st(0) and st(rm) */
+-#define _REGi_ 0 /* Uses st(rm) */
+-#define _PUSH_ 3 /* Need to check for space to push onto stack */
+-#define _null_ 4 /* Function illegal or not implemented */
+-#define _REGIi 5 /* Uses st(0) and st(rm), result to st(rm) */
+-#define _REGIp 6 /* Uses st(0) and st(rm), result to st(rm) then pop */
+-#define _REGIc 0 /* Compare st(0) and st(rm) */
+-#define _REGIn 0 /* Uses st(0) and st(rm), but handle checks later */
++#define _NONE_ 0 /* Take no special action */
++#define _REG0_ 1 /* Need to check for not empty st(0) */
++#define _REGI_ 2 /* Need to check for not empty st(0) and st(rm) */
++#define _REGi_ 0 /* Uses st(rm) */
++#define _PUSH_ 3 /* Need to check for space to push onto stack */
++#define _null_ 4 /* Function illegal or not implemented */
++#define _REGIi 5 /* Uses st(0) and st(rm), result to st(rm) */
++#define _REGIp 6 /* Uses st(0) and st(rm), result to st(rm) then pop */
++#define _REGIc 0 /* Compare st(0) and st(rm) */
++#define _REGIn 0 /* Uses st(0) and st(rm), but handle checks later */
+
+ #ifndef NO_UNDOC_CODE
+
+ /* Un-documented FPU op-codes supported by default. (see above) */
+
+ static u_char const type_table[64] = {
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _REGi_, _REGIp, _REGi_,
+- _REGI_, _REGIn, _null_, _null_, _REGIi, _REGI_, _REGIp, _REGI_,
+- _REGIc, _NONE_, _null_, _null_, _REGIc, _REG0_, _REGIc, _REG0_,
+- _REGIc, _REG0_, _null_, _null_, _REGIc, _REG0_, _REGIc, _REG0_,
+- _REGI_, _NONE_, _null_, _NONE_, _REGIi, _REGIc, _REGIp, _NONE_,
+- _REGI_, _NONE_, _REGIc, _null_, _REGIi, _REGIc, _REGIp, _null_,
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _REGi_, _REGIp, _REGi_,
++ _REGI_, _REGIn, _null_, _null_, _REGIi, _REGI_, _REGIp, _REGI_,
++ _REGIc, _NONE_, _null_, _null_, _REGIc, _REG0_, _REGIc, _REG0_,
++ _REGIc, _REG0_, _null_, _null_, _REGIc, _REG0_, _REGIc, _REG0_,
++ _REGI_, _NONE_, _null_, _NONE_, _REGIi, _REGIc, _REGIp, _NONE_,
++ _REGI_, _NONE_, _REGIc, _null_, _REGIi, _REGIc, _REGIp, _null_,
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_
+ };
+
+-#else /* Support only documented FPU op-codes */
++#else /* Support only documented FPU op-codes */
+
+ static u_char const type_table[64] = {
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _REGi_, _REGIp, _null_,
+- _REGI_, _REGIn, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
+- _REGIc, _NONE_, _null_, _null_, _null_, _REG0_, _null_, _null_,
+- _REGIc, _null_, _null_, _null_, _null_, _REG0_, _REGIc, _null_,
+- _REGI_, _NONE_, _null_, _NONE_, _REGIi, _REGIc, _REGIp, _NONE_,
+- _REGI_, _NONE_, _REGIc, _null_, _REGIi, _REGIc, _REGIp, _null_,
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
+- _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _REGi_, _REGIp, _null_,
++ _REGI_, _REGIn, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
++ _REGIc, _NONE_, _null_, _null_, _null_, _REG0_, _null_, _null_,
++ _REGIc, _null_, _null_, _null_, _null_, _REG0_, _REGIc, _null_,
++ _REGI_, _NONE_, _null_, _NONE_, _REGIi, _REGIc, _REGIp, _NONE_,
++ _REGI_, _NONE_, _REGIc, _null_, _REGIi, _REGIc, _REGIp, _null_,
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_,
++ _REGI_, _NONE_, _null_, _null_, _REGIi, _null_, _REGIp, _null_
+ };
+
+ #endif /* NO_UNDOC_CODE */
+
+-
+ #ifdef RE_ENTRANT_CHECKING
+-u_char emulating=0;
++u_char emulating = 0;
+ #endif /* RE_ENTRANT_CHECKING */
+
+-static int valid_prefix(u_char *Byte, u_char __user **fpu_eip,
+- overrides *override);
++static int valid_prefix(u_char *Byte, u_char __user ** fpu_eip,
++ overrides * override);
+
+ asmlinkage void math_emulate(long arg)
+ {
+- u_char FPU_modrm, byte1;
+- unsigned short code;
+- fpu_addr_modes addr_modes;
+- int unmasked;
+- FPU_REG loaded_data;
+- FPU_REG *st0_ptr;
+- u_char loaded_tag, st0_tag;
+- void __user *data_address;
+- struct address data_sel_off;
+- struct address entry_sel_off;
+- unsigned long code_base = 0;
+- unsigned long code_limit = 0; /* Initialized to stop compiler warnings */
+- struct desc_struct code_descriptor;
++ u_char FPU_modrm, byte1;
++ unsigned short code;
++ fpu_addr_modes addr_modes;
++ int unmasked;
++ FPU_REG loaded_data;
++ FPU_REG *st0_ptr;
++ u_char loaded_tag, st0_tag;
++ void __user *data_address;
++ struct address data_sel_off;
++ struct address entry_sel_off;
++ unsigned long code_base = 0;
++ unsigned long code_limit = 0; /* Initialized to stop compiler warnings */
++ struct desc_struct code_descriptor;
+
+ #ifdef RE_ENTRANT_CHECKING
+- if ( emulating )
+- {
+- printk("ERROR: wm-FPU-emu is not RE-ENTRANT!\n");
+- }
+- RE_ENTRANT_CHECK_ON;
++ if (emulating) {
++ printk("ERROR: wm-FPU-emu is not RE-ENTRANT!\n");
++ }
++ RE_ENTRANT_CHECK_ON;
+ #endif /* RE_ENTRANT_CHECKING */
+
+- if (!used_math())
+- {
+- finit();
+- set_used_math();
+- }
+-
+- SETUP_DATA_AREA(arg);
+-
+- FPU_ORIG_EIP = FPU_EIP;
+-
+- if ( (FPU_EFLAGS & 0x00020000) != 0 )
+- {
+- /* Virtual 8086 mode */
+- addr_modes.default_mode = VM86;
+- FPU_EIP += code_base = FPU_CS << 4;
+- code_limit = code_base + 0xffff; /* Assumes code_base <= 0xffff0000 */
+- }
+- else if ( FPU_CS == __USER_CS && FPU_DS == __USER_DS )
+- {
+- addr_modes.default_mode = 0;
+- }
+- else if ( FPU_CS == __KERNEL_CS )
+- {
+- printk("math_emulate: %04x:%08lx\n",FPU_CS,FPU_EIP);
+- panic("Math emulation needed in kernel");
+- }
+- else
+- {
+-
+- if ( (FPU_CS & 4) != 4 ) /* Must be in the LDT */
+- {
+- /* Can only handle segmented addressing via the LDT
+- for now, and it must be 16 bit */
+- printk("FPU emulator: Unsupported addressing mode\n");
+- math_abort(FPU_info, SIGILL);
++ if (!used_math()) {
++ finit();
++ set_used_math();
+ }
+
+- code_descriptor = LDT_DESCRIPTOR(FPU_CS);
+- if ( SEG_D_SIZE(code_descriptor) )
+- {
+- /* The above test may be wrong, the book is not clear */
+- /* Segmented 32 bit protected mode */
+- addr_modes.default_mode = SEG32;
++ SETUP_DATA_AREA(arg);
++
++ FPU_ORIG_EIP = FPU_EIP;
++
++ if ((FPU_EFLAGS & 0x00020000) != 0) {
++ /* Virtual 8086 mode */
++ addr_modes.default_mode = VM86;
++ FPU_EIP += code_base = FPU_CS << 4;
++ code_limit = code_base + 0xffff; /* Assumes code_base <= 0xffff0000 */
++ } else if (FPU_CS == __USER_CS && FPU_DS == __USER_DS) {
++ addr_modes.default_mode = 0;
++ } else if (FPU_CS == __KERNEL_CS) {
++ printk("math_emulate: %04x:%08lx\n", FPU_CS, FPU_EIP);
++ panic("Math emulation needed in kernel");
++ } else {
++
++ if ((FPU_CS & 4) != 4) { /* Must be in the LDT */
++ /* Can only handle segmented addressing via the LDT
++ for now, and it must be 16 bit */
++ printk("FPU emulator: Unsupported addressing mode\n");
++ math_abort(FPU_info, SIGILL);
++ }
++
++ code_descriptor = LDT_DESCRIPTOR(FPU_CS);
++ if (SEG_D_SIZE(code_descriptor)) {
++ /* The above test may be wrong, the book is not clear */
++ /* Segmented 32 bit protected mode */
++ addr_modes.default_mode = SEG32;
++ } else {
++ /* 16 bit protected mode */
++ addr_modes.default_mode = PM16;
++ }
++ FPU_EIP += code_base = SEG_BASE_ADDR(code_descriptor);
++ code_limit = code_base
++ + (SEG_LIMIT(code_descriptor) +
++ 1) * SEG_GRANULARITY(code_descriptor)
++ - 1;
++ if (code_limit < code_base)
++ code_limit = 0xffffffff;
+ }
+- else
+- {
+- /* 16 bit protected mode */
+- addr_modes.default_mode = PM16;
+
-+/* key constants */
++ FPU_lookahead = !(FPU_EFLAGS & X86_EFLAGS_TF);
+
-+#define CAMELLIA_SIGMA1L (0xA09E667FL)
-+#define CAMELLIA_SIGMA1R (0x3BCC908BL)
-+#define CAMELLIA_SIGMA2L (0xB67AE858L)
-+#define CAMELLIA_SIGMA2R (0x4CAA73B2L)
-+#define CAMELLIA_SIGMA3L (0xC6EF372FL)
-+#define CAMELLIA_SIGMA3R (0xE94F82BEL)
-+#define CAMELLIA_SIGMA4L (0x54FF53A5L)
-+#define CAMELLIA_SIGMA4R (0xF1D36F1CL)
-+#define CAMELLIA_SIGMA5L (0x10E527FAL)
-+#define CAMELLIA_SIGMA5R (0xDE682D1DL)
-+#define CAMELLIA_SIGMA6L (0xB05688C2L)
-+#define CAMELLIA_SIGMA6R (0xB3E6C1FDL)
++ if (!valid_prefix(&byte1, (u_char __user **) & FPU_EIP,
++ &addr_modes.override)) {
++ RE_ENTRANT_CHECK_OFF;
++ printk
++ ("FPU emulator: Unknown prefix byte 0x%02x, probably due to\n"
++ "FPU emulator: self-modifying code! (emulation impossible)\n",
++ byte1);
++ RE_ENTRANT_CHECK_ON;
++ EXCEPTION(EX_INTERNAL | 0x126);
++ math_abort(FPU_info, SIGILL);
+ }
+- FPU_EIP += code_base = SEG_BASE_ADDR(code_descriptor);
+- code_limit = code_base
+- + (SEG_LIMIT(code_descriptor)+1) * SEG_GRANULARITY(code_descriptor)
+- - 1;
+- if ( code_limit < code_base ) code_limit = 0xffffffff;
+- }
+-
+- FPU_lookahead = 1;
+- if (current->ptrace & PT_PTRACED)
+- FPU_lookahead = 0;
+-
+- if ( !valid_prefix(&byte1, (u_char __user **)&FPU_EIP,
+- &addr_modes.override) )
+- {
+- RE_ENTRANT_CHECK_OFF;
+- printk("FPU emulator: Unknown prefix byte 0x%02x, probably due to\n"
+- "FPU emulator: self-modifying code! (emulation impossible)\n",
+- byte1);
+- RE_ENTRANT_CHECK_ON;
+- EXCEPTION(EX_INTERNAL|0x126);
+- math_abort(FPU_info,SIGILL);
+- }
+-
+-do_another_FPU_instruction:
+-
+- no_ip_update = 0;
+-
+- FPU_EIP++; /* We have fetched the prefix and first code bytes. */
+-
+- if ( addr_modes.default_mode )
+- {
+- /* This checks for the minimum instruction bytes.
+- We also need to check any extra (address mode) code access. */
+- if ( FPU_EIP > code_limit )
+- math_abort(FPU_info,SIGSEGV);
+- }
+-
+- if ( (byte1 & 0xf8) != 0xd8 )
+- {
+- if ( byte1 == FWAIT_OPCODE )
+- {
+- if (partial_status & SW_Summary)
+- goto do_the_FPU_interrupt;
+- else
+- goto FPU_fwait_done;
+
-+/*
-+ * macros
-+ */
-+#define GETU32(v, pt) \
-+ do { \
-+ /* latest breed of gcc is clever enough to use move */ \
-+ memcpy(&(v), (pt), 4); \
-+ (v) = be32_to_cpu(v); \
-+ } while(0)
++ do_another_FPU_instruction:
+
-+/* rotation right shift 1byte */
-+#define ROR8(x) (((x) >> 8) + ((x) << 24))
-+/* rotation left shift 1bit */
-+#define ROL1(x) (((x) << 1) + ((x) >> 31))
-+/* rotation left shift 1byte */
-+#define ROL8(x) (((x) << 8) + ((x) >> 24))
++ no_ip_update = 0;
+
-+#define ROLDQ(ll, lr, rl, rr, w0, w1, bits) \
-+ do { \
-+ w0 = ll; \
-+ ll = (ll << bits) + (lr >> (32 - bits)); \
-+ lr = (lr << bits) + (rl >> (32 - bits)); \
-+ rl = (rl << bits) + (rr >> (32 - bits)); \
-+ rr = (rr << bits) + (w0 >> (32 - bits)); \
-+ } while(0)
++ FPU_EIP++; /* We have fetched the prefix and first code bytes. */
+
-+#define ROLDQo32(ll, lr, rl, rr, w0, w1, bits) \
-+ do { \
-+ w0 = ll; \
-+ w1 = lr; \
-+ ll = (lr << (bits - 32)) + (rl >> (64 - bits)); \
-+ lr = (rl << (bits - 32)) + (rr >> (64 - bits)); \
-+ rl = (rr << (bits - 32)) + (w0 >> (64 - bits)); \
-+ rr = (w0 << (bits - 32)) + (w1 >> (64 - bits)); \
-+ } while(0)
++ if (addr_modes.default_mode) {
++ /* This checks for the minimum instruction bytes.
++ We also need to check any extra (address mode) code access. */
++ if (FPU_EIP > code_limit)
++ math_abort(FPU_info, SIGSEGV);
+ }
+
-+#define CAMELLIA_F(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
-+ do { \
-+ il = xl ^ kl; \
-+ ir = xr ^ kr; \
-+ t0 = il >> 16; \
-+ t1 = ir >> 16; \
-+ yl = camellia_sp1110[(u8)(ir )] \
-+ ^ camellia_sp0222[ (t1 >> 8)] \
-+ ^ camellia_sp3033[(u8)(t1 )] \
-+ ^ camellia_sp4404[(u8)(ir >> 8)]; \
-+ yr = camellia_sp1110[ (t0 >> 8)] \
-+ ^ camellia_sp0222[(u8)(t0 )] \
-+ ^ camellia_sp3033[(u8)(il >> 8)] \
-+ ^ camellia_sp4404[(u8)(il )]; \
-+ yl ^= yr; \
-+ yr = ROR8(yr); \
-+ yr ^= yl; \
-+ } while(0)
++ if ((byte1 & 0xf8) != 0xd8) {
++ if (byte1 == FWAIT_OPCODE) {
++ if (partial_status & SW_Summary)
++ goto do_the_FPU_interrupt;
++ else
++ goto FPU_fwait_done;
++ }
+ #ifdef PARANOID
+- EXCEPTION(EX_INTERNAL|0x128);
+- math_abort(FPU_info,SIGILL);
++ EXCEPTION(EX_INTERNAL | 0x128);
++ math_abort(FPU_info, SIGILL);
+ #endif /* PARANOID */
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(FPU_modrm, (u_char __user *) FPU_EIP);
+- RE_ENTRANT_CHECK_ON;
+- FPU_EIP++;
+-
+- if (partial_status & SW_Summary)
+- {
+- /* Ignore the error for now if the current instruction is a no-wait
+- control instruction */
+- /* The 80486 manual contradicts itself on this topic,
+- but a real 80486 uses the following instructions:
+- fninit, fnstenv, fnsave, fnstsw, fnstenv, fnclex.
+- */
+- code = (FPU_modrm << 8) | byte1;
+- if ( ! ( (((code & 0xf803) == 0xe003) || /* fnclex, fninit, fnstsw */
+- (((code & 0x3003) == 0x3001) && /* fnsave, fnstcw, fnstenv,
+- fnstsw */
+- ((code & 0xc000) != 0xc000))) ) )
+- {
+- /*
+- * We need to simulate the action of the kernel to FPU
+- * interrupts here.
+- */
+- do_the_FPU_interrupt:
+-
+- FPU_EIP = FPU_ORIG_EIP; /* Point to current FPU instruction. */
+-
+- RE_ENTRANT_CHECK_OFF;
+- current->thread.trap_no = 16;
+- current->thread.error_code = 0;
+- send_sig(SIGFPE, current, 1);
+- return;
+- }
+- }
+-
+- entry_sel_off.offset = FPU_ORIG_EIP;
+- entry_sel_off.selector = FPU_CS;
+- entry_sel_off.opcode = (byte1 << 8) | FPU_modrm;
+-
+- FPU_rm = FPU_modrm & 7;
+-
+- if ( FPU_modrm < 0300 )
+- {
+- /* All of these instructions use the mod/rm byte to get a data address */
+-
+- if ( (addr_modes.default_mode & SIXTEEN)
+- ^ (addr_modes.override.address_size == ADDR_SIZE_PREFIX) )
+- data_address = FPU_get_address_16(FPU_modrm, &FPU_EIP, &data_sel_off,
+- addr_modes);
+- else
+- data_address = FPU_get_address(FPU_modrm, &FPU_EIP, &data_sel_off,
+- addr_modes);
+-
+- if ( addr_modes.default_mode )
+- {
+- if ( FPU_EIP-1 > code_limit )
+- math_abort(FPU_info,SIGSEGV);
+ }
+
+- if ( !(byte1 & 1) )
+- {
+- unsigned short status1 = partial_status;
+-
+- st0_ptr = &st(0);
+- st0_tag = FPU_gettag0();
+-
+- /* Stack underflow has priority */
+- if ( NOT_EMPTY_ST0 )
+- {
+- if ( addr_modes.default_mode & PROTECTED )
+- {
+- /* This table works for 16 and 32 bit protected mode */
+- if ( access_limit < data_sizes_16[(byte1 >> 1) & 3] )
+- math_abort(FPU_info,SIGSEGV);
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(FPU_modrm, (u_char __user *) FPU_EIP);
++ RE_ENTRANT_CHECK_ON;
++ FPU_EIP++;
++
++ if (partial_status & SW_Summary) {
++ /* Ignore the error for now if the current instruction is a no-wait
++ control instruction */
++ /* The 80486 manual contradicts itself on this topic,
++ but a real 80486 uses the following instructions:
++ fninit, fnstenv, fnsave, fnstsw, fnstenv, fnclex.
++ */
++ code = (FPU_modrm << 8) | byte1;
++ if (!((((code & 0xf803) == 0xe003) || /* fnclex, fninit, fnstsw */
++ (((code & 0x3003) == 0x3001) && /* fnsave, fnstcw, fnstenv,
++ fnstsw */
++ ((code & 0xc000) != 0xc000))))) {
++ /*
++ * We need to simulate the action of the kernel to FPU
++ * interrupts here.
++ */
++ do_the_FPU_interrupt:
+
-+#define SUBKEY_L(INDEX) (subkey[(INDEX)*2])
-+#define SUBKEY_R(INDEX) (subkey[(INDEX)*2 + 1])
++ FPU_EIP = FPU_ORIG_EIP; /* Point to current FPU instruction. */
+
-+static void camellia_setup_tail(u32 *subkey, u32 *subL, u32 *subR, int max)
-+{
-+ u32 dw, tl, tr;
-+ u32 kw4l, kw4r;
-+ int i;
++ RE_ENTRANT_CHECK_OFF;
++ current->thread.trap_no = 16;
++ current->thread.error_code = 0;
++ send_sig(SIGFPE, current, 1);
++ return;
+ }
++ }
+
+- unmasked = 0; /* Do this here to stop compiler warnings. */
+- switch ( (byte1 >> 1) & 3 )
+- {
+- case 0:
+- unmasked = FPU_load_single((float __user *)data_address,
+- &loaded_data);
+- loaded_tag = unmasked & 0xff;
+- unmasked &= ~0xff;
+- break;
+- case 1:
+- loaded_tag = FPU_load_int32((long __user *)data_address, &loaded_data);
+- break;
+- case 2:
+- unmasked = FPU_load_double((double __user *)data_address,
+- &loaded_data);
+- loaded_tag = unmasked & 0xff;
+- unmasked &= ~0xff;
+- break;
+- case 3:
+- default: /* Used here to suppress gcc warnings. */
+- loaded_tag = FPU_load_int16((short __user *)data_address, &loaded_data);
+- break;
+- }
++ entry_sel_off.offset = FPU_ORIG_EIP;
++ entry_sel_off.selector = FPU_CS;
++ entry_sel_off.opcode = (byte1 << 8) | FPU_modrm;
+
+- /* No more access to user memory, it is safe
+- to use static data now */
+-
+- /* NaN operands have the next priority. */
+- /* We have to delay looking at st(0) until after
+- loading the data, because that data might contain an SNaN */
+- if ( ((st0_tag == TAG_Special) && isNaN(st0_ptr)) ||
+- ((loaded_tag == TAG_Special) && isNaN(&loaded_data)) )
+- {
+- /* Restore the status word; we might have loaded a
+- denormal. */
+- partial_status = status1;
+- if ( (FPU_modrm & 0x30) == 0x10 )
+- {
+- /* fcom or fcomp */
+- EXCEPTION(EX_Invalid);
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- if ( (FPU_modrm & 0x08) && (control_word & CW_Invalid) )
+- FPU_pop(); /* fcomp, masked, so we pop. */
+- }
+- else
+- {
+- if ( loaded_tag == TAG_Special )
+- loaded_tag = FPU_Special(&loaded_data);
+-#ifdef PECULIAR_486
+- /* This is not really needed, but gives behaviour
+- identical to an 80486 */
+- if ( (FPU_modrm & 0x28) == 0x20 )
+- /* fdiv or fsub */
+- real_2op_NaN(&loaded_data, loaded_tag, 0, &loaded_data);
+- else
+-#endif /* PECULIAR_486 */
+- /* fadd, fdivr, fmul, or fsubr */
+- real_2op_NaN(&loaded_data, loaded_tag, 0, st0_ptr);
+- }
+- goto reg_mem_instr_done;
+- }
++ FPU_rm = FPU_modrm & 7;
+
+- if ( unmasked && !((FPU_modrm & 0x30) == 0x10) )
+- {
+- /* Is not a comparison instruction. */
+- if ( (FPU_modrm & 0x38) == 0x38 )
+- {
+- /* fdivr */
+- if ( (st0_tag == TAG_Zero) &&
+- ((loaded_tag == TAG_Valid)
+- || (loaded_tag == TAG_Special
+- && isdenormal(&loaded_data))) )
+- {
+- if ( FPU_divide_by_zero(0, getsign(&loaded_data))
+- < 0 )
+- {
+- /* We use the fact here that the unmasked
+- exception in the loaded data was for a
+- denormal operand */
+- /* Restore the state of the denormal op bit */
+- partial_status &= ~SW_Denorm_Op;
+- partial_status |= status1 & SW_Denorm_Op;
+- }
+- else
+- setsign(st0_ptr, getsign(&loaded_data));
+- }
+- }
+- goto reg_mem_instr_done;
+- }
++ if (FPU_modrm < 0300) {
++ /* All of these instructions use the mod/rm byte to get a data address */
+
+- switch ( (FPU_modrm >> 3) & 7 )
+- {
+- case 0: /* fadd */
+- clear_C1();
+- FPU_add(&loaded_data, loaded_tag, 0, control_word);
+- break;
+- case 1: /* fmul */
+- clear_C1();
+- FPU_mul(&loaded_data, loaded_tag, 0, control_word);
+- break;
+- case 2: /* fcom */
+- FPU_compare_st_data(&loaded_data, loaded_tag);
+- break;
+- case 3: /* fcomp */
+- if ( !FPU_compare_st_data(&loaded_data, loaded_tag)
+- && !unmasked )
+- FPU_pop();
+- break;
+- case 4: /* fsub */
+- clear_C1();
+- FPU_sub(LOADED|loaded_tag, (int)&loaded_data, control_word);
+- break;
+- case 5: /* fsubr */
+- clear_C1();
+- FPU_sub(REV|LOADED|loaded_tag, (int)&loaded_data, control_word);
+- break;
+- case 6: /* fdiv */
+- clear_C1();
+- FPU_div(LOADED|loaded_tag, (int)&loaded_data, control_word);
+- break;
+- case 7: /* fdivr */
+- clear_C1();
+- if ( st0_tag == TAG_Zero )
+- partial_status = status1; /* Undo any denorm tag,
+- zero-divide has priority. */
+- FPU_div(REV|LOADED|loaded_tag, (int)&loaded_data, control_word);
+- break;
++ if ((addr_modes.default_mode & SIXTEEN)
++ ^ (addr_modes.override.address_size == ADDR_SIZE_PREFIX))
++ data_address =
++ FPU_get_address_16(FPU_modrm, &FPU_EIP,
++ &data_sel_off, addr_modes);
++ else
++ data_address =
++ FPU_get_address(FPU_modrm, &FPU_EIP, &data_sel_off,
++ addr_modes);
++
++ if (addr_modes.default_mode) {
++ if (FPU_EIP - 1 > code_limit)
++ math_abort(FPU_info, SIGSEGV);
+ }
+- }
+- else
+- {
+- if ( (FPU_modrm & 0x30) == 0x10 )
+- {
+- /* The instruction is fcom or fcomp */
+- EXCEPTION(EX_StackUnder);
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- if ( (FPU_modrm & 0x08) && (control_word & CW_Invalid) )
+- FPU_pop(); /* fcomp */
++
++ if (!(byte1 & 1)) {
++ unsigned short status1 = partial_status;
++
++ st0_ptr = &st(0);
++ st0_tag = FPU_gettag0();
++
++ /* Stack underflow has priority */
++ if (NOT_EMPTY_ST0) {
++ if (addr_modes.default_mode & PROTECTED) {
++ /* This table works for 16 and 32 bit protected mode */
++ if (access_limit <
++ data_sizes_16[(byte1 >> 1) & 3])
++ math_abort(FPU_info, SIGSEGV);
++ }
+
-+ /* absorb kw2 to other subkeys */
-+ /* round 2 */
-+ subL[3] ^= subL[1]; subR[3] ^= subR[1];
-+ /* round 4 */
-+ subL[5] ^= subL[1]; subR[5] ^= subR[1];
-+ /* round 6 */
-+ subL[7] ^= subL[1]; subR[7] ^= subR[1];
-+ subL[1] ^= subR[1] & ~subR[9];
-+ dw = subL[1] & subL[9],
-+ subR[1] ^= ROL1(dw); /* modified for FLinv(kl2) */
-+ /* round 8 */
-+ subL[11] ^= subL[1]; subR[11] ^= subR[1];
-+ /* round 10 */
-+ subL[13] ^= subL[1]; subR[13] ^= subR[1];
-+ /* round 12 */
-+ subL[15] ^= subL[1]; subR[15] ^= subR[1];
-+ subL[1] ^= subR[1] & ~subR[17];
-+ dw = subL[1] & subL[17],
-+ subR[1] ^= ROL1(dw); /* modified for FLinv(kl4) */
-+ /* round 14 */
-+ subL[19] ^= subL[1]; subR[19] ^= subR[1];
-+ /* round 16 */
-+ subL[21] ^= subL[1]; subR[21] ^= subR[1];
-+ /* round 18 */
-+ subL[23] ^= subL[1]; subR[23] ^= subR[1];
-+ if (max == 24) {
-+ /* kw3 */
-+ subL[24] ^= subL[1]; subR[24] ^= subR[1];
++ unmasked = 0; /* Do this here to stop compiler warnings. */
++ switch ((byte1 >> 1) & 3) {
++ case 0:
++ unmasked =
++ FPU_load_single((float __user *)
++ data_address,
++ &loaded_data);
++ loaded_tag = unmasked & 0xff;
++ unmasked &= ~0xff;
++ break;
++ case 1:
++ loaded_tag =
++ FPU_load_int32((long __user *)
++ data_address,
++ &loaded_data);
++ break;
++ case 2:
++ unmasked =
++ FPU_load_double((double __user *)
++ data_address,
++ &loaded_data);
++ loaded_tag = unmasked & 0xff;
++ unmasked &= ~0xff;
++ break;
++ case 3:
++ default: /* Used here to suppress gcc warnings. */
++ loaded_tag =
++ FPU_load_int16((short __user *)
++ data_address,
++ &loaded_data);
++ break;
++ }
+
-+ /* absorb kw4 to other subkeys */
-+ kw4l = subL[25]; kw4r = subR[25];
-+ } else {
-+ subL[1] ^= subR[1] & ~subR[25];
-+ dw = subL[1] & subL[25],
-+ subR[1] ^= ROL1(dw); /* modified for FLinv(kl6) */
-+ /* round 20 */
-+ subL[27] ^= subL[1]; subR[27] ^= subR[1];
-+ /* round 22 */
-+ subL[29] ^= subL[1]; subR[29] ^= subR[1];
-+ /* round 24 */
-+ subL[31] ^= subL[1]; subR[31] ^= subR[1];
-+ /* kw3 */
-+ subL[32] ^= subL[1]; subR[32] ^= subR[1];
++ /* No more access to user memory, it is safe
++ to use static data now */
+
-+ /* absorb kw4 to other subkeys */
-+ kw4l = subL[33]; kw4r = subR[33];
-+ /* round 23 */
-+ subL[30] ^= kw4l; subR[30] ^= kw4r;
-+ /* round 21 */
-+ subL[28] ^= kw4l; subR[28] ^= kw4r;
-+ /* round 19 */
-+ subL[26] ^= kw4l; subR[26] ^= kw4r;
-+ kw4l ^= kw4r & ~subR[24];
-+ dw = kw4l & subL[24],
-+ kw4r ^= ROL1(dw); /* modified for FL(kl5) */
-+ }
-+ /* round 17 */
-+ subL[22] ^= kw4l; subR[22] ^= kw4r;
-+ /* round 15 */
-+ subL[20] ^= kw4l; subR[20] ^= kw4r;
-+ /* round 13 */
-+ subL[18] ^= kw4l; subR[18] ^= kw4r;
-+ kw4l ^= kw4r & ~subR[16];
-+ dw = kw4l & subL[16],
-+ kw4r ^= ROL1(dw); /* modified for FL(kl3) */
-+ /* round 11 */
-+ subL[14] ^= kw4l; subR[14] ^= kw4r;
-+ /* round 9 */
-+ subL[12] ^= kw4l; subR[12] ^= kw4r;
-+ /* round 7 */
-+ subL[10] ^= kw4l; subR[10] ^= kw4r;
-+ kw4l ^= kw4r & ~subR[8];
-+ dw = kw4l & subL[8],
-+ kw4r ^= ROL1(dw); /* modified for FL(kl1) */
-+ /* round 5 */
-+ subL[6] ^= kw4l; subR[6] ^= kw4r;
-+ /* round 3 */
-+ subL[4] ^= kw4l; subR[4] ^= kw4r;
-+ /* round 1 */
-+ subL[2] ^= kw4l; subR[2] ^= kw4r;
-+ /* kw1 */
-+ subL[0] ^= kw4l; subR[0] ^= kw4r;
++ /* NaN operands have the next priority. */
++ /* We have to delay looking at st(0) until after
++ loading the data, because that data might contain an SNaN */
++ if (((st0_tag == TAG_Special) && isNaN(st0_ptr))
++ || ((loaded_tag == TAG_Special)
++ && isNaN(&loaded_data))) {
++ /* Restore the status word; we might have loaded a
++ denormal. */
++ partial_status = status1;
++ if ((FPU_modrm & 0x30) == 0x10) {
++ /* fcom or fcomp */
++ EXCEPTION(EX_Invalid);
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ if ((FPU_modrm & 0x08)
++ && (control_word &
++ CW_Invalid))
++ FPU_pop(); /* fcomp, masked, so we pop. */
++ } else {
++ if (loaded_tag == TAG_Special)
++ loaded_tag =
++ FPU_Special
++ (&loaded_data);
++#ifdef PECULIAR_486
++ /* This is not really needed, but gives behaviour
++ identical to an 80486 */
++ if ((FPU_modrm & 0x28) == 0x20)
++ /* fdiv or fsub */
++ real_2op_NaN
++ (&loaded_data,
++ loaded_tag, 0,
++ &loaded_data);
++ else
++#endif /* PECULIAR_486 */
++ /* fadd, fdivr, fmul, or fsubr */
++ real_2op_NaN
++ (&loaded_data,
++ loaded_tag, 0,
++ st0_ptr);
++ }
++ goto reg_mem_instr_done;
++ }
+
-+ /* key XOR is end of F-function */
-+ SUBKEY_L(0) = subL[0] ^ subL[2];/* kw1 */
-+ SUBKEY_R(0) = subR[0] ^ subR[2];
-+ SUBKEY_L(2) = subL[3]; /* round 1 */
-+ SUBKEY_R(2) = subR[3];
-+ SUBKEY_L(3) = subL[2] ^ subL[4]; /* round 2 */
-+ SUBKEY_R(3) = subR[2] ^ subR[4];
-+ SUBKEY_L(4) = subL[3] ^ subL[5]; /* round 3 */
-+ SUBKEY_R(4) = subR[3] ^ subR[5];
-+ SUBKEY_L(5) = subL[4] ^ subL[6]; /* round 4 */
-+ SUBKEY_R(5) = subR[4] ^ subR[6];
-+ SUBKEY_L(6) = subL[5] ^ subL[7]; /* round 5 */
-+ SUBKEY_R(6) = subR[5] ^ subR[7];
-+ tl = subL[10] ^ (subR[10] & ~subR[8]);
-+ dw = tl & subL[8], /* FL(kl1) */
-+ tr = subR[10] ^ ROL1(dw);
-+ SUBKEY_L(7) = subL[6] ^ tl; /* round 6 */
-+ SUBKEY_R(7) = subR[6] ^ tr;
-+ SUBKEY_L(8) = subL[8]; /* FL(kl1) */
-+ SUBKEY_R(8) = subR[8];
-+ SUBKEY_L(9) = subL[9]; /* FLinv(kl2) */
-+ SUBKEY_R(9) = subR[9];
-+ tl = subL[7] ^ (subR[7] & ~subR[9]);
-+ dw = tl & subL[9], /* FLinv(kl2) */
-+ tr = subR[7] ^ ROL1(dw);
-+ SUBKEY_L(10) = tl ^ subL[11]; /* round 7 */
-+ SUBKEY_R(10) = tr ^ subR[11];
-+ SUBKEY_L(11) = subL[10] ^ subL[12]; /* round 8 */
-+ SUBKEY_R(11) = subR[10] ^ subR[12];
-+ SUBKEY_L(12) = subL[11] ^ subL[13]; /* round 9 */
-+ SUBKEY_R(12) = subR[11] ^ subR[13];
-+ SUBKEY_L(13) = subL[12] ^ subL[14]; /* round 10 */
-+ SUBKEY_R(13) = subR[12] ^ subR[14];
-+ SUBKEY_L(14) = subL[13] ^ subL[15]; /* round 11 */
-+ SUBKEY_R(14) = subR[13] ^ subR[15];
-+ tl = subL[18] ^ (subR[18] & ~subR[16]);
-+ dw = tl & subL[16], /* FL(kl3) */
-+ tr = subR[18] ^ ROL1(dw);
-+ SUBKEY_L(15) = subL[14] ^ tl; /* round 12 */
-+ SUBKEY_R(15) = subR[14] ^ tr;
-+ SUBKEY_L(16) = subL[16]; /* FL(kl3) */
-+ SUBKEY_R(16) = subR[16];
-+ SUBKEY_L(17) = subL[17]; /* FLinv(kl4) */
-+ SUBKEY_R(17) = subR[17];
-+ tl = subL[15] ^ (subR[15] & ~subR[17]);
-+ dw = tl & subL[17], /* FLinv(kl4) */
-+ tr = subR[15] ^ ROL1(dw);
-+ SUBKEY_L(18) = tl ^ subL[19]; /* round 13 */
-+ SUBKEY_R(18) = tr ^ subR[19];
-+ SUBKEY_L(19) = subL[18] ^ subL[20]; /* round 14 */
-+ SUBKEY_R(19) = subR[18] ^ subR[20];
-+ SUBKEY_L(20) = subL[19] ^ subL[21]; /* round 15 */
-+ SUBKEY_R(20) = subR[19] ^ subR[21];
-+ SUBKEY_L(21) = subL[20] ^ subL[22]; /* round 16 */
-+ SUBKEY_R(21) = subR[20] ^ subR[22];
-+ SUBKEY_L(22) = subL[21] ^ subL[23]; /* round 17 */
-+ SUBKEY_R(22) = subR[21] ^ subR[23];
-+ if (max == 24) {
-+ SUBKEY_L(23) = subL[22]; /* round 18 */
-+ SUBKEY_R(23) = subR[22];
-+ SUBKEY_L(24) = subL[24] ^ subL[23]; /* kw3 */
-+ SUBKEY_R(24) = subR[24] ^ subR[23];
++ if (unmasked && !((FPU_modrm & 0x30) == 0x10)) {
++ /* Is not a comparison instruction. */
++ if ((FPU_modrm & 0x38) == 0x38) {
++ /* fdivr */
++ if ((st0_tag == TAG_Zero) &&
++ ((loaded_tag == TAG_Valid)
++ || (loaded_tag ==
++ TAG_Special
++ &&
++ isdenormal
++ (&loaded_data)))) {
++ if (FPU_divide_by_zero
++ (0,
++ getsign
++ (&loaded_data))
++ < 0) {
++ /* We use the fact here that the unmasked
++ exception in the loaded data was for a
++ denormal operand */
++ /* Restore the state of the denormal op bit */
++ partial_status
++ &=
++ ~SW_Denorm_Op;
++ partial_status
++ |=
++ status1 &
++ SW_Denorm_Op;
++ } else
++ setsign(st0_ptr,
++ getsign
++ (&loaded_data));
++ }
++ }
++ goto reg_mem_instr_done;
++ }
++
++ switch ((FPU_modrm >> 3) & 7) {
++ case 0: /* fadd */
++ clear_C1();
++ FPU_add(&loaded_data, loaded_tag, 0,
++ control_word);
++ break;
++ case 1: /* fmul */
++ clear_C1();
++ FPU_mul(&loaded_data, loaded_tag, 0,
++ control_word);
++ break;
++ case 2: /* fcom */
++ FPU_compare_st_data(&loaded_data,
++ loaded_tag);
++ break;
++ case 3: /* fcomp */
++ if (!FPU_compare_st_data
++ (&loaded_data, loaded_tag)
++ && !unmasked)
++ FPU_pop();
++ break;
++ case 4: /* fsub */
++ clear_C1();
++ FPU_sub(LOADED | loaded_tag,
++ (int)&loaded_data,
++ control_word);
++ break;
++ case 5: /* fsubr */
++ clear_C1();
++ FPU_sub(REV | LOADED | loaded_tag,
++ (int)&loaded_data,
++ control_word);
++ break;
++ case 6: /* fdiv */
++ clear_C1();
++ FPU_div(LOADED | loaded_tag,
++ (int)&loaded_data,
++ control_word);
++ break;
++ case 7: /* fdivr */
++ clear_C1();
++ if (st0_tag == TAG_Zero)
++ partial_status = status1; /* Undo any denorm tag,
++ zero-divide has priority. */
++ FPU_div(REV | LOADED | loaded_tag,
++ (int)&loaded_data,
++ control_word);
++ break;
++ }
++ } else {
++ if ((FPU_modrm & 0x30) == 0x10) {
++ /* The instruction is fcom or fcomp */
++ EXCEPTION(EX_StackUnder);
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ if ((FPU_modrm & 0x08)
++ && (control_word & CW_Invalid))
++ FPU_pop(); /* fcomp */
++ } else
++ FPU_stack_underflow();
++ }
++ reg_mem_instr_done:
++ operand_address = data_sel_off;
++ } else {
++ if (!(no_ip_update =
++ FPU_load_store(((FPU_modrm & 0x38) | (byte1 & 6))
++ >> 1, addr_modes, data_address))) {
++ operand_address = data_sel_off;
++ }
+ }
+- else
+- FPU_stack_underflow();
+- }
+- reg_mem_instr_done:
+- operand_address = data_sel_off;
+- }
+- else
+- {
+- if ( !(no_ip_update =
+- FPU_load_store(((FPU_modrm & 0x38) | (byte1 & 6)) >> 1,
+- addr_modes, data_address)) )
+- {
+- operand_address = data_sel_off;
+- }
+- }
+
+- }
+- else
+- {
+- /* None of these instructions access user memory */
+- u_char instr_index = (FPU_modrm & 0x38) | (byte1 & 7);
+ } else {
-+ tl = subL[26] ^ (subR[26] & ~subR[24]);
-+ dw = tl & subL[24], /* FL(kl5) */
-+ tr = subR[26] ^ ROL1(dw);
-+ SUBKEY_L(23) = subL[22] ^ tl; /* round 18 */
-+ SUBKEY_R(23) = subR[22] ^ tr;
-+ SUBKEY_L(24) = subL[24]; /* FL(kl5) */
-+ SUBKEY_R(24) = subR[24];
-+ SUBKEY_L(25) = subL[25]; /* FLinv(kl6) */
-+ SUBKEY_R(25) = subR[25];
-+ tl = subL[23] ^ (subR[23] & ~subR[25]);
-+ dw = tl & subL[25], /* FLinv(kl6) */
-+ tr = subR[23] ^ ROL1(dw);
-+ SUBKEY_L(26) = tl ^ subL[27]; /* round 19 */
-+ SUBKEY_R(26) = tr ^ subR[27];
-+ SUBKEY_L(27) = subL[26] ^ subL[28]; /* round 20 */
-+ SUBKEY_R(27) = subR[26] ^ subR[28];
-+ SUBKEY_L(28) = subL[27] ^ subL[29]; /* round 21 */
-+ SUBKEY_R(28) = subR[27] ^ subR[29];
-+ SUBKEY_L(29) = subL[28] ^ subL[30]; /* round 22 */
-+ SUBKEY_R(29) = subR[28] ^ subR[30];
-+ SUBKEY_L(30) = subL[29] ^ subL[31]; /* round 23 */
-+ SUBKEY_R(30) = subR[29] ^ subR[31];
-+ SUBKEY_L(31) = subL[30]; /* round 24 */
-+ SUBKEY_R(31) = subR[30];
-+ SUBKEY_L(32) = subL[32] ^ subL[31]; /* kw3 */
-+ SUBKEY_R(32) = subR[32] ^ subR[31];
++ /* None of these instructions access user memory */
++ u_char instr_index = (FPU_modrm & 0x38) | (byte1 & 7);
+
+ #ifdef PECULIAR_486
+- /* This is supposed to be undefined, but a real 80486 seems
+- to do this: */
+- operand_address.offset = 0;
+- operand_address.selector = FPU_DS;
++ /* This is supposed to be undefined, but a real 80486 seems
++ to do this: */
++ operand_address.offset = 0;
++ operand_address.selector = FPU_DS;
+ #endif /* PECULIAR_486 */
+
+- st0_ptr = &st(0);
+- st0_tag = FPU_gettag0();
+- switch ( type_table[(int) instr_index] )
+- {
+- case _NONE_: /* also _REGIc: _REGIn */
+- break;
+- case _REG0_:
+- if ( !NOT_EMPTY_ST0 )
+- {
+- FPU_stack_underflow();
+- goto FPU_instruction_done;
+- }
+- break;
+- case _REGIi:
+- if ( !NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm) )
+- {
+- FPU_stack_underflow_i(FPU_rm);
+- goto FPU_instruction_done;
+- }
+- break;
+- case _REGIp:
+- if ( !NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm) )
+- {
+- FPU_stack_underflow_pop(FPU_rm);
+- goto FPU_instruction_done;
+- }
+- break;
+- case _REGI_:
+- if ( !NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm) )
+- {
+- FPU_stack_underflow();
+- goto FPU_instruction_done;
+- }
+- break;
+- case _PUSH_: /* Only used by the fld st(i) instruction */
+- break;
+- case _null_:
+- FPU_illegal();
+- goto FPU_instruction_done;
+- default:
+- EXCEPTION(EX_INTERNAL|0x111);
+- goto FPU_instruction_done;
+- }
+- (*st_instr_table[(int) instr_index])();
++ st0_ptr = &st(0);
++ st0_tag = FPU_gettag0();
++ switch (type_table[(int)instr_index]) {
++ case _NONE_: /* also _REGIc: _REGIn */
++ break;
++ case _REG0_:
++ if (!NOT_EMPTY_ST0) {
++ FPU_stack_underflow();
++ goto FPU_instruction_done;
++ }
++ break;
++ case _REGIi:
++ if (!NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm)) {
++ FPU_stack_underflow_i(FPU_rm);
++ goto FPU_instruction_done;
++ }
++ break;
++ case _REGIp:
++ if (!NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm)) {
++ FPU_stack_underflow_pop(FPU_rm);
++ goto FPU_instruction_done;
++ }
++ break;
++ case _REGI_:
++ if (!NOT_EMPTY_ST0 || !NOT_EMPTY(FPU_rm)) {
++ FPU_stack_underflow();
++ goto FPU_instruction_done;
++ }
++ break;
++ case _PUSH_: /* Only used by the fld st(i) instruction */
++ break;
++ case _null_:
++ FPU_illegal();
++ goto FPU_instruction_done;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x111);
++ goto FPU_instruction_done;
++ }
++ (*st_instr_table[(int)instr_index]) ();
+
+-FPU_instruction_done:
+- ;
+- }
++ FPU_instruction_done:
++ ;
++ }
+
+- if ( ! no_ip_update )
+- instruction_address = entry_sel_off;
++ if (!no_ip_update)
++ instruction_address = entry_sel_off;
+
+-FPU_fwait_done:
++ FPU_fwait_done:
+
+ #ifdef DEBUG
+- RE_ENTRANT_CHECK_OFF;
+- FPU_printall();
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_printall();
++ RE_ENTRANT_CHECK_ON;
+ #endif /* DEBUG */
+
+- if (FPU_lookahead && !need_resched())
+- {
+- FPU_ORIG_EIP = FPU_EIP - code_base;
+- if ( valid_prefix(&byte1, (u_char __user **)&FPU_EIP,
+- &addr_modes.override) )
+- goto do_another_FPU_instruction;
+- }
++ if (FPU_lookahead && !need_resched()) {
++ FPU_ORIG_EIP = FPU_EIP - code_base;
++ if (valid_prefix(&byte1, (u_char __user **) & FPU_EIP,
++ &addr_modes.override))
++ goto do_another_FPU_instruction;
+ }
+
+- if ( addr_modes.default_mode )
+- FPU_EIP -= code_base;
++ if (addr_modes.default_mode)
++ FPU_EIP -= code_base;
+
+- RE_ENTRANT_CHECK_OFF;
++ RE_ENTRANT_CHECK_OFF;
+ }
+
+-
+ /* Support for prefix bytes is not yet complete. To properly handle
+ all prefix bytes, further changes are needed in the emulator code
+ which accesses user address space. Access to separate segments is
+ important for msdos emulation. */
+ static int valid_prefix(u_char *Byte, u_char __user **fpu_eip,
+- overrides *override)
++ overrides * override)
+ {
+- u_char byte;
+- u_char __user *ip = *fpu_eip;
+-
+- *override = (overrides) { 0, 0, PREFIX_DEFAULT }; /* defaults */
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(byte, ip);
+- RE_ENTRANT_CHECK_ON;
+-
+- while ( 1 )
+- {
+- switch ( byte )
+- {
+- case ADDR_SIZE_PREFIX:
+- override->address_size = ADDR_SIZE_PREFIX;
+- goto do_next_byte;
+-
+- case OP_SIZE_PREFIX:
+- override->operand_size = OP_SIZE_PREFIX;
+- goto do_next_byte;
+-
+- case PREFIX_CS:
+- override->segment = PREFIX_CS_;
+- goto do_next_byte;
+- case PREFIX_ES:
+- override->segment = PREFIX_ES_;
+- goto do_next_byte;
+- case PREFIX_SS:
+- override->segment = PREFIX_SS_;
+- goto do_next_byte;
+- case PREFIX_FS:
+- override->segment = PREFIX_FS_;
+- goto do_next_byte;
+- case PREFIX_GS:
+- override->segment = PREFIX_GS_;
+- goto do_next_byte;
+- case PREFIX_DS:
+- override->segment = PREFIX_DS_;
+- goto do_next_byte;
++ u_char byte;
++ u_char __user *ip = *fpu_eip;
++
++ *override = (overrides) {
++ 0, 0, PREFIX_DEFAULT}; /* defaults */
++
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(byte, ip);
++ RE_ENTRANT_CHECK_ON;
+
-+ /* apply the inverse of the last half of P-function */
-+ i = 2;
-+ do {
-+ dw = SUBKEY_L(i + 0) ^ SUBKEY_R(i + 0); dw = ROL8(dw);/* round 1 */
-+ SUBKEY_R(i + 0) = SUBKEY_L(i + 0) ^ dw; SUBKEY_L(i + 0) = dw;
-+ dw = SUBKEY_L(i + 1) ^ SUBKEY_R(i + 1); dw = ROL8(dw);/* round 2 */
-+ SUBKEY_R(i + 1) = SUBKEY_L(i + 1) ^ dw; SUBKEY_L(i + 1) = dw;
-+ dw = SUBKEY_L(i + 2) ^ SUBKEY_R(i + 2); dw = ROL8(dw);/* round 3 */
-+ SUBKEY_R(i + 2) = SUBKEY_L(i + 2) ^ dw; SUBKEY_L(i + 2) = dw;
-+ dw = SUBKEY_L(i + 3) ^ SUBKEY_R(i + 3); dw = ROL8(dw);/* round 4 */
-+ SUBKEY_R(i + 3) = SUBKEY_L(i + 3) ^ dw; SUBKEY_L(i + 3) = dw;
-+ dw = SUBKEY_L(i + 4) ^ SUBKEY_R(i + 4); dw = ROL8(dw);/* round 5 */
-+ SUBKEY_R(i + 4) = SUBKEY_L(i + 4) ^ dw; SUBKEY_L(i + 4) = dw;
-+ dw = SUBKEY_L(i + 5) ^ SUBKEY_R(i + 5); dw = ROL8(dw);/* round 6 */
-+ SUBKEY_R(i + 5) = SUBKEY_L(i + 5) ^ dw; SUBKEY_L(i + 5) = dw;
-+ i += 8;
-+ } while (i < max);
-+}
++ while (1) {
++ switch (byte) {
++ case ADDR_SIZE_PREFIX:
++ override->address_size = ADDR_SIZE_PREFIX;
++ goto do_next_byte;
++
++ case OP_SIZE_PREFIX:
++ override->operand_size = OP_SIZE_PREFIX;
++ goto do_next_byte;
++
++ case PREFIX_CS:
++ override->segment = PREFIX_CS_;
++ goto do_next_byte;
++ case PREFIX_ES:
++ override->segment = PREFIX_ES_;
++ goto do_next_byte;
++ case PREFIX_SS:
++ override->segment = PREFIX_SS_;
++ goto do_next_byte;
++ case PREFIX_FS:
++ override->segment = PREFIX_FS_;
++ goto do_next_byte;
++ case PREFIX_GS:
++ override->segment = PREFIX_GS_;
++ goto do_next_byte;
++ case PREFIX_DS:
++ override->segment = PREFIX_DS_;
++ goto do_next_byte;
+
+ /* lock is not a valid prefix for FPU instructions,
+ let the cpu handle it to generate a SIGILL. */
+ /* case PREFIX_LOCK: */
+
+- /* rep.. prefixes have no meaning for FPU instructions */
+- case PREFIX_REPE:
+- case PREFIX_REPNE:
+-
+- do_next_byte:
+- ip++;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(byte, ip);
+- RE_ENTRANT_CHECK_ON;
+- break;
+- case FWAIT_OPCODE:
+- *Byte = byte;
+- return 1;
+- default:
+- if ( (byte & 0xf8) == 0xd8 )
+- {
+- *Byte = byte;
+- *fpu_eip = ip;
+- return 1;
+- }
+- else
+- {
+- /* Not a valid sequence of prefix bytes followed by
+- an FPU instruction. */
+- *Byte = byte; /* Needed for error message. */
+- return 0;
+- }
++ /* rep.. prefixes have no meaning for FPU instructions */
++ case PREFIX_REPE:
++ case PREFIX_REPNE:
++
++ do_next_byte:
++ ip++;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(byte, ip);
++ RE_ENTRANT_CHECK_ON;
++ break;
++ case FWAIT_OPCODE:
++ *Byte = byte;
++ return 1;
++ default:
++ if ((byte & 0xf8) == 0xd8) {
++ *Byte = byte;
++ *fpu_eip = ip;
++ return 1;
++ } else {
++ /* Not a valid sequence of prefix bytes followed by
++ an FPU instruction. */
++ *Byte = byte; /* Needed for error message. */
++ return 0;
++ }
++ }
+ }
+- }
+ }
- static void camellia_setup128(const unsigned char *key, u32 *subkey)
+-
+-void math_abort(struct info * info, unsigned int signal)
++void math_abort(struct info *info, unsigned int signal)
{
- u32 kll, klr, krl, krr;
- u32 il, ir, t0, t1, w0, w1;
-- u32 kw4l, kw4r, dw, tl, tr;
- u32 subL[26];
- u32 subR[26];
+ FPU_EIP = FPU_ORIG_EIP;
+ current->thread.trap_no = 16;
+ current->thread.error_code = 0;
+- send_sig(signal,current,1);
++ send_sig(signal, current, 1);
+ RE_ENTRANT_CHECK_OFF;
+- __asm__("movl %0,%%esp ; ret": :"g" (((long) info)-4));
++ __asm__("movl %0,%%esp ; ret": :"g"(((long)info) - 4));
+ #ifdef PARANOID
+- printk("ERROR: wm-FPU-emu math_abort failed!\n");
++ printk("ERROR: wm-FPU-emu math_abort failed!\n");
+ #endif /* PARANOID */
+ }
- /**
-- * k == kll || klr || krl || krr (|| is concatination)
-- */
-- kll = GETU32(key );
-- klr = GETU32(key + 4);
-- krl = GETU32(key + 8);
-- krr = GETU32(key + 12);
-- /**
-- * generate KL dependent subkeys
-+ * k == kll || klr || krl || krr (|| is concatenation)
- */
-+ GETU32(kll, key );
-+ GETU32(klr, key + 4);
-+ GETU32(krl, key + 8);
-+ GETU32(krr, key + 12);
+-
+-
+ #define S387 ((struct i387_soft_struct *)s387)
+ #define sstatus_word() \
+ ((S387->swd & ~SW_Top & 0xffff) | ((S387->ftop << SW_Top_Shift) & SW_Top))
+
+-int restore_i387_soft(void *s387, struct _fpstate __user *buf)
++int fpregs_soft_set(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf)
+ {
+- u_char __user *d = (u_char __user *)buf;
+- int offset, other, i, tags, regnr, tag, newtop;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, d, 7*4 + 8*10);
+- if (__copy_from_user(&S387->cwd, d, 7*4))
+- return -1;
+- RE_ENTRANT_CHECK_ON;
+-
+- d += 7*4;
+-
+- S387->ftop = (S387->swd >> SW_Top_Shift) & 7;
+- offset = (S387->ftop & 7) * 10;
+- other = 80 - offset;
+-
+- RE_ENTRANT_CHECK_OFF;
+- /* Copy all registers in stack order. */
+- if (__copy_from_user(((u_char *)&S387->st_space)+offset, d, other))
+- return -1;
+- if ( offset )
+- if (__copy_from_user((u_char *)&S387->st_space, d+other, offset))
+- return -1;
+- RE_ENTRANT_CHECK_ON;
+-
+- /* The tags may need to be corrected now. */
+- tags = S387->twd;
+- newtop = S387->ftop;
+- for ( i = 0; i < 8; i++ )
+- {
+- regnr = (i+newtop) & 7;
+- if ( ((tags >> ((regnr & 7)*2)) & 3) != TAG_Empty )
+- {
+- /* The loaded data over-rides all other cases. */
+- tag = FPU_tagof((FPU_REG *)((u_char *)S387->st_space + 10*regnr));
+- tags &= ~(3 << (regnr*2));
+- tags |= (tag & 3) << (regnr*2);
++ struct i387_soft_struct *s387 = &target->thread.i387.soft;
++ void *space = s387->st_space;
++ int ret;
++ int offset, other, i, tags, regnr, tag, newtop;
+
-+ /* generate KL dependent subkeys */
- /* kw1 */
-- SUBL(0) = kll; SUBR(0) = klr;
-+ subL[0] = kll; subR[0] = klr;
- /* kw2 */
-- SUBL(1) = krl; SUBR(1) = krr;
-+ subL[1] = krl; subR[1] = krr;
- /* rotation left shift 15bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k3 */
-- SUBL(4) = kll; SUBR(4) = klr;
-+ subL[4] = kll; subR[4] = klr;
- /* k4 */
-- SUBL(5) = krl; SUBR(5) = krr;
-+ subL[5] = krl; subR[5] = krr;
- /* rotation left shift 15+30bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 30);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 30);
- /* k7 */
-- SUBL(10) = kll; SUBR(10) = klr;
-+ subL[10] = kll; subR[10] = klr;
- /* k8 */
-- SUBL(11) = krl; SUBR(11) = krr;
-+ subL[11] = krl; subR[11] = krr;
- /* rotation left shift 15+30+15bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k10 */
-- SUBL(13) = krl; SUBR(13) = krr;
-+ subL[13] = krl; subR[13] = krr;
- /* rotation left shift 15+30+15+17 bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
- /* kl3 */
-- SUBL(16) = kll; SUBR(16) = klr;
-+ subL[16] = kll; subR[16] = klr;
- /* kl4 */
-- SUBL(17) = krl; SUBR(17) = krr;
-+ subL[17] = krl; subR[17] = krr;
- /* rotation left shift 15+30+15+17+17 bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
- /* k13 */
-- SUBL(18) = kll; SUBR(18) = klr;
-+ subL[18] = kll; subR[18] = klr;
- /* k14 */
-- SUBL(19) = krl; SUBR(19) = krr;
-+ subL[19] = krl; subR[19] = krr;
- /* rotation left shift 15+30+15+17+17+17 bit */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
- /* k17 */
-- SUBL(22) = kll; SUBR(22) = klr;
-+ subL[22] = kll; subR[22] = klr;
- /* k18 */
-- SUBL(23) = krl; SUBR(23) = krr;
-+ subL[23] = krl; subR[23] = krr;
++ RE_ENTRANT_CHECK_OFF;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, s387, 0,
++ offsetof(struct i387_soft_struct, st_space));
++ RE_ENTRANT_CHECK_ON;
++
++ if (ret)
++ return ret;
++
++ S387->ftop = (S387->swd >> SW_Top_Shift) & 7;
++ offset = (S387->ftop & 7) * 10;
++ other = 80 - offset;
++
++ RE_ENTRANT_CHECK_OFF;
++
++ /* Copy all registers in stack order. */
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ space + offset, 0, other);
++ if (!ret && offset)
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ space, 0, offset);
++
++ RE_ENTRANT_CHECK_ON;
++
++ /* The tags may need to be corrected now. */
++ tags = S387->twd;
++ newtop = S387->ftop;
++ for (i = 0; i < 8; i++) {
++ regnr = (i + newtop) & 7;
++ if (((tags >> ((regnr & 7) * 2)) & 3) != TAG_Empty) {
++ /* The loaded data over-rides all other cases. */
++ tag =
++ FPU_tagof((FPU_REG *) ((u_char *) S387->st_space +
++ 10 * regnr));
++ tags &= ~(3 << (regnr * 2));
++ tags |= (tag & 3) << (regnr * 2);
++ }
+ }
+- }
+- S387->twd = tags;
++ S387->twd = tags;
- /* generate KA */
-- kll = SUBL(0); klr = SUBR(0);
-- krl = SUBL(1); krr = SUBR(1);
-+ kll = subL[0]; klr = subR[0];
-+ krl = subL[1]; krr = subR[1];
- CAMELLIA_F(kll, klr,
- CAMELLIA_SIGMA1L, CAMELLIA_SIGMA1R,
- w0, w1, il, ir, t0, t1);
-@@ -555,306 +666,108 @@ static void camellia_setup128(const unsigned char *key, u32 *subkey)
+- return 0;
++ return ret;
+ }
- /* generate KA dependent subkeys */
- /* k1, k2 */
-- SUBL(2) = kll; SUBR(2) = klr;
-- SUBL(3) = krl; SUBR(3) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ subL[2] = kll; subR[2] = klr;
-+ subL[3] = krl; subR[3] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k5,k6 */
-- SUBL(6) = kll; SUBR(6) = klr;
-- SUBL(7) = krl; SUBR(7) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ subL[6] = kll; subR[6] = klr;
-+ subL[7] = krl; subR[7] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* kl1, kl2 */
-- SUBL(8) = kll; SUBR(8) = klr;
-- SUBL(9) = krl; SUBR(9) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ subL[8] = kll; subR[8] = klr;
-+ subL[9] = krl; subR[9] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k9 */
-- SUBL(12) = kll; SUBR(12) = klr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ subL[12] = kll; subR[12] = klr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k11, k12 */
-- SUBL(14) = kll; SUBR(14) = klr;
-- SUBL(15) = krl; SUBR(15) = krr;
-- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
-+ subL[14] = kll; subR[14] = klr;
-+ subL[15] = krl; subR[15] = krr;
-+ ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
- /* k15, k16 */
-- SUBL(20) = kll; SUBR(20) = klr;
-- SUBL(21) = krl; SUBR(21) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
-+ subL[20] = kll; subR[20] = klr;
-+ subL[21] = krl; subR[21] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
- /* kw3, kw4 */
-- SUBL(24) = kll; SUBR(24) = klr;
-- SUBL(25) = krl; SUBR(25) = krr;
-+ subL[24] = kll; subR[24] = klr;
-+ subL[25] = krl; subR[25] = krr;
+-
+-int save_i387_soft(void *s387, struct _fpstate __user * buf)
++int fpregs_soft_get(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf)
+ {
+- u_char __user *d = (u_char __user *)buf;
+- int offset = (S387->ftop & 7) * 10, other = 80 - offset;
++ struct i387_soft_struct *s387 = &target->thread.i387.soft;
++ const void *space = s387->st_space;
++ int ret;
++ int offset = (S387->ftop & 7) * 10, other = 80 - offset;
++
++ RE_ENTRANT_CHECK_OFF;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE, d, 7*4 + 8*10);
+ #ifdef PECULIAR_486
+- S387->cwd &= ~0xe080;
+- /* An 80486 sets nearly all of the reserved bits to 1. */
+- S387->cwd |= 0xffff0040;
+- S387->swd = sstatus_word() | 0xffff0000;
+- S387->twd |= 0xffff0000;
+- S387->fcs &= ~0xf8000000;
+- S387->fos |= 0xffff0000;
++ S387->cwd &= ~0xe080;
++ /* An 80486 sets nearly all of the reserved bits to 1. */
++ S387->cwd |= 0xffff0040;
++ S387->swd = sstatus_word() | 0xffff0000;
++ S387->twd |= 0xffff0000;
++ S387->fcs &= ~0xf8000000;
++ S387->fos |= 0xffff0000;
+ #endif /* PECULIAR_486 */
+- if (__copy_to_user(d, &S387->cwd, 7*4))
+- return -1;
+- RE_ENTRANT_CHECK_ON;
+-
+- d += 7*4;
+-
+- RE_ENTRANT_CHECK_OFF;
+- /* Copy all registers in stack order. */
+- if (__copy_to_user(d, ((u_char *)&S387->st_space)+offset, other))
+- return -1;
+- if ( offset )
+- if (__copy_to_user(d+other, (u_char *)&S387->st_space, offset))
+- return -1;
+- RE_ENTRANT_CHECK_ON;
-
-- /* absorb kw2 to other subkeys */
-- /* round 2 */
-- SUBL(3) ^= SUBL(1); SUBR(3) ^= SUBR(1);
-- /* round 4 */
-- SUBL(5) ^= SUBL(1); SUBR(5) ^= SUBR(1);
-- /* round 6 */
-- SUBL(7) ^= SUBL(1); SUBR(7) ^= SUBR(1);
-- SUBL(1) ^= SUBR(1) & ~SUBR(9);
-- dw = SUBL(1) & SUBL(9),
-- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl2) */
-- /* round 8 */
-- SUBL(11) ^= SUBL(1); SUBR(11) ^= SUBR(1);
-- /* round 10 */
-- SUBL(13) ^= SUBL(1); SUBR(13) ^= SUBR(1);
-- /* round 12 */
-- SUBL(15) ^= SUBL(1); SUBR(15) ^= SUBR(1);
-- SUBL(1) ^= SUBR(1) & ~SUBR(17);
-- dw = SUBL(1) & SUBL(17),
-- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl4) */
-- /* round 14 */
-- SUBL(19) ^= SUBL(1); SUBR(19) ^= SUBR(1);
-- /* round 16 */
-- SUBL(21) ^= SUBL(1); SUBR(21) ^= SUBR(1);
-- /* round 18 */
-- SUBL(23) ^= SUBL(1); SUBR(23) ^= SUBR(1);
-- /* kw3 */
-- SUBL(24) ^= SUBL(1); SUBR(24) ^= SUBR(1);
+- return 1;
++
++ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, s387, 0,
++ offsetof(struct i387_soft_struct, st_space));
++
++ /* Copy all registers in stack order. */
++ if (!ret)
++ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ space + offset, 0, other);
++ if (!ret)
++ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ space, 0, offset);
++
++ RE_ENTRANT_CHECK_ON;
++
++ return ret;
+ }
+diff --git a/arch/x86/math-emu/fpu_etc.c b/arch/x86/math-emu/fpu_etc.c
+index e3b5d46..233e5af 100644
+--- a/arch/x86/math-emu/fpu_etc.c
++++ b/arch/x86/math-emu/fpu_etc.c
+@@ -16,128 +16,115 @@
+ #include "status_w.h"
+ #include "reg_constant.h"
+
-
-- /* absorb kw4 to other subkeys */
-- kw4l = SUBL(25); kw4r = SUBR(25);
-- /* round 17 */
-- SUBL(22) ^= kw4l; SUBR(22) ^= kw4r;
-- /* round 15 */
-- SUBL(20) ^= kw4l; SUBR(20) ^= kw4r;
-- /* round 13 */
-- SUBL(18) ^= kw4l; SUBR(18) ^= kw4r;
-- kw4l ^= kw4r & ~SUBR(16);
-- dw = kw4l & SUBL(16),
-- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl3) */
-- /* round 11 */
-- SUBL(14) ^= kw4l; SUBR(14) ^= kw4r;
-- /* round 9 */
-- SUBL(12) ^= kw4l; SUBR(12) ^= kw4r;
-- /* round 7 */
-- SUBL(10) ^= kw4l; SUBR(10) ^= kw4r;
-- kw4l ^= kw4r & ~SUBR(8);
-- dw = kw4l & SUBL(8),
-- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl1) */
-- /* round 5 */
-- SUBL(6) ^= kw4l; SUBR(6) ^= kw4r;
-- /* round 3 */
-- SUBL(4) ^= kw4l; SUBR(4) ^= kw4r;
-- /* round 1 */
-- SUBL(2) ^= kw4l; SUBR(2) ^= kw4r;
-- /* kw1 */
-- SUBL(0) ^= kw4l; SUBR(0) ^= kw4r;
+ static void fchs(FPU_REG *st0_ptr, u_char st0tag)
+ {
+- if ( st0tag ^ TAG_Empty )
+- {
+- signbyte(st0_ptr) ^= SIGN_NEG;
+- clear_C1();
+- }
+- else
+- FPU_stack_underflow();
++ if (st0tag ^ TAG_Empty) {
++ signbyte(st0_ptr) ^= SIGN_NEG;
++ clear_C1();
++ } else
++ FPU_stack_underflow();
+ }
+
-
+ static void fabs(FPU_REG *st0_ptr, u_char st0tag)
+ {
+- if ( st0tag ^ TAG_Empty )
+- {
+- setpositive(st0_ptr);
+- clear_C1();
+- }
+- else
+- FPU_stack_underflow();
++ if (st0tag ^ TAG_Empty) {
++ setpositive(st0_ptr);
++ clear_C1();
++ } else
++ FPU_stack_underflow();
+ }
+
-
-- /* key XOR is end of F-function */
-- CAMELLIA_SUBKEY_L(0) = SUBL(0) ^ SUBL(2);/* kw1 */
-- CAMELLIA_SUBKEY_R(0) = SUBR(0) ^ SUBR(2);
-- CAMELLIA_SUBKEY_L(2) = SUBL(3); /* round 1 */
-- CAMELLIA_SUBKEY_R(2) = SUBR(3);
-- CAMELLIA_SUBKEY_L(3) = SUBL(2) ^ SUBL(4); /* round 2 */
-- CAMELLIA_SUBKEY_R(3) = SUBR(2) ^ SUBR(4);
-- CAMELLIA_SUBKEY_L(4) = SUBL(3) ^ SUBL(5); /* round 3 */
-- CAMELLIA_SUBKEY_R(4) = SUBR(3) ^ SUBR(5);
-- CAMELLIA_SUBKEY_L(5) = SUBL(4) ^ SUBL(6); /* round 4 */
-- CAMELLIA_SUBKEY_R(5) = SUBR(4) ^ SUBR(6);
-- CAMELLIA_SUBKEY_L(6) = SUBL(5) ^ SUBL(7); /* round 5 */
-- CAMELLIA_SUBKEY_R(6) = SUBR(5) ^ SUBR(7);
-- tl = SUBL(10) ^ (SUBR(10) & ~SUBR(8));
-- dw = tl & SUBL(8), /* FL(kl1) */
-- tr = SUBR(10) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(7) = SUBL(6) ^ tl; /* round 6 */
-- CAMELLIA_SUBKEY_R(7) = SUBR(6) ^ tr;
-- CAMELLIA_SUBKEY_L(8) = SUBL(8); /* FL(kl1) */
-- CAMELLIA_SUBKEY_R(8) = SUBR(8);
-- CAMELLIA_SUBKEY_L(9) = SUBL(9); /* FLinv(kl2) */
-- CAMELLIA_SUBKEY_R(9) = SUBR(9);
-- tl = SUBL(7) ^ (SUBR(7) & ~SUBR(9));
-- dw = tl & SUBL(9), /* FLinv(kl2) */
-- tr = SUBR(7) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(10) = tl ^ SUBL(11); /* round 7 */
-- CAMELLIA_SUBKEY_R(10) = tr ^ SUBR(11);
-- CAMELLIA_SUBKEY_L(11) = SUBL(10) ^ SUBL(12); /* round 8 */
-- CAMELLIA_SUBKEY_R(11) = SUBR(10) ^ SUBR(12);
-- CAMELLIA_SUBKEY_L(12) = SUBL(11) ^ SUBL(13); /* round 9 */
-- CAMELLIA_SUBKEY_R(12) = SUBR(11) ^ SUBR(13);
-- CAMELLIA_SUBKEY_L(13) = SUBL(12) ^ SUBL(14); /* round 10 */
-- CAMELLIA_SUBKEY_R(13) = SUBR(12) ^ SUBR(14);
-- CAMELLIA_SUBKEY_L(14) = SUBL(13) ^ SUBL(15); /* round 11 */
-- CAMELLIA_SUBKEY_R(14) = SUBR(13) ^ SUBR(15);
-- tl = SUBL(18) ^ (SUBR(18) & ~SUBR(16));
-- dw = tl & SUBL(16), /* FL(kl3) */
-- tr = SUBR(18) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(15) = SUBL(14) ^ tl; /* round 12 */
-- CAMELLIA_SUBKEY_R(15) = SUBR(14) ^ tr;
-- CAMELLIA_SUBKEY_L(16) = SUBL(16); /* FL(kl3) */
-- CAMELLIA_SUBKEY_R(16) = SUBR(16);
-- CAMELLIA_SUBKEY_L(17) = SUBL(17); /* FLinv(kl4) */
-- CAMELLIA_SUBKEY_R(17) = SUBR(17);
-- tl = SUBL(15) ^ (SUBR(15) & ~SUBR(17));
-- dw = tl & SUBL(17), /* FLinv(kl4) */
-- tr = SUBR(15) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(18) = tl ^ SUBL(19); /* round 13 */
-- CAMELLIA_SUBKEY_R(18) = tr ^ SUBR(19);
-- CAMELLIA_SUBKEY_L(19) = SUBL(18) ^ SUBL(20); /* round 14 */
-- CAMELLIA_SUBKEY_R(19) = SUBR(18) ^ SUBR(20);
-- CAMELLIA_SUBKEY_L(20) = SUBL(19) ^ SUBL(21); /* round 15 */
-- CAMELLIA_SUBKEY_R(20) = SUBR(19) ^ SUBR(21);
-- CAMELLIA_SUBKEY_L(21) = SUBL(20) ^ SUBL(22); /* round 16 */
-- CAMELLIA_SUBKEY_R(21) = SUBR(20) ^ SUBR(22);
-- CAMELLIA_SUBKEY_L(22) = SUBL(21) ^ SUBL(23); /* round 17 */
-- CAMELLIA_SUBKEY_R(22) = SUBR(21) ^ SUBR(23);
-- CAMELLIA_SUBKEY_L(23) = SUBL(22); /* round 18 */
-- CAMELLIA_SUBKEY_R(23) = SUBR(22);
-- CAMELLIA_SUBKEY_L(24) = SUBL(24) ^ SUBL(23); /* kw3 */
-- CAMELLIA_SUBKEY_R(24) = SUBR(24) ^ SUBR(23);
+ static void ftst_(FPU_REG *st0_ptr, u_char st0tag)
+ {
+- switch (st0tag)
+- {
+- case TAG_Zero:
+- setcc(SW_C3);
+- break;
+- case TAG_Valid:
+- if (getsign(st0_ptr) == SIGN_POS)
+- setcc(0);
+- else
+- setcc(SW_C0);
+- break;
+- case TAG_Special:
+- switch ( FPU_Special(st0_ptr) )
+- {
+- case TW_Denormal:
+- if (getsign(st0_ptr) == SIGN_POS)
+- setcc(0);
+- else
+- setcc(SW_C0);
+- if ( denormal_operand() < 0 )
+- {
+-#ifdef PECULIAR_486
+- /* This is weird! */
+- if (getsign(st0_ptr) == SIGN_POS)
++ switch (st0tag) {
++ case TAG_Zero:
+ setcc(SW_C3);
++ break;
++ case TAG_Valid:
++ if (getsign(st0_ptr) == SIGN_POS)
++ setcc(0);
++ else
++ setcc(SW_C0);
++ break;
++ case TAG_Special:
++ switch (FPU_Special(st0_ptr)) {
++ case TW_Denormal:
++ if (getsign(st0_ptr) == SIGN_POS)
++ setcc(0);
++ else
++ setcc(SW_C0);
++ if (denormal_operand() < 0) {
++#ifdef PECULIAR_486
++ /* This is weird! */
++ if (getsign(st0_ptr) == SIGN_POS)
++ setcc(SW_C3);
+ #endif /* PECULIAR_486 */
+- return;
+- }
+- break;
+- case TW_NaN:
+- setcc(SW_C0|SW_C2|SW_C3); /* Operand is not comparable */
+- EXCEPTION(EX_Invalid);
+- break;
+- case TW_Infinity:
+- if (getsign(st0_ptr) == SIGN_POS)
+- setcc(0);
+- else
+- setcc(SW_C0);
+- break;
+- default:
+- setcc(SW_C0|SW_C2|SW_C3); /* Operand is not comparable */
+- EXCEPTION(EX_INTERNAL|0x14);
+- break;
++ return;
++ }
++ break;
++ case TW_NaN:
++ setcc(SW_C0 | SW_C2 | SW_C3); /* Operand is not comparable */
++ EXCEPTION(EX_Invalid);
++ break;
++ case TW_Infinity:
++ if (getsign(st0_ptr) == SIGN_POS)
++ setcc(0);
++ else
++ setcc(SW_C0);
++ break;
++ default:
++ setcc(SW_C0 | SW_C2 | SW_C3); /* Operand is not comparable */
++ EXCEPTION(EX_INTERNAL | 0x14);
++ break;
++ }
++ break;
++ case TAG_Empty:
++ setcc(SW_C0 | SW_C2 | SW_C3);
++ EXCEPTION(EX_StackUnder);
++ break;
+ }
+- break;
+- case TAG_Empty:
+- setcc(SW_C0|SW_C2|SW_C3);
+- EXCEPTION(EX_StackUnder);
+- break;
+- }
+ }
+
-
-- /* apply the inverse of the last half of P-function */
-- dw = CAMELLIA_SUBKEY_L(2) ^ CAMELLIA_SUBKEY_R(2),
-- dw = CAMELLIA_RL8(dw);/* round 1 */
-- CAMELLIA_SUBKEY_R(2) = CAMELLIA_SUBKEY_L(2) ^ dw,
-- CAMELLIA_SUBKEY_L(2) = dw;
-- dw = CAMELLIA_SUBKEY_L(3) ^ CAMELLIA_SUBKEY_R(3),
-- dw = CAMELLIA_RL8(dw);/* round 2 */
-- CAMELLIA_SUBKEY_R(3) = CAMELLIA_SUBKEY_L(3) ^ dw,
-- CAMELLIA_SUBKEY_L(3) = dw;
-- dw = CAMELLIA_SUBKEY_L(4) ^ CAMELLIA_SUBKEY_R(4),
-- dw = CAMELLIA_RL8(dw);/* round 3 */
-- CAMELLIA_SUBKEY_R(4) = CAMELLIA_SUBKEY_L(4) ^ dw,
-- CAMELLIA_SUBKEY_L(4) = dw;
-- dw = CAMELLIA_SUBKEY_L(5) ^ CAMELLIA_SUBKEY_R(5),
-- dw = CAMELLIA_RL8(dw);/* round 4 */
-- CAMELLIA_SUBKEY_R(5) = CAMELLIA_SUBKEY_L(5) ^ dw,
-- CAMELLIA_SUBKEY_L(5) = dw;
-- dw = CAMELLIA_SUBKEY_L(6) ^ CAMELLIA_SUBKEY_R(6),
-- dw = CAMELLIA_RL8(dw);/* round 5 */
-- CAMELLIA_SUBKEY_R(6) = CAMELLIA_SUBKEY_L(6) ^ dw,
-- CAMELLIA_SUBKEY_L(6) = dw;
-- dw = CAMELLIA_SUBKEY_L(7) ^ CAMELLIA_SUBKEY_R(7),
-- dw = CAMELLIA_RL8(dw);/* round 6 */
-- CAMELLIA_SUBKEY_R(7) = CAMELLIA_SUBKEY_L(7) ^ dw,
-- CAMELLIA_SUBKEY_L(7) = dw;
-- dw = CAMELLIA_SUBKEY_L(10) ^ CAMELLIA_SUBKEY_R(10),
-- dw = CAMELLIA_RL8(dw);/* round 7 */
-- CAMELLIA_SUBKEY_R(10) = CAMELLIA_SUBKEY_L(10) ^ dw,
-- CAMELLIA_SUBKEY_L(10) = dw;
-- dw = CAMELLIA_SUBKEY_L(11) ^ CAMELLIA_SUBKEY_R(11),
-- dw = CAMELLIA_RL8(dw);/* round 8 */
-- CAMELLIA_SUBKEY_R(11) = CAMELLIA_SUBKEY_L(11) ^ dw,
-- CAMELLIA_SUBKEY_L(11) = dw;
-- dw = CAMELLIA_SUBKEY_L(12) ^ CAMELLIA_SUBKEY_R(12),
-- dw = CAMELLIA_RL8(dw);/* round 9 */
-- CAMELLIA_SUBKEY_R(12) = CAMELLIA_SUBKEY_L(12) ^ dw,
-- CAMELLIA_SUBKEY_L(12) = dw;
-- dw = CAMELLIA_SUBKEY_L(13) ^ CAMELLIA_SUBKEY_R(13),
-- dw = CAMELLIA_RL8(dw);/* round 10 */
-- CAMELLIA_SUBKEY_R(13) = CAMELLIA_SUBKEY_L(13) ^ dw,
-- CAMELLIA_SUBKEY_L(13) = dw;
-- dw = CAMELLIA_SUBKEY_L(14) ^ CAMELLIA_SUBKEY_R(14),
-- dw = CAMELLIA_RL8(dw);/* round 11 */
-- CAMELLIA_SUBKEY_R(14) = CAMELLIA_SUBKEY_L(14) ^ dw,
-- CAMELLIA_SUBKEY_L(14) = dw;
-- dw = CAMELLIA_SUBKEY_L(15) ^ CAMELLIA_SUBKEY_R(15),
-- dw = CAMELLIA_RL8(dw);/* round 12 */
-- CAMELLIA_SUBKEY_R(15) = CAMELLIA_SUBKEY_L(15) ^ dw,
-- CAMELLIA_SUBKEY_L(15) = dw;
-- dw = CAMELLIA_SUBKEY_L(18) ^ CAMELLIA_SUBKEY_R(18),
-- dw = CAMELLIA_RL8(dw);/* round 13 */
-- CAMELLIA_SUBKEY_R(18) = CAMELLIA_SUBKEY_L(18) ^ dw,
-- CAMELLIA_SUBKEY_L(18) = dw;
-- dw = CAMELLIA_SUBKEY_L(19) ^ CAMELLIA_SUBKEY_R(19),
-- dw = CAMELLIA_RL8(dw);/* round 14 */
-- CAMELLIA_SUBKEY_R(19) = CAMELLIA_SUBKEY_L(19) ^ dw,
-- CAMELLIA_SUBKEY_L(19) = dw;
-- dw = CAMELLIA_SUBKEY_L(20) ^ CAMELLIA_SUBKEY_R(20),
-- dw = CAMELLIA_RL8(dw);/* round 15 */
-- CAMELLIA_SUBKEY_R(20) = CAMELLIA_SUBKEY_L(20) ^ dw,
-- CAMELLIA_SUBKEY_L(20) = dw;
-- dw = CAMELLIA_SUBKEY_L(21) ^ CAMELLIA_SUBKEY_R(21),
-- dw = CAMELLIA_RL8(dw);/* round 16 */
-- CAMELLIA_SUBKEY_R(21) = CAMELLIA_SUBKEY_L(21) ^ dw,
-- CAMELLIA_SUBKEY_L(21) = dw;
-- dw = CAMELLIA_SUBKEY_L(22) ^ CAMELLIA_SUBKEY_R(22),
-- dw = CAMELLIA_RL8(dw);/* round 17 */
-- CAMELLIA_SUBKEY_R(22) = CAMELLIA_SUBKEY_L(22) ^ dw,
-- CAMELLIA_SUBKEY_L(22) = dw;
-- dw = CAMELLIA_SUBKEY_L(23) ^ CAMELLIA_SUBKEY_R(23),
-- dw = CAMELLIA_RL8(dw);/* round 18 */
-- CAMELLIA_SUBKEY_R(23) = CAMELLIA_SUBKEY_L(23) ^ dw,
-- CAMELLIA_SUBKEY_L(23) = dw;
+ static void fxam(FPU_REG *st0_ptr, u_char st0tag)
+ {
+- int c = 0;
+- switch (st0tag)
+- {
+- case TAG_Empty:
+- c = SW_C3|SW_C0;
+- break;
+- case TAG_Zero:
+- c = SW_C3;
+- break;
+- case TAG_Valid:
+- c = SW_C2;
+- break;
+- case TAG_Special:
+- switch ( FPU_Special(st0_ptr) )
+- {
+- case TW_Denormal:
+- c = SW_C2|SW_C3; /* Denormal */
+- break;
+- case TW_NaN:
+- /* We also use NaN for unsupported types. */
+- if ( (st0_ptr->sigh & 0x80000000) && (exponent(st0_ptr) == EXP_OVER) )
+- c = SW_C0;
+- break;
+- case TW_Infinity:
+- c = SW_C2|SW_C0;
+- break;
++ int c = 0;
++ switch (st0tag) {
++ case TAG_Empty:
++ c = SW_C3 | SW_C0;
++ break;
++ case TAG_Zero:
++ c = SW_C3;
++ break;
++ case TAG_Valid:
++ c = SW_C2;
++ break;
++ case TAG_Special:
++ switch (FPU_Special(st0_ptr)) {
++ case TW_Denormal:
++ c = SW_C2 | SW_C3; /* Denormal */
++ break;
++ case TW_NaN:
++ /* We also use NaN for unsupported types. */
++ if ((st0_ptr->sigh & 0x80000000)
++ && (exponent(st0_ptr) == EXP_OVER))
++ c = SW_C0;
++ break;
++ case TW_Infinity:
++ c = SW_C2 | SW_C0;
++ break;
++ }
+ }
+- }
+- if ( getsign(st0_ptr) == SIGN_NEG )
+- c |= SW_C1;
+- setcc(c);
++ if (getsign(st0_ptr) == SIGN_NEG)
++ c |= SW_C1;
++ setcc(c);
+ }
+
-
-- return;
-+ camellia_setup_tail(subkey, subL, subR, 24);
+ static FUNC_ST0 const fp_etc_table[] = {
+- fchs, fabs, (FUNC_ST0)FPU_illegal, (FUNC_ST0)FPU_illegal,
+- ftst_, fxam, (FUNC_ST0)FPU_illegal, (FUNC_ST0)FPU_illegal
++ fchs, fabs, (FUNC_ST0) FPU_illegal, (FUNC_ST0) FPU_illegal,
++ ftst_, fxam, (FUNC_ST0) FPU_illegal, (FUNC_ST0) FPU_illegal
+ };
+
+ void FPU_etc(void)
+ {
+- (fp_etc_table[FPU_rm])(&st(0), FPU_gettag0());
++ (fp_etc_table[FPU_rm]) (&st(0), FPU_gettag0());
}
+diff --git a/arch/x86/math-emu/fpu_proto.h b/arch/x86/math-emu/fpu_proto.h
+index 37a8a7f..aa49b6a 100644
+--- a/arch/x86/math-emu/fpu_proto.h
++++ b/arch/x86/math-emu/fpu_proto.h
+@@ -66,7 +66,7 @@ extern int FPU_Special(FPU_REG const *ptr);
+ extern int isNaN(FPU_REG const *ptr);
+ extern void FPU_pop(void);
+ extern int FPU_empty_i(int stnr);
+-extern int FPU_stackoverflow(FPU_REG **st_new_ptr);
++extern int FPU_stackoverflow(FPU_REG ** st_new_ptr);
+ extern void FPU_copy_to_regi(FPU_REG const *r, u_char tag, int stnr);
+ extern void FPU_copy_to_reg1(FPU_REG const *r, u_char tag);
+ extern void FPU_copy_to_reg0(FPU_REG const *r, u_char tag);
+@@ -75,21 +75,23 @@ extern void FPU_triga(void);
+ extern void FPU_trigb(void);
+ /* get_address.c */
+ extern void __user *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
+- struct address *addr, fpu_addr_modes addr_modes);
++ struct address *addr,
++ fpu_addr_modes addr_modes);
+ extern void __user *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
+- struct address *addr, fpu_addr_modes addr_modes);
++ struct address *addr,
++ fpu_addr_modes addr_modes);
+ /* load_store.c */
+ extern int FPU_load_store(u_char type, fpu_addr_modes addr_modes,
+- void __user *data_address);
++ void __user * data_address);
+ /* poly_2xm1.c */
+-extern int poly_2xm1(u_char sign, FPU_REG *arg, FPU_REG *result);
++extern int poly_2xm1(u_char sign, FPU_REG * arg, FPU_REG *result);
+ /* poly_atan.c */
+-extern void poly_atan(FPU_REG *st0_ptr, u_char st0_tag, FPU_REG *st1_ptr,
++extern void poly_atan(FPU_REG * st0_ptr, u_char st0_tag, FPU_REG *st1_ptr,
+ u_char st1_tag);
+ /* poly_l2.c */
+ extern void poly_l2(FPU_REG *st0_ptr, FPU_REG *st1_ptr, u_char st1_sign);
+ extern int poly_l2p1(u_char s0, u_char s1, FPU_REG *r0, FPU_REG *r1,
+- FPU_REG *d);
++ FPU_REG * d);
+ /* poly_sin.c */
+ extern void poly_sine(FPU_REG *st0_ptr);
+ extern void poly_cos(FPU_REG *st0_ptr);
+@@ -117,10 +119,13 @@ extern int FPU_load_int32(long __user *_s, FPU_REG *loaded_data);
+ extern int FPU_load_int16(short __user *_s, FPU_REG *loaded_data);
+ extern int FPU_load_bcd(u_char __user *s);
+ extern int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag,
+- long double __user *d);
+-extern int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double __user *dfloat);
+-extern int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float __user *single);
+-extern int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long __user *d);
++ long double __user * d);
++extern int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag,
++ double __user * dfloat);
++extern int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag,
++ float __user * single);
++extern int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag,
++ long long __user * d);
+ extern int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long __user *d);
+ extern int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short __user *d);
+ extern int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char __user *d);
+@@ -137,4 +142,3 @@ extern int FPU_div(int flags, int regrm, int control_w);
+ /* reg_convert.c */
+ extern int FPU_to_exp16(FPU_REG const *a, FPU_REG *x);
+ #endif /* _FPU_PROTO_H */
+-
+diff --git a/arch/x86/math-emu/fpu_tags.c b/arch/x86/math-emu/fpu_tags.c
+index cb436fe..d9c657c 100644
+--- a/arch/x86/math-emu/fpu_tags.c
++++ b/arch/x86/math-emu/fpu_tags.c
+@@ -14,114 +14,102 @@
+ #include "fpu_system.h"
+ #include "exception.h"
-
- static void camellia_setup256(const unsigned char *key, u32 *subkey)
+ void FPU_pop(void)
{
-- u32 kll,klr,krl,krr; /* left half of key */
-- u32 krll,krlr,krrl,krrr; /* right half of key */
-+ u32 kll, klr, krl, krr; /* left half of key */
-+ u32 krll, krlr, krrl, krrr; /* right half of key */
- u32 il, ir, t0, t1, w0, w1; /* temporary variables */
-- u32 kw4l, kw4r, dw, tl, tr;
- u32 subL[34];
- u32 subR[34];
+- fpu_tag_word |= 3 << ((top & 7)*2);
+- top++;
++ fpu_tag_word |= 3 << ((top & 7) * 2);
++ top++;
+ }
- /**
- * key = (kll || klr || krl || krr || krll || krlr || krrl || krrr)
-- * (|| is concatination)
-+ * (|| is concatenation)
- */
-
-- kll = GETU32(key );
-- klr = GETU32(key + 4);
-- krl = GETU32(key + 8);
-- krr = GETU32(key + 12);
-- krll = GETU32(key + 16);
-- krlr = GETU32(key + 20);
-- krrl = GETU32(key + 24);
-- krrr = GETU32(key + 28);
-+ GETU32(kll, key );
-+ GETU32(klr, key + 4);
-+ GETU32(krl, key + 8);
-+ GETU32(krr, key + 12);
-+ GETU32(krll, key + 16);
-+ GETU32(krlr, key + 20);
-+ GETU32(krrl, key + 24);
-+ GETU32(krrr, key + 28);
+ int FPU_gettag0(void)
+ {
+- return (fpu_tag_word >> ((top & 7)*2)) & 3;
++ return (fpu_tag_word >> ((top & 7) * 2)) & 3;
+ }
- /* generate KL dependent subkeys */
- /* kw1 */
-- SUBL(0) = kll; SUBR(0) = klr;
-+ subL[0] = kll; subR[0] = klr;
- /* kw2 */
-- SUBL(1) = krl; SUBR(1) = krr;
-- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 45);
-+ subL[1] = krl; subR[1] = krr;
-+ ROLDQo32(kll, klr, krl, krr, w0, w1, 45);
- /* k9 */
-- SUBL(12) = kll; SUBR(12) = klr;
-+ subL[12] = kll; subR[12] = klr;
- /* k10 */
-- SUBL(13) = krl; SUBR(13) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ subL[13] = krl; subR[13] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* kl3 */
-- SUBL(16) = kll; SUBR(16) = klr;
-+ subL[16] = kll; subR[16] = klr;
- /* kl4 */
-- SUBL(17) = krl; SUBR(17) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
-+ subL[17] = krl; subR[17] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
- /* k17 */
-- SUBL(22) = kll; SUBR(22) = klr;
-+ subL[22] = kll; subR[22] = klr;
- /* k18 */
-- SUBL(23) = krl; SUBR(23) = krr;
-- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
-+ subL[23] = krl; subR[23] = krr;
-+ ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
- /* k23 */
-- SUBL(30) = kll; SUBR(30) = klr;
-+ subL[30] = kll; subR[30] = klr;
- /* k24 */
-- SUBL(31) = krl; SUBR(31) = krr;
-+ subL[31] = krl; subR[31] = krr;
+-
+ int FPU_gettagi(int stnr)
+ {
+- return (fpu_tag_word >> (((top+stnr) & 7)*2)) & 3;
++ return (fpu_tag_word >> (((top + stnr) & 7) * 2)) & 3;
+ }
- /* generate KR dependent subkeys */
-- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
-+ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
- /* k3 */
-- SUBL(4) = krll; SUBR(4) = krlr;
-+ subL[4] = krll; subR[4] = krlr;
- /* k4 */
-- SUBL(5) = krrl; SUBR(5) = krrr;
-- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
-+ subL[5] = krrl; subR[5] = krrr;
-+ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
- /* kl1 */
-- SUBL(8) = krll; SUBR(8) = krlr;
-+ subL[8] = krll; subR[8] = krlr;
- /* kl2 */
-- SUBL(9) = krrl; SUBR(9) = krrr;
-- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
-+ subL[9] = krrl; subR[9] = krrr;
-+ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
- /* k13 */
-- SUBL(18) = krll; SUBR(18) = krlr;
-+ subL[18] = krll; subR[18] = krlr;
- /* k14 */
-- SUBL(19) = krrl; SUBR(19) = krrr;
-- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
-+ subL[19] = krrl; subR[19] = krrr;
-+ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
- /* k19 */
-- SUBL(26) = krll; SUBR(26) = krlr;
-+ subL[26] = krll; subR[26] = krlr;
- /* k20 */
-- SUBL(27) = krrl; SUBR(27) = krrr;
-- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
-+ subL[27] = krrl; subR[27] = krrr;
-+ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
+-
+ int FPU_gettag(int regnr)
+ {
+- return (fpu_tag_word >> ((regnr & 7)*2)) & 3;
++ return (fpu_tag_word >> ((regnr & 7) * 2)) & 3;
+ }
- /* generate KA */
-- kll = SUBL(0) ^ krll; klr = SUBR(0) ^ krlr;
-- krl = SUBL(1) ^ krrl; krr = SUBR(1) ^ krrr;
-+ kll = subL[0] ^ krll; klr = subR[0] ^ krlr;
-+ krl = subL[1] ^ krrl; krr = subR[1] ^ krrr;
- CAMELLIA_F(kll, klr,
- CAMELLIA_SIGMA1L, CAMELLIA_SIGMA1R,
- w0, w1, il, ir, t0, t1);
-@@ -885,310 +798,50 @@ static void camellia_setup256(const unsigned char *key, u32 *subkey)
- krll ^= w0; krlr ^= w1;
+-
+ void FPU_settag0(int tag)
+ {
+- int regnr = top;
+- regnr &= 7;
+- fpu_tag_word &= ~(3 << (regnr*2));
+- fpu_tag_word |= (tag & 3) << (regnr*2);
++ int regnr = top;
++ regnr &= 7;
++ fpu_tag_word &= ~(3 << (regnr * 2));
++ fpu_tag_word |= (tag & 3) << (regnr * 2);
+ }
- /* generate KA dependent subkeys */
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
- /* k5 */
-- SUBL(6) = kll; SUBR(6) = klr;
-+ subL[6] = kll; subR[6] = klr;
- /* k6 */
-- SUBL(7) = krl; SUBR(7) = krr;
-- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 30);
-+ subL[7] = krl; subR[7] = krr;
-+ ROLDQ(kll, klr, krl, krr, w0, w1, 30);
- /* k11 */
-- SUBL(14) = kll; SUBR(14) = klr;
-+ subL[14] = kll; subR[14] = klr;
- /* k12 */
-- SUBL(15) = krl; SUBR(15) = krr;
-+ subL[15] = krl; subR[15] = krr;
- /* rotation left shift 32bit */
- /* kl5 */
-- SUBL(24) = klr; SUBR(24) = krl;
-+ subL[24] = klr; subR[24] = krl;
- /* kl6 */
-- SUBL(25) = krr; SUBR(25) = kll;
-+ subL[25] = krr; subR[25] = kll;
- /* rotation left shift 49 from k11,k12 -> k21,k22 */
-- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 49);
-+ ROLDQo32(kll, klr, krl, krr, w0, w1, 49);
- /* k21 */
-- SUBL(28) = kll; SUBR(28) = klr;
-+ subL[28] = kll; subR[28] = klr;
- /* k22 */
-- SUBL(29) = krl; SUBR(29) = krr;
-+ subL[29] = krl; subR[29] = krr;
+-
+ void FPU_settagi(int stnr, int tag)
+ {
+- int regnr = stnr+top;
+- regnr &= 7;
+- fpu_tag_word &= ~(3 << (regnr*2));
+- fpu_tag_word |= (tag & 3) << (regnr*2);
++ int regnr = stnr + top;
++ regnr &= 7;
++ fpu_tag_word &= ~(3 << (regnr * 2));
++ fpu_tag_word |= (tag & 3) << (regnr * 2);
+ }
- /* generate KB dependent subkeys */
- /* k1 */
-- SUBL(2) = krll; SUBR(2) = krlr;
-+ subL[2] = krll; subR[2] = krlr;
- /* k2 */
-- SUBL(3) = krrl; SUBR(3) = krrr;
-- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
-+ subL[3] = krrl; subR[3] = krrr;
-+ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
- /* k7 */
-- SUBL(10) = krll; SUBR(10) = krlr;
-+ subL[10] = krll; subR[10] = krlr;
- /* k8 */
-- SUBL(11) = krrl; SUBR(11) = krrr;
-- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
-+ subL[11] = krrl; subR[11] = krrr;
-+ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
- /* k15 */
-- SUBL(20) = krll; SUBR(20) = krlr;
-+ subL[20] = krll; subR[20] = krlr;
- /* k16 */
-- SUBL(21) = krrl; SUBR(21) = krrr;
-- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 51);
-+ subL[21] = krrl; subR[21] = krrr;
-+ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 51);
- /* kw3 */
-- SUBL(32) = krll; SUBR(32) = krlr;
-+ subL[32] = krll; subR[32] = krlr;
- /* kw4 */
-- SUBL(33) = krrl; SUBR(33) = krrr;
-
-- /* absorb kw2 to other subkeys */
-- /* round 2 */
-- SUBL(3) ^= SUBL(1); SUBR(3) ^= SUBR(1);
-- /* round 4 */
-- SUBL(5) ^= SUBL(1); SUBR(5) ^= SUBR(1);
-- /* round 6 */
-- SUBL(7) ^= SUBL(1); SUBR(7) ^= SUBR(1);
-- SUBL(1) ^= SUBR(1) & ~SUBR(9);
-- dw = SUBL(1) & SUBL(9),
-- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl2) */
-- /* round 8 */
-- SUBL(11) ^= SUBL(1); SUBR(11) ^= SUBR(1);
-- /* round 10 */
-- SUBL(13) ^= SUBL(1); SUBR(13) ^= SUBR(1);
-- /* round 12 */
-- SUBL(15) ^= SUBL(1); SUBR(15) ^= SUBR(1);
-- SUBL(1) ^= SUBR(1) & ~SUBR(17);
-- dw = SUBL(1) & SUBL(17),
-- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl4) */
-- /* round 14 */
-- SUBL(19) ^= SUBL(1); SUBR(19) ^= SUBR(1);
-- /* round 16 */
-- SUBL(21) ^= SUBL(1); SUBR(21) ^= SUBR(1);
-- /* round 18 */
-- SUBL(23) ^= SUBL(1); SUBR(23) ^= SUBR(1);
-- SUBL(1) ^= SUBR(1) & ~SUBR(25);
-- dw = SUBL(1) & SUBL(25),
-- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl6) */
-- /* round 20 */
-- SUBL(27) ^= SUBL(1); SUBR(27) ^= SUBR(1);
-- /* round 22 */
-- SUBL(29) ^= SUBL(1); SUBR(29) ^= SUBR(1);
-- /* round 24 */
-- SUBL(31) ^= SUBL(1); SUBR(31) ^= SUBR(1);
-- /* kw3 */
-- SUBL(32) ^= SUBL(1); SUBR(32) ^= SUBR(1);
+ void FPU_settag(int regnr, int tag)
+ {
+- regnr &= 7;
+- fpu_tag_word &= ~(3 << (regnr*2));
+- fpu_tag_word |= (tag & 3) << (regnr*2);
++ regnr &= 7;
++ fpu_tag_word &= ~(3 << (regnr * 2));
++ fpu_tag_word |= (tag & 3) << (regnr * 2);
+ }
+
-
+ int FPU_Special(FPU_REG const *ptr)
+ {
+- int exp = exponent(ptr);
-
-- /* absorb kw4 to other subkeys */
-- kw4l = SUBL(33); kw4r = SUBR(33);
-- /* round 23 */
-- SUBL(30) ^= kw4l; SUBR(30) ^= kw4r;
-- /* round 21 */
-- SUBL(28) ^= kw4l; SUBR(28) ^= kw4r;
-- /* round 19 */
-- SUBL(26) ^= kw4l; SUBR(26) ^= kw4r;
-- kw4l ^= kw4r & ~SUBR(24);
-- dw = kw4l & SUBL(24),
-- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl5) */
-- /* round 17 */
-- SUBL(22) ^= kw4l; SUBR(22) ^= kw4r;
-- /* round 15 */
-- SUBL(20) ^= kw4l; SUBR(20) ^= kw4r;
-- /* round 13 */
-- SUBL(18) ^= kw4l; SUBR(18) ^= kw4r;
-- kw4l ^= kw4r & ~SUBR(16);
-- dw = kw4l & SUBL(16),
-- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl3) */
-- /* round 11 */
-- SUBL(14) ^= kw4l; SUBR(14) ^= kw4r;
-- /* round 9 */
-- SUBL(12) ^= kw4l; SUBR(12) ^= kw4r;
-- /* round 7 */
-- SUBL(10) ^= kw4l; SUBR(10) ^= kw4r;
-- kw4l ^= kw4r & ~SUBR(8);
-- dw = kw4l & SUBL(8),
-- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl1) */
-- /* round 5 */
-- SUBL(6) ^= kw4l; SUBR(6) ^= kw4r;
-- /* round 3 */
-- SUBL(4) ^= kw4l; SUBR(4) ^= kw4r;
-- /* round 1 */
-- SUBL(2) ^= kw4l; SUBR(2) ^= kw4r;
-- /* kw1 */
-- SUBL(0) ^= kw4l; SUBR(0) ^= kw4r;
-+ subL[33] = krrl; subR[33] = krrr;
+- if ( exp == EXP_BIAS+EXP_UNDER )
+- return TW_Denormal;
+- else if ( exp != EXP_BIAS+EXP_OVER )
+- return TW_NaN;
+- else if ( (ptr->sigh == 0x80000000) && (ptr->sigl == 0) )
+- return TW_Infinity;
+- return TW_NaN;
++ int exp = exponent(ptr);
++
++ if (exp == EXP_BIAS + EXP_UNDER)
++ return TW_Denormal;
++ else if (exp != EXP_BIAS + EXP_OVER)
++ return TW_NaN;
++ else if ((ptr->sigh == 0x80000000) && (ptr->sigl == 0))
++ return TW_Infinity;
++ return TW_NaN;
+ }
-- /* key XOR is end of F-function */
-- CAMELLIA_SUBKEY_L(0) = SUBL(0) ^ SUBL(2);/* kw1 */
-- CAMELLIA_SUBKEY_R(0) = SUBR(0) ^ SUBR(2);
-- CAMELLIA_SUBKEY_L(2) = SUBL(3); /* round 1 */
-- CAMELLIA_SUBKEY_R(2) = SUBR(3);
-- CAMELLIA_SUBKEY_L(3) = SUBL(2) ^ SUBL(4); /* round 2 */
-- CAMELLIA_SUBKEY_R(3) = SUBR(2) ^ SUBR(4);
-- CAMELLIA_SUBKEY_L(4) = SUBL(3) ^ SUBL(5); /* round 3 */
-- CAMELLIA_SUBKEY_R(4) = SUBR(3) ^ SUBR(5);
-- CAMELLIA_SUBKEY_L(5) = SUBL(4) ^ SUBL(6); /* round 4 */
-- CAMELLIA_SUBKEY_R(5) = SUBR(4) ^ SUBR(6);
-- CAMELLIA_SUBKEY_L(6) = SUBL(5) ^ SUBL(7); /* round 5 */
-- CAMELLIA_SUBKEY_R(6) = SUBR(5) ^ SUBR(7);
-- tl = SUBL(10) ^ (SUBR(10) & ~SUBR(8));
-- dw = tl & SUBL(8), /* FL(kl1) */
-- tr = SUBR(10) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(7) = SUBL(6) ^ tl; /* round 6 */
-- CAMELLIA_SUBKEY_R(7) = SUBR(6) ^ tr;
-- CAMELLIA_SUBKEY_L(8) = SUBL(8); /* FL(kl1) */
-- CAMELLIA_SUBKEY_R(8) = SUBR(8);
-- CAMELLIA_SUBKEY_L(9) = SUBL(9); /* FLinv(kl2) */
-- CAMELLIA_SUBKEY_R(9) = SUBR(9);
-- tl = SUBL(7) ^ (SUBR(7) & ~SUBR(9));
-- dw = tl & SUBL(9), /* FLinv(kl2) */
-- tr = SUBR(7) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(10) = tl ^ SUBL(11); /* round 7 */
-- CAMELLIA_SUBKEY_R(10) = tr ^ SUBR(11);
-- CAMELLIA_SUBKEY_L(11) = SUBL(10) ^ SUBL(12); /* round 8 */
-- CAMELLIA_SUBKEY_R(11) = SUBR(10) ^ SUBR(12);
-- CAMELLIA_SUBKEY_L(12) = SUBL(11) ^ SUBL(13); /* round 9 */
-- CAMELLIA_SUBKEY_R(12) = SUBR(11) ^ SUBR(13);
-- CAMELLIA_SUBKEY_L(13) = SUBL(12) ^ SUBL(14); /* round 10 */
-- CAMELLIA_SUBKEY_R(13) = SUBR(12) ^ SUBR(14);
-- CAMELLIA_SUBKEY_L(14) = SUBL(13) ^ SUBL(15); /* round 11 */
-- CAMELLIA_SUBKEY_R(14) = SUBR(13) ^ SUBR(15);
-- tl = SUBL(18) ^ (SUBR(18) & ~SUBR(16));
-- dw = tl & SUBL(16), /* FL(kl3) */
-- tr = SUBR(18) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(15) = SUBL(14) ^ tl; /* round 12 */
-- CAMELLIA_SUBKEY_R(15) = SUBR(14) ^ tr;
-- CAMELLIA_SUBKEY_L(16) = SUBL(16); /* FL(kl3) */
-- CAMELLIA_SUBKEY_R(16) = SUBR(16);
-- CAMELLIA_SUBKEY_L(17) = SUBL(17); /* FLinv(kl4) */
-- CAMELLIA_SUBKEY_R(17) = SUBR(17);
-- tl = SUBL(15) ^ (SUBR(15) & ~SUBR(17));
-- dw = tl & SUBL(17), /* FLinv(kl4) */
-- tr = SUBR(15) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(18) = tl ^ SUBL(19); /* round 13 */
-- CAMELLIA_SUBKEY_R(18) = tr ^ SUBR(19);
-- CAMELLIA_SUBKEY_L(19) = SUBL(18) ^ SUBL(20); /* round 14 */
-- CAMELLIA_SUBKEY_R(19) = SUBR(18) ^ SUBR(20);
-- CAMELLIA_SUBKEY_L(20) = SUBL(19) ^ SUBL(21); /* round 15 */
-- CAMELLIA_SUBKEY_R(20) = SUBR(19) ^ SUBR(21);
-- CAMELLIA_SUBKEY_L(21) = SUBL(20) ^ SUBL(22); /* round 16 */
-- CAMELLIA_SUBKEY_R(21) = SUBR(20) ^ SUBR(22);
-- CAMELLIA_SUBKEY_L(22) = SUBL(21) ^ SUBL(23); /* round 17 */
-- CAMELLIA_SUBKEY_R(22) = SUBR(21) ^ SUBR(23);
-- tl = SUBL(26) ^ (SUBR(26)
-- & ~SUBR(24));
-- dw = tl & SUBL(24), /* FL(kl5) */
-- tr = SUBR(26) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(23) = SUBL(22) ^ tl; /* round 18 */
-- CAMELLIA_SUBKEY_R(23) = SUBR(22) ^ tr;
-- CAMELLIA_SUBKEY_L(24) = SUBL(24); /* FL(kl5) */
-- CAMELLIA_SUBKEY_R(24) = SUBR(24);
-- CAMELLIA_SUBKEY_L(25) = SUBL(25); /* FLinv(kl6) */
-- CAMELLIA_SUBKEY_R(25) = SUBR(25);
-- tl = SUBL(23) ^ (SUBR(23) &
-- ~SUBR(25));
-- dw = tl & SUBL(25), /* FLinv(kl6) */
-- tr = SUBR(23) ^ CAMELLIA_RL1(dw);
-- CAMELLIA_SUBKEY_L(26) = tl ^ SUBL(27); /* round 19 */
-- CAMELLIA_SUBKEY_R(26) = tr ^ SUBR(27);
-- CAMELLIA_SUBKEY_L(27) = SUBL(26) ^ SUBL(28); /* round 20 */
-- CAMELLIA_SUBKEY_R(27) = SUBR(26) ^ SUBR(28);
-- CAMELLIA_SUBKEY_L(28) = SUBL(27) ^ SUBL(29); /* round 21 */
-- CAMELLIA_SUBKEY_R(28) = SUBR(27) ^ SUBR(29);
-- CAMELLIA_SUBKEY_L(29) = SUBL(28) ^ SUBL(30); /* round 22 */
-- CAMELLIA_SUBKEY_R(29) = SUBR(28) ^ SUBR(30);
-- CAMELLIA_SUBKEY_L(30) = SUBL(29) ^ SUBL(31); /* round 23 */
-- CAMELLIA_SUBKEY_R(30) = SUBR(29) ^ SUBR(31);
-- CAMELLIA_SUBKEY_L(31) = SUBL(30); /* round 24 */
-- CAMELLIA_SUBKEY_R(31) = SUBR(30);
-- CAMELLIA_SUBKEY_L(32) = SUBL(32) ^ SUBL(31); /* kw3 */
-- CAMELLIA_SUBKEY_R(32) = SUBR(32) ^ SUBR(31);
--
-- /* apply the inverse of the last half of P-function */
-- dw = CAMELLIA_SUBKEY_L(2) ^ CAMELLIA_SUBKEY_R(2),
-- dw = CAMELLIA_RL8(dw);/* round 1 */
-- CAMELLIA_SUBKEY_R(2) = CAMELLIA_SUBKEY_L(2) ^ dw,
-- CAMELLIA_SUBKEY_L(2) = dw;
-- dw = CAMELLIA_SUBKEY_L(3) ^ CAMELLIA_SUBKEY_R(3),
-- dw = CAMELLIA_RL8(dw);/* round 2 */
-- CAMELLIA_SUBKEY_R(3) = CAMELLIA_SUBKEY_L(3) ^ dw,
-- CAMELLIA_SUBKEY_L(3) = dw;
-- dw = CAMELLIA_SUBKEY_L(4) ^ CAMELLIA_SUBKEY_R(4),
-- dw = CAMELLIA_RL8(dw);/* round 3 */
-- CAMELLIA_SUBKEY_R(4) = CAMELLIA_SUBKEY_L(4) ^ dw,
-- CAMELLIA_SUBKEY_L(4) = dw;
-- dw = CAMELLIA_SUBKEY_L(5) ^ CAMELLIA_SUBKEY_R(5),
-- dw = CAMELLIA_RL8(dw);/* round 4 */
-- CAMELLIA_SUBKEY_R(5) = CAMELLIA_SUBKEY_L(5) ^ dw,
-- CAMELLIA_SUBKEY_L(5) = dw;
-- dw = CAMELLIA_SUBKEY_L(6) ^ CAMELLIA_SUBKEY_R(6),
-- dw = CAMELLIA_RL8(dw);/* round 5 */
-- CAMELLIA_SUBKEY_R(6) = CAMELLIA_SUBKEY_L(6) ^ dw,
-- CAMELLIA_SUBKEY_L(6) = dw;
-- dw = CAMELLIA_SUBKEY_L(7) ^ CAMELLIA_SUBKEY_R(7),
-- dw = CAMELLIA_RL8(dw);/* round 6 */
-- CAMELLIA_SUBKEY_R(7) = CAMELLIA_SUBKEY_L(7) ^ dw,
-- CAMELLIA_SUBKEY_L(7) = dw;
-- dw = CAMELLIA_SUBKEY_L(10) ^ CAMELLIA_SUBKEY_R(10),
-- dw = CAMELLIA_RL8(dw);/* round 7 */
-- CAMELLIA_SUBKEY_R(10) = CAMELLIA_SUBKEY_L(10) ^ dw,
-- CAMELLIA_SUBKEY_L(10) = dw;
-- dw = CAMELLIA_SUBKEY_L(11) ^ CAMELLIA_SUBKEY_R(11),
-- dw = CAMELLIA_RL8(dw);/* round 8 */
-- CAMELLIA_SUBKEY_R(11) = CAMELLIA_SUBKEY_L(11) ^ dw,
-- CAMELLIA_SUBKEY_L(11) = dw;
-- dw = CAMELLIA_SUBKEY_L(12) ^ CAMELLIA_SUBKEY_R(12),
-- dw = CAMELLIA_RL8(dw);/* round 9 */
-- CAMELLIA_SUBKEY_R(12) = CAMELLIA_SUBKEY_L(12) ^ dw,
-- CAMELLIA_SUBKEY_L(12) = dw;
-- dw = CAMELLIA_SUBKEY_L(13) ^ CAMELLIA_SUBKEY_R(13),
-- dw = CAMELLIA_RL8(dw);/* round 10 */
-- CAMELLIA_SUBKEY_R(13) = CAMELLIA_SUBKEY_L(13) ^ dw,
-- CAMELLIA_SUBKEY_L(13) = dw;
-- dw = CAMELLIA_SUBKEY_L(14) ^ CAMELLIA_SUBKEY_R(14),
-- dw = CAMELLIA_RL8(dw);/* round 11 */
-- CAMELLIA_SUBKEY_R(14) = CAMELLIA_SUBKEY_L(14) ^ dw,
-- CAMELLIA_SUBKEY_L(14) = dw;
-- dw = CAMELLIA_SUBKEY_L(15) ^ CAMELLIA_SUBKEY_R(15),
-- dw = CAMELLIA_RL8(dw);/* round 12 */
-- CAMELLIA_SUBKEY_R(15) = CAMELLIA_SUBKEY_L(15) ^ dw,
-- CAMELLIA_SUBKEY_L(15) = dw;
-- dw = CAMELLIA_SUBKEY_L(18) ^ CAMELLIA_SUBKEY_R(18),
-- dw = CAMELLIA_RL8(dw);/* round 13 */
-- CAMELLIA_SUBKEY_R(18) = CAMELLIA_SUBKEY_L(18) ^ dw,
-- CAMELLIA_SUBKEY_L(18) = dw;
-- dw = CAMELLIA_SUBKEY_L(19) ^ CAMELLIA_SUBKEY_R(19),
-- dw = CAMELLIA_RL8(dw);/* round 14 */
-- CAMELLIA_SUBKEY_R(19) = CAMELLIA_SUBKEY_L(19) ^ dw,
-- CAMELLIA_SUBKEY_L(19) = dw;
-- dw = CAMELLIA_SUBKEY_L(20) ^ CAMELLIA_SUBKEY_R(20),
-- dw = CAMELLIA_RL8(dw);/* round 15 */
-- CAMELLIA_SUBKEY_R(20) = CAMELLIA_SUBKEY_L(20) ^ dw,
-- CAMELLIA_SUBKEY_L(20) = dw;
-- dw = CAMELLIA_SUBKEY_L(21) ^ CAMELLIA_SUBKEY_R(21),
-- dw = CAMELLIA_RL8(dw);/* round 16 */
-- CAMELLIA_SUBKEY_R(21) = CAMELLIA_SUBKEY_L(21) ^ dw,
-- CAMELLIA_SUBKEY_L(21) = dw;
-- dw = CAMELLIA_SUBKEY_L(22) ^ CAMELLIA_SUBKEY_R(22),
-- dw = CAMELLIA_RL8(dw);/* round 17 */
-- CAMELLIA_SUBKEY_R(22) = CAMELLIA_SUBKEY_L(22) ^ dw,
-- CAMELLIA_SUBKEY_L(22) = dw;
-- dw = CAMELLIA_SUBKEY_L(23) ^ CAMELLIA_SUBKEY_R(23),
-- dw = CAMELLIA_RL8(dw);/* round 18 */
-- CAMELLIA_SUBKEY_R(23) = CAMELLIA_SUBKEY_L(23) ^ dw,
-- CAMELLIA_SUBKEY_L(23) = dw;
-- dw = CAMELLIA_SUBKEY_L(26) ^ CAMELLIA_SUBKEY_R(26),
-- dw = CAMELLIA_RL8(dw);/* round 19 */
-- CAMELLIA_SUBKEY_R(26) = CAMELLIA_SUBKEY_L(26) ^ dw,
-- CAMELLIA_SUBKEY_L(26) = dw;
-- dw = CAMELLIA_SUBKEY_L(27) ^ CAMELLIA_SUBKEY_R(27),
-- dw = CAMELLIA_RL8(dw);/* round 20 */
-- CAMELLIA_SUBKEY_R(27) = CAMELLIA_SUBKEY_L(27) ^ dw,
-- CAMELLIA_SUBKEY_L(27) = dw;
-- dw = CAMELLIA_SUBKEY_L(28) ^ CAMELLIA_SUBKEY_R(28),
-- dw = CAMELLIA_RL8(dw);/* round 21 */
-- CAMELLIA_SUBKEY_R(28) = CAMELLIA_SUBKEY_L(28) ^ dw,
-- CAMELLIA_SUBKEY_L(28) = dw;
-- dw = CAMELLIA_SUBKEY_L(29) ^ CAMELLIA_SUBKEY_R(29),
-- dw = CAMELLIA_RL8(dw);/* round 22 */
-- CAMELLIA_SUBKEY_R(29) = CAMELLIA_SUBKEY_L(29) ^ dw,
-- CAMELLIA_SUBKEY_L(29) = dw;
-- dw = CAMELLIA_SUBKEY_L(30) ^ CAMELLIA_SUBKEY_R(30),
-- dw = CAMELLIA_RL8(dw);/* round 23 */
-- CAMELLIA_SUBKEY_R(30) = CAMELLIA_SUBKEY_L(30) ^ dw,
-- CAMELLIA_SUBKEY_L(30) = dw;
-- dw = CAMELLIA_SUBKEY_L(31) ^ CAMELLIA_SUBKEY_R(31),
-- dw = CAMELLIA_RL8(dw);/* round 24 */
-- CAMELLIA_SUBKEY_R(31) = CAMELLIA_SUBKEY_L(31) ^ dw,
-- CAMELLIA_SUBKEY_L(31) = dw;
-
-- return;
-+ camellia_setup_tail(subkey, subL, subR, 32);
+ int isNaN(FPU_REG const *ptr)
+ {
+- return ( (exponent(ptr) == EXP_BIAS+EXP_OVER)
+- && !((ptr->sigh == 0x80000000) && (ptr->sigl == 0)) );
++ return ((exponent(ptr) == EXP_BIAS + EXP_OVER)
++ && !((ptr->sigh == 0x80000000) && (ptr->sigl == 0)));
}
- static void camellia_setup192(const unsigned char *key, u32 *subkey)
-@@ -1197,482 +850,168 @@ static void camellia_setup192(const unsigned char *key, u32 *subkey)
- u32 krll, krlr, krrl,krrr;
+-
+ int FPU_empty_i(int stnr)
+ {
+- int regnr = (top+stnr) & 7;
++ int regnr = (top + stnr) & 7;
- memcpy(kk, key, 24);
-- memcpy((unsigned char *)&krll, key+16,4);
-- memcpy((unsigned char *)&krlr, key+20,4);
-+ memcpy((unsigned char *)&krll, key+16, 4);
-+ memcpy((unsigned char *)&krlr, key+20, 4);
- krrl = ~krll;
- krrr = ~krlr;
- memcpy(kk+24, (unsigned char *)&krrl, 4);
- memcpy(kk+28, (unsigned char *)&krrr, 4);
- camellia_setup256(kk, subkey);
-- return;
+- return ((fpu_tag_word >> (regnr*2)) & 3) == TAG_Empty;
++ return ((fpu_tag_word >> (regnr * 2)) & 3) == TAG_Empty;
}
-
--/**
-- * Stuff related to camellia encryption/decryption
-+/*
-+ * Encrypt/decrypt
- */
--static void camellia_encrypt128(const u32 *subkey, __be32 *io_text)
--{
-- u32 il,ir,t0,t1; /* temporary valiables */
--
-- u32 io[4];
--
-- io[0] = be32_to_cpu(io_text[0]);
-- io[1] = be32_to_cpu(io_text[1]);
-- io[2] = be32_to_cpu(io_text[2]);
-- io[3] = be32_to_cpu(io_text[3]);
--
-- /* pre whitening but absorb kw2*/
-- io[0] ^= CAMELLIA_SUBKEY_L(0);
-- io[1] ^= CAMELLIA_SUBKEY_R(0);
-- /* main iteration */
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
-- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
-- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
-- t0,t1,il,ir);
-
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
-- io[0],io[1],il,ir,t0,t1);
-+#define CAMELLIA_FLS(ll, lr, rl, rr, kll, klr, krl, krr, t0, t1, t2, t3) \
-+ do { \
-+ t0 = kll; \
-+ t2 = krr; \
-+ t0 &= ll; \
-+ t2 |= rr; \
-+ rl ^= t2; \
-+ lr ^= ROL1(t0); \
-+ t3 = krl; \
-+ t1 = klr; \
-+ t3 &= rl; \
-+ t1 |= lr; \
-+ ll ^= t1; \
-+ rr ^= ROL1(t3); \
-+ } while(0)
+-int FPU_stackoverflow(FPU_REG **st_new_ptr)
++int FPU_stackoverflow(FPU_REG ** st_new_ptr)
+ {
+- *st_new_ptr = &st(-1);
++ *st_new_ptr = &st(-1);
+
+- return ((fpu_tag_word >> (((top - 1) & 7)*2)) & 3) != TAG_Empty;
++ return ((fpu_tag_word >> (((top - 1) & 7) * 2)) & 3) != TAG_Empty;
+ }
-- /* post whitening but kw4 */
-- io[2] ^= CAMELLIA_SUBKEY_L(24);
-- io[3] ^= CAMELLIA_SUBKEY_R(24);
--
-- t0 = io[0];
-- t1 = io[1];
-- io[0] = io[2];
-- io[1] = io[3];
-- io[2] = t0;
-- io[3] = t1;
--
-- io_text[0] = cpu_to_be32(io[0]);
-- io_text[1] = cpu_to_be32(io[1]);
-- io_text[2] = cpu_to_be32(io[2]);
-- io_text[3] = cpu_to_be32(io[3]);
-
-- return;
--}
-+#define CAMELLIA_ROUNDSM(xl, xr, kl, kr, yl, yr, il, ir) \
-+ do { \
-+ ir = camellia_sp1110[(u8)xr]; \
-+ il = camellia_sp1110[ (xl >> 24)]; \
-+ ir ^= camellia_sp0222[ (xr >> 24)]; \
-+ il ^= camellia_sp0222[(u8)(xl >> 16)]; \
-+ ir ^= camellia_sp3033[(u8)(xr >> 16)]; \
-+ il ^= camellia_sp3033[(u8)(xl >> 8)]; \
-+ ir ^= camellia_sp4404[(u8)(xr >> 8)]; \
-+ il ^= camellia_sp4404[(u8)xl]; \
-+ il ^= kl; \
-+ ir ^= il ^ kr; \
-+ yl ^= ir; \
-+ yr ^= ROR8(il) ^ ir; \
-+ } while(0)
+ void FPU_copy_to_regi(FPU_REG const *r, u_char tag, int stnr)
+ {
+- reg_copy(r, &st(stnr));
+- FPU_settagi(stnr, tag);
++ reg_copy(r, &st(stnr));
++ FPU_settagi(stnr, tag);
+ }
--static void camellia_decrypt128(const u32 *subkey, __be32 *io_text)
-+/* max = 24: 128bit encrypt, max = 32: 256bit encrypt */
-+static void camellia_do_encrypt(const u32 *subkey, u32 *io, unsigned max)
+ void FPU_copy_to_reg1(FPU_REG const *r, u_char tag)
{
-- u32 il,ir,t0,t1; /* temporary valiables */
-+ u32 il,ir,t0,t1; /* temporary variables */
+- reg_copy(r, &st(1));
+- FPU_settagi(1, tag);
++ reg_copy(r, &st(1));
++ FPU_settagi(1, tag);
+ }
-- u32 io[4];
--
-- io[0] = be32_to_cpu(io_text[0]);
-- io[1] = be32_to_cpu(io_text[1]);
-- io[2] = be32_to_cpu(io_text[2]);
-- io[3] = be32_to_cpu(io_text[3]);
--
-- /* pre whitening but absorb kw2*/
-- io[0] ^= CAMELLIA_SUBKEY_L(24);
-- io[1] ^= CAMELLIA_SUBKEY_R(24);
-+ /* pre whitening but absorb kw2 */
-+ io[0] ^= SUBKEY_L(0);
-+ io[1] ^= SUBKEY_R(0);
+ void FPU_copy_to_reg0(FPU_REG const *r, u_char tag)
+ {
+- int regnr = top;
+- regnr &= 7;
++ int regnr = top;
++ regnr &= 7;
- /* main iteration */
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
-- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
-- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
-- io[0],io[1],il,ir,t0,t1);
--
-- /* post whitening but kw4 */
-- io[2] ^= CAMELLIA_SUBKEY_L(0);
-- io[3] ^= CAMELLIA_SUBKEY_R(0);
--
-- t0 = io[0];
-- t1 = io[1];
-- io[0] = io[2];
-- io[1] = io[3];
-- io[2] = t0;
-- io[3] = t1;
--
-- io_text[0] = cpu_to_be32(io[0]);
-- io_text[1] = cpu_to_be32(io[1]);
-- io_text[2] = cpu_to_be32(io[2]);
-- io_text[3] = cpu_to_be32(io[3]);
--
-- return;
--}
+- reg_copy(r, &st(0));
++ reg_copy(r, &st(0));
+
+- fpu_tag_word &= ~(3 << (regnr*2));
+- fpu_tag_word |= (tag & 3) << (regnr*2);
++ fpu_tag_word &= ~(3 << (regnr * 2));
++ fpu_tag_word |= (tag & 3) << (regnr * 2);
+ }
+diff --git a/arch/x86/math-emu/fpu_trig.c b/arch/x86/math-emu/fpu_trig.c
+index 403cbde..ecd0668 100644
+--- a/arch/x86/math-emu/fpu_trig.c
++++ b/arch/x86/math-emu/fpu_trig.c
+@@ -15,11 +15,10 @@
+ #include "fpu_emu.h"
+ #include "status_w.h"
+ #include "control_w.h"
+-#include "reg_constant.h"
++#include "reg_constant.h"
+
+ static void rem_kernel(unsigned long long st0, unsigned long long *y,
+- unsigned long long st1,
+- unsigned long long q, int n);
++ unsigned long long st1, unsigned long long q, int n);
+
+ #define BETTER_THAN_486
+
+@@ -33,788 +32,706 @@ static void rem_kernel(unsigned long long st0, unsigned long long *y,
+ precision of the result sometimes degrades to about 63.9 bits */
+ static int trig_arg(FPU_REG *st0_ptr, int even)
+ {
+- FPU_REG tmp;
+- u_char tmptag;
+- unsigned long long q;
+- int old_cw = control_word, saved_status = partial_status;
+- int tag, st0_tag = TAG_Valid;
-
+- if ( exponent(st0_ptr) >= 63 )
+- {
+- partial_status |= SW_C2; /* Reduction incomplete. */
+- return -1;
+- }
-
--/**
-- * stuff for 192 and 256bit encryption/decryption
-- */
--static void camellia_encrypt256(const u32 *subkey, __be32 *io_text)
--{
-- u32 il,ir,t0,t1; /* temporary valiables */
+- control_word &= ~CW_RC;
+- control_word |= RC_CHOP;
-
-- u32 io[4];
+- setpositive(st0_ptr);
+- tag = FPU_u_div(st0_ptr, &CONST_PI2, &tmp, PR_64_BITS | RC_CHOP | 0x3f,
+- SIGN_POS);
+-
+- FPU_round_to_int(&tmp, tag); /* Fortunately, this can't overflow
+- to 2^64 */
+- q = significand(&tmp);
+- if ( q )
+- {
+- rem_kernel(significand(st0_ptr),
+- &significand(&tmp),
+- significand(&CONST_PI2),
+- q, exponent(st0_ptr) - exponent(&CONST_PI2));
+- setexponent16(&tmp, exponent(&CONST_PI2));
+- st0_tag = FPU_normalize(&tmp);
+- FPU_copy_to_reg0(&tmp, st0_tag);
+- }
-
-- io[0] = be32_to_cpu(io_text[0]);
-- io[1] = be32_to_cpu(io_text[1]);
-- io[2] = be32_to_cpu(io_text[2]);
-- io[3] = be32_to_cpu(io_text[3]);
-+#define ROUNDS(i) do { \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 2),SUBKEY_R(i + 2), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 3),SUBKEY_R(i + 3), \
-+ io[0],io[1],il,ir); \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 4),SUBKEY_R(i + 4), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 5),SUBKEY_R(i + 5), \
-+ io[0],io[1],il,ir); \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 6),SUBKEY_R(i + 6), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 7),SUBKEY_R(i + 7), \
-+ io[0],io[1],il,ir); \
-+} while (0)
-+#define FLS(i) do { \
-+ CAMELLIA_FLS(io[0],io[1],io[2],io[3], \
-+ SUBKEY_L(i + 0),SUBKEY_R(i + 0), \
-+ SUBKEY_L(i + 1),SUBKEY_R(i + 1), \
-+ t0,t1,il,ir); \
-+} while (0)
+- if ( (even && !(q & 1)) || (!even && (q & 1)) )
+- {
+- st0_tag = FPU_sub(REV|LOADED|TAG_Valid, (int)&CONST_PI2, FULL_PRECISION);
++ FPU_REG tmp;
++ u_char tmptag;
++ unsigned long long q;
++ int old_cw = control_word, saved_status = partial_status;
++ int tag, st0_tag = TAG_Valid;
+
-+ ROUNDS(0);
-+ FLS(8);
-+ ROUNDS(8);
-+ FLS(16);
-+ ROUNDS(16);
-+ if (max == 32) {
-+ FLS(24);
-+ ROUNDS(24);
++ if (exponent(st0_ptr) >= 63) {
++ partial_status |= SW_C2; /* Reduction incomplete. */
++ return -1;
+ }
-- /* pre whitening but absorb kw2*/
-- io[0] ^= CAMELLIA_SUBKEY_L(0);
-- io[1] ^= CAMELLIA_SUBKEY_R(0);
--
-- /* main iteration */
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
-- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
-- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
-- io[0],io[1],il,ir,t0,t1);
+-#ifdef BETTER_THAN_486
+- /* So far, the results are exact but based upon a 64 bit
+- precision approximation to pi/2. The technique used
+- now is equivalent to using an approximation to pi/2 which
+- is accurate to about 128 bits. */
+- if ( (exponent(st0_ptr) <= exponent(&CONST_PI2extra) + 64) || (q > 1) )
+- {
+- /* This code gives the effect of having pi/2 to better than
+- 128 bits precision. */
-
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(24),CAMELLIA_SUBKEY_R(24),
-- CAMELLIA_SUBKEY_L(25),CAMELLIA_SUBKEY_R(25),
-- t0,t1,il,ir);
+- significand(&tmp) = q + 1;
+- setexponent16(&tmp, 63);
+- FPU_normalize(&tmp);
+- tmptag =
+- FPU_u_mul(&CONST_PI2extra, &tmp, &tmp, FULL_PRECISION, SIGN_POS,
+- exponent(&CONST_PI2extra) + exponent(&tmp));
+- setsign(&tmp, getsign(&CONST_PI2extra));
+- st0_tag = FPU_add(&tmp, tmptag, 0, FULL_PRECISION);
+- if ( signnegative(st0_ptr) )
+- {
+- /* CONST_PI2extra is negative, so the result of the addition
+- can be negative. This means that the argument is actually
+- in a different quadrant. The correction is always < pi/2,
+- so it can't overflow into yet another quadrant. */
+- setpositive(st0_ptr);
+- q++;
+- }
++ control_word &= ~CW_RC;
++ control_word |= RC_CHOP;
++
++ setpositive(st0_ptr);
++ tag = FPU_u_div(st0_ptr, &CONST_PI2, &tmp, PR_64_BITS | RC_CHOP | 0x3f,
++ SIGN_POS);
++
++ FPU_round_to_int(&tmp, tag); /* Fortunately, this can't overflow
++ to 2^64 */
++ q = significand(&tmp);
++ if (q) {
++ rem_kernel(significand(st0_ptr),
++ &significand(&tmp),
++ significand(&CONST_PI2),
++ q, exponent(st0_ptr) - exponent(&CONST_PI2));
++ setexponent16(&tmp, exponent(&CONST_PI2));
++ st0_tag = FPU_normalize(&tmp);
++ FPU_copy_to_reg0(&tmp, st0_tag);
+ }
++
++ if ((even && !(q & 1)) || (!even && (q & 1))) {
++ st0_tag =
++ FPU_sub(REV | LOADED | TAG_Valid, (int)&CONST_PI2,
++ FULL_PRECISION);
++
++#ifdef BETTER_THAN_486
++ /* So far, the results are exact but based upon a 64 bit
++ precision approximation to pi/2. The technique used
++ now is equivalent to using an approximation to pi/2 which
++ is accurate to about 128 bits. */
++ if ((exponent(st0_ptr) <= exponent(&CONST_PI2extra) + 64)
++ || (q > 1)) {
++ /* This code gives the effect of having pi/2 to better than
++ 128 bits precision. */
++
++ significand(&tmp) = q + 1;
++ setexponent16(&tmp, 63);
++ FPU_normalize(&tmp);
++ tmptag =
++ FPU_u_mul(&CONST_PI2extra, &tmp, &tmp,
++ FULL_PRECISION, SIGN_POS,
++ exponent(&CONST_PI2extra) +
++ exponent(&tmp));
++ setsign(&tmp, getsign(&CONST_PI2extra));
++ st0_tag = FPU_add(&tmp, tmptag, 0, FULL_PRECISION);
++ if (signnegative(st0_ptr)) {
++ /* CONST_PI2extra is negative, so the result of the addition
++ can be negative. This means that the argument is actually
++ in a different quadrant. The correction is always < pi/2,
++ so it can't overflow into yet another quadrant. */
++ setpositive(st0_ptr);
++ q++;
++ }
++ }
+ #endif /* BETTER_THAN_486 */
+- }
++ }
+ #ifdef BETTER_THAN_486
+- else
+- {
+- /* So far, the results are exact but based upon a 64 bit
+- precision approximation to pi/2. The technique used
+- now is equivalent to using an approximation to pi/2 which
+- is accurate to about 128 bits. */
+- if ( ((q > 0) && (exponent(st0_ptr) <= exponent(&CONST_PI2extra) + 64))
+- || (q > 1) )
+- {
+- /* This code gives the effect of having p/2 to better than
+- 128 bits precision. */
-
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(26),CAMELLIA_SUBKEY_R(26),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(27),CAMELLIA_SUBKEY_R(27),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(28),CAMELLIA_SUBKEY_R(28),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(29),CAMELLIA_SUBKEY_R(29),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(30),CAMELLIA_SUBKEY_R(30),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(31),CAMELLIA_SUBKEY_R(31),
-- io[0],io[1],il,ir,t0,t1);
-+#undef ROUNDS
-+#undef FLS
+- significand(&tmp) = q;
+- setexponent16(&tmp, 63);
+- FPU_normalize(&tmp); /* This must return TAG_Valid */
+- tmptag = FPU_u_mul(&CONST_PI2extra, &tmp, &tmp, FULL_PRECISION,
+- SIGN_POS,
+- exponent(&CONST_PI2extra) + exponent(&tmp));
+- setsign(&tmp, getsign(&CONST_PI2extra));
+- st0_tag = FPU_sub(LOADED|(tmptag & 0x0f), (int)&tmp,
+- FULL_PRECISION);
+- if ( (exponent(st0_ptr) == exponent(&CONST_PI2)) &&
+- ((st0_ptr->sigh > CONST_PI2.sigh)
+- || ((st0_ptr->sigh == CONST_PI2.sigh)
+- && (st0_ptr->sigl > CONST_PI2.sigl))) )
+- {
+- /* CONST_PI2extra is negative, so the result of the
+- subtraction can be larger than pi/2. This means
+- that the argument is actually in a different quadrant.
+- The correction is always < pi/2, so it can't overflow
+- into yet another quadrant. */
+- st0_tag = FPU_sub(REV|LOADED|TAG_Valid, (int)&CONST_PI2,
+- FULL_PRECISION);
+- q++;
+- }
++ else {
++ /* So far, the results are exact but based upon a 64 bit
++ precision approximation to pi/2. The technique used
++ now is equivalent to using an approximation to pi/2 which
++ is accurate to about 128 bits. */
++ if (((q > 0)
++ && (exponent(st0_ptr) <= exponent(&CONST_PI2extra) + 64))
++ || (q > 1)) {
++ /* This code gives the effect of having p/2 to better than
++ 128 bits precision. */
++
++ significand(&tmp) = q;
++ setexponent16(&tmp, 63);
++ FPU_normalize(&tmp); /* This must return TAG_Valid */
++ tmptag =
++ FPU_u_mul(&CONST_PI2extra, &tmp, &tmp,
++ FULL_PRECISION, SIGN_POS,
++ exponent(&CONST_PI2extra) +
++ exponent(&tmp));
++ setsign(&tmp, getsign(&CONST_PI2extra));
++ st0_tag = FPU_sub(LOADED | (tmptag & 0x0f), (int)&tmp,
++ FULL_PRECISION);
++ if ((exponent(st0_ptr) == exponent(&CONST_PI2)) &&
++ ((st0_ptr->sigh > CONST_PI2.sigh)
++ || ((st0_ptr->sigh == CONST_PI2.sigh)
++ && (st0_ptr->sigl > CONST_PI2.sigl)))) {
++ /* CONST_PI2extra is negative, so the result of the
++ subtraction can be larger than pi/2. This means
++ that the argument is actually in a different quadrant.
++ The correction is always < pi/2, so it can't overflow
++ into yet another quadrant. */
++ st0_tag =
++ FPU_sub(REV | LOADED | TAG_Valid,
++ (int)&CONST_PI2, FULL_PRECISION);
++ q++;
++ }
++ }
+ }
+- }
+ #endif /* BETTER_THAN_486 */
- /* post whitening but kw4 */
-- io[2] ^= CAMELLIA_SUBKEY_L(32);
-- io[3] ^= CAMELLIA_SUBKEY_R(32);
--
-- t0 = io[0];
-- t1 = io[1];
-- io[0] = io[2];
-- io[1] = io[3];
-- io[2] = t0;
-- io[3] = t1;
--
-- io_text[0] = cpu_to_be32(io[0]);
-- io_text[1] = cpu_to_be32(io[1]);
-- io_text[2] = cpu_to_be32(io[2]);
-- io_text[3] = cpu_to_be32(io[3]);
--
-- return;
-+ io[2] ^= SUBKEY_L(max);
-+ io[3] ^= SUBKEY_R(max);
-+ /* NB: io[0],[1] should be swapped with [2],[3] by caller! */
+- FPU_settag0(st0_tag);
+- control_word = old_cw;
+- partial_status = saved_status & ~SW_C2; /* Reduction complete. */
++ FPU_settag0(st0_tag);
++ control_word = old_cw;
++ partial_status = saved_status & ~SW_C2; /* Reduction complete. */
+
+- return (q & 3) | even;
++ return (q & 3) | even;
}
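
[Editorial aside, not part of the patch: the `trig_arg()` routine reformatted above reduces its argument modulo pi/2 with chop rounding, folds the remainder when the quadrant parity disagrees with the `even` flag, and returns the low two bits of the quotient. A minimal double-precision sketch of that bookkeeping, purely illustrative (the emulator works on 64-bit significands with a ~128-bit-accurate pi/2, which this sketch does not reproduce):]

```c
#include <assert.h>
#include <math.h>

/* Sketch of trig_arg()'s reduction in plain doubles.  PI2 here is an
 * ordinary double approximation of pi/2, unlike the emulator's
 * CONST_PI2/CONST_PI2extra pair. */
static const double PI2 = 1.57079632679489661923;

static int trig_arg_sketch(double x, int even, double *r)
{
	/* q = chop(x / (pi/2)), mirroring the RC_CHOP division */
	unsigned long long q = (unsigned long long)(x / PI2);

	*r = x - (double)q * PI2;
	/* fold the remainder when quadrant parity disagrees with `even`,
	 * mirroring the FPU_sub(REV | LOADED | TAG_Valid, &CONST_PI2, ...) */
	if ((even && !(q & 1)) || (!even && (q & 1)))
		*r = PI2 - *r;
	return (int)(q & 3) | even;
}
```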
-
--static void camellia_decrypt256(const u32 *subkey, __be32 *io_text)
-+static void camellia_do_decrypt(const u32 *subkey, u32 *io, unsigned i)
+ /* Convert a long to register */
+ static void convert_l2reg(long const *arg, int deststnr)
{
-- u32 il,ir,t0,t1; /* temporary valiables */
-+ u32 il,ir,t0,t1; /* temporary variables */
-
-- u32 io[4];
+- int tag;
+- long num = *arg;
+- u_char sign;
+- FPU_REG *dest = &st(deststnr);
-
-- io[0] = be32_to_cpu(io_text[0]);
-- io[1] = be32_to_cpu(io_text[1]);
-- io[2] = be32_to_cpu(io_text[2]);
-- io[3] = be32_to_cpu(io_text[3]);
+- if (num == 0)
+- {
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+- return;
+- }
-
-- /* pre whitening but absorb kw2*/
-- io[0] ^= CAMELLIA_SUBKEY_L(32);
-- io[1] ^= CAMELLIA_SUBKEY_R(32);
-+ /* pre whitening but absorb kw2 */
-+ io[0] ^= SUBKEY_L(i);
-+ io[1] ^= SUBKEY_R(i);
+- if (num > 0)
+- { sign = SIGN_POS; }
+- else
+- { num = -num; sign = SIGN_NEG; }
+-
+- dest->sigh = num;
+- dest->sigl = 0;
+- setexponent16(dest, 31);
+- tag = FPU_normalize(dest);
+- FPU_settagi(deststnr, tag);
+- setsign(dest, sign);
+- return;
+-}
++ int tag;
++ long num = *arg;
++ u_char sign;
++ FPU_REG *dest = &st(deststnr);
- /* main iteration */
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(31),CAMELLIA_SUBKEY_R(31),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(30),CAMELLIA_SUBKEY_R(30),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(29),CAMELLIA_SUBKEY_R(29),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(28),CAMELLIA_SUBKEY_R(28),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(27),CAMELLIA_SUBKEY_R(27),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(26),CAMELLIA_SUBKEY_R(26),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(25),CAMELLIA_SUBKEY_R(25),
-- CAMELLIA_SUBKEY_L(24),CAMELLIA_SUBKEY_R(24),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
-- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
-- io[0],io[1],il,ir,t0,t1);
--
-- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
-- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
-- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
-- t0,t1,il,ir);
--
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
-- io[0],io[1],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[0],io[1],
-- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
-- io[2],io[3],il,ir,t0,t1);
-- CAMELLIA_ROUNDSM(io[2],io[3],
-- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
-- io[0],io[1],il,ir,t0,t1);
-+#define ROUNDS(i) do { \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 7),SUBKEY_R(i + 7), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 6),SUBKEY_R(i + 6), \
-+ io[0],io[1],il,ir); \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 5),SUBKEY_R(i + 5), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 4),SUBKEY_R(i + 4), \
-+ io[0],io[1],il,ir); \
-+ CAMELLIA_ROUNDSM(io[0],io[1], \
-+ SUBKEY_L(i + 3),SUBKEY_R(i + 3), \
-+ io[2],io[3],il,ir); \
-+ CAMELLIA_ROUNDSM(io[2],io[3], \
-+ SUBKEY_L(i + 2),SUBKEY_R(i + 2), \
-+ io[0],io[1],il,ir); \
-+} while (0)
-+#define FLS(i) do { \
-+ CAMELLIA_FLS(io[0],io[1],io[2],io[3], \
-+ SUBKEY_L(i + 1),SUBKEY_R(i + 1), \
-+ SUBKEY_L(i + 0),SUBKEY_R(i + 0), \
-+ t0,t1,il,ir); \
-+} while (0)
++ if (num == 0) {
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++ return;
++ }
+
-+ if (i == 32) {
-+ ROUNDS(24);
-+ FLS(24);
++ if (num > 0) {
++ sign = SIGN_POS;
++ } else {
++ num = -num;
++ sign = SIGN_NEG;
+ }
-+ ROUNDS(16);
-+ FLS(16);
-+ ROUNDS(8);
-+ FLS(8);
-+ ROUNDS(0);
+
-+#undef ROUNDS
-+#undef FLS
++ dest->sigh = num;
++ dest->sigl = 0;
++ setexponent16(dest, 31);
++ tag = FPU_normalize(dest);
++ FPU_settagi(deststnr, tag);
++ setsign(dest, sign);
++ return;
++}
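
[Editorial aside, not part of the patch: `convert_l2reg()` above loads |num| into the high 32 bits of the significand with exponent 31, then normalizes until the top bit is set. A small sketch of that split, using out-parameters instead of the emulator's `FPU_REG`; valid for |num| below 2^31, matching the i386 `long`:]

```c
#include <assert.h>

/* Returns 0 for the TAG_Zero case (num == 0), 1 otherwise. */
static int l2reg_sketch(long num, int *sign, int *exp,
			unsigned long long *sig)
{
	if (num == 0)
		return 0;		/* FPU_copy_to_regi(&CONST_Z, ...) */
	*sign = (num > 0) ? +1 : -1;
	if (num < 0)
		num = -num;
	*sig = (unsigned long long)num << 32;	/* sigh = num, sigl = 0 */
	*exp = 31;				/* setexponent16(dest, 31) */
	while (!(*sig & 0x8000000000000000ULL)) {	/* FPU_normalize */
		*sig <<= 1;
		(*exp)--;
	}
	return 1;
}
```

The invariant after normalization is `value == (sig / 2^63) * 2^exp`, e.g. -6 becomes sign -1, exponent 2, significand 0xC000000000000000 (1.5 * 2^2).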
+
+ static void single_arg_error(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- if ( st0_tag == TAG_Empty )
+- FPU_stack_underflow(); /* Puts a QNaN in st(0) */
+- else if ( st0_tag == TW_NaN )
+- real_1op_NaN(st0_ptr); /* return with a NaN in st(0) */
++ if (st0_tag == TAG_Empty)
++ FPU_stack_underflow(); /* Puts a QNaN in st(0) */
++ else if (st0_tag == TW_NaN)
++ real_1op_NaN(st0_ptr); /* return with a NaN in st(0) */
+ #ifdef PARANOID
+- else
+- EXCEPTION(EX_INTERNAL|0x0112);
++ else
++ EXCEPTION(EX_INTERNAL | 0x0112);
+ #endif /* PARANOID */
+ }
- /* post whitening but kw4 */
-- io[2] ^= CAMELLIA_SUBKEY_L(0);
-- io[3] ^= CAMELLIA_SUBKEY_R(0);
--
-- t0 = io[0];
-- t1 = io[1];
-- io[0] = io[2];
-- io[1] = io[3];
-- io[2] = t0;
-- io[3] = t1;
-
-- io_text[0] = cpu_to_be32(io[0]);
-- io_text[1] = cpu_to_be32(io[1]);
-- io_text[2] = cpu_to_be32(io[2]);
-- io_text[3] = cpu_to_be32(io[3]);
+ static void single_arg_2_error(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- int isNaN;
-
-- return;
-+ io[2] ^= SUBKEY_L(0);
-+ io[3] ^= SUBKEY_R(0);
-+ /* NB: 0,1 should be swapped with 2,3 by caller! */
+- switch ( st0_tag )
+- {
+- case TW_NaN:
+- isNaN = (exponent(st0_ptr) == EXP_OVER) && (st0_ptr->sigh & 0x80000000);
+- if ( isNaN && !(st0_ptr->sigh & 0x40000000) ) /* Signaling ? */
+- {
+- EXCEPTION(EX_Invalid);
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- /* Convert to a QNaN */
+- st0_ptr->sigh |= 0x40000000;
+- push();
+- FPU_copy_to_reg0(st0_ptr, TAG_Special);
+- }
+- }
+- else if ( isNaN )
+- {
+- /* A QNaN */
+- push();
+- FPU_copy_to_reg0(st0_ptr, TAG_Special);
+- }
+- else
+- {
+- /* pseudoNaN or other unsupported */
+- EXCEPTION(EX_Invalid);
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
+- push();
+- FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
+- }
+- }
+- break; /* return with a NaN in st(0) */
++ int isNaN;
++
++ switch (st0_tag) {
++ case TW_NaN:
++ isNaN = (exponent(st0_ptr) == EXP_OVER)
++ && (st0_ptr->sigh & 0x80000000);
++ if (isNaN && !(st0_ptr->sigh & 0x40000000)) { /* Signaling ? */
++ EXCEPTION(EX_Invalid);
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ /* Convert to a QNaN */
++ st0_ptr->sigh |= 0x40000000;
++ push();
++ FPU_copy_to_reg0(st0_ptr, TAG_Special);
++ }
++ } else if (isNaN) {
++ /* A QNaN */
++ push();
++ FPU_copy_to_reg0(st0_ptr, TAG_Special);
++ } else {
++ /* pseudoNaN or other unsupported */
++ EXCEPTION(EX_Invalid);
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
++ push();
++ FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
++ }
++ }
++ break; /* return with a NaN in st(0) */
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x0112);
++ default:
++ EXCEPTION(EX_INTERNAL | 0x0112);
+ #endif /* PARANOID */
+- }
++ }
}
+-
+ /*---------------------------------------------------------------------------*/
-+struct camellia_ctx {
-+ int key_length;
-+ u32 key_table[CAMELLIA_TABLE_BYTE_LEN / sizeof(u32)];
-+};
-+
- static int
- camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
- unsigned int key_len)
-@@ -1688,7 +1027,7 @@ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-
- cctx->key_length = key_len;
+ static void f2xm1(FPU_REG *st0_ptr, u_char tag)
+ {
+- FPU_REG a;
++ FPU_REG a;
-- switch(key_len) {
-+ switch (key_len) {
- case 16:
- camellia_setup128(key, cctx->key_table);
- break;
-@@ -1698,68 +1037,59 @@ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
- case 32:
- camellia_setup256(key, cctx->key_table);
- break;
-- default:
-- break;
- }
+- clear_C1();
++ clear_C1();
- return 0;
- }
+- if ( tag == TAG_Valid )
+- {
+- /* For an 80486 FPU, the result is undefined if the arg is >= 1.0 */
+- if ( exponent(st0_ptr) < 0 )
+- {
+- denormal_arg:
++ if (tag == TAG_Valid) {
++ /* For an 80486 FPU, the result is undefined if the arg is >= 1.0 */
++ if (exponent(st0_ptr) < 0) {
++ denormal_arg:
--
- static void camellia_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- {
- const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
- const __be32 *src = (const __be32 *)in;
- __be32 *dst = (__be32 *)out;
+- FPU_to_exp16(st0_ptr, &a);
++ FPU_to_exp16(st0_ptr, &a);
-- __be32 tmp[4];
-+ u32 tmp[4];
+- /* poly_2xm1(x) requires 0 < st(0) < 1. */
+- poly_2xm1(getsign(st0_ptr), &a, st0_ptr);
++ /* poly_2xm1(x) requires 0 < st(0) < 1. */
++ poly_2xm1(getsign(st0_ptr), &a, st0_ptr);
++ }
++ set_precision_flag_up(); /* 80486 appears to always do this */
++ return;
+ }
+- set_precision_flag_up(); /* 80486 appears to always do this */
+- return;
+- }
-- memcpy(tmp, src, CAMELLIA_BLOCK_SIZE);
-+ tmp[0] = be32_to_cpu(src[0]);
-+ tmp[1] = be32_to_cpu(src[1]);
-+ tmp[2] = be32_to_cpu(src[2]);
-+ tmp[3] = be32_to_cpu(src[3]);
+- if ( tag == TAG_Zero )
+- return;
++ if (tag == TAG_Zero)
++ return;
-- switch (cctx->key_length) {
-- case 16:
-- camellia_encrypt128(cctx->key_table, tmp);
-- break;
-- case 24:
-- /* fall through */
-- case 32:
-- camellia_encrypt256(cctx->key_table, tmp);
-- break;
-- default:
-- break;
-- }
-+ camellia_do_encrypt(cctx->key_table, tmp,
-+ cctx->key_length == 16 ? 24 : 32 /* for key lengths of 24 and 32 */
-+ );
+- if ( tag == TAG_Special )
+- tag = FPU_Special(st0_ptr);
++ if (tag == TAG_Special)
++ tag = FPU_Special(st0_ptr);
-- memcpy(dst, tmp, CAMELLIA_BLOCK_SIZE);
-+ /* do_encrypt returns 0,1 swapped with 2,3 */
-+ dst[0] = cpu_to_be32(tmp[2]);
-+ dst[1] = cpu_to_be32(tmp[3]);
-+ dst[2] = cpu_to_be32(tmp[0]);
-+ dst[3] = cpu_to_be32(tmp[1]);
+- switch ( tag )
+- {
+- case TW_Denormal:
+- if ( denormal_operand() < 0 )
+- return;
+- goto denormal_arg;
+- case TW_Infinity:
+- if ( signnegative(st0_ptr) )
+- {
+- /* -infinity gives -1 (p16-10) */
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+- setnegative(st0_ptr);
++ switch (tag) {
++ case TW_Denormal:
++ if (denormal_operand() < 0)
++ return;
++ goto denormal_arg;
++ case TW_Infinity:
++ if (signnegative(st0_ptr)) {
++ /* -infinity gives -1 (p16-10) */
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ setnegative(st0_ptr);
++ }
++ return;
++ default:
++ single_arg_error(st0_ptr, tag);
+ }
+- return;
+- default:
+- single_arg_error(st0_ptr, tag);
+- }
}
-
- static void camellia_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ static void fptan(FPU_REG *st0_ptr, u_char st0_tag)
{
- const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
- const __be32 *src = (const __be32 *)in;
- __be32 *dst = (__be32 *)out;
+- FPU_REG *st_new_ptr;
+- int q;
+- u_char arg_sign = getsign(st0_ptr);
+-
+- /* Stack underflow has higher priority */
+- if ( st0_tag == TAG_Empty )
+- {
+- FPU_stack_underflow(); /* Puts a QNaN in st(0) */
+- if ( control_word & CW_Invalid )
+- {
+- st_new_ptr = &st(-1);
+- push();
+- FPU_stack_underflow(); /* Puts a QNaN in the new st(0) */
++ FPU_REG *st_new_ptr;
++ int q;
++ u_char arg_sign = getsign(st0_ptr);
++
++ /* Stack underflow has higher priority */
++ if (st0_tag == TAG_Empty) {
++ FPU_stack_underflow(); /* Puts a QNaN in st(0) */
++ if (control_word & CW_Invalid) {
++ st_new_ptr = &st(-1);
++ push();
++ FPU_stack_underflow(); /* Puts a QNaN in the new st(0) */
++ }
++ return;
+ }
+- return;
+- }
+-
+- if ( STACK_OVERFLOW )
+- { FPU_stack_overflow(); return; }
+-
+- if ( st0_tag == TAG_Valid )
+- {
+- if ( exponent(st0_ptr) > -40 )
+- {
+- if ( (q = trig_arg(st0_ptr, 0)) == -1 )
+- {
+- /* Operand is out of range */
+- return;
+- }
+-
+- poly_tan(st0_ptr);
+- setsign(st0_ptr, (q & 1) ^ (arg_sign != 0));
+- set_precision_flag_up(); /* We do not really know if up or down */
++
++ if (STACK_OVERFLOW) {
++ FPU_stack_overflow();
++ return;
+ }
+- else
+- {
+- /* For a small arg, the result == the argument */
+- /* Underflow may happen */
-- __be32 tmp[4];
-+ u32 tmp[4];
+- denormal_arg:
++ if (st0_tag == TAG_Valid) {
++ if (exponent(st0_ptr) > -40) {
++ if ((q = trig_arg(st0_ptr, 0)) == -1) {
++ /* Operand is out of range */
++ return;
++ }
++
++ poly_tan(st0_ptr);
++ setsign(st0_ptr, (q & 1) ^ (arg_sign != 0));
++ set_precision_flag_up(); /* We do not really know if up or down */
++ } else {
++ /* For a small arg, the result == the argument */
++ /* Underflow may happen */
++
++ denormal_arg:
++
++ FPU_to_exp16(st0_ptr, st0_ptr);
-- memcpy(tmp, src, CAMELLIA_BLOCK_SIZE);
-+ tmp[0] = be32_to_cpu(src[0]);
-+ tmp[1] = be32_to_cpu(src[1]);
-+ tmp[2] = be32_to_cpu(src[2]);
-+ tmp[3] = be32_to_cpu(src[3]);
+- FPU_to_exp16(st0_ptr, st0_ptr);
+-
+- st0_tag = FPU_round(st0_ptr, 1, 0, FULL_PRECISION, arg_sign);
+- FPU_settag0(st0_tag);
++ st0_tag =
++ FPU_round(st0_ptr, 1, 0, FULL_PRECISION, arg_sign);
++ FPU_settag0(st0_tag);
++ }
++ push();
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ return;
+ }
+- push();
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+- return;
+- }
+-
+- if ( st0_tag == TAG_Zero )
+- {
+- push();
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+- setcc(0);
+- return;
+- }
+-
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+-
+- if ( st0_tag == TW_Denormal )
+- {
+- if ( denormal_operand() < 0 )
+- return;
-- switch (cctx->key_length) {
-- case 16:
-- camellia_decrypt128(cctx->key_table, tmp);
-- break;
-- case 24:
-- /* fall through */
-- case 32:
-- camellia_decrypt256(cctx->key_table, tmp);
-- break;
-- default:
-- break;
-- }
-+ camellia_do_decrypt(cctx->key_table, tmp,
-+ cctx->key_length == 16 ? 24 : 32 /* for key lengths of 24 and 32 */
-+ );
+- goto denormal_arg;
+- }
+-
+- if ( st0_tag == TW_Infinity )
+- {
+- /* The 80486 treats infinity as an invalid operand */
+- if ( arith_invalid(0) >= 0 )
+- {
+- st_new_ptr = &st(-1);
+- push();
+- arith_invalid(0);
++ if (st0_tag == TAG_Zero) {
++ push();
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ setcc(0);
++ return;
++ }
++
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++
++ if (st0_tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return;
++
++ goto denormal_arg;
+ }
+- return;
+- }
-- memcpy(dst, tmp, CAMELLIA_BLOCK_SIZE);
-+ /* do_decrypt returns 0,1 swapped with 2,3 */
-+ dst[0] = cpu_to_be32(tmp[2]);
-+ dst[1] = cpu_to_be32(tmp[3]);
-+ dst[2] = cpu_to_be32(tmp[0]);
-+ dst[3] = cpu_to_be32(tmp[1]);
- }
+- single_arg_2_error(st0_ptr, st0_tag);
+-}
++ if (st0_tag == TW_Infinity) {
++ /* The 80486 treats infinity as an invalid operand */
++ if (arith_invalid(0) >= 0) {
++ st_new_ptr = &st(-1);
++ push();
++ arith_invalid(0);
++ }
++ return;
++ }
--
- static struct crypto_alg camellia_alg = {
- .cra_name = "camellia",
- .cra_driver_name = "camellia-generic",
-@@ -1786,16 +1116,13 @@ static int __init camellia_init(void)
- return crypto_register_alg(&camellia_alg);
- }
++ single_arg_2_error(st0_ptr, st0_tag);
++}
--
- static void __exit camellia_fini(void)
+ static void fxtract(FPU_REG *st0_ptr, u_char st0_tag)
{
- crypto_unregister_alg(&camellia_alg);
- }
-
+- FPU_REG *st_new_ptr;
+- u_char sign;
+- register FPU_REG *st1_ptr = st0_ptr; /* anticipate */
-
- module_init(camellia_init);
- module_exit(camellia_fini);
-
+- if ( STACK_OVERFLOW )
+- { FPU_stack_overflow(); return; }
-
- MODULE_DESCRIPTION("Camellia Cipher Algorithm");
- MODULE_LICENSE("GPL");
-diff --git a/crypto/cast6.c b/crypto/cast6.c
-index 136ab6d..5fd9420 100644
---- a/crypto/cast6.c
-+++ b/crypto/cast6.c
-@@ -369,7 +369,7 @@ static const u8 Tr[4][8] = {
- };
-
- /* forward octave */
--static inline void W(u32 *key, unsigned int i) {
-+static void W(u32 *key, unsigned int i) {
- u32 I;
- key[6] ^= F1(key[7], Tr[i % 4][0], Tm[i][0]);
- key[5] ^= F2(key[6], Tr[i % 4][1], Tm[i][1]);
-@@ -428,7 +428,7 @@ static int cast6_setkey(struct crypto_tfm *tfm, const u8 *in_key,
- }
+- clear_C1();
+-
+- if ( st0_tag == TAG_Valid )
+- {
+- long e;
+-
+- push();
+- sign = getsign(st1_ptr);
+- reg_copy(st1_ptr, st_new_ptr);
+- setexponent16(st_new_ptr, exponent(st_new_ptr));
+-
+- denormal_arg:
+-
+- e = exponent16(st_new_ptr);
+- convert_l2reg(&e, 1);
+- setexponentpos(st_new_ptr, 0);
+- setsign(st_new_ptr, sign);
+- FPU_settag0(TAG_Valid); /* Needed if arg was a denormal */
+- return;
+- }
+- else if ( st0_tag == TAG_Zero )
+- {
+- sign = getsign(st0_ptr);
+-
+- if ( FPU_divide_by_zero(0, SIGN_NEG) < 0 )
+- return;
++ FPU_REG *st_new_ptr;
++ u_char sign;
++ register FPU_REG *st1_ptr = st0_ptr; /* anticipate */
+
+- push();
+- FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
+- setsign(st_new_ptr, sign);
+- return;
+- }
++ if (STACK_OVERFLOW) {
++ FPU_stack_overflow();
++ return;
++ }
- /*forward quad round*/
--static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
-+static void Q (u32 * block, u8 * Kr, u32 * Km) {
- u32 I;
- block[2] ^= F1(block[3], Kr[0], Km[0]);
- block[1] ^= F2(block[2], Kr[1], Km[1]);
-@@ -437,7 +437,7 @@ static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
- }
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
++ clear_C1();
- /*reverse quad round*/
--static inline void QBAR (u32 * block, u8 * Kr, u32 * Km) {
-+static void QBAR (u32 * block, u8 * Kr, u32 * Km) {
- u32 I;
- block[3] ^= F1(block[0], Kr[3], Km[3]);
- block[0] ^= F3(block[1], Kr[2], Km[2]);
-diff --git a/crypto/cbc.c b/crypto/cbc.c
-index 1f2649e..6affff8 100644
---- a/crypto/cbc.c
-+++ b/crypto/cbc.c
-@@ -14,13 +14,13 @@
- #include <linux/err.h>
- #include <linux/init.h>
- #include <linux/kernel.h>
-+#include <linux/log2.h>
- #include <linux/module.h>
- #include <linux/scatterlist.h>
- #include <linux/slab.h>
+- if ( st0_tag == TW_Denormal )
+- {
+- if (denormal_operand() < 0 )
+- return;
++ if (st0_tag == TAG_Valid) {
++ long e;
- struct crypto_cbc_ctx {
- struct crypto_cipher *child;
-- void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
- };
+- push();
+- sign = getsign(st1_ptr);
+- FPU_to_exp16(st1_ptr, st_new_ptr);
+- goto denormal_arg;
+- }
+- else if ( st0_tag == TW_Infinity )
+- {
+- sign = getsign(st0_ptr);
+- setpositive(st0_ptr);
+- push();
+- FPU_copy_to_reg0(&CONST_INF, TAG_Special);
+- setsign(st_new_ptr, sign);
+- return;
+- }
+- else if ( st0_tag == TW_NaN )
+- {
+- if ( real_1op_NaN(st0_ptr) < 0 )
+- return;
++ push();
++ sign = getsign(st1_ptr);
++ reg_copy(st1_ptr, st_new_ptr);
++ setexponent16(st_new_ptr, exponent(st_new_ptr));
++
++ denormal_arg:
++
++ e = exponent16(st_new_ptr);
++ convert_l2reg(&e, 1);
++ setexponentpos(st_new_ptr, 0);
++ setsign(st_new_ptr, sign);
++ FPU_settag0(TAG_Valid); /* Needed if arg was a denormal */
++ return;
++ } else if (st0_tag == TAG_Zero) {
++ sign = getsign(st0_ptr);
++
++ if (FPU_divide_by_zero(0, SIGN_NEG) < 0)
++ return;
- static int crypto_cbc_setkey(struct crypto_tfm *parent, const u8 *key,
-@@ -41,9 +41,7 @@ static int crypto_cbc_setkey(struct crypto_tfm *parent, const u8 *key,
+- push();
+- FPU_copy_to_reg0(st0_ptr, TAG_Special);
+- return;
+- }
+- else if ( st0_tag == TAG_Empty )
+- {
+- /* Is this the correct behaviour? */
+- if ( control_word & EX_Invalid )
+- {
+- FPU_stack_underflow();
+- push();
+- FPU_stack_underflow();
++ push();
++ FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
++ setsign(st_new_ptr, sign);
++ return;
++ }
++
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++
++ if (st0_tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return;
++
++ push();
++ sign = getsign(st1_ptr);
++ FPU_to_exp16(st1_ptr, st_new_ptr);
++ goto denormal_arg;
++ } else if (st0_tag == TW_Infinity) {
++ sign = getsign(st0_ptr);
++ setpositive(st0_ptr);
++ push();
++ FPU_copy_to_reg0(&CONST_INF, TAG_Special);
++ setsign(st_new_ptr, sign);
++ return;
++ } else if (st0_tag == TW_NaN) {
++ if (real_1op_NaN(st0_ptr) < 0)
++ return;
++
++ push();
++ FPU_copy_to_reg0(st0_ptr, TAG_Special);
++ return;
++ } else if (st0_tag == TAG_Empty) {
++ /* Is this the correct behaviour? */
++ if (control_word & EX_Invalid) {
++ FPU_stack_underflow();
++ push();
++ FPU_stack_underflow();
++ } else
++ EXCEPTION(EX_StackUnder);
+ }
+- else
+- EXCEPTION(EX_StackUnder);
+- }
+ #ifdef PARANOID
+- else
+- EXCEPTION(EX_INTERNAL | 0x119);
++ else
++ EXCEPTION(EX_INTERNAL | 0x119);
+ #endif /* PARANOID */
+ }
- static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
+-
+ static void fdecstp(void)
{
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_encrypt;
-@@ -54,7 +52,7 @@ static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
- u8 *iv = walk->iv;
-
- do {
-- xor(iv, src, bsize);
-+ crypto_xor(iv, src, bsize);
- fn(crypto_cipher_tfm(tfm), dst, iv);
- memcpy(iv, dst, bsize);
+- clear_C1();
+- top--;
++ clear_C1();
++ top--;
+ }
-@@ -67,9 +65,7 @@ static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
+ static void fincstp(void)
+ {
+- clear_C1();
+- top++;
++ clear_C1();
++ top++;
+ }
- static int crypto_cbc_encrypt_inplace(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
+-
+ static void fsqrt_(FPU_REG *st0_ptr, u_char st0_tag)
{
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_encrypt;
-@@ -79,7 +75,7 @@ static int crypto_cbc_encrypt_inplace(struct blkcipher_desc *desc,
- u8 *iv = walk->iv;
+- int expon;
+-
+- clear_C1();
+-
+- if ( st0_tag == TAG_Valid )
+- {
+- u_char tag;
+-
+- if (signnegative(st0_ptr))
+- {
+- arith_invalid(0); /* sqrt(negative) is invalid */
+- return;
+- }
++ int expon;
++
++ clear_C1();
- do {
-- xor(src, iv, bsize);
-+ crypto_xor(src, iv, bsize);
- fn(crypto_cipher_tfm(tfm), src, src);
- iv = src;
+- /* make st(0) in [1.0 .. 4.0) */
+- expon = exponent(st0_ptr);
+-
+- denormal_arg:
+-
+- setexponent16(st0_ptr, (expon & 1));
+-
+- /* Do the computation, the sign of the result will be positive. */
+- tag = wm_sqrt(st0_ptr, 0, 0, control_word, SIGN_POS);
+- addexponent(st0_ptr, expon >> 1);
+- FPU_settag0(tag);
+- return;
+- }
+-
+- if ( st0_tag == TAG_Zero )
+- return;
+-
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+-
+- if ( st0_tag == TW_Infinity )
+- {
+- if ( signnegative(st0_ptr) )
+- arith_invalid(0); /* sqrt(-Infinity) is invalid */
+- return;
+- }
+- else if ( st0_tag == TW_Denormal )
+- {
+- if (signnegative(st0_ptr))
+- {
+- arith_invalid(0); /* sqrt(negative) is invalid */
+- return;
++ if (st0_tag == TAG_Valid) {
++ u_char tag;
++
++ if (signnegative(st0_ptr)) {
++ arith_invalid(0); /* sqrt(negative) is invalid */
++ return;
++ }
++
++ /* make st(0) in [1.0 .. 4.0) */
++ expon = exponent(st0_ptr);
++
++ denormal_arg:
++
++ setexponent16(st0_ptr, (expon & 1));
++
++ /* Do the computation, the sign of the result will be positive. */
++ tag = wm_sqrt(st0_ptr, 0, 0, control_word, SIGN_POS);
++ addexponent(st0_ptr, expon >> 1);
++ FPU_settag0(tag);
++ return;
+ }
-@@ -99,7 +95,6 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
- struct crypto_blkcipher *tfm = desc->tfm;
- struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
- struct crypto_cipher *child = ctx->child;
-- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
- int err;
+- if ( denormal_operand() < 0 )
+- return;
++ if (st0_tag == TAG_Zero)
++ return;
- blkcipher_walk_init(&walk, dst, src, nbytes);
-@@ -107,11 +102,9 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
+- FPU_to_exp16(st0_ptr, st0_ptr);
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
- while ((nbytes = walk.nbytes)) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
-- nbytes = crypto_cbc_encrypt_inplace(desc, &walk, child,
-- xor);
-+ nbytes = crypto_cbc_encrypt_inplace(desc, &walk, child);
- else
-- nbytes = crypto_cbc_encrypt_segment(desc, &walk, child,
-- xor);
-+ nbytes = crypto_cbc_encrypt_segment(desc, &walk, child);
- err = blkcipher_walk_done(desc, &walk, nbytes);
- }
+- expon = exponent16(st0_ptr);
++ if (st0_tag == TW_Infinity) {
++ if (signnegative(st0_ptr))
++ arith_invalid(0); /* sqrt(-Infinity) is invalid */
++ return;
++ } else if (st0_tag == TW_Denormal) {
++ if (signnegative(st0_ptr)) {
++ arith_invalid(0); /* sqrt(negative) is invalid */
++ return;
++ }
-@@ -120,9 +113,7 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
+- goto denormal_arg;
+- }
++ if (denormal_operand() < 0)
++ return;
- static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
- {
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_decrypt;
-@@ -134,7 +125,7 @@ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
+- single_arg_error(st0_ptr, st0_tag);
++ FPU_to_exp16(st0_ptr, st0_ptr);
- do {
- fn(crypto_cipher_tfm(tfm), dst, src);
-- xor(dst, iv, bsize);
-+ crypto_xor(dst, iv, bsize);
- iv = src;
+-}
++ expon = exponent16(st0_ptr);
++
++ goto denormal_arg;
++ }
- src += bsize;
-@@ -148,34 +139,29 @@ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
++ single_arg_error(st0_ptr, st0_tag);
++
++}
- static int crypto_cbc_decrypt_inplace(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
+ static void frndint_(FPU_REG *st0_ptr, u_char st0_tag)
{
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_decrypt;
- int bsize = crypto_cipher_blocksize(tfm);
-- unsigned long alignmask = crypto_cipher_alignmask(tfm);
- unsigned int nbytes = walk->nbytes;
- u8 *src = walk->src.virt.addr;
-- u8 stack[bsize + alignmask];
-- u8 *first_iv = (u8 *)ALIGN((unsigned long)stack, alignmask + 1);
--
-- memcpy(first_iv, walk->iv, bsize);
-+ u8 last_iv[bsize];
+- int flags, tag;
++ int flags, tag;
- /* Start of the last block. */
-- src += nbytes - nbytes % bsize - bsize;
-- memcpy(walk->iv, src, bsize);
-+ src += nbytes - (nbytes & (bsize - 1)) - bsize;
-+ memcpy(last_iv, src, bsize);
-
- for (;;) {
- fn(crypto_cipher_tfm(tfm), src, src);
- if ((nbytes -= bsize) < bsize)
- break;
-- xor(src, src - bsize, bsize);
-+ crypto_xor(src, src - bsize, bsize);
- src -= bsize;
- }
+- if ( st0_tag == TAG_Valid )
+- {
+- u_char sign;
++ if (st0_tag == TAG_Valid) {
++ u_char sign;
-- xor(src, first_iv, bsize);
-+ crypto_xor(src, walk->iv, bsize);
-+ memcpy(walk->iv, last_iv, bsize);
+- denormal_arg:
++ denormal_arg:
- return nbytes;
- }
-@@ -188,7 +174,6 @@ static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
- struct crypto_blkcipher *tfm = desc->tfm;
- struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
- struct crypto_cipher *child = ctx->child;
-- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
- int err;
+- sign = getsign(st0_ptr);
++ sign = getsign(st0_ptr);
- blkcipher_walk_init(&walk, dst, src, nbytes);
-@@ -196,48 +181,15 @@ static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
+- if (exponent(st0_ptr) > 63)
+- return;
++ if (exponent(st0_ptr) > 63)
++ return;
++
++ if (st0_tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return;
++ }
++
++ /* Fortunately, this can't overflow to 2^64 */
++ if ((flags = FPU_round_to_int(st0_ptr, st0_tag)))
++ set_precision_flag(flags);
- while ((nbytes = walk.nbytes)) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
-- nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child,
-- xor);
-+ nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child);
- else
-- nbytes = crypto_cbc_decrypt_segment(desc, &walk, child,
-- xor);
-+ nbytes = crypto_cbc_decrypt_segment(desc, &walk, child);
- err = blkcipher_walk_done(desc, &walk, nbytes);
+- if ( st0_tag == TW_Denormal )
+- {
+- if (denormal_operand() < 0 )
+- return;
++ setexponent16(st0_ptr, 63);
++ tag = FPU_normalize(st0_ptr);
++ setsign(st0_ptr, sign);
++ FPU_settag0(tag);
++ return;
}
- return err;
- }
-
--static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
--{
-- do {
-- *a++ ^= *b++;
-- } while (--bs);
--}
+- /* Fortunately, this can't overflow to 2^64 */
+- if ( (flags = FPU_round_to_int(st0_ptr, st0_tag)) )
+- set_precision_flag(flags);
-
--static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
--{
-- u32 *a = (u32 *)dst;
-- u32 *b = (u32 *)src;
+- setexponent16(st0_ptr, 63);
+- tag = FPU_normalize(st0_ptr);
+- setsign(st0_ptr, sign);
+- FPU_settag0(tag);
+- return;
+- }
-
-- do {
-- *a++ ^= *b++;
-- } while ((bs -= 4));
--}
+- if ( st0_tag == TAG_Zero )
+- return;
-
--static void xor_64(u8 *a, const u8 *b, unsigned int bs)
--{
-- ((u32 *)a)[0] ^= ((u32 *)b)[0];
-- ((u32 *)a)[1] ^= ((u32 *)b)[1];
--}
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
-
--static void xor_128(u8 *a, const u8 *b, unsigned int bs)
--{
-- ((u32 *)a)[0] ^= ((u32 *)b)[0];
-- ((u32 *)a)[1] ^= ((u32 *)b)[1];
-- ((u32 *)a)[2] ^= ((u32 *)b)[2];
-- ((u32 *)a)[3] ^= ((u32 *)b)[3];
+- if ( st0_tag == TW_Denormal )
+- goto denormal_arg;
+- else if ( st0_tag == TW_Infinity )
+- return;
+- else
+- single_arg_error(st0_ptr, st0_tag);
-}
--
- static int crypto_cbc_init_tfm(struct crypto_tfm *tfm)
- {
- struct crypto_instance *inst = (void *)tfm->__crt_alg;
-@@ -245,22 +197,6 @@ static int crypto_cbc_init_tfm(struct crypto_tfm *tfm)
- struct crypto_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
- struct crypto_cipher *cipher;
++ if (st0_tag == TAG_Zero)
++ return;
-- switch (crypto_tfm_alg_blocksize(tfm)) {
-- case 8:
-- ctx->xor = xor_64;
-- break;
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++
++ if (st0_tag == TW_Denormal)
++ goto denormal_arg;
++ else if (st0_tag == TW_Infinity)
++ return;
++ else
++ single_arg_error(st0_ptr, st0_tag);
++}
+
+ static int fsin(FPU_REG *st0_ptr, u_char tag)
+ {
+- u_char arg_sign = getsign(st0_ptr);
-
-- case 16:
-- ctx->xor = xor_128;
-- break;
+- if ( tag == TAG_Valid )
+- {
+- int q;
-
-- default:
-- if (crypto_tfm_alg_blocksize(tfm) % 4)
-- ctx->xor = xor_byte;
-- else
-- ctx->xor = xor_quad;
-- }
+- if ( exponent(st0_ptr) > -40 )
+- {
+- if ( (q = trig_arg(st0_ptr, 0)) == -1 )
+- {
+- /* Operand is out of range */
+- return 1;
+- }
-
- cipher = crypto_spawn_cipher(spawn);
- if (IS_ERR(cipher))
- return PTR_ERR(cipher);
-@@ -290,6 +226,10 @@ static struct crypto_instance *crypto_cbc_alloc(struct rtattr **tb)
- if (IS_ERR(alg))
- return ERR_PTR(PTR_ERR(alg));
-
-+ inst = ERR_PTR(-EINVAL);
-+ if (!is_power_of_2(alg->cra_blocksize))
-+ goto out_put_alg;
-+
- inst = crypto_alloc_instance("cbc", alg);
- if (IS_ERR(inst))
- goto out_put_alg;
-@@ -300,8 +240,9 @@ static struct crypto_instance *crypto_cbc_alloc(struct rtattr **tb)
- inst->alg.cra_alignmask = alg->cra_alignmask;
- inst->alg.cra_type = &crypto_blkcipher_type;
-
-- if (!(alg->cra_blocksize % 4))
-- inst->alg.cra_alignmask |= 3;
-+ /* We access the data as u32s when xoring. */
-+ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
-+
- inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
- inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
- inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
-diff --git a/crypto/ccm.c b/crypto/ccm.c
-new file mode 100644
-index 0000000..7cf7e5a
---- /dev/null
-+++ b/crypto/ccm.c
-@@ -0,0 +1,889 @@
-+/*
-+ * CCM: Counter with CBC-MAC
-+ *
-+ * (C) Copyright IBM Corp. 2007 - Joy Latten <latten at us.ibm.com>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
-+ *
-+ */
-+
-+#include <crypto/internal/aead.h>
-+#include <crypto/internal/skcipher.h>
-+#include <crypto/scatterwalk.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/slab.h>
-+
-+#include "internal.h"
-+
-+struct ccm_instance_ctx {
-+ struct crypto_skcipher_spawn ctr;
-+ struct crypto_spawn cipher;
-+};
-+
-+struct crypto_ccm_ctx {
-+ struct crypto_cipher *cipher;
-+ struct crypto_ablkcipher *ctr;
-+};
-+
-+struct crypto_rfc4309_ctx {
-+ struct crypto_aead *child;
-+ u8 nonce[3];
-+};
-+
-+struct crypto_ccm_req_priv_ctx {
-+ u8 odata[16];
-+ u8 idata[16];
-+ u8 auth_tag[16];
-+ u32 ilen;
-+ u32 flags;
-+ struct scatterlist src[2];
-+ struct scatterlist dst[2];
-+ struct ablkcipher_request abreq;
-+};
+- poly_sine(st0_ptr);
+-
+- if (q & 2)
+- changesign(st0_ptr);
+-
+- setsign(st0_ptr, getsign(st0_ptr) ^ arg_sign);
+-
+- /* We do not really know if up or down */
+- set_precision_flag_up();
+- return 0;
++ u_char arg_sign = getsign(st0_ptr);
+
-+static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
-+ struct aead_request *req)
-+{
-+ unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req));
++ if (tag == TAG_Valid) {
++ int q;
+
-+ return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
-+}
++ if (exponent(st0_ptr) > -40) {
++ if ((q = trig_arg(st0_ptr, 0)) == -1) {
++ /* Operand is out of range */
++ return 1;
++ }
+
-+static int set_msg_len(u8 *block, unsigned int msglen, int csize)
-+{
-+ __be32 data;
++ poly_sine(st0_ptr);
+
-+ memset(block, 0, csize);
-+ block += csize;
++ if (q & 2)
++ changesign(st0_ptr);
+
-+ if (csize >= 4)
-+ csize = 4;
-+ else if (msglen > (1 << (8 * csize)))
-+ return -EOVERFLOW;
++ setsign(st0_ptr, getsign(st0_ptr) ^ arg_sign);
+
-+ data = cpu_to_be32(msglen);
-+ memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
++ /* We do not really know if up or down */
++ set_precision_flag_up();
++ return 0;
++ } else {
++ /* For a small arg, the result == the argument */
++ set_precision_flag_up(); /* Must be up. */
++ return 0;
++ }
+ }
+- else
+- {
+- /* For a small arg, the result == the argument */
+- set_precision_flag_up(); /* Must be up. */
+- return 0;
+
-+ return 0;
-+}
++ if (tag == TAG_Zero) {
++ setcc(0);
++ return 0;
+ }
+- }
+-
+- if ( tag == TAG_Zero )
+- {
+- setcc(0);
+- return 0;
+- }
+-
+- if ( tag == TAG_Special )
+- tag = FPU_Special(st0_ptr);
+-
+- if ( tag == TW_Denormal )
+- {
+- if ( denormal_operand() < 0 )
+- return 1;
+-
+- /* For a small arg, the result == the argument */
+- /* Underflow may happen */
+- FPU_to_exp16(st0_ptr, st0_ptr);
+-
+- tag = FPU_round(st0_ptr, 1, 0, FULL_PRECISION, arg_sign);
+-
+- FPU_settag0(tag);
+-
+- return 0;
+- }
+- else if ( tag == TW_Infinity )
+- {
+- /* The 80486 treats infinity as an invalid operand */
+- arith_invalid(0);
+- return 1;
+- }
+- else
+- {
+- single_arg_error(st0_ptr, tag);
+- return 1;
+- }
+-}
+
++ if (tag == TAG_Special)
++ tag = FPU_Special(st0_ptr);
+
-+static int crypto_ccm_setkey(struct crypto_aead *aead, const u8 *key,
-+ unsigned int keylen)
-+{
-+ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_ablkcipher *ctr = ctx->ctr;
-+ struct crypto_cipher *tfm = ctx->cipher;
-+ int err = 0;
++ if (tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return 1;
+
-+ crypto_ablkcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK);
-+ crypto_ablkcipher_set_flags(ctr, crypto_aead_get_flags(aead) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_ablkcipher_setkey(ctr, key, keylen);
-+ crypto_aead_set_flags(aead, crypto_ablkcipher_get_flags(ctr) &
-+ CRYPTO_TFM_RES_MASK);
-+ if (err)
-+ goto out;
++ /* For a small arg, the result == the argument */
++ /* Underflow may happen */
++ FPU_to_exp16(st0_ptr, st0_ptr);
+
-+ crypto_cipher_clear_flags(tfm, CRYPTO_TFM_REQ_MASK);
-+ crypto_cipher_set_flags(tfm, crypto_aead_get_flags(aead) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_cipher_setkey(tfm, key, keylen);
-+ crypto_aead_set_flags(aead, crypto_cipher_get_flags(tfm) &
-+ CRYPTO_TFM_RES_MASK);
++ tag = FPU_round(st0_ptr, 1, 0, FULL_PRECISION, arg_sign);
+
-+out:
-+ return err;
-+}
++ FPU_settag0(tag);
+
-+static int crypto_ccm_setauthsize(struct crypto_aead *tfm,
-+ unsigned int authsize)
-+{
-+ switch (authsize) {
-+ case 4:
-+ case 6:
-+ case 8:
-+ case 10:
-+ case 12:
-+ case 14:
-+ case 16:
-+ break;
-+ default:
-+ return -EINVAL;
++ return 0;
++ } else if (tag == TW_Infinity) {
++ /* The 80486 treats infinity as an invalid operand */
++ arith_invalid(0);
++ return 1;
++ } else {
++ single_arg_error(st0_ptr, tag);
++ return 1;
+ }
-+
-+ return 0;
+}
+
+ static int f_cos(FPU_REG *st0_ptr, u_char tag)
+ {
+- u_char st0_sign;
+-
+- st0_sign = getsign(st0_ptr);
+-
+- if ( tag == TAG_Valid )
+- {
+- int q;
+-
+- if ( exponent(st0_ptr) > -40 )
+- {
+- if ( (exponent(st0_ptr) < 0)
+- || ((exponent(st0_ptr) == 0)
+- && (significand(st0_ptr) <= 0xc90fdaa22168c234LL)) )
+- {
+- poly_cos(st0_ptr);
+-
+- /* We do not really know if up or down */
+- set_precision_flag_down();
+-
+- return 0;
+- }
+- else if ( (q = trig_arg(st0_ptr, FCOS)) != -1 )
+- {
+- poly_sine(st0_ptr);
+-
+- if ((q+1) & 2)
+- changesign(st0_ptr);
+-
+- /* We do not really know if up or down */
+- set_precision_flag_down();
+-
+- return 0;
+- }
+- else
+- {
+- /* Operand is out of range */
+- return 1;
+- }
+- }
+- else
+- {
+- denormal_arg:
++ u_char st0_sign;
+
-+static int format_input(u8 *info, struct aead_request *req,
-+ unsigned int cryptlen)
-+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ unsigned int lp = req->iv[0];
-+ unsigned int l = lp + 1;
-+ unsigned int m;
-+
-+ m = crypto_aead_authsize(aead);
-+
-+ memcpy(info, req->iv, 16);
-+
-+ /* format control info per RFC 3610 and
-+ * NIST Special Publication 800-38C
-+ */
-+ *info |= (8 * ((m - 2) / 2));
-+ if (req->assoclen)
-+ *info |= 64;
-+
-+ return set_msg_len(info + 16 - l, cryptlen, l);
-+}
++ st0_sign = getsign(st0_ptr);
+
+- setcc(0);
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ if (tag == TAG_Valid) {
++ int q;
+
-+static int format_adata(u8 *adata, unsigned int a)
-+{
-+ int len = 0;
++ if (exponent(st0_ptr) > -40) {
++ if ((exponent(st0_ptr) < 0)
++ || ((exponent(st0_ptr) == 0)
++ && (significand(st0_ptr) <=
++ 0xc90fdaa22168c234LL))) {
++ poly_cos(st0_ptr);
+
-+ /* add control info for associated data
-+ * RFC 3610 and NIST Special Publication 800-38C
-+ */
-+ if (a < 65280) {
-+ *(__be16 *)adata = cpu_to_be16(a);
-+ len = 2;
-+ } else {
-+ *(__be16 *)adata = cpu_to_be16(0xfffe);
-+ *(__be32 *)&adata[2] = cpu_to_be32(a);
-+ len = 6;
-+ }
++ /* We do not really know if up or down */
++ set_precision_flag_down();
+
-+ return len;
-+}
++ return 0;
++ } else if ((q = trig_arg(st0_ptr, FCOS)) != -1) {
++ poly_sine(st0_ptr);
+
-+static void compute_mac(struct crypto_cipher *tfm, u8 *data, int n,
-+ struct crypto_ccm_req_priv_ctx *pctx)
-+{
-+ unsigned int bs = 16;
-+ u8 *odata = pctx->odata;
-+ u8 *idata = pctx->idata;
-+ int datalen, getlen;
++ if ((q + 1) & 2)
++ changesign(st0_ptr);
+
-+ datalen = n;
++ /* We do not really know if up or down */
++ set_precision_flag_down();
+
-+ /* first time in here, block may be partially filled. */
-+ getlen = bs - pctx->ilen;
-+ if (datalen >= getlen) {
-+ memcpy(idata + pctx->ilen, data, getlen);
-+ crypto_xor(odata, idata, bs);
-+ crypto_cipher_encrypt_one(tfm, odata, odata);
-+ datalen -= getlen;
-+ data += getlen;
-+ pctx->ilen = 0;
-+ }
++ return 0;
++ } else {
++ /* Operand is out of range */
++ return 1;
++ }
++ } else {
++ denormal_arg:
+
-+ /* now encrypt rest of data */
-+ while (datalen >= bs) {
-+ crypto_xor(odata, data, bs);
-+ crypto_cipher_encrypt_one(tfm, odata, odata);
++ setcc(0);
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+ #ifdef PECULIAR_486
+- set_precision_flag_down(); /* 80486 appears to do this. */
++ set_precision_flag_down(); /* 80486 appears to do this. */
+ #else
+- set_precision_flag_up(); /* Must be up. */
++ set_precision_flag_up(); /* Must be up. */
+ #endif /* PECULIAR_486 */
+- return 0;
++ return 0;
++ }
++ } else if (tag == TAG_Zero) {
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ setcc(0);
++ return 0;
+ }
+- }
+- else if ( tag == TAG_Zero )
+- {
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+- setcc(0);
+- return 0;
+- }
+-
+- if ( tag == TAG_Special )
+- tag = FPU_Special(st0_ptr);
+-
+- if ( tag == TW_Denormal )
+- {
+- if ( denormal_operand() < 0 )
+- return 1;
+-
+- goto denormal_arg;
+- }
+- else if ( tag == TW_Infinity )
+- {
+- /* The 80486 treats infinity as an invalid operand */
+- arith_invalid(0);
+- return 1;
+- }
+- else
+- {
+- single_arg_error(st0_ptr, tag); /* requires st0_ptr == &st(0) */
+- return 1;
+- }
+-}
+
++ if (tag == TAG_Special)
++ tag = FPU_Special(st0_ptr);
+
-+ datalen -= bs;
-+ data += bs;
-+ }
++ if (tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return 1;
+
-+ /* check and see if there's leftover data that wasn't
-+ * enough to fill a block.
-+ */
-+ if (datalen) {
-+ memcpy(idata + pctx->ilen, data, datalen);
-+ pctx->ilen += datalen;
++ goto denormal_arg;
++ } else if (tag == TW_Infinity) {
++ /* The 80486 treats infinity as an invalid operand */
++ arith_invalid(0);
++ return 1;
++ } else {
++ single_arg_error(st0_ptr, tag); /* requires st0_ptr == &st(0) */
++ return 1;
+ }
+}
-+
-+static void get_data_to_compute(struct crypto_cipher *tfm,
-+ struct crypto_ccm_req_priv_ctx *pctx,
-+ struct scatterlist *sg, unsigned int len)
-+{
-+ struct scatter_walk walk;
-+ u8 *data_src;
-+ int n;
-+
-+ scatterwalk_start(&walk, sg);
-+
-+ while (len) {
-+ n = scatterwalk_clamp(&walk, len);
-+ if (!n) {
-+ scatterwalk_start(&walk, sg_next(walk.sg));
-+ n = scatterwalk_clamp(&walk, len);
+
+ static void fcos(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- f_cos(st0_ptr, st0_tag);
++ f_cos(st0_ptr, st0_tag);
+ }
+
+-
+ static void fsincos(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- FPU_REG *st_new_ptr;
+- FPU_REG arg;
+- u_char tag;
+-
+- /* Stack underflow has higher priority */
+- if ( st0_tag == TAG_Empty )
+- {
+- FPU_stack_underflow(); /* Puts a QNaN in st(0) */
+- if ( control_word & CW_Invalid )
+- {
+- st_new_ptr = &st(-1);
+- push();
+- FPU_stack_underflow(); /* Puts a QNaN in the new st(0) */
++ FPU_REG *st_new_ptr;
++ FPU_REG arg;
++ u_char tag;
++
++ /* Stack underflow has higher priority */
++ if (st0_tag == TAG_Empty) {
++ FPU_stack_underflow(); /* Puts a QNaN in st(0) */
++ if (control_word & CW_Invalid) {
++ st_new_ptr = &st(-1);
++ push();
++ FPU_stack_underflow(); /* Puts a QNaN in the new st(0) */
+ }
-+ data_src = scatterwalk_map(&walk, 0);
++ return;
+ }
+- return;
+- }
+-
+- if ( STACK_OVERFLOW )
+- { FPU_stack_overflow(); return; }
+-
+- if ( st0_tag == TAG_Special )
+- tag = FPU_Special(st0_ptr);
+- else
+- tag = st0_tag;
+-
+- if ( tag == TW_NaN )
+- {
+- single_arg_2_error(st0_ptr, TW_NaN);
+- return;
+- }
+- else if ( tag == TW_Infinity )
+- {
+- /* The 80486 treats infinity as an invalid operand */
+- if ( arith_invalid(0) >= 0 )
+- {
+- /* Masked response */
+- push();
+- arith_invalid(0);
+
-+ compute_mac(tfm, data_src, n, pctx);
-+ len -= n;
++ if (STACK_OVERFLOW) {
++ FPU_stack_overflow();
++ return;
+ }
+- return;
+- }
+-
+- reg_copy(st0_ptr, &arg);
+- if ( !fsin(st0_ptr, st0_tag) )
+- {
+- push();
+- FPU_copy_to_reg0(&arg, st0_tag);
+- f_cos(&st(0), st0_tag);
+- }
+- else
+- {
+- /* An error, so restore st(0) */
+- FPU_copy_to_reg0(&arg, st0_tag);
+- }
+-}
+
++ if (st0_tag == TAG_Special)
++ tag = FPU_Special(st0_ptr);
++ else
++ tag = st0_tag;
+
-+ scatterwalk_unmap(data_src, 0);
-+ scatterwalk_advance(&walk, n);
-+ scatterwalk_done(&walk, 0, len);
-+ if (len)
-+ crypto_yield(pctx->flags);
++ if (tag == TW_NaN) {
++ single_arg_2_error(st0_ptr, TW_NaN);
++ return;
++ } else if (tag == TW_Infinity) {
++ /* The 80486 treats infinity as an invalid operand */
++ if (arith_invalid(0) >= 0) {
++ /* Masked response */
++ push();
++ arith_invalid(0);
++ }
++ return;
+ }
+
-+ /* any leftover needs padding and then encrypted */
-+ if (pctx->ilen) {
-+ int padlen;
-+ u8 *odata = pctx->odata;
-+ u8 *idata = pctx->idata;
-+
-+ padlen = 16 - pctx->ilen;
-+ memset(idata + pctx->ilen, 0, padlen);
-+ crypto_xor(odata, idata, 16);
-+ crypto_cipher_encrypt_one(tfm, odata, odata);
-+ pctx->ilen = 0;
++ reg_copy(st0_ptr, &arg);
++ if (!fsin(st0_ptr, st0_tag)) {
++ push();
++ FPU_copy_to_reg0(&arg, st0_tag);
++ f_cos(&st(0), st0_tag);
++ } else {
++ /* An error, so restore st(0) */
++ FPU_copy_to_reg0(&arg, st0_tag);
+ }
+}
+
+ /*---------------------------------------------------------------------------*/
+ /* The following all require two arguments: st(0) and st(1) */
+@@ -826,1020 +743,901 @@ static void fsincos(FPU_REG *st0_ptr, u_char st0_tag)
+ result must be zero.
+ */
+ static void rem_kernel(unsigned long long st0, unsigned long long *y,
+- unsigned long long st1,
+- unsigned long long q, int n)
++ unsigned long long st1, unsigned long long q, int n)
+ {
+- int dummy;
+- unsigned long long x;
+-
+- x = st0 << n;
+-
+- /* Do the required multiplication and subtraction in the one operation */
+-
+- /* lsw x -= lsw st1 * lsw q */
+- asm volatile ("mull %4; subl %%eax,%0; sbbl %%edx,%1"
+- :"=m" (((unsigned *)&x)[0]), "=m" (((unsigned *)&x)[1]),
+- "=a" (dummy)
+- :"2" (((unsigned *)&st1)[0]), "m" (((unsigned *)&q)[0])
+- :"%dx");
+- /* msw x -= msw st1 * lsw q */
+- asm volatile ("mull %3; subl %%eax,%0"
+- :"=m" (((unsigned *)&x)[1]), "=a" (dummy)
+- :"1" (((unsigned *)&st1)[1]), "m" (((unsigned *)&q)[0])
+- :"%dx");
+- /* msw x -= lsw st1 * msw q */
+- asm volatile ("mull %3; subl %%eax,%0"
+- :"=m" (((unsigned *)&x)[1]), "=a" (dummy)
+- :"1" (((unsigned *)&st1)[0]), "m" (((unsigned *)&q)[1])
+- :"%dx");
+-
+- *y = x;
++ int dummy;
++ unsigned long long x;
++
++ x = st0 << n;
++
++ /* Do the required multiplication and subtraction in the one operation */
++
++ /* lsw x -= lsw st1 * lsw q */
++ asm volatile ("mull %4; subl %%eax,%0; sbbl %%edx,%1":"=m"
++ (((unsigned *)&x)[0]), "=m"(((unsigned *)&x)[1]),
++ "=a"(dummy)
++ :"2"(((unsigned *)&st1)[0]), "m"(((unsigned *)&q)[0])
++ :"%dx");
++ /* msw x -= msw st1 * lsw q */
++ asm volatile ("mull %3; subl %%eax,%0":"=m" (((unsigned *)&x)[1]),
++ "=a"(dummy)
++ :"1"(((unsigned *)&st1)[1]), "m"(((unsigned *)&q)[0])
++ :"%dx");
++ /* msw x -= lsw st1 * msw q */
++ asm volatile ("mull %3; subl %%eax,%0":"=m" (((unsigned *)&x)[1]),
++ "=a"(dummy)
++ :"1"(((unsigned *)&st1)[0]), "m"(((unsigned *)&q)[1])
++ :"%dx");
++
++ *y = x;
+ }
+
+-
+ /* Remainder of st(0) / st(1) */
+ /* This routine produces exact results, i.e. there is never any
+ rounding or truncation, etc of the result. */
+ static void do_fprem(FPU_REG *st0_ptr, u_char st0_tag, int round)
+ {
+- FPU_REG *st1_ptr = &st(1);
+- u_char st1_tag = FPU_gettagi(1);
+-
+- if ( !((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid)) )
+- {
+- FPU_REG tmp, st0, st1;
+- u_char st0_sign, st1_sign;
+- u_char tmptag;
+- int tag;
+- int old_cw;
+- int expdif;
+- long long q;
+- unsigned short saved_status;
+- int cc;
+-
+- fprem_valid:
+- /* Convert registers for internal use. */
+- st0_sign = FPU_to_exp16(st0_ptr, &st0);
+- st1_sign = FPU_to_exp16(st1_ptr, &st1);
+- expdif = exponent16(&st0) - exponent16(&st1);
+-
+- old_cw = control_word;
+- cc = 0;
+-
+- /* We want the status following the denorm tests, but don't want
+- the status changed by the arithmetic operations. */
+- saved_status = partial_status;
+- control_word &= ~CW_RC;
+- control_word |= RC_CHOP;
+-
+- if ( expdif < 64 )
+- {
+- /* This should be the most common case */
+-
+- if ( expdif > -2 )
+- {
+- u_char sign = st0_sign ^ st1_sign;
+- tag = FPU_u_div(&st0, &st1, &tmp,
+- PR_64_BITS | RC_CHOP | 0x3f,
+- sign);
+- setsign(&tmp, sign);
+-
+- if ( exponent(&tmp) >= 0 )
+- {
+- FPU_round_to_int(&tmp, tag); /* Fortunately, this can't
+- overflow to 2^64 */
+- q = significand(&tmp);
+-
+- rem_kernel(significand(&st0),
+- &significand(&tmp),
+- significand(&st1),
+- q, expdif);
+-
+- setexponent16(&tmp, exponent16(&st1));
+- }
+- else
+- {
+- reg_copy(&st0, &tmp);
+- q = 0;
+- }
+-
+- if ( (round == RC_RND) && (tmp.sigh & 0xc0000000) )
+- {
+- /* We may need to subtract st(1) once more,
+- to get a result <= 1/2 of st(1). */
+- unsigned long long x;
+- expdif = exponent16(&st1) - exponent16(&tmp);
+- if ( expdif <= 1 )
+- {
+- if ( expdif == 0 )
+- x = significand(&st1) - significand(&tmp);
+- else /* expdif is 1 */
+- x = (significand(&st1) << 1) - significand(&tmp);
+- if ( (x < significand(&tmp)) ||
+- /* or equi-distant (from 0 & st(1)) and q is odd */
+- ((x == significand(&tmp)) && (q & 1) ) )
+- {
+- st0_sign = ! st0_sign;
+- significand(&tmp) = x;
+- q++;
++ FPU_REG *st1_ptr = &st(1);
++ u_char st1_tag = FPU_gettagi(1);
++
++ if (!((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid))) {
++ FPU_REG tmp, st0, st1;
++ u_char st0_sign, st1_sign;
++ u_char tmptag;
++ int tag;
++ int old_cw;
++ int expdif;
++ long long q;
++ unsigned short saved_status;
++ int cc;
++
++ fprem_valid:
++ /* Convert registers for internal use. */
++ st0_sign = FPU_to_exp16(st0_ptr, &st0);
++ st1_sign = FPU_to_exp16(st1_ptr, &st1);
++ expdif = exponent16(&st0) - exponent16(&st1);
++
++ old_cw = control_word;
++ cc = 0;
++
++ /* We want the status following the denorm tests, but don't want
++ the status changed by the arithmetic operations. */
++ saved_status = partial_status;
++ control_word &= ~CW_RC;
++ control_word |= RC_CHOP;
++
++ if (expdif < 64) {
++ /* This should be the most common case */
++
++ if (expdif > -2) {
++ u_char sign = st0_sign ^ st1_sign;
++ tag = FPU_u_div(&st0, &st1, &tmp,
++ PR_64_BITS | RC_CHOP | 0x3f,
++ sign);
++ setsign(&tmp, sign);
++
++ if (exponent(&tmp) >= 0) {
++ FPU_round_to_int(&tmp, tag); /* Fortunately, this can't
++ overflow to 2^64 */
++ q = significand(&tmp);
++
++ rem_kernel(significand(&st0),
++ &significand(&tmp),
++ significand(&st1),
++ q, expdif);
+
-+static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
-+ unsigned int cryptlen)
-+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
-+ struct crypto_cipher *cipher = ctx->cipher;
-+ unsigned int assoclen = req->assoclen;
-+ u8 *odata = pctx->odata;
-+ u8 *idata = pctx->idata;
-+ int err;
-+
-+ /* format control data for input */
-+ err = format_input(odata, req, cryptlen);
-+ if (err)
-+ goto out;
++ setexponent16(&tmp, exponent16(&st1));
++ } else {
++ reg_copy(&st0, &tmp);
++ q = 0;
++ }
+
-+ /* encrypt first block to use as start in computing mac */
-+ crypto_cipher_encrypt_one(cipher, odata, odata);
++ if ((round == RC_RND)
++ && (tmp.sigh & 0xc0000000)) {
++ /* We may need to subtract st(1) once more,
++ to get a result <= 1/2 of st(1). */
++ unsigned long long x;
++ expdif =
++ exponent16(&st1) - exponent16(&tmp);
++ if (expdif <= 1) {
++ if (expdif == 0)
++ x = significand(&st1) -
++ significand(&tmp);
++ else /* expdif is 1 */
++ x = (significand(&st1)
++ << 1) -
++ significand(&tmp);
++ if ((x < significand(&tmp)) ||
++ /* or equi-distant (from 0 & st(1)) and q is odd */
++ ((x == significand(&tmp))
++ && (q & 1))) {
++ st0_sign = !st0_sign;
++ significand(&tmp) = x;
++ q++;
++ }
++ }
++ }
+
-+ /* format associated data and compute into mac */
-+ if (assoclen) {
-+ pctx->ilen = format_adata(idata, assoclen);
-+ get_data_to_compute(cipher, pctx, req->assoc, req->assoclen);
++ if (q & 4)
++ cc |= SW_C0;
++ if (q & 2)
++ cc |= SW_C3;
++ if (q & 1)
++ cc |= SW_C1;
++ } else {
++ control_word = old_cw;
++ setcc(0);
++ return;
+ }
+- }
+- }
+-
+- if (q & 4) cc |= SW_C0;
+- if (q & 2) cc |= SW_C3;
+- if (q & 1) cc |= SW_C1;
+- }
+- else
+- {
+- control_word = old_cw;
+- setcc(0);
+- return;
+- }
+- }
+- else
+- {
+- /* There is a large exponent difference ( >= 64 ) */
+- /* To make much sense, the code in this section should
+- be done at high precision. */
+- int exp_1, N;
+- u_char sign;
+-
+- /* prevent overflow here */
+- /* N is 'a number between 32 and 63' (p26-113) */
+- reg_copy(&st0, &tmp);
+- tmptag = st0_tag;
+- N = (expdif & 0x0000001f) + 32; /* This choice gives results
+- identical to an AMD 486 */
+- setexponent16(&tmp, N);
+- exp_1 = exponent16(&st1);
+- setexponent16(&st1, 0);
+- expdif -= N;
+-
+- sign = getsign(&tmp) ^ st1_sign;
+- tag = FPU_u_div(&tmp, &st1, &tmp, PR_64_BITS | RC_CHOP | 0x3f,
+- sign);
+- setsign(&tmp, sign);
+-
+- FPU_round_to_int(&tmp, tag); /* Fortunately, this can't
+- overflow to 2^64 */
+-
+- rem_kernel(significand(&st0),
+- &significand(&tmp),
+- significand(&st1),
+- significand(&tmp),
+- exponent(&tmp)
+- );
+- setexponent16(&tmp, exp_1 + expdif);
+-
+- /* It is possible for the operation to be complete here.
+- What does the IEEE standard say? The Intel 80486 manual
+- implies that the operation will never be completed at this
+- point, and the behaviour of a real 80486 confirms this.
+- */
+- if ( !(tmp.sigh | tmp.sigl) )
+- {
+- /* The result is zero */
+- control_word = old_cw;
+- partial_status = saved_status;
+- FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
+- setsign(&st0, st0_sign);
++ } else {
++ /* There is a large exponent difference ( >= 64 ) */
++ /* To make much sense, the code in this section should
++ be done at high precision. */
++ int exp_1, N;
++ u_char sign;
++
++ /* prevent overflow here */
++ /* N is 'a number between 32 and 63' (p26-113) */
++ reg_copy(&st0, &tmp);
++ tmptag = st0_tag;
++ N = (expdif & 0x0000001f) + 32; /* This choice gives results
++ identical to an AMD 486 */
++ setexponent16(&tmp, N);
++ exp_1 = exponent16(&st1);
++ setexponent16(&st1, 0);
++ expdif -= N;
++
++ sign = getsign(&tmp) ^ st1_sign;
++ tag =
++ FPU_u_div(&tmp, &st1, &tmp,
++ PR_64_BITS | RC_CHOP | 0x3f, sign);
++ setsign(&tmp, sign);
++
++ FPU_round_to_int(&tmp, tag); /* Fortunately, this can't
++ overflow to 2^64 */
++
++ rem_kernel(significand(&st0),
++ &significand(&tmp),
++ significand(&st1),
++ significand(&tmp), exponent(&tmp)
++ );
++ setexponent16(&tmp, exp_1 + expdif);
++
++ /* It is possible for the operation to be complete here.
++ What does the IEEE standard say? The Intel 80486 manual
++ implies that the operation will never be completed at this
++ point, and the behaviour of a real 80486 confirms this.
++ */
++ if (!(tmp.sigh | tmp.sigl)) {
++ /* The result is zero */
++ control_word = old_cw;
++ partial_status = saved_status;
++ FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
++ setsign(&st0, st0_sign);
+ #ifdef PECULIAR_486
+- setcc(SW_C2);
++ setcc(SW_C2);
+ #else
+- setcc(0);
++ setcc(0);
+ #endif /* PECULIAR_486 */
+- return;
+- }
+- cc = SW_C2;
+- }
++ return;
++ }
++ cc = SW_C2;
++ }
+
+- control_word = old_cw;
+- partial_status = saved_status;
+- tag = FPU_normalize_nuo(&tmp);
+- reg_copy(&tmp, st0_ptr);
+-
+- /* The only condition to be looked for is underflow,
+- and it can occur here only if underflow is unmasked. */
+- if ( (exponent16(&tmp) <= EXP_UNDER) && (tag != TAG_Zero)
+- && !(control_word & CW_Underflow) )
+- {
+- setcc(cc);
+- tag = arith_underflow(st0_ptr);
+- setsign(st0_ptr, st0_sign);
+- FPU_settag0(tag);
+- return;
+- }
+- else if ( (exponent16(&tmp) > EXP_UNDER) || (tag == TAG_Zero) )
+- {
+- stdexp(st0_ptr);
+- setsign(st0_ptr, st0_sign);
+- }
+- else
+- {
+- tag = FPU_round(st0_ptr, 0, 0, FULL_PRECISION, st0_sign);
+- }
+- FPU_settag0(tag);
+- setcc(cc);
++ control_word = old_cw;
++ partial_status = saved_status;
++ tag = FPU_normalize_nuo(&tmp);
++ reg_copy(&tmp, st0_ptr);
++
++ /* The only condition to be looked for is underflow,
++ and it can occur here only if underflow is unmasked. */
++ if ((exponent16(&tmp) <= EXP_UNDER) && (tag != TAG_Zero)
++ && !(control_word & CW_Underflow)) {
++ setcc(cc);
++ tag = arith_underflow(st0_ptr);
++ setsign(st0_ptr, st0_sign);
++ FPU_settag0(tag);
++ return;
++ } else if ((exponent16(&tmp) > EXP_UNDER) || (tag == TAG_Zero)) {
++ stdexp(st0_ptr);
++ setsign(st0_ptr, st0_sign);
++ } else {
++ tag =
++ FPU_round(st0_ptr, 0, 0, FULL_PRECISION, st0_sign);
++ }
++ FPU_settag0(tag);
++ setcc(cc);
+
+- return;
+- }
++ return;
+ }
+
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st1_tag == TAG_Special )
+- st1_tag = FPU_Special(st1_ptr);
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++ if (st1_tag == TAG_Special)
++ st1_tag = FPU_Special(st1_ptr);
+
+- if ( ((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
++ if (((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
+ || ((st0_tag == TW_Denormal) && (st1_tag == TAG_Valid))
+- || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal)) )
+- {
+- if ( denormal_operand() < 0 )
+- return;
+- goto fprem_valid;
+- }
+- else if ( (st0_tag == TAG_Empty) || (st1_tag == TAG_Empty) )
+- {
+- FPU_stack_underflow();
+- return;
+- }
+- else if ( st0_tag == TAG_Zero )
+- {
+- if ( st1_tag == TAG_Valid )
+- {
+- setcc(0); return;
+- }
+- else if ( st1_tag == TW_Denormal )
+- {
+- if ( denormal_operand() < 0 )
+- return;
+- setcc(0); return;
+- }
+- else if ( st1_tag == TAG_Zero )
+- { arith_invalid(0); return; } /* fprem(?,0) always invalid */
+- else if ( st1_tag == TW_Infinity )
+- { setcc(0); return; }
+- }
+- else if ( (st0_tag == TAG_Valid) || (st0_tag == TW_Denormal) )
+- {
+- if ( st1_tag == TAG_Zero )
+- {
+- arith_invalid(0); /* fprem(Valid,Zero) is invalid */
+- return;
+- }
+- else if ( st1_tag != TW_NaN )
+- {
+- if ( ((st0_tag == TW_Denormal) || (st1_tag == TW_Denormal))
+- && (denormal_operand() < 0) )
+- return;
+-
+- if ( st1_tag == TW_Infinity )
+- {
+- /* fprem(Valid,Infinity) is o.k. */
+- setcc(0); return;
+- }
+- }
+- }
+- else if ( st0_tag == TW_Infinity )
+- {
+- if ( st1_tag != TW_NaN )
+- {
+- arith_invalid(0); /* fprem(Infinity,?) is invalid */
+- return;
++ || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal))) {
++ if (denormal_operand() < 0)
++ return;
++ goto fprem_valid;
++ } else if ((st0_tag == TAG_Empty) || (st1_tag == TAG_Empty)) {
++ FPU_stack_underflow();
++ return;
++ } else if (st0_tag == TAG_Zero) {
++ if (st1_tag == TAG_Valid) {
++ setcc(0);
++ return;
++ } else if (st1_tag == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return;
++ setcc(0);
++ return;
++ } else if (st1_tag == TAG_Zero) {
++ arith_invalid(0);
++ return;
++ } /* fprem(?,0) always invalid */
++ else if (st1_tag == TW_Infinity) {
++ setcc(0);
++ return;
++ }
++ } else if ((st0_tag == TAG_Valid) || (st0_tag == TW_Denormal)) {
++ if (st1_tag == TAG_Zero) {
++ arith_invalid(0); /* fprem(Valid,Zero) is invalid */
++ return;
++ } else if (st1_tag != TW_NaN) {
++ if (((st0_tag == TW_Denormal)
++ || (st1_tag == TW_Denormal))
++ && (denormal_operand() < 0))
++ return;
+
-+ /* compute plaintext into mac */
-+ get_data_to_compute(cipher, pctx, plain, cryptlen);
-+
-+out:
-+ return err;
-+}
-+
-+static void crypto_ccm_encrypt_done(struct crypto_async_request *areq, int err)
-+{
-+ struct aead_request *req = areq->data;
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
-+ u8 *odata = pctx->odata;
-+
-+ if (!err)
-+ scatterwalk_map_and_copy(odata, req->dst, req->cryptlen,
-+ crypto_aead_authsize(aead), 1);
-+ aead_request_complete(req, err);
-+}
-+
-+static inline int crypto_ccm_check_iv(const u8 *iv)
-+{
-+ /* 2 <= L <= 8, so 1 <= L' <= 7. */
-+ if (1 > iv[0] || iv[0] > 7)
-+ return -EINVAL;
-+
-+ return 0;
-+}
-+
-+static int crypto_ccm_encrypt(struct aead_request *req)
-+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
-+ struct ablkcipher_request *abreq = &pctx->abreq;
-+ struct scatterlist *dst;
-+ unsigned int cryptlen = req->cryptlen;
-+ u8 *odata = pctx->odata;
-+ u8 *iv = req->iv;
-+ int err;
-+
-+ err = crypto_ccm_check_iv(iv);
-+ if (err)
-+ return err;
-+
-+ pctx->flags = aead_request_flags(req);
-+
-+ err = crypto_ccm_auth(req, req->src, cryptlen);
-+ if (err)
-+ return err;
++ if (st1_tag == TW_Infinity) {
++ /* fprem(Valid,Infinity) is o.k. */
++ setcc(0);
++ return;
++ }
++ }
++ } else if (st0_tag == TW_Infinity) {
++ if (st1_tag != TW_NaN) {
++ arith_invalid(0); /* fprem(Infinity,?) is invalid */
++ return;
++ }
+ }
+- }
+
+- /* One of the registers must contain a NaN if we got here. */
++ /* One of the registers must contain a NaN if we got here. */
+
+ #ifdef PARANOID
+- if ( (st0_tag != TW_NaN) && (st1_tag != TW_NaN) )
+- EXCEPTION(EX_INTERNAL | 0x118);
++ if ((st0_tag != TW_NaN) && (st1_tag != TW_NaN))
++ EXCEPTION(EX_INTERNAL | 0x118);
+ #endif /* PARANOID */
+
+- real_2op_NaN(st1_ptr, st1_tag, 0, st1_ptr);
++ real_2op_NaN(st1_ptr, st1_tag, 0, st1_ptr);
+
+ }
+
+-
+ /* ST(1) <- ST(1) * log ST; pop ST */
+ static void fyl2x(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- FPU_REG *st1_ptr = &st(1), exponent;
+- u_char st1_tag = FPU_gettagi(1);
+- u_char sign;
+- int e, tag;
+-
+- clear_C1();
+-
+- if ( (st0_tag == TAG_Valid) && (st1_tag == TAG_Valid) )
+- {
+- both_valid:
+- /* Both regs are Valid or Denormal */
+- if ( signpositive(st0_ptr) )
+- {
+- if ( st0_tag == TW_Denormal )
+- FPU_to_exp16(st0_ptr, st0_ptr);
+- else
+- /* Convert st(0) for internal use. */
+- setexponent16(st0_ptr, exponent(st0_ptr));
+-
+- if ( (st0_ptr->sigh == 0x80000000) && (st0_ptr->sigl == 0) )
+- {
+- /* Special case. The result can be precise. */
+- u_char esign;
+- e = exponent16(st0_ptr);
+- if ( e >= 0 )
+- {
+- exponent.sigh = e;
+- esign = SIGN_POS;
+- }
+- else
+- {
+- exponent.sigh = -e;
+- esign = SIGN_NEG;
++ FPU_REG *st1_ptr = &st(1), exponent;
++ u_char st1_tag = FPU_gettagi(1);
++ u_char sign;
++ int e, tag;
++
++ clear_C1();
++
++ if ((st0_tag == TAG_Valid) && (st1_tag == TAG_Valid)) {
++ both_valid:
++ /* Both regs are Valid or Denormal */
++ if (signpositive(st0_ptr)) {
++ if (st0_tag == TW_Denormal)
++ FPU_to_exp16(st0_ptr, st0_ptr);
++ else
++ /* Convert st(0) for internal use. */
++ setexponent16(st0_ptr, exponent(st0_ptr));
+
-+ /* Note: rfc 3610 and NIST 800-38C require counter of
-+ * zero to encrypt auth tag.
-+ */
-+ memset(iv + 15 - iv[0], 0, iv[0] + 1);
++ if ((st0_ptr->sigh == 0x80000000)
++ && (st0_ptr->sigl == 0)) {
++ /* Special case. The result can be precise. */
++ u_char esign;
++ e = exponent16(st0_ptr);
++ if (e >= 0) {
++ exponent.sigh = e;
++ esign = SIGN_POS;
++ } else {
++ exponent.sigh = -e;
++ esign = SIGN_NEG;
++ }
++ exponent.sigl = 0;
++ setexponent16(&exponent, 31);
++ tag = FPU_normalize_nuo(&exponent);
++ stdexp(&exponent);
++ setsign(&exponent, esign);
++ tag =
++ FPU_mul(&exponent, tag, 1, FULL_PRECISION);
++ if (tag >= 0)
++ FPU_settagi(1, tag);
++ } else {
++ /* The usual case */
++ sign = getsign(st1_ptr);
++ if (st1_tag == TW_Denormal)
++ FPU_to_exp16(st1_ptr, st1_ptr);
++ else
++ /* Convert st(1) for internal use. */
++ setexponent16(st1_ptr,
++ exponent(st1_ptr));
++ poly_l2(st0_ptr, st1_ptr, sign);
++ }
++ } else {
++ /* negative */
++ if (arith_invalid(1) < 0)
++ return;
+ }
+- exponent.sigl = 0;
+- setexponent16(&exponent, 31);
+- tag = FPU_normalize_nuo(&exponent);
+- stdexp(&exponent);
+- setsign(&exponent, esign);
+- tag = FPU_mul(&exponent, tag, 1, FULL_PRECISION);
+- if ( tag >= 0 )
+- FPU_settagi(1, tag);
+- }
+- else
+- {
+- /* The usual case */
+- sign = getsign(st1_ptr);
+- if ( st1_tag == TW_Denormal )
+- FPU_to_exp16(st1_ptr, st1_ptr);
+- else
+- /* Convert st(1) for internal use. */
+- setexponent16(st1_ptr, exponent(st1_ptr));
+- poly_l2(st0_ptr, st1_ptr, sign);
+- }
+- }
+- else
+- {
+- /* negative */
+- if ( arith_invalid(1) < 0 )
+- return;
+- }
+
+- FPU_pop();
+-
+- return;
+- }
+-
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st1_tag == TAG_Special )
+- st1_tag = FPU_Special(st1_ptr);
+-
+- if ( (st0_tag == TAG_Empty) || (st1_tag == TAG_Empty) )
+- {
+- FPU_stack_underflow_pop(1);
+- return;
+- }
+- else if ( (st0_tag <= TW_Denormal) && (st1_tag <= TW_Denormal) )
+- {
+- if ( st0_tag == TAG_Zero )
+- {
+- if ( st1_tag == TAG_Zero )
+- {
+- /* Both args zero is invalid */
+- if ( arith_invalid(1) < 0 )
+- return;
+- }
+- else
+- {
+- u_char sign;
+- sign = getsign(st1_ptr)^SIGN_NEG;
+- if ( FPU_divide_by_zero(1, sign) < 0 )
+- return;
++ FPU_pop();
+
+- setsign(st1_ptr, sign);
+- }
+- }
+- else if ( st1_tag == TAG_Zero )
+- {
+- /* st(1) contains zero, st(0) valid <> 0 */
+- /* Zero is the valid answer */
+- sign = getsign(st1_ptr);
+-
+- if ( signnegative(st0_ptr) )
+- {
+- /* log(negative) */
+- if ( arith_invalid(1) < 0 )
+ return;
+- }
+- else if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+- else
+- {
+- if ( exponent(st0_ptr) < 0 )
+- sign ^= SIGN_NEG;
+-
+- FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
+- setsign(st1_ptr, sign);
+- }
+ }
+- else
+- {
+- /* One or both operands are denormals. */
+- if ( denormal_operand() < 0 )
+- return;
+- goto both_valid;
+- }
+- }
+- else if ( (st0_tag == TW_NaN) || (st1_tag == TW_NaN) )
+- {
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0 )
+- return;
+- }
+- /* One or both arg must be an infinity */
+- else if ( st0_tag == TW_Infinity )
+- {
+- if ( (signnegative(st0_ptr)) || (st1_tag == TAG_Zero) )
+- {
+- /* log(-infinity) or 0*log(infinity) */
+- if ( arith_invalid(1) < 0 )
+- return;
+- }
+- else
+- {
+- u_char sign = getsign(st1_ptr);
+
+- if ( (st1_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++ if (st1_tag == TAG_Special)
++ st1_tag = FPU_Special(st1_ptr);
+
+- FPU_copy_to_reg1(&CONST_INF, TAG_Special);
+- setsign(st1_ptr, sign);
+- }
+- }
+- /* st(1) must be infinity here */
+- else if ( ((st0_tag == TAG_Valid) || (st0_tag == TW_Denormal))
+- && ( signpositive(st0_ptr) ) )
+- {
+- if ( exponent(st0_ptr) >= 0 )
+- {
+- if ( (exponent(st0_ptr) == 0) &&
+- (st0_ptr->sigh == 0x80000000) &&
+- (st0_ptr->sigl == 0) )
+- {
+- /* st(0) holds 1.0 */
+- /* infinity*log(1) */
+- if ( arith_invalid(1) < 0 )
++ if ((st0_tag == TAG_Empty) || (st1_tag == TAG_Empty)) {
++ FPU_stack_underflow_pop(1);
+ return;
+- }
+- /* else st(0) is positive and > 1.0 */
++ } else if ((st0_tag <= TW_Denormal) && (st1_tag <= TW_Denormal)) {
++ if (st0_tag == TAG_Zero) {
++ if (st1_tag == TAG_Zero) {
++ /* Both args zero is invalid */
++ if (arith_invalid(1) < 0)
++ return;
++ } else {
++ u_char sign;
++ sign = getsign(st1_ptr) ^ SIGN_NEG;
++ if (FPU_divide_by_zero(1, sign) < 0)
++ return;
+
-+ sg_init_table(pctx->src, 2);
-+ sg_set_buf(pctx->src, odata, 16);
-+ scatterwalk_sg_chain(pctx->src, 2, req->src);
++ setsign(st1_ptr, sign);
++ }
++ } else if (st1_tag == TAG_Zero) {
++ /* st(1) contains zero, st(0) valid <> 0 */
++ /* Zero is the valid answer */
++ sign = getsign(st1_ptr);
++
++ if (signnegative(st0_ptr)) {
++ /* log(negative) */
++ if (arith_invalid(1) < 0)
++ return;
++ } else if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
++ else {
++ if (exponent(st0_ptr) < 0)
++ sign ^= SIGN_NEG;
+
-+ dst = pctx->src;
-+ if (req->src != req->dst) {
-+ sg_init_table(pctx->dst, 2);
-+ sg_set_buf(pctx->dst, odata, 16);
-+ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
-+ dst = pctx->dst;
++ FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
++ setsign(st1_ptr, sign);
++ }
++ } else {
++ /* One or both operands are denormals. */
++ if (denormal_operand() < 0)
++ return;
++ goto both_valid;
++ }
++ } else if ((st0_tag == TW_NaN) || (st1_tag == TW_NaN)) {
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0)
++ return;
+ }
++ /* One or both arg must be an infinity */
++ else if (st0_tag == TW_Infinity) {
++ if ((signnegative(st0_ptr)) || (st1_tag == TAG_Zero)) {
++ /* log(-infinity) or 0*log(infinity) */
++ if (arith_invalid(1) < 0)
++ return;
++ } else {
++ u_char sign = getsign(st1_ptr);
+
-+ ablkcipher_request_set_tfm(abreq, ctx->ctr);
-+ ablkcipher_request_set_callback(abreq, pctx->flags,
-+ crypto_ccm_encrypt_done, req);
-+ ablkcipher_request_set_crypt(abreq, pctx->src, dst, cryptlen + 16, iv);
-+ err = crypto_ablkcipher_encrypt(abreq);
-+ if (err)
-+ return err;
-+
-+ /* copy authtag to end of dst */
-+ scatterwalk_map_and_copy(odata, req->dst, cryptlen,
-+ crypto_aead_authsize(aead), 1);
-+ return err;
-+}
-+
-+static void crypto_ccm_decrypt_done(struct crypto_async_request *areq,
-+ int err)
-+{
-+ struct aead_request *req = areq->data;
-+ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ unsigned int authsize = crypto_aead_authsize(aead);
-+ unsigned int cryptlen = req->cryptlen - authsize;
++ if ((st1_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
-+ if (!err) {
-+ err = crypto_ccm_auth(req, req->dst, cryptlen);
-+ if (!err && memcmp(pctx->auth_tag, pctx->odata, authsize))
-+ err = -EBADMSG;
++ FPU_copy_to_reg1(&CONST_INF, TAG_Special);
++ setsign(st1_ptr, sign);
++ }
+ }
+- else
+- {
+- /* st(0) is positive and < 1.0 */
++ /* st(1) must be infinity here */
++ else if (((st0_tag == TAG_Valid) || (st0_tag == TW_Denormal))
++ && (signpositive(st0_ptr))) {
++ if (exponent(st0_ptr) >= 0) {
++ if ((exponent(st0_ptr) == 0) &&
++ (st0_ptr->sigh == 0x80000000) &&
++ (st0_ptr->sigl == 0)) {
++ /* st(0) holds 1.0 */
++ /* infinity*log(1) */
++ if (arith_invalid(1) < 0)
++ return;
++ }
++ /* else st(0) is positive and > 1.0 */
++ } else {
++ /* st(0) is positive and < 1.0 */
+
+- if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
++ if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
+- changesign(st1_ptr);
+- }
+- }
+- else
+- {
+- /* st(0) must be zero or negative */
+- if ( st0_tag == TAG_Zero )
+- {
+- /* This should be invalid, but a real 80486 is happy with it. */
++ changesign(st1_ptr);
++ }
++ } else {
++ /* st(0) must be zero or negative */
++ if (st0_tag == TAG_Zero) {
++ /* This should be invalid, but a real 80486 is happy with it. */
+
+ #ifndef PECULIAR_486
+- sign = getsign(st1_ptr);
+- if ( FPU_divide_by_zero(1, sign) < 0 )
+- return;
++ sign = getsign(st1_ptr);
++ if (FPU_divide_by_zero(1, sign) < 0)
++ return;
+ #endif /* PECULIAR_486 */
+
+- changesign(st1_ptr);
++ changesign(st1_ptr);
++ } else if (arith_invalid(1) < 0) /* log(negative) */
++ return;
+ }
+- else if ( arith_invalid(1) < 0 ) /* log(negative) */
+- return;
+- }
+
+- FPU_pop();
++ FPU_pop();
+ }
+
+-
+ static void fpatan(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- FPU_REG *st1_ptr = &st(1);
+- u_char st1_tag = FPU_gettagi(1);
+- int tag;
++ FPU_REG *st1_ptr = &st(1);
++ u_char st1_tag = FPU_gettagi(1);
++ int tag;
+
+- clear_C1();
+- if ( !((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid)) )
+- {
+- valid_atan:
++ clear_C1();
++ if (!((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid))) {
++ valid_atan:
+
+- poly_atan(st0_ptr, st0_tag, st1_ptr, st1_tag);
++ poly_atan(st0_ptr, st0_tag, st1_ptr, st1_tag);
+
+- FPU_pop();
++ FPU_pop();
+
+- return;
+- }
++ return;
+ }
-+ aead_request_complete(req, err);
-+}
-+
-+static int crypto_ccm_decrypt(struct aead_request *req)
-+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
-+ struct ablkcipher_request *abreq = &pctx->abreq;
-+ struct scatterlist *dst;
-+ unsigned int authsize = crypto_aead_authsize(aead);
-+ unsigned int cryptlen = req->cryptlen;
-+ u8 *authtag = pctx->auth_tag;
-+ u8 *odata = pctx->odata;
-+ u8 *iv = req->iv;
-+ int err;
-+
-+ if (cryptlen < authsize)
-+ return -EINVAL;
-+ cryptlen -= authsize;
-+
-+ err = crypto_ccm_check_iv(iv);
-+ if (err)
-+ return err;
-+
-+ pctx->flags = aead_request_flags(req);
-+
-+ scatterwalk_map_and_copy(authtag, req->src, cryptlen, authsize, 0);
+
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st1_tag == TAG_Special )
+- st1_tag = FPU_Special(st1_ptr);
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++ if (st1_tag == TAG_Special)
++ st1_tag = FPU_Special(st1_ptr);
+
+- if ( ((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
++ if (((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
+ || ((st0_tag == TW_Denormal) && (st1_tag == TAG_Valid))
+- || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal)) )
+- {
+- if ( denormal_operand() < 0 )
+- return;
++ || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal))) {
++ if (denormal_operand() < 0)
++ return;
+
+- goto valid_atan;
+- }
+- else if ( (st0_tag == TAG_Empty) || (st1_tag == TAG_Empty) )
+- {
+- FPU_stack_underflow_pop(1);
+- return;
+- }
+- else if ( (st0_tag == TW_NaN) || (st1_tag == TW_NaN) )
+- {
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) >= 0 )
+- FPU_pop();
+- return;
+- }
+- else if ( (st0_tag == TW_Infinity) || (st1_tag == TW_Infinity) )
+- {
+- u_char sign = getsign(st1_ptr);
+- if ( st0_tag == TW_Infinity )
+- {
+- if ( st1_tag == TW_Infinity )
+- {
+- if ( signpositive(st0_ptr) )
+- {
+- FPU_copy_to_reg1(&CONST_PI4, TAG_Valid);
+- }
+- else
+- {
+- setpositive(st1_ptr);
+- tag = FPU_u_add(&CONST_PI4, &CONST_PI2, st1_ptr,
+- FULL_PRECISION, SIGN_POS,
+- exponent(&CONST_PI4), exponent(&CONST_PI2));
+- if ( tag >= 0 )
+- FPU_settagi(1, tag);
+- }
+- }
+- else
+- {
+- if ( (st1_tag == TW_Denormal) && (denormal_operand() < 0) )
++ goto valid_atan;
++ } else if ((st0_tag == TAG_Empty) || (st1_tag == TAG_Empty)) {
++ FPU_stack_underflow_pop(1);
++ return;
++ } else if ((st0_tag == TW_NaN) || (st1_tag == TW_NaN)) {
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) >= 0)
++ FPU_pop();
+ return;
++ } else if ((st0_tag == TW_Infinity) || (st1_tag == TW_Infinity)) {
++ u_char sign = getsign(st1_ptr);
++ if (st0_tag == TW_Infinity) {
++ if (st1_tag == TW_Infinity) {
++ if (signpositive(st0_ptr)) {
++ FPU_copy_to_reg1(&CONST_PI4, TAG_Valid);
++ } else {
++ setpositive(st1_ptr);
++ tag =
++ FPU_u_add(&CONST_PI4, &CONST_PI2,
++ st1_ptr, FULL_PRECISION,
++ SIGN_POS,
++ exponent(&CONST_PI4),
++ exponent(&CONST_PI2));
++ if (tag >= 0)
++ FPU_settagi(1, tag);
++ }
++ } else {
++ if ((st1_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
-+ memset(iv + 15 - iv[0], 0, iv[0] + 1);
++ if (signpositive(st0_ptr)) {
++ FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
++ setsign(st1_ptr, sign); /* An 80486 preserves the sign */
++ FPU_pop();
++ return;
++ } else {
++ FPU_copy_to_reg1(&CONST_PI, TAG_Valid);
++ }
++ }
++ } else {
++ /* st(1) is infinity, st(0) not infinity */
++ if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
+- if ( signpositive(st0_ptr) )
+- {
+- FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
+- setsign(st1_ptr, sign); /* An 80486 preserves the sign */
+- FPU_pop();
+- return;
++ FPU_copy_to_reg1(&CONST_PI2, TAG_Valid);
+ }
+- else
+- {
+- FPU_copy_to_reg1(&CONST_PI, TAG_Valid);
++ setsign(st1_ptr, sign);
++ } else if (st1_tag == TAG_Zero) {
++ /* st(0) must be valid or zero */
++ u_char sign = getsign(st1_ptr);
+
-+ sg_init_table(pctx->src, 2);
-+ sg_set_buf(pctx->src, authtag, 16);
-+ scatterwalk_sg_chain(pctx->src, 2, req->src);
++ if ((st0_tag == TW_Denormal) && (denormal_operand() < 0))
++ return;
+
-+ dst = pctx->src;
-+ if (req->src != req->dst) {
-+ sg_init_table(pctx->dst, 2);
-+ sg_set_buf(pctx->dst, authtag, 16);
-+ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
-+ dst = pctx->dst;
++ if (signpositive(st0_ptr)) {
++ /* An 80486 preserves the sign */
++ FPU_pop();
++ return;
+ }
+- }
+- }
+- else
+- {
+- /* st(1) is infinity, st(0) not infinity */
+- if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+
+- FPU_copy_to_reg1(&CONST_PI2, TAG_Valid);
+- }
+- setsign(st1_ptr, sign);
+- }
+- else if ( st1_tag == TAG_Zero )
+- {
+- /* st(0) must be valid or zero */
+- u_char sign = getsign(st1_ptr);
+-
+- if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
++ FPU_copy_to_reg1(&CONST_PI, TAG_Valid);
++ setsign(st1_ptr, sign);
++ } else if (st0_tag == TAG_Zero) {
++ /* st(1) must be TAG_Valid here */
++ u_char sign = getsign(st1_ptr);
+
+- if ( signpositive(st0_ptr) )
+- {
+- /* An 80486 preserves the sign */
+- FPU_pop();
+- return;
+- }
++ if ((st1_tag == TW_Denormal) && (denormal_operand() < 0))
++ return;
+
+- FPU_copy_to_reg1(&CONST_PI, TAG_Valid);
+- setsign(st1_ptr, sign);
+- }
+- else if ( st0_tag == TAG_Zero )
+- {
+- /* st(1) must be TAG_Valid here */
+- u_char sign = getsign(st1_ptr);
+-
+- if ( (st1_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+-
+- FPU_copy_to_reg1(&CONST_PI2, TAG_Valid);
+- setsign(st1_ptr, sign);
+- }
++ FPU_copy_to_reg1(&CONST_PI2, TAG_Valid);
++ setsign(st1_ptr, sign);
+ }
+ #ifdef PARANOID
+- else
+- EXCEPTION(EX_INTERNAL | 0x125);
++ else
++ EXCEPTION(EX_INTERNAL | 0x125);
+ #endif /* PARANOID */
+
+- FPU_pop();
+- set_precision_flag_up(); /* We do not really know if up or down */
++ FPU_pop();
++ set_precision_flag_up(); /* We do not really know if up or down */
+ }
+
+-
+ static void fprem(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- do_fprem(st0_ptr, st0_tag, RC_CHOP);
++ do_fprem(st0_ptr, st0_tag, RC_CHOP);
+ }
+
+-
+ static void fprem1(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- do_fprem(st0_ptr, st0_tag, RC_RND);
++ do_fprem(st0_ptr, st0_tag, RC_RND);
+ }
+
+-
+ static void fyl2xp1(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- u_char sign, sign1;
+- FPU_REG *st1_ptr = &st(1), a, b;
+- u_char st1_tag = FPU_gettagi(1);
++ u_char sign, sign1;
++ FPU_REG *st1_ptr = &st(1), a, b;
++ u_char st1_tag = FPU_gettagi(1);
+
+- clear_C1();
+- if ( !((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid)) )
+- {
+- valid_yl2xp1:
++ clear_C1();
++ if (!((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid))) {
++ valid_yl2xp1:
+
+- sign = getsign(st0_ptr);
+- sign1 = getsign(st1_ptr);
++ sign = getsign(st0_ptr);
++ sign1 = getsign(st1_ptr);
+
+- FPU_to_exp16(st0_ptr, &a);
+- FPU_to_exp16(st1_ptr, &b);
++ FPU_to_exp16(st0_ptr, &a);
++ FPU_to_exp16(st1_ptr, &b);
+
+- if ( poly_l2p1(sign, sign1, &a, &b, st1_ptr) )
+- return;
++ if (poly_l2p1(sign, sign1, &a, &b, st1_ptr))
++ return;
+
+- FPU_pop();
+- return;
+- }
++ FPU_pop();
++ return;
++ }
+
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st1_tag == TAG_Special )
+- st1_tag = FPU_Special(st1_ptr);
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++ if (st1_tag == TAG_Special)
++ st1_tag = FPU_Special(st1_ptr);
+
+- if ( ((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
++ if (((st0_tag == TAG_Valid) && (st1_tag == TW_Denormal))
+ || ((st0_tag == TW_Denormal) && (st1_tag == TAG_Valid))
+- || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal)) )
+- {
+- if ( denormal_operand() < 0 )
+- return;
+-
+- goto valid_yl2xp1;
+- }
+- else if ( (st0_tag == TAG_Empty) | (st1_tag == TAG_Empty) )
+- {
+- FPU_stack_underflow_pop(1);
+- return;
+- }
+- else if ( st0_tag == TAG_Zero )
+- {
+- switch ( st1_tag )
+- {
+- case TW_Denormal:
+- if ( denormal_operand() < 0 )
+- return;
+-
+- case TAG_Zero:
+- case TAG_Valid:
+- setsign(st0_ptr, getsign(st0_ptr) ^ getsign(st1_ptr));
+- FPU_copy_to_reg1(st0_ptr, st0_tag);
+- break;
+-
+- case TW_Infinity:
+- /* Infinity*log(1) */
+- if ( arith_invalid(1) < 0 )
+- return;
+- break;
++ || ((st0_tag == TW_Denormal) && (st1_tag == TW_Denormal))) {
++ if (denormal_operand() < 0)
++ return;
+
+- case TW_NaN:
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0 )
+- return;
+- break;
+-
+- default:
++ goto valid_yl2xp1;
++ } else if ((st0_tag == TAG_Empty) | (st1_tag == TAG_Empty)) {
++ FPU_stack_underflow_pop(1);
++ return;
++ } else if (st0_tag == TAG_Zero) {
++ switch (st1_tag) {
++ case TW_Denormal:
++ if (denormal_operand() < 0)
++ return;
+
-+ ablkcipher_request_set_tfm(abreq, ctx->ctr);
-+ ablkcipher_request_set_callback(abreq, pctx->flags,
-+ crypto_ccm_decrypt_done, req);
-+ ablkcipher_request_set_crypt(abreq, pctx->src, dst, cryptlen + 16, iv);
-+ err = crypto_ablkcipher_decrypt(abreq);
-+ if (err)
-+ return err;
-+
-+ err = crypto_ccm_auth(req, req->dst, cryptlen);
-+ if (err)
-+ return err;
-+
-+ /* verify */
-+ if (memcmp(authtag, odata, authsize))
-+ return -EBADMSG;
-+
-+ return err;
-+}
-+
-+static int crypto_ccm_init_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct ccm_instance_ctx *ictx = crypto_instance_ctx(inst);
-+ struct crypto_ccm_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_cipher *cipher;
-+ struct crypto_ablkcipher *ctr;
-+ unsigned long align;
-+ int err;
-+
-+ cipher = crypto_spawn_cipher(&ictx->cipher);
-+ if (IS_ERR(cipher))
-+ return PTR_ERR(cipher);
-+
-+ ctr = crypto_spawn_skcipher(&ictx->ctr);
-+ err = PTR_ERR(ctr);
-+ if (IS_ERR(ctr))
-+ goto err_free_cipher;
-+
-+ ctx->cipher = cipher;
-+ ctx->ctr = ctr;
-+
-+ align = crypto_tfm_alg_alignmask(tfm);
-+ align &= ~(crypto_tfm_ctx_alignment() - 1);
-+ tfm->crt_aead.reqsize = align +
-+ sizeof(struct crypto_ccm_req_priv_ctx) +
-+ crypto_ablkcipher_reqsize(ctr);
-+
-+ return 0;
-+
-+err_free_cipher:
-+ crypto_free_cipher(cipher);
-+ return err;
-+}
-+
-+static void crypto_ccm_exit_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_ccm_ctx *ctx = crypto_tfm_ctx(tfm);
-+
-+ crypto_free_cipher(ctx->cipher);
-+ crypto_free_ablkcipher(ctx->ctr);
-+}
-+
-+static struct crypto_instance *crypto_ccm_alloc_common(struct rtattr **tb,
-+ const char *full_name,
-+ const char *ctr_name,
-+ const char *cipher_name)
-+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *ctr;
-+ struct crypto_alg *cipher;
-+ struct ccm_instance_ctx *ictx;
-+ int err;
-+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
-+
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
-+ return ERR_PTR(-EINVAL);
-+
-+ cipher = crypto_alg_mod_lookup(cipher_name, CRYPTO_ALG_TYPE_CIPHER,
-+ CRYPTO_ALG_TYPE_MASK);
-+ err = PTR_ERR(cipher);
-+ if (IS_ERR(cipher))
-+ return ERR_PTR(err);
-+
-+ err = -EINVAL;
-+ if (cipher->cra_blocksize != 16)
-+ goto out_put_cipher;
-+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
-+ err = -ENOMEM;
-+ if (!inst)
-+ goto out_put_cipher;
-+
-+ ictx = crypto_instance_ctx(inst);
++ case TAG_Zero:
++ case TAG_Valid:
++ setsign(st0_ptr, getsign(st0_ptr) ^ getsign(st1_ptr));
++ FPU_copy_to_reg1(st0_ptr, st0_tag);
++ break;
+
-+ err = crypto_init_spawn(&ictx->cipher, cipher, inst,
-+ CRYPTO_ALG_TYPE_MASK);
-+ if (err)
-+ goto err_free_inst;
++ case TW_Infinity:
++ /* Infinity*log(1) */
++ if (arith_invalid(1) < 0)
++ return;
++ break;
+
-+ crypto_set_skcipher_spawn(&ictx->ctr, inst);
-+ err = crypto_grab_skcipher(&ictx->ctr, ctr_name, 0,
-+ crypto_requires_sync(algt->type,
-+ algt->mask));
-+ if (err)
-+ goto err_drop_cipher;
++ case TW_NaN:
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0)
++ return;
++ break;
+
-+ ctr = crypto_skcipher_spawn_alg(&ictx->ctr);
++ default:
+ #ifdef PARANOID
+- EXCEPTION(EX_INTERNAL | 0x116);
+- return;
++ EXCEPTION(EX_INTERNAL | 0x116);
++ return;
+ #endif /* PARANOID */
+- break;
+- }
+- }
+- else if ( (st0_tag == TAG_Valid) || (st0_tag == TW_Denormal) )
+- {
+- switch ( st1_tag )
+- {
+- case TAG_Zero:
+- if ( signnegative(st0_ptr) )
+- {
+- if ( exponent(st0_ptr) >= 0 )
+- {
+- /* st(0) holds <= -1.0 */
+-#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
+- changesign(st1_ptr);
++ break;
++ }
++ } else if ((st0_tag == TAG_Valid) || (st0_tag == TW_Denormal)) {
++ switch (st1_tag) {
++ case TAG_Zero:
++ if (signnegative(st0_ptr)) {
++ if (exponent(st0_ptr) >= 0) {
++ /* st(0) holds <= -1.0 */
++#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
++ changesign(st1_ptr);
+ #else
+- if ( arith_invalid(1) < 0 )
+- return;
++ if (arith_invalid(1) < 0)
++ return;
+ #endif /* PECULIAR_486 */
+- }
+- else if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+- else
+- changesign(st1_ptr);
+- }
+- else if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+- break;
+-
+- case TW_Infinity:
+- if ( signnegative(st0_ptr) )
+- {
+- if ( (exponent(st0_ptr) >= 0) &&
+- !((st0_ptr->sigh == 0x80000000) &&
+- (st0_ptr->sigl == 0)) )
+- {
+- /* st(0) holds < -1.0 */
+-#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
+- changesign(st1_ptr);
++ } else if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
++ else
++ changesign(st1_ptr);
++ } else if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
++ break;
+
-+ /* Not a stream cipher? */
-+ err = -EINVAL;
-+ if (ctr->cra_blocksize != 1)
-+ goto err_drop_ctr;
++ case TW_Infinity:
++ if (signnegative(st0_ptr)) {
++ if ((exponent(st0_ptr) >= 0) &&
++ !((st0_ptr->sigh == 0x80000000) &&
++ (st0_ptr->sigl == 0))) {
++ /* st(0) holds < -1.0 */
++#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
++ changesign(st1_ptr);
+ #else
+- if ( arith_invalid(1) < 0 ) return;
++ if (arith_invalid(1) < 0)
++ return;
+ #endif /* PECULIAR_486 */
++ } else if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
++ else
++ changesign(st1_ptr);
++ } else if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
++ break;
+
-+ /* We want the real thing! */
-+ if (ctr->cra_ablkcipher.ivsize != 16)
-+ goto err_drop_ctr;
++ case TW_NaN:
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0)
++ return;
+ }
+- else if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+- else
+- changesign(st1_ptr);
+- }
+- else if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+- break;
+-
+- case TW_NaN:
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0 )
+- return;
+- }
+
+- }
+- else if ( st0_tag == TW_NaN )
+- {
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0 )
+- return;
+- }
+- else if ( st0_tag == TW_Infinity )
+- {
+- if ( st1_tag == TW_NaN )
+- {
+- if ( real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0 )
+- return;
+- }
+- else if ( signnegative(st0_ptr) )
+- {
++ } else if (st0_tag == TW_NaN) {
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0)
++ return;
++ } else if (st0_tag == TW_Infinity) {
++ if (st1_tag == TW_NaN) {
++ if (real_2op_NaN(st0_ptr, st0_tag, 1, st0_ptr) < 0)
++ return;
++ } else if (signnegative(st0_ptr)) {
+ #ifndef PECULIAR_486
+- /* This should have higher priority than denormals, but... */
+- if ( arith_invalid(1) < 0 ) /* log(-infinity) */
+- return;
++ /* This should have higher priority than denormals, but... */
++ if (arith_invalid(1) < 0) /* log(-infinity) */
++ return;
+ #endif /* PECULIAR_486 */
+- if ( (st1_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
++ if ((st1_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+ #ifdef PECULIAR_486
+- /* Denormal operands actually get higher priority */
+- if ( arith_invalid(1) < 0 ) /* log(-infinity) */
+- return;
++ /* Denormal operands actually get higher priority */
++ if (arith_invalid(1) < 0) /* log(-infinity) */
++ return;
+ #endif /* PECULIAR_486 */
+- }
+- else if ( st1_tag == TAG_Zero )
+- {
+- /* log(infinity) */
+- if ( arith_invalid(1) < 0 )
+- return;
+- }
+-
+- /* st(1) must be valid here. */
++ } else if (st1_tag == TAG_Zero) {
++ /* log(infinity) */
++ if (arith_invalid(1) < 0)
++ return;
++ }
+
+- else if ( (st1_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
++ /* st(1) must be valid here. */
+
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "ccm_base(%s,%s)", ctr->cra_driver_name,
-+ cipher->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
-+ goto err_drop_ctr;
++ else if ((st1_tag == TW_Denormal) && (denormal_operand() < 0))
++ return;
+
+- /* The Manual says that log(Infinity) is invalid, but a real
+- 80486 sensibly says that it is o.k. */
+- else
+- {
+- u_char sign = getsign(st1_ptr);
+- FPU_copy_to_reg1(&CONST_INF, TAG_Special);
+- setsign(st1_ptr, sign);
++ /* The Manual says that log(Infinity) is invalid, but a real
++ 80486 sensibly says that it is o.k. */
++ else {
++ u_char sign = getsign(st1_ptr);
++ FPU_copy_to_reg1(&CONST_INF, TAG_Special);
++ setsign(st1_ptr, sign);
++ }
+ }
+- }
+ #ifdef PARANOID
+- else
+- {
+- EXCEPTION(EX_INTERNAL | 0x117);
+- return;
+- }
++ else {
++ EXCEPTION(EX_INTERNAL | 0x117);
++ return;
++ }
+ #endif /* PARANOID */
+
+- FPU_pop();
+- return;
++ FPU_pop();
++ return;
+
+ }
+
+-
+ static void fscale(FPU_REG *st0_ptr, u_char st0_tag)
+ {
+- FPU_REG *st1_ptr = &st(1);
+- u_char st1_tag = FPU_gettagi(1);
+- int old_cw = control_word;
+- u_char sign = getsign(st0_ptr);
+-
+- clear_C1();
+- if ( !((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid)) )
+- {
+- long scale;
+- FPU_REG tmp;
+-
+- /* Convert register for internal use. */
+- setexponent16(st0_ptr, exponent(st0_ptr));
+-
+- valid_scale:
+-
+- if ( exponent(st1_ptr) > 30 )
+- {
+- /* 2^31 is far too large, would require 2^(2^30) or 2^(-2^30) */
+-
+- if ( signpositive(st1_ptr) )
+- {
+- EXCEPTION(EX_Overflow);
+- FPU_copy_to_reg0(&CONST_INF, TAG_Special);
+- }
+- else
+- {
+- EXCEPTION(EX_Underflow);
+- FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
+- }
+- setsign(st0_ptr, sign);
+- return;
+- }
+-
+- control_word &= ~CW_RC;
+- control_word |= RC_CHOP;
+- reg_copy(st1_ptr, &tmp);
+- FPU_round_to_int(&tmp, st1_tag); /* This can never overflow here */
+- control_word = old_cw;
+- scale = signnegative(st1_ptr) ? -tmp.sigl : tmp.sigl;
+- scale += exponent16(st0_ptr);
+-
+- setexponent16(st0_ptr, scale);
+-
+- /* Use FPU_round() to properly detect under/overflow etc */
+- FPU_round(st0_ptr, 0, 0, control_word, sign);
+-
+- return;
+- }
+-
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st1_tag == TAG_Special )
+- st1_tag = FPU_Special(st1_ptr);
+-
+- if ( (st0_tag == TAG_Valid) || (st0_tag == TW_Denormal) )
+- {
+- switch ( st1_tag )
+- {
+- case TAG_Valid:
+- /* st(0) must be a denormal */
+- if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+-
+- FPU_to_exp16(st0_ptr, st0_ptr); /* Will not be left on stack */
+- goto valid_scale;
+-
+- case TAG_Zero:
+- if ( st0_tag == TW_Denormal )
+- denormal_operand();
+- return;
+-
+- case TW_Denormal:
+- denormal_operand();
+- return;
+-
+- case TW_Infinity:
+- if ( (st0_tag == TW_Denormal) && (denormal_operand() < 0) )
+- return;
+-
+- if ( signpositive(st1_ptr) )
+- FPU_copy_to_reg0(&CONST_INF, TAG_Special);
+- else
+- FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
+- setsign(st0_ptr, sign);
+- return;
++ FPU_REG *st1_ptr = &st(1);
++ u_char st1_tag = FPU_gettagi(1);
++ int old_cw = control_word;
++ u_char sign = getsign(st0_ptr);
++
++ clear_C1();
++ if (!((st0_tag ^ TAG_Valid) | (st1_tag ^ TAG_Valid))) {
++ long scale;
++ FPU_REG tmp;
++
++ /* Convert register for internal use. */
++ setexponent16(st0_ptr, exponent(st0_ptr));
++
++ valid_scale:
++
++ if (exponent(st1_ptr) > 30) {
++ /* 2^31 is far too large, would require 2^(2^30) or 2^(-2^30) */
++
++ if (signpositive(st1_ptr)) {
++ EXCEPTION(EX_Overflow);
++ FPU_copy_to_reg0(&CONST_INF, TAG_Special);
++ } else {
++ EXCEPTION(EX_Underflow);
++ FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
++ }
++ setsign(st0_ptr, sign);
++ return;
++ }
+
+- case TW_NaN:
+- real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
+- return;
+- }
+- }
+- else if ( st0_tag == TAG_Zero )
+- {
+- switch ( st1_tag )
+- {
+- case TAG_Valid:
+- case TAG_Zero:
+- return;
++ control_word &= ~CW_RC;
++ control_word |= RC_CHOP;
++ reg_copy(st1_ptr, &tmp);
++ FPU_round_to_int(&tmp, st1_tag); /* This can never overflow here */
++ control_word = old_cw;
++ scale = signnegative(st1_ptr) ? -tmp.sigl : tmp.sigl;
++ scale += exponent16(st0_ptr);
+
+- case TW_Denormal:
+- denormal_operand();
+- return;
++ setexponent16(st0_ptr, scale);
+
+- case TW_Infinity:
+- if ( signpositive(st1_ptr) )
+- arith_invalid(0); /* Zero scaled by +Infinity */
+- return;
++ /* Use FPU_round() to properly detect under/overflow etc */
++ FPU_round(st0_ptr, 0, 0, control_word, sign);
+
+- case TW_NaN:
+- real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
+- return;
++ return;
+ }
+- }
+- else if ( st0_tag == TW_Infinity )
+- {
+- switch ( st1_tag )
+- {
+- case TAG_Valid:
+- case TAG_Zero:
+- return;
+-
+- case TW_Denormal:
+- denormal_operand();
+- return;
+
+- case TW_Infinity:
+- if ( signnegative(st1_ptr) )
+- arith_invalid(0); /* Infinity scaled by -Infinity */
+- return;
+-
+- case TW_NaN:
+- real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
+- return;
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++ if (st1_tag == TAG_Special)
++ st1_tag = FPU_Special(st1_ptr);
++
++ if ((st0_tag == TAG_Valid) || (st0_tag == TW_Denormal)) {
++ switch (st1_tag) {
++ case TAG_Valid:
++ /* st(0) must be a denormal */
++ if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
-+ memcpy(inst->alg.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
++ FPU_to_exp16(st0_ptr, st0_ptr); /* Will not be left on stack */
++ goto valid_scale;
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
-+ inst->alg.cra_flags |= ctr->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = cipher->cra_priority + ctr->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = cipher->cra_alignmask | ctr->cra_alignmask |
-+ (__alignof__(u32) - 1);
-+ inst->alg.cra_type = &crypto_aead_type;
-+ inst->alg.cra_aead.ivsize = 16;
-+ inst->alg.cra_aead.maxauthsize = 16;
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
-+ inst->alg.cra_init = crypto_ccm_init_tfm;
-+ inst->alg.cra_exit = crypto_ccm_exit_tfm;
-+ inst->alg.cra_aead.setkey = crypto_ccm_setkey;
-+ inst->alg.cra_aead.setauthsize = crypto_ccm_setauthsize;
-+ inst->alg.cra_aead.encrypt = crypto_ccm_encrypt;
-+ inst->alg.cra_aead.decrypt = crypto_ccm_decrypt;
++ case TAG_Zero:
++ if (st0_tag == TW_Denormal)
++ denormal_operand();
++ return;
+
-+out:
-+ crypto_mod_put(cipher);
-+ return inst;
++ case TW_Denormal:
++ denormal_operand();
++ return;
+
-+err_drop_ctr:
-+ crypto_drop_skcipher(&ictx->ctr);
-+err_drop_cipher:
-+ crypto_drop_spawn(&ictx->cipher);
-+err_free_inst:
-+ kfree(inst);
-+out_put_cipher:
-+ inst = ERR_PTR(err);
-+ goto out;
-+}
++ case TW_Infinity:
++ if ((st0_tag == TW_Denormal)
++ && (denormal_operand() < 0))
++ return;
+
-+static struct crypto_instance *crypto_ccm_alloc(struct rtattr **tb)
-+{
-+ int err;
-+ const char *cipher_name;
-+ char ctr_name[CRYPTO_MAX_ALG_NAME];
-+ char full_name[CRYPTO_MAX_ALG_NAME];
++ if (signpositive(st1_ptr))
++ FPU_copy_to_reg0(&CONST_INF, TAG_Special);
++ else
++ FPU_copy_to_reg0(&CONST_Z, TAG_Zero);
++ setsign(st0_ptr, sign);
++ return;
+
-+ cipher_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(cipher_name);
-+ if (IS_ERR(cipher_name))
-+ return ERR_PTR(err);
++ case TW_NaN:
++ real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
++ return;
++ }
++ } else if (st0_tag == TAG_Zero) {
++ switch (st1_tag) {
++ case TAG_Valid:
++ case TAG_Zero:
++ return;
+
-+ if (snprintf(ctr_name, CRYPTO_MAX_ALG_NAME, "ctr(%s)",
-+ cipher_name) >= CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
++ case TW_Denormal:
++ denormal_operand();
++ return;
+
-+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
++ case TW_Infinity:
++ if (signpositive(st1_ptr))
++ arith_invalid(0); /* Zero scaled by +Infinity */
++ return;
+
-+ return crypto_ccm_alloc_common(tb, full_name, ctr_name, cipher_name);
-+}
++ case TW_NaN:
++ real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
++ return;
++ }
++ } else if (st0_tag == TW_Infinity) {
++ switch (st1_tag) {
++ case TAG_Valid:
++ case TAG_Zero:
++ return;
+
-+static void crypto_ccm_free(struct crypto_instance *inst)
-+{
-+ struct ccm_instance_ctx *ctx = crypto_instance_ctx(inst);
++ case TW_Denormal:
++ denormal_operand();
++ return;
+
-+ crypto_drop_spawn(&ctx->cipher);
-+ crypto_drop_skcipher(&ctx->ctr);
-+ kfree(inst);
-+}
++ case TW_Infinity:
++ if (signnegative(st1_ptr))
++ arith_invalid(0); /* Infinity scaled by -Infinity */
++ return;
+
-+static struct crypto_template crypto_ccm_tmpl = {
-+ .name = "ccm",
-+ .alloc = crypto_ccm_alloc,
-+ .free = crypto_ccm_free,
-+ .module = THIS_MODULE,
++ case TW_NaN:
++ real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
++ return;
++ }
++ } else if (st0_tag == TW_NaN) {
++ if (st1_tag != TAG_Empty) {
++ real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr);
++ return;
++ }
+ }
+- }
+- else if ( st0_tag == TW_NaN )
+- {
+- if ( st1_tag != TAG_Empty )
+- { real_2op_NaN(st1_ptr, st1_tag, 0, st0_ptr); return; }
+- }
+-
+ #ifdef PARANOID
+- if ( !((st0_tag == TAG_Empty) || (st1_tag == TAG_Empty)) )
+- {
+- EXCEPTION(EX_INTERNAL | 0x115);
+- return;
+- }
++ if (!((st0_tag == TAG_Empty) || (st1_tag == TAG_Empty))) {
++ EXCEPTION(EX_INTERNAL | 0x115);
++ return;
++ }
+ #endif
+
+- /* At least one of st(0), st(1) must be empty */
+- FPU_stack_underflow();
++ /* At least one of st(0), st(1) must be empty */
++ FPU_stack_underflow();
+
+ }
+
+-
+ /*---------------------------------------------------------------------------*/
+
+ static FUNC_ST0 const trig_table_a[] = {
+- f2xm1, fyl2x, fptan, fpatan,
+- fxtract, fprem1, (FUNC_ST0)fdecstp, (FUNC_ST0)fincstp
++ f2xm1, fyl2x, fptan, fpatan,
++ fxtract, fprem1, (FUNC_ST0) fdecstp, (FUNC_ST0) fincstp
+ };
+
+ void FPU_triga(void)
+ {
+- (trig_table_a[FPU_rm])(&st(0), FPU_gettag0());
++ (trig_table_a[FPU_rm]) (&st(0), FPU_gettag0());
+ }
+
+-
+-static FUNC_ST0 const trig_table_b[] =
+- {
+- fprem, fyl2xp1, fsqrt_, fsincos, frndint_, fscale, (FUNC_ST0)fsin, fcos
+- };
++static FUNC_ST0 const trig_table_b[] = {
++ fprem, fyl2xp1, fsqrt_, fsincos, frndint_, fscale, (FUNC_ST0) fsin, fcos
+};
+
+ void FPU_trigb(void)
+ {
+- (trig_table_b[FPU_rm])(&st(0), FPU_gettag0());
++ (trig_table_b[FPU_rm]) (&st(0), FPU_gettag0());
+ }
+diff --git a/arch/x86/math-emu/get_address.c b/arch/x86/math-emu/get_address.c
+index 2e2c51a..d701e2b 100644
+--- a/arch/x86/math-emu/get_address.c
++++ b/arch/x86/math-emu/get_address.c
+@@ -17,7 +17,6 @@
+ | other processes using the emulator while swapping is in progress. |
+ +---------------------------------------------------------------------------*/
+
+-
+ #include <linux/stddef.h>
+
+ #include <asm/uaccess.h>
+@@ -27,31 +26,30 @@
+ #include "exception.h"
+ #include "fpu_emu.h"
+
+-
+ #define FPU_WRITE_BIT 0x10
+
+ static int reg_offset[] = {
+- offsetof(struct info,___eax),
+- offsetof(struct info,___ecx),
+- offsetof(struct info,___edx),
+- offsetof(struct info,___ebx),
+- offsetof(struct info,___esp),
+- offsetof(struct info,___ebp),
+- offsetof(struct info,___esi),
+- offsetof(struct info,___edi)
++ offsetof(struct info, ___eax),
++ offsetof(struct info, ___ecx),
++ offsetof(struct info, ___edx),
++ offsetof(struct info, ___ebx),
++ offsetof(struct info, ___esp),
++ offsetof(struct info, ___ebp),
++ offsetof(struct info, ___esi),
++ offsetof(struct info, ___edi)
+ };
+
+ #define REG_(x) (*(long *)(reg_offset[(x)]+(u_char *) FPU_info))
+
+ static int reg_offset_vm86[] = {
+- offsetof(struct info,___cs),
+- offsetof(struct info,___vm86_ds),
+- offsetof(struct info,___vm86_es),
+- offsetof(struct info,___vm86_fs),
+- offsetof(struct info,___vm86_gs),
+- offsetof(struct info,___ss),
+- offsetof(struct info,___vm86_ds)
+- };
++ offsetof(struct info, ___cs),
++ offsetof(struct info, ___vm86_ds),
++ offsetof(struct info, ___vm86_es),
++ offsetof(struct info, ___vm86_fs),
++ offsetof(struct info, ___vm86_gs),
++ offsetof(struct info, ___ss),
++ offsetof(struct info, ___vm86_ds)
++};
+
+ #define VM86_REG_(x) (*(unsigned short *) \
+ (reg_offset_vm86[((unsigned)x)]+(u_char *) FPU_info))
+@@ -60,158 +58,141 @@ static int reg_offset_vm86[] = {
+ #define ___GS ___ds
+
+ static int reg_offset_pm[] = {
+- offsetof(struct info,___cs),
+- offsetof(struct info,___ds),
+- offsetof(struct info,___es),
+- offsetof(struct info,___fs),
+- offsetof(struct info,___GS),
+- offsetof(struct info,___ss),
+- offsetof(struct info,___ds)
+- };
++ offsetof(struct info, ___cs),
++ offsetof(struct info, ___ds),
++ offsetof(struct info, ___es),
++ offsetof(struct info, ___fs),
++ offsetof(struct info, ___GS),
++ offsetof(struct info, ___ss),
++ offsetof(struct info, ___ds)
++};
+
+ #define PM_REG_(x) (*(unsigned short *) \
+ (reg_offset_pm[((unsigned)x)]+(u_char *) FPU_info))
+
+-
+ /* Decode the SIB byte. This function assumes mod != 0 */
+ static int sib(int mod, unsigned long *fpu_eip)
+ {
+- u_char ss,index,base;
+- long offset;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(base, (u_char __user *) (*fpu_eip)); /* The SIB byte */
+- RE_ENTRANT_CHECK_ON;
+- (*fpu_eip)++;
+- ss = base >> 6;
+- index = (base >> 3) & 7;
+- base &= 7;
+-
+- if ((mod == 0) && (base == 5))
+- offset = 0; /* No base register */
+- else
+- offset = REG_(base);
+-
+- if (index == 4)
+- {
+- /* No index register */
+- /* A non-zero ss is illegal */
+- if ( ss )
+- EXCEPTION(EX_Invalid);
+- }
+- else
+- {
+- offset += (REG_(index)) << ss;
+- }
+-
+- if (mod == 1)
+- {
+- /* 8 bit signed displacement */
+- long displacement;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(displacement, (signed char __user *) (*fpu_eip));
+- offset += displacement;
+- RE_ENTRANT_CHECK_ON;
+- (*fpu_eip)++;
+- }
+- else if (mod == 2 || base == 5) /* The second condition also has mod==0 */
+- {
+- /* 32 bit displacement */
+- long displacement;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(4);
+- FPU_get_user(displacement, (long __user *) (*fpu_eip));
+- offset += displacement;
+- RE_ENTRANT_CHECK_ON;
+- (*fpu_eip) += 4;
+- }
+-
+- return offset;
+-}
++ u_char ss, index, base;
++ long offset;
+
-+static struct crypto_instance *crypto_ccm_base_alloc(struct rtattr **tb)
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(base, (u_char __user *) (*fpu_eip)); /* The SIB byte */
++ RE_ENTRANT_CHECK_ON;
++ (*fpu_eip)++;
++ ss = base >> 6;
++ index = (base >> 3) & 7;
++ base &= 7;
++
++ if ((mod == 0) && (base == 5))
++ offset = 0; /* No base register */
++ else
++ offset = REG_(base);
++
++ if (index == 4) {
++ /* No index register */
++ /* A non-zero ss is illegal */
++ if (ss)
++ EXCEPTION(EX_Invalid);
++ } else {
++ offset += (REG_(index)) << ss;
++ }
++
++ if (mod == 1) {
++ /* 8 bit signed displacement */
++ long displacement;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(displacement, (signed char __user *)(*fpu_eip));
++ offset += displacement;
++ RE_ENTRANT_CHECK_ON;
++ (*fpu_eip)++;
++ } else if (mod == 2 || base == 5) { /* The second condition also has mod==0 */
++ /* 32 bit displacement */
++ long displacement;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(4);
++ FPU_get_user(displacement, (long __user *)(*fpu_eip));
++ offset += displacement;
++ RE_ENTRANT_CHECK_ON;
++ (*fpu_eip) += 4;
++ }
+
++ return offset;
++}
+
+-static unsigned long vm86_segment(u_char segment,
+- struct address *addr)
++static unsigned long vm86_segment(u_char segment, struct address *addr)
+ {
+- segment--;
++ segment--;
+ #ifdef PARANOID
+- if ( segment > PREFIX_SS_ )
+- {
+- EXCEPTION(EX_INTERNAL|0x130);
+- math_abort(FPU_info,SIGSEGV);
+- }
++ if (segment > PREFIX_SS_) {
++ EXCEPTION(EX_INTERNAL | 0x130);
++ math_abort(FPU_info, SIGSEGV);
++ }
+ #endif /* PARANOID */
+- addr->selector = VM86_REG_(segment);
+- return (unsigned long)VM86_REG_(segment) << 4;
++ addr->selector = VM86_REG_(segment);
++ return (unsigned long)VM86_REG_(segment) << 4;
+ }
+
+-
+ /* This should work for 16 and 32 bit protected mode. */
+ static long pm_address(u_char FPU_modrm, u_char segment,
+ struct address *addr, long offset)
+-{
+- struct desc_struct descriptor;
+- unsigned long base_address, limit, address, seg_top;
+{
-+ int err;
-+ const char *ctr_name;
-+ const char *cipher_name;
-+ char full_name[CRYPTO_MAX_ALG_NAME];
-+
-+ ctr_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(ctr_name);
-+ if (IS_ERR(ctr_name))
-+ return ERR_PTR(err);
-+
-+ cipher_name = crypto_attr_alg_name(tb[2]);
-+ err = PTR_ERR(cipher_name);
-+ if (IS_ERR(cipher_name))
-+ return ERR_PTR(err);
-+
-+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
-+ ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
-+
-+ return crypto_ccm_alloc_common(tb, full_name, ctr_name, cipher_name);
++ struct desc_struct descriptor;
++ unsigned long base_address, limit, address, seg_top;
+
+- segment--;
++ segment--;
+
+ #ifdef PARANOID
+- /* segment is unsigned, so this also detects if segment was 0: */
+- if ( segment > PREFIX_SS_ )
+- {
+- EXCEPTION(EX_INTERNAL|0x132);
+- math_abort(FPU_info,SIGSEGV);
+- }
++ /* segment is unsigned, so this also detects if segment was 0: */
++ if (segment > PREFIX_SS_) {
++ EXCEPTION(EX_INTERNAL | 0x132);
++ math_abort(FPU_info, SIGSEGV);
++ }
+ #endif /* PARANOID */
+
+- switch ( segment )
+- {
+- /* gs isn't used by the kernel, so it still has its
+- user-space value. */
+- case PREFIX_GS_-1:
+- /* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
+- savesegment(gs, addr->selector);
+- break;
+- default:
+- addr->selector = PM_REG_(segment);
+- }
+-
+- descriptor = LDT_DESCRIPTOR(PM_REG_(segment));
+- base_address = SEG_BASE_ADDR(descriptor);
+- address = base_address + offset;
+- limit = base_address
+- + (SEG_LIMIT(descriptor)+1) * SEG_GRANULARITY(descriptor) - 1;
+- if ( limit < base_address ) limit = 0xffffffff;
+-
+- if ( SEG_EXPAND_DOWN(descriptor) )
+- {
+- if ( SEG_G_BIT(descriptor) )
+- seg_top = 0xffffffff;
+- else
+- {
+- seg_top = base_address + (1 << 20);
+- if ( seg_top < base_address ) seg_top = 0xffffffff;
++ switch (segment) {
++ /* gs isn't used by the kernel, so it still has its
++ user-space value. */
++ case PREFIX_GS_ - 1:
++ /* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
++ savesegment(gs, addr->selector);
++ break;
++ default:
++ addr->selector = PM_REG_(segment);
+ }
+- access_limit =
+- (address <= limit) || (address >= seg_top) ? 0 :
+- ((seg_top-address) >= 255 ? 255 : seg_top-address);
+- }
+- else
+- {
+- access_limit =
+- (address > limit) || (address < base_address) ? 0 :
+- ((limit-address) >= 254 ? 255 : limit-address+1);
+- }
+- if ( SEG_EXECUTE_ONLY(descriptor) ||
+- (!SEG_WRITE_PERM(descriptor) && (FPU_modrm & FPU_WRITE_BIT)) )
+- {
+- access_limit = 0;
+- }
+- return address;
+-}
+
++ descriptor = LDT_DESCRIPTOR(PM_REG_(segment));
++ base_address = SEG_BASE_ADDR(descriptor);
++ address = base_address + offset;
++ limit = base_address
++ + (SEG_LIMIT(descriptor) + 1) * SEG_GRANULARITY(descriptor) - 1;
++ if (limit < base_address)
++ limit = 0xffffffff;
++
++ if (SEG_EXPAND_DOWN(descriptor)) {
++ if (SEG_G_BIT(descriptor))
++ seg_top = 0xffffffff;
++ else {
++ seg_top = base_address + (1 << 20);
++ if (seg_top < base_address)
++ seg_top = 0xffffffff;
++ }
++ access_limit =
++ (address <= limit) || (address >= seg_top) ? 0 :
++ ((seg_top - address) >= 255 ? 255 : seg_top - address);
++ } else {
++ access_limit =
++ (address > limit) || (address < base_address) ? 0 :
++ ((limit - address) >= 254 ? 255 : limit - address + 1);
++ }
++ if (SEG_EXECUTE_ONLY(descriptor) ||
++ (!SEG_WRITE_PERM(descriptor) && (FPU_modrm & FPU_WRITE_BIT))) {
++ access_limit = 0;
++ }
++ return address;
+}
-+
-+static struct crypto_template crypto_ccm_base_tmpl = {
-+ .name = "ccm_base",
-+ .alloc = crypto_ccm_base_alloc,
-+ .free = crypto_ccm_free,
-+ .module = THIS_MODULE,
-+};
-+
-+static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
-+ unsigned int keylen)
+
+ /*
+ MOD R/M byte: MOD == 3 has a special use for the FPU
+@@ -221,7 +202,6 @@ static long pm_address(u_char FPU_modrm, u_char segment,
+ ..... ......... .........
+ MOD OPCODE(2) R/M
+
+-
+ SIB byte
+
+ 7 6 5 4 3 2 1 0
+@@ -231,208 +211,194 @@ static long pm_address(u_char FPU_modrm, u_char segment,
+ */
+
+ void __user *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
+- struct address *addr,
+- fpu_addr_modes addr_modes)
++ struct address *addr, fpu_addr_modes addr_modes)
+{
-+ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(parent);
-+ struct crypto_aead *child = ctx->child;
-+ int err;
++ u_char mod;
++ unsigned rm = FPU_modrm & 7;
++ long *cpu_reg_ptr;
++ int address = 0; /* Initialized just to stop compiler warnings. */
+
-+ if (keylen < 3)
-+ return -EINVAL;
++ /* Memory accessed via the cs selector is write protected
++ in `non-segmented' 32 bit protected mode. */
++ if (!addr_modes.default_mode && (FPU_modrm & FPU_WRITE_BIT)
++ && (addr_modes.override.segment == PREFIX_CS_)) {
++ math_abort(FPU_info, SIGSEGV);
++ }
+
-+ keylen -= 3;
-+ memcpy(ctx->nonce, key + keylen, 3);
++ addr->selector = FPU_DS; /* Default, for 32 bit non-segmented mode. */
+
-+ crypto_aead_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-+ crypto_aead_set_flags(child, crypto_aead_get_flags(parent) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_aead_setkey(child, key, keylen);
-+ crypto_aead_set_flags(parent, crypto_aead_get_flags(child) &
-+ CRYPTO_TFM_RES_MASK);
++ mod = (FPU_modrm >> 6) & 3;
+
-+ return err;
-+}
++ if (rm == 4 && mod != 3) {
++ address = sib(mod, fpu_eip);
++ } else {
++ cpu_reg_ptr = ®_(rm);
++ switch (mod) {
++ case 0:
++ if (rm == 5) {
++ /* Special case: disp32 */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(4);
++ FPU_get_user(address,
++ (unsigned long __user
++ *)(*fpu_eip));
++ (*fpu_eip) += 4;
++ RE_ENTRANT_CHECK_ON;
++ addr->offset = address;
++ return (void __user *)address;
++ } else {
++ address = *cpu_reg_ptr; /* Just return the contents
++ of the cpu register */
++ addr->offset = address;
++ return (void __user *)address;
++ }
++ case 1:
++ /* 8 bit signed displacement */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(address, (signed char __user *)(*fpu_eip));
++ RE_ENTRANT_CHECK_ON;
++ (*fpu_eip)++;
++ break;
++ case 2:
++ /* 32 bit displacement */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(4);
++ FPU_get_user(address, (long __user *)(*fpu_eip));
++ (*fpu_eip) += 4;
++ RE_ENTRANT_CHECK_ON;
++ break;
++ case 3:
++ /* Not legal for the FPU */
++ EXCEPTION(EX_Invalid);
++ }
++ address += *cpu_reg_ptr;
++ }
+
-+static int crypto_rfc4309_setauthsize(struct crypto_aead *parent,
-+ unsigned int authsize)
-+{
-+ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(parent);
++ addr->offset = address;
+
-+ switch (authsize) {
-+ case 8:
-+ case 12:
-+ case 16:
++ switch (addr_modes.default_mode) {
++ case 0:
++ break;
++ case VM86:
++ address += vm86_segment(addr_modes.override.segment, addr);
++ break;
++ case PM16:
++ case SEG32:
++ address = pm_address(FPU_modrm, addr_modes.override.segment,
++ addr, address);
+ break;
+ default:
-+ return -EINVAL;
++ EXCEPTION(EX_INTERNAL | 0x133);
+ }
+
-+ return crypto_aead_setauthsize(ctx->child, authsize);
-+}
-+
-+static struct aead_request *crypto_rfc4309_crypt(struct aead_request *req)
-+{
-+ struct aead_request *subreq = aead_request_ctx(req);
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_aead *child = ctx->child;
-+ u8 *iv = PTR_ALIGN((u8 *)(subreq + 1) + crypto_aead_reqsize(child),
-+ crypto_aead_alignmask(child) + 1);
-+
-+ /* L' */
-+ iv[0] = 3;
-+
-+ memcpy(iv + 1, ctx->nonce, 3);
-+ memcpy(iv + 4, req->iv, 8);
-+
-+ aead_request_set_tfm(subreq, child);
-+ aead_request_set_callback(subreq, req->base.flags, req->base.complete,
-+ req->base.data);
-+ aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, iv);
-+ aead_request_set_assoc(subreq, req->assoc, req->assoclen);
-+
-+ return subreq;
++ return (void __user *)address;
+}
+
-+static int crypto_rfc4309_encrypt(struct aead_request *req)
-+{
-+ req = crypto_rfc4309_crypt(req);
++void __user *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
++ struct address *addr, fpu_addr_modes addr_modes)
+ {
+- u_char mod;
+- unsigned rm = FPU_modrm & 7;
+- long *cpu_reg_ptr;
+- int address = 0; /* Initialized just to stop compiler warnings. */
+-
+- /* Memory accessed via the cs selector is write protected
+- in `non-segmented' 32 bit protected mode. */
+- if ( !addr_modes.default_mode && (FPU_modrm & FPU_WRITE_BIT)
+- && (addr_modes.override.segment == PREFIX_CS_) )
+- {
+- math_abort(FPU_info,SIGSEGV);
+- }
+-
+- addr->selector = FPU_DS; /* Default, for 32 bit non-segmented mode. */
+-
+- mod = (FPU_modrm >> 6) & 3;
+-
+- if (rm == 4 && mod != 3)
+- {
+- address = sib(mod, fpu_eip);
+- }
+- else
+- {
+- cpu_reg_ptr = & REG_(rm);
+- switch (mod)
+- {
++ u_char mod;
++ unsigned rm = FPU_modrm & 7;
++ int address = 0; /* Default used for mod == 0 */
+
-+ return crypto_aead_encrypt(req);
-+}
++ /* Memory accessed via the cs selector is write protected
++ in `non-segmented' 32 bit protected mode. */
++ if (!addr_modes.default_mode && (FPU_modrm & FPU_WRITE_BIT)
++ && (addr_modes.override.segment == PREFIX_CS_)) {
++ math_abort(FPU_info, SIGSEGV);
++ }
+
-+static int crypto_rfc4309_decrypt(struct aead_request *req)
-+{
-+ req = crypto_rfc4309_crypt(req);
++ addr->selector = FPU_DS; /* Default, for 32 bit non-segmented mode. */
+
-+ return crypto_aead_decrypt(req);
-+}
++ mod = (FPU_modrm >> 6) & 3;
+
-+static int crypto_rfc4309_init_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_aead_spawn *spawn = crypto_instance_ctx(inst);
-+ struct crypto_rfc4309_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_aead *aead;
-+ unsigned long align;
++ switch (mod) {
+ case 0:
+- if (rm == 5)
+- {
+- /* Special case: disp32 */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(4);
+- FPU_get_user(address, (unsigned long __user *) (*fpu_eip));
+- (*fpu_eip) += 4;
+- RE_ENTRANT_CHECK_ON;
+- addr->offset = address;
+- return (void __user *) address;
+- }
+- else
+- {
+- address = *cpu_reg_ptr; /* Just return the contents
+- of the cpu register */
+- addr->offset = address;
+- return (void __user *) address;
+- }
++ if (rm == 6) {
++ /* Special case: disp16 */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(2);
++ FPU_get_user(address,
++ (unsigned short __user *)(*fpu_eip));
++ (*fpu_eip) += 2;
++ RE_ENTRANT_CHECK_ON;
++ goto add_segment;
++ }
++ break;
+ case 1:
+- /* 8 bit signed displacement */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(address, (signed char __user *) (*fpu_eip));
+- RE_ENTRANT_CHECK_ON;
+- (*fpu_eip)++;
+- break;
++ /* 8 bit signed displacement */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(1);
++ FPU_get_user(address, (signed char __user *)(*fpu_eip));
++ RE_ENTRANT_CHECK_ON;
++ (*fpu_eip)++;
++ break;
+ case 2:
+- /* 32 bit displacement */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(4);
+- FPU_get_user(address, (long __user *) (*fpu_eip));
+- (*fpu_eip) += 4;
+- RE_ENTRANT_CHECK_ON;
+- break;
++ /* 16 bit displacement */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_code_access_ok(2);
++ FPU_get_user(address, (unsigned short __user *)(*fpu_eip));
++ (*fpu_eip) += 2;
++ RE_ENTRANT_CHECK_ON;
++ break;
+ case 3:
+- /* Not legal for the FPU */
+- EXCEPTION(EX_Invalid);
++ /* Not legal for the FPU */
++ EXCEPTION(EX_Invalid);
++ break;
++ }
++ switch (rm) {
++ case 0:
++ address += FPU_info->___ebx + FPU_info->___esi;
++ break;
++ case 1:
++ address += FPU_info->___ebx + FPU_info->___edi;
++ break;
++ case 2:
++ address += FPU_info->___ebp + FPU_info->___esi;
++ if (addr_modes.override.segment == PREFIX_DEFAULT)
++ addr_modes.override.segment = PREFIX_SS_;
++ break;
++ case 3:
++ address += FPU_info->___ebp + FPU_info->___edi;
++ if (addr_modes.override.segment == PREFIX_DEFAULT)
++ addr_modes.override.segment = PREFIX_SS_;
++ break;
++ case 4:
++ address += FPU_info->___esi;
++ break;
++ case 5:
++ address += FPU_info->___edi;
++ break;
++ case 6:
++ address += FPU_info->___ebp;
++ if (addr_modes.override.segment == PREFIX_DEFAULT)
++ addr_modes.override.segment = PREFIX_SS_;
++ break;
++ case 7:
++ address += FPU_info->___ebx;
++ break;
+ }
+- address += *cpu_reg_ptr;
+- }
+-
+- addr->offset = address;
+-
+- switch ( addr_modes.default_mode )
+- {
+- case 0:
+- break;
+- case VM86:
+- address += vm86_segment(addr_modes.override.segment, addr);
+- break;
+- case PM16:
+- case SEG32:
+- address = pm_address(FPU_modrm, addr_modes.override.segment,
+- addr, address);
+- break;
+- default:
+- EXCEPTION(EX_INTERNAL|0x133);
+- }
+-
+- return (void __user *)address;
+-}
+
++ add_segment:
++ address &= 0xffff;
+
+-void __user *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
+- struct address *addr,
+- fpu_addr_modes addr_modes)
+-{
+- u_char mod;
+- unsigned rm = FPU_modrm & 7;
+- int address = 0; /* Default used for mod == 0 */
+-
+- /* Memory accessed via the cs selector is write protected
+- in `non-segmented' 32 bit protected mode. */
+- if ( !addr_modes.default_mode && (FPU_modrm & FPU_WRITE_BIT)
+- && (addr_modes.override.segment == PREFIX_CS_) )
+- {
+- math_abort(FPU_info,SIGSEGV);
+- }
+-
+- addr->selector = FPU_DS; /* Default, for 32 bit non-segmented mode. */
+-
+- mod = (FPU_modrm >> 6) & 3;
+-
+- switch (mod)
+- {
+- case 0:
+- if (rm == 6)
+- {
+- /* Special case: disp16 */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(2);
+- FPU_get_user(address, (unsigned short __user *) (*fpu_eip));
+- (*fpu_eip) += 2;
+- RE_ENTRANT_CHECK_ON;
+- goto add_segment;
++ addr->offset = address;
+
-+ aead = crypto_spawn_aead(spawn);
-+ if (IS_ERR(aead))
-+ return PTR_ERR(aead);
++ switch (addr_modes.default_mode) {
++ case 0:
++ break;
++ case VM86:
++ address += vm86_segment(addr_modes.override.segment, addr);
++ break;
++ case PM16:
++ case SEG32:
++ address = pm_address(FPU_modrm, addr_modes.override.segment,
++ addr, address);
++ break;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x131);
+ }
+- break;
+- case 1:
+- /* 8 bit signed displacement */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(1);
+- FPU_get_user(address, (signed char __user *) (*fpu_eip));
+- RE_ENTRANT_CHECK_ON;
+- (*fpu_eip)++;
+- break;
+- case 2:
+- /* 16 bit displacement */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_code_access_ok(2);
+- FPU_get_user(address, (unsigned short __user *) (*fpu_eip));
+- (*fpu_eip) += 2;
+- RE_ENTRANT_CHECK_ON;
+- break;
+- case 3:
+- /* Not legal for the FPU */
+- EXCEPTION(EX_Invalid);
+- break;
+- }
+- switch ( rm )
+- {
+- case 0:
+- address += FPU_info->___ebx + FPU_info->___esi;
+- break;
+- case 1:
+- address += FPU_info->___ebx + FPU_info->___edi;
+- break;
+- case 2:
+- address += FPU_info->___ebp + FPU_info->___esi;
+- if ( addr_modes.override.segment == PREFIX_DEFAULT )
+- addr_modes.override.segment = PREFIX_SS_;
+- break;
+- case 3:
+- address += FPU_info->___ebp + FPU_info->___edi;
+- if ( addr_modes.override.segment == PREFIX_DEFAULT )
+- addr_modes.override.segment = PREFIX_SS_;
+- break;
+- case 4:
+- address += FPU_info->___esi;
+- break;
+- case 5:
+- address += FPU_info->___edi;
+- break;
+- case 6:
+- address += FPU_info->___ebp;
+- if ( addr_modes.override.segment == PREFIX_DEFAULT )
+- addr_modes.override.segment = PREFIX_SS_;
+- break;
+- case 7:
+- address += FPU_info->___ebx;
+- break;
+- }
+-
+- add_segment:
+- address &= 0xffff;
+-
+- addr->offset = address;
+-
+- switch ( addr_modes.default_mode )
+- {
+- case 0:
+- break;
+- case VM86:
+- address += vm86_segment(addr_modes.override.segment, addr);
+- break;
+- case PM16:
+- case SEG32:
+- address = pm_address(FPU_modrm, addr_modes.override.segment,
+- addr, address);
+- break;
+- default:
+- EXCEPTION(EX_INTERNAL|0x131);
+- }
+-
+- return (void __user *)address ;
+
-+ ctx->child = aead;
++ return (void __user *)address;
+ }
+diff --git a/arch/x86/math-emu/load_store.c b/arch/x86/math-emu/load_store.c
+index eebd6fb..2931ff3 100644
+--- a/arch/x86/math-emu/load_store.c
++++ b/arch/x86/math-emu/load_store.c
+@@ -26,247 +26,257 @@
+ #include "status_w.h"
+ #include "control_w.h"
+
+-
+-#define _NONE_ 0 /* st0_ptr etc not needed */
+-#define _REG0_ 1 /* Will be storing st(0) */
+-#define _PUSH_ 3 /* Need to check for space to push onto stack */
+-#define _null_ 4 /* Function illegal or not implemented */
++#define _NONE_ 0 /* st0_ptr etc not needed */
++#define _REG0_ 1 /* Will be storing st(0) */
++#define _PUSH_ 3 /* Need to check for space to push onto stack */
++#define _null_ 4 /* Function illegal or not implemented */
+
+ #define pop_0() { FPU_settag0(TAG_Empty); top++; }
+
+-
+ static u_char const type_table[32] = {
+- _PUSH_, _PUSH_, _PUSH_, _PUSH_,
+- _null_, _null_, _null_, _null_,
+- _REG0_, _REG0_, _REG0_, _REG0_,
+- _REG0_, _REG0_, _REG0_, _REG0_,
+- _NONE_, _null_, _NONE_, _PUSH_,
+- _NONE_, _PUSH_, _null_, _PUSH_,
+- _NONE_, _null_, _NONE_, _REG0_,
+- _NONE_, _REG0_, _NONE_, _REG0_
+- };
++ _PUSH_, _PUSH_, _PUSH_, _PUSH_,
++ _null_, _null_, _null_, _null_,
++ _REG0_, _REG0_, _REG0_, _REG0_,
++ _REG0_, _REG0_, _REG0_, _REG0_,
++ _NONE_, _null_, _NONE_, _PUSH_,
++ _NONE_, _PUSH_, _null_, _PUSH_,
++ _NONE_, _null_, _NONE_, _REG0_,
++ _NONE_, _REG0_, _NONE_, _REG0_
++};
+
+ u_char const data_sizes_16[32] = {
+- 4, 4, 8, 2, 0, 0, 0, 0,
+- 4, 4, 8, 2, 4, 4, 8, 2,
+- 14, 0, 94, 10, 2, 10, 0, 8,
+- 14, 0, 94, 10, 2, 10, 2, 8
++ 4, 4, 8, 2, 0, 0, 0, 0,
++ 4, 4, 8, 2, 4, 4, 8, 2,
++ 14, 0, 94, 10, 2, 10, 0, 8,
++ 14, 0, 94, 10, 2, 10, 2, 8
+ };
+
+ static u_char const data_sizes_32[32] = {
+- 4, 4, 8, 2, 0, 0, 0, 0,
+- 4, 4, 8, 2, 4, 4, 8, 2,
+- 28, 0,108, 10, 2, 10, 0, 8,
+- 28, 0,108, 10, 2, 10, 2, 8
++ 4, 4, 8, 2, 0, 0, 0, 0,
++ 4, 4, 8, 2, 4, 4, 8, 2,
++ 28, 0, 108, 10, 2, 10, 0, 8,
++ 28, 0, 108, 10, 2, 10, 2, 8
+ };
+
+ int FPU_load_store(u_char type, fpu_addr_modes addr_modes,
+- void __user *data_address)
++ void __user * data_address)
+ {
+- FPU_REG loaded_data;
+- FPU_REG *st0_ptr;
+- u_char st0_tag = TAG_Empty; /* This is just to stop a gcc warning. */
+- u_char loaded_tag;
++ FPU_REG loaded_data;
++ FPU_REG *st0_ptr;
++ u_char st0_tag = TAG_Empty; /* This is just to stop a gcc warning. */
++ u_char loaded_tag;
+
+- st0_ptr = NULL; /* Initialized just to stop compiler warnings. */
++ st0_ptr = NULL; /* Initialized just to stop compiler warnings. */
+
+- if ( addr_modes.default_mode & PROTECTED )
+- {
+- if ( addr_modes.default_mode == SEG32 )
+- {
+- if ( access_limit < data_sizes_32[type] )
+- math_abort(FPU_info,SIGSEGV);
+- }
+- else if ( addr_modes.default_mode == PM16 )
+- {
+- if ( access_limit < data_sizes_16[type] )
+- math_abort(FPU_info,SIGSEGV);
+- }
++ if (addr_modes.default_mode & PROTECTED) {
++ if (addr_modes.default_mode == SEG32) {
++ if (access_limit < data_sizes_32[type])
++ math_abort(FPU_info, SIGSEGV);
++ } else if (addr_modes.default_mode == PM16) {
++ if (access_limit < data_sizes_16[type])
++ math_abort(FPU_info, SIGSEGV);
++ }
+ #ifdef PARANOID
+- else
+- EXCEPTION(EX_INTERNAL|0x140);
++ else
++ EXCEPTION(EX_INTERNAL | 0x140);
+ #endif /* PARANOID */
+- }
++ }
+
+- switch ( type_table[type] )
+- {
+- case _NONE_:
+- break;
+- case _REG0_:
+- st0_ptr = &st(0); /* Some of these instructions pop after
+- storing */
+- st0_tag = FPU_gettag0();
+- break;
+- case _PUSH_:
+- {
+- if ( FPU_gettagi(-1) != TAG_Empty )
+- { FPU_stack_overflow(); return 0; }
+- top--;
+- st0_ptr = &st(0);
+- }
+- break;
+- case _null_:
+- FPU_illegal();
+- return 0;
++ switch (type_table[type]) {
++ case _NONE_:
++ break;
++ case _REG0_:
++ st0_ptr = &st(0); /* Some of these instructions pop after
++ storing */
++ st0_tag = FPU_gettag0();
++ break;
++ case _PUSH_:
++ {
++ if (FPU_gettagi(-1) != TAG_Empty) {
++ FPU_stack_overflow();
++ return 0;
++ }
++ top--;
++ st0_ptr = &st(0);
++ }
++ break;
++ case _null_:
++ FPU_illegal();
++ return 0;
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x141);
+- return 0;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x141);
++ return 0;
+ #endif /* PARANOID */
+- }
+-
+- switch ( type )
+- {
+- case 000: /* fld m32real */
+- clear_C1();
+- loaded_tag = FPU_load_single((float __user *)data_address, &loaded_data);
+- if ( (loaded_tag == TAG_Special)
+- && isNaN(&loaded_data)
+- && (real_1op_NaN(&loaded_data) < 0) )
+- {
+- top++;
+- break;
+- }
+- FPU_copy_to_reg0(&loaded_data, loaded_tag);
+- break;
+- case 001: /* fild m32int */
+- clear_C1();
+- loaded_tag = FPU_load_int32((long __user *)data_address, &loaded_data);
+- FPU_copy_to_reg0(&loaded_data, loaded_tag);
+- break;
+- case 002: /* fld m64real */
+- clear_C1();
+- loaded_tag = FPU_load_double((double __user *)data_address, &loaded_data);
+- if ( (loaded_tag == TAG_Special)
+- && isNaN(&loaded_data)
+- && (real_1op_NaN(&loaded_data) < 0) )
+- {
+- top++;
+- break;
+ }
+- FPU_copy_to_reg0(&loaded_data, loaded_tag);
+- break;
+- case 003: /* fild m16int */
+- clear_C1();
+- loaded_tag = FPU_load_int16((short __user *)data_address, &loaded_data);
+- FPU_copy_to_reg0(&loaded_data, loaded_tag);
+- break;
+- case 010: /* fst m32real */
+- clear_C1();
+- FPU_store_single(st0_ptr, st0_tag, (float __user *)data_address);
+- break;
+- case 011: /* fist m32int */
+- clear_C1();
+- FPU_store_int32(st0_ptr, st0_tag, (long __user *)data_address);
+- break;
+- case 012: /* fst m64real */
+- clear_C1();
+- FPU_store_double(st0_ptr, st0_tag, (double __user *)data_address);
+- break;
+- case 013: /* fist m16int */
+- clear_C1();
+- FPU_store_int16(st0_ptr, st0_tag, (short __user *)data_address);
+- break;
+- case 014: /* fstp m32real */
+- clear_C1();
+- if ( FPU_store_single(st0_ptr, st0_tag, (float __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 015: /* fistp m32int */
+- clear_C1();
+- if ( FPU_store_int32(st0_ptr, st0_tag, (long __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 016: /* fstp m64real */
+- clear_C1();
+- if ( FPU_store_double(st0_ptr, st0_tag, (double __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 017: /* fistp m16int */
+- clear_C1();
+- if ( FPU_store_int16(st0_ptr, st0_tag, (short __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 020: /* fldenv m14/28byte */
+- fldenv(addr_modes, (u_char __user *)data_address);
+- /* Ensure that the values just loaded are not changed by
+- fix-up operations. */
+- return 1;
+- case 022: /* frstor m94/108byte */
+- frstor(addr_modes, (u_char __user *)data_address);
+- /* Ensure that the values just loaded are not changed by
+- fix-up operations. */
+- return 1;
+- case 023: /* fbld m80dec */
+- clear_C1();
+- loaded_tag = FPU_load_bcd((u_char __user *)data_address);
+- FPU_settag0(loaded_tag);
+- break;
+- case 024: /* fldcw */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, data_address, 2);
+- FPU_get_user(control_word, (unsigned short __user *) data_address);
+- RE_ENTRANT_CHECK_ON;
+- if ( partial_status & ~control_word & CW_Exceptions )
+- partial_status |= (SW_Summary | SW_Backward);
+- else
+- partial_status &= ~(SW_Summary | SW_Backward);
+
-+ align = crypto_aead_alignmask(aead);
-+ align &= ~(crypto_tfm_ctx_alignment() - 1);
-+ tfm->crt_aead.reqsize = sizeof(struct aead_request) +
-+ ALIGN(crypto_aead_reqsize(aead),
-+ crypto_tfm_ctx_alignment()) +
-+ align + 16;
++ switch (type) {
++ case 000: /* fld m32real */
++ clear_C1();
++ loaded_tag =
++ FPU_load_single((float __user *)data_address, &loaded_data);
++ if ((loaded_tag == TAG_Special)
++ && isNaN(&loaded_data)
++ && (real_1op_NaN(&loaded_data) < 0)) {
++ top++;
++ break;
++ }
++ FPU_copy_to_reg0(&loaded_data, loaded_tag);
++ break;
++ case 001: /* fild m32int */
++ clear_C1();
++ loaded_tag =
++ FPU_load_int32((long __user *)data_address, &loaded_data);
++ FPU_copy_to_reg0(&loaded_data, loaded_tag);
++ break;
++ case 002: /* fld m64real */
++ clear_C1();
++ loaded_tag =
++ FPU_load_double((double __user *)data_address,
++ &loaded_data);
++ if ((loaded_tag == TAG_Special)
++ && isNaN(&loaded_data)
++ && (real_1op_NaN(&loaded_data) < 0)) {
++ top++;
++ break;
++ }
++ FPU_copy_to_reg0(&loaded_data, loaded_tag);
++ break;
++ case 003: /* fild m16int */
++ clear_C1();
++ loaded_tag =
++ FPU_load_int16((short __user *)data_address, &loaded_data);
++ FPU_copy_to_reg0(&loaded_data, loaded_tag);
++ break;
++ case 010: /* fst m32real */
++ clear_C1();
++ FPU_store_single(st0_ptr, st0_tag,
++ (float __user *)data_address);
++ break;
++ case 011: /* fist m32int */
++ clear_C1();
++ FPU_store_int32(st0_ptr, st0_tag, (long __user *)data_address);
++ break;
++ case 012: /* fst m64real */
++ clear_C1();
++ FPU_store_double(st0_ptr, st0_tag,
++ (double __user *)data_address);
++ break;
++ case 013: /* fist m16int */
++ clear_C1();
++ FPU_store_int16(st0_ptr, st0_tag, (short __user *)data_address);
++ break;
++ case 014: /* fstp m32real */
++ clear_C1();
++ if (FPU_store_single
++ (st0_ptr, st0_tag, (float __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 015: /* fistp m32int */
++ clear_C1();
++ if (FPU_store_int32
++ (st0_ptr, st0_tag, (long __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 016: /* fstp m64real */
++ clear_C1();
++ if (FPU_store_double
++ (st0_ptr, st0_tag, (double __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 017: /* fistp m16int */
++ clear_C1();
++ if (FPU_store_int16
++ (st0_ptr, st0_tag, (short __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 020: /* fldenv m14/28byte */
++ fldenv(addr_modes, (u_char __user *) data_address);
++ /* Ensure that the values just loaded are not changed by
++ fix-up operations. */
++ return 1;
++ case 022: /* frstor m94/108byte */
++ frstor(addr_modes, (u_char __user *) data_address);
++ /* Ensure that the values just loaded are not changed by
++ fix-up operations. */
++ return 1;
++ case 023: /* fbld m80dec */
++ clear_C1();
++ loaded_tag = FPU_load_bcd((u_char __user *) data_address);
++ FPU_settag0(loaded_tag);
++ break;
++ case 024: /* fldcw */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, data_address, 2);
++ FPU_get_user(control_word,
++ (unsigned short __user *)data_address);
++ RE_ENTRANT_CHECK_ON;
++ if (partial_status & ~control_word & CW_Exceptions)
++ partial_status |= (SW_Summary | SW_Backward);
++ else
++ partial_status &= ~(SW_Summary | SW_Backward);
+ #ifdef PECULIAR_486
+- control_word |= 0x40; /* An 80486 appears to always set this bit */
++ control_word |= 0x40; /* An 80486 appears to always set this bit */
+ #endif /* PECULIAR_486 */
+- return 1;
+- case 025: /* fld m80real */
+- clear_C1();
+- loaded_tag = FPU_load_extended((long double __user *)data_address, 0);
+- FPU_settag0(loaded_tag);
+- break;
+- case 027: /* fild m64int */
+- clear_C1();
+- loaded_tag = FPU_load_int64((long long __user *)data_address);
+- if (loaded_tag == TAG_Error)
++ return 1;
++ case 025: /* fld m80real */
++ clear_C1();
++ loaded_tag =
++ FPU_load_extended((long double __user *)data_address, 0);
++ FPU_settag0(loaded_tag);
++ break;
++ case 027: /* fild m64int */
++ clear_C1();
++ loaded_tag = FPU_load_int64((long long __user *)data_address);
++ if (loaded_tag == TAG_Error)
++ return 0;
++ FPU_settag0(loaded_tag);
++ break;
++ case 030: /* fstenv m14/28byte */
++ fstenv(addr_modes, (u_char __user *) data_address);
++ return 1;
++ case 032: /* fsave */
++ fsave(addr_modes, (u_char __user *) data_address);
++ return 1;
++ case 033: /* fbstp m80dec */
++ clear_C1();
++ if (FPU_store_bcd
++ (st0_ptr, st0_tag, (u_char __user *) data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 034: /* fstcw m16int */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, data_address, 2);
++ FPU_put_user(control_word,
++ (unsigned short __user *)data_address);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ case 035: /* fstp m80real */
++ clear_C1();
++ if (FPU_store_extended
++ (st0_ptr, st0_tag, (long double __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ case 036: /* fstsw m2byte */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, data_address, 2);
++ FPU_put_user(status_word(),
++ (unsigned short __user *)data_address);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ case 037: /* fistp m64int */
++ clear_C1();
++ if (FPU_store_int64
++ (st0_ptr, st0_tag, (long long __user *)data_address))
++ pop_0(); /* pop only if the number was actually stored
++ (see the 80486 manual p16-28) */
++ break;
++ }
+ return 0;
+- FPU_settag0(loaded_tag);
+- break;
+- case 030: /* fstenv m14/28byte */
+- fstenv(addr_modes, (u_char __user *)data_address);
+- return 1;
+- case 032: /* fsave */
+- fsave(addr_modes, (u_char __user *)data_address);
+- return 1;
+- case 033: /* fbstp m80dec */
+- clear_C1();
+- if ( FPU_store_bcd(st0_ptr, st0_tag, (u_char __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 034: /* fstcw m16int */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,data_address,2);
+- FPU_put_user(control_word, (unsigned short __user *) data_address);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
+- case 035: /* fstp m80real */
+- clear_C1();
+- if ( FPU_store_extended(st0_ptr, st0_tag, (long double __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- case 036: /* fstsw m2byte */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,data_address,2);
+- FPU_put_user(status_word(),(unsigned short __user *) data_address);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
+- case 037: /* fistp m64int */
+- clear_C1();
+- if ( FPU_store_int64(st0_ptr, st0_tag, (long long __user *)data_address) )
+- pop_0(); /* pop only if the number was actually stored
+- (see the 80486 manual p16-28) */
+- break;
+- }
+- return 0;
+ }
+diff --git a/arch/x86/math-emu/poly.h b/arch/x86/math-emu/poly.h
+index 4db7981..168eb44 100644
+--- a/arch/x86/math-emu/poly.h
++++ b/arch/x86/math-emu/poly.h
+@@ -21,9 +21,9 @@
+ allows. 9-byte would probably be sufficient.
+ */
+ typedef struct {
+- unsigned long lsw;
+- unsigned long midw;
+- unsigned long msw;
++ unsigned long lsw;
++ unsigned long midw;
++ unsigned long msw;
+ } Xsig;
+
+ asmlinkage void mul64(unsigned long long const *a, unsigned long long const *b,
+@@ -49,7 +49,6 @@ asmlinkage void div_Xsig(Xsig *x1, const Xsig *x2, const Xsig *dest);
+ /* Macro to access the 8 ms bytes of an Xsig as a long long */
+ #define XSIG_LL(x) (*(unsigned long long *)&x.midw)
+
+-
+ /*
+ Need to run gcc with optimizations on to get these to
+ actually be in-line.
+@@ -63,59 +62,53 @@ asmlinkage void div_Xsig(Xsig *x1, const Xsig *x2, const Xsig *dest);
+ static inline unsigned long mul_32_32(const unsigned long arg1,
+ const unsigned long arg2)
+ {
+- int retval;
+- asm volatile ("mull %2; movl %%edx,%%eax" \
+- :"=a" (retval) \
+- :"0" (arg1), "g" (arg2) \
+- :"dx");
+- return retval;
++ int retval;
++ asm volatile ("mull %2; movl %%edx,%%eax":"=a" (retval)
++ :"0"(arg1), "g"(arg2)
++ :"dx");
++ return retval;
+ }
+
+-
+ /* Add the 12 byte Xsig x2 to Xsig dest, with no checks for overflow. */
+ static inline void add_Xsig_Xsig(Xsig *dest, const Xsig *x2)
+ {
+- asm volatile ("movl %1,%%edi; movl %2,%%esi;\n"
+- "movl (%%esi),%%eax; addl %%eax,(%%edi);\n"
+- "movl 4(%%esi),%%eax; adcl %%eax,4(%%edi);\n"
+- "movl 8(%%esi),%%eax; adcl %%eax,8(%%edi);\n"
+- :"=g" (*dest):"g" (dest), "g" (x2)
+- :"ax","si","di");
++ asm volatile ("movl %1,%%edi; movl %2,%%esi;\n"
++ "movl (%%esi),%%eax; addl %%eax,(%%edi);\n"
++ "movl 4(%%esi),%%eax; adcl %%eax,4(%%edi);\n"
++ "movl 8(%%esi),%%eax; adcl %%eax,8(%%edi);\n":"=g"
++ (*dest):"g"(dest), "g"(x2)
++ :"ax", "si", "di");
+ }
+
+-
+ /* Add the 12 byte Xsig x2 to Xsig dest, adjust exp if overflow occurs. */
+ /* Note: the constraints in the asm statement didn't always work properly
+ with gcc 2.5.8. Changing from using edi to using ecx got around the
+ problem, but keep fingers crossed! */
+ static inline void add_two_Xsig(Xsig *dest, const Xsig *x2, long int *exp)
+ {
+- asm volatile ("movl %2,%%ecx; movl %3,%%esi;\n"
+- "movl (%%esi),%%eax; addl %%eax,(%%ecx);\n"
+- "movl 4(%%esi),%%eax; adcl %%eax,4(%%ecx);\n"
+- "movl 8(%%esi),%%eax; adcl %%eax,8(%%ecx);\n"
+- "jnc 0f;\n"
+- "rcrl 8(%%ecx); rcrl 4(%%ecx); rcrl (%%ecx)\n"
+- "movl %4,%%ecx; incl (%%ecx)\n"
+- "movl $1,%%eax; jmp 1f;\n"
+- "0: xorl %%eax,%%eax;\n"
+- "1:\n"
+- :"=g" (*exp), "=g" (*dest)
+- :"g" (dest), "g" (x2), "g" (exp)
+- :"cx","si","ax");
++ asm volatile ("movl %2,%%ecx; movl %3,%%esi;\n"
++ "movl (%%esi),%%eax; addl %%eax,(%%ecx);\n"
++ "movl 4(%%esi),%%eax; adcl %%eax,4(%%ecx);\n"
++ "movl 8(%%esi),%%eax; adcl %%eax,8(%%ecx);\n"
++ "jnc 0f;\n"
++ "rcrl 8(%%ecx); rcrl 4(%%ecx); rcrl (%%ecx)\n"
++ "movl %4,%%ecx; incl (%%ecx)\n"
++ "movl $1,%%eax; jmp 1f;\n"
++ "0: xorl %%eax,%%eax;\n" "1:\n":"=g" (*exp), "=g"(*dest)
++ :"g"(dest), "g"(x2), "g"(exp)
++ :"cx", "si", "ax");
+ }
+
+-
+ /* Negate (subtract from 1.0) the 12 byte Xsig */
+ /* This is faster in a loop on my 386 than using the "neg" instruction. */
+ static inline void negate_Xsig(Xsig *x)
+ {
+- asm volatile("movl %1,%%esi;\n"
+- "xorl %%ecx,%%ecx;\n"
+- "movl %%ecx,%%eax; subl (%%esi),%%eax; movl %%eax,(%%esi);\n"
+- "movl %%ecx,%%eax; sbbl 4(%%esi),%%eax; movl %%eax,4(%%esi);\n"
+- "movl %%ecx,%%eax; sbbl 8(%%esi),%%eax; movl %%eax,8(%%esi);\n"
+- :"=g" (*x):"g" (x):"si","ax","cx");
++ asm volatile ("movl %1,%%esi;\n"
++ "xorl %%ecx,%%ecx;\n"
++ "movl %%ecx,%%eax; subl (%%esi),%%eax; movl %%eax,(%%esi);\n"
++ "movl %%ecx,%%eax; sbbl 4(%%esi),%%eax; movl %%eax,4(%%esi);\n"
++ "movl %%ecx,%%eax; sbbl 8(%%esi),%%eax; movl %%eax,8(%%esi);\n":"=g"
++ (*x):"g"(x):"si", "ax", "cx");
+ }
+
+ #endif /* _POLY_H */
+diff --git a/arch/x86/math-emu/poly_2xm1.c b/arch/x86/math-emu/poly_2xm1.c
+index 9766ad5..b00e9e1 100644
+--- a/arch/x86/math-emu/poly_2xm1.c
++++ b/arch/x86/math-emu/poly_2xm1.c
+@@ -17,21 +17,19 @@
+ #include "control_w.h"
+ #include "poly.h"
+
+-
+ #define HIPOWER 11
+-static const unsigned long long lterms[HIPOWER] =
+-{
+- 0x0000000000000000LL, /* This term done separately as 12 bytes */
+- 0xf5fdeffc162c7543LL,
+- 0x1c6b08d704a0bfa6LL,
+- 0x0276556df749cc21LL,
+- 0x002bb0ffcf14f6b8LL,
+- 0x0002861225ef751cLL,
+- 0x00001ffcbfcd5422LL,
+- 0x00000162c005d5f1LL,
+- 0x0000000da96ccb1bLL,
+- 0x0000000078d1b897LL,
+- 0x000000000422b029LL
++static const unsigned long long lterms[HIPOWER] = {
++ 0x0000000000000000LL, /* This term done separately as 12 bytes */
++ 0xf5fdeffc162c7543LL,
++ 0x1c6b08d704a0bfa6LL,
++ 0x0276556df749cc21LL,
++ 0x002bb0ffcf14f6b8LL,
++ 0x0002861225ef751cLL,
++ 0x00001ffcbfcd5422LL,
++ 0x00000162c005d5f1LL,
++ 0x0000000da96ccb1bLL,
++ 0x0000000078d1b897LL,
++ 0x000000000422b029LL
+ };
+
+ static const Xsig hiterm = MK_XSIG(0xb17217f7, 0xd1cf79ab, 0xc8a39194);
+@@ -45,112 +43,103 @@ static const Xsig shiftterm2 = MK_XSIG(0xb504f333, 0xf9de6484, 0x597d89b3);
+ static const Xsig shiftterm3 = MK_XSIG(0xd744fcca, 0xd69d6af4, 0x39a68bb9);
+
+ static const Xsig *shiftterm[] = { &shiftterm0, &shiftterm1,
+- &shiftterm2, &shiftterm3 };
+-
++ &shiftterm2, &shiftterm3
++};
+
+ /*--- poly_2xm1() -----------------------------------------------------------+
+ | Requires st(0) which is TAG_Valid and < 1. |
+ +---------------------------------------------------------------------------*/
+-int poly_2xm1(u_char sign, FPU_REG *arg, FPU_REG *result)
++int poly_2xm1(u_char sign, FPU_REG *arg, FPU_REG *result)
+ {
+- long int exponent, shift;
+- unsigned long long Xll;
+- Xsig accumulator, Denom, argSignif;
+- u_char tag;
++ long int exponent, shift;
++ unsigned long long Xll;
++ Xsig accumulator, Denom, argSignif;
++ u_char tag;
+
+- exponent = exponent16(arg);
++ exponent = exponent16(arg);
+
+ #ifdef PARANOID
+- if ( exponent >= 0 ) /* Don't want a |number| >= 1.0 */
+- {
+- /* Number negative, too large, or not Valid. */
+- EXCEPTION(EX_INTERNAL|0x127);
+- return 1;
+- }
++ if (exponent >= 0) { /* Don't want a |number| >= 1.0 */
++ /* Number negative, too large, or not Valid. */
++ EXCEPTION(EX_INTERNAL | 0x127);
++ return 1;
++ }
+ #endif /* PARANOID */
+
+- argSignif.lsw = 0;
+- XSIG_LL(argSignif) = Xll = significand(arg);
+-
+- if ( exponent == -1 )
+- {
+- shift = (argSignif.msw & 0x40000000) ? 3 : 2;
+- /* subtract 0.5 or 0.75 */
+- exponent -= 2;
+- XSIG_LL(argSignif) <<= 2;
+- Xll <<= 2;
+- }
+- else if ( exponent == -2 )
+- {
+- shift = 1;
+- /* subtract 0.25 */
+- exponent--;
+- XSIG_LL(argSignif) <<= 1;
+- Xll <<= 1;
+- }
+- else
+- shift = 0;
+-
+- if ( exponent < -2 )
+- {
+- /* Shift the argument right by the required places. */
+- if ( FPU_shrx(&Xll, -2-exponent) >= 0x80000000U )
+- Xll++; /* round up */
+- }
+-
+- accumulator.lsw = accumulator.midw = accumulator.msw = 0;
+- polynomial_Xsig(&accumulator, &Xll, lterms, HIPOWER-1);
+- mul_Xsig_Xsig(&accumulator, &argSignif);
+- shr_Xsig(&accumulator, 3);
+-
+- mul_Xsig_Xsig(&argSignif, &hiterm); /* The leading term */
+- add_two_Xsig(&accumulator, &argSignif, &exponent);
+-
+- if ( shift )
+- {
+- /* The argument is large, use the identity:
+- f(x+a) = f(a) * (f(x) + 1) - 1;
+- */
+- shr_Xsig(&accumulator, - exponent);
+- accumulator.msw |= 0x80000000; /* add 1.0 */
+- mul_Xsig_Xsig(&accumulator, shiftterm[shift]);
+- accumulator.msw &= 0x3fffffff; /* subtract 1.0 */
+- exponent = 1;
+- }
+-
+- if ( sign != SIGN_POS )
+- {
+- /* The argument is negative, use the identity:
+- f(-x) = -f(x) / (1 + f(x))
+- */
+- Denom.lsw = accumulator.lsw;
+- XSIG_LL(Denom) = XSIG_LL(accumulator);
+- if ( exponent < 0 )
+- shr_Xsig(&Denom, - exponent);
+- else if ( exponent > 0 )
+- {
+- /* exponent must be 1 here */
+- XSIG_LL(Denom) <<= 1;
+- if ( Denom.lsw & 0x80000000 )
+- XSIG_LL(Denom) |= 1;
+- (Denom.lsw) <<= 1;
++ argSignif.lsw = 0;
++ XSIG_LL(argSignif) = Xll = significand(arg);
++
++ if (exponent == -1) {
++ shift = (argSignif.msw & 0x40000000) ? 3 : 2;
++ /* subtract 0.5 or 0.75 */
++ exponent -= 2;
++ XSIG_LL(argSignif) <<= 2;
++ Xll <<= 2;
++ } else if (exponent == -2) {
++ shift = 1;
++ /* subtract 0.25 */
++ exponent--;
++ XSIG_LL(argSignif) <<= 1;
++ Xll <<= 1;
++ } else
++ shift = 0;
+
++ if (exponent < -2) {
++ /* Shift the argument right by the required places. */
++ if (FPU_shrx(&Xll, -2 - exponent) >= 0x80000000U)
++ Xll++; /* round up */
++ }
++
++ accumulator.lsw = accumulator.midw = accumulator.msw = 0;
++ polynomial_Xsig(&accumulator, &Xll, lterms, HIPOWER - 1);
++ mul_Xsig_Xsig(&accumulator, &argSignif);
++ shr_Xsig(&accumulator, 3);
++
++ mul_Xsig_Xsig(&argSignif, &hiterm); /* The leading term */
++ add_two_Xsig(&accumulator, &argSignif, &exponent);
++
++ if (shift) {
++ /* The argument is large, use the identity:
++ f(x+a) = f(a) * (f(x) + 1) - 1;
++ */
++ shr_Xsig(&accumulator, -exponent);
++ accumulator.msw |= 0x80000000; /* add 1.0 */
++ mul_Xsig_Xsig(&accumulator, shiftterm[shift]);
++ accumulator.msw &= 0x3fffffff; /* subtract 1.0 */
++ exponent = 1;
++ }
++
++ if (sign != SIGN_POS) {
++ /* The argument is negative, use the identity:
++ f(-x) = -f(x) / (1 + f(x))
++ */
++ Denom.lsw = accumulator.lsw;
++ XSIG_LL(Denom) = XSIG_LL(accumulator);
++ if (exponent < 0)
++ shr_Xsig(&Denom, -exponent);
++ else if (exponent > 0) {
++ /* exponent must be 1 here */
++ XSIG_LL(Denom) <<= 1;
++ if (Denom.lsw & 0x80000000)
++ XSIG_LL(Denom) |= 1;
++ (Denom.lsw) <<= 1;
++ }
++ Denom.msw |= 0x80000000; /* add 1.0 */
++ div_Xsig(&accumulator, &Denom, &accumulator);
+ }
+- Denom.msw |= 0x80000000; /* add 1.0 */
+- div_Xsig(&accumulator, &Denom, &accumulator);
+- }
+
+- /* Convert to 64 bit signed-compatible */
+- exponent += round_Xsig(&accumulator);
++ /* Convert to 64 bit signed-compatible */
++ exponent += round_Xsig(&accumulator);
+
+- result = &st(0);
+- significand(result) = XSIG_LL(accumulator);
+- setexponent16(result, exponent);
++ result = &st(0);
++ significand(result) = XSIG_LL(accumulator);
++ setexponent16(result, exponent);
+
+- tag = FPU_round(result, 1, 0, FULL_PRECISION, sign);
++ tag = FPU_round(result, 1, 0, FULL_PRECISION, sign);
+
+- setsign(result, sign);
+- FPU_settag0(tag);
++ setsign(result, sign);
++ FPU_settag0(tag);
+
+- return 0;
+ return 0;
-+}
-+
-+static void crypto_rfc4309_exit_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_rfc4309_ctx *ctx = crypto_tfm_ctx(tfm);
-+
-+ crypto_free_aead(ctx->child);
-+}
-+
-+static struct crypto_instance *crypto_rfc4309_alloc(struct rtattr **tb)
-+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_aead_spawn *spawn;
-+ struct crypto_alg *alg;
-+ const char *ccm_name;
-+ int err;
-+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
-+
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
-+ return ERR_PTR(-EINVAL);
-+
-+ ccm_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(ccm_name);
-+ if (IS_ERR(ccm_name))
-+ return ERR_PTR(err);
-+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-+ if (!inst)
-+ return ERR_PTR(-ENOMEM);
-+
-+ spawn = crypto_instance_ctx(inst);
-+ crypto_set_aead_spawn(spawn, inst);
-+ err = crypto_grab_aead(spawn, ccm_name, 0,
-+ crypto_requires_sync(algt->type, algt->mask));
-+ if (err)
-+ goto out_free_inst;
-+
-+ alg = crypto_aead_spawn_alg(spawn);
-+
-+ err = -EINVAL;
-+
-+ /* We only support 16-byte blocks. */
-+ if (alg->cra_aead.ivsize != 16)
-+ goto out_drop_alg;
-+
-+ /* Not a stream cipher? */
-+ if (alg->cra_blocksize != 1)
-+ goto out_drop_alg;
-+
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-+ "rfc4309(%s)", alg->cra_name) >= CRYPTO_MAX_ALG_NAME ||
-+ snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "rfc4309(%s)", alg->cra_driver_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto out_drop_alg;
-+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
-+ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = alg->cra_alignmask;
-+ inst->alg.cra_type = &crypto_nivaead_type;
-+
-+ inst->alg.cra_aead.ivsize = 8;
-+ inst->alg.cra_aead.maxauthsize = 16;
-+
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc4309_ctx);
-+
-+ inst->alg.cra_init = crypto_rfc4309_init_tfm;
-+ inst->alg.cra_exit = crypto_rfc4309_exit_tfm;
-+
-+ inst->alg.cra_aead.setkey = crypto_rfc4309_setkey;
-+ inst->alg.cra_aead.setauthsize = crypto_rfc4309_setauthsize;
-+ inst->alg.cra_aead.encrypt = crypto_rfc4309_encrypt;
-+ inst->alg.cra_aead.decrypt = crypto_rfc4309_decrypt;
-+
-+ inst->alg.cra_aead.geniv = "seqiv";
-+
-+out:
-+ return inst;
-+
-+out_drop_alg:
-+ crypto_drop_aead(spawn);
-+out_free_inst:
-+ kfree(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
-+}
-+
-+static void crypto_rfc4309_free(struct crypto_instance *inst)
-+{
-+ crypto_drop_spawn(crypto_instance_ctx(inst));
-+ kfree(inst);
-+}
-+
-+static struct crypto_template crypto_rfc4309_tmpl = {
-+ .name = "rfc4309",
-+ .alloc = crypto_rfc4309_alloc,
-+ .free = crypto_rfc4309_free,
-+ .module = THIS_MODULE,
-+};
-+
-+static int __init crypto_ccm_module_init(void)
-+{
-+ int err;
-+
-+ err = crypto_register_template(&crypto_ccm_base_tmpl);
-+ if (err)
-+ goto out;
-+
-+ err = crypto_register_template(&crypto_ccm_tmpl);
-+ if (err)
-+ goto out_undo_base;
-+
-+ err = crypto_register_template(&crypto_rfc4309_tmpl);
-+ if (err)
-+ goto out_undo_ccm;
-+
-+out:
-+ return err;
-+
-+out_undo_ccm:
-+ crypto_unregister_template(&crypto_ccm_tmpl);
-+out_undo_base:
-+ crypto_unregister_template(&crypto_ccm_base_tmpl);
-+ goto out;
-+}
-+
-+static void __exit crypto_ccm_module_exit(void)
-+{
-+ crypto_unregister_template(&crypto_rfc4309_tmpl);
-+ crypto_unregister_template(&crypto_ccm_tmpl);
-+ crypto_unregister_template(&crypto_ccm_base_tmpl);
-+}
-+
-+module_init(crypto_ccm_module_init);
-+module_exit(crypto_ccm_module_exit);
-+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Counter with CBC MAC");
-+MODULE_ALIAS("ccm_base");
-+MODULE_ALIAS("rfc4309");
-diff --git a/crypto/chainiv.c b/crypto/chainiv.c
-new file mode 100644
-index 0000000..d17fa04
---- /dev/null
-+++ b/crypto/chainiv.c
-@@ -0,0 +1,331 @@
-+/*
-+ * chainiv: Chain IV Generator
-+ *
-+ * Generate IVs simply be using the last block of the previous encryption.
-+ * This is mainly useful for CBC with a synchronous algorithm.
-+ *
-+ * Copyright (c) 2007 Herbert Xu <herbert at gondor.apana.org.au>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
-+ *
-+ */
-+
-+#include <crypto/internal/skcipher.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/random.h>
-+#include <linux/spinlock.h>
-+#include <linux/string.h>
-+#include <linux/workqueue.h>
-+
-+enum {
-+ CHAINIV_STATE_INUSE = 0,
-+};
-+
-+struct chainiv_ctx {
-+ spinlock_t lock;
-+ char iv[];
-+};
-+
-+struct async_chainiv_ctx {
-+ unsigned long state;
-+
-+ spinlock_t lock;
-+ int err;
-+
-+ struct crypto_queue queue;
-+ struct work_struct postponed;
-+
-+ char iv[];
-+};
+
+ }
+diff --git a/arch/x86/math-emu/poly_atan.c b/arch/x86/math-emu/poly_atan.c
+index 82f7029..20c28e5 100644
+--- a/arch/x86/math-emu/poly_atan.c
++++ b/arch/x86/math-emu/poly_atan.c
+@@ -18,28 +18,25 @@
+ #include "control_w.h"
+ #include "poly.h"
+
+-
+ #define HIPOWERon 6 /* odd poly, negative terms */
+-static const unsigned long long oddnegterms[HIPOWERon] =
+-{
+- 0x0000000000000000LL, /* Dummy (not for - 1.0) */
+- 0x015328437f756467LL,
+- 0x0005dda27b73dec6LL,
+- 0x0000226bf2bfb91aLL,
+- 0x000000ccc439c5f7LL,
+- 0x0000000355438407LL
+-} ;
++static const unsigned long long oddnegterms[HIPOWERon] = {
++ 0x0000000000000000LL, /* Dummy (not for - 1.0) */
++ 0x015328437f756467LL,
++ 0x0005dda27b73dec6LL,
++ 0x0000226bf2bfb91aLL,
++ 0x000000ccc439c5f7LL,
++ 0x0000000355438407LL
++};
+
+ #define HIPOWERop 6 /* odd poly, positive terms */
+-static const unsigned long long oddplterms[HIPOWERop] =
+-{
++static const unsigned long long oddplterms[HIPOWERop] = {
+ /* 0xaaaaaaaaaaaaaaabLL, transferred to fixedpterm[] */
+- 0x0db55a71875c9ac2LL,
+- 0x0029fce2d67880b0LL,
+- 0x0000dfd3908b4596LL,
+- 0x00000550fd61dab4LL,
+- 0x0000001c9422b3f9LL,
+- 0x000000003e3301e1LL
++ 0x0db55a71875c9ac2LL,
++ 0x0029fce2d67880b0LL,
++ 0x0000dfd3908b4596LL,
++ 0x00000550fd61dab4LL,
++ 0x0000001c9422b3f9LL,
++ 0x000000003e3301e1LL
+ };
+
+ static const unsigned long long denomterm = 0xebd9b842c5c53a0eLL;
+@@ -48,182 +45,164 @@ static const Xsig fixedpterm = MK_XSIG(0xaaaaaaaa, 0xaaaaaaaa, 0xaaaaaaaa);
+
+ static const Xsig pi_signif = MK_XSIG(0xc90fdaa2, 0x2168c234, 0xc4c6628b);
+
+-
+ /*--- poly_atan() -----------------------------------------------------------+
+ | |
+ +---------------------------------------------------------------------------*/
+-void poly_atan(FPU_REG *st0_ptr, u_char st0_tag,
+- FPU_REG *st1_ptr, u_char st1_tag)
++void poly_atan(FPU_REG *st0_ptr, u_char st0_tag,
++ FPU_REG *st1_ptr, u_char st1_tag)
+ {
+- u_char transformed, inverted,
+- sign1, sign2;
+- int exponent;
+- long int dummy_exp;
+- Xsig accumulator, Numer, Denom, accumulatore, argSignif,
+- argSq, argSqSq;
+- u_char tag;
+-
+- sign1 = getsign(st0_ptr);
+- sign2 = getsign(st1_ptr);
+- if ( st0_tag == TAG_Valid )
+- {
+- exponent = exponent(st0_ptr);
+- }
+- else
+- {
+- /* This gives non-compatible stack contents... */
+- FPU_to_exp16(st0_ptr, st0_ptr);
+- exponent = exponent16(st0_ptr);
+- }
+- if ( st1_tag == TAG_Valid )
+- {
+- exponent -= exponent(st1_ptr);
+- }
+- else
+- {
+- /* This gives non-compatible stack contents... */
+- FPU_to_exp16(st1_ptr, st1_ptr);
+- exponent -= exponent16(st1_ptr);
+- }
+-
+- if ( (exponent < 0) || ((exponent == 0) &&
+- ((st0_ptr->sigh < st1_ptr->sigh) ||
+- ((st0_ptr->sigh == st1_ptr->sigh) &&
+- (st0_ptr->sigl < st1_ptr->sigl))) ) )
+- {
+- inverted = 1;
+- Numer.lsw = Denom.lsw = 0;
+- XSIG_LL(Numer) = significand(st0_ptr);
+- XSIG_LL(Denom) = significand(st1_ptr);
+- }
+- else
+- {
+- inverted = 0;
+- exponent = -exponent;
+- Numer.lsw = Denom.lsw = 0;
+- XSIG_LL(Numer) = significand(st1_ptr);
+- XSIG_LL(Denom) = significand(st0_ptr);
+- }
+- div_Xsig(&Numer, &Denom, &argSignif);
+- exponent += norm_Xsig(&argSignif);
+-
+- if ( (exponent >= -1)
+- || ((exponent == -2) && (argSignif.msw > 0xd413ccd0)) )
+- {
+- /* The argument is greater than sqrt(2)-1 (=0.414213562...) */
+- /* Convert the argument by an identity for atan */
+- transformed = 1;
+-
+- if ( exponent >= 0 )
+- {
++ u_char transformed, inverted, sign1, sign2;
++ int exponent;
++ long int dummy_exp;
++ Xsig accumulator, Numer, Denom, accumulatore, argSignif, argSq, argSqSq;
++ u_char tag;
++
++ sign1 = getsign(st0_ptr);
++ sign2 = getsign(st1_ptr);
++ if (st0_tag == TAG_Valid) {
++ exponent = exponent(st0_ptr);
++ } else {
++ /* This gives non-compatible stack contents... */
++ FPU_to_exp16(st0_ptr, st0_ptr);
++ exponent = exponent16(st0_ptr);
++ }
++ if (st1_tag == TAG_Valid) {
++ exponent -= exponent(st1_ptr);
++ } else {
++ /* This gives non-compatible stack contents... */
++ FPU_to_exp16(st1_ptr, st1_ptr);
++ exponent -= exponent16(st1_ptr);
++ }
++
++ if ((exponent < 0) || ((exponent == 0) &&
++ ((st0_ptr->sigh < st1_ptr->sigh) ||
++ ((st0_ptr->sigh == st1_ptr->sigh) &&
++ (st0_ptr->sigl < st1_ptr->sigl))))) {
++ inverted = 1;
++ Numer.lsw = Denom.lsw = 0;
++ XSIG_LL(Numer) = significand(st0_ptr);
++ XSIG_LL(Denom) = significand(st1_ptr);
++ } else {
++ inverted = 0;
++ exponent = -exponent;
++ Numer.lsw = Denom.lsw = 0;
++ XSIG_LL(Numer) = significand(st1_ptr);
++ XSIG_LL(Denom) = significand(st0_ptr);
++ }
++ div_Xsig(&Numer, &Denom, &argSignif);
++ exponent += norm_Xsig(&argSignif);
++
++ if ((exponent >= -1)
++ || ((exponent == -2) && (argSignif.msw > 0xd413ccd0))) {
++ /* The argument is greater than sqrt(2)-1 (=0.414213562...) */
++ /* Convert the argument by an identity for atan */
++ transformed = 1;
++
++ if (exponent >= 0) {
+ #ifdef PARANOID
+- if ( !( (exponent == 0) &&
+- (argSignif.lsw == 0) && (argSignif.midw == 0) &&
+- (argSignif.msw == 0x80000000) ) )
+- {
+- EXCEPTION(EX_INTERNAL|0x104); /* There must be a logic error */
+- return;
+- }
++ if (!((exponent == 0) &&
++ (argSignif.lsw == 0) && (argSignif.midw == 0) &&
++ (argSignif.msw == 0x80000000))) {
++ EXCEPTION(EX_INTERNAL | 0x104); /* There must be a logic error */
++ return;
++ }
+ #endif /* PARANOID */
+- argSignif.msw = 0; /* Make the transformed arg -> 0.0 */
++ argSignif.msw = 0; /* Make the transformed arg -> 0.0 */
++ } else {
++ Numer.lsw = Denom.lsw = argSignif.lsw;
++ XSIG_LL(Numer) = XSIG_LL(Denom) = XSIG_LL(argSignif);
+
-+static int chainiv_givencrypt(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
-+ unsigned int ivsize;
-+ int err;
++ if (exponent < -1)
++ shr_Xsig(&Numer, -1 - exponent);
++ negate_Xsig(&Numer);
+
-+ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
-+ ablkcipher_request_set_callback(subreq, req->creq.base.flags &
-+ ~CRYPTO_TFM_REQ_MAY_SLEEP,
-+ req->creq.base.complete,
-+ req->creq.base.data);
-+ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
-+ req->creq.nbytes, req->creq.info);
++ shr_Xsig(&Denom, -exponent);
++ Denom.msw |= 0x80000000;
+
-+ spin_lock_bh(&ctx->lock);
++ div_Xsig(&Numer, &Denom, &argSignif);
+
-+ ivsize = crypto_ablkcipher_ivsize(geniv);
++ exponent = -1 + norm_Xsig(&argSignif);
++ }
++ } else {
++ transformed = 0;
++ }
+
-+ memcpy(req->giv, ctx->iv, ivsize);
-+ memcpy(subreq->info, ctx->iv, ivsize);
++ argSq.lsw = argSignif.lsw;
++ argSq.midw = argSignif.midw;
++ argSq.msw = argSignif.msw;
++ mul_Xsig_Xsig(&argSq, &argSq);
+
-+ err = crypto_ablkcipher_encrypt(subreq);
-+ if (err)
-+ goto unlock;
++ argSqSq.lsw = argSq.lsw;
++ argSqSq.midw = argSq.midw;
++ argSqSq.msw = argSq.msw;
++ mul_Xsig_Xsig(&argSqSq, &argSqSq);
+
-+ memcpy(ctx->iv, subreq->info, ivsize);
++ accumulatore.lsw = argSq.lsw;
++ XSIG_LL(accumulatore) = XSIG_LL(argSq);
+
-+unlock:
-+ spin_unlock_bh(&ctx->lock);
++ shr_Xsig(&argSq, 2 * (-1 - exponent - 1));
++ shr_Xsig(&argSqSq, 4 * (-1 - exponent - 1));
+
-+ return err;
-+}
++ /* Now have argSq etc with binary point at the left
++ .1xxxxxxxx */
+
-+static int chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ /* Do the basic fixed point polynomial evaluation */
++ accumulator.msw = accumulator.midw = accumulator.lsw = 0;
++ polynomial_Xsig(&accumulator, &XSIG_LL(argSqSq),
++ oddplterms, HIPOWERop - 1);
++ mul64_Xsig(&accumulator, &XSIG_LL(argSq));
++ negate_Xsig(&accumulator);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argSqSq), oddnegterms,
++ HIPOWERon - 1);
++ negate_Xsig(&accumulator);
++ add_two_Xsig(&accumulator, &fixedpterm, &dummy_exp);
+
-+ spin_lock_bh(&ctx->lock);
-+ if (crypto_ablkcipher_crt(geniv)->givencrypt !=
-+ chainiv_givencrypt_first)
-+ goto unlock;
++ mul64_Xsig(&accumulatore, &denomterm);
++ shr_Xsig(&accumulatore, 1 + 2 * (-1 - exponent));
++ accumulatore.msw |= 0x80000000;
+
-+ crypto_ablkcipher_crt(geniv)->givencrypt = chainiv_givencrypt;
-+ get_random_bytes(ctx->iv, crypto_ablkcipher_ivsize(geniv));
++ div_Xsig(&accumulator, &accumulatore, &accumulator);
+
-+unlock:
-+ spin_unlock_bh(&ctx->lock);
++ mul_Xsig_Xsig(&accumulator, &argSignif);
++ mul_Xsig_Xsig(&accumulator, &argSq);
+
-+ return chainiv_givencrypt(req);
-+}
++ shr_Xsig(&accumulator, 3);
++ negate_Xsig(&accumulator);
++ add_Xsig_Xsig(&accumulator, &argSignif);
+
-+static int chainiv_init_common(struct crypto_tfm *tfm)
-+{
-+ tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
++ if (transformed) {
++ /* compute pi/4 - accumulator */
++ shr_Xsig(&accumulator, -1 - exponent);
++ negate_Xsig(&accumulator);
++ add_Xsig_Xsig(&accumulator, &pi_signif);
++ exponent = -1;
++ }
+
-+ return skcipher_geniv_init(tfm);
-+}
++ if (inverted) {
++ /* compute pi/2 - accumulator */
++ shr_Xsig(&accumulator, -exponent);
++ negate_Xsig(&accumulator);
++ add_Xsig_Xsig(&accumulator, &pi_signif);
++ exponent = 0;
+ }
+- else
+- {
+- Numer.lsw = Denom.lsw = argSignif.lsw;
+- XSIG_LL(Numer) = XSIG_LL(Denom) = XSIG_LL(argSignif);
+-
+- if ( exponent < -1 )
+- shr_Xsig(&Numer, -1-exponent);
+- negate_Xsig(&Numer);
+-
+- shr_Xsig(&Denom, -exponent);
+- Denom.msw |= 0x80000000;
+-
+- div_Xsig(&Numer, &Denom, &argSignif);
+-
+- exponent = -1 + norm_Xsig(&argSignif);
++
++ if (sign1) {
++ /* compute pi - accumulator */
++ shr_Xsig(&accumulator, 1 - exponent);
++ negate_Xsig(&accumulator);
++ add_Xsig_Xsig(&accumulator, &pi_signif);
++ exponent = 1;
+ }
+- }
+- else
+- {
+- transformed = 0;
+- }
+-
+- argSq.lsw = argSignif.lsw; argSq.midw = argSignif.midw;
+- argSq.msw = argSignif.msw;
+- mul_Xsig_Xsig(&argSq, &argSq);
+-
+- argSqSq.lsw = argSq.lsw; argSqSq.midw = argSq.midw; argSqSq.msw = argSq.msw;
+- mul_Xsig_Xsig(&argSqSq, &argSqSq);
+-
+- accumulatore.lsw = argSq.lsw;
+- XSIG_LL(accumulatore) = XSIG_LL(argSq);
+-
+- shr_Xsig(&argSq, 2*(-1-exponent-1));
+- shr_Xsig(&argSqSq, 4*(-1-exponent-1));
+-
+- /* Now have argSq etc with binary point at the left
+- .1xxxxxxxx */
+-
+- /* Do the basic fixed point polynomial evaluation */
+- accumulator.msw = accumulator.midw = accumulator.lsw = 0;
+- polynomial_Xsig(&accumulator, &XSIG_LL(argSqSq),
+- oddplterms, HIPOWERop-1);
+- mul64_Xsig(&accumulator, &XSIG_LL(argSq));
+- negate_Xsig(&accumulator);
+- polynomial_Xsig(&accumulator, &XSIG_LL(argSqSq), oddnegterms, HIPOWERon-1);
+- negate_Xsig(&accumulator);
+- add_two_Xsig(&accumulator, &fixedpterm, &dummy_exp);
+-
+- mul64_Xsig(&accumulatore, &denomterm);
+- shr_Xsig(&accumulatore, 1 + 2*(-1-exponent));
+- accumulatore.msw |= 0x80000000;
+-
+- div_Xsig(&accumulator, &accumulatore, &accumulator);
+-
+- mul_Xsig_Xsig(&accumulator, &argSignif);
+- mul_Xsig_Xsig(&accumulator, &argSq);
+-
+- shr_Xsig(&accumulator, 3);
+- negate_Xsig(&accumulator);
+- add_Xsig_Xsig(&accumulator, &argSignif);
+-
+- if ( transformed )
+- {
+- /* compute pi/4 - accumulator */
+- shr_Xsig(&accumulator, -1-exponent);
+- negate_Xsig(&accumulator);
+- add_Xsig_Xsig(&accumulator, &pi_signif);
+- exponent = -1;
+- }
+-
+- if ( inverted )
+- {
+- /* compute pi/2 - accumulator */
+- shr_Xsig(&accumulator, -exponent);
+- negate_Xsig(&accumulator);
+- add_Xsig_Xsig(&accumulator, &pi_signif);
+- exponent = 0;
+- }
+-
+- if ( sign1 )
+- {
+- /* compute pi - accumulator */
+- shr_Xsig(&accumulator, 1 - exponent);
+- negate_Xsig(&accumulator);
+- add_Xsig_Xsig(&accumulator, &pi_signif);
+- exponent = 1;
+- }
+-
+- exponent += round_Xsig(&accumulator);
+-
+- significand(st1_ptr) = XSIG_LL(accumulator);
+- setexponent16(st1_ptr, exponent);
+-
+- tag = FPU_round(st1_ptr, 1, 0, FULL_PRECISION, sign2);
+- FPU_settagi(1, tag);
+-
+- set_precision_flag_up(); /* We do not really know if up or down,
+- use this as the default. */
+
-+static int chainiv_init(struct crypto_tfm *tfm)
-+{
-+ struct chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++ exponent += round_Xsig(&accumulator);
+
-+ spin_lock_init(&ctx->lock);
++ significand(st1_ptr) = XSIG_LL(accumulator);
++ setexponent16(st1_ptr, exponent);
+
-+ return chainiv_init_common(tfm);
-+}
++ tag = FPU_round(st1_ptr, 1, 0, FULL_PRECISION, sign2);
++ FPU_settagi(1, tag);
+
-+static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
-+{
-+ int queued;
++ set_precision_flag_up(); /* We do not really know if up or down,
++ use this as the default. */
+
+ }
+diff --git a/arch/x86/math-emu/poly_l2.c b/arch/x86/math-emu/poly_l2.c
+index dd00e1d..8e2ff4b 100644
+--- a/arch/x86/math-emu/poly_l2.c
++++ b/arch/x86/math-emu/poly_l2.c
+@@ -10,7 +10,6 @@
+ | |
+ +---------------------------------------------------------------------------*/
+
+-
+ #include "exception.h"
+ #include "reg_constant.h"
+ #include "fpu_emu.h"
+@@ -18,184 +17,163 @@
+ #include "control_w.h"
+ #include "poly.h"
+
+-
+ static void log2_kernel(FPU_REG const *arg, u_char argsign,
+- Xsig *accum_result, long int *expon);
+-
++ Xsig * accum_result, long int *expon);
+
+ /*--- poly_l2() -------------------------------------------------------------+
+ | Base 2 logarithm by a polynomial approximation. |
+ +---------------------------------------------------------------------------*/
+-void poly_l2(FPU_REG *st0_ptr, FPU_REG *st1_ptr, u_char st1_sign)
++void poly_l2(FPU_REG *st0_ptr, FPU_REG *st1_ptr, u_char st1_sign)
+ {
+- long int exponent, expon, expon_expon;
+- Xsig accumulator, expon_accum, yaccum;
+- u_char sign, argsign;
+- FPU_REG x;
+- int tag;
+-
+- exponent = exponent16(st0_ptr);
+-
+- /* From st0_ptr, make a number > sqrt(2)/2 and < sqrt(2) */
+- if ( st0_ptr->sigh > (unsigned)0xb504f334 )
+- {
+- /* Treat as sqrt(2)/2 < st0_ptr < 1 */
+- significand(&x) = - significand(st0_ptr);
+- setexponent16(&x, -1);
+- exponent++;
+- argsign = SIGN_NEG;
+- }
+- else
+- {
+- /* Treat as 1 <= st0_ptr < sqrt(2) */
+- x.sigh = st0_ptr->sigh - 0x80000000;
+- x.sigl = st0_ptr->sigl;
+- setexponent16(&x, 0);
+- argsign = SIGN_POS;
+- }
+- tag = FPU_normalize_nuo(&x);
+-
+- if ( tag == TAG_Zero )
+- {
+- expon = 0;
+- accumulator.msw = accumulator.midw = accumulator.lsw = 0;
+- }
+- else
+- {
+- log2_kernel(&x, argsign, &accumulator, &expon);
+- }
+-
+- if ( exponent < 0 )
+- {
+- sign = SIGN_NEG;
+- exponent = -exponent;
+- }
+- else
+- sign = SIGN_POS;
+- expon_accum.msw = exponent; expon_accum.midw = expon_accum.lsw = 0;
+- if ( exponent )
+- {
+- expon_expon = 31 + norm_Xsig(&expon_accum);
+- shr_Xsig(&accumulator, expon_expon - expon);
+-
+- if ( sign ^ argsign )
+- negate_Xsig(&accumulator);
+- add_Xsig_Xsig(&accumulator, &expon_accum);
+- }
+- else
+- {
+- expon_expon = expon;
+- sign = argsign;
+- }
+-
+- yaccum.lsw = 0; XSIG_LL(yaccum) = significand(st1_ptr);
+- mul_Xsig_Xsig(&accumulator, &yaccum);
+-
+- expon_expon += round_Xsig(&accumulator);
+-
+- if ( accumulator.msw == 0 )
+- {
+- FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
+- return;
+- }
+-
+- significand(st1_ptr) = XSIG_LL(accumulator);
+- setexponent16(st1_ptr, expon_expon + exponent16(st1_ptr) + 1);
+-
+- tag = FPU_round(st1_ptr, 1, 0, FULL_PRECISION, sign ^ st1_sign);
+- FPU_settagi(1, tag);
+-
+- set_precision_flag_up(); /* 80486 appears to always do this */
+-
+- return;
++ long int exponent, expon, expon_expon;
++ Xsig accumulator, expon_accum, yaccum;
++ u_char sign, argsign;
++ FPU_REG x;
++ int tag;
+
-+ if (!ctx->queue.qlen) {
-+ smp_mb__before_clear_bit();
-+ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
++ exponent = exponent16(st0_ptr);
+
-+ if (!ctx->queue.qlen ||
-+ test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
-+ goto out;
++ /* From st0_ptr, make a number > sqrt(2)/2 and < sqrt(2) */
++ if (st0_ptr->sigh > (unsigned)0xb504f334) {
++ /* Treat as sqrt(2)/2 < st0_ptr < 1 */
++ significand(&x) = -significand(st0_ptr);
++ setexponent16(&x, -1);
++ exponent++;
++ argsign = SIGN_NEG;
++ } else {
++ /* Treat as 1 <= st0_ptr < sqrt(2) */
++ x.sigh = st0_ptr->sigh - 0x80000000;
++ x.sigl = st0_ptr->sigl;
++ setexponent16(&x, 0);
++ argsign = SIGN_POS;
++ }
++ tag = FPU_normalize_nuo(&x);
+
+-}
++ if (tag == TAG_Zero) {
++ expon = 0;
++ accumulator.msw = accumulator.midw = accumulator.lsw = 0;
++ } else {
++ log2_kernel(&x, argsign, &accumulator, &expon);
+ }
+
-+ queued = schedule_work(&ctx->postponed);
-+ BUG_ON(!queued);
-+
-+out:
-+ return ctx->err;
-+}
-+
-+static int async_chainiv_postpone_request(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ int err;
++ if (exponent < 0) {
++ sign = SIGN_NEG;
++ exponent = -exponent;
++ } else
++ sign = SIGN_POS;
++ expon_accum.msw = exponent;
++ expon_accum.midw = expon_accum.lsw = 0;
++ if (exponent) {
++ expon_expon = 31 + norm_Xsig(&expon_accum);
++ shr_Xsig(&accumulator, expon_expon - expon);
+
-+ spin_lock_bh(&ctx->lock);
-+ err = skcipher_enqueue_givcrypt(&ctx->queue, req);
-+ spin_unlock_bh(&ctx->lock);
++ if (sign ^ argsign)
++ negate_Xsig(&accumulator);
++ add_Xsig_Xsig(&accumulator, &expon_accum);
++ } else {
++ expon_expon = expon;
++ sign = argsign;
++ }
+
-+ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
-+ return err;
++ yaccum.lsw = 0;
++ XSIG_LL(yaccum) = significand(st1_ptr);
++ mul_Xsig_Xsig(&accumulator, &yaccum);
+
-+ ctx->err = err;
-+ return async_chainiv_schedule_work(ctx);
-+}
++ expon_expon += round_Xsig(&accumulator);
+
-+static int async_chainiv_givencrypt_tail(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
-+ unsigned int ivsize = crypto_ablkcipher_ivsize(geniv);
++ if (accumulator.msw == 0) {
++ FPU_copy_to_reg1(&CONST_Z, TAG_Zero);
++ return;
++ }
+
-+ memcpy(req->giv, ctx->iv, ivsize);
-+ memcpy(subreq->info, ctx->iv, ivsize);
++ significand(st1_ptr) = XSIG_LL(accumulator);
++ setexponent16(st1_ptr, expon_expon + exponent16(st1_ptr) + 1);
+
++ tag = FPU_round(st1_ptr, 1, 0, FULL_PRECISION, sign ^ st1_sign);
++ FPU_settagi(1, tag);
+
-+ ctx->err = crypto_ablkcipher_encrypt(subreq);
-+ if (ctx->err)
-+ goto out;
++ set_precision_flag_up(); /* 80486 appears to always do this */
+
-+ memcpy(ctx->iv, subreq->info, ivsize);
++ return;
+
-+out:
-+ return async_chainiv_schedule_work(ctx);
+}
-+
-+static int async_chainiv_givencrypt(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
-+
-+ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
-+ ablkcipher_request_set_callback(subreq, req->creq.base.flags,
-+ req->creq.base.complete,
-+ req->creq.base.data);
-+ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
-+ req->creq.nbytes, req->creq.info);
-+
-+ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
-+ goto postpone;
-+
-+ if (ctx->queue.qlen) {
-+ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
-+ goto postpone;
+
+ /*--- poly_l2p1() -----------------------------------------------------------+
+ | Base 2 logarithm by a polynomial approximation. |
+ | log2(x+1) |
+ +---------------------------------------------------------------------------*/
+-int poly_l2p1(u_char sign0, u_char sign1,
+- FPU_REG *st0_ptr, FPU_REG *st1_ptr, FPU_REG *dest)
++int poly_l2p1(u_char sign0, u_char sign1,
++ FPU_REG * st0_ptr, FPU_REG * st1_ptr, FPU_REG * dest)
+ {
+- u_char tag;
+- long int exponent;
+- Xsig accumulator, yaccum;
++ u_char tag;
++ long int exponent;
++ Xsig accumulator, yaccum;
+
+- if ( exponent16(st0_ptr) < 0 )
+- {
+- log2_kernel(st0_ptr, sign0, &accumulator, &exponent);
++ if (exponent16(st0_ptr) < 0) {
++ log2_kernel(st0_ptr, sign0, &accumulator, &exponent);
+
+- yaccum.lsw = 0;
+- XSIG_LL(yaccum) = significand(st1_ptr);
+- mul_Xsig_Xsig(&accumulator, &yaccum);
++ yaccum.lsw = 0;
++ XSIG_LL(yaccum) = significand(st1_ptr);
++ mul_Xsig_Xsig(&accumulator, &yaccum);
+
+- exponent += round_Xsig(&accumulator);
++ exponent += round_Xsig(&accumulator);
+
+- exponent += exponent16(st1_ptr) + 1;
+- if ( exponent < EXP_WAY_UNDER ) exponent = EXP_WAY_UNDER;
++ exponent += exponent16(st1_ptr) + 1;
++ if (exponent < EXP_WAY_UNDER)
++ exponent = EXP_WAY_UNDER;
+
+- significand(dest) = XSIG_LL(accumulator);
+- setexponent16(dest, exponent);
++ significand(dest) = XSIG_LL(accumulator);
++ setexponent16(dest, exponent);
+
+- tag = FPU_round(dest, 1, 0, FULL_PRECISION, sign0 ^ sign1);
+- FPU_settagi(1, tag);
++ tag = FPU_round(dest, 1, 0, FULL_PRECISION, sign0 ^ sign1);
++ FPU_settagi(1, tag);
+
+- if ( tag == TAG_Valid )
+- set_precision_flag_up(); /* 80486 appears to always do this */
+- }
+- else
+- {
+- /* The magnitude of st0_ptr is far too large. */
++ if (tag == TAG_Valid)
++ set_precision_flag_up(); /* 80486 appears to always do this */
++ } else {
++ /* The magnitude of st0_ptr is far too large. */
+
+- if ( sign0 != SIGN_POS )
+- {
+- /* Trying to get the log of a negative number. */
+-#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
+- changesign(st1_ptr);
++ if (sign0 != SIGN_POS) {
++ /* Trying to get the log of a negative number. */
++#ifdef PECULIAR_486 /* Stupid 80486 doesn't worry about log(negative). */
++ changesign(st1_ptr);
+ #else
+- if ( arith_invalid(1) < 0 )
+- return 1;
++ if (arith_invalid(1) < 0)
++ return 1;
+ #endif /* PECULIAR_486 */
+- }
++ }
+
+- /* 80486 appears to do this */
+- if ( sign0 == SIGN_NEG )
+- set_precision_flag_down();
+- else
+- set_precision_flag_up();
+- }
++ /* 80486 appears to do this */
++ if (sign0 == SIGN_NEG)
++ set_precision_flag_down();
++ else
++ set_precision_flag_up();
+ }
+
+- if ( exponent(dest) <= EXP_UNDER )
+- EXCEPTION(EX_Underflow);
++ if (exponent(dest) <= EXP_UNDER)
++ EXCEPTION(EX_Underflow);
+
+- return 0;
++ return 0;
+
+ }
+
+-
+-
+-
+ #undef HIPOWER
+ #define HIPOWER 10
+-static const unsigned long long logterms[HIPOWER] =
+-{
+- 0x2a8eca5705fc2ef0LL,
+- 0xf6384ee1d01febceLL,
+- 0x093bb62877cdf642LL,
+- 0x006985d8a9ec439bLL,
+- 0x0005212c4f55a9c8LL,
+- 0x00004326a16927f0LL,
+- 0x0000038d1d80a0e7LL,
+- 0x0000003141cc80c6LL,
+- 0x00000002b1668c9fLL,
+- 0x000000002c7a46aaLL
++static const unsigned long long logterms[HIPOWER] = {
++ 0x2a8eca5705fc2ef0LL,
++ 0xf6384ee1d01febceLL,
++ 0x093bb62877cdf642LL,
++ 0x006985d8a9ec439bLL,
++ 0x0005212c4f55a9c8LL,
++ 0x00004326a16927f0LL,
++ 0x0000038d1d80a0e7LL,
++ 0x0000003141cc80c6LL,
++ 0x00000002b1668c9fLL,
++ 0x000000002c7a46aaLL
+ };
+
+ static const unsigned long leadterm = 0xb8000000;
+
+-
+ /*--- log2_kernel() ---------------------------------------------------------+
+ | Base 2 logarithm by a polynomial approximation. |
+ | log2(x+1) |
+@@ -203,70 +181,64 @@ static const unsigned long leadterm = 0xb8000000;
+ static void log2_kernel(FPU_REG const *arg, u_char argsign, Xsig *accum_result,
+ long int *expon)
+ {
+- long int exponent, adj;
+- unsigned long long Xsq;
+- Xsig accumulator, Numer, Denom, argSignif, arg_signif;
+-
+- exponent = exponent16(arg);
+- Numer.lsw = Denom.lsw = 0;
+- XSIG_LL(Numer) = XSIG_LL(Denom) = significand(arg);
+- if ( argsign == SIGN_POS )
+- {
+- shr_Xsig(&Denom, 2 - (1 + exponent));
+- Denom.msw |= 0x80000000;
+- div_Xsig(&Numer, &Denom, &argSignif);
+- }
+- else
+- {
+- shr_Xsig(&Denom, 1 - (1 + exponent));
+- negate_Xsig(&Denom);
+- if ( Denom.msw & 0x80000000 )
+- {
+- div_Xsig(&Numer, &Denom, &argSignif);
+- exponent ++;
+- }
+- else
+- {
+- /* Denom must be 1.0 */
+- argSignif.lsw = Numer.lsw; argSignif.midw = Numer.midw;
+- argSignif.msw = Numer.msw;
++ long int exponent, adj;
++ unsigned long long Xsq;
++ Xsig accumulator, Numer, Denom, argSignif, arg_signif;
++
++ exponent = exponent16(arg);
++ Numer.lsw = Denom.lsw = 0;
++ XSIG_LL(Numer) = XSIG_LL(Denom) = significand(arg);
++ if (argsign == SIGN_POS) {
++ shr_Xsig(&Denom, 2 - (1 + exponent));
++ Denom.msw |= 0x80000000;
++ div_Xsig(&Numer, &Denom, &argSignif);
++ } else {
++ shr_Xsig(&Denom, 1 - (1 + exponent));
++ negate_Xsig(&Denom);
++ if (Denom.msw & 0x80000000) {
++ div_Xsig(&Numer, &Denom, &argSignif);
++ exponent++;
++ } else {
++ /* Denom must be 1.0 */
++ argSignif.lsw = Numer.lsw;
++ argSignif.midw = Numer.midw;
++ argSignif.msw = Numer.msw;
++ }
+ }
+- }
+
+ #ifndef PECULIAR_486
+- /* Should check here that |local_arg| is within the valid range */
+- if ( exponent >= -2 )
+- {
+- if ( (exponent > -2) ||
+- (argSignif.msw > (unsigned)0xafb0ccc0) )
+- {
+- /* The argument is too large */
++ /* Should check here that |local_arg| is within the valid range */
++ if (exponent >= -2) {
++ if ((exponent > -2) || (argSignif.msw > (unsigned)0xafb0ccc0)) {
++ /* The argument is too large */
++ }
+ }
+- }
+ #endif /* PECULIAR_486 */
+
+- arg_signif.lsw = argSignif.lsw; XSIG_LL(arg_signif) = XSIG_LL(argSignif);
+- adj = norm_Xsig(&argSignif);
+- accumulator.lsw = argSignif.lsw; XSIG_LL(accumulator) = XSIG_LL(argSignif);
+- mul_Xsig_Xsig(&accumulator, &accumulator);
+- shr_Xsig(&accumulator, 2*(-1 - (1 + exponent + adj)));
+- Xsq = XSIG_LL(accumulator);
+- if ( accumulator.lsw & 0x80000000 )
+- Xsq++;
+-
+- accumulator.msw = accumulator.midw = accumulator.lsw = 0;
+- /* Do the basic fixed point polynomial evaluation */
+- polynomial_Xsig(&accumulator, &Xsq, logterms, HIPOWER-1);
+-
+- mul_Xsig_Xsig(&accumulator, &argSignif);
+- shr_Xsig(&accumulator, 6 - adj);
+-
+- mul32_Xsig(&arg_signif, leadterm);
+- add_two_Xsig(&accumulator, &arg_signif, &exponent);
+-
+- *expon = exponent + 1;
+- accum_result->lsw = accumulator.lsw;
+- accum_result->midw = accumulator.midw;
+- accum_result->msw = accumulator.msw;
++ arg_signif.lsw = argSignif.lsw;
++ XSIG_LL(arg_signif) = XSIG_LL(argSignif);
++ adj = norm_Xsig(&argSignif);
++ accumulator.lsw = argSignif.lsw;
++ XSIG_LL(accumulator) = XSIG_LL(argSignif);
++ mul_Xsig_Xsig(&accumulator, &accumulator);
++ shr_Xsig(&accumulator, 2 * (-1 - (1 + exponent + adj)));
++ Xsq = XSIG_LL(accumulator);
++ if (accumulator.lsw & 0x80000000)
++ Xsq++;
++
++ accumulator.msw = accumulator.midw = accumulator.lsw = 0;
++ /* Do the basic fixed point polynomial evaluation */
++ polynomial_Xsig(&accumulator, &Xsq, logterms, HIPOWER - 1);
++
++ mul_Xsig_Xsig(&accumulator, &argSignif);
++ shr_Xsig(&accumulator, 6 - adj);
++
++ mul32_Xsig(&arg_signif, leadterm);
++ add_two_Xsig(&accumulator, &arg_signif, &exponent);
++
++ *expon = exponent + 1;
++ accum_result->lsw = accumulator.lsw;
++ accum_result->midw = accumulator.midw;
++ accum_result->msw = accumulator.msw;
+
+ }
+diff --git a/arch/x86/math-emu/poly_sin.c b/arch/x86/math-emu/poly_sin.c
+index a36313f..b862039 100644
+--- a/arch/x86/math-emu/poly_sin.c
++++ b/arch/x86/math-emu/poly_sin.c
+@@ -11,7 +11,6 @@
+ | |
+ +---------------------------------------------------------------------------*/
+
+-
+ #include "exception.h"
+ #include "reg_constant.h"
+ #include "fpu_emu.h"
+@@ -19,379 +18,361 @@
+ #include "control_w.h"
+ #include "poly.h"
+
+-
+ #define N_COEFF_P 4
+ #define N_COEFF_N 4
+
+-static const unsigned long long pos_terms_l[N_COEFF_P] =
+-{
+- 0xaaaaaaaaaaaaaaabLL,
+- 0x00d00d00d00cf906LL,
+- 0x000006b99159a8bbLL,
+- 0x000000000d7392e6LL
++static const unsigned long long pos_terms_l[N_COEFF_P] = {
++ 0xaaaaaaaaaaaaaaabLL,
++ 0x00d00d00d00cf906LL,
++ 0x000006b99159a8bbLL,
++ 0x000000000d7392e6LL
+ };
+
+-static const unsigned long long neg_terms_l[N_COEFF_N] =
+-{
+- 0x2222222222222167LL,
+- 0x0002e3bc74aab624LL,
+- 0x0000000b09229062LL,
+- 0x00000000000c7973LL
++static const unsigned long long neg_terms_l[N_COEFF_N] = {
++ 0x2222222222222167LL,
++ 0x0002e3bc74aab624LL,
++ 0x0000000b09229062LL,
++ 0x00000000000c7973LL
+ };
+
+-
+-
+ #define N_COEFF_PH 4
+ #define N_COEFF_NH 4
+-static const unsigned long long pos_terms_h[N_COEFF_PH] =
+-{
+- 0x0000000000000000LL,
+- 0x05b05b05b05b0406LL,
+- 0x000049f93edd91a9LL,
+- 0x00000000c9c9ed62LL
++static const unsigned long long pos_terms_h[N_COEFF_PH] = {
++ 0x0000000000000000LL,
++ 0x05b05b05b05b0406LL,
++ 0x000049f93edd91a9LL,
++ 0x00000000c9c9ed62LL
+ };
+
+-static const unsigned long long neg_terms_h[N_COEFF_NH] =
+-{
+- 0xaaaaaaaaaaaaaa98LL,
+- 0x001a01a01a019064LL,
+- 0x0000008f76c68a77LL,
+- 0x0000000000d58f5eLL
++static const unsigned long long neg_terms_h[N_COEFF_NH] = {
++ 0xaaaaaaaaaaaaaa98LL,
++ 0x001a01a01a019064LL,
++ 0x0000008f76c68a77LL,
++ 0x0000000000d58f5eLL
+ };
+
+-
+ /*--- poly_sine() -----------------------------------------------------------+
+ | |
+ +---------------------------------------------------------------------------*/
+-void poly_sine(FPU_REG *st0_ptr)
++void poly_sine(FPU_REG *st0_ptr)
+ {
+- int exponent, echange;
+- Xsig accumulator, argSqrd, argTo4;
+- unsigned long fix_up, adj;
+- unsigned long long fixed_arg;
+- FPU_REG result;
++ int exponent, echange;
++ Xsig accumulator, argSqrd, argTo4;
++ unsigned long fix_up, adj;
++ unsigned long long fixed_arg;
++ FPU_REG result;
+
+- exponent = exponent(st0_ptr);
++ exponent = exponent(st0_ptr);
+
+- accumulator.lsw = accumulator.midw = accumulator.msw = 0;
++ accumulator.lsw = accumulator.midw = accumulator.msw = 0;
+
+- /* Split into two ranges, for arguments below and above 1.0 */
+- /* The boundary between upper and lower is approx 0.88309101259 */
+- if ( (exponent < -1) || ((exponent == -1) && (st0_ptr->sigh <= 0xe21240aa)) )
+- {
+- /* The argument is <= 0.88309101259 */
++ /* Split into two ranges, for arguments below and above 1.0 */
++ /* The boundary between upper and lower is approx 0.88309101259 */
++ if ((exponent < -1)
++ || ((exponent == -1) && (st0_ptr->sigh <= 0xe21240aa))) {
++ /* The argument is <= 0.88309101259 */
+
+- argSqrd.msw = st0_ptr->sigh; argSqrd.midw = st0_ptr->sigl; argSqrd.lsw = 0;
+- mul64_Xsig(&argSqrd, &significand(st0_ptr));
+- shr_Xsig(&argSqrd, 2*(-1-exponent));
+- argTo4.msw = argSqrd.msw; argTo4.midw = argSqrd.midw;
+- argTo4.lsw = argSqrd.lsw;
+- mul_Xsig_Xsig(&argTo4, &argTo4);
++ argSqrd.msw = st0_ptr->sigh;
++ argSqrd.midw = st0_ptr->sigl;
++ argSqrd.lsw = 0;
++ mul64_Xsig(&argSqrd, &significand(st0_ptr));
++ shr_Xsig(&argSqrd, 2 * (-1 - exponent));
++ argTo4.msw = argSqrd.msw;
++ argTo4.midw = argSqrd.midw;
++ argTo4.lsw = argSqrd.lsw;
++ mul_Xsig_Xsig(&argTo4, &argTo4);
+
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_l,
+- N_COEFF_N-1);
+- mul_Xsig_Xsig(&accumulator, &argSqrd);
+- negate_Xsig(&accumulator);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_l,
++ N_COEFF_N - 1);
++ mul_Xsig_Xsig(&accumulator, &argSqrd);
++ negate_Xsig(&accumulator);
+
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_l,
+- N_COEFF_P-1);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_l,
++ N_COEFF_P - 1);
+
+- shr_Xsig(&accumulator, 2); /* Divide by four */
+- accumulator.msw |= 0x80000000; /* Add 1.0 */
++ shr_Xsig(&accumulator, 2); /* Divide by four */
++ accumulator.msw |= 0x80000000; /* Add 1.0 */
+
+- mul64_Xsig(&accumulator, &significand(st0_ptr));
+- mul64_Xsig(&accumulator, &significand(st0_ptr));
+- mul64_Xsig(&accumulator, &significand(st0_ptr));
++ mul64_Xsig(&accumulator, &significand(st0_ptr));
++ mul64_Xsig(&accumulator, &significand(st0_ptr));
++ mul64_Xsig(&accumulator, &significand(st0_ptr));
+
+- /* Divide by four, FPU_REG compatible, etc */
+- exponent = 3*exponent;
++ /* Divide by four, FPU_REG compatible, etc */
++ exponent = 3 * exponent;
+
+- /* The minimum exponent difference is 3 */
+- shr_Xsig(&accumulator, exponent(st0_ptr) - exponent);
++ /* The minimum exponent difference is 3 */
++ shr_Xsig(&accumulator, exponent(st0_ptr) - exponent);
+
+- negate_Xsig(&accumulator);
+- XSIG_LL(accumulator) += significand(st0_ptr);
++ negate_Xsig(&accumulator);
++ XSIG_LL(accumulator) += significand(st0_ptr);
+
+- echange = round_Xsig(&accumulator);
++ echange = round_Xsig(&accumulator);
+
+- setexponentpos(&result, exponent(st0_ptr) + echange);
+- }
+- else
+- {
+- /* The argument is > 0.88309101259 */
+- /* We use sin(st(0)) = cos(pi/2-st(0)) */
++ setexponentpos(&result, exponent(st0_ptr) + echange);
++ } else {
++ /* The argument is > 0.88309101259 */
++ /* We use sin(st(0)) = cos(pi/2-st(0)) */
+
+- fixed_arg = significand(st0_ptr);
++ fixed_arg = significand(st0_ptr);
+
+- if ( exponent == 0 )
+- {
+- /* The argument is >= 1.0 */
++ if (exponent == 0) {
++ /* The argument is >= 1.0 */
+
+- /* Put the binary point at the left. */
+- fixed_arg <<= 1;
+- }
+- /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
+- fixed_arg = 0x921fb54442d18469LL - fixed_arg;
+- /* There is a special case which arises due to rounding, to fix here. */
+- if ( fixed_arg == 0xffffffffffffffffLL )
+- fixed_arg = 0;
++ /* Put the binary point at the left. */
++ fixed_arg <<= 1;
++ }
++ /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
++ fixed_arg = 0x921fb54442d18469LL - fixed_arg;
++ /* There is a special case which arises due to rounding, to fix here. */
++ if (fixed_arg == 0xffffffffffffffffLL)
++ fixed_arg = 0;
+
+- XSIG_LL(argSqrd) = fixed_arg; argSqrd.lsw = 0;
+- mul64_Xsig(&argSqrd, &fixed_arg);
++ XSIG_LL(argSqrd) = fixed_arg;
++ argSqrd.lsw = 0;
++ mul64_Xsig(&argSqrd, &fixed_arg);
+
+- XSIG_LL(argTo4) = XSIG_LL(argSqrd); argTo4.lsw = argSqrd.lsw;
+- mul_Xsig_Xsig(&argTo4, &argTo4);
++ XSIG_LL(argTo4) = XSIG_LL(argSqrd);
++ argTo4.lsw = argSqrd.lsw;
++ mul_Xsig_Xsig(&argTo4, &argTo4);
+
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_h,
+- N_COEFF_NH-1);
+- mul_Xsig_Xsig(&accumulator, &argSqrd);
+- negate_Xsig(&accumulator);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_h,
++ N_COEFF_NH - 1);
++ mul_Xsig_Xsig(&accumulator, &argSqrd);
++ negate_Xsig(&accumulator);
+
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_h,
+- N_COEFF_PH-1);
+- negate_Xsig(&accumulator);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_h,
++ N_COEFF_PH - 1);
++ negate_Xsig(&accumulator);
+
+- mul64_Xsig(&accumulator, &fixed_arg);
+- mul64_Xsig(&accumulator, &fixed_arg);
++ mul64_Xsig(&accumulator, &fixed_arg);
++ mul64_Xsig(&accumulator, &fixed_arg);
+
+- shr_Xsig(&accumulator, 3);
+- negate_Xsig(&accumulator);
++ shr_Xsig(&accumulator, 3);
++ negate_Xsig(&accumulator);
+
+- add_Xsig_Xsig(&accumulator, &argSqrd);
++ add_Xsig_Xsig(&accumulator, &argSqrd);
+
+- shr_Xsig(&accumulator, 1);
++ shr_Xsig(&accumulator, 1);
+
+- accumulator.lsw |= 1; /* A zero accumulator here would cause problems */
+- negate_Xsig(&accumulator);
++ accumulator.lsw |= 1; /* A zero accumulator here would cause problems */
++ negate_Xsig(&accumulator);
+
+- /* The basic computation is complete. Now fix the answer to
+- compensate for the error due to the approximation used for
+- pi/2
+- */
++ /* The basic computation is complete. Now fix the answer to
++ compensate for the error due to the approximation used for
++ pi/2
++ */
+
+- /* This has an exponent of -65 */
+- fix_up = 0x898cc517;
+- /* The fix-up needs to be improved for larger args */
+- if ( argSqrd.msw & 0xffc00000 )
+- {
+- /* Get about 32 bit precision in these: */
+- fix_up -= mul_32_32(0x898cc517, argSqrd.msw) / 6;
+- }
+- fix_up = mul_32_32(fix_up, LL_MSW(fixed_arg));
++ /* This has an exponent of -65 */
++ fix_up = 0x898cc517;
++ /* The fix-up needs to be improved for larger args */
++ if (argSqrd.msw & 0xffc00000) {
++ /* Get about 32 bit precision in these: */
++ fix_up -= mul_32_32(0x898cc517, argSqrd.msw) / 6;
++ }
++ fix_up = mul_32_32(fix_up, LL_MSW(fixed_arg));
+
+- adj = accumulator.lsw; /* temp save */
+- accumulator.lsw -= fix_up;
+- if ( accumulator.lsw > adj )
+- XSIG_LL(accumulator) --;
++ adj = accumulator.lsw; /* temp save */
++ accumulator.lsw -= fix_up;
++ if (accumulator.lsw > adj)
++ XSIG_LL(accumulator)--;
+
+- echange = round_Xsig(&accumulator);
++ echange = round_Xsig(&accumulator);
+
+- setexponentpos(&result, echange - 1);
+- }
++ setexponentpos(&result, echange - 1);
++ }
+
+- significand(&result) = XSIG_LL(accumulator);
+- setsign(&result, getsign(st0_ptr));
+- FPU_copy_to_reg0(&result, TAG_Valid);
++ significand(&result) = XSIG_LL(accumulator);
++ setsign(&result, getsign(st0_ptr));
++ FPU_copy_to_reg0(&result, TAG_Valid);
+
+ #ifdef PARANOID
+- if ( (exponent(&result) >= 0)
+- && (significand(&result) > 0x8000000000000000LL) )
+- {
+- EXCEPTION(EX_INTERNAL|0x150);
+- }
++ if ((exponent(&result) >= 0)
++ && (significand(&result) > 0x8000000000000000LL)) {
++ EXCEPTION(EX_INTERNAL | 0x150);
++ }
+ #endif /* PARANOID */
+
+ }
+
+-
+-
+ /*--- poly_cos() ------------------------------------------------------------+
+ | |
+ +---------------------------------------------------------------------------*/
+-void poly_cos(FPU_REG *st0_ptr)
++void poly_cos(FPU_REG *st0_ptr)
+ {
+- FPU_REG result;
+- long int exponent, exp2, echange;
+- Xsig accumulator, argSqrd, fix_up, argTo4;
+- unsigned long long fixed_arg;
++ FPU_REG result;
++ long int exponent, exp2, echange;
++ Xsig accumulator, argSqrd, fix_up, argTo4;
++ unsigned long long fixed_arg;
+
+ #ifdef PARANOID
+- if ( (exponent(st0_ptr) > 0)
+- || ((exponent(st0_ptr) == 0)
+- && (significand(st0_ptr) > 0xc90fdaa22168c234LL)) )
+- {
+- EXCEPTION(EX_Invalid);
+- FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
+- return;
+- }
+-#endif /* PARANOID */
+-
+- exponent = exponent(st0_ptr);
+-
+- accumulator.lsw = accumulator.midw = accumulator.msw = 0;
+-
+- if ( (exponent < -1) || ((exponent == -1) && (st0_ptr->sigh <= 0xb00d6f54)) )
+- {
+- /* arg is < 0.687705 */
+-
+- argSqrd.msw = st0_ptr->sigh; argSqrd.midw = st0_ptr->sigl;
+- argSqrd.lsw = 0;
+- mul64_Xsig(&argSqrd, &significand(st0_ptr));
+-
+- if ( exponent < -1 )
+- {
+- /* shift the argument right by the required places */
+- shr_Xsig(&argSqrd, 2*(-1-exponent));
++ if ((exponent(st0_ptr) > 0)
++ || ((exponent(st0_ptr) == 0)
++ && (significand(st0_ptr) > 0xc90fdaa22168c234LL))) {
++ EXCEPTION(EX_Invalid);
++ FPU_copy_to_reg0(&CONST_QNaN, TAG_Special);
++ return;
+ }
++#endif /* PARANOID */
+
+- argTo4.msw = argSqrd.msw; argTo4.midw = argSqrd.midw;
+- argTo4.lsw = argSqrd.lsw;
+- mul_Xsig_Xsig(&argTo4, &argTo4);
+-
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_h,
+- N_COEFF_NH-1);
+- mul_Xsig_Xsig(&accumulator, &argSqrd);
+- negate_Xsig(&accumulator);
+-
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_h,
+- N_COEFF_PH-1);
+- negate_Xsig(&accumulator);
+-
+- mul64_Xsig(&accumulator, &significand(st0_ptr));
+- mul64_Xsig(&accumulator, &significand(st0_ptr));
+- shr_Xsig(&accumulator, -2*(1+exponent));
+-
+- shr_Xsig(&accumulator, 3);
+- negate_Xsig(&accumulator);
+-
+- add_Xsig_Xsig(&accumulator, &argSqrd);
+-
+- shr_Xsig(&accumulator, 1);
+-
+- /* It doesn't matter if accumulator is all zero here, the
+- following code will work ok */
+- negate_Xsig(&accumulator);
+-
+- if ( accumulator.lsw & 0x80000000 )
+- XSIG_LL(accumulator) ++;
+- if ( accumulator.msw == 0 )
+- {
+- /* The result is 1.0 */
+- FPU_copy_to_reg0(&CONST_1, TAG_Valid);
+- return;
+- }
+- else
+- {
+- significand(&result) = XSIG_LL(accumulator);
+-
+- /* will be a valid positive nr with expon = -1 */
+- setexponentpos(&result, -1);
+- }
+- }
+- else
+- {
+- fixed_arg = significand(st0_ptr);
+-
+- if ( exponent == 0 )
+- {
+- /* The argument is >= 1.0 */
+-
+- /* Put the binary point at the left. */
+- fixed_arg <<= 1;
+- }
+- /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
+- fixed_arg = 0x921fb54442d18469LL - fixed_arg;
+- /* There is a special case which arises due to rounding, to fix here. */
+- if ( fixed_arg == 0xffffffffffffffffLL )
+- fixed_arg = 0;
+-
+- exponent = -1;
+- exp2 = -1;
+-
+- /* A shift is needed here only for a narrow range of arguments,
+- i.e. for fixed_arg approx 2^-32, but we pick up more... */
+- if ( !(LL_MSW(fixed_arg) & 0xffff0000) )
+- {
+- fixed_arg <<= 16;
+- exponent -= 16;
+- exp2 -= 16;
+- }
+-
+- XSIG_LL(argSqrd) = fixed_arg; argSqrd.lsw = 0;
+- mul64_Xsig(&argSqrd, &fixed_arg);
+-
+- if ( exponent < -1 )
+- {
+- /* shift the argument right by the required places */
+- shr_Xsig(&argSqrd, 2*(-1-exponent));
+- }
+-
+- argTo4.msw = argSqrd.msw; argTo4.midw = argSqrd.midw;
+- argTo4.lsw = argSqrd.lsw;
+- mul_Xsig_Xsig(&argTo4, &argTo4);
+-
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_l,
+- N_COEFF_N-1);
+- mul_Xsig_Xsig(&accumulator, &argSqrd);
+- negate_Xsig(&accumulator);
+-
+- polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_l,
+- N_COEFF_P-1);
+-
+- shr_Xsig(&accumulator, 2); /* Divide by four */
+- accumulator.msw |= 0x80000000; /* Add 1.0 */
+-
+- mul64_Xsig(&accumulator, &fixed_arg);
+- mul64_Xsig(&accumulator, &fixed_arg);
+- mul64_Xsig(&accumulator, &fixed_arg);
+-
+- /* Divide by four, FPU_REG compatible, etc */
+- exponent = 3*exponent;
+-
+- /* The minimum exponent difference is 3 */
+- shr_Xsig(&accumulator, exp2 - exponent);
+-
+- negate_Xsig(&accumulator);
+- XSIG_LL(accumulator) += fixed_arg;
+-
+- /* The basic computation is complete. Now fix the answer to
+- compensate for the error due to the approximation used for
+- pi/2
+- */
+-
+- /* This has an exponent of -65 */
+- XSIG_LL(fix_up) = 0x898cc51701b839a2ll;
+- fix_up.lsw = 0;
+-
+- /* The fix-up needs to be improved for larger args */
+- if ( argSqrd.msw & 0xffc00000 )
+- {
+- /* Get about 32 bit precision in these: */
+- fix_up.msw -= mul_32_32(0x898cc517, argSqrd.msw) / 2;
+- fix_up.msw += mul_32_32(0x898cc517, argTo4.msw) / 24;
++ exponent = exponent(st0_ptr);
+
-+ return async_chainiv_givencrypt_tail(req);
-+
-+postpone:
-+ return async_chainiv_postpone_request(req);
-+}
-+
-+static int async_chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+
-+ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
-+ goto out;
++ accumulator.lsw = accumulator.midw = accumulator.msw = 0;
+
-+ if (crypto_ablkcipher_crt(geniv)->givencrypt !=
-+ async_chainiv_givencrypt_first)
-+ goto unlock;
++ if ((exponent < -1)
++ || ((exponent == -1) && (st0_ptr->sigh <= 0xb00d6f54))) {
++ /* arg is < 0.687705 */
+
-+ crypto_ablkcipher_crt(geniv)->givencrypt = async_chainiv_givencrypt;
-+ get_random_bytes(ctx->iv, crypto_ablkcipher_ivsize(geniv));
++ argSqrd.msw = st0_ptr->sigh;
++ argSqrd.midw = st0_ptr->sigl;
++ argSqrd.lsw = 0;
++ mul64_Xsig(&argSqrd, &significand(st0_ptr));
+
-+unlock:
-+ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
++ if (exponent < -1) {
++ /* shift the argument right by the required places */
++ shr_Xsig(&argSqrd, 2 * (-1 - exponent));
++ }
+
-+out:
-+ return async_chainiv_givencrypt(req);
-+}
++ argTo4.msw = argSqrd.msw;
++ argTo4.midw = argSqrd.midw;
++ argTo4.lsw = argSqrd.lsw;
++ mul_Xsig_Xsig(&argTo4, &argTo4);
+
-+static void async_chainiv_do_postponed(struct work_struct *work)
-+{
-+ struct async_chainiv_ctx *ctx = container_of(work,
-+ struct async_chainiv_ctx,
-+ postponed);
-+ struct skcipher_givcrypt_request *req;
-+ struct ablkcipher_request *subreq;
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_h,
++ N_COEFF_NH - 1);
++ mul_Xsig_Xsig(&accumulator, &argSqrd);
++ negate_Xsig(&accumulator);
+
-+ /* Only handle one request at a time to avoid hogging keventd. */
-+ spin_lock_bh(&ctx->lock);
-+ req = skcipher_dequeue_givcrypt(&ctx->queue);
-+ spin_unlock_bh(&ctx->lock);
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_h,
++ N_COEFF_PH - 1);
++ negate_Xsig(&accumulator);
+
-+ if (!req) {
-+ async_chainiv_schedule_work(ctx);
-+ return;
-+ }
++ mul64_Xsig(&accumulator, &significand(st0_ptr));
++ mul64_Xsig(&accumulator, &significand(st0_ptr));
++ shr_Xsig(&accumulator, -2 * (1 + exponent));
+
-+ subreq = skcipher_givcrypt_reqctx(req);
-+ subreq->base.flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
++ shr_Xsig(&accumulator, 3);
++ negate_Xsig(&accumulator);
+
-+ async_chainiv_givencrypt_tail(req);
-+}
++ add_Xsig_Xsig(&accumulator, &argSqrd);
+
-+static int async_chainiv_init(struct crypto_tfm *tfm)
-+{
-+ struct async_chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++ shr_Xsig(&accumulator, 1);
+
-+ spin_lock_init(&ctx->lock);
++ /* It doesn't matter if accumulator is all zero here, the
++ following code will work ok */
++ negate_Xsig(&accumulator);
+
-+ crypto_init_queue(&ctx->queue, 100);
-+ INIT_WORK(&ctx->postponed, async_chainiv_do_postponed);
++ if (accumulator.lsw & 0x80000000)
++ XSIG_LL(accumulator)++;
++ if (accumulator.msw == 0) {
++ /* The result is 1.0 */
++ FPU_copy_to_reg0(&CONST_1, TAG_Valid);
++ return;
++ } else {
++ significand(&result) = XSIG_LL(accumulator);
+
-+ return chainiv_init_common(tfm);
-+}
++ /* will be a valid positive nr with expon = -1 */
++ setexponentpos(&result, -1);
++ }
++ } else {
++ fixed_arg = significand(st0_ptr);
+
-+static void async_chainiv_exit(struct crypto_tfm *tfm)
-+{
-+ struct async_chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++ if (exponent == 0) {
++ /* The argument is >= 1.0 */
+
-+ BUG_ON(test_bit(CHAINIV_STATE_INUSE, &ctx->state) || ctx->queue.qlen);
++ /* Put the binary point at the left. */
++ fixed_arg <<= 1;
++ }
++ /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
++ fixed_arg = 0x921fb54442d18469LL - fixed_arg;
++ /* There is a special case which arises due to rounding, to fix here. */
++ if (fixed_arg == 0xffffffffffffffffLL)
++ fixed_arg = 0;
+
-+ skcipher_geniv_exit(tfm);
-+}
++ exponent = -1;
++ exp2 = -1;
+
-+static struct crypto_template chainiv_tmpl;
++ /* A shift is needed here only for a narrow range of arguments,
++ i.e. for fixed_arg approx 2^-32, but we pick up more... */
++ if (!(LL_MSW(fixed_arg) & 0xffff0000)) {
++ fixed_arg <<= 16;
++ exponent -= 16;
++ exp2 -= 16;
++ }
+
-+static struct crypto_instance *chainiv_alloc(struct rtattr **tb)
-+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ int err;
++ XSIG_LL(argSqrd) = fixed_arg;
++ argSqrd.lsw = 0;
++ mul64_Xsig(&argSqrd, &fixed_arg);
+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
++ if (exponent < -1) {
++ /* shift the argument right by the required places */
++ shr_Xsig(&argSqrd, 2 * (-1 - exponent));
++ }
+
-+ inst = skcipher_geniv_alloc(&chainiv_tmpl, tb, 0, 0);
-+ if (IS_ERR(inst))
-+ goto out;
++ argTo4.msw = argSqrd.msw;
++ argTo4.midw = argSqrd.midw;
++ argTo4.lsw = argSqrd.lsw;
++ mul_Xsig_Xsig(&argTo4, &argTo4);
+
-+ inst->alg.cra_ablkcipher.givencrypt = chainiv_givencrypt_first;
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), neg_terms_l,
++ N_COEFF_N - 1);
++ mul_Xsig_Xsig(&accumulator, &argSqrd);
++ negate_Xsig(&accumulator);
+
-+ inst->alg.cra_init = chainiv_init;
-+ inst->alg.cra_exit = skcipher_geniv_exit;
++ polynomial_Xsig(&accumulator, &XSIG_LL(argTo4), pos_terms_l,
++ N_COEFF_P - 1);
+
-+ inst->alg.cra_ctxsize = sizeof(struct chainiv_ctx);
++ shr_Xsig(&accumulator, 2); /* Divide by four */
++ accumulator.msw |= 0x80000000; /* Add 1.0 */
+
-+ if (!crypto_requires_sync(algt->type, algt->mask)) {
-+ inst->alg.cra_flags |= CRYPTO_ALG_ASYNC;
++ mul64_Xsig(&accumulator, &fixed_arg);
++ mul64_Xsig(&accumulator, &fixed_arg);
++ mul64_Xsig(&accumulator, &fixed_arg);
+
-+ inst->alg.cra_ablkcipher.givencrypt =
-+ async_chainiv_givencrypt_first;
++ /* Divide by four, FPU_REG compatible, etc */
++ exponent = 3 * exponent;
+
-+ inst->alg.cra_init = async_chainiv_init;
-+ inst->alg.cra_exit = async_chainiv_exit;
++ /* The minimum exponent difference is 3 */
++ shr_Xsig(&accumulator, exp2 - exponent);
+
-+ inst->alg.cra_ctxsize = sizeof(struct async_chainiv_ctx);
-+ }
++ negate_Xsig(&accumulator);
++ XSIG_LL(accumulator) += fixed_arg;
+
-+ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++ /* The basic computation is complete. Now fix the answer to
++ compensate for the error due to the approximation used for
++ pi/2
++ */
+
-+out:
-+ return inst;
-+}
++ /* This has an exponent of -65 */
++ XSIG_LL(fix_up) = 0x898cc51701b839a2ll;
++ fix_up.lsw = 0;
+
-+static struct crypto_template chainiv_tmpl = {
-+ .name = "chainiv",
-+ .alloc = chainiv_alloc,
-+ .free = skcipher_geniv_free,
-+ .module = THIS_MODULE,
-+};
++ /* The fix-up needs to be improved for larger args */
++ if (argSqrd.msw & 0xffc00000) {
++ /* Get about 32 bit precision in these: */
++ fix_up.msw -= mul_32_32(0x898cc517, argSqrd.msw) / 2;
++ fix_up.msw += mul_32_32(0x898cc517, argTo4.msw) / 24;
++ }
+
-+static int __init chainiv_module_init(void)
-+{
-+ return crypto_register_template(&chainiv_tmpl);
-+}
++ exp2 += norm_Xsig(&accumulator);
++ shr_Xsig(&accumulator, 1); /* Prevent overflow */
++ exp2++;
++ shr_Xsig(&fix_up, 65 + exp2);
+
-+static void __exit chainiv_module_exit(void)
-+{
-+ crypto_unregister_template(&chainiv_tmpl);
-+}
++ add_Xsig_Xsig(&accumulator, &fix_up);
+
-+module_init(chainiv_module_init);
-+module_exit(chainiv_module_exit);
++ echange = round_Xsig(&accumulator);
+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Chain IV Generator");
-diff --git a/crypto/cryptd.c b/crypto/cryptd.c
-index 8bf2da8..074298f 100644
---- a/crypto/cryptd.c
-+++ b/crypto/cryptd.c
-@@ -228,7 +228,7 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
- struct crypto_alg *alg;
++ setexponentpos(&result, exp2 + echange);
++ significand(&result) = XSIG_LL(accumulator);
+ }
- alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_BLKCIPHER,
-- CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
-+ CRYPTO_ALG_TYPE_MASK);
- if (IS_ERR(alg))
- return ERR_PTR(PTR_ERR(alg));
+- exp2 += norm_Xsig(&accumulator);
+- shr_Xsig(&accumulator, 1); /* Prevent overflow */
+- exp2++;
+- shr_Xsig(&fix_up, 65 + exp2);
+-
+- add_Xsig_Xsig(&accumulator, &fix_up);
+-
+- echange = round_Xsig(&accumulator);
+-
+- setexponentpos(&result, exp2 + echange);
+- significand(&result) = XSIG_LL(accumulator);
+- }
+-
+- FPU_copy_to_reg0(&result, TAG_Valid);
++ FPU_copy_to_reg0(&result, TAG_Valid);
-@@ -236,13 +236,15 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
- if (IS_ERR(inst))
- goto out_put_alg;
+ #ifdef PARANOID
+- if ( (exponent(&result) >= 0)
+- && (significand(&result) > 0x8000000000000000LL) )
+- {
+- EXCEPTION(EX_INTERNAL|0x151);
+- }
++ if ((exponent(&result) >= 0)
++ && (significand(&result) > 0x8000000000000000LL)) {
++ EXCEPTION(EX_INTERNAL | 0x151);
++ }
+ #endif /* PARANOID */
-- inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
- inst->alg.cra_type = &crypto_ablkcipher_type;
+ }
+diff --git a/arch/x86/math-emu/poly_tan.c b/arch/x86/math-emu/poly_tan.c
+index 8df3e03..1875763 100644
+--- a/arch/x86/math-emu/poly_tan.c
++++ b/arch/x86/math-emu/poly_tan.c
+@@ -17,206 +17,196 @@
+ #include "control_w.h"
+ #include "poly.h"
- inst->alg.cra_ablkcipher.ivsize = alg->cra_blkcipher.ivsize;
- inst->alg.cra_ablkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
- inst->alg.cra_ablkcipher.max_keysize = alg->cra_blkcipher.max_keysize;
+-
+ #define HiPOWERop 3 /* odd poly, positive terms */
+-static const unsigned long long oddplterm[HiPOWERop] =
+-{
+- 0x0000000000000000LL,
+- 0x0051a1cf08fca228LL,
+- 0x0000000071284ff7LL
++static const unsigned long long oddplterm[HiPOWERop] = {
++ 0x0000000000000000LL,
++ 0x0051a1cf08fca228LL,
++ 0x0000000071284ff7LL
+ };
-+ inst->alg.cra_ablkcipher.geniv = alg->cra_blkcipher.geniv;
-+
- inst->alg.cra_ctxsize = sizeof(struct cryptd_blkcipher_ctx);
+ #define HiPOWERon 2 /* odd poly, negative terms */
+-static const unsigned long long oddnegterm[HiPOWERon] =
+-{
+- 0x1291a9a184244e80LL,
+- 0x0000583245819c21LL
++static const unsigned long long oddnegterm[HiPOWERon] = {
++ 0x1291a9a184244e80LL,
++ 0x0000583245819c21LL
+ };
- inst->alg.cra_init = cryptd_blkcipher_init_tfm;
-diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
-index 29f7747..ff7b3de 100644
---- a/crypto/crypto_null.c
-+++ b/crypto/crypto_null.c
-@@ -16,15 +16,17 @@
- * (at your option) any later version.
- *
- */
-+
-+#include <crypto/internal/skcipher.h>
- #include <linux/init.h>
- #include <linux/module.h>
- #include <linux/mm.h>
--#include <linux/crypto.h>
- #include <linux/string.h>
+ #define HiPOWERep 2 /* even poly, positive terms */
+-static const unsigned long long evenplterm[HiPOWERep] =
+-{
+- 0x0e848884b539e888LL,
+- 0x00003c7f18b887daLL
++static const unsigned long long evenplterm[HiPOWERep] = {
++ 0x0e848884b539e888LL,
++ 0x00003c7f18b887daLL
+ };
- #define NULL_KEY_SIZE 0
- #define NULL_BLOCK_SIZE 1
- #define NULL_DIGEST_SIZE 0
-+#define NULL_IV_SIZE 0
+ #define HiPOWERen 2 /* even poly, negative terms */
+-static const unsigned long long evennegterm[HiPOWERen] =
+-{
+- 0xf1f0200fd51569ccLL,
+- 0x003afb46105c4432LL
++static const unsigned long long evennegterm[HiPOWERen] = {
++ 0xf1f0200fd51569ccLL,
++ 0x003afb46105c4432LL
+ };
- static int null_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-@@ -55,6 +57,26 @@ static void null_crypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
- memcpy(dst, src, NULL_BLOCK_SIZE);
- }
+ static const unsigned long long twothirds = 0xaaaaaaaaaaaaaaabLL;
-+static int skcipher_null_crypt(struct blkcipher_desc *desc,
-+ struct scatterlist *dst,
-+ struct scatterlist *src, unsigned int nbytes)
-+{
-+ struct blkcipher_walk walk;
-+ int err;
+-
+ /*--- poly_tan() ------------------------------------------------------------+
+ | |
+ +---------------------------------------------------------------------------*/
+-void poly_tan(FPU_REG *st0_ptr)
++void poly_tan(FPU_REG *st0_ptr)
+ {
+- long int exponent;
+- int invert;
+- Xsig argSq, argSqSq, accumulatoro, accumulatore, accum,
+- argSignif, fix_up;
+- unsigned long adj;
++ long int exponent;
++ int invert;
++ Xsig argSq, argSqSq, accumulatoro, accumulatore, accum,
++ argSignif, fix_up;
++ unsigned long adj;
+
+- exponent = exponent(st0_ptr);
++ exponent = exponent(st0_ptr);
+
+ #ifdef PARANOID
+- if ( signnegative(st0_ptr) ) /* Can't hack a number < 0.0 */
+- { arith_invalid(0); return; } /* Need a positive number */
++ if (signnegative(st0_ptr)) { /* Can't hack a number < 0.0 */
++ arith_invalid(0);
++ return;
++ } /* Need a positive number */
+ #endif /* PARANOID */
+
+- /* Split the problem into two domains, smaller and larger than pi/4 */
+- if ( (exponent == 0) || ((exponent == -1) && (st0_ptr->sigh > 0xc90fdaa2)) )
+- {
+- /* The argument is greater than (approx) pi/4 */
+- invert = 1;
+- accum.lsw = 0;
+- XSIG_LL(accum) = significand(st0_ptr);
+-
+- if ( exponent == 0 )
+- {
+- /* The argument is >= 1.0 */
+- /* Put the binary point at the left. */
+- XSIG_LL(accum) <<= 1;
+- }
+- /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
+- XSIG_LL(accum) = 0x921fb54442d18469LL - XSIG_LL(accum);
+- /* This is a special case which arises due to rounding. */
+- if ( XSIG_LL(accum) == 0xffffffffffffffffLL )
+- {
+- FPU_settag0(TAG_Valid);
+- significand(st0_ptr) = 0x8a51e04daabda360LL;
+- setexponent16(st0_ptr, (0x41 + EXTENDED_Ebias) | SIGN_Negative);
+- return;
++ /* Split the problem into two domains, smaller and larger than pi/4 */
++ if ((exponent == 0)
++ || ((exponent == -1) && (st0_ptr->sigh > 0xc90fdaa2))) {
++ /* The argument is greater than (approx) pi/4 */
++ invert = 1;
++ accum.lsw = 0;
++ XSIG_LL(accum) = significand(st0_ptr);
++
++ if (exponent == 0) {
++ /* The argument is >= 1.0 */
++ /* Put the binary point at the left. */
++ XSIG_LL(accum) <<= 1;
++ }
++ /* pi/2 in hex is: 1.921fb54442d18469 898CC51701B839A2 52049C1 */
++ XSIG_LL(accum) = 0x921fb54442d18469LL - XSIG_LL(accum);
++ /* This is a special case which arises due to rounding. */
++ if (XSIG_LL(accum) == 0xffffffffffffffffLL) {
++ FPU_settag0(TAG_Valid);
++ significand(st0_ptr) = 0x8a51e04daabda360LL;
++ setexponent16(st0_ptr,
++ (0x41 + EXTENDED_Ebias) | SIGN_Negative);
++ return;
++ }
+
-+ blkcipher_walk_init(&walk, dst, src, nbytes);
-+ err = blkcipher_walk_virt(desc, &walk);
++ argSignif.lsw = accum.lsw;
++ XSIG_LL(argSignif) = XSIG_LL(accum);
++ exponent = -1 + norm_Xsig(&argSignif);
++ } else {
++ invert = 0;
++ argSignif.lsw = 0;
++ XSIG_LL(accum) = XSIG_LL(argSignif) = significand(st0_ptr);
+
-+ while (walk.nbytes) {
-+ if (walk.src.virt.addr != walk.dst.virt.addr)
-+ memcpy(walk.dst.virt.addr, walk.src.virt.addr,
-+ walk.nbytes);
-+ err = blkcipher_walk_done(desc, &walk, 0);
-+ }
++ if (exponent < -1) {
++ /* shift the argument right by the required places */
++ if (FPU_shrx(&XSIG_LL(accum), -1 - exponent) >=
++ 0x80000000U)
++ XSIG_LL(accum)++; /* round up */
++ }
+ }
+
+- argSignif.lsw = accum.lsw;
+- XSIG_LL(argSignif) = XSIG_LL(accum);
+- exponent = -1 + norm_Xsig(&argSignif);
+- }
+- else
+- {
+- invert = 0;
+- argSignif.lsw = 0;
+- XSIG_LL(accum) = XSIG_LL(argSignif) = significand(st0_ptr);
+-
+- if ( exponent < -1 )
+- {
+- /* shift the argument right by the required places */
+- if ( FPU_shrx(&XSIG_LL(accum), -1-exponent) >= 0x80000000U )
+- XSIG_LL(accum) ++; /* round up */
+- }
+- }
+-
+- XSIG_LL(argSq) = XSIG_LL(accum); argSq.lsw = accum.lsw;
+- mul_Xsig_Xsig(&argSq, &argSq);
+- XSIG_LL(argSqSq) = XSIG_LL(argSq); argSqSq.lsw = argSq.lsw;
+- mul_Xsig_Xsig(&argSqSq, &argSqSq);
+-
+- /* Compute the negative terms for the numerator polynomial */
+- accumulatoro.msw = accumulatoro.midw = accumulatoro.lsw = 0;
+- polynomial_Xsig(&accumulatoro, &XSIG_LL(argSqSq), oddnegterm, HiPOWERon-1);
+- mul_Xsig_Xsig(&accumulatoro, &argSq);
+- negate_Xsig(&accumulatoro);
+- /* Add the positive terms */
+- polynomial_Xsig(&accumulatoro, &XSIG_LL(argSqSq), oddplterm, HiPOWERop-1);
+-
+-
+- /* Compute the positive terms for the denominator polynomial */
+- accumulatore.msw = accumulatore.midw = accumulatore.lsw = 0;
+- polynomial_Xsig(&accumulatore, &XSIG_LL(argSqSq), evenplterm, HiPOWERep-1);
+- mul_Xsig_Xsig(&accumulatore, &argSq);
+- negate_Xsig(&accumulatore);
+- /* Add the negative terms */
+- polynomial_Xsig(&accumulatore, &XSIG_LL(argSqSq), evennegterm, HiPOWERen-1);
+- /* Multiply by arg^2 */
+- mul64_Xsig(&accumulatore, &XSIG_LL(argSignif));
+- mul64_Xsig(&accumulatore, &XSIG_LL(argSignif));
+- /* de-normalize and divide by 2 */
+- shr_Xsig(&accumulatore, -2*(1+exponent) + 1);
+- negate_Xsig(&accumulatore); /* This does 1 - accumulator */
+-
+- /* Now find the ratio. */
+- if ( accumulatore.msw == 0 )
+- {
+- /* accumulatoro must contain 1.0 here, (actually, 0) but it
+- really doesn't matter what value we use because it will
+- have negligible effect in later calculations
+- */
+- XSIG_LL(accum) = 0x8000000000000000LL;
+- accum.lsw = 0;
+- }
+- else
+- {
+- div_Xsig(&accumulatoro, &accumulatore, &accum);
+- }
+-
+- /* Multiply by 1/3 * arg^3 */
+- mul64_Xsig(&accum, &XSIG_LL(argSignif));
+- mul64_Xsig(&accum, &XSIG_LL(argSignif));
+- mul64_Xsig(&accum, &XSIG_LL(argSignif));
+- mul64_Xsig(&accum, &twothirds);
+- shr_Xsig(&accum, -2*(exponent+1));
+-
+- /* tan(arg) = arg + accum */
+- add_two_Xsig(&accum, &argSignif, &exponent);
+-
+- if ( invert )
+- {
+- /* We now have the value of tan(pi_2 - arg) where pi_2 is an
+- approximation for pi/2
+- */
+- /* The next step is to fix the answer to compensate for the
+- error due to the approximation used for pi/2
+- */
+-
+- /* This is (approx) delta, the error in our approx for pi/2
+- (see above). It has an exponent of -65
+- */
+- XSIG_LL(fix_up) = 0x898cc51701b839a2LL;
+- fix_up.lsw = 0;
+-
+- if ( exponent == 0 )
+- adj = 0xffffffff; /* We want approx 1.0 here, but
+- this is close enough. */
+- else if ( exponent > -30 )
+- {
+- adj = accum.msw >> -(exponent+1); /* tan */
+- adj = mul_32_32(adj, adj); /* tan^2 */
++ XSIG_LL(argSq) = XSIG_LL(accum);
++ argSq.lsw = accum.lsw;
++ mul_Xsig_Xsig(&argSq, &argSq);
++ XSIG_LL(argSqSq) = XSIG_LL(argSq);
++ argSqSq.lsw = argSq.lsw;
++ mul_Xsig_Xsig(&argSqSq, &argSqSq);
++
++ /* Compute the negative terms for the numerator polynomial */
++ accumulatoro.msw = accumulatoro.midw = accumulatoro.lsw = 0;
++ polynomial_Xsig(&accumulatoro, &XSIG_LL(argSqSq), oddnegterm,
++ HiPOWERon - 1);
++ mul_Xsig_Xsig(&accumulatoro, &argSq);
++ negate_Xsig(&accumulatoro);
++ /* Add the positive terms */
++ polynomial_Xsig(&accumulatoro, &XSIG_LL(argSqSq), oddplterm,
++ HiPOWERop - 1);
++
++ /* Compute the positive terms for the denominator polynomial */
++ accumulatore.msw = accumulatore.midw = accumulatore.lsw = 0;
++ polynomial_Xsig(&accumulatore, &XSIG_LL(argSqSq), evenplterm,
++ HiPOWERep - 1);
++ mul_Xsig_Xsig(&accumulatore, &argSq);
++ negate_Xsig(&accumulatore);
++ /* Add the negative terms */
++ polynomial_Xsig(&accumulatore, &XSIG_LL(argSqSq), evennegterm,
++ HiPOWERen - 1);
++ /* Multiply by arg^2 */
++ mul64_Xsig(&accumulatore, &XSIG_LL(argSignif));
++ mul64_Xsig(&accumulatore, &XSIG_LL(argSignif));
++ /* de-normalize and divide by 2 */
++ shr_Xsig(&accumulatore, -2 * (1 + exponent) + 1);
++ negate_Xsig(&accumulatore); /* This does 1 - accumulator */
++
++ /* Now find the ratio. */
++ if (accumulatore.msw == 0) {
++ /* accumulatoro must contain 1.0 here, (actually, 0) but it
++ really doesn't matter what value we use because it will
++ have negligible effect in later calculations
++ */
++ XSIG_LL(accum) = 0x8000000000000000LL;
++ accum.lsw = 0;
++ } else {
++ div_Xsig(&accumulatoro, &accumulatore, &accum);
+ }
+- else
+- adj = 0;
+- adj = mul_32_32(0x898cc517, adj); /* delta * tan^2 */
+-
+- fix_up.msw += adj;
+- if ( !(fix_up.msw & 0x80000000) ) /* did fix_up overflow ? */
+- {
+- /* Yes, we need to add an msb */
+- shr_Xsig(&fix_up, 1);
+- fix_up.msw |= 0x80000000;
+- shr_Xsig(&fix_up, 64 + exponent);
++
++ /* Multiply by 1/3 * arg^3 */
++ mul64_Xsig(&accum, &XSIG_LL(argSignif));
++ mul64_Xsig(&accum, &XSIG_LL(argSignif));
++ mul64_Xsig(&accum, &XSIG_LL(argSignif));
++ mul64_Xsig(&accum, &twothirds);
++ shr_Xsig(&accum, -2 * (exponent + 1));
++
++ /* tan(arg) = arg + accum */
++ add_two_Xsig(&accum, &argSignif, &exponent);
++
++ if (invert) {
++ /* We now have the value of tan(pi_2 - arg) where pi_2 is an
++ approximation for pi/2
++ */
++ /* The next step is to fix the answer to compensate for the
++ error due to the approximation used for pi/2
++ */
++
++ /* This is (approx) delta, the error in our approx for pi/2
++ (see above). It has an exponent of -65
++ */
++ XSIG_LL(fix_up) = 0x898cc51701b839a2LL;
++ fix_up.lsw = 0;
++
++ if (exponent == 0)
++ adj = 0xffffffff; /* We want approx 1.0 here, but
++ this is close enough. */
++ else if (exponent > -30) {
++ adj = accum.msw >> -(exponent + 1); /* tan */
++ adj = mul_32_32(adj, adj); /* tan^2 */
++ } else
++ adj = 0;
++ adj = mul_32_32(0x898cc517, adj); /* delta * tan^2 */
+
-+ return err;
-+}
++ fix_up.msw += adj;
++ if (!(fix_up.msw & 0x80000000)) { /* did fix_up overflow ? */
++ /* Yes, we need to add an msb */
++ shr_Xsig(&fix_up, 1);
++ fix_up.msw |= 0x80000000;
++ shr_Xsig(&fix_up, 64 + exponent);
++ } else
++ shr_Xsig(&fix_up, 65 + exponent);
+
- static struct crypto_alg compress_null = {
- .cra_name = "compress_null",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
-@@ -76,6 +98,7 @@ static struct crypto_alg digest_null = {
- .cra_list = LIST_HEAD_INIT(digest_null.cra_list),
- .cra_u = { .digest = {
- .dia_digestsize = NULL_DIGEST_SIZE,
-+ .dia_setkey = null_setkey,
- .dia_init = null_init,
- .dia_update = null_update,
- .dia_final = null_final } }
-@@ -96,6 +119,25 @@ static struct crypto_alg cipher_null = {
- .cia_decrypt = null_crypt } }
- };
-
-+static struct crypto_alg skcipher_null = {
-+ .cra_name = "ecb(cipher_null)",
-+ .cra_driver_name = "ecb-cipher_null",
-+ .cra_priority = 100,
-+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
-+ .cra_blocksize = NULL_BLOCK_SIZE,
-+ .cra_type = &crypto_blkcipher_type,
-+ .cra_ctxsize = 0,
-+ .cra_module = THIS_MODULE,
-+ .cra_list = LIST_HEAD_INIT(skcipher_null.cra_list),
-+ .cra_u = { .blkcipher = {
-+ .min_keysize = NULL_KEY_SIZE,
-+ .max_keysize = NULL_KEY_SIZE,
-+ .ivsize = NULL_IV_SIZE,
-+ .setkey = null_setkey,
-+ .encrypt = skcipher_null_crypt,
-+ .decrypt = skcipher_null_crypt } }
-+};
++ add_two_Xsig(&accum, &fix_up, &exponent);
+
- MODULE_ALIAS("compress_null");
- MODULE_ALIAS("digest_null");
- MODULE_ALIAS("cipher_null");
-@@ -108,27 +150,35 @@ static int __init init(void)
- if (ret < 0)
- goto out;
++ /* accum now contains tan(pi/2 - arg).
++ Use tan(arg) = 1.0 / tan(pi/2 - arg)
++ */
++ accumulatoro.lsw = accumulatoro.midw = 0;
++ accumulatoro.msw = 0x80000000;
++ div_Xsig(&accumulatoro, &accum, &accum);
++ exponent = -exponent - 1;
+ }
+- else
+- shr_Xsig(&fix_up, 65 + exponent);
+-
+- add_two_Xsig(&accum, &fix_up, &exponent);
+-
+- /* accum now contains tan(pi/2 - arg).
+- Use tan(arg) = 1.0 / tan(pi/2 - arg)
+- */
+- accumulatoro.lsw = accumulatoro.midw = 0;
+- accumulatoro.msw = 0x80000000;
+- div_Xsig(&accumulatoro, &accum, &accum);
+- exponent = - exponent - 1;
+- }
+-
+- /* Transfer the result */
+- round_Xsig(&accum);
+- FPU_settag0(TAG_Valid);
+- significand(st0_ptr) = XSIG_LL(accum);
+- setexponent16(st0_ptr, exponent + EXTENDED_Ebias); /* Result is positive. */
++
++ /* Transfer the result */
++ round_Xsig(&accum);
++ FPU_settag0(TAG_Valid);
++ significand(st0_ptr) = XSIG_LL(accum);
++ setexponent16(st0_ptr, exponent + EXTENDED_Ebias); /* Result is positive. */
+
+ }
+diff --git a/arch/x86/math-emu/reg_add_sub.c b/arch/x86/math-emu/reg_add_sub.c
+index 7cd3b37..deea48b 100644
+--- a/arch/x86/math-emu/reg_add_sub.c
++++ b/arch/x86/math-emu/reg_add_sub.c
+@@ -27,7 +27,7 @@
+ static
+ int add_sub_specials(FPU_REG const *a, u_char taga, u_char signa,
+ FPU_REG const *b, u_char tagb, u_char signb,
+- FPU_REG *dest, int deststnr, int control_w);
++ FPU_REG * dest, int deststnr, int control_w);
-+ ret = crypto_register_alg(&skcipher_null);
-+ if (ret < 0)
-+ goto out_unregister_cipher;
+ /*
+ Operates on st(0) and st(n), or on st(0) and temporary data.
+@@ -35,340 +35,299 @@ int add_sub_specials(FPU_REG const *a, u_char taga, u_char signa,
+ */
+ int FPU_add(FPU_REG const *b, u_char tagb, int deststnr, int control_w)
+ {
+- FPU_REG *a = &st(0);
+- FPU_REG *dest = &st(deststnr);
+- u_char signb = getsign(b);
+- u_char taga = FPU_gettag0();
+- u_char signa = getsign(a);
+- u_char saved_sign = getsign(dest);
+- int diff, tag, expa, expb;
+-
+- if ( !(taga | tagb) )
+- {
+- expa = exponent(a);
+- expb = exponent(b);
+-
+- valid_add:
+- /* Both registers are valid */
+- if (!(signa ^ signb))
+- {
+- /* signs are the same */
+- tag = FPU_u_add(a, b, dest, control_w, signa, expa, expb);
+- }
+- else
+- {
+- /* The signs are different, so do a subtraction */
+- diff = expa - expb;
+- if (!diff)
+- {
+- diff = a->sigh - b->sigh; /* This works only if the ms bits
+- are identical. */
+- if (!diff)
+- {
+- diff = a->sigl > b->sigl;
+- if (!diff)
+- diff = -(a->sigl < b->sigl);
++ FPU_REG *a = &st(0);
++ FPU_REG *dest = &st(deststnr);
++ u_char signb = getsign(b);
++ u_char taga = FPU_gettag0();
++ u_char signa = getsign(a);
++ u_char saved_sign = getsign(dest);
++ int diff, tag, expa, expb;
++
++ if (!(taga | tagb)) {
++ expa = exponent(a);
++ expb = exponent(b);
++
++ valid_add:
++ /* Both registers are valid */
++ if (!(signa ^ signb)) {
++ /* signs are the same */
++ tag =
++ FPU_u_add(a, b, dest, control_w, signa, expa, expb);
++ } else {
++ /* The signs are different, so do a subtraction */
++ diff = expa - expb;
++ if (!diff) {
++ diff = a->sigh - b->sigh; /* This works only if the ms bits
++ are identical. */
++ if (!diff) {
++ diff = a->sigl > b->sigl;
++ if (!diff)
++ diff = -(a->sigl < b->sigl);
++ }
++ }
+
- ret = crypto_register_alg(&digest_null);
-- if (ret < 0) {
-- crypto_unregister_alg(&cipher_null);
-- goto out;
++ if (diff > 0) {
++ tag =
++ FPU_u_sub(a, b, dest, control_w, signa,
++ expa, expb);
++ } else if (diff < 0) {
++ tag =
++ FPU_u_sub(b, a, dest, control_w, signb,
++ expb, expa);
++ } else {
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++ /* sign depends upon rounding mode */
++ setsign(dest, ((control_w & CW_RC) != RC_DOWN)
++ ? SIGN_POS : SIGN_NEG);
++ return TAG_Zero;
++ }
+ }
+- }
+-
+- if (diff > 0)
+- {
+- tag = FPU_u_sub(a, b, dest, control_w, signa, expa, expb);
+- }
+- else if ( diff < 0 )
+- {
+- tag = FPU_u_sub(b, a, dest, control_w, signb, expb, expa);
+- }
+- else
+- {
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+- /* sign depends upon rounding mode */
+- setsign(dest, ((control_w & CW_RC) != RC_DOWN)
+- ? SIGN_POS : SIGN_NEG);
+- return TAG_Zero;
+- }
- }
-+ if (ret < 0)
-+ goto out_unregister_skcipher;
- ret = crypto_register_alg(&compress_null);
-- if (ret < 0) {
-- crypto_unregister_alg(&digest_null);
-- crypto_unregister_alg(&cipher_null);
-- goto out;
-- }
-+ if (ret < 0)
-+ goto out_unregister_digest;
+- if ( tag < 0 )
+- {
+- setsign(dest, saved_sign);
+- return tag;
++ if (tag < 0) {
++ setsign(dest, saved_sign);
++ return tag;
++ }
++ FPU_settagi(deststnr, tag);
++ return tag;
+ }
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
- out:
- return ret;
-+
-+out_unregister_digest:
-+ crypto_unregister_alg(&digest_null);
-+out_unregister_skcipher:
-+ crypto_unregister_alg(&skcipher_null);
-+out_unregister_cipher:
-+ crypto_unregister_alg(&cipher_null);
-+ goto out;
- }
+- if ( taga == TAG_Special )
+- taga = FPU_Special(a);
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
++ if (taga == TAG_Special)
++ taga = FPU_Special(a);
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
+
+- if ( ((taga == TAG_Valid) && (tagb == TW_Denormal))
++ if (((taga == TAG_Valid) && (tagb == TW_Denormal))
+ || ((taga == TW_Denormal) && (tagb == TAG_Valid))
+- || ((taga == TW_Denormal) && (tagb == TW_Denormal)) )
+- {
+- FPU_REG x, y;
++ || ((taga == TW_Denormal) && (tagb == TW_Denormal))) {
++ FPU_REG x, y;
++
++ if (denormal_operand() < 0)
++ return FPU_Exception;
++
++ FPU_to_exp16(a, &x);
++ FPU_to_exp16(b, &y);
++ a = &x;
++ b = &y;
++ expa = exponent16(a);
++ expb = exponent16(b);
++ goto valid_add;
++ }
+
+- if ( denormal_operand() < 0 )
+- return FPU_Exception;
++ if ((taga == TW_NaN) || (tagb == TW_NaN)) {
++ if (deststnr == 0)
++ return real_2op_NaN(b, tagb, deststnr, a);
++ else
++ return real_2op_NaN(a, taga, deststnr, a);
++ }
- static void __exit fini(void)
- {
- crypto_unregister_alg(&compress_null);
- crypto_unregister_alg(&digest_null);
-+ crypto_unregister_alg(&skcipher_null);
- crypto_unregister_alg(&cipher_null);
+- FPU_to_exp16(a, &x);
+- FPU_to_exp16(b, &y);
+- a = &x;
+- b = &y;
+- expa = exponent16(a);
+- expb = exponent16(b);
+- goto valid_add;
+- }
+-
+- if ( (taga == TW_NaN) || (tagb == TW_NaN) )
+- {
+- if ( deststnr == 0 )
+- return real_2op_NaN(b, tagb, deststnr, a);
+- else
+- return real_2op_NaN(a, taga, deststnr, a);
+- }
+-
+- return add_sub_specials(a, taga, signa, b, tagb, signb,
+- dest, deststnr, control_w);
++ return add_sub_specials(a, taga, signa, b, tagb, signb,
++ dest, deststnr, control_w);
}
-diff --git a/crypto/ctr.c b/crypto/ctr.c
-new file mode 100644
-index 0000000..2d7425f
---- /dev/null
-+++ b/crypto/ctr.c
-@@ -0,0 +1,422 @@
-+/*
-+ * CTR: Counter mode
-+ *
-+ * (C) Copyright IBM Corp. 2007 - Joy Latten <latten at us.ibm.com>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
-+ *
-+ */
-+
-+#include <crypto/algapi.h>
-+#include <crypto/ctr.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/random.h>
-+#include <linux/scatterlist.h>
-+#include <linux/slab.h>
+-
+ /* Subtract b from a. (a-b) -> dest */
+ int FPU_sub(int flags, int rm, int control_w)
+ {
+- FPU_REG const *a, *b;
+- FPU_REG *dest;
+- u_char taga, tagb, signa, signb, saved_sign, sign;
+- int diff, tag = 0, expa, expb, deststnr;
+-
+- a = &st(0);
+- taga = FPU_gettag0();
+-
+- deststnr = 0;
+- if ( flags & LOADED )
+- {
+- b = (FPU_REG *)rm;
+- tagb = flags & 0x0f;
+- }
+- else
+- {
+- b = &st(rm);
+- tagb = FPU_gettagi(rm);
+-
+- if ( flags & DEST_RM )
+- deststnr = rm;
+- }
+-
+- signa = getsign(a);
+- signb = getsign(b);
+-
+- if ( flags & REV )
+- {
+- signa ^= SIGN_NEG;
+- signb ^= SIGN_NEG;
+- }
+-
+- dest = &st(deststnr);
+- saved_sign = getsign(dest);
+-
+- if ( !(taga | tagb) )
+- {
+- expa = exponent(a);
+- expb = exponent(b);
+-
+- valid_subtract:
+- /* Both registers are valid */
+-
+- diff = expa - expb;
+-
+- if (!diff)
+- {
+- diff = a->sigh - b->sigh; /* Works only if ms bits are identical */
+- if (!diff)
+- {
+- diff = a->sigl > b->sigl;
+- if (!diff)
+- diff = -(a->sigl < b->sigl);
+- }
++ FPU_REG const *a, *b;
++ FPU_REG *dest;
++ u_char taga, tagb, signa, signb, saved_sign, sign;
++ int diff, tag = 0, expa, expb, deststnr;
+
-+struct crypto_ctr_ctx {
-+ struct crypto_cipher *child;
-+};
++ a = &st(0);
++ taga = FPU_gettag0();
+
-+struct crypto_rfc3686_ctx {
-+ struct crypto_blkcipher *child;
-+ u8 nonce[CTR_RFC3686_NONCE_SIZE];
-+};
++ deststnr = 0;
++ if (flags & LOADED) {
++ b = (FPU_REG *) rm;
++ tagb = flags & 0x0f;
++ } else {
++ b = &st(rm);
++ tagb = FPU_gettagi(rm);
+
-+static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
-+ unsigned int keylen)
-+{
-+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
-+ struct crypto_cipher *child = ctx->child;
-+ int err;
++ if (flags & DEST_RM)
++ deststnr = rm;
+ }
+
+- switch ( (((int)signa)*2 + signb) / SIGN_NEG )
+- {
+- case 0: /* P - P */
+- case 3: /* N - N */
+- if (diff > 0)
+- {
+- /* |a| > |b| */
+- tag = FPU_u_sub(a, b, dest, control_w, signa, expa, expb);
+- }
+- else if ( diff == 0 )
+- {
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+-
+- /* sign depends upon rounding mode */
+- setsign(dest, ((control_w & CW_RC) != RC_DOWN)
+- ? SIGN_POS : SIGN_NEG);
+- return TAG_Zero;
+- }
+- else
+- {
+- sign = signa ^ SIGN_NEG;
+- tag = FPU_u_sub(b, a, dest, control_w, sign, expb, expa);
+- }
+- break;
+- case 1: /* P - N */
+- tag = FPU_u_add(a, b, dest, control_w, SIGN_POS, expa, expb);
+- break;
+- case 2: /* N - P */
+- tag = FPU_u_add(a, b, dest, control_w, SIGN_NEG, expa, expb);
+- break;
++ signa = getsign(a);
++ signb = getsign(b);
++
++ if (flags & REV) {
++ signa ^= SIGN_NEG;
++ signb ^= SIGN_NEG;
++ }
++
++ dest = &st(deststnr);
++ saved_sign = getsign(dest);
++
++ if (!(taga | tagb)) {
++ expa = exponent(a);
++ expb = exponent(b);
++
++ valid_subtract:
++ /* Both registers are valid */
++
++ diff = expa - expb;
++
++ if (!diff) {
++ diff = a->sigh - b->sigh; /* Works only if ms bits are identical */
++ if (!diff) {
++ diff = a->sigl > b->sigl;
++ if (!diff)
++ diff = -(a->sigl < b->sigl);
++ }
++ }
+
-+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_cipher_setkey(child, key, keylen);
-+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
-+ CRYPTO_TFM_RES_MASK);
++ switch ((((int)signa) * 2 + signb) / SIGN_NEG) {
++ case 0: /* P - P */
++ case 3: /* N - N */
++ if (diff > 0) {
++ /* |a| > |b| */
++ tag =
++ FPU_u_sub(a, b, dest, control_w, signa,
++ expa, expb);
++ } else if (diff == 0) {
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++
++ /* sign depends upon rounding mode */
++ setsign(dest, ((control_w & CW_RC) != RC_DOWN)
++ ? SIGN_POS : SIGN_NEG);
++ return TAG_Zero;
++ } else {
++ sign = signa ^ SIGN_NEG;
++ tag =
++ FPU_u_sub(b, a, dest, control_w, sign, expb,
++ expa);
++ }
++ break;
++ case 1: /* P - N */
++ tag =
++ FPU_u_add(a, b, dest, control_w, SIGN_POS, expa,
++ expb);
++ break;
++ case 2: /* N - P */
++ tag =
++ FPU_u_add(a, b, dest, control_w, SIGN_NEG, expa,
++ expb);
++ break;
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x111);
+- return -1;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x111);
++ return -1;
+ #endif
++ }
++ if (tag < 0) {
++ setsign(dest, saved_sign);
++ return tag;
++ }
++ FPU_settagi(deststnr, tag);
++ return tag;
+ }
+- if ( tag < 0 )
+- {
+- setsign(dest, saved_sign);
+- return tag;
+- }
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
+
+- if ( taga == TAG_Special )
+- taga = FPU_Special(a);
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
++ if (taga == TAG_Special)
++ taga = FPU_Special(a);
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
+
+- if ( ((taga == TAG_Valid) && (tagb == TW_Denormal))
++ if (((taga == TAG_Valid) && (tagb == TW_Denormal))
+ || ((taga == TW_Denormal) && (tagb == TAG_Valid))
+- || ((taga == TW_Denormal) && (tagb == TW_Denormal)) )
+- {
+- FPU_REG x, y;
++ || ((taga == TW_Denormal) && (tagb == TW_Denormal))) {
++ FPU_REG x, y;
+
+- if ( denormal_operand() < 0 )
+- return FPU_Exception;
++ if (denormal_operand() < 0)
++ return FPU_Exception;
++
++ FPU_to_exp16(a, &x);
++ FPU_to_exp16(b, &y);
++ a = &x;
++ b = &y;
++ expa = exponent16(a);
++ expb = exponent16(b);
+
+- FPU_to_exp16(a, &x);
+- FPU_to_exp16(b, &y);
+- a = &x;
+- b = &y;
+- expa = exponent16(a);
+- expb = exponent16(b);
+-
+- goto valid_subtract;
+- }
+-
+- if ( (taga == TW_NaN) || (tagb == TW_NaN) )
+- {
+- FPU_REG const *d1, *d2;
+- if ( flags & REV )
+- {
+- d1 = b;
+- d2 = a;
++ goto valid_subtract;
+ }
+- else
+- {
+- d1 = a;
+- d2 = b;
+
-+ return err;
++ if ((taga == TW_NaN) || (tagb == TW_NaN)) {
++ FPU_REG const *d1, *d2;
++ if (flags & REV) {
++ d1 = b;
++ d2 = a;
++ } else {
++ d1 = a;
++ d2 = b;
++ }
++ if (flags & LOADED)
++ return real_2op_NaN(b, tagb, deststnr, d1);
++ if (flags & DEST_RM)
++ return real_2op_NaN(a, taga, deststnr, d2);
++ else
++ return real_2op_NaN(b, tagb, deststnr, d2);
+ }
+- if ( flags & LOADED )
+- return real_2op_NaN(b, tagb, deststnr, d1);
+- if ( flags & DEST_RM )
+- return real_2op_NaN(a, taga, deststnr, d2);
+- else
+- return real_2op_NaN(b, tagb, deststnr, d2);
+- }
+-
+- return add_sub_specials(a, taga, signa, b, tagb, signb ^ SIGN_NEG,
+- dest, deststnr, control_w);
+-}
+
++ return add_sub_specials(a, taga, signa, b, tagb, signb ^ SIGN_NEG,
++ dest, deststnr, control_w);
+}
+
+ static
+ int add_sub_specials(FPU_REG const *a, u_char taga, u_char signa,
+ FPU_REG const *b, u_char tagb, u_char signb,
+- FPU_REG *dest, int deststnr, int control_w)
++ FPU_REG * dest, int deststnr, int control_w)
+ {
+- if ( ((taga == TW_Denormal) || (tagb == TW_Denormal))
+- && (denormal_operand() < 0) )
+- return FPU_Exception;
+-
+- if (taga == TAG_Zero)
+- {
+- if (tagb == TAG_Zero)
+- {
+- /* Both are zero, result will be zero. */
+- u_char different_signs = signa ^ signb;
+-
+- FPU_copy_to_regi(a, TAG_Zero, deststnr);
+- if ( different_signs )
+- {
+- /* Signs are different. */
+- /* Sign of answer depends upon rounding mode. */
+- setsign(dest, ((control_w & CW_RC) != RC_DOWN)
+- ? SIGN_POS : SIGN_NEG);
+- }
+- else
+- setsign(dest, signa); /* signa may differ from the sign of a. */
+- return TAG_Zero;
+- }
+- else
+- {
+- reg_copy(b, dest);
+- if ( (tagb == TW_Denormal) && (b->sigh & 0x80000000) )
+- {
+- /* A pseudoDenormal, convert it. */
+- addexponent(dest, 1);
+- tagb = TAG_Valid;
+- }
+- else if ( tagb > TAG_Empty )
+- tagb = TAG_Special;
+- setsign(dest, signb); /* signb may differ from the sign of b. */
+- FPU_settagi(deststnr, tagb);
+- return tagb;
+- }
+- }
+- else if (tagb == TAG_Zero)
+- {
+- reg_copy(a, dest);
+- if ( (taga == TW_Denormal) && (a->sigh & 0x80000000) )
+- {
+- /* A pseudoDenormal */
+- addexponent(dest, 1);
+- taga = TAG_Valid;
+- }
+- else if ( taga > TAG_Empty )
+- taga = TAG_Special;
+- setsign(dest, signa); /* signa may differ from the sign of a. */
+- FPU_settagi(deststnr, taga);
+- return taga;
+- }
+- else if (taga == TW_Infinity)
+- {
+- if ( (tagb != TW_Infinity) || (signa == signb) )
+- {
+- FPU_copy_to_regi(a, TAG_Special, deststnr);
+- setsign(dest, signa); /* signa may differ from the sign of a. */
+- return taga;
++ if (((taga == TW_Denormal) || (tagb == TW_Denormal))
++ && (denormal_operand() < 0))
++ return FPU_Exception;
++
++ if (taga == TAG_Zero) {
++ if (tagb == TAG_Zero) {
++ /* Both are zero, result will be zero. */
++ u_char different_signs = signa ^ signb;
++
++ FPU_copy_to_regi(a, TAG_Zero, deststnr);
++ if (different_signs) {
++ /* Signs are different. */
++ /* Sign of answer depends upon rounding mode. */
++ setsign(dest, ((control_w & CW_RC) != RC_DOWN)
++ ? SIGN_POS : SIGN_NEG);
++ } else
++ setsign(dest, signa); /* signa may differ from the sign of a. */
++ return TAG_Zero;
++ } else {
++ reg_copy(b, dest);
++ if ((tagb == TW_Denormal) && (b->sigh & 0x80000000)) {
++ /* A pseudoDenormal, convert it. */
++ addexponent(dest, 1);
++ tagb = TAG_Valid;
++ } else if (tagb > TAG_Empty)
++ tagb = TAG_Special;
++ setsign(dest, signb); /* signb may differ from the sign of b. */
++ FPU_settagi(deststnr, tagb);
++ return tagb;
++ }
++ } else if (tagb == TAG_Zero) {
++ reg_copy(a, dest);
++ if ((taga == TW_Denormal) && (a->sigh & 0x80000000)) {
++ /* A pseudoDenormal */
++ addexponent(dest, 1);
++ taga = TAG_Valid;
++ } else if (taga > TAG_Empty)
++ taga = TAG_Special;
++ setsign(dest, signa); /* signa may differ from the sign of a. */
++ FPU_settagi(deststnr, taga);
++ return taga;
++ } else if (taga == TW_Infinity) {
++ if ((tagb != TW_Infinity) || (signa == signb)) {
++ FPU_copy_to_regi(a, TAG_Special, deststnr);
++ setsign(dest, signa); /* signa may differ from the sign of a. */
++ return taga;
++ }
++ /* Infinity-Infinity is undefined. */
++ return arith_invalid(deststnr);
++ } else if (tagb == TW_Infinity) {
++ FPU_copy_to_regi(b, TAG_Special, deststnr);
++ setsign(dest, signb); /* signb may differ from the sign of b. */
++ return tagb;
+ }
+- /* Infinity-Infinity is undefined. */
+- return arith_invalid(deststnr);
+- }
+- else if (tagb == TW_Infinity)
+- {
+- FPU_copy_to_regi(b, TAG_Special, deststnr);
+- setsign(dest, signb); /* signb may differ from the sign of b. */
+- return tagb;
+- }
+-
+ #ifdef PARANOID
+- EXCEPTION(EX_INTERNAL|0x101);
++ EXCEPTION(EX_INTERNAL | 0x101);
+ #endif
+
+- return FPU_Exception;
++ return FPU_Exception;
+ }
+-
+diff --git a/arch/x86/math-emu/reg_compare.c b/arch/x86/math-emu/reg_compare.c
+index f37c5b5..ecce55f 100644
+--- a/arch/x86/math-emu/reg_compare.c
++++ b/arch/x86/math-emu/reg_compare.c
+@@ -20,362 +20,331 @@
+ #include "control_w.h"
+ #include "status_w.h"
+
+-
+ static int compare(FPU_REG const *b, int tagb)
+ {
+- int diff, exp0, expb;
+- u_char st0_tag;
+- FPU_REG *st0_ptr;
+- FPU_REG x, y;
+- u_char st0_sign, signb = getsign(b);
+-
+- st0_ptr = &st(0);
+- st0_tag = FPU_gettag0();
+- st0_sign = getsign(st0_ptr);
+-
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
+- if ( st0_tag == TAG_Special )
+- st0_tag = FPU_Special(st0_ptr);
+-
+- if ( ((st0_tag != TAG_Valid) && (st0_tag != TW_Denormal))
+- || ((tagb != TAG_Valid) && (tagb != TW_Denormal)) )
+- {
+- if ( st0_tag == TAG_Zero )
+- {
+- if ( tagb == TAG_Zero ) return COMP_A_eq_B;
+- if ( tagb == TAG_Valid )
+- return ((signb == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B);
+- if ( tagb == TW_Denormal )
+- return ((signb == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
+- | COMP_Denormal;
+- }
+- else if ( tagb == TAG_Zero )
+- {
+- if ( st0_tag == TAG_Valid )
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
+- if ( st0_tag == TW_Denormal )
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
+- | COMP_Denormal;
++ int diff, exp0, expb;
++ u_char st0_tag;
++ FPU_REG *st0_ptr;
++ FPU_REG x, y;
++ u_char st0_sign, signb = getsign(b);
++
++ st0_ptr = &st(0);
++ st0_tag = FPU_gettag0();
++ st0_sign = getsign(st0_ptr);
++
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
++ if (st0_tag == TAG_Special)
++ st0_tag = FPU_Special(st0_ptr);
++
++ if (((st0_tag != TAG_Valid) && (st0_tag != TW_Denormal))
++ || ((tagb != TAG_Valid) && (tagb != TW_Denormal))) {
++ if (st0_tag == TAG_Zero) {
++ if (tagb == TAG_Zero)
++ return COMP_A_eq_B;
++ if (tagb == TAG_Valid)
++ return ((signb ==
++ SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B);
++ if (tagb == TW_Denormal)
++ return ((signb ==
++ SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
++ | COMP_Denormal;
++ } else if (tagb == TAG_Zero) {
++ if (st0_tag == TAG_Valid)
++ return ((st0_sign ==
++ SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
++ if (st0_tag == TW_Denormal)
++ return ((st0_sign ==
++ SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
++ | COMP_Denormal;
++ }
++
++ if (st0_tag == TW_Infinity) {
++ if ((tagb == TAG_Valid) || (tagb == TAG_Zero))
++ return ((st0_sign ==
++ SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
++ else if (tagb == TW_Denormal)
++ return ((st0_sign ==
++ SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
++ | COMP_Denormal;
++ else if (tagb == TW_Infinity) {
++ /* The 80486 book says that infinities can be equal! */
++ return (st0_sign == signb) ? COMP_A_eq_B :
++ ((st0_sign ==
++ SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
++ }
++ /* Fall through to the NaN code */
++ } else if (tagb == TW_Infinity) {
++ if ((st0_tag == TAG_Valid) || (st0_tag == TAG_Zero))
++ return ((signb ==
++ SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B);
++ if (st0_tag == TW_Denormal)
++ return ((signb ==
++ SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
++ | COMP_Denormal;
++ /* Fall through to the NaN code */
++ }
++
++ /* The only possibility now should be that one of the arguments
++ is a NaN */
++ if ((st0_tag == TW_NaN) || (tagb == TW_NaN)) {
++ int signalling = 0, unsupported = 0;
++ if (st0_tag == TW_NaN) {
++ signalling =
++ (st0_ptr->sigh & 0xc0000000) == 0x80000000;
++ unsupported = !((exponent(st0_ptr) == EXP_OVER)
++ && (st0_ptr->
++ sigh & 0x80000000));
++ }
++ if (tagb == TW_NaN) {
++ signalling |=
++ (b->sigh & 0xc0000000) == 0x80000000;
++ unsupported |= !((exponent(b) == EXP_OVER)
++ && (b->sigh & 0x80000000));
++ }
++ if (signalling || unsupported)
++ return COMP_No_Comp | COMP_SNaN | COMP_NaN;
++ else
++ /* Neither is a signaling NaN */
++ return COMP_No_Comp | COMP_NaN;
++ }
+
-+static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
-+ struct crypto_cipher *tfm)
-+{
-+ unsigned int bsize = crypto_cipher_blocksize(tfm);
-+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
-+ u8 *ctrblk = walk->iv;
-+ u8 tmp[bsize + alignmask];
-+ u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
-+ u8 *src = walk->src.virt.addr;
-+ u8 *dst = walk->dst.virt.addr;
-+ unsigned int nbytes = walk->nbytes;
-+
-+ crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
-+ crypto_xor(keystream, src, nbytes);
-+ memcpy(dst, keystream, nbytes);
++ EXCEPTION(EX_Invalid);
+ }
+
+- if ( st0_tag == TW_Infinity )
+- {
+- if ( (tagb == TAG_Valid) || (tagb == TAG_Zero) )
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
+- else if ( tagb == TW_Denormal )
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
+- | COMP_Denormal;
+- else if ( tagb == TW_Infinity )
+- {
+- /* The 80486 book says that infinities can be equal! */
+- return (st0_sign == signb) ? COMP_A_eq_B :
+- ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B);
+- }
+- /* Fall through to the NaN code */
+- }
+- else if ( tagb == TW_Infinity )
+- {
+- if ( (st0_tag == TAG_Valid) || (st0_tag == TAG_Zero) )
+- return ((signb == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B);
+- if ( st0_tag == TW_Denormal )
+- return ((signb == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
+- | COMP_Denormal;
+- /* Fall through to the NaN code */
++ if (st0_sign != signb) {
++ return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
++ | (((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
++ COMP_Denormal : 0);
+ }
+
+- /* The only possibility now should be that one of the arguments
+- is a NaN */
+- if ( (st0_tag == TW_NaN) || (tagb == TW_NaN) )
+- {
+- int signalling = 0, unsupported = 0;
+- if ( st0_tag == TW_NaN )
+- {
+- signalling = (st0_ptr->sigh & 0xc0000000) == 0x80000000;
+- unsupported = !((exponent(st0_ptr) == EXP_OVER)
+- && (st0_ptr->sigh & 0x80000000));
+- }
+- if ( tagb == TW_NaN )
+- {
+- signalling |= (b->sigh & 0xc0000000) == 0x80000000;
+- unsupported |= !((exponent(b) == EXP_OVER)
+- && (b->sigh & 0x80000000));
+- }
+- if ( signalling || unsupported )
+- return COMP_No_Comp | COMP_SNaN | COMP_NaN;
+- else
+- /* Neither is a signaling NaN */
+- return COMP_No_Comp | COMP_NaN;
++ if ((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) {
++ FPU_to_exp16(st0_ptr, &x);
++ FPU_to_exp16(b, &y);
++ st0_ptr = &x;
++ b = &y;
++ exp0 = exponent16(st0_ptr);
++ expb = exponent16(b);
++ } else {
++ exp0 = exponent(st0_ptr);
++ expb = exponent(b);
+ }
+-
+- EXCEPTION(EX_Invalid);
+- }
+-
+- if (st0_sign != signb)
+- {
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
+- | ( ((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
+- COMP_Denormal : 0);
+- }
+-
+- if ( (st0_tag == TW_Denormal) || (tagb == TW_Denormal) )
+- {
+- FPU_to_exp16(st0_ptr, &x);
+- FPU_to_exp16(b, &y);
+- st0_ptr = &x;
+- b = &y;
+- exp0 = exponent16(st0_ptr);
+- expb = exponent16(b);
+- }
+- else
+- {
+- exp0 = exponent(st0_ptr);
+- expb = exponent(b);
+- }
+
+ #ifdef PARANOID
+- if (!(st0_ptr->sigh & 0x80000000)) EXCEPTION(EX_Invalid);
+- if (!(b->sigh & 0x80000000)) EXCEPTION(EX_Invalid);
++ if (!(st0_ptr->sigh & 0x80000000))
++ EXCEPTION(EX_Invalid);
++ if (!(b->sigh & 0x80000000))
++ EXCEPTION(EX_Invalid);
+ #endif /* PARANOID */
+
+- diff = exp0 - expb;
+- if ( diff == 0 )
+- {
+- diff = st0_ptr->sigh - b->sigh; /* Works only if ms bits are
+- identical */
+- if ( diff == 0 )
+- {
+- diff = st0_ptr->sigl > b->sigl;
+- if ( diff == 0 )
+- diff = -(st0_ptr->sigl < b->sigl);
++ diff = exp0 - expb;
++ if (diff == 0) {
++ diff = st0_ptr->sigh - b->sigh; /* Works only if ms bits are
++ identical */
++ if (diff == 0) {
++ diff = st0_ptr->sigl > b->sigl;
++ if (diff == 0)
++ diff = -(st0_ptr->sigl < b->sigl);
++ }
+ }
+- }
+-
+- if ( diff > 0 )
+- {
+- return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
+- | ( ((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
+- COMP_Denormal : 0);
+- }
+- if ( diff < 0 )
+- {
+- return ((st0_sign == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
+- | ( ((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
+- COMP_Denormal : 0);
+- }
+-
+- return COMP_A_eq_B
+- | ( ((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
+- COMP_Denormal : 0);
+
+-}
++ if (diff > 0) {
++ return ((st0_sign == SIGN_POS) ? COMP_A_gt_B : COMP_A_lt_B)
++ | (((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
++ COMP_Denormal : 0);
++ }
++ if (diff < 0) {
++ return ((st0_sign == SIGN_POS) ? COMP_A_lt_B : COMP_A_gt_B)
++ | (((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
++ COMP_Denormal : 0);
++ }
+
++ return COMP_A_eq_B
++ | (((st0_tag == TW_Denormal) || (tagb == TW_Denormal)) ?
++ COMP_Denormal : 0);
+
-+ crypto_inc(ctrblk, bsize);
+}
+
+ /* This function requires that st(0) is not empty */
+ int FPU_compare_st_data(FPU_REG const *loaded_data, u_char loaded_tag)
+ {
+- int f = 0, c;
+-
+- c = compare(loaded_data, loaded_tag);
+-
+- if (c & COMP_NaN)
+- {
+- EXCEPTION(EX_Invalid);
+- f = SW_C3 | SW_C2 | SW_C0;
+- }
+- else
+- switch (c & 7)
+- {
+- case COMP_A_lt_B:
+- f = SW_C0;
+- break;
+- case COMP_A_eq_B:
+- f = SW_C3;
+- break;
+- case COMP_A_gt_B:
+- f = 0;
+- break;
+- case COMP_No_Comp:
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
++ int f = 0, c;
+
-+static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
-+ struct crypto_cipher *tfm)
-+{
-+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
-+ crypto_cipher_alg(tfm)->cia_encrypt;
-+ unsigned int bsize = crypto_cipher_blocksize(tfm);
-+ u8 *ctrblk = walk->iv;
-+ u8 *src = walk->src.virt.addr;
-+ u8 *dst = walk->dst.virt.addr;
-+ unsigned int nbytes = walk->nbytes;
-+
-+ do {
-+ /* create keystream */
-+ fn(crypto_cipher_tfm(tfm), dst, ctrblk);
-+ crypto_xor(dst, src, bsize);
-+
-+ /* increment counter in counterblock */
-+ crypto_inc(ctrblk, bsize);
++ c = compare(loaded_data, loaded_tag);
+
-+ src += bsize;
-+ dst += bsize;
-+ } while ((nbytes -= bsize) >= bsize);
++ if (c & COMP_NaN) {
++ EXCEPTION(EX_Invalid);
++ f = SW_C3 | SW_C2 | SW_C0;
++ } else
++ switch (c & 7) {
++ case COMP_A_lt_B:
++ f = SW_C0;
++ break;
++ case COMP_A_eq_B:
++ f = SW_C3;
++ break;
++ case COMP_A_gt_B:
++ f = 0;
++ break;
++ case COMP_No_Comp:
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x121);
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x121);
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
+ #endif /* PARANOID */
+- }
+- setcc(f);
+- if (c & COMP_Denormal)
+- {
+- return denormal_operand() < 0;
+- }
+- return 0;
++ }
++ setcc(f);
++ if (c & COMP_Denormal) {
++ return denormal_operand() < 0;
++ }
++ return 0;
+ }
+
+-
+ static int compare_st_st(int nr)
+ {
+- int f = 0, c;
+- FPU_REG *st_ptr;
+-
+- if ( !NOT_EMPTY(0) || !NOT_EMPTY(nr) )
+- {
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- /* Stack fault */
+- EXCEPTION(EX_StackUnder);
+- return !(control_word & CW_Invalid);
+- }
+-
+- st_ptr = &st(nr);
+- c = compare(st_ptr, FPU_gettagi(nr));
+- if (c & COMP_NaN)
+- {
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- EXCEPTION(EX_Invalid);
+- return !(control_word & CW_Invalid);
+- }
+- else
+- switch (c & 7)
+- {
+- case COMP_A_lt_B:
+- f = SW_C0;
+- break;
+- case COMP_A_eq_B:
+- f = SW_C3;
+- break;
+- case COMP_A_gt_B:
+- f = 0;
+- break;
+- case COMP_No_Comp:
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
++ int f = 0, c;
++ FPU_REG *st_ptr;
+
-+ return nbytes;
-+}
++ if (!NOT_EMPTY(0) || !NOT_EMPTY(nr)) {
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ /* Stack fault */
++ EXCEPTION(EX_StackUnder);
++ return !(control_word & CW_Invalid);
++ }
++
++ st_ptr = &st(nr);
++ c = compare(st_ptr, FPU_gettagi(nr));
++ if (c & COMP_NaN) {
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ EXCEPTION(EX_Invalid);
++ return !(control_word & CW_Invalid);
++ } else
++ switch (c & 7) {
++ case COMP_A_lt_B:
++ f = SW_C0;
++ break;
++ case COMP_A_eq_B:
++ f = SW_C3;
++ break;
++ case COMP_A_gt_B:
++ f = 0;
++ break;
++ case COMP_No_Comp:
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x122);
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x122);
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
+ #endif /* PARANOID */
+- }
+- setcc(f);
+- if (c & COMP_Denormal)
+- {
+- return denormal_operand() < 0;
+- }
+- return 0;
++ }
++ setcc(f);
++ if (c & COMP_Denormal) {
++ return denormal_operand() < 0;
++ }
++ return 0;
+ }
+
+-
+ static int compare_u_st_st(int nr)
+ {
+- int f = 0, c;
+- FPU_REG *st_ptr;
+-
+- if ( !NOT_EMPTY(0) || !NOT_EMPTY(nr) )
+- {
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- /* Stack fault */
+- EXCEPTION(EX_StackUnder);
+- return !(control_word & CW_Invalid);
+- }
+-
+- st_ptr = &st(nr);
+- c = compare(st_ptr, FPU_gettagi(nr));
+- if (c & COMP_NaN)
+- {
+- setcc(SW_C3 | SW_C2 | SW_C0);
+- if (c & COMP_SNaN) /* This is the only difference between
+- un-ordered and ordinary comparisons */
+- {
+- EXCEPTION(EX_Invalid);
+- return !(control_word & CW_Invalid);
++ int f = 0, c;
++ FPU_REG *st_ptr;
++
++ if (!NOT_EMPTY(0) || !NOT_EMPTY(nr)) {
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ /* Stack fault */
++ EXCEPTION(EX_StackUnder);
++ return !(control_word & CW_Invalid);
+ }
+- return 0;
+- }
+- else
+- switch (c & 7)
+- {
+- case COMP_A_lt_B:
+- f = SW_C0;
+- break;
+- case COMP_A_eq_B:
+- f = SW_C3;
+- break;
+- case COMP_A_gt_B:
+- f = 0;
+- break;
+- case COMP_No_Comp:
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
+
-+static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
-+ struct crypto_cipher *tfm)
-+{
-+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
-+ crypto_cipher_alg(tfm)->cia_encrypt;
-+ unsigned int bsize = crypto_cipher_blocksize(tfm);
-+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
-+ unsigned int nbytes = walk->nbytes;
-+ u8 *ctrblk = walk->iv;
-+ u8 *src = walk->src.virt.addr;
-+ u8 tmp[bsize + alignmask];
-+ u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
++ st_ptr = &st(nr);
++ c = compare(st_ptr, FPU_gettagi(nr));
++ if (c & COMP_NaN) {
++ setcc(SW_C3 | SW_C2 | SW_C0);
++ if (c & COMP_SNaN) { /* This is the only difference between
++ un-ordered and ordinary comparisons */
++ EXCEPTION(EX_Invalid);
++ return !(control_word & CW_Invalid);
++ }
++ return 0;
++ } else
++ switch (c & 7) {
++ case COMP_A_lt_B:
++ f = SW_C0;
++ break;
++ case COMP_A_eq_B:
++ f = SW_C3;
++ break;
++ case COMP_A_gt_B:
++ f = 0;
++ break;
++ case COMP_No_Comp:
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
+ #ifdef PARANOID
+- default:
+- EXCEPTION(EX_INTERNAL|0x123);
+- f = SW_C3 | SW_C2 | SW_C0;
+- break;
+-#endif /* PARANOID */
+- }
+- setcc(f);
+- if (c & COMP_Denormal)
+- {
+- return denormal_operand() < 0;
+- }
+- return 0;
++ default:
++ EXCEPTION(EX_INTERNAL | 0x123);
++ f = SW_C3 | SW_C2 | SW_C0;
++ break;
++#endif /* PARANOID */
++ }
++ setcc(f);
++ if (c & COMP_Denormal) {
++ return denormal_operand() < 0;
++ }
++ return 0;
+ }
+
+ /*---------------------------------------------------------------------------*/
+
+ void fcom_st(void)
+ {
+- /* fcom st(i) */
+- compare_st_st(FPU_rm);
++ /* fcom st(i) */
++ compare_st_st(FPU_rm);
+ }
+
+-
+ void fcompst(void)
+ {
+- /* fcomp st(i) */
+- if ( !compare_st_st(FPU_rm) )
+- FPU_pop();
++ /* fcomp st(i) */
++ if (!compare_st_st(FPU_rm))
++ FPU_pop();
+ }
+
+-
+ void fcompp(void)
+ {
+- /* fcompp */
+- if (FPU_rm != 1)
+- {
+- FPU_illegal();
+- return;
+- }
+- if ( !compare_st_st(1) )
+- poppop();
++ /* fcompp */
++ if (FPU_rm != 1) {
++ FPU_illegal();
++ return;
++ }
++ if (!compare_st_st(1))
++ poppop();
+ }
+
+-
+ void fucom_(void)
+ {
+- /* fucom st(i) */
+- compare_u_st_st(FPU_rm);
++ /* fucom st(i) */
++ compare_u_st_st(FPU_rm);
+
+ }
+
+-
+ void fucomp(void)
+ {
+- /* fucomp st(i) */
+- if ( !compare_u_st_st(FPU_rm) )
+- FPU_pop();
++ /* fucomp st(i) */
++ if (!compare_u_st_st(FPU_rm))
++ FPU_pop();
+ }
+
+-
+ void fucompp(void)
+ {
+- /* fucompp */
+- if (FPU_rm == 1)
+- {
+- if ( !compare_u_st_st(1) )
+- poppop();
+- }
+- else
+- FPU_illegal();
++ /* fucompp */
++ if (FPU_rm == 1) {
++ if (!compare_u_st_st(1))
++ poppop();
++ } else
++ FPU_illegal();
+ }
+diff --git a/arch/x86/math-emu/reg_constant.c b/arch/x86/math-emu/reg_constant.c
+index a850158..04869e6 100644
+--- a/arch/x86/math-emu/reg_constant.c
++++ b/arch/x86/math-emu/reg_constant.c
+@@ -16,29 +16,28 @@
+ #include "reg_constant.h"
+ #include "control_w.h"
+
+-
+ #define MAKE_REG(s,e,l,h) { l, h, \
+ ((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) }
+
+-FPU_REG const CONST_1 = MAKE_REG(POS, 0, 0x00000000, 0x80000000);
++FPU_REG const CONST_1 = MAKE_REG(POS, 0, 0x00000000, 0x80000000);
+ #if 0
+-FPU_REG const CONST_2 = MAKE_REG(POS, 1, 0x00000000, 0x80000000);
++FPU_REG const CONST_2 = MAKE_REG(POS, 1, 0x00000000, 0x80000000);
+ FPU_REG const CONST_HALF = MAKE_REG(POS, -1, 0x00000000, 0x80000000);
+-#endif /* 0 */
+-static FPU_REG const CONST_L2T = MAKE_REG(POS, 1, 0xcd1b8afe, 0xd49a784b);
+-static FPU_REG const CONST_L2E = MAKE_REG(POS, 0, 0x5c17f0bc, 0xb8aa3b29);
+-FPU_REG const CONST_PI = MAKE_REG(POS, 1, 0x2168c235, 0xc90fdaa2);
+-FPU_REG const CONST_PI2 = MAKE_REG(POS, 0, 0x2168c235, 0xc90fdaa2);
+-FPU_REG const CONST_PI4 = MAKE_REG(POS, -1, 0x2168c235, 0xc90fdaa2);
+-static FPU_REG const CONST_LG2 = MAKE_REG(POS, -2, 0xfbcff799, 0x9a209a84);
+-static FPU_REG const CONST_LN2 = MAKE_REG(POS, -1, 0xd1cf79ac, 0xb17217f7);
++#endif /* 0 */
++static FPU_REG const CONST_L2T = MAKE_REG(POS, 1, 0xcd1b8afe, 0xd49a784b);
++static FPU_REG const CONST_L2E = MAKE_REG(POS, 0, 0x5c17f0bc, 0xb8aa3b29);
++FPU_REG const CONST_PI = MAKE_REG(POS, 1, 0x2168c235, 0xc90fdaa2);
++FPU_REG const CONST_PI2 = MAKE_REG(POS, 0, 0x2168c235, 0xc90fdaa2);
++FPU_REG const CONST_PI4 = MAKE_REG(POS, -1, 0x2168c235, 0xc90fdaa2);
++static FPU_REG const CONST_LG2 = MAKE_REG(POS, -2, 0xfbcff799, 0x9a209a84);
++static FPU_REG const CONST_LN2 = MAKE_REG(POS, -1, 0xd1cf79ac, 0xb17217f7);
+
+ /* Extra bits to take pi/2 to more than 128 bits precision. */
+ FPU_REG const CONST_PI2extra = MAKE_REG(NEG, -66,
+- 0xfc8f8cbb, 0xece675d1);
++ 0xfc8f8cbb, 0xece675d1);
+
+ /* Only the sign (and tag) is used in internal zeroes */
+-FPU_REG const CONST_Z = MAKE_REG(POS, EXP_UNDER, 0x0, 0x0);
++FPU_REG const CONST_Z = MAKE_REG(POS, EXP_UNDER, 0x0, 0x0);
+
+ /* Only the sign and significand (and tag) are used in internal NaNs */
+ /* The 80486 never generates one of these
+@@ -48,24 +47,22 @@ FPU_REG const CONST_SNAN = MAKE_REG(POS, EXP_OVER, 0x00000001, 0x80000000);
+ FPU_REG const CONST_QNaN = MAKE_REG(NEG, EXP_OVER, 0x00000000, 0xC0000000);
+
+ /* Only the sign (and tag) is used in internal infinities */
+-FPU_REG const CONST_INF = MAKE_REG(POS, EXP_OVER, 0x00000000, 0x80000000);
+-
++FPU_REG const CONST_INF = MAKE_REG(POS, EXP_OVER, 0x00000000, 0x80000000);
+
+ static void fld_const(FPU_REG const *c, int adj, u_char tag)
+ {
+- FPU_REG *st_new_ptr;
+-
+- if ( STACK_OVERFLOW )
+- {
+- FPU_stack_overflow();
+- return;
+- }
+- push();
+- reg_copy(c, st_new_ptr);
+- st_new_ptr->sigl += adj; /* For all our fldxxx constants, we don't need to
+- borrow or carry. */
+- FPU_settag0(tag);
+- clear_C1();
++ FPU_REG *st_new_ptr;
+
-+ do {
-+ /* create keystream */
-+ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
-+ crypto_xor(src, keystream, bsize);
++ if (STACK_OVERFLOW) {
++ FPU_stack_overflow();
++ return;
++ }
++ push();
++ reg_copy(c, st_new_ptr);
++ st_new_ptr->sigl += adj; /* For all our fldxxx constants, we don't need to
++ borrow or carry. */
++ FPU_settag0(tag);
++ clear_C1();
+ }
+
+ /* A fast way to find out whether x is one of RC_DOWN or RC_CHOP
+@@ -75,46 +72,46 @@ static void fld_const(FPU_REG const *c, int adj, u_char tag)
+
+ static void fld1(int rc)
+ {
+- fld_const(&CONST_1, 0, TAG_Valid);
++ fld_const(&CONST_1, 0, TAG_Valid);
+ }
+
+ static void fldl2t(int rc)
+ {
+- fld_const(&CONST_L2T, (rc == RC_UP) ? 1 : 0, TAG_Valid);
++ fld_const(&CONST_L2T, (rc == RC_UP) ? 1 : 0, TAG_Valid);
+ }
+
+ static void fldl2e(int rc)
+ {
+- fld_const(&CONST_L2E, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
++ fld_const(&CONST_L2E, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
+ }
+
+ static void fldpi(int rc)
+ {
+- fld_const(&CONST_PI, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
++ fld_const(&CONST_PI, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
+ }
+
+ static void fldlg2(int rc)
+ {
+- fld_const(&CONST_LG2, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
++ fld_const(&CONST_LG2, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
+ }
+
+ static void fldln2(int rc)
+ {
+- fld_const(&CONST_LN2, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
++ fld_const(&CONST_LN2, DOWN_OR_CHOP(rc) ? -1 : 0, TAG_Valid);
+ }
+
+ static void fldz(int rc)
+ {
+- fld_const(&CONST_Z, 0, TAG_Zero);
++ fld_const(&CONST_Z, 0, TAG_Zero);
+ }
+
+-typedef void (*FUNC_RC)(int);
++typedef void (*FUNC_RC) (int);
+
+ static FUNC_RC constants_table[] = {
+- fld1, fldl2t, fldl2e, fldpi, fldlg2, fldln2, fldz, (FUNC_RC)FPU_illegal
++ fld1, fldl2t, fldl2e, fldpi, fldlg2, fldln2, fldz, (FUNC_RC) FPU_illegal
+ };
+
+ void fconst(void)
+ {
+- (constants_table[FPU_rm])(control_word & CW_RC);
++ (constants_table[FPU_rm]) (control_word & CW_RC);
+ }
+diff --git a/arch/x86/math-emu/reg_convert.c b/arch/x86/math-emu/reg_convert.c
+index 45a2587..1080607 100644
+--- a/arch/x86/math-emu/reg_convert.c
++++ b/arch/x86/math-emu/reg_convert.c
+@@ -13,41 +13,34 @@
+ #include "exception.h"
+ #include "fpu_emu.h"
+
+-
+ int FPU_to_exp16(FPU_REG const *a, FPU_REG *x)
+ {
+- int sign = getsign(a);
+-
+- *(long long *)&(x->sigl) = *(const long long *)&(a->sigl);
+-
+- /* Set up the exponent as a 16 bit quantity. */
+- setexponent16(x, exponent(a));
+-
+- if ( exponent16(x) == EXP_UNDER )
+- {
+- /* The number is a de-normal or pseudodenormal. */
+- /* We only deal with the significand and exponent. */
+-
+- if (x->sigh & 0x80000000)
+- {
+- /* Is a pseudodenormal. */
+- /* This is non-80486 behaviour because the number
+- loses its 'denormal' identity. */
+- addexponent(x, 1);
+- }
+- else
+- {
+- /* Is a denormal. */
+- addexponent(x, 1);
+- FPU_normalize_nuo(x);
++ int sign = getsign(a);
++
++ *(long long *)&(x->sigl) = *(const long long *)&(a->sigl);
++
++ /* Set up the exponent as a 16 bit quantity. */
++ setexponent16(x, exponent(a));
++
++ if (exponent16(x) == EXP_UNDER) {
++ /* The number is a de-normal or pseudodenormal. */
++ /* We only deal with the significand and exponent. */
++
++ if (x->sigh & 0x80000000) {
++ /* Is a pseudodenormal. */
++ /* This is non-80486 behaviour because the number
++ loses its 'denormal' identity. */
++ addexponent(x, 1);
++ } else {
++ /* Is a denormal. */
++ addexponent(x, 1);
++ FPU_normalize_nuo(x);
++ }
+ }
+- }
+
+- if ( !(x->sigh & 0x80000000) )
+- {
+- EXCEPTION(EX_INTERNAL | 0x180);
+- }
++ if (!(x->sigh & 0x80000000)) {
++ EXCEPTION(EX_INTERNAL | 0x180);
++ }
+
+- return sign;
++ return sign;
+ }
+-
+diff --git a/arch/x86/math-emu/reg_divide.c b/arch/x86/math-emu/reg_divide.c
+index 5cee7ff..6827012 100644
+--- a/arch/x86/math-emu/reg_divide.c
++++ b/arch/x86/math-emu/reg_divide.c
+@@ -26,182 +26,157 @@
+ */
+ int FPU_div(int flags, int rm, int control_w)
+ {
+- FPU_REG x, y;
+- FPU_REG const *a, *b, *st0_ptr, *st_ptr;
+- FPU_REG *dest;
+- u_char taga, tagb, signa, signb, sign, saved_sign;
+- int tag, deststnr;
+-
+- if ( flags & DEST_RM )
+- deststnr = rm;
+- else
+- deststnr = 0;
+-
+- if ( flags & REV )
+- {
+- b = &st(0);
+- st0_ptr = b;
+- tagb = FPU_gettag0();
+- if ( flags & LOADED )
+- {
+- a = (FPU_REG *)rm;
+- taga = flags & 0x0f;
++ FPU_REG x, y;
++ FPU_REG const *a, *b, *st0_ptr, *st_ptr;
++ FPU_REG *dest;
++ u_char taga, tagb, signa, signb, sign, saved_sign;
++ int tag, deststnr;
++
++ if (flags & DEST_RM)
++ deststnr = rm;
++ else
++ deststnr = 0;
++
++ if (flags & REV) {
++ b = &st(0);
++ st0_ptr = b;
++ tagb = FPU_gettag0();
++ if (flags & LOADED) {
++ a = (FPU_REG *) rm;
++ taga = flags & 0x0f;
++ } else {
++ a = &st(rm);
++ st_ptr = a;
++ taga = FPU_gettagi(rm);
++ }
++ } else {
++ a = &st(0);
++ st0_ptr = a;
++ taga = FPU_gettag0();
++ if (flags & LOADED) {
++ b = (FPU_REG *) rm;
++ tagb = flags & 0x0f;
++ } else {
++ b = &st(rm);
++ st_ptr = b;
++ tagb = FPU_gettagi(rm);
++ }
+ }
+- else
+- {
+- a = &st(rm);
+- st_ptr = a;
+- taga = FPU_gettagi(rm);
+- }
+- }
+- else
+- {
+- a = &st(0);
+- st0_ptr = a;
+- taga = FPU_gettag0();
+- if ( flags & LOADED )
+- {
+- b = (FPU_REG *)rm;
+- tagb = flags & 0x0f;
+- }
+- else
+- {
+- b = &st(rm);
+- st_ptr = b;
+- tagb = FPU_gettagi(rm);
+- }
+- }
+
+- signa = getsign(a);
+- signb = getsign(b);
++ signa = getsign(a);
++ signb = getsign(b);
+
+- sign = signa ^ signb;
++ sign = signa ^ signb;
+
+- dest = &st(deststnr);
+- saved_sign = getsign(dest);
++ dest = &st(deststnr);
++ saved_sign = getsign(dest);
+
+- if ( !(taga | tagb) )
+- {
+- /* Both regs Valid, this should be the most common case. */
+- reg_copy(a, &x);
+- reg_copy(b, &y);
+- setpositive(&x);
+- setpositive(&y);
+- tag = FPU_u_div(&x, &y, dest, control_w, sign);
++ if (!(taga | tagb)) {
++ /* Both regs Valid, this should be the most common case. */
++ reg_copy(a, &x);
++ reg_copy(b, &y);
++ setpositive(&x);
++ setpositive(&y);
++ tag = FPU_u_div(&x, &y, dest, control_w, sign);
+
+- if ( tag < 0 )
+- return tag;
++ if (tag < 0)
++ return tag;
+
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
++ FPU_settagi(deststnr, tag);
++ return tag;
++ }
+
+- if ( taga == TAG_Special )
+- taga = FPU_Special(a);
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
++ if (taga == TAG_Special)
++ taga = FPU_Special(a);
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
+
+- if ( ((taga == TAG_Valid) && (tagb == TW_Denormal))
++ if (((taga == TAG_Valid) && (tagb == TW_Denormal))
+ || ((taga == TW_Denormal) && (tagb == TAG_Valid))
+- || ((taga == TW_Denormal) && (tagb == TW_Denormal)) )
+- {
+- if ( denormal_operand() < 0 )
+- return FPU_Exception;
+-
+- FPU_to_exp16(a, &x);
+- FPU_to_exp16(b, &y);
+- tag = FPU_u_div(&x, &y, dest, control_w, sign);
+- if ( tag < 0 )
+- return tag;
+-
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
+- else if ( (taga <= TW_Denormal) && (tagb <= TW_Denormal) )
+- {
+- if ( tagb != TAG_Zero )
+- {
+- /* Want to find Zero/Valid */
+- if ( tagb == TW_Denormal )
+- {
+- if ( denormal_operand() < 0 )
+- return FPU_Exception;
+- }
+-
+- /* The result is zero. */
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+- setsign(dest, sign);
+- return TAG_Zero;
++ || ((taga == TW_Denormal) && (tagb == TW_Denormal))) {
++ if (denormal_operand() < 0)
++ return FPU_Exception;
++
++ FPU_to_exp16(a, &x);
++ FPU_to_exp16(b, &y);
++ tag = FPU_u_div(&x, &y, dest, control_w, sign);
++ if (tag < 0)
++ return tag;
++
++ FPU_settagi(deststnr, tag);
++ return tag;
++ } else if ((taga <= TW_Denormal) && (tagb <= TW_Denormal)) {
++ if (tagb != TAG_Zero) {
++ /* Want to find Zero/Valid */
++ if (tagb == TW_Denormal) {
++ if (denormal_operand() < 0)
++ return FPU_Exception;
++ }
++
++ /* The result is zero. */
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++ setsign(dest, sign);
++ return TAG_Zero;
++ }
++ /* We have an exception condition, either 0/0 or Valid/Zero. */
++ if (taga == TAG_Zero) {
++ /* 0/0 */
++ return arith_invalid(deststnr);
++ }
++ /* Valid/Zero */
++ return FPU_divide_by_zero(deststnr, sign);
+ }
+- /* We have an exception condition, either 0/0 or Valid/Zero. */
+- if ( taga == TAG_Zero )
+- {
+- /* 0/0 */
+- return arith_invalid(deststnr);
++ /* Must have infinities, NaNs, etc */
++ else if ((taga == TW_NaN) || (tagb == TW_NaN)) {
++ if (flags & LOADED)
++ return real_2op_NaN((FPU_REG *) rm, flags & 0x0f, 0,
++ st0_ptr);
++
++ if (flags & DEST_RM) {
++ int tag;
++ tag = FPU_gettag0();
++ if (tag == TAG_Special)
++ tag = FPU_Special(st0_ptr);
++ return real_2op_NaN(st0_ptr, tag, rm,
++ (flags & REV) ? st0_ptr : &st(rm));
++ } else {
++ int tag;
++ tag = FPU_gettagi(rm);
++ if (tag == TAG_Special)
++ tag = FPU_Special(&st(rm));
++ return real_2op_NaN(&st(rm), tag, 0,
++ (flags & REV) ? st0_ptr : &st(rm));
++ }
++ } else if (taga == TW_Infinity) {
++ if (tagb == TW_Infinity) {
++ /* infinity/infinity */
++ return arith_invalid(deststnr);
++ } else {
++ /* tagb must be Valid or Zero */
++ if ((tagb == TW_Denormal) && (denormal_operand() < 0))
++ return FPU_Exception;
++
++			/* Infinity divided by Zero or Valid does
++			   not raise an exception, but returns Infinity */
++ FPU_copy_to_regi(a, TAG_Special, deststnr);
++ setsign(dest, sign);
++ return taga;
++ }
++ } else if (tagb == TW_Infinity) {
++ if ((taga == TW_Denormal) && (denormal_operand() < 0))
++ return FPU_Exception;
++
++ /* The result is zero. */
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++ setsign(dest, sign);
++ return TAG_Zero;
+ }
+- /* Valid/Zero */
+- return FPU_divide_by_zero(deststnr, sign);
+- }
+- /* Must have infinities, NaNs, etc */
+- else if ( (taga == TW_NaN) || (tagb == TW_NaN) )
+- {
+- if ( flags & LOADED )
+- return real_2op_NaN((FPU_REG *)rm, flags & 0x0f, 0, st0_ptr);
+-
+- if ( flags & DEST_RM )
+- {
+- int tag;
+- tag = FPU_gettag0();
+- if ( tag == TAG_Special )
+- tag = FPU_Special(st0_ptr);
+- return real_2op_NaN(st0_ptr, tag, rm, (flags & REV) ? st0_ptr : &st(rm));
+- }
+- else
+- {
+- int tag;
+- tag = FPU_gettagi(rm);
+- if ( tag == TAG_Special )
+- tag = FPU_Special(&st(rm));
+- return real_2op_NaN(&st(rm), tag, 0, (flags & REV) ? st0_ptr : &st(rm));
+- }
+- }
+- else if (taga == TW_Infinity)
+- {
+- if (tagb == TW_Infinity)
+- {
+- /* infinity/infinity */
+- return arith_invalid(deststnr);
+- }
+- else
+- {
+- /* tagb must be Valid or Zero */
+- if ( (tagb == TW_Denormal) && (denormal_operand() < 0) )
+- return FPU_Exception;
+-
+- /* Infinity divided by Zero or Valid does
+- not raise and exception, but returns Infinity */
+- FPU_copy_to_regi(a, TAG_Special, deststnr);
+- setsign(dest, sign);
+- return taga;
+- }
+- }
+- else if (tagb == TW_Infinity)
+- {
+- if ( (taga == TW_Denormal) && (denormal_operand() < 0) )
+- return FPU_Exception;
+-
+- /* The result is zero. */
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+- setsign(dest, sign);
+- return TAG_Zero;
+- }
+ #ifdef PARANOID
+- else
+- {
+- EXCEPTION(EX_INTERNAL|0x102);
+- return FPU_Exception;
+- }
+-#endif /* PARANOID */
++ else {
++ EXCEPTION(EX_INTERNAL | 0x102);
++ return FPU_Exception;
++ }
++#endif /* PARANOID */
+
+ return 0;
+ }
+diff --git a/arch/x86/math-emu/reg_ld_str.c b/arch/x86/math-emu/reg_ld_str.c
+index e976cae..799d4af 100644
+--- a/arch/x86/math-emu/reg_ld_str.c
++++ b/arch/x86/math-emu/reg_ld_str.c
+@@ -27,1084 +27,938 @@
+ #include "control_w.h"
+ #include "status_w.h"
+
+-
+-#define DOUBLE_Emax 1023 /* largest valid exponent */
++#define DOUBLE_Emax 1023 /* largest valid exponent */
+ #define DOUBLE_Ebias 1023
+-#define DOUBLE_Emin (-1022) /* smallest valid exponent */
++#define DOUBLE_Emin (-1022) /* smallest valid exponent */
+
+-#define SINGLE_Emax 127 /* largest valid exponent */
++#define SINGLE_Emax 127 /* largest valid exponent */
+ #define SINGLE_Ebias 127
+-#define SINGLE_Emin (-126) /* smallest valid exponent */
+-
++#define SINGLE_Emin (-126) /* smallest valid exponent */
+
+ static u_char normalize_no_excep(FPU_REG *r, int exp, int sign)
+ {
+- u_char tag;
++ u_char tag;
+
+- setexponent16(r, exp);
++ setexponent16(r, exp);
+
+- tag = FPU_normalize_nuo(r);
+- stdexp(r);
+- if ( sign )
+- setnegative(r);
++ tag = FPU_normalize_nuo(r);
++ stdexp(r);
++ if (sign)
++ setnegative(r);
+
+- return tag;
++ return tag;
+ }
+
+-
+ int FPU_tagof(FPU_REG *ptr)
+ {
+- int exp;
+-
+- exp = exponent16(ptr) & 0x7fff;
+- if ( exp == 0 )
+- {
+- if ( !(ptr->sigh | ptr->sigl) )
+- {
+- return TAG_Zero;
++ int exp;
+
-+ /* increment counter in counterblock */
-+ crypto_inc(ctrblk, bsize);
++ exp = exponent16(ptr) & 0x7fff;
++ if (exp == 0) {
++ if (!(ptr->sigh | ptr->sigl)) {
++ return TAG_Zero;
++ }
++ /* The number is a de-normal or pseudodenormal. */
++ return TAG_Special;
++ }
+
-+ src += bsize;
-+ } while ((nbytes -= bsize) >= bsize);
++ if (exp == 0x7fff) {
++ /* Is an Infinity, a NaN, or an unsupported data type. */
++ return TAG_Special;
+ }
+- /* The number is a de-normal or pseudodenormal. */
+- return TAG_Special;
+- }
+-
+- if ( exp == 0x7fff )
+- {
+- /* Is an Infinity, a NaN, or an unsupported data type. */
+- return TAG_Special;
+- }
+-
+- if ( !(ptr->sigh & 0x80000000) )
+- {
+- /* Unsupported data type. */
+- /* Valid numbers have the ms bit set to 1. */
+- /* Unnormal. */
+- return TAG_Special;
+- }
+-
+- return TAG_Valid;
+-}
+
++ if (!(ptr->sigh & 0x80000000)) {
++ /* Unsupported data type. */
++ /* Valid numbers have the ms bit set to 1. */
++ /* Unnormal. */
++ return TAG_Special;
++ }
+
-+ return nbytes;
++ return TAG_Valid;
+}
+
+ /* Get a long double from user memory */
+ int FPU_load_extended(long double __user *s, int stnr)
+ {
+- FPU_REG *sti_ptr = &st(stnr);
++ FPU_REG *sti_ptr = &st(stnr);
+
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, s, 10);
+- __copy_from_user(sti_ptr, s, 10);
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, s, 10);
++ __copy_from_user(sti_ptr, s, 10);
++ RE_ENTRANT_CHECK_ON;
+
+- return FPU_tagof(sti_ptr);
++ return FPU_tagof(sti_ptr);
+ }
+
+-
+ /* Get a double from user memory */
+ int FPU_load_double(double __user *dfloat, FPU_REG *loaded_data)
+ {
+- int exp, tag, negative;
+- unsigned m64, l64;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, dfloat, 8);
+- FPU_get_user(m64, 1 + (unsigned long __user *) dfloat);
+- FPU_get_user(l64, (unsigned long __user *) dfloat);
+- RE_ENTRANT_CHECK_ON;
+-
+- negative = (m64 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
+- exp = ((m64 & 0x7ff00000) >> 20) - DOUBLE_Ebias + EXTENDED_Ebias;
+- m64 &= 0xfffff;
+- if ( exp > DOUBLE_Emax + EXTENDED_Ebias )
+- {
+- /* Infinity or NaN */
+- if ((m64 == 0) && (l64 == 0))
+- {
+- /* +- infinity */
+- loaded_data->sigh = 0x80000000;
+- loaded_data->sigl = 0x00000000;
+- exp = EXP_Infinity + EXTENDED_Ebias;
+- tag = TAG_Special;
+- }
+- else
+- {
+- /* Must be a signaling or quiet NaN */
+- exp = EXP_NaN + EXTENDED_Ebias;
+- loaded_data->sigh = (m64 << 11) | 0x80000000;
+- loaded_data->sigh |= l64 >> 21;
+- loaded_data->sigl = l64 << 11;
+- tag = TAG_Special; /* The calling function must look for NaNs */
+- }
+- }
+- else if ( exp < DOUBLE_Emin + EXTENDED_Ebias )
+- {
+- /* Zero or de-normal */
+- if ((m64 == 0) && (l64 == 0))
+- {
+- /* Zero */
+- reg_copy(&CONST_Z, loaded_data);
+- exp = 0;
+- tag = TAG_Zero;
+- }
+- else
+- {
+- /* De-normal */
+- loaded_data->sigh = m64 << 11;
+- loaded_data->sigh |= l64 >> 21;
+- loaded_data->sigl = l64 << 11;
+-
+- return normalize_no_excep(loaded_data, DOUBLE_Emin, negative)
+- | (denormal_operand() < 0 ? FPU_Exception : 0);
+- }
+- }
+- else
+- {
+- loaded_data->sigh = (m64 << 11) | 0x80000000;
+- loaded_data->sigh |= l64 >> 21;
+- loaded_data->sigl = l64 << 11;
++ int exp, tag, negative;
++ unsigned m64, l64;
++
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, dfloat, 8);
++ FPU_get_user(m64, 1 + (unsigned long __user *)dfloat);
++ FPU_get_user(l64, (unsigned long __user *)dfloat);
++ RE_ENTRANT_CHECK_ON;
++
++ negative = (m64 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
++ exp = ((m64 & 0x7ff00000) >> 20) - DOUBLE_Ebias + EXTENDED_Ebias;
++ m64 &= 0xfffff;
++ if (exp > DOUBLE_Emax + EXTENDED_Ebias) {
++ /* Infinity or NaN */
++ if ((m64 == 0) && (l64 == 0)) {
++ /* +- infinity */
++ loaded_data->sigh = 0x80000000;
++ loaded_data->sigl = 0x00000000;
++ exp = EXP_Infinity + EXTENDED_Ebias;
++ tag = TAG_Special;
++ } else {
++ /* Must be a signaling or quiet NaN */
++ exp = EXP_NaN + EXTENDED_Ebias;
++ loaded_data->sigh = (m64 << 11) | 0x80000000;
++ loaded_data->sigh |= l64 >> 21;
++ loaded_data->sigl = l64 << 11;
++ tag = TAG_Special; /* The calling function must look for NaNs */
++ }
++ } else if (exp < DOUBLE_Emin + EXTENDED_Ebias) {
++ /* Zero or de-normal */
++ if ((m64 == 0) && (l64 == 0)) {
++ /* Zero */
++ reg_copy(&CONST_Z, loaded_data);
++ exp = 0;
++ tag = TAG_Zero;
++ } else {
++ /* De-normal */
++ loaded_data->sigh = m64 << 11;
++ loaded_data->sigh |= l64 >> 21;
++ loaded_data->sigl = l64 << 11;
+
-+static int crypto_ctr_crypt(struct blkcipher_desc *desc,
-+ struct scatterlist *dst, struct scatterlist *src,
-+ unsigned int nbytes)
-+{
-+ struct blkcipher_walk walk;
-+ struct crypto_blkcipher *tfm = desc->tfm;
-+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
-+ struct crypto_cipher *child = ctx->child;
-+ unsigned int bsize = crypto_cipher_blocksize(child);
-+ int err;
-+
-+ blkcipher_walk_init(&walk, dst, src, nbytes);
-+ err = blkcipher_walk_virt_block(desc, &walk, bsize);
++ return normalize_no_excep(loaded_data, DOUBLE_Emin,
++ negative)
++ | (denormal_operand() < 0 ? FPU_Exception : 0);
++ }
++ } else {
++ loaded_data->sigh = (m64 << 11) | 0x80000000;
++ loaded_data->sigh |= l64 >> 21;
++ loaded_data->sigl = l64 << 11;
+
+- tag = TAG_Valid;
+- }
++ tag = TAG_Valid;
++ }
+
+- setexponent16(loaded_data, exp | negative);
++ setexponent16(loaded_data, exp | negative);
+
+- return tag;
++ return tag;
+ }
+
+-
+ /* Get a float from user memory */
+ int FPU_load_single(float __user *single, FPU_REG *loaded_data)
+ {
+- unsigned m32;
+- int exp, tag, negative;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, single, 4);
+- FPU_get_user(m32, (unsigned long __user *) single);
+- RE_ENTRANT_CHECK_ON;
+-
+- negative = (m32 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
+-
+- if (!(m32 & 0x7fffffff))
+- {
+- /* Zero */
+- reg_copy(&CONST_Z, loaded_data);
+- addexponent(loaded_data, negative);
+- return TAG_Zero;
+- }
+- exp = ((m32 & 0x7f800000) >> 23) - SINGLE_Ebias + EXTENDED_Ebias;
+- m32 = (m32 & 0x7fffff) << 8;
+- if ( exp < SINGLE_Emin + EXTENDED_Ebias )
+- {
+- /* De-normals */
+- loaded_data->sigh = m32;
+- loaded_data->sigl = 0;
+-
+- return normalize_no_excep(loaded_data, SINGLE_Emin, negative)
+- | (denormal_operand() < 0 ? FPU_Exception : 0);
+- }
+- else if ( exp > SINGLE_Emax + EXTENDED_Ebias )
+- {
+- /* Infinity or NaN */
+- if ( m32 == 0 )
+- {
+- /* +- infinity */
+- loaded_data->sigh = 0x80000000;
+- loaded_data->sigl = 0x00000000;
+- exp = EXP_Infinity + EXTENDED_Ebias;
+- tag = TAG_Special;
++ unsigned m32;
++ int exp, tag, negative;
++
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, single, 4);
++ FPU_get_user(m32, (unsigned long __user *)single);
++ RE_ENTRANT_CHECK_ON;
++
++ negative = (m32 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
++
++ if (!(m32 & 0x7fffffff)) {
++ /* Zero */
++ reg_copy(&CONST_Z, loaded_data);
++ addexponent(loaded_data, negative);
++ return TAG_Zero;
+ }
+- else
+- {
+- /* Must be a signaling or quiet NaN */
+- exp = EXP_NaN + EXTENDED_Ebias;
+- loaded_data->sigh = m32 | 0x80000000;
+- loaded_data->sigl = 0;
+- tag = TAG_Special; /* The calling function must look for NaNs */
++ exp = ((m32 & 0x7f800000) >> 23) - SINGLE_Ebias + EXTENDED_Ebias;
++ m32 = (m32 & 0x7fffff) << 8;
++ if (exp < SINGLE_Emin + EXTENDED_Ebias) {
++ /* De-normals */
++ loaded_data->sigh = m32;
++ loaded_data->sigl = 0;
++
++ return normalize_no_excep(loaded_data, SINGLE_Emin, negative)
++ | (denormal_operand() < 0 ? FPU_Exception : 0);
++ } else if (exp > SINGLE_Emax + EXTENDED_Ebias) {
++ /* Infinity or NaN */
++ if (m32 == 0) {
++ /* +- infinity */
++ loaded_data->sigh = 0x80000000;
++ loaded_data->sigl = 0x00000000;
++ exp = EXP_Infinity + EXTENDED_Ebias;
++ tag = TAG_Special;
++ } else {
++ /* Must be a signaling or quiet NaN */
++ exp = EXP_NaN + EXTENDED_Ebias;
++ loaded_data->sigh = m32 | 0x80000000;
++ loaded_data->sigl = 0;
++ tag = TAG_Special; /* The calling function must look for NaNs */
++ }
++ } else {
++ loaded_data->sigh = m32 | 0x80000000;
++ loaded_data->sigl = 0;
++ tag = TAG_Valid;
+ }
+- }
+- else
+- {
+- loaded_data->sigh = m32 | 0x80000000;
+- loaded_data->sigl = 0;
+- tag = TAG_Valid;
+- }
+
+- setexponent16(loaded_data, exp | negative); /* Set the sign. */
++ setexponent16(loaded_data, exp | negative); /* Set the sign. */
+
+- return tag;
++ return tag;
+ }
+
+-
+ /* Get a long long from user memory */
+ int FPU_load_int64(long long __user *_s)
+ {
+- long long s;
+- int sign;
+- FPU_REG *st0_ptr = &st(0);
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, _s, 8);
+- if (copy_from_user(&s,_s,8))
+- FPU_abort;
+- RE_ENTRANT_CHECK_ON;
+-
+- if (s == 0)
+- {
+- reg_copy(&CONST_Z, st0_ptr);
+- return TAG_Zero;
+- }
+-
+- if (s > 0)
+- sign = SIGN_Positive;
+- else
+- {
+- s = -s;
+- sign = SIGN_Negative;
+- }
+-
+- significand(st0_ptr) = s;
+-
+- return normalize_no_excep(st0_ptr, 63, sign);
+-}
++ long long s;
++ int sign;
++ FPU_REG *st0_ptr = &st(0);
+
-+ while (walk.nbytes >= bsize) {
-+ if (walk.src.virt.addr == walk.dst.virt.addr)
-+ nbytes = crypto_ctr_crypt_inplace(&walk, child);
-+ else
-+ nbytes = crypto_ctr_crypt_segment(&walk, child);
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, _s, 8);
++ if (copy_from_user(&s, _s, 8))
++ FPU_abort;
++ RE_ENTRANT_CHECK_ON;
+
-+ err = blkcipher_walk_done(desc, &walk, nbytes);
++ if (s == 0) {
++ reg_copy(&CONST_Z, st0_ptr);
++ return TAG_Zero;
+ }
+
-+ if (walk.nbytes) {
-+ crypto_ctr_crypt_final(&walk, child);
-+ err = blkcipher_walk_done(desc, &walk, 0);
++ if (s > 0)
++ sign = SIGN_Positive;
++ else {
++ s = -s;
++ sign = SIGN_Negative;
+ }
+
++ significand(st0_ptr) = s;
+
-+ return err;
++ return normalize_no_excep(st0_ptr, 63, sign);
+}
+
+ /* Get a long from user memory */
+ int FPU_load_int32(long __user *_s, FPU_REG *loaded_data)
+ {
+- long s;
+- int negative;
++ long s;
++ int negative;
+
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, _s, 4);
+- FPU_get_user(s, _s);
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, _s, 4);
++ FPU_get_user(s, _s);
++ RE_ENTRANT_CHECK_ON;
+
+- if (s == 0)
+- { reg_copy(&CONST_Z, loaded_data); return TAG_Zero; }
++ if (s == 0) {
++ reg_copy(&CONST_Z, loaded_data);
++ return TAG_Zero;
++ }
+
+- if (s > 0)
+- negative = SIGN_Positive;
+- else
+- {
+- s = -s;
+- negative = SIGN_Negative;
+- }
++ if (s > 0)
++ negative = SIGN_Positive;
++ else {
++ s = -s;
++ negative = SIGN_Negative;
++ }
+
+- loaded_data->sigh = s;
+- loaded_data->sigl = 0;
++ loaded_data->sigh = s;
++ loaded_data->sigl = 0;
+
+- return normalize_no_excep(loaded_data, 31, negative);
++ return normalize_no_excep(loaded_data, 31, negative);
+ }
+
+-
+ /* Get a short from user memory */
+ int FPU_load_int16(short __user *_s, FPU_REG *loaded_data)
+ {
+- int s, negative;
++ int s, negative;
+
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, _s, 2);
+- /* Cast as short to get the sign extended. */
+- FPU_get_user(s, _s);
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, _s, 2);
++ /* Cast as short to get the sign extended. */
++ FPU_get_user(s, _s);
++ RE_ENTRANT_CHECK_ON;
+
+- if (s == 0)
+- { reg_copy(&CONST_Z, loaded_data); return TAG_Zero; }
++ if (s == 0) {
++ reg_copy(&CONST_Z, loaded_data);
++ return TAG_Zero;
++ }
+
+- if (s > 0)
+- negative = SIGN_Positive;
+- else
+- {
+- s = -s;
+- negative = SIGN_Negative;
+- }
++ if (s > 0)
++ negative = SIGN_Positive;
++ else {
++ s = -s;
++ negative = SIGN_Negative;
++ }
+
+- loaded_data->sigh = s << 16;
+- loaded_data->sigl = 0;
++ loaded_data->sigh = s << 16;
++ loaded_data->sigl = 0;
+
+- return normalize_no_excep(loaded_data, 15, negative);
++ return normalize_no_excep(loaded_data, 15, negative);
+ }
+
+-
+ /* Get a packed bcd array from user memory */
+ int FPU_load_bcd(u_char __user *s)
+ {
+- FPU_REG *st0_ptr = &st(0);
+- int pos;
+- u_char bcd;
+- long long l=0;
+- int sign;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, s, 10);
+- RE_ENTRANT_CHECK_ON;
+- for ( pos = 8; pos >= 0; pos--)
+- {
+- l *= 10;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_get_user(bcd, s+pos);
+- RE_ENTRANT_CHECK_ON;
+- l += bcd >> 4;
+- l *= 10;
+- l += bcd & 0x0f;
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_get_user(sign, s+9);
+- sign = sign & 0x80 ? SIGN_Negative : SIGN_Positive;
+- RE_ENTRANT_CHECK_ON;
+-
+- if ( l == 0 )
+- {
+- reg_copy(&CONST_Z, st0_ptr);
+- addexponent(st0_ptr, sign); /* Set the sign. */
+- return TAG_Zero;
+- }
+- else
+- {
+- significand(st0_ptr) = l;
+- return normalize_no_excep(st0_ptr, 63, sign);
+- }
++ FPU_REG *st0_ptr = &st(0);
++ int pos;
++ u_char bcd;
++ long long l = 0;
++ int sign;
+
-+static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
-+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_cipher *cipher;
-+
-+ cipher = crypto_spawn_cipher(spawn);
-+ if (IS_ERR(cipher))
-+ return PTR_ERR(cipher);
-+
-+ ctx->child = cipher;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, s, 10);
++ RE_ENTRANT_CHECK_ON;
++ for (pos = 8; pos >= 0; pos--) {
++ l *= 10;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_get_user(bcd, s + pos);
++ RE_ENTRANT_CHECK_ON;
++ l += bcd >> 4;
++ l *= 10;
++ l += bcd & 0x0f;
++ }
++
++ RE_ENTRANT_CHECK_OFF;
++ FPU_get_user(sign, s + 9);
++ sign = sign & 0x80 ? SIGN_Negative : SIGN_Positive;
++ RE_ENTRANT_CHECK_ON;
++
++ if (l == 0) {
++ reg_copy(&CONST_Z, st0_ptr);
++ addexponent(st0_ptr, sign); /* Set the sign. */
++ return TAG_Zero;
++ } else {
++ significand(st0_ptr) = l;
++ return normalize_no_excep(st0_ptr, 63, sign);
++ }
+ }
+
+ /*===========================================================================*/
+
+ /* Put a long double into user memory */
+-int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag, long double __user *d)
++int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag,
++ long double __user * d)
+ {
+- /*
+- The only exception raised by an attempt to store to an
+- extended format is the Invalid Stack exception, i.e.
+- attempting to store from an empty register.
+- */
+-
+- if ( st0_tag != TAG_Empty )
+- {
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE, d, 10);
+-
+- FPU_put_user(st0_ptr->sigl, (unsigned long __user *) d);
+- FPU_put_user(st0_ptr->sigh, (unsigned long __user *) ((u_char __user *)d + 4));
+- FPU_put_user(exponent16(st0_ptr), (unsigned short __user *) ((u_char __user *)d + 8));
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
+- }
+-
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- /* Put out the QNaN indefinite */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,10);
+- FPU_put_user(0, (unsigned long __user *) d);
+- FPU_put_user(0xc0000000, 1 + (unsigned long __user *) d);
+- FPU_put_user(0xffff, 4 + (short __user *) d);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
+- }
+- else
+- return 0;
++ /*
++ The only exception raised by an attempt to store to an
++ extended format is the Invalid Stack exception, i.e.
++ attempting to store from an empty register.
++ */
++
++ if (st0_tag != TAG_Empty) {
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 10);
++
++ FPU_put_user(st0_ptr->sigl, (unsigned long __user *)d);
++ FPU_put_user(st0_ptr->sigh,
++ (unsigned long __user *)((u_char __user *) d + 4));
++ FPU_put_user(exponent16(st0_ptr),
++ (unsigned short __user *)((u_char __user *) d +
++ 8));
++ RE_ENTRANT_CHECK_ON;
+
-+ return 0;
++ return 1;
++ }
+
+-}
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ /* Put out the QNaN indefinite */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 10);
++ FPU_put_user(0, (unsigned long __user *)d);
++ FPU_put_user(0xc0000000, 1 + (unsigned long __user *)d);
++ FPU_put_user(0xffff, 4 + (short __user *)d);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ } else
++ return 0;
+
+}
+
+ /* Put a double into user memory */
+ int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double __user *dfloat)
+ {
+- unsigned long l[2];
+- unsigned long increment = 0; /* avoid gcc warnings */
+- int precision_loss;
+- int exp;
+- FPU_REG tmp;
++ unsigned long l[2];
++ unsigned long increment = 0; /* avoid gcc warnings */
++ int precision_loss;
++ int exp;
++ FPU_REG tmp;
+
+- if ( st0_tag == TAG_Valid )
+- {
+- reg_copy(st0_ptr, &tmp);
+- exp = exponent(&tmp);
++ if (st0_tag == TAG_Valid) {
++ reg_copy(st0_ptr, &tmp);
++ exp = exponent(&tmp);
+
+- if ( exp < DOUBLE_Emin ) /* It may be a denormal */
+- {
+- addexponent(&tmp, -DOUBLE_Emin + 52); /* largest exp to be 51 */
++ if (exp < DOUBLE_Emin) { /* It may be a denormal */
++ addexponent(&tmp, -DOUBLE_Emin + 52); /* largest exp to be 51 */
+
+- denormal_arg:
++ denormal_arg:
+
+- if ( (precision_loss = FPU_round_to_int(&tmp, st0_tag)) )
+- {
++ if ((precision_loss = FPU_round_to_int(&tmp, st0_tag))) {
+ #ifdef PECULIAR_486
+- /* Did it round to a non-denormal ? */
+- /* This behaviour might be regarded as peculiar, it appears
+- that the 80486 rounds to the dest precision, then
+- converts to decide underflow. */
+- if ( !((tmp.sigh == 0x00100000) && (tmp.sigl == 0) &&
+- (st0_ptr->sigl & 0x000007ff)) )
++ /* Did it round to a non-denormal ? */
++ /* This behaviour might be regarded as peculiar, it appears
++ that the 80486 rounds to the dest precision, then
++ converts to decide underflow. */
++ if (!
++ ((tmp.sigh == 0x00100000) && (tmp.sigl == 0)
++ && (st0_ptr->sigl & 0x000007ff)))
+ #endif /* PECULIAR_486 */
+- {
+- EXCEPTION(EX_Underflow);
+- /* This is a special case: see sec 16.2.5.1 of
+- the 80486 book */
+- if ( !(control_word & CW_Underflow) )
+- return 0;
+- }
+- EXCEPTION(precision_loss);
+- if ( !(control_word & CW_Precision) )
+- return 0;
+- }
+- l[0] = tmp.sigl;
+- l[1] = tmp.sigh;
+- }
+- else
+- {
+- if ( tmp.sigl & 0x000007ff )
+- {
+- precision_loss = 1;
+- switch (control_word & CW_RC)
+- {
+- case RC_RND:
+- /* Rounding can get a little messy.. */
+- increment = ((tmp.sigl & 0x7ff) > 0x400) | /* nearest */
+- ((tmp.sigl & 0xc00) == 0xc00); /* odd -> even */
+- break;
+- case RC_DOWN: /* towards -infinity */
+- increment = signpositive(&tmp) ? 0 : tmp.sigl & 0x7ff;
+- break;
+- case RC_UP: /* towards +infinity */
+- increment = signpositive(&tmp) ? tmp.sigl & 0x7ff : 0;
+- break;
+- case RC_CHOP:
+- increment = 0;
+- break;
+- }
+-
+- /* Truncate the mantissa */
+- tmp.sigl &= 0xfffff800;
+-
+- if ( increment )
+- {
+- if ( tmp.sigl >= 0xfffff800 )
+- {
+- /* the sigl part overflows */
+- if ( tmp.sigh == 0xffffffff )
+- {
+- /* The sigh part overflows */
+- tmp.sigh = 0x80000000;
+- exp++;
+- if (exp >= EXP_OVER)
+- goto overflow;
++ {
++ EXCEPTION(EX_Underflow);
++ /* This is a special case: see sec 16.2.5.1 of
++ the 80486 book */
++ if (!(control_word & CW_Underflow))
++ return 0;
++ }
++ EXCEPTION(precision_loss);
++ if (!(control_word & CW_Precision))
++ return 0;
+ }
+- else
+- {
+- tmp.sigh ++;
++ l[0] = tmp.sigl;
++ l[1] = tmp.sigh;
++ } else {
++ if (tmp.sigl & 0x000007ff) {
++ precision_loss = 1;
++ switch (control_word & CW_RC) {
++ case RC_RND:
++ /* Rounding can get a little messy.. */
++ increment = ((tmp.sigl & 0x7ff) > 0x400) | /* nearest */
++ ((tmp.sigl & 0xc00) == 0xc00); /* odd -> even */
++ break;
++ case RC_DOWN: /* towards -infinity */
++ increment =
++ signpositive(&tmp) ? 0 : tmp.
++ sigl & 0x7ff;
++ break;
++ case RC_UP: /* towards +infinity */
++ increment =
++ signpositive(&tmp) ? tmp.
++ sigl & 0x7ff : 0;
++ break;
++ case RC_CHOP:
++ increment = 0;
++ break;
++ }
+
-+static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
++ /* Truncate the mantissa */
++ tmp.sigl &= 0xfffff800;
+
-+ crypto_free_cipher(ctx->child);
-+}
++ if (increment) {
++ if (tmp.sigl >= 0xfffff800) {
++ /* the sigl part overflows */
++ if (tmp.sigh == 0xffffffff) {
++ /* The sigh part overflows */
++ tmp.sigh = 0x80000000;
++ exp++;
++ if (exp >= EXP_OVER)
++ goto overflow;
++ } else {
++ tmp.sigh++;
++ }
++ tmp.sigl = 0x00000000;
++ } else {
++ /* We only need to increment sigl */
++ tmp.sigl += 0x00000800;
++ }
++ }
++ } else
++ precision_loss = 0;
+
-+static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
-+{
-+ struct crypto_instance *inst;
-+ struct crypto_alg *alg;
-+ int err;
++ l[0] = (tmp.sigl >> 11) | (tmp.sigh << 21);
++ l[1] = ((tmp.sigh >> 11) & 0xfffff);
+
-+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
-+ if (err)
-+ return ERR_PTR(err);
++ if (exp > DOUBLE_Emax) {
++ overflow:
++ EXCEPTION(EX_Overflow);
++ if (!(control_word & CW_Overflow))
++ return 0;
++ set_precision_flag_up();
++ if (!(control_word & CW_Precision))
++ return 0;
+
-+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
-+ CRYPTO_ALG_TYPE_MASK);
-+ if (IS_ERR(alg))
-+ return ERR_PTR(PTR_ERR(alg));
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book */
++ /* Overflow to infinity */
++ l[0] = 0x00000000; /* Set to */
++ l[1] = 0x7ff00000; /* + INF */
++ } else {
++ if (precision_loss) {
++ if (increment)
++ set_precision_flag_up();
++ else
++ set_precision_flag_down();
++ }
++ /* Add the exponent */
++ l[1] |= (((exp + DOUBLE_Ebias) & 0x7ff) << 20);
+ }
+- tmp.sigl = 0x00000000;
+- }
+- else
+- {
+- /* We only need to increment sigl */
+- tmp.sigl += 0x00000800;
+- }
+- }
+- }
+- else
+- precision_loss = 0;
+-
+- l[0] = (tmp.sigl >> 11) | (tmp.sigh << 21);
+- l[1] = ((tmp.sigh >> 11) & 0xfffff);
+-
+- if ( exp > DOUBLE_Emax )
+- {
+- overflow:
+- EXCEPTION(EX_Overflow);
+- if ( !(control_word & CW_Overflow) )
+- return 0;
+- set_precision_flag_up();
+- if ( !(control_word & CW_Precision) )
+- return 0;
+-
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book */
+- /* Overflow to infinity */
+- l[0] = 0x00000000; /* Set to */
+- l[1] = 0x7ff00000; /* + INF */
+- }
+- else
+- {
+- if ( precision_loss )
+- {
+- if ( increment )
+- set_precision_flag_up();
+- else
+- set_precision_flag_down();
+ }
+- /* Add the exponent */
+- l[1] |= (((exp+DOUBLE_Ebias) & 0x7ff) << 20);
+- }
+- }
+- }
+- else if (st0_tag == TAG_Zero)
+- {
+- /* Number is zero */
+- l[0] = 0;
+- l[1] = 0;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if ( st0_tag == TW_Denormal )
+- {
+- /* A denormal will always underflow. */
++ } else if (st0_tag == TAG_Zero) {
++ /* Number is zero */
++ l[0] = 0;
++ l[1] = 0;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if (st0_tag == TW_Denormal) {
++ /* A denormal will always underflow. */
+ #ifndef PECULIAR_486
+- /* An 80486 is supposed to be able to generate
+- a denormal exception here, but... */
+- /* Underflow has priority. */
+- if ( control_word & CW_Underflow )
+- denormal_operand();
++ /* An 80486 is supposed to be able to generate
++ a denormal exception here, but... */
++ /* Underflow has priority. */
++ if (control_word & CW_Underflow)
++ denormal_operand();
+ #endif /* PECULIAR_486 */
+- reg_copy(st0_ptr, &tmp);
+- goto denormal_arg;
+- }
+- else if (st0_tag == TW_Infinity)
+- {
+- l[0] = 0;
+- l[1] = 0x7ff00000;
+- }
+- else if (st0_tag == TW_NaN)
+- {
+- /* Is it really a NaN ? */
+- if ( (exponent(st0_ptr) == EXP_OVER)
+- && (st0_ptr->sigh & 0x80000000) )
+- {
+- /* See if we can get a valid NaN from the FPU_REG */
+- l[0] = (st0_ptr->sigl >> 11) | (st0_ptr->sigh << 21);
+- l[1] = ((st0_ptr->sigh >> 11) & 0xfffff);
+- if ( !(st0_ptr->sigh & 0x40000000) )
+- {
+- /* It is a signalling NaN */
+- EXCEPTION(EX_Invalid);
+- if ( !(control_word & CW_Invalid) )
+- return 0;
+- l[1] |= (0x40000000 >> 11);
++ reg_copy(st0_ptr, &tmp);
++ goto denormal_arg;
++ } else if (st0_tag == TW_Infinity) {
++ l[0] = 0;
++ l[1] = 0x7ff00000;
++ } else if (st0_tag == TW_NaN) {
++ /* Is it really a NaN ? */
++ if ((exponent(st0_ptr) == EXP_OVER)
++ && (st0_ptr->sigh & 0x80000000)) {
++ /* See if we can get a valid NaN from the FPU_REG */
++ l[0] =
++ (st0_ptr->sigl >> 11) | (st0_ptr->
++ sigh << 21);
++ l[1] = ((st0_ptr->sigh >> 11) & 0xfffff);
++ if (!(st0_ptr->sigh & 0x40000000)) {
++ /* It is a signalling NaN */
++ EXCEPTION(EX_Invalid);
++ if (!(control_word & CW_Invalid))
++ return 0;
++ l[1] |= (0x40000000 >> 11);
++ }
++ l[1] |= 0x7ff00000;
++ } else {
++ /* It is an unsupported data type */
++ EXCEPTION(EX_Invalid);
++ if (!(control_word & CW_Invalid))
++ return 0;
++ l[0] = 0;
++ l[1] = 0xfff80000;
++ }
+ }
+- l[1] |= 0x7ff00000;
+- }
+- else
+- {
+- /* It is an unsupported data type */
+- EXCEPTION(EX_Invalid);
+- if ( !(control_word & CW_Invalid) )
+- return 0;
+- l[0] = 0;
+- l[1] = 0xfff80000;
+- }
++ } else if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ if (control_word & CW_Invalid) {
++ /* The masked response */
++ /* Put out the QNaN indefinite */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, dfloat, 8);
++ FPU_put_user(0, (unsigned long __user *)dfloat);
++ FPU_put_user(0xfff80000,
++ 1 + (unsigned long __user *)dfloat);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ } else
++ return 0;
+ }
+- }
+- else if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- if ( control_word & CW_Invalid )
+- {
+- /* The masked response */
+- /* Put out the QNaN indefinite */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,dfloat,8);
+- FPU_put_user(0, (unsigned long __user *) dfloat);
+- FPU_put_user(0xfff80000, 1 + (unsigned long __user *) dfloat);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
+- }
+- else
+- return 0;
+- }
+- if ( getsign(st0_ptr) )
+- l[1] |= 0x80000000;
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,dfloat,8);
+- FPU_put_user(l[0], (unsigned long __user *)dfloat);
+- FPU_put_user(l[1], 1 + (unsigned long __user *)dfloat);
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
+-}
++ if (getsign(st0_ptr))
++ l[1] |= 0x80000000;
+
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, dfloat, 8);
++ FPU_put_user(l[0], (unsigned long __user *)dfloat);
++ FPU_put_user(l[1], 1 + (unsigned long __user *)dfloat);
++ RE_ENTRANT_CHECK_ON;
+
-+ /* Block size must be >= 4 bytes. */
-+ err = -EINVAL;
-+ if (alg->cra_blocksize < 4)
-+ goto out_put_alg;
++ return 1;
++}
+
+ /* Put a float into user memory */
+ int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float __user *single)
+ {
+- long templ = 0;
+- unsigned long increment = 0; /* avoid gcc warnings */
+- int precision_loss;
+- int exp;
+- FPU_REG tmp;
++ long templ = 0;
++ unsigned long increment = 0; /* avoid gcc warnings */
++ int precision_loss;
++ int exp;
++ FPU_REG tmp;
+
+- if ( st0_tag == TAG_Valid )
+- {
++ if (st0_tag == TAG_Valid) {
+
+- reg_copy(st0_ptr, &tmp);
+- exp = exponent(&tmp);
++ reg_copy(st0_ptr, &tmp);
++ exp = exponent(&tmp);
+
+- if ( exp < SINGLE_Emin )
+- {
+- addexponent(&tmp, -SINGLE_Emin + 23); /* largest exp to be 22 */
++ if (exp < SINGLE_Emin) {
++ addexponent(&tmp, -SINGLE_Emin + 23); /* largest exp to be 22 */
+
+- denormal_arg:
++ denormal_arg:
+
+- if ( (precision_loss = FPU_round_to_int(&tmp, st0_tag)) )
+- {
++ if ((precision_loss = FPU_round_to_int(&tmp, st0_tag))) {
+ #ifdef PECULIAR_486
+- /* Did it round to a non-denormal ? */
+- /* This behaviour might be regarded as peculiar, it appears
+- that the 80486 rounds to the dest precision, then
+- converts to decide underflow. */
+- if ( !((tmp.sigl == 0x00800000) &&
+- ((st0_ptr->sigh & 0x000000ff) || st0_ptr->sigl)) )
++ /* Did it round to a non-denormal ? */
++ /* This behaviour might be regarded as peculiar, it appears
++ that the 80486 rounds to the dest precision, then
++ converts to decide underflow. */
++ if (!((tmp.sigl == 0x00800000) &&
++ ((st0_ptr->sigh & 0x000000ff)
++ || st0_ptr->sigl)))
+ #endif /* PECULIAR_486 */
+- {
+- EXCEPTION(EX_Underflow);
+- /* This is a special case: see sec 16.2.5.1 of
+- the 80486 book */
+- if ( !(control_word & CW_Underflow) )
+- return 0;
+- }
+- EXCEPTION(precision_loss);
+- if ( !(control_word & CW_Precision) )
+- return 0;
+- }
+- templ = tmp.sigl;
+- }
+- else
+- {
+- if ( tmp.sigl | (tmp.sigh & 0x000000ff) )
+- {
+- unsigned long sigh = tmp.sigh;
+- unsigned long sigl = tmp.sigl;
+-
+- precision_loss = 1;
+- switch (control_word & CW_RC)
+- {
+- case RC_RND:
+- increment = ((sigh & 0xff) > 0x80) /* more than half */
+- || (((sigh & 0xff) == 0x80) && sigl) /* more than half */
+- || ((sigh & 0x180) == 0x180); /* round to even */
+- break;
+- case RC_DOWN: /* towards -infinity */
+- increment = signpositive(&tmp)
+- ? 0 : (sigl | (sigh & 0xff));
+- break;
+- case RC_UP: /* towards +infinity */
+- increment = signpositive(&tmp)
+- ? (sigl | (sigh & 0xff)) : 0;
+- break;
+- case RC_CHOP:
+- increment = 0;
+- break;
+- }
+-
+- /* Truncate part of the mantissa */
+- tmp.sigl = 0;
+-
+- if (increment)
+- {
+- if ( sigh >= 0xffffff00 )
+- {
+- /* The sigh part overflows */
+- tmp.sigh = 0x80000000;
+- exp++;
+- if ( exp >= EXP_OVER )
+- goto overflow;
+- }
+- else
+- {
+- tmp.sigh &= 0xffffff00;
+- tmp.sigh += 0x100;
+- }
+- }
+- else
+- {
+- tmp.sigh &= 0xffffff00; /* Finish the truncation */
+- }
+- }
+- else
+- precision_loss = 0;
+-
+- templ = (tmp.sigh >> 8) & 0x007fffff;
+-
+- if ( exp > SINGLE_Emax )
+- {
+- overflow:
+- EXCEPTION(EX_Overflow);
+- if ( !(control_word & CW_Overflow) )
+- return 0;
+- set_precision_flag_up();
+- if ( !(control_word & CW_Precision) )
+- return 0;
+-
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book. */
+- /* Masked response is overflow to infinity. */
+- templ = 0x7f800000;
+- }
+- else
+- {
+- if ( precision_loss )
+- {
+- if ( increment )
+- set_precision_flag_up();
+- else
+- set_precision_flag_down();
++ {
++ EXCEPTION(EX_Underflow);
++ /* This is a special case: see sec 16.2.5.1 of
++ the 80486 book */
++ if (!(control_word & CW_Underflow))
++ return 0;
++ }
++ EXCEPTION(precision_loss);
++ if (!(control_word & CW_Precision))
++ return 0;
++ }
++ templ = tmp.sigl;
++ } else {
++ if (tmp.sigl | (tmp.sigh & 0x000000ff)) {
++ unsigned long sigh = tmp.sigh;
++ unsigned long sigl = tmp.sigl;
++
++ precision_loss = 1;
++ switch (control_word & CW_RC) {
++ case RC_RND:
++ increment = ((sigh & 0xff) > 0x80) /* more than half */
++ ||(((sigh & 0xff) == 0x80) && sigl) /* more than half */
++ ||((sigh & 0x180) == 0x180); /* round to even */
++ break;
++ case RC_DOWN: /* towards -infinity */
++ increment = signpositive(&tmp)
++ ? 0 : (sigl | (sigh & 0xff));
++ break;
++ case RC_UP: /* towards +infinity */
++ increment = signpositive(&tmp)
++ ? (sigl | (sigh & 0xff)) : 0;
++ break;
++ case RC_CHOP:
++ increment = 0;
++ break;
++ }
+
-+ /* If this is false we'd fail the alignment of crypto_inc. */
-+ if (alg->cra_blocksize % 4)
-+ goto out_put_alg;
++ /* Truncate part of the mantissa */
++ tmp.sigl = 0;
+
-+ inst = crypto_alloc_instance("ctr", alg);
-+ if (IS_ERR(inst))
-+ goto out;
++ if (increment) {
++ if (sigh >= 0xffffff00) {
++ /* The sigh part overflows */
++ tmp.sigh = 0x80000000;
++ exp++;
++ if (exp >= EXP_OVER)
++ goto overflow;
++ } else {
++ tmp.sigh &= 0xffffff00;
++ tmp.sigh += 0x100;
++ }
++ } else {
++ tmp.sigh &= 0xffffff00; /* Finish the truncation */
++ }
++ } else
++ precision_loss = 0;
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = alg->cra_alignmask | (__alignof__(u32) - 1);
-+ inst->alg.cra_type = &crypto_blkcipher_type;
++ templ = (tmp.sigh >> 8) & 0x007fffff;
+
-+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
-+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
-+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
++ if (exp > SINGLE_Emax) {
++ overflow:
++ EXCEPTION(EX_Overflow);
++ if (!(control_word & CW_Overflow))
++ return 0;
++ set_precision_flag_up();
++ if (!(control_word & CW_Precision))
++ return 0;
+
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book. */
++ /* Masked response is overflow to infinity. */
++ templ = 0x7f800000;
++ } else {
++ if (precision_loss) {
++ if (increment)
++ set_precision_flag_up();
++ else
++ set_precision_flag_down();
++ }
++ /* Add the exponent */
++ templ |= ((exp + SINGLE_Ebias) & 0xff) << 23;
++ }
+ }
+- /* Add the exponent */
+- templ |= ((exp+SINGLE_Ebias) & 0xff) << 23;
+- }
+- }
+- }
+- else if (st0_tag == TAG_Zero)
+- {
+- templ = 0;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if (st0_tag == TW_Denormal)
+- {
+- reg_copy(st0_ptr, &tmp);
+-
+- /* A denormal will always underflow. */
++ } else if (st0_tag == TAG_Zero) {
++ templ = 0;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if (st0_tag == TW_Denormal) {
++ reg_copy(st0_ptr, &tmp);
++
++ /* A denormal will always underflow. */
+ #ifndef PECULIAR_486
+- /* An 80486 is supposed to be able to generate
+- a denormal exception here, but... */
+- /* Underflow has priority. */
+- if ( control_word & CW_Underflow )
+- denormal_operand();
+-#endif /* PECULIAR_486 */
+- goto denormal_arg;
+- }
+- else if (st0_tag == TW_Infinity)
+- {
+- templ = 0x7f800000;
+- }
+- else if (st0_tag == TW_NaN)
+- {
+- /* Is it really a NaN ? */
+- if ( (exponent(st0_ptr) == EXP_OVER) && (st0_ptr->sigh & 0x80000000) )
+- {
+- /* See if we can get a valid NaN from the FPU_REG */
+- templ = st0_ptr->sigh >> 8;
+- if ( !(st0_ptr->sigh & 0x40000000) )
+- {
+- /* It is a signalling NaN */
+- EXCEPTION(EX_Invalid);
+- if ( !(control_word & CW_Invalid) )
+- return 0;
+- templ |= (0x40000000 >> 8);
++ /* An 80486 is supposed to be able to generate
++ a denormal exception here, but... */
++ /* Underflow has priority. */
++ if (control_word & CW_Underflow)
++ denormal_operand();
++#endif /* PECULIAR_486 */
++ goto denormal_arg;
++ } else if (st0_tag == TW_Infinity) {
++ templ = 0x7f800000;
++ } else if (st0_tag == TW_NaN) {
++ /* Is it really a NaN ? */
++ if ((exponent(st0_ptr) == EXP_OVER)
++ && (st0_ptr->sigh & 0x80000000)) {
++ /* See if we can get a valid NaN from the FPU_REG */
++ templ = st0_ptr->sigh >> 8;
++ if (!(st0_ptr->sigh & 0x40000000)) {
++ /* It is a signalling NaN */
++ EXCEPTION(EX_Invalid);
++ if (!(control_word & CW_Invalid))
++ return 0;
++ templ |= (0x40000000 >> 8);
++ }
++ templ |= 0x7f800000;
++ } else {
++ /* It is an unsupported data type */
++ EXCEPTION(EX_Invalid);
++ if (!(control_word & CW_Invalid))
++ return 0;
++ templ = 0xffc00000;
++ }
+ }
+- templ |= 0x7f800000;
+- }
+- else
+- {
+- /* It is an unsupported data type */
+- EXCEPTION(EX_Invalid);
+- if ( !(control_word & CW_Invalid) )
+- return 0;
+- templ = 0xffc00000;
+- }
+- }
+ #ifdef PARANOID
+- else
+- {
+- EXCEPTION(EX_INTERNAL|0x164);
+- return 0;
+- }
++ else {
++ EXCEPTION(EX_INTERNAL | 0x164);
++ return 0;
++ }
+ #endif
+- }
+- else if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- if ( control_word & EX_Invalid )
+- {
+- /* The masked response */
+- /* Put out the QNaN indefinite */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,single,4);
+- FPU_put_user(0xffc00000, (unsigned long __user *) single);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
++ } else if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ if (control_word & EX_Invalid) {
++ /* The masked response */
++ /* Put out the QNaN indefinite */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, single, 4);
++ FPU_put_user(0xffc00000,
++ (unsigned long __user *)single);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ } else
++ return 0;
+ }
+- else
+- return 0;
+- }
+ #ifdef PARANOID
+- else
+- {
+- EXCEPTION(EX_INTERNAL|0x163);
+- return 0;
+- }
++ else {
++ EXCEPTION(EX_INTERNAL | 0x163);
++ return 0;
++ }
+ #endif
+- if ( getsign(st0_ptr) )
+- templ |= 0x80000000;
++ if (getsign(st0_ptr))
++ templ |= 0x80000000;
+
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,single,4);
+- FPU_put_user(templ,(unsigned long __user *) single);
+- RE_ENTRANT_CHECK_ON;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, single, 4);
++ FPU_put_user(templ, (unsigned long __user *)single);
++ RE_ENTRANT_CHECK_ON;
+
+- return 1;
++ return 1;
+ }
+
+-
+ /* Put a long long into user memory */
+ int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long __user *d)
+ {
+- FPU_REG t;
+- long long tll;
+- int precision_loss;
+-
+- if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- goto invalid_operand;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if ( (st0_tag == TW_Infinity) ||
+- (st0_tag == TW_NaN) )
+- {
+- EXCEPTION(EX_Invalid);
+- goto invalid_operand;
++ FPU_REG t;
++ long long tll;
++ int precision_loss;
++
++ if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ goto invalid_operand;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if ((st0_tag == TW_Infinity) || (st0_tag == TW_NaN)) {
++ EXCEPTION(EX_Invalid);
++ goto invalid_operand;
++ }
+ }
+- }
+-
+- reg_copy(st0_ptr, &t);
+- precision_loss = FPU_round_to_int(&t, st0_tag);
+- ((long *)&tll)[0] = t.sigl;
+- ((long *)&tll)[1] = t.sigh;
+- if ( (precision_loss == 1) ||
+- ((t.sigh & 0x80000000) &&
+- !((t.sigh == 0x80000000) && (t.sigl == 0) &&
+- signnegative(&t))) )
+- {
+- EXCEPTION(EX_Invalid);
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book */
+- invalid_operand:
+- if ( control_word & EX_Invalid )
+- {
+- /* Produce something like QNaN "indefinite" */
+- tll = 0x8000000000000000LL;
+
-+ inst->alg.cra_init = crypto_ctr_init_tfm;
-+ inst->alg.cra_exit = crypto_ctr_exit_tfm;
++ reg_copy(st0_ptr, &t);
++ precision_loss = FPU_round_to_int(&t, st0_tag);
++ ((long *)&tll)[0] = t.sigl;
++ ((long *)&tll)[1] = t.sigh;
++ if ((precision_loss == 1) ||
++ ((t.sigh & 0x80000000) &&
++ !((t.sigh == 0x80000000) && (t.sigl == 0) && signnegative(&t)))) {
++ EXCEPTION(EX_Invalid);
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book */
++ invalid_operand:
++ if (control_word & EX_Invalid) {
++ /* Produce something like QNaN "indefinite" */
++ tll = 0x8000000000000000LL;
++ } else
++ return 0;
++ } else {
++ if (precision_loss)
++ set_precision_flag(precision_loss);
++ if (signnegative(&t))
++ tll = -tll;
+ }
+- else
+- return 0;
+- }
+- else
+- {
+- if ( precision_loss )
+- set_precision_flag(precision_loss);
+- if ( signnegative(&t) )
+- tll = - tll;
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,8);
+- if (copy_to_user(d, &tll, 8))
+- FPU_abort;
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
+-}
+
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 8);
++ if (copy_to_user(d, &tll, 8))
++ FPU_abort;
++ RE_ENTRANT_CHECK_ON;
+
-+ inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
-+ inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt;
-+ inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt;
++ return 1;
++}
+
+ /* Put a long into user memory */
+ int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long __user *d)
+ {
+- FPU_REG t;
+- int precision_loss;
+-
+- if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- goto invalid_operand;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if ( (st0_tag == TW_Infinity) ||
+- (st0_tag == TW_NaN) )
+- {
+- EXCEPTION(EX_Invalid);
+- goto invalid_operand;
++ FPU_REG t;
++ int precision_loss;
++
++ if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ goto invalid_operand;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if ((st0_tag == TW_Infinity) || (st0_tag == TW_NaN)) {
++ EXCEPTION(EX_Invalid);
++ goto invalid_operand;
++ }
+ }
+- }
+-
+- reg_copy(st0_ptr, &t);
+- precision_loss = FPU_round_to_int(&t, st0_tag);
+- if (t.sigh ||
+- ((t.sigl & 0x80000000) &&
+- !((t.sigl == 0x80000000) && signnegative(&t))) )
+- {
+- EXCEPTION(EX_Invalid);
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book */
+- invalid_operand:
+- if ( control_word & EX_Invalid )
+- {
+- /* Produce something like QNaN "indefinite" */
+- t.sigl = 0x80000000;
+
-+out:
-+ crypto_mod_put(alg);
-+ return inst;
++ reg_copy(st0_ptr, &t);
++ precision_loss = FPU_round_to_int(&t, st0_tag);
++ if (t.sigh ||
++ ((t.sigl & 0x80000000) &&
++ !((t.sigl == 0x80000000) && signnegative(&t)))) {
++ EXCEPTION(EX_Invalid);
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book */
++ invalid_operand:
++ if (control_word & EX_Invalid) {
++ /* Produce something like QNaN "indefinite" */
++ t.sigl = 0x80000000;
++ } else
++ return 0;
++ } else {
++ if (precision_loss)
++ set_precision_flag(precision_loss);
++ if (signnegative(&t))
++ t.sigl = -(long)t.sigl;
+ }
+- else
+- return 0;
+- }
+- else
+- {
+- if ( precision_loss )
+- set_precision_flag(precision_loss);
+- if ( signnegative(&t) )
+- t.sigl = -(long)t.sigl;
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,4);
+- FPU_put_user(t.sigl, (unsigned long __user *) d);
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
+-}
+
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 4);
++ FPU_put_user(t.sigl, (unsigned long __user *)d);
++ RE_ENTRANT_CHECK_ON;
+
-+out_put_alg:
-+ inst = ERR_PTR(err);
-+ goto out;
++ return 1;
+}
+
+ /* Put a short into user memory */
+ int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short __user *d)
+ {
+- FPU_REG t;
+- int precision_loss;
+-
+- if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- goto invalid_operand;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if ( (st0_tag == TW_Infinity) ||
+- (st0_tag == TW_NaN) )
+- {
+- EXCEPTION(EX_Invalid);
+- goto invalid_operand;
++ FPU_REG t;
++ int precision_loss;
++
++ if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ goto invalid_operand;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if ((st0_tag == TW_Infinity) || (st0_tag == TW_NaN)) {
++ EXCEPTION(EX_Invalid);
++ goto invalid_operand;
++ }
+ }
+- }
+-
+- reg_copy(st0_ptr, &t);
+- precision_loss = FPU_round_to_int(&t, st0_tag);
+- if (t.sigh ||
+- ((t.sigl & 0xffff8000) &&
+- !((t.sigl == 0x8000) && signnegative(&t))) )
+- {
+- EXCEPTION(EX_Invalid);
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book */
+- invalid_operand:
+- if ( control_word & EX_Invalid )
+- {
+- /* Produce something like QNaN "indefinite" */
+- t.sigl = 0x8000;
+
-+static void crypto_ctr_free(struct crypto_instance *inst)
-+{
-+ crypto_drop_spawn(crypto_instance_ctx(inst));
-+ kfree(inst);
++ reg_copy(st0_ptr, &t);
++ precision_loss = FPU_round_to_int(&t, st0_tag);
++ if (t.sigh ||
++ ((t.sigl & 0xffff8000) &&
++ !((t.sigl == 0x8000) && signnegative(&t)))) {
++ EXCEPTION(EX_Invalid);
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book */
++ invalid_operand:
++ if (control_word & EX_Invalid) {
++ /* Produce something like QNaN "indefinite" */
++ t.sigl = 0x8000;
++ } else
++ return 0;
++ } else {
++ if (precision_loss)
++ set_precision_flag(precision_loss);
++ if (signnegative(&t))
++ t.sigl = -t.sigl;
+ }
+- else
+- return 0;
+- }
+- else
+- {
+- if ( precision_loss )
+- set_precision_flag(precision_loss);
+- if ( signnegative(&t) )
+- t.sigl = -t.sigl;
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,2);
+- FPU_put_user((short)t.sigl, d);
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
+-}
+
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 2);
++ FPU_put_user((short)t.sigl, d);
++ RE_ENTRANT_CHECK_ON;
++
++ return 1;
+}
+
+ /* Put a packed bcd array into user memory */
+ int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char __user *d)
+ {
+- FPU_REG t;
+- unsigned long long ll;
+- u_char b;
+- int i, precision_loss;
+- u_char sign = (getsign(st0_ptr) == SIGN_NEG) ? 0x80 : 0;
+-
+- if ( st0_tag == TAG_Empty )
+- {
+- /* Empty register (stack underflow) */
+- EXCEPTION(EX_StackUnder);
+- goto invalid_operand;
+- }
+- else if ( st0_tag == TAG_Special )
+- {
+- st0_tag = FPU_Special(st0_ptr);
+- if ( (st0_tag == TW_Infinity) ||
+- (st0_tag == TW_NaN) )
+- {
+- EXCEPTION(EX_Invalid);
+- goto invalid_operand;
++ FPU_REG t;
++ unsigned long long ll;
++ u_char b;
++ int i, precision_loss;
++ u_char sign = (getsign(st0_ptr) == SIGN_NEG) ? 0x80 : 0;
++
++ if (st0_tag == TAG_Empty) {
++ /* Empty register (stack underflow) */
++ EXCEPTION(EX_StackUnder);
++ goto invalid_operand;
++ } else if (st0_tag == TAG_Special) {
++ st0_tag = FPU_Special(st0_ptr);
++ if ((st0_tag == TW_Infinity) || (st0_tag == TW_NaN)) {
++ EXCEPTION(EX_Invalid);
++ goto invalid_operand;
++ }
++ }
+
-+static struct crypto_template crypto_ctr_tmpl = {
-+ .name = "ctr",
-+ .alloc = crypto_ctr_alloc,
-+ .free = crypto_ctr_free,
-+ .module = THIS_MODULE,
-+};
++ reg_copy(st0_ptr, &t);
++ precision_loss = FPU_round_to_int(&t, st0_tag);
++ ll = significand(&t);
++
++ /* Check for overflow, by comparing with 999999999999999999 decimal. */
++ if ((t.sigh > 0x0de0b6b3) ||
++ ((t.sigh == 0x0de0b6b3) && (t.sigl > 0xa763ffff))) {
++ EXCEPTION(EX_Invalid);
++ /* This is a special case: see sec 16.2.5.1 of the 80486 book */
++ invalid_operand:
++ if (control_word & CW_Invalid) {
++ /* Produce the QNaN "indefinite" */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 10);
++ for (i = 0; i < 7; i++)
++ FPU_put_user(0, d + i); /* These bytes "undefined" */
++ FPU_put_user(0xc0, d + 7); /* This byte "undefined" */
++ FPU_put_user(0xff, d + 8);
++ FPU_put_user(0xff, d + 9);
++ RE_ENTRANT_CHECK_ON;
++ return 1;
++ } else
++ return 0;
++ } else if (precision_loss) {
++ /* Precision loss doesn't stop the data transfer */
++ set_precision_flag(precision_loss);
+ }
+- }
+-
+- reg_copy(st0_ptr, &t);
+- precision_loss = FPU_round_to_int(&t, st0_tag);
+- ll = significand(&t);
+-
+- /* Check for overflow, by comparing with 999999999999999999 decimal. */
+- if ( (t.sigh > 0x0de0b6b3) ||
+- ((t.sigh == 0x0de0b6b3) && (t.sigl > 0xa763ffff)) )
+- {
+- EXCEPTION(EX_Invalid);
+- /* This is a special case: see sec 16.2.5.1 of the 80486 book */
+- invalid_operand:
+- if ( control_word & CW_Invalid )
+- {
+- /* Produce the QNaN "indefinite" */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,10);
+- for ( i = 0; i < 7; i++)
+- FPU_put_user(0, d+i); /* These bytes "undefined" */
+- FPU_put_user(0xc0, d+7); /* This byte "undefined" */
+- FPU_put_user(0xff, d+8);
+- FPU_put_user(0xff, d+9);
+- RE_ENTRANT_CHECK_ON;
+- return 1;
++
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 10);
++ RE_ENTRANT_CHECK_ON;
++ for (i = 0; i < 9; i++) {
++ b = FPU_div_small(&ll, 10);
++ b |= (FPU_div_small(&ll, 10)) << 4;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_put_user(b, d + i);
++ RE_ENTRANT_CHECK_ON;
+ }
+- else
+- return 0;
+- }
+- else if ( precision_loss )
+- {
+- /* Precision loss doesn't stop the data transfer */
+- set_precision_flag(precision_loss);
+- }
+-
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,10);
+- RE_ENTRANT_CHECK_ON;
+- for ( i = 0; i < 9; i++)
+- {
+- b = FPU_div_small(&ll, 10);
+- b |= (FPU_div_small(&ll, 10)) << 4;
+- RE_ENTRANT_CHECK_OFF;
+- FPU_put_user(b, d+i);
+- RE_ENTRANT_CHECK_ON;
+- }
+- RE_ENTRANT_CHECK_OFF;
+- FPU_put_user(sign, d+9);
+- RE_ENTRANT_CHECK_ON;
+-
+- return 1;
++ RE_ENTRANT_CHECK_OFF;
++ FPU_put_user(sign, d + 9);
++ RE_ENTRANT_CHECK_ON;
+
-+static int crypto_rfc3686_setkey(struct crypto_tfm *parent, const u8 *key,
-+ unsigned int keylen)
-+{
-+ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(parent);
-+ struct crypto_blkcipher *child = ctx->child;
-+ int err;
++ return 1;
+ }
+
+ /*===========================================================================*/
+@@ -1119,59 +973,56 @@ int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char __user *d)
+ largest possible value */
+ int FPU_round_to_int(FPU_REG *r, u_char tag)
+ {
+- u_char very_big;
+- unsigned eax;
+-
+- if (tag == TAG_Zero)
+- {
+- /* Make sure that zero is returned */
+- significand(r) = 0;
+- return 0; /* o.k. */
+- }
+-
+- if (exponent(r) > 63)
+- {
+- r->sigl = r->sigh = ~0; /* The largest representable number */
+- return 1; /* overflow */
+- }
+-
+- eax = FPU_shrxs(&r->sigl, 63 - exponent(r));
+- very_big = !(~(r->sigh) | ~(r->sigl)); /* test for 0xfff...fff */
++ u_char very_big;
++ unsigned eax;
++
++ if (tag == TAG_Zero) {
++ /* Make sure that zero is returned */
++ significand(r) = 0;
++ return 0; /* o.k. */
++ }
++
++ if (exponent(r) > 63) {
++ r->sigl = r->sigh = ~0; /* The largest representable number */
++ return 1; /* overflow */
++ }
++
++ eax = FPU_shrxs(&r->sigl, 63 - exponent(r));
++ very_big = !(~(r->sigh) | ~(r->sigl)); /* test for 0xfff...fff */
+ #define half_or_more (eax & 0x80000000)
+ #define frac_part (eax)
+ #define more_than_half ((eax & 0x80000001) == 0x80000001)
+- switch (control_word & CW_RC)
+- {
+- case RC_RND:
+- if ( more_than_half /* nearest */
+- || (half_or_more && (r->sigl & 1)) ) /* odd -> even */
+- {
+- if ( very_big ) return 1; /* overflow */
+- significand(r) ++;
+- return PRECISION_LOST_UP;
+- }
+- break;
+- case RC_DOWN:
+- if (frac_part && getsign(r))
+- {
+- if ( very_big ) return 1; /* overflow */
+- significand(r) ++;
+- return PRECISION_LOST_UP;
+- }
+- break;
+- case RC_UP:
+- if (frac_part && !getsign(r))
+- {
+- if ( very_big ) return 1; /* overflow */
+- significand(r) ++;
+- return PRECISION_LOST_UP;
++ switch (control_word & CW_RC) {
++ case RC_RND:
++ if (more_than_half /* nearest */
++ || (half_or_more && (r->sigl & 1))) { /* odd -> even */
++ if (very_big)
++ return 1; /* overflow */
++ significand(r)++;
++ return PRECISION_LOST_UP;
++ }
++ break;
++ case RC_DOWN:
++ if (frac_part && getsign(r)) {
++ if (very_big)
++ return 1; /* overflow */
++ significand(r)++;
++ return PRECISION_LOST_UP;
++ }
++ break;
++ case RC_UP:
++ if (frac_part && !getsign(r)) {
++ if (very_big)
++ return 1; /* overflow */
++ significand(r)++;
++ return PRECISION_LOST_UP;
++ }
++ break;
++ case RC_CHOP:
++ break;
+ }
+- break;
+- case RC_CHOP:
+- break;
+- }
+
+- return eax ? PRECISION_LOST_DOWN : 0;
++ return eax ? PRECISION_LOST_DOWN : 0;
+
+ }
+
+@@ -1179,197 +1030,195 @@ int FPU_round_to_int(FPU_REG *r, u_char tag)
+
+ u_char __user *fldenv(fpu_addr_modes addr_modes, u_char __user *s)
+ {
+- unsigned short tag_word = 0;
+- u_char tag;
+- int i;
+-
+- if ( (addr_modes.default_mode == VM86) ||
+- ((addr_modes.default_mode == PM16)
+- ^ (addr_modes.override.operand_size == OP_SIZE_PREFIX)) )
+- {
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, s, 0x0e);
+- FPU_get_user(control_word, (unsigned short __user *) s);
+- FPU_get_user(partial_status, (unsigned short __user *) (s+2));
+- FPU_get_user(tag_word, (unsigned short __user *) (s+4));
+- FPU_get_user(instruction_address.offset, (unsigned short __user *) (s+6));
+- FPU_get_user(instruction_address.selector, (unsigned short __user *) (s+8));
+- FPU_get_user(operand_address.offset, (unsigned short __user *) (s+0x0a));
+- FPU_get_user(operand_address.selector, (unsigned short __user *) (s+0x0c));
+- RE_ENTRANT_CHECK_ON;
+- s += 0x0e;
+- if ( addr_modes.default_mode == VM86 )
+- {
+- instruction_address.offset
+- += (instruction_address.selector & 0xf000) << 4;
+- operand_address.offset += (operand_address.selector & 0xf000) << 4;
++ unsigned short tag_word = 0;
++ u_char tag;
++ int i;
+
-+ /* the nonce is stored in bytes at end of key */
-+ if (keylen < CTR_RFC3686_NONCE_SIZE)
-+ return -EINVAL;
++ if ((addr_modes.default_mode == VM86) ||
++ ((addr_modes.default_mode == PM16)
++ ^ (addr_modes.override.operand_size == OP_SIZE_PREFIX))) {
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, s, 0x0e);
++ FPU_get_user(control_word, (unsigned short __user *)s);
++ FPU_get_user(partial_status, (unsigned short __user *)(s + 2));
++ FPU_get_user(tag_word, (unsigned short __user *)(s + 4));
++ FPU_get_user(instruction_address.offset,
++ (unsigned short __user *)(s + 6));
++ FPU_get_user(instruction_address.selector,
++ (unsigned short __user *)(s + 8));
++ FPU_get_user(operand_address.offset,
++ (unsigned short __user *)(s + 0x0a));
++ FPU_get_user(operand_address.selector,
++ (unsigned short __user *)(s + 0x0c));
++ RE_ENTRANT_CHECK_ON;
++ s += 0x0e;
++ if (addr_modes.default_mode == VM86) {
++ instruction_address.offset
++ += (instruction_address.selector & 0xf000) << 4;
++ operand_address.offset +=
++ (operand_address.selector & 0xf000) << 4;
++ }
++ } else {
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, s, 0x1c);
++ FPU_get_user(control_word, (unsigned short __user *)s);
++ FPU_get_user(partial_status, (unsigned short __user *)(s + 4));
++ FPU_get_user(tag_word, (unsigned short __user *)(s + 8));
++ FPU_get_user(instruction_address.offset,
++ (unsigned long __user *)(s + 0x0c));
++ FPU_get_user(instruction_address.selector,
++ (unsigned short __user *)(s + 0x10));
++ FPU_get_user(instruction_address.opcode,
++ (unsigned short __user *)(s + 0x12));
++ FPU_get_user(operand_address.offset,
++ (unsigned long __user *)(s + 0x14));
++ FPU_get_user(operand_address.selector,
++ (unsigned long __user *)(s + 0x18));
++ RE_ENTRANT_CHECK_ON;
++ s += 0x1c;
+ }
+- }
+- else
+- {
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ, s, 0x1c);
+- FPU_get_user(control_word, (unsigned short __user *) s);
+- FPU_get_user(partial_status, (unsigned short __user *) (s+4));
+- FPU_get_user(tag_word, (unsigned short __user *) (s+8));
+- FPU_get_user(instruction_address.offset, (unsigned long __user *) (s+0x0c));
+- FPU_get_user(instruction_address.selector, (unsigned short __user *) (s+0x10));
+- FPU_get_user(instruction_address.opcode, (unsigned short __user *) (s+0x12));
+- FPU_get_user(operand_address.offset, (unsigned long __user *) (s+0x14));
+- FPU_get_user(operand_address.selector, (unsigned long __user *) (s+0x18));
+- RE_ENTRANT_CHECK_ON;
+- s += 0x1c;
+- }
+
+ #ifdef PECULIAR_486
+- control_word &= ~0xe080;
+-#endif /* PECULIAR_486 */
+-
+- top = (partial_status >> SW_Top_Shift) & 7;
+-
+- if ( partial_status & ~control_word & CW_Exceptions )
+- partial_status |= (SW_Summary | SW_Backward);
+- else
+- partial_status &= ~(SW_Summary | SW_Backward);
+-
+- for ( i = 0; i < 8; i++ )
+- {
+- tag = tag_word & 3;
+- tag_word >>= 2;
+-
+- if ( tag == TAG_Empty )
+- /* New tag is empty. Accept it */
+- FPU_settag(i, TAG_Empty);
+- else if ( FPU_gettag(i) == TAG_Empty )
+- {
+- /* Old tag is empty and new tag is not empty. New tag is determined
+- by old reg contents */
+- if ( exponent(&fpu_register(i)) == - EXTENDED_Ebias )
+- {
+- if ( !(fpu_register(i).sigl | fpu_register(i).sigh) )
+- FPU_settag(i, TAG_Zero);
+- else
+- FPU_settag(i, TAG_Special);
+- }
+- else if ( exponent(&fpu_register(i)) == 0x7fff - EXTENDED_Ebias )
+- {
+- FPU_settag(i, TAG_Special);
+- }
+- else if ( fpu_register(i).sigh & 0x80000000 )
+- FPU_settag(i, TAG_Valid);
+- else
+- FPU_settag(i, TAG_Special); /* An Un-normal */
+- }
+- /* Else old tag is not empty and new tag is not empty. Old tag
+- remains correct */
+- }
+-
+- return s;
+-}
++ control_word &= ~0xe080;
++#endif /* PECULIAR_486 */
+
-+ memcpy(ctx->nonce, key + (keylen - CTR_RFC3686_NONCE_SIZE),
-+ CTR_RFC3686_NONCE_SIZE);
++ top = (partial_status >> SW_Top_Shift) & 7;
+
-+ keylen -= CTR_RFC3686_NONCE_SIZE;
++ if (partial_status & ~control_word & CW_Exceptions)
++ partial_status |= (SW_Summary | SW_Backward);
++ else
++ partial_status &= ~(SW_Summary | SW_Backward);
+
-+ crypto_blkcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-+ crypto_blkcipher_set_flags(child, crypto_tfm_get_flags(parent) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_blkcipher_setkey(child, key, keylen);
-+ crypto_tfm_set_flags(parent, crypto_blkcipher_get_flags(child) &
-+ CRYPTO_TFM_RES_MASK);
++ for (i = 0; i < 8; i++) {
++ tag = tag_word & 3;
++ tag_word >>= 2;
+
-+ return err;
++ if (tag == TAG_Empty)
++ /* New tag is empty. Accept it */
++ FPU_settag(i, TAG_Empty);
++ else if (FPU_gettag(i) == TAG_Empty) {
++ /* Old tag is empty and new tag is not empty. New tag is determined
++ by old reg contents */
++ if (exponent(&fpu_register(i)) == -EXTENDED_Ebias) {
++ if (!
++ (fpu_register(i).sigl | fpu_register(i).
++ sigh))
++ FPU_settag(i, TAG_Zero);
++ else
++ FPU_settag(i, TAG_Special);
++ } else if (exponent(&fpu_register(i)) ==
++ 0x7fff - EXTENDED_Ebias) {
++ FPU_settag(i, TAG_Special);
++ } else if (fpu_register(i).sigh & 0x80000000)
++ FPU_settag(i, TAG_Valid);
++ else
++ FPU_settag(i, TAG_Special); /* An Un-normal */
++ }
++ /* Else old tag is not empty and new tag is not empty. Old tag
++ remains correct */
++ }
+
++ return s;
+}
+
+ void frstor(fpu_addr_modes addr_modes, u_char __user *data_address)
+ {
+- int i, regnr;
+- u_char __user *s = fldenv(addr_modes, data_address);
+- int offset = (top & 7) * 10, other = 80 - offset;
+-
+- /* Copy all registers in stack order. */
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_READ,s,80);
+- __copy_from_user(register_base+offset, s, other);
+- if ( offset )
+- __copy_from_user(register_base, s+other, offset);
+- RE_ENTRANT_CHECK_ON;
+-
+- for ( i = 0; i < 8; i++ )
+- {
+- regnr = (i+top) & 7;
+- if ( FPU_gettag(regnr) != TAG_Empty )
+- /* The loaded data over-rides all other cases. */
+- FPU_settag(regnr, FPU_tagof(&st(i)));
+- }
++ int i, regnr;
++ u_char __user *s = fldenv(addr_modes, data_address);
++ int offset = (top & 7) * 10, other = 80 - offset;
++
++ /* Copy all registers in stack order. */
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_READ, s, 80);
++ __copy_from_user(register_base + offset, s, other);
++ if (offset)
++ __copy_from_user(register_base, s + other, offset);
++ RE_ENTRANT_CHECK_ON;
+
-+static int crypto_rfc3686_crypt(struct blkcipher_desc *desc,
-+ struct scatterlist *dst,
-+ struct scatterlist *src, unsigned int nbytes)
-+{
-+ struct crypto_blkcipher *tfm = desc->tfm;
-+ struct crypto_rfc3686_ctx *ctx = crypto_blkcipher_ctx(tfm);
-+ struct crypto_blkcipher *child = ctx->child;
-+ unsigned long alignmask = crypto_blkcipher_alignmask(tfm);
-+ u8 ivblk[CTR_RFC3686_BLOCK_SIZE + alignmask];
-+ u8 *iv = PTR_ALIGN(ivblk + 0, alignmask + 1);
-+ u8 *info = desc->info;
-+ int err;
-+
-+ /* set up counter block */
-+ memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE);
-+ memcpy(iv + CTR_RFC3686_NONCE_SIZE, info, CTR_RFC3686_IV_SIZE);
-+
-+ /* initialize counter portion of counter block */
-+ *(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) =
-+ cpu_to_be32(1);
-+
-+ desc->tfm = child;
-+ desc->info = iv;
-+ err = crypto_blkcipher_encrypt_iv(desc, dst, src, nbytes);
-+ desc->tfm = tfm;
-+ desc->info = info;
++ for (i = 0; i < 8; i++) {
++ regnr = (i + top) & 7;
++ if (FPU_gettag(regnr) != TAG_Empty)
++ /* The loaded data over-rides all other cases. */
++ FPU_settag(regnr, FPU_tagof(&st(i)));
++ }
+
+ }
+
+-
+ u_char __user *fstenv(fpu_addr_modes addr_modes, u_char __user *d)
+ {
+- if ( (addr_modes.default_mode == VM86) ||
+- ((addr_modes.default_mode == PM16)
+- ^ (addr_modes.override.operand_size == OP_SIZE_PREFIX)) )
+- {
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,14);
++ if ((addr_modes.default_mode == VM86) ||
++ ((addr_modes.default_mode == PM16)
++ ^ (addr_modes.override.operand_size == OP_SIZE_PREFIX))) {
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 14);
+ #ifdef PECULIAR_486
+- FPU_put_user(control_word & ~0xe080, (unsigned long __user *) d);
++ FPU_put_user(control_word & ~0xe080, (unsigned long __user *)d);
+ #else
+- FPU_put_user(control_word, (unsigned short __user *) d);
++ FPU_put_user(control_word, (unsigned short __user *)d);
+ #endif /* PECULIAR_486 */
+- FPU_put_user(status_word(), (unsigned short __user *) (d+2));
+- FPU_put_user(fpu_tag_word, (unsigned short __user *) (d+4));
+- FPU_put_user(instruction_address.offset, (unsigned short __user *) (d+6));
+- FPU_put_user(operand_address.offset, (unsigned short __user *) (d+0x0a));
+- if ( addr_modes.default_mode == VM86 )
+- {
+- FPU_put_user((instruction_address.offset & 0xf0000) >> 4,
+- (unsigned short __user *) (d+8));
+- FPU_put_user((operand_address.offset & 0xf0000) >> 4,
+- (unsigned short __user *) (d+0x0c));
+- }
+- else
+- {
+- FPU_put_user(instruction_address.selector, (unsigned short __user *) (d+8));
+- FPU_put_user(operand_address.selector, (unsigned short __user *) (d+0x0c));
+- }
+- RE_ENTRANT_CHECK_ON;
+- d += 0x0e;
+- }
+- else
+- {
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE, d, 7*4);
++ FPU_put_user(status_word(), (unsigned short __user *)(d + 2));
++ FPU_put_user(fpu_tag_word, (unsigned short __user *)(d + 4));
++ FPU_put_user(instruction_address.offset,
++ (unsigned short __user *)(d + 6));
++ FPU_put_user(operand_address.offset,
++ (unsigned short __user *)(d + 0x0a));
++ if (addr_modes.default_mode == VM86) {
++ FPU_put_user((instruction_address.
++ offset & 0xf0000) >> 4,
++ (unsigned short __user *)(d + 8));
++ FPU_put_user((operand_address.offset & 0xf0000) >> 4,
++ (unsigned short __user *)(d + 0x0c));
++ } else {
++ FPU_put_user(instruction_address.selector,
++ (unsigned short __user *)(d + 8));
++ FPU_put_user(operand_address.selector,
++ (unsigned short __user *)(d + 0x0c));
++ }
++ RE_ENTRANT_CHECK_ON;
++ d += 0x0e;
++ } else {
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 7 * 4);
+ #ifdef PECULIAR_486
+- control_word &= ~0xe080;
+- /* An 80486 sets nearly all of the reserved bits to 1. */
+- control_word |= 0xffff0040;
+- partial_status = status_word() | 0xffff0000;
+- fpu_tag_word |= 0xffff0000;
+- I387.soft.fcs &= ~0xf8000000;
+- I387.soft.fos |= 0xffff0000;
++ control_word &= ~0xe080;
++ /* An 80486 sets nearly all of the reserved bits to 1. */
++ control_word |= 0xffff0040;
++ partial_status = status_word() | 0xffff0000;
++ fpu_tag_word |= 0xffff0000;
++ I387.soft.fcs &= ~0xf8000000;
++ I387.soft.fos |= 0xffff0000;
+ #endif /* PECULIAR_486 */
+- if (__copy_to_user(d, &control_word, 7*4))
+- FPU_abort;
+- RE_ENTRANT_CHECK_ON;
+- d += 0x1c;
+- }
+-
+- control_word |= CW_Exceptions;
+- partial_status &= ~(SW_Summary | SW_Backward);
+-
+- return d;
+-}
++ if (__copy_to_user(d, &control_word, 7 * 4))
++ FPU_abort;
++ RE_ENTRANT_CHECK_ON;
++ d += 0x1c;
++ }
+
++ control_word |= CW_Exceptions;
++ partial_status &= ~(SW_Summary | SW_Backward);
+
-+ return err;
++ return d;
+}
+
+ void fsave(fpu_addr_modes addr_modes, u_char __user *data_address)
+ {
+- u_char __user *d;
+- int offset = (top & 7) * 10, other = 80 - offset;
++ u_char __user *d;
++ int offset = (top & 7) * 10, other = 80 - offset;
+
+- d = fstenv(addr_modes, data_address);
++ d = fstenv(addr_modes, data_address);
+
+- RE_ENTRANT_CHECK_OFF;
+- FPU_access_ok(VERIFY_WRITE,d,80);
++ RE_ENTRANT_CHECK_OFF;
++ FPU_access_ok(VERIFY_WRITE, d, 80);
+
+- /* Copy all registers in stack order. */
+- if (__copy_to_user(d, register_base+offset, other))
+- FPU_abort;
+- if ( offset )
+- if (__copy_to_user(d+other, register_base, offset))
+- FPU_abort;
+- RE_ENTRANT_CHECK_ON;
++ /* Copy all registers in stack order. */
++ if (__copy_to_user(d, register_base + offset, other))
++ FPU_abort;
++ if (offset)
++ if (__copy_to_user(d + other, register_base, offset))
++ FPU_abort;
++ RE_ENTRANT_CHECK_ON;
+
+- finit();
++ finit();
+ }
+
+ /*===========================================================================*/
+diff --git a/arch/x86/math-emu/reg_mul.c b/arch/x86/math-emu/reg_mul.c
+index 40f50b6..36c37f7 100644
+--- a/arch/x86/math-emu/reg_mul.c
++++ b/arch/x86/math-emu/reg_mul.c
+@@ -20,7 +20,6 @@
+ #include "reg_constant.h"
+ #include "fpu_system.h"
+
+-
+ /*
+ Multiply two registers to give a register result.
+ The sources are st(deststnr) and (b,tagb,signb).
+@@ -29,104 +28,88 @@
+ /* This routine must be called with non-empty source registers */
+ int FPU_mul(FPU_REG const *b, u_char tagb, int deststnr, int control_w)
+ {
+- FPU_REG *a = &st(deststnr);
+- FPU_REG *dest = a;
+- u_char taga = FPU_gettagi(deststnr);
+- u_char saved_sign = getsign(dest);
+- u_char sign = (getsign(a) ^ getsign(b));
+- int tag;
+-
++ FPU_REG *a = &st(deststnr);
++ FPU_REG *dest = a;
++ u_char taga = FPU_gettagi(deststnr);
++ u_char saved_sign = getsign(dest);
++ u_char sign = (getsign(a) ^ getsign(b));
++ int tag;
+
+- if ( !(taga | tagb) )
+- {
+- /* Both regs Valid, this should be the most common case. */
++ if (!(taga | tagb)) {
++ /* Both regs Valid, this should be the most common case. */
+
+- tag = FPU_u_mul(a, b, dest, control_w, sign, exponent(a) + exponent(b));
+- if ( tag < 0 )
+- {
+- setsign(dest, saved_sign);
+- return tag;
++ tag =
++ FPU_u_mul(a, b, dest, control_w, sign,
++ exponent(a) + exponent(b));
++ if (tag < 0) {
++ setsign(dest, saved_sign);
++ return tag;
++ }
++ FPU_settagi(deststnr, tag);
++ return tag;
+ }
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
+
+- if ( taga == TAG_Special )
+- taga = FPU_Special(a);
+- if ( tagb == TAG_Special )
+- tagb = FPU_Special(b);
++ if (taga == TAG_Special)
++ taga = FPU_Special(a);
++ if (tagb == TAG_Special)
++ tagb = FPU_Special(b);
+
+- if ( ((taga == TAG_Valid) && (tagb == TW_Denormal))
++ if (((taga == TAG_Valid) && (tagb == TW_Denormal))
+ || ((taga == TW_Denormal) && (tagb == TAG_Valid))
+- || ((taga == TW_Denormal) && (tagb == TW_Denormal)) )
+- {
+- FPU_REG x, y;
+- if ( denormal_operand() < 0 )
+- return FPU_Exception;
+-
+- FPU_to_exp16(a, &x);
+- FPU_to_exp16(b, &y);
+- tag = FPU_u_mul(&x, &y, dest, control_w, sign,
+- exponent16(&x) + exponent16(&y));
+- if ( tag < 0 )
+- {
+- setsign(dest, saved_sign);
+- return tag;
+- }
+- FPU_settagi(deststnr, tag);
+- return tag;
+- }
+- else if ( (taga <= TW_Denormal) && (tagb <= TW_Denormal) )
+- {
+- if ( ((tagb == TW_Denormal) || (taga == TW_Denormal))
+- && (denormal_operand() < 0) )
+- return FPU_Exception;
++ || ((taga == TW_Denormal) && (tagb == TW_Denormal))) {
++ FPU_REG x, y;
++ if (denormal_operand() < 0)
++ return FPU_Exception;
+
+- /* Must have either both arguments == zero, or
+- one valid and the other zero.
+- The result is therefore zero. */
+- FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
+- /* The 80486 book says that the answer is +0, but a real
+- 80486 behaves this way.
+- IEEE-754 apparently says it should be this way. */
+- setsign(dest, sign);
+- return TAG_Zero;
+- }
+- /* Must have infinities, NaNs, etc */
+- else if ( (taga == TW_NaN) || (tagb == TW_NaN) )
+- {
+- return real_2op_NaN(b, tagb, deststnr, &st(0));
+- }
+- else if ( ((taga == TW_Infinity) && (tagb == TAG_Zero))
+- || ((tagb == TW_Infinity) && (taga == TAG_Zero)) )
+- {
+- return arith_invalid(deststnr); /* Zero*Infinity is invalid */
+- }
+- else if ( ((taga == TW_Denormal) || (tagb == TW_Denormal))
+- && (denormal_operand() < 0) )
+- {
+- return FPU_Exception;
+- }
+- else if (taga == TW_Infinity)
+- {
+- FPU_copy_to_regi(a, TAG_Special, deststnr);
+- setsign(dest, sign);
+- return TAG_Special;
+- }
+- else if (tagb == TW_Infinity)
+- {
+- FPU_copy_to_regi(b, TAG_Special, deststnr);
+- setsign(dest, sign);
+- return TAG_Special;
+- }
++ FPU_to_exp16(a, &x);
++ FPU_to_exp16(b, &y);
++ tag = FPU_u_mul(&x, &y, dest, control_w, sign,
++ exponent16(&x) + exponent16(&y));
++ if (tag < 0) {
++ setsign(dest, saved_sign);
++ return tag;
++ }
++ FPU_settagi(deststnr, tag);
++ return tag;
++ } else if ((taga <= TW_Denormal) && (tagb <= TW_Denormal)) {
++ if (((tagb == TW_Denormal) || (taga == TW_Denormal))
++ && (denormal_operand() < 0))
++ return FPU_Exception;
+
++ /* Must have either both arguments == zero, or
++ one valid and the other zero.
++ The result is therefore zero. */
++ FPU_copy_to_regi(&CONST_Z, TAG_Zero, deststnr);
++ /* The 80486 book says that the answer is +0, but a real
++ 80486 behaves this way.
++ IEEE-754 apparently says it should be this way. */
++ setsign(dest, sign);
++ return TAG_Zero;
++ }
++ /* Must have infinities, NaNs, etc */
++ else if ((taga == TW_NaN) || (tagb == TW_NaN)) {
++ return real_2op_NaN(b, tagb, deststnr, &st(0));
++ } else if (((taga == TW_Infinity) && (tagb == TAG_Zero))
++ || ((tagb == TW_Infinity) && (taga == TAG_Zero))) {
++ return arith_invalid(deststnr); /* Zero*Infinity is invalid */
++ } else if (((taga == TW_Denormal) || (tagb == TW_Denormal))
++ && (denormal_operand() < 0)) {
++ return FPU_Exception;
++ } else if (taga == TW_Infinity) {
++ FPU_copy_to_regi(a, TAG_Special, deststnr);
++ setsign(dest, sign);
++ return TAG_Special;
++ } else if (tagb == TW_Infinity) {
++ FPU_copy_to_regi(b, TAG_Special, deststnr);
++ setsign(dest, sign);
++ return TAG_Special;
++ }
+ #ifdef PARANOID
+- else
+- {
+- EXCEPTION(EX_INTERNAL|0x102);
+- return FPU_Exception;
+- }
+-#endif /* PARANOID */
++ else {
++ EXCEPTION(EX_INTERNAL | 0x102);
++ return FPU_Exception;
++ }
++#endif /* PARANOID */
+
+ return 0;
+ }
+diff --git a/arch/x86/math-emu/status_w.h b/arch/x86/math-emu/status_w.h
+index 59e7330..54a3f22 100644
+--- a/arch/x86/math-emu/status_w.h
++++ b/arch/x86/math-emu/status_w.h
+@@ -10,7 +10,7 @@
+ #ifndef _STATUS_H_
+ #define _STATUS_H_
+
+-#include "fpu_emu.h" /* for definition of PECULIAR_486 */
++#include "fpu_emu.h" /* for definition of PECULIAR_486 */
+
+ #ifdef __ASSEMBLY__
+ #define Const__(x) $##x
+@@ -34,7 +34,7 @@
+ #define SW_Denorm_Op Const__(0x0002) /* denormalized operand */
+ #define SW_Invalid Const__(0x0001) /* invalid operation */
+
+-#define SW_Exc_Mask Const__(0x27f) /* Status word exception bit mask */
++#define SW_Exc_Mask Const__(0x27f) /* Status word exception bit mask */
+
+ #ifndef __ASSEMBLY__
+
+@@ -50,8 +50,8 @@
+ ((partial_status & ~SW_Top & 0xffff) | ((top << SW_Top_Shift) & SW_Top))
+ static inline void setcc(int cc)
+ {
+- partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3);
+- partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3);
++ partial_status &= ~(SW_C0 | SW_C1 | SW_C2 | SW_C3);
++ partial_status |= (cc) & (SW_C0 | SW_C1 | SW_C2 | SW_C3);
+ }
+
+ #ifdef PECULIAR_486
+diff --git a/arch/x86/mm/Makefile_32 b/arch/x86/mm/Makefile_32
+index 362b4ad..c36ae88 100644
+--- a/arch/x86/mm/Makefile_32
++++ b/arch/x86/mm/Makefile_32
+@@ -2,9 +2,8 @@
+ # Makefile for the linux i386-specific parts of the memory manager.
+ #
+
+-obj-y := init_32.o pgtable_32.o fault_32.o ioremap_32.o extable_32.o pageattr_32.o mmap_32.o
++obj-y := init_32.o pgtable_32.o fault.o ioremap.o extable.o pageattr.o mmap.o
+
+ obj-$(CONFIG_NUMA) += discontig_32.o
+ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
+ obj-$(CONFIG_HIGHMEM) += highmem_32.o
+-obj-$(CONFIG_BOOT_IOREMAP) += boot_ioremap_32.o
+diff --git a/arch/x86/mm/Makefile_64 b/arch/x86/mm/Makefile_64
+index 6bcb479..688c8c2 100644
+--- a/arch/x86/mm/Makefile_64
++++ b/arch/x86/mm/Makefile_64
+@@ -2,9 +2,8 @@
+ # Makefile for the linux x86_64-specific parts of the memory manager.
+ #
+
+-obj-y := init_64.o fault_64.o ioremap_64.o extable_64.o pageattr_64.o mmap_64.o
++obj-y := init_64.o fault.o ioremap.o extable.o pageattr.o mmap.o
+ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
+ obj-$(CONFIG_NUMA) += numa_64.o
+ obj-$(CONFIG_K8_NUMA) += k8topology_64.o
+ obj-$(CONFIG_ACPI_NUMA) += srat_64.o
+-
+diff --git a/arch/x86/mm/boot_ioremap_32.c b/arch/x86/mm/boot_ioremap_32.c
+deleted file mode 100644
+index f14da2a..0000000
+--- a/arch/x86/mm/boot_ioremap_32.c
++++ /dev/null
+@@ -1,100 +0,0 @@
+-/*
+- * arch/i386/mm/boot_ioremap.c
+- *
+- * Re-map functions for early boot-time before paging_init() when the
+- * boot-time pagetables are still in use
+- *
+- * Written by Dave Hansen <haveblue at us.ibm.com>
+- */
+-
+-
+-/*
+- * We need to use the 2-level pagetable functions, but CONFIG_X86_PAE
+- * keeps that from happening. If anyone has a better way, I'm listening.
+- *
+- * boot_pte_t is defined only if this all works correctly
+- */
+-
+-#undef CONFIG_X86_PAE
+-#undef CONFIG_PARAVIRT
+-#include <asm/page.h>
+-#include <asm/pgtable.h>
+-#include <asm/tlbflush.h>
+-#include <linux/init.h>
+-#include <linux/stddef.h>
+-
+-/*
+- * I'm cheating here. It is known that the two boot PTE pages are
+- * allocated next to each other. I'm pretending that they're just
+- * one big array.
+- */
+-
+-#define BOOT_PTE_PTRS (PTRS_PER_PTE*2)
+-
+-static unsigned long boot_pte_index(unsigned long vaddr)
+-{
+- return __pa(vaddr) >> PAGE_SHIFT;
+-}
+-
+-static inline boot_pte_t* boot_vaddr_to_pte(void *address)
+-{
+- boot_pte_t* boot_pg = (boot_pte_t*)pg0;
+- return &boot_pg[boot_pte_index((unsigned long)address)];
+-}
+-
+-/*
+- * This is only for a caller who is clever enough to page-align
+- * phys_addr and virtual_source, and who also has a preference
+- * about which virtual address from which to steal ptes
+- */
+-static void __boot_ioremap(unsigned long phys_addr, unsigned long nrpages,
+- void* virtual_source)
+-{
+- boot_pte_t* pte;
+- int i;
+- char *vaddr = virtual_source;
+-
+- pte = boot_vaddr_to_pte(virtual_source);
+- for (i=0; i < nrpages; i++, phys_addr += PAGE_SIZE, pte++) {
+- set_pte(pte, pfn_pte(phys_addr>>PAGE_SHIFT, PAGE_KERNEL));
+- __flush_tlb_one(&vaddr[i*PAGE_SIZE]);
+- }
+-}
+-
+-/* the virtual space we're going to remap comes from this array */
+-#define BOOT_IOREMAP_PAGES 4
+-#define BOOT_IOREMAP_SIZE (BOOT_IOREMAP_PAGES*PAGE_SIZE)
+-static __initdata char boot_ioremap_space[BOOT_IOREMAP_SIZE]
+- __attribute__ ((aligned (PAGE_SIZE)));
+-
+-/*
+- * This only applies to things which need to ioremap before paging_init()
+- * bt_ioremap() and plain ioremap() are both useless at this point.
+- *
+- * When used, we're still using the boot-time pagetables, which only
+- * have 2 PTE pages mapping the first 8MB
+- *
+- * There is no unmap. The boot-time PTE pages aren't used after boot.
+- * If you really want the space back, just remap it yourself.
+- * boot_ioremap(&ioremap_space-PAGE_OFFSET, BOOT_IOREMAP_SIZE)
+- */
+-__init void* boot_ioremap(unsigned long phys_addr, unsigned long size)
+-{
+- unsigned long last_addr, offset;
+- unsigned int nrpages;
+-
+- last_addr = phys_addr + size - 1;
+-
+- /* page align the requested address */
+- offset = phys_addr & ~PAGE_MASK;
+- phys_addr &= PAGE_MASK;
+- size = PAGE_ALIGN(last_addr) - phys_addr;
+-
+- nrpages = size >> PAGE_SHIFT;
+- if (nrpages > BOOT_IOREMAP_PAGES)
+- return NULL;
+-
+- __boot_ioremap(phys_addr, nrpages, boot_ioremap_space);
+-
+- return &boot_ioremap_space[offset];
+-}
+diff --git a/arch/x86/mm/discontig_32.c b/arch/x86/mm/discontig_32.c
+index 13a474d..04b1d20 100644
+--- a/arch/x86/mm/discontig_32.c
++++ b/arch/x86/mm/discontig_32.c
+@@ -32,6 +32,7 @@
+ #include <linux/kexec.h>
+ #include <linux/pfn.h>
+ #include <linux/swap.h>
++#include <linux/acpi.h>
+
+ #include <asm/e820.h>
+ #include <asm/setup.h>
+@@ -103,14 +104,10 @@ extern unsigned long highend_pfn, highstart_pfn;
+
+ #define LARGE_PAGE_BYTES (PTRS_PER_PTE * PAGE_SIZE)
+
+-static unsigned long node_remap_start_pfn[MAX_NUMNODES];
+ unsigned long node_remap_size[MAX_NUMNODES];
+-static unsigned long node_remap_offset[MAX_NUMNODES];
+ static void *node_remap_start_vaddr[MAX_NUMNODES];
+ void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
+
+-static void *node_remap_end_vaddr[MAX_NUMNODES];
+-static void *node_remap_alloc_vaddr[MAX_NUMNODES];
+ static unsigned long kva_start_pfn;
+ static unsigned long kva_pages;
+ /*
+@@ -167,6 +164,22 @@ static void __init allocate_pgdat(int nid)
+ }
+ }
+
++#ifdef CONFIG_DISCONTIGMEM
++/*
++ * In the discontig memory model, a portion of the kernel virtual area (KVA)
++ * is reserved and portions of nodes are mapped using it. This is to allow
++ * node-local memory to be allocated for structures that would normally require
++ * ZONE_NORMAL. The memory is allocated with alloc_remap() and callers
++ * should be prepared to allocate from the bootmem allocator instead. This KVA
++ * mechanism is incompatible with SPARSEMEM as it makes assumptions about the
++ * layout of memory that are broken if alloc_remap() succeeds for some of the
++ * map and fails for others.
++ */
++static unsigned long node_remap_start_pfn[MAX_NUMNODES];
++static void *node_remap_end_vaddr[MAX_NUMNODES];
++static void *node_remap_alloc_vaddr[MAX_NUMNODES];
++static unsigned long node_remap_offset[MAX_NUMNODES];
+
-+static int crypto_rfc3686_init_tfm(struct crypto_tfm *tfm)
+ void *alloc_remap(int nid, unsigned long size)
+ {
+ void *allocation = node_remap_alloc_vaddr[nid];
+@@ -263,11 +276,46 @@ static unsigned long calculate_numa_remap_pages(void)
+ return reserve_pages;
+ }
+
++static void init_remap_allocator(int nid)
+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
-+ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_blkcipher *cipher;
-+
-+ cipher = crypto_spawn_blkcipher(spawn);
-+ if (IS_ERR(cipher))
-+ return PTR_ERR(cipher);
-+
-+ ctx->child = cipher;
++ node_remap_start_vaddr[nid] = pfn_to_kaddr(
++ kva_start_pfn + node_remap_offset[nid]);
++ node_remap_end_vaddr[nid] = node_remap_start_vaddr[nid] +
++ (node_remap_size[nid] * PAGE_SIZE);
++ node_remap_alloc_vaddr[nid] = node_remap_start_vaddr[nid] +
++ ALIGN(sizeof(pg_data_t), PAGE_SIZE);
+
-+ return 0;
++ printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
++ (ulong) node_remap_start_vaddr[nid],
++ (ulong) pfn_to_kaddr(highstart_pfn
++ + node_remap_offset[nid] + node_remap_size[nid]));
+}
-+
-+static void crypto_rfc3686_exit_tfm(struct crypto_tfm *tfm)
++#else
++void *alloc_remap(int nid, unsigned long size)
+{
-+ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(tfm);
-+
-+ crypto_free_blkcipher(ctx->child);
++ return NULL;
+}
+
-+static struct crypto_instance *crypto_rfc3686_alloc(struct rtattr **tb)
++static unsigned long calculate_numa_remap_pages(void)
+{
-+ struct crypto_instance *inst;
-+ struct crypto_alg *alg;
-+ int err;
-+
-+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
-+ if (err)
-+ return ERR_PTR(err);
-+
-+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_BLKCIPHER,
-+ CRYPTO_ALG_TYPE_MASK);
-+ err = PTR_ERR(alg);
-+ if (IS_ERR(alg))
-+ return ERR_PTR(err);
-+
-+ /* We only support 16-byte blocks. */
-+ err = -EINVAL;
-+ if (alg->cra_blkcipher.ivsize != CTR_RFC3686_BLOCK_SIZE)
-+ goto out_put_alg;
-+
-+ /* Not a stream cipher? */
-+ if (alg->cra_blocksize != 1)
-+ goto out_put_alg;
-+
-+ inst = crypto_alloc_instance("rfc3686", alg);
-+ if (IS_ERR(inst))
-+ goto out;
-+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = alg->cra_alignmask;
-+ inst->alg.cra_type = &crypto_blkcipher_type;
-+
-+ inst->alg.cra_blkcipher.ivsize = CTR_RFC3686_IV_SIZE;
-+ inst->alg.cra_blkcipher.min_keysize = alg->cra_blkcipher.min_keysize
-+ + CTR_RFC3686_NONCE_SIZE;
-+ inst->alg.cra_blkcipher.max_keysize = alg->cra_blkcipher.max_keysize
-+ + CTR_RFC3686_NONCE_SIZE;
-+
-+ inst->alg.cra_blkcipher.geniv = "seqiv";
-+
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc3686_ctx);
-+
-+ inst->alg.cra_init = crypto_rfc3686_init_tfm;
-+ inst->alg.cra_exit = crypto_rfc3686_exit_tfm;
-+
-+ inst->alg.cra_blkcipher.setkey = crypto_rfc3686_setkey;
-+ inst->alg.cra_blkcipher.encrypt = crypto_rfc3686_crypt;
-+ inst->alg.cra_blkcipher.decrypt = crypto_rfc3686_crypt;
-+
-+out:
-+ crypto_mod_put(alg);
-+ return inst;
-+
-+out_put_alg:
-+ inst = ERR_PTR(err);
-+ goto out;
++ return 0;
+}
+
-+static struct crypto_template crypto_rfc3686_tmpl = {
-+ .name = "rfc3686",
-+ .alloc = crypto_rfc3686_alloc,
-+ .free = crypto_ctr_free,
-+ .module = THIS_MODULE,
-+};
-+
-+static int __init crypto_ctr_module_init(void)
++static void init_remap_allocator(int nid)
+{
-+ int err;
-+
-+ err = crypto_register_template(&crypto_ctr_tmpl);
-+ if (err)
-+ goto out;
-+
-+ err = crypto_register_template(&crypto_rfc3686_tmpl);
-+ if (err)
-+ goto out_drop_ctr;
-+
-+out:
-+ return err;
-+
-+out_drop_ctr:
-+ crypto_unregister_template(&crypto_ctr_tmpl);
-+ goto out;
+}
+
-+static void __exit crypto_ctr_module_exit(void)
++void __init remap_numa_kva(void)
+{
-+ crypto_unregister_template(&crypto_rfc3686_tmpl);
-+ crypto_unregister_template(&crypto_ctr_tmpl);
+}
++#endif /* CONFIG_DISCONTIGMEM */
+
-+module_init(crypto_ctr_module_init);
-+module_exit(crypto_ctr_module_exit);
-+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("CTR Counter block mode");
-+MODULE_ALIAS("rfc3686");
-diff --git a/crypto/des_generic.c b/crypto/des_generic.c
-index 59966d1..355ecb7 100644
---- a/crypto/des_generic.c
-+++ b/crypto/des_generic.c
-@@ -20,13 +20,7 @@
- #include <linux/crypto.h>
- #include <linux/types.h>
-
--#define DES_KEY_SIZE 8
--#define DES_EXPKEY_WORDS 32
--#define DES_BLOCK_SIZE 8
--
--#define DES3_EDE_KEY_SIZE (3 * DES_KEY_SIZE)
--#define DES3_EDE_EXPKEY_WORDS (3 * DES_EXPKEY_WORDS)
--#define DES3_EDE_BLOCK_SIZE DES_BLOCK_SIZE
-+#include <crypto/des.h>
-
- #define ROL(x, r) ((x) = rol32((x), (r)))
- #define ROR(x, r) ((x) = ror32((x), (r)))
-@@ -634,7 +628,7 @@ static const u32 S8[64] = {
- * Choice 1 has operated on the key.
- *
- */
--static unsigned long ekey(u32 *pe, const u8 *k)
-+unsigned long des_ekey(u32 *pe, const u8 *k)
+ extern void setup_bootmem_allocator(void);
+ unsigned long __init setup_memory(void)
{
- /* K&R: long is at least 32 bits */
- unsigned long a, b, c, d, w;
-@@ -709,6 +703,7 @@ static unsigned long ekey(u32 *pe, const u8 *k)
- /* Zero if weak key */
- return w;
- }
-+EXPORT_SYMBOL_GPL(des_ekey);
+ int nid;
+ unsigned long system_start_pfn, system_max_low_pfn;
++ unsigned long wasted_pages;
- /*
- * Decryption key expansion
-@@ -792,7 +787,7 @@ static int des_setkey(struct crypto_tfm *tfm, const u8 *key,
- int ret;
+ /*
+ * When mapping a NUMA machine we allocate the node_mem_map arrays
+@@ -288,11 +336,18 @@ unsigned long __init setup_memory(void)
- /* Expand to tmp */
-- ret = ekey(tmp, key);
-+ ret = des_ekey(tmp, key);
+ #ifdef CONFIG_BLK_DEV_INITRD
+ /* Numa kva area is below the initrd */
+- if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image)
+- kva_start_pfn = PFN_DOWN(boot_params.hdr.ramdisk_image)
++ if (initrd_start)
++ kva_start_pfn = PFN_DOWN(initrd_start - PAGE_OFFSET)
+ - kva_pages;
+ #endif
+- kva_start_pfn -= kva_start_pfn & (PTRS_PER_PTE-1);
++
++ /*
++	 * We waste pages at the end of the KVA for no good reason other
++	 * than where it is located. This is bad.
++ */
++ wasted_pages = kva_start_pfn & (PTRS_PER_PTE-1);
++ kva_start_pfn -= wasted_pages;
++ kva_pages += wasted_pages;
+
+ system_max_low_pfn = max_low_pfn = find_max_low_pfn();
+ printk("kva_start_pfn ~ %ld find_max_low_pfn() ~ %ld\n",
+@@ -318,19 +373,9 @@ unsigned long __init setup_memory(void)
+ printk("Low memory ends at vaddr %08lx\n",
+ (ulong) pfn_to_kaddr(max_low_pfn));
+ for_each_online_node(nid) {
+- node_remap_start_vaddr[nid] = pfn_to_kaddr(
+- kva_start_pfn + node_remap_offset[nid]);
+- /* Init the node remap allocator */
+- node_remap_end_vaddr[nid] = node_remap_start_vaddr[nid] +
+- (node_remap_size[nid] * PAGE_SIZE);
+- node_remap_alloc_vaddr[nid] = node_remap_start_vaddr[nid] +
+- ALIGN(sizeof(pg_data_t), PAGE_SIZE);
++ init_remap_allocator(nid);
- if (unlikely(ret == 0) && (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
- *flags |= CRYPTO_TFM_RES_WEAK_KEY;
-@@ -879,9 +874,9 @@ static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
- return -EINVAL;
+ allocate_pgdat(nid);
+- printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
+- (ulong) node_remap_start_vaddr[nid],
+- (ulong) pfn_to_kaddr(highstart_pfn
+- + node_remap_offset[nid] + node_remap_size[nid]));
}
+ printk("High memory starts at vaddr %08lx\n",
+ (ulong) pfn_to_kaddr(highstart_pfn));
+@@ -345,7 +390,8 @@ unsigned long __init setup_memory(void)
-- ekey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
-+ des_ekey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
- dkey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
-- ekey(expkey, key);
-+ des_ekey(expkey, key);
-
- return 0;
+ void __init numa_kva_reserve(void)
+ {
+- reserve_bootmem(PFN_PHYS(kva_start_pfn),PFN_PHYS(kva_pages));
++ if (kva_pages)
++ reserve_bootmem(PFN_PHYS(kva_start_pfn), PFN_PHYS(kva_pages));
}
-diff --git a/crypto/digest.c b/crypto/digest.c
-index 8871dec..6fd43bd 100644
---- a/crypto/digest.c
-+++ b/crypto/digest.c
-@@ -12,6 +12,7 @@
- *
- */
-+#include <crypto/scatterwalk.h>
- #include <linux/mm.h>
- #include <linux/errno.h>
- #include <linux/hardirq.h>
-@@ -20,9 +21,6 @@
- #include <linux/module.h>
- #include <linux/scatterlist.h>
+ void __init zone_sizes_init(void)
+@@ -430,3 +476,29 @@ int memory_add_physaddr_to_nid(u64 addr)
--#include "internal.h"
--#include "scatterwalk.h"
--
- static int init(struct hash_desc *desc)
- {
- struct crypto_tfm *tfm = crypto_hash_tfm(desc->tfm);
-diff --git a/crypto/eseqiv.c b/crypto/eseqiv.c
-new file mode 100644
-index 0000000..eb90d27
---- /dev/null
-+++ b/crypto/eseqiv.c
-@@ -0,0 +1,264 @@
+ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+ #endif
++
++#ifndef CONFIG_HAVE_ARCH_PARSE_SRAT
+/*
-+ * eseqiv: Encrypted Sequence Number IV Generator
-+ *
-+ * This generator generates an IV based on a sequence number by xoring it
-+ * with a salt and then encrypting it with the same key as used to encrypt
-+ * the plain text. This algorithm requires that the block size be equal
-+ * to the IV size. It is mainly useful for CBC.
-+ *
-+ * Copyright (c) 2007 Herbert Xu <herbert at gondor.apana.org.au>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
++ * XXX FIXME: Make SLIT table parsing available to 32-bit NUMA
+ *
++ * These stub functions are needed to compile 32-bit NUMA when SRAT is
++ * not set. There are functions in srat_64.c for parsing this table
++ * and it may be possible to make them common functions.
+ */
++void acpi_numa_slit_init (struct acpi_table_slit *slit)
++{
++ printk(KERN_INFO "ACPI: No support for parsing SLIT table\n");
++}
+
-+#include <crypto/internal/skcipher.h>
-+#include <crypto/scatterwalk.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/mm.h>
-+#include <linux/module.h>
-+#include <linux/random.h>
-+#include <linux/scatterlist.h>
-+#include <linux/spinlock.h>
-+#include <linux/string.h>
-+
-+struct eseqiv_request_ctx {
-+ struct scatterlist src[2];
-+ struct scatterlist dst[2];
-+ char tail[];
-+};
-+
-+struct eseqiv_ctx {
-+ spinlock_t lock;
-+ unsigned int reqoff;
-+ char salt[];
-+};
-+
-+static void eseqiv_complete2(struct skcipher_givcrypt_request *req)
++void acpi_numa_processor_affinity_init (struct acpi_srat_cpu_affinity *pa)
+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct eseqiv_request_ctx *reqctx = skcipher_givcrypt_reqctx(req);
++}
+
-+ memcpy(req->giv, PTR_ALIGN((u8 *)reqctx->tail,
-+ crypto_ablkcipher_alignmask(geniv) + 1),
-+ crypto_ablkcipher_ivsize(geniv));
++void acpi_numa_memory_affinity_init (struct acpi_srat_mem_affinity *ma)
++{
+}
+
-+static void eseqiv_complete(struct crypto_async_request *base, int err)
++void acpi_numa_arch_fixup(void)
+{
-+ struct skcipher_givcrypt_request *req = base->data;
++}
++#endif /* CONFIG_HAVE_ARCH_PARSE_SRAT */
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+new file mode 100644
+index 0000000..7e8db53
+--- /dev/null
++++ b/arch/x86/mm/extable.c
+@@ -0,0 +1,62 @@
++#include <linux/module.h>
++#include <linux/spinlock.h>
++#include <asm/uaccess.h>
+
-+ if (err)
-+ goto out;
+
-+ eseqiv_complete2(req);
++int fixup_exception(struct pt_regs *regs)
++{
++ const struct exception_table_entry *fixup;
+
-+out:
-+ skcipher_givcrypt_complete(req, err);
-+}
++#ifdef CONFIG_PNPBIOS
++ if (unlikely(SEGMENT_IS_PNP_CODE(regs->cs))) {
++ extern u32 pnp_bios_fault_eip, pnp_bios_fault_esp;
++ extern u32 pnp_bios_is_utter_crap;
++ pnp_bios_is_utter_crap = 1;
++ printk(KERN_CRIT "PNPBIOS fault.. attempting recovery.\n");
++ __asm__ volatile(
++ "movl %0, %%esp\n\t"
++ "jmp *%1\n\t"
++ : : "g" (pnp_bios_fault_esp), "g" (pnp_bios_fault_eip));
++ panic("do_trap: can't hit this");
++ }
++#endif
+
-+static void eseqiv_chain(struct scatterlist *head, struct scatterlist *sg,
-+ int chain)
-+{
-+ if (chain) {
-+ head->length += sg->length;
-+ sg = scatterwalk_sg_next(sg);
++ fixup = search_exception_tables(regs->ip);
++ if (fixup) {
++ regs->ip = fixup->fixup;
++ return 1;
+ }
+
-+ if (sg)
-+ scatterwalk_sg_chain(head, 2, sg);
-+ else
-+ sg_mark_end(head);
++ return 0;
+}
+
-+static int eseqiv_givencrypt(struct skcipher_givcrypt_request *req)
++#ifdef CONFIG_X86_64
++/*
++ * Need to define our own search_extable on X86_64 to work around
++ * a B stepping K8 bug.
++ */
++const struct exception_table_entry *
++search_extable(const struct exception_table_entry *first,
++ const struct exception_table_entry *last,
++ unsigned long value)
+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ struct eseqiv_request_ctx *reqctx = skcipher_givcrypt_reqctx(req);
-+ struct ablkcipher_request *subreq;
-+ crypto_completion_t complete;
-+ void *data;
-+ struct scatterlist *osrc, *odst;
-+ struct scatterlist *dst;
-+ struct page *srcp;
-+ struct page *dstp;
-+ u8 *giv;
-+ u8 *vsrc;
-+ u8 *vdst;
-+ __be64 seq;
-+ unsigned int ivsize;
-+ unsigned int len;
-+ int err;
++ /* B stepping K8 bug */
++ if ((value >> 32) == 0)
++ value |= 0xffffffffUL << 32;
+
-+ subreq = (void *)(reqctx->tail + ctx->reqoff);
-+ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++ while (first <= last) {
++ const struct exception_table_entry *mid;
++ long diff;
+
-+ giv = req->giv;
-+ complete = req->creq.base.complete;
-+ data = req->creq.base.data;
++ mid = (last - first) / 2 + first;
++ diff = mid->insn - value;
++ if (diff == 0)
++ return mid;
++ else if (diff < 0)
++ first = mid+1;
++ else
++ last = mid-1;
++ }
++ return NULL;
++}
++#endif
+diff --git a/arch/x86/mm/extable_32.c b/arch/x86/mm/extable_32.c
+deleted file mode 100644
+index 0ce4f22..0000000
+--- a/arch/x86/mm/extable_32.c
++++ /dev/null
+@@ -1,35 +0,0 @@
+-/*
+- * linux/arch/i386/mm/extable.c
+- */
+-
+-#include <linux/module.h>
+-#include <linux/spinlock.h>
+-#include <asm/uaccess.h>
+-
+-int fixup_exception(struct pt_regs *regs)
+-{
+- const struct exception_table_entry *fixup;
+-
+-#ifdef CONFIG_PNPBIOS
+- if (unlikely(SEGMENT_IS_PNP_CODE(regs->xcs)))
+- {
+- extern u32 pnp_bios_fault_eip, pnp_bios_fault_esp;
+- extern u32 pnp_bios_is_utter_crap;
+- pnp_bios_is_utter_crap = 1;
+- printk(KERN_CRIT "PNPBIOS fault.. attempting recovery.\n");
+- __asm__ volatile(
+- "movl %0, %%esp\n\t"
+- "jmp *%1\n\t"
+- : : "g" (pnp_bios_fault_esp), "g" (pnp_bios_fault_eip));
+- panic("do_trap: can't hit this");
+- }
+-#endif
+-
+- fixup = search_exception_tables(regs->eip);
+- if (fixup) {
+- regs->eip = fixup->fixup;
+- return 1;
+- }
+-
+- return 0;
+-}
+diff --git a/arch/x86/mm/extable_64.c b/arch/x86/mm/extable_64.c
+deleted file mode 100644
+index 79ac6e7..0000000
+--- a/arch/x86/mm/extable_64.c
++++ /dev/null
+@@ -1,34 +0,0 @@
+-/*
+- * linux/arch/x86_64/mm/extable.c
+- */
+-
+-#include <linux/module.h>
+-#include <linux/spinlock.h>
+-#include <linux/init.h>
+-#include <asm/uaccess.h>
+-
+-/* Simple binary search */
+-const struct exception_table_entry *
+-search_extable(const struct exception_table_entry *first,
+- const struct exception_table_entry *last,
+- unsigned long value)
+-{
+- /* Work around a B stepping K8 bug */
+- if ((value >> 32) == 0)
+- value |= 0xffffffffUL << 32;
+-
+- while (first <= last) {
+- const struct exception_table_entry *mid;
+- long diff;
+-
+- mid = (last - first) / 2 + first;
+- diff = mid->insn - value;
+- if (diff == 0)
+- return mid;
+- else if (diff < 0)
+- first = mid+1;
+- else
+- last = mid-1;
+- }
+- return NULL;
+-}
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+new file mode 100644
+index 0000000..e28cc52
+--- /dev/null
++++ b/arch/x86/mm/fault.c
+@@ -0,0 +1,986 @@
++/*
++ * Copyright (C) 1995 Linus Torvalds
++ * Copyright (C) 2001,2002 Andi Kleen, SuSE Labs.
++ */
+
-+ osrc = req->creq.src;
-+ odst = req->creq.dst;
-+ srcp = sg_page(osrc);
-+ dstp = sg_page(odst);
-+ vsrc = PageHighMem(srcp) ? NULL : page_address(srcp) + osrc->offset;
-+ vdst = PageHighMem(dstp) ? NULL : page_address(dstp) + odst->offset;
++#include <linux/signal.h>
++#include <linux/sched.h>
++#include <linux/kernel.h>
++#include <linux/errno.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/ptrace.h>
++#include <linux/mman.h>
++#include <linux/mm.h>
++#include <linux/smp.h>
++#include <linux/interrupt.h>
++#include <linux/init.h>
++#include <linux/tty.h>
++#include <linux/vt_kern.h> /* For unblank_screen() */
++#include <linux/compiler.h>
++#include <linux/highmem.h>
++#include <linux/bootmem.h> /* for max_low_pfn */
++#include <linux/vmalloc.h>
++#include <linux/module.h>
++#include <linux/kprobes.h>
++#include <linux/uaccess.h>
++#include <linux/kdebug.h>
+
-+ ivsize = crypto_ablkcipher_ivsize(geniv);
++#include <asm/system.h>
++#include <asm/desc.h>
++#include <asm/segment.h>
++#include <asm/pgalloc.h>
++#include <asm/smp.h>
++#include <asm/tlbflush.h>
++#include <asm/proto.h>
++#include <asm-generic/sections.h>
+
-+ if (vsrc != giv + ivsize && vdst != giv + ivsize) {
-+ giv = PTR_ALIGN((u8 *)reqctx->tail,
-+ crypto_ablkcipher_alignmask(geniv) + 1);
-+ complete = eseqiv_complete;
-+ data = req;
-+ }
++/*
++ * Page fault error code bits
++ * bit 0 == 0 means no page found, 1 means protection fault
++ * bit 1 == 0 means read, 1 means write
++ * bit 2 == 0 means kernel, 1 means user-mode
++ * bit 3 == 1 means use of reserved bit detected
++ * bit 4 == 1 means fault was an instruction fetch
++ */
++#define PF_PROT (1<<0)
++#define PF_WRITE (1<<1)
++#define PF_USER (1<<2)
++#define PF_RSVD (1<<3)
++#define PF_INSTR (1<<4)
+
-+ ablkcipher_request_set_callback(subreq, req->creq.base.flags, complete,
-+ data);
++static inline int notify_page_fault(struct pt_regs *regs)
++{
++#ifdef CONFIG_KPROBES
++ int ret = 0;
+
-+ sg_init_table(reqctx->src, 2);
-+ sg_set_buf(reqctx->src, giv, ivsize);
-+ eseqiv_chain(reqctx->src, osrc, vsrc == giv + ivsize);
++ /* kprobe_running() needs smp_processor_id() */
++#ifdef CONFIG_X86_32
++ if (!user_mode_vm(regs)) {
++#else
++ if (!user_mode(regs)) {
++#endif
++ preempt_disable();
++ if (kprobe_running() && kprobe_fault_handler(regs, 14))
++ ret = 1;
++ preempt_enable();
++ }
+
-+ dst = reqctx->src;
-+ if (osrc != odst) {
-+ sg_init_table(reqctx->dst, 2);
-+ sg_set_buf(reqctx->dst, giv, ivsize);
-+ eseqiv_chain(reqctx->dst, odst, vdst == giv + ivsize);
++ return ret;
++#else
++ return 0;
++#endif
++}
+
-+ dst = reqctx->dst;
-+ }
++/*
++ * X86_32
++ * Sometimes AMD Athlon/Opteron CPUs report invalid exceptions on prefetch.
++ * Check that here and ignore it.
++ *
++ * X86_64
++ * Sometimes the CPU reports invalid exceptions on prefetch.
++ * Check that here and ignore it.
++ *
++ * Opcode checker based on code by Richard Brunner
++ */
++static int is_prefetch(struct pt_regs *regs, unsigned long addr,
++ unsigned long error_code)
++{
++ unsigned char *instr;
++ int scan_more = 1;
++ int prefetch = 0;
++ unsigned char *max_instr;
+
-+ ablkcipher_request_set_crypt(subreq, reqctx->src, dst,
-+ req->creq.nbytes, req->creq.info);
++#ifdef CONFIG_X86_32
++ if (!(__supported_pte_mask & _PAGE_NX))
++ return 0;
++#endif
+
-+ memcpy(req->creq.info, ctx->salt, ivsize);
++	/* If it was an exec fault on an NX page, ignore */
++ if (error_code & PF_INSTR)
++ return 0;
+
-+ len = ivsize;
-+ if (ivsize > sizeof(u64)) {
-+ memset(req->giv, 0, ivsize - sizeof(u64));
-+ len = sizeof(u64);
-+ }
-+ seq = cpu_to_be64(req->seq);
-+ memcpy(req->giv + ivsize - len, &seq, len);
++ instr = (unsigned char *)convert_ip_to_linear(current, regs);
++ max_instr = instr + 15;
+
-+ err = crypto_ablkcipher_encrypt(subreq);
-+ if (err)
-+ goto out;
++ if (user_mode(regs) && instr >= (unsigned char *)TASK_SIZE)
++ return 0;
+
-+ eseqiv_complete2(req);
++ while (scan_more && instr < max_instr) {
++ unsigned char opcode;
++ unsigned char instr_hi;
++ unsigned char instr_lo;
+
-+out:
-+ return err;
-+}
++ if (probe_kernel_address(instr, opcode))
++ break;
+
-+static int eseqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
-+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ instr_hi = opcode & 0xf0;
++ instr_lo = opcode & 0x0f;
++ instr++;
+
-+ spin_lock_bh(&ctx->lock);
-+ if (crypto_ablkcipher_crt(geniv)->givencrypt != eseqiv_givencrypt_first)
-+ goto unlock;
++ switch (instr_hi) {
++ case 0x20:
++ case 0x30:
++ /*
++ * Values 0x26,0x2E,0x36,0x3E are valid x86 prefixes.
++ * In X86_64 long mode, the CPU will signal invalid
++ * opcode if some of these prefixes are present so
++ * X86_64 will never get here anyway
++ */
++ scan_more = ((instr_lo & 7) == 0x6);
++ break;
++#ifdef CONFIG_X86_64
++ case 0x40:
++ /*
++ * In AMD64 long mode 0x40..0x4F are valid REX prefixes
++ * Need to figure out under what instruction mode the
++ * instruction was issued. Could check the LDT for lm,
++ * but for now it's good enough to assume that long
++ * mode only uses well known segments or kernel.
++ */
++ scan_more = (!user_mode(regs)) || (regs->cs == __USER_CS);
++ break;
++#endif
++ case 0x60:
++ /* 0x64 thru 0x67 are valid prefixes in all modes. */
++ scan_more = (instr_lo & 0xC) == 0x4;
++ break;
++ case 0xF0:
++ /* 0xF0, 0xF2, 0xF3 are valid prefixes in all modes. */
++ scan_more = !instr_lo || (instr_lo>>1) == 1;
++ break;
++ case 0x00:
++ /* Prefetch instruction is 0x0F0D or 0x0F18 */
++ scan_more = 0;
+
-+ crypto_ablkcipher_crt(geniv)->givencrypt = eseqiv_givencrypt;
-+ get_random_bytes(ctx->salt, crypto_ablkcipher_ivsize(geniv));
++ if (probe_kernel_address(instr, opcode))
++ break;
++ prefetch = (instr_lo == 0xF) &&
++ (opcode == 0x0D || opcode == 0x18);
++ break;
++ default:
++ scan_more = 0;
++ break;
++ }
++ }
++ return prefetch;
++}
+
-+unlock:
-+ spin_unlock_bh(&ctx->lock);
++static void force_sig_info_fault(int si_signo, int si_code,
++ unsigned long address, struct task_struct *tsk)
++{
++ siginfo_t info;
+
-+ return eseqiv_givencrypt(req);
++ info.si_signo = si_signo;
++ info.si_errno = 0;
++ info.si_code = si_code;
++ info.si_addr = (void __user *)address;
++ force_sig_info(si_signo, &info, tsk);
+}
+
-+static int eseqiv_init(struct crypto_tfm *tfm)
++#ifdef CONFIG_X86_64
++static int bad_address(void *p)
+{
-+ struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
-+ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ unsigned long alignmask;
-+ unsigned int reqsize;
++ unsigned long dummy;
++ return probe_kernel_address((unsigned long *)p, dummy);
++}
++#endif
+
-+ spin_lock_init(&ctx->lock);
++void dump_pagetable(unsigned long address)
++{
++#ifdef CONFIG_X86_32
++ __typeof__(pte_val(__pte(0))) page;
+
-+ alignmask = crypto_tfm_ctx_alignment() - 1;
-+ reqsize = sizeof(struct eseqiv_request_ctx);
++ page = read_cr3();
++ page = ((__typeof__(page) *) __va(page))[address >> PGDIR_SHIFT];
++#ifdef CONFIG_X86_PAE
++ printk("*pdpt = %016Lx ", page);
++ if ((page >> PAGE_SHIFT) < max_low_pfn
++ && page & _PAGE_PRESENT) {
++ page &= PAGE_MASK;
++ page = ((__typeof__(page) *) __va(page))[(address >> PMD_SHIFT)
++ & (PTRS_PER_PMD - 1)];
++ printk(KERN_CONT "*pde = %016Lx ", page);
++ page &= ~_PAGE_NX;
++ }
++#else
++ printk("*pde = %08lx ", page);
++#endif
+
-+ if (alignmask & reqsize) {
-+ alignmask &= reqsize;
-+ alignmask--;
++ /*
++ * We must not directly access the pte in the highpte
++ * case if the page table is located in highmem.
++ * And let's rather not kmap-atomic the pte, just in case
++ * it's allocated already.
++ */
++ if ((page >> PAGE_SHIFT) < max_low_pfn
++ && (page & _PAGE_PRESENT)
++ && !(page & _PAGE_PSE)) {
++ page &= PAGE_MASK;
++ page = ((__typeof__(page) *) __va(page))[(address >> PAGE_SHIFT)
++ & (PTRS_PER_PTE - 1)];
++ printk("*pte = %0*Lx ", sizeof(page)*2, (u64)page);
+ }
+
-+ alignmask = ~alignmask;
-+ alignmask &= crypto_ablkcipher_alignmask(geniv);
++ printk("\n");
++#else /* CONFIG_X86_64 */
++ pgd_t *pgd;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
+
-+ reqsize += alignmask;
-+ reqsize += crypto_ablkcipher_ivsize(geniv);
-+ reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
++ pgd = (pgd_t *)read_cr3();
+
-+ ctx->reqoff = reqsize - sizeof(struct eseqiv_request_ctx);
++ pgd = __va((unsigned long)pgd & PHYSICAL_PAGE_MASK);
++ pgd += pgd_index(address);
++ if (bad_address(pgd)) goto bad;
++ printk("PGD %lx ", pgd_val(*pgd));
++ if (!pgd_present(*pgd)) goto ret;
+
-+ tfm->crt_ablkcipher.reqsize = reqsize +
-+ sizeof(struct ablkcipher_request);
++ pud = pud_offset(pgd, address);
++ if (bad_address(pud)) goto bad;
++ printk("PUD %lx ", pud_val(*pud));
++ if (!pud_present(*pud)) goto ret;
+
-+ return skcipher_geniv_init(tfm);
-+}
++ pmd = pmd_offset(pud, address);
++ if (bad_address(pmd)) goto bad;
++ printk("PMD %lx ", pmd_val(*pmd));
++ if (!pmd_present(*pmd) || pmd_large(*pmd)) goto ret;
+
-+static struct crypto_template eseqiv_tmpl;
++ pte = pte_offset_kernel(pmd, address);
++ if (bad_address(pte)) goto bad;
++ printk("PTE %lx", pte_val(*pte));
++ret:
++ printk("\n");
++ return;
++bad:
++ printk("BAD\n");
++#endif
++}
+
-+static struct crypto_instance *eseqiv_alloc(struct rtattr **tb)
++#ifdef CONFIG_X86_32
++static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
+{
-+ struct crypto_instance *inst;
-+ int err;
-+
-+ inst = skcipher_geniv_alloc(&eseqiv_tmpl, tb, 0, 0);
-+ if (IS_ERR(inst))
-+ goto out;
-+
-+ err = -EINVAL;
-+ if (inst->alg.cra_ablkcipher.ivsize != inst->alg.cra_blocksize)
-+ goto free_inst;
++ unsigned index = pgd_index(address);
++ pgd_t *pgd_k;
++ pud_t *pud, *pud_k;
++ pmd_t *pmd, *pmd_k;
+
-+ inst->alg.cra_ablkcipher.givencrypt = eseqiv_givencrypt_first;
++ pgd += index;
++ pgd_k = init_mm.pgd + index;
+
-+ inst->alg.cra_init = eseqiv_init;
-+ inst->alg.cra_exit = skcipher_geniv_exit;
++ if (!pgd_present(*pgd_k))
++ return NULL;
+
-+ inst->alg.cra_ctxsize = sizeof(struct eseqiv_ctx);
-+ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++ /*
++ * set_pgd(pgd, *pgd_k); here would be useless on PAE
++ * and redundant with the set_pmd() on non-PAE. As would
++ * set_pud.
++ */
+
-+out:
-+ return inst;
++ pud = pud_offset(pgd, address);
++ pud_k = pud_offset(pgd_k, address);
++ if (!pud_present(*pud_k))
++ return NULL;
+
-+free_inst:
-+ skcipher_geniv_free(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
++ pmd = pmd_offset(pud, address);
++ pmd_k = pmd_offset(pud_k, address);
++ if (!pmd_present(*pmd_k))
++ return NULL;
++ if (!pmd_present(*pmd)) {
++ set_pmd(pmd, *pmd_k);
++ arch_flush_lazy_mmu_mode();
++ } else
++ BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
++ return pmd_k;
+}
++#endif
+
-+static struct crypto_template eseqiv_tmpl = {
-+ .name = "eseqiv",
-+ .alloc = eseqiv_alloc,
-+ .free = skcipher_geniv_free,
-+ .module = THIS_MODULE,
-+};
-+
-+static int __init eseqiv_module_init(void)
-+{
-+ return crypto_register_template(&eseqiv_tmpl);
-+}
++#ifdef CONFIG_X86_64
++static const char errata93_warning[] =
++KERN_ERR "******* Your BIOS seems to not contain a fix for K8 errata #93\n"
++KERN_ERR "******* Working around it, but it may cause SEGVs or burn power.\n"
++KERN_ERR "******* Please consider a BIOS update.\n"
++KERN_ERR "******* Disabling USB legacy in the BIOS may also help.\n";
++#endif
+
-+static void __exit eseqiv_module_exit(void)
++/* Workaround for K8 erratum #93 & buggy BIOS.
++ BIOS SMM functions are required to use a specific workaround
++ to avoid corruption of the 64bit RIP register on C stepping K8.
++   A lot of BIOSes that didn't get tested properly miss this.
++ The OS sees this as a page fault with the upper 32bits of RIP cleared.
++ Try to work around it here.
++ Note we only handle faults in kernel here.
++ Does nothing for X86_32
++ */
++static int is_errata93(struct pt_regs *regs, unsigned long address)
+{
-+ crypto_unregister_template(&eseqiv_tmpl);
++#ifdef CONFIG_X86_64
++ static int warned;
++ if (address != regs->ip)
++ return 0;
++ if ((address >> 32) != 0)
++ return 0;
++ address |= 0xffffffffUL << 32;
++ if ((address >= (u64)_stext && address <= (u64)_etext) ||
++ (address >= MODULES_VADDR && address <= MODULES_END)) {
++ if (!warned) {
++ printk(errata93_warning);
++ warned = 1;
++ }
++ regs->ip = address;
++ return 1;
++ }
++#endif
++ return 0;
+}
+
-+module_init(eseqiv_module_init);
-+module_exit(eseqiv_module_exit);
-+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Encrypted Sequence Number IV Generator");
-diff --git a/crypto/gcm.c b/crypto/gcm.c
-new file mode 100644
-index 0000000..e70afd0
---- /dev/null
-+++ b/crypto/gcm.c
-@@ -0,0 +1,823 @@
+/*
-+ * GCM: Galois/Counter Mode.
-+ *
-+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1 at iki.fi>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License version 2 as published
-+ * by the Free Software Foundation.
-+ */
++ * Work around K8 erratum #100: K8 in compat mode occasionally jumps to illegal
++ * addresses >4GB. We catch this in the page fault handler because these
++ * addresses are not reachable. Just detect this case and return. Any code
++ * segment in LDT is compatibility mode.
++ */
++static int is_errata100(struct pt_regs *regs, unsigned long address)
++{
++#ifdef CONFIG_X86_64
++ if ((regs->cs == __USER32_CS || (regs->cs & (1<<2))) &&
++ (address >> 32))
++ return 1;
++#endif
++ return 0;
++}
+
-+#include <crypto/gf128mul.h>
-+#include <crypto/internal/aead.h>
-+#include <crypto/internal/skcipher.h>
-+#include <crypto/scatterwalk.h>
-+#include <linux/completion.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/slab.h>
++void do_invalid_op(struct pt_regs *, unsigned long);
+
-+struct gcm_instance_ctx {
-+ struct crypto_skcipher_spawn ctr;
-+};
++static int is_f00f_bug(struct pt_regs *regs, unsigned long address)
++{
++#ifdef CONFIG_X86_F00F_BUG
++ unsigned long nr;
++ /*
++ * Pentium F0 0F C7 C8 bug workaround.
++ */
++ if (boot_cpu_data.f00f_bug) {
++ nr = (address - idt_descr.address) >> 3;
+
-+struct crypto_gcm_ctx {
-+ struct crypto_ablkcipher *ctr;
-+ struct gf128mul_4k *gf128;
-+};
++ if (nr == 6) {
++ do_invalid_op(regs, 0);
++ return 1;
++ }
++ }
++#endif
++ return 0;
++}
+
-+struct crypto_rfc4106_ctx {
-+ struct crypto_aead *child;
-+ u8 nonce[4];
-+};
++static void show_fault_oops(struct pt_regs *regs, unsigned long error_code,
++ unsigned long address)
++{
++#ifdef CONFIG_X86_32
++ if (!oops_may_print())
++ return;
++#endif
+
-+struct crypto_gcm_ghash_ctx {
-+ u32 bytes;
-+ u32 flags;
-+ struct gf128mul_4k *gf128;
-+ u8 buffer[16];
-+};
++#ifdef CONFIG_X86_PAE
++ if (error_code & PF_INSTR) {
++ int level;
++ pte_t *pte = lookup_address(address, &level);
+
-+struct crypto_gcm_req_priv_ctx {
-+ u8 auth_tag[16];
-+ u8 iauth_tag[16];
-+ struct scatterlist src[2];
-+ struct scatterlist dst[2];
-+ struct crypto_gcm_ghash_ctx ghash;
-+ struct ablkcipher_request abreq;
-+};
++ if (pte && pte_present(*pte) && !pte_exec(*pte))
++ printk(KERN_CRIT "kernel tried to execute "
++ "NX-protected page - exploit attempt? "
++ "(uid: %d)\n", current->uid);
++ }
++#endif
+
-+struct crypto_gcm_setkey_result {
-+ int err;
-+ struct completion completion;
-+};
++ printk(KERN_ALERT "BUG: unable to handle kernel ");
++ if (address < PAGE_SIZE)
++ printk(KERN_CONT "NULL pointer dereference");
++ else
++ printk(KERN_CONT "paging request");
++#ifdef CONFIG_X86_32
++ printk(KERN_CONT " at %08lx\n", address);
++#else
++ printk(KERN_CONT " at %016lx\n", address);
++#endif
++ printk(KERN_ALERT "IP:");
++ printk_address(regs->ip, 1);
++ dump_pagetable(address);
++}
+
-+static inline struct crypto_gcm_req_priv_ctx *crypto_gcm_reqctx(
-+ struct aead_request *req)
++#ifdef CONFIG_X86_64
++static noinline void pgtable_bad(unsigned long address, struct pt_regs *regs,
++ unsigned long error_code)
+{
-+ unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req));
++ unsigned long flags = oops_begin();
++ struct task_struct *tsk;
+
-+ return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
++ printk(KERN_ALERT "%s: Corrupted page table at address %lx\n",
++ current->comm, address);
++ dump_pagetable(address);
++ tsk = current;
++ tsk->thread.cr2 = address;
++ tsk->thread.trap_no = 14;
++ tsk->thread.error_code = error_code;
++ if (__die("Bad pagetable", regs, error_code))
++ regs = NULL;
++ oops_end(flags, regs, SIGKILL);
+}
++#endif
+
-+static void crypto_gcm_ghash_init(struct crypto_gcm_ghash_ctx *ctx, u32 flags,
-+ struct gf128mul_4k *gf128)
++/*
++ * Handle a spurious fault caused by a stale TLB entry. This allows
++ * us to lazily refresh the TLB when increasing the permissions of a
++ * kernel page (RO -> RW or NX -> X). Doing it eagerly is very
++ * expensive since that implies doing a full cross-processor TLB
++ * flush, even if no stale TLB entries exist on other processors.
++ * There are no security implications to leaving a stale TLB when
++ * increasing the permissions on a page.
++ */
++static int spurious_fault(unsigned long address,
++ unsigned long error_code)
+{
-+ ctx->bytes = 0;
-+ ctx->flags = flags;
-+ ctx->gf128 = gf128;
-+ memset(ctx->buffer, 0, 16);
-+}
++ pgd_t *pgd;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
+
-+static void crypto_gcm_ghash_update(struct crypto_gcm_ghash_ctx *ctx,
-+ const u8 *src, unsigned int srclen)
-+{
-+ u8 *dst = ctx->buffer;
++ /* Reserved-bit violation or user access to kernel space? */
++ if (error_code & (PF_USER | PF_RSVD))
++ return 0;
+
-+ if (ctx->bytes) {
-+ int n = min(srclen, ctx->bytes);
-+ u8 *pos = dst + (16 - ctx->bytes);
++ pgd = init_mm.pgd + pgd_index(address);
++ if (!pgd_present(*pgd))
++ return 0;
+
-+ ctx->bytes -= n;
-+ srclen -= n;
++ pud = pud_offset(pgd, address);
++ if (!pud_present(*pud))
++ return 0;
+
-+ while (n--)
-+ *pos++ ^= *src++;
++ pmd = pmd_offset(pud, address);
++ if (!pmd_present(*pmd))
++ return 0;
+
-+ if (!ctx->bytes)
-+ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
-+ }
++ pte = pte_offset_kernel(pmd, address);
++ if (!pte_present(*pte))
++ return 0;
+
-+ while (srclen >= 16) {
-+ crypto_xor(dst, src, 16);
-+ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
-+ src += 16;
-+ srclen -= 16;
-+ }
++ if ((error_code & PF_WRITE) && !pte_write(*pte))
++ return 0;
++ if ((error_code & PF_INSTR) && !pte_exec(*pte))
++ return 0;
+
-+ if (srclen) {
-+ ctx->bytes = 16 - srclen;
-+ while (srclen--)
-+ *dst++ ^= *src++;
-+ }
++ return 1;
+}
+
-+static void crypto_gcm_ghash_update_sg(struct crypto_gcm_ghash_ctx *ctx,
-+ struct scatterlist *sg, int len)
++/*
++ * X86_32
++ * Handle a fault on the vmalloc or module mapping area
++ *
++ * X86_64
++ * Handle a fault on the vmalloc area
++ *
++ * This assumes no large pages in there.
++ */
++static int vmalloc_fault(unsigned long address)
+{
-+ struct scatter_walk walk;
-+ u8 *src;
-+ int n;
++#ifdef CONFIG_X86_32
++ unsigned long pgd_paddr;
++ pmd_t *pmd_k;
++ pte_t *pte_k;
++ /*
++ * Synchronize this task's top level page-table
++ * with the 'reference' page table.
++ *
++ * Do _not_ use "current" here. We might be inside
++ * an interrupt in the middle of a task switch..
++ */
++ pgd_paddr = read_cr3();
++ pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
++ if (!pmd_k)
++ return -1;
++ pte_k = pte_offset_kernel(pmd_k, address);
++ if (!pte_present(*pte_k))
++ return -1;
++ return 0;
++#else
++ pgd_t *pgd, *pgd_ref;
++ pud_t *pud, *pud_ref;
++ pmd_t *pmd, *pmd_ref;
++ pte_t *pte, *pte_ref;
++
++ /* Copy kernel mappings over when needed. This can also
++ happen within a race in page table update. In the later
++ case just flush. */
++
++ pgd = pgd_offset(current->mm ?: &init_mm, address);
++ pgd_ref = pgd_offset_k(address);
++ if (pgd_none(*pgd_ref))
++ return -1;
++ if (pgd_none(*pgd))
++ set_pgd(pgd, *pgd_ref);
++ else
++ BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+
-+ if (!len)
-+ return;
++ /* Below here mismatches are bugs because these lower tables
++ are shared */
+
-+ scatterwalk_start(&walk, sg);
++ pud = pud_offset(pgd, address);
++ pud_ref = pud_offset(pgd_ref, address);
++ if (pud_none(*pud_ref))
++ return -1;
++ if (pud_none(*pud) || pud_page_vaddr(*pud) != pud_page_vaddr(*pud_ref))
++ BUG();
++ pmd = pmd_offset(pud, address);
++ pmd_ref = pmd_offset(pud_ref, address);
++ if (pmd_none(*pmd_ref))
++ return -1;
++ if (pmd_none(*pmd) || pmd_page(*pmd) != pmd_page(*pmd_ref))
++ BUG();
++ pte_ref = pte_offset_kernel(pmd_ref, address);
++ if (!pte_present(*pte_ref))
++ return -1;
++ pte = pte_offset_kernel(pmd, address);
++ /* Don't use pte_page here, because the mappings can point
++ outside mem_map, and the NUMA hash lookup cannot handle
++ that. */
++ if (!pte_present(*pte) || pte_pfn(*pte) != pte_pfn(*pte_ref))
++ BUG();
++ return 0;
++#endif
++}
+
-+ while (len) {
-+ n = scatterwalk_clamp(&walk, len);
++int show_unhandled_signals = 1;
+
-+ if (!n) {
-+ scatterwalk_start(&walk, scatterwalk_sg_next(walk.sg));
-+ n = scatterwalk_clamp(&walk, len);
-+ }
++/*
++ * This routine handles page faults. It determines the address,
++ * and the problem, and then passes it off to one of the appropriate
++ * routines.
++ */
++#ifdef CONFIG_X86_64
++asmlinkage
++#endif
++void __kprobes do_page_fault(struct pt_regs *regs, unsigned long error_code)
++{
++ struct task_struct *tsk;
++ struct mm_struct *mm;
++ struct vm_area_struct *vma;
++ unsigned long address;
++ int write, si_code;
++ int fault;
++#ifdef CONFIG_X86_64
++ unsigned long flags;
++#endif
+
-+ src = scatterwalk_map(&walk, 0);
++ /*
++ * We can fault from pretty much anywhere, with unknown IRQ state.
++ */
++ trace_hardirqs_fixup();
+
-+ crypto_gcm_ghash_update(ctx, src, n);
-+ len -= n;
++ tsk = current;
++ mm = tsk->mm;
++ prefetchw(&mm->mmap_sem);
+
-+ scatterwalk_unmap(src, 0);
-+ scatterwalk_advance(&walk, n);
-+ scatterwalk_done(&walk, 0, len);
-+ if (len)
-+ crypto_yield(ctx->flags);
-+ }
-+}
++ /* get the address */
++ address = read_cr2();
+
-+static void crypto_gcm_ghash_flush(struct crypto_gcm_ghash_ctx *ctx)
-+{
-+ u8 *dst = ctx->buffer;
++ si_code = SEGV_MAPERR;
+
-+ if (ctx->bytes) {
-+ u8 *tmp = dst + (16 - ctx->bytes);
++ if (notify_page_fault(regs))
++ return;
+
-+ while (ctx->bytes--)
-+ *tmp++ ^= 0;
++ /*
++ * We fault-in kernel-space virtual memory on-demand. The
++ * 'reference' page table is init_mm.pgd.
++ *
++ * NOTE! We MUST NOT take any locks for this case. We may
++ * be in an interrupt or a critical region, and should
++ * only copy the information from the master page table,
++ * nothing more.
++ *
++ * This verifies that the fault happens in kernel space
++ * (error_code & 4) == 0, and that the fault was not a
++ * protection error (error_code & 9) == 0.
++ */
++#ifdef CONFIG_X86_32
++ if (unlikely(address >= TASK_SIZE)) {
++ if (!(error_code & (PF_RSVD|PF_USER|PF_PROT)) &&
++ vmalloc_fault(address) >= 0)
++ return;
+
-+ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
++ /* Can handle a stale RO->RW TLB */
++ if (spurious_fault(address, error_code))
++ return;
++
++ /*
++ * Don't take the mm semaphore here. If we fixup a prefetch
++ * fault we could otherwise deadlock.
++ */
++ goto bad_area_nosemaphore;
+ }
+
-+ ctx->bytes = 0;
-+}
++ /* It's safe to allow irq's after cr2 has been saved and the vmalloc
++ fault has been handled. */
++ if (regs->flags & (X86_EFLAGS_IF|VM_MASK))
++ local_irq_enable();
+
-+static void crypto_gcm_ghash_final_xor(struct crypto_gcm_ghash_ctx *ctx,
-+ unsigned int authlen,
-+ unsigned int cryptlen, u8 *dst)
-+{
-+ u8 *buf = ctx->buffer;
-+ u128 lengths;
++ /*
++ * If we're in an interrupt, have no user context or are running in an
++ * atomic region then we must not take the fault.
++ */
++ if (in_atomic() || !mm)
++ goto bad_area_nosemaphore;
++#else /* CONFIG_X86_64 */
++ if (unlikely(address >= TASK_SIZE64)) {
++ /*
++ * Don't check for the module range here: its PML4
++ * is always initialized because it's shared with the main
++ * kernel text. Only vmalloc may need PML4 syncups.
++ */
++ if (!(error_code & (PF_RSVD|PF_USER|PF_PROT)) &&
++ ((address >= VMALLOC_START && address < VMALLOC_END))) {
++ if (vmalloc_fault(address) >= 0)
++ return;
++ }
+
-+ lengths.a = cpu_to_be64(authlen * 8);
-+ lengths.b = cpu_to_be64(cryptlen * 8);
++ /* Can handle a stale RO->RW TLB */
++ if (spurious_fault(address, error_code))
++ return;
+
-+ crypto_gcm_ghash_flush(ctx);
-+ crypto_xor(buf, (u8 *)&lengths, 16);
-+ gf128mul_4k_lle((be128 *)buf, ctx->gf128);
-+ crypto_xor(dst, buf, 16);
-+}
++ /*
++ * Don't take the mm semaphore here. If we fixup a prefetch
++ * fault we could otherwise deadlock.
++ */
++ goto bad_area_nosemaphore;
++ }
++ if (likely(regs->flags & X86_EFLAGS_IF))
++ local_irq_enable();
+
-+static void crypto_gcm_setkey_done(struct crypto_async_request *req, int err)
-+{
-+ struct crypto_gcm_setkey_result *result = req->data;
++ if (unlikely(error_code & PF_RSVD))
++ pgtable_bad(address, regs, error_code);
+
-+ if (err == -EINPROGRESS)
-+ return;
++ /*
++ * If we're in an interrupt, have no user context or are running in an
++ * atomic region then we must not take the fault.
++ */
++ if (unlikely(in_atomic() || !mm))
++ goto bad_area_nosemaphore;
+
-+ result->err = err;
-+ complete(&result->completion);
-+}
++ /*
++ * User-mode registers count as a user access even for any
++ * potential system fault or CPU buglet.
++ */
++ if (user_mode_vm(regs))
++ error_code |= PF_USER;
++again:
++#endif
++ /* When running in the kernel we expect faults to occur only to
++ * addresses in user space. All other faults represent errors in the
++ * kernel and should generate an OOPS. Unfortunately, in the case of an
++ * erroneous fault occurring in a code path which already holds mmap_sem
++ * we will deadlock attempting to validate the fault against the
++ * address space. Luckily the kernel only validly references user
++ * space from well defined areas of code, which are listed in the
++ * exceptions table.
++ *
++ * As the vast majority of faults will be valid we will only perform
++ * the source reference check when there is a possibility of a deadlock.
++ * Attempt to lock the address space, if we cannot we then validate the
++ * source. If this is invalid we can skip the address space check,
++ * thus avoiding the deadlock.
++ */
++ if (!down_read_trylock(&mm->mmap_sem)) {
++ if ((error_code & PF_USER) == 0 &&
++ !search_exception_tables(regs->ip))
++ goto bad_area_nosemaphore;
++ down_read(&mm->mmap_sem);
++ }
+
-+static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
-+ unsigned int keylen)
-+{
-+ struct crypto_gcm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_ablkcipher *ctr = ctx->ctr;
-+ struct {
-+ be128 hash;
-+ u8 iv[8];
++ vma = find_vma(mm, address);
++ if (!vma)
++ goto bad_area;
++ if (vma->vm_start <= address)
++ goto good_area;
++ if (!(vma->vm_flags & VM_GROWSDOWN))
++ goto bad_area;
++ if (error_code & PF_USER) {
++ /*
++ * Accessing the stack below %sp is always a bug.
++ * The large cushion allows instructions like enter
++ * and pusha to work. ("enter $65535,$31" pushes
++ * 32 pointers and then decrements %sp by 65535.)
++ */
++ if (address + 65536 + 32 * sizeof(unsigned long) < regs->sp)
++ goto bad_area;
++ }
++ if (expand_stack(vma, address))
++ goto bad_area;
++/*
++ * Ok, we have a good vm_area for this memory access, so
++ * we can handle it..
++ */
++good_area:
++ si_code = SEGV_ACCERR;
++ write = 0;
++ switch (error_code & (PF_PROT|PF_WRITE)) {
++ default: /* 3: write, present */
++ /* fall through */
++ case PF_WRITE: /* write, not present */
++ if (!(vma->vm_flags & VM_WRITE))
++ goto bad_area;
++ write++;
++ break;
++ case PF_PROT: /* read, present */
++ goto bad_area;
++ case 0: /* read, not present */
++ if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
++ goto bad_area;
++ }
+
-+ struct crypto_gcm_setkey_result result;
++#ifdef CONFIG_X86_32
++survive:
++#endif
++ /*
++ * If for any reason at all we couldn't handle the fault,
++ * make sure we exit gracefully rather than endlessly redo
++ * the fault.
++ */
++ fault = handle_mm_fault(mm, vma, address, write);
++ if (unlikely(fault & VM_FAULT_ERROR)) {
++ if (fault & VM_FAULT_OOM)
++ goto out_of_memory;
++ else if (fault & VM_FAULT_SIGBUS)
++ goto do_sigbus;
++ BUG();
++ }
++ if (fault & VM_FAULT_MAJOR)
++ tsk->maj_flt++;
++ else
++ tsk->min_flt++;
+
-+ struct scatterlist sg[1];
-+ struct ablkcipher_request req;
-+ } *data;
-+ int err;
++#ifdef CONFIG_X86_32
++ /*
++ * Did it hit the DOS screen memory VA from vm86 mode?
++ */
++ if (v8086_mode(regs)) {
++ unsigned long bit = (address - 0xA0000) >> PAGE_SHIFT;
++ if (bit < 32)
++ tsk->thread.screen_bitmap |= 1 << bit;
++ }
++#endif
++ up_read(&mm->mmap_sem);
++ return;
+
-+ crypto_ablkcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK);
-+ crypto_ablkcipher_set_flags(ctr, crypto_aead_get_flags(aead) &
-+ CRYPTO_TFM_REQ_MASK);
++/*
++ * Something tried to access memory that isn't in our memory map..
++ * Fix it, but check if it's kernel or user first..
++ */
++bad_area:
++ up_read(&mm->mmap_sem);
+
-+ err = crypto_ablkcipher_setkey(ctr, key, keylen);
-+ if (err)
-+ return err;
++bad_area_nosemaphore:
++ /* User mode accesses just cause a SIGSEGV */
++ if (error_code & PF_USER) {
++ /*
++ * It's possible to have interrupts off here.
++ */
++ local_irq_enable();
+
-+ crypto_aead_set_flags(aead, crypto_ablkcipher_get_flags(ctr) &
-+ CRYPTO_TFM_RES_MASK);
++ /*
++ * Valid to do another page fault here because this one came
++ * from user space.
++ */
++ if (is_prefetch(regs, address, error_code))
++ return;
+
-+ data = kzalloc(sizeof(*data) + crypto_ablkcipher_reqsize(ctr),
-+ GFP_KERNEL);
-+ if (!data)
-+ return -ENOMEM;
++ if (is_errata100(regs, address))
++ return;
+
-+ init_completion(&data->result.completion);
-+ sg_init_one(data->sg, &data->hash, sizeof(data->hash));
-+ ablkcipher_request_set_tfm(&data->req, ctr);
-+ ablkcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
-+ CRYPTO_TFM_REQ_MAY_BACKLOG,
-+ crypto_gcm_setkey_done,
-+ &data->result);
-+ ablkcipher_request_set_crypt(&data->req, data->sg, data->sg,
-+ sizeof(data->hash), data->iv);
++ if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
++ printk_ratelimit()) {
++ printk(
++#ifdef CONFIG_X86_32
++ "%s%s[%d]: segfault at %lx ip %08lx sp %08lx error %lx",
++#else
++ "%s%s[%d]: segfault at %lx ip %lx sp %lx error %lx",
++#endif
++ task_pid_nr(tsk) > 1 ? KERN_INFO : KERN_EMERG,
++ tsk->comm, task_pid_nr(tsk), address, regs->ip,
++ regs->sp, error_code);
++ print_vma_addr(" in ", regs->ip);
++ printk("\n");
++ }
+
-+ err = crypto_ablkcipher_encrypt(&data->req);
-+ if (err == -EINPROGRESS || err == -EBUSY) {
-+ err = wait_for_completion_interruptible(
-+ &data->result.completion);
-+ if (!err)
-+ err = data->result.err;
++ tsk->thread.cr2 = address;
++ /* Kernel addresses are always protection faults */
++ tsk->thread.error_code = error_code | (address >= TASK_SIZE);
++ tsk->thread.trap_no = 14;
++ force_sig_info_fault(SIGSEGV, si_code, address, tsk);
++ return;
+ }
+
-+ if (err)
-+ goto out;
++ if (is_f00f_bug(regs, address))
++ return;
+
-+ if (ctx->gf128 != NULL)
-+ gf128mul_free_4k(ctx->gf128);
++no_context:
++ /* Are we prepared to handle this kernel fault? */
++ if (fixup_exception(regs))
++ return;
+
-+ ctx->gf128 = gf128mul_init_4k_lle(&data->hash);
++ /*
++ * X86_32
++ * Valid to do another page fault here, because if this fault
++ * had been triggered by is_prefetch fixup_exception would have
++ * handled it.
++ *
++ * X86_64
++ * Hall of shame of CPU/BIOS bugs.
++ */
++ if (is_prefetch(regs, address, error_code))
++ return;
+
-+ if (ctx->gf128 == NULL)
-+ err = -ENOMEM;
++ if (is_errata93(regs, address))
++ return;
+
-+out:
-+ kfree(data);
-+ return err;
-+}
++/*
++ * Oops. The kernel tried to access some bad page. We'll have to
++ * terminate things with extreme prejudice.
++ */
++#ifdef CONFIG_X86_32
++ bust_spinlocks(1);
++#else
++ flags = oops_begin();
++#endif
+
-+static int crypto_gcm_setauthsize(struct crypto_aead *tfm,
-+ unsigned int authsize)
-+{
-+ switch (authsize) {
-+ case 4:
-+ case 8:
-+ case 12:
-+ case 13:
-+ case 14:
-+ case 15:
-+ case 16:
-+ break;
-+ default:
-+ return -EINVAL;
++ show_fault_oops(regs, error_code, address);
++
++ tsk->thread.cr2 = address;
++ tsk->thread.trap_no = 14;
++ tsk->thread.error_code = error_code;
++
++#ifdef CONFIG_X86_32
++ die("Oops", regs, error_code);
++ bust_spinlocks(0);
++ do_exit(SIGKILL);
++#else
++ if (__die("Oops", regs, error_code))
++ regs = NULL;
++ /* Executive summary in case the body of the oops scrolled away */
++ printk(KERN_EMERG "CR2: %016lx\n", address);
++ oops_end(flags, regs, SIGKILL);
++#endif
++
++/*
++ * We ran out of memory, or some other thing happened to us that made
++ * us unable to handle the page fault gracefully.
++ */
++out_of_memory:
++ up_read(&mm->mmap_sem);
++ if (is_global_init(tsk)) {
++ yield();
++#ifdef CONFIG_X86_32
++ down_read(&mm->mmap_sem);
++ goto survive;
++#else
++ goto again;
++#endif
+ }
+
-+ return 0;
++ printk("VM: killing process %s\n", tsk->comm);
++ if (error_code & PF_USER)
++ do_group_exit(SIGKILL);
++ goto no_context;
++
++do_sigbus:
++ up_read(&mm->mmap_sem);
++
++ /* Kernel mode? Handle exceptions or die */
++ if (!(error_code & PF_USER))
++ goto no_context;
++#ifdef CONFIG_X86_32
++ /* User space => ok to do another page fault */
++ if (is_prefetch(regs, address, error_code))
++ return;
++#endif
++ tsk->thread.cr2 = address;
++ tsk->thread.error_code = error_code;
++ tsk->thread.trap_no = 14;
++ force_sig_info_fault(SIGBUS, BUS_ADRERR, address, tsk);
+}
+
-+static void crypto_gcm_init_crypt(struct ablkcipher_request *ablk_req,
-+ struct aead_request *req,
-+ unsigned int cryptlen)
++DEFINE_SPINLOCK(pgd_lock);
++LIST_HEAD(pgd_list);
++
++void vmalloc_sync_all(void)
+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_gcm_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
-+ u32 flags = req->base.tfm->crt_flags;
-+ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
-+ struct scatterlist *dst;
-+ __be32 counter = cpu_to_be32(1);
++#ifdef CONFIG_X86_32
++ /*
++ * Note that races in the updates of insync and start aren't
++ * problematic: insync can only get set bits added, and updates to
++ * start are only improving performance (without affecting correctness
++ * if undone).
++ */
++ static DECLARE_BITMAP(insync, PTRS_PER_PGD);
++ static unsigned long start = TASK_SIZE;
++ unsigned long address;
+
-+ memset(pctx->auth_tag, 0, sizeof(pctx->auth_tag));
-+ memcpy(req->iv + 12, &counter, 4);
++ if (SHARED_KERNEL_PMD)
++ return;
+
-+ sg_init_table(pctx->src, 2);
-+ sg_set_buf(pctx->src, pctx->auth_tag, sizeof(pctx->auth_tag));
-+ scatterwalk_sg_chain(pctx->src, 2, req->src);
++ BUILD_BUG_ON(TASK_SIZE & ~PGDIR_MASK);
++ for (address = start; address >= TASK_SIZE; address += PGDIR_SIZE) {
++ if (!test_bit(pgd_index(address), insync)) {
++ unsigned long flags;
++ struct page *page;
+
-+ dst = pctx->src;
-+ if (req->src != req->dst) {
-+ sg_init_table(pctx->dst, 2);
-+ sg_set_buf(pctx->dst, pctx->auth_tag, sizeof(pctx->auth_tag));
-+ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
-+ dst = pctx->dst;
++ spin_lock_irqsave(&pgd_lock, flags);
++ list_for_each_entry(page, &pgd_list, lru) {
++ if (!vmalloc_sync_one(page_address(page),
++ address))
++ break;
++ }
++ spin_unlock_irqrestore(&pgd_lock, flags);
++ if (!page)
++ set_bit(pgd_index(address), insync);
++ }
++ if (address == start && test_bit(pgd_index(address), insync))
++ start = address + PGDIR_SIZE;
++ }
++#else /* CONFIG_X86_64 */
++ /*
++ * Note that races in the updates of insync and start aren't
++ * problematic: insync can only get set bits added, and updates to
++ * start are only improving performance (without affecting correctness
++ * if undone).
++ */
++ static DECLARE_BITMAP(insync, PTRS_PER_PGD);
++ static unsigned long start = VMALLOC_START & PGDIR_MASK;
++ unsigned long address;
++
++ for (address = start; address <= VMALLOC_END; address += PGDIR_SIZE) {
++ if (!test_bit(pgd_index(address), insync)) {
++ const pgd_t *pgd_ref = pgd_offset_k(address);
++ struct page *page;
++
++ if (pgd_none(*pgd_ref))
++ continue;
++ spin_lock(&pgd_lock);
++ list_for_each_entry(page, &pgd_list, lru) {
++ pgd_t *pgd;
++ pgd = (pgd_t *)page_address(page) + pgd_index(address);
++ if (pgd_none(*pgd))
++ set_pgd(pgd, *pgd_ref);
++ else
++ BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
++ }
++ spin_unlock(&pgd_lock);
++ set_bit(pgd_index(address), insync);
++ }
++ if (address == start)
++ start = address + PGDIR_SIZE;
+ }
++ /* Check that there is no need to do the same for the modules area. */
++ BUILD_BUG_ON(!(MODULES_VADDR > __START_KERNEL));
++ BUILD_BUG_ON(!(((MODULES_END - 1) & PGDIR_MASK) ==
++ (__START_KERNEL & PGDIR_MASK)));
++#endif
++}
+diff --git a/arch/x86/mm/fault_32.c b/arch/x86/mm/fault_32.c
+deleted file mode 100644
+index a2273d4..0000000
+--- a/arch/x86/mm/fault_32.c
++++ /dev/null
+@@ -1,659 +0,0 @@
+-/*
+- * linux/arch/i386/mm/fault.c
+- *
+- * Copyright (C) 1995 Linus Torvalds
+- */
+-
+-#include <linux/signal.h>
+-#include <linux/sched.h>
+-#include <linux/kernel.h>
+-#include <linux/errno.h>
+-#include <linux/string.h>
+-#include <linux/types.h>
+-#include <linux/ptrace.h>
+-#include <linux/mman.h>
+-#include <linux/mm.h>
+-#include <linux/smp.h>
+-#include <linux/interrupt.h>
+-#include <linux/init.h>
+-#include <linux/tty.h>
+-#include <linux/vt_kern.h> /* For unblank_screen() */
+-#include <linux/highmem.h>
+-#include <linux/bootmem.h> /* for max_low_pfn */
+-#include <linux/vmalloc.h>
+-#include <linux/module.h>
+-#include <linux/kprobes.h>
+-#include <linux/uaccess.h>
+-#include <linux/kdebug.h>
+-#include <linux/kprobes.h>
+-
+-#include <asm/system.h>
+-#include <asm/desc.h>
+-#include <asm/segment.h>
+-
+-extern void die(const char *,struct pt_regs *,long);
+-
+-#ifdef CONFIG_KPROBES
+-static inline int notify_page_fault(struct pt_regs *regs)
+-{
+- int ret = 0;
+-
+- /* kprobe_running() needs smp_processor_id() */
+- if (!user_mode_vm(regs)) {
+- preempt_disable();
+- if (kprobe_running() && kprobe_fault_handler(regs, 14))
+- ret = 1;
+- preempt_enable();
+- }
+-
+- return ret;
+-}
+-#else
+-static inline int notify_page_fault(struct pt_regs *regs)
+-{
+- return 0;
+-}
+-#endif
+-
+-/*
+- * Return EIP plus the CS segment base. The segment limit is also
+- * adjusted, clamped to the kernel/user address space (whichever is
+- * appropriate), and returned in *eip_limit.
+- *
+- * The segment is checked, because it might have been changed by another
+- * task between the original faulting instruction and here.
+- *
+- * If CS is no longer a valid code segment, or if EIP is beyond the
+- * limit, or if it is a kernel address when CS is not a kernel segment,
+- * then the returned value will be greater than *eip_limit.
+- *
+- * This is slow, but is very rarely executed.
+- */
+-static inline unsigned long get_segment_eip(struct pt_regs *regs,
+- unsigned long *eip_limit)
+-{
+- unsigned long eip = regs->eip;
+- unsigned seg = regs->xcs & 0xffff;
+- u32 seg_ar, seg_limit, base, *desc;
+-
+- /* Unlikely, but must come before segment checks. */
+- if (unlikely(regs->eflags & VM_MASK)) {
+- base = seg << 4;
+- *eip_limit = base + 0xffff;
+- return base + (eip & 0xffff);
+- }
+-
+- /* The standard kernel/user address space limit. */
+- *eip_limit = user_mode(regs) ? USER_DS.seg : KERNEL_DS.seg;
+-
+- /* By far the most common cases. */
+- if (likely(SEGMENT_IS_FLAT_CODE(seg)))
+- return eip;
+-
+- /* Check the segment exists, is within the current LDT/GDT size,
+- that kernel/user (ring 0..3) has the appropriate privilege,
+- that it's a code segment, and get the limit. */
+- __asm__ ("larl %3,%0; lsll %3,%1"
+- : "=&r" (seg_ar), "=r" (seg_limit) : "0" (0), "rm" (seg));
+- if ((~seg_ar & 0x9800) || eip > seg_limit) {
+- *eip_limit = 0;
+- return 1; /* So that returned eip > *eip_limit. */
+- }
+-
+- /* Get the GDT/LDT descriptor base.
+- When you look for races in this code remember that
+- LDT and other horrors are only used in user space. */
+- if (seg & (1<<2)) {
+- /* Must lock the LDT while reading it. */
+-		mutex_lock(&current->mm->context.lock);
+- desc = current->mm->context.ldt;
+- desc = (void *)desc + (seg & ~7);
+- } else {
+- /* Must disable preemption while reading the GDT. */
+- desc = (u32 *)get_cpu_gdt_table(get_cpu());
+- desc = (void *)desc + (seg & ~7);
+- }
+-
+- /* Decode the code segment base from the descriptor */
+- base = get_desc_base((unsigned long *)desc);
+-
+- if (seg & (1<<2)) {
+-		mutex_unlock(&current->mm->context.lock);
+- } else
+- put_cpu();
+-
+- /* Adjust EIP and segment limit, and clamp at the kernel limit.
+- It's legitimate for segments to wrap at 0xffffffff. */
+- seg_limit += base;
+- if (seg_limit < *eip_limit && seg_limit >= base)
+- *eip_limit = seg_limit;
+- return eip + base;
+-}
+-
+-/*
+- * Sometimes AMD Athlon/Opteron CPUs report invalid exceptions on prefetch.
+- * Check that here and ignore it.
+- */
+-static int __is_prefetch(struct pt_regs *regs, unsigned long addr)
+-{
+- unsigned long limit;
+- unsigned char *instr = (unsigned char *)get_segment_eip (regs, &limit);
+- int scan_more = 1;
+- int prefetch = 0;
+- int i;
+-
+- for (i = 0; scan_more && i < 15; i++) {
+- unsigned char opcode;
+- unsigned char instr_hi;
+- unsigned char instr_lo;
+-
+- if (instr > (unsigned char *)limit)
+- break;
+- if (probe_kernel_address(instr, opcode))
+- break;
+-
+- instr_hi = opcode & 0xf0;
+- instr_lo = opcode & 0x0f;
+- instr++;
+-
+- switch (instr_hi) {
+- case 0x20:
+- case 0x30:
+- /* Values 0x26,0x2E,0x36,0x3E are valid x86 prefixes. */
+- scan_more = ((instr_lo & 7) == 0x6);
+- break;
+-
+- case 0x60:
+- /* 0x64 thru 0x67 are valid prefixes in all modes. */
+- scan_more = (instr_lo & 0xC) == 0x4;
+- break;
+- case 0xF0:
+- /* 0xF0, 0xF2, and 0xF3 are valid prefixes */
+- scan_more = !instr_lo || (instr_lo>>1) == 1;
+- break;
+- case 0x00:
+- /* Prefetch instruction is 0x0F0D or 0x0F18 */
+- scan_more = 0;
+- if (instr > (unsigned char *)limit)
+- break;
+- if (probe_kernel_address(instr, opcode))
+- break;
+- prefetch = (instr_lo == 0xF) &&
+- (opcode == 0x0D || opcode == 0x18);
+- break;
+- default:
+- scan_more = 0;
+- break;
+- }
+- }
+- return prefetch;
+-}
+-
+-static inline int is_prefetch(struct pt_regs *regs, unsigned long addr,
+- unsigned long error_code)
+-{
+- if (unlikely(boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
+- boot_cpu_data.x86 >= 6)) {
+- /* Catch an obscure case of prefetch inside an NX page. */
+- if (nx_enabled && (error_code & 16))
+- return 0;
+- return __is_prefetch(regs, addr);
+- }
+- return 0;
+-}
+-
+-static noinline void force_sig_info_fault(int si_signo, int si_code,
+- unsigned long address, struct task_struct *tsk)
+-{
+- siginfo_t info;
+-
+- info.si_signo = si_signo;
+- info.si_errno = 0;
+- info.si_code = si_code;
+- info.si_addr = (void __user *)address;
+- force_sig_info(si_signo, &info, tsk);
+-}
+-
+-fastcall void do_invalid_op(struct pt_regs *, unsigned long);
+-
+-static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
+-{
+- unsigned index = pgd_index(address);
+- pgd_t *pgd_k;
+- pud_t *pud, *pud_k;
+- pmd_t *pmd, *pmd_k;
+-
+- pgd += index;
+- pgd_k = init_mm.pgd + index;
+-
+- if (!pgd_present(*pgd_k))
+- return NULL;
+-
+- /*
+- * set_pgd(pgd, *pgd_k); here would be useless on PAE
+- * and redundant with the set_pmd() on non-PAE. As would
+- * set_pud.
+- */
+-
+- pud = pud_offset(pgd, address);
+- pud_k = pud_offset(pgd_k, address);
+- if (!pud_present(*pud_k))
+- return NULL;
+-
+- pmd = pmd_offset(pud, address);
+- pmd_k = pmd_offset(pud_k, address);
+- if (!pmd_present(*pmd_k))
+- return NULL;
+- if (!pmd_present(*pmd)) {
+- set_pmd(pmd, *pmd_k);
+- arch_flush_lazy_mmu_mode();
+- } else
+- BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
+- return pmd_k;
+-}
+-
+-/*
+- * Handle a fault on the vmalloc or module mapping area
+- *
+- * This assumes no large pages in there.
+- */
+-static inline int vmalloc_fault(unsigned long address)
+-{
+- unsigned long pgd_paddr;
+- pmd_t *pmd_k;
+- pte_t *pte_k;
+- /*
+- * Synchronize this task's top level page-table
+- * with the 'reference' page table.
+- *
+- * Do _not_ use "current" here. We might be inside
+- * an interrupt in the middle of a task switch..
+- */
+- pgd_paddr = read_cr3();
+- pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
+- if (!pmd_k)
+- return -1;
+- pte_k = pte_offset_kernel(pmd_k, address);
+- if (!pte_present(*pte_k))
+- return -1;
+- return 0;
+-}
+-
+-int show_unhandled_signals = 1;
+-
+-/*
+- * This routine handles page faults. It determines the address,
+- * and the problem, and then passes it off to one of the appropriate
+- * routines.
+- *
+- * error_code:
+- * bit 0 == 0 means no page found, 1 means protection fault
+- * bit 1 == 0 means read, 1 means write
+- * bit 2 == 0 means kernel, 1 means user-mode
+- * bit 3 == 1 means use of reserved bit detected
+- * bit 4 == 1 means fault was an instruction fetch
+- */
+-fastcall void __kprobes do_page_fault(struct pt_regs *regs,
+- unsigned long error_code)
+-{
+- struct task_struct *tsk;
+- struct mm_struct *mm;
+- struct vm_area_struct * vma;
+- unsigned long address;
+- int write, si_code;
+- int fault;
+-
+- /*
+- * We can fault from pretty much anywhere, with unknown IRQ state.
+- */
+- trace_hardirqs_fixup();
+-
+- /* get the address */
+- address = read_cr2();
+-
+- tsk = current;
+-
+- si_code = SEGV_MAPERR;
+-
+- /*
+- * We fault-in kernel-space virtual memory on-demand. The
+- * 'reference' page table is init_mm.pgd.
+- *
+- * NOTE! We MUST NOT take any locks for this case. We may
+- * be in an interrupt or a critical region, and should
+- * only copy the information from the master page table,
+- * nothing more.
+- *
+- * This verifies that the fault happens in kernel space
+- * (error_code & 4) == 0, and that the fault was not a
+- * protection error (error_code & 9) == 0.
+- */
+- if (unlikely(address >= TASK_SIZE)) {
+- if (!(error_code & 0x0000000d) && vmalloc_fault(address) >= 0)
+- return;
+- if (notify_page_fault(regs))
+- return;
+- /*
+- * Don't take the mm semaphore here. If we fixup a prefetch
+- * fault we could otherwise deadlock.
+- */
+- goto bad_area_nosemaphore;
+- }
+-
+- if (notify_page_fault(regs))
+- return;
+-
+- /* It's safe to allow irq's after cr2 has been saved and the vmalloc
+- fault has been handled. */
+- if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
+- local_irq_enable();
+-
+- mm = tsk->mm;
+-
+- /*
+- * If we're in an interrupt, have no user context or are running in an
+- * atomic region then we must not take the fault..
+- */
+- if (in_atomic() || !mm)
+- goto bad_area_nosemaphore;
+-
+- /* When running in the kernel we expect faults to occur only to
+- * addresses in user space. All other faults represent errors in the
+- * kernel and should generate an OOPS. Unfortunately, in the case of an
+- * erroneous fault occurring in a code path which already holds mmap_sem
+- * we will deadlock attempting to validate the fault against the
+- * address space. Luckily the kernel only validly references user
+- * space from well defined areas of code, which are listed in the
+- * exceptions table.
+- *
+- * As the vast majority of faults will be valid we will only perform
+- * the source reference check when there is a possibility of a deadlock.
+- * Attempt to lock the address space, if we cannot we then validate the
+- * source. If this is invalid we can skip the address space check,
+- * thus avoiding the deadlock.
+- */
+- if (!down_read_trylock(&mm->mmap_sem)) {
+- if ((error_code & 4) == 0 &&
+- !search_exception_tables(regs->eip))
+- goto bad_area_nosemaphore;
+- down_read(&mm->mmap_sem);
+- }
+-
+- vma = find_vma(mm, address);
+- if (!vma)
+- goto bad_area;
+- if (vma->vm_start <= address)
+- goto good_area;
+- if (!(vma->vm_flags & VM_GROWSDOWN))
+- goto bad_area;
+- if (error_code & 4) {
+- /*
+- * Accessing the stack below %esp is always a bug.
+- * The large cushion allows instructions like enter
+- * and pusha to work. ("enter $65535,$31" pushes
+- * 32 pointers and then decrements %esp by 65535.)
+- */
+- if (address + 65536 + 32 * sizeof(unsigned long) < regs->esp)
+- goto bad_area;
+- }
+- if (expand_stack(vma, address))
+- goto bad_area;
+-/*
+- * Ok, we have a good vm_area for this memory access, so
+- * we can handle it..
+- */
+-good_area:
+- si_code = SEGV_ACCERR;
+- write = 0;
+- switch (error_code & 3) {
+- default: /* 3: write, present */
+- /* fall through */
+- case 2: /* write, not present */
+- if (!(vma->vm_flags & VM_WRITE))
+- goto bad_area;
+- write++;
+- break;
+- case 1: /* read, present */
+- goto bad_area;
+- case 0: /* read, not present */
+- if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+- goto bad_area;
+- }
+-
+- survive:
+- /*
+- * If for any reason at all we couldn't handle the fault,
+- * make sure we exit gracefully rather than endlessly redo
+- * the fault.
+- */
+- fault = handle_mm_fault(mm, vma, address, write);
+- if (unlikely(fault & VM_FAULT_ERROR)) {
+- if (fault & VM_FAULT_OOM)
+- goto out_of_memory;
+- else if (fault & VM_FAULT_SIGBUS)
+- goto do_sigbus;
+- BUG();
+- }
+- if (fault & VM_FAULT_MAJOR)
+- tsk->maj_flt++;
+- else
+- tsk->min_flt++;
+-
+- /*
+- * Did it hit the DOS screen memory VA from vm86 mode?
+- */
+- if (regs->eflags & VM_MASK) {
+- unsigned long bit = (address - 0xA0000) >> PAGE_SHIFT;
+- if (bit < 32)
+- tsk->thread.screen_bitmap |= 1 << bit;
+- }
+- up_read(&mm->mmap_sem);
+- return;
+-
+-/*
+- * Something tried to access memory that isn't in our memory map..
+- * Fix it, but check if it's kernel or user first..
+- */
+-bad_area:
+- up_read(&mm->mmap_sem);
+-
+-bad_area_nosemaphore:
+- /* User mode accesses just cause a SIGSEGV */
+- if (error_code & 4) {
+- /*
+- * It's possible to have interrupts off here.
+- */
+- local_irq_enable();
+-
+- /*
+- * Valid to do another page fault here because this one came
+- * from user space.
+- */
+- if (is_prefetch(regs, address, error_code))
+- return;
+-
+- if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
+- printk_ratelimit()) {
+- printk("%s%s[%d]: segfault at %08lx eip %08lx "
+- "esp %08lx error %lx\n",
+- task_pid_nr(tsk) > 1 ? KERN_INFO : KERN_EMERG,
+- tsk->comm, task_pid_nr(tsk), address, regs->eip,
+- regs->esp, error_code);
+- }
+- tsk->thread.cr2 = address;
+- /* Kernel addresses are always protection faults */
+- tsk->thread.error_code = error_code | (address >= TASK_SIZE);
+- tsk->thread.trap_no = 14;
+- force_sig_info_fault(SIGSEGV, si_code, address, tsk);
+- return;
+- }
+-
+-#ifdef CONFIG_X86_F00F_BUG
+- /*
+- * Pentium F0 0F C7 C8 bug workaround.
+- */
+- if (boot_cpu_data.f00f_bug) {
+- unsigned long nr;
+-
+- nr = (address - idt_descr.address) >> 3;
+-
+- if (nr == 6) {
+- do_invalid_op(regs, 0);
+- return;
+- }
+- }
+-#endif
+-
+-no_context:
+- /* Are we prepared to handle this kernel fault? */
+- if (fixup_exception(regs))
+- return;
+-
+- /*
+- * Valid to do another page fault here, because if this fault
+- * had been triggered by is_prefetch fixup_exception would have
+- * handled it.
+- */
+- if (is_prefetch(regs, address, error_code))
+- return;
+-
+-/*
+- * Oops. The kernel tried to access some bad page. We'll have to
+- * terminate things with extreme prejudice.
+- */
+-
+- bust_spinlocks(1);
+-
+- if (oops_may_print()) {
+- __typeof__(pte_val(__pte(0))) page;
+-
+-#ifdef CONFIG_X86_PAE
+- if (error_code & 16) {
+- pte_t *pte = lookup_address(address);
+-
+- if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
+- printk(KERN_CRIT "kernel tried to execute "
+- "NX-protected page - exploit attempt? "
+- "(uid: %d)\n", current->uid);
+- }
+-#endif
+- if (address < PAGE_SIZE)
+- printk(KERN_ALERT "BUG: unable to handle kernel NULL "
+- "pointer dereference");
+- else
+- printk(KERN_ALERT "BUG: unable to handle kernel paging"
+- " request");
+- printk(" at virtual address %08lx\n",address);
+- printk(KERN_ALERT "printing eip: %08lx ", regs->eip);
+-
+- page = read_cr3();
+- page = ((__typeof__(page) *) __va(page))[address >> PGDIR_SHIFT];
+-#ifdef CONFIG_X86_PAE
+- printk("*pdpt = %016Lx ", page);
+- if ((page >> PAGE_SHIFT) < max_low_pfn
+- && page & _PAGE_PRESENT) {
+- page &= PAGE_MASK;
+- page = ((__typeof__(page) *) __va(page))[(address >> PMD_SHIFT)
+- & (PTRS_PER_PMD - 1)];
+- printk(KERN_CONT "*pde = %016Lx ", page);
+- page &= ~_PAGE_NX;
+- }
+-#else
+- printk("*pde = %08lx ", page);
+-#endif
+-
+- /*
+- * We must not directly access the pte in the highpte
+- * case if the page table is located in highmem.
+- * And let's rather not kmap-atomic the pte, just in case
+- * it's allocated already.
+- */
+- if ((page >> PAGE_SHIFT) < max_low_pfn
+- && (page & _PAGE_PRESENT)
+- && !(page & _PAGE_PSE)) {
+- page &= PAGE_MASK;
+- page = ((__typeof__(page) *) __va(page))[(address >> PAGE_SHIFT)
+- & (PTRS_PER_PTE - 1)];
+- printk("*pte = %0*Lx ", sizeof(page)*2, (u64)page);
+- }
+-
+- printk("\n");
+- }
+-
+- tsk->thread.cr2 = address;
+- tsk->thread.trap_no = 14;
+- tsk->thread.error_code = error_code;
+- die("Oops", regs, error_code);
+- bust_spinlocks(0);
+- do_exit(SIGKILL);
+-
+-/*
+- * We ran out of memory, or some other thing happened to us that made
+- * us unable to handle the page fault gracefully.
+- */
+-out_of_memory:
+- up_read(&mm->mmap_sem);
+- if (is_global_init(tsk)) {
+- yield();
+- down_read(&mm->mmap_sem);
+- goto survive;
+- }
+- printk("VM: killing process %s\n", tsk->comm);
+- if (error_code & 4)
+- do_group_exit(SIGKILL);
+- goto no_context;
+-
+-do_sigbus:
+- up_read(&mm->mmap_sem);
+-
+- /* Kernel mode? Handle exceptions or die */
+- if (!(error_code & 4))
+- goto no_context;
+-
+- /* User space => ok to do another page fault */
+- if (is_prefetch(regs, address, error_code))
+- return;
+-
+- tsk->thread.cr2 = address;
+- tsk->thread.error_code = error_code;
+- tsk->thread.trap_no = 14;
+- force_sig_info_fault(SIGBUS, BUS_ADRERR, address, tsk);
+-}
+-
+-void vmalloc_sync_all(void)
+-{
+- /*
+- * Note that races in the updates of insync and start aren't
+- * problematic: insync can only get set bits added, and updates to
+- * start are only improving performance (without affecting correctness
+- * if undone).
+- */
+- static DECLARE_BITMAP(insync, PTRS_PER_PGD);
+- static unsigned long start = TASK_SIZE;
+- unsigned long address;
+-
+- if (SHARED_KERNEL_PMD)
+- return;
+-
+- BUILD_BUG_ON(TASK_SIZE & ~PGDIR_MASK);
+- for (address = start; address >= TASK_SIZE; address += PGDIR_SIZE) {
+- if (!test_bit(pgd_index(address), insync)) {
+- unsigned long flags;
+- struct page *page;
+-
+- spin_lock_irqsave(&pgd_lock, flags);
+- for (page = pgd_list; page; page =
+- (struct page *)page->index)
+- if (!vmalloc_sync_one(page_address(page),
+- address)) {
+- BUG_ON(page != pgd_list);
+- break;
+- }
+- spin_unlock_irqrestore(&pgd_lock, flags);
+- if (!page)
+- set_bit(pgd_index(address), insync);
+- }
+- if (address == start && test_bit(pgd_index(address), insync))
+- start = address + PGDIR_SIZE;
+- }
+-}
+diff --git a/arch/x86/mm/fault_64.c b/arch/x86/mm/fault_64.c
+deleted file mode 100644
+index 0e26230..0000000
+--- a/arch/x86/mm/fault_64.c
++++ /dev/null
+@@ -1,623 +0,0 @@
+-/*
+- * linux/arch/x86-64/mm/fault.c
+- *
+- * Copyright (C) 1995 Linus Torvalds
+- * Copyright (C) 2001,2002 Andi Kleen, SuSE Labs.
+- */
+-
+-#include <linux/signal.h>
+-#include <linux/sched.h>
+-#include <linux/kernel.h>
+-#include <linux/errno.h>
+-#include <linux/string.h>
+-#include <linux/types.h>
+-#include <linux/ptrace.h>
+-#include <linux/mman.h>
+-#include <linux/mm.h>
+-#include <linux/smp.h>
+-#include <linux/interrupt.h>
+-#include <linux/init.h>
+-#include <linux/tty.h>
+-#include <linux/vt_kern.h> /* For unblank_screen() */
+-#include <linux/compiler.h>
+-#include <linux/vmalloc.h>
+-#include <linux/module.h>
+-#include <linux/kprobes.h>
+-#include <linux/uaccess.h>
+-#include <linux/kdebug.h>
+-#include <linux/kprobes.h>
+-
+-#include <asm/system.h>
+-#include <asm/pgalloc.h>
+-#include <asm/smp.h>
+-#include <asm/tlbflush.h>
+-#include <asm/proto.h>
+-#include <asm-generic/sections.h>
+-
+-/* Page fault error code bits */
+-#define PF_PROT (1<<0) /* or no page found */
+-#define PF_WRITE (1<<1)
+-#define PF_USER (1<<2)
+-#define PF_RSVD (1<<3)
+-#define PF_INSTR (1<<4)
+-
+-#ifdef CONFIG_KPROBES
+-static inline int notify_page_fault(struct pt_regs *regs)
+-{
+- int ret = 0;
+-
+- /* kprobe_running() needs smp_processor_id() */
+- if (!user_mode(regs)) {
+- preempt_disable();
+- if (kprobe_running() && kprobe_fault_handler(regs, 14))
+- ret = 1;
+- preempt_enable();
+- }
+-
+- return ret;
+-}
+-#else
+-static inline int notify_page_fault(struct pt_regs *regs)
+-{
+- return 0;
+-}
+-#endif
+-
+-/* Sometimes the CPU reports invalid exceptions on prefetch.
+- Check that here and ignore.
+- Opcode checker based on code by Richard Brunner */
+-static noinline int is_prefetch(struct pt_regs *regs, unsigned long addr,
+- unsigned long error_code)
+-{
+- unsigned char *instr;
+- int scan_more = 1;
+- int prefetch = 0;
+- unsigned char *max_instr;
+-
+- /* If it was a exec fault ignore */
+- if (error_code & PF_INSTR)
+- return 0;
+-
+- instr = (unsigned char __user *)convert_rip_to_linear(current, regs);
+- max_instr = instr + 15;
+-
+- if (user_mode(regs) && instr >= (unsigned char *)TASK_SIZE)
+- return 0;
+-
+- while (scan_more && instr < max_instr) {
+- unsigned char opcode;
+- unsigned char instr_hi;
+- unsigned char instr_lo;
+-
+- if (probe_kernel_address(instr, opcode))
+- break;
+-
+- instr_hi = opcode & 0xf0;
+- instr_lo = opcode & 0x0f;
+- instr++;
+-
+- switch (instr_hi) {
+- case 0x20:
+- case 0x30:
+- /* Values 0x26,0x2E,0x36,0x3E are valid x86
+- prefixes. In long mode, the CPU will signal
+- invalid opcode if some of these prefixes are
+- present so we will never get here anyway */
+- scan_more = ((instr_lo & 7) == 0x6);
+- break;
+-
+- case 0x40:
+- /* In AMD64 long mode, 0x40 to 0x4F are valid REX prefixes
+- Need to figure out under what instruction mode the
+- instruction was issued ... */
+- /* Could check the LDT for lm, but for now it's good
+- enough to assume that long mode only uses well known
+- segments or kernel. */
+- scan_more = (!user_mode(regs)) || (regs->cs == __USER_CS);
+- break;
+-
+- case 0x60:
+- /* 0x64 thru 0x67 are valid prefixes in all modes. */
+- scan_more = (instr_lo & 0xC) == 0x4;
+- break;
+- case 0xF0:
+- /* 0xF0, 0xF2, and 0xF3 are valid prefixes in all modes. */
+- scan_more = !instr_lo || (instr_lo>>1) == 1;
+- break;
+- case 0x00:
+- /* Prefetch instruction is 0x0F0D or 0x0F18 */
+- scan_more = 0;
+- if (probe_kernel_address(instr, opcode))
+- break;
+- prefetch = (instr_lo == 0xF) &&
+- (opcode == 0x0D || opcode == 0x18);
+- break;
+- default:
+- scan_more = 0;
+- break;
+- }
+- }
+- return prefetch;
+-}
+-
+-static int bad_address(void *p)
+-{
+- unsigned long dummy;
+- return probe_kernel_address((unsigned long *)p, dummy);
+-}
+-
+-void dump_pagetable(unsigned long address)
+-{
+- pgd_t *pgd;
+- pud_t *pud;
+- pmd_t *pmd;
+- pte_t *pte;
+-
+- pgd = (pgd_t *)read_cr3();
+-
+- pgd = __va((unsigned long)pgd & PHYSICAL_PAGE_MASK);
+- pgd += pgd_index(address);
+- if (bad_address(pgd)) goto bad;
+- printk("PGD %lx ", pgd_val(*pgd));
+- if (!pgd_present(*pgd)) goto ret;
+-
+- pud = pud_offset(pgd, address);
+- if (bad_address(pud)) goto bad;
+- printk("PUD %lx ", pud_val(*pud));
+- if (!pud_present(*pud)) goto ret;
+-
+- pmd = pmd_offset(pud, address);
+- if (bad_address(pmd)) goto bad;
+- printk("PMD %lx ", pmd_val(*pmd));
+- if (!pmd_present(*pmd) || pmd_large(*pmd)) goto ret;
+-
+- pte = pte_offset_kernel(pmd, address);
+- if (bad_address(pte)) goto bad;
+- printk("PTE %lx", pte_val(*pte));
+-ret:
+- printk("\n");
+- return;
+-bad:
+- printk("BAD\n");
+-}
+-
+-static const char errata93_warning[] =
+-KERN_ERR "******* Your BIOS seems to not contain a fix for K8 errata #93\n"
+-KERN_ERR "******* Working around it, but it may cause SEGVs or burn power.\n"
+-KERN_ERR "******* Please consider a BIOS update.\n"
+-KERN_ERR "******* Disabling USB legacy in the BIOS may also help.\n";
+-
+-/* Workaround for K8 erratum #93 & buggy BIOS.
+- BIOS SMM functions are required to use a specific workaround
+- to avoid corruption of the 64bit RIP register on C stepping K8.
+- A lot of BIOS that didn't get tested properly miss this.
+- The OS sees this as a page fault with the upper 32bits of RIP cleared.
+- Try to work around it here.
+- Note we only handle faults in kernel here. */
+-
+-static int is_errata93(struct pt_regs *regs, unsigned long address)
+-{
+- static int warned;
+- if (address != regs->rip)
+- return 0;
+- if ((address >> 32) != 0)
+- return 0;
+- address |= 0xffffffffUL << 32;
+- if ((address >= (u64)_stext && address <= (u64)_etext) ||
+- (address >= MODULES_VADDR && address <= MODULES_END)) {
+- if (!warned) {
+- printk(errata93_warning);
+- warned = 1;
+- }
+- regs->rip = address;
+- return 1;
+- }
+- return 0;
+-}
+-
+-static noinline void pgtable_bad(unsigned long address, struct pt_regs *regs,
+- unsigned long error_code)
+-{
+- unsigned long flags = oops_begin();
+- struct task_struct *tsk;
+-
+- printk(KERN_ALERT "%s: Corrupted page table at address %lx\n",
+- current->comm, address);
+- dump_pagetable(address);
+- tsk = current;
+- tsk->thread.cr2 = address;
+- tsk->thread.trap_no = 14;
+- tsk->thread.error_code = error_code;
+- __die("Bad pagetable", regs, error_code);
+- oops_end(flags);
+- do_exit(SIGKILL);
+-}
+-
+-/*
+- * Handle a fault on the vmalloc area
+- *
+- * This assumes no large pages in there.
+- */
+-static int vmalloc_fault(unsigned long address)
+-{
+- pgd_t *pgd, *pgd_ref;
+- pud_t *pud, *pud_ref;
+- pmd_t *pmd, *pmd_ref;
+- pte_t *pte, *pte_ref;
+-
+- /* Copy kernel mappings over when needed. This can also
+- happen within a race in page table update. In the later
+- case just flush. */
+-
+- pgd = pgd_offset(current->mm ?: &init_mm, address);
+- pgd_ref = pgd_offset_k(address);
+- if (pgd_none(*pgd_ref))
+- return -1;
+- if (pgd_none(*pgd))
+- set_pgd(pgd, *pgd_ref);
+- else
+- BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+-
+- /* Below here mismatches are bugs because these lower tables
+- are shared */
+-
+- pud = pud_offset(pgd, address);
+- pud_ref = pud_offset(pgd_ref, address);
+- if (pud_none(*pud_ref))
+- return -1;
+- if (pud_none(*pud) || pud_page_vaddr(*pud) != pud_page_vaddr(*pud_ref))
+- BUG();
+- pmd = pmd_offset(pud, address);
+- pmd_ref = pmd_offset(pud_ref, address);
+- if (pmd_none(*pmd_ref))
+- return -1;
+- if (pmd_none(*pmd) || pmd_page(*pmd) != pmd_page(*pmd_ref))
+- BUG();
+- pte_ref = pte_offset_kernel(pmd_ref, address);
+- if (!pte_present(*pte_ref))
+- return -1;
+- pte = pte_offset_kernel(pmd, address);
+- /* Don't use pte_page here, because the mappings can point
+- outside mem_map, and the NUMA hash lookup cannot handle
+- that. */
+- if (!pte_present(*pte) || pte_pfn(*pte) != pte_pfn(*pte_ref))
+- BUG();
+- return 0;
+-}
+-
+-int show_unhandled_signals = 1;
+-
+-/*
+- * This routine handles page faults. It determines the address,
+- * and the problem, and then passes it off to one of the appropriate
+- * routines.
+- */
+-asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
+- unsigned long error_code)
+-{
+- struct task_struct *tsk;
+- struct mm_struct *mm;
+- struct vm_area_struct * vma;
+- unsigned long address;
+- const struct exception_table_entry *fixup;
+- int write, fault;
+- unsigned long flags;
+- siginfo_t info;
+-
+- /*
+- * We can fault from pretty much anywhere, with unknown IRQ state.
+- */
+- trace_hardirqs_fixup();
+-
+- tsk = current;
+- mm = tsk->mm;
+- prefetchw(&mm->mmap_sem);
+-
+- /* get the address */
+- address = read_cr2();
+-
+- info.si_code = SEGV_MAPERR;
+-
+-
+- /*
+- * We fault-in kernel-space virtual memory on-demand. The
+- * 'reference' page table is init_mm.pgd.
+- *
+- * NOTE! We MUST NOT take any locks for this case. We may
+- * be in an interrupt or a critical region, and should
+- * only copy the information from the master page table,
+- * nothing more.
+- *
+- * This verifies that the fault happens in kernel space
+- * (error_code & 4) == 0, and that the fault was not a
+- * protection error (error_code & 9) == 0.
+- */
+- if (unlikely(address >= TASK_SIZE64)) {
+- /*
+- * Don't check for the module range here: its PML4
+- * is always initialized because it's shared with the main
+- * kernel text. Only vmalloc may need PML4 syncups.
+- */
+- if (!(error_code & (PF_RSVD|PF_USER|PF_PROT)) &&
+- ((address >= VMALLOC_START && address < VMALLOC_END))) {
+- if (vmalloc_fault(address) >= 0)
+- return;
+- }
+- if (notify_page_fault(regs))
+- return;
+- /*
+- * Don't take the mm semaphore here. If we fixup a prefetch
+- * fault we could otherwise deadlock.
+- */
+- goto bad_area_nosemaphore;
+- }
+-
+- if (notify_page_fault(regs))
+- return;
+-
+- if (likely(regs->eflags & X86_EFLAGS_IF))
+- local_irq_enable();
+-
+- if (unlikely(error_code & PF_RSVD))
+- pgtable_bad(address, regs, error_code);
+-
+- /*
+- * If we're in an interrupt or have no user
+- * context, we must not take the fault..
+- */
+- if (unlikely(in_atomic() || !mm))
+- goto bad_area_nosemaphore;
+-
+- /*
+- * User-mode registers count as a user access even for any
+- * potential system fault or CPU buglet.
+- */
+- if (user_mode_vm(regs))
+- error_code |= PF_USER;
+-
+- again:
+- /* When running in the kernel we expect faults to occur only to
+- * addresses in user space. All other faults represent errors in the
+- * kernel and should generate an OOPS. Unfortunately, in the case of an
+- * erroneous fault occurring in a code path which already holds mmap_sem
+- * we will deadlock attempting to validate the fault against the
+- * address space. Luckily the kernel only validly references user
+- * space from well defined areas of code, which are listed in the
+- * exceptions table.
+- *
+- * As the vast majority of faults will be valid we will only perform
+- * the source reference check when there is a possibility of a deadlock.
+- * Attempt to lock the address space, if we cannot we then validate the
+- * source. If this is invalid we can skip the address space check,
+- * thus avoiding the deadlock.
+- */
+- if (!down_read_trylock(&mm->mmap_sem)) {
+- if ((error_code & PF_USER) == 0 &&
+- !search_exception_tables(regs->rip))
+- goto bad_area_nosemaphore;
+- down_read(&mm->mmap_sem);
+- }
+-
+- vma = find_vma(mm, address);
+- if (!vma)
+- goto bad_area;
+- if (likely(vma->vm_start <= address))
+- goto good_area;
+- if (!(vma->vm_flags & VM_GROWSDOWN))
+- goto bad_area;
+- if (error_code & 4) {
+- /* Allow userspace just enough access below the stack pointer
+- * to let the 'enter' instruction work.
+- */
+- if (address + 65536 + 32 * sizeof(unsigned long) < regs->rsp)
+- goto bad_area;
+- }
+- if (expand_stack(vma, address))
+- goto bad_area;
+-/*
+- * Ok, we have a good vm_area for this memory access, so
+- * we can handle it..
+- */
+-good_area:
+- info.si_code = SEGV_ACCERR;
+- write = 0;
+- switch (error_code & (PF_PROT|PF_WRITE)) {
+- default: /* 3: write, present */
+- /* fall through */
+- case PF_WRITE: /* write, not present */
+- if (!(vma->vm_flags & VM_WRITE))
+- goto bad_area;
+- write++;
+- break;
+- case PF_PROT: /* read, present */
+- goto bad_area;
+- case 0: /* read, not present */
+- if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+- goto bad_area;
+- }
+-
+- /*
+- * If for any reason at all we couldn't handle the fault,
+- * make sure we exit gracefully rather than endlessly redo
+- * the fault.
+- */
+- fault = handle_mm_fault(mm, vma, address, write);
+- if (unlikely(fault & VM_FAULT_ERROR)) {
+- if (fault & VM_FAULT_OOM)
+- goto out_of_memory;
+- else if (fault & VM_FAULT_SIGBUS)
+- goto do_sigbus;
+- BUG();
+- }
+- if (fault & VM_FAULT_MAJOR)
+- tsk->maj_flt++;
+- else
+- tsk->min_flt++;
+- up_read(&mm->mmap_sem);
+- return;
+-
+-/*
+- * Something tried to access memory that isn't in our memory map..
+- * Fix it, but check if it's kernel or user first..
+- */
+-bad_area:
+- up_read(&mm->mmap_sem);
+-
+-bad_area_nosemaphore:
+- /* User mode accesses just cause a SIGSEGV */
+- if (error_code & PF_USER) {
+-
+- /*
+- * It's possible to have interrupts off here.
+- */
+- local_irq_enable();
+-
+- if (is_prefetch(regs, address, error_code))
+- return;
+-
+- /* Work around K8 erratum #100 K8 in compat mode
+- occasionally jumps to illegal addresses >4GB. We
+- catch this here in the page fault handler because
+- these addresses are not reachable. Just detect this
+- case and return. Any code segment in LDT is
+- compatibility mode. */
+- if ((regs->cs == __USER32_CS || (regs->cs & (1<<2))) &&
+- (address >> 32))
+- return;
+-
+- if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
+- printk_ratelimit()) {
+- printk(
+- "%s%s[%d]: segfault at %lx rip %lx rsp %lx error %lx\n",
+- tsk->pid > 1 ? KERN_INFO : KERN_EMERG,
+- tsk->comm, tsk->pid, address, regs->rip,
+- regs->rsp, error_code);
+- }
+-
+- tsk->thread.cr2 = address;
+- /* Kernel addresses are always protection faults */
+- tsk->thread.error_code = error_code | (address >= TASK_SIZE);
+- tsk->thread.trap_no = 14;
+- info.si_signo = SIGSEGV;
+- info.si_errno = 0;
+- /* info.si_code has been set above */
+- info.si_addr = (void __user *)address;
+- force_sig_info(SIGSEGV, &info, tsk);
+- return;
+- }
+-
+-no_context:
+-
+- /* Are we prepared to handle this kernel fault? */
+- fixup = search_exception_tables(regs->rip);
+- if (fixup) {
+- regs->rip = fixup->fixup;
+- return;
+- }
+-
+- /*
+- * Hall of shame of CPU/BIOS bugs.
+- */
+-
+- if (is_prefetch(regs, address, error_code))
+- return;
+-
+- if (is_errata93(regs, address))
+- return;
+-
+-/*
+- * Oops. The kernel tried to access some bad page. We'll have to
+- * terminate things with extreme prejudice.
+- */
+-
+- flags = oops_begin();
+-
+- if (address < PAGE_SIZE)
+- printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+- else
+- printk(KERN_ALERT "Unable to handle kernel paging request");
+- printk(" at %016lx RIP: \n" KERN_ALERT,address);
+- printk_address(regs->rip);
+- dump_pagetable(address);
+- tsk->thread.cr2 = address;
+- tsk->thread.trap_no = 14;
+- tsk->thread.error_code = error_code;
+- __die("Oops", regs, error_code);
+- /* Executive summary in case the body of the oops scrolled away */
+- printk(KERN_EMERG "CR2: %016lx\n", address);
+- oops_end(flags);
+- do_exit(SIGKILL);
+-
+-/*
+- * We ran out of memory, or some other thing happened to us that made
+- * us unable to handle the page fault gracefully.
+- */
+-out_of_memory:
+- up_read(&mm->mmap_sem);
+- if (is_global_init(current)) {
+- yield();
+- goto again;
+- }
+- printk("VM: killing process %s\n", tsk->comm);
+- if (error_code & 4)
+- do_group_exit(SIGKILL);
+- goto no_context;
+-
+-do_sigbus:
+- up_read(&mm->mmap_sem);
+-
+- /* Kernel mode? Handle exceptions or die */
+- if (!(error_code & PF_USER))
+- goto no_context;
+-
+- tsk->thread.cr2 = address;
+- tsk->thread.error_code = error_code;
+- tsk->thread.trap_no = 14;
+- info.si_signo = SIGBUS;
+- info.si_errno = 0;
+- info.si_code = BUS_ADRERR;
+- info.si_addr = (void __user *)address;
+- force_sig_info(SIGBUS, &info, tsk);
+- return;
+-}
+-
+-DEFINE_SPINLOCK(pgd_lock);
+-LIST_HEAD(pgd_list);
+-
+-void vmalloc_sync_all(void)
+-{
+- /* Note that races in the updates of insync and start aren't
+- problematic:
+- insync can only get set bits added, and updates to start are only
+- improving performance (without affecting correctness if undone). */
+- static DECLARE_BITMAP(insync, PTRS_PER_PGD);
+- static unsigned long start = VMALLOC_START & PGDIR_MASK;
+- unsigned long address;
+-
+- for (address = start; address <= VMALLOC_END; address += PGDIR_SIZE) {
+- if (!test_bit(pgd_index(address), insync)) {
+- const pgd_t *pgd_ref = pgd_offset_k(address);
+- struct page *page;
+-
+- if (pgd_none(*pgd_ref))
+- continue;
+- spin_lock(&pgd_lock);
+- list_for_each_entry(page, &pgd_list, lru) {
+- pgd_t *pgd;
+- pgd = (pgd_t *)page_address(page) + pgd_index(address);
+- if (pgd_none(*pgd))
+- set_pgd(pgd, *pgd_ref);
+- else
+- BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+- }
+- spin_unlock(&pgd_lock);
+- set_bit(pgd_index(address), insync);
+- }
+- if (address == start)
+- start = address + PGDIR_SIZE;
+- }
+- /* Check that there is no need to do the same for the modules area. */
+- BUILD_BUG_ON(!(MODULES_VADDR > __START_KERNEL));
+- BUILD_BUG_ON(!(((MODULES_END - 1) & PGDIR_MASK) ==
+- (__START_KERNEL & PGDIR_MASK)));
+-}
+diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
+index 1c3bf95..3d936f2 100644
+--- a/arch/x86/mm/highmem_32.c
++++ b/arch/x86/mm/highmem_32.c
+@@ -18,6 +18,49 @@ void kunmap(struct page *page)
+ kunmap_high(page);
+ }
+
++static void debug_kmap_atomic_prot(enum km_type type)
++{
++#ifdef CONFIG_DEBUG_HIGHMEM
++ static unsigned warn_count = 10;
+
-+ ablkcipher_request_set_tfm(ablk_req, ctx->ctr);
-+ ablkcipher_request_set_crypt(ablk_req, pctx->src, dst,
-+ cryptlen + sizeof(pctx->auth_tag),
-+ req->iv);
++ if (unlikely(warn_count == 0))
++ return;
+
-+ crypto_gcm_ghash_init(ghash, flags, ctx->gf128);
++ if (unlikely(in_interrupt())) {
++ if (in_irq()) {
++ if (type != KM_IRQ0 && type != KM_IRQ1 &&
++ type != KM_BIO_SRC_IRQ && type != KM_BIO_DST_IRQ &&
++ type != KM_BOUNCE_READ) {
++ WARN_ON(1);
++ warn_count--;
++ }
++ } else if (!irqs_disabled()) { /* softirq */
++ if (type != KM_IRQ0 && type != KM_IRQ1 &&
++ type != KM_SOFTIRQ0 && type != KM_SOFTIRQ1 &&
++ type != KM_SKB_SUNRPC_DATA &&
++ type != KM_SKB_DATA_SOFTIRQ &&
++ type != KM_BOUNCE_READ) {
++ WARN_ON(1);
++ warn_count--;
++ }
++ }
++ }
+
-+ crypto_gcm_ghash_update_sg(ghash, req->assoc, req->assoclen);
-+ crypto_gcm_ghash_flush(ghash);
++ if (type == KM_IRQ0 || type == KM_IRQ1 || type == KM_BOUNCE_READ ||
++ type == KM_BIO_SRC_IRQ || type == KM_BIO_DST_IRQ) {
++ if (!irqs_disabled()) {
++ WARN_ON(1);
++ warn_count--;
++ }
++ } else if (type == KM_SOFTIRQ0 || type == KM_SOFTIRQ1) {
++ if (irq_count() == 0 && !irqs_disabled()) {
++ WARN_ON(1);
++ warn_count--;
++ }
++ }
++#endif
+}
+
-+static int crypto_gcm_hash(struct aead_request *req)
-+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
-+ u8 *auth_tag = pctx->auth_tag;
-+ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
+ /*
+ * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
+ * no global lock is needed and because the kmap code must perform a global TLB
+@@ -30,8 +73,10 @@ void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
+ {
+ enum fixed_addresses idx;
+ unsigned long vaddr;
+-
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
+
-+ crypto_gcm_ghash_update_sg(ghash, req->dst, req->cryptlen);
-+ crypto_gcm_ghash_final_xor(ghash, req->assoclen, req->cryptlen,
-+ auth_tag);
++ debug_kmap_atomic_prot(type);
+
-+ scatterwalk_map_and_copy(auth_tag, req->dst, req->cryptlen,
-+ crypto_aead_authsize(aead), 1);
-+ return 0;
-+}
+ pagefault_disable();
+
+ if (!PageHighMem(page))
+diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
+index 6c06d9c..4fbafb4 100644
+--- a/arch/x86/mm/hugetlbpage.c
++++ b/arch/x86/mm/hugetlbpage.c
+@@ -15,6 +15,7 @@
+ #include <asm/mman.h>
+ #include <asm/tlb.h>
+ #include <asm/tlbflush.h>
++#include <asm/pgalloc.h>
+
+ static unsigned long page_table_shareable(struct vm_area_struct *svma,
+ struct vm_area_struct *vma,
+@@ -88,7 +89,7 @@ static void huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+
+ spin_lock(&mm->page_table_lock);
+ if (pud_none(*pud))
+- pud_populate(mm, pud, (unsigned long) spte & PAGE_MASK);
++ pud_populate(mm, pud, (pmd_t *)((unsigned long)spte & PAGE_MASK));
+ else
+ put_page(virt_to_page(spte));
+ spin_unlock(&mm->page_table_lock);
+diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
+index 3c76d19..da524fb 100644
+--- a/arch/x86/mm/init_32.c
++++ b/arch/x86/mm/init_32.c
+@@ -27,7 +27,6 @@
+ #include <linux/bootmem.h>
+ #include <linux/slab.h>
+ #include <linux/proc_fs.h>
+-#include <linux/efi.h>
+ #include <linux/memory_hotplug.h>
+ #include <linux/initrd.h>
+ #include <linux/cpumask.h>
+@@ -40,8 +39,10 @@
+ #include <asm/fixmap.h>
+ #include <asm/e820.h>
+ #include <asm/apic.h>
++#include <asm/bugs.h>
+ #include <asm/tlb.h>
+ #include <asm/tlbflush.h>
++#include <asm/pgalloc.h>
+ #include <asm/sections.h>
+ #include <asm/paravirt.h>
+
+@@ -50,7 +51,7 @@ unsigned int __VMALLOC_RESERVE = 128 << 20;
+ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
+ unsigned long highstart_pfn, highend_pfn;
+
+-static int noinline do_test_wp_bit(void);
++static noinline int do_test_wp_bit(void);
+
+ /*
+ * Creates a middle page table and puts a pointer to it in the
+@@ -61,26 +62,26 @@ static pmd_t * __init one_md_table_init(pgd_t *pgd)
+ {
+ pud_t *pud;
+ pmd_t *pmd_table;
+-
+
-+static void crypto_gcm_encrypt_done(struct crypto_async_request *areq, int err)
-+{
-+ struct aead_request *req = areq->data;
+ #ifdef CONFIG_X86_PAE
+ if (!(pgd_val(*pgd) & _PAGE_PRESENT)) {
+ pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+
+- paravirt_alloc_pd(__pa(pmd_table) >> PAGE_SHIFT);
++ paravirt_alloc_pd(&init_mm, __pa(pmd_table) >> PAGE_SHIFT);
+ set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT));
+ pud = pud_offset(pgd, 0);
+- if (pmd_table != pmd_offset(pud, 0))
+- BUG();
++ BUG_ON(pmd_table != pmd_offset(pud, 0));
+ }
+ #endif
+ pud = pud_offset(pgd, 0);
+ pmd_table = pmd_offset(pud, 0);
+
-+ if (!err)
-+ err = crypto_gcm_hash(req);
+ return pmd_table;
+ }
+
+ /*
+ * Create a page table and place a pointer to it in a middle page
+- * directory entry.
++ * directory entry:
+ */
+ static pte_t * __init one_page_table_init(pmd_t *pmd)
+ {
+@@ -90,9 +91,10 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
+ #ifdef CONFIG_DEBUG_PAGEALLOC
+ page_table = (pte_t *) alloc_bootmem_pages(PAGE_SIZE);
+ #endif
+- if (!page_table)
++ if (!page_table) {
+ page_table =
+ (pte_t *)alloc_bootmem_low_pages(PAGE_SIZE);
++ }
+
+ paravirt_alloc_pt(&init_mm, __pa(page_table) >> PAGE_SHIFT);
+ set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
+@@ -103,22 +105,21 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
+ }
+
+ /*
+- * This function initializes a certain range of kernel virtual memory
++ * This function initializes a certain range of kernel virtual memory
+ * with new bootmem page tables, everywhere page tables are missing in
+ * the given range.
+- */
+-
+-/*
+- * NOTE: The pagetables are allocated contiguous on the physical space
+- * so we can cache the place of the first one and move around without
++ *
++ * NOTE: The pagetables are allocated contiguous on the physical space
++ * so we can cache the place of the first one and move around without
+ * checking the pgd every time.
+ */
+-static void __init page_table_range_init (unsigned long start, unsigned long end, pgd_t *pgd_base)
++static void __init
++page_table_range_init(unsigned long start, unsigned long end, pgd_t *pgd_base)
+ {
+- pgd_t *pgd;
+- pmd_t *pmd;
+ int pgd_idx, pmd_idx;
+ unsigned long vaddr;
++ pgd_t *pgd;
++ pmd_t *pmd;
+
+ vaddr = start;
+ pgd_idx = pgd_index(vaddr);
+@@ -128,7 +129,8 @@ static void __init page_table_range_init (unsigned long start, unsigned long end
+ for ( ; (pgd_idx < PTRS_PER_PGD) && (vaddr != end); pgd++, pgd_idx++) {
+ pmd = one_md_table_init(pgd);
+ pmd = pmd + pmd_index(vaddr);
+- for (; (pmd_idx < PTRS_PER_PMD) && (vaddr != end); pmd++, pmd_idx++) {
++ for (; (pmd_idx < PTRS_PER_PMD) && (vaddr != end);
++ pmd++, pmd_idx++) {
+ one_page_table_init(pmd);
+
+ vaddr += PMD_SIZE;
+@@ -145,17 +147,17 @@ static inline int is_kernel_text(unsigned long addr)
+ }
+
+ /*
+- * This maps the physical memory to kernel virtual address space, a total
+- * of max_low_pfn pages, by creating page tables starting from address
+- * PAGE_OFFSET.
++ * This maps the physical memory to kernel virtual address space, a total
++ * of max_low_pfn pages, by creating page tables starting from address
++ * PAGE_OFFSET:
+ */
+ static void __init kernel_physical_mapping_init(pgd_t *pgd_base)
+ {
++ int pgd_idx, pmd_idx, pte_ofs;
+ unsigned long pfn;
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+- int pgd_idx, pmd_idx, pte_ofs;
+
+ pgd_idx = pgd_index(PAGE_OFFSET);
+ pgd = pgd_base + pgd_idx;
+@@ -165,29 +167,43 @@ static void __init kernel_physical_mapping_init(pgd_t *pgd_base)
+ pmd = one_md_table_init(pgd);
+ if (pfn >= max_low_pfn)
+ continue;
+- for (pmd_idx = 0; pmd_idx < PTRS_PER_PMD && pfn < max_low_pfn; pmd++, pmd_idx++) {
+- unsigned int address = pfn * PAGE_SIZE + PAGE_OFFSET;
+
+- /* Map with big pages if possible, otherwise create normal page tables. */
++ for (pmd_idx = 0;
++ pmd_idx < PTRS_PER_PMD && pfn < max_low_pfn;
++ pmd++, pmd_idx++) {
++ unsigned int addr = pfn * PAGE_SIZE + PAGE_OFFSET;
+
-+ aead_request_complete(req, err);
-+}
++ /*
++ * Map with big pages if possible, otherwise
++ * create normal page tables:
++ */
+ if (cpu_has_pse) {
+- unsigned int address2 = (pfn + PTRS_PER_PTE - 1) * PAGE_SIZE + PAGE_OFFSET + PAGE_SIZE-1;
+- if (is_kernel_text(address) || is_kernel_text(address2))
+- set_pmd(pmd, pfn_pmd(pfn, PAGE_KERNEL_LARGE_EXEC));
+- else
+- set_pmd(pmd, pfn_pmd(pfn, PAGE_KERNEL_LARGE));
++ unsigned int addr2;
++ pgprot_t prot = PAGE_KERNEL_LARGE;
+
-+static int crypto_gcm_encrypt(struct aead_request *req)
-+{
-+ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
-+ struct ablkcipher_request *abreq = &pctx->abreq;
-+ int err;
++ addr2 = (pfn + PTRS_PER_PTE-1) * PAGE_SIZE +
++ PAGE_OFFSET + PAGE_SIZE-1;
+
-+ crypto_gcm_init_crypt(abreq, req, req->cryptlen);
-+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
-+ crypto_gcm_encrypt_done, req);
++ if (is_kernel_text(addr) ||
++ is_kernel_text(addr2))
++ prot = PAGE_KERNEL_LARGE_EXEC;
+
-+ err = crypto_ablkcipher_encrypt(abreq);
-+ if (err)
-+ return err;
++ set_pmd(pmd, pfn_pmd(pfn, prot));
+
+ pfn += PTRS_PER_PTE;
+- } else {
+- pte = one_page_table_init(pmd);
+-
+- for (pte_ofs = 0;
+- pte_ofs < PTRS_PER_PTE && pfn < max_low_pfn;
+- pte++, pfn++, pte_ofs++, address += PAGE_SIZE) {
+- if (is_kernel_text(address))
+- set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC));
+- else
+- set_pte(pte, pfn_pte(pfn, PAGE_KERNEL));
+- }
++ continue;
++ }
++ pte = one_page_table_init(pmd);
+
-+ return crypto_gcm_hash(req);
-+}
++ for (pte_ofs = 0;
++ pte_ofs < PTRS_PER_PTE && pfn < max_low_pfn;
++ pte++, pfn++, pte_ofs++, addr += PAGE_SIZE) {
++ pgprot_t prot = PAGE_KERNEL;
+
-+static int crypto_gcm_verify(struct aead_request *req)
++ if (is_kernel_text(addr))
++ prot = PAGE_KERNEL_EXEC;
++
++ set_pte(pte, pfn_pte(pfn, prot));
+ }
+ }
+ }
+@@ -200,57 +216,23 @@ static inline int page_kills_ppro(unsigned long pagenr)
+ return 0;
+ }
+
+-int page_is_ram(unsigned long pagenr)
+-{
+- int i;
+- unsigned long addr, end;
+-
+- if (efi_enabled) {
+- efi_memory_desc_t *md;
+- void *p;
+-
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+- md = p;
+- if (!is_available_memory(md))
+- continue;
+- addr = (md->phys_addr+PAGE_SIZE-1) >> PAGE_SHIFT;
+- end = (md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT)) >> PAGE_SHIFT;
+-
+- if ((pagenr >= addr) && (pagenr < end))
+- return 1;
+- }
+- return 0;
+- }
+-
+- for (i = 0; i < e820.nr_map; i++) {
+-
+- if (e820.map[i].type != E820_RAM) /* not usable memory */
+- continue;
+- /*
+- * !!!FIXME!!! Some BIOSen report areas as RAM that
+- * are not. Notably the 640->1Mb area. We need a sanity
+- * check here.
+- */
+- addr = (e820.map[i].addr+PAGE_SIZE-1) >> PAGE_SHIFT;
+- end = (e820.map[i].addr+e820.map[i].size) >> PAGE_SHIFT;
+- if ((pagenr >= addr) && (pagenr < end))
+- return 1;
+- }
+- return 0;
+-}
+-
+ #ifdef CONFIG_HIGHMEM
+ pte_t *kmap_pte;
+ pgprot_t kmap_prot;
+
+-#define kmap_get_fixmap_pte(vaddr) \
+- pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(vaddr), vaddr), (vaddr)), (vaddr))
++static inline pte_t *kmap_get_fixmap_pte(unsigned long vaddr)
+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
-+ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
-+ u8 *auth_tag = pctx->auth_tag;
-+ u8 *iauth_tag = pctx->iauth_tag;
-+ unsigned int authsize = crypto_aead_authsize(aead);
-+ unsigned int cryptlen = req->cryptlen - authsize;
++ return pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(vaddr),
++ vaddr), vaddr), vaddr);
++}
+
+ static void __init kmap_init(void)
+ {
+ unsigned long kmap_vstart;
+
+- /* cache the first kmap pte */
++ /*
++ * Cache the first kmap pte:
++ */
+ kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
+ kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
+
+@@ -259,11 +241,11 @@ static void __init kmap_init(void)
+
+ static void __init permanent_kmaps_init(pgd_t *pgd_base)
+ {
++ unsigned long vaddr;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+- unsigned long vaddr;
+
+ vaddr = PKMAP_BASE;
+ page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
+@@ -272,7 +254,7 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
+ pud = pud_offset(pgd, vaddr);
+ pmd = pmd_offset(pud, vaddr);
+ pte = pte_offset_kernel(pmd, vaddr);
+- pkmap_page_table = pte;
++ pkmap_page_table = pte;
+ }
+
+ static void __meminit free_new_highpage(struct page *page)
+@@ -291,7 +273,8 @@ void __init add_one_highpage_init(struct page *page, int pfn, int bad_ppro)
+ SetPageReserved(page);
+ }
+
+-static int __meminit add_one_highpage_hotplug(struct page *page, unsigned long pfn)
++static int __meminit
++add_one_highpage_hotplug(struct page *page, unsigned long pfn)
+ {
+ free_new_highpage(page);
+ totalram_pages++;
+@@ -299,6 +282,7 @@ static int __meminit add_one_highpage_hotplug(struct page *page, unsigned long p
+ max_mapnr = max(pfn, max_mapnr);
+ #endif
+ num_physpages++;
+
-+ crypto_gcm_ghash_final_xor(ghash, req->assoclen, cryptlen, auth_tag);
+ return 0;
+ }
+
+@@ -306,7 +290,7 @@ static int __meminit add_one_highpage_hotplug(struct page *page, unsigned long p
+ * Not currently handling the NUMA case.
+ * Assuming single node and all memory that
+ * has been added dynamically that would be
+- * onlined here is in HIGHMEM
++ * onlined here is in HIGHMEM.
+ */
+ void __meminit online_page(struct page *page)
+ {
+@@ -314,13 +298,11 @@ void __meminit online_page(struct page *page)
+ add_one_highpage_hotplug(page, page_to_pfn(page));
+ }
+
+-
+-#ifdef CONFIG_NUMA
+-extern void set_highmem_pages_init(int);
+-#else
++#ifndef CONFIG_NUMA
+ static void __init set_highmem_pages_init(int bad_ppro)
+ {
+ int pfn;
+
-+ authsize = crypto_aead_authsize(aead);
-+ scatterwalk_map_and_copy(iauth_tag, req->src, cryptlen, authsize, 0);
-+ return memcmp(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
-+}
+ for (pfn = highstart_pfn; pfn < highend_pfn; pfn++) {
+ /*
+ * Holes under sparsemem might not have no mem_map[]:
+@@ -330,23 +312,18 @@ static void __init set_highmem_pages_init(int bad_ppro)
+ }
+ totalram_pages += totalhigh_pages;
+ }
+-#endif /* CONFIG_FLATMEM */
++#endif /* !CONFIG_NUMA */
+
+ #else
+-#define kmap_init() do { } while (0)
+-#define permanent_kmaps_init(pgd_base) do { } while (0)
+-#define set_highmem_pages_init(bad_ppro) do { } while (0)
++# define kmap_init() do { } while (0)
++# define permanent_kmaps_init(pgd_base) do { } while (0)
++# define set_highmem_pages_init(bad_ppro) do { } while (0)
+ #endif /* CONFIG_HIGHMEM */
+
+-unsigned long long __PAGE_KERNEL = _PAGE_KERNEL;
++pteval_t __PAGE_KERNEL = _PAGE_KERNEL;
+ EXPORT_SYMBOL(__PAGE_KERNEL);
+-unsigned long long __PAGE_KERNEL_EXEC = _PAGE_KERNEL_EXEC;
+
+-#ifdef CONFIG_NUMA
+-extern void __init remap_numa_kva(void);
+-#else
+-#define remap_numa_kva() do {} while (0)
+-#endif
++pteval_t __PAGE_KERNEL_EXEC = _PAGE_KERNEL_EXEC;
+
+ void __init native_pagetable_setup_start(pgd_t *base)
+ {
+@@ -372,7 +349,7 @@ void __init native_pagetable_setup_start(pgd_t *base)
+ memset(&base[USER_PTRS_PER_PGD], 0,
+ KERNEL_PGD_PTRS * sizeof(pgd_t));
+ #else
+- paravirt_alloc_pd(__pa(swapper_pg_dir) >> PAGE_SHIFT);
++ paravirt_alloc_pd(&init_mm, __pa(base) >> PAGE_SHIFT);
+ #endif
+ }
+
+@@ -410,10 +387,10 @@ void __init native_pagetable_setup_done(pgd_t *base)
+ * be partially populated, and so it avoids stomping on any existing
+ * mappings.
+ */
+-static void __init pagetable_init (void)
++static void __init pagetable_init(void)
+ {
+- unsigned long vaddr, end;
+ pgd_t *pgd_base = swapper_pg_dir;
++ unsigned long vaddr, end;
+
+ paravirt_pagetable_setup_start(pgd_base);
+
+@@ -435,9 +412,11 @@ static void __init pagetable_init (void)
+ * Fixed mappings, only the page table structure has to be
+ * created - mappings will be set by set_fixmap():
+ */
++ early_ioremap_clear();
+ vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
+ end = (FIXADDR_TOP + PMD_SIZE - 1) & PMD_MASK;
+ page_table_range_init(vaddr, end, pgd_base);
++ early_ioremap_reset();
+
+ permanent_kmaps_init(pgd_base);
+
+@@ -450,7 +429,7 @@ static void __init pagetable_init (void)
+ * driver might have split up a kernel 4MB mapping.
+ */
+ char __nosavedata swsusp_pg_dir[PAGE_SIZE]
+- __attribute__ ((aligned (PAGE_SIZE)));
++ __attribute__ ((aligned(PAGE_SIZE)));
+
+ static inline void save_pg_dir(void)
+ {
+@@ -462,7 +441,7 @@ static inline void save_pg_dir(void)
+ }
+ #endif
+
+-void zap_low_mappings (void)
++void zap_low_mappings(void)
+ {
+ int i;
+
+@@ -474,22 +453,24 @@ void zap_low_mappings (void)
+ * Note that "pgd_clear()" doesn't do it for
+ * us, because pgd_clear() is a no-op on i386.
+ */
+- for (i = 0; i < USER_PTRS_PER_PGD; i++)
++ for (i = 0; i < USER_PTRS_PER_PGD; i++) {
+ #ifdef CONFIG_X86_PAE
+ set_pgd(swapper_pg_dir+i, __pgd(1 + __pa(empty_zero_page)));
+ #else
+ set_pgd(swapper_pg_dir+i, __pgd(0));
+ #endif
++ }
+ flush_tlb_all();
+ }
+
+-int nx_enabled = 0;
++int nx_enabled;
+
-+static void crypto_gcm_decrypt_done(struct crypto_async_request *areq, int err)
++pteval_t __supported_pte_mask __read_mostly = ~_PAGE_NX;
++EXPORT_SYMBOL_GPL(__supported_pte_mask);
+
+ #ifdef CONFIG_X86_PAE
+
+-static int disable_nx __initdata = 0;
+-u64 __supported_pte_mask __read_mostly = ~_PAGE_NX;
+-EXPORT_SYMBOL_GPL(__supported_pte_mask);
++static int disable_nx __initdata;
+
+ /*
+ * noexec = on|off
+@@ -506,11 +487,14 @@ static int __init noexec_setup(char *str)
+ __supported_pte_mask |= _PAGE_NX;
+ disable_nx = 0;
+ }
+- } else if (!strcmp(str,"off")) {
+- disable_nx = 1;
+- __supported_pte_mask &= ~_PAGE_NX;
+- } else
+- return -EINVAL;
++ } else {
++ if (!strcmp(str, "off")) {
++ disable_nx = 1;
++ __supported_pte_mask &= ~_PAGE_NX;
++ } else {
++ return -EINVAL;
++ }
++ }
+
+ return 0;
+ }
+@@ -522,6 +506,7 @@ static void __init set_nx(void)
+
+ if (cpu_has_pae && (cpuid_eax(0x80000000) > 0x80000001)) {
+ cpuid(0x80000001, &v[0], &v[1], &v[2], &v[3]);
++
+ if ((v[3] & (1 << 20)) && !disable_nx) {
+ rdmsr(MSR_EFER, l, h);
+ l |= EFER_NX;
+@@ -531,35 +516,6 @@ static void __init set_nx(void)
+ }
+ }
+ }
+-
+-/*
+- * Enables/disables executability of a given kernel page and
+- * returns the previous setting.
+- */
+-int __init set_kernel_exec(unsigned long vaddr, int enable)
+-{
+- pte_t *pte;
+- int ret = 1;
+-
+- if (!nx_enabled)
+- goto out;
+-
+- pte = lookup_address(vaddr);
+- BUG_ON(!pte);
+-
+- if (!pte_exec_kernel(*pte))
+- ret = 0;
+-
+- if (enable)
+- pte->pte_high &= ~(1 << (_PAGE_BIT_NX - 32));
+- else
+- pte->pte_high |= 1 << (_PAGE_BIT_NX - 32);
+- pte_update_defer(&init_mm, vaddr, pte);
+- __flush_tlb_all();
+-out:
+- return ret;
+-}
+-
+ #endif
+
+ /*
+@@ -574,9 +530,8 @@ void __init paging_init(void)
+ #ifdef CONFIG_X86_PAE
+ set_nx();
+ if (nx_enabled)
+- printk("NX (Execute Disable) protection: active\n");
++ printk(KERN_INFO "NX (Execute Disable) protection: active\n");
+ #endif
+-
+ pagetable_init();
+
+ load_cr3(swapper_pg_dir);
+@@ -600,10 +555,10 @@ void __init paging_init(void)
+ * used to involve black magic jumps to work around some nasty CPU bugs,
+ * but fortunately the switch to using exceptions got rid of all that.
+ */
+-
+ static void __init test_wp_bit(void)
+ {
+- printk("Checking if this processor honours the WP bit even in supervisor mode... ");
++ printk(KERN_INFO
++ "Checking if this processor honours the WP bit even in supervisor mode...");
+
+ /* Any page-aligned address will do, the test is non-destructive */
+ __set_fixmap(FIX_WP_TEST, __pa(&swapper_pg_dir), PAGE_READONLY);
+@@ -611,47 +566,46 @@ static void __init test_wp_bit(void)
+ clear_fixmap(FIX_WP_TEST);
+
+ if (!boot_cpu_data.wp_works_ok) {
+- printk("No.\n");
++ printk(KERN_CONT "No.\n");
+ #ifdef CONFIG_X86_WP_WORKS_OK
+- panic("This kernel doesn't support CPU's with broken WP. Recompile it for a 386!");
++ panic(
++ "This kernel doesn't support CPU's with broken WP. Recompile it for a 386!");
+ #endif
+ } else {
+- printk("Ok.\n");
++ printk(KERN_CONT "Ok.\n");
+ }
+ }
+
+-static struct kcore_list kcore_mem, kcore_vmalloc;
++static struct kcore_list kcore_mem, kcore_vmalloc;
+
+ void __init mem_init(void)
+ {
+- extern int ppro_with_ram_bug(void);
+ int codesize, reservedpages, datasize, initsize;
+- int tmp;
+- int bad_ppro;
++ int tmp, bad_ppro;
+
+ #ifdef CONFIG_FLATMEM
+ BUG_ON(!mem_map);
+ #endif
+-
+ bad_ppro = ppro_with_ram_bug();
+
+ #ifdef CONFIG_HIGHMEM
+ /* check that fixmap and pkmap do not overlap */
+- if (PKMAP_BASE+LAST_PKMAP*PAGE_SIZE >= FIXADDR_START) {
+- printk(KERN_ERR "fixmap and kmap areas overlap - this will crash\n");
++ if (PKMAP_BASE + LAST_PKMAP*PAGE_SIZE >= FIXADDR_START) {
++ printk(KERN_ERR
++ "fixmap and kmap areas overlap - this will crash\n");
+ printk(KERN_ERR "pkstart: %lxh pkend: %lxh fixstart %lxh\n",
+- PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE, FIXADDR_START);
++ PKMAP_BASE, PKMAP_BASE + LAST_PKMAP*PAGE_SIZE,
++ FIXADDR_START);
+ BUG();
+ }
+ #endif
+-
+ /* this will put all low memory onto the freelists */
+ totalram_pages += free_all_bootmem();
+
+ reservedpages = 0;
+ for (tmp = 0; tmp < max_low_pfn; tmp++)
+ /*
+- * Only count reserved RAM pages
++ * Only count reserved RAM pages:
+ */
+ if (page_is_ram(tmp) && PageReserved(pfn_to_page(tmp)))
+ reservedpages++;
+@@ -662,11 +616,12 @@ void __init mem_init(void)
+ datasize = (unsigned long) &_edata - (unsigned long) &_etext;
+ initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+
+- kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+- kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
++ kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
++ kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
+ VMALLOC_END-VMALLOC_START);
+
+- printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, %dk reserved, %dk data, %dk init, %ldk highmem)\n",
++ printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
++ "%dk reserved, %dk data, %dk init, %ldk highmem)\n",
+ (unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
+ num_physpages << (PAGE_SHIFT-10),
+ codesize >> 10,
+@@ -677,45 +632,46 @@ void __init mem_init(void)
+ );
+
+ #if 1 /* double-sanity-check paranoia */
+- printk("virtual kernel memory layout:\n"
+- " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
++ printk(KERN_INFO "virtual kernel memory layout:\n"
++ " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
+ #ifdef CONFIG_HIGHMEM
+- " pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
++ " pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
+ #endif
+- " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n"
+- " lowmem : 0x%08lx - 0x%08lx (%4ld MB)\n"
+- " .init : 0x%08lx - 0x%08lx (%4ld kB)\n"
+- " .data : 0x%08lx - 0x%08lx (%4ld kB)\n"
+- " .text : 0x%08lx - 0x%08lx (%4ld kB)\n",
+- FIXADDR_START, FIXADDR_TOP,
+- (FIXADDR_TOP - FIXADDR_START) >> 10,
++ " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n"
++ " lowmem : 0x%08lx - 0x%08lx (%4ld MB)\n"
++ " .init : 0x%08lx - 0x%08lx (%4ld kB)\n"
++ " .data : 0x%08lx - 0x%08lx (%4ld kB)\n"
++ " .text : 0x%08lx - 0x%08lx (%4ld kB)\n",
++ FIXADDR_START, FIXADDR_TOP,
++ (FIXADDR_TOP - FIXADDR_START) >> 10,
+
+ #ifdef CONFIG_HIGHMEM
+- PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE,
+- (LAST_PKMAP*PAGE_SIZE) >> 10,
++ PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE,
++ (LAST_PKMAP*PAGE_SIZE) >> 10,
+ #endif
+
+- VMALLOC_START, VMALLOC_END,
+- (VMALLOC_END - VMALLOC_START) >> 20,
++ VMALLOC_START, VMALLOC_END,
++ (VMALLOC_END - VMALLOC_START) >> 20,
+
+- (unsigned long)__va(0), (unsigned long)high_memory,
+- ((unsigned long)high_memory - (unsigned long)__va(0)) >> 20,
++ (unsigned long)__va(0), (unsigned long)high_memory,
++ ((unsigned long)high_memory - (unsigned long)__va(0)) >> 20,
+
+- (unsigned long)&__init_begin, (unsigned long)&__init_end,
+- ((unsigned long)&__init_end - (unsigned long)&__init_begin) >> 10,
++ (unsigned long)&__init_begin, (unsigned long)&__init_end,
++ ((unsigned long)&__init_end -
++ (unsigned long)&__init_begin) >> 10,
+
+- (unsigned long)&_etext, (unsigned long)&_edata,
+- ((unsigned long)&_edata - (unsigned long)&_etext) >> 10,
++ (unsigned long)&_etext, (unsigned long)&_edata,
++ ((unsigned long)&_edata - (unsigned long)&_etext) >> 10,
+
+- (unsigned long)&_text, (unsigned long)&_etext,
+- ((unsigned long)&_etext - (unsigned long)&_text) >> 10);
++ (unsigned long)&_text, (unsigned long)&_etext,
++ ((unsigned long)&_etext - (unsigned long)&_text) >> 10);
+
+ #ifdef CONFIG_HIGHMEM
+- BUG_ON(PKMAP_BASE+LAST_PKMAP*PAGE_SIZE > FIXADDR_START);
+- BUG_ON(VMALLOC_END > PKMAP_BASE);
++ BUG_ON(PKMAP_BASE + LAST_PKMAP*PAGE_SIZE > FIXADDR_START);
++ BUG_ON(VMALLOC_END > PKMAP_BASE);
+ #endif
+- BUG_ON(VMALLOC_START > VMALLOC_END);
+- BUG_ON((unsigned long)high_memory > VMALLOC_START);
++ BUG_ON(VMALLOC_START > VMALLOC_END);
++ BUG_ON((unsigned long)high_memory > VMALLOC_START);
+ #endif /* double-sanity-check paranoia */
+
+ #ifdef CONFIG_X86_PAE
+@@ -746,49 +702,38 @@ int arch_add_memory(int nid, u64 start, u64 size)
+
+ return __add_pages(zone, start_pfn, nr_pages);
+ }
+-
+ #endif
+
+-struct kmem_cache *pmd_cache;
+-
+-void __init pgtable_cache_init(void)
+-{
+- if (PTRS_PER_PMD > 1)
+- pmd_cache = kmem_cache_create("pmd",
+- PTRS_PER_PMD*sizeof(pmd_t),
+- PTRS_PER_PMD*sizeof(pmd_t),
+- SLAB_PANIC,
+- pmd_ctor);
+-}
+-
+ /*
+ * This function cannot be __init, since exceptions don't work in that
+ * section. Put this after the callers, so that it cannot be inlined.
+ */
+-static int noinline do_test_wp_bit(void)
++static noinline int do_test_wp_bit(void)
+ {
+ char tmp_reg;
+ int flag;
+
+ __asm__ __volatile__(
+- " movb %0,%1 \n"
+- "1: movb %1,%0 \n"
+- " xorl %2,%2 \n"
++ " movb %0, %1 \n"
++ "1: movb %1, %0 \n"
++ " xorl %2, %2 \n"
+ "2: \n"
+- ".section __ex_table,\"a\"\n"
++ ".section __ex_table, \"a\"\n"
+ " .align 4 \n"
+- " .long 1b,2b \n"
++ " .long 1b, 2b \n"
+ ".previous \n"
+ :"=m" (*(char *)fix_to_virt(FIX_WP_TEST)),
+ "=q" (tmp_reg),
+ "=r" (flag)
+ :"2" (1)
+ :"memory");
+-
++
+ return flag;
+ }
+
+ #ifdef CONFIG_DEBUG_RODATA
++const int rodata_test_data = 0xC3;
++EXPORT_SYMBOL_GPL(rodata_test_data);
+
+ void mark_rodata_ro(void)
+ {
+@@ -801,32 +746,58 @@ void mark_rodata_ro(void)
+ if (num_possible_cpus() <= 1)
+ #endif
+ {
+- change_page_attr(virt_to_page(start),
+- size >> PAGE_SHIFT, PAGE_KERNEL_RX);
+- printk("Write protecting the kernel text: %luk\n", size >> 10);
++ set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
++ printk(KERN_INFO "Write protecting the kernel text: %luk\n",
++ size >> 10);
++
++#ifdef CONFIG_CPA_DEBUG
++ printk(KERN_INFO "Testing CPA: Reverting %lx-%lx\n",
++ start, start+size);
++ set_pages_rw(virt_to_page(start), size>>PAGE_SHIFT);
++
++ printk(KERN_INFO "Testing CPA: write protecting again\n");
++ set_pages_ro(virt_to_page(start), size>>PAGE_SHIFT);
++#endif
+ }
+ #endif
+ start += size;
+ size = (unsigned long)__end_rodata - start;
+- change_page_attr(virt_to_page(start),
+- size >> PAGE_SHIFT, PAGE_KERNEL_RO);
+- printk("Write protecting the kernel read-only data: %luk\n",
+- size >> 10);
++ set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
++ printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
++ size >> 10);
++ rodata_test();
+
+- /*
+- * change_page_attr() requires a global_flush_tlb() call after it.
+- * We do this after the printk so that if something went wrong in the
+- * change, the printk gets out at least to give a better debug hint
+- * of who is the culprit.
+- */
+- global_flush_tlb();
++#ifdef CONFIG_CPA_DEBUG
++ printk(KERN_INFO "Testing CPA: undo %lx-%lx\n", start, start + size);
++ set_pages_rw(virt_to_page(start), size >> PAGE_SHIFT);
++
++ printk(KERN_INFO "Testing CPA: write protecting again\n");
++ set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
++#endif
+ }
+ #endif
+
+ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ {
++#ifdef CONFIG_DEBUG_PAGEALLOC
++ /*
++ * If debugging page accesses then do not free this memory but
++ * mark them not present - any buggy init-section access will
++ * create a kernel page fault:
++ */
++ printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
++ begin, PAGE_ALIGN(end));
++ set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
++#else
+ unsigned long addr;
+
++ /*
++ * We just marked the kernel text read only above, now that
++ * we are going to free part of that, we need to make that
++ * writeable first.
++ */
++ set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
++
+ for (addr = begin; addr < end; addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ init_page_count(virt_to_page(addr));
+@@ -835,6 +806,7 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ totalram_pages++;
+ }
+ printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
++#endif
+ }
+
+ void free_initmem(void)
+@@ -850,4 +822,3 @@ void free_initrd_mem(unsigned long start, unsigned long end)
+ free_init_pages("initrd memory", start, end);
+ }
+ #endif
+-
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 0f9c8c8..cc50a13 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -43,12 +43,10 @@
+ #include <asm/proto.h>
+ #include <asm/smp.h>
+ #include <asm/sections.h>
++#include <asm/kdebug.h>
++#include <asm/numa.h>
+
+-#ifndef Dprintk
+-#define Dprintk(x...)
+-#endif
+-
+-const struct dma_mapping_ops* dma_ops;
++const struct dma_mapping_ops *dma_ops;
+ EXPORT_SYMBOL(dma_ops);
+
+ static unsigned long dma_reserve __initdata;
+@@ -65,22 +63,26 @@ void show_mem(void)
+ {
+ long i, total = 0, reserved = 0;
+ long shared = 0, cached = 0;
+- pg_data_t *pgdat;
+ struct page *page;
++ pg_data_t *pgdat;
+
+ printk(KERN_INFO "Mem-info:\n");
+ show_free_areas();
+- printk(KERN_INFO "Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
++ printk(KERN_INFO "Free swap: %6ldkB\n",
++ nr_swap_pages << (PAGE_SHIFT-10));
+
+ for_each_online_pgdat(pgdat) {
+- for (i = 0; i < pgdat->node_spanned_pages; ++i) {
+- /* this loop can take a while with 256 GB and 4k pages
+- so update the NMI watchdog */
+- if (unlikely(i % MAX_ORDER_NR_PAGES == 0)) {
++ for (i = 0; i < pgdat->node_spanned_pages; ++i) {
++ /*
++ * This loop can take a while with 256 GB and
++ * 4k pages so defer the NMI watchdog:
++ */
++ if (unlikely(i % MAX_ORDER_NR_PAGES == 0))
+ touch_nmi_watchdog();
+- }
++
+ if (!pfn_valid(pgdat->node_start_pfn + i))
+ continue;
++
+ page = pfn_to_page(pgdat->node_start_pfn + i);
+ total++;
+ if (PageReserved(page))
+@@ -89,51 +91,58 @@ void show_mem(void)
+ cached++;
+ else if (page_count(page))
+ shared += page_count(page) - 1;
+- }
++ }
+ }
+- printk(KERN_INFO "%lu pages of RAM\n", total);
+- printk(KERN_INFO "%lu reserved pages\n",reserved);
+- printk(KERN_INFO "%lu pages shared\n",shared);
+- printk(KERN_INFO "%lu pages swap cached\n",cached);
++ printk(KERN_INFO "%lu pages of RAM\n", total);
++ printk(KERN_INFO "%lu reserved pages\n", reserved);
++ printk(KERN_INFO "%lu pages shared\n", shared);
++ printk(KERN_INFO "%lu pages swap cached\n", cached);
+ }
+
+ int after_bootmem;
+
+ static __init void *spp_getpage(void)
+-{
+{
-+ struct aead_request *req = areq->data;
+ void *ptr;
+
-+ if (!err)
-+ err = crypto_gcm_verify(req);
+ if (after_bootmem)
+- ptr = (void *) get_zeroed_page(GFP_ATOMIC);
++ ptr = (void *) get_zeroed_page(GFP_ATOMIC);
+ else
+ ptr = alloc_bootmem_pages(PAGE_SIZE);
+- if (!ptr || ((unsigned long)ptr & ~PAGE_MASK))
+- panic("set_pte_phys: cannot allocate page data %s\n", after_bootmem?"after bootmem":"");
+
+- Dprintk("spp_getpage %p\n", ptr);
++ if (!ptr || ((unsigned long)ptr & ~PAGE_MASK)) {
++ panic("set_pte_phys: cannot allocate page data %s\n",
++ after_bootmem ? "after bootmem" : "");
++ }
+
-+ aead_request_complete(req, err);
++ pr_debug("spp_getpage %p\n", ptr);
++
+ return ptr;
+-}
+}
+
+-static __init void set_pte_phys(unsigned long vaddr,
+- unsigned long phys, pgprot_t prot)
++static __init void
++set_pte_phys(unsigned long vaddr, unsigned long phys, pgprot_t prot)
+ {
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte, new_pte;
+
+- Dprintk("set_pte_phys %lx to %lx\n", vaddr, phys);
++ pr_debug("set_pte_phys %lx to %lx\n", vaddr, phys);
+
+ pgd = pgd_offset_k(vaddr);
+ if (pgd_none(*pgd)) {
+- printk("PGD FIXMAP MISSING, it should be setup in head.S!\n");
++ printk(KERN_ERR
++ "PGD FIXMAP MISSING, it should be setup in head.S!\n");
+ return;
+ }
+ pud = pud_offset(pgd, vaddr);
+ if (pud_none(*pud)) {
+- pmd = (pmd_t *) spp_getpage();
++ pmd = (pmd_t *) spp_getpage();
+ set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE | _PAGE_USER));
+ if (pmd != pmd_offset(pud, 0)) {
+- printk("PAGETABLE BUG #01! %p <-> %p\n", pmd, pmd_offset(pud,0));
++ printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
++ pmd, pmd_offset(pud, 0));
+ return;
+ }
+ }
+@@ -142,7 +151,7 @@ static __init void set_pte_phys(unsigned long vaddr,
+ pte = (pte_t *) spp_getpage();
+ set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE | _PAGE_USER));
+ if (pte != pte_offset_kernel(pmd, 0)) {
+- printk("PAGETABLE BUG #02!\n");
++ printk(KERN_ERR "PAGETABLE BUG #02!\n");
+ return;
+ }
+ }
+@@ -162,33 +171,35 @@ static __init void set_pte_phys(unsigned long vaddr,
+ }
+
+ /* NOTE: this is meant to be run only at boot */
+-void __init
+-__set_fixmap (enum fixed_addresses idx, unsigned long phys, pgprot_t prot)
++void __init
++__set_fixmap(enum fixed_addresses idx, unsigned long phys, pgprot_t prot)
+ {
+ unsigned long address = __fix_to_virt(idx);
+
+ if (idx >= __end_of_fixed_addresses) {
+- printk("Invalid __set_fixmap\n");
++ printk(KERN_ERR "Invalid __set_fixmap\n");
+ return;
+ }
+ set_pte_phys(address, phys, prot);
+ }
+
+-unsigned long __meminitdata table_start, table_end;
++static unsigned long __initdata table_start;
++static unsigned long __meminitdata table_end;
+
+ static __meminit void *alloc_low_page(unsigned long *phys)
+-{
++{
+ unsigned long pfn = table_end++;
+ void *adr;
+
+ if (after_bootmem) {
+ adr = (void *)get_zeroed_page(GFP_ATOMIC);
+ *phys = __pa(adr);
+
-+static int crypto_gcm_decrypt(struct aead_request *req)
+ return adr;
+ }
+
+- if (pfn >= end_pfn)
+- panic("alloc_low_page: ran out of memory");
++ if (pfn >= end_pfn)
++ panic("alloc_low_page: ran out of memory");
+
+ adr = early_ioremap(pfn * PAGE_SIZE, PAGE_SIZE);
+ memset(adr, 0, PAGE_SIZE);
+@@ -197,44 +208,49 @@ static __meminit void *alloc_low_page(unsigned long *phys)
+ }
+
+ static __meminit void unmap_low_page(void *adr)
+-{
+-
+{
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
-+ struct ablkcipher_request *abreq = &pctx->abreq;
-+ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
-+ unsigned int cryptlen = req->cryptlen;
-+ unsigned int authsize = crypto_aead_authsize(aead);
-+ int err;
+ if (after_bootmem)
+ return;
+
+ early_iounmap(adr, PAGE_SIZE);
+-}
++}
+
+ /* Must run before zap_low_mappings */
+ __meminit void *early_ioremap(unsigned long addr, unsigned long size)
+ {
+- unsigned long vaddr;
+ pmd_t *pmd, *last_pmd;
++ unsigned long vaddr;
+ int i, pmds;
+
+ pmds = ((addr & ~PMD_MASK) + size + ~PMD_MASK) / PMD_SIZE;
+ vaddr = __START_KERNEL_map;
+ pmd = level2_kernel_pgt;
+ last_pmd = level2_kernel_pgt + PTRS_PER_PMD - 1;
++
+ for (; pmd <= last_pmd; pmd++, vaddr += PMD_SIZE) {
+ for (i = 0; i < pmds; i++) {
+ if (pmd_present(pmd[i]))
+- goto next;
++ goto continue_outer_loop;
+ }
+ vaddr += addr & ~PMD_MASK;
+ addr &= PMD_MASK;
++
+ for (i = 0; i < pmds; i++, addr += PMD_SIZE)
+- set_pmd(pmd + i,__pmd(addr | _KERNPG_TABLE | _PAGE_PSE));
+- __flush_tlb();
++ set_pmd(pmd+i, __pmd(addr | __PAGE_KERNEL_LARGE_EXEC));
++ __flush_tlb_all();
++
+ return (void *)vaddr;
+- next:
++continue_outer_loop:
+ ;
+ }
+- printk("early_ioremap(0x%lx, %lu) failed\n", addr, size);
++ printk(KERN_ERR "early_ioremap(0x%lx, %lu) failed\n", addr, size);
+
-+ if (cryptlen < authsize)
-+ return -EINVAL;
-+ cryptlen -= authsize;
+ return NULL;
+ }
+
+-/* To avoid virtual aliases later */
++/*
++ * To avoid virtual aliases later:
++ */
+ __meminit void early_iounmap(void *addr, unsigned long size)
+ {
+ unsigned long vaddr;
+@@ -244,9 +260,11 @@ __meminit void early_iounmap(void *addr, unsigned long size)
+ vaddr = (unsigned long)addr;
+ pmds = ((vaddr & ~PMD_MASK) + size + ~PMD_MASK) / PMD_SIZE;
+ pmd = level2_kernel_pgt + pmd_index(vaddr);
+
-+ crypto_gcm_init_crypt(abreq, req, cryptlen);
-+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
-+ crypto_gcm_decrypt_done, req);
+ for (i = 0; i < pmds; i++)
+ pmd_clear(pmd + i);
+- __flush_tlb();
+
-+ crypto_gcm_ghash_update_sg(ghash, req->src, cryptlen);
++ __flush_tlb_all();
+ }
+
+ static void __meminit
+@@ -259,16 +277,17 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end)
+ pmd_t *pmd = pmd_page + pmd_index(address);
+
+ if (address >= end) {
+- if (!after_bootmem)
++ if (!after_bootmem) {
+ for (; i < PTRS_PER_PMD; i++, pmd++)
+ set_pmd(pmd, __pmd(0));
++ }
+ break;
+ }
+
+ if (pmd_val(*pmd))
+ continue;
+
+- entry = _PAGE_NX|_PAGE_PSE|_KERNPG_TABLE|_PAGE_GLOBAL|address;
++ entry = __PAGE_KERNEL_LARGE|_PAGE_GLOBAL|address;
+ entry &= __supported_pte_mask;
+ set_pmd(pmd, __pmd(entry));
+ }
+@@ -277,19 +296,19 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end)
+ static void __meminit
+ phys_pmd_update(pud_t *pud, unsigned long address, unsigned long end)
+ {
+- pmd_t *pmd = pmd_offset(pud,0);
++ pmd_t *pmd = pmd_offset(pud, 0);
+ spin_lock(&init_mm.page_table_lock);
+ phys_pmd_init(pmd, address, end);
+ spin_unlock(&init_mm.page_table_lock);
+ __flush_tlb_all();
+ }
+
+-static void __meminit phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end)
+-{
++static void __meminit
++phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end)
++{
+ int i = pud_index(addr);
+
+-
+- for (; i < PTRS_PER_PUD; i++, addr = (addr & PUD_MASK) + PUD_SIZE ) {
++ for (; i < PTRS_PER_PUD; i++, addr = (addr & PUD_MASK) + PUD_SIZE) {
+ unsigned long pmd_phys;
+ pud_t *pud = pud_page + pud_index(addr);
+ pmd_t *pmd;
+@@ -297,10 +316,11 @@ static void __meminit phys_pud_init(pud_t *pud_page, unsigned long addr, unsigne
+ if (addr >= end)
+ break;
+
+- if (!after_bootmem && !e820_any_mapped(addr,addr+PUD_SIZE,0)) {
+- set_pud(pud, __pud(0));
++ if (!after_bootmem &&
++ !e820_any_mapped(addr, addr+PUD_SIZE, 0)) {
++ set_pud(pud, __pud(0));
+ continue;
+- }
++ }
+
+ if (pud_val(*pud)) {
+ phys_pmd_update(pud, addr, end);
+@@ -308,14 +328,16 @@ static void __meminit phys_pud_init(pud_t *pud_page, unsigned long addr, unsigne
+ }
+
+ pmd = alloc_low_page(&pmd_phys);
+
-+ err = crypto_ablkcipher_decrypt(abreq);
-+ if (err)
-+ return err;
+ spin_lock(&init_mm.page_table_lock);
+ set_pud(pud, __pud(pmd_phys | _KERNPG_TABLE));
+ phys_pmd_init(pmd, addr, end);
+ spin_unlock(&init_mm.page_table_lock);
+
-+ return crypto_gcm_verify(req);
+ unmap_low_page(pmd);
+ }
+- __flush_tlb();
+-}
++ __flush_tlb_all();
+}
-+
-+static int crypto_gcm_init_tfm(struct crypto_tfm *tfm)
+
+ static void __init find_early_table_space(unsigned long end)
+ {
+@@ -326,14 +348,23 @@ static void __init find_early_table_space(unsigned long end)
+ tables = round_up(puds * sizeof(pud_t), PAGE_SIZE) +
+ round_up(pmds * sizeof(pmd_t), PAGE_SIZE);
+
+- /* RED-PEN putting page tables only on node 0 could
+- cause a hotspot and fill up ZONE_DMA. The page tables
+- need roughly 0.5KB per GB. */
+- start = 0x8000;
+- table_start = find_e820_area(start, end, tables);
++ /*
++ * RED-PEN putting page tables only on node 0 could
++ * cause a hotspot and fill up ZONE_DMA. The page tables
++ * need roughly 0.5KB per GB.
++ */
++ start = 0x8000;
++ table_start = find_e820_area(start, end, tables);
+ if (table_start == -1UL)
+ panic("Cannot find space for the kernel page tables");
+
++ /*
++ * When you have a lot of RAM like 256GB, early_table will not fit
++ * into 0x8000 range, find_e820_area() will find area after kernel
++ * bss but the table_start is not page aligned, so need to round it
++ * up to avoid overlap with bss:
++ */
++ table_start = round_up(table_start, PAGE_SIZE);
+ table_start >>= PAGE_SHIFT;
+ table_end = table_start;
+
+@@ -342,20 +373,23 @@ static void __init find_early_table_space(unsigned long end)
+ (table_start << PAGE_SHIFT) + tables);
+ }
+
+-/* Setup the direct mapping of the physical memory at PAGE_OFFSET.
+- This runs before bootmem is initialized and gets pages directly from the
+- physical memory. To access them they are temporarily mapped. */
++/*
++ * Setup the direct mapping of the physical memory at PAGE_OFFSET.
++ * This runs before bootmem is initialized and gets pages directly from
++ * the physical memory. To access them they are temporarily mapped.
++ */
+ void __init_refok init_memory_mapping(unsigned long start, unsigned long end)
+-{
+- unsigned long next;
+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct gcm_instance_ctx *ictx = crypto_instance_ctx(inst);
-+ struct crypto_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_ablkcipher *ctr;
-+ unsigned long align;
-+ int err;
++ unsigned long next;
+
+- Dprintk("init_memory_mapping\n");
++ pr_debug("init_memory_mapping\n");
+
+- /*
++ /*
+ * Find space for the kernel direct mapping tables.
+- * Later we should allocate these tables in the local node of the memory
+- * mapped. Unfortunately this is done currently before the nodes are
+- * discovered.
++ *
++ * Later we should allocate these tables in the local node of the
++ * memory mapped. Unfortunately this is done currently before the
++ * nodes are discovered.
+ */
+ if (!after_bootmem)
+ find_early_table_space(end);
+@@ -364,8 +398,8 @@ void __init_refok init_memory_mapping(unsigned long start, unsigned long end)
+ end = (unsigned long)__va(end);
+
+ for (; start < end; start = next) {
+- unsigned long pud_phys;
+ pgd_t *pgd = pgd_offset_k(start);
++ unsigned long pud_phys;
+ pud_t *pud;
+
+ if (after_bootmem)
+@@ -374,23 +408,26 @@ void __init_refok init_memory_mapping(unsigned long start, unsigned long end)
+ pud = alloc_low_page(&pud_phys);
+
+ next = start + PGDIR_SIZE;
+- if (next > end)
+- next = end;
++ if (next > end)
++ next = end;
+ phys_pud_init(pud, __pa(start), __pa(next));
+ if (!after_bootmem)
+ set_pgd(pgd_offset_k(start), mk_kernel_pgd(pud_phys));
+ unmap_low_page(pud);
+- }
++ }
+
+ if (!after_bootmem)
+ mmu_cr4_features = read_cr4();
+ __flush_tlb_all();
+
-+ ctr = crypto_spawn_skcipher(&ictx->ctr);
-+ err = PTR_ERR(ctr);
-+ if (IS_ERR(ctr))
-+ return err;
++ reserve_early(table_start << PAGE_SHIFT, table_end << PAGE_SHIFT);
+ }
+
+ #ifndef CONFIG_NUMA
+ void __init paging_init(void)
+ {
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+
-+ ctx->ctr = ctr;
-+ ctx->gf128 = NULL;
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+@@ -402,39 +439,48 @@ void __init paging_init(void)
+ }
+ #endif
+
+-/* Unmap a kernel mapping if it exists. This is useful to avoid prefetches
+- from the CPU leading to inconsistent cache lines. address and size
+- must be aligned to 2MB boundaries.
+- Does nothing when the mapping doesn't exist. */
+-void __init clear_kernel_mapping(unsigned long address, unsigned long size)
++/*
++ * Unmap a kernel mapping if it exists. This is useful to avoid
++ * prefetches from the CPU leading to inconsistent cache lines.
++ * address and size must be aligned to 2MB boundaries.
++ * Does nothing when the mapping doesn't exist.
++ */
++void __init clear_kernel_mapping(unsigned long address, unsigned long size)
+ {
+ unsigned long end = address + size;
+
+ BUG_ON(address & ~LARGE_PAGE_MASK);
+- BUG_ON(size & ~LARGE_PAGE_MASK);
+-
+- for (; address < end; address += LARGE_PAGE_SIZE) {
++ BUG_ON(size & ~LARGE_PAGE_MASK);
+
-+ align = crypto_tfm_alg_alignmask(tfm);
-+ align &= ~(crypto_tfm_ctx_alignment() - 1);
-+ tfm->crt_aead.reqsize = align +
-+ sizeof(struct crypto_gcm_req_priv_ctx) +
-+ crypto_ablkcipher_reqsize(ctr);
++ for (; address < end; address += LARGE_PAGE_SIZE) {
+ pgd_t *pgd = pgd_offset_k(address);
+ pud_t *pud;
+ pmd_t *pmd;
+
-+ return 0;
-+}
+ if (pgd_none(*pgd))
+ continue;
+
-+static void crypto_gcm_exit_tfm(struct crypto_tfm *tfm)
-+{
-+ struct crypto_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
+ pud = pud_offset(pgd, address);
+ if (pud_none(*pud))
+- continue;
++ continue;
+
-+ if (ctx->gf128 != NULL)
-+ gf128mul_free_4k(ctx->gf128);
+ pmd = pmd_offset(pud, address);
+ if (!pmd || pmd_none(*pmd))
+- continue;
+- if (0 == (pmd_val(*pmd) & _PAGE_PSE)) {
+- /* Could handle this, but it should not happen currently. */
+- printk(KERN_ERR
+- "clear_kernel_mapping: mapping has been split. will leak memory\n");
+- pmd_ERROR(*pmd);
++ continue;
+
-+ crypto_free_ablkcipher(ctx->ctr);
++ if (!(pmd_val(*pmd) & _PAGE_PSE)) {
++ /*
++ * Could handle this, but it should not happen
++ * currently:
++ */
++ printk(KERN_ERR "clear_kernel_mapping: "
++ "mapping has been split. will leak memory\n");
++ pmd_ERROR(*pmd);
+ }
+- set_pmd(pmd, __pmd(0));
++ set_pmd(pmd, __pmd(0));
+ }
+ __flush_tlb_all();
+-}
+}
+
+ /*
+ * Memory hotplug specific functions
+@@ -461,16 +507,12 @@ int arch_add_memory(int nid, u64 start, u64 size)
+ unsigned long nr_pages = size >> PAGE_SHIFT;
+ int ret;
+
+- init_memory_mapping(start, (start + size -1));
++ init_memory_mapping(start, start + size-1);
+
+ ret = __add_pages(zone, start_pfn, nr_pages);
+- if (ret)
+- goto error;
++ WARN_ON(1);
+
+ return ret;
+-error:
+- printk("%s: Problem encountered in __add_pages!\n", __func__);
+- return ret;
+ }
+ EXPORT_SYMBOL_GPL(arch_add_memory);
+
+@@ -484,36 +526,8 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+ #endif /* CONFIG_MEMORY_HOTPLUG */
+
+-#ifdef CONFIG_MEMORY_HOTPLUG_RESERVE
+-/*
+- * Memory Hotadd without sparsemem. The mem_maps have been allocated in advance,
+- * just online the pages.
+- */
+-int __add_pages(struct zone *z, unsigned long start_pfn, unsigned long nr_pages)
+-{
+- int err = -EIO;
+- unsigned long pfn;
+- unsigned long total = 0, mem = 0;
+- for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
+- if (pfn_valid(pfn)) {
+- online_page(pfn_to_page(pfn));
+- err = 0;
+- mem++;
+- }
+- total++;
+- }
+- if (!err) {
+- z->spanned_pages += total;
+- z->present_pages += mem;
+- z->zone_pgdat->node_spanned_pages += total;
+- z->zone_pgdat->node_present_pages += mem;
+- }
+- return err;
+-}
+-#endif
+-
+-static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel, kcore_modules,
+- kcore_vsyscall;
++static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel,
++ kcore_modules, kcore_vsyscall;
+
+ void __init mem_init(void)
+ {
+@@ -521,8 +535,15 @@ void __init mem_init(void)
+
+ pci_iommu_alloc();
+
+- /* clear the zero-page */
+- memset(empty_zero_page, 0, PAGE_SIZE);
++ /* clear_bss() already clear the empty_zero_page */
+
-+static struct crypto_instance *crypto_gcm_alloc_common(struct rtattr **tb,
-+ const char *full_name,
-+ const char *ctr_name)
-+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_alg *ctr;
-+ struct gcm_instance_ctx *ctx;
-+ int err;
++ /* temporary debugging - double check it's true: */
++ {
++ int i;
+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
++ for (i = 0; i < 1024; i++)
++ WARN_ON_ONCE(empty_zero_page[i]);
++ }
+
+ reservedpages = 0;
+
+@@ -534,7 +555,6 @@ void __init mem_init(void)
+ #endif
+ reservedpages = end_pfn - totalram_pages -
+ absent_pages_in_range(0, end_pfn);
+-
+ after_bootmem = 1;
+
+ codesize = (unsigned long) &_etext - (unsigned long) &_text;
+@@ -542,15 +562,16 @@ void __init mem_init(void)
+ initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+
+ /* Register memory areas for /proc/kcore */
+- kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
+- kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
++ kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
++ kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
+ VMALLOC_END-VMALLOC_START);
+ kclist_add(&kcore_kernel, &_stext, _end - _stext);
+ kclist_add(&kcore_modules, (void *)MODULES_VADDR, MODULES_LEN);
+- kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
++ kclist_add(&kcore_vsyscall, (void *)VSYSCALL_START,
+ VSYSCALL_END - VSYSCALL_START);
+
+- printk("Memory: %luk/%luk available (%ldk kernel code, %ldk reserved, %ldk data, %ldk init)\n",
++ printk(KERN_INFO "Memory: %luk/%luk available (%ldk kernel code, "
++ "%ldk reserved, %ldk data, %ldk init)\n",
+ (unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
+ end_pfn << (PAGE_SHIFT-10),
+ codesize >> 10,
+@@ -566,19 +587,27 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ if (begin >= end)
+ return;
+
++ /*
++ * If debugging page accesses then do not free this memory but
++ * mark them not present - any buggy init-section access will
++ * create a kernel page fault:
++ */
++#ifdef CONFIG_DEBUG_PAGEALLOC
++ printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
++ begin, PAGE_ALIGN(end));
++ set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
++#else
+ printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
++
+ for (addr = begin; addr < end; addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ init_page_count(virt_to_page(addr));
+ memset((void *)(addr & ~(PAGE_SIZE-1)),
+ POISON_FREE_INITMEM, PAGE_SIZE);
+- if (addr >= __START_KERNEL_map)
+- change_page_attr_addr(addr, 1, __pgprot(0));
+ free_page(addr);
+ totalram_pages++;
+ }
+- if (addr > __START_KERNEL_map)
+- global_flush_tlb();
++#endif
+ }
+
+ void free_initmem(void)
+@@ -589,6 +618,8 @@ void free_initmem(void)
+ }
+
+ #ifdef CONFIG_DEBUG_RODATA
++const int rodata_test_data = 0xC3;
++EXPORT_SYMBOL_GPL(rodata_test_data);
+
+ void mark_rodata_ro(void)
+ {
+@@ -603,25 +634,27 @@ void mark_rodata_ro(void)
+ #ifdef CONFIG_KPROBES
+ start = (unsigned long)__start_rodata;
+ #endif
+-
+
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
-+ return ERR_PTR(-EINVAL);
+ end = (unsigned long)__end_rodata;
+ start = (start + PAGE_SIZE - 1) & PAGE_MASK;
+ end &= PAGE_MASK;
+ if (end <= start)
+ return;
+
+- change_page_attr_addr(start, (end - start) >> PAGE_SHIFT, PAGE_KERNEL_RO);
++ set_memory_ro(start, (end - start) >> PAGE_SHIFT);
+
+ printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
+ (end - start) >> 10);
+
+- /*
+- * change_page_attr_addr() requires a global_flush_tlb() call after it.
+- * We do this after the printk so that if something went wrong in the
+- * change, the printk gets out at least to give a better debug hint
+- * of who is the culprit.
+- */
+- global_flush_tlb();
++ rodata_test();
+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
-+ if (!inst)
-+ return ERR_PTR(-ENOMEM);
++#ifdef CONFIG_CPA_DEBUG
++ printk(KERN_INFO "Testing CPA: undo %lx-%lx\n", start, end);
++ set_memory_rw(start, (end-start) >> PAGE_SHIFT);
+
-+ ctx = crypto_instance_ctx(inst);
-+ crypto_set_skcipher_spawn(&ctx->ctr, inst);
-+ err = crypto_grab_skcipher(&ctx->ctr, ctr_name, 0,
-+ crypto_requires_sync(algt->type,
-+ algt->mask));
-+ if (err)
-+ goto err_free_inst;
++ printk(KERN_INFO "Testing CPA: again\n");
++ set_memory_ro(start, (end-start) >> PAGE_SHIFT);
++#endif
+ }
+ #endif
+
+@@ -632,17 +665,21 @@ void free_initrd_mem(unsigned long start, unsigned long end)
+ }
+ #endif
+
+-void __init reserve_bootmem_generic(unsigned long phys, unsigned len)
+-{
++void __init reserve_bootmem_generic(unsigned long phys, unsigned len)
++{
+ #ifdef CONFIG_NUMA
+ int nid = phys_to_nid(phys);
+ #endif
+ unsigned long pfn = phys >> PAGE_SHIFT;
+
-+ ctr = crypto_skcipher_spawn_alg(&ctx->ctr);
+ if (pfn >= end_pfn) {
+- /* This can happen with kdump kernels when accessing firmware
+- tables. */
++ /*
++ * This can happen with kdump kernels when accessing
++ * firmware tables:
++ */
+ if (pfn < end_pfn_map)
+ return;
+
-+ /* We only support 16-byte blocks. */
-+ if (ctr->cra_ablkcipher.ivsize != 16)
-+ goto out_put_ctr;
+ printk(KERN_ERR "reserve_bootmem: illegal reserve %lx %u\n",
+ phys, len);
+ return;
+@@ -650,9 +687,9 @@ void __init reserve_bootmem_generic(unsigned long phys, unsigned len)
+
+ /* Should check here against the e820 map to avoid double free */
+ #ifdef CONFIG_NUMA
+- reserve_bootmem_node(NODE_DATA(nid), phys, len);
+-#else
+- reserve_bootmem(phys, len);
++ reserve_bootmem_node(NODE_DATA(nid), phys, len);
++#else
++ reserve_bootmem(phys, len);
+ #endif
+ if (phys+len <= MAX_DMA_PFN*PAGE_SIZE) {
+ dma_reserve += len / PAGE_SIZE;
+@@ -660,46 +697,49 @@ void __init reserve_bootmem_generic(unsigned long phys, unsigned len)
+ }
+ }
+
+-int kern_addr_valid(unsigned long addr)
+-{
++int kern_addr_valid(unsigned long addr)
++{
+ unsigned long above = ((long)addr) >> __VIRTUAL_MASK_SHIFT;
+- pgd_t *pgd;
+- pud_t *pud;
+- pmd_t *pmd;
+- pte_t *pte;
++ pgd_t *pgd;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
+
+ if (above != 0 && above != -1UL)
+- return 0;
+-
++ return 0;
+
-+ /* Not a stream cipher? */
-+ err = -EINVAL;
-+ if (ctr->cra_blocksize != 1)
-+ goto out_put_ctr;
+ pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd))
+ return 0;
+
+ pud = pud_offset(pgd, addr);
+ if (pud_none(*pud))
+- return 0;
++ return 0;
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ return 0;
+
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "gcm_base(%s)", ctr->cra_driver_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto out_put_ctr;
+ if (pmd_large(*pmd))
+ return pfn_valid(pmd_pfn(*pmd));
+
+ pte = pte_offset_kernel(pmd, addr);
+ if (pte_none(*pte))
+ return 0;
+
-+ memcpy(inst->alg.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+ return pfn_valid(pte_pfn(*pte));
+ }
+
+-/* A pseudo VMA to allow ptrace access for the vsyscall page. This only
+- covers the 64bit vsyscall page now. 32bit has a real VMA now and does
+- not need special handling anymore. */
+-
++/*
++ * A pseudo VMA to allow ptrace access for the vsyscall page. This only
++ * covers the 64bit vsyscall page now. 32bit has a real VMA now and does
++ * not need special handling anymore:
++ */
+ static struct vm_area_struct gate_vma = {
+- .vm_start = VSYSCALL_START,
+- .vm_end = VSYSCALL_START + (VSYSCALL_MAPPED_PAGES << PAGE_SHIFT),
+- .vm_page_prot = PAGE_READONLY_EXEC,
+- .vm_flags = VM_READ | VM_EXEC
++ .vm_start = VSYSCALL_START,
++ .vm_end = VSYSCALL_START + (VSYSCALL_MAPPED_PAGES * PAGE_SIZE),
++ .vm_page_prot = PAGE_READONLY_EXEC,
++ .vm_flags = VM_READ | VM_EXEC
+ };
+
+ struct vm_area_struct *get_gate_vma(struct task_struct *tsk)
+@@ -714,14 +754,17 @@ struct vm_area_struct *get_gate_vma(struct task_struct *tsk)
+ int in_gate_area(struct task_struct *task, unsigned long addr)
+ {
+ struct vm_area_struct *vma = get_gate_vma(task);
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
-+ inst->alg.cra_flags |= ctr->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = ctr->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = ctr->cra_alignmask | (__alignof__(u64) - 1);
-+ inst->alg.cra_type = &crypto_aead_type;
-+ inst->alg.cra_aead.ivsize = 16;
-+ inst->alg.cra_aead.maxauthsize = 16;
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
-+ inst->alg.cra_init = crypto_gcm_init_tfm;
-+ inst->alg.cra_exit = crypto_gcm_exit_tfm;
-+ inst->alg.cra_aead.setkey = crypto_gcm_setkey;
-+ inst->alg.cra_aead.setauthsize = crypto_gcm_setauthsize;
-+ inst->alg.cra_aead.encrypt = crypto_gcm_encrypt;
-+ inst->alg.cra_aead.decrypt = crypto_gcm_decrypt;
+ if (!vma)
+ return 0;
+
-+out:
-+ return inst;
+ return (addr >= vma->vm_start) && (addr < vma->vm_end);
+ }
+
+-/* Use this when you have no reliable task/vma, typically from interrupt
+- * context. It is less reliable than using the task's vma and may give
+- * false positives.
++/*
++ * Use this when you have no reliable task/vma, typically from interrupt
++ * context. It is less reliable than using the task's vma and may give
++ * false positives:
+ */
+ int in_gate_area_no_task(unsigned long addr)
+ {
+@@ -741,8 +784,8 @@ const char *arch_vma_name(struct vm_area_struct *vma)
+ /*
+ * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
+ */
+-int __meminit vmemmap_populate(struct page *start_page,
+- unsigned long size, int node)
++int __meminit
++vmemmap_populate(struct page *start_page, unsigned long size, int node)
+ {
+ unsigned long addr = (unsigned long)start_page;
+ unsigned long end = (unsigned long)(start_page + size);
+@@ -757,6 +800,7 @@ int __meminit vmemmap_populate(struct page *start_page,
+ pgd = vmemmap_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
-+out_put_ctr:
-+ crypto_drop_skcipher(&ctx->ctr);
-+err_free_inst:
-+ kfree(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
-+}
+ pud = vmemmap_pud_populate(pgd, addr, node);
+ if (!pud)
+ return -ENOMEM;
+@@ -764,20 +808,22 @@ int __meminit vmemmap_populate(struct page *start_page,
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd)) {
+ pte_t entry;
+- void *p = vmemmap_alloc_block(PMD_SIZE, node);
++ void *p;
+
-+static struct crypto_instance *crypto_gcm_alloc(struct rtattr **tb)
-+{
-+ int err;
-+ const char *cipher_name;
-+ char ctr_name[CRYPTO_MAX_ALG_NAME];
-+ char full_name[CRYPTO_MAX_ALG_NAME];
++ p = vmemmap_alloc_block(PMD_SIZE, node);
+ if (!p)
+ return -ENOMEM;
+
+- entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+- mk_pte_huge(entry);
++ entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
++ PAGE_KERNEL_LARGE);
+ set_pmd(pmd, __pmd(pte_val(entry)));
+
+ printk(KERN_DEBUG " [%lx-%lx] PMD ->%p on node %d\n",
+ addr, addr + PMD_SIZE - 1, p, node);
+- } else
++ } else {
+ vmemmap_verify((pte_t *)pmd, node, addr, next);
++ }
+ }
+-
+ return 0;
+ }
+ #endif
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+new file mode 100644
+index 0000000..ed79572
+--- /dev/null
++++ b/arch/x86/mm/ioremap.c
+@@ -0,0 +1,501 @@
++/*
++ * Re-map IO memory to kernel address space so that we can access it.
++ * This is needed for high PCI addresses that aren't mapped in the
++ * 640k-1MB IO memory area on PC's
++ *
++ * (C) Copyright 1995 1996 Linus Torvalds
++ */
+
-+ cipher_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(cipher_name);
-+ if (IS_ERR(cipher_name))
-+ return ERR_PTR(err);
++#include <linux/bootmem.h>
++#include <linux/init.h>
++#include <linux/io.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/vmalloc.h>
+
-+ if (snprintf(ctr_name, CRYPTO_MAX_ALG_NAME, "ctr(%s)", cipher_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
++#include <asm/cacheflush.h>
++#include <asm/e820.h>
++#include <asm/fixmap.h>
++#include <asm/pgtable.h>
++#include <asm/tlbflush.h>
++#include <asm/pgalloc.h>
+
-+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
++enum ioremap_mode {
++ IOR_MODE_UNCACHED,
++ IOR_MODE_CACHED,
++};
+
-+ return crypto_gcm_alloc_common(tb, full_name, ctr_name);
-+}
++#ifdef CONFIG_X86_64
+
-+static void crypto_gcm_free(struct crypto_instance *inst)
++unsigned long __phys_addr(unsigned long x)
+{
-+ struct gcm_instance_ctx *ctx = crypto_instance_ctx(inst);
-+
-+ crypto_drop_skcipher(&ctx->ctr);
-+ kfree(inst);
++ if (x >= __START_KERNEL_map)
++ return x - __START_KERNEL_map + phys_base;
++ return x - PAGE_OFFSET;
+}
++EXPORT_SYMBOL(__phys_addr);
+
-+static struct crypto_template crypto_gcm_tmpl = {
-+ .name = "gcm",
-+ .alloc = crypto_gcm_alloc,
-+ .free = crypto_gcm_free,
-+ .module = THIS_MODULE,
-+};
++#endif
+
-+static struct crypto_instance *crypto_gcm_base_alloc(struct rtattr **tb)
++int page_is_ram(unsigned long pagenr)
+{
-+ int err;
-+ const char *ctr_name;
-+ char full_name[CRYPTO_MAX_ALG_NAME];
++ unsigned long addr, end;
++ int i;
+
-+ ctr_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(ctr_name);
-+ if (IS_ERR(ctr_name))
-+ return ERR_PTR(err);
++ for (i = 0; i < e820.nr_map; i++) {
++ /*
++ * Not usable memory:
++ */
++ if (e820.map[i].type != E820_RAM)
++ continue;
++ addr = (e820.map[i].addr + PAGE_SIZE-1) >> PAGE_SHIFT;
++ end = (e820.map[i].addr + e820.map[i].size) >> PAGE_SHIFT;
+
-+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s)",
-+ ctr_name) >= CRYPTO_MAX_ALG_NAME)
-+ return ERR_PTR(-ENAMETOOLONG);
++ /*
++ * Sanity check: Some BIOSen report areas as RAM that
++ * are not. Notably the 640->1Mb area, which is the
++ * PCI BIOS area.
++ */
++ if (addr >= (BIOS_BEGIN >> PAGE_SHIFT) &&
++ end < (BIOS_END >> PAGE_SHIFT))
++ continue;
+
-+ return crypto_gcm_alloc_common(tb, full_name, ctr_name);
++ if ((pagenr >= addr) && (pagenr < end))
++ return 1;
++ }
++ return 0;
+}
+
-+static struct crypto_template crypto_gcm_base_tmpl = {
-+ .name = "gcm_base",
-+ .alloc = crypto_gcm_base_alloc,
-+ .free = crypto_gcm_free,
-+ .module = THIS_MODULE,
-+};
-+
-+static int crypto_rfc4106_setkey(struct crypto_aead *parent, const u8 *key,
-+ unsigned int keylen)
++/*
++ * Fix up the linear direct mapping of the kernel to avoid cache attribute
++ * conflicts.
++ */
++static int ioremap_change_attr(unsigned long paddr, unsigned long size,
++ enum ioremap_mode mode)
+{
-+ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(parent);
-+ struct crypto_aead *child = ctx->child;
-+ int err;
++ unsigned long vaddr = (unsigned long)__va(paddr);
++ unsigned long nrpages = size >> PAGE_SHIFT;
++ int err, level;
+
-+ if (keylen < 4)
-+ return -EINVAL;
++ /* No change for pages after the last mapping */
++ if ((paddr + size - 1) >= (max_pfn_mapped << PAGE_SHIFT))
++ return 0;
+
-+ keylen -= 4;
-+ memcpy(ctx->nonce, key + keylen, 4);
++ /*
++ * If there is no identity map for this address,
++ * change_page_attr_addr is unnecessary
++ */
++ if (!lookup_address(vaddr, &level))
++ return 0;
+
-+ crypto_aead_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-+ crypto_aead_set_flags(child, crypto_aead_get_flags(parent) &
-+ CRYPTO_TFM_REQ_MASK);
-+ err = crypto_aead_setkey(child, key, keylen);
-+ crypto_aead_set_flags(parent, crypto_aead_get_flags(child) &
-+ CRYPTO_TFM_RES_MASK);
++ switch (mode) {
++ case IOR_MODE_UNCACHED:
++ default:
++ err = set_memory_uc(vaddr, nrpages);
++ break;
++ case IOR_MODE_CACHED:
++ err = set_memory_wb(vaddr, nrpages);
++ break;
++ }
+
+ return err;
+}
+
-+static int crypto_rfc4106_setauthsize(struct crypto_aead *parent,
-+ unsigned int authsize)
++/*
++ * Remap an arbitrary physical address space into the kernel virtual
++ * address space. Needed when the kernel wants to access high addresses
++ * directly.
++ *
++ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
++ * have to convert them into an offset in a page-aligned mapping, but the
++ * caller shouldn't need to know that small detail.
++ */
++static void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
++ enum ioremap_mode mode)
+{
-+ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(parent);
++ void __iomem *addr;
++ struct vm_struct *area;
++ unsigned long offset, last_addr;
++ pgprot_t prot;
+
-+ switch (authsize) {
-+ case 8:
-+ case 12:
-+ case 16:
-+ break;
-+ default:
-+ return -EINVAL;
++ /* Don't allow wraparound or zero size */
++ last_addr = phys_addr + size - 1;
++ if (!size || last_addr < phys_addr)
++ return NULL;
++
++ /*
++ * Don't remap the low PCI/ISA area, it's always mapped..
++ */
++ if (phys_addr >= ISA_START_ADDRESS && last_addr < ISA_END_ADDRESS)
++ return (__force void __iomem *)phys_to_virt(phys_addr);
++
++ /*
++ * Don't allow anybody to remap normal RAM that we're using..
++ */
++ for (offset = phys_addr >> PAGE_SHIFT; offset < max_pfn_mapped &&
++ (offset << PAGE_SHIFT) < last_addr; offset++) {
++ if (page_is_ram(offset))
++ return NULL;
+ }
+
-+ return crypto_aead_setauthsize(ctx->child, authsize);
-+}
++ switch (mode) {
++ case IOR_MODE_UNCACHED:
++ default:
++ prot = PAGE_KERNEL_NOCACHE;
++ break;
++ case IOR_MODE_CACHED:
++ prot = PAGE_KERNEL;
++ break;
++ }
+
-+static struct aead_request *crypto_rfc4106_crypt(struct aead_request *req)
-+{
-+ struct aead_request *subreq = aead_request_ctx(req);
-+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
-+ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(aead);
-+ struct crypto_aead *child = ctx->child;
-+ u8 *iv = PTR_ALIGN((u8 *)(subreq + 1) + crypto_aead_reqsize(child),
-+ crypto_aead_alignmask(child) + 1);
++ /*
++ * Mappings have to be page-aligned
++ */
++ offset = phys_addr & ~PAGE_MASK;
++ phys_addr &= PAGE_MASK;
++ size = PAGE_ALIGN(last_addr+1) - phys_addr;
+
-+ memcpy(iv, ctx->nonce, 4);
-+ memcpy(iv + 4, req->iv, 8);
++ /*
++ * Ok, go for it..
++ */
++ area = get_vm_area(size, VM_IOREMAP);
++ if (!area)
++ return NULL;
++ area->phys_addr = phys_addr;
++ addr = (void __iomem *) area->addr;
++ if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
++ phys_addr, prot)) {
++ remove_vm_area((void *)(PAGE_MASK & (unsigned long) addr));
++ return NULL;
++ }
+
-+ aead_request_set_tfm(subreq, child);
-+ aead_request_set_callback(subreq, req->base.flags, req->base.complete,
-+ req->base.data);
-+ aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, iv);
-+ aead_request_set_assoc(subreq, req->assoc, req->assoclen);
++ if (ioremap_change_attr(phys_addr, size, mode) < 0) {
++ vunmap(addr);
++ return NULL;
++ }
+
-+ return subreq;
++ return (void __iomem *) (offset + (char __iomem *)addr);
+}
+
-+static int crypto_rfc4106_encrypt(struct aead_request *req)
++/**
++ * ioremap_nocache - map bus memory into CPU space
++ * @offset: bus address of the memory
++ * @size: size of the resource to map
++ *
++ * ioremap_nocache performs a platform specific sequence of operations to
++ * make bus memory CPU accessible via the readb/readw/readl/writeb/
++ * writew/writel functions and the other mmio helpers. The returned
++ * address is not guaranteed to be usable directly as a virtual
++ * address.
++ *
++ * This version of ioremap ensures that the memory is marked uncachable
++ * on the CPU as well as honouring existing caching rules from things like
++ * the PCI bus. Note that there are other caches and buffers on many
++ * busses. In particular driver authors should read up on PCI writes
++ *
++ * It's useful if some control registers are in such an area and
++ * write combining or read caching is not desirable:
++ *
++ * Must be freed with iounmap.
++ */
++void __iomem *ioremap_nocache(unsigned long phys_addr, unsigned long size)
+{
-+ req = crypto_rfc4106_crypt(req);
-+
-+ return crypto_aead_encrypt(req);
++ return __ioremap(phys_addr, size, IOR_MODE_UNCACHED);
+}
++EXPORT_SYMBOL(ioremap_nocache);
+
-+static int crypto_rfc4106_decrypt(struct aead_request *req)
++void __iomem *ioremap_cache(unsigned long phys_addr, unsigned long size)
+{
-+ req = crypto_rfc4106_crypt(req);
-+
-+ return crypto_aead_decrypt(req);
++ return __ioremap(phys_addr, size, IOR_MODE_CACHED);
+}
++EXPORT_SYMBOL(ioremap_cache);
+
-+static int crypto_rfc4106_init_tfm(struct crypto_tfm *tfm)
++/**
++ * iounmap - Free a IO remapping
++ * @addr: virtual address from ioremap_*
++ *
++ * Caller must ensure there is only one unmapping for the same pointer.
++ */
++void iounmap(volatile void __iomem *addr)
+{
-+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
-+ struct crypto_aead_spawn *spawn = crypto_instance_ctx(inst);
-+ struct crypto_rfc4106_ctx *ctx = crypto_tfm_ctx(tfm);
-+ struct crypto_aead *aead;
-+ unsigned long align;
++ struct vm_struct *p, *o;
+
-+ aead = crypto_spawn_aead(spawn);
-+ if (IS_ERR(aead))
-+ return PTR_ERR(aead);
++ if ((void __force *)addr <= high_memory)
++ return;
+
-+ ctx->child = aead;
++ /*
++ * __ioremap special-cases the PCI/ISA range by not instantiating a
++ * vm_area and by simply returning an address into the kernel mapping
++ * of ISA space. So handle that here.
++ */
++ if (addr >= phys_to_virt(ISA_START_ADDRESS) &&
++ addr < phys_to_virt(ISA_END_ADDRESS))
++ return;
+
-+ align = crypto_aead_alignmask(aead);
-+ align &= ~(crypto_tfm_ctx_alignment() - 1);
-+ tfm->crt_aead.reqsize = sizeof(struct aead_request) +
-+ ALIGN(crypto_aead_reqsize(aead),
-+ crypto_tfm_ctx_alignment()) +
-+ align + 16;
++ addr = (volatile void __iomem *)
++ (PAGE_MASK & (unsigned long __force)addr);
+
-+ return 0;
++ /* Use the vm area unlocked, assuming the caller
++ ensures there isn't another iounmap for the same address
++ in parallel. Reuse of the virtual address is prevented by
++ leaving it in the global lists until we're done with it.
++ cpa takes care of the direct mappings. */
++ read_lock(&vmlist_lock);
++ for (p = vmlist; p; p = p->next) {
++ if (p->addr == addr)
++ break;
++ }
++ read_unlock(&vmlist_lock);
++
++ if (!p) {
++ printk(KERN_ERR "iounmap: bad address %p\n", addr);
++ dump_stack();
++ return;
++ }
++
++ /* Reset the direct mapping. Can block */
++ ioremap_change_attr(p->phys_addr, p->size, IOR_MODE_CACHED);
++
++ /* Finally remove it */
++ o = remove_vm_area((void *)addr);
++ BUG_ON(p != o || o == NULL);
++ kfree(p);
+}
++EXPORT_SYMBOL(iounmap);
+
-+static void crypto_rfc4106_exit_tfm(struct crypto_tfm *tfm)
++#ifdef CONFIG_X86_32
++
++int __initdata early_ioremap_debug;
++
++static int __init early_ioremap_debug_setup(char *str)
+{
-+ struct crypto_rfc4106_ctx *ctx = crypto_tfm_ctx(tfm);
++ early_ioremap_debug = 1;
+
-+ crypto_free_aead(ctx->child);
++ return 0;
+}
++early_param("early_ioremap_debug", early_ioremap_debug_setup);
+
-+static struct crypto_instance *crypto_rfc4106_alloc(struct rtattr **tb)
++static __initdata int after_paging_init;
++static __initdata unsigned long bm_pte[1024]
++ __attribute__((aligned(PAGE_SIZE)));
++
++static inline unsigned long * __init early_ioremap_pgd(unsigned long addr)
+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ struct crypto_aead_spawn *spawn;
-+ struct crypto_alg *alg;
-+ const char *ccm_name;
-+ int err;
++ return (unsigned long *)swapper_pg_dir + ((addr >> 22) & 1023);
++}
+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
++static inline unsigned long * __init early_ioremap_pte(unsigned long addr)
++{
++ return bm_pte + ((addr >> PAGE_SHIFT) & 1023);
++}
+
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
-+ return ERR_PTR(-EINVAL);
++void __init early_ioremap_init(void)
++{
++ unsigned long *pgd;
+
-+ ccm_name = crypto_attr_alg_name(tb[1]);
-+ err = PTR_ERR(ccm_name);
-+ if (IS_ERR(ccm_name))
-+ return ERR_PTR(err);
++ if (early_ioremap_debug)
++ printk(KERN_INFO "early_ioremap_init()\n");
+
-+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-+ if (!inst)
-+ return ERR_PTR(-ENOMEM);
++ pgd = early_ioremap_pgd(fix_to_virt(FIX_BTMAP_BEGIN));
++ *pgd = __pa(bm_pte) | _PAGE_TABLE;
++ memset(bm_pte, 0, sizeof(bm_pte));
++ /*
++ * The boot-ioremap range spans multiple pgds, for which
++ * we are not prepared:
++ */
++ if (pgd != early_ioremap_pgd(fix_to_virt(FIX_BTMAP_END))) {
++ WARN_ON(1);
++ printk(KERN_WARNING "pgd %p != %p\n",
++ pgd, early_ioremap_pgd(fix_to_virt(FIX_BTMAP_END)));
++ printk(KERN_WARNING "fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
++ fix_to_virt(FIX_BTMAP_BEGIN));
++ printk(KERN_WARNING "fix_to_virt(FIX_BTMAP_END): %08lx\n",
++ fix_to_virt(FIX_BTMAP_END));
+
-+ spawn = crypto_instance_ctx(inst);
-+ crypto_set_aead_spawn(spawn, inst);
-+ err = crypto_grab_aead(spawn, ccm_name, 0,
-+ crypto_requires_sync(algt->type, algt->mask));
-+ if (err)
-+ goto out_free_inst;
++ printk(KERN_WARNING "FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
++ printk(KERN_WARNING "FIX_BTMAP_BEGIN: %d\n",
++ FIX_BTMAP_BEGIN);
++ }
++}
+
-+ alg = crypto_aead_spawn_alg(spawn);
++void __init early_ioremap_clear(void)
++{
++ unsigned long *pgd;
+
-+ err = -EINVAL;
++ if (early_ioremap_debug)
++ printk(KERN_INFO "early_ioremap_clear()\n");
+
-+ /* We only support 16-byte blocks. */
-+ if (alg->cra_aead.ivsize != 16)
-+ goto out_drop_alg;
++ pgd = early_ioremap_pgd(fix_to_virt(FIX_BTMAP_BEGIN));
++ *pgd = 0;
++ paravirt_release_pt(__pa(pgd) >> PAGE_SHIFT);
++ __flush_tlb_all();
++}
+
-+ /* Not a stream cipher? */
-+ if (alg->cra_blocksize != 1)
-+ goto out_drop_alg;
++void __init early_ioremap_reset(void)
++{
++ enum fixed_addresses idx;
++ unsigned long *pte, phys, addr;
+
-+ err = -ENAMETOOLONG;
-+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-+ "rfc4106(%s)", alg->cra_name) >= CRYPTO_MAX_ALG_NAME ||
-+ snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-+ "rfc4106(%s)", alg->cra_driver_name) >=
-+ CRYPTO_MAX_ALG_NAME)
-+ goto out_drop_alg;
++ after_paging_init = 1;
++ for (idx = FIX_BTMAP_BEGIN; idx >= FIX_BTMAP_END; idx--) {
++ addr = fix_to_virt(idx);
++ pte = early_ioremap_pte(addr);
++ if (*pte & _PAGE_PRESENT) {
++ phys = *pte & PAGE_MASK;
++ set_fixmap(idx, phys);
++ }
++ }
++}
+
-+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
-+ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
-+ inst->alg.cra_priority = alg->cra_priority;
-+ inst->alg.cra_blocksize = 1;
-+ inst->alg.cra_alignmask = alg->cra_alignmask;
-+ inst->alg.cra_type = &crypto_nivaead_type;
++static void __init __early_set_fixmap(enum fixed_addresses idx,
++ unsigned long phys, pgprot_t flags)
++{
++ unsigned long *pte, addr = __fix_to_virt(idx);
+
-+ inst->alg.cra_aead.ivsize = 8;
-+ inst->alg.cra_aead.maxauthsize = 16;
++ if (idx >= __end_of_fixed_addresses) {
++ BUG();
++ return;
++ }
++ pte = early_ioremap_pte(addr);
++ if (pgprot_val(flags))
++ *pte = (phys & PAGE_MASK) | pgprot_val(flags);
++ else
++ *pte = 0;
++ __flush_tlb_one(addr);
++}
+
-+ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc4106_ctx);
++static inline void __init early_set_fixmap(enum fixed_addresses idx,
++ unsigned long phys)
++{
++ if (after_paging_init)
++ set_fixmap(idx, phys);
++ else
++ __early_set_fixmap(idx, phys, PAGE_KERNEL);
++}
+
-+ inst->alg.cra_init = crypto_rfc4106_init_tfm;
-+ inst->alg.cra_exit = crypto_rfc4106_exit_tfm;
++static inline void __init early_clear_fixmap(enum fixed_addresses idx)
++{
++ if (after_paging_init)
++ clear_fixmap(idx);
++ else
++ __early_set_fixmap(idx, 0, __pgprot(0));
++}
+
-+ inst->alg.cra_aead.setkey = crypto_rfc4106_setkey;
-+ inst->alg.cra_aead.setauthsize = crypto_rfc4106_setauthsize;
-+ inst->alg.cra_aead.encrypt = crypto_rfc4106_encrypt;
-+ inst->alg.cra_aead.decrypt = crypto_rfc4106_decrypt;
+
-+ inst->alg.cra_aead.geniv = "seqiv";
++int __initdata early_ioremap_nested;
+
-+out:
-+ return inst;
++static int __init check_early_ioremap_leak(void)
++{
++ if (!early_ioremap_nested)
++ return 0;
+
-+out_drop_alg:
-+ crypto_drop_aead(spawn);
-+out_free_inst:
-+ kfree(inst);
-+ inst = ERR_PTR(err);
-+ goto out;
++ printk(KERN_WARNING
++ "Debug warning: early ioremap leak of %d areas detected.\n",
++ early_ioremap_nested);
++ printk(KERN_WARNING
++ "please boot with early_ioremap_debug and report the dmesg.\n");
++ WARN_ON(1);
++
++ return 1;
+}
++late_initcall(check_early_ioremap_leak);
+
-+static void crypto_rfc4106_free(struct crypto_instance *inst)
++void __init *early_ioremap(unsigned long phys_addr, unsigned long size)
+{
-+ crypto_drop_spawn(crypto_instance_ctx(inst));
-+ kfree(inst);
-+}
++ unsigned long offset, last_addr;
++ unsigned int nrpages, nesting;
++ enum fixed_addresses idx0, idx;
+
-+static struct crypto_template crypto_rfc4106_tmpl = {
-+ .name = "rfc4106",
-+ .alloc = crypto_rfc4106_alloc,
-+ .free = crypto_rfc4106_free,
-+ .module = THIS_MODULE,
-+};
++ WARN_ON(system_state != SYSTEM_BOOTING);
+
-+static int __init crypto_gcm_module_init(void)
-+{
-+ int err;
++ nesting = early_ioremap_nested;
++ if (early_ioremap_debug) {
++ printk(KERN_INFO "early_ioremap(%08lx, %08lx) [%d] => ",
++ phys_addr, size, nesting);
++ dump_stack();
++ }
+
-+ err = crypto_register_template(&crypto_gcm_base_tmpl);
-+ if (err)
-+ goto out;
++ /* Don't allow wraparound or zero size */
++ last_addr = phys_addr + size - 1;
++ if (!size || last_addr < phys_addr) {
++ WARN_ON(1);
++ return NULL;
++ }
+
-+ err = crypto_register_template(&crypto_gcm_tmpl);
-+ if (err)
-+ goto out_undo_base;
++ if (nesting >= FIX_BTMAPS_NESTING) {
++ WARN_ON(1);
++ return NULL;
++ }
++ early_ioremap_nested++;
++ /*
++ * Mappings have to be page-aligned
++ */
++ offset = phys_addr & ~PAGE_MASK;
++ phys_addr &= PAGE_MASK;
++ size = PAGE_ALIGN(last_addr) - phys_addr;
+
-+ err = crypto_register_template(&crypto_rfc4106_tmpl);
-+ if (err)
-+ goto out_undo_gcm;
++ /*
++ * Mappings have to fit in the FIX_BTMAP area.
++ */
++ nrpages = size >> PAGE_SHIFT;
++ if (nrpages > NR_FIX_BTMAPS) {
++ WARN_ON(1);
++ return NULL;
++ }
+
-+out:
-+ return err;
++ /*
++ * Ok, go for it..
++ */
++ idx0 = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*nesting;
++ idx = idx0;
++ while (nrpages > 0) {
++ early_set_fixmap(idx, phys_addr);
++ phys_addr += PAGE_SIZE;
++ --idx;
++ --nrpages;
++ }
++ if (early_ioremap_debug)
++ printk(KERN_CONT "%08lx + %08lx\n", offset, fix_to_virt(idx0));
+
-+out_undo_gcm:
-+ crypto_unregister_template(&crypto_gcm_tmpl);
-+out_undo_base:
-+ crypto_unregister_template(&crypto_gcm_base_tmpl);
-+ goto out;
++ return (void *) (offset + fix_to_virt(idx0));
+}
+
-+static void __exit crypto_gcm_module_exit(void)
++void __init early_iounmap(void *addr, unsigned long size)
+{
-+ crypto_unregister_template(&crypto_rfc4106_tmpl);
-+ crypto_unregister_template(&crypto_gcm_tmpl);
-+ crypto_unregister_template(&crypto_gcm_base_tmpl);
++ unsigned long virt_addr;
++ unsigned long offset;
++ unsigned int nrpages;
++ enum fixed_addresses idx;
++ unsigned int nesting;
++
++ nesting = --early_ioremap_nested;
++ WARN_ON(early_ioremap_nested < 0);
++
++ if (early_ioremap_debug) {
++ printk(KERN_INFO "early_iounmap(%p, %08lx) [%d]\n", addr,
++ size, nesting);
++ dump_stack();
++ }
++
++ virt_addr = (unsigned long)addr;
++ if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)) {
++ WARN_ON(1);
++ return;
++ }
++ offset = virt_addr & ~PAGE_MASK;
++ nrpages = PAGE_ALIGN(offset + size - 1) >> PAGE_SHIFT;
++
++ idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*nesting;
++ while (nrpages > 0) {
++ early_clear_fixmap(idx);
++ --idx;
++ --nrpages;
++ }
+}
+
-+module_init(crypto_gcm_module_init);
-+module_exit(crypto_gcm_module_exit);
++void __this_fixmap_does_not_exist(void)
++{
++ WARN_ON(1);
++}
+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Galois/Counter Mode");
-+MODULE_AUTHOR("Mikko Herranen <mh1 at iki.fi>");
-+MODULE_ALIAS("gcm_base");
-+MODULE_ALIAS("rfc4106");
-diff --git a/crypto/hmac.c b/crypto/hmac.c
-index 0f05be7..a1d016a 100644
---- a/crypto/hmac.c
-+++ b/crypto/hmac.c
-@@ -17,6 +17,7 @@
- */
-
- #include <crypto/algapi.h>
-+#include <crypto/scatterwalk.h>
- #include <linux/err.h>
- #include <linux/init.h>
- #include <linux/kernel.h>
-@@ -160,7 +161,7 @@ static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg,
-
- sg_init_table(sg1, 2);
- sg_set_buf(sg1, ipad, bs);
-- sg_set_page(&sg1[1], (void *) sg, 0, 0);
-+ scatterwalk_sg_chain(sg1, 2, sg);
-
- sg_init_table(sg2, 1);
- sg_set_buf(sg2, opad, bs + ds);
-diff --git a/crypto/internal.h b/crypto/internal.h
-index abb01f7..32f4c21 100644
---- a/crypto/internal.h
-+++ b/crypto/internal.h
-@@ -25,7 +25,6 @@
- #include <linux/notifier.h>
- #include <linux/rwsem.h>
- #include <linux/slab.h>
--#include <asm/kmap_types.h>
-
- /* Crypto notification events. */
- enum {
-@@ -50,34 +49,6 @@ extern struct list_head crypto_alg_list;
- extern struct rw_semaphore crypto_alg_sem;
- extern struct blocking_notifier_head crypto_chain;
-
--static inline enum km_type crypto_kmap_type(int out)
++#endif /* CONFIG_X86_32 */
+diff --git a/arch/x86/mm/ioremap_32.c b/arch/x86/mm/ioremap_32.c
+deleted file mode 100644
+index 0b27831..0000000
+--- a/arch/x86/mm/ioremap_32.c
++++ /dev/null
+@@ -1,274 +0,0 @@
+-/*
+- * arch/i386/mm/ioremap.c
+- *
+- * Re-map IO memory to kernel address space so that we can access it.
+- * This is needed for high PCI addresses that aren't mapped in the
+- * 640k-1MB IO memory area on PC's
+- *
+- * (C) Copyright 1995 1996 Linus Torvalds
+- */
+-
+-#include <linux/vmalloc.h>
+-#include <linux/init.h>
+-#include <linux/slab.h>
+-#include <linux/module.h>
+-#include <linux/io.h>
+-#include <asm/fixmap.h>
+-#include <asm/cacheflush.h>
+-#include <asm/tlbflush.h>
+-#include <asm/pgtable.h>
+-
+-#define ISA_START_ADDRESS 0xa0000
+-#define ISA_END_ADDRESS 0x100000
+-
+-/*
+- * Generic mapping function (not visible outside):
+- */
+-
+-/*
+- * Remap an arbitrary physical address space into the kernel virtual
+- * address space. Needed when the kernel wants to access high addresses
+- * directly.
+- *
+- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+- * have to convert them into an offset in a page-aligned mapping, but the
+- * caller shouldn't need to know that small detail.
+- */
+-void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
-{
-- enum km_type type;
+- void __iomem * addr;
+- struct vm_struct * area;
+- unsigned long offset, last_addr;
+- pgprot_t prot;
-
-- if (in_softirq())
-- type = out * (KM_SOFTIRQ1 - KM_SOFTIRQ0) + KM_SOFTIRQ0;
-- else
-- type = out * (KM_USER1 - KM_USER0) + KM_USER0;
+- /* Don't allow wraparound or zero size */
+- last_addr = phys_addr + size - 1;
+- if (!size || last_addr < phys_addr)
+- return NULL;
-
-- return type;
+- /*
+- * Don't remap the low PCI/ISA area, it's always mapped..
+- */
+- if (phys_addr >= ISA_START_ADDRESS && last_addr < ISA_END_ADDRESS)
+- return (void __iomem *) phys_to_virt(phys_addr);
+-
+- /*
+- * Don't allow anybody to remap normal RAM that we're using..
+- */
+- if (phys_addr <= virt_to_phys(high_memory - 1)) {
+- char *t_addr, *t_end;
+- struct page *page;
+-
+- t_addr = __va(phys_addr);
+- t_end = t_addr + (size - 1);
+-
+- for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++)
+- if(!PageReserved(page))
+- return NULL;
+- }
+-
+- prot = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY
+- | _PAGE_ACCESSED | flags);
+-
+- /*
+- * Mappings have to be page-aligned
+- */
+- offset = phys_addr & ~PAGE_MASK;
+- phys_addr &= PAGE_MASK;
+- size = PAGE_ALIGN(last_addr+1) - phys_addr;
+-
+- /*
+- * Ok, go for it..
+- */
+- area = get_vm_area(size, VM_IOREMAP | (flags << 20));
+- if (!area)
+- return NULL;
+- area->phys_addr = phys_addr;
+- addr = (void __iomem *) area->addr;
+- if (ioremap_page_range((unsigned long) addr,
+- (unsigned long) addr + size, phys_addr, prot)) {
+- vunmap((void __force *) addr);
+- return NULL;
+- }
+- return (void __iomem *) (offset + (char __iomem *)addr);
-}
+-EXPORT_SYMBOL(__ioremap);
-
--static inline void *crypto_kmap(struct page *page, int out)
+-/**
+- * ioremap_nocache - map bus memory into CPU space
+- * @offset: bus address of the memory
+- * @size: size of the resource to map
+- *
+- * ioremap_nocache performs a platform specific sequence of operations to
+- * make bus memory CPU accessible via the readb/readw/readl/writeb/
+- * writew/writel functions and the other mmio helpers. The returned
+- * address is not guaranteed to be usable directly as a virtual
+- * address.
+- *
+- * This version of ioremap ensures that the memory is marked uncachable
+- * on the CPU as well as honouring existing caching rules from things like
+- * the PCI bus. Note that there are other caches and buffers on many
+- * busses. In particular driver authors should read up on PCI writes
+- *
+- * It's useful if some control registers are in such an area and
+- * write combining or read caching is not desirable:
+- *
+- * Must be freed with iounmap.
+- */
+-
+-void __iomem *ioremap_nocache (unsigned long phys_addr, unsigned long size)
-{
-- return kmap_atomic(page, crypto_kmap_type(out));
+- unsigned long last_addr;
+- void __iomem *p = __ioremap(phys_addr, size, _PAGE_PCD);
+- if (!p)
+- return p;
+-
+- /* Guaranteed to be > phys_addr, as per __ioremap() */
+- last_addr = phys_addr + size - 1;
+-
+- if (last_addr < virt_to_phys(high_memory) - 1) {
+- struct page *ppage = virt_to_page(__va(phys_addr));
+- unsigned long npages;
+-
+- phys_addr &= PAGE_MASK;
+-
+- /* This might overflow and become zero.. */
+- last_addr = PAGE_ALIGN(last_addr);
+-
+- /* .. but that's ok, because modulo-2**n arithmetic will make
+- * the page-aligned "last - first" come out right.
+- */
+- npages = (last_addr - phys_addr) >> PAGE_SHIFT;
+-
+- if (change_page_attr(ppage, npages, PAGE_KERNEL_NOCACHE) < 0) {
+- iounmap(p);
+- p = NULL;
+- }
+- global_flush_tlb();
+- }
+-
+- return p;
-}
+-EXPORT_SYMBOL(ioremap_nocache);
-
--static inline void crypto_kunmap(void *vaddr, int out)
+-/**
+- * iounmap - Free a IO remapping
+- * @addr: virtual address from ioremap_*
+- *
+- * Caller must ensure there is only one unmapping for the same pointer.
+- */
+-void iounmap(volatile void __iomem *addr)
-{
-- kunmap_atomic(vaddr, crypto_kmap_type(out));
+- struct vm_struct *p, *o;
+-
+- if ((void __force *)addr <= high_memory)
+- return;
+-
+- /*
+- * __ioremap special-cases the PCI/ISA range by not instantiating a
+- * vm_area and by simply returning an address into the kernel mapping
+- * of ISA space. So handle that here.
+- */
+- if (addr >= phys_to_virt(ISA_START_ADDRESS) &&
+- addr < phys_to_virt(ISA_END_ADDRESS))
+- return;
+-
+- addr = (volatile void __iomem *)(PAGE_MASK & (unsigned long __force)addr);
+-
+- /* Use the vm area unlocked, assuming the caller
+- ensures there isn't another iounmap for the same address
+- in parallel. Reuse of the virtual address is prevented by
+- leaving it in the global lists until we're done with it.
+- cpa takes care of the direct mappings. */
+- read_lock(&vmlist_lock);
+- for (p = vmlist; p; p = p->next) {
+- if (p->addr == addr)
+- break;
+- }
+- read_unlock(&vmlist_lock);
+-
+- if (!p) {
+- printk("iounmap: bad address %p\n", addr);
+- dump_stack();
+- return;
+- }
+-
+- /* Reset the direct mapping. Can block */
+- if ((p->flags >> 20) && p->phys_addr < virt_to_phys(high_memory) - 1) {
+- change_page_attr(virt_to_page(__va(p->phys_addr)),
+- get_vm_area_size(p) >> PAGE_SHIFT,
+- PAGE_KERNEL);
+- global_flush_tlb();
+- }
+-
+- /* Finally remove it */
+- o = remove_vm_area((void *)addr);
+- BUG_ON(p != o || o == NULL);
+- kfree(p);
-}
+-EXPORT_SYMBOL(iounmap);
-
--static inline void crypto_yield(u32 flags)
+-void __init *bt_ioremap(unsigned long phys_addr, unsigned long size)
-{
-- if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
-- cond_resched();
+- unsigned long offset, last_addr;
+- unsigned int nrpages;
+- enum fixed_addresses idx;
+-
+- /* Don't allow wraparound or zero size */
+- last_addr = phys_addr + size - 1;
+- if (!size || last_addr < phys_addr)
+- return NULL;
+-
+- /*
+- * Don't remap the low PCI/ISA area, it's always mapped..
+- */
+- if (phys_addr >= ISA_START_ADDRESS && last_addr < ISA_END_ADDRESS)
+- return phys_to_virt(phys_addr);
+-
+- /*
+- * Mappings have to be page-aligned
+- */
+- offset = phys_addr & ~PAGE_MASK;
+- phys_addr &= PAGE_MASK;
+- size = PAGE_ALIGN(last_addr) - phys_addr;
+-
+- /*
+- * Mappings have to fit in the FIX_BTMAP area.
+- */
+- nrpages = size >> PAGE_SHIFT;
+- if (nrpages > NR_FIX_BTMAPS)
+- return NULL;
+-
+- /*
+- * Ok, go for it..
+- */
+- idx = FIX_BTMAP_BEGIN;
+- while (nrpages > 0) {
+- set_fixmap(idx, phys_addr);
+- phys_addr += PAGE_SIZE;
+- --idx;
+- --nrpages;
+- }
+- return (void*) (offset + fix_to_virt(FIX_BTMAP_BEGIN));
-}
-
- #ifdef CONFIG_PROC_FS
- void __init crypto_init_proc(void);
- void __exit crypto_exit_proc(void);
-@@ -122,6 +93,8 @@ void crypto_exit_digest_ops(struct crypto_tfm *tfm);
- void crypto_exit_cipher_ops(struct crypto_tfm *tfm);
- void crypto_exit_compress_ops(struct crypto_tfm *tfm);
+-void __init bt_iounmap(void *addr, unsigned long size)
+-{
+- unsigned long virt_addr;
+- unsigned long offset;
+- unsigned int nrpages;
+- enum fixed_addresses idx;
+-
+- virt_addr = (unsigned long)addr;
+- if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN))
+- return;
+- offset = virt_addr & ~PAGE_MASK;
+- nrpages = PAGE_ALIGN(offset + size - 1) >> PAGE_SHIFT;
+-
+- idx = FIX_BTMAP_BEGIN;
+- while (nrpages > 0) {
+- clear_fixmap(idx);
+- --idx;
+- --nrpages;
+- }
+-}
+diff --git a/arch/x86/mm/ioremap_64.c b/arch/x86/mm/ioremap_64.c
+deleted file mode 100644
+index 6cac90a..0000000
+--- a/arch/x86/mm/ioremap_64.c
++++ /dev/null
+@@ -1,210 +0,0 @@
+-/*
+- * arch/x86_64/mm/ioremap.c
+- *
+- * Re-map IO memory to kernel address space so that we can access it.
+- * This is needed for high PCI addresses that aren't mapped in the
+- * 640k-1MB IO memory area on PC's
+- *
+- * (C) Copyright 1995 1996 Linus Torvalds
+- */
+-
+-#include <linux/vmalloc.h>
+-#include <linux/init.h>
+-#include <linux/slab.h>
+-#include <linux/module.h>
+-#include <linux/io.h>
+-
+-#include <asm/pgalloc.h>
+-#include <asm/fixmap.h>
+-#include <asm/tlbflush.h>
+-#include <asm/cacheflush.h>
+-#include <asm/proto.h>
+-
+-unsigned long __phys_addr(unsigned long x)
+-{
+- if (x >= __START_KERNEL_map)
+- return x - __START_KERNEL_map + phys_base;
+- return x - PAGE_OFFSET;
+-}
+-EXPORT_SYMBOL(__phys_addr);
+-
+-#define ISA_START_ADDRESS 0xa0000
+-#define ISA_END_ADDRESS 0x100000
+-
+-/*
+- * Fix up the linear direct mapping of the kernel to avoid cache attribute
+- * conflicts.
+- */
+-static int
+-ioremap_change_attr(unsigned long phys_addr, unsigned long size,
+- unsigned long flags)
+-{
+- int err = 0;
+- if (phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT)) {
+- unsigned long npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+- unsigned long vaddr = (unsigned long) __va(phys_addr);
+-
+- /*
+- * Must use a address here and not struct page because the phys addr
+- * can be a in hole between nodes and not have an memmap entry.
+- */
+- err = change_page_attr_addr(vaddr,npages,__pgprot(__PAGE_KERNEL|flags));
+- if (!err)
+- global_flush_tlb();
+- }
+- return err;
+-}
+-
+-/*
+- * Generic mapping function
+- */
+-
+-/*
+- * Remap an arbitrary physical address space into the kernel virtual
+- * address space. Needed when the kernel wants to access high addresses
+- * directly.
+- *
+- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+- * have to convert them into an offset in a page-aligned mapping, but the
+- * caller shouldn't need to know that small detail.
+- */
+-void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
+-{
+- void * addr;
+- struct vm_struct * area;
+- unsigned long offset, last_addr;
+- pgprot_t pgprot;
+-
+- /* Don't allow wraparound or zero size */
+- last_addr = phys_addr + size - 1;
+- if (!size || last_addr < phys_addr)
+- return NULL;
+-
+- /*
+- * Don't remap the low PCI/ISA area, it's always mapped..
+- */
+- if (phys_addr >= ISA_START_ADDRESS && last_addr < ISA_END_ADDRESS)
+- return (__force void __iomem *)phys_to_virt(phys_addr);
+-
+-#ifdef CONFIG_FLATMEM
+- /*
+- * Don't allow anybody to remap normal RAM that we're using..
+- */
+- if (last_addr < virt_to_phys(high_memory)) {
+- char *t_addr, *t_end;
+- struct page *page;
+-
+- t_addr = __va(phys_addr);
+- t_end = t_addr + (size - 1);
+-
+- for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++)
+- if(!PageReserved(page))
+- return NULL;
+- }
+-#endif
+-
+- pgprot = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_GLOBAL
+- | _PAGE_DIRTY | _PAGE_ACCESSED | flags);
+- /*
+- * Mappings have to be page-aligned
+- */
+- offset = phys_addr & ~PAGE_MASK;
+- phys_addr &= PAGE_MASK;
+- size = PAGE_ALIGN(last_addr+1) - phys_addr;
+-
+- /*
+- * Ok, go for it..
+- */
+- area = get_vm_area(size, VM_IOREMAP | (flags << 20));
+- if (!area)
+- return NULL;
+- area->phys_addr = phys_addr;
+- addr = area->addr;
+- if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
+- phys_addr, pgprot)) {
+- remove_vm_area((void *)(PAGE_MASK & (unsigned long) addr));
+- return NULL;
+- }
+- if (flags && ioremap_change_attr(phys_addr, size, flags) < 0) {
+- area->flags &= 0xffffff;
+- vunmap(addr);
+- return NULL;
+- }
+- return (__force void __iomem *) (offset + (char *)addr);
+-}
+-EXPORT_SYMBOL(__ioremap);
+-
+-/**
+- * ioremap_nocache - map bus memory into CPU space
+- * @offset: bus address of the memory
+- * @size: size of the resource to map
+- *
+- * ioremap_nocache performs a platform specific sequence of operations to
+- * make bus memory CPU accessible via the readb/readw/readl/writeb/
+- * writew/writel functions and the other mmio helpers. The returned
+- * address is not guaranteed to be usable directly as a virtual
+- * address.
+- *
+- * This version of ioremap ensures that the memory is marked uncachable
+- * on the CPU as well as honouring existing caching rules from things like
+- * the PCI bus. Note that there are other caches and buffers on many
+- * busses. In particular driver authors should read up on PCI writes
+- *
+- * It's useful if some control registers are in such an area and
+- * write combining or read caching is not desirable:
+- *
+- * Must be freed with iounmap.
+- */
+-
+-void __iomem *ioremap_nocache (unsigned long phys_addr, unsigned long size)
+-{
+- return __ioremap(phys_addr, size, _PAGE_PCD);
+-}
+-EXPORT_SYMBOL(ioremap_nocache);
+-
+-/**
+- * iounmap - Free a IO remapping
+- * @addr: virtual address from ioremap_*
+- *
+- * Caller must ensure there is only one unmapping for the same pointer.
+- */
+-void iounmap(volatile void __iomem *addr)
+-{
+- struct vm_struct *p, *o;
+-
+- if (addr <= high_memory)
+- return;
+- if (addr >= phys_to_virt(ISA_START_ADDRESS) &&
+- addr < phys_to_virt(ISA_END_ADDRESS))
+- return;
+-
+- addr = (volatile void __iomem *)(PAGE_MASK & (unsigned long __force)addr);
+- /* Use the vm area unlocked, assuming the caller
+- ensures there isn't another iounmap for the same address
+- in parallel. Reuse of the virtual address is prevented by
+- leaving it in the global lists until we're done with it.
+- cpa takes care of the direct mappings. */
+- read_lock(&vmlist_lock);
+- for (p = vmlist; p; p = p->next) {
+- if (p->addr == addr)
+- break;
+- }
+- read_unlock(&vmlist_lock);
+-
+- if (!p) {
+- printk("iounmap: bad address %p\n", addr);
+- dump_stack();
+- return;
+- }
+-
+- /* Reset the direct mapping. Can block */
+- if (p->flags >> 20)
+- ioremap_change_attr(p->phys_addr, p->size, 0);
+-
+- /* Finally remove it */
+- o = remove_vm_area((void *)addr);
+- BUG_ON(p != o || o == NULL);
+- kfree(p);
+-}
+-EXPORT_SYMBOL(iounmap);
+-
+diff --git a/arch/x86/mm/k8topology_64.c b/arch/x86/mm/k8topology_64.c
+index a96006f..7a2ebce 100644
+--- a/arch/x86/mm/k8topology_64.c
++++ b/arch/x86/mm/k8topology_64.c
+@@ -1,9 +1,9 @@
+-/*
++/*
+ * AMD K8 NUMA support.
+ * Discover the memory map and associated nodes.
+- *
++ *
+ * This version reads it directly from the K8 northbridge.
+- *
++ *
+ * Copyright 2002,2003 Andi Kleen, SuSE Labs.
+ */
+ #include <linux/kernel.h>
+@@ -22,132 +22,135 @@
-+void crypto_larval_kill(struct crypto_alg *alg);
-+struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask);
- void crypto_larval_error(const char *name, u32 type, u32 mask);
+ static __init int find_northbridge(void)
+ {
+- int num;
++ int num;
- void crypto_shoot_alg(struct crypto_alg *alg);
-diff --git a/crypto/lzo.c b/crypto/lzo.c
+- for (num = 0; num < 32; num++) {
++ for (num = 0; num < 32; num++) {
+ u32 header;
+-
+- header = read_pci_config(0, num, 0, 0x00);
+- if (header != (PCI_VENDOR_ID_AMD | (0x1100<<16)))
+- continue;
+-
+- header = read_pci_config(0, num, 1, 0x00);
+- if (header != (PCI_VENDOR_ID_AMD | (0x1101<<16)))
+- continue;
+- return num;
+- }
+-
+- return -1;
++
++ header = read_pci_config(0, num, 0, 0x00);
++ if (header != (PCI_VENDOR_ID_AMD | (0x1100<<16)) &&
++ header != (PCI_VENDOR_ID_AMD | (0x1200<<16)) &&
++ header != (PCI_VENDOR_ID_AMD | (0x1300<<16)))
++ continue;
++
++ header = read_pci_config(0, num, 1, 0x00);
++ if (header != (PCI_VENDOR_ID_AMD | (0x1101<<16)) &&
++ header != (PCI_VENDOR_ID_AMD | (0x1201<<16)) &&
++ header != (PCI_VENDOR_ID_AMD | (0x1301<<16)))
++ continue;
++ return num;
++ }
++
++ return -1;
+ }
+
+ int __init k8_scan_nodes(unsigned long start, unsigned long end)
+-{
++{
+ unsigned long prevbase;
+ struct bootnode nodes[8];
+- int nodeid, i, j, nb;
++ int nodeid, i, nb;
+ unsigned char nodeids[8];
+ int found = 0;
+ u32 reg;
+ unsigned numnodes;
+- unsigned num_cores;
++ unsigned cores;
++ unsigned bits;
++ int j;
+
+ if (!early_pci_allowed())
+ return -1;
+
+- nb = find_northbridge();
+- if (nb < 0)
++ nb = find_northbridge();
++ if (nb < 0)
+ return nb;
+
+- printk(KERN_INFO "Scanning NUMA topology in Northbridge %d\n", nb);
+-
+- num_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
+- printk(KERN_INFO "CPU has %d num_cores\n", num_cores);
++ printk(KERN_INFO "Scanning NUMA topology in Northbridge %d\n", nb);
+
+- reg = read_pci_config(0, nb, 0, 0x60);
++ reg = read_pci_config(0, nb, 0, 0x60);
+ numnodes = ((reg >> 4) & 0xF) + 1;
+ if (numnodes <= 1)
+ return -1;
+
+ printk(KERN_INFO "Number of nodes %d\n", numnodes);
+
+- memset(&nodes,0,sizeof(nodes));
++ memset(&nodes, 0, sizeof(nodes));
+ prevbase = 0;
+- for (i = 0; i < 8; i++) {
+- unsigned long base,limit;
++ for (i = 0; i < 8; i++) {
++ unsigned long base, limit;
+ u32 nodeid;
+-
++
+ base = read_pci_config(0, nb, 1, 0x40 + i*8);
+ limit = read_pci_config(0, nb, 1, 0x44 + i*8);
+
+- nodeid = limit & 7;
++ nodeid = limit & 7;
+ nodeids[i] = nodeid;
+- if ((base & 3) == 0) {
++ if ((base & 3) == 0) {
+ if (i < numnodes)
+- printk("Skipping disabled node %d\n", i);
++ printk("Skipping disabled node %d\n", i);
+ continue;
+- }
++ }
+ if (nodeid >= numnodes) {
+ printk("Ignoring excess node %d (%lx:%lx)\n", nodeid,
+- base, limit);
++ base, limit);
+ continue;
+- }
++ }
+
+- if (!limit) {
+- printk(KERN_INFO "Skipping node entry %d (base %lx)\n", i,
+- base);
++ if (!limit) {
++ printk(KERN_INFO "Skipping node entry %d (base %lx)\n",
++ i, base);
+ continue;
+ }
+ if ((base >> 8) & 3 || (limit >> 8) & 3) {
+- printk(KERN_ERR "Node %d using interleaving mode %lx/%lx\n",
+- nodeid, (base>>8)&3, (limit>>8) & 3);
+- return -1;
+- }
++ printk(KERN_ERR "Node %d using interleaving mode %lx/%lx\n",
++ nodeid, (base>>8)&3, (limit>>8) & 3);
++ return -1;
++ }
+ if (node_isset(nodeid, node_possible_map)) {
+- printk(KERN_INFO "Node %d already present. Skipping\n",
++ printk(KERN_INFO "Node %d already present. Skipping\n",
+ nodeid);
+ continue;
+ }
+
+- limit >>= 16;
+- limit <<= 24;
++ limit >>= 16;
++ limit <<= 24;
+ limit |= (1<<24)-1;
+ limit++;
+
+ if (limit > end_pfn << PAGE_SHIFT)
+ limit = end_pfn << PAGE_SHIFT;
+ if (limit <= base)
+- continue;
+-
++ continue;
++
+ base >>= 16;
+- base <<= 24;
+-
+- if (base < start)
+- base = start;
+- if (limit > end)
+- limit = end;
+- if (limit == base) {
+- printk(KERN_ERR "Empty node %d\n", nodeid);
+- continue;
++ base <<= 24;
++
++ if (base < start)
++ base = start;
++ if (limit > end)
++ limit = end;
++ if (limit == base) {
++ printk(KERN_ERR "Empty node %d\n", nodeid);
++ continue;
+ }
+- if (limit < base) {
++ if (limit < base) {
+ printk(KERN_ERR "Node %d bogus settings %lx-%lx.\n",
+- nodeid, base, limit);
++ nodeid, base, limit);
+ continue;
+- }
+-
++ }
++
+ /* Could sort here, but pun for now. Should not happen anyroads. */
+- if (prevbase > base) {
++ if (prevbase > base) {
+ printk(KERN_ERR "Node map not sorted %lx,%lx\n",
+- prevbase,base);
++ prevbase, base);
+ return -1;
+ }
+-
+- printk(KERN_INFO "Node %d MemBase %016lx Limit %016lx\n",
+- nodeid, base, limit);
+-
++
++ printk(KERN_INFO "Node %d MemBase %016lx Limit %016lx\n",
++ nodeid, base, limit);
++
+ found++;
+-
+- nodes[nodeid].start = base;
++
++ nodes[nodeid].start = base;
+ nodes[nodeid].end = limit;
+ e820_register_active_regions(nodeid,
+ nodes[nodeid].start >> PAGE_SHIFT,
+@@ -156,27 +159,31 @@ int __init k8_scan_nodes(unsigned long start, unsigned long end)
+ prevbase = base;
+
+ node_set(nodeid, node_possible_map);
+- }
++ }
+
+ if (!found)
+- return -1;
++ return -1;
+
+ memnode_shift = compute_hash_shift(nodes, 8);
+- if (memnode_shift < 0) {
+- printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n");
+- return -1;
+- }
+- printk(KERN_INFO "Using node hash shift of %d\n", memnode_shift);
++ if (memnode_shift < 0) {
++ printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n");
++ return -1;
++ }
++ printk(KERN_INFO "Using node hash shift of %d\n", memnode_shift);
++
++ /* use the coreid bits from early_identify_cpu */
++ bits = boot_cpu_data.x86_coreid_bits;
++ cores = (1<<bits);
+
+ for (i = 0; i < 8; i++) {
+- if (nodes[i].start != nodes[i].end) {
++ if (nodes[i].start != nodes[i].end) {
+ nodeid = nodeids[i];
+- for (j = 0; j < num_cores; j++)
+- apicid_to_node[(nodeid * num_cores) + j] = i;
+- setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+- }
++ for (j = 0; j < cores; j++)
++ apicid_to_node[(nodeid << bits) + j] = i;
++ setup_node_bootmem(i, nodes[i].start, nodes[i].end);
++ }
+ }
+
+ numa_init_array();
+ return 0;
+-}
++}
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
new file mode 100644
-index 0000000..48c3288
+index 0000000..56fe712
--- /dev/null
-+++ b/crypto/lzo.c
-@@ -0,0 +1,106 @@
++++ b/arch/x86/mm/mmap.c
+@@ -0,0 +1,123 @@
+/*
-+ * Cryptographic API.
++ * Flexible mmap layout support
+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License version 2 as published by
-+ * the Free Software Foundation.
++ * Based on code by Ingo Molnar and Andi Kleen, copyrighted
++ * as follows:
+ *
-+ * This program is distributed in the hope that it will be useful, but WITHOUT
-+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-+ * more details.
++ * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
++ * All Rights Reserved.
++ * Copyright 2005 Andi Kleen, SUSE Labs.
++ * Copyright 2007 Jiri Kosina, SUSE Labs.
+ *
-+ * You should have received a copy of the GNU General Public License along with
-+ * this program; if not, write to the Free Software Foundation, Inc., 51
-+ * Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
+ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
-+#include <linux/init.h>
-+#include <linux/module.h>
-+#include <linux/crypto.h>
-+#include <linux/vmalloc.h>
-+#include <linux/lzo.h>
++#include <linux/personality.h>
++#include <linux/mm.h>
++#include <linux/random.h>
++#include <linux/limits.h>
++#include <linux/sched.h>
+
-+struct lzo_ctx {
-+ void *lzo_comp_mem;
-+};
++/*
++ * Top of mmap area (just below the process stack).
++ *
++ * Leave an at least ~128 MB hole.
++ */
++#define MIN_GAP (128*1024*1024)
++#define MAX_GAP (TASK_SIZE/6*5)
+
-+static int lzo_init(struct crypto_tfm *tfm)
++/*
++ * True on X86_32 or when emulating IA32 on X86_64
++ */
++static int mmap_is_ia32(void)
+{
-+ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
-+
-+ ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
-+ if (!ctx->lzo_comp_mem)
-+ return -ENOMEM;
-+
++#ifdef CONFIG_X86_32
++ return 1;
++#endif
++#ifdef CONFIG_IA32_EMULATION
++ if (test_thread_flag(TIF_IA32))
++ return 1;
++#endif
+ return 0;
+}
+
-+static void lzo_exit(struct crypto_tfm *tfm)
++static int mmap_is_legacy(void)
+{
-+ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
++ if (current->personality & ADDR_COMPAT_LAYOUT)
++ return 1;
+
-+ vfree(ctx->lzo_comp_mem);
++ if (current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY)
++ return 1;
++
++ return sysctl_legacy_va_layout;
+}
+
-+static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
-+ unsigned int slen, u8 *dst, unsigned int *dlen)
++static unsigned long mmap_rnd(void)
+{
-+ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
-+ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
-+ int err;
-+
-+ err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
-+
-+ if (err != LZO_E_OK)
-+ return -EINVAL;
++ unsigned long rnd = 0;
+
-+ *dlen = tmp_len;
-+ return 0;
++ /*
++ * 8 bits of randomness in 32bit mmaps, 20 address space bits
++ * 28 bits of randomness in 64bit mmaps, 40 address space bits
++ */
++ if (current->flags & PF_RANDOMIZE) {
++ if (mmap_is_ia32())
++ rnd = (long)get_random_int() % (1<<8);
++ else
++ rnd = (long)(get_random_int() % (1<<28));
++ }
++ return rnd << PAGE_SHIFT;
+}
+
-+static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
-+ unsigned int slen, u8 *dst, unsigned int *dlen)
++static unsigned long mmap_base(void)
+{
-+ int err;
-+ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
-+
-+ err = lzo1x_decompress_safe(src, slen, dst, &tmp_len);
-+
-+ if (err != LZO_E_OK)
-+ return -EINVAL;
++ unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+
-+ *dlen = tmp_len;
-+ return 0;
++ if (gap < MIN_GAP)
++ gap = MIN_GAP;
++ else if (gap > MAX_GAP)
++ gap = MAX_GAP;
+
++ return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
+}
+
-+static struct crypto_alg alg = {
-+ .cra_name = "lzo",
-+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
-+ .cra_ctxsize = sizeof(struct lzo_ctx),
-+ .cra_module = THIS_MODULE,
-+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
-+ .cra_init = lzo_init,
-+ .cra_exit = lzo_exit,
-+ .cra_u = { .compress = {
-+ .coa_compress = lzo_compress,
-+ .coa_decompress = lzo_decompress } }
-+};
-+
-+static int __init init(void)
++/*
++ * Bottom-up (legacy) layout on X86_32 did not support randomization, X86_64
++ * does, but not when emulating X86_32
++ */
++static unsigned long mmap_legacy_base(void)
+{
-+ return crypto_register_alg(&alg);
++ if (mmap_is_ia32())
++ return TASK_UNMAPPED_BASE;
++ else
++ return TASK_UNMAPPED_BASE + mmap_rnd();
+}
+
-+static void __exit fini(void)
++/*
++ * This function, called very early during the creation of a new
++ * process VM image, sets up which VM layout function to use:
++ */
++void arch_pick_mmap_layout(struct mm_struct *mm)
+{
-+ crypto_unregister_alg(&alg);
++ if (mmap_is_legacy()) {
++ mm->mmap_base = mmap_legacy_base();
++ mm->get_unmapped_area = arch_get_unmapped_area;
++ mm->unmap_area = arch_unmap_area;
++ } else {
++ mm->mmap_base = mmap_base();
++ mm->get_unmapped_area = arch_get_unmapped_area_topdown;
++ mm->unmap_area = arch_unmap_area_topdown;
++ }
+}
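[Editorial note, not part of the patch: the new arch/x86/mm/mmap.c above clamps the stack gap between MIN_GAP and MAX_GAP before subtracting it (plus the random offset) from TASK_SIZE. A userspace sketch of that arithmetic, with TASK_SIZE, the rlimit, and the randomness passed in as stand-in parameters rather than the kernel's globals:]

```c
#include <assert.h>

#define SKETCH_PAGE_SHIFT 12
#define SKETCH_MIN_GAP (128UL * 1024 * 1024)

/* Mirrors the gap clamping in mmap_base(): leave at least ~128 MB and
 * at most 5/6 of the address space below the stack, then round the
 * result up to a page boundary as PAGE_ALIGN does. */
static unsigned long sketch_mmap_base(unsigned long task_size,
				      unsigned long stack_rlimit,
				      unsigned long rnd_pages)
{
	unsigned long max_gap = task_size / 6 * 5;
	unsigned long gap = stack_rlimit;
	unsigned long page_mask = (1UL << SKETCH_PAGE_SHIFT) - 1;
	unsigned long base;

	if (gap < SKETCH_MIN_GAP)
		gap = SKETCH_MIN_GAP;
	else if (gap > max_gap)
		gap = max_gap;

	base = task_size - gap - (rnd_pages << SKETCH_PAGE_SHIFT);
	return (base + page_mask) & ~page_mask;	/* round up to a page */
}
```

An 8 MB stack rlimit is below MIN_GAP, so the mmap base lands 128 MB (plus the random pages) below the top of the address space; an unlimited rlimit is instead capped at 5/6 of TASK_SIZE.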
+diff --git a/arch/x86/mm/mmap_32.c b/arch/x86/mm/mmap_32.c
+deleted file mode 100644
+index 552e084..0000000
+--- a/arch/x86/mm/mmap_32.c
++++ /dev/null
+@@ -1,77 +0,0 @@
+-/*
+- * linux/arch/i386/mm/mmap.c
+- *
+- * flexible mmap layout support
+- *
+- * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+- * All Rights Reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- *
+- *
+- * Started by Ingo Molnar <mingo at elte.hu>
+- */
+-
+-#include <linux/personality.h>
+-#include <linux/mm.h>
+-#include <linux/random.h>
+-#include <linux/sched.h>
+-
+-/*
+- * Top of mmap area (just below the process stack).
+- *
+- * Leave an at least ~128 MB hole.
+- */
+-#define MIN_GAP (128*1024*1024)
+-#define MAX_GAP (TASK_SIZE/6*5)
+-
+-static inline unsigned long mmap_base(struct mm_struct *mm)
+-{
+- unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+- unsigned long random_factor = 0;
+-
+- if (current->flags & PF_RANDOMIZE)
+- random_factor = get_random_int() % (1024*1024);
+-
+- if (gap < MIN_GAP)
+- gap = MIN_GAP;
+- else if (gap > MAX_GAP)
+- gap = MAX_GAP;
+-
+- return PAGE_ALIGN(TASK_SIZE - gap - random_factor);
+-}
+-
+-/*
+- * This function, called very early during the creation of a new
+- * process VM image, sets up which VM layout function to use:
+- */
+-void arch_pick_mmap_layout(struct mm_struct *mm)
+-{
+- /*
+- * Fall back to the standard layout if the personality
+- * bit is set, or if the expected stack growth is unlimited:
+- */
+- if (sysctl_legacy_va_layout ||
+- (current->personality & ADDR_COMPAT_LAYOUT) ||
+- current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY) {
+- mm->mmap_base = TASK_UNMAPPED_BASE;
+- mm->get_unmapped_area = arch_get_unmapped_area;
+- mm->unmap_area = arch_unmap_area;
+- } else {
+- mm->mmap_base = mmap_base(mm);
+- mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+- mm->unmap_area = arch_unmap_area_topdown;
+- }
+-}
+diff --git a/arch/x86/mm/mmap_64.c b/arch/x86/mm/mmap_64.c
+deleted file mode 100644
+index 80bba0d..0000000
+--- a/arch/x86/mm/mmap_64.c
++++ /dev/null
+@@ -1,29 +0,0 @@
+-/* Copyright 2005 Andi Kleen, SuSE Labs.
+- * Licensed under GPL, v.2
+- */
+-#include <linux/mm.h>
+-#include <linux/sched.h>
+-#include <linux/random.h>
+-#include <asm/ia32.h>
+-
+-/* Notebook: move the mmap code from sys_x86_64.c over here. */
+-
+-void arch_pick_mmap_layout(struct mm_struct *mm)
+-{
+-#ifdef CONFIG_IA32_EMULATION
+- if (current_thread_info()->flags & _TIF_IA32)
+- return ia32_pick_mmap_layout(mm);
+-#endif
+- mm->mmap_base = TASK_UNMAPPED_BASE;
+- if (current->flags & PF_RANDOMIZE) {
+- /* Add 28bit randomness which is about 40bits of address space
+- because mmap base has to be page aligned.
+- or ~1/128 of the total user VM
+- (total user address space is 47bits) */
+- unsigned rnd = get_random_int() & 0xfffffff;
+- mm->mmap_base += ((unsigned long)rnd) << PAGE_SHIFT;
+- }
+- mm->get_unmapped_area = arch_get_unmapped_area;
+- mm->unmap_area = arch_unmap_area;
+-}
+-
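[Editorial note, not part of the patch: the deleted mmap_64.c above masked the random value to 28 bits and shifted it by PAGE_SHIFT, and the unified mmap_rnd() keeps the same ranges (8 bits for 32-bit tasks, 28 bits for 64-bit). A one-line sketch of why that corresponds to 20 and 40 bits of randomized address space:]

```c
#include <assert.h>

/* rnd_bits random page numbers shifted by page_shift cover
 * 2^(rnd_bits + page_shift) bytes of possible mmap bases. */
static unsigned long rnd_span_bytes(unsigned int rnd_bits,
				    unsigned int page_shift)
{
	return 1UL << (rnd_bits + page_shift);
}
```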
+diff --git a/arch/x86/mm/numa_64.c b/arch/x86/mm/numa_64.c
+index 3d6926b..dc3b1f7 100644
+--- a/arch/x86/mm/numa_64.c
++++ b/arch/x86/mm/numa_64.c
+@@ -1,7 +1,7 @@
+-/*
++/*
+ * Generic VM initialization for x86-64 NUMA setups.
+ * Copyright 2002,2003 Andi Kleen, SuSE Labs.
+- */
++ */
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/string.h>
+@@ -11,35 +11,45 @@
+ #include <linux/ctype.h>
+ #include <linux/module.h>
+ #include <linux/nodemask.h>
++#include <linux/sched.h>
+
+ #include <asm/e820.h>
+ #include <asm/proto.h>
+ #include <asm/dma.h>
+ #include <asm/numa.h>
+ #include <asm/acpi.h>
++#include <asm/k8.h>
+
+ #ifndef Dprintk
+ #define Dprintk(x...)
+ #endif
+
+ struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
++EXPORT_SYMBOL(node_data);
+
-+module_init(init);
-+module_exit(fini);
-+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("LZO Compression Algorithm");
-diff --git a/crypto/pcbc.c b/crypto/pcbc.c
-index c3ed8a1..fe70477 100644
---- a/crypto/pcbc.c
-+++ b/crypto/pcbc.c
-@@ -24,7 +24,6 @@
+ bootmem_data_t plat_node_bdata[MAX_NUMNODES];
- struct crypto_pcbc_ctx {
- struct crypto_cipher *child;
-- void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
+ struct memnode memnode;
+
+-unsigned char cpu_to_node[NR_CPUS] __read_mostly = {
++int x86_cpu_to_node_map_init[NR_CPUS] = {
+ [0 ... NR_CPUS-1] = NUMA_NO_NODE
+ };
+-unsigned char apicid_to_node[MAX_LOCAL_APIC] __cpuinitdata = {
+- [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
++void *x86_cpu_to_node_map_early_ptr;
++DEFINE_PER_CPU(int, x86_cpu_to_node_map) = NUMA_NO_NODE;
++EXPORT_PER_CPU_SYMBOL(x86_cpu_to_node_map);
++EXPORT_SYMBOL(x86_cpu_to_node_map_early_ptr);
++
++s16 apicid_to_node[MAX_LOCAL_APIC] __cpuinitdata = {
++ [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
};
+-cpumask_t node_to_cpumask[MAX_NUMNODES] __read_mostly;
++
++cpumask_t node_to_cpumask_map[MAX_NUMNODES] __read_mostly;
++EXPORT_SYMBOL(node_to_cpumask_map);
- static int crypto_pcbc_setkey(struct crypto_tfm *parent, const u8 *key,
-@@ -45,9 +44,7 @@ static int crypto_pcbc_setkey(struct crypto_tfm *parent, const u8 *key,
+ int numa_off __initdata;
+ unsigned long __initdata nodemap_addr;
+ unsigned long __initdata nodemap_size;
- static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
- {
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_encrypt;
-@@ -58,10 +55,10 @@ static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
- u8 *iv = walk->iv;
+-
+ /*
+ * Given a shift value, try to populate memnodemap[]
+ * Returns :
+@@ -47,14 +57,13 @@ unsigned long __initdata nodemap_size;
+ * 0 if memnodmap[] too small (of shift too small)
+ * -1 if node overlap or lost ram (shift too big)
+ */
+-static int __init
+-populate_memnodemap(const struct bootnode *nodes, int numnodes, int shift)
++static int __init populate_memnodemap(const struct bootnode *nodes,
++ int numnodes, int shift)
+ {
+- int i;
+- int res = -1;
+ unsigned long addr, end;
++ int i, res = -1;
+
+- memset(memnodemap, 0xff, memnodemapsize);
++ memset(memnodemap, 0xff, sizeof(s16)*memnodemapsize);
+ for (i = 0; i < numnodes; i++) {
+ addr = nodes[i].start;
+ end = nodes[i].end;
+@@ -63,13 +72,13 @@ populate_memnodemap(const struct bootnode *nodes, int numnodes, int shift)
+ if ((end >> shift) >= memnodemapsize)
+ return 0;
+ do {
+- if (memnodemap[addr >> shift] != 0xff)
++ if (memnodemap[addr >> shift] != NUMA_NO_NODE)
+ return -1;
+ memnodemap[addr >> shift] = i;
+ addr += (1UL << shift);
+ } while (addr < end);
+ res = 1;
+- }
++ }
+ return res;
+ }
- do {
-- xor(iv, src, bsize);
-+ crypto_xor(iv, src, bsize);
- fn(crypto_cipher_tfm(tfm), dst, iv);
- memcpy(iv, dst, bsize);
-- xor(iv, src, bsize);
-+ crypto_xor(iv, src, bsize);
+@@ -78,12 +87,12 @@ static int __init allocate_cachealigned_memnodemap(void)
+ unsigned long pad, pad_addr;
- src += bsize;
- dst += bsize;
-@@ -72,9 +69,7 @@ static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
+ memnodemap = memnode.embedded_map;
+- if (memnodemapsize <= 48)
++ if (memnodemapsize <= ARRAY_SIZE(memnode.embedded_map))
+ return 0;
- static int crypto_pcbc_encrypt_inplace(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
- {
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_encrypt;
-@@ -86,10 +81,10 @@ static int crypto_pcbc_encrypt_inplace(struct blkcipher_desc *desc,
+ pad = L1_CACHE_BYTES - 1;
+ pad_addr = 0x8000;
+- nodemap_size = pad + memnodemapsize;
++ nodemap_size = pad + sizeof(s16) * memnodemapsize;
+ nodemap_addr = find_e820_area(pad_addr, end_pfn<<PAGE_SHIFT,
+ nodemap_size);
+ if (nodemap_addr == -1UL) {
+@@ -94,6 +103,7 @@ static int __init allocate_cachealigned_memnodemap(void)
+ }
+ pad_addr = (nodemap_addr + pad) & ~pad;
+ memnodemap = phys_to_virt(pad_addr);
++ reserve_early(nodemap_addr, nodemap_addr + nodemap_size);
+
+ printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n",
+ nodemap_addr, nodemap_addr + nodemap_size);
+@@ -104,8 +114,8 @@ static int __init allocate_cachealigned_memnodemap(void)
+ * The LSB of all start and end addresses in the node map is the value of the
+ * maximum possible shift.
+ */
+-static int __init
+-extract_lsb_from_nodes (const struct bootnode *nodes, int numnodes)
++static int __init extract_lsb_from_nodes(const struct bootnode *nodes,
++ int numnodes)
+ {
+ int i, nodes_used = 0;
+ unsigned long start, end;
+@@ -140,51 +150,50 @@ int __init compute_hash_shift(struct bootnode *nodes, int numnodes)
+ shift);
+
+ if (populate_memnodemap(nodes, numnodes, shift) != 1) {
+- printk(KERN_INFO
+- "Your memory is not aligned you need to rebuild your kernel "
+- "with a bigger NODEMAPSIZE shift=%d\n",
+- shift);
++ printk(KERN_INFO "Your memory is not aligned you need to "
++ "rebuild your kernel with a bigger NODEMAPSIZE "
++ "shift=%d\n", shift);
+ return -1;
+ }
+ return shift;
+ }
- do {
- memcpy(tmpbuf, src, bsize);
-- xor(iv, tmpbuf, bsize);
-+ crypto_xor(iv, src, bsize);
- fn(crypto_cipher_tfm(tfm), src, iv);
-- memcpy(iv, src, bsize);
-- xor(iv, tmpbuf, bsize);
-+ memcpy(iv, tmpbuf, bsize);
-+ crypto_xor(iv, src, bsize);
+-#ifdef CONFIG_SPARSEMEM
+ int early_pfn_to_nid(unsigned long pfn)
+ {
+ return phys_to_nid(pfn << PAGE_SHIFT);
+ }
+-#endif
- src += bsize;
- } while ((nbytes -= bsize) >= bsize);
-@@ -107,7 +102,6 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
- struct crypto_blkcipher *tfm = desc->tfm;
- struct crypto_pcbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
- struct crypto_cipher *child = ctx->child;
-- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
- int err;
+-static void * __init
+-early_node_mem(int nodeid, unsigned long start, unsigned long end,
+- unsigned long size)
++static void * __init early_node_mem(int nodeid, unsigned long start,
++ unsigned long end, unsigned long size)
+ {
+ unsigned long mem = find_e820_area(start, end, size);
+ void *ptr;
++
+ if (mem != -1L)
+ return __va(mem);
+ ptr = __alloc_bootmem_nopanic(size,
+ SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS));
+ if (ptr == NULL) {
+ printk(KERN_ERR "Cannot find %lu bytes in node %d\n",
+- size, nodeid);
++ size, nodeid);
+ return NULL;
+ }
+ return ptr;
+ }
- blkcipher_walk_init(&walk, dst, src, nbytes);
-@@ -115,11 +109,11 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
+ /* Initialize bootmem allocator for a node */
+-void __init setup_node_bootmem(int nodeid, unsigned long start, unsigned long end)
+-{
+- unsigned long start_pfn, end_pfn, bootmap_pages, bootmap_size, bootmap_start;
+- unsigned long nodedata_phys;
++void __init setup_node_bootmem(int nodeid, unsigned long start,
++ unsigned long end)
++{
++ unsigned long start_pfn, end_pfn, bootmap_pages, bootmap_size;
++ unsigned long bootmap_start, nodedata_phys;
+ void *bootmap;
+ const int pgdat_size = round_up(sizeof(pg_data_t), PAGE_SIZE);
+
+- start = round_up(start, ZONE_ALIGN);
++ start = round_up(start, ZONE_ALIGN);
+
+- printk(KERN_INFO "Bootmem setup node %d %016lx-%016lx\n", nodeid, start, end);
++ printk(KERN_INFO "Bootmem setup node %d %016lx-%016lx\n", nodeid,
++ start, end);
+
+ start_pfn = start >> PAGE_SHIFT;
+ end_pfn = end >> PAGE_SHIFT;
+@@ -200,75 +209,55 @@ void __init setup_node_bootmem(int nodeid, unsigned long start, unsigned long en
+ NODE_DATA(nodeid)->node_spanned_pages = end_pfn - start_pfn;
+
+ /* Find a place for the bootmem map */
+- bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn);
++ bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn);
+ bootmap_start = round_up(nodedata_phys + pgdat_size, PAGE_SIZE);
+ bootmap = early_node_mem(nodeid, bootmap_start, end,
+ bootmap_pages<<PAGE_SHIFT);
+ if (bootmap == NULL) {
+ if (nodedata_phys < start || nodedata_phys >= end)
+- free_bootmem((unsigned long)node_data[nodeid],pgdat_size);
++ free_bootmem((unsigned long)node_data[nodeid],
++ pgdat_size);
+ node_data[nodeid] = NULL;
+ return;
+ }
+ bootmap_start = __pa(bootmap);
+- Dprintk("bootmap start %lu pages %lu\n", bootmap_start, bootmap_pages);
+-
++ Dprintk("bootmap start %lu pages %lu\n", bootmap_start, bootmap_pages);
++
+ bootmap_size = init_bootmem_node(NODE_DATA(nodeid),
+- bootmap_start >> PAGE_SHIFT,
+- start_pfn, end_pfn);
++ bootmap_start >> PAGE_SHIFT,
++ start_pfn, end_pfn);
+
+ free_bootmem_with_active_regions(nodeid, end);
+
+- reserve_bootmem_node(NODE_DATA(nodeid), nodedata_phys, pgdat_size);
+- reserve_bootmem_node(NODE_DATA(nodeid), bootmap_start, bootmap_pages<<PAGE_SHIFT);
++ reserve_bootmem_node(NODE_DATA(nodeid), nodedata_phys, pgdat_size);
++ reserve_bootmem_node(NODE_DATA(nodeid), bootmap_start,
++ bootmap_pages<<PAGE_SHIFT);
+ #ifdef CONFIG_ACPI_NUMA
+ srat_reserve_add_area(nodeid);
+ #endif
+ node_set_online(nodeid);
+-}
+-
+-/* Initialize final allocator for a zone */
+-void __init setup_node_zones(int nodeid)
+-{
+- unsigned long start_pfn, end_pfn, memmapsize, limit;
+-
+- start_pfn = node_start_pfn(nodeid);
+- end_pfn = node_end_pfn(nodeid);
+-
+- Dprintk(KERN_INFO "Setting up memmap for node %d %lx-%lx\n",
+- nodeid, start_pfn, end_pfn);
+-
+- /* Try to allocate mem_map at end to not fill up precious <4GB
+- memory. */
+- memmapsize = sizeof(struct page) * (end_pfn-start_pfn);
+- limit = end_pfn << PAGE_SHIFT;
+-#ifdef CONFIG_FLAT_NODE_MEM_MAP
+- NODE_DATA(nodeid)->node_mem_map =
+- __alloc_bootmem_core(NODE_DATA(nodeid)->bdata,
+- memmapsize, SMP_CACHE_BYTES,
+- round_down(limit - memmapsize, PAGE_SIZE),
+- limit);
+-#endif
+-}
++}
- while ((nbytes = walk.nbytes)) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
-- nbytes = crypto_pcbc_encrypt_inplace(desc, &walk, child,
-- xor);
-+ nbytes = crypto_pcbc_encrypt_inplace(desc, &walk,
-+ child);
- else
-- nbytes = crypto_pcbc_encrypt_segment(desc, &walk, child,
-- xor);
-+ nbytes = crypto_pcbc_encrypt_segment(desc, &walk,
-+ child);
- err = blkcipher_walk_done(desc, &walk, nbytes);
++/*
++ * There are unfortunately some poorly designed mainboards around that
++ * only connect memory to a single CPU. This breaks the 1:1 cpu->node
++ * mapping. To avoid this fill in the mapping for all possible CPUs,
++ * as the number of CPUs is not known yet. We round robin the existing
++ * nodes.
++ */
+ void __init numa_init_array(void)
+ {
+ int rr, i;
+- /* There are unfortunately some poorly designed mainboards around
+- that only connect memory to a single CPU. This breaks the 1:1 cpu->node
+- mapping. To avoid this fill in the mapping for all possible
+- CPUs, as the number of CPUs is not known yet.
+- We round robin the existing nodes. */
++
+ rr = first_node(node_online_map);
+ for (i = 0; i < NR_CPUS; i++) {
+- if (cpu_to_node(i) != NUMA_NO_NODE)
++ if (early_cpu_to_node(i) != NUMA_NO_NODE)
+ continue;
+- numa_set_node(i, rr);
++ numa_set_node(i, rr);
+ rr = next_node(rr, node_online_map);
+ if (rr == MAX_NUMNODES)
+ rr = first_node(node_online_map);
}
+-
+ }
-@@ -128,9 +122,7 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
+ #ifdef CONFIG_NUMA_EMU
+@@ -276,15 +265,17 @@ void __init numa_init_array(void)
+ char *cmdline __initdata;
- static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
+ /*
+- * Setups up nid to range from addr to addr + size. If the end boundary is
+- * greater than max_addr, then max_addr is used instead. The return value is 0
+- * if there is additional memory left for allocation past addr and -1 otherwise.
+- * addr is adjusted to be at the end of the node.
++ * Setups up nid to range from addr to addr + size. If the end
++ * boundary is greater than max_addr, then max_addr is used instead.
++ * The return value is 0 if there is additional memory left for
++ * allocation past addr and -1 otherwise. addr is adjusted to be at
++ * the end of the node.
+ */
+ static int __init setup_node_range(int nid, struct bootnode *nodes, u64 *addr,
+ u64 size, u64 max_addr)
{
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_decrypt;
-@@ -142,9 +134,9 @@ static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
+ int ret = 0;
++
+ nodes[nid].start = *addr;
+ *addr += size;
+ if (*addr >= max_addr) {
+@@ -335,6 +326,7 @@ static int __init split_nodes_equally(struct bootnode *nodes, u64 *addr,
- do {
- fn(crypto_cipher_tfm(tfm), dst, src);
-- xor(dst, iv, bsize);
-+ crypto_xor(dst, iv, bsize);
- memcpy(iv, src, bsize);
-- xor(iv, dst, bsize);
-+ crypto_xor(iv, dst, bsize);
+ for (i = node_start; i < num_nodes + node_start; i++) {
+ u64 end = *addr + size;
++
+ if (i < big)
+ end += FAKE_NODE_MIN_SIZE;
+ /*
+@@ -380,14 +372,9 @@ static int __init split_nodes_by_size(struct bootnode *nodes, u64 *addr,
+ static int __init numa_emulation(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ struct bootnode nodes[MAX_NUMNODES];
+- u64 addr = start_pfn << PAGE_SHIFT;
++ u64 size, addr = start_pfn << PAGE_SHIFT;
+ u64 max_addr = end_pfn << PAGE_SHIFT;
+- int num_nodes = 0;
+- int coeff_flag;
+- int coeff = -1;
+- int num = 0;
+- u64 size;
+- int i;
++ int num_nodes = 0, num = 0, coeff_flag, coeff = -1, i;
- src += bsize;
- dst += bsize;
-@@ -157,9 +149,7 @@ static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
+ memset(&nodes, 0, sizeof(nodes));
+ /*
+@@ -395,8 +382,9 @@ static int __init numa_emulation(unsigned long start_pfn, unsigned long end_pfn)
+ * system RAM into N fake nodes.
+ */
+ if (!strchr(cmdline, '*') && !strchr(cmdline, ',')) {
+- num_nodes = split_nodes_equally(nodes, &addr, max_addr, 0,
+- simple_strtol(cmdline, NULL, 0));
++ long n = simple_strtol(cmdline, NULL, 0);
++
++ num_nodes = split_nodes_equally(nodes, &addr, max_addr, 0, n);
+ if (num_nodes < 0)
+ return num_nodes;
+ goto out;
+@@ -483,46 +471,47 @@ out:
+ for_each_node_mask(i, node_possible_map) {
+ e820_register_active_regions(i, nodes[i].start >> PAGE_SHIFT,
+ nodes[i].end >> PAGE_SHIFT);
+- setup_node_bootmem(i, nodes[i].start, nodes[i].end);
++ setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+ }
+ acpi_fake_nodes(nodes, num_nodes);
+- numa_init_array();
+- return 0;
++ numa_init_array();
++ return 0;
+ }
+ #endif /* CONFIG_NUMA_EMU */
- static int crypto_pcbc_decrypt_inplace(struct blkcipher_desc *desc,
- struct blkcipher_walk *walk,
-- struct crypto_cipher *tfm,
-- void (*xor)(u8 *, const u8 *,
-- unsigned int))
-+ struct crypto_cipher *tfm)
- {
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
- crypto_cipher_alg(tfm)->cia_decrypt;
-@@ -172,9 +162,9 @@ static int crypto_pcbc_decrypt_inplace(struct blkcipher_desc *desc,
- do {
- memcpy(tmpbuf, src, bsize);
- fn(crypto_cipher_tfm(tfm), src, src);
-- xor(src, iv, bsize);
-+ crypto_xor(src, iv, bsize);
- memcpy(iv, tmpbuf, bsize);
-- xor(iv, src, bsize);
-+ crypto_xor(iv, src, bsize);
+ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
+-{
++{
+ int i;
- src += bsize;
- } while ((nbytes -= bsize) >= bsize);
-@@ -192,7 +182,6 @@ static int crypto_pcbc_decrypt(struct blkcipher_desc *desc,
- struct crypto_blkcipher *tfm = desc->tfm;
- struct crypto_pcbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
- struct crypto_cipher *child = ctx->child;
-- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
- int err;
+ nodes_clear(node_possible_map);
- blkcipher_walk_init(&walk, dst, src, nbytes);
-@@ -200,48 +189,17 @@ static int crypto_pcbc_decrypt(struct blkcipher_desc *desc,
+ #ifdef CONFIG_NUMA_EMU
+ if (cmdline && !numa_emulation(start_pfn, end_pfn))
+- return;
++ return;
+ nodes_clear(node_possible_map);
+ #endif
+
+ #ifdef CONFIG_ACPI_NUMA
+ if (!numa_off && !acpi_scan_nodes(start_pfn << PAGE_SHIFT,
+ end_pfn << PAGE_SHIFT))
+- return;
++ return;
+ nodes_clear(node_possible_map);
+ #endif
+
+ #ifdef CONFIG_K8_NUMA
+- if (!numa_off && !k8_scan_nodes(start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT))
++ if (!numa_off && !k8_scan_nodes(start_pfn<<PAGE_SHIFT,
++ end_pfn<<PAGE_SHIFT))
+ return;
+ nodes_clear(node_possible_map);
+ #endif
+ printk(KERN_INFO "%s\n",
+ numa_off ? "NUMA turned off" : "No NUMA configuration found");
+
+- printk(KERN_INFO "Faking a node at %016lx-%016lx\n",
++ printk(KERN_INFO "Faking a node at %016lx-%016lx\n",
+ start_pfn << PAGE_SHIFT,
+- end_pfn << PAGE_SHIFT);
+- /* setup dummy node covering all memory */
+- memnode_shift = 63;
++ end_pfn << PAGE_SHIFT);
++ /* setup dummy node covering all memory */
++ memnode_shift = 63;
+ memnodemap = memnode.embedded_map;
+ memnodemap[0] = 0;
+ nodes_clear(node_online_map);
+@@ -530,36 +519,48 @@ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
+ node_set(0, node_possible_map);
+ for (i = 0; i < NR_CPUS; i++)
+ numa_set_node(i, 0);
+- node_to_cpumask[0] = cpumask_of_cpu(0);
++ /* cpumask_of_cpu() may not be available during early startup */
++ memset(&node_to_cpumask_map[0], 0, sizeof(node_to_cpumask_map[0]));
++ cpu_set(0, node_to_cpumask_map[0]);
+ e820_register_active_regions(0, start_pfn, end_pfn);
+ setup_node_bootmem(0, start_pfn << PAGE_SHIFT, end_pfn << PAGE_SHIFT);
+ }
- while ((nbytes = walk.nbytes)) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
-- nbytes = crypto_pcbc_decrypt_inplace(desc, &walk, child,
-- xor);
-+ nbytes = crypto_pcbc_decrypt_inplace(desc, &walk,
-+ child);
- else
-- nbytes = crypto_pcbc_decrypt_segment(desc, &walk, child,
-- xor);
-+ nbytes = crypto_pcbc_decrypt_segment(desc, &walk,
-+ child);
- err = blkcipher_walk_done(desc, &walk, nbytes);
- }
+ __cpuinit void numa_add_cpu(int cpu)
+ {
+- set_bit(cpu, &node_to_cpumask[cpu_to_node(cpu)]);
+-}
++ set_bit(cpu,
++ (unsigned long *)&node_to_cpumask_map[early_cpu_to_node(cpu)]);
++}
- return err;
+ void __cpuinit numa_set_node(int cpu, int node)
+ {
++ int *cpu_to_node_map = x86_cpu_to_node_map_early_ptr;
++
+ cpu_pda(cpu)->nodenumber = node;
+- cpu_to_node(cpu) = node;
++
++ if(cpu_to_node_map)
++ cpu_to_node_map[cpu] = node;
++ else if(per_cpu_offset(cpu))
++ per_cpu(x86_cpu_to_node_map, cpu) = node;
++ else
++ Dprintk(KERN_INFO "Setting node for non-present cpu %d\n", cpu);
}
--static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
--{
-- do {
-- *a++ ^= *b++;
-- } while (--bs);
--}
--
--static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
--{
-- u32 *a = (u32 *)dst;
-- u32 *b = (u32 *)src;
--
-- do {
-- *a++ ^= *b++;
-- } while ((bs -= 4));
--}
--
--static void xor_64(u8 *a, const u8 *b, unsigned int bs)
--{
-- ((u32 *)a)[0] ^= ((u32 *)b)[0];
-- ((u32 *)a)[1] ^= ((u32 *)b)[1];
--}
+-unsigned long __init numa_free_all_bootmem(void)
+-{
+- int i;
++unsigned long __init numa_free_all_bootmem(void)
++{
+ unsigned long pages = 0;
+- for_each_online_node(i) {
++ int i;
++
++ for_each_online_node(i)
+ pages += free_all_bootmem_node(NODE_DATA(i));
+- }
++
+ return pages;
+-}
++}
+
+ void __init paging_init(void)
+-{
+- int i;
++{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
++
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+@@ -568,32 +569,27 @@ void __init paging_init(void)
+ sparse_memory_present_with_active_regions(MAX_NUMNODES);
+ sparse_init();
+
+- for_each_online_node(i) {
+- setup_node_zones(i);
+- }
-
--static void xor_128(u8 *a, const u8 *b, unsigned int bs)
--{
-- ((u32 *)a)[0] ^= ((u32 *)b)[0];
-- ((u32 *)a)[1] ^= ((u32 *)b)[1];
-- ((u32 *)a)[2] ^= ((u32 *)b)[2];
-- ((u32 *)a)[3] ^= ((u32 *)b)[3];
--}
+ free_area_init_nodes(max_zone_pfns);
+-}
++}
+
+ static __init int numa_setup(char *opt)
+-{
++{
+ if (!opt)
+ return -EINVAL;
+- if (!strncmp(opt,"off",3))
++ if (!strncmp(opt, "off", 3))
+ numa_off = 1;
+ #ifdef CONFIG_NUMA_EMU
+ if (!strncmp(opt, "fake=", 5))
+ cmdline = opt + 5;
+ #endif
+ #ifdef CONFIG_ACPI_NUMA
+- if (!strncmp(opt,"noacpi",6))
+- acpi_numa = -1;
+- if (!strncmp(opt,"hotadd=", 7))
++ if (!strncmp(opt, "noacpi", 6))
++ acpi_numa = -1;
++ if (!strncmp(opt, "hotadd=", 7))
+ hotadd_percent = simple_strtoul(opt+7, NULL, 10);
+ #endif
+ return 0;
+-}
-
- static int crypto_pcbc_init_tfm(struct crypto_tfm *tfm)
++}
+ early_param("numa", numa_setup);
+
+ /*
+@@ -611,38 +607,16 @@ early_param("numa", numa_setup);
+ void __init init_cpu_to_node(void)
{
- struct crypto_instance *inst = (void *)tfm->__crt_alg;
-@@ -249,22 +207,6 @@ static int crypto_pcbc_init_tfm(struct crypto_tfm *tfm)
- struct crypto_pcbc_ctx *ctx = crypto_tfm_ctx(tfm);
- struct crypto_cipher *cipher;
+ int i;
+- for (i = 0; i < NR_CPUS; i++) {
+- u8 apicid = x86_cpu_to_apicid_init[i];
++
++ for (i = 0; i < NR_CPUS; i++) {
++ u16 apicid = x86_cpu_to_apicid_init[i];
++
+ if (apicid == BAD_APICID)
+ continue;
+ if (apicid_to_node[apicid] == NUMA_NO_NODE)
+ continue;
+- numa_set_node(i,apicid_to_node[apicid]);
++ numa_set_node(i, apicid_to_node[apicid]);
+ }
+ }
-- switch (crypto_tfm_alg_blocksize(tfm)) {
-- case 8:
-- ctx->xor = xor_64;
-- break;
--
-- case 16:
-- ctx->xor = xor_128;
-- break;
--
-- default:
-- if (crypto_tfm_alg_blocksize(tfm) % 4)
-- ctx->xor = xor_byte;
-- else
-- ctx->xor = xor_quad;
-- }
+-EXPORT_SYMBOL(cpu_to_node);
+-EXPORT_SYMBOL(node_to_cpumask);
+-EXPORT_SYMBOL(memnode);
+-EXPORT_SYMBOL(node_data);
-
- cipher = crypto_spawn_cipher(spawn);
- if (IS_ERR(cipher))
- return PTR_ERR(cipher);
-@@ -304,8 +246,9 @@ static struct crypto_instance *crypto_pcbc_alloc(struct rtattr **tb)
- inst->alg.cra_alignmask = alg->cra_alignmask;
- inst->alg.cra_type = &crypto_blkcipher_type;
+-#ifdef CONFIG_DISCONTIGMEM
+-/*
+- * Functions to convert PFNs from/to per node page addresses.
+- * These are out of line because they are quite big.
+- * They could be all tuned by pre caching more state.
+- * Should do that.
+- */
-- if (!(alg->cra_blocksize % 4))
-- inst->alg.cra_alignmask |= 3;
-+ /* We access the data as u32s when xoring. */
-+ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
-+
- inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
- inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
- inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
-diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
+-int pfn_valid(unsigned long pfn)
+-{
+- unsigned nid;
+- if (pfn >= num_physpages)
+- return 0;
+- nid = pfn_to_nid(pfn);
+- if (nid == 0xff)
+- return 0;
+- return pfn >= node_start_pfn(nid) && (pfn) < node_end_pfn(nid);
+-}
+-EXPORT_SYMBOL(pfn_valid);
+-#endif
+diff --git a/arch/x86/mm/pageattr-test.c b/arch/x86/mm/pageattr-test.c
new file mode 100644
-index 0000000..1fa4e4d
+index 0000000..06353d4
--- /dev/null
-+++ b/crypto/salsa20_generic.c
-@@ -0,0 +1,255 @@
++++ b/arch/x86/mm/pageattr-test.c
+@@ -0,0 +1,224 @@
+/*
-+ * Salsa20: Salsa20 stream cipher algorithm
-+ *
-+ * Copyright (c) 2007 Tan Swee Heng <thesweeheng at gmail.com>
-+ *
-+ * Derived from:
-+ * - salsa20.c: Public domain C code by Daniel J. Bernstein <djb at cr.yp.to>
-+ *
-+ * Salsa20 is a stream cipher candidate in eSTREAM, the ECRYPT Stream
-+ * Cipher Project. It is designed by Daniel J. Bernstein <djb at cr.yp.to>.
-+ * More information about eSTREAM and Salsa20 can be found here:
-+ * http://www.ecrypt.eu.org/stream/
-+ * http://cr.yp.to/snuffle.html
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
++ * self test for change_page_attr.
+ *
++ * Clears the global bit on random pages in the direct mapping, then reverts
++ * and compares page tables forwards and afterwards.
+ */
-+
++#include <linux/bootmem.h>
++#include <linux/random.h>
++#include <linux/kernel.h>
+#include <linux/init.h>
-+#include <linux/module.h>
-+#include <linux/errno.h>
-+#include <linux/crypto.h>
-+#include <linux/types.h>
-+#include <crypto/algapi.h>
-+#include <asm/byteorder.h>
-+
-+#define SALSA20_IV_SIZE 8U
-+#define SALSA20_MIN_KEY_SIZE 16U
-+#define SALSA20_MAX_KEY_SIZE 32U
-+
-+/*
-+ * Start of code taken from D. J. Bernstein's reference implementation.
-+ * With some modifications and optimizations made to suit our needs.
-+ */
++#include <linux/mm.h>
+
-+/*
-+salsa20-ref.c version 20051118
-+D. J. Bernstein
-+Public domain.
-+*/
++#include <asm/cacheflush.h>
++#include <asm/pgtable.h>
++#include <asm/kdebug.h>
+
-+#define ROTATE(v,n) (((v) << (n)) | ((v) >> (32 - (n))))
-+#define XOR(v,w) ((v) ^ (w))
-+#define PLUS(v,w) (((v) + (w)))
-+#define PLUSONE(v) (PLUS((v),1))
-+#define U32TO8_LITTLE(p, v) \
-+ { (p)[0] = (v >> 0) & 0xff; (p)[1] = (v >> 8) & 0xff; \
-+ (p)[2] = (v >> 16) & 0xff; (p)[3] = (v >> 24) & 0xff; }
-+#define U8TO32_LITTLE(p) \
-+ (((u32)((p)[0]) ) | ((u32)((p)[1]) << 8) | \
-+ ((u32)((p)[2]) << 16) | ((u32)((p)[3]) << 24) )
++enum {
++ NTEST = 4000,
++#ifdef CONFIG_X86_64
++ LPS = (1 << PMD_SHIFT),
++#elif defined(CONFIG_X86_PAE)
++ LPS = (1 << PMD_SHIFT),
++#else
++ LPS = (1 << 22),
++#endif
++ GPS = (1<<30)
++};
+
-+struct salsa20_ctx
-+{
-+ u32 input[16];
++struct split_state {
++ long lpg, gpg, spg, exec;
++ long min_exec, max_exec;
+};
+
-+static void salsa20_wordtobyte(u8 output[64], const u32 input[16])
++static __init int print_split(struct split_state *s)
+{
-+ u32 x[16];
-+ int i;
++ long i, expected, missed = 0;
++ int printed = 0;
++ int err = 0;
+
-+ memcpy(x, input, sizeof(x));
-+ for (i = 20; i > 0; i -= 2) {
-+ x[ 4] = XOR(x[ 4],ROTATE(PLUS(x[ 0],x[12]), 7));
-+ x[ 8] = XOR(x[ 8],ROTATE(PLUS(x[ 4],x[ 0]), 9));
-+ x[12] = XOR(x[12],ROTATE(PLUS(x[ 8],x[ 4]),13));
-+ x[ 0] = XOR(x[ 0],ROTATE(PLUS(x[12],x[ 8]),18));
-+ x[ 9] = XOR(x[ 9],ROTATE(PLUS(x[ 5],x[ 1]), 7));
-+ x[13] = XOR(x[13],ROTATE(PLUS(x[ 9],x[ 5]), 9));
-+ x[ 1] = XOR(x[ 1],ROTATE(PLUS(x[13],x[ 9]),13));
-+ x[ 5] = XOR(x[ 5],ROTATE(PLUS(x[ 1],x[13]),18));
-+ x[14] = XOR(x[14],ROTATE(PLUS(x[10],x[ 6]), 7));
-+ x[ 2] = XOR(x[ 2],ROTATE(PLUS(x[14],x[10]), 9));
-+ x[ 6] = XOR(x[ 6],ROTATE(PLUS(x[ 2],x[14]),13));
-+ x[10] = XOR(x[10],ROTATE(PLUS(x[ 6],x[ 2]),18));
-+ x[ 3] = XOR(x[ 3],ROTATE(PLUS(x[15],x[11]), 7));
-+ x[ 7] = XOR(x[ 7],ROTATE(PLUS(x[ 3],x[15]), 9));
-+ x[11] = XOR(x[11],ROTATE(PLUS(x[ 7],x[ 3]),13));
-+ x[15] = XOR(x[15],ROTATE(PLUS(x[11],x[ 7]),18));
-+ x[ 1] = XOR(x[ 1],ROTATE(PLUS(x[ 0],x[ 3]), 7));
-+ x[ 2] = XOR(x[ 2],ROTATE(PLUS(x[ 1],x[ 0]), 9));
-+ x[ 3] = XOR(x[ 3],ROTATE(PLUS(x[ 2],x[ 1]),13));
-+ x[ 0] = XOR(x[ 0],ROTATE(PLUS(x[ 3],x[ 2]),18));
-+ x[ 6] = XOR(x[ 6],ROTATE(PLUS(x[ 5],x[ 4]), 7));
-+ x[ 7] = XOR(x[ 7],ROTATE(PLUS(x[ 6],x[ 5]), 9));
-+ x[ 4] = XOR(x[ 4],ROTATE(PLUS(x[ 7],x[ 6]),13));
-+ x[ 5] = XOR(x[ 5],ROTATE(PLUS(x[ 4],x[ 7]),18));
-+ x[11] = XOR(x[11],ROTATE(PLUS(x[10],x[ 9]), 7));
-+ x[ 8] = XOR(x[ 8],ROTATE(PLUS(x[11],x[10]), 9));
-+ x[ 9] = XOR(x[ 9],ROTATE(PLUS(x[ 8],x[11]),13));
-+ x[10] = XOR(x[10],ROTATE(PLUS(x[ 9],x[ 8]),18));
-+ x[12] = XOR(x[12],ROTATE(PLUS(x[15],x[14]), 7));
-+ x[13] = XOR(x[13],ROTATE(PLUS(x[12],x[15]), 9));
-+ x[14] = XOR(x[14],ROTATE(PLUS(x[13],x[12]),13));
-+ x[15] = XOR(x[15],ROTATE(PLUS(x[14],x[13]),18));
++ s->lpg = s->gpg = s->spg = s->exec = 0;
++ s->min_exec = ~0UL;
++ s->max_exec = 0;
++ for (i = 0; i < max_pfn_mapped; ) {
++ unsigned long addr = (unsigned long)__va(i << PAGE_SHIFT);
++ int level;
++ pte_t *pte;
++
++ pte = lookup_address(addr, &level);
++ if (!pte) {
++ if (!printed) {
++ dump_pagetable(addr);
++ printk(KERN_INFO "CPA %lx no pte level %d\n",
++ addr, level);
++ printed = 1;
++ }
++ missed++;
++ i++;
++ continue;
++ }
++
++ if (level == PG_LEVEL_1G && sizeof(long) == 8) {
++ s->gpg++;
++ i += GPS/PAGE_SIZE;
++ } else if (level == PG_LEVEL_2M) {
++ if (!(pte_val(*pte) & _PAGE_PSE)) {
++ printk(KERN_ERR
++ "%lx level %d but not PSE %Lx\n",
++ addr, level, (u64)pte_val(*pte));
++ err = 1;
++ }
++ s->lpg++;
++ i += LPS/PAGE_SIZE;
++ } else {
++ s->spg++;
++ i++;
++ }
++ if (!(pte_val(*pte) & _PAGE_NX)) {
++ s->exec++;
++ if (addr < s->min_exec)
++ s->min_exec = addr;
++ if (addr > s->max_exec)
++ s->max_exec = addr;
++ }
+ }
-+ for (i = 0; i < 16; ++i)
-+ x[i] = PLUS(x[i],input[i]);
-+ for (i = 0; i < 16; ++i)
-+ U32TO8_LITTLE(output + 4 * i,x[i]);
++ printk(KERN_INFO
++ "CPA mapping 4k %lu large %lu gb %lu x %lu[%lx-%lx] miss %lu\n",
++ s->spg, s->lpg, s->gpg, s->exec,
++ s->min_exec != ~0UL ? s->min_exec : 0, s->max_exec, missed);
++
++ expected = (s->gpg*GPS + s->lpg*LPS)/PAGE_SIZE + s->spg + missed;
++ if (expected != i) {
++ printk(KERN_ERR "CPA max_pfn_mapped %lu but expected %lu\n",
++ max_pfn_mapped, expected);
++ return 1;
++ }
++ return err;
+}
+
-+static const char sigma[16] = "expand 32-byte k";
-+static const char tau[16] = "expand 16-byte k";
++static unsigned long __initdata addr[NTEST];
++static unsigned int __initdata len[NTEST];
+
-+static void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k, u32 kbytes)
++/* Change the global bit on random pages in the direct mapping */
++static __init int exercise_pageattr(void)
+{
-+ const char *constants;
++ struct split_state sa, sb, sc;
++ unsigned long *bm;
++ pte_t *pte, pte0;
++ int failed = 0;
++ int level;
++ int i, k;
++ int err;
+
-+ ctx->input[1] = U8TO32_LITTLE(k + 0);
-+ ctx->input[2] = U8TO32_LITTLE(k + 4);
-+ ctx->input[3] = U8TO32_LITTLE(k + 8);
-+ ctx->input[4] = U8TO32_LITTLE(k + 12);
-+ if (kbytes == 32) { /* recommended */
-+ k += 16;
-+ constants = sigma;
-+ } else { /* kbytes == 16 */
-+ constants = tau;
++ printk(KERN_INFO "CPA exercising pageattr\n");
++
++ bm = vmalloc((max_pfn_mapped + 7) / 8);
++ if (!bm) {
++ printk(KERN_ERR "CPA Cannot vmalloc bitmap\n");
++ return -ENOMEM;
+ }
-+ ctx->input[11] = U8TO32_LITTLE(k + 0);
-+ ctx->input[12] = U8TO32_LITTLE(k + 4);
-+ ctx->input[13] = U8TO32_LITTLE(k + 8);
-+ ctx->input[14] = U8TO32_LITTLE(k + 12);
-+ ctx->input[0] = U8TO32_LITTLE(constants + 0);
-+ ctx->input[5] = U8TO32_LITTLE(constants + 4);
-+ ctx->input[10] = U8TO32_LITTLE(constants + 8);
-+ ctx->input[15] = U8TO32_LITTLE(constants + 12);
-+}
++ memset(bm, 0, (max_pfn_mapped + 7) / 8);
+
-+static void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv)
-+{
-+ ctx->input[6] = U8TO32_LITTLE(iv + 0);
-+ ctx->input[7] = U8TO32_LITTLE(iv + 4);
-+ ctx->input[8] = 0;
-+ ctx->input[9] = 0;
-+}
++ failed += print_split(&sa);
++ srandom32(100);
+
-+static void salsa20_encrypt_bytes(struct salsa20_ctx *ctx, u8 *dst,
-+ const u8 *src, unsigned int bytes)
-+{
-+ u8 buf[64];
++ for (i = 0; i < NTEST; i++) {
++ unsigned long pfn = random32() % max_pfn_mapped;
+
-+ if (dst != src)
-+ memcpy(dst, src, bytes);
++ addr[i] = (unsigned long)__va(pfn << PAGE_SHIFT);
++ len[i] = random32() % 100;
++ len[i] = min_t(unsigned long, len[i], max_pfn_mapped - pfn - 1);
+
-+ while (bytes) {
-+ salsa20_wordtobyte(buf, ctx->input);
++ if (len[i] == 0)
++ len[i] = 1;
+
-+ ctx->input[8] = PLUSONE(ctx->input[8]);
-+ if (!ctx->input[8])
-+ ctx->input[9] = PLUSONE(ctx->input[9]);
++ pte = NULL;
++ pte0 = pfn_pte(0, __pgprot(0)); /* shut gcc up */
+
-+ if (bytes <= 64) {
-+ crypto_xor(dst, buf, bytes);
-+ return;
++ for (k = 0; k < len[i]; k++) {
++ pte = lookup_address(addr[i] + k*PAGE_SIZE, &level);
++ if (!pte || pgprot_val(pte_pgprot(*pte)) == 0) {
++ addr[i] = 0;
++ break;
++ }
++ if (k == 0) {
++ pte0 = *pte;
++ } else {
++ if (pgprot_val(pte_pgprot(*pte)) !=
++ pgprot_val(pte_pgprot(pte0))) {
++ len[i] = k;
++ break;
++ }
++ }
++ if (test_bit(pfn + k, bm)) {
++ len[i] = k;
++ break;
++ }
++ __set_bit(pfn + k, bm);
++ }
++ if (!addr[i] || !pte || !k) {
++ addr[i] = 0;
++ continue;
+ }
+
-+ crypto_xor(dst, buf, 64);
-+ bytes -= 64;
-+ dst += 64;
-+ }
-+}
-+
-+/*
-+ * End of code taken from D. J. Bernstein's reference implementation.
-+ */
++ err = change_page_attr_clear(addr[i], len[i],
++ __pgprot(_PAGE_GLOBAL));
++ if (err < 0) {
++ printk(KERN_ERR "CPA %d failed %d\n", i, err);
++ failed++;
++ }
+
-+static int setkey(struct crypto_tfm *tfm, const u8 *key,
-+ unsigned int keysize)
-+{
-+ struct salsa20_ctx *ctx = crypto_tfm_ctx(tfm);
-+ salsa20_keysetup(ctx, key, keysize);
-+ return 0;
-+}
++ pte = lookup_address(addr[i], &level);
++ if (!pte || pte_global(*pte) || pte_huge(*pte)) {
++ printk(KERN_ERR "CPA %lx: bad pte %Lx\n", addr[i],
++ pte ? (u64)pte_val(*pte) : 0ULL);
++ failed++;
++ }
++ if (level != PG_LEVEL_4K) {
++ printk(KERN_ERR "CPA %lx: unexpected level %d\n",
++ addr[i], level);
++ failed++;
++ }
+
-+static int encrypt(struct blkcipher_desc *desc,
-+ struct scatterlist *dst, struct scatterlist *src,
-+ unsigned int nbytes)
-+{
-+ struct blkcipher_walk walk;
-+ struct crypto_blkcipher *tfm = desc->tfm;
-+ struct salsa20_ctx *ctx = crypto_blkcipher_ctx(tfm);
-+ int err;
++ }
++ vfree(bm);
+
-+ blkcipher_walk_init(&walk, dst, src, nbytes);
-+ err = blkcipher_walk_virt_block(desc, &walk, 64);
++ failed += print_split(&sb);
+
-+ salsa20_ivsetup(ctx, walk.iv);
++ printk(KERN_INFO "CPA reverting everything\n");
++ for (i = 0; i < NTEST; i++) {
++ if (!addr[i])
++ continue;
++ pte = lookup_address(addr[i], &level);
++ if (!pte) {
++ printk(KERN_ERR "CPA lookup of %lx failed\n", addr[i]);
++ failed++;
++ continue;
++ }
++ err = change_page_attr_set(addr[i], len[i],
++ __pgprot(_PAGE_GLOBAL));
++ if (err < 0) {
++ printk(KERN_ERR "CPA reverting failed: %d\n", err);
++ failed++;
++ }
++ pte = lookup_address(addr[i], &level);
++ if (!pte || !pte_global(*pte)) {
++ printk(KERN_ERR "CPA %lx: bad pte after revert %Lx\n",
++ addr[i], pte ? (u64)pte_val(*pte) : 0ULL);
++ failed++;
++ }
+
-+ if (likely(walk.nbytes == nbytes))
-+ {
-+ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-+ walk.src.virt.addr, nbytes);
-+ return blkcipher_walk_done(desc, &walk, 0);
+ }
+
-+ while (walk.nbytes >= 64) {
-+ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-+ walk.src.virt.addr,
-+ walk.nbytes - (walk.nbytes % 64));
-+ err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64);
-+ }
++ failed += print_split(&sc);
+
-+ if (walk.nbytes) {
-+ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-+ walk.src.virt.addr, walk.nbytes);
-+ err = blkcipher_walk_done(desc, &walk, 0);
++ if (failed) {
++ printk(KERN_ERR "CPA selftests NOT PASSED. Please report.\n");
++ WARN_ON(1);
++ } else {
++ printk(KERN_INFO "CPA selftests PASSED\n");
+ }
+
-+ return err;
++ return 0;
+}
++module_init(exercise_pageattr);
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+new file mode 100644
+index 0000000..1cc6607
+--- /dev/null
++++ b/arch/x86/mm/pageattr.c
+@@ -0,0 +1,564 @@
++/*
++ * Copyright 2002 Andi Kleen, SuSE Labs.
++ * Thanks to Ben LaHaise for precious feedback.
++ */
++#include <linux/highmem.h>
++#include <linux/bootmem.h>
++#include <linux/module.h>
++#include <linux/sched.h>
++#include <linux/slab.h>
++#include <linux/mm.h>
+
-+static struct crypto_alg alg = {
-+ .cra_name = "salsa20",
-+ .cra_driver_name = "salsa20-generic",
-+ .cra_priority = 100,
-+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
-+ .cra_type = &crypto_blkcipher_type,
-+ .cra_blocksize = 1,
-+ .cra_ctxsize = sizeof(struct salsa20_ctx),
-+ .cra_alignmask = 3,
-+ .cra_module = THIS_MODULE,
-+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
-+ .cra_u = {
-+ .blkcipher = {
-+ .setkey = setkey,
-+ .encrypt = encrypt,
-+ .decrypt = encrypt,
-+ .min_keysize = SALSA20_MIN_KEY_SIZE,
-+ .max_keysize = SALSA20_MAX_KEY_SIZE,
-+ .ivsize = SALSA20_IV_SIZE,
-+ }
-+ }
-+};
-+
-+static int __init init(void)
-+{
-+ return crypto_register_alg(&alg);
-+}
++#include <asm/e820.h>
++#include <asm/processor.h>
++#include <asm/tlbflush.h>
++#include <asm/sections.h>
++#include <asm/uaccess.h>
++#include <asm/pgalloc.h>
+
-+static void __exit fini(void)
++static inline int
++within(unsigned long addr, unsigned long start, unsigned long end)
+{
-+ crypto_unregister_alg(&alg);
++ return addr >= start && addr < end;
+}
+
-+module_init(init);
-+module_exit(fini);
-+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm");
-+MODULE_ALIAS("salsa20");
-diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
-index b9bbda0..9aeeb52 100644
---- a/crypto/scatterwalk.c
-+++ b/crypto/scatterwalk.c
-@@ -13,6 +13,8 @@
- * any later version.
- *
- */
-+
-+#include <crypto/scatterwalk.h>
- #include <linux/kernel.h>
- #include <linux/mm.h>
- #include <linux/module.h>
-@@ -20,9 +22,6 @@
- #include <linux/highmem.h>
- #include <linux/scatterlist.h>
-
--#include "internal.h"
--#include "scatterwalk.h"
--
- static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
- {
- void *src = out ? buf : sgdata;
-@@ -106,6 +105,9 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
- struct scatter_walk walk;
- unsigned int offset = 0;
-
-+ if (!nbytes)
-+ return;
-+
- for (;;) {
- scatterwalk_start(&walk, sg);
-
-@@ -113,7 +115,7 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
- break;
-
- offset += sg->length;
-- sg = sg_next(sg);
-+ sg = scatterwalk_sg_next(sg);
- }
-
- scatterwalk_advance(&walk, start - offset);
-diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h
-deleted file mode 100644
-index 87ed681..0000000
---- a/crypto/scatterwalk.h
-+++ /dev/null
-@@ -1,80 +0,0 @@
--/*
-- * Cryptographic API.
-- *
-- * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
-- * Copyright (c) 2002 Adam J. Richter <adam at yggdrasil.com>
-- * Copyright (c) 2004 Jean-Luc Cooke <jlcooke at certainkey.com>
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of the GNU General Public License as published by the Free
-- * Software Foundation; either version 2 of the License, or (at your option)
-- * any later version.
-- *
-- */
--
--#ifndef _CRYPTO_SCATTERWALK_H
--#define _CRYPTO_SCATTERWALK_H
--
--#include <linux/mm.h>
--#include <linux/scatterlist.h>
--
--#include "internal.h"
--
--static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
--{
-- return (++sg)->length ? sg : (void *) sg_page(sg);
--}
--
--static inline unsigned long scatterwalk_samebuf(struct scatter_walk *walk_in,
-- struct scatter_walk *walk_out)
--{
-- return !(((sg_page(walk_in->sg) - sg_page(walk_out->sg)) << PAGE_SHIFT) +
-- (int)(walk_in->offset - walk_out->offset));
--}
--
--static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
--{
-- unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
-- unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
-- return len_this_page > len ? len : len_this_page;
--}
--
--static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
-- unsigned int nbytes)
--{
-- unsigned int len_this_page = scatterwalk_pagelen(walk);
-- return nbytes > len_this_page ? len_this_page : nbytes;
--}
--
--static inline void scatterwalk_advance(struct scatter_walk *walk,
-- unsigned int nbytes)
--{
-- walk->offset += nbytes;
--}
--
--static inline unsigned int scatterwalk_aligned(struct scatter_walk *walk,
-- unsigned int alignmask)
--{
-- return !(walk->offset & alignmask);
--}
--
--static inline struct page *scatterwalk_page(struct scatter_walk *walk)
--{
-- return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
--}
--
--static inline void scatterwalk_unmap(void *vaddr, int out)
--{
-- crypto_kunmap(vaddr, out);
--}
--
--void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
--void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
-- size_t nbytes, int out);
--void *scatterwalk_map(struct scatter_walk *walk, int out);
--void scatterwalk_done(struct scatter_walk *walk, int out, int more);
--
--void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
-- unsigned int start, unsigned int nbytes, int out);
--
--#endif /* _CRYPTO_SCATTERWALK_H */
-diff --git a/crypto/seqiv.c b/crypto/seqiv.c
-new file mode 100644
-index 0000000..b903aab
---- /dev/null
-+++ b/crypto/seqiv.c
-@@ -0,0 +1,345 @@
+/*
-+ * seqiv: Sequence Number IV Generator
-+ *
-+ * This generator generates an IV based on a sequence number by xoring it
-+ * with a salt. This algorithm is mainly useful for CTR and similar modes.
-+ *
-+ * Copyright (c) 2007 Herbert Xu <herbert at gondor.apana.org.au>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of the GNU General Public License as published by the Free
-+ * Software Foundation; either version 2 of the License, or (at your option)
-+ * any later version.
++ * Flushing functions
++ */
++
++/**
++ * clflush_cache_range - flush a cache range with clflush
++ * @addr: virtual start address
++ * @size: number of bytes to flush
+ *
++ * clflush is an unordered instruction which needs fencing with mfence
++ * to avoid ordering issues.
+ */
++void clflush_cache_range(void *vaddr, unsigned int size)
++{
++ void *vend = vaddr + size - 1;
+
-+#include <crypto/internal/aead.h>
-+#include <crypto/internal/skcipher.h>
-+#include <linux/err.h>
-+#include <linux/init.h>
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/random.h>
-+#include <linux/spinlock.h>
-+#include <linux/string.h>
++ mb();
+
-+struct seqiv_ctx {
-+ spinlock_t lock;
-+ u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
-+};
++ for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
++ clflush(vaddr);
++ /*
++ * Flush any possible final partial cacheline:
++ */
++ clflush(vend);
+
-+static void seqiv_complete2(struct skcipher_givcrypt_request *req, int err)
++ mb();
++}
++
++static void __cpa_flush_all(void *arg)
+{
-+ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
-+ struct crypto_ablkcipher *geniv;
++ /*
++ * Flush all to work around Errata in early athlons regarding
++ * large page flushing.
++ */
++ __flush_tlb_all();
+
-+ if (err == -EINPROGRESS)
-+ return;
++ if (boot_cpu_data.x86_model >= 4)
++ wbinvd();
++}
+
-+ if (err)
-+ goto out;
++static void cpa_flush_all(void)
++{
++ BUG_ON(irqs_disabled());
+
-+ geniv = skcipher_givcrypt_reqtfm(req);
-+ memcpy(req->creq.info, subreq->info, crypto_ablkcipher_ivsize(geniv));
++ on_each_cpu(__cpa_flush_all, NULL, 1, 1);
++}
+
-+out:
-+ kfree(subreq->info);
++static void __cpa_flush_range(void *arg)
++{
++ /*
++ * We could optimize that further and do individual per page
++ * tlb invalidates for a low number of pages. Caveat: we must
++ * flush the high aliases on 64bit as well.
++ */
++ __flush_tlb_all();
+}
+
-+static void seqiv_complete(struct crypto_async_request *base, int err)
++static void cpa_flush_range(unsigned long start, int numpages)
+{
-+ struct skcipher_givcrypt_request *req = base->data;
++ unsigned int i, level;
++ unsigned long addr;
+
-+ seqiv_complete2(req, err);
-+ skcipher_givcrypt_complete(req, err);
++ BUG_ON(irqs_disabled());
++ WARN_ON(PAGE_ALIGN(start) != start);
++
++ on_each_cpu(__cpa_flush_range, NULL, 1, 1);
++
++ /*
++ * We only need to flush on one CPU,
++ * clflush is a MESI-coherent instruction that
++ * will cause all other CPUs to flush the same
++ * cachelines:
++ */
++ for (i = 0, addr = start; i < numpages; i++, addr += PAGE_SIZE) {
++ pte_t *pte = lookup_address(addr, &level);
++
++ /*
++ * Only flush present addresses:
++ */
++ if (pte && pte_present(*pte))
++ clflush_cache_range((void *) addr, PAGE_SIZE);
++ }
+}
+
-+static void seqiv_aead_complete2(struct aead_givcrypt_request *req, int err)
++/*
++ * Certain areas of memory on x86 require very specific protection flags,
++ * for example the BIOS area or kernel text. Callers don't always get this
++ * right (again, ioremap() on BIOS memory is not uncommon) so this function
++ * checks and fixes these known static required protection bits.
++ */
++static inline pgprot_t static_protections(pgprot_t prot, unsigned long address)
+{
-+ struct aead_request *subreq = aead_givcrypt_reqctx(req);
-+ struct crypto_aead *geniv;
++ pgprot_t forbidden = __pgprot(0);
+
-+ if (err == -EINPROGRESS)
-+ return;
++ /*
++ * The BIOS area between 640k and 1Mb needs to be executable for
++ * PCI BIOS based config access (CONFIG_PCI_GOBIOS) support.
++ */
++ if (within(__pa(address), BIOS_BEGIN, BIOS_END))
++ pgprot_val(forbidden) |= _PAGE_NX;
+
-+ if (err)
-+ goto out;
++ /*
++ * The kernel text needs to be executable for obvious reasons
++ * Does not cover __inittext since that is gone later on
++ */
++ if (within(address, (unsigned long)_text, (unsigned long)_etext))
++ pgprot_val(forbidden) |= _PAGE_NX;
+
-+ geniv = aead_givcrypt_reqtfm(req);
-+ memcpy(req->areq.iv, subreq->iv, crypto_aead_ivsize(geniv));
++#ifdef CONFIG_DEBUG_RODATA
++ /* The .rodata section needs to be read-only */
++ if (within(address, (unsigned long)__start_rodata,
++ (unsigned long)__end_rodata))
++ pgprot_val(forbidden) |= _PAGE_RW;
++#endif
+
-+out:
-+ kfree(subreq->iv);
++ prot = __pgprot(pgprot_val(prot) & ~pgprot_val(forbidden));
++
++ return prot;
+}
+
-+static void seqiv_aead_complete(struct crypto_async_request *base, int err)
++pte_t *lookup_address(unsigned long address, int *level)
+{
-+ struct aead_givcrypt_request *req = base->data;
++ pgd_t *pgd = pgd_offset_k(address);
++ pud_t *pud;
++ pmd_t *pmd;
+
-+ seqiv_aead_complete2(req, err);
-+ aead_givcrypt_complete(req, err);
++ *level = PG_LEVEL_NONE;
++
++ if (pgd_none(*pgd))
++ return NULL;
++ pud = pud_offset(pgd, address);
++ if (pud_none(*pud))
++ return NULL;
++ pmd = pmd_offset(pud, address);
++ if (pmd_none(*pmd))
++ return NULL;
++
++ *level = PG_LEVEL_2M;
++ if (pmd_large(*pmd))
++ return (pte_t *)pmd;
++
++ *level = PG_LEVEL_4K;
++ return pte_offset_kernel(pmd, address);
+}
+
-+static void seqiv_geniv(struct seqiv_ctx *ctx, u8 *info, u64 seq,
-+ unsigned int ivsize)
++static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
+{
-+ unsigned int len = ivsize;
++ /* change init_mm */
++ set_pte_atomic(kpte, pte);
++#ifdef CONFIG_X86_32
++ if (!SHARED_KERNEL_PMD) {
++ struct page *page;
+
-+ if (ivsize > sizeof(u64)) {
-+ memset(info, 0, ivsize - sizeof(u64));
-+ len = sizeof(u64);
++ list_for_each_entry(page, &pgd_list, lru) {
++ pgd_t *pgd;
++ pud_t *pud;
++ pmd_t *pmd;
++
++ pgd = (pgd_t *)page_address(page) + pgd_index(address);
++ pud = pud_offset(pgd, address);
++ pmd = pmd_offset(pud, address);
++ set_pte_atomic((pte_t *)pmd, pte);
++ }
+ }
-+ seq = cpu_to_be64(seq);
-+ memcpy(info + ivsize - len, &seq, len);
-+ crypto_xor(info, ctx->salt, ivsize);
++#endif
+}
+
-+static int seqiv_givencrypt(struct skcipher_givcrypt_request *req)
++static int split_large_page(pte_t *kpte, unsigned long address)
+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
-+ crypto_completion_t complete;
-+ void *data;
-+ u8 *info;
-+ unsigned int ivsize;
-+ int err;
-+
-+ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++ pgprot_t ref_prot = pte_pgprot(pte_clrhuge(*kpte));
++ gfp_t gfp_flags = GFP_KERNEL;
++ unsigned long flags;
++ unsigned long addr;
++ pte_t *pbase, *tmp;
++ struct page *base;
++ unsigned int i, level;
++
++#ifdef CONFIG_DEBUG_PAGEALLOC
++ gfp_flags = __GFP_HIGH | __GFP_NOFAIL | __GFP_NOWARN;
++ gfp_flags = GFP_ATOMIC | __GFP_NOWARN;
++#endif
++ base = alloc_pages(gfp_flags, 0);
++ if (!base)
++ return -ENOMEM;
+
-+ complete = req->creq.base.complete;
-+ data = req->creq.base.data;
-+ info = req->creq.info;
++ spin_lock_irqsave(&pgd_lock, flags);
++ /*
++ * Check for races, another CPU might have split this page
++ * up for us already:
++ */
++ tmp = lookup_address(address, &level);
++ if (tmp != kpte) {
++ WARN_ON_ONCE(1);
++ goto out_unlock;
++ }
+
-+ ivsize = crypto_ablkcipher_ivsize(geniv);
++ address = __pa(address);
++ addr = address & LARGE_PAGE_MASK;
++ pbase = (pte_t *)page_address(base);
++#ifdef CONFIG_X86_32
++ paravirt_alloc_pt(&init_mm, page_to_pfn(base));
++#endif
+
-+ if (unlikely(!IS_ALIGNED((unsigned long)info,
-+ crypto_ablkcipher_alignmask(geniv) + 1))) {
-+ info = kmalloc(ivsize, req->creq.base.flags &
-+ CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
-+ GFP_ATOMIC);
-+ if (!info)
-+ return -ENOMEM;
++ pgprot_val(ref_prot) &= ~_PAGE_NX;
++ for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE)
++ set_pte(&pbase[i], pfn_pte(addr >> PAGE_SHIFT, ref_prot));
+
-+ complete = seqiv_complete;
-+ data = req;
-+ }
++ /*
++ * Install the new, split up pagetable. Important detail here:
++ *
++ * On Intel the NX bit of all levels must be cleared to make a
++ * page executable. See section 4.13.2 of Intel 64 and IA-32
++ * Architectures Software Developer's Manual).
++ */
++ ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte)));
++ __set_pmd_pte(kpte, address, mk_pte(base, ref_prot));
++ base = NULL;
+
-+ ablkcipher_request_set_callback(subreq, req->creq.base.flags, complete,
-+ data);
-+ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
-+ req->creq.nbytes, info);
++out_unlock:
++ spin_unlock_irqrestore(&pgd_lock, flags);
+
-+ seqiv_geniv(ctx, info, req->seq, ivsize);
-+ memcpy(req->giv, info, ivsize);
++ if (base)
++ __free_pages(base, 0);
+
-+ err = crypto_ablkcipher_encrypt(subreq);
-+ if (unlikely(info != req->creq.info))
-+ seqiv_complete2(req, err);
-+ return err;
++ return 0;
+}
+
-+static int seqiv_aead_givencrypt(struct aead_givcrypt_request *req)
++static int
++__change_page_attr(unsigned long address, unsigned long pfn,
++ pgprot_t mask_set, pgprot_t mask_clr)
+{
-+ struct crypto_aead *geniv = aead_givcrypt_reqtfm(req);
-+ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
-+ struct aead_request *areq = &req->areq;
-+ struct aead_request *subreq = aead_givcrypt_reqctx(req);
-+ crypto_completion_t complete;
-+ void *data;
-+ u8 *info;
-+ unsigned int ivsize;
-+ int err;
++ struct page *kpte_page;
++ int level, err = 0;
++ pte_t *kpte;
+
-+ aead_request_set_tfm(subreq, aead_geniv_base(geniv));
++#ifdef CONFIG_X86_32
++ BUG_ON(pfn > max_low_pfn);
++#endif
+
-+ complete = areq->base.complete;
-+ data = areq->base.data;
-+ info = areq->iv;
++repeat:
++ kpte = lookup_address(address, &level);
++ if (!kpte)
++ return -EINVAL;
+
-+ ivsize = crypto_aead_ivsize(geniv);
++ kpte_page = virt_to_page(kpte);
++ BUG_ON(PageLRU(kpte_page));
++ BUG_ON(PageCompound(kpte_page));
+
-+ if (unlikely(!IS_ALIGNED((unsigned long)info,
-+ crypto_aead_alignmask(geniv) + 1))) {
-+ info = kmalloc(ivsize, areq->base.flags &
-+ CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
-+ GFP_ATOMIC);
-+ if (!info)
-+ return -ENOMEM;
++ if (level == PG_LEVEL_4K) {
++ pgprot_t new_prot = pte_pgprot(*kpte);
++ pte_t new_pte, old_pte = *kpte;
+
-+ complete = seqiv_aead_complete;
-+ data = req;
-+ }
++ pgprot_val(new_prot) &= ~pgprot_val(mask_clr);
++ pgprot_val(new_prot) |= pgprot_val(mask_set);
+
-+ aead_request_set_callback(subreq, areq->base.flags, complete, data);
-+ aead_request_set_crypt(subreq, areq->src, areq->dst, areq->cryptlen,
-+ info);
-+ aead_request_set_assoc(subreq, areq->assoc, areq->assoclen);
++ new_prot = static_protections(new_prot, address);
+
-+ seqiv_geniv(ctx, info, req->seq, ivsize);
-+ memcpy(req->giv, info, ivsize);
++ new_pte = pfn_pte(pfn, canon_pgprot(new_prot));
++ BUG_ON(pte_pfn(new_pte) != pte_pfn(old_pte));
+
-+ err = crypto_aead_encrypt(subreq);
-+ if (unlikely(info != areq->iv))
-+ seqiv_aead_complete2(req, err);
++ set_pte_atomic(kpte, new_pte);
++ } else {
++ err = split_large_page(kpte, address);
++ if (!err)
++ goto repeat;
++ }
+ return err;
+}
+
-+static int seqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
++/**
++ * change_page_attr_addr - Change page table attributes in linear mapping
++ * @address: Virtual address in linear mapping.
++ * @prot: New page table attribute (PAGE_*)
++ *
++ * Change page attributes of a page in the direct mapping. This is a variant
++ * of change_page_attr() that also works on memory holes that do not have
++ * mem_map entry (pfn_valid() is false).
++ *
++ * See change_page_attr() documentation for more details.
++ *
++ * Modules and drivers should use the set_memory_* APIs instead.
++ */
++
++#define HIGH_MAP_START __START_KERNEL_map
++#define HIGH_MAP_END (__START_KERNEL_map + KERNEL_TEXT_SIZE)
++
++static int
++change_page_attr_addr(unsigned long address, pgprot_t mask_set,
++ pgprot_t mask_clr)
+{
-+ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
-+ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ unsigned long phys_addr = __pa(address);
++ unsigned long pfn = phys_addr >> PAGE_SHIFT;
++ int err;
+
-+ spin_lock_bh(&ctx->lock);
-+ if (crypto_ablkcipher_crt(geniv)->givencrypt != seqiv_givencrypt_first)
-+ goto unlock;
++#ifdef CONFIG_X86_64
++ /*
++ * If we are inside the high mapped kernel range, then we
++ * fixup the low mapping first. __va() returns the virtual
++ * address in the linear mapping:
++ */
++ if (within(address, HIGH_MAP_START, HIGH_MAP_END))
++ address = (unsigned long) __va(phys_addr);
++#endif
+
-+ crypto_ablkcipher_crt(geniv)->givencrypt = seqiv_givencrypt;
-+ get_random_bytes(ctx->salt, crypto_ablkcipher_ivsize(geniv));
++ err = __change_page_attr(address, pfn, mask_set, mask_clr);
++ if (err)
++ return err;
+
-+unlock:
-+ spin_unlock_bh(&ctx->lock);
++#ifdef CONFIG_X86_64
++ /*
++ * If the physical address is inside the kernel map, we need
++ * to touch the high mapped kernel as well:
++ */
++ if (within(phys_addr, 0, KERNEL_TEXT_SIZE)) {
++ /*
++ * Calc the high mapping address. See __phys_addr()
++ * for the non obvious details.
++ */
++ address = phys_addr + HIGH_MAP_START - phys_base;
++ /* Make sure the kernel mappings stay executable */
++ pgprot_val(mask_clr) |= _PAGE_NX;
+
-+ return seqiv_givencrypt(req);
++ /*
++ * Our high aliases are imprecise, because we check
++ * everything between 0 and KERNEL_TEXT_SIZE, so do
++ * not propagate lookup failures back to users:
++ */
++ __change_page_attr(address, pfn, mask_set, mask_clr);
++ }
++#endif
++ return err;
+}
+
-+static int seqiv_aead_givencrypt_first(struct aead_givcrypt_request *req)
++static int __change_page_attr_set_clr(unsigned long addr, int numpages,
++ pgprot_t mask_set, pgprot_t mask_clr)
+{
-+ struct crypto_aead *geniv = aead_givcrypt_reqtfm(req);
-+ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
-+
-+ spin_lock_bh(&ctx->lock);
-+ if (crypto_aead_crt(geniv)->givencrypt != seqiv_aead_givencrypt_first)
-+ goto unlock;
-+
-+ crypto_aead_crt(geniv)->givencrypt = seqiv_aead_givencrypt;
-+ get_random_bytes(ctx->salt, crypto_aead_ivsize(geniv));
++ unsigned int i;
++ int ret;
+
-+unlock:
-+ spin_unlock_bh(&ctx->lock);
++ for (i = 0; i < numpages ; i++, addr += PAGE_SIZE) {
++ ret = change_page_attr_addr(addr, mask_set, mask_clr);
++ if (ret)
++ return ret;
++ }
+
-+ return seqiv_aead_givencrypt(req);
++ return 0;
+}
+
-+static int seqiv_init(struct crypto_tfm *tfm)
++static int change_page_attr_set_clr(unsigned long addr, int numpages,
++ pgprot_t mask_set, pgprot_t mask_clr)
+{
-+ struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
-+ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
-+
-+ spin_lock_init(&ctx->lock);
++ int ret = __change_page_attr_set_clr(addr, numpages, mask_set,
++ mask_clr);
+
-+ tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
++ /*
++ * On success we use clflush, when the CPU supports it to
++ * avoid the wbindv. If the CPU does not support it and in the
++ * error case we fall back to cpa_flush_all (which uses
++ * wbindv):
++ */
++ if (!ret && cpu_has_clflush)
++ cpa_flush_range(addr, numpages);
++ else
++ cpa_flush_all();
+
-+ return skcipher_geniv_init(tfm);
++ return ret;
+}
+
-+static int seqiv_aead_init(struct crypto_tfm *tfm)
++static inline int change_page_attr_set(unsigned long addr, int numpages,
++ pgprot_t mask)
+{
-+ struct crypto_aead *geniv = __crypto_aead_cast(tfm);
-+ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
++ return change_page_attr_set_clr(addr, numpages, mask, __pgprot(0));
++}
+
-+ spin_lock_init(&ctx->lock);
++static inline int change_page_attr_clear(unsigned long addr, int numpages,
++ pgprot_t mask)
++{
++ return __change_page_attr_set_clr(addr, numpages, __pgprot(0), mask);
+
-+ tfm->crt_aead.reqsize = sizeof(struct aead_request);
++}
+
-+ return aead_geniv_init(tfm);
++int set_memory_uc(unsigned long addr, int numpages)
++{
++ return change_page_attr_set(addr, numpages,
++ __pgprot(_PAGE_PCD | _PAGE_PWT));
+}
++EXPORT_SYMBOL(set_memory_uc);
+
-+static struct crypto_template seqiv_tmpl;
++int set_memory_wb(unsigned long addr, int numpages)
++{
++ return change_page_attr_clear(addr, numpages,
++ __pgprot(_PAGE_PCD | _PAGE_PWT));
++}
++EXPORT_SYMBOL(set_memory_wb);
+
-+static struct crypto_instance *seqiv_ablkcipher_alloc(struct rtattr **tb)
++int set_memory_x(unsigned long addr, int numpages)
+{
-+ struct crypto_instance *inst;
++ return change_page_attr_clear(addr, numpages, __pgprot(_PAGE_NX));
++}
++EXPORT_SYMBOL(set_memory_x);
+
-+ inst = skcipher_geniv_alloc(&seqiv_tmpl, tb, 0, 0);
++int set_memory_nx(unsigned long addr, int numpages)
++{
++ return change_page_attr_set(addr, numpages, __pgprot(_PAGE_NX));
++}
++EXPORT_SYMBOL(set_memory_nx);
+
-+ if (IS_ERR(inst))
-+ goto out;
++int set_memory_ro(unsigned long addr, int numpages)
++{
++ return change_page_attr_clear(addr, numpages, __pgprot(_PAGE_RW));
++}
+
-+ inst->alg.cra_ablkcipher.givencrypt = seqiv_givencrypt_first;
++int set_memory_rw(unsigned long addr, int numpages)
++{
++ return change_page_attr_set(addr, numpages, __pgprot(_PAGE_RW));
++}
+
-+ inst->alg.cra_init = seqiv_init;
-+ inst->alg.cra_exit = skcipher_geniv_exit;
++int set_memory_np(unsigned long addr, int numpages)
++{
++ return change_page_attr_clear(addr, numpages, __pgprot(_PAGE_PRESENT));
++}
+
-+ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++int set_pages_uc(struct page *page, int numpages)
++{
++ unsigned long addr = (unsigned long)page_address(page);
+
-+out:
-+ return inst;
++ return set_memory_uc(addr, numpages);
+}
++EXPORT_SYMBOL(set_pages_uc);
+
-+static struct crypto_instance *seqiv_aead_alloc(struct rtattr **tb)
++int set_pages_wb(struct page *page, int numpages)
+{
-+ struct crypto_instance *inst;
-+
-+ inst = aead_geniv_alloc(&seqiv_tmpl, tb, 0, 0);
++ unsigned long addr = (unsigned long)page_address(page);
+
-+ if (IS_ERR(inst))
-+ goto out;
++ return set_memory_wb(addr, numpages);
++}
++EXPORT_SYMBOL(set_pages_wb);
+
-+ inst->alg.cra_aead.givencrypt = seqiv_aead_givencrypt_first;
++int set_pages_x(struct page *page, int numpages)
++{
++ unsigned long addr = (unsigned long)page_address(page);
+
-+ inst->alg.cra_init = seqiv_aead_init;
-+ inst->alg.cra_exit = aead_geniv_exit;
++ return set_memory_x(addr, numpages);
++}
++EXPORT_SYMBOL(set_pages_x);
+
-+ inst->alg.cra_ctxsize = inst->alg.cra_aead.ivsize;
++int set_pages_nx(struct page *page, int numpages)
++{
++ unsigned long addr = (unsigned long)page_address(page);
+
-+out:
-+ return inst;
++ return set_memory_nx(addr, numpages);
+}
++EXPORT_SYMBOL(set_pages_nx);
+
-+static struct crypto_instance *seqiv_alloc(struct rtattr **tb)
++int set_pages_ro(struct page *page, int numpages)
+{
-+ struct crypto_attr_type *algt;
-+ struct crypto_instance *inst;
-+ int err;
++ unsigned long addr = (unsigned long)page_address(page);
+
-+ algt = crypto_get_attr_type(tb);
-+ err = PTR_ERR(algt);
-+ if (IS_ERR(algt))
-+ return ERR_PTR(err);
++ return set_memory_ro(addr, numpages);
++}
+
-+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
-+ inst = seqiv_ablkcipher_alloc(tb);
-+ else
-+ inst = seqiv_aead_alloc(tb);
++int set_pages_rw(struct page *page, int numpages)
++{
++ unsigned long addr = (unsigned long)page_address(page);
+
-+ if (IS_ERR(inst))
-+ goto out;
++ return set_memory_rw(addr, numpages);
++}
+
-+ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
-+ inst->alg.cra_ctxsize += sizeof(struct seqiv_ctx);
+
-+out:
-+ return inst;
++#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_CPA_DEBUG)
++static inline int __change_page_attr_set(unsigned long addr, int numpages,
++ pgprot_t mask)
++{
++ return __change_page_attr_set_clr(addr, numpages, mask, __pgprot(0));
+}
+
-+static void seqiv_free(struct crypto_instance *inst)
++static inline int __change_page_attr_clear(unsigned long addr, int numpages,
++ pgprot_t mask)
+{
-+ if ((inst->alg.cra_flags ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
-+ skcipher_geniv_free(inst);
-+ else
-+ aead_geniv_free(inst);
++ return __change_page_attr_set_clr(addr, numpages, __pgprot(0), mask);
+}
++#endif
+
-+static struct crypto_template seqiv_tmpl = {
-+ .name = "seqiv",
-+ .alloc = seqiv_alloc,
-+ .free = seqiv_free,
-+ .module = THIS_MODULE,
-+};
++#ifdef CONFIG_DEBUG_PAGEALLOC
+
-+static int __init seqiv_module_init(void)
++static int __set_pages_p(struct page *page, int numpages)
+{
-+ return crypto_register_template(&seqiv_tmpl);
++ unsigned long addr = (unsigned long)page_address(page);
++
++ return __change_page_attr_set(addr, numpages,
++ __pgprot(_PAGE_PRESENT | _PAGE_RW));
+}
+
-+static void __exit seqiv_module_exit(void)
++static int __set_pages_np(struct page *page, int numpages)
+{
-+ crypto_unregister_template(&seqiv_tmpl);
++ unsigned long addr = (unsigned long)page_address(page);
++
++ return __change_page_attr_clear(addr, numpages,
++ __pgprot(_PAGE_PRESENT));
+}
+
-+module_init(seqiv_module_init);
-+module_exit(seqiv_module_exit);
++void kernel_map_pages(struct page *page, int numpages, int enable)
++{
++ if (PageHighMem(page))
++ return;
++ if (!enable) {
++ debug_check_no_locks_freed(page_address(page),
++ numpages * PAGE_SIZE);
++ }
+
-+MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Sequence Number IV Generator");
-diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
-index fd3918b..3cc93fd 100644
---- a/crypto/sha256_generic.c
-+++ b/crypto/sha256_generic.c
-@@ -9,6 +9,7 @@
- * Copyright (c) Jean-Luc Cooke <jlcooke at certainkey.com>
- * Copyright (c) Andrew McDonald <andrew at mcdonald.org.uk>
- * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
-+ * SHA224 Support Copyright 2007 Intel Corporation <jonathan.lynch at intel.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
-@@ -218,6 +219,22 @@ static void sha256_transform(u32 *state, const u8 *input)
- memset(W, 0, 64 * sizeof(u32));
- }
-
++ /*
++ * If page allocator is not up yet then do not call c_p_a():
++ */
++ if (!debug_pagealloc_enabled)
++ return;
+
-+static void sha224_init(struct crypto_tfm *tfm)
-+{
-+ struct sha256_ctx *sctx = crypto_tfm_ctx(tfm);
-+ sctx->state[0] = SHA224_H0;
-+ sctx->state[1] = SHA224_H1;
-+ sctx->state[2] = SHA224_H2;
-+ sctx->state[3] = SHA224_H3;
-+ sctx->state[4] = SHA224_H4;
-+ sctx->state[5] = SHA224_H5;
-+ sctx->state[6] = SHA224_H6;
-+ sctx->state[7] = SHA224_H7;
-+ sctx->count[0] = 0;
-+ sctx->count[1] = 0;
++ /*
++ * The return value is ignored - the calls cannot fail,
++ * large pages are disabled at boot time:
++ */
++ if (enable)
++ __set_pages_p(page, numpages);
++ else
++ __set_pages_np(page, numpages);
++
++ /*
++ * We should perform an IPI and flush all tlbs,
++ * but that can deadlock->flush only current cpu:
++ */
++ __flush_tlb_all();
+}
++#endif
+
- static void sha256_init(struct crypto_tfm *tfm)
- {
- struct sha256_ctx *sctx = crypto_tfm_ctx(tfm);
-@@ -294,8 +311,17 @@ static void sha256_final(struct crypto_tfm *tfm, u8 *out)
- memset(sctx, 0, sizeof(*sctx));
++/*
++ * The testcases use internal knowledge of the implementation that shouldn't
++ * be exposed to the rest of the kernel. Include these directly here.
++ */
++#ifdef CONFIG_CPA_DEBUG
++#include "pageattr-test.c"
++#endif
+diff --git a/arch/x86/mm/pageattr_32.c b/arch/x86/mm/pageattr_32.c
+deleted file mode 100644
+index 260073c..0000000
+--- a/arch/x86/mm/pageattr_32.c
++++ /dev/null
+@@ -1,278 +0,0 @@
+-/*
+- * Copyright 2002 Andi Kleen, SuSE Labs.
+- * Thanks to Ben LaHaise for precious feedback.
+- */
+-
+-#include <linux/mm.h>
+-#include <linux/sched.h>
+-#include <linux/highmem.h>
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <asm/uaccess.h>
+-#include <asm/processor.h>
+-#include <asm/tlbflush.h>
+-#include <asm/pgalloc.h>
+-#include <asm/sections.h>
+-
+-static DEFINE_SPINLOCK(cpa_lock);
+-static struct list_head df_list = LIST_HEAD_INIT(df_list);
+-
+-
+-pte_t *lookup_address(unsigned long address)
+-{
+- pgd_t *pgd = pgd_offset_k(address);
+- pud_t *pud;
+- pmd_t *pmd;
+- if (pgd_none(*pgd))
+- return NULL;
+- pud = pud_offset(pgd, address);
+- if (pud_none(*pud))
+- return NULL;
+- pmd = pmd_offset(pud, address);
+- if (pmd_none(*pmd))
+- return NULL;
+- if (pmd_large(*pmd))
+- return (pte_t *)pmd;
+- return pte_offset_kernel(pmd, address);
+-}
+-
+-static struct page *split_large_page(unsigned long address, pgprot_t prot,
+- pgprot_t ref_prot)
+-{
+- int i;
+- unsigned long addr;
+- struct page *base;
+- pte_t *pbase;
+-
+- spin_unlock_irq(&cpa_lock);
+- base = alloc_pages(GFP_KERNEL, 0);
+- spin_lock_irq(&cpa_lock);
+- if (!base)
+- return NULL;
+-
+- /*
+- * page_private is used to track the number of entries in
+- * the page table page that have non standard attributes.
+- */
+- SetPagePrivate(base);
+- page_private(base) = 0;
+-
+- address = __pa(address);
+- addr = address & LARGE_PAGE_MASK;
+- pbase = (pte_t *)page_address(base);
+- paravirt_alloc_pt(&init_mm, page_to_pfn(base));
+- for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
+- set_pte(&pbase[i], pfn_pte(addr >> PAGE_SHIFT,
+- addr == address ? prot : ref_prot));
+- }
+- return base;
+-}
+-
+-static void cache_flush_page(struct page *p)
+-{
+- void *adr = page_address(p);
+- int i;
+- for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
+- clflush(adr+i);
+-}
+-
+-static void flush_kernel_map(void *arg)
+-{
+- struct list_head *lh = (struct list_head *)arg;
+- struct page *p;
+-
+- /* High level code is not ready for clflush yet */
+- if (0 && cpu_has_clflush) {
+- list_for_each_entry (p, lh, lru)
+- cache_flush_page(p);
+- } else if (boot_cpu_data.x86_model >= 4)
+- wbinvd();
+-
+- /* Flush all to work around Errata in early athlons regarding
+- * large page flushing.
+- */
+- __flush_tlb_all();
+-}
+-
+-static void set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
+-{
+- struct page *page;
+- unsigned long flags;
+-
+- set_pte_atomic(kpte, pte); /* change init_mm */
+- if (SHARED_KERNEL_PMD)
+- return;
+-
+- spin_lock_irqsave(&pgd_lock, flags);
+- for (page = pgd_list; page; page = (struct page *)page->index) {
+- pgd_t *pgd;
+- pud_t *pud;
+- pmd_t *pmd;
+- pgd = (pgd_t *)page_address(page) + pgd_index(address);
+- pud = pud_offset(pgd, address);
+- pmd = pmd_offset(pud, address);
+- set_pte_atomic((pte_t *)pmd, pte);
+- }
+- spin_unlock_irqrestore(&pgd_lock, flags);
+-}
+-
+-/*
+- * No more special protections in this 2/4MB area - revert to a
+- * large page again.
+- */
+-static inline void revert_page(struct page *kpte_page, unsigned long address)
+-{
+- pgprot_t ref_prot;
+- pte_t *linear;
+-
+- ref_prot =
+- ((address & LARGE_PAGE_MASK) < (unsigned long)&_etext)
+- ? PAGE_KERNEL_LARGE_EXEC : PAGE_KERNEL_LARGE;
+-
+- linear = (pte_t *)
+- pmd_offset(pud_offset(pgd_offset_k(address), address), address);
+- set_pmd_pte(linear, address,
+- pfn_pte((__pa(address) & LARGE_PAGE_MASK) >> PAGE_SHIFT,
+- ref_prot));
+-}
+-
+-static inline void save_page(struct page *kpte_page)
+-{
+- if (!test_and_set_bit(PG_arch_1, &kpte_page->flags))
+- list_add(&kpte_page->lru, &df_list);
+-}
+-
+-static int
+-__change_page_attr(struct page *page, pgprot_t prot)
+-{
+- pte_t *kpte;
+- unsigned long address;
+- struct page *kpte_page;
+-
+- BUG_ON(PageHighMem(page));
+- address = (unsigned long)page_address(page);
+-
+- kpte = lookup_address(address);
+- if (!kpte)
+- return -EINVAL;
+- kpte_page = virt_to_page(kpte);
+- BUG_ON(PageLRU(kpte_page));
+- BUG_ON(PageCompound(kpte_page));
+-
+- if (pgprot_val(prot) != pgprot_val(PAGE_KERNEL)) {
+- if (!pte_huge(*kpte)) {
+- set_pte_atomic(kpte, mk_pte(page, prot));
+- } else {
+- pgprot_t ref_prot;
+- struct page *split;
+-
+- ref_prot =
+- ((address & LARGE_PAGE_MASK) < (unsigned long)&_etext)
+- ? PAGE_KERNEL_EXEC : PAGE_KERNEL;
+- split = split_large_page(address, prot, ref_prot);
+- if (!split)
+- return -ENOMEM;
+- set_pmd_pte(kpte,address,mk_pte(split, ref_prot));
+- kpte_page = split;
+- }
+- page_private(kpte_page)++;
+- } else if (!pte_huge(*kpte)) {
+- set_pte_atomic(kpte, mk_pte(page, PAGE_KERNEL));
+- BUG_ON(page_private(kpte_page) == 0);
+- page_private(kpte_page)--;
+- } else
+- BUG();
+-
+- /*
+- * If the pte was reserved, it means it was created at boot
+- * time (not via split_large_page) and in turn we must not
+- * replace it with a largepage.
+- */
+-
+- save_page(kpte_page);
+- if (!PageReserved(kpte_page)) {
+- if (cpu_has_pse && (page_private(kpte_page) == 0)) {
+- paravirt_release_pt(page_to_pfn(kpte_page));
+- revert_page(kpte_page, address);
+- }
+- }
+- return 0;
+-}
+-
+-static inline void flush_map(struct list_head *l)
+-{
+- on_each_cpu(flush_kernel_map, l, 1, 1);
+-}
+-
+-/*
+- * Change the page attributes of an page in the linear mapping.
+- *
+- * This should be used when a page is mapped with a different caching policy
+- * than write-back somewhere - some CPUs do not like it when mappings with
+- * different caching policies exist. This changes the page attributes of the
+- * in kernel linear mapping too.
+- *
+- * The caller needs to ensure that there are no conflicting mappings elsewhere.
+- * This function only deals with the kernel linear map.
+- *
+- * Caller must call global_flush_tlb() after this.
+- */
+-int change_page_attr(struct page *page, int numpages, pgprot_t prot)
+-{
+- int err = 0;
+- int i;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&cpa_lock, flags);
+- for (i = 0; i < numpages; i++, page++) {
+- err = __change_page_attr(page, prot);
+- if (err)
+- break;
+- }
+- spin_unlock_irqrestore(&cpa_lock, flags);
+- return err;
+-}
+-
+-void global_flush_tlb(void)
+-{
+- struct list_head l;
+- struct page *pg, *next;
+-
+- BUG_ON(irqs_disabled());
+-
+- spin_lock_irq(&cpa_lock);
+- list_replace_init(&df_list, &l);
+- spin_unlock_irq(&cpa_lock);
+- flush_map(&l);
+- list_for_each_entry_safe(pg, next, &l, lru) {
+- list_del(&pg->lru);
+- clear_bit(PG_arch_1, &pg->flags);
+- if (PageReserved(pg) || !cpu_has_pse || page_private(pg) != 0)
+- continue;
+- ClearPagePrivate(pg);
+- __free_page(pg);
+- }
+-}
+-
+-#ifdef CONFIG_DEBUG_PAGEALLOC
+-void kernel_map_pages(struct page *page, int numpages, int enable)
+-{
+- if (PageHighMem(page))
+- return;
+- if (!enable)
+- debug_check_no_locks_freed(page_address(page),
+- numpages * PAGE_SIZE);
+-
+- /* the return value is ignored - the calls cannot fail,
+- * large pages are disabled at boot time.
+- */
+- change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+- /* we should perform an IPI and flush all tlbs,
+- * but that can deadlock->flush only current cpu.
+- */
+- __flush_tlb_all();
+-}
+-#endif
+-
+-EXPORT_SYMBOL(change_page_attr);
+-EXPORT_SYMBOL(global_flush_tlb);
+diff --git a/arch/x86/mm/pageattr_64.c b/arch/x86/mm/pageattr_64.c
+deleted file mode 100644
+index c40afba..0000000
+--- a/arch/x86/mm/pageattr_64.c
++++ /dev/null
+@@ -1,255 +0,0 @@
+-/*
+- * Copyright 2002 Andi Kleen, SuSE Labs.
+- * Thanks to Ben LaHaise for precious feedback.
+- */
+-
+-#include <linux/mm.h>
+-#include <linux/sched.h>
+-#include <linux/highmem.h>
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <asm/uaccess.h>
+-#include <asm/processor.h>
+-#include <asm/tlbflush.h>
+-#include <asm/io.h>
+-
+-pte_t *lookup_address(unsigned long address)
+-{
+- pgd_t *pgd = pgd_offset_k(address);
+- pud_t *pud;
+- pmd_t *pmd;
+- pte_t *pte;
+- if (pgd_none(*pgd))
+- return NULL;
+- pud = pud_offset(pgd, address);
+- if (!pud_present(*pud))
+- return NULL;
+- pmd = pmd_offset(pud, address);
+- if (!pmd_present(*pmd))
+- return NULL;
+- if (pmd_large(*pmd))
+- return (pte_t *)pmd;
+- pte = pte_offset_kernel(pmd, address);
+- if (pte && !pte_present(*pte))
+- pte = NULL;
+- return pte;
+-}
+-
+-static struct page *split_large_page(unsigned long address, pgprot_t prot,
+- pgprot_t ref_prot)
+-{
+- int i;
+- unsigned long addr;
+- struct page *base = alloc_pages(GFP_KERNEL, 0);
+- pte_t *pbase;
+- if (!base)
+- return NULL;
+- /*
+- * page_private is used to track the number of entries in
+- * the page table page have non standard attributes.
+- */
+- SetPagePrivate(base);
+- page_private(base) = 0;
+-
+- address = __pa(address);
+- addr = address & LARGE_PAGE_MASK;
+- pbase = (pte_t *)page_address(base);
+- for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
+- pbase[i] = pfn_pte(addr >> PAGE_SHIFT,
+- addr == address ? prot : ref_prot);
+- }
+- return base;
+-}
+-
+-void clflush_cache_range(void *adr, int size)
+-{
+- int i;
+- for (i = 0; i < size; i += boot_cpu_data.x86_clflush_size)
+- clflush(adr+i);
+-}
+-
+-static void flush_kernel_map(void *arg)
+-{
+- struct list_head *l = (struct list_head *)arg;
+- struct page *pg;
+-
+- /* When clflush is available always use it because it is
+- much cheaper than WBINVD. */
+- /* clflush is still broken. Disable for now. */
+- if (1 || !cpu_has_clflush)
+- asm volatile("wbinvd" ::: "memory");
+- else list_for_each_entry(pg, l, lru) {
+- void *adr = page_address(pg);
+- clflush_cache_range(adr, PAGE_SIZE);
+- }
+- __flush_tlb_all();
+-}
+-
+-static inline void flush_map(struct list_head *l)
+-{
+- on_each_cpu(flush_kernel_map, l, 1, 1);
+-}
+-
+-static LIST_HEAD(deferred_pages); /* protected by init_mm.mmap_sem */
+-
+-static inline void save_page(struct page *fpage)
+-{
+- if (!test_and_set_bit(PG_arch_1, &fpage->flags))
+- list_add(&fpage->lru, &deferred_pages);
+-}
+-
+-/*
+- * No more special protections in this 2/4MB area - revert to a
+- * large page again.
+- */
+-static void revert_page(unsigned long address, pgprot_t ref_prot)
+-{
+- pgd_t *pgd;
+- pud_t *pud;
+- pmd_t *pmd;
+- pte_t large_pte;
+- unsigned long pfn;
+-
+- pgd = pgd_offset_k(address);
+- BUG_ON(pgd_none(*pgd));
+- pud = pud_offset(pgd,address);
+- BUG_ON(pud_none(*pud));
+- pmd = pmd_offset(pud, address);
+- BUG_ON(pmd_val(*pmd) & _PAGE_PSE);
+- pfn = (__pa(address) & LARGE_PAGE_MASK) >> PAGE_SHIFT;
+- large_pte = pfn_pte(pfn, ref_prot);
+- large_pte = pte_mkhuge(large_pte);
+- set_pte((pte_t *)pmd, large_pte);
+-}
+-
+-static int
+-__change_page_attr(unsigned long address, unsigned long pfn, pgprot_t prot,
+- pgprot_t ref_prot)
+-{
+- pte_t *kpte;
+- struct page *kpte_page;
+- pgprot_t ref_prot2;
+-
+- kpte = lookup_address(address);
+- if (!kpte) return 0;
+- kpte_page = virt_to_page(((unsigned long)kpte) & PAGE_MASK);
+- BUG_ON(PageLRU(kpte_page));
+- BUG_ON(PageCompound(kpte_page));
+- if (pgprot_val(prot) != pgprot_val(ref_prot)) {
+- if (!pte_huge(*kpte)) {
+- set_pte(kpte, pfn_pte(pfn, prot));
+- } else {
+- /*
+- * split_large_page will take the reference for this
+- * change_page_attr on the split page.
+- */
+- struct page *split;
+- ref_prot2 = pte_pgprot(pte_clrhuge(*kpte));
+- split = split_large_page(address, prot, ref_prot2);
+- if (!split)
+- return -ENOMEM;
+- pgprot_val(ref_prot2) &= ~_PAGE_NX;
+- set_pte(kpte, mk_pte(split, ref_prot2));
+- kpte_page = split;
+- }
+- page_private(kpte_page)++;
+- } else if (!pte_huge(*kpte)) {
+- set_pte(kpte, pfn_pte(pfn, ref_prot));
+- BUG_ON(page_private(kpte_page) == 0);
+- page_private(kpte_page)--;
+- } else
+- BUG();
+-
+- /* on x86-64 the direct mapping set at boot is not using 4k pages */
+- BUG_ON(PageReserved(kpte_page));
+-
+- save_page(kpte_page);
+- if (page_private(kpte_page) == 0)
+- revert_page(address, ref_prot);
+- return 0;
+-}
+-
+-/*
+- * Change the page attributes of an page in the linear mapping.
+- *
+- * This should be used when a page is mapped with a different caching policy
+- * than write-back somewhere - some CPUs do not like it when mappings with
+- * different caching policies exist. This changes the page attributes of the
+- * in kernel linear mapping too.
+- *
+- * The caller needs to ensure that there are no conflicting mappings elsewhere.
+- * This function only deals with the kernel linear map.
+- *
+- * Caller must call global_flush_tlb() after this.
+- */
+-int change_page_attr_addr(unsigned long address, int numpages, pgprot_t prot)
+-{
+- int err = 0, kernel_map = 0;
+- int i;
+-
+- if (address >= __START_KERNEL_map
+- && address < __START_KERNEL_map + KERNEL_TEXT_SIZE) {
+- address = (unsigned long)__va(__pa(address));
+- kernel_map = 1;
+- }
+-
+- down_write(&init_mm.mmap_sem);
+- for (i = 0; i < numpages; i++, address += PAGE_SIZE) {
+- unsigned long pfn = __pa(address) >> PAGE_SHIFT;
+-
+- if (!kernel_map || pte_present(pfn_pte(0, prot))) {
+- err = __change_page_attr(address, pfn, prot, PAGE_KERNEL);
+- if (err)
+- break;
+- }
+- /* Handle kernel mapping too which aliases part of the
+- * lowmem */
+- if (__pa(address) < KERNEL_TEXT_SIZE) {
+- unsigned long addr2;
+- pgprot_t prot2;
+- addr2 = __START_KERNEL_map + __pa(address);
+- /* Make sure the kernel mappings stay executable */
+- prot2 = pte_pgprot(pte_mkexec(pfn_pte(0, prot)));
+- err = __change_page_attr(addr2, pfn, prot2,
+- PAGE_KERNEL_EXEC);
+- }
+- }
+- up_write(&init_mm.mmap_sem);
+- return err;
+-}
+-
+-/* Don't call this for MMIO areas that may not have a mem_map entry */
+-int change_page_attr(struct page *page, int numpages, pgprot_t prot)
+-{
+- unsigned long addr = (unsigned long)page_address(page);
+- return change_page_attr_addr(addr, numpages, prot);
+-}
+-
+-void global_flush_tlb(void)
+-{
+- struct page *pg, *next;
+- struct list_head l;
+-
+- /*
+- * Write-protect the semaphore, to exclude two contexts
+- * doing a list_replace_init() call in parallel and to
+- * exclude new additions to the deferred_pages list:
+- */
+- down_write(&init_mm.mmap_sem);
+- list_replace_init(&deferred_pages, &l);
+- up_write(&init_mm.mmap_sem);
+-
+- flush_map(&l);
+-
+- list_for_each_entry_safe(pg, next, &l, lru) {
+- list_del(&pg->lru);
+- clear_bit(PG_arch_1, &pg->flags);
+- if (page_private(pg) != 0)
+- continue;
+- ClearPagePrivate(pg);
+- __free_page(pg);
+- }
+-}
+-
+-EXPORT_SYMBOL(change_page_attr);
+-EXPORT_SYMBOL(global_flush_tlb);
+diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
+index be61a1d..2ae5999 100644
+--- a/arch/x86/mm/pgtable_32.c
++++ b/arch/x86/mm/pgtable_32.c
+@@ -195,11 +195,6 @@ struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
+ return pte;
}
-+static void sha224_final(struct crypto_tfm *tfm, u8 *hash)
-+{
-+ u8 D[SHA256_DIGEST_SIZE];
-+
-+ sha256_final(tfm, D);
+-void pmd_ctor(struct kmem_cache *cache, void *pmd)
+-{
+- memset(pmd, 0, PTRS_PER_PMD*sizeof(pmd_t));
+-}
+-
+ /*
+ * List of all pgd's needed for non-PAE so it can invalidate entries
+ * in both cached and uncached pgd's; not needed for PAE since the
+@@ -210,27 +205,18 @@ void pmd_ctor(struct kmem_cache *cache, void *pmd)
+ * vmalloc faults work because attached pagetables are never freed.
+ * -- wli
+ */
+-DEFINE_SPINLOCK(pgd_lock);
+-struct page *pgd_list;
+-
+ static inline void pgd_list_add(pgd_t *pgd)
+ {
+ struct page *page = virt_to_page(pgd);
+- page->index = (unsigned long)pgd_list;
+- if (pgd_list)
+- set_page_private(pgd_list, (unsigned long)&page->index);
+- pgd_list = page;
+- set_page_private(page, (unsigned long)&pgd_list);
+
-+ memcpy(hash, D, SHA224_DIGEST_SIZE);
-+ memset(D, 0, SHA256_DIGEST_SIZE);
-+}
++ list_add(&page->lru, &pgd_list);
+ }
--static struct crypto_alg alg = {
-+static struct crypto_alg sha256 = {
- .cra_name = "sha256",
- .cra_driver_name= "sha256-generic",
- .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
-@@ -303,28 +329,58 @@ static struct crypto_alg alg = {
- .cra_ctxsize = sizeof(struct sha256_ctx),
- .cra_module = THIS_MODULE,
- .cra_alignmask = 3,
-- .cra_list = LIST_HEAD_INIT(alg.cra_list),
-+ .cra_list = LIST_HEAD_INIT(sha256.cra_list),
- .cra_u = { .digest = {
- .dia_digestsize = SHA256_DIGEST_SIZE,
-- .dia_init = sha256_init,
-- .dia_update = sha256_update,
-- .dia_final = sha256_final } }
-+ .dia_init = sha256_init,
-+ .dia_update = sha256_update,
-+ .dia_final = sha256_final } }
-+};
+ static inline void pgd_list_del(pgd_t *pgd)
+ {
+- struct page *next, **pprev, *page = virt_to_page(pgd);
+- next = (struct page *)page->index;
+- pprev = (struct page **)page_private(page);
+- *pprev = next;
+- if (next)
+- set_page_private(next, (unsigned long)pprev);
++ struct page *page = virt_to_page(pgd);
+
-+static struct crypto_alg sha224 = {
-+ .cra_name = "sha224",
-+ .cra_driver_name = "sha224-generic",
-+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
-+ .cra_blocksize = SHA224_BLOCK_SIZE,
-+ .cra_ctxsize = sizeof(struct sha256_ctx),
-+ .cra_module = THIS_MODULE,
-+ .cra_alignmask = 3,
-+ .cra_list = LIST_HEAD_INIT(sha224.cra_list),
-+ .cra_u = { .digest = {
-+ .dia_digestsize = SHA224_DIGEST_SIZE,
-+ .dia_init = sha224_init,
-+ .dia_update = sha256_update,
-+ .dia_final = sha224_final } }
- };
++ list_del(&page->lru);
+ }
- static int __init init(void)
+
+@@ -285,7 +271,6 @@ static void pgd_dtor(void *pgd)
+ if (SHARED_KERNEL_PMD)
+ return;
+
+- paravirt_release_pd(__pa(pgd) >> PAGE_SHIFT);
+ spin_lock_irqsave(&pgd_lock, flags);
+ pgd_list_del(pgd);
+ spin_unlock_irqrestore(&pgd_lock, flags);
+@@ -294,77 +279,96 @@ static void pgd_dtor(void *pgd)
+ #define UNSHARED_PTRS_PER_PGD \
+ (SHARED_KERNEL_PMD ? USER_PTRS_PER_PGD : PTRS_PER_PGD)
+
+-/* If we allocate a pmd for part of the kernel address space, then
+- make sure its initialized with the appropriate kernel mappings.
+- Otherwise use a cached zeroed pmd. */
+-static pmd_t *pmd_cache_alloc(int idx)
++#ifdef CONFIG_X86_PAE
++/*
++ * Mop up any pmd pages which may still be attached to the pgd.
++ * Normally they will be freed by munmap/exit_mmap, but any pmd we
++ * preallocate which never got a corresponding vma will need to be
++ * freed manually.
++ */
++static void pgd_mop_up_pmds(pgd_t *pgdp)
{
-- return crypto_register_alg(&alg);
-+ int ret = 0;
+- pmd_t *pmd;
++ int i;
+
+- if (idx >= USER_PTRS_PER_PGD) {
+- pmd = (pmd_t *)__get_free_page(GFP_KERNEL);
++ for(i = 0; i < UNSHARED_PTRS_PER_PGD; i++) {
++ pgd_t pgd = pgdp[i];
+
+- if (pmd)
+- memcpy(pmd,
+- (void *)pgd_page_vaddr(swapper_pg_dir[idx]),
++ if (pgd_val(pgd) != 0) {
++ pmd_t *pmd = (pmd_t *)pgd_page_vaddr(pgd);
+
-+ ret = crypto_register_alg(&sha224);
++ pgdp[i] = native_make_pgd(0);
+
-+ if (ret < 0)
-+ return ret;
++ paravirt_release_pd(pgd_val(pgd) >> PAGE_SHIFT);
++ pmd_free(pmd);
++ }
++ }
++}
+
-+ ret = crypto_register_alg(&sha256);
++/*
++ * In PAE mode, we need to do a cr3 reload (=tlb flush) when
++ * updating the top-level pagetable entries to guarantee the
++ * processor notices the update. Since this is expensive, and
++ * all 4 top-level entries are used almost immediately in a
++ * new process's life, we just pre-populate them here.
++ *
++ * Also, if we're in a paravirt environment where the kernel pmd is
++ * not shared between pagetables (!SHARED_KERNEL_PMDS), we allocate
++ * and initialize the kernel pmds here.
++ */
++static int pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd)
++{
++ pud_t *pud;
++ unsigned long addr;
++ int i;
+
-+ if (ret < 0)
-+ crypto_unregister_alg(&sha224);
++ pud = pud_offset(pgd, 0);
++ for (addr = i = 0; i < UNSHARED_PTRS_PER_PGD;
++ i++, pud++, addr += PUD_SIZE) {
++ pmd_t *pmd = pmd_alloc_one(mm, addr);
+
-+ return ret;
++ if (!pmd) {
++ pgd_mop_up_pmds(pgd);
++ return 0;
++ }
++
++ if (i >= USER_PTRS_PER_PGD)
++ memcpy(pmd, (pmd_t *)pgd_page_vaddr(swapper_pg_dir[i]),
+ sizeof(pmd_t) * PTRS_PER_PMD);
+- } else
+- pmd = kmem_cache_alloc(pmd_cache, GFP_KERNEL);
+
+- return pmd;
++ pud_populate(mm, pud, pmd);
++ }
++
++ return 1;
++}
++#else /* !CONFIG_X86_PAE */
++/* No need to prepopulate any pagetable entries in non-PAE modes. */
++static int pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd)
++{
++ return 1;
}
- static void __exit fini(void)
+-static void pmd_cache_free(pmd_t *pmd, int idx)
++static void pgd_mop_up_pmds(pgd_t *pgd)
{
-- crypto_unregister_alg(&alg);
-+ crypto_unregister_alg(&sha224);
-+ crypto_unregister_alg(&sha256);
+- if (idx >= USER_PTRS_PER_PGD)
+- free_page((unsigned long)pmd);
+- else
+- kmem_cache_free(pmd_cache, pmd);
}
++#endif /* CONFIG_X86_PAE */
- module_init(init);
- module_exit(fini);
+ pgd_t *pgd_alloc(struct mm_struct *mm)
+ {
+- int i;
+ pgd_t *pgd = quicklist_alloc(0, GFP_KERNEL, pgd_ctor);
- MODULE_LICENSE("GPL");
--MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm");
-+MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm");
+- if (PTRS_PER_PMD == 1 || !pgd)
+- return pgd;
++ mm->pgd = pgd; /* so that alloc_pd can use it */
+
+- for (i = 0; i < UNSHARED_PTRS_PER_PGD; ++i) {
+- pmd_t *pmd = pmd_cache_alloc(i);
+-
+- if (!pmd)
+- goto out_oom;
+-
+- paravirt_alloc_pd(__pa(pmd) >> PAGE_SHIFT);
+- set_pgd(&pgd[i], __pgd(1 + __pa(pmd)));
++ if (pgd && !pgd_prepopulate_pmd(mm, pgd)) {
++ quicklist_free(0, pgd_dtor, pgd);
++ pgd = NULL;
+ }
+- return pgd;
+
+-out_oom:
+- for (i--; i >= 0; i--) {
+- pgd_t pgdent = pgd[i];
+- void* pmd = (void *)__va(pgd_val(pgdent)-1);
+- paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
+- pmd_cache_free(pmd, i);
+- }
+- quicklist_free(0, pgd_dtor, pgd);
+- return NULL;
++ return pgd;
+ }
-+MODULE_ALIAS("sha224");
- MODULE_ALIAS("sha256");
-diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
-index 24141fb..1ab8c01 100644
---- a/crypto/tcrypt.c
-+++ b/crypto/tcrypt.c
-@@ -6,12 +6,16 @@
- *
- * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
- * Copyright (c) 2002 Jean-Francois Dive <jef at linuxbe.org>
-+ * Copyright (c) 2007 Nokia Siemens Networks
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
-+ * 2007-11-13 Added GCM tests
-+ * 2007-11-13 Added AEAD support
-+ * 2007-11-06 Added SHA-224 and SHA-224-HMAC tests
- * 2006-12-07 Added SHA384 HMAC and SHA512 HMAC tests
- * 2004-08-09 Added cipher speed tests (Reyk Floeter <reyk at vantronix.net>)
- * 2003-09-14 Rewritten by Kartikey Mahendra Bhatt
-@@ -71,22 +75,23 @@ static unsigned int sec;
+ void pgd_free(pgd_t *pgd)
+ {
+- int i;
+-
+- /* in the PAE case user pgd entries are overwritten before usage */
+- if (PTRS_PER_PMD > 1)
+- for (i = 0; i < UNSHARED_PTRS_PER_PGD; ++i) {
+- pgd_t pgdent = pgd[i];
+- void* pmd = (void *)__va(pgd_val(pgdent)-1);
+- paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
+- pmd_cache_free(pmd, i);
+- }
+- /* in the non-PAE case, free_pgtables() clears user pgd entries */
++ pgd_mop_up_pmds(pgd);
+ quicklist_free(0, pgd_dtor, pgd);
+ }
- static int mode;
- static char *xbuf;
-+static char *axbuf;
- static char *tvmem;
+@@ -372,4 +376,3 @@ void check_pgt_cache(void)
+ {
+ quicklist_trim(0, pgd_dtor, 25, 16);
+ }
+-
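The `pgd_alloc()`/`pgd_free()` rewrite above collapses the old open-coded rollback loop (`out_oom:`) into a prepopulate helper with a single failure path. A minimal sketch of that allocate-then-undo shape, using hypothetical `table`/`prepopulate` names rather than the real pgd/pmd types:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the pgd page and its pmd entries;
 * names and sizes are illustrative only. */
struct table { void *entries[4]; };

static int prepopulate(struct table *t)
{
    /* Allocate each entry; on failure, undo what was done so far. */
    for (int i = 0; i < 4; i++) {
        t->entries[i] = malloc(64);
        if (!t->entries[i]) {
            while (--i >= 0)
                free(t->entries[i]);
            return 0;   /* mirror pgd_prepopulate_pmd(): 0 means failure */
        }
    }
    return 1;
}

struct table *table_alloc(void)
{
    struct table *t = calloc(1, sizeof(*t));

    /* Same shape as the new pgd_alloc(): one check, one cleanup path. */
    if (t && !prepopulate(t)) {
        free(t);
        t = NULL;
    }
    return t;
}
```

The partial-failure cleanup lives entirely inside the helper, so the caller never sees a half-populated table.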
+diff --git a/arch/x86/mm/srat_64.c b/arch/x86/mm/srat_64.c
+index ea85172..65416f8 100644
+--- a/arch/x86/mm/srat_64.c
++++ b/arch/x86/mm/srat_64.c
+@@ -130,6 +130,9 @@ void __init
+ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
+ {
+ int pxm, node;
++ int apic_id;
++
++ apic_id = pa->apic_id;
+ if (srat_disabled())
+ return;
+ if (pa->header.length != sizeof(struct acpi_srat_cpu_affinity)) {
+@@ -145,68 +148,12 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
+ bad_srat();
+ return;
+ }
+- apicid_to_node[pa->apic_id] = node;
++ apicid_to_node[apic_id] = node;
+ acpi_numa = 1;
+ printk(KERN_INFO "SRAT: PXM %u -> APIC %u -> Node %u\n",
+- pxm, pa->apic_id, node);
+-}
+-
+-#ifdef CONFIG_MEMORY_HOTPLUG_RESERVE
+-/*
+- * Protect against too large hotadd areas that would fill up memory.
+- */
+-static int hotadd_enough_memory(struct bootnode *nd)
+-{
+- static unsigned long allocated;
+- static unsigned long last_area_end;
+- unsigned long pages = (nd->end - nd->start) >> PAGE_SHIFT;
+- long mem = pages * sizeof(struct page);
+- unsigned long addr;
+- unsigned long allowed;
+- unsigned long oldpages = pages;
+-
+- if (mem < 0)
+- return 0;
+- allowed = (end_pfn - absent_pages_in_range(0, end_pfn)) * PAGE_SIZE;
+- allowed = (allowed / 100) * hotadd_percent;
+- if (allocated + mem > allowed) {
+- unsigned long range;
+- /* Give them at least part of their hotadd memory upto hotadd_percent
+- It would be better to spread the limit out
+- over multiple hotplug areas, but that is too complicated
+- right now */
+- if (allocated >= allowed)
+- return 0;
+- range = allowed - allocated;
+- pages = (range / PAGE_SIZE);
+- mem = pages * sizeof(struct page);
+- nd->end = nd->start + range;
+- }
+- /* Not completely fool proof, but a good sanity check */
+- addr = find_e820_area(last_area_end, end_pfn<<PAGE_SHIFT, mem);
+- if (addr == -1UL)
+- return 0;
+- if (pages != oldpages)
+- printk(KERN_NOTICE "SRAT: Hotadd area limited to %lu bytes\n",
+- pages << PAGE_SHIFT);
+- last_area_end = addr + mem;
+- allocated += mem;
+- return 1;
+-}
+-
+-static int update_end_of_memory(unsigned long end)
+-{
+- found_add_area = 1;
+- if ((end >> PAGE_SHIFT) > end_pfn)
+- end_pfn = end >> PAGE_SHIFT;
+- return 1;
++ pxm, apic_id, node);
+ }
- static char *check[] = {
-- "des", "md5", "des3_ede", "rot13", "sha1", "sha256", "blowfish",
-- "twofish", "serpent", "sha384", "sha512", "md4", "aes", "cast6",
-+ "des", "md5", "des3_ede", "rot13", "sha1", "sha224", "sha256",
-+ "blowfish", "twofish", "serpent", "sha384", "sha512", "md4", "aes",
-+ "cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
- "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
- "khazad", "wp512", "wp384", "wp256", "tnepres", "xeta", "fcrypt",
-- "camellia", "seed", NULL
-+ "camellia", "seed", "salsa20", "lzo", NULL
+-static inline int save_add_info(void)
+-{
+- return hotadd_percent > 0;
+-}
+-#else
+ int update_end_of_memory(unsigned long end) {return -1;}
+ static int hotadd_enough_memory(struct bootnode *nd) {return 1;}
+ #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
+@@ -214,10 +161,9 @@ static inline int save_add_info(void) {return 1;}
+ #else
+ static inline int save_add_info(void) {return 0;}
+ #endif
+-#endif
+ /*
+ * Update nodes_add and decide if to include add are in the zone.
+- * Both SPARSE and RESERVE need nodes_add infomation.
++ * Both SPARSE and RESERVE need nodes_add information.
+ * This code supports one contiguous hot add area per node.
+ */
+ static int reserve_hotadd(int node, unsigned long start, unsigned long end)
+@@ -377,7 +323,7 @@ static int __init nodes_cover_memory(const struct bootnode *nodes)
+ return 1;
+ }
+
+-static void unparse_node(int node)
++static void __init unparse_node(int node)
+ {
+ int i;
+ node_clear(node, nodes_parsed);
+@@ -400,7 +346,12 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end)
+ /* First clean up the node list */
+ for (i = 0; i < MAX_NUMNODES; i++) {
+ cutoff_node(i, start, end);
+- if ((nodes[i].end - nodes[i].start) < NODE_MIN_SIZE) {
++ /*
++ * don't confuse VM with a node that doesn't have the
++ * minimum memory.
++ */
++ if (nodes[i].end &&
++ (nodes[i].end - nodes[i].start) < NODE_MIN_SIZE) {
+ unparse_node(i);
+ node_set_offline(i);
+ }
+@@ -431,9 +382,11 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end)
+ setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+
+ for (i = 0; i < NR_CPUS; i++) {
+- if (cpu_to_node(i) == NUMA_NO_NODE)
++ int node = early_cpu_to_node(i);
++
++ if (node == NUMA_NO_NODE)
+ continue;
+- if (!node_isset(cpu_to_node(i), node_possible_map))
++ if (!node_isset(node, node_possible_map))
+ numa_set_node(i, NUMA_NO_NODE);
+ }
+ numa_init_array();
+@@ -441,6 +394,12 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end)
+ }
+
+ #ifdef CONFIG_NUMA_EMU
++static int fake_node_to_pxm_map[MAX_NUMNODES] __initdata = {
++ [0 ... MAX_NUMNODES-1] = PXM_INVAL
++};
++static s16 fake_apicid_to_node[MAX_LOCAL_APIC] __initdata = {
++ [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
++};
+ static int __init find_node_by_addr(unsigned long addr)
+ {
+ int ret = NUMA_NO_NODE;
+@@ -457,7 +416,7 @@ static int __init find_node_by_addr(unsigned long addr)
+ break;
+ }
+ }
+- return i;
++ return ret;
+ }
+
+ /*
+@@ -471,12 +430,6 @@ static int __init find_node_by_addr(unsigned long addr)
+ void __init acpi_fake_nodes(const struct bootnode *fake_nodes, int num_nodes)
+ {
+ int i, j;
+- int fake_node_to_pxm_map[MAX_NUMNODES] = {
+- [0 ... MAX_NUMNODES-1] = PXM_INVAL
+- };
+- unsigned char fake_apicid_to_node[MAX_LOCAL_APIC] = {
+- [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
+- };
+
+ printk(KERN_INFO "Faking PXM affinity for fake nodes on real "
+ "topology.\n");
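The srat_64.c hunk above moves two large lookup tables off the kernel stack of `acpi_fake_nodes()` into file-scope `__initdata` arrays, filled with GCC's range-designator extension `[first ... last] = value`. A small standalone illustration of that initializer (a GNU C extension, not ISO C):

```c
#include <assert.h>

/* Range designators fill a whole span of elements at once (GNU C). */
enum { MAX_NODES = 8 };
#define PXM_INVAL (-1)

static int node_to_pxm[MAX_NODES] = {
    [0 ... MAX_NODES - 1] = PXM_INVAL   /* every slot starts invalid */
};

int pxm_of(int node)
{
    return node_to_pxm[node];
}
```

Because the array is statically initialized, no runtime loop is needed to reset it, and moving it to file scope avoids a multi-kilobyte stack frame in the `MAX_LOCAL_APIC` case.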
+diff --git a/arch/x86/oprofile/backtrace.c b/arch/x86/oprofile/backtrace.c
+index 0ed046a..e2095cb 100644
+--- a/arch/x86/oprofile/backtrace.c
++++ b/arch/x86/oprofile/backtrace.c
+@@ -32,7 +32,7 @@ static int backtrace_stack(void *data, char *name)
+ return 0;
+ }
+
+-static void backtrace_address(void *data, unsigned long addr)
++static void backtrace_address(void *data, unsigned long addr, int reliable)
+ {
+ unsigned int *depth = data;
+
+@@ -48,7 +48,7 @@ static struct stacktrace_ops backtrace_ops = {
};
- static void hexdump(unsigned char *buf, unsigned int len)
+ struct frame_head {
+- struct frame_head *ebp;
++ struct frame_head *bp;
+ unsigned long ret;
+ } __attribute__((packed));
+
+@@ -67,21 +67,21 @@ dump_user_backtrace(struct frame_head * head)
+
+ /* frame pointers should strictly progress back up the stack
+ * (towards higher addresses) */
+- if (head >= bufhead[0].ebp)
++ if (head >= bufhead[0].bp)
+ return NULL;
+
+- return bufhead[0].ebp;
++ return bufhead[0].bp;
+ }
+
+ void
+ x86_backtrace(struct pt_regs * const regs, unsigned int depth)
{
-- while (len--)
-- printk("%02x", *buf++);
+ struct frame_head *head = (struct frame_head *)frame_pointer(regs);
+- unsigned long stack = stack_pointer(regs);
++ unsigned long stack = kernel_trap_sp(regs);
+
+ if (!user_mode_vm(regs)) {
+ if (depth)
+- dump_trace(NULL, regs, (unsigned long *)stack,
++ dump_trace(NULL, regs, (unsigned long *)stack, 0,
+ &backtrace_ops, &depth);
+ return;
+ }
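The oprofile backtrace hunk renames `ebp` to `bp` in `struct frame_head` and keeps the sanity check that each saved frame pointer must strictly progress toward higher addresses. A simplified in-process sketch of that walk (the real `dump_user_backtrace()` copies each frame from user space first, which is omitted here):

```c
#include <assert.h>
#include <stddef.h>

struct frame_head {
    struct frame_head *bp;   /* saved frame pointer of the caller */
    unsigned long ret;       /* return address */
};

/* Collect return addresses, stopping when a frame fails to move
 * up the stack -- the same monotonicity check as the patch keeps. */
int walk_frames(struct frame_head *head, unsigned long *out, int max)
{
    int n = 0;

    while (head && n < max) {
        out[n++] = head->ret;
        if (head >= head->bp)   /* must strictly progress upward */
            break;
        head = head->bp;
    }
    return n;
}
```

The check defends against corrupt or cyclic frame chains: a pointer that loops back to itself or to a lower address terminates the walk instead of spinning forever.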
+diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
+index 944bbcd..1f11cf0 100644
+--- a/arch/x86/oprofile/nmi_int.c
++++ b/arch/x86/oprofile/nmi_int.c
+@@ -18,11 +18,11 @@
+ #include <asm/nmi.h>
+ #include <asm/msr.h>
+ #include <asm/apic.h>
+-
++
+ #include "op_counter.h"
+ #include "op_x86_model.h"
+
+-static struct op_x86_model_spec const * model;
++static struct op_x86_model_spec const *model;
+ static struct op_msrs cpu_msrs[NR_CPUS];
+ static unsigned long saved_lvtpc[NR_CPUS];
+
+@@ -41,7 +41,6 @@ static int nmi_suspend(struct sys_device *dev, pm_message_t state)
+ return 0;
+ }
+
-
-- printk("\n");
-+ print_hex_dump(KERN_CONT, "", DUMP_PREFIX_OFFSET,
-+ 16, 1,
-+ buf, len, false);
+ static int nmi_resume(struct sys_device *dev)
+ {
+ if (nmi_enabled == 1)
+@@ -49,29 +48,27 @@ static int nmi_resume(struct sys_device *dev)
+ return 0;
}
- static void tcrypt_complete(struct crypto_async_request *req, int err)
-@@ -215,6 +220,238 @@ out:
- crypto_free_hash(tfm);
+-
+ static struct sysdev_class oprofile_sysclass = {
+- set_kset_name("oprofile"),
++ .name = "oprofile",
+ .resume = nmi_resume,
+ .suspend = nmi_suspend,
+ };
+
+-
+ static struct sys_device device_oprofile = {
+ .id = 0,
+ .cls = &oprofile_sysclass,
+ };
+
+-
+ static int __init init_sysfs(void)
+ {
+ int error;
+- if (!(error = sysdev_class_register(&oprofile_sysclass)))
++
++ error = sysdev_class_register(&oprofile_sysclass);
++ if (!error)
+ error = sysdev_register(&device_oprofile);
+ return error;
}
-+static void test_aead(char *algo, int enc, struct aead_testvec *template,
-+ unsigned int tcount)
-+{
-+ unsigned int ret, i, j, k, temp;
-+ unsigned int tsize;
-+ char *q;
-+ struct crypto_aead *tfm;
-+ char *key;
-+ struct aead_testvec *aead_tv;
-+ struct aead_request *req;
-+ struct scatterlist sg[8];
-+ struct scatterlist asg[8];
-+ const char *e;
-+ struct tcrypt_result result;
-+ unsigned int authsize;
+-
+ static void exit_sysfs(void)
+ {
+ sysdev_unregister(&device_oprofile);
+@@ -90,7 +87,7 @@ static int profile_exceptions_notify(struct notifier_block *self,
+ int ret = NOTIFY_DONE;
+ int cpu = smp_processor_id();
+
+- switch(val) {
++ switch (val) {
+ case DIE_NMI:
+ if (model->check_ctrs(args->regs, &cpu_msrs[cpu]))
+ ret = NOTIFY_STOP;
+@@ -101,24 +98,24 @@ static int profile_exceptions_notify(struct notifier_block *self,
+ return ret;
+ }
+
+-static void nmi_cpu_save_registers(struct op_msrs * msrs)
++static void nmi_cpu_save_registers(struct op_msrs *msrs)
+ {
+ unsigned int const nr_ctrs = model->num_counters;
+- unsigned int const nr_ctrls = model->num_controls;
+- struct op_msr * counters = msrs->counters;
+- struct op_msr * controls = msrs->controls;
++ unsigned int const nr_ctrls = model->num_controls;
++ struct op_msr *counters = msrs->counters;
++ struct op_msr *controls = msrs->controls;
+ unsigned int i;
+
+ for (i = 0; i < nr_ctrs; ++i) {
+- if (counters[i].addr){
++ if (counters[i].addr) {
+ rdmsr(counters[i].addr,
+ counters[i].saved.low,
+ counters[i].saved.high);
+ }
+ }
+-
+
-+ if (enc == ENCRYPT)
-+ e = "encryption";
-+ else
-+ e = "decryption";
+ for (i = 0; i < nr_ctrls; ++i) {
+- if (controls[i].addr){
++ if (controls[i].addr) {
+ rdmsr(controls[i].addr,
+ controls[i].saved.low,
+ controls[i].saved.high);
+@@ -126,15 +123,13 @@ static void nmi_cpu_save_registers(struct op_msrs * msrs)
+ }
+ }
+
+-
+-static void nmi_save_registers(void * dummy)
++static void nmi_save_registers(void *dummy)
+ {
+ int cpu = smp_processor_id();
+- struct op_msrs * msrs = &cpu_msrs[cpu];
++ struct op_msrs *msrs = &cpu_msrs[cpu];
+ nmi_cpu_save_registers(msrs);
+ }
+
+-
+ static void free_msrs(void)
+ {
+ int i;
+@@ -146,7 +141,6 @@ static void free_msrs(void)
+ }
+ }
+
+-
+ static int allocate_msrs(void)
+ {
+ int success = 1;
+@@ -173,11 +167,10 @@ static int allocate_msrs(void)
+ return success;
+ }
+
+-
+-static void nmi_cpu_setup(void * dummy)
++static void nmi_cpu_setup(void *dummy)
+ {
+ int cpu = smp_processor_id();
+- struct op_msrs * msrs = &cpu_msrs[cpu];
++ struct op_msrs *msrs = &cpu_msrs[cpu];
+ spin_lock(&oprofilefs_lock);
+ model->setup_ctrs(msrs);
+ spin_unlock(&oprofilefs_lock);
+@@ -193,13 +186,14 @@ static struct notifier_block profile_exceptions_nb = {
+
+ static int nmi_setup(void)
+ {
+- int err=0;
++ int err = 0;
+ int cpu;
+
+ if (!allocate_msrs())
+ return -ENOMEM;
+
+- if ((err = register_die_notifier(&profile_exceptions_nb))){
++ err = register_die_notifier(&profile_exceptions_nb);
++ if (err) {
+ free_msrs();
+ return err;
+ }
+@@ -210,7 +204,7 @@ static int nmi_setup(void)
+
+ /* Assume saved/restored counters are the same on all CPUs */
+ model->fill_in_addresses(&cpu_msrs[0]);
+- for_each_possible_cpu (cpu) {
++ for_each_possible_cpu(cpu) {
+ if (cpu != 0) {
+ memcpy(cpu_msrs[cpu].counters, cpu_msrs[0].counters,
+ sizeof(struct op_msr) * model->num_counters);
+@@ -226,39 +220,37 @@ static int nmi_setup(void)
+ return 0;
+ }
+
+-
+-static void nmi_restore_registers(struct op_msrs * msrs)
++static void nmi_restore_registers(struct op_msrs *msrs)
+ {
+ unsigned int const nr_ctrs = model->num_counters;
+- unsigned int const nr_ctrls = model->num_controls;
+- struct op_msr * counters = msrs->counters;
+- struct op_msr * controls = msrs->controls;
++ unsigned int const nr_ctrls = model->num_controls;
++ struct op_msr *counters = msrs->counters;
++ struct op_msr *controls = msrs->controls;
+ unsigned int i;
+
+ for (i = 0; i < nr_ctrls; ++i) {
+- if (controls[i].addr){
++ if (controls[i].addr) {
+ wrmsr(controls[i].addr,
+ controls[i].saved.low,
+ controls[i].saved.high);
+ }
+ }
+-
+
-+ printk(KERN_INFO "\ntesting %s %s\n", algo, e);
+ for (i = 0; i < nr_ctrs; ++i) {
+- if (counters[i].addr){
++ if (counters[i].addr) {
+ wrmsr(counters[i].addr,
+ counters[i].saved.low,
+ counters[i].saved.high);
+ }
+ }
+ }
+-
+
+-static void nmi_cpu_shutdown(void * dummy)
++static void nmi_cpu_shutdown(void *dummy)
+ {
+ unsigned int v;
+ int cpu = smp_processor_id();
+- struct op_msrs * msrs = &cpu_msrs[cpu];
+-
++ struct op_msrs *msrs = &cpu_msrs[cpu];
+
-+ tsize = sizeof(struct aead_testvec);
-+ tsize *= tcount;
+ /* restoring APIC_LVTPC can trigger an apic error because the delivery
+ * mode and vector nr combination can be illegal. That's by design: on
+ * power on apic lvt contain a zero vector nr which are legal only for
+@@ -271,7 +263,6 @@ static void nmi_cpu_shutdown(void * dummy)
+ nmi_restore_registers(msrs);
+ }
+
+-
+ static void nmi_shutdown(void)
+ {
+ nmi_enabled = 0;
+@@ -281,45 +272,40 @@ static void nmi_shutdown(void)
+ free_msrs();
+ }
+
+-
+-static void nmi_cpu_start(void * dummy)
++static void nmi_cpu_start(void *dummy)
+ {
+- struct op_msrs const * msrs = &cpu_msrs[smp_processor_id()];
++ struct op_msrs const *msrs = &cpu_msrs[smp_processor_id()];
+ model->start(msrs);
+ }
+-
+
+ static int nmi_start(void)
+ {
+ on_each_cpu(nmi_cpu_start, NULL, 0, 1);
+ return 0;
+ }
+-
+-
+-static void nmi_cpu_stop(void * dummy)
+
-+ if (tsize > TVMEMSIZE) {
-+ printk(KERN_INFO "template (%u) too big for tvmem (%u)\n",
-+ tsize, TVMEMSIZE);
-+ return;
-+ }
++static void nmi_cpu_stop(void *dummy)
+ {
+- struct op_msrs const * msrs = &cpu_msrs[smp_processor_id()];
++ struct op_msrs const *msrs = &cpu_msrs[smp_processor_id()];
+ model->stop(msrs);
+ }
+-
+-
+
-+ memcpy(tvmem, template, tsize);
-+ aead_tv = (void *)tvmem;
+ static void nmi_stop(void)
+ {
+ on_each_cpu(nmi_cpu_stop, NULL, 0, 1);
+ }
+
+-
+ struct op_counter_config counter_config[OP_MAX_COUNTER];
+
+-static int nmi_create_files(struct super_block * sb, struct dentry * root)
++static int nmi_create_files(struct super_block *sb, struct dentry *root)
+ {
+ unsigned int i;
+
+ for (i = 0; i < model->num_counters; ++i) {
+- struct dentry * dir;
++ struct dentry *dir;
+ char buf[4];
+-
+- /* quick little hack to _not_ expose a counter if it is not
+
-+ init_completion(&result.completion);
++ /* quick little hack to _not_ expose a counter if it is not
+ * available for use. This should protect userspace app.
+ * NOTE: assumes 1:1 mapping here (that counters are organized
+ * sequentially in their struct assignment).
+@@ -329,21 +315,21 @@ static int nmi_create_files(struct super_block * sb, struct dentry * root)
+
+ snprintf(buf, sizeof(buf), "%d", i);
+ dir = oprofilefs_mkdir(sb, root, buf);
+- oprofilefs_create_ulong(sb, dir, "enabled", &counter_config[i].enabled);
+- oprofilefs_create_ulong(sb, dir, "event", &counter_config[i].event);
+- oprofilefs_create_ulong(sb, dir, "count", &counter_config[i].count);
+- oprofilefs_create_ulong(sb, dir, "unit_mask", &counter_config[i].unit_mask);
+- oprofilefs_create_ulong(sb, dir, "kernel", &counter_config[i].kernel);
+- oprofilefs_create_ulong(sb, dir, "user", &counter_config[i].user);
++ oprofilefs_create_ulong(sb, dir, "enabled", &counter_config[i].enabled);
++ oprofilefs_create_ulong(sb, dir, "event", &counter_config[i].event);
++ oprofilefs_create_ulong(sb, dir, "count", &counter_config[i].count);
++ oprofilefs_create_ulong(sb, dir, "unit_mask", &counter_config[i].unit_mask);
++ oprofilefs_create_ulong(sb, dir, "kernel", &counter_config[i].kernel);
++ oprofilefs_create_ulong(sb, dir, "user", &counter_config[i].user);
+ }
+
+ return 0;
+ }
+-
+
-+ tfm = crypto_alloc_aead(algo, 0, 0);
+ static int p4force;
+ module_param(p4force, int, 0);
+-
+-static int __init p4_init(char ** cpu_type)
+
-+ if (IS_ERR(tfm)) {
-+ printk(KERN_INFO "failed to load transform for %s: %ld\n",
-+ algo, PTR_ERR(tfm));
++static int __init p4_init(char **cpu_type)
+ {
+ __u8 cpu_model = boot_cpu_data.x86_model;
+
+@@ -356,15 +342,15 @@ static int __init p4_init(char ** cpu_type)
+ return 1;
+ #else
+ switch (smp_num_siblings) {
+- case 1:
+- *cpu_type = "i386/p4";
+- model = &op_p4_spec;
+- return 1;
+-
+- case 2:
+- *cpu_type = "i386/p4-ht";
+- model = &op_p4_ht2_spec;
+- return 1;
++ case 1:
++ *cpu_type = "i386/p4";
++ model = &op_p4_spec;
++ return 1;
++
++ case 2:
++ *cpu_type = "i386/p4-ht";
++ model = &op_p4_ht2_spec;
++ return 1;
+ }
+ #endif
+
+@@ -373,8 +359,7 @@ static int __init p4_init(char ** cpu_type)
+ return 0;
+ }
+
+-
+-static int __init ppro_init(char ** cpu_type)
++static int __init ppro_init(char **cpu_type)
+ {
+ __u8 cpu_model = boot_cpu_data.x86_model;
+
+@@ -409,52 +394,52 @@ int __init op_nmi_init(struct oprofile_operations *ops)
+
+ if (!cpu_has_apic)
+ return -ENODEV;
+-
++
+ switch (vendor) {
+- case X86_VENDOR_AMD:
+- /* Needs to be at least an Athlon (or hammer in 32bit mode) */
++ case X86_VENDOR_AMD:
++ /* Needs to be at least an Athlon (or hammer in 32bit mode) */
+
+- switch (family) {
+- default:
++ switch (family) {
++ default:
++ return -ENODEV;
++ case 6:
++ model = &op_athlon_spec;
++ cpu_type = "i386/athlon";
++ break;
++ case 0xf:
++ model = &op_athlon_spec;
++ /* Actually it could be i386/hammer too, but give
++ user space an consistent name. */
++ cpu_type = "x86-64/hammer";
++ break;
++ case 0x10:
++ model = &op_athlon_spec;
++ cpu_type = "x86-64/family10";
++ break;
++ }
++ break;
++
++ case X86_VENDOR_INTEL:
++ switch (family) {
++ /* Pentium IV */
++ case 0xf:
++ if (!p4_init(&cpu_type))
+ return -ENODEV;
+- case 6:
+- model = &op_athlon_spec;
+- cpu_type = "i386/athlon";
+- break;
+- case 0xf:
+- model = &op_athlon_spec;
+- /* Actually it could be i386/hammer too, but give
+- user space an consistent name. */
+- cpu_type = "x86-64/hammer";
+- break;
+- case 0x10:
+- model = &op_athlon_spec;
+- cpu_type = "x86-64/family10";
+- break;
+- }
+ break;
+-
+- case X86_VENDOR_INTEL:
+- switch (family) {
+- /* Pentium IV */
+- case 0xf:
+- if (!p4_init(&cpu_type))
+- return -ENODEV;
+- break;
+-
+- /* A P6-class processor */
+- case 6:
+- if (!ppro_init(&cpu_type))
+- return -ENODEV;
+- break;
+-
+- default:
+- return -ENODEV;
+- }
++
++ /* A P6-class processor */
++ case 6:
++ if (!ppro_init(&cpu_type))
++ return -ENODEV;
+ break;
+
+ default:
+ return -ENODEV;
++ }
++ break;
++
++ default:
++ return -ENODEV;
+ }
+
+ init_sysfs();
+@@ -469,7 +454,6 @@ int __init op_nmi_init(struct oprofile_operations *ops)
+ return 0;
+ }
+
+-
+ void op_nmi_exit(void)
+ {
+ if (using_nmi)
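Most of the nmi_int.c hunk is CodingStyle cleanup, including lifting assignments out of `if` conditions, as in the reworked `init_sysfs()`. A toy version of that register-class-then-device pattern, with stub registration functions standing in for the real sysdev API:

```c
#include <assert.h>

/* Illustrative stand-ins; not the real sysdev_class_register()/
 * sysdev_register() interfaces. */
static int class_registered, device_registered;

static int register_class(void)  { class_registered = 1; return 0; }
static int register_device(void) { device_registered = 1; return 0; }

int init_sysfs_like(void)
{
    int error;

    /* Post-cleanup style: the assignment is its own statement,
     * not buried inside the if condition. */
    error = register_class();
    if (!error)
        error = register_device();
    return error;
}
```

The behavior is identical to `if (!(error = register_class()))`; the rewrite only makes the side effect visible at a glance.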
+diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
+index 8627463..52deabc 100644
+--- a/arch/x86/pci/common.c
++++ b/arch/x86/pci/common.c
+@@ -109,6 +109,19 @@ static void __devinit pcibios_fixup_ghosts(struct pci_bus *b)
+ }
+ }
+
++static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
++{
++ struct resource *rom_r = &dev->resource[PCI_ROM_RESOURCE];
++
++ if (rom_r->parent)
+ return;
-+ }
++ if (rom_r->start)
++ /* we deal with BIOS assigned ROM later */
++ return;
++ if (!(pci_probe & PCI_ASSIGN_ROMS))
++ rom_r->start = rom_r->end = rom_r->flags = 0;
++}
+
-+ req = aead_request_alloc(tfm, GFP_KERNEL);
-+ if (!req) {
-+ printk(KERN_INFO "failed to allocate request for %s\n", algo);
-+ goto out;
+ /*
+ * Called after each bus is probed, but before its children
+ * are examined.
+@@ -116,8 +129,12 @@ static void __devinit pcibios_fixup_ghosts(struct pci_bus *b)
+
+ void __devinit pcibios_fixup_bus(struct pci_bus *b)
+ {
++ struct pci_dev *dev;
++
+ pcibios_fixup_ghosts(b);
+ pci_read_bridge_bases(b);
++ list_for_each_entry(dev, &b->devices, bus_list)
++ pcibios_fixup_device_resources(dev);
+ }
+
+ /*
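The pci/common.c change above walks every device on a bus with `list_for_each_entry()` and applies a per-device resource fixup. A sketch of the traversal shape only (not the ROM-resource logic), using a hand-rolled intrusive list and `container_of` in place of the kernel's list.h helpers:

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next; };

struct dev {
    int flags;
    struct list_head bus_list;   /* intrusive link, as in struct pci_dev */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Visit every device on a circular bus list; clearing flags here is
 * just a stand-in for pcibios_fixup_device_resources(). */
int fixup_bus(struct list_head *bus)
{
    int n = 0;

    for (struct list_head *p = bus->next; p != bus; p = p->next) {
        struct dev *d = container_of(p, struct dev, bus_list);
        d->flags = 0;
        n++;
    }
    return n;
}
```

The list node lives inside each `struct dev`, so iteration needs no auxiliary allocation; `container_of` recovers the enclosing structure from the embedded link.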
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 6cff66d..cb63007 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -19,7 +19,7 @@ static void __devinit pci_fixup_i450nx(struct pci_dev *d)
+
+ printk(KERN_WARNING "PCI: Searching for i450NX host bridges on %s\n", pci_name(d));
+ reg = 0xd0;
+- for(pxb=0; pxb<2; pxb++) {
++ for(pxb = 0; pxb < 2; pxb++) {
+ pci_read_config_byte(d, reg++, &busno);
+ pci_read_config_byte(d, reg++, &suba);
+ pci_read_config_byte(d, reg++, &subb);
+@@ -56,7 +56,7 @@ static void __devinit pci_fixup_umc_ide(struct pci_dev *d)
+ int i;
+
+ printk(KERN_WARNING "PCI: Fixing base address flags for device %s\n", pci_name(d));
+- for(i=0; i<4; i++)
++ for(i = 0; i < 4; i++)
+ d->resource[i].flags |= PCI_BASE_ADDRESS_SPACE_IO;
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886BF, pci_fixup_umc_ide);
+@@ -127,7 +127,7 @@ static void pci_fixup_via_northbridge_bug(struct pci_dev *d)
+ NB latency to zero */
+ pci_write_config_byte(d, PCI_LATENCY_TIMER, 0);
+
+- where = 0x95; /* the memory write queue timer register is
++ where = 0x95; /* the memory write queue timer register is
+ different for the KT266x's: 0x95 not 0x55 */
+ } else if (d->device == PCI_DEVICE_ID_VIA_8363_0 &&
+ (d->revision == VIA_8363_KL133_REVISION_ID ||
+@@ -230,7 +230,7 @@ static int quirk_pcie_aspm_write(struct pci_bus *bus, unsigned int devfn, int wh
+
+ if ((offset) && (where == offset))
+ value = value & 0xfffffffc;
+-
++
+ return raw_pci_ops->write(0, bus->number, devfn, where, size, value);
+ }
+
+@@ -271,8 +271,8 @@ static void pcie_rootport_aspm_quirk(struct pci_dev *pdev)
+ * after hot-remove, the pbus->devices is empty and this code
+ * will set the offsets to zero and the bus ops to parent's bus
+ * ops, which is unmodified.
+- */
+- for (i= GET_INDEX(pdev->device, 0); i <= GET_INDEX(pdev->device, 7); ++i)
++ */
++ for (i = GET_INDEX(pdev->device, 0); i <= GET_INDEX(pdev->device, 7); ++i)
+ quirk_aspm_offset[i] = 0;
+
+ pbus->ops = pbus->parent->ops;
+@@ -286,17 +286,17 @@ static void pcie_rootport_aspm_quirk(struct pci_dev *pdev)
+ list_for_each_entry(dev, &pbus->devices, bus_list) {
+ /* There are 0 to 8 devices attached to this bus */
+ cap_base = pci_find_capability(dev, PCI_CAP_ID_EXP);
+- quirk_aspm_offset[GET_INDEX(pdev->device, dev->devfn)]= cap_base + 0x10;
++ quirk_aspm_offset[GET_INDEX(pdev->device, dev->devfn)] = cap_base + 0x10;
+ }
+ pbus->ops = &quirk_pcie_aspm_ops;
+ }
+ }
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA, pcie_rootport_aspm_quirk );
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA1, pcie_rootport_aspm_quirk );
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PB, pcie_rootport_aspm_quirk );
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PB1, pcie_rootport_aspm_quirk );
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC, pcie_rootport_aspm_quirk );
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk );
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA, pcie_rootport_aspm_quirk);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA1, pcie_rootport_aspm_quirk);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PB, pcie_rootport_aspm_quirk);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PB1, pcie_rootport_aspm_quirk);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC, pcie_rootport_aspm_quirk);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk);
+
+ /*
+ * Fixup to mark boot BIOS video selected by BIOS before it changes
+@@ -336,8 +336,8 @@ static void __devinit pci_fixup_video(struct pci_dev *pdev)
+ * PCI header type NORMAL.
+ */
+ if (bridge
+- &&((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
+- ||(bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
++ && ((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
++ || (bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
+ pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
+ &config);
+ if (!(config & PCI_BRIDGE_CTL_VGA))
+diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
+index 88d8f5c..ed07ce6 100644
+--- a/arch/x86/pci/irq.c
++++ b/arch/x86/pci/irq.c
+@@ -200,6 +200,7 @@ static int pirq_ali_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+ {
+ static const unsigned char irqmap[16] = { 0, 9, 3, 10, 4, 5, 7, 6, 1, 11, 0, 12, 0, 14, 0, 15 };
+
++ WARN_ON_ONCE(pirq >= 16);
+ return irqmap[read_config_nybble(router, 0x48, pirq-1)];
+ }
+
+@@ -207,7 +208,8 @@ static int pirq_ali_set(struct pci_dev *router, struct pci_dev *dev, int pirq, i
+ {
+ static const unsigned char irqmap[16] = { 0, 8, 0, 2, 4, 5, 7, 6, 0, 1, 3, 9, 11, 0, 13, 15 };
+ unsigned int val = irqmap[irq];
+-
++
++ WARN_ON_ONCE(pirq >= 16);
+ if (val) {
+ write_config_nybble(router, 0x48, pirq-1, val);
+ return 1;
+@@ -257,12 +259,16 @@ static int pirq_via_set(struct pci_dev *router, struct pci_dev *dev, int pirq, i
+ static int pirq_via586_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+ {
+ static const unsigned int pirqmap[5] = { 3, 2, 5, 1, 1 };
++
++ WARN_ON_ONCE(pirq >= 5);
+ return read_config_nybble(router, 0x55, pirqmap[pirq-1]);
+ }
+
+ static int pirq_via586_set(struct pci_dev *router, struct pci_dev *dev, int pirq, int irq)
+ {
+ static const unsigned int pirqmap[5] = { 3, 2, 5, 1, 1 };
++
++ WARN_ON_ONCE(pirq >= 5);
+ write_config_nybble(router, 0x55, pirqmap[pirq-1], irq);
+ return 1;
+ }
+@@ -275,12 +281,16 @@ static int pirq_via586_set(struct pci_dev *router, struct pci_dev *dev, int pirq
+ static int pirq_ite_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+ {
+ static const unsigned char pirqmap[4] = { 1, 0, 2, 3 };
++
++ WARN_ON_ONCE(pirq >= 4);
+ return read_config_nybble(router,0x43, pirqmap[pirq-1]);
+ }
+
+ static int pirq_ite_set(struct pci_dev *router, struct pci_dev *dev, int pirq, int irq)
+ {
+ static const unsigned char pirqmap[4] = { 1, 0, 2, 3 };
++
++ WARN_ON_ONCE(pirq >= 4);
+ write_config_nybble(router, 0x43, pirqmap[pirq-1], irq);
+ return 1;
+ }
+@@ -419,6 +429,7 @@ static int pirq_sis_set(struct pci_dev *router, struct pci_dev *dev, int pirq, i
+
+ static int pirq_vlsi_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+ {
++ WARN_ON_ONCE(pirq >= 9);
+ if (pirq > 8) {
+ printk(KERN_INFO "VLSI router pirq escape (%d)\n", pirq);
+ return 0;
+@@ -428,6 +439,7 @@ static int pirq_vlsi_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+
+ static int pirq_vlsi_set(struct pci_dev *router, struct pci_dev *dev, int pirq, int irq)
+ {
++ WARN_ON_ONCE(pirq >= 9);
+ if (pirq > 8) {
+ printk(KERN_INFO "VLSI router pirq escape (%d)\n", pirq);
+ return 0;
+@@ -449,14 +461,14 @@ static int pirq_vlsi_set(struct pci_dev *router, struct pci_dev *dev, int pirq,
+ */
+ static int pirq_serverworks_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
+ {
+- outb_p(pirq, 0xc00);
++ outb(pirq, 0xc00);
+ return inb(0xc01) & 0xf;
+ }
+
+ static int pirq_serverworks_set(struct pci_dev *router, struct pci_dev *dev, int pirq, int irq)
+ {
+- outb_p(pirq, 0xc00);
+- outb_p(irq, 0xc01);
++ outb(pirq, 0xc00);
++ outb(irq, 0xc01);
+ return 1;
+ }
+
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 998fd3e..efcf620 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -19,7 +19,7 @@ unsigned long saved_context_esp, saved_context_ebp;
+ unsigned long saved_context_esi, saved_context_edi;
+ unsigned long saved_context_eflags;
+
+-void __save_processor_state(struct saved_context *ctxt)
++static void __save_processor_state(struct saved_context *ctxt)
+ {
+ mtrr_save_fixed_ranges(NULL);
+ kernel_fpu_begin();
+@@ -74,19 +74,19 @@ static void fix_processor_context(void)
+ /*
+ * Now maybe reload the debug registers
+ */
+- if (current->thread.debugreg[7]){
+- set_debugreg(current->thread.debugreg[0], 0);
+- set_debugreg(current->thread.debugreg[1], 1);
+- set_debugreg(current->thread.debugreg[2], 2);
+- set_debugreg(current->thread.debugreg[3], 3);
++ if (current->thread.debugreg7) {
++ set_debugreg(current->thread.debugreg0, 0);
++ set_debugreg(current->thread.debugreg1, 1);
++ set_debugreg(current->thread.debugreg2, 2);
++ set_debugreg(current->thread.debugreg3, 3);
+ /* no 4 and 5 */
+- set_debugreg(current->thread.debugreg[6], 6);
+- set_debugreg(current->thread.debugreg[7], 7);
++ set_debugreg(current->thread.debugreg6, 6);
++ set_debugreg(current->thread.debugreg7, 7);
+ }
+
+ }
+
+-void __restore_processor_state(struct saved_context *ctxt)
++static void __restore_processor_state(struct saved_context *ctxt)
+ {
+ /*
+ * control registers
+diff --git a/arch/x86/vdso/.gitignore b/arch/x86/vdso/.gitignore
+index f8b69d8..60274d5 100644
+--- a/arch/x86/vdso/.gitignore
++++ b/arch/x86/vdso/.gitignore
+@@ -1 +1,6 @@
+ vdso.lds
++vdso-syms.lds
++vdso32-syms.lds
++vdso32-syscall-syms.lds
++vdso32-sysenter-syms.lds
++vdso32-int80-syms.lds
+diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
+index e7bff0f..d28dda5 100644
+--- a/arch/x86/vdso/Makefile
++++ b/arch/x86/vdso/Makefile
+@@ -1,39 +1,37 @@
+ #
+-# x86-64 vDSO.
++# Building vDSO images for x86.
+ #
+
++VDSO64-$(CONFIG_X86_64) := y
++VDSO32-$(CONFIG_X86_32) := y
++VDSO32-$(CONFIG_COMPAT) := y
++
++vdso-install-$(VDSO64-y) += vdso.so
++vdso-install-$(VDSO32-y) += $(vdso32-y:=.so)
++
++
+ # files to link into the vdso
+-# vdso-start.o has to be first
+-vobjs-y := vdso-start.o vdso-note.o vclock_gettime.o vgetcpu.o vvar.o
++vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vvar.o
+
+ # files to link into kernel
+-obj-y := vma.o vdso.o vdso-syms.o
++obj-$(VDSO64-y) += vma.o vdso.o
++obj-$(VDSO32-y) += vdso32.o vdso32-setup.o
+
+ vobjs := $(foreach F,$(vobjs-y),$(obj)/$F)
+
+ $(obj)/vdso.o: $(obj)/vdso.so
+
+-targets += vdso.so vdso.so.dbg vdso.lds $(vobjs-y) vdso-syms.o
+-
+-# The DSO images are built using a special linker script.
+-quiet_cmd_syscall = SYSCALL $@
+- cmd_syscall = $(CC) -m elf_x86_64 -nostdlib $(SYSCFLAGS_$(@F)) \
+- -Wl,-T,$(filter-out FORCE,$^) -o $@
++targets += vdso.so vdso.so.dbg vdso.lds $(vobjs-y)
+
+ export CPPFLAGS_vdso.lds += -P -C
+
+-vdso-flags = -fPIC -shared -Wl,-soname=linux-vdso.so.1 \
+- $(call ld-option, -Wl$(comma)--hash-style=sysv) \
+- -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
+-SYSCFLAGS_vdso.so = $(vdso-flags)
+-SYSCFLAGS_vdso.so.dbg = $(vdso-flags)
++VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -Wl,-soname=linux-vdso.so.1 \
++ -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
+
+ $(obj)/vdso.o: $(src)/vdso.S $(obj)/vdso.so
+
+-$(obj)/vdso.so: $(src)/vdso.lds $(vobjs) FORCE
+-
+ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
+- $(call if_changed,syscall)
++ $(call if_changed,vdso)
+
+ $(obj)/%.so: OBJCOPYFLAGS := -S
+ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+@@ -41,24 +39,96 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+
+ CFL := $(PROFILING) -mcmodel=small -fPIC -g0 -O2 -fasynchronous-unwind-tables -m64
+
+-$(obj)/vclock_gettime.o: KBUILD_CFLAGS = $(CFL)
+-$(obj)/vgetcpu.o: KBUILD_CFLAGS = $(CFL)
++$(vobjs): KBUILD_CFLAGS = $(CFL)
++
++targets += vdso-syms.lds
++obj-$(VDSO64-y) += vdso-syms.lds
++
++#
++# Match symbols in the DSO that look like VDSO*; produce a file of constants.
++#
++sed-vdsosym := -e 's/^00*/0/' \
++ -e 's/^\([0-9a-fA-F]*\) . \(VDSO[a-zA-Z0-9_]*\)$$/\2 = 0x\1;/p'
++quiet_cmd_vdsosym = VDSOSYM $@
++ cmd_vdsosym = $(NM) $< | sed -n $(sed-vdsosym) | LC_ALL=C sort > $@
++
++$(obj)/%-syms.lds: $(obj)/%.so.dbg FORCE
++ $(call if_changed,vdsosym)
++
++#
++# Build multiple 32-bit vDSO images to choose from at boot time.
++#
++obj-$(VDSO32-y) += vdso32-syms.lds
++vdso32.so-$(CONFIG_X86_32) += int80
++vdso32.so-$(CONFIG_COMPAT) += syscall
++vdso32.so-$(VDSO32-y) += sysenter
++
++CPPFLAGS_vdso32.lds = $(CPPFLAGS_vdso.lds)
++VDSO_LDFLAGS_vdso32.lds = -m elf_i386 -Wl,-soname=linux-gate.so.1
++
++# This makes sure the $(obj) subdirectory exists even though vdso32/
++# is not a kbuild sub-make subdirectory.
++override obj-dirs = $(dir $(obj)) $(obj)/vdso32/
+
+-# We also create a special relocatable object that should mirror the symbol
+-# table and layout of the linked DSO. With ld -R we can then refer to
+-# these symbols in the kernel code rather than hand-coded addresses.
+-extra-y += vdso-syms.o
+-$(obj)/built-in.o: $(obj)/vdso-syms.o
+-$(obj)/built-in.o: ld_flags += -R $(obj)/vdso-syms.o
++targets += vdso32/vdso32.lds
++targets += $(vdso32.so-y:%=vdso32-%.so.dbg) $(vdso32.so-y:%=vdso32-%.so)
++targets += vdso32/note.o $(vdso32.so-y:%=vdso32/%.o)
+
+-SYSCFLAGS_vdso-syms.o = -r -d
+-$(obj)/vdso-syms.o: $(src)/vdso.lds $(vobjs) FORCE
+- $(call if_changed,syscall)
++extra-y += $(vdso32.so-y:%=vdso32-%.so)
+
++$(obj)/vdso32.o: $(vdso32.so-y:%=$(obj)/vdso32-%.so)
++
++KBUILD_AFLAGS_32 := $(filter-out -m64,$(KBUILD_AFLAGS))
++$(vdso32.so-y:%=$(obj)/vdso32-%.so.dbg): KBUILD_AFLAGS = $(KBUILD_AFLAGS_32)
++$(vdso32.so-y:%=$(obj)/vdso32-%.so.dbg): asflags-$(CONFIG_X86_64) += -m32
++
++$(vdso32.so-y:%=$(obj)/vdso32-%.so.dbg): $(obj)/vdso32-%.so.dbg: FORCE \
++ $(obj)/vdso32/vdso32.lds \
++ $(obj)/vdso32/note.o \
++ $(obj)/vdso32/%.o
++ $(call if_changed,vdso)
++
++# Make vdso32-*-syms.lds from each image, and then make sure they match.
++# The only difference should be that some do not define VDSO32_SYSENTER_RETURN.
++
++targets += vdso32-syms.lds $(vdso32.so-y:%=vdso32-%-syms.lds)
++
++quiet_cmd_vdso32sym = VDSOSYM $@
++define cmd_vdso32sym
++ if LC_ALL=C sort -u $(filter-out FORCE,$^) > $(@D)/.tmp_$(@F) && \
++ $(foreach H,$(filter-out FORCE,$^),\
++ if grep -q VDSO32_SYSENTER_RETURN $H; \
++ then diff -u $(@D)/.tmp_$(@F) $H; \
++ else sed /VDSO32_SYSENTER_RETURN/d $(@D)/.tmp_$(@F) | \
++ diff -u - $H; fi &&) : ;\
++ then mv -f $(@D)/.tmp_$(@F) $@; \
++ else rm -f $(@D)/.tmp_$(@F); exit 1; \
++ fi
++endef
++
++$(obj)/vdso32-syms.lds: $(vdso32.so-y:%=$(obj)/vdso32-%-syms.lds) FORCE
++ $(call if_changed,vdso32sym)
++
++#
++# The DSO images are built using a special linker script.
++#
++quiet_cmd_vdso = VDSO $@
++ cmd_vdso = $(CC) -nostdlib -o $@ \
++ $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
++ -Wl,-T,$(filter %.lds,$^) $(filter %.o,$^)
++
++VDSO_LDFLAGS = -fPIC -shared $(call ld-option, -Wl$(comma)--hash-style=sysv)
++
++#
++# Install the unstripped copy of vdso*.so listed in $(vdso-install-y).
++#
+ quiet_cmd_vdso_install = INSTALL $@
+ cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+-vdso.so:
++$(vdso-install-y): %.so: $(obj)/%.so.dbg FORCE
+ @mkdir -p $(MODLIB)/vdso
+ $(call cmd,vdso_install)
+
+-vdso_install: vdso.so
++PHONY += vdso_install $(vdso-install-y)
++vdso_install: $(vdso-install-y)
++
++clean-files := vdso32-syscall* vdso32-sysenter* vdso32-int80*
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 5b54cdf..23476c2 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -19,7 +19,6 @@
+ #include <asm/hpet.h>
+ #include <asm/unistd.h>
+ #include <asm/io.h>
+-#include <asm/vgtod.h>
+ #include "vextern.h"
+
+ #define gtod vdso_vsyscall_gtod_data
+diff --git a/arch/x86/vdso/vdso-layout.lds.S b/arch/x86/vdso/vdso-layout.lds.S
+new file mode 100644
+index 0000000..634a2cf
+--- /dev/null
++++ b/arch/x86/vdso/vdso-layout.lds.S
+@@ -0,0 +1,64 @@
++/*
++ * Linker script for vDSO. This is an ELF shared object prelinked to
++ * its virtual address, and with only one read-only segment.
++ * This script controls its layout.
++ */
++
++SECTIONS
++{
++ . = VDSO_PRELINK + SIZEOF_HEADERS;
++
++ .hash : { *(.hash) } :text
++ .gnu.hash : { *(.gnu.hash) }
++ .dynsym : { *(.dynsym) }
++ .dynstr : { *(.dynstr) }
++ .gnu.version : { *(.gnu.version) }
++ .gnu.version_d : { *(.gnu.version_d) }
++ .gnu.version_r : { *(.gnu.version_r) }
++
++ .note : { *(.note.*) } :text :note
++
++ .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
++ .eh_frame : { KEEP (*(.eh_frame)) } :text
++
++ .dynamic : { *(.dynamic) } :text :dynamic
++
++ .rodata : { *(.rodata*) } :text
++ .data : {
++ *(.data*)
++ *(.sdata*)
++ *(.got.plt) *(.got)
++ *(.gnu.linkonce.d.*)
++ *(.bss*)
++ *(.dynbss*)
++ *(.gnu.linkonce.b.*)
+ }
+
-+ aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-+ tcrypt_complete, &result);
++ .altinstructions : { *(.altinstructions) }
++ .altinstr_replacement : { *(.altinstr_replacement) }
+
-+ for (i = 0, j = 0; i < tcount; i++) {
-+ if (!aead_tv[i].np) {
-+ printk(KERN_INFO "test %u (%d bit key):\n",
-+ ++j, aead_tv[i].klen * 8);
++ /*
++ * Align the actual code well away from the non-instruction data.
++ * This is the best thing for the I-cache.
++ */
++ . = ALIGN(0x100);
+
-+ crypto_aead_clear_flags(tfm, ~0);
-+ if (aead_tv[i].wk)
-+ crypto_aead_set_flags(
-+ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-+ key = aead_tv[i].key;
++ .text : { *(.text*) } :text =0x90909090
++}
+
-+ ret = crypto_aead_setkey(tfm, key,
-+ aead_tv[i].klen);
-+ if (ret) {
-+ printk(KERN_INFO "setkey() failed flags=%x\n",
-+ crypto_aead_get_flags(tfm));
++/*
++ * Very old versions of ld do not recognize this name token; use the constant.
++ */
++#define PT_GNU_EH_FRAME 0x6474e550
+
-+ if (!aead_tv[i].fail)
-+ goto out;
-+ }
++/*
++ * We must supply the ELF program headers explicitly to get just one
++ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
++ */
++PHDRS
++{
++ text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
++ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
++ note PT_NOTE FLAGS(4); /* PF_R */
++ eh_frame_hdr PT_GNU_EH_FRAME;
++}
+diff --git a/arch/x86/vdso/vdso-start.S b/arch/x86/vdso/vdso-start.S
+deleted file mode 100644
+index 2dc2cdb..0000000
+--- a/arch/x86/vdso/vdso-start.S
++++ /dev/null
+@@ -1,2 +0,0 @@
+- .globl vdso_kernel_start
+-vdso_kernel_start:
+diff --git a/arch/x86/vdso/vdso.lds.S b/arch/x86/vdso/vdso.lds.S
+index 667d324..4e5dd3b 100644
+--- a/arch/x86/vdso/vdso.lds.S
++++ b/arch/x86/vdso/vdso.lds.S
+@@ -1,79 +1,37 @@
+ /*
+- * Linker script for vsyscall DSO. The vsyscall page is an ELF shared
+- * object prelinked to its virtual address, and with only one read-only
+- * segment (that fits in one page). This script controls its layout.
++ * Linker script for 64-bit vDSO.
++ * We #include the file to define the layout details.
++ * Here we only choose the prelinked virtual address.
++ *
++ * This file defines the version script giving the user-exported symbols in
++ * the DSO. We can define local symbols here called VDSO* to make their
++ * values visible using the asm-x86/vdso.h macros from the kernel proper.
+ */
+-#include <asm/asm-offsets.h>
+-#include "voffset.h"
+
+ #define VDSO_PRELINK 0xffffffffff700000
+-
+-SECTIONS
+-{
+- . = VDSO_PRELINK + SIZEOF_HEADERS;
+-
+- .hash : { *(.hash) } :text
+- .gnu.hash : { *(.gnu.hash) }
+- .dynsym : { *(.dynsym) }
+- .dynstr : { *(.dynstr) }
+- .gnu.version : { *(.gnu.version) }
+- .gnu.version_d : { *(.gnu.version_d) }
+- .gnu.version_r : { *(.gnu.version_r) }
+-
+- /* This linker script is used both with -r and with -shared.
+- For the layouts to match, we need to skip more than enough
+- space for the dynamic symbol table et al. If this amount
+- is insufficient, ld -shared will barf. Just increase it here. */
+- . = VDSO_PRELINK + VDSO_TEXT_OFFSET;
+-
+- .text : { *(.text*) } :text
+- .rodata : { *(.rodata*) } :text
+- .data : {
+- *(.data*)
+- *(.sdata*)
+- *(.bss*)
+- *(.dynbss*)
+- } :text
+-
+- .altinstructions : { *(.altinstructions) } :text
+- .altinstr_replacement : { *(.altinstr_replacement) } :text
+-
+- .note : { *(.note.*) } :text :note
+- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+- .eh_frame : { KEEP (*(.eh_frame)) } :text
+- .dynamic : { *(.dynamic) } :text :dynamic
+- .useless : {
+- *(.got.plt) *(.got)
+- *(.gnu.linkonce.d.*)
+- *(.gnu.linkonce.b.*)
+- } :text
+-}
++#include "vdso-layout.lds.S"
+
+ /*
+- * We must supply the ELF program headers explicitly to get just one
+- * PT_LOAD segment, and set the flags explicitly to make segments read-only.
++ * This controls what userland symbols we export from the vDSO.
+ */
+-PHDRS
+-{
+- text PT_LOAD FILEHDR PHDRS FLAGS(5); /* PF_R|PF_X */
+- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+- note PT_NOTE FLAGS(4); /* PF_R */
+- eh_frame_hdr 0x6474e550; /* PT_GNU_EH_FRAME, but ld doesn't match the name */
++VERSION {
++ LINUX_2.6 {
++ global:
++ clock_gettime;
++ __vdso_clock_gettime;
++ gettimeofday;
++ __vdso_gettimeofday;
++ getcpu;
++ __vdso_getcpu;
++ local: *;
++ };
+ }
+
++VDSO64_PRELINK = VDSO_PRELINK;
+
-+ authsize = abs(aead_tv[i].rlen - aead_tv[i].ilen);
-+ ret = crypto_aead_setauthsize(tfm, authsize);
-+ if (ret) {
-+ printk(KERN_INFO
-+ "failed to set authsize = %u\n",
-+ authsize);
-+ goto out;
-+ }
+ /*
+- * This controls what symbols we export from the DSO.
++ * Define VDSO64_x for each VEXTERN(x), for use via VDSO64_SYMBOL.
+ */
+-VERSION
+-{
+- LINUX_2.6 {
+- global:
+- clock_gettime;
+- __vdso_clock_gettime;
+- gettimeofday;
+- __vdso_gettimeofday;
+- getcpu;
+- __vdso_getcpu;
+- local: *;
+- };
+-}
++#define VEXTERN(x) VDSO64_ ## x = vdso_ ## x;
++#include "vextern.h"
++#undef VEXTERN
+diff --git a/arch/x86/vdso/vdso32-setup.c b/arch/x86/vdso/vdso32-setup.c
+new file mode 100644
+index 0000000..348f134
+--- /dev/null
++++ b/arch/x86/vdso/vdso32-setup.c
+@@ -0,0 +1,444 @@
++/*
++ * (C) Copyright 2002 Linus Torvalds
++ * Portions based on the vdso-randomization code from exec-shield:
++ * Copyright(C) 2005-2006, Red Hat, Inc., Ingo Molnar
++ *
++ * This file contains the needed initializations to support sysenter.
++ */
+
-+ sg_init_one(&sg[0], aead_tv[i].input,
-+ aead_tv[i].ilen + (enc ? authsize : 0));
++#include <linux/init.h>
++#include <linux/smp.h>
++#include <linux/thread_info.h>
++#include <linux/sched.h>
++#include <linux/gfp.h>
++#include <linux/string.h>
++#include <linux/elf.h>
++#include <linux/mm.h>
++#include <linux/err.h>
++#include <linux/module.h>
+
-+ sg_init_one(&asg[0], aead_tv[i].assoc,
-+ aead_tv[i].alen);
++#include <asm/cpufeature.h>
++#include <asm/msr.h>
++#include <asm/pgtable.h>
++#include <asm/unistd.h>
++#include <asm/elf.h>
++#include <asm/tlbflush.h>
++#include <asm/vdso.h>
++#include <asm/proto.h>
+
-+ aead_request_set_crypt(req, sg, sg,
-+ aead_tv[i].ilen,
-+ aead_tv[i].iv);
++enum {
++ VDSO_DISABLED = 0,
++ VDSO_ENABLED = 1,
++ VDSO_COMPAT = 2,
++};
+
-+ aead_request_set_assoc(req, asg, aead_tv[i].alen);
++#ifdef CONFIG_COMPAT_VDSO
++#define VDSO_DEFAULT VDSO_COMPAT
++#else
++#define VDSO_DEFAULT VDSO_ENABLED
++#endif
+
-+ ret = enc ?
-+ crypto_aead_encrypt(req) :
-+ crypto_aead_decrypt(req);
++#ifdef CONFIG_X86_64
++#define vdso_enabled sysctl_vsyscall32
++#define arch_setup_additional_pages syscall32_setup_pages
++#endif
+
-+ switch (ret) {
-+ case 0:
-+ break;
-+ case -EINPROGRESS:
-+ case -EBUSY:
-+ ret = wait_for_completion_interruptible(
-+ &result.completion);
-+ if (!ret && !(ret = result.err)) {
-+ INIT_COMPLETION(result.completion);
-+ break;
-+ }
-+ /* fall through */
-+ default:
-+ printk(KERN_INFO "%s () failed err=%d\n",
-+ e, -ret);
-+ goto out;
-+ }
++/*
++ * This is the difference between the prelinked addresses in the vDSO images
++ * and the VDSO_HIGH_BASE address where CONFIG_COMPAT_VDSO places the vDSO
++ * in the user address space.
++ */
++#define VDSO_ADDR_ADJUST (VDSO_HIGH_BASE - (unsigned long)VDSO32_PRELINK)
+
-+ q = kmap(sg_page(&sg[0])) + sg[0].offset;
-+ hexdump(q, aead_tv[i].rlen);
++/*
++ * Should the kernel map a VDSO page into processes and pass its
++ * address down to glibc upon exec()?
++ */
++unsigned int __read_mostly vdso_enabled = VDSO_DEFAULT;
+
-+ printk(KERN_INFO "enc/dec: %s\n",
-+ memcmp(q, aead_tv[i].result,
-+ aead_tv[i].rlen) ? "fail" : "pass");
++static int __init vdso_setup(char *s)
++{
++ vdso_enabled = simple_strtoul(s, NULL, 0);
++
++ return 1;
++}
++
++/*
++ * For consistency, the argument vdso32=[012] affects the 32-bit vDSO
++ * behavior on both 64-bit and 32-bit kernels.
++ * On 32-bit kernels, vdso=[012] means the same thing.
++ */
++__setup("vdso32=", vdso_setup);
++
++#ifdef CONFIG_X86_32
++__setup_param("vdso=", vdso32_setup, vdso_setup, 0);
++
++EXPORT_SYMBOL_GPL(vdso_enabled);
++#endif
++
++static __init void reloc_symtab(Elf32_Ehdr *ehdr,
++ unsigned offset, unsigned size)
++{
++ Elf32_Sym *sym = (void *)ehdr + offset;
++ unsigned nsym = size / sizeof(*sym);
++ unsigned i;
++
++ for(i = 0; i < nsym; i++, sym++) {
++ if (sym->st_shndx == SHN_UNDEF ||
++ sym->st_shndx == SHN_ABS)
++ continue; /* skip */
++
++ if (sym->st_shndx > SHN_LORESERVE) {
++ printk(KERN_INFO "VDSO: unexpected st_shndx %x\n",
++ sym->st_shndx);
++ continue;
++ }
++
++ switch(ELF_ST_TYPE(sym->st_info)) {
++ case STT_OBJECT:
++ case STT_FUNC:
++ case STT_SECTION:
++ case STT_FILE:
++ sym->st_value += VDSO_ADDR_ADJUST;
+ }
+ }
++}
+
-+ printk(KERN_INFO "\ntesting %s %s across pages (chunking)\n", algo, e);
-+ memset(xbuf, 0, XBUFSIZE);
-+ memset(axbuf, 0, XBUFSIZE);
++static __init void reloc_dyn(Elf32_Ehdr *ehdr, unsigned offset)
++{
++ Elf32_Dyn *dyn = (void *)ehdr + offset;
++
++ for(; dyn->d_tag != DT_NULL; dyn++)
++ switch(dyn->d_tag) {
++ case DT_PLTGOT:
++ case DT_HASH:
++ case DT_STRTAB:
++ case DT_SYMTAB:
++ case DT_RELA:
++ case DT_INIT:
++ case DT_FINI:
++ case DT_REL:
++ case DT_DEBUG:
++ case DT_JMPREL:
++ case DT_VERSYM:
++ case DT_VERDEF:
++ case DT_VERNEED:
++ case DT_ADDRRNGLO ... DT_ADDRRNGHI:
++ /* definitely pointers needing relocation */
++ dyn->d_un.d_ptr += VDSO_ADDR_ADJUST;
++ break;
++
++ case DT_ENCODING ... OLD_DT_LOOS-1:
++ case DT_LOOS ... DT_HIOS-1:
++ /* Tags above DT_ENCODING are pointers if
++ they're even */
++ if (dyn->d_tag >= DT_ENCODING &&
++ (dyn->d_tag & 1) == 0)
++ dyn->d_un.d_ptr += VDSO_ADDR_ADJUST;
++ break;
++
++ case DT_VERDEFNUM:
++ case DT_VERNEEDNUM:
++ case DT_FLAGS_1:
++ case DT_RELACOUNT:
++ case DT_RELCOUNT:
++ case DT_VALRNGLO ... DT_VALRNGHI:
++ /* definitely not pointers */
++ break;
+
-+ for (i = 0, j = 0; i < tcount; i++) {
-+ if (aead_tv[i].np) {
-+ printk(KERN_INFO "test %u (%d bit key):\n",
-+ ++j, aead_tv[i].klen * 8);
++ case OLD_DT_LOOS ... DT_LOOS-1:
++ case DT_HIOS ... DT_VALRNGLO-1:
++ default:
++ if (dyn->d_tag > DT_ENCODING)
++ printk(KERN_INFO "VDSO: unexpected DT_tag %x\n",
++ dyn->d_tag);
++ break;
++ }
++}
+
-+ crypto_aead_clear_flags(tfm, ~0);
-+ if (aead_tv[i].wk)
-+ crypto_aead_set_flags(
-+ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-+ key = aead_tv[i].key;
++static __init void relocate_vdso(Elf32_Ehdr *ehdr)
++{
++ Elf32_Phdr *phdr;
++ Elf32_Shdr *shdr;
++ int i;
+
-+ ret = crypto_aead_setkey(tfm, key, aead_tv[i].klen);
-+ if (ret) {
-+ printk(KERN_INFO "setkey() failed flags=%x\n",
-+ crypto_aead_get_flags(tfm));
++ BUG_ON(memcmp(ehdr->e_ident, ELFMAG, 4) != 0 ||
++ !elf_check_arch_ia32(ehdr) ||
++ ehdr->e_type != ET_DYN);
++
++ ehdr->e_entry += VDSO_ADDR_ADJUST;
++
++ /* rebase phdrs */
++ phdr = (void *)ehdr + ehdr->e_phoff;
++ for (i = 0; i < ehdr->e_phnum; i++) {
++ phdr[i].p_vaddr += VDSO_ADDR_ADJUST;
++
++ /* relocate dynamic stuff */
++ if (phdr[i].p_type == PT_DYNAMIC)
++ reloc_dyn(ehdr, phdr[i].p_offset);
++ }
++
++ /* rebase sections */
++ shdr = (void *)ehdr + ehdr->e_shoff;
++ for(i = 0; i < ehdr->e_shnum; i++) {
++ if (!(shdr[i].sh_flags & SHF_ALLOC))
++ continue;
+
-+ if (!aead_tv[i].fail)
-+ goto out;
-+ }
++ shdr[i].sh_addr += VDSO_ADDR_ADJUST;
+
-+ sg_init_table(sg, aead_tv[i].np);
-+ for (k = 0, temp = 0; k < aead_tv[i].np; k++) {
-+ memcpy(&xbuf[IDX[k]],
-+ aead_tv[i].input + temp,
-+ aead_tv[i].tap[k]);
-+ temp += aead_tv[i].tap[k];
-+ sg_set_buf(&sg[k], &xbuf[IDX[k]],
-+ aead_tv[i].tap[k]);
-+ }
++ if (shdr[i].sh_type == SHT_SYMTAB ||
++ shdr[i].sh_type == SHT_DYNSYM)
++ reloc_symtab(ehdr, shdr[i].sh_offset,
++ shdr[i].sh_size);
++ }
++}
+
-+ authsize = abs(aead_tv[i].rlen - aead_tv[i].ilen);
-+ ret = crypto_aead_setauthsize(tfm, authsize);
-+ if (ret) {
-+ printk(KERN_INFO
-+ "failed to set authsize = %u\n",
-+ authsize);
-+ goto out;
-+ }
++/*
++ * These symbols are defined by vdso32.S to mark the bounds
++ * of the ELF DSO images included therein.
++ */
++extern const char vdso32_default_start, vdso32_default_end;
++extern const char vdso32_sysenter_start, vdso32_sysenter_end;
++static struct page *vdso32_pages[1];
+
-+ if (enc)
-+ sg[k - 1].length += authsize;
++#ifdef CONFIG_X86_64
+
-+ sg_init_table(asg, aead_tv[i].anp);
-+ for (k = 0, temp = 0; k < aead_tv[i].anp; k++) {
-+ memcpy(&axbuf[IDX[k]],
-+ aead_tv[i].assoc + temp,
-+ aead_tv[i].atap[k]);
-+ temp += aead_tv[i].atap[k];
-+ sg_set_buf(&asg[k], &axbuf[IDX[k]],
-+ aead_tv[i].atap[k]);
-+ }
++static int use_sysenter __read_mostly = -1;
+
-+ aead_request_set_crypt(req, sg, sg,
-+ aead_tv[i].ilen,
-+ aead_tv[i].iv);
++#define vdso32_sysenter() (use_sysenter > 0)
+
-+ aead_request_set_assoc(req, asg, aead_tv[i].alen);
++/* May not be __init: called during resume */
++void syscall32_cpu_init(void)
++{
++ if (use_sysenter < 0)
++ use_sysenter = (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL);
+
-+ ret = enc ?
-+ crypto_aead_encrypt(req) :
-+ crypto_aead_decrypt(req);
++ /* Load these always in case some future AMD CPU supports
++ SYSENTER from compat mode too. */
++ checking_wrmsrl(MSR_IA32_SYSENTER_CS, (u64)__KERNEL_CS);
++ checking_wrmsrl(MSR_IA32_SYSENTER_ESP, 0ULL);
++ checking_wrmsrl(MSR_IA32_SYSENTER_EIP, (u64)ia32_sysenter_target);
+
-+ switch (ret) {
-+ case 0:
-+ break;
-+ case -EINPROGRESS:
-+ case -EBUSY:
-+ ret = wait_for_completion_interruptible(
-+ &result.completion);
-+ if (!ret && !(ret = result.err)) {
-+ INIT_COMPLETION(result.completion);
-+ break;
-+ }
-+ /* fall through */
-+ default:
-+ printk(KERN_INFO "%s () failed err=%d\n",
-+ e, -ret);
-+ goto out;
-+ }
++ wrmsrl(MSR_CSTAR, ia32_cstar_target);
++}
+
-+ for (k = 0, temp = 0; k < aead_tv[i].np; k++) {
-+ printk(KERN_INFO "page %u\n", k);
-+ q = kmap(sg_page(&sg[k])) + sg[k].offset;
-+ hexdump(q, aead_tv[i].tap[k]);
-+ printk(KERN_INFO "%s\n",
-+ memcmp(q, aead_tv[i].result + temp,
-+ aead_tv[i].tap[k] -
-+ (k < aead_tv[i].np - 1 || enc ?
-+ 0 : authsize)) ?
-+ "fail" : "pass");
++#define compat_uses_vma 1
+
-+ temp += aead_tv[i].tap[k];
-+ }
++static inline void map_compat_vdso(int map)
++{
++}
++
++#else /* CONFIG_X86_32 */
++
++#define vdso32_sysenter() (boot_cpu_has(X86_FEATURE_SEP))
++
++void enable_sep_cpu(void)
++{
++ int cpu = get_cpu();
++ struct tss_struct *tss = &per_cpu(init_tss, cpu);
++
++ if (!boot_cpu_has(X86_FEATURE_SEP)) {
++ put_cpu();
++ return;
++ }
++
++ tss->x86_tss.ss1 = __KERNEL_CS;
++ tss->x86_tss.sp1 = sizeof(struct tss_struct) + (unsigned long) tss;
++ wrmsr(MSR_IA32_SYSENTER_CS, __KERNEL_CS, 0);
++ wrmsr(MSR_IA32_SYSENTER_ESP, tss->x86_tss.sp1, 0);
++ wrmsr(MSR_IA32_SYSENTER_EIP, (unsigned long) ia32_sysenter_target, 0);
++ put_cpu();
++}
++
++static struct vm_area_struct gate_vma;
++
++static int __init gate_vma_init(void)
++{
++ gate_vma.vm_mm = NULL;
++ gate_vma.vm_start = FIXADDR_USER_START;
++ gate_vma.vm_end = FIXADDR_USER_END;
++ gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
++ gate_vma.vm_page_prot = __P101;
++ /*
++ * Make sure the vDSO gets into every core dump.
++ * Dumping its contents makes post-mortem fully interpretable later
++ * without matching up the same kernel and hardware config to see
++ * what PC values meant.
++ */
++ gate_vma.vm_flags |= VM_ALWAYSDUMP;
++ return 0;
++}
++
++#define compat_uses_vma 0
++
++static void map_compat_vdso(int map)
++{
++ static int vdso_mapped;
++
++ if (map == vdso_mapped)
++ return;
++
++ vdso_mapped = map;
++
++ __set_fixmap(FIX_VDSO, page_to_pfn(vdso32_pages[0]) << PAGE_SHIFT,
++ map ? PAGE_READONLY_EXEC : PAGE_NONE);
++
++ /* flush stray tlbs */
++ flush_tlb_all();
++}
++
++#endif /* CONFIG_X86_64 */
++
++int __init sysenter_setup(void)
++{
++ void *syscall_page = (void *)get_zeroed_page(GFP_ATOMIC);
++ const void *vsyscall;
++ size_t vsyscall_len;
++
++ vdso32_pages[0] = virt_to_page(syscall_page);
++
++#ifdef CONFIG_X86_32
++ gate_vma_init();
++
++ printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));
++#endif
++
++ if (!vdso32_sysenter()) {
++ vsyscall = &vdso32_default_start;
++ vsyscall_len = &vdso32_default_end - &vdso32_default_start;
++ } else {
++ vsyscall = &vdso32_sysenter_start;
++ vsyscall_len = &vdso32_sysenter_end - &vdso32_sysenter_start;
++ }
++
++ memcpy(syscall_page, vsyscall, vsyscall_len);
++ relocate_vdso(syscall_page);
++
++ return 0;
++}
++
++/* Setup a VMA at program startup for the vsyscall page */
++int arch_setup_additional_pages(struct linux_binprm *bprm, int exstack)
++{
++ struct mm_struct *mm = current->mm;
++ unsigned long addr;
++ int ret = 0;
++ bool compat;
++
++ down_write(&mm->mmap_sem);
++
++ /* Test compat mode once here, in case someone
++ changes it via sysctl */
++ compat = (vdso_enabled == VDSO_COMPAT);
++
++ map_compat_vdso(compat);
++
++ if (compat)
++ addr = VDSO_HIGH_BASE;
++ else {
++ addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
++ if (IS_ERR_VALUE(addr)) {
++ ret = addr;
++ goto up_fail;
+ }
+ }
+
-+out:
-+ crypto_free_aead(tfm);
-+ aead_request_free(req);
++ if (compat_uses_vma || !compat) {
++ /*
++ * MAYWRITE to allow gdb to COW and set breakpoints
++ *
++ * Make sure the vDSO gets into every core dump.
++ * Dumping its contents makes post-mortem fully
++ * interpretable later without matching up the same
++ * kernel and hardware config to see what PC values
++ * meant.
++ */
++ ret = install_special_mapping(mm, addr, PAGE_SIZE,
++ VM_READ|VM_EXEC|
++ VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC|
++ VM_ALWAYSDUMP,
++ vdso32_pages);
++
++ if (ret)
++ goto up_fail;
++ }
++
++ current->mm->context.vdso = (void *)addr;
++ current_thread_info()->sysenter_return =
++ VDSO32_SYMBOL(addr, SYSENTER_RETURN);
++
++ up_fail:
++ up_write(&mm->mmap_sem);
++
++ return ret;
+}
+
- static void test_cipher(char *algo, int enc,
- struct cipher_testvec *template, unsigned int tcount)
++#ifdef CONFIG_X86_64
++
++__initcall(sysenter_setup);
++
++#ifdef CONFIG_SYSCTL
++/* Register vsyscall32 into the ABI table */
++#include <linux/sysctl.h>
++
++static ctl_table abi_table2[] = {
++ {
++ .procname = "vsyscall32",
++ .data = &sysctl_vsyscall32,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = proc_dointvec
++ },
++ {}
++};
++
++static ctl_table abi_root_table2[] = {
++ {
++ .ctl_name = CTL_ABI,
++ .procname = "abi",
++ .mode = 0555,
++ .child = abi_table2
++ },
++ {}
++};
++
++static __init int ia32_binfmt_init(void)
++{
++ register_sysctl_table(abi_root_table2);
++ return 0;
++}
++__initcall(ia32_binfmt_init);
++#endif
++
++#else /* CONFIG_X86_32 */
++
++const char *arch_vma_name(struct vm_area_struct *vma)
++{
++ if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
++ return "[vdso]";
++ return NULL;
++}
++
++struct vm_area_struct *get_gate_vma(struct task_struct *tsk)
++{
++ struct mm_struct *mm = tsk->mm;
++
++ /* Check to see if this task was created in compat vdso mode */
++ if (mm && mm->context.vdso == (void *)VDSO_HIGH_BASE)
++ return &gate_vma;
++ return NULL;
++}
++
++int in_gate_area(struct task_struct *task, unsigned long addr)
++{
++ const struct vm_area_struct *vma = get_gate_vma(task);
++
++ return vma && addr >= vma->vm_start && addr < vma->vm_end;
++}
++
++int in_gate_area_no_task(unsigned long addr)
++{
++ return 0;
++}
++
++#endif /* CONFIG_X86_64 */
+diff --git a/arch/x86/vdso/vdso32.S b/arch/x86/vdso/vdso32.S
+new file mode 100644
+index 0000000..1e36f72
+--- /dev/null
++++ b/arch/x86/vdso/vdso32.S
+@@ -0,0 +1,19 @@
++#include <linux/init.h>
++
++__INITDATA
++
++ .globl vdso32_default_start, vdso32_default_end
++vdso32_default_start:
++#ifdef CONFIG_X86_32
++ .incbin "arch/x86/vdso/vdso32-int80.so"
++#else
++ .incbin "arch/x86/vdso/vdso32-syscall.so"
++#endif
++vdso32_default_end:
++
++ .globl vdso32_sysenter_start, vdso32_sysenter_end
++vdso32_sysenter_start:
++ .incbin "arch/x86/vdso/vdso32-sysenter.so"
++vdso32_sysenter_end:
++
++__FINIT
+diff --git a/arch/x86/vdso/vdso32/.gitignore b/arch/x86/vdso/vdso32/.gitignore
+new file mode 100644
+index 0000000..e45fba9
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/.gitignore
+@@ -0,0 +1 @@
++vdso32.lds
+diff --git a/arch/x86/vdso/vdso32/int80.S b/arch/x86/vdso/vdso32/int80.S
+new file mode 100644
+index 0000000..b15b7c0
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/int80.S
+@@ -0,0 +1,56 @@
++/*
++ * Code for the vDSO. This version uses the old int $0x80 method.
++ *
++ * First get the common code for the sigreturn entry points.
++ * This must come first.
++ */
++#include "sigreturn.S"
++
++ .text
++ .globl __kernel_vsyscall
++ .type __kernel_vsyscall,@function
++ ALIGN
++__kernel_vsyscall:
++.LSTART_vsyscall:
++ int $0x80
++ ret
++.LEND_vsyscall:
++ .size __kernel_vsyscall,.-.LSTART_vsyscall
++ .previous
++
++ .section .eh_frame,"a",@progbits
++.LSTARTFRAMEDLSI:
++ .long .LENDCIEDLSI-.LSTARTCIEDLSI
++.LSTARTCIEDLSI:
++ .long 0 /* CIE ID */
++ .byte 1 /* Version number */
++ .string "zR" /* NUL-terminated augmentation string */
++ .uleb128 1 /* Code alignment factor */
++ .sleb128 -4 /* Data alignment factor */
++ .byte 8 /* Return address register column */
++ .uleb128 1 /* Augmentation value length */
++ .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
++ .byte 0x0c /* DW_CFA_def_cfa */
++ .uleb128 4
++ .uleb128 4
++ .byte 0x88 /* DW_CFA_offset, column 0x8 */
++ .uleb128 1
++ .align 4
++.LENDCIEDLSI:
++ .long .LENDFDEDLSI-.LSTARTFDEDLSI /* Length FDE */
++.LSTARTFDEDLSI:
++ .long .LSTARTFDEDLSI-.LSTARTFRAMEDLSI /* CIE pointer */
++ .long .LSTART_vsyscall-. /* PC-relative start address */
++ .long .LEND_vsyscall-.LSTART_vsyscall
++ .uleb128 0
++ .align 4
++.LENDFDEDLSI:
++ .previous
++
++ /*
++ * Pad out the segment to match the size of the sysenter.S version.
++ */
++VDSO32_vsyscall_eh_frame_size = 0x40
++ .section .data,"aw",@progbits
++ .space VDSO32_vsyscall_eh_frame_size-(.LENDFDEDLSI-.LSTARTFRAMEDLSI), 0
++ .previous
+diff --git a/arch/x86/vdso/vdso32/note.S b/arch/x86/vdso/vdso32/note.S
+new file mode 100644
+index 0000000..c83f257
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/note.S
+@@ -0,0 +1,44 @@
++/*
++ * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
++ * Here we can supply some information useful to userland.
++ */
++
++#include <linux/version.h>
++#include <linux/elfnote.h>
++
++/* Ideally this would use UTS_NAME, but using a quoted string here
++ doesn't work. Remember to change this when changing the
++ kernel's name. */
++ELFNOTE_START(Linux, 0, "a")
++ .long LINUX_VERSION_CODE
++ELFNOTE_END
++
++#ifdef CONFIG_XEN
++/*
++ * Add a special note telling glibc's dynamic linker a fake hardware
++ * flavor that it will use to choose the search path for libraries in the
++ * same way it uses real hardware capabilities like "mmx".
++ * We supply "nosegneg" as the fake capability, to indicate that we
++ * do not like negative offsets in instructions using segment overrides,
++ * since we implement those inefficiently. This makes it possible to
++ * install libraries optimized to avoid those access patterns in someplace
++ * like /lib/i686/tls/nosegneg. Note that an /etc/ld.so.conf.d/file
++ * corresponding to the bits here is needed to make ldconfig work right.
++ * It should contain:
++ * hwcap 1 nosegneg
++ * to match the mapping of bit to name that we give here.
++ *
++ * At runtime, the fake hardware feature will be considered to be present
++ * if its bit is set in the mask word. So, we start with the mask 0, and
++ * at boot time we set VDSO_NOTE_NONEGSEG_BIT if running under Xen.
++ */
++
++#include "../../xen/vdso.h" /* Defines VDSO_NOTE_NONEGSEG_BIT. */
++
++ELFNOTE_START(GNU, 2, "a")
++ .long 1 /* ncaps */
++VDSO32_NOTE_MASK: /* Symbol used by arch/x86/xen/setup.c */
++ .long 0 /* mask */
++ .byte VDSO_NOTE_NONEGSEG_BIT; .asciz "nosegneg" /* bit, name */
++ELFNOTE_END
++#endif
+diff --git a/arch/x86/vdso/vdso32/sigreturn.S b/arch/x86/vdso/vdso32/sigreturn.S
+new file mode 100644
+index 0000000..31776d0
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/sigreturn.S
+@@ -0,0 +1,144 @@
++/*
++ * Common code for the sigreturn entry points in vDSO images.
++ * So far this code is the same for both int80 and sysenter versions.
++ * This file is #include'd by int80.S et al to define them first thing.
++ * The kernel assumes that the addresses of these routines are constant
++ * for all vDSO implementations.
++ */
++
++#include <linux/linkage.h>
++#include <asm/unistd_32.h>
++#include <asm/asm-offsets.h>
++
++#ifndef SYSCALL_ENTER_KERNEL
++#define SYSCALL_ENTER_KERNEL int $0x80
++#endif
++
++ .text
++ .globl __kernel_sigreturn
++ .type __kernel_sigreturn,@function
++ ALIGN
++__kernel_sigreturn:
++.LSTART_sigreturn:
++ popl %eax /* XXX does this mean it needs unwind info? */
++ movl $__NR_sigreturn, %eax
++ SYSCALL_ENTER_KERNEL
++.LEND_sigreturn:
++ nop
++ .size __kernel_sigreturn,.-.LSTART_sigreturn
++
++ .globl __kernel_rt_sigreturn
++ .type __kernel_rt_sigreturn,@function
++ ALIGN
++__kernel_rt_sigreturn:
++.LSTART_rt_sigreturn:
++ movl $__NR_rt_sigreturn, %eax
++ SYSCALL_ENTER_KERNEL
++.LEND_rt_sigreturn:
++ nop
++ .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
++ .previous
++
++ .section .eh_frame,"a",@progbits
++.LSTARTFRAMEDLSI1:
++ .long .LENDCIEDLSI1-.LSTARTCIEDLSI1
++.LSTARTCIEDLSI1:
++ .long 0 /* CIE ID */
++ .byte 1 /* Version number */
++ .string "zRS" /* NUL-terminated augmentation string */
++ .uleb128 1 /* Code alignment factor */
++ .sleb128 -4 /* Data alignment factor */
++ .byte 8 /* Return address register column */
++ .uleb128 1 /* Augmentation value length */
++ .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
++ .byte 0 /* DW_CFA_nop */
++ .align 4
++.LENDCIEDLSI1:
++ .long .LENDFDEDLSI1-.LSTARTFDEDLSI1 /* Length FDE */
++.LSTARTFDEDLSI1:
++ .long .LSTARTFDEDLSI1-.LSTARTFRAMEDLSI1 /* CIE pointer */
++ /* HACK: The dwarf2 unwind routines will subtract 1 from the
++ return address to get an address in the middle of the
++ presumed call instruction. Since we didn't get here via
++ a call, we need to include the nop before the real start
++ to make up for it. */
++ .long .LSTART_sigreturn-1-. /* PC-relative start address */
++ .long .LEND_sigreturn-.LSTART_sigreturn+1
++ .uleb128 0 /* Augmentation */
++ /* What follows are the instructions for the table generation.
++ We record the locations of each register saved. This is
++ complicated by the fact that the "CFA" is always assumed to
++ be the value of the stack pointer in the caller. This means
++ that we must define the CFA of this body of code to be the
++ saved value of the stack pointer in the sigcontext. Which
++ also means that there is no fixed relation to the other
++ saved registers, which means that we must use DW_CFA_expression
++ to compute their addresses. It also means that when we
++ adjust the stack with the popl, we have to do it all over again. */
++
++#define do_cfa_expr(offset) \
++ .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
++ .uleb128 1f-0f; /* length */ \
++0: .byte 0x74; /* DW_OP_breg4 */ \
++ .sleb128 offset; /* offset */ \
++ .byte 0x06; /* DW_OP_deref */ \
++1:
++
++#define do_expr(regno, offset) \
++ .byte 0x10; /* DW_CFA_expression */ \
++ .uleb128 regno; /* regno */ \
++ .uleb128 1f-0f; /* length */ \
++0: .byte 0x74; /* DW_OP_breg4 */ \
++ .sleb128 offset; /* offset */ \
++1:
++
++ do_cfa_expr(IA32_SIGCONTEXT_sp+4)
++ do_expr(0, IA32_SIGCONTEXT_ax+4)
++ do_expr(1, IA32_SIGCONTEXT_cx+4)
++ do_expr(2, IA32_SIGCONTEXT_dx+4)
++ do_expr(3, IA32_SIGCONTEXT_bx+4)
++ do_expr(5, IA32_SIGCONTEXT_bp+4)
++ do_expr(6, IA32_SIGCONTEXT_si+4)
++ do_expr(7, IA32_SIGCONTEXT_di+4)
++ do_expr(8, IA32_SIGCONTEXT_ip+4)
++
++ .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
++
++ do_cfa_expr(IA32_SIGCONTEXT_sp)
++ do_expr(0, IA32_SIGCONTEXT_ax)
++ do_expr(1, IA32_SIGCONTEXT_cx)
++ do_expr(2, IA32_SIGCONTEXT_dx)
++ do_expr(3, IA32_SIGCONTEXT_bx)
++ do_expr(5, IA32_SIGCONTEXT_bp)
++ do_expr(6, IA32_SIGCONTEXT_si)
++ do_expr(7, IA32_SIGCONTEXT_di)
++ do_expr(8, IA32_SIGCONTEXT_ip)
++
++ .align 4
++.LENDFDEDLSI1:
++
++ .long .LENDFDEDLSI2-.LSTARTFDEDLSI2 /* Length FDE */
++.LSTARTFDEDLSI2:
++ .long .LSTARTFDEDLSI2-.LSTARTFRAMEDLSI1 /* CIE pointer */
++ /* HACK: See above wrt unwind library assumptions. */
++ .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
++ .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
++ .uleb128 0 /* Augmentation */
++ /* What follows are the instructions for the table generation.
++ We record the locations of each register saved. This is
++ slightly less complicated than the above, since we don't
++ modify the stack pointer in the process. */
++
++ do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_sp)
++ do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ax)
++ do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_cx)
++ do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_dx)
++ do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bx)
++ do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bp)
++ do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_si)
++ do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_di)
++ do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ip)
++
++ .align 4
++.LENDFDEDLSI2:
++ .previous
+diff --git a/arch/x86/vdso/vdso32/syscall.S b/arch/x86/vdso/vdso32/syscall.S
+new file mode 100644
+index 0000000..5415b56
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/syscall.S
+@@ -0,0 +1,77 @@
++/*
++ * Code for the vDSO. This version uses the syscall instruction.
++ *
++ * First get the common code for the sigreturn entry points.
++ * This must come first.
++ */
++#define SYSCALL_ENTER_KERNEL syscall
++#include "sigreturn.S"
++
++#include <asm/segment.h>
++
++ .text
++ .globl __kernel_vsyscall
++ .type __kernel_vsyscall,@function
++ ALIGN
++__kernel_vsyscall:
++.LSTART_vsyscall:
++ push %ebp
++.Lpush_ebp:
++ movl %ecx, %ebp
++ syscall
++ movl $__USER32_DS, %ecx
++ movl %ecx, %ss
++ movl %ebp, %ecx
++ popl %ebp
++.Lpop_ebp:
++ ret
++.LEND_vsyscall:
++ .size __kernel_vsyscall,.-.LSTART_vsyscall
++
++ .section .eh_frame,"a",@progbits
++.LSTARTFRAME:
++ .long .LENDCIE-.LSTARTCIE
++.LSTARTCIE:
++ .long 0 /* CIE ID */
++ .byte 1 /* Version number */
++ .string "zR" /* NUL-terminated augmentation string */
++ .uleb128 1 /* Code alignment factor */
++ .sleb128 -4 /* Data alignment factor */
++ .byte 8 /* Return address register column */
++ .uleb128 1 /* Augmentation value length */
++ .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
++ .byte 0x0c /* DW_CFA_def_cfa */
++ .uleb128 4
++ .uleb128 4
++ .byte 0x88 /* DW_CFA_offset, column 0x8 */
++ .uleb128 1
++ .align 4
++.LENDCIE:
++
++ .long .LENDFDE1-.LSTARTFDE1 /* Length FDE */
++.LSTARTFDE1:
++ .long .LSTARTFDE1-.LSTARTFRAME /* CIE pointer */
++ .long .LSTART_vsyscall-. /* PC-relative start address */
++ .long .LEND_vsyscall-.LSTART_vsyscall
++ .uleb128 0 /* Augmentation length */
++ /* What follows are the instructions for the table generation.
++ We have to record all changes of the stack pointer. */
++ .byte 0x40 + .Lpush_ebp-.LSTART_vsyscall /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .uleb128 8
++ .byte 0x85, 0x02 /* DW_CFA_offset %ebp -8 */
++ .byte 0x40 + .Lpop_ebp-.Lpush_ebp /* DW_CFA_advance_loc */
++ .byte 0xc5 /* DW_CFA_restore %ebp */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .uleb128 4
++ .align 4
++.LENDFDE1:
++ .previous
++
++ /*
++ * Pad out the segment to match the size of the sysenter.S version.
++ */
++VDSO32_vsyscall_eh_frame_size = 0x40
++ .section .data,"aw",@progbits
++ .space VDSO32_vsyscall_eh_frame_size-(.LENDFDE1-.LSTARTFRAME), 0
++ .previous
+diff --git a/arch/x86/vdso/vdso32/sysenter.S b/arch/x86/vdso/vdso32/sysenter.S
+new file mode 100644
+index 0000000..e2800af
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/sysenter.S
+@@ -0,0 +1,116 @@
++/*
++ * Code for the vDSO. This version uses the sysenter instruction.
++ *
++ * First get the common code for the sigreturn entry points.
++ * This must come first.
++ */
++#include "sigreturn.S"
++
++/*
++ * The caller puts arg2 in %ecx, which gets pushed. The kernel will use
++ * %ecx itself for arg2. The pushing is because the sysexit instruction
++ * (found in entry.S) requires that we clobber %ecx with the desired %esp.
++ * User code might expect that %ecx is unclobbered though, as it would be
++ * for returning via the iret instruction, so we must push and pop.
++ *
++ * The caller puts arg3 in %edx, which the sysexit instruction requires
++ * for %eip. Thus, exactly as for arg2, we must push and pop.
++ *
++ * Arg6 is different. The caller puts arg6 in %ebp. Since the sysenter
++ * instruction clobbers %esp, the user's %esp won't even survive entry
++ * into the kernel. We store %esp in %ebp. Code in entry.S must fetch
++ * arg6 from the stack.
++ *
++ * You can not use this vsyscall for the clone() syscall because the
++ * three words on the parent stack do not get copied to the child.
++ */
++ .text
++ .globl __kernel_vsyscall
++ .type __kernel_vsyscall,@function
++ ALIGN
++__kernel_vsyscall:
++.LSTART_vsyscall:
++ push %ecx
++.Lpush_ecx:
++ push %edx
++.Lpush_edx:
++ push %ebp
++.Lenter_kernel:
++ movl %esp,%ebp
++ sysenter
++
++ /* 7: align return point with nop's to make disassembly easier */
++ .space 7,0x90
++
++ /* 14: System call restart point is here! (SYSENTER_RETURN-2) */
++ jmp .Lenter_kernel
++ /* 16: System call normal return point is here! */
++VDSO32_SYSENTER_RETURN: /* Symbol used by sysenter.c via vdso32-syms.h */
++ pop %ebp
++.Lpop_ebp:
++ pop %edx
++.Lpop_edx:
++ pop %ecx
++.Lpop_ecx:
++ ret
++.LEND_vsyscall:
++ .size __kernel_vsyscall,.-.LSTART_vsyscall
++ .previous
++
++ .section .eh_frame,"a",@progbits
++.LSTARTFRAMEDLSI:
++ .long .LENDCIEDLSI-.LSTARTCIEDLSI
++.LSTARTCIEDLSI:
++ .long 0 /* CIE ID */
++ .byte 1 /* Version number */
++ .string "zR" /* NUL-terminated augmentation string */
++ .uleb128 1 /* Code alignment factor */
++ .sleb128 -4 /* Data alignment factor */
++ .byte 8 /* Return address register column */
++ .uleb128 1 /* Augmentation value length */
++ .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
++ .byte 0x0c /* DW_CFA_def_cfa */
++ .uleb128 4
++ .uleb128 4
++ .byte 0x88 /* DW_CFA_offset, column 0x8 */
++ .uleb128 1
++ .align 4
++.LENDCIEDLSI:
++ .long .LENDFDEDLSI-.LSTARTFDEDLSI /* Length FDE */
++.LSTARTFDEDLSI:
++ .long .LSTARTFDEDLSI-.LSTARTFRAMEDLSI /* CIE pointer */
++ .long .LSTART_vsyscall-. /* PC-relative start address */
++ .long .LEND_vsyscall-.LSTART_vsyscall
++ .uleb128 0
++ /* What follows are the instructions for the table generation.
++ We have to record all changes of the stack pointer. */
++ .byte 0x40 + (.Lpush_ecx-.LSTART_vsyscall) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x08 /* RA at offset 8 now */
++ .byte 0x40 + (.Lpush_edx-.Lpush_ecx) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x0c /* RA at offset 12 now */
++ .byte 0x40 + (.Lenter_kernel-.Lpush_edx) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x10 /* RA at offset 16 now */
++ .byte 0x85, 0x04 /* DW_CFA_offset %ebp -16 */
++ /* Finally the epilogue. */
++ .byte 0x40 + (.Lpop_ebp-.Lenter_kernel) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x0c /* RA at offset 12 now */
++ .byte 0xc5 /* DW_CFA_restore %ebp */
++ .byte 0x40 + (.Lpop_edx-.Lpop_ebp) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x08 /* RA at offset 8 now */
++ .byte 0x40 + (.Lpop_ecx-.Lpop_edx) /* DW_CFA_advance_loc */
++ .byte 0x0e /* DW_CFA_def_cfa_offset */
++ .byte 0x04 /* RA at offset 4 now */
++ .align 4
++.LENDFDEDLSI:
++ .previous
++
++ /*
++ * Emit a symbol with the size of this .eh_frame data,
++ * to verify it matches the other versions.
++ */
++VDSO32_vsyscall_eh_frame_size = (.LENDFDEDLSI-.LSTARTFRAMEDLSI)
+diff --git a/arch/x86/vdso/vdso32/vdso32.lds.S b/arch/x86/vdso/vdso32/vdso32.lds.S
+new file mode 100644
+index 0000000..976124b
+--- /dev/null
++++ b/arch/x86/vdso/vdso32/vdso32.lds.S
+@@ -0,0 +1,37 @@
++/*
++ * Linker script for 32-bit vDSO.
++ * We #include the file to define the layout details.
++ * Here we only choose the prelinked virtual address.
++ *
++ * This file defines the version script giving the user-exported symbols in
++ * the DSO. We can define local symbols here called VDSO* to make their
++ * values visible using the asm-x86/vdso.h macros from the kernel proper.
++ */
++
++#define VDSO_PRELINK 0
++#include "../vdso-layout.lds.S"
++
++/* The ELF entry point can be used to set the AT_SYSINFO value. */
++ENTRY(__kernel_vsyscall);
++
++/*
++ * This controls what userland symbols we export from the vDSO.
++ */
++VERSION
++{
++ LINUX_2.5 {
++ global:
++ __kernel_vsyscall;
++ __kernel_sigreturn;
++ __kernel_rt_sigreturn;
++ local: *;
++ };
++}
++
++/*
++ * Symbols we define here called VDSO* get their values into vdso32-syms.h.
++ */
++VDSO32_PRELINK = VDSO_PRELINK;
++VDSO32_vsyscall = __kernel_vsyscall;
++VDSO32_sigreturn = __kernel_sigreturn;
++VDSO32_rt_sigreturn = __kernel_rt_sigreturn;
+diff --git a/arch/x86/vdso/vgetcpu.c b/arch/x86/vdso/vgetcpu.c
+index 3b1ae1a..c8097f1 100644
+--- a/arch/x86/vdso/vgetcpu.c
++++ b/arch/x86/vdso/vgetcpu.c
+@@ -15,11 +15,11 @@
+
+ long __vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
{
-@@ -237,15 +474,11 @@ static void test_cipher(char *algo, int enc,
- printk("\ntesting %s %s\n", algo, e);
+- unsigned int dummy, p;
++ unsigned int p;
- tsize = sizeof (struct cipher_testvec);
-- tsize *= tcount;
--
- if (tsize > TVMEMSIZE) {
- printk("template (%u) too big for tvmem (%u)\n", tsize,
- TVMEMSIZE);
- return;
- }
--
-- memcpy(tvmem, template, tsize);
- cipher_tv = (void *)tvmem;
+ if (*vdso_vgetcpu_mode == VGETCPU_RDTSCP) {
+ /* Load per CPU data from RDTSCP */
+- rdtscp(dummy, dummy, p);
++ native_read_tscp(&p);
+ } else {
+ /* Load per CPU data from GDT */
+ asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
+index ff9333e..3fdd514 100644
+--- a/arch/x86/vdso/vma.c
++++ b/arch/x86/vdso/vma.c
+@@ -11,23 +11,20 @@
+ #include <asm/vsyscall.h>
+ #include <asm/vgtod.h>
+ #include <asm/proto.h>
+-#include "voffset.h"
++#include <asm/vdso.h>
+
+-int vdso_enabled = 1;
+-
+-#define VEXTERN(x) extern typeof(__ ## x) *vdso_ ## x;
+-#include "vextern.h"
++#include "vextern.h" /* Just for VMAGIC. */
+ #undef VEXTERN
+
+-extern char vdso_kernel_start[], vdso_start[], vdso_end[];
++int vdso_enabled = 1;
++
++extern char vdso_start[], vdso_end[];
+ extern unsigned short vdso_sync_cpuid;
+
+ struct page **vdso_pages;
+
+-static inline void *var_ref(void *vbase, char *var, char *name)
++static inline void *var_ref(void *p, char *name)
+ {
+- unsigned offset = var - &vdso_kernel_start[0] + VDSO_TEXT_OFFSET;
+- void *p = vbase + offset;
+ if (*(void **)p != (void *)VMAGIC) {
+ printk("VDSO: variable %s broken\n", name);
+ vdso_enabled = 0;
+@@ -62,9 +59,8 @@ static int __init init_vdso_vars(void)
+ vdso_enabled = 0;
+ }
+
+-#define V(x) *(typeof(x) *) var_ref(vbase, (char *)RELOC_HIDE(&x, 0), #x)
+ #define VEXTERN(x) \
+- V(vdso_ ## x) = &__ ## x;
++ *(typeof(__ ## x) **) var_ref(VDSO64_SYMBOL(vbase, x), #x) = &__ ## x;
+ #include "vextern.h"
+ #undef VEXTERN
+ return 0;
+diff --git a/arch/x86/vdso/voffset.h b/arch/x86/vdso/voffset.h
+deleted file mode 100644
+index 4af67c7..0000000
+--- a/arch/x86/vdso/voffset.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#define VDSO_TEXT_OFFSET 0x600
+diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
+index fbfa55c..4d5f264 100644
+--- a/arch/x86/xen/Kconfig
++++ b/arch/x86/xen/Kconfig
+@@ -5,6 +5,7 @@
+ config XEN
+ bool "Xen guest support"
+ select PARAVIRT
++ depends on X86_32
+ depends on X86_CMPXCHG && X86_TSC && !NEED_MULTIPLE_NODES && !(X86_VISWS || X86_VOYAGER)
+ help
+ This is the Linux Xen port. Enabling this will allow the
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 79ad152..de647bc 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -141,8 +141,8 @@ static void __init xen_banner(void)
+ printk(KERN_INFO "Hypervisor signature: %s\n", xen_start_info->magic);
+ }
+
+-static void xen_cpuid(unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
++static void xen_cpuid(unsigned int *ax, unsigned int *bx,
++ unsigned int *cx, unsigned int *dx)
+ {
+ unsigned maskedx = ~0;
+
+@@ -150,18 +150,18 @@ static void xen_cpuid(unsigned int *eax, unsigned int *ebx,
+ * Mask out inconvenient features, to try and disable as many
+ * unsupported kernel subsystems as possible.
+ */
+- if (*eax == 1)
++ if (*ax == 1)
+ maskedx = ~((1 << X86_FEATURE_APIC) | /* disable APIC */
+ (1 << X86_FEATURE_ACPI) | /* disable ACPI */
+ (1 << X86_FEATURE_ACC)); /* thermal monitoring */
+
+ asm(XEN_EMULATE_PREFIX "cpuid"
+- : "=a" (*eax),
+- "=b" (*ebx),
+- "=c" (*ecx),
+- "=d" (*edx)
+- : "0" (*eax), "2" (*ecx));
+- *edx &= maskedx;
++ : "=a" (*ax),
++ "=b" (*bx),
++ "=c" (*cx),
++ "=d" (*dx)
++ : "0" (*ax), "2" (*cx));
++ *dx &= maskedx;
+ }
+
+ static void xen_set_debugreg(int reg, unsigned long val)
+@@ -275,19 +275,12 @@ static unsigned long xen_store_tr(void)
+
+ static void xen_set_ldt(const void *addr, unsigned entries)
+ {
+- unsigned long linear_addr = (unsigned long)addr;
+ struct mmuext_op *op;
+ struct multicall_space mcs = xen_mc_entry(sizeof(*op));
+
+ op = mcs.args;
+ op->cmd = MMUEXT_SET_LDT;
+- if (linear_addr) {
+- /* ldt my be vmalloced, use arbitrary_virt_to_machine */
+- xmaddr_t maddr;
+- maddr = arbitrary_virt_to_machine((unsigned long)addr);
+- linear_addr = (unsigned long)maddr.maddr;
+- }
+- op->arg1.linear_addr = linear_addr;
++ op->arg1.linear_addr = (unsigned long)addr;
+ op->arg2.nr_ents = entries;
+
+ MULTI_mmuext_op(mcs.mc, op, 1, NULL, DOMID_SELF);
+@@ -295,7 +288,7 @@ static void xen_set_ldt(const void *addr, unsigned entries)
+ xen_mc_issue(PARAVIRT_LAZY_CPU);
+ }
+
+-static void xen_load_gdt(const struct Xgt_desc_struct *dtr)
++static void xen_load_gdt(const struct desc_ptr *dtr)
+ {
+ unsigned long *frames;
+ unsigned long va = dtr->address;
+@@ -357,11 +350,11 @@ static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
+ }
+
+ static void xen_write_ldt_entry(struct desc_struct *dt, int entrynum,
+- u32 low, u32 high)
++ const void *ptr)
+ {
+ unsigned long lp = (unsigned long)&dt[entrynum];
+ xmaddr_t mach_lp = virt_to_machine(lp);
+- u64 entry = (u64)high << 32 | low;
++ u64 entry = *(u64 *)ptr;
- init_completion(&result.completion);
-@@ -269,33 +502,34 @@ static void test_cipher(char *algo, int enc,
+ preempt_disable();
- j = 0;
- for (i = 0; i < tcount; i++) {
-- if (!(cipher_tv[i].np)) {
-+ memcpy(cipher_tv, &template[i], tsize);
-+ if (!(cipher_tv->np)) {
- j++;
- printk("test %u (%d bit key):\n",
-- j, cipher_tv[i].klen * 8);
-+ j, cipher_tv->klen * 8);
+@@ -395,12 +388,11 @@ static int cvt_gate_to_trap(int vector, u32 low, u32 high,
+ }
- crypto_ablkcipher_clear_flags(tfm, ~0);
-- if (cipher_tv[i].wk)
-+ if (cipher_tv->wk)
- crypto_ablkcipher_set_flags(
- tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-- key = cipher_tv[i].key;
-+ key = cipher_tv->key;
+ /* Locations of each CPU's IDT */
+-static DEFINE_PER_CPU(struct Xgt_desc_struct, idt_desc);
++static DEFINE_PER_CPU(struct desc_ptr, idt_desc);
- ret = crypto_ablkcipher_setkey(tfm, key,
-- cipher_tv[i].klen);
-+ cipher_tv->klen);
- if (ret) {
- printk("setkey() failed flags=%x\n",
- crypto_ablkcipher_get_flags(tfm));
+ /* Set an IDT entry. If the entry is part of the current IDT, then
+ also update Xen. */
+-static void xen_write_idt_entry(struct desc_struct *dt, int entrynum,
+- u32 low, u32 high)
++static void xen_write_idt_entry(gate_desc *dt, int entrynum, const gate_desc *g)
+ {
+ unsigned long p = (unsigned long)&dt[entrynum];
+ unsigned long start, end;
+@@ -412,14 +404,15 @@ static void xen_write_idt_entry(struct desc_struct *dt, int entrynum,
-- if (!cipher_tv[i].fail)
-+ if (!cipher_tv->fail)
- goto out;
- }
+ xen_mc_flush();
-- sg_init_one(&sg[0], cipher_tv[i].input,
-- cipher_tv[i].ilen);
-+ sg_init_one(&sg[0], cipher_tv->input,
-+ cipher_tv->ilen);
+- write_dt_entry(dt, entrynum, low, high);
++ native_write_idt_entry(dt, entrynum, g);
- ablkcipher_request_set_crypt(req, sg, sg,
-- cipher_tv[i].ilen,
-- cipher_tv[i].iv);
-+ cipher_tv->ilen,
-+ cipher_tv->iv);
+ if (p >= start && (p + 8) <= end) {
+ struct trap_info info[2];
++ u32 *desc = (u32 *)g;
- ret = enc ?
- crypto_ablkcipher_encrypt(req) :
-@@ -319,11 +553,11 @@ static void test_cipher(char *algo, int enc,
- }
+ info[1].address = 0;
- q = kmap(sg_page(&sg[0])) + sg[0].offset;
-- hexdump(q, cipher_tv[i].rlen);
-+ hexdump(q, cipher_tv->rlen);
+- if (cvt_gate_to_trap(entrynum, low, high, &info[0]))
++ if (cvt_gate_to_trap(entrynum, desc[0], desc[1], &info[0]))
+ if (HYPERVISOR_set_trap_table(info))
+ BUG();
+ }
+@@ -427,7 +420,7 @@ static void xen_write_idt_entry(struct desc_struct *dt, int entrynum,
+ preempt_enable();
+ }
- printk("%s\n",
-- memcmp(q, cipher_tv[i].result,
-- cipher_tv[i].rlen) ? "fail" : "pass");
-+ memcmp(q, cipher_tv->result,
-+ cipher_tv->rlen) ? "fail" : "pass");
- }
+-static void xen_convert_trap_info(const struct Xgt_desc_struct *desc,
++static void xen_convert_trap_info(const struct desc_ptr *desc,
+ struct trap_info *traps)
+ {
+ unsigned in, out, count;
+@@ -446,7 +439,7 @@ static void xen_convert_trap_info(const struct Xgt_desc_struct *desc,
+
+ void xen_copy_trap_info(struct trap_info *traps)
+ {
+- const struct Xgt_desc_struct *desc = &__get_cpu_var(idt_desc);
++ const struct desc_ptr *desc = &__get_cpu_var(idt_desc);
+
+ xen_convert_trap_info(desc, traps);
+ }
+@@ -454,7 +447,7 @@ void xen_copy_trap_info(struct trap_info *traps)
+ /* Load a new IDT into Xen. In principle this can be per-CPU, so we
+ hold a spinlock to protect the static traps[] array (static because
+ it avoids allocation, and saves stack space). */
+-static void xen_load_idt(const struct Xgt_desc_struct *desc)
++static void xen_load_idt(const struct desc_ptr *desc)
+ {
+ static DEFINE_SPINLOCK(lock);
+ static struct trap_info traps[257];
+@@ -475,22 +468,21 @@ static void xen_load_idt(const struct Xgt_desc_struct *desc)
+ /* Write a GDT descriptor entry. Ignore LDT descriptors, since
+ they're handled differently. */
+ static void xen_write_gdt_entry(struct desc_struct *dt, int entry,
+- u32 low, u32 high)
++ const void *desc, int type)
+ {
+ preempt_disable();
+
+- switch ((high >> 8) & 0xff) {
+- case DESCTYPE_LDT:
+- case DESCTYPE_TSS:
++ switch (type) {
++ case DESC_LDT:
++ case DESC_TSS:
+ /* ignore */
+ break;
+
+ default: {
+ xmaddr_t maddr = virt_to_machine(&dt[entry]);
+- u64 desc = (u64)high << 32 | low;
+
+ xen_mc_flush();
+- if (HYPERVISOR_update_descriptor(maddr.maddr, desc))
++ if (HYPERVISOR_update_descriptor(maddr.maddr, *(u64 *)desc))
+ BUG();
}
-@@ -332,41 +566,42 @@ static void test_cipher(char *algo, int enc,
+@@ -499,11 +491,11 @@ static void xen_write_gdt_entry(struct desc_struct *dt, int entry,
+ preempt_enable();
+ }
- j = 0;
- for (i = 0; i < tcount; i++) {
-- if (cipher_tv[i].np) {
-+ memcpy(cipher_tv, &template[i], tsize);
-+ if (cipher_tv->np) {
- j++;
- printk("test %u (%d bit key):\n",
-- j, cipher_tv[i].klen * 8);
-+ j, cipher_tv->klen * 8);
+-static void xen_load_esp0(struct tss_struct *tss,
++static void xen_load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+ {
+ struct multicall_space mcs = xen_mc_entry(0);
+- MULTI_stack_switch(mcs.mc, __KERNEL_DS, thread->esp0);
++ MULTI_stack_switch(mcs.mc, __KERNEL_DS, thread->sp0);
+ xen_mc_issue(PARAVIRT_LAZY_CPU);
+ }
- crypto_ablkcipher_clear_flags(tfm, ~0);
-- if (cipher_tv[i].wk)
-+ if (cipher_tv->wk)
- crypto_ablkcipher_set_flags(
- tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-- key = cipher_tv[i].key;
-+ key = cipher_tv->key;
+@@ -521,12 +513,12 @@ static void xen_io_delay(void)
+ }
- ret = crypto_ablkcipher_setkey(tfm, key,
-- cipher_tv[i].klen);
-+ cipher_tv->klen);
- if (ret) {
- printk("setkey() failed flags=%x\n",
- crypto_ablkcipher_get_flags(tfm));
+ #ifdef CONFIG_X86_LOCAL_APIC
+-static unsigned long xen_apic_read(unsigned long reg)
++static u32 xen_apic_read(unsigned long reg)
+ {
+ return 0;
+ }
-- if (!cipher_tv[i].fail)
-+ if (!cipher_tv->fail)
- goto out;
- }
+-static void xen_apic_write(unsigned long reg, unsigned long val)
++static void xen_apic_write(unsigned long reg, u32 val)
+ {
+ /* Warn to see if there's any stray references */
+ WARN_ON(1);
+@@ -666,6 +658,13 @@ static __init void xen_alloc_pt_init(struct mm_struct *mm, u32 pfn)
+ make_lowmem_page_readonly(__va(PFN_PHYS(pfn)));
+ }
- temp = 0;
-- sg_init_table(sg, cipher_tv[i].np);
-- for (k = 0; k < cipher_tv[i].np; k++) {
-+ sg_init_table(sg, cipher_tv->np);
-+ for (k = 0; k < cipher_tv->np; k++) {
- memcpy(&xbuf[IDX[k]],
-- cipher_tv[i].input + temp,
-- cipher_tv[i].tap[k]);
-- temp += cipher_tv[i].tap[k];
-+ cipher_tv->input + temp,
-+ cipher_tv->tap[k]);
-+ temp += cipher_tv->tap[k];
- sg_set_buf(&sg[k], &xbuf[IDX[k]],
-- cipher_tv[i].tap[k]);
-+ cipher_tv->tap[k]);
- }
++/* Early release_pt assumes that all pts are pinned, since there's
++ only init_mm and anything attached to that is pinned. */
++static void xen_release_pt_init(u32 pfn)
++{
++ make_lowmem_page_readwrite(__va(PFN_PHYS(pfn)));
++}
++
+ static void pin_pagetable_pfn(unsigned level, unsigned long pfn)
+ {
+ struct mmuext_op op;
+@@ -677,7 +676,7 @@ static void pin_pagetable_pfn(unsigned level, unsigned long pfn)
- ablkcipher_request_set_crypt(req, sg, sg,
-- cipher_tv[i].ilen,
-- cipher_tv[i].iv);
-+ cipher_tv->ilen,
-+ cipher_tv->iv);
+ /* This needs to make sure the new pte page is pinned iff its being
+ attached to a pinned pagetable. */
+-static void xen_alloc_pt(struct mm_struct *mm, u32 pfn)
++static void xen_alloc_ptpage(struct mm_struct *mm, u32 pfn, unsigned level)
+ {
+ struct page *page = pfn_to_page(pfn);
- ret = enc ?
- crypto_ablkcipher_encrypt(req) :
-@@ -390,15 +625,15 @@ static void test_cipher(char *algo, int enc,
- }
+@@ -686,7 +685,7 @@ static void xen_alloc_pt(struct mm_struct *mm, u32 pfn)
- temp = 0;
-- for (k = 0; k < cipher_tv[i].np; k++) {
-+ for (k = 0; k < cipher_tv->np; k++) {
- printk("page %u\n", k);
- q = kmap(sg_page(&sg[k])) + sg[k].offset;
-- hexdump(q, cipher_tv[i].tap[k]);
-+ hexdump(q, cipher_tv->tap[k]);
- printk("%s\n",
-- memcmp(q, cipher_tv[i].result + temp,
-- cipher_tv[i].tap[k]) ? "fail" :
-+ memcmp(q, cipher_tv->result + temp,
-+ cipher_tv->tap[k]) ? "fail" :
- "pass");
-- temp += cipher_tv[i].tap[k];
-+ temp += cipher_tv->tap[k];
+ if (!PageHighMem(page)) {
+ make_lowmem_page_readonly(__va(PFN_PHYS(pfn)));
+- pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
++ pin_pagetable_pfn(level, pfn);
+ } else
+ /* make sure there are no stray mappings of
+ this page */
+@@ -694,6 +693,16 @@ static void xen_alloc_pt(struct mm_struct *mm, u32 pfn)
+ }
+ }
+
++static void xen_alloc_pt(struct mm_struct *mm, u32 pfn)
++{
++ xen_alloc_ptpage(mm, pfn, MMUEXT_PIN_L1_TABLE);
++}
++
++static void xen_alloc_pd(struct mm_struct *mm, u32 pfn)
++{
++ xen_alloc_ptpage(mm, pfn, MMUEXT_PIN_L2_TABLE);
++}
++
+ /* This should never happen until we're OK to use struct page */
+ static void xen_release_pt(u32 pfn)
+ {
+@@ -796,6 +805,9 @@ static __init void xen_pagetable_setup_done(pgd_t *base)
+ /* This will work as long as patching hasn't happened yet
+ (which it hasn't) */
+ pv_mmu_ops.alloc_pt = xen_alloc_pt;
++ pv_mmu_ops.alloc_pd = xen_alloc_pd;
++ pv_mmu_ops.release_pt = xen_release_pt;
++ pv_mmu_ops.release_pd = xen_release_pt;
+ pv_mmu_ops.set_pte = xen_set_pte;
+
+ if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+@@ -953,7 +965,7 @@ static const struct pv_cpu_ops xen_cpu_ops __initdata = {
+ .read_pmc = native_read_pmc,
+
+ .iret = (void *)&hypercall_page[__HYPERVISOR_iret],
+- .irq_enable_sysexit = NULL, /* never called */
++ .irq_enable_syscall_ret = NULL, /* never called */
+
+ .load_tr_desc = paravirt_nop,
+ .set_ldt = xen_set_ldt,
+@@ -968,7 +980,7 @@ static const struct pv_cpu_ops xen_cpu_ops __initdata = {
+ .write_ldt_entry = xen_write_ldt_entry,
+ .write_gdt_entry = xen_write_gdt_entry,
+ .write_idt_entry = xen_write_idt_entry,
+- .load_esp0 = xen_load_esp0,
++ .load_sp0 = xen_load_sp0,
+
+ .set_iopl_mask = xen_set_iopl_mask,
+ .io_delay = xen_io_delay,
+@@ -1019,10 +1031,10 @@ static const struct pv_mmu_ops xen_mmu_ops __initdata = {
+ .pte_update_defer = paravirt_nop,
+
+ .alloc_pt = xen_alloc_pt_init,
+- .release_pt = xen_release_pt,
+- .alloc_pd = paravirt_nop,
++ .release_pt = xen_release_pt_init,
++ .alloc_pd = xen_alloc_pt_init,
+ .alloc_pd_clone = paravirt_nop,
+- .release_pd = paravirt_nop,
++ .release_pd = xen_release_pt_init,
+
+ #ifdef CONFIG_HIGHPTE
+ .kmap_atomic_pte = xen_kmap_atomic_pte,
+diff --git a/arch/x86/xen/events.c b/arch/x86/xen/events.c
+index 6d1da58..dcf613e 100644
+--- a/arch/x86/xen/events.c
++++ b/arch/x86/xen/events.c
+@@ -465,7 +465,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
+ * a bitset of words which contain pending event bits. The second
+ * level is a bitset of pending events themselves.
+ */
+-fastcall void xen_evtchn_do_upcall(struct pt_regs *regs)
++void xen_evtchn_do_upcall(struct pt_regs *regs)
+ {
+ int cpu = get_cpu();
+ struct shared_info *s = HYPERVISOR_shared_info;
+@@ -487,7 +487,7 @@ fastcall void xen_evtchn_do_upcall(struct pt_regs *regs)
+ int irq = evtchn_to_irq[port];
+
+ if (irq != -1) {
+- regs->orig_eax = ~irq;
++ regs->orig_ax = ~irq;
+ do_IRQ(regs);
}
}
- }
-@@ -800,7 +1035,8 @@ out:
- crypto_free_hash(tfm);
- }
+diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
+index 0ac6c5d..45aa771 100644
+--- a/arch/x86/xen/mmu.c
++++ b/arch/x86/xen/mmu.c
+@@ -58,7 +58,8 @@
--static void test_deflate(void)
-+static void test_comp(char *algo, struct comp_testvec *ctemplate,
-+ struct comp_testvec *dtemplate, int ctcount, int dtcount)
+ xmaddr_t arbitrary_virt_to_machine(unsigned long address)
{
- unsigned int i;
- char result[COMP_BUF_SIZE];
-@@ -808,25 +1044,26 @@ static void test_deflate(void)
- struct comp_testvec *tv;
- unsigned int tsize;
+- pte_t *pte = lookup_address(address);
++ int level;
++ pte_t *pte = lookup_address(address, &level);
+ unsigned offset = address & PAGE_MASK;
-- printk("\ntesting deflate compression\n");
-+ printk("\ntesting %s compression\n", algo);
+ BUG_ON(pte == NULL);
+@@ -70,8 +71,9 @@ void make_lowmem_page_readonly(void *vaddr)
+ {
+ pte_t *pte, ptev;
+ unsigned long address = (unsigned long)vaddr;
++ int level;
-- tsize = sizeof (deflate_comp_tv_template);
-+ tsize = sizeof(struct comp_testvec);
-+ tsize *= ctcount;
- if (tsize > TVMEMSIZE) {
- printk("template (%u) too big for tvmem (%u)\n", tsize,
- TVMEMSIZE);
- return;
- }
+- pte = lookup_address(address);
++ pte = lookup_address(address, &level);
+ BUG_ON(pte == NULL);
-- memcpy(tvmem, deflate_comp_tv_template, tsize);
-+ memcpy(tvmem, ctemplate, tsize);
- tv = (void *)tvmem;
+ ptev = pte_wrprotect(*pte);
+@@ -84,8 +86,9 @@ void make_lowmem_page_readwrite(void *vaddr)
+ {
+ pte_t *pte, ptev;
+ unsigned long address = (unsigned long)vaddr;
++ int level;
-- tfm = crypto_alloc_comp("deflate", 0, CRYPTO_ALG_ASYNC);
-+ tfm = crypto_alloc_comp(algo, 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR(tfm)) {
-- printk("failed to load transform for deflate\n");
-+ printk("failed to load transform for %s\n", algo);
- return;
- }
+- pte = lookup_address(address);
++ pte = lookup_address(address, &level);
+ BUG_ON(pte == NULL);
-- for (i = 0; i < DEFLATE_COMP_TEST_VECTORS; i++) {
-+ for (i = 0; i < ctcount; i++) {
- int ilen, ret, dlen = COMP_BUF_SIZE;
+ ptev = pte_mkwrite(*pte);
+@@ -241,12 +244,12 @@ unsigned long long xen_pgd_val(pgd_t pgd)
- printk("test %u:\n", i + 1);
-@@ -845,19 +1082,20 @@ static void test_deflate(void)
- ilen, dlen);
- }
+ pte_t xen_make_pte(unsigned long long pte)
+ {
+- if (pte & 1)
++ if (pte & _PAGE_PRESENT) {
+ pte = phys_to_machine(XPADDR(pte)).maddr;
++ pte &= ~(_PAGE_PCD | _PAGE_PWT);
++ }
-- printk("\ntesting deflate decompression\n");
-+ printk("\ntesting %s decompression\n", algo);
+- pte &= ~_PAGE_PCD;
+-
+- return (pte_t){ pte, pte >> 32 };
++ return (pte_t){ .pte = pte };
+ }
-- tsize = sizeof (deflate_decomp_tv_template);
-+ tsize = sizeof(struct comp_testvec);
-+ tsize *= dtcount;
- if (tsize > TVMEMSIZE) {
- printk("template (%u) too big for tvmem (%u)\n", tsize,
- TVMEMSIZE);
- goto out;
- }
+ pmd_t xen_make_pmd(unsigned long long pmd)
+@@ -290,10 +293,10 @@ unsigned long xen_pgd_val(pgd_t pgd)
-- memcpy(tvmem, deflate_decomp_tv_template, tsize);
-+ memcpy(tvmem, dtemplate, tsize);
- tv = (void *)tvmem;
+ pte_t xen_make_pte(unsigned long pte)
+ {
+- if (pte & _PAGE_PRESENT)
++ if (pte & _PAGE_PRESENT) {
+ pte = phys_to_machine(XPADDR(pte)).maddr;
+-
+- pte &= ~_PAGE_PCD;
++ pte &= ~(_PAGE_PCD | _PAGE_PWT);
++ }
-- for (i = 0; i < DEFLATE_DECOMP_TEST_VECTORS; i++) {
-+ for (i = 0; i < dtcount; i++) {
- int ilen, ret, dlen = COMP_BUF_SIZE;
+ return (pte_t){ pte };
+ }
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index f84e772..3bad477 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -10,6 +10,7 @@
+ #include <linux/pm.h>
- printk("test %u:\n", i + 1);
-@@ -918,6 +1156,8 @@ static void do_test(void)
+ #include <asm/elf.h>
++#include <asm/vdso.h>
+ #include <asm/e820.h>
+ #include <asm/setup.h>
+ #include <asm/xen/hypervisor.h>
+@@ -59,12 +60,10 @@ static void xen_idle(void)
+ /*
+ * Set the bit indicating "nosegneg" library variants should be used.
+ */
+-static void fiddle_vdso(void)
++static void __init fiddle_vdso(void)
+ {
+- extern u32 VDSO_NOTE_MASK; /* See ../kernel/vsyscall-note.S. */
+- extern char vsyscall_int80_start;
+- u32 *mask = (u32 *) ((unsigned long) &VDSO_NOTE_MASK - VDSO_PRELINK +
+- &vsyscall_int80_start);
++ extern const char vdso32_default_start;
++ u32 *mask = VDSO32_SYMBOL(&vdso32_default_start, NOTE_MASK);
+ *mask |= 1 << VDSO_NOTE_NONEGSEG_BIT;
+ }
+
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index c1b131b..aafc544 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -146,7 +146,7 @@ void __init xen_smp_prepare_boot_cpu(void)
+ old memory can be recycled */
+ make_lowmem_page_readwrite(&per_cpu__gdt_page);
- test_hash("md4", md4_tv_template, MD4_TEST_VECTORS);
+- for (cpu = 0; cpu < NR_CPUS; cpu++) {
++ for_each_possible_cpu(cpu) {
+ cpus_clear(per_cpu(cpu_sibling_map, cpu));
+ /*
+ * cpu_core_map lives in a per cpu area that is cleared
+@@ -163,7 +163,7 @@ void __init xen_smp_prepare_cpus(unsigned int max_cpus)
+ {
+ unsigned cpu;
-+ test_hash("sha224", sha224_tv_template, SHA224_TEST_VECTORS);
+- for (cpu = 0; cpu < NR_CPUS; cpu++) {
++ for_each_possible_cpu(cpu) {
+ cpus_clear(per_cpu(cpu_sibling_map, cpu));
+ /*
+ * cpu_core_ map will be zeroed when the per
+@@ -239,10 +239,10 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
+ ctxt->gdt_ents = ARRAY_SIZE(gdt->gdt);
+
+ ctxt->user_regs.cs = __KERNEL_CS;
+- ctxt->user_regs.esp = idle->thread.esp0 - sizeof(struct pt_regs);
++ ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
+
+ ctxt->kernel_ss = __KERNEL_DS;
+- ctxt->kernel_sp = idle->thread.esp0;
++ ctxt->kernel_sp = idle->thread.sp0;
+
+ ctxt->event_callback_cs = __KERNEL_CS;
+ ctxt->event_callback_eip = (unsigned long)xen_hypervisor_callback;
+diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
+index d083ff5..b3721fd 100644
+--- a/arch/x86/xen/time.c
++++ b/arch/x86/xen/time.c
+@@ -592,7 +592,7 @@ __init void xen_time_init(void)
+ set_normalized_timespec(&wall_to_monotonic,
+ -xtime.tv_sec, -xtime.tv_nsec);
+
+- tsc_disable = 0;
++ setup_force_cpu_cap(X86_FEATURE_TSC);
+
+ xen_setup_timer(cpu);
+ xen_setup_cpu_clockevents();
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index f8d6937..288d587 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -4,16 +4,18 @@
+ #ifdef CONFIG_XEN
+
+ #include <linux/elfnote.h>
++#include <linux/init.h>
+ #include <asm/boot.h>
+ #include <xen/interface/elfnote.h>
+
+-.pushsection .init.text
++ __INIT
+ ENTRY(startup_xen)
+ movl %esi,xen_start_info
+ cld
+ movl $(init_thread_union+THREAD_SIZE),%esp
+ jmp xen_start_kernel
+-.popsection
+
- test_hash("sha256", sha256_tv_template, SHA256_TEST_VECTORS);
++ __FINIT
- //BLOWFISH
-@@ -969,6 +1209,18 @@ static void do_test(void)
- AES_XTS_ENC_TEST_VECTORS);
- test_cipher("xts(aes)", DECRYPT, aes_xts_dec_tv_template,
- AES_XTS_DEC_TEST_VECTORS);
-+ test_cipher("rfc3686(ctr(aes))", ENCRYPT, aes_ctr_enc_tv_template,
-+ AES_CTR_ENC_TEST_VECTORS);
-+ test_cipher("rfc3686(ctr(aes))", DECRYPT, aes_ctr_dec_tv_template,
-+ AES_CTR_DEC_TEST_VECTORS);
-+ test_aead("gcm(aes)", ENCRYPT, aes_gcm_enc_tv_template,
-+ AES_GCM_ENC_TEST_VECTORS);
-+ test_aead("gcm(aes)", DECRYPT, aes_gcm_dec_tv_template,
-+ AES_GCM_DEC_TEST_VECTORS);
-+ test_aead("ccm(aes)", ENCRYPT, aes_ccm_enc_tv_template,
-+ AES_CCM_ENC_TEST_VECTORS);
-+ test_aead("ccm(aes)", DECRYPT, aes_ccm_dec_tv_template,
-+ AES_CCM_DEC_TEST_VECTORS);
+ .pushsection .bss.page_aligned
+ .align PAGE_SIZE_asm
+diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
+index ac4ed52..7d0f55a 100644
+--- a/arch/xtensa/kernel/vmlinux.lds.S
++++ b/arch/xtensa/kernel/vmlinux.lds.S
+@@ -136,13 +136,13 @@ SECTIONS
+ __init_begin = .;
+ .init.text : {
+ _sinittext = .;
+- *(.init.literal) *(.init.text)
++ *(.init.literal) INIT_TEXT
+ _einittext = .;
+ }
- //CAST5
- test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
-@@ -1057,12 +1309,18 @@ static void do_test(void)
- test_hash("tgr192", tgr192_tv_template, TGR192_TEST_VECTORS);
- test_hash("tgr160", tgr160_tv_template, TGR160_TEST_VECTORS);
- test_hash("tgr128", tgr128_tv_template, TGR128_TEST_VECTORS);
-- test_deflate();
-+ test_comp("deflate", deflate_comp_tv_template,
-+ deflate_decomp_tv_template, DEFLATE_COMP_TEST_VECTORS,
-+ DEFLATE_DECOMP_TEST_VECTORS);
-+ test_comp("lzo", lzo_comp_tv_template, lzo_decomp_tv_template,
-+ LZO_COMP_TEST_VECTORS, LZO_DECOMP_TEST_VECTORS);
- test_hash("crc32c", crc32c_tv_template, CRC32C_TEST_VECTORS);
- test_hash("hmac(md5)", hmac_md5_tv_template,
- HMAC_MD5_TEST_VECTORS);
- test_hash("hmac(sha1)", hmac_sha1_tv_template,
- HMAC_SHA1_TEST_VECTORS);
-+ test_hash("hmac(sha224)", hmac_sha224_tv_template,
-+ HMAC_SHA224_TEST_VECTORS);
- test_hash("hmac(sha256)", hmac_sha256_tv_template,
- HMAC_SHA256_TEST_VECTORS);
- test_hash("hmac(sha384)", hmac_sha384_tv_template,
-@@ -1156,6 +1414,10 @@ static void do_test(void)
- AES_XTS_ENC_TEST_VECTORS);
- test_cipher("xts(aes)", DECRYPT, aes_xts_dec_tv_template,
- AES_XTS_DEC_TEST_VECTORS);
-+ test_cipher("rfc3686(ctr(aes))", ENCRYPT, aes_ctr_enc_tv_template,
-+ AES_CTR_ENC_TEST_VECTORS);
-+ test_cipher("rfc3686(ctr(aes))", DECRYPT, aes_ctr_dec_tv_template,
-+ AES_CTR_DEC_TEST_VECTORS);
- break;
+ .init.data :
+ {
+- *(.init.data)
++ INIT_DATA
+ . = ALIGN(0x4);
+ __tagtable_begin = .;
+ *(.taglist)
+@@ -278,8 +278,9 @@ SECTIONS
+ /* Sections to be discarded */
+ /DISCARD/ :
+ {
+- *(.exit.literal .exit.text)
+- *(.exit.data)
++ *(.exit.literal)
++ EXIT_TEXT
++ EXIT_DATA
+ *(.exitcall.exit)
+ }
- case 11:
-@@ -1167,7 +1429,9 @@ static void do_test(void)
- break;
+diff --git a/arch/xtensa/mm/Makefile b/arch/xtensa/mm/Makefile
+index 10aec22..64e304a 100644
+--- a/arch/xtensa/mm/Makefile
++++ b/arch/xtensa/mm/Makefile
+@@ -1,9 +1,5 @@
+ #
+ # Makefile for the Linux/Xtensa-specific parts of the memory manager.
+ #
+-# Note! Dependencies are done automagically by 'make dep', which also
+-# removes any old dependencies. DON'T put your own dependencies here
+-# unless it's something special (ie not a .c file).
+-#
- case 13:
-- test_deflate();
-+ test_comp("deflate", deflate_comp_tv_template,
-+ deflate_decomp_tv_template, DEFLATE_COMP_TEST_VECTORS,
-+ DEFLATE_DECOMP_TEST_VECTORS);
- break;
+ obj-y := init.o fault.o tlb.o misc.o cache.o
+diff --git a/arch/xtensa/platform-iss/Makefile b/arch/xtensa/platform-iss/Makefile
+index 5b394e9..af96e31 100644
+--- a/arch/xtensa/platform-iss/Makefile
++++ b/arch/xtensa/platform-iss/Makefile
+@@ -3,11 +3,6 @@
+ # Makefile for the Xtensa Instruction Set Simulator (ISS)
+ # "prom monitor" library routines under Linux.
+ #
+-# Note! Dependencies are done automagically by 'make dep', which also
+-# removes any old dependencies. DON'T put your own dependencies here
+-# unless it's something special (ie not a .c file).
+-#
+-# Note 2! The CFLAGS definitions are in the main makefile...
- case 14:
-@@ -1291,6 +1555,34 @@ static void do_test(void)
- camellia_cbc_dec_tv_template,
- CAMELLIA_CBC_DEC_TEST_VECTORS);
- break;
-+ case 33:
-+ test_hash("sha224", sha224_tv_template, SHA224_TEST_VECTORS);
-+ break;
-+
-+ case 34:
-+ test_cipher("salsa20", ENCRYPT,
-+ salsa20_stream_enc_tv_template,
-+ SALSA20_STREAM_ENC_TEST_VECTORS);
-+ break;
-+
-+ case 35:
-+ test_aead("gcm(aes)", ENCRYPT, aes_gcm_enc_tv_template,
-+ AES_GCM_ENC_TEST_VECTORS);
-+ test_aead("gcm(aes)", DECRYPT, aes_gcm_dec_tv_template,
-+ AES_GCM_DEC_TEST_VECTORS);
-+ break;
-+
-+ case 36:
-+ test_comp("lzo", lzo_comp_tv_template, lzo_decomp_tv_template,
-+ LZO_COMP_TEST_VECTORS, LZO_DECOMP_TEST_VECTORS);
-+ break;
-+
-+ case 37:
-+ test_aead("ccm(aes)", ENCRYPT, aes_ccm_enc_tv_template,
-+ AES_CCM_ENC_TEST_VECTORS);
-+ test_aead("ccm(aes)", DECRYPT, aes_ccm_dec_tv_template,
-+ AES_CCM_DEC_TEST_VECTORS);
-+ break;
+ obj-y = io.o console.o setup.o network.o
- case 100:
- test_hash("hmac(md5)", hmac_md5_tv_template,
-@@ -1317,6 +1609,15 @@ static void do_test(void)
- HMAC_SHA512_TEST_VECTORS);
- break;
+diff --git a/block/Makefile b/block/Makefile
+index 8261081..5a43c7d 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -2,7 +2,9 @@
+ # Makefile for the kernel block layer
+ #
-+ case 105:
-+ test_hash("hmac(sha224)", hmac_sha224_tv_template,
-+ HMAC_SHA224_TEST_VECTORS);
-+ break;
-+
-+ case 106:
-+ test_hash("xcbc(aes)", aes_xcbc128_tv_template,
-+ XCBC_AES_TEST_VECTORS);
-+ break;
+-obj-$(CONFIG_BLOCK) := elevator.o ll_rw_blk.o ioctl.o genhd.o scsi_ioctl.o
++obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
++ blk-barrier.o blk-settings.o blk-ioc.o blk-map.o \
++ blk-exec.o blk-merge.o ioctl.o genhd.o scsi_ioctl.o
- case 200:
- test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
-@@ -1400,6 +1701,11 @@ static void do_test(void)
- camellia_speed_template);
- break;
+ obj-$(CONFIG_BLK_DEV_BSG) += bsg.o
+ obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
+diff --git a/block/as-iosched.c b/block/as-iosched.c
+index cb5e53b..9603684 100644
+--- a/block/as-iosched.c
++++ b/block/as-iosched.c
+@@ -170,9 +170,11 @@ static void free_as_io_context(struct as_io_context *aic)
-+ case 206:
-+ test_cipher_speed("salsa20", ENCRYPT, sec, NULL, 0,
-+ salsa20_speed_template);
-+ break;
-+
- case 300:
- /* fall through */
+ static void as_trim(struct io_context *ioc)
+ {
++ spin_lock(&ioc->lock);
+ if (ioc->aic)
+ free_as_io_context(ioc->aic);
+ ioc->aic = NULL;
++ spin_unlock(&ioc->lock);
+ }
-@@ -1451,6 +1757,10 @@ static void do_test(void)
- test_hash_speed("tgr192", sec, generic_hash_speed_template);
- if (mode > 300 && mode < 400) break;
+ /* Called when the task exits */
+@@ -462,7 +464,9 @@ static void as_antic_timeout(unsigned long data)
+ spin_lock_irqsave(q->queue_lock, flags);
+ if (ad->antic_status == ANTIC_WAIT_REQ
+ || ad->antic_status == ANTIC_WAIT_NEXT) {
+- struct as_io_context *aic = ad->io_context->aic;
++ struct as_io_context *aic;
++ spin_lock(&ad->io_context->lock);
++ aic = ad->io_context->aic;
-+ case 313:
-+ test_hash_speed("sha224", sec, generic_hash_speed_template);
-+ if (mode > 300 && mode < 400) break;
-+
- case 399:
- break;
+ ad->antic_status = ANTIC_FINISHED;
+ kblockd_schedule_work(&ad->antic_work);
+@@ -475,6 +479,7 @@ static void as_antic_timeout(unsigned long data)
+ /* process not "saved" by a cooperating request */
+ ad->exit_no_coop = (7*ad->exit_no_coop + 256)/8;
+ }
++ spin_unlock(&ad->io_context->lock);
+ }
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ }
+@@ -635,9 +640,11 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
-@@ -1467,20 +1777,21 @@ static void do_test(void)
+ ioc = ad->io_context;
+ BUG_ON(!ioc);
++ spin_lock(&ioc->lock);
- static int __init init(void)
- {
-+ int err = -ENOMEM;
-+
- tvmem = kmalloc(TVMEMSIZE, GFP_KERNEL);
- if (tvmem == NULL)
-- return -ENOMEM;
-+ return err;
+ if (rq && ioc == RQ_IOC(rq)) {
+ /* request from same process */
++ spin_unlock(&ioc->lock);
+ return 1;
+ }
- xbuf = kmalloc(XBUFSIZE, GFP_KERNEL);
-- if (xbuf == NULL) {
-- kfree(tvmem);
-- return -ENOMEM;
-- }
-+ if (xbuf == NULL)
-+ goto err_free_tv;
+@@ -646,20 +653,25 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
+ * In this situation status should really be FINISHED,
+ * however the timer hasn't had the chance to run yet.
+ */
++ spin_unlock(&ioc->lock);
+ return 1;
+ }
-- do_test();
-+ axbuf = kmalloc(XBUFSIZE, GFP_KERNEL);
-+ if (axbuf == NULL)
-+ goto err_free_xbuf;
+ aic = ioc->aic;
+- if (!aic)
++ if (!aic) {
++ spin_unlock(&ioc->lock);
+ return 0;
++ }
-- kfree(xbuf);
-- kfree(tvmem);
-+ do_test();
+ if (atomic_read(&aic->nr_queued) > 0) {
+ /* process has more requests queued */
++ spin_unlock(&ioc->lock);
+ return 1;
+ }
- /* We intentionaly return -EAGAIN to prevent keeping
- * the module. It does all its work from init()
-@@ -1488,7 +1799,15 @@ static int __init init(void)
- * => we don't need it in the memory, do we?
- * -- mludvig
- */
-- return -EAGAIN;
-+ err = -EAGAIN;
-+
-+ kfree(axbuf);
-+ err_free_xbuf:
-+ kfree(xbuf);
-+ err_free_tv:
-+ kfree(tvmem);
-+
-+ return err;
- }
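The tcrypt init() hunk above replaces duplicated kfree()/return pairs with the kernel's usual goto-based unwind: each allocation gets a cleanup label, and a failure jumps to the label that frees everything allocated so far. A minimal userspace sketch of the same structure (buffer names mirror the patch; sizes and the hypothetical init_sketch() wrapper are illustrative, not the kernel code):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Goto-based unwind as in the tcrypt init() hunk: on failure, jump
 * to the label that releases only what was already allocated; on
 * success, fall through and free everything before returning. */
static int init_sketch(void)
{
	int err = -ENOMEM;
	void *tvmem, *xbuf, *axbuf;

	tvmem = malloc(4096);
	if (tvmem == NULL)
		return err;

	xbuf = malloc(4096);
	if (xbuf == NULL)
		goto err_free_tv;

	axbuf = malloc(4096);
	if (axbuf == NULL)
		goto err_free_xbuf;

	/* ... the tests would run here ... */
	err = 0;		/* tcrypt itself returns -EAGAIN here */

	free(axbuf);
 err_free_xbuf:
	free(xbuf);
 err_free_tv:
	free(tvmem);

	return err;
}
```

The payoff is that adding a third buffer (axbuf) costs one label instead of another copy of every kfree() on every error path.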
+ if (atomic_read(&aic->nr_dispatched) > 0) {
+ /* process has more requests dispatched */
++ spin_unlock(&ioc->lock);
+ return 1;
+ }
- /*
-diff --git a/crypto/tcrypt.h b/crypto/tcrypt.h
-index ec86138..f785e56 100644
---- a/crypto/tcrypt.h
-+++ b/crypto/tcrypt.h
-@@ -6,12 +6,15 @@
- *
- * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
- * Copyright (c) 2002 Jean-Francois Dive <jef at linuxbe.org>
-+ * Copyright (c) 2007 Nokia Siemens Networks
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
-+ * 2007-11-13 Added GCM tests
-+ * 2007-11-13 Added AEAD support
- * 2006-12-07 Added SHA384 HMAC and SHA512 HMAC tests
- * 2004-08-09 Cipher speed tests by Reyk Floeter <reyk at vantronix.net>
- * 2003-09-14 Changes by Kartikey Mahendra Bhatt
-@@ -40,14 +43,32 @@ struct hash_testvec {
- struct cipher_testvec {
- char key[MAX_KEYLEN] __attribute__ ((__aligned__(4)));
- char iv[MAX_IVLEN];
-+ char input[4100];
-+ char result[4100];
-+ unsigned char tap[MAX_TAP];
-+ int np;
-+ unsigned char fail;
-+ unsigned char wk; /* weak key flag */
-+ unsigned char klen;
-+ unsigned short ilen;
-+ unsigned short rlen;
-+};
-+
-+struct aead_testvec {
-+ char key[MAX_KEYLEN] __attribute__ ((__aligned__(4)));
-+ char iv[MAX_IVLEN];
- char input[512];
-+ char assoc[512];
- char result[512];
- unsigned char tap[MAX_TAP];
-+ unsigned char atap[MAX_TAP];
- int np;
-+ int anp;
- unsigned char fail;
- unsigned char wk; /* weak key flag */
- unsigned char klen;
- unsigned short ilen;
-+ unsigned short alen;
- unsigned short rlen;
- };
+@@ -680,6 +692,7 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
+ }
-@@ -173,6 +194,33 @@ static struct hash_testvec sha1_tv_template[] = {
+ as_update_iohist(ad, aic, rq);
++ spin_unlock(&ioc->lock);
+ return 1;
}
- };
-+
+@@ -688,20 +701,27 @@ static int as_can_break_anticipation(struct as_data *ad, struct request *rq)
+ if (aic->ttime_samples == 0)
+ ad->exit_prob = (7*ad->exit_prob + 256)/8;
+
+- if (ad->exit_no_coop > 128)
++ if (ad->exit_no_coop > 128) {
++ spin_unlock(&ioc->lock);
+ return 1;
++ }
+ }
+
+ if (aic->ttime_samples == 0) {
+- if (ad->new_ttime_mean > ad->antic_expire)
++ if (ad->new_ttime_mean > ad->antic_expire) {
++ spin_unlock(&ioc->lock);
+ return 1;
+- if (ad->exit_prob * ad->exit_no_coop > 128*256)
++ }
++ if (ad->exit_prob * ad->exit_no_coop > 128*256) {
++ spin_unlock(&ioc->lock);
+ return 1;
++ }
+ } else if (aic->ttime_mean > ad->antic_expire) {
+ /* the process thinks too much between requests */
++ spin_unlock(&ioc->lock);
+ return 1;
+ }
+-
++ spin_unlock(&ioc->lock);
+ return 0;
+ }
+
+@@ -1255,7 +1275,13 @@ static void as_merged_requests(struct request_queue *q, struct request *req,
+ * Don't copy here but swap, because when anext is
+ * removed below, it must contain the unused context
+ */
+- swap_io_context(&rioc, &nioc);
++ if (rioc != nioc) {
++ double_spin_lock(&rioc->lock, &nioc->lock,
++ rioc < nioc);
++ swap_io_context(&rioc, &nioc);
++ double_spin_unlock(&rioc->lock, &nioc->lock,
++ rioc < nioc);
++ }
+ }
+ }
+
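The as_merged_requests() hunk just above avoids an AB-BA deadlock when swapping two io_contexts by taking both locks in a fixed order: double_spin_lock(&rioc->lock, &nioc->lock, rioc < nioc) uses the pointer comparison to pick the order, so two CPUs locking the same pair in opposite argument order still acquire them identically. The same pattern outside the kernel, sketched with pthreads (lock_pair/unlock_pair are hypothetical helper names, not kernel APIs):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in address order so that concurrent callers
 * passing the same pair in either argument order cannot deadlock.
 * Mirrors the double_spin_lock(..., rioc < nioc) idiom in the hunk. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a == b) {		/* same context: one lock suffices */
		pthread_mutex_lock(a);
		return;
	}
	if ((uintptr_t)a < (uintptr_t)b) {
		pthread_mutex_lock(a);
		pthread_mutex_lock(b);
	} else {
		pthread_mutex_lock(b);
		pthread_mutex_lock(a);
	}
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_unlock(a);
	if (a != b)
		pthread_mutex_unlock(b);
}
```

Note the a == b check plays the role of the `if (rioc != nioc)` guard added by the patch: swapping a context with itself needs no second lock.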
+diff --git a/block/blk-barrier.c b/block/blk-barrier.c
+new file mode 100644
+index 0000000..5f74fec
+--- /dev/null
++++ b/block/blk-barrier.c
+@@ -0,0 +1,319 @@
+/*
-+ * SHA224 test vectors from from FIPS PUB 180-2
++ * Functions related to barrier IO handling
+ */
-+#define SHA224_TEST_VECTORS 2
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
+
-+static struct hash_testvec sha224_tv_template[] = {
-+ {
-+ .plaintext = "abc",
-+ .psize = 3,
-+ .digest = { 0x23, 0x09, 0x7D, 0x22, 0x34, 0x05, 0xD8, 0x22,
-+ 0x86, 0x42, 0xA4, 0x77, 0xBD, 0xA2, 0x55, 0xB3,
-+ 0x2A, 0xAD, 0xBC, 0xE4, 0xBD, 0xA0, 0xB3, 0xF7,
-+ 0xE3, 0x6C, 0x9D, 0xA7},
-+ }, {
-+ .plaintext =
-+ "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
-+ .psize = 56,
-+ .digest = { 0x75, 0x38, 0x8B, 0x16, 0x51, 0x27, 0x76, 0xCC,
-+ 0x5D, 0xBA, 0x5D, 0xA1, 0xFD, 0x89, 0x01, 0x50,
-+ 0xB0, 0xC6, 0x45, 0x5C, 0xB4, 0xF5, 0x8B, 0x19,
-+ 0x52, 0x52, 0x25, 0x25 },
-+ .np = 2,
-+ .tap = { 28, 28 }
++#include "blk.h"
++
++/**
++ * blk_queue_ordered - does this queue support ordered writes
++ * @q: the request queue
++ * @ordered: one of QUEUE_ORDERED_*
++ * @prepare_flush_fn: rq setup helper for cache flush ordered writes
++ *
++ * Description:
++ * For journalled file systems, doing ordered writes on a commit
++ * block instead of explicitly doing wait_on_buffer (which is bad
++ * for performance) can be a big win. Block drivers supporting this
++ * feature should call this function and indicate so.
++ *
++ **/
++int blk_queue_ordered(struct request_queue *q, unsigned ordered,
++ prepare_flush_fn *prepare_flush_fn)
++{
++ if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) &&
++ prepare_flush_fn == NULL) {
++ printk(KERN_ERR "blk_queue_ordered: prepare_flush_fn required\n");
++ return -EINVAL;
+ }
-+};
+
- /*
- * SHA256 test vectors from from NIST
- */
-@@ -817,6 +865,121 @@ static struct hash_testvec hmac_sha1_tv_template[] = {
- },
- };
-
++ if (ordered != QUEUE_ORDERED_NONE &&
++ ordered != QUEUE_ORDERED_DRAIN &&
++ ordered != QUEUE_ORDERED_DRAIN_FLUSH &&
++ ordered != QUEUE_ORDERED_DRAIN_FUA &&
++ ordered != QUEUE_ORDERED_TAG &&
++ ordered != QUEUE_ORDERED_TAG_FLUSH &&
++ ordered != QUEUE_ORDERED_TAG_FUA) {
++ printk(KERN_ERR "blk_queue_ordered: bad value %d\n", ordered);
++ return -EINVAL;
++ }
++
++ q->ordered = ordered;
++ q->next_ordered = ordered;
++ q->prepare_flush_fn = prepare_flush_fn;
++
++ return 0;
++}
++
++EXPORT_SYMBOL(blk_queue_ordered);
+
+/*
-+ * SHA224 HMAC test vectors from RFC4231
++ * Cache flushing for ordered writes handling
+ */
-+#define HMAC_SHA224_TEST_VECTORS 4
++inline unsigned blk_ordered_cur_seq(struct request_queue *q)
++{
++ if (!q->ordseq)
++ return 0;
++ return 1 << ffz(q->ordseq);
++}
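blk_ordered_cur_seq() above treats q->ordseq as a bitmask of completed ordered-write stages and returns the single bit for the current stage: ffz() finds the first (least significant) zero bit, i.e. the lowest stage not yet done. A portable sketch of that bit trick (ffz_sketch/cur_seq are illustrative stand-ins for the kernel's ffz() and the function in the hunk):

```c
#include <assert.h>

/* ffz(x): index of the least significant zero bit in x. */
static unsigned ffz_sketch(unsigned long x)
{
	unsigned i = 0;

	while (x & 1) {
		x >>= 1;
		i++;
	}
	return i;
}

/* 1 << ffz(mask) turns a mask of completed stages into the bit for
 * the next pending stage; 0 means no ordered sequence in progress. */
static unsigned long cur_seq(unsigned long ordseq)
{
	if (!ordseq)
		return 0;
	return 1UL << ffz_sketch(ordseq);
}
```

So with stages completing from the low bit upward, a mask of 0x3 (two stages done) yields 0x4 as the current stage.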
+
-+static struct hash_testvec hmac_sha224_tv_template[] = {
-+ {
-+ .key = { 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
-+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
-+ 0x0b, 0x0b, 0x0b, 0x0b },
-+ .ksize = 20,
-+ /* ("Hi There") */
-+ .plaintext = { 0x48, 0x69, 0x20, 0x54, 0x68, 0x65, 0x72, 0x65 },
-+ .psize = 8,
-+ .digest = { 0x89, 0x6f, 0xb1, 0x12, 0x8a, 0xbb, 0xdf, 0x19,
-+ 0x68, 0x32, 0x10, 0x7c, 0xd4, 0x9d, 0xf3, 0x3f,
-+ 0x47, 0xb4, 0xb1, 0x16, 0x99, 0x12, 0xba, 0x4f,
-+ 0x53, 0x68, 0x4b, 0x22},
-+ }, {
-+ .key = { 0x4a, 0x65, 0x66, 0x65 }, /* ("Jefe") */
-+ .ksize = 4,
-+ /* ("what do ya want for nothing?") */
-+ .plaintext = { 0x77, 0x68, 0x61, 0x74, 0x20, 0x64, 0x6f, 0x20,
-+ 0x79, 0x61, 0x20, 0x77, 0x61, 0x6e, 0x74, 0x20,
-+ 0x66, 0x6f, 0x72, 0x20, 0x6e, 0x6f, 0x74, 0x68,
-+ 0x69, 0x6e, 0x67, 0x3f },
-+ .psize = 28,
-+ .digest = { 0xa3, 0x0e, 0x01, 0x09, 0x8b, 0xc6, 0xdb, 0xbf,
-+ 0x45, 0x69, 0x0f, 0x3a, 0x7e, 0x9e, 0x6d, 0x0f,
-+ 0x8b, 0xbe, 0xa2, 0xa3, 0x9e, 0x61, 0x48, 0x00,
-+ 0x8f, 0xd0, 0x5e, 0x44 },
-+ .np = 4,
-+ .tap = { 7, 7, 7, 7 }
-+ }, {
-+ .key = { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa },
-+ .ksize = 131,
-+ /* ("Test Using Larger Than Block-Size Key - Hash Key First") */
-+ .plaintext = { 0x54, 0x65, 0x73, 0x74, 0x20, 0x55, 0x73, 0x69,
-+ 0x6e, 0x67, 0x20, 0x4c, 0x61, 0x72, 0x67, 0x65,
-+ 0x72, 0x20, 0x54, 0x68, 0x61, 0x6e, 0x20, 0x42,
-+ 0x6c, 0x6f, 0x63, 0x6b, 0x2d, 0x53, 0x69, 0x7a,
-+ 0x65, 0x20, 0x4b, 0x65, 0x79, 0x20, 0x2d, 0x20,
-+ 0x48, 0x61, 0x73, 0x68, 0x20, 0x4b, 0x65, 0x79,
-+ 0x20, 0x46, 0x69, 0x72, 0x73, 0x74 },
-+ .psize = 54,
-+ .digest = { 0x95, 0xe9, 0xa0, 0xdb, 0x96, 0x20, 0x95, 0xad,
-+ 0xae, 0xbe, 0x9b, 0x2d, 0x6f, 0x0d, 0xbc, 0xe2,
-+ 0xd4, 0x99, 0xf1, 0x12, 0xf2, 0xd2, 0xb7, 0x27,
-+ 0x3f, 0xa6, 0x87, 0x0e },
-+ }, {
-+ .key = { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
-+ 0xaa, 0xaa, 0xaa },
-+ .ksize = 131,
-+ /* ("This is a test using a larger than block-size key and a")
-+ (" larger than block-size data. The key needs to be")
-+ (" hashed before being used by the HMAC algorithm.") */
-+ .plaintext = { 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20,
-+ 0x61, 0x20, 0x74, 0x65, 0x73, 0x74, 0x20, 0x75,
-+ 0x73, 0x69, 0x6e, 0x67, 0x20, 0x61, 0x20, 0x6c,
-+ 0x61, 0x72, 0x67, 0x65, 0x72, 0x20, 0x74, 0x68,
-+ 0x61, 0x6e, 0x20, 0x62, 0x6c, 0x6f, 0x63, 0x6b,
-+ 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x20, 0x6b, 0x65,
-+ 0x79, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x61, 0x20,
-+ 0x6c, 0x61, 0x72, 0x67, 0x65, 0x72, 0x20, 0x74,
-+ 0x68, 0x61, 0x6e, 0x20, 0x62, 0x6c, 0x6f, 0x63,
-+ 0x6b, 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x20, 0x64,
-+ 0x61, 0x74, 0x61, 0x2e, 0x20, 0x54, 0x68, 0x65,
-+ 0x20, 0x6b, 0x65, 0x79, 0x20, 0x6e, 0x65, 0x65,
-+ 0x64, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x62, 0x65,
-+ 0x20, 0x68, 0x61, 0x73, 0x68, 0x65, 0x64, 0x20,
-+ 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x20, 0x62,
-+ 0x65, 0x69, 0x6e, 0x67, 0x20, 0x75, 0x73, 0x65,
-+ 0x64, 0x20, 0x62, 0x79, 0x20, 0x74, 0x68, 0x65,
-+ 0x20, 0x48, 0x4d, 0x41, 0x43, 0x20, 0x61, 0x6c,
-+ 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x2e },
-+ .psize = 152,
-+ .digest = { 0x3a, 0x85, 0x41, 0x66, 0xac, 0x5d, 0x9f, 0x02,
-+ 0x3f, 0x54, 0xd5, 0x17, 0xd0, 0xb3, 0x9d, 0xbd,
-+ 0x94, 0x67, 0x70, 0xdb, 0x9c, 0x2b, 0x95, 0xc9,
-+ 0xf6, 0xf5, 0x65, 0xd1 },
-+ },
-+};
++unsigned blk_ordered_req_seq(struct request *rq)
++{
++ struct request_queue *q = rq->q;
+
- /*
- * HMAC-SHA256 test vectors from
- * draft-ietf-ipsec-ciph-sha-256-01.txt
-@@ -2140,12 +2303,18 @@ static struct cipher_testvec cast6_dec_tv_template[] = {
- */
- #define AES_ENC_TEST_VECTORS 3
- #define AES_DEC_TEST_VECTORS 3
--#define AES_CBC_ENC_TEST_VECTORS 2
--#define AES_CBC_DEC_TEST_VECTORS 2
-+#define AES_CBC_ENC_TEST_VECTORS 4
-+#define AES_CBC_DEC_TEST_VECTORS 4
- #define AES_LRW_ENC_TEST_VECTORS 8
- #define AES_LRW_DEC_TEST_VECTORS 8
- #define AES_XTS_ENC_TEST_VECTORS 4
- #define AES_XTS_DEC_TEST_VECTORS 4
-+#define AES_CTR_ENC_TEST_VECTORS 7
-+#define AES_CTR_DEC_TEST_VECTORS 6
-+#define AES_GCM_ENC_TEST_VECTORS 9
-+#define AES_GCM_DEC_TEST_VECTORS 8
-+#define AES_CCM_ENC_TEST_VECTORS 7
-+#define AES_CCM_DEC_TEST_VECTORS 7
-
- static struct cipher_testvec aes_enc_tv_template[] = {
- { /* From FIPS-197 */
-@@ -2249,6 +2418,57 @@ static struct cipher_testvec aes_cbc_enc_tv_template[] = {
- 0x75, 0x86, 0x60, 0x2d, 0x25, 0x3c, 0xff, 0xf9,
- 0x1b, 0x82, 0x66, 0xbe, 0xa6, 0xd6, 0x1a, 0xb1 },
- .rlen = 32,
-+ }, { /* From NIST SP800-38A */
-+ .key = { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52,
-+ 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5,
-+ 0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b },
-+ .klen = 24,
-+ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
-+ .input = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
-+ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
-+ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
-+ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
-+ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
-+ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
-+ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
-+ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
-+ .ilen = 64,
-+ .result = { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d,
-+ 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8,
-+ 0xb4, 0xd9, 0xad, 0xa9, 0xad, 0x7d, 0xed, 0xf4,
-+ 0xe5, 0xe7, 0x38, 0x76, 0x3f, 0x69, 0x14, 0x5a,
-+ 0x57, 0x1b, 0x24, 0x20, 0x12, 0xfb, 0x7a, 0xe0,
-+ 0x7f, 0xa9, 0xba, 0xac, 0x3d, 0xf1, 0x02, 0xe0,
-+ 0x08, 0xb0, 0xe2, 0x79, 0x88, 0x59, 0x88, 0x81,
-+ 0xd9, 0x20, 0xa9, 0xe6, 0x4f, 0x56, 0x15, 0xcd },
-+ .rlen = 64,
-+ }, {
-+ .key = { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe,
-+ 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81,
-+ 0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7,
-+ 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 },
-+ .klen = 32,
-+ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
-+ .input = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
-+ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
-+ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
-+ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
-+ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
-+ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
-+ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
-+ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
-+ .ilen = 64,
-+ .result = { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba,
-+ 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6,
-+ 0x9c, 0xfc, 0x4e, 0x96, 0x7e, 0xdb, 0x80, 0x8d,
-+ 0x67, 0x9f, 0x77, 0x7b, 0xc6, 0x70, 0x2c, 0x7d,
-+ 0x39, 0xf2, 0x33, 0x69, 0xa9, 0xd9, 0xba, 0xcf,
-+ 0xa5, 0x30, 0xe2, 0x63, 0x04, 0x23, 0x14, 0x61,
-+ 0xb2, 0xeb, 0x05, 0xe2, 0xc3, 0x9b, 0xe9, 0xfc,
-+ 0xda, 0x6c, 0x19, 0x07, 0x8c, 0x6a, 0x9d, 0x1b },
-+ .rlen = 64,
- },
- };
-
-@@ -2280,6 +2500,57 @@ static struct cipher_testvec aes_cbc_dec_tv_template[] = {
- 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
- 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
- .rlen = 32,
-+ }, { /* From NIST SP800-38A */
-+ .key = { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52,
-+ 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5,
-+ 0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b },
-+ .klen = 24,
-+ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
-+ .input = { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d,
-+ 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8,
-+ 0xb4, 0xd9, 0xad, 0xa9, 0xad, 0x7d, 0xed, 0xf4,
-+ 0xe5, 0xe7, 0x38, 0x76, 0x3f, 0x69, 0x14, 0x5a,
-+ 0x57, 0x1b, 0x24, 0x20, 0x12, 0xfb, 0x7a, 0xe0,
-+ 0x7f, 0xa9, 0xba, 0xac, 0x3d, 0xf1, 0x02, 0xe0,
-+ 0x08, 0xb0, 0xe2, 0x79, 0x88, 0x59, 0x88, 0x81,
-+ 0xd9, 0x20, 0xa9, 0xe6, 0x4f, 0x56, 0x15, 0xcd },
-+ .ilen = 64,
-+ .result = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
-+ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
-+ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
-+ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
-+ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
-+ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
-+ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
-+ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
-+ .rlen = 64,
-+ }, {
-+ .key = { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe,
-+ 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81,
-+ 0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7,
-+ 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 },
-+ .klen = 32,
-+ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
-+ .input = { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba,
-+ 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6,
-+ 0x9c, 0xfc, 0x4e, 0x96, 0x7e, 0xdb, 0x80, 0x8d,
-+ 0x67, 0x9f, 0x77, 0x7b, 0xc6, 0x70, 0x2c, 0x7d,
-+ 0x39, 0xf2, 0x33, 0x69, 0xa9, 0xd9, 0xba, 0xcf,
-+ 0xa5, 0x30, 0xe2, 0x63, 0x04, 0x23, 0x14, 0x61,
-+ 0xb2, 0xeb, 0x05, 0xe2, 0xc3, 0x9b, 0xe9, 0xfc,
-+ 0xda, 0x6c, 0x19, 0x07, 0x8c, 0x6a, 0x9d, 0x1b },
-+ .ilen = 64,
-+ .result = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
-+ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
-+ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
-+ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
-+ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
-+ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
-+ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
-+ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
-+ .rlen = 64,
- },
- };
-
-@@ -3180,6 +3451,1843 @@ static struct cipher_testvec aes_xts_dec_tv_template[] = {
- }
- };
-
++ BUG_ON(q->ordseq == 0);
+
-+static struct cipher_testvec aes_ctr_enc_tv_template[] = {
-+ { /* From RFC 3686 */
-+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
-+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
-+ 0x00, 0x00, 0x00, 0x30 },
-+ .klen = 20,
-+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-+ .input = { "Single block msg" },
-+ .ilen = 16,
-+ .result = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
-+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
-+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
-+ 0x00, 0x6c, 0xb6, 0xdb },
-+ .klen = 20,
-+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
-+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .ilen = 32,
-+ .result = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
-+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
-+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
-+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
-+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
-+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
-+ 0x00, 0x00, 0x00, 0x48 },
-+ .klen = 28,
-+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
-+ .input = { "Single block msg" },
-+ .ilen = 16,
-+ .result = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
-+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
-+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
-+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
-+ 0x00, 0x96, 0xb0, 0x3b },
-+ .klen = 28,
-+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
-+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .ilen = 32,
-+ .result = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
-+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
-+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
-+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
-+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
-+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
-+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
-+ 0x00, 0x00, 0x00, 0x60 },
-+ .klen = 36,
-+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
-+ .input = { "Single block msg" },
-+ .ilen = 16,
-+ .result = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
-+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
-+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
-+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
-+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
-+ 0x00, 0xfa, 0xac, 0x24 },
-+ .klen = 36,
-+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
-+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .ilen = 32,
-+ .result = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
-+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
-+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
-+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
-+ .rlen = 32,
-+ }, {
-+ // generated using Crypto++
-+ .key = {
-+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .klen = 32 + 4,
-+ .iv = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .input = {
-+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
-+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
-+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
-+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
-+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
-+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
-+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
-+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
-+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
-+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
-+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
-+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
-+ 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
-+ 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
-+ 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97,
-+ 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7,
-+ 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
-+ 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7,
-+ 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
-+ 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
-+ 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
-+ 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
-+ 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
-+ 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
-+ 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7,
-+ 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff,
-+ 0x00, 0x03, 0x06, 0x09, 0x0c, 0x0f, 0x12, 0x15,
-+ 0x18, 0x1b, 0x1e, 0x21, 0x24, 0x27, 0x2a, 0x2d,
-+ 0x30, 0x33, 0x36, 0x39, 0x3c, 0x3f, 0x42, 0x45,
-+ 0x48, 0x4b, 0x4e, 0x51, 0x54, 0x57, 0x5a, 0x5d,
-+ 0x60, 0x63, 0x66, 0x69, 0x6c, 0x6f, 0x72, 0x75,
-+ 0x78, 0x7b, 0x7e, 0x81, 0x84, 0x87, 0x8a, 0x8d,
-+ 0x90, 0x93, 0x96, 0x99, 0x9c, 0x9f, 0xa2, 0xa5,
-+ 0xa8, 0xab, 0xae, 0xb1, 0xb4, 0xb7, 0xba, 0xbd,
-+ 0xc0, 0xc3, 0xc6, 0xc9, 0xcc, 0xcf, 0xd2, 0xd5,
-+ 0xd8, 0xdb, 0xde, 0xe1, 0xe4, 0xe7, 0xea, 0xed,
-+ 0xf0, 0xf3, 0xf6, 0xf9, 0xfc, 0xff, 0x02, 0x05,
-+ 0x08, 0x0b, 0x0e, 0x11, 0x14, 0x17, 0x1a, 0x1d,
-+ 0x20, 0x23, 0x26, 0x29, 0x2c, 0x2f, 0x32, 0x35,
-+ 0x38, 0x3b, 0x3e, 0x41, 0x44, 0x47, 0x4a, 0x4d,
-+ 0x50, 0x53, 0x56, 0x59, 0x5c, 0x5f, 0x62, 0x65,
-+ 0x68, 0x6b, 0x6e, 0x71, 0x74, 0x77, 0x7a, 0x7d,
-+ 0x80, 0x83, 0x86, 0x89, 0x8c, 0x8f, 0x92, 0x95,
-+ 0x98, 0x9b, 0x9e, 0xa1, 0xa4, 0xa7, 0xaa, 0xad,
-+ 0xb0, 0xb3, 0xb6, 0xb9, 0xbc, 0xbf, 0xc2, 0xc5,
-+ 0xc8, 0xcb, 0xce, 0xd1, 0xd4, 0xd7, 0xda, 0xdd,
-+ 0xe0, 0xe3, 0xe6, 0xe9, 0xec, 0xef, 0xf2, 0xf5,
-+ 0xf8, 0xfb, 0xfe, 0x01, 0x04, 0x07, 0x0a, 0x0d,
-+ 0x10, 0x13, 0x16, 0x19, 0x1c, 0x1f, 0x22, 0x25,
-+ 0x28, 0x2b, 0x2e, 0x31, 0x34, 0x37, 0x3a, 0x3d,
-+ 0x40, 0x43, 0x46, 0x49, 0x4c, 0x4f, 0x52, 0x55,
-+ 0x58, 0x5b, 0x5e, 0x61, 0x64, 0x67, 0x6a, 0x6d,
-+ 0x70, 0x73, 0x76, 0x79, 0x7c, 0x7f, 0x82, 0x85,
-+ 0x88, 0x8b, 0x8e, 0x91, 0x94, 0x97, 0x9a, 0x9d,
-+ 0xa0, 0xa3, 0xa6, 0xa9, 0xac, 0xaf, 0xb2, 0xb5,
-+ 0xb8, 0xbb, 0xbe, 0xc1, 0xc4, 0xc7, 0xca, 0xcd,
-+ 0xd0, 0xd3, 0xd6, 0xd9, 0xdc, 0xdf, 0xe2, 0xe5,
-+ 0xe8, 0xeb, 0xee, 0xf1, 0xf4, 0xf7, 0xfa, 0xfd,
-+ 0x00, 0x05, 0x0a, 0x0f, 0x14, 0x19, 0x1e, 0x23,
-+ 0x28, 0x2d, 0x32, 0x37, 0x3c, 0x41, 0x46, 0x4b,
-+ 0x50, 0x55, 0x5a, 0x5f, 0x64, 0x69, 0x6e, 0x73,
-+ 0x78, 0x7d, 0x82, 0x87, 0x8c, 0x91, 0x96, 0x9b,
-+ 0xa0, 0xa5, 0xaa, 0xaf, 0xb4, 0xb9, 0xbe, 0xc3,
-+ 0xc8, 0xcd, 0xd2, 0xd7, 0xdc, 0xe1, 0xe6, 0xeb,
-+ 0xf0, 0xf5, 0xfa, 0xff, 0x04, 0x09, 0x0e, 0x13,
-+ 0x18, 0x1d, 0x22, 0x27, 0x2c, 0x31, 0x36, 0x3b,
-+ 0x40, 0x45, 0x4a, 0x4f, 0x54, 0x59, 0x5e, 0x63,
-+ 0x68, 0x6d, 0x72, 0x77, 0x7c, 0x81, 0x86, 0x8b,
-+ 0x90, 0x95, 0x9a, 0x9f, 0xa4, 0xa9, 0xae, 0xb3,
-+ 0xb8, 0xbd, 0xc2, 0xc7, 0xcc, 0xd1, 0xd6, 0xdb,
-+ 0xe0, 0xe5, 0xea, 0xef, 0xf4, 0xf9, 0xfe, 0x03,
-+ 0x08, 0x0d, 0x12, 0x17, 0x1c, 0x21, 0x26, 0x2b,
-+ 0x30, 0x35, 0x3a, 0x3f, 0x44, 0x49, 0x4e, 0x53,
-+ 0x58, 0x5d, 0x62, 0x67, 0x6c, 0x71, 0x76, 0x7b,
-+ 0x80, 0x85, 0x8a, 0x8f, 0x94, 0x99, 0x9e, 0xa3,
-+ 0xa8, 0xad, 0xb2, 0xb7, 0xbc, 0xc1, 0xc6, 0xcb,
-+ 0xd0, 0xd5, 0xda, 0xdf, 0xe4, 0xe9, 0xee, 0xf3,
-+ 0xf8, 0xfd, 0x02, 0x07, 0x0c, 0x11, 0x16, 0x1b,
-+ 0x20, 0x25, 0x2a, 0x2f, 0x34, 0x39, 0x3e, 0x43,
-+ 0x48, 0x4d, 0x52, 0x57, 0x5c, 0x61, 0x66, 0x6b,
-+ 0x70, 0x75, 0x7a, 0x7f, 0x84, 0x89, 0x8e, 0x93,
-+ 0x98, 0x9d, 0xa2, 0xa7, 0xac, 0xb1, 0xb6, 0xbb,
-+ 0xc0, 0xc5, 0xca, 0xcf, 0xd4, 0xd9, 0xde, 0xe3,
-+ 0xe8, 0xed, 0xf2, 0xf7, 0xfc, 0x01, 0x06, 0x0b,
-+ 0x10, 0x15, 0x1a, 0x1f, 0x24, 0x29, 0x2e, 0x33,
-+ 0x38, 0x3d, 0x42, 0x47, 0x4c, 0x51, 0x56, 0x5b,
-+ 0x60, 0x65, 0x6a, 0x6f, 0x74, 0x79, 0x7e, 0x83,
-+ 0x88, 0x8d, 0x92, 0x97, 0x9c, 0xa1, 0xa6, 0xab,
-+ 0xb0, 0xb5, 0xba, 0xbf, 0xc4, 0xc9, 0xce, 0xd3,
-+ 0xd8, 0xdd, 0xe2, 0xe7, 0xec, 0xf1, 0xf6, 0xfb,
-+ 0x00, 0x07, 0x0e, 0x15, 0x1c, 0x23, 0x2a, 0x31,
-+ 0x38, 0x3f, 0x46, 0x4d, 0x54, 0x5b, 0x62, 0x69,
-+ 0x70, 0x77, 0x7e, 0x85, 0x8c, 0x93, 0x9a, 0xa1,
-+ 0xa8, 0xaf, 0xb6, 0xbd, 0xc4, 0xcb, 0xd2, 0xd9,
-+ 0xe0, 0xe7, 0xee, 0xf5, 0xfc, 0x03, 0x0a, 0x11,
-+ 0x18, 0x1f, 0x26, 0x2d, 0x34, 0x3b, 0x42, 0x49,
-+ 0x50, 0x57, 0x5e, 0x65, 0x6c, 0x73, 0x7a, 0x81,
-+ 0x88, 0x8f, 0x96, 0x9d, 0xa4, 0xab, 0xb2, 0xb9,
-+ 0xc0, 0xc7, 0xce, 0xd5, 0xdc, 0xe3, 0xea, 0xf1,
-+ 0xf8, 0xff, 0x06, 0x0d, 0x14, 0x1b, 0x22, 0x29,
-+ 0x30, 0x37, 0x3e, 0x45, 0x4c, 0x53, 0x5a, 0x61,
-+ 0x68, 0x6f, 0x76, 0x7d, 0x84, 0x8b, 0x92, 0x99,
-+ 0xa0, 0xa7, 0xae, 0xb5, 0xbc, 0xc3, 0xca, 0xd1,
-+ 0xd8, 0xdf, 0xe6, 0xed, 0xf4, 0xfb, 0x02, 0x09,
-+ 0x10, 0x17, 0x1e, 0x25, 0x2c, 0x33, 0x3a, 0x41,
-+ 0x48, 0x4f, 0x56, 0x5d, 0x64, 0x6b, 0x72, 0x79,
-+ 0x80, 0x87, 0x8e, 0x95, 0x9c, 0xa3, 0xaa, 0xb1,
-+ 0xb8, 0xbf, 0xc6, 0xcd, 0xd4, 0xdb, 0xe2, 0xe9,
-+ 0xf0, 0xf7, 0xfe, 0x05, 0x0c, 0x13, 0x1a, 0x21,
-+ 0x28, 0x2f, 0x36, 0x3d, 0x44, 0x4b, 0x52, 0x59,
-+ 0x60, 0x67, 0x6e, 0x75, 0x7c, 0x83, 0x8a, 0x91,
-+ 0x98, 0x9f, 0xa6, 0xad, 0xb4, 0xbb, 0xc2, 0xc9,
-+ 0xd0, 0xd7, 0xde, 0xe5, 0xec, 0xf3, 0xfa, 0x01,
-+ 0x08, 0x0f, 0x16, 0x1d, 0x24, 0x2b, 0x32, 0x39,
-+ 0x40, 0x47, 0x4e, 0x55, 0x5c, 0x63, 0x6a, 0x71,
-+ 0x78, 0x7f, 0x86, 0x8d, 0x94, 0x9b, 0xa2, 0xa9,
-+ 0xb0, 0xb7, 0xbe, 0xc5, 0xcc, 0xd3, 0xda, 0xe1,
-+ 0xe8, 0xef, 0xf6, 0xfd, 0x04, 0x0b, 0x12, 0x19,
-+ 0x20, 0x27, 0x2e, 0x35, 0x3c, 0x43, 0x4a, 0x51,
-+ 0x58, 0x5f, 0x66, 0x6d, 0x74, 0x7b, 0x82, 0x89,
-+ 0x90, 0x97, 0x9e, 0xa5, 0xac, 0xb3, 0xba, 0xc1,
-+ 0xc8, 0xcf, 0xd6, 0xdd, 0xe4, 0xeb, 0xf2, 0xf9,
-+ 0x00, 0x09, 0x12, 0x1b, 0x24, 0x2d, 0x36, 0x3f,
-+ 0x48, 0x51, 0x5a, 0x63, 0x6c, 0x75, 0x7e, 0x87,
-+ 0x90, 0x99, 0xa2, 0xab, 0xb4, 0xbd, 0xc6, 0xcf,
-+ 0xd8, 0xe1, 0xea, 0xf3, 0xfc, 0x05, 0x0e, 0x17,
-+ 0x20, 0x29, 0x32, 0x3b, 0x44, 0x4d, 0x56, 0x5f,
-+ 0x68, 0x71, 0x7a, 0x83, 0x8c, 0x95, 0x9e, 0xa7,
-+ 0xb0, 0xb9, 0xc2, 0xcb, 0xd4, 0xdd, 0xe6, 0xef,
-+ 0xf8, 0x01, 0x0a, 0x13, 0x1c, 0x25, 0x2e, 0x37,
-+ 0x40, 0x49, 0x52, 0x5b, 0x64, 0x6d, 0x76, 0x7f,
-+ 0x88, 0x91, 0x9a, 0xa3, 0xac, 0xb5, 0xbe, 0xc7,
-+ 0xd0, 0xd9, 0xe2, 0xeb, 0xf4, 0xfd, 0x06, 0x0f,
-+ 0x18, 0x21, 0x2a, 0x33, 0x3c, 0x45, 0x4e, 0x57,
-+ 0x60, 0x69, 0x72, 0x7b, 0x84, 0x8d, 0x96, 0x9f,
-+ 0xa8, 0xb1, 0xba, 0xc3, 0xcc, 0xd5, 0xde, 0xe7,
-+ 0xf0, 0xf9, 0x02, 0x0b, 0x14, 0x1d, 0x26, 0x2f,
-+ 0x38, 0x41, 0x4a, 0x53, 0x5c, 0x65, 0x6e, 0x77,
-+ 0x80, 0x89, 0x92, 0x9b, 0xa4, 0xad, 0xb6, 0xbf,
-+ 0xc8, 0xd1, 0xda, 0xe3, 0xec, 0xf5, 0xfe, 0x07,
-+ 0x10, 0x19, 0x22, 0x2b, 0x34, 0x3d, 0x46, 0x4f,
-+ 0x58, 0x61, 0x6a, 0x73, 0x7c, 0x85, 0x8e, 0x97,
-+ 0xa0, 0xa9, 0xb2, 0xbb, 0xc4, 0xcd, 0xd6, 0xdf,
-+ 0xe8, 0xf1, 0xfa, 0x03, 0x0c, 0x15, 0x1e, 0x27,
-+ 0x30, 0x39, 0x42, 0x4b, 0x54, 0x5d, 0x66, 0x6f,
-+ 0x78, 0x81, 0x8a, 0x93, 0x9c, 0xa5, 0xae, 0xb7,
-+ 0xc0, 0xc9, 0xd2, 0xdb, 0xe4, 0xed, 0xf6, 0xff,
-+ 0x08, 0x11, 0x1a, 0x23, 0x2c, 0x35, 0x3e, 0x47,
-+ 0x50, 0x59, 0x62, 0x6b, 0x74, 0x7d, 0x86, 0x8f,
-+ 0x98, 0xa1, 0xaa, 0xb3, 0xbc, 0xc5, 0xce, 0xd7,
-+ 0xe0, 0xe9, 0xf2, 0xfb, 0x04, 0x0d, 0x16, 0x1f,
-+ 0x28, 0x31, 0x3a, 0x43, 0x4c, 0x55, 0x5e, 0x67,
-+ 0x70, 0x79, 0x82, 0x8b, 0x94, 0x9d, 0xa6, 0xaf,
-+ 0xb8, 0xc1, 0xca, 0xd3, 0xdc, 0xe5, 0xee, 0xf7,
-+ 0x00, 0x0b, 0x16, 0x21, 0x2c, 0x37, 0x42, 0x4d,
-+ 0x58, 0x63, 0x6e, 0x79, 0x84, 0x8f, 0x9a, 0xa5,
-+ 0xb0, 0xbb, 0xc6, 0xd1, 0xdc, 0xe7, 0xf2, 0xfd,
-+ 0x08, 0x13, 0x1e, 0x29, 0x34, 0x3f, 0x4a, 0x55,
-+ 0x60, 0x6b, 0x76, 0x81, 0x8c, 0x97, 0xa2, 0xad,
-+ 0xb8, 0xc3, 0xce, 0xd9, 0xe4, 0xef, 0xfa, 0x05,
-+ 0x10, 0x1b, 0x26, 0x31, 0x3c, 0x47, 0x52, 0x5d,
-+ 0x68, 0x73, 0x7e, 0x89, 0x94, 0x9f, 0xaa, 0xb5,
-+ 0xc0, 0xcb, 0xd6, 0xe1, 0xec, 0xf7, 0x02, 0x0d,
-+ 0x18, 0x23, 0x2e, 0x39, 0x44, 0x4f, 0x5a, 0x65,
-+ 0x70, 0x7b, 0x86, 0x91, 0x9c, 0xa7, 0xb2, 0xbd,
-+ 0xc8, 0xd3, 0xde, 0xe9, 0xf4, 0xff, 0x0a, 0x15,
-+ 0x20, 0x2b, 0x36, 0x41, 0x4c, 0x57, 0x62, 0x6d,
-+ 0x78, 0x83, 0x8e, 0x99, 0xa4, 0xaf, 0xba, 0xc5,
-+ 0xd0, 0xdb, 0xe6, 0xf1, 0xfc, 0x07, 0x12, 0x1d,
-+ 0x28, 0x33, 0x3e, 0x49, 0x54, 0x5f, 0x6a, 0x75,
-+ 0x80, 0x8b, 0x96, 0xa1, 0xac, 0xb7, 0xc2, 0xcd,
-+ 0xd8, 0xe3, 0xee, 0xf9, 0x04, 0x0f, 0x1a, 0x25,
-+ 0x30, 0x3b, 0x46, 0x51, 0x5c, 0x67, 0x72, 0x7d,
-+ 0x88, 0x93, 0x9e, 0xa9, 0xb4, 0xbf, 0xca, 0xd5,
-+ 0xe0, 0xeb, 0xf6, 0x01, 0x0c, 0x17, 0x22, 0x2d,
-+ 0x38, 0x43, 0x4e, 0x59, 0x64, 0x6f, 0x7a, 0x85,
-+ 0x90, 0x9b, 0xa6, 0xb1, 0xbc, 0xc7, 0xd2, 0xdd,
-+ 0xe8, 0xf3, 0xfe, 0x09, 0x14, 0x1f, 0x2a, 0x35,
-+ 0x40, 0x4b, 0x56, 0x61, 0x6c, 0x77, 0x82, 0x8d,
-+ 0x98, 0xa3, 0xae, 0xb9, 0xc4, 0xcf, 0xda, 0xe5,
-+ 0xf0, 0xfb, 0x06, 0x11, 0x1c, 0x27, 0x32, 0x3d,
-+ 0x48, 0x53, 0x5e, 0x69, 0x74, 0x7f, 0x8a, 0x95,
-+ 0xa0, 0xab, 0xb6, 0xc1, 0xcc, 0xd7, 0xe2, 0xed,
-+ 0xf8, 0x03, 0x0e, 0x19, 0x24, 0x2f, 0x3a, 0x45,
-+ 0x50, 0x5b, 0x66, 0x71, 0x7c, 0x87, 0x92, 0x9d,
-+ 0xa8, 0xb3, 0xbe, 0xc9, 0xd4, 0xdf, 0xea, 0xf5,
-+ 0x00, 0x0d, 0x1a, 0x27, 0x34, 0x41, 0x4e, 0x5b,
-+ 0x68, 0x75, 0x82, 0x8f, 0x9c, 0xa9, 0xb6, 0xc3,
-+ 0xd0, 0xdd, 0xea, 0xf7, 0x04, 0x11, 0x1e, 0x2b,
-+ 0x38, 0x45, 0x52, 0x5f, 0x6c, 0x79, 0x86, 0x93,
-+ 0xa0, 0xad, 0xba, 0xc7, 0xd4, 0xe1, 0xee, 0xfb,
-+ 0x08, 0x15, 0x22, 0x2f, 0x3c, 0x49, 0x56, 0x63,
-+ 0x70, 0x7d, 0x8a, 0x97, 0xa4, 0xb1, 0xbe, 0xcb,
-+ 0xd8, 0xe5, 0xf2, 0xff, 0x0c, 0x19, 0x26, 0x33,
-+ 0x40, 0x4d, 0x5a, 0x67, 0x74, 0x81, 0x8e, 0x9b,
-+ 0xa8, 0xb5, 0xc2, 0xcf, 0xdc, 0xe9, 0xf6, 0x03,
-+ 0x10, 0x1d, 0x2a, 0x37, 0x44, 0x51, 0x5e, 0x6b,
-+ 0x78, 0x85, 0x92, 0x9f, 0xac, 0xb9, 0xc6, 0xd3,
-+ 0xe0, 0xed, 0xfa, 0x07, 0x14, 0x21, 0x2e, 0x3b,
-+ 0x48, 0x55, 0x62, 0x6f, 0x7c, 0x89, 0x96, 0xa3,
-+ 0xb0, 0xbd, 0xca, 0xd7, 0xe4, 0xf1, 0xfe, 0x0b,
-+ 0x18, 0x25, 0x32, 0x3f, 0x4c, 0x59, 0x66, 0x73,
-+ 0x80, 0x8d, 0x9a, 0xa7, 0xb4, 0xc1, 0xce, 0xdb,
-+ 0xe8, 0xf5, 0x02, 0x0f, 0x1c, 0x29, 0x36, 0x43,
-+ 0x50, 0x5d, 0x6a, 0x77, 0x84, 0x91, 0x9e, 0xab,
-+ 0xb8, 0xc5, 0xd2, 0xdf, 0xec, 0xf9, 0x06, 0x13,
-+ 0x20, 0x2d, 0x3a, 0x47, 0x54, 0x61, 0x6e, 0x7b,
-+ 0x88, 0x95, 0xa2, 0xaf, 0xbc, 0xc9, 0xd6, 0xe3,
-+ 0xf0, 0xfd, 0x0a, 0x17, 0x24, 0x31, 0x3e, 0x4b,
-+ 0x58, 0x65, 0x72, 0x7f, 0x8c, 0x99, 0xa6, 0xb3,
-+ 0xc0, 0xcd, 0xda, 0xe7, 0xf4, 0x01, 0x0e, 0x1b,
-+ 0x28, 0x35, 0x42, 0x4f, 0x5c, 0x69, 0x76, 0x83,
-+ 0x90, 0x9d, 0xaa, 0xb7, 0xc4, 0xd1, 0xde, 0xeb,
-+ 0xf8, 0x05, 0x12, 0x1f, 0x2c, 0x39, 0x46, 0x53,
-+ 0x60, 0x6d, 0x7a, 0x87, 0x94, 0xa1, 0xae, 0xbb,
-+ 0xc8, 0xd5, 0xe2, 0xef, 0xfc, 0x09, 0x16, 0x23,
-+ 0x30, 0x3d, 0x4a, 0x57, 0x64, 0x71, 0x7e, 0x8b,
-+ 0x98, 0xa5, 0xb2, 0xbf, 0xcc, 0xd9, 0xe6, 0xf3,
-+ 0x00, 0x0f, 0x1e, 0x2d, 0x3c, 0x4b, 0x5a, 0x69,
-+ 0x78, 0x87, 0x96, 0xa5, 0xb4, 0xc3, 0xd2, 0xe1,
-+ 0xf0, 0xff, 0x0e, 0x1d, 0x2c, 0x3b, 0x4a, 0x59,
-+ 0x68, 0x77, 0x86, 0x95, 0xa4, 0xb3, 0xc2, 0xd1,
-+ 0xe0, 0xef, 0xfe, 0x0d, 0x1c, 0x2b, 0x3a, 0x49,
-+ 0x58, 0x67, 0x76, 0x85, 0x94, 0xa3, 0xb2, 0xc1,
-+ 0xd0, 0xdf, 0xee, 0xfd, 0x0c, 0x1b, 0x2a, 0x39,
-+ 0x48, 0x57, 0x66, 0x75, 0x84, 0x93, 0xa2, 0xb1,
-+ 0xc0, 0xcf, 0xde, 0xed, 0xfc, 0x0b, 0x1a, 0x29,
-+ 0x38, 0x47, 0x56, 0x65, 0x74, 0x83, 0x92, 0xa1,
-+ 0xb0, 0xbf, 0xce, 0xdd, 0xec, 0xfb, 0x0a, 0x19,
-+ 0x28, 0x37, 0x46, 0x55, 0x64, 0x73, 0x82, 0x91,
-+ 0xa0, 0xaf, 0xbe, 0xcd, 0xdc, 0xeb, 0xfa, 0x09,
-+ 0x18, 0x27, 0x36, 0x45, 0x54, 0x63, 0x72, 0x81,
-+ 0x90, 0x9f, 0xae, 0xbd, 0xcc, 0xdb, 0xea, 0xf9,
-+ 0x08, 0x17, 0x26, 0x35, 0x44, 0x53, 0x62, 0x71,
-+ 0x80, 0x8f, 0x9e, 0xad, 0xbc, 0xcb, 0xda, 0xe9,
-+ 0xf8, 0x07, 0x16, 0x25, 0x34, 0x43, 0x52, 0x61,
-+ 0x70, 0x7f, 0x8e, 0x9d, 0xac, 0xbb, 0xca, 0xd9,
-+ 0xe8, 0xf7, 0x06, 0x15, 0x24, 0x33, 0x42, 0x51,
-+ 0x60, 0x6f, 0x7e, 0x8d, 0x9c, 0xab, 0xba, 0xc9,
-+ 0xd8, 0xe7, 0xf6, 0x05, 0x14, 0x23, 0x32, 0x41,
-+ 0x50, 0x5f, 0x6e, 0x7d, 0x8c, 0x9b, 0xaa, 0xb9,
-+ 0xc8, 0xd7, 0xe6, 0xf5, 0x04, 0x13, 0x22, 0x31,
-+ 0x40, 0x4f, 0x5e, 0x6d, 0x7c, 0x8b, 0x9a, 0xa9,
-+ 0xb8, 0xc7, 0xd6, 0xe5, 0xf4, 0x03, 0x12, 0x21,
-+ 0x30, 0x3f, 0x4e, 0x5d, 0x6c, 0x7b, 0x8a, 0x99,
-+ 0xa8, 0xb7, 0xc6, 0xd5, 0xe4, 0xf3, 0x02, 0x11,
-+ 0x20, 0x2f, 0x3e, 0x4d, 0x5c, 0x6b, 0x7a, 0x89,
-+ 0x98, 0xa7, 0xb6, 0xc5, 0xd4, 0xe3, 0xf2, 0x01,
-+ 0x10, 0x1f, 0x2e, 0x3d, 0x4c, 0x5b, 0x6a, 0x79,
-+ 0x88, 0x97, 0xa6, 0xb5, 0xc4, 0xd3, 0xe2, 0xf1,
-+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
-+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
-+ 0x10, 0x21, 0x32, 0x43, 0x54, 0x65, 0x76, 0x87,
-+ 0x98, 0xa9, 0xba, 0xcb, 0xdc, 0xed, 0xfe, 0x0f,
-+ 0x20, 0x31, 0x42, 0x53, 0x64, 0x75, 0x86, 0x97,
-+ 0xa8, 0xb9, 0xca, 0xdb, 0xec, 0xfd, 0x0e, 0x1f,
-+ 0x30, 0x41, 0x52, 0x63, 0x74, 0x85, 0x96, 0xa7,
-+ 0xb8, 0xc9, 0xda, 0xeb, 0xfc, 0x0d, 0x1e, 0x2f,
-+ 0x40, 0x51, 0x62, 0x73, 0x84, 0x95, 0xa6, 0xb7,
-+ 0xc8, 0xd9, 0xea, 0xfb, 0x0c, 0x1d, 0x2e, 0x3f,
-+ 0x50, 0x61, 0x72, 0x83, 0x94, 0xa5, 0xb6, 0xc7,
-+ 0xd8, 0xe9, 0xfa, 0x0b, 0x1c, 0x2d, 0x3e, 0x4f,
-+ 0x60, 0x71, 0x82, 0x93, 0xa4, 0xb5, 0xc6, 0xd7,
-+ 0xe8, 0xf9, 0x0a, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f,
-+ 0x70, 0x81, 0x92, 0xa3, 0xb4, 0xc5, 0xd6, 0xe7,
-+ 0xf8, 0x09, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e, 0x6f,
-+ 0x80, 0x91, 0xa2, 0xb3, 0xc4, 0xd5, 0xe6, 0xf7,
-+ 0x08, 0x19, 0x2a, 0x3b, 0x4c, 0x5d, 0x6e, 0x7f,
-+ 0x90, 0xa1, 0xb2, 0xc3, 0xd4, 0xe5, 0xf6, 0x07,
-+ 0x18, 0x29, 0x3a, 0x4b, 0x5c, 0x6d, 0x7e, 0x8f,
-+ 0xa0, 0xb1, 0xc2, 0xd3, 0xe4, 0xf5, 0x06, 0x17,
-+ 0x28, 0x39, 0x4a, 0x5b, 0x6c, 0x7d, 0x8e, 0x9f,
-+ 0xb0, 0xc1, 0xd2, 0xe3, 0xf4, 0x05, 0x16, 0x27,
-+ 0x38, 0x49, 0x5a, 0x6b, 0x7c, 0x8d, 0x9e, 0xaf,
-+ 0xc0, 0xd1, 0xe2, 0xf3, 0x04, 0x15, 0x26, 0x37,
-+ 0x48, 0x59, 0x6a, 0x7b, 0x8c, 0x9d, 0xae, 0xbf,
-+ 0xd0, 0xe1, 0xf2, 0x03, 0x14, 0x25, 0x36, 0x47,
-+ 0x58, 0x69, 0x7a, 0x8b, 0x9c, 0xad, 0xbe, 0xcf,
-+ 0xe0, 0xf1, 0x02, 0x13, 0x24, 0x35, 0x46, 0x57,
-+ 0x68, 0x79, 0x8a, 0x9b, 0xac, 0xbd, 0xce, 0xdf,
-+ 0xf0, 0x01, 0x12, 0x23, 0x34, 0x45, 0x56, 0x67,
-+ 0x78, 0x89, 0x9a, 0xab, 0xbc, 0xcd, 0xde, 0xef,
-+ 0x00, 0x13, 0x26, 0x39, 0x4c, 0x5f, 0x72, 0x85,
-+ 0x98, 0xab, 0xbe, 0xd1, 0xe4, 0xf7, 0x0a, 0x1d,
-+ 0x30, 0x43, 0x56, 0x69, 0x7c, 0x8f, 0xa2, 0xb5,
-+ 0xc8, 0xdb, 0xee, 0x01, 0x14, 0x27, 0x3a, 0x4d,
-+ 0x60, 0x73, 0x86, 0x99, 0xac, 0xbf, 0xd2, 0xe5,
-+ 0xf8, 0x0b, 0x1e, 0x31, 0x44, 0x57, 0x6a, 0x7d,
-+ 0x90, 0xa3, 0xb6, 0xc9, 0xdc, 0xef, 0x02, 0x15,
-+ 0x28, 0x3b, 0x4e, 0x61, 0x74, 0x87, 0x9a, 0xad,
-+ 0xc0, 0xd3, 0xe6, 0xf9, 0x0c, 0x1f, 0x32, 0x45,
-+ 0x58, 0x6b, 0x7e, 0x91, 0xa4, 0xb7, 0xca, 0xdd,
-+ 0xf0, 0x03, 0x16, 0x29, 0x3c, 0x4f, 0x62, 0x75,
-+ 0x88, 0x9b, 0xae, 0xc1, 0xd4, 0xe7, 0xfa, 0x0d,
-+ 0x20, 0x33, 0x46, 0x59, 0x6c, 0x7f, 0x92, 0xa5,
-+ 0xb8, 0xcb, 0xde, 0xf1, 0x04, 0x17, 0x2a, 0x3d,
-+ 0x50, 0x63, 0x76, 0x89, 0x9c, 0xaf, 0xc2, 0xd5,
-+ 0xe8, 0xfb, 0x0e, 0x21, 0x34, 0x47, 0x5a, 0x6d,
-+ 0x80, 0x93, 0xa6, 0xb9, 0xcc, 0xdf, 0xf2, 0x05,
-+ 0x18, 0x2b, 0x3e, 0x51, 0x64, 0x77, 0x8a, 0x9d,
-+ 0xb0, 0xc3, 0xd6, 0xe9, 0xfc, 0x0f, 0x22, 0x35,
-+ 0x48, 0x5b, 0x6e, 0x81, 0x94, 0xa7, 0xba, 0xcd,
-+ 0xe0, 0xf3, 0x06, 0x19, 0x2c, 0x3f, 0x52, 0x65,
-+ 0x78, 0x8b, 0x9e, 0xb1, 0xc4, 0xd7, 0xea, 0xfd,
-+ 0x10, 0x23, 0x36, 0x49, 0x5c, 0x6f, 0x82, 0x95,
-+ 0xa8, 0xbb, 0xce, 0xe1, 0xf4, 0x07, 0x1a, 0x2d,
-+ 0x40, 0x53, 0x66, 0x79, 0x8c, 0x9f, 0xb2, 0xc5,
-+ 0xd8, 0xeb, 0xfe, 0x11, 0x24, 0x37, 0x4a, 0x5d,
-+ 0x70, 0x83, 0x96, 0xa9, 0xbc, 0xcf, 0xe2, 0xf5,
-+ 0x08, 0x1b, 0x2e, 0x41, 0x54, 0x67, 0x7a, 0x8d,
-+ 0xa0, 0xb3, 0xc6, 0xd9, 0xec, 0xff, 0x12, 0x25,
-+ 0x38, 0x4b, 0x5e, 0x71, 0x84, 0x97, 0xaa, 0xbd,
-+ 0xd0, 0xe3, 0xf6, 0x09, 0x1c, 0x2f, 0x42, 0x55,
-+ 0x68, 0x7b, 0x8e, 0xa1, 0xb4, 0xc7, 0xda, 0xed,
-+ 0x00, 0x15, 0x2a, 0x3f, 0x54, 0x69, 0x7e, 0x93,
-+ 0xa8, 0xbd, 0xd2, 0xe7, 0xfc, 0x11, 0x26, 0x3b,
-+ 0x50, 0x65, 0x7a, 0x8f, 0xa4, 0xb9, 0xce, 0xe3,
-+ 0xf8, 0x0d, 0x22, 0x37, 0x4c, 0x61, 0x76, 0x8b,
-+ 0xa0, 0xb5, 0xca, 0xdf, 0xf4, 0x09, 0x1e, 0x33,
-+ 0x48, 0x5d, 0x72, 0x87, 0x9c, 0xb1, 0xc6, 0xdb,
-+ 0xf0, 0x05, 0x1a, 0x2f, 0x44, 0x59, 0x6e, 0x83,
-+ 0x98, 0xad, 0xc2, 0xd7, 0xec, 0x01, 0x16, 0x2b,
-+ 0x40, 0x55, 0x6a, 0x7f, 0x94, 0xa9, 0xbe, 0xd3,
-+ 0xe8, 0xfd, 0x12, 0x27, 0x3c, 0x51, 0x66, 0x7b,
-+ 0x90, 0xa5, 0xba, 0xcf, 0xe4, 0xf9, 0x0e, 0x23,
-+ 0x38, 0x4d, 0x62, 0x77, 0x8c, 0xa1, 0xb6, 0xcb,
-+ 0xe0, 0xf5, 0x0a, 0x1f, 0x34, 0x49, 0x5e, 0x73,
-+ 0x88, 0x9d, 0xb2, 0xc7, 0xdc, 0xf1, 0x06, 0x1b,
-+ 0x30, 0x45, 0x5a, 0x6f, 0x84, 0x99, 0xae, 0xc3,
-+ 0xd8, 0xed, 0x02, 0x17, 0x2c, 0x41, 0x56, 0x6b,
-+ 0x80, 0x95, 0xaa, 0xbf, 0xd4, 0xe9, 0xfe, 0x13,
-+ 0x28, 0x3d, 0x52, 0x67, 0x7c, 0x91, 0xa6, 0xbb,
-+ 0xd0, 0xe5, 0xfa, 0x0f, 0x24, 0x39, 0x4e, 0x63,
-+ 0x78, 0x8d, 0xa2, 0xb7, 0xcc, 0xe1, 0xf6, 0x0b,
-+ 0x20, 0x35, 0x4a, 0x5f, 0x74, 0x89, 0x9e, 0xb3,
-+ 0xc8, 0xdd, 0xf2, 0x07, 0x1c, 0x31, 0x46, 0x5b,
-+ 0x70, 0x85, 0x9a, 0xaf, 0xc4, 0xd9, 0xee, 0x03,
-+ 0x18, 0x2d, 0x42, 0x57, 0x6c, 0x81, 0x96, 0xab,
-+ 0xc0, 0xd5, 0xea, 0xff, 0x14, 0x29, 0x3e, 0x53,
-+ 0x68, 0x7d, 0x92, 0xa7, 0xbc, 0xd1, 0xe6, 0xfb,
-+ 0x10, 0x25, 0x3a, 0x4f, 0x64, 0x79, 0x8e, 0xa3,
-+ 0xb8, 0xcd, 0xe2, 0xf7, 0x0c, 0x21, 0x36, 0x4b,
-+ 0x60, 0x75, 0x8a, 0x9f, 0xb4, 0xc9, 0xde, 0xf3,
-+ 0x08, 0x1d, 0x32, 0x47, 0x5c, 0x71, 0x86, 0x9b,
-+ 0xb0, 0xc5, 0xda, 0xef, 0x04, 0x19, 0x2e, 0x43,
-+ 0x58, 0x6d, 0x82, 0x97, 0xac, 0xc1, 0xd6, 0xeb,
-+ 0x00, 0x17, 0x2e, 0x45, 0x5c, 0x73, 0x8a, 0xa1,
-+ 0xb8, 0xcf, 0xe6, 0xfd, 0x14, 0x2b, 0x42, 0x59,
-+ 0x70, 0x87, 0x9e, 0xb5, 0xcc, 0xe3, 0xfa, 0x11,
-+ 0x28, 0x3f, 0x56, 0x6d, 0x84, 0x9b, 0xb2, 0xc9,
-+ 0xe0, 0xf7, 0x0e, 0x25, 0x3c, 0x53, 0x6a, 0x81,
-+ 0x98, 0xaf, 0xc6, 0xdd, 0xf4, 0x0b, 0x22, 0x39,
-+ 0x50, 0x67, 0x7e, 0x95, 0xac, 0xc3, 0xda, 0xf1,
-+ 0x08, 0x1f, 0x36, 0x4d, 0x64, 0x7b, 0x92, 0xa9,
-+ 0xc0, 0xd7, 0xee, 0x05, 0x1c, 0x33, 0x4a, 0x61,
-+ 0x78, 0x8f, 0xa6, 0xbd, 0xd4, 0xeb, 0x02, 0x19,
-+ 0x30, 0x47, 0x5e, 0x75, 0x8c, 0xa3, 0xba, 0xd1,
-+ 0xe8, 0xff, 0x16, 0x2d, 0x44, 0x5b, 0x72, 0x89,
-+ 0xa0, 0xb7, 0xce, 0xe5, 0xfc, 0x13, 0x2a, 0x41,
-+ 0x58, 0x6f, 0x86, 0x9d, 0xb4, 0xcb, 0xe2, 0xf9,
-+ 0x10, 0x27, 0x3e, 0x55, 0x6c, 0x83, 0x9a, 0xb1,
-+ 0xc8, 0xdf, 0xf6, 0x0d, 0x24, 0x3b, 0x52, 0x69,
-+ 0x80, 0x97, 0xae, 0xc5, 0xdc, 0xf3, 0x0a, 0x21,
-+ 0x38, 0x4f, 0x66, 0x7d, 0x94, 0xab, 0xc2, 0xd9,
-+ 0xf0, 0x07, 0x1e, 0x35, 0x4c, 0x63, 0x7a, 0x91,
-+ 0xa8, 0xbf, 0xd6, 0xed, 0x04, 0x1b, 0x32, 0x49,
-+ 0x60, 0x77, 0x8e, 0xa5, 0xbc, 0xd3, 0xea, 0x01,
-+ 0x18, 0x2f, 0x46, 0x5d, 0x74, 0x8b, 0xa2, 0xb9,
-+ 0xd0, 0xe7, 0xfe, 0x15, 0x2c, 0x43, 0x5a, 0x71,
-+ 0x88, 0x9f, 0xb6, 0xcd, 0xe4, 0xfb, 0x12, 0x29,
-+ 0x40, 0x57, 0x6e, 0x85, 0x9c, 0xb3, 0xca, 0xe1,
-+ 0xf8, 0x0f, 0x26, 0x3d, 0x54, 0x6b, 0x82, 0x99,
-+ 0xb0, 0xc7, 0xde, 0xf5, 0x0c, 0x23, 0x3a, 0x51,
-+ 0x68, 0x7f, 0x96, 0xad, 0xc4, 0xdb, 0xf2, 0x09,
-+ 0x20, 0x37, 0x4e, 0x65, 0x7c, 0x93, 0xaa, 0xc1,
-+ 0xd8, 0xef, 0x06, 0x1d, 0x34, 0x4b, 0x62, 0x79,
-+ 0x90, 0xa7, 0xbe, 0xd5, 0xec, 0x03, 0x1a, 0x31,
-+ 0x48, 0x5f, 0x76, 0x8d, 0xa4, 0xbb, 0xd2, 0xe9,
-+ 0x00, 0x19, 0x32, 0x4b, 0x64, 0x7d, 0x96, 0xaf,
-+ 0xc8, 0xe1, 0xfa, 0x13, 0x2c, 0x45, 0x5e, 0x77,
-+ 0x90, 0xa9, 0xc2, 0xdb, 0xf4, 0x0d, 0x26, 0x3f,
-+ 0x58, 0x71, 0x8a, 0xa3, 0xbc, 0xd5, 0xee, 0x07,
-+ 0x20, 0x39, 0x52, 0x6b, 0x84, 0x9d, 0xb6, 0xcf,
-+ 0xe8, 0x01, 0x1a, 0x33, 0x4c, 0x65, 0x7e, 0x97,
-+ 0xb0, 0xc9, 0xe2, 0xfb, 0x14, 0x2d, 0x46, 0x5f,
-+ 0x78, 0x91, 0xaa, 0xc3, 0xdc, 0xf5, 0x0e, 0x27,
-+ 0x40, 0x59, 0x72, 0x8b, 0xa4, 0xbd, 0xd6, 0xef,
-+ 0x08, 0x21, 0x3a, 0x53, 0x6c, 0x85, 0x9e, 0xb7,
-+ 0xd0, 0xe9, 0x02, 0x1b, 0x34, 0x4d, 0x66, 0x7f,
-+ 0x98, 0xb1, 0xca, 0xe3, 0xfc, 0x15, 0x2e, 0x47,
-+ 0x60, 0x79, 0x92, 0xab, 0xc4, 0xdd, 0xf6, 0x0f,
-+ 0x28, 0x41, 0x5a, 0x73, 0x8c, 0xa5, 0xbe, 0xd7,
-+ 0xf0, 0x09, 0x22, 0x3b, 0x54, 0x6d, 0x86, 0x9f,
-+ 0xb8, 0xd1, 0xea, 0x03, 0x1c, 0x35, 0x4e, 0x67,
-+ 0x80, 0x99, 0xb2, 0xcb, 0xe4, 0xfd, 0x16, 0x2f,
-+ 0x48, 0x61, 0x7a, 0x93, 0xac, 0xc5, 0xde, 0xf7,
-+ 0x10, 0x29, 0x42, 0x5b, 0x74, 0x8d, 0xa6, 0xbf,
-+ 0xd8, 0xf1, 0x0a, 0x23, 0x3c, 0x55, 0x6e, 0x87,
-+ 0xa0, 0xb9, 0xd2, 0xeb, 0x04, 0x1d, 0x36, 0x4f,
-+ 0x68, 0x81, 0x9a, 0xb3, 0xcc, 0xe5, 0xfe, 0x17,
-+ 0x30, 0x49, 0x62, 0x7b, 0x94, 0xad, 0xc6, 0xdf,
-+ 0xf8, 0x11, 0x2a, 0x43, 0x5c, 0x75, 0x8e, 0xa7,
-+ 0xc0, 0xd9, 0xf2, 0x0b, 0x24, 0x3d, 0x56, 0x6f,
-+ 0x88, 0xa1, 0xba, 0xd3, 0xec, 0x05, 0x1e, 0x37,
-+ 0x50, 0x69, 0x82, 0x9b, 0xb4, 0xcd, 0xe6, 0xff,
-+ 0x18, 0x31, 0x4a, 0x63, 0x7c, 0x95, 0xae, 0xc7,
-+ 0xe0, 0xf9, 0x12, 0x2b, 0x44, 0x5d, 0x76, 0x8f,
-+ 0xa8, 0xc1, 0xda, 0xf3, 0x0c, 0x25, 0x3e, 0x57,
-+ 0x70, 0x89, 0xa2, 0xbb, 0xd4, 0xed, 0x06, 0x1f,
-+ 0x38, 0x51, 0x6a, 0x83, 0x9c, 0xb5, 0xce, 0xe7,
-+ 0x00, 0x1b, 0x36, 0x51, 0x6c, 0x87, 0xa2, 0xbd,
-+ 0xd8, 0xf3, 0x0e, 0x29, 0x44, 0x5f, 0x7a, 0x95,
-+ 0xb0, 0xcb, 0xe6, 0x01, 0x1c, 0x37, 0x52, 0x6d,
-+ 0x88, 0xa3, 0xbe, 0xd9, 0xf4, 0x0f, 0x2a, 0x45,
-+ 0x60, 0x7b, 0x96, 0xb1, 0xcc, 0xe7, 0x02, 0x1d,
-+ 0x38, 0x53, 0x6e, 0x89, 0xa4, 0xbf, 0xda, 0xf5,
-+ 0x10, 0x2b, 0x46, 0x61, 0x7c, 0x97, 0xb2, 0xcd,
-+ 0xe8, 0x03, 0x1e, 0x39, 0x54, 0x6f, 0x8a, 0xa5,
-+ 0xc0, 0xdb, 0xf6, 0x11, 0x2c, 0x47, 0x62, 0x7d,
-+ 0x98, 0xb3, 0xce, 0xe9, 0x04, 0x1f, 0x3a, 0x55,
-+ 0x70, 0x8b, 0xa6, 0xc1, 0xdc, 0xf7, 0x12, 0x2d,
-+ 0x48, 0x63, 0x7e, 0x99, 0xb4, 0xcf, 0xea, 0x05,
-+ 0x20, 0x3b, 0x56, 0x71, 0x8c, 0xa7, 0xc2, 0xdd,
-+ 0xf8, 0x13, 0x2e, 0x49, 0x64, 0x7f, 0x9a, 0xb5,
-+ 0xd0, 0xeb, 0x06, 0x21, 0x3c, 0x57, 0x72, 0x8d,
-+ 0xa8, 0xc3, 0xde, 0xf9, 0x14, 0x2f, 0x4a, 0x65,
-+ 0x80, 0x9b, 0xb6, 0xd1, 0xec, 0x07, 0x22, 0x3d,
-+ 0x58, 0x73, 0x8e, 0xa9, 0xc4, 0xdf, 0xfa, 0x15,
-+ 0x30, 0x4b, 0x66, 0x81, 0x9c, 0xb7, 0xd2, 0xed,
-+ 0x08, 0x23, 0x3e, 0x59, 0x74, 0x8f, 0xaa, 0xc5,
-+ 0xe0, 0xfb, 0x16, 0x31, 0x4c, 0x67, 0x82, 0x9d,
-+ 0xb8, 0xd3, 0xee, 0x09, 0x24, 0x3f, 0x5a, 0x75,
-+ 0x90, 0xab, 0xc6, 0xe1, 0xfc, 0x17, 0x32, 0x4d,
-+ 0x68, 0x83, 0x9e, 0xb9, 0xd4, 0xef, 0x0a, 0x25,
-+ 0x40, 0x5b, 0x76, 0x91, 0xac, 0xc7, 0xe2, 0xfd,
-+ 0x18, 0x33, 0x4e, 0x69, 0x84, 0x9f, 0xba, 0xd5,
-+ 0xf0, 0x0b, 0x26, 0x41, 0x5c, 0x77, 0x92, 0xad,
-+ 0xc8, 0xe3, 0xfe, 0x19, 0x34, 0x4f, 0x6a, 0x85,
-+ 0xa0, 0xbb, 0xd6, 0xf1, 0x0c, 0x27, 0x42, 0x5d,
-+ 0x78, 0x93, 0xae, 0xc9, 0xe4, 0xff, 0x1a, 0x35,
-+ 0x50, 0x6b, 0x86, 0xa1, 0xbc, 0xd7, 0xf2, 0x0d,
-+ 0x28, 0x43, 0x5e, 0x79, 0x94, 0xaf, 0xca, 0xe5,
-+ 0x00, 0x1d, 0x3a, 0x57, 0x74, 0x91, 0xae, 0xcb,
-+ 0xe8, 0x05, 0x22, 0x3f, 0x5c, 0x79, 0x96, 0xb3,
-+ 0xd0, 0xed, 0x0a, 0x27, 0x44, 0x61, 0x7e, 0x9b,
-+ 0xb8, 0xd5, 0xf2, 0x0f, 0x2c, 0x49, 0x66, 0x83,
-+ 0xa0, 0xbd, 0xda, 0xf7, 0x14, 0x31, 0x4e, 0x6b,
-+ 0x88, 0xa5, 0xc2, 0xdf, 0xfc, 0x19, 0x36, 0x53,
-+ 0x70, 0x8d, 0xaa, 0xc7, 0xe4, 0x01, 0x1e, 0x3b,
-+ 0x58, 0x75, 0x92, 0xaf, 0xcc, 0xe9, 0x06, 0x23,
-+ 0x40, 0x5d, 0x7a, 0x97, 0xb4, 0xd1, 0xee, 0x0b,
-+ 0x28, 0x45, 0x62, 0x7f, 0x9c, 0xb9, 0xd6, 0xf3,
-+ 0x10, 0x2d, 0x4a, 0x67, 0x84, 0xa1, 0xbe, 0xdb,
-+ 0xf8, 0x15, 0x32, 0x4f, 0x6c, 0x89, 0xa6, 0xc3,
-+ 0xe0, 0xfd, 0x1a, 0x37, 0x54, 0x71, 0x8e, 0xab,
-+ 0xc8, 0xe5, 0x02, 0x1f, 0x3c, 0x59, 0x76, 0x93,
-+ 0xb0, 0xcd, 0xea, 0x07, 0x24, 0x41, 0x5e, 0x7b,
-+ 0x98, 0xb5, 0xd2, 0xef, 0x0c, 0x29, 0x46, 0x63,
-+ 0x80, 0x9d, 0xba, 0xd7, 0xf4, 0x11, 0x2e, 0x4b,
-+ 0x68, 0x85, 0xa2, 0xbf, 0xdc, 0xf9, 0x16, 0x33,
-+ 0x50, 0x6d, 0x8a, 0xa7, 0xc4, 0xe1, 0xfe, 0x1b,
-+ 0x38, 0x55, 0x72, 0x8f, 0xac, 0xc9, 0xe6, 0x03,
-+ 0x20, 0x3d, 0x5a, 0x77, 0x94, 0xb1, 0xce, 0xeb,
-+ 0x08, 0x25, 0x42, 0x5f, 0x7c, 0x99, 0xb6, 0xd3,
-+ 0xf0, 0x0d, 0x2a, 0x47, 0x64, 0x81, 0x9e, 0xbb,
-+ 0xd8, 0xf5, 0x12, 0x2f, 0x4c, 0x69, 0x86, 0xa3,
-+ 0xc0, 0xdd, 0xfa, 0x17, 0x34, 0x51, 0x6e, 0x8b,
-+ 0xa8, 0xc5, 0xe2, 0xff, 0x1c, 0x39, 0x56, 0x73,
-+ 0x90, 0xad, 0xca, 0xe7, 0x04, 0x21, 0x3e, 0x5b,
-+ 0x78, 0x95, 0xb2, 0xcf, 0xec, 0x09, 0x26, 0x43,
-+ 0x60, 0x7d, 0x9a, 0xb7, 0xd4, 0xf1, 0x0e, 0x2b,
-+ 0x48, 0x65, 0x82, 0x9f, 0xbc, 0xd9, 0xf6, 0x13,
-+ 0x30, 0x4d, 0x6a, 0x87, 0xa4, 0xc1, 0xde, 0xfb,
-+ 0x18, 0x35, 0x52, 0x6f, 0x8c, 0xa9, 0xc6, 0xe3,
-+ 0x00, 0x1f, 0x3e, 0x5d, 0x7c, 0x9b, 0xba, 0xd9,
-+ 0xf8, 0x17, 0x36, 0x55, 0x74, 0x93, 0xb2, 0xd1,
-+ 0xf0, 0x0f, 0x2e, 0x4d, 0x6c, 0x8b, 0xaa, 0xc9,
-+ 0xe8, 0x07, 0x26, 0x45, 0x64, 0x83, 0xa2, 0xc1,
-+ 0xe0, 0xff, 0x1e, 0x3d, 0x5c, 0x7b, 0x9a, 0xb9,
-+ 0xd8, 0xf7, 0x16, 0x35, 0x54, 0x73, 0x92, 0xb1,
-+ 0xd0, 0xef, 0x0e, 0x2d, 0x4c, 0x6b, 0x8a, 0xa9,
-+ 0xc8, 0xe7, 0x06, 0x25, 0x44, 0x63, 0x82, 0xa1,
-+ 0xc0, 0xdf, 0xfe, 0x1d, 0x3c, 0x5b, 0x7a, 0x99,
-+ 0xb8, 0xd7, 0xf6, 0x15, 0x34, 0x53, 0x72, 0x91,
-+ 0xb0, 0xcf, 0xee, 0x0d, 0x2c, 0x4b, 0x6a, 0x89,
-+ 0xa8, 0xc7, 0xe6, 0x05, 0x24, 0x43, 0x62, 0x81,
-+ 0xa0, 0xbf, 0xde, 0xfd, 0x1c, 0x3b, 0x5a, 0x79,
-+ 0x98, 0xb7, 0xd6, 0xf5, 0x14, 0x33, 0x52, 0x71,
-+ 0x90, 0xaf, 0xce, 0xed, 0x0c, 0x2b, 0x4a, 0x69,
-+ 0x88, 0xa7, 0xc6, 0xe5, 0x04, 0x23, 0x42, 0x61,
-+ 0x80, 0x9f, 0xbe, 0xdd, 0xfc, 0x1b, 0x3a, 0x59,
-+ 0x78, 0x97, 0xb6, 0xd5, 0xf4, 0x13, 0x32, 0x51,
-+ 0x70, 0x8f, 0xae, 0xcd, 0xec, 0x0b, 0x2a, 0x49,
-+ 0x68, 0x87, 0xa6, 0xc5, 0xe4, 0x03, 0x22, 0x41,
-+ 0x60, 0x7f, 0x9e, 0xbd, 0xdc, 0xfb, 0x1a, 0x39,
-+ 0x58, 0x77, 0x96, 0xb5, 0xd4, 0xf3, 0x12, 0x31,
-+ 0x50, 0x6f, 0x8e, 0xad, 0xcc, 0xeb, 0x0a, 0x29,
-+ 0x48, 0x67, 0x86, 0xa5, 0xc4, 0xe3, 0x02, 0x21,
-+ 0x40, 0x5f, 0x7e, 0x9d, 0xbc, 0xdb, 0xfa, 0x19,
-+ 0x38, 0x57, 0x76, 0x95, 0xb4, 0xd3, 0xf2, 0x11,
-+ 0x30, 0x4f, 0x6e, 0x8d, 0xac, 0xcb, 0xea, 0x09,
-+ 0x28, 0x47, 0x66, 0x85, 0xa4, 0xc3, 0xe2, 0x01,
-+ 0x20, 0x3f, 0x5e, 0x7d, 0x9c, 0xbb, 0xda, 0xf9,
-+ 0x18, 0x37, 0x56, 0x75, 0x94, 0xb3, 0xd2, 0xf1,
-+ 0x10, 0x2f, 0x4e, 0x6d, 0x8c, 0xab, 0xca, 0xe9,
-+ 0x08, 0x27, 0x46, 0x65, 0x84, 0xa3, 0xc2, 0xe1,
-+ 0x00, 0x21, 0x42, 0x63,
-+ },
-+ .ilen = 4100,
-+ .result = {
-+ 0xf0, 0x5c, 0x74, 0xad, 0x4e, 0xbc, 0x99, 0xe2,
-+ 0xae, 0xff, 0x91, 0x3a, 0x44, 0xcf, 0x38, 0x32,
-+ 0x1e, 0xad, 0xa7, 0xcd, 0xa1, 0x39, 0x95, 0xaa,
-+ 0x10, 0xb1, 0xb3, 0x2e, 0x04, 0x31, 0x8f, 0x86,
-+ 0xf2, 0x62, 0x74, 0x70, 0x0c, 0xa4, 0x46, 0x08,
-+ 0xa8, 0xb7, 0x99, 0xa8, 0xe9, 0xd2, 0x73, 0x79,
-+ 0x7e, 0x6e, 0xd4, 0x8f, 0x1e, 0xc7, 0x8e, 0x31,
-+ 0x0b, 0xfa, 0x4b, 0xce, 0xfd, 0xf3, 0x57, 0x71,
-+ 0xe9, 0x46, 0x03, 0xa5, 0x3d, 0x34, 0x00, 0xe2,
-+ 0x18, 0xff, 0x75, 0x6d, 0x06, 0x2d, 0x00, 0xab,
-+ 0xb9, 0x3e, 0x6c, 0x59, 0xc5, 0x84, 0x06, 0xb5,
-+ 0x8b, 0xd0, 0x89, 0x9c, 0x4a, 0x79, 0x16, 0xc6,
-+ 0x3d, 0x74, 0x54, 0xfa, 0x44, 0xcd, 0x23, 0x26,
-+ 0x5c, 0xcf, 0x7e, 0x28, 0x92, 0x32, 0xbf, 0xdf,
-+ 0xa7, 0x20, 0x3c, 0x74, 0x58, 0x2a, 0x9a, 0xde,
-+ 0x61, 0x00, 0x1c, 0x4f, 0xff, 0x59, 0xc4, 0x22,
-+ 0xac, 0x3c, 0xd0, 0xe8, 0x6c, 0xf9, 0x97, 0x1b,
-+ 0x58, 0x9b, 0xad, 0x71, 0xe8, 0xa9, 0xb5, 0x0d,
-+ 0xee, 0x2f, 0x04, 0x1f, 0x7f, 0xbc, 0x99, 0xee,
-+ 0x84, 0xff, 0x42, 0x60, 0xdc, 0x3a, 0x18, 0xa5,
-+ 0x81, 0xf9, 0xef, 0xdc, 0x7a, 0x0f, 0x65, 0x41,
-+ 0x2f, 0xa3, 0xd3, 0xf9, 0xc2, 0xcb, 0xc0, 0x4d,
-+ 0x8f, 0xd3, 0x76, 0x96, 0xad, 0x49, 0x6d, 0x38,
-+ 0x3d, 0x39, 0x0b, 0x6c, 0x80, 0xb7, 0x54, 0x69,
-+ 0xf0, 0x2c, 0x90, 0x02, 0x29, 0x0d, 0x1c, 0x12,
-+ 0xad, 0x55, 0xc3, 0x8b, 0x68, 0xd9, 0xcc, 0xb3,
-+ 0xb2, 0x64, 0x33, 0x90, 0x5e, 0xca, 0x4b, 0xe2,
-+ 0xfb, 0x75, 0xdc, 0x63, 0xf7, 0x9f, 0x82, 0x74,
-+ 0xf0, 0xc9, 0xaa, 0x7f, 0xe9, 0x2a, 0x9b, 0x33,
-+ 0xbc, 0x88, 0x00, 0x7f, 0xca, 0xb2, 0x1f, 0x14,
-+ 0xdb, 0xc5, 0x8e, 0x7b, 0x11, 0x3c, 0x3e, 0x08,
-+ 0xf3, 0x83, 0xe8, 0xe0, 0x94, 0x86, 0x2e, 0x92,
-+ 0x78, 0x6b, 0x01, 0xc9, 0xc7, 0x83, 0xba, 0x21,
-+ 0x6a, 0x25, 0x15, 0x33, 0x4e, 0x45, 0x08, 0xec,
-+ 0x35, 0xdb, 0xe0, 0x6e, 0x31, 0x51, 0x79, 0xa9,
-+ 0x42, 0x44, 0x65, 0xc1, 0xa0, 0xf1, 0xf9, 0x2a,
-+ 0x70, 0xd5, 0xb6, 0xc6, 0xc1, 0x8c, 0x39, 0xfc,
-+ 0x25, 0xa6, 0x55, 0xd9, 0xdd, 0x2d, 0x4c, 0xec,
-+ 0x49, 0xc6, 0xeb, 0x0e, 0xa8, 0x25, 0x2a, 0x16,
-+ 0x1b, 0x66, 0x84, 0xda, 0xe2, 0x92, 0xe5, 0xc0,
-+ 0xc8, 0x53, 0x07, 0xaf, 0x80, 0x84, 0xec, 0xfd,
-+ 0xcd, 0xd1, 0x6e, 0xcd, 0x6f, 0x6a, 0xf5, 0x36,
-+ 0xc5, 0x15, 0xe5, 0x25, 0x7d, 0x77, 0xd1, 0x1a,
-+ 0x93, 0x36, 0xa9, 0xcf, 0x7c, 0xa4, 0x54, 0x4a,
-+ 0x06, 0x51, 0x48, 0x4e, 0xf6, 0x59, 0x87, 0xd2,
-+ 0x04, 0x02, 0xef, 0xd3, 0x44, 0xde, 0x76, 0x31,
-+ 0xb3, 0x34, 0x17, 0x1b, 0x9d, 0x66, 0x11, 0x9f,
-+ 0x1e, 0xcc, 0x17, 0xe9, 0xc7, 0x3c, 0x1b, 0xe7,
-+ 0xcb, 0x50, 0x08, 0xfc, 0xdc, 0x2b, 0x24, 0xdb,
-+ 0x65, 0x83, 0xd0, 0x3b, 0xe3, 0x30, 0xea, 0x94,
-+ 0x6c, 0xe7, 0xe8, 0x35, 0x32, 0xc7, 0xdb, 0x64,
-+ 0xb4, 0x01, 0xab, 0x36, 0x2c, 0x77, 0x13, 0xaf,
-+ 0xf8, 0x2b, 0x88, 0x3f, 0x54, 0x39, 0xc4, 0x44,
-+ 0xfe, 0xef, 0x6f, 0x68, 0x34, 0xbe, 0x0f, 0x05,
-+ 0x16, 0x6d, 0xf6, 0x0a, 0x30, 0xe7, 0xe3, 0xed,
-+ 0xc4, 0xde, 0x3c, 0x1b, 0x13, 0xd8, 0xdb, 0xfe,
-+ 0x41, 0x62, 0xe5, 0x28, 0xd4, 0x8d, 0xa3, 0xc7,
-+ 0x93, 0x97, 0xc6, 0x48, 0x45, 0x1d, 0x9f, 0x83,
-+ 0xdf, 0x4b, 0x40, 0x3e, 0x42, 0x25, 0x87, 0x80,
-+ 0x4c, 0x7d, 0xa8, 0xd4, 0x98, 0x23, 0x95, 0x75,
-+ 0x41, 0x8c, 0xda, 0x41, 0x9b, 0xd4, 0xa7, 0x06,
-+ 0xb5, 0xf1, 0x71, 0x09, 0x53, 0xbe, 0xca, 0xbf,
-+ 0x32, 0x03, 0xed, 0xf0, 0x50, 0x1c, 0x56, 0x39,
-+ 0x5b, 0xa4, 0x75, 0x18, 0xf7, 0x9b, 0x58, 0xef,
-+ 0x53, 0xfc, 0x2a, 0x38, 0x23, 0x15, 0x75, 0xcd,
-+ 0x45, 0xe5, 0x5a, 0x82, 0x55, 0xba, 0x21, 0xfa,
-+ 0xd4, 0xbd, 0xc6, 0x94, 0x7c, 0xc5, 0x80, 0x12,
-+ 0xf7, 0x4b, 0x32, 0xc4, 0x9a, 0x82, 0xd8, 0x28,
-+ 0x8f, 0xd9, 0xc2, 0x0f, 0x60, 0x03, 0xbe, 0x5e,
-+ 0x21, 0xd6, 0x5f, 0x58, 0xbf, 0x5c, 0xb1, 0x32,
-+ 0x82, 0x8d, 0xa9, 0xe5, 0xf2, 0x66, 0x1a, 0xc0,
-+ 0xa0, 0xbc, 0x58, 0x2f, 0x71, 0xf5, 0x2f, 0xed,
-+ 0xd1, 0x26, 0xb9, 0xd8, 0x49, 0x5a, 0x07, 0x19,
-+ 0x01, 0x7c, 0x59, 0xb0, 0xf8, 0xa4, 0xb7, 0xd3,
-+ 0x7b, 0x1a, 0x8c, 0x38, 0xf4, 0x50, 0xa4, 0x59,
-+ 0xb0, 0xcc, 0x41, 0x0b, 0x88, 0x7f, 0xe5, 0x31,
-+ 0xb3, 0x42, 0xba, 0xa2, 0x7e, 0xd4, 0x32, 0x71,
-+ 0x45, 0x87, 0x48, 0xa9, 0xc2, 0xf2, 0x89, 0xb3,
-+ 0xe4, 0xa7, 0x7e, 0x52, 0x15, 0x61, 0xfa, 0xfe,
-+ 0xc9, 0xdd, 0x81, 0xeb, 0x13, 0xab, 0xab, 0xc3,
-+ 0x98, 0x59, 0xd8, 0x16, 0x3d, 0x14, 0x7a, 0x1c,
-+ 0x3c, 0x41, 0x9a, 0x16, 0x16, 0x9b, 0xd2, 0xd2,
-+ 0x69, 0x3a, 0x29, 0x23, 0xac, 0x86, 0x32, 0xa5,
-+ 0x48, 0x9c, 0x9e, 0xf3, 0x47, 0x77, 0x81, 0x70,
-+ 0x24, 0xe8, 0x85, 0xd2, 0xf5, 0xb5, 0xfa, 0xff,
-+ 0x59, 0x6a, 0xd3, 0x50, 0x59, 0x43, 0x59, 0xde,
-+ 0xd9, 0xf1, 0x55, 0xa5, 0x0c, 0xc3, 0x1a, 0x1a,
-+ 0x18, 0x34, 0x0d, 0x1a, 0x63, 0x33, 0xed, 0x10,
-+ 0xe0, 0x1d, 0x2a, 0x18, 0xd2, 0xc0, 0x54, 0xa8,
-+ 0xca, 0xb5, 0x9a, 0xd3, 0xdd, 0xca, 0x45, 0x84,
-+ 0x50, 0xe7, 0x0f, 0xfe, 0xa4, 0x99, 0x5a, 0xbe,
-+ 0x43, 0x2d, 0x9a, 0xcb, 0x92, 0x3f, 0x5a, 0x1d,
-+ 0x85, 0xd8, 0xc9, 0xdf, 0x68, 0xc9, 0x12, 0x80,
-+ 0x56, 0x0c, 0xdc, 0x00, 0xdc, 0x3a, 0x7d, 0x9d,
-+ 0xa3, 0xa2, 0xe8, 0x4d, 0xbf, 0xf9, 0x70, 0xa0,
-+ 0xa4, 0x13, 0x4f, 0x6b, 0xaf, 0x0a, 0x89, 0x7f,
-+ 0xda, 0xf0, 0xbf, 0x9b, 0xc8, 0x1d, 0xe5, 0xf8,
-+ 0x2e, 0x8b, 0x07, 0xb5, 0x73, 0x1b, 0xcc, 0xa2,
-+ 0xa6, 0xad, 0x30, 0xbc, 0x78, 0x3c, 0x5b, 0x10,
-+ 0xfa, 0x5e, 0x62, 0x2d, 0x9e, 0x64, 0xb3, 0x33,
-+ 0xce, 0xf9, 0x1f, 0x86, 0xe7, 0x8b, 0xa2, 0xb8,
-+ 0xe8, 0x99, 0x57, 0x8c, 0x11, 0xed, 0x66, 0xd9,
-+ 0x3c, 0x72, 0xb9, 0xc3, 0xe6, 0x4e, 0x17, 0x3a,
-+ 0x6a, 0xcb, 0x42, 0x24, 0x06, 0xed, 0x3e, 0x4e,
-+ 0xa3, 0xe8, 0x6a, 0x94, 0xda, 0x0d, 0x4e, 0xd5,
-+ 0x14, 0x19, 0xcf, 0xb6, 0x26, 0xd8, 0x2e, 0xcc,
-+ 0x64, 0x76, 0x38, 0x49, 0x4d, 0xfe, 0x30, 0x6d,
-+ 0xe4, 0xc8, 0x8c, 0x7b, 0xc4, 0xe0, 0x35, 0xba,
-+ 0x22, 0x6e, 0x76, 0xe1, 0x1a, 0xf2, 0x53, 0xc3,
-+ 0x28, 0xa2, 0x82, 0x1f, 0x61, 0x69, 0xad, 0xc1,
-+ 0x7b, 0x28, 0x4b, 0x1e, 0x6c, 0x85, 0x95, 0x9b,
-+ 0x51, 0xb5, 0x17, 0x7f, 0x12, 0x69, 0x8c, 0x24,
-+ 0xd5, 0xc7, 0x5a, 0x5a, 0x11, 0x54, 0xff, 0x5a,
-+ 0xf7, 0x16, 0xc3, 0x91, 0xa6, 0xf0, 0xdc, 0x0a,
-+ 0xb6, 0xa7, 0x4a, 0x0d, 0x7a, 0x58, 0xfe, 0xa5,
-+ 0xf5, 0xcb, 0x8f, 0x7b, 0x0e, 0xea, 0x57, 0xe7,
-+ 0xbd, 0x79, 0xd6, 0x1c, 0x88, 0x23, 0x6c, 0xf2,
-+ 0x4d, 0x29, 0x77, 0x53, 0x35, 0x6a, 0x00, 0x8d,
-+ 0xcd, 0xa3, 0x58, 0xbe, 0x77, 0x99, 0x18, 0xf8,
-+ 0xe6, 0xe1, 0x8f, 0xe9, 0x37, 0x8f, 0xe3, 0xe2,
-+ 0x5a, 0x8a, 0x93, 0x25, 0xaf, 0xf3, 0x78, 0x80,
-+ 0xbe, 0xa6, 0x1b, 0xc6, 0xac, 0x8b, 0x1c, 0x91,
-+ 0x58, 0xe1, 0x9f, 0x89, 0x35, 0x9d, 0x1d, 0x21,
-+ 0x29, 0x9f, 0xf4, 0x99, 0x02, 0x27, 0x0f, 0xa8,
-+ 0x4f, 0x79, 0x94, 0x2b, 0x33, 0x2c, 0xda, 0xa2,
-+ 0x26, 0x39, 0x83, 0x94, 0xef, 0x27, 0xd8, 0x53,
-+ 0x8f, 0x66, 0x0d, 0xe4, 0x41, 0x7d, 0x34, 0xcd,
-+ 0x43, 0x7c, 0x95, 0x0a, 0x53, 0xef, 0x66, 0xda,
-+ 0x7e, 0x9b, 0xf3, 0x93, 0xaf, 0xd0, 0x73, 0x71,
-+ 0xba, 0x40, 0x9b, 0x74, 0xf8, 0xd7, 0xd7, 0x41,
-+ 0x6d, 0xaf, 0x72, 0x9c, 0x8d, 0x21, 0x87, 0x3c,
-+ 0xfd, 0x0a, 0x90, 0xa9, 0x47, 0x96, 0x9e, 0xd3,
-+ 0x88, 0xee, 0x73, 0xcf, 0x66, 0x2f, 0x52, 0x56,
-+ 0x6d, 0xa9, 0x80, 0x4c, 0xe2, 0x6f, 0x62, 0x88,
-+ 0x3f, 0x0e, 0x54, 0x17, 0x48, 0x80, 0x5d, 0xd3,
-+ 0xc3, 0xda, 0x25, 0x3d, 0xa1, 0xc8, 0xcb, 0x9f,
-+ 0x9b, 0x70, 0xb3, 0xa1, 0xeb, 0x04, 0x52, 0xa1,
-+ 0xf2, 0x22, 0x0f, 0xfc, 0xc8, 0x18, 0xfa, 0xf9,
-+ 0x85, 0x9c, 0xf1, 0xac, 0xeb, 0x0c, 0x02, 0x46,
-+ 0x75, 0xd2, 0xf5, 0x2c, 0xe3, 0xd2, 0x59, 0x94,
-+ 0x12, 0xf3, 0x3c, 0xfc, 0xd7, 0x92, 0xfa, 0x36,
-+ 0xba, 0x61, 0x34, 0x38, 0x7c, 0xda, 0x48, 0x3e,
-+ 0x08, 0xc9, 0x39, 0x23, 0x5e, 0x02, 0x2c, 0x1a,
-+ 0x18, 0x7e, 0xb4, 0xd9, 0xfd, 0x9e, 0x40, 0x02,
-+ 0xb1, 0x33, 0x37, 0x32, 0xe7, 0xde, 0xd6, 0xd0,
-+ 0x7c, 0x58, 0x65, 0x4b, 0xf8, 0x34, 0x27, 0x9c,
-+ 0x44, 0xb4, 0xbd, 0xe9, 0xe9, 0x4c, 0x78, 0x7d,
-+ 0x4b, 0x9f, 0xce, 0xb1, 0xcd, 0x47, 0xa5, 0x37,
-+ 0xe5, 0x6d, 0xbd, 0xb9, 0x43, 0x94, 0x0a, 0xd4,
-+ 0xd6, 0xf9, 0x04, 0x5f, 0xb5, 0x66, 0x6c, 0x1a,
-+ 0x35, 0x12, 0xe3, 0x36, 0x28, 0x27, 0x36, 0x58,
-+ 0x01, 0x2b, 0x79, 0xe4, 0xba, 0x6d, 0x10, 0x7d,
-+ 0x65, 0xdf, 0x84, 0x95, 0xf4, 0xd5, 0xb6, 0x8f,
-+ 0x2b, 0x9f, 0x96, 0x00, 0x86, 0x60, 0xf0, 0x21,
-+ 0x76, 0xa8, 0x6a, 0x8c, 0x28, 0x1c, 0xb3, 0x6b,
-+ 0x97, 0xd7, 0xb6, 0x53, 0x2a, 0xcc, 0xab, 0x40,
-+ 0x9d, 0x62, 0x79, 0x58, 0x52, 0xe6, 0x65, 0xb7,
-+ 0xab, 0x55, 0x67, 0x9c, 0x89, 0x7c, 0x03, 0xb0,
-+ 0x73, 0x59, 0xc5, 0x81, 0xf5, 0x18, 0x17, 0x5c,
-+ 0x89, 0xf3, 0x78, 0x35, 0x44, 0x62, 0x78, 0x72,
-+ 0xd0, 0x96, 0xeb, 0x31, 0xe7, 0x87, 0x77, 0x14,
-+ 0x99, 0x51, 0xf2, 0x59, 0x26, 0x9e, 0xb5, 0xa6,
-+ 0x45, 0xfe, 0x6e, 0xbd, 0x07, 0x4c, 0x94, 0x5a,
-+ 0xa5, 0x7d, 0xfc, 0xf1, 0x2b, 0x77, 0xe2, 0xfe,
-+ 0x17, 0xd4, 0x84, 0xa0, 0xac, 0xb5, 0xc7, 0xda,
-+ 0xa9, 0x1a, 0xb6, 0xf3, 0x74, 0x11, 0xb4, 0x9d,
-+ 0xfb, 0x79, 0x2e, 0x04, 0x2d, 0x50, 0x28, 0x83,
-+ 0xbf, 0xc6, 0x52, 0xd3, 0x34, 0xd6, 0xe8, 0x7a,
-+ 0xb6, 0xea, 0xe7, 0xa8, 0x6c, 0x15, 0x1e, 0x2c,
-+ 0x57, 0xbc, 0x48, 0x4e, 0x5f, 0x5c, 0xb6, 0x92,
-+ 0xd2, 0x49, 0x77, 0x81, 0x6d, 0x90, 0x70, 0xae,
-+ 0x98, 0xa1, 0x03, 0x0d, 0x6b, 0xb9, 0x77, 0x14,
-+ 0xf1, 0x4e, 0x23, 0xd3, 0xf8, 0x68, 0xbd, 0xc2,
-+ 0xfe, 0x04, 0xb7, 0x5c, 0xc5, 0x17, 0x60, 0x8f,
-+ 0x65, 0x54, 0xa4, 0x7a, 0x42, 0xdc, 0x18, 0x0d,
-+ 0xb5, 0xcf, 0x0f, 0xd3, 0xc7, 0x91, 0x66, 0x1b,
-+ 0x45, 0x42, 0x27, 0x75, 0x50, 0xe5, 0xee, 0xb8,
-+ 0x7f, 0x33, 0x2c, 0xba, 0x4a, 0x92, 0x4d, 0x2c,
-+ 0x3c, 0xe3, 0x0d, 0x80, 0x01, 0xba, 0x0d, 0x29,
-+ 0xd8, 0x3c, 0xe9, 0x13, 0x16, 0x57, 0xe6, 0xea,
-+ 0x94, 0x52, 0xe7, 0x00, 0x4d, 0x30, 0xb0, 0x0f,
-+ 0x35, 0xb8, 0xb8, 0xa7, 0xb1, 0xb5, 0x3b, 0x44,
-+ 0xe1, 0x2f, 0xfd, 0x88, 0xed, 0x43, 0xe7, 0x52,
-+ 0x10, 0x93, 0xb3, 0x8a, 0x30, 0x6b, 0x0a, 0xf7,
-+ 0x23, 0xc6, 0x50, 0x9d, 0x4a, 0xb0, 0xde, 0xc3,
-+ 0xdc, 0x9b, 0x2f, 0x01, 0x56, 0x36, 0x09, 0xc5,
-+ 0x2f, 0x6b, 0xfe, 0xf1, 0xd8, 0x27, 0x45, 0x03,
-+ 0x30, 0x5e, 0x5c, 0x5b, 0xb4, 0x62, 0x0e, 0x1a,
-+ 0xa9, 0x21, 0x2b, 0x92, 0x94, 0x87, 0x62, 0x57,
-+ 0x4c, 0x10, 0x74, 0x1a, 0xf1, 0x0a, 0xc5, 0x84,
-+ 0x3b, 0x9e, 0x72, 0x02, 0xd7, 0xcc, 0x09, 0x56,
-+ 0xbd, 0x54, 0xc1, 0xf0, 0xc3, 0xe3, 0xb3, 0xf8,
-+ 0xd2, 0x0d, 0x61, 0xcb, 0xef, 0xce, 0x0d, 0x05,
-+ 0xb0, 0x98, 0xd9, 0x8e, 0x4f, 0xf9, 0xbc, 0x93,
-+ 0xa6, 0xea, 0xc8, 0xcf, 0x10, 0x53, 0x4b, 0xf1,
-+ 0xec, 0xfc, 0x89, 0xf9, 0x64, 0xb0, 0x22, 0xbf,
-+ 0x9e, 0x55, 0x46, 0x9f, 0x7c, 0x50, 0x8e, 0x84,
-+ 0x54, 0x20, 0x98, 0xd7, 0x6c, 0x40, 0x1e, 0xdb,
-+ 0x69, 0x34, 0x78, 0x61, 0x24, 0x21, 0x9c, 0x8a,
-+ 0xb3, 0x62, 0x31, 0x8b, 0x6e, 0xf5, 0x2a, 0x35,
-+ 0x86, 0x13, 0xb1, 0x6c, 0x64, 0x2e, 0x41, 0xa5,
-+ 0x05, 0xf2, 0x42, 0xba, 0xd2, 0x3a, 0x0d, 0x8e,
-+ 0x8a, 0x59, 0x94, 0x3c, 0xcf, 0x36, 0x27, 0x82,
-+ 0xc2, 0x45, 0xee, 0x58, 0xcd, 0x88, 0xb4, 0xec,
-+ 0xde, 0xb2, 0x96, 0x0a, 0xaf, 0x38, 0x6f, 0x88,
-+ 0xd7, 0xd8, 0xe1, 0xdf, 0xb9, 0x96, 0xa9, 0x0a,
-+ 0xb1, 0x95, 0x28, 0x86, 0x20, 0xe9, 0x17, 0x49,
-+ 0xa2, 0x29, 0x38, 0xaa, 0xa5, 0xe9, 0x6e, 0xf1,
-+ 0x19, 0x27, 0xc0, 0xd5, 0x2a, 0x22, 0xc3, 0x0b,
-+ 0xdb, 0x7c, 0x73, 0x10, 0xb9, 0xba, 0x89, 0x76,
-+ 0x54, 0xae, 0x7d, 0x71, 0xb3, 0x93, 0xf6, 0x32,
-+ 0xe6, 0x47, 0x43, 0x55, 0xac, 0xa0, 0x0d, 0xc2,
-+ 0x93, 0x27, 0x4a, 0x8e, 0x0e, 0x74, 0x15, 0xc7,
-+ 0x0b, 0x85, 0xd9, 0x0c, 0xa9, 0x30, 0x7a, 0x3e,
-+ 0xea, 0x8f, 0x85, 0x6d, 0x3a, 0x12, 0x4f, 0x72,
-+ 0x69, 0x58, 0x7a, 0x80, 0xbb, 0xb5, 0x97, 0xf3,
-+ 0xcf, 0x70, 0xd2, 0x5d, 0xdd, 0x4d, 0x21, 0x79,
-+ 0x54, 0x4d, 0xe4, 0x05, 0xe8, 0xbd, 0xc2, 0x62,
-+ 0xb1, 0x3b, 0x77, 0x1c, 0xd6, 0x5c, 0xf3, 0xa0,
-+ 0x79, 0x00, 0xa8, 0x6c, 0x29, 0xd9, 0x18, 0x24,
-+ 0x36, 0xa2, 0x46, 0xc0, 0x96, 0x65, 0x7f, 0xbd,
-+ 0x2a, 0xed, 0x36, 0x16, 0x0c, 0xaa, 0x9f, 0xf4,
-+ 0xc5, 0xb4, 0xe2, 0x12, 0xed, 0x69, 0xed, 0x4f,
-+ 0x26, 0x2c, 0x39, 0x52, 0x89, 0x98, 0xe7, 0x2c,
-+ 0x99, 0xa4, 0x9e, 0xa3, 0x9b, 0x99, 0x46, 0x7a,
-+ 0x3a, 0xdc, 0xa8, 0x59, 0xa3, 0xdb, 0xc3, 0x3b,
-+ 0x95, 0x0d, 0x3b, 0x09, 0x6e, 0xee, 0x83, 0x5d,
-+ 0x32, 0x4d, 0xed, 0xab, 0xfa, 0x98, 0x14, 0x4e,
-+ 0xc3, 0x15, 0x45, 0x53, 0x61, 0xc4, 0x93, 0xbd,
-+ 0x90, 0xf4, 0x99, 0x95, 0x4c, 0xe6, 0x76, 0x92,
-+ 0x29, 0x90, 0x46, 0x30, 0x92, 0x69, 0x7d, 0x13,
-+ 0xf2, 0xa5, 0xcd, 0x69, 0x49, 0x44, 0xb2, 0x0f,
-+ 0x63, 0x40, 0x36, 0x5f, 0x09, 0xe2, 0x78, 0xf8,
-+ 0x91, 0xe3, 0xe2, 0xfa, 0x10, 0xf7, 0xc8, 0x24,
-+ 0xa8, 0x89, 0x32, 0x5c, 0x37, 0x25, 0x1d, 0xb2,
-+ 0xea, 0x17, 0x8a, 0x0a, 0xa9, 0x64, 0xc3, 0x7c,
-+ 0x3c, 0x7c, 0xbd, 0xc6, 0x79, 0x34, 0xe7, 0xe2,
-+ 0x85, 0x8e, 0xbf, 0xf8, 0xde, 0x92, 0xa0, 0xae,
-+ 0x20, 0xc4, 0xf6, 0xbb, 0x1f, 0x38, 0x19, 0x0e,
-+ 0xe8, 0x79, 0x9c, 0xa1, 0x23, 0xe9, 0x54, 0x7e,
-+ 0x37, 0x2f, 0xe2, 0x94, 0x32, 0xaf, 0xa0, 0x23,
-+ 0x49, 0xe4, 0xc0, 0xb3, 0xac, 0x00, 0x8f, 0x36,
-+ 0x05, 0xc4, 0xa6, 0x96, 0xec, 0x05, 0x98, 0x4f,
-+ 0x96, 0x67, 0x57, 0x1f, 0x20, 0x86, 0x1b, 0x2d,
-+ 0x69, 0xe4, 0x29, 0x93, 0x66, 0x5f, 0xaf, 0x6b,
-+ 0x88, 0x26, 0x2c, 0x67, 0x02, 0x4b, 0x52, 0xd0,
-+ 0x83, 0x7a, 0x43, 0x1f, 0xc0, 0x71, 0x15, 0x25,
-+ 0x77, 0x65, 0x08, 0x60, 0x11, 0x76, 0x4c, 0x8d,
-+ 0xed, 0xa9, 0x27, 0xc6, 0xb1, 0x2a, 0x2c, 0x6a,
-+ 0x4a, 0x97, 0xf5, 0xc6, 0xb7, 0x70, 0x42, 0xd3,
-+ 0x03, 0xd1, 0x24, 0x95, 0xec, 0x6d, 0xab, 0x38,
-+ 0x72, 0xce, 0xe2, 0x8b, 0x33, 0xd7, 0x51, 0x09,
-+ 0xdc, 0x45, 0xe0, 0x09, 0x96, 0x32, 0xf3, 0xc4,
-+ 0x84, 0xdc, 0x73, 0x73, 0x2d, 0x1b, 0x11, 0x98,
-+ 0xc5, 0x0e, 0x69, 0x28, 0x94, 0xc7, 0xb5, 0x4d,
-+ 0xc8, 0x8a, 0xd0, 0xaa, 0x13, 0x2e, 0x18, 0x74,
-+ 0xdd, 0xd1, 0x1e, 0xf3, 0x90, 0xe8, 0xfc, 0x9a,
-+ 0x72, 0x4a, 0x0e, 0xd1, 0xe4, 0xfb, 0x0d, 0x96,
-+ 0xd1, 0x0c, 0x79, 0x85, 0x1b, 0x1c, 0xfe, 0xe1,
-+ 0x62, 0x8f, 0x7a, 0x73, 0x32, 0xab, 0xc8, 0x18,
-+ 0x69, 0xe3, 0x34, 0x30, 0xdf, 0x13, 0xa6, 0xe5,
-+ 0xe8, 0x0e, 0x67, 0x7f, 0x81, 0x11, 0xb4, 0x60,
-+ 0xc7, 0xbd, 0x79, 0x65, 0x50, 0xdc, 0xc4, 0x5b,
-+ 0xde, 0x39, 0xa4, 0x01, 0x72, 0x63, 0xf3, 0xd1,
-+ 0x64, 0x4e, 0xdf, 0xfc, 0x27, 0x92, 0x37, 0x0d,
-+ 0x57, 0xcd, 0x11, 0x4f, 0x11, 0x04, 0x8e, 0x1d,
-+ 0x16, 0xf7, 0xcd, 0x92, 0x9a, 0x99, 0x30, 0x14,
-+ 0xf1, 0x7c, 0x67, 0x1b, 0x1f, 0x41, 0x0b, 0xe8,
-+ 0x32, 0xe8, 0xb8, 0xc1, 0x4f, 0x54, 0x86, 0x4f,
-+ 0xe5, 0x79, 0x81, 0x73, 0xcd, 0x43, 0x59, 0x68,
-+ 0x73, 0x02, 0x3b, 0x78, 0x21, 0x72, 0x43, 0x00,
-+ 0x49, 0x17, 0xf7, 0x00, 0xaf, 0x68, 0x24, 0x53,
-+ 0x05, 0x0a, 0xc3, 0x33, 0xe0, 0x33, 0x3f, 0x69,
-+ 0xd2, 0x84, 0x2f, 0x0b, 0xed, 0xde, 0x04, 0xf4,
-+ 0x11, 0x94, 0x13, 0x69, 0x51, 0x09, 0x28, 0xde,
-+ 0x57, 0x5c, 0xef, 0xdc, 0x9a, 0x49, 0x1c, 0x17,
-+ 0x97, 0xf3, 0x96, 0xc1, 0x7f, 0x5d, 0x2e, 0x7d,
-+ 0x55, 0xb8, 0xb3, 0x02, 0x09, 0xb3, 0x1f, 0xe7,
-+ 0xc9, 0x8d, 0xa3, 0x36, 0x34, 0x8a, 0x77, 0x13,
-+ 0x30, 0x63, 0x4c, 0xa5, 0xcd, 0xc3, 0xe0, 0x7e,
-+ 0x05, 0xa1, 0x7b, 0x0c, 0xcb, 0x74, 0x47, 0x31,
-+ 0x62, 0x03, 0x43, 0xf1, 0x87, 0xb4, 0xb0, 0x85,
-+ 0x87, 0x8e, 0x4b, 0x25, 0xc7, 0xcf, 0xae, 0x4b,
-+ 0x36, 0x46, 0x3e, 0x62, 0xbc, 0x6f, 0xeb, 0x5f,
-+ 0x73, 0xac, 0xe6, 0x07, 0xee, 0xc1, 0xa1, 0xd6,
-+ 0xc4, 0xab, 0xc9, 0xd6, 0x89, 0x45, 0xe1, 0xf1,
-+ 0x04, 0x4e, 0x1a, 0x6f, 0xbb, 0x4f, 0x3a, 0xa3,
-+ 0xa0, 0xcb, 0xa3, 0x0a, 0xd8, 0x71, 0x35, 0x55,
-+ 0xe4, 0xbc, 0x2e, 0x04, 0x06, 0xe6, 0xff, 0x5b,
-+ 0x1c, 0xc0, 0x11, 0x7c, 0xc5, 0x17, 0xf3, 0x38,
-+ 0xcf, 0xe9, 0xba, 0x0f, 0x0e, 0xef, 0x02, 0xc2,
-+ 0x8d, 0xc6, 0xbc, 0x4b, 0x67, 0x20, 0x95, 0xd7,
-+ 0x2c, 0x45, 0x5b, 0x86, 0x44, 0x8c, 0x6f, 0x2e,
-+ 0x7e, 0x9f, 0x1c, 0x77, 0xba, 0x6b, 0x0e, 0xa3,
-+ 0x69, 0xdc, 0xab, 0x24, 0x57, 0x60, 0x47, 0xc1,
-+ 0xd1, 0xa5, 0x9d, 0x23, 0xe6, 0xb1, 0x37, 0xfe,
-+ 0x93, 0xd2, 0x4c, 0x46, 0xf9, 0x0c, 0xc6, 0xfb,
-+ 0xd6, 0x9d, 0x99, 0x69, 0xab, 0x7a, 0x07, 0x0c,
-+ 0x65, 0xe7, 0xc4, 0x08, 0x96, 0xe2, 0xa5, 0x01,
-+ 0x3f, 0x46, 0x07, 0x05, 0x7e, 0xe8, 0x9a, 0x90,
-+ 0x50, 0xdc, 0xe9, 0x7a, 0xea, 0xa1, 0x39, 0x6e,
-+ 0x66, 0xe4, 0x6f, 0xa5, 0x5f, 0xb2, 0xd9, 0x5b,
-+ 0xf5, 0xdb, 0x2a, 0x32, 0xf0, 0x11, 0x6f, 0x7c,
-+ 0x26, 0x10, 0x8f, 0x3d, 0x80, 0xe9, 0x58, 0xf7,
-+ 0xe0, 0xa8, 0x57, 0xf8, 0xdb, 0x0e, 0xce, 0x99,
-+ 0x63, 0x19, 0x3d, 0xd5, 0xec, 0x1b, 0x77, 0x69,
-+ 0x98, 0xf6, 0xe4, 0x5f, 0x67, 0x17, 0x4b, 0x09,
-+ 0x85, 0x62, 0x82, 0x70, 0x18, 0xe2, 0x9a, 0x78,
-+ 0xe2, 0x62, 0xbd, 0xb4, 0xf1, 0x42, 0xc6, 0xfb,
-+ 0x08, 0xd0, 0xbd, 0xeb, 0x4e, 0x09, 0xf2, 0xc8,
-+ 0x1e, 0xdc, 0x3d, 0x32, 0x21, 0x56, 0x9c, 0x4f,
-+ 0x35, 0xf3, 0x61, 0x06, 0x72, 0x84, 0xc4, 0x32,
-+ 0xf2, 0xf1, 0xfa, 0x0b, 0x2f, 0xc3, 0xdb, 0x02,
-+ 0x04, 0xc2, 0xde, 0x57, 0x64, 0x60, 0x8d, 0xcf,
-+ 0xcb, 0x86, 0x5d, 0x97, 0x3e, 0xb1, 0x9c, 0x01,
-+ 0xd6, 0x28, 0x8f, 0x99, 0xbc, 0x46, 0xeb, 0x05,
-+ 0xaf, 0x7e, 0xb8, 0x21, 0x2a, 0x56, 0x85, 0x1c,
-+ 0xb3, 0x71, 0xa0, 0xde, 0xca, 0x96, 0xf1, 0x78,
-+ 0x49, 0xa2, 0x99, 0x81, 0x80, 0x5c, 0x01, 0xf5,
-+ 0xa0, 0xa2, 0x56, 0x63, 0xe2, 0x70, 0x07, 0xa5,
-+ 0x95, 0xd6, 0x85, 0xeb, 0x36, 0x9e, 0xa9, 0x51,
-+ 0x66, 0x56, 0x5f, 0x1d, 0x02, 0x19, 0xe2, 0xf6,
-+ 0x4f, 0x73, 0x38, 0x09, 0x75, 0x64, 0x48, 0xe0,
-+ 0xf1, 0x7e, 0x0e, 0xe8, 0x9d, 0xf9, 0xed, 0x94,
-+ 0xfe, 0x16, 0x26, 0x62, 0x49, 0x74, 0xf4, 0xb0,
-+ 0xd4, 0xa9, 0x6c, 0xb0, 0xfd, 0x53, 0xe9, 0x81,
-+ 0xe0, 0x7a, 0xbf, 0xcf, 0xb5, 0xc4, 0x01, 0x81,
-+ 0x79, 0x99, 0x77, 0x01, 0x3b, 0xe9, 0xa2, 0xb6,
-+ 0xe6, 0x6a, 0x8a, 0x9e, 0x56, 0x1c, 0x8d, 0x1e,
-+ 0x8f, 0x06, 0x55, 0x2c, 0x6c, 0xdc, 0x92, 0x87,
-+ 0x64, 0x3b, 0x4b, 0x19, 0xa1, 0x13, 0x64, 0x1d,
-+ 0x4a, 0xe9, 0xc0, 0x00, 0xb8, 0x95, 0xef, 0x6b,
-+ 0x1a, 0x86, 0x6d, 0x37, 0x52, 0x02, 0xc2, 0xe0,
-+ 0xc8, 0xbb, 0x42, 0x0c, 0x02, 0x21, 0x4a, 0xc9,
-+ 0xef, 0xa0, 0x54, 0xe4, 0x5e, 0x16, 0x53, 0x81,
-+ 0x70, 0x62, 0x10, 0xaf, 0xde, 0xb8, 0xb5, 0xd3,
-+ 0xe8, 0x5e, 0x6c, 0xc3, 0x8a, 0x3e, 0x18, 0x07,
-+ 0xf2, 0x2f, 0x7d, 0xa7, 0xe1, 0x3d, 0x4e, 0xb4,
-+ 0x26, 0xa7, 0xa3, 0x93, 0x86, 0xb2, 0x04, 0x1e,
-+ 0x53, 0x5d, 0x86, 0xd6, 0xde, 0x65, 0xca, 0xe3,
-+ 0x4e, 0xc1, 0xcf, 0xef, 0xc8, 0x70, 0x1b, 0x83,
-+ 0x13, 0xdd, 0x18, 0x8b, 0x0d, 0x76, 0xd2, 0xf6,
-+ 0x37, 0x7a, 0x93, 0x7a, 0x50, 0x11, 0x9f, 0x96,
-+ 0x86, 0x25, 0xfd, 0xac, 0xdc, 0xbe, 0x18, 0x93,
-+ 0x19, 0x6b, 0xec, 0x58, 0x4f, 0xb9, 0x75, 0xa7,
-+ 0xdd, 0x3f, 0x2f, 0xec, 0xc8, 0x5a, 0x84, 0xab,
-+ 0xd5, 0xe4, 0x8a, 0x07, 0xf6, 0x4d, 0x23, 0xd6,
-+ 0x03, 0xfb, 0x03, 0x6a, 0xea, 0x66, 0xbf, 0xd4,
-+ 0xb1, 0x34, 0xfb, 0x78, 0xe9, 0x55, 0xdc, 0x7c,
-+ 0x3d, 0x9c, 0xe5, 0x9a, 0xac, 0xc3, 0x7a, 0x80,
-+ 0x24, 0x6d, 0xa0, 0xef, 0x25, 0x7c, 0xb7, 0xea,
-+ 0xce, 0x4d, 0x5f, 0x18, 0x60, 0xce, 0x87, 0x22,
-+ 0x66, 0x2f, 0xd5, 0xdd, 0xdd, 0x02, 0x21, 0x75,
-+ 0x82, 0xa0, 0x1f, 0x58, 0xc6, 0xd3, 0x62, 0xf7,
-+ 0x32, 0xd8, 0xaf, 0x1e, 0x07, 0x77, 0x51, 0x96,
-+ 0xd5, 0x6b, 0x1e, 0x7e, 0x80, 0x02, 0xe8, 0x67,
-+ 0xea, 0x17, 0x0b, 0x10, 0xd2, 0x3f, 0x28, 0x25,
-+ 0x4f, 0x05, 0x77, 0x02, 0x14, 0x69, 0xf0, 0x2c,
-+ 0xbe, 0x0c, 0xf1, 0x74, 0x30, 0xd1, 0xb9, 0x9b,
-+ 0xfc, 0x8c, 0xbb, 0x04, 0x16, 0xd9, 0xba, 0xc3,
-+ 0xbc, 0x91, 0x8a, 0xc4, 0x30, 0xa4, 0xb0, 0x12,
-+ 0x4c, 0x21, 0x87, 0xcb, 0xc9, 0x1d, 0x16, 0x96,
-+ 0x07, 0x6f, 0x23, 0x54, 0xb9, 0x6f, 0x79, 0xe5,
-+ 0x64, 0xc0, 0x64, 0xda, 0xb1, 0xae, 0xdd, 0x60,
-+ 0x6c, 0x1a, 0x9d, 0xd3, 0x04, 0x8e, 0x45, 0xb0,
-+ 0x92, 0x61, 0xd0, 0x48, 0x81, 0xed, 0x5e, 0x1d,
-+ 0xa0, 0xc9, 0xa4, 0x33, 0xc7, 0x13, 0x51, 0x5d,
-+ 0x7f, 0x83, 0x73, 0xb6, 0x70, 0x18, 0x65, 0x3e,
-+ 0x2f, 0x0e, 0x7a, 0x12, 0x39, 0x98, 0xab, 0xd8,
-+ 0x7e, 0x6f, 0xa3, 0xd1, 0xba, 0x56, 0xad, 0xbd,
-+ 0xf0, 0x03, 0x01, 0x1c, 0x85, 0x35, 0x9f, 0xeb,
-+ 0x19, 0x63, 0xa1, 0xaf, 0xfe, 0x2d, 0x35, 0x50,
-+ 0x39, 0xa0, 0x65, 0x7c, 0x95, 0x7e, 0x6b, 0xfe,
-+ 0xc1, 0xac, 0x07, 0x7c, 0x98, 0x4f, 0xbe, 0x57,
-+ 0xa7, 0x22, 0xec, 0xe2, 0x7e, 0x29, 0x09, 0x53,
-+ 0xe8, 0xbf, 0xb4, 0x7e, 0x3f, 0x8f, 0xfc, 0x14,
-+ 0xce, 0x54, 0xf9, 0x18, 0x58, 0xb5, 0xff, 0x44,
-+ 0x05, 0x9d, 0xce, 0x1b, 0xb6, 0x82, 0x23, 0xc8,
-+ 0x2e, 0xbc, 0x69, 0xbb, 0x4a, 0x29, 0x0f, 0x65,
-+ 0x94, 0xf0, 0x63, 0x06, 0x0e, 0xef, 0x8c, 0xbd,
-+ 0xff, 0xfd, 0xb0, 0x21, 0x6e, 0x57, 0x05, 0x75,
-+ 0xda, 0xd5, 0xc4, 0xeb, 0x8d, 0x32, 0xf7, 0x50,
-+ 0xd3, 0x6f, 0x22, 0xed, 0x5f, 0x8e, 0xa2, 0x5b,
-+ 0x80, 0x8c, 0xc8, 0x78, 0x40, 0x24, 0x4b, 0x89,
-+ 0x30, 0xce, 0x7a, 0x97, 0x0e, 0xc4, 0xaf, 0xef,
-+ 0x9b, 0xb4, 0xcd, 0x66, 0x74, 0x14, 0x04, 0x2b,
-+ 0xf7, 0xce, 0x0b, 0x1c, 0x6e, 0xc2, 0x78, 0x8c,
-+ 0xca, 0xc5, 0xd0, 0x1c, 0x95, 0x4a, 0x91, 0x2d,
-+ 0xa7, 0x20, 0xeb, 0x86, 0x52, 0xb7, 0x67, 0xd8,
-+ 0x0c, 0xd6, 0x04, 0x14, 0xde, 0x51, 0x74, 0x75,
-+ 0xe7, 0x11, 0xb4, 0x87, 0xa3, 0x3d, 0x2d, 0xad,
-+ 0x4f, 0xef, 0xa0, 0x0f, 0x70, 0x00, 0x6d, 0x13,
-+ 0x19, 0x1d, 0x41, 0x50, 0xe9, 0xd8, 0xf0, 0x32,
-+ 0x71, 0xbc, 0xd3, 0x11, 0xf2, 0xac, 0xbe, 0xaf,
-+ 0x75, 0x46, 0x65, 0x4e, 0x07, 0x34, 0x37, 0xa3,
-+ 0x89, 0xfe, 0x75, 0xd4, 0x70, 0x4c, 0xc6, 0x3f,
-+ 0x69, 0x24, 0x0e, 0x38, 0x67, 0x43, 0x8c, 0xde,
-+ 0x06, 0xb5, 0xb8, 0xe7, 0xc4, 0xf0, 0x41, 0x8f,
-+ 0xf0, 0xbd, 0x2f, 0x0b, 0xb9, 0x18, 0xf8, 0xde,
-+ 0x64, 0xb1, 0xdb, 0xee, 0x00, 0x50, 0x77, 0xe1,
-+ 0xc7, 0xff, 0xa6, 0xfa, 0xdd, 0x70, 0xf4, 0xe3,
-+ 0x93, 0xe9, 0x77, 0x35, 0x3d, 0x4b, 0x2f, 0x2b,
-+ 0x6d, 0x55, 0xf0, 0xfc, 0x88, 0x54, 0x4e, 0x89,
-+ 0xc1, 0x8a, 0x23, 0x31, 0x2d, 0x14, 0x2a, 0xb8,
-+ 0x1b, 0x15, 0xdd, 0x9e, 0x6e, 0x7b, 0xda, 0x05,
-+ 0x91, 0x7d, 0x62, 0x64, 0x96, 0x72, 0xde, 0xfc,
-+ 0xc1, 0xec, 0xf0, 0x23, 0x51, 0x6f, 0xdb, 0x5b,
-+ 0x1d, 0x08, 0x57, 0xce, 0x09, 0xb8, 0xf6, 0xcd,
-+ 0x8d, 0x95, 0xf2, 0x20, 0xbf, 0x0f, 0x20, 0x57,
-+ 0x98, 0x81, 0x84, 0x4f, 0x15, 0x5c, 0x76, 0xe7,
-+ 0x3e, 0x0a, 0x3a, 0x6c, 0xc4, 0x8a, 0xbe, 0x78,
-+ 0x74, 0x77, 0xc3, 0x09, 0x4b, 0x5d, 0x48, 0xe4,
-+ 0xc8, 0xcb, 0x0b, 0xea, 0x17, 0x28, 0xcf, 0xcf,
-+ 0x31, 0x32, 0x44, 0xa4, 0xe5, 0x0e, 0x1a, 0x98,
-+ 0x94, 0xc4, 0xf0, 0xff, 0xae, 0x3e, 0x44, 0xe8,
-+ 0xa5, 0xb3, 0xb5, 0x37, 0x2f, 0xe8, 0xaf, 0x6f,
-+ 0x28, 0xc1, 0x37, 0x5f, 0x31, 0xd2, 0xb9, 0x33,
-+ 0xb1, 0xb2, 0x52, 0x94, 0x75, 0x2c, 0x29, 0x59,
-+ 0x06, 0xc2, 0x25, 0xe8, 0x71, 0x65, 0x4e, 0xed,
-+ 0xc0, 0x9c, 0xb1, 0xbb, 0x25, 0xdc, 0x6c, 0xe7,
-+ 0x4b, 0xa5, 0x7a, 0x54, 0x7a, 0x60, 0xff, 0x7a,
-+ 0xe0, 0x50, 0x40, 0x96, 0x35, 0x63, 0xe4, 0x0b,
-+ 0x76, 0xbd, 0xa4, 0x65, 0x00, 0x1b, 0x57, 0x88,
-+ 0xae, 0xed, 0x39, 0x88, 0x42, 0x11, 0x3c, 0xed,
-+ 0x85, 0x67, 0x7d, 0xb9, 0x68, 0x82, 0xe9, 0x43,
-+ 0x3c, 0x47, 0x53, 0xfa, 0xe8, 0xf8, 0x9f, 0x1f,
-+ 0x9f, 0xef, 0x0f, 0xf7, 0x30, 0xd9, 0x30, 0x0e,
-+ 0xb9, 0x9f, 0x69, 0x18, 0x2f, 0x7e, 0xf8, 0xf8,
-+ 0xf8, 0x8c, 0x0f, 0xd4, 0x02, 0x4d, 0xea, 0xcd,
-+ 0x0a, 0x9c, 0x6f, 0x71, 0x6d, 0x5a, 0x4c, 0x60,
-+ 0xce, 0x20, 0x56, 0x32, 0xc6, 0xc5, 0x99, 0x1f,
-+ 0x09, 0xe6, 0x4e, 0x18, 0x1a, 0x15, 0x13, 0xa8,
-+ 0x7d, 0xb1, 0x6b, 0xc0, 0xb2, 0x6d, 0xf8, 0x26,
-+ 0x66, 0xf8, 0x3d, 0x18, 0x74, 0x70, 0x66, 0x7a,
-+ 0x34, 0x17, 0xde, 0xba, 0x47, 0xf1, 0x06, 0x18,
-+ 0xcb, 0xaf, 0xeb, 0x4a, 0x1e, 0x8f, 0xa7, 0x77,
-+ 0xe0, 0x3b, 0x78, 0x62, 0x66, 0xc9, 0x10, 0xea,
-+ 0x1f, 0xb7, 0x29, 0x0a, 0x45, 0xa1, 0x1d, 0x1e,
-+ 0x1d, 0xe2, 0x65, 0x61, 0x50, 0x9c, 0xd7, 0x05,
-+ 0xf2, 0x0b, 0x5b, 0x12, 0x61, 0x02, 0xc8, 0xe5,
-+ 0x63, 0x4f, 0x20, 0x0c, 0x07, 0x17, 0x33, 0x5e,
-+ 0x03, 0x9a, 0x53, 0x0f, 0x2e, 0x55, 0xfe, 0x50,
-+ 0x43, 0x7d, 0xd0, 0xb6, 0x7e, 0x5a, 0xda, 0xae,
-+ 0x58, 0xef, 0x15, 0xa9, 0x83, 0xd9, 0x46, 0xb1,
-+ 0x42, 0xaa, 0xf5, 0x02, 0x6c, 0xce, 0x92, 0x06,
-+ 0x1b, 0xdb, 0x66, 0x45, 0x91, 0x79, 0xc2, 0x2d,
-+ 0xe6, 0x53, 0xd3, 0x14, 0xfd, 0xbb, 0x44, 0x63,
-+ 0xc6, 0xd7, 0x3d, 0x7a, 0x0c, 0x75, 0x78, 0x9d,
-+ 0x5c, 0xa6, 0x39, 0xb3, 0xe5, 0x63, 0xca, 0x8b,
-+ 0xfe, 0xd3, 0xef, 0x60, 0x83, 0xf6, 0x8e, 0x70,
-+ 0xb6, 0x67, 0xc7, 0x77, 0xed, 0x23, 0xef, 0x4c,
-+ 0xf0, 0xed, 0x2d, 0x07, 0x59, 0x6f, 0xc1, 0x01,
-+ 0x34, 0x37, 0x08, 0xab, 0xd9, 0x1f, 0x09, 0xb1,
-+ 0xce, 0x5b, 0x17, 0xff, 0x74, 0xf8, 0x9c, 0xd5,
-+ 0x2c, 0x56, 0x39, 0x79, 0x0f, 0x69, 0x44, 0x75,
-+ 0x58, 0x27, 0x01, 0xc4, 0xbf, 0xa7, 0xa1, 0x1d,
-+ 0x90, 0x17, 0x77, 0x86, 0x5a, 0x3f, 0xd9, 0xd1,
-+ 0x0e, 0xa0, 0x10, 0xf8, 0xec, 0x1e, 0xa5, 0x7f,
-+ 0x5e, 0x36, 0xd1, 0xe3, 0x04, 0x2c, 0x70, 0xf7,
-+ 0x8e, 0xc0, 0x98, 0x2f, 0x6c, 0x94, 0x2b, 0x41,
-+ 0xb7, 0x60, 0x00, 0xb7, 0x2e, 0xb8, 0x02, 0x8d,
-+ 0xb8, 0xb0, 0xd3, 0x86, 0xba, 0x1d, 0xd7, 0x90,
-+ 0xd6, 0xb6, 0xe1, 0xfc, 0xd7, 0xd8, 0x28, 0x06,
-+ 0x63, 0x9b, 0xce, 0x61, 0x24, 0x79, 0xc0, 0x70,
-+ 0x52, 0xd0, 0xb6, 0xd4, 0x28, 0x95, 0x24, 0x87,
-+ 0x03, 0x1f, 0xb7, 0x9a, 0xda, 0xa3, 0xfb, 0x52,
-+ 0x5b, 0x68, 0xe7, 0x4c, 0x8c, 0x24, 0xe1, 0x42,
-+ 0xf7, 0xd5, 0xfd, 0xad, 0x06, 0x32, 0x9f, 0xba,
-+ 0xc1, 0xfc, 0xdd, 0xc6, 0xfc, 0xfc, 0xb3, 0x38,
-+ 0x74, 0x56, 0x58, 0x40, 0x02, 0x37, 0x52, 0x2c,
-+ 0x55, 0xcc, 0xb3, 0x9e, 0x7a, 0xe9, 0xd4, 0x38,
-+ 0x41, 0x5e, 0x0c, 0x35, 0xe2, 0x11, 0xd1, 0x13,
-+ 0xf8, 0xb7, 0x8d, 0x72, 0x6b, 0x22, 0x2a, 0xb0,
-+ 0xdb, 0x08, 0xba, 0x35, 0xb9, 0x3f, 0xc8, 0xd3,
-+ 0x24, 0x90, 0xec, 0x58, 0xd2, 0x09, 0xc7, 0x2d,
-+ 0xed, 0x38, 0x80, 0x36, 0x72, 0x43, 0x27, 0x49,
-+ 0x4a, 0x80, 0x8a, 0xa2, 0xe8, 0xd3, 0xda, 0x30,
-+ 0x7d, 0xb6, 0x82, 0x37, 0x86, 0x92, 0x86, 0x3e,
-+ 0x08, 0xb2, 0x28, 0x5a, 0x55, 0x44, 0x24, 0x7d,
-+ 0x40, 0x48, 0x8a, 0xb6, 0x89, 0x58, 0x08, 0xa0,
-+ 0xd6, 0x6d, 0x3a, 0x17, 0xbf, 0xf6, 0x54, 0xa2,
-+ 0xf5, 0xd3, 0x8c, 0x0f, 0x78, 0x12, 0x57, 0x8b,
-+ 0xd5, 0xc2, 0xfd, 0x58, 0x5b, 0x7f, 0x38, 0xe3,
-+ 0xcc, 0xb7, 0x7c, 0x48, 0xb3, 0x20, 0xe8, 0x81,
-+ 0x14, 0x32, 0x45, 0x05, 0xe0, 0xdb, 0x9f, 0x75,
-+ 0x85, 0xb4, 0x6a, 0xfc, 0x95, 0xe3, 0x54, 0x22,
-+ 0x12, 0xee, 0x30, 0xfe, 0xd8, 0x30, 0xef, 0x34,
-+ 0x50, 0xab, 0x46, 0x30, 0x98, 0x2f, 0xb7, 0xc0,
-+ 0x15, 0xa2, 0x83, 0xb6, 0xf2, 0x06, 0x21, 0xa2,
-+ 0xc3, 0x26, 0x37, 0x14, 0xd1, 0x4d, 0xb5, 0x10,
-+ 0x52, 0x76, 0x4d, 0x6a, 0xee, 0xb5, 0x2b, 0x15,
-+ 0xb7, 0xf9, 0x51, 0xe8, 0x2a, 0xaf, 0xc7, 0xfa,
-+ 0x77, 0xaf, 0xb0, 0x05, 0x4d, 0xd1, 0x68, 0x8e,
-+ 0x74, 0x05, 0x9f, 0x9d, 0x93, 0xa5, 0x3e, 0x7f,
-+ 0x4e, 0x5f, 0x9d, 0xcb, 0x09, 0xc7, 0x83, 0xe3,
-+ 0x02, 0x9d, 0x27, 0x1f, 0xef, 0x85, 0x05, 0x8d,
-+ 0xec, 0x55, 0x88, 0x0f, 0x0d, 0x7c, 0x4c, 0xe8,
-+ 0xa1, 0x75, 0xa0, 0xd8, 0x06, 0x47, 0x14, 0xef,
-+ 0xaa, 0x61, 0xcf, 0x26, 0x15, 0xad, 0xd8, 0xa3,
-+ 0xaa, 0x75, 0xf2, 0x78, 0x4a, 0x5a, 0x61, 0xdf,
-+ 0x8b, 0xc7, 0x04, 0xbc, 0xb2, 0x32, 0xd2, 0x7e,
-+ 0x42, 0xee, 0xb4, 0x2f, 0x51, 0xff, 0x7b, 0x2e,
-+ 0xd3, 0x02, 0xe8, 0xdc, 0x5d, 0x0d, 0x50, 0xdc,
-+ 0xae, 0xb7, 0x46, 0xf9, 0xa8, 0xe6, 0xd0, 0x16,
-+ 0xcc, 0xe6, 0x2c, 0x81, 0xc7, 0xad, 0xe9, 0xf0,
-+ 0x05, 0x72, 0x6d, 0x3d, 0x0a, 0x7a, 0xa9, 0x02,
-+ 0xac, 0x82, 0x93, 0x6e, 0xb6, 0x1c, 0x28, 0xfc,
-+ 0x44, 0x12, 0xfb, 0x73, 0x77, 0xd4, 0x13, 0x39,
-+ 0x29, 0x88, 0x8a, 0xf3, 0x5c, 0xa6, 0x36, 0xa0,
-+ 0x2a, 0xed, 0x7e, 0xb1, 0x1d, 0xd6, 0x4c, 0x6b,
-+ 0x41, 0x01, 0x18, 0x5d, 0x5d, 0x07, 0x97, 0xa6,
-+ 0x4b, 0xef, 0x31, 0x18, 0xea, 0xac, 0xb1, 0x84,
-+ 0x21, 0xed, 0xda, 0x86,
-+ },
-+ .rlen = 4100,
-+ },
-+};
++ if (rq == &q->pre_flush_rq)
++ return QUEUE_ORDSEQ_PREFLUSH;
++ if (rq == &q->bar_rq)
++ return QUEUE_ORDSEQ_BAR;
++ if (rq == &q->post_flush_rq)
++ return QUEUE_ORDSEQ_POSTFLUSH;
+
-+static struct cipher_testvec aes_ctr_dec_tv_template[] = {
-+ { /* From RFC 3686 */
-+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
-+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
-+ 0x00, 0x00, 0x00, 0x30 },
-+ .klen = 20,
-+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-+ .input = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
-+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
-+ .ilen = 16,
-+ .result = { "Single block msg" },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
-+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
-+ 0x00, 0x6c, 0xb6, 0xdb },
-+ .klen = 20,
-+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
-+ .input = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
-+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
-+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
-+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
-+ .ilen = 32,
-+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
-+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
-+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
-+ 0x00, 0x00, 0x00, 0x48 },
-+ .klen = 28,
-+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
-+ .input = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
-+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
-+ .ilen = 16,
-+ .result = { "Single block msg" },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
-+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
-+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
-+ 0x00, 0x96, 0xb0, 0x3b },
-+ .klen = 28,
-+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
-+ .input = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
-+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
-+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
-+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
-+ .ilen = 32,
-+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
-+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
-+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
-+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
-+ 0x00, 0x00, 0x00, 0x60 },
-+ .klen = 36,
-+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
-+ .input = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
-+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
-+ .ilen = 16,
-+ .result = { "Single block msg" },
-+ .rlen = 16,
-+ }, {
-+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
-+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
-+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
-+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
-+ 0x00, 0xfa, 0xac, 0x24 },
-+ .klen = 36,
-+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
-+ .input = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
-+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
-+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
-+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
-+ .ilen = 32,
-+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
-+ .rlen = 32,
-+ },
-+};
++ /*
++ * !fs requests don't need to follow barrier ordering. Always
++ * put them at the front. This fixes the following deadlock.
++ *
++ * http://thread.gmane.org/gmane.linux.kernel/537473
++ */
++ if (!blk_fs_request(rq))
++ return QUEUE_ORDSEQ_DRAIN;
+
-+static struct aead_testvec aes_gcm_enc_tv_template[] = {
-+ { /* From McGrew & Viega - http://citeseer.ist.psu.edu/656989.html */
-+ .klen = 16,
-+ .result = { 0x58, 0xe2, 0xfc, 0xce, 0xfa, 0x7e, 0x30, 0x61,
-+ 0x36, 0x7f, 0x1d, 0x57, 0xa4, 0xe7, 0x45, 0x5a },
-+ .rlen = 16,
-+ }, {
-+ .klen = 16,
-+ .ilen = 16,
-+ .result = { 0x03, 0x88, 0xda, 0xce, 0x60, 0xb6, 0xa3, 0x92,
-+ 0xf3, 0x28, 0xc2, 0xb9, 0x71, 0xb2, 0xfe, 0x78,
-+ 0xab, 0x6e, 0x47, 0xd4, 0x2c, 0xec, 0x13, 0xbd,
-+ 0xf5, 0x3a, 0x67, 0xb2, 0x12, 0x57, 0xbd, 0xdf },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 16,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
-+ .ilen = 64,
-+ .result = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
-+ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
-+ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
-+ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
-+ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
-+ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
-+ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
-+ 0x3d, 0x58, 0xe0, 0x91, 0x47, 0x3f, 0x59, 0x85,
-+ 0x4d, 0x5c, 0x2a, 0xf3, 0x27, 0xcd, 0x64, 0xa6,
-+ 0x2c, 0xf3, 0x5a, 0xbd, 0x2b, 0xa6, 0xfa, 0xb4 },
-+ .rlen = 80,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 16,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39 },
-+ .ilen = 60,
-+ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xab, 0xad, 0xda, 0xd2 },
-+ .alen = 20,
-+ .result = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
-+ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
-+ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
-+ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
-+ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
-+ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
-+ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
-+ 0x3d, 0x58, 0xe0, 0x91,
-+ 0x5b, 0xc9, 0x4f, 0xbc, 0x32, 0x21, 0xa5, 0xdb,
-+ 0x94, 0xfa, 0xe9, 0x5a, 0xe7, 0x12, 0x1a, 0x47 },
-+ .rlen = 76,
-+ }, {
-+ .klen = 24,
-+ .result = { 0xcd, 0x33, 0xb2, 0x8a, 0xc7, 0x73, 0xf7, 0x4b,
-+ 0xa0, 0x0e, 0xd1, 0xf3, 0x12, 0x57, 0x24, 0x35 },
-+ .rlen = 16,
-+ }, {
-+ .klen = 24,
-+ .ilen = 16,
-+ .result = { 0x98, 0xe7, 0x24, 0x7c, 0x07, 0xf0, 0xfe, 0x41,
-+ 0x1c, 0x26, 0x7e, 0x43, 0x84, 0xb0, 0xf6, 0x00,
-+ 0x2f, 0xf5, 0x8d, 0x80, 0x03, 0x39, 0x27, 0xab,
-+ 0x8e, 0xf4, 0xd4, 0x58, 0x75, 0x14, 0xf0, 0xfb },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
-+ .klen = 24,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
-+ .ilen = 64,
-+ .result = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
-+ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
-+ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
-+ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
-+ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
-+ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
-+ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
-+ 0xcc, 0xda, 0x27, 0x10, 0xac, 0xad, 0xe2, 0x56,
-+ 0x99, 0x24, 0xa7, 0xc8, 0x58, 0x73, 0x36, 0xbf,
-+ 0xb1, 0x18, 0x02, 0x4d, 0xb8, 0x67, 0x4a, 0x14 },
-+ .rlen = 80,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
-+ .klen = 24,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39 },
-+ .ilen = 60,
-+ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xab, 0xad, 0xda, 0xd2 },
-+ .alen = 20,
-+ .result = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
-+ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
-+ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
-+ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
-+ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
-+ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
-+ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
-+ 0xcc, 0xda, 0x27, 0x10,
-+ 0x25, 0x19, 0x49, 0x8e, 0x80, 0xf1, 0x47, 0x8f,
-+ 0x37, 0xba, 0x55, 0xbd, 0x6d, 0x27, 0x61, 0x8c },
-+ .rlen = 76,
-+ .np = 2,
-+ .tap = { 32, 28 },
-+ .anp = 2,
-+ .atap = { 8, 12 }
-+ }, {
-+ .klen = 32,
-+ .result = { 0x53, 0x0f, 0x8a, 0xfb, 0xc7, 0x45, 0x36, 0xb9,
-+ 0xa9, 0x63, 0xb4, 0xf1, 0xc4, 0xcb, 0x73, 0x8b },
-+ .rlen = 16,
-+ }
-+};
++ if ((rq->cmd_flags & REQ_ORDERED_COLOR) ==
++ (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR))
++ return QUEUE_ORDSEQ_DRAIN;
++ else
++ return QUEUE_ORDSEQ_DONE;
++}
+
-+static struct aead_testvec aes_gcm_dec_tv_template[] = {
-+ { /* From McGrew & Viega - http://citeseer.ist.psu.edu/656989.html */
-+ .klen = 32,
-+ .input = { 0xce, 0xa7, 0x40, 0x3d, 0x4d, 0x60, 0x6b, 0x6e,
-+ 0x07, 0x4e, 0xc5, 0xd3, 0xba, 0xf3, 0x9d, 0x18,
-+ 0xd0, 0xd1, 0xc8, 0xa7, 0x99, 0x99, 0x6b, 0xf0,
-+ 0x26, 0x5b, 0x98, 0xb5, 0xd4, 0x8a, 0xb9, 0x19 },
-+ .ilen = 32,
-+ .rlen = 16,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 32,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x52, 0x2d, 0xc1, 0xf0, 0x99, 0x56, 0x7d, 0x07,
-+ 0xf4, 0x7f, 0x37, 0xa3, 0x2a, 0x84, 0x42, 0x7d,
-+ 0x64, 0x3a, 0x8c, 0xdc, 0xbf, 0xe5, 0xc0, 0xc9,
-+ 0x75, 0x98, 0xa2, 0xbd, 0x25, 0x55, 0xd1, 0xaa,
-+ 0x8c, 0xb0, 0x8e, 0x48, 0x59, 0x0d, 0xbb, 0x3d,
-+ 0xa7, 0xb0, 0x8b, 0x10, 0x56, 0x82, 0x88, 0x38,
-+ 0xc5, 0xf6, 0x1e, 0x63, 0x93, 0xba, 0x7a, 0x0a,
-+ 0xbc, 0xc9, 0xf6, 0x62, 0x89, 0x80, 0x15, 0xad,
-+ 0xb0, 0x94, 0xda, 0xc5, 0xd9, 0x34, 0x71, 0xbd,
-+ 0xec, 0x1a, 0x50, 0x22, 0x70, 0xe3, 0xcc, 0x6c },
-+ .ilen = 80,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
-+ .rlen = 64,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 32,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x52, 0x2d, 0xc1, 0xf0, 0x99, 0x56, 0x7d, 0x07,
-+ 0xf4, 0x7f, 0x37, 0xa3, 0x2a, 0x84, 0x42, 0x7d,
-+ 0x64, 0x3a, 0x8c, 0xdc, 0xbf, 0xe5, 0xc0, 0xc9,
-+ 0x75, 0x98, 0xa2, 0xbd, 0x25, 0x55, 0xd1, 0xaa,
-+ 0x8c, 0xb0, 0x8e, 0x48, 0x59, 0x0d, 0xbb, 0x3d,
-+ 0xa7, 0xb0, 0x8b, 0x10, 0x56, 0x82, 0x88, 0x38,
-+ 0xc5, 0xf6, 0x1e, 0x63, 0x93, 0xba, 0x7a, 0x0a,
-+ 0xbc, 0xc9, 0xf6, 0x62,
-+ 0x76, 0xfc, 0x6e, 0xce, 0x0f, 0x4e, 0x17, 0x68,
-+ 0xcd, 0xdf, 0x88, 0x53, 0xbb, 0x2d, 0x55, 0x1b },
-+ .ilen = 76,
-+ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xab, 0xad, 0xda, 0xd2 },
-+ .alen = 20,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39 },
-+ .rlen = 60,
-+ .np = 2,
-+ .tap = { 48, 28 },
-+ .anp = 3,
-+ .atap = { 8, 8, 4 }
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 16,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
-+ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
-+ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
-+ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
-+ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
-+ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
-+ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
-+ 0x3d, 0x58, 0xe0, 0x91, 0x47, 0x3f, 0x59, 0x85,
-+ 0x4d, 0x5c, 0x2a, 0xf3, 0x27, 0xcd, 0x64, 0xa6,
-+ 0x2c, 0xf3, 0x5a, 0xbd, 0x2b, 0xa6, 0xfa, 0xb4 },
-+ .ilen = 80,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
-+ .rlen = 64,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
-+ .klen = 16,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
-+ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
-+ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
-+ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
-+ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
-+ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
-+ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
-+ 0x3d, 0x58, 0xe0, 0x91,
-+ 0x5b, 0xc9, 0x4f, 0xbc, 0x32, 0x21, 0xa5, 0xdb,
-+ 0x94, 0xfa, 0xe9, 0x5a, 0xe7, 0x12, 0x1a, 0x47 },
-+ .ilen = 76,
-+ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xab, 0xad, 0xda, 0xd2 },
-+ .alen = 20,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39 },
-+ .rlen = 60,
-+ }, {
-+ .klen = 24,
-+ .input = { 0x98, 0xe7, 0x24, 0x7c, 0x07, 0xf0, 0xfe, 0x41,
-+ 0x1c, 0x26, 0x7e, 0x43, 0x84, 0xb0, 0xf6, 0x00,
-+ 0x2f, 0xf5, 0x8d, 0x80, 0x03, 0x39, 0x27, 0xab,
-+ 0x8e, 0xf4, 0xd4, 0x58, 0x75, 0x14, 0xf0, 0xfb },
-+ .ilen = 32,
-+ .rlen = 16,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
-+ .klen = 24,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
-+ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
-+ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
-+ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
-+ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
-+ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
-+ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
-+ 0xcc, 0xda, 0x27, 0x10, 0xac, 0xad, 0xe2, 0x56,
-+ 0x99, 0x24, 0xa7, 0xc8, 0x58, 0x73, 0x36, 0xbf,
-+ 0xb1, 0x18, 0x02, 0x4d, 0xb8, 0x67, 0x4a, 0x14 },
-+ .ilen = 80,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
-+ .rlen = 64,
-+ }, {
-+ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
-+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
-+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
-+ .klen = 24,
-+ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
-+ 0xde, 0xca, 0xf8, 0x88 },
-+ .input = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
-+ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
-+ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
-+ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
-+ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
-+ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
-+ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
-+ 0xcc, 0xda, 0x27, 0x10,
-+ 0x25, 0x19, 0x49, 0x8e, 0x80, 0xf1, 0x47, 0x8f,
-+ 0x37, 0xba, 0x55, 0xbd, 0x6d, 0x27, 0x61, 0x8c },
-+ .ilen = 76,
-+ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
-+ 0xab, 0xad, 0xda, 0xd2 },
-+ .alen = 20,
-+ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
-+ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
-+ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
-+ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
-+ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
-+ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
-+ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
-+ 0xba, 0x63, 0x7b, 0x39 },
-+ .rlen = 60,
++void blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
++{
++ struct request *rq;
++
++ if (error && !q->orderr)
++ q->orderr = error;
++
++ BUG_ON(q->ordseq & seq);
++ q->ordseq |= seq;
++
++ if (blk_ordered_cur_seq(q) != QUEUE_ORDSEQ_DONE)
++ return;
++
++ /*
++ * Okay, sequence complete.
++ */
++ q->ordseq = 0;
++ rq = q->orig_bar_rq;
++
++ if (__blk_end_request(rq, q->orderr, blk_rq_bytes(rq)))
++ BUG();
++}
++
++static void pre_flush_end_io(struct request *rq, int error)
++{
++ elv_completed_request(rq->q, rq);
++ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_PREFLUSH, error);
++}
++
++static void bar_end_io(struct request *rq, int error)
++{
++ elv_completed_request(rq->q, rq);
++ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_BAR, error);
++}
++
++static void post_flush_end_io(struct request *rq, int error)
++{
++ elv_completed_request(rq->q, rq);
++ blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error);
++}
++
++static void queue_flush(struct request_queue *q, unsigned which)
++{
++ struct request *rq;
++ rq_end_io_fn *end_io;
++
++ if (which == QUEUE_ORDERED_PREFLUSH) {
++ rq = &q->pre_flush_rq;
++ end_io = pre_flush_end_io;
++ } else {
++ rq = &q->post_flush_rq;
++ end_io = post_flush_end_io;
+ }
-+};
+
-+static struct aead_testvec aes_ccm_enc_tv_template[] = {
-+ { /* From RFC 3610 */
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x03, 0x02, 0x01, 0x00,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
-+ .alen = 8,
-+ .input = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e },
-+ .ilen = 23,
-+ .result = { 0x58, 0x8c, 0x97, 0x9a, 0x61, 0xc6, 0x63, 0xd2,
-+ 0xf0, 0x66, 0xd0, 0xc2, 0xc0, 0xf9, 0x89, 0x80,
-+ 0x6d, 0x5f, 0x6b, 0x61, 0xda, 0xc3, 0x84, 0x17,
-+ 0xe8, 0xd1, 0x2c, 0xfd, 0xf9, 0x26, 0xe0 },
-+ .rlen = 31,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x07, 0x06, 0x05, 0x04,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b },
-+ .alen = 12,
-+ .input = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
-+ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
-+ 0x1c, 0x1d, 0x1e, 0x1f },
-+ .ilen = 20,
-+ .result = { 0xdc, 0xf1, 0xfb, 0x7b, 0x5d, 0x9e, 0x23, 0xfb,
-+ 0x9d, 0x4e, 0x13, 0x12, 0x53, 0x65, 0x8a, 0xd8,
-+ 0x6e, 0xbd, 0xca, 0x3e, 0x51, 0xe8, 0x3f, 0x07,
-+ 0x7d, 0x9c, 0x2d, 0x93 },
-+ .rlen = 28,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0b, 0x0a, 0x09, 0x08,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
-+ .alen = 8,
-+ .input = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ 0x20 },
-+ .ilen = 25,
-+ .result = { 0x82, 0x53, 0x1a, 0x60, 0xcc, 0x24, 0x94, 0x5a,
-+ 0x4b, 0x82, 0x79, 0x18, 0x1a, 0xb5, 0xc8, 0x4d,
-+ 0xf2, 0x1c, 0xe7, 0xf9, 0xb7, 0x3f, 0x42, 0xe1,
-+ 0x97, 0xea, 0x9c, 0x07, 0xe5, 0x6b, 0x5e, 0xb1,
-+ 0x7e, 0x5f, 0x4e },
-+ .rlen = 35,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0c, 0x0b, 0x0a, 0x09,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b },
-+ .alen = 12,
-+ .input = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
-+ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
-+ 0x1c, 0x1d, 0x1e },
-+ .ilen = 19,
-+ .result = { 0x07, 0x34, 0x25, 0x94, 0x15, 0x77, 0x85, 0x15,
-+ 0x2b, 0x07, 0x40, 0x98, 0x33, 0x0a, 0xbb, 0x14,
-+ 0x1b, 0x94, 0x7b, 0x56, 0x6a, 0xa9, 0x40, 0x6b,
-+ 0x4d, 0x99, 0x99, 0x88, 0xdd },
-+ .rlen = 29,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x33, 0x56, 0x8e, 0xf7, 0xb2, 0x63,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0x63, 0x01, 0x8f, 0x76, 0xdc, 0x8a, 0x1b, 0xcb },
-+ .alen = 8,
-+ .input = { 0x90, 0x20, 0xea, 0x6f, 0x91, 0xbd, 0xd8, 0x5a,
-+ 0xfa, 0x00, 0x39, 0xba, 0x4b, 0xaf, 0xf9, 0xbf,
-+ 0xb7, 0x9c, 0x70, 0x28, 0x94, 0x9c, 0xd0, 0xec },
-+ .ilen = 24,
-+ .result = { 0x4c, 0xcb, 0x1e, 0x7c, 0xa9, 0x81, 0xbe, 0xfa,
-+ 0xa0, 0x72, 0x6c, 0x55, 0xd3, 0x78, 0x06, 0x12,
-+ 0x98, 0xc8, 0x5c, 0x92, 0x81, 0x4a, 0xbc, 0x33,
-+ 0xc5, 0x2e, 0xe8, 0x1d, 0x7d, 0x77, 0xc0, 0x8a },
-+ .rlen = 32,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0xd5, 0x60, 0x91, 0x2d, 0x3f, 0x70,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0xcd, 0x90, 0x44, 0xd2, 0xb7, 0x1f, 0xdb, 0x81,
-+ 0x20, 0xea, 0x60, 0xc0 },
-+ .alen = 12,
-+ .input = { 0x64, 0x35, 0xac, 0xba, 0xfb, 0x11, 0xa8, 0x2e,
-+ 0x2f, 0x07, 0x1d, 0x7c, 0xa4, 0xa5, 0xeb, 0xd9,
-+ 0x3a, 0x80, 0x3b, 0xa8, 0x7f },
-+ .ilen = 21,
-+ .result = { 0x00, 0x97, 0x69, 0xec, 0xab, 0xdf, 0x48, 0x62,
-+ 0x55, 0x94, 0xc5, 0x92, 0x51, 0xe6, 0x03, 0x57,
-+ 0x22, 0x67, 0x5e, 0x04, 0xc8, 0x47, 0x09, 0x9e,
-+ 0x5a, 0xe0, 0x70, 0x45, 0x51 },
-+ .rlen = 29,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x42, 0xff, 0xf8, 0xf1, 0x95, 0x1c,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0xd8, 0x5b, 0xc7, 0xe6, 0x9f, 0x94, 0x4f, 0xb8 },
-+ .alen = 8,
-+ .input = { 0x8a, 0x19, 0xb9, 0x50, 0xbc, 0xf7, 0x1a, 0x01,
-+ 0x8e, 0x5e, 0x67, 0x01, 0xc9, 0x17, 0x87, 0x65,
-+ 0x98, 0x09, 0xd6, 0x7d, 0xbe, 0xdd, 0x18 },
-+ .ilen = 23,
-+ .result = { 0xbc, 0x21, 0x8d, 0xaa, 0x94, 0x74, 0x27, 0xb6,
-+ 0xdb, 0x38, 0x6a, 0x99, 0xac, 0x1a, 0xef, 0x23,
-+ 0xad, 0xe0, 0xb5, 0x29, 0x39, 0xcb, 0x6a, 0x63,
-+ 0x7c, 0xf9, 0xbe, 0xc2, 0x40, 0x88, 0x97, 0xc6,
-+ 0xba },
-+ .rlen = 33,
-+ },
-+};
++ rq->cmd_flags = REQ_HARDBARRIER;
++ rq_init(q, rq);
++ rq->elevator_private = NULL;
++ rq->elevator_private2 = NULL;
++ rq->rq_disk = q->bar_rq.rq_disk;
++ rq->end_io = end_io;
++ q->prepare_flush_fn(q, rq);
+
-+static struct aead_testvec aes_ccm_dec_tv_template[] = {
-+ { /* From RFC 3610 */
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x03, 0x02, 0x01, 0x00,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
-+ .alen = 8,
-+ .input = { 0x58, 0x8c, 0x97, 0x9a, 0x61, 0xc6, 0x63, 0xd2,
-+ 0xf0, 0x66, 0xd0, 0xc2, 0xc0, 0xf9, 0x89, 0x80,
-+ 0x6d, 0x5f, 0x6b, 0x61, 0xda, 0xc3, 0x84, 0x17,
-+ 0xe8, 0xd1, 0x2c, 0xfd, 0xf9, 0x26, 0xe0 },
-+ .ilen = 31,
-+ .result = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e },
-+ .rlen = 23,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x07, 0x06, 0x05, 0x04,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b },
-+ .alen = 12,
-+ .input = { 0xdc, 0xf1, 0xfb, 0x7b, 0x5d, 0x9e, 0x23, 0xfb,
-+ 0x9d, 0x4e, 0x13, 0x12, 0x53, 0x65, 0x8a, 0xd8,
-+ 0x6e, 0xbd, 0xca, 0x3e, 0x51, 0xe8, 0x3f, 0x07,
-+ 0x7d, 0x9c, 0x2d, 0x93 },
-+ .ilen = 28,
-+ .result = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
-+ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
-+ 0x1c, 0x1d, 0x1e, 0x1f },
-+ .rlen = 20,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0b, 0x0a, 0x09, 0x08,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
-+ .alen = 8,
-+ .input = { 0x82, 0x53, 0x1a, 0x60, 0xcc, 0x24, 0x94, 0x5a,
-+ 0x4b, 0x82, 0x79, 0x18, 0x1a, 0xb5, 0xc8, 0x4d,
-+ 0xf2, 0x1c, 0xe7, 0xf9, 0xb7, 0x3f, 0x42, 0xe1,
-+ 0x97, 0xea, 0x9c, 0x07, 0xe5, 0x6b, 0x5e, 0xb1,
-+ 0x7e, 0x5f, 0x4e },
-+ .ilen = 35,
-+ .result = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ 0x20 },
-+ .rlen = 25,
-+ }, {
-+ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0c, 0x0b, 0x0a, 0x09,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
-+ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b },
-+ .alen = 12,
-+ .input = { 0x07, 0x34, 0x25, 0x94, 0x15, 0x77, 0x85, 0x15,
-+ 0x2b, 0x07, 0x40, 0x98, 0x33, 0x0a, 0xbb, 0x14,
-+ 0x1b, 0x94, 0x7b, 0x56, 0x6a, 0xa9, 0x40, 0x6b,
-+ 0x4d, 0x99, 0x99, 0x88, 0xdd },
-+ .ilen = 29,
-+ .result = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
-+ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
-+ 0x1c, 0x1d, 0x1e },
-+ .rlen = 19,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x33, 0x56, 0x8e, 0xf7, 0xb2, 0x63,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0x63, 0x01, 0x8f, 0x76, 0xdc, 0x8a, 0x1b, 0xcb },
-+ .alen = 8,
-+ .input = { 0x4c, 0xcb, 0x1e, 0x7c, 0xa9, 0x81, 0xbe, 0xfa,
-+ 0xa0, 0x72, 0x6c, 0x55, 0xd3, 0x78, 0x06, 0x12,
-+ 0x98, 0xc8, 0x5c, 0x92, 0x81, 0x4a, 0xbc, 0x33,
-+ 0xc5, 0x2e, 0xe8, 0x1d, 0x7d, 0x77, 0xc0, 0x8a },
-+ .ilen = 32,
-+ .result = { 0x90, 0x20, 0xea, 0x6f, 0x91, 0xbd, 0xd8, 0x5a,
-+ 0xfa, 0x00, 0x39, 0xba, 0x4b, 0xaf, 0xf9, 0xbf,
-+ 0xb7, 0x9c, 0x70, 0x28, 0x94, 0x9c, 0xd0, 0xec },
-+ .rlen = 24,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0xd5, 0x60, 0x91, 0x2d, 0x3f, 0x70,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0xcd, 0x90, 0x44, 0xd2, 0xb7, 0x1f, 0xdb, 0x81,
-+ 0x20, 0xea, 0x60, 0xc0 },
-+ .alen = 12,
-+ .input = { 0x00, 0x97, 0x69, 0xec, 0xab, 0xdf, 0x48, 0x62,
-+ 0x55, 0x94, 0xc5, 0x92, 0x51, 0xe6, 0x03, 0x57,
-+ 0x22, 0x67, 0x5e, 0x04, 0xc8, 0x47, 0x09, 0x9e,
-+ 0x5a, 0xe0, 0x70, 0x45, 0x51 },
-+ .ilen = 29,
-+ .result = { 0x64, 0x35, 0xac, 0xba, 0xfb, 0x11, 0xa8, 0x2e,
-+ 0x2f, 0x07, 0x1d, 0x7c, 0xa4, 0xa5, 0xeb, 0xd9,
-+ 0x3a, 0x80, 0x3b, 0xa8, 0x7f },
-+ .rlen = 21,
-+ }, {
-+ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
-+ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
-+ .klen = 16,
-+ .iv = { 0x01, 0x00, 0x42, 0xff, 0xf8, 0xf1, 0x95, 0x1c,
-+ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
-+ .assoc = { 0xd8, 0x5b, 0xc7, 0xe6, 0x9f, 0x94, 0x4f, 0xb8 },
-+ .alen = 8,
-+ .input = { 0xbc, 0x21, 0x8d, 0xaa, 0x94, 0x74, 0x27, 0xb6,
-+ 0xdb, 0x38, 0x6a, 0x99, 0xac, 0x1a, 0xef, 0x23,
-+ 0xad, 0xe0, 0xb5, 0x29, 0x39, 0xcb, 0x6a, 0x63,
-+ 0x7c, 0xf9, 0xbe, 0xc2, 0x40, 0x88, 0x97, 0xc6,
-+ 0xba },
-+ .ilen = 33,
-+ .result = { 0x8a, 0x19, 0xb9, 0x50, 0xbc, 0xf7, 0x1a, 0x01,
-+ 0x8e, 0x5e, 0x67, 0x01, 0xc9, 0x17, 0x87, 0x65,
-+ 0x98, 0x09, 0xd6, 0x7d, 0xbe, 0xdd, 0x18 },
-+ .rlen = 23,
-+ },
-+};
++ elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
++}
++
++static inline struct request *start_ordered(struct request_queue *q,
++ struct request *rq)
++{
++ q->orderr = 0;
++ q->ordered = q->next_ordered;
++ q->ordseq |= QUEUE_ORDSEQ_STARTED;
+
- /* Cast5 test vectors from RFC 2144 */
- #define CAST5_ENC_TEST_VECTORS 3
- #define CAST5_DEC_TEST_VECTORS 3
-@@ -4317,6 +6425,1211 @@ static struct cipher_testvec seed_dec_tv_template[] = {
- }
- };
-
-+#define SALSA20_STREAM_ENC_TEST_VECTORS 5
-+static struct cipher_testvec salsa20_stream_enc_tv_template[] = {
+ /*
-+ * Testvectors from verified.test-vectors submitted to ECRYPT.
-+ * They are truncated to size 39, 64, 111, 129 to test a variety
-+ * of input length.
-+ */
-+ { /* Set 3, vector 0 */
-+ .key = {
-+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
-+ },
-+ .klen = 16,
-+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-+ .input = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .ilen = 39,
-+ .result = {
-+ 0x2D, 0xD5, 0xC3, 0xF7, 0xBA, 0x2B, 0x20, 0xF7,
-+ 0x68, 0x02, 0x41, 0x0C, 0x68, 0x86, 0x88, 0x89,
-+ 0x5A, 0xD8, 0xC1, 0xBD, 0x4E, 0xA6, 0xC9, 0xB1,
-+ 0x40, 0xFB, 0x9B, 0x90, 0xE2, 0x10, 0x49, 0xBF,
-+ 0x58, 0x3F, 0x52, 0x79, 0x70, 0xEB, 0xC1,
-+ },
-+ .rlen = 39,
-+ }, { /* Set 5, vector 0 */
-+ .key = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
-+ },
-+ .klen = 16,
-+ .iv = { 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-+ .input = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .ilen = 64,
-+ .result = {
-+ 0xB6, 0x6C, 0x1E, 0x44, 0x46, 0xDD, 0x95, 0x57,
-+ 0xE5, 0x78, 0xE2, 0x23, 0xB0, 0xB7, 0x68, 0x01,
-+ 0x7B, 0x23, 0xB2, 0x67, 0xBB, 0x02, 0x34, 0xAE,
-+ 0x46, 0x26, 0xBF, 0x44, 0x3F, 0x21, 0x97, 0x76,
-+ 0x43, 0x6F, 0xB1, 0x9F, 0xD0, 0xE8, 0x86, 0x6F,
-+ 0xCD, 0x0D, 0xE9, 0xA9, 0x53, 0x8F, 0x4A, 0x09,
-+ 0xCA, 0x9A, 0xC0, 0x73, 0x2E, 0x30, 0xBC, 0xF9,
-+ 0x8E, 0x4F, 0x13, 0xE4, 0xB9, 0xE2, 0x01, 0xD9,
-+ },
-+ .rlen = 64,
-+ }, { /* Set 3, vector 27 */
-+ .key = {
-+ 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x21, 0x22,
-+ 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A,
-+ 0x2B, 0x2C, 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32,
-+ 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A
-+ },
-+ .klen = 32,
-+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-+ .input = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ * Prep proxy barrier request.
++ */
++ blkdev_dequeue_request(rq);
++ q->orig_bar_rq = rq;
++ rq = &q->bar_rq;
++ rq->cmd_flags = 0;
++ rq_init(q, rq);
++ if (bio_data_dir(q->orig_bar_rq->bio) == WRITE)
++ rq->cmd_flags |= REQ_RW;
++ if (q->ordered & QUEUE_ORDERED_FUA)
++ rq->cmd_flags |= REQ_FUA;
++ rq->elevator_private = NULL;
++ rq->elevator_private2 = NULL;
++ init_request_from_bio(rq, q->orig_bar_rq->bio);
++ rq->end_io = bar_end_io;
+
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .ilen = 111,
-+ .result = {
-+ 0xAE, 0x39, 0x50, 0x8E, 0xAC, 0x9A, 0xEC, 0xE7,
-+ 0xBF, 0x97, 0xBB, 0x20, 0xB9, 0xDE, 0xE4, 0x1F,
-+ 0x87, 0xD9, 0x47, 0xF8, 0x28, 0x91, 0x35, 0x98,
-+ 0xDB, 0x72, 0xCC, 0x23, 0x29, 0x48, 0x56, 0x5E,
-+ 0x83, 0x7E, 0x0B, 0xF3, 0x7D, 0x5D, 0x38, 0x7B,
-+ 0x2D, 0x71, 0x02, 0xB4, 0x3B, 0xB5, 0xD8, 0x23,
-+ 0xB0, 0x4A, 0xDF, 0x3C, 0xEC, 0xB6, 0xD9, 0x3B,
-+ 0x9B, 0xA7, 0x52, 0xBE, 0xC5, 0xD4, 0x50, 0x59,
++ /*
++ * Queue ordered sequence. As we stack them at the head, we
++ * need to queue in reverse order. Note that we rely on that
++ * no fs request uses ELEVATOR_INSERT_FRONT and thus no fs
++ * request gets inbetween ordered sequence. If this request is
++ * an empty barrier, we don't need to do a postflush ever since
++ * there will be no data written between the pre and post flush.
++ * Hence a single flush will suffice.
++ */
++ if ((q->ordered & QUEUE_ORDERED_POSTFLUSH) && !blk_empty_barrier(rq))
++ queue_flush(q, QUEUE_ORDERED_POSTFLUSH);
++ else
++ q->ordseq |= QUEUE_ORDSEQ_POSTFLUSH;
+
-+ 0x15, 0x14, 0xB4, 0x0E, 0x40, 0xE6, 0x53, 0xD1,
-+ 0x83, 0x9C, 0x5B, 0xA0, 0x92, 0x29, 0x6B, 0x5E,
-+ 0x96, 0x5B, 0x1E, 0x2F, 0xD3, 0xAC, 0xC1, 0x92,
-+ 0xB1, 0x41, 0x3F, 0x19, 0x2F, 0xC4, 0x3B, 0xC6,
-+ 0x95, 0x46, 0x45, 0x54, 0xE9, 0x75, 0x03, 0x08,
-+ 0x44, 0xAF, 0xE5, 0x8A, 0x81, 0x12, 0x09,
-+ },
-+ .rlen = 111,
++ elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
+
-+ }, { /* Set 5, vector 27 */
-+ .key = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
-+ },
-+ .klen = 32,
-+ .iv = { 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00 },
-+ .input = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ if (q->ordered & QUEUE_ORDERED_PREFLUSH) {
++ queue_flush(q, QUEUE_ORDERED_PREFLUSH);
++ rq = &q->pre_flush_rq;
++ } else
++ q->ordseq |= QUEUE_ORDSEQ_PREFLUSH;
+
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ if ((q->ordered & QUEUE_ORDERED_TAG) || q->in_flight == 0)
++ q->ordseq |= QUEUE_ORDSEQ_DRAIN;
++ else
++ rq = NULL;
+
-+ 0x00,
-+ },
-+ .ilen = 129,
-+ .result = {
-+ 0xD2, 0xDB, 0x1A, 0x5C, 0xF1, 0xC1, 0xAC, 0xDB,
-+ 0xE8, 0x1A, 0x7A, 0x43, 0x40, 0xEF, 0x53, 0x43,
-+ 0x5E, 0x7F, 0x4B, 0x1A, 0x50, 0x52, 0x3F, 0x8D,
-+ 0x28, 0x3D, 0xCF, 0x85, 0x1D, 0x69, 0x6E, 0x60,
-+ 0xF2, 0xDE, 0x74, 0x56, 0x18, 0x1B, 0x84, 0x10,
-+ 0xD4, 0x62, 0xBA, 0x60, 0x50, 0xF0, 0x61, 0xF2,
-+ 0x1C, 0x78, 0x7F, 0xC1, 0x24, 0x34, 0xAF, 0x58,
-+ 0xBF, 0x2C, 0x59, 0xCA, 0x90, 0x77, 0xF3, 0xB0,
++ return rq;
++}
+
-+ 0x5B, 0x4A, 0xDF, 0x89, 0xCE, 0x2C, 0x2F, 0xFC,
-+ 0x67, 0xF0, 0xE3, 0x45, 0xE8, 0xB3, 0xB3, 0x75,
-+ 0xA0, 0x95, 0x71, 0xA1, 0x29, 0x39, 0x94, 0xCA,
-+ 0x45, 0x2F, 0xBD, 0xCB, 0x10, 0xB6, 0xBE, 0x9F,
-+ 0x8E, 0xF9, 0xB2, 0x01, 0x0A, 0x5A, 0x0A, 0xB7,
-+ 0x6B, 0x9D, 0x70, 0x8E, 0x4B, 0xD6, 0x2F, 0xCD,
-+ 0x2E, 0x40, 0x48, 0x75, 0xE9, 0xE2, 0x21, 0x45,
-+ 0x0B, 0xC9, 0xB6, 0xB5, 0x66, 0xBC, 0x9A, 0x59,
++int blk_do_ordered(struct request_queue *q, struct request **rqp)
++{
++ struct request *rq = *rqp;
++ const int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq);
+
-+ 0x5A,
-+ },
-+ .rlen = 129,
-+ }, { /* large test vector generated using Crypto++ */
-+ .key = {
-+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ },
-+ .klen = 32,
-+ .iv = {
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-+ },
-+ .input = {
-+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
-+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
-+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
-+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
-+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
-+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
-+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
-+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
-+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
-+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
-+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
-+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
-+ 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
-+ 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
-+ 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97,
-+ 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
-+ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7,
-+ 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
-+ 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7,
-+ 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
-+ 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
-+ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
-+ 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
-+ 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
-+ 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
-+ 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
-+ 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7,
-+ 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff,
-+ 0x00, 0x03, 0x06, 0x09, 0x0c, 0x0f, 0x12, 0x15,
-+ 0x18, 0x1b, 0x1e, 0x21, 0x24, 0x27, 0x2a, 0x2d,
-+ 0x30, 0x33, 0x36, 0x39, 0x3c, 0x3f, 0x42, 0x45,
-+ 0x48, 0x4b, 0x4e, 0x51, 0x54, 0x57, 0x5a, 0x5d,
-+ 0x60, 0x63, 0x66, 0x69, 0x6c, 0x6f, 0x72, 0x75,
-+ 0x78, 0x7b, 0x7e, 0x81, 0x84, 0x87, 0x8a, 0x8d,
-+ 0x90, 0x93, 0x96, 0x99, 0x9c, 0x9f, 0xa2, 0xa5,
-+ 0xa8, 0xab, 0xae, 0xb1, 0xb4, 0xb7, 0xba, 0xbd,
-+ 0xc0, 0xc3, 0xc6, 0xc9, 0xcc, 0xcf, 0xd2, 0xd5,
-+ 0xd8, 0xdb, 0xde, 0xe1, 0xe4, 0xe7, 0xea, 0xed,
-+ 0xf0, 0xf3, 0xf6, 0xf9, 0xfc, 0xff, 0x02, 0x05,
-+ 0x08, 0x0b, 0x0e, 0x11, 0x14, 0x17, 0x1a, 0x1d,
-+ 0x20, 0x23, 0x26, 0x29, 0x2c, 0x2f, 0x32, 0x35,
-+ 0x38, 0x3b, 0x3e, 0x41, 0x44, 0x47, 0x4a, 0x4d,
-+ 0x50, 0x53, 0x56, 0x59, 0x5c, 0x5f, 0x62, 0x65,
-+ 0x68, 0x6b, 0x6e, 0x71, 0x74, 0x77, 0x7a, 0x7d,
-+ 0x80, 0x83, 0x86, 0x89, 0x8c, 0x8f, 0x92, 0x95,
-+ 0x98, 0x9b, 0x9e, 0xa1, 0xa4, 0xa7, 0xaa, 0xad,
-+ 0xb0, 0xb3, 0xb6, 0xb9, 0xbc, 0xbf, 0xc2, 0xc5,
-+ 0xc8, 0xcb, 0xce, 0xd1, 0xd4, 0xd7, 0xda, 0xdd,
-+ 0xe0, 0xe3, 0xe6, 0xe9, 0xec, 0xef, 0xf2, 0xf5,
-+ 0xf8, 0xfb, 0xfe, 0x01, 0x04, 0x07, 0x0a, 0x0d,
-+ 0x10, 0x13, 0x16, 0x19, 0x1c, 0x1f, 0x22, 0x25,
-+ 0x28, 0x2b, 0x2e, 0x31, 0x34, 0x37, 0x3a, 0x3d,
-+ 0x40, 0x43, 0x46, 0x49, 0x4c, 0x4f, 0x52, 0x55,
-+ 0x58, 0x5b, 0x5e, 0x61, 0x64, 0x67, 0x6a, 0x6d,
-+ 0x70, 0x73, 0x76, 0x79, 0x7c, 0x7f, 0x82, 0x85,
-+ 0x88, 0x8b, 0x8e, 0x91, 0x94, 0x97, 0x9a, 0x9d,
-+ 0xa0, 0xa3, 0xa6, 0xa9, 0xac, 0xaf, 0xb2, 0xb5,
-+ 0xb8, 0xbb, 0xbe, 0xc1, 0xc4, 0xc7, 0xca, 0xcd,
-+ 0xd0, 0xd3, 0xd6, 0xd9, 0xdc, 0xdf, 0xe2, 0xe5,
-+ 0xe8, 0xeb, 0xee, 0xf1, 0xf4, 0xf7, 0xfa, 0xfd,
-+ 0x00, 0x05, 0x0a, 0x0f, 0x14, 0x19, 0x1e, 0x23,
-+ 0x28, 0x2d, 0x32, 0x37, 0x3c, 0x41, 0x46, 0x4b,
-+ 0x50, 0x55, 0x5a, 0x5f, 0x64, 0x69, 0x6e, 0x73,
-+ 0x78, 0x7d, 0x82, 0x87, 0x8c, 0x91, 0x96, 0x9b,
-+ 0xa0, 0xa5, 0xaa, 0xaf, 0xb4, 0xb9, 0xbe, 0xc3,
-+ 0xc8, 0xcd, 0xd2, 0xd7, 0xdc, 0xe1, 0xe6, 0xeb,
-+ 0xf0, 0xf5, 0xfa, 0xff, 0x04, 0x09, 0x0e, 0x13,
-+ 0x18, 0x1d, 0x22, 0x27, 0x2c, 0x31, 0x36, 0x3b,
-+ 0x40, 0x45, 0x4a, 0x4f, 0x54, 0x59, 0x5e, 0x63,
-+ 0x68, 0x6d, 0x72, 0x77, 0x7c, 0x81, 0x86, 0x8b,
-+ 0x90, 0x95, 0x9a, 0x9f, 0xa4, 0xa9, 0xae, 0xb3,
-+ 0xb8, 0xbd, 0xc2, 0xc7, 0xcc, 0xd1, 0xd6, 0xdb,
-+ 0xe0, 0xe5, 0xea, 0xef, 0xf4, 0xf9, 0xfe, 0x03,
-+ 0x08, 0x0d, 0x12, 0x17, 0x1c, 0x21, 0x26, 0x2b,
-+ 0x30, 0x35, 0x3a, 0x3f, 0x44, 0x49, 0x4e, 0x53,
-+ 0x58, 0x5d, 0x62, 0x67, 0x6c, 0x71, 0x76, 0x7b,
-+ 0x80, 0x85, 0x8a, 0x8f, 0x94, 0x99, 0x9e, 0xa3,
-+ 0xa8, 0xad, 0xb2, 0xb7, 0xbc, 0xc1, 0xc6, 0xcb,
-+ 0xd0, 0xd5, 0xda, 0xdf, 0xe4, 0xe9, 0xee, 0xf3,
-+ 0xf8, 0xfd, 0x02, 0x07, 0x0c, 0x11, 0x16, 0x1b,
-+ 0x20, 0x25, 0x2a, 0x2f, 0x34, 0x39, 0x3e, 0x43,
-+ 0x48, 0x4d, 0x52, 0x57, 0x5c, 0x61, 0x66, 0x6b,
-+ 0x70, 0x75, 0x7a, 0x7f, 0x84, 0x89, 0x8e, 0x93,
-+ 0x98, 0x9d, 0xa2, 0xa7, 0xac, 0xb1, 0xb6, 0xbb,
-+ 0xc0, 0xc5, 0xca, 0xcf, 0xd4, 0xd9, 0xde, 0xe3,
-+ 0xe8, 0xed, 0xf2, 0xf7, 0xfc, 0x01, 0x06, 0x0b,
-+ 0x10, 0x15, 0x1a, 0x1f, 0x24, 0x29, 0x2e, 0x33,
-+ 0x38, 0x3d, 0x42, 0x47, 0x4c, 0x51, 0x56, 0x5b,
-+ 0x60, 0x65, 0x6a, 0x6f, 0x74, 0x79, 0x7e, 0x83,
-+ 0x88, 0x8d, 0x92, 0x97, 0x9c, 0xa1, 0xa6, 0xab,
-+ 0xb0, 0xb5, 0xba, 0xbf, 0xc4, 0xc9, 0xce, 0xd3,
-+ 0xd8, 0xdd, 0xe2, 0xe7, 0xec, 0xf1, 0xf6, 0xfb,
-+ 0x00, 0x07, 0x0e, 0x15, 0x1c, 0x23, 0x2a, 0x31,
-+ 0x38, 0x3f, 0x46, 0x4d, 0x54, 0x5b, 0x62, 0x69,
-+ 0x70, 0x77, 0x7e, 0x85, 0x8c, 0x93, 0x9a, 0xa1,
-+ 0xa8, 0xaf, 0xb6, 0xbd, 0xc4, 0xcb, 0xd2, 0xd9,
-+ 0xe0, 0xe7, 0xee, 0xf5, 0xfc, 0x03, 0x0a, 0x11,
-+ 0x18, 0x1f, 0x26, 0x2d, 0x34, 0x3b, 0x42, 0x49,
-+ 0x50, 0x57, 0x5e, 0x65, 0x6c, 0x73, 0x7a, 0x81,
-+ 0x88, 0x8f, 0x96, 0x9d, 0xa4, 0xab, 0xb2, 0xb9,
-+ 0xc0, 0xc7, 0xce, 0xd5, 0xdc, 0xe3, 0xea, 0xf1,
-+ 0xf8, 0xff, 0x06, 0x0d, 0x14, 0x1b, 0x22, 0x29,
-+ 0x30, 0x37, 0x3e, 0x45, 0x4c, 0x53, 0x5a, 0x61,
-+ 0x68, 0x6f, 0x76, 0x7d, 0x84, 0x8b, 0x92, 0x99,
-+ 0xa0, 0xa7, 0xae, 0xb5, 0xbc, 0xc3, 0xca, 0xd1,
-+ 0xd8, 0xdf, 0xe6, 0xed, 0xf4, 0xfb, 0x02, 0x09,
-+ 0x10, 0x17, 0x1e, 0x25, 0x2c, 0x33, 0x3a, 0x41,
-+ 0x48, 0x4f, 0x56, 0x5d, 0x64, 0x6b, 0x72, 0x79,
-+ 0x80, 0x87, 0x8e, 0x95, 0x9c, 0xa3, 0xaa, 0xb1,
-+ 0xb8, 0xbf, 0xc6, 0xcd, 0xd4, 0xdb, 0xe2, 0xe9,
-+ 0xf0, 0xf7, 0xfe, 0x05, 0x0c, 0x13, 0x1a, 0x21,
-+ 0x28, 0x2f, 0x36, 0x3d, 0x44, 0x4b, 0x52, 0x59,
-+ 0x60, 0x67, 0x6e, 0x75, 0x7c, 0x83, 0x8a, 0x91,
-+ 0x98, 0x9f, 0xa6, 0xad, 0xb4, 0xbb, 0xc2, 0xc9,
-+ 0xd0, 0xd7, 0xde, 0xe5, 0xec, 0xf3, 0xfa, 0x01,
-+ 0x08, 0x0f, 0x16, 0x1d, 0x24, 0x2b, 0x32, 0x39,
-+ 0x40, 0x47, 0x4e, 0x55, 0x5c, 0x63, 0x6a, 0x71,
-+ 0x78, 0x7f, 0x86, 0x8d, 0x94, 0x9b, 0xa2, 0xa9,
-+ 0xb0, 0xb7, 0xbe, 0xc5, 0xcc, 0xd3, 0xda, 0xe1,
-+ 0xe8, 0xef, 0xf6, 0xfd, 0x04, 0x0b, 0x12, 0x19,
-+ 0x20, 0x27, 0x2e, 0x35, 0x3c, 0x43, 0x4a, 0x51,
-+ 0x58, 0x5f, 0x66, 0x6d, 0x74, 0x7b, 0x82, 0x89,
-+ 0x90, 0x97, 0x9e, 0xa5, 0xac, 0xb3, 0xba, 0xc1,
-+ 0xc8, 0xcf, 0xd6, 0xdd, 0xe4, 0xeb, 0xf2, 0xf9,
-+ 0x00, 0x09, 0x12, 0x1b, 0x24, 0x2d, 0x36, 0x3f,
-+ 0x48, 0x51, 0x5a, 0x63, 0x6c, 0x75, 0x7e, 0x87,
-+ 0x90, 0x99, 0xa2, 0xab, 0xb4, 0xbd, 0xc6, 0xcf,
-+ 0xd8, 0xe1, 0xea, 0xf3, 0xfc, 0x05, 0x0e, 0x17,
-+ 0x20, 0x29, 0x32, 0x3b, 0x44, 0x4d, 0x56, 0x5f,
-+ 0x68, 0x71, 0x7a, 0x83, 0x8c, 0x95, 0x9e, 0xa7,
-+ 0xb0, 0xb9, 0xc2, 0xcb, 0xd4, 0xdd, 0xe6, 0xef,
-+ 0xf8, 0x01, 0x0a, 0x13, 0x1c, 0x25, 0x2e, 0x37,
-+ 0x40, 0x49, 0x52, 0x5b, 0x64, 0x6d, 0x76, 0x7f,
-+ 0x88, 0x91, 0x9a, 0xa3, 0xac, 0xb5, 0xbe, 0xc7,
-+ 0xd0, 0xd9, 0xe2, 0xeb, 0xf4, 0xfd, 0x06, 0x0f,
-+ 0x18, 0x21, 0x2a, 0x33, 0x3c, 0x45, 0x4e, 0x57,
-+ 0x60, 0x69, 0x72, 0x7b, 0x84, 0x8d, 0x96, 0x9f,
-+ 0xa8, 0xb1, 0xba, 0xc3, 0xcc, 0xd5, 0xde, 0xe7,
-+ 0xf0, 0xf9, 0x02, 0x0b, 0x14, 0x1d, 0x26, 0x2f,
-+ 0x38, 0x41, 0x4a, 0x53, 0x5c, 0x65, 0x6e, 0x77,
-+ 0x80, 0x89, 0x92, 0x9b, 0xa4, 0xad, 0xb6, 0xbf,
-+ 0xc8, 0xd1, 0xda, 0xe3, 0xec, 0xf5, 0xfe, 0x07,
-+ 0x10, 0x19, 0x22, 0x2b, 0x34, 0x3d, 0x46, 0x4f,
-+ 0x58, 0x61, 0x6a, 0x73, 0x7c, 0x85, 0x8e, 0x97,
-+ 0xa0, 0xa9, 0xb2, 0xbb, 0xc4, 0xcd, 0xd6, 0xdf,
-+ 0xe8, 0xf1, 0xfa, 0x03, 0x0c, 0x15, 0x1e, 0x27,
-+ 0x30, 0x39, 0x42, 0x4b, 0x54, 0x5d, 0x66, 0x6f,
-+ 0x78, 0x81, 0x8a, 0x93, 0x9c, 0xa5, 0xae, 0xb7,
-+ 0xc0, 0xc9, 0xd2, 0xdb, 0xe4, 0xed, 0xf6, 0xff,
-+ 0x08, 0x11, 0x1a, 0x23, 0x2c, 0x35, 0x3e, 0x47,
-+ 0x50, 0x59, 0x62, 0x6b, 0x74, 0x7d, 0x86, 0x8f,
-+ 0x98, 0xa1, 0xaa, 0xb3, 0xbc, 0xc5, 0xce, 0xd7,
-+ 0xe0, 0xe9, 0xf2, 0xfb, 0x04, 0x0d, 0x16, 0x1f,
-+ 0x28, 0x31, 0x3a, 0x43, 0x4c, 0x55, 0x5e, 0x67,
-+ 0x70, 0x79, 0x82, 0x8b, 0x94, 0x9d, 0xa6, 0xaf,
-+ 0xb8, 0xc1, 0xca, 0xd3, 0xdc, 0xe5, 0xee, 0xf7,
-+ 0x00, 0x0b, 0x16, 0x21, 0x2c, 0x37, 0x42, 0x4d,
-+ 0x58, 0x63, 0x6e, 0x79, 0x84, 0x8f, 0x9a, 0xa5,
-+ 0xb0, 0xbb, 0xc6, 0xd1, 0xdc, 0xe7, 0xf2, 0xfd,
-+ 0x08, 0x13, 0x1e, 0x29, 0x34, 0x3f, 0x4a, 0x55,
-+ 0x60, 0x6b, 0x76, 0x81, 0x8c, 0x97, 0xa2, 0xad,
-+ 0xb8, 0xc3, 0xce, 0xd9, 0xe4, 0xef, 0xfa, 0x05,
-+ 0x10, 0x1b, 0x26, 0x31, 0x3c, 0x47, 0x52, 0x5d,
-+ 0x68, 0x73, 0x7e, 0x89, 0x94, 0x9f, 0xaa, 0xb5,
-+ 0xc0, 0xcb, 0xd6, 0xe1, 0xec, 0xf7, 0x02, 0x0d,
-+ 0x18, 0x23, 0x2e, 0x39, 0x44, 0x4f, 0x5a, 0x65,
-+ 0x70, 0x7b, 0x86, 0x91, 0x9c, 0xa7, 0xb2, 0xbd,
-+ 0xc8, 0xd3, 0xde, 0xe9, 0xf4, 0xff, 0x0a, 0x15,
-+ 0x20, 0x2b, 0x36, 0x41, 0x4c, 0x57, 0x62, 0x6d,
-+ 0x78, 0x83, 0x8e, 0x99, 0xa4, 0xaf, 0xba, 0xc5,
-+ 0xd0, 0xdb, 0xe6, 0xf1, 0xfc, 0x07, 0x12, 0x1d,
-+ 0x28, 0x33, 0x3e, 0x49, 0x54, 0x5f, 0x6a, 0x75,
-+ 0x80, 0x8b, 0x96, 0xa1, 0xac, 0xb7, 0xc2, 0xcd,
-+ 0xd8, 0xe3, 0xee, 0xf9, 0x04, 0x0f, 0x1a, 0x25,
-+ 0x30, 0x3b, 0x46, 0x51, 0x5c, 0x67, 0x72, 0x7d,
-+ 0x88, 0x93, 0x9e, 0xa9, 0xb4, 0xbf, 0xca, 0xd5,
-+ 0xe0, 0xeb, 0xf6, 0x01, 0x0c, 0x17, 0x22, 0x2d,
-+ 0x38, 0x43, 0x4e, 0x59, 0x64, 0x6f, 0x7a, 0x85,
-+ 0x90, 0x9b, 0xa6, 0xb1, 0xbc, 0xc7, 0xd2, 0xdd,
-+ 0xe8, 0xf3, 0xfe, 0x09, 0x14, 0x1f, 0x2a, 0x35,
-+ 0x40, 0x4b, 0x56, 0x61, 0x6c, 0x77, 0x82, 0x8d,
-+ 0x98, 0xa3, 0xae, 0xb9, 0xc4, 0xcf, 0xda, 0xe5,
-+ 0xf0, 0xfb, 0x06, 0x11, 0x1c, 0x27, 0x32, 0x3d,
-+ 0x48, 0x53, 0x5e, 0x69, 0x74, 0x7f, 0x8a, 0x95,
-+ 0xa0, 0xab, 0xb6, 0xc1, 0xcc, 0xd7, 0xe2, 0xed,
-+ 0xf8, 0x03, 0x0e, 0x19, 0x24, 0x2f, 0x3a, 0x45,
-+ 0x50, 0x5b, 0x66, 0x71, 0x7c, 0x87, 0x92, 0x9d,
-+ 0xa8, 0xb3, 0xbe, 0xc9, 0xd4, 0xdf, 0xea, 0xf5,
-+ 0x00, 0x0d, 0x1a, 0x27, 0x34, 0x41, 0x4e, 0x5b,
-+ 0x68, 0x75, 0x82, 0x8f, 0x9c, 0xa9, 0xb6, 0xc3,
-+ 0xd0, 0xdd, 0xea, 0xf7, 0x04, 0x11, 0x1e, 0x2b,
-+ 0x38, 0x45, 0x52, 0x5f, 0x6c, 0x79, 0x86, 0x93,
-+ 0xa0, 0xad, 0xba, 0xc7, 0xd4, 0xe1, 0xee, 0xfb,
-+ 0x08, 0x15, 0x22, 0x2f, 0x3c, 0x49, 0x56, 0x63,
-+ 0x70, 0x7d, 0x8a, 0x97, 0xa4, 0xb1, 0xbe, 0xcb,
-+ 0xd8, 0xe5, 0xf2, 0xff, 0x0c, 0x19, 0x26, 0x33,
-+ 0x40, 0x4d, 0x5a, 0x67, 0x74, 0x81, 0x8e, 0x9b,
-+ 0xa8, 0xb5, 0xc2, 0xcf, 0xdc, 0xe9, 0xf6, 0x03,
-+ 0x10, 0x1d, 0x2a, 0x37, 0x44, 0x51, 0x5e, 0x6b,
-+ 0x78, 0x85, 0x92, 0x9f, 0xac, 0xb9, 0xc6, 0xd3,
-+ 0xe0, 0xed, 0xfa, 0x07, 0x14, 0x21, 0x2e, 0x3b,
-+ 0x48, 0x55, 0x62, 0x6f, 0x7c, 0x89, 0x96, 0xa3,
-+ 0xb0, 0xbd, 0xca, 0xd7, 0xe4, 0xf1, 0xfe, 0x0b,
-+ 0x18, 0x25, 0x32, 0x3f, 0x4c, 0x59, 0x66, 0x73,
-+ 0x80, 0x8d, 0x9a, 0xa7, 0xb4, 0xc1, 0xce, 0xdb,
-+ 0xe8, 0xf5, 0x02, 0x0f, 0x1c, 0x29, 0x36, 0x43,
-+ 0x50, 0x5d, 0x6a, 0x77, 0x84, 0x91, 0x9e, 0xab,
-+ 0xb8, 0xc5, 0xd2, 0xdf, 0xec, 0xf9, 0x06, 0x13,
-+ 0x20, 0x2d, 0x3a, 0x47, 0x54, 0x61, 0x6e, 0x7b,
-+ 0x88, 0x95, 0xa2, 0xaf, 0xbc, 0xc9, 0xd6, 0xe3,
-+ 0xf0, 0xfd, 0x0a, 0x17, 0x24, 0x31, 0x3e, 0x4b,
-+ 0x58, 0x65, 0x72, 0x7f, 0x8c, 0x99, 0xa6, 0xb3,
-+ 0xc0, 0xcd, 0xda, 0xe7, 0xf4, 0x01, 0x0e, 0x1b,
-+ 0x28, 0x35, 0x42, 0x4f, 0x5c, 0x69, 0x76, 0x83,
-+ 0x90, 0x9d, 0xaa, 0xb7, 0xc4, 0xd1, 0xde, 0xeb,
-+ 0xf8, 0x05, 0x12, 0x1f, 0x2c, 0x39, 0x46, 0x53,
-+ 0x60, 0x6d, 0x7a, 0x87, 0x94, 0xa1, 0xae, 0xbb,
-+ 0xc8, 0xd5, 0xe2, 0xef, 0xfc, 0x09, 0x16, 0x23,
-+ 0x30, 0x3d, 0x4a, 0x57, 0x64, 0x71, 0x7e, 0x8b,
-+ 0x98, 0xa5, 0xb2, 0xbf, 0xcc, 0xd9, 0xe6, 0xf3,
-+ 0x00, 0x0f, 0x1e, 0x2d, 0x3c, 0x4b, 0x5a, 0x69,
-+ 0x78, 0x87, 0x96, 0xa5, 0xb4, 0xc3, 0xd2, 0xe1,
-+ 0xf0, 0xff, 0x0e, 0x1d, 0x2c, 0x3b, 0x4a, 0x59,
-+ 0x68, 0x77, 0x86, 0x95, 0xa4, 0xb3, 0xc2, 0xd1,
-+ 0xe0, 0xef, 0xfe, 0x0d, 0x1c, 0x2b, 0x3a, 0x49,
-+ 0x58, 0x67, 0x76, 0x85, 0x94, 0xa3, 0xb2, 0xc1,
-+ 0xd0, 0xdf, 0xee, 0xfd, 0x0c, 0x1b, 0x2a, 0x39,
-+ 0x48, 0x57, 0x66, 0x75, 0x84, 0x93, 0xa2, 0xb1,
-+ 0xc0, 0xcf, 0xde, 0xed, 0xfc, 0x0b, 0x1a, 0x29,
-+ 0x38, 0x47, 0x56, 0x65, 0x74, 0x83, 0x92, 0xa1,
-+ 0xb0, 0xbf, 0xce, 0xdd, 0xec, 0xfb, 0x0a, 0x19,
-+ 0x28, 0x37, 0x46, 0x55, 0x64, 0x73, 0x82, 0x91,
-+ 0xa0, 0xaf, 0xbe, 0xcd, 0xdc, 0xeb, 0xfa, 0x09,
-+ 0x18, 0x27, 0x36, 0x45, 0x54, 0x63, 0x72, 0x81,
-+ 0x90, 0x9f, 0xae, 0xbd, 0xcc, 0xdb, 0xea, 0xf9,
-+ 0x08, 0x17, 0x26, 0x35, 0x44, 0x53, 0x62, 0x71,
-+ 0x80, 0x8f, 0x9e, 0xad, 0xbc, 0xcb, 0xda, 0xe9,
-+ 0xf8, 0x07, 0x16, 0x25, 0x34, 0x43, 0x52, 0x61,
-+ 0x70, 0x7f, 0x8e, 0x9d, 0xac, 0xbb, 0xca, 0xd9,
-+ 0xe8, 0xf7, 0x06, 0x15, 0x24, 0x33, 0x42, 0x51,
-+ 0x60, 0x6f, 0x7e, 0x8d, 0x9c, 0xab, 0xba, 0xc9,
-+ 0xd8, 0xe7, 0xf6, 0x05, 0x14, 0x23, 0x32, 0x41,
-+ 0x50, 0x5f, 0x6e, 0x7d, 0x8c, 0x9b, 0xaa, 0xb9,
-+ 0xc8, 0xd7, 0xe6, 0xf5, 0x04, 0x13, 0x22, 0x31,
-+ 0x40, 0x4f, 0x5e, 0x6d, 0x7c, 0x8b, 0x9a, 0xa9,
-+ 0xb8, 0xc7, 0xd6, 0xe5, 0xf4, 0x03, 0x12, 0x21,
-+ 0x30, 0x3f, 0x4e, 0x5d, 0x6c, 0x7b, 0x8a, 0x99,
-+ 0xa8, 0xb7, 0xc6, 0xd5, 0xe4, 0xf3, 0x02, 0x11,
-+ 0x20, 0x2f, 0x3e, 0x4d, 0x5c, 0x6b, 0x7a, 0x89,
-+ 0x98, 0xa7, 0xb6, 0xc5, 0xd4, 0xe3, 0xf2, 0x01,
-+ 0x10, 0x1f, 0x2e, 0x3d, 0x4c, 0x5b, 0x6a, 0x79,
-+ 0x88, 0x97, 0xa6, 0xb5, 0xc4, 0xd3, 0xe2, 0xf1,
-+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
-+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
-+ 0x10, 0x21, 0x32, 0x43, 0x54, 0x65, 0x76, 0x87,
-+ 0x98, 0xa9, 0xba, 0xcb, 0xdc, 0xed, 0xfe, 0x0f,
-+ 0x20, 0x31, 0x42, 0x53, 0x64, 0x75, 0x86, 0x97,
-+ 0xa8, 0xb9, 0xca, 0xdb, 0xec, 0xfd, 0x0e, 0x1f,
-+ 0x30, 0x41, 0x52, 0x63, 0x74, 0x85, 0x96, 0xa7,
-+ 0xb8, 0xc9, 0xda, 0xeb, 0xfc, 0x0d, 0x1e, 0x2f,
-+ 0x40, 0x51, 0x62, 0x73, 0x84, 0x95, 0xa6, 0xb7,
-+ 0xc8, 0xd9, 0xea, 0xfb, 0x0c, 0x1d, 0x2e, 0x3f,
-+ 0x50, 0x61, 0x72, 0x83, 0x94, 0xa5, 0xb6, 0xc7,
-+ 0xd8, 0xe9, 0xfa, 0x0b, 0x1c, 0x2d, 0x3e, 0x4f,
-+ 0x60, 0x71, 0x82, 0x93, 0xa4, 0xb5, 0xc6, 0xd7,
-+ 0xe8, 0xf9, 0x0a, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f,
-+ 0x70, 0x81, 0x92, 0xa3, 0xb4, 0xc5, 0xd6, 0xe7,
-+ 0xf8, 0x09, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e, 0x6f,
-+ 0x80, 0x91, 0xa2, 0xb3, 0xc4, 0xd5, 0xe6, 0xf7,
-+ 0x08, 0x19, 0x2a, 0x3b, 0x4c, 0x5d, 0x6e, 0x7f,
-+ 0x90, 0xa1, 0xb2, 0xc3, 0xd4, 0xe5, 0xf6, 0x07,
-+ 0x18, 0x29, 0x3a, 0x4b, 0x5c, 0x6d, 0x7e, 0x8f,
-+ 0xa0, 0xb1, 0xc2, 0xd3, 0xe4, 0xf5, 0x06, 0x17,
-+ 0x28, 0x39, 0x4a, 0x5b, 0x6c, 0x7d, 0x8e, 0x9f,
-+ 0xb0, 0xc1, 0xd2, 0xe3, 0xf4, 0x05, 0x16, 0x27,
-+ 0x38, 0x49, 0x5a, 0x6b, 0x7c, 0x8d, 0x9e, 0xaf,
-+ 0xc0, 0xd1, 0xe2, 0xf3, 0x04, 0x15, 0x26, 0x37,
-+ 0x48, 0x59, 0x6a, 0x7b, 0x8c, 0x9d, 0xae, 0xbf,
-+ 0xd0, 0xe1, 0xf2, 0x03, 0x14, 0x25, 0x36, 0x47,
-+ 0x58, 0x69, 0x7a, 0x8b, 0x9c, 0xad, 0xbe, 0xcf,
-+ 0xe0, 0xf1, 0x02, 0x13, 0x24, 0x35, 0x46, 0x57,
-+ 0x68, 0x79, 0x8a, 0x9b, 0xac, 0xbd, 0xce, 0xdf,
-+ 0xf0, 0x01, 0x12, 0x23, 0x34, 0x45, 0x56, 0x67,
-+ 0x78, 0x89, 0x9a, 0xab, 0xbc, 0xcd, 0xde, 0xef,
-+ 0x00, 0x13, 0x26, 0x39, 0x4c, 0x5f, 0x72, 0x85,
-+ 0x98, 0xab, 0xbe, 0xd1, 0xe4, 0xf7, 0x0a, 0x1d,
-+ 0x30, 0x43, 0x56, 0x69, 0x7c, 0x8f, 0xa2, 0xb5,
-+ 0xc8, 0xdb, 0xee, 0x01, 0x14, 0x27, 0x3a, 0x4d,
-+ 0x60, 0x73, 0x86, 0x99, 0xac, 0xbf, 0xd2, 0xe5,
-+ 0xf8, 0x0b, 0x1e, 0x31, 0x44, 0x57, 0x6a, 0x7d,
-+ 0x90, 0xa3, 0xb6, 0xc9, 0xdc, 0xef, 0x02, 0x15,
-+ 0x28, 0x3b, 0x4e, 0x61, 0x74, 0x87, 0x9a, 0xad,
-+ 0xc0, 0xd3, 0xe6, 0xf9, 0x0c, 0x1f, 0x32, 0x45,
-+ 0x58, 0x6b, 0x7e, 0x91, 0xa4, 0xb7, 0xca, 0xdd,
-+ 0xf0, 0x03, 0x16, 0x29, 0x3c, 0x4f, 0x62, 0x75,
-+ 0x88, 0x9b, 0xae, 0xc1, 0xd4, 0xe7, 0xfa, 0x0d,
-+ 0x20, 0x33, 0x46, 0x59, 0x6c, 0x7f, 0x92, 0xa5,
-+ 0xb8, 0xcb, 0xde, 0xf1, 0x04, 0x17, 0x2a, 0x3d,
-+ 0x50, 0x63, 0x76, 0x89, 0x9c, 0xaf, 0xc2, 0xd5,
-+ 0xe8, 0xfb, 0x0e, 0x21, 0x34, 0x47, 0x5a, 0x6d,
-+ 0x80, 0x93, 0xa6, 0xb9, 0xcc, 0xdf, 0xf2, 0x05,
-+ 0x18, 0x2b, 0x3e, 0x51, 0x64, 0x77, 0x8a, 0x9d,
-+ 0xb0, 0xc3, 0xd6, 0xe9, 0xfc, 0x0f, 0x22, 0x35,
-+ 0x48, 0x5b, 0x6e, 0x81, 0x94, 0xa7, 0xba, 0xcd,
-+ 0xe0, 0xf3, 0x06, 0x19, 0x2c, 0x3f, 0x52, 0x65,
-+ 0x78, 0x8b, 0x9e, 0xb1, 0xc4, 0xd7, 0xea, 0xfd,
-+ 0x10, 0x23, 0x36, 0x49, 0x5c, 0x6f, 0x82, 0x95,
-+ 0xa8, 0xbb, 0xce, 0xe1, 0xf4, 0x07, 0x1a, 0x2d,
-+ 0x40, 0x53, 0x66, 0x79, 0x8c, 0x9f, 0xb2, 0xc5,
-+ 0xd8, 0xeb, 0xfe, 0x11, 0x24, 0x37, 0x4a, 0x5d,
-+ 0x70, 0x83, 0x96, 0xa9, 0xbc, 0xcf, 0xe2, 0xf5,
-+ 0x08, 0x1b, 0x2e, 0x41, 0x54, 0x67, 0x7a, 0x8d,
-+ 0xa0, 0xb3, 0xc6, 0xd9, 0xec, 0xff, 0x12, 0x25,
-+ 0x38, 0x4b, 0x5e, 0x71, 0x84, 0x97, 0xaa, 0xbd,
-+ 0xd0, 0xe3, 0xf6, 0x09, 0x1c, 0x2f, 0x42, 0x55,
-+ 0x68, 0x7b, 0x8e, 0xa1, 0xb4, 0xc7, 0xda, 0xed,
-+ 0x00, 0x15, 0x2a, 0x3f, 0x54, 0x69, 0x7e, 0x93,
-+ 0xa8, 0xbd, 0xd2, 0xe7, 0xfc, 0x11, 0x26, 0x3b,
-+ 0x50, 0x65, 0x7a, 0x8f, 0xa4, 0xb9, 0xce, 0xe3,
-+ 0xf8, 0x0d, 0x22, 0x37, 0x4c, 0x61, 0x76, 0x8b,
-+ 0xa0, 0xb5, 0xca, 0xdf, 0xf4, 0x09, 0x1e, 0x33,
-+ 0x48, 0x5d, 0x72, 0x87, 0x9c, 0xb1, 0xc6, 0xdb,
-+ 0xf0, 0x05, 0x1a, 0x2f, 0x44, 0x59, 0x6e, 0x83,
-+ 0x98, 0xad, 0xc2, 0xd7, 0xec, 0x01, 0x16, 0x2b,
-+ 0x40, 0x55, 0x6a, 0x7f, 0x94, 0xa9, 0xbe, 0xd3,
-+ 0xe8, 0xfd, 0x12, 0x27, 0x3c, 0x51, 0x66, 0x7b,
-+ 0x90, 0xa5, 0xba, 0xcf, 0xe4, 0xf9, 0x0e, 0x23,
-+ 0x38, 0x4d, 0x62, 0x77, 0x8c, 0xa1, 0xb6, 0xcb,
-+ 0xe0, 0xf5, 0x0a, 0x1f, 0x34, 0x49, 0x5e, 0x73,
-+ 0x88, 0x9d, 0xb2, 0xc7, 0xdc, 0xf1, 0x06, 0x1b,
-+ 0x30, 0x45, 0x5a, 0x6f, 0x84, 0x99, 0xae, 0xc3,
-+ 0xd8, 0xed, 0x02, 0x17, 0x2c, 0x41, 0x56, 0x6b,
-+ 0x80, 0x95, 0xaa, 0xbf, 0xd4, 0xe9, 0xfe, 0x13,
-+ 0x28, 0x3d, 0x52, 0x67, 0x7c, 0x91, 0xa6, 0xbb,
-+ 0xd0, 0xe5, 0xfa, 0x0f, 0x24, 0x39, 0x4e, 0x63,
-+ 0x78, 0x8d, 0xa2, 0xb7, 0xcc, 0xe1, 0xf6, 0x0b,
-+ 0x20, 0x35, 0x4a, 0x5f, 0x74, 0x89, 0x9e, 0xb3,
-+ 0xc8, 0xdd, 0xf2, 0x07, 0x1c, 0x31, 0x46, 0x5b,
-+ 0x70, 0x85, 0x9a, 0xaf, 0xc4, 0xd9, 0xee, 0x03,
-+ 0x18, 0x2d, 0x42, 0x57, 0x6c, 0x81, 0x96, 0xab,
-+ 0xc0, 0xd5, 0xea, 0xff, 0x14, 0x29, 0x3e, 0x53,
-+ 0x68, 0x7d, 0x92, 0xa7, 0xbc, 0xd1, 0xe6, 0xfb,
-+ 0x10, 0x25, 0x3a, 0x4f, 0x64, 0x79, 0x8e, 0xa3,
-+ 0xb8, 0xcd, 0xe2, 0xf7, 0x0c, 0x21, 0x36, 0x4b,
-+ 0x60, 0x75, 0x8a, 0x9f, 0xb4, 0xc9, 0xde, 0xf3,
-+ 0x08, 0x1d, 0x32, 0x47, 0x5c, 0x71, 0x86, 0x9b,
-+ 0xb0, 0xc5, 0xda, 0xef, 0x04, 0x19, 0x2e, 0x43,
-+ 0x58, 0x6d, 0x82, 0x97, 0xac, 0xc1, 0xd6, 0xeb,
-+ 0x00, 0x17, 0x2e, 0x45, 0x5c, 0x73, 0x8a, 0xa1,
-+ 0xb8, 0xcf, 0xe6, 0xfd, 0x14, 0x2b, 0x42, 0x59,
-+ 0x70, 0x87, 0x9e, 0xb5, 0xcc, 0xe3, 0xfa, 0x11,
-+ 0x28, 0x3f, 0x56, 0x6d, 0x84, 0x9b, 0xb2, 0xc9,
-+ 0xe0, 0xf7, 0x0e, 0x25, 0x3c, 0x53, 0x6a, 0x81,
-+ 0x98, 0xaf, 0xc6, 0xdd, 0xf4, 0x0b, 0x22, 0x39,
-+ 0x50, 0x67, 0x7e, 0x95, 0xac, 0xc3, 0xda, 0xf1,
-+ 0x08, 0x1f, 0x36, 0x4d, 0x64, 0x7b, 0x92, 0xa9,
-+ 0xc0, 0xd7, 0xee, 0x05, 0x1c, 0x33, 0x4a, 0x61,
-+ 0x78, 0x8f, 0xa6, 0xbd, 0xd4, 0xeb, 0x02, 0x19,
-+ 0x30, 0x47, 0x5e, 0x75, 0x8c, 0xa3, 0xba, 0xd1,
-+ 0xe8, 0xff, 0x16, 0x2d, 0x44, 0x5b, 0x72, 0x89,
-+ 0xa0, 0xb7, 0xce, 0xe5, 0xfc, 0x13, 0x2a, 0x41,
-+ 0x58, 0x6f, 0x86, 0x9d, 0xb4, 0xcb, 0xe2, 0xf9,
-+ 0x10, 0x27, 0x3e, 0x55, 0x6c, 0x83, 0x9a, 0xb1,
-+ 0xc8, 0xdf, 0xf6, 0x0d, 0x24, 0x3b, 0x52, 0x69,
-+ 0x80, 0x97, 0xae, 0xc5, 0xdc, 0xf3, 0x0a, 0x21,
-+ 0x38, 0x4f, 0x66, 0x7d, 0x94, 0xab, 0xc2, 0xd9,
-+ 0xf0, 0x07, 0x1e, 0x35, 0x4c, 0x63, 0x7a, 0x91,
-+ 0xa8, 0xbf, 0xd6, 0xed, 0x04, 0x1b, 0x32, 0x49,
-+ 0x60, 0x77, 0x8e, 0xa5, 0xbc, 0xd3, 0xea, 0x01,
-+ 0x18, 0x2f, 0x46, 0x5d, 0x74, 0x8b, 0xa2, 0xb9,
-+ 0xd0, 0xe7, 0xfe, 0x15, 0x2c, 0x43, 0x5a, 0x71,
-+ 0x88, 0x9f, 0xb6, 0xcd, 0xe4, 0xfb, 0x12, 0x29,
-+ 0x40, 0x57, 0x6e, 0x85, 0x9c, 0xb3, 0xca, 0xe1,
-+ 0xf8, 0x0f, 0x26, 0x3d, 0x54, 0x6b, 0x82, 0x99,
-+ 0xb0, 0xc7, 0xde, 0xf5, 0x0c, 0x23, 0x3a, 0x51,
-+ 0x68, 0x7f, 0x96, 0xad, 0xc4, 0xdb, 0xf2, 0x09,
-+ 0x20, 0x37, 0x4e, 0x65, 0x7c, 0x93, 0xaa, 0xc1,
-+ 0xd8, 0xef, 0x06, 0x1d, 0x34, 0x4b, 0x62, 0x79,
-+ 0x90, 0xa7, 0xbe, 0xd5, 0xec, 0x03, 0x1a, 0x31,
-+ 0x48, 0x5f, 0x76, 0x8d, 0xa4, 0xbb, 0xd2, 0xe9,
-+ 0x00, 0x19, 0x32, 0x4b, 0x64, 0x7d, 0x96, 0xaf,
-+ 0xc8, 0xe1, 0xfa, 0x13, 0x2c, 0x45, 0x5e, 0x77,
-+ 0x90, 0xa9, 0xc2, 0xdb, 0xf4, 0x0d, 0x26, 0x3f,
-+ 0x58, 0x71, 0x8a, 0xa3, 0xbc, 0xd5, 0xee, 0x07,
-+ 0x20, 0x39, 0x52, 0x6b, 0x84, 0x9d, 0xb6, 0xcf,
-+ 0xe8, 0x01, 0x1a, 0x33, 0x4c, 0x65, 0x7e, 0x97,
-+ 0xb0, 0xc9, 0xe2, 0xfb, 0x14, 0x2d, 0x46, 0x5f,
-+ 0x78, 0x91, 0xaa, 0xc3, 0xdc, 0xf5, 0x0e, 0x27,
-+ 0x40, 0x59, 0x72, 0x8b, 0xa4, 0xbd, 0xd6, 0xef,
-+ 0x08, 0x21, 0x3a, 0x53, 0x6c, 0x85, 0x9e, 0xb7,
-+ 0xd0, 0xe9, 0x02, 0x1b, 0x34, 0x4d, 0x66, 0x7f,
-+ 0x98, 0xb1, 0xca, 0xe3, 0xfc, 0x15, 0x2e, 0x47,
-+ 0x60, 0x79, 0x92, 0xab, 0xc4, 0xdd, 0xf6, 0x0f,
-+ 0x28, 0x41, 0x5a, 0x73, 0x8c, 0xa5, 0xbe, 0xd7,
-+ 0xf0, 0x09, 0x22, 0x3b, 0x54, 0x6d, 0x86, 0x9f,
-+ 0xb8, 0xd1, 0xea, 0x03, 0x1c, 0x35, 0x4e, 0x67,
-+ 0x80, 0x99, 0xb2, 0xcb, 0xe4, 0xfd, 0x16, 0x2f,
-+ 0x48, 0x61, 0x7a, 0x93, 0xac, 0xc5, 0xde, 0xf7,
-+ 0x10, 0x29, 0x42, 0x5b, 0x74, 0x8d, 0xa6, 0xbf,
-+ 0xd8, 0xf1, 0x0a, 0x23, 0x3c, 0x55, 0x6e, 0x87,
-+ 0xa0, 0xb9, 0xd2, 0xeb, 0x04, 0x1d, 0x36, 0x4f,
-+ 0x68, 0x81, 0x9a, 0xb3, 0xcc, 0xe5, 0xfe, 0x17,
-+ 0x30, 0x49, 0x62, 0x7b, 0x94, 0xad, 0xc6, 0xdf,
-+ 0xf8, 0x11, 0x2a, 0x43, 0x5c, 0x75, 0x8e, 0xa7,
-+ 0xc0, 0xd9, 0xf2, 0x0b, 0x24, 0x3d, 0x56, 0x6f,
-+ 0x88, 0xa1, 0xba, 0xd3, 0xec, 0x05, 0x1e, 0x37,
-+ 0x50, 0x69, 0x82, 0x9b, 0xb4, 0xcd, 0xe6, 0xff,
-+ 0x18, 0x31, 0x4a, 0x63, 0x7c, 0x95, 0xae, 0xc7,
-+ 0xe0, 0xf9, 0x12, 0x2b, 0x44, 0x5d, 0x76, 0x8f,
-+ 0xa8, 0xc1, 0xda, 0xf3, 0x0c, 0x25, 0x3e, 0x57,
-+ 0x70, 0x89, 0xa2, 0xbb, 0xd4, 0xed, 0x06, 0x1f,
-+ 0x38, 0x51, 0x6a, 0x83, 0x9c, 0xb5, 0xce, 0xe7,
-+ 0x00, 0x1b, 0x36, 0x51, 0x6c, 0x87, 0xa2, 0xbd,
-+ 0xd8, 0xf3, 0x0e, 0x29, 0x44, 0x5f, 0x7a, 0x95,
-+ 0xb0, 0xcb, 0xe6, 0x01, 0x1c, 0x37, 0x52, 0x6d,
-+ 0x88, 0xa3, 0xbe, 0xd9, 0xf4, 0x0f, 0x2a, 0x45,
-+ 0x60, 0x7b, 0x96, 0xb1, 0xcc, 0xe7, 0x02, 0x1d,
-+ 0x38, 0x53, 0x6e, 0x89, 0xa4, 0xbf, 0xda, 0xf5,
-+ 0x10, 0x2b, 0x46, 0x61, 0x7c, 0x97, 0xb2, 0xcd,
-+ 0xe8, 0x03, 0x1e, 0x39, 0x54, 0x6f, 0x8a, 0xa5,
-+ 0xc0, 0xdb, 0xf6, 0x11, 0x2c, 0x47, 0x62, 0x7d,
-+ 0x98, 0xb3, 0xce, 0xe9, 0x04, 0x1f, 0x3a, 0x55,
-+ 0x70, 0x8b, 0xa6, 0xc1, 0xdc, 0xf7, 0x12, 0x2d,
-+ 0x48, 0x63, 0x7e, 0x99, 0xb4, 0xcf, 0xea, 0x05,
-+ 0x20, 0x3b, 0x56, 0x71, 0x8c, 0xa7, 0xc2, 0xdd,
-+ 0xf8, 0x13, 0x2e, 0x49, 0x64, 0x7f, 0x9a, 0xb5,
-+ 0xd0, 0xeb, 0x06, 0x21, 0x3c, 0x57, 0x72, 0x8d,
-+ 0xa8, 0xc3, 0xde, 0xf9, 0x14, 0x2f, 0x4a, 0x65,
-+ 0x80, 0x9b, 0xb6, 0xd1, 0xec, 0x07, 0x22, 0x3d,
-+ 0x58, 0x73, 0x8e, 0xa9, 0xc4, 0xdf, 0xfa, 0x15,
-+ 0x30, 0x4b, 0x66, 0x81, 0x9c, 0xb7, 0xd2, 0xed,
-+ 0x08, 0x23, 0x3e, 0x59, 0x74, 0x8f, 0xaa, 0xc5,
-+ 0xe0, 0xfb, 0x16, 0x31, 0x4c, 0x67, 0x82, 0x9d,
-+ 0xb8, 0xd3, 0xee, 0x09, 0x24, 0x3f, 0x5a, 0x75,
-+ 0x90, 0xab, 0xc6, 0xe1, 0xfc, 0x17, 0x32, 0x4d,
-+ 0x68, 0x83, 0x9e, 0xb9, 0xd4, 0xef, 0x0a, 0x25,
-+ 0x40, 0x5b, 0x76, 0x91, 0xac, 0xc7, 0xe2, 0xfd,
-+ 0x18, 0x33, 0x4e, 0x69, 0x84, 0x9f, 0xba, 0xd5,
-+ 0xf0, 0x0b, 0x26, 0x41, 0x5c, 0x77, 0x92, 0xad,
-+ 0xc8, 0xe3, 0xfe, 0x19, 0x34, 0x4f, 0x6a, 0x85,
-+ 0xa0, 0xbb, 0xd6, 0xf1, 0x0c, 0x27, 0x42, 0x5d,
-+ 0x78, 0x93, 0xae, 0xc9, 0xe4, 0xff, 0x1a, 0x35,
-+ 0x50, 0x6b, 0x86, 0xa1, 0xbc, 0xd7, 0xf2, 0x0d,
-+ 0x28, 0x43, 0x5e, 0x79, 0x94, 0xaf, 0xca, 0xe5,
-+ 0x00, 0x1d, 0x3a, 0x57, 0x74, 0x91, 0xae, 0xcb,
-+ 0xe8, 0x05, 0x22, 0x3f, 0x5c, 0x79, 0x96, 0xb3,
-+ 0xd0, 0xed, 0x0a, 0x27, 0x44, 0x61, 0x7e, 0x9b,
-+ 0xb8, 0xd5, 0xf2, 0x0f, 0x2c, 0x49, 0x66, 0x83,
-+ 0xa0, 0xbd, 0xda, 0xf7, 0x14, 0x31, 0x4e, 0x6b,
-+ 0x88, 0xa5, 0xc2, 0xdf, 0xfc, 0x19, 0x36, 0x53,
-+ 0x70, 0x8d, 0xaa, 0xc7, 0xe4, 0x01, 0x1e, 0x3b,
-+ 0x58, 0x75, 0x92, 0xaf, 0xcc, 0xe9, 0x06, 0x23,
-+ 0x40, 0x5d, 0x7a, 0x97, 0xb4, 0xd1, 0xee, 0x0b,
-+ 0x28, 0x45, 0x62, 0x7f, 0x9c, 0xb9, 0xd6, 0xf3,
-+ 0x10, 0x2d, 0x4a, 0x67, 0x84, 0xa1, 0xbe, 0xdb,
-+ 0xf8, 0x15, 0x32, 0x4f, 0x6c, 0x89, 0xa6, 0xc3,
-+ 0xe0, 0xfd, 0x1a, 0x37, 0x54, 0x71, 0x8e, 0xab,
-+ 0xc8, 0xe5, 0x02, 0x1f, 0x3c, 0x59, 0x76, 0x93,
-+ 0xb0, 0xcd, 0xea, 0x07, 0x24, 0x41, 0x5e, 0x7b,
-+ 0x98, 0xb5, 0xd2, 0xef, 0x0c, 0x29, 0x46, 0x63,
-+ 0x80, 0x9d, 0xba, 0xd7, 0xf4, 0x11, 0x2e, 0x4b,
-+ 0x68, 0x85, 0xa2, 0xbf, 0xdc, 0xf9, 0x16, 0x33,
-+ 0x50, 0x6d, 0x8a, 0xa7, 0xc4, 0xe1, 0xfe, 0x1b,
-+ 0x38, 0x55, 0x72, 0x8f, 0xac, 0xc9, 0xe6, 0x03,
-+ 0x20, 0x3d, 0x5a, 0x77, 0x94, 0xb1, 0xce, 0xeb,
-+ 0x08, 0x25, 0x42, 0x5f, 0x7c, 0x99, 0xb6, 0xd3,
-+ 0xf0, 0x0d, 0x2a, 0x47, 0x64, 0x81, 0x9e, 0xbb,
-+ 0xd8, 0xf5, 0x12, 0x2f, 0x4c, 0x69, 0x86, 0xa3,
-+ 0xc0, 0xdd, 0xfa, 0x17, 0x34, 0x51, 0x6e, 0x8b,
-+ 0xa8, 0xc5, 0xe2, 0xff, 0x1c, 0x39, 0x56, 0x73,
-+ 0x90, 0xad, 0xca, 0xe7, 0x04, 0x21, 0x3e, 0x5b,
-+ 0x78, 0x95, 0xb2, 0xcf, 0xec, 0x09, 0x26, 0x43,
-+ 0x60, 0x7d, 0x9a, 0xb7, 0xd4, 0xf1, 0x0e, 0x2b,
-+ 0x48, 0x65, 0x82, 0x9f, 0xbc, 0xd9, 0xf6, 0x13,
-+ 0x30, 0x4d, 0x6a, 0x87, 0xa4, 0xc1, 0xde, 0xfb,
-+ 0x18, 0x35, 0x52, 0x6f, 0x8c, 0xa9, 0xc6, 0xe3,
-+ 0x00, 0x1f, 0x3e, 0x5d, 0x7c, 0x9b, 0xba, 0xd9,
-+ 0xf8, 0x17, 0x36, 0x55, 0x74, 0x93, 0xb2, 0xd1,
-+ 0xf0, 0x0f, 0x2e, 0x4d, 0x6c, 0x8b, 0xaa, 0xc9,
-+ 0xe8, 0x07, 0x26, 0x45, 0x64, 0x83, 0xa2, 0xc1,
-+ 0xe0, 0xff, 0x1e, 0x3d, 0x5c, 0x7b, 0x9a, 0xb9,
-+ 0xd8, 0xf7, 0x16, 0x35, 0x54, 0x73, 0x92, 0xb1,
-+ 0xd0, 0xef, 0x0e, 0x2d, 0x4c, 0x6b, 0x8a, 0xa9,
-+ 0xc8, 0xe7, 0x06, 0x25, 0x44, 0x63, 0x82, 0xa1,
-+ 0xc0, 0xdf, 0xfe, 0x1d, 0x3c, 0x5b, 0x7a, 0x99,
-+ 0xb8, 0xd7, 0xf6, 0x15, 0x34, 0x53, 0x72, 0x91,
-+ 0xb0, 0xcf, 0xee, 0x0d, 0x2c, 0x4b, 0x6a, 0x89,
-+ 0xa8, 0xc7, 0xe6, 0x05, 0x24, 0x43, 0x62, 0x81,
-+ 0xa0, 0xbf, 0xde, 0xfd, 0x1c, 0x3b, 0x5a, 0x79,
-+ 0x98, 0xb7, 0xd6, 0xf5, 0x14, 0x33, 0x52, 0x71,
-+ 0x90, 0xaf, 0xce, 0xed, 0x0c, 0x2b, 0x4a, 0x69,
-+ 0x88, 0xa7, 0xc6, 0xe5, 0x04, 0x23, 0x42, 0x61,
-+ 0x80, 0x9f, 0xbe, 0xdd, 0xfc, 0x1b, 0x3a, 0x59,
-+ 0x78, 0x97, 0xb6, 0xd5, 0xf4, 0x13, 0x32, 0x51,
-+ 0x70, 0x8f, 0xae, 0xcd, 0xec, 0x0b, 0x2a, 0x49,
-+ 0x68, 0x87, 0xa6, 0xc5, 0xe4, 0x03, 0x22, 0x41,
-+ 0x60, 0x7f, 0x9e, 0xbd, 0xdc, 0xfb, 0x1a, 0x39,
-+ 0x58, 0x77, 0x96, 0xb5, 0xd4, 0xf3, 0x12, 0x31,
-+ 0x50, 0x6f, 0x8e, 0xad, 0xcc, 0xeb, 0x0a, 0x29,
-+ 0x48, 0x67, 0x86, 0xa5, 0xc4, 0xe3, 0x02, 0x21,
-+ 0x40, 0x5f, 0x7e, 0x9d, 0xbc, 0xdb, 0xfa, 0x19,
-+ 0x38, 0x57, 0x76, 0x95, 0xb4, 0xd3, 0xf2, 0x11,
-+ 0x30, 0x4f, 0x6e, 0x8d, 0xac, 0xcb, 0xea, 0x09,
-+ 0x28, 0x47, 0x66, 0x85, 0xa4, 0xc3, 0xe2, 0x01,
-+ 0x20, 0x3f, 0x5e, 0x7d, 0x9c, 0xbb, 0xda, 0xf9,
-+ 0x18, 0x37, 0x56, 0x75, 0x94, 0xb3, 0xd2, 0xf1,
-+ 0x10, 0x2f, 0x4e, 0x6d, 0x8c, 0xab, 0xca, 0xe9,
-+ 0x08, 0x27, 0x46, 0x65, 0x84, 0xa3, 0xc2, 0xe1,
-+ 0x00, 0x21, 0x42, 0x63,
-+ },
-+ .ilen = 4100,
-+ .result = {
-+ 0xb5, 0x81, 0xf5, 0x64, 0x18, 0x73, 0xe3, 0xf0,
-+ 0x4c, 0x13, 0xf2, 0x77, 0x18, 0x60, 0x65, 0x5e,
-+ 0x29, 0x01, 0xce, 0x98, 0x55, 0x53, 0xf9, 0x0c,
-+ 0x2a, 0x08, 0xd5, 0x09, 0xb3, 0x57, 0x55, 0x56,
-+ 0xc5, 0xe9, 0x56, 0x90, 0xcb, 0x6a, 0xa3, 0xc0,
-+ 0xff, 0xc4, 0x79, 0xb4, 0xd2, 0x97, 0x5d, 0xc4,
-+ 0x43, 0xd1, 0xfe, 0x94, 0x7b, 0x88, 0x06, 0x5a,
-+ 0xb2, 0x9e, 0x2c, 0xfc, 0x44, 0x03, 0xb7, 0x90,
-+ 0xa0, 0xc1, 0xba, 0x6a, 0x33, 0xb8, 0xc7, 0xb2,
-+ 0x9d, 0xe1, 0x12, 0x4f, 0xc0, 0x64, 0xd4, 0x01,
-+ 0xfe, 0x8c, 0x7a, 0x66, 0xf7, 0xe6, 0x5a, 0x91,
-+ 0xbb, 0xde, 0x56, 0x86, 0xab, 0x65, 0x21, 0x30,
-+ 0x00, 0x84, 0x65, 0x24, 0xa5, 0x7d, 0x85, 0xb4,
-+ 0xe3, 0x17, 0xed, 0x3a, 0xb7, 0x6f, 0xb4, 0x0b,
-+ 0x0b, 0xaf, 0x15, 0xae, 0x5a, 0x8f, 0xf2, 0x0c,
-+ 0x2f, 0x27, 0xf4, 0x09, 0xd8, 0xd2, 0x96, 0xb7,
-+ 0x71, 0xf2, 0xc5, 0x99, 0x4d, 0x7e, 0x7f, 0x75,
-+ 0x77, 0x89, 0x30, 0x8b, 0x59, 0xdb, 0xa2, 0xb2,
-+ 0xa0, 0xf3, 0x19, 0x39, 0x2b, 0xc5, 0x7e, 0x3f,
-+ 0x4f, 0xd9, 0xd3, 0x56, 0x28, 0x97, 0x44, 0xdc,
-+ 0xc0, 0x8b, 0x77, 0x24, 0xd9, 0x52, 0xe7, 0xc5,
-+ 0xaf, 0xf6, 0x7d, 0x59, 0xb2, 0x44, 0x05, 0x1d,
-+ 0xb1, 0xb0, 0x11, 0xa5, 0x0f, 0xec, 0x33, 0xe1,
-+ 0x6d, 0x1b, 0x4e, 0x1f, 0xff, 0x57, 0x91, 0xb4,
-+ 0x5b, 0x9a, 0x96, 0xc5, 0x53, 0xbc, 0xae, 0x20,
-+ 0x3c, 0xbb, 0x14, 0xe2, 0xe8, 0x22, 0x33, 0xc1,
-+ 0x5e, 0x76, 0x9e, 0x46, 0x99, 0xf6, 0x2a, 0x15,
-+ 0xc6, 0x97, 0x02, 0xa0, 0x66, 0x43, 0xd1, 0xa6,
-+ 0x31, 0xa6, 0x9f, 0xfb, 0xf4, 0xd3, 0x69, 0xe5,
-+ 0xcd, 0x76, 0x95, 0xb8, 0x7a, 0x82, 0x7f, 0x21,
-+ 0x45, 0xff, 0x3f, 0xce, 0x55, 0xf6, 0x95, 0x10,
-+ 0x08, 0x77, 0x10, 0x43, 0xc6, 0xf3, 0x09, 0xe5,
-+ 0x68, 0xe7, 0x3c, 0xad, 0x00, 0x52, 0x45, 0x0d,
-+ 0xfe, 0x2d, 0xc6, 0xc2, 0x94, 0x8c, 0x12, 0x1d,
-+ 0xe6, 0x25, 0xae, 0x98, 0x12, 0x8e, 0x19, 0x9c,
-+ 0x81, 0x68, 0xb1, 0x11, 0xf6, 0x69, 0xda, 0xe3,
-+ 0x62, 0x08, 0x18, 0x7a, 0x25, 0x49, 0x28, 0xac,
-+ 0xba, 0x71, 0x12, 0x0b, 0xe4, 0xa2, 0xe5, 0xc7,
-+ 0x5d, 0x8e, 0xec, 0x49, 0x40, 0x21, 0xbf, 0x5a,
-+ 0x98, 0xf3, 0x02, 0x68, 0x55, 0x03, 0x7f, 0x8a,
-+ 0xe5, 0x94, 0x0c, 0x32, 0x5c, 0x07, 0x82, 0x63,
-+ 0xaf, 0x6f, 0x91, 0x40, 0x84, 0x8e, 0x52, 0x25,
-+ 0xd0, 0xb0, 0x29, 0x53, 0x05, 0xe2, 0x50, 0x7a,
-+ 0x34, 0xeb, 0xc9, 0x46, 0x20, 0xa8, 0x3d, 0xde,
-+ 0x7f, 0x16, 0x5f, 0x36, 0xc5, 0x2e, 0xdc, 0xd1,
-+ 0x15, 0x47, 0xc7, 0x50, 0x40, 0x6d, 0x91, 0xc5,
-+ 0xe7, 0x93, 0x95, 0x1a, 0xd3, 0x57, 0xbc, 0x52,
-+ 0x33, 0xee, 0x14, 0x19, 0x22, 0x52, 0x89, 0xa7,
-+ 0x4a, 0x25, 0x56, 0x77, 0x4b, 0xca, 0xcf, 0x0a,
-+ 0xe1, 0xf5, 0x35, 0x85, 0x30, 0x7e, 0x59, 0x4a,
-+ 0xbd, 0x14, 0x5b, 0xdf, 0xe3, 0x46, 0xcb, 0xac,
-+ 0x1f, 0x6c, 0x96, 0x0e, 0xf4, 0x81, 0xd1, 0x99,
-+ 0xca, 0x88, 0x63, 0x3d, 0x02, 0x58, 0x6b, 0xa9,
-+ 0xe5, 0x9f, 0xb3, 0x00, 0xb2, 0x54, 0xc6, 0x74,
-+ 0x1c, 0xbf, 0x46, 0xab, 0x97, 0xcc, 0xf8, 0x54,
-+ 0x04, 0x07, 0x08, 0x52, 0xe6, 0xc0, 0xda, 0x93,
-+ 0x74, 0x7d, 0x93, 0x99, 0x5d, 0x78, 0x68, 0xa6,
-+ 0x2e, 0x6b, 0xd3, 0x6a, 0x69, 0xcc, 0x12, 0x6b,
-+ 0xd4, 0xc7, 0xa5, 0xc6, 0xe7, 0xf6, 0x03, 0x04,
-+ 0x5d, 0xcd, 0x61, 0x5e, 0x17, 0x40, 0xdc, 0xd1,
-+ 0x5c, 0xf5, 0x08, 0xdf, 0x5c, 0x90, 0x85, 0xa4,
-+ 0xaf, 0xf6, 0x78, 0xbb, 0x0d, 0xf1, 0xf4, 0xa4,
-+ 0x54, 0x26, 0x72, 0x9e, 0x61, 0xfa, 0x86, 0xcf,
-+ 0xe8, 0x9e, 0xa1, 0xe0, 0xc7, 0x48, 0x23, 0xae,
-+ 0x5a, 0x90, 0xae, 0x75, 0x0a, 0x74, 0x18, 0x89,
-+ 0x05, 0xb1, 0x92, 0xb2, 0x7f, 0xd0, 0x1b, 0xa6,
-+ 0x62, 0x07, 0x25, 0x01, 0xc7, 0xc2, 0x4f, 0xf9,
-+ 0xe8, 0xfe, 0x63, 0x95, 0x80, 0x07, 0xb4, 0x26,
-+ 0xcc, 0xd1, 0x26, 0xb6, 0xc4, 0x3f, 0x9e, 0xcb,
-+ 0x8e, 0x3b, 0x2e, 0x44, 0x16, 0xd3, 0x10, 0x9a,
-+ 0x95, 0x08, 0xeb, 0xc8, 0xcb, 0xeb, 0xbf, 0x6f,
-+ 0x0b, 0xcd, 0x1f, 0xc8, 0xca, 0x86, 0xaa, 0xec,
-+ 0x33, 0xe6, 0x69, 0xf4, 0x45, 0x25, 0x86, 0x3a,
-+ 0x22, 0x94, 0x4f, 0x00, 0x23, 0x6a, 0x44, 0xc2,
-+ 0x49, 0x97, 0x33, 0xab, 0x36, 0x14, 0x0a, 0x70,
-+ 0x24, 0xc3, 0xbe, 0x04, 0x3b, 0x79, 0xa0, 0xf9,
-+ 0xb8, 0xe7, 0x76, 0x29, 0x22, 0x83, 0xd7, 0xf2,
-+ 0x94, 0xf4, 0x41, 0x49, 0xba, 0x5f, 0x7b, 0x07,
-+ 0xb5, 0xfb, 0xdb, 0x03, 0x1a, 0x9f, 0xb6, 0x4c,
-+ 0xc2, 0x2e, 0x37, 0x40, 0x49, 0xc3, 0x38, 0x16,
-+ 0xe2, 0x4f, 0x77, 0x82, 0xb0, 0x68, 0x4c, 0x71,
-+ 0x1d, 0x57, 0x61, 0x9c, 0xd9, 0x4e, 0x54, 0x99,
-+ 0x47, 0x13, 0x28, 0x73, 0x3c, 0xbb, 0x00, 0x90,
-+ 0xf3, 0x4d, 0xc9, 0x0e, 0xfd, 0xe7, 0xb1, 0x71,
-+ 0xd3, 0x15, 0x79, 0xbf, 0xcc, 0x26, 0x2f, 0xbd,
-+ 0xad, 0x6c, 0x50, 0x69, 0x6c, 0x3e, 0x6d, 0x80,
-+ 0x9a, 0xea, 0x78, 0xaf, 0x19, 0xb2, 0x0d, 0x4d,
-+ 0xad, 0x04, 0x07, 0xae, 0x22, 0x90, 0x4a, 0x93,
-+ 0x32, 0x0e, 0x36, 0x9b, 0x1b, 0x46, 0xba, 0x3b,
-+ 0xb4, 0xac, 0xc6, 0xd1, 0xa2, 0x31, 0x53, 0x3b,
-+ 0x2a, 0x3d, 0x45, 0xfe, 0x03, 0x61, 0x10, 0x85,
-+ 0x17, 0x69, 0xa6, 0x78, 0xcc, 0x6c, 0x87, 0x49,
-+ 0x53, 0xf9, 0x80, 0x10, 0xde, 0x80, 0xa2, 0x41,
-+ 0x6a, 0xc3, 0x32, 0x02, 0xad, 0x6d, 0x3c, 0x56,
-+ 0x00, 0x71, 0x51, 0x06, 0xa7, 0xbd, 0xfb, 0xef,
-+ 0x3c, 0xb5, 0x9f, 0xfc, 0x48, 0x7d, 0x53, 0x7c,
-+ 0x66, 0xb0, 0x49, 0x23, 0xc4, 0x47, 0x10, 0x0e,
-+ 0xe5, 0x6c, 0x74, 0x13, 0xe6, 0xc5, 0x3f, 0xaa,
-+ 0xde, 0xff, 0x07, 0x44, 0xdd, 0x56, 0x1b, 0xad,
-+ 0x09, 0x77, 0xfb, 0x5b, 0x12, 0xb8, 0x0d, 0x38,
-+ 0x17, 0x37, 0x35, 0x7b, 0x9b, 0xbc, 0xfe, 0xd4,
-+ 0x7e, 0x8b, 0xda, 0x7e, 0x5b, 0x04, 0xa7, 0x22,
-+ 0xa7, 0x31, 0xa1, 0x20, 0x86, 0xc7, 0x1b, 0x99,
-+ 0xdb, 0xd1, 0x89, 0xf4, 0x94, 0xa3, 0x53, 0x69,
-+ 0x8d, 0xe7, 0xe8, 0x74, 0x11, 0x8d, 0x74, 0xd6,
-+ 0x07, 0x37, 0x91, 0x9f, 0xfd, 0x67, 0x50, 0x3a,
-+ 0xc9, 0xe1, 0xf4, 0x36, 0xd5, 0xa0, 0x47, 0xd1,
-+ 0xf9, 0xe5, 0x39, 0xa3, 0x31, 0xac, 0x07, 0x36,
-+ 0x23, 0xf8, 0x66, 0x18, 0x14, 0x28, 0x34, 0x0f,
-+ 0xb8, 0xd0, 0xe7, 0x29, 0xb3, 0x04, 0x4b, 0x55,
-+ 0x01, 0x41, 0xb2, 0x75, 0x8d, 0xcb, 0x96, 0x85,
-+ 0x3a, 0xfb, 0xab, 0x2b, 0x9e, 0xfa, 0x58, 0x20,
-+ 0x44, 0x1f, 0xc0, 0x14, 0x22, 0x75, 0x61, 0xe8,
-+ 0xaa, 0x19, 0xcf, 0xf1, 0x82, 0x56, 0xf4, 0xd7,
-+ 0x78, 0x7b, 0x3d, 0x5f, 0xb3, 0x9e, 0x0b, 0x8a,
-+ 0x57, 0x50, 0xdb, 0x17, 0x41, 0x65, 0x4d, 0xa3,
-+ 0x02, 0xc9, 0x9c, 0x9c, 0x53, 0xfb, 0x39, 0x39,
-+ 0x9b, 0x1d, 0x72, 0x24, 0xda, 0xb7, 0x39, 0xbe,
-+ 0x13, 0x3b, 0xfa, 0x29, 0xda, 0x9e, 0x54, 0x64,
-+ 0x6e, 0xba, 0xd8, 0xa1, 0xcb, 0xb3, 0x36, 0xfa,
-+ 0xcb, 0x47, 0x85, 0xe9, 0x61, 0x38, 0xbc, 0xbe,
-+ 0xc5, 0x00, 0x38, 0x2a, 0x54, 0xf7, 0xc4, 0xb9,
-+ 0xb3, 0xd3, 0x7b, 0xa0, 0xa0, 0xf8, 0x72, 0x7f,
-+ 0x8c, 0x8e, 0x82, 0x0e, 0xc6, 0x1c, 0x75, 0x9d,
-+ 0xca, 0x8e, 0x61, 0x87, 0xde, 0xad, 0x80, 0xd2,
-+ 0xf5, 0xf9, 0x80, 0xef, 0x15, 0x75, 0xaf, 0xf5,
-+ 0x80, 0xfb, 0xff, 0x6d, 0x1e, 0x25, 0xb7, 0x40,
-+ 0x61, 0x6a, 0x39, 0x5a, 0x6a, 0xb5, 0x31, 0xab,
-+ 0x97, 0x8a, 0x19, 0x89, 0x44, 0x40, 0xc0, 0xa6,
-+ 0xb4, 0x4e, 0x30, 0x32, 0x7b, 0x13, 0xe7, 0x67,
-+ 0xa9, 0x8b, 0x57, 0x04, 0xc2, 0x01, 0xa6, 0xf4,
-+ 0x28, 0x99, 0xad, 0x2c, 0x76, 0xa3, 0x78, 0xc2,
-+ 0x4a, 0xe6, 0xca, 0x5c, 0x50, 0x6a, 0xc1, 0xb0,
-+ 0x62, 0x4b, 0x10, 0x8e, 0x7c, 0x17, 0x43, 0xb3,
-+ 0x17, 0x66, 0x1c, 0x3e, 0x8d, 0x69, 0xf0, 0x5a,
-+ 0x71, 0xf5, 0x97, 0xdc, 0xd1, 0x45, 0xdd, 0x28,
-+ 0xf3, 0x5d, 0xdf, 0x53, 0x7b, 0x11, 0xe5, 0xbc,
-+ 0x4c, 0xdb, 0x1b, 0x51, 0x6b, 0xe9, 0xfb, 0x3d,
-+ 0xc1, 0xc3, 0x2c, 0xb9, 0x71, 0xf5, 0xb6, 0xb2,
-+ 0x13, 0x36, 0x79, 0x80, 0x53, 0xe8, 0xd3, 0xa6,
-+ 0x0a, 0xaf, 0xfd, 0x56, 0x97, 0xf7, 0x40, 0x8e,
-+ 0x45, 0xce, 0xf8, 0xb0, 0x9e, 0x5c, 0x33, 0x82,
-+ 0xb0, 0x44, 0x56, 0xfc, 0x05, 0x09, 0xe9, 0x2a,
-+ 0xac, 0x26, 0x80, 0x14, 0x1d, 0xc8, 0x3a, 0x35,
-+ 0x4c, 0x82, 0x97, 0xfd, 0x76, 0xb7, 0xa9, 0x0a,
-+ 0x35, 0x58, 0x79, 0x8e, 0x0f, 0x66, 0xea, 0xaf,
-+ 0x51, 0x6c, 0x09, 0xa9, 0x6e, 0x9b, 0xcb, 0x9a,
-+ 0x31, 0x47, 0xa0, 0x2f, 0x7c, 0x71, 0xb4, 0x4a,
-+ 0x11, 0xaa, 0x8c, 0x66, 0xc5, 0x64, 0xe6, 0x3a,
-+ 0x54, 0xda, 0x24, 0x6a, 0xc4, 0x41, 0x65, 0x46,
-+ 0x82, 0xa0, 0x0a, 0x0f, 0x5f, 0xfb, 0x25, 0xd0,
-+ 0x2c, 0x91, 0xa7, 0xee, 0xc4, 0x81, 0x07, 0x86,
-+ 0x75, 0x5e, 0x33, 0x69, 0x97, 0xe4, 0x2c, 0xa8,
-+ 0x9d, 0x9f, 0x0b, 0x6a, 0xbe, 0xad, 0x98, 0xda,
-+ 0x6d, 0x94, 0x41, 0xda, 0x2c, 0x1e, 0x89, 0xc4,
-+ 0xc2, 0xaf, 0x1e, 0x00, 0x05, 0x0b, 0x83, 0x60,
-+ 0xbd, 0x43, 0xea, 0x15, 0x23, 0x7f, 0xb9, 0xac,
-+ 0xee, 0x4f, 0x2c, 0xaf, 0x2a, 0xf3, 0xdf, 0xd0,
-+ 0xf3, 0x19, 0x31, 0xbb, 0x4a, 0x74, 0x84, 0x17,
-+ 0x52, 0x32, 0x2c, 0x7d, 0x61, 0xe4, 0xcb, 0xeb,
-+ 0x80, 0x38, 0x15, 0x52, 0xcb, 0x6f, 0xea, 0xe5,
-+ 0x73, 0x9c, 0xd9, 0x24, 0x69, 0xc6, 0x95, 0x32,
-+ 0x21, 0xc8, 0x11, 0xe4, 0xdc, 0x36, 0xd7, 0x93,
-+ 0x38, 0x66, 0xfb, 0xb2, 0x7f, 0x3a, 0xb9, 0xaf,
-+ 0x31, 0xdd, 0x93, 0x75, 0x78, 0x8a, 0x2c, 0x94,
-+ 0x87, 0x1a, 0x58, 0xec, 0x9e, 0x7d, 0x4d, 0xba,
-+ 0xe1, 0xe5, 0x4d, 0xfc, 0xbc, 0xa4, 0x2a, 0x14,
-+ 0xef, 0xcc, 0xa7, 0xec, 0xab, 0x43, 0x09, 0x18,
-+ 0xd3, 0xab, 0x68, 0xd1, 0x07, 0x99, 0x44, 0x47,
-+ 0xd6, 0x83, 0x85, 0x3b, 0x30, 0xea, 0xa9, 0x6b,
-+ 0x63, 0xea, 0xc4, 0x07, 0xfb, 0x43, 0x2f, 0xa4,
-+ 0xaa, 0xb0, 0xab, 0x03, 0x89, 0xce, 0x3f, 0x8c,
-+ 0x02, 0x7c, 0x86, 0x54, 0xbc, 0x88, 0xaf, 0x75,
-+ 0xd2, 0xdc, 0x63, 0x17, 0xd3, 0x26, 0xf6, 0x96,
-+ 0xa9, 0x3c, 0xf1, 0x61, 0x8c, 0x11, 0x18, 0xcc,
-+ 0xd6, 0xea, 0x5b, 0xe2, 0xcd, 0xf0, 0xf1, 0xb2,
-+ 0xe5, 0x35, 0x90, 0x1f, 0x85, 0x4c, 0x76, 0x5b,
-+ 0x66, 0xce, 0x44, 0xa4, 0x32, 0x9f, 0xe6, 0x7b,
-+ 0x71, 0x6e, 0x9f, 0x58, 0x15, 0x67, 0x72, 0x87,
-+ 0x64, 0x8e, 0x3a, 0x44, 0x45, 0xd4, 0x76, 0xfa,
-+ 0xc2, 0xf6, 0xef, 0x85, 0x05, 0x18, 0x7a, 0x9b,
-+ 0xba, 0x41, 0x54, 0xac, 0xf0, 0xfc, 0x59, 0x12,
-+ 0x3f, 0xdf, 0xa0, 0xe5, 0x8a, 0x65, 0xfd, 0x3a,
-+ 0x62, 0x8d, 0x83, 0x2c, 0x03, 0xbe, 0x05, 0x76,
-+ 0x2e, 0x53, 0x49, 0x97, 0x94, 0x33, 0xae, 0x40,
-+ 0x81, 0x15, 0xdb, 0x6e, 0xad, 0xaa, 0xf5, 0x4b,
-+ 0xe3, 0x98, 0x70, 0xdf, 0xe0, 0x7c, 0xcd, 0xdb,
-+ 0x02, 0xd4, 0x7d, 0x2f, 0xc1, 0xe6, 0xb4, 0xf3,
-+ 0xd7, 0x0d, 0x7a, 0xd9, 0x23, 0x9e, 0x87, 0x2d,
-+ 0xce, 0x87, 0xad, 0xcc, 0x72, 0x05, 0x00, 0x29,
-+ 0xdc, 0x73, 0x7f, 0x64, 0xc1, 0x15, 0x0e, 0xc2,
-+ 0xdf, 0xa7, 0x5f, 0xeb, 0x41, 0xa1, 0xcd, 0xef,
-+ 0x5c, 0x50, 0x79, 0x2a, 0x56, 0x56, 0x71, 0x8c,
-+ 0xac, 0xc0, 0x79, 0x50, 0x69, 0xca, 0x59, 0x32,
-+ 0x65, 0xf2, 0x54, 0xe4, 0x52, 0x38, 0x76, 0xd1,
-+ 0x5e, 0xde, 0x26, 0x9e, 0xfb, 0x75, 0x2e, 0x11,
-+ 0xb5, 0x10, 0xf4, 0x17, 0x73, 0xf5, 0x89, 0xc7,
-+ 0x4f, 0x43, 0x5c, 0x8e, 0x7c, 0xb9, 0x05, 0x52,
-+ 0x24, 0x40, 0x99, 0xfe, 0x9b, 0x85, 0x0b, 0x6c,
-+ 0x22, 0x3e, 0x8b, 0xae, 0x86, 0xa1, 0xd2, 0x79,
-+ 0x05, 0x68, 0x6b, 0xab, 0xe3, 0x41, 0x49, 0xed,
-+ 0x15, 0xa1, 0x8d, 0x40, 0x2d, 0x61, 0xdf, 0x1a,
-+ 0x59, 0xc9, 0x26, 0x8b, 0xef, 0x30, 0x4c, 0x88,
-+ 0x4b, 0x10, 0xf8, 0x8d, 0xa6, 0x92, 0x9f, 0x4b,
-+ 0xf3, 0xc4, 0x53, 0x0b, 0x89, 0x5d, 0x28, 0x92,
-+ 0xcf, 0x78, 0xb2, 0xc0, 0x5d, 0xed, 0x7e, 0xfc,
-+ 0xc0, 0x12, 0x23, 0x5f, 0x5a, 0x78, 0x86, 0x43,
-+ 0x6e, 0x27, 0xf7, 0x5a, 0xa7, 0x6a, 0xed, 0x19,
-+ 0x04, 0xf0, 0xb3, 0x12, 0xd1, 0xbd, 0x0e, 0x89,
-+ 0x6e, 0xbc, 0x96, 0xa8, 0xd8, 0x49, 0x39, 0x9f,
-+ 0x7e, 0x67, 0xf0, 0x2e, 0x3e, 0x01, 0xa9, 0xba,
-+ 0xec, 0x8b, 0x62, 0x8e, 0xcb, 0x4a, 0x70, 0x43,
-+ 0xc7, 0xc2, 0xc4, 0xca, 0x82, 0x03, 0x73, 0xe9,
-+ 0x11, 0xdf, 0xcf, 0x54, 0xea, 0xc9, 0xb0, 0x95,
-+ 0x51, 0xc0, 0x13, 0x3d, 0x92, 0x05, 0xfa, 0xf4,
-+ 0xa9, 0x34, 0xc8, 0xce, 0x6c, 0x3d, 0x54, 0xcc,
-+ 0xc4, 0xaf, 0xf1, 0xdc, 0x11, 0x44, 0x26, 0xa2,
-+ 0xaf, 0xf1, 0x85, 0x75, 0x7d, 0x03, 0x61, 0x68,
-+ 0x4e, 0x78, 0xc6, 0x92, 0x7d, 0x86, 0x7d, 0x77,
-+ 0xdc, 0x71, 0x72, 0xdb, 0xc6, 0xae, 0xa1, 0xcb,
-+ 0x70, 0x9a, 0x0b, 0x19, 0xbe, 0x4a, 0x6c, 0x2a,
-+ 0xe2, 0xba, 0x6c, 0x64, 0x9a, 0x13, 0x28, 0xdf,
-+ 0x85, 0x75, 0xe6, 0x43, 0xf6, 0x87, 0x08, 0x68,
-+ 0x6e, 0xba, 0x6e, 0x79, 0x9f, 0x04, 0xbc, 0x23,
-+ 0x50, 0xf6, 0x33, 0x5c, 0x1f, 0x24, 0x25, 0xbe,
-+ 0x33, 0x47, 0x80, 0x45, 0x56, 0xa3, 0xa7, 0xd7,
-+ 0x7a, 0xb1, 0x34, 0x0b, 0x90, 0x3c, 0x9c, 0xad,
-+ 0x44, 0x5f, 0x9e, 0x0e, 0x9d, 0xd4, 0xbd, 0x93,
-+ 0x5e, 0xfa, 0x3c, 0xe0, 0xb0, 0xd9, 0xed, 0xf3,
-+ 0xd6, 0x2e, 0xff, 0x24, 0xd8, 0x71, 0x6c, 0xed,
-+ 0xaf, 0x55, 0xeb, 0x22, 0xac, 0x93, 0x68, 0x32,
-+ 0x05, 0x5b, 0x47, 0xdd, 0xc6, 0x4a, 0xcb, 0xc7,
-+ 0x10, 0xe1, 0x3c, 0x92, 0x1a, 0xf3, 0x23, 0x78,
-+ 0x2b, 0xa1, 0xd2, 0x80, 0xf4, 0x12, 0xb1, 0x20,
-+ 0x8f, 0xff, 0x26, 0x35, 0xdd, 0xfb, 0xc7, 0x4e,
-+ 0x78, 0xf1, 0x2d, 0x50, 0x12, 0x77, 0xa8, 0x60,
-+ 0x7c, 0x0f, 0xf5, 0x16, 0x2f, 0x63, 0x70, 0x2a,
-+ 0xc0, 0x96, 0x80, 0x4e, 0x0a, 0xb4, 0x93, 0x35,
-+ 0x5d, 0x1d, 0x3f, 0x56, 0xf7, 0x2f, 0xbb, 0x90,
-+ 0x11, 0x16, 0x8f, 0xa2, 0xec, 0x47, 0xbe, 0xac,
-+ 0x56, 0x01, 0x26, 0x56, 0xb1, 0x8c, 0xb2, 0x10,
-+ 0xf9, 0x1a, 0xca, 0xf5, 0xd1, 0xb7, 0x39, 0x20,
-+ 0x63, 0xf1, 0x69, 0x20, 0x4f, 0x13, 0x12, 0x1f,
-+ 0x5b, 0x65, 0xfc, 0x98, 0xf7, 0xc4, 0x7a, 0xbe,
-+ 0xf7, 0x26, 0x4d, 0x2b, 0x84, 0x7b, 0x42, 0xad,
-+ 0xd8, 0x7a, 0x0a, 0xb4, 0xd8, 0x74, 0xbf, 0xc1,
-+ 0xf0, 0x6e, 0xb4, 0x29, 0xa3, 0xbb, 0xca, 0x46,
-+ 0x67, 0x70, 0x6a, 0x2d, 0xce, 0x0e, 0xa2, 0x8a,
-+ 0xa9, 0x87, 0xbf, 0x05, 0xc4, 0xc1, 0x04, 0xa3,
-+ 0xab, 0xd4, 0x45, 0x43, 0x8c, 0xb6, 0x02, 0xb0,
-+ 0x41, 0xc8, 0xfc, 0x44, 0x3d, 0x59, 0xaa, 0x2e,
-+ 0x44, 0x21, 0x2a, 0x8d, 0x88, 0x9d, 0x57, 0xf4,
-+ 0xa0, 0x02, 0x77, 0xb8, 0xa6, 0xa0, 0xe6, 0x75,
-+ 0x5c, 0x82, 0x65, 0x3e, 0x03, 0x5c, 0x29, 0x8f,
-+ 0x38, 0x55, 0xab, 0x33, 0x26, 0xef, 0x9f, 0x43,
-+ 0x52, 0xfd, 0x68, 0xaf, 0x36, 0xb4, 0xbb, 0x9a,
-+ 0x58, 0x09, 0x09, 0x1b, 0xc3, 0x65, 0x46, 0x46,
-+ 0x1d, 0xa7, 0x94, 0x18, 0x23, 0x50, 0x2c, 0xca,
-+ 0x2c, 0x55, 0x19, 0x97, 0x01, 0x9d, 0x93, 0x3b,
-+ 0x63, 0x86, 0xf2, 0x03, 0x67, 0x45, 0xd2, 0x72,
-+ 0x28, 0x52, 0x6c, 0xf4, 0xe3, 0x1c, 0xb5, 0x11,
-+ 0x13, 0xf1, 0xeb, 0x21, 0xc7, 0xd9, 0x56, 0x82,
-+ 0x2b, 0x82, 0x39, 0xbd, 0x69, 0x54, 0xed, 0x62,
-+ 0xc3, 0xe2, 0xde, 0x73, 0xd4, 0x6a, 0x12, 0xae,
-+ 0x13, 0x21, 0x7f, 0x4b, 0x5b, 0xfc, 0xbf, 0xe8,
-+ 0x2b, 0xbe, 0x56, 0xba, 0x68, 0x8b, 0x9a, 0xb1,
-+ 0x6e, 0xfa, 0xbf, 0x7e, 0x5a, 0x4b, 0xf1, 0xac,
-+ 0x98, 0x65, 0x85, 0xd1, 0x93, 0x53, 0xd3, 0x7b,
-+ 0x09, 0xdd, 0x4b, 0x10, 0x6d, 0x84, 0xb0, 0x13,
-+ 0x65, 0xbd, 0xcf, 0x52, 0x09, 0xc4, 0x85, 0xe2,
-+ 0x84, 0x74, 0x15, 0x65, 0xb7, 0xf7, 0x51, 0xaf,
-+ 0x55, 0xad, 0xa4, 0xd1, 0x22, 0x54, 0x70, 0x94,
-+ 0xa0, 0x1c, 0x90, 0x41, 0xfd, 0x99, 0xd7, 0x5a,
-+ 0x31, 0xef, 0xaa, 0x25, 0xd0, 0x7f, 0x4f, 0xea,
-+ 0x1d, 0x55, 0x42, 0xe5, 0x49, 0xb0, 0xd0, 0x46,
-+ 0x62, 0x36, 0x43, 0xb2, 0x82, 0x15, 0x75, 0x50,
-+ 0xa4, 0x72, 0xeb, 0x54, 0x27, 0x1f, 0x8a, 0xe4,
-+ 0x7d, 0xe9, 0x66, 0xc5, 0xf1, 0x53, 0xa4, 0xd1,
-+ 0x0c, 0xeb, 0xb8, 0xf8, 0xbc, 0xd4, 0xe2, 0xe7,
-+ 0xe1, 0xf8, 0x4b, 0xcb, 0xa9, 0xa1, 0xaf, 0x15,
-+ 0x83, 0xcb, 0x72, 0xd0, 0x33, 0x79, 0x00, 0x2d,
-+ 0x9f, 0xd7, 0xf1, 0x2e, 0x1e, 0x10, 0xe4, 0x45,
-+ 0xc0, 0x75, 0x3a, 0x39, 0xea, 0x68, 0xf7, 0x5d,
-+ 0x1b, 0x73, 0x8f, 0xe9, 0x8e, 0x0f, 0x72, 0x47,
-+ 0xae, 0x35, 0x0a, 0x31, 0x7a, 0x14, 0x4d, 0x4a,
-+ 0x6f, 0x47, 0xf7, 0x7e, 0x91, 0x6e, 0x74, 0x8b,
-+ 0x26, 0x47, 0xf9, 0xc3, 0xf9, 0xde, 0x70, 0xf5,
-+ 0x61, 0xab, 0xa9, 0x27, 0x9f, 0x82, 0xe4, 0x9c,
-+ 0x89, 0x91, 0x3f, 0x2e, 0x6a, 0xfd, 0xb5, 0x49,
-+ 0xe9, 0xfd, 0x59, 0x14, 0x36, 0x49, 0x40, 0x6d,
-+ 0x32, 0xd8, 0x85, 0x42, 0xf3, 0xa5, 0xdf, 0x0c,
-+ 0xa8, 0x27, 0xd7, 0x54, 0xe2, 0x63, 0x2f, 0xf2,
-+ 0x7e, 0x8b, 0x8b, 0xe7, 0xf1, 0x9a, 0x95, 0x35,
-+ 0x43, 0xdc, 0x3a, 0xe4, 0xb6, 0xf4, 0xd0, 0xdf,
-+ 0x9c, 0xcb, 0x94, 0xf3, 0x21, 0xa0, 0x77, 0x50,
-+ 0xe2, 0xc6, 0xc4, 0xc6, 0x5f, 0x09, 0x64, 0x5b,
-+ 0x92, 0x90, 0xd8, 0xe1, 0xd1, 0xed, 0x4b, 0x42,
-+ 0xd7, 0x37, 0xaf, 0x65, 0x3d, 0x11, 0x39, 0xb6,
-+ 0x24, 0x8a, 0x60, 0xae, 0xd6, 0x1e, 0xbf, 0x0e,
-+ 0x0d, 0xd7, 0xdc, 0x96, 0x0e, 0x65, 0x75, 0x4e,
-+ 0x29, 0x06, 0x9d, 0xa4, 0x51, 0x3a, 0x10, 0x63,
-+ 0x8f, 0x17, 0x07, 0xd5, 0x8e, 0x3c, 0xf4, 0x28,
-+ 0x00, 0x5a, 0x5b, 0x05, 0x19, 0xd8, 0xc0, 0x6c,
-+ 0xe5, 0x15, 0xe4, 0x9c, 0x9d, 0x71, 0x9d, 0x5e,
-+ 0x94, 0x29, 0x1a, 0xa7, 0x80, 0xfa, 0x0e, 0x33,
-+ 0x03, 0xdd, 0xb7, 0x3e, 0x9a, 0xa9, 0x26, 0x18,
-+ 0x37, 0xa9, 0x64, 0x08, 0x4d, 0x94, 0x5a, 0x88,
-+ 0xca, 0x35, 0xce, 0x81, 0x02, 0xe3, 0x1f, 0x1b,
-+ 0x89, 0x1a, 0x77, 0x85, 0xe3, 0x41, 0x6d, 0x32,
-+ 0x42, 0x19, 0x23, 0x7d, 0xc8, 0x73, 0xee, 0x25,
-+ 0x85, 0x0d, 0xf8, 0x31, 0x25, 0x79, 0x1b, 0x6f,
-+ 0x79, 0x25, 0xd2, 0xd8, 0xd4, 0x23, 0xfd, 0xf7,
-+ 0x82, 0x36, 0x6a, 0x0c, 0x46, 0x22, 0x15, 0xe9,
-+ 0xff, 0x72, 0x41, 0x91, 0x91, 0x7d, 0x3a, 0xb7,
-+ 0xdd, 0x65, 0x99, 0x70, 0xf6, 0x8d, 0x84, 0xf8,
-+ 0x67, 0x15, 0x20, 0x11, 0xd6, 0xb2, 0x55, 0x7b,
-+ 0xdb, 0x87, 0xee, 0xef, 0x55, 0x89, 0x2a, 0x59,
-+ 0x2b, 0x07, 0x8f, 0x43, 0x8a, 0x59, 0x3c, 0x01,
-+ 0x8b, 0x65, 0x54, 0xa1, 0x66, 0xd5, 0x38, 0xbd,
-+ 0xc6, 0x30, 0xa9, 0xcc, 0x49, 0xb6, 0xa8, 0x1b,
-+ 0xb8, 0xc0, 0x0e, 0xe3, 0x45, 0x28, 0xe2, 0xff,
-+ 0x41, 0x9f, 0x7e, 0x7c, 0xd1, 0xae, 0x9e, 0x25,
-+ 0x3f, 0x4c, 0x7c, 0x7c, 0xf4, 0xa8, 0x26, 0x4d,
-+ 0x5c, 0xfd, 0x4b, 0x27, 0x18, 0xf9, 0x61, 0x76,
-+ 0x48, 0xba, 0x0c, 0x6b, 0xa9, 0x4d, 0xfc, 0xf5,
-+ 0x3b, 0x35, 0x7e, 0x2f, 0x4a, 0xa9, 0xc2, 0x9a,
-+ 0xae, 0xab, 0x86, 0x09, 0x89, 0xc9, 0xc2, 0x40,
-+ 0x39, 0x2c, 0x81, 0xb3, 0xb8, 0x17, 0x67, 0xc2,
-+ 0x0d, 0x32, 0x4a, 0x3a, 0x67, 0x81, 0xd7, 0x1a,
-+ 0x34, 0x52, 0xc5, 0xdb, 0x0a, 0xf5, 0x63, 0x39,
-+ 0xea, 0x1f, 0xe1, 0x7c, 0xa1, 0x9e, 0xc1, 0x35,
-+ 0xe3, 0xb1, 0x18, 0x45, 0x67, 0xf9, 0x22, 0x38,
-+ 0x95, 0xd9, 0x34, 0x34, 0x86, 0xc6, 0x41, 0x94,
-+ 0x15, 0xf9, 0x5b, 0x41, 0xa6, 0x87, 0x8b, 0xf8,
-+ 0xd5, 0xe1, 0x1b, 0xe2, 0x5b, 0xf3, 0x86, 0x10,
-+ 0xff, 0xe6, 0xae, 0x69, 0x76, 0xbc, 0x0d, 0xb4,
-+ 0x09, 0x90, 0x0c, 0xa2, 0x65, 0x0c, 0xad, 0x74,
-+ 0xf5, 0xd7, 0xff, 0xda, 0xc1, 0xce, 0x85, 0xbe,
-+ 0x00, 0xa7, 0xff, 0x4d, 0x2f, 0x65, 0xd3, 0x8c,
-+ 0x86, 0x2d, 0x05, 0xe8, 0xed, 0x3e, 0x6b, 0x8b,
-+ 0x0f, 0x3d, 0x83, 0x8c, 0xf1, 0x1d, 0x5b, 0x96,
-+ 0x2e, 0xb1, 0x9c, 0xc2, 0x98, 0xe1, 0x70, 0xb9,
-+ 0xba, 0x5c, 0x8a, 0x43, 0xd6, 0x34, 0xa7, 0x2d,
-+ 0xc9, 0x92, 0xae, 0xf2, 0xa5, 0x7b, 0x05, 0x49,
-+ 0xa7, 0x33, 0x34, 0x86, 0xca, 0xe4, 0x96, 0x23,
-+ 0x76, 0x5b, 0xf2, 0xc6, 0xf1, 0x51, 0x28, 0x42,
-+ 0x7b, 0xcc, 0x76, 0x8f, 0xfa, 0xa2, 0xad, 0x31,
-+ 0xd4, 0xd6, 0x7a, 0x6d, 0x25, 0x25, 0x54, 0xe4,
-+ 0x3f, 0x50, 0x59, 0xe1, 0x5c, 0x05, 0xb7, 0x27,
-+ 0x48, 0xbf, 0x07, 0xec, 0x1b, 0x13, 0xbe, 0x2b,
-+ 0xa1, 0x57, 0x2b, 0xd5, 0xab, 0xd7, 0xd0, 0x4c,
-+ 0x1e, 0xcb, 0x71, 0x9b, 0xc5, 0x90, 0x85, 0xd3,
-+ 0xde, 0x59, 0xec, 0x71, 0xeb, 0x89, 0xbb, 0xd0,
-+ 0x09, 0x50, 0xe1, 0x16, 0x3f, 0xfd, 0x1c, 0x34,
-+ 0xc3, 0x1c, 0xa1, 0x10, 0x77, 0x53, 0x98, 0xef,
-+ 0xf2, 0xfd, 0xa5, 0x01, 0x59, 0xc2, 0x9b, 0x26,
-+ 0xc7, 0x42, 0xd9, 0x49, 0xda, 0x58, 0x2b, 0x6e,
-+ 0x9f, 0x53, 0x19, 0x76, 0x7e, 0xd9, 0xc9, 0x0e,
-+ 0x68, 0xc8, 0x7f, 0x51, 0x22, 0x42, 0xef, 0x49,
-+ 0xa4, 0x55, 0xb6, 0x36, 0xac, 0x09, 0xc7, 0x31,
-+ 0x88, 0x15, 0x4b, 0x2e, 0x8f, 0x3a, 0x08, 0xf7,
-+ 0xd8, 0xf7, 0xa8, 0xc5, 0xa9, 0x33, 0xa6, 0x45,
-+ 0xe4, 0xc4, 0x94, 0x76, 0xf3, 0x0d, 0x8f, 0x7e,
-+ 0xc8, 0xf6, 0xbc, 0x23, 0x0a, 0xb6, 0x4c, 0xd3,
-+ 0x6a, 0xcd, 0x36, 0xc2, 0x90, 0x5c, 0x5c, 0x3c,
-+ 0x65, 0x7b, 0xc2, 0xd6, 0xcc, 0xe6, 0x0d, 0x87,
-+ 0x73, 0x2e, 0x71, 0x79, 0x16, 0x06, 0x63, 0x28,
-+ 0x09, 0x15, 0xd8, 0x89, 0x38, 0x38, 0x3d, 0xb5,
-+ 0x42, 0x1c, 0x08, 0x24, 0xf7, 0x2a, 0xd2, 0x9d,
-+ 0xc8, 0xca, 0xef, 0xf9, 0x27, 0xd8, 0x07, 0x86,
-+ 0xf7, 0x43, 0x0b, 0x55, 0x15, 0x3f, 0x9f, 0x83,
-+ 0xef, 0xdc, 0x49, 0x9d, 0x2a, 0xc1, 0x54, 0x62,
-+ 0xbd, 0x9b, 0x66, 0x55, 0x9f, 0xb7, 0x12, 0xf3,
-+ 0x1b, 0x4d, 0x9d, 0x2a, 0x5c, 0xed, 0x87, 0x75,
-+ 0x87, 0x26, 0xec, 0x61, 0x2c, 0xb4, 0x0f, 0x89,
-+ 0xb0, 0xfb, 0x2e, 0x68, 0x5d, 0x15, 0xc7, 0x8d,
-+ 0x2e, 0xc0, 0xd9, 0xec, 0xaf, 0x4f, 0xd2, 0x25,
-+ 0x29, 0xe8, 0xd2, 0x26, 0x2b, 0x67, 0xe9, 0xfc,
-+ 0x2b, 0xa8, 0x67, 0x96, 0x12, 0x1f, 0x5b, 0x96,
-+ 0xc6, 0x14, 0x53, 0xaf, 0x44, 0xea, 0xd6, 0xe2,
-+ 0x94, 0x98, 0xe4, 0x12, 0x93, 0x4c, 0x92, 0xe0,
-+ 0x18, 0xa5, 0x8d, 0x2d, 0xe4, 0x71, 0x3c, 0x47,
-+ 0x4c, 0xf7, 0xe6, 0x47, 0x9e, 0xc0, 0x68, 0xdf,
-+ 0xd4, 0xf5, 0x5a, 0x74, 0xb1, 0x2b, 0x29, 0x03,
-+ 0x19, 0x07, 0xaf, 0x90, 0x62, 0x5c, 0x68, 0x98,
-+ 0x48, 0x16, 0x11, 0x02, 0x9d, 0xee, 0xb4, 0x9b,
-+ 0xe5, 0x42, 0x7f, 0x08, 0xfd, 0x16, 0x32, 0x0b,
-+ 0xd0, 0xb3, 0xfa, 0x2b, 0xb7, 0x99, 0xf9, 0x29,
-+ 0xcd, 0x20, 0x45, 0x9f, 0xb3, 0x1a, 0x5d, 0xa2,
-+ 0xaf, 0x4d, 0xe0, 0xbd, 0x42, 0x0d, 0xbc, 0x74,
-+ 0x99, 0x9c, 0x8e, 0x53, 0x1a, 0xb4, 0x3e, 0xbd,
-+ 0xa2, 0x9a, 0x2d, 0xf7, 0xf8, 0x39, 0x0f, 0x67,
-+ 0x63, 0xfc, 0x6b, 0xc0, 0xaf, 0xb3, 0x4b, 0x4f,
-+ 0x55, 0xc4, 0xcf, 0xa7, 0xc8, 0x04, 0x11, 0x3e,
-+ 0x14, 0x32, 0xbb, 0x1b, 0x38, 0x77, 0xd6, 0x7f,
-+ 0x54, 0x4c, 0xdf, 0x75, 0xf3, 0x07, 0x2d, 0x33,
-+ 0x9b, 0xa8, 0x20, 0xe1, 0x7b, 0x12, 0xb5, 0xf3,
-+ 0xef, 0x2f, 0xce, 0x72, 0xe5, 0x24, 0x60, 0xc1,
-+ 0x30, 0xe2, 0xab, 0xa1, 0x8e, 0x11, 0x09, 0xa8,
-+ 0x21, 0x33, 0x44, 0xfe, 0x7f, 0x35, 0x32, 0x93,
-+ 0x39, 0xa7, 0xad, 0x8b, 0x79, 0x06, 0xb2, 0xcb,
-+ 0x4e, 0xa9, 0x5f, 0xc7, 0xba, 0x74, 0x29, 0xec,
-+ 0x93, 0xa0, 0x4e, 0x54, 0x93, 0xc0, 0xbc, 0x55,
-+ 0x64, 0xf0, 0x48, 0xe5, 0x57, 0x99, 0xee, 0x75,
-+ 0xd6, 0x79, 0x0f, 0x66, 0xb7, 0xc6, 0x57, 0x76,
-+ 0xf7, 0xb7, 0xf3, 0x9c, 0xc5, 0x60, 0xe8, 0x7f,
-+ 0x83, 0x76, 0xd6, 0x0e, 0xaa, 0xe6, 0x90, 0x39,
-+ 0x1d, 0xa6, 0x32, 0x6a, 0x34, 0xe3, 0x55, 0xf8,
-+ 0x58, 0xa0, 0x58, 0x7d, 0x33, 0xe0, 0x22, 0x39,
-+ 0x44, 0x64, 0x87, 0x86, 0x5a, 0x2f, 0xa7, 0x7e,
-+ 0x0f, 0x38, 0xea, 0xb0, 0x30, 0xcc, 0x61, 0xa5,
-+ 0x6a, 0x32, 0xae, 0x1e, 0xf7, 0xe9, 0xd0, 0xa9,
-+ 0x0c, 0x32, 0x4b, 0xb5, 0x49, 0x28, 0xab, 0x85,
-+ 0x2f, 0x8e, 0x01, 0x36, 0x38, 0x52, 0xd0, 0xba,
-+ 0xd6, 0x02, 0x78, 0xf8, 0x0e, 0x3e, 0x9c, 0x8b,
-+ 0x6b, 0x45, 0x99, 0x3f, 0x5c, 0xfe, 0x58, 0xf1,
-+ 0x5c, 0x94, 0x04, 0xe1, 0xf5, 0x18, 0x6d, 0x51,
-+ 0xb2, 0x5d, 0x18, 0x20, 0xb6, 0xc2, 0x9a, 0x42,
-+ 0x1d, 0xb3, 0xab, 0x3c, 0xb6, 0x3a, 0x13, 0x03,
-+ 0xb2, 0x46, 0x82, 0x4f, 0xfc, 0x64, 0xbc, 0x4f,
-+ 0xca, 0xfa, 0x9c, 0xc0, 0xd5, 0xa7, 0xbd, 0x11,
-+ 0xb7, 0xe4, 0x5a, 0xf6, 0x6f, 0x4d, 0x4d, 0x54,
-+ 0xea, 0xa4, 0x98, 0x66, 0xd4, 0x22, 0x3b, 0xd3,
-+ 0x8f, 0x34, 0x47, 0xd9, 0x7c, 0xf4, 0x72, 0x3b,
-+ 0x4d, 0x02, 0x77, 0xf6, 0xd6, 0xdd, 0x08, 0x0a,
-+ 0x81, 0xe1, 0x86, 0x89, 0x3e, 0x56, 0x10, 0x3c,
-+ 0xba, 0xd7, 0x81, 0x8c, 0x08, 0xbc, 0x8b, 0xe2,
-+ 0x53, 0xec, 0xa7, 0x89, 0xee, 0xc8, 0x56, 0xb5,
-+ 0x36, 0x2c, 0xb2, 0x03, 0xba, 0x99, 0xdd, 0x7c,
-+ 0x48, 0xa0, 0xb0, 0xbc, 0x91, 0x33, 0xe9, 0xa8,
-+ 0xcb, 0xcd, 0xcf, 0x59, 0x5f, 0x1f, 0x15, 0xe2,
-+ 0x56, 0xf5, 0x4e, 0x01, 0x35, 0x27, 0x45, 0x77,
-+ 0x47, 0xc8, 0xbc, 0xcb, 0x7e, 0x39, 0xc1, 0x97,
-+ 0x28, 0xd3, 0x84, 0xfc, 0x2c, 0x3e, 0xc8, 0xad,
-+ 0x9c, 0xf8, 0x8a, 0x61, 0x9c, 0x28, 0xaa, 0xc5,
-+ 0x99, 0x20, 0x43, 0x85, 0x9d, 0xa5, 0xe2, 0x8b,
-+ 0xb8, 0xae, 0xeb, 0xd0, 0x32, 0x0d, 0x52, 0x78,
-+ 0x09, 0x56, 0x3f, 0xc7, 0xd8, 0x7e, 0x26, 0xfc,
-+ 0x37, 0xfb, 0x6f, 0x04, 0xfc, 0xfa, 0x92, 0x10,
-+ 0xac, 0xf8, 0x3e, 0x21, 0xdc, 0x8c, 0x21, 0x16,
-+ 0x7d, 0x67, 0x6e, 0xf6, 0xcd, 0xda, 0xb6, 0x98,
-+ 0x23, 0xab, 0x23, 0x3c, 0xb2, 0x10, 0xa0, 0x53,
-+ 0x5a, 0x56, 0x9f, 0xc5, 0xd0, 0xff, 0xbb, 0xe4,
-+ 0x98, 0x3c, 0x69, 0x1e, 0xdb, 0x38, 0x8f, 0x7e,
-+ 0x0f, 0xd2, 0x98, 0x88, 0x81, 0x8b, 0x45, 0x67,
-+ 0xea, 0x33, 0xf1, 0xeb, 0xe9, 0x97, 0x55, 0x2e,
-+ 0xd9, 0xaa, 0xeb, 0x5a, 0xec, 0xda, 0xe1, 0x68,
-+ 0xa8, 0x9d, 0x3c, 0x84, 0x7c, 0x05, 0x3d, 0x62,
-+ 0x87, 0x8f, 0x03, 0x21, 0x28, 0x95, 0x0c, 0x89,
-+ 0x25, 0x22, 0x4a, 0xb0, 0x93, 0xa9, 0x50, 0xa2,
-+ 0x2f, 0x57, 0x6e, 0x18, 0x42, 0x19, 0x54, 0x0c,
-+ 0x55, 0x67, 0xc6, 0x11, 0x49, 0xf4, 0x5c, 0xd2,
-+ 0xe9, 0x3d, 0xdd, 0x8b, 0x48, 0x71, 0x21, 0x00,
-+ 0xc3, 0x9a, 0x6c, 0x85, 0x74, 0x28, 0x83, 0x4a,
-+ 0x1b, 0x31, 0x05, 0xe1, 0x06, 0x92, 0xe7, 0xda,
-+ 0x85, 0x73, 0x78, 0x45, 0x20, 0x7f, 0xae, 0x13,
-+ 0x7c, 0x33, 0x06, 0x22, 0xf4, 0x83, 0xf9, 0x35,
-+ 0x3f, 0x6c, 0x71, 0xa8, 0x4e, 0x48, 0xbe, 0x9b,
-+ 0xce, 0x8a, 0xba, 0xda, 0xbe, 0x28, 0x08, 0xf7,
-+ 0xe2, 0x14, 0x8c, 0x71, 0xea, 0x72, 0xf9, 0x33,
-+ 0xf2, 0x88, 0x3f, 0xd7, 0xbb, 0x69, 0x6c, 0x29,
-+ 0x19, 0xdc, 0x84, 0xce, 0x1f, 0x12, 0x4f, 0xc8,
-+ 0xaf, 0xa5, 0x04, 0xba, 0x5a, 0xab, 0xb0, 0xd9,
-+ 0x14, 0x1f, 0x6c, 0x68, 0x98, 0x39, 0x89, 0x7a,
-+ 0xd9, 0xd8, 0x2f, 0xdf, 0xa8, 0x47, 0x4a, 0x25,
-+ 0xe2, 0xfb, 0x33, 0xf4, 0x59, 0x78, 0xe1, 0x68,
-+ 0x85, 0xcf, 0xfe, 0x59, 0x20, 0xd4, 0x05, 0x1d,
-+ 0x80, 0x99, 0xae, 0xbc, 0xca, 0xae, 0x0f, 0x2f,
-+ 0x65, 0x43, 0x34, 0x8e, 0x7e, 0xac, 0xd3, 0x93,
-+ 0x2f, 0xac, 0x6d, 0x14, 0x3d, 0x02, 0x07, 0x70,
-+ 0x9d, 0xa4, 0xf3, 0x1b, 0x5c, 0x36, 0xfc, 0x01,
-+ 0x73, 0x34, 0x85, 0x0c, 0x6c, 0xd6, 0xf1, 0xbd,
-+ 0x3f, 0xdf, 0xee, 0xf5, 0xd9, 0xba, 0x56, 0xef,
-+ 0xf4, 0x9b, 0x6b, 0xee, 0x9f, 0x5a, 0x78, 0x6d,
-+ 0x32, 0x19, 0xf4, 0xf7, 0xf8, 0x4c, 0x69, 0x0b,
-+ 0x4b, 0xbc, 0xbb, 0xb7, 0xf2, 0x85, 0xaf, 0x70,
-+ 0x75, 0x24, 0x6c, 0x54, 0xa7, 0x0e, 0x4d, 0x1d,
-+ 0x01, 0xbf, 0x08, 0xac, 0xcf, 0x7f, 0x2c, 0xe3,
-+ 0x14, 0x89, 0x5e, 0x70, 0x5a, 0x99, 0x92, 0xcd,
-+ 0x01, 0x84, 0xc8, 0xd2, 0xab, 0xe5, 0x4f, 0x58,
-+ 0xe7, 0x0f, 0x2f, 0x0e, 0xff, 0x68, 0xea, 0xfd,
-+ 0x15, 0xb3, 0x17, 0xe6, 0xb0, 0xe7, 0x85, 0xd8,
-+ 0x23, 0x2e, 0x05, 0xc7, 0xc9, 0xc4, 0x46, 0x1f,
-+ 0xe1, 0x9e, 0x49, 0x20, 0x23, 0x24, 0x4d, 0x7e,
-+ 0x29, 0x65, 0xff, 0xf4, 0xb6, 0xfd, 0x1a, 0x85,
-+ 0xc4, 0x16, 0xec, 0xfc, 0xea, 0x7b, 0xd6, 0x2c,
-+ 0x43, 0xf8, 0xb7, 0xbf, 0x79, 0xc0, 0x85, 0xcd,
-+ 0xef, 0xe1, 0x98, 0xd3, 0xa5, 0xf7, 0x90, 0x8c,
-+ 0xe9, 0x7f, 0x80, 0x6b, 0xd2, 0xac, 0x4c, 0x30,
-+ 0xa7, 0xc6, 0x61, 0x6c, 0xd2, 0xf9, 0x2c, 0xff,
-+ 0x30, 0xbc, 0x22, 0x81, 0x7d, 0x93, 0x12, 0xe4,
-+ 0x0a, 0xcd, 0xaf, 0xdd, 0xe8, 0xab, 0x0a, 0x1e,
-+ 0x13, 0xa4, 0x27, 0xc3, 0x5f, 0xf7, 0x4b, 0xbb,
-+ 0x37, 0x09, 0x4b, 0x91, 0x6f, 0x92, 0x4f, 0xaf,
-+ 0x52, 0xee, 0xdf, 0xef, 0x09, 0x6f, 0xf7, 0x5c,
-+ 0x6e, 0x12, 0x17, 0x72, 0x63, 0x57, 0xc7, 0xba,
-+ 0x3b, 0x6b, 0x38, 0x32, 0x73, 0x1b, 0x9c, 0x80,
-+ 0xc1, 0x7a, 0xc6, 0xcf, 0xcd, 0x35, 0xc0, 0x6b,
-+ 0x31, 0x1a, 0x6b, 0xe9, 0xd8, 0x2c, 0x29, 0x3f,
-+ 0x96, 0xfb, 0xb6, 0xcd, 0x13, 0x91, 0x3b, 0xc2,
-+ 0xd2, 0xa3, 0x31, 0x8d, 0xa4, 0xcd, 0x57, 0xcd,
-+ 0x13, 0x3d, 0x64, 0xfd, 0x06, 0xce, 0xe6, 0xdc,
-+ 0x0c, 0x24, 0x43, 0x31, 0x40, 0x57, 0xf1, 0x72,
-+ 0x17, 0xe3, 0x3a, 0x63, 0x6d, 0x35, 0xcf, 0x5d,
-+ 0x97, 0x40, 0x59, 0xdd, 0xf7, 0x3c, 0x02, 0xf7,
-+ 0x1c, 0x7e, 0x05, 0xbb, 0xa9, 0x0d, 0x01, 0xb1,
-+ 0x8e, 0xc0, 0x30, 0xa9, 0x53, 0x24, 0xc9, 0x89,
-+ 0x84, 0x6d, 0xaa, 0xd0, 0xcd, 0x91, 0xc2, 0x4d,
-+ 0x91, 0xb0, 0x89, 0xe2, 0xbf, 0x83, 0x44, 0xaa,
-+ 0x28, 0x72, 0x23, 0xa0, 0xc2, 0xad, 0xad, 0x1c,
-+ 0xfc, 0x3f, 0x09, 0x7a, 0x0b, 0xdc, 0xc5, 0x1b,
-+ 0x87, 0x13, 0xc6, 0x5b, 0x59, 0x8d, 0xf2, 0xc8,
-+ 0xaf, 0xdf, 0x11, 0x95,
-+ },
-+ .rlen = 4100,
-+ },
-+};
-+
- /*
- * Compression stuff.
- */
-@@ -4408,6 +7721,88 @@ static struct comp_testvec deflate_decomp_tv_template[] = {
- };
-
- /*
-+ * LZO test vectors (null-terminated strings).
-+ */
-+#define LZO_COMP_TEST_VECTORS 2
-+#define LZO_DECOMP_TEST_VECTORS 2
-+
-+static struct comp_testvec lzo_comp_tv_template[] = {
-+ {
-+ .inlen = 70,
-+ .outlen = 46,
-+ .input = "Join us now and share the software "
-+ "Join us now and share the software ",
-+ .output = { 0x00, 0x0d, 0x4a, 0x6f, 0x69, 0x6e, 0x20, 0x75,
-+ 0x73, 0x20, 0x6e, 0x6f, 0x77, 0x20, 0x61, 0x6e,
-+ 0x64, 0x20, 0x73, 0x68, 0x61, 0x72, 0x65, 0x20,
-+ 0x74, 0x68, 0x65, 0x20, 0x73, 0x6f, 0x66, 0x74,
-+ 0x77, 0x70, 0x01, 0x01, 0x4a, 0x6f, 0x69, 0x6e,
-+ 0x3d, 0x88, 0x00, 0x11, 0x00, 0x00 },
-+ }, {
-+ .inlen = 159,
-+ .outlen = 133,
-+ .input = "This document describes a compression method based on the LZO "
-+ "compression algorithm. This document defines the application of "
-+ "the LZO algorithm used in UBIFS.",
-+ .output = { 0x00, 0x2b, 0x54, 0x68, 0x69, 0x73, 0x20, 0x64,
-+ 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x20,
-+ 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65,
-+ 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6d, 0x70,
-+ 0x72, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x20,
-+ 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x20, 0x62,
-+ 0x61, 0x73, 0x65, 0x64, 0x20, 0x6f, 0x6e, 0x20,
-+ 0x74, 0x68, 0x65, 0x20, 0x4c, 0x5a, 0x4f, 0x2b,
-+ 0x8c, 0x00, 0x0d, 0x61, 0x6c, 0x67, 0x6f, 0x72,
-+ 0x69, 0x74, 0x68, 0x6d, 0x2e, 0x20, 0x20, 0x54,
-+ 0x68, 0x69, 0x73, 0x2a, 0x54, 0x01, 0x02, 0x66,
-+ 0x69, 0x6e, 0x65, 0x73, 0x94, 0x06, 0x05, 0x61,
-+ 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x76,
-+ 0x0a, 0x6f, 0x66, 0x88, 0x02, 0x60, 0x09, 0x27,
-+ 0xf0, 0x00, 0x0c, 0x20, 0x75, 0x73, 0x65, 0x64,
-+ 0x20, 0x69, 0x6e, 0x20, 0x55, 0x42, 0x49, 0x46,
-+ 0x53, 0x2e, 0x11, 0x00, 0x00 },
-+ },
-+};
++ if (!q->ordseq) {
++ if (!is_barrier)
++ return 1;
+
-+static struct comp_testvec lzo_decomp_tv_template[] = {
-+ {
-+ .inlen = 133,
-+ .outlen = 159,
-+ .input = { 0x00, 0x2b, 0x54, 0x68, 0x69, 0x73, 0x20, 0x64,
-+ 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x20,
-+ 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65,
-+ 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6d, 0x70,
-+ 0x72, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x20,
-+ 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x20, 0x62,
-+ 0x61, 0x73, 0x65, 0x64, 0x20, 0x6f, 0x6e, 0x20,
-+ 0x74, 0x68, 0x65, 0x20, 0x4c, 0x5a, 0x4f, 0x2b,
-+ 0x8c, 0x00, 0x0d, 0x61, 0x6c, 0x67, 0x6f, 0x72,
-+ 0x69, 0x74, 0x68, 0x6d, 0x2e, 0x20, 0x20, 0x54,
-+ 0x68, 0x69, 0x73, 0x2a, 0x54, 0x01, 0x02, 0x66,
-+ 0x69, 0x6e, 0x65, 0x73, 0x94, 0x06, 0x05, 0x61,
-+ 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x76,
-+ 0x0a, 0x6f, 0x66, 0x88, 0x02, 0x60, 0x09, 0x27,
-+ 0xf0, 0x00, 0x0c, 0x20, 0x75, 0x73, 0x65, 0x64,
-+ 0x20, 0x69, 0x6e, 0x20, 0x55, 0x42, 0x49, 0x46,
-+ 0x53, 0x2e, 0x11, 0x00, 0x00 },
-+ .output = "This document describes a compression method based on the LZO "
-+ "compression algorithm. This document defines the application of "
-+ "the LZO algorithm used in UBIFS.",
-+ }, {
-+ .inlen = 46,
-+ .outlen = 70,
-+ .input = { 0x00, 0x0d, 0x4a, 0x6f, 0x69, 0x6e, 0x20, 0x75,
-+ 0x73, 0x20, 0x6e, 0x6f, 0x77, 0x20, 0x61, 0x6e,
-+ 0x64, 0x20, 0x73, 0x68, 0x61, 0x72, 0x65, 0x20,
-+ 0x74, 0x68, 0x65, 0x20, 0x73, 0x6f, 0x66, 0x74,
-+ 0x77, 0x70, 0x01, 0x01, 0x4a, 0x6f, 0x69, 0x6e,
-+ 0x3d, 0x88, 0x00, 0x11, 0x00, 0x00 },
-+ .output = "Join us now and share the software "
-+ "Join us now and share the software ",
-+ },
-+};
++ if (q->next_ordered != QUEUE_ORDERED_NONE) {
++ *rqp = start_ordered(q, rq);
++ return 1;
++ } else {
++ /*
++ * This can happen when the queue switches to
++ * ORDERED_NONE while this request is on it.
++ */
++ blkdev_dequeue_request(rq);
++ if (__blk_end_request(rq, -EOPNOTSUPP,
++ blk_rq_bytes(rq)))
++ BUG();
++ *rqp = NULL;
++ return 0;
++ }
++ }
+
-+/*
- * Michael MIC test vectors from IEEE 802.11i
- */
- #define MICHAEL_MIC_TEST_VECTORS 6
-@@ -4812,4 +8207,20 @@ static struct cipher_speed camellia_speed_template[] = {
- { .klen = 0, .blen = 0, }
- };
-
-+static struct cipher_speed salsa20_speed_template[] = {
-+ { .klen = 16, .blen = 16, },
-+ { .klen = 16, .blen = 64, },
-+ { .klen = 16, .blen = 256, },
-+ { .klen = 16, .blen = 1024, },
-+ { .klen = 16, .blen = 8192, },
-+ { .klen = 32, .blen = 16, },
-+ { .klen = 32, .blen = 64, },
-+ { .klen = 32, .blen = 256, },
-+ { .klen = 32, .blen = 1024, },
-+ { .klen = 32, .blen = 8192, },
++ /*
++ * Ordered sequence in progress
++ */
+
-+ /* End marker */
-+ { .klen = 0, .blen = 0, }
-+};
++ /* Special requests are not subject to ordering rules. */
++ if (!blk_fs_request(rq) &&
++ rq != &q->pre_flush_rq && rq != &q->post_flush_rq)
++ return 1;
+
- #endif /* _CRYPTO_TCRYPT_H */
-diff --git a/crypto/twofish_common.c b/crypto/twofish_common.c
-index b4b9c0c..0af216c 100644
---- a/crypto/twofish_common.c
-+++ b/crypto/twofish_common.c
-@@ -655,84 +655,48 @@ int twofish_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int key_len)
- CALC_SB256_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
- }
-
-- /* Calculate whitening and round subkeys. The constants are
-- * indices of subkeys, preprocessed through q0 and q1. */
-- CALC_K256 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
-- CALC_K256 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
-- CALC_K256 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
-- CALC_K256 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
-- CALC_K256 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
-- CALC_K256 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
-- CALC_K256 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
-- CALC_K256 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
-- CALC_K256 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
-- CALC_K256 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
-- CALC_K256 (k, 12, 0x18, 0x37, 0xF7, 0x71);
-- CALC_K256 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
-- CALC_K256 (k, 16, 0x43, 0x30, 0x75, 0x0F);
-- CALC_K256 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
-- CALC_K256 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
-- CALC_K256 (k, 22, 0x94, 0x06, 0x48, 0x3F);
-- CALC_K256 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
-- CALC_K256 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
-- CALC_K256 (k, 28, 0x84, 0x8A, 0x54, 0x00);
-- CALC_K256 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
-+ /* CALC_K256/CALC_K192/CALC_K loops were unrolled.
-+ * Unrolling produced x2.5 more code (+18k on i386),
-+ * and speeded up key setup by 7%:
-+ * unrolled: twofish_setkey/sec: 41128
-+ * loop: twofish_setkey/sec: 38148
-+ * CALC_K256: ~100 insns each
-+ * CALC_K192: ~90 insns
-+ * CALC_K: ~70 insns
-+ */
-+ /* Calculate whitening and round subkeys */
-+ for ( i = 0; i < 8; i += 2 ) {
-+ CALC_K256 (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
-+ }
-+ for ( i = 0; i < 32; i += 2 ) {
-+ CALC_K256 (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
-+ }
- } else if (key_len == 24) { /* 192-bit key */
- /* Compute the S-boxes. */
- for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
- CALC_SB192_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
- }
-
-- /* Calculate whitening and round subkeys. The constants are
-- * indices of subkeys, preprocessed through q0 and q1. */
-- CALC_K192 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
-- CALC_K192 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
-- CALC_K192 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
-- CALC_K192 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
-- CALC_K192 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
-- CALC_K192 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
-- CALC_K192 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
-- CALC_K192 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
-- CALC_K192 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
-- CALC_K192 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
-- CALC_K192 (k, 12, 0x18, 0x37, 0xF7, 0x71);
-- CALC_K192 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
-- CALC_K192 (k, 16, 0x43, 0x30, 0x75, 0x0F);
-- CALC_K192 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
-- CALC_K192 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
-- CALC_K192 (k, 22, 0x94, 0x06, 0x48, 0x3F);
-- CALC_K192 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
-- CALC_K192 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
-- CALC_K192 (k, 28, 0x84, 0x8A, 0x54, 0x00);
-- CALC_K192 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
-+ /* Calculate whitening and round subkeys */
-+ for ( i = 0; i < 8; i += 2 ) {
-+ CALC_K192 (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
-+ }
-+ for ( i = 0; i < 32; i += 2 ) {
-+ CALC_K192 (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
-+ }
- } else { /* 128-bit key */
- /* Compute the S-boxes. */
- for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
- CALC_SB_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
- }
-
-- /* Calculate whitening and round subkeys. The constants are
-- * indices of subkeys, preprocessed through q0 and q1. */
-- CALC_K (w, 0, 0xA9, 0x75, 0x67, 0xF3);
-- CALC_K (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
-- CALC_K (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
-- CALC_K (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
-- CALC_K (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
-- CALC_K (k, 2, 0x80, 0xE6, 0x78, 0x6B);
-- CALC_K (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
-- CALC_K (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
-- CALC_K (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
-- CALC_K (k, 10, 0x35, 0xD8, 0x98, 0xFD);
-- CALC_K (k, 12, 0x18, 0x37, 0xF7, 0x71);
-- CALC_K (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
-- CALC_K (k, 16, 0x43, 0x30, 0x75, 0x0F);
-- CALC_K (k, 18, 0x37, 0xF8, 0x26, 0x1B);
-- CALC_K (k, 20, 0xFA, 0x87, 0x13, 0xFA);
-- CALC_K (k, 22, 0x94, 0x06, 0x48, 0x3F);
-- CALC_K (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
-- CALC_K (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
-- CALC_K (k, 28, 0x84, 0x8A, 0x54, 0x00);
-- CALC_K (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
-+ /* Calculate whitening and round subkeys */
-+ for ( i = 0; i < 8; i += 2 ) {
-+ CALC_K (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
-+ }
-+ for ( i = 0; i < 32; i += 2 ) {
-+ CALC_K (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
-+ }
- }
-
- return 0;
-diff --git a/crypto/xcbc.c b/crypto/xcbc.c
-index ac68f3b..a82959d 100644
---- a/crypto/xcbc.c
-+++ b/crypto/xcbc.c
-@@ -19,6 +19,7 @@
- * Kazunori Miyazawa <miyazawa at linux-ipv6.org>
- */
-
-+#include <crypto/scatterwalk.h>
- #include <linux/crypto.h>
- #include <linux/err.h>
- #include <linux/hardirq.h>
-@@ -27,7 +28,6 @@
- #include <linux/rtnetlink.h>
- #include <linux/slab.h>
- #include <linux/scatterlist.h>
--#include "internal.h"
-
- static u_int32_t ks[12] = {0x01010101, 0x01010101, 0x01010101, 0x01010101,
- 0x02020202, 0x02020202, 0x02020202, 0x02020202,
-@@ -307,7 +307,8 @@ static struct crypto_instance *xcbc_alloc(struct rtattr **tb)
- case 16:
- break;
- default:
-- return ERR_PTR(PTR_ERR(alg));
-+ inst = ERR_PTR(-EINVAL);
-+ goto out_put_alg;
- }
-
- inst = crypto_alloc_instance("xcbc", alg);
-@@ -320,10 +321,7 @@ static struct crypto_instance *xcbc_alloc(struct rtattr **tb)
- inst->alg.cra_alignmask = alg->cra_alignmask;
- inst->alg.cra_type = &crypto_hash_type;
-
-- inst->alg.cra_hash.digestsize =
-- (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
-- CRYPTO_ALG_TYPE_HASH ? alg->cra_hash.digestsize :
-- alg->cra_blocksize;
-+ inst->alg.cra_hash.digestsize = alg->cra_blocksize;
- inst->alg.cra_ctxsize = sizeof(struct crypto_xcbc_ctx) +
- ALIGN(inst->alg.cra_blocksize * 3, sizeof(void *));
- inst->alg.cra_init = xcbc_init_tfm;
-diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
-index f4487c3..1b4cf98 100644
---- a/drivers/acpi/bus.c
-+++ b/drivers/acpi/bus.c
-@@ -743,7 +743,7 @@ static int __init acpi_bus_init(void)
- return -ENODEV;
- }
-
--decl_subsys(acpi, NULL, NULL);
-+struct kobject *acpi_kobj;
-
- static int __init acpi_init(void)
- {
-@@ -755,10 +755,11 @@ static int __init acpi_init(void)
- return -ENODEV;
- }
-
-- result = firmware_register(&acpi_subsys);
-- if (result < 0)
-- printk(KERN_WARNING "%s: firmware_register error: %d\n",
-- __FUNCTION__, result);
-+ acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
-+ if (!acpi_kobj) {
-+ printk(KERN_WARNING "%s: kset create error\n", __FUNCTION__);
-+ acpi_kobj = NULL;
++ if (q->ordered & QUEUE_ORDERED_TAG) {
++ /* Ordered by tag. Blocking the next barrier is enough. */
++ if (is_barrier && rq != &q->bar_rq)
++ *rqp = NULL;
++ } else {
++ /* Ordered by draining. Wait for turn. */
++ WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
++ if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
++ *rqp = NULL;
+ }
-
- result = acpi_bus_init();
-
-diff --git a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
-index c9f526e..5400ea1 100644
---- a/drivers/acpi/pci_link.c
-+++ b/drivers/acpi/pci_link.c
-@@ -911,7 +911,7 @@ __setup("acpi_irq_balance", acpi_irq_balance_set);
-
- /* FIXME: we will remove this interface after all drivers call pci_disable_device */
- static struct sysdev_class irqrouter_sysdev_class = {
-- set_kset_name("irqrouter"),
-+ .name = "irqrouter",
- .resume = irqrouter_resume,
- };
-
-diff --git a/drivers/acpi/system.c b/drivers/acpi/system.c
-index edee280..5ffe0ea 100644
---- a/drivers/acpi/system.c
-+++ b/drivers/acpi/system.c
-@@ -58,7 +58,7 @@ module_param_call(acpica_version, NULL, param_get_acpica_version, NULL, 0444);
- FS Interface (/sys)
- -------------------------------------------------------------------------- */
- static LIST_HEAD(acpi_table_attr_list);
--static struct kobject tables_kobj;
-+static struct kobject *tables_kobj;
-
- struct acpi_table_attr {
- struct bin_attribute attr;
-@@ -135,11 +135,9 @@ static int acpi_system_sysfs_init(void)
- int table_index = 0;
- int result;
-
-- tables_kobj.parent = &acpi_subsys.kobj;
-- kobject_set_name(&tables_kobj, "tables");
-- result = kobject_register(&tables_kobj);
-- if (result)
-- return result;
-+ tables_kobj = kobject_create_and_add("tables", acpi_kobj);
-+ if (!tables_kobj)
-+ return -ENOMEM;
-
- do {
- result = acpi_get_table_by_index(table_index, &table_header);
-@@ -153,7 +151,7 @@ static int acpi_system_sysfs_init(void)
-
- acpi_table_attr_init(table_attr, table_header);
- result =
-- sysfs_create_bin_file(&tables_kobj,
-+ sysfs_create_bin_file(tables_kobj,
- &table_attr->attr);
- if (result) {
- kfree(table_attr);
-@@ -163,6 +161,7 @@ static int acpi_system_sysfs_init(void)
- &acpi_table_attr_list);
- }
- } while (!result);
-+ kobject_uevent(tables_kobj, KOBJ_ADD);
-
- return 0;
- }
-diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
-index ba63619..2478cca 100644
---- a/drivers/ata/Kconfig
-+++ b/drivers/ata/Kconfig
-@@ -459,6 +459,15 @@ config PATA_NETCELL
-
- If unsure, say N.
-
-+config PATA_NINJA32
-+ tristate "Ninja32/Delkin Cardbus ATA support (Experimental)"
-+ depends on PCI && EXPERIMENTAL
-+ help
-+ This option enables support for the Ninja32, Delkin and
-+ possibly other brands of Cardbus ATA adapter
+
-+ If unsure, say N.
++ return 1;
++}
+
- config PATA_NS87410
- tristate "Nat Semi NS87410 PATA support (Experimental)"
- depends on PCI && EXPERIMENTAL
-diff --git a/drivers/ata/Makefile b/drivers/ata/Makefile
-index b13feb2..82550c1 100644
---- a/drivers/ata/Makefile
-+++ b/drivers/ata/Makefile
-@@ -41,6 +41,7 @@ obj-$(CONFIG_PATA_IT821X) += pata_it821x.o
- obj-$(CONFIG_PATA_IT8213) += pata_it8213.o
- obj-$(CONFIG_PATA_JMICRON) += pata_jmicron.o
- obj-$(CONFIG_PATA_NETCELL) += pata_netcell.o
-+obj-$(CONFIG_PATA_NINJA32) += pata_ninja32.o
- obj-$(CONFIG_PATA_NS87410) += pata_ns87410.o
- obj-$(CONFIG_PATA_NS87415) += pata_ns87415.o
- obj-$(CONFIG_PATA_OPTI) += pata_opti.o
-diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
-index 54f38c2..6f089b8 100644
---- a/drivers/ata/ahci.c
-+++ b/drivers/ata/ahci.c
-@@ -198,18 +198,18 @@ enum {
- };
-
- struct ahci_cmd_hdr {
-- u32 opts;
-- u32 status;
-- u32 tbl_addr;
-- u32 tbl_addr_hi;
-- u32 reserved[4];
-+ __le32 opts;
-+ __le32 status;
-+ __le32 tbl_addr;
-+ __le32 tbl_addr_hi;
-+ __le32 reserved[4];
- };
-
- struct ahci_sg {
-- u32 addr;
-- u32 addr_hi;
-- u32 reserved;
-- u32 flags_size;
-+ __le32 addr;
-+ __le32 addr_hi;
-+ __le32 reserved;
-+ __le32 flags_size;
- };
-
- struct ahci_host_priv {
-@@ -597,6 +597,20 @@ static inline void __iomem *ahci_port_base(struct ata_port *ap)
- return __ahci_port_base(ap->host, ap->port_no);
- }
-
-+static void ahci_enable_ahci(void __iomem *mmio)
++static void bio_end_empty_barrier(struct bio *bio, int err)
+{
-+ u32 tmp;
++ if (err)
++ clear_bit(BIO_UPTODATE, &bio->bi_flags);
+
-+ /* turn on AHCI_EN */
-+ tmp = readl(mmio + HOST_CTL);
-+ if (!(tmp & HOST_AHCI_EN)) {
-+ tmp |= HOST_AHCI_EN;
-+ writel(tmp, mmio + HOST_CTL);
-+ tmp = readl(mmio + HOST_CTL); /* flush && sanity check */
-+ WARN_ON(!(tmp & HOST_AHCI_EN));
-+ }
++ complete(bio->bi_private);
+}
+
- /**
- * ahci_save_initial_config - Save and fixup initial config values
- * @pdev: target PCI device
-@@ -619,6 +633,9 @@ static void ahci_save_initial_config(struct pci_dev *pdev,
- u32 cap, port_map;
- int i;
-
-+ /* make sure AHCI mode is enabled before accessing CAP */
-+ ahci_enable_ahci(mmio);
++/**
++ * blkdev_issue_flush - queue a flush
++ * @bdev: blockdev to issue flush for
++ * @error_sector: error sector
++ *
++ * Description:
++ * Issue a flush for the block device in question. Caller can supply
++ * room for storing the error offset in case of a flush error, if they
++ * wish to. Caller must run wait_for_completion() on its own.
++ */
++int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
++{
++ DECLARE_COMPLETION_ONSTACK(wait);
++ struct request_queue *q;
++ struct bio *bio;
++ int ret;
+
- /* Values prefixed with saved_ are written back to host after
- * reset. Values without are used for driver operation.
- */
-@@ -1036,19 +1053,17 @@ static int ahci_deinit_port(struct ata_port *ap, const char **emsg)
- static int ahci_reset_controller(struct ata_host *host)
- {
- struct pci_dev *pdev = to_pci_dev(host->dev);
-+ struct ahci_host_priv *hpriv = host->private_data;
- void __iomem *mmio = host->iomap[AHCI_PCI_BAR];
- u32 tmp;
-
- /* we must be in AHCI mode, before using anything
- * AHCI-specific, such as HOST_RESET.
- */
-- tmp = readl(mmio + HOST_CTL);
-- if (!(tmp & HOST_AHCI_EN)) {
-- tmp |= HOST_AHCI_EN;
-- writel(tmp, mmio + HOST_CTL);
-- }
-+ ahci_enable_ahci(mmio);
-
- /* global controller reset */
-+ tmp = readl(mmio + HOST_CTL);
- if ((tmp & HOST_RESET) == 0) {
- writel(tmp | HOST_RESET, mmio + HOST_CTL);
- readl(mmio + HOST_CTL); /* flush */
-@@ -1067,8 +1082,7 @@ static int ahci_reset_controller(struct ata_host *host)
- }
-
- /* turn on AHCI mode */
-- writel(HOST_AHCI_EN, mmio + HOST_CTL);
-- (void) readl(mmio + HOST_CTL); /* flush */
-+ ahci_enable_ahci(mmio);
-
- /* some registers might be cleared on reset. restore initial values */
- ahci_restore_initial_config(host);
-@@ -1078,8 +1092,10 @@ static int ahci_reset_controller(struct ata_host *host)
-
- /* configure PCS */
- pci_read_config_word(pdev, 0x92, &tmp16);
-- tmp16 |= 0xf;
-- pci_write_config_word(pdev, 0x92, tmp16);
-+ if ((tmp16 & hpriv->port_map) != hpriv->port_map) {
-+ tmp16 |= hpriv->port_map;
-+ pci_write_config_word(pdev, 0x92, tmp16);
-+ }
- }
-
- return 0;
-@@ -1480,35 +1496,31 @@ static void ahci_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
- static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
- {
- struct scatterlist *sg;
-- struct ahci_sg *ahci_sg;
-- unsigned int n_sg = 0;
-+ struct ahci_sg *ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
-+ unsigned int si;
-
- VPRINTK("ENTER\n");
-
- /*
- * Next, the S/G list.
- */
-- ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- dma_addr_t addr = sg_dma_address(sg);
- u32 sg_len = sg_dma_len(sg);
-
-- ahci_sg->addr = cpu_to_le32(addr & 0xffffffff);
-- ahci_sg->addr_hi = cpu_to_le32((addr >> 16) >> 16);
-- ahci_sg->flags_size = cpu_to_le32(sg_len - 1);
--
-- ahci_sg++;
-- n_sg++;
-+ ahci_sg[si].addr = cpu_to_le32(addr & 0xffffffff);
-+ ahci_sg[si].addr_hi = cpu_to_le32((addr >> 16) >> 16);
-+ ahci_sg[si].flags_size = cpu_to_le32(sg_len - 1);
- }
-
-- return n_sg;
-+ return si;
- }
-
- static void ahci_qc_prep(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct ahci_port_priv *pp = ap->private_data;
-- int is_atapi = is_atapi_taskfile(&qc->tf);
-+ int is_atapi = ata_is_atapi(qc->tf.protocol);
- void *cmd_tbl;
- u32 opts;
- const u32 cmd_fis_len = 5; /* five dwords */
-diff --git a/drivers/ata/ata_generic.c b/drivers/ata/ata_generic.c
-index 9032998..2053420 100644
---- a/drivers/ata/ata_generic.c
-+++ b/drivers/ata/ata_generic.c
-@@ -26,7 +26,7 @@
- #include <linux/libata.h>
-
- #define DRV_NAME "ata_generic"
--#define DRV_VERSION "0.2.13"
-+#define DRV_VERSION "0.2.15"
-
- /*
- * A generic parallel ATA driver using libata
-@@ -48,27 +48,47 @@ static int generic_set_mode(struct ata_link *link, struct ata_device **unused)
- struct ata_port *ap = link->ap;
- int dma_enabled = 0;
- struct ata_device *dev;
-+ struct pci_dev *pdev = to_pci_dev(ap->host->dev);
-
- /* Bits 5 and 6 indicate if DMA is active on master/slave */
- if (ap->ioaddr.bmdma_addr)
- dma_enabled = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS);
-
-+ if (pdev->vendor == PCI_VENDOR_ID_CENATEK)
-+ dma_enabled = 0xFF;
++ if (bdev->bd_disk == NULL)
++ return -ENXIO;
+
- ata_link_for_each_dev(dev, link) {
-- if (ata_dev_enabled(dev)) {
-- /* We don't really care */
-- dev->pio_mode = XFER_PIO_0;
-- dev->dma_mode = XFER_MW_DMA_0;
-- /* We do need the right mode information for DMA or PIO
-- and this comes from the current configuration flags */
-- if (dma_enabled & (1 << (5 + dev->devno))) {
-- ata_id_to_dma_mode(dev, XFER_MW_DMA_0);
-- dev->flags &= ~ATA_DFLAG_PIO;
-- } else {
-- ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
-- dev->xfer_mode = XFER_PIO_0;
-- dev->xfer_shift = ATA_SHIFT_PIO;
-- dev->flags |= ATA_DFLAG_PIO;
-+ if (!ata_dev_enabled(dev))
-+ continue;
++ q = bdev_get_queue(bdev);
++ if (!q)
++ return -ENXIO;
+
-+ /* We don't really care */
-+ dev->pio_mode = XFER_PIO_0;
-+ dev->dma_mode = XFER_MW_DMA_0;
-+ /* We do need the right mode information for DMA or PIO
-+ and this comes from the current configuration flags */
-+ if (dma_enabled & (1 << (5 + dev->devno))) {
-+ unsigned int xfer_mask = ata_id_xfermask(dev->id);
-+ const char *name;
++ bio = bio_alloc(GFP_KERNEL, 0);
++ if (!bio)
++ return -ENOMEM;
+
-+ if (xfer_mask & (ATA_MASK_MWDMA | ATA_MASK_UDMA))
-+ name = ata_mode_string(xfer_mask);
-+ else {
-+ /* SWDMA perhaps? */
-+ name = "DMA";
-+ xfer_mask |= ata_xfer_mode2mask(XFER_MW_DMA_0);
- }
++ bio->bi_end_io = bio_end_empty_barrier;
++ bio->bi_private = &wait;
++ bio->bi_bdev = bdev;
++ submit_bio(1 << BIO_RW_BARRIER, bio);
+
-+ ata_dev_printk(dev, KERN_INFO, "configured for %s\n",
-+ name);
++ wait_for_completion(&wait);
+
-+ dev->xfer_mode = ata_xfer_mask2mode(xfer_mask);
-+ dev->xfer_shift = ata_xfer_mode2shift(dev->xfer_mode);
-+ dev->flags &= ~ATA_DFLAG_PIO;
-+ } else {
-+ ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
-+ dev->xfer_mode = XFER_PIO_0;
-+ dev->xfer_shift = ATA_SHIFT_PIO;
-+ dev->flags |= ATA_DFLAG_PIO;
- }
- }
- return 0;
-@@ -185,6 +205,7 @@ static struct pci_device_id ata_generic[] = {
- { PCI_DEVICE(PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE), },
- { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C561), },
- { PCI_DEVICE(PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558), },
-+ { PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE), },
- { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO), },
- { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1), },
- { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2), },
-diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
-index b406b39..a65c8ae 100644
---- a/drivers/ata/ata_piix.c
-+++ b/drivers/ata/ata_piix.c
-@@ -101,39 +101,21 @@ enum {
- ICH5_PMR = 0x90, /* port mapping register */
- ICH5_PCS = 0x92, /* port control and status */
- PIIX_SCC = 0x0A, /* sub-class code register */
-+ PIIX_SIDPR_BAR = 5,
-+ PIIX_SIDPR_LEN = 16,
-+ PIIX_SIDPR_IDX = 0,
-+ PIIX_SIDPR_DATA = 4,
-
-- PIIX_FLAG_SCR = (1 << 26), /* SCR available */
- PIIX_FLAG_AHCI = (1 << 27), /* AHCI possible */
- PIIX_FLAG_CHECKINTR = (1 << 28), /* make sure PCI INTx enabled */
-+ PIIX_FLAG_SIDPR = (1 << 29), /* SATA idx/data pair regs */
-
- PIIX_PATA_FLAGS = ATA_FLAG_SLAVE_POSS,
- PIIX_SATA_FLAGS = ATA_FLAG_SATA | PIIX_FLAG_CHECKINTR,
-
-- /* combined mode. if set, PATA is channel 0.
-- * if clear, PATA is channel 1.
-- */
-- PIIX_PORT_ENABLED = (1 << 0),
-- PIIX_PORT_PRESENT = (1 << 4),
--
- PIIX_80C_PRI = (1 << 5) | (1 << 4),
- PIIX_80C_SEC = (1 << 7) | (1 << 6),
-
-- /* controller IDs */
-- piix_pata_mwdma = 0, /* PIIX3 MWDMA only */
-- piix_pata_33, /* PIIX4 at 33Mhz */
-- ich_pata_33, /* ICH up to UDMA 33 only */
-- ich_pata_66, /* ICH up to 66 Mhz */
-- ich_pata_100, /* ICH up to UDMA 100 */
-- ich5_sata,
-- ich6_sata,
-- ich6_sata_ahci,
-- ich6m_sata_ahci,
-- ich8_sata_ahci,
-- ich8_2port_sata,
-- ich8m_apple_sata_ahci, /* locks up on second port enable */
-- tolapai_sata_ahci,
-- piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
--
- /* constants for mapping table */
- P0 = 0, /* port 0 */
- P1 = 1, /* port 1 */
-@@ -149,6 +131,24 @@ enum {
- PIIX_HOST_BROKEN_SUSPEND = (1 << 24),
- };
-
-+enum piix_controller_ids {
-+ /* controller IDs */
-+ piix_pata_mwdma, /* PIIX3 MWDMA only */
-+ piix_pata_33, /* PIIX4 at 33Mhz */
-+ ich_pata_33, /* ICH up to UDMA 33 only */
-+ ich_pata_66, /* ICH up to 66 Mhz */
-+ ich_pata_100, /* ICH up to UDMA 100 */
-+ ich5_sata,
-+ ich6_sata,
-+ ich6_sata_ahci,
-+ ich6m_sata_ahci,
-+ ich8_sata_ahci,
-+ ich8_2port_sata,
-+ ich8m_apple_sata_ahci, /* locks up on second port enable */
-+ tolapai_sata_ahci,
-+ piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
-+};
++ /*
++ * The driver must store the error location in ->bi_sector, if
++ * it supports it. For non-stacked drivers, this should be copied
++ * from rq->sector.
++ */
++ if (error_sector)
++ *error_sector = bio->bi_sector;
+
- struct piix_map_db {
- const u32 mask;
- const u16 port_enable;
-@@ -157,6 +157,7 @@ struct piix_map_db {
-
- struct piix_host_priv {
- const int *map;
-+ void __iomem *sidpr;
- };
-
- static int piix_init_one(struct pci_dev *pdev,
-@@ -167,6 +168,9 @@ static void piix_set_dmamode(struct ata_port *ap, struct ata_device *adev);
- static void ich_set_dmamode(struct ata_port *ap, struct ata_device *adev);
- static int ich_pata_cable_detect(struct ata_port *ap);
- static u8 piix_vmw_bmdma_status(struct ata_port *ap);
-+static int piix_sidpr_scr_read(struct ata_port *ap, unsigned int reg, u32 *val);
-+static int piix_sidpr_scr_write(struct ata_port *ap, unsigned int reg, u32 val);
-+static void piix_sidpr_error_handler(struct ata_port *ap);
- #ifdef CONFIG_PM
- static int piix_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg);
- static int piix_pci_device_resume(struct pci_dev *pdev);
-@@ -321,7 +325,6 @@ static const struct ata_port_operations piix_pata_ops = {
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
- .cable_detect = ata_cable_40wire,
-
-- .irq_handler = ata_interrupt,
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-@@ -353,7 +356,6 @@ static const struct ata_port_operations ich_pata_ops = {
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
- .cable_detect = ich_pata_cable_detect,
-
-- .irq_handler = ata_interrupt,
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-@@ -380,7 +382,6 @@ static const struct ata_port_operations piix_sata_ops = {
- .error_handler = ata_bmdma_error_handler,
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
-
-- .irq_handler = ata_interrupt,
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-@@ -419,6 +420,35 @@ static const struct ata_port_operations piix_vmw_ops = {
- .port_start = ata_port_start,
- };
-
-+static const struct ata_port_operations piix_sidpr_sata_ops = {
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++ ret = 0;
++ if (!bio_flagged(bio, BIO_UPTODATE))
++ ret = -EIO;
+
-+ .bmdma_setup = ata_bmdma_setup,
-+ .bmdma_start = ata_bmdma_start,
-+ .bmdma_stop = ata_bmdma_stop,
-+ .bmdma_status = ata_bmdma_status,
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = ata_qc_issue_prot,
-+ .data_xfer = ata_data_xfer,
++ bio_put(bio);
++ return ret;
++}
+
-+ .scr_read = piix_sidpr_scr_read,
-+ .scr_write = piix_sidpr_scr_write,
++EXPORT_SYMBOL(blkdev_issue_flush);
+diff --git a/block/blk-core.c b/block/blk-core.c
+new file mode 100644
+index 0000000..8ff9944
+--- /dev/null
++++ b/block/blk-core.c
+@@ -0,0 +1,2034 @@
++/*
++ * Copyright (C) 1991, 1992 Linus Torvalds
++ * Copyright (C) 1994, Karl Keyte: Added support for disk statistics
++ * Elevator latency, (C) 2000 Andrea Arcangeli <andrea at suse.de> SuSE
++ * Queue request tables / lock, selectable elevator, Jens Axboe <axboe at suse.de>
++ * kernel-doc documentation started by NeilBrown <neilb at cse.unsw.edu.au> - July2000
++ * bio rewrite, highmem i/o, etc, Jens Axboe <axboe at suse.de> - may 2001
++ */
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = piix_sidpr_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++/*
++ * This handles all read/write requests to block devices
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/backing-dev.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/highmem.h>
++#include <linux/mm.h>
++#include <linux/kernel_stat.h>
++#include <linux/string.h>
++#include <linux/init.h>
++#include <linux/completion.h>
++#include <linux/slab.h>
++#include <linux/swap.h>
++#include <linux/writeback.h>
++#include <linux/task_io_accounting_ops.h>
++#include <linux/interrupt.h>
++#include <linux/cpu.h>
++#include <linux/blktrace_api.h>
++#include <linux/fault-inject.h>
+
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++#include "blk.h"
+
-+ .port_start = ata_port_start,
-+};
++static int __make_request(struct request_queue *q, struct bio *bio);
+
- static const struct piix_map_db ich5_map_db = {
- .mask = 0x7,
- .port_enable = 0x3,
-@@ -526,7 +556,6 @@ static const struct piix_map_db *piix_map_db_table[] = {
- static struct ata_port_info piix_port_info[] = {
- [piix_pata_mwdma] = /* PIIX3 MWDMA only */
- {
-- .sht = &piix_sht,
- .flags = PIIX_PATA_FLAGS,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x06, /* mwdma1-2 ?? CHECK 0 should be ok but slow */
-@@ -535,7 +564,6 @@ static struct ata_port_info piix_port_info[] = {
-
- [piix_pata_33] = /* PIIX4 at 33MHz */
- {
-- .sht = &piix_sht,
- .flags = PIIX_PATA_FLAGS,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x06, /* mwdma1-2 ?? CHECK 0 should be ok but slow */
-@@ -545,7 +573,6 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich_pata_33] = /* ICH0 - ICH at 33Mhz*/
- {
-- .sht = &piix_sht,
- .flags = PIIX_PATA_FLAGS,
- .pio_mask = 0x1f, /* pio 0-4 */
- .mwdma_mask = 0x06, /* Check: maybe 0x07 */
-@@ -555,7 +582,6 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich_pata_66] = /* ICH controllers up to 66MHz */
- {
-- .sht = &piix_sht,
- .flags = PIIX_PATA_FLAGS,
- .pio_mask = 0x1f, /* pio 0-4 */
- .mwdma_mask = 0x06, /* MWDMA0 is broken on chip */
-@@ -565,7 +591,6 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich_pata_100] =
- {
-- .sht = &piix_sht,
- .flags = PIIX_PATA_FLAGS | PIIX_FLAG_CHECKINTR,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x06, /* mwdma1-2 */
-@@ -575,7 +600,6 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich5_sata] =
- {
-- .sht = &piix_sht,
- .flags = PIIX_SATA_FLAGS,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
-@@ -585,8 +609,7 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich6_sata] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR,
-+ .flags = PIIX_SATA_FLAGS,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -595,9 +618,7 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich6_sata_ahci] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -606,9 +627,7 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich6m_sata_ahci] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -617,9 +636,8 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich8_sata_ahci] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
-+ PIIX_FLAG_SIDPR,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -628,9 +646,8 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich8_2port_sata] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
-+ PIIX_FLAG_SIDPR,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -639,9 +656,7 @@ static struct ata_port_info piix_port_info[] = {
-
- [tolapai_sata_ahci] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -650,9 +665,8 @@ static struct ata_port_info piix_port_info[] = {
-
- [ich8m_apple_sata_ahci] =
- {
-- .sht = &piix_sht,
-- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
-- PIIX_FLAG_AHCI,
-+ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
-+ PIIX_FLAG_SIDPR,
- .pio_mask = 0x1f, /* pio0-4 */
- .mwdma_mask = 0x07, /* mwdma0-2 */
- .udma_mask = ATA_UDMA6,
-@@ -1001,6 +1015,180 @@ static void ich_set_dmamode(struct ata_port *ap, struct ata_device *adev)
- do_pata_set_dmamode(ap, adev, 1);
- }
-
+/*
-+ * Serial ATA Index/Data Pair Superset Registers access
-+ *
-+ * Beginning from ICH8, there's a sane way to access SCRs using index
-+ * and data register pair located at BAR5. This creates an
-+ * interesting problem of mapping two SCRs to one port.
-+ *
-+ * Although they have separate SCRs, the master and slave aren't
-+ * independent enough to be treated as separate links - e.g. softreset
-+ * resets both. Also, there's no protocol defined for hard resetting
-+ * singled device sharing the virtual port (no defined way to acquire
-+ * device signature). This is worked around by merging the SCR values
-+ * into one sensible value and requesting follow-up SRST after
-+ * hardreset.
-+ *
-+ * SCR merging is performed in nibbles which is the unit contents in
-+ * SCRs are organized. If two values are equal, the value is used.
-+ * When they differ, merge table which lists precedence of possible
-+ * values is consulted and the first match or the last entry when
-+ * nothing matches is used. When there's no merge table for the
-+ * specific nibble, value from the first port is used.
++ * For the allocated request tables
+ */
-+static const int piix_sidx_map[] = {
-+ [SCR_STATUS] = 0,
-+ [SCR_ERROR] = 2,
-+ [SCR_CONTROL] = 1,
-+};
++struct kmem_cache *request_cachep;
+
-+static void piix_sidpr_sel(struct ata_device *dev, unsigned int reg)
-+{
-+ struct ata_port *ap = dev->link->ap;
-+ struct piix_host_priv *hpriv = ap->host->private_data;
++/*
++ * For queue allocation
++ */
++struct kmem_cache *blk_requestq_cachep = NULL;
+
-+ iowrite32(((ap->port_no * 2 + dev->devno) << 8) | piix_sidx_map[reg],
-+ hpriv->sidpr + PIIX_SIDPR_IDX);
-+}
++/*
++ * Controlling structure to kblockd
++ */
++static struct workqueue_struct *kblockd_workqueue;
+
-+static int piix_sidpr_read(struct ata_device *dev, unsigned int reg)
++static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
++
++static void drive_stat_acct(struct request *rq, int new_io)
+{
-+ struct piix_host_priv *hpriv = dev->link->ap->host->private_data;
++ int rw = rq_data_dir(rq);
+
-+ piix_sidpr_sel(dev, reg);
-+ return ioread32(hpriv->sidpr + PIIX_SIDPR_DATA);
++ if (!blk_fs_request(rq) || !rq->rq_disk)
++ return;
++
++ if (!new_io) {
++ __disk_stat_inc(rq->rq_disk, merges[rw]);
++ } else {
++ disk_round_stats(rq->rq_disk);
++ rq->rq_disk->in_flight++;
++ }
+}
+
-+static void piix_sidpr_write(struct ata_device *dev, unsigned int reg, u32 val)
++void blk_queue_congestion_threshold(struct request_queue *q)
+{
-+ struct piix_host_priv *hpriv = dev->link->ap->host->private_data;
++ int nr;
+
-+ piix_sidpr_sel(dev, reg);
-+ iowrite32(val, hpriv->sidpr + PIIX_SIDPR_DATA);
++ nr = q->nr_requests - (q->nr_requests / 8) + 1;
++ if (nr > q->nr_requests)
++ nr = q->nr_requests;
++ q->nr_congestion_on = nr;
++
++ nr = q->nr_requests - (q->nr_requests / 8) - (q->nr_requests / 16) - 1;
++ if (nr < 1)
++ nr = 1;
++ q->nr_congestion_off = nr;
+}
+
-+u32 piix_merge_scr(u32 val0, u32 val1, const int * const *merge_tbl)
++/**
++ * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
++ * @bdev: device
++ *
++ * Locates the passed device's request queue and returns the address of its
++ * backing_dev_info
++ *
++ * Will return NULL if the request queue cannot be located.
++ */
++struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
+{
-+ u32 val = 0;
-+ int i, mi;
++ struct backing_dev_info *ret = NULL;
++ struct request_queue *q = bdev_get_queue(bdev);
+
-+ for (i = 0, mi = 0; i < 32 / 4; i++) {
-+ u8 c0 = (val0 >> (i * 4)) & 0xf;
-+ u8 c1 = (val1 >> (i * 4)) & 0xf;
-+ u8 merged = c0;
-+ const int *cur;
++ if (q)
++ ret = &q->backing_dev_info;
++ return ret;
++}
++EXPORT_SYMBOL(blk_get_backing_dev_info);
+
-+ /* if no merge preference, assume the first value */
-+ cur = merge_tbl[mi];
-+ if (!cur)
-+ goto done;
-+ mi++;
++void rq_init(struct request_queue *q, struct request *rq)
++{
++ INIT_LIST_HEAD(&rq->queuelist);
++ INIT_LIST_HEAD(&rq->donelist);
+
-+ /* if two values equal, use it */
-+ if (c0 == c1)
-+ goto done;
++ rq->errors = 0;
++ rq->bio = rq->biotail = NULL;
++ INIT_HLIST_NODE(&rq->hash);
++ RB_CLEAR_NODE(&rq->rb_node);
++ rq->ioprio = 0;
++ rq->buffer = NULL;
++ rq->ref_count = 1;
++ rq->q = q;
++ rq->special = NULL;
++ rq->data_len = 0;
++ rq->data = NULL;
++ rq->nr_phys_segments = 0;
++ rq->sense = NULL;
++ rq->end_io = NULL;
++ rq->end_io_data = NULL;
++ rq->completion_data = NULL;
++ rq->next_rq = NULL;
++}
+
-+ /* choose the first match or the last from the merge table */
-+ while (*cur != -1) {
-+ if (c0 == *cur || c1 == *cur)
-+ break;
-+ cur++;
++static void req_bio_endio(struct request *rq, struct bio *bio,
++ unsigned int nbytes, int error)
++{
++ struct request_queue *q = rq->q;
++
++ if (&q->bar_rq != rq) {
++ if (error)
++ clear_bit(BIO_UPTODATE, &bio->bi_flags);
++ else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
++ error = -EIO;
++
++ if (unlikely(nbytes > bio->bi_size)) {
++ printk("%s: want %u bytes done, only %u left\n",
++ __FUNCTION__, nbytes, bio->bi_size);
++ nbytes = bio->bi_size;
+ }
-+ if (*cur == -1)
-+ cur--;
-+ merged = *cur;
-+ done:
-+ val |= merged << (i * 4);
-+ }
+
-+ return val;
++ bio->bi_size -= nbytes;
++ bio->bi_sector += (nbytes >> 9);
++ if (bio->bi_size == 0)
++ bio_endio(bio, error);
++ } else {
++
++ /*
++ * Okay, this is the barrier request in progress, just
++ * record the error;
++ */
++ if (error && !q->orderr)
++ q->orderr = error;
++ }
+}
+
-+static int piix_sidpr_scr_read(struct ata_port *ap, unsigned int reg, u32 *val)
++void blk_dump_rq_flags(struct request *rq, char *msg)
+{
-+ const int * const sstatus_merge_tbl[] = {
-+ /* DET */ (const int []){ 1, 3, 0, 4, 3, -1 },
-+ /* SPD */ (const int []){ 2, 1, 0, -1 },
-+ /* IPM */ (const int []){ 6, 2, 1, 0, -1 },
-+ NULL,
-+ };
-+ const int * const scontrol_merge_tbl[] = {
-+ /* DET */ (const int []){ 1, 0, 4, 0, -1 },
-+ /* SPD */ (const int []){ 0, 2, 1, 0, -1 },
-+ /* IPM */ (const int []){ 0, 1, 2, 3, 0, -1 },
-+ NULL,
-+ };
-+ u32 v0, v1;
++ int bit;
+
-+ if (reg >= ARRAY_SIZE(piix_sidx_map))
-+ return -EINVAL;
++ printk("%s: dev %s: type=%x, flags=%x\n", msg,
++ rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->cmd_type,
++ rq->cmd_flags);
+
-+ if (!(ap->flags & ATA_FLAG_SLAVE_POSS)) {
-+ *val = piix_sidpr_read(&ap->link.device[0], reg);
-+ return 0;
++ printk("\nsector %llu, nr/cnr %lu/%u\n", (unsigned long long)rq->sector,
++ rq->nr_sectors,
++ rq->current_nr_sectors);
++ printk("bio %p, biotail %p, buffer %p, data %p, len %u\n", rq->bio, rq->biotail, rq->buffer, rq->data, rq->data_len);
++
++ if (blk_pc_request(rq)) {
++ printk("cdb: ");
++ for (bit = 0; bit < sizeof(rq->cmd); bit++)
++ printk("%02x ", rq->cmd[bit]);
++ printk("\n");
+ }
++}
+
-+ v0 = piix_sidpr_read(&ap->link.device[0], reg);
-+ v1 = piix_sidpr_read(&ap->link.device[1], reg);
++EXPORT_SYMBOL(blk_dump_rq_flags);
+
-+ switch (reg) {
-+ case SCR_STATUS:
-+ *val = piix_merge_scr(v0, v1, sstatus_merge_tbl);
-+ break;
-+ case SCR_ERROR:
-+ *val = v0 | v1;
-+ break;
-+ case SCR_CONTROL:
-+ *val = piix_merge_scr(v0, v1, scontrol_merge_tbl);
-+ break;
-+ }
++/*
++ * "plug" the device if there are no outstanding requests: this will
++ * force the transfer to start only after we have put all the requests
++ * on the list.
++ *
++ * This is called with interrupts off and no requests on the queue and
++ * with the queue lock held.
++ */
++void blk_plug_device(struct request_queue *q)
++{
++ WARN_ON(!irqs_disabled());
+
-+ return 0;
++ /*
++ * don't plug a stopped queue, it must be paired with blk_start_queue()
++ * which will restart the queueing
++ */
++ if (blk_queue_stopped(q))
++ return;
++
++ if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
++ mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
++ blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
++ }
+}
+
-+static int piix_sidpr_scr_write(struct ata_port *ap, unsigned int reg, u32 val)
-+{
-+ if (reg >= ARRAY_SIZE(piix_sidx_map))
-+ return -EINVAL;
++EXPORT_SYMBOL(blk_plug_device);
+
-+ piix_sidpr_write(&ap->link.device[0], reg, val);
++/*
++ * remove the queue from the plugged list, if present. called with
++ * queue lock held and interrupts disabled.
++ */
++int blk_remove_plug(struct request_queue *q)
++{
++ WARN_ON(!irqs_disabled());
+
-+ if (ap->flags & ATA_FLAG_SLAVE_POSS)
-+ piix_sidpr_write(&ap->link.device[1], reg, val);
++ if (!test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
++ return 0;
+
-+ return 0;
++ del_timer(&q->unplug_timer);
++ return 1;
+}
+
-+static int piix_sidpr_hardreset(struct ata_link *link, unsigned int *class,
-+ unsigned long deadline)
++EXPORT_SYMBOL(blk_remove_plug);
++
++/*
++ * remove the plug and let it rip..
++ */
++void __generic_unplug_device(struct request_queue *q)
+{
-+ const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
-+ int rc;
++ if (unlikely(blk_queue_stopped(q)))
++ return;
+
-+ /* do hardreset */
-+ rc = sata_link_hardreset(link, timing, deadline);
-+ if (rc) {
-+ ata_link_printk(link, KERN_ERR,
-+ "COMRESET failed (errno=%d)\n", rc);
-+ return rc;
-+ }
++ if (!blk_remove_plug(q))
++ return;
+
-+ /* TODO: phy layer with polling, timeouts, etc. */
-+ if (ata_link_offline(link)) {
-+ *class = ATA_DEV_NONE;
-+ return 0;
-+ }
++ q->request_fn(q);
++}
++EXPORT_SYMBOL(__generic_unplug_device);
+
-+ return -EAGAIN;
++/**
++ * generic_unplug_device - fire a request queue
++ * @q: The &struct request_queue in question
++ *
++ * Description:
++ * Linux uses plugging to build bigger requests queues before letting
++ * the device have at them. If a queue is plugged, the I/O scheduler
++ * is still adding and merging requests on the queue. Once the queue
++ * gets unplugged, the request_fn defined for the queue is invoked and
++ * transfers started.
++ **/
++void generic_unplug_device(struct request_queue *q)
++{
++ spin_lock_irq(q->queue_lock);
++ __generic_unplug_device(q);
++ spin_unlock_irq(q->queue_lock);
+}
++EXPORT_SYMBOL(generic_unplug_device);
+
-+static void piix_sidpr_error_handler(struct ata_port *ap)
++static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
++ struct page *page)
+{
-+ ata_bmdma_drive_eh(ap, ata_std_prereset, ata_std_softreset,
-+ piix_sidpr_hardreset, ata_std_postreset);
++ struct request_queue *q = bdi->unplug_io_data;
++
++ blk_unplug(q);
+}
+
- #ifdef CONFIG_PM
- static int piix_broken_suspend(void)
- {
-@@ -1034,6 +1222,13 @@ static int piix_broken_suspend(void)
- },
- },
- {
-+ .ident = "TECRA M6",
-+ .matches = {
-+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-+ DMI_MATCH(DMI_PRODUCT_NAME, "TECRA M6"),
-+ },
-+ },
-+ {
- .ident = "TECRA M7",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-@@ -1048,6 +1243,13 @@ static int piix_broken_suspend(void)
- },
- },
- {
-+ .ident = "Satellite R20",
-+ .matches = {
-+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-+ DMI_MATCH(DMI_PRODUCT_NAME, "Satellite R20"),
-+ },
-+ },
-+ {
- .ident = "Satellite R25",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-@@ -1253,10 +1455,10 @@ static int __devinit piix_check_450nx_errata(struct pci_dev *ata_dev)
- return no_piix_dma;
- }
-
--static void __devinit piix_init_pcs(struct pci_dev *pdev,
-- struct ata_port_info *pinfo,
-+static void __devinit piix_init_pcs(struct ata_host *host,
- const struct piix_map_db *map_db)
- {
-+ struct pci_dev *pdev = to_pci_dev(host->dev);
- u16 pcs, new_pcs;
-
- pci_read_config_word(pdev, ICH5_PCS, &pcs);
-@@ -1270,11 +1472,10 @@ static void __devinit piix_init_pcs(struct pci_dev *pdev,
- }
- }
-
--static void __devinit piix_init_sata_map(struct pci_dev *pdev,
-- struct ata_port_info *pinfo,
-- const struct piix_map_db *map_db)
-+static const int *__devinit piix_init_sata_map(struct pci_dev *pdev,
-+ struct ata_port_info *pinfo,
-+ const struct piix_map_db *map_db)
- {
-- struct piix_host_priv *hpriv = pinfo[0].private_data;
- const int *map;
- int i, invalid_map = 0;
- u8 map_value;
-@@ -1298,7 +1499,6 @@ static void __devinit piix_init_sata_map(struct pci_dev *pdev,
- case IDE:
- WARN_ON((i & 1) || map[i + 1] != IDE);
- pinfo[i / 2] = piix_port_info[ich_pata_100];
-- pinfo[i / 2].private_data = hpriv;
- i++;
- printk(" IDE IDE");
- break;
-@@ -1316,7 +1516,33 @@ static void __devinit piix_init_sata_map(struct pci_dev *pdev,
- dev_printk(KERN_ERR, &pdev->dev,
- "invalid MAP value %u\n", map_value);
-
-- hpriv->map = map;
-+ return map;
++void blk_unplug_work(struct work_struct *work)
++{
++ struct request_queue *q =
++ container_of(work, struct request_queue, unplug_work);
++
++ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
++ q->rq.count[READ] + q->rq.count[WRITE]);
++
++ q->unplug_fn(q);
+}
+
-+static void __devinit piix_init_sidpr(struct ata_host *host)
++void blk_unplug_timeout(unsigned long data)
+{
-+ struct pci_dev *pdev = to_pci_dev(host->dev);
-+ struct piix_host_priv *hpriv = host->private_data;
-+ int i;
++ struct request_queue *q = (struct request_queue *)data;
+
-+ /* check for availability */
-+ for (i = 0; i < 4; i++)
-+ if (hpriv->map[i] == IDE)
-+ return;
++ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
++ q->rq.count[READ] + q->rq.count[WRITE]);
+
-+ if (!(host->ports[0]->flags & PIIX_FLAG_SIDPR))
-+ return;
++ kblockd_schedule_work(&q->unplug_work);
++}
+
-+ if (pci_resource_start(pdev, PIIX_SIDPR_BAR) == 0 ||
-+ pci_resource_len(pdev, PIIX_SIDPR_BAR) != PIIX_SIDPR_LEN)
-+ return;
++void blk_unplug(struct request_queue *q)
++{
++ /*
++ * devices don't necessarily have an ->unplug_fn defined
++ */
++ if (q->unplug_fn) {
++ blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
++ q->rq.count[READ] + q->rq.count[WRITE]);
+
-+ if (pcim_iomap_regions(pdev, 1 << PIIX_SIDPR_BAR, DRV_NAME))
-+ return;
++ q->unplug_fn(q);
++ }
++}
++EXPORT_SYMBOL(blk_unplug);
+
-+ hpriv->sidpr = pcim_iomap_table(pdev)[PIIX_SIDPR_BAR];
-+ host->ports[0]->ops = &piix_sidpr_sata_ops;
-+ host->ports[1]->ops = &piix_sidpr_sata_ops;
- }
-
- static void piix_iocfg_bit18_quirk(struct pci_dev *pdev)
-@@ -1375,8 +1601,10 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
- struct device *dev = &pdev->dev;
- struct ata_port_info port_info[2];
- const struct ata_port_info *ppi[] = { &port_info[0], &port_info[1] };
-- struct piix_host_priv *hpriv;
- unsigned long port_flags;
-+ struct ata_host *host;
-+ struct piix_host_priv *hpriv;
-+ int rc;
-
- if (!printed_version++)
- dev_printk(KERN_DEBUG, &pdev->dev,
-@@ -1386,17 +1614,31 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
- if (!in_module_init)
- return -ENODEV;
-
-+ port_info[0] = piix_port_info[ent->driver_data];
-+ port_info[1] = piix_port_info[ent->driver_data];
++/**
++ * blk_start_queue - restart a previously stopped queue
++ * @q: The &struct request_queue in question
++ *
++ * Description:
++ * blk_start_queue() will clear the stop flag on the queue, and call
++ * the request_fn for the queue if it was in a stopped state when
++ * entered. Also see blk_stop_queue(). Queue lock must be held.
++ **/
++void blk_start_queue(struct request_queue *q)
++{
++ WARN_ON(!irqs_disabled());
+
-+ port_flags = port_info[0].flags;
++ clear_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
+
-+ /* enable device and prepare host */
-+ rc = pcim_enable_device(pdev);
-+ if (rc)
-+ return rc;
++ /*
++ * one level of recursion is ok and is much faster than kicking
++ * the unplug handling
++ */
++ if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
++ q->request_fn(q);
++ clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
++ } else {
++ blk_plug_device(q);
++ kblockd_schedule_work(&q->unplug_work);
++ }
++}
+
-+ /* SATA map init can change port_info, do it before prepping host */
- hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
- if (!hpriv)
- return -ENOMEM;
-
-- port_info[0] = piix_port_info[ent->driver_data];
-- port_info[1] = piix_port_info[ent->driver_data];
-- port_info[0].private_data = hpriv;
-- port_info[1].private_data = hpriv;
-+ if (port_flags & ATA_FLAG_SATA)
-+ hpriv->map = piix_init_sata_map(pdev, port_info,
-+ piix_map_db_table[ent->driver_data]);
-
-- port_flags = port_info[0].flags;
-+ rc = ata_pci_prepare_sff_host(pdev, ppi, &host);
-+ if (rc)
-+ return rc;
-+ host->private_data = hpriv;
-
-+ /* initialize controller */
- if (port_flags & PIIX_FLAG_AHCI) {
- u8 tmp;
- pci_read_config_byte(pdev, PIIX_SCC, &tmp);
-@@ -1407,12 +1649,9 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
- }
- }
-
-- /* Initialize SATA map */
- if (port_flags & ATA_FLAG_SATA) {
-- piix_init_sata_map(pdev, port_info,
-- piix_map_db_table[ent->driver_data]);
-- piix_init_pcs(pdev, port_info,
-- piix_map_db_table[ent->driver_data]);
-+ piix_init_pcs(host, piix_map_db_table[ent->driver_data]);
-+ piix_init_sidpr(host);
- }
-
- /* apply IOCFG bit18 quirk */
-@@ -1431,12 +1670,14 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
- /* This writes into the master table but it does not
- really matter for this errata as we will apply it to
- all the PIIX devices on the board */
-- port_info[0].mwdma_mask = 0;
-- port_info[0].udma_mask = 0;
-- port_info[1].mwdma_mask = 0;
-- port_info[1].udma_mask = 0;
-+ host->ports[0]->mwdma_mask = 0;
-+ host->ports[0]->udma_mask = 0;
-+ host->ports[1]->mwdma_mask = 0;
-+ host->ports[1]->udma_mask = 0;
- }
-- return ata_pci_init_one(pdev, ppi);
++EXPORT_SYMBOL(blk_start_queue);
+
-+ pci_set_master(pdev);
-+ return ata_pci_activate_sff_host(host, ata_interrupt, &piix_sht);
- }
-
- static int __init piix_init(void)
-diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c
-index 7bf4bef..9e8ec19 100644
---- a/drivers/ata/libata-acpi.c
-+++ b/drivers/ata/libata-acpi.c
-@@ -442,40 +442,77 @@ static int ata_dev_get_GTF(struct ata_device *dev, struct ata_acpi_gtf **gtf)
- }
-
- /**
-+ * ata_acpi_gtm_xfermode - determine xfermode from GTM parameter
-+ * @dev: target device
-+ * @gtm: GTM parameter to use
++/**
++ * blk_stop_queue - stop a queue
++ * @q: The &struct request_queue in question
+ *
-+ * Determine xfermask for @dev from @gtm.
++ * Description:
++ * The Linux block layer assumes that a block driver will consume all
++ * entries on the request queue when the request_fn strategy is called.
++ * Often this will not happen, because of hardware limitations (queue
++ * depth settings). If a device driver gets a 'queue full' response,
++ * or if it simply chooses not to queue more I/O at one point, it can
++ * call this function to prevent the request_fn from being called until
++ * the driver has signalled it's ready to go again. This happens by calling
++ * blk_start_queue() to restart queue operations. Queue lock must be held.
++ **/
++void blk_stop_queue(struct request_queue *q)
++{
++ blk_remove_plug(q);
++ set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
++}
++EXPORT_SYMBOL(blk_stop_queue);
++
++/**
++ * blk_sync_queue - cancel any pending callbacks on a queue
++ * @q: the queue
+ *
-+ * LOCKING:
-+ * None.
++ * Description:
++ * The block layer may perform asynchronous callback activity
++ * on a queue, such as calling the unplug function after a timeout.
++ * A block device may call blk_sync_queue to ensure that any
++ * such activity is cancelled, thus allowing it to release resources
++ * that the callbacks might use. The caller must already have made sure
++ * that its ->make_request_fn will not re-add plugging prior to calling
++ * this function.
+ *
-+ * RETURNS:
-+ * Determined xfermask.
+ */
-+unsigned long ata_acpi_gtm_xfermask(struct ata_device *dev,
-+ const struct ata_acpi_gtm *gtm)
++void blk_sync_queue(struct request_queue *q)
+{
-+ unsigned long xfer_mask = 0;
-+ unsigned int type;
-+ int unit;
-+ u8 mode;
++ del_timer_sync(&q->unplug_timer);
++ kblockd_flush_work(&q->unplug_work);
++}
++EXPORT_SYMBOL(blk_sync_queue);
+
-+ /* we always use the 0 slot for crap hardware */
-+ unit = dev->devno;
-+ if (!(gtm->flags & 0x10))
-+ unit = 0;
++/**
++ * blk_run_queue - run a single device queue
++ * @q: The queue to run
++ */
++void blk_run_queue(struct request_queue *q)
++{
++ unsigned long flags;
+
-+ /* PIO */
-+ mode = ata_timing_cycle2mode(ATA_SHIFT_PIO, gtm->drive[unit].pio);
-+ xfer_mask |= ata_xfer_mode2mask(mode);
++ spin_lock_irqsave(q->queue_lock, flags);
++ blk_remove_plug(q);
+
-+ /* See if we have MWDMA or UDMA data. We don't bother with
-+ * MWDMA if UDMA is available as this means the BIOS set UDMA
-+ * and our error changedown if it works is UDMA to PIO anyway.
++ /*
++ * Only recurse once to avoid overrunning the stack, let the unplug
++ * handling reinvoke the handler shortly if we already got there.
+ */
-+ if (!(gtm->flags & (1 << (2 * unit))))
-+ type = ATA_SHIFT_MWDMA;
-+ else
-+ type = ATA_SHIFT_UDMA;
++ if (!elv_queue_empty(q)) {
++ if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
++ q->request_fn(q);
++ clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
++ } else {
++ blk_plug_device(q);
++ kblockd_schedule_work(&q->unplug_work);
++ }
++ }
+
-+ mode = ata_timing_cycle2mode(type, gtm->drive[unit].dma);
-+ xfer_mask |= ata_xfer_mode2mask(mode);
++ spin_unlock_irqrestore(q->queue_lock, flags);
++}
++EXPORT_SYMBOL(blk_run_queue);
+
-+ return xfer_mask;
++void blk_put_queue(struct request_queue *q)
++{
++ kobject_put(&q->kobj);
+}
-+EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);
++EXPORT_SYMBOL(blk_put_queue);
+
-+/**
- * ata_acpi_cbl_80wire - Check for 80 wire cable
- * @ap: Port to check
-+ * @gtm: GTM data to use
- *
-- * Return 1 if the ACPI mode data for this port indicates the BIOS selected
-- * an 80wire mode.
-+ * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
- */
--
--int ata_acpi_cbl_80wire(struct ata_port *ap)
-+int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
- {
-- const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
-- int valid = 0;
-+ struct ata_device *dev;
-
-- if (!gtm)
-- return 0;
-+ ata_link_for_each_dev(dev, &ap->link) {
-+ unsigned long xfer_mask, udma_mask;
++void blk_cleanup_queue(struct request_queue * q)
++{
++ mutex_lock(&q->sysfs_lock);
++ set_bit(QUEUE_FLAG_DEAD, &q->queue_flags);
++ mutex_unlock(&q->sysfs_lock);
+
-+ if (!ata_dev_enabled(dev))
-+ continue;
++ if (q->elevator)
++ elevator_exit(q->elevator);
+
-+ xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
-+ ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
++ blk_put_queue(q);
++}
+
-+ if (udma_mask & ~ATA_UDMA_MASK_40C)
-+ return 1;
-+ }
-
-- /* Split timing, DMA enabled */
-- if ((gtm->flags & 0x11) == 0x11 && gtm->drive[0].dma < 55)
-- valid |= 1;
-- if ((gtm->flags & 0x14) == 0x14 && gtm->drive[1].dma < 55)
-- valid |= 2;
-- /* Shared timing, DMA enabled */
-- if ((gtm->flags & 0x11) == 0x01 && gtm->drive[0].dma < 55)
-- valid |= 1;
-- if ((gtm->flags & 0x14) == 0x04 && gtm->drive[0].dma < 55)
-- valid |= 2;
--
-- /* Drive check */
-- if ((valid & 1) && ata_dev_enabled(&ap->link.device[0]))
-- return 1;
-- if ((valid & 2) && ata_dev_enabled(&ap->link.device[1]))
-- return 1;
- return 0;
- }
--
- EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
-
- static void ata_acpi_gtf_to_tf(struct ata_device *dev,
-@@ -776,6 +813,36 @@ void ata_acpi_on_resume(struct ata_port *ap)
- }
-
- /**
-+ * ata_acpi_set_state - set the port power state
-+ * @ap: target ATA port
-+ * @state: state, on/off
-+ *
-+ * This function executes the _PS0/_PS3 ACPI method to set the power state.
-+ * ACPI spec requires _PS0 when IDE power on and _PS3 when power off
-+ */
-+void ata_acpi_set_state(struct ata_port *ap, pm_message_t state)
++EXPORT_SYMBOL(blk_cleanup_queue);
++
++static int blk_init_free_list(struct request_queue *q)
+{
-+ struct ata_device *dev;
++ struct request_list *rl = &q->rq;
+
-+ if (!ap->acpi_handle || (ap->flags & ATA_FLAG_ACPI_SATA))
-+ return;
++ rl->count[READ] = rl->count[WRITE] = 0;
++ rl->starved[READ] = rl->starved[WRITE] = 0;
++ rl->elvpriv = 0;
++ init_waitqueue_head(&rl->wait[READ]);
++ init_waitqueue_head(&rl->wait[WRITE]);
+
-+	/* channel first and then drives for power on and vice versa
-+ for power off */
-+ if (state.event == PM_EVENT_ON)
-+ acpi_bus_set_power(ap->acpi_handle, ACPI_STATE_D0);
++ rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
++ mempool_free_slab, request_cachep, q->node);
+
-+ ata_link_for_each_dev(dev, &ap->link) {
-+ if (dev->acpi_handle && ata_dev_enabled(dev))
-+ acpi_bus_set_power(dev->acpi_handle,
-+ state.event == PM_EVENT_ON ?
-+ ACPI_STATE_D0 : ACPI_STATE_D3);
-+ }
-+ if (state.event != PM_EVENT_ON)
-+ acpi_bus_set_power(ap->acpi_handle, ACPI_STATE_D3);
++ if (!rl->rq_pool)
++ return -ENOMEM;
++
++ return 0;
+}
+
-+/**
- * ata_acpi_on_devcfg - ATA ACPI hook called on device configuration
- * @dev: target ATA device
- *
-diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
-index 6380726..bdbd55a 100644
---- a/drivers/ata/libata-core.c
-+++ b/drivers/ata/libata-core.c
-@@ -119,6 +119,10 @@ int libata_noacpi = 0;
- module_param_named(noacpi, libata_noacpi, int, 0444);
- MODULE_PARM_DESC(noacpi, "Disables the use of ACPI in probe/suspend/resume when set");
-
-+int libata_allow_tpm = 0;
-+module_param_named(allow_tpm, libata_allow_tpm, int, 0444);
-+MODULE_PARM_DESC(allow_tpm, "Permit the use of TPM commands");
++struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
++{
++ return blk_alloc_queue_node(gfp_mask, -1);
++}
++EXPORT_SYMBOL(blk_alloc_queue);
+
- MODULE_AUTHOR("Jeff Garzik");
- MODULE_DESCRIPTION("Library module for ATA devices");
- MODULE_LICENSE("GPL");
-@@ -450,9 +454,9 @@ int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
- * RETURNS:
- * Packed xfer_mask.
- */
--static unsigned int ata_pack_xfermask(unsigned int pio_mask,
-- unsigned int mwdma_mask,
-- unsigned int udma_mask)
-+unsigned long ata_pack_xfermask(unsigned long pio_mask,
-+ unsigned long mwdma_mask,
-+ unsigned long udma_mask)
- {
- return ((pio_mask << ATA_SHIFT_PIO) & ATA_MASK_PIO) |
- ((mwdma_mask << ATA_SHIFT_MWDMA) & ATA_MASK_MWDMA) |
-@@ -469,10 +473,8 @@ static unsigned int ata_pack_xfermask(unsigned int pio_mask,
- * Unpack @xfer_mask into @pio_mask, @mwdma_mask and @udma_mask.
- * Any NULL distination masks will be ignored.
- */
--static void ata_unpack_xfermask(unsigned int xfer_mask,
-- unsigned int *pio_mask,
-- unsigned int *mwdma_mask,
-- unsigned int *udma_mask)
-+void ata_unpack_xfermask(unsigned long xfer_mask, unsigned long *pio_mask,
-+ unsigned long *mwdma_mask, unsigned long *udma_mask)
- {
- if (pio_mask)
- *pio_mask = (xfer_mask & ATA_MASK_PIO) >> ATA_SHIFT_PIO;
-@@ -486,9 +488,9 @@ static const struct ata_xfer_ent {
- int shift, bits;
- u8 base;
- } ata_xfer_tbl[] = {
-- { ATA_SHIFT_PIO, ATA_BITS_PIO, XFER_PIO_0 },
-- { ATA_SHIFT_MWDMA, ATA_BITS_MWDMA, XFER_MW_DMA_0 },
-- { ATA_SHIFT_UDMA, ATA_BITS_UDMA, XFER_UDMA_0 },
-+ { ATA_SHIFT_PIO, ATA_NR_PIO_MODES, XFER_PIO_0 },
-+ { ATA_SHIFT_MWDMA, ATA_NR_MWDMA_MODES, XFER_MW_DMA_0 },
-+ { ATA_SHIFT_UDMA, ATA_NR_UDMA_MODES, XFER_UDMA_0 },
- { -1, },
- };
-
-@@ -503,9 +505,9 @@ static const struct ata_xfer_ent {
- * None.
- *
- * RETURNS:
-- * Matching XFER_* value, 0 if no match found.
-+ * Matching XFER_* value, 0xff if no match found.
- */
--static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
-+u8 ata_xfer_mask2mode(unsigned long xfer_mask)
- {
- int highbit = fls(xfer_mask) - 1;
- const struct ata_xfer_ent *ent;
-@@ -513,7 +515,7 @@ static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
- for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
- if (highbit >= ent->shift && highbit < ent->shift + ent->bits)
- return ent->base + highbit - ent->shift;
-- return 0;
-+ return 0xff;
- }
-
- /**
-@@ -528,13 +530,14 @@ static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
- * RETURNS:
- * Matching xfer_mask, 0 if no match found.
- */
--static unsigned int ata_xfer_mode2mask(u8 xfer_mode)
-+unsigned long ata_xfer_mode2mask(u8 xfer_mode)
- {
- const struct ata_xfer_ent *ent;
-
- for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
- if (xfer_mode >= ent->base && xfer_mode < ent->base + ent->bits)
-- return 1 << (ent->shift + xfer_mode - ent->base);
-+ return ((2 << (ent->shift + xfer_mode - ent->base)) - 1)
-+ & ~((1 << ent->shift) - 1);
- return 0;
- }
-
-@@ -550,7 +553,7 @@ static unsigned int ata_xfer_mode2mask(u8 xfer_mode)
- * RETURNS:
- * Matching xfer_shift, -1 if no match found.
- */
--static int ata_xfer_mode2shift(unsigned int xfer_mode)
-+int ata_xfer_mode2shift(unsigned long xfer_mode)
- {
- const struct ata_xfer_ent *ent;
-
-@@ -574,7 +577,7 @@ static int ata_xfer_mode2shift(unsigned int xfer_mode)
- * Constant C string representing highest speed listed in
- * @mode_mask, or the constant C string "<n/a>".
- */
--static const char *ata_mode_string(unsigned int xfer_mask)
-+const char *ata_mode_string(unsigned long xfer_mask)
- {
- static const char * const xfer_mode_str[] = {
- "PIO0",
-@@ -947,8 +950,8 @@ unsigned int ata_dev_try_classify(struct ata_device *dev, int present,
- if (r_err)
- *r_err = err;
-
-- /* see if device passed diags: if master then continue and warn later */
-- if (err == 0 && dev->devno == 0)
-+ /* see if device passed diags: continue and warn later */
-+ if (err == 0)
- /* diagnostic fail : do nothing _YET_ */
- dev->horkage |= ATA_HORKAGE_DIAGNOSTIC;
- else if (err == 1)
-@@ -1286,48 +1289,6 @@ static int ata_hpa_resize(struct ata_device *dev)
- }
-
- /**
-- * ata_id_to_dma_mode - Identify DMA mode from id block
-- * @dev: device to identify
-- * @unknown: mode to assume if we cannot tell
-- *
-- * Set up the timing values for the device based upon the identify
-- * reported values for the DMA mode. This function is used by drivers
-- * which rely upon firmware configured modes, but wish to report the
-- * mode correctly when possible.
-- *
-- * In addition we emit similarly formatted messages to the default
-- * ata_dev_set_mode handler, in order to provide consistency of
-- * presentation.
-- */
--
--void ata_id_to_dma_mode(struct ata_device *dev, u8 unknown)
--{
-- unsigned int mask;
-- u8 mode;
--
-- /* Pack the DMA modes */
-- mask = ((dev->id[63] >> 8) << ATA_SHIFT_MWDMA) & ATA_MASK_MWDMA;
-- if (dev->id[53] & 0x04)
-- mask |= ((dev->id[88] >> 8) << ATA_SHIFT_UDMA) & ATA_MASK_UDMA;
--
-- /* Select the mode in use */
-- mode = ata_xfer_mask2mode(mask);
--
-- if (mode != 0) {
-- ata_dev_printk(dev, KERN_INFO, "configured for %s\n",
-- ata_mode_string(mask));
-- } else {
-- /* SWDMA perhaps ? */
-- mode = unknown;
-- ata_dev_printk(dev, KERN_INFO, "configured for DMA\n");
-- }
--
-- /* Configure the device reporting */
-- dev->xfer_mode = mode;
-- dev->xfer_shift = ata_xfer_mode2shift(mode);
--}
--
--/**
- * ata_noop_dev_select - Select device 0/1 on ATA bus
- * @ap: ATA channel to manipulate
- * @device: ATA device (numbered from zero) to select
-@@ -1464,9 +1425,9 @@ static inline void ata_dump_id(const u16 *id)
- * RETURNS:
- * Computed xfermask
- */
--static unsigned int ata_id_xfermask(const u16 *id)
-+unsigned long ata_id_xfermask(const u16 *id)
- {
-- unsigned int pio_mask, mwdma_mask, udma_mask;
-+ unsigned long pio_mask, mwdma_mask, udma_mask;
-
- /* Usual case. Word 53 indicates word 64 is valid */
- if (id[ATA_ID_FIELD_VALID] & (1 << 1)) {
-@@ -1519,7 +1480,7 @@ static unsigned int ata_id_xfermask(const u16 *id)
- }
-
- /**
-- * ata_port_queue_task - Queue port_task
-+ * ata_pio_queue_task - Queue port_task
- * @ap: The ata_port to queue port_task for
- * @fn: workqueue function to be scheduled
- * @data: data for @fn to use
-@@ -1531,16 +1492,15 @@ static unsigned int ata_id_xfermask(const u16 *id)
- * one task is active at any given time.
- *
- * libata core layer takes care of synchronization between
-- * port_task and EH. ata_port_queue_task() may be ignored for EH
-+ * port_task and EH. ata_pio_queue_task() may be ignored for EH
- * synchronization.
- *
- * LOCKING:
- * Inherited from caller.
- */
--void ata_port_queue_task(struct ata_port *ap, work_func_t fn, void *data,
-- unsigned long delay)
-+static void ata_pio_queue_task(struct ata_port *ap, void *data,
-+ unsigned long delay)
- {
-- PREPARE_DELAYED_WORK(&ap->port_task, fn);
- ap->port_task_data = data;
-
- /* may fail if ata_port_flush_task() in progress */
-@@ -2090,7 +2050,7 @@ int ata_dev_configure(struct ata_device *dev)
- struct ata_eh_context *ehc = &dev->link->eh_context;
- int print_info = ehc->i.flags & ATA_EHI_PRINTINFO;
- const u16 *id = dev->id;
-- unsigned int xfer_mask;
-+ unsigned long xfer_mask;
- char revbuf[7]; /* XYZ-99\0 */
- char fwrevbuf[ATA_ID_FW_REV_LEN+1];
- char modelbuf[ATA_ID_PROD_LEN+1];
-@@ -2161,8 +2121,14 @@ int ata_dev_configure(struct ata_device *dev)
- "supports DRM functions and may "
- 			"not be fully accessible.\n");
- snprintf(revbuf, 7, "CFA");
-- } else
-+ } else {
- snprintf(revbuf, 7, "ATA-%d", ata_id_major_version(id));
-+ /* Warn the user if the device has TPM extensions */
-+ if (ata_id_has_tpm(id))
-+ ata_dev_printk(dev, KERN_WARNING,
-+ "supports DRM functions and may "
-+				"not be fully accessible.\n");
-+ }
-
- dev->n_sectors = ata_id_n_sectors(id);
-
-@@ -2295,19 +2261,8 @@ int ata_dev_configure(struct ata_device *dev)
- dev->flags |= ATA_DFLAG_DIPM;
- }
-
-- if (dev->horkage & ATA_HORKAGE_DIAGNOSTIC) {
-- /* Let the user know. We don't want to disallow opens for
-- rescue purposes, or in case the vendor is just a blithering
-- idiot */
-- if (print_info) {
-- ata_dev_printk(dev, KERN_WARNING,
--"Drive reports diagnostics failure. This may indicate a drive\n");
-- ata_dev_printk(dev, KERN_WARNING,
--"fault or invalid emulation. Contact drive vendor for information.\n");
-- }
-- }
--
-- /* limit bridge transfers to udma5, 200 sectors */
-+ /* Limit PATA drive on SATA cable bridge transfers to udma5,
-+ 200 sectors */
- if (ata_dev_knobble(dev)) {
- if (ata_msg_drv(ap) && print_info)
- ata_dev_printk(dev, KERN_INFO,
-@@ -2336,6 +2291,21 @@ int ata_dev_configure(struct ata_device *dev)
- if (ap->ops->dev_config)
- ap->ops->dev_config(dev);
-
-+ if (dev->horkage & ATA_HORKAGE_DIAGNOSTIC) {
-+ /* Let the user know. We don't want to disallow opens for
-+ rescue purposes, or in case the vendor is just a blithering
-+ idiot. Do this after the dev_config call as some controllers
-+ with buggy firmware may want to avoid reporting false device
-+ bugs */
++struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
++{
++ struct request_queue *q;
++ int err;
+
-+ if (print_info) {
-+ ata_dev_printk(dev, KERN_WARNING,
-+"Drive reports diagnostics failure. This may indicate a drive\n");
-+ ata_dev_printk(dev, KERN_WARNING,
-+"fault or invalid emulation. Contact drive vendor for information.\n");
-+ }
++ q = kmem_cache_alloc_node(blk_requestq_cachep,
++ gfp_mask | __GFP_ZERO, node_id);
++ if (!q)
++ return NULL;
++
++ q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
++ q->backing_dev_info.unplug_io_data = q;
++ err = bdi_init(&q->backing_dev_info);
++ if (err) {
++ kmem_cache_free(blk_requestq_cachep, q);
++ return NULL;
+ }
+
- if (ata_msg_probe(ap))
- ata_dev_printk(dev, KERN_DEBUG, "%s: EXIT, drv_stat = 0x%x\n",
- __FUNCTION__, ata_chk_status(ap));
-@@ -2387,6 +2357,18 @@ int ata_cable_unknown(struct ata_port *ap)
- }
-
- /**
-+ * ata_cable_ignore - return ignored PATA cable.
-+ * @ap: port
-+ *
-+ * Helper method for drivers which don't use cable type to limit
-+ * transfer mode.
-+ */
-+int ata_cable_ignore(struct ata_port *ap)
-+{
-+ return ATA_CBL_PATA_IGN;
++ init_timer(&q->unplug_timer);
++
++ kobject_init(&q->kobj, &blk_queue_ktype);
++
++ mutex_init(&q->sysfs_lock);
++
++ return q;
+}
++EXPORT_SYMBOL(blk_alloc_queue_node);
+
+/**
- * ata_cable_sata - return SATA cable type
- * @ap: port
- *
-@@ -2781,38 +2763,33 @@ int sata_set_spd(struct ata_link *link)
- */
-
- static const struct ata_timing ata_timing[] = {
-+/* { XFER_PIO_SLOW, 120, 290, 240, 960, 290, 240, 960, 0 }, */
-+ { XFER_PIO_0, 70, 290, 240, 600, 165, 150, 600, 0 },
-+ { XFER_PIO_1, 50, 290, 93, 383, 125, 100, 383, 0 },
-+ { XFER_PIO_2, 30, 290, 40, 330, 100, 90, 240, 0 },
-+ { XFER_PIO_3, 30, 80, 70, 180, 80, 70, 180, 0 },
-+ { XFER_PIO_4, 25, 70, 25, 120, 70, 25, 120, 0 },
-+ { XFER_PIO_5, 15, 65, 25, 100, 65, 25, 100, 0 },
-+ { XFER_PIO_6, 10, 55, 20, 80, 55, 20, 80, 0 },
-
-- { XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
-- { XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
-- { XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
-- { XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
-+ { XFER_SW_DMA_0, 120, 0, 0, 0, 480, 480, 960, 0 },
-+ { XFER_SW_DMA_1, 90, 0, 0, 0, 240, 240, 480, 0 },
-+ { XFER_SW_DMA_2, 60, 0, 0, 0, 120, 120, 240, 0 },
-
-- { XFER_MW_DMA_4, 25, 0, 0, 0, 55, 20, 80, 0 },
-+ { XFER_MW_DMA_0, 60, 0, 0, 0, 215, 215, 480, 0 },
-+ { XFER_MW_DMA_1, 45, 0, 0, 0, 80, 50, 150, 0 },
-+ { XFER_MW_DMA_2, 25, 0, 0, 0, 70, 25, 120, 0 },
- { XFER_MW_DMA_3, 25, 0, 0, 0, 65, 25, 100, 0 },
-- { XFER_UDMA_2, 0, 0, 0, 0, 0, 0, 0, 60 },
-- { XFER_UDMA_1, 0, 0, 0, 0, 0, 0, 0, 80 },
-- { XFER_UDMA_0, 0, 0, 0, 0, 0, 0, 0, 120 },
-+ { XFER_MW_DMA_4, 25, 0, 0, 0, 55, 20, 80, 0 },
-
- /* { XFER_UDMA_SLOW, 0, 0, 0, 0, 0, 0, 0, 150 }, */
--
-- { XFER_MW_DMA_2, 25, 0, 0, 0, 70, 25, 120, 0 },
-- { XFER_MW_DMA_1, 45, 0, 0, 0, 80, 50, 150, 0 },
-- { XFER_MW_DMA_0, 60, 0, 0, 0, 215, 215, 480, 0 },
--
-- { XFER_SW_DMA_2, 60, 0, 0, 0, 120, 120, 240, 0 },
-- { XFER_SW_DMA_1, 90, 0, 0, 0, 240, 240, 480, 0 },
-- { XFER_SW_DMA_0, 120, 0, 0, 0, 480, 480, 960, 0 },
--
-- { XFER_PIO_6, 10, 55, 20, 80, 55, 20, 80, 0 },
-- { XFER_PIO_5, 15, 65, 25, 100, 65, 25, 100, 0 },
-- { XFER_PIO_4, 25, 70, 25, 120, 70, 25, 120, 0 },
-- { XFER_PIO_3, 30, 80, 70, 180, 80, 70, 180, 0 },
--
-- { XFER_PIO_2, 30, 290, 40, 330, 100, 90, 240, 0 },
-- { XFER_PIO_1, 50, 290, 93, 383, 125, 100, 383, 0 },
-- { XFER_PIO_0, 70, 290, 240, 600, 165, 150, 600, 0 },
--
--/* { XFER_PIO_SLOW, 120, 290, 240, 960, 290, 240, 960, 0 }, */
-+ { XFER_UDMA_0, 0, 0, 0, 0, 0, 0, 0, 120 },
-+ { XFER_UDMA_1, 0, 0, 0, 0, 0, 0, 0, 80 },
-+ { XFER_UDMA_2, 0, 0, 0, 0, 0, 0, 0, 60 },
-+ { XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
-+ { XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
-+ { XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
-+ { XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
-
- { 0xFF }
- };
-@@ -2845,14 +2822,16 @@ void ata_timing_merge(const struct ata_timing *a, const struct ata_timing *b,
- if (what & ATA_TIMING_UDMA ) m->udma = max(a->udma, b->udma);
- }
-
--static const struct ata_timing *ata_timing_find_mode(unsigned short speed)
-+const struct ata_timing *ata_timing_find_mode(u8 xfer_mode)
- {
-- const struct ata_timing *t;
-+ const struct ata_timing *t = ata_timing;
-+
-+ while (xfer_mode > t->mode)
-+ t++;
-
-- for (t = ata_timing; t->mode != speed; t++)
-- if (t->mode == 0xFF)
-- return NULL;
-- return t;
-+ if (xfer_mode == t->mode)
-+ return t;
-+ return NULL;
- }
-
- int ata_timing_compute(struct ata_device *adev, unsigned short speed,
-@@ -2927,6 +2906,57 @@ int ata_timing_compute(struct ata_device *adev, unsigned short speed,
- }
-
- /**
-+ * ata_timing_cycle2mode - find xfer mode for the specified cycle duration
-+ * @xfer_shift: ATA_SHIFT_* value for transfer type to examine.
-+ * @cycle: cycle duration in ns
++ * blk_init_queue - prepare a request queue for use with a block device
++ * @rfn: The function to be called to process requests that have been
++ * placed on the queue.
++ * @lock: Request queue spin lock
+ *
-+ * Return matching xfer mode for @cycle. The returned mode is of
-+ * the transfer type specified by @xfer_shift. If @cycle is too
-+ * slow for @xfer_shift, 0xff is returned. If @cycle is faster
-+ * than the fastest known mode, the fastest mode is returned.
++ * Description:
++ * If a block device wishes to use the standard request handling procedures,
++ * which sorts requests and coalesces adjacent requests, then it must
++ * call blk_init_queue(). The function @rfn will be called when there
++ * are requests on the queue that need to be processed. If the device
++ * supports plugging, then @rfn may not be called immediately when requests
++ * are available on the queue, but may be called at some time later instead.
++ * Plugged queues are generally unplugged when a buffer belonging to one
++ * of the requests on the queue is needed, or due to memory pressure.
+ *
-+ * LOCKING:
-+ * None.
++ * @rfn is not required, or even expected, to remove all requests off the
++ * queue, but only as many as it can handle at a time. If it does leave
++ * requests on the queue, it is responsible for arranging that the requests
++ * get dealt with eventually.
+ *
-+ * RETURNS:
-+ * Matching xfer_mode, 0xff if no match found.
-+ */
-+u8 ata_timing_cycle2mode(unsigned int xfer_shift, int cycle)
++ * The queue spin lock must be held while manipulating the requests on the
++ * request queue; this lock will be taken also from interrupt context, so irq
++ * disabling is needed for it.
++ *
++ * Function returns a pointer to the initialized request queue, or NULL if
++ * it didn't succeed.
++ *
++ * Note:
++ * blk_init_queue() must be paired with a blk_cleanup_queue() call
++ * when the block device is deactivated (such as at module unload).
++ **/
++
++struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
+{
-+ u8 base_mode = 0xff, last_mode = 0xff;
-+ const struct ata_xfer_ent *ent;
-+ const struct ata_timing *t;
++ return blk_init_queue_node(rfn, lock, -1);
++}
++EXPORT_SYMBOL(blk_init_queue);
+
-+ for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
-+ if (ent->shift == xfer_shift)
-+ base_mode = ent->base;
++struct request_queue *
++blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
++{
++ struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+
-+ for (t = ata_timing_find_mode(base_mode);
-+ t && ata_xfer_mode2shift(t->mode) == xfer_shift; t++) {
-+ unsigned short this_cycle;
++ if (!q)
++ return NULL;
+
-+ switch (xfer_shift) {
-+ case ATA_SHIFT_PIO:
-+ case ATA_SHIFT_MWDMA:
-+ this_cycle = t->cycle;
-+ break;
-+ case ATA_SHIFT_UDMA:
-+ this_cycle = t->udma;
-+ break;
-+ default:
-+ return 0xff;
-+ }
++ q->node = node_id;
++ if (blk_init_free_list(q)) {
++ kmem_cache_free(blk_requestq_cachep, q);
++ return NULL;
++ }
+
-+ if (cycle > this_cycle)
-+ break;
++ /*
++ * if caller didn't supply a lock, they get per-queue locking with
++ * our embedded lock
++ */
++ if (!lock) {
++ spin_lock_init(&q->__queue_lock);
++ lock = &q->__queue_lock;
++ }
+
-+ last_mode = t->mode;
++ q->request_fn = rfn;
++ q->prep_rq_fn = NULL;
++ q->unplug_fn = generic_unplug_device;
++ q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
++ q->queue_lock = lock;
++
++ blk_queue_segment_boundary(q, 0xffffffff);
++
++ blk_queue_make_request(q, __make_request);
++ blk_queue_max_segment_size(q, MAX_SEGMENT_SIZE);
++
++ blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
++ blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
++
++ q->sg_reserved_size = INT_MAX;
++
++ /*
++ * all done
++ */
++ if (!elevator_init(q, NULL)) {
++ blk_queue_congestion_threshold(q);
++ return q;
+ }
+
-+ return last_mode;
++ blk_put_queue(q);
++ return NULL;
+}
++EXPORT_SYMBOL(blk_init_queue_node);
+
-+/**
- * ata_down_xfermask_limit - adjust dev xfer masks downward
- * @dev: Device to adjust xfer masks
- * @sel: ATA_DNXFER_* selector
-@@ -2944,8 +2974,8 @@ int ata_timing_compute(struct ata_device *adev, unsigned short speed,
- int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel)
- {
- char buf[32];
-- unsigned int orig_mask, xfer_mask;
-- unsigned int pio_mask, mwdma_mask, udma_mask;
-+ unsigned long orig_mask, xfer_mask;
-+ unsigned long pio_mask, mwdma_mask, udma_mask;
- int quiet, highbit;
-
- quiet = !!(sel & ATA_DNXFER_QUIET);
-@@ -3039,7 +3069,7 @@ static int ata_dev_set_mode(struct ata_device *dev)
-
- /* Early MWDMA devices do DMA but don't allow DMA mode setting.
- Don't fail an MWDMA0 set IFF the device indicates it is in MWDMA0 */
-- if (dev->xfer_shift == ATA_SHIFT_MWDMA &&
-+ if (dev->xfer_shift == ATA_SHIFT_MWDMA &&
- dev->dma_mode == XFER_MW_DMA_0 &&
- (dev->id[63] >> 8) & 1)
- err_mask &= ~AC_ERR_DEV;
-@@ -3089,7 +3119,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
-
- /* step 1: calculate xfer_mask */
- ata_link_for_each_dev(dev, link) {
-- unsigned int pio_mask, dma_mask;
-+ unsigned long pio_mask, dma_mask;
- unsigned int mode_mask;
-
- if (!ata_dev_enabled(dev))
-@@ -3115,7 +3145,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
- dev->dma_mode = ata_xfer_mask2mode(dma_mask);
-
- found = 1;
-- if (dev->dma_mode)
-+ if (dev->dma_mode != 0xff)
- used_dma = 1;
- }
- if (!found)
-@@ -3126,7 +3156,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
- if (!ata_dev_enabled(dev))
- continue;
-
-- if (!dev->pio_mode) {
-+ if (dev->pio_mode == 0xff) {
- ata_dev_printk(dev, KERN_WARNING, "no PIO support\n");
- rc = -EINVAL;
- goto out;
-@@ -3140,7 +3170,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
-
- /* step 3: set host DMA timings */
- ata_link_for_each_dev(dev, link) {
-- if (!ata_dev_enabled(dev) || !dev->dma_mode)
-+ if (!ata_dev_enabled(dev) || dev->dma_mode == 0xff)
- continue;
-
- dev->xfer_mode = dev->dma_mode;
-@@ -3173,31 +3203,6 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
- }
-
- /**
-- * ata_set_mode - Program timings and issue SET FEATURES - XFER
-- * @link: link on which timings will be programmed
-- * @r_failed_dev: out paramter for failed device
-- *
-- * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If
-- * ata_set_mode() fails, pointer to the failing device is
-- * returned in @r_failed_dev.
-- *
-- * LOCKING:
-- * PCI/etc. bus probe sem.
-- *
-- * RETURNS:
-- * 0 on success, negative errno otherwise
-- */
--int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
--{
-- struct ata_port *ap = link->ap;
--
-- /* has private set_mode? */
-- if (ap->ops->set_mode)
-- return ap->ops->set_mode(link, r_failed_dev);
-- return ata_do_set_mode(link, r_failed_dev);
--}
--
--/**
- * ata_tf_to_host - issue ATA taskfile to host controller
- * @ap: port to which command is being issued
- * @tf: ATA taskfile register set
-@@ -4363,7 +4368,14 @@ static unsigned int ata_dev_set_xfermode(struct ata_device *dev)
- tf.feature = SETFEATURES_XFER;
- tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE | ATA_TFLAG_POLLING;
- tf.protocol = ATA_PROT_NODATA;
-- tf.nsect = dev->xfer_mode;
-+ /* If we are using IORDY we must send the mode setting command */
-+ if (ata_pio_need_iordy(dev))
-+ tf.nsect = dev->xfer_mode;
-+ /* If the device has IORDY and the controller does not - turn it off */
-+ else if (ata_id_has_iordy(dev->id))
-+ tf.nsect = 0x01;
-+ else /* In the ancient relic department - skip all of this */
++int blk_get_queue(struct request_queue *q)
++{
++ if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
++ kobject_get(&q->kobj);
+ return 0;
-
- err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
-
-@@ -4462,17 +4474,13 @@ static unsigned int ata_dev_init_params(struct ata_device *dev,
- void ata_sg_clean(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
-- struct scatterlist *sg = qc->__sg;
-+ struct scatterlist *sg = qc->sg;
- int dir = qc->dma_dir;
- void *pad_buf = NULL;
-
-- WARN_ON(!(qc->flags & ATA_QCFLAG_DMAMAP));
- WARN_ON(sg == NULL);
-
-- if (qc->flags & ATA_QCFLAG_SINGLE)
-- WARN_ON(qc->n_elem > 1);
--
-- VPRINTK("unmapping %u sg elements\n", qc->n_elem);
-+ VPRINTK("unmapping %u sg elements\n", qc->mapped_n_elem);
-
- /* if we padded the buffer out to 32-bit bound, and data
- * xfer direction is from-device, we must copy from the
-@@ -4481,31 +4489,20 @@ void ata_sg_clean(struct ata_queued_cmd *qc)
- if (qc->pad_len && !(qc->tf.flags & ATA_TFLAG_WRITE))
- pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
-
-- if (qc->flags & ATA_QCFLAG_SG) {
-- if (qc->n_elem)
-- dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
-- /* restore last sg */
-- sg_last(sg, qc->orig_n_elem)->length += qc->pad_len;
-- if (pad_buf) {
-- struct scatterlist *psg = &qc->pad_sgent;
-- void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-- memcpy(addr + psg->offset, pad_buf, qc->pad_len);
-- kunmap_atomic(addr, KM_IRQ0);
-- }
-- } else {
-- if (qc->n_elem)
-- dma_unmap_single(ap->dev,
-- sg_dma_address(&sg[0]), sg_dma_len(&sg[0]),
-- dir);
-- /* restore sg */
-- sg->length += qc->pad_len;
-- if (pad_buf)
-- memcpy(qc->buf_virt + sg->length - qc->pad_len,
-- pad_buf, qc->pad_len);
-+ if (qc->mapped_n_elem)
-+ dma_unmap_sg(ap->dev, sg, qc->mapped_n_elem, dir);
-+ /* restore last sg */
-+ if (qc->last_sg)
-+ *qc->last_sg = qc->saved_last_sg;
-+ if (pad_buf) {
-+ struct scatterlist *psg = &qc->extra_sg[1];
-+ void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-+ memcpy(addr + psg->offset, pad_buf, qc->pad_len);
-+ kunmap_atomic(addr, KM_IRQ0);
- }
-
- qc->flags &= ~ATA_QCFLAG_DMAMAP;
-- qc->__sg = NULL;
-+ qc->sg = NULL;
- }
-
- /**
-@@ -4523,13 +4520,10 @@ static void ata_fill_sg(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct scatterlist *sg;
-- unsigned int idx;
-+ unsigned int si, pi;
-
-- WARN_ON(qc->__sg == NULL);
-- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
--
-- idx = 0;
-- ata_for_each_sg(sg, qc) {
-+ pi = 0;
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u32 addr, offset;
- u32 sg_len, len;
-
-@@ -4546,18 +4540,17 @@ static void ata_fill_sg(struct ata_queued_cmd *qc)
- if ((offset + sg_len) > 0x10000)
- len = 0x10000 - offset;
-
-- ap->prd[idx].addr = cpu_to_le32(addr);
-- ap->prd[idx].flags_len = cpu_to_le32(len & 0xffff);
-- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
-+ ap->prd[pi].addr = cpu_to_le32(addr);
-+ ap->prd[pi].flags_len = cpu_to_le32(len & 0xffff);
-+ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len);
-
-- idx++;
-+ pi++;
- sg_len -= len;
- addr += len;
- }
- }
-
-- if (idx)
-- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
-+ ap->prd[pi - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
- }
-
- /**
-@@ -4577,13 +4570,10 @@ static void ata_fill_sg_dumb(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct scatterlist *sg;
-- unsigned int idx;
--
-- WARN_ON(qc->__sg == NULL);
-- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
-+ unsigned int si, pi;
-
-- idx = 0;
-- ata_for_each_sg(sg, qc) {
-+ pi = 0;
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u32 addr, offset;
- u32 sg_len, len, blen;
-
-@@ -4601,25 +4591,24 @@ static void ata_fill_sg_dumb(struct ata_queued_cmd *qc)
- len = 0x10000 - offset;
-
- blen = len & 0xffff;
-- ap->prd[idx].addr = cpu_to_le32(addr);
-+ ap->prd[pi].addr = cpu_to_le32(addr);
- if (blen == 0) {
- /* Some PATA chipsets like the CS5530 can't
- cope with 0x0000 meaning 64K as the spec says */
-- ap->prd[idx].flags_len = cpu_to_le32(0x8000);
-+ ap->prd[pi].flags_len = cpu_to_le32(0x8000);
- blen = 0x8000;
-- ap->prd[++idx].addr = cpu_to_le32(addr + 0x8000);
-+ ap->prd[++pi].addr = cpu_to_le32(addr + 0x8000);
- }
-- ap->prd[idx].flags_len = cpu_to_le32(blen);
-- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
-+ ap->prd[pi].flags_len = cpu_to_le32(blen);
-+ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len);
-
-- idx++;
-+ pi++;
- sg_len -= len;
- addr += len;
- }
- }
-
-- if (idx)
-- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
-+ ap->prd[pi - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
- }
-
- /**
-@@ -4669,8 +4658,8 @@ int ata_check_atapi_dma(struct ata_queued_cmd *qc)
- */
- static int atapi_qc_may_overflow(struct ata_queued_cmd *qc)
- {
-- if (qc->tf.protocol != ATA_PROT_ATAPI &&
-- qc->tf.protocol != ATA_PROT_ATAPI_DMA)
-+ if (qc->tf.protocol != ATAPI_PROT_PIO &&
-+ qc->tf.protocol != ATAPI_PROT_DMA)
- return 0;
-
- if (qc->tf.flags & ATA_TFLAG_WRITE)
-@@ -4756,33 +4745,6 @@ void ata_dumb_qc_prep(struct ata_queued_cmd *qc)
- void ata_noop_qc_prep(struct ata_queued_cmd *qc) { }
-
- /**
-- * ata_sg_init_one - Associate command with memory buffer
-- * @qc: Command to be associated
-- * @buf: Memory buffer
-- * @buflen: Length of memory buffer, in bytes.
-- *
-- * Initialize the data-related elements of queued_cmd @qc
-- * to point to a single memory buffer, @buf of byte length @buflen.
-- *
-- * LOCKING:
-- * spin_lock_irqsave(host lock)
-- */
--
--void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
--{
-- qc->flags |= ATA_QCFLAG_SINGLE;
--
-- qc->__sg = &qc->sgent;
-- qc->n_elem = 1;
-- qc->orig_n_elem = 1;
-- qc->buf_virt = buf;
-- qc->nbytes = buflen;
-- qc->cursg = qc->__sg;
--
-- sg_init_one(&qc->sgent, buf, buflen);
--}
--
--/**
- * ata_sg_init - Associate command with scatter-gather table.
- * @qc: Command to be associated
- * @sg: Scatter-gather table.
-@@ -4795,84 +4757,103 @@ void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
- * LOCKING:
- * spin_lock_irqsave(host lock)
- */
--
- void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
- unsigned int n_elem)
- {
-- qc->flags |= ATA_QCFLAG_SG;
-- qc->__sg = sg;
-+ qc->sg = sg;
- qc->n_elem = n_elem;
-- qc->orig_n_elem = n_elem;
-- qc->cursg = qc->__sg;
-+ qc->cursg = qc->sg;
- }
-
--/**
-- * ata_sg_setup_one - DMA-map the memory buffer associated with a command.
-- * @qc: Command with memory buffer to be mapped.
-- *
-- * DMA-map the memory buffer associated with queued_cmd @qc.
-- *
-- * LOCKING:
-- * spin_lock_irqsave(host lock)
-- *
-- * RETURNS:
-- * Zero on success, negative on error.
-- */
--
--static int ata_sg_setup_one(struct ata_queued_cmd *qc)
-+static unsigned int ata_sg_setup_extra(struct ata_queued_cmd *qc,
-+ unsigned int *n_elem_extra,
-+ unsigned int *nbytes_extra)
- {
- struct ata_port *ap = qc->ap;
-- int dir = qc->dma_dir;
-- struct scatterlist *sg = qc->__sg;
-- dma_addr_t dma_address;
-- int trim_sg = 0;
-+ unsigned int n_elem = qc->n_elem;
-+ struct scatterlist *lsg, *copy_lsg = NULL, *tsg = NULL, *esg = NULL;
++ }
+
-+ *n_elem_extra = 0;
-+ *nbytes_extra = 0;
++ return 1;
++}
+
-+ /* needs padding? */
-+ qc->pad_len = qc->nbytes & 3;
++EXPORT_SYMBOL(blk_get_queue);
+
-+ if (likely(!qc->pad_len))
-+ return n_elem;
++static inline void blk_free_request(struct request_queue *q, struct request *rq)
++{
++ if (rq->cmd_flags & REQ_ELVPRIV)
++ elv_put_request(q, rq);
++ mempool_free(rq, q->rq.rq_pool);
++}
+
-+ /* locate last sg and save it */
-+ lsg = sg_last(qc->sg, n_elem);
-+ qc->last_sg = lsg;
-+ qc->saved_last_sg = *lsg;
++static struct request *
++blk_alloc_request(struct request_queue *q, int rw, int priv, gfp_t gfp_mask)
++{
++ struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+
-+ sg_init_table(qc->extra_sg, ARRAY_SIZE(qc->extra_sg));
-
-- /* we must lengthen transfers to end on a 32-bit boundary */
-- qc->pad_len = sg->length & 3;
- if (qc->pad_len) {
-+ struct scatterlist *psg = &qc->extra_sg[1];
- void *pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
-- struct scatterlist *psg = &qc->pad_sgent;
-+ unsigned int offset;
-
- WARN_ON(qc->dev->class != ATA_DEV_ATAPI);
-
- memset(pad_buf, 0, ATA_DMA_PAD_SZ);
-
-- if (qc->tf.flags & ATA_TFLAG_WRITE)
-- memcpy(pad_buf, qc->buf_virt + sg->length - qc->pad_len,
-- qc->pad_len);
-+ /* psg->page/offset are used to copy to-be-written
-+ * data in this function or read data in ata_sg_clean.
-+ */
-+ offset = lsg->offset + lsg->length - qc->pad_len;
-+ sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT),
-+ qc->pad_len, offset_in_page(offset));
++ if (!rq)
++ return NULL;
+
-+ if (qc->tf.flags & ATA_TFLAG_WRITE) {
-+ void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-+ memcpy(pad_buf, addr + psg->offset, qc->pad_len);
-+ kunmap_atomic(addr, KM_IRQ0);
-+ }
-
- sg_dma_address(psg) = ap->pad_dma + (qc->tag * ATA_DMA_PAD_SZ);
- sg_dma_len(psg) = ATA_DMA_PAD_SZ;
-- /* trim sg */
-- sg->length -= qc->pad_len;
-- if (sg->length == 0)
-- trim_sg = 1;
-
-- DPRINTK("padding done, sg->length=%u pad_len=%u\n",
-- sg->length, qc->pad_len);
-- }
-+ /* Trim the last sg entry and chain the original and
-+ * padding sg lists.
-+ *
-+ * Because chaining consumes one sg entry, one extra
-+ * sg entry is allocated and the last sg entry is
-+ * copied to it if the length isn't zero after padded
-+ * amount is removed.
-+ *
-+ * If the last sg entry is completely replaced by
-+ * padding sg entry, the first sg entry is skipped
-+ * while chaining.
-+ */
-+ lsg->length -= qc->pad_len;
-+ if (lsg->length) {
-+ copy_lsg = &qc->extra_sg[0];
-+ tsg = &qc->extra_sg[0];
-+ } else {
-+ n_elem--;
-+ tsg = &qc->extra_sg[1];
++ /*
++ * first three bits are identical in rq->cmd_flags and bio->bi_rw,
++ * see bio.h and blkdev.h
++ */
++ rq->cmd_flags = rw | REQ_ALLOCED;
++
++ if (priv) {
++ if (unlikely(elv_set_request(q, rq, gfp_mask))) {
++ mempool_free(rq, q->rq.rq_pool);
++ return NULL;
+ }
-
-- if (trim_sg) {
-- qc->n_elem--;
-- goto skip_map;
-- }
-+ esg = &qc->extra_sg[1];
-
-- dma_address = dma_map_single(ap->dev, qc->buf_virt,
-- sg->length, dir);
-- if (dma_mapping_error(dma_address)) {
-- /* restore sg */
-- sg->length += qc->pad_len;
-- return -1;
-+ (*n_elem_extra)++;
-+ (*nbytes_extra) += 4 - qc->pad_len;
- }
-
-- sg_dma_address(sg) = dma_address;
-- sg_dma_len(sg) = sg->length;
-+ if (copy_lsg)
-+ sg_set_page(copy_lsg, sg_page(lsg), lsg->length, lsg->offset);
-
--skip_map:
-- DPRINTK("mapped buffer of %d bytes for %s\n", sg_dma_len(sg),
-- qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
-+ sg_chain(lsg, 1, tsg);
-+ sg_mark_end(esg);
-
-- return 0;
-+ /* sglist can't start with chaining sg entry, fast forward */
-+ if (qc->sg == lsg) {
-+ qc->sg = tsg;
-+ qc->cursg = tsg;
++ rq->cmd_flags |= REQ_ELVPRIV;
+ }
+
-+ return n_elem;
- }
-
- /**
-@@ -4888,75 +4869,30 @@ skip_map:
- * Zero on success, negative on error.
- *
- */
--
- static int ata_sg_setup(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
-- struct scatterlist *sg = qc->__sg;
-- struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem);
-- int n_elem, pre_n_elem, dir, trim_sg = 0;
-+ unsigned int n_elem, n_elem_extra, nbytes_extra;
-
- VPRINTK("ENTER, ata%u\n", ap->print_id);
-- WARN_ON(!(qc->flags & ATA_QCFLAG_SG));
-
-- /* we must lengthen transfers to end on a 32-bit boundary */
-- qc->pad_len = lsg->length & 3;
-- if (qc->pad_len) {
-- void *pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
-- struct scatterlist *psg = &qc->pad_sgent;
-- unsigned int offset;
--
-- WARN_ON(qc->dev->class != ATA_DEV_ATAPI);
-+ n_elem = ata_sg_setup_extra(qc, &n_elem_extra, &nbytes_extra);
-
-- memset(pad_buf, 0, ATA_DMA_PAD_SZ);
--
-- /*
-- * psg->page/offset are used to copy to-be-written
-- * data in this function or read data in ata_sg_clean.
-- */
-- offset = lsg->offset + lsg->length - qc->pad_len;
-- sg_init_table(psg, 1);
-- sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT),
-- qc->pad_len, offset_in_page(offset));
--
-- if (qc->tf.flags & ATA_TFLAG_WRITE) {
-- void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-- memcpy(pad_buf, addr + psg->offset, qc->pad_len);
-- kunmap_atomic(addr, KM_IRQ0);
-+ if (n_elem) {
-+ n_elem = dma_map_sg(ap->dev, qc->sg, n_elem, qc->dma_dir);
-+ if (n_elem < 1) {
-+ /* restore last sg */
-+ if (qc->last_sg)
-+ *qc->last_sg = qc->saved_last_sg;
-+ return -1;
- }
--
-- sg_dma_address(psg) = ap->pad_dma + (qc->tag * ATA_DMA_PAD_SZ);
-- sg_dma_len(psg) = ATA_DMA_PAD_SZ;
-- /* trim last sg */
-- lsg->length -= qc->pad_len;
-- if (lsg->length == 0)
-- trim_sg = 1;
--
-- DPRINTK("padding done, sg[%d].length=%u pad_len=%u\n",
-- qc->n_elem - 1, lsg->length, qc->pad_len);
-- }
--
-- pre_n_elem = qc->n_elem;
-- if (trim_sg && pre_n_elem)
-- pre_n_elem--;
--
-- if (!pre_n_elem) {
-- n_elem = 0;
-- goto skip_map;
-- }
--
-- dir = qc->dma_dir;
-- n_elem = dma_map_sg(ap->dev, sg, pre_n_elem, dir);
-- if (n_elem < 1) {
-- /* restore last sg */
-- lsg->length += qc->pad_len;
-- return -1;
-+ DPRINTK("%d sg elements mapped\n", n_elem);
- }
-
-- DPRINTK("%d sg elements mapped\n", n_elem);
--
--skip_map:
-- qc->n_elem = n_elem;
-+ qc->n_elem = qc->mapped_n_elem = n_elem;
-+ qc->n_elem += n_elem_extra;
-+ qc->nbytes += nbytes_extra;
-+ qc->flags |= ATA_QCFLAG_DMAMAP;
-
- return 0;
- }
-@@ -4985,63 +4921,77 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
-
- /**
- * ata_data_xfer - Transfer data by PIO
-- * @adev: device to target
-+ * @dev: device to target
- * @buf: data buffer
- * @buflen: buffer length
-- * @write_data: read/write
-+ * @rw: read/write
- *
- * Transfer data from/to the device data register by PIO.
- *
- * LOCKING:
- * Inherited from caller.
-+ *
-+ * RETURNS:
-+ * Bytes consumed.
- */
--void ata_data_xfer(struct ata_device *adev, unsigned char *buf,
-- unsigned int buflen, int write_data)
-+unsigned int ata_data_xfer(struct ata_device *dev, unsigned char *buf,
-+ unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-+ struct ata_port *ap = dev->link->ap;
-+ void __iomem *data_addr = ap->ioaddr.data_addr;
- unsigned int words = buflen >> 1;
-
- /* Transfer multiple of 2 bytes */
-- if (write_data)
-- iowrite16_rep(ap->ioaddr.data_addr, buf, words);
-+ if (rw == READ)
-+ ioread16_rep(data_addr, buf, words);
- else
-- ioread16_rep(ap->ioaddr.data_addr, buf, words);
-+ iowrite16_rep(data_addr, buf, words);
-
- /* Transfer trailing 1 byte, if any. */
- if (unlikely(buflen & 0x01)) {
-- u16 align_buf[1] = { 0 };
-+ __le16 align_buf[1] = { 0 };
- unsigned char *trailing_buf = buf + buflen - 1;
-
-- if (write_data) {
-- memcpy(align_buf, trailing_buf, 1);
-- iowrite16(le16_to_cpu(align_buf[0]), ap->ioaddr.data_addr);
-- } else {
-- align_buf[0] = cpu_to_le16(ioread16(ap->ioaddr.data_addr));
-+ if (rw == READ) {
-+ align_buf[0] = cpu_to_le16(ioread16(data_addr));
- memcpy(trailing_buf, align_buf, 1);
-+ } else {
-+ memcpy(align_buf, trailing_buf, 1);
-+ iowrite16(le16_to_cpu(align_buf[0]), data_addr);
- }
-+ words++;
- }
++ return rq;
++}
+
-+ return words << 1;
- }
-
- /**
- * ata_data_xfer_noirq - Transfer data by PIO
-- * @adev: device to target
-+ * @dev: device to target
- * @buf: data buffer
- * @buflen: buffer length
-- * @write_data: read/write
-+ * @rw: read/write
- *
- * Transfer data from/to the device data register by PIO. Do the
- * transfer with interrupts disabled.
- *
- * LOCKING:
- * Inherited from caller.
-+ *
-+ * RETURNS:
-+ * Bytes consumed.
- */
--void ata_data_xfer_noirq(struct ata_device *adev, unsigned char *buf,
-- unsigned int buflen, int write_data)
-+unsigned int ata_data_xfer_noirq(struct ata_device *dev, unsigned char *buf,
-+ unsigned int buflen, int rw)
- {
- unsigned long flags;
-+ unsigned int consumed;
++/*
++ * ioc_batching returns true if the ioc is a valid batching request and
++ * should be given priority access to a request.
++ */
++static inline int ioc_batching(struct request_queue *q, struct io_context *ioc)
++{
++ if (!ioc)
++ return 0;
+
- local_irq_save(flags);
-- ata_data_xfer(adev, buf, buflen, write_data);
-+ consumed = ata_data_xfer(dev, buf, buflen, rw);
- local_irq_restore(flags);
++ /*
++ * Make sure the process is able to allocate at least 1 request
++ * even if the batch times out, otherwise we could theoretically
++ * lose wakeups.
++ */
++ return ioc->nr_batch_requests == q->nr_batching ||
++ (ioc->nr_batch_requests > 0
++ && time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME));
++}
+
-+ return consumed;
- }
-
-
-@@ -5152,13 +5102,13 @@ static void atapi_send_cdb(struct ata_port *ap, struct ata_queued_cmd *qc)
- ata_altstatus(ap); /* flush */
-
- switch (qc->tf.protocol) {
-- case ATA_PROT_ATAPI:
-+ case ATAPI_PROT_PIO:
- ap->hsm_task_state = HSM_ST;
- break;
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_NODATA:
- ap->hsm_task_state = HSM_ST_LAST;
- break;
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- ap->hsm_task_state = HSM_ST_LAST;
- /* initiate bmdma */
- ap->ops->bmdma_start(qc);
-@@ -5300,12 +5250,15 @@ static void atapi_pio_bytes(struct ata_queued_cmd *qc)
- bytes = (bc_hi << 8) | bc_lo;
-
- /* shall be cleared to zero, indicating xfer of data */
-- if (ireason & (1 << 0))
-+ if (unlikely(ireason & (1 << 0)))
- goto err_out;
-
- /* make sure transfer direction matches expected */
- i_write = ((ireason & (1 << 1)) == 0) ? 1 : 0;
-- if (do_write != i_write)
-+ if (unlikely(do_write != i_write))
-+ goto err_out;
++/*
++ * ioc_set_batching sets ioc to be a new "batcher" if it is not one. This
++ * will cause the process to be a "batcher" on all queues in the system. This
++ * is the behaviour we want though - once it gets a wakeup it should be given
++ * a nice run.
++ */
++static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
++{
++ if (!ioc || ioc_batching(q, ioc))
++ return;
+
-+ if (unlikely(!bytes))
- goto err_out;
-
- VPRINTK("ata%u: xfering %d bytes\n", ap->print_id, bytes);
-@@ -5341,7 +5294,7 @@ static inline int ata_hsm_ok_in_wq(struct ata_port *ap, struct ata_queued_cmd *q
- (qc->tf.flags & ATA_TFLAG_WRITE))
- return 1;
-
-- if (is_atapi_taskfile(&qc->tf) &&
-+ if (ata_is_atapi(qc->tf.protocol) &&
- !(qc->dev->flags & ATA_DFLAG_CDB_INTR))
- return 1;
- }
-@@ -5506,7 +5459,7 @@ fsm_start:
-
- case HSM_ST:
- /* complete command or read/write the data register */
-- if (qc->tf.protocol == ATA_PROT_ATAPI) {
-+ if (qc->tf.protocol == ATAPI_PROT_PIO) {
- /* ATAPI PIO protocol */
- if ((status & ATA_DRQ) == 0) {
- /* No more data to transfer or device error.
-@@ -5664,7 +5617,7 @@ fsm_start:
- msleep(2);
- status = ata_busy_wait(ap, ATA_BUSY, 10);
- if (status & ATA_BUSY) {
-- ata_port_queue_task(ap, ata_pio_task, qc, ATA_SHORT_PAUSE);
-+ ata_pio_queue_task(ap, qc, ATA_SHORT_PAUSE);
- return;
- }
- }
-@@ -5805,6 +5758,22 @@ static void fill_result_tf(struct ata_queued_cmd *qc)
- ap->ops->tf_read(ap, &qc->result_tf);
- }
-
-+static void ata_verify_xfer(struct ata_queued_cmd *qc)
++ ioc->nr_batch_requests = q->nr_batching;
++ ioc->last_waited = jiffies;
++}
++
++static void __freed_request(struct request_queue *q, int rw)
+{
-+ struct ata_device *dev = qc->dev;
++ struct request_list *rl = &q->rq;
+
-+ if (ata_tag_internal(qc->tag))
-+ return;
++ if (rl->count[rw] < queue_congestion_off_threshold(q))
++ blk_clear_queue_congested(q, rw);
+
-+ if (ata_is_nodata(qc->tf.protocol))
-+ return;
++ if (rl->count[rw] + 1 <= q->nr_requests) {
++ if (waitqueue_active(&rl->wait[rw]))
++ wake_up(&rl->wait[rw]);
+
-+ if ((dev->mwdma_mask || dev->udma_mask) && ata_is_pio(qc->tf.protocol))
-+ return;
++ blk_clear_queue_full(q, rw);
++ }
++}
+
-+ dev->flags &= ~ATA_DFLAG_DUBIOUS_XFER;
++/*
++ * A request has just been released. Account for it, update the full and
++ * congestion status, wake up any waiters. Called under q->queue_lock.
++ */
++static void freed_request(struct request_queue *q, int rw, int priv)
++{
++ struct request_list *rl = &q->rq;
++
++ rl->count[rw]--;
++ if (priv)
++ rl->elvpriv--;
++
++ __freed_request(q, rw);
++
++ if (unlikely(rl->starved[rw ^ 1]))
++ __freed_request(q, rw ^ 1);
+}
+
- /**
- * ata_qc_complete - Complete an active ATA command
- * @qc: Command to complete
-@@ -5876,6 +5845,9 @@ void ata_qc_complete(struct ata_queued_cmd *qc)
- break;
- }
-
-+ if (unlikely(dev->flags & ATA_DFLAG_DUBIOUS_XFER))
-+ ata_verify_xfer(qc);
++#define blkdev_free_rq(list) list_entry((list)->next, struct request, queuelist)
++/*
++ * Get a free request, queue_lock must be held.
++ * Returns NULL on failure, with queue_lock held.
++ * Returns !NULL on success, with queue_lock *not held*.
++ */
++static struct request *get_request(struct request_queue *q, int rw_flags,
++ struct bio *bio, gfp_t gfp_mask)
++{
++ struct request *rq = NULL;
++ struct request_list *rl = &q->rq;
++ struct io_context *ioc = NULL;
++ const int rw = rw_flags & 0x01;
++ int may_queue, priv;
+
- __ata_qc_complete(qc);
- } else {
- if (qc->flags & ATA_QCFLAG_EH_SCHEDULED)
-@@ -5938,30 +5910,6 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active,
- return nr_done;
- }
-
--static inline int ata_should_dma_map(struct ata_queued_cmd *qc)
--{
-- struct ata_port *ap = qc->ap;
--
-- switch (qc->tf.protocol) {
-- case ATA_PROT_NCQ:
-- case ATA_PROT_DMA:
-- case ATA_PROT_ATAPI_DMA:
-- return 1;
--
-- case ATA_PROT_ATAPI:
-- case ATA_PROT_PIO:
-- if (ap->flags & ATA_FLAG_PIO_DMA)
-- return 1;
--
-- /* fall through */
--
-- default:
-- return 0;
-- }
--
-- /* never reached */
--}
--
- /**
- * ata_qc_issue - issue taskfile to device
- * @qc: command to issue to device
-@@ -5978,6 +5926,7 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct ata_link *link = qc->dev->link;
-+ u8 prot = qc->tf.protocol;
-
- /* Make sure only one non-NCQ command is outstanding. The
- * check is skipped for old EH because it reuses active qc to
-@@ -5985,7 +5934,7 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
- */
- WARN_ON(ap->ops->error_handler && ata_tag_valid(link->active_tag));
-
-- if (qc->tf.protocol == ATA_PROT_NCQ) {
-+ if (ata_is_ncq(prot)) {
- WARN_ON(link->sactive & (1 << qc->tag));
-
- if (!link->sactive)
-@@ -6001,17 +5950,18 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
- qc->flags |= ATA_QCFLAG_ACTIVE;
- ap->qc_active |= 1 << qc->tag;
-
-- if (ata_should_dma_map(qc)) {
-- if (qc->flags & ATA_QCFLAG_SG) {
-- if (ata_sg_setup(qc))
-- goto sg_err;
-- } else if (qc->flags & ATA_QCFLAG_SINGLE) {
-- if (ata_sg_setup_one(qc))
-- goto sg_err;
-- }
-- } else {
-- qc->flags &= ~ATA_QCFLAG_DMAMAP;
-- }
-+ /* We guarantee to LLDs that they will have at least one
-+ * non-zero sg if the command is a data command.
++ may_queue = elv_may_queue(q, rw_flags);
++ if (may_queue == ELV_MQUEUE_NO)
++ goto rq_starved;
++
++ if (rl->count[rw]+1 >= queue_congestion_on_threshold(q)) {
++ if (rl->count[rw]+1 >= q->nr_requests) {
++ ioc = current_io_context(GFP_ATOMIC, q->node);
++ /*
++ * The queue will fill after this allocation, so set
++ * it as full, and mark this process as "batching".
++ * This process will be allowed to complete a batch of
++ * requests, others will be blocked.
++ */
++ if (!blk_queue_full(q, rw)) {
++ ioc_set_batching(q, ioc);
++ blk_set_queue_full(q, rw);
++ } else {
++ if (may_queue != ELV_MQUEUE_MUST
++ && !ioc_batching(q, ioc)) {
++ /*
++ * The queue is full and the allocating
++ * process is not a "batcher", and not
++ * exempted by the IO scheduler
++ */
++ goto out;
++ }
++ }
++ }
++ blk_set_queue_congested(q, rw);
++ }
++
++ /*
++ * Only allow batching queuers to allocate up to 50% over the defined
++ * limit of requests, otherwise we could have thousands of requests
++ * allocated with any setting of ->nr_requests
+ */
-+ BUG_ON(ata_is_data(prot) && (!qc->sg || !qc->n_elem || !qc->nbytes));
++ if (rl->count[rw] >= (3 * q->nr_requests / 2))
++ goto out;
+
-+ /* ata_sg_setup() may update nbytes */
-+ qc->raw_nbytes = qc->nbytes;
++ rl->count[rw]++;
++ rl->starved[rw] = 0;
+
-+ if (ata_is_dma(prot) || (ata_is_pio(prot) &&
-+ (ap->flags & ATA_FLAG_PIO_DMA)))
-+ if (ata_sg_setup(qc))
-+ goto sg_err;
-
- /* if device is sleeping, schedule softreset and abort the link */
- if (unlikely(qc->dev->flags & ATA_DFLAG_SLEEPING)) {
-@@ -6029,7 +5979,6 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
- return;
-
- sg_err:
-- qc->flags &= ~ATA_QCFLAG_DMAMAP;
- qc->err_mask |= AC_ERR_SYSTEM;
- err:
- ata_qc_complete(qc);
-@@ -6064,11 +6013,11 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
- switch (qc->tf.protocol) {
- case ATA_PROT_PIO:
- case ATA_PROT_NODATA:
-- case ATA_PROT_ATAPI:
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_PIO:
-+ case ATAPI_PROT_NODATA:
- qc->tf.flags |= ATA_TFLAG_POLLING;
- break;
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- if (qc->dev->flags & ATA_DFLAG_CDB_INTR)
- /* see ata_dma_blacklisted() */
- BUG();
-@@ -6091,7 +6040,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
- ap->hsm_task_state = HSM_ST_LAST;
-
- if (qc->tf.flags & ATA_TFLAG_POLLING)
-- ata_port_queue_task(ap, ata_pio_task, qc, 0);
-+ ata_pio_queue_task(ap, qc, 0);
-
- break;
-
-@@ -6113,7 +6062,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
- if (qc->tf.flags & ATA_TFLAG_WRITE) {
- /* PIO data out protocol */
- ap->hsm_task_state = HSM_ST_FIRST;
-- ata_port_queue_task(ap, ata_pio_task, qc, 0);
-+ ata_pio_queue_task(ap, qc, 0);
-
- /* always send first data block using
- * the ata_pio_task() codepath.
-@@ -6123,7 +6072,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
- ap->hsm_task_state = HSM_ST;
-
- if (qc->tf.flags & ATA_TFLAG_POLLING)
-- ata_port_queue_task(ap, ata_pio_task, qc, 0);
-+ ata_pio_queue_task(ap, qc, 0);
-
- /* if polling, ata_pio_task() handles the rest.
- * otherwise, interrupt handler takes over from here.
-@@ -6132,8 +6081,8 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
-
- break;
-
-- case ATA_PROT_ATAPI:
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_PIO:
-+ case ATAPI_PROT_NODATA:
- if (qc->tf.flags & ATA_TFLAG_POLLING)
- ata_qc_set_polling(qc);
-
-@@ -6144,10 +6093,10 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
- /* send cdb by polling if no cdb interrupt */
- if ((!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) ||
- (qc->tf.flags & ATA_TFLAG_POLLING))
-- ata_port_queue_task(ap, ata_pio_task, qc, 0);
-+ ata_pio_queue_task(ap, qc, 0);
- break;
-
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING);
-
- ap->ops->tf_load(ap, &qc->tf); /* load tf registers */
-@@ -6156,7 +6105,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
-
- /* send cdb by polling if no cdb interrupt */
- if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
-- ata_port_queue_task(ap, ata_pio_task, qc, 0);
-+ ata_pio_queue_task(ap, qc, 0);
- break;
-
- default:
-@@ -6200,15 +6149,15 @@ inline unsigned int ata_host_intr(struct ata_port *ap,
- */
-
- /* Check the ATA_DFLAG_CDB_INTR flag is enough here.
-- * The flag was turned on only for atapi devices.
-- * No need to check is_atapi_taskfile(&qc->tf) again.
-+ * The flag was turned on only for atapi devices. No
-+ * need to check ata_is_atapi(qc->tf.protocol) again.
- */
- if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
- goto idle_irq;
- break;
- case HSM_ST_LAST:
- if (qc->tf.protocol == ATA_PROT_DMA ||
-- qc->tf.protocol == ATA_PROT_ATAPI_DMA) {
-+ qc->tf.protocol == ATAPI_PROT_DMA) {
- /* check status of DMA engine */
- host_stat = ap->ops->bmdma_status(ap);
- VPRINTK("ata%u: host_stat 0x%X\n",
-@@ -6250,7 +6199,7 @@ inline unsigned int ata_host_intr(struct ata_port *ap,
- ata_hsm_move(ap, qc, status, 0);
-
- if (unlikely(qc->err_mask) && (qc->tf.protocol == ATA_PROT_DMA ||
-- qc->tf.protocol == ATA_PROT_ATAPI_DMA))
-+ qc->tf.protocol == ATAPI_PROT_DMA))
- ata_ehi_push_desc(ehi, "BMDMA stat 0x%x", host_stat);
-
- return 1; /* irq handled */
-@@ -6772,7 +6721,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
- ap->msg_enable = ATA_MSG_DRV | ATA_MSG_ERR | ATA_MSG_WARN;
- #endif
-
-- INIT_DELAYED_WORK(&ap->port_task, NULL);
-+ INIT_DELAYED_WORK(&ap->port_task, ata_pio_task);
- INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
- INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
- INIT_LIST_HEAD(&ap->eh_done_q);
-@@ -7589,7 +7538,6 @@ EXPORT_SYMBOL_GPL(ata_host_register);
- EXPORT_SYMBOL_GPL(ata_host_activate);
- EXPORT_SYMBOL_GPL(ata_host_detach);
- EXPORT_SYMBOL_GPL(ata_sg_init);
--EXPORT_SYMBOL_GPL(ata_sg_init_one);
- EXPORT_SYMBOL_GPL(ata_hsm_move);
- EXPORT_SYMBOL_GPL(ata_qc_complete);
- EXPORT_SYMBOL_GPL(ata_qc_complete_multiple);
-@@ -7601,6 +7549,13 @@ EXPORT_SYMBOL_GPL(ata_std_dev_select);
- EXPORT_SYMBOL_GPL(sata_print_link_status);
- EXPORT_SYMBOL_GPL(ata_tf_to_fis);
- EXPORT_SYMBOL_GPL(ata_tf_from_fis);
-+EXPORT_SYMBOL_GPL(ata_pack_xfermask);
-+EXPORT_SYMBOL_GPL(ata_unpack_xfermask);
-+EXPORT_SYMBOL_GPL(ata_xfer_mask2mode);
-+EXPORT_SYMBOL_GPL(ata_xfer_mode2mask);
-+EXPORT_SYMBOL_GPL(ata_xfer_mode2shift);
-+EXPORT_SYMBOL_GPL(ata_mode_string);
-+EXPORT_SYMBOL_GPL(ata_id_xfermask);
- EXPORT_SYMBOL_GPL(ata_check_status);
- EXPORT_SYMBOL_GPL(ata_altstatus);
- EXPORT_SYMBOL_GPL(ata_exec_command);
-@@ -7643,7 +7598,6 @@ EXPORT_SYMBOL_GPL(ata_wait_register);
- EXPORT_SYMBOL_GPL(ata_busy_sleep);
- EXPORT_SYMBOL_GPL(ata_wait_after_reset);
- EXPORT_SYMBOL_GPL(ata_wait_ready);
--EXPORT_SYMBOL_GPL(ata_port_queue_task);
- EXPORT_SYMBOL_GPL(ata_scsi_ioctl);
- EXPORT_SYMBOL_GPL(ata_scsi_queuecmd);
- EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
-@@ -7662,18 +7616,20 @@ EXPORT_SYMBOL_GPL(ata_host_resume);
- #endif /* CONFIG_PM */
- EXPORT_SYMBOL_GPL(ata_id_string);
- EXPORT_SYMBOL_GPL(ata_id_c_string);
--EXPORT_SYMBOL_GPL(ata_id_to_dma_mode);
- EXPORT_SYMBOL_GPL(ata_scsi_simulate);
-
- EXPORT_SYMBOL_GPL(ata_pio_need_iordy);
-+EXPORT_SYMBOL_GPL(ata_timing_find_mode);
- EXPORT_SYMBOL_GPL(ata_timing_compute);
- EXPORT_SYMBOL_GPL(ata_timing_merge);
-+EXPORT_SYMBOL_GPL(ata_timing_cycle2mode);
-
- #ifdef CONFIG_PCI
- EXPORT_SYMBOL_GPL(pci_test_config_bits);
- EXPORT_SYMBOL_GPL(ata_pci_init_sff_host);
- EXPORT_SYMBOL_GPL(ata_pci_init_bmdma);
- EXPORT_SYMBOL_GPL(ata_pci_prepare_sff_host);
-+EXPORT_SYMBOL_GPL(ata_pci_activate_sff_host);
- EXPORT_SYMBOL_GPL(ata_pci_init_one);
- EXPORT_SYMBOL_GPL(ata_pci_remove_one);
- #ifdef CONFIG_PM
-@@ -7715,4 +7671,5 @@ EXPORT_SYMBOL_GPL(ata_dev_try_classify);
- EXPORT_SYMBOL_GPL(ata_cable_40wire);
- EXPORT_SYMBOL_GPL(ata_cable_80wire);
- EXPORT_SYMBOL_GPL(ata_cable_unknown);
-+EXPORT_SYMBOL_GPL(ata_cable_ignore);
- EXPORT_SYMBOL_GPL(ata_cable_sata);
-diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
-index 21a81cd..4e31071 100644
---- a/drivers/ata/libata-eh.c
-+++ b/drivers/ata/libata-eh.c
-@@ -46,9 +46,26 @@
- #include "libata.h"
-
- enum {
-+ /* speed down verdicts */
- ATA_EH_SPDN_NCQ_OFF = (1 << 0),
- ATA_EH_SPDN_SPEED_DOWN = (1 << 1),
- ATA_EH_SPDN_FALLBACK_TO_PIO = (1 << 2),
-+ ATA_EH_SPDN_KEEP_ERRORS = (1 << 3),
++ priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
++ if (priv)
++ rl->elvpriv++;
+
-+ /* error flags */
-+ ATA_EFLAG_IS_IO = (1 << 0),
-+ ATA_EFLAG_DUBIOUS_XFER = (1 << 1),
++ spin_unlock_irq(q->queue_lock);
+
-+ /* error categories */
-+ ATA_ECAT_NONE = 0,
-+ ATA_ECAT_ATA_BUS = 1,
-+ ATA_ECAT_TOUT_HSM = 2,
-+ ATA_ECAT_UNK_DEV = 3,
-+ ATA_ECAT_DUBIOUS_NONE = 4,
-+ ATA_ECAT_DUBIOUS_ATA_BUS = 5,
-+ ATA_ECAT_DUBIOUS_TOUT_HSM = 6,
-+ ATA_ECAT_DUBIOUS_UNK_DEV = 7,
-+ ATA_ECAT_NR = 8,
- };
-
- /* Waiting in ->prereset can never be reliable. It's sometimes nice
-@@ -213,12 +230,13 @@ void ata_port_pbar_desc(struct ata_port *ap, int bar, ssize_t offset,
- if (offset < 0)
- ata_port_desc(ap, "%s %s%llu at 0x%llx", name, type, len, start);
- else
-- ata_port_desc(ap, "%s 0x%llx", name, start + offset);
-+ ata_port_desc(ap, "%s 0x%llx", name,
-+ start + (unsigned long long)offset);
- }
-
- #endif /* CONFIG_PCI */
-
--static void ata_ering_record(struct ata_ering *ering, int is_io,
-+static void ata_ering_record(struct ata_ering *ering, unsigned int eflags,
- unsigned int err_mask)
- {
- struct ata_ering_entry *ent;
-@@ -229,11 +247,20 @@ static void ata_ering_record(struct ata_ering *ering, int is_io,
- ering->cursor %= ATA_ERING_SIZE;
-
- ent = &ering->ring[ering->cursor];
-- ent->is_io = is_io;
-+ ent->eflags = eflags;
- ent->err_mask = err_mask;
- ent->timestamp = get_jiffies_64();
- }
-
-+static struct ata_ering_entry *ata_ering_top(struct ata_ering *ering)
-+{
-+ struct ata_ering_entry *ent = &ering->ring[ering->cursor];
++ rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
++ if (unlikely(!rq)) {
++ /*
++ * Allocation failed presumably due to memory. Undo anything
++ * we might have messed up.
++ *
++ * Allocating task should really be put onto the front of the
++ * wait queue, but this is pretty rare.
++ */
++ spin_lock_irq(q->queue_lock);
++ freed_request(q, rw, priv);
+
-+ if (ent->err_mask)
-+ return ent;
-+ return NULL;
++ /*
++ * In the very unlikely event that allocation failed and no
++ * requests for this direction were pending, mark us starved
++ * so that freeing of a request in the other direction will
++ * notice us. Another possible fix would be to split the
++ * rq mempool into READ and WRITE
++ */
++rq_starved:
++ if (unlikely(rl->count[rw] == 0))
++ rl->starved[rw] = 1;
++
++ goto out;
++ }
++
++ /*
++ * ioc may be NULL here, and ioc_batching will be false. That's
++ * OK, if the queue is under the request limit then requests need
++ * not count toward the nr_batch_requests limit. There will always
++ * be some limit enforced by BLK_BATCH_TIME.
++ */
++ if (ioc_batching(q, ioc))
++ ioc->nr_batch_requests--;
++
++ rq_init(q, rq);
++
++ blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
++out:
++ return rq;
+}
+
- static void ata_ering_clear(struct ata_ering *ering)
- {
- memset(ering, 0, sizeof(*ering));
-@@ -445,9 +472,20 @@ void ata_scsi_error(struct Scsi_Host *host)
- spin_lock_irqsave(ap->lock, flags);
-
- __ata_port_for_each_link(link, ap) {
-+ struct ata_eh_context *ehc = &link->eh_context;
-+ struct ata_device *dev;
++/*
++ * No available requests for this queue, unplug the device and wait for some
++ * requests to become available.
++ *
++ * Called with q->queue_lock held, and returns with it unlocked.
++ */
++static struct request *get_request_wait(struct request_queue *q, int rw_flags,
++ struct bio *bio)
++{
++ const int rw = rw_flags & 0x01;
++ struct request *rq;
+
- memset(&link->eh_context, 0, sizeof(link->eh_context));
- link->eh_context.i = link->eh_info;
- memset(&link->eh_info, 0, sizeof(link->eh_info));
++ rq = get_request(q, rw_flags, bio, GFP_NOIO);
++ while (!rq) {
++ DEFINE_WAIT(wait);
++ struct request_list *rl = &q->rq;
+
-+ ata_link_for_each_dev(dev, link) {
-+ int devno = dev->devno;
++ prepare_to_wait_exclusive(&rl->wait[rw], &wait,
++ TASK_UNINTERRUPTIBLE);
+
-+ ehc->saved_xfer_mode[devno] = dev->xfer_mode;
-+ if (ata_ncq_enabled(dev))
-+ ehc->saved_ncq_enabled |= 1 << devno;
-+ }
- }
-
- ap->pflags |= ATA_PFLAG_EH_IN_PROGRESS;
-@@ -1260,10 +1298,10 @@ static unsigned int atapi_eh_request_sense(struct ata_queued_cmd *qc)
-
- /* is it pointless to prefer PIO for "safety reasons"? */
- if (ap->flags & ATA_FLAG_PIO_DMA) {
-- tf.protocol = ATA_PROT_ATAPI_DMA;
-+ tf.protocol = ATAPI_PROT_DMA;
- tf.feature |= ATAPI_PKT_DMA;
- } else {
-- tf.protocol = ATA_PROT_ATAPI;
-+ tf.protocol = ATAPI_PROT_PIO;
- tf.lbam = SCSI_SENSE_BUFFERSIZE;
- tf.lbah = 0;
- }
-@@ -1451,20 +1489,29 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
- return action;
- }
-
--static int ata_eh_categorize_error(int is_io, unsigned int err_mask)
-+static int ata_eh_categorize_error(unsigned int eflags, unsigned int err_mask,
-+ int *xfer_ok)
- {
-+ int base = 0;
++ rq = get_request(q, rw_flags, bio, GFP_NOIO);
+
-+ if (!(eflags & ATA_EFLAG_DUBIOUS_XFER))
-+ *xfer_ok = 1;
++ if (!rq) {
++ struct io_context *ioc;
+
-+ if (!*xfer_ok)
-+ base = ATA_ECAT_DUBIOUS_NONE;
++ blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
+
- if (err_mask & AC_ERR_ATA_BUS)
-- return 1;
-+ return base + ATA_ECAT_ATA_BUS;
-
- if (err_mask & AC_ERR_TIMEOUT)
-- return 2;
-+ return base + ATA_ECAT_TOUT_HSM;
-
-- if (is_io) {
-+ if (eflags & ATA_EFLAG_IS_IO) {
- if (err_mask & AC_ERR_HSM)
-- return 2;
-+ return base + ATA_ECAT_TOUT_HSM;
- if ((err_mask &
- (AC_ERR_DEV|AC_ERR_MEDIA|AC_ERR_INVALID)) == AC_ERR_DEV)
-- return 3;
-+ return base + ATA_ECAT_UNK_DEV;
- }
-
- return 0;
-@@ -1472,18 +1519,22 @@ static int ata_eh_categorize_error(int is_io, unsigned int err_mask)
-
- struct speed_down_verdict_arg {
- u64 since;
-- int nr_errors[4];
-+ int xfer_ok;
-+ int nr_errors[ATA_ECAT_NR];
- };
-
- static int speed_down_verdict_cb(struct ata_ering_entry *ent, void *void_arg)
- {
- struct speed_down_verdict_arg *arg = void_arg;
-- int cat = ata_eh_categorize_error(ent->is_io, ent->err_mask);
-+ int cat;
-
- if (ent->timestamp < arg->since)
- return -1;
-
-+ cat = ata_eh_categorize_error(ent->eflags, ent->err_mask,
-+ &arg->xfer_ok);
- arg->nr_errors[cat]++;
++ __generic_unplug_device(q);
++ spin_unlock_irq(q->queue_lock);
++ io_schedule();
+
- return 0;
- }
-
-@@ -1495,22 +1546,48 @@ static int speed_down_verdict_cb(struct ata_ering_entry *ent, void *void_arg)
- * whether NCQ needs to be turned off, transfer speed should be
- * stepped down, or falling back to PIO is necessary.
- *
-- * Cat-1 is ATA_BUS error for any command.
-+ * ECAT_ATA_BUS : ATA_BUS error for any command
-+ *
-+ * ECAT_TOUT_HSM : TIMEOUT for any command or HSM violation for
-+ * IO commands
-+ *
-+ * ECAT_UNK_DEV : Unknown DEV error for IO commands
-+ *
-+ * ECAT_DUBIOUS_* : Identical to above three but occurred while
-+ * data transfer hasn't been verified.
-+ *
-+ * Verdicts are
-+ *
-+ * NCQ_OFF : Turn off NCQ.
-+ *
-+ * SPEED_DOWN : Speed down transfer speed but don't fall back
-+ * to PIO.
++ /*
++ * After sleeping, we become a "batching" process and
++ * will be able to allocate at least one request, and
++ * up to a big batch of them for a small period time.
++ * See ioc_batching, ioc_set_batching
++ */
++ ioc = current_io_context(GFP_NOIO, q->node);
++ ioc_set_batching(q, ioc);
++
++ spin_lock_irq(q->queue_lock);
++ }
++ finish_wait(&rl->wait[rw], &wait);
++ }
++
++ return rq;
++}
++
++struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
++{
++ struct request *rq;
++
++ BUG_ON(rw != READ && rw != WRITE);
++
++ spin_lock_irq(q->queue_lock);
++ if (gfp_mask & __GFP_WAIT) {
++ rq = get_request_wait(q, rw, NULL);
++ } else {
++ rq = get_request(q, rw, NULL, gfp_mask);
++ if (!rq)
++ spin_unlock_irq(q->queue_lock);
++ }
++ /* q->queue_lock is unlocked at this point */
++
++ return rq;
++}
++EXPORT_SYMBOL(blk_get_request);
++
++/**
++ * blk_start_queueing - initiate dispatch of requests to device
++ * @q: request queue to kick into gear
+ *
-+ * FALLBACK_TO_PIO : Fall back to PIO.
++ * This is basically a helper to remove the need to know whether a queue
++ * is plugged or not if someone just wants to initiate dispatch of requests
++ * for this queue.
+ *
-+ * Even if multiple verdicts are returned, only one action is
-+ * taken per error. An action triggered by non-DUBIOUS errors
-+ * clears ering, while one triggered by DUBIOUS_* errors doesn't.
-+ * This is to expedite speed down decisions right after device is
-+ * initially configured.
- *
-- * Cat-2 is TIMEOUT for any command or HSM violation for known
-- * supported commands.
-+ * The following are speed down rules. #1 and #2 deal with
-+ * DUBIOUS errors.
- *
-- * Cat-3 is is unclassified DEV error for known supported
-- * command.
-+ * 1. If more than one DUBIOUS_ATA_BUS or DUBIOUS_TOUT_HSM errors
-+ * occurred during last 5 mins, SPEED_DOWN and FALLBACK_TO_PIO.
- *
-- * NCQ needs to be turned off if there have been more than 3
-- * Cat-2 + Cat-3 errors during last 10 minutes.
-+ * 2. If more than one DUBIOUS_TOUT_HSM or DUBIOUS_UNK_DEV errors
-+ * occurred during last 5 mins, NCQ_OFF.
- *
-- * Speed down is necessary if there have been more than 3 Cat-1 +
-- * Cat-2 errors or 10 Cat-3 errors during last 10 minutes.
-+ * 3. If more than 8 ATA_BUS, TOUT_HSM or UNK_DEV errors
-+ * occurred during last 5 mins, FALLBACK_TO_PIO
- *
-- * Falling back to PIO mode is necessary if there have been more
-- * than 10 Cat-1 + Cat-2 + Cat-3 errors during last 5 minutes.
-+ * 4. If more than 3 TOUT_HSM or UNK_DEV errors occurred
-+ * during last 10 mins, NCQ_OFF.
++ * The queue lock must be held with interrupts disabled.
++ */
++void blk_start_queueing(struct request_queue *q)
++{
++ if (!blk_queue_plugged(q))
++ q->request_fn(q);
++ else
++ __generic_unplug_device(q);
++}
++EXPORT_SYMBOL(blk_start_queueing);
++
++/**
++ * blk_requeue_request - put a request back on queue
++ * @q: request queue where request should be inserted
++ * @rq: request to be inserted
+ *
-+ * 5. If more than 3 ATA_BUS or TOUT_HSM errors, or more than 6
-+ * UNK_DEV errors occurred during last 10 mins, SPEED_DOWN.
- *
- * LOCKING:
- * Inherited from caller.
-@@ -1525,23 +1602,38 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
- struct speed_down_verdict_arg arg;
- unsigned int verdict = 0;
-
-- /* scan past 10 mins of error history */
-+ /* scan past 5 mins of error history */
- memset(&arg, 0, sizeof(arg));
-- arg.since = j64 - min(j64, j10mins);
-+ arg.since = j64 - min(j64, j5mins);
- ata_ering_map(&dev->ering, speed_down_verdict_cb, &arg);
-
-- if (arg.nr_errors[2] + arg.nr_errors[3] > 3)
-- verdict |= ATA_EH_SPDN_NCQ_OFF;
-- if (arg.nr_errors[1] + arg.nr_errors[2] > 3 || arg.nr_errors[3] > 10)
-- verdict |= ATA_EH_SPDN_SPEED_DOWN;
-+ if (arg.nr_errors[ATA_ECAT_DUBIOUS_ATA_BUS] +
-+ arg.nr_errors[ATA_ECAT_DUBIOUS_TOUT_HSM] > 1)
-+ verdict |= ATA_EH_SPDN_SPEED_DOWN |
-+ ATA_EH_SPDN_FALLBACK_TO_PIO | ATA_EH_SPDN_KEEP_ERRORS;
-
-- /* scan past 3 mins of error history */
-+ if (arg.nr_errors[ATA_ECAT_DUBIOUS_TOUT_HSM] +
-+ arg.nr_errors[ATA_ECAT_DUBIOUS_UNK_DEV] > 1)
-+ verdict |= ATA_EH_SPDN_NCQ_OFF | ATA_EH_SPDN_KEEP_ERRORS;
++ * Description:
++ * Drivers often keep queueing requests until the hardware cannot accept
++ * more, when that condition happens we need to put the request back
++ * on the queue. Must be called with queue lock held.
++ */
++void blk_requeue_request(struct request_queue *q, struct request *rq)
++{
++ blk_add_trace_rq(q, rq, BLK_TA_REQUEUE);
+
-+ if (arg.nr_errors[ATA_ECAT_ATA_BUS] +
-+ arg.nr_errors[ATA_ECAT_TOUT_HSM] +
-+ arg.nr_errors[ATA_ECAT_UNK_DEV] > 6)
-+ verdict |= ATA_EH_SPDN_FALLBACK_TO_PIO;
++ if (blk_rq_tagged(rq))
++ blk_queue_end_tag(q, rq);
+
-+ /* scan past 10 mins of error history */
- memset(&arg, 0, sizeof(arg));
-- arg.since = j64 - min(j64, j5mins);
-+ arg.since = j64 - min(j64, j10mins);
- ata_ering_map(&dev->ering, speed_down_verdict_cb, &arg);
-
-- if (arg.nr_errors[1] + arg.nr_errors[2] + arg.nr_errors[3] > 10)
-- verdict |= ATA_EH_SPDN_FALLBACK_TO_PIO;
-+ if (arg.nr_errors[ATA_ECAT_TOUT_HSM] +
-+ arg.nr_errors[ATA_ECAT_UNK_DEV] > 3)
-+ verdict |= ATA_EH_SPDN_NCQ_OFF;
++ elv_requeue_request(q, rq);
++}
++
++EXPORT_SYMBOL(blk_requeue_request);
+
-+ if (arg.nr_errors[ATA_ECAT_ATA_BUS] +
-+ arg.nr_errors[ATA_ECAT_TOUT_HSM] > 3 ||
-+ arg.nr_errors[ATA_ECAT_UNK_DEV] > 6)
-+ verdict |= ATA_EH_SPDN_SPEED_DOWN;
-
- return verdict;
- }
-@@ -1549,7 +1641,7 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
- /**
- * ata_eh_speed_down - record error and speed down if necessary
- * @dev: Failed device
-- * @is_io: Did the device fail during normal IO?
-+ * @eflags: mask of ATA_EFLAG_* flags
- * @err_mask: err_mask of the error
- *
- * Record error and examine error history to determine whether
-@@ -1563,18 +1655,20 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
- * RETURNS:
- * Determined recovery action.
- */
--static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
-- unsigned int err_mask)
-+static unsigned int ata_eh_speed_down(struct ata_device *dev,
-+ unsigned int eflags, unsigned int err_mask)
- {
-+ struct ata_link *link = dev->link;
-+ int xfer_ok = 0;
- unsigned int verdict;
- unsigned int action = 0;
-
- /* don't bother if Cat-0 error */
-- if (ata_eh_categorize_error(is_io, err_mask) == 0)
-+ if (ata_eh_categorize_error(eflags, err_mask, &xfer_ok) == 0)
- return 0;
-
- /* record error and determine whether speed down is necessary */
-- ata_ering_record(&dev->ering, is_io, err_mask);
-+ ata_ering_record(&dev->ering, eflags, err_mask);
- verdict = ata_eh_speed_down_verdict(dev);
-
- /* turn off NCQ? */
-@@ -1590,7 +1684,7 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
- /* speed down? */
- if (verdict & ATA_EH_SPDN_SPEED_DOWN) {
- /* speed down SATA link speed if possible */
-- if (sata_down_spd_limit(dev->link) == 0) {
-+ if (sata_down_spd_limit(link) == 0) {
- action |= ATA_EH_HARDRESET;
- goto done;
- }
-@@ -1618,10 +1712,10 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
- }
-
- /* Fall back to PIO? Slowing down to PIO is meaningless for
-- * SATA. Consider it only for PATA.
-+ * SATA ATA devices. Consider it only for PATA and SATAPI.
- */
- if ((verdict & ATA_EH_SPDN_FALLBACK_TO_PIO) && (dev->spdn_cnt >= 2) &&
-- (dev->link->ap->cbl != ATA_CBL_SATA) &&
-+ (link->ap->cbl != ATA_CBL_SATA || dev->class == ATA_DEV_ATAPI) &&
- (dev->xfer_shift != ATA_SHIFT_PIO)) {
- if (ata_down_xfermask_limit(dev, ATA_DNXFER_FORCE_PIO) == 0) {
- dev->spdn_cnt = 0;
-@@ -1633,7 +1727,8 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
- return 0;
- done:
- /* device has been slowed down, blow error history */
-- ata_ering_clear(&dev->ering);
-+ if (!(verdict & ATA_EH_SPDN_KEEP_ERRORS))
-+ ata_ering_clear(&dev->ering);
- return action;
- }
-
-@@ -1653,8 +1748,8 @@ static void ata_eh_link_autopsy(struct ata_link *link)
- struct ata_port *ap = link->ap;
- struct ata_eh_context *ehc = &link->eh_context;
- struct ata_device *dev;
-- unsigned int all_err_mask = 0;
-- int tag, is_io = 0;
-+ unsigned int all_err_mask = 0, eflags = 0;
-+ int tag;
- u32 serror;
- int rc;
-
-@@ -1713,15 +1808,15 @@ static void ata_eh_link_autopsy(struct ata_link *link)
- ehc->i.dev = qc->dev;
- all_err_mask |= qc->err_mask;
- if (qc->flags & ATA_QCFLAG_IO)
-- is_io = 1;
-+ eflags |= ATA_EFLAG_IS_IO;
- }
-
- /* enforce default EH actions */
- if (ap->pflags & ATA_PFLAG_FROZEN ||
- all_err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT))
- ehc->i.action |= ATA_EH_SOFTRESET;
-- else if ((is_io && all_err_mask) ||
-- (!is_io && (all_err_mask & ~AC_ERR_DEV)))
-+ else if (((eflags & ATA_EFLAG_IS_IO) && all_err_mask) ||
-+ (!(eflags & ATA_EFLAG_IS_IO) && (all_err_mask & ~AC_ERR_DEV)))
- ehc->i.action |= ATA_EH_REVALIDATE;
-
- /* If we have offending qcs and the associated failed device,
-@@ -1743,8 +1838,11 @@ static void ata_eh_link_autopsy(struct ata_link *link)
- ata_dev_enabled(link->device))))
- dev = link->device;
-
-- if (dev)
-- ehc->i.action |= ata_eh_speed_down(dev, is_io, all_err_mask);
-+ if (dev) {
-+ if (dev->flags & ATA_DFLAG_DUBIOUS_XFER)
-+ eflags |= ATA_EFLAG_DUBIOUS_XFER;
-+ ehc->i.action |= ata_eh_speed_down(dev, eflags, all_err_mask);
-+ }
-
- DPRINTK("EXIT\n");
- }
-@@ -1880,8 +1978,8 @@ static void ata_eh_link_report(struct ata_link *link)
- [ATA_PROT_PIO] = "pio",
- [ATA_PROT_DMA] = "dma",
- [ATA_PROT_NCQ] = "ncq",
-- [ATA_PROT_ATAPI] = "pio",
-- [ATA_PROT_ATAPI_DMA] = "dma",
-+ [ATAPI_PROT_PIO] = "pio",
-+ [ATAPI_PROT_DMA] = "dma",
- };
-
- snprintf(data_buf, sizeof(data_buf), " %s %u %s",
-@@ -1889,7 +1987,7 @@ static void ata_eh_link_report(struct ata_link *link)
- dma_str[qc->dma_dir]);
- }
-
-- if (is_atapi_taskfile(&qc->tf))
-+ if (ata_is_atapi(qc->tf.protocol))
- snprintf(cdb_buf, sizeof(cdb_buf),
- "cdb %02x %02x %02x %02x %02x %02x %02x %02x "
- "%02x %02x %02x %02x %02x %02x %02x %02x\n ",
-@@ -2329,6 +2427,58 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
- return rc;
- }
-
+/**
-+ * ata_set_mode - Program timings and issue SET FEATURES - XFER
-+ * @link: link on which timings will be programmed
-+ * @r_failed_dev: out parameter for failed device
++ * blk_insert_request - insert a special request in to a request queue
++ * @q: request queue where request should be inserted
++ * @rq: request to be inserted
++ * @at_head: insert request at head or tail of queue
++ * @data: private data
+ *
-+ * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If
-+ * ata_set_mode() fails, pointer to the failing device is
-+ * returned in @r_failed_dev.
++ * Description:
++ * Many block devices need to execute commands asynchronously, so they don't
++ * block the whole kernel from preemption during request execution. This is
++ * accomplished normally by inserting artificial requests tagged as
++ * REQ_SPECIAL in to the corresponding request queue, and letting them be
++ * scheduled for actual execution by the request queue.
+ *
-+ * LOCKING:
-+ * PCI/etc. bus probe sem.
++ * We have the option of inserting the head or the tail of the queue.
++ * Typically we use the tail for new ioctls and so forth. We use the head
++ * of the queue for things like a QUEUE_FULL message from a device, or a
++ * host that is unable to accept a particular command.
++ */
++void blk_insert_request(struct request_queue *q, struct request *rq,
++ int at_head, void *data)
++{
++ int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
++ unsigned long flags;
++
++ /*
++ * tell I/O scheduler that this isn't a regular read/write (ie it
++ * must not attempt merges on this) and that it acts as a soft
++ * barrier
++ */
++ rq->cmd_type = REQ_TYPE_SPECIAL;
++ rq->cmd_flags |= REQ_SOFTBARRIER;
++
++ rq->special = data;
++
++ spin_lock_irqsave(q->queue_lock, flags);
++
++ /*
++ * If command is tagged, release the tag
++ */
++ if (blk_rq_tagged(rq))
++ blk_queue_end_tag(q, rq);
++
++ drive_stat_acct(rq, 1);
++ __elv_add_request(q, rq, where, 0);
++ blk_start_queueing(q);
++ spin_unlock_irqrestore(q->queue_lock, flags);
++}
++
++EXPORT_SYMBOL(blk_insert_request);
++
++/*
++ * add-request adds a request to the linked list.
++ * queue lock is held and interrupts disabled, as we muck with the
++ * request queue list.
++ */
++static inline void add_request(struct request_queue * q, struct request * req)
++{
++ drive_stat_acct(req, 1);
++
++ /*
++ * elevator indicated where it wants this request to be
++ * inserted at elevator_merge time
++ */
++ __elv_add_request(q, req, ELEVATOR_INSERT_SORT, 0);
++}
++
++/*
++ * disk_round_stats() - Round off the performance stats on a struct
++ * disk_stats.
+ *
-+ * RETURNS:
-+ * 0 on success, negative errno otherwise
++ * The average IO queue length and utilisation statistics are maintained
++ * by observing the current state of the queue length and the amount of
++ * time it has been in this state for.
++ *
++ * Normally, that accounting is done on IO completion, but that can result
++ * in more than a second's worth of IO being accounted for within any one
++ * second, leading to >100% utilisation. To deal with that, we call this
++ * function to do a round-off before returning the results when reading
++ * /proc/diskstats. This accounts immediately for all queue usage up to
++ * the current jiffies and restarts the counters again.
+ */
-+int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
++void disk_round_stats(struct gendisk *disk)
+{
-+ struct ata_port *ap = link->ap;
-+ struct ata_device *dev;
-+ int rc;
++ unsigned long now = jiffies;
+
-+ /* if data transfer is verified, clear DUBIOUS_XFER on ering top */
-+ ata_link_for_each_dev(dev, link) {
-+ if (!(dev->flags & ATA_DFLAG_DUBIOUS_XFER)) {
-+ struct ata_ering_entry *ent;
++ if (now == disk->stamp)
++ return;
+
-+ ent = ata_ering_top(&dev->ering);
-+ if (ent)
-+ ent->eflags &= ~ATA_EFLAG_DUBIOUS_XFER;
-+ }
++ if (disk->in_flight) {
++ __disk_stat_add(disk, time_in_queue,
++ disk->in_flight * (now - disk->stamp));
++ __disk_stat_add(disk, io_ticks, (now - disk->stamp));
+ }
++ disk->stamp = now;
++}
+
-+ /* has private set_mode? */
-+ if (ap->ops->set_mode)
-+ rc = ap->ops->set_mode(link, r_failed_dev);
-+ else
-+ rc = ata_do_set_mode(link, r_failed_dev);
++EXPORT_SYMBOL_GPL(disk_round_stats);
+
-+ /* if transfer mode has changed, set DUBIOUS_XFER on device */
-+ ata_link_for_each_dev(dev, link) {
-+ struct ata_eh_context *ehc = &link->eh_context;
-+ u8 saved_xfer_mode = ehc->saved_xfer_mode[dev->devno];
-+ u8 saved_ncq = !!(ehc->saved_ncq_enabled & (1 << dev->devno));
++/*
++ * queue lock must be held
++ */
++void __blk_put_request(struct request_queue *q, struct request *req)
++{
++ if (unlikely(!q))
++ return;
++ if (unlikely(--req->ref_count))
++ return;
+
-+ if (dev->xfer_mode != saved_xfer_mode ||
-+ ata_ncq_enabled(dev) != saved_ncq)
-+ dev->flags |= ATA_DFLAG_DUBIOUS_XFER;
++ elv_completed_request(q, req);
++
++ /*
++ * Request may not have originated from ll_rw_blk. If not,
++ * it didn't come out of our reserved rq pools
++ */
++ if (req->cmd_flags & REQ_ALLOCED) {
++ int rw = rq_data_dir(req);
++ int priv = req->cmd_flags & REQ_ELVPRIV;
++
++ BUG_ON(!list_empty(&req->queuelist));
++ BUG_ON(!hlist_unhashed(&req->hash));
++
++ blk_free_request(q, req);
++ freed_request(q, rw, priv);
+ }
++}
+
-+ return rc;
++EXPORT_SYMBOL_GPL(__blk_put_request);
++
++void blk_put_request(struct request *req)
++{
++ unsigned long flags;
++ struct request_queue *q = req->q;
++
++ /*
++ * Gee, IDE calls in w/ NULL q. Fix IDE and remove the
++ * following if (q) test.
++ */
++ if (q) {
++ spin_lock_irqsave(q->queue_lock, flags);
++ __blk_put_request(q, req);
++ spin_unlock_irqrestore(q->queue_lock, flags);
++ }
+}
+
- static int ata_link_nr_enabled(struct ata_link *link)
- {
- struct ata_device *dev;
-@@ -2375,6 +2525,24 @@ static int ata_eh_skip_recovery(struct ata_link *link)
- return 1;
- }
-
-+static int ata_eh_schedule_probe(struct ata_device *dev)
++EXPORT_SYMBOL(blk_put_request);
++
++void init_request_from_bio(struct request *req, struct bio *bio)
+{
-+ struct ata_eh_context *ehc = &dev->link->eh_context;
++ req->cmd_type = REQ_TYPE_FS;
+
-+ if (!(ehc->i.probe_mask & (1 << dev->devno)) ||
-+ (ehc->did_probe_mask & (1 << dev->devno)))
-+ return 0;
++ /*
++ * inherit FAILFAST from bio (for read-ahead, and explicit FAILFAST)
++ */
++ if (bio_rw_ahead(bio) || bio_failfast(bio))
++ req->cmd_flags |= REQ_FAILFAST;
+
-+ ata_eh_detach_dev(dev);
-+ ata_dev_init(dev);
-+ ehc->did_probe_mask |= (1 << dev->devno);
-+ ehc->i.action |= ATA_EH_SOFTRESET;
-+ ehc->saved_xfer_mode[dev->devno] = 0;
-+ ehc->saved_ncq_enabled &= ~(1 << dev->devno);
++ /*
++ * REQ_BARRIER implies no merging, but lets make it explicit
++ */
++ if (unlikely(bio_barrier(bio)))
++ req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
+
-+ return 1;
++ if (bio_sync(bio))
++ req->cmd_flags |= REQ_RW_SYNC;
++ if (bio_rw_meta(bio))
++ req->cmd_flags |= REQ_RW_META;
++
++ req->errors = 0;
++ req->hard_sector = req->sector = bio->bi_sector;
++ req->ioprio = bio_prio(bio);
++ req->start_time = jiffies;
++ blk_rq_bio_prep(req->q, req, bio);
+}
+
- static int ata_eh_handle_dev_fail(struct ata_device *dev, int err)
- {
- struct ata_eh_context *ehc = &dev->link->eh_context;
-@@ -2406,16 +2574,9 @@ static int ata_eh_handle_dev_fail(struct ata_device *dev, int err)
- if (ata_link_offline(dev->link))
- ata_eh_detach_dev(dev);
-
-- /* probe if requested */
-- if ((ehc->i.probe_mask & (1 << dev->devno)) &&
-- !(ehc->did_probe_mask & (1 << dev->devno))) {
-- ata_eh_detach_dev(dev);
-- ata_dev_init(dev);
--
-+ /* schedule probe if necessary */
-+ if (ata_eh_schedule_probe(dev))
- ehc->tries[dev->devno] = ATA_EH_DEV_TRIES;
-- ehc->did_probe_mask |= (1 << dev->devno);
-- ehc->i.action |= ATA_EH_SOFTRESET;
-- }
-
- return 1;
- } else {
-@@ -2492,14 +2653,9 @@ int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset,
- if (dev->flags & ATA_DFLAG_DETACH)
- ata_eh_detach_dev(dev);
-
-- if (!ata_dev_enabled(dev) &&
-- ((ehc->i.probe_mask & (1 << dev->devno)) &&
-- !(ehc->did_probe_mask & (1 << dev->devno)))) {
-- ata_eh_detach_dev(dev);
-- ata_dev_init(dev);
-- ehc->did_probe_mask |= (1 << dev->devno);
-- ehc->i.action |= ATA_EH_SOFTRESET;
-- }
-+ /* schedule probe if necessary */
-+ if (!ata_dev_enabled(dev))
-+ ata_eh_schedule_probe(dev);
- }
- }
-
-@@ -2747,6 +2903,7 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
- if (ap->ops->port_suspend)
- rc = ap->ops->port_suspend(ap, ap->pm_mesg);
-
-+ ata_acpi_set_state(ap, PMSG_SUSPEND);
- out:
- /* report result */
- spin_lock_irqsave(ap->lock, flags);
-@@ -2792,6 +2949,8 @@ static void ata_eh_handle_port_resume(struct ata_port *ap)
-
- WARN_ON(!(ap->pflags & ATA_PFLAG_SUSPENDED));
-
-+ ata_acpi_set_state(ap, PMSG_ON);
++static int __make_request(struct request_queue *q, struct bio *bio)
++{
++ struct request *req;
++ int el_ret, nr_sectors, barrier, err;
++ const unsigned short prio = bio_prio(bio);
++ const int sync = bio_sync(bio);
++ int rw_flags;
+
- if (ap->ops->port_resume)
- rc = ap->ops->port_resume(ap);
-
-diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
-index 14daf48..c02c490 100644
---- a/drivers/ata/libata-scsi.c
-+++ b/drivers/ata/libata-scsi.c
-@@ -517,7 +517,7 @@ static struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev,
- qc->scsicmd = cmd;
- qc->scsidone = done;
-
-- qc->__sg = scsi_sglist(cmd);
-+ qc->sg = scsi_sglist(cmd);
- qc->n_elem = scsi_sg_count(cmd);
- } else {
- cmd->result = (DID_OK << 16) | (QUEUE_FULL << 1);
-@@ -839,7 +839,14 @@ static void ata_scsi_dev_config(struct scsi_device *sdev,
- if (dev->class == ATA_DEV_ATAPI) {
- struct request_queue *q = sdev->request_queue;
- blk_queue_max_hw_segments(q, q->max_hw_segments - 1);
-- }
++ nr_sectors = bio_sectors(bio);
+
-+ /* set the min alignment */
-+ blk_queue_update_dma_alignment(sdev->request_queue,
-+ ATA_DMA_PAD_SZ - 1);
-+ } else
-+ /* ATA devices must be sector aligned */
-+ blk_queue_update_dma_alignment(sdev->request_queue,
-+ ATA_SECT_SIZE - 1);
-
- if (dev->class == ATA_DEV_ATA)
- sdev->manage_start_stop = 1;
-@@ -878,7 +885,7 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
- if (dev)
- ata_scsi_dev_config(sdev, dev);
-
-- return 0; /* scsi layer doesn't check return value, sigh */
-+ return 0;
- }
-
- /**
-@@ -2210,7 +2217,7 @@ unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf,
-
- /* sector size */
- ATA_SCSI_RBUF_SET(6, ATA_SECT_SIZE >> 8);
-- ATA_SCSI_RBUF_SET(7, ATA_SECT_SIZE);
-+ ATA_SCSI_RBUF_SET(7, ATA_SECT_SIZE & 0xff);
- } else {
- /* sector count, 64-bit */
- ATA_SCSI_RBUF_SET(0, last_lba >> (8 * 7));
-@@ -2224,7 +2231,7 @@ unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf,
-
- /* sector size */
- ATA_SCSI_RBUF_SET(10, ATA_SECT_SIZE >> 8);
-- ATA_SCSI_RBUF_SET(11, ATA_SECT_SIZE);
-+ ATA_SCSI_RBUF_SET(11, ATA_SECT_SIZE & 0xff);
- }
-
- return 0;
-@@ -2331,7 +2338,7 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
- DPRINTK("ATAPI request sense\n");
-
- /* FIXME: is this needed? */
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
-
- ap->ops->tf_read(ap, &qc->tf);
-
-@@ -2341,7 +2348,9 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
-
- ata_qc_reinit(qc);
-
-- ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
-+ /* setup sg table and init transfer direction */
-+ sg_init_one(&qc->sgent, cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
-+ ata_sg_init(qc, &qc->sgent, 1);
- qc->dma_dir = DMA_FROM_DEVICE;
-
- memset(&qc->cdb, 0, qc->dev->cdb_len);
-@@ -2352,10 +2361,10 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
- qc->tf.command = ATA_CMD_PACKET;
-
- if (ata_pio_use_silly(ap)) {
-- qc->tf.protocol = ATA_PROT_ATAPI_DMA;
-+ qc->tf.protocol = ATAPI_PROT_DMA;
- qc->tf.feature |= ATAPI_PKT_DMA;
- } else {
-- qc->tf.protocol = ATA_PROT_ATAPI;
-+ qc->tf.protocol = ATAPI_PROT_PIO;
- qc->tf.lbam = SCSI_SENSE_BUFFERSIZE;
- qc->tf.lbah = 0;
- }
-@@ -2526,12 +2535,12 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
- if (using_pio || nodata) {
- /* no data, or PIO data xfer */
- if (nodata)
-- qc->tf.protocol = ATA_PROT_ATAPI_NODATA;
-+ qc->tf.protocol = ATAPI_PROT_NODATA;
- else
-- qc->tf.protocol = ATA_PROT_ATAPI;
-+ qc->tf.protocol = ATAPI_PROT_PIO;
- } else {
- /* DMA data xfer */
-- qc->tf.protocol = ATA_PROT_ATAPI_DMA;
-+ qc->tf.protocol = ATAPI_PROT_DMA;
- qc->tf.feature |= ATAPI_PKT_DMA;
-
- if (atapi_dmadir && (scmd->sc_data_direction != DMA_TO_DEVICE))
-@@ -2690,6 +2699,24 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
- if ((tf->protocol = ata_scsi_map_proto(cdb[1])) == ATA_PROT_UNKNOWN)
- goto invalid_fld;
-
+ /*
-+ * Filter TPM commands by default. These provide an
-+ * essentially uncontrolled encrypted "back door" between
-+ * applications and the disk. Set libata.allow_tpm=1 if you
-+ * have a real reason for wanting to use them. This ensures
-+ * that installed software cannot easily mess stuff up without
-+ * user intent. DVR type users will probably ship with this enabled
-+ * for movie content management.
-+ *
-+ * Note that for ATA8 we can issue a DCS change and DCS freeze lock
-+ * for this and should do in future but that it is not sufficient as
-+ * DCS is an optional feature set. Thus we also do the software filter
-+ * so that we comply with the TC consortium stated goal that the user
-+ * can turn off TC features of their system.
++ * low level driver can indicate that it wants pages above a
++ * certain limit bounced to low memory (ie for highmem, or even
++ * ISA dma in theory)
+ */
-+ if (tf->command >= 0x5C && tf->command <= 0x5F && !libata_allow_tpm)
-+ goto invalid_fld;
++ blk_queue_bounce(q, &bio);
+
- /* We may not issue DMA commands if no DMA mode is set */
- if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0)
- goto invalid_fld;
-diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
-index b7ac80b..60cd4b1 100644
---- a/drivers/ata/libata-sff.c
-+++ b/drivers/ata/libata-sff.c
-@@ -147,7 +147,9 @@ void ata_exec_command(struct ata_port *ap, const struct ata_taskfile *tf)
- * @tf: ATA taskfile register set for storing input
- *
- * Reads ATA taskfile registers for currently-selected device
-- * into @tf.
-+ * into @tf. Assumes the device has a fully SFF compliant task file
-+ * layout and behaviour. If your device does not (e.g. has a different
-+ * status method) then you will need to provide a replacement tf_read
- *
- * LOCKING:
- * Inherited from caller.
-@@ -156,7 +158,7 @@ void ata_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
- {
- struct ata_ioports *ioaddr = &ap->ioaddr;
-
-- tf->command = ata_chk_status(ap);
-+ tf->command = ata_check_status(ap);
- tf->feature = ioread8(ioaddr->error_addr);
- tf->nsect = ioread8(ioaddr->nsect_addr);
- tf->lbal = ioread8(ioaddr->lbal_addr);
-@@ -415,7 +417,7 @@ void ata_bmdma_drive_eh(struct ata_port *ap, ata_prereset_fn_t prereset,
- ap->hsm_task_state = HSM_ST_IDLE;
-
- if (qc && (qc->tf.protocol == ATA_PROT_DMA ||
-- qc->tf.protocol == ATA_PROT_ATAPI_DMA)) {
-+ qc->tf.protocol == ATAPI_PROT_DMA)) {
- u8 host_stat;
-
- host_stat = ap->ops->bmdma_status(ap);
-@@ -549,7 +551,7 @@ int ata_pci_init_bmdma(struct ata_host *host)
- return rc;
-
- /* request and iomap DMA region */
-- rc = pcim_iomap_regions(pdev, 1 << 4, DRV_NAME);
-+ rc = pcim_iomap_regions(pdev, 1 << 4, dev_driver_string(gdev));
- if (rc) {
- dev_printk(KERN_ERR, gdev, "failed to request/iomap BAR4\n");
- return -ENOMEM;
-@@ -619,7 +621,8 @@ int ata_pci_init_sff_host(struct ata_host *host)
- continue;
- }
-
-- rc = pcim_iomap_regions(pdev, 0x3 << base, DRV_NAME);
-+ rc = pcim_iomap_regions(pdev, 0x3 << base,
-+ dev_driver_string(gdev));
- if (rc) {
- dev_printk(KERN_WARNING, gdev,
- "failed to request/iomap BARs for port %d "
-@@ -711,6 +714,99 @@ int ata_pci_prepare_sff_host(struct pci_dev *pdev,
- }
-
- /**
-+ * ata_pci_activate_sff_host - start SFF host, request IRQ and register it
-+ * @host: target SFF ATA host
-+ * @irq_handler: irq_handler used when requesting IRQ(s)
-+ * @sht: scsi_host_template to use when registering the host
-+ *
-+ * This is the counterpart of ata_host_activate() for SFF ATA
-+ * hosts. This separate helper is necessary because SFF hosts
-+ * use two separate interrupts in legacy mode.
-+ *
-+ * LOCKING:
-+ * Inherited from calling layer (may sleep).
-+ *
-+ * RETURNS:
-+ * 0 on success, -errno otherwise.
-+ */
-+int ata_pci_activate_sff_host(struct ata_host *host,
-+ irq_handler_t irq_handler,
-+ struct scsi_host_template *sht)
-+{
-+ struct device *dev = host->dev;
-+ struct pci_dev *pdev = to_pci_dev(dev);
-+ const char *drv_name = dev_driver_string(host->dev);
-+ int legacy_mode = 0, rc;
++ barrier = bio_barrier(bio);
++ if (unlikely(barrier) && (q->next_ordered == QUEUE_ORDERED_NONE)) {
++ err = -EOPNOTSUPP;
++ goto end_io;
++ }
+
-+ rc = ata_host_start(host);
-+ if (rc)
-+ return rc;
++ spin_lock_irq(q->queue_lock);
+
-+ if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
-+ u8 tmp8, mask;
++ if (unlikely(barrier) || elv_queue_empty(q))
++ goto get_rq;
+
-+ /* TODO: What if one channel is in native mode ... */
-+ pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
-+ mask = (1 << 2) | (1 << 0);
-+ if ((tmp8 & mask) != mask)
-+ legacy_mode = 1;
-+#if defined(CONFIG_NO_ATA_LEGACY)
-+ /* Some platforms with PCI limits cannot address compat
-+ port space. In that case we punt if their firmware has
-+ left a device in compatibility mode */
-+ if (legacy_mode) {
-+ printk(KERN_ERR "ata: Compatibility mode ATA is not supported on this platform, skipping.\n");
-+ return -EOPNOTSUPP;
-+ }
-+#endif
-+ }
++ el_ret = elv_merge(q, &req, bio);
++ switch (el_ret) {
++ case ELEVATOR_BACK_MERGE:
++ BUG_ON(!rq_mergeable(req));
+
-+ if (!devres_open_group(dev, NULL, GFP_KERNEL))
-+ return -ENOMEM;
++ if (!ll_back_merge_fn(q, req, bio))
++ break;
+
-+ if (!legacy_mode && pdev->irq) {
-+ rc = devm_request_irq(dev, pdev->irq, irq_handler,
-+ IRQF_SHARED, drv_name, host);
-+ if (rc)
++ blk_add_trace_bio(q, bio, BLK_TA_BACKMERGE);
++
++ req->biotail->bi_next = bio;
++ req->biotail = bio;
++ req->nr_sectors = req->hard_nr_sectors += nr_sectors;
++ req->ioprio = ioprio_best(req->ioprio, prio);
++ drive_stat_acct(req, 0);
++ if (!attempt_back_merge(q, req))
++ elv_merged_request(q, req, el_ret);
+ goto out;
+
-+ ata_port_desc(host->ports[0], "irq %d", pdev->irq);
-+ ata_port_desc(host->ports[1], "irq %d", pdev->irq);
-+ } else if (legacy_mode) {
-+ if (!ata_port_is_dummy(host->ports[0])) {
-+ rc = devm_request_irq(dev, ATA_PRIMARY_IRQ(pdev),
-+ irq_handler, IRQF_SHARED,
-+ drv_name, host);
-+ if (rc)
-+ goto out;
++ case ELEVATOR_FRONT_MERGE:
++ BUG_ON(!rq_mergeable(req));
+
-+ ata_port_desc(host->ports[0], "irq %d",
-+ ATA_PRIMARY_IRQ(pdev));
-+ }
++ if (!ll_front_merge_fn(q, req, bio))
++ break;
+
-+ if (!ata_port_is_dummy(host->ports[1])) {
-+ rc = devm_request_irq(dev, ATA_SECONDARY_IRQ(pdev),
-+ irq_handler, IRQF_SHARED,
-+ drv_name, host);
-+ if (rc)
-+ goto out;
++ blk_add_trace_bio(q, bio, BLK_TA_FRONTMERGE);
+
-+ ata_port_desc(host->ports[1], "irq %d",
-+ ATA_SECONDARY_IRQ(pdev));
-+ }
++ bio->bi_next = req->bio;
++ req->bio = bio;
++
++ /*
++ * may not be valid. if the low level driver said
++ * it didn't need a bounce buffer then it better
++ * not touch req->buffer either...
++ */
++ req->buffer = bio_data(bio);
++ req->current_nr_sectors = bio_cur_sectors(bio);
++ req->hard_cur_sectors = req->current_nr_sectors;
++ req->sector = req->hard_sector = bio->bi_sector;
++ req->nr_sectors = req->hard_nr_sectors += nr_sectors;
++ req->ioprio = ioprio_best(req->ioprio, prio);
++ drive_stat_acct(req, 0);
++ if (!attempt_front_merge(q, req))
++ elv_merged_request(q, req, el_ret);
++ goto out;
++
++ /* ELV_NO_MERGE: elevator says don't/can't merge. */
++ default:
++ ;
+ }
+
-+ rc = ata_host_register(host, sht);
-+ out:
-+ if (rc == 0)
-+ devres_remove_group(dev, NULL);
-+ else
-+ devres_release_group(dev, NULL);
++get_rq:
++ /*
++ * This sync check and mask will be re-done in init_request_from_bio(),
++ * but we need to set it earlier to expose the sync flag to the
++ * rq allocator and io schedulers.
++ */
++ rw_flags = bio_data_dir(bio);
++ if (sync)
++ rw_flags |= REQ_RW_SYNC;
+
-+ return rc;
-+}
++ /*
++ * Grab a free request. This might sleep but cannot fail.
++ * Returns with the queue unlocked.
++ */
++ req = get_request_wait(q, rw_flags, bio);
+
-+/**
- * ata_pci_init_one - Initialize/register PCI IDE host controller
- * @pdev: Controller to be initialized
- * @ppi: array of port_info, must be enough for two ports
-@@ -739,8 +835,6 @@ int ata_pci_init_one(struct pci_dev *pdev,
- struct device *dev = &pdev->dev;
- const struct ata_port_info *pi = NULL;
- struct ata_host *host = NULL;
-- u8 mask;
-- int legacy_mode = 0;
- int i, rc;
-
- DPRINTK("ENTER\n");
-@@ -762,95 +856,24 @@ int ata_pci_init_one(struct pci_dev *pdev,
- if (!devres_open_group(dev, NULL, GFP_KERNEL))
- return -ENOMEM;
-
-- /* FIXME: Really for ATA it isn't safe because the device may be
-- multi-purpose and we want to leave it alone if it was already
-- enabled. Secondly for shared use as Arjan says we want refcounting
--
-- Checking dev->is_enabled is insufficient as this is not set at
-- boot for the primary video which is BIOS enabled
-- */
--
- rc = pcim_enable_device(pdev);
- if (rc)
-- goto err_out;
-+ goto out;
-
-- if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
-- u8 tmp8;
--
-- /* TODO: What if one channel is in native mode ... */
-- pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
-- mask = (1 << 2) | (1 << 0);
-- if ((tmp8 & mask) != mask)
-- legacy_mode = 1;
--#if defined(CONFIG_NO_ATA_LEGACY)
-- /* Some platforms with PCI limits cannot address compat
-- port space. In that case we punt if their firmware has
-- left a device in compatibility mode */
-- if (legacy_mode) {
-- printk(KERN_ERR "ata: Compatibility mode ATA is not supported on this platform, skipping.\n");
-- rc = -EOPNOTSUPP;
-- goto err_out;
-- }
--#endif
-- }
--
-- /* prepare host */
-+ /* prepare and activate SFF host */
- rc = ata_pci_prepare_sff_host(pdev, ppi, &host);
- if (rc)
-- goto err_out;
-+ goto out;
-
- pci_set_master(pdev);
-+ rc = ata_pci_activate_sff_host(host, pi->port_ops->irq_handler,
-+ pi->sht);
-+ out:
-+ if (rc == 0)
-+ devres_remove_group(&pdev->dev, NULL);
-+ else
-+ devres_release_group(&pdev->dev, NULL);
-
-- /* start host and request IRQ */
-- rc = ata_host_start(host);
-- if (rc)
-- goto err_out;
--
-- if (!legacy_mode && pdev->irq) {
-- /* We may have no IRQ assigned in which case we can poll. This
-- shouldn't happen on a sane system but robustness is cheap
-- in this case */
-- rc = devm_request_irq(dev, pdev->irq, pi->port_ops->irq_handler,
-- IRQF_SHARED, DRV_NAME, host);
-- if (rc)
-- goto err_out;
--
-- ata_port_desc(host->ports[0], "irq %d", pdev->irq);
-- ata_port_desc(host->ports[1], "irq %d", pdev->irq);
-- } else if (legacy_mode) {
-- if (!ata_port_is_dummy(host->ports[0])) {
-- rc = devm_request_irq(dev, ATA_PRIMARY_IRQ(pdev),
-- pi->port_ops->irq_handler,
-- IRQF_SHARED, DRV_NAME, host);
-- if (rc)
-- goto err_out;
--
-- ata_port_desc(host->ports[0], "irq %d",
-- ATA_PRIMARY_IRQ(pdev));
-- }
--
-- if (!ata_port_is_dummy(host->ports[1])) {
-- rc = devm_request_irq(dev, ATA_SECONDARY_IRQ(pdev),
-- pi->port_ops->irq_handler,
-- IRQF_SHARED, DRV_NAME, host);
-- if (rc)
-- goto err_out;
--
-- ata_port_desc(host->ports[1], "irq %d",
-- ATA_SECONDARY_IRQ(pdev));
-- }
-- }
--
-- /* register */
-- rc = ata_host_register(host, pi->sht);
-- if (rc)
-- goto err_out;
--
-- devres_remove_group(dev, NULL);
-- return 0;
--
--err_out:
-- devres_release_group(dev, NULL);
- return rc;
- }
-
-diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
-index bbe59c2..409ffb9 100644
---- a/drivers/ata/libata.h
-+++ b/drivers/ata/libata.h
-@@ -60,6 +60,7 @@ extern int atapi_dmadir;
- extern int atapi_passthru16;
- extern int libata_fua;
- extern int libata_noacpi;
-+extern int libata_allow_tpm;
- extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev);
- extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
- u64 block, u32 n_block, unsigned int tf_flags,
-@@ -85,7 +86,6 @@ extern int ata_dev_configure(struct ata_device *dev);
- extern int sata_down_spd_limit(struct ata_link *link);
- extern int sata_set_spd_needed(struct ata_link *link);
- extern int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel);
--extern int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
- extern void ata_sg_clean(struct ata_queued_cmd *qc);
- extern void ata_qc_free(struct ata_queued_cmd *qc);
- extern void ata_qc_issue(struct ata_queued_cmd *qc);
-@@ -113,6 +113,7 @@ extern int ata_acpi_on_suspend(struct ata_port *ap);
- extern void ata_acpi_on_resume(struct ata_port *ap);
- extern int ata_acpi_on_devcfg(struct ata_device *dev);
- extern void ata_acpi_on_disable(struct ata_device *dev);
-+extern void ata_acpi_set_state(struct ata_port *ap, pm_message_t state);
- #else
- static inline void ata_acpi_associate_sata_port(struct ata_port *ap) { }
- static inline void ata_acpi_associate(struct ata_host *host) { }
-@@ -121,6 +122,8 @@ static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; }
- static inline void ata_acpi_on_resume(struct ata_port *ap) { }
- static inline int ata_acpi_on_devcfg(struct ata_device *dev) { return 0; }
- static inline void ata_acpi_on_disable(struct ata_device *dev) { }
-+static inline void ata_acpi_set_state(struct ata_port *ap,
-+ pm_message_t state) { }
- #endif
-
- /* libata-scsi.c */
-@@ -183,6 +186,7 @@ extern void ata_eh_report(struct ata_port *ap);
- extern int ata_eh_reset(struct ata_link *link, int classify,
- ata_prereset_fn_t prereset, ata_reset_fn_t softreset,
- ata_reset_fn_t hardreset, ata_postreset_fn_t postreset);
-+extern int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
- extern int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset,
- ata_reset_fn_t softreset, ata_reset_fn_t hardreset,
- ata_postreset_fn_t postreset,
-diff --git a/drivers/ata/pata_acpi.c b/drivers/ata/pata_acpi.c
-index e4542ab..244098a 100644
---- a/drivers/ata/pata_acpi.c
-+++ b/drivers/ata/pata_acpi.c
-@@ -81,17 +81,6 @@ static void pacpi_error_handler(struct ata_port *ap)
- NULL, ata_std_postreset);
- }
-
--/* Welcome to ACPI, bring a bucket */
--static const unsigned int pio_cycle[7] = {
-- 600, 383, 240, 180, 120, 100, 80
--};
--static const unsigned int mwdma_cycle[5] = {
-- 480, 150, 120, 100, 80
--};
--static const unsigned int udma_cycle[7] = {
-- 120, 80, 60, 45, 30, 20, 15
--};
--
- /**
- * pacpi_discover_modes - filter non ACPI modes
- * @adev: ATA device
-@@ -103,56 +92,20 @@ static const unsigned int udma_cycle[7] = {
-
- static unsigned long pacpi_discover_modes(struct ata_port *ap, struct ata_device *adev)
- {
-- int unit = adev->devno;
- struct pata_acpi *acpi = ap->private_data;
-- int i;
-- u32 t;
-- unsigned long mask = (0x7f << ATA_SHIFT_UDMA) | (0x7 << ATA_SHIFT_MWDMA) | (0x1F << ATA_SHIFT_PIO);
--
- struct ata_acpi_gtm probe;
-+ unsigned int xfer_mask;
-
- probe = acpi->gtm;
-
-- /* We always use the 0 slot for crap hardware */
-- if (!(probe.flags & 0x10))
-- unit = 0;
--
- ata_acpi_gtm(ap, &probe);
-
-- /* Start by scanning for PIO modes */
-- for (i = 0; i < 7; i++) {
-- t = probe.drive[unit].pio;
-- if (t <= pio_cycle[i]) {
-- mask |= (2 << (ATA_SHIFT_PIO + i)) - 1;
-- break;
-- }
-- }
-+ xfer_mask = ata_acpi_gtm_xfermask(adev, &probe);
-
-- /* See if we have MWDMA or UDMA data. We don't bother with MWDMA
-- if UDMA is availabe as this means the BIOS set UDMA and our
-- error changedown if it works is UDMA to PIO anyway */
-- if (probe.flags & (1 << (2 * unit))) {
-- /* MWDMA */
-- for (i = 0; i < 5; i++) {
-- t = probe.drive[unit].dma;
-- if (t <= mwdma_cycle[i]) {
-- mask |= (2 << (ATA_SHIFT_MWDMA + i)) - 1;
-- break;
-- }
-- }
-- } else {
-- /* UDMA */
-- for (i = 0; i < 7; i++) {
-- t = probe.drive[unit].dma;
-- if (t <= udma_cycle[i]) {
-- mask |= (2 << (ATA_SHIFT_UDMA + i)) - 1;
-- break;
-- }
-- }
-- }
-- if (mask & (0xF8 << ATA_SHIFT_UDMA))
-+ if (xfer_mask & (0xF8 << ATA_SHIFT_UDMA))
- ap->cbl = ATA_CBL_PATA80;
-- return mask;
++ /*
++ * After dropping the lock and possibly sleeping here, our request
++ * may now be mergeable after it had proven unmergeable (above).
++ * We don't worry about that case for efficiency. It won't happen
++ * often, and the elevators are able to handle it.
++ */
++ init_request_from_bio(req, bio);
+
-+ return xfer_mask;
- }
-
- /**
-@@ -180,12 +133,14 @@ static void pacpi_set_piomode(struct ata_port *ap, struct ata_device *adev)
- {
- int unit = adev->devno;
- struct pata_acpi *acpi = ap->private_data;
-+ const struct ata_timing *t;
-
- if (!(acpi->gtm.flags & 0x10))
- unit = 0;
-
- /* Now stuff the nS values into the structure */
-- acpi->gtm.drive[unit].pio = pio_cycle[adev->pio_mode - XFER_PIO_0];
-+ t = ata_timing_find_mode(adev->pio_mode);
-+ acpi->gtm.drive[unit].pio = t->cycle;
- ata_acpi_stm(ap, &acpi->gtm);
- /* See what mode we actually got */
- ata_acpi_gtm(ap, &acpi->gtm);
-@@ -201,16 +156,18 @@ static void pacpi_set_dmamode(struct ata_port *ap, struct ata_device *adev)
- {
- int unit = adev->devno;
- struct pata_acpi *acpi = ap->private_data;
-+ const struct ata_timing *t;
-
- if (!(acpi->gtm.flags & 0x10))
- unit = 0;
-
- /* Now stuff the nS values into the structure */
-+ t = ata_timing_find_mode(adev->dma_mode);
- if (adev->dma_mode >= XFER_UDMA_0) {
-- acpi->gtm.drive[unit].dma = udma_cycle[adev->dma_mode - XFER_UDMA_0];
-+ acpi->gtm.drive[unit].dma = t->udma;
- acpi->gtm.flags |= (1 << (2 * unit));
- } else {
-- acpi->gtm.drive[unit].dma = mwdma_cycle[adev->dma_mode - XFER_MW_DMA_0];
-+ acpi->gtm.drive[unit].dma = t->cycle;
- acpi->gtm.flags &= ~(1 << (2 * unit));
- }
- ata_acpi_stm(ap, &acpi->gtm);
-diff --git a/drivers/ata/pata_ali.c b/drivers/ata/pata_ali.c
-index 8caf9af..7e68edf 100644
---- a/drivers/ata/pata_ali.c
-+++ b/drivers/ata/pata_ali.c
-@@ -64,7 +64,7 @@ static int ali_cable_override(struct pci_dev *pdev)
- if (pdev->subsystem_vendor == 0x10CF && pdev->subsystem_device == 0x10AF)
- return 1;
- /* Mitac 8317 (Winbook-A) and relatives */
-- if (pdev->subsystem_vendor == 0x1071 && pdev->subsystem_device == 0x8317)
-+ if (pdev->subsystem_vendor == 0x1071 && pdev->subsystem_device == 0x8317)
- return 1;
- /* Systems by DMI */
- if (dmi_check_system(cable_dmi_table))
-diff --git a/drivers/ata/pata_amd.c b/drivers/ata/pata_amd.c
-index 3cc27b5..761a666 100644
---- a/drivers/ata/pata_amd.c
-+++ b/drivers/ata/pata_amd.c
-@@ -220,6 +220,62 @@ static void amd133_set_dmamode(struct ata_port *ap, struct ata_device *adev)
- timing_setup(ap, adev, 0x40, adev->dma_mode, 4);
- }
-
-+/* Both host-side and drive-side detection results are worthless on NV
-+ * PATAs. Ignore them and just follow what BIOS configured. Both the
-+ * current configuration in PCI config reg and ACPI GTM result are
-+ * cached during driver attach and are consulted to select transfer
-+ * mode.
++ spin_lock_irq(q->queue_lock);
++ if (elv_queue_empty(q))
++ blk_plug_device(q);
++ add_request(q, req);
++out:
++ if (sync)
++ __generic_unplug_device(q);
++
++ spin_unlock_irq(q->queue_lock);
++ return 0;
++
++end_io:
++ bio_endio(bio, err);
++ return 0;
++}
++
++/*
++ * If bio->bi_dev is a partition, remap the location
+ */
-+static unsigned long nv_mode_filter(struct ata_device *dev,
-+ unsigned long xfer_mask)
++static inline void blk_partition_remap(struct bio *bio)
+{
-+ static const unsigned int udma_mask_map[] =
-+ { ATA_UDMA2, ATA_UDMA1, ATA_UDMA0, 0,
-+ ATA_UDMA3, ATA_UDMA4, ATA_UDMA5, ATA_UDMA6 };
-+ struct ata_port *ap = dev->link->ap;
-+ char acpi_str[32] = "";
-+ u32 saved_udma, udma;
-+ const struct ata_acpi_gtm *gtm;
-+ unsigned long bios_limit = 0, acpi_limit = 0, limit;
-+
-+ /* find out what BIOS configured */
-+ udma = saved_udma = (unsigned long)ap->host->private_data;
++ struct block_device *bdev = bio->bi_bdev;
+
-+ if (ap->port_no == 0)
-+ udma >>= 16;
-+ if (dev->devno == 0)
-+ udma >>= 8;
++ if (bio_sectors(bio) && bdev != bdev->bd_contains) {
++ struct hd_struct *p = bdev->bd_part;
++ const int rw = bio_data_dir(bio);
+
-+ if ((udma & 0xc0) == 0xc0)
-+ bios_limit = ata_pack_xfermask(0, 0, udma_mask_map[udma & 0x7]);
++ p->sectors[rw] += bio_sectors(bio);
++ p->ios[rw]++;
+
-+ /* consult ACPI GTM too */
-+ gtm = ata_acpi_init_gtm(ap);
-+ if (gtm) {
-+ acpi_limit = ata_acpi_gtm_xfermask(dev, gtm);
++ bio->bi_sector += p->start_sect;
++ bio->bi_bdev = bdev->bd_contains;
+
-+ snprintf(acpi_str, sizeof(acpi_str), " (%u:%u:0x%x)",
-+ gtm->drive[0].dma, gtm->drive[1].dma, gtm->flags);
++ blk_add_trace_remap(bdev_get_queue(bio->bi_bdev), bio,
++ bdev->bd_dev, bio->bi_sector,
++ bio->bi_sector - p->start_sect);
+ }
++}
+
-+ /* be optimistic, EH can take care of things if something goes wrong */
-+ limit = bios_limit | acpi_limit;
++static void handle_bad_sector(struct bio *bio)
++{
++ char b[BDEVNAME_SIZE];
+
-+ /* If PIO or DMA isn't configured at all, don't limit. Let EH
-+ * handle it.
-+ */
-+ if (!(limit & ATA_MASK_PIO))
-+ limit |= ATA_MASK_PIO;
-+ if (!(limit & (ATA_MASK_MWDMA | ATA_MASK_UDMA)))
-+ limit |= ATA_MASK_MWDMA | ATA_MASK_UDMA;
++ printk(KERN_INFO "attempt to access beyond end of device\n");
++ printk(KERN_INFO "%s: rw=%ld, want=%Lu, limit=%Lu\n",
++ bdevname(bio->bi_bdev, b),
++ bio->bi_rw,
++ (unsigned long long)bio->bi_sector + bio_sectors(bio),
++ (long long)(bio->bi_bdev->bd_inode->i_size >> 9));
+
-+ ata_port_printk(ap, KERN_DEBUG, "nv_mode_filter: 0x%lx&0x%lx->0x%lx, "
-+ "BIOS=0x%lx (0x%x) ACPI=0x%lx%s\n",
-+ xfer_mask, limit, xfer_mask & limit, bios_limit,
-+ saved_udma, acpi_limit, acpi_str);
++ set_bit(BIO_EOF, &bio->bi_flags);
++}
+
-+ return xfer_mask & limit;
++#ifdef CONFIG_FAIL_MAKE_REQUEST
++
++static DECLARE_FAULT_ATTR(fail_make_request);
++
++static int __init setup_fail_make_request(char *str)
++{
++ return setup_fault_attr(&fail_make_request, str);
+}
-
- /**
- * nv_probe_init - cable detection
-@@ -252,31 +308,6 @@ static void nv_error_handler(struct ata_port *ap)
- ata_std_postreset);
- }
-
--static int nv_cable_detect(struct ata_port *ap)
--{
-- static const u8 bitmask[2] = {0x03, 0x0C};
-- struct pci_dev *pdev = to_pci_dev(ap->host->dev);
-- u8 ata66;
-- u16 udma;
-- int cbl;
--
-- pci_read_config_byte(pdev, 0x52, &ata66);
-- if (ata66 & bitmask[ap->port_no])
-- cbl = ATA_CBL_PATA80;
-- else
-- cbl = ATA_CBL_PATA40;
--
-- /* We now have to double check because the Nvidia boxes BIOS
-- doesn't always set the cable bits but does set mode bits */
-- pci_read_config_word(pdev, 0x62 - 2 * ap->port_no, &udma);
-- if ((udma & 0xC4) == 0xC4 || (udma & 0xC400) == 0xC400)
-- cbl = ATA_CBL_PATA80;
-- /* And a triple check across suspend/resume with ACPI around */
-- if (ata_acpi_cbl_80wire(ap))
-- cbl = ATA_CBL_PATA80;
-- return cbl;
--}
--
- /**
- * nv100_set_piomode - set initial PIO mode data
- * @ap: ATA interface
-@@ -314,6 +345,14 @@ static void nv133_set_dmamode(struct ata_port *ap, struct ata_device *adev)
- timing_setup(ap, adev, 0x50, adev->dma_mode, 4);
- }
-
-+static void nv_host_stop(struct ata_host *host)
++__setup("fail_make_request=", setup_fail_make_request);
++
++static int should_fail_request(struct bio *bio)
+{
-+ u32 udma = (unsigned long)host->private_data;
++ if ((bio->bi_bdev->bd_disk->flags & GENHD_FL_FAIL) ||
++ (bio->bi_bdev->bd_part && bio->bi_bdev->bd_part->make_it_fail))
++ return should_fail(&fail_make_request, bio->bi_size);
+
-+ /* restore PCI config register 0x60 */
-+ pci_write_config_dword(to_pci_dev(host->dev), 0x60, udma);
++ return 0;
+}
+
- static struct scsi_host_template amd_sht = {
- .module = THIS_MODULE,
- .name = DRV_NAME,
-@@ -478,7 +517,8 @@ static struct ata_port_operations nv100_port_ops = {
- .thaw = ata_bmdma_thaw,
- .error_handler = nv_error_handler,
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
-- .cable_detect = nv_cable_detect,
-+ .cable_detect = ata_cable_ignore,
-+ .mode_filter = nv_mode_filter,
-
- .bmdma_setup = ata_bmdma_setup,
- .bmdma_start = ata_bmdma_start,
-@@ -495,6 +535,7 @@ static struct ata_port_operations nv100_port_ops = {
- .irq_on = ata_irq_on,
-
- .port_start = ata_sff_port_start,
-+ .host_stop = nv_host_stop,
- };
-
- static struct ata_port_operations nv133_port_ops = {
-@@ -511,7 +552,8 @@ static struct ata_port_operations nv133_port_ops = {
- .thaw = ata_bmdma_thaw,
- .error_handler = nv_error_handler,
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
-- .cable_detect = nv_cable_detect,
-+ .cable_detect = ata_cable_ignore,
-+ .mode_filter = nv_mode_filter,
-
- .bmdma_setup = ata_bmdma_setup,
- .bmdma_start = ata_bmdma_start,
-@@ -528,6 +570,7 @@ static struct ata_port_operations nv133_port_ops = {
- .irq_on = ata_irq_on,
-
- .port_start = ata_sff_port_start,
-+ .host_stop = nv_host_stop,
- };
-
- static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
-@@ -614,7 +657,8 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
- .port_ops = &amd100_port_ops
- }
- };
-- const struct ata_port_info *ppi[] = { NULL, NULL };
-+ struct ata_port_info pi;
-+ const struct ata_port_info *ppi[] = { &pi, NULL };
- static int printed_version;
- int type = id->driver_data;
- u8 fifo;
-@@ -628,6 +672,19 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
- if (type == 1 && pdev->revision > 0x7)
- type = 2;
-
-+ /* Serenade ? */
-+ if (type == 5 && pdev->subsystem_vendor == PCI_VENDOR_ID_AMD &&
-+ pdev->subsystem_device == PCI_DEVICE_ID_AMD_SERENADE)
-+ type = 6; /* UDMA 100 only */
++static int __init fail_make_request_debugfs(void)
++{
++ return init_fault_attr_dentries(&fail_make_request,
++ "fail_make_request");
++}
++
++late_initcall(fail_make_request_debugfs);
++
++#else /* CONFIG_FAIL_MAKE_REQUEST */
++
++static inline int should_fail_request(struct bio *bio)
++{
++ return 0;
++}
++
++#endif /* CONFIG_FAIL_MAKE_REQUEST */
++
++/*
++ * Check whether this bio extends beyond the end of the device.
++ */
++static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors)
++{
++ sector_t maxsector;
++
++ if (!nr_sectors)
++ return 0;
++
++ /* Test device or partition size, when known. */
++ maxsector = bio->bi_bdev->bd_inode->i_size >> 9;
++ if (maxsector) {
++ sector_t sector = bio->bi_sector;
++
++ if (maxsector < nr_sectors || maxsector - nr_sectors < sector) {
++ /*
++ * This may well happen - the kernel calls bread()
++ * without checking the size of the device, e.g., when
++ * mounting a device.
++ */
++ handle_bad_sector(bio);
++ return 1;
++ }
++ }
++
++ return 0;
++}
++
++/**
++ * generic_make_request: hand a buffer to its device driver for I/O
++ * @bio: The bio describing the location in memory and on the device.
++ *
++ * generic_make_request() is used to make I/O requests of block
++ * devices. It is passed a &struct bio, which describes the I/O that needs
++ * to be done.
++ *
++ * generic_make_request() does not return any status. The
++ * success/failure status of the request, along with notification of
++ * completion, is delivered asynchronously through the bio->bi_end_io
++ * function described (one day) elsewhere.
++ *
++ * The caller of generic_make_request must make sure that bi_io_vec
++ * are set to describe the memory buffer, and that bi_dev and bi_sector are
++ * set to describe the device address, and the
++ * bi_end_io and optionally bi_private are set to describe how
++ * completion notification should be signaled.
++ *
++ * generic_make_request and the drivers it calls may use bi_next if this
++ * bio happens to be merged with someone else, and may change bi_dev and
++ * bi_sector for remaps as it sees fit. So the values of these fields
++ * should NOT be depended on after the call to generic_make_request.
++ */
++static inline void __generic_make_request(struct bio *bio)
++{
++ struct request_queue *q;
++ sector_t old_sector;
++ int ret, nr_sectors = bio_sectors(bio);
++ dev_t old_dev;
++ int err = -EIO;
++
++ might_sleep();
++
++ if (bio_check_eod(bio, nr_sectors))
++ goto end_io;
+
+ /*
-+ * Okay, type is determined now. Apply type-specific workarounds.
++ * Resolve the mapping until finished. (drivers are
++ * still free to implement/resolve their own stacking
++ * by explicitly returning 0)
++ *
++ * NOTE: we don't repeat the blk_size check for each new device.
++ * Stacking drivers are expected to know what they are doing.
+ */
-+ pi = info[type];
++ old_sector = -1;
++ old_dev = 0;
++ do {
++ char b[BDEVNAME_SIZE];
+
-+ if (type < 3)
-+ ata_pci_clear_simplex(pdev);
++ q = bdev_get_queue(bio->bi_bdev);
++ if (!q) {
++ printk(KERN_ERR
++ "generic_make_request: Trying to access "
++ "nonexistent block-device %s (%Lu)\n",
++ bdevname(bio->bi_bdev, b),
++ (long long) bio->bi_sector);
++end_io:
++ bio_endio(bio, err);
++ break;
++ }
+
- /* Check for AMD7411 */
- if (type == 3)
- /* FIFO is broken */
-@@ -635,16 +692,17 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
- else
- pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
-
-- /* Serenade ? */
-- if (type == 5 && pdev->subsystem_vendor == PCI_VENDOR_ID_AMD &&
-- pdev->subsystem_device == PCI_DEVICE_ID_AMD_SERENADE)
-- type = 6; /* UDMA 100 only */
-+ /* Cable detection on Nvidia chips doesn't work too well,
-+ * cache BIOS programmed UDMA mode.
-+ */
-+ if (type == 7 || type == 8) {
-+ u32 udma;
-
-- if (type < 3)
-- ata_pci_clear_simplex(pdev);
-+ pci_read_config_dword(pdev, 0x60, &udma);
-+ pi.private_data = (void *)(unsigned long)udma;
-+ }
-
- /* And fire it up */
-- ppi[0] = &info[type];
- return ata_pci_init_one(pdev, ppi);
- }
-
-diff --git a/drivers/ata/pata_bf54x.c b/drivers/ata/pata_bf54x.c
-index 7842cc4..a32e3c4 100644
---- a/drivers/ata/pata_bf54x.c
-+++ b/drivers/ata/pata_bf54x.c
-@@ -832,6 +832,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
- {
- unsigned short config = WDSIZE_16;
- struct scatterlist *sg;
-+ unsigned int si;
-
- pr_debug("in atapi dma setup\n");
- /* Program the ATA_CTRL register with dir */
-@@ -839,7 +840,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
- /* fill the ATAPI DMA controller */
- set_dma_config(CH_ATAPI_TX, config);
- set_dma_x_modify(CH_ATAPI_TX, 2);
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- set_dma_start_addr(CH_ATAPI_TX, sg_dma_address(sg));
- set_dma_x_count(CH_ATAPI_TX, sg_dma_len(sg) >> 1);
- }
-@@ -848,7 +849,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
- /* fill the ATAPI DMA controller */
- set_dma_config(CH_ATAPI_RX, config);
- set_dma_x_modify(CH_ATAPI_RX, 2);
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- set_dma_start_addr(CH_ATAPI_RX, sg_dma_address(sg));
- set_dma_x_count(CH_ATAPI_RX, sg_dma_len(sg) >> 1);
- }
-@@ -867,6 +868,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
- struct ata_port *ap = qc->ap;
- void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
- struct scatterlist *sg;
-+ unsigned int si;
-
- pr_debug("in atapi dma start\n");
- if (!(ap->udma_mask || ap->mwdma_mask))
-@@ -881,7 +883,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
- * data cache is enabled. Otherwise, this loop
- * is an empty loop and optimized out.
- */
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- flush_dcache_range(sg_dma_address(sg),
- sg_dma_address(sg) + sg_dma_len(sg));
- }
-@@ -910,7 +912,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
- ATAPI_SET_CONTROL(base, ATAPI_GET_CONTROL(base) | TFRCNT_RST);
-
- /* Set transfer length to buffer len */
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- ATAPI_SET_XFER_LEN(base, (sg_dma_len(sg) >> 1));
- }
-
-@@ -932,6 +934,7 @@ static void bfin_bmdma_stop(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct scatterlist *sg;
-+ unsigned int si;
-
- pr_debug("in atapi dma stop\n");
- if (!(ap->udma_mask || ap->mwdma_mask))
-@@ -950,7 +953,7 @@ static void bfin_bmdma_stop(struct ata_queued_cmd *qc)
- * data cache is enabled. Otherwise, this loop
- * is an empty loop and optimized out.
- */
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- invalidate_dcache_range(
- sg_dma_address(sg),
- sg_dma_address(sg)
-@@ -1167,34 +1170,36 @@ static unsigned char bfin_bmdma_status(struct ata_port *ap)
- * Note: Original code is ata_data_xfer().
- */
-
--static void bfin_data_xfer(struct ata_device *adev, unsigned char *buf,
-- unsigned int buflen, int write_data)
-+static unsigned int bfin_data_xfer(struct ata_device *dev, unsigned char *buf,
-+ unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-- unsigned int words = buflen >> 1;
-- unsigned short *buf16 = (u16 *) buf;
-+ struct ata_port *ap = dev->link->ap;
- void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
-+ unsigned int words = buflen >> 1;
-+ unsigned short *buf16 = (u16 *)buf;
-
- /* Transfer multiple of 2 bytes */
-- if (write_data) {
-- write_atapi_data(base, words, buf16);
-- } else {
-+ if (rw == READ)
- read_atapi_data(base, words, buf16);
-- }
-+ else
-+ write_atapi_data(base, words, buf16);
-
- /* Transfer trailing 1 byte, if any. */
- if (unlikely(buflen & 0x01)) {
- unsigned short align_buf[1] = { 0 };
- unsigned char *trailing_buf = buf + buflen - 1;
-
-- if (write_data) {
-- memcpy(align_buf, trailing_buf, 1);
-- write_atapi_data(base, 1, align_buf);
-- } else {
-+ if (rw == READ) {
- read_atapi_data(base, 1, align_buf);
- memcpy(trailing_buf, align_buf, 1);
-+ } else {
-+ memcpy(align_buf, trailing_buf, 1);
-+ write_atapi_data(base, 1, align_buf);
- }
-+ words++;
- }
++ if (unlikely(nr_sectors > q->max_hw_sectors)) {
++ printk("bio too big device %s (%u > %u)\n",
++ bdevname(bio->bi_bdev, b),
++ bio_sectors(bio),
++ q->max_hw_sectors);
++ goto end_io;
++ }
+
-+ return words << 1;
- }
-
- /**
-diff --git a/drivers/ata/pata_cs5520.c b/drivers/ata/pata_cs5520.c
-index 33f7f08..d4590f5 100644
---- a/drivers/ata/pata_cs5520.c
-+++ b/drivers/ata/pata_cs5520.c
-@@ -198,7 +198,7 @@ static int __devinit cs5520_init_one(struct pci_dev *pdev, const struct pci_devi
- };
- const struct ata_port_info *ppi[2];
- u8 pcicfg;
-- void *iomap[5];
-+ void __iomem *iomap[5];
- struct ata_host *host;
- struct ata_ioports *ioaddr;
- int i, rc;
-diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
-index c79f066..68eb349 100644
---- a/drivers/ata/pata_hpt37x.c
-+++ b/drivers/ata/pata_hpt37x.c
-@@ -847,15 +847,16 @@ static u32 hpt374_read_freq(struct pci_dev *pdev)
- u32 freq;
- unsigned long io_base = pci_resource_start(pdev, 4);
- if (PCI_FUNC(pdev->devfn) & 1) {
-- struct pci_dev *pdev_0 = pci_get_slot(pdev->bus, pdev->devfn - 1);
-+ struct pci_dev *pdev_0;
++ if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
++ goto end_io;
+
-+ pdev_0 = pci_get_slot(pdev->bus, pdev->devfn - 1);
- /* Someone hot plugged the controller on us ? */
- if (pdev_0 == NULL)
- return 0;
- io_base = pci_resource_start(pdev_0, 4);
- freq = inl(io_base + 0x90);
- pci_dev_put(pdev_0);
-- }
-- else
-+ } else
- freq = inl(io_base + 0x90);
- return freq;
- }
-diff --git a/drivers/ata/pata_icside.c b/drivers/ata/pata_icside.c
-index 842fe08..5b8586d 100644
---- a/drivers/ata/pata_icside.c
-+++ b/drivers/ata/pata_icside.c
-@@ -224,6 +224,7 @@ static void pata_icside_bmdma_setup(struct ata_queued_cmd *qc)
- struct pata_icside_state *state = ap->host->private_data;
- struct scatterlist *sg, *rsg = state->sg;
- unsigned int write = qc->tf.flags & ATA_TFLAG_WRITE;
-+ unsigned int si;
-
- /*
- * We are simplex; BUG if we try to fiddle with DMA
-@@ -234,7 +235,7 @@ static void pata_icside_bmdma_setup(struct ata_queued_cmd *qc)
- /*
- * Copy ATAs scattered sg list into a contiguous array of sg
- */
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- memcpy(rsg, sg, sizeof(*sg));
- rsg++;
- }
-diff --git a/drivers/ata/pata_it821x.c b/drivers/ata/pata_it821x.c
-index ca9aae0..109ddd4 100644
---- a/drivers/ata/pata_it821x.c
-+++ b/drivers/ata/pata_it821x.c
-@@ -430,7 +430,7 @@ static unsigned int it821x_smart_qc_issue_prot(struct ata_queued_cmd *qc)
- return ata_qc_issue_prot(qc);
- }
- printk(KERN_DEBUG "it821x: can't process command 0x%02X\n", qc->tf.command);
-- return AC_ERR_INVALID;
-+ return AC_ERR_DEV;
- }
-
- /**
-@@ -516,6 +516,37 @@ static void it821x_dev_config(struct ata_device *adev)
- printk("(%dK stripe)", adev->id[146]);
- printk(".\n");
- }
-+ /* This is a controller firmware triggered funny, don't
-+ report the drive faulty! */
-+ adev->horkage &= ~ATA_HORKAGE_DIAGNOSTIC;
-+}
++ if (should_fail_request(bio))
++ goto end_io;
+
-+/**
-+ * it821x_ident_hack - Hack identify data up
-+ * @ap: Port
-+ *
-+ * Walk the devices on this firmware driven port and slightly
-+ * mash the identify data to stop us and common tools from trying
-+ * to use features the firmware does not support. The firmware itself
-+ * does some masking (eg SMART) but not enough.
-+ *
-+ * This is a bit of an abuse of the cable method, but it is the
-+ * only method called at the right time. We could modify the libata
-+ * core specifically for ident hacking but while we have one offender
-+ * it seems better to keep the fallout localised.
-+ */
++ /*
++ * If this device has partitions, remap block n
++ * of partition p to block n+start(p) of the disk.
++ */
++ blk_partition_remap(bio);
+
-+static int it821x_ident_hack(struct ata_port *ap)
-+{
-+ struct ata_device *adev;
-+ ata_link_for_each_dev(adev, &ap->link) {
-+ if (ata_dev_enabled(adev)) {
-+ adev->id[84] &= ~(1 << 6); /* No FUA */
-+ adev->id[85] &= ~(1 << 10); /* No HPA */
-+ adev->id[76] = 0; /* No NCQ/AN etc */
-+ }
-+ }
-+ return ata_cable_unknown(ap);
- }
-
-
-@@ -634,7 +665,7 @@ static struct ata_port_operations it821x_smart_port_ops = {
- .thaw = ata_bmdma_thaw,
- .error_handler = ata_bmdma_error_handler,
- .post_internal_cmd = ata_bmdma_post_internal_cmd,
-- .cable_detect = ata_cable_unknown,
-+ .cable_detect = it821x_ident_hack,
-
- .bmdma_setup = ata_bmdma_setup,
- .bmdma_start = ata_bmdma_start,
-diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
-index 120b5bf..030878f 100644
---- a/drivers/ata/pata_ixp4xx_cf.c
-+++ b/drivers/ata/pata_ixp4xx_cf.c
-@@ -42,13 +42,13 @@ static int ixp4xx_set_mode(struct ata_link *link, struct ata_device **error)
- return 0;
- }
-
--static void ixp4xx_mmio_data_xfer(struct ata_device *adev, unsigned char *buf,
-- unsigned int buflen, int write_data)
-+static unsigned int ixp4xx_mmio_data_xfer(struct ata_device *dev,
-+ unsigned char *buf, unsigned int buflen, int rw)
- {
- unsigned int i;
- unsigned int words = buflen >> 1;
- u16 *buf16 = (u16 *) buf;
-- struct ata_port *ap = adev->link->ap;
-+ struct ata_port *ap = dev->link->ap;
- void __iomem *mmio = ap->ioaddr.data_addr;
- struct ixp4xx_pata_data *data = ap->host->dev->platform_data;
-
-@@ -59,30 +59,32 @@ static void ixp4xx_mmio_data_xfer(struct ata_device *adev, unsigned char *buf,
- udelay(100);
-
- /* Transfer multiple of 2 bytes */
-- if (write_data) {
-- for (i = 0; i < words; i++)
-- writew(buf16[i], mmio);
-- } else {
-+ if (rw == READ)
- for (i = 0; i < words; i++)
- buf16[i] = readw(mmio);
-- }
-+ else
-+ for (i = 0; i < words; i++)
-+ writew(buf16[i], mmio);
-
- /* Transfer trailing 1 byte, if any. */
- if (unlikely(buflen & 0x01)) {
- u16 align_buf[1] = { 0 };
- unsigned char *trailing_buf = buf + buflen - 1;
-
-- if (write_data) {
-- memcpy(align_buf, trailing_buf, 1);
-- writew(align_buf[0], mmio);
-- } else {
-+ if (rw == READ) {
- align_buf[0] = readw(mmio);
- memcpy(trailing_buf, align_buf, 1);
-+ } else {
-+ memcpy(align_buf, trailing_buf, 1);
-+ writew(align_buf[0], mmio);
- }
-+ words++;
- }
-
- udelay(100);
- *data->cs0_cfg |= 0x01;
++ if (old_sector != -1)
++ blk_add_trace_remap(q, bio, old_dev, bio->bi_sector,
++ old_sector);
+
-+ return words << 1;
- }
-
- static struct scsi_host_template ixp4xx_sht = {
-diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
-index 17159b5..333dc15 100644
---- a/drivers/ata/pata_legacy.c
-+++ b/drivers/ata/pata_legacy.c
-@@ -28,7 +28,6 @@
- *
- * Unsupported but docs exist:
- * Appian/Adaptec AIC25VL01/Cirrus Logic PD7220
-- * Winbond W83759A
- *
- * This driver handles legacy (that is "ISA/VLB side") IDE ports found
- * on PC class systems. There are three hybrid devices that are exceptions
-@@ -36,7 +35,7 @@
- * the MPIIX where the tuning is PCI side but the IDE is "ISA side".
- *
- * Specific support is included for the ht6560a/ht6560b/opti82c611a/
-- * opti82c465mv/promise 20230c/20630
-+ * opti82c465mv/promise 20230c/20630/winbond83759A
- *
- * Use the autospeed and pio_mask options with:
- * Appian ADI/2 aka CLPD7220 or AIC25VL01.
-@@ -47,9 +46,6 @@
- * For now use autospeed and pio_mask as above with the W83759A. This may
- * change.
- *
-- * TODO
-- * Merge existing pata_qdi driver
-- *
- */
-
- #include <linux/kernel.h>
-@@ -64,12 +60,13 @@
- #include <linux/platform_device.h>
-
- #define DRV_NAME "pata_legacy"
--#define DRV_VERSION "0.5.5"
-+#define DRV_VERSION "0.6.5"
-
- #define NR_HOST 6
-
--static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };
--static int legacy_irq[NR_HOST] = { 14, 15, 11, 10, 8, 12 };
-+static int all;
-+module_param(all, int, 0444);
-+MODULE_PARM_DESC(all, "Grab all legacy port devices, even if PCI(0=off, 1=on)");
-
- struct legacy_data {
- unsigned long timing;
-@@ -80,21 +77,107 @@ struct legacy_data {
-
- };
-
-+enum controller {
-+ BIOS = 0,
-+ SNOOP = 1,
-+ PDC20230 = 2,
-+ HT6560A = 3,
-+ HT6560B = 4,
-+ OPTI611A = 5,
-+ OPTI46X = 6,
-+ QDI6500 = 7,
-+ QDI6580 = 8,
-+ QDI6580DP = 9, /* Dual channel mode is different */
-+ W83759A = 10,
++ blk_add_trace_bio(q, bio, BLK_TA_QUEUE);
+
-+ UNKNOWN = -1
-+};
++ old_sector = bio->bi_sector;
++ old_dev = bio->bi_bdev->bd_dev;
+
++ if (bio_check_eod(bio, nr_sectors))
++ goto end_io;
++ if (bio_empty_barrier(bio) && !q->prepare_flush_fn) {
++ err = -EOPNOTSUPP;
++ goto end_io;
++ }
+
-+struct legacy_probe {
-+ unsigned char *name;
-+ unsigned long port;
-+ unsigned int irq;
-+ unsigned int slot;
-+ enum controller type;
-+ unsigned long private;
-+};
++ ret = q->make_request_fn(q, bio);
++ } while (ret);
++}
+
-+struct legacy_controller {
-+ const char *name;
-+ struct ata_port_operations *ops;
-+ unsigned int pio_mask;
-+ unsigned int flags;
-+ int (*setup)(struct platform_device *, struct legacy_probe *probe,
-+ struct legacy_data *data);
-+};
++/*
++ * We only want one ->make_request_fn to be active at a time,
++ * else stack usage with stacked devices could be a problem.
++ * So use current->bio_{list,tail} to keep a list of requests
++ * submitted by a make_request_fn function.
++ * current->bio_tail is also used as a flag to say if
++ * generic_make_request is currently active in this task or not.
++ * If it is NULL, then no make_request is active. If it is non-NULL,
++ * then a make_request is active, and new requests should be added
++ * at the tail
++ */
++void generic_make_request(struct bio *bio)
++{
++ if (current->bio_tail) {
++ /* make_request is active */
++ *(current->bio_tail) = bio;
++ bio->bi_next = NULL;
++ current->bio_tail = &bio->bi_next;
++ return;
++ }
++ /* following loop may be a bit non-obvious, and so deserves some
++ * explanation.
++ * Before entering the loop, bio->bi_next is NULL (as all callers
++ * ensure that) so we have a list with a single bio.
++ * We pretend that we have just taken it off a longer list, so
++ * we assign bio_list to the next (which is NULL) and bio_tail
++ * to &bio_list, thus initialising the bio_list of new bios to be
++ * added. __generic_make_request may indeed add some more bios
++ * through a recursive call to generic_make_request. If it
++ * did, we find a non-NULL value in bio_list and re-enter the loop
++ * from the top. In this case we really did just take the bio
++ * off the top of the list (no pretending) and so fixup bio_list and
++ * bio_tail or bi_next, and call into __generic_make_request again.
++ *
++ * The loop was structured like this to make only one call to
++ * __generic_make_request (which is important as it is large and
++ * inlined) and to keep the structure simple.
++ */
++ BUG_ON(bio->bi_next);
++ do {
++ current->bio_list = bio->bi_next;
++ if (bio->bi_next == NULL)
++ current->bio_tail = &current->bio_list;
++ else
++ bio->bi_next = NULL;
++ __generic_make_request(bio);
++ bio = current->bio_list;
++ } while (bio);
++ current->bio_tail = NULL; /* deactivate */
++}
+
-+static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };
++EXPORT_SYMBOL(generic_make_request);
+
-+static struct legacy_probe probe_list[NR_HOST];
- static struct legacy_data legacy_data[NR_HOST];
- static struct ata_host *legacy_host[NR_HOST];
- static int nr_legacy_host;
-
-
--static int probe_all; /* Set to check all ISA port ranges */
--static int ht6560a; /* HT 6560A on primary 1, secondary 2, both 3 */
--static int ht6560b; /* HT 6560A on primary 1, secondary 2, both 3 */
--static int opti82c611a; /* Opti82c611A on primary 1, secondary 2, both 3 */
--static int opti82c46x; /* Opti 82c465MV present (pri/sec autodetect) */
--static int autospeed; /* Chip present which snoops speed changes */
--static int pio_mask = 0x1F; /* PIO range for autospeed devices */
-+static int probe_all; /* Set to check all ISA port ranges */
-+static int ht6560a; /* HT 6560A on primary 1, second 2, both 3 */
-+static int ht6560b; /* HT 6560B on primary 1, second 2, both 3 */
-+static int opti82c611a; /* Opti82c611A on primary 1, sec 2, both 3 */
-+static int opti82c46x; /* Opti 82c465MV present(pri/sec autodetect) */
-+static int qdi; /* Set to probe QDI controllers */
-+static int winbond; /* Set to probe Winbond controllers,
-+ give I/O port if non standard */
-+static int autospeed; /* Chip present which snoops speed changes */
-+static int pio_mask = 0x1F; /* PIO range for autospeed devices */
- static int iordy_mask = 0xFFFFFFFF; /* Use iordy if available */
-
- /**
-+ * legacy_probe_add - Add interface to probe list
-+ * @port: Controller port
-+ * @irq: IRQ number
-+ * @type: Controller type
-+ * @private: Controller specific info
++/**
++ * submit_bio: submit a bio to the block device layer for I/O
++ * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
++ * @bio: The &struct bio which describes the I/O
+ *
-+ * Add an entry into the probe list for ATA controllers. This is used
-+ * to add the default ISA slots and then to build up the table
-+ * further according to other ISA/VLB/Weird device scans
++ * submit_bio() is very similar in purpose to generic_make_request(), and
++ * uses that function to do most of the work. Both are fairly rough
++ * interfaces, @bio must be presetup and ready for I/O.
+ *
-+ * An I/O port list is used to keep ordering stable and sane, as we
-+ * don't have any good way to talk about ordering otherwise
+ */
-+
-+static int legacy_probe_add(unsigned long port, unsigned int irq,
-+ enum controller type, unsigned long private)
++void submit_bio(int rw, struct bio *bio)
+{
-+ struct legacy_probe *lp = &probe_list[0];
-+ int i;
-+ struct legacy_probe *free = NULL;
++ int count = bio_sectors(bio);
+
-+ for (i = 0; i < NR_HOST; i++) {
-+ if (lp->port == 0 && free == NULL)
-+ free = lp;
-+ /* Matching port, or the correct slot for ordering */
-+ if (lp->port == port || legacy_port[i] == port) {
-+ free = lp;
-+ break;
++ bio->bi_rw |= rw;
++
++ /*
++ * If it's a regular read/write or a barrier with data attached,
++ * go through the normal accounting stuff before submission.
++ */
++ if (!bio_empty_barrier(bio)) {
++
++ BIO_BUG_ON(!bio->bi_size);
++ BIO_BUG_ON(!bio->bi_io_vec);
++
++ if (rw & WRITE) {
++ count_vm_events(PGPGOUT, count);
++ } else {
++ task_io_account_read(bio->bi_size);
++ count_vm_events(PGPGIN, count);
++ }
++
++ if (unlikely(block_dump)) {
++ char b[BDEVNAME_SIZE];
++ printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n",
++ current->comm, task_pid_nr(current),
++ (rw & WRITE) ? "WRITE" : "READ",
++ (unsigned long long)bio->bi_sector,
++ bdevname(bio->bi_bdev,b));
+ }
-+ lp++;
-+ }
-+ if (free == NULL) {
-+ printk(KERN_ERR "pata_legacy: Too many interfaces.\n");
-+ return -1;
+ }
-+ /* Fill in the entry for later probing */
-+ free->port = port;
-+ free->irq = irq;
-+ free->type = type;
-+ free->private = private;
-+ return 0;
++
++ generic_make_request(bio);
+}
+
++EXPORT_SYMBOL(submit_bio);
+
+/**
- * legacy_set_mode - mode setting
- * @link: IDE link
- * @unused: Device that failed when error is returned
-@@ -113,7 +196,8 @@ static int legacy_set_mode(struct ata_link *link, struct ata_device **unused)
-
- ata_link_for_each_dev(dev, link) {
- if (ata_dev_enabled(dev)) {
-- ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
-+ ata_dev_printk(dev, KERN_INFO,
-+ "configured for PIO\n");
- dev->pio_mode = XFER_PIO_0;
- dev->xfer_mode = XFER_PIO_0;
- dev->xfer_shift = ATA_SHIFT_PIO;
-@@ -171,7 +255,7 @@ static struct ata_port_operations simple_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- static struct ata_port_operations legacy_port_ops = {
-@@ -198,15 +282,16 @@ static struct ata_port_operations legacy_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- /*
- * Promise 20230C and 20620 support
- *
-- * This controller supports PIO0 to PIO2. We set PIO timings conservatively to
-- * allow for 50MHz Vesa Local Bus. The 20620 DMA support is weird being DMA to
-- * controller and PIO'd to the host and not supported.
-+ * This controller supports PIO0 to PIO2. We set PIO timings
-+ * conservatively to allow for 50MHz Vesa Local Bus. The 20620 DMA
-+ * support is weird being DMA to controller and PIO'd to the host
-+ * and not supported.
- */
-
- static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
-@@ -221,8 +306,7 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
- local_irq_save(flags);
-
- /* Unlock the control interface */
-- do
-- {
-+ do {
- inb(0x1F5);
- outb(inb(0x1F2) | 0x80, 0x1F2);
- inb(0x1F2);
-@@ -231,7 +315,7 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
- inb(0x1F2);
- inb(0x1F2);
- }
-- while((inb(0x1F2) & 0x80) && --tries);
-+ while ((inb(0x1F2) & 0x80) && --tries);
-
- local_irq_restore(flags);
-
-@@ -249,13 +333,14 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
-
- }
-
--static void pdc_data_xfer_vlb(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
-+static unsigned int pdc_data_xfer_vlb(struct ata_device *dev,
-+ unsigned char *buf, unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-- int slop = buflen & 3;
-- unsigned long flags;
-+ if (ata_id_has_dword_io(dev->id)) {
-+ struct ata_port *ap = dev->link->ap;
-+ int slop = buflen & 3;
-+ unsigned long flags;
-
-- if (ata_id_has_dword_io(adev->id)) {
- local_irq_save(flags);
-
- /* Perform the 32bit I/O synchronization sequence */
-@@ -264,26 +349,27 @@ static void pdc_data_xfer_vlb(struct ata_device *adev, unsigned char *buf, unsig
- ioread8(ap->ioaddr.nsect_addr);
-
- /* Now the data */
--
-- if (write_data)
-- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-- else
-+ if (rw == READ)
- ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-+ else
-+ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-
- if (unlikely(slop)) {
-- __le32 pad = 0;
-- if (write_data) {
-- memcpy(&pad, buf + buflen - slop, slop);
-- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
-- } else {
-+ u32 pad;
-+ if (rw == READ) {
- pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
- memcpy(buf + buflen - slop, &pad, slop);
-+ } else {
-+ memcpy(&pad, buf + buflen - slop, slop);
-+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
- }
-+ buflen += 4 - slop;
- }
- local_irq_restore(flags);
-- }
-- else
-- ata_data_xfer_noirq(adev, buf, buflen, write_data);
-+ } else
-+ buflen = ata_data_xfer_noirq(dev, buf, buflen, rw);
-+
-+ return buflen;
- }
-
- static struct ata_port_operations pdc20230_port_ops = {
-@@ -310,14 +396,14 @@ static struct ata_port_operations pdc20230_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- /*
- * Holtek 6560A support
- *
-- * This controller supports PIO0 to PIO2 (no IORDY even though higher timings
-- * can be loaded).
-+ * This controller supports PIO0 to PIO2 (no IORDY even though higher
-+ * timings can be loaded).
- */
-
- static void ht6560a_set_piomode(struct ata_port *ap, struct ata_device *adev)
-@@ -364,14 +450,14 @@ static struct ata_port_operations ht6560a_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- /*
- * Holtek 6560B support
- *
-- * This controller supports PIO0 to PIO4. We honour the BIOS/jumper FIFO setting
-- * unless we see an ATAPI device in which case we force it off.
-+ * This controller supports PIO0 to PIO4. We honour the BIOS/jumper FIFO
-+ * setting unless we see an ATAPI device in which case we force it off.
- *
- * FIXME: need to implement 2nd channel support.
- */
-@@ -398,7 +484,7 @@ static void ht6560b_set_piomode(struct ata_port *ap, struct ata_device *adev)
- if (adev->class != ATA_DEV_ATA) {
- u8 rconf = inb(0x3E6);
- if (rconf & 0x24) {
-- rconf &= ~ 0x24;
-+ rconf &= ~0x24;
- outb(rconf, 0x3E6);
- }
- }
-@@ -423,13 +509,13 @@ static struct ata_port_operations ht6560b_port_ops = {
- .qc_prep = ata_qc_prep,
- .qc_issue = ata_qc_issue_prot,
-
-- .data_xfer = ata_data_xfer, /* FIXME: Check 32bit and noirq */
-+ .data_xfer = ata_data_xfer, /* FIXME: Check 32bit and noirq */
-
- .irq_handler = ata_interrupt,
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- /*
-@@ -462,7 +548,8 @@ static u8 opti_syscfg(u8 reg)
- * This controller supports PIO0 to PIO3.
- */
-
--static void opti82c611a_set_piomode(struct ata_port *ap, struct ata_device *adev)
-+static void opti82c611a_set_piomode(struct ata_port *ap,
-+ struct ata_device *adev)
- {
- u8 active, recover, setup;
- struct ata_timing t;
-@@ -549,7 +636,7 @@ static struct ata_port_operations opti82c611a_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
- /*
-@@ -681,77 +768,398 @@ static struct ata_port_operations opti82c46x_port_ops = {
- .irq_clear = ata_bmdma_irq_clear,
- .irq_on = ata_irq_on,
-
-- .port_start = ata_port_start,
-+ .port_start = ata_sff_port_start,
- };
-
-+static void qdi6500_set_piomode(struct ata_port *ap, struct ata_device *adev)
++ * __end_that_request_first - end I/O on a request
++ * @req: the request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete
++ *
++ * Description:
++ * Ends I/O on a number of bytes attached to @req, and sets it up
++ * for the next range of segments (if any) in the cluster.
++ *
++ * Return:
++ * 0 - we are done with this request, call end_that_request_last()
++ * 1 - still buffers pending for this request
++ **/
++static int __end_that_request_first(struct request *req, int error,
++ int nr_bytes)
+{
-+ struct ata_timing t;
-+ struct legacy_data *qdi = ap->host->private_data;
-+ int active, recovery;
-+ u8 timing;
++ int total_bytes, bio_nbytes, next_idx = 0;
++ struct bio *bio;
+
-+ /* Get the timing data in cycles */
-+ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++ blk_add_trace_rq(req->q, req, BLK_TA_COMPLETE);
+
-+ if (qdi->fast) {
-+ active = 8 - FIT(t.active, 1, 8);
-+ recovery = 18 - FIT(t.recover, 3, 18);
-+ } else {
-+ active = 9 - FIT(t.active, 2, 9);
-+ recovery = 15 - FIT(t.recover, 0, 15);
-+ }
-+ timing = (recovery << 4) | active | 0x08;
++ /*
++ * for a REQ_BLOCK_PC request, we want to carry any eventual
++ * sense key with us all the way through
++ */
++ if (!blk_pc_request(req))
++ req->errors = 0;
+
-+ qdi->clock[adev->devno] = timing;
++ if (error) {
++ if (blk_fs_request(req) && !(req->cmd_flags & REQ_QUIET))
++ printk("end_request: I/O error, dev %s, sector %llu\n",
++ req->rq_disk ? req->rq_disk->disk_name : "?",
++ (unsigned long long)req->sector);
++ }
+
-+ outb(timing, qdi->timing);
-+}
-
- /**
-- * legacy_init_one - attach a legacy interface
-- * @port: port number
-- * @io: I/O port start
-- * @ctrl: control port
-+ * qdi6580dp_set_piomode - PIO setup for dual channel
-+ * @ap: Port
-+ * @adev: Device
- * @irq: interrupt line
- *
-- * Register an ISA bus IDE interface. Such interfaces are PIO and we
-- * assume do not support IRQ sharing.
-+ * In dual channel mode the 6580 has one clock per channel and we have
-+ * to software clockswitch in qc_issue_prot.
- */
-
--static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl, int irq)
-+static void qdi6580dp_set_piomode(struct ata_port *ap, struct ata_device *adev)
- {
-- struct legacy_data *ld = &legacy_data[nr_legacy_host];
-- struct ata_host *host;
-- struct ata_port *ap;
-- struct platform_device *pdev;
-- struct ata_port_operations *ops = &legacy_port_ops;
-- void __iomem *io_addr, *ctrl_addr;
-- int pio_modes = pio_mask;
-- u32 mask = (1 << port);
-- u32 iordy = (iordy_mask & mask) ? 0: ATA_FLAG_NO_IORDY;
-- int ret;
-+ struct ata_timing t;
-+ struct legacy_data *qdi = ap->host->private_data;
-+ int active, recovery;
-+ u8 timing;
-
-- pdev = platform_device_register_simple(DRV_NAME, nr_legacy_host, NULL, 0);
-- if (IS_ERR(pdev))
-- return PTR_ERR(pdev);
-+ /* Get the timing data in cycles */
-+ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++ if (blk_fs_request(req) && req->rq_disk) {
++ const int rw = rq_data_dir(req);
+
-+ if (qdi->fast) {
-+ active = 8 - FIT(t.active, 1, 8);
-+ recovery = 18 - FIT(t.recover, 3, 18);
-+ } else {
-+ active = 9 - FIT(t.active, 2, 9);
-+ recovery = 15 - FIT(t.recover, 0, 15);
++ disk_stat_add(req->rq_disk, sectors[rw], nr_bytes >> 9);
+ }
-+ timing = (recovery << 4) | active | 0x08;
-
-- ret = -EBUSY;
-- if (devm_request_region(&pdev->dev, io, 8, "pata_legacy") == NULL ||
-- devm_request_region(&pdev->dev, ctrl, 1, "pata_legacy") == NULL)
-- goto fail;
-+ qdi->clock[adev->devno] = timing;
-
-- ret = -ENOMEM;
-- io_addr = devm_ioport_map(&pdev->dev, io, 8);
-- ctrl_addr = devm_ioport_map(&pdev->dev, ctrl, 1);
-- if (!io_addr || !ctrl_addr)
-- goto fail;
-+ outb(timing, qdi->timing + 2 * ap->port_no);
-+ /* Clear the FIFO */
-+ if (adev->class != ATA_DEV_ATA)
-+ outb(0x5F, qdi->timing + 3);
-+}
-
-- if (ht6560a & mask) {
-- ops = &ht6560a_port_ops;
-- pio_modes = 0x07;
-- iordy = ATA_FLAG_NO_IORDY;
-- }
-- if (ht6560b & mask) {
-- ops = &ht6560b_port_ops;
-- pio_modes = 0x1F;
-- }
-- if (opti82c611a & mask) {
-- ops = &opti82c611a_port_ops;
-- pio_modes = 0x0F;
-+/**
-+ * qdi6580_set_piomode - PIO setup for single channel
-+ * @ap: Port
-+ * @adev: Device
-+ *
-+ * In single channel mode the 6580 has one clock per device and we can
-+ * avoid the requirement to clock switch. We also have to load the timing
-+ * into the right clock according to whether we are master or slave.
-+ */
+
-+static void qdi6580_set_piomode(struct ata_port *ap, struct ata_device *adev)
-+{
-+ struct ata_timing t;
-+ struct legacy_data *qdi = ap->host->private_data;
-+ int active, recovery;
-+ u8 timing;
++ total_bytes = bio_nbytes = 0;
++ while ((bio = req->bio) != NULL) {
++ int nbytes;
+
-+ /* Get the timing data in cycles */
-+ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++ /*
++ * For an empty barrier request, the low level driver must
++ * store a potential error location in ->sector. We pass
++ * that back up in ->bi_sector.
++ */
++ if (blk_empty_barrier(req))
++ bio->bi_sector = req->sector;
+
-+ if (qdi->fast) {
-+ active = 8 - FIT(t.active, 1, 8);
-+ recovery = 18 - FIT(t.recover, 3, 18);
-+ } else {
-+ active = 9 - FIT(t.active, 2, 9);
-+ recovery = 15 - FIT(t.recover, 0, 15);
- }
-- if (opti82c46x & mask) {
-- ops = &opti82c46x_port_ops;
-- pio_modes = 0x0F;
-+ timing = (recovery << 4) | active | 0x08;
-+ qdi->clock[adev->devno] = timing;
-+ outb(timing, qdi->timing + 2 * adev->devno);
-+ /* Clear the FIFO */
-+ if (adev->class != ATA_DEV_ATA)
-+ outb(0x5F, qdi->timing + 3);
-+}
++ if (nr_bytes >= bio->bi_size) {
++ req->bio = bio->bi_next;
++ nbytes = bio->bi_size;
++ req_bio_endio(req, bio, nbytes, error);
++ next_idx = 0;
++ bio_nbytes = 0;
++ } else {
++ int idx = bio->bi_idx + next_idx;
+
-+/**
-+ * qdi_qc_issue_prot - command issue
-+ * @qc: command pending
-+ *
-+ * Called when the libata layer is about to issue a command. We wrap
-+ * this interface so that we can load the correct ATA timings.
-+ */
++ if (unlikely(bio->bi_idx >= bio->bi_vcnt)) {
++ blk_dump_rq_flags(req, "__end_that");
++ printk("%s: bio idx %d >= vcnt %d\n",
++ __FUNCTION__,
++ bio->bi_idx, bio->bi_vcnt);
++ break;
++ }
+
-+static unsigned int qdi_qc_issue_prot(struct ata_queued_cmd *qc)
-+{
-+ struct ata_port *ap = qc->ap;
-+ struct ata_device *adev = qc->dev;
-+ struct legacy_data *qdi = ap->host->private_data;
++ nbytes = bio_iovec_idx(bio, idx)->bv_len;
++ BIO_BUG_ON(nbytes > bio->bi_size);
+
-+ if (qdi->clock[adev->devno] != qdi->last) {
-+ if (adev->pio_mode) {
-+ qdi->last = qdi->clock[adev->devno];
-+ outb(qdi->clock[adev->devno], qdi->timing +
-+ 2 * ap->port_no);
-+ }
- }
-+ return ata_qc_issue_prot(qc);
-+}
-
-- /* Probe for automatically detectable controllers */
-+static unsigned int vlb32_data_xfer(struct ata_device *adev, unsigned char *buf,
-+ unsigned int buflen, int rw)
-+{
-+ struct ata_port *ap = adev->link->ap;
-+ int slop = buflen & 3;
-
-- if (io == 0x1F0 && ops == &legacy_port_ops) {
-- unsigned long flags;
-+ if (ata_id_has_dword_io(adev->id)) {
-+ if (rw == WRITE)
-+ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-+ else
-+ ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-
-- local_irq_save(flags);
-+ if (unlikely(slop)) {
-+ u32 pad;
-+ if (rw == WRITE) {
-+ memcpy(&pad, buf + buflen - slop, slop);
-+ pad = le32_to_cpu(pad);
-+ iowrite32(pad, ap->ioaddr.data_addr);
-+ } else {
-+ pad = ioread32(ap->ioaddr.data_addr);
-+ pad = cpu_to_le32(pad);
-+ memcpy(buf + buflen - slop, &pad, slop);
++ /*
++ * not a complete bvec done
++ */
++ if (unlikely(nbytes > nr_bytes)) {
++ bio_nbytes += nr_bytes;
++ total_bytes += nr_bytes;
++ break;
+ }
++
++ /*
++ * advance to the next vector
++ */
++ next_idx++;
++ bio_nbytes += nbytes;
+ }
-+ return (buflen + 3) & ~3;
-+ } else
-+ return ata_data_xfer(adev, buf, buflen, rw);
++
++ total_bytes += nbytes;
++ nr_bytes -= nbytes;
++
++ if ((bio = req->bio)) {
++ /*
++ * end more in this run, or just return 'not-done'
++ */
++ if (unlikely(nr_bytes <= 0))
++ break;
++ }
++ }
++
++ /*
++ * completely done
++ */
++ if (!req->bio)
++ return 0;
++
++ /*
++ * if the request wasn't completed, update state
++ */
++ if (bio_nbytes) {
++ req_bio_endio(req, bio, bio_nbytes, error);
++ bio->bi_idx += next_idx;
++ bio_iovec(bio)->bv_offset += nr_bytes;
++ bio_iovec(bio)->bv_len -= nr_bytes;
++ }
++
++ blk_recalc_rq_sectors(req, total_bytes >> 9);
++ blk_recalc_rq_segments(req);
++ return 1;
+}
+
-+static int qdi_port(struct platform_device *dev,
-+ struct legacy_probe *lp, struct legacy_data *ld)
++/*
++ * splice the completion data to a local structure and hand off to
++ * process_completion_queue() to complete the requests
++ */
++static void blk_done_softirq(struct softirq_action *h)
+{
-+ if (devm_request_region(&dev->dev, lp->private, 4, "qdi") == NULL)
-+ return -EBUSY;
-+ ld->timing = lp->private;
-+ return 0;
-+}
++ struct list_head *cpu_list, local_list;
+
-+static struct ata_port_operations qdi6500_port_ops = {
-+ .set_piomode = qdi6500_set_piomode,
++ local_irq_disable();
++ cpu_list = &__get_cpu_var(blk_cpu_done);
++ list_replace_init(cpu_list, &local_list);
++ local_irq_enable();
+
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++ while (!list_empty(&local_list)) {
++ struct request *rq = list_entry(local_list.next, struct request, donelist);
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++ list_del_init(&rq->donelist);
++ rq->q->softirq_done_fn(rq);
++ }
++}
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = qdi_qc_issue_prot,
++static int __cpuinit blk_cpu_notify(struct notifier_block *self, unsigned long action,
++ void *hcpu)
++{
++ /*
++ * If a CPU goes away, splice its entries to the current CPU
++ * and trigger a run of the softirq
++ */
++ if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
++ int cpu = (unsigned long) hcpu;
+
-+ .data_xfer = vlb32_data_xfer,
++ local_irq_disable();
++ list_splice_init(&per_cpu(blk_cpu_done, cpu),
++ &__get_cpu_var(blk_cpu_done));
++ raise_softirq_irqoff(BLOCK_SOFTIRQ);
++ local_irq_enable();
++ }
+
-+ .irq_handler = ata_interrupt,
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++ return NOTIFY_OK;
++}
+
-+ .port_start = ata_sff_port_start,
++
++static struct notifier_block blk_cpu_notifier __cpuinitdata = {
++ .notifier_call = blk_cpu_notify,
+};
+
-+static struct ata_port_operations qdi6580_port_ops = {
-+ .set_piomode = qdi6580_set_piomode,
++/**
++ * blk_complete_request - end I/O on a request
++ * @req: the request being processed
++ *
++ * Description:
++ * Ends all I/O on a request. It does not handle partial completions,
++ * unless the driver actually implements this in its completion callback
++ * through requeueing. The actual completion happens out-of-order,
++ * through a softirq handler. The user must have registered a completion
++ * callback through blk_queue_softirq_done().
++ **/
+
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++void blk_complete_request(struct request *req)
++{
++ struct list_head *cpu_list;
++ unsigned long flags;
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++ BUG_ON(!req->q->softirq_done_fn);
++
++ local_irq_save(flags);
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = ata_qc_issue_prot,
++ cpu_list = &__get_cpu_var(blk_cpu_done);
++ list_add_tail(&req->donelist, cpu_list);
++ raise_softirq_irqoff(BLOCK_SOFTIRQ);
+
-+ .data_xfer = vlb32_data_xfer,
++ local_irq_restore(flags);
++}
+
-+ .irq_handler = ata_interrupt,
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++EXPORT_SYMBOL(blk_complete_request);
++
++/*
++ * queue lock must be held
++ */
++static void end_that_request_last(struct request *req, int error)
++{
++ struct gendisk *disk = req->rq_disk;
+
-+ .port_start = ata_sff_port_start,
-+};
++ if (blk_rq_tagged(req))
++ blk_queue_end_tag(req->q, req);
+
-+static struct ata_port_operations qdi6580dp_port_ops = {
-+ .set_piomode = qdi6580dp_set_piomode,
++ if (blk_queued_rq(req))
++ blkdev_dequeue_request(req);
+
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++ if (unlikely(laptop_mode) && blk_fs_request(req))
++ laptop_io_completion();
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++ /*
++ * Account IO completion. bar_rq isn't accounted as a normal
++ * IO at either queueing or completion. Accounting the containing
++ * request is enough.
++ */
++ if (disk && blk_fs_request(req) && req != &req->q->bar_rq) {
++ unsigned long duration = jiffies - req->start_time;
++ const int rw = rq_data_dir(req);
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = qdi_qc_issue_prot,
++ __disk_stat_inc(disk, ios[rw]);
++ __disk_stat_add(disk, ticks[rw], duration);
++ disk_round_stats(disk);
++ disk->in_flight--;
++ }
+
-+ .data_xfer = vlb32_data_xfer,
++ if (req->end_io)
++ req->end_io(req, error);
++ else {
++ if (blk_bidi_rq(req))
++ __blk_put_request(req->next_rq->q, req->next_rq);
+
-+ .irq_handler = ata_interrupt,
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++ __blk_put_request(req->q, req);
++ }
++}
+
-+ .port_start = ata_sff_port_start,
-+};
++static inline void __end_request(struct request *rq, int uptodate,
++ unsigned int nr_bytes)
++{
++ int error = 0;
+
-+static DEFINE_SPINLOCK(winbond_lock);
++ if (uptodate <= 0)
++ error = uptodate ? uptodate : -EIO;
+
-+static void winbond_writecfg(unsigned long port, u8 reg, u8 val)
-+{
-+ unsigned long flags;
-+ spin_lock_irqsave(&winbond_lock, flags);
-+ outb(reg, port + 0x01);
-+ outb(val, port + 0x02);
-+ spin_unlock_irqrestore(&winbond_lock, flags);
++ __blk_end_request(rq, error, nr_bytes);
+}
+
-+static u8 winbond_readcfg(unsigned long port, u8 reg)
++/**
++ * blk_rq_bytes - Returns bytes left to complete in the entire request
++ **/
++unsigned int blk_rq_bytes(struct request *rq)
+{
-+ u8 val;
-+
-+ unsigned long flags;
-+ spin_lock_irqsave(&winbond_lock, flags);
-+ outb(reg, port + 0x01);
-+ val = inb(port + 0x02);
-+ spin_unlock_irqrestore(&winbond_lock, flags);
++ if (blk_fs_request(rq))
++ return rq->hard_nr_sectors << 9;
+
-+ return val;
++ return rq->data_len;
+}
++EXPORT_SYMBOL_GPL(blk_rq_bytes);
+
-+static void winbond_set_piomode(struct ata_port *ap, struct ata_device *adev)
++/**
++ * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
++ **/
++unsigned int blk_rq_cur_bytes(struct request *rq)
+{
-+ struct ata_timing t;
-+ struct legacy_data *winbond = ap->host->private_data;
-+ int active, recovery;
-+ u8 reg;
-+ int timing = 0x88 + (ap->port_no * 4) + (adev->devno * 2);
-+
-+ reg = winbond_readcfg(winbond->timing, 0x81);
++ if (blk_fs_request(rq))
++ return rq->current_nr_sectors << 9;
+
-+ /* Get the timing data in cycles */
-+ if (reg & 0x40) /* Fast VLB bus, assume 50MHz */
-+ ata_timing_compute(adev, adev->pio_mode, &t, 20000, 1000);
-+ else
-+ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++ if (rq->bio)
++ return rq->bio->bi_size;
+
-+ active = (FIT(t.active, 3, 17) - 1) & 0x0F;
-+ recovery = (FIT(t.recover, 1, 15) + 1) & 0x0F;
-+ timing = (active << 4) | recovery;
-+ winbond_writecfg(winbond->timing, timing, reg);
++ return rq->data_len;
++}
++EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
+
-+ /* Load the setup timing */
++/**
++ * end_queued_request - end all I/O on a queued request
++ * @rq: the request being processed
++ * @uptodate: error value or 0/1 uptodate flag
++ *
++ * Description:
++ * Ends all I/O on a request, and removes it from the block layer queues.
++ * Not suitable for normal IO completion, unless the driver still has
++ * the request attached to the block layer.
++ *
++ **/
++void end_queued_request(struct request *rq, int uptodate)
++{
++ __end_request(rq, uptodate, blk_rq_bytes(rq));
++}
++EXPORT_SYMBOL(end_queued_request);
+
-+ reg = 0x35;
-+ if (adev->class != ATA_DEV_ATA)
-+ reg |= 0x08; /* FIFO off */
-+ if (!ata_pio_need_iordy(adev))
-+ reg |= 0x02; /* IORDY off */
-+ reg |= (FIT(t.setup, 0, 3) << 6);
-+ winbond_writecfg(winbond->timing, timing + 1, reg);
++/**
++ * end_dequeued_request - end all I/O on a dequeued request
++ * @rq: the request being processed
++ * @uptodate: error value or 0/1 uptodate flag
++ *
++ * Description:
++ * Ends all I/O on a request. The request must already have been
++ * dequeued using blkdev_dequeue_request(), as is normally the case
++ * for most drivers.
++ *
++ **/
++void end_dequeued_request(struct request *rq, int uptodate)
++{
++ __end_request(rq, uptodate, blk_rq_bytes(rq));
+}
++EXPORT_SYMBOL(end_dequeued_request);
+
-+static int winbond_port(struct platform_device *dev,
-+ struct legacy_probe *lp, struct legacy_data *ld)
++
++/**
++ * end_request - end I/O on the current segment of the request
++ * @req: the request being processed
++ * @uptodate: error value or 0/1 uptodate flag
++ *
++ * Description:
++ * Ends I/O on the current segment of a request. If that is the only
++ * remaining segment, the request is also completed and freed.
++ *
++ * This is a remnant of how older block drivers handled IO completions.
++ * Modern drivers typically end IO on the full request in one go, unless
++ * they have a residual value to account for. For that case this function
++ * isn't really useful, unless the residual just happens to be the
++ * full current segment. In other words, don't use this function in new
++ * code. Either use end_request_completely(), or the
++ * end_that_request_chunk() (along with end_that_request_last()) for
++ * partial completions.
++ *
++ **/
++void end_request(struct request *req, int uptodate)
+{
-+ if (devm_request_region(&dev->dev, lp->private, 4, "winbond") == NULL)
-+ return -EBUSY;
-+ ld->timing = lp->private;
-+ return 0;
++ __end_request(req, uptodate, req->hard_cur_sectors << 9);
+}
++EXPORT_SYMBOL(end_request);
+
-+static struct ata_port_operations winbond_port_ops = {
-+ .set_piomode = winbond_set_piomode,
++/**
++ * blk_end_io - Generic end_io function to complete a request.
++ * @rq: the request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete @rq
++ * @bidi_bytes: number of bytes to complete @rq->next_rq
++ * @drv_callback: function called between completion of bios in the request
++ * and completion of the request.
++ * If the callback returns non-zero, this helper returns without
++ * completion of the request.
++ *
++ * Description:
++ * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
++ * If @rq has leftover, sets it up for the next range of segments.
++ *
++ * Return:
++ * 0 - we are done with this request
++ * 1 - this request is not freed yet; it still has pending buffers.
++ **/
++static int blk_end_io(struct request *rq, int error, int nr_bytes,
++ int bidi_bytes, int (drv_callback)(struct request *))
++{
++ struct request_queue *q = rq->q;
++ unsigned long flags = 0UL;
+
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++ if (blk_fs_request(rq) || blk_pc_request(rq)) {
++ if (__end_that_request_first(rq, error, nr_bytes))
++ return 1;
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++ /* Bidi request must be completed as a whole */
++ if (blk_bidi_rq(rq) &&
++ __end_that_request_first(rq->next_rq, error, bidi_bytes))
++ return 1;
++ }
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = ata_qc_issue_prot,
++ /* Special feature for tricky drivers */
++ if (drv_callback && drv_callback(rq))
++ return 1;
+
-+ .data_xfer = vlb32_data_xfer,
++ add_disk_randomness(rq->rq_disk);
+
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
-
-+ .port_start = ata_sff_port_start,
-+};
++ spin_lock_irqsave(q->queue_lock, flags);
++ end_that_request_last(rq, error);
++ spin_unlock_irqrestore(q->queue_lock, flags);
+
-+static struct legacy_controller controllers[] = {
-+ {"BIOS", &legacy_port_ops, 0x1F,
-+ ATA_FLAG_NO_IORDY, NULL },
-+ {"Snooping", &simple_port_ops, 0x1F,
-+ 0 , NULL },
-+ {"PDC20230", &pdc20230_port_ops, 0x7,
-+ ATA_FLAG_NO_IORDY, NULL },
-+ {"HT6560A", &ht6560a_port_ops, 0x07,
-+ ATA_FLAG_NO_IORDY, NULL },
-+ {"HT6560B", &ht6560b_port_ops, 0x1F,
-+ ATA_FLAG_NO_IORDY, NULL },
-+ {"OPTI82C611A", &opti82c611a_port_ops, 0x0F,
-+ 0 , NULL },
-+ {"OPTI82C46X", &opti82c46x_port_ops, 0x0F,
-+ 0 , NULL },
-+ {"QDI6500", &qdi6500_port_ops, 0x07,
-+ ATA_FLAG_NO_IORDY, qdi_port },
-+ {"QDI6580", &qdi6580_port_ops, 0x1F,
-+ 0 , qdi_port },
-+ {"QDI6580DP", &qdi6580dp_port_ops, 0x1F,
-+ 0 , qdi_port },
-+ {"W83759A", &winbond_port_ops, 0x1F,
-+ 0 , winbond_port }
-+};
++ return 0;
++}
+
+/**
-+ * probe_chip_type - Discover controller
-+ * @probe: Probe entry to check
++ * blk_end_request - Helper function for drivers to complete the request.
++ * @rq: the request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete
+ *
-+ * Probe an ATA port and identify the type of controller. We don't
-+ * check if the controller appears to be driveless at this point.
-+ */
++ * Description:
++ * Ends I/O on a number of bytes attached to @rq.
++ * If @rq has leftover, sets it up for the next range of segments.
++ *
++ * Return:
++ * 0 - we are done with this request
++ * 1 - still buffers pending for this request
++ **/
++int blk_end_request(struct request *rq, int error, int nr_bytes)
++{
++ return blk_end_io(rq, error, nr_bytes, 0, NULL);
++}
++EXPORT_SYMBOL_GPL(blk_end_request);
+
-+static __init int probe_chip_type(struct legacy_probe *probe)
++/**
++ * __blk_end_request - Helper function for drivers to complete the request.
++ * @rq: the request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete
++ *
++ * Description:
++ * Must be called with queue lock held unlike blk_end_request().
++ *
++ * Return:
++ * 0 - we are done with this request
++ * 1 - still buffers pending for this request
++ **/
++int __blk_end_request(struct request *rq, int error, int nr_bytes)
+{
-+ int mask = 1 << probe->slot;
++ if (blk_fs_request(rq) || blk_pc_request(rq)) {
++ if (__end_that_request_first(rq, error, nr_bytes))
++ return 1;
++ }
+
-+ if (winbond && (probe->port == 0x1F0 || probe->port == 0x170)) {
-+ u8 reg = winbond_readcfg(winbond, 0x81);
-+ reg |= 0x80; /* jumpered mode off */
-+ winbond_writecfg(winbond, 0x81, reg);
-+ reg = winbond_readcfg(winbond, 0x83);
-+ reg |= 0xF0; /* local control */
-+ winbond_writecfg(winbond, 0x83, reg);
-+ reg = winbond_readcfg(winbond, 0x85);
-+ reg |= 0xF0; /* programmable timing */
-+ winbond_writecfg(winbond, 0x85, reg);
++ add_disk_randomness(rq->rq_disk);
+
-+ reg = winbond_readcfg(winbond, 0x81);
++ end_that_request_last(rq, error);
+
-+ if (reg & mask)
-+ return W83759A;
-+ }
-+ if (probe->port == 0x1F0) {
-+ unsigned long flags;
-+ local_irq_save(flags);
- /* Probes */
-- inb(0x1F5);
- outb(inb(0x1F2) | 0x80, 0x1F2);
-+ inb(0x1F5);
- inb(0x1F2);
- inb(0x3F6);
- inb(0x3F6);
-@@ -760,29 +1168,83 @@ static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl
-
- if ((inb(0x1F2) & 0x80) == 0) {
- /* PDC20230c or 20630 ? */
-- printk(KERN_INFO "PDC20230-C/20630 VLB ATA controller detected.\n");
-- pio_modes = 0x07;
-- ops = &pdc20230_port_ops;
-- iordy = ATA_FLAG_NO_IORDY;
-+ printk(KERN_INFO "PDC20230-C/20630 VLB ATA controller"
-+ " detected.\n");
- udelay(100);
- inb(0x1F5);
-+ local_irq_restore(flags);
-+ return PDC20230;
- } else {
- outb(0x55, 0x1F2);
- inb(0x1F2);
- inb(0x1F2);
-- if (inb(0x1F2) == 0x00) {
-- printk(KERN_INFO "PDC20230-B VLB ATA controller detected.\n");
-- }
-+ if (inb(0x1F2) == 0x00)
-+ printk(KERN_INFO "PDC20230-B VLB ATA "
-+ "controller detected.\n");
-+ local_irq_restore(flags);
-+ return BIOS;
- }
- local_irq_restore(flags);
- }
-
-+ if (ht6560a & mask)
-+ return HT6560A;
-+ if (ht6560b & mask)
-+ return HT6560B;
-+ if (opti82c611a & mask)
-+ return OPTI611A;
-+ if (opti82c46x & mask)
-+ return OPTI46X;
-+ if (autospeed & mask)
-+ return SNOOP;
-+ return BIOS;
++ return 0;
+}
++EXPORT_SYMBOL_GPL(__blk_end_request);
+
++/**
++ * blk_end_bidi_request - Helper function for drivers to complete bidi request.
++ * @rq: the bidi request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete @rq
++ * @bidi_bytes: number of bytes to complete @rq->next_rq
++ *
++ * Description:
++ * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
++ *
++ * Return:
++ * 0 - we are done with this request
++ * 1 - still buffers pending for this request
++ **/
++int blk_end_bidi_request(struct request *rq, int error, int nr_bytes,
++ int bidi_bytes)
++{
++ return blk_end_io(rq, error, nr_bytes, bidi_bytes, NULL);
++}
++EXPORT_SYMBOL_GPL(blk_end_bidi_request);
+
+/**
-+ * legacy_init_one - attach a legacy interface
-+ * @pl: probe record
++ * blk_end_request_callback - Special helper function for tricky drivers
++ * @rq: the request being processed
++ * @error: 0 for success, < 0 for error
++ * @nr_bytes: number of bytes to complete
++ * @drv_callback: function called between completion of bios in the request
++ * and completion of the request.
++ * If the callback returns non-zero, this helper returns without
++ * completion of the request.
+ *
-+ * Register an ISA bus IDE interface. Such interfaces are PIO and we
-+ * assume do not support IRQ sharing.
-+ */
++ * Description:
++ * Ends I/O on a number of bytes attached to @rq.
++ * If @rq has leftover, sets it up for the next range of segments.
++ *
++ * This special helper function is used only for existing tricky drivers.
++ * (e.g. cdrom_newpc_intr() of ide-cd)
++ * This interface will be removed when such drivers are rewritten.
++ * Don't use this interface in other places anymore.
++ *
++ * Return:
++ * 0 - we are done with this request
++ * 1 - this request is not freed yet.
++ * this request still has pending buffers or
++ * the driver doesn't want to finish this request yet.
++ **/
++int blk_end_request_callback(struct request *rq, int error, int nr_bytes,
++ int (drv_callback)(struct request *))
++{
++ return blk_end_io(rq, error, nr_bytes, 0, drv_callback);
++}
++EXPORT_SYMBOL_GPL(blk_end_request_callback);
+
-+static __init int legacy_init_one(struct legacy_probe *probe)
++void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
++ struct bio *bio)
+{
-+ struct legacy_controller *controller = &controllers[probe->type];
-+ int pio_modes = controller->pio_mask;
-+ unsigned long io = probe->port;
-+ u32 mask = (1 << probe->slot);
-+ struct ata_port_operations *ops = controller->ops;
-+ struct legacy_data *ld = &legacy_data[probe->slot];
-+ struct ata_host *host = NULL;
-+ struct ata_port *ap;
-+ struct platform_device *pdev;
-+ struct ata_device *dev;
-+ void __iomem *io_addr, *ctrl_addr;
-+ u32 iordy = (iordy_mask & mask) ? 0: ATA_FLAG_NO_IORDY;
-+ int ret;
-
-- /* Chip does mode setting by command snooping */
-- if (ops == &legacy_port_ops && (autospeed & mask))
-- ops = &simple_port_ops;
-+ iordy |= controller->flags;
++ /* first two bits are identical in rq->cmd_flags and bio->bi_rw */
++ rq->cmd_flags |= (bio->bi_rw & 3);
+
-+ pdev = platform_device_register_simple(DRV_NAME, probe->slot, NULL, 0);
-+ if (IS_ERR(pdev))
-+ return PTR_ERR(pdev);
++ rq->nr_phys_segments = bio_phys_segments(q, bio);
++ rq->nr_hw_segments = bio_hw_segments(q, bio);
++ rq->current_nr_sectors = bio_cur_sectors(bio);
++ rq->hard_cur_sectors = rq->current_nr_sectors;
++ rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
++ rq->buffer = bio_data(bio);
++ rq->data_len = bio->bi_size;
+
-+ ret = -EBUSY;
-+ if (devm_request_region(&pdev->dev, io, 8, "pata_legacy") == NULL ||
-+ devm_request_region(&pdev->dev, io + 0x0206, 1,
-+ "pata_legacy") == NULL)
-+ goto fail;
-
- ret = -ENOMEM;
-+ io_addr = devm_ioport_map(&pdev->dev, io, 8);
-+ ctrl_addr = devm_ioport_map(&pdev->dev, io + 0x0206, 1);
-+ if (!io_addr || !ctrl_addr)
-+ goto fail;
-+ if (controller->setup)
-+ if (controller->setup(pdev, probe, ld) < 0)
-+ goto fail;
- host = ata_host_alloc(&pdev->dev, 1);
- if (!host)
- goto fail;
-@@ -795,19 +1257,29 @@ static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl
- ap->ioaddr.altstatus_addr = ctrl_addr;
- ap->ioaddr.ctl_addr = ctrl_addr;
- ata_std_ports(&ap->ioaddr);
-- ap->private_data = ld;
-+ ap->host->private_data = ld;
-
-- ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io, ctrl);
-+ ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io, io + 0x0206);
-
-- ret = ata_host_activate(host, irq, ata_interrupt, 0, &legacy_sht);
-+ ret = ata_host_activate(host, probe->irq, ata_interrupt, 0,
-+ &legacy_sht);
- if (ret)
- goto fail;
--
-- legacy_host[nr_legacy_host++] = dev_get_drvdata(&pdev->dev);
- ld->platform_dev = pdev;
-- return 0;
-
-+ /* Nothing found means we drop the port as its probably not there */
++ rq->bio = rq->biotail = bio;
+
-+ ret = -ENODEV;
-+ ata_link_for_each_dev(dev, &ap->link) {
-+ if (!ata_dev_absent(dev)) {
-+ legacy_host[probe->slot] = host;
-+ ld->platform_dev = pdev;
-+ return 0;
-+ }
-+ }
- fail:
-+ if (host)
-+ ata_host_detach(host);
- platform_device_unregister(pdev);
- return ret;
- }
-@@ -818,13 +1290,15 @@ fail:
- * @master: set this if we find an ATA master
- * @master: set this if we find an ATA secondary
- *
-- * A small number of vendors implemented early PCI ATA interfaces on bridge logic
-- * without the ATA interface being PCI visible. Where we have a matching PCI driver
-- * we must skip the relevant device here. If we don't know about it then the legacy
-- * driver is the right driver anyway.
-+ * A small number of vendors implemented early PCI ATA interfaces
-+ * on bridge logic without the ATA interface being PCI visible.
-+ * Where we have a matching PCI driver we must skip the relevant
-+ * device here. If we don't know about it then the legacy driver
-+ * is the right driver anyway.
- */
-
--static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *secondary)
-+static void __init legacy_check_special_cases(struct pci_dev *p, int *primary,
-+ int *secondary)
- {
- /* Cyrix CS5510 pre SFF MWDMA ATA on the bridge */
- if (p->vendor == 0x1078 && p->device == 0x0000) {
-@@ -840,7 +1314,8 @@ static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *sec
- if (p->vendor == 0x8086 && p->device == 0x1234) {
- u16 r;
- pci_read_config_word(p, 0x6C, &r);
-- if (r & 0x8000) { /* ATA port enabled */
-+ if (r & 0x8000) {
-+ /* ATA port enabled */
- if (r & 0x4000)
- *secondary = 1;
- else
-@@ -850,6 +1325,114 @@ static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *sec
- }
- }
-
-+static __init void probe_opti_vlb(void)
-+{
-+ /* If an OPTI 82C46X is present find out where the channels are */
-+ static const char *optis[4] = {
-+ "3/463MV", "5MV",
-+ "5MVA", "5MVB"
-+ };
-+ u8 chans = 1;
-+ u8 ctrl = (opti_syscfg(0x30) & 0xC0) >> 6;
++ if (bio->bi_bdev)
++ rq->rq_disk = bio->bi_bdev->bd_disk;
++}
+
-+ opti82c46x = 3; /* Assume master and slave first */
-+ printk(KERN_INFO DRV_NAME ": Opti 82C46%s chipset support.\n",
-+ optis[ctrl]);
-+ if (ctrl == 3)
-+ chans = (opti_syscfg(0x3F) & 0x20) ? 2 : 1;
-+ ctrl = opti_syscfg(0xAC);
-+ /* Check enabled and this port is the 465MV port. On the
-+ MVB we may have two channels */
-+ if (ctrl & 8) {
-+ if (chans == 2) {
-+ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
-+ legacy_probe_add(0x170, 15, OPTI46X, 0);
-+ }
-+ if (ctrl & 4)
-+ legacy_probe_add(0x170, 15, OPTI46X, 0);
-+ else
-+ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
-+ } else
-+ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
++int kblockd_schedule_work(struct work_struct *work)
++{
++ return queue_work(kblockd_workqueue, work);
+}
+
-+static __init void qdi65_identify_port(u8 r, u8 res, unsigned long port)
++EXPORT_SYMBOL(kblockd_schedule_work);
++
++void kblockd_flush_work(struct work_struct *work)
+{
-+ static const unsigned long ide_port[2] = { 0x170, 0x1F0 };
-+ /* Check card type */
-+ if ((r & 0xF0) == 0xC0) {
-+ /* QD6500: single channel */
-+ if (r & 8)
-+ /* Disabled ? */
-+ return;
-+ legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
-+ QDI6500, port);
-+ }
-+ if (((r & 0xF0) == 0xA0) || (r & 0xF0) == 0x50) {
-+ /* QD6580: dual channel */
-+ if (!request_region(port + 2 , 2, "pata_qdi")) {
-+ release_region(port, 2);
-+ return;
-+ }
-+ res = inb(port + 3);
-+ /* Single channel mode ? */
-+ if (res & 1)
-+ legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
-+ QDI6580, port);
-+ else { /* Dual channel mode */
-+ legacy_probe_add(0x1F0, 14, QDI6580DP, port);
-+ /* port + 0x02, r & 0x04 */
-+ legacy_probe_add(0x170, 15, QDI6580DP, port + 2);
-+ }
-+ release_region(port + 2, 2);
-+ }
++ cancel_work_sync(work);
+}
++EXPORT_SYMBOL(kblockd_flush_work);
+
-+static __init void probe_qdi_vlb(void)
++int __init blk_dev_init(void)
+{
-+ unsigned long flags;
-+ static const unsigned long qd_port[2] = { 0x30, 0xB0 };
+ int i;
+
-+ /*
-+ * Check each possible QD65xx base address
-+ */
-+
-+ for (i = 0; i < 2; i++) {
-+ unsigned long port = qd_port[i];
-+ u8 r, res;
++ kblockd_workqueue = create_workqueue("kblockd");
++ if (!kblockd_workqueue)
++ panic("Failed to create kblockd\n");
+
++ request_cachep = kmem_cache_create("blkdev_requests",
++ sizeof(struct request), 0, SLAB_PANIC, NULL);
+
-+ if (request_region(port, 2, "pata_qdi")) {
-+ /* Check for a card */
-+ local_irq_save(flags);
-+ /* I have no h/w that needs this delay but it
-+ is present in the historic code */
-+ r = inb(port);
-+ udelay(1);
-+ outb(0x19, port);
-+ udelay(1);
-+ res = inb(port);
-+ udelay(1);
-+ outb(r, port);
-+ udelay(1);
-+ local_irq_restore(flags);
++ blk_requestq_cachep = kmem_cache_create("blkdev_queue",
++ sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
+
-+ /* Fail */
-+ if (res == 0x19) {
-+ release_region(port, 2);
-+ continue;
-+ }
-+ /* Passes the presence test */
-+ r = inb(port + 1);
-+ udelay(1);
-+ /* Check port agrees with port set */
-+ if ((r & 2) >> 1 == i)
-+ qdi65_identify_port(r, res, port);
-+ release_region(port, 2);
-+ }
-+ }
-+}
-
- /**
- * legacy_init - attach legacy interfaces
-@@ -867,15 +1450,17 @@ static __init int legacy_init(void)
- int ct = 0;
- int primary = 0;
- int secondary = 0;
-- int last_port = NR_HOST;
-+ int pci_present = 0;
-+ struct legacy_probe *pl = &probe_list[0];
-+ int slot = 0;
-
- struct pci_dev *p = NULL;
-
- for_each_pci_dev(p) {
- int r;
-- /* Check for any overlap of the system ATA mappings. Native mode controllers
-- stuck on these addresses or some devices in 'raid' mode won't be found by
-- the storage class test */
-+ /* Check for any overlap of the system ATA mappings. Native
-+ mode controllers stuck on these addresses or some devices
-+ in 'raid' mode won't be found by the storage class test */
- for (r = 0; r < 6; r++) {
- if (pci_resource_start(p, r) == 0x1f0)
- primary = 1;
-@@ -885,49 +1470,39 @@ static __init int legacy_init(void)
- /* Check for special cases */
- legacy_check_special_cases(p, &primary, &secondary);
-
-- /* If PCI bus is present then don't probe for tertiary legacy ports */
-- if (probe_all == 0)
-- last_port = 2;
-+ /* If PCI bus is present then don't probe for tertiary
-+ legacy ports */
-+ pci_present = 1;
- }
-
-- /* If an OPTI 82C46X is present find out where the channels are */
-- if (opti82c46x) {
-- static const char *optis[4] = {
-- "3/463MV", "5MV",
-- "5MVA", "5MVB"
-- };
-- u8 chans = 1;
-- u8 ctrl = (opti_syscfg(0x30) & 0xC0) >> 6;
--
-- opti82c46x = 3; /* Assume master and slave first */
-- printk(KERN_INFO DRV_NAME ": Opti 82C46%s chipset support.\n", optis[ctrl]);
-- if (ctrl == 3)
-- chans = (opti_syscfg(0x3F) & 0x20) ? 2 : 1;
-- ctrl = opti_syscfg(0xAC);
-- /* Check enabled and this port is the 465MV port. On the
-- MVB we may have two channels */
-- if (ctrl & 8) {
-- if (ctrl & 4)
-- opti82c46x = 2; /* Slave */
-- else
-- opti82c46x = 1; /* Master */
-- if (chans == 2)
-- opti82c46x = 3; /* Master and Slave */
-- } /* Slave only */
-- else if (chans == 1)
-- opti82c46x = 1;
-+ if (winbond == 1)
-+ winbond = 0x130; /* Default port, alt is 1B0 */
++ for_each_possible_cpu(i)
++ INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));
+
-+ if (primary == 0 || all)
-+ legacy_probe_add(0x1F0, 14, UNKNOWN, 0);
-+ if (secondary == 0 || all)
-+ legacy_probe_add(0x170, 15, UNKNOWN, 0);
++ open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
++ register_hotcpu_notifier(&blk_cpu_notifier);
+
-+ if (probe_all || !pci_present) {
-+ /* ISA/VLB extra ports */
-+ legacy_probe_add(0x1E8, 11, UNKNOWN, 0);
-+ legacy_probe_add(0x168, 10, UNKNOWN, 0);
-+ legacy_probe_add(0x1E0, 8, UNKNOWN, 0);
-+ legacy_probe_add(0x160, 12, UNKNOWN, 0);
- }
-
-- for (i = 0; i < last_port; i++) {
-- /* Skip primary if we have seen a PCI one */
-- if (i == 0 && primary == 1)
-- continue;
-- /* Skip secondary if we have seen a PCI one */
-- if (i == 1 && secondary == 1)
-+ if (opti82c46x)
-+ probe_opti_vlb();
-+ if (qdi)
-+ probe_qdi_vlb();
++ return 0;
++}
+
-+ for (i = 0; i < NR_HOST; i++, pl++) {
-+ if (pl->port == 0)
- continue;
-- if (legacy_init_one(i, legacy_port[i],
-- legacy_port[i] + 0x0206,
-- legacy_irq[i]) == 0)
-+ if (pl->type == UNKNOWN)
-+ pl->type = probe_chip_type(pl);
-+ pl->slot = slot++;
-+ if (legacy_init_one(pl) == 0)
- ct++;
- }
- if (ct != 0)
-@@ -941,11 +1516,8 @@ static __exit void legacy_exit(void)
-
- for (i = 0; i < nr_legacy_host; i++) {
- struct legacy_data *ld = &legacy_data[i];
--
- ata_host_detach(legacy_host[i]);
- platform_device_unregister(ld->platform_dev);
-- if (ld->timing)
-- release_region(ld->timing, 2);
- }
- }
-
-@@ -960,9 +1532,9 @@ module_param(ht6560a, int, 0);
- module_param(ht6560b, int, 0);
- module_param(opti82c611a, int, 0);
- module_param(opti82c46x, int, 0);
-+module_param(qdi, int, 0);
- module_param(pio_mask, int, 0);
- module_param(iordy_mask, int, 0);
-
- module_init(legacy_init);
- module_exit(legacy_exit);
--
-diff --git a/drivers/ata/pata_mpc52xx.c b/drivers/ata/pata_mpc52xx.c
-index 50c56e2..dc40162 100644
---- a/drivers/ata/pata_mpc52xx.c
-+++ b/drivers/ata/pata_mpc52xx.c
-@@ -364,7 +364,7 @@ mpc52xx_ata_probe(struct of_device *op, const struct of_device_id *match)
- {
- unsigned int ipb_freq;
- struct resource res_mem;
-- int ata_irq = NO_IRQ;
-+ int ata_irq;
- struct mpc52xx_ata __iomem *ata_regs;
- struct mpc52xx_ata_priv *priv;
- int rv;
-diff --git a/drivers/ata/pata_ninja32.c b/drivers/ata/pata_ninja32.c
+diff --git a/block/blk-exec.c b/block/blk-exec.c
new file mode 100644
-index 0000000..1c1b835
+index 0000000..ebfb44e
--- /dev/null
-+++ b/drivers/ata/pata_ninja32.c
-@@ -0,0 +1,214 @@
++++ b/block/blk-exec.c
+@@ -0,0 +1,105 @@
+/*
-+ * pata_ninja32.c - Ninja32 PATA for new ATA layer
-+ * (C) 2007 Red Hat Inc
-+ * Alan Cox <alan at redhat.com>
-+ *
-+ * Note: The controller like many controllers has shared timings for
-+ * PIO and DMA. We thus flip to the DMA timings in dma_start and flip back
-+ * in the dma_stop function. Thus we actually don't need a set_dmamode
-+ * method as the PIO method is always called and will set the right PIO
-+ * timing parameters.
-+ *
-+ * The Ninja32 Cardbus is not a generic SFF controller. Instead it is
-+ * laid out as follows off BAR 0. This is based upon Mark Lord's delkin
-+ * driver and the extensive analysis done by the BSD developers, notably
-+ * ITOH Yasufumi.
-+ *
-+ * Base + 0x00 IRQ Status
-+ * Base + 0x01 IRQ control
-+ * Base + 0x02 Chipset control
-+ * Base + 0x04 VDMA and reset control + wait bits
-+ * Base + 0x08 BMIMBA
-+ * Base + 0x0C DMA Length
-+ * Base + 0x10 Taskfile
-+ * Base + 0x18 BMDMA Status ?
-+ * Base + 0x1C
-+ * Base + 0x1D Bus master control
-+ * bit 0 = enable
-+ * bit 1 = 0 write/1 read
-+ * bit 2 = 1 sgtable
-+ * bit 3 = go
-+ * bit 4-6 wait bits
-+ * bit 7 = done
-+ * Base + 0x1E AltStatus
-+ * Base + 0x1F timing register
++ * Functions related to setting various queue properties from drivers
+ */
-+
+#include <linux/kernel.h>
+#include <linux/module.h>
-+#include <linux/pci.h>
-+#include <linux/init.h>
++#include <linux/bio.h>
+#include <linux/blkdev.h>
-+#include <linux/delay.h>
-+#include <scsi/scsi_host.h>
-+#include <linux/libata.h>
+
-+#define DRV_NAME "pata_ninja32"
-+#define DRV_VERSION "0.0.1"
++#include "blk.h"
+
++/*
++ * for max sense size
++ */
++#include <scsi/scsi_cmnd.h>
+
+/**
-+ * ninja32_set_piomode - set initial PIO mode data
-+ * @ap: ATA interface
-+ * @adev: ATA device
-+ *
-+ * Called to do the PIO mode setup. Our timing registers are shared
-+ * but we want to set the PIO timing by default.
++ * blk_end_sync_rq - executes a completion event on a request
++ * @rq: request to complete
++ * @error: end io status of the request
+ */
-+
-+static void ninja32_set_piomode(struct ata_port *ap, struct ata_device *adev)
++void blk_end_sync_rq(struct request *rq, int error)
+{
-+ static u16 pio_timing[5] = {
-+ 0xd6, 0x85, 0x44, 0x33, 0x13
-+ };
-+ iowrite8(pio_timing[adev->pio_mode - XFER_PIO_0],
-+ ap->ioaddr.bmdma_addr + 0x1f);
-+ ap->private_data = adev;
++ struct completion *waiting = rq->end_io_data;
++
++ rq->end_io_data = NULL;
++ __blk_put_request(rq->q, rq);
++
++ /*
++ * complete last, if this is a stack request the process (and thus
++ * the rq pointer) could be invalid right after this complete()
++ */
++ complete(waiting);
+}
++EXPORT_SYMBOL(blk_end_sync_rq);
++
++/**
++ * blk_execute_rq_nowait - insert a request into queue for execution
++ * @q: queue to insert the request in
++ * @bd_disk: matching gendisk
++ * @rq: request to insert
++ * @at_head: insert request at head or tail of queue
++ * @done: I/O completion handler
++ *
++ * Description:
++ * Insert a fully prepared request at the back of the io scheduler queue
++ * for execution. Don't wait for completion.
++ */
++void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
++ struct request *rq, int at_head,
++ rq_end_io_fn *done)
++{
++ int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+
++ rq->rq_disk = bd_disk;
++ rq->cmd_flags |= REQ_NOMERGE;
++ rq->end_io = done;
++ WARN_ON(irqs_disabled());
++ spin_lock_irq(q->queue_lock);
++ __elv_add_request(q, rq, where, 1);
++ __generic_unplug_device(q);
++ spin_unlock_irq(q->queue_lock);
++}
++EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
+
-+static void ninja32_dev_select(struct ata_port *ap, unsigned int device)
++/**
++ * blk_execute_rq - insert a request into queue for execution
++ * @q: queue to insert the request in
++ * @bd_disk: matching gendisk
++ * @rq: request to insert
++ * @at_head: insert request at head or tail of queue
++ *
++ * Description:
++ * Insert a fully prepared request at the back of the io scheduler queue
++ * for execution and wait for completion.
++ */
++int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
++ struct request *rq, int at_head)
+{
-+ struct ata_device *adev = &ap->link.device[device];
-+ if (ap->private_data != adev) {
-+ iowrite8(0xd6, ap->ioaddr.bmdma_addr + 0x1f);
-+ ata_std_dev_select(ap, device);
-+ ninja32_set_piomode(ap, adev);
++ DECLARE_COMPLETION_ONSTACK(wait);
++ char sense[SCSI_SENSE_BUFFERSIZE];
++ int err = 0;
++
++ /*
++ * we need an extra reference to the request, so we can look at
++ * it after io completion
++ */
++ rq->ref_count++;
++
++ if (!rq->sense) {
++ memset(sense, 0, sizeof(sense));
++ rq->sense = sense;
++ rq->sense_len = 0;
+ }
++
++ rq->end_io_data = &wait;
++ blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);
++ wait_for_completion(&wait);
++
++ if (rq->errors)
++ err = -EIO;
++
++ return err;
+}
+
-+static struct scsi_host_template ninja32_sht = {
-+ .module = THIS_MODULE,
-+ .name = DRV_NAME,
-+ .ioctl = ata_scsi_ioctl,
-+ .queuecommand = ata_scsi_queuecmd,
-+ .can_queue = ATA_DEF_QUEUE,
-+ .this_id = ATA_SHT_THIS_ID,
-+ .sg_tablesize = LIBATA_MAX_PRD,
-+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
-+ .emulated = ATA_SHT_EMULATED,
-+ .use_clustering = ATA_SHT_USE_CLUSTERING,
-+ .proc_name = DRV_NAME,
-+ .dma_boundary = ATA_DMA_BOUNDARY,
-+ .slave_configure = ata_scsi_slave_config,
-+ .slave_destroy = ata_scsi_slave_destroy,
-+ .bios_param = ata_std_bios_param,
-+};
++EXPORT_SYMBOL(blk_execute_rq);
+diff --git a/block/blk-ioc.c b/block/blk-ioc.c
+new file mode 100644
+index 0000000..6d16755
+--- /dev/null
++++ b/block/blk-ioc.c
+@@ -0,0 +1,194 @@
++/*
++ * Functions related to io context handling
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/init.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
+
-+static struct ata_port_operations ninja32_port_ops = {
-+ .set_piomode = ninja32_set_piomode,
-+ .mode_filter = ata_pci_default_filter,
++#include "blk.h"
+
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ninja32_dev_select,
++/*
++ * For io context allocations
++ */
++static struct kmem_cache *iocontext_cachep;
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++static void cfq_dtor(struct io_context *ioc)
++{
++ struct cfq_io_context *cic[1];
++ int r;
+
-+ .bmdma_setup = ata_bmdma_setup,
-+ .bmdma_start = ata_bmdma_start,
-+ .bmdma_stop = ata_bmdma_stop,
-+ .bmdma_status = ata_bmdma_status,
++ /*
++ * We don't have a specific key to lookup with, so use the gang
++ * lookup to just retrieve the first item stored. The cfq exit
++ * function will iterate the full tree, so any member will do.
++ */
++ r = radix_tree_gang_lookup(&ioc->radix_root, (void **) cic, 0, 1);
++ if (r > 0)
++ cic[0]->dtor(ioc);
++}
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = ata_qc_issue_prot,
++/*
++ * IO Context helper functions. put_io_context() returns 1 if there are no
++ * more users of this io context, 0 otherwise.
++ */
++int put_io_context(struct io_context *ioc)
++{
++ if (ioc == NULL)
++ return 1;
+
-+ .data_xfer = ata_data_xfer,
++ BUG_ON(atomic_read(&ioc->refcount) == 0);
+
-+ .irq_handler = ata_interrupt,
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++ if (atomic_dec_and_test(&ioc->refcount)) {
++ rcu_read_lock();
++ if (ioc->aic && ioc->aic->dtor)
++ ioc->aic->dtor(ioc->aic);
++ rcu_read_unlock();
++ cfq_dtor(ioc);
+
-+ .port_start = ata_sff_port_start,
-+};
++ kmem_cache_free(iocontext_cachep, ioc);
++ return 1;
++ }
++ return 0;
++}
++EXPORT_SYMBOL(put_io_context);
+
-+static int ninja32_init_one(struct pci_dev *dev, const struct pci_device_id *id)
++static void cfq_exit(struct io_context *ioc)
+{
-+ struct ata_host *host;
-+ struct ata_port *ap;
-+ void __iomem *base;
-+ int rc;
++ struct cfq_io_context *cic[1];
++ int r;
+
-+ host = ata_host_alloc(&dev->dev, 1);
-+ if (!host)
-+ return -ENOMEM;
-+ ap = host->ports[0];
++ rcu_read_lock();
++ /*
++ * See comment for cfq_dtor()
++ */
++ r = radix_tree_gang_lookup(&ioc->radix_root, (void **) cic, 0, 1);
++ rcu_read_unlock();
+
-+ /* Set up the PCI device */
-+ rc = pcim_enable_device(dev);
-+ if (rc)
-+ return rc;
-+ rc = pcim_iomap_regions(dev, 1 << 0, DRV_NAME);
-+ if (rc == -EBUSY)
-+ pcim_pin_device(dev);
-+ if (rc)
-+ return rc;
++ if (r > 0)
++ cic[0]->exit(ioc);
++}
+
-+ host->iomap = pcim_iomap_table(dev);
-+ rc = pci_set_dma_mask(dev, ATA_DMA_MASK);
-+ if (rc)
-+ return rc;
-+ rc = pci_set_consistent_dma_mask(dev, ATA_DMA_MASK);
-+ if (rc)
-+ return rc;
-+ pci_set_master(dev);
++/* Called by the exiting task */
++void exit_io_context(void)
++{
++ struct io_context *ioc;
+
-+ /* Set up the register mappings */
-+ base = host->iomap[0];
-+ if (!base)
-+ return -ENOMEM;
-+ ap->ops = &ninja32_port_ops;
-+ ap->pio_mask = 0x1F;
-+ ap->flags |= ATA_FLAG_SLAVE_POSS;
++ task_lock(current);
++ ioc = current->io_context;
++ current->io_context = NULL;
++ task_unlock(current);
+
-+ ap->ioaddr.cmd_addr = base + 0x10;
-+ ap->ioaddr.ctl_addr = base + 0x1E;
-+ ap->ioaddr.altstatus_addr = base + 0x1E;
-+ ap->ioaddr.bmdma_addr = base;
-+ ata_std_ports(&ap->ioaddr);
++ if (atomic_dec_and_test(&ioc->nr_tasks)) {
++ if (ioc->aic && ioc->aic->exit)
++ ioc->aic->exit(ioc->aic);
++ cfq_exit(ioc);
+
-+ iowrite8(0x05, base + 0x01); /* Enable interrupt lines */
-+ iowrite8(0xB3, base + 0x02); /* Burst, ?? setup */
-+ iowrite8(0x00, base + 0x04); /* WAIT0 ? */
-+ /* FIXME: Should we disable them at remove ? */
-+ return ata_host_activate(host, dev->irq, ata_interrupt,
-+ IRQF_SHARED, &ninja32_sht);
++ put_io_context(ioc);
++ }
+}
+
-+static const struct pci_device_id ninja32[] = {
-+ { 0x1145, 0xf021, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
-+ { 0x1145, 0xf024, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
-+ { },
-+};
++struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
++{
++ struct io_context *ret;
+
-+static struct pci_driver ninja32_pci_driver = {
-+ .name = DRV_NAME,
-+ .id_table = ninja32,
-+ .probe = ninja32_init_one,
-+ .remove = ata_pci_remove_one
-+};
++ ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
++ if (ret) {
++ atomic_set(&ret->refcount, 1);
++ atomic_set(&ret->nr_tasks, 1);
++ spin_lock_init(&ret->lock);
++ ret->ioprio_changed = 0;
++ ret->ioprio = 0;
++ ret->last_waited = jiffies; /* doesn't matter... */
++ ret->nr_batch_requests = 0; /* because this is 0 */
++ ret->aic = NULL;
++ INIT_RADIX_TREE(&ret->radix_root, GFP_ATOMIC | __GFP_HIGH);
++ ret->ioc_data = NULL;
++ }
+
-+static int __init ninja32_init(void)
++ return ret;
++}
++
++/*
++ * If the current task has no IO context then create one and initialise it.
++ * Otherwise, return its existing IO context.
++ *
++ * This returned IO context doesn't have a specifically elevated refcount,
++ * but since the current task itself holds a reference, the context can be
++ * used in general code, so long as it stays within `current` context.
++ */
++struct io_context *current_io_context(gfp_t gfp_flags, int node)
+{
-+ return pci_register_driver(&ninja32_pci_driver);
++ struct task_struct *tsk = current;
++ struct io_context *ret;
++
++ ret = tsk->io_context;
++ if (likely(ret))
++ return ret;
++
++ ret = alloc_io_context(gfp_flags, node);
++ if (ret) {
++ /* make sure set_task_ioprio() sees the settings above */
++ smp_wmb();
++ tsk->io_context = ret;
++ }
++
++ return ret;
+}
+
-+static void __exit ninja32_exit(void)
++/*
++ * If the current task has no IO context then create one and initialise it.
++ * If it does have a context, take a ref on it.
++ *
++ * This is always called in the context of the task which submitted the I/O.
++ */
++struct io_context *get_io_context(gfp_t gfp_flags, int node)
+{
-+ pci_unregister_driver(&ninja32_pci_driver);
++ struct io_context *ret = NULL;
++
++ /*
++ * Check for unlikely race with exiting task. ioc ref count is
++ * zero when ioc is being detached.
++ */
++ do {
++ ret = current_io_context(gfp_flags, node);
++ if (unlikely(!ret))
++ break;
++ } while (!atomic_inc_not_zero(&ret->refcount));
++
++ return ret;
+}
++EXPORT_SYMBOL(get_io_context);
+
-+MODULE_AUTHOR("Alan Cox");
-+MODULE_DESCRIPTION("low-level driver for Ninja32 ATA");
-+MODULE_LICENSE("GPL");
-+MODULE_DEVICE_TABLE(pci, ninja32);
-+MODULE_VERSION(DRV_VERSION);
++void copy_io_context(struct io_context **pdst, struct io_context **psrc)
++{
++ struct io_context *src = *psrc;
++ struct io_context *dst = *pdst;
+
-+module_init(ninja32_init);
-+module_exit(ninja32_exit);
-diff --git a/drivers/ata/pata_pcmcia.c b/drivers/ata/pata_pcmcia.c
-index fd36099..3e7f6a9 100644
---- a/drivers/ata/pata_pcmcia.c
-+++ b/drivers/ata/pata_pcmcia.c
-@@ -42,7 +42,7 @@
-
-
- #define DRV_NAME "pata_pcmcia"
--#define DRV_VERSION "0.3.2"
-+#define DRV_VERSION "0.3.3"
-
- /*
- * Private data structure to glue stuff together
-@@ -86,6 +86,47 @@ static int pcmcia_set_mode(struct ata_link *link, struct ata_device **r_failed_d
- return ata_do_set_mode(link, r_failed_dev);
- }
-
-+/**
-+ * pcmcia_set_mode_8bit - PCMCIA specific mode setup
-+ * @link: link
-+ * @r_failed_dev: Return pointer for failed device
-+ *
-+ * For the simple emulated 8bit stuff the less we do the better.
++ if (src) {
++ BUG_ON(atomic_read(&src->refcount) == 0);
++ atomic_inc(&src->refcount);
++ put_io_context(dst);
++ *pdst = src;
++ }
++}
++EXPORT_SYMBOL(copy_io_context);
++
++void swap_io_context(struct io_context **ioc1, struct io_context **ioc2)
++{
++ struct io_context *temp;
++ temp = *ioc1;
++ *ioc1 = *ioc2;
++ *ioc2 = temp;
++}
++EXPORT_SYMBOL(swap_io_context);
++
++int __init blk_ioc_init(void)
++{
++ iocontext_cachep = kmem_cache_create("blkdev_ioc",
++ sizeof(struct io_context), 0, SLAB_PANIC, NULL);
++ return 0;
++}
++subsys_initcall(blk_ioc_init);
+diff --git a/block/blk-map.c b/block/blk-map.c
+new file mode 100644
+index 0000000..916cfc9
+--- /dev/null
++++ b/block/blk-map.c
+@@ -0,0 +1,264 @@
++/*
++ * Functions related to mapping data to requests
+ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
+
-+static int pcmcia_set_mode_8bit(struct ata_link *link,
-+ struct ata_device **r_failed_dev)
++#include "blk.h"
++
++int blk_rq_append_bio(struct request_queue *q, struct request *rq,
++ struct bio *bio)
+{
++ if (!rq->bio)
++ blk_rq_bio_prep(q, rq, bio);
++ else if (!ll_back_merge_fn(q, rq, bio))
++ return -EINVAL;
++ else {
++ rq->biotail->bi_next = bio;
++ rq->biotail = bio;
++
++ rq->data_len += bio->bi_size;
++ }
+ return 0;
+}
++EXPORT_SYMBOL(blk_rq_append_bio);
++
++static int __blk_rq_unmap_user(struct bio *bio)
++{
++ int ret = 0;
++
++ if (bio) {
++ if (bio_flagged(bio, BIO_USER_MAPPED))
++ bio_unmap_user(bio);
++ else
++ ret = bio_uncopy_user(bio);
++ }
++
++ return ret;
++}
++
++static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
++ void __user *ubuf, unsigned int len)
++{
++ unsigned long uaddr;
++ struct bio *bio, *orig_bio;
++ int reading, ret;
++
++ reading = rq_data_dir(rq) == READ;
++
++ /*
++ * if alignment requirement is satisfied, map in user pages for
++ * direct dma. else, set up kernel bounce buffers
++ */
++ uaddr = (unsigned long) ubuf;
++ if (!(uaddr & queue_dma_alignment(q)) && !(len & queue_dma_alignment(q)))
++ bio = bio_map_user(q, NULL, uaddr, len, reading);
++ else
++ bio = bio_copy_user(q, uaddr, len, reading);
++
++ if (IS_ERR(bio))
++ return PTR_ERR(bio);
++
++ orig_bio = bio;
++ blk_queue_bounce(q, &bio);
++
++ /*
++ * We link the bounce buffer in and could have to traverse it
++ * later so we have to get a ref to prevent it from being freed
++ */
++ bio_get(bio);
++
++ ret = blk_rq_append_bio(q, rq, bio);
++ if (!ret)
++ return bio->bi_size;
++
++	/* if it was bounced we must call the end io function */
++ bio_endio(bio, 0);
++ __blk_rq_unmap_user(orig_bio);
++ bio_put(bio);
++ return ret;
++}
+
+/**
-+ * ata_data_xfer_8bit - Transfer data by 8bit PIO
-+ * @dev: device to target
-+ * @buf: data buffer
-+ * @buflen: buffer length
-+ * @rw: read/write
++ * blk_rq_map_user - map user data to a request, for REQ_BLOCK_PC usage
++ * @q: request queue where request should be inserted
++ * @rq: request structure to fill
++ * @ubuf: the user buffer
++ * @len: length of user data
+ *
-+ * Transfer data from/to the device data register by 8 bit PIO.
++ * Description:
++ * Data will be mapped directly for zero copy io, if possible. Otherwise
++ * a kernel bounce buffer is used.
+ *
-+ * LOCKING:
-+ * Inherited from caller.
++ * A matching blk_rq_unmap_user() must be issued at the end of io, while
++ * still in process context.
++ *
++ * Note: The mapped bio may need to be bounced through blk_queue_bounce()
++ * before being submitted to the device, as pages mapped may be out of
++ * reach. It's the caller's responsibility to make sure this happens. The
++ * original bio must be passed back in to blk_rq_unmap_user() for proper
++ * unmapping.
+ */
-+
-+static unsigned int ata_data_xfer_8bit(struct ata_device *dev,
-+ unsigned char *buf, unsigned int buflen, int rw)
++int blk_rq_map_user(struct request_queue *q, struct request *rq,
++ void __user *ubuf, unsigned long len)
+{
-+ struct ata_port *ap = dev->link->ap;
++ unsigned long bytes_read = 0;
++ struct bio *bio = NULL;
++ int ret;
+
-+ if (rw == READ)
-+ ioread8_rep(ap->ioaddr.data_addr, buf, buflen);
-+ else
-+ iowrite8_rep(ap->ioaddr.data_addr, buf, buflen);
++ if (len > (q->max_hw_sectors << 9))
++ return -EINVAL;
++ if (!len || !ubuf)
++ return -EINVAL;
+
-+ return buflen;
++ while (bytes_read != len) {
++ unsigned long map_len, end, start;
++
++ map_len = min_t(unsigned long, len - bytes_read, BIO_MAX_SIZE);
++ end = ((unsigned long)ubuf + map_len + PAGE_SIZE - 1)
++ >> PAGE_SHIFT;
++ start = (unsigned long)ubuf >> PAGE_SHIFT;
++
++ /*
++ * A bad offset could cause us to require BIO_MAX_PAGES + 1
++ * pages. If this happens we just lower the requested
++ * mapping len by a page so that we can fit
++ */
++ if (end - start > BIO_MAX_PAGES)
++ map_len -= PAGE_SIZE;
++
++ ret = __blk_rq_map_user(q, rq, ubuf, map_len);
++ if (ret < 0)
++ goto unmap_rq;
++ if (!bio)
++ bio = rq->bio;
++ bytes_read += ret;
++ ubuf += ret;
++ }
++
++ rq->buffer = rq->data = NULL;
++ return 0;
++unmap_rq:
++ blk_rq_unmap_user(bio);
++ return ret;
+}
+
++EXPORT_SYMBOL(blk_rq_map_user);
+
- static struct scsi_host_template pcmcia_sht = {
- .module = THIS_MODULE,
- .name = DRV_NAME,
-@@ -129,6 +170,31 @@ static struct ata_port_operations pcmcia_port_ops = {
- .port_start = ata_sff_port_start,
- };
-
-+static struct ata_port_operations pcmcia_8bit_port_ops = {
-+ .set_mode = pcmcia_set_mode_8bit,
-+ .tf_load = ata_tf_load,
-+ .tf_read = ata_tf_read,
-+ .check_status = ata_check_status,
-+ .exec_command = ata_exec_command,
-+ .dev_select = ata_std_dev_select,
++/**
++ * blk_rq_map_user_iov - map user data to a request, for REQ_BLOCK_PC usage
++ * @q: request queue where request should be inserted
++ * @rq: request to map data to
++ * @iov: pointer to the iovec
++ * @iov_count: number of elements in the iovec
++ * @len: I/O byte count
++ *
++ * Description:
++ * Data will be mapped directly for zero copy io, if possible. Otherwise
++ * a kernel bounce buffer is used.
++ *
++ * A matching blk_rq_unmap_user() must be issued at the end of io, while
++ * still in process context.
++ *
++ * Note: The mapped bio may need to be bounced through blk_queue_bounce()
++ * before being submitted to the device, as pages mapped may be out of
++ * reach. It's the caller's responsibility to make sure this happens. The
++ * original bio must be passed back in to blk_rq_unmap_user() for proper
++ * unmapping.
++ */
++int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
++ struct sg_iovec *iov, int iov_count, unsigned int len)
++{
++ struct bio *bio;
+
-+ .freeze = ata_bmdma_freeze,
-+ .thaw = ata_bmdma_thaw,
-+ .error_handler = ata_bmdma_error_handler,
-+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
-+ .cable_detect = ata_cable_40wire,
++ if (!iov || iov_count <= 0)
++ return -EINVAL;
+
-+ .qc_prep = ata_qc_prep,
-+ .qc_issue = ata_qc_issue_prot,
++ /* we don't allow misaligned data like bio_map_user() does. If the
++ * user is using sg, they're expected to know the alignment constraints
++ * and respect them accordingly */
++	bio = bio_map_user_iov(q, NULL, iov, iov_count, rq_data_dir(rq) == READ);
++ if (IS_ERR(bio))
++ return PTR_ERR(bio);
+
-+ .data_xfer = ata_data_xfer_8bit,
++ if (bio->bi_size != len) {
++ bio_endio(bio, 0);
++ bio_unmap_user(bio);
++ return -EINVAL;
++ }
+
-+ .irq_clear = ata_bmdma_irq_clear,
-+ .irq_on = ata_irq_on,
++ bio_get(bio);
++ blk_rq_bio_prep(q, rq, bio);
++ rq->buffer = rq->data = NULL;
++ return 0;
++}
+
-+ .port_start = ata_sff_port_start,
-+};
++EXPORT_SYMBOL(blk_rq_map_user_iov);
+
- #define CS_CHECK(fn, ret) \
- do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
-
-@@ -153,9 +219,12 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
- cistpl_cftable_entry_t dflt;
- } *stk = NULL;
- cistpl_cftable_entry_t *cfg;
-- int pass, last_ret = 0, last_fn = 0, is_kme = 0, ret = -ENOMEM;
-+ int pass, last_ret = 0, last_fn = 0, is_kme = 0, ret = -ENOMEM, p;
- unsigned long io_base, ctl_base;
- void __iomem *io_addr, *ctl_addr;
-+ int n_ports = 1;
++/**
++ * blk_rq_unmap_user - unmap a request with user data
++ * @bio: start of bio list
++ *
++ * Description:
++ * Unmap a rq previously mapped by blk_rq_map_user(). The caller must
++ * supply the original rq->bio from the blk_rq_map_user() return, since
++ * the io completion may have changed rq->bio.
++ */
++int blk_rq_unmap_user(struct bio *bio)
++{
++ struct bio *mapped_bio;
++ int ret = 0, ret2;
+
-+ struct ata_port_operations *ops = &pcmcia_port_ops;
-
- info = kzalloc(sizeof(*info), GFP_KERNEL);
- if (info == NULL)
-@@ -282,27 +351,32 @@ next_entry:
- /* FIXME: Could be more ports at base + 0x10 but we only deal with
- one right now */
- if (pdev->io.NumPorts1 >= 0x20)
-- printk(KERN_WARNING DRV_NAME ": second channel not yet supported.\n");
-+ n_ports = 2;
-
-+ if (pdev->manf_id == 0x0097 && pdev->card_id == 0x1620)
-+ ops = &pcmcia_8bit_port_ops;
- /*
- * Having done the PCMCIA plumbing the ATA side is relatively
- * sane.
- */
- ret = -ENOMEM;
-- host = ata_host_alloc(&pdev->dev, 1);
-+ host = ata_host_alloc(&pdev->dev, n_ports);
- if (!host)
- goto failed;
-- ap = host->ports[0];
-
-- ap->ops = &pcmcia_port_ops;
-- ap->pio_mask = 1; /* ISA so PIO 0 cycles */
-- ap->flags |= ATA_FLAG_SLAVE_POSS;
-- ap->ioaddr.cmd_addr = io_addr;
-- ap->ioaddr.altstatus_addr = ctl_addr;
-- ap->ioaddr.ctl_addr = ctl_addr;
-- ata_std_ports(&ap->ioaddr);
-+ for (p = 0; p < n_ports; p++) {
-+ ap = host->ports[p];
-
-- ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io_base, ctl_base);
-+ ap->ops = ops;
-+ ap->pio_mask = 1; /* ISA so PIO 0 cycles */
-+ ap->flags |= ATA_FLAG_SLAVE_POSS;
-+ ap->ioaddr.cmd_addr = io_addr + 0x10 * p;
-+ ap->ioaddr.altstatus_addr = ctl_addr + 0x10 * p;
-+ ap->ioaddr.ctl_addr = ctl_addr + 0x10 * p;
-+ ata_std_ports(&ap->ioaddr);
++ while (bio) {
++ mapped_bio = bio;
++ if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
++ mapped_bio = bio->bi_private;
+
-+ ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io_base, ctl_base);
++ ret2 = __blk_rq_unmap_user(mapped_bio);
++ if (ret2 && !ret)
++ ret = ret2;
++
++ mapped_bio = bio;
++ bio = bio->bi_next;
++ bio_put(mapped_bio);
+ }
-
- /* activate */
- ret = ata_host_activate(host, pdev->irq.AssignedIRQ, ata_interrupt,
-@@ -360,6 +434,7 @@ static struct pcmcia_device_id pcmcia_devices[] = {
- PCMCIA_DEVICE_MANF_CARD(0x0032, 0x0704),
- PCMCIA_DEVICE_MANF_CARD(0x0032, 0x2904),
- PCMCIA_DEVICE_MANF_CARD(0x0045, 0x0401), /* SanDisk CFA */
-+ PCMCIA_DEVICE_MANF_CARD(0x0097, 0x1620), /* TI emulated */
- PCMCIA_DEVICE_MANF_CARD(0x0098, 0x0000), /* Toshiba */
- PCMCIA_DEVICE_MANF_CARD(0x00a4, 0x002d),
- PCMCIA_DEVICE_MANF_CARD(0x00ce, 0x0000), /* Samsung */
-diff --git a/drivers/ata/pata_pdc2027x.c b/drivers/ata/pata_pdc2027x.c
-index 2622577..028af5d 100644
---- a/drivers/ata/pata_pdc2027x.c
-+++ b/drivers/ata/pata_pdc2027x.c
-@@ -348,7 +348,7 @@ static unsigned long pdc2027x_mode_filter(struct ata_device *adev, unsigned long
- ata_id_c_string(pair->id, model_num, ATA_ID_PROD,
- ATA_ID_PROD_LEN + 1);
- /* If the master is a maxtor in UDMA6 then the slave should not use UDMA 6 */
-- if (strstr(model_num, "Maxtor") == 0 && pair->dma_mode == XFER_UDMA_6)
-+ if (strstr(model_num, "Maxtor") == NULL && pair->dma_mode == XFER_UDMA_6)
- mask &= ~ (1 << (6 + ATA_SHIFT_UDMA));
-
- return ata_pci_default_filter(adev, mask);
-diff --git a/drivers/ata/pata_pdc202xx_old.c b/drivers/ata/pata_pdc202xx_old.c
-index 6c9689b..3ed8667 100644
---- a/drivers/ata/pata_pdc202xx_old.c
-+++ b/drivers/ata/pata_pdc202xx_old.c
-@@ -168,8 +168,7 @@ static void pdc2026x_bmdma_start(struct ata_queued_cmd *qc)
- pdc202xx_set_dmamode(ap, qc->dev);
-
- /* Cases the state machine will not complete correctly without help */
-- if ((tf->flags & ATA_TFLAG_LBA48) || tf->protocol == ATA_PROT_ATAPI_DMA)
-- {
-+ if ((tf->flags & ATA_TFLAG_LBA48) || tf->protocol == ATAPI_PROT_DMA) {
- len = qc->nbytes / 2;
-
- if (tf->flags & ATA_TFLAG_WRITE)
-@@ -208,7 +207,7 @@ static void pdc2026x_bmdma_stop(struct ata_queued_cmd *qc)
- void __iomem *atapi_reg = master + 0x20 + (4 * ap->port_no);
-
- /* Cases the state machine will not complete correctly */
-- if (tf->protocol == ATA_PROT_ATAPI_DMA || ( tf->flags & ATA_TFLAG_LBA48)) {
-+ if (tf->protocol == ATAPI_PROT_DMA || (tf->flags & ATA_TFLAG_LBA48)) {
- iowrite32(0, atapi_reg);
- iowrite8(ioread8(clock) & ~sel66, clock);
- }
-diff --git a/drivers/ata/pata_qdi.c b/drivers/ata/pata_qdi.c
-index a4c0e50..9f308ed 100644
---- a/drivers/ata/pata_qdi.c
-+++ b/drivers/ata/pata_qdi.c
-@@ -124,29 +124,33 @@ static unsigned int qdi_qc_issue_prot(struct ata_queued_cmd *qc)
- return ata_qc_issue_prot(qc);
- }
-
--static void qdi_data_xfer(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
-+static unsigned int qdi_data_xfer(struct ata_device *dev, unsigned char *buf,
-+ unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-- int slop = buflen & 3;
-+ if (ata_id_has_dword_io(dev->id)) {
-+ struct ata_port *ap = dev->link->ap;
-+ int slop = buflen & 3;
-
-- if (ata_id_has_dword_io(adev->id)) {
-- if (write_data)
-- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-- else
-+ if (rw == READ)
- ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-+ else
-+ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-
- if (unlikely(slop)) {
-- __le32 pad = 0;
-- if (write_data) {
-- memcpy(&pad, buf + buflen - slop, slop);
-- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
-- } else {
-+ u32 pad;
-+ if (rw == READ) {
- pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
- memcpy(buf + buflen - slop, &pad, slop);
-+ } else {
-+ memcpy(&pad, buf + buflen - slop, slop);
-+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
- }
-+ buflen += 4 - slop;
- }
- } else
-- ata_data_xfer(adev, buf, buflen, write_data);
-+ buflen = ata_data_xfer(dev, buf, buflen, rw);
+
-+ return buflen;
- }
-
- static struct scsi_host_template qdi_sht = {
-diff --git a/drivers/ata/pata_scc.c b/drivers/ata/pata_scc.c
-index ea2ef9f..55055b2 100644
---- a/drivers/ata/pata_scc.c
-+++ b/drivers/ata/pata_scc.c
-@@ -768,45 +768,47 @@ static u8 scc_bmdma_status (struct ata_port *ap)
-
- /**
- * scc_data_xfer - Transfer data by PIO
-- * @adev: device for this I/O
-+ * @dev: device for this I/O
- * @buf: data buffer
- * @buflen: buffer length
-- * @write_data: read/write
-+ * @rw: read/write
- *
- * Note: Original code is ata_data_xfer().
- */
-
--static void scc_data_xfer (struct ata_device *adev, unsigned char *buf,
-- unsigned int buflen, int write_data)
-+static unsigned int scc_data_xfer (struct ata_device *dev, unsigned char *buf,
-+ unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-+ struct ata_port *ap = dev->link->ap;
- unsigned int words = buflen >> 1;
- unsigned int i;
- u16 *buf16 = (u16 *) buf;
- void __iomem *mmio = ap->ioaddr.data_addr;
-
- /* Transfer multiple of 2 bytes */
-- if (write_data) {
-- for (i = 0; i < words; i++)
-- out_be32(mmio, cpu_to_le16(buf16[i]));
-- } else {
-+ if (rw == READ)
- for (i = 0; i < words; i++)
- buf16[i] = le16_to_cpu(in_be32(mmio));
-- }
-+ else
-+ for (i = 0; i < words; i++)
-+ out_be32(mmio, cpu_to_le16(buf16[i]));
-
- /* Transfer trailing 1 byte, if any. */
- if (unlikely(buflen & 0x01)) {
- u16 align_buf[1] = { 0 };
- unsigned char *trailing_buf = buf + buflen - 1;
-
-- if (write_data) {
-- memcpy(align_buf, trailing_buf, 1);
-- out_be32(mmio, cpu_to_le16(align_buf[0]));
-- } else {
-+ if (rw == READ) {
- align_buf[0] = le16_to_cpu(in_be32(mmio));
- memcpy(trailing_buf, align_buf, 1);
-+ } else {
-+ memcpy(align_buf, trailing_buf, 1);
-+ out_be32(mmio, cpu_to_le16(align_buf[0]));
- }
-+ words++;
- }
++ return ret;
++}
+
-+ return words << 1;
- }
-
- /**
-diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c
-index 8bed888..9c523fb 100644
---- a/drivers/ata/pata_serverworks.c
-+++ b/drivers/ata/pata_serverworks.c
-@@ -41,7 +41,7 @@
- #include <linux/libata.h>
-
- #define DRV_NAME "pata_serverworks"
--#define DRV_VERSION "0.4.2"
-+#define DRV_VERSION "0.4.3"
-
- #define SVWKS_CSB5_REVISION_NEW 0x92 /* min PCI_REVISION_ID for UDMA5 (A2.0) */
- #define SVWKS_CSB6_REVISION 0xa0 /* min PCI_REVISION_ID for UDMA4 (A1.0) */
-@@ -102,7 +102,7 @@ static int osb4_cable(struct ata_port *ap) {
- }
-
- /**
-- * csb4_cable - CSB5/6 cable detect
-+ * csb_cable - CSB5/6 cable detect
- * @ap: ATA port to check
- *
- * Serverworks default arrangement is to use the drive side detection
-@@ -110,7 +110,7 @@ static int osb4_cable(struct ata_port *ap) {
- */
-
- static int csb_cable(struct ata_port *ap) {
-- return ATA_CBL_PATA80;
-+ return ATA_CBL_PATA_UNK;
- }
-
- struct sv_cable_table {
-@@ -231,7 +231,6 @@ static unsigned long serverworks_csb_filter(struct ata_device *adev, unsigned lo
- return ata_pci_default_filter(adev, mask);
- }
-
--
- /**
- * serverworks_set_piomode - set initial PIO mode data
- * @ap: ATA interface
-@@ -243,7 +242,7 @@ static unsigned long serverworks_csb_filter(struct ata_device *adev, unsigned lo
- static void serverworks_set_piomode(struct ata_port *ap, struct ata_device *adev)
- {
- static const u8 pio_mode[] = { 0x5d, 0x47, 0x34, 0x22, 0x20 };
-- int offset = 1 + (2 * ap->port_no) - adev->devno;
-+ int offset = 1 + 2 * ap->port_no - adev->devno;
- int devbits = (2 * ap->port_no + adev->devno) * 4;
- u16 csb5_pio;
- struct pci_dev *pdev = to_pci_dev(ap->host->dev);
-diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
-index 453d72b..39627ab 100644
---- a/drivers/ata/pata_via.c
-+++ b/drivers/ata/pata_via.c
-@@ -185,7 +185,8 @@ static int via_cable_detect(struct ata_port *ap) {
- if (ata66 & (0x10100000 >> (16 * ap->port_no)))
- return ATA_CBL_PATA80;
- /* Check with ACPI so we can spot BIOS reported SATA bridges */
-- if (ata_acpi_cbl_80wire(ap))
-+ if (ata_acpi_init_gtm(ap) &&
-+ ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
- return ATA_CBL_PATA80;
- return ATA_CBL_PATA40;
- }
-diff --git a/drivers/ata/pata_winbond.c b/drivers/ata/pata_winbond.c
-index 7116a9e..99c92ed 100644
---- a/drivers/ata/pata_winbond.c
-+++ b/drivers/ata/pata_winbond.c
-@@ -92,29 +92,33 @@ static void winbond_set_piomode(struct ata_port *ap, struct ata_device *adev)
- }
-
-
--static void winbond_data_xfer(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
-+static unsigned int winbond_data_xfer(struct ata_device *dev,
-+ unsigned char *buf, unsigned int buflen, int rw)
- {
-- struct ata_port *ap = adev->link->ap;
-+ struct ata_port *ap = dev->link->ap;
- int slop = buflen & 3;
-
-- if (ata_id_has_dword_io(adev->id)) {
-- if (write_data)
-- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-- else
-+ if (ata_id_has_dword_io(dev->id)) {
-+ if (rw == READ)
- ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-+ else
-+ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
-
- if (unlikely(slop)) {
-- __le32 pad = 0;
-- if (write_data) {
-- memcpy(&pad, buf + buflen - slop, slop);
-- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
-- } else {
-+ u32 pad;
-+ if (rw == READ) {
- pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
- memcpy(buf + buflen - slop, &pad, slop);
-+ } else {
-+ memcpy(&pad, buf + buflen - slop, slop);
-+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
- }
-+ buflen += 4 - slop;
- }
- } else
-- ata_data_xfer(adev, buf, buflen, write_data);
-+ buflen = ata_data_xfer(dev, buf, buflen, rw);
++EXPORT_SYMBOL(blk_rq_unmap_user);
+
-+ return buflen;
- }
-
- static struct scsi_host_template winbond_sht = {
-@@ -191,7 +195,7 @@ static __init int winbond_init_one(unsigned long port)
- reg = winbond_readcfg(port, 0x81);
-
- if (!(reg & 0x03)) /* Disabled */
-- return 0;
-+ return -ENODEV;
-
- for (i = 0; i < 2 ; i ++) {
- unsigned long cmd_port = 0x1F0 - (0x80 * i);
-diff --git a/drivers/ata/pdc_adma.c b/drivers/ata/pdc_adma.c
-index bd4c2a3..8e1b7e9 100644
---- a/drivers/ata/pdc_adma.c
-+++ b/drivers/ata/pdc_adma.c
-@@ -321,8 +321,9 @@ static int adma_fill_sg(struct ata_queued_cmd *qc)
- u8 *buf = pp->pkt, *last_buf = NULL;
- int i = (2 + buf[3]) * 8;
- u8 pFLAGS = pORD | ((qc->tf.flags & ATA_TFLAG_WRITE) ? pDIRO : 0);
-+ unsigned int si;
-
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u32 addr;
- u32 len;
-
-@@ -455,7 +456,7 @@ static unsigned int adma_qc_issue(struct ata_queued_cmd *qc)
- adma_packet_start(qc);
- return 0;
-
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- BUG();
- break;
-
-diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c
-index d015b4a..922d7b2 100644
---- a/drivers/ata/sata_fsl.c
-+++ b/drivers/ata/sata_fsl.c
-@@ -333,13 +333,14 @@ static unsigned int sata_fsl_fill_sg(struct ata_queued_cmd *qc, void *cmd_desc,
- struct prde *prd_ptr_to_indirect_ext = NULL;
- unsigned indirect_ext_segment_sz = 0;
- dma_addr_t indirect_ext_segment_paddr;
-+ unsigned int si;
-
- VPRINTK("SATA FSL : cd = 0x%x, prd = 0x%x\n", cmd_desc, prd);
-
- indirect_ext_segment_paddr = cmd_desc_paddr +
- SATA_FSL_CMD_DESC_OFFSET_TO_PRDT + SATA_FSL_MAX_PRD_DIRECT * 16;
-
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- dma_addr_t sg_addr = sg_dma_address(sg);
- u32 sg_len = sg_dma_len(sg);
-
-@@ -417,7 +418,7 @@ static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
- }
-
- /* setup "ACMD - atapi command" in cmd. desc. if this is ATAPI cmd */
-- if (is_atapi_taskfile(&qc->tf)) {
-+ if (ata_is_atapi(qc->tf.protocol)) {
- desc_info |= ATAPI_CMD;
- memset((void *)&cd->acmd, 0, 32);
- memcpy((void *)&cd->acmd, qc->cdb, qc->dev->cdb_len);
-diff --git a/drivers/ata/sata_inic162x.c b/drivers/ata/sata_inic162x.c
-index 323c087..96e614a 100644
---- a/drivers/ata/sata_inic162x.c
-+++ b/drivers/ata/sata_inic162x.c
-@@ -585,7 +585,7 @@ static struct ata_port_operations inic_port_ops = {
- };
-
- static struct ata_port_info inic_port_info = {
-- /* For some reason, ATA_PROT_ATAPI is broken on this
-+ /* For some reason, ATAPI_PROT_PIO is broken on this
- * controller, and no, PIO_POLLING does't fix it. It somehow
- * manages to report the wrong ireason and ignoring ireason
- * results in machine lock up. Tell libata to always prefer
-diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
-index 37b850a..7e72463 100644
---- a/drivers/ata/sata_mv.c
-+++ b/drivers/ata/sata_mv.c
-@@ -1136,9 +1136,10 @@ static void mv_fill_sg(struct ata_queued_cmd *qc)
- struct mv_port_priv *pp = qc->ap->private_data;
- struct scatterlist *sg;
- struct mv_sg *mv_sg, *last_sg = NULL;
-+ unsigned int si;
-
- mv_sg = pp->sg_tbl;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- dma_addr_t addr = sg_dma_address(sg);
- u32 sg_len = sg_dma_len(sg);
-
-diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c
-index ed5dc7c..a0f98fd 100644
---- a/drivers/ata/sata_nv.c
-+++ b/drivers/ata/sata_nv.c
-@@ -1336,21 +1336,18 @@ static void nv_adma_fill_aprd(struct ata_queued_cmd *qc,
- static void nv_adma_fill_sg(struct ata_queued_cmd *qc, struct nv_adma_cpb *cpb)
- {
- struct nv_adma_port_priv *pp = qc->ap->private_data;
-- unsigned int idx;
- struct nv_adma_prd *aprd;
- struct scatterlist *sg;
-+ unsigned int si;
-
- VPRINTK("ENTER\n");
-
-- idx = 0;
--
-- ata_for_each_sg(sg, qc) {
-- aprd = (idx < 5) ? &cpb->aprd[idx] :
-- &pp->aprd[NV_ADMA_SGTBL_LEN * qc->tag + (idx-5)];
-- nv_adma_fill_aprd(qc, sg, idx, aprd);
-- idx++;
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
-+ aprd = (si < 5) ? &cpb->aprd[si] :
-+ &pp->aprd[NV_ADMA_SGTBL_LEN * qc->tag + (si-5)];
-+ nv_adma_fill_aprd(qc, sg, si, aprd);
- }
-- if (idx > 5)
-+ if (si > 5)
- cpb->next_aprd = cpu_to_le64(((u64)(pp->aprd_dma + NV_ADMA_SGTBL_SZ * qc->tag)));
- else
- cpb->next_aprd = cpu_to_le64(0);
-@@ -1995,17 +1992,14 @@ static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct scatterlist *sg;
-- unsigned int idx;
- struct nv_swncq_port_priv *pp = ap->private_data;
- struct ata_prd *prd;
--
-- WARN_ON(qc->__sg == NULL);
-- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
-+ unsigned int si, idx;
-
- prd = pp->prd + ATA_MAX_PRD * qc->tag;
-
- idx = 0;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u32 addr, offset;
- u32 sg_len, len;
-
-@@ -2027,8 +2021,7 @@ static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)
- }
- }
-
-- if (idx)
-- prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
-+ prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
- }
-
- static unsigned int nv_swncq_issue_atacmd(struct ata_port *ap,
-diff --git a/drivers/ata/sata_promise.c b/drivers/ata/sata_promise.c
-index 7914def..a07d319 100644
---- a/drivers/ata/sata_promise.c
-+++ b/drivers/ata/sata_promise.c
-@@ -450,19 +450,19 @@ static void pdc_atapi_pkt(struct ata_queued_cmd *qc)
- struct pdc_port_priv *pp = ap->private_data;
- u8 *buf = pp->pkt;
- u32 *buf32 = (u32 *) buf;
-- unsigned int dev_sel, feature, nbytes;
-+ unsigned int dev_sel, feature;
-
- /* set control bits (byte 0), zero delay seq id (byte 3),
- * and seq id (byte 2)
- */
- switch (qc->tf.protocol) {
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- if (!(qc->tf.flags & ATA_TFLAG_WRITE))
- buf32[0] = cpu_to_le32(PDC_PKT_READ);
- else
- buf32[0] = 0;
- break;
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_NODATA:
- buf32[0] = cpu_to_le32(PDC_PKT_NODATA);
- break;
- default:
-@@ -473,45 +473,37 @@ static void pdc_atapi_pkt(struct ata_queued_cmd *qc)
- buf32[2] = 0; /* no next-packet */
-
- /* select drive */
-- if (sata_scr_valid(&ap->link)) {
-+ if (sata_scr_valid(&ap->link))
- dev_sel = PDC_DEVICE_SATA;
-- } else {
-- dev_sel = ATA_DEVICE_OBS;
-- if (qc->dev->devno != 0)
-- dev_sel |= ATA_DEV1;
-- }
-+ else
-+ dev_sel = qc->tf.device;
++/**
++ * blk_rq_map_kern - map kernel data to a request, for REQ_BLOCK_PC usage
++ * @q: request queue where request should be inserted
++ * @rq: request to fill
++ * @kbuf: the kernel buffer
++ * @len: length of user data
++ * @gfp_mask: memory allocation flags
++ */
++int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
++ unsigned int len, gfp_t gfp_mask)
++{
++ struct bio *bio;
+
- buf[12] = (1 << 5) | ATA_REG_DEVICE;
- buf[13] = dev_sel;
- buf[14] = (1 << 5) | ATA_REG_DEVICE | PDC_PKT_CLEAR_BSY;
- buf[15] = dev_sel; /* once more, waiting for BSY to clear */
-
- buf[16] = (1 << 5) | ATA_REG_NSECT;
-- buf[17] = 0x00;
-+ buf[17] = qc->tf.nsect;
- buf[18] = (1 << 5) | ATA_REG_LBAL;
-- buf[19] = 0x00;
-+ buf[19] = qc->tf.lbal;
-
- /* set feature and byte counter registers */
-- if (qc->tf.protocol != ATA_PROT_ATAPI_DMA) {
-+ if (qc->tf.protocol != ATAPI_PROT_DMA)
- feature = PDC_FEATURE_ATAPI_PIO;
-- /* set byte counter register to real transfer byte count */
-- nbytes = qc->nbytes;
-- if (nbytes > 0xffff)
-- nbytes = 0xffff;
-- } else {
-+ else
- feature = PDC_FEATURE_ATAPI_DMA;
-- /* set byte counter register to 0 */
-- nbytes = 0;
-- }
++ if (len > (q->max_hw_sectors << 9))
++ return -EINVAL;
++ if (!len || !kbuf)
++ return -EINVAL;
+
- buf[20] = (1 << 5) | ATA_REG_FEATURE;
- buf[21] = feature;
- buf[22] = (1 << 5) | ATA_REG_BYTEL;
-- buf[23] = nbytes & 0xFF;
-+ buf[23] = qc->tf.lbam;
- buf[24] = (1 << 5) | ATA_REG_BYTEH;
-- buf[25] = (nbytes >> 8) & 0xFF;
-+ buf[25] = qc->tf.lbah;
-
- /* send ATAPI packet command 0xA0 */
- buf[26] = (1 << 5) | ATA_REG_CMD;
-- buf[27] = ATA_CMD_PACKET;
-+ buf[27] = qc->tf.command;
-
- /* select drive and check DRQ */
- buf[28] = (1 << 5) | ATA_REG_DEVICE | PDC_PKT_WAIT_DRDY;
-@@ -541,17 +533,15 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
- {
- struct ata_port *ap = qc->ap;
- struct scatterlist *sg;
-- unsigned int idx;
- const u32 SG_COUNT_ASIC_BUG = 41*4;
-+ unsigned int si, idx;
-+ u32 len;
-
- if (!(qc->flags & ATA_QCFLAG_DMAMAP))
- return;
-
-- WARN_ON(qc->__sg == NULL);
-- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
--
- idx = 0;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u32 addr, offset;
- u32 sg_len, len;
-
-@@ -578,29 +568,27 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
- }
- }
-
-- if (idx) {
-- u32 len = le32_to_cpu(ap->prd[idx - 1].flags_len);
-+ len = le32_to_cpu(ap->prd[idx - 1].flags_len);
-
-- if (len > SG_COUNT_ASIC_BUG) {
-- u32 addr;
-+ if (len > SG_COUNT_ASIC_BUG) {
-+ u32 addr;
-
-- VPRINTK("Splitting last PRD.\n");
-+ VPRINTK("Splitting last PRD.\n");
-
-- addr = le32_to_cpu(ap->prd[idx - 1].addr);
-- ap->prd[idx - 1].flags_len = cpu_to_le32(len - SG_COUNT_ASIC_BUG);
-- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx - 1, addr, SG_COUNT_ASIC_BUG);
-+ addr = le32_to_cpu(ap->prd[idx - 1].addr);
-+ ap->prd[idx - 1].flags_len = cpu_to_le32(len - SG_COUNT_ASIC_BUG);
-+ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx - 1, addr, SG_COUNT_ASIC_BUG);
-
-- addr = addr + len - SG_COUNT_ASIC_BUG;
-- len = SG_COUNT_ASIC_BUG;
-- ap->prd[idx].addr = cpu_to_le32(addr);
-- ap->prd[idx].flags_len = cpu_to_le32(len);
-- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
-+ addr = addr + len - SG_COUNT_ASIC_BUG;
-+ len = SG_COUNT_ASIC_BUG;
-+ ap->prd[idx].addr = cpu_to_le32(addr);
-+ ap->prd[idx].flags_len = cpu_to_le32(len);
-+ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
-
-- idx++;
-- }
--
-- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
-+ idx++;
- }
++ bio = bio_map_kern(q, kbuf, len, gfp_mask);
++ if (IS_ERR(bio))
++ return PTR_ERR(bio);
+
-+ ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
- }
-
- static void pdc_qc_prep(struct ata_queued_cmd *qc)
-@@ -627,14 +615,14 @@ static void pdc_qc_prep(struct ata_queued_cmd *qc)
- pdc_pkt_footer(&qc->tf, pp->pkt, i);
- break;
-
-- case ATA_PROT_ATAPI:
-+ case ATAPI_PROT_PIO:
- pdc_fill_sg(qc);
- break;
-
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- pdc_fill_sg(qc);
- /*FALLTHROUGH*/
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_NODATA:
- pdc_atapi_pkt(qc);
- break;
-
-@@ -754,8 +742,8 @@ static inline unsigned int pdc_host_intr(struct ata_port *ap,
- switch (qc->tf.protocol) {
- case ATA_PROT_DMA:
- case ATA_PROT_NODATA:
-- case ATA_PROT_ATAPI_DMA:
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_DMA:
-+ case ATAPI_PROT_NODATA:
- qc->err_mask |= ac_err_mask(ata_wait_idle(ap));
- ata_qc_complete(qc);
- handled = 1;
-@@ -900,7 +888,7 @@ static inline void pdc_packet_start(struct ata_queued_cmd *qc)
- static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
- {
- switch (qc->tf.protocol) {
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_NODATA:
- if (qc->dev->flags & ATA_DFLAG_CDB_INTR)
- break;
- /*FALLTHROUGH*/
-@@ -908,7 +896,7 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
- if (qc->tf.flags & ATA_TFLAG_POLLING)
- break;
- /*FALLTHROUGH*/
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- case ATA_PROT_DMA:
- pdc_packet_start(qc);
- return 0;
-@@ -922,16 +910,14 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
-
- static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
- {
-- WARN_ON(tf->protocol == ATA_PROT_DMA ||
-- tf->protocol == ATA_PROT_ATAPI_DMA);
-+ WARN_ON(tf->protocol == ATA_PROT_DMA || tf->protocol == ATAPI_PROT_DMA);
- ata_tf_load(ap, tf);
- }
-
- static void pdc_exec_command_mmio(struct ata_port *ap,
- const struct ata_taskfile *tf)
- {
-- WARN_ON(tf->protocol == ATA_PROT_DMA ||
-- tf->protocol == ATA_PROT_ATAPI_DMA);
-+ WARN_ON(tf->protocol == ATA_PROT_DMA || tf->protocol == ATAPI_PROT_DMA);
- ata_exec_command(ap, tf);
- }
-
-diff --git a/drivers/ata/sata_promise.h b/drivers/ata/sata_promise.h
-index 6ee5e19..00d6000 100644
---- a/drivers/ata/sata_promise.h
-+++ b/drivers/ata/sata_promise.h
-@@ -46,7 +46,7 @@ static inline unsigned int pdc_pkt_header(struct ata_taskfile *tf,
- unsigned int devno, u8 *buf)
- {
- u8 dev_reg;
-- u32 *buf32 = (u32 *) buf;
-+ __le32 *buf32 = (__le32 *) buf;
-
- /* set control bits (byte 0), zero delay seq id (byte 3),
- * and seq id (byte 2)
-diff --git a/drivers/ata/sata_qstor.c b/drivers/ata/sata_qstor.c
-index c68b241..91cc12c 100644
---- a/drivers/ata/sata_qstor.c
-+++ b/drivers/ata/sata_qstor.c
-@@ -287,14 +287,10 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
- struct scatterlist *sg;
- struct ata_port *ap = qc->ap;
- struct qs_port_priv *pp = ap->private_data;
-- unsigned int nelem;
- u8 *prd = pp->pkt + QS_CPB_BYTES;
-+ unsigned int si;
-
-- WARN_ON(qc->__sg == NULL);
-- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
--
-- nelem = 0;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- u64 addr;
- u32 len;
-
-@@ -306,12 +302,11 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
- *(__le32 *)prd = cpu_to_le32(len);
- prd += sizeof(u64);
-
-- VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", nelem,
-+ VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", si,
- (unsigned long long)addr, len);
-- nelem++;
- }
-
-- return nelem;
-+ return si;
- }
-
- static void qs_qc_prep(struct ata_queued_cmd *qc)
-@@ -376,7 +371,7 @@ static unsigned int qs_qc_issue(struct ata_queued_cmd *qc)
- qs_packet_start(qc);
- return 0;
-
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- BUG();
- break;
-
-diff --git a/drivers/ata/sata_sil.c b/drivers/ata/sata_sil.c
-index f5119bf..0b8191b 100644
---- a/drivers/ata/sata_sil.c
-+++ b/drivers/ata/sata_sil.c
-@@ -416,15 +416,14 @@ static void sil_host_intr(struct ata_port *ap, u32 bmdma2)
- */
-
- /* Check the ATA_DFLAG_CDB_INTR flag is enough here.
-- * The flag was turned on only for atapi devices.
-- * No need to check is_atapi_taskfile(&qc->tf) again.
-+ * The flag was turned on only for atapi devices. No
-+ * need to check ata_is_atapi(qc->tf.protocol) again.
- */
- if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
- goto err_hsm;
- break;
- case HSM_ST_LAST:
-- if (qc->tf.protocol == ATA_PROT_DMA ||
-- qc->tf.protocol == ATA_PROT_ATAPI_DMA) {
-+ if (ata_is_dma(qc->tf.protocol)) {
- /* clear DMA-Start bit */
- ap->ops->bmdma_stop(qc);
-
-@@ -451,8 +450,7 @@ static void sil_host_intr(struct ata_port *ap, u32 bmdma2)
- /* kick HSM in the ass */
- ata_hsm_move(ap, qc, status, 0);
-
-- if (unlikely(qc->err_mask) && (qc->tf.protocol == ATA_PROT_DMA ||
-- qc->tf.protocol == ATA_PROT_ATAPI_DMA))
-+ if (unlikely(qc->err_mask) && ata_is_dma(qc->tf.protocol))
- ata_ehi_push_desc(ehi, "BMDMA2 stat 0x%x", bmdma2);
-
- return;
-diff --git a/drivers/ata/sata_sil24.c b/drivers/ata/sata_sil24.c
-index 864c1c1..b4b1f91 100644
---- a/drivers/ata/sata_sil24.c
-+++ b/drivers/ata/sata_sil24.c
-@@ -813,8 +813,9 @@ static inline void sil24_fill_sg(struct ata_queued_cmd *qc,
- {
- struct scatterlist *sg;
- struct sil24_sge *last_sge = NULL;
-+ unsigned int si;
-
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- sge->addr = cpu_to_le64(sg_dma_address(sg));
- sge->cnt = cpu_to_le32(sg_dma_len(sg));
- sge->flags = 0;
-@@ -823,8 +824,7 @@ static inline void sil24_fill_sg(struct ata_queued_cmd *qc,
- sge++;
- }
-
-- if (likely(last_sge))
-- last_sge->flags = cpu_to_le32(SGE_TRM);
-+ last_sge->flags = cpu_to_le32(SGE_TRM);
- }
-
- static int sil24_qc_defer(struct ata_queued_cmd *qc)
-@@ -852,9 +852,7 @@ static int sil24_qc_defer(struct ata_queued_cmd *qc)
- * set.
- *
- */
-- int is_excl = (prot == ATA_PROT_ATAPI ||
-- prot == ATA_PROT_ATAPI_NODATA ||
-- prot == ATA_PROT_ATAPI_DMA ||
-+ int is_excl = (ata_is_atapi(prot) ||
- (qc->flags & ATA_QCFLAG_RESULT_TF));
-
- if (unlikely(ap->excl_link)) {
-@@ -885,35 +883,21 @@ static void sil24_qc_prep(struct ata_queued_cmd *qc)
-
- cb = &pp->cmd_block[sil24_tag(qc->tag)];
-
-- switch (qc->tf.protocol) {
-- case ATA_PROT_PIO:
-- case ATA_PROT_DMA:
-- case ATA_PROT_NCQ:
-- case ATA_PROT_NODATA:
-+ if (!ata_is_atapi(qc->tf.protocol)) {
- prb = &cb->ata.prb;
- sge = cb->ata.sge;
-- break;
--
-- case ATA_PROT_ATAPI:
-- case ATA_PROT_ATAPI_DMA:
-- case ATA_PROT_ATAPI_NODATA:
-+ } else {
- prb = &cb->atapi.prb;
- sge = cb->atapi.sge;
- memset(cb->atapi.cdb, 0, 32);
- memcpy(cb->atapi.cdb, qc->cdb, qc->dev->cdb_len);
-
-- if (qc->tf.protocol != ATA_PROT_ATAPI_NODATA) {
-+ if (ata_is_data(qc->tf.protocol)) {
- if (qc->tf.flags & ATA_TFLAG_WRITE)
- ctrl = PRB_CTRL_PACKET_WRITE;
- else
- ctrl = PRB_CTRL_PACKET_READ;
- }
-- break;
--
-- default:
-- prb = NULL; /* shut up, gcc */
-- sge = NULL;
-- BUG();
- }
-
- prb->ctrl = cpu_to_le16(ctrl);
-diff --git a/drivers/ata/sata_sx4.c b/drivers/ata/sata_sx4.c
-index 4d85718..e3d56bc 100644
---- a/drivers/ata/sata_sx4.c
-+++ b/drivers/ata/sata_sx4.c
-@@ -334,7 +334,7 @@ static inline void pdc20621_ata_sg(struct ata_taskfile *tf, u8 *buf,
- {
- u32 addr;
- unsigned int dw = PDC_DIMM_APKT_PRD >> 2;
-- u32 *buf32 = (u32 *) buf;
-+ __le32 *buf32 = (__le32 *) buf;
-
- /* output ATA packet S/G table */
- addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
-@@ -356,7 +356,7 @@ static inline void pdc20621_host_sg(struct ata_taskfile *tf, u8 *buf,
- {
- u32 addr;
- unsigned int dw = PDC_DIMM_HPKT_PRD >> 2;
-- u32 *buf32 = (u32 *) buf;
-+ __le32 *buf32 = (__le32 *) buf;
-
- /* output Host DMA packet S/G table */
- addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
-@@ -377,7 +377,7 @@ static inline unsigned int pdc20621_ata_pkt(struct ata_taskfile *tf,
- unsigned int portno)
- {
- unsigned int i, dw;
-- u32 *buf32 = (u32 *) buf;
-+ __le32 *buf32 = (__le32 *) buf;
- u8 dev_reg;
-
- unsigned int dimm_sg = PDC_20621_DIMM_BASE +
-@@ -429,7 +429,8 @@ static inline void pdc20621_host_pkt(struct ata_taskfile *tf, u8 *buf,
- unsigned int portno)
- {
- unsigned int dw;
-- u32 tmp, *buf32 = (u32 *) buf;
-+ u32 tmp;
-+ __le32 *buf32 = (__le32 *) buf;
-
- unsigned int host_sg = PDC_20621_DIMM_BASE +
- (PDC_DIMM_WINDOW_STEP * portno) +
-@@ -473,7 +474,7 @@ static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
- void __iomem *mmio = ap->host->iomap[PDC_MMIO_BAR];
- void __iomem *dimm_mmio = ap->host->iomap[PDC_DIMM_BAR];
- unsigned int portno = ap->port_no;
-- unsigned int i, idx, total_len = 0, sgt_len;
-+ unsigned int i, si, idx, total_len = 0, sgt_len;
- u32 *buf = (u32 *) &pp->dimm_buf[PDC_DIMM_HEADER_SZ];
-
- WARN_ON(!(qc->flags & ATA_QCFLAG_DMAMAP));
-@@ -487,7 +488,7 @@ static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
- * Build S/G table
- */
- idx = 0;
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- buf[idx++] = cpu_to_le32(sg_dma_address(sg));
- buf[idx++] = cpu_to_le32(sg_dma_len(sg));
- total_len += sg_dma_len(sg);
-@@ -700,7 +701,7 @@ static unsigned int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc)
- pdc20621_packet_start(qc);
- return 0;
-
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- BUG();
- break;
-
-diff --git a/drivers/atm/ambassador.c b/drivers/atm/ambassador.c
-index b34b382..7b44a59 100644
---- a/drivers/atm/ambassador.c
-+++ b/drivers/atm/ambassador.c
-@@ -2163,7 +2163,6 @@ static int __devinit amb_init (amb_dev * dev)
- static void setup_dev(amb_dev *dev, struct pci_dev *pci_dev)
- {
- unsigned char pool;
-- memset (dev, 0, sizeof(amb_dev));
-
- // set up known dev items straight away
- dev->pci_dev = pci_dev;
-@@ -2253,7 +2252,7 @@ static int __devinit amb_probe(struct pci_dev *pci_dev, const struct pci_device_
- goto out_disable;
- }
-
-- dev = kmalloc (sizeof(amb_dev), GFP_KERNEL);
-+ dev = kzalloc(sizeof(amb_dev), GFP_KERNEL);
- if (!dev) {
- PRINTK (KERN_ERR, "out of memory!");
- err = -ENOMEM;
-diff --git a/drivers/atm/he.c b/drivers/atm/he.c
-index 3b64a99..2e3395b 100644
---- a/drivers/atm/he.c
-+++ b/drivers/atm/he.c
-@@ -1,5 +1,3 @@
--/* $Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $ */
--
- /*
-
- he.c
-@@ -99,10 +97,6 @@
- #define HPRINTK(fmt,args...) do { } while (0)
- #endif /* HE_DEBUG */
-
--/* version definition */
--
--static char *version = "$Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $";
--
- /* declarations */
-
- static int he_open(struct atm_vcc *vcc);
-@@ -366,7 +360,7 @@ he_init_one(struct pci_dev *pci_dev, const struct pci_device_id *pci_ent)
- struct he_dev *he_dev = NULL;
- int err = 0;
-
-- printk(KERN_INFO "he: %s\n", version);
-+ printk(KERN_INFO "ATM he driver\n");
-
- if (pci_enable_device(pci_dev))
- return -EIO;
-@@ -1643,6 +1637,8 @@ he_stop(struct he_dev *he_dev)
-
- if (he_dev->rbpl_base) {
- #ifdef USE_RBPL_POOL
-+ int i;
++ if (rq_data_dir(rq) == WRITE)
++ bio->bi_rw |= (1 << BIO_RW);
+
- for (i = 0; i < CONFIG_RBPL_SIZE; ++i) {
- void *cpuaddr = he_dev->rbpl_virt[i].virt;
- dma_addr_t dma_handle = he_dev->rbpl_base[i].phys;
-@@ -1665,6 +1661,8 @@ he_stop(struct he_dev *he_dev)
- #ifdef USE_RBPS
- if (he_dev->rbps_base) {
- #ifdef USE_RBPS_POOL
-+ int i;
++ blk_rq_bio_prep(q, rq, bio);
++ blk_queue_bounce(q, &rq->bio);
++ rq->buffer = rq->data = NULL;
++ return 0;
++}
+
- for (i = 0; i < CONFIG_RBPS_SIZE; ++i) {
- void *cpuaddr = he_dev->rbps_virt[i].virt;
- dma_addr_t dma_handle = he_dev->rbps_base[i].phys;
-@@ -2933,7 +2931,7 @@ he_proc_read(struct atm_dev *dev, loff_t *pos, char *page)
-
- left = *pos;
- if (!left--)
-- return sprintf(page, "%s\n", version);
-+ return sprintf(page, "ATM he driver\n");
-
- if (!left--)
- return sprintf(page, "%s%s\n\n",
-diff --git a/drivers/base/Makefile b/drivers/base/Makefile
-index b39ea3f..63e09c0 100644
---- a/drivers/base/Makefile
-+++ b/drivers/base/Makefile
-@@ -11,6 +11,9 @@ obj-$(CONFIG_FW_LOADER) += firmware_class.o
- obj-$(CONFIG_NUMA) += node.o
- obj-$(CONFIG_MEMORY_HOTPLUG_SPARSE) += memory.o
- obj-$(CONFIG_SMP) += topology.o
-+ifeq ($(CONFIG_SYSFS),y)
-+obj-$(CONFIG_MODULES) += module.o
-+endif
- obj-$(CONFIG_SYS_HYPERVISOR) += hypervisor.o
-
- ifeq ($(CONFIG_DEBUG_DRIVER),y)
-diff --git a/drivers/base/attribute_container.c b/drivers/base/attribute_container.c
-index 7370d7c..3b43e8a 100644
---- a/drivers/base/attribute_container.c
-+++ b/drivers/base/attribute_container.c
-@@ -61,7 +61,7 @@ attribute_container_classdev_to_container(struct class_device *classdev)
- }
- EXPORT_SYMBOL_GPL(attribute_container_classdev_to_container);
-
--static struct list_head attribute_container_list;
-+static LIST_HEAD(attribute_container_list);
-
- static DEFINE_MUTEX(attribute_container_mutex);
-
-@@ -320,9 +320,14 @@ attribute_container_add_attrs(struct class_device *classdev)
- struct class_device_attribute **attrs = cont->attrs;
- int i, error;
-
-- if (!attrs)
-+ BUG_ON(attrs && cont->grp);
++EXPORT_SYMBOL(blk_rq_map_kern);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+new file mode 100644
+index 0000000..5023f0b
+--- /dev/null
++++ b/block/blk-merge.c
+@@ -0,0 +1,485 @@
++/*
++ * Functions related to segment and merge handling
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/scatterlist.h>
+
-+ if (!attrs && !cont->grp)
- return 0;
-
-+ if (cont->grp)
-+ return sysfs_create_group(&classdev->kobj, cont->grp);
++#include "blk.h"
+
- for (i = 0; attrs[i]; i++) {
- error = class_device_create_file(classdev, attrs[i]);
- if (error)
-@@ -378,9 +383,14 @@ attribute_container_remove_attrs(struct class_device *classdev)
- struct class_device_attribute **attrs = cont->attrs;
- int i;
-
-- if (!attrs)
-+ if (!attrs && !cont->grp)
- return;
-
-+ if (cont->grp) {
-+ sysfs_remove_group(&classdev->kobj, cont->grp);
-+ return ;
++void blk_recalc_rq_sectors(struct request *rq, int nsect)
++{
++ if (blk_fs_request(rq)) {
++ rq->hard_sector += nsect;
++ rq->hard_nr_sectors -= nsect;
++
++ /*
++ * Move the I/O submission pointers ahead if required.
++ */
++ if ((rq->nr_sectors >= rq->hard_nr_sectors) &&
++ (rq->sector <= rq->hard_sector)) {
++ rq->sector = rq->hard_sector;
++ rq->nr_sectors = rq->hard_nr_sectors;
++ rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
++ rq->current_nr_sectors = rq->hard_cur_sectors;
++ rq->buffer = bio_data(rq->bio);
++ }
++
++ /*
++ * if total number of sectors is less than the first segment
++ * size, something has gone terribly wrong
++ */
++ if (rq->nr_sectors < rq->current_nr_sectors) {
++ printk("blk: request botched\n");
++ rq->nr_sectors = rq->current_nr_sectors;
++ }
+ }
++}
+
- for (i = 0; attrs[i]; i++)
- class_device_remove_file(classdev, attrs[i]);
- }
-@@ -429,10 +439,3 @@ attribute_container_find_class_device(struct attribute_container *cont,
- return cdev;
- }
- EXPORT_SYMBOL_GPL(attribute_container_find_class_device);
--
--int __init
--attribute_container_init(void)
--{
-- INIT_LIST_HEAD(&attribute_container_list);
-- return 0;
--}
-diff --git a/drivers/base/base.h b/drivers/base/base.h
-index 10b2fb6..c044414 100644
---- a/drivers/base/base.h
-+++ b/drivers/base/base.h
-@@ -1,6 +1,42 @@
-
--/* initialisation functions */
-+/**
-+ * struct bus_type_private - structure to hold the private to the driver core portions of the bus_type structure.
-+ *
-+ * @subsys - the struct kset that defines this bus. This is the main kobject
-+ * @drivers_kset - the list of drivers associated with this bus
-+ * @devices_kset - the list of devices associated with this bus
-+ * @klist_devices - the klist to iterate over the @devices_kset
-+ * @klist_drivers - the klist to iterate over the @drivers_kset
-+ * @bus_notifier - the bus notifier list for anything that cares about things
-+ * on this bus.
-+ * @bus - pointer back to the struct bus_type that this structure is associated
-+ * with.
-+ *
-+ * This structure is the one that is the actual kobject allowing struct
-+ * bus_type to be statically allocated safely. Nothing outside of the driver
-+ * core should ever touch these fields.
-+ */
-+struct bus_type_private {
-+ struct kset subsys;
-+ struct kset *drivers_kset;
-+ struct kset *devices_kset;
-+ struct klist klist_devices;
-+ struct klist klist_drivers;
-+ struct blocking_notifier_head bus_notifier;
-+ unsigned int drivers_autoprobe:1;
-+ struct bus_type *bus;
-+};
++void blk_recalc_rq_segments(struct request *rq)
++{
++ int nr_phys_segs;
++ int nr_hw_segs;
++ unsigned int phys_size;
++ unsigned int hw_size;
++ struct bio_vec *bv, *bvprv = NULL;
++ int seg_size;
++ int hw_seg_size;
++ int cluster;
++ struct req_iterator iter;
++ int high, highprv = 1;
++ struct request_queue *q = rq->q;
+
-+struct driver_private {
-+ struct kobject kobj;
-+ struct klist klist_devices;
-+ struct klist_node knode_bus;
-+ struct module_kobject *mkobj;
-+ struct device_driver *driver;
-+};
-+#define to_driver(obj) container_of(obj, struct driver_private, kobj)
-
-+/* initialisation functions */
- extern int devices_init(void);
- extern int buses_init(void);
- extern int classes_init(void);
-@@ -13,17 +49,16 @@ static inline int hypervisor_init(void) { return 0; }
- extern int platform_bus_init(void);
- extern int system_bus_init(void);
- extern int cpu_dev_init(void);
--extern int attribute_container_init(void);
-
--extern int bus_add_device(struct device * dev);
--extern void bus_attach_device(struct device * dev);
--extern void bus_remove_device(struct device * dev);
-+extern int bus_add_device(struct device *dev);
-+extern void bus_attach_device(struct device *dev);
-+extern void bus_remove_device(struct device *dev);
-
--extern int bus_add_driver(struct device_driver *);
--extern void bus_remove_driver(struct device_driver *);
-+extern int bus_add_driver(struct device_driver *drv);
-+extern void bus_remove_driver(struct device_driver *drv);
-
--extern void driver_detach(struct device_driver * drv);
--extern int driver_probe_device(struct device_driver *, struct device *);
-+extern void driver_detach(struct device_driver *drv);
-+extern int driver_probe_device(struct device_driver *drv, struct device *dev);
-
- extern void sysdev_shutdown(void);
- extern int sysdev_suspend(pm_message_t state);
-@@ -44,4 +79,13 @@ extern char *make_class_name(const char *name, struct kobject *kobj);
-
- extern int devres_release_all(struct device *dev);
-
--extern struct kset devices_subsys;
-+extern struct kset *devices_kset;
++ if (!rq->bio)
++ return;
+
-+#if defined(CONFIG_MODULES) && defined(CONFIG_SYSFS)
-+extern void module_add_driver(struct module *mod, struct device_driver *drv);
-+extern void module_remove_driver(struct device_driver *drv);
-+#else
-+static inline void module_add_driver(struct module *mod,
-+ struct device_driver *drv) { }
-+static inline void module_remove_driver(struct device_driver *drv) { }
-+#endif
-diff --git a/drivers/base/bus.c b/drivers/base/bus.c
-index 9a19b07..f484495 100644
---- a/drivers/base/bus.c
-+++ b/drivers/base/bus.c
-@@ -3,6 +3,8 @@
- *
- * Copyright (c) 2002-3 Patrick Mochel
- * Copyright (c) 2002-3 Open Source Development Labs
-+ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
-+ * Copyright (c) 2007 Novell Inc.
- *
- * This file is released under the GPLv2
- *
-@@ -17,14 +19,13 @@
- #include "power/power.h"
-
- #define to_bus_attr(_attr) container_of(_attr, struct bus_attribute, attr)
--#define to_bus(obj) container_of(obj, struct bus_type, subsys.kobj)
-+#define to_bus(obj) container_of(obj, struct bus_type_private, subsys.kobj)
-
- /*
- * sysfs bindings for drivers
- */
-
- #define to_drv_attr(_attr) container_of(_attr, struct driver_attribute, attr)
--#define to_driver(obj) container_of(obj, struct device_driver, kobj)
-
-
- static int __must_check bus_rescan_devices_helper(struct device *dev,
-@@ -32,37 +33,40 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
-
- static struct bus_type *bus_get(struct bus_type *bus)
- {
-- return bus ? container_of(kset_get(&bus->subsys),
-- struct bus_type, subsys) : NULL;
-+ if (bus) {
-+ kset_get(&bus->p->subsys);
-+ return bus;
++ cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
++ hw_seg_size = seg_size = 0;
++ phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
++ rq_for_each_segment(bv, rq, iter) {
++ /*
++ * the trick here is making sure that a high page is never
++ * considered part of another segment, since that might
++ * change with the bounce page.
++ */
++ high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
++ if (high || highprv)
++ goto new_hw_segment;
++ if (cluster) {
++ if (seg_size + bv->bv_len > q->max_segment_size)
++ goto new_segment;
++ if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
++ goto new_segment;
++ if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
++ goto new_segment;
++ if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
++ goto new_hw_segment;
++
++ seg_size += bv->bv_len;
++ hw_seg_size += bv->bv_len;
++ bvprv = bv;
++ continue;
++ }
++new_segment:
++ if (BIOVEC_VIRT_MERGEABLE(bvprv, bv) &&
++ !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
++ hw_seg_size += bv->bv_len;
++ else {
++new_hw_segment:
++ if (nr_hw_segs == 1 &&
++ hw_seg_size > rq->bio->bi_hw_front_size)
++ rq->bio->bi_hw_front_size = hw_seg_size;
++ hw_seg_size = BIOVEC_VIRT_START_SIZE(bv) + bv->bv_len;
++ nr_hw_segs++;
++ }
++
++ nr_phys_segs++;
++ bvprv = bv;
++ seg_size = bv->bv_len;
++ highprv = high;
+ }
-+ return NULL;
- }
-
- static void bus_put(struct bus_type *bus)
- {
-- kset_put(&bus->subsys);
-+ if (bus)
-+ kset_put(&bus->p->subsys);
- }
-
--static ssize_t
--drv_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
-+static ssize_t drv_attr_show(struct kobject *kobj, struct attribute *attr,
-+ char *buf)
- {
-- struct driver_attribute * drv_attr = to_drv_attr(attr);
-- struct device_driver * drv = to_driver(kobj);
-+ struct driver_attribute *drv_attr = to_drv_attr(attr);
-+ struct driver_private *drv_priv = to_driver(kobj);
- ssize_t ret = -EIO;
-
- if (drv_attr->show)
-- ret = drv_attr->show(drv, buf);
-+ ret = drv_attr->show(drv_priv->driver, buf);
- return ret;
- }
-
--static ssize_t
--drv_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * buf, size_t count)
-+static ssize_t drv_attr_store(struct kobject *kobj, struct attribute *attr,
-+ const char *buf, size_t count)
- {
-- struct driver_attribute * drv_attr = to_drv_attr(attr);
-- struct device_driver * drv = to_driver(kobj);
-+ struct driver_attribute *drv_attr = to_drv_attr(attr);
-+ struct driver_private *drv_priv = to_driver(kobj);
- ssize_t ret = -EIO;
-
- if (drv_attr->store)
-- ret = drv_attr->store(drv, buf, count);
-+ ret = drv_attr->store(drv_priv->driver, buf, count);
- return ret;
- }
-
-@@ -71,22 +75,12 @@ static struct sysfs_ops driver_sysfs_ops = {
- .store = drv_attr_store,
- };
-
--
--static void driver_release(struct kobject * kobj)
-+static void driver_release(struct kobject *kobj)
- {
-- /*
-- * Yes this is an empty release function, it is this way because struct
-- * device is always a static object, not a dynamic one. Yes, this is
-- * not nice and bad, but remember, drivers are code, reference counted
-- * by the module count, not a device, which is really data. And yes,
-- * in the future I do want to have all drivers be created dynamically,
-- * and am working toward that goal, but it will take a bit longer...
-- *
-- * But do not let this example give _anyone_ the idea that they can
-- * create a release function without any code in it at all, to do that
-- * is almost always wrong. If you have any questions about this,
-- * please send an email to <greg at kroah.com>
-- */
-+ struct driver_private *drv_priv = to_driver(kobj);
+
-+ pr_debug("driver: '%s': %s\n", kobject_name(kobj), __FUNCTION__);
-+ kfree(drv_priv);
- }
-
- static struct kobj_type driver_ktype = {
-@@ -94,34 +88,30 @@ static struct kobj_type driver_ktype = {
- .release = driver_release,
- };
-
--
- /*
- * sysfs bindings for buses
- */
--
--
--static ssize_t
--bus_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
-+static ssize_t bus_attr_show(struct kobject *kobj, struct attribute *attr,
-+ char *buf)
- {
-- struct bus_attribute * bus_attr = to_bus_attr(attr);
-- struct bus_type * bus = to_bus(kobj);
-+ struct bus_attribute *bus_attr = to_bus_attr(attr);
-+ struct bus_type_private *bus_priv = to_bus(kobj);
- ssize_t ret = 0;
-
- if (bus_attr->show)
-- ret = bus_attr->show(bus, buf);
-+ ret = bus_attr->show(bus_priv->bus, buf);
- return ret;
- }
-
--static ssize_t
--bus_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * buf, size_t count)
-+static ssize_t bus_attr_store(struct kobject *kobj, struct attribute *attr,
-+ const char *buf, size_t count)
- {
-- struct bus_attribute * bus_attr = to_bus_attr(attr);
-- struct bus_type * bus = to_bus(kobj);
-+ struct bus_attribute *bus_attr = to_bus_attr(attr);
-+ struct bus_type_private *bus_priv = to_bus(kobj);
- ssize_t ret = 0;
-
- if (bus_attr->store)
-- ret = bus_attr->store(bus, buf, count);
-+ ret = bus_attr->store(bus_priv->bus, buf, count);
- return ret;
- }
-
-@@ -130,24 +120,26 @@ static struct sysfs_ops bus_sysfs_ops = {
- .store = bus_attr_store,
- };
-
--int bus_create_file(struct bus_type * bus, struct bus_attribute * attr)
-+int bus_create_file(struct bus_type *bus, struct bus_attribute *attr)
- {
- int error;
- if (bus_get(bus)) {
-- error = sysfs_create_file(&bus->subsys.kobj, &attr->attr);
-+ error = sysfs_create_file(&bus->p->subsys.kobj, &attr->attr);
- bus_put(bus);
- } else
- error = -EINVAL;
- return error;
- }
-+EXPORT_SYMBOL_GPL(bus_create_file);
-
--void bus_remove_file(struct bus_type * bus, struct bus_attribute * attr)
-+void bus_remove_file(struct bus_type *bus, struct bus_attribute *attr)
- {
- if (bus_get(bus)) {
-- sysfs_remove_file(&bus->subsys.kobj, &attr->attr);
-+ sysfs_remove_file(&bus->p->subsys.kobj, &attr->attr);
- bus_put(bus);
- }
- }
-+EXPORT_SYMBOL_GPL(bus_remove_file);
-
- static struct kobj_type bus_ktype = {
- .sysfs_ops = &bus_sysfs_ops,
-@@ -166,7 +158,7 @@ static struct kset_uevent_ops bus_uevent_ops = {
- .filter = bus_uevent_filter,
- };
-
--static decl_subsys(bus, &bus_ktype, &bus_uevent_ops);
-+static struct kset *bus_kset;
-
-
- #ifdef CONFIG_HOTPLUG
-@@ -224,10 +216,13 @@ static ssize_t driver_bind(struct device_driver *drv,
- if (dev->parent)
- up(&dev->parent->sem);
-
-- if (err > 0) /* success */
-+ if (err > 0) {
-+ /* success */
- err = count;
-- else if (err == 0) /* driver didn't accept device */
-+ } else if (err == 0) {
-+ /* driver didn't accept device */
- err = -ENODEV;
++ if (nr_hw_segs == 1 &&
++ hw_seg_size > rq->bio->bi_hw_front_size)
++ rq->bio->bi_hw_front_size = hw_seg_size;
++ if (hw_seg_size > rq->biotail->bi_hw_back_size)
++ rq->biotail->bi_hw_back_size = hw_seg_size;
++ rq->nr_phys_segments = nr_phys_segs;
++ rq->nr_hw_segments = nr_hw_segs;
++}
++
++void blk_recount_segments(struct request_queue *q, struct bio *bio)
++{
++ struct request rq;
++ struct bio *nxt = bio->bi_next;
++ rq.q = q;
++ rq.bio = rq.biotail = bio;
++ bio->bi_next = NULL;
++ blk_recalc_rq_segments(&rq);
++ bio->bi_next = nxt;
++ bio->bi_phys_segments = rq.nr_phys_segments;
++ bio->bi_hw_segments = rq.nr_hw_segments;
++ bio->bi_flags |= (1 << BIO_SEG_VALID);
++}
++EXPORT_SYMBOL(blk_recount_segments);
++
++static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
++ struct bio *nxt)
++{
++ if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
++ return 0;
++
++ if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)))
++ return 0;
++ if (bio->bi_size + nxt->bi_size > q->max_segment_size)
++ return 0;
++
++ /*
++ * bio and nxt are contiguous in memory, check if the queue allows
++ * these two to be merged into one
++ */
++ if (BIO_SEG_BOUNDARY(q, bio, nxt))
++ return 1;
++
++ return 0;
++}
++
++static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
++ struct bio *nxt)
++{
++ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
++ blk_recount_segments(q, bio);
++ if (unlikely(!bio_flagged(nxt, BIO_SEG_VALID)))
++ blk_recount_segments(q, nxt);
++ if (!BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)) ||
++ BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size + nxt->bi_hw_front_size))
++ return 0;
++ if (bio->bi_hw_back_size + nxt->bi_hw_front_size > q->max_segment_size)
++ return 0;
++
++ return 1;
++}
++
++/*
++ * map a request to scatterlist, return number of sg entries setup. Caller
++ * must make sure sg can hold rq->nr_phys_segments entries
++ */
++int blk_rq_map_sg(struct request_queue *q, struct request *rq,
++ struct scatterlist *sglist)
++{
++ struct bio_vec *bvec, *bvprv;
++ struct req_iterator iter;
++ struct scatterlist *sg;
++ int nsegs, cluster;
++
++ nsegs = 0;
++ cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
++
++ /*
++ * for each bio in rq
++ */
++ bvprv = NULL;
++ sg = NULL;
++ rq_for_each_segment(bvec, rq, iter) {
++ int nbytes = bvec->bv_len;
++
++ if (bvprv && cluster) {
++ if (sg->length + nbytes > q->max_segment_size)
++ goto new_segment;
++
++ if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
++ goto new_segment;
++ if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
++ goto new_segment;
++
++ sg->length += nbytes;
++ } else {
++new_segment:
++ if (!sg)
++ sg = sglist;
++ else {
++ /*
++ * If the driver previously mapped a shorter
++ * list, we could see a termination bit
++ * prematurely unless it fully inits the sg
++ * table on each mapping. We KNOW that there
++ * must be more entries here or the driver
++ * would be buggy, so force clear the
++ * termination bit to avoid doing a full
++ * sg_init_table() in drivers for each command.
++ */
++ sg->page_link &= ~0x02;
++ sg = sg_next(sg);
++ }
++
++ sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset);
++ nsegs++;
+ }
- }
- put_device(dev);
- bus_put(bus);
-@@ -237,16 +232,16 @@ static DRIVER_ATTR(bind, S_IWUSR, NULL, driver_bind);
-
- static ssize_t show_drivers_autoprobe(struct bus_type *bus, char *buf)
- {
-- return sprintf(buf, "%d\n", bus->drivers_autoprobe);
-+ return sprintf(buf, "%d\n", bus->p->drivers_autoprobe);
- }
-
- static ssize_t store_drivers_autoprobe(struct bus_type *bus,
- const char *buf, size_t count)
- {
- if (buf[0] == '0')
-- bus->drivers_autoprobe = 0;
-+ bus->p->drivers_autoprobe = 0;
- else
-- bus->drivers_autoprobe = 1;
-+ bus->p->drivers_autoprobe = 1;
- return count;
- }
-
-@@ -264,49 +259,49 @@ static ssize_t store_drivers_probe(struct bus_type *bus,
- }
- #endif
-
--static struct device * next_device(struct klist_iter * i)
-+static struct device *next_device(struct klist_iter *i)
- {
-- struct klist_node * n = klist_next(i);
-+ struct klist_node *n = klist_next(i);
- return n ? container_of(n, struct device, knode_bus) : NULL;
- }
-
- /**
-- * bus_for_each_dev - device iterator.
-- * @bus: bus type.
-- * @start: device to start iterating from.
-- * @data: data for the callback.
-- * @fn: function to be called for each device.
-+ * bus_for_each_dev - device iterator.
-+ * @bus: bus type.
-+ * @start: device to start iterating from.
-+ * @data: data for the callback.
-+ * @fn: function to be called for each device.
- *
-- * Iterate over @bus's list of devices, and call @fn for each,
-- * passing it @data. If @start is not NULL, we use that device to
-- * begin iterating from.
-+ * Iterate over @bus's list of devices, and call @fn for each,
-+ * passing it @data. If @start is not NULL, we use that device to
-+ * begin iterating from.
- *
-- * We check the return of @fn each time. If it returns anything
-- * other than 0, we break out and return that value.
-+ * We check the return of @fn each time. If it returns anything
-+ * other than 0, we break out and return that value.
- *
-- * NOTE: The device that returns a non-zero value is not retained
-- * in any way, nor is its refcount incremented. If the caller needs
-- * to retain this data, it should do, and increment the reference
-- * count in the supplied callback.
-+ * NOTE: The device that returns a non-zero value is not retained
-+ * in any way, nor is its refcount incremented. If the caller needs
-+ * to retain this data, it should do, and increment the reference
-+ * count in the supplied callback.
- */
--
--int bus_for_each_dev(struct bus_type * bus, struct device * start,
-- void * data, int (*fn)(struct device *, void *))
-+int bus_for_each_dev(struct bus_type *bus, struct device *start,
-+ void *data, int (*fn)(struct device *, void *))
- {
- struct klist_iter i;
-- struct device * dev;
-+ struct device *dev;
- int error = 0;
-
- if (!bus)
- return -EINVAL;
-
-- klist_iter_init_node(&bus->klist_devices, &i,
-+ klist_iter_init_node(&bus->p->klist_devices, &i,
- (start ? &start->knode_bus : NULL));
- while ((dev = next_device(&i)) && !error)
- error = fn(dev, data);
- klist_iter_exit(&i);
- return error;
- }
-+EXPORT_SYMBOL_GPL(bus_for_each_dev);
-
- /**
- * bus_find_device - device iterator for locating a particular device.
-@@ -323,9 +318,9 @@ int bus_for_each_dev(struct bus_type * bus, struct device * start,
- * if it does. If the callback returns non-zero, this function will
- * return to the caller and not iterate over any more devices.
- */
--struct device * bus_find_device(struct bus_type *bus,
-- struct device *start, void *data,
-- int (*match)(struct device *, void *))
-+struct device *bus_find_device(struct bus_type *bus,
-+ struct device *start, void *data,
-+ int (*match)(struct device *dev, void *data))
- {
- struct klist_iter i;
- struct device *dev;
-@@ -333,7 +328,7 @@ struct device * bus_find_device(struct bus_type *bus,
- if (!bus)
- return NULL;
-
-- klist_iter_init_node(&bus->klist_devices, &i,
-+ klist_iter_init_node(&bus->p->klist_devices, &i,
- (start ? &start->knode_bus : NULL));
- while ((dev = next_device(&i)))
- if (match(dev, data) && get_device(dev))
-@@ -341,51 +336,57 @@ struct device * bus_find_device(struct bus_type *bus,
- klist_iter_exit(&i);
- return dev;
- }
-+EXPORT_SYMBOL_GPL(bus_find_device);
-
--
--static struct device_driver * next_driver(struct klist_iter * i)
-+static struct device_driver *next_driver(struct klist_iter *i)
- {
-- struct klist_node * n = klist_next(i);
-- return n ? container_of(n, struct device_driver, knode_bus) : NULL;
-+ struct klist_node *n = klist_next(i);
-+ struct driver_private *drv_priv;
++ bvprv = bvec;
++ } /* segments in rq */
+
-+ if (n) {
-+ drv_priv = container_of(n, struct driver_private, knode_bus);
-+ return drv_priv->driver;
++ if (q->dma_drain_size) {
++ sg->page_link &= ~0x02;
++ sg = sg_next(sg);
++ sg_set_page(sg, virt_to_page(q->dma_drain_buffer),
++ q->dma_drain_size,
++ ((unsigned long)q->dma_drain_buffer) &
++ (PAGE_SIZE - 1));
++ nsegs++;
+ }
-+ return NULL;
- }
-
- /**
-- * bus_for_each_drv - driver iterator
-- * @bus: bus we're dealing with.
-- * @start: driver to start iterating on.
-- * @data: data to pass to the callback.
-- * @fn: function to call for each driver.
-+ * bus_for_each_drv - driver iterator
-+ * @bus: bus we're dealing with.
-+ * @start: driver to start iterating on.
-+ * @data: data to pass to the callback.
-+ * @fn: function to call for each driver.
- *
-- * This is nearly identical to the device iterator above.
-- * We iterate over each driver that belongs to @bus, and call
-- * @fn for each. If @fn returns anything but 0, we break out
-- * and return it. If @start is not NULL, we use it as the head
-- * of the list.
-+ * This is nearly identical to the device iterator above.
-+ * We iterate over each driver that belongs to @bus, and call
-+ * @fn for each. If @fn returns anything but 0, we break out
-+ * and return it. If @start is not NULL, we use it as the head
-+ * of the list.
- *
-- * NOTE: we don't return the driver that returns a non-zero
-- * value, nor do we leave the reference count incremented for that
-- * driver. If the caller needs to know that info, it must set it
-- * in the callback. It must also be sure to increment the refcount
-- * so it doesn't disappear before returning to the caller.
-+ * NOTE: we don't return the driver that returns a non-zero
-+ * value, nor do we leave the reference count incremented for that
-+ * driver. If the caller needs to know that info, it must set it
-+ * in the callback. It must also be sure to increment the refcount
-+ * so it doesn't disappear before returning to the caller.
- */
--
--int bus_for_each_drv(struct bus_type * bus, struct device_driver * start,
-- void * data, int (*fn)(struct device_driver *, void *))
-+int bus_for_each_drv(struct bus_type *bus, struct device_driver *start,
-+ void *data, int (*fn)(struct device_driver *, void *))
- {
- struct klist_iter i;
-- struct device_driver * drv;
-+ struct device_driver *drv;
- int error = 0;
-
- if (!bus)
- return -EINVAL;
-
-- klist_iter_init_node(&bus->klist_drivers, &i,
-- start ? &start->knode_bus : NULL);
-+ klist_iter_init_node(&bus->p->klist_drivers, &i,
-+ start ? &start->p->knode_bus : NULL);
- while ((drv = next_driver(&i)) && !error)
- error = fn(drv, data);
- klist_iter_exit(&i);
- return error;
- }
-+EXPORT_SYMBOL_GPL(bus_for_each_drv);
-
- static int device_add_attrs(struct bus_type *bus, struct device *dev)
- {
-@@ -396,7 +397,7 @@ static int device_add_attrs(struct bus_type *bus, struct device *dev)
- return 0;
-
- for (i = 0; attr_name(bus->dev_attrs[i]); i++) {
-- error = device_create_file(dev,&bus->dev_attrs[i]);
-+ error = device_create_file(dev, &bus->dev_attrs[i]);
- if (error) {
- while (--i >= 0)
- device_remove_file(dev, &bus->dev_attrs[i]);
-@@ -406,13 +407,13 @@ static int device_add_attrs(struct bus_type *bus, struct device *dev)
- return error;
- }
-
--static void device_remove_attrs(struct bus_type * bus, struct device * dev)
-+static void device_remove_attrs(struct bus_type *bus, struct device *dev)
- {
- int i;
-
- if (bus->dev_attrs) {
- for (i = 0; attr_name(bus->dev_attrs[i]); i++)
-- device_remove_file(dev,&bus->dev_attrs[i]);
-+ device_remove_file(dev, &bus->dev_attrs[i]);
- }
- }
-
-@@ -420,7 +421,7 @@ static void device_remove_attrs(struct bus_type * bus, struct device * dev)
- static int make_deprecated_bus_links(struct device *dev)
- {
- return sysfs_create_link(&dev->kobj,
-- &dev->bus->subsys.kobj, "bus");
-+ &dev->bus->p->subsys.kobj, "bus");
- }
-
- static void remove_deprecated_bus_links(struct device *dev)
-@@ -433,28 +434,28 @@ static inline void remove_deprecated_bus_links(struct device *dev) { }
- #endif
-
- /**
-- * bus_add_device - add device to bus
-- * @dev: device being added
-+ * bus_add_device - add device to bus
-+ * @dev: device being added
- *
-- * - Add the device to its bus's list of devices.
-- * - Create link to device's bus.
-+ * - Add the device to its bus's list of devices.
-+ * - Create link to device's bus.
- */
--int bus_add_device(struct device * dev)
-+int bus_add_device(struct device *dev)
- {
-- struct bus_type * bus = bus_get(dev->bus);
-+ struct bus_type *bus = bus_get(dev->bus);
- int error = 0;
-
- if (bus) {
-- pr_debug("bus %s: add device %s\n", bus->name, dev->bus_id);
-+ pr_debug("bus: '%s': add device %s\n", bus->name, dev->bus_id);
- error = device_add_attrs(bus, dev);
- if (error)
- goto out_put;
-- error = sysfs_create_link(&bus->devices.kobj,
-+ error = sysfs_create_link(&bus->p->devices_kset->kobj,
- &dev->kobj, dev->bus_id);
- if (error)
- goto out_id;
- error = sysfs_create_link(&dev->kobj,
-- &dev->bus->subsys.kobj, "subsystem");
-+ &dev->bus->p->subsys.kobj, "subsystem");
- if (error)
- goto out_subsys;
- error = make_deprecated_bus_links(dev);
-@@ -466,7 +467,7 @@ int bus_add_device(struct device * dev)
- out_deprecated:
- sysfs_remove_link(&dev->kobj, "subsystem");
- out_subsys:
-- sysfs_remove_link(&bus->devices.kobj, dev->bus_id);
-+ sysfs_remove_link(&bus->p->devices_kset->kobj, dev->bus_id);
- out_id:
- device_remove_attrs(bus, dev);
- out_put:
-@@ -475,56 +476,58 @@ out_put:
- }
-
- /**
-- * bus_attach_device - add device to bus
-- * @dev: device tried to attach to a driver
-+ * bus_attach_device - add device to bus
-+ * @dev: device tried to attach to a driver
- *
-- * - Add device to bus's list of devices.
-- * - Try to attach to driver.
-+ * - Add device to bus's list of devices.
-+ * - Try to attach to driver.
- */
--void bus_attach_device(struct device * dev)
-+void bus_attach_device(struct device *dev)
- {
- struct bus_type *bus = dev->bus;
- int ret = 0;
-
- if (bus) {
- dev->is_registered = 1;
-- if (bus->drivers_autoprobe)
-+ if (bus->p->drivers_autoprobe)
- ret = device_attach(dev);
- WARN_ON(ret < 0);
- if (ret >= 0)
-- klist_add_tail(&dev->knode_bus, &bus->klist_devices);
-+ klist_add_tail(&dev->knode_bus, &bus->p->klist_devices);
- else
- dev->is_registered = 0;
- }
- }
-
- /**
-- * bus_remove_device - remove device from bus
-- * @dev: device to be removed
-+ * bus_remove_device - remove device from bus
-+ * @dev: device to be removed
- *
-- * - Remove symlink from bus's directory.
-- * - Delete device from bus's list.
-- * - Detach from its driver.
-- * - Drop reference taken in bus_add_device().
-+ * - Remove symlink from bus's directory.
-+ * - Delete device from bus's list.
-+ * - Detach from its driver.
-+ * - Drop reference taken in bus_add_device().
- */
--void bus_remove_device(struct device * dev)
-+void bus_remove_device(struct device *dev)
- {
- if (dev->bus) {
- sysfs_remove_link(&dev->kobj, "subsystem");
- remove_deprecated_bus_links(dev);
-- sysfs_remove_link(&dev->bus->devices.kobj, dev->bus_id);
-+ sysfs_remove_link(&dev->bus->p->devices_kset->kobj,
-+ dev->bus_id);
- device_remove_attrs(dev->bus, dev);
- if (dev->is_registered) {
- dev->is_registered = 0;
- klist_del(&dev->knode_bus);
- }
-- pr_debug("bus %s: remove device %s\n", dev->bus->name, dev->bus_id);
-+ pr_debug("bus: '%s': remove device %s\n",
-+ dev->bus->name, dev->bus_id);
- device_release_driver(dev);
- bus_put(dev->bus);
- }
- }
-
--static int driver_add_attrs(struct bus_type * bus, struct device_driver * drv)
-+static int driver_add_attrs(struct bus_type *bus, struct device_driver *drv)
- {
- int error = 0;
- int i;
-@@ -533,19 +536,19 @@ static int driver_add_attrs(struct bus_type * bus, struct device_driver * drv)
- for (i = 0; attr_name(bus->drv_attrs[i]); i++) {
- error = driver_create_file(drv, &bus->drv_attrs[i]);
- if (error)
-- goto Err;
-+ goto err;
- }
- }
-- Done:
-+done:
- return error;
-- Err:
-+err:
- while (--i >= 0)
- driver_remove_file(drv, &bus->drv_attrs[i]);
-- goto Done;
-+ goto done;
- }
-
--
--static void driver_remove_attrs(struct bus_type * bus, struct device_driver * drv)
-+static void driver_remove_attrs(struct bus_type *bus,
-+ struct device_driver *drv)
- {
- int i;
-
-@@ -616,39 +619,46 @@ static ssize_t driver_uevent_store(struct device_driver *drv,
- enum kobject_action action;
-
- if (kobject_action_type(buf, count, &action) == 0)
-- kobject_uevent(&drv->kobj, action);
-+ kobject_uevent(&drv->p->kobj, action);
- return count;
- }
- static DRIVER_ATTR(uevent, S_IWUSR, NULL, driver_uevent_store);
-
- /**
-- * bus_add_driver - Add a driver to the bus.
-- * @drv: driver.
-- *
-+ * bus_add_driver - Add a driver to the bus.
-+ * @drv: driver.
- */
- int bus_add_driver(struct device_driver *drv)
- {
-- struct bus_type * bus = bus_get(drv->bus);
-+ struct bus_type *bus;
-+ struct driver_private *priv;
- int error = 0;
-
-+ bus = bus_get(drv->bus);
- if (!bus)
- return -EINVAL;
-
-- pr_debug("bus %s: add driver %s\n", bus->name, drv->name);
-- error = kobject_set_name(&drv->kobj, "%s", drv->name);
-- if (error)
-- goto out_put_bus;
-- drv->kobj.kset = &bus->drivers;
-- error = kobject_register(&drv->kobj);
-+ pr_debug("bus: '%s': add driver %s\n", bus->name, drv->name);
+
-+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
-+ if (!priv)
-+ return -ENOMEM;
++ if (sg)
++ sg_mark_end(sg);
+
-+ klist_init(&priv->klist_devices, NULL, NULL);
-+ priv->driver = drv;
-+ drv->p = priv;
-+ priv->kobj.kset = bus->p->drivers_kset;
-+ error = kobject_init_and_add(&priv->kobj, &driver_ktype, NULL,
-+ "%s", drv->name);
- if (error)
- goto out_put_bus;
-
-- if (drv->bus->drivers_autoprobe) {
-+ if (drv->bus->p->drivers_autoprobe) {
- error = driver_attach(drv);
- if (error)
- goto out_unregister;
- }
-- klist_add_tail(&drv->knode_bus, &bus->klist_drivers);
-+ klist_add_tail(&priv->knode_bus, &bus->p->klist_drivers);
- module_add_driver(drv->owner, drv);
-
- error = driver_create_file(drv, &driver_attr_uevent);
-@@ -669,24 +679,24 @@ int bus_add_driver(struct device_driver *drv)
- __FUNCTION__, drv->name);
- }
-
-+ kobject_uevent(&priv->kobj, KOBJ_ADD);
- return error;
- out_unregister:
-- kobject_unregister(&drv->kobj);
-+ kobject_put(&priv->kobj);
- out_put_bus:
- bus_put(bus);
- return error;
- }
-
- /**
-- * bus_remove_driver - delete driver from bus's knowledge.
-- * @drv: driver.
-+ * bus_remove_driver - delete driver from bus's knowledge.
-+ * @drv: driver.
- *
-- * Detach the driver from the devices it controls, and remove
-- * it from its bus's list of drivers. Finally, we drop the reference
-- * to the bus we took in bus_add_driver().
-+ * Detach the driver from the devices it controls, and remove
-+ * it from its bus's list of drivers. Finally, we drop the reference
-+ * to the bus we took in bus_add_driver().
- */
--
--void bus_remove_driver(struct device_driver * drv)
-+void bus_remove_driver(struct device_driver *drv)
- {
- if (!drv->bus)
- return;
-@@ -694,18 +704,17 @@ void bus_remove_driver(struct device_driver * drv)
- remove_bind_files(drv);
- driver_remove_attrs(drv->bus, drv);
- driver_remove_file(drv, &driver_attr_uevent);
-- klist_remove(&drv->knode_bus);
-- pr_debug("bus %s: remove driver %s\n", drv->bus->name, drv->name);
-+ klist_remove(&drv->p->knode_bus);
-+ pr_debug("bus: '%s': remove driver %s\n", drv->bus->name, drv->name);
- driver_detach(drv);
- module_remove_driver(drv);
-- kobject_unregister(&drv->kobj);
-+ kobject_put(&drv->p->kobj);
- bus_put(drv->bus);
- }
-
--
- /* Helper for bus_rescan_devices's iter */
- static int __must_check bus_rescan_devices_helper(struct device *dev,
-- void *data)
-+ void *data)
- {
- int ret = 0;
-
-@@ -727,10 +736,11 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
- * attached and rescan it against existing drivers to see if it matches
- * any by calling device_attach() for the unbound devices.
- */
--int bus_rescan_devices(struct bus_type * bus)
-+int bus_rescan_devices(struct bus_type *bus)
- {
- return bus_for_each_dev(bus, NULL, NULL, bus_rescan_devices_helper);
- }
-+EXPORT_SYMBOL_GPL(bus_rescan_devices);
-
- /**
- * device_reprobe - remove driver for a device and probe for a new driver
-@@ -755,55 +765,55 @@ int device_reprobe(struct device *dev)
- EXPORT_SYMBOL_GPL(device_reprobe);
-
- /**
-- * find_bus - locate bus by name.
-- * @name: name of bus.
-+ * find_bus - locate bus by name.
-+ * @name: name of bus.
- *
-- * Call kset_find_obj() to iterate over list of buses to
-- * find a bus by name. Return bus if found.
-+ * Call kset_find_obj() to iterate over list of buses to
-+ * find a bus by name. Return bus if found.
- *
-- * Note that kset_find_obj increments bus' reference count.
-+ * Note that kset_find_obj increments bus' reference count.
- */
- #if 0
--struct bus_type * find_bus(char * name)
-+struct bus_type *find_bus(char *name)
- {
-- struct kobject * k = kset_find_obj(&bus_subsys.kset, name);
-+ struct kobject *k = kset_find_obj(bus_kset, name);
- return k ? to_bus(k) : NULL;
- }
- #endif /* 0 */
-
-
- /**
-- * bus_add_attrs - Add default attributes for this bus.
-- * @bus: Bus that has just been registered.
-+ * bus_add_attrs - Add default attributes for this bus.
-+ * @bus: Bus that has just been registered.
- */
-
--static int bus_add_attrs(struct bus_type * bus)
-+static int bus_add_attrs(struct bus_type *bus)
- {
- int error = 0;
- int i;
-
- if (bus->bus_attrs) {
- for (i = 0; attr_name(bus->bus_attrs[i]); i++) {
-- error = bus_create_file(bus,&bus->bus_attrs[i]);
-+ error = bus_create_file(bus, &bus->bus_attrs[i]);
- if (error)
-- goto Err;
-+ goto err;
- }
- }
-- Done:
-+done:
- return error;
-- Err:
-+err:
- while (--i >= 0)
-- bus_remove_file(bus,&bus->bus_attrs[i]);
-- goto Done;
-+ bus_remove_file(bus, &bus->bus_attrs[i]);
-+ goto done;
- }
-
--static void bus_remove_attrs(struct bus_type * bus)
-+static void bus_remove_attrs(struct bus_type *bus)
- {
- int i;
-
- if (bus->bus_attrs) {
- for (i = 0; attr_name(bus->bus_attrs[i]); i++)
-- bus_remove_file(bus,&bus->bus_attrs[i]);
-+ bus_remove_file(bus, &bus->bus_attrs[i]);
- }
- }
-
-@@ -827,32 +837,42 @@ static ssize_t bus_uevent_store(struct bus_type *bus,
- enum kobject_action action;
-
- if (kobject_action_type(buf, count, &action) == 0)
-- kobject_uevent(&bus->subsys.kobj, action);
-+ kobject_uevent(&bus->p->subsys.kobj, action);
- return count;
- }
- static BUS_ATTR(uevent, S_IWUSR, NULL, bus_uevent_store);
-
- /**
-- * bus_register - register a bus with the system.
-- * @bus: bus.
-+ * bus_register - register a bus with the system.
-+ * @bus: bus.
- *
-- * Once we have that, we registered the bus with the kobject
-- * infrastructure, then register the children subsystems it has:
-- * the devices and drivers that belong to the bus.
-+ * Once we have that, we registered the bus with the kobject
-+ * infrastructure, then register the children subsystems it has:
-+ * the devices and drivers that belong to the bus.
- */
--int bus_register(struct bus_type * bus)
-+int bus_register(struct bus_type *bus)
- {
- int retval;
-+ struct bus_type_private *priv;
++ return nsegs;
++}
+
-+ priv = kzalloc(sizeof(struct bus_type_private), GFP_KERNEL);
-+ if (!priv)
-+ return -ENOMEM;
-
-- BLOCKING_INIT_NOTIFIER_HEAD(&bus->bus_notifier);
-+ priv->bus = bus;
-+ bus->p = priv;
-
-- retval = kobject_set_name(&bus->subsys.kobj, "%s", bus->name);
-+ BLOCKING_INIT_NOTIFIER_HEAD(&priv->bus_notifier);
++EXPORT_SYMBOL(blk_rq_map_sg);
+
-+ retval = kobject_set_name(&priv->subsys.kobj, "%s", bus->name);
- if (retval)
- goto out;
-
-- bus->subsys.kobj.kset = &bus_subsys;
-+ priv->subsys.kobj.kset = bus_kset;
-+ priv->subsys.kobj.ktype = &bus_ktype;
-+ priv->drivers_autoprobe = 1;
-
-- retval = subsystem_register(&bus->subsys);
-+ retval = kset_register(&priv->subsys);
- if (retval)
- goto out;
-
-@@ -860,23 +880,23 @@ int bus_register(struct bus_type * bus)
- if (retval)
- goto bus_uevent_fail;
-
-- kobject_set_name(&bus->devices.kobj, "devices");
-- bus->devices.kobj.parent = &bus->subsys.kobj;
-- retval = kset_register(&bus->devices);
-- if (retval)
-+ priv->devices_kset = kset_create_and_add("devices", NULL,
-+ &priv->subsys.kobj);
-+ if (!priv->devices_kset) {
-+ retval = -ENOMEM;
- goto bus_devices_fail;
++static inline int ll_new_mergeable(struct request_queue *q,
++ struct request *req,
++ struct bio *bio)
++{
++ int nr_phys_segs = bio_phys_segments(q, bio);
++
++ if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
++ req->cmd_flags |= REQ_NOMERGE;
++ if (req == q->last_merge)
++ q->last_merge = NULL;
++ return 0;
+ }
-
-- kobject_set_name(&bus->drivers.kobj, "drivers");
-- bus->drivers.kobj.parent = &bus->subsys.kobj;
-- bus->drivers.ktype = &driver_ktype;
-- retval = kset_register(&bus->drivers);
-- if (retval)
-+ priv->drivers_kset = kset_create_and_add("drivers", NULL,
-+ &priv->subsys.kobj);
-+ if (!priv->drivers_kset) {
-+ retval = -ENOMEM;
- goto bus_drivers_fail;
++
++ /*
++ * A hw segment is just getting larger, bump just the phys
++ * counter.
++ */
++ req->nr_phys_segments += nr_phys_segs;
++ return 1;
++}
++
++static inline int ll_new_hw_segment(struct request_queue *q,
++ struct request *req,
++ struct bio *bio)
++{
++ int nr_hw_segs = bio_hw_segments(q, bio);
++ int nr_phys_segs = bio_phys_segments(q, bio);
++
++ if (req->nr_hw_segments + nr_hw_segs > q->max_hw_segments
++ || req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
++ req->cmd_flags |= REQ_NOMERGE;
++ if (req == q->last_merge)
++ q->last_merge = NULL;
++ return 0;
+ }
-
-- klist_init(&bus->klist_devices, klist_devices_get, klist_devices_put);
-- klist_init(&bus->klist_drivers, NULL, NULL);
-+ klist_init(&priv->klist_devices, klist_devices_get, klist_devices_put);
-+ klist_init(&priv->klist_drivers, NULL, NULL);
-
-- bus->drivers_autoprobe = 1;
- retval = add_probe_files(bus);
- if (retval)
- goto bus_probe_files_fail;
-@@ -885,66 +905,73 @@ int bus_register(struct bus_type * bus)
- if (retval)
- goto bus_attrs_fail;
-
-- pr_debug("bus type '%s' registered\n", bus->name);
-+ pr_debug("bus: '%s': registered\n", bus->name);
- return 0;
-
- bus_attrs_fail:
- remove_probe_files(bus);
- bus_probe_files_fail:
-- kset_unregister(&bus->drivers);
-+ kset_unregister(bus->p->drivers_kset);
- bus_drivers_fail:
-- kset_unregister(&bus->devices);
-+ kset_unregister(bus->p->devices_kset);
- bus_devices_fail:
- bus_remove_file(bus, &bus_attr_uevent);
- bus_uevent_fail:
-- subsystem_unregister(&bus->subsys);
-+ kset_unregister(&bus->p->subsys);
-+ kfree(bus->p);
- out:
- return retval;
- }
-+EXPORT_SYMBOL_GPL(bus_register);
-
- /**
-- * bus_unregister - remove a bus from the system
-- * @bus: bus.
-+ * bus_unregister - remove a bus from the system
-+ * @bus: bus.
- *
-- * Unregister the child subsystems and the bus itself.
-- * Finally, we call bus_put() to release the refcount
-+ * Unregister the child subsystems and the bus itself.
-+ * Finally, we call bus_put() to release the refcount
- */
--void bus_unregister(struct bus_type * bus)
-+void bus_unregister(struct bus_type *bus)
- {
-- pr_debug("bus %s: unregistering\n", bus->name);
-+ pr_debug("bus: '%s': unregistering\n", bus->name);
- bus_remove_attrs(bus);
- remove_probe_files(bus);
-- kset_unregister(&bus->drivers);
-- kset_unregister(&bus->devices);
-+ kset_unregister(bus->p->drivers_kset);
-+ kset_unregister(bus->p->devices_kset);
- bus_remove_file(bus, &bus_attr_uevent);
-- subsystem_unregister(&bus->subsys);
-+ kset_unregister(&bus->p->subsys);
-+ kfree(bus->p);
- }
-+EXPORT_SYMBOL_GPL(bus_unregister);
-
- int bus_register_notifier(struct bus_type *bus, struct notifier_block *nb)
- {
-- return blocking_notifier_chain_register(&bus->bus_notifier, nb);
-+ return blocking_notifier_chain_register(&bus->p->bus_notifier, nb);
- }
- EXPORT_SYMBOL_GPL(bus_register_notifier);
-
- int bus_unregister_notifier(struct bus_type *bus, struct notifier_block *nb)
- {
-- return blocking_notifier_chain_unregister(&bus->bus_notifier, nb);
-+ return blocking_notifier_chain_unregister(&bus->p->bus_notifier, nb);
- }
- EXPORT_SYMBOL_GPL(bus_unregister_notifier);
-
--int __init buses_init(void)
-+struct kset *bus_get_kset(struct bus_type *bus)
- {
-- return subsystem_register(&bus_subsys);
-+ return &bus->p->subsys;
- }
-+EXPORT_SYMBOL_GPL(bus_get_kset);
-
-+struct klist *bus_get_device_klist(struct bus_type *bus)
++
++ /*
++ * This will form the start of a new hw segment. Bump both
++ * counters.
++ */
++ req->nr_hw_segments += nr_hw_segs;
++ req->nr_phys_segments += nr_phys_segs;
++ return 1;
++}
++
++int ll_back_merge_fn(struct request_queue *q, struct request *req,
++ struct bio *bio)
+{
-+ return &bus->p->klist_devices;
++ unsigned short max_sectors;
++ int len;
++
++ if (unlikely(blk_pc_request(req)))
++ max_sectors = q->max_hw_sectors;
++ else
++ max_sectors = q->max_sectors;
++
++ if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
++ req->cmd_flags |= REQ_NOMERGE;
++ if (req == q->last_merge)
++ q->last_merge = NULL;
++ return 0;
++ }
++ if (unlikely(!bio_flagged(req->biotail, BIO_SEG_VALID)))
++ blk_recount_segments(q, req->biotail);
++ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
++ blk_recount_segments(q, bio);
++ len = req->biotail->bi_hw_back_size + bio->bi_hw_front_size;
++ if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(req->biotail), __BVEC_START(bio)) &&
++ !BIOVEC_VIRT_OVERSIZE(len)) {
++ int mergeable = ll_new_mergeable(q, req, bio);
++
++ if (mergeable) {
++ if (req->nr_hw_segments == 1)
++ req->bio->bi_hw_front_size = len;
++ if (bio->bi_hw_segments == 1)
++ bio->bi_hw_back_size = len;
++ }
++ return mergeable;
++ }
++
++ return ll_new_hw_segment(q, req, bio);
+}
-+EXPORT_SYMBOL_GPL(bus_get_device_klist);
-
--EXPORT_SYMBOL_GPL(bus_for_each_dev);
--EXPORT_SYMBOL_GPL(bus_find_device);
--EXPORT_SYMBOL_GPL(bus_for_each_drv);
--
--EXPORT_SYMBOL_GPL(bus_register);
--EXPORT_SYMBOL_GPL(bus_unregister);
--EXPORT_SYMBOL_GPL(bus_rescan_devices);
--
--EXPORT_SYMBOL_GPL(bus_create_file);
--EXPORT_SYMBOL_GPL(bus_remove_file);
-+int __init buses_init(void)
++
++int ll_front_merge_fn(struct request_queue *q, struct request *req,
++ struct bio *bio)
+{
-+ bus_kset = kset_create_and_add("bus", &bus_uevent_ops, NULL);
-+ if (!bus_kset)
-+ return -ENOMEM;
++ unsigned short max_sectors;
++ int len;
++
++ if (unlikely(blk_pc_request(req)))
++ max_sectors = q->max_hw_sectors;
++ else
++ max_sectors = q->max_sectors;
++
++
++ if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
++ req->cmd_flags |= REQ_NOMERGE;
++ if (req == q->last_merge)
++ q->last_merge = NULL;
++ return 0;
++ }
++ len = bio->bi_hw_back_size + req->bio->bi_hw_front_size;
++ if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
++ blk_recount_segments(q, bio);
++ if (unlikely(!bio_flagged(req->bio, BIO_SEG_VALID)))
++ blk_recount_segments(q, req->bio);
++ if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(req->bio)) &&
++ !BIOVEC_VIRT_OVERSIZE(len)) {
++ int mergeable = ll_new_mergeable(q, req, bio);
++
++ if (mergeable) {
++ if (bio->bi_hw_segments == 1)
++ bio->bi_hw_front_size = len;
++ if (req->nr_hw_segments == 1)
++ req->biotail->bi_hw_back_size = len;
++ }
++ return mergeable;
++ }
++
++ return ll_new_hw_segment(q, req, bio);
++}
++
++static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
++ struct request *next)
++{
++ int total_phys_segments;
++ int total_hw_segments;
++
++ /*
++ * First check whether either of the requests has been
++ * re-queued. We can't merge them if so.
++ */
++ if (req->special || next->special)
++ return 0;
++
++ /*
++ * Will it become too large?
++ */
++ if ((req->nr_sectors + next->nr_sectors) > q->max_sectors)
++ return 0;
++
++ total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
++ if (blk_phys_contig_segment(q, req->biotail, next->bio))
++ total_phys_segments--;
++
++ if (total_phys_segments > q->max_phys_segments)
++ return 0;
++
++ total_hw_segments = req->nr_hw_segments + next->nr_hw_segments;
++ if (blk_hw_contig_segment(q, req->biotail, next->bio)) {
++ int len = req->biotail->bi_hw_back_size + next->bio->bi_hw_front_size;
++ /*
++ * propagate the combined length to the end of the requests
++ */
++ if (req->nr_hw_segments == 1)
++ req->bio->bi_hw_front_size = len;
++ if (next->nr_hw_segments == 1)
++ next->biotail->bi_hw_back_size = len;
++ total_hw_segments--;
++ }
++
++ if (total_hw_segments > q->max_hw_segments)
++ return 0;
++
++ /* Merge is OK... */
++ req->nr_phys_segments = total_phys_segments;
++ req->nr_hw_segments = total_hw_segments;
++ return 1;
++}
++
++/*
++ * Has to be called with the request spinlock acquired
++ */
++static int attempt_merge(struct request_queue *q, struct request *req,
++ struct request *next)
++{
++ if (!rq_mergeable(req) || !rq_mergeable(next))
++ return 0;
++
++ /*
++ * not contiguous
++ */
++ if (req->sector + req->nr_sectors != next->sector)
++ return 0;
++
++ if (rq_data_dir(req) != rq_data_dir(next)
++ || req->rq_disk != next->rq_disk
++ || next->special)
++ return 0;
++
++ /*
++ * If we are allowed to merge, then append bio list
++ * from next to rq and release next. merge_requests_fn
++ * will have updated segment counts, update sector
++ * counts here.
++ */
++ if (!ll_merge_requests_fn(q, req, next))
++ return 0;
++
++ /*
++ * At this point we have either done a back merge
++ * or front merge. We need the smaller start_time of
++ * the merged requests to be the current request
++ * for accounting purposes.
++ */
++ if (time_after(req->start_time, next->start_time))
++ req->start_time = next->start_time;
++
++ req->biotail->bi_next = next->bio;
++ req->biotail = next->biotail;
++
++ req->nr_sectors = req->hard_nr_sectors += next->hard_nr_sectors;
++
++ elv_merge_requests(q, req, next);
++
++ if (req->rq_disk) {
++ disk_round_stats(req->rq_disk);
++ req->rq_disk->in_flight--;
++ }
++
++ req->ioprio = ioprio_best(req->ioprio, next->ioprio);
++
++ __blk_put_request(q, next);
++ return 1;
++}
++
++int attempt_back_merge(struct request_queue *q, struct request *rq)
++{
++ struct request *next = elv_latter_request(q, rq);
++
++ if (next)
++ return attempt_merge(q, rq, next);
++
+ return 0;
+}
-diff --git a/drivers/base/class.c b/drivers/base/class.c
-index a863bb0..59cf358 100644
---- a/drivers/base/class.c
-+++ b/drivers/base/class.c
-@@ -17,16 +17,17 @@
- #include <linux/kdev_t.h>
- #include <linux/err.h>
- #include <linux/slab.h>
-+#include <linux/genhd.h>
- #include "base.h"
-
- #define to_class_attr(_attr) container_of(_attr, struct class_attribute, attr)
- #define to_class(obj) container_of(obj, struct class, subsys.kobj)
-
--static ssize_t
--class_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
-+static ssize_t class_attr_show(struct kobject *kobj, struct attribute *attr,
-+ char *buf)
- {
-- struct class_attribute * class_attr = to_class_attr(attr);
-- struct class * dc = to_class(kobj);
-+ struct class_attribute *class_attr = to_class_attr(attr);
-+ struct class *dc = to_class(kobj);
- ssize_t ret = -EIO;
-
- if (class_attr->show)
-@@ -34,12 +35,11 @@ class_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
- return ret;
- }
-
--static ssize_t
--class_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * buf, size_t count)
-+static ssize_t class_attr_store(struct kobject *kobj, struct attribute *attr,
-+ const char *buf, size_t count)
- {
-- struct class_attribute * class_attr = to_class_attr(attr);
-- struct class * dc = to_class(kobj);
-+ struct class_attribute *class_attr = to_class_attr(attr);
-+ struct class *dc = to_class(kobj);
- ssize_t ret = -EIO;
-
- if (class_attr->store)
-@@ -47,7 +47,7 @@ class_attr_store(struct kobject * kobj, struct attribute * attr,
- return ret;
- }
-
--static void class_release(struct kobject * kobj)
-+static void class_release(struct kobject *kobj)
- {
- struct class *class = to_class(kobj);
-
-@@ -71,20 +71,20 @@ static struct kobj_type class_ktype = {
- };
-
- /* Hotplug events for classes go to the class_obj subsys */
--static decl_subsys(class, &class_ktype, NULL);
-+static struct kset *class_kset;
-
-
--int class_create_file(struct class * cls, const struct class_attribute * attr)
-+int class_create_file(struct class *cls, const struct class_attribute *attr)
- {
- int error;
-- if (cls) {
-+ if (cls)
- error = sysfs_create_file(&cls->subsys.kobj, &attr->attr);
-- } else
-+ else
- error = -EINVAL;
- return error;
- }
-
--void class_remove_file(struct class * cls, const struct class_attribute * attr)
-+void class_remove_file(struct class *cls, const struct class_attribute *attr)
- {
- if (cls)
- sysfs_remove_file(&cls->subsys.kobj, &attr->attr);
-@@ -93,48 +93,48 @@ void class_remove_file(struct class * cls, const struct class_attribute * attr)
- static struct class *class_get(struct class *cls)
- {
- if (cls)
-- return container_of(kset_get(&cls->subsys), struct class, subsys);
-+ return container_of(kset_get(&cls->subsys),
-+ struct class, subsys);
- return NULL;
- }
-
--static void class_put(struct class * cls)
-+static void class_put(struct class *cls)
- {
- if (cls)
- kset_put(&cls->subsys);
- }
-
--
--static int add_class_attrs(struct class * cls)
-+static int add_class_attrs(struct class *cls)
- {
- int i;
- int error = 0;
-
- if (cls->class_attrs) {
- for (i = 0; attr_name(cls->class_attrs[i]); i++) {
-- error = class_create_file(cls,&cls->class_attrs[i]);
-+ error = class_create_file(cls, &cls->class_attrs[i]);
- if (error)
-- goto Err;
-+ goto error;
- }
- }
-- Done:
-+done:
- return error;
-- Err:
-+error:
- while (--i >= 0)
-- class_remove_file(cls,&cls->class_attrs[i]);
-- goto Done;
-+ class_remove_file(cls, &cls->class_attrs[i]);
-+ goto done;
- }
-
--static void remove_class_attrs(struct class * cls)
-+static void remove_class_attrs(struct class *cls)
- {
- int i;
-
- if (cls->class_attrs) {
- for (i = 0; attr_name(cls->class_attrs[i]); i++)
-- class_remove_file(cls,&cls->class_attrs[i]);
-+ class_remove_file(cls, &cls->class_attrs[i]);
- }
- }
-
--int class_register(struct class * cls)
-+int class_register(struct class *cls)
- {
- int error;
-
-@@ -149,9 +149,16 @@ int class_register(struct class * cls)
- if (error)
- return error;
-
-- cls->subsys.kobj.kset = &class_subsys;
-+#ifdef CONFIG_SYSFS_DEPRECATED
-+ /* let the block class directory show up in the root of sysfs */
-+ if (cls != &block_class)
-+ cls->subsys.kobj.kset = class_kset;
-+#else
-+ cls->subsys.kobj.kset = class_kset;
-+#endif
-+ cls->subsys.kobj.ktype = &class_ktype;
-
-- error = subsystem_register(&cls->subsys);
-+ error = kset_register(&cls->subsys);
- if (!error) {
- error = add_class_attrs(class_get(cls));
- class_put(cls);
-@@ -159,11 +166,11 @@ int class_register(struct class * cls)
- return error;
- }
-
--void class_unregister(struct class * cls)
-+void class_unregister(struct class *cls)
- {
- pr_debug("device class '%s': unregistering\n", cls->name);
- remove_class_attrs(cls);
-- subsystem_unregister(&cls->subsys);
-+ kset_unregister(&cls->subsys);
- }
-
- static void class_create_release(struct class *cls)
-@@ -241,8 +248,8 @@ void class_destroy(struct class *cls)
-
- /* Class Device Stuff */
-
--int class_device_create_file(struct class_device * class_dev,
-- const struct class_device_attribute * attr)
-+int class_device_create_file(struct class_device *class_dev,
-+ const struct class_device_attribute *attr)
- {
- int error = -EINVAL;
- if (class_dev)
-@@ -250,8 +257,8 @@ int class_device_create_file(struct class_device * class_dev,
- return error;
- }
-
--void class_device_remove_file(struct class_device * class_dev,
-- const struct class_device_attribute * attr)
-+void class_device_remove_file(struct class_device *class_dev,
-+ const struct class_device_attribute *attr)
- {
- if (class_dev)
- sysfs_remove_file(&class_dev->kobj, &attr->attr);
-@@ -273,12 +280,11 @@ void class_device_remove_bin_file(struct class_device *class_dev,
- sysfs_remove_bin_file(&class_dev->kobj, attr);
- }
-
--static ssize_t
--class_device_attr_show(struct kobject * kobj, struct attribute * attr,
-- char * buf)
-+static ssize_t class_device_attr_show(struct kobject *kobj,
-+ struct attribute *attr, char *buf)
- {
-- struct class_device_attribute * class_dev_attr = to_class_dev_attr(attr);
-- struct class_device * cd = to_class_dev(kobj);
-+ struct class_device_attribute *class_dev_attr = to_class_dev_attr(attr);
-+ struct class_device *cd = to_class_dev(kobj);
- ssize_t ret = 0;
-
- if (class_dev_attr->show)
-@@ -286,12 +292,12 @@ class_device_attr_show(struct kobject * kobj, struct attribute * attr,
- return ret;
- }
-
--static ssize_t
--class_device_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * buf, size_t count)
-+static ssize_t class_device_attr_store(struct kobject *kobj,
-+ struct attribute *attr,
-+ const char *buf, size_t count)
- {
-- struct class_device_attribute * class_dev_attr = to_class_dev_attr(attr);
-- struct class_device * cd = to_class_dev(kobj);
-+ struct class_device_attribute *class_dev_attr = to_class_dev_attr(attr);
-+ struct class_device *cd = to_class_dev(kobj);
- ssize_t ret = 0;
-
- if (class_dev_attr->store)
-@@ -304,10 +310,10 @@ static struct sysfs_ops class_dev_sysfs_ops = {
- .store = class_device_attr_store,
- };
-
--static void class_dev_release(struct kobject * kobj)
-+static void class_dev_release(struct kobject *kobj)
- {
- struct class_device *cd = to_class_dev(kobj);
-- struct class * cls = cd->class;
-+ struct class *cls = cd->class;
-
- pr_debug("device class '%s': release.\n", cd->class_id);
-
-@@ -316,8 +322,8 @@ static void class_dev_release(struct kobject * kobj)
- else if (cls->release)
- cls->release(cd);
- else {
-- printk(KERN_ERR "Class Device '%s' does not have a release() function, "
-- "it is broken and must be fixed.\n",
-+ printk(KERN_ERR "Class Device '%s' does not have a release() "
-+ "function, it is broken and must be fixed.\n",
- cd->class_id);
- WARN_ON(1);
- }
-@@ -428,7 +434,8 @@ static int class_uevent(struct kset *kset, struct kobject *kobj,
- add_uevent_var(env, "PHYSDEVBUS=%s", dev->bus->name);
-
- if (dev->driver)
-- add_uevent_var(env, "PHYSDEVDRIVER=%s", dev->driver->name);
-+ add_uevent_var(env, "PHYSDEVDRIVER=%s",
-+ dev->driver->name);
- }
-
- if (class_dev->uevent) {
-@@ -452,43 +459,49 @@ static struct kset_uevent_ops class_uevent_ops = {
- .uevent = class_uevent,
- };
-
--static decl_subsys(class_obj, &class_device_ktype, &class_uevent_ops);
--
++
++int attempt_front_merge(struct request_queue *q, struct request *rq)
++{
++ struct request *prev = elv_former_request(q, rq);
++
++ if (prev)
++ return attempt_merge(q, prev, rq);
++
++ return 0;
++}
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+new file mode 100644
+index 0000000..4df09a1
+--- /dev/null
++++ b/block/blk-settings.c
+@@ -0,0 +1,402 @@
+/*
-+ * DO NOT copy how this is created, kset_create_and_add() should be
-+ * called, but this is a hold-over from the old-way and will be deleted
-+ * entirely soon.
++ * Functions related to setting various queue properties from drivers
+ */
-+static struct kset class_obj_subsys = {
-+ .uevent_ops = &class_uevent_ops,
-+};
-
--static int class_device_add_attrs(struct class_device * cd)
-+static int class_device_add_attrs(struct class_device *cd)
- {
- int i;
- int error = 0;
-- struct class * cls = cd->class;
-+ struct class *cls = cd->class;
-
- if (cls->class_dev_attrs) {
- for (i = 0; attr_name(cls->class_dev_attrs[i]); i++) {
- error = class_device_create_file(cd,
-- &cls->class_dev_attrs[i]);
-+ &cls->class_dev_attrs[i]);
- if (error)
-- goto Err;
-+ goto err;
- }
- }
-- Done:
-+done:
- return error;
-- Err:
-+err:
- while (--i >= 0)
-- class_device_remove_file(cd,&cls->class_dev_attrs[i]);
-- goto Done;
-+ class_device_remove_file(cd, &cls->class_dev_attrs[i]);
-+ goto done;
- }
-
--static void class_device_remove_attrs(struct class_device * cd)
-+static void class_device_remove_attrs(struct class_device *cd)
- {
- int i;
-- struct class * cls = cd->class;
-+ struct class *cls = cd->class;
-
- if (cls->class_dev_attrs) {
- for (i = 0; attr_name(cls->class_dev_attrs[i]); i++)
-- class_device_remove_file(cd,&cls->class_dev_attrs[i]);
-+ class_device_remove_file(cd, &cls->class_dev_attrs[i]);
- }
- }
-
--static int class_device_add_groups(struct class_device * cd)
-+static int class_device_add_groups(struct class_device *cd)
- {
- int i;
- int error = 0;
-@@ -498,7 +511,8 @@ static int class_device_add_groups(struct class_device * cd)
- error = sysfs_create_group(&cd->kobj, cd->groups[i]);
- if (error) {
- while (--i >= 0)
-- sysfs_remove_group(&cd->kobj, cd->groups[i]);
-+ sysfs_remove_group(&cd->kobj,
-+ cd->groups[i]);
- goto out;
- }
- }
-@@ -507,14 +521,12 @@ out:
- return error;
- }
-
--static void class_device_remove_groups(struct class_device * cd)
-+static void class_device_remove_groups(struct class_device *cd)
- {
- int i;
-- if (cd->groups) {
-- for (i = 0; cd->groups[i]; i++) {
-+ if (cd->groups)
-+ for (i = 0; cd->groups[i]; i++)
- sysfs_remove_group(&cd->kobj, cd->groups[i]);
-- }
-- }
- }
-
- static ssize_t show_dev(struct class_device *class_dev, char *buf)
-@@ -537,8 +549,8 @@ static struct class_device_attribute class_uevent_attr =
-
- void class_device_initialize(struct class_device *class_dev)
- {
-- kobj_set_kset_s(class_dev, class_obj_subsys);
-- kobject_init(&class_dev->kobj);
-+ class_dev->kobj.kset = &class_obj_subsys;
-+ kobject_init(&class_dev->kobj, &class_device_ktype);
- INIT_LIST_HEAD(&class_dev->node);
- }
-
-@@ -566,16 +578,13 @@ int class_device_add(struct class_device *class_dev)
- class_dev->class_id);
-
- /* first, register with generic layer. */
-- error = kobject_set_name(&class_dev->kobj, "%s", class_dev->class_id);
-- if (error)
-- goto out2;
--
- if (parent_class_dev)
- class_dev->kobj.parent = &parent_class_dev->kobj;
- else
- class_dev->kobj.parent = &parent_class->subsys.kobj;
-
-- error = kobject_add(&class_dev->kobj);
-+ error = kobject_add(&class_dev->kobj, class_dev->kobj.parent,
-+ "%s", class_dev->class_id);
- if (error)
- goto out2;
-
-@@ -642,7 +651,7 @@ int class_device_add(struct class_device *class_dev)
- out3:
- kobject_del(&class_dev->kobj);
- out2:
-- if(parent_class_dev)
-+ if (parent_class_dev)
- class_device_put(parent_class_dev);
- class_put(parent_class);
- out1:
-@@ -659,9 +668,11 @@ int class_device_register(struct class_device *class_dev)
- /**
- * class_device_create - creates a class device and registers it with sysfs
- * @cls: pointer to the struct class that this device should be registered to.
-- * @parent: pointer to the parent struct class_device of this new device, if any.
-+ * @parent: pointer to the parent struct class_device of this new device, if
-+ * any.
- * @devt: the dev_t for the char device to be added.
-- * @device: a pointer to a struct device that is assiociated with this class device.
-+ * @device: a pointer to a struct device that is assiociated with this class
-+ * device.
- * @fmt: string for the class device's name
- *
- * This function can be used by char device classes. A struct
-@@ -785,7 +796,7 @@ void class_device_destroy(struct class *cls, dev_t devt)
- class_device_unregister(class_dev);
- }
-
--struct class_device * class_device_get(struct class_device *class_dev)
-+struct class_device *class_device_get(struct class_device *class_dev)
- {
- if (class_dev)
- return to_class_dev(kobject_get(&class_dev->kobj));
-@@ -798,6 +809,139 @@ void class_device_put(struct class_device *class_dev)
- kobject_put(&class_dev->kobj);
- }
-
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/init.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
++
++#include "blk.h"
++
++unsigned long blk_max_low_pfn, blk_max_pfn;
++EXPORT_SYMBOL(blk_max_low_pfn);
++EXPORT_SYMBOL(blk_max_pfn);
++
+/**
-+ * class_for_each_device - device iterator
-+ * @class: the class we're iterating
-+ * @data: data for the callback
-+ * @fn: function to be called for each device
++ * blk_queue_prep_rq - set a prepare_request function for queue
++ * @q: queue
++ * @pfn: prepare_request function
+ *
-+ * Iterate over @class's list of devices, and call @fn for each,
-+ * passing it @data.
++ * It's possible for a queue to register a prepare_request callback which
++ * is invoked before the request is handed to the request_fn. The goal of
++ * the function is to prepare a request for I/O; it can be used to build a
++ * cdb from the request data, for instance.
+ *
-+ * We check the return of @fn each time. If it returns anything
-+ * other than 0, we break out and return that value.
++ */
++void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn)
++{
++ q->prep_rq_fn = pfn;
++}
++
++EXPORT_SYMBOL(blk_queue_prep_rq);
++
++/**
++ * blk_queue_merge_bvec - set a merge_bvec function for queue
++ * @q: queue
++ * @mbfn: merge_bvec_fn
+ *
-+ * Note, we hold class->sem in this function, so it can not be
-+ * re-acquired in @fn, otherwise it will self-deadlocking. For
-+ * example, calls to add or remove class members would be verboten.
++ * Usually queues have static limitations on the max sectors or segments that
++ * we can put in a request. Stacking drivers may have some settings that
++ * are dynamic, and thus we have to query the queue whether it is ok to
++ * add a new bio_vec to a bio at a given offset or not. If the block device
++ * has such limitations, it needs to register a merge_bvec_fn to control
++ * the size of bio's sent to it. Note that a block device *must* allow a
++ * single page to be added to an empty bio. The block device driver may want
++ * to use the bio_split() function to deal with these bio's. By default
++ * no merge_bvec_fn is defined for a queue, and only the fixed limits are
++ * honored.
+ */
-+int class_for_each_device(struct class *class, void *data,
-+ int (*fn)(struct device *, void *))
++void blk_queue_merge_bvec(struct request_queue *q, merge_bvec_fn *mbfn)
+{
-+ struct device *dev;
-+ int error = 0;
++ q->merge_bvec_fn = mbfn;
++}
+
-+ if (!class)
-+ return -EINVAL;
-+ down(&class->sem);
-+ list_for_each_entry(dev, &class->devices, node) {
-+ dev = get_device(dev);
-+ if (dev) {
-+ error = fn(dev, data);
-+ put_device(dev);
-+ } else
-+ error = -ENODEV;
-+ if (error)
-+ break;
-+ }
-+ up(&class->sem);
++EXPORT_SYMBOL(blk_queue_merge_bvec);
+
-+ return error;
++void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn)
++{
++ q->softirq_done_fn = fn;
+}
-+EXPORT_SYMBOL_GPL(class_for_each_device);
++
++EXPORT_SYMBOL(blk_queue_softirq_done);
+
+/**
-+ * class_find_device - device iterator for locating a particular device
-+ * @class: the class we're iterating
-+ * @data: data for the match function
-+ * @match: function to check device
++ * blk_queue_make_request - define an alternate make_request function for a device
++ * @q: the request queue for the device to be affected
++ * @mfn: the alternate make_request function
+ *
-+ * This is similar to the class_for_each_dev() function above, but it
-+ * returns a reference to a device that is 'found' for later use, as
-+ * determined by the @match callback.
++ * Description:
++ * The normal way for &struct bios to be passed to a device
++ * driver is for them to be collected into requests on a request
++ * queue, and then to allow the device driver to select requests
++ * off that queue when it is ready. This works well for many block
++ * devices. However some block devices (typically virtual devices
++ * such as md or lvm) do not benefit from the processing on the
++ * request queue, and are served best by having the requests passed
++ * directly to them. This can be achieved by providing a function
++ * to blk_queue_make_request().
+ *
-+ * The callback should return 0 if the device doesn't match and non-zero
-+ * if it does. If the callback returns non-zero, this function will
-+ * return to the caller and not iterate over any more devices.
++ * Caveat:
++ * The driver that does this *must* be able to deal appropriately
++ * with buffers in "highmemory". This can be accomplished by either calling
++ * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
++ * blk_queue_bounce() to create a buffer in normal memory.
++ **/
++void blk_queue_make_request(struct request_queue * q, make_request_fn * mfn)
++{
++ /*
++ * set defaults
++ */
++ q->nr_requests = BLKDEV_MAX_RQ;
++ blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
++ blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
++ q->make_request_fn = mfn;
++ q->backing_dev_info.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
++ q->backing_dev_info.state = 0;
++ q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
++ blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
++ blk_queue_hardsect_size(q, 512);
++ blk_queue_dma_alignment(q, 511);
++ blk_queue_congestion_threshold(q);
++ q->nr_batching = BLK_BATCH_REQ;
+
-+ * Note, you will need to drop the reference with put_device() after use.
++ q->unplug_thresh = 4; /* hmm */
++ q->unplug_delay = (3 * HZ) / 1000; /* 3 milliseconds */
++ if (q->unplug_delay == 0)
++ q->unplug_delay = 1;
++
++ INIT_WORK(&q->unplug_work, blk_unplug_work);
++
++ q->unplug_timer.function = blk_unplug_timeout;
++ q->unplug_timer.data = (unsigned long)q;
++
++ /*
++ * by default assume old behaviour and bounce for any highmem page
++ */
++ blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
++}
++
++EXPORT_SYMBOL(blk_queue_make_request);
++
++/**
++ * blk_queue_bounce_limit - set bounce buffer limit for queue
++ * @q: the request queue for the device
++ * @dma_addr: bus address limit
+ *
-+ * We hold class->sem in this function, so it can not be
-+ * re-acquired in @match, otherwise it will self-deadlocking. For
-+ * example, calls to add or remove class members would be verboten.
-+ */
-+struct device *class_find_device(struct class *class, void *data,
-+ int (*match)(struct device *, void *))
++ * Description:
++ * Different hardware can have different requirements as to what pages
++ * it can do I/O directly to. A low level driver can call
++ * blk_queue_bounce_limit to have lower memory pages allocated as bounce
++ * buffers for doing I/O to pages residing above @page.
++ **/
++void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
+{
-+ struct device *dev;
-+ int found = 0;
++ unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT;
++ int dma = 0;
+
-+ if (!class)
-+ return NULL;
++ q->bounce_gfp = GFP_NOIO;
++#if BITS_PER_LONG == 64
++ /* Assume anything <= 4GB can be handled by IOMMU.
++ Actually some IOMMUs can handle everything, but I don't
++ know of a way to test this here. */
++ if (bounce_pfn < (min_t(u64,0xffffffff,BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
++ dma = 1;
++ q->bounce_pfn = max_low_pfn;
++#else
++ if (bounce_pfn < blk_max_low_pfn)
++ dma = 1;
++ q->bounce_pfn = bounce_pfn;
++#endif
++ if (dma) {
++ init_emergency_isa_pool();
++ q->bounce_gfp = GFP_NOIO | GFP_DMA;
++ q->bounce_pfn = bounce_pfn;
++ }
++}
+
-+ down(&class->sem);
-+ list_for_each_entry(dev, &class->devices, node) {
-+ dev = get_device(dev);
-+ if (dev) {
-+ if (match(dev, data)) {
-+ found = 1;
-+ break;
-+ } else
-+ put_device(dev);
-+ } else
-+ break;
++EXPORT_SYMBOL(blk_queue_bounce_limit);
++
++/**
++ * blk_queue_max_sectors - set max sectors for a request for this queue
++ * @q: the request queue for the device
++ * @max_sectors: max sectors in the usual 512b unit
++ *
++ * Description:
++ * Enables a low level driver to set an upper limit on the size of
++ * received requests.
++ **/
++void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
++{
++ if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
++ max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
++ printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
+ }
-+ up(&class->sem);
+
-+ return found ? dev : NULL;
++ if (BLK_DEF_MAX_SECTORS > max_sectors)
++ q->max_hw_sectors = q->max_sectors = max_sectors;
++ else {
++ q->max_sectors = BLK_DEF_MAX_SECTORS;
++ q->max_hw_sectors = max_sectors;
++ }
+}
-+EXPORT_SYMBOL_GPL(class_find_device);
++
++EXPORT_SYMBOL(blk_queue_max_sectors);
+
+/**
-+ * class_find_child - device iterator for locating a particular class_device
-+ * @class: the class we're iterating
-+ * @data: data for the match function
-+ * @match: function to check class_device
++ * blk_queue_max_phys_segments - set max phys segments for a request for this queue
++ * @q: the request queue for the device
++ * @max_segments: max number of segments
+ *
-+ * This function returns a reference to a class_device that is 'found' for
-+ * later use, as determined by the @match callback.
++ * Description:
++ * Enables a low level driver to set an upper limit on the number of
++ * physical data segments in a request. This would be the largest sized
++ * scatter list the driver could handle.
++ **/
++void blk_queue_max_phys_segments(struct request_queue *q,
++ unsigned short max_segments)
++{
++ if (!max_segments) {
++ max_segments = 1;
++ printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
++ }
++
++ q->max_phys_segments = max_segments;
++}
++
++EXPORT_SYMBOL(blk_queue_max_phys_segments);
++
++/**
++ * blk_queue_max_hw_segments - set max hw segments for a request for this queue
++ * @q: the request queue for the device
++ * @max_segments: max number of segments
+ *
-+ * The callback should return 0 if the class_device doesn't match and non-zero
-+ * if it does. If the callback returns non-zero, this function will
-+ * return to the caller and not iterate over any more class_devices.
++ * Description:
++ * Enables a low level driver to set an upper limit on the number of
++ * hw data segments in a request. This would be the largest number of
++ * address/length pairs the host adapter can actually give at once
++ * to the device.
++ **/
++void blk_queue_max_hw_segments(struct request_queue *q,
++ unsigned short max_segments)
++{
++ if (!max_segments) {
++ max_segments = 1;
++ printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
++ }
++
++ q->max_hw_segments = max_segments;
++}
++
++EXPORT_SYMBOL(blk_queue_max_hw_segments);
++
++/**
++ * blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
++ * @q: the request queue for the device
++ * @max_size: max size of segment in bytes
+ *
-+ * Note, you will need to drop the reference with class_device_put() after use.
++ * Description:
++ * Enables a low level driver to set an upper limit on the size of a
++ * coalesced segment
++ **/
++void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
++{
++ if (max_size < PAGE_CACHE_SIZE) {
++ max_size = PAGE_CACHE_SIZE;
++ printk("%s: set to minimum %d\n", __FUNCTION__, max_size);
++ }
++
++ q->max_segment_size = max_size;
++}
++
++EXPORT_SYMBOL(blk_queue_max_segment_size);
++
++/**
++ * blk_queue_hardsect_size - set hardware sector size for the queue
++ * @q: the request queue for the device
++ * @size: the hardware sector size, in bytes
+ *
-+ * We hold class->sem in this function, so it can not be
-+ * re-acquired in @match, otherwise it will self-deadlocking. For
-+ * example, calls to add or remove class members would be verboten.
++ * Description:
++ * This should typically be set to the lowest possible sector size
++ * that the hardware can operate on (possibly without reverting to
++ * even internal read-modify-write operations). Usually the default
++ * of 512 covers most hardware.
++ **/
++void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
++{
++ q->hardsect_size = size;
++}
++
++EXPORT_SYMBOL(blk_queue_hardsect_size);
++
++/*
++ * Returns the minimum that is _not_ zero, unless both are zero.
+ */
-+struct class_device *class_find_child(struct class *class, void *data,
-+ int (*match)(struct class_device *, void *))
++#define min_not_zero(l, r) (l == 0) ? r : ((r == 0) ? l : min(l, r))
++
++/**
++ * blk_queue_stack_limits - inherit underlying queue limits for stacked drivers
++ * @t: the stacking driver (top)
++ * @b: the underlying device (bottom)
++ **/
++void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
+{
-+ struct class_device *dev;
-+ int found = 0;
++ /* zero is "infinity" */
++ t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
++ t->max_hw_sectors = min_not_zero(t->max_hw_sectors,b->max_hw_sectors);
+
-+ if (!class)
-+ return NULL;
++ t->max_phys_segments = min(t->max_phys_segments,b->max_phys_segments);
++ t->max_hw_segments = min(t->max_hw_segments,b->max_hw_segments);
++ t->max_segment_size = min(t->max_segment_size,b->max_segment_size);
++ t->hardsect_size = max(t->hardsect_size,b->hardsect_size);
++ if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags))
++ clear_bit(QUEUE_FLAG_CLUSTER, &t->queue_flags);
++}
+
-+ down(&class->sem);
-+ list_for_each_entry(dev, &class->children, node) {
-+ dev = class_device_get(dev);
-+ if (dev) {
-+ if (match(dev, data)) {
-+ found = 1;
-+ break;
-+ } else
-+ class_device_put(dev);
-+ } else
-+ break;
-+ }
-+ up(&class->sem);
++EXPORT_SYMBOL(blk_queue_stack_limits);
+
-+ return found ? dev : NULL;
++/**
++ * blk_queue_dma_drain - Set up a drain buffer for excess dma.
++ *
++ * @q: the request queue for the device
++ * @buf: physically contiguous buffer
++ * @size: size of the buffer in bytes
++ *
++ * Some devices have excess DMA problems and can't simply discard (or
++ * zero fill) the unwanted piece of the transfer. They have to have a
++ * real area of memory to transfer it into. The use case for this is
++ * ATAPI devices in DMA mode. If the packet command causes a transfer
++ * bigger than the transfer size, some HBAs will lock up if there
++ * aren't DMA elements to contain the excess transfer. What this API
++ * does is adjust the queue so that the buf is always appended
++ * silently to the scatterlist.
++ *
++ * Note: This routine adjusts max_hw_segments to make room for
++ * appending the drain buffer. If you call
++ * blk_queue_max_hw_segments() or blk_queue_max_phys_segments() after
++ * calling this routine, you must set the limit to one fewer than your
++ * device can support otherwise there won't be room for the drain
++ * buffer.
++ */
++int blk_queue_dma_drain(struct request_queue *q, void *buf,
++ unsigned int size)
++{
++ if (q->max_hw_segments < 2 || q->max_phys_segments < 2)
++ return -EINVAL;
++ /* make room for appending the drain */
++ --q->max_hw_segments;
++ --q->max_phys_segments;
++ q->dma_drain_buffer = buf;
++ q->dma_drain_size = size;
++
++ return 0;
+}
-+EXPORT_SYMBOL_GPL(class_find_child);
++
++EXPORT_SYMBOL_GPL(blk_queue_dma_drain);
++
++/**
++ * blk_queue_segment_boundary - set boundary rules for segment merging
++ * @q: the request queue for the device
++ * @mask: the memory boundary mask
++ **/
++void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
++{
++ if (mask < PAGE_CACHE_SIZE - 1) {
++ mask = PAGE_CACHE_SIZE - 1;
++ printk("%s: set to minimum %lx\n", __FUNCTION__, mask);
++ }
++
++ q->seg_boundary_mask = mask;
++}
++
++EXPORT_SYMBOL(blk_queue_segment_boundary);
++
++/**
++ * blk_queue_dma_alignment - set dma length and memory alignment
++ * @q: the request queue for the device
++ * @mask: alignment mask
++ *
++ * Description:
++ *    Set required memory and length alignment for direct DMA transactions.
++ *    This is used when building direct I/O requests for the queue.
++ *
++ **/
++void blk_queue_dma_alignment(struct request_queue *q, int mask)
++{
++ q->dma_alignment = mask;
++}
++
++EXPORT_SYMBOL(blk_queue_dma_alignment);
++
++/**
++ * blk_queue_update_dma_alignment - update dma length and memory alignment
++ * @q: the request queue for the device
++ * @mask: alignment mask
++ *
++ * Description:
++ *    Update required memory and length alignment for direct DMA transactions.
++ * If the requested alignment is larger than the current alignment, then
++ * the current queue alignment is updated to the new value, otherwise it
++ * is left alone. The design of this is to allow multiple objects
++ * (driver, device, transport etc) to set their respective
++ * alignments without having them interfere.
++ *
++ **/
++void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
++{
++ BUG_ON(mask > PAGE_SIZE);
++
++ if (mask > q->dma_alignment)
++ q->dma_alignment = mask;
++}
++
++EXPORT_SYMBOL(blk_queue_update_dma_alignment);
++
++int __init blk_settings_init(void)
++{
++ blk_max_low_pfn = max_low_pfn - 1;
++ blk_max_pfn = max_pfn - 1;
++ return 0;
++}
++subsys_initcall(blk_settings_init);
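The drain-buffer setup in blk_queue_dma_drain() above reserves one scatterlist slot by decrementing both segment limits before recording the buffer. A minimal user-space sketch of that accounting (hypothetical struct and names, not the kernel API):

```c
#include <stddef.h>

/* Hypothetical stand-in for the few request_queue fields used here. */
struct queue_limits {
	unsigned int max_hw_segments;
	unsigned int max_phys_segments;
	void *dma_drain_buffer;
	unsigned int dma_drain_size;
};

/* Mirrors the logic of blk_queue_dma_drain(): refuse queues that cannot
 * spare a segment, then reserve one slot for the drain buffer. */
int setup_dma_drain(struct queue_limits *q, void *buf, unsigned int size)
{
	if (q->max_hw_segments < 2 || q->max_phys_segments < 2)
		return -1;	/* -EINVAL in the kernel */
	--q->max_hw_segments;
	--q->max_phys_segments;
	q->dma_drain_buffer = buf;
	q->dma_drain_size = size;
	return 0;
}
```

This is why the kerneldoc warns callers of blk_queue_max_hw_segments() to set their limit one lower afterwards: the reserved slot must survive later limit changes.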
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+new file mode 100644
+index 0000000..bc28776
+--- /dev/null
++++ b/block/blk-sysfs.c
+@@ -0,0 +1,309 @@
++/*
++ * Functions related to sysfs handling
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/blktrace_api.h>
++
++#include "blk.h"
++
++struct queue_sysfs_entry {
++ struct attribute attr;
++ ssize_t (*show)(struct request_queue *, char *);
++ ssize_t (*store)(struct request_queue *, const char *, size_t);
++};
++
++static ssize_t
++queue_var_show(unsigned int var, char *page)
++{
++ return sprintf(page, "%d\n", var);
++}
++
++static ssize_t
++queue_var_store(unsigned long *var, const char *page, size_t count)
++{
++ char *p = (char *) page;
++
++ *var = simple_strtoul(p, &p, 10);
++ return count;
++}
++
++static ssize_t queue_requests_show(struct request_queue *q, char *page)
++{
++ return queue_var_show(q->nr_requests, (page));
++}
++
++static ssize_t
++queue_requests_store(struct request_queue *q, const char *page, size_t count)
++{
++ struct request_list *rl = &q->rq;
++ unsigned long nr;
++ int ret = queue_var_store(&nr, page, count);
++ if (nr < BLKDEV_MIN_RQ)
++ nr = BLKDEV_MIN_RQ;
++
++ spin_lock_irq(q->queue_lock);
++ q->nr_requests = nr;
++ blk_queue_congestion_threshold(q);
++
++ if (rl->count[READ] >= queue_congestion_on_threshold(q))
++ blk_set_queue_congested(q, READ);
++ else if (rl->count[READ] < queue_congestion_off_threshold(q))
++ blk_clear_queue_congested(q, READ);
++
++ if (rl->count[WRITE] >= queue_congestion_on_threshold(q))
++ blk_set_queue_congested(q, WRITE);
++ else if (rl->count[WRITE] < queue_congestion_off_threshold(q))
++ blk_clear_queue_congested(q, WRITE);
++
++ if (rl->count[READ] >= q->nr_requests) {
++ blk_set_queue_full(q, READ);
++ } else if (rl->count[READ]+1 <= q->nr_requests) {
++ blk_clear_queue_full(q, READ);
++ wake_up(&rl->wait[READ]);
++ }
++
++ if (rl->count[WRITE] >= q->nr_requests) {
++ blk_set_queue_full(q, WRITE);
++ } else if (rl->count[WRITE]+1 <= q->nr_requests) {
++ blk_clear_queue_full(q, WRITE);
++ wake_up(&rl->wait[WRITE]);
++ }
++ spin_unlock_irq(q->queue_lock);
++ return ret;
++}
++
++static ssize_t queue_ra_show(struct request_queue *q, char *page)
++{
++ int ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10);
++
++ return queue_var_show(ra_kb, (page));
++}
++
++static ssize_t
++queue_ra_store(struct request_queue *q, const char *page, size_t count)
++{
++ unsigned long ra_kb;
++ ssize_t ret = queue_var_store(&ra_kb, page, count);
++
++ spin_lock_irq(q->queue_lock);
++ q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
++ spin_unlock_irq(q->queue_lock);
++
++ return ret;
++}
++
++static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
++{
++ int max_sectors_kb = q->max_sectors >> 1;
++
++ return queue_var_show(max_sectors_kb, (page));
++}
++
++static ssize_t queue_hw_sector_size_show(struct request_queue *q, char *page)
++{
++ return queue_var_show(q->hardsect_size, page);
++}
++
++static ssize_t
++queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
++{
++ unsigned long max_sectors_kb,
++ max_hw_sectors_kb = q->max_hw_sectors >> 1,
++ page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
++ ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
++
++ if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
++ return -EINVAL;
++ /*
++ * Take the queue lock to update the readahead and max_sectors
++ * values synchronously:
++ */
++ spin_lock_irq(q->queue_lock);
++ q->max_sectors = max_sectors_kb << 1;
++ spin_unlock_irq(q->queue_lock);
++
++ return ret;
++}
++
++static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
++{
++ int max_hw_sectors_kb = q->max_hw_sectors >> 1;
++
++ return queue_var_show(max_hw_sectors_kb, (page));
++}
++
++
++static struct queue_sysfs_entry queue_requests_entry = {
++ .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
++ .show = queue_requests_show,
++ .store = queue_requests_store,
++};
++
++static struct queue_sysfs_entry queue_ra_entry = {
++ .attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
++ .show = queue_ra_show,
++ .store = queue_ra_store,
++};
++
++static struct queue_sysfs_entry queue_max_sectors_entry = {
++ .attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
++ .show = queue_max_sectors_show,
++ .store = queue_max_sectors_store,
++};
++
++static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
++ .attr = {.name = "max_hw_sectors_kb", .mode = S_IRUGO },
++ .show = queue_max_hw_sectors_show,
++};
++
++static struct queue_sysfs_entry queue_iosched_entry = {
++ .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR },
++ .show = elv_iosched_show,
++ .store = elv_iosched_store,
++};
++
++static struct queue_sysfs_entry queue_hw_sector_size_entry = {
++ .attr = {.name = "hw_sector_size", .mode = S_IRUGO },
++ .show = queue_hw_sector_size_show,
++};
++
++static struct attribute *default_attrs[] = {
++ &queue_requests_entry.attr,
++ &queue_ra_entry.attr,
++ &queue_max_hw_sectors_entry.attr,
++ &queue_max_sectors_entry.attr,
++ &queue_iosched_entry.attr,
++ &queue_hw_sector_size_entry.attr,
++ NULL,
++};
++
++#define to_queue(atr) container_of((atr), struct queue_sysfs_entry, attr)
++
++static ssize_t
++queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
++{
++ struct queue_sysfs_entry *entry = to_queue(attr);
++ struct request_queue *q =
++ container_of(kobj, struct request_queue, kobj);
++ ssize_t res;
++
++ if (!entry->show)
++ return -EIO;
++ mutex_lock(&q->sysfs_lock);
++ if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
++ mutex_unlock(&q->sysfs_lock);
++ return -ENOENT;
++ }
++ res = entry->show(q, page);
++ mutex_unlock(&q->sysfs_lock);
++ return res;
++}
++
++static ssize_t
++queue_attr_store(struct kobject *kobj, struct attribute *attr,
++ const char *page, size_t length)
++{
++ struct queue_sysfs_entry *entry = to_queue(attr);
++ struct request_queue *q = container_of(kobj, struct request_queue, kobj);
++
++ ssize_t res;
++
++ if (!entry->store)
++ return -EIO;
++ mutex_lock(&q->sysfs_lock);
++ if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
++ mutex_unlock(&q->sysfs_lock);
++ return -ENOENT;
++ }
++ res = entry->store(q, page, length);
++ mutex_unlock(&q->sysfs_lock);
++ return res;
++}
++
++/**
++ * blk_cleanup_queue: - release a &struct request_queue when it is no longer needed
++ * @kobj: the kobject of the request queue to be released
++ *
++ * Description:
++ * blk_cleanup_queue is the pair to blk_init_queue() or
++ * blk_queue_make_request(). It should be called when a request queue is
++ * being released; typically when a block device is being de-registered.
++ * Currently, its primary task is to free all the &struct request
++ * structures that were allocated to the queue and the queue itself.
++ *
++ * Caveat:
++ * Hopefully the low level driver will have finished any
++ * outstanding requests first...
++ **/
++static void blk_release_queue(struct kobject *kobj)
++{
++ struct request_queue *q =
++ container_of(kobj, struct request_queue, kobj);
++ struct request_list *rl = &q->rq;
++
++ blk_sync_queue(q);
++
++ if (rl->rq_pool)
++ mempool_destroy(rl->rq_pool);
++
++ if (q->queue_tags)
++ __blk_queue_free_tags(q);
++
++ blk_trace_shutdown(q);
++
++ bdi_destroy(&q->backing_dev_info);
++ kmem_cache_free(blk_requestq_cachep, q);
++}
++
++static struct sysfs_ops queue_sysfs_ops = {
++ .show = queue_attr_show,
++ .store = queue_attr_store,
++};
++
++struct kobj_type blk_queue_ktype = {
++ .sysfs_ops = &queue_sysfs_ops,
++ .default_attrs = default_attrs,
++ .release = blk_release_queue,
++};
++
++int blk_register_queue(struct gendisk *disk)
++{
++ int ret;
++
++ struct request_queue *q = disk->queue;
++
++ if (!q || !q->request_fn)
++ return -ENXIO;
++
++ ret = kobject_add(&q->kobj, kobject_get(&disk->dev.kobj),
++ "%s", "queue");
++ if (ret < 0)
++ return ret;
++
++ kobject_uevent(&q->kobj, KOBJ_ADD);
++
++ ret = elv_register_queue(q);
++ if (ret) {
++ kobject_uevent(&q->kobj, KOBJ_REMOVE);
++ kobject_del(&q->kobj);
++ return ret;
++ }
++
++ return 0;
++}
++
++void blk_unregister_queue(struct gendisk *disk)
++{
++ struct request_queue *q = disk->queue;
++
++ if (q && q->request_fn) {
++ elv_unregister_queue(q);
++
++ kobject_uevent(&q->kobj, KOBJ_REMOVE);
++ kobject_del(&q->kobj);
++ kobject_put(&disk->dev.kobj);
++ }
++}
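blk-sysfs.c above follows the standard kobject attribute pattern: a static table of entries, each pairing an attribute name with show/store callbacks, dispatched generically by recovering the entry from the attribute pointer with container_of(). A freestanding sketch of that dispatch (hypothetical names, no real sysfs involved):

```c
#include <stdio.h>
#include <string.h>
#include <stddef.h>
#include <sys/types.h>

struct attribute { const char *name; };

struct queue { int nr_requests; };

/* One entry per sysfs file, as in struct queue_sysfs_entry. */
struct entry {
	struct attribute attr;
	ssize_t (*show)(struct queue *, char *);
};

static ssize_t requests_show(struct queue *q, char *page)
{
	return sprintf(page, "%d\n", q->nr_requests);
}

static struct entry requests_entry = { { "nr_requests" }, requests_show };

/* Plain-C equivalent of container_of(attr, struct entry, attr). */
#define to_entry(a) ((struct entry *)((char *)(a) - offsetof(struct entry, attr)))

/* Generic dispatch, like queue_attr_show(): recover the entry that owns
 * this attribute, then call its show() hook. */
ssize_t attr_show(struct queue *q, struct attribute *attr, char *page)
{
	struct entry *e = to_entry(attr);

	if (!e->show)
		return -1;	/* -EIO in the kernel */
	return e->show(q, page);
}
```

The kernel version additionally takes q->sysfs_lock and bails out with -ENOENT on a dead queue before invoking the hook.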
+diff --git a/block/blk-tag.c b/block/blk-tag.c
+new file mode 100644
+index 0000000..d1fd300
+--- /dev/null
++++ b/block/blk-tag.c
+@@ -0,0 +1,396 @@
++/*
++ * Functions related to tagged command queuing
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++
++/**
++ * blk_queue_find_tag - find a request by its tag and queue
++ * @q: The request queue for the device
++ * @tag: The tag of the request
++ *
++ * Notes:
++ * Should be used when a device returns a tag and you want to match
++ * it with a request.
++ *
++ * no locks need be held.
++ **/
++struct request *blk_queue_find_tag(struct request_queue *q, int tag)
++{
++ return blk_map_queue_find_tag(q->queue_tags, tag);
++}
++
++EXPORT_SYMBOL(blk_queue_find_tag);
++
++/**
++ * __blk_free_tags - release a given set of tag maintenance info
++ * @bqt: the tag map to free
++ *
++ * Tries to free the specified @bqt@. Returns true if it was
++ * actually freed and false if there are still references using it
++ */
++static int __blk_free_tags(struct blk_queue_tag *bqt)
++{
++ int retval;
++
++ retval = atomic_dec_and_test(&bqt->refcnt);
++ if (retval) {
++ BUG_ON(bqt->busy);
++
++ kfree(bqt->tag_index);
++ bqt->tag_index = NULL;
++
++ kfree(bqt->tag_map);
++ bqt->tag_map = NULL;
++
++ kfree(bqt);
++ }
++
++ return retval;
++}
++
++/**
++ * __blk_queue_free_tags - release tag maintenance info
++ * @q: the request queue for the device
++ *
++ * Notes:
++ * blk_cleanup_queue() will take care of calling this function, if tagging
++ * has been used. So there's no need to call this directly.
++ **/
++void __blk_queue_free_tags(struct request_queue *q)
++{
++ struct blk_queue_tag *bqt = q->queue_tags;
++
++ if (!bqt)
++ return;
++
++ __blk_free_tags(bqt);
++
++ q->queue_tags = NULL;
++ q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
++}
++
++/**
++ * blk_free_tags - release a given set of tag maintenance info
++ * @bqt: the tag map to free
++ *
++ * For externally managed @bqt@ frees the map. Callers of this
++ * function must guarantee to have released all the queues that
++ * might have been using this tag map.
++ */
++void blk_free_tags(struct blk_queue_tag *bqt)
++{
++ if (unlikely(!__blk_free_tags(bqt)))
++ BUG();
++}
++EXPORT_SYMBOL(blk_free_tags);
++
++/**
++ * blk_queue_free_tags - release tag maintenance info
++ * @q: the request queue for the device
++ *
++ * Notes:
++ * This is used to disable tagged queuing on a device, yet leave
++ * the queue in function.
++ **/
++void blk_queue_free_tags(struct request_queue *q)
++{
++ clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
++}
++
++EXPORT_SYMBOL(blk_queue_free_tags);
++
++static int
++init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth)
++{
++ struct request **tag_index;
++ unsigned long *tag_map;
++ int nr_ulongs;
++
++ if (q && depth > q->nr_requests * 2) {
++ depth = q->nr_requests * 2;
++ printk(KERN_ERR "%s: adjusted depth to %d\n",
++ __FUNCTION__, depth);
++ }
++
++ tag_index = kzalloc(depth * sizeof(struct request *), GFP_ATOMIC);
++ if (!tag_index)
++ goto fail;
++
++ nr_ulongs = ALIGN(depth, BITS_PER_LONG) / BITS_PER_LONG;
++ tag_map = kzalloc(nr_ulongs * sizeof(unsigned long), GFP_ATOMIC);
++ if (!tag_map)
++ goto fail;
++
++ tags->real_max_depth = depth;
++ tags->max_depth = depth;
++ tags->tag_index = tag_index;
++ tags->tag_map = tag_map;
++
++ return 0;
++fail:
++ kfree(tag_index);
++ return -ENOMEM;
++}
++
++static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q,
++ int depth)
++{
++ struct blk_queue_tag *tags;
++
++ tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
++ if (!tags)
++ goto fail;
++
++ if (init_tag_map(q, tags, depth))
++ goto fail;
++
++ tags->busy = 0;
++ atomic_set(&tags->refcnt, 1);
++ return tags;
++fail:
++ kfree(tags);
++ return NULL;
++}
++
++/**
++ * blk_init_tags - initialize the tag info for an external tag map
++ * @depth: the maximum queue depth supported
++ * @tags: the tag to use
++ **/
++struct blk_queue_tag *blk_init_tags(int depth)
++{
++ return __blk_queue_init_tags(NULL, depth);
++}
++EXPORT_SYMBOL(blk_init_tags);
++
++/**
++ * blk_queue_init_tags - initialize the queue tag info
++ * @q: the request queue for the device
++ * @depth: the maximum queue depth supported
++ * @tags: the tag to use
++ **/
++int blk_queue_init_tags(struct request_queue *q, int depth,
++ struct blk_queue_tag *tags)
++{
++ int rc;
++
++ BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
++
++ if (!tags && !q->queue_tags) {
++ tags = __blk_queue_init_tags(q, depth);
++
++ if (!tags)
++ goto fail;
++ } else if (q->queue_tags) {
++ if ((rc = blk_queue_resize_tags(q, depth)))
++ return rc;
++ set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
++ return 0;
++ } else
++ atomic_inc(&tags->refcnt);
++
++ /*
++ * assign it, all done
++ */
++ q->queue_tags = tags;
++ q->queue_flags |= (1 << QUEUE_FLAG_QUEUED);
++ INIT_LIST_HEAD(&q->tag_busy_list);
++ return 0;
++fail:
++ kfree(tags);
++ return -ENOMEM;
++}
++
++EXPORT_SYMBOL(blk_queue_init_tags);
++
++/**
++ * blk_queue_resize_tags - change the queueing depth
++ * @q: the request queue for the device
++ * @new_depth: the new max command queueing depth
++ *
++ * Notes:
++ * Must be called with the queue lock held.
++ **/
++int blk_queue_resize_tags(struct request_queue *q, int new_depth)
++{
++ struct blk_queue_tag *bqt = q->queue_tags;
++ struct request **tag_index;
++ unsigned long *tag_map;
++ int max_depth, nr_ulongs;
++
++ if (!bqt)
++ return -ENXIO;
++
++ /*
++ * If we already have a large enough real_max_depth, just
++ * adjust max_depth. *NOTE* as requests with tag value
++ * between new_depth and real_max_depth can be in-flight, tag
++ * map can not be shrunk blindly here.
++ */
++ if (new_depth <= bqt->real_max_depth) {
++ bqt->max_depth = new_depth;
++ return 0;
++ }
++
++ /*
++ * Currently cannot replace a shared tag map with a new
++ * one, so error out if this is the case
++ */
++ if (atomic_read(&bqt->refcnt) != 1)
++ return -EBUSY;
++
++ /*
++ * save the old state info, so we can copy it back
++ */
++ tag_index = bqt->tag_index;
++ tag_map = bqt->tag_map;
++ max_depth = bqt->real_max_depth;
++
++ if (init_tag_map(q, bqt, new_depth))
++ return -ENOMEM;
++
++ memcpy(bqt->tag_index, tag_index, max_depth * sizeof(struct request *));
++ nr_ulongs = ALIGN(max_depth, BITS_PER_LONG) / BITS_PER_LONG;
++ memcpy(bqt->tag_map, tag_map, nr_ulongs * sizeof(unsigned long));
++
++ kfree(tag_index);
++ kfree(tag_map);
++ return 0;
++}
++
++EXPORT_SYMBOL(blk_queue_resize_tags);
++
++/**
++ * blk_queue_end_tag - end tag operations for a request
++ * @q: the request queue for the device
++ * @rq: the request that has completed
++ *
++ * Description:
++ * Typically called when end_that_request_first() returns 0, meaning
++ * all transfers have been done for a request. It's important to call
++ * this function before end_that_request_last(), as that will put the
++ * request back on the free list thus corrupting the internal tag list.
++ *
++ * Notes:
++ * queue lock must be held.
++ **/
++void blk_queue_end_tag(struct request_queue *q, struct request *rq)
++{
++ struct blk_queue_tag *bqt = q->queue_tags;
++ int tag = rq->tag;
++
++ BUG_ON(tag == -1);
++
++ if (unlikely(tag >= bqt->real_max_depth))
++ /*
++ * This can happen after tag depth has been reduced.
++ * FIXME: how about a warning or info message here?
++ */
++ return;
++
++ list_del_init(&rq->queuelist);
++ rq->cmd_flags &= ~REQ_QUEUED;
++ rq->tag = -1;
++
++ if (unlikely(bqt->tag_index[tag] == NULL))
++ printk(KERN_ERR "%s: tag %d is missing\n",
++ __FUNCTION__, tag);
++
++ bqt->tag_index[tag] = NULL;
++
++ if (unlikely(!test_bit(tag, bqt->tag_map))) {
++ printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
++ __FUNCTION__, tag);
++ return;
++ }
++ /*
++ * The tag_map bit acts as a lock for tag_index[bit], so we need
++ * unlock memory barrier semantics.
++ */
++ clear_bit_unlock(tag, bqt->tag_map);
++ bqt->busy--;
++}
++
++EXPORT_SYMBOL(blk_queue_end_tag);
++
++/**
++ * blk_queue_start_tag - find a free tag and assign it
++ * @q: the request queue for the device
++ * @rq: the block request that needs tagging
++ *
++ * Description:
++ * This can either be used as a stand-alone helper, or possibly be
++ * assigned as the queue &prep_rq_fn (in which case &struct request
++ * automagically gets a tag assigned). Note that this function
++ * assumes that any type of request can be queued! If this is not
++ * true for your device, you must check the request type before
++ * calling this function. The request will also be removed from
++ * the request queue, so it is the driver's responsibility to re-add
++ * it if it should need to be restarted for some reason.
++ *
++ * Notes:
++ * queue lock must be held.
++ **/
++int blk_queue_start_tag(struct request_queue *q, struct request *rq)
++{
++ struct blk_queue_tag *bqt = q->queue_tags;
++ int tag;
++
++ if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
++ printk(KERN_ERR
++ "%s: request %p for device [%s] already tagged %d",
++ __FUNCTION__, rq,
++ rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->tag);
++ BUG();
++ }
++
++ /*
++ * Protect against shared tag maps, as we may not have exclusive
++ * access to the tag map.
++ */
++ do {
++ tag = find_first_zero_bit(bqt->tag_map, bqt->max_depth);
++ if (tag >= bqt->max_depth)
++ return 1;
++
++ } while (test_and_set_bit_lock(tag, bqt->tag_map));
++ /*
++ * We need lock ordering semantics given by test_and_set_bit_lock.
++ * See blk_queue_end_tag for details.
++ */
++
++ rq->cmd_flags |= REQ_QUEUED;
++ rq->tag = tag;
++ bqt->tag_index[tag] = rq;
++ blkdev_dequeue_request(rq);
++ list_add(&rq->queuelist, &q->tag_busy_list);
++ bqt->busy++;
++ return 0;
++}
++
++EXPORT_SYMBOL(blk_queue_start_tag);
++
++/**
++ * blk_queue_invalidate_tags - invalidate all pending tags
++ * @q: the request queue for the device
++ *
++ * Description:
++ * Hardware conditions may dictate a need to stop all pending requests.
++ * In this case, we will safely clear the block side of the tag queue and
++ * re-add all requests to the request queue in the right order.
++ *
++ * Notes:
++ * queue lock must be held.
++ **/
++void blk_queue_invalidate_tags(struct request_queue *q)
++{
++ struct list_head *tmp, *n;
++
++ list_for_each_safe(tmp, n, &q->tag_busy_list)
++ blk_requeue_request(q, list_entry_rq(tmp));
++}
++
++EXPORT_SYMBOL(blk_queue_invalidate_tags);
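blk_queue_start_tag() above claims a tag by scanning a bitmap for the first zero bit and setting it, retrying if another context wins the race; blk_queue_end_tag() clears the bit with release semantics. A simplified single-threaded model of that allocate/free shape (plain C, no atomics, hypothetical names):

```c
#define MAX_DEPTH 8

/* One bit per tag; set = busy. Simplified: no atomicity, so this models
 * only the find-first-zero / set / reuse behaviour, not the SMP-safe
 * test_and_set_bit_lock() loop the kernel uses. */
static unsigned long tag_map;

int start_tag(void)
{
	int tag;

	for (tag = 0; tag < MAX_DEPTH; tag++) {
		if (!(tag_map & (1UL << tag))) {
			tag_map |= 1UL << tag;
			return tag;
		}
	}
	return -1;	/* map exhausted: caller must retry later */
}

void end_tag(int tag)
{
	tag_map &= ~(1UL << tag);
}
```

In the kernel the set acts as a lock on tag_index[tag], which is why clearing uses clear_bit_unlock() with a memory barrier.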
+diff --git a/block/blk.h b/block/blk.h
+new file mode 100644
+index 0000000..ec898dd
+--- /dev/null
++++ b/block/blk.h
+@@ -0,0 +1,53 @@
++#ifndef BLK_INTERNAL_H
++#define BLK_INTERNAL_H
++
++/* Amount of time in which a process may batch requests */
++#define BLK_BATCH_TIME (HZ/50UL)
++
++/* Number of requests a "batching" process may submit */
++#define BLK_BATCH_REQ 32
++
++extern struct kmem_cache *blk_requestq_cachep;
++extern struct kobj_type blk_queue_ktype;
++
++void rq_init(struct request_queue *q, struct request *rq);
++void init_request_from_bio(struct request *req, struct bio *bio);
++void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
++ struct bio *bio);
++void __blk_queue_free_tags(struct request_queue *q);
++
++void blk_unplug_work(struct work_struct *work);
++void blk_unplug_timeout(unsigned long data);
++
++struct io_context *current_io_context(gfp_t gfp_flags, int node);
++
++int ll_back_merge_fn(struct request_queue *q, struct request *req,
++ struct bio *bio);
++int ll_front_merge_fn(struct request_queue *q, struct request *req,
++ struct bio *bio);
++int attempt_back_merge(struct request_queue *q, struct request *rq);
++int attempt_front_merge(struct request_queue *q, struct request *rq);
++void blk_recalc_rq_segments(struct request *rq);
++void blk_recalc_rq_sectors(struct request *rq, int nsect);
++
++void blk_queue_congestion_threshold(struct request_queue *q);
++
++/*
++ * Return the threshold (number of used requests) at which the queue is
++ * considered to be congested. It includes a little hysteresis to keep the
++ * context switch rate down.
++ */
++static inline int queue_congestion_on_threshold(struct request_queue *q)
++{
++ return q->nr_congestion_on;
++}
++
++/*
++ * The threshold at which a queue is considered to be uncongested
++ */
++static inline int queue_congestion_off_threshold(struct request_queue *q)
++{
++ return q->nr_congestion_off;
++}
++
++#endif
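The two thresholds declared in blk.h implement hysteresis: a queue is marked congested only at or above nr_congestion_on and cleared only below nr_congestion_off, so counts fluctuating around a single cut-off do not flip the state on every request. A minimal model of that behaviour (hypothetical threshold values):

```c
/* Hysteresis around queue fullness, as in blk.h: the "on" threshold sits
 * above the "off" threshold, so a state change needs a real swing. */
struct cq {
	int nr_congestion_on;	/* mark congested at/above this count */
	int nr_congestion_off;	/* clear congestion below this count */
	int congested;
};

void update_congestion(struct cq *q, int used)
{
	if (used >= q->nr_congestion_on)
		q->congested = 1;
	else if (used < q->nr_congestion_off)
		q->congested = 0;
	/* in between: keep the previous state (the hysteresis band) */
}
```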
+diff --git a/block/blktrace.c b/block/blktrace.c
+index 9b4da4a..568588c 100644
+--- a/block/blktrace.c
++++ b/block/blktrace.c
+@@ -235,7 +235,7 @@ static void blk_trace_cleanup(struct blk_trace *bt)
+ kfree(bt);
+ }
- int class_interface_register(struct class_interface *class_intf)
+-static int blk_trace_remove(struct request_queue *q)
++int blk_trace_remove(struct request_queue *q)
{
-@@ -829,7 +973,7 @@ int class_interface_register(struct class_interface *class_intf)
+ struct blk_trace *bt;
- void class_interface_unregister(struct class_interface *class_intf)
- {
-- struct class * parent = class_intf->class;
-+ struct class *parent = class_intf->class;
- struct class_device *class_dev;
- struct device *dev;
+@@ -249,6 +249,7 @@ static int blk_trace_remove(struct request_queue *q)
-@@ -853,15 +997,14 @@ void class_interface_unregister(struct class_interface *class_intf)
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(blk_trace_remove);
- int __init classes_init(void)
+ static int blk_dropped_open(struct inode *inode, struct file *filp)
{
-- int retval;
--
-- retval = subsystem_register(&class_subsys);
-- if (retval)
-- return retval;
-+ class_kset = kset_create_and_add("class", NULL, NULL);
-+ if (!class_kset)
-+ return -ENOMEM;
+@@ -316,18 +317,17 @@ static struct rchan_callbacks blk_relay_callbacks = {
+ /*
+ * Setup everything required to start tracing
+ */
+-int do_blk_trace_setup(struct request_queue *q, struct block_device *bdev,
++int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ struct blk_user_trace_setup *buts)
+ {
+ struct blk_trace *old_bt, *bt = NULL;
+ struct dentry *dir = NULL;
+- char b[BDEVNAME_SIZE];
+ int ret, i;
- /* ick, this is ugly, the things we go through to keep from showing up
- * in sysfs... */
- kset_init(&class_obj_subsys);
-+ kobject_set_name(&class_obj_subsys.kobj, "class_obj");
- if (!class_obj_subsys.kobj.parent)
- class_obj_subsys.kobj.parent = &class_obj_subsys.kobj;
- return 0;
-diff --git a/drivers/base/core.c b/drivers/base/core.c
-index 2683eac..edf3bbe 100644
---- a/drivers/base/core.c
-+++ b/drivers/base/core.c
-@@ -18,14 +18,14 @@
- #include <linux/string.h>
- #include <linux/kdev_t.h>
- #include <linux/notifier.h>
--
-+#include <linux/genhd.h>
- #include <asm/semaphore.h>
+ if (!buts->buf_size || !buts->buf_nr)
+ return -EINVAL;
- #include "base.h"
- #include "power/power.h"
+- strcpy(buts->name, bdevname(bdev, b));
++ strcpy(buts->name, name);
--int (*platform_notify)(struct device * dev) = NULL;
--int (*platform_notify_remove)(struct device * dev) = NULL;
-+int (*platform_notify)(struct device *dev) = NULL;
-+int (*platform_notify_remove)(struct device *dev) = NULL;
+ /*
+ * some device names have larger paths - convert the slashes
+@@ -352,7 +352,7 @@ int do_blk_trace_setup(struct request_queue *q, struct block_device *bdev,
+ goto err;
- /*
- * sysfs bindings for devices.
-@@ -51,11 +51,11 @@ EXPORT_SYMBOL(dev_driver_string);
- #define to_dev(obj) container_of(obj, struct device, kobj)
- #define to_dev_attr(_attr) container_of(_attr, struct device_attribute, attr)
+ bt->dir = dir;
+- bt->dev = bdev->bd_dev;
++ bt->dev = dev;
+ atomic_set(&bt->dropped, 0);
--static ssize_t
--dev_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
-+static ssize_t dev_attr_show(struct kobject *kobj, struct attribute *attr,
-+ char *buf)
+ ret = -EIO;
+@@ -399,8 +399,8 @@ err:
+ return ret;
+ }
+
+-static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
+- char __user *arg)
++int blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
++ char __user *arg)
{
-- struct device_attribute * dev_attr = to_dev_attr(attr);
-- struct device * dev = to_dev(kobj);
-+ struct device_attribute *dev_attr = to_dev_attr(attr);
-+ struct device *dev = to_dev(kobj);
- ssize_t ret = -EIO;
+ struct blk_user_trace_setup buts;
+ int ret;
+@@ -409,7 +409,7 @@ static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
+ if (ret)
+ return -EFAULT;
- if (dev_attr->show)
-@@ -63,12 +63,11 @@ dev_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
- return ret;
+- ret = do_blk_trace_setup(q, bdev, &buts);
++ ret = do_blk_trace_setup(q, name, dev, &buts);
+ if (ret)
+ return ret;
+
+@@ -418,8 +418,9 @@ static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
+
+ return 0;
}
++EXPORT_SYMBOL_GPL(blk_trace_setup);
--static ssize_t
--dev_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * buf, size_t count)
-+static ssize_t dev_attr_store(struct kobject *kobj, struct attribute *attr,
-+ const char *buf, size_t count)
+-static int blk_trace_startstop(struct request_queue *q, int start)
++int blk_trace_startstop(struct request_queue *q, int start)
{
-- struct device_attribute * dev_attr = to_dev_attr(attr);
-- struct device * dev = to_dev(kobj);
-+ struct device_attribute *dev_attr = to_dev_attr(attr);
-+ struct device *dev = to_dev(kobj);
- ssize_t ret = -EIO;
+ struct blk_trace *bt;
+ int ret;
+@@ -452,6 +453,7 @@ static int blk_trace_startstop(struct request_queue *q, int start)
- if (dev_attr->store)
-@@ -90,9 +89,9 @@ static struct sysfs_ops dev_sysfs_ops = {
- * reaches 0. We forward the call to the device's release
- * method, which should handle actually freeing the structure.
- */
--static void device_release(struct kobject * kobj)
-+static void device_release(struct kobject *kobj)
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(blk_trace_startstop);
+
+ /**
+ * blk_trace_ioctl: - handle the ioctls associated with tracing
+@@ -464,6 +466,7 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
{
-- struct device * dev = to_dev(kobj);
-+ struct device *dev = to_dev(kobj);
+ struct request_queue *q;
+ int ret, start = 0;
++ char b[BDEVNAME_SIZE];
- if (dev->release)
- dev->release(dev);
-@@ -101,8 +100,8 @@ static void device_release(struct kobject * kobj)
- else if (dev->class && dev->class->dev_release)
- dev->class->dev_release(dev);
- else {
-- printk(KERN_ERR "Device '%s' does not have a release() function, "
-- "it is broken and must be fixed.\n",
-+ printk(KERN_ERR "Device '%s' does not have a release() "
-+ "function, it is broken and must be fixed.\n",
- dev->bus_id);
- WARN_ON(1);
- }
-@@ -185,7 +184,8 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
- add_uevent_var(env, "PHYSDEVBUS=%s", dev->bus->name);
+ q = bdev_get_queue(bdev);
+ if (!q)
+@@ -473,7 +476,8 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
- if (dev->driver)
-- add_uevent_var(env, "PHYSDEVDRIVER=%s", dev->driver->name);
-+ add_uevent_var(env, "PHYSDEVDRIVER=%s",
-+ dev->driver->name);
- }
- #endif
+ switch (cmd) {
+ case BLKTRACESETUP:
+- ret = blk_trace_setup(q, bdev, arg);
++ strcpy(b, bdevname(bdev, b));
++ ret = blk_trace_setup(q, b, bdev->bd_dev, arg);
+ break;
+ case BLKTRACESTART:
+ start = 1;
+diff --git a/block/bsg.c b/block/bsg.c
+index 8e181ab..69b0a9d 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -445,6 +445,15 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
+ else
+ hdr->dout_resid = rq->data_len;
-@@ -193,15 +193,16 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
- if (dev->bus && dev->bus->uevent) {
- retval = dev->bus->uevent(dev, env);
- if (retval)
-- pr_debug ("%s: bus uevent() returned %d\n",
-- __FUNCTION__, retval);
-+ pr_debug("device: '%s': %s: bus uevent() returned %d\n",
-+ dev->bus_id, __FUNCTION__, retval);
- }
++ /*
++ * If the request generated a negative error number, return it
++ * (providing we aren't already returning an error); if it's
++ * just a protocol response (i.e. non negative), that gets
++ * processed above.
++ */
++ if (!ret && rq->errors < 0)
++ ret = rq->errors;
++
+ blk_rq_unmap_user(bio);
+ blk_put_request(rq);
- /* have the class specific function add its stuff */
- if (dev->class && dev->class->dev_uevent) {
- retval = dev->class->dev_uevent(dev, env);
- if (retval)
-- pr_debug("%s: class uevent() returned %d\n",
-+ pr_debug("device: '%s': %s: class uevent() "
-+ "returned %d\n", dev->bus_id,
- __FUNCTION__, retval);
- }
+@@ -837,6 +846,7 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ struct bsg_device *bd = file->private_data;
+ int __user *uarg = (int __user *) arg;
++ int ret;
-@@ -209,7 +210,8 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
- if (dev->type && dev->type->uevent) {
- retval = dev->type->uevent(dev, env);
- if (retval)
-- pr_debug("%s: dev_type uevent() returned %d\n",
-+ pr_debug("device: '%s': %s: dev_type uevent() "
-+ "returned %d\n", dev->bus_id,
- __FUNCTION__, retval);
+ switch (cmd) {
+ /*
+@@ -889,12 +899,12 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (rq->next_rq)
+ bidi_bio = rq->next_rq->bio;
+ blk_execute_rq(bd->queue, NULL, rq, 0);
+- blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);
++ ret = blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);
+
+ if (copy_to_user(uarg, &hdr, sizeof(hdr)))
+ return -EFAULT;
+
+- return 0;
++ return ret;
}
+ /*
+ * block device ioctls
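The bsg change above distinguishes protocol status from transport failure: a non-negative rq->errors is device status already reported through the response header, while a negative value is a -errno that should become the ioctl return value, but only if nothing earlier failed. A tiny model of that convention (hypothetical field and function names):

```c
/* Models the error convention blk_complete_sgv4_hdr_rq() adopts:
 * negative rq->errors is a -errno to surface; non-negative is device
 * protocol status handled elsewhere. An earlier error is preserved. */
struct req { int errors; };

int complete_rq(const struct req *rq, int ret_so_far)
{
	int ret = ret_so_far;

	if (!ret && rq->errors < 0)
		ret = rq->errors;
	return ret;
}
```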
+diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
+index 13553e0..f28d1fb 100644
+--- a/block/cfq-iosched.c
++++ b/block/cfq-iosched.c
+@@ -26,9 +26,9 @@ static const int cfq_slice_async_rq = 2;
+ static int cfq_slice_idle = HZ / 125;
-@@ -325,7 +327,8 @@ static int device_add_groups(struct device *dev,
- error = sysfs_create_group(&dev->kobj, groups[i]);
- if (error) {
- while (--i >= 0)
-- sysfs_remove_group(&dev->kobj, groups[i]);
-+ sysfs_remove_group(&dev->kobj,
-+ groups[i]);
- break;
- }
- }
-@@ -401,20 +404,15 @@ static ssize_t show_dev(struct device *dev, struct device_attribute *attr,
- static struct device_attribute devt_attr =
- __ATTR(dev, S_IRUGO, show_dev, NULL);
+ /*
+- * grace period before allowing idle class to get disk access
++ * offset from end of service tree
+ */
+-#define CFQ_IDLE_GRACE (HZ / 10)
++#define CFQ_IDLE_DELAY (HZ / 5)
--/*
-- * devices_subsys - structure to be registered with kobject core.
-- */
--
--decl_subsys(devices, &device_ktype, &device_uevent_ops);
+ /*
+ * below this threshold, we consider thinktime immediate
+@@ -98,8 +98,6 @@ struct cfq_data {
+ struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
+ struct cfq_queue *async_idle_cfqq;
+
+- struct timer_list idle_class_timer;
-
-+/* kset to create /sys/devices/ */
-+struct kset *devices_kset;
+ sector_t last_position;
+ unsigned long last_end_request;
- /**
-- * device_create_file - create sysfs attribute file for device.
-- * @dev: device.
-- * @attr: device attribute descriptor.
-+ * device_create_file - create sysfs attribute file for device.
-+ * @dev: device.
-+ * @attr: device attribute descriptor.
+@@ -199,8 +197,8 @@ CFQ_CFQQ_FNS(sync);
+
+ static void cfq_dispatch_insert(struct request_queue *, struct request *);
+ static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
+- struct task_struct *, gfp_t);
+-static struct cfq_io_context *cfq_cic_rb_lookup(struct cfq_data *,
++ struct io_context *, gfp_t);
++static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
+ struct io_context *);
+
+ static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
+@@ -384,12 +382,15 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
+ /*
+ * The below is leftmost cache rbtree addon
*/
--
--int device_create_file(struct device * dev, struct device_attribute * attr)
-+int device_create_file(struct device *dev, struct device_attribute *attr)
+-static struct rb_node *cfq_rb_first(struct cfq_rb_root *root)
++static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root)
{
- int error = 0;
- if (get_device(dev)) {
-@@ -425,12 +423,11 @@ int device_create_file(struct device * dev, struct device_attribute * attr)
+ if (!root->left)
+ root->left = rb_first(&root->rb);
+
+- return root->left;
++ if (root->left)
++ return rb_entry(root->left, struct cfq_queue, rb_node);
++
++ return NULL;
}
- /**
-- * device_remove_file - remove sysfs attribute file.
-- * @dev: device.
-- * @attr: device attribute descriptor.
-+ * device_remove_file - remove sysfs attribute file.
-+ * @dev: device.
-+ * @attr: device attribute descriptor.
- */
--
--void device_remove_file(struct device * dev, struct device_attribute * attr)
-+void device_remove_file(struct device *dev, struct device_attribute *attr)
+ static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
+@@ -446,12 +447,20 @@ static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
+ static void cfq_service_tree_add(struct cfq_data *cfqd,
+ struct cfq_queue *cfqq, int add_front)
{
- if (get_device(dev)) {
- sysfs_remove_file(&dev->kobj, &attr->attr);
-@@ -511,22 +508,20 @@ static void klist_children_put(struct klist_node *n)
- put_device(dev);
- }
+- struct rb_node **p = &cfqd->service_tree.rb.rb_node;
+- struct rb_node *parent = NULL;
++ struct rb_node **p, *parent;
++ struct cfq_queue *__cfqq;
+ unsigned long rb_key;
+ int left;
--
- /**
-- * device_initialize - init device structure.
-- * @dev: device.
-+ * device_initialize - init device structure.
-+ * @dev: device.
- *
-- * This prepares the device for use by other layers,
-- * including adding it to the device hierarchy.
-- * It is the first half of device_register(), if called by
-- * that, though it can also be called separately, so one
-- * may use @dev's fields (e.g. the refcount).
-+ * This prepares the device for use by other layers,
-+ * including adding it to the device hierarchy.
-+ * It is the first half of device_register(), if called by
-+ * that, though it can also be called separately, so one
-+ * may use @dev's fields (e.g. the refcount).
+- if (!add_front) {
++ if (cfq_class_idle(cfqq)) {
++ rb_key = CFQ_IDLE_DELAY;
++ parent = rb_last(&cfqd->service_tree.rb);
++ if (parent && parent != &cfqq->rb_node) {
++ __cfqq = rb_entry(parent, struct cfq_queue, rb_node);
++ rb_key += __cfqq->rb_key;
++ } else
++ rb_key += jiffies;
++ } else if (!add_front) {
+ rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
+ rb_key += cfqq->slice_resid;
+ cfqq->slice_resid = 0;
+@@ -469,8 +478,9 @@ static void cfq_service_tree_add(struct cfq_data *cfqd,
+ }
+
+ left = 1;
++ parent = NULL;
++ p = &cfqd->service_tree.rb.rb_node;
+ while (*p) {
+- struct cfq_queue *__cfqq;
+ struct rb_node **n;
+
+ parent = *p;
+@@ -524,8 +534,7 @@ static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ * add to busy list of queues for service, trying to be fair in ordering
+ * the pending list according to last request service
*/
--
- void device_initialize(struct device *dev)
+-static inline void
+-cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
++static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
-- kobj_set_kset_s(dev, devices_subsys);
-- kobject_init(&dev->kobj);
-+ dev->kobj.kset = devices_kset;
-+ kobject_init(&dev->kobj, &device_ktype);
- klist_init(&dev->klist_children, klist_children_get,
- klist_children_put);
- INIT_LIST_HEAD(&dev->dma_pools);
-@@ -539,36 +534,39 @@ void device_initialize(struct device *dev)
+ BUG_ON(cfq_cfqq_on_rr(cfqq));
+ cfq_mark_cfqq_on_rr(cfqq);
+@@ -538,8 +547,7 @@ cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ * Called when the cfqq no longer has requests pending, remove it from
+ * the service tree.
+ */
+-static inline void
+-cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
++static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ {
+ BUG_ON(!cfq_cfqq_on_rr(cfqq));
+ cfq_clear_cfqq_on_rr(cfqq);
+@@ -554,7 +562,7 @@ cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+ /*
+ * rb tree support functions
+ */
+-static inline void cfq_del_rq_rb(struct request *rq)
++static void cfq_del_rq_rb(struct request *rq)
+ {
+ struct cfq_queue *cfqq = RQ_CFQQ(rq);
+ struct cfq_data *cfqd = cfqq->cfqd;
+@@ -594,8 +602,7 @@ static void cfq_add_rq_rb(struct request *rq)
+ BUG_ON(!cfqq->next_rq);
}
- #ifdef CONFIG_SYSFS_DEPRECATED
--static struct kobject * get_device_parent(struct device *dev,
-- struct device *parent)
-+static struct kobject *get_device_parent(struct device *dev,
-+ struct device *parent)
+-static inline void
+-cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq)
++static void cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq)
{
-- /*
-- * Set the parent to the class, not the parent device
-- * for topmost devices in class hierarchy.
-- * This keeps sysfs from having a symlink to make old
-- * udevs happy
-- */
-+ /* class devices without a parent live in /sys/class/<classname>/ */
- if (dev->class && (!parent || parent->class != dev->class))
- return &dev->class->subsys.kobj;
-+ /* all other devices keep their parent */
- else if (parent)
- return &parent->kobj;
+ elv_rb_del(&cfqq->sort_list, rq);
+ cfqq->queued[rq_is_sync(rq)]--;
+@@ -609,7 +616,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
+ struct cfq_io_context *cic;
+ struct cfq_queue *cfqq;
- return NULL;
- }
-+
-+static inline void cleanup_device_parent(struct device *dev) {}
-+static inline void cleanup_glue_dir(struct device *dev,
-+ struct kobject *glue_dir) {}
- #else
- static struct kobject *virtual_device_parent(struct device *dev)
- {
- static struct kobject *virtual_dir = NULL;
+- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
++ cic = cfq_cic_lookup(cfqd, tsk->io_context);
+ if (!cic)
+ return NULL;
- if (!virtual_dir)
-- virtual_dir = kobject_add_dir(&devices_subsys.kobj, "virtual");
-+ virtual_dir = kobject_create_and_add("virtual",
-+ &devices_kset->kobj);
+@@ -721,7 +728,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
+ * Lookup the cfqq that this bio will be queued with. Allow
+ * merge only if rq is queued there.
+ */
+- cic = cfq_cic_rb_lookup(cfqd, current->io_context);
++ cic = cfq_cic_lookup(cfqd, current->io_context);
+ if (!cic)
+ return 0;
- return virtual_dir;
+@@ -732,15 +739,10 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq,
+ return 0;
}
--static struct kobject * get_device_parent(struct device *dev,
-- struct device *parent)
-+static struct kobject *get_device_parent(struct device *dev,
-+ struct device *parent)
+-static inline void
+-__cfq_set_active_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq)
++static void __cfq_set_active_queue(struct cfq_data *cfqd,
++ struct cfq_queue *cfqq)
{
-+ int retval;
-+
- if (dev->class) {
- struct kobject *kobj = NULL;
- struct kobject *parent_kobj;
-@@ -576,8 +574,8 @@ static struct kobject * get_device_parent(struct device *dev,
-
- /*
- * If we have no parent, we live in "virtual".
-- * Class-devices with a bus-device as parent, live
-- * in a class-directory to prevent namespace collisions.
-+ * Class-devices with a non class-device as parent, live
-+ * in a "glue" directory to prevent namespace collisions.
- */
- if (parent == NULL)
- parent_kobj = virtual_device_parent(dev);
-@@ -598,25 +596,45 @@ static struct kobject * get_device_parent(struct device *dev,
- return kobj;
+ if (cfqq) {
+- /*
+- * stop potential idle class queues waiting service
+- */
+- del_timer(&cfqd->idle_class_timer);
+-
+ cfqq->slice_end = 0;
+ cfq_clear_cfqq_must_alloc_slice(cfqq);
+ cfq_clear_cfqq_fifo_expire(cfqq);
+@@ -789,47 +791,16 @@ static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
+ __cfq_slice_expired(cfqd, cfqq, timed_out);
+ }
- /* or create a new class-directory at the parent device */
-- return kobject_kset_add_dir(&dev->class->class_dirs,
-- parent_kobj, dev->class->name);
-+ k = kobject_create();
-+ if (!k)
-+ return NULL;
-+ k->kset = &dev->class->class_dirs;
-+ retval = kobject_add(k, parent_kobj, "%s", dev->class->name);
-+ if (retval < 0) {
-+ kobject_put(k);
-+ return NULL;
-+ }
-+ /* do not emit an uevent for this simple "glue" directory */
-+ return k;
- }
+-static int start_idle_class_timer(struct cfq_data *cfqd)
+-{
+- unsigned long end = cfqd->last_end_request + CFQ_IDLE_GRACE;
+- unsigned long now = jiffies;
+-
+- if (time_before(now, end) &&
+- time_after_eq(now, cfqd->last_end_request)) {
+- mod_timer(&cfqd->idle_class_timer, end);
+- return 1;
+- }
+-
+- return 0;
+-}
+-
+ /*
+ * Get next queue for service. Unless we have a queue preemption,
+ * we'll simply select the first cfqq in the service tree.
+ */
+ static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd)
+ {
+- struct cfq_queue *cfqq;
+- struct rb_node *n;
+-
+ if (RB_EMPTY_ROOT(&cfqd->service_tree.rb))
+ return NULL;
- if (parent)
- return &parent->kobj;
- return NULL;
+- n = cfq_rb_first(&cfqd->service_tree);
+- cfqq = rb_entry(n, struct cfq_queue, rb_node);
+-
+- if (cfq_class_idle(cfqq)) {
+- /*
+- * if we have idle queues and no rt or be queues had
+- * pending requests, either allow immediate service if
+- * the grace period has passed or arm the idle grace
+- * timer
+- */
+- if (start_idle_class_timer(cfqd))
+- cfqq = NULL;
+- }
+-
+- return cfqq;
++ return cfq_rb_first(&cfqd->service_tree);
}
-+
-+static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
-+{
-+ /* see if we live in a "glue" directory */
-+ if (!dev->class || glue_dir->kset != &dev->class->class_dirs)
-+ return;
-+
-+ kobject_put(glue_dir);
-+}
-+
-+static void cleanup_device_parent(struct device *dev)
-+{
-+ cleanup_glue_dir(dev, dev->kobj.parent);
-+}
- #endif
--static int setup_parent(struct device *dev, struct device *parent)
-+static void setup_parent(struct device *dev, struct device *parent)
+ /*
+@@ -895,7 +866,7 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
+ * task has exited, don't wait
+ */
+ cic = cfqd->active_cic;
+- if (!cic || !cic->ioc->task)
++ if (!cic || !atomic_read(&cic->ioc->nr_tasks))
+ return;
+
+ /*
+@@ -939,7 +910,7 @@ static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
+ /*
+ * return expired entry, or NULL to just start from scratch in rbtree
+ */
+-static inline struct request *cfq_check_fifo(struct cfq_queue *cfqq)
++static struct request *cfq_check_fifo(struct cfq_queue *cfqq)
{
- struct kobject *kobj;
- kobj = get_device_parent(dev, parent);
-- if (IS_ERR(kobj))
-- return PTR_ERR(kobj);
- if (kobj)
- dev->kobj.parent = kobj;
-- return 0;
+ struct cfq_data *cfqd = cfqq->cfqd;
+ struct request *rq;
+@@ -1068,7 +1039,7 @@ __cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq,
+ return dispatched;
}
- static int device_add_class_symlinks(struct device *dev)
-@@ -625,65 +643,76 @@ static int device_add_class_symlinks(struct device *dev)
+-static inline int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
++static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
+ {
+ int dispatched = 0;
- if (!dev->class)
- return 0;
-+
- error = sysfs_create_link(&dev->kobj, &dev->class->subsys.kobj,
- "subsystem");
- if (error)
- goto out;
-- /*
-- * If this is not a "fake" compatible device, then create the
-- * symlink from the class to the device.
-- */
-- if (dev->kobj.parent != &dev->class->subsys.kobj) {
-+
-+#ifdef CONFIG_SYSFS_DEPRECATED
-+ /* stacked class devices need a symlink in the class directory */
-+ if (dev->kobj.parent != &dev->class->subsys.kobj &&
-+ dev->type != &part_type) {
- error = sysfs_create_link(&dev->class->subsys.kobj, &dev->kobj,
- dev->bus_id);
- if (error)
- goto out_subsys;
- }
-- if (dev->parent) {
--#ifdef CONFIG_SYSFS_DEPRECATED
-- {
-- struct device *parent = dev->parent;
-- char *class_name;
--
-- /*
-- * In old sysfs stacked class devices had 'device'
-- * link pointing to real device instead of parent
-- */
-- while (parent->class && !parent->bus && parent->parent)
-- parent = parent->parent;
+@@ -1087,14 +1058,11 @@ static inline int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq)
+ */
+ static int cfq_forced_dispatch(struct cfq_data *cfqd)
+ {
++ struct cfq_queue *cfqq;
+ int dispatched = 0;
+- struct rb_node *n;
-
-- error = sysfs_create_link(&dev->kobj,
-- &parent->kobj,
-- "device");
-- if (error)
-- goto out_busid;
+- while ((n = cfq_rb_first(&cfqd->service_tree)) != NULL) {
+- struct cfq_queue *cfqq = rb_entry(n, struct cfq_queue, rb_node);
-- class_name = make_class_name(dev->class->name,
-- &dev->kobj);
-- if (class_name)
-- error = sysfs_create_link(&dev->parent->kobj,
-- &dev->kobj, class_name);
-- kfree(class_name);
-- if (error)
-- goto out_device;
-- }
--#else
-- error = sysfs_create_link(&dev->kobj, &dev->parent->kobj,
-+ if (dev->parent && dev->type != &part_type) {
-+ struct device *parent = dev->parent;
-+ char *class_name;
++ while ((cfqq = cfq_rb_first(&cfqd->service_tree)) != NULL)
+ dispatched += __cfq_forced_dispatch_cfqq(cfqq);
+- }
+
+ cfq_slice_expired(cfqd, 0);
+
+@@ -1170,20 +1138,69 @@ static void cfq_put_queue(struct cfq_queue *cfqq)
+ kmem_cache_free(cfq_pool, cfqq);
+ }
+
+-static void cfq_free_io_context(struct io_context *ioc)
++/*
++ * Call func for each cic attached to this ioc. Returns number of cic's seen.
++ */
++#define CIC_GANG_NR 16
++static unsigned int
++call_for_each_cic(struct io_context *ioc,
++ void (*func)(struct io_context *, struct cfq_io_context *))
+ {
+- struct cfq_io_context *__cic;
+- struct rb_node *n;
+- int freed = 0;
++ struct cfq_io_context *cics[CIC_GANG_NR];
++ unsigned long index = 0;
++ unsigned int called = 0;
++ int nr;
+
+- ioc->ioc_data = NULL;
++ rcu_read_lock();
+
+- while ((n = rb_first(&ioc->cic_root)) != NULL) {
+- __cic = rb_entry(n, struct cfq_io_context, rb_node);
+- rb_erase(&__cic->rb_node, &ioc->cic_root);
+- kmem_cache_free(cfq_ioc_pool, __cic);
+- freed++;
+- }
++ do {
++ int i;
+
+ /*
-+ * stacked class devices have the 'device' link
-+ * pointing to the bus device instead of the parent
++ * Perhaps there's a better way - this just gang lookups from
++ * 0 to the end, restarting after each CIC_GANG_NR from the
++ * last key + 1.
+ */
-+ while (parent->class && !parent->bus && parent->parent)
-+ parent = parent->parent;
++ nr = radix_tree_gang_lookup(&ioc->radix_root, (void **) cics,
++ index, CIC_GANG_NR);
++ if (!nr)
++ break;
+
-+ error = sysfs_create_link(&dev->kobj,
-+ &parent->kobj,
- "device");
- if (error)
- goto out_busid;
--#endif
++ called += nr;
++ index = 1 + (unsigned long) cics[nr - 1]->key;
+
-+ class_name = make_class_name(dev->class->name,
-+ &dev->kobj);
-+ if (class_name)
-+ error = sysfs_create_link(&dev->parent->kobj,
-+ &dev->kobj, class_name);
-+ kfree(class_name);
-+ if (error)
-+ goto out_device;
- }
- return 0;
-
--#ifdef CONFIG_SYSFS_DEPRECATED
- out_device:
-- if (dev->parent)
-+ if (dev->parent && dev->type != &part_type)
- sysfs_remove_link(&dev->kobj, "device");
--#endif
- out_busid:
-- if (dev->kobj.parent != &dev->class->subsys.kobj)
-+ if (dev->kobj.parent != &dev->class->subsys.kobj &&
-+ dev->type != &part_type)
- sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
-+#else
-+ /* link in the class directory pointing to the device */
-+ error = sysfs_create_link(&dev->class->subsys.kobj, &dev->kobj,
-+ dev->bus_id);
-+ if (error)
-+ goto out_subsys;
++ for (i = 0; i < nr; i++)
++ func(ioc, cics[i]);
++ } while (nr == CIC_GANG_NR);
+
-+ if (dev->parent && dev->type != &part_type) {
-+ error = sysfs_create_link(&dev->kobj, &dev->parent->kobj,
-+ "device");
-+ if (error)
-+ goto out_busid;
-+ }
-+ return 0;
++ rcu_read_unlock();
+
-+out_busid:
-+ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
-+#endif
++ return called;
++}
+
- out_subsys:
- sysfs_remove_link(&dev->kobj, "subsystem");
- out:
-@@ -694,8 +723,9 @@ static void device_remove_class_symlinks(struct device *dev)
- {
- if (!dev->class)
- return;
-- if (dev->parent) {
++static void cic_free_func(struct io_context *ioc, struct cfq_io_context *cic)
++{
++ unsigned long flags;
+
- #ifdef CONFIG_SYSFS_DEPRECATED
-+ if (dev->parent && dev->type != &part_type) {
- char *class_name;
-
- class_name = make_class_name(dev->class->name, &dev->kobj);
-@@ -703,45 +733,59 @@ static void device_remove_class_symlinks(struct device *dev)
- sysfs_remove_link(&dev->parent->kobj, class_name);
- kfree(class_name);
- }
--#endif
- sysfs_remove_link(&dev->kobj, "device");
- }
-- if (dev->kobj.parent != &dev->class->subsys.kobj)
++ BUG_ON(!cic->dead_key);
+
-+ if (dev->kobj.parent != &dev->class->subsys.kobj &&
-+ dev->type != &part_type)
- sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
-+#else
-+ if (dev->parent && dev->type != &part_type)
-+ sysfs_remove_link(&dev->kobj, "device");
++ spin_lock_irqsave(&ioc->lock, flags);
++ radix_tree_delete(&ioc->radix_root, cic->dead_key);
++ spin_unlock_irqrestore(&ioc->lock, flags);
+
-+ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
-+#endif
++ kmem_cache_free(cfq_ioc_pool, cic);
++}
+
- sysfs_remove_link(&dev->kobj, "subsystem");
- }
-
- /**
-- * device_add - add device to device hierarchy.
-- * @dev: device.
-+ * device_add - add device to device hierarchy.
-+ * @dev: device.
- *
-- * This is part 2 of device_register(), though may be called
-- * separately _iff_ device_initialize() has been called separately.
-+ * This is part 2 of device_register(), though may be called
-+ * separately _iff_ device_initialize() has been called separately.
- *
-- * This adds it to the kobject hierarchy via kobject_add(), adds it
-- * to the global and sibling lists for the device, then
-- * adds it to the other relevant subsystems of the driver model.
-+ * This adds it to the kobject hierarchy via kobject_add(), adds it
-+ * to the global and sibling lists for the device, then
-+ * adds it to the other relevant subsystems of the driver model.
- */
- int device_add(struct device *dev)
- {
- struct device *parent = NULL;
- struct class_interface *class_intf;
-- int error = -EINVAL;
-+ int error;
++static void cfq_free_io_context(struct io_context *ioc)
++{
++ int freed;
+
-+ error = pm_sleep_lock();
-+ if (error) {
-+ dev_warn(dev, "Suspicious %s during suspend\n", __FUNCTION__);
-+ dump_stack();
-+ return error;
-+ }
-
- dev = get_device(dev);
-- if (!dev || !strlen(dev->bus_id))
-+ if (!dev || !strlen(dev->bus_id)) {
-+ error = -EINVAL;
- goto Error;
-+ }
++ /*
++ * ioc->refcount is zero here, so no more cic's are allowed to be
++ * linked into this ioc. So it should be ok to iterate over the known
++ * list, we will see all cic's since no new ones are added.
++ */
++ freed = call_for_each_cic(ioc, cic_free_func);
-- pr_debug("DEV: registering device: ID = '%s'\n", dev->bus_id);
-+ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
+ elv_ioc_count_mod(ioc_count, -freed);
- parent = get_device(dev->parent);
-- error = setup_parent(dev, parent);
-- if (error)
-- goto Error;
-+ setup_parent(dev, parent);
+@@ -1205,7 +1222,12 @@ static void __cfq_exit_single_io_context(struct cfq_data *cfqd,
+ struct cfq_io_context *cic)
+ {
+ list_del_init(&cic->queue_list);
++
++ /*
++ * Make sure key == NULL is seen for dead queues
++ */
+ smp_wmb();
++ cic->dead_key = (unsigned long) cic->key;
+ cic->key = NULL;
- /* first, register with generic layer. */
-- kobject_set_name(&dev->kobj, "%s", dev->bus_id);
-- error = kobject_add(&dev->kobj);
-+ error = kobject_add(&dev->kobj, dev->kobj.parent, "%s", dev->bus_id);
- if (error)
- goto Error;
+ if (cic->cfqq[ASYNC]) {
+@@ -1219,16 +1241,18 @@ static void __cfq_exit_single_io_context(struct cfq_data *cfqd,
+ }
+ }
-@@ -751,7 +795,7 @@ int device_add(struct device *dev)
+-static void cfq_exit_single_io_context(struct cfq_io_context *cic)
++static void cfq_exit_single_io_context(struct io_context *ioc,
++ struct cfq_io_context *cic)
+ {
+ struct cfq_data *cfqd = cic->key;
- /* notify clients of device entry (new way) */
- if (dev->bus)
-- blocking_notifier_call_chain(&dev->bus->bus_notifier,
-+ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
- BUS_NOTIFY_ADD_DEVICE, dev);
+ if (cfqd) {
+ struct request_queue *q = cfqd->queue;
++ unsigned long flags;
- error = device_create_file(dev, &uevent_attr);
-@@ -795,13 +839,14 @@ int device_add(struct device *dev)
+- spin_lock_irq(q->queue_lock);
++ spin_lock_irqsave(q->queue_lock, flags);
+ __cfq_exit_single_io_context(cfqd, cic);
+- spin_unlock_irq(q->queue_lock);
++ spin_unlock_irqrestore(q->queue_lock, flags);
}
- Done:
- put_device(dev);
-+ pm_sleep_unlock();
- return error;
- BusError:
- device_pm_remove(dev);
- dpm_sysfs_remove(dev);
- PMError:
- if (dev->bus)
-- blocking_notifier_call_chain(&dev->bus->bus_notifier,
-+ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
- BUS_NOTIFY_DEL_DEVICE, dev);
- device_remove_attrs(dev);
- AttrsError:
-@@ -809,124 +854,84 @@ int device_add(struct device *dev)
- SymlinkError:
- if (MAJOR(dev->devt))
- device_remove_file(dev, &devt_attr);
--
-- if (dev->class) {
-- sysfs_remove_link(&dev->kobj, "subsystem");
-- /* If this is not a "fake" compatible device, remove the
-- * symlink from the class to the device. */
-- if (dev->kobj.parent != &dev->class->subsys.kobj)
-- sysfs_remove_link(&dev->class->subsys.kobj,
-- dev->bus_id);
-- if (parent) {
--#ifdef CONFIG_SYSFS_DEPRECATED
-- char *class_name = make_class_name(dev->class->name,
-- &dev->kobj);
-- if (class_name)
-- sysfs_remove_link(&dev->parent->kobj,
-- class_name);
-- kfree(class_name);
--#endif
-- sysfs_remove_link(&dev->kobj, "device");
-- }
-- }
- ueventattrError:
- device_remove_file(dev, &uevent_attr);
- attrError:
- kobject_uevent(&dev->kobj, KOBJ_REMOVE);
- kobject_del(&dev->kobj);
- Error:
-+ cleanup_device_parent(dev);
- if (parent)
- put_device(parent);
- goto Done;
}
--
- /**
-- * device_register - register a device with the system.
-- * @dev: pointer to the device structure
-+ * device_register - register a device with the system.
-+ * @dev: pointer to the device structure
- *
-- * This happens in two clean steps - initialize the device
-- * and add it to the system. The two steps can be called
-- * separately, but this is the easiest and most common.
-- * I.e. you should only call the two helpers separately if
-- * have a clearly defined need to use and refcount the device
-- * before it is added to the hierarchy.
-+ * This happens in two clean steps - initialize the device
-+ * and add it to the system. The two steps can be called
-+ * separately, but this is the easiest and most common.
-+ * I.e. you should only call the two helpers separately if
-+ * have a clearly defined need to use and refcount the device
-+ * before it is added to the hierarchy.
+@@ -1238,21 +1262,8 @@ static void cfq_exit_single_io_context(struct cfq_io_context *cic)
*/
--
- int device_register(struct device *dev)
+ static void cfq_exit_io_context(struct io_context *ioc)
{
- device_initialize(dev);
- return device_add(dev);
- }
-
+- struct cfq_io_context *__cic;
+- struct rb_node *n;
-
- /**
-- * get_device - increment reference count for device.
-- * @dev: device.
-+ * get_device - increment reference count for device.
-+ * @dev: device.
- *
-- * This simply forwards the call to kobject_get(), though
-- * we do take care to provide for the case that we get a NULL
-- * pointer passed in.
-+ * This simply forwards the call to kobject_get(), though
-+ * we do take care to provide for the case that we get a NULL
-+ * pointer passed in.
- */
+- ioc->ioc_data = NULL;
-
--struct device * get_device(struct device * dev)
-+struct device *get_device(struct device *dev)
- {
- return dev ? to_dev(kobject_get(&dev->kobj)) : NULL;
+- /*
+- * put the reference this task is holding to the various queues
+- */
+- n = rb_first(&ioc->cic_root);
+- while (n != NULL) {
+- __cic = rb_entry(n, struct cfq_io_context, rb_node);
+-
+- cfq_exit_single_io_context(__cic);
+- n = rb_next(n);
+- }
++ rcu_assign_pointer(ioc->ioc_data, NULL);
++ call_for_each_cic(ioc, cfq_exit_single_io_context);
}
--
- /**
-- * put_device - decrement reference count.
-- * @dev: device in question.
-+ * put_device - decrement reference count.
-+ * @dev: device in question.
- */
--void put_device(struct device * dev)
-+void put_device(struct device *dev)
+ static struct cfq_io_context *
+@@ -1273,7 +1284,7 @@ cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
+ return cic;
+ }
+
+-static void cfq_init_prio_data(struct cfq_queue *cfqq)
++static void cfq_init_prio_data(struct cfq_queue *cfqq, struct io_context *ioc)
{
-+ /* might_sleep(); */
- if (dev)
- kobject_put(&dev->kobj);
+ struct task_struct *tsk = current;
+ int ioprio_class;
+@@ -1281,7 +1292,7 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
+ if (!cfq_cfqq_prio_changed(cfqq))
+ return;
+
+- ioprio_class = IOPRIO_PRIO_CLASS(tsk->ioprio);
++ ioprio_class = IOPRIO_PRIO_CLASS(ioc->ioprio);
+ switch (ioprio_class) {
+ default:
+ printk(KERN_ERR "cfq: bad prio %x\n", ioprio_class);
+@@ -1293,11 +1304,11 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
+ cfqq->ioprio_class = IOPRIO_CLASS_BE;
+ break;
+ case IOPRIO_CLASS_RT:
+- cfqq->ioprio = task_ioprio(tsk);
++ cfqq->ioprio = task_ioprio(ioc);
+ cfqq->ioprio_class = IOPRIO_CLASS_RT;
+ break;
+ case IOPRIO_CLASS_BE:
+- cfqq->ioprio = task_ioprio(tsk);
++ cfqq->ioprio = task_ioprio(ioc);
+ cfqq->ioprio_class = IOPRIO_CLASS_BE;
+ break;
+ case IOPRIO_CLASS_IDLE:
+@@ -1316,7 +1327,7 @@ static void cfq_init_prio_data(struct cfq_queue *cfqq)
+ cfq_clear_cfqq_prio_changed(cfqq);
}
+-static inline void changed_ioprio(struct cfq_io_context *cic)
++static void changed_ioprio(struct io_context *ioc, struct cfq_io_context *cic)
+ {
+ struct cfq_data *cfqd = cic->key;
+ struct cfq_queue *cfqq;
+@@ -1330,8 +1341,7 @@ static inline void changed_ioprio(struct cfq_io_context *cic)
+ cfqq = cic->cfqq[ASYNC];
+ if (cfqq) {
+ struct cfq_queue *new_cfqq;
+- new_cfqq = cfq_get_queue(cfqd, ASYNC, cic->ioc->task,
+- GFP_ATOMIC);
++ new_cfqq = cfq_get_queue(cfqd, ASYNC, cic->ioc, GFP_ATOMIC);
+ if (new_cfqq) {
+ cic->cfqq[ASYNC] = new_cfqq;
+ cfq_put_queue(cfqq);
+@@ -1347,29 +1357,19 @@ static inline void changed_ioprio(struct cfq_io_context *cic)
+
+ static void cfq_ioc_set_ioprio(struct io_context *ioc)
+ {
+- struct cfq_io_context *cic;
+- struct rb_node *n;
-
- /**
-- * device_del - delete device from system.
-- * @dev: device.
-+ * device_del - delete device from system.
-+ * @dev: device.
- *
-- * This is the first part of the device unregistration
-- * sequence. This removes the device from the lists we control
-- * from here, has it removed from the other driver model
-- * subsystems it was added to in device_add(), and removes it
-- * from the kobject hierarchy.
-+ * This is the first part of the device unregistration
-+ * sequence. This removes the device from the lists we control
-+ * from here, has it removed from the other driver model
-+ * subsystems it was added to in device_add(), and removes it
-+ * from the kobject hierarchy.
- *
-- * NOTE: this should be called manually _iff_ device_add() was
-- * also called manually.
-+ * NOTE: this should be called manually _iff_ device_add() was
-+ * also called manually.
- */
++ call_for_each_cic(ioc, changed_ioprio);
+ ioc->ioprio_changed = 0;
-
--void device_del(struct device * dev)
-+void device_del(struct device *dev)
+- n = rb_first(&ioc->cic_root);
+- while (n != NULL) {
+- cic = rb_entry(n, struct cfq_io_context, rb_node);
+-
+- changed_ioprio(cic);
+- n = rb_next(n);
+- }
+ }
+
+ static struct cfq_queue *
+ cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,
+- struct task_struct *tsk, gfp_t gfp_mask)
++ struct io_context *ioc, gfp_t gfp_mask)
{
-- struct device * parent = dev->parent;
-+ struct device *parent = dev->parent;
- struct class_interface *class_intf;
+ struct cfq_queue *cfqq, *new_cfqq = NULL;
+ struct cfq_io_context *cic;
-+ device_pm_remove(dev);
- if (parent)
- klist_del(&dev->knode_parent);
- if (MAJOR(dev->devt))
- device_remove_file(dev, &devt_attr);
- if (dev->class) {
-- sysfs_remove_link(&dev->kobj, "subsystem");
-- /* If this is not a "fake" compatible device, remove the
-- * symlink from the class to the device. */
-- if (dev->kobj.parent != &dev->class->subsys.kobj)
-- sysfs_remove_link(&dev->class->subsys.kobj,
-- dev->bus_id);
-- if (parent) {
--#ifdef CONFIG_SYSFS_DEPRECATED
-- char *class_name = make_class_name(dev->class->name,
-- &dev->kobj);
-- if (class_name)
-- sysfs_remove_link(&dev->parent->kobj,
-- class_name);
-- kfree(class_name);
--#endif
-- sysfs_remove_link(&dev->kobj, "device");
-- }
-+ device_remove_class_symlinks(dev);
+ retry:
+- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
++ cic = cfq_cic_lookup(cfqd, ioc);
+ /* cic always exists here */
+ cfqq = cic_to_cfqq(cic, is_sync);
- down(&dev->class->sem);
- /* notify any interfaces that the device is now gone */
-@@ -936,31 +941,6 @@ void device_del(struct device * dev)
- /* remove the device from the class list */
- list_del_init(&dev->node);
- up(&dev->class->sem);
--
-- /* If we live in a parent class-directory, unreference it */
-- if (dev->kobj.parent->kset == &dev->class->class_dirs) {
-- struct device *d;
-- int other = 0;
+@@ -1404,15 +1404,16 @@ retry:
+ atomic_set(&cfqq->ref, 0);
+ cfqq->cfqd = cfqd;
+
+- if (is_sync) {
+- cfq_mark_cfqq_idle_window(cfqq);
+- cfq_mark_cfqq_sync(cfqq);
+- }
-
-- /*
-- * if we are the last child of our class, delete
-- * our class-directory at this parent
-- */
-- down(&dev->class->sem);
-- list_for_each_entry(d, &dev->class->devices, node) {
-- if (d == dev)
-- continue;
-- if (d->kobj.parent == dev->kobj.parent) {
-- other = 1;
-- break;
-- }
-- }
-- if (!other)
-- kobject_del(dev->kobj.parent);
--
-- kobject_put(dev->kobj.parent);
-- up(&dev->class->sem);
-- }
+ cfq_mark_cfqq_prio_changed(cfqq);
+ cfq_mark_cfqq_queue_new(cfqq);
+
+- cfq_init_prio_data(cfqq);
++ cfq_init_prio_data(cfqq, ioc);
++
++ if (is_sync) {
++ if (!cfq_class_idle(cfqq))
++ cfq_mark_cfqq_idle_window(cfqq);
++ cfq_mark_cfqq_sync(cfqq);
++ }
}
- device_remove_file(dev, &uevent_attr);
- device_remove_attrs(dev);
-@@ -979,57 +959,55 @@ void device_del(struct device * dev)
- if (platform_notify_remove)
- platform_notify_remove(dev);
- if (dev->bus)
-- blocking_notifier_call_chain(&dev->bus->bus_notifier,
-+ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
- BUS_NOTIFY_DEL_DEVICE, dev);
-- device_pm_remove(dev);
- kobject_uevent(&dev->kobj, KOBJ_REMOVE);
-+ cleanup_device_parent(dev);
- kobject_del(&dev->kobj);
-- if (parent)
-- put_device(parent);
-+ put_device(parent);
- }
- /**
-- * device_unregister - unregister device from system.
-- * @dev: device going away.
-+ * device_unregister - unregister device from system.
-+ * @dev: device going away.
- *
-- * We do this in two parts, like we do device_register(). First,
-- * we remove it from all the subsystems with device_del(), then
-- * we decrement the reference count via put_device(). If that
-- * is the final reference count, the device will be cleaned up
-- * via device_release() above. Otherwise, the structure will
-- * stick around until the final reference to the device is dropped.
-+ * We do this in two parts, like we do device_register(). First,
-+ * we remove it from all the subsystems with device_del(), then
-+ * we decrement the reference count via put_device(). If that
-+ * is the final reference count, the device will be cleaned up
-+ * via device_release() above. Otherwise, the structure will
-+ * stick around until the final reference to the device is dropped.
- */
--void device_unregister(struct device * dev)
-+void device_unregister(struct device *dev)
- {
-- pr_debug("DEV: Unregistering device. ID = '%s'\n", dev->bus_id);
-+ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
- device_del(dev);
- put_device(dev);
+ if (new_cfqq)
+@@ -1439,11 +1440,11 @@ cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
}
--
--static struct device * next_device(struct klist_iter * i)
-+static struct device *next_device(struct klist_iter *i)
+ static struct cfq_queue *
+-cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
++cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,
+ gfp_t gfp_mask)
{
-- struct klist_node * n = klist_next(i);
-+ struct klist_node *n = klist_next(i);
- return n ? container_of(n, struct device, knode_parent) : NULL;
+- const int ioprio = task_ioprio(tsk);
+- const int ioprio_class = task_ioprio_class(tsk);
++ const int ioprio = task_ioprio(ioc);
++ const int ioprio_class = task_ioprio_class(ioc);
+ struct cfq_queue **async_cfqq = NULL;
+ struct cfq_queue *cfqq = NULL;
+
+@@ -1453,7 +1454,7 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
+ }
+
+ if (!cfqq) {
+- cfqq = cfq_find_alloc_queue(cfqd, is_sync, tsk, gfp_mask);
++ cfqq = cfq_find_alloc_queue(cfqd, is_sync, ioc, gfp_mask);
+ if (!cfqq)
+ return NULL;
+ }
+@@ -1470,28 +1471,42 @@ cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct task_struct *tsk,
+ return cfqq;
}
- /**
-- * device_for_each_child - device child iterator.
-- * @parent: parent struct device.
-- * @data: data for the callback.
-- * @fn: function to be called for each device.
-+ * device_for_each_child - device child iterator.
-+ * @parent: parent struct device.
-+ * @data: data for the callback.
-+ * @fn: function to be called for each device.
- *
-- * Iterate over @parent's child devices, and call @fn for each,
-- * passing it @data.
-+ * Iterate over @parent's child devices, and call @fn for each,
-+ * passing it @data.
- *
-- * We check the return of @fn each time. If it returns anything
-- * other than 0, we break out and return that value.
-+ * We check the return of @fn each time. If it returns anything
-+ * other than 0, we break out and return that value.
++static void cfq_cic_free(struct cfq_io_context *cic)
++{
++ kmem_cache_free(cfq_ioc_pool, cic);
++ elv_ioc_count_dec(ioc_count);
++
++ if (ioc_gone && !elv_ioc_count_read(ioc_count))
++ complete(ioc_gone);
++}
++
+ /*
+ * We drop cfq io contexts lazily, so we may find a dead one.
*/
--int device_for_each_child(struct device * parent, void * data,
-- int (*fn)(struct device *, void *))
-+int device_for_each_child(struct device *parent, void *data,
-+ int (*fn)(struct device *dev, void *data))
+ static void
+-cfq_drop_dead_cic(struct io_context *ioc, struct cfq_io_context *cic)
++cfq_drop_dead_cic(struct cfq_data *cfqd, struct io_context *ioc,
++ struct cfq_io_context *cic)
{
- struct klist_iter i;
-- struct device * child;
-+ struct device *child;
- int error = 0;
++ unsigned long flags;
++
+ WARN_ON(!list_empty(&cic->queue_list));
- klist_iter_init(&parent->klist_children, &i);
-@@ -1054,8 +1032,8 @@ int device_for_each_child(struct device * parent, void * data,
- * current device can be obtained, this function will return to the caller
- * and not iterate over any more devices.
- */
--struct device * device_find_child(struct device *parent, void *data,
-- int (*match)(struct device *, void *))
-+struct device *device_find_child(struct device *parent, void *data,
-+ int (*match)(struct device *dev, void *data))
- {
- struct klist_iter i;
- struct device *child;
-@@ -1073,7 +1051,10 @@ struct device * device_find_child(struct device *parent, void *data,
++ spin_lock_irqsave(&ioc->lock, flags);
++
+ if (ioc->ioc_data == cic)
+- ioc->ioc_data = NULL;
++ rcu_assign_pointer(ioc->ioc_data, NULL);
- int __init devices_init(void)
- {
-- return subsystem_register(&devices_subsys);
-+ devices_kset = kset_create_and_add("devices", &device_uevent_ops, NULL);
-+ if (!devices_kset)
-+ return -ENOMEM;
-+ return 0;
+- rb_erase(&cic->rb_node, &ioc->cic_root);
+- kmem_cache_free(cfq_ioc_pool, cic);
+- elv_ioc_count_dec(ioc_count);
++ radix_tree_delete(&ioc->radix_root, (unsigned long) cfqd);
++ spin_unlock_irqrestore(&ioc->lock, flags);
++
++ cfq_cic_free(cic);
}
- EXPORT_SYMBOL_GPL(device_for_each_child);
-@@ -1094,7 +1075,7 @@ EXPORT_SYMBOL_GPL(device_remove_file);
-
- static void device_create_release(struct device *dev)
+ static struct cfq_io_context *
+-cfq_cic_rb_lookup(struct cfq_data *cfqd, struct io_context *ioc)
++cfq_cic_lookup(struct cfq_data *cfqd, struct io_context *ioc)
{
-- pr_debug("%s called for %s\n", __FUNCTION__, dev->bus_id);
-+ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
- kfree(dev);
- }
+- struct rb_node *n;
+ struct cfq_io_context *cic;
+- void *k, *key = cfqd;
++ void *k;
-@@ -1156,14 +1137,11 @@ error:
- EXPORT_SYMBOL_GPL(device_create);
+ if (unlikely(!ioc))
+ return NULL;
+@@ -1499,74 +1514,64 @@ cfq_cic_rb_lookup(struct cfq_data *cfqd, struct io_context *ioc)
+ /*
+ * we maintain a last-hit cache, to avoid browsing over the tree
+ */
+- cic = ioc->ioc_data;
++ cic = rcu_dereference(ioc->ioc_data);
+ if (cic && cic->key == cfqd)
+ return cic;
- /**
-- * device_destroy - removes a device that was created with device_create()
-+ * find_device - finds a device that was created with device_create()
- * @class: pointer to the struct class that this device was registered with
- * @devt: the dev_t of the device that was previously registered
-- *
-- * This call unregisters and cleans up a device that was created with a
-- * call to device_create().
- */
--void device_destroy(struct class *class, dev_t devt)
-+static struct device *find_device(struct class *class, dev_t devt)
- {
- struct device *dev = NULL;
- struct device *dev_tmp;
-@@ -1176,12 +1154,54 @@ void device_destroy(struct class *class, dev_t devt)
+-restart:
+- n = ioc->cic_root.rb_node;
+- while (n) {
+- cic = rb_entry(n, struct cfq_io_context, rb_node);
++ do {
++ rcu_read_lock();
++ cic = radix_tree_lookup(&ioc->radix_root, (unsigned long) cfqd);
++ rcu_read_unlock();
++ if (!cic)
++ break;
+ /* ->key must be copied to avoid race with cfq_exit_queue() */
+ k = cic->key;
+ if (unlikely(!k)) {
+- cfq_drop_dead_cic(ioc, cic);
+- goto restart;
++ cfq_drop_dead_cic(cfqd, ioc, cic);
++ continue;
}
- }
- up(&class->sem);
-+ return dev;
-+}
-+
-+/**
-+ * device_destroy - removes a device that was created with device_create()
-+ * @class: pointer to the struct class that this device was registered with
-+ * @devt: the dev_t of the device that was previously registered
-+ *
-+ * This call unregisters and cleans up a device that was created with a
-+ * call to device_create().
-+ */
-+void device_destroy(struct class *class, dev_t devt)
-+{
-+ struct device *dev;
-+ dev = find_device(class, devt);
- if (dev)
- device_unregister(dev);
+- if (key < k)
+- n = n->rb_left;
+- else if (key > k)
+- n = n->rb_right;
+- else {
+- ioc->ioc_data = cic;
+- return cic;
+- }
+- }
++ rcu_assign_pointer(ioc->ioc_data, cic);
++ break;
++ } while (1);
+
+- return NULL;
++ return cic;
}
- EXPORT_SYMBOL_GPL(device_destroy);
-+#ifdef CONFIG_PM_SLEEP
-+/**
-+ * destroy_suspended_device - asks the PM core to remove a suspended device
-+ * @class: pointer to the struct class that this device was registered with
-+ * @devt: the dev_t of the device that was previously registered
-+ *
-+ * This call notifies the PM core of the necessity to unregister a suspended
-+ * device created with a call to device_create() (devices cannot be
-+ * unregistered directly while suspended, since the PM core holds their
-+ * semaphores at that time).
-+ *
-+ * It can only be called within the scope of a system sleep transition. In
-+ * practice this means it has to be directly or indirectly invoked either by
-+ * a suspend or resume method, or by the PM core (e.g. via
-+ * disable_nonboot_cpus() or enable_nonboot_cpus()).
+-static inline void
+-cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
+- struct cfq_io_context *cic)
++/*
++ * Add cic into ioc, using cfqd as the search key. This enables us to lookup
++ * the process specific cfq io context when entered from the block layer.
++ * Also adds the cic to a per-cfqd list, used when this queue is removed.
+ */
-+void destroy_suspended_device(struct class *class, dev_t devt)
-+{
-+ struct device *dev;
-+
-+ dev = find_device(class, devt);
-+ if (dev)
-+ device_pm_schedule_removal(dev);
-+}
-+EXPORT_SYMBOL_GPL(destroy_suspended_device);
-+#endif /* CONFIG_PM_SLEEP */
-+
- /**
- * device_rename - renames a device
- * @dev: the pointer to the struct device to be renamed
-@@ -1198,7 +1218,8 @@ int device_rename(struct device *dev, char *new_name)
- if (!dev)
- return -EINVAL;
++static int cfq_cic_link(struct cfq_data *cfqd, struct io_context *ioc,
++ struct cfq_io_context *cic, gfp_t gfp_mask)
+ {
+- struct rb_node **p;
+- struct rb_node *parent;
+- struct cfq_io_context *__cic;
+ unsigned long flags;
+- void *k;
++ int ret;
-- pr_debug("DEVICE: renaming '%s' to '%s'\n", dev->bus_id, new_name);
-+ pr_debug("device: '%s': %s: renaming to '%s'\n", dev->bus_id,
-+ __FUNCTION__, new_name);
+- cic->ioc = ioc;
+- cic->key = cfqd;
++ ret = radix_tree_preload(gfp_mask);
++ if (!ret) {
++ cic->ioc = ioc;
++ cic->key = cfqd;
- #ifdef CONFIG_SYSFS_DEPRECATED
- if ((dev->class) && (dev->parent))
-@@ -1279,8 +1300,7 @@ static int device_move_class_links(struct device *dev,
- class_name);
- if (error)
- sysfs_remove_link(&dev->kobj, "device");
-- }
-- else
-+ } else
- error = 0;
- out:
- kfree(class_name);
-@@ -1311,16 +1331,13 @@ int device_move(struct device *dev, struct device *new_parent)
- return -EINVAL;
+-restart:
+- parent = NULL;
+- p = &ioc->cic_root.rb_node;
+- while (*p) {
+- parent = *p;
+- __cic = rb_entry(parent, struct cfq_io_context, rb_node);
+- /* ->key must be copied to avoid race with cfq_exit_queue() */
+- k = __cic->key;
+- if (unlikely(!k)) {
+- cfq_drop_dead_cic(ioc, __cic);
+- goto restart;
+- }
++ spin_lock_irqsave(&ioc->lock, flags);
++ ret = radix_tree_insert(&ioc->radix_root,
++ (unsigned long) cfqd, cic);
++ spin_unlock_irqrestore(&ioc->lock, flags);
- new_parent = get_device(new_parent);
-- new_parent_kobj = get_device_parent (dev, new_parent);
-- if (IS_ERR(new_parent_kobj)) {
-- error = PTR_ERR(new_parent_kobj);
-- put_device(new_parent);
-- goto out;
-- }
-- pr_debug("DEVICE: moving '%s' to '%s'\n", dev->bus_id,
-- new_parent ? new_parent->bus_id : "<NULL>");
-+ new_parent_kobj = get_device_parent(dev, new_parent);
-+
-+ pr_debug("device: '%s': %s: moving to '%s'\n", dev->bus_id,
-+ __FUNCTION__, new_parent ? new_parent->bus_id : "<NULL>");
- error = kobject_move(&dev->kobj, new_parent_kobj);
- if (error) {
-+ cleanup_glue_dir(dev, new_parent_kobj);
- put_device(new_parent);
- goto out;
- }
-@@ -1343,6 +1360,7 @@ int device_move(struct device *dev, struct device *new_parent)
- klist_add_tail(&dev->knode_parent,
- &old_parent->klist_children);
- }
-+ cleanup_glue_dir(dev, new_parent_kobj);
- put_device(new_parent);
- goto out;
- }
-@@ -1352,5 +1370,23 @@ out:
- put_device(dev);
- return error;
- }
--
- EXPORT_SYMBOL_GPL(device_move);
-+
-+/**
-+ * device_shutdown - call ->shutdown() on each device to shutdown.
-+ */
-+void device_shutdown(void)
-+{
-+ struct device *dev, *devn;
+- if (cic->key < k)
+- p = &(*p)->rb_left;
+- else if (cic->key > k)
+- p = &(*p)->rb_right;
+- else
+- BUG();
++ radix_tree_preload_end();
+
-+ list_for_each_entry_safe_reverse(dev, devn, &devices_kset->list,
-+ kobj.entry) {
-+ if (dev->bus && dev->bus->shutdown) {
-+ dev_dbg(dev, "shutdown\n");
-+ dev->bus->shutdown(dev);
-+ } else if (dev->driver && dev->driver->shutdown) {
-+ dev_dbg(dev, "shutdown\n");
-+ dev->driver->shutdown(dev);
++ if (!ret) {
++ spin_lock_irqsave(cfqd->queue->queue_lock, flags);
++ list_add(&cic->queue_list, &cfqd->cic_list);
++ spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+ }
-+ }
-+}
-diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
-index 4054507..c5885f5 100644
---- a/drivers/base/cpu.c
-+++ b/drivers/base/cpu.c
-@@ -14,7 +14,7 @@
- #include "base.h"
+ }
- struct sysdev_class cpu_sysdev_class = {
-- set_kset_name("cpu"),
-+ .name = "cpu",
- };
- EXPORT_SYMBOL(cpu_sysdev_class);
+- rb_link_node(&cic->rb_node, parent, p);
+- rb_insert_color(&cic->rb_node, &ioc->cic_root);
++ if (ret)
++ printk(KERN_ERR "cfq: cic link failed!\n");
+
+- spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+- list_add(&cic->queue_list, &cfqd->cic_list);
+- spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
++ return ret;
+ }
-diff --git a/drivers/base/dd.c b/drivers/base/dd.c
-index 7ac474d..a5cde94 100644
---- a/drivers/base/dd.c
-+++ b/drivers/base/dd.c
-@@ -1,18 +1,20 @@
/*
-- * drivers/base/dd.c - The core device/driver interactions.
-+ * drivers/base/dd.c - The core device/driver interactions.
- *
-- * This file contains the (sometimes tricky) code that controls the
-- * interactions between devices and drivers, which primarily includes
-- * driver binding and unbinding.
-+ * This file contains the (sometimes tricky) code that controls the
-+ * interactions between devices and drivers, which primarily includes
-+ * driver binding and unbinding.
- *
-- * All of this code used to exist in drivers/base/bus.c, but was
-- * relocated to here in the name of compartmentalization (since it wasn't
-- * strictly code just for the 'struct bus_type'.
-+ * All of this code used to exist in drivers/base/bus.c, but was
-+ * relocated to here in the name of compartmentalization (since it wasn't
-+ * strictly code just for the 'struct bus_type'.
- *
-- * Copyright (c) 2002-5 Patrick Mochel
-- * Copyright (c) 2002-3 Open Source Development Labs
-+ * Copyright (c) 2002-5 Patrick Mochel
-+ * Copyright (c) 2002-3 Open Source Development Labs
-+ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
-+ * Copyright (c) 2007 Novell Inc.
- *
-- * This file is released under the GPLv2
-+ * This file is released under the GPLv2
- */
+@@ -1586,7 +1591,7 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
+ if (!ioc)
+ return NULL;
- #include <linux/device.h>
-@@ -23,8 +25,6 @@
- #include "base.h"
- #include "power/power.h"
+- cic = cfq_cic_rb_lookup(cfqd, ioc);
++ cic = cfq_cic_lookup(cfqd, ioc);
+ if (cic)
+ goto out;
--#define to_drv(node) container_of(node, struct device_driver, kobj.entry)
--
+@@ -1594,13 +1599,17 @@ cfq_get_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
+ if (cic == NULL)
+ goto err;
- static void driver_bound(struct device *dev)
+- cfq_cic_link(cfqd, ioc, cic);
++ if (cfq_cic_link(cfqd, ioc, cic, gfp_mask))
++ goto err_free;
++
+ out:
+ smp_read_barrier_depends();
+ if (unlikely(ioc->ioprio_changed))
+ cfq_ioc_set_ioprio(ioc);
+
+ return cic;
++err_free:
++ cfq_cic_free(cic);
+ err:
+ put_io_context(ioc);
+ return NULL;
+@@ -1655,12 +1664,15 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
{
-@@ -34,27 +34,27 @@ static void driver_bound(struct device *dev)
+ int enable_idle;
+
+- if (!cfq_cfqq_sync(cfqq))
++ /*
++ * Don't idle for async or idle io prio class
++ */
++ if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq))
return;
- }
-- pr_debug("bound device '%s' to driver '%s'\n",
-- dev->bus_id, dev->driver->name);
-+ pr_debug("driver: '%s': %s: bound to device '%s'\n", dev->bus_id,
-+ __FUNCTION__, dev->driver->name);
+ enable_idle = cfq_cfqq_idle_window(cfqq);
- if (dev->bus)
-- blocking_notifier_call_chain(&dev->bus->bus_notifier,
-+ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
- BUS_NOTIFY_BOUND_DRIVER, dev);
+- if (!cic->ioc->task || !cfqd->cfq_slice_idle ||
++ if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
+ (cfqd->hw_tag && CIC_SEEKY(cic)))
+ enable_idle = 0;
+ else if (sample_valid(cic->ttime_samples)) {
+@@ -1793,7 +1805,7 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
+ struct cfq_data *cfqd = q->elevator->elevator_data;
+ struct cfq_queue *cfqq = RQ_CFQQ(rq);
-- klist_add_tail(&dev->knode_driver, &dev->driver->klist_devices);
-+ klist_add_tail(&dev->knode_driver, &dev->driver->p->klist_devices);
- }
+- cfq_init_prio_data(cfqq);
++ cfq_init_prio_data(cfqq, RQ_CIC(rq)->ioc);
- static int driver_sysfs_add(struct device *dev)
+ cfq_add_rq_rb(rq);
+
+@@ -1834,7 +1846,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
+ cfq_set_prio_slice(cfqd, cfqq);
+ cfq_clear_cfqq_slice_new(cfqq);
+ }
+- if (cfq_slice_used(cfqq))
++ if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
+ cfq_slice_expired(cfqd, 1);
+ else if (sync && RB_EMPTY_ROOT(&cfqq->sort_list))
+ cfq_arm_slice_timer(cfqd);
+@@ -1894,13 +1906,13 @@ static int cfq_may_queue(struct request_queue *q, int rw)
+ * so just lookup a possibly existing queue, or return 'may queue'
+ * if that fails
+ */
+- cic = cfq_cic_rb_lookup(cfqd, tsk->io_context);
++ cic = cfq_cic_lookup(cfqd, tsk->io_context);
+ if (!cic)
+ return ELV_MQUEUE_MAY;
+
+ cfqq = cic_to_cfqq(cic, rw & REQ_RW_SYNC);
+ if (cfqq) {
+- cfq_init_prio_data(cfqq);
++ cfq_init_prio_data(cfqq, cic->ioc);
+ cfq_prio_boost(cfqq);
+
+ return __cfq_may_queue(cfqq);
+@@ -1938,7 +1950,6 @@ static int
+ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
{
- int ret;
+ struct cfq_data *cfqd = q->elevator->elevator_data;
+- struct task_struct *tsk = current;
+ struct cfq_io_context *cic;
+ const int rw = rq_data_dir(rq);
+ const int is_sync = rq_is_sync(rq);
+@@ -1956,7 +1967,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
-- ret = sysfs_create_link(&dev->driver->kobj, &dev->kobj,
-+ ret = sysfs_create_link(&dev->driver->p->kobj, &dev->kobj,
- kobject_name(&dev->kobj));
- if (ret == 0) {
-- ret = sysfs_create_link(&dev->kobj, &dev->driver->kobj,
-+ ret = sysfs_create_link(&dev->kobj, &dev->driver->p->kobj,
- "driver");
- if (ret)
-- sysfs_remove_link(&dev->driver->kobj,
-+ sysfs_remove_link(&dev->driver->p->kobj,
- kobject_name(&dev->kobj));
- }
- return ret;
-@@ -65,24 +65,24 @@ static void driver_sysfs_remove(struct device *dev)
- struct device_driver *drv = dev->driver;
+ cfqq = cic_to_cfqq(cic, is_sync);
+ if (!cfqq) {
+- cfqq = cfq_get_queue(cfqd, is_sync, tsk, gfp_mask);
++ cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask);
- if (drv) {
-- sysfs_remove_link(&drv->kobj, kobject_name(&dev->kobj));
-+ sysfs_remove_link(&drv->p->kobj, kobject_name(&dev->kobj));
- sysfs_remove_link(&dev->kobj, "driver");
- }
+ if (!cfqq)
+ goto queue_fail;
+@@ -2039,29 +2050,9 @@ out_cont:
+ spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
}
- /**
-- * device_bind_driver - bind a driver to one device.
-- * @dev: device.
-+ * device_bind_driver - bind a driver to one device.
-+ * @dev: device.
- *
-- * Allow manual attachment of a driver to a device.
-- * Caller must have already set @dev->driver.
-+ * Allow manual attachment of a driver to a device.
-+ * Caller must have already set @dev->driver.
- *
-- * Note that this does not modify the bus reference count
-- * nor take the bus's rwsem. Please verify those are accounted
-- * for before calling this. (It is ok to call with no other effort
-- * from a driver's probe() method.)
-+ * Note that this does not modify the bus reference count
-+ * nor take the bus's rwsem. Please verify those are accounted
-+ * for before calling this. (It is ok to call with no other effort
-+ * from a driver's probe() method.)
- *
-- * This function must be called with @dev->sem held.
-+ * This function must be called with @dev->sem held.
- */
- int device_bind_driver(struct device *dev)
+-/*
+- * Timer running if an idle class queue is waiting for service
+- */
+-static void cfq_idle_class_timer(unsigned long data)
+-{
+- struct cfq_data *cfqd = (struct cfq_data *) data;
+- unsigned long flags;
+-
+- spin_lock_irqsave(cfqd->queue->queue_lock, flags);
+-
+- /*
+- * race with a non-idle queue, reset timer
+- */
+- if (!start_idle_class_timer(cfqd))
+- cfq_schedule_dispatch(cfqd);
+-
+- spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);
+-}
+-
+ static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)
{
-@@ -93,6 +93,7 @@ int device_bind_driver(struct device *dev)
- driver_bound(dev);
- return ret;
+ del_timer_sync(&cfqd->idle_slice_timer);
+- del_timer_sync(&cfqd->idle_class_timer);
+ kblockd_flush_work(&cfqd->unplug_work);
}
-+EXPORT_SYMBOL_GPL(device_bind_driver);
- static atomic_t probe_count = ATOMIC_INIT(0);
- static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
-@@ -102,8 +103,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
- int ret = 0;
+@@ -2126,10 +2117,6 @@ static void *cfq_init_queue(struct request_queue *q)
+ cfqd->idle_slice_timer.function = cfq_idle_slice_timer;
+ cfqd->idle_slice_timer.data = (unsigned long) cfqd;
- atomic_inc(&probe_count);
-- pr_debug("%s: Probing driver %s with device %s\n",
-- drv->bus->name, drv->name, dev->bus_id);
-+ pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
-+ drv->bus->name, __FUNCTION__, drv->name, dev->bus_id);
- WARN_ON(!list_empty(&dev->devres_head));
+- init_timer(&cfqd->idle_class_timer);
+- cfqd->idle_class_timer.function = cfq_idle_class_timer;
+- cfqd->idle_class_timer.data = (unsigned long) cfqd;
+-
+ INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);
- dev->driver = drv;
-@@ -125,8 +126,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
+ cfqd->last_end_request = jiffies;
+@@ -2160,7 +2147,7 @@ static int __init cfq_slab_setup(void)
+ if (!cfq_pool)
+ goto fail;
- driver_bound(dev);
- ret = 1;
-- pr_debug("%s: Bound Device %s to Driver %s\n",
-- drv->bus->name, dev->bus_id, drv->name);
-+ pr_debug("bus: '%s': %s: bound device %s to driver %s\n",
-+ drv->bus->name, __FUNCTION__, dev->bus_id, drv->name);
- goto done;
+- cfq_ioc_pool = KMEM_CACHE(cfq_io_context, 0);
++ cfq_ioc_pool = KMEM_CACHE(cfq_io_context, SLAB_DESTROY_BY_RCU);
+ if (!cfq_ioc_pool)
+ goto fail;
- probe_failed:
-@@ -183,7 +184,7 @@ int driver_probe_done(void)
- * This function must be called with @dev->sem held. When called for a
- * USB interface, @dev->parent->sem must be held as well.
- */
--int driver_probe_device(struct device_driver * drv, struct device * dev)
-+int driver_probe_device(struct device_driver *drv, struct device *dev)
- {
- int ret = 0;
+diff --git a/block/compat_ioctl.c b/block/compat_ioctl.c
+index cae0a85..b733732 100644
+--- a/block/compat_ioctl.c
++++ b/block/compat_ioctl.c
+@@ -545,6 +545,7 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
+ struct blk_user_trace_setup buts;
+ struct compat_blk_user_trace_setup cbuts;
+ struct request_queue *q;
++ char b[BDEVNAME_SIZE];
+ int ret;
-@@ -192,8 +193,8 @@ int driver_probe_device(struct device_driver * drv, struct device * dev)
- if (drv->bus->match && !drv->bus->match(dev, drv))
- goto done;
+ q = bdev_get_queue(bdev);
+@@ -554,6 +555,8 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
+ if (copy_from_user(&cbuts, arg, sizeof(cbuts)))
+ return -EFAULT;
-- pr_debug("%s: Matched Device %s with Driver %s\n",
-- drv->bus->name, dev->bus_id, drv->name);
-+ pr_debug("bus: '%s': %s: matched device %s with driver %s\n",
-+ drv->bus->name, __FUNCTION__, dev->bus_id, drv->name);
++ strcpy(b, bdevname(bdev, b));
++
+ buts = (struct blk_user_trace_setup) {
+ .act_mask = cbuts.act_mask,
+ .buf_size = cbuts.buf_size,
+@@ -565,7 +568,7 @@ static int compat_blk_trace_setup(struct block_device *bdev, char __user *arg)
+ memcpy(&buts.name, &cbuts.name, 32);
- ret = really_probe(dev, drv);
+ mutex_lock(&bdev->bd_mutex);
+- ret = do_blk_trace_setup(q, bdev, &buts);
++ ret = do_blk_trace_setup(q, b, bdev->bd_dev, &buts);
+ mutex_unlock(&bdev->bd_mutex);
+ if (ret)
+ return ret;
+diff --git a/block/elevator.c b/block/elevator.c
+index e452deb..8cd5775 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -185,9 +185,7 @@ static elevator_t *elevator_alloc(struct request_queue *q,
-@@ -201,27 +202,27 @@ done:
- return ret;
+ eq->ops = &e->ops;
+ eq->elevator_type = e;
+- kobject_init(&eq->kobj);
+- kobject_set_name(&eq->kobj, "%s", "iosched");
+- eq->kobj.ktype = &elv_ktype;
++ kobject_init(&eq->kobj, &elv_ktype);
+ mutex_init(&eq->sysfs_lock);
+
+ eq->hash = kmalloc_node(sizeof(struct hlist_head) * ELV_HASH_ENTRIES,
+@@ -743,7 +741,21 @@ struct request *elv_next_request(struct request_queue *q)
+ q->boundary_rq = NULL;
+ }
+
+- if ((rq->cmd_flags & REQ_DONTPREP) || !q->prep_rq_fn)
++ if (rq->cmd_flags & REQ_DONTPREP)
++ break;
++
++ if (q->dma_drain_size && rq->data_len) {
++ /*
++ * make sure space for the drain appears we
++ * know we can do this because max_hw_segments
++ * has been adjusted to be one fewer than the
++ * device can handle
++ */
++ rq->nr_phys_segments++;
++ rq->nr_hw_segments++;
++ }
++
++ if (!q->prep_rq_fn)
+ break;
+
+ ret = q->prep_rq_fn(q, rq);
+@@ -756,6 +768,16 @@ struct request *elv_next_request(struct request_queue *q)
+ * avoid resource deadlock. REQ_STARTED will
+ * prevent other fs requests from passing this one.
+ */
++ if (q->dma_drain_size && rq->data_len &&
++ !(rq->cmd_flags & REQ_DONTPREP)) {
++ /*
++ * remove the space for the drain we added
++ * so that we don't add it again
++ */
++ --rq->nr_phys_segments;
++ --rq->nr_hw_segments;
++ }
++
+ rq = NULL;
+ break;
+ } else if (ret == BLKPREP_KILL) {
+@@ -931,9 +953,7 @@ int elv_register_queue(struct request_queue *q)
+ elevator_t *e = q->elevator;
+ int error;
+
+- e->kobj.parent = &q->kobj;
+-
+- error = kobject_add(&e->kobj);
++ error = kobject_add(&e->kobj, &q->kobj, "%s", "iosched");
+ if (!error) {
+ struct elv_fs_entry *attr = e->elevator_type->elevator_attrs;
+ if (attr) {
+diff --git a/block/genhd.c b/block/genhd.c
+index f2ac914..de2ebb2 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -17,8 +17,10 @@
+ #include <linux/buffer_head.h>
+ #include <linux/mutex.h>
+
+-struct kset block_subsys;
+-static DEFINE_MUTEX(block_subsys_lock);
++static DEFINE_MUTEX(block_class_lock);
++#ifndef CONFIG_SYSFS_DEPRECATED
++struct kobject *block_depr;
++#endif
+
+ /*
+ * Can be deleted altogether. Later.
+@@ -37,19 +39,17 @@ static inline int major_to_index(int major)
}
--static int __device_attach(struct device_driver * drv, void * data)
-+static int __device_attach(struct device_driver *drv, void *data)
+ #ifdef CONFIG_PROC_FS
+-
+ void blkdev_show(struct seq_file *f, off_t offset)
{
-- struct device * dev = data;
-+ struct device *dev = data;
- return driver_probe_device(drv, dev);
+ struct blk_major_name *dp;
+
+ if (offset < BLKDEV_MAJOR_HASH_SIZE) {
+- mutex_lock(&block_subsys_lock);
++ mutex_lock(&block_class_lock);
+ for (dp = major_names[offset]; dp; dp = dp->next)
+ seq_printf(f, "%3d %s\n", dp->major, dp->name);
+- mutex_unlock(&block_subsys_lock);
++ mutex_unlock(&block_class_lock);
+ }
}
+-
+ #endif /* CONFIG_PROC_FS */
- /**
-- * device_attach - try to attach device to a driver.
-- * @dev: device.
-+ * device_attach - try to attach device to a driver.
-+ * @dev: device.
- *
-- * Walk the list of drivers that the bus has and call
-- * driver_probe_device() for each pair. If a compatible
-- * pair is found, break out and return.
-+ * Walk the list of drivers that the bus has and call
-+ * driver_probe_device() for each pair. If a compatible
-+ * pair is found, break out and return.
- *
-- * Returns 1 if the device was bound to a driver;
-- * 0 if no matching device was found;
-- * -ENODEV if the device is not registered.
-+ * Returns 1 if the device was bound to a driver;
-+ * 0 if no matching device was found;
-+ * -ENODEV if the device is not registered.
- *
-- * When called for a USB interface, @dev->parent->sem must be held.
-+ * When called for a USB interface, @dev->parent->sem must be held.
- */
--int device_attach(struct device * dev)
-+int device_attach(struct device *dev)
- {
- int ret = 0;
+ int register_blkdev(unsigned int major, const char *name)
+@@ -57,7 +57,7 @@ int register_blkdev(unsigned int major, const char *name)
+ struct blk_major_name **n, *p;
+ int index, ret = 0;
-@@ -240,10 +241,11 @@ int device_attach(struct device * dev)
- up(&dev->sem);
+- mutex_lock(&block_subsys_lock);
++ mutex_lock(&block_class_lock);
+
+ /* temporary */
+ if (major == 0) {
+@@ -102,7 +102,7 @@ int register_blkdev(unsigned int major, const char *name)
+ kfree(p);
+ }
+ out:
+- mutex_unlock(&block_subsys_lock);
++ mutex_unlock(&block_class_lock);
return ret;
}
-+EXPORT_SYMBOL_GPL(device_attach);
--static int __driver_attach(struct device * dev, void * data)
-+static int __driver_attach(struct device *dev, void *data)
- {
-- struct device_driver * drv = data;
-+ struct device_driver *drv = data;
+@@ -114,7 +114,7 @@ void unregister_blkdev(unsigned int major, const char *name)
+ struct blk_major_name *p = NULL;
+ int index = major_to_index(major);
- /*
- * Lock device and try to bind to it. We drop the error
-@@ -268,35 +270,35 @@ static int __driver_attach(struct device * dev, void * data)
+- mutex_lock(&block_subsys_lock);
++ mutex_lock(&block_class_lock);
+ for (n = &major_names[index]; *n; n = &(*n)->next)
+ if ((*n)->major == major)
+ break;
+@@ -124,7 +124,7 @@ void unregister_blkdev(unsigned int major, const char *name)
+ p = *n;
+ *n = p->next;
+ }
+- mutex_unlock(&block_subsys_lock);
++ mutex_unlock(&block_class_lock);
+ kfree(p);
}
- /**
-- * driver_attach - try to bind driver to devices.
-- * @drv: driver.
-+ * driver_attach - try to bind driver to devices.
-+ * @drv: driver.
- *
-- * Walk the list of devices that the bus has on it and try to
-- * match the driver with each one. If driver_probe_device()
-- * returns 0 and the @dev->driver is set, we've found a
-- * compatible pair.
-+ * Walk the list of devices that the bus has on it and try to
-+ * match the driver with each one. If driver_probe_device()
-+ * returns 0 and the @dev->driver is set, we've found a
-+ * compatible pair.
+@@ -137,29 +137,30 @@ static struct kobj_map *bdev_map;
+ * range must be nonzero
+ * The hash chain is sorted on range, so that subranges can override.
*/
--int driver_attach(struct device_driver * drv)
-+int driver_attach(struct device_driver *drv)
+-void blk_register_region(dev_t dev, unsigned long range, struct module *module,
++void blk_register_region(dev_t devt, unsigned long range, struct module *module,
+ struct kobject *(*probe)(dev_t, int *, void *),
+ int (*lock)(dev_t, void *), void *data)
{
- return bus_for_each_dev(drv->bus, NULL, drv, __driver_attach);
+- kobj_map(bdev_map, dev, range, module, probe, lock, data);
++ kobj_map(bdev_map, devt, range, module, probe, lock, data);
}
-+EXPORT_SYMBOL_GPL(driver_attach);
- /*
-- * __device_release_driver() must be called with @dev->sem held.
-- * When called for a USB interface, @dev->parent->sem must be held as well.
-+ * __device_release_driver() must be called with @dev->sem held.
-+ * When called for a USB interface, @dev->parent->sem must be held as well.
- */
--static void __device_release_driver(struct device * dev)
-+static void __device_release_driver(struct device *dev)
+ EXPORT_SYMBOL(blk_register_region);
+
+-void blk_unregister_region(dev_t dev, unsigned long range)
++void blk_unregister_region(dev_t devt, unsigned long range)
{
-- struct device_driver * drv;
-+ struct device_driver *drv;
+- kobj_unmap(bdev_map, dev, range);
++ kobj_unmap(bdev_map, devt, range);
+ }
-- drv = get_driver(dev->driver);
-+ drv = dev->driver;
- if (drv) {
- driver_sysfs_remove(dev);
- sysfs_remove_link(&dev->kobj, "driver");
-- klist_remove(&dev->knode_driver);
+ EXPORT_SYMBOL(blk_unregister_region);
- if (dev->bus)
-- blocking_notifier_call_chain(&dev->bus->bus_notifier,
-+ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
- BUS_NOTIFY_UNBIND_DRIVER,
- dev);
+-static struct kobject *exact_match(dev_t dev, int *part, void *data)
++static struct kobject *exact_match(dev_t devt, int *part, void *data)
+ {
+ struct gendisk *p = data;
+- return &p->kobj;
++
++ return &p->dev.kobj;
+ }
-@@ -306,18 +308,18 @@ static void __device_release_driver(struct device * dev)
- drv->remove(dev);
- devres_release_all(dev);
- dev->driver = NULL;
-- put_driver(drv);
-+ klist_remove(&dev->knode_driver);
- }
+-static int exact_lock(dev_t dev, void *data)
++static int exact_lock(dev_t devt, void *data)
+ {
+ struct gendisk *p = data;
+
+@@ -194,8 +195,6 @@ void unlink_gendisk(struct gendisk *disk)
+ disk->minors);
}
+-#define to_disk(obj) container_of(obj,struct gendisk,kobj)
+-
/**
-- * device_release_driver - manually detach device from driver.
-- * @dev: device.
-+ * device_release_driver - manually detach device from driver.
-+ * @dev: device.
- *
-- * Manually detach device from driver.
-- * When called for a USB interface, @dev->parent->sem must be held.
-+ * Manually detach device from driver.
-+ * When called for a USB interface, @dev->parent->sem must be held.
+ * get_gendisk - get partitioning information for a given device
+ * @dev: device to get partitioning information for
+@@ -203,10 +202,12 @@ void unlink_gendisk(struct gendisk *disk)
+ * This function gets the structure containing partitioning
+ * information for the given device @dev.
*/
--void device_release_driver(struct device * dev)
-+void device_release_driver(struct device *dev)
+-struct gendisk *get_gendisk(dev_t dev, int *part)
++struct gendisk *get_gendisk(dev_t devt, int *part)
{
- /*
- * If anyone calls device_release_driver() recursively from
-@@ -328,26 +330,26 @@ void device_release_driver(struct device * dev)
- __device_release_driver(dev);
- up(&dev->sem);
+- struct kobject *kobj = kobj_lookup(bdev_map, dev, part);
+- return kobj ? to_disk(kobj) : NULL;
++ struct kobject *kobj = kobj_lookup(bdev_map, devt, part);
++ struct device *dev = kobj_to_dev(kobj);
++
++ return kobj ? dev_to_disk(dev) : NULL;
}
--
-+EXPORT_SYMBOL_GPL(device_release_driver);
- /**
- * driver_detach - detach driver from all devices it controls.
- * @drv: driver.
+ /*
+@@ -216,13 +217,17 @@ struct gendisk *get_gendisk(dev_t dev, int *part)
*/
--void driver_detach(struct device_driver * drv)
-+void driver_detach(struct device_driver *drv)
+ void __init printk_all_partitions(void)
{
-- struct device * dev;
+- int n;
+ struct device *dev;
+ struct gendisk *sgp;
++ char buf[BDEVNAME_SIZE];
++ int n;
- for (;;) {
-- spin_lock(&drv->klist_devices.k_lock);
-- if (list_empty(&drv->klist_devices.k_list)) {
-- spin_unlock(&drv->klist_devices.k_lock);
-+ spin_lock(&drv->p->klist_devices.k_lock);
-+ if (list_empty(&drv->p->klist_devices.k_list)) {
-+ spin_unlock(&drv->p->klist_devices.k_lock);
- break;
- }
-- dev = list_entry(drv->klist_devices.k_list.prev,
-+ dev = list_entry(drv->p->klist_devices.k_list.prev,
- struct device, knode_driver.n_node);
- get_device(dev);
-- spin_unlock(&drv->klist_devices.k_lock);
-+ spin_unlock(&drv->p->klist_devices.k_lock);
+- mutex_lock(&block_subsys_lock);
++ mutex_lock(&block_class_lock);
+ /* For each block device... */
+- list_for_each_entry(sgp, &block_subsys.list, kobj.entry) {
+- char buf[BDEVNAME_SIZE];
++ list_for_each_entry(dev, &block_class.devices, node) {
++ if (dev->type != &disk_type)
++ continue;
++ sgp = dev_to_disk(dev);
+ /*
+ * Don't show empty devices or things that have been surpressed
+ */
+@@ -255,38 +260,46 @@ void __init printk_all_partitions(void)
+ sgp->major, n + 1 + sgp->first_minor,
+ (unsigned long long)sgp->part[n]->nr_sects >> 1,
+ disk_name(sgp, n + 1, buf));
+- } /* partition subloop */
+- } /* Block device loop */
++ }
++ }
- if (dev->parent) /* Needed for USB */
- down(&dev->parent->sem);
-@@ -360,9 +362,3 @@ void driver_detach(struct device_driver * drv)
- put_device(dev);
- }
+- mutex_unlock(&block_subsys_lock);
+- return;
++ mutex_unlock(&block_class_lock);
}
--
--EXPORT_SYMBOL_GPL(device_bind_driver);
--EXPORT_SYMBOL_GPL(device_release_driver);
--EXPORT_SYMBOL_GPL(device_attach);
--EXPORT_SYMBOL_GPL(driver_attach);
--
-diff --git a/drivers/base/driver.c b/drivers/base/driver.c
-index eb11475..a35f041 100644
---- a/drivers/base/driver.c
-+++ b/drivers/base/driver.c
-@@ -3,6 +3,8 @@
- *
- * Copyright (c) 2002-3 Patrick Mochel
- * Copyright (c) 2002-3 Open Source Development Labs
-+ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
-+ * Copyright (c) 2007 Novell Inc.
- *
- * This file is released under the GPLv2
- *
-@@ -15,46 +17,42 @@
- #include "base.h"
- #define to_dev(node) container_of(node, struct device, driver_list)
--#define to_drv(obj) container_of(obj, struct device_driver, kobj)
+ #ifdef CONFIG_PROC_FS
+ /* iterator */
+ static void *part_start(struct seq_file *part, loff_t *pos)
+ {
+- struct list_head *p;
+- loff_t l = *pos;
++ loff_t k = *pos;
++ struct device *dev;
+- mutex_lock(&block_subsys_lock);
+- list_for_each(p, &block_subsys.list)
+- if (!l--)
+- return list_entry(p, struct gendisk, kobj.entry);
++ mutex_lock(&block_class_lock);
++ list_for_each_entry(dev, &block_class.devices, node) {
++ if (dev->type != &disk_type)
++ continue;
++ if (!k--)
++ return dev_to_disk(dev);
++ }
+ return NULL;
+ }
--static struct device * next_device(struct klist_iter * i)
-+static struct device *next_device(struct klist_iter *i)
+ static void *part_next(struct seq_file *part, void *v, loff_t *pos)
{
-- struct klist_node * n = klist_next(i);
-+ struct klist_node *n = klist_next(i);
- return n ? container_of(n, struct device, knode_driver) : NULL;
+- struct list_head *p = ((struct gendisk *)v)->kobj.entry.next;
++ struct gendisk *gp = v;
++ struct device *dev;
+ ++*pos;
+- return p==&block_subsys.list ? NULL :
+- list_entry(p, struct gendisk, kobj.entry);
++ list_for_each_entry(dev, &gp->dev.node, node) {
++ if (&dev->node == &block_class.devices)
++ return NULL;
++ if (dev->type == &disk_type)
++ return dev_to_disk(dev);
++ }
++ return NULL;
}
- /**
-- * driver_for_each_device - Iterator for devices bound to a driver.
-- * @drv: Driver we're iterating.
-- * @start: Device to begin with
-- * @data: Data to pass to the callback.
-- * @fn: Function to call for each device.
-+ * driver_for_each_device - Iterator for devices bound to a driver.
-+ * @drv: Driver we're iterating.
-+ * @start: Device to begin with
-+ * @data: Data to pass to the callback.
-+ * @fn: Function to call for each device.
- *
-- * Iterate over the @drv's list of devices calling @fn for each one.
-+ * Iterate over the @drv's list of devices calling @fn for each one.
- */
--
--int driver_for_each_device(struct device_driver * drv, struct device * start,
-- void * data, int (*fn)(struct device *, void *))
-+int driver_for_each_device(struct device_driver *drv, struct device *start,
-+ void *data, int (*fn)(struct device *, void *))
+ static void part_stop(struct seq_file *part, void *v)
{
- struct klist_iter i;
-- struct device * dev;
-+ struct device *dev;
- int error = 0;
+- mutex_unlock(&block_subsys_lock);
++ mutex_unlock(&block_class_lock);
+ }
- if (!drv)
- return -EINVAL;
+ static int show_partition(struct seq_file *part, void *v)
+@@ -295,7 +308,7 @@ static int show_partition(struct seq_file *part, void *v)
+ int n;
+ char buf[BDEVNAME_SIZE];
-- klist_iter_init_node(&drv->klist_devices, &i,
-+ klist_iter_init_node(&drv->p->klist_devices, &i,
- start ? &start->knode_driver : NULL);
- while ((dev = next_device(&i)) && !error)
- error = fn(dev, data);
- klist_iter_exit(&i);
- return error;
+- if (&sgp->kobj.entry == block_subsys.list.next)
++ if (&sgp->dev.node == block_class.devices.next)
+ seq_puts(part, "major minor #blocks name\n\n");
+
+ /* Don't show non-partitionable removeable devices or empty devices */
+@@ -324,111 +337,82 @@ static int show_partition(struct seq_file *part, void *v)
+ return 0;
}
--
- EXPORT_SYMBOL_GPL(driver_for_each_device);
--
- /**
- * driver_find_device - device iterator for locating a particular device.
- * @drv: The device's driver
-@@ -70,9 +68,9 @@ EXPORT_SYMBOL_GPL(driver_for_each_device);
- * if it does. If the callback returns non-zero, this function will
- * return to the caller and not iterate over any more devices.
- */
--struct device * driver_find_device(struct device_driver *drv,
-- struct device * start, void * data,
-- int (*match)(struct device *, void *))
-+struct device *driver_find_device(struct device_driver *drv,
-+ struct device *start, void *data,
-+ int (*match)(struct device *dev, void *data))
- {
- struct klist_iter i;
- struct device *dev;
-@@ -80,7 +78,7 @@ struct device * driver_find_device(struct device_driver *drv,
- if (!drv)
- return NULL;
+-struct seq_operations partitions_op = {
+- .start =part_start,
+- .next = part_next,
+- .stop = part_stop,
+- .show = show_partition
++const struct seq_operations partitions_op = {
++ .start = part_start,
++ .next = part_next,
++ .stop = part_stop,
++ .show = show_partition
+ };
+ #endif
-- klist_iter_init_node(&drv->klist_devices, &i,
-+ klist_iter_init_node(&drv->p->klist_devices, &i,
- (start ? &start->knode_driver : NULL));
- while ((dev = next_device(&i)))
- if (match(dev, data) && get_device(dev))
-@@ -91,111 +89,179 @@ struct device * driver_find_device(struct device_driver *drv,
- EXPORT_SYMBOL_GPL(driver_find_device);
- /**
-- * driver_create_file - create sysfs file for driver.
-- * @drv: driver.
-- * @attr: driver attribute descriptor.
-+ * driver_create_file - create sysfs file for driver.
-+ * @drv: driver.
-+ * @attr: driver attribute descriptor.
- */
--
--int driver_create_file(struct device_driver * drv, struct driver_attribute * attr)
-+int driver_create_file(struct device_driver *drv,
-+ struct driver_attribute *attr)
+ extern int blk_dev_init(void);
+
+-static struct kobject *base_probe(dev_t dev, int *part, void *data)
++static struct kobject *base_probe(dev_t devt, int *part, void *data)
{
- int error;
- if (get_driver(drv)) {
-- error = sysfs_create_file(&drv->kobj, &attr->attr);
-+ error = sysfs_create_file(&drv->p->kobj, &attr->attr);
- put_driver(drv);
- } else
- error = -EINVAL;
- return error;
+- if (request_module("block-major-%d-%d", MAJOR(dev), MINOR(dev)) > 0)
++ if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
+ /* Make old-style 2.4 aliases work */
+- request_module("block-major-%d", MAJOR(dev));
++ request_module("block-major-%d", MAJOR(devt));
+ return NULL;
}
+
+ static int __init genhd_device_init(void)
+ {
+- int err;
-
-+EXPORT_SYMBOL_GPL(driver_create_file);
+- bdev_map = kobj_map_init(base_probe, &block_subsys_lock);
++ class_register(&block_class);
++ bdev_map = kobj_map_init(base_probe, &block_class_lock);
+ blk_dev_init();
+- err = subsystem_register(&block_subsys);
+- if (err < 0)
+- printk(KERN_WARNING "%s: subsystem_register error: %d\n",
+- __FUNCTION__, err);
+- return err;
++
++#ifndef CONFIG_SYSFS_DEPRECATED
++ /* create top-level block dir */
++ block_depr = kobject_create_and_add("block", NULL);
++#endif
++ return 0;
+ }
+
+ subsys_initcall(genhd_device_init);
- /**
-- * driver_remove_file - remove sysfs file for driver.
-- * @drv: driver.
-- * @attr: driver attribute descriptor.
-+ * driver_remove_file - remove sysfs file for driver.
-+ * @drv: driver.
-+ * @attr: driver attribute descriptor.
- */
-
--void driver_remove_file(struct device_driver * drv, struct driver_attribute * attr)
-+void driver_remove_file(struct device_driver *drv,
-+ struct driver_attribute *attr)
+-
+-/*
+- * kobject & sysfs bindings for block devices
+- */
+-static ssize_t disk_attr_show(struct kobject *kobj, struct attribute *attr,
+- char *page)
++static ssize_t disk_range_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
{
- if (get_driver(drv)) {
-- sysfs_remove_file(&drv->kobj, &attr->attr);
-+ sysfs_remove_file(&drv->p->kobj, &attr->attr);
- put_driver(drv);
- }
+- struct gendisk *disk = to_disk(kobj);
+- struct disk_attribute *disk_attr =
+- container_of(attr,struct disk_attribute,attr);
+- ssize_t ret = -EIO;
++ struct gendisk *disk = dev_to_disk(dev);
+
+- if (disk_attr->show)
+- ret = disk_attr->show(disk,page);
+- return ret;
++ return sprintf(buf, "%d\n", disk->minors);
}
--
-+EXPORT_SYMBOL_GPL(driver_remove_file);
- /**
-- * get_driver - increment driver reference count.
-- * @drv: driver.
-+ * driver_add_kobj - add a kobject below the specified driver
-+ *
-+ * You really don't want to do this, this is only here due to one looney
-+ * iseries driver, go poke those developers if you are annoyed about
-+ * this...
- */
--struct device_driver * get_driver(struct device_driver * drv)
-+int driver_add_kobj(struct device_driver *drv, struct kobject *kobj,
-+ const char *fmt, ...)
+-static ssize_t disk_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char *page, size_t count)
++static ssize_t disk_removable_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
{
-- return drv ? to_drv(kobject_get(&drv->kobj)) : NULL;
-+ va_list args;
-+ char *name;
-+
-+ va_start(args, fmt);
-+ name = kvasprintf(GFP_KERNEL, fmt, args);
-+ va_end(args);
-+
-+ if (!name)
-+ return -ENOMEM;
-+
-+ return kobject_add(kobj, &drv->p->kobj, "%s", name);
+- struct gendisk *disk = to_disk(kobj);
+- struct disk_attribute *disk_attr =
+- container_of(attr,struct disk_attribute,attr);
+- ssize_t ret = 0;
++ struct gendisk *disk = dev_to_disk(dev);
+
+- if (disk_attr->store)
+- ret = disk_attr->store(disk, page, count);
+- return ret;
++ return sprintf(buf, "%d\n",
++ (disk->flags & GENHD_FL_REMOVABLE ? 1 : 0));
}
-+EXPORT_SYMBOL_GPL(driver_add_kobj);
-+
-+/**
-+ * get_driver - increment driver reference count.
-+ * @drv: driver.
-+ */
-+struct device_driver *get_driver(struct device_driver *drv)
-+{
-+ if (drv) {
-+ struct driver_private *priv;
-+ struct kobject *kobj;
-+ kobj = kobject_get(&drv->p->kobj);
-+ priv = to_driver(kobj);
-+ return priv->driver;
-+ }
-+ return NULL;
-+}
-+EXPORT_SYMBOL_GPL(get_driver);
+-static struct sysfs_ops disk_sysfs_ops = {
+- .show = &disk_attr_show,
+- .store = &disk_attr_store,
+-};
+-
+-static ssize_t disk_uevent_store(struct gendisk * disk,
+- const char *buf, size_t count)
+-{
+- kobject_uevent(&disk->kobj, KOBJ_ADD);
+- return count;
+-}
+-static ssize_t disk_dev_read(struct gendisk * disk, char *page)
+-{
+- dev_t base = MKDEV(disk->major, disk->first_minor);
+- return print_dev_t(page, base);
+-}
+-static ssize_t disk_range_read(struct gendisk * disk, char *page)
++static ssize_t disk_size_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
+ {
+- return sprintf(page, "%d\n", disk->minors);
+-}
+-static ssize_t disk_removable_read(struct gendisk * disk, char *page)
+-{
+- return sprintf(page, "%d\n",
+- (disk->flags & GENHD_FL_REMOVABLE ? 1 : 0));
++ struct gendisk *disk = dev_to_disk(dev);
- /**
-- * put_driver - decrement driver's refcount.
-- * @drv: driver.
-+ * put_driver - decrement driver's refcount.
-+ * @drv: driver.
- */
--void put_driver(struct device_driver * drv)
-+void put_driver(struct device_driver *drv)
-+{
-+ kobject_put(&drv->p->kobj);
-+}
-+EXPORT_SYMBOL_GPL(put_driver);
++ return sprintf(buf, "%llu\n", (unsigned long long)get_capacity(disk));
+ }
+-static ssize_t disk_size_read(struct gendisk * disk, char *page)
+-{
+- return sprintf(page, "%llu\n", (unsigned long long)get_capacity(disk));
+-}
+-static ssize_t disk_capability_read(struct gendisk *disk, char *page)
+
-+static int driver_add_groups(struct device_driver *drv,
-+ struct attribute_group **groups)
++static ssize_t disk_capability_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
{
-- kobject_put(&drv->kobj);
-+ int error = 0;
-+ int i;
+- return sprintf(page, "%x\n", disk->flags);
++ struct gendisk *disk = dev_to_disk(dev);
+
-+ if (groups) {
-+ for (i = 0; groups[i]; i++) {
-+ error = sysfs_create_group(&drv->p->kobj, groups[i]);
-+ if (error) {
-+ while (--i >= 0)
-+ sysfs_remove_group(&drv->p->kobj,
-+ groups[i]);
-+ break;
-+ }
-+ }
-+ }
-+ return error;
-+}
++ return sprintf(buf, "%x\n", disk->flags);
+ }
+-static ssize_t disk_stats_read(struct gendisk * disk, char *page)
+
-+static void driver_remove_groups(struct device_driver *drv,
-+ struct attribute_group **groups)
-+{
-+ int i;
++static ssize_t disk_stat_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
+ {
++ struct gendisk *disk = dev_to_disk(dev);
+
-+ if (groups)
-+ for (i = 0; groups[i]; i++)
-+ sysfs_remove_group(&drv->p->kobj, groups[i]);
+ preempt_disable();
+ disk_round_stats(disk);
+ preempt_enable();
+- return sprintf(page,
++ return sprintf(buf,
+ "%8lu %8lu %8llu %8u "
+ "%8lu %8lu %8llu %8u "
+ "%8u %8u %8u"
+@@ -445,40 +429,21 @@ static ssize_t disk_stats_read(struct gendisk * disk, char *page)
+ jiffies_to_msecs(disk_stat_read(disk, io_ticks)),
+ jiffies_to_msecs(disk_stat_read(disk, time_in_queue)));
}
+-static struct disk_attribute disk_attr_uevent = {
+- .attr = {.name = "uevent", .mode = S_IWUSR },
+- .store = disk_uevent_store
+-};
+-static struct disk_attribute disk_attr_dev = {
+- .attr = {.name = "dev", .mode = S_IRUGO },
+- .show = disk_dev_read
+-};
+-static struct disk_attribute disk_attr_range = {
+- .attr = {.name = "range", .mode = S_IRUGO },
+- .show = disk_range_read
+-};
+-static struct disk_attribute disk_attr_removable = {
+- .attr = {.name = "removable", .mode = S_IRUGO },
+- .show = disk_removable_read
+-};
+-static struct disk_attribute disk_attr_size = {
+- .attr = {.name = "size", .mode = S_IRUGO },
+- .show = disk_size_read
+-};
+-static struct disk_attribute disk_attr_capability = {
+- .attr = {.name = "capability", .mode = S_IRUGO },
+- .show = disk_capability_read
+-};
+-static struct disk_attribute disk_attr_stat = {
+- .attr = {.name = "stat", .mode = S_IRUGO },
+- .show = disk_stats_read
+-};
- /**
-- * driver_register - register driver with bus
-- * @drv: driver to register
-+ * driver_register - register driver with bus
-+ * @drv: driver to register
- *
-- * We pass off most of the work to the bus_add_driver() call,
-- * since most of the things we have to do deal with the bus
-- * structures.
-+ * We pass off most of the work to the bus_add_driver() call,
-+ * since most of the things we have to do deal with the bus
-+ * structures.
- */
--int driver_register(struct device_driver * drv)
-+int driver_register(struct device_driver *drv)
- {
-+ int ret;
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
++static ssize_t disk_fail_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ struct gendisk *disk = dev_to_disk(dev);
+
- if ((drv->bus->probe && drv->probe) ||
- (drv->bus->remove && drv->remove) ||
-- (drv->bus->shutdown && drv->shutdown)) {
-- printk(KERN_WARNING "Driver '%s' needs updating - please use bus_type methods\n", drv->name);
-- }
-- klist_init(&drv->klist_devices, NULL, NULL);
-- return bus_add_driver(drv);
-+ (drv->bus->shutdown && drv->shutdown))
-+ printk(KERN_WARNING "Driver '%s' needs updating - please use "
-+ "bus_type methods\n", drv->name);
-+ ret = bus_add_driver(drv);
-+ if (ret)
-+ return ret;
-+ ret = driver_add_groups(drv, drv->groups);
-+ if (ret)
-+ bus_remove_driver(drv);
-+ return ret;
- }
-+EXPORT_SYMBOL_GPL(driver_register);
++ return sprintf(buf, "%d\n", disk->flags & GENHD_FL_FAIL ? 1 : 0);
++}
- /**
-- * driver_unregister - remove driver from system.
-- * @drv: driver.
-+ * driver_unregister - remove driver from system.
-+ * @drv: driver.
- *
-- * Again, we pass off most of the work to the bus-level call.
-+ * Again, we pass off most of the work to the bus-level call.
- */
--
--void driver_unregister(struct device_driver * drv)
-+void driver_unregister(struct device_driver *drv)
+-static ssize_t disk_fail_store(struct gendisk * disk,
++static ssize_t disk_fail_store(struct device *dev,
++ struct device_attribute *attr,
+ const char *buf, size_t count)
{
-+ driver_remove_groups(drv, drv->groups);
- bus_remove_driver(drv);
++ struct gendisk *disk = dev_to_disk(dev);
+ int i;
+
+ if (count > 0 && sscanf(buf, "%d", &i) > 0) {
+@@ -490,136 +455,100 @@ static ssize_t disk_fail_store(struct gendisk * disk,
+
+ return count;
}
-+EXPORT_SYMBOL_GPL(driver_unregister);
+-static ssize_t disk_fail_read(struct gendisk * disk, char *page)
+-{
+- return sprintf(page, "%d\n", disk->flags & GENHD_FL_FAIL ? 1 : 0);
+-}
+-static struct disk_attribute disk_attr_fail = {
+- .attr = {.name = "make-it-fail", .mode = S_IRUGO | S_IWUSR },
+- .store = disk_fail_store,
+- .show = disk_fail_read
+-};
- /**
-- * driver_find - locate driver on a bus by its name.
-- * @name: name of the driver.
-- * @bus: bus to scan for the driver.
-+ * driver_find - locate driver on a bus by its name.
-+ * @name: name of the driver.
-+ * @bus: bus to scan for the driver.
- *
-- * Call kset_find_obj() to iterate over list of drivers on
-- * a bus to find driver by name. Return driver if found.
-+ * Call kset_find_obj() to iterate over list of drivers on
-+ * a bus to find driver by name. Return driver if found.
- *
-- * Note that kset_find_obj increments driver's reference count.
-+ * Note that kset_find_obj increments driver's reference count.
- */
- struct device_driver *driver_find(const char *name, struct bus_type *bus)
+ #endif
+
+-static struct attribute * default_attrs[] = {
+- &disk_attr_uevent.attr,
+- &disk_attr_dev.attr,
+- &disk_attr_range.attr,
+- &disk_attr_removable.attr,
+- &disk_attr_size.attr,
+- &disk_attr_stat.attr,
+- &disk_attr_capability.attr,
++static DEVICE_ATTR(range, S_IRUGO, disk_range_show, NULL);
++static DEVICE_ATTR(removable, S_IRUGO, disk_removable_show, NULL);
++static DEVICE_ATTR(size, S_IRUGO, disk_size_show, NULL);
++static DEVICE_ATTR(capability, S_IRUGO, disk_capability_show, NULL);
++static DEVICE_ATTR(stat, S_IRUGO, disk_stat_show, NULL);
++#ifdef CONFIG_FAIL_MAKE_REQUEST
++static struct device_attribute dev_attr_fail =
++ __ATTR(make-it-fail, S_IRUGO|S_IWUSR, disk_fail_show, disk_fail_store);
++#endif
++
++static struct attribute *disk_attrs[] = {
++ &dev_attr_range.attr,
++ &dev_attr_removable.attr,
++ &dev_attr_size.attr,
++ &dev_attr_capability.attr,
++ &dev_attr_stat.attr,
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
+- &disk_attr_fail.attr,
++ &dev_attr_fail.attr,
+ #endif
+- NULL,
++ NULL
++};
++
++static struct attribute_group disk_attr_group = {
++ .attrs = disk_attrs,
+ };
+
+-static void disk_release(struct kobject * kobj)
++static struct attribute_group *disk_attr_groups[] = {
++ &disk_attr_group,
++ NULL
++};
++
++static void disk_release(struct device *dev)
{
-- struct kobject *k = kset_find_obj(&bus->drivers, name);
-- if (k)
-- return to_drv(k);
-+ struct kobject *k = kset_find_obj(bus->p->drivers_kset, name);
-+ struct driver_private *priv;
+- struct gendisk *disk = to_disk(kobj);
++ struct gendisk *disk = dev_to_disk(dev);
+
-+ if (k) {
-+ priv = to_driver(k);
-+ return priv->driver;
-+ }
- return NULL;
+ kfree(disk->random);
+ kfree(disk->part);
+ free_disk_stats(disk);
+ kfree(disk);
}
-
--EXPORT_SYMBOL_GPL(driver_register);
--EXPORT_SYMBOL_GPL(driver_unregister);
--EXPORT_SYMBOL_GPL(get_driver);
--EXPORT_SYMBOL_GPL(put_driver);
- EXPORT_SYMBOL_GPL(driver_find);
--
--EXPORT_SYMBOL_GPL(driver_create_file);
--EXPORT_SYMBOL_GPL(driver_remove_file);
-diff --git a/drivers/base/firmware.c b/drivers/base/firmware.c
-index 90c8629..1138155 100644
---- a/drivers/base/firmware.c
-+++ b/drivers/base/firmware.c
-@@ -3,11 +3,11 @@
- *
- * Copyright (c) 2002-3 Patrick Mochel
- * Copyright (c) 2002-3 Open Source Development Labs
-+ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
-+ * Copyright (c) 2007 Novell Inc.
- *
- * This file is released under the GPLv2
-- *
- */
--
- #include <linux/kobject.h>
- #include <linux/module.h>
- #include <linux/init.h>
-@@ -15,23 +15,13 @@
-
- #include "base.h"
+-static struct kobj_type ktype_block = {
+- .release = disk_release,
+- .sysfs_ops = &disk_sysfs_ops,
+- .default_attrs = default_attrs,
++struct class block_class = {
++ .name = "block",
+ };
--static decl_subsys(firmware, NULL, NULL);
+-extern struct kobj_type ktype_part;
-
--int firmware_register(struct kset *s)
+-static int block_uevent_filter(struct kset *kset, struct kobject *kobj)
-{
-- kobj_set_kset_s(s, firmware_subsys);
-- return subsystem_register(s);
+- struct kobj_type *ktype = get_ktype(kobj);
+-
+- return ((ktype == &ktype_block) || (ktype == &ktype_part));
-}
-
--void firmware_unregister(struct kset *s)
+-static int block_uevent(struct kset *kset, struct kobject *kobj,
+- struct kobj_uevent_env *env)
-{
-- subsystem_unregister(s);
+- struct kobj_type *ktype = get_ktype(kobj);
+- struct device *physdev;
+- struct gendisk *disk;
+- struct hd_struct *part;
+-
+- if (ktype == &ktype_block) {
+- disk = container_of(kobj, struct gendisk, kobj);
+- add_uevent_var(env, "MINOR=%u", disk->first_minor);
+- } else if (ktype == &ktype_part) {
+- disk = container_of(kobj->parent, struct gendisk, kobj);
+- part = container_of(kobj, struct hd_struct, kobj);
+- add_uevent_var(env, "MINOR=%u",
+- disk->first_minor + part->partno);
+- } else
+- return 0;
+-
+- add_uevent_var(env, "MAJOR=%u", disk->major);
+-
+- /* add physical device, backing this device */
+- physdev = disk->driverfs_dev;
+- if (physdev) {
+- char *path = kobject_get_path(&physdev->kobj, GFP_KERNEL);
+-
+- add_uevent_var(env, "PHYSDEVPATH=%s", path);
+- kfree(path);
+-
+- if (physdev->bus)
+- add_uevent_var(env, "PHYSDEVBUS=%s", physdev->bus->name);
+-
+- if (physdev->driver)
+- add_uevent_var(env, physdev->driver->name);
+- }
+-
+- return 0;
-}
-+struct kobject *firmware_kobj;
-+EXPORT_SYMBOL_GPL(firmware_kobj);
+-
+-static struct kset_uevent_ops block_uevent_ops = {
+- .filter = block_uevent_filter,
+- .uevent = block_uevent,
++struct device_type disk_type = {
++ .name = "disk",
++ .groups = disk_attr_groups,
++ .release = disk_release,
+ };
- int __init firmware_init(void)
- {
-- return subsystem_register(&firmware_subsys);
-+ firmware_kobj = kobject_create_and_add("firmware", NULL);
-+ if (!firmware_kobj)
-+ return -ENOMEM;
-+ return 0;
- }
+-decl_subsys(block, &ktype_block, &block_uevent_ops);
-
--EXPORT_SYMBOL_GPL(firmware_register);
--EXPORT_SYMBOL_GPL(firmware_unregister);
-diff --git a/drivers/base/hypervisor.c b/drivers/base/hypervisor.c
-index 7080b41..6428cba 100644
---- a/drivers/base/hypervisor.c
-+++ b/drivers/base/hypervisor.c
-@@ -2,19 +2,23 @@
- * hypervisor.c - /sys/hypervisor subsystem.
- *
- * Copyright (C) IBM Corp. 2006
-+ * Copyright (C) 2007 Greg Kroah-Hartman <gregkh at suse.de>
-+ * Copyright (C) 2007 Novell Inc.
+ /*
+ * aggregate disk stat collector. Uses the same stats that the sysfs
+ * entries do, above, but makes them available through one seq_file.
+- * Watching a few disks may be efficient through sysfs, but watching
+- * all of them will be more efficient through this interface.
*
- * This file is released under the GPLv2
+ * The output looks suspiciously like /proc/partitions with a bunch of
+ * extra fields.
*/
- #include <linux/kobject.h>
- #include <linux/device.h>
--
- #include "base.h"
+-/* iterator */
+ static void *diskstats_start(struct seq_file *part, loff_t *pos)
+ {
+ loff_t k = *pos;
+- struct list_head *p;
++ struct device *dev;
--decl_subsys(hypervisor, NULL, NULL);
--EXPORT_SYMBOL_GPL(hypervisor_subsys);
-+struct kobject *hypervisor_kobj;
-+EXPORT_SYMBOL_GPL(hypervisor_kobj);
+- mutex_lock(&block_subsys_lock);
+- list_for_each(p, &block_subsys.list)
++ mutex_lock(&block_class_lock);
++ list_for_each_entry(dev, &block_class.devices, node) {
++ if (dev->type != &disk_type)
++ continue;
+ if (!k--)
+- return list_entry(p, struct gendisk, kobj.entry);
++ return dev_to_disk(dev);
++ }
+ return NULL;
+ }
- int __init hypervisor_init(void)
+ static void *diskstats_next(struct seq_file *part, void *v, loff_t *pos)
{
-- return subsystem_register(&hypervisor_subsys);
-+ hypervisor_kobj = kobject_create_and_add("hypervisor", NULL);
-+ if (!hypervisor_kobj)
-+ return -ENOMEM;
-+ return 0;
+- struct list_head *p = ((struct gendisk *)v)->kobj.entry.next;
++ struct gendisk *gp = v;
++ struct device *dev;
++
+ ++*pos;
+- return p==&block_subsys.list ? NULL :
+- list_entry(p, struct gendisk, kobj.entry);
++ list_for_each_entry(dev, &gp->dev.node, node) {
++ if (&dev->node == &block_class.devices)
++ return NULL;
++ if (dev->type == &disk_type)
++ return dev_to_disk(dev);
++ }
++ return NULL;
}
-diff --git a/drivers/base/init.c b/drivers/base/init.c
-index 3713815..7bd9b6a 100644
---- a/drivers/base/init.c
-+++ b/drivers/base/init.c
-@@ -1,10 +1,8 @@
- /*
-- *
- * Copyright (c) 2002-3 Patrick Mochel
- * Copyright (c) 2002-3 Open Source Development Labs
- *
- * This file is released under the GPLv2
-- *
- */
- #include <linux/device.h>
-@@ -14,12 +12,11 @@
- #include "base.h"
-
- /**
-- * driver_init - initialize driver model.
-+ * driver_init - initialize driver model.
- *
-- * Call the driver model init functions to initialize their
-- * subsystems. Called early from init/main.c.
-+ * Call the driver model init functions to initialize their
-+ * subsystems. Called early from init/main.c.
- */
--
- void __init driver_init(void)
+ static void diskstats_stop(struct seq_file *part, void *v)
{
- /* These are the core pieces */
-@@ -36,5 +33,4 @@ void __init driver_init(void)
- system_bus_init();
- cpu_dev_init();
- memory_dev_init();
-- attribute_container_init();
+- mutex_unlock(&block_subsys_lock);
++ mutex_unlock(&block_class_lock);
}
-diff --git a/drivers/base/memory.c b/drivers/base/memory.c
-index 7868707..7ae413f 100644
---- a/drivers/base/memory.c
-+++ b/drivers/base/memory.c
-@@ -26,7 +26,7 @@
- #define MEMORY_CLASS_NAME "memory"
- static struct sysdev_class memory_sysdev_class = {
-- set_kset_name(MEMORY_CLASS_NAME),
-+ .name = MEMORY_CLASS_NAME,
- };
+ static int diskstats_show(struct seq_file *s, void *v)
+@@ -629,7 +558,7 @@ static int diskstats_show(struct seq_file *s, void *v)
+ int n = 0;
- static const char *memory_uevent_name(struct kset *kset, struct kobject *kobj)
-diff --git a/drivers/base/module.c b/drivers/base/module.c
-new file mode 100644
-index 0000000..103be9c
---- /dev/null
-+++ b/drivers/base/module.c
-@@ -0,0 +1,94 @@
-+/*
-+ * module.c - module sysfs fun for drivers
-+ *
-+ * This file is released under the GPLv2
-+ *
-+ */
-+#include <linux/device.h>
-+#include <linux/module.h>
-+#include <linux/errno.h>
-+#include <linux/string.h>
-+#include "base.h"
-+
-+static char *make_driver_name(struct device_driver *drv)
-+{
-+ char *driver_name;
-+
-+ driver_name = kmalloc(strlen(drv->name) + strlen(drv->bus->name) + 2,
-+ GFP_KERNEL);
-+ if (!driver_name)
-+ return NULL;
-+
-+ sprintf(driver_name, "%s:%s", drv->bus->name, drv->name);
-+ return driver_name;
-+}
-+
-+static void module_create_drivers_dir(struct module_kobject *mk)
-+{
-+ if (!mk || mk->drivers_dir)
-+ return;
-+
-+ mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj);
-+}
-+
-+void module_add_driver(struct module *mod, struct device_driver *drv)
+ /*
+- if (&sgp->kobj.entry == block_subsys.kset.list.next)
++ if (&gp->dev.kobj.entry == block_class.devices.next)
+ seq_puts(s, "major minor name"
+ " rio rmerge rsect ruse wio wmerge "
+ "wsect wuse running use aveq"
+@@ -666,7 +595,7 @@ static int diskstats_show(struct seq_file *s, void *v)
+ return 0;
+ }
+
+-struct seq_operations diskstats_op = {
++const struct seq_operations diskstats_op = {
+ .start = diskstats_start,
+ .next = diskstats_next,
+ .stop = diskstats_stop,
+@@ -683,7 +612,7 @@ static void media_change_notify_thread(struct work_struct *work)
+ * set enviroment vars to indicate which event this is for
+ * so that user space will know to go check the media status.
+ */
+- kobject_uevent_env(&gd->kobj, KOBJ_CHANGE, envp);
++ kobject_uevent_env(&gd->dev.kobj, KOBJ_CHANGE, envp);
+ put_device(gd->driverfs_dev);
+ }
+
+@@ -694,6 +623,25 @@ void genhd_media_change_notify(struct gendisk *disk)
+ }
+ EXPORT_SYMBOL_GPL(genhd_media_change_notify);
+
++dev_t blk_lookup_devt(const char *name)
+{
-+ char *driver_name;
-+ int no_warn;
-+ struct module_kobject *mk = NULL;
-+
-+ if (!drv)
-+ return;
-+
-+ if (mod)
-+ mk = &mod->mkobj;
-+ else if (drv->mod_name) {
-+ struct kobject *mkobj;
++ struct device *dev;
++ dev_t devt = MKDEV(0, 0);
+
-+ /* Lookup built-in module entry in /sys/modules */
-+ mkobj = kset_find_obj(module_kset, drv->mod_name);
-+ if (mkobj) {
-+ mk = container_of(mkobj, struct module_kobject, kobj);
-+ /* remember our module structure */
-+ drv->p->mkobj = mk;
-+ /* kset_find_obj took a reference */
-+ kobject_put(mkobj);
++ mutex_lock(&block_class_lock);
++ list_for_each_entry(dev, &block_class.devices, node) {
++ if (strcmp(dev->bus_id, name) == 0) {
++ devt = dev->devt;
++ break;
+ }
+ }
++ mutex_unlock(&block_class_lock);
+
-+ if (!mk)
-+ return;
-+
-+ /* Don't check return codes; these calls are idempotent */
-+ no_warn = sysfs_create_link(&drv->p->kobj, &mk->kobj, "module");
-+ driver_name = make_driver_name(drv);
-+ if (driver_name) {
-+ module_create_drivers_dir(mk);
-+ no_warn = sysfs_create_link(mk->drivers_dir, &drv->p->kobj,
-+ driver_name);
-+ kfree(driver_name);
-+ }
++ return devt;
+}
+
-+void module_remove_driver(struct device_driver *drv)
-+{
-+ struct module_kobject *mk = NULL;
-+ char *driver_name;
-+
-+ if (!drv)
-+ return;
-+
-+ sysfs_remove_link(&drv->p->kobj, "module");
++EXPORT_SYMBOL(blk_lookup_devt);
+
-+ if (drv->owner)
-+ mk = &drv->owner->mkobj;
-+ else if (drv->p->mkobj)
-+ mk = drv->p->mkobj;
-+ if (mk && mk->drivers_dir) {
-+ driver_name = make_driver_name(drv);
-+ if (driver_name) {
-+ sysfs_remove_link(mk->drivers_dir, driver_name);
-+ kfree(driver_name);
-+ }
-+ }
-+}
-diff --git a/drivers/base/node.c b/drivers/base/node.c
-index 88eeed7..e59861f 100644
---- a/drivers/base/node.c
-+++ b/drivers/base/node.c
-@@ -15,7 +15,7 @@
- #include <linux/device.h>
-
- static struct sysdev_class node_class = {
-- set_kset_name("node"),
-+ .name = "node",
- };
-
-
-diff --git a/drivers/base/platform.c b/drivers/base/platform.c
-index fb56092..efaf282 100644
---- a/drivers/base/platform.c
-+++ b/drivers/base/platform.c
-@@ -20,7 +20,8 @@
-
- #include "base.h"
-
--#define to_platform_driver(drv) (container_of((drv), struct platform_driver, driver))
-+#define to_platform_driver(drv) (container_of((drv), struct platform_driver, \
-+ driver))
-
- struct device platform_bus = {
- .bus_id = "platform",
-@@ -28,14 +29,13 @@ struct device platform_bus = {
- EXPORT_SYMBOL_GPL(platform_bus);
-
- /**
-- * platform_get_resource - get a resource for a device
-- * @dev: platform device
-- * @type: resource type
-- * @num: resource index
-+ * platform_get_resource - get a resource for a device
-+ * @dev: platform device
-+ * @type: resource type
-+ * @num: resource index
- */
--struct resource *
--platform_get_resource(struct platform_device *dev, unsigned int type,
-- unsigned int num)
-+struct resource *platform_get_resource(struct platform_device *dev,
-+ unsigned int type, unsigned int num)
+ struct gendisk *alloc_disk(int minors)
{
- int i;
-
-@@ -43,8 +43,7 @@ platform_get_resource(struct platform_device *dev, unsigned int type,
- struct resource *r = &dev->resource[i];
-
- if ((r->flags & (IORESOURCE_IO|IORESOURCE_MEM|
-- IORESOURCE_IRQ|IORESOURCE_DMA))
-- == type)
-+ IORESOURCE_IRQ|IORESOURCE_DMA)) == type)
- if (num-- == 0)
- return r;
+ return alloc_disk_node(minors, -1);
+@@ -721,9 +669,10 @@ struct gendisk *alloc_disk_node(int minors, int node_id)
+ }
+ }
+ disk->minors = minors;
+- kobj_set_kset_s(disk,block_subsys);
+- kobject_init(&disk->kobj);
+ rand_initialize_disk(disk);
++ disk->dev.class = &block_class;
++ disk->dev.type = &disk_type;
++ device_initialize(&disk->dev);
+ INIT_WORK(&disk->async_notify,
+ media_change_notify_thread);
}
-@@ -53,9 +52,9 @@ platform_get_resource(struct platform_device *dev, unsigned int type,
- EXPORT_SYMBOL_GPL(platform_get_resource);
-
- /**
-- * platform_get_irq - get an IRQ for a device
-- * @dev: platform device
-- * @num: IRQ number index
-+ * platform_get_irq - get an IRQ for a device
-+ * @dev: platform device
-+ * @num: IRQ number index
- */
- int platform_get_irq(struct platform_device *dev, unsigned int num)
- {
-@@ -66,14 +65,13 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
- EXPORT_SYMBOL_GPL(platform_get_irq);
-
- /**
-- * platform_get_resource_byname - get a resource for a device by name
-- * @dev: platform device
-- * @type: resource type
-- * @name: resource name
-+ * platform_get_resource_byname - get a resource for a device by name
-+ * @dev: platform device
-+ * @type: resource type
-+ * @name: resource name
- */
--struct resource *
--platform_get_resource_byname(struct platform_device *dev, unsigned int type,
-- char *name)
-+struct resource *platform_get_resource_byname(struct platform_device *dev,
-+ unsigned int type, char *name)
- {
- int i;
-
-@@ -90,22 +88,23 @@ platform_get_resource_byname(struct platform_device *dev, unsigned int type,
- EXPORT_SYMBOL_GPL(platform_get_resource_byname);
-
- /**
-- * platform_get_irq - get an IRQ for a device
-- * @dev: platform device
-- * @name: IRQ name
-+ * platform_get_irq - get an IRQ for a device
-+ * @dev: platform device
-+ * @name: IRQ name
- */
- int platform_get_irq_byname(struct platform_device *dev, char *name)
- {
-- struct resource *r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name);
-+ struct resource *r = platform_get_resource_byname(dev, IORESOURCE_IRQ,
-+ name);
-
- return r ? r->start : -ENXIO;
- }
- EXPORT_SYMBOL_GPL(platform_get_irq_byname);
-
- /**
-- * platform_add_devices - add a numbers of platform devices
-- * @devs: array of platform devices to add
-- * @num: number of platform devices in array
-+ * platform_add_devices - add a numbers of platform devices
-+ * @devs: array of platform devices to add
-+ * @num: number of platform devices in array
- */
- int platform_add_devices(struct platform_device **devs, int num)
- {
-@@ -130,12 +129,11 @@ struct platform_object {
- };
-
- /**
-- * platform_device_put
-- * @pdev: platform device to free
-+ * platform_device_put
-+ * @pdev: platform device to free
- *
-- * Free all memory associated with a platform device. This function
-- * must _only_ be externally called in error cases. All other usage
-- * is a bug.
-+ * Free all memory associated with a platform device. This function must
-+ * _only_ be externally called in error cases. All other usage is a bug.
- */
- void platform_device_put(struct platform_device *pdev)
- {
-@@ -146,7 +144,8 @@ EXPORT_SYMBOL_GPL(platform_device_put);
-
- static void platform_device_release(struct device *dev)
- {
-- struct platform_object *pa = container_of(dev, struct platform_object, pdev.dev);
-+ struct platform_object *pa = container_of(dev, struct platform_object,
-+ pdev.dev);
-
- kfree(pa->pdev.dev.platform_data);
- kfree(pa->pdev.resource);
-@@ -154,12 +153,12 @@ static void platform_device_release(struct device *dev)
- }
-
- /**
-- * platform_device_alloc
-- * @name: base name of the device we're adding
-- * @id: instance id
-+ * platform_device_alloc
-+ * @name: base name of the device we're adding
-+ * @id: instance id
- *
-- * Create a platform device object which can have other objects attached
-- * to it, and which will have attached objects freed when it is released.
-+ * Create a platform device object which can have other objects attached
-+ * to it, and which will have attached objects freed when it is released.
- */
- struct platform_device *platform_device_alloc(const char *name, int id)
- {
-@@ -179,16 +178,17 @@ struct platform_device *platform_device_alloc(const char *name, int id)
- EXPORT_SYMBOL_GPL(platform_device_alloc);
-
- /**
-- * platform_device_add_resources
-- * @pdev: platform device allocated by platform_device_alloc to add resources to
-- * @res: set of resources that needs to be allocated for the device
-- * @num: number of resources
-+ * platform_device_add_resources
-+ * @pdev: platform device allocated by platform_device_alloc to add resources to
-+ * @res: set of resources that needs to be allocated for the device
-+ * @num: number of resources
- *
-- * Add a copy of the resources to the platform device. The memory
-- * associated with the resources will be freed when the platform
-- * device is released.
-+ * Add a copy of the resources to the platform device. The memory
-+ * associated with the resources will be freed when the platform device is
-+ * released.
- */
--int platform_device_add_resources(struct platform_device *pdev, struct resource *res, unsigned int num)
-+int platform_device_add_resources(struct platform_device *pdev,
-+ struct resource *res, unsigned int num)
- {
- struct resource *r;
-
-@@ -203,16 +203,17 @@ int platform_device_add_resources(struct platform_device *pdev, struct resource
- EXPORT_SYMBOL_GPL(platform_device_add_resources);
-
- /**
-- * platform_device_add_data
-- * @pdev: platform device allocated by platform_device_alloc to add resources to
-- * @data: platform specific data for this platform device
-- * @size: size of platform specific data
-+ * platform_device_add_data
-+ * @pdev: platform device allocated by platform_device_alloc to add resources to
-+ * @data: platform specific data for this platform device
-+ * @size: size of platform specific data
- *
-- * Add a copy of platform specific data to the platform device's platform_data
-- * pointer. The memory associated with the platform data will be freed
-- * when the platform device is released.
-+ * Add a copy of platform specific data to the platform device's
-+ * platform_data pointer. The memory associated with the platform data
-+ * will be freed when the platform device is released.
- */
--int platform_device_add_data(struct platform_device *pdev, const void *data, size_t size)
-+int platform_device_add_data(struct platform_device *pdev, const void *data,
-+ size_t size)
- {
- void *d;
-
-@@ -226,11 +227,11 @@ int platform_device_add_data(struct platform_device *pdev, const void *data, siz
- EXPORT_SYMBOL_GPL(platform_device_add_data);
-
- /**
-- * platform_device_add - add a platform device to device hierarchy
-- * @pdev: platform device we're adding
-+ * platform_device_add - add a platform device to device hierarchy
-+ * @pdev: platform device we're adding
- *
-- * This is part 2 of platform_device_register(), though may be called
-- * separately _iff_ pdev was allocated by platform_device_alloc().
-+ * This is part 2 of platform_device_register(), though may be called
-+ * separately _iff_ pdev was allocated by platform_device_alloc().
- */
- int platform_device_add(struct platform_device *pdev)
- {
-@@ -289,13 +290,12 @@ int platform_device_add(struct platform_device *pdev)
- EXPORT_SYMBOL_GPL(platform_device_add);
-
- /**
-- * platform_device_del - remove a platform-level device
-- * @pdev: platform device we're removing
-+ * platform_device_del - remove a platform-level device
-+ * @pdev: platform device we're removing
- *
-- * Note that this function will also release all memory- and port-based
-- * resources owned by the device (@dev->resource). This function
-- * must _only_ be externally called in error cases. All other usage
-- * is a bug.
-+ * Note that this function will also release all memory- and port-based
-+ * resources owned by the device (@dev->resource). This function must
-+ * _only_ be externally called in error cases. All other usage is a bug.
- */
- void platform_device_del(struct platform_device *pdev)
- {
-@@ -314,11 +314,10 @@ void platform_device_del(struct platform_device *pdev)
- EXPORT_SYMBOL_GPL(platform_device_del);
-
- /**
-- * platform_device_register - add a platform-level device
-- * @pdev: platform device we're adding
-- *
-+ * platform_device_register - add a platform-level device
-+ * @pdev: platform device we're adding
- */
--int platform_device_register(struct platform_device * pdev)
-+int platform_device_register(struct platform_device *pdev)
- {
- device_initialize(&pdev->dev);
- return platform_device_add(pdev);
-@@ -326,14 +325,14 @@ int platform_device_register(struct platform_device * pdev)
- EXPORT_SYMBOL_GPL(platform_device_register);
-
- /**
-- * platform_device_unregister - unregister a platform-level device
-- * @pdev: platform device we're unregistering
-+ * platform_device_unregister - unregister a platform-level device
-+ * @pdev: platform device we're unregistering
- *
-- * Unregistration is done in 2 steps. First we release all resources
-- * and remove it from the subsystem, then we drop reference count by
-- * calling platform_device_put().
-+ * Unregistration is done in 2 steps. First we release all resources
-+ * and remove it from the subsystem, then we drop reference count by
-+ * calling platform_device_put().
- */
--void platform_device_unregister(struct platform_device * pdev)
-+void platform_device_unregister(struct platform_device *pdev)
- {
- platform_device_del(pdev);
- platform_device_put(pdev);
-@@ -341,27 +340,29 @@ void platform_device_unregister(struct platform_device * pdev)
- EXPORT_SYMBOL_GPL(platform_device_unregister);
-
- /**
-- * platform_device_register_simple
-- * @name: base name of the device we're adding
-- * @id: instance id
-- * @res: set of resources that needs to be allocated for the device
-- * @num: number of resources
-+ * platform_device_register_simple
-+ * @name: base name of the device we're adding
-+ * @id: instance id
-+ * @res: set of resources that needs to be allocated for the device
-+ * @num: number of resources
- *
-- * This function creates a simple platform device that requires minimal
-- * resource and memory management. Canned release function freeing
-- * memory allocated for the device allows drivers using such devices
-- * to be unloaded without waiting for the last reference to the device
-- * to be dropped.
-+ * This function creates a simple platform device that requires minimal
-+ * resource and memory management. Canned release function freeing memory
-+ * allocated for the device allows drivers using such devices to be
-+ * unloaded without waiting for the last reference to the device to be
-+ * dropped.
- *
-- * This interface is primarily intended for use with legacy drivers
-- * which probe hardware directly. Because such drivers create sysfs
-- * device nodes themselves, rather than letting system infrastructure
-- * handle such device enumeration tasks, they don't fully conform to
-- * the Linux driver model. In particular, when such drivers are built
-- * as modules, they can't be "hotplugged".
-+ * This interface is primarily intended for use with legacy drivers which
-+ * probe hardware directly. Because such drivers create sysfs device nodes
-+ * themselves, rather than letting system infrastructure handle such device
-+ * enumeration tasks, they don't fully conform to the Linux driver model.
-+ * In particular, when such drivers are built as modules, they can't be
-+ * "hotplugged".
- */
--struct platform_device *platform_device_register_simple(char *name, int id,
-- struct resource *res, unsigned int num)
-+struct platform_device *platform_device_register_simple(const char *name,
-+ int id,
-+ struct resource *res,
-+ unsigned int num)
- {
- struct platform_device *pdev;
- int retval;
-@@ -436,8 +437,8 @@ static int platform_drv_resume(struct device *_dev)
- }
-
- /**
-- * platform_driver_register
-- * @drv: platform driver structure
-+ * platform_driver_register
-+ * @drv: platform driver structure
- */
- int platform_driver_register(struct platform_driver *drv)
- {
-@@ -457,8 +458,8 @@ int platform_driver_register(struct platform_driver *drv)
- EXPORT_SYMBOL_GPL(platform_driver_register);
-
- /**
-- * platform_driver_unregister
-- * @drv: platform driver structure
-+ * platform_driver_unregister
-+ * @drv: platform driver structure
- */
- void platform_driver_unregister(struct platform_driver *drv)
- {
-@@ -497,12 +498,12 @@ int __init_or_module platform_driver_probe(struct platform_driver *drv,
- * if the probe was successful, and make sure any forced probes of
- * new devices fail.
- */
-- spin_lock(&platform_bus_type.klist_drivers.k_lock);
-+ spin_lock(&platform_bus_type.p->klist_drivers.k_lock);
- drv->probe = NULL;
-- if (code == 0 && list_empty(&drv->driver.klist_devices.k_list))
-+ if (code == 0 && list_empty(&drv->driver.p->klist_devices.k_list))
- retval = -ENODEV;
- drv->driver.probe = platform_drv_probe_fail;
-- spin_unlock(&platform_bus_type.klist_drivers.k_lock);
-+ spin_unlock(&platform_bus_type.p->klist_drivers.k_lock);
-
- if (code != retval)
- platform_driver_unregister(drv);
-@@ -516,8 +517,8 @@ EXPORT_SYMBOL_GPL(platform_driver_probe);
- * (b) sysfs attribute lets new-style coldplug recover from hotplug events
- * mishandled before system is fully running: "modprobe $(cat modalias)"
- */
--static ssize_t
--modalias_show(struct device *dev, struct device_attribute *a, char *buf)
-+static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
-+ char *buf)
+@@ -743,7 +692,7 @@ struct kobject *get_disk(struct gendisk *disk)
+ owner = disk->fops->owner;
+ if (owner && !try_module_get(owner))
+ return NULL;
+- kobj = kobject_get(&disk->kobj);
++ kobj = kobject_get(&disk->dev.kobj);
+ if (kobj == NULL) {
+ module_put(owner);
+ return NULL;
+@@ -757,7 +706,7 @@ EXPORT_SYMBOL(get_disk);
+ void put_disk(struct gendisk *disk)
{
- struct platform_device *pdev = to_platform_device(dev);
- int len = snprintf(buf, PAGE_SIZE, "platform:%s\n", pdev->name);
-@@ -538,26 +539,24 @@ static int platform_uevent(struct device *dev, struct kobj_uevent_env *env)
- return 0;
+ if (disk)
+- kobject_put(&disk->kobj);
++ kobject_put(&disk->dev.kobj);
}
+ EXPORT_SYMBOL(put_disk);
+diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
+deleted file mode 100644
+index 8b91994..0000000
+--- a/block/ll_rw_blk.c
++++ /dev/null
+@@ -1,4214 +0,0 @@
+-/*
+- * Copyright (C) 1991, 1992 Linus Torvalds
+- * Copyright (C) 1994, Karl Keyte: Added support for disk statistics
+- * Elevator latency, (C) 2000 Andrea Arcangeli <andrea at suse.de> SuSE
+- * Queue request tables / lock, selectable elevator, Jens Axboe <axboe at suse.de>
+- * kernel-doc documentation started by NeilBrown <neilb at cse.unsw.edu.au> - July2000
+- * bio rewrite, highmem i/o, etc, Jens Axboe <axboe at suse.de> - may 2001
+- */
-
- /**
-- * platform_match - bind platform device to platform driver.
-- * @dev: device.
-- * @drv: driver.
-+ * platform_match - bind platform device to platform driver.
-+ * @dev: device.
-+ * @drv: driver.
- *
-- * Platform device IDs are assumed to be encoded like this:
-- * "<name><instance>", where <name> is a short description of the
-- * type of device, like "pci" or "floppy", and <instance> is the
-- * enumerated instance of the device, like '0' or '42'.
-- * Driver IDs are simply "<name>".
-- * So, extract the <name> from the platform_device structure,
-- * and compare it against the name of the driver. Return whether
-- * they match or not.
-+ * Platform device IDs are assumed to be encoded like this:
-+ * "<name><instance>", where <name> is a short description of the type of
-+ * device, like "pci" or "floppy", and <instance> is the enumerated
-+ * instance of the device, like '0' or '42'. Driver IDs are simply
-+ * "<name>". So, extract the <name> from the platform_device structure,
-+ * and compare it against the name of the driver. Return whether they match
-+ * or not.
- */
+-/*
+- * This handles all read/write requests to block devices
+- */
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/backing-dev.h>
+-#include <linux/bio.h>
+-#include <linux/blkdev.h>
+-#include <linux/highmem.h>
+-#include <linux/mm.h>
+-#include <linux/kernel_stat.h>
+-#include <linux/string.h>
+-#include <linux/init.h>
+-#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
+-#include <linux/completion.h>
+-#include <linux/slab.h>
+-#include <linux/swap.h>
+-#include <linux/writeback.h>
+-#include <linux/task_io_accounting_ops.h>
+-#include <linux/interrupt.h>
+-#include <linux/cpu.h>
+-#include <linux/blktrace_api.h>
+-#include <linux/fault-inject.h>
+-#include <linux/scatterlist.h>
-
--static int platform_match(struct device * dev, struct device_driver * drv)
-+static int platform_match(struct device *dev, struct device_driver *drv)
- {
-- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
-+ struct platform_device *pdev;
-
-+ pdev = container_of(dev, struct platform_device, dev);
- return (strncmp(pdev->name, drv->name, BUS_ID_SIZE) == 0);
- }
-
-@@ -574,9 +573,10 @@ static int platform_suspend(struct device *dev, pm_message_t mesg)
- static int platform_suspend_late(struct device *dev, pm_message_t mesg)
- {
- struct platform_driver *drv = to_platform_driver(dev->driver);
-- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
-+ struct platform_device *pdev;
- int ret = 0;
-
-+ pdev = container_of(dev, struct platform_device, dev);
- if (dev->driver && drv->suspend_late)
- ret = drv->suspend_late(pdev, mesg);
-
-@@ -586,16 +586,17 @@ static int platform_suspend_late(struct device *dev, pm_message_t mesg)
- static int platform_resume_early(struct device *dev)
- {
- struct platform_driver *drv = to_platform_driver(dev->driver);
-- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
-+ struct platform_device *pdev;
- int ret = 0;
-
-+ pdev = container_of(dev, struct platform_device, dev);
- if (dev->driver && drv->resume_early)
- ret = drv->resume_early(pdev);
-
- return ret;
- }
-
--static int platform_resume(struct device * dev)
-+static int platform_resume(struct device *dev)
- {
- int ret = 0;
-
-diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
-index 44504e6..de28dfd 100644
---- a/drivers/base/power/Makefile
-+++ b/drivers/base/power/Makefile
-@@ -1,11 +1,6 @@
--obj-y := shutdown.o
- obj-$(CONFIG_PM) += sysfs.o
- obj-$(CONFIG_PM_SLEEP) += main.o
- obj-$(CONFIG_PM_TRACE) += trace.o
-
--ifeq ($(CONFIG_DEBUG_DRIVER),y)
--EXTRA_CFLAGS += -DDEBUG
--endif
--ifeq ($(CONFIG_PM_VERBOSE),y)
--EXTRA_CFLAGS += -DDEBUG
--endif
-+ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
-+ccflags-$(CONFIG_PM_VERBOSE) += -DDEBUG
-diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
-index 691ffb6..200ed5f 100644
---- a/drivers/base/power/main.c
-+++ b/drivers/base/power/main.c
-@@ -24,20 +24,45 @@
- #include <linux/mutex.h>
- #include <linux/pm.h>
- #include <linux/resume-trace.h>
-+#include <linux/rwsem.h>
-
- #include "../base.h"
- #include "power.h"
-
-+/*
-+ * The entries in the dpm_active list are in a depth first order, simply
-+ * because children are guaranteed to be discovered after parents, and
-+ * are inserted at the back of the list on discovery.
-+ *
-+ * All the other lists are kept in the same order, for consistency.
-+ * However the lists aren't always traversed in the same order.
-+ * Semaphores must be acquired from the top (i.e., front) down
-+ * and released in the opposite order. Devices must be suspended
-+ * from the bottom (i.e., end) up and resumed in the opposite order.
-+ * That way no parent will be suspended while it still has an active
-+ * child.
-+ *
-+ * Since device_pm_add() may be called with a device semaphore held,
-+ * we must never try to acquire a device semaphore while holding
-+ * dpm_list_mutex.
-+ */
-+
- LIST_HEAD(dpm_active);
-+static LIST_HEAD(dpm_locked);
- static LIST_HEAD(dpm_off);
- static LIST_HEAD(dpm_off_irq);
-+static LIST_HEAD(dpm_destroy);
-
--static DEFINE_MUTEX(dpm_mtx);
- static DEFINE_MUTEX(dpm_list_mtx);
-
--int (*platform_enable_wakeup)(struct device *dev, int is_on);
-+static DECLARE_RWSEM(pm_sleep_rwsem);
-
-+int (*platform_enable_wakeup)(struct device *dev, int is_on);
-
-+/**
-+ * device_pm_add - add a device to the list of active devices
-+ * @dev: Device to be added to the list
-+ */
- void device_pm_add(struct device *dev)
- {
- pr_debug("PM: Adding info for %s:%s\n",
-@@ -48,8 +73,36 @@ void device_pm_add(struct device *dev)
- mutex_unlock(&dpm_list_mtx);
- }
-
-+/**
-+ * device_pm_remove - remove a device from the list of active devices
-+ * @dev: Device to be removed from the list
-+ *
-+ * This function also removes the device's PM-related sysfs attributes.
-+ */
- void device_pm_remove(struct device *dev)
- {
-+ /*
-+ * If this function is called during a suspend, it will be blocked,
-+ * because we're holding the device's semaphore at that time, which may
-+ * lead to a deadlock. In that case we want to print a warning.
-+ * However, it may also be called by unregister_dropped_devices() with
-+ * the device's semaphore released, in which case the warning should
-+ * not be printed.
-+ */
-+ if (down_trylock(&dev->sem)) {
-+ if (down_read_trylock(&pm_sleep_rwsem)) {
-+ /* No suspend in progress, wait on dev->sem */
-+ down(&dev->sem);
-+ up_read(&pm_sleep_rwsem);
-+ } else {
-+ /* Suspend in progress, we may deadlock */
-+ dev_warn(dev, "Suspicious %s during suspend\n",
-+ __FUNCTION__);
-+ dump_stack();
-+ /* The user has been warned ... */
-+ down(&dev->sem);
-+ }
-+ }
- pr_debug("PM: Removing info for %s:%s\n",
- dev->bus ? dev->bus->name : "No Bus",
- kobject_name(&dev->kobj));
-@@ -57,25 +110,124 @@ void device_pm_remove(struct device *dev)
- dpm_sysfs_remove(dev);
- list_del_init(&dev->power.entry);
- mutex_unlock(&dpm_list_mtx);
-+ up(&dev->sem);
-+}
-+
-+/**
-+ * device_pm_schedule_removal - schedule the removal of a suspended device
-+ * @dev: Device to destroy
-+ *
-+ * Moves the device to the dpm_destroy list for further processing by
-+ * unregister_dropped_devices().
-+ */
-+void device_pm_schedule_removal(struct device *dev)
-+{
-+ pr_debug("PM: Preparing for removal: %s:%s\n",
-+ dev->bus ? dev->bus->name : "No Bus",
-+ kobject_name(&dev->kobj));
-+ mutex_lock(&dpm_list_mtx);
-+ list_move_tail(&dev->power.entry, &dpm_destroy);
-+ mutex_unlock(&dpm_list_mtx);
-+}
-+
-+/**
-+ * pm_sleep_lock - mutual exclusion for registration and suspend
-+ *
-+ * Returns 0 if no suspend is underway and device registration
-+ * may proceed, otherwise -EBUSY.
-+ */
-+int pm_sleep_lock(void)
-+{
-+ if (down_read_trylock(&pm_sleep_rwsem))
-+ return 0;
-+
-+ return -EBUSY;
-+}
-+
-+/**
-+ * pm_sleep_unlock - mutual exclusion for registration and suspend
-+ *
-+ * This routine undoes the effect of device_pm_add_lock
-+ * when a device's registration is complete.
-+ */
-+void pm_sleep_unlock(void)
-+{
-+ up_read(&pm_sleep_rwsem);
- }
-
-
- /*------------------------- Resume routines -------------------------*/
-
- /**
-- * resume_device - Restore state for one device.
-+ * resume_device_early - Power on one device (early resume).
- * @dev: Device.
- *
-+ * Must be called with interrupts disabled.
- */
+-/*
+- * for max sense size
+- */
+-#include <scsi/scsi_cmnd.h>
-
--static int resume_device(struct device * dev)
-+static int resume_device_early(struct device *dev)
- {
- int error = 0;
-
- TRACE_DEVICE(dev);
- TRACE_RESUME(0);
-
-- down(&dev->sem);
-+ if (dev->bus && dev->bus->resume_early) {
-+ dev_dbg(dev, "EARLY resume\n");
-+ error = dev->bus->resume_early(dev);
-+ }
-+
-+ TRACE_RESUME(error);
-+ return error;
-+}
-+
-+/**
-+ * dpm_power_up - Power on all regular (non-sysdev) devices.
-+ *
-+ * Walk the dpm_off_irq list and power each device up. This
-+ * is used for devices that required they be powered down with
-+ * interrupts disabled. As devices are powered on, they are moved
-+ * to the dpm_off list.
-+ *
-+ * Must be called with interrupts disabled and only one CPU running.
-+ */
-+static void dpm_power_up(void)
-+{
-+
-+ while (!list_empty(&dpm_off_irq)) {
-+ struct list_head *entry = dpm_off_irq.next;
-+ struct device *dev = to_device(entry);
-+
-+ list_move_tail(entry, &dpm_off);
-+ resume_device_early(dev);
-+ }
-+}
-+
-+/**
-+ * device_power_up - Turn on all devices that need special attention.
-+ *
-+ * Power on system devices, then devices that required we shut them down
-+ * with interrupts disabled.
-+ *
-+ * Must be called with interrupts disabled.
-+ */
-+void device_power_up(void)
-+{
-+ sysdev_resume();
-+ dpm_power_up();
-+}
-+EXPORT_SYMBOL_GPL(device_power_up);
-+
-+/**
-+ * resume_device - Restore state for one device.
-+ * @dev: Device.
-+ *
-+ */
-+static int resume_device(struct device *dev)
-+{
-+ int error = 0;
-+
-+ TRACE_DEVICE(dev);
-+ TRACE_RESUME(0);
-
- if (dev->bus && dev->bus->resume) {
- dev_dbg(dev,"resuming\n");
-@@ -92,126 +244,94 @@ static int resume_device(struct device * dev)
- error = dev->class->resume(dev);
- }
-
-- up(&dev->sem);
+-static void blk_unplug_work(struct work_struct *work);
+-static void blk_unplug_timeout(unsigned long data);
+-static void drive_stat_acct(struct request *rq, int new_io);
+-static void init_request_from_bio(struct request *req, struct bio *bio);
+-static int __make_request(struct request_queue *q, struct bio *bio);
+-static struct io_context *current_io_context(gfp_t gfp_flags, int node);
+-static void blk_recalc_rq_segments(struct request *rq);
+-static void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
+- struct bio *bio);
-
- TRACE_RESUME(error);
- return error;
- }
-
+-/*
+- * For the allocated request tables
+- */
+-static struct kmem_cache *request_cachep;
-
--static int resume_device_early(struct device * dev)
--{
-- int error = 0;
+-/*
+- * For queue allocation
+- */
+-static struct kmem_cache *requestq_cachep;
-
-- TRACE_DEVICE(dev);
-- TRACE_RESUME(0);
-- if (dev->bus && dev->bus->resume_early) {
-- dev_dbg(dev,"EARLY resume\n");
-- error = dev->bus->resume_early(dev);
-- }
-- TRACE_RESUME(error);
-- return error;
--}
+-/*
+- * For io context allocations
+- */
+-static struct kmem_cache *iocontext_cachep;
-
-/*
-- * Resume the devices that have either not gone through
-- * the late suspend, or that did go through it but also
-- * went through the early resume
-+/**
-+ * dpm_resume - Resume every device.
-+ *
-+ * Resume the devices that have either not gone through
-+ * the late suspend, or that did go through it but also
-+ * went through the early resume.
-+ *
-+ * Take devices from the dpm_off_list, resume them,
-+ * and put them on the dpm_locked list.
- */
- static void dpm_resume(void)
- {
- mutex_lock(&dpm_list_mtx);
- while(!list_empty(&dpm_off)) {
-- struct list_head * entry = dpm_off.next;
-- struct device * dev = to_device(entry);
+- * Controlling structure to kblockd
+- */
+-static struct workqueue_struct *kblockd_workqueue;
-
-- get_device(dev);
-- list_move_tail(entry, &dpm_active);
-+ struct list_head *entry = dpm_off.next;
-+ struct device *dev = to_device(entry);
-
-+ list_move_tail(entry, &dpm_locked);
- mutex_unlock(&dpm_list_mtx);
- resume_device(dev);
- mutex_lock(&dpm_list_mtx);
-- put_device(dev);
- }
- mutex_unlock(&dpm_list_mtx);
- }
-
+-unsigned long blk_max_low_pfn, blk_max_pfn;
-
- /**
-- * device_resume - Restore state of each device in system.
-+ * unlock_all_devices - Release each device's semaphore
- *
-- * Walk the dpm_off list, remove each entry, resume the device,
-- * then add it to the dpm_active list.
-+ * Go through the dpm_off list. Put each device on the dpm_active
-+ * list and unlock it.
- */
+-EXPORT_SYMBOL(blk_max_low_pfn);
+-EXPORT_SYMBOL(blk_max_pfn);
-
--void device_resume(void)
-+static void unlock_all_devices(void)
- {
-- might_sleep();
-- mutex_lock(&dpm_mtx);
-- dpm_resume();
-- mutex_unlock(&dpm_mtx);
+-static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
+-
+-/* Amount of time in which a process may batch requests */
+-#define BLK_BATCH_TIME (HZ/50UL)
+-
+-/* Number of requests a "batching" process may submit */
+-#define BLK_BATCH_REQ 32
+-
+-/*
+- * Return the threshold (number of used requests) at which the queue is
+- * considered to be congested. It include a little hysteresis to keep the
+- * context switch rate down.
+- */
+-static inline int queue_congestion_on_threshold(struct request_queue *q)
+-{
+- return q->nr_congestion_on;
-}
-
--EXPORT_SYMBOL_GPL(device_resume);
-+ mutex_lock(&dpm_list_mtx);
-+ while (!list_empty(&dpm_locked)) {
-+ struct list_head *entry = dpm_locked.prev;
-+ struct device *dev = to_device(entry);
-
-+ list_move(entry, &dpm_active);
-+ up(&dev->sem);
-+ }
-+ mutex_unlock(&dpm_list_mtx);
-+}
-
- /**
-- * dpm_power_up - Power on some devices.
-- *
-- * Walk the dpm_off_irq list and power each device up. This
-- * is used for devices that required they be powered down with
-- * interrupts disabled. As devices are powered on, they are moved
-- * to the dpm_active list.
-+ * unregister_dropped_devices - Unregister devices scheduled for removal
- *
-- * Interrupts must be disabled when calling this.
-+ * Unregister all devices on the dpm_destroy list.
- */
+-/*
+- * The threshold at which a queue is considered to be uncongested
+- */
+-static inline int queue_congestion_off_threshold(struct request_queue *q)
+-{
+- return q->nr_congestion_off;
+-}
-
--static void dpm_power_up(void)
-+static void unregister_dropped_devices(void)
- {
-- while(!list_empty(&dpm_off_irq)) {
-- struct list_head * entry = dpm_off_irq.next;
-- struct device * dev = to_device(entry);
-+ mutex_lock(&dpm_list_mtx);
-+ while (!list_empty(&dpm_destroy)) {
-+ struct list_head *entry = dpm_destroy.next;
-+ struct device *dev = to_device(entry);
-
-- list_move_tail(entry, &dpm_off);
-- resume_device_early(dev);
-+ up(&dev->sem);
-+ mutex_unlock(&dpm_list_mtx);
-+ /* This also removes the device from the list */
-+ device_unregister(dev);
-+ mutex_lock(&dpm_list_mtx);
- }
-+ mutex_unlock(&dpm_list_mtx);
- }
-
+-static void blk_queue_congestion_threshold(struct request_queue *q)
+-{
+- int nr;
-
- /**
-- * device_power_up - Turn on all devices that need special attention.
-+ * device_resume - Restore state of each device in system.
- *
-- * Power on system devices then devices that required we shut them down
-- * with interrupts disabled.
-- * Called with interrupts disabled.
-+ * Resume all the devices, unlock them all, and allow new
-+ * devices to be registered once again.
- */
+- nr = q->nr_requests - (q->nr_requests / 8) + 1;
+- if (nr > q->nr_requests)
+- nr = q->nr_requests;
+- q->nr_congestion_on = nr;
-
--void device_power_up(void)
-+void device_resume(void)
- {
-- sysdev_resume();
-- dpm_power_up();
-+ might_sleep();
-+ dpm_resume();
-+ unlock_all_devices();
-+ unregister_dropped_devices();
-+ up_write(&pm_sleep_rwsem);
- }
+- nr = q->nr_requests - (q->nr_requests / 8) - (q->nr_requests / 16) - 1;
+- if (nr < 1)
+- nr = 1;
+- q->nr_congestion_off = nr;
+-}
-
--EXPORT_SYMBOL_GPL(device_power_up);
-+EXPORT_SYMBOL_GPL(device_resume);
-
-
- /*------------------------- Suspend routines -------------------------*/
-
--/*
-- * The entries in the dpm_active list are in a depth first order, simply
-- * because children are guaranteed to be discovered after parents, and
-- * are inserted at the back of the list on discovery.
+-/**
+- * blk_get_backing_dev_info - get the address of a queue's backing_dev_info
+- * @bdev: device
- *
-- * All list on the suspend path are done in reverse order, so we operate
-- * on the leaves of the device tree (or forests, depending on how you want
-- * to look at it ;) first. As nodes are removed from the back of the list,
-- * they are inserted into the front of their destintation lists.
+- * Locates the passed device's request queue and returns the address of its
+- * backing_dev_info
- *
-- * Things are the reverse on the resume path - iterations are done in
-- * forward order, and nodes are inserted at the back of their destination
-- * lists. This way, the ancestors will be accessed before their descendents.
+- * Will return NULL if the request queue cannot be located.
- */
+-struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
+-{
+- struct backing_dev_info *ret = NULL;
+- struct request_queue *q = bdev_get_queue(bdev);
-
- static inline char *suspend_verb(u32 event)
- {
- switch (event) {
-@@ -222,7 +342,6 @@ static inline char *suspend_verb(u32 event)
- }
- }
-
+- if (q)
+- ret = &q->backing_dev_info;
+- return ret;
+-}
+-EXPORT_SYMBOL(blk_get_backing_dev_info);
-
- static void
- suspend_device_dbg(struct device *dev, pm_message_t state, char *info)
- {
-@@ -232,16 +351,73 @@ suspend_device_dbg(struct device *dev, pm_message_t state, char *info)
- }
-
- /**
-- * suspend_device - Save state of one device.
-+ * suspend_device_late - Shut down one device (late suspend).
- * @dev: Device.
- * @state: Power state device is entering.
-+ *
-+ * This is called with interrupts off and only a single CPU running.
- */
-+static int suspend_device_late(struct device *dev, pm_message_t state)
-+{
-+ int error = 0;
-
--static int suspend_device(struct device * dev, pm_message_t state)
-+ if (dev->bus && dev->bus->suspend_late) {
-+ suspend_device_dbg(dev, state, "LATE ");
-+ error = dev->bus->suspend_late(dev, state);
-+ suspend_report_result(dev->bus->suspend_late, error);
-+ }
-+ return error;
-+}
-+
-+/**
-+ * device_power_down - Shut down special devices.
-+ * @state: Power state to enter.
-+ *
-+ * Power down devices that require interrupts to be disabled
-+ * and move them from the dpm_off list to the dpm_off_irq list.
-+ * Then power down system devices.
-+ *
-+ * Must be called with interrupts disabled and only one CPU running.
-+ */
-+int device_power_down(pm_message_t state)
-+{
-+ int error = 0;
-+
-+ while (!list_empty(&dpm_off)) {
-+ struct list_head *entry = dpm_off.prev;
-+ struct device *dev = to_device(entry);
-+
-+ list_del_init(&dev->power.entry);
-+ error = suspend_device_late(dev, state);
-+ if (error) {
-+ printk(KERN_ERR "Could not power down device %s: "
-+ "error %d\n",
-+ kobject_name(&dev->kobj), error);
-+ if (list_empty(&dev->power.entry))
-+ list_add(&dev->power.entry, &dpm_off);
-+ break;
-+ }
-+ if (list_empty(&dev->power.entry))
-+ list_add(&dev->power.entry, &dpm_off_irq);
-+ }
-+
-+ if (!error)
-+ error = sysdev_suspend(state);
-+ if (error)
-+ dpm_power_up();
-+ return error;
-+}
-+EXPORT_SYMBOL_GPL(device_power_down);
-+
-+/**
-+ * suspend_device - Save state of one device.
-+ * @dev: Device.
-+ * @state: Power state device is entering.
-+ */
-+int suspend_device(struct device *dev, pm_message_t state)
- {
- int error = 0;
-
-- down(&dev->sem);
- if (dev->power.power_state.event) {
- dev_dbg(dev, "PM: suspend %d-->%d\n",
- dev->power.power_state.event, state.event);
-@@ -264,123 +440,105 @@ static int suspend_device(struct device * dev, pm_message_t state)
- error = dev->bus->suspend(dev, state);
- suspend_report_result(dev->bus->suspend, error);
- }
-- up(&dev->sem);
-- return error;
+-/**
+- * blk_queue_prep_rq - set a prepare_request function for queue
+- * @q: queue
+- * @pfn: prepare_request function
+- *
+- * It's possible for a queue to register a prepare_request callback which
+- * is invoked before the request is handed to the request_fn. The goal of
+- * the function is to prepare a request for I/O, it can be used to build a
+- * cdb from the request data for instance.
+- *
+- */
+-void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn)
+-{
+- q->prep_rq_fn = pfn;
-}
-
+-EXPORT_SYMBOL(blk_queue_prep_rq);
-
--/*
-- * This is called with interrupts off, only a single CPU
-- * running. We can't acquire a mutex or semaphore (and we don't
-- * need the protection)
+-/**
+- * blk_queue_merge_bvec - set a merge_bvec function for queue
+- * @q: queue
+- * @mbfn: merge_bvec_fn
+- *
+- * Usually queues have static limitations on the max sectors or segments that
+- * we can put in a request. Stacking drivers may have some settings that
+- * are dynamic, and thus we have to query the queue whether it is ok to
+- * add a new bio_vec to a bio at a given offset or not. If the block device
+- * has such limitations, it needs to register a merge_bvec_fn to control
+- * the size of bio's sent to it. Note that a block device *must* allow a
+- * single page to be added to an empty bio. The block device driver may want
+- * to use the bio_split() function to deal with these bio's. By default
+- * no merge_bvec_fn is defined for a queue, and only the fixed limits are
+- * honored.
- */
--static int suspend_device_late(struct device *dev, pm_message_t state)
+-void blk_queue_merge_bvec(struct request_queue *q, merge_bvec_fn *mbfn)
-{
-- int error = 0;
+- q->merge_bvec_fn = mbfn;
+-}
-
-- if (dev->bus && dev->bus->suspend_late) {
-- suspend_device_dbg(dev, state, "LATE ");
-- error = dev->bus->suspend_late(dev, state);
-- suspend_report_result(dev->bus->suspend_late, error);
-- }
- return error;
- }
-
- /**
-- * device_suspend - Save state and stop all devices in system.
-- * @state: Power state to put each device in.
-+ * dpm_suspend - Suspend every device.
-+ * @state: Power state to put each device in.
- *
-- * Walk the dpm_active list, call ->suspend() for each device, and move
-- * it to the dpm_off list.
-+ * Walk the dpm_locked list. Suspend each device and move it
-+ * to the dpm_off list.
- *
- * (For historical reasons, if it returns -EAGAIN, that used to mean
- * that the device would be called again with interrupts disabled.
- * These days, we use the "suspend_late()" callback for that, so we
- * print a warning and consider it an error).
-- *
-- * If we get a different error, try and back out.
-- *
-- * If we hit a failure with any of the devices, call device_resume()
-- * above to bring the suspended devices back to life.
-- *
- */
+-EXPORT_SYMBOL(blk_queue_merge_bvec);
-
--int device_suspend(pm_message_t state)
-+static int dpm_suspend(pm_message_t state)
- {
- int error = 0;
-
-- might_sleep();
-- mutex_lock(&dpm_mtx);
- mutex_lock(&dpm_list_mtx);
-- while (!list_empty(&dpm_active) && error == 0) {
-- struct list_head * entry = dpm_active.prev;
-- struct device * dev = to_device(entry);
-+ while (!list_empty(&dpm_locked)) {
-+ struct list_head *entry = dpm_locked.prev;
-+ struct device *dev = to_device(entry);
-
-- get_device(dev);
-+ list_del_init(&dev->power.entry);
- mutex_unlock(&dpm_list_mtx);
+-void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn)
+-{
+- q->softirq_done_fn = fn;
+-}
-
- error = suspend_device(dev, state);
+-EXPORT_SYMBOL(blk_queue_softirq_done);
-
-- mutex_lock(&dpm_list_mtx);
+-/**
+- * blk_queue_make_request - define an alternate make_request function for a device
+- * @q: the request queue for the device to be affected
+- * @mfn: the alternate make_request function
+- *
+- * Description:
+- * The normal way for &struct bios to be passed to a device
+- * driver is for them to be collected into requests on a request
+- * queue, and then to allow the device driver to select requests
+- * off that queue when it is ready. This works well for many block
+- * devices. However some block devices (typically virtual devices
+- * such as md or lvm) do not benefit from the processing on the
+- * request queue, and are served best by having the requests passed
+- * directly to them. This can be achieved by providing a function
+- * to blk_queue_make_request().
+- *
+- * Caveat:
+- * The driver that does this *must* be able to deal appropriately
+- * with buffers in "highmemory". This can be accomplished by either calling
+- * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
+- * blk_queue_bounce() to create a buffer in normal memory.
+- **/
+-void blk_queue_make_request(struct request_queue * q, make_request_fn * mfn)
+-{
+- /*
+- * set defaults
+- */
+- q->nr_requests = BLKDEV_MAX_RQ;
+- blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
+- blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
+- q->make_request_fn = mfn;
+- q->backing_dev_info.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
+- q->backing_dev_info.state = 0;
+- q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
+- blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
+- blk_queue_hardsect_size(q, 512);
+- blk_queue_dma_alignment(q, 511);
+- blk_queue_congestion_threshold(q);
+- q->nr_batching = BLK_BATCH_REQ;
-
-- /* Check if the device got removed */
-- if (!list_empty(&dev->power.entry)) {
-- /* Move it to the dpm_off list */
-- if (!error)
-- list_move(&dev->power.entry, &dpm_off);
-- }
-- if (error)
-+ if (error) {
- printk(KERN_ERR "Could not suspend device %s: "
-- "error %d%s\n",
-- kobject_name(&dev->kobj), error,
-- error == -EAGAIN ? " (please convert to suspend_late)" : "");
-- put_device(dev);
-+ "error %d%s\n",
-+ kobject_name(&dev->kobj),
-+ error,
-+ (error == -EAGAIN ?
-+ " (please convert to suspend_late)" :
-+ ""));
-+ mutex_lock(&dpm_list_mtx);
-+ if (list_empty(&dev->power.entry))
-+ list_add(&dev->power.entry, &dpm_locked);
-+ mutex_unlock(&dpm_list_mtx);
-+ break;
-+ }
-+ mutex_lock(&dpm_list_mtx);
-+ if (list_empty(&dev->power.entry))
-+ list_add(&dev->power.entry, &dpm_off);
- }
- mutex_unlock(&dpm_list_mtx);
-- if (error)
-- dpm_resume();
-
-- mutex_unlock(&dpm_mtx);
- return error;
- }
-
--EXPORT_SYMBOL_GPL(device_suspend);
+- q->unplug_thresh = 4; /* hmm */
+- q->unplug_delay = (3 * HZ) / 1000; /* 3 milliseconds */
+- if (q->unplug_delay == 0)
+- q->unplug_delay = 1;
-
- /**
-- * device_power_down - Shut down special devices.
-- * @state: Power state to enter.
-+ * lock_all_devices - Acquire every device's semaphore
- *
-- * Walk the dpm_off_irq list, calling ->power_down() for each device that
-- * couldn't power down the device with interrupts enabled. When we're
-- * done, power down system devices.
-+ * Go through the dpm_active list. Carefully lock each device's
-+ * semaphore and put it in on the dpm_locked list.
- */
+- INIT_WORK(&q->unplug_work, blk_unplug_work);
-
--int device_power_down(pm_message_t state)
-+static void lock_all_devices(void)
- {
-- int error = 0;
-- struct device * dev;
-+ mutex_lock(&dpm_list_mtx);
-+ while (!list_empty(&dpm_active)) {
-+ struct list_head *entry = dpm_active.next;
-+ struct device *dev = to_device(entry);
-
-- while (!list_empty(&dpm_off)) {
-- struct list_head * entry = dpm_off.prev;
-+ /* Required locking order is dev->sem first,
-+ * then dpm_list_mutex. Hence this awkward code.
-+ */
-+ get_device(dev);
-+ mutex_unlock(&dpm_list_mtx);
-+ down(&dev->sem);
-+ mutex_lock(&dpm_list_mtx);
-
-- dev = to_device(entry);
-- error = suspend_device_late(dev, state);
-- if (error)
-- goto Error;
-- list_move(&dev->power.entry, &dpm_off_irq);
-+ if (list_empty(entry))
-+ up(&dev->sem); /* Device was removed */
-+ else
-+ list_move_tail(entry, &dpm_locked);
-+ put_device(dev);
- }
-+ mutex_unlock(&dpm_list_mtx);
-+}
-+
-+/**
-+ * device_suspend - Save state and stop all devices in system.
-+ *
-+ * Prevent new devices from being registered, then lock all devices
-+ * and suspend them.
-+ */
-+int device_suspend(pm_message_t state)
-+{
-+ int error;
-
-- error = sysdev_suspend(state);
-- Done:
-+ might_sleep();
-+ down_write(&pm_sleep_rwsem);
-+ lock_all_devices();
-+ error = dpm_suspend(state);
-+ if (error)
-+ device_resume();
- return error;
-- Error:
-- printk(KERN_ERR "Could not power down device %s: "
-- "error %d\n", kobject_name(&dev->kobj), error);
-- dpm_power_up();
-- goto Done;
- }
+- q->unplug_timer.function = blk_unplug_timeout;
+- q->unplug_timer.data = (unsigned long)q;
-
--EXPORT_SYMBOL_GPL(device_power_down);
-+EXPORT_SYMBOL_GPL(device_suspend);
-
- void __suspend_report_result(const char *function, void *fn, int ret)
- {
-diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
-index 379da4e..6f0dfca 100644
---- a/drivers/base/power/power.h
-+++ b/drivers/base/power/power.h
-@@ -1,10 +1,3 @@
--/*
-- * shutdown.c
-- */
+- /*
+- * by default assume old behaviour and bounce for any highmem page
+- */
+- blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
+-}
-
--extern void device_shutdown(void);
+-EXPORT_SYMBOL(blk_queue_make_request);
-
+-static void rq_init(struct request_queue *q, struct request *rq)
+-{
+- INIT_LIST_HEAD(&rq->queuelist);
+- INIT_LIST_HEAD(&rq->donelist);
-
- #ifdef CONFIG_PM_SLEEP
-
- /*
-@@ -20,6 +13,9 @@ static inline struct device *to_device(struct list_head *entry)
-
- extern void device_pm_add(struct device *);
- extern void device_pm_remove(struct device *);
-+extern void device_pm_schedule_removal(struct device *);
-+extern int pm_sleep_lock(void);
-+extern void pm_sleep_unlock(void);
-
- #else /* CONFIG_PM_SLEEP */
-
-@@ -32,6 +28,15 @@ static inline void device_pm_remove(struct device *dev)
- {
- }
-
-+static inline int pm_sleep_lock(void)
-+{
-+ return 0;
-+}
-+
-+static inline void pm_sleep_unlock(void)
-+{
-+}
-+
- #endif
-
- #ifdef CONFIG_PM
-diff --git a/drivers/base/power/shutdown.c b/drivers/base/power/shutdown.c
-deleted file mode 100644
-index 56e8eaa..0000000
---- a/drivers/base/power/shutdown.c
-+++ /dev/null
-@@ -1,48 +0,0 @@
--/*
-- * shutdown.c - power management functions for the device tree.
-- *
-- * Copyright (c) 2002-3 Patrick Mochel
-- * 2002-3 Open Source Development Lab
+- rq->errors = 0;
+- rq->bio = rq->biotail = NULL;
+- INIT_HLIST_NODE(&rq->hash);
+- RB_CLEAR_NODE(&rq->rb_node);
+- rq->ioprio = 0;
+- rq->buffer = NULL;
+- rq->ref_count = 1;
+- rq->q = q;
+- rq->special = NULL;
+- rq->data_len = 0;
+- rq->data = NULL;
+- rq->nr_phys_segments = 0;
+- rq->sense = NULL;
+- rq->end_io = NULL;
+- rq->end_io_data = NULL;
+- rq->completion_data = NULL;
+- rq->next_rq = NULL;
+-}
+-
+-/**
+- * blk_queue_ordered - does this queue support ordered writes
+- * @q: the request queue
+- * @ordered: one of QUEUE_ORDERED_*
+- * @prepare_flush_fn: rq setup helper for cache flush ordered writes
- *
-- * This file is released under the GPLv2
+- * Description:
+- * For journalled file systems, doing ordered writes on a commit
+- * block instead of explicitly doing wait_on_buffer (which is bad
+- * for performance) can be a big win. Block drivers supporting this
+- * feature should call this function and indicate so.
- *
-- */
+- **/
+-int blk_queue_ordered(struct request_queue *q, unsigned ordered,
+- prepare_flush_fn *prepare_flush_fn)
+-{
+- if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) &&
+- prepare_flush_fn == NULL) {
+- printk(KERN_ERR "blk_queue_ordered: prepare_flush_fn required\n");
+- return -EINVAL;
+- }
-
--#include <linux/device.h>
--#include <asm/semaphore.h>
+- if (ordered != QUEUE_ORDERED_NONE &&
+- ordered != QUEUE_ORDERED_DRAIN &&
+- ordered != QUEUE_ORDERED_DRAIN_FLUSH &&
+- ordered != QUEUE_ORDERED_DRAIN_FUA &&
+- ordered != QUEUE_ORDERED_TAG &&
+- ordered != QUEUE_ORDERED_TAG_FLUSH &&
+- ordered != QUEUE_ORDERED_TAG_FUA) {
+- printk(KERN_ERR "blk_queue_ordered: bad value %d\n", ordered);
+- return -EINVAL;
+- }
-
--#include "../base.h"
--#include "power.h"
+- q->ordered = ordered;
+- q->next_ordered = ordered;
+- q->prepare_flush_fn = prepare_flush_fn;
-
--#define to_dev(node) container_of(node, struct device, kobj.entry)
+- return 0;
+-}
-
+-EXPORT_SYMBOL(blk_queue_ordered);
-
--/**
-- * We handle system devices differently - we suspend and shut them
-- * down last and resume them first. That way, we don't do anything stupid like
-- * shutting down the interrupt controller before any devices..
-- *
-- * Note that there are not different stages for power management calls -
-- * they only get one called once when interrupts are disabled.
+-/*
+- * Cache flushing for ordered writes handling
- */
+-inline unsigned blk_ordered_cur_seq(struct request_queue *q)
+-{
+- if (!q->ordseq)
+- return 0;
+- return 1 << ffz(q->ordseq);
+-}
-
--
--/**
-- * device_shutdown - call ->shutdown() on each device to shutdown.
-- */
--void device_shutdown(void)
+-unsigned blk_ordered_req_seq(struct request *rq)
-{
-- struct device * dev, *devn;
+- struct request_queue *q = rq->q;
-
-- list_for_each_entry_safe_reverse(dev, devn, &devices_subsys.list,
-- kobj.entry) {
-- if (dev->bus && dev->bus->shutdown) {
-- dev_dbg(dev, "shutdown\n");
-- dev->bus->shutdown(dev);
-- } else if (dev->driver && dev->driver->shutdown) {
-- dev_dbg(dev, "shutdown\n");
-- dev->driver->shutdown(dev);
-- }
-- }
+- BUG_ON(q->ordseq == 0);
+-
+- if (rq == &q->pre_flush_rq)
+- return QUEUE_ORDSEQ_PREFLUSH;
+- if (rq == &q->bar_rq)
+- return QUEUE_ORDSEQ_BAR;
+- if (rq == &q->post_flush_rq)
+- return QUEUE_ORDSEQ_POSTFLUSH;
+-
+- /*
+- * !fs requests don't need to follow barrier ordering. Always
+- * put them at the front. This fixes the following deadlock.
+- *
+- * http://thread.gmane.org/gmane.linux.kernel/537473
+- */
+- if (!blk_fs_request(rq))
+- return QUEUE_ORDSEQ_DRAIN;
+-
+- if ((rq->cmd_flags & REQ_ORDERED_COLOR) ==
+- (q->orig_bar_rq->cmd_flags & REQ_ORDERED_COLOR))
+- return QUEUE_ORDSEQ_DRAIN;
+- else
+- return QUEUE_ORDSEQ_DONE;
-}
-
-diff --git a/drivers/base/sys.c b/drivers/base/sys.c
-index ac7ff6d..2f79c55 100644
---- a/drivers/base/sys.c
-+++ b/drivers/base/sys.c
-@@ -25,8 +25,6 @@
-
- #include "base.h"
-
--extern struct kset devices_subsys;
+-void blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
+-{
+- struct request *rq;
+- int uptodate;
-
- #define to_sysdev(k) container_of(k, struct sys_device, kobj)
- #define to_sysdev_attr(a) container_of(a, struct sysdev_attribute, attr)
-
-@@ -128,18 +126,17 @@ void sysdev_class_remove_file(struct sysdev_class *c,
- }
- EXPORT_SYMBOL_GPL(sysdev_class_remove_file);
-
--/*
-- * declare system_subsys
-- */
--static decl_subsys(system, &ktype_sysdev_class, NULL);
-+static struct kset *system_kset;
-
- int sysdev_class_register(struct sysdev_class * cls)
- {
- pr_debug("Registering sysdev class '%s'\n",
- kobject_name(&cls->kset.kobj));
- INIT_LIST_HEAD(&cls->drivers);
-- cls->kset.kobj.parent = &system_subsys.kobj;
-- cls->kset.kobj.kset = &system_subsys;
-+ cls->kset.kobj.parent = &system_kset->kobj;
-+ cls->kset.kobj.ktype = &ktype_sysdev_class;
-+ cls->kset.kobj.kset = system_kset;
-+ kobject_set_name(&cls->kset.kobj, cls->name);
- return kset_register(&cls->kset);
- }
-
-@@ -228,20 +225,15 @@ int sysdev_register(struct sys_device * sysdev)
- if (!cls)
- return -EINVAL;
-
-+ pr_debug("Registering sys device '%s'\n", kobject_name(&sysdev->kobj));
-+
- /* Make sure the kset is set */
- sysdev->kobj.kset = &cls->kset;
-
-- /* But make sure we point to the right type for sysfs translation */
-- sysdev->kobj.ktype = &ktype_sysdev;
-- error = kobject_set_name(&sysdev->kobj, "%s%d",
-- kobject_name(&cls->kset.kobj), sysdev->id);
-- if (error)
-- return error;
+- if (error && !q->orderr)
+- q->orderr = error;
-
-- pr_debug("Registering sys device '%s'\n", kobject_name(&sysdev->kobj));
+- BUG_ON(q->ordseq & seq);
+- q->ordseq |= seq;
-
- /* Register the object */
-- error = kobject_register(&sysdev->kobj);
-+ error = kobject_init_and_add(&sysdev->kobj, &ktype_sysdev, NULL,
-+ "%s%d", kobject_name(&cls->kset.kobj),
-+ sysdev->id);
-
- if (!error) {
- struct sysdev_driver * drv;
-@@ -258,6 +250,7 @@ int sysdev_register(struct sys_device * sysdev)
- }
- mutex_unlock(&sysdev_drivers_lock);
- }
-+ kobject_uevent(&sysdev->kobj, KOBJ_ADD);
- return error;
- }
-
-@@ -272,7 +265,7 @@ void sysdev_unregister(struct sys_device * sysdev)
- }
- mutex_unlock(&sysdev_drivers_lock);
-
-- kobject_unregister(&sysdev->kobj);
-+ kobject_put(&sysdev->kobj);
- }
-
-
-@@ -298,8 +291,7 @@ void sysdev_shutdown(void)
- pr_debug("Shutting Down System Devices\n");
-
- mutex_lock(&sysdev_drivers_lock);
-- list_for_each_entry_reverse(cls, &system_subsys.list,
-- kset.kobj.entry) {
-+ list_for_each_entry_reverse(cls, &system_kset->list, kset.kobj.entry) {
- struct sys_device * sysdev;
-
- pr_debug("Shutting down type '%s':\n",
-@@ -361,9 +353,7 @@ int sysdev_suspend(pm_message_t state)
-
- pr_debug("Suspending System Devices\n");
-
-- list_for_each_entry_reverse(cls, &system_subsys.list,
-- kset.kobj.entry) {
+- if (blk_ordered_cur_seq(q) != QUEUE_ORDSEQ_DONE)
+- return;
-
-+ list_for_each_entry_reverse(cls, &system_kset->list, kset.kobj.entry) {
- pr_debug("Suspending type '%s':\n",
- kobject_name(&cls->kset.kobj));
-
-@@ -414,8 +404,7 @@ aux_driver:
- }
-
- /* resume other classes */
-- list_for_each_entry_continue(cls, &system_subsys.list,
-- kset.kobj.entry) {
-+ list_for_each_entry_continue(cls, &system_kset->list, kset.kobj.entry) {
- list_for_each_entry(err_dev, &cls->kset.list, kobj.entry) {
- pr_debug(" %s\n", kobject_name(&err_dev->kobj));
- __sysdev_resume(err_dev);
-@@ -440,7 +429,7 @@ int sysdev_resume(void)
-
- pr_debug("Resuming System Devices\n");
-
-- list_for_each_entry(cls, &system_subsys.list, kset.kobj.entry) {
-+ list_for_each_entry(cls, &system_kset->list, kset.kobj.entry) {
- struct sys_device * sysdev;
-
- pr_debug("Resuming type '%s':\n",
-@@ -458,8 +447,10 @@ int sysdev_resume(void)
-
- int __init system_bus_init(void)
- {
-- system_subsys.kobj.parent = &devices_subsys.kobj;
-- return subsystem_register(&system_subsys);
-+ system_kset = kset_create_and_add("system", NULL, &devices_kset->kobj);
-+ if (!system_kset)
-+ return -ENOMEM;
-+ return 0;
- }
-
- EXPORT_SYMBOL_GPL(sysdev_register);
-diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
-index 9030c37..cd03473 100644
---- a/drivers/block/DAC960.c
-+++ b/drivers/block/DAC960.c
-@@ -3455,19 +3455,12 @@ static inline bool DAC960_ProcessCompletedRequest(DAC960_Command_T *Command,
- bool SuccessfulIO)
- {
- struct request *Request = Command->Request;
-- int UpToDate;
+- /*
+- * Okay, sequence complete.
+- */
+- uptodate = 1;
+- if (q->orderr)
+- uptodate = q->orderr;
-
-- UpToDate = 0;
-- if (SuccessfulIO)
-- UpToDate = 1;
-+ int Error = SuccessfulIO ? 0 : -EIO;
-
- pci_unmap_sg(Command->Controller->PCIDevice, Command->cmd_sglist,
- Command->SegmentCount, Command->DmaDirection);
-
-- if (!end_that_request_first(Request, UpToDate, Command->BlockCount)) {
-- add_disk_randomness(Request->rq_disk);
-- end_that_request_last(Request, UpToDate);
+- q->ordseq = 0;
+- rq = q->orig_bar_rq;
-
-+ if (!__blk_end_request(Request, Error, Command->BlockCount << 9)) {
- if (Command->Completion) {
- complete(Command->Completion);
- Command->Completion = NULL;
-diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
-index 4d0119e..f212285 100644
---- a/drivers/block/Kconfig
-+++ b/drivers/block/Kconfig
-@@ -105,6 +105,17 @@ config PARIDE
- "MicroSolutions backpack protocol", "DataStor Commuter protocol"
- etc.).
-
-+config GDROM
-+ tristate "SEGA Dreamcast GD-ROM drive"
-+ depends on SH_DREAMCAST
-+ help
-+ A standard SEGA Dreamcast comes with a modified CD ROM drive called a
-+ "GD-ROM" by SEGA to signify it is capable of reading special disks
-+ with up to 1 GB of data. This drive will also read standard CD ROM
-+ disks. Select this option to access any disks in your GD ROM drive.
-+ Most users will want to say "Y" here.
-+ You can also build this as a module which will be called gdrom.ko
-+
- source "drivers/block/paride/Kconfig"
-
- config BLK_CPQ_DA
-diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
-index ad00b3d..826d123 100644
---- a/drivers/block/aoe/aoeblk.c
-+++ b/drivers/block/aoe/aoeblk.c
-@@ -15,8 +15,10 @@
-
- static struct kmem_cache *buf_pool_cache;
-
--static ssize_t aoedisk_show_state(struct gendisk * disk, char *page)
-+static ssize_t aoedisk_show_state(struct device *dev,
-+ struct device_attribute *attr, char *page)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
- struct aoedev *d = disk->private_data;
-
- return snprintf(page, PAGE_SIZE,
-@@ -26,50 +28,47 @@ static ssize_t aoedisk_show_state(struct gendisk * disk, char *page)
- (d->nopen && !(d->flags & DEVFL_UP)) ? ",closewait" : "");
- /* I'd rather see nopen exported so we can ditch closewait */
- }
--static ssize_t aoedisk_show_mac(struct gendisk * disk, char *page)
-+static ssize_t aoedisk_show_mac(struct device *dev,
-+ struct device_attribute *attr, char *page)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
- struct aoedev *d = disk->private_data;
-
- return snprintf(page, PAGE_SIZE, "%012llx\n",
- (unsigned long long)mac_addr(d->addr));
- }
--static ssize_t aoedisk_show_netif(struct gendisk * disk, char *page)
-+static ssize_t aoedisk_show_netif(struct device *dev,
-+ struct device_attribute *attr, char *page)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
- struct aoedev *d = disk->private_data;
-
- return snprintf(page, PAGE_SIZE, "%s\n", d->ifp->name);
- }
- /* firmware version */
--static ssize_t aoedisk_show_fwver(struct gendisk * disk, char *page)
-+static ssize_t aoedisk_show_fwver(struct device *dev,
-+ struct device_attribute *attr, char *page)
- {
-+ struct gendisk *disk = dev_to_disk(dev);
- struct aoedev *d = disk->private_data;
-
- return snprintf(page, PAGE_SIZE, "0x%04x\n", (unsigned int) d->fw_ver);
- }
-
--static struct disk_attribute disk_attr_state = {
-- .attr = {.name = "state", .mode = S_IRUGO },
-- .show = aoedisk_show_state
--};
--static struct disk_attribute disk_attr_mac = {
-- .attr = {.name = "mac", .mode = S_IRUGO },
-- .show = aoedisk_show_mac
--};
--static struct disk_attribute disk_attr_netif = {
-- .attr = {.name = "netif", .mode = S_IRUGO },
-- .show = aoedisk_show_netif
--};
--static struct disk_attribute disk_attr_fwver = {
-- .attr = {.name = "firmware-version", .mode = S_IRUGO },
-- .show = aoedisk_show_fwver
-+static DEVICE_ATTR(state, S_IRUGO, aoedisk_show_state, NULL);
-+static DEVICE_ATTR(mac, S_IRUGO, aoedisk_show_mac, NULL);
-+static DEVICE_ATTR(netif, S_IRUGO, aoedisk_show_netif, NULL);
-+static struct device_attribute dev_attr_firmware_version = {
-+ .attr = { .name = "firmware-version", .mode = S_IRUGO, .owner = THIS_MODULE },
-+ .show = aoedisk_show_fwver,
- };
-
- static struct attribute *aoe_attrs[] = {
-- &disk_attr_state.attr,
-- &disk_attr_mac.attr,
-- &disk_attr_netif.attr,
-- &disk_attr_fwver.attr,
-- NULL
-+ &dev_attr_state.attr,
-+ &dev_attr_mac.attr,
-+ &dev_attr_netif.attr,
-+ &dev_attr_firmware_version.attr,
-+ NULL,
- };
-
- static const struct attribute_group attr_group = {
-@@ -79,12 +78,12 @@ static const struct attribute_group attr_group = {
- static int
- aoedisk_add_sysfs(struct aoedev *d)
- {
-- return sysfs_create_group(&d->gd->kobj, &attr_group);
-+ return sysfs_create_group(&d->gd->dev.kobj, &attr_group);
- }
- void
- aoedisk_rm_sysfs(struct aoedev *d)
- {
-- sysfs_remove_group(&d->gd->kobj, &attr_group);
-+ sysfs_remove_group(&d->gd->dev.kobj, &attr_group);
- }
-
- static int
-diff --git a/drivers/block/aoe/aoechr.c b/drivers/block/aoe/aoechr.c
-index 39e563e..d5480e3 100644
---- a/drivers/block/aoe/aoechr.c
-+++ b/drivers/block/aoe/aoechr.c
-@@ -259,9 +259,8 @@ aoechr_init(void)
- return PTR_ERR(aoe_class);
- }
- for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
-- class_device_create(aoe_class, NULL,
-- MKDEV(AOE_MAJOR, chardevs[i].minor),
-- NULL, chardevs[i].name);
-+ device_create(aoe_class, NULL,
-+ MKDEV(AOE_MAJOR, chardevs[i].minor), chardevs[i].name);
-
- return 0;
- }
-@@ -272,7 +271,7 @@ aoechr_exit(void)
- int i;
-
- for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
-- class_device_destroy(aoe_class, MKDEV(AOE_MAJOR, chardevs[i].minor));
-+ device_destroy(aoe_class, MKDEV(AOE_MAJOR, chardevs[i].minor));
- class_destroy(aoe_class);
- unregister_chrdev(AOE_MAJOR, "aoechr");
- }
-diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
-index 509b649..855ce8e 100644
---- a/drivers/block/cciss.c
-+++ b/drivers/block/cciss.c
-@@ -1187,17 +1187,6 @@ static int cciss_ioctl(struct inode *inode, struct file *filep,
- }
- }
-
--static inline void complete_buffers(struct bio *bio, int status)
+- end_that_request_first(rq, uptodate, rq->hard_nr_sectors);
+- end_that_request_last(rq, uptodate);
+-}
+-
+-static void pre_flush_end_io(struct request *rq, int error)
-{
-- while (bio) {
-- struct bio *xbh = bio->bi_next;
+- elv_completed_request(rq->q, rq);
+- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_PREFLUSH, error);
+-}
-
-- bio->bi_next = NULL;
-- bio_endio(bio, status ? 0 : -EIO);
-- bio = xbh;
-- }
+-static void bar_end_io(struct request *rq, int error)
+-{
+- elv_completed_request(rq->q, rq);
+- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_BAR, error);
-}
-
- static void cciss_check_queues(ctlr_info_t *h)
- {
- int start_queue = h->next_to_run;
-@@ -1263,21 +1252,14 @@ static void cciss_softirq_done(struct request *rq)
- pci_unmap_page(h->pdev, temp64.val, cmd->SG[i].Len, ddir);
- }
-
-- complete_buffers(rq->bio, (rq->errors == 0));
+-static void post_flush_end_io(struct request *rq, int error)
+-{
+- elv_completed_request(rq->q, rq);
+- blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error);
+-}
-
-- if (blk_fs_request(rq)) {
-- const int rw = rq_data_dir(rq);
+-static void queue_flush(struct request_queue *q, unsigned which)
+-{
+- struct request *rq;
+- rq_end_io_fn *end_io;
-
-- disk_stat_add(rq->rq_disk, sectors[rw], rq->nr_sectors);
+- if (which == QUEUE_ORDERED_PREFLUSH) {
+- rq = &q->pre_flush_rq;
+- end_io = pre_flush_end_io;
+- } else {
+- rq = &q->post_flush_rq;
+- end_io = post_flush_end_io;
- }
-
- #ifdef CCISS_DEBUG
- printk("Done with %p\n", rq);
- #endif /* CCISS_DEBUG */
-
-- add_disk_randomness(rq->rq_disk);
-+ if (blk_end_request(rq, (rq->errors == 0) ? 0 : -EIO, blk_rq_bytes(rq)))
-+ BUG();
-+
- spin_lock_irqsave(&h->lock, flags);
-- end_that_request_last(rq, (rq->errors == 0));
- cmd_free(h, cmd, 1);
- cciss_check_queues(h);
- spin_unlock_irqrestore(&h->lock, flags);
-@@ -2542,9 +2524,7 @@ after_error_processing:
- resend_cciss_cmd(h, cmd);
- return;
- }
-- cmd->rq->data_len = 0;
- cmd->rq->completion_data = cmd;
-- blk_add_trace_rq(cmd->rq->q, cmd->rq, BLK_TA_COMPLETE);
- blk_complete_request(cmd->rq);
- }
-
-diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
-index c8132d9..6919918 100644
---- a/drivers/block/cpqarray.c
-+++ b/drivers/block/cpqarray.c
-@@ -167,7 +167,6 @@ static void start_io(ctlr_info_t *h);
-
- static inline void addQ(cmdlist_t **Qptr, cmdlist_t *c);
- static inline cmdlist_t *removeQ(cmdlist_t **Qptr, cmdlist_t *c);
--static inline void complete_buffers(struct bio *bio, int ok);
- static inline void complete_command(cmdlist_t *cmd, int timeout);
-
- static irqreturn_t do_ida_intr(int irq, void *dev_id);
-@@ -980,26 +979,13 @@ static void start_io(ctlr_info_t *h)
- }
- }
-
--static inline void complete_buffers(struct bio *bio, int ok)
+- rq->cmd_flags = REQ_HARDBARRIER;
+- rq_init(q, rq);
+- rq->elevator_private = NULL;
+- rq->elevator_private2 = NULL;
+- rq->rq_disk = q->bar_rq.rq_disk;
+- rq->end_io = end_io;
+- q->prepare_flush_fn(q, rq);
+-
+- elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
+-}
+-
+-static inline struct request *start_ordered(struct request_queue *q,
+- struct request *rq)
-{
-- struct bio *xbh;
+- q->orderr = 0;
+- q->ordered = q->next_ordered;
+- q->ordseq |= QUEUE_ORDSEQ_STARTED;
-
-- while (bio) {
-- xbh = bio->bi_next;
-- bio->bi_next = NULL;
--
-- bio_endio(bio, ok ? 0 : -EIO);
+- /*
+- * Prep proxy barrier request.
+- */
+- blkdev_dequeue_request(rq);
+- q->orig_bar_rq = rq;
+- rq = &q->bar_rq;
+- rq->cmd_flags = 0;
+- rq_init(q, rq);
+- if (bio_data_dir(q->orig_bar_rq->bio) == WRITE)
+- rq->cmd_flags |= REQ_RW;
+- if (q->ordered & QUEUE_ORDERED_FUA)
+- rq->cmd_flags |= REQ_FUA;
+- rq->elevator_private = NULL;
+- rq->elevator_private2 = NULL;
+- init_request_from_bio(rq, q->orig_bar_rq->bio);
+- rq->end_io = bar_end_io;
-
-- bio = xbh;
-- }
+- /*
+- * Queue ordered sequence. As we stack them at the head, we
+- * need to queue in reverse order. Note that we rely on that
+- * no fs request uses ELEVATOR_INSERT_FRONT and thus no fs
+- * request gets inbetween ordered sequence. If this request is
+- * an empty barrier, we don't need to do a postflush ever since
+- * there will be no data written between the pre and post flush.
+- * Hence a single flush will suffice.
+- */
+- if ((q->ordered & QUEUE_ORDERED_POSTFLUSH) && !blk_empty_barrier(rq))
+- queue_flush(q, QUEUE_ORDERED_POSTFLUSH);
+- else
+- q->ordseq |= QUEUE_ORDSEQ_POSTFLUSH;
+-
+- elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
+-
+- if (q->ordered & QUEUE_ORDERED_PREFLUSH) {
+- queue_flush(q, QUEUE_ORDERED_PREFLUSH);
+- rq = &q->pre_flush_rq;
+- } else
+- q->ordseq |= QUEUE_ORDSEQ_PREFLUSH;
+-
+- if ((q->ordered & QUEUE_ORDERED_TAG) || q->in_flight == 0)
+- q->ordseq |= QUEUE_ORDSEQ_DRAIN;
+- else
+- rq = NULL;
+-
+- return rq;
-}
- /*
- * Mark all buffers that cmd was responsible for
- */
- static inline void complete_command(cmdlist_t *cmd, int timeout)
- {
- struct request *rq = cmd->rq;
-- int ok=1;
-+ int error = 0;
- int i, ddir;
-
- if (cmd->req.hdr.rcode & RCODE_NONFATAL &&
-@@ -1011,16 +997,17 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
- if (cmd->req.hdr.rcode & RCODE_FATAL) {
- printk(KERN_WARNING "Fatal error on ida/c%dd%d\n",
- cmd->ctlr, cmd->hdr.unit);
-- ok = 0;
-+ error = -EIO;
- }
- if (cmd->req.hdr.rcode & RCODE_INVREQ) {
- printk(KERN_WARNING "Invalid request on ida/c%dd%d = (cmd=%x sect=%d cnt=%d sg=%d ret=%x)\n",
- cmd->ctlr, cmd->hdr.unit, cmd->req.hdr.cmd,
- cmd->req.hdr.blk, cmd->req.hdr.blk_cnt,
- cmd->req.hdr.sg_cnt, cmd->req.hdr.rcode);
-- ok = 0;
-+ error = -EIO;
- }
-- if (timeout) ok = 0;
-+ if (timeout)
-+ error = -EIO;
- /* unmap the DMA mapping for all the scatter gather elements */
- if (cmd->req.hdr.cmd == IDA_READ)
- ddir = PCI_DMA_FROMDEVICE;
-@@ -1030,18 +1017,9 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
- pci_unmap_page(hba[cmd->ctlr]->pci_dev, cmd->req.sg[i].addr,
- cmd->req.sg[i].size, ddir);
-
-- complete_buffers(rq->bio, ok);
-
-- if (blk_fs_request(rq)) {
-- const int rw = rq_data_dir(rq);
+-int blk_do_ordered(struct request_queue *q, struct request **rqp)
+-{
+- struct request *rq = *rqp;
+- const int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq);
-
-- disk_stat_add(rq->rq_disk, sectors[rw], rq->nr_sectors);
+- if (!q->ordseq) {
+- if (!is_barrier)
+- return 1;
+-
+- if (q->next_ordered != QUEUE_ORDERED_NONE) {
+- *rqp = start_ordered(q, rq);
+- return 1;
+- } else {
+- /*
+- * This can happen when the queue switches to
+- * ORDERED_NONE while this request is on it.
+- */
+- blkdev_dequeue_request(rq);
+- end_that_request_first(rq, -EOPNOTSUPP,
+- rq->hard_nr_sectors);
+- end_that_request_last(rq, -EOPNOTSUPP);
+- *rqp = NULL;
+- return 0;
+- }
- }
-
-- add_disk_randomness(rq->rq_disk);
+- /*
+- * Ordered sequence in progress
+- */
-
- DBGPX(printk("Done with %p\n", rq););
-- end_that_request_last(rq, ok ? 1 : -EIO);
-+ if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
-+ BUG();
- }
-
- /*
-diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
-index 639ed14..32c79a5 100644
---- a/drivers/block/floppy.c
-+++ b/drivers/block/floppy.c
-@@ -2287,21 +2287,19 @@ static int do_format(int drive, struct format_descr *tmp_format_req)
- * =============================
- */
-
--static void floppy_end_request(struct request *req, int uptodate)
-+static void floppy_end_request(struct request *req, int error)
- {
- unsigned int nr_sectors = current_count_sectors;
-+ unsigned int drive = (unsigned long)req->rq_disk->private_data;
-
- /* current_count_sectors can be zero if transfer failed */
-- if (!uptodate)
-+ if (error)
- nr_sectors = req->current_nr_sectors;
-- if (end_that_request_first(req, uptodate, nr_sectors))
-+ if (__blk_end_request(req, error, nr_sectors << 9))
- return;
-- add_disk_randomness(req->rq_disk);
-- floppy_off((long)req->rq_disk->private_data);
-- blkdev_dequeue_request(req);
-- end_that_request_last(req, uptodate);
-
- /* We're done with the request */
-+ floppy_off(drive);
- current_req = NULL;
- }
-
-@@ -2332,7 +2330,7 @@ static void request_done(int uptodate)
-
- /* unlock chained buffers */
- spin_lock_irqsave(q->queue_lock, flags);
-- floppy_end_request(req, 1);
-+ floppy_end_request(req, 0);
- spin_unlock_irqrestore(q->queue_lock, flags);
- } else {
- if (rq_data_dir(req) == WRITE) {
-@@ -2346,7 +2344,7 @@ static void request_done(int uptodate)
- DRWE->last_error_generation = DRS->generation;
- }
- spin_lock_irqsave(q->queue_lock, flags);
-- floppy_end_request(req, 0);
-+ floppy_end_request(req, -EIO);
- spin_unlock_irqrestore(q->queue_lock, flags);
- }
- }
-diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
-index b4c0888..ae31060 100644
---- a/drivers/block/nbd.c
-+++ b/drivers/block/nbd.c
-@@ -100,17 +100,15 @@ static const char *nbdcmd_to_ascii(int cmd)
-
- static void nbd_end_request(struct request *req)
- {
-- int uptodate = (req->errors == 0) ? 1 : 0;
-+ int error = req->errors ? -EIO : 0;
- struct request_queue *q = req->q;
- unsigned long flags;
-
- dprintk(DBG_BLKDEV, "%s: request %p: %s\n", req->rq_disk->disk_name,
-- req, uptodate? "done": "failed");
-+ req, error ? "failed" : "done");
-
- spin_lock_irqsave(q->queue_lock, flags);
-- if (!end_that_request_first(req, uptodate, req->nr_sectors)) {
-- end_that_request_last(req, uptodate);
-- }
-+ __blk_end_request(req, error, req->nr_sectors << 9);
- spin_unlock_irqrestore(q->queue_lock, flags);
- }
-
-@@ -375,14 +373,17 @@ harderror:
- return NULL;
- }
-
--static ssize_t pid_show(struct gendisk *disk, char *page)
-+static ssize_t pid_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-- return sprintf(page, "%ld\n",
-+ struct gendisk *disk = dev_to_disk(dev);
-+
-+ return sprintf(buf, "%ld\n",
- (long) ((struct nbd_device *)disk->private_data)->pid);
- }
-
--static struct disk_attribute pid_attr = {
-- .attr = { .name = "pid", .mode = S_IRUGO },
-+static struct device_attribute pid_attr = {
-+ .attr = { .name = "pid", .mode = S_IRUGO, .owner = THIS_MODULE },
- .show = pid_show,
- };
-
-@@ -394,7 +395,7 @@ static int nbd_do_it(struct nbd_device *lo)
- BUG_ON(lo->magic != LO_MAGIC);
-
- lo->pid = current->pid;
-- ret = sysfs_create_file(&lo->disk->kobj, &pid_attr.attr);
-+ ret = sysfs_create_file(&lo->disk->dev.kobj, &pid_attr.attr);
- if (ret) {
- printk(KERN_ERR "nbd: sysfs_create_file failed!");
- return ret;
-@@ -403,7 +404,7 @@ static int nbd_do_it(struct nbd_device *lo)
- while ((req = nbd_read_stat(lo)) != NULL)
- nbd_end_request(req);
-
-- sysfs_remove_file(&lo->disk->kobj, &pid_attr.attr);
-+ sysfs_remove_file(&lo->disk->dev.kobj, &pid_attr.attr);
- return 0;
- }
-
-diff --git a/drivers/block/paride/pg.c b/drivers/block/paride/pg.c
-index d89e7d3..ab86e23 100644
---- a/drivers/block/paride/pg.c
-+++ b/drivers/block/paride/pg.c
-@@ -676,8 +676,8 @@ static int __init pg_init(void)
- for (unit = 0; unit < PG_UNITS; unit++) {
- struct pg *dev = &devices[unit];
- if (dev->present)
-- class_device_create(pg_class, NULL, MKDEV(major, unit),
-- NULL, "pg%u", unit);
-+ device_create(pg_class, NULL, MKDEV(major, unit),
-+ "pg%u", unit);
- }
- err = 0;
- goto out;
-@@ -695,7 +695,7 @@ static void __exit pg_exit(void)
- for (unit = 0; unit < PG_UNITS; unit++) {
- struct pg *dev = &devices[unit];
- if (dev->present)
-- class_device_destroy(pg_class, MKDEV(major, unit));
-+ device_destroy(pg_class, MKDEV(major, unit));
- }
- class_destroy(pg_class);
- unregister_chrdev(major, name);
-diff --git a/drivers/block/paride/pt.c b/drivers/block/paride/pt.c
-index b91accf..76096ca 100644
---- a/drivers/block/paride/pt.c
-+++ b/drivers/block/paride/pt.c
-@@ -972,10 +972,10 @@ static int __init pt_init(void)
-
- for (unit = 0; unit < PT_UNITS; unit++)
- if (pt[unit].present) {
-- class_device_create(pt_class, NULL, MKDEV(major, unit),
-- NULL, "pt%d", unit);
-- class_device_create(pt_class, NULL, MKDEV(major, unit + 128),
-- NULL, "pt%dn", unit);
-+ device_create(pt_class, NULL, MKDEV(major, unit),
-+ "pt%d", unit);
-+ device_create(pt_class, NULL, MKDEV(major, unit + 128),
-+ "pt%dn", unit);
- }
- goto out;
-
-@@ -990,8 +990,8 @@ static void __exit pt_exit(void)
- int unit;
- for (unit = 0; unit < PT_UNITS; unit++)
- if (pt[unit].present) {
-- class_device_destroy(pt_class, MKDEV(major, unit));
-- class_device_destroy(pt_class, MKDEV(major, unit + 128));
-+ device_destroy(pt_class, MKDEV(major, unit));
-+ device_destroy(pt_class, MKDEV(major, unit + 128));
- }
- class_destroy(pt_class);
- unregister_chrdev(major, name);
-diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
-index 3535ef8..e9de171 100644
---- a/drivers/block/pktcdvd.c
-+++ b/drivers/block/pktcdvd.c
-@@ -110,17 +110,18 @@ static struct pktcdvd_kobj* pkt_kobj_create(struct pktcdvd_device *pd,
- struct kobj_type* ktype)
- {
- struct pktcdvd_kobj *p;
-+ int error;
-+
- p = kzalloc(sizeof(*p), GFP_KERNEL);
- if (!p)
- return NULL;
-- kobject_set_name(&p->kobj, "%s", name);
-- p->kobj.parent = parent;
-- p->kobj.ktype = ktype;
- p->pd = pd;
-- if (kobject_register(&p->kobj) != 0) {
-+ error = kobject_init_and_add(&p->kobj, ktype, parent, "%s", name);
-+ if (error) {
- kobject_put(&p->kobj);
- return NULL;
- }
-+ kobject_uevent(&p->kobj, KOBJ_ADD);
- return p;
- }
- /*
-@@ -129,7 +130,7 @@ static struct pktcdvd_kobj* pkt_kobj_create(struct pktcdvd_device *pd,
- static void pkt_kobj_remove(struct pktcdvd_kobj *p)
- {
- if (p)
-- kobject_unregister(&p->kobj);
-+ kobject_put(&p->kobj);
- }
- /*
- * default release function for pktcdvd kernel objects.
-@@ -301,18 +302,16 @@ static struct kobj_type kobj_pkt_type_wqueue = {
- static void pkt_sysfs_dev_new(struct pktcdvd_device *pd)
- {
- if (class_pktcdvd) {
-- pd->clsdev = class_device_create(class_pktcdvd,
-- NULL, pd->pkt_dev,
-- NULL, "%s", pd->name);
-- if (IS_ERR(pd->clsdev))
-- pd->clsdev = NULL;
-+ pd->dev = device_create(class_pktcdvd, NULL, pd->pkt_dev, "%s", pd->name);
-+ if (IS_ERR(pd->dev))
-+ pd->dev = NULL;
- }
-- if (pd->clsdev) {
-+ if (pd->dev) {
- pd->kobj_stat = pkt_kobj_create(pd, "stat",
-- &pd->clsdev->kobj,
-+ &pd->dev->kobj,
- &kobj_pkt_type_stat);
- pd->kobj_wqueue = pkt_kobj_create(pd, "write_queue",
-- &pd->clsdev->kobj,
-+ &pd->dev->kobj,
- &kobj_pkt_type_wqueue);
- }
- }
-@@ -322,7 +321,7 @@ static void pkt_sysfs_dev_remove(struct pktcdvd_device *pd)
- pkt_kobj_remove(pd->kobj_stat);
- pkt_kobj_remove(pd->kobj_wqueue);
- if (class_pktcdvd)
-- class_device_destroy(class_pktcdvd, pd->pkt_dev);
-+ device_destroy(class_pktcdvd, pd->pkt_dev);
- }
-
-
-diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
-index e354bfc..7483f94 100644
---- a/drivers/block/ps3disk.c
-+++ b/drivers/block/ps3disk.c
-@@ -229,7 +229,7 @@ static irqreturn_t ps3disk_interrupt(int irq, void *data)
- struct ps3_storage_device *dev = data;
- struct ps3disk_private *priv;
- struct request *req;
-- int res, read, uptodate;
-+ int res, read, error;
- u64 tag, status;
- unsigned long num_sectors;
- const char *op;
-@@ -270,21 +270,17 @@ static irqreturn_t ps3disk_interrupt(int irq, void *data)
- if (status) {
- dev_dbg(&dev->sbd.core, "%s:%u: %s failed 0x%lx\n", __func__,
- __LINE__, op, status);
-- uptodate = 0;
-+ error = -EIO;
- } else {
- dev_dbg(&dev->sbd.core, "%s:%u: %s completed\n", __func__,
- __LINE__, op);
-- uptodate = 1;
-+ error = 0;
- if (read)
- ps3disk_scatter_gather(dev, req, 0);
- }
-
- spin_lock(&priv->lock);
-- if (!end_that_request_first(req, uptodate, num_sectors)) {
-- add_disk_randomness(req->rq_disk);
-- blkdev_dequeue_request(req);
-- end_that_request_last(req, uptodate);
+- /* Special requests are not subject to ordering rules. */
+- if (!blk_fs_request(rq) &&
+- rq != &q->pre_flush_rq && rq != &q->post_flush_rq)
+- return 1;
+-
+- if (q->ordered & QUEUE_ORDERED_TAG) {
+- /* Ordered by tag. Blocking the next barrier is enough. */
+- if (is_barrier && rq != &q->bar_rq)
+- *rqp = NULL;
+- } else {
+- /* Ordered by draining. Wait for turn. */
+- WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
+- if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
+- *rqp = NULL;
- }
-+ __blk_end_request(req, error, num_sectors << 9);
- priv->req = NULL;
- ps3disk_do_request(dev, priv->queue);
- spin_unlock(&priv->lock);
-diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
-index fac4c6c..66e3015 100644
---- a/drivers/block/sunvdc.c
-+++ b/drivers/block/sunvdc.c
-@@ -212,12 +212,9 @@ static void vdc_end_special(struct vdc_port *port, struct vio_disk_desc *desc)
- vdc_finish(&port->vio, -err, WAITING_FOR_GEN_CMD);
- }
-
--static void vdc_end_request(struct request *req, int uptodate, int num_sectors)
-+static void vdc_end_request(struct request *req, int error, int num_sectors)
- {
-- if (end_that_request_first(req, uptodate, num_sectors))
-- return;
-- add_disk_randomness(req->rq_disk);
-- end_that_request_last(req, uptodate);
-+ __blk_end_request(req, error, num_sectors << 9);
- }
-
- static void vdc_end_one(struct vdc_port *port, struct vio_dring_state *dr,
-@@ -242,7 +239,7 @@ static void vdc_end_one(struct vdc_port *port, struct vio_dring_state *dr,
-
- rqe->req = NULL;
-
-- vdc_end_request(req, !desc->status, desc->size >> 9);
-+ vdc_end_request(req, (desc->status ? -EIO : 0), desc->size >> 9);
-
- if (blk_queue_stopped(port->disk->queue))
- blk_start_queue(port->disk->queue);
-@@ -456,7 +453,7 @@ static void do_vdc_request(struct request_queue *q)
-
- blkdev_dequeue_request(req);
- if (__send_request(req) < 0)
-- vdc_end_request(req, 0, req->hard_nr_sectors);
-+ vdc_end_request(req, -EIO, req->hard_nr_sectors);
- }
- }
-
-diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
-index 52dc5e1..cd5674b 100644
---- a/drivers/block/sx8.c
-+++ b/drivers/block/sx8.c
-@@ -744,16 +744,14 @@ static unsigned int carm_fill_get_fw_ver(struct carm_host *host,
-
- static inline void carm_end_request_queued(struct carm_host *host,
- struct carm_request *crq,
-- int uptodate)
-+ int error)
- {
- struct request *req = crq->rq;
- int rc;
-
-- rc = end_that_request_first(req, uptodate, req->hard_nr_sectors);
-+ rc = __blk_end_request(req, error, blk_rq_bytes(req));
- assert(rc == 0);
-
-- end_that_request_last(req, uptodate);
-
- rc = carm_put_request(host, crq);
- assert(rc == 0);
- }
-@@ -793,9 +791,9 @@ static inline void carm_round_robin(struct carm_host *host)
- }
-
- static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq,
-- int is_ok)
-+ int error)
- {
-- carm_end_request_queued(host, crq, is_ok);
-+ carm_end_request_queued(host, crq, error);
- if (max_queue == 1)
- carm_round_robin(host);
- else if ((host->n_msgs <= CARM_MSG_LOW_WATER) &&
-@@ -873,14 +871,14 @@ queue_one_request:
- sg = &crq->sg[0];
- n_elem = blk_rq_map_sg(q, rq, sg);
- if (n_elem <= 0) {
-- carm_end_rq(host, crq, 0);
-+ carm_end_rq(host, crq, -EIO);
- return; /* request with no s/g entries? */
- }
-
- /* map scatterlist to PCI bus addresses */
- n_elem = pci_map_sg(host->pdev, sg, n_elem, pci_dir);
- if (n_elem <= 0) {
-- carm_end_rq(host, crq, 0);
-+ carm_end_rq(host, crq, -EIO);
- return; /* request with no s/g entries? */
- }
- crq->n_elem = n_elem;
-@@ -941,7 +939,7 @@ queue_one_request:
-
- static void carm_handle_array_info(struct carm_host *host,
- struct carm_request *crq, u8 *mem,
-- int is_ok)
-+ int error)
- {
- struct carm_port *port;
- u8 *msg_data = mem + sizeof(struct carm_array_info);
-@@ -952,9 +950,9 @@ static void carm_handle_array_info(struct carm_host *host,
-
- DPRINTK("ENTER\n");
-
-- carm_end_rq(host, crq, is_ok);
-+ carm_end_rq(host, crq, error);
-
-- if (!is_ok)
-+ if (error)
- goto out;
- if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST)
- goto out;
-@@ -1001,7 +999,7 @@ out:
-
- static void carm_handle_scan_chan(struct carm_host *host,
- struct carm_request *crq, u8 *mem,
-- int is_ok)
-+ int error)
- {
- u8 *msg_data = mem + IOC_SCAN_CHAN_OFFSET;
- unsigned int i, dev_count = 0;
-@@ -1009,9 +1007,9 @@ static void carm_handle_scan_chan(struct carm_host *host,
-
- DPRINTK("ENTER\n");
-
-- carm_end_rq(host, crq, is_ok);
-+ carm_end_rq(host, crq, error);
-
-- if (!is_ok) {
-+ if (error) {
- new_state = HST_ERROR;
- goto out;
- }
-@@ -1033,23 +1031,23 @@ out:
- }
-
- static void carm_handle_generic(struct carm_host *host,
-- struct carm_request *crq, int is_ok,
-+ struct carm_request *crq, int error,
- int cur_state, int next_state)
- {
- DPRINTK("ENTER\n");
-
-- carm_end_rq(host, crq, is_ok);
-+ carm_end_rq(host, crq, error);
-
- assert(host->state == cur_state);
-- if (is_ok)
-- host->state = next_state;
-- else
-+ if (error)
- host->state = HST_ERROR;
-+ else
-+ host->state = next_state;
- schedule_work(&host->fsm_task);
- }
-
- static inline void carm_handle_rw(struct carm_host *host,
-- struct carm_request *crq, int is_ok)
-+ struct carm_request *crq, int error)
- {
- int pci_dir;
-
-@@ -1062,7 +1060,7 @@ static inline void carm_handle_rw(struct carm_host *host,
-
- pci_unmap_sg(host->pdev, &crq->sg[0], crq->n_elem, pci_dir);
-
-- carm_end_rq(host, crq, is_ok);
-+ carm_end_rq(host, crq, error);
- }
-
- static inline void carm_handle_resp(struct carm_host *host,
-@@ -1071,7 +1069,7 @@ static inline void carm_handle_resp(struct carm_host *host,
- u32 handle = le32_to_cpu(ret_handle_le);
- unsigned int msg_idx;
- struct carm_request *crq;
-- int is_ok = (status == RMSG_OK);
-+ int error = (status == RMSG_OK) ? 0 : -EIO;
- u8 *mem;
-
- VPRINTK("ENTER, handle == 0x%x\n", handle);
-@@ -1090,7 +1088,7 @@ static inline void carm_handle_resp(struct carm_host *host,
- /* fast path */
- if (likely(crq->msg_type == CARM_MSG_READ ||
- crq->msg_type == CARM_MSG_WRITE)) {
-- carm_handle_rw(host, crq, is_ok);
-+ carm_handle_rw(host, crq, error);
- return;
- }
-
-@@ -1100,7 +1098,7 @@ static inline void carm_handle_resp(struct carm_host *host,
- case CARM_MSG_IOCTL: {
- switch (crq->msg_subtype) {
- case CARM_IOC_SCAN_CHAN:
-- carm_handle_scan_chan(host, crq, mem, is_ok);
-+ carm_handle_scan_chan(host, crq, mem, error);
- break;
- default:
- /* unknown / invalid response */
-@@ -1112,21 +1110,21 @@ static inline void carm_handle_resp(struct carm_host *host,
- case CARM_MSG_MISC: {
- switch (crq->msg_subtype) {
- case MISC_ALLOC_MEM:
-- carm_handle_generic(host, crq, is_ok,
-+ carm_handle_generic(host, crq, error,
- HST_ALLOC_BUF, HST_SYNC_TIME);
- break;
- case MISC_SET_TIME:
-- carm_handle_generic(host, crq, is_ok,
-+ carm_handle_generic(host, crq, error,
- HST_SYNC_TIME, HST_GET_FW_VER);
- break;
- case MISC_GET_FW_VER: {
- struct carm_fw_ver *ver = (struct carm_fw_ver *)
- mem + sizeof(struct carm_msg_get_fw_ver);
-- if (is_ok) {
-+ if (!error) {
- host->fw_ver = le32_to_cpu(ver->version);
- host->flags |= (ver->features & FL_FW_VER_MASK);
- }
-- carm_handle_generic(host, crq, is_ok,
-+ carm_handle_generic(host, crq, error,
- HST_GET_FW_VER, HST_PORT_SCAN);
- break;
- }
-@@ -1140,7 +1138,7 @@ static inline void carm_handle_resp(struct carm_host *host,
- case CARM_MSG_ARRAY: {
- switch (crq->msg_subtype) {
- case CARM_ARRAY_INFO:
-- carm_handle_array_info(host, crq, mem, is_ok);
-+ carm_handle_array_info(host, crq, mem, error);
- break;
- default:
- /* unknown / invalid response */
-@@ -1159,7 +1157,7 @@ static inline void carm_handle_resp(struct carm_host *host,
- err_out:
- printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n",
- pci_name(host->pdev), crq->msg_type, crq->msg_subtype);
-- carm_end_rq(host, crq, 0);
-+ carm_end_rq(host, crq, -EIO);
- }
-
- static inline void carm_handle_responses(struct carm_host *host)
-diff --git a/drivers/block/ub.c b/drivers/block/ub.c
-index 08e909d..c6179d6 100644
---- a/drivers/block/ub.c
-+++ b/drivers/block/ub.c
-@@ -808,16 +808,16 @@ static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
-
- static void ub_end_rq(struct request *rq, unsigned int scsi_status)
- {
-- int uptodate;
-+ int error;
-
- if (scsi_status == 0) {
-- uptodate = 1;
-+ error = 0;
- } else {
-- uptodate = 0;
-+ error = -EIO;
- rq->errors = scsi_status;
- }
-- end_that_request_first(rq, uptodate, rq->hard_nr_sectors);
-- end_that_request_last(rq, uptodate);
-+ if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
-+ BUG();
- }
-
- static int ub_rw_cmd_retry(struct ub_dev *sc, struct ub_lun *lun,
-diff --git a/drivers/block/viodasd.c b/drivers/block/viodasd.c
-index ab5d404..9e61fca 100644
---- a/drivers/block/viodasd.c
-+++ b/drivers/block/viodasd.c
-@@ -229,13 +229,10 @@ static struct block_device_operations viodasd_fops = {
- /*
- * End a request
- */
--static void viodasd_end_request(struct request *req, int uptodate,
-+static void viodasd_end_request(struct request *req, int error,
- int num_sectors)
- {
-- if (end_that_request_first(req, uptodate, num_sectors))
-- return;
-- add_disk_randomness(req->rq_disk);
-- end_that_request_last(req, uptodate);
-+ __blk_end_request(req, error, num_sectors << 9);
- }
-
- /*
-@@ -374,12 +371,12 @@ static void do_viodasd_request(struct request_queue *q)
- blkdev_dequeue_request(req);
- /* check that request contains a valid command */
- if (!blk_fs_request(req)) {
-- viodasd_end_request(req, 0, req->hard_nr_sectors);
-+ viodasd_end_request(req, -EIO, req->hard_nr_sectors);
- continue;
- }
- /* Try sending the request */
- if (send_request(req) != 0)
-- viodasd_end_request(req, 0, req->hard_nr_sectors);
-+ viodasd_end_request(req, -EIO, req->hard_nr_sectors);
- }
- }
-
-@@ -591,7 +588,7 @@ static int viodasd_handle_read_write(struct vioblocklpevent *bevent)
- num_req_outstanding--;
- spin_unlock_irqrestore(&viodasd_spinlock, irq_flags);
-
-- error = event->xRc != HvLpEvent_Rc_Good;
-+ error = (event->xRc == HvLpEvent_Rc_Good) ? 0 : -EIO;
- if (error) {
- const struct vio_error_entry *err;
- err = vio_lookup_rc(viodasd_err_table, bevent->sub_result);
-@@ -601,7 +598,7 @@ static int viodasd_handle_read_write(struct vioblocklpevent *bevent)
- }
- qlock = req->q->queue_lock;
- spin_lock_irqsave(qlock, irq_flags);
-- viodasd_end_request(req, !error, num_sect);
-+ viodasd_end_request(req, error, num_sect);
- spin_unlock_irqrestore(qlock, irq_flags);
-
- /* Finally, try to get more requests off of this device's queue */
-diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
-index 2bdebcb..8afce67 100644
---- a/drivers/block/xen-blkfront.c
-+++ b/drivers/block/xen-blkfront.c
-@@ -452,7 +452,7 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
- RING_IDX i, rp;
- unsigned long flags;
- struct blkfront_info *info = (struct blkfront_info *)dev_id;
-- int uptodate;
-+ int error;
-
- spin_lock_irqsave(&blkif_io_lock, flags);
-
-@@ -477,13 +477,13 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
-
- add_id_to_freelist(info, id);
-
-- uptodate = (bret->status == BLKIF_RSP_OKAY);
-+ error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
- switch (bret->operation) {
- case BLKIF_OP_WRITE_BARRIER:
- if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
- printk(KERN_WARNING "blkfront: %s: write barrier op failed\n",
- info->gd->disk_name);
-- uptodate = -EOPNOTSUPP;
-+ error = -EOPNOTSUPP;
- info->feature_barrier = 0;
- xlvbd_barrier(info);
- }
-@@ -494,10 +494,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
- dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
- "request: %x\n", bret->status);
-
-- ret = end_that_request_first(req, uptodate,
-- req->hard_nr_sectors);
-+ ret = __blk_end_request(req, error, blk_rq_bytes(req));
- BUG_ON(ret);
-- end_that_request_last(req, uptodate);
- break;
- default:
- BUG();
-diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c
-index 82effce..78ebfff 100644
---- a/drivers/block/xsysace.c
-+++ b/drivers/block/xsysace.c
-@@ -483,7 +483,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
- u32 status;
- u16 val;
- int count;
-- int i;
-
- #if defined(DEBUG)
- dev_dbg(ace->dev, "fsm_state=%i, id_req_count=%i\n",
-@@ -688,7 +687,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
- }
-
- /* Transfer the next buffer */
-- i = 16;
- if (ace->fsm_task == ACE_TASK_WRITE)
- ace->reg_ops->dataout(ace);
- else
-@@ -702,8 +700,8 @@ static void ace_fsm_dostate(struct ace_device *ace)
- }
-
- /* bio finished; is there another one? */
-- i = ace->req->current_nr_sectors;
-- if (end_that_request_first(ace->req, 1, i)) {
-+ if (__blk_end_request(ace->req, 0,
-+ blk_rq_cur_bytes(ace->req))) {
- /* dev_dbg(ace->dev, "next block; h=%li c=%i\n",
- * ace->req->hard_nr_sectors,
- * ace->req->current_nr_sectors);
-@@ -718,9 +716,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
- break;
-
- case ACE_FSM_STATE_REQ_COMPLETE:
-- /* Complete the block request */
-- blkdev_dequeue_request(ace->req);
-- end_that_request_last(ace->req, 1);
- ace->req = NULL;
-
- /* Finished request; go to idle state */
-diff --git a/drivers/cdrom/Makefile b/drivers/cdrom/Makefile
-index 774c180..ecf85fd 100644
---- a/drivers/cdrom/Makefile
-+++ b/drivers/cdrom/Makefile
-@@ -11,3 +11,4 @@ obj-$(CONFIG_PARIDE_PCD) += cdrom.o
- obj-$(CONFIG_CDROM_PKTCDVD) += cdrom.o
-
- obj-$(CONFIG_VIOCD) += viocd.o cdrom.o
-+obj-$(CONFIG_GDROM) += gdrom.o cdrom.o
-diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
-new file mode 100644
-index 0000000..4e2bbcc
---- /dev/null
-+++ b/drivers/cdrom/gdrom.c
-@@ -0,0 +1,867 @@
-+/* GD ROM driver for the SEGA Dreamcast
-+ * copyright Adrian McMenamin, 2007
-+ * With thanks to Marcus Comstedt and Nathan Keynes
-+ * for work in reversing PIO and DMA
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License along
-+ * with this program; if not, write to the Free Software Foundation, Inc.,
-+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-+ *
-+ */
-+
-+#include <linux/init.h>
-+#include <linux/module.h>
-+#include <linux/fs.h>
-+#include <linux/kernel.h>
-+#include <linux/list.h>
-+#include <linux/slab.h>
-+#include <linux/dma-mapping.h>
-+#include <linux/cdrom.h>
-+#include <linux/genhd.h>
-+#include <linux/bio.h>
-+#include <linux/blkdev.h>
-+#include <linux/interrupt.h>
-+#include <linux/device.h>
-+#include <linux/wait.h>
-+#include <linux/workqueue.h>
-+#include <linux/platform_device.h>
-+#include <scsi/scsi.h>
-+#include <asm/io.h>
-+#include <asm/dma.h>
-+#include <asm/delay.h>
-+#include <asm/mach/dma.h>
-+#include <asm/mach/sysasic.h>
-+
-+#define GDROM_DEV_NAME "gdrom"
-+#define GD_SESSION_OFFSET 150
-+
-+/* GD Rom commands */
-+#define GDROM_COM_SOFTRESET 0x08
-+#define GDROM_COM_EXECDIAG 0x90
-+#define GDROM_COM_PACKET 0xA0
-+#define GDROM_COM_IDDEV 0xA1
-+
-+/* GD Rom registers */
-+#define GDROM_BASE_REG 0xA05F7000
-+#define GDROM_ALTSTATUS_REG (GDROM_BASE_REG + 0x18)
-+#define GDROM_DATA_REG (GDROM_BASE_REG + 0x80)
-+#define GDROM_ERROR_REG (GDROM_BASE_REG + 0x84)
-+#define GDROM_INTSEC_REG (GDROM_BASE_REG + 0x88)
-+#define GDROM_SECNUM_REG (GDROM_BASE_REG + 0x8C)
-+#define GDROM_BCL_REG (GDROM_BASE_REG + 0x90)
-+#define GDROM_BCH_REG (GDROM_BASE_REG + 0x94)
-+#define GDROM_DSEL_REG (GDROM_BASE_REG + 0x98)
-+#define GDROM_STATUSCOMMAND_REG (GDROM_BASE_REG + 0x9C)
-+#define GDROM_RESET_REG (GDROM_BASE_REG + 0x4E4)
-+
-+#define GDROM_DMA_STARTADDR_REG (GDROM_BASE_REG + 0x404)
-+#define GDROM_DMA_LENGTH_REG (GDROM_BASE_REG + 0x408)
-+#define GDROM_DMA_DIRECTION_REG (GDROM_BASE_REG + 0x40C)
-+#define GDROM_DMA_ENABLE_REG (GDROM_BASE_REG + 0x414)
-+#define GDROM_DMA_STATUS_REG (GDROM_BASE_REG + 0x418)
-+#define GDROM_DMA_WAIT_REG (GDROM_BASE_REG + 0x4A0)
-+#define GDROM_DMA_ACCESS_CTRL_REG (GDROM_BASE_REG + 0x4B8)
-+
-+#define GDROM_HARD_SECTOR 2048
-+#define BLOCK_LAYER_SECTOR 512
-+#define GD_TO_BLK 4
-+
-+#define GDROM_DEFAULT_TIMEOUT (HZ * 7)
-+
-+static const struct {
-+ int sense_key;
-+ const char * const text;
-+} sense_texts[] = {
-+ {NO_SENSE, "OK"},
-+ {RECOVERED_ERROR, "Recovered from error"},
-+ {NOT_READY, "Device not ready"},
-+ {MEDIUM_ERROR, "Disk not ready"},
-+ {HARDWARE_ERROR, "Hardware error"},
-+ {ILLEGAL_REQUEST, "Command has failed"},
-+ {UNIT_ATTENTION, "Device needs attention - disk may have been changed"},
-+ {DATA_PROTECT, "Data protection error"},
-+ {ABORTED_COMMAND, "Command aborted"},
-+};
-+
-+static struct platform_device *pd;
-+static int gdrom_major;
-+static DECLARE_WAIT_QUEUE_HEAD(command_queue);
-+static DECLARE_WAIT_QUEUE_HEAD(request_queue);
-+
-+static DEFINE_SPINLOCK(gdrom_lock);
-+static void gdrom_readdisk_dma(struct work_struct *work);
-+static DECLARE_WORK(work, gdrom_readdisk_dma);
-+static LIST_HEAD(gdrom_deferred);
-+
-+struct gdromtoc {
-+ unsigned int entry[99];
-+ unsigned int first, last;
-+ unsigned int leadout;
-+};
-+
-+static struct gdrom_unit {
-+ struct gendisk *disk;
-+ struct cdrom_device_info *cd_info;
-+ int status;
-+ int pending;
-+ int transfer;
-+ char disk_type;
-+ struct gdromtoc *toc;
-+ struct request_queue *gdrom_rq;
-+} gd;
-+
-+struct gdrom_id {
-+ char mid;
-+ char modid;
-+ char verid;
-+ char padA[13];
-+ char mname[16];
-+ char modname[16];
-+ char firmver[16];
-+ char padB[16];
-+};
-+
-+static int gdrom_getsense(short *bufstring);
-+static int gdrom_packetcommand(struct cdrom_device_info *cd_info,
-+ struct packet_command *command);
-+static int gdrom_hardreset(struct cdrom_device_info *cd_info);
-+
-+static bool gdrom_is_busy(void)
-+{
-+ return (ctrl_inb(GDROM_ALTSTATUS_REG) & 0x80) != 0;
-+}
-+
-+static bool gdrom_data_request(void)
-+{
-+ return (ctrl_inb(GDROM_ALTSTATUS_REG) & 0x88) == 8;
-+}
-+
-+static bool gdrom_wait_clrbusy(void)
-+{
-+ unsigned long timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
-+ while ((ctrl_inb(GDROM_ALTSTATUS_REG) & 0x80) &&
-+ (time_before(jiffies, timeout)))
-+ cpu_relax();
-+ return time_before(jiffies, timeout + 1);
-+}
-+
-+static bool gdrom_wait_busy_sleeps(void)
-+{
-+ unsigned long timeout;
-+ /* Wait to get busy first */
-+ timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
-+ while (!gdrom_is_busy() && time_before(jiffies, timeout))
-+ cpu_relax();
-+ /* Now wait for busy to clear */
-+ return gdrom_wait_clrbusy();
-+}
-+
-+static void gdrom_identifydevice(void *buf)
-+{
-+ int c;
-+ short *data = buf;
-+ /* If the device won't clear it has probably
-+ * been hit by a serious failure - but we'll
-+ * try to return a sense key even so */
-+ if (!gdrom_wait_clrbusy()) {
-+ gdrom_getsense(NULL);
-+ return;
-+ }
-+ ctrl_outb(GDROM_COM_IDDEV, GDROM_STATUSCOMMAND_REG);
-+ if (!gdrom_wait_busy_sleeps()) {
-+ gdrom_getsense(NULL);
-+ return;
-+ }
-+ /* now read in the data */
-+ for (c = 0; c < 40; c++)
-+ data[c] = ctrl_inw(GDROM_DATA_REG);
-+}
-+
-+static void gdrom_spicommand(void *spi_string, int buflen)
-+{
-+ short *cmd = spi_string;
-+ unsigned long timeout;
-+
-+ /* ensure IRQ_WAIT is set */
-+ ctrl_outb(0x08, GDROM_ALTSTATUS_REG);
-+ /* specify how many bytes we expect back */
-+ ctrl_outb(buflen & 0xFF, GDROM_BCL_REG);
-+ ctrl_outb((buflen >> 8) & 0xFF, GDROM_BCH_REG);
-+ /* other parameters */
-+ ctrl_outb(0, GDROM_INTSEC_REG);
-+ ctrl_outb(0, GDROM_SECNUM_REG);
-+ ctrl_outb(0, GDROM_ERROR_REG);
-+ /* Wait until we can go */
-+ if (!gdrom_wait_clrbusy()) {
-+ gdrom_getsense(NULL);
-+ return;
-+ }
-+ timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
-+ ctrl_outb(GDROM_COM_PACKET, GDROM_STATUSCOMMAND_REG);
-+ while (!gdrom_data_request() && time_before(jiffies, timeout))
-+ cpu_relax();
-+ if (!time_before(jiffies, timeout + 1)) {
-+ gdrom_getsense(NULL);
-+ return;
-+ }
-+ outsw(PHYSADDR(GDROM_DATA_REG), cmd, 6);
-+}
-+
-+
-+/* gdrom_command_executediagnostic:
-+ * Used to probe for presence of working GDROM
-+ * Restarts GDROM device and then applies standard ATA 3
-+ * Execute Diagnostic Command: a return of '1' indicates device 0
-+ * present and device 1 absent
-+ */
-+static char gdrom_execute_diagnostic(void)
-+{
-+ gdrom_hardreset(gd.cd_info);
-+ if (!gdrom_wait_clrbusy())
-+ return 0;
-+ ctrl_outb(GDROM_COM_EXECDIAG, GDROM_STATUSCOMMAND_REG);
-+ if (!gdrom_wait_busy_sleeps())
-+ return 0;
-+ return ctrl_inb(GDROM_ERROR_REG);
-+}
-+
-+/*
-+ * Prepare disk command
-+ * byte 0 = 0x70
-+ * byte 1 = 0x1f
-+ */
-+static int gdrom_preparedisk_cmd(void)
-+{
-+ struct packet_command *spin_command;
-+ spin_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
-+ if (!spin_command)
-+ return -ENOMEM;
-+ spin_command->cmd[0] = 0x70;
-+ spin_command->cmd[2] = 0x1f;
-+ spin_command->buflen = 0;
-+ gd.pending = 1;
-+ gdrom_packetcommand(gd.cd_info, spin_command);
-+ /* 60 second timeout */
-+ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
-+ GDROM_DEFAULT_TIMEOUT);
-+ gd.pending = 0;
-+ kfree(spin_command);
-+ if (gd.status & 0x01) {
-+ /* log an error */
-+ gdrom_getsense(NULL);
-+ return -EIO;
-+ }
-+ return 0;
-+}
-+
-+/*
-+ * Read TOC command
-+ * byte 0 = 0x14
-+ * byte 1 = session
-+ * byte 3 = sizeof TOC >> 8 ie upper byte
-+ * byte 4 = sizeof TOC & 0xff ie lower byte
-+ */
-+static int gdrom_readtoc_cmd(struct gdromtoc *toc, int session)
-+{
-+ int tocsize;
-+ struct packet_command *toc_command;
-+ int err = 0;
-+
-+ toc_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
-+ if (!toc_command)
-+ return -ENOMEM;
-+ tocsize = sizeof(struct gdromtoc);
-+ toc_command->cmd[0] = 0x14;
-+ toc_command->cmd[1] = session;
-+ toc_command->cmd[3] = tocsize >> 8;
-+ toc_command->cmd[4] = tocsize & 0xff;
-+ toc_command->buflen = tocsize;
-+ if (gd.pending) {
-+ err = -EBUSY;
-+ goto cleanup_readtoc_final;
-+ }
-+ gd.pending = 1;
-+ gdrom_packetcommand(gd.cd_info, toc_command);
-+ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
-+ GDROM_DEFAULT_TIMEOUT);
-+ if (gd.pending) {
-+ err = -EINVAL;
-+ goto cleanup_readtoc;
-+ }
-+ insw(PHYSADDR(GDROM_DATA_REG), toc, tocsize/2);
-+ if (gd.status & 0x01)
-+ err = -EINVAL;
-+
-+cleanup_readtoc:
-+ gd.pending = 0;
-+cleanup_readtoc_final:
-+ kfree(toc_command);
-+ return err;
-+}
-+
-+/* TOC helpers */
-+static int get_entry_lba(int track)
-+{
-+ return (cpu_to_be32(track & 0xffffff00) - GD_SESSION_OFFSET);
-+}
-+
-+static int get_entry_q_ctrl(int track)
-+{
-+ return (track & 0x000000f0) >> 4;
-+}
-+
-+static int get_entry_track(int track)
-+{
-+ return (track & 0x0000ff00) >> 8;
-+}
-+
-+static int gdrom_get_last_session(struct cdrom_device_info *cd_info,
-+ struct cdrom_multisession *ms_info)
-+{
-+ int fentry, lentry, track, data, tocuse, err;
-+ if (!gd.toc)
-+ return -ENOMEM;
-+ tocuse = 1;
-+ /* Check if GD-ROM */
-+ err = gdrom_readtoc_cmd(gd.toc, 1);
-+ /* Not a GD-ROM so check if standard CD-ROM */
-+ if (err) {
-+ tocuse = 0;
-+ err = gdrom_readtoc_cmd(gd.toc, 0);
-+ if (err) {
-+ printk(KERN_INFO "GDROM: Could not get CD "
-+ "table of contents\n");
-+ return -ENXIO;
-+ }
-+ }
-+
-+ fentry = get_entry_track(gd.toc->first);
-+ lentry = get_entry_track(gd.toc->last);
-+ /* Find the first data track */
-+ track = get_entry_track(gd.toc->last);
-+ do {
-+ data = gd.toc->entry[track - 1];
-+ if (get_entry_q_ctrl(data))
-+ break; /* ie a real data track */
-+ track--;
-+ } while (track >= fentry);
-+
-+ if ((track > 100) || (track < get_entry_track(gd.toc->first))) {
-+ printk(KERN_INFO "GDROM: No data on the last "
-+ "session of the CD\n");
-+ gdrom_getsense(NULL);
-+ return -ENXIO;
-+ }
-+
-+ ms_info->addr_format = CDROM_LBA;
-+ ms_info->addr.lba = get_entry_lba(data);
-+ ms_info->xa_flag = 1;
-+ return 0;
-+}
-+
-+static int gdrom_open(struct cdrom_device_info *cd_info, int purpose)
-+{
-+ /* spin up the disk */
-+ return gdrom_preparedisk_cmd();
-+}
-+
-+/* this function is required even if empty */
-+static void gdrom_release(struct cdrom_device_info *cd_info)
-+{
-+}
-+
-+static int gdrom_drivestatus(struct cdrom_device_info *cd_info, int ignore)
-+{
-+ /* read the sense key */
-+ char sense = ctrl_inb(GDROM_ERROR_REG);
-+ sense &= 0xF0;
-+ if (sense == 0)
-+ return CDS_DISC_OK;
-+ if (sense == 0x20)
-+ return CDS_DRIVE_NOT_READY;
-+ /* default */
-+ return CDS_NO_INFO;
-+}
-+
-+static int gdrom_mediachanged(struct cdrom_device_info *cd_info, int ignore)
-+{
-+ /* check the sense key */
-+ return (ctrl_inb(GDROM_ERROR_REG) & 0xF0) == 0x60;
-+}
-+
-+/* reset the G1 bus */
-+static int gdrom_hardreset(struct cdrom_device_info *cd_info)
-+{
-+ int count;
-+ ctrl_outl(0x1fffff, GDROM_RESET_REG);
-+ for (count = 0xa0000000; count < 0xa0200000; count += 4)
-+ ctrl_inl(count);
-+ return 0;
-+}
-+
-+/* keep the function looking like the universal
-+ * CD Rom specification - returning int */
-+static int gdrom_packetcommand(struct cdrom_device_info *cd_info,
-+ struct packet_command *command)
-+{
-+ gdrom_spicommand(&command->cmd, command->buflen);
-+ return 0;
-+}
-+
-+/* Get Sense SPI command
-+ * From Marcus Comstedt
-+ * cmd = 0x13
-+ * cmd + 4 = length of returned buffer
-+ * Returns 5 16 bit words
-+ */
-+static int gdrom_getsense(short *bufstring)
-+{
-+ struct packet_command *sense_command;
-+ short sense[5];
-+ int sense_key;
-+ int err = -EIO;
-+
-+ sense_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
-+ if (!sense_command)
-+ return -ENOMEM;
-+ sense_command->cmd[0] = 0x13;
-+ sense_command->cmd[4] = 10;
-+ sense_command->buflen = 10;
-+ /* even if something is pending try to get
-+ * the sense key if possible */
-+ if (gd.pending && !gdrom_wait_clrbusy()) {
-+ err = -EBUSY;
-+ goto cleanup_sense_final;
-+ }
-+ gd.pending = 1;
-+ gdrom_packetcommand(gd.cd_info, sense_command);
-+ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
-+ GDROM_DEFAULT_TIMEOUT);
-+ if (gd.pending)
-+ goto cleanup_sense;
-+ insw(PHYSADDR(GDROM_DATA_REG), &sense, sense_command->buflen/2);
-+ if (sense[1] & 40) {
-+ printk(KERN_INFO "GDROM: Drive not ready - command aborted\n");
-+ goto cleanup_sense;
-+ }
-+ sense_key = sense[1] & 0x0F;
-+ if (sense_key < ARRAY_SIZE(sense_texts))
-+ printk(KERN_INFO "GDROM: %s\n", sense_texts[sense_key].text);
-+ else
-+ printk(KERN_ERR "GDROM: Unknown sense key: %d\n", sense_key);
-+ if (bufstring) /* return additional sense data */
-+ memcpy(bufstring, &sense[4], 2);
-+ if (sense_key < 2)
-+ err = 0;
-+
-+cleanup_sense:
-+ gd.pending = 0;
-+cleanup_sense_final:
-+ kfree(sense_command);
-+ return err;
-+}
-+
-+static struct cdrom_device_ops gdrom_ops = {
-+ .open = gdrom_open,
-+ .release = gdrom_release,
-+ .drive_status = gdrom_drivestatus,
-+ .media_changed = gdrom_mediachanged,
-+ .get_last_session = gdrom_get_last_session,
-+ .reset = gdrom_hardreset,
-+ .capability = CDC_MULTI_SESSION | CDC_MEDIA_CHANGED |
-+ CDC_RESET | CDC_DRIVE_STATUS | CDC_CD_R,
-+ .n_minors = 1,
-+};
-+
-+static int gdrom_bdops_open(struct inode *inode, struct file *file)
-+{
-+ return cdrom_open(gd.cd_info, inode, file);
-+}
-+
-+static int gdrom_bdops_release(struct inode *inode, struct file *file)
-+{
-+ return cdrom_release(gd.cd_info, file);
-+}
-+
-+static int gdrom_bdops_mediachanged(struct gendisk *disk)
-+{
-+ return cdrom_media_changed(gd.cd_info);
-+}
-+
-+static int gdrom_bdops_ioctl(struct inode *inode, struct file *file,
-+ unsigned cmd, unsigned long arg)
-+{
-+ return cdrom_ioctl(file, gd.cd_info, inode, cmd, arg);
-+}
-+
-+static struct block_device_operations gdrom_bdops = {
-+ .owner = THIS_MODULE,
-+ .open = gdrom_bdops_open,
-+ .release = gdrom_bdops_release,
-+ .media_changed = gdrom_bdops_mediachanged,
-+ .ioctl = gdrom_bdops_ioctl,
-+};
-+
-+static irqreturn_t gdrom_command_interrupt(int irq, void *dev_id)
-+{
-+ gd.status = ctrl_inb(GDROM_STATUSCOMMAND_REG);
-+ if (gd.pending != 1)
-+ return IRQ_HANDLED;
-+ gd.pending = 0;
-+ wake_up_interruptible(&command_queue);
-+ return IRQ_HANDLED;
-+}
-+
-+static irqreturn_t gdrom_dma_interrupt(int irq, void *dev_id)
-+{
-+ gd.status = ctrl_inb(GDROM_STATUSCOMMAND_REG);
-+ if (gd.transfer != 1)
-+ return IRQ_HANDLED;
-+ gd.transfer = 0;
-+ wake_up_interruptible(&request_queue);
-+ return IRQ_HANDLED;
-+}
-+
-+static int __devinit gdrom_set_interrupt_handlers(void)
-+{
-+ int err;
-+
-+ err = request_irq(HW_EVENT_GDROM_CMD, gdrom_command_interrupt,
-+ IRQF_DISABLED, "gdrom_command", &gd);
-+ if (err)
-+ return err;
-+ err = request_irq(HW_EVENT_GDROM_DMA, gdrom_dma_interrupt,
-+ IRQF_DISABLED, "gdrom_dma", &gd);
-+ if (err)
-+ free_irq(HW_EVENT_GDROM_CMD, &gd);
-+ return err;
-+}
-+
-+/* Implement DMA read using SPI command
-+ * 0 -> 0x30
-+ * 1 -> mode
-+ * 2 -> block >> 16
-+ * 3 -> block >> 8
-+ * 4 -> block
-+ * 8 -> sectors >> 16
-+ * 9 -> sectors >> 8
-+ * 10 -> sectors
-+ */
-+static void gdrom_readdisk_dma(struct work_struct *work)
-+{
-+ int err, block, block_cnt;
-+ struct packet_command *read_command;
-+ struct list_head *elem, *next;
-+ struct request *req;
-+ unsigned long timeout;
-+
-+ if (list_empty(&gdrom_deferred))
-+ return;
-+ read_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
-+ if (!read_command)
-+ return; /* get more memory later? */
-+ read_command->cmd[0] = 0x30;
-+ read_command->cmd[1] = 0x20;
-+ spin_lock(&gdrom_lock);
-+ list_for_each_safe(elem, next, &gdrom_deferred) {
-+ req = list_entry(elem, struct request, queuelist);
-+ spin_unlock(&gdrom_lock);
-+ block = req->sector/GD_TO_BLK + GD_SESSION_OFFSET;
-+ block_cnt = req->nr_sectors/GD_TO_BLK;
-+ ctrl_outl(PHYSADDR(req->buffer), GDROM_DMA_STARTADDR_REG);
-+ ctrl_outl(block_cnt * GDROM_HARD_SECTOR, GDROM_DMA_LENGTH_REG);
-+ ctrl_outl(1, GDROM_DMA_DIRECTION_REG);
-+ ctrl_outl(1, GDROM_DMA_ENABLE_REG);
-+ read_command->cmd[2] = (block >> 16) & 0xFF;
-+ read_command->cmd[3] = (block >> 8) & 0xFF;
-+ read_command->cmd[4] = block & 0xFF;
-+ read_command->cmd[8] = (block_cnt >> 16) & 0xFF;
-+ read_command->cmd[9] = (block_cnt >> 8) & 0xFF;
-+ read_command->cmd[10] = block_cnt & 0xFF;
-+ /* set for DMA */
-+ ctrl_outb(1, GDROM_ERROR_REG);
-+ /* other registers */
-+ ctrl_outb(0, GDROM_SECNUM_REG);
-+ ctrl_outb(0, GDROM_BCL_REG);
-+ ctrl_outb(0, GDROM_BCH_REG);
-+ ctrl_outb(0, GDROM_DSEL_REG);
-+ ctrl_outb(0, GDROM_INTSEC_REG);
-+ /* Wait for registers to reset after any previous activity */
-+ timeout = jiffies + HZ / 2;
-+ while (gdrom_is_busy() && time_before(jiffies, timeout))
-+ cpu_relax();
-+ ctrl_outb(GDROM_COM_PACKET, GDROM_STATUSCOMMAND_REG);
-+ timeout = jiffies + HZ / 2;
-+ /* Wait for packet command to finish */
-+ while (gdrom_is_busy() && time_before(jiffies, timeout))
-+ cpu_relax();
-+ gd.pending = 1;
-+ gd.transfer = 1;
-+ outsw(PHYSADDR(GDROM_DATA_REG), &read_command->cmd, 6);
-+ timeout = jiffies + HZ / 2;
-+ /* Wait for any pending DMA to finish */
-+ while (ctrl_inb(GDROM_DMA_STATUS_REG) &&
-+ time_before(jiffies, timeout))
-+ cpu_relax();
-+ /* start transfer */
-+ ctrl_outb(1, GDROM_DMA_STATUS_REG);
-+ wait_event_interruptible_timeout(request_queue,
-+ gd.transfer == 0, GDROM_DEFAULT_TIMEOUT);
-+ err = gd.transfer;
-+ gd.transfer = 0;
-+ gd.pending = 0;
-+ /* now seek to take the request spinlock
-+ * before handling ending the request */
-+ spin_lock(&gdrom_lock);
-+ list_del_init(&req->queuelist);
-+ end_dequeued_request(req, 1 - err);
-+ }
-+ spin_unlock(&gdrom_lock);
-+ kfree(read_command);
-+}
-+
-+static void gdrom_request_handler_dma(struct request *req)
-+{
-+ /* dequeue, add to list of deferred work
-+ * and then schedule workqueue */
-+ blkdev_dequeue_request(req);
-+ list_add_tail(&req->queuelist, &gdrom_deferred);
-+ schedule_work(&work);
-+}
-+
-+static void gdrom_request(struct request_queue *rq)
-+{
-+ struct request *req;
-+
-+ while ((req = elv_next_request(rq)) != NULL) {
-+ if (!blk_fs_request(req)) {
-+ printk(KERN_DEBUG "GDROM: Non-fs request ignored\n");
-+ end_request(req, 0);
-+ }
-+ if (rq_data_dir(req) != READ) {
-+ printk(KERN_NOTICE "GDROM: Read only device -");
-+ printk(" write request ignored\n");
-+ end_request(req, 0);
-+ }
-+ if (req->nr_sectors)
-+ gdrom_request_handler_dma(req);
-+ else
-+ end_request(req, 0);
-+ }
-+}
-+
-+/* Print string identifying GD ROM device */
-+static int __devinit gdrom_outputversion(void)
-+{
-+ struct gdrom_id *id;
-+ char *model_name, *manuf_name, *firmw_ver;
-+ int err = -ENOMEM;
-+
-+ /* query device ID */
-+ id = kzalloc(sizeof(struct gdrom_id), GFP_KERNEL);
-+ if (!id)
-+ return err;
-+ gdrom_identifydevice(id);
-+ model_name = kstrndup(id->modname, 16, GFP_KERNEL);
-+ if (!model_name)
-+ goto free_id;
-+ manuf_name = kstrndup(id->mname, 16, GFP_KERNEL);
-+ if (!manuf_name)
-+ goto free_model_name;
-+ firmw_ver = kstrndup(id->firmver, 16, GFP_KERNEL);
-+ if (!firmw_ver)
-+ goto free_manuf_name;
-+ printk(KERN_INFO "GDROM: %s from %s with firmware %s\n",
-+ model_name, manuf_name, firmw_ver);
-+ err = 0;
-+ kfree(firmw_ver);
-+free_manuf_name:
-+ kfree(manuf_name);
-+free_model_name:
-+ kfree(model_name);
-+free_id:
-+ kfree(id);
-+ return err;
-+}
-+
-+/* set the default mode for DMA transfer */
-+static int __devinit gdrom_init_dma_mode(void)
-+{
-+ ctrl_outb(0x13, GDROM_ERROR_REG);
-+ ctrl_outb(0x22, GDROM_INTSEC_REG);
-+ if (!gdrom_wait_clrbusy())
-+ return -EBUSY;
-+ ctrl_outb(0xEF, GDROM_STATUSCOMMAND_REG);
-+ if (!gdrom_wait_busy_sleeps())
-+ return -EBUSY;
-+ /* Memory protection setting for GDROM DMA
-+ * Bits 31 - 16 security: 0x8843
-+ * Bits 15 and 7 reserved (0)
-+ * Bits 14 - 8 start of transfer range in 1 MB blocks OR'ed with 0x80
-+ * Bits 6 - 0 end of transfer range in 1 MB blocks OR'ed with 0x80
-+ * (0x40 | 0x80) = start range at 0x0C000000
-+ * (0x7F | 0x80) = end range at 0x0FFFFFFF */
-+ ctrl_outl(0x8843407F, GDROM_DMA_ACCESS_CTRL_REG);
-+ ctrl_outl(9, GDROM_DMA_WAIT_REG); /* DMA word setting */
-+ return 0;
-+}
-+
-+static void __devinit probe_gdrom_setupcd(void)
-+{
-+ gd.cd_info->ops = &gdrom_ops;
-+ gd.cd_info->capacity = 1;
-+ strcpy(gd.cd_info->name, GDROM_DEV_NAME);
-+ gd.cd_info->mask = CDC_CLOSE_TRAY|CDC_OPEN_TRAY|CDC_LOCK|
-+ CDC_SELECT_DISC;
-+}
-+
-+static void __devinit probe_gdrom_setupdisk(void)
-+{
-+ gd.disk->major = gdrom_major;
-+ gd.disk->first_minor = 1;
-+ gd.disk->minors = 1;
-+ strcpy(gd.disk->disk_name, GDROM_DEV_NAME);
-+}
-+
-+static int __devinit probe_gdrom_setupqueue(void)
-+{
-+ blk_queue_hardsect_size(gd.gdrom_rq, GDROM_HARD_SECTOR);
-+ /* using DMA so memory will need to be contiguous */
-+ blk_queue_max_hw_segments(gd.gdrom_rq, 1);
-+ /* set a large max size to get most from DMA */
-+ blk_queue_max_segment_size(gd.gdrom_rq, 0x40000);
-+ gd.disk->queue = gd.gdrom_rq;
-+ return gdrom_init_dma_mode();
-+}
-+
-+/*
-+ * register this as a block device and as compliant with the
-+ * universal CD Rom driver interface
-+ */
-+static int __devinit probe_gdrom(struct platform_device *devptr)
-+{
-+ int err;
-+ /* Start the device */
-+ if (gdrom_execute_diagnostic() != 1) {
-+ printk(KERN_WARNING "GDROM: ATA Probe for GDROM failed.\n");
-+ return -ENODEV;
-+ }
-+ /* Print out firmware ID */
-+ if (gdrom_outputversion())
-+ return -ENOMEM;
-+ /* Register GDROM */
-+ gdrom_major = register_blkdev(0, GDROM_DEV_NAME);
-+ if (gdrom_major <= 0)
-+ return gdrom_major;
-+ printk(KERN_INFO "GDROM: Registered with major number %d\n",
-+ gdrom_major);
-+ /* Specify basic properties of drive */
-+ gd.cd_info = kzalloc(sizeof(struct cdrom_device_info), GFP_KERNEL);
-+ if (!gd.cd_info) {
-+ err = -ENOMEM;
-+ goto probe_fail_no_mem;
-+ }
-+ probe_gdrom_setupcd();
-+ gd.disk = alloc_disk(1);
-+ if (!gd.disk) {
-+ err = -ENODEV;
-+ goto probe_fail_no_disk;
-+ }
-+ probe_gdrom_setupdisk();
-+ if (register_cdrom(gd.cd_info)) {
-+ err = -ENODEV;
-+ goto probe_fail_cdrom_register;
-+ }
-+ gd.disk->fops = &gdrom_bdops;
-+ /* latch on to the interrupt */
-+ err = gdrom_set_interrupt_handlers();
-+ if (err)
-+ goto probe_fail_cmdirq_register;
-+ gd.gdrom_rq = blk_init_queue(gdrom_request, &gdrom_lock);
-+ if (!gd.gdrom_rq)
-+ goto probe_fail_requestq;
-+
-+ err = probe_gdrom_setupqueue();
-+ if (err)
-+ goto probe_fail_toc;
-+
-+ gd.toc = kzalloc(sizeof(struct gdromtoc), GFP_KERNEL);
-+ if (!gd.toc)
-+ goto probe_fail_toc;
-+ add_disk(gd.disk);
-+ return 0;
-+
-+probe_fail_toc:
-+ blk_cleanup_queue(gd.gdrom_rq);
-+probe_fail_requestq:
-+ free_irq(HW_EVENT_GDROM_DMA, &gd);
-+ free_irq(HW_EVENT_GDROM_CMD, &gd);
-+probe_fail_cmdirq_register:
-+probe_fail_cdrom_register:
-+ del_gendisk(gd.disk);
-+probe_fail_no_disk:
-+ kfree(gd.cd_info);
-+ unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
-+ gdrom_major = 0;
-+probe_fail_no_mem:
-+ printk(KERN_WARNING "GDROM: Probe failed - error is 0x%X\n", err);
-+ return err;
-+}
-+
-+static int __devexit remove_gdrom(struct platform_device *devptr)
-+{
-+ flush_scheduled_work();
-+ blk_cleanup_queue(gd.gdrom_rq);
-+ free_irq(HW_EVENT_GDROM_CMD, &gd);
-+ free_irq(HW_EVENT_GDROM_DMA, &gd);
-+ del_gendisk(gd.disk);
-+ if (gdrom_major)
-+ unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
-+ return unregister_cdrom(gd.cd_info);
-+}
-+
-+static struct platform_driver gdrom_driver = {
-+ .probe = probe_gdrom,
-+ .remove = __devexit_p(remove_gdrom),
-+ .driver = {
-+ .name = GDROM_DEV_NAME,
-+ },
-+};
-+
-+static int __init init_gdrom(void)
-+{
-+ int rc;
-+ gd.toc = NULL;
-+ rc = platform_driver_register(&gdrom_driver);
-+ if (rc)
-+ return rc;
-+ pd = platform_device_register_simple(GDROM_DEV_NAME, -1, NULL, 0);
-+ if (IS_ERR(pd)) {
-+ platform_driver_unregister(&gdrom_driver);
-+ return PTR_ERR(pd);
-+ }
-+ return 0;
-+}
-+
-+static void __exit exit_gdrom(void)
-+{
-+ platform_device_unregister(pd);
-+ platform_driver_unregister(&gdrom_driver);
-+ kfree(gd.toc);
-+}
-+
-+module_init(init_gdrom);
-+module_exit(exit_gdrom);
-+MODULE_AUTHOR("Adrian McMenamin <adrian at mcmen.demon.co.uk>");
-+MODULE_DESCRIPTION("SEGA Dreamcast GD-ROM Driver");
-+MODULE_LICENSE("GPL");
-diff --git a/drivers/cdrom/viocd.c b/drivers/cdrom/viocd.c
-index d8bb44b..8473b9f 100644
---- a/drivers/cdrom/viocd.c
-+++ b/drivers/cdrom/viocd.c
-@@ -289,7 +289,7 @@ static int send_request(struct request *req)
- return 0;
- }
-
--static void viocd_end_request(struct request *req, int uptodate)
-+static void viocd_end_request(struct request *req, int error)
- {
- int nsectors = req->hard_nr_sectors;
-
-@@ -302,11 +302,8 @@ static void viocd_end_request(struct request *req, int uptodate)
- if (!nsectors)
- nsectors = 1;
-
-- if (end_that_request_first(req, uptodate, nsectors))
-+ if (__blk_end_request(req, error, nsectors << 9))
- BUG();
-- add_disk_randomness(req->rq_disk);
-- blkdev_dequeue_request(req);
-- end_that_request_last(req, uptodate);
- }
-
- static int rwreq;
-@@ -317,11 +314,11 @@ static void do_viocd_request(struct request_queue *q)
-
- while ((rwreq == 0) && ((req = elv_next_request(q)) != NULL)) {
- if (!blk_fs_request(req))
-- viocd_end_request(req, 0);
-+ viocd_end_request(req, -EIO);
- else if (send_request(req) < 0) {
- printk(VIOCD_KERN_WARNING
- "unable to send message to OS/400!");
-- viocd_end_request(req, 0);
-+ viocd_end_request(req, -EIO);
- } else
- rwreq++;
- }
-@@ -532,9 +529,9 @@ return_complete:
- "with rc %d:0x%04X: %s\n",
- req, event->xRc,
- bevent->sub_result, err->msg);
-- viocd_end_request(req, 0);
-+ viocd_end_request(req, -EIO);
- } else
-- viocd_end_request(req, 1);
-+ viocd_end_request(req, 0);
-
- /* restart handling of incoming requests */
- spin_unlock_irqrestore(&viocd_reqlock, flags);
-diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
-index 2e3a0d4..4666295 100644
---- a/drivers/char/Kconfig
-+++ b/drivers/char/Kconfig
-@@ -373,6 +373,16 @@ config ISTALLION
- To compile this driver as a module, choose M here: the
- module will be called istallion.
-
-+config NOZOMI
-+ tristate "HSDPA Broadband Wireless Data Card - Globe Trotter"
-+ depends on PCI && EXPERIMENTAL
-+ help
-+ If you have a HSDPA driver Broadband Wireless Data Card -
-+ Globe Trotter PCMCIA card, say Y here.
-+
-+ To compile this driver as a module, choose M here, the module
-+ will be called nozomi.
-+
- config A2232
- tristate "Commodore A2232 serial support (EXPERIMENTAL)"
- depends on EXPERIMENTAL && ZORRO && BROKEN_ON_SMP
-diff --git a/drivers/char/Makefile b/drivers/char/Makefile
-index 07304d5..96fc01e 100644
---- a/drivers/char/Makefile
-+++ b/drivers/char/Makefile
-@@ -26,6 +26,7 @@ obj-$(CONFIG_SERIAL167) += serial167.o
- obj-$(CONFIG_CYCLADES) += cyclades.o
- obj-$(CONFIG_STALLION) += stallion.o
- obj-$(CONFIG_ISTALLION) += istallion.o
-+obj-$(CONFIG_NOZOMI) += nozomi.o
- obj-$(CONFIG_DIGIEPCA) += epca.o
- obj-$(CONFIG_SPECIALIX) += specialix.o
- obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
-diff --git a/drivers/char/hvc_console.c b/drivers/char/hvc_console.c
-index 8252f86..480fae2 100644
---- a/drivers/char/hvc_console.c
-+++ b/drivers/char/hvc_console.c
-@@ -27,7 +27,7 @@
- #include <linux/init.h>
- #include <linux/kbd_kern.h>
- #include <linux/kernel.h>
--#include <linux/kobject.h>
-+#include <linux/kref.h>
- #include <linux/kthread.h>
- #include <linux/list.h>
- #include <linux/module.h>
-@@ -89,7 +89,7 @@ struct hvc_struct {
- int irq_requested;
- int irq;
- struct list_head next;
-- struct kobject kobj; /* ref count & hvc_struct lifetime */
-+ struct kref kref; /* ref count & hvc_struct lifetime */
- };
-
- /* dynamic list of hvc_struct instances */
-@@ -110,7 +110,7 @@ static int last_hvc = -1;
-
- /*
- * Do not call this function with either the hvc_structs_lock or the hvc_struct
-- * lock held. If successful, this function increments the kobject reference
-+ * lock held. If successful, this function increments the kref reference
- * count against the target hvc_struct so it should be released when finished.
- */
- static struct hvc_struct *hvc_get_by_index(int index)
-@@ -123,7 +123,7 @@ static struct hvc_struct *hvc_get_by_index(int index)
- list_for_each_entry(hp, &hvc_structs, next) {
- spin_lock_irqsave(&hp->lock, flags);
- if (hp->index == index) {
-- kobject_get(&hp->kobj);
-+ kref_get(&hp->kref);
- spin_unlock_irqrestore(&hp->lock, flags);
- spin_unlock(&hvc_structs_lock);
- return hp;
-@@ -242,6 +242,23 @@ static int __init hvc_console_init(void)
- }
- console_initcall(hvc_console_init);
-
-+/* callback when the kobject ref count reaches zero. */
-+static void destroy_hvc_struct(struct kref *kref)
-+{
-+ struct hvc_struct *hp = container_of(kref, struct hvc_struct, kref);
-+ unsigned long flags;
-+
-+ spin_lock(&hvc_structs_lock);
-+
-+ spin_lock_irqsave(&hp->lock, flags);
-+ list_del(&(hp->next));
-+ spin_unlock_irqrestore(&hp->lock, flags);
-+
-+ spin_unlock(&hvc_structs_lock);
-+
-+ kfree(hp);
-+}
-+
- /*
- * hvc_instantiate() is an early console discovery method which locates
- * consoles * prior to the vio subsystem discovering them. Hotplugged
-@@ -261,7 +278,7 @@ int hvc_instantiate(uint32_t vtermno, int index, struct hv_ops *ops)
- /* make sure no tty has been registered in this index */
- hp = hvc_get_by_index(index);
- if (hp) {
-- kobject_put(&hp->kobj);
-+ kref_put(&hp->kref, destroy_hvc_struct);
- return -1;
- }
-
-@@ -318,9 +335,8 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- unsigned long flags;
- int irq = 0;
- int rc = 0;
-- struct kobject *kobjp;
-
-- /* Auto increments kobject reference if found. */
-+ /* Auto increments kref reference if found. */
- if (!(hp = hvc_get_by_index(tty->index)))
- return -ENODEV;
-
-@@ -341,8 +357,6 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- if (irq)
- hp->irq_requested = 1;
-
-- kobjp = &hp->kobj;
+- return 1;
+-}
-
- spin_unlock_irqrestore(&hp->lock, flags);
- /* check error, fallback to non-irq */
- if (irq)
-@@ -352,7 +366,7 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- * If the request_irq() fails and we return an error. The tty layer
- * will call hvc_close() after a failed open but we don't want to clean
- * up there so we'll clean up here and clear out the previously set
-- * tty fields and return the kobject reference.
-+ * tty fields and return the kref reference.
- */
- if (rc) {
- spin_lock_irqsave(&hp->lock, flags);
-@@ -360,7 +374,7 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- hp->irq_requested = 0;
- spin_unlock_irqrestore(&hp->lock, flags);
- tty->driver_data = NULL;
-- kobject_put(kobjp);
-+ kref_put(&hp->kref, destroy_hvc_struct);
- printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
- }
- /* Force wakeup of the polling thread */
-@@ -372,7 +386,6 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- static void hvc_close(struct tty_struct *tty, struct file * filp)
- {
- struct hvc_struct *hp;
-- struct kobject *kobjp;
- int irq = 0;
- unsigned long flags;
-
-@@ -382,7 +395,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
- /*
- * No driver_data means that this close was issued after a failed
- * hvc_open by the tty layer's release_dev() function and we can just
-- * exit cleanly because the kobject reference wasn't made.
-+ * exit cleanly because the kref reference wasn't made.
- */
- if (!tty->driver_data)
- return;
-@@ -390,7 +403,6 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
- hp = tty->driver_data;
- spin_lock_irqsave(&hp->lock, flags);
-
-- kobjp = &hp->kobj;
- if (--hp->count == 0) {
- if (hp->irq_requested)
- irq = hp->irq;
-@@ -417,7 +429,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
- spin_unlock_irqrestore(&hp->lock, flags);
- }
-
-- kobject_put(kobjp);
-+ kref_put(&hp->kref, destroy_hvc_struct);
- }
-
- static void hvc_hangup(struct tty_struct *tty)
-@@ -426,7 +438,6 @@ static void hvc_hangup(struct tty_struct *tty)
- unsigned long flags;
- int irq = 0;
- int temp_open_count;
-- struct kobject *kobjp;
-
- if (!hp)
- return;
-@@ -443,7 +454,6 @@ static void hvc_hangup(struct tty_struct *tty)
- return;
- }
-
-- kobjp = &hp->kobj;
- temp_open_count = hp->count;
- hp->count = 0;
- hp->n_outbuf = 0;
-@@ -457,7 +467,7 @@ static void hvc_hangup(struct tty_struct *tty)
- free_irq(irq, hp);
- while(temp_open_count) {
- --temp_open_count;
-- kobject_put(kobjp);
-+ kref_put(&hp->kref, destroy_hvc_struct);
- }
- }
-
-@@ -729,27 +739,6 @@ static const struct tty_operations hvc_ops = {
- .chars_in_buffer = hvc_chars_in_buffer,
- };
-
--/* callback when the kobject ref count reaches zero. */
--static void destroy_hvc_struct(struct kobject *kobj)
+-static void req_bio_endio(struct request *rq, struct bio *bio,
+- unsigned int nbytes, int error)
-{
-- struct hvc_struct *hp = container_of(kobj, struct hvc_struct, kobj);
-- unsigned long flags;
+- struct request_queue *q = rq->q;
-
-- spin_lock(&hvc_structs_lock);
+- if (&q->bar_rq != rq) {
+- if (error)
+- clear_bit(BIO_UPTODATE, &bio->bi_flags);
+- else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+- error = -EIO;
-
-- spin_lock_irqsave(&hp->lock, flags);
-- list_del(&(hp->next));
-- spin_unlock_irqrestore(&hp->lock, flags);
+- if (unlikely(nbytes > bio->bi_size)) {
+- printk("%s: want %u bytes done, only %u left\n",
+- __FUNCTION__, nbytes, bio->bi_size);
+- nbytes = bio->bi_size;
+- }
-
-- spin_unlock(&hvc_structs_lock);
+- bio->bi_size -= nbytes;
+- bio->bi_sector += (nbytes >> 9);
+- if (bio->bi_size == 0)
+- bio_endio(bio, error);
+- } else {
-
-- kfree(hp);
+- /*
+- * Okay, this is the barrier request in progress, just
+- * record the error.
+- */
+- if (error && !q->orderr)
+- q->orderr = error;
+- }
-}
-
--static struct kobj_type hvc_kobj_type = {
-- .release = destroy_hvc_struct,
--};
+-/**
+- * blk_queue_bounce_limit - set bounce buffer limit for queue
+- * @q: the request queue for the device
+- * @dma_addr: bus address limit
+- *
+- * Description:
+- * Different hardware can have different requirements as to what pages
+- * it can do I/O directly to. A low level driver can call
+- * blk_queue_bounce_limit to have lower memory pages allocated as bounce
+- * buffers for doing I/O to pages residing above @page.
+- **/
+-void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
+-{
+- unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT;
+- int dma = 0;
-
- struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
- struct hv_ops *ops, int outbuf_size)
- {
-@@ -776,8 +765,7 @@ struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
- hp->outbuf_size = outbuf_size;
- hp->outbuf = &((char *)hp)[ALIGN(sizeof(*hp), sizeof(long))];
-
-- kobject_init(&hp->kobj);
-- hp->kobj.ktype = &hvc_kobj_type;
-+ kref_init(&hp->kref);
-
- spin_lock_init(&hp->lock);
- spin_lock(&hvc_structs_lock);
-@@ -806,12 +794,10 @@ struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
- int __devexit hvc_remove(struct hvc_struct *hp)
- {
- unsigned long flags;
-- struct kobject *kobjp;
- struct tty_struct *tty;
-
- spin_lock_irqsave(&hp->lock, flags);
- tty = hp->tty;
-- kobjp = &hp->kobj;
-
- if (hp->index < MAX_NR_HVC_CONSOLES)
- vtermnos[hp->index] = -1;
-@@ -821,12 +807,12 @@ int __devexit hvc_remove(struct hvc_struct *hp)
- spin_unlock_irqrestore(&hp->lock, flags);
-
- /*
-- * We 'put' the instance that was grabbed when the kobject instance
-- * was initialized using kobject_init(). Let the last holder of this
-- * kobject cause it to be removed, which will probably be the tty_hangup
-+ * We 'put' the instance that was grabbed when the kref instance
-+ * was initialized using kref_init(). Let the last holder of this
-+ * kref cause it to be removed, which will probably be the tty_hangup
- * below.
- */
-- kobject_put(kobjp);
-+ kref_put(&hp->kref, destroy_hvc_struct);
-
- /*
- * This function call will auto chain call hvc_hangup. The tty should
-diff --git a/drivers/char/hvcs.c b/drivers/char/hvcs.c
-index 69d8866..fd75590 100644
---- a/drivers/char/hvcs.c
-+++ b/drivers/char/hvcs.c
-@@ -57,11 +57,7 @@
- * rescanning partner information upon a user's request.
- *
- * Each vty-server, prior to being exposed to this driver is reference counted
-- * using the 2.6 Linux kernel kobject construct. This kobject is also used by
-- * the vio bus to provide a vio device sysfs entry that this driver attaches
-- * device specific attributes to, including partner information. The vio bus
-- * framework also provides a sysfs entry for each vio driver. The hvcs driver
-- * provides driver attributes in this entry.
-+ * using the 2.6 Linux kernel kref construct.
- *
- * For direction on installation and usage of this driver please reference
- * Documentation/powerpc/hvcs.txt.
-@@ -71,7 +67,7 @@
- #include <linux/init.h>
- #include <linux/interrupt.h>
- #include <linux/kernel.h>
--#include <linux/kobject.h>
-+#include <linux/kref.h>
- #include <linux/kthread.h>
- #include <linux/list.h>
- #include <linux/major.h>
-@@ -293,12 +289,12 @@ struct hvcs_struct {
- int chars_in_buffer;
-
- /*
-- * Any variable below the kobject is valid before a tty is connected and
-+ * Any variable below the kref is valid before a tty is connected and
- * stays valid after the tty is disconnected. These shouldn't be
- * whacked until the koject refcount reaches zero though some entries
- * may be changed via sysfs initiatives.
- */
-- struct kobject kobj; /* ref count & hvcs_struct lifetime */
-+ struct kref kref; /* ref count & hvcs_struct lifetime */
- int connected; /* is the vty-server currently connected to a vty? */
- uint32_t p_unit_address; /* partner unit address */
- uint32_t p_partition_ID; /* partner partition ID */
-@@ -307,8 +303,8 @@ struct hvcs_struct {
- struct vio_dev *vdev;
- };
-
--/* Required to back map a kobject to its containing object */
--#define from_kobj(kobj) container_of(kobj, struct hvcs_struct, kobj)
-+/* Required to back map a kref to its containing object */
-+#define from_kref(k) container_of(k, struct hvcs_struct, kref)
-
- static struct list_head hvcs_structs = LIST_HEAD_INIT(hvcs_structs);
- static DEFINE_SPINLOCK(hvcs_structs_lock);
-@@ -334,7 +330,6 @@ static void hvcs_partner_free(struct hvcs_struct *hvcsd);
- static int hvcs_enable_device(struct hvcs_struct *hvcsd,
- uint32_t unit_address, unsigned int irq, struct vio_dev *dev);
-
--static void destroy_hvcs_struct(struct kobject *kobj);
- static int hvcs_open(struct tty_struct *tty, struct file *filp);
- static void hvcs_close(struct tty_struct *tty, struct file *filp);
- static void hvcs_hangup(struct tty_struct * tty);
-@@ -703,10 +698,10 @@ static void hvcs_return_index(int index)
- hvcs_index_list[index] = -1;
- }
-
--/* callback when the kboject ref count reaches zero */
--static void destroy_hvcs_struct(struct kobject *kobj)
-+/* callback when the kref ref count reaches zero */
-+static void destroy_hvcs_struct(struct kref *kref)
- {
-- struct hvcs_struct *hvcsd = from_kobj(kobj);
-+ struct hvcs_struct *hvcsd = from_kref(kref);
- struct vio_dev *vdev;
- unsigned long flags;
-
-@@ -743,10 +738,6 @@ static void destroy_hvcs_struct(struct kobject *kobj)
- kfree(hvcsd);
- }
-
--static struct kobj_type hvcs_kobj_type = {
-- .release = destroy_hvcs_struct,
--};
+- q->bounce_gfp = GFP_NOIO;
+-#if BITS_PER_LONG == 64
+- /* Assume anything <= 4GB can be handled by IOMMU.
+- Actually some IOMMUs can handle everything, but I don't
+- know of a way to test this here. */
+- if (bounce_pfn < (min_t(u64,0xffffffff,BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
+- dma = 1;
+- q->bounce_pfn = max_low_pfn;
+-#else
+- if (bounce_pfn < blk_max_low_pfn)
+- dma = 1;
+- q->bounce_pfn = bounce_pfn;
+-#endif
+- if (dma) {
+- init_emergency_isa_pool();
+- q->bounce_gfp = GFP_NOIO | GFP_DMA;
+- q->bounce_pfn = bounce_pfn;
+- }
+-}
-
- static int hvcs_get_index(void)
- {
- int i;
-@@ -791,9 +782,7 @@ static int __devinit hvcs_probe(
-
- spin_lock_init(&hvcsd->lock);
- /* Automatically incs the refcount the first time */
-- kobject_init(&hvcsd->kobj);
-- /* Set up the callback for terminating the hvcs_struct's life */
-- hvcsd->kobj.ktype = &hvcs_kobj_type;
-+ kref_init(&hvcsd->kref);
-
- hvcsd->vdev = dev;
- dev->dev.driver_data = hvcsd;
-@@ -844,7 +833,6 @@ static int __devexit hvcs_remove(struct vio_dev *dev)
- {
- struct hvcs_struct *hvcsd = dev->dev.driver_data;
- unsigned long flags;
-- struct kobject *kobjp;
- struct tty_struct *tty;
-
- if (!hvcsd)
-@@ -856,15 +844,13 @@ static int __devexit hvcs_remove(struct vio_dev *dev)
-
- tty = hvcsd->tty;
-
-- kobjp = &hvcsd->kobj;
+-EXPORT_SYMBOL(blk_queue_bounce_limit);
-
- spin_unlock_irqrestore(&hvcsd->lock, flags);
-
- /*
- * Let the last holder of this object cause it to be removed, which
- * would probably be tty_hangup below.
- */
-- kobject_put (kobjp);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
-
- /*
- * The hangup is a scheduled function which will auto chain call
-@@ -1086,7 +1072,7 @@ static int hvcs_enable_device(struct hvcs_struct *hvcsd, uint32_t unit_address,
- }
-
- /*
-- * This always increments the kobject ref count if the call is successful.
-+ * This always increments the kref ref count if the call is successful.
- * Please remember to dec when you are done with the instance.
- *
- * NOTICE: Do NOT hold either the hvcs_struct.lock or hvcs_structs_lock when
-@@ -1103,7 +1089,7 @@ static struct hvcs_struct *hvcs_get_by_index(int index)
- list_for_each_entry(hvcsd, &hvcs_structs, next) {
- spin_lock_irqsave(&hvcsd->lock, flags);
- if (hvcsd->index == index) {
-- kobject_get(&hvcsd->kobj);
-+ kref_get(&hvcsd->kref);
- spin_unlock_irqrestore(&hvcsd->lock, flags);
- spin_unlock(&hvcs_structs_lock);
- return hvcsd;
-@@ -1129,14 +1115,13 @@ static int hvcs_open(struct tty_struct *tty, struct file *filp)
- unsigned int irq;
- struct vio_dev *vdev;
- unsigned long unit_address;
-- struct kobject *kobjp;
-
- if (tty->driver_data)
- goto fast_open;
-
- /*
- * Is there a vty-server that shares the same index?
-- * This function increments the kobject index.
-+ * This function increments the kref index.
- */
- if (!(hvcsd = hvcs_get_by_index(tty->index))) {
- printk(KERN_WARNING "HVCS: open failed, no device associated"
-@@ -1181,7 +1166,7 @@ static int hvcs_open(struct tty_struct *tty, struct file *filp)
- * and will grab the spinlock and free the connection if it fails.
- */
- if (((rc = hvcs_enable_device(hvcsd, unit_address, irq, vdev)))) {
-- kobject_put(&hvcsd->kobj);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
- printk(KERN_WARNING "HVCS: enable device failed.\n");
- return rc;
- }
-@@ -1192,17 +1177,11 @@ fast_open:
- hvcsd = tty->driver_data;
-
- spin_lock_irqsave(&hvcsd->lock, flags);
-- if (!kobject_get(&hvcsd->kobj)) {
-- spin_unlock_irqrestore(&hvcsd->lock, flags);
-- printk(KERN_ERR "HVCS: Kobject of open"
-- " hvcs doesn't exist.\n");
-- return -EFAULT; /* Is this the right return value? */
+-/**
+- * blk_queue_max_sectors - set max sectors for a request for this queue
+- * @q: the request queue for the device
+- * @max_sectors: max sectors in the usual 512b unit
+- *
+- * Description:
+- * Enables a low level driver to set an upper limit on the size of
+- * received requests.
+- **/
+-void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
+-{
+- if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
+- max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
+- printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
- }
-
-+ kref_get(&hvcsd->kref);
- hvcsd->open_count++;
+- if (BLK_DEF_MAX_SECTORS > max_sectors)
+- q->max_hw_sectors = q->max_sectors = max_sectors;
+- else {
+- q->max_sectors = BLK_DEF_MAX_SECTORS;
+- q->max_hw_sectors = max_sectors;
+- }
+-}
-
- hvcsd->todo_mask |= HVCS_SCHED_READ;
- spin_unlock_irqrestore(&hvcsd->lock, flags);
-+
- open_success:
- hvcs_kick();
-
-@@ -1212,9 +1191,8 @@ open_success:
- return 0;
-
- error_release:
-- kobjp = &hvcsd->kobj;
- spin_unlock_irqrestore(&hvcsd->lock, flags);
-- kobject_put(&hvcsd->kobj);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
-
- printk(KERN_WARNING "HVCS: partner connect failed.\n");
- return retval;
-@@ -1224,7 +1202,6 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
- {
- struct hvcs_struct *hvcsd;
- unsigned long flags;
-- struct kobject *kobjp;
- int irq = NO_IRQ;
-
- /*
-@@ -1245,7 +1222,6 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
- hvcsd = tty->driver_data;
-
- spin_lock_irqsave(&hvcsd->lock, flags);
-- kobjp = &hvcsd->kobj;
- if (--hvcsd->open_count == 0) {
-
- vio_disable_interrupts(hvcsd->vdev);
-@@ -1270,7 +1246,7 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
- tty->driver_data = NULL;
-
- free_irq(irq, hvcsd);
-- kobject_put(kobjp);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
- return;
- } else if (hvcsd->open_count < 0) {
- printk(KERN_ERR "HVCS: vty-server@%X open_count: %d"
-@@ -1279,7 +1255,7 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
- }
-
- spin_unlock_irqrestore(&hvcsd->lock, flags);
-- kobject_put(kobjp);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
- }
-
- static void hvcs_hangup(struct tty_struct * tty)
-@@ -1287,21 +1263,17 @@ static void hvcs_hangup(struct tty_struct * tty)
- struct hvcs_struct *hvcsd = tty->driver_data;
- unsigned long flags;
- int temp_open_count;
-- struct kobject *kobjp;
- int irq = NO_IRQ;
-
- spin_lock_irqsave(&hvcsd->lock, flags);
-- /* Preserve this so that we know how many kobject refs to put */
-+ /* Preserve this so that we know how many kref refs to put */
- temp_open_count = hvcsd->open_count;
-
- /*
-- * Don't kobject put inside the spinlock because the destruction
-+ * Don't kref put inside the spinlock because the destruction
- * callback may use the spinlock and it may get called before the
-- * spinlock has been released. Get a pointer to the kobject and
-- * kobject_put on that after releasing the spinlock.
-+ * spinlock has been released.
- */
-- kobjp = &hvcsd->kobj;
+-EXPORT_SYMBOL(blk_queue_max_sectors);
-
- vio_disable_interrupts(hvcsd->vdev);
-
- hvcsd->todo_mask = 0;
-@@ -1324,7 +1296,7 @@ static void hvcs_hangup(struct tty_struct * tty)
- free_irq(irq, hvcsd);
-
- /*
-- * We need to kobject_put() for every open_count we have since the
-+ * We need to kref_put() for every open_count we have since the
- * tty_hangup() function doesn't invoke a close per open connection on a
- * non-console device.
- */
-@@ -1335,7 +1307,7 @@ static void hvcs_hangup(struct tty_struct * tty)
- * NOTE: If this hangup was signaled from user space then the
- * final put will never happen.
- */
-- kobject_put(kobjp);
-+ kref_put(&hvcsd->kref, destroy_hvcs_struct);
- }
- }
-
-diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
-index 556fd81..c422e87 100644
---- a/drivers/char/hw_random/amd-rng.c
-+++ b/drivers/char/hw_random/amd-rng.c
-@@ -28,6 +28,7 @@
- #include <linux/kernel.h>
- #include <linux/pci.h>
- #include <linux/hw_random.h>
-+#include <linux/delay.h>
- #include <asm/io.h>
-
-
-@@ -52,11 +53,18 @@ MODULE_DEVICE_TABLE(pci, pci_tbl);
- static struct pci_dev *amd_pdev;
-
-
--static int amd_rng_data_present(struct hwrng *rng)
-+static int amd_rng_data_present(struct hwrng *rng, int wait)
- {
- u32 pmbase = (u32)rng->priv;
-+ int data, i;
-
-- return !!(inl(pmbase + 0xF4) & 1);
-+ for (i = 0; i < 20; i++) {
-+ data = !!(inl(pmbase + 0xF4) & 1);
-+ if (data || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return data;
- }
-
- static int amd_rng_data_read(struct hwrng *rng, u32 *data)
-diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
-index 26a860a..0118b98 100644
---- a/drivers/char/hw_random/core.c
-+++ b/drivers/char/hw_random/core.c
-@@ -66,11 +66,11 @@ static inline void hwrng_cleanup(struct hwrng *rng)
- rng->cleanup(rng);
- }
-
--static inline int hwrng_data_present(struct hwrng *rng)
-+static inline int hwrng_data_present(struct hwrng *rng, int wait)
- {
- if (!rng->data_present)
- return 1;
-- return rng->data_present(rng);
-+ return rng->data_present(rng, wait);
- }
-
- static inline int hwrng_data_read(struct hwrng *rng, u32 *data)
-@@ -94,8 +94,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
- {
- u32 data;
- ssize_t ret = 0;
-- int i, err = 0;
-- int data_present;
-+ int err = 0;
- int bytes_read;
-
- while (size) {
-@@ -107,21 +106,10 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
- err = -ENODEV;
- goto out;
- }
-- if (filp->f_flags & O_NONBLOCK) {
-- data_present = hwrng_data_present(current_rng);
+-/**
+- * blk_queue_max_phys_segments - set max phys segments for a request for this queue
+- * @q: the request queue for the device
+- * @max_segments: max number of segments
+- *
+- * Description:
+- * Enables a low level driver to set an upper limit on the number of
+- * physical data segments in a request. This would be the largest sized
+- * scatter list the driver could handle.
+- **/
+-void blk_queue_max_phys_segments(struct request_queue *q,
+- unsigned short max_segments)
+-{
+- if (!max_segments) {
+- max_segments = 1;
+- printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
+- }
+-
+- q->max_phys_segments = max_segments;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_max_phys_segments);
+-
+-/**
+- * blk_queue_max_hw_segments - set max hw segments for a request for this queue
+- * @q: the request queue for the device
+- * @max_segments: max number of segments
+- *
+- * Description:
+- * Enables a low level driver to set an upper limit on the number of
+- * hw data segments in a request. This would be the largest number of
+- * address/length pairs the host adapter can actually give as once
+- * to the device.
+- **/
+-void blk_queue_max_hw_segments(struct request_queue *q,
+- unsigned short max_segments)
+-{
+- if (!max_segments) {
+- max_segments = 1;
+- printk("%s: set to minimum %d\n", __FUNCTION__, max_segments);
+- }
+-
+- q->max_hw_segments = max_segments;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_max_hw_segments);
+-
+-/**
+- * blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
+- * @q: the request queue for the device
+- * @max_size: max size of segment in bytes
+- *
+- * Description:
+- * Enables a low level driver to set an upper limit on the size of a
+- * coalesced segment
+- **/
+-void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
+-{
+- if (max_size < PAGE_CACHE_SIZE) {
+- max_size = PAGE_CACHE_SIZE;
+- printk("%s: set to minimum %d\n", __FUNCTION__, max_size);
+- }
+-
+- q->max_segment_size = max_size;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_max_segment_size);
+-
+-/**
+- * blk_queue_hardsect_size - set hardware sector size for the queue
+- * @q: the request queue for the device
+- * @size: the hardware sector size, in bytes
+- *
+- * Description:
+- * This should typically be set to the lowest possible sector size
+- * that the hardware can operate on (possible without reverting to
+- * even internal read-modify-write operations). Usually the default
+- * of 512 covers most hardware.
+- **/
+-void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
+-{
+- q->hardsect_size = size;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_hardsect_size);
+-
+-/*
+- * Returns the minimum that is _not_ zero, unless both are zero.
+- */
+-#define min_not_zero(l, r) (l == 0) ? r : ((r == 0) ? l : min(l, r))
+-
+-/**
+- * blk_queue_stack_limits - inherit underlying queue limits for stacked drivers
+- * @t: the stacking driver (top)
+- * @b: the underlying device (bottom)
+- **/
+-void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
+-{
+- /* zero is "infinity" */
+- t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
+- t->max_hw_sectors = min_not_zero(t->max_hw_sectors,b->max_hw_sectors);
+-
+- t->max_phys_segments = min(t->max_phys_segments,b->max_phys_segments);
+- t->max_hw_segments = min(t->max_hw_segments,b->max_hw_segments);
+- t->max_segment_size = min(t->max_segment_size,b->max_segment_size);
+- t->hardsect_size = max(t->hardsect_size,b->hardsect_size);
+- if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags))
+- clear_bit(QUEUE_FLAG_CLUSTER, &t->queue_flags);
+-}
+-
+-EXPORT_SYMBOL(blk_queue_stack_limits);
+-
+-/**
+- * blk_queue_segment_boundary - set boundary rules for segment merging
+- * @q: the request queue for the device
+- * @mask: the memory boundary mask
+- **/
+-void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
+-{
+- if (mask < PAGE_CACHE_SIZE - 1) {
+- mask = PAGE_CACHE_SIZE - 1;
+- printk("%s: set to minimum %lx\n", __FUNCTION__, mask);
+- }
+-
+- q->seg_boundary_mask = mask;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_segment_boundary);
+-
+-/**
+- * blk_queue_dma_alignment - set dma length and memory alignment
+- * @q: the request queue for the device
+- * @mask: alignment mask
+- *
+- * description:
+- * set required memory and length aligment for direct dma transactions.
+- * this is used when buiding direct io requests for the queue.
+- *
+- **/
+-void blk_queue_dma_alignment(struct request_queue *q, int mask)
+-{
+- q->dma_alignment = mask;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_dma_alignment);
+-
+-/**
+- * blk_queue_find_tag - find a request by its tag and queue
+- * @q: The request queue for the device
+- * @tag: The tag of the request
+- *
+- * Notes:
+- * Should be used when a device returns a tag and you want to match
+- * it with a request.
+- *
+- * no locks need be held.
+- **/
+-struct request *blk_queue_find_tag(struct request_queue *q, int tag)
+-{
+- return blk_map_queue_find_tag(q->queue_tags, tag);
+-}
+-
+-EXPORT_SYMBOL(blk_queue_find_tag);
+-
+-/**
+- * __blk_free_tags - release a given set of tag maintenance info
+- * @bqt: the tag map to free
+- *
+- * Tries to free the specified @bqt at . Returns true if it was
+- * actually freed and false if there are still references using it
+- */
+-static int __blk_free_tags(struct blk_queue_tag *bqt)
+-{
+- int retval;
+-
+- retval = atomic_dec_and_test(&bqt->refcnt);
+- if (retval) {
+- BUG_ON(bqt->busy);
+-
+- kfree(bqt->tag_index);
+- bqt->tag_index = NULL;
+-
+- kfree(bqt->tag_map);
+- bqt->tag_map = NULL;
+-
+- kfree(bqt);
+-
+- }
+-
+- return retval;
+-}
+-
+-/**
+- * __blk_queue_free_tags - release tag maintenance info
+- * @q: the request queue for the device
+- *
+- * Notes:
+- * blk_cleanup_queue() will take care of calling this function, if tagging
+- * has been used. So there's no need to call this directly.
+- **/
+-static void __blk_queue_free_tags(struct request_queue *q)
+-{
+- struct blk_queue_tag *bqt = q->queue_tags;
+-
+- if (!bqt)
+- return;
+-
+- __blk_free_tags(bqt);
+-
+- q->queue_tags = NULL;
+- q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
+-}
+-
+-
+-/**
+- * blk_free_tags - release a given set of tag maintenance info
+- * @bqt: the tag map to free
+- *
+- * For externally managed @bqt@ frees the map. Callers of this
+- * function must guarantee to have released all the queues that
+- * might have been using this tag map.
+- */
+-void blk_free_tags(struct blk_queue_tag *bqt)
+-{
+- if (unlikely(!__blk_free_tags(bqt)))
+- BUG();
+-}
+-EXPORT_SYMBOL(blk_free_tags);
+-
+-/**
+- * blk_queue_free_tags - release tag maintenance info
+- * @q: the request queue for the device
+- *
+- * Notes:
+- * This is used to disabled tagged queuing to a device, yet leave
+- * queue in function.
+- **/
+-void blk_queue_free_tags(struct request_queue *q)
+-{
+- clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+-}
+-
+-EXPORT_SYMBOL(blk_queue_free_tags);
+-
+-static int
+-init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth)
+-{
+- struct request **tag_index;
+- unsigned long *tag_map;
+- int nr_ulongs;
+-
+- if (q && depth > q->nr_requests * 2) {
+- depth = q->nr_requests * 2;
+- printk(KERN_ERR "%s: adjusted depth to %d\n",
+- __FUNCTION__, depth);
+- }
+-
+- tag_index = kzalloc(depth * sizeof(struct request *), GFP_ATOMIC);
+- if (!tag_index)
+- goto fail;
+-
+- nr_ulongs = ALIGN(depth, BITS_PER_LONG) / BITS_PER_LONG;
+- tag_map = kzalloc(nr_ulongs * sizeof(unsigned long), GFP_ATOMIC);
+- if (!tag_map)
+- goto fail;
+-
+- tags->real_max_depth = depth;
+- tags->max_depth = depth;
+- tags->tag_index = tag_index;
+- tags->tag_map = tag_map;
+-
+- return 0;
+-fail:
+- kfree(tag_index);
+- return -ENOMEM;
+-}
+-
+-static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q,
+- int depth)
+-{
+- struct blk_queue_tag *tags;
+-
+- tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
+- if (!tags)
+- goto fail;
+-
+- if (init_tag_map(q, tags, depth))
+- goto fail;
+-
+- tags->busy = 0;
+- atomic_set(&tags->refcnt, 1);
+- return tags;
+-fail:
+- kfree(tags);
+- return NULL;
+-}
+-
+-/**
+- * blk_init_tags - initialize the tag info for an external tag map
+- * @depth: the maximum queue depth supported
+- * @tags: the tag to use
+- **/
+-struct blk_queue_tag *blk_init_tags(int depth)
+-{
+- return __blk_queue_init_tags(NULL, depth);
+-}
+-EXPORT_SYMBOL(blk_init_tags);
+-
+-/**
+- * blk_queue_init_tags - initialize the queue tag info
+- * @q: the request queue for the device
+- * @depth: the maximum queue depth supported
+- * @tags: the tag to use
+- **/
+-int blk_queue_init_tags(struct request_queue *q, int depth,
+- struct blk_queue_tag *tags)
+-{
+- int rc;
+-
+- BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
+-
+- if (!tags && !q->queue_tags) {
+- tags = __blk_queue_init_tags(q, depth);
+-
+- if (!tags)
+- goto fail;
+- } else if (q->queue_tags) {
+- if ((rc = blk_queue_resize_tags(q, depth)))
+- return rc;
+- set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+- return 0;
+- } else
+- atomic_inc(&tags->refcnt);
+-
+- /*
+- * assign it, all done
+- */
+- q->queue_tags = tags;
+- q->queue_flags |= (1 << QUEUE_FLAG_QUEUED);
+- INIT_LIST_HEAD(&q->tag_busy_list);
+- return 0;
+-fail:
+- kfree(tags);
+- return -ENOMEM;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_init_tags);
+-
+-/**
+- * blk_queue_resize_tags - change the queueing depth
+- * @q: the request queue for the device
+- * @new_depth: the new max command queueing depth
+- *
+- * Notes:
+- * Must be called with the queue lock held.
+- **/
+-int blk_queue_resize_tags(struct request_queue *q, int new_depth)
+-{
+- struct blk_queue_tag *bqt = q->queue_tags;
+- struct request **tag_index;
+- unsigned long *tag_map;
+- int max_depth, nr_ulongs;
+-
+- if (!bqt)
+- return -ENXIO;
+-
+- /*
+- * if we already have large enough real_max_depth. just
+- * adjust max_depth. *NOTE* as requests with tag value
+- * between new_depth and real_max_depth can be in-flight, tag
+- * map can not be shrunk blindly here.
+- */
+- if (new_depth <= bqt->real_max_depth) {
+- bqt->max_depth = new_depth;
+- return 0;
+- }
+-
+- /*
+- * Currently cannot replace a shared tag map with a new
+- * one, so error out if this is the case
+- */
+- if (atomic_read(&bqt->refcnt) != 1)
+- return -EBUSY;
+-
+- /*
+- * save the old state info, so we can copy it back
+- */
+- tag_index = bqt->tag_index;
+- tag_map = bqt->tag_map;
+- max_depth = bqt->real_max_depth;
+-
+- if (init_tag_map(q, bqt, new_depth))
+- return -ENOMEM;
+-
+- memcpy(bqt->tag_index, tag_index, max_depth * sizeof(struct request *));
+- nr_ulongs = ALIGN(max_depth, BITS_PER_LONG) / BITS_PER_LONG;
+- memcpy(bqt->tag_map, tag_map, nr_ulongs * sizeof(unsigned long));
+-
+- kfree(tag_index);
+- kfree(tag_map);
+- return 0;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_resize_tags);
+-
+-/**
+- * blk_queue_end_tag - end tag operations for a request
+- * @q: the request queue for the device
+- * @rq: the request that has completed
+- *
+- * Description:
+- * Typically called when end_that_request_first() returns 0, meaning
+- * all transfers have been done for a request. It's important to call
+- * this function before end_that_request_last(), as that will put the
+- * request back on the free list thus corrupting the internal tag list.
+- *
+- * Notes:
+- * queue lock must be held.
+- **/
+-void blk_queue_end_tag(struct request_queue *q, struct request *rq)
+-{
+- struct blk_queue_tag *bqt = q->queue_tags;
+- int tag = rq->tag;
+-
+- BUG_ON(tag == -1);
+-
+- if (unlikely(tag >= bqt->real_max_depth))
+- /*
+- * This can happen after tag depth has been reduced.
+- * FIXME: how about a warning or info message here?
+- */
+- return;
+-
+- list_del_init(&rq->queuelist);
+- rq->cmd_flags &= ~REQ_QUEUED;
+- rq->tag = -1;
+-
+- if (unlikely(bqt->tag_index[tag] == NULL))
+- printk(KERN_ERR "%s: tag %d is missing\n",
+- __FUNCTION__, tag);
+-
+- bqt->tag_index[tag] = NULL;
+-
+- if (unlikely(!test_bit(tag, bqt->tag_map))) {
+- printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
+- __FUNCTION__, tag);
+- return;
+- }
+- /*
+- * The tag_map bit acts as a lock for tag_index[bit], so we need
+- * unlock memory barrier semantics.
+- */
+- clear_bit_unlock(tag, bqt->tag_map);
+- bqt->busy--;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_end_tag);
+-
+-/**
+- * blk_queue_start_tag - find a free tag and assign it
+- * @q: the request queue for the device
+- * @rq: the block request that needs tagging
+- *
+- * Description:
+- * This can either be used as a stand-alone helper, or possibly be
+- * assigned as the queue &prep_rq_fn (in which case &struct request
+- * automagically gets a tag assigned). Note that this function
+- * assumes that any type of request can be queued! if this is not
+- * true for your device, you must check the request type before
+- * calling this function. The request will also be removed from
+- * the request queue, so it's the drivers responsibility to readd
+- * it if it should need to be restarted for some reason.
+- *
+- * Notes:
+- * queue lock must be held.
+- **/
+-int blk_queue_start_tag(struct request_queue *q, struct request *rq)
+-{
+- struct blk_queue_tag *bqt = q->queue_tags;
+- int tag;
+-
+- if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
+- printk(KERN_ERR
+- "%s: request %p for device [%s] already tagged %d",
+- __FUNCTION__, rq,
+- rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->tag);
+- BUG();
+- }
+-
+- /*
+- * Protect against shared tag maps, as we may not have exclusive
+- * access to the tag map.
+- */
+- do {
+- tag = find_first_zero_bit(bqt->tag_map, bqt->max_depth);
+- if (tag >= bqt->max_depth)
+- return 1;
+-
+- } while (test_and_set_bit_lock(tag, bqt->tag_map));
+- /*
+- * We need lock ordering semantics given by test_and_set_bit_lock.
+- * See blk_queue_end_tag for details.
+- */
+-
+- rq->cmd_flags |= REQ_QUEUED;
+- rq->tag = tag;
+- bqt->tag_index[tag] = rq;
+- blkdev_dequeue_request(rq);
+- list_add(&rq->queuelist, &q->tag_busy_list);
+- bqt->busy++;
+- return 0;
+-}
+-
+-EXPORT_SYMBOL(blk_queue_start_tag);
+-
+-/**
+- * blk_queue_invalidate_tags - invalidate all pending tags
+- * @q: the request queue for the device
+- *
+- * Description:
+- * Hardware conditions may dictate a need to stop all pending requests.
+- * In this case, we will safely clear the block side of the tag queue and
+- * readd all requests to the request queue in the right order.
+- *
+- * Notes:
+- * queue lock must be held.
+- **/
+-void blk_queue_invalidate_tags(struct request_queue *q)
+-{
+- struct list_head *tmp, *n;
+-
+- list_for_each_safe(tmp, n, &q->tag_busy_list)
+- blk_requeue_request(q, list_entry_rq(tmp));
+-}
+-
+-EXPORT_SYMBOL(blk_queue_invalidate_tags);
+-
+-void blk_dump_rq_flags(struct request *rq, char *msg)
+-{
+- int bit;
+-
+- printk("%s: dev %s: type=%x, flags=%x\n", msg,
+- rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->cmd_type,
+- rq->cmd_flags);
+-
+- printk("\nsector %llu, nr/cnr %lu/%u\n", (unsigned long long)rq->sector,
+- rq->nr_sectors,
+- rq->current_nr_sectors);
+- printk("bio %p, biotail %p, buffer %p, data %p, len %u\n", rq->bio, rq->biotail, rq->buffer, rq->data, rq->data_len);
+-
+- if (blk_pc_request(rq)) {
+- printk("cdb: ");
+- for (bit = 0; bit < sizeof(rq->cmd); bit++)
+- printk("%02x ", rq->cmd[bit]);
+- printk("\n");
+- }
+-}
+-
+-EXPORT_SYMBOL(blk_dump_rq_flags);
+-
+-void blk_recount_segments(struct request_queue *q, struct bio *bio)
+-{
+- struct request rq;
+- struct bio *nxt = bio->bi_next;
+- rq.q = q;
+- rq.bio = rq.biotail = bio;
+- bio->bi_next = NULL;
+- blk_recalc_rq_segments(&rq);
+- bio->bi_next = nxt;
+- bio->bi_phys_segments = rq.nr_phys_segments;
+- bio->bi_hw_segments = rq.nr_hw_segments;
+- bio->bi_flags |= (1 << BIO_SEG_VALID);
+-}
+-EXPORT_SYMBOL(blk_recount_segments);
+-
+-static void blk_recalc_rq_segments(struct request *rq)
+-{
+- int nr_phys_segs;
+- int nr_hw_segs;
+- unsigned int phys_size;
+- unsigned int hw_size;
+- struct bio_vec *bv, *bvprv = NULL;
+- int seg_size;
+- int hw_seg_size;
+- int cluster;
+- struct req_iterator iter;
+- int high, highprv = 1;
+- struct request_queue *q = rq->q;
+-
+- if (!rq->bio)
+- return;
+-
+- cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
+- hw_seg_size = seg_size = 0;
+- phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
+- rq_for_each_segment(bv, rq, iter) {
+- /*
+- * the trick here is making sure that a high page is never
+- * considered part of another segment, since that might
+- * change with the bounce page.
+- */
+- high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
+- if (high || highprv)
+- goto new_hw_segment;
+- if (cluster) {
+- if (seg_size + bv->bv_len > q->max_segment_size)
+- goto new_segment;
+- if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
+- goto new_segment;
+- if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+- goto new_segment;
+- if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
+- goto new_hw_segment;
+-
+- seg_size += bv->bv_len;
+- hw_seg_size += bv->bv_len;
+- bvprv = bv;
+- continue;
+- }
+-new_segment:
+- if (BIOVEC_VIRT_MERGEABLE(bvprv, bv) &&
+- !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
+- hw_seg_size += bv->bv_len;
+- else {
+-new_hw_segment:
+- if (nr_hw_segs == 1 &&
+- hw_seg_size > rq->bio->bi_hw_front_size)
+- rq->bio->bi_hw_front_size = hw_seg_size;
+- hw_seg_size = BIOVEC_VIRT_START_SIZE(bv) + bv->bv_len;
+- nr_hw_segs++;
+- }
+-
+- nr_phys_segs++;
+- bvprv = bv;
+- seg_size = bv->bv_len;
+- highprv = high;
+- }
+-
+- if (nr_hw_segs == 1 &&
+- hw_seg_size > rq->bio->bi_hw_front_size)
+- rq->bio->bi_hw_front_size = hw_seg_size;
+- if (hw_seg_size > rq->biotail->bi_hw_back_size)
+- rq->biotail->bi_hw_back_size = hw_seg_size;
+- rq->nr_phys_segments = nr_phys_segs;
+- rq->nr_hw_segments = nr_hw_segs;
+-}
+-
+-static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
+- struct bio *nxt)
+-{
+- if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
+- return 0;
+-
+- if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)))
+- return 0;
+- if (bio->bi_size + nxt->bi_size > q->max_segment_size)
+- return 0;
+-
+- /*
+- * bio and nxt are contiguous in memory, check if the queue allows
+- * these two to be merged into one
+- */
+- if (BIO_SEG_BOUNDARY(q, bio, nxt))
+- return 1;
+-
+- return 0;
+-}
+-
+-static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
+- struct bio *nxt)
+-{
+- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+- blk_recount_segments(q, bio);
+- if (unlikely(!bio_flagged(nxt, BIO_SEG_VALID)))
+- blk_recount_segments(q, nxt);
+- if (!BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)) ||
+- BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size + nxt->bi_hw_front_size))
+- return 0;
+- if (bio->bi_hw_back_size + nxt->bi_hw_front_size > q->max_segment_size)
+- return 0;
+-
+- return 1;
+-}
+-
+-/*
+- * map a request to scatterlist, return number of sg entries setup. Caller
+- * must make sure sg can hold rq->nr_phys_segments entries
+- */
+-int blk_rq_map_sg(struct request_queue *q, struct request *rq,
+- struct scatterlist *sglist)
+-{
+- struct bio_vec *bvec, *bvprv;
+- struct req_iterator iter;
+- struct scatterlist *sg;
+- int nsegs, cluster;
+-
+- nsegs = 0;
+- cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
+-
+- /*
+- * for each bio in rq
+- */
+- bvprv = NULL;
+- sg = NULL;
+- rq_for_each_segment(bvec, rq, iter) {
+- int nbytes = bvec->bv_len;
+-
+- if (bvprv && cluster) {
+- if (sg->length + nbytes > q->max_segment_size)
+- goto new_segment;
+-
+- if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
+- goto new_segment;
+- if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
+- goto new_segment;
+-
+- sg->length += nbytes;
- } else {
-- /* Some RNG require some time between data_reads to gather
-- * new entropy. Poll it.
-- */
-- for (i = 0; i < 20; i++) {
-- data_present = hwrng_data_present(current_rng);
-- if (data_present)
-- break;
-- udelay(10);
+-new_segment:
+- if (!sg)
+- sg = sglist;
+- else {
+- /*
+- * If the driver previously mapped a shorter
+- * list, we could see a termination bit
+- * prematurely unless it fully inits the sg
+- * table on each mapping. We KNOW that there
+- * must be more entries here or the driver
+- * would be buggy, so force clear the
+- * termination bit to avoid doing a full
+- * sg_init_table() in drivers for each command.
+- */
+- sg->page_link &= ~0x02;
+- sg = sg_next(sg);
- }
+-
+- sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset);
+- nsegs++;
- }
-+
- bytes_read = 0;
-- if (data_present)
-+ if (hwrng_data_present(current_rng,
-+ !(filp->f_flags & O_NONBLOCK)))
- bytes_read = hwrng_data_read(current_rng, &data);
- mutex_unlock(&rng_mutex);
-
-diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
-index 8e8658d..fed4ef5 100644
---- a/drivers/char/hw_random/geode-rng.c
-+++ b/drivers/char/hw_random/geode-rng.c
-@@ -28,6 +28,7 @@
- #include <linux/kernel.h>
- #include <linux/pci.h>
- #include <linux/hw_random.h>
-+#include <linux/delay.h>
- #include <asm/io.h>
-
-
-@@ -61,11 +62,18 @@ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
- return 4;
- }
-
--static int geode_rng_data_present(struct hwrng *rng)
-+static int geode_rng_data_present(struct hwrng *rng, int wait)
- {
- void __iomem *mem = (void __iomem *)rng->priv;
-+ int data, i;
-
-- return !!(readl(mem + GEODE_RNG_STATUS_REG));
-+ for (i = 0; i < 20; i++) {
-+ data = !!(readl(mem + GEODE_RNG_STATUS_REG));
-+ if (data || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return data;
- }
-
-
-diff --git a/drivers/char/hw_random/intel-rng.c b/drivers/char/hw_random/intel-rng.c
-index 753f460..5cc651e 100644
---- a/drivers/char/hw_random/intel-rng.c
-+++ b/drivers/char/hw_random/intel-rng.c
-@@ -29,6 +29,7 @@
- #include <linux/module.h>
- #include <linux/pci.h>
- #include <linux/stop_machine.h>
-+#include <linux/delay.h>
- #include <asm/io.h>
-
-
-@@ -162,11 +163,19 @@ static inline u8 hwstatus_set(void __iomem *mem,
- return hwstatus_get(mem);
- }
-
--static int intel_rng_data_present(struct hwrng *rng)
-+static int intel_rng_data_present(struct hwrng *rng, int wait)
- {
- void __iomem *mem = (void __iomem *)rng->priv;
+- bvprv = bvec;
+- } /* segments in rq */
-
-- return !!(readb(mem + INTEL_RNG_STATUS) & INTEL_RNG_DATA_PRESENT);
-+ int data, i;
-+
-+ for (i = 0; i < 20; i++) {
-+ data = !!(readb(mem + INTEL_RNG_STATUS) &
-+ INTEL_RNG_DATA_PRESENT);
-+ if (data || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return data;
- }
-
- static int intel_rng_data_read(struct hwrng *rng, u32 *data)
-diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
-index 3f35a1c..7e31995 100644
---- a/drivers/char/hw_random/omap-rng.c
-+++ b/drivers/char/hw_random/omap-rng.c
-@@ -29,6 +29,7 @@
- #include <linux/err.h>
- #include <linux/platform_device.h>
- #include <linux/hw_random.h>
-+#include <linux/delay.h>
-
- #include <asm/io.h>
-
-@@ -65,9 +66,17 @@ static void omap_rng_write_reg(int reg, u32 val)
- }
-
- /* REVISIT: Does the status bit really work on 16xx? */
--static int omap_rng_data_present(struct hwrng *rng)
-+static int omap_rng_data_present(struct hwrng *rng, int wait)
- {
-- return omap_rng_read_reg(RNG_STAT_REG) ? 0 : 1;
-+ int data, i;
-+
-+ for (i = 0; i < 20; i++) {
-+ data = omap_rng_read_reg(RNG_STAT_REG) ? 0 : 1;
-+ if (data || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return data;
- }
-
- static int omap_rng_data_read(struct hwrng *rng, u32 *data)
-diff --git a/drivers/char/hw_random/pasemi-rng.c b/drivers/char/hw_random/pasemi-rng.c
-index fa6040b..e2ea210 100644
---- a/drivers/char/hw_random/pasemi-rng.c
-+++ b/drivers/char/hw_random/pasemi-rng.c
-@@ -23,6 +23,7 @@
- #include <linux/kernel.h>
- #include <linux/platform_device.h>
- #include <linux/hw_random.h>
-+#include <linux/delay.h>
- #include <asm/of_platform.h>
- #include <asm/io.h>
-
-@@ -41,12 +42,19 @@
-
- #define MODULE_NAME "pasemi_rng"
-
--static int pasemi_rng_data_present(struct hwrng *rng)
-+static int pasemi_rng_data_present(struct hwrng *rng, int wait)
- {
- void __iomem *rng_regs = (void __iomem *)rng->priv;
+- if (sg)
+- sg_mark_end(sg);
-
-- return (in_le32(rng_regs + SDCRNG_CTL_REG)
-- & SDCRNG_CTL_FVLD_M) ? 1 : 0;
-+ int data, i;
-+
-+ for (i = 0; i < 20; i++) {
-+ data = (in_le32(rng_regs + SDCRNG_CTL_REG)
-+ & SDCRNG_CTL_FVLD_M) ? 1 : 0;
-+ if (data || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return data;
- }
-
- static int pasemi_rng_data_read(struct hwrng *rng, u32 *data)
-diff --git a/drivers/char/hw_random/via-rng.c b/drivers/char/hw_random/via-rng.c
-index ec435cb..868e39f 100644
---- a/drivers/char/hw_random/via-rng.c
-+++ b/drivers/char/hw_random/via-rng.c
-@@ -27,6 +27,7 @@
- #include <linux/module.h>
- #include <linux/kernel.h>
- #include <linux/hw_random.h>
-+#include <linux/delay.h>
- #include <asm/io.h>
- #include <asm/msr.h>
- #include <asm/cpufeature.h>
-@@ -77,10 +78,11 @@ static inline u32 xstore(u32 *addr, u32 edx_in)
- return eax_out;
- }
-
--static int via_rng_data_present(struct hwrng *rng)
-+static int via_rng_data_present(struct hwrng *rng, int wait)
- {
- u32 bytes_out;
- u32 *via_rng_datum = (u32 *)(&rng->priv);
-+ int i;
-
- /* We choose the recommended 1-byte-per-instruction RNG rate,
- * for greater randomness at the expense of speed. Larger
-@@ -95,12 +97,15 @@ static int via_rng_data_present(struct hwrng *rng)
- * completes.
- */
-
-- *via_rng_datum = 0; /* paranoia, not really necessary */
-- bytes_out = xstore(via_rng_datum, VIA_RNG_CHUNK_1);
-- bytes_out &= VIA_XSTORE_CNT_MASK;
-- if (bytes_out == 0)
+- return nsegs;
+-}
+-
+-EXPORT_SYMBOL(blk_rq_map_sg);
+-
+-/*
+- * the standard queue merge functions, can be overridden with device
+- * specific ones if so desired
+- */
+-
+-static inline int ll_new_mergeable(struct request_queue *q,
+- struct request *req,
+- struct bio *bio)
+-{
+- int nr_phys_segs = bio_phys_segments(q, bio);
+-
+- if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
+- req->cmd_flags |= REQ_NOMERGE;
+- if (req == q->last_merge)
+- q->last_merge = NULL;
- return 0;
+- }
+-
+- /*
+- * A hw segment is just getting larger, bump just the phys
+- * counter.
+- */
+- req->nr_phys_segments += nr_phys_segs;
- return 1;
-+ for (i = 0; i < 20; i++) {
-+ *via_rng_datum = 0; /* paranoia, not really necessary */
-+ bytes_out = xstore(via_rng_datum, VIA_RNG_CHUNK_1);
-+ bytes_out &= VIA_XSTORE_CNT_MASK;
-+ if (bytes_out || !wait)
-+ break;
-+ udelay(10);
-+ }
-+ return bytes_out ? 1 : 0;
- }
-
- static int via_rng_data_read(struct hwrng *rng, u32 *data)
-diff --git a/drivers/char/nozomi.c b/drivers/char/nozomi.c
-new file mode 100644
-index 0000000..6076e66
---- /dev/null
-+++ b/drivers/char/nozomi.c
-@@ -0,0 +1,1993 @@
-+/*
-+ * nozomi.c -- HSDPA driver Broadband Wireless Data Card - Globe Trotter
-+ *
-+ * Written by: Ulf Jakobsson,
-+ * Jan Åkerfeldt,
-+ * Stefan Thomasson,
-+ *
-+ * Maintained by: Paul Hardwick (p.hardwick at option.com)
-+ *
-+ * Patches:
-+ * Locking code changes for Vodafone by Sphere Systems Ltd,
-+ * Andrew Bird (ajb at spheresystems.co.uk )
-+ * & Phil Sanderson
-+ *
-+ * Source has been ported from an implementation made by Filip Aben @ Option
-+ *
-+ * --------------------------------------------------------------------------
-+ *
-+ * Copyright (c) 2005,2006 Option Wireless Sweden AB
-+ * Copyright (c) 2006 Sphere Systems Ltd
-+ * Copyright (c) 2006 Option Wireless n/v
-+ * All rights Reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-+ *
-+ * --------------------------------------------------------------------------
-+ */
-+
-+/*
-+ * CHANGELOG
-+ * Version 2.1d
-+ * 11-November-2007 Jiri Slaby, Frank Seidel
-+ * - Big rework of multicard support by Jiri
-+ * - Major cleanups (semaphore to mutex, endianness, no major reservation)
-+ * - Optimizations
-+ *
-+ * Version 2.1c
-+ * 30-October-2007 Frank Seidel
-+ * - Completed multicard support
-+ * - Minor cleanups
-+ *
-+ * Version 2.1b
-+ * 07-August-2007 Frank Seidel
-+ * - Minor cleanups
-+ * - theoretical multicard support
-+ *
-+ * Version 2.1
-+ * 03-July-2006 Paul Hardwick
-+ *
-+ * - Stability Improvements. Incorporated spinlock wraps patch.
-+ * - Updated for newer 2.6.14+ kernels (tty_buffer_request_room)
-+ * - using __devexit macro for tty
-+ *
-+ *
-+ * Version 2.0
-+ * 08-feb-2006 15:34:10:Ulf
-+ *
-+ * -Fixed issue when not waking up line discipline layer, could probably result
-+ * in better uplink performance for 2.4.
-+ *
-+ * -Fixed issue with big endian during initialization, now proper toggle flags
-+ * are handled between preloader and maincode.
-+ *
-+ * -Fixed flow control issue.
-+ *
-+ * -Added support for setting DTR.
-+ *
-+ * -For 2.4 kernels, removing temporary buffer that's not needed.
-+ *
-+ * -Reading CTS only for modem port (only port that supports it).
-+ *
-+ * -Return 0 in write_room instead of negative value, it's not handled in
-+ * upper layer.
-+ *
-+ * --------------------------------------------------------------------------
-+ * Version 1.0
-+ *
-+ * First version of driver, only tested with card of type F32_2.
-+ * Works fine with 2.4 and 2.6 kernels.
-+ * Driver also support big endian architecture.
-+ */
-+
-+/* Enable this to have a lot of debug printouts */
-+#define DEBUG
-+
-+#include <linux/kernel.h>
-+#include <linux/module.h>
-+#include <linux/pci.h>
-+#include <linux/ioport.h>
-+#include <linux/tty.h>
-+#include <linux/tty_driver.h>
-+#include <linux/tty_flip.h>
-+#include <linux/serial.h>
-+#include <linux/interrupt.h>
-+#include <linux/kmod.h>
-+#include <linux/init.h>
-+#include <linux/kfifo.h>
-+#include <linux/uaccess.h>
-+#include <asm/byteorder.h>
-+
-+#include <linux/delay.h>
-+
-+
-+#define VERSION_STRING DRIVER_DESC " 2.1d (build date: " \
-+ __DATE__ " " __TIME__ ")"
-+
-+/* Macros definitions */
-+
-+/* Default debug printout level */
-+#define NOZOMI_DEBUG_LEVEL 0x00
-+
-+#define P_BUF_SIZE 128
-+#define NFO(_err_flag_, args...) \
-+do { \
-+ char tmp[P_BUF_SIZE]; \
-+ snprintf(tmp, sizeof(tmp), ##args); \
-+ printk(_err_flag_ "[%d] %s(): %s\n", __LINE__, \
-+ __FUNCTION__, tmp); \
-+} while (0)
-+
-+#define DBG1(args...) D_(0x01, ##args)
-+#define DBG2(args...) D_(0x02, ##args)
-+#define DBG3(args...) D_(0x04, ##args)
-+#define DBG4(args...) D_(0x08, ##args)
-+#define DBG5(args...) D_(0x10, ##args)
-+#define DBG6(args...) D_(0x20, ##args)
-+#define DBG7(args...) D_(0x40, ##args)
-+#define DBG8(args...) D_(0x80, ##args)
-+
-+#ifdef DEBUG
-+/* Do we need this settable at runtime? */
-+static int debug = NOZOMI_DEBUG_LEVEL;
-+
-+#define D(lvl, args...) do {if (lvl & debug) NFO(KERN_DEBUG, ##args); } \
-+ while (0)
-+#define D_(lvl, args...) D(lvl, ##args)
-+
-+/* These printouts are always printed */
-+
-+#else
-+static int debug;
-+#define D_(lvl, args...)
-+#endif
-+
-+/* TODO: rewrite to optimize macros... */
-+
-+#define TMP_BUF_MAX 256
-+
-+#define DUMP(buf__,len__) \
-+ do { \
-+ char tbuf[TMP_BUF_MAX] = {0};\
-+ if (len__ > 1) {\
-+ snprintf(tbuf, len__ > TMP_BUF_MAX ? TMP_BUF_MAX : len__, "%s", buf__);\
-+ if (tbuf[len__-2] == '\r') {\
-+ tbuf[len__-2] = 'r';\
-+ } \
-+ DBG1("SENDING: '%s' (%d+n)", tbuf, len__);\
-+ } else {\
-+ DBG1("SENDING: '%s' (%d)", tbuf, len__);\
-+ } \
-+} while (0)
-+
-+/* Defines */
-+#define NOZOMI_NAME "nozomi"
-+#define NOZOMI_NAME_TTY "nozomi_tty"
-+#define DRIVER_DESC "Nozomi driver"
-+
-+#define NTTY_TTY_MAXMINORS 256
-+#define NTTY_FIFO_BUFFER_SIZE 8192
-+
-+/* Must be power of 2 */
-+#define FIFO_BUFFER_SIZE_UL 8192
-+
-+/* Size of tmp send buffer to card */
-+#define SEND_BUF_MAX 1024
-+#define RECEIVE_BUF_MAX 4
-+
-+
-+/* Define all types of vendors and devices to support */
-+#define VENDOR1 0x1931 /* Vendor Option */
-+#define DEVICE1 0x000c /* HSDPA card */
-+
-+#define R_IIR 0x0000 /* Interrupt Identity Register */
-+#define R_FCR 0x0000 /* Flow Control Register */
-+#define R_IER 0x0004 /* Interrupt Enable Register */
-+
-+#define CONFIG_MAGIC 0xEFEFFEFE
-+#define TOGGLE_VALID 0x0000
-+
-+/* Definition of interrupt tokens */
-+#define MDM_DL1 0x0001
-+#define MDM_UL1 0x0002
-+#define MDM_DL2 0x0004
-+#define MDM_UL2 0x0008
-+#define DIAG_DL1 0x0010
-+#define DIAG_DL2 0x0020
-+#define DIAG_UL 0x0040
-+#define APP1_DL 0x0080
-+#define APP1_UL 0x0100
-+#define APP2_DL 0x0200
-+#define APP2_UL 0x0400
-+#define CTRL_DL 0x0800
-+#define CTRL_UL 0x1000
-+#define RESET 0x8000
-+
-+#define MDM_DL (MDM_DL1 | MDM_DL2)
-+#define MDM_UL (MDM_UL1 | MDM_UL2)
-+#define DIAG_DL (DIAG_DL1 | DIAG_DL2)
-+
-+/* modem signal definition */
-+#define CTRL_DSR 0x0001
-+#define CTRL_DCD 0x0002
-+#define CTRL_RI 0x0004
-+#define CTRL_CTS 0x0008
-+
-+#define CTRL_DTR 0x0001
-+#define CTRL_RTS 0x0002
-+
-+#define MAX_PORT 4
-+#define NOZOMI_MAX_PORTS 5
-+#define NOZOMI_MAX_CARDS (NTTY_TTY_MAXMINORS / MAX_PORT)
-+
-+/* Type definitions */
-+
-+/*
-+ * There are two types of nozomi cards,
-+ * one with 2048 memory and with 8192 memory
-+ */
-+enum card_type {
-+ F32_2 = 2048, /* 512 bytes downlink + uplink * 2 -> 2048 */
-+ F32_8 = 8192, /* 3072 bytes downl. + 1024 bytes uplink * 2 -> 8192 */
-+};
-+
-+/* Two different toggle channels exist */
-+enum channel_type {
-+ CH_A = 0,
-+ CH_B = 1,
-+};
-+
-+/* Port definition for the card regarding flow control */
-+enum ctrl_port_type {
-+ CTRL_CMD = 0,
-+ CTRL_MDM = 1,
-+ CTRL_DIAG = 2,
-+ CTRL_APP1 = 3,
-+ CTRL_APP2 = 4,
-+ CTRL_ERROR = -1,
-+};
-+
-+/* Ports that the nozomi has */
-+enum port_type {
-+ PORT_MDM = 0,
-+ PORT_DIAG = 1,
-+ PORT_APP1 = 2,
-+ PORT_APP2 = 3,
-+ PORT_CTRL = 4,
-+ PORT_ERROR = -1,
-+};
-+
-+#ifdef __BIG_ENDIAN
-+/* Big endian */
-+
-+struct toggles {
-+ unsigned enabled:5; /*
-+ * Toggle fields are valid if enabled is 0,
-+ * else A-channels must always be used.
-+ */
-+ unsigned diag_dl:1;
-+ unsigned mdm_dl:1;
-+ unsigned mdm_ul:1;
-+} __attribute__ ((packed));
-+
-+/* Configuration table to read at startup of card */
-+/* Is for now only needed during initialization phase */
-+struct config_table {
-+ u32 signature;
-+ u16 product_information;
-+ u16 version;
-+ u8 pad3[3];
-+ struct toggles toggle;
-+ u8 pad1[4];
-+ u16 dl_mdm_len1; /*
-+ * If this is 64, it can hold
-+ * 60 bytes + 4 that is length field
-+ */
-+ u16 dl_start;
-+
-+ u16 dl_diag_len1;
-+ u16 dl_mdm_len2; /*
-+ * If this is 64, it can hold
-+ * 60 bytes + 4 that is length field
-+ */
-+ u16 dl_app1_len;
-+
-+ u16 dl_diag_len2;
-+ u16 dl_ctrl_len;
-+ u16 dl_app2_len;
-+ u8 pad2[16];
-+ u16 ul_mdm_len1;
-+ u16 ul_start;
-+ u16 ul_diag_len;
-+ u16 ul_mdm_len2;
-+ u16 ul_app1_len;
-+ u16 ul_app2_len;
-+ u16 ul_ctrl_len;
-+} __attribute__ ((packed));
-+
-+/* This stores all control downlink flags */
-+struct ctrl_dl {
-+ u8 port;
-+ unsigned reserved:4;
-+ unsigned CTS:1;
-+ unsigned RI:1;
-+ unsigned DCD:1;
-+ unsigned DSR:1;
-+} __attribute__ ((packed));
-+
-+/* This stores all control uplink flags */
-+struct ctrl_ul {
-+ u8 port;
-+ unsigned reserved:6;
-+ unsigned RTS:1;
-+ unsigned DTR:1;
-+} __attribute__ ((packed));
-+
-+#else
-+/* Little endian */
-+
-+/* This represents the toggle information */
-+struct toggles {
-+ unsigned mdm_ul:1;
-+ unsigned mdm_dl:1;
-+ unsigned diag_dl:1;
-+ unsigned enabled:5; /*
-+ * Toggle fields are valid if enabled is 0,
-+ * else A-channels must always be used.
-+ */
-+} __attribute__ ((packed));
-+
-+/* Configuration table to read at startup of card */
-+struct config_table {
-+ u32 signature;
-+ u16 version;
-+ u16 product_information;
-+ struct toggles toggle;
-+ u8 pad1[7];
-+ u16 dl_start;
-+ u16 dl_mdm_len1; /*
-+ * If this is 64, it can hold
-+ * 60 bytes + 4 that is length field
-+ */
-+ u16 dl_mdm_len2;
-+ u16 dl_diag_len1;
-+ u16 dl_diag_len2;
-+ u16 dl_app1_len;
-+ u16 dl_app2_len;
-+ u16 dl_ctrl_len;
-+ u8 pad2[16];
-+ u16 ul_start;
-+ u16 ul_mdm_len2;
-+ u16 ul_mdm_len1;
-+ u16 ul_diag_len;
-+ u16 ul_app1_len;
-+ u16 ul_app2_len;
-+ u16 ul_ctrl_len;
-+} __attribute__ ((packed));
-+
-+/* This stores all control downlink flags */
-+struct ctrl_dl {
-+ unsigned DSR:1;
-+ unsigned DCD:1;
-+ unsigned RI:1;
-+ unsigned CTS:1;
-+ unsigned reserverd:4;
-+ u8 port;
-+} __attribute__ ((packed));
-+
-+/* This stores all control uplink flags */
-+struct ctrl_ul {
-+ unsigned DTR:1;
-+ unsigned RTS:1;
-+ unsigned reserved:6;
-+ u8 port;
-+} __attribute__ ((packed));
-+#endif
-+
-+/* This holds all information that is needed regarding a port */
-+struct port {
-+ u8 update_flow_control;
-+ struct ctrl_ul ctrl_ul;
-+ struct ctrl_dl ctrl_dl;
-+ struct kfifo *fifo_ul;
-+ void __iomem *dl_addr[2];
-+ u32 dl_size[2];
-+ u8 toggle_dl;
-+ void __iomem *ul_addr[2];
-+ u32 ul_size[2];
-+ u8 toggle_ul;
-+ u16 token_dl;
-+
-+ struct tty_struct *tty;
-+ int tty_open_count;
-+ /* mutex to ensure one access path to this port */
-+ struct mutex tty_sem;
-+ wait_queue_head_t tty_wait;
-+ struct async_icount tty_icount;
-+};
-+
-+/* Private data one for each card in the system */
-+struct nozomi {
-+ void __iomem *base_addr;
-+ unsigned long flip;
-+
-+ /* Pointers to registers */
-+ void __iomem *reg_iir;
-+ void __iomem *reg_fcr;
-+ void __iomem *reg_ier;
-+
-+ u16 last_ier;
-+ enum card_type card_type;
-+ struct config_table config_table; /* Configuration table */
-+ struct pci_dev *pdev;
-+ struct port port[NOZOMI_MAX_PORTS];
-+ u8 *send_buf;
-+
-+ spinlock_t spin_mutex; /* secures access to registers and tty */
-+
-+ unsigned int index_start;
-+ u32 open_ttys;
-+};
-+
-+/* This is a data packet that is read or written to/from card */
-+struct buffer {
-+ u32 size; /* size is the length of the data buffer */
-+ u8 *data;
-+} __attribute__ ((packed));
-+
-+/* Global variables */
-+static struct pci_device_id nozomi_pci_tbl[] = {
-+ {PCI_DEVICE(VENDOR1, DEVICE1)},
-+ {},
-+};
-+
-+MODULE_DEVICE_TABLE(pci, nozomi_pci_tbl);
-+
-+static struct nozomi *ndevs[NOZOMI_MAX_CARDS];
-+static struct tty_driver *ntty_driver;
-+
-+/*
-+ * find card by tty_index
-+ */
-+static inline struct nozomi *get_dc_by_tty(const struct tty_struct *tty)
-+{
-+ return tty ? ndevs[tty->index / MAX_PORT] : NULL;
-+}
-+
-+static inline struct port *get_port_by_tty(const struct tty_struct *tty)
-+{
-+ struct nozomi *ndev = get_dc_by_tty(tty);
-+ return ndev ? &ndev->port[tty->index % MAX_PORT] : NULL;
-+}
-+
-+/*
-+ * TODO:
-+ * -Optimize
-+ * -Rewrite cleaner
-+ */
-+
-+static void read_mem32(u32 *buf, const void __iomem *mem_addr_start,
-+ u32 size_bytes)
-+{
-+ u32 i = 0;
-+ const u32 *ptr = (__force u32 *) mem_addr_start;
-+ u16 *buf16;
-+
-+ if (unlikely(!ptr || !buf))
-+ goto out;
-+
-+ /* shortcut for extremely often used cases */
-+ switch (size_bytes) {
-+ case 2: /* 2 bytes */
-+ buf16 = (u16 *) buf;
-+ *buf16 = __le16_to_cpu(readw((void __iomem *)ptr));
-+ goto out;
-+ break;
-+ case 4: /* 4 bytes */
-+ *(buf) = __le32_to_cpu(readl((void __iomem *)ptr));
-+ goto out;
-+ break;
-+ }
-+
-+ while (i < size_bytes) {
-+ if (size_bytes - i == 2) {
-+ /* Handle 2 bytes in the end */
-+ buf16 = (u16 *) buf;
-+ *(buf16) = __le16_to_cpu(readw((void __iomem *)ptr));
-+ i += 2;
-+ } else {
-+ /* Read 4 bytes */
-+ *(buf) = __le32_to_cpu(readl((void __iomem *)ptr));
-+ i += 4;
-+ }
-+ buf++;
-+ ptr++;
-+ }
-+out:
-+ return;
-+}
-+
-+/*
-+ * TODO:
-+ * -Optimize
-+ * -Rewrite cleaner
-+ */
-+static u32 write_mem32(void __iomem *mem_addr_start, u32 *buf,
-+ u32 size_bytes)
-+{
-+ u32 i = 0;
-+ u32 *ptr = (__force u32 *) mem_addr_start;
-+ u16 *buf16;
-+
-+ if (unlikely(!ptr || !buf))
-+ return 0;
-+
-+ /* shortcut for extremely often used cases */
-+ switch (size_bytes) {
-+ case 2: /* 2 bytes */
-+ buf16 = (u16 *) buf;
-+ writew(__cpu_to_le16(*buf16), (void __iomem *)ptr);
-+ return 2;
-+ break;
-+ case 1: /*
-+ * also needs to write 4 bytes in this case
-+ * so falling through..
-+ */
-+ case 4: /* 4 bytes */
-+ writel(__cpu_to_le32(*buf), (void __iomem *)ptr);
-+ return 4;
-+ break;
-+ }
-+
-+ while (i < size_bytes) {
-+ if (size_bytes - i == 2) {
-+ /* 2 bytes */
-+ buf16 = (u16 *) buf;
-+ writew(__cpu_to_le16(*buf16), (void __iomem *)ptr);
-+ i += 2;
-+ } else {
-+ /* 4 bytes */
-+ writel(__cpu_to_le32(*buf), (void __iomem *)ptr);
-+ i += 4;
-+ }
-+ buf++;
-+ ptr++;
-+ }
-+ return i;
-+}
-+
-+/* Setup pointers to different channels and also setup buffer sizes. */
-+static void setup_memory(struct nozomi *dc)
-+{
-+ void __iomem *offset = dc->base_addr + dc->config_table.dl_start;
-+ /* The length reported is including the length field of 4 bytes,
-+ * hence subtract with 4.
-+ */
-+ const u16 buff_offset = 4;
-+
-+ /* Modem port dl configuration */
-+ dc->port[PORT_MDM].dl_addr[CH_A] = offset;
-+ dc->port[PORT_MDM].dl_addr[CH_B] =
-+ (offset += dc->config_table.dl_mdm_len1);
-+ dc->port[PORT_MDM].dl_size[CH_A] =
-+ dc->config_table.dl_mdm_len1 - buff_offset;
-+ dc->port[PORT_MDM].dl_size[CH_B] =
-+ dc->config_table.dl_mdm_len2 - buff_offset;
-+
-+ /* Diag port dl configuration */
-+ dc->port[PORT_DIAG].dl_addr[CH_A] =
-+ (offset += dc->config_table.dl_mdm_len2);
-+ dc->port[PORT_DIAG].dl_size[CH_A] =
-+ dc->config_table.dl_diag_len1 - buff_offset;
-+ dc->port[PORT_DIAG].dl_addr[CH_B] =
-+ (offset += dc->config_table.dl_diag_len1);
-+ dc->port[PORT_DIAG].dl_size[CH_B] =
-+ dc->config_table.dl_diag_len2 - buff_offset;
-+
-+ /* App1 port dl configuration */
-+ dc->port[PORT_APP1].dl_addr[CH_A] =
-+ (offset += dc->config_table.dl_diag_len2);
-+ dc->port[PORT_APP1].dl_size[CH_A] =
-+ dc->config_table.dl_app1_len - buff_offset;
-+
-+ /* App2 port dl configuration */
-+ dc->port[PORT_APP2].dl_addr[CH_A] =
-+ (offset += dc->config_table.dl_app1_len);
-+ dc->port[PORT_APP2].dl_size[CH_A] =
-+ dc->config_table.dl_app2_len - buff_offset;
-+
-+ /* Ctrl dl configuration */
-+ dc->port[PORT_CTRL].dl_addr[CH_A] =
-+ (offset += dc->config_table.dl_app2_len);
-+ dc->port[PORT_CTRL].dl_size[CH_A] =
-+ dc->config_table.dl_ctrl_len - buff_offset;
-+
-+ offset = dc->base_addr + dc->config_table.ul_start;
-+
-+ /* Modem Port ul configuration */
-+ dc->port[PORT_MDM].ul_addr[CH_A] = offset;
-+ dc->port[PORT_MDM].ul_size[CH_A] =
-+ dc->config_table.ul_mdm_len1 - buff_offset;
-+ dc->port[PORT_MDM].ul_addr[CH_B] =
-+ (offset += dc->config_table.ul_mdm_len1);
-+ dc->port[PORT_MDM].ul_size[CH_B] =
-+ dc->config_table.ul_mdm_len2 - buff_offset;
-+
-+ /* Diag port ul configuration */
-+ dc->port[PORT_DIAG].ul_addr[CH_A] =
-+ (offset += dc->config_table.ul_mdm_len2);
-+ dc->port[PORT_DIAG].ul_size[CH_A] =
-+ dc->config_table.ul_diag_len - buff_offset;
-+
-+ /* App1 port ul configuration */
-+ dc->port[PORT_APP1].ul_addr[CH_A] =
-+ (offset += dc->config_table.ul_diag_len);
-+ dc->port[PORT_APP1].ul_size[CH_A] =
-+ dc->config_table.ul_app1_len - buff_offset;
-+
-+ /* App2 port ul configuration */
-+ dc->port[PORT_APP2].ul_addr[CH_A] =
-+ (offset += dc->config_table.ul_app1_len);
-+ dc->port[PORT_APP2].ul_size[CH_A] =
-+ dc->config_table.ul_app2_len - buff_offset;
-+
-+ /* Ctrl ul configuration */
-+ dc->port[PORT_CTRL].ul_addr[CH_A] =
-+ (offset += dc->config_table.ul_app2_len);
-+ dc->port[PORT_CTRL].ul_size[CH_A] =
-+ dc->config_table.ul_ctrl_len - buff_offset;
-+}
-+
-+/* Dump config table under initialization phase */
-+#ifdef DEBUG
-+static void dump_table(const struct nozomi *dc)
-+{
-+ DBG3("signature: 0x%08X", dc->config_table.signature);
-+ DBG3("version: 0x%04X", dc->config_table.version);
-+ DBG3("product_information: 0x%04X", \
-+ dc->config_table.product_information);
-+ DBG3("toggle enabled: %d", dc->config_table.toggle.enabled);
-+ DBG3("toggle up_mdm: %d", dc->config_table.toggle.mdm_ul);
-+ DBG3("toggle dl_mdm: %d", dc->config_table.toggle.mdm_dl);
-+ DBG3("toggle dl_dbg: %d", dc->config_table.toggle.diag_dl);
-+
-+ DBG3("dl_start: 0x%04X", dc->config_table.dl_start);
-+ DBG3("dl_mdm_len0: 0x%04X, %d", dc->config_table.dl_mdm_len1,
-+ dc->config_table.dl_mdm_len1);
-+ DBG3("dl_mdm_len1: 0x%04X, %d", dc->config_table.dl_mdm_len2,
-+ dc->config_table.dl_mdm_len2);
-+ DBG3("dl_diag_len0: 0x%04X, %d", dc->config_table.dl_diag_len1,
-+ dc->config_table.dl_diag_len1);
-+ DBG3("dl_diag_len1: 0x%04X, %d", dc->config_table.dl_diag_len2,
-+ dc->config_table.dl_diag_len2);
-+ DBG3("dl_app1_len: 0x%04X, %d", dc->config_table.dl_app1_len,
-+ dc->config_table.dl_app1_len);
-+ DBG3("dl_app2_len: 0x%04X, %d", dc->config_table.dl_app2_len,
-+ dc->config_table.dl_app2_len);
-+ DBG3("dl_ctrl_len: 0x%04X, %d", dc->config_table.dl_ctrl_len,
-+ dc->config_table.dl_ctrl_len);
-+ DBG3("ul_start: 0x%04X, %d", dc->config_table.ul_start,
-+ dc->config_table.ul_start);
-+ DBG3("ul_mdm_len[0]: 0x%04X, %d", dc->config_table.ul_mdm_len1,
-+ dc->config_table.ul_mdm_len1);
-+ DBG3("ul_mdm_len[1]: 0x%04X, %d", dc->config_table.ul_mdm_len2,
-+ dc->config_table.ul_mdm_len2);
-+ DBG3("ul_diag_len: 0x%04X, %d", dc->config_table.ul_diag_len,
-+ dc->config_table.ul_diag_len);
-+ DBG3("ul_app1_len: 0x%04X, %d", dc->config_table.ul_app1_len,
-+ dc->config_table.ul_app1_len);
-+ DBG3("ul_app2_len: 0x%04X, %d", dc->config_table.ul_app2_len,
-+ dc->config_table.ul_app2_len);
-+ DBG3("ul_ctrl_len: 0x%04X, %d", dc->config_table.ul_ctrl_len,
-+ dc->config_table.ul_ctrl_len);
-+}
-+#else
-+static __inline__ void dump_table(const struct nozomi *dc) { }
-+#endif
-+
-+/*
-+ * Read configuration table from card under initialization phase
-+ * Returns 1 if ok, else 0
-+ */
-+static int nozomi_read_config_table(struct nozomi *dc)
-+{
-+ read_mem32((u32 *) &dc->config_table, dc->base_addr + 0,
-+ sizeof(struct config_table));
-+
-+ if (dc->config_table.signature != CONFIG_MAGIC) {
-+ dev_err(&dc->pdev->dev, "ConfigTable Bad! 0x%08X != 0x%08X\n",
-+ dc->config_table.signature, CONFIG_MAGIC);
-+ return 0;
-+ }
-+
-+ if ((dc->config_table.version == 0)
-+ || (dc->config_table.toggle.enabled == TOGGLE_VALID)) {
-+ int i;
-+ DBG1("Second phase, configuring card");
-+
-+ setup_memory(dc);
-+
-+ dc->port[PORT_MDM].toggle_ul = dc->config_table.toggle.mdm_ul;
-+ dc->port[PORT_MDM].toggle_dl = dc->config_table.toggle.mdm_dl;
-+ dc->port[PORT_DIAG].toggle_dl = dc->config_table.toggle.diag_dl;
-+ DBG1("toggle ports: MDM UL:%d MDM DL:%d, DIAG DL:%d",
-+ dc->port[PORT_MDM].toggle_ul,
-+ dc->port[PORT_MDM].toggle_dl, dc->port[PORT_DIAG].toggle_dl);
-+
-+ dump_table(dc);
-+
-+ for (i = PORT_MDM; i < MAX_PORT; i++) {
-+ dc->port[i].fifo_ul =
-+ kfifo_alloc(FIFO_BUFFER_SIZE_UL, GFP_ATOMIC, NULL);
-+ memset(&dc->port[i].ctrl_dl, 0, sizeof(struct ctrl_dl));
-+ memset(&dc->port[i].ctrl_ul, 0, sizeof(struct ctrl_ul));
-+ }
-+
-+ /* Enable control channel */
-+ dc->last_ier = dc->last_ier | CTRL_DL;
-+ writew(dc->last_ier, dc->reg_ier);
-+
-+ dev_info(&dc->pdev->dev, "Initialization OK!\n");
-+ return 1;
-+ }
-+
-+ if ((dc->config_table.version > 0)
-+ && (dc->config_table.toggle.enabled != TOGGLE_VALID)) {
-+ u32 offset = 0;
-+ DBG1("First phase: pushing upload buffers, clearing download");
-+
-+ dev_info(&dc->pdev->dev, "Version of card: %d\n",
-+ dc->config_table.version);
-+
-+ /* Here we should disable all I/O over F32. */
-+ setup_memory(dc);
-+
-+ /*
-+ * We should send ALL channel pair tokens back along
-+ * with reset token
-+ */
-+
-+ /* push upload modem buffers */
-+ write_mem32(dc->port[PORT_MDM].ul_addr[CH_A],
-+ (u32 *) &offset, 4);
-+ write_mem32(dc->port[PORT_MDM].ul_addr[CH_B],
-+ (u32 *) &offset, 4);
-+
-+ writew(MDM_UL | DIAG_DL | MDM_DL, dc->reg_fcr);
-+
-+ DBG1("First phase done");
-+ }
-+
-+ return 1;
-+}
-+
-+/* Enable uplink interrupts */
-+static void enable_transmit_ul(enum port_type port, struct nozomi *dc)
-+{
-+ u16 mask[NOZOMI_MAX_PORTS] = \
-+ {MDM_UL, DIAG_UL, APP1_UL, APP2_UL, CTRL_UL};
-+
-+ if (port < NOZOMI_MAX_PORTS) {
-+ dc->last_ier |= mask[port];
-+ writew(dc->last_ier, dc->reg_ier);
-+ } else {
-+ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
-+ }
-+}
-+
-+/* Disable uplink interrupts */
-+static void disable_transmit_ul(enum port_type port, struct nozomi *dc)
-+{
-+ u16 mask[NOZOMI_MAX_PORTS] = \
-+ {~MDM_UL, ~DIAG_UL, ~APP1_UL, ~APP2_UL, ~CTRL_UL};
-+
-+ if (port < NOZOMI_MAX_PORTS) {
-+ dc->last_ier &= mask[port];
-+ writew(dc->last_ier, dc->reg_ier);
-+ } else {
-+ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
-+ }
-+}
-+
-+/* Enable downlink interrupts */
-+static void enable_transmit_dl(enum port_type port, struct nozomi *dc)
-+{
-+ u16 mask[NOZOMI_MAX_PORTS] = \
-+ {MDM_DL, DIAG_DL, APP1_DL, APP2_DL, CTRL_DL};
-+
-+ if (port < NOZOMI_MAX_PORTS) {
-+ dc->last_ier |= mask[port];
-+ writew(dc->last_ier, dc->reg_ier);
-+ } else {
-+ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
-+ }
-+}
-+
-+/* Disable downlink interrupts */
-+static void disable_transmit_dl(enum port_type port, struct nozomi *dc)
-+{
-+ u16 mask[NOZOMI_MAX_PORTS] = \
-+ {~MDM_DL, ~DIAG_DL, ~APP1_DL, ~APP2_DL, ~CTRL_DL};
-+
-+ if (port < NOZOMI_MAX_PORTS) {
-+ dc->last_ier &= mask[port];
-+ writew(dc->last_ier, dc->reg_ier);
-+ } else {
-+ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
-+ }
-+}
-+
-+/*
-+ * Return 1 - send buffer to card and ack.
-+ * Return 0 - don't ack, don't send buffer to card.
-+ */
-+static int send_data(enum port_type index, struct nozomi *dc)
-+{
-+ u32 size = 0;
-+ struct port *port = &dc->port[index];
-+ u8 toggle = port->toggle_ul;
-+ void __iomem *addr = port->ul_addr[toggle];
-+ u32 ul_size = port->ul_size[toggle];
-+ struct tty_struct *tty = port->tty;
-+
-+ /* Get data from tty and place in buf for now */
-+ size = __kfifo_get(port->fifo_ul, dc->send_buf,
-+ ul_size < SEND_BUF_MAX ? ul_size : SEND_BUF_MAX);
-+
-+ if (size == 0) {
-+ DBG4("No more data to send, disable link:");
-+ return 0;
-+ }
-+
-+ /* DUMP(buf, size); */
-+
-+ /* Write length + data */
-+ write_mem32(addr, (u32 *) &size, 4);
-+ write_mem32(addr + 4, (u32 *) dc->send_buf, size);
-+
-+ if (tty)
-+ tty_wakeup(tty);
-+
-+ return 1;
-+}
-+
-+/* If all data has been read, return 1, else 0 */
-+static int receive_data(enum port_type index, struct nozomi *dc)
-+{
-+ u8 buf[RECEIVE_BUF_MAX] = { 0 };
-+ int size;
-+ u32 offset = 4;
-+ struct port *port = &dc->port[index];
-+ void __iomem *addr = port->dl_addr[port->toggle_dl];
-+ struct tty_struct *tty = port->tty;
-+ int i;
-+
-+ if (unlikely(!tty)) {
-+ DBG1("tty not open for port: %d?", index);
-+ return 1;
-+ }
-+
-+ read_mem32((u32 *) &size, addr, 4);
-+ /* DBG1( "%d bytes port: %d", size, index); */
-+
-+ if (test_bit(TTY_THROTTLED, &tty->flags)) {
-+ DBG1("No room in tty, don't read data, don't ack interrupt, "
-+ "disable interrupt");
-+
-+ /* disable interrupt in downlink... */
-+ disable_transmit_dl(index, dc);
-+ return 0;
-+ }
-+
-+ if (unlikely(size == 0)) {
-+ dev_err(&dc->pdev->dev, "size == 0?\n");
-+ return 1;
-+ }
-+
-+ tty_buffer_request_room(tty, size);
-+
-+ while (size > 0) {
-+ read_mem32((u32 *) buf, addr + offset, RECEIVE_BUF_MAX);
-+
-+ if (size == 1) {
-+ tty_insert_flip_char(tty, buf[0], TTY_NORMAL);
-+ size = 0;
-+ } else if (size < RECEIVE_BUF_MAX) {
-+ size -= tty_insert_flip_string(tty, (char *) buf, size);
-+ } else {
-+ i = tty_insert_flip_string(tty, \
-+ (char *) buf, RECEIVE_BUF_MAX);
-+ size -= i;
-+ offset += i;
-+ }
-+ }
-+
-+ set_bit(index, &dc->flip);
-+
-+ return 1;
-+}
-+
-+/* Debug for interrupts */
-+#ifdef DEBUG
-+static char *interrupt2str(u16 interrupt)
-+{
-+ static char buf[TMP_BUF_MAX];
-+ char *p = buf;
-+
-+ interrupt & MDM_DL1 ? p += snprintf(p, TMP_BUF_MAX, "MDM_DL1 ") : NULL;
-+ interrupt & MDM_DL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "MDM_DL2 ") : NULL;
-+
-+ interrupt & MDM_UL1 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "MDM_UL1 ") : NULL;
-+ interrupt & MDM_UL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "MDM_UL2 ") : NULL;
-+
-+ interrupt & DIAG_DL1 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "DIAG_DL1 ") : NULL;
-+ interrupt & DIAG_DL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "DIAG_DL2 ") : NULL;
-+
-+ interrupt & DIAG_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "DIAG_UL ") : NULL;
-+
-+ interrupt & APP1_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "APP1_DL ") : NULL;
-+ interrupt & APP2_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "APP2_DL ") : NULL;
-+
-+ interrupt & APP1_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "APP1_UL ") : NULL;
-+ interrupt & APP2_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "APP2_UL ") : NULL;
-+
-+ interrupt & CTRL_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "CTRL_DL ") : NULL;
-+ interrupt & CTRL_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "CTRL_UL ") : NULL;
-+
-+ interrupt & RESET ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
-+ "RESET ") : NULL;
-+
-+ return buf;
-+}
-+#endif
-+
-+/*
-+ * Receive flow control
-+ * Return 1 - If ok, else 0
-+ */
-+static int receive_flow_control(struct nozomi *dc)
-+{
-+ enum port_type port = PORT_MDM;
-+ struct ctrl_dl ctrl_dl;
-+ struct ctrl_dl old_ctrl;
-+ u16 enable_ier = 0;
-+
-+ read_mem32((u32 *) &ctrl_dl, dc->port[PORT_CTRL].dl_addr[CH_A], 2);
-+
-+ switch (ctrl_dl.port) {
-+ case CTRL_CMD:
-+ DBG1("The Base Band sends this value as a response to a "
-+ "request for IMSI detach sent over the control "
-+ "channel uplink (see section 7.6.1).");
-+ break;
-+ case CTRL_MDM:
-+ port = PORT_MDM;
-+ enable_ier = MDM_DL;
-+ break;
-+ case CTRL_DIAG:
-+ port = PORT_DIAG;
-+ enable_ier = DIAG_DL;
-+ break;
-+ case CTRL_APP1:
-+ port = PORT_APP1;
-+ enable_ier = APP1_DL;
-+ break;
-+ case CTRL_APP2:
-+ port = PORT_APP2;
-+ enable_ier = APP2_DL;
-+ break;
-+ default:
-+ dev_err(&dc->pdev->dev,
-+ "ERROR: flow control received for non-existing port\n");
-+ return 0;
-+ };
-+
-+ DBG1("0x%04X->0x%04X", *((u16 *)&dc->port[port].ctrl_dl),
-+ *((u16 *)&ctrl_dl));
-+
-+ old_ctrl = dc->port[port].ctrl_dl;
-+ dc->port[port].ctrl_dl = ctrl_dl;
-+
-+ if (old_ctrl.CTS == 1 && ctrl_dl.CTS == 0) {
-+ DBG1("Disable interrupt (0x%04X) on port: %d",
-+ enable_ier, port);
-+ disable_transmit_ul(port, dc);
-+
-+ } else if (old_ctrl.CTS == 0 && ctrl_dl.CTS == 1) {
-+
-+ if (__kfifo_len(dc->port[port].fifo_ul)) {
-+ DBG1("Enable interrupt (0x%04X) on port: %d",
-+ enable_ier, port);
-+ DBG1("Data in buffer [%d], enable transmit! ",
-+ __kfifo_len(dc->port[port].fifo_ul));
-+ enable_transmit_ul(port, dc);
-+ } else {
-+ DBG1("No data in buffer...");
-+ }
-+ }
-+
-+ if (*(u16 *)&old_ctrl == *(u16 *)&ctrl_dl) {
-+ DBG1(" No change in mctrl");
-+ return 1;
-+ }
-+ /* Update statistics */
-+ if (old_ctrl.CTS != ctrl_dl.CTS)
-+ dc->port[port].tty_icount.cts++;
-+ if (old_ctrl.DSR != ctrl_dl.DSR)
-+ dc->port[port].tty_icount.dsr++;
-+ if (old_ctrl.RI != ctrl_dl.RI)
-+ dc->port[port].tty_icount.rng++;
-+ if (old_ctrl.DCD != ctrl_dl.DCD)
-+ dc->port[port].tty_icount.dcd++;
-+
-+ wake_up_interruptible(&dc->port[port].tty_wait);
-+
-+ DBG1("port: %d DCD(%d), CTS(%d), RI(%d), DSR(%d)",
-+ port,
-+ dc->port[port].tty_icount.dcd, dc->port[port].tty_icount.cts,
-+ dc->port[port].tty_icount.rng, dc->port[port].tty_icount.dsr);
-+
-+ return 1;
-+}
-+
-+static enum ctrl_port_type port2ctrl(enum port_type port,
-+ const struct nozomi *dc)
-+{
-+ switch (port) {
-+ case PORT_MDM:
-+ return CTRL_MDM;
-+ case PORT_DIAG:
-+ return CTRL_DIAG;
-+ case PORT_APP1:
-+ return CTRL_APP1;
-+ case PORT_APP2:
-+ return CTRL_APP2;
-+ default:
-+ dev_err(&dc->pdev->dev,
-+ "ERROR: send flow control " \
-+ "received for non-existing port\n");
-+ };
-+ return CTRL_ERROR;
-+}
-+
-+/*
-+ * Send flow control, can only update one channel at a time
-+ * Return 0 - If we have updated all flow control
-+ * Return 1 - If we need to update more flow control, ack current enable more
-+ */
-+static int send_flow_control(struct nozomi *dc)
-+{
-+ u32 i, more_flow_control_to_be_updated = 0;
-+ u16 *ctrl;
-+
-+ for (i = PORT_MDM; i < MAX_PORT; i++) {
-+ if (dc->port[i].update_flow_control) {
-+ if (more_flow_control_to_be_updated) {
-+ /* We have more flow control to be updated */
-+ return 1;
-+ }
-+ dc->port[i].ctrl_ul.port = port2ctrl(i, dc);
-+ ctrl = (u16 *)&dc->port[i].ctrl_ul;
-+ write_mem32(dc->port[PORT_CTRL].ul_addr[0], \
-+ (u32 *) ctrl, 2);
-+ dc->port[i].update_flow_control = 0;
-+ more_flow_control_to_be_updated = 1;
-+ }
-+ }
-+ return 0;
-+}
-+
-+/*
-+ * Handle downlink data, ports that are handled are modem and diagnostics
-+ * Return 1 - ok
-+ * Return 0 - toggle fields are out of sync
-+ */
-+static int handle_data_dl(struct nozomi *dc, enum port_type port, u8 *toggle,
-+ u16 read_iir, u16 mask1, u16 mask2)
-+{
-+ if (*toggle == 0 && read_iir & mask1) {
-+ if (receive_data(port, dc)) {
-+ writew(mask1, dc->reg_fcr);
-+ *toggle = !(*toggle);
-+ }
-+
-+ if (read_iir & mask2) {
-+ if (receive_data(port, dc)) {
-+ writew(mask2, dc->reg_fcr);
-+ *toggle = !(*toggle);
-+ }
-+ }
-+ } else if (*toggle == 1 && read_iir & mask2) {
-+ if (receive_data(port, dc)) {
-+ writew(mask2, dc->reg_fcr);
-+ *toggle = !(*toggle);
-+ }
-+
-+ if (read_iir & mask1) {
-+ if (receive_data(port, dc)) {
-+ writew(mask1, dc->reg_fcr);
-+ *toggle = !(*toggle);
-+ }
-+ }
-+ } else {
-+ dev_err(&dc->pdev->dev, "port out of sync!, toggle:%d\n",
-+ *toggle);
-+ return 0;
-+ }
-+ return 1;
-+}
-+
-+/*
-+ * Handle uplink data, this is currently for the modem port
-+ * Return 1 - ok
-+ * Return 0 - toggle fields are out of sync
-+ */
-+static int handle_data_ul(struct nozomi *dc, enum port_type port, u16 read_iir)
-+{
-+ u8 *toggle = &(dc->port[port].toggle_ul);
-+
-+ if (*toggle == 0 && read_iir & MDM_UL1) {
-+ dc->last_ier &= ~MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(port, dc)) {
-+ writew(MDM_UL1, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ *toggle = !*toggle;
-+ }
-+
-+ if (read_iir & MDM_UL2) {
-+ dc->last_ier &= ~MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(port, dc)) {
-+ writew(MDM_UL2, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ *toggle = !*toggle;
-+ }
-+ }
-+
-+ } else if (*toggle == 1 && read_iir & MDM_UL2) {
-+ dc->last_ier &= ~MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(port, dc)) {
-+ writew(MDM_UL2, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ *toggle = !*toggle;
-+ }
-+
-+ if (read_iir & MDM_UL1) {
-+ dc->last_ier &= ~MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(port, dc)) {
-+ writew(MDM_UL1, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | MDM_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ *toggle = !*toggle;
-+ }
-+ }
-+ } else {
-+ writew(read_iir & MDM_UL, dc->reg_fcr);
-+ dev_err(&dc->pdev->dev, "port out of sync!\n");
-+ return 0;
-+ }
-+ return 1;
-+}
-+
-+static irqreturn_t interrupt_handler(int irq, void *dev_id)
-+{
-+ struct nozomi *dc = dev_id;
-+ unsigned int a;
-+ u16 read_iir;
-+
-+ if (!dc)
-+ return IRQ_NONE;
-+
-+ spin_lock(&dc->spin_mutex);
-+ read_iir = readw(dc->reg_iir);
-+
-+ /* Card removed */
-+ if (read_iir == (u16)-1)
-+ goto none;
-+ /*
-+ * Just handle interrupt enabled in IER
-+ * (by masking with dc->last_ier)
-+ */
-+ read_iir &= dc->last_ier;
-+
-+ if (read_iir == 0)
-+ goto none;
-+
-+
-+ DBG4("%s irq:0x%04X, prev:0x%04X", interrupt2str(read_iir), read_iir,
-+ dc->last_ier);
-+
-+ if (read_iir & RESET) {
-+ if (unlikely(!nozomi_read_config_table(dc))) {
-+ dc->last_ier = 0x0;
-+ writew(dc->last_ier, dc->reg_ier);
-+ dev_err(&dc->pdev->dev, "Could not read status from "
-+ "card, we should disable interface\n");
-+ } else {
-+ writew(RESET, dc->reg_fcr);
-+ }
-+ /* No more useful info if this was the reset interrupt. */
-+ goto exit_handler;
-+ }
-+ if (read_iir & CTRL_UL) {
-+ DBG1("CTRL_UL");
-+ dc->last_ier &= ~CTRL_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_flow_control(dc)) {
-+ writew(CTRL_UL, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | CTRL_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ }
-+ }
-+ if (read_iir & CTRL_DL) {
-+ receive_flow_control(dc);
-+ writew(CTRL_DL, dc->reg_fcr);
-+ }
-+ if (read_iir & MDM_DL) {
-+ if (!handle_data_dl(dc, PORT_MDM,
-+ &(dc->port[PORT_MDM].toggle_dl), read_iir,
-+ MDM_DL1, MDM_DL2)) {
-+ dev_err(&dc->pdev->dev, "MDM_DL out of sync!\n");
-+ goto exit_handler;
-+ }
-+ }
-+ if (read_iir & MDM_UL) {
-+ if (!handle_data_ul(dc, PORT_MDM, read_iir)) {
-+ dev_err(&dc->pdev->dev, "MDM_UL out of sync!\n");
-+ goto exit_handler;
-+ }
-+ }
-+ if (read_iir & DIAG_DL) {
-+ if (!handle_data_dl(dc, PORT_DIAG,
-+ &(dc->port[PORT_DIAG].toggle_dl), read_iir,
-+ DIAG_DL1, DIAG_DL2)) {
-+ dev_err(&dc->pdev->dev, "DIAG_DL out of sync!\n");
-+ goto exit_handler;
-+ }
-+ }
-+ if (read_iir & DIAG_UL) {
-+ dc->last_ier &= ~DIAG_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(PORT_DIAG, dc)) {
-+ writew(DIAG_UL, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | DIAG_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ }
-+ }
-+ if (read_iir & APP1_DL) {
-+ if (receive_data(PORT_APP1, dc))
-+ writew(APP1_DL, dc->reg_fcr);
-+ }
-+ if (read_iir & APP1_UL) {
-+ dc->last_ier &= ~APP1_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(PORT_APP1, dc)) {
-+ writew(APP1_UL, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | APP1_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ }
-+ }
-+ if (read_iir & APP2_DL) {
-+ if (receive_data(PORT_APP2, dc))
-+ writew(APP2_DL, dc->reg_fcr);
-+ }
-+ if (read_iir & APP2_UL) {
-+ dc->last_ier &= ~APP2_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ if (send_data(PORT_APP2, dc)) {
-+ writew(APP2_UL, dc->reg_fcr);
-+ dc->last_ier = dc->last_ier | APP2_UL;
-+ writew(dc->last_ier, dc->reg_ier);
-+ }
-+ }
-+
-+exit_handler:
-+ spin_unlock(&dc->spin_mutex);
-+ for (a = 0; a < NOZOMI_MAX_PORTS; a++)
-+ if (test_and_clear_bit(a, &dc->flip))
-+ tty_flip_buffer_push(dc->port[a].tty);
-+ return IRQ_HANDLED;
-+none:
-+ spin_unlock(&dc->spin_mutex);
-+ return IRQ_NONE;
-+}
-+
-+static void nozomi_get_card_type(struct nozomi *dc)
-+{
-+ int i;
-+ u32 size = 0;
-+
-+ for (i = 0; i < 6; i++)
-+ size += pci_resource_len(dc->pdev, i);
-+
-+ /* Assume card type F32_8 if no match */
-+ dc->card_type = size == 2048 ? F32_2 : F32_8;
-+
-+ dev_info(&dc->pdev->dev, "Card type is: %d\n", dc->card_type);
-+}
-+
-+static void nozomi_setup_private_data(struct nozomi *dc)
-+{
-+ void __iomem *offset = dc->base_addr + dc->card_type / 2;
-+ unsigned int i;
-+
-+ dc->reg_fcr = (void __iomem *)(offset + R_FCR);
-+ dc->reg_iir = (void __iomem *)(offset + R_IIR);
-+ dc->reg_ier = (void __iomem *)(offset + R_IER);
-+ dc->last_ier = 0;
-+ dc->flip = 0;
-+
-+ dc->port[PORT_MDM].token_dl = MDM_DL;
-+ dc->port[PORT_DIAG].token_dl = DIAG_DL;
-+ dc->port[PORT_APP1].token_dl = APP1_DL;
-+ dc->port[PORT_APP2].token_dl = APP2_DL;
-+
-+ for (i = 0; i < MAX_PORT; i++)
-+ init_waitqueue_head(&dc->port[i].tty_wait);
-+}
-+
-+static ssize_t card_type_show(struct device *dev, struct device_attribute *attr,
-+ char *buf)
-+{
-+ struct nozomi *dc = pci_get_drvdata(to_pci_dev(dev));
-+
-+ return sprintf(buf, "%d\n", dc->card_type);
-+}
-+static DEVICE_ATTR(card_type, 0444, card_type_show, NULL);
-+
-+static ssize_t open_ttys_show(struct device *dev, struct device_attribute *attr,
-+ char *buf)
-+{
-+ struct nozomi *dc = pci_get_drvdata(to_pci_dev(dev));
-+
-+ return sprintf(buf, "%u\n", dc->open_ttys);
-+}
-+static DEVICE_ATTR(open_ttys, 0444, open_ttys_show, NULL);
-+
-+static void make_sysfs_files(struct nozomi *dc)
-+{
-+ if (device_create_file(&dc->pdev->dev, &dev_attr_card_type))
-+ dev_err(&dc->pdev->dev,
-+ "Could not create sysfs file for card_type\n");
-+ if (device_create_file(&dc->pdev->dev, &dev_attr_open_ttys))
-+ dev_err(&dc->pdev->dev,
-+ "Could not create sysfs file for open_ttys\n");
-+}
-+
-+static void remove_sysfs_files(struct nozomi *dc)
-+{
-+ device_remove_file(&dc->pdev->dev, &dev_attr_card_type);
-+ device_remove_file(&dc->pdev->dev, &dev_attr_open_ttys);
-+}
-+
-+/* Allocate memory for one device */
-+static int __devinit nozomi_card_init(struct pci_dev *pdev,
-+ const struct pci_device_id *ent)
-+{
-+ resource_size_t start;
-+ int ret;
-+ struct nozomi *dc = NULL;
-+ int ndev_idx;
-+ int i;
-+
-+ dev_dbg(&pdev->dev, "Init, new card found\n");
-+
-+ for (ndev_idx = 0; ndev_idx < ARRAY_SIZE(ndevs); ndev_idx++)
-+ if (!ndevs[ndev_idx])
-+ break;
-+
-+ if (ndev_idx >= ARRAY_SIZE(ndevs)) {
-+ dev_err(&pdev->dev, "no free tty range for this card left\n");
-+ ret = -EIO;
-+ goto err;
-+ }
-+
-+ dc = kzalloc(sizeof(struct nozomi), GFP_KERNEL);
-+ if (unlikely(!dc)) {
-+ dev_err(&pdev->dev, "Could not allocate memory\n");
-+ ret = -ENOMEM;
-+ goto err_free;
-+ }
-+
-+ dc->pdev = pdev;
-+
-+ /* Find out what card type it is */
-+ nozomi_get_card_type(dc);
-+
-+ ret = pci_enable_device(dc->pdev);
-+ if (ret) {
-+ dev_err(&pdev->dev, "Failed to enable PCI Device\n");
-+ goto err_free;
-+ }
-+
-+ start = pci_resource_start(dc->pdev, 0);
-+ if (start == 0) {
-+ dev_err(&pdev->dev, "No I/O address for card detected\n");
-+ ret = -ENODEV;
-+ goto err_disable_device;
-+ }
-+
-+ ret = pci_request_regions(dc->pdev, NOZOMI_NAME);
-+ if (ret) {
-+ dev_err(&pdev->dev, "I/O address 0x%04x already in use\n",
-+ (int) /* nozomi_private.io_addr */ 0);
-+ goto err_disable_device;
-+ }
-+
-+ dc->base_addr = ioremap(start, dc->card_type);
-+ if (!dc->base_addr) {
-+ dev_err(&pdev->dev, "Unable to map card MMIO\n");
-+ ret = -ENODEV;
-+ goto err_rel_regs;
-+ }
-+
-+ dc->send_buf = kmalloc(SEND_BUF_MAX, GFP_KERNEL);
-+ if (!dc->send_buf) {
-+ dev_err(&pdev->dev, "Could not allocate send buffer?\n");
-+ ret = -ENOMEM;
-+ goto err_free_sbuf;
-+ }
-+
-+ spin_lock_init(&dc->spin_mutex);
-+
-+ nozomi_setup_private_data(dc);
-+
-+ /* Disable all interrupts */
-+ dc->last_ier = 0;
-+ writew(dc->last_ier, dc->reg_ier);
-+
-+ ret = request_irq(pdev->irq, &interrupt_handler, IRQF_SHARED,
-+ NOZOMI_NAME, dc);
-+ if (unlikely(ret)) {
-+ dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
-+ goto err_free_sbuf;
-+ }
-+
-+ DBG1("base_addr: %p", dc->base_addr);
-+
-+ make_sysfs_files(dc);
-+
-+ dc->index_start = ndev_idx * MAX_PORT;
-+ ndevs[ndev_idx] = dc;
-+
-+ for (i = 0; i < MAX_PORT; i++) {
-+ mutex_init(&dc->port[i].tty_sem);
-+ dc->port[i].tty_open_count = 0;
-+ dc->port[i].tty = NULL;
-+ tty_register_device(ntty_driver, dc->index_start + i,
-+ &pdev->dev);
-+ }
-+
-+ /* Enable RESET interrupt. */
-+ dc->last_ier = RESET;
-+ writew(dc->last_ier, dc->reg_ier);
-+
-+ pci_set_drvdata(pdev, dc);
-+
-+ return 0;
-+
-+err_free_sbuf:
-+ kfree(dc->send_buf);
-+ iounmap(dc->base_addr);
-+err_rel_regs:
-+ pci_release_regions(pdev);
-+err_disable_device:
-+ pci_disable_device(pdev);
-+err_free:
-+ kfree(dc);
-+err:
-+ return ret;
-+}
-+
-+static void __devexit tty_exit(struct nozomi *dc)
-+{
-+ unsigned int i;
-+
-+ DBG1(" ");
-+
-+ flush_scheduled_work();
-+
-+ for (i = 0; i < MAX_PORT; ++i)
-+ if (dc->port[i].tty && \
-+ list_empty(&dc->port[i].tty->hangup_work.entry))
-+ tty_hangup(dc->port[i].tty);
-+
-+ while (dc->open_ttys)
-+ msleep(1);
-+
-+ for (i = dc->index_start; i < dc->index_start + MAX_PORT; ++i)
-+ tty_unregister_device(ntty_driver, i);
-+}
-+
-+/* Deallocate memory for one device */
-+static void __devexit nozomi_card_exit(struct pci_dev *pdev)
-+{
-+ int i;
-+ struct ctrl_ul ctrl;
-+ struct nozomi *dc = pci_get_drvdata(pdev);
-+
-+ /* Disable all interrupts */
-+ dc->last_ier = 0;
-+ writew(dc->last_ier, dc->reg_ier);
-+
-+ tty_exit(dc);
-+
-+ /* Send 0x0001, command card to resend the reset token. */
-+ /* This is to get the reset when the module is reloaded. */
-+ ctrl.port = 0x00;
-+ ctrl.reserved = 0;
-+ ctrl.RTS = 0;
-+ ctrl.DTR = 1;
-+ DBG1("sending flow control 0x%04X", *((u16 *)&ctrl));
-+
-+	/* Setup dc->reg addresses so we can use defines here */
-+ write_mem32(dc->port[PORT_CTRL].ul_addr[0], (u32 *)&ctrl, 2);
-+ writew(CTRL_UL, dc->reg_fcr); /* push the token to the card. */
-+
-+ remove_sysfs_files(dc);
-+
-+ free_irq(pdev->irq, dc);
-+
-+ for (i = 0; i < MAX_PORT; i++)
-+ if (dc->port[i].fifo_ul)
-+ kfifo_free(dc->port[i].fifo_ul);
-+
-+ kfree(dc->send_buf);
-+
-+ iounmap(dc->base_addr);
-+
-+ pci_release_regions(pdev);
-+
-+ pci_disable_device(pdev);
-+
-+ ndevs[dc->index_start / MAX_PORT] = NULL;
-+
-+ kfree(dc);
-+}
-+
-+static void set_rts(const struct tty_struct *tty, int rts)
-+{
-+ struct port *port = get_port_by_tty(tty);
-+
-+ port->ctrl_ul.RTS = rts;
-+ port->update_flow_control = 1;
-+ enable_transmit_ul(PORT_CTRL, get_dc_by_tty(tty));
-+}
-+
-+static void set_dtr(const struct tty_struct *tty, int dtr)
-+{
-+ struct port *port = get_port_by_tty(tty);
-+
-+ DBG1("SETTING DTR index: %d, dtr: %d", tty->index, dtr);
-+
-+ port->ctrl_ul.DTR = dtr;
-+ port->update_flow_control = 1;
-+ enable_transmit_ul(PORT_CTRL, get_dc_by_tty(tty));
-+}
-+
-+/*
-+ * ----------------------------------------------------------------------------
-+ * TTY code
-+ * ----------------------------------------------------------------------------
-+ */
-+
-+/* Called when the userspace process opens the tty, /dev/noz*. */
-+static int ntty_open(struct tty_struct *tty, struct file *file)
-+{
-+ struct port *port = get_port_by_tty(tty);
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ unsigned long flags;
-+
-+ if (!port || !dc)
-+ return -ENODEV;
-+
-+ if (mutex_lock_interruptible(&port->tty_sem))
-+ return -ERESTARTSYS;
-+
-+ port->tty_open_count++;
-+ dc->open_ttys++;
-+
-+ /* Enable interrupt downlink for channel */
-+ if (port->tty_open_count == 1) {
-+ tty->low_latency = 1;
-+ tty->driver_data = port;
-+ port->tty = tty;
-+ DBG1("open: %d", port->token_dl);
-+ spin_lock_irqsave(&dc->spin_mutex, flags);
-+ dc->last_ier = dc->last_ier | port->token_dl;
-+ writew(dc->last_ier, dc->reg_ier);
-+ spin_unlock_irqrestore(&dc->spin_mutex, flags);
-+ }
-+
-+ mutex_unlock(&port->tty_sem);
-+
-+ return 0;
-+}
-+
-+/* Called when the userspace process closes the tty, /dev/noz*. */
-+static void ntty_close(struct tty_struct *tty, struct file *file)
-+{
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ struct port *port = tty->driver_data;
-+ unsigned long flags;
-+
-+ if (!dc || !port)
-+ return;
-+
-+ if (mutex_lock_interruptible(&port->tty_sem))
-+ return;
-+
-+ if (!port->tty_open_count)
-+ goto exit;
-+
-+ dc->open_ttys--;
-+ port->tty_open_count--;
-+
-+ if (port->tty_open_count == 0) {
-+ DBG1("close: %d", port->token_dl);
-+ spin_lock_irqsave(&dc->spin_mutex, flags);
-+ dc->last_ier &= ~(port->token_dl);
-+ writew(dc->last_ier, dc->reg_ier);
-+ spin_unlock_irqrestore(&dc->spin_mutex, flags);
-+ }
-+
-+exit:
-+ mutex_unlock(&port->tty_sem);
-+}
-+
-+/*
-+ * Called when the userspace process writes to the tty (/dev/noz*).
-+ * Data is inserted into a fifo, which is then read and transferred to the modem.
-+ */
-+static int ntty_write(struct tty_struct *tty, const unsigned char *buffer,
-+ int count)
-+{
-+ int rval = -EINVAL;
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ struct port *port = tty->driver_data;
-+ unsigned long flags;
-+
-+ /* DBG1( "WRITEx: %d, index = %d", count, index); */
-+
-+ if (!dc || !port)
-+ return -ENODEV;
-+
-+ if (unlikely(!mutex_trylock(&port->tty_sem))) {
-+ /*
-+ * must test lock as tty layer wraps calls
-+ * to this function with BKL
-+ */
-+ dev_err(&dc->pdev->dev, "Would have deadlocked - "
-+ "return EAGAIN\n");
-+ return -EAGAIN;
-+ }
-+
-+ if (unlikely(!port->tty_open_count)) {
-+ DBG1(" ");
-+ goto exit;
-+ }
-+
-+ rval = __kfifo_put(port->fifo_ul, (unsigned char *)buffer, count);
-+
-+ /* notify card */
-+ if (unlikely(dc == NULL)) {
-+ DBG1("No device context?");
-+ goto exit;
-+ }
-+
-+ spin_lock_irqsave(&dc->spin_mutex, flags);
-+ /* CTS is only valid on the modem channel */
-+ if (port == &(dc->port[PORT_MDM])) {
-+ if (port->ctrl_dl.CTS) {
-+ DBG4("Enable interrupt");
-+ enable_transmit_ul(tty->index % MAX_PORT, dc);
-+ } else {
-+ dev_err(&dc->pdev->dev,
-+ "CTS not active on modem port?\n");
-+ }
-+ } else {
-+ enable_transmit_ul(tty->index % MAX_PORT, dc);
-+ }
-+ spin_unlock_irqrestore(&dc->spin_mutex, flags);
-+
-+exit:
-+ mutex_unlock(&port->tty_sem);
-+ return rval;
-+}
-+
-+/*
-+ * Calculate how much room is left in the device.
-+ * This method is called by the upper tty layer.
-+ * According to the sources of n_tty.c it expects a value >= 0 and
-+ * does not check for negative values.
-+ */
-+static int ntty_write_room(struct tty_struct *tty)
-+{
-+ struct port *port = tty->driver_data;
-+ int room = 0;
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+
-+ if (!dc || !port)
-+ return 0;
-+ if (!mutex_trylock(&port->tty_sem))
-+ return 0;
-+
-+ if (!port->tty_open_count)
-+ goto exit;
-+
-+ room = port->fifo_ul->size - __kfifo_len(port->fifo_ul);
-+
-+exit:
-+ mutex_unlock(&port->tty_sem);
-+ return room;
-+}
-+
-+/* Gets io control parameters */
-+static int ntty_tiocmget(struct tty_struct *tty, struct file *file)
-+{
-+ struct port *port = tty->driver_data;
-+ struct ctrl_dl *ctrl_dl = &port->ctrl_dl;
-+ struct ctrl_ul *ctrl_ul = &port->ctrl_ul;
-+
-+ return (ctrl_ul->RTS ? TIOCM_RTS : 0) |
-+ (ctrl_ul->DTR ? TIOCM_DTR : 0) |
-+ (ctrl_dl->DCD ? TIOCM_CAR : 0) |
-+ (ctrl_dl->RI ? TIOCM_RNG : 0) |
-+ (ctrl_dl->DSR ? TIOCM_DSR : 0) |
-+ (ctrl_dl->CTS ? TIOCM_CTS : 0);
-+}
-+
-+/* Sets io controls parameters */
-+static int ntty_tiocmset(struct tty_struct *tty, struct file *file,
-+ unsigned int set, unsigned int clear)
-+{
-+ if (set & TIOCM_RTS)
-+ set_rts(tty, 1);
-+ else if (clear & TIOCM_RTS)
-+ set_rts(tty, 0);
-+
-+ if (set & TIOCM_DTR)
-+ set_dtr(tty, 1);
-+ else if (clear & TIOCM_DTR)
-+ set_dtr(tty, 0);
-+
-+ return 0;
-+}
-+
-+static int ntty_cflags_changed(struct port *port, unsigned long flags,
-+ struct async_icount *cprev)
-+{
-+ struct async_icount cnow = port->tty_icount;
-+ int ret;
-+
-+ ret = ((flags & TIOCM_RNG) && (cnow.rng != cprev->rng)) ||
-+ ((flags & TIOCM_DSR) && (cnow.dsr != cprev->dsr)) ||
-+ ((flags & TIOCM_CD) && (cnow.dcd != cprev->dcd)) ||
-+ ((flags & TIOCM_CTS) && (cnow.cts != cprev->cts));
-+
-+ *cprev = cnow;
-+
-+ return ret;
-+}
-+
-+static int ntty_ioctl_tiocgicount(struct port *port, void __user *argp)
-+{
-+ struct async_icount cnow = port->tty_icount;
-+ struct serial_icounter_struct icount;
-+
-+ icount.cts = cnow.cts;
-+ icount.dsr = cnow.dsr;
-+ icount.rng = cnow.rng;
-+ icount.dcd = cnow.dcd;
-+ icount.rx = cnow.rx;
-+ icount.tx = cnow.tx;
-+ icount.frame = cnow.frame;
-+ icount.overrun = cnow.overrun;
-+ icount.parity = cnow.parity;
-+ icount.brk = cnow.brk;
-+ icount.buf_overrun = cnow.buf_overrun;
-+
-+ return copy_to_user(argp, &icount, sizeof(icount));
-+}
-+
-+static int ntty_ioctl(struct tty_struct *tty, struct file *file,
-+ unsigned int cmd, unsigned long arg)
-+{
-+ struct port *port = tty->driver_data;
-+ void __user *argp = (void __user *)arg;
-+ int rval = -ENOIOCTLCMD;
-+
-+ DBG1("******** IOCTL, cmd: %d", cmd);
-+
-+ switch (cmd) {
-+ case TIOCMIWAIT: {
-+ struct async_icount cprev = port->tty_icount;
-+
-+ rval = wait_event_interruptible(port->tty_wait,
-+ ntty_cflags_changed(port, arg, &cprev));
-+ break;
-+ } case TIOCGICOUNT:
-+ rval = ntty_ioctl_tiocgicount(port, argp);
-+ break;
-+ default:
-+ DBG1("ERR: 0x%08X, %d", cmd, cmd);
-+ break;
-+ };
-+
-+ return rval;
-+}
-+
-+/*
-+ * Called by the upper tty layer when tty buffers are ready
-+ * to receive data again after a call to throttle.
-+ */
-+static void ntty_unthrottle(struct tty_struct *tty)
-+{
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ unsigned long flags;
-+
-+ DBG1("UNTHROTTLE");
-+ spin_lock_irqsave(&dc->spin_mutex, flags);
-+ enable_transmit_dl(tty->index % MAX_PORT, dc);
-+ set_rts(tty, 1);
-+
-+ spin_unlock_irqrestore(&dc->spin_mutex, flags);
-+}
-+
-+/*
-+ * Called by the upper tty layer when the tty buffers are almost full.
-+ * The driver should stop sending more data.
-+ */
-+static void ntty_throttle(struct tty_struct *tty)
-+{
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ unsigned long flags;
-+
-+ DBG1("THROTTLE");
-+ spin_lock_irqsave(&dc->spin_mutex, flags);
-+ set_rts(tty, 0);
-+ spin_unlock_irqrestore(&dc->spin_mutex, flags);
-+}
-+
-+/* just to discard single character writes */
-+static void ntty_put_char(struct tty_struct *tty, unsigned char c)
-+{
-+ /* FIXME !!! */
-+ DBG2("PUT CHAR Function: %c", c);
-+}
-+
-+/* Returns number of chars in buffer, called by tty layer */
-+static s32 ntty_chars_in_buffer(struct tty_struct *tty)
-+{
-+ struct port *port = tty->driver_data;
-+ struct nozomi *dc = get_dc_by_tty(tty);
-+ s32 rval;
-+
-+ if (unlikely(!dc || !port)) {
-+ rval = -ENODEV;
-+ goto exit_in_buffer;
-+ }
-+
-+ if (unlikely(!port->tty_open_count)) {
-+ dev_err(&dc->pdev->dev, "No tty open?\n");
-+ rval = -ENODEV;
-+ goto exit_in_buffer;
-+ }
-+
-+ rval = __kfifo_len(port->fifo_ul);
-+
-+exit_in_buffer:
-+ return rval;
-+}
-+
-+static struct tty_operations tty_ops = {
-+ .ioctl = ntty_ioctl,
-+ .open = ntty_open,
-+ .close = ntty_close,
-+ .write = ntty_write,
-+ .write_room = ntty_write_room,
-+ .unthrottle = ntty_unthrottle,
-+ .throttle = ntty_throttle,
-+ .chars_in_buffer = ntty_chars_in_buffer,
-+ .put_char = ntty_put_char,
-+ .tiocmget = ntty_tiocmget,
-+ .tiocmset = ntty_tiocmset,
-+};
-+
-+/* Module initialization */
-+static struct pci_driver nozomi_driver = {
-+ .name = NOZOMI_NAME,
-+ .id_table = nozomi_pci_tbl,
-+ .probe = nozomi_card_init,
-+ .remove = __devexit_p(nozomi_card_exit),
-+};
-+
-+static __init int nozomi_init(void)
-+{
-+ int ret;
-+
-+ printk(KERN_INFO "Initializing %s\n", VERSION_STRING);
-+
-+ ntty_driver = alloc_tty_driver(NTTY_TTY_MAXMINORS);
-+ if (!ntty_driver)
-+ return -ENOMEM;
-+
-+ ntty_driver->owner = THIS_MODULE;
-+ ntty_driver->driver_name = NOZOMI_NAME_TTY;
-+ ntty_driver->name = "noz";
-+ ntty_driver->major = 0;
-+ ntty_driver->type = TTY_DRIVER_TYPE_SERIAL;
-+ ntty_driver->subtype = SERIAL_TYPE_NORMAL;
-+ ntty_driver->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
-+ ntty_driver->init_termios = tty_std_termios;
-+ ntty_driver->init_termios.c_cflag = B115200 | CS8 | CREAD | \
-+ HUPCL | CLOCAL;
-+ ntty_driver->init_termios.c_ispeed = 115200;
-+ ntty_driver->init_termios.c_ospeed = 115200;
-+ tty_set_operations(ntty_driver, &tty_ops);
-+
-+ ret = tty_register_driver(ntty_driver);
-+ if (ret) {
-+ printk(KERN_ERR "Nozomi: failed to register ntty driver\n");
-+ goto free_tty;
-+ }
-+
-+ ret = pci_register_driver(&nozomi_driver);
-+ if (ret) {
-+ printk(KERN_ERR "Nozomi: can't register pci driver\n");
-+ goto unr_tty;
-+ }
-+
-+ return 0;
-+unr_tty:
-+ tty_unregister_driver(ntty_driver);
-+free_tty:
-+ put_tty_driver(ntty_driver);
-+ return ret;
-+}
-+
-+static __exit void nozomi_exit(void)
-+{
-+ printk(KERN_INFO "Unloading %s\n", DRIVER_DESC);
-+ pci_unregister_driver(&nozomi_driver);
-+ tty_unregister_driver(ntty_driver);
-+ put_tty_driver(ntty_driver);
-+}
-+
-+module_init(nozomi_init);
-+module_exit(nozomi_exit);
-+
-+module_param(debug, int, S_IRUGO | S_IWUSR);
-+
-+MODULE_LICENSE("Dual BSD/GPL");
-+MODULE_DESCRIPTION(DRIVER_DESC);
-diff --git a/drivers/connector/cn_queue.c b/drivers/connector/cn_queue.c
-index 12ceed5..5732ca3 100644
---- a/drivers/connector/cn_queue.c
-+++ b/drivers/connector/cn_queue.c
-@@ -104,7 +104,6 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, char *name, struct cb_id *id
- return -EINVAL;
- }
-
-- cbq->nls = dev->nls;
- cbq->seq = 0;
- cbq->group = cbq->id.id.idx;
-
-@@ -146,7 +145,6 @@ struct cn_queue_dev *cn_queue_alloc_dev(char *name, struct sock *nls)
- spin_lock_init(&dev->queue_lock);
-
- dev->nls = nls;
-- dev->netlink_groups = 0;
-
- dev->cn_queue = create_workqueue(dev->name);
- if (!dev->cn_queue) {
-diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
-index bf9716b..fea2d3e 100644
---- a/drivers/connector/connector.c
-+++ b/drivers/connector/connector.c
-@@ -88,6 +88,7 @@ int cn_netlink_send(struct cn_msg *msg, u32 __group, gfp_t gfp_mask)
- if (cn_cb_equal(&__cbq->id.id, &msg->id)) {
- found = 1;
- group = __cbq->group;
-+ break;
- }
- }
- spin_unlock_bh(&dev->cbdev->queue_lock);
-@@ -181,33 +182,14 @@ static int cn_call_callback(struct cn_msg *msg, void (*destruct_data)(void *), v
- }
-
- /*
-- * Skb receive helper - checks skb and msg size and calls callback
-- * helper.
+-}
+-
+-static inline int ll_new_hw_segment(struct request_queue *q,
+- struct request *req,
+- struct bio *bio)
+-{
+- int nr_hw_segs = bio_hw_segments(q, bio);
+- int nr_phys_segs = bio_phys_segments(q, bio);
+-
+- if (req->nr_hw_segments + nr_hw_segs > q->max_hw_segments
+- || req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
+- req->cmd_flags |= REQ_NOMERGE;
+- if (req == q->last_merge)
+- q->last_merge = NULL;
+- return 0;
+- }
+-
+- /*
+- * This will form the start of a new hw segment. Bump both
+- * counters.
+- */
+- req->nr_hw_segments += nr_hw_segs;
+- req->nr_phys_segments += nr_phys_segs;
+- return 1;
+-}
+-
+-static int ll_back_merge_fn(struct request_queue *q, struct request *req,
+- struct bio *bio)
+-{
+- unsigned short max_sectors;
+- int len;
+-
+- if (unlikely(blk_pc_request(req)))
+- max_sectors = q->max_hw_sectors;
+- else
+- max_sectors = q->max_sectors;
+-
+- if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
+- req->cmd_flags |= REQ_NOMERGE;
+- if (req == q->last_merge)
+- q->last_merge = NULL;
+- return 0;
+- }
+- if (unlikely(!bio_flagged(req->biotail, BIO_SEG_VALID)))
+- blk_recount_segments(q, req->biotail);
+- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+- blk_recount_segments(q, bio);
+- len = req->biotail->bi_hw_back_size + bio->bi_hw_front_size;
+- if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(req->biotail), __BVEC_START(bio)) &&
+- !BIOVEC_VIRT_OVERSIZE(len)) {
+- int mergeable = ll_new_mergeable(q, req, bio);
+-
+- if (mergeable) {
+- if (req->nr_hw_segments == 1)
+- req->bio->bi_hw_front_size = len;
+- if (bio->bi_hw_segments == 1)
+- bio->bi_hw_back_size = len;
+- }
+- return mergeable;
+- }
+-
+- return ll_new_hw_segment(q, req, bio);
+-}
+-
+-static int ll_front_merge_fn(struct request_queue *q, struct request *req,
+- struct bio *bio)
+-{
+- unsigned short max_sectors;
+- int len;
+-
+- if (unlikely(blk_pc_request(req)))
+- max_sectors = q->max_hw_sectors;
+- else
+- max_sectors = q->max_sectors;
+-
+-
+- if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
+- req->cmd_flags |= REQ_NOMERGE;
+- if (req == q->last_merge)
+- q->last_merge = NULL;
+- return 0;
+- }
+- len = bio->bi_hw_back_size + req->bio->bi_hw_front_size;
+- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+- blk_recount_segments(q, bio);
+- if (unlikely(!bio_flagged(req->bio, BIO_SEG_VALID)))
+- blk_recount_segments(q, req->bio);
+- if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(req->bio)) &&
+- !BIOVEC_VIRT_OVERSIZE(len)) {
+- int mergeable = ll_new_mergeable(q, req, bio);
+-
+- if (mergeable) {
+- if (bio->bi_hw_segments == 1)
+- bio->bi_hw_front_size = len;
+- if (req->nr_hw_segments == 1)
+- req->biotail->bi_hw_back_size = len;
+- }
+- return mergeable;
+- }
+-
+- return ll_new_hw_segment(q, req, bio);
+-}
+-
+-static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
+- struct request *next)
+-{
+- int total_phys_segments;
+- int total_hw_segments;
+-
+- /*
+- * First check if the either of the requests are re-queued
+- * requests. Can't merge them if they are.
+- */
+- if (req->special || next->special)
+- return 0;
+-
+- /*
+- * Will it become too large?
+- */
+- if ((req->nr_sectors + next->nr_sectors) > q->max_sectors)
+- return 0;
+-
+- total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
+- if (blk_phys_contig_segment(q, req->biotail, next->bio))
+- total_phys_segments--;
+-
+- if (total_phys_segments > q->max_phys_segments)
+- return 0;
+-
+- total_hw_segments = req->nr_hw_segments + next->nr_hw_segments;
+- if (blk_hw_contig_segment(q, req->biotail, next->bio)) {
+- int len = req->biotail->bi_hw_back_size + next->bio->bi_hw_front_size;
+- /*
+- * propagate the combined length to the end of the requests
+- */
+- if (req->nr_hw_segments == 1)
+- req->bio->bi_hw_front_size = len;
+- if (next->nr_hw_segments == 1)
+- next->biotail->bi_hw_back_size = len;
+- total_hw_segments--;
+- }
+-
+- if (total_hw_segments > q->max_hw_segments)
+- return 0;
+-
+- /* Merge is OK... */
+- req->nr_phys_segments = total_phys_segments;
+- req->nr_hw_segments = total_hw_segments;
+- return 1;
+-}
+-
+-/*
+- * "plug" the device if there are no outstanding requests: this will
+- * force the transfer to start only after we have put all the requests
+- * on the list.
+- *
+- * This is called with interrupts off and no requests on the queue and
+- * with the queue lock held.
- */
--static int __cn_rx_skb(struct sk_buff *skb, struct nlmsghdr *nlh)
+-void blk_plug_device(struct request_queue *q)
-{
-- u32 pid, uid, seq, group;
-- struct cn_msg *msg;
+- WARN_ON(!irqs_disabled());
-
-- pid = NETLINK_CREDS(skb)->pid;
-- uid = NETLINK_CREDS(skb)->uid;
-- seq = nlh->nlmsg_seq;
-- group = NETLINK_CB((skb)).dst_group;
-- msg = NLMSG_DATA(nlh);
+- /*
+- * don't plug a stopped queue, it must be paired with blk_start_queue()
+- * which will restart the queueing
+- */
+- if (blk_queue_stopped(q))
+- return;
-
-- return cn_call_callback(msg, (void (*)(void *))kfree_skb, skb);
+- if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
+- mod_timer(&q->unplug_timer, jiffies + q->unplug_delay);
+- blk_add_trace_generic(q, NULL, 0, BLK_TA_PLUG);
+- }
-}
-
+-EXPORT_SYMBOL(blk_plug_device);
+-
-/*
- * Main netlink receiving function.
- *
-- * It checks skb and netlink header sizes and calls the skb receive
-- * helper with a shared skb.
-+ * It checks skb, netlink header and msg sizes, and calls callback helper.
- */
- static void cn_rx_skb(struct sk_buff *__skb)
- {
-+ struct cn_msg *msg;
- struct nlmsghdr *nlh;
-- u32 len;
- int err;
- struct sk_buff *skb;
-
-@@ -223,11 +205,8 @@ static void cn_rx_skb(struct sk_buff *__skb)
- return;
- }
-
-- len = NLMSG_ALIGN(nlh->nlmsg_len);
-- if (len > skb->len)
-- len = skb->len;
+- * remove the queue from the plugged list, if present. called with
+- * queue lock held and interrupts disabled.
+- */
+-int blk_remove_plug(struct request_queue *q)
+-{
+- WARN_ON(!irqs_disabled());
-
-- err = __cn_rx_skb(skb, nlh);
-+ msg = NLMSG_DATA(nlh);
-+ err = cn_call_callback(msg, (void (*)(void *))kfree_skb, skb);
- if (err < 0)
- kfree_skb(skb);
- }
-@@ -441,8 +420,7 @@ static int __devinit cn_init(void)
-
- dev->cbdev = cn_queue_alloc_dev("cqueue", dev->nls);
- if (!dev->cbdev) {
-- if (dev->nls->sk_socket)
-- sock_release(dev->nls->sk_socket);
-+ netlink_kernel_release(dev->nls);
- return -EINVAL;
- }
-
-@@ -452,8 +430,7 @@ static int __devinit cn_init(void)
- if (err) {
- cn_already_initialized = 0;
- cn_queue_free_dev(dev->cbdev);
-- if (dev->nls->sk_socket)
-- sock_release(dev->nls->sk_socket);
-+ netlink_kernel_release(dev->nls);
- return -EINVAL;
- }
-
-@@ -468,8 +445,7 @@ static void __devexit cn_fini(void)
-
- cn_del_callback(&dev->id);
- cn_queue_free_dev(dev->cbdev);
-- if (dev->nls->sk_socket)
-- sock_release(dev->nls->sk_socket);
-+ netlink_kernel_release(dev->nls);
- }
-
- subsys_initcall(cn_init);
-diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
-index 79581fa..5efd555 100644
---- a/drivers/cpufreq/cpufreq.c
-+++ b/drivers/cpufreq/cpufreq.c
-@@ -828,11 +828,8 @@ static int cpufreq_add_dev (struct sys_device * sys_dev)
- memcpy(&new_policy, policy, sizeof(struct cpufreq_policy));
-
- /* prepare interface data */
-- policy->kobj.parent = &sys_dev->kobj;
-- policy->kobj.ktype = &ktype_cpufreq;
-- kobject_set_name(&policy->kobj, "cpufreq");
+- if (!test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
+- return 0;
-
-- ret = kobject_register(&policy->kobj);
-+ ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &sys_dev->kobj,
-+ "cpufreq");
- if (ret) {
- unlock_policy_rwsem_write(cpu);
- goto err_out_driver_exit;
-@@ -902,6 +899,7 @@ static int cpufreq_add_dev (struct sys_device * sys_dev)
- goto err_out_unregister;
- }
-
-+ kobject_uevent(&policy->kobj, KOBJ_ADD);
- module_put(cpufreq_driver->owner);
- dprintk("initialization complete\n");
- cpufreq_debug_enable_ratelimit();
-@@ -915,7 +913,7 @@ err_out_unregister:
- cpufreq_cpu_data[j] = NULL;
- spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
-
-- kobject_unregister(&policy->kobj);
-+ kobject_put(&policy->kobj);
- wait_for_completion(&policy->kobj_unregister);
-
- err_out_driver_exit:
-@@ -1032,8 +1030,6 @@ static int __cpufreq_remove_dev (struct sys_device * sys_dev)
-
- unlock_policy_rwsem_write(cpu);
-
-- kobject_unregister(&data->kobj);
+- del_timer(&q->unplug_timer);
+- return 1;
+-}
-
- kobject_put(&data->kobj);
-
- /* we need to make sure that the underlying kobj is actually
-diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
-index 0f3515e..088ea74 100644
---- a/drivers/cpuidle/sysfs.c
-+++ b/drivers/cpuidle/sysfs.c
-@@ -277,7 +277,7 @@ static struct kobj_type ktype_state_cpuidle = {
-
- static void inline cpuidle_free_state_kobj(struct cpuidle_device *device, int i)
- {
-- kobject_unregister(&device->kobjs[i]->kobj);
-+ kobject_put(&device->kobjs[i]->kobj);
- wait_for_completion(&device->kobjs[i]->kobj_unregister);
- kfree(device->kobjs[i]);
- device->kobjs[i] = NULL;
-@@ -300,14 +300,13 @@ int cpuidle_add_state_sysfs(struct cpuidle_device *device)
- kobj->state = &device->states[i];
- init_completion(&kobj->kobj_unregister);
-
-- kobj->kobj.parent = &device->kobj;
-- kobj->kobj.ktype = &ktype_state_cpuidle;
-- kobject_set_name(&kobj->kobj, "state%d", i);
-- ret = kobject_register(&kobj->kobj);
-+ ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle, &device->kobj,
-+ "state%d", i);
- if (ret) {
- kfree(kobj);
- goto error_state;
- }
-+ kobject_uevent(&kobj->kobj, KOBJ_ADD);
- device->kobjs[i] = kobj;
- }
-
-@@ -339,12 +338,14 @@ int cpuidle_add_sysfs(struct sys_device *sysdev)
- {
- int cpu = sysdev->id;
- struct cpuidle_device *dev;
-+ int error;
-
- dev = per_cpu(cpuidle_devices, cpu);
-- dev->kobj.parent = &sysdev->kobj;
-- dev->kobj.ktype = &ktype_cpuidle;
-- kobject_set_name(&dev->kobj, "%s", "cpuidle");
-- return kobject_register(&dev->kobj);
-+ error = kobject_init_and_add(&dev->kobj, &ktype_cpuidle, &sysdev->kobj,
-+ "cpuidle");
-+ if (!error)
-+ kobject_uevent(&dev->kobj, KOBJ_ADD);
-+ return error;
- }
-
- /**
-@@ -357,5 +358,5 @@ void cpuidle_remove_sysfs(struct sys_device *sysdev)
- struct cpuidle_device *dev;
-
- dev = per_cpu(cpuidle_devices, cpu);
-- kobject_unregister(&dev->kobj);
-+ kobject_put(&dev->kobj);
- }
-diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
-index ddd3a25..6b658d8 100644
---- a/drivers/crypto/Kconfig
-+++ b/drivers/crypto/Kconfig
-@@ -48,8 +48,6 @@ config CRYPTO_DEV_PADLOCK_SHA
- If unsure say M. The compiled module will be
- called padlock-sha.ko
-
--source "arch/s390/crypto/Kconfig"
+-EXPORT_SYMBOL(blk_remove_plug);
-
- config CRYPTO_DEV_GEODE
- tristate "Support for the Geode LX AES engine"
- depends on X86_32 && PCI
-@@ -83,4 +81,82 @@ config ZCRYPT_MONOLITHIC
- that contains all parts of the crypto device driver (ap bus,
- request router and all the card drivers).
-
-+config CRYPTO_SHA1_S390
-+ tristate "SHA1 digest algorithm"
-+ depends on S390
-+ select CRYPTO_ALGAPI
-+ help
-+ This is the s390 hardware accelerated implementation of the
-+ SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
-+
-+config CRYPTO_SHA256_S390
-+ tristate "SHA256 digest algorithm"
-+ depends on S390
-+ select CRYPTO_ALGAPI
-+ help
-+ This is the s390 hardware accelerated implementation of the
-+ SHA256 secure hash standard (DFIPS 180-2).
-+
-+ This version of SHA implements a 256 bit hash with 128 bits of
-+ security against collision attacks.
-+
-+config CRYPTO_DES_S390
-+ tristate "DES and Triple DES cipher algorithms"
-+ depends on S390
-+ select CRYPTO_ALGAPI
-+ select CRYPTO_BLKCIPHER
-+ help
-+	  This is the s390 hardware accelerated implementation of the
-+ DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
-+
-+config CRYPTO_AES_S390
-+ tristate "AES cipher algorithms"
-+ depends on S390
-+ select CRYPTO_ALGAPI
-+ select CRYPTO_BLKCIPHER
-+ help
-+ This is the s390 hardware accelerated implementation of the
-+ AES cipher algorithms (FIPS-197). AES uses the Rijndael
-+ algorithm.
-+
-+ Rijndael appears to be consistently a very good performer in
-+ both hardware and software across a wide range of computing
-+ environments regardless of its use in feedback or non-feedback
-+ modes. Its key setup time is excellent, and its key agility is
-+ good. Rijndael's very low memory requirements make it very well
-+ suited for restricted-space environments, in which it also
-+ demonstrates excellent performance. Rijndael's operations are
-+ among the easiest to defend against power and timing attacks.
-+
-+ On s390 the System z9-109 currently only supports the key size
-+ of 128 bit.
-+
-+config S390_PRNG
-+ tristate "Pseudo random number generator device driver"
-+ depends on S390
-+ default "m"
-+ help
-+ Select this option if you want to use the s390 pseudo random number
-+ generator. The PRNG is part of the cryptographic processor functions
-+ and uses triple-DES to generate secure random numbers like the
-+ ANSI X9.17 standard. The PRNG is usable via the char device
-+ /dev/prandom.
-+
-+config CRYPTO_DEV_HIFN_795X
-+ tristate "Driver HIFN 795x crypto accelerator chips"
-+ select CRYPTO_DES
-+ select CRYPTO_ALGAPI
-+ select CRYPTO_BLKCIPHER
-+ select HW_RANDOM if CRYPTO_DEV_HIFN_795X_RNG
-+ depends on PCI
-+ help
-+ This option allows you to have support for HIFN 795x crypto adapters.
-+
-+config CRYPTO_DEV_HIFN_795X_RNG
-+ bool "HIFN 795x random number generator"
-+ depends on CRYPTO_DEV_HIFN_795X
-+ help
-+ Select this option if you want to enable the random number generator
-+ on the HIFN 795x crypto adapters.
-+
- endif # CRYPTO_HW
-diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
-index d070030..c0327f0 100644
---- a/drivers/crypto/Makefile
-+++ b/drivers/crypto/Makefile
-@@ -1,3 +1,4 @@
- obj-$(CONFIG_CRYPTO_DEV_PADLOCK_AES) += padlock-aes.o
- obj-$(CONFIG_CRYPTO_DEV_PADLOCK_SHA) += padlock-sha.o
- obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
-+obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
-diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
-index 711e246..4801162 100644
---- a/drivers/crypto/geode-aes.c
-+++ b/drivers/crypto/geode-aes.c
-@@ -13,44 +13,13 @@
- #include <linux/crypto.h>
- #include <linux/spinlock.h>
- #include <crypto/algapi.h>
-+#include <crypto/aes.h>
-
- #include <asm/io.h>
- #include <asm/delay.h>
-
- #include "geode-aes.h"
-
--/* Register definitions */
+-/*
+- * remove the plug and let it rip..
+- */
+-void __generic_unplug_device(struct request_queue *q)
+-{
+- if (unlikely(blk_queue_stopped(q)))
+- return;
-
--#define AES_CTRLA_REG 0x0000
+- if (!blk_remove_plug(q))
+- return;
-
--#define AES_CTRL_START 0x01
--#define AES_CTRL_DECRYPT 0x00
--#define AES_CTRL_ENCRYPT 0x02
--#define AES_CTRL_WRKEY 0x04
--#define AES_CTRL_DCA 0x08
--#define AES_CTRL_SCA 0x10
--#define AES_CTRL_CBC 0x20
+- q->request_fn(q);
+-}
+-EXPORT_SYMBOL(__generic_unplug_device);
-
--#define AES_INTR_REG 0x0008
+-/**
+- * generic_unplug_device - fire a request queue
+- * @q: The &struct request_queue in question
+- *
+- * Description:
+- * Linux uses plugging to build bigger requests queues before letting
+- * the device have at them. If a queue is plugged, the I/O scheduler
+- * is still adding and merging requests on the queue. Once the queue
+- * gets unplugged, the request_fn defined for the queue is invoked and
+- * transfers started.
+- **/
+-void generic_unplug_device(struct request_queue *q)
+-{
+- spin_lock_irq(q->queue_lock);
+- __generic_unplug_device(q);
+- spin_unlock_irq(q->queue_lock);
+-}
+-EXPORT_SYMBOL(generic_unplug_device);
-
--#define AES_INTRA_PENDING (1 << 16)
--#define AES_INTRB_PENDING (1 << 17)
+-static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
+- struct page *page)
+-{
+- struct request_queue *q = bdi->unplug_io_data;
-
--#define AES_INTR_PENDING (AES_INTRA_PENDING | AES_INTRB_PENDING)
--#define AES_INTR_MASK 0x07
+- blk_unplug(q);
+-}
-
--#define AES_SOURCEA_REG 0x0010
--#define AES_DSTA_REG 0x0014
--#define AES_LENA_REG 0x0018
--#define AES_WRITEKEY0_REG 0x0030
--#define AES_WRITEIV0_REG 0x0040
+-static void blk_unplug_work(struct work_struct *work)
+-{
+- struct request_queue *q =
+- container_of(work, struct request_queue, unplug_work);
-
--/* A very large counter that is used to gracefully bail out of an
-- * operation in case of trouble
-- */
+- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+- q->rq.count[READ] + q->rq.count[WRITE]);
-
--#define AES_OP_TIMEOUT 0x50000
+- q->unplug_fn(q);
+-}
-
- /* Static structures */
-
- static void __iomem * _iobase;
-@@ -87,9 +56,10 @@ do_crypt(void *src, void *dst, int len, u32 flags)
- /* Start the operation */
- iowrite32(AES_CTRL_START | flags, _iobase + AES_CTRLA_REG);
-
-- do
-+ do {
- status = ioread32(_iobase + AES_INTR_REG);
-- while(!(status & AES_INTRA_PENDING) && --counter);
-+ cpu_relax();
-+ } while(!(status & AES_INTRA_PENDING) && --counter);
-
- /* Clear the event */
- iowrite32((status & 0xFF) | AES_INTRA_PENDING, _iobase + AES_INTR_REG);
-@@ -101,6 +71,7 @@ geode_aes_crypt(struct geode_aes_op *op)
- {
- u32 flags = 0;
- unsigned long iflags;
-+ int ret;
-
- if (op->len == 0)
- return 0;
-@@ -129,7 +100,8 @@ geode_aes_crypt(struct geode_aes_op *op)
- _writefield(AES_WRITEKEY0_REG, op->key);
- }
-
-- do_crypt(op->src, op->dst, op->len, flags);
-+ ret = do_crypt(op->src, op->dst, op->len, flags);
-+ BUG_ON(ret);
-
- if (op->mode == AES_MODE_CBC)
- _readfield(AES_WRITEIV0_REG, op->iv);
-@@ -141,18 +113,103 @@ geode_aes_crypt(struct geode_aes_op *op)
-
- /* CRYPTO-API Functions */
-
--static int
--geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
-+static int geode_setkey_cip(struct crypto_tfm *tfm, const u8 *key,
-+ unsigned int len)
- {
- struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+ unsigned int ret;
-
-- if (len != AES_KEY_LENGTH) {
-+ op->keylen = len;
-+
-+ if (len == AES_KEYSIZE_128) {
-+ memcpy(op->key, key, len);
-+ return 0;
-+ }
-+
-+ if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
-+ /* not supported at all */
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
- return -EINVAL;
- }
-
-- memcpy(op->key, key, len);
-- return 0;
-+ /*
-+ * The requested key size is not supported by HW, do a fallback
-+ */
-+ op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
-+ op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
-+
-+ ret = crypto_cipher_setkey(op->fallback.cip, key, len);
-+ if (ret) {
-+ tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
-+ tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
-+ }
-+ return ret;
-+}
-+
-+static int geode_setkey_blk(struct crypto_tfm *tfm, const u8 *key,
-+ unsigned int len)
-+{
-+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+ unsigned int ret;
-+
-+ op->keylen = len;
-+
-+ if (len == AES_KEYSIZE_128) {
-+ memcpy(op->key, key, len);
-+ return 0;
-+ }
-+
-+ if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
-+ /* not supported at all */
-+ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-+ return -EINVAL;
-+ }
-+
-+ /*
-+ * The requested key size is not supported by HW, do a fallback
-+ */
-+ op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
-+ op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
-+
-+ ret = crypto_blkcipher_setkey(op->fallback.blk, key, len);
-+ if (ret) {
-+ tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
-+ tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
-+ }
-+ return ret;
-+}
-+
-+static int fallback_blk_dec(struct blkcipher_desc *desc,
-+ struct scatterlist *dst, struct scatterlist *src,
-+ unsigned int nbytes)
-+{
-+ unsigned int ret;
-+ struct crypto_blkcipher *tfm;
-+ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
-+
-+ tfm = desc->tfm;
-+ desc->tfm = op->fallback.blk;
-+
-+ ret = crypto_blkcipher_decrypt_iv(desc, dst, src, nbytes);
-+
-+ desc->tfm = tfm;
-+ return ret;
-+}
-+static int fallback_blk_enc(struct blkcipher_desc *desc,
-+ struct scatterlist *dst, struct scatterlist *src,
-+ unsigned int nbytes)
-+{
-+ unsigned int ret;
-+ struct crypto_blkcipher *tfm;
-+ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
-+
-+ tfm = desc->tfm;
-+ desc->tfm = op->fallback.blk;
-+
-+ ret = crypto_blkcipher_encrypt_iv(desc, dst, src, nbytes);
-+
-+ desc->tfm = tfm;
-+ return ret;
- }
-
- static void
-@@ -160,8 +217,10 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- {
- struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-
-- if ((out == NULL) || (in == NULL))
-+ if (unlikely(op->keylen != AES_KEYSIZE_128)) {
-+ crypto_cipher_encrypt_one(op->fallback.cip, out, in);
- return;
-+ }
-
- op->src = (void *) in;
- op->dst = (void *) out;
-@@ -179,8 +238,10 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- {
- struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-
-- if ((out == NULL) || (in == NULL))
-+ if (unlikely(op->keylen != AES_KEYSIZE_128)) {
-+ crypto_cipher_decrypt_one(op->fallback.cip, out, in);
- return;
-+ }
-
- op->src = (void *) in;
- op->dst = (void *) out;
-@@ -192,24 +253,50 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
- geode_aes_crypt(op);
- }
-
-+static int fallback_init_cip(struct crypto_tfm *tfm)
-+{
-+ const char *name = tfm->__crt_alg->cra_name;
-+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+
-+ op->fallback.cip = crypto_alloc_cipher(name, 0,
-+ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
-+
-+ if (IS_ERR(op->fallback.cip)) {
-+ printk(KERN_ERR "Error allocating fallback algo %s\n", name);
-+ return PTR_ERR(op->fallback.blk);
-+ }
-+
-+ return 0;
-+}
-+
-+static void fallback_exit_cip(struct crypto_tfm *tfm)
-+{
-+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+
-+ crypto_free_cipher(op->fallback.cip);
-+ op->fallback.cip = NULL;
-+}
-
- static struct crypto_alg geode_alg = {
-- .cra_name = "aes",
-- .cra_driver_name = "geode-aes-128",
-- .cra_priority = 300,
-- .cra_alignmask = 15,
-- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
-+ .cra_name = "aes",
-+ .cra_driver_name = "geode-aes",
-+ .cra_priority = 300,
-+ .cra_alignmask = 15,
-+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER |
-+ CRYPTO_ALG_NEED_FALLBACK,
-+ .cra_init = fallback_init_cip,
-+ .cra_exit = fallback_exit_cip,
- .cra_blocksize = AES_MIN_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct geode_aes_op),
-- .cra_module = THIS_MODULE,
-- .cra_list = LIST_HEAD_INIT(geode_alg.cra_list),
-- .cra_u = {
-- .cipher = {
-- .cia_min_keysize = AES_KEY_LENGTH,
-- .cia_max_keysize = AES_KEY_LENGTH,
-- .cia_setkey = geode_setkey,
-- .cia_encrypt = geode_encrypt,
-- .cia_decrypt = geode_decrypt
-+ .cra_module = THIS_MODULE,
-+ .cra_list = LIST_HEAD_INIT(geode_alg.cra_list),
-+ .cra_u = {
-+ .cipher = {
-+ .cia_min_keysize = AES_MIN_KEY_SIZE,
-+ .cia_max_keysize = AES_MAX_KEY_SIZE,
-+ .cia_setkey = geode_setkey_cip,
-+ .cia_encrypt = geode_encrypt,
-+ .cia_decrypt = geode_decrypt
- }
- }
- };
-@@ -223,8 +310,12 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
- struct blkcipher_walk walk;
- int err, ret;
-
-+ if (unlikely(op->keylen != AES_KEYSIZE_128))
-+ return fallback_blk_dec(desc, dst, src, nbytes);
-+
- blkcipher_walk_init(&walk, dst, src, nbytes);
- err = blkcipher_walk_virt(desc, &walk);
-+ op->iv = walk.iv;
-
- while((nbytes = walk.nbytes)) {
- op->src = walk.src.virt.addr,
-@@ -233,13 +324,9 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
- op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
- op->dir = AES_DIR_DECRYPT;
-
-- memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+-static void blk_unplug_timeout(unsigned long data)
+-{
+- struct request_queue *q = (struct request_queue *)data;
-
- ret = geode_aes_crypt(op);
-
-- memcpy(walk.iv, op->iv, AES_IV_LENGTH);
- nbytes -= ret;
+- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
+- q->rq.count[READ] + q->rq.count[WRITE]);
-
- err = blkcipher_walk_done(desc, &walk, nbytes);
- }
-
-@@ -255,8 +342,12 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
- struct blkcipher_walk walk;
- int err, ret;
-
-+ if (unlikely(op->keylen != AES_KEYSIZE_128))
-+ return fallback_blk_enc(desc, dst, src, nbytes);
-+
- blkcipher_walk_init(&walk, dst, src, nbytes);
- err = blkcipher_walk_virt(desc, &walk);
-+ op->iv = walk.iv;
-
- while((nbytes = walk.nbytes)) {
- op->src = walk.src.virt.addr,
-@@ -265,8 +356,6 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
- op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
- op->dir = AES_DIR_ENCRYPT;
-
-- memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+- kblockd_schedule_work(&q->unplug_work);
+-}
-
- ret = geode_aes_crypt(op);
- nbytes -= ret;
- err = blkcipher_walk_done(desc, &walk, nbytes);
-@@ -275,22 +364,49 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
- return err;
- }
-
-+static int fallback_init_blk(struct crypto_tfm *tfm)
-+{
-+ const char *name = tfm->__crt_alg->cra_name;
-+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+
-+ op->fallback.blk = crypto_alloc_blkcipher(name, 0,
-+ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
-+
-+ if (IS_ERR(op->fallback.blk)) {
-+ printk(KERN_ERR "Error allocating fallback algo %s\n", name);
-+ return PTR_ERR(op->fallback.blk);
-+ }
-+
-+ return 0;
-+}
-+
-+static void fallback_exit_blk(struct crypto_tfm *tfm)
-+{
-+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
-+
-+ crypto_free_blkcipher(op->fallback.blk);
-+ op->fallback.blk = NULL;
-+}
-+
- static struct crypto_alg geode_cbc_alg = {
- .cra_name = "cbc(aes)",
-- .cra_driver_name = "cbc-aes-geode-128",
-+ .cra_driver_name = "cbc-aes-geode",
- .cra_priority = 400,
-- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
-+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
-+ CRYPTO_ALG_NEED_FALLBACK,
-+ .cra_init = fallback_init_blk,
-+ .cra_exit = fallback_exit_blk,
- .cra_blocksize = AES_MIN_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct geode_aes_op),
- .cra_alignmask = 15,
-- .cra_type = &crypto_blkcipher_type,
-- .cra_module = THIS_MODULE,
-- .cra_list = LIST_HEAD_INIT(geode_cbc_alg.cra_list),
-- .cra_u = {
-- .blkcipher = {
-- .min_keysize = AES_KEY_LENGTH,
-- .max_keysize = AES_KEY_LENGTH,
-- .setkey = geode_setkey,
-+ .cra_type = &crypto_blkcipher_type,
+-void blk_unplug(struct request_queue *q)
+-{
+- /*
+- * devices don't necessarily have an ->unplug_fn defined
+- */
+- if (q->unplug_fn) {
+- blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
+- q->rq.count[READ] + q->rq.count[WRITE]);
+-
+- q->unplug_fn(q);
+- }
+-}
+-EXPORT_SYMBOL(blk_unplug);
+-
+-/**
+- * blk_start_queue - restart a previously stopped queue
+- * @q: The &struct request_queue in question
+- *
+- * Description:
+- * blk_start_queue() will clear the stop flag on the queue, and call
+- * the request_fn for the queue if it was in a stopped state when
+- * entered. Also see blk_stop_queue(). Queue lock must be held.
+- **/
+-void blk_start_queue(struct request_queue *q)
+-{
+- WARN_ON(!irqs_disabled());
+-
+- clear_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
+-
+- /*
+- * one level of recursion is ok and is much faster than kicking
+- * the unplug handling
+- */
+- if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
+- q->request_fn(q);
+- clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
+- } else {
+- blk_plug_device(q);
+- kblockd_schedule_work(&q->unplug_work);
+- }
+-}
+-
+-EXPORT_SYMBOL(blk_start_queue);
+-
+-/**
+- * blk_stop_queue - stop a queue
+- * @q: The &struct request_queue in question
+- *
+- * Description:
+- * The Linux block layer assumes that a block driver will consume all
+- * entries on the request queue when the request_fn strategy is called.
+- * Often this will not happen, because of hardware limitations (queue
+- * depth settings). If a device driver gets a 'queue full' response,
+- * or if it simply chooses not to queue more I/O at one point, it can
+- * call this function to prevent the request_fn from being called until
+- * the driver has signalled it's ready to go again. This happens by calling
+- * blk_start_queue() to restart queue operations. Queue lock must be held.
+- **/
+-void blk_stop_queue(struct request_queue *q)
+-{
+- blk_remove_plug(q);
+- set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
+-}
+-EXPORT_SYMBOL(blk_stop_queue);
+-
+-/**
+- * blk_sync_queue - cancel any pending callbacks on a queue
+- * @q: the queue
+- *
+- * Description:
+- * The block layer may perform asynchronous callback activity
+- * on a queue, such as calling the unplug function after a timeout.
+- * A block device may call blk_sync_queue to ensure that any
+- * such activity is cancelled, thus allowing it to release resources
+- * that the callbacks might use. The caller must already have made sure
+- * that its ->make_request_fn will not re-add plugging prior to calling
+- * this function.
+- *
+- */
+-void blk_sync_queue(struct request_queue *q)
+-{
+- del_timer_sync(&q->unplug_timer);
+- kblockd_flush_work(&q->unplug_work);
+-}
+-EXPORT_SYMBOL(blk_sync_queue);
+-
+-/**
+- * blk_run_queue - run a single device queue
+- * @q: The queue to run
+- */
+-void blk_run_queue(struct request_queue *q)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(q->queue_lock, flags);
+- blk_remove_plug(q);
+-
+- /*
+- * Only recurse once to avoid overrunning the stack, let the unplug
+- * handling reinvoke the handler shortly if we already got there.
+- */
+- if (!elv_queue_empty(q)) {
+- if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
+- q->request_fn(q);
+- clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
+- } else {
+- blk_plug_device(q);
+- kblockd_schedule_work(&q->unplug_work);
+- }
+- }
+-
+- spin_unlock_irqrestore(q->queue_lock, flags);
+-}
+-EXPORT_SYMBOL(blk_run_queue);
+-
+-/**
+- * blk_cleanup_queue: - release a &struct request_queue when it is no longer needed
+- * @kobj: the kobj belonging of the request queue to be released
+- *
+- * Description:
+- * blk_cleanup_queue is the pair to blk_init_queue() or
+- * blk_queue_make_request(). It should be called when a request queue is
+- * being released; typically when a block device is being de-registered.
+- *     Currently, its primary task is to free all the &struct request
+- * structures that were allocated to the queue and the queue itself.
+- *
+- * Caveat:
+- * Hopefully the low level driver will have finished any
+- * outstanding requests first...
+- **/
+-static void blk_release_queue(struct kobject *kobj)
+-{
+- struct request_queue *q =
+- container_of(kobj, struct request_queue, kobj);
+- struct request_list *rl = &q->rq;
+-
+- blk_sync_queue(q);
+-
+- if (rl->rq_pool)
+- mempool_destroy(rl->rq_pool);
+-
+- if (q->queue_tags)
+- __blk_queue_free_tags(q);
+-
+- blk_trace_shutdown(q);
+-
+- bdi_destroy(&q->backing_dev_info);
+- kmem_cache_free(requestq_cachep, q);
+-}
+-
+-void blk_put_queue(struct request_queue *q)
+-{
+- kobject_put(&q->kobj);
+-}
+-EXPORT_SYMBOL(blk_put_queue);
+-
+-void blk_cleanup_queue(struct request_queue * q)
+-{
+- mutex_lock(&q->sysfs_lock);
+- set_bit(QUEUE_FLAG_DEAD, &q->queue_flags);
+- mutex_unlock(&q->sysfs_lock);
+-
+- if (q->elevator)
+- elevator_exit(q->elevator);
+-
+- blk_put_queue(q);
+-}
+-
+-EXPORT_SYMBOL(blk_cleanup_queue);
+-
+-static int blk_init_free_list(struct request_queue *q)
+-{
+- struct request_list *rl = &q->rq;
+-
+- rl->count[READ] = rl->count[WRITE] = 0;
+- rl->starved[READ] = rl->starved[WRITE] = 0;
+- rl->elvpriv = 0;
+- init_waitqueue_head(&rl->wait[READ]);
+- init_waitqueue_head(&rl->wait[WRITE]);
+-
+- rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
+- mempool_free_slab, request_cachep, q->node);
+-
+- if (!rl->rq_pool)
+- return -ENOMEM;
+-
+- return 0;
+-}
+-
+-struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
+-{
+- return blk_alloc_queue_node(gfp_mask, -1);
+-}
+-EXPORT_SYMBOL(blk_alloc_queue);
+-
+-static struct kobj_type queue_ktype;
+-
+-struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
+-{
+- struct request_queue *q;
+- int err;
+-
+- q = kmem_cache_alloc_node(requestq_cachep,
+- gfp_mask | __GFP_ZERO, node_id);
+- if (!q)
+- return NULL;
+-
+- q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
+- q->backing_dev_info.unplug_io_data = q;
+- err = bdi_init(&q->backing_dev_info);
+- if (err) {
+- kmem_cache_free(requestq_cachep, q);
+- return NULL;
+- }
+-
+- init_timer(&q->unplug_timer);
+-
+- kobject_set_name(&q->kobj, "%s", "queue");
+- q->kobj.ktype = &queue_ktype;
+- kobject_init(&q->kobj);
+-
+- mutex_init(&q->sysfs_lock);
+-
+- return q;
+-}
+-EXPORT_SYMBOL(blk_alloc_queue_node);
+-
+-/**
+- * blk_init_queue - prepare a request queue for use with a block device
+- * @rfn: The function to be called to process requests that have been
+- * placed on the queue.
+- * @lock: Request queue spin lock
+- *
+- * Description:
+- * If a block device wishes to use the standard request handling procedures,
+- * which sorts requests and coalesces adjacent requests, then it must
+- * call blk_init_queue(). The function @rfn will be called when there
+- * are requests on the queue that need to be processed. If the device
+- * supports plugging, then @rfn may not be called immediately when requests
+- * are available on the queue, but may be called at some time later instead.
+- * Plugged queues are generally unplugged when a buffer belonging to one
+- * of the requests on the queue is needed, or due to memory pressure.
+- *
+- * @rfn is not required, or even expected, to remove all requests off the
+- * queue, but only as many as it can handle at a time. If it does leave
+- * requests on the queue, it is responsible for arranging that the requests
+- * get dealt with eventually.
+- *
+- * The queue spin lock must be held while manipulating the requests on the
+- * request queue; this lock will be taken also from interrupt context, so irq
+- * disabling is needed for it.
+- *
+- * Function returns a pointer to the initialized request queue, or NULL if
+- * it didn't succeed.
+- *
+- * Note:
+- * blk_init_queue() must be paired with a blk_cleanup_queue() call
+- * when the block device is deactivated (such as at module unload).
+- **/
+-
+-struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
+-{
+- return blk_init_queue_node(rfn, lock, -1);
+-}
+-EXPORT_SYMBOL(blk_init_queue);
+-
+-struct request_queue *
+-blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
+-{
+- struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+-
+- if (!q)
+- return NULL;
+-
+- q->node = node_id;
+- if (blk_init_free_list(q)) {
+- kmem_cache_free(requestq_cachep, q);
+- return NULL;
+- }
+-
+- /*
+- * if caller didn't supply a lock, they get per-queue locking with
+- * our embedded lock
+- */
+- if (!lock) {
+- spin_lock_init(&q->__queue_lock);
+- lock = &q->__queue_lock;
+- }
+-
+- q->request_fn = rfn;
+- q->prep_rq_fn = NULL;
+- q->unplug_fn = generic_unplug_device;
+- q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
+- q->queue_lock = lock;
+-
+- blk_queue_segment_boundary(q, 0xffffffff);
+-
+- blk_queue_make_request(q, __make_request);
+- blk_queue_max_segment_size(q, MAX_SEGMENT_SIZE);
+-
+- blk_queue_max_hw_segments(q, MAX_HW_SEGMENTS);
+- blk_queue_max_phys_segments(q, MAX_PHYS_SEGMENTS);
+-
+- q->sg_reserved_size = INT_MAX;
+-
+- /*
+- * all done
+- */
+- if (!elevator_init(q, NULL)) {
+- blk_queue_congestion_threshold(q);
+- return q;
+- }
+-
+- blk_put_queue(q);
+- return NULL;
+-}
+-EXPORT_SYMBOL(blk_init_queue_node);
+-
+-int blk_get_queue(struct request_queue *q)
+-{
+- if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+- kobject_get(&q->kobj);
+- return 0;
+- }
+-
+- return 1;
+-}
+-
+-EXPORT_SYMBOL(blk_get_queue);
+-
+-static inline void blk_free_request(struct request_queue *q, struct request *rq)
+-{
+- if (rq->cmd_flags & REQ_ELVPRIV)
+- elv_put_request(q, rq);
+- mempool_free(rq, q->rq.rq_pool);
+-}
+-
+-static struct request *
+-blk_alloc_request(struct request_queue *q, int rw, int priv, gfp_t gfp_mask)
+-{
+- struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+-
+- if (!rq)
+- return NULL;
+-
+- /*
+- * first three bits are identical in rq->cmd_flags and bio->bi_rw,
+- * see bio.h and blkdev.h
+- */
+- rq->cmd_flags = rw | REQ_ALLOCED;
+-
+- if (priv) {
+- if (unlikely(elv_set_request(q, rq, gfp_mask))) {
+- mempool_free(rq, q->rq.rq_pool);
+- return NULL;
+- }
+- rq->cmd_flags |= REQ_ELVPRIV;
+- }
+-
+- return rq;
+-}
+-
+-/*
+- * ioc_batching returns true if the ioc is a valid batching request and
+- * should be given priority access to a request.
+- */
+-static inline int ioc_batching(struct request_queue *q, struct io_context *ioc)
+-{
+- if (!ioc)
+- return 0;
+-
+- /*
+- * Make sure the process is able to allocate at least 1 request
+- * even if the batch times out, otherwise we could theoretically
+- * lose wakeups.
+- */
+- return ioc->nr_batch_requests == q->nr_batching ||
+- (ioc->nr_batch_requests > 0
+- && time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME));
+-}
+-
+-/*
+- * ioc_set_batching sets ioc to be a new "batcher" if it is not one. This
+- * will cause the process to be a "batcher" on all queues in the system. This
+- * is the behaviour we want though - once it gets a wakeup it should be given
+- * a nice run.
+- */
+-static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
+-{
+- if (!ioc || ioc_batching(q, ioc))
+- return;
+-
+- ioc->nr_batch_requests = q->nr_batching;
+- ioc->last_waited = jiffies;
+-}
+-
+-static void __freed_request(struct request_queue *q, int rw)
+-{
+- struct request_list *rl = &q->rq;
+-
+- if (rl->count[rw] < queue_congestion_off_threshold(q))
+- blk_clear_queue_congested(q, rw);
+-
+- if (rl->count[rw] + 1 <= q->nr_requests) {
+- if (waitqueue_active(&rl->wait[rw]))
+- wake_up(&rl->wait[rw]);
+-
+- blk_clear_queue_full(q, rw);
+- }
+-}
+-
+-/*
+- * A request has just been released. Account for it, update the full and
+- * congestion status, wake up any waiters. Called under q->queue_lock.
+- */
+-static void freed_request(struct request_queue *q, int rw, int priv)
+-{
+- struct request_list *rl = &q->rq;
+-
+- rl->count[rw]--;
+- if (priv)
+- rl->elvpriv--;
+-
+- __freed_request(q, rw);
+-
+- if (unlikely(rl->starved[rw ^ 1]))
+- __freed_request(q, rw ^ 1);
+-}
+-
+-#define blkdev_free_rq(list) list_entry((list)->next, struct request, queuelist)
+-/*
+- * Get a free request, queue_lock must be held.
+- * Returns NULL on failure, with queue_lock held.
+- * Returns !NULL on success, with queue_lock *not held*.
+- */
+-static struct request *get_request(struct request_queue *q, int rw_flags,
+- struct bio *bio, gfp_t gfp_mask)
+-{
+- struct request *rq = NULL;
+- struct request_list *rl = &q->rq;
+- struct io_context *ioc = NULL;
+- const int rw = rw_flags & 0x01;
+- int may_queue, priv;
+-
+- may_queue = elv_may_queue(q, rw_flags);
+- if (may_queue == ELV_MQUEUE_NO)
+- goto rq_starved;
+-
+- if (rl->count[rw]+1 >= queue_congestion_on_threshold(q)) {
+- if (rl->count[rw]+1 >= q->nr_requests) {
+- ioc = current_io_context(GFP_ATOMIC, q->node);
+- /*
+- * The queue will fill after this allocation, so set
+- * it as full, and mark this process as "batching".
+- * This process will be allowed to complete a batch of
+- * requests, others will be blocked.
+- */
+- if (!blk_queue_full(q, rw)) {
+- ioc_set_batching(q, ioc);
+- blk_set_queue_full(q, rw);
+- } else {
+- if (may_queue != ELV_MQUEUE_MUST
+- && !ioc_batching(q, ioc)) {
+- /*
+- * The queue is full and the allocating
+- * process is not a "batcher", and not
+- * exempted by the IO scheduler
+- */
+- goto out;
+- }
+- }
+- }
+- blk_set_queue_congested(q, rw);
+- }
+-
+- /*
+- * Only allow batching queuers to allocate up to 50% over the defined
+- * limit of requests, otherwise we could have thousands of requests
+- * allocated with any setting of ->nr_requests
+- */
+- if (rl->count[rw] >= (3 * q->nr_requests / 2))
+- goto out;
+-
+- rl->count[rw]++;
+- rl->starved[rw] = 0;
+-
+- priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
+- if (priv)
+- rl->elvpriv++;
+-
+- spin_unlock_irq(q->queue_lock);
+-
+- rq = blk_alloc_request(q, rw_flags, priv, gfp_mask);
+- if (unlikely(!rq)) {
+- /*
+- * Allocation failed presumably due to memory. Undo anything
+- * we might have messed up.
+- *
+- * Allocating task should really be put onto the front of the
+- * wait queue, but this is pretty rare.
+- */
+- spin_lock_irq(q->queue_lock);
+- freed_request(q, rw, priv);
+-
+- /*
+- * in the very unlikely event that allocation failed and no
+- * requests for this direction was pending, mark us starved
+- * so that freeing of a request in the other direction will
+- * notice us. another possible fix would be to split the
+- * rq mempool into READ and WRITE
+- */
+-rq_starved:
+- if (unlikely(rl->count[rw] == 0))
+- rl->starved[rw] = 1;
+-
+- goto out;
+- }
+-
+- /*
+- * ioc may be NULL here, and ioc_batching will be false. That's
+- * OK, if the queue is under the request limit then requests need
+- * not count toward the nr_batch_requests limit. There will always
+- * be some limit enforced by BLK_BATCH_TIME.
+- */
+- if (ioc_batching(q, ioc))
+- ioc->nr_batch_requests--;
+-
+- rq_init(q, rq);
+-
+- blk_add_trace_generic(q, bio, rw, BLK_TA_GETRQ);
+-out:
+- return rq;
+-}
+-
+-/*
+- * No available requests for this queue, unplug the device and wait for some
+- * requests to become available.
+- *
+- * Called with q->queue_lock held, and returns with it unlocked.
+- */
+-static struct request *get_request_wait(struct request_queue *q, int rw_flags,
+- struct bio *bio)
+-{
+- const int rw = rw_flags & 0x01;
+- struct request *rq;
+-
+- rq = get_request(q, rw_flags, bio, GFP_NOIO);
+- while (!rq) {
+- DEFINE_WAIT(wait);
+- struct request_list *rl = &q->rq;
+-
+- prepare_to_wait_exclusive(&rl->wait[rw], &wait,
+- TASK_UNINTERRUPTIBLE);
+-
+- rq = get_request(q, rw_flags, bio, GFP_NOIO);
+-
+- if (!rq) {
+- struct io_context *ioc;
+-
+- blk_add_trace_generic(q, bio, rw, BLK_TA_SLEEPRQ);
+-
+- __generic_unplug_device(q);
+- spin_unlock_irq(q->queue_lock);
+- io_schedule();
+-
+- /*
+- * After sleeping, we become a "batching" process and
+- * will be able to allocate at least one request, and
+- * up to a big batch of them for a small period time.
+- * See ioc_batching, ioc_set_batching
+- */
+- ioc = current_io_context(GFP_NOIO, q->node);
+- ioc_set_batching(q, ioc);
+-
+- spin_lock_irq(q->queue_lock);
+- }
+- finish_wait(&rl->wait[rw], &wait);
+- }
+-
+- return rq;
+-}
+-
+-struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
+-{
+- struct request *rq;
+-
+- BUG_ON(rw != READ && rw != WRITE);
+-
+- spin_lock_irq(q->queue_lock);
+- if (gfp_mask & __GFP_WAIT) {
+- rq = get_request_wait(q, rw, NULL);
+- } else {
+- rq = get_request(q, rw, NULL, gfp_mask);
+- if (!rq)
+- spin_unlock_irq(q->queue_lock);
+- }
+- /* q->queue_lock is unlocked at this point */
+-
+- return rq;
+-}
+-EXPORT_SYMBOL(blk_get_request);
+-
+-/**
+- * blk_start_queueing - initiate dispatch of requests to device
+- * @q: request queue to kick into gear
+- *
+- * This is basically a helper to remove the need to know whether a queue
+- * is plugged or not if someone just wants to initiate dispatch of requests
+- * for this queue.
+- *
+- * The queue lock must be held with interrupts disabled.
+- */
+-void blk_start_queueing(struct request_queue *q)
+-{
+- if (!blk_queue_plugged(q))
+- q->request_fn(q);
+- else
+- __generic_unplug_device(q);
+-}
+-EXPORT_SYMBOL(blk_start_queueing);
+-
+-/**
+- * blk_requeue_request - put a request back on queue
+- * @q: request queue where request should be inserted
+- * @rq: request to be inserted
+- *
+- * Description:
+- * Drivers often keep queueing requests until the hardware cannot accept
+- * more, when that condition happens we need to put the request back
+- * on the queue. Must be called with queue lock held.
+- */
+-void blk_requeue_request(struct request_queue *q, struct request *rq)
+-{
+- blk_add_trace_rq(q, rq, BLK_TA_REQUEUE);
+-
+- if (blk_rq_tagged(rq))
+- blk_queue_end_tag(q, rq);
+-
+- elv_requeue_request(q, rq);
+-}
+-
+-EXPORT_SYMBOL(blk_requeue_request);
+-
+-/**
+- * blk_insert_request - insert a special request in to a request queue
+- * @q: request queue where request should be inserted
+- * @rq: request to be inserted
+- * @at_head: insert request at head or tail of queue
+- * @data: private data
+- *
+- * Description:
+- * Many block devices need to execute commands asynchronously, so they don't
+- * block the whole kernel from preemption during request execution. This is
+- * accomplished normally by inserting aritficial requests tagged as
+- * REQ_SPECIAL in to the corresponding request queue, and letting them be
+- * scheduled for actual execution by the request queue.
+- *
+- * We have the option of inserting the head or the tail of the queue.
+- * Typically we use the tail for new ioctls and so forth. We use the head
+- * of the queue for things like a QUEUE_FULL message from a device, or a
+- * host that is unable to accept a particular command.
+- */
+-void blk_insert_request(struct request_queue *q, struct request *rq,
+- int at_head, void *data)
+-{
+- int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+- unsigned long flags;
+-
+- /*
+- * tell I/O scheduler that this isn't a regular read/write (ie it
+- * must not attempt merges on this) and that it acts as a soft
+- * barrier
+- */
+- rq->cmd_type = REQ_TYPE_SPECIAL;
+- rq->cmd_flags |= REQ_SOFTBARRIER;
+-
+- rq->special = data;
+-
+- spin_lock_irqsave(q->queue_lock, flags);
+-
+- /*
+- * If command is tagged, release the tag
+- */
+- if (blk_rq_tagged(rq))
+- blk_queue_end_tag(q, rq);
+-
+- drive_stat_acct(rq, 1);
+- __elv_add_request(q, rq, where, 0);
+- blk_start_queueing(q);
+- spin_unlock_irqrestore(q->queue_lock, flags);
+-}
+-
+-EXPORT_SYMBOL(blk_insert_request);
+-
+-static int __blk_rq_unmap_user(struct bio *bio)
+-{
+- int ret = 0;
+-
+- if (bio) {
+- if (bio_flagged(bio, BIO_USER_MAPPED))
+- bio_unmap_user(bio);
+- else
+- ret = bio_uncopy_user(bio);
+- }
+-
+- return ret;
+-}
+-
+-int blk_rq_append_bio(struct request_queue *q, struct request *rq,
+- struct bio *bio)
+-{
+- if (!rq->bio)
+- blk_rq_bio_prep(q, rq, bio);
+- else if (!ll_back_merge_fn(q, rq, bio))
+- return -EINVAL;
+- else {
+- rq->biotail->bi_next = bio;
+- rq->biotail = bio;
+-
+- rq->data_len += bio->bi_size;
+- }
+- return 0;
+-}
+-EXPORT_SYMBOL(blk_rq_append_bio);
+-
+-static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
+- void __user *ubuf, unsigned int len)
+-{
+- unsigned long uaddr;
+- struct bio *bio, *orig_bio;
+- int reading, ret;
+-
+- reading = rq_data_dir(rq) == READ;
+-
+- /*
+- * if alignment requirement is satisfied, map in user pages for
+- * direct dma. else, set up kernel bounce buffers
+- */
+- uaddr = (unsigned long) ubuf;
+- if (!(uaddr & queue_dma_alignment(q)) && !(len & queue_dma_alignment(q)))
+- bio = bio_map_user(q, NULL, uaddr, len, reading);
+- else
+- bio = bio_copy_user(q, uaddr, len, reading);
+-
+- if (IS_ERR(bio))
+- return PTR_ERR(bio);
+-
+- orig_bio = bio;
+- blk_queue_bounce(q, &bio);
+-
+- /*
+- * We link the bounce buffer in and could have to traverse it
+- * later so we have to get a ref to prevent it from being freed
+- */
+- bio_get(bio);
+-
+- ret = blk_rq_append_bio(q, rq, bio);
+- if (!ret)
+- return bio->bi_size;
+-
+- /* if it was boucned we must call the end io function */
+- bio_endio(bio, 0);
+- __blk_rq_unmap_user(orig_bio);
+- bio_put(bio);
+- return ret;
+-}
+-
+-/**
+- * blk_rq_map_user - map user data to a request, for REQ_BLOCK_PC usage
+- * @q: request queue where request should be inserted
+- * @rq: request structure to fill
+- * @ubuf: the user buffer
+- * @len: length of user data
+- *
+- * Description:
+- * Data will be mapped directly for zero copy io, if possible. Otherwise
+- * a kernel bounce buffer is used.
+- *
+- * A matching blk_rq_unmap_user() must be issued at the end of io, while
+- * still in process context.
+- *
+- * Note: The mapped bio may need to be bounced through blk_queue_bounce()
+- * before being submitted to the device, as pages mapped may be out of
+- * reach. It's the callers responsibility to make sure this happens. The
+- * original bio must be passed back in to blk_rq_unmap_user() for proper
+- * unmapping.
+- */
+-int blk_rq_map_user(struct request_queue *q, struct request *rq,
+- void __user *ubuf, unsigned long len)
+-{
+- unsigned long bytes_read = 0;
+- struct bio *bio = NULL;
+- int ret;
+-
+- if (len > (q->max_hw_sectors << 9))
+- return -EINVAL;
+- if (!len || !ubuf)
+- return -EINVAL;
+-
+- while (bytes_read != len) {
+- unsigned long map_len, end, start;
+-
+- map_len = min_t(unsigned long, len - bytes_read, BIO_MAX_SIZE);
+- end = ((unsigned long)ubuf + map_len + PAGE_SIZE - 1)
+- >> PAGE_SHIFT;
+- start = (unsigned long)ubuf >> PAGE_SHIFT;
+-
+- /*
+- * A bad offset could cause us to require BIO_MAX_PAGES + 1
+- * pages. If this happens we just lower the requested
+- * mapping len by a page so that we can fit
+- */
+- if (end - start > BIO_MAX_PAGES)
+- map_len -= PAGE_SIZE;
+-
+- ret = __blk_rq_map_user(q, rq, ubuf, map_len);
+- if (ret < 0)
+- goto unmap_rq;
+- if (!bio)
+- bio = rq->bio;
+- bytes_read += ret;
+- ubuf += ret;
+- }
+-
+- rq->buffer = rq->data = NULL;
+- return 0;
+-unmap_rq:
+- blk_rq_unmap_user(bio);
+- return ret;
+-}
+-
+-EXPORT_SYMBOL(blk_rq_map_user);
+-
+-/**
+- * blk_rq_map_user_iov - map user data to a request, for REQ_BLOCK_PC usage
+- * @q: request queue where request should be inserted
+- * @rq: request to map data to
+- * @iov: pointer to the iovec
+- * @iov_count: number of elements in the iovec
+- * @len: I/O byte count
+- *
+- * Description:
+- * Data will be mapped directly for zero copy io, if possible. Otherwise
+- * a kernel bounce buffer is used.
+- *
+- * A matching blk_rq_unmap_user() must be issued at the end of io, while
+- * still in process context.
+- *
+- * Note: The mapped bio may need to be bounced through blk_queue_bounce()
+- * before being submitted to the device, as pages mapped may be out of
+- * reach. It's the callers responsibility to make sure this happens. The
+- * original bio must be passed back in to blk_rq_unmap_user() for proper
+- * unmapping.
+- */
+-int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
+- struct sg_iovec *iov, int iov_count, unsigned int len)
+-{
+- struct bio *bio;
+-
+- if (!iov || iov_count <= 0)
+- return -EINVAL;
+-
+- /* we don't allow misaligned data like bio_map_user() does. If the
+- * user is using sg, they're expected to know the alignment constraints
+- * and respect them accordingly */
+- bio = bio_map_user_iov(q, NULL, iov, iov_count, rq_data_dir(rq)== READ);
+- if (IS_ERR(bio))
+- return PTR_ERR(bio);
+-
+- if (bio->bi_size != len) {
+- bio_endio(bio, 0);
+- bio_unmap_user(bio);
+- return -EINVAL;
+- }
+-
+- bio_get(bio);
+- blk_rq_bio_prep(q, rq, bio);
+- rq->buffer = rq->data = NULL;
+- return 0;
+-}
+-
+-EXPORT_SYMBOL(blk_rq_map_user_iov);
+-
+-/**
+- * blk_rq_unmap_user - unmap a request with user data
+- * @bio: start of bio list
+- *
+- * Description:
+- * Unmap a rq previously mapped by blk_rq_map_user(). The caller must
+- * supply the original rq->bio from the blk_rq_map_user() return, since
+- * the io completion may have changed rq->bio.
+- */
+-int blk_rq_unmap_user(struct bio *bio)
+-{
+- struct bio *mapped_bio;
+- int ret = 0, ret2;
+-
+- while (bio) {
+- mapped_bio = bio;
+- if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
+- mapped_bio = bio->bi_private;
+-
+- ret2 = __blk_rq_unmap_user(mapped_bio);
+- if (ret2 && !ret)
+- ret = ret2;
+-
+- mapped_bio = bio;
+- bio = bio->bi_next;
+- bio_put(mapped_bio);
+- }
+-
+- return ret;
+-}
+-
+-EXPORT_SYMBOL(blk_rq_unmap_user);
+-
+-/**
+- * blk_rq_map_kern - map kernel data to a request, for REQ_BLOCK_PC usage
+- * @q: request queue where request should be inserted
+- * @rq: request to fill
+- * @kbuf: the kernel buffer
+- * @len: length of user data
+- * @gfp_mask: memory allocation flags
+- */
+-int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
+- unsigned int len, gfp_t gfp_mask)
+-{
+- struct bio *bio;
+-
+- if (len > (q->max_hw_sectors << 9))
+- return -EINVAL;
+- if (!len || !kbuf)
+- return -EINVAL;
+-
+- bio = bio_map_kern(q, kbuf, len, gfp_mask);
+- if (IS_ERR(bio))
+- return PTR_ERR(bio);
+-
+- if (rq_data_dir(rq) == WRITE)
+- bio->bi_rw |= (1 << BIO_RW);
+-
+- blk_rq_bio_prep(q, rq, bio);
+- blk_queue_bounce(q, &rq->bio);
+- rq->buffer = rq->data = NULL;
+- return 0;
+-}
+-
+-EXPORT_SYMBOL(blk_rq_map_kern);
+-
+-/**
+- * blk_execute_rq_nowait - insert a request into queue for execution
+- * @q: queue to insert the request in
+- * @bd_disk: matching gendisk
+- * @rq: request to insert
+- * @at_head: insert request at head or tail of queue
+- * @done: I/O completion handler
+- *
+- * Description:
+- * Insert a fully prepared request at the back of the io scheduler queue
+- * for execution. Don't wait for completion.
+- */
+-void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
+- struct request *rq, int at_head,
+- rq_end_io_fn *done)
+-{
+- int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+-
+- rq->rq_disk = bd_disk;
+- rq->cmd_flags |= REQ_NOMERGE;
+- rq->end_io = done;
+- WARN_ON(irqs_disabled());
+- spin_lock_irq(q->queue_lock);
+- __elv_add_request(q, rq, where, 1);
+- __generic_unplug_device(q);
+- spin_unlock_irq(q->queue_lock);
+-}
+-EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
+-
+-/**
+- * blk_execute_rq - insert a request into queue for execution
+- * @q: queue to insert the request in
+- * @bd_disk: matching gendisk
+- * @rq: request to insert
+- * @at_head: insert request at head or tail of queue
+- *
+- * Description:
+- * Insert a fully prepared request at the back of the io scheduler queue
+- * for execution and wait for completion.
+- */
+-int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
+- struct request *rq, int at_head)
+-{
+- DECLARE_COMPLETION_ONSTACK(wait);
+- char sense[SCSI_SENSE_BUFFERSIZE];
+- int err = 0;
+-
+- /*
+- * we need an extra reference to the request, so we can look at
+- * it after io completion
+- */
+- rq->ref_count++;
+-
+- if (!rq->sense) {
+- memset(sense, 0, sizeof(sense));
+- rq->sense = sense;
+- rq->sense_len = 0;
+- }
+-
+- rq->end_io_data = &wait;
+- blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);
+- wait_for_completion(&wait);
+-
+- if (rq->errors)
+- err = -EIO;
+-
+- return err;
+-}
+-
+-EXPORT_SYMBOL(blk_execute_rq);
+-
+-static void bio_end_empty_barrier(struct bio *bio, int err)
+-{
+- if (err)
+- clear_bit(BIO_UPTODATE, &bio->bi_flags);
+-
+- complete(bio->bi_private);
+-}
+-
+-/**
+- * blkdev_issue_flush - queue a flush
+- * @bdev: blockdev to issue flush for
+- * @error_sector: error sector
+- *
+- * Description:
+- * Issue a flush for the block device in question. Caller can supply
+- * room for storing the error offset in case of a flush error, if they
+- * wish to. Caller must run wait_for_completion() on its own.
+- */
+-int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
+-{
+- DECLARE_COMPLETION_ONSTACK(wait);
+- struct request_queue *q;
+- struct bio *bio;
+- int ret;
+-
+- if (bdev->bd_disk == NULL)
+- return -ENXIO;
+-
+- q = bdev_get_queue(bdev);
+- if (!q)
+- return -ENXIO;
+-
+- bio = bio_alloc(GFP_KERNEL, 0);
+- if (!bio)
+- return -ENOMEM;
+-
+- bio->bi_end_io = bio_end_empty_barrier;
+- bio->bi_private = &wait;
+- bio->bi_bdev = bdev;
+- submit_bio(1 << BIO_RW_BARRIER, bio);
+-
+- wait_for_completion(&wait);
+-
+- /*
+- * The driver must store the error location in ->bi_sector, if
+- * it supports it. For non-stacked drivers, this should be copied
+- * from rq->sector.
+- */
+- if (error_sector)
+- *error_sector = bio->bi_sector;
+-
+- ret = 0;
+- if (!bio_flagged(bio, BIO_UPTODATE))
+- ret = -EIO;
+-
+- bio_put(bio);
+- return ret;
+-}
+-
+-EXPORT_SYMBOL(blkdev_issue_flush);
+-
+-static void drive_stat_acct(struct request *rq, int new_io)
+-{
+- int rw = rq_data_dir(rq);
+-
+- if (!blk_fs_request(rq) || !rq->rq_disk)
+- return;
+-
+- if (!new_io) {
+- __disk_stat_inc(rq->rq_disk, merges[rw]);
+- } else {
+- disk_round_stats(rq->rq_disk);
+- rq->rq_disk->in_flight++;
+- }
+-}
+-
+-/*
+- * add-request adds a request to the linked list.
+- * queue lock is held and interrupts disabled, as we muck with the
+- * request queue list.
+- */
+-static inline void add_request(struct request_queue * q, struct request * req)
+-{
+- drive_stat_acct(req, 1);
+-
+- /*
+- * elevator indicated where it wants this request to be
+- * inserted at elevator_merge time
+- */
+- __elv_add_request(q, req, ELEVATOR_INSERT_SORT, 0);
+-}
+-
+-/*
+- * disk_round_stats() - Round off the performance stats on a struct
+- * disk_stats.
+- *
+- * The average IO queue length and utilisation statistics are maintained
+- * by observing the current state of the queue length and the amount of
+- * time it has been in this state for.
+- *
+- * Normally, that accounting is done on IO completion, but that can result
+- * in more than a second's worth of IO being accounted for within any one
+- * second, leading to >100% utilisation. To deal with that, we call this
+- * function to do a round-off before returning the results when reading
+- * /proc/diskstats. This accounts immediately for all queue usage up to
+- * the current jiffies and restarts the counters again.
+- */
+-void disk_round_stats(struct gendisk *disk)
+-{
+- unsigned long now = jiffies;
+-
+- if (now == disk->stamp)
+- return;
+-
+- if (disk->in_flight) {
+- __disk_stat_add(disk, time_in_queue,
+- disk->in_flight * (now - disk->stamp));
+- __disk_stat_add(disk, io_ticks, (now - disk->stamp));
+- }
+- disk->stamp = now;
+-}
+-
+-EXPORT_SYMBOL_GPL(disk_round_stats);
+-
+-/*
+- * queue lock must be held
+- */
+-void __blk_put_request(struct request_queue *q, struct request *req)
+-{
+- if (unlikely(!q))
+- return;
+- if (unlikely(--req->ref_count))
+- return;
+-
+- elv_completed_request(q, req);
+-
+- /*
+- * Request may not have originated from ll_rw_blk. if not,
+- * it didn't come out of our reserved rq pools
+- */
+- if (req->cmd_flags & REQ_ALLOCED) {
+- int rw = rq_data_dir(req);
+- int priv = req->cmd_flags & REQ_ELVPRIV;
+-
+- BUG_ON(!list_empty(&req->queuelist));
+- BUG_ON(!hlist_unhashed(&req->hash));
+-
+- blk_free_request(q, req);
+- freed_request(q, rw, priv);
+- }
+-}
+-
+-EXPORT_SYMBOL_GPL(__blk_put_request);
+-
+-void blk_put_request(struct request *req)
+-{
+- unsigned long flags;
+- struct request_queue *q = req->q;
+-
+- /*
+- * Gee, IDE calls in w/ NULL q. Fix IDE and remove the
+- * following if (q) test.
+- */
+- if (q) {
+- spin_lock_irqsave(q->queue_lock, flags);
+- __blk_put_request(q, req);
+- spin_unlock_irqrestore(q->queue_lock, flags);
+- }
+-}
+-
+-EXPORT_SYMBOL(blk_put_request);
+-
+-/**
+- * blk_end_sync_rq - executes a completion event on a request
+- * @rq: request to complete
+- * @error: end io status of the request
+- */
+-void blk_end_sync_rq(struct request *rq, int error)
+-{
+- struct completion *waiting = rq->end_io_data;
+-
+- rq->end_io_data = NULL;
+- __blk_put_request(rq->q, rq);
+-
+- /*
+- * complete last, if this is a stack request the process (and thus
+- * the rq pointer) could be invalid right after this complete()
+- */
+- complete(waiting);
+-}
+-EXPORT_SYMBOL(blk_end_sync_rq);
+-
+-/*
+- * Has to be called with the request spinlock acquired
+- */
+-static int attempt_merge(struct request_queue *q, struct request *req,
+- struct request *next)
+-{
+- if (!rq_mergeable(req) || !rq_mergeable(next))
+- return 0;
+-
+- /*
+- * not contiguous
+- */
+- if (req->sector + req->nr_sectors != next->sector)
+- return 0;
+-
+- if (rq_data_dir(req) != rq_data_dir(next)
+- || req->rq_disk != next->rq_disk
+- || next->special)
+- return 0;
+-
+- /*
+- * If we are allowed to merge, then append bio list
+- * from next to rq and release next. merge_requests_fn
+- * will have updated segment counts, update sector
+- * counts here.
+- */
+- if (!ll_merge_requests_fn(q, req, next))
+- return 0;
+-
+- /*
+- * At this point we have either done a back merge
+- * or front merge. We need the smaller start_time of
+- * the merged requests to be the current request
+- * for accounting purposes.
+- */
+- if (time_after(req->start_time, next->start_time))
+- req->start_time = next->start_time;
+-
+- req->biotail->bi_next = next->bio;
+- req->biotail = next->biotail;
+-
+- req->nr_sectors = req->hard_nr_sectors += next->hard_nr_sectors;
+-
+- elv_merge_requests(q, req, next);
+-
+- if (req->rq_disk) {
+- disk_round_stats(req->rq_disk);
+- req->rq_disk->in_flight--;
+- }
+-
+- req->ioprio = ioprio_best(req->ioprio, next->ioprio);
+-
+- __blk_put_request(q, next);
+- return 1;
+-}
+-
+-static inline int attempt_back_merge(struct request_queue *q,
+- struct request *rq)
+-{
+- struct request *next = elv_latter_request(q, rq);
+-
+- if (next)
+- return attempt_merge(q, rq, next);
+-
+- return 0;
+-}
+-
+-static inline int attempt_front_merge(struct request_queue *q,
+- struct request *rq)
+-{
+- struct request *prev = elv_former_request(q, rq);
+-
+- if (prev)
+- return attempt_merge(q, prev, rq);
+-
+- return 0;
+-}
+-
+-static void init_request_from_bio(struct request *req, struct bio *bio)
+-{
+- req->cmd_type = REQ_TYPE_FS;
+-
+- /*
+- * inherit FAILFAST from bio (for read-ahead, and explicit FAILFAST)
+- */
+- if (bio_rw_ahead(bio) || bio_failfast(bio))
+- req->cmd_flags |= REQ_FAILFAST;
+-
+- /*
+- * REQ_BARRIER implies no merging, but lets make it explicit
+- */
+- if (unlikely(bio_barrier(bio)))
+- req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
+-
+- if (bio_sync(bio))
+- req->cmd_flags |= REQ_RW_SYNC;
+- if (bio_rw_meta(bio))
+- req->cmd_flags |= REQ_RW_META;
+-
+- req->errors = 0;
+- req->hard_sector = req->sector = bio->bi_sector;
+- req->ioprio = bio_prio(bio);
+- req->start_time = jiffies;
+- blk_rq_bio_prep(req->q, req, bio);
+-}
+-
+-static int __make_request(struct request_queue *q, struct bio *bio)
+-{
+- struct request *req;
+- int el_ret, nr_sectors, barrier, err;
+- const unsigned short prio = bio_prio(bio);
+- const int sync = bio_sync(bio);
+- int rw_flags;
+-
+- nr_sectors = bio_sectors(bio);
+-
+- /*
+- * low level driver can indicate that it wants pages above a
+- * certain limit bounced to low memory (ie for highmem, or even
+- * ISA dma in theory)
+- */
+- blk_queue_bounce(q, &bio);
+-
+- barrier = bio_barrier(bio);
+- if (unlikely(barrier) && (q->next_ordered == QUEUE_ORDERED_NONE)) {
+- err = -EOPNOTSUPP;
+- goto end_io;
+- }
+-
+- spin_lock_irq(q->queue_lock);
+-
+- if (unlikely(barrier) || elv_queue_empty(q))
+- goto get_rq;
+-
+- el_ret = elv_merge(q, &req, bio);
+- switch (el_ret) {
+- case ELEVATOR_BACK_MERGE:
+- BUG_ON(!rq_mergeable(req));
+-
+- if (!ll_back_merge_fn(q, req, bio))
+- break;
+-
+- blk_add_trace_bio(q, bio, BLK_TA_BACKMERGE);
+-
+- req->biotail->bi_next = bio;
+- req->biotail = bio;
+- req->nr_sectors = req->hard_nr_sectors += nr_sectors;
+- req->ioprio = ioprio_best(req->ioprio, prio);
+- drive_stat_acct(req, 0);
+- if (!attempt_back_merge(q, req))
+- elv_merged_request(q, req, el_ret);
+- goto out;
+-
+- case ELEVATOR_FRONT_MERGE:
+- BUG_ON(!rq_mergeable(req));
+-
+- if (!ll_front_merge_fn(q, req, bio))
+- break;
+-
+- blk_add_trace_bio(q, bio, BLK_TA_FRONTMERGE);
+-
+- bio->bi_next = req->bio;
+- req->bio = bio;
+-
+- /*
+- * may not be valid. if the low level driver said
+- * it didn't need a bounce buffer then it better
+- * not touch req->buffer either...
+- */
+- req->buffer = bio_data(bio);
+- req->current_nr_sectors = bio_cur_sectors(bio);
+- req->hard_cur_sectors = req->current_nr_sectors;
+- req->sector = req->hard_sector = bio->bi_sector;
+- req->nr_sectors = req->hard_nr_sectors += nr_sectors;
+- req->ioprio = ioprio_best(req->ioprio, prio);
+- drive_stat_acct(req, 0);
+- if (!attempt_front_merge(q, req))
+- elv_merged_request(q, req, el_ret);
+- goto out;
+-
+- /* ELV_NO_MERGE: elevator says don't/can't merge. */
+- default:
+- ;
+- }
+-
+-get_rq:
+- /*
+- * This sync check and mask will be re-done in init_request_from_bio(),
+- * but we need to set it earlier to expose the sync flag to the
+- * rq allocator and io schedulers.
+- */
+- rw_flags = bio_data_dir(bio);
+- if (sync)
+- rw_flags |= REQ_RW_SYNC;
+-
+- /*
+- * Grab a free request. This is might sleep but can not fail.
+- * Returns with the queue unlocked.
+- */
+- req = get_request_wait(q, rw_flags, bio);
+-
+- /*
+- * After dropping the lock and possibly sleeping here, our request
+- * may now be mergeable after it had proven unmergeable (above).
+- * We don't worry about that case for efficiency. It won't happen
+- * often, and the elevators are able to handle it.
+- */
+- init_request_from_bio(req, bio);
+-
+- spin_lock_irq(q->queue_lock);
+- if (elv_queue_empty(q))
+- blk_plug_device(q);
+- add_request(q, req);
+-out:
+- if (sync)
+- __generic_unplug_device(q);
+-
+- spin_unlock_irq(q->queue_lock);
+- return 0;
+-
+-end_io:
+- bio_endio(bio, err);
+- return 0;
+-}
+-
+-/*
+- * If bio->bi_dev is a partition, remap the location
+- */
+-static inline void blk_partition_remap(struct bio *bio)
+-{
+- struct block_device *bdev = bio->bi_bdev;
+-
+- if (bio_sectors(bio) && bdev != bdev->bd_contains) {
+- struct hd_struct *p = bdev->bd_part;
+- const int rw = bio_data_dir(bio);
+-
+- p->sectors[rw] += bio_sectors(bio);
+- p->ios[rw]++;
+-
+- bio->bi_sector += p->start_sect;
+- bio->bi_bdev = bdev->bd_contains;
+-
+- blk_add_trace_remap(bdev_get_queue(bio->bi_bdev), bio,
+- bdev->bd_dev, bio->bi_sector,
+- bio->bi_sector - p->start_sect);
+- }
+-}
+-
+-static void handle_bad_sector(struct bio *bio)
+-{
+- char b[BDEVNAME_SIZE];
+-
+- printk(KERN_INFO "attempt to access beyond end of device\n");
+- printk(KERN_INFO "%s: rw=%ld, want=%Lu, limit=%Lu\n",
+- bdevname(bio->bi_bdev, b),
+- bio->bi_rw,
+- (unsigned long long)bio->bi_sector + bio_sectors(bio),
+- (long long)(bio->bi_bdev->bd_inode->i_size >> 9));
+-
+- set_bit(BIO_EOF, &bio->bi_flags);
+-}
+-
+-#ifdef CONFIG_FAIL_MAKE_REQUEST
+-
+-static DECLARE_FAULT_ATTR(fail_make_request);
+-
+-static int __init setup_fail_make_request(char *str)
+-{
+- return setup_fault_attr(&fail_make_request, str);
+-}
+-__setup("fail_make_request=", setup_fail_make_request);
+-
+-static int should_fail_request(struct bio *bio)
+-{
+- if ((bio->bi_bdev->bd_disk->flags & GENHD_FL_FAIL) ||
+- (bio->bi_bdev->bd_part && bio->bi_bdev->bd_part->make_it_fail))
+- return should_fail(&fail_make_request, bio->bi_size);
+-
+- return 0;
+-}
+-
+-static int __init fail_make_request_debugfs(void)
+-{
+- return init_fault_attr_dentries(&fail_make_request,
+- "fail_make_request");
+-}
+-
+-late_initcall(fail_make_request_debugfs);
+-
+-#else /* CONFIG_FAIL_MAKE_REQUEST */
+-
+-static inline int should_fail_request(struct bio *bio)
+-{
+- return 0;
+-}
+-
+-#endif /* CONFIG_FAIL_MAKE_REQUEST */
+-
+-/*
+- * Check whether this bio extends beyond the end of the device.
+- */
+-static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors)
+-{
+- sector_t maxsector;
+-
+- if (!nr_sectors)
+- return 0;
+-
+- /* Test device or partition size, when known. */
+- maxsector = bio->bi_bdev->bd_inode->i_size >> 9;
+- if (maxsector) {
+- sector_t sector = bio->bi_sector;
+-
+- if (maxsector < nr_sectors || maxsector - nr_sectors < sector) {
+- /*
+- * This may well happen - the kernel calls bread()
+- * without checking the size of the device, e.g., when
+- * mounting a device.
+- */
+- handle_bad_sector(bio);
+- return 1;
+- }
+- }
+-
+- return 0;
+-}
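The two-comparison test in bio_check_eod() above is deliberately written so that `sector + nr_sectors` is never computed directly, since that sum can wrap near the top of the sector_t range. A minimal user-space sketch of the same check (illustrative names, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Reject a request of nr_sectors starting at sector on a device of
 * maxsector total sectors.  Two comparisons instead of computing
 * sector + nr_sectors, so the sum can never overflow -- the same
 * shape as bio_check_eod() above. */
static int beyond_eod(sector_t sector, unsigned int nr_sectors,
                      sector_t maxsector)
{
	if (!nr_sectors || !maxsector)
		return 0;	/* empty bio or size unknown: accept */
	return maxsector < nr_sectors || maxsector - nr_sectors < sector;
}
```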
+-
+-/**
+- * generic_make_request: hand a buffer to its device driver for I/O
+- * @bio: The bio describing the location in memory and on the device.
+- *
+- * generic_make_request() is used to make I/O requests of block
+- * devices. It is passed a &struct bio, which describes the I/O that needs
+- * to be done.
+- *
+- * generic_make_request() does not return any status. The
+- * success/failure status of the request, along with notification of
+- * completion, is delivered asynchronously through the bio->bi_end_io
+- * function described (one day) elsewhere.
+- *
+- * The caller of generic_make_request must make sure that bi_io_vec
+- * are set to describe the memory buffer, and that bi_dev and bi_sector are
+- * set to describe the device address, and the
+- * bi_end_io and optionally bi_private are set to describe how
+- * completion notification should be signaled.
+- *
+- * generic_make_request and the drivers it calls may use bi_next if this
+- * bio happens to be merged with someone else, and may change bi_dev and
+- * bi_sector for remaps as it sees fit. So the values of these fields
+- * should NOT be depended on after the call to generic_make_request.
+- */
+-static inline void __generic_make_request(struct bio *bio)
+-{
+- struct request_queue *q;
+- sector_t old_sector;
+- int ret, nr_sectors = bio_sectors(bio);
+- dev_t old_dev;
+- int err = -EIO;
+-
+- might_sleep();
+-
+- if (bio_check_eod(bio, nr_sectors))
+- goto end_io;
+-
+- /*
+- * Resolve the mapping until finished. (drivers are
+- * still free to implement/resolve their own stacking
+- * by explicitly returning 0)
+- *
+- * NOTE: we don't repeat the blk_size check for each new device.
+- * Stacking drivers are expected to know what they are doing.
+- */
+- old_sector = -1;
+- old_dev = 0;
+- do {
+- char b[BDEVNAME_SIZE];
+-
+- q = bdev_get_queue(bio->bi_bdev);
+- if (!q) {
+- printk(KERN_ERR
+- "generic_make_request: Trying to access "
+- "nonexistent block-device %s (%Lu)\n",
+- bdevname(bio->bi_bdev, b),
+- (long long) bio->bi_sector);
+-end_io:
+- bio_endio(bio, err);
+- break;
+- }
+-
+- if (unlikely(nr_sectors > q->max_hw_sectors)) {
+- printk("bio too big device %s (%u > %u)\n",
+- bdevname(bio->bi_bdev, b),
+- bio_sectors(bio),
+- q->max_hw_sectors);
+- goto end_io;
+- }
+-
+- if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+- goto end_io;
+-
+- if (should_fail_request(bio))
+- goto end_io;
+-
+- /*
+- * If this device has partitions, remap block n
+- * of partition p to block n+start(p) of the disk.
+- */
+- blk_partition_remap(bio);
+-
+- if (old_sector != -1)
+- blk_add_trace_remap(q, bio, old_dev, bio->bi_sector,
+- old_sector);
+-
+- blk_add_trace_bio(q, bio, BLK_TA_QUEUE);
+-
+- old_sector = bio->bi_sector;
+- old_dev = bio->bi_bdev->bd_dev;
+-
+- if (bio_check_eod(bio, nr_sectors))
+- goto end_io;
+- if (bio_empty_barrier(bio) && !q->prepare_flush_fn) {
+- err = -EOPNOTSUPP;
+- goto end_io;
+- }
+-
+- ret = q->make_request_fn(q, bio);
+- } while (ret);
+-}
+-
+-/*
+- * We only want one ->make_request_fn to be active at a time,
+- * else stack usage with stacked devices could be a problem.
+- * So use current->bio_{list,tail} to keep a list of requests
+- * submitted by a make_request_fn function.
+- * current->bio_tail is also used as a flag to say if
+- * generic_make_request is currently active in this task or not.
+- * If it is NULL, then no make_request is active. If it is non-NULL,
+- * then a make_request is active, and new requests should be added
+- * at the tail
+- */
+-void generic_make_request(struct bio *bio)
+-{
+- if (current->bio_tail) {
+- /* make_request is active */
+- *(current->bio_tail) = bio;
+- bio->bi_next = NULL;
+- current->bio_tail = &bio->bi_next;
+- return;
+- }
+- /* following loop may be a bit non-obvious, and so deserves some
+- * explanation.
+- * Before entering the loop, bio->bi_next is NULL (as all callers
+- * ensure that) so we have a list with a single bio.
+- * We pretend that we have just taken it off a longer list, so
+- * we assign bio_list to the next (which is NULL) and bio_tail
+- * to &bio_list, thus initialising the bio_list of new bios to be
+- * added. __generic_make_request may indeed add some more bios
+- * through a recursive call to generic_make_request. If it
+- * did, we find a non-NULL value in bio_list and re-enter the loop
+- * from the top. In this case we really did just take the bio
+- * of the top of the list (no pretending) and so fixup bio_list and
+- * bio_tail or bi_next, and call into __generic_make_request again.
+- *
+- * The loop was structured like this to make only one call to
+- * __generic_make_request (which is important as it is large and
+- * inlined) and to keep the structure simple.
+- */
+- BUG_ON(bio->bi_next);
+- do {
+- current->bio_list = bio->bi_next;
+- if (bio->bi_next == NULL)
+-			current->bio_tail = &current->bio_list;
+- else
+- bio->bi_next = NULL;
+- __generic_make_request(bio);
+- bio = current->bio_list;
+- } while (bio);
+- current->bio_tail = NULL; /* deactivate */
+-}
+-
+-EXPORT_SYMBOL(generic_make_request);
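The current->bio_list / bio_tail trick in generic_make_request() above can be reduced to a small pattern: a nested submission from inside processing is appended to a per-task list and drained iteratively by the outermost call, so stacked resubmissions never grow the stack. A user-space sketch of that pattern (all names illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int id; };

static struct node *list;
static struct node **tail;		/* NULL <=> no submit() active */
static int order[8], norder;
static struct node child = { NULL, 2 };

static void submit(struct node *n);

static void process(struct node *n)
{
	order[norder++] = n->id;
	if (n->id == 1)			/* a "stacking driver" resubmits */
		submit(&child);
}

static void submit(struct node *n)
{
	if (tail) {			/* already active: append and return */
		n->next = NULL;
		*tail = n;
		tail = &n->next;
		return;
	}
	do {				/* outermost call drains the list */
		list = n->next;
		if (!n->next)
			tail = &list;
		else
			n->next = NULL;
		process(n);
		n = list;
	} while (n);
	tail = NULL;			/* deactivate */
}

static int run(void)
{
	struct node first = { NULL, 1 };

	submit(&first);
	return norder;
}
```

The nested submit() sees a non-NULL tail, queues the child, and returns immediately; the outer loop then picks it up, so processing order is 1 then 2 with only one level of process() on the stack.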
+-
+-/**
+- * submit_bio: submit a bio to the block device layer for I/O
+- * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
+- * @bio: The &struct bio which describes the I/O
+- *
+- * submit_bio() is very similar in purpose to generic_make_request(), and
+- * uses that function to do most of the work. Both are fairly rough
+- * interfaces, @bio must be presetup and ready for I/O.
+- *
+- */
+-void submit_bio(int rw, struct bio *bio)
+-{
+- int count = bio_sectors(bio);
+-
+- bio->bi_rw |= rw;
+-
+- /*
+- * If it's a regular read/write or a barrier with data attached,
+- * go through the normal accounting stuff before submission.
+- */
+- if (!bio_empty_barrier(bio)) {
+-
+- BIO_BUG_ON(!bio->bi_size);
+- BIO_BUG_ON(!bio->bi_io_vec);
+-
+- if (rw & WRITE) {
+- count_vm_events(PGPGOUT, count);
+- } else {
+- task_io_account_read(bio->bi_size);
+- count_vm_events(PGPGIN, count);
+- }
+-
+- if (unlikely(block_dump)) {
+- char b[BDEVNAME_SIZE];
+- printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n",
+- current->comm, task_pid_nr(current),
+- (rw & WRITE) ? "WRITE" : "READ",
+- (unsigned long long)bio->bi_sector,
+- bdevname(bio->bi_bdev,b));
+- }
+- }
+-
+- generic_make_request(bio);
+-}
+-
+-EXPORT_SYMBOL(submit_bio);
+-
+-static void blk_recalc_rq_sectors(struct request *rq, int nsect)
+-{
+- if (blk_fs_request(rq)) {
+- rq->hard_sector += nsect;
+- rq->hard_nr_sectors -= nsect;
+-
+- /*
+- * Move the I/O submission pointers ahead if required.
+- */
+- if ((rq->nr_sectors >= rq->hard_nr_sectors) &&
+- (rq->sector <= rq->hard_sector)) {
+- rq->sector = rq->hard_sector;
+- rq->nr_sectors = rq->hard_nr_sectors;
+- rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
+- rq->current_nr_sectors = rq->hard_cur_sectors;
+- rq->buffer = bio_data(rq->bio);
+- }
+-
+- /*
+- * if total number of sectors is less than the first segment
+- * size, something has gone terribly wrong
+- */
+- if (rq->nr_sectors < rq->current_nr_sectors) {
+- printk("blk: request botched\n");
+- rq->nr_sectors = rq->current_nr_sectors;
+- }
+- }
+-}
+-
+-static int __end_that_request_first(struct request *req, int uptodate,
+- int nr_bytes)
+-{
+- int total_bytes, bio_nbytes, error, next_idx = 0;
+- struct bio *bio;
+-
+- blk_add_trace_rq(req->q, req, BLK_TA_COMPLETE);
+-
+- /*
+- * extend uptodate bool to allow < 0 value to be direct io error
+- */
+- error = 0;
+- if (end_io_error(uptodate))
+- error = !uptodate ? -EIO : uptodate;
+-
+- /*
+- * for a REQ_BLOCK_PC request, we want to carry any eventual
+- * sense key with us all the way through
+- */
+- if (!blk_pc_request(req))
+- req->errors = 0;
+-
+- if (!uptodate) {
+- if (blk_fs_request(req) && !(req->cmd_flags & REQ_QUIET))
+- printk("end_request: I/O error, dev %s, sector %llu\n",
+- req->rq_disk ? req->rq_disk->disk_name : "?",
+- (unsigned long long)req->sector);
+- }
+-
+- if (blk_fs_request(req) && req->rq_disk) {
+- const int rw = rq_data_dir(req);
+-
+- disk_stat_add(req->rq_disk, sectors[rw], nr_bytes >> 9);
+- }
+-
+- total_bytes = bio_nbytes = 0;
+- while ((bio = req->bio) != NULL) {
+- int nbytes;
+-
+- /*
+- * For an empty barrier request, the low level driver must
+- * store a potential error location in ->sector. We pass
+- * that back up in ->bi_sector.
+- */
+- if (blk_empty_barrier(req))
+- bio->bi_sector = req->sector;
+-
+- if (nr_bytes >= bio->bi_size) {
+- req->bio = bio->bi_next;
+- nbytes = bio->bi_size;
+- req_bio_endio(req, bio, nbytes, error);
+- next_idx = 0;
+- bio_nbytes = 0;
+- } else {
+- int idx = bio->bi_idx + next_idx;
+-
+- if (unlikely(bio->bi_idx >= bio->bi_vcnt)) {
+- blk_dump_rq_flags(req, "__end_that");
+- printk("%s: bio idx %d >= vcnt %d\n",
+- __FUNCTION__,
+- bio->bi_idx, bio->bi_vcnt);
+- break;
+- }
+-
+- nbytes = bio_iovec_idx(bio, idx)->bv_len;
+- BIO_BUG_ON(nbytes > bio->bi_size);
+-
+- /*
+- * not a complete bvec done
+- */
+- if (unlikely(nbytes > nr_bytes)) {
+- bio_nbytes += nr_bytes;
+- total_bytes += nr_bytes;
+- break;
+- }
+-
+- /*
+- * advance to the next vector
+- */
+- next_idx++;
+- bio_nbytes += nbytes;
+- }
+-
+- total_bytes += nbytes;
+- nr_bytes -= nbytes;
+-
+- if ((bio = req->bio)) {
+- /*
+- * end more in this run, or just return 'not-done'
+- */
+- if (unlikely(nr_bytes <= 0))
+- break;
+- }
+- }
+-
+- /*
+- * completely done
+- */
+- if (!req->bio)
+- return 0;
+-
+- /*
+- * if the request wasn't completed, update state
+- */
+- if (bio_nbytes) {
+- req_bio_endio(req, bio, bio_nbytes, error);
+- bio->bi_idx += next_idx;
+- bio_iovec(bio)->bv_offset += nr_bytes;
+- bio_iovec(bio)->bv_len -= nr_bytes;
+- }
+-
+- blk_recalc_rq_sectors(req, total_bytes >> 9);
+- blk_recalc_rq_segments(req);
+- return 1;
+-}
+-
+-/**
+- * end_that_request_first - end I/O on a request
+- * @req: the request being processed
+- * @uptodate: 1 for success, 0 for I/O error, < 0 for specific error
+- * @nr_sectors: number of sectors to end I/O on
+- *
+- * Description:
+- * Ends I/O on a number of sectors attached to @req, and sets it up
+- * for the next range of segments (if any) in the cluster.
+- *
+- * Return:
+- * 0 - we are done with this request, call end_that_request_last()
+- * 1 - still buffers pending for this request
+- **/
+-int end_that_request_first(struct request *req, int uptodate, int nr_sectors)
+-{
+- return __end_that_request_first(req, uptodate, nr_sectors << 9);
+-}
+-
+-EXPORT_SYMBOL(end_that_request_first);
+-
+-/**
+- * end_that_request_chunk - end I/O on a request
+- * @req: the request being processed
+- * @uptodate: 1 for success, 0 for I/O error, < 0 for specific error
+- * @nr_bytes: number of bytes to complete
+- *
+- * Description:
+- * Ends I/O on a number of bytes attached to @req, and sets it up
+- * for the next range of segments (if any). Like end_that_request_first(),
+- * but deals with bytes instead of sectors.
+- *
+- * Return:
+- * 0 - we are done with this request, call end_that_request_last()
+- * 1 - still buffers pending for this request
+- **/
+-int end_that_request_chunk(struct request *req, int uptodate, int nr_bytes)
+-{
+- return __end_that_request_first(req, uptodate, nr_bytes);
+-}
+-
+-EXPORT_SYMBOL(end_that_request_chunk);
+-
+-/*
+- * splice the completion data to a local structure and hand off to
+- * process_completion_queue() to complete the requests
+- */
+-static void blk_done_softirq(struct softirq_action *h)
+-{
+- struct list_head *cpu_list, local_list;
+-
+- local_irq_disable();
+- cpu_list = &__get_cpu_var(blk_cpu_done);
+- list_replace_init(cpu_list, &local_list);
+- local_irq_enable();
+-
+- while (!list_empty(&local_list)) {
+- struct request *rq = list_entry(local_list.next, struct request, donelist);
+-
+- list_del_init(&rq->donelist);
+- rq->q->softirq_done_fn(rq);
+- }
+-}
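blk_done_softirq() above detaches the shared per-CPU list onto a local head inside the irq-off window and completes the requests afterwards, keeping the critical section O(1) regardless of how many completions queued up. A simplified single-threaded sketch of that splice-then-process shape (irq calls shown as comments; not kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct req { struct req *next; int done; };

static struct req *cpu_list;	/* shared, normally irq-protected */

static void complete_all(void)
{
	struct req *local;

	/* local_irq_disable(); */
	local = cpu_list;	/* list_replace_init() equivalent */
	cpu_list = NULL;
	/* local_irq_enable(); */

	while (local) {		/* completion runs with irqs enabled */
		struct req *rq = local;

		local = rq->next;
		rq->done = 1;
	}
}

static struct req a, b;

static int run(void)
{
	b = (struct req){ NULL, 0 };
	a = (struct req){ &b, 0 };
	cpu_list = &a;
	complete_all();
	return a.done + b.done;
}
```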
+-
+-static int __cpuinit blk_cpu_notify(struct notifier_block *self, unsigned long action,
+- void *hcpu)
+-{
+- /*
+- * If a CPU goes away, splice its entries to the current CPU
+- * and trigger a run of the softirq
+- */
+- if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+- int cpu = (unsigned long) hcpu;
+-
+- local_irq_disable();
+- list_splice_init(&per_cpu(blk_cpu_done, cpu),
+- &__get_cpu_var(blk_cpu_done));
+- raise_softirq_irqoff(BLOCK_SOFTIRQ);
+- local_irq_enable();
+- }
+-
+- return NOTIFY_OK;
+-}
+-
+-
+-static struct notifier_block blk_cpu_notifier __cpuinitdata = {
+- .notifier_call = blk_cpu_notify,
+-};
+-
+-/**
+- * blk_complete_request - end I/O on a request
+- * @req: the request being processed
+- *
+- * Description:
+- * Ends all I/O on a request. It does not handle partial completions,
+- * unless the driver actually implements this in its completion callback
+- * through requeueing. The actual completion happens out-of-order,
+- * through a softirq handler. The user must have registered a completion
+- * callback through blk_queue_softirq_done().
+- **/
+-
+-void blk_complete_request(struct request *req)
+-{
+- struct list_head *cpu_list;
+- unsigned long flags;
+-
+- BUG_ON(!req->q->softirq_done_fn);
+-
+- local_irq_save(flags);
+-
+- cpu_list = &__get_cpu_var(blk_cpu_done);
+- list_add_tail(&req->donelist, cpu_list);
+- raise_softirq_irqoff(BLOCK_SOFTIRQ);
+-
+- local_irq_restore(flags);
+-}
+-
+-EXPORT_SYMBOL(blk_complete_request);
+-
+-/*
+- * queue lock must be held
+- */
+-void end_that_request_last(struct request *req, int uptodate)
+-{
+- struct gendisk *disk = req->rq_disk;
+- int error;
+-
+- /*
+- * extend uptodate bool to allow < 0 value to be direct io error
+- */
+- error = 0;
+- if (end_io_error(uptodate))
+- error = !uptodate ? -EIO : uptodate;
+-
+- if (unlikely(laptop_mode) && blk_fs_request(req))
+- laptop_io_completion();
+-
+- /*
+- * Account IO completion. bar_rq isn't accounted as a normal
+- * IO on queueing nor completion. Accounting the containing
+- * request is enough.
+- */
+- if (disk && blk_fs_request(req) && req != &req->q->bar_rq) {
+- unsigned long duration = jiffies - req->start_time;
+- const int rw = rq_data_dir(req);
+-
+- __disk_stat_inc(disk, ios[rw]);
+- __disk_stat_add(disk, ticks[rw], duration);
+- disk_round_stats(disk);
+- disk->in_flight--;
+- }
+- if (req->end_io)
+- req->end_io(req, error);
+- else
+- __blk_put_request(req->q, req);
+-}
+-
+-EXPORT_SYMBOL(end_that_request_last);
+-
+-static inline void __end_request(struct request *rq, int uptodate,
+- unsigned int nr_bytes, int dequeue)
+-{
+- if (!end_that_request_chunk(rq, uptodate, nr_bytes)) {
+- if (dequeue)
+- blkdev_dequeue_request(rq);
+- add_disk_randomness(rq->rq_disk);
+- end_that_request_last(rq, uptodate);
+- }
+-}
+-
+-static unsigned int rq_byte_size(struct request *rq)
+-{
+- if (blk_fs_request(rq))
+- return rq->hard_nr_sectors << 9;
+-
+- return rq->data_len;
+-}
+-
+-/**
+- * end_queued_request - end all I/O on a queued request
+- * @rq: the request being processed
+- * @uptodate: error value or 0/1 uptodate flag
+- *
+- * Description:
+- * Ends all I/O on a request, and removes it from the block layer queues.
+- * Not suitable for normal IO completion, unless the driver still has
+- * the request attached to the block layer.
+- *
+- **/
+-void end_queued_request(struct request *rq, int uptodate)
+-{
+- __end_request(rq, uptodate, rq_byte_size(rq), 1);
+-}
+-EXPORT_SYMBOL(end_queued_request);
+-
+-/**
+- * end_dequeued_request - end all I/O on a dequeued request
+- * @rq: the request being processed
+- * @uptodate: error value or 0/1 uptodate flag
+- *
+- * Description:
+- * Ends all I/O on a request. The request must already have been
+- * dequeued using blkdev_dequeue_request(), as is normally the case
+- * for most drivers.
+- *
+- **/
+-void end_dequeued_request(struct request *rq, int uptodate)
+-{
+- __end_request(rq, uptodate, rq_byte_size(rq), 0);
+-}
+-EXPORT_SYMBOL(end_dequeued_request);
+-
+-
+-/**
+- * end_request - end I/O on the current segment of the request
+- * @req: the request being processed
+- * @uptodate: error value or 0/1 uptodate flag
+- *
+- * Description:
+- * Ends I/O on the current segment of a request. If that is the only
+- * remaining segment, the request is also completed and freed.
+- *
+- * This is a remnant of how older block drivers handled IO completions.
+- * Modern drivers typically end IO on the full request in one go, unless
+- * they have a residual value to account for. For that case this function
+- * isn't really useful, unless the residual just happens to be the
+- * full current segment. In other words, don't use this function in new
+- * code. Either use end_request_completely(), or the
+- * end_that_request_chunk() (along with end_that_request_last()) for
+- * partial completions.
+- *
+- **/
+-void end_request(struct request *req, int uptodate)
+-{
+- __end_request(req, uptodate, req->hard_cur_sectors << 9, 1);
+-}
+-EXPORT_SYMBOL(end_request);
+-
+-static void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
+- struct bio *bio)
+-{
+- /* first two bits are identical in rq->cmd_flags and bio->bi_rw */
+- rq->cmd_flags |= (bio->bi_rw & 3);
+-
+- rq->nr_phys_segments = bio_phys_segments(q, bio);
+- rq->nr_hw_segments = bio_hw_segments(q, bio);
+- rq->current_nr_sectors = bio_cur_sectors(bio);
+- rq->hard_cur_sectors = rq->current_nr_sectors;
+- rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
+- rq->buffer = bio_data(bio);
+- rq->data_len = bio->bi_size;
+-
+- rq->bio = rq->biotail = bio;
+-
+- if (bio->bi_bdev)
+- rq->rq_disk = bio->bi_bdev->bd_disk;
+-}
+-
+-int kblockd_schedule_work(struct work_struct *work)
+-{
+- return queue_work(kblockd_workqueue, work);
+-}
+-
+-EXPORT_SYMBOL(kblockd_schedule_work);
+-
+-void kblockd_flush_work(struct work_struct *work)
+-{
+- cancel_work_sync(work);
+-}
+-EXPORT_SYMBOL(kblockd_flush_work);
+-
+-int __init blk_dev_init(void)
+-{
+- int i;
+-
+- kblockd_workqueue = create_workqueue("kblockd");
+- if (!kblockd_workqueue)
+- panic("Failed to create kblockd\n");
+-
+- request_cachep = kmem_cache_create("blkdev_requests",
+- sizeof(struct request), 0, SLAB_PANIC, NULL);
+-
+- requestq_cachep = kmem_cache_create("blkdev_queue",
+- sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
+-
+- iocontext_cachep = kmem_cache_create("blkdev_ioc",
+- sizeof(struct io_context), 0, SLAB_PANIC, NULL);
+-
+- for_each_possible_cpu(i)
+- INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));
+-
+- open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
+- register_hotcpu_notifier(&blk_cpu_notifier);
+-
+- blk_max_low_pfn = max_low_pfn - 1;
+- blk_max_pfn = max_pfn - 1;
+-
+- return 0;
+-}
+-
+-/*
+- * IO Context helper functions
+- */
+-void put_io_context(struct io_context *ioc)
+-{
+- if (ioc == NULL)
+- return;
+-
+- BUG_ON(atomic_read(&ioc->refcount) == 0);
+-
+- if (atomic_dec_and_test(&ioc->refcount)) {
+- struct cfq_io_context *cic;
+-
+- rcu_read_lock();
+- if (ioc->aic && ioc->aic->dtor)
+- ioc->aic->dtor(ioc->aic);
+- if (ioc->cic_root.rb_node != NULL) {
+- struct rb_node *n = rb_first(&ioc->cic_root);
+-
+- cic = rb_entry(n, struct cfq_io_context, rb_node);
+- cic->dtor(ioc);
+- }
+- rcu_read_unlock();
+-
+- kmem_cache_free(iocontext_cachep, ioc);
+- }
+-}
+-EXPORT_SYMBOL(put_io_context);
+-
+-/* Called by the exiting task */
+-void exit_io_context(void)
+-{
+- struct io_context *ioc;
+- struct cfq_io_context *cic;
+-
+- task_lock(current);
+- ioc = current->io_context;
+- current->io_context = NULL;
+- task_unlock(current);
+-
+- ioc->task = NULL;
+- if (ioc->aic && ioc->aic->exit)
+- ioc->aic->exit(ioc->aic);
+- if (ioc->cic_root.rb_node != NULL) {
+- cic = rb_entry(rb_first(&ioc->cic_root), struct cfq_io_context, rb_node);
+- cic->exit(ioc);
+- }
+-
+- put_io_context(ioc);
+-}
+-
+-/*
+- * If the current task has no IO context then create one and initialise it.
+- * Otherwise, return its existing IO context.
+- *
+- * This returned IO context doesn't have a specifically elevated refcount,
+- * but since the current task itself holds a reference, the context can be
+- * used in general code, so long as it stays within `current` context.
+- */
+-static struct io_context *current_io_context(gfp_t gfp_flags, int node)
+-{
+- struct task_struct *tsk = current;
+- struct io_context *ret;
+-
+- ret = tsk->io_context;
+- if (likely(ret))
+- return ret;
+-
+- ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
+- if (ret) {
+- atomic_set(&ret->refcount, 1);
+- ret->task = current;
+- ret->ioprio_changed = 0;
+- ret->last_waited = jiffies; /* doesn't matter... */
+- ret->nr_batch_requests = 0; /* because this is 0 */
+- ret->aic = NULL;
+- ret->cic_root.rb_node = NULL;
+- ret->ioc_data = NULL;
+- /* make sure set_task_ioprio() sees the settings above */
+- smp_wmb();
+- tsk->io_context = ret;
+- }
+-
+- return ret;
+-}
+-
+-/*
+- * If the current task has no IO context then create one and initialise it.
+- * If it does have a context, take a ref on it.
+- *
+- * This is always called in the context of the task which submitted the I/O.
+- */
+-struct io_context *get_io_context(gfp_t gfp_flags, int node)
+-{
+- struct io_context *ret;
+- ret = current_io_context(gfp_flags, node);
+- if (likely(ret))
+- atomic_inc(&ret->refcount);
+- return ret;
+-}
+-EXPORT_SYMBOL(get_io_context);
+-
+-void copy_io_context(struct io_context **pdst, struct io_context **psrc)
+-{
+- struct io_context *src = *psrc;
+- struct io_context *dst = *pdst;
+-
+- if (src) {
+- BUG_ON(atomic_read(&src->refcount) == 0);
+- atomic_inc(&src->refcount);
+- put_io_context(dst);
+- *pdst = src;
+- }
+-}
+-EXPORT_SYMBOL(copy_io_context);
+-
+-void swap_io_context(struct io_context **ioc1, struct io_context **ioc2)
+-{
+- struct io_context *temp;
+- temp = *ioc1;
+- *ioc1 = *ioc2;
+- *ioc2 = temp;
+-}
+-EXPORT_SYMBOL(swap_io_context);
+-
+-/*
+- * sysfs parts below
+- */
+-struct queue_sysfs_entry {
+- struct attribute attr;
+- ssize_t (*show)(struct request_queue *, char *);
+- ssize_t (*store)(struct request_queue *, const char *, size_t);
+-};
+-
+-static ssize_t
+-queue_var_show(unsigned int var, char *page)
+-{
+- return sprintf(page, "%d\n", var);
+-}
+-
+-static ssize_t
+-queue_var_store(unsigned long *var, const char *page, size_t count)
+-{
+- char *p = (char *) page;
+-
+- *var = simple_strtoul(p, &p, 10);
+- return count;
+-}
+-
+-static ssize_t queue_requests_show(struct request_queue *q, char *page)
+-{
+- return queue_var_show(q->nr_requests, (page));
+-}
+-
+-static ssize_t
+-queue_requests_store(struct request_queue *q, const char *page, size_t count)
+-{
+- struct request_list *rl = &q->rq;
+- unsigned long nr;
+- int ret = queue_var_store(&nr, page, count);
+- if (nr < BLKDEV_MIN_RQ)
+- nr = BLKDEV_MIN_RQ;
+-
+- spin_lock_irq(q->queue_lock);
+- q->nr_requests = nr;
+- blk_queue_congestion_threshold(q);
+-
+- if (rl->count[READ] >= queue_congestion_on_threshold(q))
+- blk_set_queue_congested(q, READ);
+- else if (rl->count[READ] < queue_congestion_off_threshold(q))
+- blk_clear_queue_congested(q, READ);
+-
+- if (rl->count[WRITE] >= queue_congestion_on_threshold(q))
+- blk_set_queue_congested(q, WRITE);
+- else if (rl->count[WRITE] < queue_congestion_off_threshold(q))
+- blk_clear_queue_congested(q, WRITE);
+-
+- if (rl->count[READ] >= q->nr_requests) {
+- blk_set_queue_full(q, READ);
+- } else if (rl->count[READ]+1 <= q->nr_requests) {
+- blk_clear_queue_full(q, READ);
+- wake_up(&rl->wait[READ]);
+- }
+-
+- if (rl->count[WRITE] >= q->nr_requests) {
+- blk_set_queue_full(q, WRITE);
+- } else if (rl->count[WRITE]+1 <= q->nr_requests) {
+- blk_clear_queue_full(q, WRITE);
+- wake_up(&rl->wait[WRITE]);
+- }
+- spin_unlock_irq(q->queue_lock);
+- return ret;
+-}
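queue_requests_store() above re-evaluates congestion against separate on and off thresholds, a hysteresis band that keeps the flag from flapping when the request count hovers near the limit. The state machine in isolation (threshold values here are made up, not what blk_queue_congestion_threshold() computes):

```c
#include <assert.h>

static int congested;

/* Set the flag at on_thresh, clear it only below off_thresh
 * (off_thresh < on_thresh).  A count oscillating between the two
 * thresholds leaves the state unchanged. */
static void update(int count, int on_thresh, int off_thresh)
{
	if (count >= on_thresh)
		congested = 1;
	else if (count < off_thresh)
		congested = 0;
	/* between the thresholds: keep the previous state */
}
```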
+-
+-static ssize_t queue_ra_show(struct request_queue *q, char *page)
+-{
+- int ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10);
+-
+- return queue_var_show(ra_kb, (page));
+-}
+-
+-static ssize_t
+-queue_ra_store(struct request_queue *q, const char *page, size_t count)
+-{
+- unsigned long ra_kb;
+- ssize_t ret = queue_var_store(&ra_kb, page, count);
+-
+- spin_lock_irq(q->queue_lock);
+- q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
+- spin_unlock_irq(q->queue_lock);
+-
+- return ret;
+-}
+-
+-static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
+-{
+- int max_sectors_kb = q->max_sectors >> 1;
+-
+- return queue_var_show(max_sectors_kb, (page));
+-}
+-
+-static ssize_t
+-queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
+-{
+- unsigned long max_sectors_kb,
+- max_hw_sectors_kb = q->max_hw_sectors >> 1,
+- page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
+- ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
+-
+- if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
+- return -EINVAL;
+- /*
+- * Take the queue lock to update the readahead and max_sectors
+- * values synchronously:
+- */
+- spin_lock_irq(q->queue_lock);
+- q->max_sectors = max_sectors_kb << 1;
+- spin_unlock_irq(q->queue_lock);
+-
+- return ret;
+-}
+-
+-static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
+-{
+- int max_hw_sectors_kb = q->max_hw_sectors >> 1;
+-
+- return queue_var_show(max_hw_sectors_kb, (page));
+-}
+-
+-
+-static struct queue_sysfs_entry queue_requests_entry = {
+- .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
+- .show = queue_requests_show,
+- .store = queue_requests_store,
+-};
+-
+-static struct queue_sysfs_entry queue_ra_entry = {
+- .attr = {.name = "read_ahead_kb", .mode = S_IRUGO | S_IWUSR },
+- .show = queue_ra_show,
+- .store = queue_ra_store,
+-};
+-
+-static struct queue_sysfs_entry queue_max_sectors_entry = {
+- .attr = {.name = "max_sectors_kb", .mode = S_IRUGO | S_IWUSR },
+- .show = queue_max_sectors_show,
+- .store = queue_max_sectors_store,
+-};
+-
+-static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
+- .attr = {.name = "max_hw_sectors_kb", .mode = S_IRUGO },
+- .show = queue_max_hw_sectors_show,
+-};
+-
+-static struct queue_sysfs_entry queue_iosched_entry = {
+- .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR },
+- .show = elv_iosched_show,
+- .store = elv_iosched_store,
+-};
+-
+-static struct attribute *default_attrs[] = {
+- &queue_requests_entry.attr,
+- &queue_ra_entry.attr,
+- &queue_max_hw_sectors_entry.attr,
+- &queue_max_sectors_entry.attr,
+- &queue_iosched_entry.attr,
+- NULL,
+-};
+-
+-#define to_queue(atr) container_of((atr), struct queue_sysfs_entry, attr)
+-
+-static ssize_t
+-queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
+-{
+- struct queue_sysfs_entry *entry = to_queue(attr);
+- struct request_queue *q =
+- container_of(kobj, struct request_queue, kobj);
+- ssize_t res;
+-
+- if (!entry->show)
+- return -EIO;
+- mutex_lock(&q->sysfs_lock);
+- if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+- mutex_unlock(&q->sysfs_lock);
+- return -ENOENT;
+- }
+- res = entry->show(q, page);
+- mutex_unlock(&q->sysfs_lock);
+- return res;
+-}
+-
+-static ssize_t
+-queue_attr_store(struct kobject *kobj, struct attribute *attr,
+- const char *page, size_t length)
+-{
+- struct queue_sysfs_entry *entry = to_queue(attr);
+- struct request_queue *q = container_of(kobj, struct request_queue, kobj);
+-
+- ssize_t res;
+-
+- if (!entry->store)
+- return -EIO;
+- mutex_lock(&q->sysfs_lock);
+- if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+- mutex_unlock(&q->sysfs_lock);
+- return -ENOENT;
+- }
+- res = entry->store(q, page, length);
+- mutex_unlock(&q->sysfs_lock);
+- return res;
+-}
+-
+-static struct sysfs_ops queue_sysfs_ops = {
+- .show = queue_attr_show,
+- .store = queue_attr_store,
+-};
+-
+-static struct kobj_type queue_ktype = {
+- .sysfs_ops = &queue_sysfs_ops,
+- .default_attrs = default_attrs,
+- .release = blk_release_queue,
+-};
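queue_attr_show() above recovers the full queue_sysfs_entry from the bare struct attribute pointer with container_of(). The same pointer arithmetic in plain C, with hypothetical struct names standing in for the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Given a pointer to an embedded member, subtract the member's
 * offset to get back to the enclosing object -- the to_queue()
 * step used by queue_attr_show() above. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct attribute { const char *name; };

struct sysfs_entry {
	int value;
	struct attribute attr;	/* embedded, as in queue_sysfs_entry */
};

static struct sysfs_entry demo = { 128, { "nr_requests" } };

static int show(struct attribute *attr)
{
	struct sysfs_entry *e = container_of(attr, struct sysfs_entry, attr);

	return e->value;	/* full entry recovered from the member */
}
```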
+-
+-int blk_register_queue(struct gendisk *disk)
+-{
+- int ret;
+-
+- struct request_queue *q = disk->queue;
+-
+- if (!q || !q->request_fn)
+- return -ENXIO;
+-
+- q->kobj.parent = kobject_get(&disk->kobj);
+-
+- ret = kobject_add(&q->kobj);
+- if (ret < 0)
+- return ret;
+-
+- kobject_uevent(&q->kobj, KOBJ_ADD);
+-
+- ret = elv_register_queue(q);
+- if (ret) {
+- kobject_uevent(&q->kobj, KOBJ_REMOVE);
+- kobject_del(&q->kobj);
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-void blk_unregister_queue(struct gendisk *disk)
+-{
+- struct request_queue *q = disk->queue;
+-
+- if (q && q->request_fn) {
+- elv_unregister_queue(q);
+-
+- kobject_uevent(&q->kobj, KOBJ_REMOVE);
+- kobject_del(&q->kobj);
+- kobject_put(&disk->kobj);
+- }
+-}
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 083d2e1..c3166a1 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -24,10 +24,6 @@ config CRYPTO_ALGAPI
+ help
+ This option provides the API for cryptographic algorithms.
+
+-config CRYPTO_ABLKCIPHER
+- tristate
+- select CRYPTO_BLKCIPHER
+-
+ config CRYPTO_AEAD
+ tristate
+ select CRYPTO_ALGAPI
+@@ -36,6 +32,15 @@ config CRYPTO_BLKCIPHER
+ tristate
+ select CRYPTO_ALGAPI
+
++config CRYPTO_SEQIV
++ tristate "Sequence Number IV Generator"
++ select CRYPTO_AEAD
++ select CRYPTO_BLKCIPHER
++ help
++ This IV generator generates an IV based on a sequence number by
++ xoring it with a salt. This algorithm is mainly useful for CTR
++ and similar modes.
++
+ config CRYPTO_HASH
+ tristate
+ select CRYPTO_ALGAPI
+@@ -91,7 +96,7 @@ config CRYPTO_SHA1
+ SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
+
+ config CRYPTO_SHA256
+- tristate "SHA256 digest algorithm"
++ tristate "SHA224 and SHA256 digest algorithm"
+ select CRYPTO_ALGAPI
+ help
+ SHA256 secure hash standard (DFIPS 180-2).
+@@ -99,6 +104,9 @@ config CRYPTO_SHA256
+ This version of SHA implements a 256 bit hash with 128 bits of
+ security against collision attacks.
+
++ This code also includes SHA-224, a 224 bit hash with 112 bits
++ of security against collision attacks.
++
+ config CRYPTO_SHA512
+ tristate "SHA384 and SHA512 digest algorithms"
+ select CRYPTO_ALGAPI
+@@ -195,9 +203,34 @@ config CRYPTO_XTS
+ key size 256, 384 or 512 bits. This implementation currently
+ can't handle a sectorsize which is not a multiple of 16 bytes.
+
++config CRYPTO_CTR
++ tristate "CTR support"
++ select CRYPTO_BLKCIPHER
++ select CRYPTO_SEQIV
++ select CRYPTO_MANAGER
++ help
++ CTR: Counter mode
++ This block cipher algorithm is required for IPSec.
++
++config CRYPTO_GCM
++ tristate "GCM/GMAC support"
++ select CRYPTO_CTR
++ select CRYPTO_AEAD
++ select CRYPTO_GF128MUL
++ help
++ Support for Galois/Counter Mode (GCM) and Galois Message
++ Authentication Code (GMAC). Required for IPSec.
++
++config CRYPTO_CCM
++ tristate "CCM support"
++ select CRYPTO_CTR
++ select CRYPTO_AEAD
++ help
++ Support for Counter with CBC MAC. Required for IPsec.
++
+ config CRYPTO_CRYPTD
+ tristate "Software async crypto daemon"
+- select CRYPTO_ABLKCIPHER
++ select CRYPTO_BLKCIPHER
+ select CRYPTO_MANAGER
+ help
+ This is a generic software asynchronous crypto daemon that
+@@ -320,6 +353,7 @@ config CRYPTO_AES_586
+ tristate "AES cipher algorithms (i586)"
+ depends on (X86 || UML_X86) && !64BIT
+ select CRYPTO_ALGAPI
++ select CRYPTO_AES
+ help
+ AES cipher algorithms (FIPS-197). AES uses the Rijndael
+ algorithm.
+@@ -341,6 +375,7 @@ config CRYPTO_AES_X86_64
+ tristate "AES cipher algorithms (x86_64)"
+ depends on (X86 || UML_X86) && 64BIT
+ select CRYPTO_ALGAPI
++ select CRYPTO_AES
+ help
+ AES cipher algorithms (FIPS-197). AES uses the Rijndael
+ algorithm.
+@@ -441,6 +476,46 @@ config CRYPTO_SEED
+ See also:
+ <http://www.kisa.or.kr/kisa/seed/jsp/seed_eng.jsp>
+
++config CRYPTO_SALSA20
++ tristate "Salsa20 stream cipher algorithm (EXPERIMENTAL)"
++ depends on EXPERIMENTAL
++ select CRYPTO_BLKCIPHER
++ help
++ Salsa20 stream cipher algorithm.
++
++ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
++ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++
++ The Salsa20 stream cipher algorithm is designed by Daniel J.
++ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
++
++config CRYPTO_SALSA20_586
++ tristate "Salsa20 stream cipher algorithm (i586) (EXPERIMENTAL)"
++ depends on (X86 || UML_X86) && !64BIT
++ depends on EXPERIMENTAL
++ select CRYPTO_BLKCIPHER
++ help
++ Salsa20 stream cipher algorithm.
++
++ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
++ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++
++ The Salsa20 stream cipher algorithm is designed by Daniel J.
++ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
++
++config CRYPTO_SALSA20_X86_64
++ tristate "Salsa20 stream cipher algorithm (x86_64) (EXPERIMENTAL)"
++ depends on (X86 || UML_X86) && 64BIT
++ depends on EXPERIMENTAL
++ select CRYPTO_BLKCIPHER
++ help
++ Salsa20 stream cipher algorithm.
++
++ Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
++ Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
++
++ The Salsa20 stream cipher algorithm is designed by Daniel J.
++ Bernstein <djb at cr.yp.to>. See <http://cr.yp.to/snuffle.html>
+
+ config CRYPTO_DEFLATE
+ tristate "Deflate compression algorithm"
+@@ -491,6 +566,7 @@ config CRYPTO_TEST
+ tristate "Testing module"
+ depends on m
+ select CRYPTO_ALGAPI
++ select CRYPTO_AEAD
+ help
+ Quick & dirty crypto test module.
+
+@@ -498,10 +574,19 @@ config CRYPTO_AUTHENC
+ tristate "Authenc support"
+ select CRYPTO_AEAD
+ select CRYPTO_MANAGER
++ select CRYPTO_HASH
+ help
+ Authenc: Combined mode wrapper for IPsec.
+ This is required for IPSec.
+
++config CRYPTO_LZO
++ tristate "LZO compression algorithm"
++ select CRYPTO_ALGAPI
++ select LZO_COMPRESS
++ select LZO_DECOMPRESS
++ help
++ This is the LZO algorithm.
++
+ source "drivers/crypto/Kconfig"
+
+ endif # if CRYPTO
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 43c2a0d..48c7583 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -8,9 +8,14 @@ crypto_algapi-$(CONFIG_PROC_FS) += proc.o
+ crypto_algapi-objs := algapi.o scatterwalk.o $(crypto_algapi-y)
+ obj-$(CONFIG_CRYPTO_ALGAPI) += crypto_algapi.o
+
+-obj-$(CONFIG_CRYPTO_ABLKCIPHER) += ablkcipher.o
+ obj-$(CONFIG_CRYPTO_AEAD) += aead.o
+-obj-$(CONFIG_CRYPTO_BLKCIPHER) += blkcipher.o
++
++crypto_blkcipher-objs := ablkcipher.o
++crypto_blkcipher-objs += blkcipher.o
++obj-$(CONFIG_CRYPTO_BLKCIPHER) += crypto_blkcipher.o
++obj-$(CONFIG_CRYPTO_BLKCIPHER) += chainiv.o
++obj-$(CONFIG_CRYPTO_BLKCIPHER) += eseqiv.o
++obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
+
+ crypto_hash-objs := hash.o
+ obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o
+@@ -32,6 +37,9 @@ obj-$(CONFIG_CRYPTO_CBC) += cbc.o
+ obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
+ obj-$(CONFIG_CRYPTO_LRW) += lrw.o
+ obj-$(CONFIG_CRYPTO_XTS) += xts.o
++obj-$(CONFIG_CRYPTO_CTR) += ctr.o
++obj-$(CONFIG_CRYPTO_GCM) += gcm.o
++obj-$(CONFIG_CRYPTO_CCM) += ccm.o
+ obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
+ obj-$(CONFIG_CRYPTO_DES) += des_generic.o
+ obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
+@@ -48,10 +56,12 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
+ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
+ obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
+ obj-$(CONFIG_CRYPTO_SEED) += seed.o
++obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
+ obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
+ obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
+ obj-$(CONFIG_CRYPTO_CRC32C) += crc32c.o
+ obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o
++obj-$(CONFIG_CRYPTO_LZO) += lzo.o
+
+ obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
+
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index 2731acb..3bcb099 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -13,14 +13,18 @@
+ *
+ */
+
+-#include <crypto/algapi.h>
+-#include <linux/errno.h>
++#include <crypto/internal/skcipher.h>
++#include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/rtnetlink.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/seq_file.h>
+
++#include "internal.h"
++
+ static int setkey_unaligned(struct crypto_ablkcipher *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+@@ -66,6 +70,16 @@ static unsigned int crypto_ablkcipher_ctxsize(struct crypto_alg *alg, u32 type,
+ return alg->cra_ctxsize;
+ }
+
++int skcipher_null_givencrypt(struct skcipher_givcrypt_request *req)
++{
++ return crypto_ablkcipher_encrypt(&req->creq);
++}
++
++int skcipher_null_givdecrypt(struct skcipher_givcrypt_request *req)
++{
++ return crypto_ablkcipher_decrypt(&req->creq);
++}
++
+ static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
+ u32 mask)
+ {
+@@ -78,6 +92,11 @@ static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
+ crt->setkey = setkey;
+ crt->encrypt = alg->encrypt;
+ crt->decrypt = alg->decrypt;
++ if (!alg->ivsize) {
++ crt->givencrypt = skcipher_null_givencrypt;
++ crt->givdecrypt = skcipher_null_givdecrypt;
++ }
++ crt->base = __crypto_ablkcipher_cast(tfm);
+ crt->ivsize = alg->ivsize;
+
+ return 0;
+@@ -90,10 +109,13 @@ static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg *alg)
+ struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
+
+ seq_printf(m, "type : ablkcipher\n");
++ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
++ "yes" : "no");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize);
+ seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize);
+ seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize);
++ seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: "<default>");
+ }
+
+ const struct crypto_type crypto_ablkcipher_type = {
+@@ -105,5 +127,220 @@ const struct crypto_type crypto_ablkcipher_type = {
+ };
+ EXPORT_SYMBOL_GPL(crypto_ablkcipher_type);
+
++static int no_givdecrypt(struct skcipher_givcrypt_request *req)
++{
++ return -ENOSYS;
++}
++
++static int crypto_init_givcipher_ops(struct crypto_tfm *tfm, u32 type,
++ u32 mask)
++{
++ struct ablkcipher_alg *alg = &tfm->__crt_alg->cra_ablkcipher;
++ struct ablkcipher_tfm *crt = &tfm->crt_ablkcipher;
++
++ if (alg->ivsize > PAGE_SIZE / 8)
++ return -EINVAL;
++
++ crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ?
++ alg->setkey : setkey;
++ crt->encrypt = alg->encrypt;
++ crt->decrypt = alg->decrypt;
++ crt->givencrypt = alg->givencrypt;
++ crt->givdecrypt = alg->givdecrypt ?: no_givdecrypt;
++ crt->base = __crypto_ablkcipher_cast(tfm);
++ crt->ivsize = alg->ivsize;
++
++ return 0;
++}
++
++static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
++ __attribute__ ((unused));
++static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
++{
++ struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
++
++ seq_printf(m, "type : givcipher\n");
++ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
++ "yes" : "no");
++ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
++ seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize);
++ seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize);
++ seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize);
++ seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: "<built-in>");
++}
++
++const struct crypto_type crypto_givcipher_type = {
++ .ctxsize = crypto_ablkcipher_ctxsize,
++ .init = crypto_init_givcipher_ops,
++#ifdef CONFIG_PROC_FS
++ .show = crypto_givcipher_show,
++#endif
++};
++EXPORT_SYMBOL_GPL(crypto_givcipher_type);
++
++const char *crypto_default_geniv(const struct crypto_alg *alg)
++{
++ return alg->cra_flags & CRYPTO_ALG_ASYNC ? "eseqiv" : "chainiv";
++}
++
++static int crypto_givcipher_default(struct crypto_alg *alg, u32 type, u32 mask)
++{
++ struct rtattr *tb[3];
++ struct {
++ struct rtattr attr;
++ struct crypto_attr_type data;
++ } ptype;
++ struct {
++ struct rtattr attr;
++ struct crypto_attr_alg data;
++ } palg;
++ struct crypto_template *tmpl;
++ struct crypto_instance *inst;
++ struct crypto_alg *larval;
++ const char *geniv;
++ int err;
++
++ larval = crypto_larval_lookup(alg->cra_driver_name,
++ CRYPTO_ALG_TYPE_GIVCIPHER,
++ CRYPTO_ALG_TYPE_MASK);
++ err = PTR_ERR(larval);
++ if (IS_ERR(larval))
++ goto out;
++
++ err = -EAGAIN;
++ if (!crypto_is_larval(larval))
++ goto drop_larval;
++
++ ptype.attr.rta_len = sizeof(ptype);
++ ptype.attr.rta_type = CRYPTOA_TYPE;
++ ptype.data.type = type | CRYPTO_ALG_GENIV;
++ /* GENIV tells the template that we're making a default geniv. */
++ ptype.data.mask = mask | CRYPTO_ALG_GENIV;
++ tb[0] = &ptype.attr;
++
++ palg.attr.rta_len = sizeof(palg);
++ palg.attr.rta_type = CRYPTOA_ALG;
++ /* Must use the exact name to locate ourselves. */
++ memcpy(palg.data.name, alg->cra_driver_name, CRYPTO_MAX_ALG_NAME);
++ tb[1] = &palg.attr;
++
++ tb[2] = NULL;
++
++ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
++ CRYPTO_ALG_TYPE_BLKCIPHER)
++ geniv = alg->cra_blkcipher.geniv;
++ else
++ geniv = alg->cra_ablkcipher.geniv;
++
++ if (!geniv)
++ geniv = crypto_default_geniv(alg);
++
++ tmpl = crypto_lookup_template(geniv);
++ err = -ENOENT;
++ if (!tmpl)
++ goto kill_larval;
++
++ inst = tmpl->alloc(tb);
++ err = PTR_ERR(inst);
++ if (IS_ERR(inst))
++ goto put_tmpl;
++
++ if ((err = crypto_register_instance(tmpl, inst))) {
++ tmpl->free(inst);
++ goto put_tmpl;
++ }
++
++ /* Redo the lookup to use the instance we just registered. */
++ err = -EAGAIN;
++
++put_tmpl:
++ crypto_tmpl_put(tmpl);
++kill_larval:
++ crypto_larval_kill(larval);
++drop_larval:
++ crypto_mod_put(larval);
++out:
++ crypto_mod_put(alg);
++ return err;
++}
++
++static struct crypto_alg *crypto_lookup_skcipher(const char *name, u32 type,
++ u32 mask)
++{
++ struct crypto_alg *alg;
++
++ alg = crypto_alg_mod_lookup(name, type, mask);
++ if (IS_ERR(alg))
++ return alg;
++
++ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
++ CRYPTO_ALG_TYPE_GIVCIPHER)
++ return alg;
++
++ if (!((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
++ CRYPTO_ALG_TYPE_BLKCIPHER ? alg->cra_blkcipher.ivsize :
++ alg->cra_ablkcipher.ivsize))
++ return alg;
++
++ return ERR_PTR(crypto_givcipher_default(alg, type, mask));
++}
++
++int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn, const char *name,
++ u32 type, u32 mask)
++{
++ struct crypto_alg *alg;
++ int err;
++
++ type = crypto_skcipher_type(type);
++ mask = crypto_skcipher_mask(mask);
++
++ alg = crypto_lookup_skcipher(name, type, mask);
++ if (IS_ERR(alg))
++ return PTR_ERR(alg);
++
++ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
++ crypto_mod_put(alg);
++ return err;
++}
++EXPORT_SYMBOL_GPL(crypto_grab_skcipher);
++
++struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
++ u32 type, u32 mask)
++{
++ struct crypto_tfm *tfm;
++ int err;
++
++ type = crypto_skcipher_type(type);
++ mask = crypto_skcipher_mask(mask);
++
++ for (;;) {
++ struct crypto_alg *alg;
++
++ alg = crypto_lookup_skcipher(alg_name, type, mask);
++ if (IS_ERR(alg)) {
++ err = PTR_ERR(alg);
++ goto err;
++ }
++
++ tfm = __crypto_alloc_tfm(alg, type, mask);
++ if (!IS_ERR(tfm))
++ return __crypto_ablkcipher_cast(tfm);
++
++ crypto_mod_put(alg);
++ err = PTR_ERR(tfm);
++
++err:
++ if (err != -EAGAIN)
++ break;
++ if (signal_pending(current)) {
++ err = -EINTR;
++ break;
++ }
++ }
++
++ return ERR_PTR(err);
++}
++EXPORT_SYMBOL_GPL(crypto_alloc_ablkcipher);
++
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Asynchronous block chaining cipher type");
+diff --git a/crypto/aead.c b/crypto/aead.c
+index 84a3501..3a6f3f5 100644
+--- a/crypto/aead.c
++++ b/crypto/aead.c
+@@ -12,14 +12,17 @@
+ *
+ */
+
+-#include <crypto/algapi.h>
+-#include <linux/errno.h>
++#include <crypto/internal/aead.h>
++#include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/rtnetlink.h>
+ #include <linux/slab.h>
+ #include <linux/seq_file.h>
+
++#include "internal.h"
++
+ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+@@ -53,25 +56,54 @@ static int setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+ return aead->setkey(tfm, key, keylen);
+ }
+
++int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
++{
++ struct aead_tfm *crt = crypto_aead_crt(tfm);
++ int err;
++
++ if (authsize > crypto_aead_alg(tfm)->maxauthsize)
++ return -EINVAL;
++
++ if (crypto_aead_alg(tfm)->setauthsize) {
++ err = crypto_aead_alg(tfm)->setauthsize(crt->base, authsize);
++ if (err)
++ return err;
++ }
++
++ crypto_aead_crt(crt->base)->authsize = authsize;
++ crt->authsize = authsize;
++ return 0;
++}
++EXPORT_SYMBOL_GPL(crypto_aead_setauthsize);
++
+ static unsigned int crypto_aead_ctxsize(struct crypto_alg *alg, u32 type,
+ u32 mask)
+ {
+ return alg->cra_ctxsize;
+ }
+
++static int no_givcrypt(struct aead_givcrypt_request *req)
++{
++ return -ENOSYS;
++}
++
+ static int crypto_init_aead_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+ {
+ struct aead_alg *alg = &tfm->__crt_alg->cra_aead;
+ struct aead_tfm *crt = &tfm->crt_aead;
+
+- if (max(alg->authsize, alg->ivsize) > PAGE_SIZE / 8)
++ if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
+ return -EINVAL;
+
+- crt->setkey = setkey;
++ crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ?
++ alg->setkey : setkey;
+ crt->encrypt = alg->encrypt;
+ crt->decrypt = alg->decrypt;
++ crt->givencrypt = alg->givencrypt ?: no_givcrypt;
++ crt->givdecrypt = alg->givdecrypt ?: no_givcrypt;
++ crt->base = __crypto_aead_cast(tfm);
+ crt->ivsize = alg->ivsize;
+- crt->authsize = alg->authsize;
++ crt->authsize = alg->maxauthsize;
+
+ return 0;
+ }
+@@ -83,9 +115,12 @@ static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
+ struct aead_alg *aead = &alg->cra_aead;
+
+ seq_printf(m, "type : aead\n");
++ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
++ "yes" : "no");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "ivsize : %u\n", aead->ivsize);
+- seq_printf(m, "authsize : %u\n", aead->authsize);
++ seq_printf(m, "maxauthsize : %u\n", aead->maxauthsize);
++ seq_printf(m, "geniv : %s\n", aead->geniv ?: "<built-in>");
+ }
+
+ const struct crypto_type crypto_aead_type = {
+@@ -97,5 +132,358 @@ const struct crypto_type crypto_aead_type = {
+ };
+ EXPORT_SYMBOL_GPL(crypto_aead_type);
+
++static int aead_null_givencrypt(struct aead_givcrypt_request *req)
++{
++ return crypto_aead_encrypt(&req->areq);
++}
++
++static int aead_null_givdecrypt(struct aead_givcrypt_request *req)
++{
++ return crypto_aead_decrypt(&req->areq);
++}
++
++static int crypto_init_nivaead_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
++{
++ struct aead_alg *alg = &tfm->__crt_alg->cra_aead;
++ struct aead_tfm *crt = &tfm->crt_aead;
++
++ if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
++ return -EINVAL;
++
++ crt->setkey = setkey;
++ crt->encrypt = alg->encrypt;
++ crt->decrypt = alg->decrypt;
++ if (!alg->ivsize) {
++ crt->givencrypt = aead_null_givencrypt;
++ crt->givdecrypt = aead_null_givdecrypt;
++ }
++ crt->base = __crypto_aead_cast(tfm);
++ crt->ivsize = alg->ivsize;
++ crt->authsize = alg->maxauthsize;
++
++ return 0;
++}
++
++static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
++ __attribute__ ((unused));
++static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
++{
++ struct aead_alg *aead = &alg->cra_aead;
++
++ seq_printf(m, "type : nivaead\n");
++ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
++ "yes" : "no");
++ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
++ seq_printf(m, "ivsize : %u\n", aead->ivsize);
++ seq_printf(m, "maxauthsize : %u\n", aead->maxauthsize);
++ seq_printf(m, "geniv : %s\n", aead->geniv);
++}
++
++const struct crypto_type crypto_nivaead_type = {
++ .ctxsize = crypto_aead_ctxsize,
++ .init = crypto_init_nivaead_ops,
++#ifdef CONFIG_PROC_FS
++ .show = crypto_nivaead_show,
++#endif
++};
++EXPORT_SYMBOL_GPL(crypto_nivaead_type);
++
++static int crypto_grab_nivaead(struct crypto_aead_spawn *spawn,
++ const char *name, u32 type, u32 mask)
++{
++ struct crypto_alg *alg;
++ int err;
++
++ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ type |= CRYPTO_ALG_TYPE_AEAD;
++ mask |= CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV;
++
++ alg = crypto_alg_mod_lookup(name, type, mask);
++ if (IS_ERR(alg))
++ return PTR_ERR(alg);
++
++ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
++ crypto_mod_put(alg);
++ return err;
++}
++
++struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
++ struct rtattr **tb, u32 type,
++ u32 mask)
++{
++ const char *name;
++ struct crypto_aead_spawn *spawn;
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_alg *alg;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ (CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV)) &
++ algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(name);
++ if (IS_ERR(name))
++ return ERR_PTR(err);
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
++ if (!inst)
++ return ERR_PTR(-ENOMEM);
++
++ spawn = crypto_instance_ctx(inst);
++
++ /* Ignore async algorithms if necessary. */
++ mask |= crypto_requires_sync(algt->type, algt->mask);
++
++ crypto_set_aead_spawn(spawn, inst);
++ err = crypto_grab_nivaead(spawn, name, type, mask);
++ if (err)
++ goto err_free_inst;
++
++ alg = crypto_aead_spawn_alg(spawn);
++
++ err = -EINVAL;
++ if (!alg->cra_aead.ivsize)
++ goto err_drop_alg;
++
++ /*
++ * This is only true if we're constructing an algorithm with its
++ * default IV generator. For the default generator we elide the
++ * template name and double-check the IV generator.
++ */
++ if (algt->mask & CRYPTO_ALG_GENIV) {
++ if (strcmp(tmpl->name, alg->cra_aead.geniv))
++ goto err_drop_alg;
++
++ memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
++ memcpy(inst->alg.cra_driver_name, alg->cra_driver_name,
++ CRYPTO_MAX_ALG_NAME);
++ } else {
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
++ "%s(%s)", tmpl->name, alg->cra_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto err_drop_alg;
++ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "%s(%s)", tmpl->name, alg->cra_driver_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto err_drop_alg;
++ }
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV;
++ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = alg->cra_blocksize;
++ inst->alg.cra_alignmask = alg->cra_alignmask;
++ inst->alg.cra_type = &crypto_aead_type;
++
++ inst->alg.cra_aead.ivsize = alg->cra_aead.ivsize;
++ inst->alg.cra_aead.maxauthsize = alg->cra_aead.maxauthsize;
++ inst->alg.cra_aead.geniv = alg->cra_aead.geniv;
++
++ inst->alg.cra_aead.setkey = alg->cra_aead.setkey;
++ inst->alg.cra_aead.setauthsize = alg->cra_aead.setauthsize;
++ inst->alg.cra_aead.encrypt = alg->cra_aead.encrypt;
++ inst->alg.cra_aead.decrypt = alg->cra_aead.decrypt;
++
++out:
++ return inst;
++
++err_drop_alg:
++ crypto_drop_aead(spawn);
++err_free_inst:
++ kfree(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++EXPORT_SYMBOL_GPL(aead_geniv_alloc);
++
++void aead_geniv_free(struct crypto_instance *inst)
++{
++ crypto_drop_aead(crypto_instance_ctx(inst));
++ kfree(inst);
++}
++EXPORT_SYMBOL_GPL(aead_geniv_free);
++
++int aead_geniv_init(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_aead *aead;
++
++ aead = crypto_spawn_aead(crypto_instance_ctx(inst));
++ if (IS_ERR(aead))
++ return PTR_ERR(aead);
++
++ tfm->crt_aead.base = aead;
++ tfm->crt_aead.reqsize += crypto_aead_reqsize(aead);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(aead_geniv_init);
++
++void aead_geniv_exit(struct crypto_tfm *tfm)
++{
++ crypto_free_aead(tfm->crt_aead.base);
++}
++EXPORT_SYMBOL_GPL(aead_geniv_exit);
++
++static int crypto_nivaead_default(struct crypto_alg *alg, u32 type, u32 mask)
++{
++ struct rtattr *tb[3];
++ struct {
++ struct rtattr attr;
++ struct crypto_attr_type data;
++ } ptype;
++ struct {
++ struct rtattr attr;
++ struct crypto_attr_alg data;
++ } palg;
++ struct crypto_template *tmpl;
++ struct crypto_instance *inst;
++ struct crypto_alg *larval;
++ const char *geniv;
++ int err;
++
++ larval = crypto_larval_lookup(alg->cra_driver_name,
++ CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV,
++ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ err = PTR_ERR(larval);
++ if (IS_ERR(larval))
++ goto out;
++
++ err = -EAGAIN;
++ if (!crypto_is_larval(larval))
++ goto drop_larval;
++
++ ptype.attr.rta_len = sizeof(ptype);
++ ptype.attr.rta_type = CRYPTOA_TYPE;
++ ptype.data.type = type | CRYPTO_ALG_GENIV;
++ /* GENIV tells the template that we're making a default geniv. */
++ ptype.data.mask = mask | CRYPTO_ALG_GENIV;
++ tb[0] = &ptype.attr;
++
++ palg.attr.rta_len = sizeof(palg);
++ palg.attr.rta_type = CRYPTOA_ALG;
++ /* Must use the exact name to locate ourselves. */
++ memcpy(palg.data.name, alg->cra_driver_name, CRYPTO_MAX_ALG_NAME);
++ tb[1] = &palg.attr;
++
++ tb[2] = NULL;
++
++ geniv = alg->cra_aead.geniv;
++
++ tmpl = crypto_lookup_template(geniv);
++ err = -ENOENT;
++ if (!tmpl)
++ goto kill_larval;
++
++ inst = tmpl->alloc(tb);
++ err = PTR_ERR(inst);
++ if (IS_ERR(inst))
++ goto put_tmpl;
++
++ if ((err = crypto_register_instance(tmpl, inst))) {
++ tmpl->free(inst);
++ goto put_tmpl;
++ }
++
++ /* Redo the lookup to use the instance we just registered. */
++ err = -EAGAIN;
++
++put_tmpl:
++ crypto_tmpl_put(tmpl);
++kill_larval:
++ crypto_larval_kill(larval);
++drop_larval:
++ crypto_mod_put(larval);
++out:
++ crypto_mod_put(alg);
++ return err;
++}
++
++static struct crypto_alg *crypto_lookup_aead(const char *name, u32 type,
++ u32 mask)
++{
++ struct crypto_alg *alg;
++
++ alg = crypto_alg_mod_lookup(name, type, mask);
++ if (IS_ERR(alg))
++ return alg;
++
++ if (alg->cra_type == &crypto_aead_type)
++ return alg;
++
++ if (!alg->cra_aead.ivsize)
++ return alg;
++
++ return ERR_PTR(crypto_nivaead_default(alg, type, mask));
++}
++
++int crypto_grab_aead(struct crypto_aead_spawn *spawn, const char *name,
++ u32 type, u32 mask)
++{
++ struct crypto_alg *alg;
++ int err;
++
++ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ type |= CRYPTO_ALG_TYPE_AEAD;
++ mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ mask |= CRYPTO_ALG_TYPE_MASK;
++
++ alg = crypto_lookup_aead(name, type, mask);
++ if (IS_ERR(alg))
++ return PTR_ERR(alg);
++
++ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
++ crypto_mod_put(alg);
++ return err;
++}
++EXPORT_SYMBOL_GPL(crypto_grab_aead);
++
++struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
++{
++ struct crypto_tfm *tfm;
++ int err;
++
++ type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ type |= CRYPTO_ALG_TYPE_AEAD;
++ mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
++ mask |= CRYPTO_ALG_TYPE_MASK;
++
++ for (;;) {
++ struct crypto_alg *alg;
++
++ alg = crypto_lookup_aead(alg_name, type, mask);
++ if (IS_ERR(alg)) {
++ err = PTR_ERR(alg);
++ goto err;
++ }
++
++ tfm = __crypto_alloc_tfm(alg, type, mask);
++ if (!IS_ERR(tfm))
++ return __crypto_aead_cast(tfm);
++
++ crypto_mod_put(alg);
++ err = PTR_ERR(tfm);
++
++err:
++ if (err != -EAGAIN)
++ break;
++ if (signal_pending(current)) {
++ err = -EINTR;
++ break;
++ }
++ }
++
++ return ERR_PTR(err);
++}
++EXPORT_SYMBOL_GPL(crypto_alloc_aead);
++
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Authenticated Encryption with Associated Data (AEAD)");
+diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
+index 9401dca..cf30af7 100644
+--- a/crypto/aes_generic.c
++++ b/crypto/aes_generic.c
+@@ -47,11 +47,7 @@
+ * ---------------------------------------------------------------------------
+ */
+
+-/* Some changes from the Gladman version:
+- s/RIJNDAEL(e_key)/E_KEY/g
+- s/RIJNDAEL(d_key)/D_KEY/g
+-*/
+-
++#include <crypto/aes.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/types.h>
+@@ -59,88 +55,46 @@
+ #include <linux/crypto.h>
+ #include <asm/byteorder.h>
+
+-#define AES_MIN_KEY_SIZE 16
+-#define AES_MAX_KEY_SIZE 32
+-
+-#define AES_BLOCK_SIZE 16
+-
+-/*
+- * #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
+- */
+-static inline u8
+-byte(const u32 x, const unsigned n)
++static inline u8 byte(const u32 x, const unsigned n)
+ {
+ return x >> (n << 3);
+ }
+
+-struct aes_ctx {
+- int key_length;
+- u32 buf[120];
+-};
+-
+-#define E_KEY (&ctx->buf[0])
+-#define D_KEY (&ctx->buf[60])
+-
+ static u8 pow_tab[256] __initdata;
+ static u8 log_tab[256] __initdata;
+ static u8 sbx_tab[256] __initdata;
+ static u8 isb_tab[256] __initdata;
+ static u32 rco_tab[10];
+-static u32 ft_tab[4][256];
+-static u32 it_tab[4][256];
+
+-static u32 fl_tab[4][256];
+-static u32 il_tab[4][256];
++u32 crypto_ft_tab[4][256];
++u32 crypto_fl_tab[4][256];
++u32 crypto_it_tab[4][256];
++u32 crypto_il_tab[4][256];
+
+-static inline u8 __init
+-f_mult (u8 a, u8 b)
++EXPORT_SYMBOL_GPL(crypto_ft_tab);
++EXPORT_SYMBOL_GPL(crypto_fl_tab);
++EXPORT_SYMBOL_GPL(crypto_it_tab);
++EXPORT_SYMBOL_GPL(crypto_il_tab);
++
++static inline u8 __init f_mult(u8 a, u8 b)
+ {
+ u8 aa = log_tab[a], cc = aa + log_tab[b];
+
+ return pow_tab[cc + (cc < aa ? 1 : 0)];
+ }
+
+-#define ff_mult(a,b) (a && b ? f_mult(a, b) : 0)
+-
+-#define f_rn(bo, bi, n, k) \
+- bo[n] = ft_tab[0][byte(bi[n],0)] ^ \
+- ft_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+- ft_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+- ft_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+-
+-#define i_rn(bo, bi, n, k) \
+- bo[n] = it_tab[0][byte(bi[n],0)] ^ \
+- it_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+- it_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+- it_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+-
+-#define ls_box(x) \
+- ( fl_tab[0][byte(x, 0)] ^ \
+- fl_tab[1][byte(x, 1)] ^ \
+- fl_tab[2][byte(x, 2)] ^ \
+- fl_tab[3][byte(x, 3)] )
+-
+-#define f_rl(bo, bi, n, k) \
+- bo[n] = fl_tab[0][byte(bi[n],0)] ^ \
+- fl_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+- fl_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+- fl_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+-
+-#define i_rl(bo, bi, n, k) \
+- bo[n] = il_tab[0][byte(bi[n],0)] ^ \
+- il_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+- il_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+- il_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+-
+-static void __init
+-gen_tabs (void)
++#define ff_mult(a, b) (a && b ? f_mult(a, b) : 0)
++
++static void __init gen_tabs(void)
+ {
+ u32 i, t;
+ u8 p, q;
+
+- /* log and power tables for GF(2**8) finite field with
+- 0x011b as modular polynomial - the simplest primitive
+- root is 0x03, used here to generate the tables */
++ /*
++ * log and power tables for GF(2**8) finite field with
++ * 0x011b as modular polynomial - the simplest primitive
++ * root is 0x03, used here to generate the tables
++ */
+
+ for (i = 0, p = 1; i < 256; ++i) {
+ pow_tab[i] = (u8) p;
+@@ -169,92 +123,119 @@ gen_tabs (void)
+ p = sbx_tab[i];
+
+ t = p;
+- fl_tab[0][i] = t;
+- fl_tab[1][i] = rol32(t, 8);
+- fl_tab[2][i] = rol32(t, 16);
+- fl_tab[3][i] = rol32(t, 24);
++ crypto_fl_tab[0][i] = t;
++ crypto_fl_tab[1][i] = rol32(t, 8);
++ crypto_fl_tab[2][i] = rol32(t, 16);
++ crypto_fl_tab[3][i] = rol32(t, 24);
+
+- t = ((u32) ff_mult (2, p)) |
++ t = ((u32) ff_mult(2, p)) |
+ ((u32) p << 8) |
+- ((u32) p << 16) | ((u32) ff_mult (3, p) << 24);
++ ((u32) p << 16) | ((u32) ff_mult(3, p) << 24);
+
+- ft_tab[0][i] = t;
+- ft_tab[1][i] = rol32(t, 8);
+- ft_tab[2][i] = rol32(t, 16);
+- ft_tab[3][i] = rol32(t, 24);
++ crypto_ft_tab[0][i] = t;
++ crypto_ft_tab[1][i] = rol32(t, 8);
++ crypto_ft_tab[2][i] = rol32(t, 16);
++ crypto_ft_tab[3][i] = rol32(t, 24);
+
+ p = isb_tab[i];
+
+ t = p;
+- il_tab[0][i] = t;
+- il_tab[1][i] = rol32(t, 8);
+- il_tab[2][i] = rol32(t, 16);
+- il_tab[3][i] = rol32(t, 24);
+-
+- t = ((u32) ff_mult (14, p)) |
+- ((u32) ff_mult (9, p) << 8) |
+- ((u32) ff_mult (13, p) << 16) |
+- ((u32) ff_mult (11, p) << 24);
+-
+- it_tab[0][i] = t;
+- it_tab[1][i] = rol32(t, 8);
+- it_tab[2][i] = rol32(t, 16);
+- it_tab[3][i] = rol32(t, 24);
++ crypto_il_tab[0][i] = t;
++ crypto_il_tab[1][i] = rol32(t, 8);
++ crypto_il_tab[2][i] = rol32(t, 16);
++ crypto_il_tab[3][i] = rol32(t, 24);
++
++ t = ((u32) ff_mult(14, p)) |
++ ((u32) ff_mult(9, p) << 8) |
++ ((u32) ff_mult(13, p) << 16) |
++ ((u32) ff_mult(11, p) << 24);
++
++ crypto_it_tab[0][i] = t;
++ crypto_it_tab[1][i] = rol32(t, 8);
++ crypto_it_tab[2][i] = rol32(t, 16);
++ crypto_it_tab[3][i] = rol32(t, 24);
+ }
+ }
+
+-#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+-
+-#define imix_col(y,x) \
+- u = star_x(x); \
+- v = star_x(u); \
+- w = star_x(v); \
+- t = w ^ (x); \
+- (y) = u ^ v ^ w; \
+- (y) ^= ror32(u ^ t, 8) ^ \
+- ror32(v ^ t, 16) ^ \
+- ror32(t,24)
+-
+ /* initialise the key schedule from the user supplied key */
+
+-#define loop4(i) \
+-{ t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+- t ^= E_KEY[4 * i]; E_KEY[4 * i + 4] = t; \
+- t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t; \
+- t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t; \
+- t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t; \
+-}
+-
+-#define loop6(i) \
+-{ t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+- t ^= E_KEY[6 * i]; E_KEY[6 * i + 6] = t; \
+- t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t; \
+- t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t; \
+- t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t; \
+- t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t; \
+- t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t; \
+-}
+-
+-#define loop8(i) \
+-{ t = ror32(t, 8); ; t = ls_box(t) ^ rco_tab[i]; \
+- t ^= E_KEY[8 * i]; E_KEY[8 * i + 8] = t; \
+- t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t; \
+- t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t; \
+- t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t; \
+- t = E_KEY[8 * i + 4] ^ ls_box(t); \
+- E_KEY[8 * i + 12] = t; \
+- t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t; \
+- t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t; \
+- t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t; \
+-}
++#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+
+-static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+- unsigned int key_len)
++#define imix_col(y,x) do { \
++ u = star_x(x); \
++ v = star_x(u); \
++ w = star_x(v); \
++ t = w ^ (x); \
++ (y) = u ^ v ^ w; \
++ (y) ^= ror32(u ^ t, 8) ^ \
++ ror32(v ^ t, 16) ^ \
++ ror32(t, 24); \
++} while (0)
++
++#define ls_box(x) \
++ crypto_fl_tab[0][byte(x, 0)] ^ \
++ crypto_fl_tab[1][byte(x, 1)] ^ \
++ crypto_fl_tab[2][byte(x, 2)] ^ \
++ crypto_fl_tab[3][byte(x, 3)]
++
++#define loop4(i) do { \
++ t = ror32(t, 8); \
++ t = ls_box(t) ^ rco_tab[i]; \
++ t ^= ctx->key_enc[4 * i]; \
++ ctx->key_enc[4 * i + 4] = t; \
++ t ^= ctx->key_enc[4 * i + 1]; \
++ ctx->key_enc[4 * i + 5] = t; \
++ t ^= ctx->key_enc[4 * i + 2]; \
++ ctx->key_enc[4 * i + 6] = t; \
++ t ^= ctx->key_enc[4 * i + 3]; \
++ ctx->key_enc[4 * i + 7] = t; \
++} while (0)
++
++#define loop6(i) do { \
++ t = ror32(t, 8); \
++ t = ls_box(t) ^ rco_tab[i]; \
++ t ^= ctx->key_enc[6 * i]; \
++ ctx->key_enc[6 * i + 6] = t; \
++ t ^= ctx->key_enc[6 * i + 1]; \
++ ctx->key_enc[6 * i + 7] = t; \
++ t ^= ctx->key_enc[6 * i + 2]; \
++ ctx->key_enc[6 * i + 8] = t; \
++ t ^= ctx->key_enc[6 * i + 3]; \
++ ctx->key_enc[6 * i + 9] = t; \
++ t ^= ctx->key_enc[6 * i + 4]; \
++ ctx->key_enc[6 * i + 10] = t; \
++ t ^= ctx->key_enc[6 * i + 5]; \
++ ctx->key_enc[6 * i + 11] = t; \
++} while (0)
++
++#define loop8(i) do { \
++ t = ror32(t, 8); \
++ t = ls_box(t) ^ rco_tab[i]; \
++ t ^= ctx->key_enc[8 * i]; \
++ ctx->key_enc[8 * i + 8] = t; \
++ t ^= ctx->key_enc[8 * i + 1]; \
++ ctx->key_enc[8 * i + 9] = t; \
++ t ^= ctx->key_enc[8 * i + 2]; \
++ ctx->key_enc[8 * i + 10] = t; \
++ t ^= ctx->key_enc[8 * i + 3]; \
++ ctx->key_enc[8 * i + 11] = t; \
++ t = ctx->key_enc[8 * i + 4] ^ ls_box(t); \
++ ctx->key_enc[8 * i + 12] = t; \
++ t ^= ctx->key_enc[8 * i + 5]; \
++ ctx->key_enc[8 * i + 13] = t; \
++ t ^= ctx->key_enc[8 * i + 6]; \
++ ctx->key_enc[8 * i + 14] = t; \
++ t ^= ctx->key_enc[8 * i + 7]; \
++ ctx->key_enc[8 * i + 15] = t; \
++} while (0)
++
++int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
++ unsigned int key_len)
+ {
+- struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ const __le32 *key = (const __le32 *)in_key;
+ u32 *flags = &tfm->crt_flags;
+- u32 i, t, u, v, w;
++ u32 i, t, u, v, w, j;
+
+ if (key_len % 8) {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+@@ -263,95 +244,113 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+
+ ctx->key_length = key_len;
+
+- E_KEY[0] = le32_to_cpu(key[0]);
+- E_KEY[1] = le32_to_cpu(key[1]);
+- E_KEY[2] = le32_to_cpu(key[2]);
+- E_KEY[3] = le32_to_cpu(key[3]);
++ ctx->key_dec[key_len + 24] = ctx->key_enc[0] = le32_to_cpu(key[0]);
++ ctx->key_dec[key_len + 25] = ctx->key_enc[1] = le32_to_cpu(key[1]);
++ ctx->key_dec[key_len + 26] = ctx->key_enc[2] = le32_to_cpu(key[2]);
++ ctx->key_dec[key_len + 27] = ctx->key_enc[3] = le32_to_cpu(key[3]);
+
+ switch (key_len) {
+ case 16:
+- t = E_KEY[3];
++ t = ctx->key_enc[3];
+ for (i = 0; i < 10; ++i)
+- loop4 (i);
++ loop4(i);
+ break;
+
+ case 24:
+- E_KEY[4] = le32_to_cpu(key[4]);
+- t = E_KEY[5] = le32_to_cpu(key[5]);
++ ctx->key_enc[4] = le32_to_cpu(key[4]);
++ t = ctx->key_enc[5] = le32_to_cpu(key[5]);
+ for (i = 0; i < 8; ++i)
+- loop6 (i);
++ loop6(i);
+ break;
+
+ case 32:
+- E_KEY[4] = le32_to_cpu(key[4]);
+- E_KEY[5] = le32_to_cpu(key[5]);
+- E_KEY[6] = le32_to_cpu(key[6]);
+- t = E_KEY[7] = le32_to_cpu(key[7]);
++ ctx->key_enc[4] = le32_to_cpu(key[4]);
++ ctx->key_enc[5] = le32_to_cpu(key[5]);
++ ctx->key_enc[6] = le32_to_cpu(key[6]);
++ t = ctx->key_enc[7] = le32_to_cpu(key[7]);
+ for (i = 0; i < 7; ++i)
+- loop8 (i);
++ loop8(i);
+ break;
+ }
+
+- D_KEY[0] = E_KEY[0];
+- D_KEY[1] = E_KEY[1];
+- D_KEY[2] = E_KEY[2];
+- D_KEY[3] = E_KEY[3];
++ ctx->key_dec[0] = ctx->key_enc[key_len + 24];
++ ctx->key_dec[1] = ctx->key_enc[key_len + 25];
++ ctx->key_dec[2] = ctx->key_enc[key_len + 26];
++ ctx->key_dec[3] = ctx->key_enc[key_len + 27];
+
+ for (i = 4; i < key_len + 24; ++i) {
+- imix_col (D_KEY[i], E_KEY[i]);
++ j = key_len + 24 - (i & ~3) + (i & 3);
++ imix_col(ctx->key_dec[j], ctx->key_enc[i]);
+ }
+-
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(crypto_aes_set_key);
+
+ /* encrypt a block of text */
+
+-#define f_nround(bo, bi, k) \
+- f_rn(bo, bi, 0, k); \
+- f_rn(bo, bi, 1, k); \
+- f_rn(bo, bi, 2, k); \
+- f_rn(bo, bi, 3, k); \
+- k += 4
+-
+-#define f_lround(bo, bi, k) \
+- f_rl(bo, bi, 0, k); \
+- f_rl(bo, bi, 1, k); \
+- f_rl(bo, bi, 2, k); \
+- f_rl(bo, bi, 3, k)
++#define f_rn(bo, bi, n, k) do { \
++ bo[n] = crypto_ft_tab[0][byte(bi[n], 0)] ^ \
++ crypto_ft_tab[1][byte(bi[(n + 1) & 3], 1)] ^ \
++ crypto_ft_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
++ crypto_ft_tab[3][byte(bi[(n + 3) & 3], 3)] ^ *(k + n); \
++} while (0)
++
++#define f_nround(bo, bi, k) do {\
++ f_rn(bo, bi, 0, k); \
++ f_rn(bo, bi, 1, k); \
++ f_rn(bo, bi, 2, k); \
++ f_rn(bo, bi, 3, k); \
++ k += 4; \
++} while (0)
++
++#define f_rl(bo, bi, n, k) do { \
++ bo[n] = crypto_fl_tab[0][byte(bi[n], 0)] ^ \
++ crypto_fl_tab[1][byte(bi[(n + 1) & 3], 1)] ^ \
++ crypto_fl_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
++ crypto_fl_tab[3][byte(bi[(n + 3) & 3], 3)] ^ *(k + n); \
++} while (0)
++
++#define f_lround(bo, bi, k) do {\
++ f_rl(bo, bi, 0, k); \
++ f_rl(bo, bi, 1, k); \
++ f_rl(bo, bi, 2, k); \
++ f_rl(bo, bi, 3, k); \
++} while (0)
+
+ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+- const struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
++ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ const __le32 *src = (const __le32 *)in;
+ __le32 *dst = (__le32 *)out;
+ u32 b0[4], b1[4];
+- const u32 *kp = E_KEY + 4;
++ const u32 *kp = ctx->key_enc + 4;
++ const int key_len = ctx->key_length;
+
+- b0[0] = le32_to_cpu(src[0]) ^ E_KEY[0];
+- b0[1] = le32_to_cpu(src[1]) ^ E_KEY[1];
+- b0[2] = le32_to_cpu(src[2]) ^ E_KEY[2];
+- b0[3] = le32_to_cpu(src[3]) ^ E_KEY[3];
++ b0[0] = le32_to_cpu(src[0]) ^ ctx->key_enc[0];
++ b0[1] = le32_to_cpu(src[1]) ^ ctx->key_enc[1];
++ b0[2] = le32_to_cpu(src[2]) ^ ctx->key_enc[2];
++ b0[3] = le32_to_cpu(src[3]) ^ ctx->key_enc[3];
+
+- if (ctx->key_length > 24) {
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
++ if (key_len > 24) {
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
+ }
+
+- if (ctx->key_length > 16) {
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
++ if (key_len > 16) {
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
+ }
+
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
+- f_nround (b1, b0, kp);
+- f_nround (b0, b1, kp);
+- f_nround (b1, b0, kp);
+- f_lround (b0, b1, kp);
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
++ f_nround(b1, b0, kp);
++ f_nround(b0, b1, kp);
++ f_nround(b1, b0, kp);
++ f_lround(b0, b1, kp);
+
+ dst[0] = cpu_to_le32(b0[0]);
+ dst[1] = cpu_to_le32(b0[1]);
+@@ -361,53 +360,69 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+
+ /* decrypt a block of text */
+
+-#define i_nround(bo, bi, k) \
+- i_rn(bo, bi, 0, k); \
+- i_rn(bo, bi, 1, k); \
+- i_rn(bo, bi, 2, k); \
+- i_rn(bo, bi, 3, k); \
+- k -= 4
+-
+-#define i_lround(bo, bi, k) \
+- i_rl(bo, bi, 0, k); \
+- i_rl(bo, bi, 1, k); \
+- i_rl(bo, bi, 2, k); \
+- i_rl(bo, bi, 3, k)
++#define i_rn(bo, bi, n, k) do { \
++ bo[n] = crypto_it_tab[0][byte(bi[n], 0)] ^ \
++ crypto_it_tab[1][byte(bi[(n + 3) & 3], 1)] ^ \
++ crypto_it_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
++ crypto_it_tab[3][byte(bi[(n + 1) & 3], 3)] ^ *(k + n); \
++} while (0)
++
++#define i_nround(bo, bi, k) do {\
++ i_rn(bo, bi, 0, k); \
++ i_rn(bo, bi, 1, k); \
++ i_rn(bo, bi, 2, k); \
++ i_rn(bo, bi, 3, k); \
++ k += 4; \
++} while (0)
++
++#define i_rl(bo, bi, n, k) do { \
++ bo[n] = crypto_il_tab[0][byte(bi[n], 0)] ^ \
++ crypto_il_tab[1][byte(bi[(n + 3) & 3], 1)] ^ \
++ crypto_il_tab[2][byte(bi[(n + 2) & 3], 2)] ^ \
++ crypto_il_tab[3][byte(bi[(n + 1) & 3], 3)] ^ *(k + n); \
++} while (0)
++
++#define i_lround(bo, bi, k) do {\
++ i_rl(bo, bi, 0, k); \
++ i_rl(bo, bi, 1, k); \
++ i_rl(bo, bi, 2, k); \
++ i_rl(bo, bi, 3, k); \
++} while (0)
+
+ static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+- const struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
++ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ const __le32 *src = (const __le32 *)in;
+ __le32 *dst = (__le32 *)out;
+ u32 b0[4], b1[4];
+ const int key_len = ctx->key_length;
+- const u32 *kp = D_KEY + key_len + 20;
++ const u32 *kp = ctx->key_dec + 4;
+
+- b0[0] = le32_to_cpu(src[0]) ^ E_KEY[key_len + 24];
+- b0[1] = le32_to_cpu(src[1]) ^ E_KEY[key_len + 25];
+- b0[2] = le32_to_cpu(src[2]) ^ E_KEY[key_len + 26];
+- b0[3] = le32_to_cpu(src[3]) ^ E_KEY[key_len + 27];
++ b0[0] = le32_to_cpu(src[0]) ^ ctx->key_dec[0];
++ b0[1] = le32_to_cpu(src[1]) ^ ctx->key_dec[1];
++ b0[2] = le32_to_cpu(src[2]) ^ ctx->key_dec[2];
++ b0[3] = le32_to_cpu(src[3]) ^ ctx->key_dec[3];
+
+ if (key_len > 24) {
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
+ }
+
+ if (key_len > 16) {
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
+ }
+
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
+- i_nround (b1, b0, kp);
+- i_nround (b0, b1, kp);
+- i_nround (b1, b0, kp);
+- i_lround (b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_nround(b0, b1, kp);
++ i_nround(b1, b0, kp);
++ i_lround(b0, b1, kp);
+
+ dst[0] = cpu_to_le32(b0[0]);
+ dst[1] = cpu_to_le32(b0[1]);
+@@ -415,14 +430,13 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ dst[3] = cpu_to_le32(b0[3]);
+ }
+
+-
+ static struct crypto_alg aes_alg = {
+ .cra_name = "aes",
+ .cra_driver_name = "aes-generic",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+- .cra_ctxsize = sizeof(struct aes_ctx),
++ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .cra_alignmask = 3,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
+@@ -430,9 +444,9 @@ static struct crypto_alg aes_alg = {
+ .cipher = {
+ .cia_min_keysize = AES_MIN_KEY_SIZE,
+ .cia_max_keysize = AES_MAX_KEY_SIZE,
+- .cia_setkey = aes_set_key,
+- .cia_encrypt = aes_encrypt,
+- .cia_decrypt = aes_decrypt
++ .cia_setkey = crypto_aes_set_key,
++ .cia_encrypt = aes_encrypt,
++ .cia_decrypt = aes_decrypt
+ }
+ }
+ };
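The `star_x` macro in the hunk above doubles four GF(2^8) bytes packed into one 32-bit word, which is the building block of the `imix_col` inverse MixColumns step. As a standalone sketch (not kernel code), the same operation can be checked against a plain per-byte `xtime` doubling reduced by the AES polynomial 0x11b:

```python
# Sketch of the packed GF(2^8) doubling performed by the star_x macro:
# shift each byte left, and reduce bytes whose top bit was set by the
# AES polynomial x^8 + x^4 + x^3 + x + 1 (0x1b).

def star_x(x: int) -> int:
    """Double four packed GF(2^8) bytes held in one 32-bit word."""
    return (((x & 0x7f7f7f7f) << 1) ^ (((x & 0x80808080) >> 7) * 0x1b)) & 0xffffffff

def xtime(b: int) -> int:
    """Reference single-byte GF(2^8) doubling for comparison."""
    b <<= 1
    if b & 0x100:
        b ^= 0x11b
    return b & 0xff

# The packed version agrees with doubling each byte independently.
word = 0x80013a02
packed = star_x(word)
for shift in (0, 8, 16, 24):
    assert (packed >> shift) & 0xff == xtime((word >> shift) & 0xff)
```

The mask-and-multiply trick works because `(x & 0x80808080) >> 7` leaves at most one bit per byte lane, so multiplying by 0x1b cannot carry between lanes.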
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 8383282..e65cb50 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -472,7 +472,7 @@ int crypto_check_attr_type(struct rtattr **tb, u32 type)
+ }
+ EXPORT_SYMBOL_GPL(crypto_check_attr_type);
+
+-struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
++const char *crypto_attr_alg_name(struct rtattr *rta)
+ {
+ struct crypto_attr_alg *alga;
+
+@@ -486,7 +486,21 @@ struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
+ alga = RTA_DATA(rta);
+ alga->name[CRYPTO_MAX_ALG_NAME - 1] = 0;
+
+- return crypto_alg_mod_lookup(alga->name, type, mask);
++ return alga->name;
++}
++EXPORT_SYMBOL_GPL(crypto_attr_alg_name);
++
++struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
++{
++ const char *name;
++ int err;
++
++ name = crypto_attr_alg_name(rta);
++ err = PTR_ERR(name);
++ if (IS_ERR(name))
++ return ERR_PTR(err);
++
++ return crypto_alg_mod_lookup(name, type, mask);
+ }
+ EXPORT_SYMBOL_GPL(crypto_attr_alg);
+
+@@ -605,6 +619,53 @@ int crypto_tfm_in_queue(struct crypto_queue *queue, struct crypto_tfm *tfm)
+ }
+ EXPORT_SYMBOL_GPL(crypto_tfm_in_queue);
+
++static inline void crypto_inc_byte(u8 *a, unsigned int size)
++{
++ u8 *b = (a + size);
++ u8 c;
++
++ for (; size; size--) {
++ c = *--b + 1;
++ *b = c;
++ if (c)
++ break;
++ }
++}
++
++void crypto_inc(u8 *a, unsigned int size)
++{
++ __be32 *b = (__be32 *)(a + size);
++ u32 c;
++
++ for (; size >= 4; size -= 4) {
++ c = be32_to_cpu(*--b) + 1;
++ *b = cpu_to_be32(c);
++ if (c)
++ return;
++ }
++
++ crypto_inc_byte(a, size);
++}
++EXPORT_SYMBOL_GPL(crypto_inc);
++
++static inline void crypto_xor_byte(u8 *a, const u8 *b, unsigned int size)
++{
++ for (; size; size--)
++ *a++ ^= *b++;
++}
++
++void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
++{
++ u32 *a = (u32 *)dst;
++ u32 *b = (u32 *)src;
++
++ for (; size >= 4; size -= 4)
++ *a++ ^= *b++;
++
++ crypto_xor_byte((u8 *)a, (u8 *)b, size);
++}
++EXPORT_SYMBOL_GPL(crypto_xor);
++
+ static int __init crypto_algapi_init(void)
+ {
+ crypto_init_proc();
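The new `crypto_inc()` and `crypto_xor()` helpers added to algapi.c above treat a byte buffer as a big-endian counter and an XOR destination respectively, handling the 4-byte-aligned bulk in word-sized steps and the tail byte-wise. Their observable behaviour can be sketched outside the kernel like this:

```python
# Standalone sketch (not the kernel implementation) of the semantics of
# crypto_inc() and crypto_xor(): increment a big-endian counter in place,
# propagating carries from the last byte; XOR src into dst byte by byte.

def crypto_inc(buf: bytearray) -> None:
    for i in reversed(range(len(buf))):
        buf[i] = (buf[i] + 1) & 0xff
        if buf[i]:          # no carry out of this byte: done
            return

def crypto_xor(dst: bytearray, src: bytes) -> None:
    for i in range(len(dst)):
        dst[i] ^= src[i]

ctr = bytearray.fromhex("000000ff")
crypto_inc(ctr)
assert ctr == bytearray.fromhex("00000100")

block = bytearray(b"\x0f" * 4)
crypto_xor(block, b"\xf0" * 4)
assert block == bytearray(b"\xff" * 4)
```

The kernel version gets the same result faster by consuming the buffer as `__be32` words while `size >= 4` and falling back to the byte loop only for the remainder.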
+diff --git a/crypto/api.c b/crypto/api.c
+index 1f5c724..a2496d1 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -137,7 +137,7 @@ static struct crypto_alg *crypto_larval_alloc(const char *name, u32 type,
+ return alg;
+ }
+
+-static void crypto_larval_kill(struct crypto_alg *alg)
++void crypto_larval_kill(struct crypto_alg *alg)
+ {
+ struct crypto_larval *larval = (void *)alg;
+
+@@ -147,6 +147,7 @@ static void crypto_larval_kill(struct crypto_alg *alg)
+ complete_all(&larval->completion);
+ crypto_alg_put(alg);
+ }
++EXPORT_SYMBOL_GPL(crypto_larval_kill);
+
+ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
+ {
+@@ -176,11 +177,9 @@ static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
+ return alg;
+ }
+
+-struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
++struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask)
+ {
+ struct crypto_alg *alg;
+- struct crypto_alg *larval;
+- int ok;
+
+ if (!name)
+ return ERR_PTR(-ENOENT);
+@@ -193,7 +192,17 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
+ if (alg)
+ return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg;
+
+- larval = crypto_larval_alloc(name, type, mask);
++ return crypto_larval_alloc(name, type, mask);
++}
++EXPORT_SYMBOL_GPL(crypto_larval_lookup);
++
++struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
++{
++ struct crypto_alg *alg;
++ struct crypto_alg *larval;
++ int ok;
++
++ larval = crypto_larval_lookup(name, type, mask);
+ if (IS_ERR(larval) || !crypto_is_larval(larval))
+ return larval;
+
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index 126a529..ed8ac5a 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -10,22 +10,21 @@
+ *
+ */
+
+-#include <crypto/algapi.h>
++#include <crypto/aead.h>
++#include <crypto/internal/skcipher.h>
++#include <crypto/authenc.h>
++#include <crypto/scatterwalk.h>
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/rtnetlink.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+-#include "scatterwalk.h"
+-
+ struct authenc_instance_ctx {
+ struct crypto_spawn auth;
+- struct crypto_spawn enc;
+-
+- unsigned int authsize;
+- unsigned int enckeylen;
++ struct crypto_skcipher_spawn enc;
+ };
+
+ struct crypto_authenc_ctx {
+@@ -37,19 +36,31 @@ struct crypto_authenc_ctx {
+ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+ unsigned int keylen)
+ {
+- struct authenc_instance_ctx *ictx =
+- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
+- unsigned int enckeylen = ictx->enckeylen;
+ unsigned int authkeylen;
++ unsigned int enckeylen;
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct crypto_hash *auth = ctx->auth;
+ struct crypto_ablkcipher *enc = ctx->enc;
++ struct rtattr *rta = (void *)key;
++ struct crypto_authenc_key_param *param;
+ int err = -EINVAL;
+
+- if (keylen < enckeylen) {
+- crypto_aead_set_flags(authenc, CRYPTO_TFM_RES_BAD_KEY_LEN);
+- goto out;
+- }
++ if (!RTA_OK(rta, keylen))
++ goto badkey;
++ if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
++ goto badkey;
++ if (RTA_PAYLOAD(rta) < sizeof(*param))
++ goto badkey;
++
++ param = RTA_DATA(rta);
++ enckeylen = be32_to_cpu(param->enckeylen);
++
++ key += RTA_ALIGN(rta->rta_len);
++ keylen -= RTA_ALIGN(rta->rta_len);
++
++ if (keylen < enckeylen)
++ goto badkey;
++
+ authkeylen = keylen - enckeylen;
+
+ crypto_hash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
+@@ -71,21 +82,38 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+
+ out:
+ return err;
++
++badkey:
++ crypto_aead_set_flags(authenc, CRYPTO_TFM_RES_BAD_KEY_LEN);
++ goto out;
+ }
+
+-static int crypto_authenc_hash(struct aead_request *req)
++static void authenc_chain(struct scatterlist *head, struct scatterlist *sg,
++ int chain)
++{
++ if (chain) {
++ head->length += sg->length;
++ sg = scatterwalk_sg_next(sg);
++ }
++
++ if (sg)
++ scatterwalk_sg_chain(head, 2, sg);
++ else
++ sg_mark_end(head);
++}
++
++static u8 *crypto_authenc_hash(struct aead_request *req, unsigned int flags,
++ struct scatterlist *cipher,
++ unsigned int cryptlen)
+ {
+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+- struct authenc_instance_ctx *ictx =
+- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct crypto_hash *auth = ctx->auth;
+ struct hash_desc desc = {
+ .tfm = auth,
++ .flags = aead_request_flags(req) & flags,
+ };
+ u8 *hash = aead_request_ctx(req);
+- struct scatterlist *dst = req->dst;
+- unsigned int cryptlen = req->cryptlen;
+ int err;
+
+ hash = (u8 *)ALIGN((unsigned long)hash + crypto_hash_alignmask(auth),
+@@ -100,7 +128,7 @@ static int crypto_authenc_hash(struct aead_request *req)
+ if (err)
+ goto auth_unlock;
+
+- err = crypto_hash_update(&desc, dst, cryptlen);
++ err = crypto_hash_update(&desc, cipher, cryptlen);
+ if (err)
+ goto auth_unlock;
+
+@@ -109,17 +137,53 @@ auth_unlock:
+ spin_unlock_bh(&ctx->auth_lock);
+
+ if (err)
+- return err;
++ return ERR_PTR(err);
++
++ return hash;
++}
+
+- scatterwalk_map_and_copy(hash, dst, cryptlen, ictx->authsize, 1);
++static int crypto_authenc_genicv(struct aead_request *req, u8 *iv,
++ unsigned int flags)
++{
++ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
++ struct scatterlist *dst = req->dst;
++ struct scatterlist cipher[2];
++ struct page *dstp;
++ unsigned int ivsize = crypto_aead_ivsize(authenc);
++ unsigned int cryptlen;
++ u8 *vdst;
++ u8 *hash;
++
++ dstp = sg_page(dst);
++ vdst = PageHighMem(dstp) ? NULL : page_address(dstp) + dst->offset;
++
++ sg_init_table(cipher, 2);
++ sg_set_buf(cipher, iv, ivsize);
++ authenc_chain(cipher, dst, vdst == iv + ivsize);
++
++ cryptlen = req->cryptlen + ivsize;
++ hash = crypto_authenc_hash(req, flags, cipher, cryptlen);
++ if (IS_ERR(hash))
++ return PTR_ERR(hash);
++
++ scatterwalk_map_and_copy(hash, cipher, cryptlen,
++ crypto_aead_authsize(authenc), 1);
+ return 0;
+ }
+
+ static void crypto_authenc_encrypt_done(struct crypto_async_request *req,
+ int err)
+ {
+- if (!err)
+- err = crypto_authenc_hash(req->data);
++ if (!err) {
++ struct aead_request *areq = req->data;
++ struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
++ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
++ struct ablkcipher_request *abreq = aead_request_ctx(areq);
++ u8 *iv = (u8 *)(abreq + 1) +
++ crypto_ablkcipher_reqsize(ctx->enc);
++
++ err = crypto_authenc_genicv(areq, iv, 0);
++ }
+
+ aead_request_complete(req->data, err);
+ }
+@@ -129,72 +193,99 @@ static int crypto_authenc_encrypt(struct aead_request *req)
+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct ablkcipher_request *abreq = aead_request_ctx(req);
++ struct crypto_ablkcipher *enc = ctx->enc;
++ struct scatterlist *dst = req->dst;
++ unsigned int cryptlen = req->cryptlen;
++ u8 *iv = (u8 *)(abreq + 1) + crypto_ablkcipher_reqsize(enc);
+ int err;
+
+- ablkcipher_request_set_tfm(abreq, ctx->enc);
++ ablkcipher_request_set_tfm(abreq, enc);
+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
+ crypto_authenc_encrypt_done, req);
+- ablkcipher_request_set_crypt(abreq, req->src, req->dst, req->cryptlen,
+- req->iv);
++ ablkcipher_request_set_crypt(abreq, req->src, dst, cryptlen, req->iv);
++
++ memcpy(iv, req->iv, crypto_aead_ivsize(authenc));
+
+ err = crypto_ablkcipher_encrypt(abreq);
+ if (err)
+ return err;
+
+- return crypto_authenc_hash(req);
++ return crypto_authenc_genicv(req, iv, CRYPTO_TFM_REQ_MAY_SLEEP);
+ }
+
+-static int crypto_authenc_verify(struct aead_request *req)
++static void crypto_authenc_givencrypt_done(struct crypto_async_request *req,
++ int err)
+ {
+- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+- struct authenc_instance_ctx *ictx =
+- crypto_instance_ctx(crypto_aead_alg_instance(authenc));
++ if (!err) {
++ struct aead_givcrypt_request *greq = req->data;
++
++ err = crypto_authenc_genicv(&greq->areq, greq->giv, 0);
++ }
++
++ aead_request_complete(req->data, err);
++}
++
++static int crypto_authenc_givencrypt(struct aead_givcrypt_request *req)
++{
++ struct crypto_aead *authenc = aead_givcrypt_reqtfm(req);
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+- struct crypto_hash *auth = ctx->auth;
+- struct hash_desc desc = {
+- .tfm = auth,
+- .flags = aead_request_flags(req),
+- };
+- u8 *ohash = aead_request_ctx(req);
+- u8 *ihash;
+- struct scatterlist *src = req->src;
+- unsigned int cryptlen = req->cryptlen;
+- unsigned int authsize;
++ struct aead_request *areq = &req->areq;
++ struct skcipher_givcrypt_request *greq = aead_request_ctx(areq);
++ u8 *iv = req->giv;
+ int err;
+
+- ohash = (u8 *)ALIGN((unsigned long)ohash + crypto_hash_alignmask(auth),
+- crypto_hash_alignmask(auth) + 1);
+- ihash = ohash + crypto_hash_digestsize(auth);
+-
+- spin_lock_bh(&ctx->auth_lock);
+- err = crypto_hash_init(&desc);
+- if (err)
+- goto auth_unlock;
++ skcipher_givcrypt_set_tfm(greq, ctx->enc);
++ skcipher_givcrypt_set_callback(greq, aead_request_flags(areq),
++ crypto_authenc_givencrypt_done, areq);
++ skcipher_givcrypt_set_crypt(greq, areq->src, areq->dst, areq->cryptlen,
++ areq->iv);
++ skcipher_givcrypt_set_giv(greq, iv, req->seq);
+
+- err = crypto_hash_update(&desc, req->assoc, req->assoclen);
++ err = crypto_skcipher_givencrypt(greq);
+ if (err)
+- goto auth_unlock;
++ return err;
+
+- err = crypto_hash_update(&desc, src, cryptlen);
+- if (err)
+- goto auth_unlock;
++ return crypto_authenc_genicv(areq, iv, CRYPTO_TFM_REQ_MAY_SLEEP);
++}
+
+- err = crypto_hash_final(&desc, ohash);
+-auth_unlock:
+- spin_unlock_bh(&ctx->auth_lock);
++static int crypto_authenc_verify(struct aead_request *req,
++ struct scatterlist *cipher,
++ unsigned int cryptlen)
++{
++ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
++ u8 *ohash;
++ u8 *ihash;
++ unsigned int authsize;
+
+- if (err)
+- return err;
++ ohash = crypto_authenc_hash(req, CRYPTO_TFM_REQ_MAY_SLEEP, cipher,
++ cryptlen);
++ if (IS_ERR(ohash))
++ return PTR_ERR(ohash);
+
+- authsize = ictx->authsize;
+- scatterwalk_map_and_copy(ihash, src, cryptlen, authsize, 0);
+- return memcmp(ihash, ohash, authsize) ? -EINVAL : 0;
++ authsize = crypto_aead_authsize(authenc);
++ ihash = ohash + authsize;
++ scatterwalk_map_and_copy(ihash, cipher, cryptlen, authsize, 0);
++ return memcmp(ihash, ohash, authsize) ? -EBADMSG: 0;
+ }
+
+-static void crypto_authenc_decrypt_done(struct crypto_async_request *req,
+- int err)
++static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
++ unsigned int cryptlen)
+ {
+- aead_request_complete(req->data, err);
++ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
++ struct scatterlist *src = req->src;
++ struct scatterlist cipher[2];
++ struct page *srcp;
++ unsigned int ivsize = crypto_aead_ivsize(authenc);
++ u8 *vsrc;
++
++ srcp = sg_page(src);
++ vsrc = PageHighMem(srcp) ? NULL : page_address(srcp) + src->offset;
++
++ sg_init_table(cipher, 2);
++ sg_set_buf(cipher, iv, ivsize);
++ authenc_chain(cipher, src, vsrc == iv + ivsize);
++
++ return crypto_authenc_verify(req, cipher, cryptlen + ivsize);
+ }
+
+ static int crypto_authenc_decrypt(struct aead_request *req)
+@@ -202,17 +293,23 @@ static int crypto_authenc_decrypt(struct aead_request *req)
+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct ablkcipher_request *abreq = aead_request_ctx(req);
++ unsigned int cryptlen = req->cryptlen;
++ unsigned int authsize = crypto_aead_authsize(authenc);
++ u8 *iv = req->iv;
+ int err;
+
+- err = crypto_authenc_verify(req);
++ if (cryptlen < authsize)
++ return -EINVAL;
++ cryptlen -= authsize;
++
++ err = crypto_authenc_iverify(req, iv, cryptlen);
+ if (err)
+ return err;
+
+ ablkcipher_request_set_tfm(abreq, ctx->enc);
+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
+- crypto_authenc_decrypt_done, req);
+- ablkcipher_request_set_crypt(abreq, req->src, req->dst, req->cryptlen,
+- req->iv);
++ req->base.complete, req->base.data);
++ ablkcipher_request_set_crypt(abreq, req->src, req->dst, cryptlen, iv);
+
+ return crypto_ablkcipher_decrypt(abreq);
+ }
+@@ -224,19 +321,13 @@ static int crypto_authenc_init_tfm(struct crypto_tfm *tfm)
+ struct crypto_authenc_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_hash *auth;
+ struct crypto_ablkcipher *enc;
+- unsigned int digestsize;
+ int err;
+
+ auth = crypto_spawn_hash(&ictx->auth);
+ if (IS_ERR(auth))
+ return PTR_ERR(auth);
+
+- err = -EINVAL;
+- digestsize = crypto_hash_digestsize(auth);
+- if (ictx->authsize > digestsize)
+- goto err_free_hash;
+-
+- enc = crypto_spawn_ablkcipher(&ictx->enc);
++ enc = crypto_spawn_skcipher(&ictx->enc);
+ err = PTR_ERR(enc);
+ if (IS_ERR(enc))
+ goto err_free_hash;
+@@ -246,9 +337,10 @@ static int crypto_authenc_init_tfm(struct crypto_tfm *tfm)
+ tfm->crt_aead.reqsize = max_t(unsigned int,
+ (crypto_hash_alignmask(auth) &
+ ~(crypto_tfm_ctx_alignment() - 1)) +
+- digestsize * 2,
+- sizeof(struct ablkcipher_request) +
+- crypto_ablkcipher_reqsize(enc));
++ crypto_hash_digestsize(auth) * 2,
++ sizeof(struct skcipher_givcrypt_request) +
++ crypto_ablkcipher_reqsize(enc) +
++ crypto_ablkcipher_ivsize(enc));
+
+ spin_lock_init(&ctx->auth_lock);
+
+@@ -269,75 +361,74 @@ static void crypto_authenc_exit_tfm(struct crypto_tfm *tfm)
+
+ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
+ {
++ struct crypto_attr_type *algt;
+ struct crypto_instance *inst;
+ struct crypto_alg *auth;
+ struct crypto_alg *enc;
+ struct authenc_instance_ctx *ctx;
+- unsigned int authsize;
+- unsigned int enckeylen;
++ const char *enc_name;
+ int err;
+
+- err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD);
+- if (err)
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
+ return ERR_PTR(err);
+
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
++ return ERR_PTR(-EINVAL);
++
+ auth = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
+ CRYPTO_ALG_TYPE_HASH_MASK);
+ if (IS_ERR(auth))
+ return ERR_PTR(PTR_ERR(auth));
+
+- err = crypto_attr_u32(tb[2], &authsize);
+- inst = ERR_PTR(err);
+- if (err)
+- goto out_put_auth;
+-
+- enc = crypto_attr_alg(tb[3], CRYPTO_ALG_TYPE_BLKCIPHER,
+- CRYPTO_ALG_TYPE_MASK);
+- inst = ERR_PTR(PTR_ERR(enc));
+- if (IS_ERR(enc))
++ enc_name = crypto_attr_alg_name(tb[2]);
++ err = PTR_ERR(enc_name);
++ if (IS_ERR(enc_name))
+ goto out_put_auth;
+
+- err = crypto_attr_u32(tb[4], &enckeylen);
+- if (err)
+- goto out_put_enc;
+-
+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!inst)
+- goto out_put_enc;
+-
+- err = -ENAMETOOLONG;
+- if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
+- "authenc(%s,%u,%s,%u)", auth->cra_name, authsize,
+- enc->cra_name, enckeylen) >= CRYPTO_MAX_ALG_NAME)
+- goto err_free_inst;
+-
+- if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+- "authenc(%s,%u,%s,%u)", auth->cra_driver_name,
+- authsize, enc->cra_driver_name, enckeylen) >=
+- CRYPTO_MAX_ALG_NAME)
+- goto err_free_inst;
++ goto out_put_auth;
+
+ ctx = crypto_instance_ctx(inst);
+- ctx->authsize = authsize;
+- ctx->enckeylen = enckeylen;
+
+ err = crypto_init_spawn(&ctx->auth, auth, inst, CRYPTO_ALG_TYPE_MASK);
+ if (err)
+ goto err_free_inst;
+
+- err = crypto_init_spawn(&ctx->enc, enc, inst, CRYPTO_ALG_TYPE_MASK);
++ crypto_set_skcipher_spawn(&ctx->enc, inst);
++ err = crypto_grab_skcipher(&ctx->enc, enc_name, 0,
++ crypto_requires_sync(algt->type,
++ algt->mask));
+ if (err)
+ goto err_drop_auth;
+
+- inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
++ enc = crypto_skcipher_spawn_alg(&ctx->enc);
++
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
++ "authenc(%s,%s)", auth->cra_name, enc->cra_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto err_drop_enc;
++
++ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "authenc(%s,%s)", auth->cra_driver_name,
++ enc->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
++ goto err_drop_enc;
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
++ inst->alg.cra_flags |= enc->cra_flags & CRYPTO_ALG_ASYNC;
+ inst->alg.cra_priority = enc->cra_priority * 10 + auth->cra_priority;
+ inst->alg.cra_blocksize = enc->cra_blocksize;
+- inst->alg.cra_alignmask = max(auth->cra_alignmask, enc->cra_alignmask);
++ inst->alg.cra_alignmask = auth->cra_alignmask | enc->cra_alignmask;
+ inst->alg.cra_type = &crypto_aead_type;
+
+- inst->alg.cra_aead.ivsize = enc->cra_blkcipher.ivsize;
+- inst->alg.cra_aead.authsize = authsize;
++ inst->alg.cra_aead.ivsize = enc->cra_ablkcipher.ivsize;
++ inst->alg.cra_aead.maxauthsize = auth->cra_type == &crypto_hash_type ?
++ auth->cra_hash.digestsize :
++ auth->cra_digest.dia_digestsize;
+
+ inst->alg.cra_ctxsize = sizeof(struct crypto_authenc_ctx);
+
+@@ -347,18 +438,19 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
+ inst->alg.cra_aead.setkey = crypto_authenc_setkey;
+ inst->alg.cra_aead.encrypt = crypto_authenc_encrypt;
+ inst->alg.cra_aead.decrypt = crypto_authenc_decrypt;
++ inst->alg.cra_aead.givencrypt = crypto_authenc_givencrypt;
+
+ out:
+- crypto_mod_put(enc);
+-out_put_auth:
+ crypto_mod_put(auth);
+ return inst;
+
++err_drop_enc:
++ crypto_drop_skcipher(&ctx->enc);
+ err_drop_auth:
+ crypto_drop_spawn(&ctx->auth);
+ err_free_inst:
+ kfree(inst);
+-out_put_enc:
++out_put_auth:
+ inst = ERR_PTR(err);
+ goto out;
+ }
+@@ -367,7 +459,7 @@ static void crypto_authenc_free(struct crypto_instance *inst)
+ {
+ struct authenc_instance_ctx *ctx = crypto_instance_ctx(inst);
+
+- crypto_drop_spawn(&ctx->enc);
++ crypto_drop_skcipher(&ctx->enc);
+ crypto_drop_spawn(&ctx->auth);
+ kfree(inst);
+ }
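The reworked `crypto_authenc_setkey()` above stops taking the encryption key length as a template parameter and instead parses it out of the key blob itself: an `rtattr` header of type `CRYPTO_AUTHENC_KEYA_PARAM`, a big-endian `u32` `enckeylen` payload, then the MAC key followed by the cipher key. A hedged sketch of packing such a blob (the enum value 1 for `CRYPTO_AUTHENC_KEYA_PARAM` and the native-endian `rtattr` header layout are assumptions, not taken from this diff):

```python
# Hypothetical packer for the authenc key blob format parsed above:
# rtattr header (native-endian u16 rta_len, u16 rta_type), a be32
# enckeylen payload padded to RTA_ALIGN (4 bytes), then the keys.
import struct

CRYPTO_AUTHENC_KEYA_PARAM = 1   # assumed enum value

def authenc_pack_key(auth_key: bytes, enc_key: bytes) -> bytes:
    payload = struct.pack(">I", len(enc_key))        # be32 enckeylen
    rta_len = 4 + len(payload)                       # header + payload
    rta = struct.pack("=HH", rta_len, CRYPTO_AUTHENC_KEYA_PARAM) + payload
    rta += b"\x00" * (-len(rta) % 4)                 # RTA_ALIGN padding
    return rta + auth_key + enc_key

blob = authenc_pack_key(b"A" * 20, b"B" * 16)
assert len(blob) == 8 + 20 + 16   # 8-byte attribute, then the two keys
```

This matches the parsing order in the hunk: the code checks `RTA_OK`, reads `enckeylen` via `be32_to_cpu`, skips `RTA_ALIGN(rta->rta_len)`, and splits the remainder as `authkeylen = keylen - enckeylen`.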
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index f6c67f9..4a7e65c 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -14,7 +14,8 @@
+ *
+ */
+
+-#include <linux/crypto.h>
++#include <crypto/internal/skcipher.h>
++#include <crypto/scatterwalk.h>
+ #include <linux/errno.h>
+ #include <linux/hardirq.h>
+ #include <linux/kernel.h>
+@@ -25,7 +26,6 @@
+ #include <linux/string.h>
+
+ #include "internal.h"
+-#include "scatterwalk.h"
+
+ enum {
+ BLKCIPHER_WALK_PHYS = 1 << 0,
+@@ -433,9 +433,8 @@ static unsigned int crypto_blkcipher_ctxsize(struct crypto_alg *alg, u32 type,
+ struct blkcipher_alg *cipher = &alg->cra_blkcipher;
+ unsigned int len = alg->cra_ctxsize;
+
+- type ^= CRYPTO_ALG_ASYNC;
+- mask &= CRYPTO_ALG_ASYNC;
+- if ((type & mask) && cipher->ivsize) {
++ if ((mask & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_MASK &&
++ cipher->ivsize) {
+ len = ALIGN(len, (unsigned long)alg->cra_alignmask + 1);
+ len += cipher->ivsize;
+ }
+@@ -451,6 +450,11 @@ static int crypto_init_blkcipher_ops_async(struct crypto_tfm *tfm)
+ crt->setkey = async_setkey;
+ crt->encrypt = async_encrypt;
+ crt->decrypt = async_decrypt;
++ if (!alg->ivsize) {
++ crt->givencrypt = skcipher_null_givencrypt;
++ crt->givdecrypt = skcipher_null_givdecrypt;
++ }
++ crt->base = __crypto_ablkcipher_cast(tfm);
+ crt->ivsize = alg->ivsize;
+
+ return 0;
+@@ -482,9 +486,7 @@ static int crypto_init_blkcipher_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+ if (alg->ivsize > PAGE_SIZE / 8)
+ return -EINVAL;
+
+- type ^= CRYPTO_ALG_ASYNC;
+- mask &= CRYPTO_ALG_ASYNC;
+- if (type & mask)
++ if ((mask & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_MASK)
+ return crypto_init_blkcipher_ops_sync(tfm);
+ else
+ return crypto_init_blkcipher_ops_async(tfm);
+@@ -499,6 +501,8 @@ static void crypto_blkcipher_show(struct seq_file *m, struct crypto_alg *alg)
+ seq_printf(m, "min keysize : %u\n", alg->cra_blkcipher.min_keysize);
+ seq_printf(m, "max keysize : %u\n", alg->cra_blkcipher.max_keysize);
+ seq_printf(m, "ivsize : %u\n", alg->cra_blkcipher.ivsize);
++ seq_printf(m, "geniv : %s\n", alg->cra_blkcipher.geniv ?:
++ "<default>");
+ }
+
+ const struct crypto_type crypto_blkcipher_type = {
+@@ -510,5 +514,187 @@ const struct crypto_type crypto_blkcipher_type = {
+ };
+ EXPORT_SYMBOL_GPL(crypto_blkcipher_type);
+
++static int crypto_grab_nivcipher(struct crypto_skcipher_spawn *spawn,
++ const char *name, u32 type, u32 mask)
++{
++ struct crypto_alg *alg;
++ int err;
++
++ type = crypto_skcipher_type(type);
++ mask = crypto_skcipher_mask(mask) | CRYPTO_ALG_GENIV;
++
++ alg = crypto_alg_mod_lookup(name, type, mask);
++ if (IS_ERR(alg))
++ return PTR_ERR(alg);
++
++ err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
++ crypto_mod_put(alg);
++ return err;
++}
++
++struct crypto_instance *skcipher_geniv_alloc(struct crypto_template *tmpl,
++ struct rtattr **tb, u32 type,
++ u32 mask)
++{
++ struct {
++ int (*setkey)(struct crypto_ablkcipher *tfm, const u8 *key,
++ unsigned int keylen);
++ int (*encrypt)(struct ablkcipher_request *req);
++ int (*decrypt)(struct ablkcipher_request *req);
++
++ unsigned int min_keysize;
++ unsigned int max_keysize;
++ unsigned int ivsize;
++
++ const char *geniv;
++ } balg;
++ const char *name;
++ struct crypto_skcipher_spawn *spawn;
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_alg *alg;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ (CRYPTO_ALG_TYPE_GIVCIPHER | CRYPTO_ALG_GENIV)) &
++ algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(name);
++ if (IS_ERR(name))
++ return ERR_PTR(err);
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
++ if (!inst)
++ return ERR_PTR(-ENOMEM);
++
++ spawn = crypto_instance_ctx(inst);
++
++ /* Ignore async algorithms if necessary. */
++ mask |= crypto_requires_sync(algt->type, algt->mask);
++
++ crypto_set_skcipher_spawn(spawn, inst);
++ err = crypto_grab_nivcipher(spawn, name, type, mask);
++ if (err)
++ goto err_free_inst;
++
++ alg = crypto_skcipher_spawn_alg(spawn);
++
++ if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
++ CRYPTO_ALG_TYPE_BLKCIPHER) {
++ balg.ivsize = alg->cra_blkcipher.ivsize;
++ balg.min_keysize = alg->cra_blkcipher.min_keysize;
++ balg.max_keysize = alg->cra_blkcipher.max_keysize;
++
++ balg.setkey = async_setkey;
++ balg.encrypt = async_encrypt;
++ balg.decrypt = async_decrypt;
++
++ balg.geniv = alg->cra_blkcipher.geniv;
++ } else {
++ balg.ivsize = alg->cra_ablkcipher.ivsize;
++ balg.min_keysize = alg->cra_ablkcipher.min_keysize;
++ balg.max_keysize = alg->cra_ablkcipher.max_keysize;
++
++ balg.setkey = alg->cra_ablkcipher.setkey;
++ balg.encrypt = alg->cra_ablkcipher.encrypt;
++ balg.decrypt = alg->cra_ablkcipher.decrypt;
++
++ balg.geniv = alg->cra_ablkcipher.geniv;
++ }
++
++ err = -EINVAL;
++ if (!balg.ivsize)
++ goto err_drop_alg;
++
++ /*
++ * This is only true if we're constructing an algorithm with its
++ * default IV generator. For the default generator we elide the
++ * template name and double-check the IV generator.
++ */
++ if (algt->mask & CRYPTO_ALG_GENIV) {
++ if (!balg.geniv)
++ balg.geniv = crypto_default_geniv(alg);
++ err = -EAGAIN;
++ if (strcmp(tmpl->name, balg.geniv))
++ goto err_drop_alg;
++
++ memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
++ memcpy(inst->alg.cra_driver_name, alg->cra_driver_name,
++ CRYPTO_MAX_ALG_NAME);
++ } else {
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
++ "%s(%s)", tmpl->name, alg->cra_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto err_drop_alg;
++ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "%s(%s)", tmpl->name, alg->cra_driver_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto err_drop_alg;
++ }
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_GIVCIPHER | CRYPTO_ALG_GENIV;
++ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = alg->cra_blocksize;
++ inst->alg.cra_alignmask = alg->cra_alignmask;
++ inst->alg.cra_type = &crypto_givcipher_type;
++
++ inst->alg.cra_ablkcipher.ivsize = balg.ivsize;
++ inst->alg.cra_ablkcipher.min_keysize = balg.min_keysize;
++ inst->alg.cra_ablkcipher.max_keysize = balg.max_keysize;
++ inst->alg.cra_ablkcipher.geniv = balg.geniv;
++
++ inst->alg.cra_ablkcipher.setkey = balg.setkey;
++ inst->alg.cra_ablkcipher.encrypt = balg.encrypt;
++ inst->alg.cra_ablkcipher.decrypt = balg.decrypt;
++
++out:
++ return inst;
++
++err_drop_alg:
++ crypto_drop_skcipher(spawn);
++err_free_inst:
++ kfree(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++EXPORT_SYMBOL_GPL(skcipher_geniv_alloc);
++
++void skcipher_geniv_free(struct crypto_instance *inst)
++{
++ crypto_drop_skcipher(crypto_instance_ctx(inst));
++ kfree(inst);
++}
++EXPORT_SYMBOL_GPL(skcipher_geniv_free);
++
++int skcipher_geniv_init(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_ablkcipher *cipher;
++
++ cipher = crypto_spawn_skcipher(crypto_instance_ctx(inst));
++ if (IS_ERR(cipher))
++ return PTR_ERR(cipher);
++
++ tfm->crt_ablkcipher.base = cipher;
++ tfm->crt_ablkcipher.reqsize += crypto_ablkcipher_reqsize(cipher);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(skcipher_geniv_init);
++
++void skcipher_geniv_exit(struct crypto_tfm *tfm)
++{
++ crypto_free_ablkcipher(tfm->crt_ablkcipher.base);
++}
++EXPORT_SYMBOL_GPL(skcipher_geniv_exit);
++
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Generic block chaining cipher type");
+diff --git a/crypto/camellia.c b/crypto/camellia.c
+index 6877ecf..493fee7 100644
+--- a/crypto/camellia.c
++++ b/crypto/camellia.c
+@@ -36,176 +36,6 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+
+-
+-#define CAMELLIA_MIN_KEY_SIZE 16
+-#define CAMELLIA_MAX_KEY_SIZE 32
+-#define CAMELLIA_BLOCK_SIZE 16
+-#define CAMELLIA_TABLE_BYTE_LEN 272
+-#define CAMELLIA_TABLE_WORD_LEN (CAMELLIA_TABLE_BYTE_LEN / 4)
+-
+-typedef u32 KEY_TABLE_TYPE[CAMELLIA_TABLE_WORD_LEN];
+-
+-
+-/* key constants */
+-
+-#define CAMELLIA_SIGMA1L (0xA09E667FL)
+-#define CAMELLIA_SIGMA1R (0x3BCC908BL)
+-#define CAMELLIA_SIGMA2L (0xB67AE858L)
+-#define CAMELLIA_SIGMA2R (0x4CAA73B2L)
+-#define CAMELLIA_SIGMA3L (0xC6EF372FL)
+-#define CAMELLIA_SIGMA3R (0xE94F82BEL)
+-#define CAMELLIA_SIGMA4L (0x54FF53A5L)
+-#define CAMELLIA_SIGMA4R (0xF1D36F1CL)
+-#define CAMELLIA_SIGMA5L (0x10E527FAL)
+-#define CAMELLIA_SIGMA5R (0xDE682D1DL)
+-#define CAMELLIA_SIGMA6L (0xB05688C2L)
+-#define CAMELLIA_SIGMA6R (0xB3E6C1FDL)
+-
+-struct camellia_ctx {
+- int key_length;
+- KEY_TABLE_TYPE key_table;
+-};
+-
+-
+-/*
+- * macros
+- */
+-
+-
+-# define GETU32(pt) (((u32)(pt)[0] << 24) \
+- ^ ((u32)(pt)[1] << 16) \
+- ^ ((u32)(pt)[2] << 8) \
+- ^ ((u32)(pt)[3]))
+-
+-#define COPY4WORD(dst, src) \
+- do { \
+- (dst)[0]=(src)[0]; \
+- (dst)[1]=(src)[1]; \
+- (dst)[2]=(src)[2]; \
+- (dst)[3]=(src)[3]; \
+- }while(0)
+-
+-#define SWAP4WORD(word) \
+- do { \
+- CAMELLIA_SWAP4((word)[0]); \
+- CAMELLIA_SWAP4((word)[1]); \
+- CAMELLIA_SWAP4((word)[2]); \
+- CAMELLIA_SWAP4((word)[3]); \
+- }while(0)
+-
+-#define XOR4WORD(a, b)/* a = a ^ b */ \
+- do { \
+- (a)[0]^=(b)[0]; \
+- (a)[1]^=(b)[1]; \
+- (a)[2]^=(b)[2]; \
+- (a)[3]^=(b)[3]; \
+- }while(0)
+-
+-#define XOR4WORD2(a, b, c)/* a = b ^ c */ \
+- do { \
+- (a)[0]=(b)[0]^(c)[0]; \
+- (a)[1]=(b)[1]^(c)[1]; \
+- (a)[2]=(b)[2]^(c)[2]; \
+- (a)[3]=(b)[3]^(c)[3]; \
+- }while(0)
+-
+-#define CAMELLIA_SUBKEY_L(INDEX) (subkey[(INDEX)*2])
+-#define CAMELLIA_SUBKEY_R(INDEX) (subkey[(INDEX)*2 + 1])
+-
+-/* rotation right shift 1byte */
+-#define CAMELLIA_RR8(x) (((x) >> 8) + ((x) << 24))
+-/* rotation left shift 1bit */
+-#define CAMELLIA_RL1(x) (((x) << 1) + ((x) >> 31))
+-/* rotation left shift 1byte */
+-#define CAMELLIA_RL8(x) (((x) << 8) + ((x) >> 24))
+-
+-#define CAMELLIA_ROLDQ(ll, lr, rl, rr, w0, w1, bits) \
+- do { \
+- w0 = ll; \
+- ll = (ll << bits) + (lr >> (32 - bits)); \
+- lr = (lr << bits) + (rl >> (32 - bits)); \
+- rl = (rl << bits) + (rr >> (32 - bits)); \
+- rr = (rr << bits) + (w0 >> (32 - bits)); \
+- } while(0)
+-
+-#define CAMELLIA_ROLDQo32(ll, lr, rl, rr, w0, w1, bits) \
+- do { \
+- w0 = ll; \
+- w1 = lr; \
+- ll = (lr << (bits - 32)) + (rl >> (64 - bits)); \
+- lr = (rl << (bits - 32)) + (rr >> (64 - bits)); \
+- rl = (rr << (bits - 32)) + (w0 >> (64 - bits)); \
+- rr = (w0 << (bits - 32)) + (w1 >> (64 - bits)); \
+- } while(0)
+-
+-#define CAMELLIA_SP1110(INDEX) (camellia_sp1110[(INDEX)])
+-#define CAMELLIA_SP0222(INDEX) (camellia_sp0222[(INDEX)])
+-#define CAMELLIA_SP3033(INDEX) (camellia_sp3033[(INDEX)])
+-#define CAMELLIA_SP4404(INDEX) (camellia_sp4404[(INDEX)])
+-
+-#define CAMELLIA_F(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
+- do { \
+- il = xl ^ kl; \
+- ir = xr ^ kr; \
+- t0 = il >> 16; \
+- t1 = ir >> 16; \
+- yl = CAMELLIA_SP1110(ir & 0xff) \
+- ^ CAMELLIA_SP0222((t1 >> 8) & 0xff) \
+- ^ CAMELLIA_SP3033(t1 & 0xff) \
+- ^ CAMELLIA_SP4404((ir >> 8) & 0xff); \
+- yr = CAMELLIA_SP1110((t0 >> 8) & 0xff) \
+- ^ CAMELLIA_SP0222(t0 & 0xff) \
+- ^ CAMELLIA_SP3033((il >> 8) & 0xff) \
+- ^ CAMELLIA_SP4404(il & 0xff); \
+- yl ^= yr; \
+- yr = CAMELLIA_RR8(yr); \
+- yr ^= yl; \
+- } while(0)
+-
+-
+-/*
+- * for speed up
+- *
+- */
+-#define CAMELLIA_FLS(ll, lr, rl, rr, kll, klr, krl, krr, t0, t1, t2, t3) \
+- do { \
+- t0 = kll; \
+- t2 = krr; \
+- t0 &= ll; \
+- t2 |= rr; \
+- rl ^= t2; \
+- lr ^= CAMELLIA_RL1(t0); \
+- t3 = krl; \
+- t1 = klr; \
+- t3 &= rl; \
+- t1 |= lr; \
+- ll ^= t1; \
+- rr ^= CAMELLIA_RL1(t3); \
+- } while(0)
+-
+-#define CAMELLIA_ROUNDSM(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
+- do { \
+- ir = CAMELLIA_SP1110(xr & 0xff); \
+- il = CAMELLIA_SP1110((xl>>24) & 0xff); \
+- ir ^= CAMELLIA_SP0222((xr>>24) & 0xff); \
+- il ^= CAMELLIA_SP0222((xl>>16) & 0xff); \
+- ir ^= CAMELLIA_SP3033((xr>>16) & 0xff); \
+- il ^= CAMELLIA_SP3033((xl>>8) & 0xff); \
+- ir ^= CAMELLIA_SP4404((xr>>8) & 0xff); \
+- il ^= CAMELLIA_SP4404(xl & 0xff); \
+- il ^= kl; \
+- ir ^= il ^ kr; \
+- yl ^= ir; \
+- yr ^= CAMELLIA_RR8(il) ^ ir; \
+- } while(0)
+-
+-/**
+- * Stuff related to the Camellia key schedule
+- */
+-#define SUBL(x) subL[(x)]
+-#define SUBR(x) subR[(x)]
+-
+-
+ static const u32 camellia_sp1110[256] = {
+ 0x70707000,0x82828200,0x2c2c2c00,0xececec00,
+ 0xb3b3b300,0x27272700,0xc0c0c000,0xe5e5e500,
+@@ -475,67 +305,348 @@ static const u32 camellia_sp4404[256] = {
+ };
+
+
++#define CAMELLIA_MIN_KEY_SIZE 16
++#define CAMELLIA_MAX_KEY_SIZE 32
++#define CAMELLIA_BLOCK_SIZE 16
++#define CAMELLIA_TABLE_BYTE_LEN 272
++
++/*
++ * NB: L and R below stand for 'left' and 'right' as in written numbers.
++ * That is, in (xxxL,xxxR) pair xxxL holds most significant digits,
++ * _not_ least significant ones!
++ */
++
++
++/* key constants */
++
++#define CAMELLIA_SIGMA1L (0xA09E667FL)
++#define CAMELLIA_SIGMA1R (0x3BCC908BL)
++#define CAMELLIA_SIGMA2L (0xB67AE858L)
++#define CAMELLIA_SIGMA2R (0x4CAA73B2L)
++#define CAMELLIA_SIGMA3L (0xC6EF372FL)
++#define CAMELLIA_SIGMA3R (0xE94F82BEL)
++#define CAMELLIA_SIGMA4L (0x54FF53A5L)
++#define CAMELLIA_SIGMA4R (0xF1D36F1CL)
++#define CAMELLIA_SIGMA5L (0x10E527FAL)
++#define CAMELLIA_SIGMA5R (0xDE682D1DL)
++#define CAMELLIA_SIGMA6L (0xB05688C2L)
++#define CAMELLIA_SIGMA6R (0xB3E6C1FDL)
++
++/*
++ * macros
++ */
++#define GETU32(v, pt) \
++ do { \
++ /* latest breed of gcc is clever enough to use move */ \
++ memcpy(&(v), (pt), 4); \
++ (v) = be32_to_cpu(v); \
++ } while(0)
++
++/* rotation right shift 1byte */
++#define ROR8(x) (((x) >> 8) + ((x) << 24))
++/* rotation left shift 1bit */
++#define ROL1(x) (((x) << 1) + ((x) >> 31))
++/* rotation left shift 1byte */
++#define ROL8(x) (((x) << 8) + ((x) >> 24))
++
++#define ROLDQ(ll, lr, rl, rr, w0, w1, bits) \
++ do { \
++ w0 = ll; \
++ ll = (ll << bits) + (lr >> (32 - bits)); \
++ lr = (lr << bits) + (rl >> (32 - bits)); \
++ rl = (rl << bits) + (rr >> (32 - bits)); \
++ rr = (rr << bits) + (w0 >> (32 - bits)); \
++ } while(0)
++
++#define ROLDQo32(ll, lr, rl, rr, w0, w1, bits) \
++ do { \
++ w0 = ll; \
++ w1 = lr; \
++ ll = (lr << (bits - 32)) + (rl >> (64 - bits)); \
++ lr = (rl << (bits - 32)) + (rr >> (64 - bits)); \
++ rl = (rr << (bits - 32)) + (w0 >> (64 - bits)); \
++ rr = (w0 << (bits - 32)) + (w1 >> (64 - bits)); \
++ } while(0)
++
++#define CAMELLIA_F(xl, xr, kl, kr, yl, yr, il, ir, t0, t1) \
++ do { \
++ il = xl ^ kl; \
++ ir = xr ^ kr; \
++ t0 = il >> 16; \
++ t1 = ir >> 16; \
++ yl = camellia_sp1110[(u8)(ir )] \
++ ^ camellia_sp0222[ (t1 >> 8)] \
++ ^ camellia_sp3033[(u8)(t1 )] \
++ ^ camellia_sp4404[(u8)(ir >> 8)]; \
++ yr = camellia_sp1110[ (t0 >> 8)] \
++ ^ camellia_sp0222[(u8)(t0 )] \
++ ^ camellia_sp3033[(u8)(il >> 8)] \
++ ^ camellia_sp4404[(u8)(il )]; \
++ yl ^= yr; \
++ yr = ROR8(yr); \
++ yr ^= yl; \
++ } while(0)
++
++#define SUBKEY_L(INDEX) (subkey[(INDEX)*2])
++#define SUBKEY_R(INDEX) (subkey[(INDEX)*2 + 1])
++
++static void camellia_setup_tail(u32 *subkey, u32 *subL, u32 *subR, int max)
++{
++ u32 dw, tl, tr;
++ u32 kw4l, kw4r;
++ int i;
++
++ /* absorb kw2 to other subkeys */
++ /* round 2 */
++ subL[3] ^= subL[1]; subR[3] ^= subR[1];
++ /* round 4 */
++ subL[5] ^= subL[1]; subR[5] ^= subR[1];
++ /* round 6 */
++ subL[7] ^= subL[1]; subR[7] ^= subR[1];
++ subL[1] ^= subR[1] & ~subR[9];
++ dw = subL[1] & subL[9],
++ subR[1] ^= ROL1(dw); /* modified for FLinv(kl2) */
++ /* round 8 */
++ subL[11] ^= subL[1]; subR[11] ^= subR[1];
++ /* round 10 */
++ subL[13] ^= subL[1]; subR[13] ^= subR[1];
++ /* round 12 */
++ subL[15] ^= subL[1]; subR[15] ^= subR[1];
++ subL[1] ^= subR[1] & ~subR[17];
++ dw = subL[1] & subL[17],
++ subR[1] ^= ROL1(dw); /* modified for FLinv(kl4) */
++ /* round 14 */
++ subL[19] ^= subL[1]; subR[19] ^= subR[1];
++ /* round 16 */
++ subL[21] ^= subL[1]; subR[21] ^= subR[1];
++ /* round 18 */
++ subL[23] ^= subL[1]; subR[23] ^= subR[1];
++ if (max == 24) {
++ /* kw3 */
++ subL[24] ^= subL[1]; subR[24] ^= subR[1];
++
++ /* absorb kw4 to other subkeys */
++ kw4l = subL[25]; kw4r = subR[25];
++ } else {
++ subL[1] ^= subR[1] & ~subR[25];
++ dw = subL[1] & subL[25],
++ subR[1] ^= ROL1(dw); /* modified for FLinv(kl6) */
++ /* round 20 */
++ subL[27] ^= subL[1]; subR[27] ^= subR[1];
++ /* round 22 */
++ subL[29] ^= subL[1]; subR[29] ^= subR[1];
++ /* round 24 */
++ subL[31] ^= subL[1]; subR[31] ^= subR[1];
++ /* kw3 */
++ subL[32] ^= subL[1]; subR[32] ^= subR[1];
++
++ /* absorb kw4 to other subkeys */
++ kw4l = subL[33]; kw4r = subR[33];
++ /* round 23 */
++ subL[30] ^= kw4l; subR[30] ^= kw4r;
++ /* round 21 */
++ subL[28] ^= kw4l; subR[28] ^= kw4r;
++ /* round 19 */
++ subL[26] ^= kw4l; subR[26] ^= kw4r;
++ kw4l ^= kw4r & ~subR[24];
++ dw = kw4l & subL[24],
++ kw4r ^= ROL1(dw); /* modified for FL(kl5) */
++ }
++ /* round 17 */
++ subL[22] ^= kw4l; subR[22] ^= kw4r;
++ /* round 15 */
++ subL[20] ^= kw4l; subR[20] ^= kw4r;
++ /* round 13 */
++ subL[18] ^= kw4l; subR[18] ^= kw4r;
++ kw4l ^= kw4r & ~subR[16];
++ dw = kw4l & subL[16],
++ kw4r ^= ROL1(dw); /* modified for FL(kl3) */
++ /* round 11 */
++ subL[14] ^= kw4l; subR[14] ^= kw4r;
++ /* round 9 */
++ subL[12] ^= kw4l; subR[12] ^= kw4r;
++ /* round 7 */
++ subL[10] ^= kw4l; subR[10] ^= kw4r;
++ kw4l ^= kw4r & ~subR[8];
++ dw = kw4l & subL[8],
++ kw4r ^= ROL1(dw); /* modified for FL(kl1) */
++ /* round 5 */
++ subL[6] ^= kw4l; subR[6] ^= kw4r;
++ /* round 3 */
++ subL[4] ^= kw4l; subR[4] ^= kw4r;
++ /* round 1 */
++ subL[2] ^= kw4l; subR[2] ^= kw4r;
++ /* kw1 */
++ subL[0] ^= kw4l; subR[0] ^= kw4r;
++
++ /* key XOR is end of F-function */
++ SUBKEY_L(0) = subL[0] ^ subL[2];/* kw1 */
++ SUBKEY_R(0) = subR[0] ^ subR[2];
++ SUBKEY_L(2) = subL[3]; /* round 1 */
++ SUBKEY_R(2) = subR[3];
++ SUBKEY_L(3) = subL[2] ^ subL[4]; /* round 2 */
++ SUBKEY_R(3) = subR[2] ^ subR[4];
++ SUBKEY_L(4) = subL[3] ^ subL[5]; /* round 3 */
++ SUBKEY_R(4) = subR[3] ^ subR[5];
++ SUBKEY_L(5) = subL[4] ^ subL[6]; /* round 4 */
++ SUBKEY_R(5) = subR[4] ^ subR[6];
++ SUBKEY_L(6) = subL[5] ^ subL[7]; /* round 5 */
++ SUBKEY_R(6) = subR[5] ^ subR[7];
++ tl = subL[10] ^ (subR[10] & ~subR[8]);
++ dw = tl & subL[8], /* FL(kl1) */
++ tr = subR[10] ^ ROL1(dw);
++ SUBKEY_L(7) = subL[6] ^ tl; /* round 6 */
++ SUBKEY_R(7) = subR[6] ^ tr;
++ SUBKEY_L(8) = subL[8]; /* FL(kl1) */
++ SUBKEY_R(8) = subR[8];
++ SUBKEY_L(9) = subL[9]; /* FLinv(kl2) */
++ SUBKEY_R(9) = subR[9];
++ tl = subL[7] ^ (subR[7] & ~subR[9]);
++ dw = tl & subL[9], /* FLinv(kl2) */
++ tr = subR[7] ^ ROL1(dw);
++ SUBKEY_L(10) = tl ^ subL[11]; /* round 7 */
++ SUBKEY_R(10) = tr ^ subR[11];
++ SUBKEY_L(11) = subL[10] ^ subL[12]; /* round 8 */
++ SUBKEY_R(11) = subR[10] ^ subR[12];
++ SUBKEY_L(12) = subL[11] ^ subL[13]; /* round 9 */
++ SUBKEY_R(12) = subR[11] ^ subR[13];
++ SUBKEY_L(13) = subL[12] ^ subL[14]; /* round 10 */
++ SUBKEY_R(13) = subR[12] ^ subR[14];
++ SUBKEY_L(14) = subL[13] ^ subL[15]; /* round 11 */
++ SUBKEY_R(14) = subR[13] ^ subR[15];
++ tl = subL[18] ^ (subR[18] & ~subR[16]);
++ dw = tl & subL[16], /* FL(kl3) */
++ tr = subR[18] ^ ROL1(dw);
++ SUBKEY_L(15) = subL[14] ^ tl; /* round 12 */
++ SUBKEY_R(15) = subR[14] ^ tr;
++ SUBKEY_L(16) = subL[16]; /* FL(kl3) */
++ SUBKEY_R(16) = subR[16];
++ SUBKEY_L(17) = subL[17]; /* FLinv(kl4) */
++ SUBKEY_R(17) = subR[17];
++ tl = subL[15] ^ (subR[15] & ~subR[17]);
++ dw = tl & subL[17], /* FLinv(kl4) */
++ tr = subR[15] ^ ROL1(dw);
++ SUBKEY_L(18) = tl ^ subL[19]; /* round 13 */
++ SUBKEY_R(18) = tr ^ subR[19];
++ SUBKEY_L(19) = subL[18] ^ subL[20]; /* round 14 */
++ SUBKEY_R(19) = subR[18] ^ subR[20];
++ SUBKEY_L(20) = subL[19] ^ subL[21]; /* round 15 */
++ SUBKEY_R(20) = subR[19] ^ subR[21];
++ SUBKEY_L(21) = subL[20] ^ subL[22]; /* round 16 */
++ SUBKEY_R(21) = subR[20] ^ subR[22];
++ SUBKEY_L(22) = subL[21] ^ subL[23]; /* round 17 */
++ SUBKEY_R(22) = subR[21] ^ subR[23];
++ if (max == 24) {
++ SUBKEY_L(23) = subL[22]; /* round 18 */
++ SUBKEY_R(23) = subR[22];
++ SUBKEY_L(24) = subL[24] ^ subL[23]; /* kw3 */
++ SUBKEY_R(24) = subR[24] ^ subR[23];
++ } else {
++ tl = subL[26] ^ (subR[26] & ~subR[24]);
++ dw = tl & subL[24], /* FL(kl5) */
++ tr = subR[26] ^ ROL1(dw);
++ SUBKEY_L(23) = subL[22] ^ tl; /* round 18 */
++ SUBKEY_R(23) = subR[22] ^ tr;
++ SUBKEY_L(24) = subL[24]; /* FL(kl5) */
++ SUBKEY_R(24) = subR[24];
++ SUBKEY_L(25) = subL[25]; /* FLinv(kl6) */
++ SUBKEY_R(25) = subR[25];
++ tl = subL[23] ^ (subR[23] & ~subR[25]);
++ dw = tl & subL[25], /* FLinv(kl6) */
++ tr = subR[23] ^ ROL1(dw);
++ SUBKEY_L(26) = tl ^ subL[27]; /* round 19 */
++ SUBKEY_R(26) = tr ^ subR[27];
++ SUBKEY_L(27) = subL[26] ^ subL[28]; /* round 20 */
++ SUBKEY_R(27) = subR[26] ^ subR[28];
++ SUBKEY_L(28) = subL[27] ^ subL[29]; /* round 21 */
++ SUBKEY_R(28) = subR[27] ^ subR[29];
++ SUBKEY_L(29) = subL[28] ^ subL[30]; /* round 22 */
++ SUBKEY_R(29) = subR[28] ^ subR[30];
++ SUBKEY_L(30) = subL[29] ^ subL[31]; /* round 23 */
++ SUBKEY_R(30) = subR[29] ^ subR[31];
++ SUBKEY_L(31) = subL[30]; /* round 24 */
++ SUBKEY_R(31) = subR[30];
++ SUBKEY_L(32) = subL[32] ^ subL[31]; /* kw3 */
++ SUBKEY_R(32) = subR[32] ^ subR[31];
++ }
++
++ /* apply the inverse of the last half of P-function */
++ i = 2;
++ do {
++ dw = SUBKEY_L(i + 0) ^ SUBKEY_R(i + 0); dw = ROL8(dw);/* round 1 */
++ SUBKEY_R(i + 0) = SUBKEY_L(i + 0) ^ dw; SUBKEY_L(i + 0) = dw;
++ dw = SUBKEY_L(i + 1) ^ SUBKEY_R(i + 1); dw = ROL8(dw);/* round 2 */
++ SUBKEY_R(i + 1) = SUBKEY_L(i + 1) ^ dw; SUBKEY_L(i + 1) = dw;
++ dw = SUBKEY_L(i + 2) ^ SUBKEY_R(i + 2); dw = ROL8(dw);/* round 3 */
++ SUBKEY_R(i + 2) = SUBKEY_L(i + 2) ^ dw; SUBKEY_L(i + 2) = dw;
++ dw = SUBKEY_L(i + 3) ^ SUBKEY_R(i + 3); dw = ROL8(dw);/* round 4 */
++ SUBKEY_R(i + 3) = SUBKEY_L(i + 3) ^ dw; SUBKEY_L(i + 3) = dw;
++ dw = SUBKEY_L(i + 4) ^ SUBKEY_R(i + 4); dw = ROL8(dw);/* round 5 */
++ SUBKEY_R(i + 4) = SUBKEY_L(i + 4) ^ dw; SUBKEY_L(i + 4) = dw;
++ dw = SUBKEY_L(i + 5) ^ SUBKEY_R(i + 5); dw = ROL8(dw);/* round 6 */
++ SUBKEY_R(i + 5) = SUBKEY_L(i + 5) ^ dw; SUBKEY_L(i + 5) = dw;
++ i += 8;
++ } while (i < max);
++}
+
+ static void camellia_setup128(const unsigned char *key, u32 *subkey)
+ {
+ u32 kll, klr, krl, krr;
+ u32 il, ir, t0, t1, w0, w1;
+- u32 kw4l, kw4r, dw, tl, tr;
+ u32 subL[26];
+ u32 subR[26];
+
+ /**
+- * k == kll || klr || krl || krr (|| is concatination)
+- */
+- kll = GETU32(key );
+- klr = GETU32(key + 4);
+- krl = GETU32(key + 8);
+- krr = GETU32(key + 12);
+- /**
+- * generate KL dependent subkeys
++ * k == kll || klr || krl || krr (|| is concatenation)
+ */
++ GETU32(kll, key );
++ GETU32(klr, key + 4);
++ GETU32(krl, key + 8);
++ GETU32(krr, key + 12);
++
++ /* generate KL dependent subkeys */
+ /* kw1 */
+- SUBL(0) = kll; SUBR(0) = klr;
++ subL[0] = kll; subR[0] = klr;
+ /* kw2 */
+- SUBL(1) = krl; SUBR(1) = krr;
++ subL[1] = krl; subR[1] = krr;
+ /* rotation left shift 15bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k3 */
+- SUBL(4) = kll; SUBR(4) = klr;
++ subL[4] = kll; subR[4] = klr;
+ /* k4 */
+- SUBL(5) = krl; SUBR(5) = krr;
++ subL[5] = krl; subR[5] = krr;
+ /* rotation left shift 15+30bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 30);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 30);
+ /* k7 */
+- SUBL(10) = kll; SUBR(10) = klr;
++ subL[10] = kll; subR[10] = klr;
+ /* k8 */
+- SUBL(11) = krl; SUBR(11) = krr;
++ subL[11] = krl; subR[11] = krr;
+ /* rotation left shift 15+30+15bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k10 */
+- SUBL(13) = krl; SUBR(13) = krr;
++ subL[13] = krl; subR[13] = krr;
+ /* rotation left shift 15+30+15+17 bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
+ /* kl3 */
+- SUBL(16) = kll; SUBR(16) = klr;
++ subL[16] = kll; subR[16] = klr;
+ /* kl4 */
+- SUBL(17) = krl; SUBR(17) = krr;
++ subL[17] = krl; subR[17] = krr;
+ /* rotation left shift 15+30+15+17+17 bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
+ /* k13 */
+- SUBL(18) = kll; SUBR(18) = klr;
++ subL[18] = kll; subR[18] = klr;
+ /* k14 */
+- SUBL(19) = krl; SUBR(19) = krr;
++ subL[19] = krl; subR[19] = krr;
+ /* rotation left shift 15+30+15+17+17+17 bit */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
+ /* k17 */
+- SUBL(22) = kll; SUBR(22) = klr;
++ subL[22] = kll; subR[22] = klr;
+ /* k18 */
+- SUBL(23) = krl; SUBR(23) = krr;
++ subL[23] = krl; subR[23] = krr;
+
+ /* generate KA */
+- kll = SUBL(0); klr = SUBR(0);
+- krl = SUBL(1); krr = SUBR(1);
++ kll = subL[0]; klr = subR[0];
++ krl = subL[1]; krr = subR[1];
+ CAMELLIA_F(kll, klr,
+ CAMELLIA_SIGMA1L, CAMELLIA_SIGMA1R,
+ w0, w1, il, ir, t0, t1);
+@@ -555,306 +666,108 @@ static void camellia_setup128(const unsigned char *key, u32 *subkey)
+
+ /* generate KA dependent subkeys */
+ /* k1, k2 */
+- SUBL(2) = kll; SUBR(2) = klr;
+- SUBL(3) = krl; SUBR(3) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ subL[2] = kll; subR[2] = klr;
++ subL[3] = krl; subR[3] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k5,k6 */
+- SUBL(6) = kll; SUBR(6) = klr;
+- SUBL(7) = krl; SUBR(7) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ subL[6] = kll; subR[6] = klr;
++ subL[7] = krl; subR[7] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* kl1, kl2 */
+- SUBL(8) = kll; SUBR(8) = klr;
+- SUBL(9) = krl; SUBR(9) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ subL[8] = kll; subR[8] = klr;
++ subL[9] = krl; subR[9] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k9 */
+- SUBL(12) = kll; SUBR(12) = klr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ subL[12] = kll; subR[12] = klr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k11, k12 */
+- SUBL(14) = kll; SUBR(14) = klr;
+- SUBL(15) = krl; SUBR(15) = krr;
+- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
++ subL[14] = kll; subR[14] = klr;
++ subL[15] = krl; subR[15] = krr;
++ ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
+ /* k15, k16 */
+- SUBL(20) = kll; SUBR(20) = klr;
+- SUBL(21) = krl; SUBR(21) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
++ subL[20] = kll; subR[20] = klr;
++ subL[21] = krl; subR[21] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
+ /* kw3, kw4 */
+- SUBL(24) = kll; SUBR(24) = klr;
+- SUBL(25) = krl; SUBR(25) = krr;
++ subL[24] = kll; subR[24] = klr;
++ subL[25] = krl; subR[25] = krr;
+
+-
+- /* absorb kw2 to other subkeys */
+- /* round 2 */
+- SUBL(3) ^= SUBL(1); SUBR(3) ^= SUBR(1);
+- /* round 4 */
+- SUBL(5) ^= SUBL(1); SUBR(5) ^= SUBR(1);
+- /* round 6 */
+- SUBL(7) ^= SUBL(1); SUBR(7) ^= SUBR(1);
+- SUBL(1) ^= SUBR(1) & ~SUBR(9);
+- dw = SUBL(1) & SUBL(9),
+- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl2) */
+- /* round 8 */
+- SUBL(11) ^= SUBL(1); SUBR(11) ^= SUBR(1);
+- /* round 10 */
+- SUBL(13) ^= SUBL(1); SUBR(13) ^= SUBR(1);
+- /* round 12 */
+- SUBL(15) ^= SUBL(1); SUBR(15) ^= SUBR(1);
+- SUBL(1) ^= SUBR(1) & ~SUBR(17);
+- dw = SUBL(1) & SUBL(17),
+- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl4) */
+- /* round 14 */
+- SUBL(19) ^= SUBL(1); SUBR(19) ^= SUBR(1);
+- /* round 16 */
+- SUBL(21) ^= SUBL(1); SUBR(21) ^= SUBR(1);
+- /* round 18 */
+- SUBL(23) ^= SUBL(1); SUBR(23) ^= SUBR(1);
+- /* kw3 */
+- SUBL(24) ^= SUBL(1); SUBR(24) ^= SUBR(1);
+-
+- /* absorb kw4 to other subkeys */
+- kw4l = SUBL(25); kw4r = SUBR(25);
+- /* round 17 */
+- SUBL(22) ^= kw4l; SUBR(22) ^= kw4r;
+- /* round 15 */
+- SUBL(20) ^= kw4l; SUBR(20) ^= kw4r;
+- /* round 13 */
+- SUBL(18) ^= kw4l; SUBR(18) ^= kw4r;
+- kw4l ^= kw4r & ~SUBR(16);
+- dw = kw4l & SUBL(16),
+- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl3) */
+- /* round 11 */
+- SUBL(14) ^= kw4l; SUBR(14) ^= kw4r;
+- /* round 9 */
+- SUBL(12) ^= kw4l; SUBR(12) ^= kw4r;
+- /* round 7 */
+- SUBL(10) ^= kw4l; SUBR(10) ^= kw4r;
+- kw4l ^= kw4r & ~SUBR(8);
+- dw = kw4l & SUBL(8),
+- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl1) */
+- /* round 5 */
+- SUBL(6) ^= kw4l; SUBR(6) ^= kw4r;
+- /* round 3 */
+- SUBL(4) ^= kw4l; SUBR(4) ^= kw4r;
+- /* round 1 */
+- SUBL(2) ^= kw4l; SUBR(2) ^= kw4r;
+- /* kw1 */
+- SUBL(0) ^= kw4l; SUBR(0) ^= kw4r;
+-
+-
+- /* key XOR is end of F-function */
+- CAMELLIA_SUBKEY_L(0) = SUBL(0) ^ SUBL(2);/* kw1 */
+- CAMELLIA_SUBKEY_R(0) = SUBR(0) ^ SUBR(2);
+- CAMELLIA_SUBKEY_L(2) = SUBL(3); /* round 1 */
+- CAMELLIA_SUBKEY_R(2) = SUBR(3);
+- CAMELLIA_SUBKEY_L(3) = SUBL(2) ^ SUBL(4); /* round 2 */
+- CAMELLIA_SUBKEY_R(3) = SUBR(2) ^ SUBR(4);
+- CAMELLIA_SUBKEY_L(4) = SUBL(3) ^ SUBL(5); /* round 3 */
+- CAMELLIA_SUBKEY_R(4) = SUBR(3) ^ SUBR(5);
+- CAMELLIA_SUBKEY_L(5) = SUBL(4) ^ SUBL(6); /* round 4 */
+- CAMELLIA_SUBKEY_R(5) = SUBR(4) ^ SUBR(6);
+- CAMELLIA_SUBKEY_L(6) = SUBL(5) ^ SUBL(7); /* round 5 */
+- CAMELLIA_SUBKEY_R(6) = SUBR(5) ^ SUBR(7);
+- tl = SUBL(10) ^ (SUBR(10) & ~SUBR(8));
+- dw = tl & SUBL(8), /* FL(kl1) */
+- tr = SUBR(10) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(7) = SUBL(6) ^ tl; /* round 6 */
+- CAMELLIA_SUBKEY_R(7) = SUBR(6) ^ tr;
+- CAMELLIA_SUBKEY_L(8) = SUBL(8); /* FL(kl1) */
+- CAMELLIA_SUBKEY_R(8) = SUBR(8);
+- CAMELLIA_SUBKEY_L(9) = SUBL(9); /* FLinv(kl2) */
+- CAMELLIA_SUBKEY_R(9) = SUBR(9);
+- tl = SUBL(7) ^ (SUBR(7) & ~SUBR(9));
+- dw = tl & SUBL(9), /* FLinv(kl2) */
+- tr = SUBR(7) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(10) = tl ^ SUBL(11); /* round 7 */
+- CAMELLIA_SUBKEY_R(10) = tr ^ SUBR(11);
+- CAMELLIA_SUBKEY_L(11) = SUBL(10) ^ SUBL(12); /* round 8 */
+- CAMELLIA_SUBKEY_R(11) = SUBR(10) ^ SUBR(12);
+- CAMELLIA_SUBKEY_L(12) = SUBL(11) ^ SUBL(13); /* round 9 */
+- CAMELLIA_SUBKEY_R(12) = SUBR(11) ^ SUBR(13);
+- CAMELLIA_SUBKEY_L(13) = SUBL(12) ^ SUBL(14); /* round 10 */
+- CAMELLIA_SUBKEY_R(13) = SUBR(12) ^ SUBR(14);
+- CAMELLIA_SUBKEY_L(14) = SUBL(13) ^ SUBL(15); /* round 11 */
+- CAMELLIA_SUBKEY_R(14) = SUBR(13) ^ SUBR(15);
+- tl = SUBL(18) ^ (SUBR(18) & ~SUBR(16));
+- dw = tl & SUBL(16), /* FL(kl3) */
+- tr = SUBR(18) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(15) = SUBL(14) ^ tl; /* round 12 */
+- CAMELLIA_SUBKEY_R(15) = SUBR(14) ^ tr;
+- CAMELLIA_SUBKEY_L(16) = SUBL(16); /* FL(kl3) */
+- CAMELLIA_SUBKEY_R(16) = SUBR(16);
+- CAMELLIA_SUBKEY_L(17) = SUBL(17); /* FLinv(kl4) */
+- CAMELLIA_SUBKEY_R(17) = SUBR(17);
+- tl = SUBL(15) ^ (SUBR(15) & ~SUBR(17));
+- dw = tl & SUBL(17), /* FLinv(kl4) */
+- tr = SUBR(15) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(18) = tl ^ SUBL(19); /* round 13 */
+- CAMELLIA_SUBKEY_R(18) = tr ^ SUBR(19);
+- CAMELLIA_SUBKEY_L(19) = SUBL(18) ^ SUBL(20); /* round 14 */
+- CAMELLIA_SUBKEY_R(19) = SUBR(18) ^ SUBR(20);
+- CAMELLIA_SUBKEY_L(20) = SUBL(19) ^ SUBL(21); /* round 15 */
+- CAMELLIA_SUBKEY_R(20) = SUBR(19) ^ SUBR(21);
+- CAMELLIA_SUBKEY_L(21) = SUBL(20) ^ SUBL(22); /* round 16 */
+- CAMELLIA_SUBKEY_R(21) = SUBR(20) ^ SUBR(22);
+- CAMELLIA_SUBKEY_L(22) = SUBL(21) ^ SUBL(23); /* round 17 */
+- CAMELLIA_SUBKEY_R(22) = SUBR(21) ^ SUBR(23);
+- CAMELLIA_SUBKEY_L(23) = SUBL(22); /* round 18 */
+- CAMELLIA_SUBKEY_R(23) = SUBR(22);
+- CAMELLIA_SUBKEY_L(24) = SUBL(24) ^ SUBL(23); /* kw3 */
+- CAMELLIA_SUBKEY_R(24) = SUBR(24) ^ SUBR(23);
+-
+- /* apply the inverse of the last half of P-function */
+- dw = CAMELLIA_SUBKEY_L(2) ^ CAMELLIA_SUBKEY_R(2),
+- dw = CAMELLIA_RL8(dw);/* round 1 */
+- CAMELLIA_SUBKEY_R(2) = CAMELLIA_SUBKEY_L(2) ^ dw,
+- CAMELLIA_SUBKEY_L(2) = dw;
+- dw = CAMELLIA_SUBKEY_L(3) ^ CAMELLIA_SUBKEY_R(3),
+- dw = CAMELLIA_RL8(dw);/* round 2 */
+- CAMELLIA_SUBKEY_R(3) = CAMELLIA_SUBKEY_L(3) ^ dw,
+- CAMELLIA_SUBKEY_L(3) = dw;
+- dw = CAMELLIA_SUBKEY_L(4) ^ CAMELLIA_SUBKEY_R(4),
+- dw = CAMELLIA_RL8(dw);/* round 3 */
+- CAMELLIA_SUBKEY_R(4) = CAMELLIA_SUBKEY_L(4) ^ dw,
+- CAMELLIA_SUBKEY_L(4) = dw;
+- dw = CAMELLIA_SUBKEY_L(5) ^ CAMELLIA_SUBKEY_R(5),
+- dw = CAMELLIA_RL8(dw);/* round 4 */
+- CAMELLIA_SUBKEY_R(5) = CAMELLIA_SUBKEY_L(5) ^ dw,
+- CAMELLIA_SUBKEY_L(5) = dw;
+- dw = CAMELLIA_SUBKEY_L(6) ^ CAMELLIA_SUBKEY_R(6),
+- dw = CAMELLIA_RL8(dw);/* round 5 */
+- CAMELLIA_SUBKEY_R(6) = CAMELLIA_SUBKEY_L(6) ^ dw,
+- CAMELLIA_SUBKEY_L(6) = dw;
+- dw = CAMELLIA_SUBKEY_L(7) ^ CAMELLIA_SUBKEY_R(7),
+- dw = CAMELLIA_RL8(dw);/* round 6 */
+- CAMELLIA_SUBKEY_R(7) = CAMELLIA_SUBKEY_L(7) ^ dw,
+- CAMELLIA_SUBKEY_L(7) = dw;
+- dw = CAMELLIA_SUBKEY_L(10) ^ CAMELLIA_SUBKEY_R(10),
+- dw = CAMELLIA_RL8(dw);/* round 7 */
+- CAMELLIA_SUBKEY_R(10) = CAMELLIA_SUBKEY_L(10) ^ dw,
+- CAMELLIA_SUBKEY_L(10) = dw;
+- dw = CAMELLIA_SUBKEY_L(11) ^ CAMELLIA_SUBKEY_R(11),
+- dw = CAMELLIA_RL8(dw);/* round 8 */
+- CAMELLIA_SUBKEY_R(11) = CAMELLIA_SUBKEY_L(11) ^ dw,
+- CAMELLIA_SUBKEY_L(11) = dw;
+- dw = CAMELLIA_SUBKEY_L(12) ^ CAMELLIA_SUBKEY_R(12),
+- dw = CAMELLIA_RL8(dw);/* round 9 */
+- CAMELLIA_SUBKEY_R(12) = CAMELLIA_SUBKEY_L(12) ^ dw,
+- CAMELLIA_SUBKEY_L(12) = dw;
+- dw = CAMELLIA_SUBKEY_L(13) ^ CAMELLIA_SUBKEY_R(13),
+- dw = CAMELLIA_RL8(dw);/* round 10 */
+- CAMELLIA_SUBKEY_R(13) = CAMELLIA_SUBKEY_L(13) ^ dw,
+- CAMELLIA_SUBKEY_L(13) = dw;
+- dw = CAMELLIA_SUBKEY_L(14) ^ CAMELLIA_SUBKEY_R(14),
+- dw = CAMELLIA_RL8(dw);/* round 11 */
+- CAMELLIA_SUBKEY_R(14) = CAMELLIA_SUBKEY_L(14) ^ dw,
+- CAMELLIA_SUBKEY_L(14) = dw;
+- dw = CAMELLIA_SUBKEY_L(15) ^ CAMELLIA_SUBKEY_R(15),
+- dw = CAMELLIA_RL8(dw);/* round 12 */
+- CAMELLIA_SUBKEY_R(15) = CAMELLIA_SUBKEY_L(15) ^ dw,
+- CAMELLIA_SUBKEY_L(15) = dw;
+- dw = CAMELLIA_SUBKEY_L(18) ^ CAMELLIA_SUBKEY_R(18),
+- dw = CAMELLIA_RL8(dw);/* round 13 */
+- CAMELLIA_SUBKEY_R(18) = CAMELLIA_SUBKEY_L(18) ^ dw,
+- CAMELLIA_SUBKEY_L(18) = dw;
+- dw = CAMELLIA_SUBKEY_L(19) ^ CAMELLIA_SUBKEY_R(19),
+- dw = CAMELLIA_RL8(dw);/* round 14 */
+- CAMELLIA_SUBKEY_R(19) = CAMELLIA_SUBKEY_L(19) ^ dw,
+- CAMELLIA_SUBKEY_L(19) = dw;
+- dw = CAMELLIA_SUBKEY_L(20) ^ CAMELLIA_SUBKEY_R(20),
+- dw = CAMELLIA_RL8(dw);/* round 15 */
+- CAMELLIA_SUBKEY_R(20) = CAMELLIA_SUBKEY_L(20) ^ dw,
+- CAMELLIA_SUBKEY_L(20) = dw;
+- dw = CAMELLIA_SUBKEY_L(21) ^ CAMELLIA_SUBKEY_R(21),
+- dw = CAMELLIA_RL8(dw);/* round 16 */
+- CAMELLIA_SUBKEY_R(21) = CAMELLIA_SUBKEY_L(21) ^ dw,
+- CAMELLIA_SUBKEY_L(21) = dw;
+- dw = CAMELLIA_SUBKEY_L(22) ^ CAMELLIA_SUBKEY_R(22),
+- dw = CAMELLIA_RL8(dw);/* round 17 */
+- CAMELLIA_SUBKEY_R(22) = CAMELLIA_SUBKEY_L(22) ^ dw,
+- CAMELLIA_SUBKEY_L(22) = dw;
+- dw = CAMELLIA_SUBKEY_L(23) ^ CAMELLIA_SUBKEY_R(23),
+- dw = CAMELLIA_RL8(dw);/* round 18 */
+- CAMELLIA_SUBKEY_R(23) = CAMELLIA_SUBKEY_L(23) ^ dw,
+- CAMELLIA_SUBKEY_L(23) = dw;
+-
+- return;
++ camellia_setup_tail(subkey, subL, subR, 24);
+ }
+
+-
+ static void camellia_setup256(const unsigned char *key, u32 *subkey)
+ {
+- u32 kll,klr,krl,krr; /* left half of key */
+- u32 krll,krlr,krrl,krrr; /* right half of key */
++ u32 kll, klr, krl, krr; /* left half of key */
++ u32 krll, krlr, krrl, krrr; /* right half of key */
+ u32 il, ir, t0, t1, w0, w1; /* temporary variables */
+- u32 kw4l, kw4r, dw, tl, tr;
+ u32 subL[34];
+ u32 subR[34];
+
+ /**
+ * key = (kll || klr || krl || krr || krll || krlr || krrl || krrr)
+- * (|| is concatination)
++ * (|| is concatenation)
+ */
+-
+- kll = GETU32(key );
+- klr = GETU32(key + 4);
+- krl = GETU32(key + 8);
+- krr = GETU32(key + 12);
+- krll = GETU32(key + 16);
+- krlr = GETU32(key + 20);
+- krrl = GETU32(key + 24);
+- krrr = GETU32(key + 28);
++ GETU32(kll, key );
++ GETU32(klr, key + 4);
++ GETU32(krl, key + 8);
++ GETU32(krr, key + 12);
++ GETU32(krll, key + 16);
++ GETU32(krlr, key + 20);
++ GETU32(krrl, key + 24);
++ GETU32(krrr, key + 28);
+
+ /* generate KL dependent subkeys */
+ /* kw1 */
+- SUBL(0) = kll; SUBR(0) = klr;
++ subL[0] = kll; subR[0] = klr;
+ /* kw2 */
+- SUBL(1) = krl; SUBR(1) = krr;
+- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 45);
++ subL[1] = krl; subR[1] = krr;
++ ROLDQo32(kll, klr, krl, krr, w0, w1, 45);
+ /* k9 */
+- SUBL(12) = kll; SUBR(12) = klr;
++ subL[12] = kll; subR[12] = klr;
+ /* k10 */
+- SUBL(13) = krl; SUBR(13) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ subL[13] = krl; subR[13] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* kl3 */
+- SUBL(16) = kll; SUBR(16) = klr;
++ subL[16] = kll; subR[16] = klr;
+ /* kl4 */
+- SUBL(17) = krl; SUBR(17) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 17);
++ subL[17] = krl; subR[17] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 17);
+ /* k17 */
+- SUBL(22) = kll; SUBR(22) = klr;
++ subL[22] = kll; subR[22] = klr;
+ /* k18 */
+- SUBL(23) = krl; SUBR(23) = krr;
+- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
++ subL[23] = krl; subR[23] = krr;
++ ROLDQo32(kll, klr, krl, krr, w0, w1, 34);
+ /* k23 */
+- SUBL(30) = kll; SUBR(30) = klr;
++ subL[30] = kll; subR[30] = klr;
+ /* k24 */
+- SUBL(31) = krl; SUBR(31) = krr;
++ subL[31] = krl; subR[31] = krr;
+
+ /* generate KR dependent subkeys */
+- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
++ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
+ /* k3 */
+- SUBL(4) = krll; SUBR(4) = krlr;
++ subL[4] = krll; subR[4] = krlr;
+ /* k4 */
+- SUBL(5) = krrl; SUBR(5) = krrr;
+- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
++ subL[5] = krrl; subR[5] = krrr;
++ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 15);
+ /* kl1 */
+- SUBL(8) = krll; SUBR(8) = krlr;
++ subL[8] = krll; subR[8] = krlr;
+ /* kl2 */
+- SUBL(9) = krrl; SUBR(9) = krrr;
+- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
++ subL[9] = krrl; subR[9] = krrr;
++ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
+ /* k13 */
+- SUBL(18) = krll; SUBR(18) = krlr;
++ subL[18] = krll; subR[18] = krlr;
+ /* k14 */
+- SUBL(19) = krrl; SUBR(19) = krrr;
+- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
++ subL[19] = krrl; subR[19] = krrr;
++ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
+ /* k19 */
+- SUBL(26) = krll; SUBR(26) = krlr;
++ subL[26] = krll; subR[26] = krlr;
+ /* k20 */
+- SUBL(27) = krrl; SUBR(27) = krrr;
+- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
++ subL[27] = krrl; subR[27] = krrr;
++ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 34);
+
+ /* generate KA */
+- kll = SUBL(0) ^ krll; klr = SUBR(0) ^ krlr;
+- krl = SUBL(1) ^ krrl; krr = SUBR(1) ^ krrr;
++ kll = subL[0] ^ krll; klr = subR[0] ^ krlr;
++ krl = subL[1] ^ krrl; krr = subR[1] ^ krrr;
+ CAMELLIA_F(kll, klr,
+ CAMELLIA_SIGMA1L, CAMELLIA_SIGMA1R,
+ w0, w1, il, ir, t0, t1);
+@@ -885,310 +798,50 @@ static void camellia_setup256(const unsigned char *key, u32 *subkey)
+ krll ^= w0; krlr ^= w1;
+
+ /* generate KA dependent subkeys */
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 15);
++ ROLDQ(kll, klr, krl, krr, w0, w1, 15);
+ /* k5 */
+- SUBL(6) = kll; SUBR(6) = klr;
++ subL[6] = kll; subR[6] = klr;
+ /* k6 */
+- SUBL(7) = krl; SUBR(7) = krr;
+- CAMELLIA_ROLDQ(kll, klr, krl, krr, w0, w1, 30);
++ subL[7] = krl; subR[7] = krr;
++ ROLDQ(kll, klr, krl, krr, w0, w1, 30);
+ /* k11 */
+- SUBL(14) = kll; SUBR(14) = klr;
++ subL[14] = kll; subR[14] = klr;
+ /* k12 */
+- SUBL(15) = krl; SUBR(15) = krr;
++ subL[15] = krl; subR[15] = krr;
+ /* rotation left shift 32bit */
+ /* kl5 */
+- SUBL(24) = klr; SUBR(24) = krl;
++ subL[24] = klr; subR[24] = krl;
+ /* kl6 */
+- SUBL(25) = krr; SUBR(25) = kll;
++ subL[25] = krr; subR[25] = kll;
+ /* rotation left shift 49 from k11,k12 -> k21,k22 */
+- CAMELLIA_ROLDQo32(kll, klr, krl, krr, w0, w1, 49);
++ ROLDQo32(kll, klr, krl, krr, w0, w1, 49);
+ /* k21 */
+- SUBL(28) = kll; SUBR(28) = klr;
++ subL[28] = kll; subR[28] = klr;
+ /* k22 */
+- SUBL(29) = krl; SUBR(29) = krr;
++ subL[29] = krl; subR[29] = krr;
+
+ /* generate KB dependent subkeys */
+ /* k1 */
+- SUBL(2) = krll; SUBR(2) = krlr;
++ subL[2] = krll; subR[2] = krlr;
+ /* k2 */
+- SUBL(3) = krrl; SUBR(3) = krrr;
+- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
++ subL[3] = krrl; subR[3] = krrr;
++ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
+ /* k7 */
+- SUBL(10) = krll; SUBR(10) = krlr;
++ subL[10] = krll; subR[10] = krlr;
+ /* k8 */
+- SUBL(11) = krrl; SUBR(11) = krrr;
+- CAMELLIA_ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
++ subL[11] = krrl; subR[11] = krrr;
++ ROLDQ(krll, krlr, krrl, krrr, w0, w1, 30);
+ /* k15 */
+- SUBL(20) = krll; SUBR(20) = krlr;
++ subL[20] = krll; subR[20] = krlr;
+ /* k16 */
+- SUBL(21) = krrl; SUBR(21) = krrr;
+- CAMELLIA_ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 51);
++ subL[21] = krrl; subR[21] = krrr;
++ ROLDQo32(krll, krlr, krrl, krrr, w0, w1, 51);
+ /* kw3 */
+- SUBL(32) = krll; SUBR(32) = krlr;
++ subL[32] = krll; subR[32] = krlr;
+ /* kw4 */
+- SUBL(33) = krrl; SUBR(33) = krrr;
+-
+- /* absorb kw2 to other subkeys */
+- /* round 2 */
+- SUBL(3) ^= SUBL(1); SUBR(3) ^= SUBR(1);
+- /* round 4 */
+- SUBL(5) ^= SUBL(1); SUBR(5) ^= SUBR(1);
+- /* round 6 */
+- SUBL(7) ^= SUBL(1); SUBR(7) ^= SUBR(1);
+- SUBL(1) ^= SUBR(1) & ~SUBR(9);
+- dw = SUBL(1) & SUBL(9),
+- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl2) */
+- /* round 8 */
+- SUBL(11) ^= SUBL(1); SUBR(11) ^= SUBR(1);
+- /* round 10 */
+- SUBL(13) ^= SUBL(1); SUBR(13) ^= SUBR(1);
+- /* round 12 */
+- SUBL(15) ^= SUBL(1); SUBR(15) ^= SUBR(1);
+- SUBL(1) ^= SUBR(1) & ~SUBR(17);
+- dw = SUBL(1) & SUBL(17),
+- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl4) */
+- /* round 14 */
+- SUBL(19) ^= SUBL(1); SUBR(19) ^= SUBR(1);
+- /* round 16 */
+- SUBL(21) ^= SUBL(1); SUBR(21) ^= SUBR(1);
+- /* round 18 */
+- SUBL(23) ^= SUBL(1); SUBR(23) ^= SUBR(1);
+- SUBL(1) ^= SUBR(1) & ~SUBR(25);
+- dw = SUBL(1) & SUBL(25),
+- SUBR(1) ^= CAMELLIA_RL1(dw); /* modified for FLinv(kl6) */
+- /* round 20 */
+- SUBL(27) ^= SUBL(1); SUBR(27) ^= SUBR(1);
+- /* round 22 */
+- SUBL(29) ^= SUBL(1); SUBR(29) ^= SUBR(1);
+- /* round 24 */
+- SUBL(31) ^= SUBL(1); SUBR(31) ^= SUBR(1);
+- /* kw3 */
+- SUBL(32) ^= SUBL(1); SUBR(32) ^= SUBR(1);
+-
+-
+- /* absorb kw4 to other subkeys */
+- kw4l = SUBL(33); kw4r = SUBR(33);
+- /* round 23 */
+- SUBL(30) ^= kw4l; SUBR(30) ^= kw4r;
+- /* round 21 */
+- SUBL(28) ^= kw4l; SUBR(28) ^= kw4r;
+- /* round 19 */
+- SUBL(26) ^= kw4l; SUBR(26) ^= kw4r;
+- kw4l ^= kw4r & ~SUBR(24);
+- dw = kw4l & SUBL(24),
+- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl5) */
+- /* round 17 */
+- SUBL(22) ^= kw4l; SUBR(22) ^= kw4r;
+- /* round 15 */
+- SUBL(20) ^= kw4l; SUBR(20) ^= kw4r;
+- /* round 13 */
+- SUBL(18) ^= kw4l; SUBR(18) ^= kw4r;
+- kw4l ^= kw4r & ~SUBR(16);
+- dw = kw4l & SUBL(16),
+- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl3) */
+- /* round 11 */
+- SUBL(14) ^= kw4l; SUBR(14) ^= kw4r;
+- /* round 9 */
+- SUBL(12) ^= kw4l; SUBR(12) ^= kw4r;
+- /* round 7 */
+- SUBL(10) ^= kw4l; SUBR(10) ^= kw4r;
+- kw4l ^= kw4r & ~SUBR(8);
+- dw = kw4l & SUBL(8),
+- kw4r ^= CAMELLIA_RL1(dw); /* modified for FL(kl1) */
+- /* round 5 */
+- SUBL(6) ^= kw4l; SUBR(6) ^= kw4r;
+- /* round 3 */
+- SUBL(4) ^= kw4l; SUBR(4) ^= kw4r;
+- /* round 1 */
+- SUBL(2) ^= kw4l; SUBR(2) ^= kw4r;
+- /* kw1 */
+- SUBL(0) ^= kw4l; SUBR(0) ^= kw4r;
++ subL[33] = krrl; subR[33] = krrr;
+
+- /* key XOR is end of F-function */
+- CAMELLIA_SUBKEY_L(0) = SUBL(0) ^ SUBL(2);/* kw1 */
+- CAMELLIA_SUBKEY_R(0) = SUBR(0) ^ SUBR(2);
+- CAMELLIA_SUBKEY_L(2) = SUBL(3); /* round 1 */
+- CAMELLIA_SUBKEY_R(2) = SUBR(3);
+- CAMELLIA_SUBKEY_L(3) = SUBL(2) ^ SUBL(4); /* round 2 */
+- CAMELLIA_SUBKEY_R(3) = SUBR(2) ^ SUBR(4);
+- CAMELLIA_SUBKEY_L(4) = SUBL(3) ^ SUBL(5); /* round 3 */
+- CAMELLIA_SUBKEY_R(4) = SUBR(3) ^ SUBR(5);
+- CAMELLIA_SUBKEY_L(5) = SUBL(4) ^ SUBL(6); /* round 4 */
+- CAMELLIA_SUBKEY_R(5) = SUBR(4) ^ SUBR(6);
+- CAMELLIA_SUBKEY_L(6) = SUBL(5) ^ SUBL(7); /* round 5 */
+- CAMELLIA_SUBKEY_R(6) = SUBR(5) ^ SUBR(7);
+- tl = SUBL(10) ^ (SUBR(10) & ~SUBR(8));
+- dw = tl & SUBL(8), /* FL(kl1) */
+- tr = SUBR(10) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(7) = SUBL(6) ^ tl; /* round 6 */
+- CAMELLIA_SUBKEY_R(7) = SUBR(6) ^ tr;
+- CAMELLIA_SUBKEY_L(8) = SUBL(8); /* FL(kl1) */
+- CAMELLIA_SUBKEY_R(8) = SUBR(8);
+- CAMELLIA_SUBKEY_L(9) = SUBL(9); /* FLinv(kl2) */
+- CAMELLIA_SUBKEY_R(9) = SUBR(9);
+- tl = SUBL(7) ^ (SUBR(7) & ~SUBR(9));
+- dw = tl & SUBL(9), /* FLinv(kl2) */
+- tr = SUBR(7) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(10) = tl ^ SUBL(11); /* round 7 */
+- CAMELLIA_SUBKEY_R(10) = tr ^ SUBR(11);
+- CAMELLIA_SUBKEY_L(11) = SUBL(10) ^ SUBL(12); /* round 8 */
+- CAMELLIA_SUBKEY_R(11) = SUBR(10) ^ SUBR(12);
+- CAMELLIA_SUBKEY_L(12) = SUBL(11) ^ SUBL(13); /* round 9 */
+- CAMELLIA_SUBKEY_R(12) = SUBR(11) ^ SUBR(13);
+- CAMELLIA_SUBKEY_L(13) = SUBL(12) ^ SUBL(14); /* round 10 */
+- CAMELLIA_SUBKEY_R(13) = SUBR(12) ^ SUBR(14);
+- CAMELLIA_SUBKEY_L(14) = SUBL(13) ^ SUBL(15); /* round 11 */
+- CAMELLIA_SUBKEY_R(14) = SUBR(13) ^ SUBR(15);
+- tl = SUBL(18) ^ (SUBR(18) & ~SUBR(16));
+- dw = tl & SUBL(16), /* FL(kl3) */
+- tr = SUBR(18) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(15) = SUBL(14) ^ tl; /* round 12 */
+- CAMELLIA_SUBKEY_R(15) = SUBR(14) ^ tr;
+- CAMELLIA_SUBKEY_L(16) = SUBL(16); /* FL(kl3) */
+- CAMELLIA_SUBKEY_R(16) = SUBR(16);
+- CAMELLIA_SUBKEY_L(17) = SUBL(17); /* FLinv(kl4) */
+- CAMELLIA_SUBKEY_R(17) = SUBR(17);
+- tl = SUBL(15) ^ (SUBR(15) & ~SUBR(17));
+- dw = tl & SUBL(17), /* FLinv(kl4) */
+- tr = SUBR(15) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(18) = tl ^ SUBL(19); /* round 13 */
+- CAMELLIA_SUBKEY_R(18) = tr ^ SUBR(19);
+- CAMELLIA_SUBKEY_L(19) = SUBL(18) ^ SUBL(20); /* round 14 */
+- CAMELLIA_SUBKEY_R(19) = SUBR(18) ^ SUBR(20);
+- CAMELLIA_SUBKEY_L(20) = SUBL(19) ^ SUBL(21); /* round 15 */
+- CAMELLIA_SUBKEY_R(20) = SUBR(19) ^ SUBR(21);
+- CAMELLIA_SUBKEY_L(21) = SUBL(20) ^ SUBL(22); /* round 16 */
+- CAMELLIA_SUBKEY_R(21) = SUBR(20) ^ SUBR(22);
+- CAMELLIA_SUBKEY_L(22) = SUBL(21) ^ SUBL(23); /* round 17 */
+- CAMELLIA_SUBKEY_R(22) = SUBR(21) ^ SUBR(23);
+- tl = SUBL(26) ^ (SUBR(26)
+- & ~SUBR(24));
+- dw = tl & SUBL(24), /* FL(kl5) */
+- tr = SUBR(26) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(23) = SUBL(22) ^ tl; /* round 18 */
+- CAMELLIA_SUBKEY_R(23) = SUBR(22) ^ tr;
+- CAMELLIA_SUBKEY_L(24) = SUBL(24); /* FL(kl5) */
+- CAMELLIA_SUBKEY_R(24) = SUBR(24);
+- CAMELLIA_SUBKEY_L(25) = SUBL(25); /* FLinv(kl6) */
+- CAMELLIA_SUBKEY_R(25) = SUBR(25);
+- tl = SUBL(23) ^ (SUBR(23) &
+- ~SUBR(25));
+- dw = tl & SUBL(25), /* FLinv(kl6) */
+- tr = SUBR(23) ^ CAMELLIA_RL1(dw);
+- CAMELLIA_SUBKEY_L(26) = tl ^ SUBL(27); /* round 19 */
+- CAMELLIA_SUBKEY_R(26) = tr ^ SUBR(27);
+- CAMELLIA_SUBKEY_L(27) = SUBL(26) ^ SUBL(28); /* round 20 */
+- CAMELLIA_SUBKEY_R(27) = SUBR(26) ^ SUBR(28);
+- CAMELLIA_SUBKEY_L(28) = SUBL(27) ^ SUBL(29); /* round 21 */
+- CAMELLIA_SUBKEY_R(28) = SUBR(27) ^ SUBR(29);
+- CAMELLIA_SUBKEY_L(29) = SUBL(28) ^ SUBL(30); /* round 22 */
+- CAMELLIA_SUBKEY_R(29) = SUBR(28) ^ SUBR(30);
+- CAMELLIA_SUBKEY_L(30) = SUBL(29) ^ SUBL(31); /* round 23 */
+- CAMELLIA_SUBKEY_R(30) = SUBR(29) ^ SUBR(31);
+- CAMELLIA_SUBKEY_L(31) = SUBL(30); /* round 24 */
+- CAMELLIA_SUBKEY_R(31) = SUBR(30);
+- CAMELLIA_SUBKEY_L(32) = SUBL(32) ^ SUBL(31); /* kw3 */
+- CAMELLIA_SUBKEY_R(32) = SUBR(32) ^ SUBR(31);
+-
+- /* apply the inverse of the last half of P-function */
+- dw = CAMELLIA_SUBKEY_L(2) ^ CAMELLIA_SUBKEY_R(2),
+- dw = CAMELLIA_RL8(dw);/* round 1 */
+- CAMELLIA_SUBKEY_R(2) = CAMELLIA_SUBKEY_L(2) ^ dw,
+- CAMELLIA_SUBKEY_L(2) = dw;
+- dw = CAMELLIA_SUBKEY_L(3) ^ CAMELLIA_SUBKEY_R(3),
+- dw = CAMELLIA_RL8(dw);/* round 2 */
+- CAMELLIA_SUBKEY_R(3) = CAMELLIA_SUBKEY_L(3) ^ dw,
+- CAMELLIA_SUBKEY_L(3) = dw;
+- dw = CAMELLIA_SUBKEY_L(4) ^ CAMELLIA_SUBKEY_R(4),
+- dw = CAMELLIA_RL8(dw);/* round 3 */
+- CAMELLIA_SUBKEY_R(4) = CAMELLIA_SUBKEY_L(4) ^ dw,
+- CAMELLIA_SUBKEY_L(4) = dw;
+- dw = CAMELLIA_SUBKEY_L(5) ^ CAMELLIA_SUBKEY_R(5),
+- dw = CAMELLIA_RL8(dw);/* round 4 */
+- CAMELLIA_SUBKEY_R(5) = CAMELLIA_SUBKEY_L(5) ^ dw,
+- CAMELLIA_SUBKEY_L(5) = dw;
+- dw = CAMELLIA_SUBKEY_L(6) ^ CAMELLIA_SUBKEY_R(6),
+- dw = CAMELLIA_RL8(dw);/* round 5 */
+- CAMELLIA_SUBKEY_R(6) = CAMELLIA_SUBKEY_L(6) ^ dw,
+- CAMELLIA_SUBKEY_L(6) = dw;
+- dw = CAMELLIA_SUBKEY_L(7) ^ CAMELLIA_SUBKEY_R(7),
+- dw = CAMELLIA_RL8(dw);/* round 6 */
+- CAMELLIA_SUBKEY_R(7) = CAMELLIA_SUBKEY_L(7) ^ dw,
+- CAMELLIA_SUBKEY_L(7) = dw;
+- dw = CAMELLIA_SUBKEY_L(10) ^ CAMELLIA_SUBKEY_R(10),
+- dw = CAMELLIA_RL8(dw);/* round 7 */
+- CAMELLIA_SUBKEY_R(10) = CAMELLIA_SUBKEY_L(10) ^ dw,
+- CAMELLIA_SUBKEY_L(10) = dw;
+- dw = CAMELLIA_SUBKEY_L(11) ^ CAMELLIA_SUBKEY_R(11),
+- dw = CAMELLIA_RL8(dw);/* round 8 */
+- CAMELLIA_SUBKEY_R(11) = CAMELLIA_SUBKEY_L(11) ^ dw,
+- CAMELLIA_SUBKEY_L(11) = dw;
+- dw = CAMELLIA_SUBKEY_L(12) ^ CAMELLIA_SUBKEY_R(12),
+- dw = CAMELLIA_RL8(dw);/* round 9 */
+- CAMELLIA_SUBKEY_R(12) = CAMELLIA_SUBKEY_L(12) ^ dw,
+- CAMELLIA_SUBKEY_L(12) = dw;
+- dw = CAMELLIA_SUBKEY_L(13) ^ CAMELLIA_SUBKEY_R(13),
+- dw = CAMELLIA_RL8(dw);/* round 10 */
+- CAMELLIA_SUBKEY_R(13) = CAMELLIA_SUBKEY_L(13) ^ dw,
+- CAMELLIA_SUBKEY_L(13) = dw;
+- dw = CAMELLIA_SUBKEY_L(14) ^ CAMELLIA_SUBKEY_R(14),
+- dw = CAMELLIA_RL8(dw);/* round 11 */
+- CAMELLIA_SUBKEY_R(14) = CAMELLIA_SUBKEY_L(14) ^ dw,
+- CAMELLIA_SUBKEY_L(14) = dw;
+- dw = CAMELLIA_SUBKEY_L(15) ^ CAMELLIA_SUBKEY_R(15),
+- dw = CAMELLIA_RL8(dw);/* round 12 */
+- CAMELLIA_SUBKEY_R(15) = CAMELLIA_SUBKEY_L(15) ^ dw,
+- CAMELLIA_SUBKEY_L(15) = dw;
+- dw = CAMELLIA_SUBKEY_L(18) ^ CAMELLIA_SUBKEY_R(18),
+- dw = CAMELLIA_RL8(dw);/* round 13 */
+- CAMELLIA_SUBKEY_R(18) = CAMELLIA_SUBKEY_L(18) ^ dw,
+- CAMELLIA_SUBKEY_L(18) = dw;
+- dw = CAMELLIA_SUBKEY_L(19) ^ CAMELLIA_SUBKEY_R(19),
+- dw = CAMELLIA_RL8(dw);/* round 14 */
+- CAMELLIA_SUBKEY_R(19) = CAMELLIA_SUBKEY_L(19) ^ dw,
+- CAMELLIA_SUBKEY_L(19) = dw;
+- dw = CAMELLIA_SUBKEY_L(20) ^ CAMELLIA_SUBKEY_R(20),
+- dw = CAMELLIA_RL8(dw);/* round 15 */
+- CAMELLIA_SUBKEY_R(20) = CAMELLIA_SUBKEY_L(20) ^ dw,
+- CAMELLIA_SUBKEY_L(20) = dw;
+- dw = CAMELLIA_SUBKEY_L(21) ^ CAMELLIA_SUBKEY_R(21),
+- dw = CAMELLIA_RL8(dw);/* round 16 */
+- CAMELLIA_SUBKEY_R(21) = CAMELLIA_SUBKEY_L(21) ^ dw,
+- CAMELLIA_SUBKEY_L(21) = dw;
+- dw = CAMELLIA_SUBKEY_L(22) ^ CAMELLIA_SUBKEY_R(22),
+- dw = CAMELLIA_RL8(dw);/* round 17 */
+- CAMELLIA_SUBKEY_R(22) = CAMELLIA_SUBKEY_L(22) ^ dw,
+- CAMELLIA_SUBKEY_L(22) = dw;
+- dw = CAMELLIA_SUBKEY_L(23) ^ CAMELLIA_SUBKEY_R(23),
+- dw = CAMELLIA_RL8(dw);/* round 18 */
+- CAMELLIA_SUBKEY_R(23) = CAMELLIA_SUBKEY_L(23) ^ dw,
+- CAMELLIA_SUBKEY_L(23) = dw;
+- dw = CAMELLIA_SUBKEY_L(26) ^ CAMELLIA_SUBKEY_R(26),
+- dw = CAMELLIA_RL8(dw);/* round 19 */
+- CAMELLIA_SUBKEY_R(26) = CAMELLIA_SUBKEY_L(26) ^ dw,
+- CAMELLIA_SUBKEY_L(26) = dw;
+- dw = CAMELLIA_SUBKEY_L(27) ^ CAMELLIA_SUBKEY_R(27),
+- dw = CAMELLIA_RL8(dw);/* round 20 */
+- CAMELLIA_SUBKEY_R(27) = CAMELLIA_SUBKEY_L(27) ^ dw,
+- CAMELLIA_SUBKEY_L(27) = dw;
+- dw = CAMELLIA_SUBKEY_L(28) ^ CAMELLIA_SUBKEY_R(28),
+- dw = CAMELLIA_RL8(dw);/* round 21 */
+- CAMELLIA_SUBKEY_R(28) = CAMELLIA_SUBKEY_L(28) ^ dw,
+- CAMELLIA_SUBKEY_L(28) = dw;
+- dw = CAMELLIA_SUBKEY_L(29) ^ CAMELLIA_SUBKEY_R(29),
+- dw = CAMELLIA_RL8(dw);/* round 22 */
+- CAMELLIA_SUBKEY_R(29) = CAMELLIA_SUBKEY_L(29) ^ dw,
+- CAMELLIA_SUBKEY_L(29) = dw;
+- dw = CAMELLIA_SUBKEY_L(30) ^ CAMELLIA_SUBKEY_R(30),
+- dw = CAMELLIA_RL8(dw);/* round 23 */
+- CAMELLIA_SUBKEY_R(30) = CAMELLIA_SUBKEY_L(30) ^ dw,
+- CAMELLIA_SUBKEY_L(30) = dw;
+- dw = CAMELLIA_SUBKEY_L(31) ^ CAMELLIA_SUBKEY_R(31),
+- dw = CAMELLIA_RL8(dw);/* round 24 */
+- CAMELLIA_SUBKEY_R(31) = CAMELLIA_SUBKEY_L(31) ^ dw,
+- CAMELLIA_SUBKEY_L(31) = dw;
+-
+- return;
++ camellia_setup_tail(subkey, subL, subR, 32);
+ }
+
+ static void camellia_setup192(const unsigned char *key, u32 *subkey)
+@@ -1197,482 +850,168 @@ static void camellia_setup192(const unsigned char *key, u32 *subkey)
+ u32 krll, krlr, krrl,krrr;
+
+ memcpy(kk, key, 24);
+- memcpy((unsigned char *)&krll, key+16,4);
+- memcpy((unsigned char *)&krlr, key+20,4);
++ memcpy((unsigned char *)&krll, key+16, 4);
++ memcpy((unsigned char *)&krlr, key+20, 4);
+ krrl = ~krll;
+ krrr = ~krlr;
+ memcpy(kk+24, (unsigned char *)&krrl, 4);
+ memcpy(kk+28, (unsigned char *)&krrr, 4);
+ camellia_setup256(kk, subkey);
+- return;
+ }
+
+
+-/**
+- * Stuff related to camellia encryption/decryption
++/*
++ * Encrypt/decrypt
+ */
+-static void camellia_encrypt128(const u32 *subkey, __be32 *io_text)
+-{
+- u32 il,ir,t0,t1; /* temporary valiables */
+-
+- u32 io[4];
+-
+- io[0] = be32_to_cpu(io_text[0]);
+- io[1] = be32_to_cpu(io_text[1]);
+- io[2] = be32_to_cpu(io_text[2]);
+- io[3] = be32_to_cpu(io_text[3]);
+-
+- /* pre whitening but absorb kw2*/
+- io[0] ^= CAMELLIA_SUBKEY_L(0);
+- io[1] ^= CAMELLIA_SUBKEY_R(0);
+- /* main iteration */
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
+- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
+- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
+- io[0],io[1],il,ir,t0,t1);
++#define CAMELLIA_FLS(ll, lr, rl, rr, kll, klr, krl, krr, t0, t1, t2, t3) \
++ do { \
++ t0 = kll; \
++ t2 = krr; \
++ t0 &= ll; \
++ t2 |= rr; \
++ rl ^= t2; \
++ lr ^= ROL1(t0); \
++ t3 = krl; \
++ t1 = klr; \
++ t3 &= rl; \
++ t1 |= lr; \
++ ll ^= t1; \
++ rr ^= ROL1(t3); \
++ } while(0)
+
+- /* post whitening but kw4 */
+- io[2] ^= CAMELLIA_SUBKEY_L(24);
+- io[3] ^= CAMELLIA_SUBKEY_R(24);
+-
+- t0 = io[0];
+- t1 = io[1];
+- io[0] = io[2];
+- io[1] = io[3];
+- io[2] = t0;
+- io[3] = t1;
+-
+- io_text[0] = cpu_to_be32(io[0]);
+- io_text[1] = cpu_to_be32(io[1]);
+- io_text[2] = cpu_to_be32(io[2]);
+- io_text[3] = cpu_to_be32(io[3]);
+-
+- return;
+-}
++#define CAMELLIA_ROUNDSM(xl, xr, kl, kr, yl, yr, il, ir) \
++ do { \
++ ir = camellia_sp1110[(u8)xr]; \
++ il = camellia_sp1110[ (xl >> 24)]; \
++ ir ^= camellia_sp0222[ (xr >> 24)]; \
++ il ^= camellia_sp0222[(u8)(xl >> 16)]; \
++ ir ^= camellia_sp3033[(u8)(xr >> 16)]; \
++ il ^= camellia_sp3033[(u8)(xl >> 8)]; \
++ ir ^= camellia_sp4404[(u8)(xr >> 8)]; \
++ il ^= camellia_sp4404[(u8)xl]; \
++ il ^= kl; \
++ ir ^= il ^ kr; \
++ yl ^= ir; \
++ yr ^= ROR8(il) ^ ir; \
++ } while(0)
+
+-static void camellia_decrypt128(const u32 *subkey, __be32 *io_text)
++/* max = 24: 128bit encrypt, max = 32: 256bit encrypt */
++static void camellia_do_encrypt(const u32 *subkey, u32 *io, unsigned max)
+ {
+- u32 il,ir,t0,t1; /* temporary valiables */
++ u32 il,ir,t0,t1; /* temporary variables */
+
+- u32 io[4];
+-
+- io[0] = be32_to_cpu(io_text[0]);
+- io[1] = be32_to_cpu(io_text[1]);
+- io[2] = be32_to_cpu(io_text[2]);
+- io[3] = be32_to_cpu(io_text[3]);
+-
+- /* pre whitening but absorb kw2*/
+- io[0] ^= CAMELLIA_SUBKEY_L(24);
+- io[1] ^= CAMELLIA_SUBKEY_R(24);
++ /* pre whitening but absorb kw2 */
++ io[0] ^= SUBKEY_L(0);
++ io[1] ^= SUBKEY_R(0);
+
+ /* main iteration */
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
+- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
+- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
+- io[0],io[1],il,ir,t0,t1);
+-
+- /* post whitening but kw4 */
+- io[2] ^= CAMELLIA_SUBKEY_L(0);
+- io[3] ^= CAMELLIA_SUBKEY_R(0);
+-
+- t0 = io[0];
+- t1 = io[1];
+- io[0] = io[2];
+- io[1] = io[3];
+- io[2] = t0;
+- io[3] = t1;
+-
+- io_text[0] = cpu_to_be32(io[0]);
+- io_text[1] = cpu_to_be32(io[1]);
+- io_text[2] = cpu_to_be32(io[2]);
+- io_text[3] = cpu_to_be32(io[3]);
+-
+- return;
+-}
+-
+-
+-/**
+- * stuff for 192 and 256bit encryption/decryption
+- */
+-static void camellia_encrypt256(const u32 *subkey, __be32 *io_text)
+-{
+- u32 il,ir,t0,t1; /* temporary valiables */
+-
+- u32 io[4];
+-
+- io[0] = be32_to_cpu(io_text[0]);
+- io[1] = be32_to_cpu(io_text[1]);
+- io[2] = be32_to_cpu(io_text[2]);
+- io[3] = be32_to_cpu(io_text[3]);
++#define ROUNDS(i) do { \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 2),SUBKEY_R(i + 2), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 3),SUBKEY_R(i + 3), \
++ io[0],io[1],il,ir); \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 4),SUBKEY_R(i + 4), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 5),SUBKEY_R(i + 5), \
++ io[0],io[1],il,ir); \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 6),SUBKEY_R(i + 6), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 7),SUBKEY_R(i + 7), \
++ io[0],io[1],il,ir); \
++} while (0)
++#define FLS(i) do { \
++ CAMELLIA_FLS(io[0],io[1],io[2],io[3], \
++ SUBKEY_L(i + 0),SUBKEY_R(i + 0), \
++ SUBKEY_L(i + 1),SUBKEY_R(i + 1), \
++ t0,t1,il,ir); \
++} while (0)
++
++ ROUNDS(0);
++ FLS(8);
++ ROUNDS(8);
++ FLS(16);
++ ROUNDS(16);
++ if (max == 32) {
++ FLS(24);
++ ROUNDS(24);
++ }
+
+- /* pre whitening but absorb kw2*/
+- io[0] ^= CAMELLIA_SUBKEY_L(0);
+- io[1] ^= CAMELLIA_SUBKEY_R(0);
+-
+- /* main iteration */
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
+- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
+- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(24),CAMELLIA_SUBKEY_R(24),
+- CAMELLIA_SUBKEY_L(25),CAMELLIA_SUBKEY_R(25),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(26),CAMELLIA_SUBKEY_R(26),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(27),CAMELLIA_SUBKEY_R(27),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(28),CAMELLIA_SUBKEY_R(28),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(29),CAMELLIA_SUBKEY_R(29),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(30),CAMELLIA_SUBKEY_R(30),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(31),CAMELLIA_SUBKEY_R(31),
+- io[0],io[1],il,ir,t0,t1);
++#undef ROUNDS
++#undef FLS
+
+ /* post whitening but kw4 */
+- io[2] ^= CAMELLIA_SUBKEY_L(32);
+- io[3] ^= CAMELLIA_SUBKEY_R(32);
+-
+- t0 = io[0];
+- t1 = io[1];
+- io[0] = io[2];
+- io[1] = io[3];
+- io[2] = t0;
+- io[3] = t1;
+-
+- io_text[0] = cpu_to_be32(io[0]);
+- io_text[1] = cpu_to_be32(io[1]);
+- io_text[2] = cpu_to_be32(io[2]);
+- io_text[3] = cpu_to_be32(io[3]);
+-
+- return;
++ io[2] ^= SUBKEY_L(max);
++ io[3] ^= SUBKEY_R(max);
++ /* NB: io[0],[1] should be swapped with [2],[3] by caller! */
+ }
+
+-
+-static void camellia_decrypt256(const u32 *subkey, __be32 *io_text)
++static void camellia_do_decrypt(const u32 *subkey, u32 *io, unsigned i)
+ {
+- u32 il,ir,t0,t1; /* temporary valiables */
++ u32 il,ir,t0,t1; /* temporary variables */
+
+- u32 io[4];
+-
+- io[0] = be32_to_cpu(io_text[0]);
+- io[1] = be32_to_cpu(io_text[1]);
+- io[2] = be32_to_cpu(io_text[2]);
+- io[3] = be32_to_cpu(io_text[3]);
+-
+- /* pre whitening but absorb kw2*/
+- io[0] ^= CAMELLIA_SUBKEY_L(32);
+- io[1] ^= CAMELLIA_SUBKEY_R(32);
++ /* pre whitening but absorb kw2 */
++ io[0] ^= SUBKEY_L(i);
++ io[1] ^= SUBKEY_R(i);
+
+ /* main iteration */
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(31),CAMELLIA_SUBKEY_R(31),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(30),CAMELLIA_SUBKEY_R(30),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(29),CAMELLIA_SUBKEY_R(29),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(28),CAMELLIA_SUBKEY_R(28),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(27),CAMELLIA_SUBKEY_R(27),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(26),CAMELLIA_SUBKEY_R(26),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(25),CAMELLIA_SUBKEY_R(25),
+- CAMELLIA_SUBKEY_L(24),CAMELLIA_SUBKEY_R(24),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(23),CAMELLIA_SUBKEY_R(23),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(22),CAMELLIA_SUBKEY_R(22),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(21),CAMELLIA_SUBKEY_R(21),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(20),CAMELLIA_SUBKEY_R(20),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(19),CAMELLIA_SUBKEY_R(19),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(18),CAMELLIA_SUBKEY_R(18),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(17),CAMELLIA_SUBKEY_R(17),
+- CAMELLIA_SUBKEY_L(16),CAMELLIA_SUBKEY_R(16),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(15),CAMELLIA_SUBKEY_R(15),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(14),CAMELLIA_SUBKEY_R(14),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(13),CAMELLIA_SUBKEY_R(13),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(12),CAMELLIA_SUBKEY_R(12),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(11),CAMELLIA_SUBKEY_R(11),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(10),CAMELLIA_SUBKEY_R(10),
+- io[0],io[1],il,ir,t0,t1);
+-
+- CAMELLIA_FLS(io[0],io[1],io[2],io[3],
+- CAMELLIA_SUBKEY_L(9),CAMELLIA_SUBKEY_R(9),
+- CAMELLIA_SUBKEY_L(8),CAMELLIA_SUBKEY_R(8),
+- t0,t1,il,ir);
+-
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(7),CAMELLIA_SUBKEY_R(7),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(6),CAMELLIA_SUBKEY_R(6),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(5),CAMELLIA_SUBKEY_R(5),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(4),CAMELLIA_SUBKEY_R(4),
+- io[0],io[1],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[0],io[1],
+- CAMELLIA_SUBKEY_L(3),CAMELLIA_SUBKEY_R(3),
+- io[2],io[3],il,ir,t0,t1);
+- CAMELLIA_ROUNDSM(io[2],io[3],
+- CAMELLIA_SUBKEY_L(2),CAMELLIA_SUBKEY_R(2),
+- io[0],io[1],il,ir,t0,t1);
++#define ROUNDS(i) do { \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 7),SUBKEY_R(i + 7), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 6),SUBKEY_R(i + 6), \
++ io[0],io[1],il,ir); \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 5),SUBKEY_R(i + 5), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 4),SUBKEY_R(i + 4), \
++ io[0],io[1],il,ir); \
++ CAMELLIA_ROUNDSM(io[0],io[1], \
++ SUBKEY_L(i + 3),SUBKEY_R(i + 3), \
++ io[2],io[3],il,ir); \
++ CAMELLIA_ROUNDSM(io[2],io[3], \
++ SUBKEY_L(i + 2),SUBKEY_R(i + 2), \
++ io[0],io[1],il,ir); \
++} while (0)
++#define FLS(i) do { \
++ CAMELLIA_FLS(io[0],io[1],io[2],io[3], \
++ SUBKEY_L(i + 1),SUBKEY_R(i + 1), \
++ SUBKEY_L(i + 0),SUBKEY_R(i + 0), \
++ t0,t1,il,ir); \
++} while (0)
++
++ if (i == 32) {
++ ROUNDS(24);
++ FLS(24);
++ }
++ ROUNDS(16);
++ FLS(16);
++ ROUNDS(8);
++ FLS(8);
++ ROUNDS(0);
++
++#undef ROUNDS
++#undef FLS
+
+ /* post whitening but kw4 */
+- io[2] ^= CAMELLIA_SUBKEY_L(0);
+- io[3] ^= CAMELLIA_SUBKEY_R(0);
+-
+- t0 = io[0];
+- t1 = io[1];
+- io[0] = io[2];
+- io[1] = io[3];
+- io[2] = t0;
+- io[3] = t1;
+-
+- io_text[0] = cpu_to_be32(io[0]);
+- io_text[1] = cpu_to_be32(io[1]);
+- io_text[2] = cpu_to_be32(io[2]);
+- io_text[3] = cpu_to_be32(io[3]);
+-
+- return;
++ io[2] ^= SUBKEY_L(0);
++ io[3] ^= SUBKEY_R(0);
++ /* NB: 0,1 should be swapped with 2,3 by caller! */
+ }
+
+
++struct camellia_ctx {
++ int key_length;
++ u32 key_table[CAMELLIA_TABLE_BYTE_LEN / sizeof(u32)];
++};
++
+ static int
+ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+ unsigned int key_len)
+@@ -1688,7 +1027,7 @@ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+
+ cctx->key_length = key_len;
+
+- switch(key_len) {
++ switch (key_len) {
+ case 16:
+ camellia_setup128(key, cctx->key_table);
+ break;
+@@ -1698,68 +1037,59 @@ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+ case 32:
+ camellia_setup256(key, cctx->key_table);
+ break;
+- default:
+- break;
+ }
+
+ return 0;
+ }
+
+-
+ static void camellia_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+ const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
+ const __be32 *src = (const __be32 *)in;
+ __be32 *dst = (__be32 *)out;
+
+- __be32 tmp[4];
++ u32 tmp[4];
+
+- memcpy(tmp, src, CAMELLIA_BLOCK_SIZE);
++ tmp[0] = be32_to_cpu(src[0]);
++ tmp[1] = be32_to_cpu(src[1]);
++ tmp[2] = be32_to_cpu(src[2]);
++ tmp[3] = be32_to_cpu(src[3]);
+
+- switch (cctx->key_length) {
+- case 16:
+- camellia_encrypt128(cctx->key_table, tmp);
+- break;
+- case 24:
+- /* fall through */
+- case 32:
+- camellia_encrypt256(cctx->key_table, tmp);
+- break;
+- default:
+- break;
+- }
++ camellia_do_encrypt(cctx->key_table, tmp,
++ cctx->key_length == 16 ? 24 : 32 /* for key lengths of 24 and 32 */
++ );
+
+- memcpy(dst, tmp, CAMELLIA_BLOCK_SIZE);
++ /* do_encrypt returns 0,1 swapped with 2,3 */
++ dst[0] = cpu_to_be32(tmp[2]);
++ dst[1] = cpu_to_be32(tmp[3]);
++ dst[2] = cpu_to_be32(tmp[0]);
++ dst[3] = cpu_to_be32(tmp[1]);
+ }
+
+-
+ static void camellia_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+ const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
+ const __be32 *src = (const __be32 *)in;
+ __be32 *dst = (__be32 *)out;
+
+- __be32 tmp[4];
++ u32 tmp[4];
+
+- memcpy(tmp, src, CAMELLIA_BLOCK_SIZE);
++ tmp[0] = be32_to_cpu(src[0]);
++ tmp[1] = be32_to_cpu(src[1]);
++ tmp[2] = be32_to_cpu(src[2]);
++ tmp[3] = be32_to_cpu(src[3]);
+
+- switch (cctx->key_length) {
+- case 16:
+- camellia_decrypt128(cctx->key_table, tmp);
+- break;
+- case 24:
+- /* fall through */
+- case 32:
+- camellia_decrypt256(cctx->key_table, tmp);
+- break;
+- default:
+- break;
+- }
++ camellia_do_decrypt(cctx->key_table, tmp,
++ cctx->key_length == 16 ? 24 : 32 /* for key lengths of 24 and 32 */
++ );
+
+- memcpy(dst, tmp, CAMELLIA_BLOCK_SIZE);
++ /* do_decrypt returns 0,1 swapped with 2,3 */
++ dst[0] = cpu_to_be32(tmp[2]);
++ dst[1] = cpu_to_be32(tmp[3]);
++ dst[2] = cpu_to_be32(tmp[0]);
++ dst[3] = cpu_to_be32(tmp[1]);
+ }
+
+-
+ static struct crypto_alg camellia_alg = {
+ .cra_name = "camellia",
+ .cra_driver_name = "camellia-generic",
+@@ -1786,16 +1116,13 @@ static int __init camellia_init(void)
+ return crypto_register_alg(&camellia_alg);
+ }
+
+-
+ static void __exit camellia_fini(void)
+ {
+ crypto_unregister_alg(&camellia_alg);
+ }
+
+-
+ module_init(camellia_init);
+ module_exit(camellia_fini);
+
+-
+ MODULE_DESCRIPTION("Camellia Cipher Algorithm");
+ MODULE_LICENSE("GPL");
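The camellia rework above folds the four unrolled encrypt/decrypt routines into camellia_do_encrypt()/camellia_do_decrypt(), which now operate on host-order u32 words and leave the final half-block swap to the caller (note the "do_encrypt returns 0,1 swapped with 2,3" comments). Not part of the patch, but the caller-side convention can be sketched in user space with a do-nothing core standing in for the real rounds; toy_do_encrypt(), get_be32() and put_be32() are hypothetical stand-ins:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's be32_to_cpu()/cpu_to_be32(). */
static uint32_t get_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/* Toy core: stands in for camellia_do_encrypt(); the real rounds are
 * omitted, only the "leave halves unswapped" contract is modeled. */
static void toy_do_encrypt(uint32_t *io)
{
	(void)io; /* identity core, for illustration only */
}

/* Wrapper mirroring the patched camellia_encrypt(): convert from big
 * endian, run the core, then store with io[0],io[1] swapped against
 * io[2],io[3], exactly as the new comments require of the caller. */
static void toy_encrypt(uint8_t *out, const uint8_t *in)
{
	uint32_t tmp[4];

	tmp[0] = get_be32(in);
	tmp[1] = get_be32(in + 4);
	tmp[2] = get_be32(in + 8);
	tmp[3] = get_be32(in + 12);

	toy_do_encrypt(tmp);

	/* do_encrypt returns 0,1 swapped with 2,3 */
	put_be32(out,      tmp[2]);
	put_be32(out + 4,  tmp[3]);
	put_be32(out + 8,  tmp[0]);
	put_be32(out + 12, tmp[1]);
}
```

With an identity core the wrapper's only visible effect is the half-block swap, which is why the old per-key-size routines could delete their trailing swap-and-store code.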
+diff --git a/crypto/cast6.c b/crypto/cast6.c
+index 136ab6d..5fd9420 100644
+--- a/crypto/cast6.c
++++ b/crypto/cast6.c
+@@ -369,7 +369,7 @@ static const u8 Tr[4][8] = {
+ };
+
+ /* forward octave */
+-static inline void W(u32 *key, unsigned int i) {
++static void W(u32 *key, unsigned int i) {
+ u32 I;
+ key[6] ^= F1(key[7], Tr[i % 4][0], Tm[i][0]);
+ key[5] ^= F2(key[6], Tr[i % 4][1], Tm[i][1]);
+@@ -428,7 +428,7 @@ static int cast6_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+ }
+
+ /*forward quad round*/
+-static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
++static void Q (u32 * block, u8 * Kr, u32 * Km) {
+ u32 I;
+ block[2] ^= F1(block[3], Kr[0], Km[0]);
+ block[1] ^= F2(block[2], Kr[1], Km[1]);
+@@ -437,7 +437,7 @@ static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
+ }
+
+ /*reverse quad round*/
+-static inline void QBAR (u32 * block, u8 * Kr, u32 * Km) {
++static void QBAR (u32 * block, u8 * Kr, u32 * Km) {
+ u32 I;
+ block[3] ^= F1(block[0], Kr[3], Km[3]);
+ block[0] ^= F3(block[1], Kr[2], Km[2]);
+diff --git a/crypto/cbc.c b/crypto/cbc.c
+index 1f2649e..6affff8 100644
+--- a/crypto/cbc.c
++++ b/crypto/cbc.c
+@@ -14,13 +14,13 @@
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
++#include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/scatterlist.h>
+ #include <linux/slab.h>
+
+ struct crypto_cbc_ctx {
+ struct crypto_cipher *child;
+- void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
+ };
+
+ static int crypto_cbc_setkey(struct crypto_tfm *parent, const u8 *key,
+@@ -41,9 +41,7 @@ static int crypto_cbc_setkey(struct crypto_tfm *parent, const u8 *key,
+
+ static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+@@ -54,7 +52,7 @@ static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
+ u8 *iv = walk->iv;
+
+ do {
+- xor(iv, src, bsize);
++ crypto_xor(iv, src, bsize);
+ fn(crypto_cipher_tfm(tfm), dst, iv);
+ memcpy(iv, dst, bsize);
+
+@@ -67,9 +65,7 @@ static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
+
+ static int crypto_cbc_encrypt_inplace(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+@@ -79,7 +75,7 @@ static int crypto_cbc_encrypt_inplace(struct blkcipher_desc *desc,
+ u8 *iv = walk->iv;
+
+ do {
+- xor(src, iv, bsize);
++ crypto_xor(src, iv, bsize);
+ fn(crypto_cipher_tfm(tfm), src, src);
+ iv = src;
+
+@@ -99,7 +95,6 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+@@ -107,11 +102,9 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
+
+ while ((nbytes = walk.nbytes)) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+- nbytes = crypto_cbc_encrypt_inplace(desc, &walk, child,
+- xor);
++ nbytes = crypto_cbc_encrypt_inplace(desc, &walk, child);
+ else
+- nbytes = crypto_cbc_encrypt_segment(desc, &walk, child,
+- xor);
++ nbytes = crypto_cbc_encrypt_segment(desc, &walk, child);
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+@@ -120,9 +113,7 @@ static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
+
+ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_decrypt;
+@@ -134,7 +125,7 @@ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
+
+ do {
+ fn(crypto_cipher_tfm(tfm), dst, src);
+- xor(dst, iv, bsize);
++ crypto_xor(dst, iv, bsize);
+ iv = src;
+
+ src += bsize;
+@@ -148,34 +139,29 @@ static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
+
+ static int crypto_cbc_decrypt_inplace(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_decrypt;
+ int bsize = crypto_cipher_blocksize(tfm);
+- unsigned long alignmask = crypto_cipher_alignmask(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+- u8 stack[bsize + alignmask];
+- u8 *first_iv = (u8 *)ALIGN((unsigned long)stack, alignmask + 1);
+-
+- memcpy(first_iv, walk->iv, bsize);
++ u8 last_iv[bsize];
+
+ /* Start of the last block. */
+- src += nbytes - nbytes % bsize - bsize;
+- memcpy(walk->iv, src, bsize);
++ src += nbytes - (nbytes & (bsize - 1)) - bsize;
++ memcpy(last_iv, src, bsize);
+
+ for (;;) {
+ fn(crypto_cipher_tfm(tfm), src, src);
+ if ((nbytes -= bsize) < bsize)
+ break;
+- xor(src, src - bsize, bsize);
++ crypto_xor(src, src - bsize, bsize);
+ src -= bsize;
+ }
+
+- xor(src, first_iv, bsize);
++ crypto_xor(src, walk->iv, bsize);
++ memcpy(walk->iv, last_iv, bsize);
+
+ return nbytes;
+ }
+@@ -188,7 +174,6 @@ static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+@@ -196,48 +181,15 @@ static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
+
+ while ((nbytes = walk.nbytes)) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+- nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child,
+- xor);
++ nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child);
+ else
+- nbytes = crypto_cbc_decrypt_segment(desc, &walk, child,
+- xor);
++ nbytes = crypto_cbc_decrypt_segment(desc, &walk, child);
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+ return err;
+ }
+
+-static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
+-{
+- do {
+- *a++ ^= *b++;
+- } while (--bs);
+-}
+-
+-static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
+-{
+- u32 *a = (u32 *)dst;
+- u32 *b = (u32 *)src;
+-
+- do {
+- *a++ ^= *b++;
+- } while ((bs -= 4));
+-}
+-
+-static void xor_64(u8 *a, const u8 *b, unsigned int bs)
+-{
+- ((u32 *)a)[0] ^= ((u32 *)b)[0];
+- ((u32 *)a)[1] ^= ((u32 *)b)[1];
+-}
+-
+-static void xor_128(u8 *a, const u8 *b, unsigned int bs)
+-{
+- ((u32 *)a)[0] ^= ((u32 *)b)[0];
+- ((u32 *)a)[1] ^= ((u32 *)b)[1];
+- ((u32 *)a)[2] ^= ((u32 *)b)[2];
+- ((u32 *)a)[3] ^= ((u32 *)b)[3];
+-}
+-
+ static int crypto_cbc_init_tfm(struct crypto_tfm *tfm)
+ {
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+@@ -245,22 +197,6 @@ static int crypto_cbc_init_tfm(struct crypto_tfm *tfm)
+ struct crypto_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+
+- switch (crypto_tfm_alg_blocksize(tfm)) {
+- case 8:
+- ctx->xor = xor_64;
+- break;
+-
+- case 16:
+- ctx->xor = xor_128;
+- break;
+-
+- default:
+- if (crypto_tfm_alg_blocksize(tfm) % 4)
+- ctx->xor = xor_byte;
+- else
+- ctx->xor = xor_quad;
+- }
+-
+ cipher = crypto_spawn_cipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+@@ -290,6 +226,10 @@ static struct crypto_instance *crypto_cbc_alloc(struct rtattr **tb)
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
++ inst = ERR_PTR(-EINVAL);
++ if (!is_power_of_2(alg->cra_blocksize))
++ goto out_put_alg;
++
+ inst = crypto_alloc_instance("cbc", alg);
+ if (IS_ERR(inst))
+ goto out_put_alg;
+@@ -300,8 +240,9 @@ static struct crypto_instance *crypto_cbc_alloc(struct rtattr **tb)
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+- if (!(alg->cra_blocksize % 4))
+- inst->alg.cra_alignmask |= 3;
++ /* We access the data as u32s when xoring. */
++ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
++
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
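The cbc.c hunks drop the per-blocksize xor callbacks in favour of the shared crypto_xor() helper and rework the in-place decrypt path: instead of copying the old IV into an aligned stack buffer up front, the new code saves the *last* ciphertext block, walks backwards xoring each decrypted block with the preceding ciphertext block, applies walk->iv only to the first block, and updates walk->iv once at the end. A stand-alone sketch of that loop (toy 4-byte block size, identity block "cipher", hypothetical helper names, assuming nbytes is a whole number of blocks):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BSIZE 4 /* toy block size for illustration */

/* Identity "cipher": stands in for cia_decrypt on one block. */
static void toy_block_decrypt(uint8_t *blk) { (void)blk; }

/* Byte-wise stand-in for crypto_xor() over one block. */
static void xor_block(uint8_t *a, const uint8_t *b)
{
	for (int i = 0; i < BSIZE; i++)
		a[i] ^= b[i];
}

/* In-place CBC decryption following the patched loop structure. */
static void cbc_decrypt_inplace(uint8_t *buf, size_t nbytes, uint8_t *iv)
{
	uint8_t last_iv[BSIZE];
	uint8_t *src = buf + nbytes - BSIZE; /* start of the last block */

	memcpy(last_iv, src, BSIZE);

	for (;;) {
		toy_block_decrypt(src);
		if ((nbytes -= BSIZE) < BSIZE)
			break;
		xor_block(src, src - BSIZE); /* chain to previous block */
		src -= BSIZE;
	}

	xor_block(src, iv);         /* first block uses the chaining IV */
	memcpy(iv, last_iv, BSIZE); /* new IV = last ciphertext block   */
}
```

Deferring the walk->iv update to the end is what lets the patch delete the aligned first_iv stack buffer; the new __alignof__(u32) alignmask and the is_power_of_2() blocksize check are what make the word-wise crypto_xor() safe.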
+diff --git a/crypto/ccm.c b/crypto/ccm.c
+new file mode 100644
+index 0000000..7cf7e5a
+--- /dev/null
++++ b/crypto/ccm.c
+@@ -0,0 +1,889 @@
++/*
++ * CCM: Counter with CBC-MAC
++ *
++ * (C) Copyright IBM Corp. 2007 - Joy Latten <latten at us.ibm.com>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <crypto/internal/aead.h>
++#include <crypto/internal/skcipher.h>
++#include <crypto/scatterwalk.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++
++#include "internal.h"
++
++struct ccm_instance_ctx {
++ struct crypto_skcipher_spawn ctr;
++ struct crypto_spawn cipher;
++};
++
++struct crypto_ccm_ctx {
++ struct crypto_cipher *cipher;
++ struct crypto_ablkcipher *ctr;
++};
++
++struct crypto_rfc4309_ctx {
++ struct crypto_aead *child;
++ u8 nonce[3];
++};
++
++struct crypto_ccm_req_priv_ctx {
++ u8 odata[16];
++ u8 idata[16];
++ u8 auth_tag[16];
++ u32 ilen;
++ u32 flags;
++ struct scatterlist src[2];
++ struct scatterlist dst[2];
++ struct ablkcipher_request abreq;
++};
++
++static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
++ struct aead_request *req)
++{
++ unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req));
++
++ return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
++}
++
++static int set_msg_len(u8 *block, unsigned int msglen, int csize)
++{
++ __be32 data;
++
++ memset(block, 0, csize);
++ block += csize;
++
++ if (csize >= 4)
++ csize = 4;
++ else if (msglen > (1 << (8 * csize)))
++ return -EOVERFLOW;
++
++ data = cpu_to_be32(msglen);
++ memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
++
++ return 0;
++}
++
++static int crypto_ccm_setkey(struct crypto_aead *aead, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_ablkcipher *ctr = ctx->ctr;
++ struct crypto_cipher *tfm = ctx->cipher;
++ int err = 0;
++
++ crypto_ablkcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK);
++ crypto_ablkcipher_set_flags(ctr, crypto_aead_get_flags(aead) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_ablkcipher_setkey(ctr, key, keylen);
++ crypto_aead_set_flags(aead, crypto_ablkcipher_get_flags(ctr) &
++ CRYPTO_TFM_RES_MASK);
++ if (err)
++ goto out;
++
++ crypto_cipher_clear_flags(tfm, CRYPTO_TFM_REQ_MASK);
++ crypto_cipher_set_flags(tfm, crypto_aead_get_flags(aead) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_cipher_setkey(tfm, key, keylen);
++ crypto_aead_set_flags(aead, crypto_cipher_get_flags(tfm) &
++ CRYPTO_TFM_RES_MASK);
++
++out:
++ return err;
++}
++
++static int crypto_ccm_setauthsize(struct crypto_aead *tfm,
++ unsigned int authsize)
++{
++ switch (authsize) {
++ case 4:
++ case 6:
++ case 8:
++ case 10:
++ case 12:
++ case 14:
++ case 16:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static int format_input(u8 *info, struct aead_request *req,
++ unsigned int cryptlen)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ unsigned int lp = req->iv[0];
++ unsigned int l = lp + 1;
++ unsigned int m;
++
++ m = crypto_aead_authsize(aead);
++
++ memcpy(info, req->iv, 16);
++
++ /* format control info per RFC 3610 and
++ * NIST Special Publication 800-38C
++ */
++ *info |= (8 * ((m - 2) / 2));
++ if (req->assoclen)
++ *info |= 64;
++
++ return set_msg_len(info + 16 - l, cryptlen, l);
++}
++
++static int format_adata(u8 *adata, unsigned int a)
++{
++ int len = 0;
++
++ /* add control info for associated data
++ * RFC 3610 and NIST Special Publication 800-38C
++ */
++ if (a < 65280) {
++ *(__be16 *)adata = cpu_to_be16(a);
++ len = 2;
++ } else {
++ *(__be16 *)adata = cpu_to_be16(0xfffe);
++ *(__be32 *)&adata[2] = cpu_to_be32(a);
++ len = 6;
++ }
++
++ return len;
++}
++
++static void compute_mac(struct crypto_cipher *tfm, u8 *data, int n,
++ struct crypto_ccm_req_priv_ctx *pctx)
++{
++ unsigned int bs = 16;
++ u8 *odata = pctx->odata;
++ u8 *idata = pctx->idata;
++ int datalen, getlen;
++
++ datalen = n;
++
++ /* first time in here, block may be partially filled. */
++ getlen = bs - pctx->ilen;
++ if (datalen >= getlen) {
++ memcpy(idata + pctx->ilen, data, getlen);
++ crypto_xor(odata, idata, bs);
++ crypto_cipher_encrypt_one(tfm, odata, odata);
++ datalen -= getlen;
++ data += getlen;
++ pctx->ilen = 0;
++ }
++
++ /* now encrypt rest of data */
++ while (datalen >= bs) {
++ crypto_xor(odata, data, bs);
++ crypto_cipher_encrypt_one(tfm, odata, odata);
++
++ datalen -= bs;
++ data += bs;
++ }
++
++ /* check and see if there's leftover data that wasn't
++ * enough to fill a block.
++ */
++ if (datalen) {
++ memcpy(idata + pctx->ilen, data, datalen);
++ pctx->ilen += datalen;
++ }
++}
++
++static void get_data_to_compute(struct crypto_cipher *tfm,
++ struct crypto_ccm_req_priv_ctx *pctx,
++ struct scatterlist *sg, unsigned int len)
++{
++ struct scatter_walk walk;
++ u8 *data_src;
++ int n;
++
++ scatterwalk_start(&walk, sg);
++
++ while (len) {
++ n = scatterwalk_clamp(&walk, len);
++ if (!n) {
++ scatterwalk_start(&walk, sg_next(walk.sg));
++ n = scatterwalk_clamp(&walk, len);
++ }
++ data_src = scatterwalk_map(&walk, 0);
++
++ compute_mac(tfm, data_src, n, pctx);
++ len -= n;
++
++ scatterwalk_unmap(data_src, 0);
++ scatterwalk_advance(&walk, n);
++ scatterwalk_done(&walk, 0, len);
++ if (len)
++ crypto_yield(pctx->flags);
++ }
++
++	/* any leftover needs to be padded and then encrypted */
++ if (pctx->ilen) {
++ int padlen;
++ u8 *odata = pctx->odata;
++ u8 *idata = pctx->idata;
++
++ padlen = 16 - pctx->ilen;
++ memset(idata + pctx->ilen, 0, padlen);
++ crypto_xor(odata, idata, 16);
++ crypto_cipher_encrypt_one(tfm, odata, odata);
++ pctx->ilen = 0;
++ }
++}
++
++static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
++ unsigned int cryptlen)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
++ struct crypto_cipher *cipher = ctx->cipher;
++ unsigned int assoclen = req->assoclen;
++ u8 *odata = pctx->odata;
++ u8 *idata = pctx->idata;
++ int err;
++
++ /* format control data for input */
++ err = format_input(odata, req, cryptlen);
++ if (err)
++ goto out;
++
++ /* encrypt first block to use as start in computing mac */
++ crypto_cipher_encrypt_one(cipher, odata, odata);
++
++ /* format associated data and compute into mac */
++ if (assoclen) {
++ pctx->ilen = format_adata(idata, assoclen);
++ get_data_to_compute(cipher, pctx, req->assoc, req->assoclen);
++ }
++
++ /* compute plaintext into mac */
++ get_data_to_compute(cipher, pctx, plain, cryptlen);
++
++out:
++ return err;
++}
++
++static void crypto_ccm_encrypt_done(struct crypto_async_request *areq, int err)
++{
++ struct aead_request *req = areq->data;
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
++ u8 *odata = pctx->odata;
++
++ if (!err)
++ scatterwalk_map_and_copy(odata, req->dst, req->cryptlen,
++ crypto_aead_authsize(aead), 1);
++ aead_request_complete(req, err);
++}
++
++static inline int crypto_ccm_check_iv(const u8 *iv)
++{
++ /* 2 <= L <= 8, so 1 <= L' <= 7. */
++ if (1 > iv[0] || iv[0] > 7)
++ return -EINVAL;
++
++ return 0;
++}
++
++static int crypto_ccm_encrypt(struct aead_request *req)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
++ struct ablkcipher_request *abreq = &pctx->abreq;
++ struct scatterlist *dst;
++ unsigned int cryptlen = req->cryptlen;
++ u8 *odata = pctx->odata;
++ u8 *iv = req->iv;
++ int err;
++
++ err = crypto_ccm_check_iv(iv);
++ if (err)
++ return err;
++
++ pctx->flags = aead_request_flags(req);
++
++ err = crypto_ccm_auth(req, req->src, cryptlen);
++ if (err)
++ return err;
++
++ /* Note: rfc 3610 and NIST 800-38C require counter of
++ * zero to encrypt auth tag.
++ */
++ memset(iv + 15 - iv[0], 0, iv[0] + 1);
++
++ sg_init_table(pctx->src, 2);
++ sg_set_buf(pctx->src, odata, 16);
++ scatterwalk_sg_chain(pctx->src, 2, req->src);
++
++ dst = pctx->src;
++ if (req->src != req->dst) {
++ sg_init_table(pctx->dst, 2);
++ sg_set_buf(pctx->dst, odata, 16);
++ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
++ dst = pctx->dst;
++ }
++
++ ablkcipher_request_set_tfm(abreq, ctx->ctr);
++ ablkcipher_request_set_callback(abreq, pctx->flags,
++ crypto_ccm_encrypt_done, req);
++ ablkcipher_request_set_crypt(abreq, pctx->src, dst, cryptlen + 16, iv);
++ err = crypto_ablkcipher_encrypt(abreq);
++ if (err)
++ return err;
++
++ /* copy authtag to end of dst */
++ scatterwalk_map_and_copy(odata, req->dst, cryptlen,
++ crypto_aead_authsize(aead), 1);
++ return err;
++}
++
++static void crypto_ccm_decrypt_done(struct crypto_async_request *areq,
++ int err)
++{
++ struct aead_request *req = areq->data;
++ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ unsigned int authsize = crypto_aead_authsize(aead);
++ unsigned int cryptlen = req->cryptlen - authsize;
++
++ if (!err) {
++ err = crypto_ccm_auth(req, req->dst, cryptlen);
++ if (!err && memcmp(pctx->auth_tag, pctx->odata, authsize))
++ err = -EBADMSG;
++ }
++ aead_request_complete(req, err);
++}
++
++static int crypto_ccm_decrypt(struct aead_request *req)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
++ struct ablkcipher_request *abreq = &pctx->abreq;
++ struct scatterlist *dst;
++ unsigned int authsize = crypto_aead_authsize(aead);
++ unsigned int cryptlen = req->cryptlen;
++ u8 *authtag = pctx->auth_tag;
++ u8 *odata = pctx->odata;
++ u8 *iv = req->iv;
++ int err;
++
++ if (cryptlen < authsize)
++ return -EINVAL;
++ cryptlen -= authsize;
++
++ err = crypto_ccm_check_iv(iv);
++ if (err)
++ return err;
++
++ pctx->flags = aead_request_flags(req);
++
++ scatterwalk_map_and_copy(authtag, req->src, cryptlen, authsize, 0);
++
++ memset(iv + 15 - iv[0], 0, iv[0] + 1);
++
++ sg_init_table(pctx->src, 2);
++ sg_set_buf(pctx->src, authtag, 16);
++ scatterwalk_sg_chain(pctx->src, 2, req->src);
++
++ dst = pctx->src;
++ if (req->src != req->dst) {
++ sg_init_table(pctx->dst, 2);
++ sg_set_buf(pctx->dst, authtag, 16);
++ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
++ dst = pctx->dst;
++ }
++
++ ablkcipher_request_set_tfm(abreq, ctx->ctr);
++ ablkcipher_request_set_callback(abreq, pctx->flags,
++ crypto_ccm_decrypt_done, req);
++ ablkcipher_request_set_crypt(abreq, pctx->src, dst, cryptlen + 16, iv);
++ err = crypto_ablkcipher_decrypt(abreq);
++ if (err)
++ return err;
++
++ err = crypto_ccm_auth(req, req->dst, cryptlen);
++ if (err)
++ return err;
++
++ /* verify */
++ if (memcmp(authtag, odata, authsize))
++ return -EBADMSG;
++
++ return err;
++}
++
++static int crypto_ccm_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct ccm_instance_ctx *ictx = crypto_instance_ctx(inst);
++ struct crypto_ccm_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_cipher *cipher;
++ struct crypto_ablkcipher *ctr;
++ unsigned long align;
++ int err;
++
++ cipher = crypto_spawn_cipher(&ictx->cipher);
++ if (IS_ERR(cipher))
++ return PTR_ERR(cipher);
++
++ ctr = crypto_spawn_skcipher(&ictx->ctr);
++ err = PTR_ERR(ctr);
++ if (IS_ERR(ctr))
++ goto err_free_cipher;
++
++ ctx->cipher = cipher;
++ ctx->ctr = ctr;
++
++ align = crypto_tfm_alg_alignmask(tfm);
++ align &= ~(crypto_tfm_ctx_alignment() - 1);
++ tfm->crt_aead.reqsize = align +
++ sizeof(struct crypto_ccm_req_priv_ctx) +
++ crypto_ablkcipher_reqsize(ctr);
++
++ return 0;
++
++err_free_cipher:
++ crypto_free_cipher(cipher);
++ return err;
++}
++
++static void crypto_ccm_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_ccm_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_cipher(ctx->cipher);
++ crypto_free_ablkcipher(ctx->ctr);
++}
++
++static struct crypto_instance *crypto_ccm_alloc_common(struct rtattr **tb,
++ const char *full_name,
++ const char *ctr_name,
++ const char *cipher_name)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_alg *ctr;
++ struct crypto_alg *cipher;
++ struct ccm_instance_ctx *ictx;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ cipher = crypto_alg_mod_lookup(cipher_name, CRYPTO_ALG_TYPE_CIPHER,
++ CRYPTO_ALG_TYPE_MASK);
++ err = PTR_ERR(cipher);
++ if (IS_ERR(cipher))
++ return ERR_PTR(err);
++
++ err = -EINVAL;
++ if (cipher->cra_blocksize != 16)
++ goto out_put_cipher;
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
++ err = -ENOMEM;
++ if (!inst)
++ goto out_put_cipher;
++
++ ictx = crypto_instance_ctx(inst);
++
++ err = crypto_init_spawn(&ictx->cipher, cipher, inst,
++ CRYPTO_ALG_TYPE_MASK);
++ if (err)
++ goto err_free_inst;
++
++ crypto_set_skcipher_spawn(&ictx->ctr, inst);
++ err = crypto_grab_skcipher(&ictx->ctr, ctr_name, 0,
++ crypto_requires_sync(algt->type,
++ algt->mask));
++ if (err)
++ goto err_drop_cipher;
++
++ ctr = crypto_skcipher_spawn_alg(&ictx->ctr);
++
++ /* Not a stream cipher? */
++ err = -EINVAL;
++ if (ctr->cra_blocksize != 1)
++ goto err_drop_ctr;
++
++ /* We want the real thing! */
++ if (ctr->cra_ablkcipher.ivsize != 16)
++ goto err_drop_ctr;
++
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "ccm_base(%s,%s)", ctr->cra_driver_name,
++ cipher->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
++ goto err_drop_ctr;
++
++ memcpy(inst->alg.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
++ inst->alg.cra_flags |= ctr->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = cipher->cra_priority + ctr->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = cipher->cra_alignmask | ctr->cra_alignmask |
++ (__alignof__(u32) - 1);
++ inst->alg.cra_type = &crypto_aead_type;
++ inst->alg.cra_aead.ivsize = 16;
++ inst->alg.cra_aead.maxauthsize = 16;
++ inst->alg.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
++ inst->alg.cra_init = crypto_ccm_init_tfm;
++ inst->alg.cra_exit = crypto_ccm_exit_tfm;
++ inst->alg.cra_aead.setkey = crypto_ccm_setkey;
++ inst->alg.cra_aead.setauthsize = crypto_ccm_setauthsize;
++ inst->alg.cra_aead.encrypt = crypto_ccm_encrypt;
++ inst->alg.cra_aead.decrypt = crypto_ccm_decrypt;
++
++out:
++ crypto_mod_put(cipher);
++ return inst;
++
++err_drop_ctr:
++ crypto_drop_skcipher(&ictx->ctr);
++err_drop_cipher:
++ crypto_drop_spawn(&ictx->cipher);
++err_free_inst:
++ kfree(inst);
++out_put_cipher:
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static struct crypto_instance *crypto_ccm_alloc(struct rtattr **tb)
++{
++ int err;
++ const char *cipher_name;
++ char ctr_name[CRYPTO_MAX_ALG_NAME];
++ char full_name[CRYPTO_MAX_ALG_NAME];
++
++ cipher_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(cipher_name);
++ if (IS_ERR(cipher_name))
++ return ERR_PTR(err);
++
++ if (snprintf(ctr_name, CRYPTO_MAX_ALG_NAME, "ctr(%s)",
++ cipher_name) >= CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ return crypto_ccm_alloc_common(tb, full_name, ctr_name, cipher_name);
++}
++
++static void crypto_ccm_free(struct crypto_instance *inst)
++{
++ struct ccm_instance_ctx *ctx = crypto_instance_ctx(inst);
++
++ crypto_drop_spawn(&ctx->cipher);
++ crypto_drop_skcipher(&ctx->ctr);
++ kfree(inst);
++}
++
++static struct crypto_template crypto_ccm_tmpl = {
++ .name = "ccm",
++ .alloc = crypto_ccm_alloc,
++ .free = crypto_ccm_free,
++ .module = THIS_MODULE,
++};
++
++static struct crypto_instance *crypto_ccm_base_alloc(struct rtattr **tb)
++{
++ int err;
++ const char *ctr_name;
++ const char *cipher_name;
++ char full_name[CRYPTO_MAX_ALG_NAME];
++
++ ctr_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(ctr_name);
++ if (IS_ERR(ctr_name))
++ return ERR_PTR(err);
++
++ cipher_name = crypto_attr_alg_name(tb[2]);
++ err = PTR_ERR(cipher_name);
++ if (IS_ERR(cipher_name))
++ return ERR_PTR(err);
++
++ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
++ ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ return crypto_ccm_alloc_common(tb, full_name, ctr_name, cipher_name);
++}
++
++static struct crypto_template crypto_ccm_base_tmpl = {
++ .name = "ccm_base",
++ .alloc = crypto_ccm_base_alloc,
++ .free = crypto_ccm_free,
++ .module = THIS_MODULE,
++};
++
++static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(parent);
++ struct crypto_aead *child = ctx->child;
++ int err;
++
++ if (keylen < 3)
++ return -EINVAL;
++
++ keylen -= 3;
++ memcpy(ctx->nonce, key + keylen, 3);
++
++ crypto_aead_clear_flags(child, CRYPTO_TFM_REQ_MASK);
++ crypto_aead_set_flags(child, crypto_aead_get_flags(parent) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_aead_setkey(child, key, keylen);
++ crypto_aead_set_flags(parent, crypto_aead_get_flags(child) &
++ CRYPTO_TFM_RES_MASK);
++
++ return err;
++}
++
++static int crypto_rfc4309_setauthsize(struct crypto_aead *parent,
++ unsigned int authsize)
++{
++ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(parent);
++
++ switch (authsize) {
++ case 8:
++ case 12:
++ case 16:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ return crypto_aead_setauthsize(ctx->child, authsize);
++}
++
++static struct aead_request *crypto_rfc4309_crypt(struct aead_request *req)
++{
++ struct aead_request *subreq = aead_request_ctx(req);
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_aead *child = ctx->child;
++ u8 *iv = PTR_ALIGN((u8 *)(subreq + 1) + crypto_aead_reqsize(child),
++ crypto_aead_alignmask(child) + 1);
++
++ /* L' */
++ iv[0] = 3;
++
++ memcpy(iv + 1, ctx->nonce, 3);
++ memcpy(iv + 4, req->iv, 8);
++
++ aead_request_set_tfm(subreq, child);
++ aead_request_set_callback(subreq, req->base.flags, req->base.complete,
++ req->base.data);
++ aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, iv);
++ aead_request_set_assoc(subreq, req->assoc, req->assoclen);
++
++ return subreq;
++}
++
++static int crypto_rfc4309_encrypt(struct aead_request *req)
++{
++ req = crypto_rfc4309_crypt(req);
++
++ return crypto_aead_encrypt(req);
++}
++
++static int crypto_rfc4309_decrypt(struct aead_request *req)
++{
++ req = crypto_rfc4309_crypt(req);
++
++ return crypto_aead_decrypt(req);
++}
++
++static int crypto_rfc4309_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_aead_spawn *spawn = crypto_instance_ctx(inst);
++ struct crypto_rfc4309_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_aead *aead;
++ unsigned long align;
++
++ aead = crypto_spawn_aead(spawn);
++ if (IS_ERR(aead))
++ return PTR_ERR(aead);
++
++ ctx->child = aead;
++
++ align = crypto_aead_alignmask(aead);
++ align &= ~(crypto_tfm_ctx_alignment() - 1);
++ tfm->crt_aead.reqsize = sizeof(struct aead_request) +
++ ALIGN(crypto_aead_reqsize(aead),
++ crypto_tfm_ctx_alignment()) +
++ align + 16;
++
++ return 0;
++}
++
++static void crypto_rfc4309_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_rfc4309_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_aead(ctx->child);
++}
++
++static struct crypto_instance *crypto_rfc4309_alloc(struct rtattr **tb)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_aead_spawn *spawn;
++ struct crypto_alg *alg;
++ const char *ccm_name;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ ccm_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(ccm_name);
++ if (IS_ERR(ccm_name))
++ return ERR_PTR(err);
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
++ if (!inst)
++ return ERR_PTR(-ENOMEM);
++
++ spawn = crypto_instance_ctx(inst);
++ crypto_set_aead_spawn(spawn, inst);
++ err = crypto_grab_aead(spawn, ccm_name, 0,
++ crypto_requires_sync(algt->type, algt->mask));
++ if (err)
++ goto out_free_inst;
++
++ alg = crypto_aead_spawn_alg(spawn);
++
++ err = -EINVAL;
++
++ /* We only support 16-byte blocks. */
++ if (alg->cra_aead.ivsize != 16)
++ goto out_drop_alg;
++
++ /* Not a stream cipher? */
++ if (alg->cra_blocksize != 1)
++ goto out_drop_alg;
++
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
++ "rfc4309(%s)", alg->cra_name) >= CRYPTO_MAX_ALG_NAME ||
++ snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "rfc4309(%s)", alg->cra_driver_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto out_drop_alg;
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
++ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = alg->cra_alignmask;
++ inst->alg.cra_type = &crypto_nivaead_type;
++
++ inst->alg.cra_aead.ivsize = 8;
++ inst->alg.cra_aead.maxauthsize = 16;
++
++ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc4309_ctx);
++
++ inst->alg.cra_init = crypto_rfc4309_init_tfm;
++ inst->alg.cra_exit = crypto_rfc4309_exit_tfm;
++
++ inst->alg.cra_aead.setkey = crypto_rfc4309_setkey;
++ inst->alg.cra_aead.setauthsize = crypto_rfc4309_setauthsize;
++ inst->alg.cra_aead.encrypt = crypto_rfc4309_encrypt;
++ inst->alg.cra_aead.decrypt = crypto_rfc4309_decrypt;
++
++ inst->alg.cra_aead.geniv = "seqiv";
++
++out:
++ return inst;
++
++out_drop_alg:
++ crypto_drop_aead(spawn);
++out_free_inst:
++ kfree(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static void crypto_rfc4309_free(struct crypto_instance *inst)
++{
++ crypto_drop_spawn(crypto_instance_ctx(inst));
++ kfree(inst);
++}
++
++static struct crypto_template crypto_rfc4309_tmpl = {
++ .name = "rfc4309",
++ .alloc = crypto_rfc4309_alloc,
++ .free = crypto_rfc4309_free,
++ .module = THIS_MODULE,
++};
++
++static int __init crypto_ccm_module_init(void)
++{
++ int err;
++
++ err = crypto_register_template(&crypto_ccm_base_tmpl);
++ if (err)
++ goto out;
++
++ err = crypto_register_template(&crypto_ccm_tmpl);
++ if (err)
++ goto out_undo_base;
++
++ err = crypto_register_template(&crypto_rfc4309_tmpl);
++ if (err)
++ goto out_undo_ccm;
++
++out:
++ return err;
++
++out_undo_ccm:
++ crypto_unregister_template(&crypto_ccm_tmpl);
++out_undo_base:
++ crypto_unregister_template(&crypto_ccm_base_tmpl);
++ goto out;
++}
++
++static void __exit crypto_ccm_module_exit(void)
++{
++ crypto_unregister_template(&crypto_rfc4309_tmpl);
++ crypto_unregister_template(&crypto_ccm_tmpl);
++ crypto_unregister_template(&crypto_ccm_base_tmpl);
++}
++
++module_init(crypto_ccm_module_init);
++module_exit(crypto_ccm_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Counter with CBC MAC");
++MODULE_ALIAS("ccm_base");
++MODULE_ALIAS("rfc4309");
+diff --git a/crypto/chainiv.c b/crypto/chainiv.c
+new file mode 100644
+index 0000000..d17fa04
+--- /dev/null
++++ b/crypto/chainiv.c
+@@ -0,0 +1,331 @@
++/*
++ * chainiv: Chain IV Generator
++ *
++ * Generate IVs simply by using the last block of the previous encryption.
++ * This is mainly useful for CBC with a synchronous algorithm.
++ *
++ * Copyright (c) 2007 Herbert Xu <herbert at gondor.apana.org.au>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <crypto/internal/skcipher.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/random.h>
++#include <linux/spinlock.h>
++#include <linux/string.h>
++#include <linux/workqueue.h>
++
++enum {
++ CHAINIV_STATE_INUSE = 0,
++};
++
++struct chainiv_ctx {
++ spinlock_t lock;
++ char iv[];
++};
++
++struct async_chainiv_ctx {
++ unsigned long state;
++
++ spinlock_t lock;
++ int err;
++
++ struct crypto_queue queue;
++ struct work_struct postponed;
++
++ char iv[];
++};
++
++static int chainiv_givencrypt(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
++ unsigned int ivsize;
++ int err;
++
++ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++ ablkcipher_request_set_callback(subreq, req->creq.base.flags &
++ ~CRYPTO_TFM_REQ_MAY_SLEEP,
++ req->creq.base.complete,
++ req->creq.base.data);
++ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
++ req->creq.nbytes, req->creq.info);
++
++ spin_lock_bh(&ctx->lock);
++
++ ivsize = crypto_ablkcipher_ivsize(geniv);
++
++ memcpy(req->giv, ctx->iv, ivsize);
++ memcpy(subreq->info, ctx->iv, ivsize);
++
++ err = crypto_ablkcipher_encrypt(subreq);
++ if (err)
++ goto unlock;
++
++ memcpy(ctx->iv, subreq->info, ivsize);
++
++unlock:
++ spin_unlock_bh(&ctx->lock);
++
++ return err;
++}
++
++static int chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++
++ spin_lock_bh(&ctx->lock);
++ if (crypto_ablkcipher_crt(geniv)->givencrypt !=
++ chainiv_givencrypt_first)
++ goto unlock;
++
++ crypto_ablkcipher_crt(geniv)->givencrypt = chainiv_givencrypt;
++ get_random_bytes(ctx->iv, crypto_ablkcipher_ivsize(geniv));
++
++unlock:
++ spin_unlock_bh(&ctx->lock);
++
++ return chainiv_givencrypt(req);
++}
++
++static int chainiv_init_common(struct crypto_tfm *tfm)
++{
++ tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
++
++ return skcipher_geniv_init(tfm);
++}
++
++static int chainiv_init(struct crypto_tfm *tfm)
++{
++ struct chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ spin_lock_init(&ctx->lock);
++
++ return chainiv_init_common(tfm);
++}
++
++static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
++{
++ int queued;
++
++ if (!ctx->queue.qlen) {
++ smp_mb__before_clear_bit();
++ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
++
++ if (!ctx->queue.qlen ||
++ test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
++ goto out;
++ }
++
++ queued = schedule_work(&ctx->postponed);
++ BUG_ON(!queued);
++
++out:
++ return ctx->err;
++}
++
++static int async_chainiv_postpone_request(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ int err;
++
++ spin_lock_bh(&ctx->lock);
++ err = skcipher_enqueue_givcrypt(&ctx->queue, req);
++ spin_unlock_bh(&ctx->lock);
++
++ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
++ return err;
++
++ ctx->err = err;
++ return async_chainiv_schedule_work(ctx);
++}
++
++static int async_chainiv_givencrypt_tail(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
++ unsigned int ivsize = crypto_ablkcipher_ivsize(geniv);
++
++ memcpy(req->giv, ctx->iv, ivsize);
++ memcpy(subreq->info, ctx->iv, ivsize);
++
++ ctx->err = crypto_ablkcipher_encrypt(subreq);
++ if (ctx->err)
++ goto out;
++
++ memcpy(ctx->iv, subreq->info, ivsize);
++
++out:
++ return async_chainiv_schedule_work(ctx);
++}
++
++static int async_chainiv_givencrypt(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
++
++ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++ ablkcipher_request_set_callback(subreq, req->creq.base.flags,
++ req->creq.base.complete,
++ req->creq.base.data);
++ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
++ req->creq.nbytes, req->creq.info);
++
++ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
++ goto postpone;
++
++ if (ctx->queue.qlen) {
++ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
++ goto postpone;
++ }
++
++ return async_chainiv_givencrypt_tail(req);
++
++postpone:
++ return async_chainiv_postpone_request(req);
++}
++
++static int async_chainiv_givencrypt_first(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct async_chainiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++
++ if (test_and_set_bit(CHAINIV_STATE_INUSE, &ctx->state))
++ goto out;
++
++ if (crypto_ablkcipher_crt(geniv)->givencrypt !=
++ async_chainiv_givencrypt_first)
++ goto unlock;
++
++ crypto_ablkcipher_crt(geniv)->givencrypt = async_chainiv_givencrypt;
++ get_random_bytes(ctx->iv, crypto_ablkcipher_ivsize(geniv));
++
++unlock:
++ clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
++
++out:
++ return async_chainiv_givencrypt(req);
++}
++
++static void async_chainiv_do_postponed(struct work_struct *work)
++{
++ struct async_chainiv_ctx *ctx = container_of(work,
++ struct async_chainiv_ctx,
++ postponed);
++ struct skcipher_givcrypt_request *req;
++ struct ablkcipher_request *subreq;
++
++ /* Only handle one request at a time to avoid hogging keventd. */
++ spin_lock_bh(&ctx->lock);
++ req = skcipher_dequeue_givcrypt(&ctx->queue);
++ spin_unlock_bh(&ctx->lock);
++
++ if (!req) {
++ async_chainiv_schedule_work(ctx);
++ return;
++ }
++
++ subreq = skcipher_givcrypt_reqctx(req);
++ subreq->base.flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
++
++ async_chainiv_givencrypt_tail(req);
++}
++
++static int async_chainiv_init(struct crypto_tfm *tfm)
++{
++ struct async_chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ spin_lock_init(&ctx->lock);
++
++ crypto_init_queue(&ctx->queue, 100);
++ INIT_WORK(&ctx->postponed, async_chainiv_do_postponed);
++
++ return chainiv_init_common(tfm);
++}
++
++static void async_chainiv_exit(struct crypto_tfm *tfm)
++{
++ struct async_chainiv_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ BUG_ON(test_bit(CHAINIV_STATE_INUSE, &ctx->state) || ctx->queue.qlen);
++
++ skcipher_geniv_exit(tfm);
++}
++
++static struct crypto_template chainiv_tmpl;
++
++static struct crypto_instance *chainiv_alloc(struct rtattr **tb)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ inst = skcipher_geniv_alloc(&chainiv_tmpl, tb, 0, 0);
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_ablkcipher.givencrypt = chainiv_givencrypt_first;
++
++ inst->alg.cra_init = chainiv_init;
++ inst->alg.cra_exit = skcipher_geniv_exit;
++
++ inst->alg.cra_ctxsize = sizeof(struct chainiv_ctx);
++
++ if (!crypto_requires_sync(algt->type, algt->mask)) {
++ inst->alg.cra_flags |= CRYPTO_ALG_ASYNC;
++
++ inst->alg.cra_ablkcipher.givencrypt =
++ async_chainiv_givencrypt_first;
++
++ inst->alg.cra_init = async_chainiv_init;
++ inst->alg.cra_exit = async_chainiv_exit;
++
++ inst->alg.cra_ctxsize = sizeof(struct async_chainiv_ctx);
++ }
++
++ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++
++out:
++ return inst;
++}
++
++static struct crypto_template chainiv_tmpl = {
++ .name = "chainiv",
++ .alloc = chainiv_alloc,
++ .free = skcipher_geniv_free,
++ .module = THIS_MODULE,
++};
++
++static int __init chainiv_module_init(void)
++{
++ return crypto_register_template(&chainiv_tmpl);
++}
++
++static void __exit chainiv_module_exit(void)
++{
++ crypto_unregister_template(&chainiv_tmpl);
++}
++
++module_init(chainiv_module_init);
++module_exit(chainiv_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Chain IV Generator");
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index 8bf2da8..074298f 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -228,7 +228,7 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
+ struct crypto_alg *alg;
+
+ alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_BLKCIPHER,
+- CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
++ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+@@ -236,13 +236,15 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
+ if (IS_ERR(inst))
+ goto out_put_alg;
+
+- inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_ASYNC;
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
+ inst->alg.cra_type = &crypto_ablkcipher_type;
+
+ inst->alg.cra_ablkcipher.ivsize = alg->cra_blkcipher.ivsize;
+ inst->alg.cra_ablkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+ inst->alg.cra_ablkcipher.max_keysize = alg->cra_blkcipher.max_keysize;
+
++ inst->alg.cra_ablkcipher.geniv = alg->cra_blkcipher.geniv;
++
+ inst->alg.cra_ctxsize = sizeof(struct cryptd_blkcipher_ctx);
+
+ inst->alg.cra_init = cryptd_blkcipher_init_tfm;
+diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
+index 29f7747..ff7b3de 100644
+--- a/crypto/crypto_null.c
++++ b/crypto/crypto_null.c
+@@ -16,15 +16,17 @@
+ * (at your option) any later version.
+ *
+ */
++
++#include <crypto/internal/skcipher.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/mm.h>
+-#include <linux/crypto.h>
+ #include <linux/string.h>
+
+ #define NULL_KEY_SIZE 0
+ #define NULL_BLOCK_SIZE 1
+ #define NULL_DIGEST_SIZE 0
++#define NULL_IV_SIZE 0
+
+ static int null_compress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+@@ -55,6 +57,26 @@ static void null_crypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+ memcpy(dst, src, NULL_BLOCK_SIZE);
+ }
+
++static int skcipher_null_crypt(struct blkcipher_desc *desc,
++ struct scatterlist *dst,
++ struct scatterlist *src, unsigned int nbytes)
++{
++ struct blkcipher_walk walk;
++ int err;
++
++ blkcipher_walk_init(&walk, dst, src, nbytes);
++ err = blkcipher_walk_virt(desc, &walk);
++
++ while (walk.nbytes) {
++ if (walk.src.virt.addr != walk.dst.virt.addr)
++ memcpy(walk.dst.virt.addr, walk.src.virt.addr,
++ walk.nbytes);
++ err = blkcipher_walk_done(desc, &walk, 0);
++ }
++
++ return err;
++}
++
+ static struct crypto_alg compress_null = {
+ .cra_name = "compress_null",
+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
+@@ -76,6 +98,7 @@ static struct crypto_alg digest_null = {
+ .cra_list = LIST_HEAD_INIT(digest_null.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = NULL_DIGEST_SIZE,
++ .dia_setkey = null_setkey,
+ .dia_init = null_init,
+ .dia_update = null_update,
+ .dia_final = null_final } }
+@@ -96,6 +119,25 @@ static struct crypto_alg cipher_null = {
+ .cia_decrypt = null_crypt } }
+ };
+
++static struct crypto_alg skcipher_null = {
++ .cra_name = "ecb(cipher_null)",
++ .cra_driver_name = "ecb-cipher_null",
++ .cra_priority = 100,
++ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
++ .cra_blocksize = NULL_BLOCK_SIZE,
++ .cra_type = &crypto_blkcipher_type,
++ .cra_ctxsize = 0,
++ .cra_module = THIS_MODULE,
++ .cra_list = LIST_HEAD_INIT(skcipher_null.cra_list),
++ .cra_u = { .blkcipher = {
++ .min_keysize = NULL_KEY_SIZE,
++ .max_keysize = NULL_KEY_SIZE,
++ .ivsize = NULL_IV_SIZE,
++ .setkey = null_setkey,
++ .encrypt = skcipher_null_crypt,
++ .decrypt = skcipher_null_crypt } }
++};
++
+ MODULE_ALIAS("compress_null");
+ MODULE_ALIAS("digest_null");
+ MODULE_ALIAS("cipher_null");
+@@ -108,27 +150,35 @@ static int __init init(void)
+ if (ret < 0)
+ goto out;
+
++ ret = crypto_register_alg(&skcipher_null);
++ if (ret < 0)
++ goto out_unregister_cipher;
++
+ ret = crypto_register_alg(&digest_null);
+- if (ret < 0) {
+- crypto_unregister_alg(&cipher_null);
+- goto out;
+- }
++ if (ret < 0)
++ goto out_unregister_skcipher;
+
+ ret = crypto_register_alg(&compress_null);
+- if (ret < 0) {
+- crypto_unregister_alg(&digest_null);
+- crypto_unregister_alg(&cipher_null);
+- goto out;
+- }
++ if (ret < 0)
++ goto out_unregister_digest;
+
+ out:
+ return ret;
++
++out_unregister_digest:
++ crypto_unregister_alg(&digest_null);
++out_unregister_skcipher:
++ crypto_unregister_alg(&skcipher_null);
++out_unregister_cipher:
++ crypto_unregister_alg(&cipher_null);
++ goto out;
+ }
+
+ static void __exit fini(void)
+ {
+ crypto_unregister_alg(&compress_null);
+ crypto_unregister_alg(&digest_null);
++ crypto_unregister_alg(&skcipher_null);
+ crypto_unregister_alg(&cipher_null);
+ }
+
+diff --git a/crypto/ctr.c b/crypto/ctr.c
+new file mode 100644
+index 0000000..2d7425f
+--- /dev/null
++++ b/crypto/ctr.c
+@@ -0,0 +1,422 @@
++/*
++ * CTR: Counter mode
++ *
++ * (C) Copyright IBM Corp. 2007 - Joy Latten <latten at us.ibm.com>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <crypto/algapi.h>
++#include <crypto/ctr.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/random.h>
++#include <linux/scatterlist.h>
++#include <linux/slab.h>
++
++struct crypto_ctr_ctx {
++ struct crypto_cipher *child;
++};
++
++struct crypto_rfc3686_ctx {
++ struct crypto_blkcipher *child;
++ u8 nonce[CTR_RFC3686_NONCE_SIZE];
++};
++
++static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
++ struct crypto_cipher *child = ctx->child;
++ int err;
++
++ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
++ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_cipher_setkey(child, key, keylen);
++ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
++ CRYPTO_TFM_RES_MASK);
++
++ return err;
++}
++
++static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
++ struct crypto_cipher *tfm)
++{
++ unsigned int bsize = crypto_cipher_blocksize(tfm);
++ unsigned long alignmask = crypto_cipher_alignmask(tfm);
++ u8 *ctrblk = walk->iv;
++ u8 tmp[bsize + alignmask];
++ u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
++ u8 *src = walk->src.virt.addr;
++ u8 *dst = walk->dst.virt.addr;
++ unsigned int nbytes = walk->nbytes;
++
++ crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
++ crypto_xor(keystream, src, nbytes);
++ memcpy(dst, keystream, nbytes);
++
++ crypto_inc(ctrblk, bsize);
++}
++
++static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
++ struct crypto_cipher *tfm)
++{
++ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
++ crypto_cipher_alg(tfm)->cia_encrypt;
++ unsigned int bsize = crypto_cipher_blocksize(tfm);
++ u8 *ctrblk = walk->iv;
++ u8 *src = walk->src.virt.addr;
++ u8 *dst = walk->dst.virt.addr;
++ unsigned int nbytes = walk->nbytes;
++
++ do {
++ /* create keystream */
++ fn(crypto_cipher_tfm(tfm), dst, ctrblk);
++ crypto_xor(dst, src, bsize);
++
++ /* increment counter in counterblock */
++ crypto_inc(ctrblk, bsize);
++
++ src += bsize;
++ dst += bsize;
++ } while ((nbytes -= bsize) >= bsize);
++
++ return nbytes;
++}
++
++static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
++ struct crypto_cipher *tfm)
++{
++ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
++ crypto_cipher_alg(tfm)->cia_encrypt;
++ unsigned int bsize = crypto_cipher_blocksize(tfm);
++ unsigned long alignmask = crypto_cipher_alignmask(tfm);
++ unsigned int nbytes = walk->nbytes;
++ u8 *ctrblk = walk->iv;
++ u8 *src = walk->src.virt.addr;
++ u8 tmp[bsize + alignmask];
++ u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
++
++ do {
++ /* create keystream */
++ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
++ crypto_xor(src, keystream, bsize);
++
++ /* increment counter in counterblock */
++ crypto_inc(ctrblk, bsize);
++
++ src += bsize;
++ } while ((nbytes -= bsize) >= bsize);
++
++ return nbytes;
++}
++
++static int crypto_ctr_crypt(struct blkcipher_desc *desc,
++ struct scatterlist *dst, struct scatterlist *src,
++ unsigned int nbytes)
++{
++ struct blkcipher_walk walk;
++ struct crypto_blkcipher *tfm = desc->tfm;
++ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
++ struct crypto_cipher *child = ctx->child;
++ unsigned int bsize = crypto_cipher_blocksize(child);
++ int err;
++
++ blkcipher_walk_init(&walk, dst, src, nbytes);
++ err = blkcipher_walk_virt_block(desc, &walk, bsize);
++
++ while (walk.nbytes >= bsize) {
++ if (walk.src.virt.addr == walk.dst.virt.addr)
++ nbytes = crypto_ctr_crypt_inplace(&walk, child);
++ else
++ nbytes = crypto_ctr_crypt_segment(&walk, child);
++
++ err = blkcipher_walk_done(desc, &walk, nbytes);
++ }
++
++ if (walk.nbytes) {
++ crypto_ctr_crypt_final(&walk, child);
++ err = blkcipher_walk_done(desc, &walk, 0);
++ }
++
++ return err;
++}
++
++static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
++ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_cipher *cipher;
++
++ cipher = crypto_spawn_cipher(spawn);
++ if (IS_ERR(cipher))
++ return PTR_ERR(cipher);
++
++ ctx->child = cipher;
++
++ return 0;
++}
++
++static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_cipher(ctx->child);
++}
++
++static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
++{
++ struct crypto_instance *inst;
++ struct crypto_alg *alg;
++ int err;
++
++ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
++ if (err)
++ return ERR_PTR(err);
++
++ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
++ CRYPTO_ALG_TYPE_MASK);
++ if (IS_ERR(alg))
++ return ERR_PTR(PTR_ERR(alg));
++
++ /* Block size must be >= 4 bytes. */
++ err = -EINVAL;
++ if (alg->cra_blocksize < 4)
++ goto out_put_alg;
++
++ /* If this is false we'd fail the alignment of crypto_inc. */
++ if (alg->cra_blocksize % 4)
++ goto out_put_alg;
++
++ inst = crypto_alloc_instance("ctr", alg);
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = alg->cra_alignmask | (__alignof__(u32) - 1);
++ inst->alg.cra_type = &crypto_blkcipher_type;
++
++ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
++ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
++ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
++
++ inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
++
++ inst->alg.cra_init = crypto_ctr_init_tfm;
++ inst->alg.cra_exit = crypto_ctr_exit_tfm;
++
++ inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
++ inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt;
++ inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt;
++
++out:
++ crypto_mod_put(alg);
++ return inst;
++
++out_put_alg:
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static void crypto_ctr_free(struct crypto_instance *inst)
++{
++ crypto_drop_spawn(crypto_instance_ctx(inst));
++ kfree(inst);
++}
++
++static struct crypto_template crypto_ctr_tmpl = {
++ .name = "ctr",
++ .alloc = crypto_ctr_alloc,
++ .free = crypto_ctr_free,
++ .module = THIS_MODULE,
++};
++
++static int crypto_rfc3686_setkey(struct crypto_tfm *parent, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(parent);
++ struct crypto_blkcipher *child = ctx->child;
++ int err;
++
++ /* the nonce is stored in bytes at end of key */
++ if (keylen < CTR_RFC3686_NONCE_SIZE)
++ return -EINVAL;
++
++ memcpy(ctx->nonce, key + (keylen - CTR_RFC3686_NONCE_SIZE),
++ CTR_RFC3686_NONCE_SIZE);
++
++ keylen -= CTR_RFC3686_NONCE_SIZE;
++
++ crypto_blkcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
++ crypto_blkcipher_set_flags(child, crypto_tfm_get_flags(parent) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_blkcipher_setkey(child, key, keylen);
++ crypto_tfm_set_flags(parent, crypto_blkcipher_get_flags(child) &
++ CRYPTO_TFM_RES_MASK);
++
++ return err;
++}
++
++static int crypto_rfc3686_crypt(struct blkcipher_desc *desc,
++ struct scatterlist *dst,
++ struct scatterlist *src, unsigned int nbytes)
++{
++ struct crypto_blkcipher *tfm = desc->tfm;
++ struct crypto_rfc3686_ctx *ctx = crypto_blkcipher_ctx(tfm);
++ struct crypto_blkcipher *child = ctx->child;
++ unsigned long alignmask = crypto_blkcipher_alignmask(tfm);
++ u8 ivblk[CTR_RFC3686_BLOCK_SIZE + alignmask];
++ u8 *iv = PTR_ALIGN(ivblk + 0, alignmask + 1);
++ u8 *info = desc->info;
++ int err;
++
++ /* set up counter block */
++ memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE);
++ memcpy(iv + CTR_RFC3686_NONCE_SIZE, info, CTR_RFC3686_IV_SIZE);
++
++ /* initialize counter portion of counter block */
++ *(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) =
++ cpu_to_be32(1);
++
++ desc->tfm = child;
++ desc->info = iv;
++ err = crypto_blkcipher_encrypt_iv(desc, dst, src, nbytes);
++ desc->tfm = tfm;
++ desc->info = info;
++
++ return err;
++}
++
++static int crypto_rfc3686_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
++ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_blkcipher *cipher;
++
++ cipher = crypto_spawn_blkcipher(spawn);
++ if (IS_ERR(cipher))
++ return PTR_ERR(cipher);
++
++ ctx->child = cipher;
++
++ return 0;
++}
++
++static void crypto_rfc3686_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_rfc3686_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_blkcipher(ctx->child);
++}
++
++static struct crypto_instance *crypto_rfc3686_alloc(struct rtattr **tb)
++{
++ struct crypto_instance *inst;
++ struct crypto_alg *alg;
++ int err;
++
++ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
++ if (err)
++ return ERR_PTR(err);
++
++ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_BLKCIPHER,
++ CRYPTO_ALG_TYPE_MASK);
++ err = PTR_ERR(alg);
++ if (IS_ERR(alg))
++ return ERR_PTR(err);
++
++ /* We only support 16-byte blocks. */
++ err = -EINVAL;
++ if (alg->cra_blkcipher.ivsize != CTR_RFC3686_BLOCK_SIZE)
++ goto out_put_alg;
++
++ /* Not a stream cipher? */
++ if (alg->cra_blocksize != 1)
++ goto out_put_alg;
++
++ inst = crypto_alloc_instance("rfc3686", alg);
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = alg->cra_alignmask;
++ inst->alg.cra_type = &crypto_blkcipher_type;
++
++ inst->alg.cra_blkcipher.ivsize = CTR_RFC3686_IV_SIZE;
++ inst->alg.cra_blkcipher.min_keysize = alg->cra_blkcipher.min_keysize
++ + CTR_RFC3686_NONCE_SIZE;
++ inst->alg.cra_blkcipher.max_keysize = alg->cra_blkcipher.max_keysize
++ + CTR_RFC3686_NONCE_SIZE;
++
++ inst->alg.cra_blkcipher.geniv = "seqiv";
++
++ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc3686_ctx);
++
++ inst->alg.cra_init = crypto_rfc3686_init_tfm;
++ inst->alg.cra_exit = crypto_rfc3686_exit_tfm;
++
++ inst->alg.cra_blkcipher.setkey = crypto_rfc3686_setkey;
++ inst->alg.cra_blkcipher.encrypt = crypto_rfc3686_crypt;
++ inst->alg.cra_blkcipher.decrypt = crypto_rfc3686_crypt;
++
++out:
++ crypto_mod_put(alg);
++ return inst;
++
++out_put_alg:
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static struct crypto_template crypto_rfc3686_tmpl = {
++ .name = "rfc3686",
++ .alloc = crypto_rfc3686_alloc,
++ .free = crypto_ctr_free,
++ .module = THIS_MODULE,
++};
++
++static int __init crypto_ctr_module_init(void)
++{
++ int err;
++
++ err = crypto_register_template(&crypto_ctr_tmpl);
++ if (err)
++ goto out;
++
++ err = crypto_register_template(&crypto_rfc3686_tmpl);
++ if (err)
++ goto out_drop_ctr;
++
++out:
++ return err;
++
++out_drop_ctr:
++ crypto_unregister_template(&crypto_ctr_tmpl);
++ goto out;
++}
++
++static void __exit crypto_ctr_module_exit(void)
++{
++ crypto_unregister_template(&crypto_rfc3686_tmpl);
++ crypto_unregister_template(&crypto_ctr_tmpl);
++}
++
++module_init(crypto_ctr_module_init);
++module_exit(crypto_ctr_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("CTR Counter block mode");
++MODULE_ALIAS("rfc3686");
+diff --git a/crypto/des_generic.c b/crypto/des_generic.c
+index 59966d1..355ecb7 100644
+--- a/crypto/des_generic.c
++++ b/crypto/des_generic.c
+@@ -20,13 +20,7 @@
+ #include <linux/crypto.h>
+ #include <linux/types.h>
+
+-#define DES_KEY_SIZE 8
+-#define DES_EXPKEY_WORDS 32
+-#define DES_BLOCK_SIZE 8
+-
+-#define DES3_EDE_KEY_SIZE (3 * DES_KEY_SIZE)
+-#define DES3_EDE_EXPKEY_WORDS (3 * DES_EXPKEY_WORDS)
+-#define DES3_EDE_BLOCK_SIZE DES_BLOCK_SIZE
++#include <crypto/des.h>
+
+ #define ROL(x, r) ((x) = rol32((x), (r)))
+ #define ROR(x, r) ((x) = ror32((x), (r)))
+@@ -634,7 +628,7 @@ static const u32 S8[64] = {
+ * Choice 1 has operated on the key.
+ *
+ */
+-static unsigned long ekey(u32 *pe, const u8 *k)
++unsigned long des_ekey(u32 *pe, const u8 *k)
+ {
+ /* K&R: long is at least 32 bits */
+ unsigned long a, b, c, d, w;
+@@ -709,6 +703,7 @@ static unsigned long ekey(u32 *pe, const u8 *k)
+ /* Zero if weak key */
+ return w;
+ }
++EXPORT_SYMBOL_GPL(des_ekey);
+
+ /*
+ * Decryption key expansion
+@@ -792,7 +787,7 @@ static int des_setkey(struct crypto_tfm *tfm, const u8 *key,
+ int ret;
+
+ /* Expand to tmp */
+- ret = ekey(tmp, key);
++ ret = des_ekey(tmp, key);
+
+ if (unlikely(ret == 0) && (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
+ *flags |= CRYPTO_TFM_RES_WEAK_KEY;
+@@ -879,9 +874,9 @@ static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
+ return -EINVAL;
+ }
+
+- ekey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
++ des_ekey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
+ dkey(expkey, key); expkey += DES_EXPKEY_WORDS; key += DES_KEY_SIZE;
+- ekey(expkey, key);
++ des_ekey(expkey, key);
+
+ return 0;
+ }
+diff --git a/crypto/digest.c b/crypto/digest.c
+index 8871dec..6fd43bd 100644
+--- a/crypto/digest.c
++++ b/crypto/digest.c
+@@ -12,6 +12,7 @@
+ *
+ */
+
++#include <crypto/scatterwalk.h>
+ #include <linux/mm.h>
+ #include <linux/errno.h>
+ #include <linux/hardirq.h>
+@@ -20,9 +21,6 @@
+ #include <linux/module.h>
+ #include <linux/scatterlist.h>
+
+-#include "internal.h"
+-#include "scatterwalk.h"
+-
+ static int init(struct hash_desc *desc)
+ {
+ struct crypto_tfm *tfm = crypto_hash_tfm(desc->tfm);
+diff --git a/crypto/eseqiv.c b/crypto/eseqiv.c
+new file mode 100644
+index 0000000..eb90d27
+--- /dev/null
++++ b/crypto/eseqiv.c
+@@ -0,0 +1,264 @@
++/*
++ * eseqiv: Encrypted Sequence Number IV Generator
++ *
++ * This generator generates an IV based on a sequence number by xoring it
++ * with a salt and then encrypting it with the same key as used to encrypt
++ * the plain text. This algorithm requires that the block size be equal
++ * to the IV size. It is mainly useful for CBC.
++ *
++ * Copyright (c) 2007 Herbert Xu <herbert at gondor.apana.org.au>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <crypto/internal/skcipher.h>
++#include <crypto/scatterwalk.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/mm.h>
++#include <linux/module.h>
++#include <linux/random.h>
++#include <linux/scatterlist.h>
++#include <linux/spinlock.h>
++#include <linux/string.h>
++
++struct eseqiv_request_ctx {
++ struct scatterlist src[2];
++ struct scatterlist dst[2];
++ char tail[];
++};
++
++struct eseqiv_ctx {
++ spinlock_t lock;
++ unsigned int reqoff;
++ char salt[];
++};
++
++static void eseqiv_complete2(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct eseqiv_request_ctx *reqctx = skcipher_givcrypt_reqctx(req);
++
++ memcpy(req->giv, PTR_ALIGN((u8 *)reqctx->tail,
++ crypto_ablkcipher_alignmask(geniv) + 1),
++ crypto_ablkcipher_ivsize(geniv));
++}
++
++static void eseqiv_complete(struct crypto_async_request *base, int err)
++{
++ struct skcipher_givcrypt_request *req = base->data;
++
++ if (err)
++ goto out;
++
++ eseqiv_complete2(req);
++
++out:
++ skcipher_givcrypt_complete(req, err);
++}
++
++static void eseqiv_chain(struct scatterlist *head, struct scatterlist *sg,
++ int chain)
++{
++ if (chain) {
++ head->length += sg->length;
++ sg = scatterwalk_sg_next(sg);
++ }
++
++ if (sg)
++ scatterwalk_sg_chain(head, 2, sg);
++ else
++ sg_mark_end(head);
++}
++
++static int eseqiv_givencrypt(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ struct eseqiv_request_ctx *reqctx = skcipher_givcrypt_reqctx(req);
++ struct ablkcipher_request *subreq;
++ crypto_completion_t complete;
++ void *data;
++ struct scatterlist *osrc, *odst;
++ struct scatterlist *dst;
++ struct page *srcp;
++ struct page *dstp;
++ u8 *giv;
++ u8 *vsrc;
++ u8 *vdst;
++ __be64 seq;
++ unsigned int ivsize;
++ unsigned int len;
++ int err;
++
++ subreq = (void *)(reqctx->tail + ctx->reqoff);
++ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++
++ giv = req->giv;
++ complete = req->creq.base.complete;
++ data = req->creq.base.data;
++
++ osrc = req->creq.src;
++ odst = req->creq.dst;
++ srcp = sg_page(osrc);
++ dstp = sg_page(odst);
++ vsrc = PageHighMem(srcp) ? NULL : page_address(srcp) + osrc->offset;
++ vdst = PageHighMem(dstp) ? NULL : page_address(dstp) + odst->offset;
++
++ ivsize = crypto_ablkcipher_ivsize(geniv);
++
++ if (vsrc != giv + ivsize && vdst != giv + ivsize) {
++ giv = PTR_ALIGN((u8 *)reqctx->tail,
++ crypto_ablkcipher_alignmask(geniv) + 1);
++ complete = eseqiv_complete;
++ data = req;
++ }
++
++ ablkcipher_request_set_callback(subreq, req->creq.base.flags, complete,
++ data);
++
++ sg_init_table(reqctx->src, 2);
++ sg_set_buf(reqctx->src, giv, ivsize);
++ eseqiv_chain(reqctx->src, osrc, vsrc == giv + ivsize);
++
++ dst = reqctx->src;
++ if (osrc != odst) {
++ sg_init_table(reqctx->dst, 2);
++ sg_set_buf(reqctx->dst, giv, ivsize);
++ eseqiv_chain(reqctx->dst, odst, vdst == giv + ivsize);
++
++ dst = reqctx->dst;
++ }
++
++ ablkcipher_request_set_crypt(subreq, reqctx->src, dst,
++ req->creq.nbytes, req->creq.info);
++
++ memcpy(req->creq.info, ctx->salt, ivsize);
++
++ len = ivsize;
++ if (ivsize > sizeof(u64)) {
++ memset(req->giv, 0, ivsize - sizeof(u64));
++ len = sizeof(u64);
++ }
++ seq = cpu_to_be64(req->seq);
++ memcpy(req->giv + ivsize - len, &seq, len);
++
++ err = crypto_ablkcipher_encrypt(subreq);
++ if (err)
++ goto out;
++
++ eseqiv_complete2(req);
++
++out:
++ return err;
++}
++
++static int eseqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++
++ spin_lock_bh(&ctx->lock);
++ if (crypto_ablkcipher_crt(geniv)->givencrypt != eseqiv_givencrypt_first)
++ goto unlock;
++
++ crypto_ablkcipher_crt(geniv)->givencrypt = eseqiv_givencrypt;
++ get_random_bytes(ctx->salt, crypto_ablkcipher_ivsize(geniv));
++
++unlock:
++ spin_unlock_bh(&ctx->lock);
++
++ return eseqiv_givencrypt(req);
++}
++
++static int eseqiv_init(struct crypto_tfm *tfm)
++{
++ struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
++ struct eseqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ unsigned long alignmask;
++ unsigned int reqsize;
++
++ spin_lock_init(&ctx->lock);
++
++ alignmask = crypto_tfm_ctx_alignment() - 1;
++ reqsize = sizeof(struct eseqiv_request_ctx);
++
++ if (alignmask & reqsize) {
++ alignmask &= reqsize;
++ alignmask--;
++ }
++
++ alignmask = ~alignmask;
++ alignmask &= crypto_ablkcipher_alignmask(geniv);
++
++ reqsize += alignmask;
++ reqsize += crypto_ablkcipher_ivsize(geniv);
++ reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
++
++ ctx->reqoff = reqsize - sizeof(struct eseqiv_request_ctx);
++
++ tfm->crt_ablkcipher.reqsize = reqsize +
++ sizeof(struct ablkcipher_request);
++
++ return skcipher_geniv_init(tfm);
++}
++
++static struct crypto_template eseqiv_tmpl;
++
++static struct crypto_instance *eseqiv_alloc(struct rtattr **tb)
++{
++ struct crypto_instance *inst;
++ int err;
++
++ inst = skcipher_geniv_alloc(&eseqiv_tmpl, tb, 0, 0);
++ if (IS_ERR(inst))
++ goto out;
++
++ err = -EINVAL;
++ if (inst->alg.cra_ablkcipher.ivsize != inst->alg.cra_blocksize)
++ goto free_inst;
++
++ inst->alg.cra_ablkcipher.givencrypt = eseqiv_givencrypt_first;
++
++ inst->alg.cra_init = eseqiv_init;
++ inst->alg.cra_exit = skcipher_geniv_exit;
++
++ inst->alg.cra_ctxsize = sizeof(struct eseqiv_ctx);
++ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++
++out:
++ return inst;
++
++free_inst:
++ skcipher_geniv_free(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static struct crypto_template eseqiv_tmpl = {
++ .name = "eseqiv",
++ .alloc = eseqiv_alloc,
++ .free = skcipher_geniv_free,
++ .module = THIS_MODULE,
++};
++
++static int __init eseqiv_module_init(void)
++{
++ return crypto_register_template(&eseqiv_tmpl);
++}
++
++static void __exit eseqiv_module_exit(void)
++{
++ crypto_unregister_template(&eseqiv_tmpl);
++}
++
++module_init(eseqiv_module_init);
++module_exit(eseqiv_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Encrypted Sequence Number IV Generator");
+diff --git a/crypto/gcm.c b/crypto/gcm.c
+new file mode 100644
+index 0000000..e70afd0
+--- /dev/null
++++ b/crypto/gcm.c
+@@ -0,0 +1,823 @@
++/*
++ * GCM: Galois/Counter Mode.
++ *
++ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1 at iki.fi>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License version 2 as published
++ * by the Free Software Foundation.
++ */
++
++#include <crypto/gf128mul.h>
++#include <crypto/internal/aead.h>
++#include <crypto/internal/skcipher.h>
++#include <crypto/scatterwalk.h>
++#include <linux/completion.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++
++struct gcm_instance_ctx {
++ struct crypto_skcipher_spawn ctr;
++};
++
++struct crypto_gcm_ctx {
++ struct crypto_ablkcipher *ctr;
++ struct gf128mul_4k *gf128;
++};
++
++struct crypto_rfc4106_ctx {
++ struct crypto_aead *child;
++ u8 nonce[4];
++};
++
++struct crypto_gcm_ghash_ctx {
++ u32 bytes;
++ u32 flags;
++ struct gf128mul_4k *gf128;
++ u8 buffer[16];
++};
++
++struct crypto_gcm_req_priv_ctx {
++ u8 auth_tag[16];
++ u8 iauth_tag[16];
++ struct scatterlist src[2];
++ struct scatterlist dst[2];
++ struct crypto_gcm_ghash_ctx ghash;
++ struct ablkcipher_request abreq;
++};
++
++struct crypto_gcm_setkey_result {
++ int err;
++ struct completion completion;
++};
++
++static inline struct crypto_gcm_req_priv_ctx *crypto_gcm_reqctx(
++ struct aead_request *req)
++{
++ unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req));
++
++ return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
++}
++
++static void crypto_gcm_ghash_init(struct crypto_gcm_ghash_ctx *ctx, u32 flags,
++ struct gf128mul_4k *gf128)
++{
++ ctx->bytes = 0;
++ ctx->flags = flags;
++ ctx->gf128 = gf128;
++ memset(ctx->buffer, 0, 16);
++}
++
++static void crypto_gcm_ghash_update(struct crypto_gcm_ghash_ctx *ctx,
++ const u8 *src, unsigned int srclen)
++{
++ u8 *dst = ctx->buffer;
++
++ if (ctx->bytes) {
++ int n = min(srclen, ctx->bytes);
++ u8 *pos = dst + (16 - ctx->bytes);
++
++ ctx->bytes -= n;
++ srclen -= n;
++
++ while (n--)
++ *pos++ ^= *src++;
++
++ if (!ctx->bytes)
++ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
++ }
++
++ while (srclen >= 16) {
++ crypto_xor(dst, src, 16);
++ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
++ src += 16;
++ srclen -= 16;
++ }
++
++ if (srclen) {
++ ctx->bytes = 16 - srclen;
++ while (srclen--)
++ *dst++ ^= *src++;
++ }
++}
++
++static void crypto_gcm_ghash_update_sg(struct crypto_gcm_ghash_ctx *ctx,
++ struct scatterlist *sg, int len)
++{
++ struct scatter_walk walk;
++ u8 *src;
++ int n;
++
++ if (!len)
++ return;
++
++ scatterwalk_start(&walk, sg);
++
++ while (len) {
++ n = scatterwalk_clamp(&walk, len);
++
++ if (!n) {
++ scatterwalk_start(&walk, scatterwalk_sg_next(walk.sg));
++ n = scatterwalk_clamp(&walk, len);
++ }
++
++ src = scatterwalk_map(&walk, 0);
++
++ crypto_gcm_ghash_update(ctx, src, n);
++ len -= n;
++
++ scatterwalk_unmap(src, 0);
++ scatterwalk_advance(&walk, n);
++ scatterwalk_done(&walk, 0, len);
++ if (len)
++ crypto_yield(ctx->flags);
++ }
++}
++
++static void crypto_gcm_ghash_flush(struct crypto_gcm_ghash_ctx *ctx)
++{
++ u8 *dst = ctx->buffer;
++
++ if (ctx->bytes) {
++ u8 *tmp = dst + (16 - ctx->bytes);
++
++ while (ctx->bytes--)
++ *tmp++ ^= 0;
++
++ gf128mul_4k_lle((be128 *)dst, ctx->gf128);
++ }
++
++ ctx->bytes = 0;
++}
++
++static void crypto_gcm_ghash_final_xor(struct crypto_gcm_ghash_ctx *ctx,
++ unsigned int authlen,
++ unsigned int cryptlen, u8 *dst)
++{
++ u8 *buf = ctx->buffer;
++ u128 lengths;
++
++ lengths.a = cpu_to_be64(authlen * 8);
++ lengths.b = cpu_to_be64(cryptlen * 8);
++
++ crypto_gcm_ghash_flush(ctx);
++ crypto_xor(buf, (u8 *)&lengths, 16);
++ gf128mul_4k_lle((be128 *)buf, ctx->gf128);
++ crypto_xor(dst, buf, 16);
++}
++
++static void crypto_gcm_setkey_done(struct crypto_async_request *req, int err)
++{
++ struct crypto_gcm_setkey_result *result = req->data;
++
++ if (err == -EINPROGRESS)
++ return;
++
++ result->err = err;
++ complete(&result->completion);
++}
++
++static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_gcm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_ablkcipher *ctr = ctx->ctr;
++ struct {
++ be128 hash;
++ u8 iv[8];
++
++ struct crypto_gcm_setkey_result result;
++
++ struct scatterlist sg[1];
++ struct ablkcipher_request req;
++ } *data;
++ int err;
++
++ crypto_ablkcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK);
++ crypto_ablkcipher_set_flags(ctr, crypto_aead_get_flags(aead) &
++ CRYPTO_TFM_REQ_MASK);
++
++ err = crypto_ablkcipher_setkey(ctr, key, keylen);
++ if (err)
++ return err;
++
++ crypto_aead_set_flags(aead, crypto_ablkcipher_get_flags(ctr) &
++ CRYPTO_TFM_RES_MASK);
++
++ data = kzalloc(sizeof(*data) + crypto_ablkcipher_reqsize(ctr),
++ GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ init_completion(&data->result.completion);
++ sg_init_one(data->sg, &data->hash, sizeof(data->hash));
++ ablkcipher_request_set_tfm(&data->req, ctr);
++ ablkcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
++ CRYPTO_TFM_REQ_MAY_BACKLOG,
++ crypto_gcm_setkey_done,
++ &data->result);
++ ablkcipher_request_set_crypt(&data->req, data->sg, data->sg,
++ sizeof(data->hash), data->iv);
++
++ err = crypto_ablkcipher_encrypt(&data->req);
++ if (err == -EINPROGRESS || err == -EBUSY) {
++ err = wait_for_completion_interruptible(
++ &data->result.completion);
++ if (!err)
++ err = data->result.err;
++ }
++
++ if (err)
++ goto out;
++
++ if (ctx->gf128 != NULL)
++ gf128mul_free_4k(ctx->gf128);
++
++ ctx->gf128 = gf128mul_init_4k_lle(&data->hash);
++
++ if (ctx->gf128 == NULL)
++ err = -ENOMEM;
++
++out:
++ kfree(data);
++ return err;
++}
++
++static int crypto_gcm_setauthsize(struct crypto_aead *tfm,
++ unsigned int authsize)
++{
++ switch (authsize) {
++ case 4:
++ case 8:
++ case 12:
++ case 13:
++ case 14:
++ case 15:
++ case 16:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static void crypto_gcm_init_crypt(struct ablkcipher_request *ablk_req,
++ struct aead_request *req,
++ unsigned int cryptlen)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_gcm_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
++ u32 flags = req->base.tfm->crt_flags;
++ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
++ struct scatterlist *dst;
++ __be32 counter = cpu_to_be32(1);
++
++ memset(pctx->auth_tag, 0, sizeof(pctx->auth_tag));
++ memcpy(req->iv + 12, &counter, 4);
++
++ sg_init_table(pctx->src, 2);
++ sg_set_buf(pctx->src, pctx->auth_tag, sizeof(pctx->auth_tag));
++ scatterwalk_sg_chain(pctx->src, 2, req->src);
++
++ dst = pctx->src;
++ if (req->src != req->dst) {
++ sg_init_table(pctx->dst, 2);
++ sg_set_buf(pctx->dst, pctx->auth_tag, sizeof(pctx->auth_tag));
++ scatterwalk_sg_chain(pctx->dst, 2, req->dst);
++ dst = pctx->dst;
++ }
++
++ ablkcipher_request_set_tfm(ablk_req, ctx->ctr);
++ ablkcipher_request_set_crypt(ablk_req, pctx->src, dst,
++ cryptlen + sizeof(pctx->auth_tag),
++ req->iv);
++
++ crypto_gcm_ghash_init(ghash, flags, ctx->gf128);
++
++ crypto_gcm_ghash_update_sg(ghash, req->assoc, req->assoclen);
++ crypto_gcm_ghash_flush(ghash);
++}
++
++static int crypto_gcm_hash(struct aead_request *req)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
++ u8 *auth_tag = pctx->auth_tag;
++ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
++
++ crypto_gcm_ghash_update_sg(ghash, req->dst, req->cryptlen);
++ crypto_gcm_ghash_final_xor(ghash, req->assoclen, req->cryptlen,
++ auth_tag);
++
++ scatterwalk_map_and_copy(auth_tag, req->dst, req->cryptlen,
++ crypto_aead_authsize(aead), 1);
++ return 0;
++}
++
++static void crypto_gcm_encrypt_done(struct crypto_async_request *areq, int err)
++{
++ struct aead_request *req = areq->data;
++
++ if (!err)
++ err = crypto_gcm_hash(req);
++
++ aead_request_complete(req, err);
++}
++
++static int crypto_gcm_encrypt(struct aead_request *req)
++{
++ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
++ struct ablkcipher_request *abreq = &pctx->abreq;
++ int err;
++
++ crypto_gcm_init_crypt(abreq, req, req->cryptlen);
++ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
++ crypto_gcm_encrypt_done, req);
++
++ err = crypto_ablkcipher_encrypt(abreq);
++ if (err)
++ return err;
++
++ return crypto_gcm_hash(req);
++}
++
++static int crypto_gcm_verify(struct aead_request *req)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
++ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
++ u8 *auth_tag = pctx->auth_tag;
++ u8 *iauth_tag = pctx->iauth_tag;
++ unsigned int authsize = crypto_aead_authsize(aead);
++ unsigned int cryptlen = req->cryptlen - authsize;
++
++ crypto_gcm_ghash_final_xor(ghash, req->assoclen, cryptlen, auth_tag);
++
++ authsize = crypto_aead_authsize(aead);
++ scatterwalk_map_and_copy(iauth_tag, req->src, cryptlen, authsize, 0);
++ return memcmp(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
++}
++
++static void crypto_gcm_decrypt_done(struct crypto_async_request *areq, int err)
++{
++ struct aead_request *req = areq->data;
++
++ if (!err)
++ err = crypto_gcm_verify(req);
++
++ aead_request_complete(req, err);
++}
++
++static int crypto_gcm_decrypt(struct aead_request *req)
++{
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
++ struct ablkcipher_request *abreq = &pctx->abreq;
++ struct crypto_gcm_ghash_ctx *ghash = &pctx->ghash;
++ unsigned int cryptlen = req->cryptlen;
++ unsigned int authsize = crypto_aead_authsize(aead);
++ int err;
++
++ if (cryptlen < authsize)
++ return -EINVAL;
++ cryptlen -= authsize;
++
++ crypto_gcm_init_crypt(abreq, req, cryptlen);
++ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
++ crypto_gcm_decrypt_done, req);
++
++ crypto_gcm_ghash_update_sg(ghash, req->src, cryptlen);
++
++ err = crypto_ablkcipher_decrypt(abreq);
++ if (err)
++ return err;
++
++ return crypto_gcm_verify(req);
++}
++
++static int crypto_gcm_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct gcm_instance_ctx *ictx = crypto_instance_ctx(inst);
++ struct crypto_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_ablkcipher *ctr;
++ unsigned long align;
++ int err;
++
++ ctr = crypto_spawn_skcipher(&ictx->ctr);
++ err = PTR_ERR(ctr);
++ if (IS_ERR(ctr))
++ return err;
++
++ ctx->ctr = ctr;
++ ctx->gf128 = NULL;
++
++ align = crypto_tfm_alg_alignmask(tfm);
++ align &= ~(crypto_tfm_ctx_alignment() - 1);
++ tfm->crt_aead.reqsize = align +
++ sizeof(struct crypto_gcm_req_priv_ctx) +
++ crypto_ablkcipher_reqsize(ctr);
++
++ return 0;
++}
++
++static void crypto_gcm_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ if (ctx->gf128 != NULL)
++ gf128mul_free_4k(ctx->gf128);
++
++ crypto_free_ablkcipher(ctx->ctr);
++}
++
++static struct crypto_instance *crypto_gcm_alloc_common(struct rtattr **tb,
++ const char *full_name,
++ const char *ctr_name)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_alg *ctr;
++ struct gcm_instance_ctx *ctx;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
++ if (!inst)
++ return ERR_PTR(-ENOMEM);
++
++ ctx = crypto_instance_ctx(inst);
++ crypto_set_skcipher_spawn(&ctx->ctr, inst);
++ err = crypto_grab_skcipher(&ctx->ctr, ctr_name, 0,
++ crypto_requires_sync(algt->type,
++ algt->mask));
++ if (err)
++ goto err_free_inst;
++
++ ctr = crypto_skcipher_spawn_alg(&ctx->ctr);
++
++ /* We only support 16-byte blocks. */
++ if (ctr->cra_ablkcipher.ivsize != 16)
++ goto out_put_ctr;
++
++ /* Not a stream cipher? */
++ err = -EINVAL;
++ if (ctr->cra_blocksize != 1)
++ goto out_put_ctr;
++
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "gcm_base(%s)", ctr->cra_driver_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto out_put_ctr;
++
++ memcpy(inst->alg.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
++ inst->alg.cra_flags |= ctr->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = ctr->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = ctr->cra_alignmask | (__alignof__(u64) - 1);
++ inst->alg.cra_type = &crypto_aead_type;
++ inst->alg.cra_aead.ivsize = 16;
++ inst->alg.cra_aead.maxauthsize = 16;
++ inst->alg.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
++ inst->alg.cra_init = crypto_gcm_init_tfm;
++ inst->alg.cra_exit = crypto_gcm_exit_tfm;
++ inst->alg.cra_aead.setkey = crypto_gcm_setkey;
++ inst->alg.cra_aead.setauthsize = crypto_gcm_setauthsize;
++ inst->alg.cra_aead.encrypt = crypto_gcm_encrypt;
++ inst->alg.cra_aead.decrypt = crypto_gcm_decrypt;
++
++out:
++ return inst;
++
++out_put_ctr:
++ crypto_drop_skcipher(&ctx->ctr);
++err_free_inst:
++ kfree(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static struct crypto_instance *crypto_gcm_alloc(struct rtattr **tb)
++{
++ int err;
++ const char *cipher_name;
++ char ctr_name[CRYPTO_MAX_ALG_NAME];
++ char full_name[CRYPTO_MAX_ALG_NAME];
++
++ cipher_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(cipher_name);
++ if (IS_ERR(cipher_name))
++ return ERR_PTR(err);
++
++ if (snprintf(ctr_name, CRYPTO_MAX_ALG_NAME, "ctr(%s)", cipher_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ return crypto_gcm_alloc_common(tb, full_name, ctr_name);
++}
++
++static void crypto_gcm_free(struct crypto_instance *inst)
++{
++ struct gcm_instance_ctx *ctx = crypto_instance_ctx(inst);
++
++ crypto_drop_skcipher(&ctx->ctr);
++ kfree(inst);
++}
++
++static struct crypto_template crypto_gcm_tmpl = {
++ .name = "gcm",
++ .alloc = crypto_gcm_alloc,
++ .free = crypto_gcm_free,
++ .module = THIS_MODULE,
++};
++
++static struct crypto_instance *crypto_gcm_base_alloc(struct rtattr **tb)
++{
++ int err;
++ const char *ctr_name;
++ char full_name[CRYPTO_MAX_ALG_NAME];
++
++ ctr_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(ctr_name);
++ if (IS_ERR(ctr_name))
++ return ERR_PTR(err);
++
++ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s)",
++ ctr_name) >= CRYPTO_MAX_ALG_NAME)
++ return ERR_PTR(-ENAMETOOLONG);
++
++ return crypto_gcm_alloc_common(tb, full_name, ctr_name);
++}
++
++static struct crypto_template crypto_gcm_base_tmpl = {
++ .name = "gcm_base",
++ .alloc = crypto_gcm_base_alloc,
++ .free = crypto_gcm_free,
++ .module = THIS_MODULE,
++};
++
++static int crypto_rfc4106_setkey(struct crypto_aead *parent, const u8 *key,
++ unsigned int keylen)
++{
++ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(parent);
++ struct crypto_aead *child = ctx->child;
++ int err;
++
++ if (keylen < 4)
++ return -EINVAL;
++
++ keylen -= 4;
++ memcpy(ctx->nonce, key + keylen, 4);
++
++ crypto_aead_clear_flags(child, CRYPTO_TFM_REQ_MASK);
++ crypto_aead_set_flags(child, crypto_aead_get_flags(parent) &
++ CRYPTO_TFM_REQ_MASK);
++ err = crypto_aead_setkey(child, key, keylen);
++ crypto_aead_set_flags(parent, crypto_aead_get_flags(child) &
++ CRYPTO_TFM_RES_MASK);
++
++ return err;
++}
++
++static int crypto_rfc4106_setauthsize(struct crypto_aead *parent,
++ unsigned int authsize)
++{
++ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(parent);
++
++ switch (authsize) {
++ case 8:
++ case 12:
++ case 16:
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ return crypto_aead_setauthsize(ctx->child, authsize);
++}
++
++static struct aead_request *crypto_rfc4106_crypt(struct aead_request *req)
++{
++ struct aead_request *subreq = aead_request_ctx(req);
++ struct crypto_aead *aead = crypto_aead_reqtfm(req);
++ struct crypto_rfc4106_ctx *ctx = crypto_aead_ctx(aead);
++ struct crypto_aead *child = ctx->child;
++ u8 *iv = PTR_ALIGN((u8 *)(subreq + 1) + crypto_aead_reqsize(child),
++ crypto_aead_alignmask(child) + 1);
++
++ memcpy(iv, ctx->nonce, 4);
++ memcpy(iv + 4, req->iv, 8);
++
++ aead_request_set_tfm(subreq, child);
++ aead_request_set_callback(subreq, req->base.flags, req->base.complete,
++ req->base.data);
++ aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, iv);
++ aead_request_set_assoc(subreq, req->assoc, req->assoclen);
++
++ return subreq;
++}
++
++static int crypto_rfc4106_encrypt(struct aead_request *req)
++{
++ req = crypto_rfc4106_crypt(req);
++
++ return crypto_aead_encrypt(req);
++}
++
++static int crypto_rfc4106_decrypt(struct aead_request *req)
++{
++ req = crypto_rfc4106_crypt(req);
++
++ return crypto_aead_decrypt(req);
++}
++
++static int crypto_rfc4106_init_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_instance *inst = (void *)tfm->__crt_alg;
++ struct crypto_aead_spawn *spawn = crypto_instance_ctx(inst);
++ struct crypto_rfc4106_ctx *ctx = crypto_tfm_ctx(tfm);
++ struct crypto_aead *aead;
++ unsigned long align;
++
++ aead = crypto_spawn_aead(spawn);
++ if (IS_ERR(aead))
++ return PTR_ERR(aead);
++
++ ctx->child = aead;
++
++ align = crypto_aead_alignmask(aead);
++ align &= ~(crypto_tfm_ctx_alignment() - 1);
++ tfm->crt_aead.reqsize = sizeof(struct aead_request) +
++ ALIGN(crypto_aead_reqsize(aead),
++ crypto_tfm_ctx_alignment()) +
++ align + 16;
++
++ return 0;
++}
++
++static void crypto_rfc4106_exit_tfm(struct crypto_tfm *tfm)
++{
++ struct crypto_rfc4106_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ crypto_free_aead(ctx->child);
++}
++
++static struct crypto_instance *crypto_rfc4106_alloc(struct rtattr **tb)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ struct crypto_aead_spawn *spawn;
++ struct crypto_alg *alg;
++ const char *ccm_name;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
++ return ERR_PTR(-EINVAL);
++
++ ccm_name = crypto_attr_alg_name(tb[1]);
++ err = PTR_ERR(ccm_name);
++ if (IS_ERR(ccm_name))
++ return ERR_PTR(err);
++
++ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
++ if (!inst)
++ return ERR_PTR(-ENOMEM);
++
++ spawn = crypto_instance_ctx(inst);
++ crypto_set_aead_spawn(spawn, inst);
++ err = crypto_grab_aead(spawn, ccm_name, 0,
++ crypto_requires_sync(algt->type, algt->mask));
++ if (err)
++ goto out_free_inst;
++
++ alg = crypto_aead_spawn_alg(spawn);
++
++ err = -EINVAL;
++
++ /* We only support 16-byte blocks. */
++ if (alg->cra_aead.ivsize != 16)
++ goto out_drop_alg;
++
++ /* Not a stream cipher? */
++ if (alg->cra_blocksize != 1)
++ goto out_drop_alg;
++
++ err = -ENAMETOOLONG;
++ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
++ "rfc4106(%s)", alg->cra_name) >= CRYPTO_MAX_ALG_NAME ||
++ snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
++ "rfc4106(%s)", alg->cra_driver_name) >=
++ CRYPTO_MAX_ALG_NAME)
++ goto out_drop_alg;
++
++ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
++ inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
++ inst->alg.cra_priority = alg->cra_priority;
++ inst->alg.cra_blocksize = 1;
++ inst->alg.cra_alignmask = alg->cra_alignmask;
++ inst->alg.cra_type = &crypto_nivaead_type;
++
++ inst->alg.cra_aead.ivsize = 8;
++ inst->alg.cra_aead.maxauthsize = 16;
++
++ inst->alg.cra_ctxsize = sizeof(struct crypto_rfc4106_ctx);
++
++ inst->alg.cra_init = crypto_rfc4106_init_tfm;
++ inst->alg.cra_exit = crypto_rfc4106_exit_tfm;
++
++ inst->alg.cra_aead.setkey = crypto_rfc4106_setkey;
++ inst->alg.cra_aead.setauthsize = crypto_rfc4106_setauthsize;
++ inst->alg.cra_aead.encrypt = crypto_rfc4106_encrypt;
++ inst->alg.cra_aead.decrypt = crypto_rfc4106_decrypt;
++
++ inst->alg.cra_aead.geniv = "seqiv";
++
++out:
++ return inst;
++
++out_drop_alg:
++ crypto_drop_aead(spawn);
++out_free_inst:
++ kfree(inst);
++ inst = ERR_PTR(err);
++ goto out;
++}
++
++static void crypto_rfc4106_free(struct crypto_instance *inst)
++{
++ crypto_drop_spawn(crypto_instance_ctx(inst));
++ kfree(inst);
++}
++
++static struct crypto_template crypto_rfc4106_tmpl = {
++ .name = "rfc4106",
++ .alloc = crypto_rfc4106_alloc,
++ .free = crypto_rfc4106_free,
++ .module = THIS_MODULE,
++};
++
++static int __init crypto_gcm_module_init(void)
++{
++ int err;
++
++ err = crypto_register_template(&crypto_gcm_base_tmpl);
++ if (err)
++ goto out;
++
++ err = crypto_register_template(&crypto_gcm_tmpl);
++ if (err)
++ goto out_undo_base;
++
++ err = crypto_register_template(&crypto_rfc4106_tmpl);
++ if (err)
++ goto out_undo_gcm;
++
++out:
++ return err;
++
++out_undo_gcm:
++ crypto_unregister_template(&crypto_gcm_tmpl);
++out_undo_base:
++ crypto_unregister_template(&crypto_gcm_base_tmpl);
++ goto out;
++}
++
++static void __exit crypto_gcm_module_exit(void)
++{
++ crypto_unregister_template(&crypto_rfc4106_tmpl);
++ crypto_unregister_template(&crypto_gcm_tmpl);
++ crypto_unregister_template(&crypto_gcm_base_tmpl);
++}
++
++module_init(crypto_gcm_module_init);
++module_exit(crypto_gcm_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Galois/Counter Mode");
++MODULE_AUTHOR("Mikko Herranen <mh1@iki.fi>");
++MODULE_ALIAS("gcm_base");
++MODULE_ALIAS("rfc4106");
+diff --git a/crypto/hmac.c b/crypto/hmac.c
+index 0f05be7..a1d016a 100644
+--- a/crypto/hmac.c
++++ b/crypto/hmac.c
+@@ -17,6 +17,7 @@
+ */
+
+ #include <crypto/algapi.h>
++#include <crypto/scatterwalk.h>
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+@@ -160,7 +161,7 @@ static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg,
+
+ sg_init_table(sg1, 2);
+ sg_set_buf(sg1, ipad, bs);
+- sg_set_page(&sg1[1], (void *) sg, 0, 0);
++ scatterwalk_sg_chain(sg1, 2, sg);
+
+ sg_init_table(sg2, 1);
+ sg_set_buf(sg2, opad, bs + ds);
+diff --git a/crypto/internal.h b/crypto/internal.h
+index abb01f7..32f4c21 100644
+--- a/crypto/internal.h
++++ b/crypto/internal.h
+@@ -25,7 +25,6 @@
+ #include <linux/notifier.h>
+ #include <linux/rwsem.h>
+ #include <linux/slab.h>
+-#include <asm/kmap_types.h>
+
+ /* Crypto notification events. */
+ enum {
+@@ -50,34 +49,6 @@ extern struct list_head crypto_alg_list;
+ extern struct rw_semaphore crypto_alg_sem;
+ extern struct blocking_notifier_head crypto_chain;
+
+-static inline enum km_type crypto_kmap_type(int out)
+-{
+- enum km_type type;
+-
+- if (in_softirq())
+- type = out * (KM_SOFTIRQ1 - KM_SOFTIRQ0) + KM_SOFTIRQ0;
+- else
+- type = out * (KM_USER1 - KM_USER0) + KM_USER0;
+-
+- return type;
+-}
+-
+-static inline void *crypto_kmap(struct page *page, int out)
+-{
+- return kmap_atomic(page, crypto_kmap_type(out));
+-}
+-
+-static inline void crypto_kunmap(void *vaddr, int out)
+-{
+- kunmap_atomic(vaddr, crypto_kmap_type(out));
+-}
+-
+-static inline void crypto_yield(u32 flags)
+-{
+- if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
+- cond_resched();
+-}
+-
+ #ifdef CONFIG_PROC_FS
+ void __init crypto_init_proc(void);
+ void __exit crypto_exit_proc(void);
+@@ -122,6 +93,8 @@ void crypto_exit_digest_ops(struct crypto_tfm *tfm);
+ void crypto_exit_cipher_ops(struct crypto_tfm *tfm);
+ void crypto_exit_compress_ops(struct crypto_tfm *tfm);
+
++void crypto_larval_kill(struct crypto_alg *alg);
++struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask);
+ void crypto_larval_error(const char *name, u32 type, u32 mask);
+
+ void crypto_shoot_alg(struct crypto_alg *alg);
+diff --git a/crypto/lzo.c b/crypto/lzo.c
+new file mode 100644
+index 0000000..48c3288
+--- /dev/null
++++ b/crypto/lzo.c
+@@ -0,0 +1,106 @@
++/*
++ * Cryptographic API.
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License version 2 as published by
++ * the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but WITHOUT
++ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
++ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
++ * more details.
++ *
++ * You should have received a copy of the GNU General Public License along with
++ * this program; if not, write to the Free Software Foundation, Inc., 51
++ * Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/crypto.h>
++#include <linux/vmalloc.h>
++#include <linux/lzo.h>
++
++struct lzo_ctx {
++ void *lzo_comp_mem;
++};
++
++static int lzo_init(struct crypto_tfm *tfm)
++{
++ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
++ if (!ctx->lzo_comp_mem)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static void lzo_exit(struct crypto_tfm *tfm)
++{
++ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
++
++ vfree(ctx->lzo_comp_mem);
++}
++
++static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
++ unsigned int slen, u8 *dst, unsigned int *dlen)
++{
++ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
++ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
++ int err;
++
++ err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
++
++ if (err != LZO_E_OK)
++ return -EINVAL;
++
++ *dlen = tmp_len;
++ return 0;
++}
++
++static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
++ unsigned int slen, u8 *dst, unsigned int *dlen)
++{
++ int err;
++ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
++
++ err = lzo1x_decompress_safe(src, slen, dst, &tmp_len);
++
++ if (err != LZO_E_OK)
++ return -EINVAL;
++
++ *dlen = tmp_len;
++ return 0;
++
++}
++
++static struct crypto_alg alg = {
++ .cra_name = "lzo",
++ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
++ .cra_ctxsize = sizeof(struct lzo_ctx),
++ .cra_module = THIS_MODULE,
++ .cra_list = LIST_HEAD_INIT(alg.cra_list),
++ .cra_init = lzo_init,
++ .cra_exit = lzo_exit,
++ .cra_u = { .compress = {
++ .coa_compress = lzo_compress,
++ .coa_decompress = lzo_decompress } }
++};
++
++static int __init init(void)
++{
++ return crypto_register_alg(&alg);
++}
++
++static void __exit fini(void)
++{
++ crypto_unregister_alg(&alg);
++}
++
++module_init(init);
++module_exit(fini);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("LZO Compression Algorithm");
+diff --git a/crypto/pcbc.c b/crypto/pcbc.c
+index c3ed8a1..fe70477 100644
+--- a/crypto/pcbc.c
++++ b/crypto/pcbc.c
+@@ -24,7 +24,6 @@
+
+ struct crypto_pcbc_ctx {
+ struct crypto_cipher *child;
+- void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
+ };
+
+ static int crypto_pcbc_setkey(struct crypto_tfm *parent, const u8 *key,
+@@ -45,9 +44,7 @@ static int crypto_pcbc_setkey(struct crypto_tfm *parent, const u8 *key,
+
+ static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+@@ -58,10 +55,10 @@ static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
+ u8 *iv = walk->iv;
+
+ do {
+- xor(iv, src, bsize);
++ crypto_xor(iv, src, bsize);
+ fn(crypto_cipher_tfm(tfm), dst, iv);
+ memcpy(iv, dst, bsize);
+- xor(iv, src, bsize);
++ crypto_xor(iv, src, bsize);
+
+ src += bsize;
+ dst += bsize;
+@@ -72,9 +69,7 @@ static int crypto_pcbc_encrypt_segment(struct blkcipher_desc *desc,
+
+ static int crypto_pcbc_encrypt_inplace(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+@@ -86,10 +81,10 @@ static int crypto_pcbc_encrypt_inplace(struct blkcipher_desc *desc,
+
+ do {
+ memcpy(tmpbuf, src, bsize);
+- xor(iv, tmpbuf, bsize);
++ crypto_xor(iv, src, bsize);
+ fn(crypto_cipher_tfm(tfm), src, iv);
+- memcpy(iv, src, bsize);
+- xor(iv, tmpbuf, bsize);
++ memcpy(iv, tmpbuf, bsize);
++ crypto_xor(iv, src, bsize);
+
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+@@ -107,7 +102,6 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_pcbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+@@ -115,11 +109,11 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
+
+ while ((nbytes = walk.nbytes)) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+- nbytes = crypto_pcbc_encrypt_inplace(desc, &walk, child,
+- xor);
++ nbytes = crypto_pcbc_encrypt_inplace(desc, &walk,
++ child);
+ else
+- nbytes = crypto_pcbc_encrypt_segment(desc, &walk, child,
+- xor);
++ nbytes = crypto_pcbc_encrypt_segment(desc, &walk,
++ child);
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+@@ -128,9 +122,7 @@ static int crypto_pcbc_encrypt(struct blkcipher_desc *desc,
+
+ static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_decrypt;
+@@ -142,9 +134,9 @@ static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
+
+ do {
+ fn(crypto_cipher_tfm(tfm), dst, src);
+- xor(dst, iv, bsize);
++ crypto_xor(dst, iv, bsize);
+ memcpy(iv, src, bsize);
+- xor(iv, dst, bsize);
++ crypto_xor(iv, dst, bsize);
+
+ src += bsize;
+ dst += bsize;
+@@ -157,9 +149,7 @@ static int crypto_pcbc_decrypt_segment(struct blkcipher_desc *desc,
+
+ static int crypto_pcbc_decrypt_inplace(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+- struct crypto_cipher *tfm,
+- void (*xor)(u8 *, const u8 *,
+- unsigned int))
++ struct crypto_cipher *tfm)
+ {
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_decrypt;
+@@ -172,9 +162,9 @@ static int crypto_pcbc_decrypt_inplace(struct blkcipher_desc *desc,
+ do {
+ memcpy(tmpbuf, src, bsize);
+ fn(crypto_cipher_tfm(tfm), src, src);
+- xor(src, iv, bsize);
++ crypto_xor(src, iv, bsize);
+ memcpy(iv, tmpbuf, bsize);
+- xor(iv, src, bsize);
++ crypto_xor(iv, src, bsize);
+
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+@@ -192,7 +182,6 @@ static int crypto_pcbc_decrypt(struct blkcipher_desc *desc,
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_pcbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+- void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+@@ -200,48 +189,17 @@ static int crypto_pcbc_decrypt(struct blkcipher_desc *desc,
+
+ while ((nbytes = walk.nbytes)) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+- nbytes = crypto_pcbc_decrypt_inplace(desc, &walk, child,
+- xor);
++ nbytes = crypto_pcbc_decrypt_inplace(desc, &walk,
++ child);
+ else
+- nbytes = crypto_pcbc_decrypt_segment(desc, &walk, child,
+- xor);
++ nbytes = crypto_pcbc_decrypt_segment(desc, &walk,
++ child);
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+ return err;
+ }
+
+-static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
+-{
+- do {
+- *a++ ^= *b++;
+- } while (--bs);
+-}
+-
+-static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
+-{
+- u32 *a = (u32 *)dst;
+- u32 *b = (u32 *)src;
+-
+- do {
+- *a++ ^= *b++;
+- } while ((bs -= 4));
+-}
+-
+-static void xor_64(u8 *a, const u8 *b, unsigned int bs)
+-{
+- ((u32 *)a)[0] ^= ((u32 *)b)[0];
+- ((u32 *)a)[1] ^= ((u32 *)b)[1];
+-}
+-
+-static void xor_128(u8 *a, const u8 *b, unsigned int bs)
+-{
+- ((u32 *)a)[0] ^= ((u32 *)b)[0];
+- ((u32 *)a)[1] ^= ((u32 *)b)[1];
+- ((u32 *)a)[2] ^= ((u32 *)b)[2];
+- ((u32 *)a)[3] ^= ((u32 *)b)[3];
+-}
+-
+ static int crypto_pcbc_init_tfm(struct crypto_tfm *tfm)
+ {
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+@@ -249,22 +207,6 @@ static int crypto_pcbc_init_tfm(struct crypto_tfm *tfm)
+ struct crypto_pcbc_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+
+- switch (crypto_tfm_alg_blocksize(tfm)) {
+- case 8:
+- ctx->xor = xor_64;
+- break;
+-
+- case 16:
+- ctx->xor = xor_128;
+- break;
+-
+- default:
+- if (crypto_tfm_alg_blocksize(tfm) % 4)
+- ctx->xor = xor_byte;
+- else
+- ctx->xor = xor_quad;
+- }
+-
+ cipher = crypto_spawn_cipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+@@ -304,8 +246,9 @@ static struct crypto_instance *crypto_pcbc_alloc(struct rtattr **tb)
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+- if (!(alg->cra_blocksize % 4))
+- inst->alg.cra_alignmask |= 3;
++ /* We access the data as u32s when xoring. */
++ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
++
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
+new file mode 100644
+index 0000000..1fa4e4d
+--- /dev/null
++++ b/crypto/salsa20_generic.c
+@@ -0,0 +1,255 @@
++/*
++ * Salsa20: Salsa20 stream cipher algorithm
++ *
++ * Copyright (c) 2007 Tan Swee Heng <thesweeheng@gmail.com>
++ *
++ * Derived from:
++ * - salsa20.c: Public domain C code by Daniel J. Bernstein <djb@cr.yp.to>
++ *
++ * Salsa20 is a stream cipher candidate in eSTREAM, the ECRYPT Stream
++ * Cipher Project. It is designed by Daniel J. Bernstein <djb@cr.yp.to>.
++ * More information about eSTREAM and Salsa20 can be found here:
++ * http://www.ecrypt.eu.org/stream/
++ * http://cr.yp.to/snuffle.html
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/errno.h>
++#include <linux/crypto.h>
++#include <linux/types.h>
++#include <crypto/algapi.h>
++#include <asm/byteorder.h>
++
++#define SALSA20_IV_SIZE 8U
++#define SALSA20_MIN_KEY_SIZE 16U
++#define SALSA20_MAX_KEY_SIZE 32U
++
++/*
++ * Start of code taken from D. J. Bernstein's reference implementation.
++ * With some modifications and optimizations made to suit our needs.
++ */
++
++/*
++salsa20-ref.c version 20051118
++D. J. Bernstein
++Public domain.
++*/
++
++#define ROTATE(v,n) (((v) << (n)) | ((v) >> (32 - (n))))
++#define XOR(v,w) ((v) ^ (w))
++#define PLUS(v,w) (((v) + (w)))
++#define PLUSONE(v) (PLUS((v),1))
++#define U32TO8_LITTLE(p, v) \
++ { (p)[0] = (v >> 0) & 0xff; (p)[1] = (v >> 8) & 0xff; \
++ (p)[2] = (v >> 16) & 0xff; (p)[3] = (v >> 24) & 0xff; }
++#define U8TO32_LITTLE(p) \
++ (((u32)((p)[0]) ) | ((u32)((p)[1]) << 8) | \
++ ((u32)((p)[2]) << 16) | ((u32)((p)[3]) << 24) )
++
++struct salsa20_ctx
++{
++ u32 input[16];
++};
++
++static void salsa20_wordtobyte(u8 output[64], const u32 input[16])
++{
++ u32 x[16];
++ int i;
++
++ memcpy(x, input, sizeof(x));
++ for (i = 20; i > 0; i -= 2) {
++ x[ 4] = XOR(x[ 4],ROTATE(PLUS(x[ 0],x[12]), 7));
++ x[ 8] = XOR(x[ 8],ROTATE(PLUS(x[ 4],x[ 0]), 9));
++ x[12] = XOR(x[12],ROTATE(PLUS(x[ 8],x[ 4]),13));
++ x[ 0] = XOR(x[ 0],ROTATE(PLUS(x[12],x[ 8]),18));
++ x[ 9] = XOR(x[ 9],ROTATE(PLUS(x[ 5],x[ 1]), 7));
++ x[13] = XOR(x[13],ROTATE(PLUS(x[ 9],x[ 5]), 9));
++ x[ 1] = XOR(x[ 1],ROTATE(PLUS(x[13],x[ 9]),13));
++ x[ 5] = XOR(x[ 5],ROTATE(PLUS(x[ 1],x[13]),18));
++ x[14] = XOR(x[14],ROTATE(PLUS(x[10],x[ 6]), 7));
++ x[ 2] = XOR(x[ 2],ROTATE(PLUS(x[14],x[10]), 9));
++ x[ 6] = XOR(x[ 6],ROTATE(PLUS(x[ 2],x[14]),13));
++ x[10] = XOR(x[10],ROTATE(PLUS(x[ 6],x[ 2]),18));
++ x[ 3] = XOR(x[ 3],ROTATE(PLUS(x[15],x[11]), 7));
++ x[ 7] = XOR(x[ 7],ROTATE(PLUS(x[ 3],x[15]), 9));
++ x[11] = XOR(x[11],ROTATE(PLUS(x[ 7],x[ 3]),13));
++ x[15] = XOR(x[15],ROTATE(PLUS(x[11],x[ 7]),18));
++ x[ 1] = XOR(x[ 1],ROTATE(PLUS(x[ 0],x[ 3]), 7));
++ x[ 2] = XOR(x[ 2],ROTATE(PLUS(x[ 1],x[ 0]), 9));
++ x[ 3] = XOR(x[ 3],ROTATE(PLUS(x[ 2],x[ 1]),13));
++ x[ 0] = XOR(x[ 0],ROTATE(PLUS(x[ 3],x[ 2]),18));
++ x[ 6] = XOR(x[ 6],ROTATE(PLUS(x[ 5],x[ 4]), 7));
++ x[ 7] = XOR(x[ 7],ROTATE(PLUS(x[ 6],x[ 5]), 9));
++ x[ 4] = XOR(x[ 4],ROTATE(PLUS(x[ 7],x[ 6]),13));
++ x[ 5] = XOR(x[ 5],ROTATE(PLUS(x[ 4],x[ 7]),18));
++ x[11] = XOR(x[11],ROTATE(PLUS(x[10],x[ 9]), 7));
++ x[ 8] = XOR(x[ 8],ROTATE(PLUS(x[11],x[10]), 9));
++ x[ 9] = XOR(x[ 9],ROTATE(PLUS(x[ 8],x[11]),13));
++ x[10] = XOR(x[10],ROTATE(PLUS(x[ 9],x[ 8]),18));
++ x[12] = XOR(x[12],ROTATE(PLUS(x[15],x[14]), 7));
++ x[13] = XOR(x[13],ROTATE(PLUS(x[12],x[15]), 9));
++ x[14] = XOR(x[14],ROTATE(PLUS(x[13],x[12]),13));
++ x[15] = XOR(x[15],ROTATE(PLUS(x[14],x[13]),18));
++ }
++ for (i = 0; i < 16; ++i)
++ x[i] = PLUS(x[i],input[i]);
++ for (i = 0; i < 16; ++i)
++ U32TO8_LITTLE(output + 4 * i,x[i]);
++}
++
++static const char sigma[16] = "expand 32-byte k";
++static const char tau[16] = "expand 16-byte k";
++
++static void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k, u32 kbytes)
++{
++ const char *constants;
++
++ ctx->input[1] = U8TO32_LITTLE(k + 0);
++ ctx->input[2] = U8TO32_LITTLE(k + 4);
++ ctx->input[3] = U8TO32_LITTLE(k + 8);
++ ctx->input[4] = U8TO32_LITTLE(k + 12);
++ if (kbytes == 32) { /* recommended */
++ k += 16;
++ constants = sigma;
++ } else { /* kbytes == 16 */
++ constants = tau;
++ }
++ ctx->input[11] = U8TO32_LITTLE(k + 0);
++ ctx->input[12] = U8TO32_LITTLE(k + 4);
++ ctx->input[13] = U8TO32_LITTLE(k + 8);
++ ctx->input[14] = U8TO32_LITTLE(k + 12);
++ ctx->input[0] = U8TO32_LITTLE(constants + 0);
++ ctx->input[5] = U8TO32_LITTLE(constants + 4);
++ ctx->input[10] = U8TO32_LITTLE(constants + 8);
++ ctx->input[15] = U8TO32_LITTLE(constants + 12);
++}
++
++static void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv)
++{
++ ctx->input[6] = U8TO32_LITTLE(iv + 0);
++ ctx->input[7] = U8TO32_LITTLE(iv + 4);
++ ctx->input[8] = 0;
++ ctx->input[9] = 0;
++}
++
++static void salsa20_encrypt_bytes(struct salsa20_ctx *ctx, u8 *dst,
++ const u8 *src, unsigned int bytes)
++{
++ u8 buf[64];
++
++ if (dst != src)
++ memcpy(dst, src, bytes);
++
++ while (bytes) {
++ salsa20_wordtobyte(buf, ctx->input);
++
++ ctx->input[8] = PLUSONE(ctx->input[8]);
++ if (!ctx->input[8])
++ ctx->input[9] = PLUSONE(ctx->input[9]);
++
++ if (bytes <= 64) {
++ crypto_xor(dst, buf, bytes);
++ return;
++ }
++
++ crypto_xor(dst, buf, 64);
++ bytes -= 64;
++ dst += 64;
++ }
++}
++
++/*
++ * End of code taken from D. J. Bernstein's reference implementation.
++ */
++
++static int setkey(struct crypto_tfm *tfm, const u8 *key,
++ unsigned int keysize)
++{
++ struct salsa20_ctx *ctx = crypto_tfm_ctx(tfm);
++ salsa20_keysetup(ctx, key, keysize);
++ return 0;
++}
++
++static int encrypt(struct blkcipher_desc *desc,
++ struct scatterlist *dst, struct scatterlist *src,
++ unsigned int nbytes)
++{
++ struct blkcipher_walk walk;
++ struct crypto_blkcipher *tfm = desc->tfm;
++ struct salsa20_ctx *ctx = crypto_blkcipher_ctx(tfm);
++ int err;
++
++ blkcipher_walk_init(&walk, dst, src, nbytes);
++ err = blkcipher_walk_virt_block(desc, &walk, 64);
++
++ salsa20_ivsetup(ctx, walk.iv);
++
++ if (likely(walk.nbytes == nbytes))
++ {
++ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
++ walk.src.virt.addr, nbytes);
++ return blkcipher_walk_done(desc, &walk, 0);
++ }
++
++ while (walk.nbytes >= 64) {
++ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
++ walk.src.virt.addr,
++ walk.nbytes - (walk.nbytes % 64));
++ err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64);
++ }
++
++ if (walk.nbytes) {
++ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
++ walk.src.virt.addr, walk.nbytes);
++ err = blkcipher_walk_done(desc, &walk, 0);
++ }
++
++ return err;
++}
++
++static struct crypto_alg alg = {
++ .cra_name = "salsa20",
++ .cra_driver_name = "salsa20-generic",
++ .cra_priority = 100,
++ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
++ .cra_type = &crypto_blkcipher_type,
++ .cra_blocksize = 1,
++ .cra_ctxsize = sizeof(struct salsa20_ctx),
++ .cra_alignmask = 3,
++ .cra_module = THIS_MODULE,
++ .cra_list = LIST_HEAD_INIT(alg.cra_list),
++ .cra_u = {
++ .blkcipher = {
++ .setkey = setkey,
++ .encrypt = encrypt,
++ .decrypt = encrypt,
++ .min_keysize = SALSA20_MIN_KEY_SIZE,
++ .max_keysize = SALSA20_MAX_KEY_SIZE,
++ .ivsize = SALSA20_IV_SIZE,
++ }
++ }
++};
++
++static int __init init(void)
++{
++ return crypto_register_alg(&alg);
++}
++
++static void __exit fini(void)
++{
++ crypto_unregister_alg(&alg);
++}
++
++module_init(init);
++module_exit(fini);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm");
++MODULE_ALIAS("salsa20");
+diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
+index b9bbda0..9aeeb52 100644
+--- a/crypto/scatterwalk.c
++++ b/crypto/scatterwalk.c
+@@ -13,6 +13,8 @@
+ * any later version.
+ *
+ */
++
++#include <crypto/scatterwalk.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/module.h>
+@@ -20,9 +22,6 @@
+ #include <linux/highmem.h>
+ #include <linux/scatterlist.h>
+
+-#include "internal.h"
+-#include "scatterwalk.h"
+-
+ static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
+ {
+ void *src = out ? buf : sgdata;
+@@ -106,6 +105,9 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
+ struct scatter_walk walk;
+ unsigned int offset = 0;
+
++ if (!nbytes)
++ return;
++
+ for (;;) {
+ scatterwalk_start(&walk, sg);
+
+@@ -113,7 +115,7 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
+ break;
+
+ offset += sg->length;
+- sg = sg_next(sg);
++ sg = scatterwalk_sg_next(sg);
+ }
+
+ scatterwalk_advance(&walk, start - offset);
+diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h
+deleted file mode 100644
+index 87ed681..0000000
+--- a/crypto/scatterwalk.h
++++ /dev/null
+@@ -1,80 +0,0 @@
+-/*
+- * Cryptographic API.
+- *
+- * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+- * Copyright (c) 2002 Adam J. Richter <adam@yggdrasil.com>
+- * Copyright (c) 2004 Jean-Luc Cooke <jlcooke@certainkey.com>
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of the GNU General Public License as published by the Free
+- * Software Foundation; either version 2 of the License, or (at your option)
+- * any later version.
+- *
+- */
+-
+-#ifndef _CRYPTO_SCATTERWALK_H
+-#define _CRYPTO_SCATTERWALK_H
+-
+-#include <linux/mm.h>
+-#include <linux/scatterlist.h>
+-
+-#include "internal.h"
+-
+-static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
+-{
+- return (++sg)->length ? sg : (void *) sg_page(sg);
+-}
+-
+-static inline unsigned long scatterwalk_samebuf(struct scatter_walk *walk_in,
+- struct scatter_walk *walk_out)
+-{
+- return !(((sg_page(walk_in->sg) - sg_page(walk_out->sg)) << PAGE_SHIFT) +
+- (int)(walk_in->offset - walk_out->offset));
+-}
+-
+-static inline unsigned int scatterwalk_pagelen(struct scatter_walk *walk)
+-{
+- unsigned int len = walk->sg->offset + walk->sg->length - walk->offset;
+- unsigned int len_this_page = offset_in_page(~walk->offset) + 1;
+- return len_this_page > len ? len : len_this_page;
+-}
+-
+-static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
+- unsigned int nbytes)
+-{
+- unsigned int len_this_page = scatterwalk_pagelen(walk);
+- return nbytes > len_this_page ? len_this_page : nbytes;
+-}
+-
+-static inline void scatterwalk_advance(struct scatter_walk *walk,
+- unsigned int nbytes)
+-{
+- walk->offset += nbytes;
+-}
+-
+-static inline unsigned int scatterwalk_aligned(struct scatter_walk *walk,
+- unsigned int alignmask)
+-{
+- return !(walk->offset & alignmask);
+-}
+-
+-static inline struct page *scatterwalk_page(struct scatter_walk *walk)
+-{
+- return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
+-}
+-
+-static inline void scatterwalk_unmap(void *vaddr, int out)
+-{
+- crypto_kunmap(vaddr, out);
+-}
+-
+-void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
+-void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
+- size_t nbytes, int out);
+-void *scatterwalk_map(struct scatter_walk *walk, int out);
+-void scatterwalk_done(struct scatter_walk *walk, int out, int more);
+-
+-void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
+- unsigned int start, unsigned int nbytes, int out);
+-
+-#endif /* _CRYPTO_SCATTERWALK_H */
+diff --git a/crypto/seqiv.c b/crypto/seqiv.c
+new file mode 100644
+index 0000000..b903aab
+--- /dev/null
++++ b/crypto/seqiv.c
+@@ -0,0 +1,345 @@
++/*
++ * seqiv: Sequence Number IV Generator
++ *
++ * This generator generates an IV based on a sequence number by xoring it
++ * with a salt. This algorithm is mainly useful for CTR and similar modes.
++ *
++ * Copyright (c) 2007 Herbert Xu <herbert@gondor.apana.org.au>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ */
++
++#include <crypto/internal/aead.h>
++#include <crypto/internal/skcipher.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/random.h>
++#include <linux/spinlock.h>
++#include <linux/string.h>
++
++struct seqiv_ctx {
++ spinlock_t lock;
++ u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
++};
++
++static void seqiv_complete2(struct skcipher_givcrypt_request *req, int err)
++{
++ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
++ struct crypto_ablkcipher *geniv;
++
++ if (err == -EINPROGRESS)
++ return;
++
++ if (err)
++ goto out;
++
++ geniv = skcipher_givcrypt_reqtfm(req);
++ memcpy(req->creq.info, subreq->info, crypto_ablkcipher_ivsize(geniv));
++
++out:
++ kfree(subreq->info);
++}
++
++static void seqiv_complete(struct crypto_async_request *base, int err)
++{
++ struct skcipher_givcrypt_request *req = base->data;
++
++ seqiv_complete2(req, err);
++ skcipher_givcrypt_complete(req, err);
++}
++
++static void seqiv_aead_complete2(struct aead_givcrypt_request *req, int err)
++{
++ struct aead_request *subreq = aead_givcrypt_reqctx(req);
++ struct crypto_aead *geniv;
++
++ if (err == -EINPROGRESS)
++ return;
++
++ if (err)
++ goto out;
++
++ geniv = aead_givcrypt_reqtfm(req);
++ memcpy(req->areq.iv, subreq->iv, crypto_aead_ivsize(geniv));
++
++out:
++ kfree(subreq->iv);
++}
++
++static void seqiv_aead_complete(struct crypto_async_request *base, int err)
++{
++ struct aead_givcrypt_request *req = base->data;
++
++ seqiv_aead_complete2(req, err);
++ aead_givcrypt_complete(req, err);
++}
++
++static void seqiv_geniv(struct seqiv_ctx *ctx, u8 *info, u64 seq,
++ unsigned int ivsize)
++{
++ unsigned int len = ivsize;
++
++ if (ivsize > sizeof(u64)) {
++ memset(info, 0, ivsize - sizeof(u64));
++ len = sizeof(u64);
++ }
++ seq = cpu_to_be64(seq);
++ memcpy(info + ivsize - len, &seq, len);
++ crypto_xor(info, ctx->salt, ivsize);
++}
++
++static int seqiv_givencrypt(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++ struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
++ crypto_completion_t complete;
++ void *data;
++ u8 *info;
++ unsigned int ivsize;
++ int err;
++
++ ablkcipher_request_set_tfm(subreq, skcipher_geniv_cipher(geniv));
++
++ complete = req->creq.base.complete;
++ data = req->creq.base.data;
++ info = req->creq.info;
++
++ ivsize = crypto_ablkcipher_ivsize(geniv);
++
++ if (unlikely(!IS_ALIGNED((unsigned long)info,
++ crypto_ablkcipher_alignmask(geniv) + 1))) {
++ info = kmalloc(ivsize, req->creq.base.flags &
++ CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
++ GFP_ATOMIC);
++ if (!info)
++ return -ENOMEM;
++
++ complete = seqiv_complete;
++ data = req;
++ }
++
++ ablkcipher_request_set_callback(subreq, req->creq.base.flags, complete,
++ data);
++ ablkcipher_request_set_crypt(subreq, req->creq.src, req->creq.dst,
++ req->creq.nbytes, info);
++
++ seqiv_geniv(ctx, info, req->seq, ivsize);
++ memcpy(req->giv, info, ivsize);
++
++ err = crypto_ablkcipher_encrypt(subreq);
++ if (unlikely(info != req->creq.info))
++ seqiv_complete2(req, err);
++ return err;
++}
++
++static int seqiv_aead_givencrypt(struct aead_givcrypt_request *req)
++{
++ struct crypto_aead *geniv = aead_givcrypt_reqtfm(req);
++ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
++ struct aead_request *areq = &req->areq;
++ struct aead_request *subreq = aead_givcrypt_reqctx(req);
++ crypto_completion_t complete;
++ void *data;
++ u8 *info;
++ unsigned int ivsize;
++ int err;
++
++ aead_request_set_tfm(subreq, aead_geniv_base(geniv));
++
++ complete = areq->base.complete;
++ data = areq->base.data;
++ info = areq->iv;
++
++ ivsize = crypto_aead_ivsize(geniv);
++
++ if (unlikely(!IS_ALIGNED((unsigned long)info,
++ crypto_aead_alignmask(geniv) + 1))) {
++ info = kmalloc(ivsize, areq->base.flags &
++ CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
++ GFP_ATOMIC);
++ if (!info)
++ return -ENOMEM;
++
++ complete = seqiv_aead_complete;
++ data = req;
++ }
++
++ aead_request_set_callback(subreq, areq->base.flags, complete, data);
++ aead_request_set_crypt(subreq, areq->src, areq->dst, areq->cryptlen,
++ info);
++ aead_request_set_assoc(subreq, areq->assoc, areq->assoclen);
++
++ seqiv_geniv(ctx, info, req->seq, ivsize);
++ memcpy(req->giv, info, ivsize);
++
++ err = crypto_aead_encrypt(subreq);
++ if (unlikely(info != areq->iv))
++ seqiv_aead_complete2(req, err);
++ return err;
++}
++
++static int seqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
++{
++ struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
++ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++
++ spin_lock_bh(&ctx->lock);
++ if (crypto_ablkcipher_crt(geniv)->givencrypt != seqiv_givencrypt_first)
++ goto unlock;
++
++ crypto_ablkcipher_crt(geniv)->givencrypt = seqiv_givencrypt;
++ get_random_bytes(ctx->salt, crypto_ablkcipher_ivsize(geniv));
++
++unlock:
++ spin_unlock_bh(&ctx->lock);
++
++ return seqiv_givencrypt(req);
++}
++
++static int seqiv_aead_givencrypt_first(struct aead_givcrypt_request *req)
++{
++ struct crypto_aead *geniv = aead_givcrypt_reqtfm(req);
++ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
++
++ spin_lock_bh(&ctx->lock);
++ if (crypto_aead_crt(geniv)->givencrypt != seqiv_aead_givencrypt_first)
++ goto unlock;
++
++ crypto_aead_crt(geniv)->givencrypt = seqiv_aead_givencrypt;
++ get_random_bytes(ctx->salt, crypto_aead_ivsize(geniv));
++
++unlock:
++ spin_unlock_bh(&ctx->lock);
++
++ return seqiv_aead_givencrypt(req);
++}
++
++static int seqiv_init(struct crypto_tfm *tfm)
++{
++ struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
++ struct seqiv_ctx *ctx = crypto_ablkcipher_ctx(geniv);
++
++ spin_lock_init(&ctx->lock);
++
++ tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request);
++
++ return skcipher_geniv_init(tfm);
++}
++
++static int seqiv_aead_init(struct crypto_tfm *tfm)
++{
++ struct crypto_aead *geniv = __crypto_aead_cast(tfm);
++ struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
++
++ spin_lock_init(&ctx->lock);
++
++ tfm->crt_aead.reqsize = sizeof(struct aead_request);
++
++ return aead_geniv_init(tfm);
++}
++
++static struct crypto_template seqiv_tmpl;
++
++static struct crypto_instance *seqiv_ablkcipher_alloc(struct rtattr **tb)
++{
++ struct crypto_instance *inst;
++
++ inst = skcipher_geniv_alloc(&seqiv_tmpl, tb, 0, 0);
++
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_ablkcipher.givencrypt = seqiv_givencrypt_first;
++
++ inst->alg.cra_init = seqiv_init;
++ inst->alg.cra_exit = skcipher_geniv_exit;
++
++ inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
++
++out:
++ return inst;
++}
++
++static struct crypto_instance *seqiv_aead_alloc(struct rtattr **tb)
++{
++ struct crypto_instance *inst;
++
++ inst = aead_geniv_alloc(&seqiv_tmpl, tb, 0, 0);
++
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_aead.givencrypt = seqiv_aead_givencrypt_first;
++
++ inst->alg.cra_init = seqiv_aead_init;
++ inst->alg.cra_exit = aead_geniv_exit;
++
++ inst->alg.cra_ctxsize = inst->alg.cra_aead.ivsize;
++
++out:
++ return inst;
++}
++
++static struct crypto_instance *seqiv_alloc(struct rtattr **tb)
++{
++ struct crypto_attr_type *algt;
++ struct crypto_instance *inst;
++ int err;
++
++ algt = crypto_get_attr_type(tb);
++ err = PTR_ERR(algt);
++ if (IS_ERR(algt))
++ return ERR_PTR(err);
++
++ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
++ inst = seqiv_ablkcipher_alloc(tb);
++ else
++ inst = seqiv_aead_alloc(tb);
++
++ if (IS_ERR(inst))
++ goto out;
++
++ inst->alg.cra_alignmask |= __alignof__(u32) - 1;
++ inst->alg.cra_ctxsize += sizeof(struct seqiv_ctx);
++
++out:
++ return inst;
++}
++
++static void seqiv_free(struct crypto_instance *inst)
++{
++ if ((inst->alg.cra_flags ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
++ skcipher_geniv_free(inst);
++ else
++ aead_geniv_free(inst);
++}
++
++static struct crypto_template seqiv_tmpl = {
++ .name = "seqiv",
++ .alloc = seqiv_alloc,
++ .free = seqiv_free,
++ .module = THIS_MODULE,
++};
++
++static int __init seqiv_module_init(void)
++{
++ return crypto_register_template(&seqiv_tmpl);
++}
++
++static void __exit seqiv_module_exit(void)
++{
++ crypto_unregister_template(&seqiv_tmpl);
++}
++
++module_init(seqiv_module_init);
++module_exit(seqiv_module_exit);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Sequence Number IV Generator");
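[Editorial aside, not part of the patch: the seqiv template registered above generates per-request IVs from an incrementing sequence number. A minimal conceptual sketch of that idea, in Python rather than kernel C and with all names hypothetical, might look like this; the real seqiv code additionally salts the counter with random bytes on the first request and lives behind the crypto geniv infrastructure.]

```python
import os

class SeqIvGenerator:
    """Conceptual sketch only (not the kernel implementation): derive each
    IV by XOR-ing an incrementing sequence number into a per-instance
    random salt, mirroring the idea behind the 'seqiv' template."""

    def __init__(self, ivsize=16):
        self.ivsize = ivsize
        # chosen once per instance, analogous to seqiv's first-request salting
        self.salt = os.urandom(ivsize)
        self.seq = 0

    def next_iv(self):
        # encode the sequence number and fold it into the salt
        ctr = self.seq.to_bytes(self.ivsize, "big")
        self.seq += 1
        return bytes(a ^ b for a, b in zip(self.salt, ctr))

gen = SeqIvGenerator()
iv0, iv1 = gen.next_iv(), gen.next_iv()
```

Because the counter never repeats, successive IVs are guaranteed distinct for the lifetime of the instance, which is the property the template exists to provide.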
+diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
+index fd3918b..3cc93fd 100644
+--- a/crypto/sha256_generic.c
++++ b/crypto/sha256_generic.c
+@@ -9,6 +9,7 @@
+ * Copyright (c) Jean-Luc Cooke <jlcooke at certainkey.com>
+ * Copyright (c) Andrew McDonald <andrew at mcdonald.org.uk>
+ * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
++ * SHA224 Support Copyright 2007 Intel Corporation <jonathan.lynch at intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+@@ -218,6 +219,22 @@ static void sha256_transform(u32 *state, const u8 *input)
+ memset(W, 0, 64 * sizeof(u32));
+ }
+
++
++static void sha224_init(struct crypto_tfm *tfm)
++{
++ struct sha256_ctx *sctx = crypto_tfm_ctx(tfm);
++ sctx->state[0] = SHA224_H0;
++ sctx->state[1] = SHA224_H1;
++ sctx->state[2] = SHA224_H2;
++ sctx->state[3] = SHA224_H3;
++ sctx->state[4] = SHA224_H4;
++ sctx->state[5] = SHA224_H5;
++ sctx->state[6] = SHA224_H6;
++ sctx->state[7] = SHA224_H7;
++ sctx->count[0] = 0;
++ sctx->count[1] = 0;
++}
++
+ static void sha256_init(struct crypto_tfm *tfm)
+ {
+ struct sha256_ctx *sctx = crypto_tfm_ctx(tfm);
+@@ -294,8 +311,17 @@ static void sha256_final(struct crypto_tfm *tfm, u8 *out)
+ memset(sctx, 0, sizeof(*sctx));
+ }
+
++static void sha224_final(struct crypto_tfm *tfm, u8 *hash)
++{
++ u8 D[SHA256_DIGEST_SIZE];
++
++ sha256_final(tfm, D);
++
++ memcpy(hash, D, SHA224_DIGEST_SIZE);
++ memset(D, 0, SHA256_DIGEST_SIZE);
++}
+
+-static struct crypto_alg alg = {
++static struct crypto_alg sha256 = {
+ .cra_name = "sha256",
+ .cra_driver_name= "sha256-generic",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+@@ -303,28 +329,58 @@ static struct crypto_alg alg = {
+ .cra_ctxsize = sizeof(struct sha256_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_alignmask = 3,
+- .cra_list = LIST_HEAD_INIT(alg.cra_list),
++ .cra_list = LIST_HEAD_INIT(sha256.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = SHA256_DIGEST_SIZE,
+- .dia_init = sha256_init,
+- .dia_update = sha256_update,
+- .dia_final = sha256_final } }
++ .dia_init = sha256_init,
++ .dia_update = sha256_update,
++ .dia_final = sha256_final } }
++};
++
++static struct crypto_alg sha224 = {
++ .cra_name = "sha224",
++ .cra_driver_name = "sha224-generic",
++ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
++ .cra_blocksize = SHA224_BLOCK_SIZE,
++ .cra_ctxsize = sizeof(struct sha256_ctx),
++ .cra_module = THIS_MODULE,
++ .cra_alignmask = 3,
++ .cra_list = LIST_HEAD_INIT(sha224.cra_list),
++ .cra_u = { .digest = {
++ .dia_digestsize = SHA224_DIGEST_SIZE,
++ .dia_init = sha224_init,
++ .dia_update = sha256_update,
++ .dia_final = sha224_final } }
+ };
+
+ static int __init init(void)
+ {
+- return crypto_register_alg(&alg);
++ int ret = 0;
++
++ ret = crypto_register_alg(&sha224);
++
++ if (ret < 0)
++ return ret;
++
++ ret = crypto_register_alg(&sha256);
++
++ if (ret < 0)
++ crypto_unregister_alg(&sha224);
++
++ return ret;
+ }
+
+ static void __exit fini(void)
+ {
+- crypto_unregister_alg(&alg);
++ crypto_unregister_alg(&sha224);
++ crypto_unregister_alg(&sha256);
+ }
+
+ module_init(init);
+ module_exit(fini);
+
+ MODULE_LICENSE("GPL");
+-MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm");
++MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm");
+
++MODULE_ALIAS("sha224");
+ MODULE_ALIAS("sha256");
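[Editorial aside, not part of the patch: the sha256_generic.c change above adds SHA-224 purely as a wrapper, since SHA-224 is SHA-256 with different initial hash values (SHA224_H0..H7 in sha224_init()) and the output truncated to 28 bytes (sha224_final()). The relationship can be checked against the FIPS 180-2 "abc" vector with Python's stdlib:]

```python
import hashlib

# SHA-224 "abc" digest per FIPS PUB 180-2; this is the same value that
# appears as the first entry of sha224_tv_template later in this patch.
digest = hashlib.sha224(b"abc").hexdigest()
```

The resulting digest bytes match the `.digest = { 0x23, 0x09, 0x7D, ... }` test vector added to tcrypt.h below.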
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 24141fb..1ab8c01 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -6,12 +6,16 @@
+ *
+ * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
+ * Copyright (c) 2002 Jean-Francois Dive <jef at linuxbe.org>
++ * Copyright (c) 2007 Nokia Siemens Networks
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
++ * 2007-11-13 Added GCM tests
++ * 2007-11-13 Added AEAD support
++ * 2007-11-06 Added SHA-224 and SHA-224-HMAC tests
+ * 2006-12-07 Added SHA384 HMAC and SHA512 HMAC tests
+ * 2004-08-09 Added cipher speed tests (Reyk Floeter <reyk at vantronix.net>)
+ * 2003-09-14 Rewritten by Kartikey Mahendra Bhatt
+@@ -71,22 +75,23 @@ static unsigned int sec;
+
+ static int mode;
+ static char *xbuf;
++static char *axbuf;
+ static char *tvmem;
+
+ static char *check[] = {
+- "des", "md5", "des3_ede", "rot13", "sha1", "sha256", "blowfish",
+- "twofish", "serpent", "sha384", "sha512", "md4", "aes", "cast6",
++ "des", "md5", "des3_ede", "rot13", "sha1", "sha224", "sha256",
++ "blowfish", "twofish", "serpent", "sha384", "sha512", "md4", "aes",
++ "cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
+ "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
+ "khazad", "wp512", "wp384", "wp256", "tnepres", "xeta", "fcrypt",
+- "camellia", "seed", NULL
++ "camellia", "seed", "salsa20", "lzo", NULL
+ };
+
+ static void hexdump(unsigned char *buf, unsigned int len)
+ {
+- while (len--)
+- printk("%02x", *buf++);
+-
+- printk("\n");
++ print_hex_dump(KERN_CONT, "", DUMP_PREFIX_OFFSET,
++ 16, 1,
++ buf, len, false);
+ }
+
+ static void tcrypt_complete(struct crypto_async_request *req, int err)
+@@ -215,6 +220,238 @@ out:
+ crypto_free_hash(tfm);
+ }
+
++static void test_aead(char *algo, int enc, struct aead_testvec *template,
++ unsigned int tcount)
++{
++ unsigned int ret, i, j, k, temp;
++ unsigned int tsize;
++ char *q;
++ struct crypto_aead *tfm;
++ char *key;
++ struct aead_testvec *aead_tv;
++ struct aead_request *req;
++ struct scatterlist sg[8];
++ struct scatterlist asg[8];
++ const char *e;
++ struct tcrypt_result result;
++ unsigned int authsize;
++
++ if (enc == ENCRYPT)
++ e = "encryption";
++ else
++ e = "decryption";
++
++ printk(KERN_INFO "\ntesting %s %s\n", algo, e);
++
++ tsize = sizeof(struct aead_testvec);
++ tsize *= tcount;
++
++ if (tsize > TVMEMSIZE) {
++ printk(KERN_INFO "template (%u) too big for tvmem (%u)\n",
++ tsize, TVMEMSIZE);
++ return;
++ }
++
++ memcpy(tvmem, template, tsize);
++ aead_tv = (void *)tvmem;
++
++ init_completion(&result.completion);
++
++ tfm = crypto_alloc_aead(algo, 0, 0);
++
++ if (IS_ERR(tfm)) {
++ printk(KERN_INFO "failed to load transform for %s: %ld\n",
++ algo, PTR_ERR(tfm));
++ return;
++ }
++
++ req = aead_request_alloc(tfm, GFP_KERNEL);
++ if (!req) {
++ printk(KERN_INFO "failed to allocate request for %s\n", algo);
++ goto out;
++ }
++
++ aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
++ tcrypt_complete, &result);
++
++ for (i = 0, j = 0; i < tcount; i++) {
++ if (!aead_tv[i].np) {
++ printk(KERN_INFO "test %u (%d bit key):\n",
++ ++j, aead_tv[i].klen * 8);
++
++ crypto_aead_clear_flags(tfm, ~0);
++ if (aead_tv[i].wk)
++ crypto_aead_set_flags(
++ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
++ key = aead_tv[i].key;
++
++ ret = crypto_aead_setkey(tfm, key,
++ aead_tv[i].klen);
++ if (ret) {
++ printk(KERN_INFO "setkey() failed flags=%x\n",
++ crypto_aead_get_flags(tfm));
++
++ if (!aead_tv[i].fail)
++ goto out;
++ }
++
++ authsize = abs(aead_tv[i].rlen - aead_tv[i].ilen);
++ ret = crypto_aead_setauthsize(tfm, authsize);
++ if (ret) {
++ printk(KERN_INFO
++ "failed to set authsize = %u\n",
++ authsize);
++ goto out;
++ }
++
++ sg_init_one(&sg[0], aead_tv[i].input,
++ aead_tv[i].ilen + (enc ? authsize : 0));
++
++ sg_init_one(&asg[0], aead_tv[i].assoc,
++ aead_tv[i].alen);
++
++ aead_request_set_crypt(req, sg, sg,
++ aead_tv[i].ilen,
++ aead_tv[i].iv);
++
++ aead_request_set_assoc(req, asg, aead_tv[i].alen);
++
++ ret = enc ?
++ crypto_aead_encrypt(req) :
++ crypto_aead_decrypt(req);
++
++ switch (ret) {
++ case 0:
++ break;
++ case -EINPROGRESS:
++ case -EBUSY:
++ ret = wait_for_completion_interruptible(
++ &result.completion);
++ if (!ret && !(ret = result.err)) {
++ INIT_COMPLETION(result.completion);
++ break;
++ }
++ /* fall through */
++ default:
++ printk(KERN_INFO "%s () failed err=%d\n",
++ e, -ret);
++ goto out;
++ }
++
++ q = kmap(sg_page(&sg[0])) + sg[0].offset;
++ hexdump(q, aead_tv[i].rlen);
++
++ printk(KERN_INFO "enc/dec: %s\n",
++ memcmp(q, aead_tv[i].result,
++ aead_tv[i].rlen) ? "fail" : "pass");
++ }
++ }
++
++ printk(KERN_INFO "\ntesting %s %s across pages (chunking)\n", algo, e);
++ memset(xbuf, 0, XBUFSIZE);
++ memset(axbuf, 0, XBUFSIZE);
++
++ for (i = 0, j = 0; i < tcount; i++) {
++ if (aead_tv[i].np) {
++ printk(KERN_INFO "test %u (%d bit key):\n",
++ ++j, aead_tv[i].klen * 8);
++
++ crypto_aead_clear_flags(tfm, ~0);
++ if (aead_tv[i].wk)
++ crypto_aead_set_flags(
++ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
++ key = aead_tv[i].key;
++
++ ret = crypto_aead_setkey(tfm, key, aead_tv[i].klen);
++ if (ret) {
++ printk(KERN_INFO "setkey() failed flags=%x\n",
++ crypto_aead_get_flags(tfm));
++
++ if (!aead_tv[i].fail)
++ goto out;
++ }
++
++ sg_init_table(sg, aead_tv[i].np);
++ for (k = 0, temp = 0; k < aead_tv[i].np; k++) {
++ memcpy(&xbuf[IDX[k]],
++ aead_tv[i].input + temp,
++ aead_tv[i].tap[k]);
++ temp += aead_tv[i].tap[k];
++ sg_set_buf(&sg[k], &xbuf[IDX[k]],
++ aead_tv[i].tap[k]);
++ }
++
++ authsize = abs(aead_tv[i].rlen - aead_tv[i].ilen);
++ ret = crypto_aead_setauthsize(tfm, authsize);
++ if (ret) {
++ printk(KERN_INFO
++ "failed to set authsize = %u\n",
++ authsize);
++ goto out;
++ }
++
++ if (enc)
++ sg[k - 1].length += authsize;
++
++ sg_init_table(asg, aead_tv[i].anp);
++ for (k = 0, temp = 0; k < aead_tv[i].anp; k++) {
++ memcpy(&axbuf[IDX[k]],
++ aead_tv[i].assoc + temp,
++ aead_tv[i].atap[k]);
++ temp += aead_tv[i].atap[k];
++ sg_set_buf(&asg[k], &axbuf[IDX[k]],
++ aead_tv[i].atap[k]);
++ }
++
++ aead_request_set_crypt(req, sg, sg,
++ aead_tv[i].ilen,
++ aead_tv[i].iv);
++
++ aead_request_set_assoc(req, asg, aead_tv[i].alen);
++
++ ret = enc ?
++ crypto_aead_encrypt(req) :
++ crypto_aead_decrypt(req);
++
++ switch (ret) {
++ case 0:
++ break;
++ case -EINPROGRESS:
++ case -EBUSY:
++ ret = wait_for_completion_interruptible(
++ &result.completion);
++ if (!ret && !(ret = result.err)) {
++ INIT_COMPLETION(result.completion);
++ break;
++ }
++ /* fall through */
++ default:
++ printk(KERN_INFO "%s () failed err=%d\n",
++ e, -ret);
++ goto out;
++ }
++
++ for (k = 0, temp = 0; k < aead_tv[i].np; k++) {
++ printk(KERN_INFO "page %u\n", k);
++ q = kmap(sg_page(&sg[k])) + sg[k].offset;
++ hexdump(q, aead_tv[i].tap[k]);
++ printk(KERN_INFO "%s\n",
++ memcmp(q, aead_tv[i].result + temp,
++ aead_tv[i].tap[k] -
++ (k < aead_tv[i].np - 1 || enc ?
++ 0 : authsize)) ?
++ "fail" : "pass");
++
++ temp += aead_tv[i].tap[k];
++ }
++ }
++ }
++
++out:
++ crypto_free_aead(tfm);
++ aead_request_free(req);
++}
++
+ static void test_cipher(char *algo, int enc,
+ struct cipher_testvec *template, unsigned int tcount)
+ {
+@@ -237,15 +474,11 @@ static void test_cipher(char *algo, int enc,
+ printk("\ntesting %s %s\n", algo, e);
+
+ tsize = sizeof (struct cipher_testvec);
+- tsize *= tcount;
+-
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+-
+- memcpy(tvmem, template, tsize);
+ cipher_tv = (void *)tvmem;
+
+ init_completion(&result.completion);
+@@ -269,33 +502,34 @@ static void test_cipher(char *algo, int enc,
+
+ j = 0;
+ for (i = 0; i < tcount; i++) {
+- if (!(cipher_tv[i].np)) {
++ memcpy(cipher_tv, &template[i], tsize);
++ if (!(cipher_tv->np)) {
+ j++;
+ printk("test %u (%d bit key):\n",
+- j, cipher_tv[i].klen * 8);
++ j, cipher_tv->klen * 8);
+
+ crypto_ablkcipher_clear_flags(tfm, ~0);
+- if (cipher_tv[i].wk)
++ if (cipher_tv->wk)
+ crypto_ablkcipher_set_flags(
+ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+- key = cipher_tv[i].key;
++ key = cipher_tv->key;
+
+ ret = crypto_ablkcipher_setkey(tfm, key,
+- cipher_tv[i].klen);
++ cipher_tv->klen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n",
+ crypto_ablkcipher_get_flags(tfm));
+
+- if (!cipher_tv[i].fail)
++ if (!cipher_tv->fail)
+ goto out;
+ }
+
+- sg_init_one(&sg[0], cipher_tv[i].input,
+- cipher_tv[i].ilen);
++ sg_init_one(&sg[0], cipher_tv->input,
++ cipher_tv->ilen);
+
+ ablkcipher_request_set_crypt(req, sg, sg,
+- cipher_tv[i].ilen,
+- cipher_tv[i].iv);
++ cipher_tv->ilen,
++ cipher_tv->iv);
+
+ ret = enc ?
+ crypto_ablkcipher_encrypt(req) :
+@@ -319,11 +553,11 @@ static void test_cipher(char *algo, int enc,
+ }
+
+ q = kmap(sg_page(&sg[0])) + sg[0].offset;
+- hexdump(q, cipher_tv[i].rlen);
++ hexdump(q, cipher_tv->rlen);
+
+ printk("%s\n",
+- memcmp(q, cipher_tv[i].result,
+- cipher_tv[i].rlen) ? "fail" : "pass");
++ memcmp(q, cipher_tv->result,
++ cipher_tv->rlen) ? "fail" : "pass");
+ }
+ }
+
+@@ -332,41 +566,42 @@ static void test_cipher(char *algo, int enc,
+
+ j = 0;
+ for (i = 0; i < tcount; i++) {
+- if (cipher_tv[i].np) {
++ memcpy(cipher_tv, &template[i], tsize);
++ if (cipher_tv->np) {
+ j++;
+ printk("test %u (%d bit key):\n",
+- j, cipher_tv[i].klen * 8);
++ j, cipher_tv->klen * 8);
+
+ crypto_ablkcipher_clear_flags(tfm, ~0);
+- if (cipher_tv[i].wk)
++ if (cipher_tv->wk)
+ crypto_ablkcipher_set_flags(
+ tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+- key = cipher_tv[i].key;
++ key = cipher_tv->key;
+
+ ret = crypto_ablkcipher_setkey(tfm, key,
+- cipher_tv[i].klen);
++ cipher_tv->klen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n",
+ crypto_ablkcipher_get_flags(tfm));
+
+- if (!cipher_tv[i].fail)
++ if (!cipher_tv->fail)
+ goto out;
+ }
+
+ temp = 0;
+- sg_init_table(sg, cipher_tv[i].np);
+- for (k = 0; k < cipher_tv[i].np; k++) {
++ sg_init_table(sg, cipher_tv->np);
++ for (k = 0; k < cipher_tv->np; k++) {
+ memcpy(&xbuf[IDX[k]],
+- cipher_tv[i].input + temp,
+- cipher_tv[i].tap[k]);
+- temp += cipher_tv[i].tap[k];
++ cipher_tv->input + temp,
++ cipher_tv->tap[k]);
++ temp += cipher_tv->tap[k];
+ sg_set_buf(&sg[k], &xbuf[IDX[k]],
+- cipher_tv[i].tap[k]);
++ cipher_tv->tap[k]);
+ }
+
+ ablkcipher_request_set_crypt(req, sg, sg,
+- cipher_tv[i].ilen,
+- cipher_tv[i].iv);
++ cipher_tv->ilen,
++ cipher_tv->iv);
+
+ ret = enc ?
+ crypto_ablkcipher_encrypt(req) :
+@@ -390,15 +625,15 @@ static void test_cipher(char *algo, int enc,
+ }
+
+ temp = 0;
+- for (k = 0; k < cipher_tv[i].np; k++) {
++ for (k = 0; k < cipher_tv->np; k++) {
+ printk("page %u\n", k);
+ q = kmap(sg_page(&sg[k])) + sg[k].offset;
+- hexdump(q, cipher_tv[i].tap[k]);
++ hexdump(q, cipher_tv->tap[k]);
+ printk("%s\n",
+- memcmp(q, cipher_tv[i].result + temp,
+- cipher_tv[i].tap[k]) ? "fail" :
++ memcmp(q, cipher_tv->result + temp,
++ cipher_tv->tap[k]) ? "fail" :
+ "pass");
+- temp += cipher_tv[i].tap[k];
++ temp += cipher_tv->tap[k];
+ }
+ }
+ }
+@@ -800,7 +1035,8 @@ out:
+ crypto_free_hash(tfm);
+ }
+
+-static void test_deflate(void)
++static void test_comp(char *algo, struct comp_testvec *ctemplate,
++ struct comp_testvec *dtemplate, int ctcount, int dtcount)
+ {
+ unsigned int i;
+ char result[COMP_BUF_SIZE];
+@@ -808,25 +1044,26 @@ static void test_deflate(void)
+ struct comp_testvec *tv;
+ unsigned int tsize;
+
+- printk("\ntesting deflate compression\n");
++ printk("\ntesting %s compression\n", algo);
+
+- tsize = sizeof (deflate_comp_tv_template);
++ tsize = sizeof(struct comp_testvec);
++ tsize *= ctcount;
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+- memcpy(tvmem, deflate_comp_tv_template, tsize);
++ memcpy(tvmem, ctemplate, tsize);
+ tv = (void *)tvmem;
+
+- tfm = crypto_alloc_comp("deflate", 0, CRYPTO_ALG_ASYNC);
++ tfm = crypto_alloc_comp(algo, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(tfm)) {
+- printk("failed to load transform for deflate\n");
++ printk("failed to load transform for %s\n", algo);
+ return;
+ }
+
+- for (i = 0; i < DEFLATE_COMP_TEST_VECTORS; i++) {
++ for (i = 0; i < ctcount; i++) {
+ int ilen, ret, dlen = COMP_BUF_SIZE;
+
+ printk("test %u:\n", i + 1);
+@@ -845,19 +1082,20 @@ static void test_deflate(void)
+ ilen, dlen);
+ }
+
+- printk("\ntesting deflate decompression\n");
++ printk("\ntesting %s decompression\n", algo);
+
+- tsize = sizeof (deflate_decomp_tv_template);
++ tsize = sizeof(struct comp_testvec);
++ tsize *= dtcount;
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+
+- memcpy(tvmem, deflate_decomp_tv_template, tsize);
++ memcpy(tvmem, dtemplate, tsize);
+ tv = (void *)tvmem;
+
+- for (i = 0; i < DEFLATE_DECOMP_TEST_VECTORS; i++) {
++ for (i = 0; i < dtcount; i++) {
+ int ilen, ret, dlen = COMP_BUF_SIZE;
+
+ printk("test %u:\n", i + 1);
+@@ -918,6 +1156,8 @@ static void do_test(void)
+
+ test_hash("md4", md4_tv_template, MD4_TEST_VECTORS);
+
++ test_hash("sha224", sha224_tv_template, SHA224_TEST_VECTORS);
++
+ test_hash("sha256", sha256_tv_template, SHA256_TEST_VECTORS);
+
+ //BLOWFISH
+@@ -969,6 +1209,18 @@ static void do_test(void)
+ AES_XTS_ENC_TEST_VECTORS);
+ test_cipher("xts(aes)", DECRYPT, aes_xts_dec_tv_template,
+ AES_XTS_DEC_TEST_VECTORS);
++ test_cipher("rfc3686(ctr(aes))", ENCRYPT, aes_ctr_enc_tv_template,
++ AES_CTR_ENC_TEST_VECTORS);
++ test_cipher("rfc3686(ctr(aes))", DECRYPT, aes_ctr_dec_tv_template,
++ AES_CTR_DEC_TEST_VECTORS);
++ test_aead("gcm(aes)", ENCRYPT, aes_gcm_enc_tv_template,
++ AES_GCM_ENC_TEST_VECTORS);
++ test_aead("gcm(aes)", DECRYPT, aes_gcm_dec_tv_template,
++ AES_GCM_DEC_TEST_VECTORS);
++ test_aead("ccm(aes)", ENCRYPT, aes_ccm_enc_tv_template,
++ AES_CCM_ENC_TEST_VECTORS);
++ test_aead("ccm(aes)", DECRYPT, aes_ccm_dec_tv_template,
++ AES_CCM_DEC_TEST_VECTORS);
+
+ //CAST5
+ test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
+@@ -1057,12 +1309,18 @@ static void do_test(void)
+ test_hash("tgr192", tgr192_tv_template, TGR192_TEST_VECTORS);
+ test_hash("tgr160", tgr160_tv_template, TGR160_TEST_VECTORS);
+ test_hash("tgr128", tgr128_tv_template, TGR128_TEST_VECTORS);
+- test_deflate();
++ test_comp("deflate", deflate_comp_tv_template,
++ deflate_decomp_tv_template, DEFLATE_COMP_TEST_VECTORS,
++ DEFLATE_DECOMP_TEST_VECTORS);
++ test_comp("lzo", lzo_comp_tv_template, lzo_decomp_tv_template,
++ LZO_COMP_TEST_VECTORS, LZO_DECOMP_TEST_VECTORS);
+ test_hash("crc32c", crc32c_tv_template, CRC32C_TEST_VECTORS);
+ test_hash("hmac(md5)", hmac_md5_tv_template,
+ HMAC_MD5_TEST_VECTORS);
+ test_hash("hmac(sha1)", hmac_sha1_tv_template,
+ HMAC_SHA1_TEST_VECTORS);
++ test_hash("hmac(sha224)", hmac_sha224_tv_template,
++ HMAC_SHA224_TEST_VECTORS);
+ test_hash("hmac(sha256)", hmac_sha256_tv_template,
+ HMAC_SHA256_TEST_VECTORS);
+ test_hash("hmac(sha384)", hmac_sha384_tv_template,
+@@ -1156,6 +1414,10 @@ static void do_test(void)
+ AES_XTS_ENC_TEST_VECTORS);
+ test_cipher("xts(aes)", DECRYPT, aes_xts_dec_tv_template,
+ AES_XTS_DEC_TEST_VECTORS);
++ test_cipher("rfc3686(ctr(aes))", ENCRYPT, aes_ctr_enc_tv_template,
++ AES_CTR_ENC_TEST_VECTORS);
++ test_cipher("rfc3686(ctr(aes))", DECRYPT, aes_ctr_dec_tv_template,
++ AES_CTR_DEC_TEST_VECTORS);
+ break;
+
+ case 11:
+@@ -1167,7 +1429,9 @@ static void do_test(void)
+ break;
+
+ case 13:
+- test_deflate();
++ test_comp("deflate", deflate_comp_tv_template,
++ deflate_decomp_tv_template, DEFLATE_COMP_TEST_VECTORS,
++ DEFLATE_DECOMP_TEST_VECTORS);
+ break;
+
+ case 14:
+@@ -1291,6 +1555,34 @@ static void do_test(void)
+ camellia_cbc_dec_tv_template,
+ CAMELLIA_CBC_DEC_TEST_VECTORS);
+ break;
++ case 33:
++ test_hash("sha224", sha224_tv_template, SHA224_TEST_VECTORS);
++ break;
++
++ case 34:
++ test_cipher("salsa20", ENCRYPT,
++ salsa20_stream_enc_tv_template,
++ SALSA20_STREAM_ENC_TEST_VECTORS);
++ break;
++
++ case 35:
++ test_aead("gcm(aes)", ENCRYPT, aes_gcm_enc_tv_template,
++ AES_GCM_ENC_TEST_VECTORS);
++ test_aead("gcm(aes)", DECRYPT, aes_gcm_dec_tv_template,
++ AES_GCM_DEC_TEST_VECTORS);
++ break;
++
++ case 36:
++ test_comp("lzo", lzo_comp_tv_template, lzo_decomp_tv_template,
++ LZO_COMP_TEST_VECTORS, LZO_DECOMP_TEST_VECTORS);
++ break;
++
++ case 37:
++ test_aead("ccm(aes)", ENCRYPT, aes_ccm_enc_tv_template,
++ AES_CCM_ENC_TEST_VECTORS);
++ test_aead("ccm(aes)", DECRYPT, aes_ccm_dec_tv_template,
++ AES_CCM_DEC_TEST_VECTORS);
++ break;
+
+ case 100:
+ test_hash("hmac(md5)", hmac_md5_tv_template,
+@@ -1317,6 +1609,15 @@ static void do_test(void)
+ HMAC_SHA512_TEST_VECTORS);
+ break;
+
++ case 105:
++ test_hash("hmac(sha224)", hmac_sha224_tv_template,
++ HMAC_SHA224_TEST_VECTORS);
++ break;
++
++ case 106:
++ test_hash("xcbc(aes)", aes_xcbc128_tv_template,
++ XCBC_AES_TEST_VECTORS);
++ break;
+
+ case 200:
+ test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
+@@ -1400,6 +1701,11 @@ static void do_test(void)
+ camellia_speed_template);
+ break;
+
++ case 206:
++ test_cipher_speed("salsa20", ENCRYPT, sec, NULL, 0,
++ salsa20_speed_template);
++ break;
++
+ case 300:
+ /* fall through */
+
+@@ -1451,6 +1757,10 @@ static void do_test(void)
+ test_hash_speed("tgr192", sec, generic_hash_speed_template);
+ if (mode > 300 && mode < 400) break;
+
++ case 313:
++ test_hash_speed("sha224", sec, generic_hash_speed_template);
++ if (mode > 300 && mode < 400) break;
++
+ case 399:
+ break;
+
+@@ -1467,20 +1777,21 @@ static void do_test(void)
+
+ static int __init init(void)
+ {
++ int err = -ENOMEM;
++
+ tvmem = kmalloc(TVMEMSIZE, GFP_KERNEL);
+ if (tvmem == NULL)
+- return -ENOMEM;
++ return err;
+
+ xbuf = kmalloc(XBUFSIZE, GFP_KERNEL);
+- if (xbuf == NULL) {
+- kfree(tvmem);
+- return -ENOMEM;
+- }
++ if (xbuf == NULL)
++ goto err_free_tv;
+
+- do_test();
++ axbuf = kmalloc(XBUFSIZE, GFP_KERNEL);
++ if (axbuf == NULL)
++ goto err_free_xbuf;
+
+- kfree(xbuf);
+- kfree(tvmem);
++ do_test();
+
+ /* We intentionaly return -EAGAIN to prevent keeping
+ * the module. It does all its work from init()
+@@ -1488,7 +1799,15 @@ static int __init init(void)
+ * => we don't need it in the memory, do we?
+ * -- mludvig
+ */
+- return -EAGAIN;
++ err = -EAGAIN;
++
++ kfree(axbuf);
++ err_free_xbuf:
++ kfree(xbuf);
++ err_free_tv:
++ kfree(tvmem);
++
++ return err;
+ }
+
+ /*
+diff --git a/crypto/tcrypt.h b/crypto/tcrypt.h
+index ec86138..f785e56 100644
+--- a/crypto/tcrypt.h
++++ b/crypto/tcrypt.h
+@@ -6,12 +6,15 @@
+ *
+ * Copyright (c) 2002 James Morris <jmorris at intercode.com.au>
+ * Copyright (c) 2002 Jean-Francois Dive <jef at linuxbe.org>
++ * Copyright (c) 2007 Nokia Siemens Networks
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
++ * 2007-11-13 Added GCM tests
++ * 2007-11-13 Added AEAD support
+ * 2006-12-07 Added SHA384 HMAC and SHA512 HMAC tests
+ * 2004-08-09 Cipher speed tests by Reyk Floeter <reyk at vantronix.net>
+ * 2003-09-14 Changes by Kartikey Mahendra Bhatt
+@@ -40,14 +43,32 @@ struct hash_testvec {
+ struct cipher_testvec {
+ char key[MAX_KEYLEN] __attribute__ ((__aligned__(4)));
+ char iv[MAX_IVLEN];
++ char input[4100];
++ char result[4100];
++ unsigned char tap[MAX_TAP];
++ int np;
++ unsigned char fail;
++ unsigned char wk; /* weak key flag */
++ unsigned char klen;
++ unsigned short ilen;
++ unsigned short rlen;
++};
++
++struct aead_testvec {
++ char key[MAX_KEYLEN] __attribute__ ((__aligned__(4)));
++ char iv[MAX_IVLEN];
+ char input[512];
++ char assoc[512];
+ char result[512];
+ unsigned char tap[MAX_TAP];
++ unsigned char atap[MAX_TAP];
+ int np;
++ int anp;
+ unsigned char fail;
+ unsigned char wk; /* weak key flag */
+ unsigned char klen;
+ unsigned short ilen;
++ unsigned short alen;
+ unsigned short rlen;
+ };
+
+@@ -173,6 +194,33 @@ static struct hash_testvec sha1_tv_template[] = {
+ }
+ };
+
++
++/*
++ * SHA224 test vectors from from FIPS PUB 180-2
++ */
++#define SHA224_TEST_VECTORS 2
++
++static struct hash_testvec sha224_tv_template[] = {
++ {
++ .plaintext = "abc",
++ .psize = 3,
++ .digest = { 0x23, 0x09, 0x7D, 0x22, 0x34, 0x05, 0xD8, 0x22,
++ 0x86, 0x42, 0xA4, 0x77, 0xBD, 0xA2, 0x55, 0xB3,
++ 0x2A, 0xAD, 0xBC, 0xE4, 0xBD, 0xA0, 0xB3, 0xF7,
++ 0xE3, 0x6C, 0x9D, 0xA7},
++ }, {
++ .plaintext =
++ "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
++ .psize = 56,
++ .digest = { 0x75, 0x38, 0x8B, 0x16, 0x51, 0x27, 0x76, 0xCC,
++ 0x5D, 0xBA, 0x5D, 0xA1, 0xFD, 0x89, 0x01, 0x50,
++ 0xB0, 0xC6, 0x45, 0x5C, 0xB4, 0xF5, 0x8B, 0x19,
++ 0x52, 0x52, 0x25, 0x25 },
++ .np = 2,
++ .tap = { 28, 28 }
++ }
++};
++
+ /*
+ * SHA256 test vectors from from NIST
+ */
+@@ -817,6 +865,121 @@ static struct hash_testvec hmac_sha1_tv_template[] = {
+ },
+ };
+
++
++/*
++ * SHA224 HMAC test vectors from RFC4231
++ */
++#define HMAC_SHA224_TEST_VECTORS 4
++
++static struct hash_testvec hmac_sha224_tv_template[] = {
++ {
++ .key = { 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
++ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
++ 0x0b, 0x0b, 0x0b, 0x0b },
++ .ksize = 20,
++ /* ("Hi There") */
++ .plaintext = { 0x48, 0x69, 0x20, 0x54, 0x68, 0x65, 0x72, 0x65 },
++ .psize = 8,
++ .digest = { 0x89, 0x6f, 0xb1, 0x12, 0x8a, 0xbb, 0xdf, 0x19,
++ 0x68, 0x32, 0x10, 0x7c, 0xd4, 0x9d, 0xf3, 0x3f,
++ 0x47, 0xb4, 0xb1, 0x16, 0x99, 0x12, 0xba, 0x4f,
++ 0x53, 0x68, 0x4b, 0x22},
++ }, {
++ .key = { 0x4a, 0x65, 0x66, 0x65 }, /* ("Jefe") */
++ .ksize = 4,
++ /* ("what do ya want for nothing?") */
++ .plaintext = { 0x77, 0x68, 0x61, 0x74, 0x20, 0x64, 0x6f, 0x20,
++ 0x79, 0x61, 0x20, 0x77, 0x61, 0x6e, 0x74, 0x20,
++ 0x66, 0x6f, 0x72, 0x20, 0x6e, 0x6f, 0x74, 0x68,
++ 0x69, 0x6e, 0x67, 0x3f },
++ .psize = 28,
++ .digest = { 0xa3, 0x0e, 0x01, 0x09, 0x8b, 0xc6, 0xdb, 0xbf,
++ 0x45, 0x69, 0x0f, 0x3a, 0x7e, 0x9e, 0x6d, 0x0f,
++ 0x8b, 0xbe, 0xa2, 0xa3, 0x9e, 0x61, 0x48, 0x00,
++ 0x8f, 0xd0, 0x5e, 0x44 },
++ .np = 4,
++ .tap = { 7, 7, 7, 7 }
++ }, {
++ .key = { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa },
++ .ksize = 131,
++ /* ("Test Using Larger Than Block-Size Key - Hash Key First") */
++ .plaintext = { 0x54, 0x65, 0x73, 0x74, 0x20, 0x55, 0x73, 0x69,
++ 0x6e, 0x67, 0x20, 0x4c, 0x61, 0x72, 0x67, 0x65,
++ 0x72, 0x20, 0x54, 0x68, 0x61, 0x6e, 0x20, 0x42,
++ 0x6c, 0x6f, 0x63, 0x6b, 0x2d, 0x53, 0x69, 0x7a,
++ 0x65, 0x20, 0x4b, 0x65, 0x79, 0x20, 0x2d, 0x20,
++ 0x48, 0x61, 0x73, 0x68, 0x20, 0x4b, 0x65, 0x79,
++ 0x20, 0x46, 0x69, 0x72, 0x73, 0x74 },
++ .psize = 54,
++ .digest = { 0x95, 0xe9, 0xa0, 0xdb, 0x96, 0x20, 0x95, 0xad,
++ 0xae, 0xbe, 0x9b, 0x2d, 0x6f, 0x0d, 0xbc, 0xe2,
++ 0xd4, 0x99, 0xf1, 0x12, 0xf2, 0xd2, 0xb7, 0x27,
++ 0x3f, 0xa6, 0x87, 0x0e },
++ }, {
++ .key = { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
++ 0xaa, 0xaa, 0xaa },
++ .ksize = 131,
++ /* ("This is a test using a larger than block-size key and a")
++ (" larger than block-size data. The key needs to be")
++ (" hashed before being used by the HMAC algorithm.") */
++ .plaintext = { 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20,
++ 0x61, 0x20, 0x74, 0x65, 0x73, 0x74, 0x20, 0x75,
++ 0x73, 0x69, 0x6e, 0x67, 0x20, 0x61, 0x20, 0x6c,
++ 0x61, 0x72, 0x67, 0x65, 0x72, 0x20, 0x74, 0x68,
++ 0x61, 0x6e, 0x20, 0x62, 0x6c, 0x6f, 0x63, 0x6b,
++ 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x20, 0x6b, 0x65,
++ 0x79, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x61, 0x20,
++ 0x6c, 0x61, 0x72, 0x67, 0x65, 0x72, 0x20, 0x74,
++ 0x68, 0x61, 0x6e, 0x20, 0x62, 0x6c, 0x6f, 0x63,
++ 0x6b, 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x20, 0x64,
++ 0x61, 0x74, 0x61, 0x2e, 0x20, 0x54, 0x68, 0x65,
++ 0x20, 0x6b, 0x65, 0x79, 0x20, 0x6e, 0x65, 0x65,
++ 0x64, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x62, 0x65,
++ 0x20, 0x68, 0x61, 0x73, 0x68, 0x65, 0x64, 0x20,
++ 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x20, 0x62,
++ 0x65, 0x69, 0x6e, 0x67, 0x20, 0x75, 0x73, 0x65,
++ 0x64, 0x20, 0x62, 0x79, 0x20, 0x74, 0x68, 0x65,
++ 0x20, 0x48, 0x4d, 0x41, 0x43, 0x20, 0x61, 0x6c,
++ 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x2e },
++ .psize = 152,
++ .digest = { 0x3a, 0x85, 0x41, 0x66, 0xac, 0x5d, 0x9f, 0x02,
++ 0x3f, 0x54, 0xd5, 0x17, 0xd0, 0xb3, 0x9d, 0xbd,
++ 0x94, 0x67, 0x70, 0xdb, 0x9c, 0x2b, 0x95, 0xc9,
++ 0xf6, 0xf5, 0x65, 0xd1 },
++ },
++};
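[Editorial aside, not part of the patch: the hmac_sha224_tv_template entries above are the RFC 4231 test cases. The first one (key = twenty 0x0b bytes, data = "Hi There") can be reproduced with Python's stdlib `hmac` module:]

```python
import hmac
import hashlib

# RFC 4231 test case 1 for HMAC-SHA-224:
# key = 0x0b repeated 20 times, data = "Hi There"
mac = hmac.new(b"\x0b" * 20, b"Hi There", hashlib.sha224).hexdigest()
```

The hex string corresponds byte-for-byte to the `.digest = { 0x89, 0x6f, 0xb1, ... }` field of the first template entry.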
++
+ /*
+ * HMAC-SHA256 test vectors from
+ * draft-ietf-ipsec-ciph-sha-256-01.txt
+@@ -2140,12 +2303,18 @@ static struct cipher_testvec cast6_dec_tv_template[] = {
+ */
+ #define AES_ENC_TEST_VECTORS 3
+ #define AES_DEC_TEST_VECTORS 3
+-#define AES_CBC_ENC_TEST_VECTORS 2
+-#define AES_CBC_DEC_TEST_VECTORS 2
++#define AES_CBC_ENC_TEST_VECTORS 4
++#define AES_CBC_DEC_TEST_VECTORS 4
+ #define AES_LRW_ENC_TEST_VECTORS 8
+ #define AES_LRW_DEC_TEST_VECTORS 8
+ #define AES_XTS_ENC_TEST_VECTORS 4
+ #define AES_XTS_DEC_TEST_VECTORS 4
++#define AES_CTR_ENC_TEST_VECTORS 7
++#define AES_CTR_DEC_TEST_VECTORS 6
++#define AES_GCM_ENC_TEST_VECTORS 9
++#define AES_GCM_DEC_TEST_VECTORS 8
++#define AES_CCM_ENC_TEST_VECTORS 7
++#define AES_CCM_DEC_TEST_VECTORS 7
+
+ static struct cipher_testvec aes_enc_tv_template[] = {
+ { /* From FIPS-197 */
+@@ -2249,6 +2418,57 @@ static struct cipher_testvec aes_cbc_enc_tv_template[] = {
+ 0x75, 0x86, 0x60, 0x2d, 0x25, 0x3c, 0xff, 0xf9,
+ 0x1b, 0x82, 0x66, 0xbe, 0xa6, 0xd6, 0x1a, 0xb1 },
+ .rlen = 32,
++ }, { /* From NIST SP800-38A */
++ .key = { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52,
++ 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5,
++ 0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b },
++ .klen = 24,
++ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
++ .input = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
++ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
++ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
++ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
++ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
++ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
++ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
++ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
++ .ilen = 64,
++ .result = { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d,
++ 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8,
++ 0xb4, 0xd9, 0xad, 0xa9, 0xad, 0x7d, 0xed, 0xf4,
++ 0xe5, 0xe7, 0x38, 0x76, 0x3f, 0x69, 0x14, 0x5a,
++ 0x57, 0x1b, 0x24, 0x20, 0x12, 0xfb, 0x7a, 0xe0,
++ 0x7f, 0xa9, 0xba, 0xac, 0x3d, 0xf1, 0x02, 0xe0,
++ 0x08, 0xb0, 0xe2, 0x79, 0x88, 0x59, 0x88, 0x81,
++ 0xd9, 0x20, 0xa9, 0xe6, 0x4f, 0x56, 0x15, 0xcd },
++ .rlen = 64,
++ }, {
++ .key = { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe,
++ 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81,
++ 0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7,
++ 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 },
++ .klen = 32,
++ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
++ .input = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
++ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
++ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
++ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
++ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
++ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
++ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
++ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
++ .ilen = 64,
++ .result = { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba,
++ 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6,
++ 0x9c, 0xfc, 0x4e, 0x96, 0x7e, 0xdb, 0x80, 0x8d,
++ 0x67, 0x9f, 0x77, 0x7b, 0xc6, 0x70, 0x2c, 0x7d,
++ 0x39, 0xf2, 0x33, 0x69, 0xa9, 0xd9, 0xba, 0xcf,
++ 0xa5, 0x30, 0xe2, 0x63, 0x04, 0x23, 0x14, 0x61,
++ 0xb2, 0xeb, 0x05, 0xe2, 0xc3, 0x9b, 0xe9, 0xfc,
++ 0xda, 0x6c, 0x19, 0x07, 0x8c, 0x6a, 0x9d, 0x1b },
++ .rlen = 64,
+ },
+ };
+
+@@ -2280,6 +2500,57 @@ static struct cipher_testvec aes_cbc_dec_tv_template[] = {
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
++ }, { /* From NIST SP800-38A */
++ .key = { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52,
++ 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5,
++ 0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b },
++ .klen = 24,
++ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
++ .input = { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d,
++ 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8,
++ 0xb4, 0xd9, 0xad, 0xa9, 0xad, 0x7d, 0xed, 0xf4,
++ 0xe5, 0xe7, 0x38, 0x76, 0x3f, 0x69, 0x14, 0x5a,
++ 0x57, 0x1b, 0x24, 0x20, 0x12, 0xfb, 0x7a, 0xe0,
++ 0x7f, 0xa9, 0xba, 0xac, 0x3d, 0xf1, 0x02, 0xe0,
++ 0x08, 0xb0, 0xe2, 0x79, 0x88, 0x59, 0x88, 0x81,
++ 0xd9, 0x20, 0xa9, 0xe6, 0x4f, 0x56, 0x15, 0xcd },
++ .ilen = 64,
++ .result = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
++ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
++ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
++ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
++ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
++ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
++ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
++ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
++ .rlen = 64,
++ }, {
++ .key = { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe,
++ 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81,
++ 0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7,
++ 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 },
++ .klen = 32,
++ .iv = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
++ .input = { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba,
++ 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6,
++ 0x9c, 0xfc, 0x4e, 0x96, 0x7e, 0xdb, 0x80, 0x8d,
++ 0x67, 0x9f, 0x77, 0x7b, 0xc6, 0x70, 0x2c, 0x7d,
++ 0x39, 0xf2, 0x33, 0x69, 0xa9, 0xd9, 0xba, 0xcf,
++ 0xa5, 0x30, 0xe2, 0x63, 0x04, 0x23, 0x14, 0x61,
++ 0xb2, 0xeb, 0x05, 0xe2, 0xc3, 0x9b, 0xe9, 0xfc,
++ 0xda, 0x6c, 0x19, 0x07, 0x8c, 0x6a, 0x9d, 0x1b },
++ .ilen = 64,
++ .result = { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96,
++ 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a,
++ 0xae, 0x2d, 0x8a, 0x57, 0x1e, 0x03, 0xac, 0x9c,
++ 0x9e, 0xb7, 0x6f, 0xac, 0x45, 0xaf, 0x8e, 0x51,
++ 0x30, 0xc8, 0x1c, 0x46, 0xa3, 0x5c, 0xe4, 0x11,
++ 0xe5, 0xfb, 0xc1, 0x19, 0x1a, 0x0a, 0x52, 0xef,
++ 0xf6, 0x9f, 0x24, 0x45, 0xdf, 0x4f, 0x9b, 0x17,
++ 0xad, 0x2b, 0x41, 0x7b, 0xe6, 0x6c, 0x37, 0x10 },
++ .rlen = 64,
+ },
+ };
+
+@@ -3180,6 +3451,1843 @@ static struct cipher_testvec aes_xts_dec_tv_template[] = {
+ }
+ };
+
++
++static struct cipher_testvec aes_ctr_enc_tv_template[] = {
++ { /* From RFC 3686 */
++ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
++ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
++ 0x00, 0x00, 0x00, 0x30 },
++ .klen = 20,
++ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
++ .input = { "Single block msg" },
++ .ilen = 16,
++ .result = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
++ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
++ .rlen = 16,
++ }, {
++ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
++ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
++ 0x00, 0x6c, 0xb6, 0xdb },
++ .klen = 20,
++ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
++ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .ilen = 32,
++ .result = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
++ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
++ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
++ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
++ .rlen = 32,
++ }, {
++ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
++ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
++ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
++ 0x00, 0x00, 0x00, 0x48 },
++ .klen = 28,
++ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
++ .input = { "Single block msg" },
++ .ilen = 16,
++ .result = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
++ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
++ .rlen = 16,
++ }, {
++ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
++ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
++ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
++ 0x00, 0x96, 0xb0, 0x3b },
++ .klen = 28,
++ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
++ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .ilen = 32,
++ .result = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
++ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
++ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
++ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
++ .rlen = 32,
++ }, {
++ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
++ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
++ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
++ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
++ 0x00, 0x00, 0x00, 0x60 },
++ .klen = 36,
++ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
++ .input = { "Single block msg" },
++ .ilen = 16,
++ .result = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
++ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
++ .rlen = 16,
++ }, {
++ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
++ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
++ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
++ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
++ 0x00, 0xfa, 0xac, 0x24 },
++ .klen = 36,
++ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
++ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .ilen = 32,
++ .result = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
++ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
++ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
++ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
++ .rlen = 32,
++ }, {
++ // generated using Crypto++
++ .key = {
++ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ 0x00, 0x00, 0x00, 0x00,
++ },
++ .klen = 32 + 4,
++ .iv = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ },
++ .input = {
++ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
++ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
++ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
++ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
++ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
++ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
++ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
++ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
++ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
++ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
++ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
++ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
++ 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
++ 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
++ 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97,
++ 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7,
++ 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
++ 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7,
++ 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
++ 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
++ 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
++ 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
++ 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
++ 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
++ 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7,
++ 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff,
++ 0x00, 0x03, 0x06, 0x09, 0x0c, 0x0f, 0x12, 0x15,
++ 0x18, 0x1b, 0x1e, 0x21, 0x24, 0x27, 0x2a, 0x2d,
++ 0x30, 0x33, 0x36, 0x39, 0x3c, 0x3f, 0x42, 0x45,
++ 0x48, 0x4b, 0x4e, 0x51, 0x54, 0x57, 0x5a, 0x5d,
++ 0x60, 0x63, 0x66, 0x69, 0x6c, 0x6f, 0x72, 0x75,
++ 0x78, 0x7b, 0x7e, 0x81, 0x84, 0x87, 0x8a, 0x8d,
++ 0x90, 0x93, 0x96, 0x99, 0x9c, 0x9f, 0xa2, 0xa5,
++ 0xa8, 0xab, 0xae, 0xb1, 0xb4, 0xb7, 0xba, 0xbd,
++ 0xc0, 0xc3, 0xc6, 0xc9, 0xcc, 0xcf, 0xd2, 0xd5,
++ 0xd8, 0xdb, 0xde, 0xe1, 0xe4, 0xe7, 0xea, 0xed,
++ 0xf0, 0xf3, 0xf6, 0xf9, 0xfc, 0xff, 0x02, 0x05,
++ 0x08, 0x0b, 0x0e, 0x11, 0x14, 0x17, 0x1a, 0x1d,
++ 0x20, 0x23, 0x26, 0x29, 0x2c, 0x2f, 0x32, 0x35,
++ 0x38, 0x3b, 0x3e, 0x41, 0x44, 0x47, 0x4a, 0x4d,
++ 0x50, 0x53, 0x56, 0x59, 0x5c, 0x5f, 0x62, 0x65,
++ 0x68, 0x6b, 0x6e, 0x71, 0x74, 0x77, 0x7a, 0x7d,
++ 0x80, 0x83, 0x86, 0x89, 0x8c, 0x8f, 0x92, 0x95,
++ 0x98, 0x9b, 0x9e, 0xa1, 0xa4, 0xa7, 0xaa, 0xad,
++ 0xb0, 0xb3, 0xb6, 0xb9, 0xbc, 0xbf, 0xc2, 0xc5,
++ 0xc8, 0xcb, 0xce, 0xd1, 0xd4, 0xd7, 0xda, 0xdd,
++ 0xe0, 0xe3, 0xe6, 0xe9, 0xec, 0xef, 0xf2, 0xf5,
++ 0xf8, 0xfb, 0xfe, 0x01, 0x04, 0x07, 0x0a, 0x0d,
++ 0x10, 0x13, 0x16, 0x19, 0x1c, 0x1f, 0x22, 0x25,
++ 0x28, 0x2b, 0x2e, 0x31, 0x34, 0x37, 0x3a, 0x3d,
++ 0x40, 0x43, 0x46, 0x49, 0x4c, 0x4f, 0x52, 0x55,
++ 0x58, 0x5b, 0x5e, 0x61, 0x64, 0x67, 0x6a, 0x6d,
++ 0x70, 0x73, 0x76, 0x79, 0x7c, 0x7f, 0x82, 0x85,
++ 0x88, 0x8b, 0x8e, 0x91, 0x94, 0x97, 0x9a, 0x9d,
++ 0xa0, 0xa3, 0xa6, 0xa9, 0xac, 0xaf, 0xb2, 0xb5,
++ 0xb8, 0xbb, 0xbe, 0xc1, 0xc4, 0xc7, 0xca, 0xcd,
++ 0xd0, 0xd3, 0xd6, 0xd9, 0xdc, 0xdf, 0xe2, 0xe5,
++ 0xe8, 0xeb, 0xee, 0xf1, 0xf4, 0xf7, 0xfa, 0xfd,
++ 0x00, 0x05, 0x0a, 0x0f, 0x14, 0x19, 0x1e, 0x23,
++ 0x28, 0x2d, 0x32, 0x37, 0x3c, 0x41, 0x46, 0x4b,
++ 0x50, 0x55, 0x5a, 0x5f, 0x64, 0x69, 0x6e, 0x73,
++ 0x78, 0x7d, 0x82, 0x87, 0x8c, 0x91, 0x96, 0x9b,
++ 0xa0, 0xa5, 0xaa, 0xaf, 0xb4, 0xb9, 0xbe, 0xc3,
++ 0xc8, 0xcd, 0xd2, 0xd7, 0xdc, 0xe1, 0xe6, 0xeb,
++ 0xf0, 0xf5, 0xfa, 0xff, 0x04, 0x09, 0x0e, 0x13,
++ 0x18, 0x1d, 0x22, 0x27, 0x2c, 0x31, 0x36, 0x3b,
++ 0x40, 0x45, 0x4a, 0x4f, 0x54, 0x59, 0x5e, 0x63,
++ 0x68, 0x6d, 0x72, 0x77, 0x7c, 0x81, 0x86, 0x8b,
++ 0x90, 0x95, 0x9a, 0x9f, 0xa4, 0xa9, 0xae, 0xb3,
++ 0xb8, 0xbd, 0xc2, 0xc7, 0xcc, 0xd1, 0xd6, 0xdb,
++ 0xe0, 0xe5, 0xea, 0xef, 0xf4, 0xf9, 0xfe, 0x03,
++ 0x08, 0x0d, 0x12, 0x17, 0x1c, 0x21, 0x26, 0x2b,
++ 0x30, 0x35, 0x3a, 0x3f, 0x44, 0x49, 0x4e, 0x53,
++ 0x58, 0x5d, 0x62, 0x67, 0x6c, 0x71, 0x76, 0x7b,
++ 0x80, 0x85, 0x8a, 0x8f, 0x94, 0x99, 0x9e, 0xa3,
++ 0xa8, 0xad, 0xb2, 0xb7, 0xbc, 0xc1, 0xc6, 0xcb,
++ 0xd0, 0xd5, 0xda, 0xdf, 0xe4, 0xe9, 0xee, 0xf3,
++ 0xf8, 0xfd, 0x02, 0x07, 0x0c, 0x11, 0x16, 0x1b,
++ 0x20, 0x25, 0x2a, 0x2f, 0x34, 0x39, 0x3e, 0x43,
++ 0x48, 0x4d, 0x52, 0x57, 0x5c, 0x61, 0x66, 0x6b,
++ 0x70, 0x75, 0x7a, 0x7f, 0x84, 0x89, 0x8e, 0x93,
++ 0x98, 0x9d, 0xa2, 0xa7, 0xac, 0xb1, 0xb6, 0xbb,
++ 0xc0, 0xc5, 0xca, 0xcf, 0xd4, 0xd9, 0xde, 0xe3,
++ 0xe8, 0xed, 0xf2, 0xf7, 0xfc, 0x01, 0x06, 0x0b,
++ 0x10, 0x15, 0x1a, 0x1f, 0x24, 0x29, 0x2e, 0x33,
++ 0x38, 0x3d, 0x42, 0x47, 0x4c, 0x51, 0x56, 0x5b,
++ 0x60, 0x65, 0x6a, 0x6f, 0x74, 0x79, 0x7e, 0x83,
++ 0x88, 0x8d, 0x92, 0x97, 0x9c, 0xa1, 0xa6, 0xab,
++ 0xb0, 0xb5, 0xba, 0xbf, 0xc4, 0xc9, 0xce, 0xd3,
++ 0xd8, 0xdd, 0xe2, 0xe7, 0xec, 0xf1, 0xf6, 0xfb,
++ 0x00, 0x07, 0x0e, 0x15, 0x1c, 0x23, 0x2a, 0x31,
++ 0x38, 0x3f, 0x46, 0x4d, 0x54, 0x5b, 0x62, 0x69,
++ 0x70, 0x77, 0x7e, 0x85, 0x8c, 0x93, 0x9a, 0xa1,
++ 0xa8, 0xaf, 0xb6, 0xbd, 0xc4, 0xcb, 0xd2, 0xd9,
++ 0xe0, 0xe7, 0xee, 0xf5, 0xfc, 0x03, 0x0a, 0x11,
++ 0x18, 0x1f, 0x26, 0x2d, 0x34, 0x3b, 0x42, 0x49,
++ 0x50, 0x57, 0x5e, 0x65, 0x6c, 0x73, 0x7a, 0x81,
++ 0x88, 0x8f, 0x96, 0x9d, 0xa4, 0xab, 0xb2, 0xb9,
++ 0xc0, 0xc7, 0xce, 0xd5, 0xdc, 0xe3, 0xea, 0xf1,
++ 0xf8, 0xff, 0x06, 0x0d, 0x14, 0x1b, 0x22, 0x29,
++ 0x30, 0x37, 0x3e, 0x45, 0x4c, 0x53, 0x5a, 0x61,
++ 0x68, 0x6f, 0x76, 0x7d, 0x84, 0x8b, 0x92, 0x99,
++ 0xa0, 0xa7, 0xae, 0xb5, 0xbc, 0xc3, 0xca, 0xd1,
++ 0xd8, 0xdf, 0xe6, 0xed, 0xf4, 0xfb, 0x02, 0x09,
++ 0x10, 0x17, 0x1e, 0x25, 0x2c, 0x33, 0x3a, 0x41,
++ 0x48, 0x4f, 0x56, 0x5d, 0x64, 0x6b, 0x72, 0x79,
++ 0x80, 0x87, 0x8e, 0x95, 0x9c, 0xa3, 0xaa, 0xb1,
++ 0xb8, 0xbf, 0xc6, 0xcd, 0xd4, 0xdb, 0xe2, 0xe9,
++ 0xf0, 0xf7, 0xfe, 0x05, 0x0c, 0x13, 0x1a, 0x21,
++ 0x28, 0x2f, 0x36, 0x3d, 0x44, 0x4b, 0x52, 0x59,
++ 0x60, 0x67, 0x6e, 0x75, 0x7c, 0x83, 0x8a, 0x91,
++ 0x98, 0x9f, 0xa6, 0xad, 0xb4, 0xbb, 0xc2, 0xc9,
++ 0xd0, 0xd7, 0xde, 0xe5, 0xec, 0xf3, 0xfa, 0x01,
++ 0x08, 0x0f, 0x16, 0x1d, 0x24, 0x2b, 0x32, 0x39,
++ 0x40, 0x47, 0x4e, 0x55, 0x5c, 0x63, 0x6a, 0x71,
++ 0x78, 0x7f, 0x86, 0x8d, 0x94, 0x9b, 0xa2, 0xa9,
++ 0xb0, 0xb7, 0xbe, 0xc5, 0xcc, 0xd3, 0xda, 0xe1,
++ 0xe8, 0xef, 0xf6, 0xfd, 0x04, 0x0b, 0x12, 0x19,
++ 0x20, 0x27, 0x2e, 0x35, 0x3c, 0x43, 0x4a, 0x51,
++ 0x58, 0x5f, 0x66, 0x6d, 0x74, 0x7b, 0x82, 0x89,
++ 0x90, 0x97, 0x9e, 0xa5, 0xac, 0xb3, 0xba, 0xc1,
++ 0xc8, 0xcf, 0xd6, 0xdd, 0xe4, 0xeb, 0xf2, 0xf9,
++ 0x00, 0x09, 0x12, 0x1b, 0x24, 0x2d, 0x36, 0x3f,
++ 0x48, 0x51, 0x5a, 0x63, 0x6c, 0x75, 0x7e, 0x87,
++ 0x90, 0x99, 0xa2, 0xab, 0xb4, 0xbd, 0xc6, 0xcf,
++ 0xd8, 0xe1, 0xea, 0xf3, 0xfc, 0x05, 0x0e, 0x17,
++ 0x20, 0x29, 0x32, 0x3b, 0x44, 0x4d, 0x56, 0x5f,
++ 0x68, 0x71, 0x7a, 0x83, 0x8c, 0x95, 0x9e, 0xa7,
++ 0xb0, 0xb9, 0xc2, 0xcb, 0xd4, 0xdd, 0xe6, 0xef,
++ 0xf8, 0x01, 0x0a, 0x13, 0x1c, 0x25, 0x2e, 0x37,
++ 0x40, 0x49, 0x52, 0x5b, 0x64, 0x6d, 0x76, 0x7f,
++ 0x88, 0x91, 0x9a, 0xa3, 0xac, 0xb5, 0xbe, 0xc7,
++ 0xd0, 0xd9, 0xe2, 0xeb, 0xf4, 0xfd, 0x06, 0x0f,
++ 0x18, 0x21, 0x2a, 0x33, 0x3c, 0x45, 0x4e, 0x57,
++ 0x60, 0x69, 0x72, 0x7b, 0x84, 0x8d, 0x96, 0x9f,
++ 0xa8, 0xb1, 0xba, 0xc3, 0xcc, 0xd5, 0xde, 0xe7,
++ 0xf0, 0xf9, 0x02, 0x0b, 0x14, 0x1d, 0x26, 0x2f,
++ 0x38, 0x41, 0x4a, 0x53, 0x5c, 0x65, 0x6e, 0x77,
++ 0x80, 0x89, 0x92, 0x9b, 0xa4, 0xad, 0xb6, 0xbf,
++ 0xc8, 0xd1, 0xda, 0xe3, 0xec, 0xf5, 0xfe, 0x07,
++ 0x10, 0x19, 0x22, 0x2b, 0x34, 0x3d, 0x46, 0x4f,
++ 0x58, 0x61, 0x6a, 0x73, 0x7c, 0x85, 0x8e, 0x97,
++ 0xa0, 0xa9, 0xb2, 0xbb, 0xc4, 0xcd, 0xd6, 0xdf,
++ 0xe8, 0xf1, 0xfa, 0x03, 0x0c, 0x15, 0x1e, 0x27,
++ 0x30, 0x39, 0x42, 0x4b, 0x54, 0x5d, 0x66, 0x6f,
++ 0x78, 0x81, 0x8a, 0x93, 0x9c, 0xa5, 0xae, 0xb7,
++ 0xc0, 0xc9, 0xd2, 0xdb, 0xe4, 0xed, 0xf6, 0xff,
++ 0x08, 0x11, 0x1a, 0x23, 0x2c, 0x35, 0x3e, 0x47,
++ 0x50, 0x59, 0x62, 0x6b, 0x74, 0x7d, 0x86, 0x8f,
++ 0x98, 0xa1, 0xaa, 0xb3, 0xbc, 0xc5, 0xce, 0xd7,
++ 0xe0, 0xe9, 0xf2, 0xfb, 0x04, 0x0d, 0x16, 0x1f,
++ 0x28, 0x31, 0x3a, 0x43, 0x4c, 0x55, 0x5e, 0x67,
++ 0x70, 0x79, 0x82, 0x8b, 0x94, 0x9d, 0xa6, 0xaf,
++ 0xb8, 0xc1, 0xca, 0xd3, 0xdc, 0xe5, 0xee, 0xf7,
++ 0x00, 0x0b, 0x16, 0x21, 0x2c, 0x37, 0x42, 0x4d,
++ 0x58, 0x63, 0x6e, 0x79, 0x84, 0x8f, 0x9a, 0xa5,
++ 0xb0, 0xbb, 0xc6, 0xd1, 0xdc, 0xe7, 0xf2, 0xfd,
++ 0x08, 0x13, 0x1e, 0x29, 0x34, 0x3f, 0x4a, 0x55,
++ 0x60, 0x6b, 0x76, 0x81, 0x8c, 0x97, 0xa2, 0xad,
++ 0xb8, 0xc3, 0xce, 0xd9, 0xe4, 0xef, 0xfa, 0x05,
++ 0x10, 0x1b, 0x26, 0x31, 0x3c, 0x47, 0x52, 0x5d,
++ 0x68, 0x73, 0x7e, 0x89, 0x94, 0x9f, 0xaa, 0xb5,
++ 0xc0, 0xcb, 0xd6, 0xe1, 0xec, 0xf7, 0x02, 0x0d,
++ 0x18, 0x23, 0x2e, 0x39, 0x44, 0x4f, 0x5a, 0x65,
++ 0x70, 0x7b, 0x86, 0x91, 0x9c, 0xa7, 0xb2, 0xbd,
++ 0xc8, 0xd3, 0xde, 0xe9, 0xf4, 0xff, 0x0a, 0x15,
++ 0x20, 0x2b, 0x36, 0x41, 0x4c, 0x57, 0x62, 0x6d,
++ 0x78, 0x83, 0x8e, 0x99, 0xa4, 0xaf, 0xba, 0xc5,
++ 0xd0, 0xdb, 0xe6, 0xf1, 0xfc, 0x07, 0x12, 0x1d,
++ 0x28, 0x33, 0x3e, 0x49, 0x54, 0x5f, 0x6a, 0x75,
++ 0x80, 0x8b, 0x96, 0xa1, 0xac, 0xb7, 0xc2, 0xcd,
++ 0xd8, 0xe3, 0xee, 0xf9, 0x04, 0x0f, 0x1a, 0x25,
++ 0x30, 0x3b, 0x46, 0x51, 0x5c, 0x67, 0x72, 0x7d,
++ 0x88, 0x93, 0x9e, 0xa9, 0xb4, 0xbf, 0xca, 0xd5,
++ 0xe0, 0xeb, 0xf6, 0x01, 0x0c, 0x17, 0x22, 0x2d,
++ 0x38, 0x43, 0x4e, 0x59, 0x64, 0x6f, 0x7a, 0x85,
++ 0x90, 0x9b, 0xa6, 0xb1, 0xbc, 0xc7, 0xd2, 0xdd,
++ 0xe8, 0xf3, 0xfe, 0x09, 0x14, 0x1f, 0x2a, 0x35,
++ 0x40, 0x4b, 0x56, 0x61, 0x6c, 0x77, 0x82, 0x8d,
++ 0x98, 0xa3, 0xae, 0xb9, 0xc4, 0xcf, 0xda, 0xe5,
++ 0xf0, 0xfb, 0x06, 0x11, 0x1c, 0x27, 0x32, 0x3d,
++ 0x48, 0x53, 0x5e, 0x69, 0x74, 0x7f, 0x8a, 0x95,
++ 0xa0, 0xab, 0xb6, 0xc1, 0xcc, 0xd7, 0xe2, 0xed,
++ 0xf8, 0x03, 0x0e, 0x19, 0x24, 0x2f, 0x3a, 0x45,
++ 0x50, 0x5b, 0x66, 0x71, 0x7c, 0x87, 0x92, 0x9d,
++ 0xa8, 0xb3, 0xbe, 0xc9, 0xd4, 0xdf, 0xea, 0xf5,
++ 0x00, 0x0d, 0x1a, 0x27, 0x34, 0x41, 0x4e, 0x5b,
++ 0x68, 0x75, 0x82, 0x8f, 0x9c, 0xa9, 0xb6, 0xc3,
++ 0xd0, 0xdd, 0xea, 0xf7, 0x04, 0x11, 0x1e, 0x2b,
++ 0x38, 0x45, 0x52, 0x5f, 0x6c, 0x79, 0x86, 0x93,
++ 0xa0, 0xad, 0xba, 0xc7, 0xd4, 0xe1, 0xee, 0xfb,
++ 0x08, 0x15, 0x22, 0x2f, 0x3c, 0x49, 0x56, 0x63,
++ 0x70, 0x7d, 0x8a, 0x97, 0xa4, 0xb1, 0xbe, 0xcb,
++ 0xd8, 0xe5, 0xf2, 0xff, 0x0c, 0x19, 0x26, 0x33,
++ 0x40, 0x4d, 0x5a, 0x67, 0x74, 0x81, 0x8e, 0x9b,
++ 0xa8, 0xb5, 0xc2, 0xcf, 0xdc, 0xe9, 0xf6, 0x03,
++ 0x10, 0x1d, 0x2a, 0x37, 0x44, 0x51, 0x5e, 0x6b,
++ 0x78, 0x85, 0x92, 0x9f, 0xac, 0xb9, 0xc6, 0xd3,
++ 0xe0, 0xed, 0xfa, 0x07, 0x14, 0x21, 0x2e, 0x3b,
++ 0x48, 0x55, 0x62, 0x6f, 0x7c, 0x89, 0x96, 0xa3,
++ 0xb0, 0xbd, 0xca, 0xd7, 0xe4, 0xf1, 0xfe, 0x0b,
++ 0x18, 0x25, 0x32, 0x3f, 0x4c, 0x59, 0x66, 0x73,
++ 0x80, 0x8d, 0x9a, 0xa7, 0xb4, 0xc1, 0xce, 0xdb,
++ 0xe8, 0xf5, 0x02, 0x0f, 0x1c, 0x29, 0x36, 0x43,
++ 0x50, 0x5d, 0x6a, 0x77, 0x84, 0x91, 0x9e, 0xab,
++ 0xb8, 0xc5, 0xd2, 0xdf, 0xec, 0xf9, 0x06, 0x13,
++ 0x20, 0x2d, 0x3a, 0x47, 0x54, 0x61, 0x6e, 0x7b,
++ 0x88, 0x95, 0xa2, 0xaf, 0xbc, 0xc9, 0xd6, 0xe3,
++ 0xf0, 0xfd, 0x0a, 0x17, 0x24, 0x31, 0x3e, 0x4b,
++ 0x58, 0x65, 0x72, 0x7f, 0x8c, 0x99, 0xa6, 0xb3,
++ 0xc0, 0xcd, 0xda, 0xe7, 0xf4, 0x01, 0x0e, 0x1b,
++ 0x28, 0x35, 0x42, 0x4f, 0x5c, 0x69, 0x76, 0x83,
++ 0x90, 0x9d, 0xaa, 0xb7, 0xc4, 0xd1, 0xde, 0xeb,
++ 0xf8, 0x05, 0x12, 0x1f, 0x2c, 0x39, 0x46, 0x53,
++ 0x60, 0x6d, 0x7a, 0x87, 0x94, 0xa1, 0xae, 0xbb,
++ 0xc8, 0xd5, 0xe2, 0xef, 0xfc, 0x09, 0x16, 0x23,
++ 0x30, 0x3d, 0x4a, 0x57, 0x64, 0x71, 0x7e, 0x8b,
++ 0x98, 0xa5, 0xb2, 0xbf, 0xcc, 0xd9, 0xe6, 0xf3,
++ 0x00, 0x0f, 0x1e, 0x2d, 0x3c, 0x4b, 0x5a, 0x69,
++ 0x78, 0x87, 0x96, 0xa5, 0xb4, 0xc3, 0xd2, 0xe1,
++ 0xf0, 0xff, 0x0e, 0x1d, 0x2c, 0x3b, 0x4a, 0x59,
++ 0x68, 0x77, 0x86, 0x95, 0xa4, 0xb3, 0xc2, 0xd1,
++ 0xe0, 0xef, 0xfe, 0x0d, 0x1c, 0x2b, 0x3a, 0x49,
++ 0x58, 0x67, 0x76, 0x85, 0x94, 0xa3, 0xb2, 0xc1,
++ 0xd0, 0xdf, 0xee, 0xfd, 0x0c, 0x1b, 0x2a, 0x39,
++ 0x48, 0x57, 0x66, 0x75, 0x84, 0x93, 0xa2, 0xb1,
++ 0xc0, 0xcf, 0xde, 0xed, 0xfc, 0x0b, 0x1a, 0x29,
++ 0x38, 0x47, 0x56, 0x65, 0x74, 0x83, 0x92, 0xa1,
++ 0xb0, 0xbf, 0xce, 0xdd, 0xec, 0xfb, 0x0a, 0x19,
++ 0x28, 0x37, 0x46, 0x55, 0x64, 0x73, 0x82, 0x91,
++ 0xa0, 0xaf, 0xbe, 0xcd, 0xdc, 0xeb, 0xfa, 0x09,
++ 0x18, 0x27, 0x36, 0x45, 0x54, 0x63, 0x72, 0x81,
++ 0x90, 0x9f, 0xae, 0xbd, 0xcc, 0xdb, 0xea, 0xf9,
++ 0x08, 0x17, 0x26, 0x35, 0x44, 0x53, 0x62, 0x71,
++ 0x80, 0x8f, 0x9e, 0xad, 0xbc, 0xcb, 0xda, 0xe9,
++ 0xf8, 0x07, 0x16, 0x25, 0x34, 0x43, 0x52, 0x61,
++ 0x70, 0x7f, 0x8e, 0x9d, 0xac, 0xbb, 0xca, 0xd9,
++ 0xe8, 0xf7, 0x06, 0x15, 0x24, 0x33, 0x42, 0x51,
++ 0x60, 0x6f, 0x7e, 0x8d, 0x9c, 0xab, 0xba, 0xc9,
++ 0xd8, 0xe7, 0xf6, 0x05, 0x14, 0x23, 0x32, 0x41,
++ 0x50, 0x5f, 0x6e, 0x7d, 0x8c, 0x9b, 0xaa, 0xb9,
++ 0xc8, 0xd7, 0xe6, 0xf5, 0x04, 0x13, 0x22, 0x31,
++ 0x40, 0x4f, 0x5e, 0x6d, 0x7c, 0x8b, 0x9a, 0xa9,
++ 0xb8, 0xc7, 0xd6, 0xe5, 0xf4, 0x03, 0x12, 0x21,
++ 0x30, 0x3f, 0x4e, 0x5d, 0x6c, 0x7b, 0x8a, 0x99,
++ 0xa8, 0xb7, 0xc6, 0xd5, 0xe4, 0xf3, 0x02, 0x11,
++ 0x20, 0x2f, 0x3e, 0x4d, 0x5c, 0x6b, 0x7a, 0x89,
++ 0x98, 0xa7, 0xb6, 0xc5, 0xd4, 0xe3, 0xf2, 0x01,
++ 0x10, 0x1f, 0x2e, 0x3d, 0x4c, 0x5b, 0x6a, 0x79,
++ 0x88, 0x97, 0xa6, 0xb5, 0xc4, 0xd3, 0xe2, 0xf1,
++ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
++ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
++ 0x10, 0x21, 0x32, 0x43, 0x54, 0x65, 0x76, 0x87,
++ 0x98, 0xa9, 0xba, 0xcb, 0xdc, 0xed, 0xfe, 0x0f,
++ 0x20, 0x31, 0x42, 0x53, 0x64, 0x75, 0x86, 0x97,
++ 0xa8, 0xb9, 0xca, 0xdb, 0xec, 0xfd, 0x0e, 0x1f,
++ 0x30, 0x41, 0x52, 0x63, 0x74, 0x85, 0x96, 0xa7,
++ 0xb8, 0xc9, 0xda, 0xeb, 0xfc, 0x0d, 0x1e, 0x2f,
++ 0x40, 0x51, 0x62, 0x73, 0x84, 0x95, 0xa6, 0xb7,
++ 0xc8, 0xd9, 0xea, 0xfb, 0x0c, 0x1d, 0x2e, 0x3f,
++ 0x50, 0x61, 0x72, 0x83, 0x94, 0xa5, 0xb6, 0xc7,
++ 0xd8, 0xe9, 0xfa, 0x0b, 0x1c, 0x2d, 0x3e, 0x4f,
++ 0x60, 0x71, 0x82, 0x93, 0xa4, 0xb5, 0xc6, 0xd7,
++ 0xe8, 0xf9, 0x0a, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f,
++ 0x70, 0x81, 0x92, 0xa3, 0xb4, 0xc5, 0xd6, 0xe7,
++ 0xf8, 0x09, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e, 0x6f,
++ 0x80, 0x91, 0xa2, 0xb3, 0xc4, 0xd5, 0xe6, 0xf7,
++ 0x08, 0x19, 0x2a, 0x3b, 0x4c, 0x5d, 0x6e, 0x7f,
++ 0x90, 0xa1, 0xb2, 0xc3, 0xd4, 0xe5, 0xf6, 0x07,
++ 0x18, 0x29, 0x3a, 0x4b, 0x5c, 0x6d, 0x7e, 0x8f,
++ 0xa0, 0xb1, 0xc2, 0xd3, 0xe4, 0xf5, 0x06, 0x17,
++ 0x28, 0x39, 0x4a, 0x5b, 0x6c, 0x7d, 0x8e, 0x9f,
++ 0xb0, 0xc1, 0xd2, 0xe3, 0xf4, 0x05, 0x16, 0x27,
++ 0x38, 0x49, 0x5a, 0x6b, 0x7c, 0x8d, 0x9e, 0xaf,
++ 0xc0, 0xd1, 0xe2, 0xf3, 0x04, 0x15, 0x26, 0x37,
++ 0x48, 0x59, 0x6a, 0x7b, 0x8c, 0x9d, 0xae, 0xbf,
++ 0xd0, 0xe1, 0xf2, 0x03, 0x14, 0x25, 0x36, 0x47,
++ 0x58, 0x69, 0x7a, 0x8b, 0x9c, 0xad, 0xbe, 0xcf,
++ 0xe0, 0xf1, 0x02, 0x13, 0x24, 0x35, 0x46, 0x57,
++ 0x68, 0x79, 0x8a, 0x9b, 0xac, 0xbd, 0xce, 0xdf,
++ 0xf0, 0x01, 0x12, 0x23, 0x34, 0x45, 0x56, 0x67,
++ 0x78, 0x89, 0x9a, 0xab, 0xbc, 0xcd, 0xde, 0xef,
++ 0x00, 0x13, 0x26, 0x39, 0x4c, 0x5f, 0x72, 0x85,
++ 0x98, 0xab, 0xbe, 0xd1, 0xe4, 0xf7, 0x0a, 0x1d,
++ 0x30, 0x43, 0x56, 0x69, 0x7c, 0x8f, 0xa2, 0xb5,
++ 0xc8, 0xdb, 0xee, 0x01, 0x14, 0x27, 0x3a, 0x4d,
++ 0x60, 0x73, 0x86, 0x99, 0xac, 0xbf, 0xd2, 0xe5,
++ 0xf8, 0x0b, 0x1e, 0x31, 0x44, 0x57, 0x6a, 0x7d,
++ 0x90, 0xa3, 0xb6, 0xc9, 0xdc, 0xef, 0x02, 0x15,
++ 0x28, 0x3b, 0x4e, 0x61, 0x74, 0x87, 0x9a, 0xad,
++ 0xc0, 0xd3, 0xe6, 0xf9, 0x0c, 0x1f, 0x32, 0x45,
++ 0x58, 0x6b, 0x7e, 0x91, 0xa4, 0xb7, 0xca, 0xdd,
++ 0xf0, 0x03, 0x16, 0x29, 0x3c, 0x4f, 0x62, 0x75,
++ 0x88, 0x9b, 0xae, 0xc1, 0xd4, 0xe7, 0xfa, 0x0d,
++ 0x20, 0x33, 0x46, 0x59, 0x6c, 0x7f, 0x92, 0xa5,
++ 0xb8, 0xcb, 0xde, 0xf1, 0x04, 0x17, 0x2a, 0x3d,
++ 0x50, 0x63, 0x76, 0x89, 0x9c, 0xaf, 0xc2, 0xd5,
++ 0xe8, 0xfb, 0x0e, 0x21, 0x34, 0x47, 0x5a, 0x6d,
++ 0x80, 0x93, 0xa6, 0xb9, 0xcc, 0xdf, 0xf2, 0x05,
++ 0x18, 0x2b, 0x3e, 0x51, 0x64, 0x77, 0x8a, 0x9d,
++ 0xb0, 0xc3, 0xd6, 0xe9, 0xfc, 0x0f, 0x22, 0x35,
++ 0x48, 0x5b, 0x6e, 0x81, 0x94, 0xa7, 0xba, 0xcd,
++ 0xe0, 0xf3, 0x06, 0x19, 0x2c, 0x3f, 0x52, 0x65,
++ 0x78, 0x8b, 0x9e, 0xb1, 0xc4, 0xd7, 0xea, 0xfd,
++ 0x10, 0x23, 0x36, 0x49, 0x5c, 0x6f, 0x82, 0x95,
++ 0xa8, 0xbb, 0xce, 0xe1, 0xf4, 0x07, 0x1a, 0x2d,
++ 0x40, 0x53, 0x66, 0x79, 0x8c, 0x9f, 0xb2, 0xc5,
++ 0xd8, 0xeb, 0xfe, 0x11, 0x24, 0x37, 0x4a, 0x5d,
++ 0x70, 0x83, 0x96, 0xa9, 0xbc, 0xcf, 0xe2, 0xf5,
++ 0x08, 0x1b, 0x2e, 0x41, 0x54, 0x67, 0x7a, 0x8d,
++ 0xa0, 0xb3, 0xc6, 0xd9, 0xec, 0xff, 0x12, 0x25,
++ 0x38, 0x4b, 0x5e, 0x71, 0x84, 0x97, 0xaa, 0xbd,
++ 0xd0, 0xe3, 0xf6, 0x09, 0x1c, 0x2f, 0x42, 0x55,
++ 0x68, 0x7b, 0x8e, 0xa1, 0xb4, 0xc7, 0xda, 0xed,
++ 0x00, 0x15, 0x2a, 0x3f, 0x54, 0x69, 0x7e, 0x93,
++ 0xa8, 0xbd, 0xd2, 0xe7, 0xfc, 0x11, 0x26, 0x3b,
++ 0x50, 0x65, 0x7a, 0x8f, 0xa4, 0xb9, 0xce, 0xe3,
++ 0xf8, 0x0d, 0x22, 0x37, 0x4c, 0x61, 0x76, 0x8b,
++ 0xa0, 0xb5, 0xca, 0xdf, 0xf4, 0x09, 0x1e, 0x33,
++ 0x48, 0x5d, 0x72, 0x87, 0x9c, 0xb1, 0xc6, 0xdb,
++ 0xf0, 0x05, 0x1a, 0x2f, 0x44, 0x59, 0x6e, 0x83,
++ 0x98, 0xad, 0xc2, 0xd7, 0xec, 0x01, 0x16, 0x2b,
++ 0x40, 0x55, 0x6a, 0x7f, 0x94, 0xa9, 0xbe, 0xd3,
++ 0xe8, 0xfd, 0x12, 0x27, 0x3c, 0x51, 0x66, 0x7b,
++ 0x90, 0xa5, 0xba, 0xcf, 0xe4, 0xf9, 0x0e, 0x23,
++ 0x38, 0x4d, 0x62, 0x77, 0x8c, 0xa1, 0xb6, 0xcb,
++ 0xe0, 0xf5, 0x0a, 0x1f, 0x34, 0x49, 0x5e, 0x73,
++ 0x88, 0x9d, 0xb2, 0xc7, 0xdc, 0xf1, 0x06, 0x1b,
++ 0x30, 0x45, 0x5a, 0x6f, 0x84, 0x99, 0xae, 0xc3,
++ 0xd8, 0xed, 0x02, 0x17, 0x2c, 0x41, 0x56, 0x6b,
++ 0x80, 0x95, 0xaa, 0xbf, 0xd4, 0xe9, 0xfe, 0x13,
++ 0x28, 0x3d, 0x52, 0x67, 0x7c, 0x91, 0xa6, 0xbb,
++ 0xd0, 0xe5, 0xfa, 0x0f, 0x24, 0x39, 0x4e, 0x63,
++ 0x78, 0x8d, 0xa2, 0xb7, 0xcc, 0xe1, 0xf6, 0x0b,
++ 0x20, 0x35, 0x4a, 0x5f, 0x74, 0x89, 0x9e, 0xb3,
++ 0xc8, 0xdd, 0xf2, 0x07, 0x1c, 0x31, 0x46, 0x5b,
++ 0x70, 0x85, 0x9a, 0xaf, 0xc4, 0xd9, 0xee, 0x03,
++ 0x18, 0x2d, 0x42, 0x57, 0x6c, 0x81, 0x96, 0xab,
++ 0xc0, 0xd5, 0xea, 0xff, 0x14, 0x29, 0x3e, 0x53,
++ 0x68, 0x7d, 0x92, 0xa7, 0xbc, 0xd1, 0xe6, 0xfb,
++ 0x10, 0x25, 0x3a, 0x4f, 0x64, 0x79, 0x8e, 0xa3,
++ 0xb8, 0xcd, 0xe2, 0xf7, 0x0c, 0x21, 0x36, 0x4b,
++ 0x60, 0x75, 0x8a, 0x9f, 0xb4, 0xc9, 0xde, 0xf3,
++ 0x08, 0x1d, 0x32, 0x47, 0x5c, 0x71, 0x86, 0x9b,
++ 0xb0, 0xc5, 0xda, 0xef, 0x04, 0x19, 0x2e, 0x43,
++ 0x58, 0x6d, 0x82, 0x97, 0xac, 0xc1, 0xd6, 0xeb,
++ 0x00, 0x17, 0x2e, 0x45, 0x5c, 0x73, 0x8a, 0xa1,
++ 0xb8, 0xcf, 0xe6, 0xfd, 0x14, 0x2b, 0x42, 0x59,
++ 0x70, 0x87, 0x9e, 0xb5, 0xcc, 0xe3, 0xfa, 0x11,
++ 0x28, 0x3f, 0x56, 0x6d, 0x84, 0x9b, 0xb2, 0xc9,
++ 0xe0, 0xf7, 0x0e, 0x25, 0x3c, 0x53, 0x6a, 0x81,
++ 0x98, 0xaf, 0xc6, 0xdd, 0xf4, 0x0b, 0x22, 0x39,
++ 0x50, 0x67, 0x7e, 0x95, 0xac, 0xc3, 0xda, 0xf1,
++ 0x08, 0x1f, 0x36, 0x4d, 0x64, 0x7b, 0x92, 0xa9,
++ 0xc0, 0xd7, 0xee, 0x05, 0x1c, 0x33, 0x4a, 0x61,
++ 0x78, 0x8f, 0xa6, 0xbd, 0xd4, 0xeb, 0x02, 0x19,
++ 0x30, 0x47, 0x5e, 0x75, 0x8c, 0xa3, 0xba, 0xd1,
++ 0xe8, 0xff, 0x16, 0x2d, 0x44, 0x5b, 0x72, 0x89,
++ 0xa0, 0xb7, 0xce, 0xe5, 0xfc, 0x13, 0x2a, 0x41,
++ 0x58, 0x6f, 0x86, 0x9d, 0xb4, 0xcb, 0xe2, 0xf9,
++ 0x10, 0x27, 0x3e, 0x55, 0x6c, 0x83, 0x9a, 0xb1,
++ 0xc8, 0xdf, 0xf6, 0x0d, 0x24, 0x3b, 0x52, 0x69,
++ 0x80, 0x97, 0xae, 0xc5, 0xdc, 0xf3, 0x0a, 0x21,
++ 0x38, 0x4f, 0x66, 0x7d, 0x94, 0xab, 0xc2, 0xd9,
++ 0xf0, 0x07, 0x1e, 0x35, 0x4c, 0x63, 0x7a, 0x91,
++ 0xa8, 0xbf, 0xd6, 0xed, 0x04, 0x1b, 0x32, 0x49,
++ 0x60, 0x77, 0x8e, 0xa5, 0xbc, 0xd3, 0xea, 0x01,
++ 0x18, 0x2f, 0x46, 0x5d, 0x74, 0x8b, 0xa2, 0xb9,
++ 0xd0, 0xe7, 0xfe, 0x15, 0x2c, 0x43, 0x5a, 0x71,
++ 0x88, 0x9f, 0xb6, 0xcd, 0xe4, 0xfb, 0x12, 0x29,
++ 0x40, 0x57, 0x6e, 0x85, 0x9c, 0xb3, 0xca, 0xe1,
++ 0xf8, 0x0f, 0x26, 0x3d, 0x54, 0x6b, 0x82, 0x99,
++ 0xb0, 0xc7, 0xde, 0xf5, 0x0c, 0x23, 0x3a, 0x51,
++ 0x68, 0x7f, 0x96, 0xad, 0xc4, 0xdb, 0xf2, 0x09,
++ 0x20, 0x37, 0x4e, 0x65, 0x7c, 0x93, 0xaa, 0xc1,
++ 0xd8, 0xef, 0x06, 0x1d, 0x34, 0x4b, 0x62, 0x79,
++ 0x90, 0xa7, 0xbe, 0xd5, 0xec, 0x03, 0x1a, 0x31,
++ 0x48, 0x5f, 0x76, 0x8d, 0xa4, 0xbb, 0xd2, 0xe9,
++ 0x00, 0x19, 0x32, 0x4b, 0x64, 0x7d, 0x96, 0xaf,
++ 0xc8, 0xe1, 0xfa, 0x13, 0x2c, 0x45, 0x5e, 0x77,
++ 0x90, 0xa9, 0xc2, 0xdb, 0xf4, 0x0d, 0x26, 0x3f,
++ 0x58, 0x71, 0x8a, 0xa3, 0xbc, 0xd5, 0xee, 0x07,
++ 0x20, 0x39, 0x52, 0x6b, 0x84, 0x9d, 0xb6, 0xcf,
++ 0xe8, 0x01, 0x1a, 0x33, 0x4c, 0x65, 0x7e, 0x97,
++ 0xb0, 0xc9, 0xe2, 0xfb, 0x14, 0x2d, 0x46, 0x5f,
++ 0x78, 0x91, 0xaa, 0xc3, 0xdc, 0xf5, 0x0e, 0x27,
++ 0x40, 0x59, 0x72, 0x8b, 0xa4, 0xbd, 0xd6, 0xef,
++ 0x08, 0x21, 0x3a, 0x53, 0x6c, 0x85, 0x9e, 0xb7,
++ 0xd0, 0xe9, 0x02, 0x1b, 0x34, 0x4d, 0x66, 0x7f,
++ 0x98, 0xb1, 0xca, 0xe3, 0xfc, 0x15, 0x2e, 0x47,
++ 0x60, 0x79, 0x92, 0xab, 0xc4, 0xdd, 0xf6, 0x0f,
++ 0x28, 0x41, 0x5a, 0x73, 0x8c, 0xa5, 0xbe, 0xd7,
++ 0xf0, 0x09, 0x22, 0x3b, 0x54, 0x6d, 0x86, 0x9f,
++ 0xb8, 0xd1, 0xea, 0x03, 0x1c, 0x35, 0x4e, 0x67,
++ 0x80, 0x99, 0xb2, 0xcb, 0xe4, 0xfd, 0x16, 0x2f,
++ 0x48, 0x61, 0x7a, 0x93, 0xac, 0xc5, 0xde, 0xf7,
++ 0x10, 0x29, 0x42, 0x5b, 0x74, 0x8d, 0xa6, 0xbf,
++ 0xd8, 0xf1, 0x0a, 0x23, 0x3c, 0x55, 0x6e, 0x87,
++ 0xa0, 0xb9, 0xd2, 0xeb, 0x04, 0x1d, 0x36, 0x4f,
++ 0x68, 0x81, 0x9a, 0xb3, 0xcc, 0xe5, 0xfe, 0x17,
++ 0x30, 0x49, 0x62, 0x7b, 0x94, 0xad, 0xc6, 0xdf,
++ 0xf8, 0x11, 0x2a, 0x43, 0x5c, 0x75, 0x8e, 0xa7,
++ 0xc0, 0xd9, 0xf2, 0x0b, 0x24, 0x3d, 0x56, 0x6f,
++ 0x88, 0xa1, 0xba, 0xd3, 0xec, 0x05, 0x1e, 0x37,
++ 0x50, 0x69, 0x82, 0x9b, 0xb4, 0xcd, 0xe6, 0xff,
++ 0x18, 0x31, 0x4a, 0x63, 0x7c, 0x95, 0xae, 0xc7,
++ 0xe0, 0xf9, 0x12, 0x2b, 0x44, 0x5d, 0x76, 0x8f,
++ 0xa8, 0xc1, 0xda, 0xf3, 0x0c, 0x25, 0x3e, 0x57,
++ 0x70, 0x89, 0xa2, 0xbb, 0xd4, 0xed, 0x06, 0x1f,
++ 0x38, 0x51, 0x6a, 0x83, 0x9c, 0xb5, 0xce, 0xe7,
++ 0x00, 0x1b, 0x36, 0x51, 0x6c, 0x87, 0xa2, 0xbd,
++ 0xd8, 0xf3, 0x0e, 0x29, 0x44, 0x5f, 0x7a, 0x95,
++ 0xb0, 0xcb, 0xe6, 0x01, 0x1c, 0x37, 0x52, 0x6d,
++ 0x88, 0xa3, 0xbe, 0xd9, 0xf4, 0x0f, 0x2a, 0x45,
++ 0x60, 0x7b, 0x96, 0xb1, 0xcc, 0xe7, 0x02, 0x1d,
++ 0x38, 0x53, 0x6e, 0x89, 0xa4, 0xbf, 0xda, 0xf5,
++ 0x10, 0x2b, 0x46, 0x61, 0x7c, 0x97, 0xb2, 0xcd,
++ 0xe8, 0x03, 0x1e, 0x39, 0x54, 0x6f, 0x8a, 0xa5,
++ 0xc0, 0xdb, 0xf6, 0x11, 0x2c, 0x47, 0x62, 0x7d,
++ 0x98, 0xb3, 0xce, 0xe9, 0x04, 0x1f, 0x3a, 0x55,
++ 0x70, 0x8b, 0xa6, 0xc1, 0xdc, 0xf7, 0x12, 0x2d,
++ 0x48, 0x63, 0x7e, 0x99, 0xb4, 0xcf, 0xea, 0x05,
++ 0x20, 0x3b, 0x56, 0x71, 0x8c, 0xa7, 0xc2, 0xdd,
++ 0xf8, 0x13, 0x2e, 0x49, 0x64, 0x7f, 0x9a, 0xb5,
++ 0xd0, 0xeb, 0x06, 0x21, 0x3c, 0x57, 0x72, 0x8d,
++ 0xa8, 0xc3, 0xde, 0xf9, 0x14, 0x2f, 0x4a, 0x65,
++ 0x80, 0x9b, 0xb6, 0xd1, 0xec, 0x07, 0x22, 0x3d,
++ 0x58, 0x73, 0x8e, 0xa9, 0xc4, 0xdf, 0xfa, 0x15,
++ 0x30, 0x4b, 0x66, 0x81, 0x9c, 0xb7, 0xd2, 0xed,
++ 0x08, 0x23, 0x3e, 0x59, 0x74, 0x8f, 0xaa, 0xc5,
++ 0xe0, 0xfb, 0x16, 0x31, 0x4c, 0x67, 0x82, 0x9d,
++ 0xb8, 0xd3, 0xee, 0x09, 0x24, 0x3f, 0x5a, 0x75,
++ 0x90, 0xab, 0xc6, 0xe1, 0xfc, 0x17, 0x32, 0x4d,
++ 0x68, 0x83, 0x9e, 0xb9, 0xd4, 0xef, 0x0a, 0x25,
++ 0x40, 0x5b, 0x76, 0x91, 0xac, 0xc7, 0xe2, 0xfd,
++ 0x18, 0x33, 0x4e, 0x69, 0x84, 0x9f, 0xba, 0xd5,
++ 0xf0, 0x0b, 0x26, 0x41, 0x5c, 0x77, 0x92, 0xad,
++ 0xc8, 0xe3, 0xfe, 0x19, 0x34, 0x4f, 0x6a, 0x85,
++ 0xa0, 0xbb, 0xd6, 0xf1, 0x0c, 0x27, 0x42, 0x5d,
++ 0x78, 0x93, 0xae, 0xc9, 0xe4, 0xff, 0x1a, 0x35,
++ 0x50, 0x6b, 0x86, 0xa1, 0xbc, 0xd7, 0xf2, 0x0d,
++ 0x28, 0x43, 0x5e, 0x79, 0x94, 0xaf, 0xca, 0xe5,
++ 0x00, 0x1d, 0x3a, 0x57, 0x74, 0x91, 0xae, 0xcb,
++ 0xe8, 0x05, 0x22, 0x3f, 0x5c, 0x79, 0x96, 0xb3,
++ 0xd0, 0xed, 0x0a, 0x27, 0x44, 0x61, 0x7e, 0x9b,
++ 0xb8, 0xd5, 0xf2, 0x0f, 0x2c, 0x49, 0x66, 0x83,
++ 0xa0, 0xbd, 0xda, 0xf7, 0x14, 0x31, 0x4e, 0x6b,
++ 0x88, 0xa5, 0xc2, 0xdf, 0xfc, 0x19, 0x36, 0x53,
++ 0x70, 0x8d, 0xaa, 0xc7, 0xe4, 0x01, 0x1e, 0x3b,
++ 0x58, 0x75, 0x92, 0xaf, 0xcc, 0xe9, 0x06, 0x23,
++ 0x40, 0x5d, 0x7a, 0x97, 0xb4, 0xd1, 0xee, 0x0b,
++ 0x28, 0x45, 0x62, 0x7f, 0x9c, 0xb9, 0xd6, 0xf3,
++ 0x10, 0x2d, 0x4a, 0x67, 0x84, 0xa1, 0xbe, 0xdb,
++ 0xf8, 0x15, 0x32, 0x4f, 0x6c, 0x89, 0xa6, 0xc3,
++ 0xe0, 0xfd, 0x1a, 0x37, 0x54, 0x71, 0x8e, 0xab,
++ 0xc8, 0xe5, 0x02, 0x1f, 0x3c, 0x59, 0x76, 0x93,
++ 0xb0, 0xcd, 0xea, 0x07, 0x24, 0x41, 0x5e, 0x7b,
++ 0x98, 0xb5, 0xd2, 0xef, 0x0c, 0x29, 0x46, 0x63,
++ 0x80, 0x9d, 0xba, 0xd7, 0xf4, 0x11, 0x2e, 0x4b,
++ 0x68, 0x85, 0xa2, 0xbf, 0xdc, 0xf9, 0x16, 0x33,
++ 0x50, 0x6d, 0x8a, 0xa7, 0xc4, 0xe1, 0xfe, 0x1b,
++ 0x38, 0x55, 0x72, 0x8f, 0xac, 0xc9, 0xe6, 0x03,
++ 0x20, 0x3d, 0x5a, 0x77, 0x94, 0xb1, 0xce, 0xeb,
++ 0x08, 0x25, 0x42, 0x5f, 0x7c, 0x99, 0xb6, 0xd3,
++ 0xf0, 0x0d, 0x2a, 0x47, 0x64, 0x81, 0x9e, 0xbb,
++ 0xd8, 0xf5, 0x12, 0x2f, 0x4c, 0x69, 0x86, 0xa3,
++ 0xc0, 0xdd, 0xfa, 0x17, 0x34, 0x51, 0x6e, 0x8b,
++ 0xa8, 0xc5, 0xe2, 0xff, 0x1c, 0x39, 0x56, 0x73,
++ 0x90, 0xad, 0xca, 0xe7, 0x04, 0x21, 0x3e, 0x5b,
++ 0x78, 0x95, 0xb2, 0xcf, 0xec, 0x09, 0x26, 0x43,
++ 0x60, 0x7d, 0x9a, 0xb7, 0xd4, 0xf1, 0x0e, 0x2b,
++ 0x48, 0x65, 0x82, 0x9f, 0xbc, 0xd9, 0xf6, 0x13,
++ 0x30, 0x4d, 0x6a, 0x87, 0xa4, 0xc1, 0xde, 0xfb,
++ 0x18, 0x35, 0x52, 0x6f, 0x8c, 0xa9, 0xc6, 0xe3,
++ 0x00, 0x1f, 0x3e, 0x5d, 0x7c, 0x9b, 0xba, 0xd9,
++ 0xf8, 0x17, 0x36, 0x55, 0x74, 0x93, 0xb2, 0xd1,
++ 0xf0, 0x0f, 0x2e, 0x4d, 0x6c, 0x8b, 0xaa, 0xc9,
++ 0xe8, 0x07, 0x26, 0x45, 0x64, 0x83, 0xa2, 0xc1,
++ 0xe0, 0xff, 0x1e, 0x3d, 0x5c, 0x7b, 0x9a, 0xb9,
++ 0xd8, 0xf7, 0x16, 0x35, 0x54, 0x73, 0x92, 0xb1,
++ 0xd0, 0xef, 0x0e, 0x2d, 0x4c, 0x6b, 0x8a, 0xa9,
++ 0xc8, 0xe7, 0x06, 0x25, 0x44, 0x63, 0x82, 0xa1,
++ 0xc0, 0xdf, 0xfe, 0x1d, 0x3c, 0x5b, 0x7a, 0x99,
++ 0xb8, 0xd7, 0xf6, 0x15, 0x34, 0x53, 0x72, 0x91,
++ 0xb0, 0xcf, 0xee, 0x0d, 0x2c, 0x4b, 0x6a, 0x89,
++ 0xa8, 0xc7, 0xe6, 0x05, 0x24, 0x43, 0x62, 0x81,
++ 0xa0, 0xbf, 0xde, 0xfd, 0x1c, 0x3b, 0x5a, 0x79,
++ 0x98, 0xb7, 0xd6, 0xf5, 0x14, 0x33, 0x52, 0x71,
++ 0x90, 0xaf, 0xce, 0xed, 0x0c, 0x2b, 0x4a, 0x69,
++ 0x88, 0xa7, 0xc6, 0xe5, 0x04, 0x23, 0x42, 0x61,
++ 0x80, 0x9f, 0xbe, 0xdd, 0xfc, 0x1b, 0x3a, 0x59,
++ 0x78, 0x97, 0xb6, 0xd5, 0xf4, 0x13, 0x32, 0x51,
++ 0x70, 0x8f, 0xae, 0xcd, 0xec, 0x0b, 0x2a, 0x49,
++ 0x68, 0x87, 0xa6, 0xc5, 0xe4, 0x03, 0x22, 0x41,
++ 0x60, 0x7f, 0x9e, 0xbd, 0xdc, 0xfb, 0x1a, 0x39,
++ 0x58, 0x77, 0x96, 0xb5, 0xd4, 0xf3, 0x12, 0x31,
++ 0x50, 0x6f, 0x8e, 0xad, 0xcc, 0xeb, 0x0a, 0x29,
++ 0x48, 0x67, 0x86, 0xa5, 0xc4, 0xe3, 0x02, 0x21,
++ 0x40, 0x5f, 0x7e, 0x9d, 0xbc, 0xdb, 0xfa, 0x19,
++ 0x38, 0x57, 0x76, 0x95, 0xb4, 0xd3, 0xf2, 0x11,
++ 0x30, 0x4f, 0x6e, 0x8d, 0xac, 0xcb, 0xea, 0x09,
++ 0x28, 0x47, 0x66, 0x85, 0xa4, 0xc3, 0xe2, 0x01,
++ 0x20, 0x3f, 0x5e, 0x7d, 0x9c, 0xbb, 0xda, 0xf9,
++ 0x18, 0x37, 0x56, 0x75, 0x94, 0xb3, 0xd2, 0xf1,
++ 0x10, 0x2f, 0x4e, 0x6d, 0x8c, 0xab, 0xca, 0xe9,
++ 0x08, 0x27, 0x46, 0x65, 0x84, 0xa3, 0xc2, 0xe1,
++ 0x00, 0x21, 0x42, 0x63,
++ },
++ .ilen = 4100,
++ .result = {
++ 0xf0, 0x5c, 0x74, 0xad, 0x4e, 0xbc, 0x99, 0xe2,
++ 0xae, 0xff, 0x91, 0x3a, 0x44, 0xcf, 0x38, 0x32,
++ 0x1e, 0xad, 0xa7, 0xcd, 0xa1, 0x39, 0x95, 0xaa,
++ 0x10, 0xb1, 0xb3, 0x2e, 0x04, 0x31, 0x8f, 0x86,
++ 0xf2, 0x62, 0x74, 0x70, 0x0c, 0xa4, 0x46, 0x08,
++ 0xa8, 0xb7, 0x99, 0xa8, 0xe9, 0xd2, 0x73, 0x79,
++ 0x7e, 0x6e, 0xd4, 0x8f, 0x1e, 0xc7, 0x8e, 0x31,
++ 0x0b, 0xfa, 0x4b, 0xce, 0xfd, 0xf3, 0x57, 0x71,
++ 0xe9, 0x46, 0x03, 0xa5, 0x3d, 0x34, 0x00, 0xe2,
++ 0x18, 0xff, 0x75, 0x6d, 0x06, 0x2d, 0x00, 0xab,
++ 0xb9, 0x3e, 0x6c, 0x59, 0xc5, 0x84, 0x06, 0xb5,
++ 0x8b, 0xd0, 0x89, 0x9c, 0x4a, 0x79, 0x16, 0xc6,
++ 0x3d, 0x74, 0x54, 0xfa, 0x44, 0xcd, 0x23, 0x26,
++ 0x5c, 0xcf, 0x7e, 0x28, 0x92, 0x32, 0xbf, 0xdf,
++ 0xa7, 0x20, 0x3c, 0x74, 0x58, 0x2a, 0x9a, 0xde,
++ 0x61, 0x00, 0x1c, 0x4f, 0xff, 0x59, 0xc4, 0x22,
++ 0xac, 0x3c, 0xd0, 0xe8, 0x6c, 0xf9, 0x97, 0x1b,
++ 0x58, 0x9b, 0xad, 0x71, 0xe8, 0xa9, 0xb5, 0x0d,
++ 0xee, 0x2f, 0x04, 0x1f, 0x7f, 0xbc, 0x99, 0xee,
++ 0x84, 0xff, 0x42, 0x60, 0xdc, 0x3a, 0x18, 0xa5,
++ 0x81, 0xf9, 0xef, 0xdc, 0x7a, 0x0f, 0x65, 0x41,
++ 0x2f, 0xa3, 0xd3, 0xf9, 0xc2, 0xcb, 0xc0, 0x4d,
++ 0x8f, 0xd3, 0x76, 0x96, 0xad, 0x49, 0x6d, 0x38,
++ 0x3d, 0x39, 0x0b, 0x6c, 0x80, 0xb7, 0x54, 0x69,
++ 0xf0, 0x2c, 0x90, 0x02, 0x29, 0x0d, 0x1c, 0x12,
++ 0xad, 0x55, 0xc3, 0x8b, 0x68, 0xd9, 0xcc, 0xb3,
++ 0xb2, 0x64, 0x33, 0x90, 0x5e, 0xca, 0x4b, 0xe2,
++ 0xfb, 0x75, 0xdc, 0x63, 0xf7, 0x9f, 0x82, 0x74,
++ 0xf0, 0xc9, 0xaa, 0x7f, 0xe9, 0x2a, 0x9b, 0x33,
++ 0xbc, 0x88, 0x00, 0x7f, 0xca, 0xb2, 0x1f, 0x14,
++ 0xdb, 0xc5, 0x8e, 0x7b, 0x11, 0x3c, 0x3e, 0x08,
++ 0xf3, 0x83, 0xe8, 0xe0, 0x94, 0x86, 0x2e, 0x92,
++ 0x78, 0x6b, 0x01, 0xc9, 0xc7, 0x83, 0xba, 0x21,
++ 0x6a, 0x25, 0x15, 0x33, 0x4e, 0x45, 0x08, 0xec,
++ 0x35, 0xdb, 0xe0, 0x6e, 0x31, 0x51, 0x79, 0xa9,
++ 0x42, 0x44, 0x65, 0xc1, 0xa0, 0xf1, 0xf9, 0x2a,
++ 0x70, 0xd5, 0xb6, 0xc6, 0xc1, 0x8c, 0x39, 0xfc,
++ 0x25, 0xa6, 0x55, 0xd9, 0xdd, 0x2d, 0x4c, 0xec,
++ 0x49, 0xc6, 0xeb, 0x0e, 0xa8, 0x25, 0x2a, 0x16,
++ 0x1b, 0x66, 0x84, 0xda, 0xe2, 0x92, 0xe5, 0xc0,
++ 0xc8, 0x53, 0x07, 0xaf, 0x80, 0x84, 0xec, 0xfd,
++ 0xcd, 0xd1, 0x6e, 0xcd, 0x6f, 0x6a, 0xf5, 0x36,
++ 0xc5, 0x15, 0xe5, 0x25, 0x7d, 0x77, 0xd1, 0x1a,
++ 0x93, 0x36, 0xa9, 0xcf, 0x7c, 0xa4, 0x54, 0x4a,
++ 0x06, 0x51, 0x48, 0x4e, 0xf6, 0x59, 0x87, 0xd2,
++ 0x04, 0x02, 0xef, 0xd3, 0x44, 0xde, 0x76, 0x31,
++ 0xb3, 0x34, 0x17, 0x1b, 0x9d, 0x66, 0x11, 0x9f,
++ 0x1e, 0xcc, 0x17, 0xe9, 0xc7, 0x3c, 0x1b, 0xe7,
++ 0xcb, 0x50, 0x08, 0xfc, 0xdc, 0x2b, 0x24, 0xdb,
++ 0x65, 0x83, 0xd0, 0x3b, 0xe3, 0x30, 0xea, 0x94,
++ 0x6c, 0xe7, 0xe8, 0x35, 0x32, 0xc7, 0xdb, 0x64,
++ 0xb4, 0x01, 0xab, 0x36, 0x2c, 0x77, 0x13, 0xaf,
++ 0xf8, 0x2b, 0x88, 0x3f, 0x54, 0x39, 0xc4, 0x44,
++ 0xfe, 0xef, 0x6f, 0x68, 0x34, 0xbe, 0x0f, 0x05,
++ 0x16, 0x6d, 0xf6, 0x0a, 0x30, 0xe7, 0xe3, 0xed,
++ 0xc4, 0xde, 0x3c, 0x1b, 0x13, 0xd8, 0xdb, 0xfe,
++ 0x41, 0x62, 0xe5, 0x28, 0xd4, 0x8d, 0xa3, 0xc7,
++ 0x93, 0x97, 0xc6, 0x48, 0x45, 0x1d, 0x9f, 0x83,
++ 0xdf, 0x4b, 0x40, 0x3e, 0x42, 0x25, 0x87, 0x80,
++ 0x4c, 0x7d, 0xa8, 0xd4, 0x98, 0x23, 0x95, 0x75,
++ 0x41, 0x8c, 0xda, 0x41, 0x9b, 0xd4, 0xa7, 0x06,
++ 0xb5, 0xf1, 0x71, 0x09, 0x53, 0xbe, 0xca, 0xbf,
++ 0x32, 0x03, 0xed, 0xf0, 0x50, 0x1c, 0x56, 0x39,
++ 0x5b, 0xa4, 0x75, 0x18, 0xf7, 0x9b, 0x58, 0xef,
++ 0x53, 0xfc, 0x2a, 0x38, 0x23, 0x15, 0x75, 0xcd,
++ 0x45, 0xe5, 0x5a, 0x82, 0x55, 0xba, 0x21, 0xfa,
++ 0xd4, 0xbd, 0xc6, 0x94, 0x7c, 0xc5, 0x80, 0x12,
++ 0xf7, 0x4b, 0x32, 0xc4, 0x9a, 0x82, 0xd8, 0x28,
++ 0x8f, 0xd9, 0xc2, 0x0f, 0x60, 0x03, 0xbe, 0x5e,
++ 0x21, 0xd6, 0x5f, 0x58, 0xbf, 0x5c, 0xb1, 0x32,
++ 0x82, 0x8d, 0xa9, 0xe5, 0xf2, 0x66, 0x1a, 0xc0,
++ 0xa0, 0xbc, 0x58, 0x2f, 0x71, 0xf5, 0x2f, 0xed,
++ 0xd1, 0x26, 0xb9, 0xd8, 0x49, 0x5a, 0x07, 0x19,
++ 0x01, 0x7c, 0x59, 0xb0, 0xf8, 0xa4, 0xb7, 0xd3,
++ 0x7b, 0x1a, 0x8c, 0x38, 0xf4, 0x50, 0xa4, 0x59,
++ 0xb0, 0xcc, 0x41, 0x0b, 0x88, 0x7f, 0xe5, 0x31,
++ 0xb3, 0x42, 0xba, 0xa2, 0x7e, 0xd4, 0x32, 0x71,
++ 0x45, 0x87, 0x48, 0xa9, 0xc2, 0xf2, 0x89, 0xb3,
++ 0xe4, 0xa7, 0x7e, 0x52, 0x15, 0x61, 0xfa, 0xfe,
++ 0xc9, 0xdd, 0x81, 0xeb, 0x13, 0xab, 0xab, 0xc3,
++ 0x98, 0x59, 0xd8, 0x16, 0x3d, 0x14, 0x7a, 0x1c,
++ 0x3c, 0x41, 0x9a, 0x16, 0x16, 0x9b, 0xd2, 0xd2,
++ 0x69, 0x3a, 0x29, 0x23, 0xac, 0x86, 0x32, 0xa5,
++ 0x48, 0x9c, 0x9e, 0xf3, 0x47, 0x77, 0x81, 0x70,
++ 0x24, 0xe8, 0x85, 0xd2, 0xf5, 0xb5, 0xfa, 0xff,
++ 0x59, 0x6a, 0xd3, 0x50, 0x59, 0x43, 0x59, 0xde,
++ 0xd9, 0xf1, 0x55, 0xa5, 0x0c, 0xc3, 0x1a, 0x1a,
++ 0x18, 0x34, 0x0d, 0x1a, 0x63, 0x33, 0xed, 0x10,
++ 0xe0, 0x1d, 0x2a, 0x18, 0xd2, 0xc0, 0x54, 0xa8,
++ 0xca, 0xb5, 0x9a, 0xd3, 0xdd, 0xca, 0x45, 0x84,
++ 0x50, 0xe7, 0x0f, 0xfe, 0xa4, 0x99, 0x5a, 0xbe,
++ 0x43, 0x2d, 0x9a, 0xcb, 0x92, 0x3f, 0x5a, 0x1d,
++ 0x85, 0xd8, 0xc9, 0xdf, 0x68, 0xc9, 0x12, 0x80,
++ 0x56, 0x0c, 0xdc, 0x00, 0xdc, 0x3a, 0x7d, 0x9d,
++ 0xa3, 0xa2, 0xe8, 0x4d, 0xbf, 0xf9, 0x70, 0xa0,
++ 0xa4, 0x13, 0x4f, 0x6b, 0xaf, 0x0a, 0x89, 0x7f,
++ 0xda, 0xf0, 0xbf, 0x9b, 0xc8, 0x1d, 0xe5, 0xf8,
++ 0x2e, 0x8b, 0x07, 0xb5, 0x73, 0x1b, 0xcc, 0xa2,
++ 0xa6, 0xad, 0x30, 0xbc, 0x78, 0x3c, 0x5b, 0x10,
++ 0xfa, 0x5e, 0x62, 0x2d, 0x9e, 0x64, 0xb3, 0x33,
++ 0xce, 0xf9, 0x1f, 0x86, 0xe7, 0x8b, 0xa2, 0xb8,
++ 0xe8, 0x99, 0x57, 0x8c, 0x11, 0xed, 0x66, 0xd9,
++ 0x3c, 0x72, 0xb9, 0xc3, 0xe6, 0x4e, 0x17, 0x3a,
++ 0x6a, 0xcb, 0x42, 0x24, 0x06, 0xed, 0x3e, 0x4e,
++ 0xa3, 0xe8, 0x6a, 0x94, 0xda, 0x0d, 0x4e, 0xd5,
++ 0x14, 0x19, 0xcf, 0xb6, 0x26, 0xd8, 0x2e, 0xcc,
++ 0x64, 0x76, 0x38, 0x49, 0x4d, 0xfe, 0x30, 0x6d,
++ 0xe4, 0xc8, 0x8c, 0x7b, 0xc4, 0xe0, 0x35, 0xba,
++ 0x22, 0x6e, 0x76, 0xe1, 0x1a, 0xf2, 0x53, 0xc3,
++ 0x28, 0xa2, 0x82, 0x1f, 0x61, 0x69, 0xad, 0xc1,
++ 0x7b, 0x28, 0x4b, 0x1e, 0x6c, 0x85, 0x95, 0x9b,
++ 0x51, 0xb5, 0x17, 0x7f, 0x12, 0x69, 0x8c, 0x24,
++ 0xd5, 0xc7, 0x5a, 0x5a, 0x11, 0x54, 0xff, 0x5a,
++ 0xf7, 0x16, 0xc3, 0x91, 0xa6, 0xf0, 0xdc, 0x0a,
++ 0xb6, 0xa7, 0x4a, 0x0d, 0x7a, 0x58, 0xfe, 0xa5,
++ 0xf5, 0xcb, 0x8f, 0x7b, 0x0e, 0xea, 0x57, 0xe7,
++ 0xbd, 0x79, 0xd6, 0x1c, 0x88, 0x23, 0x6c, 0xf2,
++ 0x4d, 0x29, 0x77, 0x53, 0x35, 0x6a, 0x00, 0x8d,
++ 0xcd, 0xa3, 0x58, 0xbe, 0x77, 0x99, 0x18, 0xf8,
++ 0xe6, 0xe1, 0x8f, 0xe9, 0x37, 0x8f, 0xe3, 0xe2,
++ 0x5a, 0x8a, 0x93, 0x25, 0xaf, 0xf3, 0x78, 0x80,
++ 0xbe, 0xa6, 0x1b, 0xc6, 0xac, 0x8b, 0x1c, 0x91,
++ 0x58, 0xe1, 0x9f, 0x89, 0x35, 0x9d, 0x1d, 0x21,
++ 0x29, 0x9f, 0xf4, 0x99, 0x02, 0x27, 0x0f, 0xa8,
++ 0x4f, 0x79, 0x94, 0x2b, 0x33, 0x2c, 0xda, 0xa2,
++ 0x26, 0x39, 0x83, 0x94, 0xef, 0x27, 0xd8, 0x53,
++ 0x8f, 0x66, 0x0d, 0xe4, 0x41, 0x7d, 0x34, 0xcd,
++ 0x43, 0x7c, 0x95, 0x0a, 0x53, 0xef, 0x66, 0xda,
++ 0x7e, 0x9b, 0xf3, 0x93, 0xaf, 0xd0, 0x73, 0x71,
++ 0xba, 0x40, 0x9b, 0x74, 0xf8, 0xd7, 0xd7, 0x41,
++ 0x6d, 0xaf, 0x72, 0x9c, 0x8d, 0x21, 0x87, 0x3c,
++ 0xfd, 0x0a, 0x90, 0xa9, 0x47, 0x96, 0x9e, 0xd3,
++ 0x88, 0xee, 0x73, 0xcf, 0x66, 0x2f, 0x52, 0x56,
++ 0x6d, 0xa9, 0x80, 0x4c, 0xe2, 0x6f, 0x62, 0x88,
++ 0x3f, 0x0e, 0x54, 0x17, 0x48, 0x80, 0x5d, 0xd3,
++ 0xc3, 0xda, 0x25, 0x3d, 0xa1, 0xc8, 0xcb, 0x9f,
++ 0x9b, 0x70, 0xb3, 0xa1, 0xeb, 0x04, 0x52, 0xa1,
++ 0xf2, 0x22, 0x0f, 0xfc, 0xc8, 0x18, 0xfa, 0xf9,
++ 0x85, 0x9c, 0xf1, 0xac, 0xeb, 0x0c, 0x02, 0x46,
++ 0x75, 0xd2, 0xf5, 0x2c, 0xe3, 0xd2, 0x59, 0x94,
++ 0x12, 0xf3, 0x3c, 0xfc, 0xd7, 0x92, 0xfa, 0x36,
++ 0xba, 0x61, 0x34, 0x38, 0x7c, 0xda, 0x48, 0x3e,
++ 0x08, 0xc9, 0x39, 0x23, 0x5e, 0x02, 0x2c, 0x1a,
++ 0x18, 0x7e, 0xb4, 0xd9, 0xfd, 0x9e, 0x40, 0x02,
++ 0xb1, 0x33, 0x37, 0x32, 0xe7, 0xde, 0xd6, 0xd0,
++ 0x7c, 0x58, 0x65, 0x4b, 0xf8, 0x34, 0x27, 0x9c,
++ 0x44, 0xb4, 0xbd, 0xe9, 0xe9, 0x4c, 0x78, 0x7d,
++ 0x4b, 0x9f, 0xce, 0xb1, 0xcd, 0x47, 0xa5, 0x37,
++ 0xe5, 0x6d, 0xbd, 0xb9, 0x43, 0x94, 0x0a, 0xd4,
++ 0xd6, 0xf9, 0x04, 0x5f, 0xb5, 0x66, 0x6c, 0x1a,
++ 0x35, 0x12, 0xe3, 0x36, 0x28, 0x27, 0x36, 0x58,
++ 0x01, 0x2b, 0x79, 0xe4, 0xba, 0x6d, 0x10, 0x7d,
++ 0x65, 0xdf, 0x84, 0x95, 0xf4, 0xd5, 0xb6, 0x8f,
++ 0x2b, 0x9f, 0x96, 0x00, 0x86, 0x60, 0xf0, 0x21,
++ 0x76, 0xa8, 0x6a, 0x8c, 0x28, 0x1c, 0xb3, 0x6b,
++ 0x97, 0xd7, 0xb6, 0x53, 0x2a, 0xcc, 0xab, 0x40,
++ 0x9d, 0x62, 0x79, 0x58, 0x52, 0xe6, 0x65, 0xb7,
++ 0xab, 0x55, 0x67, 0x9c, 0x89, 0x7c, 0x03, 0xb0,
++ 0x73, 0x59, 0xc5, 0x81, 0xf5, 0x18, 0x17, 0x5c,
++ 0x89, 0xf3, 0x78, 0x35, 0x44, 0x62, 0x78, 0x72,
++ 0xd0, 0x96, 0xeb, 0x31, 0xe7, 0x87, 0x77, 0x14,
++ 0x99, 0x51, 0xf2, 0x59, 0x26, 0x9e, 0xb5, 0xa6,
++ 0x45, 0xfe, 0x6e, 0xbd, 0x07, 0x4c, 0x94, 0x5a,
++ 0xa5, 0x7d, 0xfc, 0xf1, 0x2b, 0x77, 0xe2, 0xfe,
++ 0x17, 0xd4, 0x84, 0xa0, 0xac, 0xb5, 0xc7, 0xda,
++ 0xa9, 0x1a, 0xb6, 0xf3, 0x74, 0x11, 0xb4, 0x9d,
++ 0xfb, 0x79, 0x2e, 0x04, 0x2d, 0x50, 0x28, 0x83,
++ 0xbf, 0xc6, 0x52, 0xd3, 0x34, 0xd6, 0xe8, 0x7a,
++ 0xb6, 0xea, 0xe7, 0xa8, 0x6c, 0x15, 0x1e, 0x2c,
++ 0x57, 0xbc, 0x48, 0x4e, 0x5f, 0x5c, 0xb6, 0x92,
++ 0xd2, 0x49, 0x77, 0x81, 0x6d, 0x90, 0x70, 0xae,
++ 0x98, 0xa1, 0x03, 0x0d, 0x6b, 0xb9, 0x77, 0x14,
++ 0xf1, 0x4e, 0x23, 0xd3, 0xf8, 0x68, 0xbd, 0xc2,
++ 0xfe, 0x04, 0xb7, 0x5c, 0xc5, 0x17, 0x60, 0x8f,
++ 0x65, 0x54, 0xa4, 0x7a, 0x42, 0xdc, 0x18, 0x0d,
++ 0xb5, 0xcf, 0x0f, 0xd3, 0xc7, 0x91, 0x66, 0x1b,
++ 0x45, 0x42, 0x27, 0x75, 0x50, 0xe5, 0xee, 0xb8,
++ 0x7f, 0x33, 0x2c, 0xba, 0x4a, 0x92, 0x4d, 0x2c,
++ 0x3c, 0xe3, 0x0d, 0x80, 0x01, 0xba, 0x0d, 0x29,
++ 0xd8, 0x3c, 0xe9, 0x13, 0x16, 0x57, 0xe6, 0xea,
++ 0x94, 0x52, 0xe7, 0x00, 0x4d, 0x30, 0xb0, 0x0f,
++ 0x35, 0xb8, 0xb8, 0xa7, 0xb1, 0xb5, 0x3b, 0x44,
++ 0xe1, 0x2f, 0xfd, 0x88, 0xed, 0x43, 0xe7, 0x52,
++ 0x10, 0x93, 0xb3, 0x8a, 0x30, 0x6b, 0x0a, 0xf7,
++ 0x23, 0xc6, 0x50, 0x9d, 0x4a, 0xb0, 0xde, 0xc3,
++ 0xdc, 0x9b, 0x2f, 0x01, 0x56, 0x36, 0x09, 0xc5,
++ 0x2f, 0x6b, 0xfe, 0xf1, 0xd8, 0x27, 0x45, 0x03,
++ 0x30, 0x5e, 0x5c, 0x5b, 0xb4, 0x62, 0x0e, 0x1a,
++ 0xa9, 0x21, 0x2b, 0x92, 0x94, 0x87, 0x62, 0x57,
++ 0x4c, 0x10, 0x74, 0x1a, 0xf1, 0x0a, 0xc5, 0x84,
++ 0x3b, 0x9e, 0x72, 0x02, 0xd7, 0xcc, 0x09, 0x56,
++ 0xbd, 0x54, 0xc1, 0xf0, 0xc3, 0xe3, 0xb3, 0xf8,
++ 0xd2, 0x0d, 0x61, 0xcb, 0xef, 0xce, 0x0d, 0x05,
++ 0xb0, 0x98, 0xd9, 0x8e, 0x4f, 0xf9, 0xbc, 0x93,
++ 0xa6, 0xea, 0xc8, 0xcf, 0x10, 0x53, 0x4b, 0xf1,
++ 0xec, 0xfc, 0x89, 0xf9, 0x64, 0xb0, 0x22, 0xbf,
++ 0x9e, 0x55, 0x46, 0x9f, 0x7c, 0x50, 0x8e, 0x84,
++ 0x54, 0x20, 0x98, 0xd7, 0x6c, 0x40, 0x1e, 0xdb,
++ 0x69, 0x34, 0x78, 0x61, 0x24, 0x21, 0x9c, 0x8a,
++ 0xb3, 0x62, 0x31, 0x8b, 0x6e, 0xf5, 0x2a, 0x35,
++ 0x86, 0x13, 0xb1, 0x6c, 0x64, 0x2e, 0x41, 0xa5,
++ 0x05, 0xf2, 0x42, 0xba, 0xd2, 0x3a, 0x0d, 0x8e,
++ 0x8a, 0x59, 0x94, 0x3c, 0xcf, 0x36, 0x27, 0x82,
++ 0xc2, 0x45, 0xee, 0x58, 0xcd, 0x88, 0xb4, 0xec,
++ 0xde, 0xb2, 0x96, 0x0a, 0xaf, 0x38, 0x6f, 0x88,
++ 0xd7, 0xd8, 0xe1, 0xdf, 0xb9, 0x96, 0xa9, 0x0a,
++ 0xb1, 0x95, 0x28, 0x86, 0x20, 0xe9, 0x17, 0x49,
++ 0xa2, 0x29, 0x38, 0xaa, 0xa5, 0xe9, 0x6e, 0xf1,
++ 0x19, 0x27, 0xc0, 0xd5, 0x2a, 0x22, 0xc3, 0x0b,
++ 0xdb, 0x7c, 0x73, 0x10, 0xb9, 0xba, 0x89, 0x76,
++ 0x54, 0xae, 0x7d, 0x71, 0xb3, 0x93, 0xf6, 0x32,
++ 0xe6, 0x47, 0x43, 0x55, 0xac, 0xa0, 0x0d, 0xc2,
++ 0x93, 0x27, 0x4a, 0x8e, 0x0e, 0x74, 0x15, 0xc7,
++ 0x0b, 0x85, 0xd9, 0x0c, 0xa9, 0x30, 0x7a, 0x3e,
++ 0xea, 0x8f, 0x85, 0x6d, 0x3a, 0x12, 0x4f, 0x72,
++ 0x69, 0x58, 0x7a, 0x80, 0xbb, 0xb5, 0x97, 0xf3,
++ 0xcf, 0x70, 0xd2, 0x5d, 0xdd, 0x4d, 0x21, 0x79,
++ 0x54, 0x4d, 0xe4, 0x05, 0xe8, 0xbd, 0xc2, 0x62,
++ 0xb1, 0x3b, 0x77, 0x1c, 0xd6, 0x5c, 0xf3, 0xa0,
++ 0x79, 0x00, 0xa8, 0x6c, 0x29, 0xd9, 0x18, 0x24,
++ 0x36, 0xa2, 0x46, 0xc0, 0x96, 0x65, 0x7f, 0xbd,
++ 0x2a, 0xed, 0x36, 0x16, 0x0c, 0xaa, 0x9f, 0xf4,
++ 0xc5, 0xb4, 0xe2, 0x12, 0xed, 0x69, 0xed, 0x4f,
++ 0x26, 0x2c, 0x39, 0x52, 0x89, 0x98, 0xe7, 0x2c,
++ 0x99, 0xa4, 0x9e, 0xa3, 0x9b, 0x99, 0x46, 0x7a,
++ 0x3a, 0xdc, 0xa8, 0x59, 0xa3, 0xdb, 0xc3, 0x3b,
++ 0x95, 0x0d, 0x3b, 0x09, 0x6e, 0xee, 0x83, 0x5d,
++ 0x32, 0x4d, 0xed, 0xab, 0xfa, 0x98, 0x14, 0x4e,
++ 0xc3, 0x15, 0x45, 0x53, 0x61, 0xc4, 0x93, 0xbd,
++ 0x90, 0xf4, 0x99, 0x95, 0x4c, 0xe6, 0x76, 0x92,
++ 0x29, 0x90, 0x46, 0x30, 0x92, 0x69, 0x7d, 0x13,
++ 0xf2, 0xa5, 0xcd, 0x69, 0x49, 0x44, 0xb2, 0x0f,
++ 0x63, 0x40, 0x36, 0x5f, 0x09, 0xe2, 0x78, 0xf8,
++ 0x91, 0xe3, 0xe2, 0xfa, 0x10, 0xf7, 0xc8, 0x24,
++ 0xa8, 0x89, 0x32, 0x5c, 0x37, 0x25, 0x1d, 0xb2,
++ 0xea, 0x17, 0x8a, 0x0a, 0xa9, 0x64, 0xc3, 0x7c,
++ 0x3c, 0x7c, 0xbd, 0xc6, 0x79, 0x34, 0xe7, 0xe2,
++ 0x85, 0x8e, 0xbf, 0xf8, 0xde, 0x92, 0xa0, 0xae,
++ 0x20, 0xc4, 0xf6, 0xbb, 0x1f, 0x38, 0x19, 0x0e,
++ 0xe8, 0x79, 0x9c, 0xa1, 0x23, 0xe9, 0x54, 0x7e,
++ 0x37, 0x2f, 0xe2, 0x94, 0x32, 0xaf, 0xa0, 0x23,
++ 0x49, 0xe4, 0xc0, 0xb3, 0xac, 0x00, 0x8f, 0x36,
++ 0x05, 0xc4, 0xa6, 0x96, 0xec, 0x05, 0x98, 0x4f,
++ 0x96, 0x67, 0x57, 0x1f, 0x20, 0x86, 0x1b, 0x2d,
++ 0x69, 0xe4, 0x29, 0x93, 0x66, 0x5f, 0xaf, 0x6b,
++ 0x88, 0x26, 0x2c, 0x67, 0x02, 0x4b, 0x52, 0xd0,
++ 0x83, 0x7a, 0x43, 0x1f, 0xc0, 0x71, 0x15, 0x25,
++ 0x77, 0x65, 0x08, 0x60, 0x11, 0x76, 0x4c, 0x8d,
++ 0xed, 0xa9, 0x27, 0xc6, 0xb1, 0x2a, 0x2c, 0x6a,
++ 0x4a, 0x97, 0xf5, 0xc6, 0xb7, 0x70, 0x42, 0xd3,
++ 0x03, 0xd1, 0x24, 0x95, 0xec, 0x6d, 0xab, 0x38,
++ 0x72, 0xce, 0xe2, 0x8b, 0x33, 0xd7, 0x51, 0x09,
++ 0xdc, 0x45, 0xe0, 0x09, 0x96, 0x32, 0xf3, 0xc4,
++ 0x84, 0xdc, 0x73, 0x73, 0x2d, 0x1b, 0x11, 0x98,
++ 0xc5, 0x0e, 0x69, 0x28, 0x94, 0xc7, 0xb5, 0x4d,
++ 0xc8, 0x8a, 0xd0, 0xaa, 0x13, 0x2e, 0x18, 0x74,
++ 0xdd, 0xd1, 0x1e, 0xf3, 0x90, 0xe8, 0xfc, 0x9a,
++ 0x72, 0x4a, 0x0e, 0xd1, 0xe4, 0xfb, 0x0d, 0x96,
++ 0xd1, 0x0c, 0x79, 0x85, 0x1b, 0x1c, 0xfe, 0xe1,
++ 0x62, 0x8f, 0x7a, 0x73, 0x32, 0xab, 0xc8, 0x18,
++ 0x69, 0xe3, 0x34, 0x30, 0xdf, 0x13, 0xa6, 0xe5,
++ 0xe8, 0x0e, 0x67, 0x7f, 0x81, 0x11, 0xb4, 0x60,
++ 0xc7, 0xbd, 0x79, 0x65, 0x50, 0xdc, 0xc4, 0x5b,
++ 0xde, 0x39, 0xa4, 0x01, 0x72, 0x63, 0xf3, 0xd1,
++ 0x64, 0x4e, 0xdf, 0xfc, 0x27, 0x92, 0x37, 0x0d,
++ 0x57, 0xcd, 0x11, 0x4f, 0x11, 0x04, 0x8e, 0x1d,
++ 0x16, 0xf7, 0xcd, 0x92, 0x9a, 0x99, 0x30, 0x14,
++ 0xf1, 0x7c, 0x67, 0x1b, 0x1f, 0x41, 0x0b, 0xe8,
++ 0x32, 0xe8, 0xb8, 0xc1, 0x4f, 0x54, 0x86, 0x4f,
++ 0xe5, 0x79, 0x81, 0x73, 0xcd, 0x43, 0x59, 0x68,
++ 0x73, 0x02, 0x3b, 0x78, 0x21, 0x72, 0x43, 0x00,
++ 0x49, 0x17, 0xf7, 0x00, 0xaf, 0x68, 0x24, 0x53,
++ 0x05, 0x0a, 0xc3, 0x33, 0xe0, 0x33, 0x3f, 0x69,
++ 0xd2, 0x84, 0x2f, 0x0b, 0xed, 0xde, 0x04, 0xf4,
++ 0x11, 0x94, 0x13, 0x69, 0x51, 0x09, 0x28, 0xde,
++ 0x57, 0x5c, 0xef, 0xdc, 0x9a, 0x49, 0x1c, 0x17,
++ 0x97, 0xf3, 0x96, 0xc1, 0x7f, 0x5d, 0x2e, 0x7d,
++ 0x55, 0xb8, 0xb3, 0x02, 0x09, 0xb3, 0x1f, 0xe7,
++ 0xc9, 0x8d, 0xa3, 0x36, 0x34, 0x8a, 0x77, 0x13,
++ 0x30, 0x63, 0x4c, 0xa5, 0xcd, 0xc3, 0xe0, 0x7e,
++ 0x05, 0xa1, 0x7b, 0x0c, 0xcb, 0x74, 0x47, 0x31,
++ 0x62, 0x03, 0x43, 0xf1, 0x87, 0xb4, 0xb0, 0x85,
++ 0x87, 0x8e, 0x4b, 0x25, 0xc7, 0xcf, 0xae, 0x4b,
++ 0x36, 0x46, 0x3e, 0x62, 0xbc, 0x6f, 0xeb, 0x5f,
++ 0x73, 0xac, 0xe6, 0x07, 0xee, 0xc1, 0xa1, 0xd6,
++ 0xc4, 0xab, 0xc9, 0xd6, 0x89, 0x45, 0xe1, 0xf1,
++ 0x04, 0x4e, 0x1a, 0x6f, 0xbb, 0x4f, 0x3a, 0xa3,
++ 0xa0, 0xcb, 0xa3, 0x0a, 0xd8, 0x71, 0x35, 0x55,
++ 0xe4, 0xbc, 0x2e, 0x04, 0x06, 0xe6, 0xff, 0x5b,
++ 0x1c, 0xc0, 0x11, 0x7c, 0xc5, 0x17, 0xf3, 0x38,
++ 0xcf, 0xe9, 0xba, 0x0f, 0x0e, 0xef, 0x02, 0xc2,
++ 0x8d, 0xc6, 0xbc, 0x4b, 0x67, 0x20, 0x95, 0xd7,
++ 0x2c, 0x45, 0x5b, 0x86, 0x44, 0x8c, 0x6f, 0x2e,
++ 0x7e, 0x9f, 0x1c, 0x77, 0xba, 0x6b, 0x0e, 0xa3,
++ 0x69, 0xdc, 0xab, 0x24, 0x57, 0x60, 0x47, 0xc1,
++ 0xd1, 0xa5, 0x9d, 0x23, 0xe6, 0xb1, 0x37, 0xfe,
++ 0x93, 0xd2, 0x4c, 0x46, 0xf9, 0x0c, 0xc6, 0xfb,
++ 0xd6, 0x9d, 0x99, 0x69, 0xab, 0x7a, 0x07, 0x0c,
++ 0x65, 0xe7, 0xc4, 0x08, 0x96, 0xe2, 0xa5, 0x01,
++ 0x3f, 0x46, 0x07, 0x05, 0x7e, 0xe8, 0x9a, 0x90,
++ 0x50, 0xdc, 0xe9, 0x7a, 0xea, 0xa1, 0x39, 0x6e,
++ 0x66, 0xe4, 0x6f, 0xa5, 0x5f, 0xb2, 0xd9, 0x5b,
++ 0xf5, 0xdb, 0x2a, 0x32, 0xf0, 0x11, 0x6f, 0x7c,
++ 0x26, 0x10, 0x8f, 0x3d, 0x80, 0xe9, 0x58, 0xf7,
++ 0xe0, 0xa8, 0x57, 0xf8, 0xdb, 0x0e, 0xce, 0x99,
++ 0x63, 0x19, 0x3d, 0xd5, 0xec, 0x1b, 0x77, 0x69,
++ 0x98, 0xf6, 0xe4, 0x5f, 0x67, 0x17, 0x4b, 0x09,
++ 0x85, 0x62, 0x82, 0x70, 0x18, 0xe2, 0x9a, 0x78,
++ 0xe2, 0x62, 0xbd, 0xb4, 0xf1, 0x42, 0xc6, 0xfb,
++ 0x08, 0xd0, 0xbd, 0xeb, 0x4e, 0x09, 0xf2, 0xc8,
++ 0x1e, 0xdc, 0x3d, 0x32, 0x21, 0x56, 0x9c, 0x4f,
++ 0x35, 0xf3, 0x61, 0x06, 0x72, 0x84, 0xc4, 0x32,
++ 0xf2, 0xf1, 0xfa, 0x0b, 0x2f, 0xc3, 0xdb, 0x02,
++ 0x04, 0xc2, 0xde, 0x57, 0x64, 0x60, 0x8d, 0xcf,
++ 0xcb, 0x86, 0x5d, 0x97, 0x3e, 0xb1, 0x9c, 0x01,
++ 0xd6, 0x28, 0x8f, 0x99, 0xbc, 0x46, 0xeb, 0x05,
++ 0xaf, 0x7e, 0xb8, 0x21, 0x2a, 0x56, 0x85, 0x1c,
++ 0xb3, 0x71, 0xa0, 0xde, 0xca, 0x96, 0xf1, 0x78,
++ 0x49, 0xa2, 0x99, 0x81, 0x80, 0x5c, 0x01, 0xf5,
++ 0xa0, 0xa2, 0x56, 0x63, 0xe2, 0x70, 0x07, 0xa5,
++ 0x95, 0xd6, 0x85, 0xeb, 0x36, 0x9e, 0xa9, 0x51,
++ 0x66, 0x56, 0x5f, 0x1d, 0x02, 0x19, 0xe2, 0xf6,
++ 0x4f, 0x73, 0x38, 0x09, 0x75, 0x64, 0x48, 0xe0,
++ 0xf1, 0x7e, 0x0e, 0xe8, 0x9d, 0xf9, 0xed, 0x94,
++ 0xfe, 0x16, 0x26, 0x62, 0x49, 0x74, 0xf4, 0xb0,
++ 0xd4, 0xa9, 0x6c, 0xb0, 0xfd, 0x53, 0xe9, 0x81,
++ 0xe0, 0x7a, 0xbf, 0xcf, 0xb5, 0xc4, 0x01, 0x81,
++ 0x79, 0x99, 0x77, 0x01, 0x3b, 0xe9, 0xa2, 0xb6,
++ 0xe6, 0x6a, 0x8a, 0x9e, 0x56, 0x1c, 0x8d, 0x1e,
++ 0x8f, 0x06, 0x55, 0x2c, 0x6c, 0xdc, 0x92, 0x87,
++ 0x64, 0x3b, 0x4b, 0x19, 0xa1, 0x13, 0x64, 0x1d,
++ 0x4a, 0xe9, 0xc0, 0x00, 0xb8, 0x95, 0xef, 0x6b,
++ 0x1a, 0x86, 0x6d, 0x37, 0x52, 0x02, 0xc2, 0xe0,
++ 0xc8, 0xbb, 0x42, 0x0c, 0x02, 0x21, 0x4a, 0xc9,
++ 0xef, 0xa0, 0x54, 0xe4, 0x5e, 0x16, 0x53, 0x81,
++ 0x70, 0x62, 0x10, 0xaf, 0xde, 0xb8, 0xb5, 0xd3,
++ 0xe8, 0x5e, 0x6c, 0xc3, 0x8a, 0x3e, 0x18, 0x07,
++ 0xf2, 0x2f, 0x7d, 0xa7, 0xe1, 0x3d, 0x4e, 0xb4,
++ 0x26, 0xa7, 0xa3, 0x93, 0x86, 0xb2, 0x04, 0x1e,
++ 0x53, 0x5d, 0x86, 0xd6, 0xde, 0x65, 0xca, 0xe3,
++ 0x4e, 0xc1, 0xcf, 0xef, 0xc8, 0x70, 0x1b, 0x83,
++ 0x13, 0xdd, 0x18, 0x8b, 0x0d, 0x76, 0xd2, 0xf6,
++ 0x37, 0x7a, 0x93, 0x7a, 0x50, 0x11, 0x9f, 0x96,
++ 0x86, 0x25, 0xfd, 0xac, 0xdc, 0xbe, 0x18, 0x93,
++ 0x19, 0x6b, 0xec, 0x58, 0x4f, 0xb9, 0x75, 0xa7,
++ 0xdd, 0x3f, 0x2f, 0xec, 0xc8, 0x5a, 0x84, 0xab,
++ 0xd5, 0xe4, 0x8a, 0x07, 0xf6, 0x4d, 0x23, 0xd6,
++ 0x03, 0xfb, 0x03, 0x6a, 0xea, 0x66, 0xbf, 0xd4,
++ 0xb1, 0x34, 0xfb, 0x78, 0xe9, 0x55, 0xdc, 0x7c,
++ 0x3d, 0x9c, 0xe5, 0x9a, 0xac, 0xc3, 0x7a, 0x80,
++ 0x24, 0x6d, 0xa0, 0xef, 0x25, 0x7c, 0xb7, 0xea,
++ 0xce, 0x4d, 0x5f, 0x18, 0x60, 0xce, 0x87, 0x22,
++ 0x66, 0x2f, 0xd5, 0xdd, 0xdd, 0x02, 0x21, 0x75,
++ 0x82, 0xa0, 0x1f, 0x58, 0xc6, 0xd3, 0x62, 0xf7,
++ 0x32, 0xd8, 0xaf, 0x1e, 0x07, 0x77, 0x51, 0x96,
++ 0xd5, 0x6b, 0x1e, 0x7e, 0x80, 0x02, 0xe8, 0x67,
++ 0xea, 0x17, 0x0b, 0x10, 0xd2, 0x3f, 0x28, 0x25,
++ 0x4f, 0x05, 0x77, 0x02, 0x14, 0x69, 0xf0, 0x2c,
++ 0xbe, 0x0c, 0xf1, 0x74, 0x30, 0xd1, 0xb9, 0x9b,
++ 0xfc, 0x8c, 0xbb, 0x04, 0x16, 0xd9, 0xba, 0xc3,
++ 0xbc, 0x91, 0x8a, 0xc4, 0x30, 0xa4, 0xb0, 0x12,
++ 0x4c, 0x21, 0x87, 0xcb, 0xc9, 0x1d, 0x16, 0x96,
++ 0x07, 0x6f, 0x23, 0x54, 0xb9, 0x6f, 0x79, 0xe5,
++ 0x64, 0xc0, 0x64, 0xda, 0xb1, 0xae, 0xdd, 0x60,
++ 0x6c, 0x1a, 0x9d, 0xd3, 0x04, 0x8e, 0x45, 0xb0,
++ 0x92, 0x61, 0xd0, 0x48, 0x81, 0xed, 0x5e, 0x1d,
++ 0xa0, 0xc9, 0xa4, 0x33, 0xc7, 0x13, 0x51, 0x5d,
++ 0x7f, 0x83, 0x73, 0xb6, 0x70, 0x18, 0x65, 0x3e,
++ 0x2f, 0x0e, 0x7a, 0x12, 0x39, 0x98, 0xab, 0xd8,
++ 0x7e, 0x6f, 0xa3, 0xd1, 0xba, 0x56, 0xad, 0xbd,
++ 0xf0, 0x03, 0x01, 0x1c, 0x85, 0x35, 0x9f, 0xeb,
++ 0x19, 0x63, 0xa1, 0xaf, 0xfe, 0x2d, 0x35, 0x50,
++ 0x39, 0xa0, 0x65, 0x7c, 0x95, 0x7e, 0x6b, 0xfe,
++ 0xc1, 0xac, 0x07, 0x7c, 0x98, 0x4f, 0xbe, 0x57,
++ 0xa7, 0x22, 0xec, 0xe2, 0x7e, 0x29, 0x09, 0x53,
++ 0xe8, 0xbf, 0xb4, 0x7e, 0x3f, 0x8f, 0xfc, 0x14,
++ 0xce, 0x54, 0xf9, 0x18, 0x58, 0xb5, 0xff, 0x44,
++ 0x05, 0x9d, 0xce, 0x1b, 0xb6, 0x82, 0x23, 0xc8,
++ 0x2e, 0xbc, 0x69, 0xbb, 0x4a, 0x29, 0x0f, 0x65,
++ 0x94, 0xf0, 0x63, 0x06, 0x0e, 0xef, 0x8c, 0xbd,
++ 0xff, 0xfd, 0xb0, 0x21, 0x6e, 0x57, 0x05, 0x75,
++ 0xda, 0xd5, 0xc4, 0xeb, 0x8d, 0x32, 0xf7, 0x50,
++ 0xd3, 0x6f, 0x22, 0xed, 0x5f, 0x8e, 0xa2, 0x5b,
++ 0x80, 0x8c, 0xc8, 0x78, 0x40, 0x24, 0x4b, 0x89,
++ 0x30, 0xce, 0x7a, 0x97, 0x0e, 0xc4, 0xaf, 0xef,
++ 0x9b, 0xb4, 0xcd, 0x66, 0x74, 0x14, 0x04, 0x2b,
++ 0xf7, 0xce, 0x0b, 0x1c, 0x6e, 0xc2, 0x78, 0x8c,
++ 0xca, 0xc5, 0xd0, 0x1c, 0x95, 0x4a, 0x91, 0x2d,
++ 0xa7, 0x20, 0xeb, 0x86, 0x52, 0xb7, 0x67, 0xd8,
++ 0x0c, 0xd6, 0x04, 0x14, 0xde, 0x51, 0x74, 0x75,
++ 0xe7, 0x11, 0xb4, 0x87, 0xa3, 0x3d, 0x2d, 0xad,
++ 0x4f, 0xef, 0xa0, 0x0f, 0x70, 0x00, 0x6d, 0x13,
++ 0x19, 0x1d, 0x41, 0x50, 0xe9, 0xd8, 0xf0, 0x32,
++ 0x71, 0xbc, 0xd3, 0x11, 0xf2, 0xac, 0xbe, 0xaf,
++ 0x75, 0x46, 0x65, 0x4e, 0x07, 0x34, 0x37, 0xa3,
++ 0x89, 0xfe, 0x75, 0xd4, 0x70, 0x4c, 0xc6, 0x3f,
++ 0x69, 0x24, 0x0e, 0x38, 0x67, 0x43, 0x8c, 0xde,
++ 0x06, 0xb5, 0xb8, 0xe7, 0xc4, 0xf0, 0x41, 0x8f,
++ 0xf0, 0xbd, 0x2f, 0x0b, 0xb9, 0x18, 0xf8, 0xde,
++ 0x64, 0xb1, 0xdb, 0xee, 0x00, 0x50, 0x77, 0xe1,
++ 0xc7, 0xff, 0xa6, 0xfa, 0xdd, 0x70, 0xf4, 0xe3,
++ 0x93, 0xe9, 0x77, 0x35, 0x3d, 0x4b, 0x2f, 0x2b,
++ 0x6d, 0x55, 0xf0, 0xfc, 0x88, 0x54, 0x4e, 0x89,
++ 0xc1, 0x8a, 0x23, 0x31, 0x2d, 0x14, 0x2a, 0xb8,
++ 0x1b, 0x15, 0xdd, 0x9e, 0x6e, 0x7b, 0xda, 0x05,
++ 0x91, 0x7d, 0x62, 0x64, 0x96, 0x72, 0xde, 0xfc,
++ 0xc1, 0xec, 0xf0, 0x23, 0x51, 0x6f, 0xdb, 0x5b,
++ 0x1d, 0x08, 0x57, 0xce, 0x09, 0xb8, 0xf6, 0xcd,
++ 0x8d, 0x95, 0xf2, 0x20, 0xbf, 0x0f, 0x20, 0x57,
++ 0x98, 0x81, 0x84, 0x4f, 0x15, 0x5c, 0x76, 0xe7,
++ 0x3e, 0x0a, 0x3a, 0x6c, 0xc4, 0x8a, 0xbe, 0x78,
++ 0x74, 0x77, 0xc3, 0x09, 0x4b, 0x5d, 0x48, 0xe4,
++ 0xc8, 0xcb, 0x0b, 0xea, 0x17, 0x28, 0xcf, 0xcf,
++ 0x31, 0x32, 0x44, 0xa4, 0xe5, 0x0e, 0x1a, 0x98,
++ 0x94, 0xc4, 0xf0, 0xff, 0xae, 0x3e, 0x44, 0xe8,
++ 0xa5, 0xb3, 0xb5, 0x37, 0x2f, 0xe8, 0xaf, 0x6f,
++ 0x28, 0xc1, 0x37, 0x5f, 0x31, 0xd2, 0xb9, 0x33,
++ 0xb1, 0xb2, 0x52, 0x94, 0x75, 0x2c, 0x29, 0x59,
++ 0x06, 0xc2, 0x25, 0xe8, 0x71, 0x65, 0x4e, 0xed,
++ 0xc0, 0x9c, 0xb1, 0xbb, 0x25, 0xdc, 0x6c, 0xe7,
++ 0x4b, 0xa5, 0x7a, 0x54, 0x7a, 0x60, 0xff, 0x7a,
++ 0xe0, 0x50, 0x40, 0x96, 0x35, 0x63, 0xe4, 0x0b,
++ 0x76, 0xbd, 0xa4, 0x65, 0x00, 0x1b, 0x57, 0x88,
++ 0xae, 0xed, 0x39, 0x88, 0x42, 0x11, 0x3c, 0xed,
++ 0x85, 0x67, 0x7d, 0xb9, 0x68, 0x82, 0xe9, 0x43,
++ 0x3c, 0x47, 0x53, 0xfa, 0xe8, 0xf8, 0x9f, 0x1f,
++ 0x9f, 0xef, 0x0f, 0xf7, 0x30, 0xd9, 0x30, 0x0e,
++ 0xb9, 0x9f, 0x69, 0x18, 0x2f, 0x7e, 0xf8, 0xf8,
++ 0xf8, 0x8c, 0x0f, 0xd4, 0x02, 0x4d, 0xea, 0xcd,
++ 0x0a, 0x9c, 0x6f, 0x71, 0x6d, 0x5a, 0x4c, 0x60,
++ 0xce, 0x20, 0x56, 0x32, 0xc6, 0xc5, 0x99, 0x1f,
++ 0x09, 0xe6, 0x4e, 0x18, 0x1a, 0x15, 0x13, 0xa8,
++ 0x7d, 0xb1, 0x6b, 0xc0, 0xb2, 0x6d, 0xf8, 0x26,
++ 0x66, 0xf8, 0x3d, 0x18, 0x74, 0x70, 0x66, 0x7a,
++ 0x34, 0x17, 0xde, 0xba, 0x47, 0xf1, 0x06, 0x18,
++ 0xcb, 0xaf, 0xeb, 0x4a, 0x1e, 0x8f, 0xa7, 0x77,
++ 0xe0, 0x3b, 0x78, 0x62, 0x66, 0xc9, 0x10, 0xea,
++ 0x1f, 0xb7, 0x29, 0x0a, 0x45, 0xa1, 0x1d, 0x1e,
++ 0x1d, 0xe2, 0x65, 0x61, 0x50, 0x9c, 0xd7, 0x05,
++ 0xf2, 0x0b, 0x5b, 0x12, 0x61, 0x02, 0xc8, 0xe5,
++ 0x63, 0x4f, 0x20, 0x0c, 0x07, 0x17, 0x33, 0x5e,
++ 0x03, 0x9a, 0x53, 0x0f, 0x2e, 0x55, 0xfe, 0x50,
++ 0x43, 0x7d, 0xd0, 0xb6, 0x7e, 0x5a, 0xda, 0xae,
++ 0x58, 0xef, 0x15, 0xa9, 0x83, 0xd9, 0x46, 0xb1,
++ 0x42, 0xaa, 0xf5, 0x02, 0x6c, 0xce, 0x92, 0x06,
++ 0x1b, 0xdb, 0x66, 0x45, 0x91, 0x79, 0xc2, 0x2d,
++ 0xe6, 0x53, 0xd3, 0x14, 0xfd, 0xbb, 0x44, 0x63,
++ 0xc6, 0xd7, 0x3d, 0x7a, 0x0c, 0x75, 0x78, 0x9d,
++ 0x5c, 0xa6, 0x39, 0xb3, 0xe5, 0x63, 0xca, 0x8b,
++ 0xfe, 0xd3, 0xef, 0x60, 0x83, 0xf6, 0x8e, 0x70,
++ 0xb6, 0x67, 0xc7, 0x77, 0xed, 0x23, 0xef, 0x4c,
++ 0xf0, 0xed, 0x2d, 0x07, 0x59, 0x6f, 0xc1, 0x01,
++ 0x34, 0x37, 0x08, 0xab, 0xd9, 0x1f, 0x09, 0xb1,
++ 0xce, 0x5b, 0x17, 0xff, 0x74, 0xf8, 0x9c, 0xd5,
++ 0x2c, 0x56, 0x39, 0x79, 0x0f, 0x69, 0x44, 0x75,
++ 0x58, 0x27, 0x01, 0xc4, 0xbf, 0xa7, 0xa1, 0x1d,
++ 0x90, 0x17, 0x77, 0x86, 0x5a, 0x3f, 0xd9, 0xd1,
++ 0x0e, 0xa0, 0x10, 0xf8, 0xec, 0x1e, 0xa5, 0x7f,
++ 0x5e, 0x36, 0xd1, 0xe3, 0x04, 0x2c, 0x70, 0xf7,
++ 0x8e, 0xc0, 0x98, 0x2f, 0x6c, 0x94, 0x2b, 0x41,
++ 0xb7, 0x60, 0x00, 0xb7, 0x2e, 0xb8, 0x02, 0x8d,
++ 0xb8, 0xb0, 0xd3, 0x86, 0xba, 0x1d, 0xd7, 0x90,
++ 0xd6, 0xb6, 0xe1, 0xfc, 0xd7, 0xd8, 0x28, 0x06,
++ 0x63, 0x9b, 0xce, 0x61, 0x24, 0x79, 0xc0, 0x70,
++ 0x52, 0xd0, 0xb6, 0xd4, 0x28, 0x95, 0x24, 0x87,
++ 0x03, 0x1f, 0xb7, 0x9a, 0xda, 0xa3, 0xfb, 0x52,
++ 0x5b, 0x68, 0xe7, 0x4c, 0x8c, 0x24, 0xe1, 0x42,
++ 0xf7, 0xd5, 0xfd, 0xad, 0x06, 0x32, 0x9f, 0xba,
++ 0xc1, 0xfc, 0xdd, 0xc6, 0xfc, 0xfc, 0xb3, 0x38,
++ 0x74, 0x56, 0x58, 0x40, 0x02, 0x37, 0x52, 0x2c,
++ 0x55, 0xcc, 0xb3, 0x9e, 0x7a, 0xe9, 0xd4, 0x38,
++ 0x41, 0x5e, 0x0c, 0x35, 0xe2, 0x11, 0xd1, 0x13,
++ 0xf8, 0xb7, 0x8d, 0x72, 0x6b, 0x22, 0x2a, 0xb0,
++ 0xdb, 0x08, 0xba, 0x35, 0xb9, 0x3f, 0xc8, 0xd3,
++ 0x24, 0x90, 0xec, 0x58, 0xd2, 0x09, 0xc7, 0x2d,
++ 0xed, 0x38, 0x80, 0x36, 0x72, 0x43, 0x27, 0x49,
++ 0x4a, 0x80, 0x8a, 0xa2, 0xe8, 0xd3, 0xda, 0x30,
++ 0x7d, 0xb6, 0x82, 0x37, 0x86, 0x92, 0x86, 0x3e,
++ 0x08, 0xb2, 0x28, 0x5a, 0x55, 0x44, 0x24, 0x7d,
++ 0x40, 0x48, 0x8a, 0xb6, 0x89, 0x58, 0x08, 0xa0,
++ 0xd6, 0x6d, 0x3a, 0x17, 0xbf, 0xf6, 0x54, 0xa2,
++ 0xf5, 0xd3, 0x8c, 0x0f, 0x78, 0x12, 0x57, 0x8b,
++ 0xd5, 0xc2, 0xfd, 0x58, 0x5b, 0x7f, 0x38, 0xe3,
++ 0xcc, 0xb7, 0x7c, 0x48, 0xb3, 0x20, 0xe8, 0x81,
++ 0x14, 0x32, 0x45, 0x05, 0xe0, 0xdb, 0x9f, 0x75,
++ 0x85, 0xb4, 0x6a, 0xfc, 0x95, 0xe3, 0x54, 0x22,
++ 0x12, 0xee, 0x30, 0xfe, 0xd8, 0x30, 0xef, 0x34,
++ 0x50, 0xab, 0x46, 0x30, 0x98, 0x2f, 0xb7, 0xc0,
++ 0x15, 0xa2, 0x83, 0xb6, 0xf2, 0x06, 0x21, 0xa2,
++ 0xc3, 0x26, 0x37, 0x14, 0xd1, 0x4d, 0xb5, 0x10,
++ 0x52, 0x76, 0x4d, 0x6a, 0xee, 0xb5, 0x2b, 0x15,
++ 0xb7, 0xf9, 0x51, 0xe8, 0x2a, 0xaf, 0xc7, 0xfa,
++ 0x77, 0xaf, 0xb0, 0x05, 0x4d, 0xd1, 0x68, 0x8e,
++ 0x74, 0x05, 0x9f, 0x9d, 0x93, 0xa5, 0x3e, 0x7f,
++ 0x4e, 0x5f, 0x9d, 0xcb, 0x09, 0xc7, 0x83, 0xe3,
++ 0x02, 0x9d, 0x27, 0x1f, 0xef, 0x85, 0x05, 0x8d,
++ 0xec, 0x55, 0x88, 0x0f, 0x0d, 0x7c, 0x4c, 0xe8,
++ 0xa1, 0x75, 0xa0, 0xd8, 0x06, 0x47, 0x14, 0xef,
++ 0xaa, 0x61, 0xcf, 0x26, 0x15, 0xad, 0xd8, 0xa3,
++ 0xaa, 0x75, 0xf2, 0x78, 0x4a, 0x5a, 0x61, 0xdf,
++ 0x8b, 0xc7, 0x04, 0xbc, 0xb2, 0x32, 0xd2, 0x7e,
++ 0x42, 0xee, 0xb4, 0x2f, 0x51, 0xff, 0x7b, 0x2e,
++ 0xd3, 0x02, 0xe8, 0xdc, 0x5d, 0x0d, 0x50, 0xdc,
++ 0xae, 0xb7, 0x46, 0xf9, 0xa8, 0xe6, 0xd0, 0x16,
++ 0xcc, 0xe6, 0x2c, 0x81, 0xc7, 0xad, 0xe9, 0xf0,
++ 0x05, 0x72, 0x6d, 0x3d, 0x0a, 0x7a, 0xa9, 0x02,
++ 0xac, 0x82, 0x93, 0x6e, 0xb6, 0x1c, 0x28, 0xfc,
++ 0x44, 0x12, 0xfb, 0x73, 0x77, 0xd4, 0x13, 0x39,
++ 0x29, 0x88, 0x8a, 0xf3, 0x5c, 0xa6, 0x36, 0xa0,
++ 0x2a, 0xed, 0x7e, 0xb1, 0x1d, 0xd6, 0x4c, 0x6b,
++ 0x41, 0x01, 0x18, 0x5d, 0x5d, 0x07, 0x97, 0xa6,
++ 0x4b, 0xef, 0x31, 0x18, 0xea, 0xac, 0xb1, 0x84,
++ 0x21, 0xed, 0xda, 0x86,
++ },
++ .rlen = 4100,
++ },
++};
++
++static struct cipher_testvec aes_ctr_dec_tv_template[] = {
++ { /* From RFC 3686 */
++ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
++ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
++ 0x00, 0x00, 0x00, 0x30 },
++ .klen = 20,
++ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
++ .input = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
++ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
++ .ilen = 16,
++ .result = { "Single block msg" },
++ .rlen = 16,
++ }, {
++ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
++ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
++ 0x00, 0x6c, 0xb6, 0xdb },
++ .klen = 20,
++ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
++ .input = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
++ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
++ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
++ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
++ .ilen = 32,
++ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .rlen = 32,
++ }, {
++ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
++ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
++ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
++ 0x00, 0x00, 0x00, 0x48 },
++ .klen = 28,
++ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
++ .input = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
++ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
++ .ilen = 16,
++ .result = { "Single block msg" },
++ .rlen = 16,
++ }, {
++ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
++ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
++ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
++ 0x00, 0x96, 0xb0, 0x3b },
++ .klen = 28,
++ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
++ .input = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
++ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
++ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
++ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
++ .ilen = 32,
++ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .rlen = 32,
++ }, {
++ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
++ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
++ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
++ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
++ 0x00, 0x00, 0x00, 0x60 },
++ .klen = 36,
++ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
++ .input = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
++ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
++ .ilen = 16,
++ .result = { "Single block msg" },
++ .rlen = 16,
++ }, {
++ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
++ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
++ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
++ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
++ 0x00, 0xfa, 0xac, 0x24 },
++ .klen = 36,
++ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
++ .input = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
++ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
++ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
++ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
++ .ilen = 32,
++ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
++ .rlen = 32,
++ },
++};
++
++static struct aead_testvec aes_gcm_enc_tv_template[] = {
++ { /* From McGrew & Viega - http://citeseer.ist.psu.edu/656989.html */
++ .klen = 16,
++ .result = { 0x58, 0xe2, 0xfc, 0xce, 0xfa, 0x7e, 0x30, 0x61,
++ 0x36, 0x7f, 0x1d, 0x57, 0xa4, 0xe7, 0x45, 0x5a },
++ .rlen = 16,
++ }, {
++ .klen = 16,
++ .ilen = 16,
++ .result = { 0x03, 0x88, 0xda, 0xce, 0x60, 0xb6, 0xa3, 0x92,
++ 0xf3, 0x28, 0xc2, 0xb9, 0x71, 0xb2, 0xfe, 0x78,
++ 0xab, 0x6e, 0x47, 0xd4, 0x2c, 0xec, 0x13, 0xbd,
++ 0xf5, 0x3a, 0x67, 0xb2, 0x12, 0x57, 0xbd, 0xdf },
++ .rlen = 32,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 16,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
++ .ilen = 64,
++ .result = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
++ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
++ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
++ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
++ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
++ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
++ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
++ 0x3d, 0x58, 0xe0, 0x91, 0x47, 0x3f, 0x59, 0x85,
++ 0x4d, 0x5c, 0x2a, 0xf3, 0x27, 0xcd, 0x64, 0xa6,
++ 0x2c, 0xf3, 0x5a, 0xbd, 0x2b, 0xa6, 0xfa, 0xb4 },
++ .rlen = 80,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 16,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39 },
++ .ilen = 60,
++ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xab, 0xad, 0xda, 0xd2 },
++ .alen = 20,
++ .result = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
++ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
++ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
++ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
++ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
++ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
++ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
++ 0x3d, 0x58, 0xe0, 0x91,
++ 0x5b, 0xc9, 0x4f, 0xbc, 0x32, 0x21, 0xa5, 0xdb,
++ 0x94, 0xfa, 0xe9, 0x5a, 0xe7, 0x12, 0x1a, 0x47 },
++ .rlen = 76,
++ }, {
++ .klen = 24,
++ .result = { 0xcd, 0x33, 0xb2, 0x8a, 0xc7, 0x73, 0xf7, 0x4b,
++ 0xa0, 0x0e, 0xd1, 0xf3, 0x12, 0x57, 0x24, 0x35 },
++ .rlen = 16,
++ }, {
++ .klen = 24,
++ .ilen = 16,
++ .result = { 0x98, 0xe7, 0x24, 0x7c, 0x07, 0xf0, 0xfe, 0x41,
++ 0x1c, 0x26, 0x7e, 0x43, 0x84, 0xb0, 0xf6, 0x00,
++ 0x2f, 0xf5, 0x8d, 0x80, 0x03, 0x39, 0x27, 0xab,
++ 0x8e, 0xf4, 0xd4, 0x58, 0x75, 0x14, 0xf0, 0xfb },
++ .rlen = 32,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
++ .klen = 24,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
++ .ilen = 64,
++ .result = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
++ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
++ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
++ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
++ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
++ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
++ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
++ 0xcc, 0xda, 0x27, 0x10, 0xac, 0xad, 0xe2, 0x56,
++ 0x99, 0x24, 0xa7, 0xc8, 0x58, 0x73, 0x36, 0xbf,
++ 0xb1, 0x18, 0x02, 0x4d, 0xb8, 0x67, 0x4a, 0x14 },
++ .rlen = 80,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
++ .klen = 24,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39 },
++ .ilen = 60,
++ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xab, 0xad, 0xda, 0xd2 },
++ .alen = 20,
++ .result = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
++ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
++ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
++ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
++ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
++ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
++ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
++ 0xcc, 0xda, 0x27, 0x10,
++ 0x25, 0x19, 0x49, 0x8e, 0x80, 0xf1, 0x47, 0x8f,
++ 0x37, 0xba, 0x55, 0xbd, 0x6d, 0x27, 0x61, 0x8c },
++ .rlen = 76,
++ .np = 2,
++ .tap = { 32, 28 },
++ .anp = 2,
++ .atap = { 8, 12 }
++ }, {
++ .klen = 32,
++ .result = { 0x53, 0x0f, 0x8a, 0xfb, 0xc7, 0x45, 0x36, 0xb9,
++ 0xa9, 0x63, 0xb4, 0xf1, 0xc4, 0xcb, 0x73, 0x8b },
++ .rlen = 16,
++ }
++};
++
++static struct aead_testvec aes_gcm_dec_tv_template[] = {
++ { /* From McGrew & Viega - http://citeseer.ist.psu.edu/656989.html */
++ .klen = 32,
++ .input = { 0xce, 0xa7, 0x40, 0x3d, 0x4d, 0x60, 0x6b, 0x6e,
++ 0x07, 0x4e, 0xc5, 0xd3, 0xba, 0xf3, 0x9d, 0x18,
++ 0xd0, 0xd1, 0xc8, 0xa7, 0x99, 0x99, 0x6b, 0xf0,
++ 0x26, 0x5b, 0x98, 0xb5, 0xd4, 0x8a, 0xb9, 0x19 },
++ .ilen = 32,
++ .rlen = 16,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 32,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x52, 0x2d, 0xc1, 0xf0, 0x99, 0x56, 0x7d, 0x07,
++ 0xf4, 0x7f, 0x37, 0xa3, 0x2a, 0x84, 0x42, 0x7d,
++ 0x64, 0x3a, 0x8c, 0xdc, 0xbf, 0xe5, 0xc0, 0xc9,
++ 0x75, 0x98, 0xa2, 0xbd, 0x25, 0x55, 0xd1, 0xaa,
++ 0x8c, 0xb0, 0x8e, 0x48, 0x59, 0x0d, 0xbb, 0x3d,
++ 0xa7, 0xb0, 0x8b, 0x10, 0x56, 0x82, 0x88, 0x38,
++ 0xc5, 0xf6, 0x1e, 0x63, 0x93, 0xba, 0x7a, 0x0a,
++ 0xbc, 0xc9, 0xf6, 0x62, 0x89, 0x80, 0x15, 0xad,
++ 0xb0, 0x94, 0xda, 0xc5, 0xd9, 0x34, 0x71, 0xbd,
++ 0xec, 0x1a, 0x50, 0x22, 0x70, 0xe3, 0xcc, 0x6c },
++ .ilen = 80,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
++ .rlen = 64,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 32,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x52, 0x2d, 0xc1, 0xf0, 0x99, 0x56, 0x7d, 0x07,
++ 0xf4, 0x7f, 0x37, 0xa3, 0x2a, 0x84, 0x42, 0x7d,
++ 0x64, 0x3a, 0x8c, 0xdc, 0xbf, 0xe5, 0xc0, 0xc9,
++ 0x75, 0x98, 0xa2, 0xbd, 0x25, 0x55, 0xd1, 0xaa,
++ 0x8c, 0xb0, 0x8e, 0x48, 0x59, 0x0d, 0xbb, 0x3d,
++ 0xa7, 0xb0, 0x8b, 0x10, 0x56, 0x82, 0x88, 0x38,
++ 0xc5, 0xf6, 0x1e, 0x63, 0x93, 0xba, 0x7a, 0x0a,
++ 0xbc, 0xc9, 0xf6, 0x62,
++ 0x76, 0xfc, 0x6e, 0xce, 0x0f, 0x4e, 0x17, 0x68,
++ 0xcd, 0xdf, 0x88, 0x53, 0xbb, 0x2d, 0x55, 0x1b },
++ .ilen = 76,
++ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xab, 0xad, 0xda, 0xd2 },
++ .alen = 20,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39 },
++ .rlen = 60,
++ .np = 2,
++ .tap = { 48, 28 },
++ .anp = 3,
++ .atap = { 8, 8, 4 }
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 16,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
++ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
++ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
++ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
++ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
++ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
++ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
++ 0x3d, 0x58, 0xe0, 0x91, 0x47, 0x3f, 0x59, 0x85,
++ 0x4d, 0x5c, 0x2a, 0xf3, 0x27, 0xcd, 0x64, 0xa6,
++ 0x2c, 0xf3, 0x5a, 0xbd, 0x2b, 0xa6, 0xfa, 0xb4 },
++ .ilen = 80,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
++ .rlen = 64,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 },
++ .klen = 16,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x42, 0x83, 0x1e, 0xc2, 0x21, 0x77, 0x74, 0x24,
++ 0x4b, 0x72, 0x21, 0xb7, 0x84, 0xd0, 0xd4, 0x9c,
++ 0xe3, 0xaa, 0x21, 0x2f, 0x2c, 0x02, 0xa4, 0xe0,
++ 0x35, 0xc1, 0x7e, 0x23, 0x29, 0xac, 0xa1, 0x2e,
++ 0x21, 0xd5, 0x14, 0xb2, 0x54, 0x66, 0x93, 0x1c,
++ 0x7d, 0x8f, 0x6a, 0x5a, 0xac, 0x84, 0xaa, 0x05,
++ 0x1b, 0xa3, 0x0b, 0x39, 0x6a, 0x0a, 0xac, 0x97,
++ 0x3d, 0x58, 0xe0, 0x91,
++ 0x5b, 0xc9, 0x4f, 0xbc, 0x32, 0x21, 0xa5, 0xdb,
++ 0x94, 0xfa, 0xe9, 0x5a, 0xe7, 0x12, 0x1a, 0x47 },
++ .ilen = 76,
++ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xab, 0xad, 0xda, 0xd2 },
++ .alen = 20,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39 },
++ .rlen = 60,
++ }, {
++ .klen = 24,
++ .input = { 0x98, 0xe7, 0x24, 0x7c, 0x07, 0xf0, 0xfe, 0x41,
++ 0x1c, 0x26, 0x7e, 0x43, 0x84, 0xb0, 0xf6, 0x00,
++ 0x2f, 0xf5, 0x8d, 0x80, 0x03, 0x39, 0x27, 0xab,
++ 0x8e, 0xf4, 0xd4, 0x58, 0x75, 0x14, 0xf0, 0xfb },
++ .ilen = 32,
++ .rlen = 16,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
++ .klen = 24,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
++ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
++ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
++ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
++ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
++ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
++ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
++ 0xcc, 0xda, 0x27, 0x10, 0xac, 0xad, 0xe2, 0x56,
++ 0x99, 0x24, 0xa7, 0xc8, 0x58, 0x73, 0x36, 0xbf,
++ 0xb1, 0x18, 0x02, 0x4d, 0xb8, 0x67, 0x4a, 0x14 },
++ .ilen = 80,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39, 0x1a, 0xaf, 0xd2, 0x55 },
++ .rlen = 64,
++ }, {
++ .key = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
++ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08,
++ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c },
++ .klen = 24,
++ .iv = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
++ 0xde, 0xca, 0xf8, 0x88 },
++ .input = { 0x39, 0x80, 0xca, 0x0b, 0x3c, 0x00, 0xe8, 0x41,
++ 0xeb, 0x06, 0xfa, 0xc4, 0x87, 0x2a, 0x27, 0x57,
++ 0x85, 0x9e, 0x1c, 0xea, 0xa6, 0xef, 0xd9, 0x84,
++ 0x62, 0x85, 0x93, 0xb4, 0x0c, 0xa1, 0xe1, 0x9c,
++ 0x7d, 0x77, 0x3d, 0x00, 0xc1, 0x44, 0xc5, 0x25,
++ 0xac, 0x61, 0x9d, 0x18, 0xc8, 0x4a, 0x3f, 0x47,
++ 0x18, 0xe2, 0x44, 0x8b, 0x2f, 0xe3, 0x24, 0xd9,
++ 0xcc, 0xda, 0x27, 0x10,
++ 0x25, 0x19, 0x49, 0x8e, 0x80, 0xf1, 0x47, 0x8f,
++ 0x37, 0xba, 0x55, 0xbd, 0x6d, 0x27, 0x61, 0x8c },
++ .ilen = 76,
++ .assoc = { 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
++ 0xab, 0xad, 0xda, 0xd2 },
++ .alen = 20,
++ .result = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5,
++ 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
++ 0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda,
++ 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
++ 0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53,
++ 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
++ 0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57,
++ 0xba, 0x63, 0x7b, 0x39 },
++ .rlen = 60,
++ }
++};
++
++static struct aead_testvec aes_ccm_enc_tv_template[] = {
++ { /* From RFC 3610 */
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x03, 0x02, 0x01, 0x00,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
++ .alen = 8,
++ .input = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e },
++ .ilen = 23,
++ .result = { 0x58, 0x8c, 0x97, 0x9a, 0x61, 0xc6, 0x63, 0xd2,
++ 0xf0, 0x66, 0xd0, 0xc2, 0xc0, 0xf9, 0x89, 0x80,
++ 0x6d, 0x5f, 0x6b, 0x61, 0xda, 0xc3, 0x84, 0x17,
++ 0xe8, 0xd1, 0x2c, 0xfd, 0xf9, 0x26, 0xe0 },
++ .rlen = 31,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x07, 0x06, 0x05, 0x04,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b },
++ .alen = 12,
++ .input = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
++ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
++ 0x1c, 0x1d, 0x1e, 0x1f },
++ .ilen = 20,
++ .result = { 0xdc, 0xf1, 0xfb, 0x7b, 0x5d, 0x9e, 0x23, 0xfb,
++ 0x9d, 0x4e, 0x13, 0x12, 0x53, 0x65, 0x8a, 0xd8,
++ 0x6e, 0xbd, 0xca, 0x3e, 0x51, 0xe8, 0x3f, 0x07,
++ 0x7d, 0x9c, 0x2d, 0x93 },
++ .rlen = 28,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0b, 0x0a, 0x09, 0x08,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
++ .alen = 8,
++ .input = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ 0x20 },
++ .ilen = 25,
++ .result = { 0x82, 0x53, 0x1a, 0x60, 0xcc, 0x24, 0x94, 0x5a,
++ 0x4b, 0x82, 0x79, 0x18, 0x1a, 0xb5, 0xc8, 0x4d,
++ 0xf2, 0x1c, 0xe7, 0xf9, 0xb7, 0x3f, 0x42, 0xe1,
++ 0x97, 0xea, 0x9c, 0x07, 0xe5, 0x6b, 0x5e, 0xb1,
++ 0x7e, 0x5f, 0x4e },
++ .rlen = 35,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0c, 0x0b, 0x0a, 0x09,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b },
++ .alen = 12,
++ .input = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
++ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
++ 0x1c, 0x1d, 0x1e },
++ .ilen = 19,
++ .result = { 0x07, 0x34, 0x25, 0x94, 0x15, 0x77, 0x85, 0x15,
++ 0x2b, 0x07, 0x40, 0x98, 0x33, 0x0a, 0xbb, 0x14,
++ 0x1b, 0x94, 0x7b, 0x56, 0x6a, 0xa9, 0x40, 0x6b,
++ 0x4d, 0x99, 0x99, 0x88, 0xdd },
++ .rlen = 29,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x33, 0x56, 0x8e, 0xf7, 0xb2, 0x63,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0x63, 0x01, 0x8f, 0x76, 0xdc, 0x8a, 0x1b, 0xcb },
++ .alen = 8,
++ .input = { 0x90, 0x20, 0xea, 0x6f, 0x91, 0xbd, 0xd8, 0x5a,
++ 0xfa, 0x00, 0x39, 0xba, 0x4b, 0xaf, 0xf9, 0xbf,
++ 0xb7, 0x9c, 0x70, 0x28, 0x94, 0x9c, 0xd0, 0xec },
++ .ilen = 24,
++ .result = { 0x4c, 0xcb, 0x1e, 0x7c, 0xa9, 0x81, 0xbe, 0xfa,
++ 0xa0, 0x72, 0x6c, 0x55, 0xd3, 0x78, 0x06, 0x12,
++ 0x98, 0xc8, 0x5c, 0x92, 0x81, 0x4a, 0xbc, 0x33,
++ 0xc5, 0x2e, 0xe8, 0x1d, 0x7d, 0x77, 0xc0, 0x8a },
++ .rlen = 32,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0xd5, 0x60, 0x91, 0x2d, 0x3f, 0x70,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0xcd, 0x90, 0x44, 0xd2, 0xb7, 0x1f, 0xdb, 0x81,
++ 0x20, 0xea, 0x60, 0xc0 },
++ .alen = 12,
++ .input = { 0x64, 0x35, 0xac, 0xba, 0xfb, 0x11, 0xa8, 0x2e,
++ 0x2f, 0x07, 0x1d, 0x7c, 0xa4, 0xa5, 0xeb, 0xd9,
++ 0x3a, 0x80, 0x3b, 0xa8, 0x7f },
++ .ilen = 21,
++ .result = { 0x00, 0x97, 0x69, 0xec, 0xab, 0xdf, 0x48, 0x62,
++ 0x55, 0x94, 0xc5, 0x92, 0x51, 0xe6, 0x03, 0x57,
++ 0x22, 0x67, 0x5e, 0x04, 0xc8, 0x47, 0x09, 0x9e,
++ 0x5a, 0xe0, 0x70, 0x45, 0x51 },
++ .rlen = 29,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x42, 0xff, 0xf8, 0xf1, 0x95, 0x1c,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0xd8, 0x5b, 0xc7, 0xe6, 0x9f, 0x94, 0x4f, 0xb8 },
++ .alen = 8,
++ .input = { 0x8a, 0x19, 0xb9, 0x50, 0xbc, 0xf7, 0x1a, 0x01,
++ 0x8e, 0x5e, 0x67, 0x01, 0xc9, 0x17, 0x87, 0x65,
++ 0x98, 0x09, 0xd6, 0x7d, 0xbe, 0xdd, 0x18 },
++ .ilen = 23,
++ .result = { 0xbc, 0x21, 0x8d, 0xaa, 0x94, 0x74, 0x27, 0xb6,
++ 0xdb, 0x38, 0x6a, 0x99, 0xac, 0x1a, 0xef, 0x23,
++ 0xad, 0xe0, 0xb5, 0x29, 0x39, 0xcb, 0x6a, 0x63,
++ 0x7c, 0xf9, 0xbe, 0xc2, 0x40, 0x88, 0x97, 0xc6,
++ 0xba },
++ .rlen = 33,
++ },
++};
++
++static struct aead_testvec aes_ccm_dec_tv_template[] = {
++ { /* From RFC 3610 */
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x03, 0x02, 0x01, 0x00,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
++ .alen = 8,
++ .input = { 0x58, 0x8c, 0x97, 0x9a, 0x61, 0xc6, 0x63, 0xd2,
++ 0xf0, 0x66, 0xd0, 0xc2, 0xc0, 0xf9, 0x89, 0x80,
++ 0x6d, 0x5f, 0x6b, 0x61, 0xda, 0xc3, 0x84, 0x17,
++ 0xe8, 0xd1, 0x2c, 0xfd, 0xf9, 0x26, 0xe0 },
++ .ilen = 31,
++ .result = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e },
++ .rlen = 23,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x07, 0x06, 0x05, 0x04,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b },
++ .alen = 12,
++ .input = { 0xdc, 0xf1, 0xfb, 0x7b, 0x5d, 0x9e, 0x23, 0xfb,
++ 0x9d, 0x4e, 0x13, 0x12, 0x53, 0x65, 0x8a, 0xd8,
++ 0x6e, 0xbd, 0xca, 0x3e, 0x51, 0xe8, 0x3f, 0x07,
++ 0x7d, 0x9c, 0x2d, 0x93 },
++ .ilen = 28,
++ .result = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
++ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
++ 0x1c, 0x1d, 0x1e, 0x1f },
++ .rlen = 20,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0b, 0x0a, 0x09, 0x08,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 },
++ .alen = 8,
++ .input = { 0x82, 0x53, 0x1a, 0x60, 0xcc, 0x24, 0x94, 0x5a,
++ 0x4b, 0x82, 0x79, 0x18, 0x1a, 0xb5, 0xc8, 0x4d,
++ 0xf2, 0x1c, 0xe7, 0xf9, 0xb7, 0x3f, 0x42, 0xe1,
++ 0x97, 0xea, 0x9c, 0x07, 0xe5, 0x6b, 0x5e, 0xb1,
++ 0x7e, 0x5f, 0x4e },
++ .ilen = 35,
++ .result = { 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ 0x20 },
++ .rlen = 25,
++ }, {
++ .key = { 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x00, 0x00, 0x0c, 0x0b, 0x0a, 0x09,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0x00, 0x00 },
++ .assoc = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b },
++ .alen = 12,
++ .input = { 0x07, 0x34, 0x25, 0x94, 0x15, 0x77, 0x85, 0x15,
++ 0x2b, 0x07, 0x40, 0x98, 0x33, 0x0a, 0xbb, 0x14,
++ 0x1b, 0x94, 0x7b, 0x56, 0x6a, 0xa9, 0x40, 0x6b,
++ 0x4d, 0x99, 0x99, 0x88, 0xdd },
++ .ilen = 29,
++ .result = { 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
++ 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
++ 0x1c, 0x1d, 0x1e },
++ .rlen = 19,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x33, 0x56, 0x8e, 0xf7, 0xb2, 0x63,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0x63, 0x01, 0x8f, 0x76, 0xdc, 0x8a, 0x1b, 0xcb },
++ .alen = 8,
++ .input = { 0x4c, 0xcb, 0x1e, 0x7c, 0xa9, 0x81, 0xbe, 0xfa,
++ 0xa0, 0x72, 0x6c, 0x55, 0xd3, 0x78, 0x06, 0x12,
++ 0x98, 0xc8, 0x5c, 0x92, 0x81, 0x4a, 0xbc, 0x33,
++ 0xc5, 0x2e, 0xe8, 0x1d, 0x7d, 0x77, 0xc0, 0x8a },
++ .ilen = 32,
++ .result = { 0x90, 0x20, 0xea, 0x6f, 0x91, 0xbd, 0xd8, 0x5a,
++ 0xfa, 0x00, 0x39, 0xba, 0x4b, 0xaf, 0xf9, 0xbf,
++ 0xb7, 0x9c, 0x70, 0x28, 0x94, 0x9c, 0xd0, 0xec },
++ .rlen = 24,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0xd5, 0x60, 0x91, 0x2d, 0x3f, 0x70,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0xcd, 0x90, 0x44, 0xd2, 0xb7, 0x1f, 0xdb, 0x81,
++ 0x20, 0xea, 0x60, 0xc0 },
++ .alen = 12,
++ .input = { 0x00, 0x97, 0x69, 0xec, 0xab, 0xdf, 0x48, 0x62,
++ 0x55, 0x94, 0xc5, 0x92, 0x51, 0xe6, 0x03, 0x57,
++ 0x22, 0x67, 0x5e, 0x04, 0xc8, 0x47, 0x09, 0x9e,
++ 0x5a, 0xe0, 0x70, 0x45, 0x51 },
++ .ilen = 29,
++ .result = { 0x64, 0x35, 0xac, 0xba, 0xfb, 0x11, 0xa8, 0x2e,
++ 0x2f, 0x07, 0x1d, 0x7c, 0xa4, 0xa5, 0xeb, 0xd9,
++ 0x3a, 0x80, 0x3b, 0xa8, 0x7f },
++ .rlen = 21,
++ }, {
++ .key = { 0xd7, 0x82, 0x8d, 0x13, 0xb2, 0xb0, 0xbd, 0xc3,
++ 0x25, 0xa7, 0x62, 0x36, 0xdf, 0x93, 0xcc, 0x6b },
++ .klen = 16,
++ .iv = { 0x01, 0x00, 0x42, 0xff, 0xf8, 0xf1, 0x95, 0x1c,
++ 0x3c, 0x96, 0x96, 0x76, 0x6c, 0xfa, 0x00, 0x00 },
++ .assoc = { 0xd8, 0x5b, 0xc7, 0xe6, 0x9f, 0x94, 0x4f, 0xb8 },
++ .alen = 8,
++ .input = { 0xbc, 0x21, 0x8d, 0xaa, 0x94, 0x74, 0x27, 0xb6,
++ 0xdb, 0x38, 0x6a, 0x99, 0xac, 0x1a, 0xef, 0x23,
++ 0xad, 0xe0, 0xb5, 0x29, 0x39, 0xcb, 0x6a, 0x63,
++ 0x7c, 0xf9, 0xbe, 0xc2, 0x40, 0x88, 0x97, 0xc6,
++ 0xba },
++ .ilen = 33,
++ .result = { 0x8a, 0x19, 0xb9, 0x50, 0xbc, 0xf7, 0x1a, 0x01,
++ 0x8e, 0x5e, 0x67, 0x01, 0xc9, 0x17, 0x87, 0x65,
++ 0x98, 0x09, 0xd6, 0x7d, 0xbe, 0xdd, 0x18 },
++ .rlen = 23,
++ },
++};
++
+ /* Cast5 test vectors from RFC 2144 */
+ #define CAST5_ENC_TEST_VECTORS 3
+ #define CAST5_DEC_TEST_VECTORS 3
+@@ -4317,6 +6425,1211 @@ static struct cipher_testvec seed_dec_tv_template[] = {
+ }
+ };
+
++#define SALSA20_STREAM_ENC_TEST_VECTORS 5
++static struct cipher_testvec salsa20_stream_enc_tv_template[] = {
++ /*
++ * Testvectors from verified.test-vectors submitted to ECRYPT.
++ * They are truncated to size 39, 64, 111, 129 to test a variety
++ * of input length.
++ */
++ { /* Set 3, vector 0 */
++ .key = {
++ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
++ },
++ .klen = 16,
++ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
++ .input = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ },
++ .ilen = 39,
++ .result = {
++ 0x2D, 0xD5, 0xC3, 0xF7, 0xBA, 0x2B, 0x20, 0xF7,
++ 0x68, 0x02, 0x41, 0x0C, 0x68, 0x86, 0x88, 0x89,
++ 0x5A, 0xD8, 0xC1, 0xBD, 0x4E, 0xA6, 0xC9, 0xB1,
++ 0x40, 0xFB, 0x9B, 0x90, 0xE2, 0x10, 0x49, 0xBF,
++ 0x58, 0x3F, 0x52, 0x79, 0x70, 0xEB, 0xC1,
++ },
++ .rlen = 39,
++ }, { /* Set 5, vector 0 */
++ .key = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
++ },
++ .klen = 16,
++ .iv = { 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
++ .input = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ },
++ .ilen = 64,
++ .result = {
++ 0xB6, 0x6C, 0x1E, 0x44, 0x46, 0xDD, 0x95, 0x57,
++ 0xE5, 0x78, 0xE2, 0x23, 0xB0, 0xB7, 0x68, 0x01,
++ 0x7B, 0x23, 0xB2, 0x67, 0xBB, 0x02, 0x34, 0xAE,
++ 0x46, 0x26, 0xBF, 0x44, 0x3F, 0x21, 0x97, 0x76,
++ 0x43, 0x6F, 0xB1, 0x9F, 0xD0, 0xE8, 0x86, 0x6F,
++ 0xCD, 0x0D, 0xE9, 0xA9, 0x53, 0x8F, 0x4A, 0x09,
++ 0xCA, 0x9A, 0xC0, 0x73, 0x2E, 0x30, 0xBC, 0xF9,
++ 0x8E, 0x4F, 0x13, 0xE4, 0xB9, 0xE2, 0x01, 0xD9,
++ },
++ .rlen = 64,
++ }, { /* Set 3, vector 27 */
++ .key = {
++ 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x21, 0x22,
++ 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A,
++ 0x2B, 0x2C, 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32,
++ 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A
++ },
++ .klen = 32,
++ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
++ .input = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ },
++ .ilen = 111,
++ .result = {
++ 0xAE, 0x39, 0x50, 0x8E, 0xAC, 0x9A, 0xEC, 0xE7,
++ 0xBF, 0x97, 0xBB, 0x20, 0xB9, 0xDE, 0xE4, 0x1F,
++ 0x87, 0xD9, 0x47, 0xF8, 0x28, 0x91, 0x35, 0x98,
++ 0xDB, 0x72, 0xCC, 0x23, 0x29, 0x48, 0x56, 0x5E,
++ 0x83, 0x7E, 0x0B, 0xF3, 0x7D, 0x5D, 0x38, 0x7B,
++ 0x2D, 0x71, 0x02, 0xB4, 0x3B, 0xB5, 0xD8, 0x23,
++ 0xB0, 0x4A, 0xDF, 0x3C, 0xEC, 0xB6, 0xD9, 0x3B,
++ 0x9B, 0xA7, 0x52, 0xBE, 0xC5, 0xD4, 0x50, 0x59,
++
++ 0x15, 0x14, 0xB4, 0x0E, 0x40, 0xE6, 0x53, 0xD1,
++ 0x83, 0x9C, 0x5B, 0xA0, 0x92, 0x29, 0x6B, 0x5E,
++ 0x96, 0x5B, 0x1E, 0x2F, 0xD3, 0xAC, 0xC1, 0x92,
++ 0xB1, 0x41, 0x3F, 0x19, 0x2F, 0xC4, 0x3B, 0xC6,
++ 0x95, 0x46, 0x45, 0x54, 0xE9, 0x75, 0x03, 0x08,
++ 0x44, 0xAF, 0xE5, 0x8A, 0x81, 0x12, 0x09,
++ },
++ .rlen = 111,
++
++ }, { /* Set 5, vector 27 */
++ .key = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
++ },
++ .klen = 32,
++ .iv = { 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00 },
++ .input = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++
++ 0x00,
++ },
++ .ilen = 129,
++ .result = {
++ 0xD2, 0xDB, 0x1A, 0x5C, 0xF1, 0xC1, 0xAC, 0xDB,
++ 0xE8, 0x1A, 0x7A, 0x43, 0x40, 0xEF, 0x53, 0x43,
++ 0x5E, 0x7F, 0x4B, 0x1A, 0x50, 0x52, 0x3F, 0x8D,
++ 0x28, 0x3D, 0xCF, 0x85, 0x1D, 0x69, 0x6E, 0x60,
++ 0xF2, 0xDE, 0x74, 0x56, 0x18, 0x1B, 0x84, 0x10,
++ 0xD4, 0x62, 0xBA, 0x60, 0x50, 0xF0, 0x61, 0xF2,
++ 0x1C, 0x78, 0x7F, 0xC1, 0x24, 0x34, 0xAF, 0x58,
++ 0xBF, 0x2C, 0x59, 0xCA, 0x90, 0x77, 0xF3, 0xB0,
++
++ 0x5B, 0x4A, 0xDF, 0x89, 0xCE, 0x2C, 0x2F, 0xFC,
++ 0x67, 0xF0, 0xE3, 0x45, 0xE8, 0xB3, 0xB3, 0x75,
++ 0xA0, 0x95, 0x71, 0xA1, 0x29, 0x39, 0x94, 0xCA,
++ 0x45, 0x2F, 0xBD, 0xCB, 0x10, 0xB6, 0xBE, 0x9F,
++ 0x8E, 0xF9, 0xB2, 0x01, 0x0A, 0x5A, 0x0A, 0xB7,
++ 0x6B, 0x9D, 0x70, 0x8E, 0x4B, 0xD6, 0x2F, 0xCD,
++ 0x2E, 0x40, 0x48, 0x75, 0xE9, 0xE2, 0x21, 0x45,
++ 0x0B, 0xC9, 0xB6, 0xB5, 0x66, 0xBC, 0x9A, 0x59,
++
++ 0x5A,
++ },
++ .rlen = 129,
++ }, { /* large test vector generated using Crypto++ */
++ .key = {
++ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ },
++ .klen = 32,
++ .iv = {
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++ },
++ .input = {
++ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
++ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
++ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
++ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
++ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
++ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
++ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
++ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
++ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
++ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
++ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
++ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
++ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
++ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
++ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
++ 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
++ 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
++ 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97,
++ 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
++ 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7,
++ 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
++ 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7,
++ 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
++ 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
++ 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
++ 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
++ 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
++ 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
++ 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
++ 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7,
++ 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff,
++ 0x00, 0x03, 0x06, 0x09, 0x0c, 0x0f, 0x12, 0x15,
++ 0x18, 0x1b, 0x1e, 0x21, 0x24, 0x27, 0x2a, 0x2d,
++ 0x30, 0x33, 0x36, 0x39, 0x3c, 0x3f, 0x42, 0x45,
++ 0x48, 0x4b, 0x4e, 0x51, 0x54, 0x57, 0x5a, 0x5d,
++ 0x60, 0x63, 0x66, 0x69, 0x6c, 0x6f, 0x72, 0x75,
++ 0x78, 0x7b, 0x7e, 0x81, 0x84, 0x87, 0x8a, 0x8d,
++ 0x90, 0x93, 0x96, 0x99, 0x9c, 0x9f, 0xa2, 0xa5,
++ 0xa8, 0xab, 0xae, 0xb1, 0xb4, 0xb7, 0xba, 0xbd,
++ 0xc0, 0xc3, 0xc6, 0xc9, 0xcc, 0xcf, 0xd2, 0xd5,
++ 0xd8, 0xdb, 0xde, 0xe1, 0xe4, 0xe7, 0xea, 0xed,
++ 0xf0, 0xf3, 0xf6, 0xf9, 0xfc, 0xff, 0x02, 0x05,
++ 0x08, 0x0b, 0x0e, 0x11, 0x14, 0x17, 0x1a, 0x1d,
++ 0x20, 0x23, 0x26, 0x29, 0x2c, 0x2f, 0x32, 0x35,
++ 0x38, 0x3b, 0x3e, 0x41, 0x44, 0x47, 0x4a, 0x4d,
++ 0x50, 0x53, 0x56, 0x59, 0x5c, 0x5f, 0x62, 0x65,
++ 0x68, 0x6b, 0x6e, 0x71, 0x74, 0x77, 0x7a, 0x7d,
++ 0x80, 0x83, 0x86, 0x89, 0x8c, 0x8f, 0x92, 0x95,
++ 0x98, 0x9b, 0x9e, 0xa1, 0xa4, 0xa7, 0xaa, 0xad,
++ 0xb0, 0xb3, 0xb6, 0xb9, 0xbc, 0xbf, 0xc2, 0xc5,
++ 0xc8, 0xcb, 0xce, 0xd1, 0xd4, 0xd7, 0xda, 0xdd,
++ 0xe0, 0xe3, 0xe6, 0xe9, 0xec, 0xef, 0xf2, 0xf5,
++ 0xf8, 0xfb, 0xfe, 0x01, 0x04, 0x07, 0x0a, 0x0d,
++ 0x10, 0x13, 0x16, 0x19, 0x1c, 0x1f, 0x22, 0x25,
++ 0x28, 0x2b, 0x2e, 0x31, 0x34, 0x37, 0x3a, 0x3d,
++ 0x40, 0x43, 0x46, 0x49, 0x4c, 0x4f, 0x52, 0x55,
++ 0x58, 0x5b, 0x5e, 0x61, 0x64, 0x67, 0x6a, 0x6d,
++ 0x70, 0x73, 0x76, 0x79, 0x7c, 0x7f, 0x82, 0x85,
++ 0x88, 0x8b, 0x8e, 0x91, 0x94, 0x97, 0x9a, 0x9d,
++ 0xa0, 0xa3, 0xa6, 0xa9, 0xac, 0xaf, 0xb2, 0xb5,
++ 0xb8, 0xbb, 0xbe, 0xc1, 0xc4, 0xc7, 0xca, 0xcd,
++ 0xd0, 0xd3, 0xd6, 0xd9, 0xdc, 0xdf, 0xe2, 0xe5,
++ 0xe8, 0xeb, 0xee, 0xf1, 0xf4, 0xf7, 0xfa, 0xfd,
++ 0x00, 0x05, 0x0a, 0x0f, 0x14, 0x19, 0x1e, 0x23,
++ 0x28, 0x2d, 0x32, 0x37, 0x3c, 0x41, 0x46, 0x4b,
++ 0x50, 0x55, 0x5a, 0x5f, 0x64, 0x69, 0x6e, 0x73,
++ 0x78, 0x7d, 0x82, 0x87, 0x8c, 0x91, 0x96, 0x9b,
++ 0xa0, 0xa5, 0xaa, 0xaf, 0xb4, 0xb9, 0xbe, 0xc3,
++ 0xc8, 0xcd, 0xd2, 0xd7, 0xdc, 0xe1, 0xe6, 0xeb,
++ 0xf0, 0xf5, 0xfa, 0xff, 0x04, 0x09, 0x0e, 0x13,
++ 0x18, 0x1d, 0x22, 0x27, 0x2c, 0x31, 0x36, 0x3b,
++ 0x40, 0x45, 0x4a, 0x4f, 0x54, 0x59, 0x5e, 0x63,
++ 0x68, 0x6d, 0x72, 0x77, 0x7c, 0x81, 0x86, 0x8b,
++ 0x90, 0x95, 0x9a, 0x9f, 0xa4, 0xa9, 0xae, 0xb3,
++ 0xb8, 0xbd, 0xc2, 0xc7, 0xcc, 0xd1, 0xd6, 0xdb,
++ 0xe0, 0xe5, 0xea, 0xef, 0xf4, 0xf9, 0xfe, 0x03,
++ 0x08, 0x0d, 0x12, 0x17, 0x1c, 0x21, 0x26, 0x2b,
++ 0x30, 0x35, 0x3a, 0x3f, 0x44, 0x49, 0x4e, 0x53,
++ 0x58, 0x5d, 0x62, 0x67, 0x6c, 0x71, 0x76, 0x7b,
++ 0x80, 0x85, 0x8a, 0x8f, 0x94, 0x99, 0x9e, 0xa3,
++ 0xa8, 0xad, 0xb2, 0xb7, 0xbc, 0xc1, 0xc6, 0xcb,
++ 0xd0, 0xd5, 0xda, 0xdf, 0xe4, 0xe9, 0xee, 0xf3,
++ 0xf8, 0xfd, 0x02, 0x07, 0x0c, 0x11, 0x16, 0x1b,
++ 0x20, 0x25, 0x2a, 0x2f, 0x34, 0x39, 0x3e, 0x43,
++ 0x48, 0x4d, 0x52, 0x57, 0x5c, 0x61, 0x66, 0x6b,
++ 0x70, 0x75, 0x7a, 0x7f, 0x84, 0x89, 0x8e, 0x93,
++ 0x98, 0x9d, 0xa2, 0xa7, 0xac, 0xb1, 0xb6, 0xbb,
++ 0xc0, 0xc5, 0xca, 0xcf, 0xd4, 0xd9, 0xde, 0xe3,
++ 0xe8, 0xed, 0xf2, 0xf7, 0xfc, 0x01, 0x06, 0x0b,
++ 0x10, 0x15, 0x1a, 0x1f, 0x24, 0x29, 0x2e, 0x33,
++ 0x38, 0x3d, 0x42, 0x47, 0x4c, 0x51, 0x56, 0x5b,
++ 0x60, 0x65, 0x6a, 0x6f, 0x74, 0x79, 0x7e, 0x83,
++ 0x88, 0x8d, 0x92, 0x97, 0x9c, 0xa1, 0xa6, 0xab,
++ 0xb0, 0xb5, 0xba, 0xbf, 0xc4, 0xc9, 0xce, 0xd3,
++ 0xd8, 0xdd, 0xe2, 0xe7, 0xec, 0xf1, 0xf6, 0xfb,
++ 0x00, 0x07, 0x0e, 0x15, 0x1c, 0x23, 0x2a, 0x31,
++ 0x38, 0x3f, 0x46, 0x4d, 0x54, 0x5b, 0x62, 0x69,
++ 0x70, 0x77, 0x7e, 0x85, 0x8c, 0x93, 0x9a, 0xa1,
++ 0xa8, 0xaf, 0xb6, 0xbd, 0xc4, 0xcb, 0xd2, 0xd9,
++ 0xe0, 0xe7, 0xee, 0xf5, 0xfc, 0x03, 0x0a, 0x11,
++ 0x18, 0x1f, 0x26, 0x2d, 0x34, 0x3b, 0x42, 0x49,
++ 0x50, 0x57, 0x5e, 0x65, 0x6c, 0x73, 0x7a, 0x81,
++ 0x88, 0x8f, 0x96, 0x9d, 0xa4, 0xab, 0xb2, 0xb9,
++ 0xc0, 0xc7, 0xce, 0xd5, 0xdc, 0xe3, 0xea, 0xf1,
++ 0xf8, 0xff, 0x06, 0x0d, 0x14, 0x1b, 0x22, 0x29,
++ 0x30, 0x37, 0x3e, 0x45, 0x4c, 0x53, 0x5a, 0x61,
++ 0x68, 0x6f, 0x76, 0x7d, 0x84, 0x8b, 0x92, 0x99,
++ 0xa0, 0xa7, 0xae, 0xb5, 0xbc, 0xc3, 0xca, 0xd1,
++ 0xd8, 0xdf, 0xe6, 0xed, 0xf4, 0xfb, 0x02, 0x09,
++ 0x10, 0x17, 0x1e, 0x25, 0x2c, 0x33, 0x3a, 0x41,
++ 0x48, 0x4f, 0x56, 0x5d, 0x64, 0x6b, 0x72, 0x79,
++ 0x80, 0x87, 0x8e, 0x95, 0x9c, 0xa3, 0xaa, 0xb1,
++ 0xb8, 0xbf, 0xc6, 0xcd, 0xd4, 0xdb, 0xe2, 0xe9,
++ 0xf0, 0xf7, 0xfe, 0x05, 0x0c, 0x13, 0x1a, 0x21,
++ 0x28, 0x2f, 0x36, 0x3d, 0x44, 0x4b, 0x52, 0x59,
++ 0x60, 0x67, 0x6e, 0x75, 0x7c, 0x83, 0x8a, 0x91,
++ 0x98, 0x9f, 0xa6, 0xad, 0xb4, 0xbb, 0xc2, 0xc9,
++ 0xd0, 0xd7, 0xde, 0xe5, 0xec, 0xf3, 0xfa, 0x01,
++ 0x08, 0x0f, 0x16, 0x1d, 0x24, 0x2b, 0x32, 0x39,
++ 0x40, 0x47, 0x4e, 0x55, 0x5c, 0x63, 0x6a, 0x71,
++ 0x78, 0x7f, 0x86, 0x8d, 0x94, 0x9b, 0xa2, 0xa9,
++ 0xb0, 0xb7, 0xbe, 0xc5, 0xcc, 0xd3, 0xda, 0xe1,
++ 0xe8, 0xef, 0xf6, 0xfd, 0x04, 0x0b, 0x12, 0x19,
++ 0x20, 0x27, 0x2e, 0x35, 0x3c, 0x43, 0x4a, 0x51,
++ 0x58, 0x5f, 0x66, 0x6d, 0x74, 0x7b, 0x82, 0x89,
++ 0x90, 0x97, 0x9e, 0xa5, 0xac, 0xb3, 0xba, 0xc1,
++ 0xc8, 0xcf, 0xd6, 0xdd, 0xe4, 0xeb, 0xf2, 0xf9,
++ 0x00, 0x09, 0x12, 0x1b, 0x24, 0x2d, 0x36, 0x3f,
++ 0x48, 0x51, 0x5a, 0x63, 0x6c, 0x75, 0x7e, 0x87,
++ 0x90, 0x99, 0xa2, 0xab, 0xb4, 0xbd, 0xc6, 0xcf,
++ 0xd8, 0xe1, 0xea, 0xf3, 0xfc, 0x05, 0x0e, 0x17,
++ 0x20, 0x29, 0x32, 0x3b, 0x44, 0x4d, 0x56, 0x5f,
++ 0x68, 0x71, 0x7a, 0x83, 0x8c, 0x95, 0x9e, 0xa7,
++ 0xb0, 0xb9, 0xc2, 0xcb, 0xd4, 0xdd, 0xe6, 0xef,
++ 0xf8, 0x01, 0x0a, 0x13, 0x1c, 0x25, 0x2e, 0x37,
++ 0x40, 0x49, 0x52, 0x5b, 0x64, 0x6d, 0x76, 0x7f,
++ 0x88, 0x91, 0x9a, 0xa3, 0xac, 0xb5, 0xbe, 0xc7,
++ 0xd0, 0xd9, 0xe2, 0xeb, 0xf4, 0xfd, 0x06, 0x0f,
++ 0x18, 0x21, 0x2a, 0x33, 0x3c, 0x45, 0x4e, 0x57,
++ 0x60, 0x69, 0x72, 0x7b, 0x84, 0x8d, 0x96, 0x9f,
++ 0xa8, 0xb1, 0xba, 0xc3, 0xcc, 0xd5, 0xde, 0xe7,
++ 0xf0, 0xf9, 0x02, 0x0b, 0x14, 0x1d, 0x26, 0x2f,
++ 0x38, 0x41, 0x4a, 0x53, 0x5c, 0x65, 0x6e, 0x77,
++ 0x80, 0x89, 0x92, 0x9b, 0xa4, 0xad, 0xb6, 0xbf,
++ 0xc8, 0xd1, 0xda, 0xe3, 0xec, 0xf5, 0xfe, 0x07,
++ 0x10, 0x19, 0x22, 0x2b, 0x34, 0x3d, 0x46, 0x4f,
++ 0x58, 0x61, 0x6a, 0x73, 0x7c, 0x85, 0x8e, 0x97,
++ 0xa0, 0xa9, 0xb2, 0xbb, 0xc4, 0xcd, 0xd6, 0xdf,
++ 0xe8, 0xf1, 0xfa, 0x03, 0x0c, 0x15, 0x1e, 0x27,
++ 0x30, 0x39, 0x42, 0x4b, 0x54, 0x5d, 0x66, 0x6f,
++ 0x78, 0x81, 0x8a, 0x93, 0x9c, 0xa5, 0xae, 0xb7,
++ 0xc0, 0xc9, 0xd2, 0xdb, 0xe4, 0xed, 0xf6, 0xff,
++ 0x08, 0x11, 0x1a, 0x23, 0x2c, 0x35, 0x3e, 0x47,
++ 0x50, 0x59, 0x62, 0x6b, 0x74, 0x7d, 0x86, 0x8f,
++ 0x98, 0xa1, 0xaa, 0xb3, 0xbc, 0xc5, 0xce, 0xd7,
++ 0xe0, 0xe9, 0xf2, 0xfb, 0x04, 0x0d, 0x16, 0x1f,
++ 0x28, 0x31, 0x3a, 0x43, 0x4c, 0x55, 0x5e, 0x67,
++ 0x70, 0x79, 0x82, 0x8b, 0x94, 0x9d, 0xa6, 0xaf,
++ 0xb8, 0xc1, 0xca, 0xd3, 0xdc, 0xe5, 0xee, 0xf7,
++ 0x00, 0x0b, 0x16, 0x21, 0x2c, 0x37, 0x42, 0x4d,
++ 0x58, 0x63, 0x6e, 0x79, 0x84, 0x8f, 0x9a, 0xa5,
++ 0xb0, 0xbb, 0xc6, 0xd1, 0xdc, 0xe7, 0xf2, 0xfd,
++ 0x08, 0x13, 0x1e, 0x29, 0x34, 0x3f, 0x4a, 0x55,
++ 0x60, 0x6b, 0x76, 0x81, 0x8c, 0x97, 0xa2, 0xad,
++ 0xb8, 0xc3, 0xce, 0xd9, 0xe4, 0xef, 0xfa, 0x05,
++ 0x10, 0x1b, 0x26, 0x31, 0x3c, 0x47, 0x52, 0x5d,
++ 0x68, 0x73, 0x7e, 0x89, 0x94, 0x9f, 0xaa, 0xb5,
++ 0xc0, 0xcb, 0xd6, 0xe1, 0xec, 0xf7, 0x02, 0x0d,
++ 0x18, 0x23, 0x2e, 0x39, 0x44, 0x4f, 0x5a, 0x65,
++ 0x70, 0x7b, 0x86, 0x91, 0x9c, 0xa7, 0xb2, 0xbd,
++ 0xc8, 0xd3, 0xde, 0xe9, 0xf4, 0xff, 0x0a, 0x15,
++ 0x20, 0x2b, 0x36, 0x41, 0x4c, 0x57, 0x62, 0x6d,
++ 0x78, 0x83, 0x8e, 0x99, 0xa4, 0xaf, 0xba, 0xc5,
++ 0xd0, 0xdb, 0xe6, 0xf1, 0xfc, 0x07, 0x12, 0x1d,
++ 0x28, 0x33, 0x3e, 0x49, 0x54, 0x5f, 0x6a, 0x75,
++ 0x80, 0x8b, 0x96, 0xa1, 0xac, 0xb7, 0xc2, 0xcd,
++ 0xd8, 0xe3, 0xee, 0xf9, 0x04, 0x0f, 0x1a, 0x25,
++ 0x30, 0x3b, 0x46, 0x51, 0x5c, 0x67, 0x72, 0x7d,
++ 0x88, 0x93, 0x9e, 0xa9, 0xb4, 0xbf, 0xca, 0xd5,
++ 0xe0, 0xeb, 0xf6, 0x01, 0x0c, 0x17, 0x22, 0x2d,
++ 0x38, 0x43, 0x4e, 0x59, 0x64, 0x6f, 0x7a, 0x85,
++ 0x90, 0x9b, 0xa6, 0xb1, 0xbc, 0xc7, 0xd2, 0xdd,
++ 0xe8, 0xf3, 0xfe, 0x09, 0x14, 0x1f, 0x2a, 0x35,
++ 0x40, 0x4b, 0x56, 0x61, 0x6c, 0x77, 0x82, 0x8d,
++ 0x98, 0xa3, 0xae, 0xb9, 0xc4, 0xcf, 0xda, 0xe5,
++ 0xf0, 0xfb, 0x06, 0x11, 0x1c, 0x27, 0x32, 0x3d,
++ 0x48, 0x53, 0x5e, 0x69, 0x74, 0x7f, 0x8a, 0x95,
++ 0xa0, 0xab, 0xb6, 0xc1, 0xcc, 0xd7, 0xe2, 0xed,
++ 0xf8, 0x03, 0x0e, 0x19, 0x24, 0x2f, 0x3a, 0x45,
++ 0x50, 0x5b, 0x66, 0x71, 0x7c, 0x87, 0x92, 0x9d,
++ 0xa8, 0xb3, 0xbe, 0xc9, 0xd4, 0xdf, 0xea, 0xf5,
++ 0x00, 0x0d, 0x1a, 0x27, 0x34, 0x41, 0x4e, 0x5b,
++ 0x68, 0x75, 0x82, 0x8f, 0x9c, 0xa9, 0xb6, 0xc3,
++ 0xd0, 0xdd, 0xea, 0xf7, 0x04, 0x11, 0x1e, 0x2b,
++ 0x38, 0x45, 0x52, 0x5f, 0x6c, 0x79, 0x86, 0x93,
++ 0xa0, 0xad, 0xba, 0xc7, 0xd4, 0xe1, 0xee, 0xfb,
++ 0x08, 0x15, 0x22, 0x2f, 0x3c, 0x49, 0x56, 0x63,
++ 0x70, 0x7d, 0x8a, 0x97, 0xa4, 0xb1, 0xbe, 0xcb,
++ 0xd8, 0xe5, 0xf2, 0xff, 0x0c, 0x19, 0x26, 0x33,
++ 0x40, 0x4d, 0x5a, 0x67, 0x74, 0x81, 0x8e, 0x9b,
++ 0xa8, 0xb5, 0xc2, 0xcf, 0xdc, 0xe9, 0xf6, 0x03,
++ 0x10, 0x1d, 0x2a, 0x37, 0x44, 0x51, 0x5e, 0x6b,
++ 0x78, 0x85, 0x92, 0x9f, 0xac, 0xb9, 0xc6, 0xd3,
++ 0xe0, 0xed, 0xfa, 0x07, 0x14, 0x21, 0x2e, 0x3b,
++ 0x48, 0x55, 0x62, 0x6f, 0x7c, 0x89, 0x96, 0xa3,
++ 0xb0, 0xbd, 0xca, 0xd7, 0xe4, 0xf1, 0xfe, 0x0b,
++ 0x18, 0x25, 0x32, 0x3f, 0x4c, 0x59, 0x66, 0x73,
++ 0x80, 0x8d, 0x9a, 0xa7, 0xb4, 0xc1, 0xce, 0xdb,
++ 0xe8, 0xf5, 0x02, 0x0f, 0x1c, 0x29, 0x36, 0x43,
++ 0x50, 0x5d, 0x6a, 0x77, 0x84, 0x91, 0x9e, 0xab,
++ 0xb8, 0xc5, 0xd2, 0xdf, 0xec, 0xf9, 0x06, 0x13,
++ 0x20, 0x2d, 0x3a, 0x47, 0x54, 0x61, 0x6e, 0x7b,
++ 0x88, 0x95, 0xa2, 0xaf, 0xbc, 0xc9, 0xd6, 0xe3,
++ 0xf0, 0xfd, 0x0a, 0x17, 0x24, 0x31, 0x3e, 0x4b,
++ 0x58, 0x65, 0x72, 0x7f, 0x8c, 0x99, 0xa6, 0xb3,
++ 0xc0, 0xcd, 0xda, 0xe7, 0xf4, 0x01, 0x0e, 0x1b,
++ 0x28, 0x35, 0x42, 0x4f, 0x5c, 0x69, 0x76, 0x83,
++ 0x90, 0x9d, 0xaa, 0xb7, 0xc4, 0xd1, 0xde, 0xeb,
++ 0xf8, 0x05, 0x12, 0x1f, 0x2c, 0x39, 0x46, 0x53,
++ 0x60, 0x6d, 0x7a, 0x87, 0x94, 0xa1, 0xae, 0xbb,
++ 0xc8, 0xd5, 0xe2, 0xef, 0xfc, 0x09, 0x16, 0x23,
++ 0x30, 0x3d, 0x4a, 0x57, 0x64, 0x71, 0x7e, 0x8b,
++ 0x98, 0xa5, 0xb2, 0xbf, 0xcc, 0xd9, 0xe6, 0xf3,
++ 0x00, 0x0f, 0x1e, 0x2d, 0x3c, 0x4b, 0x5a, 0x69,
++ 0x78, 0x87, 0x96, 0xa5, 0xb4, 0xc3, 0xd2, 0xe1,
++ 0xf0, 0xff, 0x0e, 0x1d, 0x2c, 0x3b, 0x4a, 0x59,
++ 0x68, 0x77, 0x86, 0x95, 0xa4, 0xb3, 0xc2, 0xd1,
++ 0xe0, 0xef, 0xfe, 0x0d, 0x1c, 0x2b, 0x3a, 0x49,
++ 0x58, 0x67, 0x76, 0x85, 0x94, 0xa3, 0xb2, 0xc1,
++ 0xd0, 0xdf, 0xee, 0xfd, 0x0c, 0x1b, 0x2a, 0x39,
++ 0x48, 0x57, 0x66, 0x75, 0x84, 0x93, 0xa2, 0xb1,
++ 0xc0, 0xcf, 0xde, 0xed, 0xfc, 0x0b, 0x1a, 0x29,
++ 0x38, 0x47, 0x56, 0x65, 0x74, 0x83, 0x92, 0xa1,
++ 0xb0, 0xbf, 0xce, 0xdd, 0xec, 0xfb, 0x0a, 0x19,
++ 0x28, 0x37, 0x46, 0x55, 0x64, 0x73, 0x82, 0x91,
++ 0xa0, 0xaf, 0xbe, 0xcd, 0xdc, 0xeb, 0xfa, 0x09,
++ 0x18, 0x27, 0x36, 0x45, 0x54, 0x63, 0x72, 0x81,
++ 0x90, 0x9f, 0xae, 0xbd, 0xcc, 0xdb, 0xea, 0xf9,
++ 0x08, 0x17, 0x26, 0x35, 0x44, 0x53, 0x62, 0x71,
++ 0x80, 0x8f, 0x9e, 0xad, 0xbc, 0xcb, 0xda, 0xe9,
++ 0xf8, 0x07, 0x16, 0x25, 0x34, 0x43, 0x52, 0x61,
++ 0x70, 0x7f, 0x8e, 0x9d, 0xac, 0xbb, 0xca, 0xd9,
++ 0xe8, 0xf7, 0x06, 0x15, 0x24, 0x33, 0x42, 0x51,
++ 0x60, 0x6f, 0x7e, 0x8d, 0x9c, 0xab, 0xba, 0xc9,
++ 0xd8, 0xe7, 0xf6, 0x05, 0x14, 0x23, 0x32, 0x41,
++ 0x50, 0x5f, 0x6e, 0x7d, 0x8c, 0x9b, 0xaa, 0xb9,
++ 0xc8, 0xd7, 0xe6, 0xf5, 0x04, 0x13, 0x22, 0x31,
++ 0x40, 0x4f, 0x5e, 0x6d, 0x7c, 0x8b, 0x9a, 0xa9,
++ 0xb8, 0xc7, 0xd6, 0xe5, 0xf4, 0x03, 0x12, 0x21,
++ 0x30, 0x3f, 0x4e, 0x5d, 0x6c, 0x7b, 0x8a, 0x99,
++ 0xa8, 0xb7, 0xc6, 0xd5, 0xe4, 0xf3, 0x02, 0x11,
++ 0x20, 0x2f, 0x3e, 0x4d, 0x5c, 0x6b, 0x7a, 0x89,
++ 0x98, 0xa7, 0xb6, 0xc5, 0xd4, 0xe3, 0xf2, 0x01,
++ 0x10, 0x1f, 0x2e, 0x3d, 0x4c, 0x5b, 0x6a, 0x79,
++ 0x88, 0x97, 0xa6, 0xb5, 0xc4, 0xd3, 0xe2, 0xf1,
++ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
++ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
++ 0x10, 0x21, 0x32, 0x43, 0x54, 0x65, 0x76, 0x87,
++ 0x98, 0xa9, 0xba, 0xcb, 0xdc, 0xed, 0xfe, 0x0f,
++ 0x20, 0x31, 0x42, 0x53, 0x64, 0x75, 0x86, 0x97,
++ 0xa8, 0xb9, 0xca, 0xdb, 0xec, 0xfd, 0x0e, 0x1f,
++ 0x30, 0x41, 0x52, 0x63, 0x74, 0x85, 0x96, 0xa7,
++ 0xb8, 0xc9, 0xda, 0xeb, 0xfc, 0x0d, 0x1e, 0x2f,
++ 0x40, 0x51, 0x62, 0x73, 0x84, 0x95, 0xa6, 0xb7,
++ 0xc8, 0xd9, 0xea, 0xfb, 0x0c, 0x1d, 0x2e, 0x3f,
++ 0x50, 0x61, 0x72, 0x83, 0x94, 0xa5, 0xb6, 0xc7,
++ 0xd8, 0xe9, 0xfa, 0x0b, 0x1c, 0x2d, 0x3e, 0x4f,
++ 0x60, 0x71, 0x82, 0x93, 0xa4, 0xb5, 0xc6, 0xd7,
++ 0xe8, 0xf9, 0x0a, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f,
++ 0x70, 0x81, 0x92, 0xa3, 0xb4, 0xc5, 0xd6, 0xe7,
++ 0xf8, 0x09, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e, 0x6f,
++ 0x80, 0x91, 0xa2, 0xb3, 0xc4, 0xd5, 0xe6, 0xf7,
++ 0x08, 0x19, 0x2a, 0x3b, 0x4c, 0x5d, 0x6e, 0x7f,
++ 0x90, 0xa1, 0xb2, 0xc3, 0xd4, 0xe5, 0xf6, 0x07,
++ 0x18, 0x29, 0x3a, 0x4b, 0x5c, 0x6d, 0x7e, 0x8f,
++ 0xa0, 0xb1, 0xc2, 0xd3, 0xe4, 0xf5, 0x06, 0x17,
++ 0x28, 0x39, 0x4a, 0x5b, 0x6c, 0x7d, 0x8e, 0x9f,
++ 0xb0, 0xc1, 0xd2, 0xe3, 0xf4, 0x05, 0x16, 0x27,
++ 0x38, 0x49, 0x5a, 0x6b, 0x7c, 0x8d, 0x9e, 0xaf,
++ 0xc0, 0xd1, 0xe2, 0xf3, 0x04, 0x15, 0x26, 0x37,
++ 0x48, 0x59, 0x6a, 0x7b, 0x8c, 0x9d, 0xae, 0xbf,
++ 0xd0, 0xe1, 0xf2, 0x03, 0x14, 0x25, 0x36, 0x47,
++ 0x58, 0x69, 0x7a, 0x8b, 0x9c, 0xad, 0xbe, 0xcf,
++ 0xe0, 0xf1, 0x02, 0x13, 0x24, 0x35, 0x46, 0x57,
++ 0x68, 0x79, 0x8a, 0x9b, 0xac, 0xbd, 0xce, 0xdf,
++ 0xf0, 0x01, 0x12, 0x23, 0x34, 0x45, 0x56, 0x67,
++ 0x78, 0x89, 0x9a, 0xab, 0xbc, 0xcd, 0xde, 0xef,
++ 0x00, 0x13, 0x26, 0x39, 0x4c, 0x5f, 0x72, 0x85,
++ 0x98, 0xab, 0xbe, 0xd1, 0xe4, 0xf7, 0x0a, 0x1d,
++ 0x30, 0x43, 0x56, 0x69, 0x7c, 0x8f, 0xa2, 0xb5,
++ 0xc8, 0xdb, 0xee, 0x01, 0x14, 0x27, 0x3a, 0x4d,
++ 0x60, 0x73, 0x86, 0x99, 0xac, 0xbf, 0xd2, 0xe5,
++ 0xf8, 0x0b, 0x1e, 0x31, 0x44, 0x57, 0x6a, 0x7d,
++ 0x90, 0xa3, 0xb6, 0xc9, 0xdc, 0xef, 0x02, 0x15,
++ 0x28, 0x3b, 0x4e, 0x61, 0x74, 0x87, 0x9a, 0xad,
++ 0xc0, 0xd3, 0xe6, 0xf9, 0x0c, 0x1f, 0x32, 0x45,
++ 0x58, 0x6b, 0x7e, 0x91, 0xa4, 0xb7, 0xca, 0xdd,
++ 0xf0, 0x03, 0x16, 0x29, 0x3c, 0x4f, 0x62, 0x75,
++ 0x88, 0x9b, 0xae, 0xc1, 0xd4, 0xe7, 0xfa, 0x0d,
++ 0x20, 0x33, 0x46, 0x59, 0x6c, 0x7f, 0x92, 0xa5,
++ 0xb8, 0xcb, 0xde, 0xf1, 0x04, 0x17, 0x2a, 0x3d,
++ 0x50, 0x63, 0x76, 0x89, 0x9c, 0xaf, 0xc2, 0xd5,
++ 0xe8, 0xfb, 0x0e, 0x21, 0x34, 0x47, 0x5a, 0x6d,
++ 0x80, 0x93, 0xa6, 0xb9, 0xcc, 0xdf, 0xf2, 0x05,
++ 0x18, 0x2b, 0x3e, 0x51, 0x64, 0x77, 0x8a, 0x9d,
++ 0xb0, 0xc3, 0xd6, 0xe9, 0xfc, 0x0f, 0x22, 0x35,
++ 0x48, 0x5b, 0x6e, 0x81, 0x94, 0xa7, 0xba, 0xcd,
++ 0xe0, 0xf3, 0x06, 0x19, 0x2c, 0x3f, 0x52, 0x65,
++ 0x78, 0x8b, 0x9e, 0xb1, 0xc4, 0xd7, 0xea, 0xfd,
++ 0x10, 0x23, 0x36, 0x49, 0x5c, 0x6f, 0x82, 0x95,
++ 0xa8, 0xbb, 0xce, 0xe1, 0xf4, 0x07, 0x1a, 0x2d,
++ 0x40, 0x53, 0x66, 0x79, 0x8c, 0x9f, 0xb2, 0xc5,
++ 0xd8, 0xeb, 0xfe, 0x11, 0x24, 0x37, 0x4a, 0x5d,
++ 0x70, 0x83, 0x96, 0xa9, 0xbc, 0xcf, 0xe2, 0xf5,
++ 0x08, 0x1b, 0x2e, 0x41, 0x54, 0x67, 0x7a, 0x8d,
++ 0xa0, 0xb3, 0xc6, 0xd9, 0xec, 0xff, 0x12, 0x25,
++ 0x38, 0x4b, 0x5e, 0x71, 0x84, 0x97, 0xaa, 0xbd,
++ 0xd0, 0xe3, 0xf6, 0x09, 0x1c, 0x2f, 0x42, 0x55,
++ 0x68, 0x7b, 0x8e, 0xa1, 0xb4, 0xc7, 0xda, 0xed,
++ 0x00, 0x15, 0x2a, 0x3f, 0x54, 0x69, 0x7e, 0x93,
++ 0xa8, 0xbd, 0xd2, 0xe7, 0xfc, 0x11, 0x26, 0x3b,
++ 0x50, 0x65, 0x7a, 0x8f, 0xa4, 0xb9, 0xce, 0xe3,
++ 0xf8, 0x0d, 0x22, 0x37, 0x4c, 0x61, 0x76, 0x8b,
++ 0xa0, 0xb5, 0xca, 0xdf, 0xf4, 0x09, 0x1e, 0x33,
++ 0x48, 0x5d, 0x72, 0x87, 0x9c, 0xb1, 0xc6, 0xdb,
++ 0xf0, 0x05, 0x1a, 0x2f, 0x44, 0x59, 0x6e, 0x83,
++ 0x98, 0xad, 0xc2, 0xd7, 0xec, 0x01, 0x16, 0x2b,
++ 0x40, 0x55, 0x6a, 0x7f, 0x94, 0xa9, 0xbe, 0xd3,
++ 0xe8, 0xfd, 0x12, 0x27, 0x3c, 0x51, 0x66, 0x7b,
++ 0x90, 0xa5, 0xba, 0xcf, 0xe4, 0xf9, 0x0e, 0x23,
++ 0x38, 0x4d, 0x62, 0x77, 0x8c, 0xa1, 0xb6, 0xcb,
++ 0xe0, 0xf5, 0x0a, 0x1f, 0x34, 0x49, 0x5e, 0x73,
++ 0x88, 0x9d, 0xb2, 0xc7, 0xdc, 0xf1, 0x06, 0x1b,
++ 0x30, 0x45, 0x5a, 0x6f, 0x84, 0x99, 0xae, 0xc3,
++ 0xd8, 0xed, 0x02, 0x17, 0x2c, 0x41, 0x56, 0x6b,
++ 0x80, 0x95, 0xaa, 0xbf, 0xd4, 0xe9, 0xfe, 0x13,
++ 0x28, 0x3d, 0x52, 0x67, 0x7c, 0x91, 0xa6, 0xbb,
++ 0xd0, 0xe5, 0xfa, 0x0f, 0x24, 0x39, 0x4e, 0x63,
++ 0x78, 0x8d, 0xa2, 0xb7, 0xcc, 0xe1, 0xf6, 0x0b,
++ 0x20, 0x35, 0x4a, 0x5f, 0x74, 0x89, 0x9e, 0xb3,
++ 0xc8, 0xdd, 0xf2, 0x07, 0x1c, 0x31, 0x46, 0x5b,
++ 0x70, 0x85, 0x9a, 0xaf, 0xc4, 0xd9, 0xee, 0x03,
++ 0x18, 0x2d, 0x42, 0x57, 0x6c, 0x81, 0x96, 0xab,
++ 0xc0, 0xd5, 0xea, 0xff, 0x14, 0x29, 0x3e, 0x53,
++ 0x68, 0x7d, 0x92, 0xa7, 0xbc, 0xd1, 0xe6, 0xfb,
++ 0x10, 0x25, 0x3a, 0x4f, 0x64, 0x79, 0x8e, 0xa3,
++ 0xb8, 0xcd, 0xe2, 0xf7, 0x0c, 0x21, 0x36, 0x4b,
++ 0x60, 0x75, 0x8a, 0x9f, 0xb4, 0xc9, 0xde, 0xf3,
++ 0x08, 0x1d, 0x32, 0x47, 0x5c, 0x71, 0x86, 0x9b,
++ 0xb0, 0xc5, 0xda, 0xef, 0x04, 0x19, 0x2e, 0x43,
++ 0x58, 0x6d, 0x82, 0x97, 0xac, 0xc1, 0xd6, 0xeb,
++ 0x00, 0x17, 0x2e, 0x45, 0x5c, 0x73, 0x8a, 0xa1,
++ 0xb8, 0xcf, 0xe6, 0xfd, 0x14, 0x2b, 0x42, 0x59,
++ 0x70, 0x87, 0x9e, 0xb5, 0xcc, 0xe3, 0xfa, 0x11,
++ 0x28, 0x3f, 0x56, 0x6d, 0x84, 0x9b, 0xb2, 0xc9,
++ 0xe0, 0xf7, 0x0e, 0x25, 0x3c, 0x53, 0x6a, 0x81,
++ 0x98, 0xaf, 0xc6, 0xdd, 0xf4, 0x0b, 0x22, 0x39,
++ 0x50, 0x67, 0x7e, 0x95, 0xac, 0xc3, 0xda, 0xf1,
++ 0x08, 0x1f, 0x36, 0x4d, 0x64, 0x7b, 0x92, 0xa9,
++ 0xc0, 0xd7, 0xee, 0x05, 0x1c, 0x33, 0x4a, 0x61,
++ 0x78, 0x8f, 0xa6, 0xbd, 0xd4, 0xeb, 0x02, 0x19,
++ 0x30, 0x47, 0x5e, 0x75, 0x8c, 0xa3, 0xba, 0xd1,
++ 0xe8, 0xff, 0x16, 0x2d, 0x44, 0x5b, 0x72, 0x89,
++ 0xa0, 0xb7, 0xce, 0xe5, 0xfc, 0x13, 0x2a, 0x41,
++ 0x58, 0x6f, 0x86, 0x9d, 0xb4, 0xcb, 0xe2, 0xf9,
++ 0x10, 0x27, 0x3e, 0x55, 0x6c, 0x83, 0x9a, 0xb1,
++ 0xc8, 0xdf, 0xf6, 0x0d, 0x24, 0x3b, 0x52, 0x69,
++ 0x80, 0x97, 0xae, 0xc5, 0xdc, 0xf3, 0x0a, 0x21,
++ 0x38, 0x4f, 0x66, 0x7d, 0x94, 0xab, 0xc2, 0xd9,
++ 0xf0, 0x07, 0x1e, 0x35, 0x4c, 0x63, 0x7a, 0x91,
++ 0xa8, 0xbf, 0xd6, 0xed, 0x04, 0x1b, 0x32, 0x49,
++ 0x60, 0x77, 0x8e, 0xa5, 0xbc, 0xd3, 0xea, 0x01,
++ 0x18, 0x2f, 0x46, 0x5d, 0x74, 0x8b, 0xa2, 0xb9,
++ 0xd0, 0xe7, 0xfe, 0x15, 0x2c, 0x43, 0x5a, 0x71,
++ 0x88, 0x9f, 0xb6, 0xcd, 0xe4, 0xfb, 0x12, 0x29,
++ 0x40, 0x57, 0x6e, 0x85, 0x9c, 0xb3, 0xca, 0xe1,
++ 0xf8, 0x0f, 0x26, 0x3d, 0x54, 0x6b, 0x82, 0x99,
++ 0xb0, 0xc7, 0xde, 0xf5, 0x0c, 0x23, 0x3a, 0x51,
++ 0x68, 0x7f, 0x96, 0xad, 0xc4, 0xdb, 0xf2, 0x09,
++ 0x20, 0x37, 0x4e, 0x65, 0x7c, 0x93, 0xaa, 0xc1,
++ 0xd8, 0xef, 0x06, 0x1d, 0x34, 0x4b, 0x62, 0x79,
++ 0x90, 0xa7, 0xbe, 0xd5, 0xec, 0x03, 0x1a, 0x31,
++ 0x48, 0x5f, 0x76, 0x8d, 0xa4, 0xbb, 0xd2, 0xe9,
++ 0x00, 0x19, 0x32, 0x4b, 0x64, 0x7d, 0x96, 0xaf,
++ 0xc8, 0xe1, 0xfa, 0x13, 0x2c, 0x45, 0x5e, 0x77,
++ 0x90, 0xa9, 0xc2, 0xdb, 0xf4, 0x0d, 0x26, 0x3f,
++ 0x58, 0x71, 0x8a, 0xa3, 0xbc, 0xd5, 0xee, 0x07,
++ 0x20, 0x39, 0x52, 0x6b, 0x84, 0x9d, 0xb6, 0xcf,
++ 0xe8, 0x01, 0x1a, 0x33, 0x4c, 0x65, 0x7e, 0x97,
++ 0xb0, 0xc9, 0xe2, 0xfb, 0x14, 0x2d, 0x46, 0x5f,
++ 0x78, 0x91, 0xaa, 0xc3, 0xdc, 0xf5, 0x0e, 0x27,
++ 0x40, 0x59, 0x72, 0x8b, 0xa4, 0xbd, 0xd6, 0xef,
++ 0x08, 0x21, 0x3a, 0x53, 0x6c, 0x85, 0x9e, 0xb7,
++ 0xd0, 0xe9, 0x02, 0x1b, 0x34, 0x4d, 0x66, 0x7f,
++ 0x98, 0xb1, 0xca, 0xe3, 0xfc, 0x15, 0x2e, 0x47,
++ 0x60, 0x79, 0x92, 0xab, 0xc4, 0xdd, 0xf6, 0x0f,
++ 0x28, 0x41, 0x5a, 0x73, 0x8c, 0xa5, 0xbe, 0xd7,
++ 0xf0, 0x09, 0x22, 0x3b, 0x54, 0x6d, 0x86, 0x9f,
++ 0xb8, 0xd1, 0xea, 0x03, 0x1c, 0x35, 0x4e, 0x67,
++ 0x80, 0x99, 0xb2, 0xcb, 0xe4, 0xfd, 0x16, 0x2f,
++ 0x48, 0x61, 0x7a, 0x93, 0xac, 0xc5, 0xde, 0xf7,
++ 0x10, 0x29, 0x42, 0x5b, 0x74, 0x8d, 0xa6, 0xbf,
++ 0xd8, 0xf1, 0x0a, 0x23, 0x3c, 0x55, 0x6e, 0x87,
++ 0xa0, 0xb9, 0xd2, 0xeb, 0x04, 0x1d, 0x36, 0x4f,
++ 0x68, 0x81, 0x9a, 0xb3, 0xcc, 0xe5, 0xfe, 0x17,
++ 0x30, 0x49, 0x62, 0x7b, 0x94, 0xad, 0xc6, 0xdf,
++ 0xf8, 0x11, 0x2a, 0x43, 0x5c, 0x75, 0x8e, 0xa7,
++ 0xc0, 0xd9, 0xf2, 0x0b, 0x24, 0x3d, 0x56, 0x6f,
++ 0x88, 0xa1, 0xba, 0xd3, 0xec, 0x05, 0x1e, 0x37,
++ 0x50, 0x69, 0x82, 0x9b, 0xb4, 0xcd, 0xe6, 0xff,
++ 0x18, 0x31, 0x4a, 0x63, 0x7c, 0x95, 0xae, 0xc7,
++ 0xe0, 0xf9, 0x12, 0x2b, 0x44, 0x5d, 0x76, 0x8f,
++ 0xa8, 0xc1, 0xda, 0xf3, 0x0c, 0x25, 0x3e, 0x57,
++ 0x70, 0x89, 0xa2, 0xbb, 0xd4, 0xed, 0x06, 0x1f,
++ 0x38, 0x51, 0x6a, 0x83, 0x9c, 0xb5, 0xce, 0xe7,
++ 0x00, 0x1b, 0x36, 0x51, 0x6c, 0x87, 0xa2, 0xbd,
++ 0xd8, 0xf3, 0x0e, 0x29, 0x44, 0x5f, 0x7a, 0x95,
++ 0xb0, 0xcb, 0xe6, 0x01, 0x1c, 0x37, 0x52, 0x6d,
++ 0x88, 0xa3, 0xbe, 0xd9, 0xf4, 0x0f, 0x2a, 0x45,
++ 0x60, 0x7b, 0x96, 0xb1, 0xcc, 0xe7, 0x02, 0x1d,
++ 0x38, 0x53, 0x6e, 0x89, 0xa4, 0xbf, 0xda, 0xf5,
++ 0x10, 0x2b, 0x46, 0x61, 0x7c, 0x97, 0xb2, 0xcd,
++ 0xe8, 0x03, 0x1e, 0x39, 0x54, 0x6f, 0x8a, 0xa5,
++ 0xc0, 0xdb, 0xf6, 0x11, 0x2c, 0x47, 0x62, 0x7d,
++ 0x98, 0xb3, 0xce, 0xe9, 0x04, 0x1f, 0x3a, 0x55,
++ 0x70, 0x8b, 0xa6, 0xc1, 0xdc, 0xf7, 0x12, 0x2d,
++ 0x48, 0x63, 0x7e, 0x99, 0xb4, 0xcf, 0xea, 0x05,
++ 0x20, 0x3b, 0x56, 0x71, 0x8c, 0xa7, 0xc2, 0xdd,
++ 0xf8, 0x13, 0x2e, 0x49, 0x64, 0x7f, 0x9a, 0xb5,
++ 0xd0, 0xeb, 0x06, 0x21, 0x3c, 0x57, 0x72, 0x8d,
++ 0xa8, 0xc3, 0xde, 0xf9, 0x14, 0x2f, 0x4a, 0x65,
++ 0x80, 0x9b, 0xb6, 0xd1, 0xec, 0x07, 0x22, 0x3d,
++ 0x58, 0x73, 0x8e, 0xa9, 0xc4, 0xdf, 0xfa, 0x15,
++ 0x30, 0x4b, 0x66, 0x81, 0x9c, 0xb7, 0xd2, 0xed,
++ 0x08, 0x23, 0x3e, 0x59, 0x74, 0x8f, 0xaa, 0xc5,
++ 0xe0, 0xfb, 0x16, 0x31, 0x4c, 0x67, 0x82, 0x9d,
++ 0xb8, 0xd3, 0xee, 0x09, 0x24, 0x3f, 0x5a, 0x75,
++ 0x90, 0xab, 0xc6, 0xe1, 0xfc, 0x17, 0x32, 0x4d,
++ 0x68, 0x83, 0x9e, 0xb9, 0xd4, 0xef, 0x0a, 0x25,
++ 0x40, 0x5b, 0x76, 0x91, 0xac, 0xc7, 0xe2, 0xfd,
++ 0x18, 0x33, 0x4e, 0x69, 0x84, 0x9f, 0xba, 0xd5,
++ 0xf0, 0x0b, 0x26, 0x41, 0x5c, 0x77, 0x92, 0xad,
++ 0xc8, 0xe3, 0xfe, 0x19, 0x34, 0x4f, 0x6a, 0x85,
++ 0xa0, 0xbb, 0xd6, 0xf1, 0x0c, 0x27, 0x42, 0x5d,
++ 0x78, 0x93, 0xae, 0xc9, 0xe4, 0xff, 0x1a, 0x35,
++ 0x50, 0x6b, 0x86, 0xa1, 0xbc, 0xd7, 0xf2, 0x0d,
++ 0x28, 0x43, 0x5e, 0x79, 0x94, 0xaf, 0xca, 0xe5,
++ 0x00, 0x1d, 0x3a, 0x57, 0x74, 0x91, 0xae, 0xcb,
++ 0xe8, 0x05, 0x22, 0x3f, 0x5c, 0x79, 0x96, 0xb3,
++ 0xd0, 0xed, 0x0a, 0x27, 0x44, 0x61, 0x7e, 0x9b,
++ 0xb8, 0xd5, 0xf2, 0x0f, 0x2c, 0x49, 0x66, 0x83,
++ 0xa0, 0xbd, 0xda, 0xf7, 0x14, 0x31, 0x4e, 0x6b,
++ 0x88, 0xa5, 0xc2, 0xdf, 0xfc, 0x19, 0x36, 0x53,
++ 0x70, 0x8d, 0xaa, 0xc7, 0xe4, 0x01, 0x1e, 0x3b,
++ 0x58, 0x75, 0x92, 0xaf, 0xcc, 0xe9, 0x06, 0x23,
++ 0x40, 0x5d, 0x7a, 0x97, 0xb4, 0xd1, 0xee, 0x0b,
++ 0x28, 0x45, 0x62, 0x7f, 0x9c, 0xb9, 0xd6, 0xf3,
++ 0x10, 0x2d, 0x4a, 0x67, 0x84, 0xa1, 0xbe, 0xdb,
++ 0xf8, 0x15, 0x32, 0x4f, 0x6c, 0x89, 0xa6, 0xc3,
++ 0xe0, 0xfd, 0x1a, 0x37, 0x54, 0x71, 0x8e, 0xab,
++ 0xc8, 0xe5, 0x02, 0x1f, 0x3c, 0x59, 0x76, 0x93,
++ 0xb0, 0xcd, 0xea, 0x07, 0x24, 0x41, 0x5e, 0x7b,
++ 0x98, 0xb5, 0xd2, 0xef, 0x0c, 0x29, 0x46, 0x63,
++ 0x80, 0x9d, 0xba, 0xd7, 0xf4, 0x11, 0x2e, 0x4b,
++ 0x68, 0x85, 0xa2, 0xbf, 0xdc, 0xf9, 0x16, 0x33,
++ 0x50, 0x6d, 0x8a, 0xa7, 0xc4, 0xe1, 0xfe, 0x1b,
++ 0x38, 0x55, 0x72, 0x8f, 0xac, 0xc9, 0xe6, 0x03,
++ 0x20, 0x3d, 0x5a, 0x77, 0x94, 0xb1, 0xce, 0xeb,
++ 0x08, 0x25, 0x42, 0x5f, 0x7c, 0x99, 0xb6, 0xd3,
++ 0xf0, 0x0d, 0x2a, 0x47, 0x64, 0x81, 0x9e, 0xbb,
++ 0xd8, 0xf5, 0x12, 0x2f, 0x4c, 0x69, 0x86, 0xa3,
++ 0xc0, 0xdd, 0xfa, 0x17, 0x34, 0x51, 0x6e, 0x8b,
++ 0xa8, 0xc5, 0xe2, 0xff, 0x1c, 0x39, 0x56, 0x73,
++ 0x90, 0xad, 0xca, 0xe7, 0x04, 0x21, 0x3e, 0x5b,
++ 0x78, 0x95, 0xb2, 0xcf, 0xec, 0x09, 0x26, 0x43,
++ 0x60, 0x7d, 0x9a, 0xb7, 0xd4, 0xf1, 0x0e, 0x2b,
++ 0x48, 0x65, 0x82, 0x9f, 0xbc, 0xd9, 0xf6, 0x13,
++ 0x30, 0x4d, 0x6a, 0x87, 0xa4, 0xc1, 0xde, 0xfb,
++ 0x18, 0x35, 0x52, 0x6f, 0x8c, 0xa9, 0xc6, 0xe3,
++ 0x00, 0x1f, 0x3e, 0x5d, 0x7c, 0x9b, 0xba, 0xd9,
++ 0xf8, 0x17, 0x36, 0x55, 0x74, 0x93, 0xb2, 0xd1,
++ 0xf0, 0x0f, 0x2e, 0x4d, 0x6c, 0x8b, 0xaa, 0xc9,
++ 0xe8, 0x07, 0x26, 0x45, 0x64, 0x83, 0xa2, 0xc1,
++ 0xe0, 0xff, 0x1e, 0x3d, 0x5c, 0x7b, 0x9a, 0xb9,
++ 0xd8, 0xf7, 0x16, 0x35, 0x54, 0x73, 0x92, 0xb1,
++ 0xd0, 0xef, 0x0e, 0x2d, 0x4c, 0x6b, 0x8a, 0xa9,
++ 0xc8, 0xe7, 0x06, 0x25, 0x44, 0x63, 0x82, 0xa1,
++ 0xc0, 0xdf, 0xfe, 0x1d, 0x3c, 0x5b, 0x7a, 0x99,
++ 0xb8, 0xd7, 0xf6, 0x15, 0x34, 0x53, 0x72, 0x91,
++ 0xb0, 0xcf, 0xee, 0x0d, 0x2c, 0x4b, 0x6a, 0x89,
++ 0xa8, 0xc7, 0xe6, 0x05, 0x24, 0x43, 0x62, 0x81,
++ 0xa0, 0xbf, 0xde, 0xfd, 0x1c, 0x3b, 0x5a, 0x79,
++ 0x98, 0xb7, 0xd6, 0xf5, 0x14, 0x33, 0x52, 0x71,
++ 0x90, 0xaf, 0xce, 0xed, 0x0c, 0x2b, 0x4a, 0x69,
++ 0x88, 0xa7, 0xc6, 0xe5, 0x04, 0x23, 0x42, 0x61,
++ 0x80, 0x9f, 0xbe, 0xdd, 0xfc, 0x1b, 0x3a, 0x59,
++ 0x78, 0x97, 0xb6, 0xd5, 0xf4, 0x13, 0x32, 0x51,
++ 0x70, 0x8f, 0xae, 0xcd, 0xec, 0x0b, 0x2a, 0x49,
++ 0x68, 0x87, 0xa6, 0xc5, 0xe4, 0x03, 0x22, 0x41,
++ 0x60, 0x7f, 0x9e, 0xbd, 0xdc, 0xfb, 0x1a, 0x39,
++ 0x58, 0x77, 0x96, 0xb5, 0xd4, 0xf3, 0x12, 0x31,
++ 0x50, 0x6f, 0x8e, 0xad, 0xcc, 0xeb, 0x0a, 0x29,
++ 0x48, 0x67, 0x86, 0xa5, 0xc4, 0xe3, 0x02, 0x21,
++ 0x40, 0x5f, 0x7e, 0x9d, 0xbc, 0xdb, 0xfa, 0x19,
++ 0x38, 0x57, 0x76, 0x95, 0xb4, 0xd3, 0xf2, 0x11,
++ 0x30, 0x4f, 0x6e, 0x8d, 0xac, 0xcb, 0xea, 0x09,
++ 0x28, 0x47, 0x66, 0x85, 0xa4, 0xc3, 0xe2, 0x01,
++ 0x20, 0x3f, 0x5e, 0x7d, 0x9c, 0xbb, 0xda, 0xf9,
++ 0x18, 0x37, 0x56, 0x75, 0x94, 0xb3, 0xd2, 0xf1,
++ 0x10, 0x2f, 0x4e, 0x6d, 0x8c, 0xab, 0xca, 0xe9,
++ 0x08, 0x27, 0x46, 0x65, 0x84, 0xa3, 0xc2, 0xe1,
++ 0x00, 0x21, 0x42, 0x63,
++ },
++ .ilen = 4100,
++ .result = {
++ 0xb5, 0x81, 0xf5, 0x64, 0x18, 0x73, 0xe3, 0xf0,
++ 0x4c, 0x13, 0xf2, 0x77, 0x18, 0x60, 0x65, 0x5e,
++ 0x29, 0x01, 0xce, 0x98, 0x55, 0x53, 0xf9, 0x0c,
++ 0x2a, 0x08, 0xd5, 0x09, 0xb3, 0x57, 0x55, 0x56,
++ 0xc5, 0xe9, 0x56, 0x90, 0xcb, 0x6a, 0xa3, 0xc0,
++ 0xff, 0xc4, 0x79, 0xb4, 0xd2, 0x97, 0x5d, 0xc4,
++ 0x43, 0xd1, 0xfe, 0x94, 0x7b, 0x88, 0x06, 0x5a,
++ 0xb2, 0x9e, 0x2c, 0xfc, 0x44, 0x03, 0xb7, 0x90,
++ 0xa0, 0xc1, 0xba, 0x6a, 0x33, 0xb8, 0xc7, 0xb2,
++ 0x9d, 0xe1, 0x12, 0x4f, 0xc0, 0x64, 0xd4, 0x01,
++ 0xfe, 0x8c, 0x7a, 0x66, 0xf7, 0xe6, 0x5a, 0x91,
++ 0xbb, 0xde, 0x56, 0x86, 0xab, 0x65, 0x21, 0x30,
++ 0x00, 0x84, 0x65, 0x24, 0xa5, 0x7d, 0x85, 0xb4,
++ 0xe3, 0x17, 0xed, 0x3a, 0xb7, 0x6f, 0xb4, 0x0b,
++ 0x0b, 0xaf, 0x15, 0xae, 0x5a, 0x8f, 0xf2, 0x0c,
++ 0x2f, 0x27, 0xf4, 0x09, 0xd8, 0xd2, 0x96, 0xb7,
++ 0x71, 0xf2, 0xc5, 0x99, 0x4d, 0x7e, 0x7f, 0x75,
++ 0x77, 0x89, 0x30, 0x8b, 0x59, 0xdb, 0xa2, 0xb2,
++ 0xa0, 0xf3, 0x19, 0x39, 0x2b, 0xc5, 0x7e, 0x3f,
++ 0x4f, 0xd9, 0xd3, 0x56, 0x28, 0x97, 0x44, 0xdc,
++ 0xc0, 0x8b, 0x77, 0x24, 0xd9, 0x52, 0xe7, 0xc5,
++ 0xaf, 0xf6, 0x7d, 0x59, 0xb2, 0x44, 0x05, 0x1d,
++ 0xb1, 0xb0, 0x11, 0xa5, 0x0f, 0xec, 0x33, 0xe1,
++ 0x6d, 0x1b, 0x4e, 0x1f, 0xff, 0x57, 0x91, 0xb4,
++ 0x5b, 0x9a, 0x96, 0xc5, 0x53, 0xbc, 0xae, 0x20,
++ 0x3c, 0xbb, 0x14, 0xe2, 0xe8, 0x22, 0x33, 0xc1,
++ 0x5e, 0x76, 0x9e, 0x46, 0x99, 0xf6, 0x2a, 0x15,
++ 0xc6, 0x97, 0x02, 0xa0, 0x66, 0x43, 0xd1, 0xa6,
++ 0x31, 0xa6, 0x9f, 0xfb, 0xf4, 0xd3, 0x69, 0xe5,
++ 0xcd, 0x76, 0x95, 0xb8, 0x7a, 0x82, 0x7f, 0x21,
++ 0x45, 0xff, 0x3f, 0xce, 0x55, 0xf6, 0x95, 0x10,
++ 0x08, 0x77, 0x10, 0x43, 0xc6, 0xf3, 0x09, 0xe5,
++ 0x68, 0xe7, 0x3c, 0xad, 0x00, 0x52, 0x45, 0x0d,
++ 0xfe, 0x2d, 0xc6, 0xc2, 0x94, 0x8c, 0x12, 0x1d,
++ 0xe6, 0x25, 0xae, 0x98, 0x12, 0x8e, 0x19, 0x9c,
++ 0x81, 0x68, 0xb1, 0x11, 0xf6, 0x69, 0xda, 0xe3,
++ 0x62, 0x08, 0x18, 0x7a, 0x25, 0x49, 0x28, 0xac,
++ 0xba, 0x71, 0x12, 0x0b, 0xe4, 0xa2, 0xe5, 0xc7,
++ 0x5d, 0x8e, 0xec, 0x49, 0x40, 0x21, 0xbf, 0x5a,
++ 0x98, 0xf3, 0x02, 0x68, 0x55, 0x03, 0x7f, 0x8a,
++ 0xe5, 0x94, 0x0c, 0x32, 0x5c, 0x07, 0x82, 0x63,
++ 0xaf, 0x6f, 0x91, 0x40, 0x84, 0x8e, 0x52, 0x25,
++ 0xd0, 0xb0, 0x29, 0x53, 0x05, 0xe2, 0x50, 0x7a,
++ 0x34, 0xeb, 0xc9, 0x46, 0x20, 0xa8, 0x3d, 0xde,
++ 0x7f, 0x16, 0x5f, 0x36, 0xc5, 0x2e, 0xdc, 0xd1,
++ 0x15, 0x47, 0xc7, 0x50, 0x40, 0x6d, 0x91, 0xc5,
++ 0xe7, 0x93, 0x95, 0x1a, 0xd3, 0x57, 0xbc, 0x52,
++ 0x33, 0xee, 0x14, 0x19, 0x22, 0x52, 0x89, 0xa7,
++ 0x4a, 0x25, 0x56, 0x77, 0x4b, 0xca, 0xcf, 0x0a,
++ 0xe1, 0xf5, 0x35, 0x85, 0x30, 0x7e, 0x59, 0x4a,
++ 0xbd, 0x14, 0x5b, 0xdf, 0xe3, 0x46, 0xcb, 0xac,
++ 0x1f, 0x6c, 0x96, 0x0e, 0xf4, 0x81, 0xd1, 0x99,
++ 0xca, 0x88, 0x63, 0x3d, 0x02, 0x58, 0x6b, 0xa9,
++ 0xe5, 0x9f, 0xb3, 0x00, 0xb2, 0x54, 0xc6, 0x74,
++ 0x1c, 0xbf, 0x46, 0xab, 0x97, 0xcc, 0xf8, 0x54,
++ 0x04, 0x07, 0x08, 0x52, 0xe6, 0xc0, 0xda, 0x93,
++ 0x74, 0x7d, 0x93, 0x99, 0x5d, 0x78, 0x68, 0xa6,
++ 0x2e, 0x6b, 0xd3, 0x6a, 0x69, 0xcc, 0x12, 0x6b,
++ 0xd4, 0xc7, 0xa5, 0xc6, 0xe7, 0xf6, 0x03, 0x04,
++ 0x5d, 0xcd, 0x61, 0x5e, 0x17, 0x40, 0xdc, 0xd1,
++ 0x5c, 0xf5, 0x08, 0xdf, 0x5c, 0x90, 0x85, 0xa4,
++ 0xaf, 0xf6, 0x78, 0xbb, 0x0d, 0xf1, 0xf4, 0xa4,
++ 0x54, 0x26, 0x72, 0x9e, 0x61, 0xfa, 0x86, 0xcf,
++ 0xe8, 0x9e, 0xa1, 0xe0, 0xc7, 0x48, 0x23, 0xae,
++ 0x5a, 0x90, 0xae, 0x75, 0x0a, 0x74, 0x18, 0x89,
++ 0x05, 0xb1, 0x92, 0xb2, 0x7f, 0xd0, 0x1b, 0xa6,
++ 0x62, 0x07, 0x25, 0x01, 0xc7, 0xc2, 0x4f, 0xf9,
++ 0xe8, 0xfe, 0x63, 0x95, 0x80, 0x07, 0xb4, 0x26,
++ 0xcc, 0xd1, 0x26, 0xb6, 0xc4, 0x3f, 0x9e, 0xcb,
++ 0x8e, 0x3b, 0x2e, 0x44, 0x16, 0xd3, 0x10, 0x9a,
++ 0x95, 0x08, 0xeb, 0xc8, 0xcb, 0xeb, 0xbf, 0x6f,
++ 0x0b, 0xcd, 0x1f, 0xc8, 0xca, 0x86, 0xaa, 0xec,
++ 0x33, 0xe6, 0x69, 0xf4, 0x45, 0x25, 0x86, 0x3a,
++ 0x22, 0x94, 0x4f, 0x00, 0x23, 0x6a, 0x44, 0xc2,
++ 0x49, 0x97, 0x33, 0xab, 0x36, 0x14, 0x0a, 0x70,
++ 0x24, 0xc3, 0xbe, 0x04, 0x3b, 0x79, 0xa0, 0xf9,
++ 0xb8, 0xe7, 0x76, 0x29, 0x22, 0x83, 0xd7, 0xf2,
++ 0x94, 0xf4, 0x41, 0x49, 0xba, 0x5f, 0x7b, 0x07,
++ 0xb5, 0xfb, 0xdb, 0x03, 0x1a, 0x9f, 0xb6, 0x4c,
++ 0xc2, 0x2e, 0x37, 0x40, 0x49, 0xc3, 0x38, 0x16,
++ 0xe2, 0x4f, 0x77, 0x82, 0xb0, 0x68, 0x4c, 0x71,
++ 0x1d, 0x57, 0x61, 0x9c, 0xd9, 0x4e, 0x54, 0x99,
++ 0x47, 0x13, 0x28, 0x73, 0x3c, 0xbb, 0x00, 0x90,
++ 0xf3, 0x4d, 0xc9, 0x0e, 0xfd, 0xe7, 0xb1, 0x71,
++ 0xd3, 0x15, 0x79, 0xbf, 0xcc, 0x26, 0x2f, 0xbd,
++ 0xad, 0x6c, 0x50, 0x69, 0x6c, 0x3e, 0x6d, 0x80,
++ 0x9a, 0xea, 0x78, 0xaf, 0x19, 0xb2, 0x0d, 0x4d,
++ 0xad, 0x04, 0x07, 0xae, 0x22, 0x90, 0x4a, 0x93,
++ 0x32, 0x0e, 0x36, 0x9b, 0x1b, 0x46, 0xba, 0x3b,
++ 0xb4, 0xac, 0xc6, 0xd1, 0xa2, 0x31, 0x53, 0x3b,
++ 0x2a, 0x3d, 0x45, 0xfe, 0x03, 0x61, 0x10, 0x85,
++ 0x17, 0x69, 0xa6, 0x78, 0xcc, 0x6c, 0x87, 0x49,
++ 0x53, 0xf9, 0x80, 0x10, 0xde, 0x80, 0xa2, 0x41,
++ 0x6a, 0xc3, 0x32, 0x02, 0xad, 0x6d, 0x3c, 0x56,
++ 0x00, 0x71, 0x51, 0x06, 0xa7, 0xbd, 0xfb, 0xef,
++ 0x3c, 0xb5, 0x9f, 0xfc, 0x48, 0x7d, 0x53, 0x7c,
++ 0x66, 0xb0, 0x49, 0x23, 0xc4, 0x47, 0x10, 0x0e,
++ 0xe5, 0x6c, 0x74, 0x13, 0xe6, 0xc5, 0x3f, 0xaa,
++ 0xde, 0xff, 0x07, 0x44, 0xdd, 0x56, 0x1b, 0xad,
++ 0x09, 0x77, 0xfb, 0x5b, 0x12, 0xb8, 0x0d, 0x38,
++ 0x17, 0x37, 0x35, 0x7b, 0x9b, 0xbc, 0xfe, 0xd4,
++ 0x7e, 0x8b, 0xda, 0x7e, 0x5b, 0x04, 0xa7, 0x22,
++ 0xa7, 0x31, 0xa1, 0x20, 0x86, 0xc7, 0x1b, 0x99,
++ 0xdb, 0xd1, 0x89, 0xf4, 0x94, 0xa3, 0x53, 0x69,
++ 0x8d, 0xe7, 0xe8, 0x74, 0x11, 0x8d, 0x74, 0xd6,
++ 0x07, 0x37, 0x91, 0x9f, 0xfd, 0x67, 0x50, 0x3a,
++ 0xc9, 0xe1, 0xf4, 0x36, 0xd5, 0xa0, 0x47, 0xd1,
++ 0xf9, 0xe5, 0x39, 0xa3, 0x31, 0xac, 0x07, 0x36,
++ 0x23, 0xf8, 0x66, 0x18, 0x14, 0x28, 0x34, 0x0f,
++ 0xb8, 0xd0, 0xe7, 0x29, 0xb3, 0x04, 0x4b, 0x55,
++ 0x01, 0x41, 0xb2, 0x75, 0x8d, 0xcb, 0x96, 0x85,
++ 0x3a, 0xfb, 0xab, 0x2b, 0x9e, 0xfa, 0x58, 0x20,
++ 0x44, 0x1f, 0xc0, 0x14, 0x22, 0x75, 0x61, 0xe8,
++ 0xaa, 0x19, 0xcf, 0xf1, 0x82, 0x56, 0xf4, 0xd7,
++ 0x78, 0x7b, 0x3d, 0x5f, 0xb3, 0x9e, 0x0b, 0x8a,
++ 0x57, 0x50, 0xdb, 0x17, 0x41, 0x65, 0x4d, 0xa3,
++ 0x02, 0xc9, 0x9c, 0x9c, 0x53, 0xfb, 0x39, 0x39,
++ 0x9b, 0x1d, 0x72, 0x24, 0xda, 0xb7, 0x39, 0xbe,
++ 0x13, 0x3b, 0xfa, 0x29, 0xda, 0x9e, 0x54, 0x64,
++ 0x6e, 0xba, 0xd8, 0xa1, 0xcb, 0xb3, 0x36, 0xfa,
++ 0xcb, 0x47, 0x85, 0xe9, 0x61, 0x38, 0xbc, 0xbe,
++ 0xc5, 0x00, 0x38, 0x2a, 0x54, 0xf7, 0xc4, 0xb9,
++ 0xb3, 0xd3, 0x7b, 0xa0, 0xa0, 0xf8, 0x72, 0x7f,
++ 0x8c, 0x8e, 0x82, 0x0e, 0xc6, 0x1c, 0x75, 0x9d,
++ 0xca, 0x8e, 0x61, 0x87, 0xde, 0xad, 0x80, 0xd2,
++ 0xf5, 0xf9, 0x80, 0xef, 0x15, 0x75, 0xaf, 0xf5,
++ 0x80, 0xfb, 0xff, 0x6d, 0x1e, 0x25, 0xb7, 0x40,
++ 0x61, 0x6a, 0x39, 0x5a, 0x6a, 0xb5, 0x31, 0xab,
++ 0x97, 0x8a, 0x19, 0x89, 0x44, 0x40, 0xc0, 0xa6,
++ 0xb4, 0x4e, 0x30, 0x32, 0x7b, 0x13, 0xe7, 0x67,
++ 0xa9, 0x8b, 0x57, 0x04, 0xc2, 0x01, 0xa6, 0xf4,
++ 0x28, 0x99, 0xad, 0x2c, 0x76, 0xa3, 0x78, 0xc2,
++ 0x4a, 0xe6, 0xca, 0x5c, 0x50, 0x6a, 0xc1, 0xb0,
++ 0x62, 0x4b, 0x10, 0x8e, 0x7c, 0x17, 0x43, 0xb3,
++ 0x17, 0x66, 0x1c, 0x3e, 0x8d, 0x69, 0xf0, 0x5a,
++ 0x71, 0xf5, 0x97, 0xdc, 0xd1, 0x45, 0xdd, 0x28,
++ 0xf3, 0x5d, 0xdf, 0x53, 0x7b, 0x11, 0xe5, 0xbc,
++ 0x4c, 0xdb, 0x1b, 0x51, 0x6b, 0xe9, 0xfb, 0x3d,
++ 0xc1, 0xc3, 0x2c, 0xb9, 0x71, 0xf5, 0xb6, 0xb2,
++ 0x13, 0x36, 0x79, 0x80, 0x53, 0xe8, 0xd3, 0xa6,
++ 0x0a, 0xaf, 0xfd, 0x56, 0x97, 0xf7, 0x40, 0x8e,
++ 0x45, 0xce, 0xf8, 0xb0, 0x9e, 0x5c, 0x33, 0x82,
++ 0xb0, 0x44, 0x56, 0xfc, 0x05, 0x09, 0xe9, 0x2a,
++ 0xac, 0x26, 0x80, 0x14, 0x1d, 0xc8, 0x3a, 0x35,
++ 0x4c, 0x82, 0x97, 0xfd, 0x76, 0xb7, 0xa9, 0x0a,
++ 0x35, 0x58, 0x79, 0x8e, 0x0f, 0x66, 0xea, 0xaf,
++ 0x51, 0x6c, 0x09, 0xa9, 0x6e, 0x9b, 0xcb, 0x9a,
++ 0x31, 0x47, 0xa0, 0x2f, 0x7c, 0x71, 0xb4, 0x4a,
++ 0x11, 0xaa, 0x8c, 0x66, 0xc5, 0x64, 0xe6, 0x3a,
++ 0x54, 0xda, 0x24, 0x6a, 0xc4, 0x41, 0x65, 0x46,
++ 0x82, 0xa0, 0x0a, 0x0f, 0x5f, 0xfb, 0x25, 0xd0,
++ 0x2c, 0x91, 0xa7, 0xee, 0xc4, 0x81, 0x07, 0x86,
++ 0x75, 0x5e, 0x33, 0x69, 0x97, 0xe4, 0x2c, 0xa8,
++ 0x9d, 0x9f, 0x0b, 0x6a, 0xbe, 0xad, 0x98, 0xda,
++ 0x6d, 0x94, 0x41, 0xda, 0x2c, 0x1e, 0x89, 0xc4,
++ 0xc2, 0xaf, 0x1e, 0x00, 0x05, 0x0b, 0x83, 0x60,
++ 0xbd, 0x43, 0xea, 0x15, 0x23, 0x7f, 0xb9, 0xac,
++ 0xee, 0x4f, 0x2c, 0xaf, 0x2a, 0xf3, 0xdf, 0xd0,
++ 0xf3, 0x19, 0x31, 0xbb, 0x4a, 0x74, 0x84, 0x17,
++ 0x52, 0x32, 0x2c, 0x7d, 0x61, 0xe4, 0xcb, 0xeb,
++ 0x80, 0x38, 0x15, 0x52, 0xcb, 0x6f, 0xea, 0xe5,
++ 0x73, 0x9c, 0xd9, 0x24, 0x69, 0xc6, 0x95, 0x32,
++ 0x21, 0xc8, 0x11, 0xe4, 0xdc, 0x36, 0xd7, 0x93,
++ 0x38, 0x66, 0xfb, 0xb2, 0x7f, 0x3a, 0xb9, 0xaf,
++ 0x31, 0xdd, 0x93, 0x75, 0x78, 0x8a, 0x2c, 0x94,
++ 0x87, 0x1a, 0x58, 0xec, 0x9e, 0x7d, 0x4d, 0xba,
++ 0xe1, 0xe5, 0x4d, 0xfc, 0xbc, 0xa4, 0x2a, 0x14,
++ 0xef, 0xcc, 0xa7, 0xec, 0xab, 0x43, 0x09, 0x18,
++ 0xd3, 0xab, 0x68, 0xd1, 0x07, 0x99, 0x44, 0x47,
++ 0xd6, 0x83, 0x85, 0x3b, 0x30, 0xea, 0xa9, 0x6b,
++ 0x63, 0xea, 0xc4, 0x07, 0xfb, 0x43, 0x2f, 0xa4,
++ 0xaa, 0xb0, 0xab, 0x03, 0x89, 0xce, 0x3f, 0x8c,
++ 0x02, 0x7c, 0x86, 0x54, 0xbc, 0x88, 0xaf, 0x75,
++ 0xd2, 0xdc, 0x63, 0x17, 0xd3, 0x26, 0xf6, 0x96,
++ 0xa9, 0x3c, 0xf1, 0x61, 0x8c, 0x11, 0x18, 0xcc,
++ 0xd6, 0xea, 0x5b, 0xe2, 0xcd, 0xf0, 0xf1, 0xb2,
++ 0xe5, 0x35, 0x90, 0x1f, 0x85, 0x4c, 0x76, 0x5b,
++ 0x66, 0xce, 0x44, 0xa4, 0x32, 0x9f, 0xe6, 0x7b,
++ 0x71, 0x6e, 0x9f, 0x58, 0x15, 0x67, 0x72, 0x87,
++ 0x64, 0x8e, 0x3a, 0x44, 0x45, 0xd4, 0x76, 0xfa,
++ 0xc2, 0xf6, 0xef, 0x85, 0x05, 0x18, 0x7a, 0x9b,
++ 0xba, 0x41, 0x54, 0xac, 0xf0, 0xfc, 0x59, 0x12,
++ 0x3f, 0xdf, 0xa0, 0xe5, 0x8a, 0x65, 0xfd, 0x3a,
++ 0x62, 0x8d, 0x83, 0x2c, 0x03, 0xbe, 0x05, 0x76,
++ 0x2e, 0x53, 0x49, 0x97, 0x94, 0x33, 0xae, 0x40,
++ 0x81, 0x15, 0xdb, 0x6e, 0xad, 0xaa, 0xf5, 0x4b,
++ 0xe3, 0x98, 0x70, 0xdf, 0xe0, 0x7c, 0xcd, 0xdb,
++ 0x02, 0xd4, 0x7d, 0x2f, 0xc1, 0xe6, 0xb4, 0xf3,
++ 0xd7, 0x0d, 0x7a, 0xd9, 0x23, 0x9e, 0x87, 0x2d,
++ 0xce, 0x87, 0xad, 0xcc, 0x72, 0x05, 0x00, 0x29,
++ 0xdc, 0x73, 0x7f, 0x64, 0xc1, 0x15, 0x0e, 0xc2,
++ 0xdf, 0xa7, 0x5f, 0xeb, 0x41, 0xa1, 0xcd, 0xef,
++ 0x5c, 0x50, 0x79, 0x2a, 0x56, 0x56, 0x71, 0x8c,
++ 0xac, 0xc0, 0x79, 0x50, 0x69, 0xca, 0x59, 0x32,
++ 0x65, 0xf2, 0x54, 0xe4, 0x52, 0x38, 0x76, 0xd1,
++ 0x5e, 0xde, 0x26, 0x9e, 0xfb, 0x75, 0x2e, 0x11,
++ 0xb5, 0x10, 0xf4, 0x17, 0x73, 0xf5, 0x89, 0xc7,
++ 0x4f, 0x43, 0x5c, 0x8e, 0x7c, 0xb9, 0x05, 0x52,
++ 0x24, 0x40, 0x99, 0xfe, 0x9b, 0x85, 0x0b, 0x6c,
++ 0x22, 0x3e, 0x8b, 0xae, 0x86, 0xa1, 0xd2, 0x79,
++ 0x05, 0x68, 0x6b, 0xab, 0xe3, 0x41, 0x49, 0xed,
++ 0x15, 0xa1, 0x8d, 0x40, 0x2d, 0x61, 0xdf, 0x1a,
++ 0x59, 0xc9, 0x26, 0x8b, 0xef, 0x30, 0x4c, 0x88,
++ 0x4b, 0x10, 0xf8, 0x8d, 0xa6, 0x92, 0x9f, 0x4b,
++ 0xf3, 0xc4, 0x53, 0x0b, 0x89, 0x5d, 0x28, 0x92,
++ 0xcf, 0x78, 0xb2, 0xc0, 0x5d, 0xed, 0x7e, 0xfc,
++ 0xc0, 0x12, 0x23, 0x5f, 0x5a, 0x78, 0x86, 0x43,
++ 0x6e, 0x27, 0xf7, 0x5a, 0xa7, 0x6a, 0xed, 0x19,
++ 0x04, 0xf0, 0xb3, 0x12, 0xd1, 0xbd, 0x0e, 0x89,
++ 0x6e, 0xbc, 0x96, 0xa8, 0xd8, 0x49, 0x39, 0x9f,
++ 0x7e, 0x67, 0xf0, 0x2e, 0x3e, 0x01, 0xa9, 0xba,
++ 0xec, 0x8b, 0x62, 0x8e, 0xcb, 0x4a, 0x70, 0x43,
++ 0xc7, 0xc2, 0xc4, 0xca, 0x82, 0x03, 0x73, 0xe9,
++ 0x11, 0xdf, 0xcf, 0x54, 0xea, 0xc9, 0xb0, 0x95,
++ 0x51, 0xc0, 0x13, 0x3d, 0x92, 0x05, 0xfa, 0xf4,
++ 0xa9, 0x34, 0xc8, 0xce, 0x6c, 0x3d, 0x54, 0xcc,
++ 0xc4, 0xaf, 0xf1, 0xdc, 0x11, 0x44, 0x26, 0xa2,
++ 0xaf, 0xf1, 0x85, 0x75, 0x7d, 0x03, 0x61, 0x68,
++ 0x4e, 0x78, 0xc6, 0x92, 0x7d, 0x86, 0x7d, 0x77,
++ 0xdc, 0x71, 0x72, 0xdb, 0xc6, 0xae, 0xa1, 0xcb,
++ 0x70, 0x9a, 0x0b, 0x19, 0xbe, 0x4a, 0x6c, 0x2a,
++ 0xe2, 0xba, 0x6c, 0x64, 0x9a, 0x13, 0x28, 0xdf,
++ 0x85, 0x75, 0xe6, 0x43, 0xf6, 0x87, 0x08, 0x68,
++ 0x6e, 0xba, 0x6e, 0x79, 0x9f, 0x04, 0xbc, 0x23,
++ 0x50, 0xf6, 0x33, 0x5c, 0x1f, 0x24, 0x25, 0xbe,
++ 0x33, 0x47, 0x80, 0x45, 0x56, 0xa3, 0xa7, 0xd7,
++ 0x7a, 0xb1, 0x34, 0x0b, 0x90, 0x3c, 0x9c, 0xad,
++ 0x44, 0x5f, 0x9e, 0x0e, 0x9d, 0xd4, 0xbd, 0x93,
++ 0x5e, 0xfa, 0x3c, 0xe0, 0xb0, 0xd9, 0xed, 0xf3,
++ 0xd6, 0x2e, 0xff, 0x24, 0xd8, 0x71, 0x6c, 0xed,
++ 0xaf, 0x55, 0xeb, 0x22, 0xac, 0x93, 0x68, 0x32,
++ 0x05, 0x5b, 0x47, 0xdd, 0xc6, 0x4a, 0xcb, 0xc7,
++ 0x10, 0xe1, 0x3c, 0x92, 0x1a, 0xf3, 0x23, 0x78,
++ 0x2b, 0xa1, 0xd2, 0x80, 0xf4, 0x12, 0xb1, 0x20,
++ 0x8f, 0xff, 0x26, 0x35, 0xdd, 0xfb, 0xc7, 0x4e,
++ 0x78, 0xf1, 0x2d, 0x50, 0x12, 0x77, 0xa8, 0x60,
++ 0x7c, 0x0f, 0xf5, 0x16, 0x2f, 0x63, 0x70, 0x2a,
++ 0xc0, 0x96, 0x80, 0x4e, 0x0a, 0xb4, 0x93, 0x35,
++ 0x5d, 0x1d, 0x3f, 0x56, 0xf7, 0x2f, 0xbb, 0x90,
++ 0x11, 0x16, 0x8f, 0xa2, 0xec, 0x47, 0xbe, 0xac,
++ 0x56, 0x01, 0x26, 0x56, 0xb1, 0x8c, 0xb2, 0x10,
++ 0xf9, 0x1a, 0xca, 0xf5, 0xd1, 0xb7, 0x39, 0x20,
++ 0x63, 0xf1, 0x69, 0x20, 0x4f, 0x13, 0x12, 0x1f,
++ 0x5b, 0x65, 0xfc, 0x98, 0xf7, 0xc4, 0x7a, 0xbe,
++ 0xf7, 0x26, 0x4d, 0x2b, 0x84, 0x7b, 0x42, 0xad,
++ 0xd8, 0x7a, 0x0a, 0xb4, 0xd8, 0x74, 0xbf, 0xc1,
++ 0xf0, 0x6e, 0xb4, 0x29, 0xa3, 0xbb, 0xca, 0x46,
++ 0x67, 0x70, 0x6a, 0x2d, 0xce, 0x0e, 0xa2, 0x8a,
++ 0xa9, 0x87, 0xbf, 0x05, 0xc4, 0xc1, 0x04, 0xa3,
++ 0xab, 0xd4, 0x45, 0x43, 0x8c, 0xb6, 0x02, 0xb0,
++ 0x41, 0xc8, 0xfc, 0x44, 0x3d, 0x59, 0xaa, 0x2e,
++ 0x44, 0x21, 0x2a, 0x8d, 0x88, 0x9d, 0x57, 0xf4,
++ 0xa0, 0x02, 0x77, 0xb8, 0xa6, 0xa0, 0xe6, 0x75,
++ 0x5c, 0x82, 0x65, 0x3e, 0x03, 0x5c, 0x29, 0x8f,
++ 0x38, 0x55, 0xab, 0x33, 0x26, 0xef, 0x9f, 0x43,
++ 0x52, 0xfd, 0x68, 0xaf, 0x36, 0xb4, 0xbb, 0x9a,
++ 0x58, 0x09, 0x09, 0x1b, 0xc3, 0x65, 0x46, 0x46,
++ 0x1d, 0xa7, 0x94, 0x18, 0x23, 0x50, 0x2c, 0xca,
++ 0x2c, 0x55, 0x19, 0x97, 0x01, 0x9d, 0x93, 0x3b,
++ 0x63, 0x86, 0xf2, 0x03, 0x67, 0x45, 0xd2, 0x72,
++ 0x28, 0x52, 0x6c, 0xf4, 0xe3, 0x1c, 0xb5, 0x11,
++ 0x13, 0xf1, 0xeb, 0x21, 0xc7, 0xd9, 0x56, 0x82,
++ 0x2b, 0x82, 0x39, 0xbd, 0x69, 0x54, 0xed, 0x62,
++ 0xc3, 0xe2, 0xde, 0x73, 0xd4, 0x6a, 0x12, 0xae,
++ 0x13, 0x21, 0x7f, 0x4b, 0x5b, 0xfc, 0xbf, 0xe8,
++ 0x2b, 0xbe, 0x56, 0xba, 0x68, 0x8b, 0x9a, 0xb1,
++ 0x6e, 0xfa, 0xbf, 0x7e, 0x5a, 0x4b, 0xf1, 0xac,
++ 0x98, 0x65, 0x85, 0xd1, 0x93, 0x53, 0xd3, 0x7b,
++ 0x09, 0xdd, 0x4b, 0x10, 0x6d, 0x84, 0xb0, 0x13,
++ 0x65, 0xbd, 0xcf, 0x52, 0x09, 0xc4, 0x85, 0xe2,
++ 0x84, 0x74, 0x15, 0x65, 0xb7, 0xf7, 0x51, 0xaf,
++ 0x55, 0xad, 0xa4, 0xd1, 0x22, 0x54, 0x70, 0x94,
++ 0xa0, 0x1c, 0x90, 0x41, 0xfd, 0x99, 0xd7, 0x5a,
++ 0x31, 0xef, 0xaa, 0x25, 0xd0, 0x7f, 0x4f, 0xea,
++ 0x1d, 0x55, 0x42, 0xe5, 0x49, 0xb0, 0xd0, 0x46,
++ 0x62, 0x36, 0x43, 0xb2, 0x82, 0x15, 0x75, 0x50,
++ 0xa4, 0x72, 0xeb, 0x54, 0x27, 0x1f, 0x8a, 0xe4,
++ 0x7d, 0xe9, 0x66, 0xc5, 0xf1, 0x53, 0xa4, 0xd1,
++ 0x0c, 0xeb, 0xb8, 0xf8, 0xbc, 0xd4, 0xe2, 0xe7,
++ 0xe1, 0xf8, 0x4b, 0xcb, 0xa9, 0xa1, 0xaf, 0x15,
++ 0x83, 0xcb, 0x72, 0xd0, 0x33, 0x79, 0x00, 0x2d,
++ 0x9f, 0xd7, 0xf1, 0x2e, 0x1e, 0x10, 0xe4, 0x45,
++ 0xc0, 0x75, 0x3a, 0x39, 0xea, 0x68, 0xf7, 0x5d,
++ 0x1b, 0x73, 0x8f, 0xe9, 0x8e, 0x0f, 0x72, 0x47,
++ 0xae, 0x35, 0x0a, 0x31, 0x7a, 0x14, 0x4d, 0x4a,
++ 0x6f, 0x47, 0xf7, 0x7e, 0x91, 0x6e, 0x74, 0x8b,
++ 0x26, 0x47, 0xf9, 0xc3, 0xf9, 0xde, 0x70, 0xf5,
++ 0x61, 0xab, 0xa9, 0x27, 0x9f, 0x82, 0xe4, 0x9c,
++ 0x89, 0x91, 0x3f, 0x2e, 0x6a, 0xfd, 0xb5, 0x49,
++ 0xe9, 0xfd, 0x59, 0x14, 0x36, 0x49, 0x40, 0x6d,
++ 0x32, 0xd8, 0x85, 0x42, 0xf3, 0xa5, 0xdf, 0x0c,
++ 0xa8, 0x27, 0xd7, 0x54, 0xe2, 0x63, 0x2f, 0xf2,
++ 0x7e, 0x8b, 0x8b, 0xe7, 0xf1, 0x9a, 0x95, 0x35,
++ 0x43, 0xdc, 0x3a, 0xe4, 0xb6, 0xf4, 0xd0, 0xdf,
++ 0x9c, 0xcb, 0x94, 0xf3, 0x21, 0xa0, 0x77, 0x50,
++ 0xe2, 0xc6, 0xc4, 0xc6, 0x5f, 0x09, 0x64, 0x5b,
++ 0x92, 0x90, 0xd8, 0xe1, 0xd1, 0xed, 0x4b, 0x42,
++ 0xd7, 0x37, 0xaf, 0x65, 0x3d, 0x11, 0x39, 0xb6,
++ 0x24, 0x8a, 0x60, 0xae, 0xd6, 0x1e, 0xbf, 0x0e,
++ 0x0d, 0xd7, 0xdc, 0x96, 0x0e, 0x65, 0x75, 0x4e,
++ 0x29, 0x06, 0x9d, 0xa4, 0x51, 0x3a, 0x10, 0x63,
++ 0x8f, 0x17, 0x07, 0xd5, 0x8e, 0x3c, 0xf4, 0x28,
++ 0x00, 0x5a, 0x5b, 0x05, 0x19, 0xd8, 0xc0, 0x6c,
++ 0xe5, 0x15, 0xe4, 0x9c, 0x9d, 0x71, 0x9d, 0x5e,
++ 0x94, 0x29, 0x1a, 0xa7, 0x80, 0xfa, 0x0e, 0x33,
++ 0x03, 0xdd, 0xb7, 0x3e, 0x9a, 0xa9, 0x26, 0x18,
++ 0x37, 0xa9, 0x64, 0x08, 0x4d, 0x94, 0x5a, 0x88,
++ 0xca, 0x35, 0xce, 0x81, 0x02, 0xe3, 0x1f, 0x1b,
++ 0x89, 0x1a, 0x77, 0x85, 0xe3, 0x41, 0x6d, 0x32,
++ 0x42, 0x19, 0x23, 0x7d, 0xc8, 0x73, 0xee, 0x25,
++ 0x85, 0x0d, 0xf8, 0x31, 0x25, 0x79, 0x1b, 0x6f,
++ 0x79, 0x25, 0xd2, 0xd8, 0xd4, 0x23, 0xfd, 0xf7,
++ 0x82, 0x36, 0x6a, 0x0c, 0x46, 0x22, 0x15, 0xe9,
++ 0xff, 0x72, 0x41, 0x91, 0x91, 0x7d, 0x3a, 0xb7,
++ 0xdd, 0x65, 0x99, 0x70, 0xf6, 0x8d, 0x84, 0xf8,
++ 0x67, 0x15, 0x20, 0x11, 0xd6, 0xb2, 0x55, 0x7b,
++ 0xdb, 0x87, 0xee, 0xef, 0x55, 0x89, 0x2a, 0x59,
++ 0x2b, 0x07, 0x8f, 0x43, 0x8a, 0x59, 0x3c, 0x01,
++ 0x8b, 0x65, 0x54, 0xa1, 0x66, 0xd5, 0x38, 0xbd,
++ 0xc6, 0x30, 0xa9, 0xcc, 0x49, 0xb6, 0xa8, 0x1b,
++ 0xb8, 0xc0, 0x0e, 0xe3, 0x45, 0x28, 0xe2, 0xff,
++ 0x41, 0x9f, 0x7e, 0x7c, 0xd1, 0xae, 0x9e, 0x25,
++ 0x3f, 0x4c, 0x7c, 0x7c, 0xf4, 0xa8, 0x26, 0x4d,
++ 0x5c, 0xfd, 0x4b, 0x27, 0x18, 0xf9, 0x61, 0x76,
++ 0x48, 0xba, 0x0c, 0x6b, 0xa9, 0x4d, 0xfc, 0xf5,
++ 0x3b, 0x35, 0x7e, 0x2f, 0x4a, 0xa9, 0xc2, 0x9a,
++ 0xae, 0xab, 0x86, 0x09, 0x89, 0xc9, 0xc2, 0x40,
++ 0x39, 0x2c, 0x81, 0xb3, 0xb8, 0x17, 0x67, 0xc2,
++ 0x0d, 0x32, 0x4a, 0x3a, 0x67, 0x81, 0xd7, 0x1a,
++ 0x34, 0x52, 0xc5, 0xdb, 0x0a, 0xf5, 0x63, 0x39,
++ 0xea, 0x1f, 0xe1, 0x7c, 0xa1, 0x9e, 0xc1, 0x35,
++ 0xe3, 0xb1, 0x18, 0x45, 0x67, 0xf9, 0x22, 0x38,
++ 0x95, 0xd9, 0x34, 0x34, 0x86, 0xc6, 0x41, 0x94,
++ 0x15, 0xf9, 0x5b, 0x41, 0xa6, 0x87, 0x8b, 0xf8,
++ 0xd5, 0xe1, 0x1b, 0xe2, 0x5b, 0xf3, 0x86, 0x10,
++ 0xff, 0xe6, 0xae, 0x69, 0x76, 0xbc, 0x0d, 0xb4,
++ 0x09, 0x90, 0x0c, 0xa2, 0x65, 0x0c, 0xad, 0x74,
++ 0xf5, 0xd7, 0xff, 0xda, 0xc1, 0xce, 0x85, 0xbe,
++ 0x00, 0xa7, 0xff, 0x4d, 0x2f, 0x65, 0xd3, 0x8c,
++ 0x86, 0x2d, 0x05, 0xe8, 0xed, 0x3e, 0x6b, 0x8b,
++ 0x0f, 0x3d, 0x83, 0x8c, 0xf1, 0x1d, 0x5b, 0x96,
++ 0x2e, 0xb1, 0x9c, 0xc2, 0x98, 0xe1, 0x70, 0xb9,
++ 0xba, 0x5c, 0x8a, 0x43, 0xd6, 0x34, 0xa7, 0x2d,
++ 0xc9, 0x92, 0xae, 0xf2, 0xa5, 0x7b, 0x05, 0x49,
++ 0xa7, 0x33, 0x34, 0x86, 0xca, 0xe4, 0x96, 0x23,
++ 0x76, 0x5b, 0xf2, 0xc6, 0xf1, 0x51, 0x28, 0x42,
++ 0x7b, 0xcc, 0x76, 0x8f, 0xfa, 0xa2, 0xad, 0x31,
++ 0xd4, 0xd6, 0x7a, 0x6d, 0x25, 0x25, 0x54, 0xe4,
++ 0x3f, 0x50, 0x59, 0xe1, 0x5c, 0x05, 0xb7, 0x27,
++ 0x48, 0xbf, 0x07, 0xec, 0x1b, 0x13, 0xbe, 0x2b,
++ 0xa1, 0x57, 0x2b, 0xd5, 0xab, 0xd7, 0xd0, 0x4c,
++ 0x1e, 0xcb, 0x71, 0x9b, 0xc5, 0x90, 0x85, 0xd3,
++ 0xde, 0x59, 0xec, 0x71, 0xeb, 0x89, 0xbb, 0xd0,
++ 0x09, 0x50, 0xe1, 0x16, 0x3f, 0xfd, 0x1c, 0x34,
++ 0xc3, 0x1c, 0xa1, 0x10, 0x77, 0x53, 0x98, 0xef,
++ 0xf2, 0xfd, 0xa5, 0x01, 0x59, 0xc2, 0x9b, 0x26,
++ 0xc7, 0x42, 0xd9, 0x49, 0xda, 0x58, 0x2b, 0x6e,
++ 0x9f, 0x53, 0x19, 0x76, 0x7e, 0xd9, 0xc9, 0x0e,
++ 0x68, 0xc8, 0x7f, 0x51, 0x22, 0x42, 0xef, 0x49,
++ 0xa4, 0x55, 0xb6, 0x36, 0xac, 0x09, 0xc7, 0x31,
++ 0x88, 0x15, 0x4b, 0x2e, 0x8f, 0x3a, 0x08, 0xf7,
++ 0xd8, 0xf7, 0xa8, 0xc5, 0xa9, 0x33, 0xa6, 0x45,
++ 0xe4, 0xc4, 0x94, 0x76, 0xf3, 0x0d, 0x8f, 0x7e,
++ 0xc8, 0xf6, 0xbc, 0x23, 0x0a, 0xb6, 0x4c, 0xd3,
++ 0x6a, 0xcd, 0x36, 0xc2, 0x90, 0x5c, 0x5c, 0x3c,
++ 0x65, 0x7b, 0xc2, 0xd6, 0xcc, 0xe6, 0x0d, 0x87,
++ 0x73, 0x2e, 0x71, 0x79, 0x16, 0x06, 0x63, 0x28,
++ 0x09, 0x15, 0xd8, 0x89, 0x38, 0x38, 0x3d, 0xb5,
++ 0x42, 0x1c, 0x08, 0x24, 0xf7, 0x2a, 0xd2, 0x9d,
++ 0xc8, 0xca, 0xef, 0xf9, 0x27, 0xd8, 0x07, 0x86,
++ 0xf7, 0x43, 0x0b, 0x55, 0x15, 0x3f, 0x9f, 0x83,
++ 0xef, 0xdc, 0x49, 0x9d, 0x2a, 0xc1, 0x54, 0x62,
++ 0xbd, 0x9b, 0x66, 0x55, 0x9f, 0xb7, 0x12, 0xf3,
++ 0x1b, 0x4d, 0x9d, 0x2a, 0x5c, 0xed, 0x87, 0x75,
++ 0x87, 0x26, 0xec, 0x61, 0x2c, 0xb4, 0x0f, 0x89,
++ 0xb0, 0xfb, 0x2e, 0x68, 0x5d, 0x15, 0xc7, 0x8d,
++ 0x2e, 0xc0, 0xd9, 0xec, 0xaf, 0x4f, 0xd2, 0x25,
++ 0x29, 0xe8, 0xd2, 0x26, 0x2b, 0x67, 0xe9, 0xfc,
++ 0x2b, 0xa8, 0x67, 0x96, 0x12, 0x1f, 0x5b, 0x96,
++ 0xc6, 0x14, 0x53, 0xaf, 0x44, 0xea, 0xd6, 0xe2,
++ 0x94, 0x98, 0xe4, 0x12, 0x93, 0x4c, 0x92, 0xe0,
++ 0x18, 0xa5, 0x8d, 0x2d, 0xe4, 0x71, 0x3c, 0x47,
++ 0x4c, 0xf7, 0xe6, 0x47, 0x9e, 0xc0, 0x68, 0xdf,
++ 0xd4, 0xf5, 0x5a, 0x74, 0xb1, 0x2b, 0x29, 0x03,
++ 0x19, 0x07, 0xaf, 0x90, 0x62, 0x5c, 0x68, 0x98,
++ 0x48, 0x16, 0x11, 0x02, 0x9d, 0xee, 0xb4, 0x9b,
++ 0xe5, 0x42, 0x7f, 0x08, 0xfd, 0x16, 0x32, 0x0b,
++ 0xd0, 0xb3, 0xfa, 0x2b, 0xb7, 0x99, 0xf9, 0x29,
++ 0xcd, 0x20, 0x45, 0x9f, 0xb3, 0x1a, 0x5d, 0xa2,
++ 0xaf, 0x4d, 0xe0, 0xbd, 0x42, 0x0d, 0xbc, 0x74,
++ 0x99, 0x9c, 0x8e, 0x53, 0x1a, 0xb4, 0x3e, 0xbd,
++ 0xa2, 0x9a, 0x2d, 0xf7, 0xf8, 0x39, 0x0f, 0x67,
++ 0x63, 0xfc, 0x6b, 0xc0, 0xaf, 0xb3, 0x4b, 0x4f,
++ 0x55, 0xc4, 0xcf, 0xa7, 0xc8, 0x04, 0x11, 0x3e,
++ 0x14, 0x32, 0xbb, 0x1b, 0x38, 0x77, 0xd6, 0x7f,
++ 0x54, 0x4c, 0xdf, 0x75, 0xf3, 0x07, 0x2d, 0x33,
++ 0x9b, 0xa8, 0x20, 0xe1, 0x7b, 0x12, 0xb5, 0xf3,
++ 0xef, 0x2f, 0xce, 0x72, 0xe5, 0x24, 0x60, 0xc1,
++ 0x30, 0xe2, 0xab, 0xa1, 0x8e, 0x11, 0x09, 0xa8,
++ 0x21, 0x33, 0x44, 0xfe, 0x7f, 0x35, 0x32, 0x93,
++ 0x39, 0xa7, 0xad, 0x8b, 0x79, 0x06, 0xb2, 0xcb,
++ 0x4e, 0xa9, 0x5f, 0xc7, 0xba, 0x74, 0x29, 0xec,
++ 0x93, 0xa0, 0x4e, 0x54, 0x93, 0xc0, 0xbc, 0x55,
++ 0x64, 0xf0, 0x48, 0xe5, 0x57, 0x99, 0xee, 0x75,
++ 0xd6, 0x79, 0x0f, 0x66, 0xb7, 0xc6, 0x57, 0x76,
++ 0xf7, 0xb7, 0xf3, 0x9c, 0xc5, 0x60, 0xe8, 0x7f,
++ 0x83, 0x76, 0xd6, 0x0e, 0xaa, 0xe6, 0x90, 0x39,
++ 0x1d, 0xa6, 0x32, 0x6a, 0x34, 0xe3, 0x55, 0xf8,
++ 0x58, 0xa0, 0x58, 0x7d, 0x33, 0xe0, 0x22, 0x39,
++ 0x44, 0x64, 0x87, 0x86, 0x5a, 0x2f, 0xa7, 0x7e,
++ 0x0f, 0x38, 0xea, 0xb0, 0x30, 0xcc, 0x61, 0xa5,
++ 0x6a, 0x32, 0xae, 0x1e, 0xf7, 0xe9, 0xd0, 0xa9,
++ 0x0c, 0x32, 0x4b, 0xb5, 0x49, 0x28, 0xab, 0x85,
++ 0x2f, 0x8e, 0x01, 0x36, 0x38, 0x52, 0xd0, 0xba,
++ 0xd6, 0x02, 0x78, 0xf8, 0x0e, 0x3e, 0x9c, 0x8b,
++ 0x6b, 0x45, 0x99, 0x3f, 0x5c, 0xfe, 0x58, 0xf1,
++ 0x5c, 0x94, 0x04, 0xe1, 0xf5, 0x18, 0x6d, 0x51,
++ 0xb2, 0x5d, 0x18, 0x20, 0xb6, 0xc2, 0x9a, 0x42,
++ 0x1d, 0xb3, 0xab, 0x3c, 0xb6, 0x3a, 0x13, 0x03,
++ 0xb2, 0x46, 0x82, 0x4f, 0xfc, 0x64, 0xbc, 0x4f,
++ 0xca, 0xfa, 0x9c, 0xc0, 0xd5, 0xa7, 0xbd, 0x11,
++ 0xb7, 0xe4, 0x5a, 0xf6, 0x6f, 0x4d, 0x4d, 0x54,
++ 0xea, 0xa4, 0x98, 0x66, 0xd4, 0x22, 0x3b, 0xd3,
++ 0x8f, 0x34, 0x47, 0xd9, 0x7c, 0xf4, 0x72, 0x3b,
++ 0x4d, 0x02, 0x77, 0xf6, 0xd6, 0xdd, 0x08, 0x0a,
++ 0x81, 0xe1, 0x86, 0x89, 0x3e, 0x56, 0x10, 0x3c,
++ 0xba, 0xd7, 0x81, 0x8c, 0x08, 0xbc, 0x8b, 0xe2,
++ 0x53, 0xec, 0xa7, 0x89, 0xee, 0xc8, 0x56, 0xb5,
++ 0x36, 0x2c, 0xb2, 0x03, 0xba, 0x99, 0xdd, 0x7c,
++ 0x48, 0xa0, 0xb0, 0xbc, 0x91, 0x33, 0xe9, 0xa8,
++ 0xcb, 0xcd, 0xcf, 0x59, 0x5f, 0x1f, 0x15, 0xe2,
++ 0x56, 0xf5, 0x4e, 0x01, 0x35, 0x27, 0x45, 0x77,
++ 0x47, 0xc8, 0xbc, 0xcb, 0x7e, 0x39, 0xc1, 0x97,
++ 0x28, 0xd3, 0x84, 0xfc, 0x2c, 0x3e, 0xc8, 0xad,
++ 0x9c, 0xf8, 0x8a, 0x61, 0x9c, 0x28, 0xaa, 0xc5,
++ 0x99, 0x20, 0x43, 0x85, 0x9d, 0xa5, 0xe2, 0x8b,
++ 0xb8, 0xae, 0xeb, 0xd0, 0x32, 0x0d, 0x52, 0x78,
++ 0x09, 0x56, 0x3f, 0xc7, 0xd8, 0x7e, 0x26, 0xfc,
++ 0x37, 0xfb, 0x6f, 0x04, 0xfc, 0xfa, 0x92, 0x10,
++ 0xac, 0xf8, 0x3e, 0x21, 0xdc, 0x8c, 0x21, 0x16,
++ 0x7d, 0x67, 0x6e, 0xf6, 0xcd, 0xda, 0xb6, 0x98,
++ 0x23, 0xab, 0x23, 0x3c, 0xb2, 0x10, 0xa0, 0x53,
++ 0x5a, 0x56, 0x9f, 0xc5, 0xd0, 0xff, 0xbb, 0xe4,
++ 0x98, 0x3c, 0x69, 0x1e, 0xdb, 0x38, 0x8f, 0x7e,
++ 0x0f, 0xd2, 0x98, 0x88, 0x81, 0x8b, 0x45, 0x67,
++ 0xea, 0x33, 0xf1, 0xeb, 0xe9, 0x97, 0x55, 0x2e,
++ 0xd9, 0xaa, 0xeb, 0x5a, 0xec, 0xda, 0xe1, 0x68,
++ 0xa8, 0x9d, 0x3c, 0x84, 0x7c, 0x05, 0x3d, 0x62,
++ 0x87, 0x8f, 0x03, 0x21, 0x28, 0x95, 0x0c, 0x89,
++ 0x25, 0x22, 0x4a, 0xb0, 0x93, 0xa9, 0x50, 0xa2,
++ 0x2f, 0x57, 0x6e, 0x18, 0x42, 0x19, 0x54, 0x0c,
++ 0x55, 0x67, 0xc6, 0x11, 0x49, 0xf4, 0x5c, 0xd2,
++ 0xe9, 0x3d, 0xdd, 0x8b, 0x48, 0x71, 0x21, 0x00,
++ 0xc3, 0x9a, 0x6c, 0x85, 0x74, 0x28, 0x83, 0x4a,
++ 0x1b, 0x31, 0x05, 0xe1, 0x06, 0x92, 0xe7, 0xda,
++ 0x85, 0x73, 0x78, 0x45, 0x20, 0x7f, 0xae, 0x13,
++ 0x7c, 0x33, 0x06, 0x22, 0xf4, 0x83, 0xf9, 0x35,
++ 0x3f, 0x6c, 0x71, 0xa8, 0x4e, 0x48, 0xbe, 0x9b,
++ 0xce, 0x8a, 0xba, 0xda, 0xbe, 0x28, 0x08, 0xf7,
++ 0xe2, 0x14, 0x8c, 0x71, 0xea, 0x72, 0xf9, 0x33,
++ 0xf2, 0x88, 0x3f, 0xd7, 0xbb, 0x69, 0x6c, 0x29,
++ 0x19, 0xdc, 0x84, 0xce, 0x1f, 0x12, 0x4f, 0xc8,
++ 0xaf, 0xa5, 0x04, 0xba, 0x5a, 0xab, 0xb0, 0xd9,
++ 0x14, 0x1f, 0x6c, 0x68, 0x98, 0x39, 0x89, 0x7a,
++ 0xd9, 0xd8, 0x2f, 0xdf, 0xa8, 0x47, 0x4a, 0x25,
++ 0xe2, 0xfb, 0x33, 0xf4, 0x59, 0x78, 0xe1, 0x68,
++ 0x85, 0xcf, 0xfe, 0x59, 0x20, 0xd4, 0x05, 0x1d,
++ 0x80, 0x99, 0xae, 0xbc, 0xca, 0xae, 0x0f, 0x2f,
++ 0x65, 0x43, 0x34, 0x8e, 0x7e, 0xac, 0xd3, 0x93,
++ 0x2f, 0xac, 0x6d, 0x14, 0x3d, 0x02, 0x07, 0x70,
++ 0x9d, 0xa4, 0xf3, 0x1b, 0x5c, 0x36, 0xfc, 0x01,
++ 0x73, 0x34, 0x85, 0x0c, 0x6c, 0xd6, 0xf1, 0xbd,
++ 0x3f, 0xdf, 0xee, 0xf5, 0xd9, 0xba, 0x56, 0xef,
++ 0xf4, 0x9b, 0x6b, 0xee, 0x9f, 0x5a, 0x78, 0x6d,
++ 0x32, 0x19, 0xf4, 0xf7, 0xf8, 0x4c, 0x69, 0x0b,
++ 0x4b, 0xbc, 0xbb, 0xb7, 0xf2, 0x85, 0xaf, 0x70,
++ 0x75, 0x24, 0x6c, 0x54, 0xa7, 0x0e, 0x4d, 0x1d,
++ 0x01, 0xbf, 0x08, 0xac, 0xcf, 0x7f, 0x2c, 0xe3,
++ 0x14, 0x89, 0x5e, 0x70, 0x5a, 0x99, 0x92, 0xcd,
++ 0x01, 0x84, 0xc8, 0xd2, 0xab, 0xe5, 0x4f, 0x58,
++ 0xe7, 0x0f, 0x2f, 0x0e, 0xff, 0x68, 0xea, 0xfd,
++ 0x15, 0xb3, 0x17, 0xe6, 0xb0, 0xe7, 0x85, 0xd8,
++ 0x23, 0x2e, 0x05, 0xc7, 0xc9, 0xc4, 0x46, 0x1f,
++ 0xe1, 0x9e, 0x49, 0x20, 0x23, 0x24, 0x4d, 0x7e,
++ 0x29, 0x65, 0xff, 0xf4, 0xb6, 0xfd, 0x1a, 0x85,
++ 0xc4, 0x16, 0xec, 0xfc, 0xea, 0x7b, 0xd6, 0x2c,
++ 0x43, 0xf8, 0xb7, 0xbf, 0x79, 0xc0, 0x85, 0xcd,
++ 0xef, 0xe1, 0x98, 0xd3, 0xa5, 0xf7, 0x90, 0x8c,
++ 0xe9, 0x7f, 0x80, 0x6b, 0xd2, 0xac, 0x4c, 0x30,
++ 0xa7, 0xc6, 0x61, 0x6c, 0xd2, 0xf9, 0x2c, 0xff,
++ 0x30, 0xbc, 0x22, 0x81, 0x7d, 0x93, 0x12, 0xe4,
++ 0x0a, 0xcd, 0xaf, 0xdd, 0xe8, 0xab, 0x0a, 0x1e,
++ 0x13, 0xa4, 0x27, 0xc3, 0x5f, 0xf7, 0x4b, 0xbb,
++ 0x37, 0x09, 0x4b, 0x91, 0x6f, 0x92, 0x4f, 0xaf,
++ 0x52, 0xee, 0xdf, 0xef, 0x09, 0x6f, 0xf7, 0x5c,
++ 0x6e, 0x12, 0x17, 0x72, 0x63, 0x57, 0xc7, 0xba,
++ 0x3b, 0x6b, 0x38, 0x32, 0x73, 0x1b, 0x9c, 0x80,
++ 0xc1, 0x7a, 0xc6, 0xcf, 0xcd, 0x35, 0xc0, 0x6b,
++ 0x31, 0x1a, 0x6b, 0xe9, 0xd8, 0x2c, 0x29, 0x3f,
++ 0x96, 0xfb, 0xb6, 0xcd, 0x13, 0x91, 0x3b, 0xc2,
++ 0xd2, 0xa3, 0x31, 0x8d, 0xa4, 0xcd, 0x57, 0xcd,
++ 0x13, 0x3d, 0x64, 0xfd, 0x06, 0xce, 0xe6, 0xdc,
++ 0x0c, 0x24, 0x43, 0x31, 0x40, 0x57, 0xf1, 0x72,
++ 0x17, 0xe3, 0x3a, 0x63, 0x6d, 0x35, 0xcf, 0x5d,
++ 0x97, 0x40, 0x59, 0xdd, 0xf7, 0x3c, 0x02, 0xf7,
++ 0x1c, 0x7e, 0x05, 0xbb, 0xa9, 0x0d, 0x01, 0xb1,
++ 0x8e, 0xc0, 0x30, 0xa9, 0x53, 0x24, 0xc9, 0x89,
++ 0x84, 0x6d, 0xaa, 0xd0, 0xcd, 0x91, 0xc2, 0x4d,
++ 0x91, 0xb0, 0x89, 0xe2, 0xbf, 0x83, 0x44, 0xaa,
++ 0x28, 0x72, 0x23, 0xa0, 0xc2, 0xad, 0xad, 0x1c,
++ 0xfc, 0x3f, 0x09, 0x7a, 0x0b, 0xdc, 0xc5, 0x1b,
++ 0x87, 0x13, 0xc6, 0x5b, 0x59, 0x8d, 0xf2, 0xc8,
++ 0xaf, 0xdf, 0x11, 0x95,
++ },
++ .rlen = 4100,
++ },
++};
++
+ /*
+ * Compression stuff.
+ */
+@@ -4408,6 +7721,88 @@ static struct comp_testvec deflate_decomp_tv_template[] = {
+ };
+
+ /*
++ * LZO test vectors (null-terminated strings).
++ */
++#define LZO_COMP_TEST_VECTORS 2
++#define LZO_DECOMP_TEST_VECTORS 2
++
++static struct comp_testvec lzo_comp_tv_template[] = {
++ {
++ .inlen = 70,
++ .outlen = 46,
++ .input = "Join us now and share the software "
++ "Join us now and share the software ",
++ .output = { 0x00, 0x0d, 0x4a, 0x6f, 0x69, 0x6e, 0x20, 0x75,
++ 0x73, 0x20, 0x6e, 0x6f, 0x77, 0x20, 0x61, 0x6e,
++ 0x64, 0x20, 0x73, 0x68, 0x61, 0x72, 0x65, 0x20,
++ 0x74, 0x68, 0x65, 0x20, 0x73, 0x6f, 0x66, 0x74,
++ 0x77, 0x70, 0x01, 0x01, 0x4a, 0x6f, 0x69, 0x6e,
++ 0x3d, 0x88, 0x00, 0x11, 0x00, 0x00 },
++ }, {
++ .inlen = 159,
++ .outlen = 133,
++ .input = "This document describes a compression method based on the LZO "
++ "compression algorithm. This document defines the application of "
++ "the LZO algorithm used in UBIFS.",
++ .output = { 0x00, 0x2b, 0x54, 0x68, 0x69, 0x73, 0x20, 0x64,
++ 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x20,
++ 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65,
++ 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6d, 0x70,
++ 0x72, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x20,
++ 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x20, 0x62,
++ 0x61, 0x73, 0x65, 0x64, 0x20, 0x6f, 0x6e, 0x20,
++ 0x74, 0x68, 0x65, 0x20, 0x4c, 0x5a, 0x4f, 0x2b,
++ 0x8c, 0x00, 0x0d, 0x61, 0x6c, 0x67, 0x6f, 0x72,
++ 0x69, 0x74, 0x68, 0x6d, 0x2e, 0x20, 0x20, 0x54,
++ 0x68, 0x69, 0x73, 0x2a, 0x54, 0x01, 0x02, 0x66,
++ 0x69, 0x6e, 0x65, 0x73, 0x94, 0x06, 0x05, 0x61,
++ 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x76,
++ 0x0a, 0x6f, 0x66, 0x88, 0x02, 0x60, 0x09, 0x27,
++ 0xf0, 0x00, 0x0c, 0x20, 0x75, 0x73, 0x65, 0x64,
++ 0x20, 0x69, 0x6e, 0x20, 0x55, 0x42, 0x49, 0x46,
++ 0x53, 0x2e, 0x11, 0x00, 0x00 },
++ },
++};
++
++static struct comp_testvec lzo_decomp_tv_template[] = {
++ {
++ .inlen = 133,
++ .outlen = 159,
++ .input = { 0x00, 0x2b, 0x54, 0x68, 0x69, 0x73, 0x20, 0x64,
++ 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x20,
++ 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65,
++ 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6d, 0x70,
++ 0x72, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x20,
++ 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x20, 0x62,
++ 0x61, 0x73, 0x65, 0x64, 0x20, 0x6f, 0x6e, 0x20,
++ 0x74, 0x68, 0x65, 0x20, 0x4c, 0x5a, 0x4f, 0x2b,
++ 0x8c, 0x00, 0x0d, 0x61, 0x6c, 0x67, 0x6f, 0x72,
++ 0x69, 0x74, 0x68, 0x6d, 0x2e, 0x20, 0x20, 0x54,
++ 0x68, 0x69, 0x73, 0x2a, 0x54, 0x01, 0x02, 0x66,
++ 0x69, 0x6e, 0x65, 0x73, 0x94, 0x06, 0x05, 0x61,
++ 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x76,
++ 0x0a, 0x6f, 0x66, 0x88, 0x02, 0x60, 0x09, 0x27,
++ 0xf0, 0x00, 0x0c, 0x20, 0x75, 0x73, 0x65, 0x64,
++ 0x20, 0x69, 0x6e, 0x20, 0x55, 0x42, 0x49, 0x46,
++ 0x53, 0x2e, 0x11, 0x00, 0x00 },
++ .output = "This document describes a compression method based on the LZO "
++ "compression algorithm. This document defines the application of "
++ "the LZO algorithm used in UBIFS.",
++ }, {
++ .inlen = 46,
++ .outlen = 70,
++ .input = { 0x00, 0x0d, 0x4a, 0x6f, 0x69, 0x6e, 0x20, 0x75,
++ 0x73, 0x20, 0x6e, 0x6f, 0x77, 0x20, 0x61, 0x6e,
++ 0x64, 0x20, 0x73, 0x68, 0x61, 0x72, 0x65, 0x20,
++ 0x74, 0x68, 0x65, 0x20, 0x73, 0x6f, 0x66, 0x74,
++ 0x77, 0x70, 0x01, 0x01, 0x4a, 0x6f, 0x69, 0x6e,
++ 0x3d, 0x88, 0x00, 0x11, 0x00, 0x00 },
++ .output = "Join us now and share the software "
++ "Join us now and share the software ",
++ },
++};
++
++/*
+ * Michael MIC test vectors from IEEE 802.11i
+ */
+ #define MICHAEL_MIC_TEST_VECTORS 6
+@@ -4812,4 +8207,20 @@ static struct cipher_speed camellia_speed_template[] = {
+ { .klen = 0, .blen = 0, }
+ };
+
++static struct cipher_speed salsa20_speed_template[] = {
++ { .klen = 16, .blen = 16, },
++ { .klen = 16, .blen = 64, },
++ { .klen = 16, .blen = 256, },
++ { .klen = 16, .blen = 1024, },
++ { .klen = 16, .blen = 8192, },
++ { .klen = 32, .blen = 16, },
++ { .klen = 32, .blen = 64, },
++ { .klen = 32, .blen = 256, },
++ { .klen = 32, .blen = 1024, },
++ { .klen = 32, .blen = 8192, },
++
++ /* End marker */
++ { .klen = 0, .blen = 0, }
++};
++
+ #endif /* _CRYPTO_TCRYPT_H */
+diff --git a/crypto/twofish_common.c b/crypto/twofish_common.c
+index b4b9c0c..0af216c 100644
+--- a/crypto/twofish_common.c
++++ b/crypto/twofish_common.c
+@@ -655,84 +655,48 @@ int twofish_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int key_len)
+ CALC_SB256_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+- /* Calculate whitening and round subkeys. The constants are
+- * indices of subkeys, preprocessed through q0 and q1. */
+- CALC_K256 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+- CALC_K256 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+- CALC_K256 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+- CALC_K256 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+- CALC_K256 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+- CALC_K256 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+- CALC_K256 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+- CALC_K256 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+- CALC_K256 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+- CALC_K256 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+- CALC_K256 (k, 12, 0x18, 0x37, 0xF7, 0x71);
+- CALC_K256 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+- CALC_K256 (k, 16, 0x43, 0x30, 0x75, 0x0F);
+- CALC_K256 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+- CALC_K256 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+- CALC_K256 (k, 22, 0x94, 0x06, 0x48, 0x3F);
+- CALC_K256 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+- CALC_K256 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+- CALC_K256 (k, 28, 0x84, 0x8A, 0x54, 0x00);
+- CALC_K256 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
++ /* CALC_K256/CALC_K192/CALC_K loops were unrolled.
++ * Unrolling produced x2.5 more code (+18k on i386),
++ * and speeded up key setup by 7%:
++ * unrolled: twofish_setkey/sec: 41128
++ * loop: twofish_setkey/sec: 38148
++ * CALC_K256: ~100 insns each
++ * CALC_K192: ~90 insns
++ * CALC_K: ~70 insns
++ */
++ /* Calculate whitening and round subkeys */
++ for ( i = 0; i < 8; i += 2 ) {
++ CALC_K256 (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
++ }
++ for ( i = 0; i < 32; i += 2 ) {
++ CALC_K256 (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
++ }
+ } else if (key_len == 24) { /* 192-bit key */
+ /* Compute the S-boxes. */
+ for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
+ CALC_SB192_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+- /* Calculate whitening and round subkeys. The constants are
+- * indices of subkeys, preprocessed through q0 and q1. */
+- CALC_K192 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+- CALC_K192 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+- CALC_K192 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+- CALC_K192 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+- CALC_K192 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+- CALC_K192 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+- CALC_K192 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+- CALC_K192 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+- CALC_K192 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+- CALC_K192 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+- CALC_K192 (k, 12, 0x18, 0x37, 0xF7, 0x71);
+- CALC_K192 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+- CALC_K192 (k, 16, 0x43, 0x30, 0x75, 0x0F);
+- CALC_K192 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+- CALC_K192 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+- CALC_K192 (k, 22, 0x94, 0x06, 0x48, 0x3F);
+- CALC_K192 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+- CALC_K192 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+- CALC_K192 (k, 28, 0x84, 0x8A, 0x54, 0x00);
+- CALC_K192 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
++ /* Calculate whitening and round subkeys */
++ for ( i = 0; i < 8; i += 2 ) {
++ CALC_K192 (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
++ }
++ for ( i = 0; i < 32; i += 2 ) {
++ CALC_K192 (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
++ }
+ } else { /* 128-bit key */
+ /* Compute the S-boxes. */
+ for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
+ CALC_SB_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+- /* Calculate whitening and round subkeys. The constants are
+- * indices of subkeys, preprocessed through q0 and q1. */
+- CALC_K (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+- CALC_K (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+- CALC_K (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+- CALC_K (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+- CALC_K (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+- CALC_K (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+- CALC_K (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+- CALC_K (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+- CALC_K (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+- CALC_K (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+- CALC_K (k, 12, 0x18, 0x37, 0xF7, 0x71);
+- CALC_K (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+- CALC_K (k, 16, 0x43, 0x30, 0x75, 0x0F);
+- CALC_K (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+- CALC_K (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+- CALC_K (k, 22, 0x94, 0x06, 0x48, 0x3F);
+- CALC_K (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+- CALC_K (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+- CALC_K (k, 28, 0x84, 0x8A, 0x54, 0x00);
+- CALC_K (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
++ /* Calculate whitening and round subkeys */
++ for ( i = 0; i < 8; i += 2 ) {
++ CALC_K (w, i, q0[i], q1[i], q0[i+1], q1[i+1]);
++ }
++ for ( i = 0; i < 32; i += 2 ) {
++ CALC_K (k, i, q0[i+8], q1[i+8], q0[i+9], q1[i+9]);
++ }
+ }
+
+ return 0;
+diff --git a/crypto/xcbc.c b/crypto/xcbc.c
+index ac68f3b..a82959d 100644
+--- a/crypto/xcbc.c
++++ b/crypto/xcbc.c
+@@ -19,6 +19,7 @@
+ * Kazunori Miyazawa <miyazawa at linux-ipv6.org>
+ */
+
++#include <crypto/scatterwalk.h>
+ #include <linux/crypto.h>
+ #include <linux/err.h>
+ #include <linux/hardirq.h>
+@@ -27,7 +28,6 @@
+ #include <linux/rtnetlink.h>
+ #include <linux/slab.h>
+ #include <linux/scatterlist.h>
+-#include "internal.h"
+
+ static u_int32_t ks[12] = {0x01010101, 0x01010101, 0x01010101, 0x01010101,
+ 0x02020202, 0x02020202, 0x02020202, 0x02020202,
+@@ -307,7 +307,8 @@ static struct crypto_instance *xcbc_alloc(struct rtattr **tb)
+ case 16:
+ break;
+ default:
+- return ERR_PTR(PTR_ERR(alg));
++ inst = ERR_PTR(-EINVAL);
++ goto out_put_alg;
+ }
+
+ inst = crypto_alloc_instance("xcbc", alg);
+@@ -320,10 +321,7 @@ static struct crypto_instance *xcbc_alloc(struct rtattr **tb)
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_hash_type;
+
+- inst->alg.cra_hash.digestsize =
+- (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+- CRYPTO_ALG_TYPE_HASH ? alg->cra_hash.digestsize :
+- alg->cra_blocksize;
++ inst->alg.cra_hash.digestsize = alg->cra_blocksize;
+ inst->alg.cra_ctxsize = sizeof(struct crypto_xcbc_ctx) +
+ ALIGN(inst->alg.cra_blocksize * 3, sizeof(void *));
+ inst->alg.cra_init = xcbc_init_tfm;
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 8cb37e3..d92d4d8 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -38,7 +38,7 @@ obj-$(CONFIG_SCSI) += scsi/
+ obj-$(CONFIG_ATA) += ata/
+ obj-$(CONFIG_FUSION) += message/
+ obj-$(CONFIG_FIREWIRE) += firewire/
+-obj-$(CONFIG_IEEE1394) += ieee1394/
++obj-y += ieee1394/
+ obj-$(CONFIG_UIO) += uio/
+ obj-y += cdrom/
+ obj-y += auxdisplay/
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index f4487c3..1b4cf98 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -743,7 +743,7 @@ static int __init acpi_bus_init(void)
+ return -ENODEV;
+ }
+
+-decl_subsys(acpi, NULL, NULL);
++struct kobject *acpi_kobj;
+
+ static int __init acpi_init(void)
+ {
+@@ -755,10 +755,11 @@ static int __init acpi_init(void)
+ return -ENODEV;
+ }
+
+- result = firmware_register(&acpi_subsys);
+- if (result < 0)
+- printk(KERN_WARNING "%s: firmware_register error: %d\n",
+- __FUNCTION__, result);
++ acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
++ if (!acpi_kobj) {
++ printk(KERN_WARNING "%s: kobject create error\n", __FUNCTION__);
++ acpi_kobj = NULL;
++ }
+
+ result = acpi_bus_init();
+
+diff --git a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
+index c9f526e..5400ea1 100644
+--- a/drivers/acpi/pci_link.c
++++ b/drivers/acpi/pci_link.c
+@@ -911,7 +911,7 @@ __setup("acpi_irq_balance", acpi_irq_balance_set);
+
+ /* FIXME: we will remove this interface after all drivers call pci_disable_device */
+ static struct sysdev_class irqrouter_sysdev_class = {
+- set_kset_name("irqrouter"),
++ .name = "irqrouter",
+ .resume = irqrouter_resume,
+ };
+
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 2235f4e..eb1f82f 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -357,6 +357,26 @@ int acpi_processor_resume(struct acpi_device * device)
+ return 0;
+ }
+
++#if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
++static int tsc_halts_in_c(int state)
++{
++ switch (boot_cpu_data.x86_vendor) {
++ case X86_VENDOR_AMD:
++ /*
++ * AMD Fam10h TSC will tick in all
++ * C/P/S0/S1 states when this bit is set.
++ */
++ if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
++ return 0;
++ /*FALL THROUGH*/
++ case X86_VENDOR_INTEL:
++ /* Several cases known where TSC halts in C2 too */
++ default:
++ return state > ACPI_STATE_C1;
++ }
++}
++#endif
++
+ #ifndef CONFIG_CPU_IDLE
+ static void acpi_processor_idle(void)
+ {
+@@ -516,7 +536,8 @@ static void acpi_processor_idle(void)
+
+ #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
+ /* TSC halts in C2, so notify users */
+- mark_tsc_unstable("possible TSC halt in C2");
++ if (tsc_halts_in_c(ACPI_STATE_C2))
++ mark_tsc_unstable("possible TSC halt in C2");
+ #endif
+ /* Compute time (ticks) that we were actually asleep */
+ sleep_ticks = ticks_elapsed(t1, t2);
+@@ -534,6 +555,7 @@ static void acpi_processor_idle(void)
+ break;
+
+ case ACPI_STATE_C3:
++ acpi_unlazy_tlb(smp_processor_id());
+ /*
+ * Must be done before busmaster disable as we might
+ * need to access HPET !
+@@ -579,7 +601,8 @@ static void acpi_processor_idle(void)
+
+ #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
+ /* TSC halts in C3, so notify users */
+- mark_tsc_unstable("TSC halts in C3");
++ if (tsc_halts_in_c(ACPI_STATE_C3))
++ mark_tsc_unstable("TSC halts in C3");
+ #endif
+ /* Compute time (ticks) that we were actually asleep */
+ sleep_ticks = ticks_elapsed(t1, t2);
+@@ -1423,6 +1446,7 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev,
+ return 0;
+ }
+
++ acpi_unlazy_tlb(smp_processor_id());
+ /*
+ * Must be done before busmaster disable as we might need to
+ * access HPET !
+@@ -1443,7 +1467,8 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev,
+
+ #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
+ /* TSC could halt in idle, so notify users */
+- mark_tsc_unstable("TSC halts in idle");;
++ if (tsc_halts_in_c(cx->type))
++ mark_tsc_unstable("TSC halts in idle");
+ #endif
+ sleep_ticks = ticks_elapsed(t1, t2);
+
+@@ -1554,7 +1579,8 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,
+
+ #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
+ /* TSC could halt in idle, so notify users */
+- mark_tsc_unstable("TSC halts in idle");
++ if (tsc_halts_in_c(ACPI_STATE_C3))
++ mark_tsc_unstable("TSC halts in idle");
+ #endif
+ sleep_ticks = ticks_elapsed(t1, t2);
+ /* Tell the scheduler how much we idled: */
+diff --git a/drivers/acpi/system.c b/drivers/acpi/system.c
+index edee280..5ffe0ea 100644
+--- a/drivers/acpi/system.c
++++ b/drivers/acpi/system.c
+@@ -58,7 +58,7 @@ module_param_call(acpica_version, NULL, param_get_acpica_version, NULL, 0444);
+ FS Interface (/sys)
+ -------------------------------------------------------------------------- */
+ static LIST_HEAD(acpi_table_attr_list);
+-static struct kobject tables_kobj;
++static struct kobject *tables_kobj;
+
+ struct acpi_table_attr {
+ struct bin_attribute attr;
+@@ -135,11 +135,9 @@ static int acpi_system_sysfs_init(void)
+ int table_index = 0;
+ int result;
+
+- tables_kobj.parent = &acpi_subsys.kobj;
+- kobject_set_name(&tables_kobj, "tables");
+- result = kobject_register(&tables_kobj);
+- if (result)
+- return result;
++ tables_kobj = kobject_create_and_add("tables", acpi_kobj);
++ if (!tables_kobj)
++ return -ENOMEM;
+
+ do {
+ result = acpi_get_table_by_index(table_index, &table_header);
+@@ -153,7 +151,7 @@ static int acpi_system_sysfs_init(void)
+
+ acpi_table_attr_init(table_attr, table_header);
+ result =
+- sysfs_create_bin_file(&tables_kobj,
++ sysfs_create_bin_file(tables_kobj,
+ &table_attr->attr);
+ if (result) {
+ kfree(table_attr);
+@@ -163,6 +161,7 @@ static int acpi_system_sysfs_init(void)
+ &acpi_table_attr_list);
+ }
+ } while (!result);
++ kobject_uevent(tables_kobj, KOBJ_ADD);
+
+ return 0;
+ }
+diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
+index ba63619..2478cca 100644
+--- a/drivers/ata/Kconfig
++++ b/drivers/ata/Kconfig
+@@ -459,6 +459,15 @@ config PATA_NETCELL
+
+ If unsure, say N.
+
++config PATA_NINJA32
++ tristate "Ninja32/Delkin Cardbus ATA support (Experimental)"
++ depends on PCI && EXPERIMENTAL
++ help
++ This option enables support for the Ninja32, Delkin and
++ possibly other brands of Cardbus ATA adapter.
++
++ If unsure, say N.
++
+ config PATA_NS87410
+ tristate "Nat Semi NS87410 PATA support (Experimental)"
+ depends on PCI && EXPERIMENTAL
+diff --git a/drivers/ata/Makefile b/drivers/ata/Makefile
+index b13feb2..82550c1 100644
+--- a/drivers/ata/Makefile
++++ b/drivers/ata/Makefile
+@@ -41,6 +41,7 @@ obj-$(CONFIG_PATA_IT821X) += pata_it821x.o
+ obj-$(CONFIG_PATA_IT8213) += pata_it8213.o
+ obj-$(CONFIG_PATA_JMICRON) += pata_jmicron.o
+ obj-$(CONFIG_PATA_NETCELL) += pata_netcell.o
++obj-$(CONFIG_PATA_NINJA32) += pata_ninja32.o
+ obj-$(CONFIG_PATA_NS87410) += pata_ns87410.o
+ obj-$(CONFIG_PATA_NS87415) += pata_ns87415.o
+ obj-$(CONFIG_PATA_OPTI) += pata_opti.o
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 54f38c2..6f089b8 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -198,18 +198,18 @@ enum {
+ };
+
+ struct ahci_cmd_hdr {
+- u32 opts;
+- u32 status;
+- u32 tbl_addr;
+- u32 tbl_addr_hi;
+- u32 reserved[4];
++ __le32 opts;
++ __le32 status;
++ __le32 tbl_addr;
++ __le32 tbl_addr_hi;
++ __le32 reserved[4];
+ };
+
+ struct ahci_sg {
+- u32 addr;
+- u32 addr_hi;
+- u32 reserved;
+- u32 flags_size;
++ __le32 addr;
++ __le32 addr_hi;
++ __le32 reserved;
++ __le32 flags_size;
+ };
+
+ struct ahci_host_priv {
+@@ -597,6 +597,20 @@ static inline void __iomem *ahci_port_base(struct ata_port *ap)
+ return __ahci_port_base(ap->host, ap->port_no);
+ }
+
++static void ahci_enable_ahci(void __iomem *mmio)
++{
++ u32 tmp;
++
++ /* turn on AHCI_EN */
++ tmp = readl(mmio + HOST_CTL);
++ if (!(tmp & HOST_AHCI_EN)) {
++ tmp |= HOST_AHCI_EN;
++ writel(tmp, mmio + HOST_CTL);
++ tmp = readl(mmio + HOST_CTL); /* flush && sanity check */
++ WARN_ON(!(tmp & HOST_AHCI_EN));
++ }
++}
++
+ /**
+ * ahci_save_initial_config - Save and fixup initial config values
+ * @pdev: target PCI device
+@@ -619,6 +633,9 @@ static void ahci_save_initial_config(struct pci_dev *pdev,
+ u32 cap, port_map;
+ int i;
+
++ /* make sure AHCI mode is enabled before accessing CAP */
++ ahci_enable_ahci(mmio);
++
+ /* Values prefixed with saved_ are written back to host after
+ * reset. Values without are used for driver operation.
+ */
+@@ -1036,19 +1053,17 @@ static int ahci_deinit_port(struct ata_port *ap, const char **emsg)
+ static int ahci_reset_controller(struct ata_host *host)
+ {
+ struct pci_dev *pdev = to_pci_dev(host->dev);
++ struct ahci_host_priv *hpriv = host->private_data;
+ void __iomem *mmio = host->iomap[AHCI_PCI_BAR];
+ u32 tmp;
+
+ /* we must be in AHCI mode, before using anything
+ * AHCI-specific, such as HOST_RESET.
+ */
+- tmp = readl(mmio + HOST_CTL);
+- if (!(tmp & HOST_AHCI_EN)) {
+- tmp |= HOST_AHCI_EN;
+- writel(tmp, mmio + HOST_CTL);
+- }
++ ahci_enable_ahci(mmio);
+
+ /* global controller reset */
++ tmp = readl(mmio + HOST_CTL);
+ if ((tmp & HOST_RESET) == 0) {
+ writel(tmp | HOST_RESET, mmio + HOST_CTL);
+ readl(mmio + HOST_CTL); /* flush */
+@@ -1067,8 +1082,7 @@ static int ahci_reset_controller(struct ata_host *host)
+ }
+
+ /* turn on AHCI mode */
+- writel(HOST_AHCI_EN, mmio + HOST_CTL);
+- (void) readl(mmio + HOST_CTL); /* flush */
++ ahci_enable_ahci(mmio);
+
+ /* some registers might be cleared on reset. restore initial values */
+ ahci_restore_initial_config(host);
+@@ -1078,8 +1092,10 @@ static int ahci_reset_controller(struct ata_host *host)
+
+ /* configure PCS */
+ pci_read_config_word(pdev, 0x92, &tmp16);
+- tmp16 |= 0xf;
+- pci_write_config_word(pdev, 0x92, tmp16);
++ if ((tmp16 & hpriv->port_map) != hpriv->port_map) {
++ tmp16 |= hpriv->port_map;
++ pci_write_config_word(pdev, 0x92, tmp16);
++ }
+ }
+
+ return 0;
+@@ -1480,35 +1496,31 @@ static void ahci_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
+ static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
+ {
+ struct scatterlist *sg;
+- struct ahci_sg *ahci_sg;
+- unsigned int n_sg = 0;
++ struct ahci_sg *ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
++ unsigned int si;
+
+ VPRINTK("ENTER\n");
+
+ /*
+ * Next, the S/G list.
+ */
+- ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ dma_addr_t addr = sg_dma_address(sg);
+ u32 sg_len = sg_dma_len(sg);
+
+- ahci_sg->addr = cpu_to_le32(addr & 0xffffffff);
+- ahci_sg->addr_hi = cpu_to_le32((addr >> 16) >> 16);
+- ahci_sg->flags_size = cpu_to_le32(sg_len - 1);
+-
+- ahci_sg++;
+- n_sg++;
++ ahci_sg[si].addr = cpu_to_le32(addr & 0xffffffff);
++ ahci_sg[si].addr_hi = cpu_to_le32((addr >> 16) >> 16);
++ ahci_sg[si].flags_size = cpu_to_le32(sg_len - 1);
+ }
+
+- return n_sg;
++ return si;
+ }
+
+ static void ahci_qc_prep(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct ahci_port_priv *pp = ap->private_data;
+- int is_atapi = is_atapi_taskfile(&qc->tf);
++ int is_atapi = ata_is_atapi(qc->tf.protocol);
+ void *cmd_tbl;
+ u32 opts;
+ const u32 cmd_fis_len = 5; /* five dwords */
+diff --git a/drivers/ata/ata_generic.c b/drivers/ata/ata_generic.c
+index 9032998..2053420 100644
+--- a/drivers/ata/ata_generic.c
++++ b/drivers/ata/ata_generic.c
+@@ -26,7 +26,7 @@
+ #include <linux/libata.h>
+
+ #define DRV_NAME "ata_generic"
+-#define DRV_VERSION "0.2.13"
++#define DRV_VERSION "0.2.15"
+
+ /*
+ * A generic parallel ATA driver using libata
+@@ -48,27 +48,47 @@ static int generic_set_mode(struct ata_link *link, struct ata_device **unused)
+ struct ata_port *ap = link->ap;
+ int dma_enabled = 0;
+ struct ata_device *dev;
++ struct pci_dev *pdev = to_pci_dev(ap->host->dev);
+
+ /* Bits 5 and 6 indicate if DMA is active on master/slave */
+ if (ap->ioaddr.bmdma_addr)
+ dma_enabled = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS);
+
++ if (pdev->vendor == PCI_VENDOR_ID_CENATEK)
++ dma_enabled = 0xFF;
++
+ ata_link_for_each_dev(dev, link) {
+- if (ata_dev_enabled(dev)) {
+- /* We don't really care */
+- dev->pio_mode = XFER_PIO_0;
+- dev->dma_mode = XFER_MW_DMA_0;
+- /* We do need the right mode information for DMA or PIO
+- and this comes from the current configuration flags */
+- if (dma_enabled & (1 << (5 + dev->devno))) {
+- ata_id_to_dma_mode(dev, XFER_MW_DMA_0);
+- dev->flags &= ~ATA_DFLAG_PIO;
+- } else {
+- ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
+- dev->xfer_mode = XFER_PIO_0;
+- dev->xfer_shift = ATA_SHIFT_PIO;
+- dev->flags |= ATA_DFLAG_PIO;
++ if (!ata_dev_enabled(dev))
++ continue;
++
++ /* We don't really care */
++ dev->pio_mode = XFER_PIO_0;
++ dev->dma_mode = XFER_MW_DMA_0;
++ /* We do need the right mode information for DMA or PIO
++ and this comes from the current configuration flags */
++ if (dma_enabled & (1 << (5 + dev->devno))) {
++ unsigned int xfer_mask = ata_id_xfermask(dev->id);
++ const char *name;
++
++ if (xfer_mask & (ATA_MASK_MWDMA | ATA_MASK_UDMA))
++ name = ata_mode_string(xfer_mask);
++ else {
++ /* SWDMA perhaps? */
++ name = "DMA";
++ xfer_mask |= ata_xfer_mode2mask(XFER_MW_DMA_0);
+ }
++
++ ata_dev_printk(dev, KERN_INFO, "configured for %s\n",
++ name);
++
++ dev->xfer_mode = ata_xfer_mask2mode(xfer_mask);
++ dev->xfer_shift = ata_xfer_mode2shift(dev->xfer_mode);
++ dev->flags &= ~ATA_DFLAG_PIO;
++ } else {
++ ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
++ dev->xfer_mode = XFER_PIO_0;
++ dev->xfer_shift = ATA_SHIFT_PIO;
++ dev->flags |= ATA_DFLAG_PIO;
+ }
+ }
+ return 0;
+@@ -185,6 +205,7 @@ static struct pci_device_id ata_generic[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE), },
+ { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C561), },
+ { PCI_DEVICE(PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558), },
++ { PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE), },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO), },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1), },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2), },
+diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
+index b406b39..a65c8ae 100644
+--- a/drivers/ata/ata_piix.c
++++ b/drivers/ata/ata_piix.c
+@@ -101,39 +101,21 @@ enum {
+ ICH5_PMR = 0x90, /* port mapping register */
+ ICH5_PCS = 0x92, /* port control and status */
+ PIIX_SCC = 0x0A, /* sub-class code register */
++ PIIX_SIDPR_BAR = 5,
++ PIIX_SIDPR_LEN = 16,
++ PIIX_SIDPR_IDX = 0,
++ PIIX_SIDPR_DATA = 4,
+
+- PIIX_FLAG_SCR = (1 << 26), /* SCR available */
+ PIIX_FLAG_AHCI = (1 << 27), /* AHCI possible */
+ PIIX_FLAG_CHECKINTR = (1 << 28), /* make sure PCI INTx enabled */
++ PIIX_FLAG_SIDPR = (1 << 29), /* SATA idx/data pair regs */
+
+ PIIX_PATA_FLAGS = ATA_FLAG_SLAVE_POSS,
+ PIIX_SATA_FLAGS = ATA_FLAG_SATA | PIIX_FLAG_CHECKINTR,
+
+- /* combined mode. if set, PATA is channel 0.
+- * if clear, PATA is channel 1.
+- */
+- PIIX_PORT_ENABLED = (1 << 0),
+- PIIX_PORT_PRESENT = (1 << 4),
+-
+ PIIX_80C_PRI = (1 << 5) | (1 << 4),
+ PIIX_80C_SEC = (1 << 7) | (1 << 6),
+
+- /* controller IDs */
+- piix_pata_mwdma = 0, /* PIIX3 MWDMA only */
+- piix_pata_33, /* PIIX4 at 33Mhz */
+- ich_pata_33, /* ICH up to UDMA 33 only */
+- ich_pata_66, /* ICH up to 66 Mhz */
+- ich_pata_100, /* ICH up to UDMA 100 */
+- ich5_sata,
+- ich6_sata,
+- ich6_sata_ahci,
+- ich6m_sata_ahci,
+- ich8_sata_ahci,
+- ich8_2port_sata,
+- ich8m_apple_sata_ahci, /* locks up on second port enable */
+- tolapai_sata_ahci,
+- piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
+-
+ /* constants for mapping table */
+ P0 = 0, /* port 0 */
+ P1 = 1, /* port 1 */
+@@ -149,6 +131,24 @@ enum {
+ PIIX_HOST_BROKEN_SUSPEND = (1 << 24),
+ };
+
++enum piix_controller_ids {
++ /* controller IDs */
++ piix_pata_mwdma, /* PIIX3 MWDMA only */
++ piix_pata_33, /* PIIX4 at 33Mhz */
++ ich_pata_33, /* ICH up to UDMA 33 only */
++ ich_pata_66, /* ICH up to 66 Mhz */
++ ich_pata_100, /* ICH up to UDMA 100 */
++ ich5_sata,
++ ich6_sata,
++ ich6_sata_ahci,
++ ich6m_sata_ahci,
++ ich8_sata_ahci,
++ ich8_2port_sata,
++ ich8m_apple_sata_ahci, /* locks up on second port enable */
++ tolapai_sata_ahci,
++ piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
++};
++
+ struct piix_map_db {
+ const u32 mask;
+ const u16 port_enable;
+@@ -157,6 +157,7 @@ struct piix_map_db {
+
+ struct piix_host_priv {
+ const int *map;
++ void __iomem *sidpr;
+ };
+
+ static int piix_init_one(struct pci_dev *pdev,
+@@ -167,6 +168,9 @@ static void piix_set_dmamode(struct ata_port *ap, struct ata_device *adev);
+ static void ich_set_dmamode(struct ata_port *ap, struct ata_device *adev);
+ static int ich_pata_cable_detect(struct ata_port *ap);
+ static u8 piix_vmw_bmdma_status(struct ata_port *ap);
++static int piix_sidpr_scr_read(struct ata_port *ap, unsigned int reg, u32 *val);
++static int piix_sidpr_scr_write(struct ata_port *ap, unsigned int reg, u32 val);
++static void piix_sidpr_error_handler(struct ata_port *ap);
+ #ifdef CONFIG_PM
+ static int piix_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg);
+ static int piix_pci_device_resume(struct pci_dev *pdev);
+@@ -321,7 +325,6 @@ static const struct ata_port_operations piix_pata_ops = {
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+ .cable_detect = ata_cable_40wire,
+
+- .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+@@ -353,7 +356,6 @@ static const struct ata_port_operations ich_pata_ops = {
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+ .cable_detect = ich_pata_cable_detect,
+
+- .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+@@ -380,7 +382,6 @@ static const struct ata_port_operations piix_sata_ops = {
+ .error_handler = ata_bmdma_error_handler,
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+
+- .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+@@ -419,6 +420,35 @@ static const struct ata_port_operations piix_vmw_ops = {
+ .port_start = ata_port_start,
+ };
+
++static const struct ata_port_operations piix_sidpr_sata_ops = {
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .bmdma_setup = ata_bmdma_setup,
++ .bmdma_start = ata_bmdma_start,
++ .bmdma_stop = ata_bmdma_stop,
++ .bmdma_status = ata_bmdma_status,
++ .qc_prep = ata_qc_prep,
++ .qc_issue = ata_qc_issue_prot,
++ .data_xfer = ata_data_xfer,
++
++ .scr_read = piix_sidpr_scr_read,
++ .scr_write = piix_sidpr_scr_write,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = piix_sidpr_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_port_start,
++};
++
+ static const struct piix_map_db ich5_map_db = {
+ .mask = 0x7,
+ .port_enable = 0x3,
+@@ -526,7 +556,6 @@ static const struct piix_map_db *piix_map_db_table[] = {
+ static struct ata_port_info piix_port_info[] = {
+ [piix_pata_mwdma] = /* PIIX3 MWDMA only */
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x06, /* mwdma1-2 ?? CHECK 0 should be ok but slow */
+@@ -535,7 +564,6 @@ static struct ata_port_info piix_port_info[] = {
+
+ [piix_pata_33] = /* PIIX4 at 33MHz */
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x06, /* mwdma1-2 ?? CHECK 0 should be ok but slow */
+@@ -545,7 +573,6 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich_pata_33] = /* ICH0 - ICH at 33Mhz*/
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS,
+ .pio_mask = 0x1f, /* pio 0-4 */
+ .mwdma_mask = 0x06, /* Check: maybe 0x07 */
+@@ -555,7 +582,6 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich_pata_66] = /* ICH controllers up to 66MHz */
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS,
+ .pio_mask = 0x1f, /* pio 0-4 */
+ .mwdma_mask = 0x06, /* MWDMA0 is broken on chip */
+@@ -565,7 +591,6 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich_pata_100] =
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS | PIIX_FLAG_CHECKINTR,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x06, /* mwdma1-2 */
+@@ -575,7 +600,6 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich5_sata] =
+ {
+- .sht = &piix_sht,
+ .flags = PIIX_SATA_FLAGS,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+@@ -585,8 +609,7 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich6_sata] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR,
++ .flags = PIIX_SATA_FLAGS,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -595,9 +618,7 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich6_sata_ahci] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -606,9 +627,7 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich6m_sata_ahci] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -617,9 +636,8 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich8_sata_ahci] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
++ PIIX_FLAG_SIDPR,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -628,9 +646,8 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich8_2port_sata] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
++ PIIX_FLAG_SIDPR,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -639,9 +656,7 @@ static struct ata_port_info piix_port_info[] = {
+
+ [tolapai_sata_ahci] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -650,9 +665,8 @@ static struct ata_port_info piix_port_info[] = {
+
+ [ich8m_apple_sata_ahci] =
+ {
+- .sht = &piix_sht,
+- .flags = PIIX_SATA_FLAGS | PIIX_FLAG_SCR |
+- PIIX_FLAG_AHCI,
++ .flags = PIIX_SATA_FLAGS | PIIX_FLAG_AHCI |
++ PIIX_FLAG_SIDPR,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = ATA_UDMA6,
+@@ -1001,6 +1015,180 @@ static void ich_set_dmamode(struct ata_port *ap, struct ata_device *adev)
+ do_pata_set_dmamode(ap, adev, 1);
+ }
+
++/*
++ * Serial ATA Index/Data Pair Superset Registers access
++ *
++ * Beginning from ICH8, there's a sane way to access SCRs using index
++ * and data register pair located at BAR5. This creates an
++ * interesting problem of mapping two SCRs to one port.
++ *
++ * Although they have separate SCRs, the master and slave aren't
++ * independent enough to be treated as separate links - e.g. softreset
++ * resets both. Also, there's no protocol defined for hard resetting
++ * a single device sharing the virtual port (no defined way to acquire
++ * device signature). This is worked around by merging the SCR values
++ * into one sensible value and requesting follow-up SRST after
++ * hardreset.
++ *
++ * SCR merging is performed in nibbles, which is the unit in which
++ * SCR contents are organized. If two values are equal, the value is used.
++ * When they differ, merge table which lists precedence of possible
++ * values is consulted and the first match or the last entry when
++ * nothing matches is used. When there's no merge table for the
++ * specific nibble, value from the first port is used.
++ */
++static const int piix_sidx_map[] = {
++ [SCR_STATUS] = 0,
++ [SCR_ERROR] = 2,
++ [SCR_CONTROL] = 1,
++};
++
++static void piix_sidpr_sel(struct ata_device *dev, unsigned int reg)
++{
++ struct ata_port *ap = dev->link->ap;
++ struct piix_host_priv *hpriv = ap->host->private_data;
++
++ iowrite32(((ap->port_no * 2 + dev->devno) << 8) | piix_sidx_map[reg],
++ hpriv->sidpr + PIIX_SIDPR_IDX);
++}
++
++static int piix_sidpr_read(struct ata_device *dev, unsigned int reg)
++{
++ struct piix_host_priv *hpriv = dev->link->ap->host->private_data;
++
++ piix_sidpr_sel(dev, reg);
++ return ioread32(hpriv->sidpr + PIIX_SIDPR_DATA);
++}
++
++static void piix_sidpr_write(struct ata_device *dev, unsigned int reg, u32 val)
++{
++ struct piix_host_priv *hpriv = dev->link->ap->host->private_data;
++
++ piix_sidpr_sel(dev, reg);
++ iowrite32(val, hpriv->sidpr + PIIX_SIDPR_DATA);
++}
++
++u32 piix_merge_scr(u32 val0, u32 val1, const int * const *merge_tbl)
++{
++ u32 val = 0;
++ int i, mi;
++
++ for (i = 0, mi = 0; i < 32 / 4; i++) {
++ u8 c0 = (val0 >> (i * 4)) & 0xf;
++ u8 c1 = (val1 >> (i * 4)) & 0xf;
++ u8 merged = c0;
++ const int *cur;
++
++ /* if no merge preference, assume the first value */
++ cur = merge_tbl[mi];
++ if (!cur)
++ goto done;
++ mi++;
++
++ /* if two values equal, use it */
++ if (c0 == c1)
++ goto done;
++
++ /* choose the first match or the last from the merge table */
++ while (*cur != -1) {
++ if (c0 == *cur || c1 == *cur)
++ break;
++ cur++;
++ }
++ if (*cur == -1)
++ cur--;
++ merged = *cur;
++ done:
++ val |= merged << (i * 4);
++ }
++
++ return val;
++}
++
++static int piix_sidpr_scr_read(struct ata_port *ap, unsigned int reg, u32 *val)
++{
++ const int * const sstatus_merge_tbl[] = {
++ /* DET */ (const int []){ 1, 3, 0, 4, 3, -1 },
++ /* SPD */ (const int []){ 2, 1, 0, -1 },
++ /* IPM */ (const int []){ 6, 2, 1, 0, -1 },
++ NULL,
++ };
++ const int * const scontrol_merge_tbl[] = {
++ /* DET */ (const int []){ 1, 0, 4, 0, -1 },
++ /* SPD */ (const int []){ 0, 2, 1, 0, -1 },
++ /* IPM */ (const int []){ 0, 1, 2, 3, 0, -1 },
++ NULL,
++ };
++ u32 v0, v1;
++
++ if (reg >= ARRAY_SIZE(piix_sidx_map))
++ return -EINVAL;
++
++ if (!(ap->flags & ATA_FLAG_SLAVE_POSS)) {
++ *val = piix_sidpr_read(&ap->link.device[0], reg);
++ return 0;
++ }
++
++ v0 = piix_sidpr_read(&ap->link.device[0], reg);
++ v1 = piix_sidpr_read(&ap->link.device[1], reg);
++
++ switch (reg) {
++ case SCR_STATUS:
++ *val = piix_merge_scr(v0, v1, sstatus_merge_tbl);
++ break;
++ case SCR_ERROR:
++ *val = v0 | v1;
++ break;
++ case SCR_CONTROL:
++ *val = piix_merge_scr(v0, v1, scontrol_merge_tbl);
++ break;
++ }
++
++ return 0;
++}
++
++static int piix_sidpr_scr_write(struct ata_port *ap, unsigned int reg, u32 val)
++{
++ if (reg >= ARRAY_SIZE(piix_sidx_map))
++ return -EINVAL;
++
++ piix_sidpr_write(&ap->link.device[0], reg, val);
++
++ if (ap->flags & ATA_FLAG_SLAVE_POSS)
++ piix_sidpr_write(&ap->link.device[1], reg, val);
++
++ return 0;
++}
++
++static int piix_sidpr_hardreset(struct ata_link *link, unsigned int *class,
++ unsigned long deadline)
++{
++ const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
++ int rc;
++
++ /* do hardreset */
++ rc = sata_link_hardreset(link, timing, deadline);
++ if (rc) {
++ ata_link_printk(link, KERN_ERR,
++ "COMRESET failed (errno=%d)\n", rc);
++ return rc;
++ }
++
++ /* TODO: phy layer with polling, timeouts, etc. */
++ if (ata_link_offline(link)) {
++ *class = ATA_DEV_NONE;
++ return 0;
++ }
++
++ return -EAGAIN;
++}
++
++static void piix_sidpr_error_handler(struct ata_port *ap)
++{
++ ata_bmdma_drive_eh(ap, ata_std_prereset, ata_std_softreset,
++ piix_sidpr_hardreset, ata_std_postreset);
++}
++
+ #ifdef CONFIG_PM
+ static int piix_broken_suspend(void)
+ {
+@@ -1034,6 +1222,13 @@ static int piix_broken_suspend(void)
+ },
+ },
+ {
++ .ident = "TECRA M6",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "TECRA M6"),
++ },
++ },
++ {
+ .ident = "TECRA M7",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+@@ -1048,6 +1243,13 @@ static int piix_broken_suspend(void)
+ },
+ },
+ {
++ .ident = "Satellite R20",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Satellite R20"),
++ },
++ },
++ {
+ .ident = "Satellite R25",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+@@ -1253,10 +1455,10 @@ static int __devinit piix_check_450nx_errata(struct pci_dev *ata_dev)
+ return no_piix_dma;
+ }
+
+-static void __devinit piix_init_pcs(struct pci_dev *pdev,
+- struct ata_port_info *pinfo,
++static void __devinit piix_init_pcs(struct ata_host *host,
+ const struct piix_map_db *map_db)
+ {
++ struct pci_dev *pdev = to_pci_dev(host->dev);
+ u16 pcs, new_pcs;
+
+ pci_read_config_word(pdev, ICH5_PCS, &pcs);
+@@ -1270,11 +1472,10 @@ static void __devinit piix_init_pcs(struct pci_dev *pdev,
+ }
+ }
+
+-static void __devinit piix_init_sata_map(struct pci_dev *pdev,
+- struct ata_port_info *pinfo,
+- const struct piix_map_db *map_db)
++static const int *__devinit piix_init_sata_map(struct pci_dev *pdev,
++ struct ata_port_info *pinfo,
++ const struct piix_map_db *map_db)
+ {
+- struct piix_host_priv *hpriv = pinfo[0].private_data;
+ const int *map;
+ int i, invalid_map = 0;
+ u8 map_value;
+@@ -1298,7 +1499,6 @@ static void __devinit piix_init_sata_map(struct pci_dev *pdev,
+ case IDE:
+ WARN_ON((i & 1) || map[i + 1] != IDE);
+ pinfo[i / 2] = piix_port_info[ich_pata_100];
+- pinfo[i / 2].private_data = hpriv;
+ i++;
+ printk(" IDE IDE");
+ break;
+@@ -1316,7 +1516,33 @@ static void __devinit piix_init_sata_map(struct pci_dev *pdev,
+ dev_printk(KERN_ERR, &pdev->dev,
+ "invalid MAP value %u\n", map_value);
+
+- hpriv->map = map;
++ return map;
++}
++
++static void __devinit piix_init_sidpr(struct ata_host *host)
++{
++ struct pci_dev *pdev = to_pci_dev(host->dev);
++ struct piix_host_priv *hpriv = host->private_data;
++ int i;
++
++ /* check for availability */
++ for (i = 0; i < 4; i++)
++ if (hpriv->map[i] == IDE)
++ return;
++
++ if (!(host->ports[0]->flags & PIIX_FLAG_SIDPR))
++ return;
++
++ if (pci_resource_start(pdev, PIIX_SIDPR_BAR) == 0 ||
++ pci_resource_len(pdev, PIIX_SIDPR_BAR) != PIIX_SIDPR_LEN)
++ return;
++
++ if (pcim_iomap_regions(pdev, 1 << PIIX_SIDPR_BAR, DRV_NAME))
++ return;
++
++ hpriv->sidpr = pcim_iomap_table(pdev)[PIIX_SIDPR_BAR];
++ host->ports[0]->ops = &piix_sidpr_sata_ops;
++ host->ports[1]->ops = &piix_sidpr_sata_ops;
+ }
+
+ static void piix_iocfg_bit18_quirk(struct pci_dev *pdev)
+@@ -1375,8 +1601,10 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ struct device *dev = &pdev->dev;
+ struct ata_port_info port_info[2];
+ const struct ata_port_info *ppi[] = { &port_info[0], &port_info[1] };
+- struct piix_host_priv *hpriv;
+ unsigned long port_flags;
++ struct ata_host *host;
++ struct piix_host_priv *hpriv;
++ int rc;
+
+ if (!printed_version++)
+ dev_printk(KERN_DEBUG, &pdev->dev,
+@@ -1386,17 +1614,31 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (!in_module_init)
+ return -ENODEV;
+
++ port_info[0] = piix_port_info[ent->driver_data];
++ port_info[1] = piix_port_info[ent->driver_data];
++
++ port_flags = port_info[0].flags;
++
++ /* enable device and prepare host */
++ rc = pcim_enable_device(pdev);
++ if (rc)
++ return rc;
++
++ /* SATA map init can change port_info, do it before prepping host */
+ hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
+ if (!hpriv)
+ return -ENOMEM;
+
+- port_info[0] = piix_port_info[ent->driver_data];
+- port_info[1] = piix_port_info[ent->driver_data];
+- port_info[0].private_data = hpriv;
+- port_info[1].private_data = hpriv;
++ if (port_flags & ATA_FLAG_SATA)
++ hpriv->map = piix_init_sata_map(pdev, port_info,
++ piix_map_db_table[ent->driver_data]);
+
+- port_flags = port_info[0].flags;
++ rc = ata_pci_prepare_sff_host(pdev, ppi, &host);
++ if (rc)
++ return rc;
++ host->private_data = hpriv;
+
++ /* initialize controller */
+ if (port_flags & PIIX_FLAG_AHCI) {
+ u8 tmp;
+ pci_read_config_byte(pdev, PIIX_SCC, &tmp);
+@@ -1407,12 +1649,9 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ }
+ }
+
+- /* Initialize SATA map */
+ if (port_flags & ATA_FLAG_SATA) {
+- piix_init_sata_map(pdev, port_info,
+- piix_map_db_table[ent->driver_data]);
+- piix_init_pcs(pdev, port_info,
+- piix_map_db_table[ent->driver_data]);
++ piix_init_pcs(host, piix_map_db_table[ent->driver_data]);
++ piix_init_sidpr(host);
+ }
+
+ /* apply IOCFG bit18 quirk */
+@@ -1431,12 +1670,14 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ /* This writes into the master table but it does not
+ really matter for this errata as we will apply it to
+ all the PIIX devices on the board */
+- port_info[0].mwdma_mask = 0;
+- port_info[0].udma_mask = 0;
+- port_info[1].mwdma_mask = 0;
+- port_info[1].udma_mask = 0;
++ host->ports[0]->mwdma_mask = 0;
++ host->ports[0]->udma_mask = 0;
++ host->ports[1]->mwdma_mask = 0;
++ host->ports[1]->udma_mask = 0;
+ }
+- return ata_pci_init_one(pdev, ppi);
++
++ pci_set_master(pdev);
++ return ata_pci_activate_sff_host(host, ata_interrupt, &piix_sht);
+ }
+
+ static int __init piix_init(void)
+diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c
+index 7bf4bef..9e8ec19 100644
+--- a/drivers/ata/libata-acpi.c
++++ b/drivers/ata/libata-acpi.c
+@@ -442,40 +442,77 @@ static int ata_dev_get_GTF(struct ata_device *dev, struct ata_acpi_gtf **gtf)
+ }
+
+ /**
++ * ata_acpi_gtm_xfermode - determine xfermode from GTM parameter
++ * @dev: target device
++ * @gtm: GTM parameter to use
++ *
++ * Determine xfermask for @dev from @gtm.
++ *
++ * LOCKING:
++ * None.
++ *
++ * RETURNS:
++ * Determined xfermask.
++ */
++unsigned long ata_acpi_gtm_xfermask(struct ata_device *dev,
++ const struct ata_acpi_gtm *gtm)
++{
++ unsigned long xfer_mask = 0;
++ unsigned int type;
++ int unit;
++ u8 mode;
++
++ /* we always use the 0 slot for crap hardware */
++ unit = dev->devno;
++ if (!(gtm->flags & 0x10))
++ unit = 0;
++
++ /* PIO */
++ mode = ata_timing_cycle2mode(ATA_SHIFT_PIO, gtm->drive[unit].pio);
++ xfer_mask |= ata_xfer_mode2mask(mode);
++
++ /* See if we have MWDMA or UDMA data. We don't bother with
++ * MWDMA if UDMA is available as this means the BIOS set UDMA
++ * and our error changedown if it works is UDMA to PIO anyway.
++ */
++ if (!(gtm->flags & (1 << (2 * unit))))
++ type = ATA_SHIFT_MWDMA;
++ else
++ type = ATA_SHIFT_UDMA;
++
++ mode = ata_timing_cycle2mode(type, gtm->drive[unit].dma);
++ xfer_mask |= ata_xfer_mode2mask(mode);
++
++ return xfer_mask;
++}
++EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);
++
++/**
+ * ata_acpi_cbl_80wire - Check for 80 wire cable
+ * @ap: Port to check
++ * @gtm: GTM data to use
+ *
+- * Return 1 if the ACPI mode data for this port indicates the BIOS selected
+- * an 80wire mode.
++ * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
+ */
+-
+-int ata_acpi_cbl_80wire(struct ata_port *ap)
++int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+ {
+- const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
+- int valid = 0;
++ struct ata_device *dev;
+
+- if (!gtm)
+- return 0;
++ ata_link_for_each_dev(dev, &ap->link) {
++ unsigned long xfer_mask, udma_mask;
++
++ if (!ata_dev_enabled(dev))
++ continue;
++
++ xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
++ ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
++
++ if (udma_mask & ~ATA_UDMA_MASK_40C)
++ return 1;
++ }
+
+- /* Split timing, DMA enabled */
+- if ((gtm->flags & 0x11) == 0x11 && gtm->drive[0].dma < 55)
+- valid |= 1;
+- if ((gtm->flags & 0x14) == 0x14 && gtm->drive[1].dma < 55)
+- valid |= 2;
+- /* Shared timing, DMA enabled */
+- if ((gtm->flags & 0x11) == 0x01 && gtm->drive[0].dma < 55)
+- valid |= 1;
+- if ((gtm->flags & 0x14) == 0x04 && gtm->drive[0].dma < 55)
+- valid |= 2;
+-
+- /* Drive check */
+- if ((valid & 1) && ata_dev_enabled(&ap->link.device[0]))
+- return 1;
+- if ((valid & 2) && ata_dev_enabled(&ap->link.device[1]))
+- return 1;
+ return 0;
+ }
+-
+ EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
+
+ static void ata_acpi_gtf_to_tf(struct ata_device *dev,
+@@ -776,6 +813,36 @@ void ata_acpi_on_resume(struct ata_port *ap)
+ }
+
+ /**
++ * ata_acpi_set_state - set the port power state
++ * @ap: target ATA port
++ * @state: state, on/off
++ *
++ * This function executes the _PS0/_PS3 ACPI method to set the power state.
++ * ACPI spec requires _PS0 when IDE power on and _PS3 when power off
++ */
++void ata_acpi_set_state(struct ata_port *ap, pm_message_t state)
++{
++ struct ata_device *dev;
++
++ if (!ap->acpi_handle || (ap->flags & ATA_FLAG_ACPI_SATA))
++ return;
++
++	/* channel first and then drives for power on, and vice versa
++	   for power off */
++ if (state.event == PM_EVENT_ON)
++ acpi_bus_set_power(ap->acpi_handle, ACPI_STATE_D0);
++
++ ata_link_for_each_dev(dev, &ap->link) {
++ if (dev->acpi_handle && ata_dev_enabled(dev))
++ acpi_bus_set_power(dev->acpi_handle,
++ state.event == PM_EVENT_ON ?
++ ACPI_STATE_D0 : ACPI_STATE_D3);
++ }
++ if (state.event != PM_EVENT_ON)
++ acpi_bus_set_power(ap->acpi_handle, ACPI_STATE_D3);
++}
++
++/**
++ * ata_acpi_on_devcfg - ATA ACPI hook called on device configuration
+ * @dev: target ATA device
+ *
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 6380726..bdbd55a 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -119,6 +119,10 @@ int libata_noacpi = 0;
+ module_param_named(noacpi, libata_noacpi, int, 0444);
+ MODULE_PARM_DESC(noacpi, "Disables the use of ACPI in probe/suspend/resume when set");
+
++int libata_allow_tpm = 0;
++module_param_named(allow_tpm, libata_allow_tpm, int, 0444);
++MODULE_PARM_DESC(allow_tpm, "Permit the use of TPM commands");
++
+ MODULE_AUTHOR("Jeff Garzik");
+ MODULE_DESCRIPTION("Library module for ATA devices");
+ MODULE_LICENSE("GPL");
+@@ -450,9 +454,9 @@ int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
+ * RETURNS:
+ * Packed xfer_mask.
+ */
+-static unsigned int ata_pack_xfermask(unsigned int pio_mask,
+- unsigned int mwdma_mask,
+- unsigned int udma_mask)
++unsigned long ata_pack_xfermask(unsigned long pio_mask,
++ unsigned long mwdma_mask,
++ unsigned long udma_mask)
+ {
+ return ((pio_mask << ATA_SHIFT_PIO) & ATA_MASK_PIO) |
+ ((mwdma_mask << ATA_SHIFT_MWDMA) & ATA_MASK_MWDMA) |
+@@ -469,10 +473,8 @@ static unsigned int ata_pack_xfermask(unsigned int pio_mask,
+ * Unpack @xfer_mask into @pio_mask, @mwdma_mask and @udma_mask.
+ * Any NULL distination masks will be ignored.
+ */
+-static void ata_unpack_xfermask(unsigned int xfer_mask,
+- unsigned int *pio_mask,
+- unsigned int *mwdma_mask,
+- unsigned int *udma_mask)
++void ata_unpack_xfermask(unsigned long xfer_mask, unsigned long *pio_mask,
++ unsigned long *mwdma_mask, unsigned long *udma_mask)
+ {
+ if (pio_mask)
+ *pio_mask = (xfer_mask & ATA_MASK_PIO) >> ATA_SHIFT_PIO;
+@@ -486,9 +488,9 @@ static const struct ata_xfer_ent {
+ int shift, bits;
+ u8 base;
+ } ata_xfer_tbl[] = {
+- { ATA_SHIFT_PIO, ATA_BITS_PIO, XFER_PIO_0 },
+- { ATA_SHIFT_MWDMA, ATA_BITS_MWDMA, XFER_MW_DMA_0 },
+- { ATA_SHIFT_UDMA, ATA_BITS_UDMA, XFER_UDMA_0 },
++ { ATA_SHIFT_PIO, ATA_NR_PIO_MODES, XFER_PIO_0 },
++ { ATA_SHIFT_MWDMA, ATA_NR_MWDMA_MODES, XFER_MW_DMA_0 },
++ { ATA_SHIFT_UDMA, ATA_NR_UDMA_MODES, XFER_UDMA_0 },
+ { -1, },
+ };
+
+@@ -503,9 +505,9 @@ static const struct ata_xfer_ent {
+ * None.
+ *
+ * RETURNS:
+- * Matching XFER_* value, 0 if no match found.
++ * Matching XFER_* value, 0xff if no match found.
+ */
+-static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
++u8 ata_xfer_mask2mode(unsigned long xfer_mask)
+ {
+ int highbit = fls(xfer_mask) - 1;
+ const struct ata_xfer_ent *ent;
+@@ -513,7 +515,7 @@ static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
+ for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
+ if (highbit >= ent->shift && highbit < ent->shift + ent->bits)
+ return ent->base + highbit - ent->shift;
+- return 0;
++ return 0xff;
+ }
+
+ /**
+@@ -528,13 +530,14 @@ static u8 ata_xfer_mask2mode(unsigned int xfer_mask)
+ * RETURNS:
+ * Matching xfer_mask, 0 if no match found.
+ */
+-static unsigned int ata_xfer_mode2mask(u8 xfer_mode)
++unsigned long ata_xfer_mode2mask(u8 xfer_mode)
+ {
+ const struct ata_xfer_ent *ent;
+
+ for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
+ if (xfer_mode >= ent->base && xfer_mode < ent->base + ent->bits)
+- return 1 << (ent->shift + xfer_mode - ent->base);
++ return ((2 << (ent->shift + xfer_mode - ent->base)) - 1)
++ & ~((1 << ent->shift) - 1);
+ return 0;
+ }
+
+@@ -550,7 +553,7 @@ static unsigned int ata_xfer_mode2mask(u8 xfer_mode)
+ * RETURNS:
+ * Matching xfer_shift, -1 if no match found.
+ */
+-static int ata_xfer_mode2shift(unsigned int xfer_mode)
++int ata_xfer_mode2shift(unsigned long xfer_mode)
+ {
+ const struct ata_xfer_ent *ent;
+
+@@ -574,7 +577,7 @@ static int ata_xfer_mode2shift(unsigned int xfer_mode)
+ * Constant C string representing highest speed listed in
+ * @mode_mask, or the constant C string "<n/a>".
+ */
+-static const char *ata_mode_string(unsigned int xfer_mask)
++const char *ata_mode_string(unsigned long xfer_mask)
+ {
+ static const char * const xfer_mode_str[] = {
+ "PIO0",
+@@ -947,8 +950,8 @@ unsigned int ata_dev_try_classify(struct ata_device *dev, int present,
+ if (r_err)
+ *r_err = err;
+
+- /* see if device passed diags: if master then continue and warn later */
+- if (err == 0 && dev->devno == 0)
++ /* see if device passed diags: continue and warn later */
++ if (err == 0)
+ /* diagnostic fail : do nothing _YET_ */
+ dev->horkage |= ATA_HORKAGE_DIAGNOSTIC;
+ else if (err == 1)
+@@ -1286,48 +1289,6 @@ static int ata_hpa_resize(struct ata_device *dev)
+ }
+
+ /**
+- * ata_id_to_dma_mode - Identify DMA mode from id block
+- * @dev: device to identify
+- * @unknown: mode to assume if we cannot tell
+- *
+- * Set up the timing values for the device based upon the identify
+- * reported values for the DMA mode. This function is used by drivers
+- * which rely upon firmware configured modes, but wish to report the
+- * mode correctly when possible.
+- *
+- * In addition we emit similarly formatted messages to the default
+- * ata_dev_set_mode handler, in order to provide consistency of
+- * presentation.
+- */
+-
+-void ata_id_to_dma_mode(struct ata_device *dev, u8 unknown)
+-{
+- unsigned int mask;
+- u8 mode;
+-
+- /* Pack the DMA modes */
+- mask = ((dev->id[63] >> 8) << ATA_SHIFT_MWDMA) & ATA_MASK_MWDMA;
+- if (dev->id[53] & 0x04)
+- mask |= ((dev->id[88] >> 8) << ATA_SHIFT_UDMA) & ATA_MASK_UDMA;
+-
+- /* Select the mode in use */
+- mode = ata_xfer_mask2mode(mask);
+-
+- if (mode != 0) {
+- ata_dev_printk(dev, KERN_INFO, "configured for %s\n",
+- ata_mode_string(mask));
+- } else {
+- /* SWDMA perhaps ? */
+- mode = unknown;
+- ata_dev_printk(dev, KERN_INFO, "configured for DMA\n");
+- }
+-
+- /* Configure the device reporting */
+- dev->xfer_mode = mode;
+- dev->xfer_shift = ata_xfer_mode2shift(mode);
+-}
+-
+-/**
+ * ata_noop_dev_select - Select device 0/1 on ATA bus
+ * @ap: ATA channel to manipulate
+ * @device: ATA device (numbered from zero) to select
+@@ -1464,9 +1425,9 @@ static inline void ata_dump_id(const u16 *id)
+ * RETURNS:
+ * Computed xfermask
+ */
+-static unsigned int ata_id_xfermask(const u16 *id)
++unsigned long ata_id_xfermask(const u16 *id)
+ {
+- unsigned int pio_mask, mwdma_mask, udma_mask;
++ unsigned long pio_mask, mwdma_mask, udma_mask;
+
+ /* Usual case. Word 53 indicates word 64 is valid */
+ if (id[ATA_ID_FIELD_VALID] & (1 << 1)) {
+@@ -1519,7 +1480,7 @@ static unsigned int ata_id_xfermask(const u16 *id)
+ }
+
+ /**
+- * ata_port_queue_task - Queue port_task
++ * ata_pio_queue_task - Queue port_task
+ * @ap: The ata_port to queue port_task for
+ * @fn: workqueue function to be scheduled
+ * @data: data for @fn to use
+@@ -1531,16 +1492,15 @@ static unsigned int ata_id_xfermask(const u16 *id)
+ * one task is active at any given time.
+ *
+ * libata core layer takes care of synchronization between
+- * port_task and EH. ata_port_queue_task() may be ignored for EH
++ * port_task and EH. ata_pio_queue_task() may be ignored for EH
+ * synchronization.
+ *
+ * LOCKING:
+ * Inherited from caller.
+ */
+-void ata_port_queue_task(struct ata_port *ap, work_func_t fn, void *data,
+- unsigned long delay)
++static void ata_pio_queue_task(struct ata_port *ap, void *data,
++ unsigned long delay)
+ {
+- PREPARE_DELAYED_WORK(&ap->port_task, fn);
+ ap->port_task_data = data;
+
+ /* may fail if ata_port_flush_task() in progress */
+@@ -2090,7 +2050,7 @@ int ata_dev_configure(struct ata_device *dev)
+ struct ata_eh_context *ehc = &dev->link->eh_context;
+ int print_info = ehc->i.flags & ATA_EHI_PRINTINFO;
+ const u16 *id = dev->id;
+- unsigned int xfer_mask;
++ unsigned long xfer_mask;
+ char revbuf[7]; /* XYZ-99\0 */
+ char fwrevbuf[ATA_ID_FW_REV_LEN+1];
+ char modelbuf[ATA_ID_PROD_LEN+1];
+@@ -2161,8 +2121,14 @@ int ata_dev_configure(struct ata_device *dev)
+ "supports DRM functions and may "
+ "not be fully accessable.\n");
+ snprintf(revbuf, 7, "CFA");
+- } else
++ } else {
+ snprintf(revbuf, 7, "ATA-%d", ata_id_major_version(id));
++ /* Warn the user if the device has TPM extensions */
++ if (ata_id_has_tpm(id))
++ ata_dev_printk(dev, KERN_WARNING,
++ "supports DRM functions and may "
++ "not be fully accessable.\n");
++ }
+
+ dev->n_sectors = ata_id_n_sectors(id);
+
+@@ -2295,19 +2261,8 @@ int ata_dev_configure(struct ata_device *dev)
+ dev->flags |= ATA_DFLAG_DIPM;
+ }
+
+- if (dev->horkage & ATA_HORKAGE_DIAGNOSTIC) {
+- /* Let the user know. We don't want to disallow opens for
+- rescue purposes, or in case the vendor is just a blithering
+- idiot */
+- if (print_info) {
+- ata_dev_printk(dev, KERN_WARNING,
+-"Drive reports diagnostics failure. This may indicate a drive\n");
+- ata_dev_printk(dev, KERN_WARNING,
+-"fault or invalid emulation. Contact drive vendor for information.\n");
+- }
+- }
+-
+- /* limit bridge transfers to udma5, 200 sectors */
++ /* Limit PATA drive on SATA cable bridge transfers to udma5,
++ 200 sectors */
+ if (ata_dev_knobble(dev)) {
+ if (ata_msg_drv(ap) && print_info)
+ ata_dev_printk(dev, KERN_INFO,
+@@ -2336,6 +2291,21 @@ int ata_dev_configure(struct ata_device *dev)
+ if (ap->ops->dev_config)
+ ap->ops->dev_config(dev);
+
++ if (dev->horkage & ATA_HORKAGE_DIAGNOSTIC) {
++ /* Let the user know. We don't want to disallow opens for
++ rescue purposes, or in case the vendor is just a blithering
++ idiot. Do this after the dev_config call as some controllers
++ with buggy firmware may want to avoid reporting false device
++ bugs */
++
++ if (print_info) {
++ ata_dev_printk(dev, KERN_WARNING,
++"Drive reports diagnostics failure. This may indicate a drive\n");
++ ata_dev_printk(dev, KERN_WARNING,
++"fault or invalid emulation. Contact drive vendor for information.\n");
++ }
++ }
++
+ if (ata_msg_probe(ap))
+ ata_dev_printk(dev, KERN_DEBUG, "%s: EXIT, drv_stat = 0x%x\n",
+ __FUNCTION__, ata_chk_status(ap));
+@@ -2387,6 +2357,18 @@ int ata_cable_unknown(struct ata_port *ap)
+ }
+
+ /**
++ * ata_cable_ignore - return ignored PATA cable.
++ * @ap: port
++ *
++ * Helper method for drivers which don't use cable type to limit
++ * transfer mode.
++ */
++int ata_cable_ignore(struct ata_port *ap)
++{
++ return ATA_CBL_PATA_IGN;
++}
++
++/**
+ * ata_cable_sata - return SATA cable type
+ * @ap: port
+ *
+@@ -2781,38 +2763,33 @@ int sata_set_spd(struct ata_link *link)
+ */
+
+ static const struct ata_timing ata_timing[] = {
++/* { XFER_PIO_SLOW, 120, 290, 240, 960, 290, 240, 960, 0 }, */
++ { XFER_PIO_0, 70, 290, 240, 600, 165, 150, 600, 0 },
++ { XFER_PIO_1, 50, 290, 93, 383, 125, 100, 383, 0 },
++ { XFER_PIO_2, 30, 290, 40, 330, 100, 90, 240, 0 },
++ { XFER_PIO_3, 30, 80, 70, 180, 80, 70, 180, 0 },
++ { XFER_PIO_4, 25, 70, 25, 120, 70, 25, 120, 0 },
++ { XFER_PIO_5, 15, 65, 25, 100, 65, 25, 100, 0 },
++ { XFER_PIO_6, 10, 55, 20, 80, 55, 20, 80, 0 },
+
+- { XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
+- { XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
+- { XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
+- { XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
++ { XFER_SW_DMA_0, 120, 0, 0, 0, 480, 480, 960, 0 },
++ { XFER_SW_DMA_1, 90, 0, 0, 0, 240, 240, 480, 0 },
++ { XFER_SW_DMA_2, 60, 0, 0, 0, 120, 120, 240, 0 },
+
+- { XFER_MW_DMA_4, 25, 0, 0, 0, 55, 20, 80, 0 },
++ { XFER_MW_DMA_0, 60, 0, 0, 0, 215, 215, 480, 0 },
++ { XFER_MW_DMA_1, 45, 0, 0, 0, 80, 50, 150, 0 },
++ { XFER_MW_DMA_2, 25, 0, 0, 0, 70, 25, 120, 0 },
+ { XFER_MW_DMA_3, 25, 0, 0, 0, 65, 25, 100, 0 },
+- { XFER_UDMA_2, 0, 0, 0, 0, 0, 0, 0, 60 },
+- { XFER_UDMA_1, 0, 0, 0, 0, 0, 0, 0, 80 },
+- { XFER_UDMA_0, 0, 0, 0, 0, 0, 0, 0, 120 },
++ { XFER_MW_DMA_4, 25, 0, 0, 0, 55, 20, 80, 0 },
+
+ /* { XFER_UDMA_SLOW, 0, 0, 0, 0, 0, 0, 0, 150 }, */
+-
+- { XFER_MW_DMA_2, 25, 0, 0, 0, 70, 25, 120, 0 },
+- { XFER_MW_DMA_1, 45, 0, 0, 0, 80, 50, 150, 0 },
+- { XFER_MW_DMA_0, 60, 0, 0, 0, 215, 215, 480, 0 },
+-
+- { XFER_SW_DMA_2, 60, 0, 0, 0, 120, 120, 240, 0 },
+- { XFER_SW_DMA_1, 90, 0, 0, 0, 240, 240, 480, 0 },
+- { XFER_SW_DMA_0, 120, 0, 0, 0, 480, 480, 960, 0 },
+-
+- { XFER_PIO_6, 10, 55, 20, 80, 55, 20, 80, 0 },
+- { XFER_PIO_5, 15, 65, 25, 100, 65, 25, 100, 0 },
+- { XFER_PIO_4, 25, 70, 25, 120, 70, 25, 120, 0 },
+- { XFER_PIO_3, 30, 80, 70, 180, 80, 70, 180, 0 },
+-
+- { XFER_PIO_2, 30, 290, 40, 330, 100, 90, 240, 0 },
+- { XFER_PIO_1, 50, 290, 93, 383, 125, 100, 383, 0 },
+- { XFER_PIO_0, 70, 290, 240, 600, 165, 150, 600, 0 },
+-
+-/* { XFER_PIO_SLOW, 120, 290, 240, 960, 290, 240, 960, 0 }, */
++ { XFER_UDMA_0, 0, 0, 0, 0, 0, 0, 0, 120 },
++ { XFER_UDMA_1, 0, 0, 0, 0, 0, 0, 0, 80 },
++ { XFER_UDMA_2, 0, 0, 0, 0, 0, 0, 0, 60 },
++ { XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
++ { XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
++ { XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
++ { XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
+
+ { 0xFF }
+ };
+@@ -2845,14 +2822,16 @@ void ata_timing_merge(const struct ata_timing *a, const struct ata_timing *b,
+ if (what & ATA_TIMING_UDMA ) m->udma = max(a->udma, b->udma);
+ }
+
+-static const struct ata_timing *ata_timing_find_mode(unsigned short speed)
++const struct ata_timing *ata_timing_find_mode(u8 xfer_mode)
+ {
+- const struct ata_timing *t;
++ const struct ata_timing *t = ata_timing;
++
++ while (xfer_mode > t->mode)
++ t++;
+
+- for (t = ata_timing; t->mode != speed; t++)
+- if (t->mode == 0xFF)
+- return NULL;
+- return t;
++ if (xfer_mode == t->mode)
++ return t;
++ return NULL;
+ }
+
+ int ata_timing_compute(struct ata_device *adev, unsigned short speed,
+@@ -2927,6 +2906,57 @@ int ata_timing_compute(struct ata_device *adev, unsigned short speed,
+ }
+
+ /**
++ * ata_timing_cycle2mode - find xfer mode for the specified cycle duration
++ * @xfer_shift: ATA_SHIFT_* value for transfer type to examine.
++ * @cycle: cycle duration in ns
++ *
++ * Return matching xfer mode for @cycle. The returned mode is of
++ * the transfer type specified by @xfer_shift. If @cycle is too
++ * slow for @xfer_shift, 0xff is returned. If @cycle is faster
++ *	than the fastest known mode, the fastest mode is returned.
++ *
++ * LOCKING:
++ * None.
++ *
++ * RETURNS:
++ * Matching xfer_mode, 0xff if no match found.
++ */
++u8 ata_timing_cycle2mode(unsigned int xfer_shift, int cycle)
++{
++ u8 base_mode = 0xff, last_mode = 0xff;
++ const struct ata_xfer_ent *ent;
++ const struct ata_timing *t;
++
++ for (ent = ata_xfer_tbl; ent->shift >= 0; ent++)
++ if (ent->shift == xfer_shift)
++ base_mode = ent->base;
++
++ for (t = ata_timing_find_mode(base_mode);
++ t && ata_xfer_mode2shift(t->mode) == xfer_shift; t++) {
++ unsigned short this_cycle;
++
++ switch (xfer_shift) {
++ case ATA_SHIFT_PIO:
++ case ATA_SHIFT_MWDMA:
++ this_cycle = t->cycle;
++ break;
++ case ATA_SHIFT_UDMA:
++ this_cycle = t->udma;
++ break;
++ default:
++ return 0xff;
++ }
++
++ if (cycle > this_cycle)
++ break;
++
++ last_mode = t->mode;
++ }
++
++ return last_mode;
++}
++
++/**
+ * ata_down_xfermask_limit - adjust dev xfer masks downward
+ * @dev: Device to adjust xfer masks
+ * @sel: ATA_DNXFER_* selector
+@@ -2944,8 +2974,8 @@ int ata_timing_compute(struct ata_device *adev, unsigned short speed,
+ int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel)
+ {
+ char buf[32];
+- unsigned int orig_mask, xfer_mask;
+- unsigned int pio_mask, mwdma_mask, udma_mask;
++ unsigned long orig_mask, xfer_mask;
++ unsigned long pio_mask, mwdma_mask, udma_mask;
+ int quiet, highbit;
+
+ quiet = !!(sel & ATA_DNXFER_QUIET);
+@@ -3039,7 +3069,7 @@ static int ata_dev_set_mode(struct ata_device *dev)
+
+ /* Early MWDMA devices do DMA but don't allow DMA mode setting.
+ Don't fail an MWDMA0 set IFF the device indicates it is in MWDMA0 */
+- if (dev->xfer_shift == ATA_SHIFT_MWDMA &&
++ if (dev->xfer_shift == ATA_SHIFT_MWDMA &&
+ dev->dma_mode == XFER_MW_DMA_0 &&
+ (dev->id[63] >> 8) & 1)
+ err_mask &= ~AC_ERR_DEV;
+@@ -3089,7 +3119,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+
+ /* step 1: calculate xfer_mask */
+ ata_link_for_each_dev(dev, link) {
+- unsigned int pio_mask, dma_mask;
++ unsigned long pio_mask, dma_mask;
+ unsigned int mode_mask;
+
+ if (!ata_dev_enabled(dev))
+@@ -3115,7 +3145,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+ dev->dma_mode = ata_xfer_mask2mode(dma_mask);
+
+ found = 1;
+- if (dev->dma_mode)
++ if (dev->dma_mode != 0xff)
+ used_dma = 1;
+ }
+ if (!found)
+@@ -3126,7 +3156,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+ if (!ata_dev_enabled(dev))
+ continue;
+
+- if (!dev->pio_mode) {
++ if (dev->pio_mode == 0xff) {
+ ata_dev_printk(dev, KERN_WARNING, "no PIO support\n");
+ rc = -EINVAL;
+ goto out;
+@@ -3140,7 +3170,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+
+ /* step 3: set host DMA timings */
+ ata_link_for_each_dev(dev, link) {
+- if (!ata_dev_enabled(dev) || !dev->dma_mode)
++ if (!ata_dev_enabled(dev) || dev->dma_mode == 0xff)
+ continue;
+
+ dev->xfer_mode = dev->dma_mode;
+@@ -3173,31 +3203,6 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+ }
+
+ /**
+- * ata_set_mode - Program timings and issue SET FEATURES - XFER
+- * @link: link on which timings will be programmed
+- * @r_failed_dev: out paramter for failed device
+- *
+- * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If
+- * ata_set_mode() fails, pointer to the failing device is
+- * returned in @r_failed_dev.
+- *
+- * LOCKING:
+- * PCI/etc. bus probe sem.
+- *
+- * RETURNS:
+- * 0 on success, negative errno otherwise
+- */
+-int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
+-{
+- struct ata_port *ap = link->ap;
+-
+- /* has private set_mode? */
+- if (ap->ops->set_mode)
+- return ap->ops->set_mode(link, r_failed_dev);
+- return ata_do_set_mode(link, r_failed_dev);
+-}
+-
+-/**
+ * ata_tf_to_host - issue ATA taskfile to host controller
+ * @ap: port to which command is being issued
+ * @tf: ATA taskfile register set
+@@ -4363,7 +4368,14 @@ static unsigned int ata_dev_set_xfermode(struct ata_device *dev)
+ tf.feature = SETFEATURES_XFER;
+ tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE | ATA_TFLAG_POLLING;
+ tf.protocol = ATA_PROT_NODATA;
+- tf.nsect = dev->xfer_mode;
++ /* If we are using IORDY we must send the mode setting command */
++ if (ata_pio_need_iordy(dev))
++ tf.nsect = dev->xfer_mode;
++ /* If the device has IORDY and the controller does not - turn it off */
++ else if (ata_id_has_iordy(dev->id))
++ tf.nsect = 0x01;
++ else /* In the ancient relic department - skip all of this */
++ return 0;
+
+ err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
+
+@@ -4462,17 +4474,13 @@ static unsigned int ata_dev_init_params(struct ata_device *dev,
+ void ata_sg_clean(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+- struct scatterlist *sg = qc->__sg;
++ struct scatterlist *sg = qc->sg;
+ int dir = qc->dma_dir;
+ void *pad_buf = NULL;
+
+- WARN_ON(!(qc->flags & ATA_QCFLAG_DMAMAP));
+ WARN_ON(sg == NULL);
+
+- if (qc->flags & ATA_QCFLAG_SINGLE)
+- WARN_ON(qc->n_elem > 1);
+-
+- VPRINTK("unmapping %u sg elements\n", qc->n_elem);
++ VPRINTK("unmapping %u sg elements\n", qc->mapped_n_elem);
+
+ /* if we padded the buffer out to 32-bit bound, and data
+ * xfer direction is from-device, we must copy from the
+@@ -4481,31 +4489,20 @@ void ata_sg_clean(struct ata_queued_cmd *qc)
+ if (qc->pad_len && !(qc->tf.flags & ATA_TFLAG_WRITE))
+ pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
+
+- if (qc->flags & ATA_QCFLAG_SG) {
+- if (qc->n_elem)
+- dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
+- /* restore last sg */
+- sg_last(sg, qc->orig_n_elem)->length += qc->pad_len;
+- if (pad_buf) {
+- struct scatterlist *psg = &qc->pad_sgent;
+- void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
+- memcpy(addr + psg->offset, pad_buf, qc->pad_len);
+- kunmap_atomic(addr, KM_IRQ0);
+- }
+- } else {
+- if (qc->n_elem)
+- dma_unmap_single(ap->dev,
+- sg_dma_address(&sg[0]), sg_dma_len(&sg[0]),
+- dir);
+- /* restore sg */
+- sg->length += qc->pad_len;
+- if (pad_buf)
+- memcpy(qc->buf_virt + sg->length - qc->pad_len,
+- pad_buf, qc->pad_len);
++ if (qc->mapped_n_elem)
++ dma_unmap_sg(ap->dev, sg, qc->mapped_n_elem, dir);
++ /* restore last sg */
++ if (qc->last_sg)
++ *qc->last_sg = qc->saved_last_sg;
++ if (pad_buf) {
++ struct scatterlist *psg = &qc->extra_sg[1];
++ void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
++ memcpy(addr + psg->offset, pad_buf, qc->pad_len);
++ kunmap_atomic(addr, KM_IRQ0);
+ }
+
+ qc->flags &= ~ATA_QCFLAG_DMAMAP;
+- qc->__sg = NULL;
++ qc->sg = NULL;
+ }
+
+ /**
+@@ -4523,13 +4520,10 @@ static void ata_fill_sg(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct scatterlist *sg;
+- unsigned int idx;
++ unsigned int si, pi;
+
+- WARN_ON(qc->__sg == NULL);
+- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
+-
+- idx = 0;
+- ata_for_each_sg(sg, qc) {
++ pi = 0;
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u32 addr, offset;
+ u32 sg_len, len;
+
+@@ -4546,18 +4540,17 @@ static void ata_fill_sg(struct ata_queued_cmd *qc)
+ if ((offset + sg_len) > 0x10000)
+ len = 0x10000 - offset;
+
+- ap->prd[idx].addr = cpu_to_le32(addr);
+- ap->prd[idx].flags_len = cpu_to_le32(len & 0xffff);
+- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
++ ap->prd[pi].addr = cpu_to_le32(addr);
++ ap->prd[pi].flags_len = cpu_to_le32(len & 0xffff);
++ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len);
+
+- idx++;
++ pi++;
+ sg_len -= len;
+ addr += len;
+ }
+ }
+
+- if (idx)
+- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
++ ap->prd[pi - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
+ }
+
+ /**
+@@ -4577,13 +4570,10 @@ static void ata_fill_sg_dumb(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct scatterlist *sg;
+- unsigned int idx;
+-
+- WARN_ON(qc->__sg == NULL);
+- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
++ unsigned int si, pi;
+
+- idx = 0;
+- ata_for_each_sg(sg, qc) {
++ pi = 0;
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u32 addr, offset;
+ u32 sg_len, len, blen;
+
+@@ -4601,25 +4591,24 @@ static void ata_fill_sg_dumb(struct ata_queued_cmd *qc)
+ len = 0x10000 - offset;
+
+ blen = len & 0xffff;
+- ap->prd[idx].addr = cpu_to_le32(addr);
++ ap->prd[pi].addr = cpu_to_le32(addr);
+ if (blen == 0) {
+ /* Some PATA chipsets like the CS5530 can't
+ cope with 0x0000 meaning 64K as the spec says */
+- ap->prd[idx].flags_len = cpu_to_le32(0x8000);
++ ap->prd[pi].flags_len = cpu_to_le32(0x8000);
+ blen = 0x8000;
+- ap->prd[++idx].addr = cpu_to_le32(addr + 0x8000);
++ ap->prd[++pi].addr = cpu_to_le32(addr + 0x8000);
+ }
+- ap->prd[idx].flags_len = cpu_to_le32(blen);
+- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
++ ap->prd[pi].flags_len = cpu_to_le32(blen);
++ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len);
+
+- idx++;
++ pi++;
+ sg_len -= len;
+ addr += len;
+ }
+ }
+
+- if (idx)
+- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
++ ap->prd[pi - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
+ }
+
+ /**
+@@ -4669,8 +4658,8 @@ int ata_check_atapi_dma(struct ata_queued_cmd *qc)
+ */
+ static int atapi_qc_may_overflow(struct ata_queued_cmd *qc)
+ {
+- if (qc->tf.protocol != ATA_PROT_ATAPI &&
+- qc->tf.protocol != ATA_PROT_ATAPI_DMA)
++ if (qc->tf.protocol != ATAPI_PROT_PIO &&
++ qc->tf.protocol != ATAPI_PROT_DMA)
+ return 0;
+
+ if (qc->tf.flags & ATA_TFLAG_WRITE)
+@@ -4756,33 +4745,6 @@ void ata_dumb_qc_prep(struct ata_queued_cmd *qc)
+ void ata_noop_qc_prep(struct ata_queued_cmd *qc) { }
+
+ /**
+- * ata_sg_init_one - Associate command with memory buffer
+- * @qc: Command to be associated
+- * @buf: Memory buffer
+- * @buflen: Length of memory buffer, in bytes.
+- *
+- * Initialize the data-related elements of queued_cmd @qc
+- * to point to a single memory buffer, @buf of byte length @buflen.
+- *
+- * LOCKING:
+- * spin_lock_irqsave(host lock)
+- */
+-
+-void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
+-{
+- qc->flags |= ATA_QCFLAG_SINGLE;
+-
+- qc->__sg = &qc->sgent;
+- qc->n_elem = 1;
+- qc->orig_n_elem = 1;
+- qc->buf_virt = buf;
+- qc->nbytes = buflen;
+- qc->cursg = qc->__sg;
+-
+- sg_init_one(&qc->sgent, buf, buflen);
+-}
+-
+-/**
+ * ata_sg_init - Associate command with scatter-gather table.
+ * @qc: Command to be associated
+ * @sg: Scatter-gather table.
+@@ -4795,84 +4757,103 @@ void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
+ * LOCKING:
+ * spin_lock_irqsave(host lock)
+ */
+-
+ void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
+ unsigned int n_elem)
+ {
+- qc->flags |= ATA_QCFLAG_SG;
+- qc->__sg = sg;
++ qc->sg = sg;
+ qc->n_elem = n_elem;
+- qc->orig_n_elem = n_elem;
+- qc->cursg = qc->__sg;
++ qc->cursg = qc->sg;
+ }
+
+-/**
+- * ata_sg_setup_one - DMA-map the memory buffer associated with a command.
+- * @qc: Command with memory buffer to be mapped.
+- *
+- * DMA-map the memory buffer associated with queued_cmd @qc.
+- *
+- * LOCKING:
+- * spin_lock_irqsave(host lock)
+- *
+- * RETURNS:
+- * Zero on success, negative on error.
+- */
+-
+-static int ata_sg_setup_one(struct ata_queued_cmd *qc)
++static unsigned int ata_sg_setup_extra(struct ata_queued_cmd *qc,
++ unsigned int *n_elem_extra,
++ unsigned int *nbytes_extra)
+ {
+ struct ata_port *ap = qc->ap;
+- int dir = qc->dma_dir;
+- struct scatterlist *sg = qc->__sg;
+- dma_addr_t dma_address;
+- int trim_sg = 0;
++ unsigned int n_elem = qc->n_elem;
++ struct scatterlist *lsg, *copy_lsg = NULL, *tsg = NULL, *esg = NULL;
++
++ *n_elem_extra = 0;
++ *nbytes_extra = 0;
++
++ /* needs padding? */
++ qc->pad_len = qc->nbytes & 3;
++
++ if (likely(!qc->pad_len))
++ return n_elem;
++
++ /* locate last sg and save it */
++ lsg = sg_last(qc->sg, n_elem);
++ qc->last_sg = lsg;
++ qc->saved_last_sg = *lsg;
++
++ sg_init_table(qc->extra_sg, ARRAY_SIZE(qc->extra_sg));
+
+- /* we must lengthen transfers to end on a 32-bit boundary */
+- qc->pad_len = sg->length & 3;
+ if (qc->pad_len) {
++ struct scatterlist *psg = &qc->extra_sg[1];
+ void *pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
+- struct scatterlist *psg = &qc->pad_sgent;
++ unsigned int offset;
+
+ WARN_ON(qc->dev->class != ATA_DEV_ATAPI);
+
+ memset(pad_buf, 0, ATA_DMA_PAD_SZ);
+
+- if (qc->tf.flags & ATA_TFLAG_WRITE)
+- memcpy(pad_buf, qc->buf_virt + sg->length - qc->pad_len,
+- qc->pad_len);
++ /* psg->page/offset are used to copy to-be-written
++ * data in this function or read data in ata_sg_clean.
++ */
++ offset = lsg->offset + lsg->length - qc->pad_len;
++ sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT),
++ qc->pad_len, offset_in_page(offset));
++
++ if (qc->tf.flags & ATA_TFLAG_WRITE) {
++ void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
++ memcpy(pad_buf, addr + psg->offset, qc->pad_len);
++ kunmap_atomic(addr, KM_IRQ0);
++ }
+
+ sg_dma_address(psg) = ap->pad_dma + (qc->tag * ATA_DMA_PAD_SZ);
+ sg_dma_len(psg) = ATA_DMA_PAD_SZ;
+- /* trim sg */
+- sg->length -= qc->pad_len;
+- if (sg->length == 0)
+- trim_sg = 1;
+
+- DPRINTK("padding done, sg->length=%u pad_len=%u\n",
+- sg->length, qc->pad_len);
+- }
++ /* Trim the last sg entry and chain the original and
++ * padding sg lists.
++ *
++ * Because chaining consumes one sg entry, one extra
++ * sg entry is allocated and the last sg entry is
++ * copied to it if the length isn't zero after padded
++ * amount is removed.
++ *
++ * If the last sg entry is completely replaced by
++ * padding sg entry, the first sg entry is skipped
++ * while chaining.
++ */
++ lsg->length -= qc->pad_len;
++ if (lsg->length) {
++ copy_lsg = &qc->extra_sg[0];
++ tsg = &qc->extra_sg[0];
++ } else {
++ n_elem--;
++ tsg = &qc->extra_sg[1];
++ }
+
+- if (trim_sg) {
+- qc->n_elem--;
+- goto skip_map;
+- }
++ esg = &qc->extra_sg[1];
+
+- dma_address = dma_map_single(ap->dev, qc->buf_virt,
+- sg->length, dir);
+- if (dma_mapping_error(dma_address)) {
+- /* restore sg */
+- sg->length += qc->pad_len;
+- return -1;
++ (*n_elem_extra)++;
++ (*nbytes_extra) += 4 - qc->pad_len;
+ }
+
+- sg_dma_address(sg) = dma_address;
+- sg_dma_len(sg) = sg->length;
++ if (copy_lsg)
++ sg_set_page(copy_lsg, sg_page(lsg), lsg->length, lsg->offset);
+
+-skip_map:
+- DPRINTK("mapped buffer of %d bytes for %s\n", sg_dma_len(sg),
+- qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
++ sg_chain(lsg, 1, tsg);
++ sg_mark_end(esg);
+
+- return 0;
++ /* sglist can't start with chaining sg entry, fast forward */
++ if (qc->sg == lsg) {
++ qc->sg = tsg;
++ qc->cursg = tsg;
++ }
++
++ return n_elem;
+ }
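The 32-bit padding arithmetic above (pad_len = length & 3, with 4 - pad_len bytes contributed by the pad buffer through nbytes_extra) can be sketched in isolation. The helper below is hypothetical and only models the byte accounting, not the scatterlist manipulation:

```c
#include <assert.h>

/* Model of the 32-bit boundary padding used above: pad_len is the
 * number of trailing bytes past the last 32-bit boundary, and the
 * pad buffer contributes 4 - pad_len extra bytes whenever padding is
 * needed at all.  Hypothetical helper, not part of libata. */
static unsigned int pad_extra_bytes(unsigned int length)
{
	unsigned int pad_len = length & 3;	/* trailing bytes mod 4 */

	return pad_len ? 4 - pad_len : 0;	/* bytes added by the pad sg */
}
```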
+
+ /**
+@@ -4888,75 +4869,30 @@ skip_map:
+ * Zero on success, negative on error.
+ *
+ */
+-
+ static int ata_sg_setup(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+- struct scatterlist *sg = qc->__sg;
+- struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem);
+- int n_elem, pre_n_elem, dir, trim_sg = 0;
++ unsigned int n_elem, n_elem_extra, nbytes_extra;
+
+ VPRINTK("ENTER, ata%u\n", ap->print_id);
+- WARN_ON(!(qc->flags & ATA_QCFLAG_SG));
+
+- /* we must lengthen transfers to end on a 32-bit boundary */
+- qc->pad_len = lsg->length & 3;
+- if (qc->pad_len) {
+- void *pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
+- struct scatterlist *psg = &qc->pad_sgent;
+- unsigned int offset;
+-
+- WARN_ON(qc->dev->class != ATA_DEV_ATAPI);
++ n_elem = ata_sg_setup_extra(qc, &n_elem_extra, &nbytes_extra);
+
+- memset(pad_buf, 0, ATA_DMA_PAD_SZ);
+-
+- /*
+- * psg->page/offset are used to copy to-be-written
+- * data in this function or read data in ata_sg_clean.
+- */
+- offset = lsg->offset + lsg->length - qc->pad_len;
+- sg_init_table(psg, 1);
+- sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT),
+- qc->pad_len, offset_in_page(offset));
+-
+- if (qc->tf.flags & ATA_TFLAG_WRITE) {
+- void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
+- memcpy(pad_buf, addr + psg->offset, qc->pad_len);
+- kunmap_atomic(addr, KM_IRQ0);
++ if (n_elem) {
++ n_elem = dma_map_sg(ap->dev, qc->sg, n_elem, qc->dma_dir);
++ if (n_elem < 1) {
++ /* restore last sg */
++ if (qc->last_sg)
++ *qc->last_sg = qc->saved_last_sg;
++ return -1;
+ }
+-
+- sg_dma_address(psg) = ap->pad_dma + (qc->tag * ATA_DMA_PAD_SZ);
+- sg_dma_len(psg) = ATA_DMA_PAD_SZ;
+- /* trim last sg */
+- lsg->length -= qc->pad_len;
+- if (lsg->length == 0)
+- trim_sg = 1;
+-
+- DPRINTK("padding done, sg[%d].length=%u pad_len=%u\n",
+- qc->n_elem - 1, lsg->length, qc->pad_len);
+- }
+-
+- pre_n_elem = qc->n_elem;
+- if (trim_sg && pre_n_elem)
+- pre_n_elem--;
+-
+- if (!pre_n_elem) {
+- n_elem = 0;
+- goto skip_map;
+- }
+-
+- dir = qc->dma_dir;
+- n_elem = dma_map_sg(ap->dev, sg, pre_n_elem, dir);
+- if (n_elem < 1) {
+- /* restore last sg */
+- lsg->length += qc->pad_len;
+- return -1;
++ DPRINTK("%d sg elements mapped\n", n_elem);
+ }
+
+- DPRINTK("%d sg elements mapped\n", n_elem);
+-
+-skip_map:
+- qc->n_elem = n_elem;
++ qc->n_elem = qc->mapped_n_elem = n_elem;
++ qc->n_elem += n_elem_extra;
++ qc->nbytes += nbytes_extra;
++ qc->flags |= ATA_QCFLAG_DMAMAP;
+
+ return 0;
+ }
+@@ -4985,63 +4921,77 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
+
+ /**
+ * ata_data_xfer - Transfer data by PIO
+- * @adev: device to target
++ * @dev: device to target
+ * @buf: data buffer
+ * @buflen: buffer length
+- * @write_data: read/write
++ * @rw: read/write
+ *
+ * Transfer data from/to the device data register by PIO.
+ *
+ * LOCKING:
+ * Inherited from caller.
++ *
++ * RETURNS:
++ * Bytes consumed.
+ */
+-void ata_data_xfer(struct ata_device *adev, unsigned char *buf,
+- unsigned int buflen, int write_data)
++unsigned int ata_data_xfer(struct ata_device *dev, unsigned char *buf,
++ unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
++ struct ata_port *ap = dev->link->ap;
++ void __iomem *data_addr = ap->ioaddr.data_addr;
+ unsigned int words = buflen >> 1;
+
+ /* Transfer multiple of 2 bytes */
+- if (write_data)
+- iowrite16_rep(ap->ioaddr.data_addr, buf, words);
++ if (rw == READ)
++ ioread16_rep(data_addr, buf, words);
+ else
+- ioread16_rep(ap->ioaddr.data_addr, buf, words);
++ iowrite16_rep(data_addr, buf, words);
+
+ /* Transfer trailing 1 byte, if any. */
+ if (unlikely(buflen & 0x01)) {
+- u16 align_buf[1] = { 0 };
++ __le16 align_buf[1] = { 0 };
+ unsigned char *trailing_buf = buf + buflen - 1;
+
+- if (write_data) {
+- memcpy(align_buf, trailing_buf, 1);
+- iowrite16(le16_to_cpu(align_buf[0]), ap->ioaddr.data_addr);
+- } else {
+- align_buf[0] = cpu_to_le16(ioread16(ap->ioaddr.data_addr));
++ if (rw == READ) {
++ align_buf[0] = cpu_to_le16(ioread16(data_addr));
+ memcpy(trailing_buf, align_buf, 1);
++ } else {
++ memcpy(align_buf, trailing_buf, 1);
++ iowrite16(le16_to_cpu(align_buf[0]), data_addr);
+ }
++ words++;
+ }
++
++ return words << 1;
+ }
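The reworked ata_data_xfer() now returns the number of bytes consumed; since data moves through the data register as 16-bit words, an odd trailing byte is bounced through a full 16-bit access and the count rounds up. A minimal sketch of that return-value computation (hypothetical helper, not a libata function):

```c
#include <assert.h>

/* Return-value computation of the new ata_data_xfer(): data moves as
 * 16-bit words, and an odd trailing byte still consumes a full word
 * on the wire, so the byte count rounds up to an even number. */
static unsigned int pio_bytes_consumed(unsigned int buflen)
{
	unsigned int words = buflen >> 1;

	if (buflen & 0x01)	/* trailing byte uses one more word */
		words++;

	return words << 1;
}
```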
+
+ /**
+ * ata_data_xfer_noirq - Transfer data by PIO
+- * @adev: device to target
++ * @dev: device to target
+ * @buf: data buffer
+ * @buflen: buffer length
+- * @write_data: read/write
++ * @rw: read/write
+ *
+ * Transfer data from/to the device data register by PIO. Do the
+ * transfer with interrupts disabled.
+ *
+ * LOCKING:
+ * Inherited from caller.
++ *
++ * RETURNS:
++ * Bytes consumed.
+ */
+-void ata_data_xfer_noirq(struct ata_device *adev, unsigned char *buf,
+- unsigned int buflen, int write_data)
++unsigned int ata_data_xfer_noirq(struct ata_device *dev, unsigned char *buf,
++ unsigned int buflen, int rw)
+ {
+ unsigned long flags;
++ unsigned int consumed;
++
+ local_irq_save(flags);
+- ata_data_xfer(adev, buf, buflen, write_data);
++ consumed = ata_data_xfer(dev, buf, buflen, rw);
+ local_irq_restore(flags);
++
++ return consumed;
+ }
+
+
+@@ -5152,13 +5102,13 @@ static void atapi_send_cdb(struct ata_port *ap, struct ata_queued_cmd *qc)
+ ata_altstatus(ap); /* flush */
+
+ switch (qc->tf.protocol) {
+- case ATA_PROT_ATAPI:
++ case ATAPI_PROT_PIO:
+ ap->hsm_task_state = HSM_ST;
+ break;
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_NODATA:
+ ap->hsm_task_state = HSM_ST_LAST;
+ break;
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ ap->hsm_task_state = HSM_ST_LAST;
+ /* initiate bmdma */
+ ap->ops->bmdma_start(qc);
+@@ -5300,12 +5250,15 @@ static void atapi_pio_bytes(struct ata_queued_cmd *qc)
+ bytes = (bc_hi << 8) | bc_lo;
+
+ /* shall be cleared to zero, indicating xfer of data */
+- if (ireason & (1 << 0))
++ if (unlikely(ireason & (1 << 0)))
+ goto err_out;
+
+ /* make sure transfer direction matches expected */
+ i_write = ((ireason & (1 << 1)) == 0) ? 1 : 0;
+- if (do_write != i_write)
++ if (unlikely(do_write != i_write))
++ goto err_out;
++
++ if (unlikely(!bytes))
+ goto err_out;
+
+ VPRINTK("ata%u: xfering %d bytes\n", ap->print_id, bytes);
+@@ -5341,7 +5294,7 @@ static inline int ata_hsm_ok_in_wq(struct ata_port *ap, struct ata_queued_cmd *q
+ (qc->tf.flags & ATA_TFLAG_WRITE))
+ return 1;
+
+- if (is_atapi_taskfile(&qc->tf) &&
++ if (ata_is_atapi(qc->tf.protocol) &&
+ !(qc->dev->flags & ATA_DFLAG_CDB_INTR))
+ return 1;
+ }
+@@ -5506,7 +5459,7 @@ fsm_start:
+
+ case HSM_ST:
+ /* complete command or read/write the data register */
+- if (qc->tf.protocol == ATA_PROT_ATAPI) {
++ if (qc->tf.protocol == ATAPI_PROT_PIO) {
+ /* ATAPI PIO protocol */
+ if ((status & ATA_DRQ) == 0) {
+ /* No more data to transfer or device error.
+@@ -5664,7 +5617,7 @@ fsm_start:
+ msleep(2);
+ status = ata_busy_wait(ap, ATA_BUSY, 10);
+ if (status & ATA_BUSY) {
+- ata_port_queue_task(ap, ata_pio_task, qc, ATA_SHORT_PAUSE);
++ ata_pio_queue_task(ap, qc, ATA_SHORT_PAUSE);
+ return;
+ }
+ }
+@@ -5805,6 +5758,22 @@ static void fill_result_tf(struct ata_queued_cmd *qc)
+ ap->ops->tf_read(ap, &qc->result_tf);
+ }
+
++static void ata_verify_xfer(struct ata_queued_cmd *qc)
++{
++ struct ata_device *dev = qc->dev;
++
++ if (ata_tag_internal(qc->tag))
++ return;
++
++ if (ata_is_nodata(qc->tf.protocol))
++ return;
++
++ if ((dev->mwdma_mask || dev->udma_mask) && ata_is_pio(qc->tf.protocol))
++ return;
++
++ dev->flags &= ~ATA_DFLAG_DUBIOUS_XFER;
++}
++
+ /**
+ * ata_qc_complete - Complete an active ATA command
+ * @qc: Command to complete
+@@ -5876,6 +5845,9 @@ void ata_qc_complete(struct ata_queued_cmd *qc)
+ break;
+ }
+
++ if (unlikely(dev->flags & ATA_DFLAG_DUBIOUS_XFER))
++ ata_verify_xfer(qc);
++
+ __ata_qc_complete(qc);
+ } else {
+ if (qc->flags & ATA_QCFLAG_EH_SCHEDULED)
+@@ -5938,30 +5910,6 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active,
+ return nr_done;
+ }
+
+-static inline int ata_should_dma_map(struct ata_queued_cmd *qc)
+-{
+- struct ata_port *ap = qc->ap;
+-
+- switch (qc->tf.protocol) {
+- case ATA_PROT_NCQ:
+- case ATA_PROT_DMA:
+- case ATA_PROT_ATAPI_DMA:
+- return 1;
+-
+- case ATA_PROT_ATAPI:
+- case ATA_PROT_PIO:
+- if (ap->flags & ATA_FLAG_PIO_DMA)
+- return 1;
+-
+- /* fall through */
+-
+- default:
+- return 0;
+- }
+-
+- /* never reached */
+-}
+-
+ /**
+ * ata_qc_issue - issue taskfile to device
+ * @qc: command to issue to device
+@@ -5978,6 +5926,7 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct ata_link *link = qc->dev->link;
++ u8 prot = qc->tf.protocol;
+
+ /* Make sure only one non-NCQ command is outstanding. The
+ * check is skipped for old EH because it reuses active qc to
+@@ -5985,7 +5934,7 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
+ */
+ WARN_ON(ap->ops->error_handler && ata_tag_valid(link->active_tag));
+
+- if (qc->tf.protocol == ATA_PROT_NCQ) {
++ if (ata_is_ncq(prot)) {
+ WARN_ON(link->sactive & (1 << qc->tag));
+
+ if (!link->sactive)
+@@ -6001,17 +5950,18 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
+ qc->flags |= ATA_QCFLAG_ACTIVE;
+ ap->qc_active |= 1 << qc->tag;
+
+- if (ata_should_dma_map(qc)) {
+- if (qc->flags & ATA_QCFLAG_SG) {
+- if (ata_sg_setup(qc))
+- goto sg_err;
+- } else if (qc->flags & ATA_QCFLAG_SINGLE) {
+- if (ata_sg_setup_one(qc))
+- goto sg_err;
+- }
+- } else {
+- qc->flags &= ~ATA_QCFLAG_DMAMAP;
+- }
++ /* We guarantee to LLDs that they will have at least one
++ * non-zero sg if the command is a data command.
++ */
++ BUG_ON(ata_is_data(prot) && (!qc->sg || !qc->n_elem || !qc->nbytes));
++
++ /* ata_sg_setup() may update nbytes */
++ qc->raw_nbytes = qc->nbytes;
++
++ if (ata_is_dma(prot) || (ata_is_pio(prot) &&
++ (ap->flags & ATA_FLAG_PIO_DMA)))
++ if (ata_sg_setup(qc))
++ goto sg_err;
+
+ /* if device is sleeping, schedule softreset and abort the link */
+ if (unlikely(qc->dev->flags & ATA_DFLAG_SLEEPING)) {
+@@ -6029,7 +5979,6 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
+ return;
+
+ sg_err:
+- qc->flags &= ~ATA_QCFLAG_DMAMAP;
+ qc->err_mask |= AC_ERR_SYSTEM;
+ err:
+ ata_qc_complete(qc);
+@@ -6064,11 +6013,11 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+ switch (qc->tf.protocol) {
+ case ATA_PROT_PIO:
+ case ATA_PROT_NODATA:
+- case ATA_PROT_ATAPI:
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_PIO:
++ case ATAPI_PROT_NODATA:
+ qc->tf.flags |= ATA_TFLAG_POLLING;
+ break;
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ if (qc->dev->flags & ATA_DFLAG_CDB_INTR)
+ /* see ata_dma_blacklisted() */
+ BUG();
+@@ -6091,7 +6040,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+ ap->hsm_task_state = HSM_ST_LAST;
+
+ if (qc->tf.flags & ATA_TFLAG_POLLING)
+- ata_port_queue_task(ap, ata_pio_task, qc, 0);
++ ata_pio_queue_task(ap, qc, 0);
+
+ break;
+
+@@ -6113,7 +6062,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+ if (qc->tf.flags & ATA_TFLAG_WRITE) {
+ /* PIO data out protocol */
+ ap->hsm_task_state = HSM_ST_FIRST;
+- ata_port_queue_task(ap, ata_pio_task, qc, 0);
++ ata_pio_queue_task(ap, qc, 0);
+
+ /* always send first data block using
+ * the ata_pio_task() codepath.
+@@ -6123,7 +6072,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+ ap->hsm_task_state = HSM_ST;
+
+ if (qc->tf.flags & ATA_TFLAG_POLLING)
+- ata_port_queue_task(ap, ata_pio_task, qc, 0);
++ ata_pio_queue_task(ap, qc, 0);
+
+ /* if polling, ata_pio_task() handles the rest.
+ * otherwise, interrupt handler takes over from here.
+@@ -6132,8 +6081,8 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+
+ break;
+
+- case ATA_PROT_ATAPI:
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_PIO:
++ case ATAPI_PROT_NODATA:
+ if (qc->tf.flags & ATA_TFLAG_POLLING)
+ ata_qc_set_polling(qc);
+
+@@ -6144,10 +6093,10 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+ /* send cdb by polling if no cdb interrupt */
+ if ((!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) ||
+ (qc->tf.flags & ATA_TFLAG_POLLING))
+- ata_port_queue_task(ap, ata_pio_task, qc, 0);
++ ata_pio_queue_task(ap, qc, 0);
+ break;
+
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING);
+
+ ap->ops->tf_load(ap, &qc->tf); /* load tf registers */
+@@ -6156,7 +6105,7 @@ unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc)
+
+ /* send cdb by polling if no cdb interrupt */
+ if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
+- ata_port_queue_task(ap, ata_pio_task, qc, 0);
++ ata_pio_queue_task(ap, qc, 0);
+ break;
+
+ default:
+@@ -6200,15 +6149,15 @@ inline unsigned int ata_host_intr(struct ata_port *ap,
+ */
+
+ /* Check the ATA_DFLAG_CDB_INTR flag is enough here.
+- * The flag was turned on only for atapi devices.
+- * No need to check is_atapi_taskfile(&qc->tf) again.
++ * The flag was turned on only for atapi devices. No
++ * need to check ata_is_atapi(qc->tf.protocol) again.
+ */
+ if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
+ goto idle_irq;
+ break;
+ case HSM_ST_LAST:
+ if (qc->tf.protocol == ATA_PROT_DMA ||
+- qc->tf.protocol == ATA_PROT_ATAPI_DMA) {
++ qc->tf.protocol == ATAPI_PROT_DMA) {
+ /* check status of DMA engine */
+ host_stat = ap->ops->bmdma_status(ap);
+ VPRINTK("ata%u: host_stat 0x%X\n",
+@@ -6250,7 +6199,7 @@ inline unsigned int ata_host_intr(struct ata_port *ap,
+ ata_hsm_move(ap, qc, status, 0);
+
+ if (unlikely(qc->err_mask) && (qc->tf.protocol == ATA_PROT_DMA ||
+- qc->tf.protocol == ATA_PROT_ATAPI_DMA))
++ qc->tf.protocol == ATAPI_PROT_DMA))
+ ata_ehi_push_desc(ehi, "BMDMA stat 0x%x", host_stat);
+
+ return 1; /* irq handled */
+@@ -6772,7 +6721,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
+ ap->msg_enable = ATA_MSG_DRV | ATA_MSG_ERR | ATA_MSG_WARN;
+ #endif
+
+- INIT_DELAYED_WORK(&ap->port_task, NULL);
++ INIT_DELAYED_WORK(&ap->port_task, ata_pio_task);
+ INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
+ INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
+ INIT_LIST_HEAD(&ap->eh_done_q);
+@@ -7589,7 +7538,6 @@ EXPORT_SYMBOL_GPL(ata_host_register);
+ EXPORT_SYMBOL_GPL(ata_host_activate);
+ EXPORT_SYMBOL_GPL(ata_host_detach);
+ EXPORT_SYMBOL_GPL(ata_sg_init);
+-EXPORT_SYMBOL_GPL(ata_sg_init_one);
+ EXPORT_SYMBOL_GPL(ata_hsm_move);
+ EXPORT_SYMBOL_GPL(ata_qc_complete);
+ EXPORT_SYMBOL_GPL(ata_qc_complete_multiple);
+@@ -7601,6 +7549,13 @@ EXPORT_SYMBOL_GPL(ata_std_dev_select);
+ EXPORT_SYMBOL_GPL(sata_print_link_status);
+ EXPORT_SYMBOL_GPL(ata_tf_to_fis);
+ EXPORT_SYMBOL_GPL(ata_tf_from_fis);
++EXPORT_SYMBOL_GPL(ata_pack_xfermask);
++EXPORT_SYMBOL_GPL(ata_unpack_xfermask);
++EXPORT_SYMBOL_GPL(ata_xfer_mask2mode);
++EXPORT_SYMBOL_GPL(ata_xfer_mode2mask);
++EXPORT_SYMBOL_GPL(ata_xfer_mode2shift);
++EXPORT_SYMBOL_GPL(ata_mode_string);
++EXPORT_SYMBOL_GPL(ata_id_xfermask);
+ EXPORT_SYMBOL_GPL(ata_check_status);
+ EXPORT_SYMBOL_GPL(ata_altstatus);
+ EXPORT_SYMBOL_GPL(ata_exec_command);
+@@ -7643,7 +7598,6 @@ EXPORT_SYMBOL_GPL(ata_wait_register);
+ EXPORT_SYMBOL_GPL(ata_busy_sleep);
+ EXPORT_SYMBOL_GPL(ata_wait_after_reset);
+ EXPORT_SYMBOL_GPL(ata_wait_ready);
+-EXPORT_SYMBOL_GPL(ata_port_queue_task);
+ EXPORT_SYMBOL_GPL(ata_scsi_ioctl);
+ EXPORT_SYMBOL_GPL(ata_scsi_queuecmd);
+ EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
+@@ -7662,18 +7616,20 @@ EXPORT_SYMBOL_GPL(ata_host_resume);
+ #endif /* CONFIG_PM */
+ EXPORT_SYMBOL_GPL(ata_id_string);
+ EXPORT_SYMBOL_GPL(ata_id_c_string);
+-EXPORT_SYMBOL_GPL(ata_id_to_dma_mode);
+ EXPORT_SYMBOL_GPL(ata_scsi_simulate);
+
+ EXPORT_SYMBOL_GPL(ata_pio_need_iordy);
++EXPORT_SYMBOL_GPL(ata_timing_find_mode);
+ EXPORT_SYMBOL_GPL(ata_timing_compute);
+ EXPORT_SYMBOL_GPL(ata_timing_merge);
++EXPORT_SYMBOL_GPL(ata_timing_cycle2mode);
+
+ #ifdef CONFIG_PCI
+ EXPORT_SYMBOL_GPL(pci_test_config_bits);
+ EXPORT_SYMBOL_GPL(ata_pci_init_sff_host);
+ EXPORT_SYMBOL_GPL(ata_pci_init_bmdma);
+ EXPORT_SYMBOL_GPL(ata_pci_prepare_sff_host);
++EXPORT_SYMBOL_GPL(ata_pci_activate_sff_host);
+ EXPORT_SYMBOL_GPL(ata_pci_init_one);
+ EXPORT_SYMBOL_GPL(ata_pci_remove_one);
+ #ifdef CONFIG_PM
+@@ -7715,4 +7671,5 @@ EXPORT_SYMBOL_GPL(ata_dev_try_classify);
+ EXPORT_SYMBOL_GPL(ata_cable_40wire);
+ EXPORT_SYMBOL_GPL(ata_cable_80wire);
+ EXPORT_SYMBOL_GPL(ata_cable_unknown);
++EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 21a81cd..4e31071 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -46,9 +46,26 @@
+ #include "libata.h"
+
+ enum {
++ /* speed down verdicts */
+ ATA_EH_SPDN_NCQ_OFF = (1 << 0),
+ ATA_EH_SPDN_SPEED_DOWN = (1 << 1),
+ ATA_EH_SPDN_FALLBACK_TO_PIO = (1 << 2),
++ ATA_EH_SPDN_KEEP_ERRORS = (1 << 3),
++
++ /* error flags */
++ ATA_EFLAG_IS_IO = (1 << 0),
++ ATA_EFLAG_DUBIOUS_XFER = (1 << 1),
++
++ /* error categories */
++ ATA_ECAT_NONE = 0,
++ ATA_ECAT_ATA_BUS = 1,
++ ATA_ECAT_TOUT_HSM = 2,
++ ATA_ECAT_UNK_DEV = 3,
++ ATA_ECAT_DUBIOUS_NONE = 4,
++ ATA_ECAT_DUBIOUS_ATA_BUS = 5,
++ ATA_ECAT_DUBIOUS_TOUT_HSM = 6,
++ ATA_ECAT_DUBIOUS_UNK_DEV = 7,
++ ATA_ECAT_NR = 8,
+ };
+
+ /* Waiting in ->prereset can never be reliable. It's sometimes nice
+@@ -213,12 +230,13 @@ void ata_port_pbar_desc(struct ata_port *ap, int bar, ssize_t offset,
+ if (offset < 0)
+ ata_port_desc(ap, "%s %s%llu at 0x%llx", name, type, len, start);
+ else
+- ata_port_desc(ap, "%s 0x%llx", name, start + offset);
++ ata_port_desc(ap, "%s 0x%llx", name,
++ start + (unsigned long long)offset);
+ }
+
+ #endif /* CONFIG_PCI */
+
+-static void ata_ering_record(struct ata_ering *ering, int is_io,
++static void ata_ering_record(struct ata_ering *ering, unsigned int eflags,
+ unsigned int err_mask)
+ {
+ struct ata_ering_entry *ent;
+@@ -229,11 +247,20 @@ static void ata_ering_record(struct ata_ering *ering, int is_io,
+ ering->cursor %= ATA_ERING_SIZE;
+
+ ent = &ering->ring[ering->cursor];
+- ent->is_io = is_io;
++ ent->eflags = eflags;
+ ent->err_mask = err_mask;
+ ent->timestamp = get_jiffies_64();
+ }
+
++static struct ata_ering_entry *ata_ering_top(struct ata_ering *ering)
++{
++ struct ata_ering_entry *ent = &ering->ring[ering->cursor];
++
++ if (ent->err_mask)
++ return ent;
++ return NULL;
++}
++
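The new ata_ering_top() pairs with the existing ata_ering_record() cursor logic: record() advances the cursor modulo the ring size before writing, so top() simply reads the slot at the cursor and treats a zero err_mask as empty. A standalone model, with illustrative names and an arbitrary ring size:

```c
#include <assert.h>
#include <string.h>

/* Minimal model of the ering cursor logic above.  Names, the entry
 * layout, and RING_SIZE are illustrative, not the libata definitions. */
#define RING_SIZE 8

struct ent { unsigned int err_mask; };

struct ring {
	int cursor;
	struct ent ring[RING_SIZE];
};

static void ring_record(struct ring *r, unsigned int err_mask)
{
	r->cursor++;
	r->cursor %= RING_SIZE;		/* wrap around the fixed ring */
	r->ring[r->cursor].err_mask = err_mask;
}

static struct ent *ring_top(struct ring *r)
{
	struct ent *ent = &r->ring[r->cursor];

	return ent->err_mask ? ent : (struct ent *)0;
}
```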
+ static void ata_ering_clear(struct ata_ering *ering)
+ {
+ memset(ering, 0, sizeof(*ering));
+@@ -445,9 +472,20 @@ void ata_scsi_error(struct Scsi_Host *host)
+ spin_lock_irqsave(ap->lock, flags);
+
+ __ata_port_for_each_link(link, ap) {
++ struct ata_eh_context *ehc = &link->eh_context;
++ struct ata_device *dev;
++
+ memset(&link->eh_context, 0, sizeof(link->eh_context));
+ link->eh_context.i = link->eh_info;
+ memset(&link->eh_info, 0, sizeof(link->eh_info));
++
++ ata_link_for_each_dev(dev, link) {
++ int devno = dev->devno;
++
++ ehc->saved_xfer_mode[devno] = dev->xfer_mode;
++ if (ata_ncq_enabled(dev))
++ ehc->saved_ncq_enabled |= 1 << devno;
++ }
+ }
+
+ ap->pflags |= ATA_PFLAG_EH_IN_PROGRESS;
+@@ -1260,10 +1298,10 @@ static unsigned int atapi_eh_request_sense(struct ata_queued_cmd *qc)
+
+ /* is it pointless to prefer PIO for "safety reasons"? */
+ if (ap->flags & ATA_FLAG_PIO_DMA) {
+- tf.protocol = ATA_PROT_ATAPI_DMA;
++ tf.protocol = ATAPI_PROT_DMA;
+ tf.feature |= ATAPI_PKT_DMA;
+ } else {
+- tf.protocol = ATA_PROT_ATAPI;
++ tf.protocol = ATAPI_PROT_PIO;
+ tf.lbam = SCSI_SENSE_BUFFERSIZE;
+ tf.lbah = 0;
+ }
+@@ -1451,20 +1489,29 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
+ return action;
+ }
+
+-static int ata_eh_categorize_error(int is_io, unsigned int err_mask)
++static int ata_eh_categorize_error(unsigned int eflags, unsigned int err_mask,
++ int *xfer_ok)
+ {
++ int base = 0;
++
++ if (!(eflags & ATA_EFLAG_DUBIOUS_XFER))
++ *xfer_ok = 1;
++
++ if (!*xfer_ok)
++ base = ATA_ECAT_DUBIOUS_NONE;
++
+ if (err_mask & AC_ERR_ATA_BUS)
+- return 1;
++ return base + ATA_ECAT_ATA_BUS;
+
+ if (err_mask & AC_ERR_TIMEOUT)
+- return 2;
++ return base + ATA_ECAT_TOUT_HSM;
+
+- if (is_io) {
++ if (eflags & ATA_EFLAG_IS_IO) {
+ if (err_mask & AC_ERR_HSM)
+- return 2;
++ return base + ATA_ECAT_TOUT_HSM;
+ if ((err_mask &
+ (AC_ERR_DEV|AC_ERR_MEDIA|AC_ERR_INVALID)) == AC_ERR_DEV)
+- return 3;
++ return base + ATA_ECAT_UNK_DEV;
+ }
+
+ return 0;
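ata_eh_categorize_error() now folds in the DUBIOUS state: until an entry with verified data transfer is seen, categories are shifted by ATA_ECAT_DUBIOUS_NONE into the DUBIOUS_* range. A reduced sketch with only the ATA_BUS bucket (constants mirror the enum above; the helper itself is illustrative):

```c
#include <assert.h>

/* Reduced model of the DUBIOUS base offset: once an entry without
 * DUBIOUS_XFER is seen, *xfer_ok latches and later entries use the
 * plain categories; before that, categories shift into DUBIOUS_*. */
enum {
	ECAT_NONE = 0, ECAT_ATA_BUS = 1,
	ECAT_DUBIOUS_NONE = 4,
};
#define EFLAG_DUBIOUS_XFER	(1 << 1)
#define ERR_ATA_BUS		(1 << 0)

static int categorize(unsigned int eflags, unsigned int err_mask,
		      int *xfer_ok)
{
	int base = 0;

	if (!(eflags & EFLAG_DUBIOUS_XFER))
		*xfer_ok = 1;
	if (!*xfer_ok)
		base = ECAT_DUBIOUS_NONE;	/* shift into DUBIOUS_* range */

	if (err_mask & ERR_ATA_BUS)
		return base + ECAT_ATA_BUS;
	return ECAT_NONE;
}
```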
+@@ -1472,18 +1519,22 @@ static int ata_eh_categorize_error(int is_io, unsigned int err_mask)
+
+ struct speed_down_verdict_arg {
+ u64 since;
+- int nr_errors[4];
++ int xfer_ok;
++ int nr_errors[ATA_ECAT_NR];
+ };
+
+ static int speed_down_verdict_cb(struct ata_ering_entry *ent, void *void_arg)
+ {
+ struct speed_down_verdict_arg *arg = void_arg;
+- int cat = ata_eh_categorize_error(ent->is_io, ent->err_mask);
++ int cat;
+
+ if (ent->timestamp < arg->since)
+ return -1;
+
++ cat = ata_eh_categorize_error(ent->eflags, ent->err_mask,
++ &arg->xfer_ok);
+ arg->nr_errors[cat]++;
++
+ return 0;
+ }
+
+@@ -1495,22 +1546,48 @@ static int speed_down_verdict_cb(struct ata_ering_entry *ent, void *void_arg)
+ * whether NCQ needs to be turned off, transfer speed should be
+ * stepped down, or falling back to PIO is necessary.
+ *
+- * Cat-1 is ATA_BUS error for any command.
++ * ECAT_ATA_BUS : ATA_BUS error for any command
++ *
++ * ECAT_TOUT_HSM : TIMEOUT for any command or HSM violation for
++ * IO commands
++ *
++ * ECAT_UNK_DEV : Unknown DEV error for IO commands
++ *
++ * ECAT_DUBIOUS_* : Identical to above three but occurred while
++ * data transfer hasn't been verified.
++ *
++ * Verdicts are
++ *
++ * NCQ_OFF : Turn off NCQ.
++ *
++ * SPEED_DOWN : Speed down transfer speed but don't fall back
++ * to PIO.
++ *
++ * FALLBACK_TO_PIO : Fall back to PIO.
++ *
++ * Even if multiple verdicts are returned, only one action is
++ * taken per error. An action triggered by non-DUBIOUS errors
++ * clears ering, while one triggered by DUBIOUS_* errors doesn't.
++ * This is to expedite speed down decisions right after device is
++ * initially configured.
+ *
+- * Cat-2 is TIMEOUT for any command or HSM violation for known
+- * supported commands.
++ * The following are speed down rules.  #1 and #2 deal with
++ * DUBIOUS errors.
+ *
+- * Cat-3 is is unclassified DEV error for known supported
+- * command.
++ * 1. If more than one DUBIOUS_ATA_BUS or DUBIOUS_TOUT_HSM errors
++ * occurred during last 5 mins, SPEED_DOWN and FALLBACK_TO_PIO.
+ *
+- * NCQ needs to be turned off if there have been more than 3
+- * Cat-2 + Cat-3 errors during last 10 minutes.
++ * 2. If more than one DUBIOUS_TOUT_HSM or DUBIOUS_UNK_DEV errors
++ * occurred during last 5 mins, NCQ_OFF.
+ *
+- * Speed down is necessary if there have been more than 3 Cat-1 +
+- * Cat-2 errors or 10 Cat-3 errors during last 10 minutes.
++ * 3. If more than 8 ATA_BUS, TOUT_HSM or UNK_DEV errors
++ *    occurred during last 5 mins, FALLBACK_TO_PIO
+ *
+- * Falling back to PIO mode is necessary if there have been more
+- * than 10 Cat-1 + Cat-2 + Cat-3 errors during last 5 minutes.
++ * 4. If more than 3 TOUT_HSM or UNK_DEV errors occurred
++ * during last 10 mins, NCQ_OFF.
++ *
++ * 5. If more than 3 ATA_BUS or TOUT_HSM errors, or more than 6
++ * UNK_DEV errors occurred during last 10 mins, SPEED_DOWN.
+ *
+ * LOCKING:
+ * Inherited from caller.
+@@ -1525,23 +1602,38 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
+ struct speed_down_verdict_arg arg;
+ unsigned int verdict = 0;
+
+- /* scan past 10 mins of error history */
++ /* scan past 5 mins of error history */
+ memset(&arg, 0, sizeof(arg));
+- arg.since = j64 - min(j64, j10mins);
++ arg.since = j64 - min(j64, j5mins);
+ ata_ering_map(&dev->ering, speed_down_verdict_cb, &arg);
+
+- if (arg.nr_errors[2] + arg.nr_errors[3] > 3)
+- verdict |= ATA_EH_SPDN_NCQ_OFF;
+- if (arg.nr_errors[1] + arg.nr_errors[2] > 3 || arg.nr_errors[3] > 10)
+- verdict |= ATA_EH_SPDN_SPEED_DOWN;
++ if (arg.nr_errors[ATA_ECAT_DUBIOUS_ATA_BUS] +
++ arg.nr_errors[ATA_ECAT_DUBIOUS_TOUT_HSM] > 1)
++ verdict |= ATA_EH_SPDN_SPEED_DOWN |
++ ATA_EH_SPDN_FALLBACK_TO_PIO | ATA_EH_SPDN_KEEP_ERRORS;
+
+- /* scan past 3 mins of error history */
++ if (arg.nr_errors[ATA_ECAT_DUBIOUS_TOUT_HSM] +
++ arg.nr_errors[ATA_ECAT_DUBIOUS_UNK_DEV] > 1)
++ verdict |= ATA_EH_SPDN_NCQ_OFF | ATA_EH_SPDN_KEEP_ERRORS;
++
++ if (arg.nr_errors[ATA_ECAT_ATA_BUS] +
++ arg.nr_errors[ATA_ECAT_TOUT_HSM] +
++ arg.nr_errors[ATA_ECAT_UNK_DEV] > 6)
++ verdict |= ATA_EH_SPDN_FALLBACK_TO_PIO;
++
++ /* scan past 10 mins of error history */
+ memset(&arg, 0, sizeof(arg));
+- arg.since = j64 - min(j64, j5mins);
++ arg.since = j64 - min(j64, j10mins);
+ ata_ering_map(&dev->ering, speed_down_verdict_cb, &arg);
+
+- if (arg.nr_errors[1] + arg.nr_errors[2] + arg.nr_errors[3] > 10)
+- verdict |= ATA_EH_SPDN_FALLBACK_TO_PIO;
++ if (arg.nr_errors[ATA_ECAT_TOUT_HSM] +
++ arg.nr_errors[ATA_ECAT_UNK_DEV] > 3)
++ verdict |= ATA_EH_SPDN_NCQ_OFF;
++
++ if (arg.nr_errors[ATA_ECAT_ATA_BUS] +
++ arg.nr_errors[ATA_ECAT_TOUT_HSM] > 3 ||
++ arg.nr_errors[ATA_ECAT_UNK_DEV] > 6)
++ verdict |= ATA_EH_SPDN_SPEED_DOWN;
+
+ return verdict;
+ }
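One of the verdict rules can be modeled on its own: the 5-minute scan sets FALLBACK_TO_PIO once the combined ATA_BUS, TOUT_HSM and UNK_DEV counts exceed the threshold the code compares against (> 6). A hypothetical reduction of that single rule, taking the per-category counts directly:

```c
#include <assert.h>

#define SPDN_FALLBACK_TO_PIO	(1u << 2)	/* mirrors ATA_EH_SPDN_FALLBACK_TO_PIO */

/* Reduction of one speed-down rule above: FALLBACK_TO_PIO when the
 * 5-minute ATA_BUS + TOUT_HSM + UNK_DEV error count exceeds 6. */
static unsigned int fallback_verdict(int ata_bus, int tout_hsm, int unk_dev)
{
	unsigned int verdict = 0;

	if (ata_bus + tout_hsm + unk_dev > 6)
		verdict |= SPDN_FALLBACK_TO_PIO;

	return verdict;
}
```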
+@@ -1549,7 +1641,7 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
+ /**
+ * ata_eh_speed_down - record error and speed down if necessary
+ * @dev: Failed device
+- * @is_io: Did the device fail during normal IO?
++ * @eflags: mask of ATA_EFLAG_* flags
+ * @err_mask: err_mask of the error
+ *
+ * Record error and examine error history to determine whether
+@@ -1563,18 +1655,20 @@ static unsigned int ata_eh_speed_down_verdict(struct ata_device *dev)
+ * RETURNS:
+ * Determined recovery action.
+ */
+-static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
+- unsigned int err_mask)
++static unsigned int ata_eh_speed_down(struct ata_device *dev,
++ unsigned int eflags, unsigned int err_mask)
+ {
++ struct ata_link *link = dev->link;
++ int xfer_ok = 0;
+ unsigned int verdict;
+ unsigned int action = 0;
+
+ /* don't bother if Cat-0 error */
+- if (ata_eh_categorize_error(is_io, err_mask) == 0)
++ if (ata_eh_categorize_error(eflags, err_mask, &xfer_ok) == 0)
+ return 0;
+
+ /* record error and determine whether speed down is necessary */
+- ata_ering_record(&dev->ering, is_io, err_mask);
++ ata_ering_record(&dev->ering, eflags, err_mask);
+ verdict = ata_eh_speed_down_verdict(dev);
+
+ /* turn off NCQ? */
+@@ -1590,7 +1684,7 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
+ /* speed down? */
+ if (verdict & ATA_EH_SPDN_SPEED_DOWN) {
+ /* speed down SATA link speed if possible */
+- if (sata_down_spd_limit(dev->link) == 0) {
++ if (sata_down_spd_limit(link) == 0) {
+ action |= ATA_EH_HARDRESET;
+ goto done;
+ }
+@@ -1618,10 +1712,10 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
+ }
+
+ /* Fall back to PIO? Slowing down to PIO is meaningless for
+- * SATA. Consider it only for PATA.
++ * SATA ATA devices. Consider it only for PATA and SATAPI.
+ */
+ if ((verdict & ATA_EH_SPDN_FALLBACK_TO_PIO) && (dev->spdn_cnt >= 2) &&
+- (dev->link->ap->cbl != ATA_CBL_SATA) &&
++ (link->ap->cbl != ATA_CBL_SATA || dev->class == ATA_DEV_ATAPI) &&
+ (dev->xfer_shift != ATA_SHIFT_PIO)) {
+ if (ata_down_xfermask_limit(dev, ATA_DNXFER_FORCE_PIO) == 0) {
+ dev->spdn_cnt = 0;
+@@ -1633,7 +1727,8 @@ static unsigned int ata_eh_speed_down(struct ata_device *dev, int is_io,
+ return 0;
+ done:
+ /* device has been slowed down, blow error history */
+- ata_ering_clear(&dev->ering);
++ if (!(verdict & ATA_EH_SPDN_KEEP_ERRORS))
++ ata_ering_clear(&dev->ering);
+ return action;
+ }
+
+@@ -1653,8 +1748,8 @@ static void ata_eh_link_autopsy(struct ata_link *link)
+ struct ata_port *ap = link->ap;
+ struct ata_eh_context *ehc = &link->eh_context;
+ struct ata_device *dev;
+- unsigned int all_err_mask = 0;
+- int tag, is_io = 0;
++ unsigned int all_err_mask = 0, eflags = 0;
++ int tag;
+ u32 serror;
+ int rc;
+
+@@ -1713,15 +1808,15 @@ static void ata_eh_link_autopsy(struct ata_link *link)
+ ehc->i.dev = qc->dev;
+ all_err_mask |= qc->err_mask;
+ if (qc->flags & ATA_QCFLAG_IO)
+- is_io = 1;
++ eflags |= ATA_EFLAG_IS_IO;
+ }
+
+ /* enforce default EH actions */
+ if (ap->pflags & ATA_PFLAG_FROZEN ||
+ all_err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT))
+ ehc->i.action |= ATA_EH_SOFTRESET;
+- else if ((is_io && all_err_mask) ||
+- (!is_io && (all_err_mask & ~AC_ERR_DEV)))
++ else if (((eflags & ATA_EFLAG_IS_IO) && all_err_mask) ||
++ (!(eflags & ATA_EFLAG_IS_IO) && (all_err_mask & ~AC_ERR_DEV)))
+ ehc->i.action |= ATA_EH_REVALIDATE;
+
+ /* If we have offending qcs and the associated failed device,
+@@ -1743,8 +1838,11 @@ static void ata_eh_link_autopsy(struct ata_link *link)
+ ata_dev_enabled(link->device))))
+ dev = link->device;
+
+- if (dev)
+- ehc->i.action |= ata_eh_speed_down(dev, is_io, all_err_mask);
++ if (dev) {
++ if (dev->flags & ATA_DFLAG_DUBIOUS_XFER)
++ eflags |= ATA_EFLAG_DUBIOUS_XFER;
++ ehc->i.action |= ata_eh_speed_down(dev, eflags, all_err_mask);
++ }
+
+ DPRINTK("EXIT\n");
+ }
+@@ -1880,8 +1978,8 @@ static void ata_eh_link_report(struct ata_link *link)
+ [ATA_PROT_PIO] = "pio",
+ [ATA_PROT_DMA] = "dma",
+ [ATA_PROT_NCQ] = "ncq",
+- [ATA_PROT_ATAPI] = "pio",
+- [ATA_PROT_ATAPI_DMA] = "dma",
++ [ATAPI_PROT_PIO] = "pio",
++ [ATAPI_PROT_DMA] = "dma",
+ };
+
+ snprintf(data_buf, sizeof(data_buf), " %s %u %s",
+@@ -1889,7 +1987,7 @@ static void ata_eh_link_report(struct ata_link *link)
+ dma_str[qc->dma_dir]);
+ }
+
+- if (is_atapi_taskfile(&qc->tf))
++ if (ata_is_atapi(qc->tf.protocol))
+ snprintf(cdb_buf, sizeof(cdb_buf),
+ "cdb %02x %02x %02x %02x %02x %02x %02x %02x "
+ "%02x %02x %02x %02x %02x %02x %02x %02x\n ",
+@@ -2329,6 +2427,58 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
+ return rc;
+ }
+
++/**
++ * ata_set_mode - Program timings and issue SET FEATURES - XFER
++ * @link: link on which timings will be programmed
++ * @r_failed_dev: out paramter for failed device
++ *
++ * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If
++ * ata_set_mode() fails, pointer to the failing device is
++ * returned in @r_failed_dev.
++ *
++ * LOCKING:
++ * PCI/etc. bus probe sem.
++ *
++ * RETURNS:
++ * 0 on success, negative errno otherwise
++ */
++int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
++{
++ struct ata_port *ap = link->ap;
++ struct ata_device *dev;
++ int rc;
++
++ /* if data transfer is verified, clear DUBIOUS_XFER on ering top */
++ ata_link_for_each_dev(dev, link) {
++ if (!(dev->flags & ATA_DFLAG_DUBIOUS_XFER)) {
++ struct ata_ering_entry *ent;
++
++ ent = ata_ering_top(&dev->ering);
++ if (ent)
++ ent->eflags &= ~ATA_EFLAG_DUBIOUS_XFER;
++ }
++ }
++
++ /* has private set_mode? */
++ if (ap->ops->set_mode)
++ rc = ap->ops->set_mode(link, r_failed_dev);
++ else
++ rc = ata_do_set_mode(link, r_failed_dev);
++
++ /* if transfer mode has changed, set DUBIOUS_XFER on device */
++ ata_link_for_each_dev(dev, link) {
++ struct ata_eh_context *ehc = &link->eh_context;
++ u8 saved_xfer_mode = ehc->saved_xfer_mode[dev->devno];
++ u8 saved_ncq = !!(ehc->saved_ncq_enabled & (1 << dev->devno));
++
++ if (dev->xfer_mode != saved_xfer_mode ||
++ ata_ncq_enabled(dev) != saved_ncq)
++ dev->flags |= ATA_DFLAG_DUBIOUS_XFER;
++ }
++
++ return rc;
++}
++
+ static int ata_link_nr_enabled(struct ata_link *link)
+ {
+ struct ata_device *dev;
+@@ -2375,6 +2525,24 @@ static int ata_eh_skip_recovery(struct ata_link *link)
+ return 1;
+ }
+
++static int ata_eh_schedule_probe(struct ata_device *dev)
++{
++ struct ata_eh_context *ehc = &dev->link->eh_context;
++
++ if (!(ehc->i.probe_mask & (1 << dev->devno)) ||
++ (ehc->did_probe_mask & (1 << dev->devno)))
++ return 0;
++
++ ata_eh_detach_dev(dev);
++ ata_dev_init(dev);
++ ehc->did_probe_mask |= (1 << dev->devno);
++ ehc->i.action |= ATA_EH_SOFTRESET;
++ ehc->saved_xfer_mode[dev->devno] = 0;
++ ehc->saved_ncq_enabled &= ~(1 << dev->devno);
++
++ return 1;
++}
++
+ static int ata_eh_handle_dev_fail(struct ata_device *dev, int err)
+ {
+ struct ata_eh_context *ehc = &dev->link->eh_context;
+@@ -2406,16 +2574,9 @@ static int ata_eh_handle_dev_fail(struct ata_device *dev, int err)
+ if (ata_link_offline(dev->link))
+ ata_eh_detach_dev(dev);
+
+- /* probe if requested */
+- if ((ehc->i.probe_mask & (1 << dev->devno)) &&
+- !(ehc->did_probe_mask & (1 << dev->devno))) {
+- ata_eh_detach_dev(dev);
+- ata_dev_init(dev);
+-
++ /* schedule probe if necessary */
++ if (ata_eh_schedule_probe(dev))
+ ehc->tries[dev->devno] = ATA_EH_DEV_TRIES;
+- ehc->did_probe_mask |= (1 << dev->devno);
+- ehc->i.action |= ATA_EH_SOFTRESET;
+- }
+
+ return 1;
+ } else {
+@@ -2492,14 +2653,9 @@ int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset,
+ if (dev->flags & ATA_DFLAG_DETACH)
+ ata_eh_detach_dev(dev);
+
+- if (!ata_dev_enabled(dev) &&
+- ((ehc->i.probe_mask & (1 << dev->devno)) &&
+- !(ehc->did_probe_mask & (1 << dev->devno)))) {
+- ata_eh_detach_dev(dev);
+- ata_dev_init(dev);
+- ehc->did_probe_mask |= (1 << dev->devno);
+- ehc->i.action |= ATA_EH_SOFTRESET;
+- }
++ /* schedule probe if necessary */
++ if (!ata_dev_enabled(dev))
++ ata_eh_schedule_probe(dev);
+ }
+ }
+
+@@ -2747,6 +2903,7 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
+ if (ap->ops->port_suspend)
+ rc = ap->ops->port_suspend(ap, ap->pm_mesg);
+
++ ata_acpi_set_state(ap, PMSG_SUSPEND);
+ out:
+ /* report result */
+ spin_lock_irqsave(ap->lock, flags);
+@@ -2792,6 +2949,8 @@ static void ata_eh_handle_port_resume(struct ata_port *ap)
+
+ WARN_ON(!(ap->pflags & ATA_PFLAG_SUSPENDED));
+
++ ata_acpi_set_state(ap, PMSG_ON);
++
+ if (ap->ops->port_resume)
+ rc = ap->ops->port_resume(ap);
+
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 14daf48..c02c490 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -517,7 +517,7 @@ static struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev,
+ qc->scsicmd = cmd;
+ qc->scsidone = done;
+
+- qc->__sg = scsi_sglist(cmd);
++ qc->sg = scsi_sglist(cmd);
+ qc->n_elem = scsi_sg_count(cmd);
+ } else {
+ cmd->result = (DID_OK << 16) | (QUEUE_FULL << 1);
+@@ -839,7 +839,14 @@ static void ata_scsi_dev_config(struct scsi_device *sdev,
+ if (dev->class == ATA_DEV_ATAPI) {
+ struct request_queue *q = sdev->request_queue;
+ blk_queue_max_hw_segments(q, q->max_hw_segments - 1);
+- }
++
++ /* set the min alignment */
++ blk_queue_update_dma_alignment(sdev->request_queue,
++ ATA_DMA_PAD_SZ - 1);
++ } else
++ /* ATA devices must be sector aligned */
++ blk_queue_update_dma_alignment(sdev->request_queue,
++ ATA_SECT_SIZE - 1);
+
+ if (dev->class == ATA_DEV_ATA)
+ sdev->manage_start_stop = 1;
+@@ -878,7 +885,7 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
+ if (dev)
+ ata_scsi_dev_config(sdev, dev);
+
+- return 0; /* scsi layer doesn't check return value, sigh */
++ return 0;
+ }
+
+ /**
+@@ -2210,7 +2217,7 @@ unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf,
+
+ /* sector size */
+ ATA_SCSI_RBUF_SET(6, ATA_SECT_SIZE >> 8);
+- ATA_SCSI_RBUF_SET(7, ATA_SECT_SIZE);
++ ATA_SCSI_RBUF_SET(7, ATA_SECT_SIZE & 0xff);
+ } else {
+ /* sector count, 64-bit */
+ ATA_SCSI_RBUF_SET(0, last_lba >> (8 * 7));
+@@ -2224,7 +2231,7 @@ unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf,
+
+ /* sector size */
+ ATA_SCSI_RBUF_SET(10, ATA_SECT_SIZE >> 8);
+- ATA_SCSI_RBUF_SET(11, ATA_SECT_SIZE);
++ ATA_SCSI_RBUF_SET(11, ATA_SECT_SIZE & 0xff);
+ }
+
+ return 0;
+@@ -2331,7 +2338,7 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
+ DPRINTK("ATAPI request sense\n");
+
+ /* FIXME: is this needed? */
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+
+ ap->ops->tf_read(ap, &qc->tf);
+
+@@ -2341,7 +2348,9 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
+
+ ata_qc_reinit(qc);
+
+- ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
++ /* setup sg table and init transfer direction */
++ sg_init_one(&qc->sgent, cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
++ ata_sg_init(qc, &qc->sgent, 1);
+ qc->dma_dir = DMA_FROM_DEVICE;
+
+ memset(&qc->cdb, 0, qc->dev->cdb_len);
+@@ -2352,10 +2361,10 @@ static void atapi_request_sense(struct ata_queued_cmd *qc)
+ qc->tf.command = ATA_CMD_PACKET;
+
+ if (ata_pio_use_silly(ap)) {
+- qc->tf.protocol = ATA_PROT_ATAPI_DMA;
++ qc->tf.protocol = ATAPI_PROT_DMA;
+ qc->tf.feature |= ATAPI_PKT_DMA;
+ } else {
+- qc->tf.protocol = ATA_PROT_ATAPI;
++ qc->tf.protocol = ATAPI_PROT_PIO;
+ qc->tf.lbam = SCSI_SENSE_BUFFERSIZE;
+ qc->tf.lbah = 0;
+ }
+@@ -2526,12 +2535,12 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
+ if (using_pio || nodata) {
+ /* no data, or PIO data xfer */
+ if (nodata)
+- qc->tf.protocol = ATA_PROT_ATAPI_NODATA;
++ qc->tf.protocol = ATAPI_PROT_NODATA;
+ else
+- qc->tf.protocol = ATA_PROT_ATAPI;
++ qc->tf.protocol = ATAPI_PROT_PIO;
+ } else {
+ /* DMA data xfer */
+- qc->tf.protocol = ATA_PROT_ATAPI_DMA;
++ qc->tf.protocol = ATAPI_PROT_DMA;
+ qc->tf.feature |= ATAPI_PKT_DMA;
+
+ if (atapi_dmadir && (scmd->sc_data_direction != DMA_TO_DEVICE))
+@@ -2690,6 +2699,24 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
+ if ((tf->protocol = ata_scsi_map_proto(cdb[1])) == ATA_PROT_UNKNOWN)
+ goto invalid_fld;
+
++ /*
++ * Filter TPM commands by default. These provide an
++ * essentially uncontrolled encrypted "back door" between
++ * applications and the disk. Set libata.allow_tpm=1 if you
++ * have a real reason for wanting to use them. This ensures
++ * that installed software cannot easily mess stuff up without
++ * user intent. DVR type users will probably ship with this enabled
++ * for movie content management.
++ *
++ * Note that for ATA8 we can issue a DCS change and DCS freeze lock
++ * for this and should do so in future, but that is not sufficient as
++ * DCS is an optional feature set. Thus we also do the software filter
++ * so that we comply with the TC consortium stated goal that the user
++ * can turn off TC features of their system.
++ */
++ if (tf->command >= 0x5C && tf->command <= 0x5F && !libata_allow_tpm)
++ goto invalid_fld;
++
+ /* We may not issue DMA commands if no DMA mode is set */
+ if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0)
+ goto invalid_fld;
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index b7ac80b..60cd4b1 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -147,7 +147,9 @@ void ata_exec_command(struct ata_port *ap, const struct ata_taskfile *tf)
+ * @tf: ATA taskfile register set for storing input
+ *
+ * Reads ATA taskfile registers for currently-selected device
+- * into @tf.
++ * into @tf. Assumes the device has a fully SFF compliant task file
++ * layout and behaviour. If your device does not (e.g. has a different
++ * status method) then you will need to provide a replacement tf_read.
+ *
+ * LOCKING:
+ * Inherited from caller.
+@@ -156,7 +158,7 @@ void ata_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
+ {
+ struct ata_ioports *ioaddr = &ap->ioaddr;
+
+- tf->command = ata_chk_status(ap);
++ tf->command = ata_check_status(ap);
+ tf->feature = ioread8(ioaddr->error_addr);
+ tf->nsect = ioread8(ioaddr->nsect_addr);
+ tf->lbal = ioread8(ioaddr->lbal_addr);
+@@ -415,7 +417,7 @@ void ata_bmdma_drive_eh(struct ata_port *ap, ata_prereset_fn_t prereset,
+ ap->hsm_task_state = HSM_ST_IDLE;
+
+ if (qc && (qc->tf.protocol == ATA_PROT_DMA ||
+- qc->tf.protocol == ATA_PROT_ATAPI_DMA)) {
++ qc->tf.protocol == ATAPI_PROT_DMA)) {
+ u8 host_stat;
+
+ host_stat = ap->ops->bmdma_status(ap);
+@@ -549,7 +551,7 @@ int ata_pci_init_bmdma(struct ata_host *host)
+ return rc;
+
+ /* request and iomap DMA region */
+- rc = pcim_iomap_regions(pdev, 1 << 4, DRV_NAME);
++ rc = pcim_iomap_regions(pdev, 1 << 4, dev_driver_string(gdev));
+ if (rc) {
+ dev_printk(KERN_ERR, gdev, "failed to request/iomap BAR4\n");
+ return -ENOMEM;
+@@ -619,7 +621,8 @@ int ata_pci_init_sff_host(struct ata_host *host)
+ continue;
+ }
+
+- rc = pcim_iomap_regions(pdev, 0x3 << base, DRV_NAME);
++ rc = pcim_iomap_regions(pdev, 0x3 << base,
++ dev_driver_string(gdev));
+ if (rc) {
+ dev_printk(KERN_WARNING, gdev,
+ "failed to request/iomap BARs for port %d "
+@@ -711,6 +714,99 @@ int ata_pci_prepare_sff_host(struct pci_dev *pdev,
+ }
+
+ /**
++ * ata_pci_activate_sff_host - start SFF host, request IRQ and register it
++ * @host: target SFF ATA host
++ * @irq_handler: irq_handler used when requesting IRQ(s)
++ * @sht: scsi_host_template to use when registering the host
++ *
++ * This is the counterpart of ata_host_activate() for SFF ATA
++ * hosts. This separate helper is necessary because SFF hosts
++ * use two separate interrupts in legacy mode.
++ *
++ * LOCKING:
++ * Inherited from calling layer (may sleep).
++ *
++ * RETURNS:
++ * 0 on success, -errno otherwise.
++ */
++int ata_pci_activate_sff_host(struct ata_host *host,
++ irq_handler_t irq_handler,
++ struct scsi_host_template *sht)
++{
++ struct device *dev = host->dev;
++ struct pci_dev *pdev = to_pci_dev(dev);
++ const char *drv_name = dev_driver_string(host->dev);
++ int legacy_mode = 0, rc;
++
++ rc = ata_host_start(host);
++ if (rc)
++ return rc;
++
++ if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
++ u8 tmp8, mask;
++
++ /* TODO: What if one channel is in native mode ... */
++ pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
++ mask = (1 << 2) | (1 << 0);
++ if ((tmp8 & mask) != mask)
++ legacy_mode = 1;
++#if defined(CONFIG_NO_ATA_LEGACY)
++ /* Some platforms with PCI limits cannot address compat
++ port space. In that case we punt if their firmware has
++ left a device in compatibility mode */
++ if (legacy_mode) {
++ printk(KERN_ERR "ata: Compatibility mode ATA is not supported on this platform, skipping.\n");
++ return -EOPNOTSUPP;
++ }
++#endif
++ }
++
++ if (!devres_open_group(dev, NULL, GFP_KERNEL))
++ return -ENOMEM;
++
++ if (!legacy_mode && pdev->irq) {
++ rc = devm_request_irq(dev, pdev->irq, irq_handler,
++ IRQF_SHARED, drv_name, host);
++ if (rc)
++ goto out;
++
++ ata_port_desc(host->ports[0], "irq %d", pdev->irq);
++ ata_port_desc(host->ports[1], "irq %d", pdev->irq);
++ } else if (legacy_mode) {
++ if (!ata_port_is_dummy(host->ports[0])) {
++ rc = devm_request_irq(dev, ATA_PRIMARY_IRQ(pdev),
++ irq_handler, IRQF_SHARED,
++ drv_name, host);
++ if (rc)
++ goto out;
++
++ ata_port_desc(host->ports[0], "irq %d",
++ ATA_PRIMARY_IRQ(pdev));
++ }
++
++ if (!ata_port_is_dummy(host->ports[1])) {
++ rc = devm_request_irq(dev, ATA_SECONDARY_IRQ(pdev),
++ irq_handler, IRQF_SHARED,
++ drv_name, host);
++ if (rc)
++ goto out;
++
++ ata_port_desc(host->ports[1], "irq %d",
++ ATA_SECONDARY_IRQ(pdev));
++ }
++ }
++
++ rc = ata_host_register(host, sht);
++ out:
++ if (rc == 0)
++ devres_remove_group(dev, NULL);
++ else
++ devres_release_group(dev, NULL);
++
++ return rc;
++}
++
++/**
+ * ata_pci_init_one - Initialize/register PCI IDE host controller
+ * @pdev: Controller to be initialized
+ * @ppi: array of port_info, must be enough for two ports
+@@ -739,8 +835,6 @@ int ata_pci_init_one(struct pci_dev *pdev,
+ struct device *dev = &pdev->dev;
+ const struct ata_port_info *pi = NULL;
+ struct ata_host *host = NULL;
+- u8 mask;
+- int legacy_mode = 0;
+ int i, rc;
+
+ DPRINTK("ENTER\n");
+@@ -762,95 +856,24 @@ int ata_pci_init_one(struct pci_dev *pdev,
+ if (!devres_open_group(dev, NULL, GFP_KERNEL))
+ return -ENOMEM;
+
+- /* FIXME: Really for ATA it isn't safe because the device may be
+- multi-purpose and we want to leave it alone if it was already
+- enabled. Secondly for shared use as Arjan says we want refcounting
+-
+- Checking dev->is_enabled is insufficient as this is not set at
+- boot for the primary video which is BIOS enabled
+- */
+-
+ rc = pcim_enable_device(pdev);
+ if (rc)
+- goto err_out;
++ goto out;
+
+- if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
+- u8 tmp8;
+-
+- /* TODO: What if one channel is in native mode ... */
+- pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
+- mask = (1 << 2) | (1 << 0);
+- if ((tmp8 & mask) != mask)
+- legacy_mode = 1;
+-#if defined(CONFIG_NO_ATA_LEGACY)
+- /* Some platforms with PCI limits cannot address compat
+- port space. In that case we punt if their firmware has
+- left a device in compatibility mode */
+- if (legacy_mode) {
+- printk(KERN_ERR "ata: Compatibility mode ATA is not supported on this platform, skipping.\n");
+- rc = -EOPNOTSUPP;
+- goto err_out;
+- }
+-#endif
+- }
+-
+- /* prepare host */
++ /* prepare and activate SFF host */
+ rc = ata_pci_prepare_sff_host(pdev, ppi, &host);
+ if (rc)
+- goto err_out;
++ goto out;
+
+ pci_set_master(pdev);
++ rc = ata_pci_activate_sff_host(host, pi->port_ops->irq_handler,
++ pi->sht);
++ out:
++ if (rc == 0)
++ devres_remove_group(&pdev->dev, NULL);
++ else
++ devres_release_group(&pdev->dev, NULL);
+
+- /* start host and request IRQ */
+- rc = ata_host_start(host);
+- if (rc)
+- goto err_out;
+-
+- if (!legacy_mode && pdev->irq) {
+- /* We may have no IRQ assigned in which case we can poll. This
+- shouldn't happen on a sane system but robustness is cheap
+- in this case */
+- rc = devm_request_irq(dev, pdev->irq, pi->port_ops->irq_handler,
+- IRQF_SHARED, DRV_NAME, host);
+- if (rc)
+- goto err_out;
+-
+- ata_port_desc(host->ports[0], "irq %d", pdev->irq);
+- ata_port_desc(host->ports[1], "irq %d", pdev->irq);
+- } else if (legacy_mode) {
+- if (!ata_port_is_dummy(host->ports[0])) {
+- rc = devm_request_irq(dev, ATA_PRIMARY_IRQ(pdev),
+- pi->port_ops->irq_handler,
+- IRQF_SHARED, DRV_NAME, host);
+- if (rc)
+- goto err_out;
+-
+- ata_port_desc(host->ports[0], "irq %d",
+- ATA_PRIMARY_IRQ(pdev));
+- }
+-
+- if (!ata_port_is_dummy(host->ports[1])) {
+- rc = devm_request_irq(dev, ATA_SECONDARY_IRQ(pdev),
+- pi->port_ops->irq_handler,
+- IRQF_SHARED, DRV_NAME, host);
+- if (rc)
+- goto err_out;
+-
+- ata_port_desc(host->ports[1], "irq %d",
+- ATA_SECONDARY_IRQ(pdev));
+- }
+- }
+-
+- /* register */
+- rc = ata_host_register(host, pi->sht);
+- if (rc)
+- goto err_out;
+-
+- devres_remove_group(dev, NULL);
+- return 0;
+-
+-err_out:
+- devres_release_group(dev, NULL);
+ return rc;
+ }
+
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index bbe59c2..409ffb9 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -60,6 +60,7 @@ extern int atapi_dmadir;
+ extern int atapi_passthru16;
+ extern int libata_fua;
+ extern int libata_noacpi;
++extern int libata_allow_tpm;
+ extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev);
+ extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
+ u64 block, u32 n_block, unsigned int tf_flags,
+@@ -85,7 +86,6 @@ extern int ata_dev_configure(struct ata_device *dev);
+ extern int sata_down_spd_limit(struct ata_link *link);
+ extern int sata_set_spd_needed(struct ata_link *link);
+ extern int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel);
+-extern int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
+ extern void ata_sg_clean(struct ata_queued_cmd *qc);
+ extern void ata_qc_free(struct ata_queued_cmd *qc);
+ extern void ata_qc_issue(struct ata_queued_cmd *qc);
+@@ -113,6 +113,7 @@ extern int ata_acpi_on_suspend(struct ata_port *ap);
+ extern void ata_acpi_on_resume(struct ata_port *ap);
+ extern int ata_acpi_on_devcfg(struct ata_device *dev);
+ extern void ata_acpi_on_disable(struct ata_device *dev);
++extern void ata_acpi_set_state(struct ata_port *ap, pm_message_t state);
+ #else
+ static inline void ata_acpi_associate_sata_port(struct ata_port *ap) { }
+ static inline void ata_acpi_associate(struct ata_host *host) { }
+@@ -121,6 +122,8 @@ static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; }
+ static inline void ata_acpi_on_resume(struct ata_port *ap) { }
+ static inline int ata_acpi_on_devcfg(struct ata_device *dev) { return 0; }
+ static inline void ata_acpi_on_disable(struct ata_device *dev) { }
++static inline void ata_acpi_set_state(struct ata_port *ap,
++ pm_message_t state) { }
+ #endif
+
+ /* libata-scsi.c */
+@@ -183,6 +186,7 @@ extern void ata_eh_report(struct ata_port *ap);
+ extern int ata_eh_reset(struct ata_link *link, int classify,
+ ata_prereset_fn_t prereset, ata_reset_fn_t softreset,
+ ata_reset_fn_t hardreset, ata_postreset_fn_t postreset);
++extern int ata_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
+ extern int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset,
+ ata_reset_fn_t softreset, ata_reset_fn_t hardreset,
+ ata_postreset_fn_t postreset,
+diff --git a/drivers/ata/pata_acpi.c b/drivers/ata/pata_acpi.c
+index e4542ab..244098a 100644
+--- a/drivers/ata/pata_acpi.c
++++ b/drivers/ata/pata_acpi.c
+@@ -81,17 +81,6 @@ static void pacpi_error_handler(struct ata_port *ap)
+ NULL, ata_std_postreset);
+ }
+
+-/* Welcome to ACPI, bring a bucket */
+-static const unsigned int pio_cycle[7] = {
+- 600, 383, 240, 180, 120, 100, 80
+-};
+-static const unsigned int mwdma_cycle[5] = {
+- 480, 150, 120, 100, 80
+-};
+-static const unsigned int udma_cycle[7] = {
+- 120, 80, 60, 45, 30, 20, 15
+-};
+-
+ /**
+ * pacpi_discover_modes - filter non ACPI modes
+ * @adev: ATA device
+@@ -103,56 +92,20 @@ static const unsigned int udma_cycle[7] = {
+
+ static unsigned long pacpi_discover_modes(struct ata_port *ap, struct ata_device *adev)
+ {
+- int unit = adev->devno;
+ struct pata_acpi *acpi = ap->private_data;
+- int i;
+- u32 t;
+- unsigned long mask = (0x7f << ATA_SHIFT_UDMA) | (0x7 << ATA_SHIFT_MWDMA) | (0x1F << ATA_SHIFT_PIO);
+-
+ struct ata_acpi_gtm probe;
++ unsigned int xfer_mask;
+
+ probe = acpi->gtm;
+
+- /* We always use the 0 slot for crap hardware */
+- if (!(probe.flags & 0x10))
+- unit = 0;
+-
+ ata_acpi_gtm(ap, &probe);
+
+- /* Start by scanning for PIO modes */
+- for (i = 0; i < 7; i++) {
+- t = probe.drive[unit].pio;
+- if (t <= pio_cycle[i]) {
+- mask |= (2 << (ATA_SHIFT_PIO + i)) - 1;
+- break;
+- }
+- }
++ xfer_mask = ata_acpi_gtm_xfermask(adev, &probe);
+
+- /* See if we have MWDMA or UDMA data. We don't bother with MWDMA
+- if UDMA is availabe as this means the BIOS set UDMA and our
+- error changedown if it works is UDMA to PIO anyway */
+- if (probe.flags & (1 << (2 * unit))) {
+- /* MWDMA */
+- for (i = 0; i < 5; i++) {
+- t = probe.drive[unit].dma;
+- if (t <= mwdma_cycle[i]) {
+- mask |= (2 << (ATA_SHIFT_MWDMA + i)) - 1;
+- break;
+- }
+- }
+- } else {
+- /* UDMA */
+- for (i = 0; i < 7; i++) {
+- t = probe.drive[unit].dma;
+- if (t <= udma_cycle[i]) {
+- mask |= (2 << (ATA_SHIFT_UDMA + i)) - 1;
+- break;
+- }
+- }
+- }
+- if (mask & (0xF8 << ATA_SHIFT_UDMA))
++ if (xfer_mask & (0xF8 << ATA_SHIFT_UDMA))
+ ap->cbl = ATA_CBL_PATA80;
+- return mask;
++
++ return xfer_mask;
+ }
+
+ /**
+@@ -180,12 +133,14 @@ static void pacpi_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ {
+ int unit = adev->devno;
+ struct pata_acpi *acpi = ap->private_data;
++ const struct ata_timing *t;
+
+ if (!(acpi->gtm.flags & 0x10))
+ unit = 0;
+
+ /* Now stuff the nS values into the structure */
+- acpi->gtm.drive[unit].pio = pio_cycle[adev->pio_mode - XFER_PIO_0];
++ t = ata_timing_find_mode(adev->pio_mode);
++ acpi->gtm.drive[unit].pio = t->cycle;
+ ata_acpi_stm(ap, &acpi->gtm);
+ /* See what mode we actually got */
+ ata_acpi_gtm(ap, &acpi->gtm);
+@@ -201,16 +156,18 @@ static void pacpi_set_dmamode(struct ata_port *ap, struct ata_device *adev)
+ {
+ int unit = adev->devno;
+ struct pata_acpi *acpi = ap->private_data;
++ const struct ata_timing *t;
+
+ if (!(acpi->gtm.flags & 0x10))
+ unit = 0;
+
+ /* Now stuff the nS values into the structure */
++ t = ata_timing_find_mode(adev->dma_mode);
+ if (adev->dma_mode >= XFER_UDMA_0) {
+- acpi->gtm.drive[unit].dma = udma_cycle[adev->dma_mode - XFER_UDMA_0];
++ acpi->gtm.drive[unit].dma = t->udma;
+ acpi->gtm.flags |= (1 << (2 * unit));
+ } else {
+- acpi->gtm.drive[unit].dma = mwdma_cycle[adev->dma_mode - XFER_MW_DMA_0];
++ acpi->gtm.drive[unit].dma = t->cycle;
+ acpi->gtm.flags &= ~(1 << (2 * unit));
+ }
+ ata_acpi_stm(ap, &acpi->gtm);
+diff --git a/drivers/ata/pata_ali.c b/drivers/ata/pata_ali.c
+index 8caf9af..7e68edf 100644
+--- a/drivers/ata/pata_ali.c
++++ b/drivers/ata/pata_ali.c
+@@ -64,7 +64,7 @@ static int ali_cable_override(struct pci_dev *pdev)
+ if (pdev->subsystem_vendor == 0x10CF && pdev->subsystem_device == 0x10AF)
+ return 1;
+ /* Mitac 8317 (Winbook-A) and relatives */
+- if (pdev->subsystem_vendor == 0x1071 && pdev->subsystem_device == 0x8317)
++ if (pdev->subsystem_vendor == 0x1071 && pdev->subsystem_device == 0x8317)
+ return 1;
+ /* Systems by DMI */
+ if (dmi_check_system(cable_dmi_table))
+diff --git a/drivers/ata/pata_amd.c b/drivers/ata/pata_amd.c
+index 3cc27b5..761a666 100644
+--- a/drivers/ata/pata_amd.c
++++ b/drivers/ata/pata_amd.c
+@@ -220,6 +220,62 @@ static void amd133_set_dmamode(struct ata_port *ap, struct ata_device *adev)
+ timing_setup(ap, adev, 0x40, adev->dma_mode, 4);
+ }
+
++/* Both host-side and drive-side detection results are worthless on NV
++ * PATAs. Ignore them and just follow what BIOS configured. Both the
++ * current configuration in PCI config reg and ACPI GTM result are
++ * cached during driver attach and are consulted to select transfer
++ * mode.
++ */
++static unsigned long nv_mode_filter(struct ata_device *dev,
++ unsigned long xfer_mask)
++{
++ static const unsigned int udma_mask_map[] =
++ { ATA_UDMA2, ATA_UDMA1, ATA_UDMA0, 0,
++ ATA_UDMA3, ATA_UDMA4, ATA_UDMA5, ATA_UDMA6 };
++ struct ata_port *ap = dev->link->ap;
++ char acpi_str[32] = "";
++ u32 saved_udma, udma;
++ const struct ata_acpi_gtm *gtm;
++ unsigned long bios_limit = 0, acpi_limit = 0, limit;
++
++ /* find out what BIOS configured */
++ udma = saved_udma = (unsigned long)ap->host->private_data;
++
++ if (ap->port_no == 0)
++ udma >>= 16;
++ if (dev->devno == 0)
++ udma >>= 8;
++
++ if ((udma & 0xc0) == 0xc0)
++ bios_limit = ata_pack_xfermask(0, 0, udma_mask_map[udma & 0x7]);
++
++ /* consult ACPI GTM too */
++ gtm = ata_acpi_init_gtm(ap);
++ if (gtm) {
++ acpi_limit = ata_acpi_gtm_xfermask(dev, gtm);
++
++ snprintf(acpi_str, sizeof(acpi_str), " (%u:%u:0x%x)",
++ gtm->drive[0].dma, gtm->drive[1].dma, gtm->flags);
++ }
++
++ /* be optimistic, EH can take care of things if something goes wrong */
++ limit = bios_limit | acpi_limit;
++
++ /* If PIO or DMA isn't configured at all, don't limit. Let EH
++ * handle it.
++ */
++ if (!(limit & ATA_MASK_PIO))
++ limit |= ATA_MASK_PIO;
++ if (!(limit & (ATA_MASK_MWDMA | ATA_MASK_UDMA)))
++ limit |= ATA_MASK_MWDMA | ATA_MASK_UDMA;
++
++ ata_port_printk(ap, KERN_DEBUG, "nv_mode_filter: 0x%lx&0x%lx->0x%lx, "
++ "BIOS=0x%lx (0x%x) ACPI=0x%lx%s\n",
++ xfer_mask, limit, xfer_mask & limit, bios_limit,
++ saved_udma, acpi_limit, acpi_str);
++
++ return xfer_mask & limit;
++}
+
+ /**
+ * nv_probe_init - cable detection
+@@ -252,31 +308,6 @@ static void nv_error_handler(struct ata_port *ap)
+ ata_std_postreset);
+ }
+
+-static int nv_cable_detect(struct ata_port *ap)
+-{
+- static const u8 bitmask[2] = {0x03, 0x0C};
+- struct pci_dev *pdev = to_pci_dev(ap->host->dev);
+- u8 ata66;
+- u16 udma;
+- int cbl;
+-
+- pci_read_config_byte(pdev, 0x52, &ata66);
+- if (ata66 & bitmask[ap->port_no])
+- cbl = ATA_CBL_PATA80;
+- else
+- cbl = ATA_CBL_PATA40;
+-
+- /* We now have to double check because the Nvidia boxes BIOS
+- doesn't always set the cable bits but does set mode bits */
+- pci_read_config_word(pdev, 0x62 - 2 * ap->port_no, &udma);
+- if ((udma & 0xC4) == 0xC4 || (udma & 0xC400) == 0xC400)
+- cbl = ATA_CBL_PATA80;
+- /* And a triple check across suspend/resume with ACPI around */
+- if (ata_acpi_cbl_80wire(ap))
+- cbl = ATA_CBL_PATA80;
+- return cbl;
+-}
+-
+ /**
+ * nv100_set_piomode - set initial PIO mode data
+ * @ap: ATA interface
+@@ -314,6 +345,14 @@ static void nv133_set_dmamode(struct ata_port *ap, struct ata_device *adev)
+ timing_setup(ap, adev, 0x50, adev->dma_mode, 4);
+ }
+
++static void nv_host_stop(struct ata_host *host)
++{
++ u32 udma = (unsigned long)host->private_data;
++
++ /* restore PCI config register 0x60 */
++ pci_write_config_dword(to_pci_dev(host->dev), 0x60, udma);
++}
++
+ static struct scsi_host_template amd_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+@@ -478,7 +517,8 @@ static struct ata_port_operations nv100_port_ops = {
+ .thaw = ata_bmdma_thaw,
+ .error_handler = nv_error_handler,
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+- .cable_detect = nv_cable_detect,
++ .cable_detect = ata_cable_ignore,
++ .mode_filter = nv_mode_filter,
+
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
+@@ -495,6 +535,7 @@ static struct ata_port_operations nv100_port_ops = {
+ .irq_on = ata_irq_on,
+
+ .port_start = ata_sff_port_start,
++ .host_stop = nv_host_stop,
+ };
+
+ static struct ata_port_operations nv133_port_ops = {
+@@ -511,7 +552,8 @@ static struct ata_port_operations nv133_port_ops = {
+ .thaw = ata_bmdma_thaw,
+ .error_handler = nv_error_handler,
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+- .cable_detect = nv_cable_detect,
++ .cable_detect = ata_cable_ignore,
++ .mode_filter = nv_mode_filter,
+
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
+@@ -528,6 +570,7 @@ static struct ata_port_operations nv133_port_ops = {
+ .irq_on = ata_irq_on,
+
+ .port_start = ata_sff_port_start,
++ .host_stop = nv_host_stop,
+ };
+
+ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+@@ -614,7 +657,8 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ .port_ops = &amd100_port_ops
+ }
+ };
+- const struct ata_port_info *ppi[] = { NULL, NULL };
++ struct ata_port_info pi;
++ const struct ata_port_info *ppi[] = { &pi, NULL };
+ static int printed_version;
+ int type = id->driver_data;
+ u8 fifo;
+@@ -628,6 +672,19 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (type == 1 && pdev->revision > 0x7)
+ type = 2;
+
++ /* Serenade ? */
++ if (type == 5 && pdev->subsystem_vendor == PCI_VENDOR_ID_AMD &&
++ pdev->subsystem_device == PCI_DEVICE_ID_AMD_SERENADE)
++ type = 6; /* UDMA 100 only */
++
++ /*
++ * Okay, type is determined now. Apply type-specific workarounds.
++ */
++ pi = info[type];
++
++ if (type < 3)
++ ata_pci_clear_simplex(pdev);
++
+ /* Check for AMD7411 */
+ if (type == 3)
+ /* FIFO is broken */
+@@ -635,16 +692,17 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ else
+ pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
+
+- /* Serenade ? */
+- if (type == 5 && pdev->subsystem_vendor == PCI_VENDOR_ID_AMD &&
+- pdev->subsystem_device == PCI_DEVICE_ID_AMD_SERENADE)
+- type = 6; /* UDMA 100 only */
++ /* Cable detection on Nvidia chips doesn't work too well,
++ * cache BIOS programmed UDMA mode.
++ */
++ if (type == 7 || type == 8) {
++ u32 udma;
+
+- if (type < 3)
+- ata_pci_clear_simplex(pdev);
++ pci_read_config_dword(pdev, 0x60, &udma);
++ pi.private_data = (void *)(unsigned long)udma;
++ }
+
+ /* And fire it up */
+- ppi[0] = &info[type];
+ return ata_pci_init_one(pdev, ppi);
+ }
+
+diff --git a/drivers/ata/pata_bf54x.c b/drivers/ata/pata_bf54x.c
+index 7842cc4..a32e3c4 100644
+--- a/drivers/ata/pata_bf54x.c
++++ b/drivers/ata/pata_bf54x.c
+@@ -832,6 +832,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
+ {
+ unsigned short config = WDSIZE_16;
+ struct scatterlist *sg;
++ unsigned int si;
+
+ pr_debug("in atapi dma setup\n");
+ /* Program the ATA_CTRL register with dir */
+@@ -839,7 +840,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
+ /* fill the ATAPI DMA controller */
+ set_dma_config(CH_ATAPI_TX, config);
+ set_dma_x_modify(CH_ATAPI_TX, 2);
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ set_dma_start_addr(CH_ATAPI_TX, sg_dma_address(sg));
+ set_dma_x_count(CH_ATAPI_TX, sg_dma_len(sg) >> 1);
+ }
+@@ -848,7 +849,7 @@ static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
+ /* fill the ATAPI DMA controller */
+ set_dma_config(CH_ATAPI_RX, config);
+ set_dma_x_modify(CH_ATAPI_RX, 2);
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ set_dma_start_addr(CH_ATAPI_RX, sg_dma_address(sg));
+ set_dma_x_count(CH_ATAPI_RX, sg_dma_len(sg) >> 1);
+ }
+@@ -867,6 +868,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
+ struct ata_port *ap = qc->ap;
+ void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
+ struct scatterlist *sg;
++ unsigned int si;
+
+ pr_debug("in atapi dma start\n");
+ if (!(ap->udma_mask || ap->mwdma_mask))
+@@ -881,7 +883,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
+ * data cache is enabled. Otherwise, this loop
+ * is an empty loop and optimized out.
+ */
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ flush_dcache_range(sg_dma_address(sg),
+ sg_dma_address(sg) + sg_dma_len(sg));
+ }
+@@ -910,7 +912,7 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
+ ATAPI_SET_CONTROL(base, ATAPI_GET_CONTROL(base) | TFRCNT_RST);
+
+ /* Set transfer length to buffer len */
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ ATAPI_SET_XFER_LEN(base, (sg_dma_len(sg) >> 1));
+ }
+
+@@ -932,6 +934,7 @@ static void bfin_bmdma_stop(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct scatterlist *sg;
++ unsigned int si;
+
+ pr_debug("in atapi dma stop\n");
+ if (!(ap->udma_mask || ap->mwdma_mask))
+@@ -950,7 +953,7 @@ static void bfin_bmdma_stop(struct ata_queued_cmd *qc)
+ * data cache is enabled. Otherwise, this loop
+ * is an empty loop and optimized out.
+ */
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ invalidate_dcache_range(
+ sg_dma_address(sg),
+ sg_dma_address(sg)
+@@ -1167,34 +1170,36 @@ static unsigned char bfin_bmdma_status(struct ata_port *ap)
+ * Note: Original code is ata_data_xfer().
+ */
+
+-static void bfin_data_xfer(struct ata_device *adev, unsigned char *buf,
+- unsigned int buflen, int write_data)
++static unsigned int bfin_data_xfer(struct ata_device *dev, unsigned char *buf,
++ unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
+- unsigned int words = buflen >> 1;
+- unsigned short *buf16 = (u16 *) buf;
++ struct ata_port *ap = dev->link->ap;
+ void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
++ unsigned int words = buflen >> 1;
++ unsigned short *buf16 = (u16 *)buf;
+
+ /* Transfer multiple of 2 bytes */
+- if (write_data) {
+- write_atapi_data(base, words, buf16);
+- } else {
++ if (rw == READ)
+ read_atapi_data(base, words, buf16);
+- }
++ else
++ write_atapi_data(base, words, buf16);
+
+ /* Transfer trailing 1 byte, if any. */
+ if (unlikely(buflen & 0x01)) {
+ unsigned short align_buf[1] = { 0 };
+ unsigned char *trailing_buf = buf + buflen - 1;
+
+- if (write_data) {
+- memcpy(align_buf, trailing_buf, 1);
+- write_atapi_data(base, 1, align_buf);
+- } else {
++ if (rw == READ) {
+ read_atapi_data(base, 1, align_buf);
+ memcpy(trailing_buf, align_buf, 1);
++ } else {
++ memcpy(align_buf, trailing_buf, 1);
++ write_atapi_data(base, 1, align_buf);
+ }
++ words++;
+ }
++
++ return words << 1;
+ }
+
+ /**
+diff --git a/drivers/ata/pata_cs5520.c b/drivers/ata/pata_cs5520.c
+index 33f7f08..d4590f5 100644
+--- a/drivers/ata/pata_cs5520.c
++++ b/drivers/ata/pata_cs5520.c
+@@ -198,7 +198,7 @@ static int __devinit cs5520_init_one(struct pci_dev *pdev, const struct pci_devi
+ };
+ const struct ata_port_info *ppi[2];
+ u8 pcicfg;
+- void *iomap[5];
++ void __iomem *iomap[5];
+ struct ata_host *host;
+ struct ata_ioports *ioaddr;
+ int i, rc;
+diff --git a/drivers/ata/pata_hpt37x.c b/drivers/ata/pata_hpt37x.c
+index c79f066..68eb349 100644
+--- a/drivers/ata/pata_hpt37x.c
++++ b/drivers/ata/pata_hpt37x.c
+@@ -847,15 +847,16 @@ static u32 hpt374_read_freq(struct pci_dev *pdev)
+ u32 freq;
+ unsigned long io_base = pci_resource_start(pdev, 4);
+ if (PCI_FUNC(pdev->devfn) & 1) {
+- struct pci_dev *pdev_0 = pci_get_slot(pdev->bus, pdev->devfn - 1);
++ struct pci_dev *pdev_0;
++
++ pdev_0 = pci_get_slot(pdev->bus, pdev->devfn - 1);
+ /* Someone hot plugged the controller on us ? */
+ if (pdev_0 == NULL)
+ return 0;
+ io_base = pci_resource_start(pdev_0, 4);
+ freq = inl(io_base + 0x90);
+ pci_dev_put(pdev_0);
+- }
+- else
++ } else
+ freq = inl(io_base + 0x90);
+ return freq;
+ }
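[Editor's note: the hpt37x hunk relies on PCI `devfn` encoding, where `devfn - 1` on function 1 of a slot yields function 0 of the same slot. A small sketch of that arithmetic using the standard kernel macros (the `sibling_func0` helper is illustrative):]

```c
/* devfn packs (slot << 3) | function; hpt374_read_freq fetches the
 * sibling function-0 device via devfn - 1 when running on function 1. */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_FUNC(devfn)		((devfn) & 0x07)

static unsigned int sibling_func0(unsigned int devfn)
{
	return (PCI_FUNC(devfn) & 1) ? devfn - 1 : devfn;
}
```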
+diff --git a/drivers/ata/pata_icside.c b/drivers/ata/pata_icside.c
+index 842fe08..5b8586d 100644
+--- a/drivers/ata/pata_icside.c
++++ b/drivers/ata/pata_icside.c
+@@ -224,6 +224,7 @@ static void pata_icside_bmdma_setup(struct ata_queued_cmd *qc)
+ struct pata_icside_state *state = ap->host->private_data;
+ struct scatterlist *sg, *rsg = state->sg;
+ unsigned int write = qc->tf.flags & ATA_TFLAG_WRITE;
++ unsigned int si;
+
+ /*
+ * We are simplex; BUG if we try to fiddle with DMA
+@@ -234,7 +235,7 @@ static void pata_icside_bmdma_setup(struct ata_queued_cmd *qc)
+ /*
+ * Copy ATAs scattered sg list into a contiguous array of sg
+ */
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ memcpy(rsg, sg, sizeof(*sg));
+ rsg++;
+ }
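[Editor's note: the pata_icside hunk swaps the removed `ata_for_each_sg()` walker for the generic indexed `for_each_sg()` style while copying the scatterlist into a contiguous array. A self-contained stand-in under the assumption of a plain-array scatterlist (names are illustrative):]

```c
#include <string.h>

struct sg_entry { unsigned long addr; unsigned int len; };

/* Walk n_elem entries by index and copy each into a contiguous array,
 * mirroring the for_each_sg(qc->sg, sg, qc->n_elem, si) loop above. */
static unsigned int copy_sg(const struct sg_entry *src, unsigned int n_elem,
			    struct sg_entry *dst)
{
	unsigned int si;

	for (si = 0; si < n_elem; si++)
		memcpy(&dst[si], &src[si], sizeof(struct sg_entry));
	return si;
}

static unsigned long copy_sg_demo(void)
{
	struct sg_entry src[2] = { { 0x1000, 512 }, { 0x2000, 256 } }, dst[2];

	copy_sg(src, 2, dst);
	return dst[0].addr + dst[1].len;	/* 0x1000 + 256 */
}
```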
+diff --git a/drivers/ata/pata_it821x.c b/drivers/ata/pata_it821x.c
+index ca9aae0..109ddd4 100644
+--- a/drivers/ata/pata_it821x.c
++++ b/drivers/ata/pata_it821x.c
+@@ -430,7 +430,7 @@ static unsigned int it821x_smart_qc_issue_prot(struct ata_queued_cmd *qc)
+ return ata_qc_issue_prot(qc);
+ }
+ printk(KERN_DEBUG "it821x: can't process command 0x%02X\n", qc->tf.command);
+- return AC_ERR_INVALID;
++ return AC_ERR_DEV;
+ }
+
+ /**
+@@ -516,6 +516,37 @@ static void it821x_dev_config(struct ata_device *adev)
+ printk("(%dK stripe)", adev->id[146]);
+ printk(".\n");
+ }
++ /* This is a controller firmware triggered funny, don't
++ report the drive faulty! */
++ adev->horkage &= ~ATA_HORKAGE_DIAGNOSTIC;
++}
++
++/**
++ * it821x_ident_hack - Hack identify data up
++ * @ap: Port
++ *
++ * Walk the devices on this firmware driven port and slightly
++ * mash the identify data to stop us and common tools trying to
++ * use features not firmware supported. The firmware itself does
++ * some masking (eg SMART) but not enough.
++ *
++ * This is a bit of an abuse of the cable method, but it is the
++ * only method called at the right time. We could modify the libata
++ * core specifically for ident hacking but while we have one offender
++ * it seems better to keep the fallout localised.
++ */
++
++static int it821x_ident_hack(struct ata_port *ap)
++{
++ struct ata_device *adev;
++ ata_link_for_each_dev(adev, &ap->link) {
++ if (ata_dev_enabled(adev)) {
++ adev->id[84] &= ~(1 << 6); /* No FUA */
++ adev->id[85] &= ~(1 << 10); /* No HPA */
++ adev->id[76] = 0; /* No NCQ/AN etc */
++ }
++ }
++ return ata_cable_unknown(ap);
+ }
+
+
+@@ -634,7 +665,7 @@ static struct ata_port_operations it821x_smart_port_ops = {
+ .thaw = ata_bmdma_thaw,
+ .error_handler = ata_bmdma_error_handler,
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+- .cable_detect = ata_cable_unknown,
++ .cable_detect = it821x_ident_hack,
+
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
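[Editor's note: the it821x ident hack above masks specific ATA IDENTIFY words so tools don't attempt features the firmware can't pass through. The bit operations can be checked in isolation (the `ident_hack` wrapper and `demo` are illustrative, only the word/bit numbers come from the patch):]

```c
typedef unsigned short u16;

/* Clear word 84 bit 6 (FUA), word 85 bit 10 (HPA), all of word 76
 * (NCQ/AN etc), exactly as it821x_ident_hack does per device. */
static void ident_hack(u16 *id)
{
	id[84] &= ~(1 << 6);
	id[85] &= ~(1 << 10);
	id[76] = 0;
}

static int ident_hack_demo(void)
{
	u16 id[256] = { 0 };

	id[84] = 1 << 6;
	id[85] = 1 << 10;
	id[76] = 0xFFFF;
	ident_hack(id);
	return id[84] | id[85] | id[76];	/* every bit cleared */
}
```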
+diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
+index 120b5bf..030878f 100644
+--- a/drivers/ata/pata_ixp4xx_cf.c
++++ b/drivers/ata/pata_ixp4xx_cf.c
+@@ -42,13 +42,13 @@ static int ixp4xx_set_mode(struct ata_link *link, struct ata_device **error)
+ return 0;
+ }
+
+-static void ixp4xx_mmio_data_xfer(struct ata_device *adev, unsigned char *buf,
+- unsigned int buflen, int write_data)
++static unsigned int ixp4xx_mmio_data_xfer(struct ata_device *dev,
++ unsigned char *buf, unsigned int buflen, int rw)
+ {
+ unsigned int i;
+ unsigned int words = buflen >> 1;
+ u16 *buf16 = (u16 *) buf;
+- struct ata_port *ap = adev->link->ap;
++ struct ata_port *ap = dev->link->ap;
+ void __iomem *mmio = ap->ioaddr.data_addr;
+ struct ixp4xx_pata_data *data = ap->host->dev->platform_data;
+
+@@ -59,30 +59,32 @@ static void ixp4xx_mmio_data_xfer(struct ata_device *adev, unsigned char *buf,
+ udelay(100);
+
+ /* Transfer multiple of 2 bytes */
+- if (write_data) {
+- for (i = 0; i < words; i++)
+- writew(buf16[i], mmio);
+- } else {
++ if (rw == READ)
+ for (i = 0; i < words; i++)
+ buf16[i] = readw(mmio);
+- }
++ else
++ for (i = 0; i < words; i++)
++ writew(buf16[i], mmio);
+
+ /* Transfer trailing 1 byte, if any. */
+ if (unlikely(buflen & 0x01)) {
+ u16 align_buf[1] = { 0 };
+ unsigned char *trailing_buf = buf + buflen - 1;
+
+- if (write_data) {
+- memcpy(align_buf, trailing_buf, 1);
+- writew(align_buf[0], mmio);
+- } else {
++ if (rw == READ) {
+ align_buf[0] = readw(mmio);
+ memcpy(trailing_buf, align_buf, 1);
++ } else {
++ memcpy(align_buf, trailing_buf, 1);
++ writew(align_buf[0], mmio);
+ }
++ words++;
+ }
+
+ udelay(100);
+ *data->cs0_cfg |= 0x01;
++
++ return words << 1;
+ }
+
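[Editor's note: the ixp4xx hunk shows the common odd-length pattern: transfer whole 16-bit words, then bounce the last byte through a zeroed word buffer so the device only sees full-word accesses. A memory-only sketch with an array standing in for the MMIO data register (helper names are illustrative):]

```c
#include <string.h>

typedef unsigned short u16;

/* Write buflen bytes as 16-bit words; an odd tail goes via a zeroed
 * bounce word. Returns bytes consumed (words << 1), as the patch does. */
static unsigned int write_words(const unsigned char *buf, unsigned int buflen,
				u16 *dev, unsigned int *nwords)
{
	unsigned int i, words = buflen >> 1;

	for (i = 0; i < words; i++) {
		u16 w;
		memcpy(&w, buf + 2 * i, 2);	/* avoid unaligned cast */
		dev[i] = w;
	}
	if (buflen & 0x01) {
		u16 align_buf[1] = { 0 };
		memcpy(align_buf, buf + buflen - 1, 1);
		dev[words++] = align_buf[0];
	}
	*nwords = words;
	return words << 1;
}

static unsigned int write_words_demo(void)
{
	unsigned char src[3] = { 0x11, 0x22, 0x33 };
	u16 dev[2] = { 0 };
	unsigned int n;

	return write_words(src, 3, dev, &n) + n;	/* 4 bytes + 2 words */
}
```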
+ static struct scsi_host_template ixp4xx_sht = {
+diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
+index 17159b5..333dc15 100644
+--- a/drivers/ata/pata_legacy.c
++++ b/drivers/ata/pata_legacy.c
+@@ -28,7 +28,6 @@
+ *
+ * Unsupported but docs exist:
+ * Appian/Adaptec AIC25VL01/Cirrus Logic PD7220
+- * Winbond W83759A
+ *
+ * This driver handles legacy (that is "ISA/VLB side") IDE ports found
+ * on PC class systems. There are three hybrid devices that are exceptions
+@@ -36,7 +35,7 @@
+ * the MPIIX where the tuning is PCI side but the IDE is "ISA side".
+ *
+ * Specific support is included for the ht6560a/ht6560b/opti82c611a/
+- * opti82c465mv/promise 20230c/20630
++ * opti82c465mv/promise 20230c/20630/winbond83759A
+ *
+ * Use the autospeed and pio_mask options with:
+ * Appian ADI/2 aka CLPD7220 or AIC25VL01.
+@@ -47,9 +46,6 @@
+ * For now use autospeed and pio_mask as above with the W83759A. This may
+ * change.
+ *
+- * TODO
+- * Merge existing pata_qdi driver
+- *
+ */
+
+ #include <linux/kernel.h>
+@@ -64,12 +60,13 @@
+ #include <linux/platform_device.h>
+
+ #define DRV_NAME "pata_legacy"
+-#define DRV_VERSION "0.5.5"
++#define DRV_VERSION "0.6.5"
+
+ #define NR_HOST 6
+
+-static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };
+-static int legacy_irq[NR_HOST] = { 14, 15, 11, 10, 8, 12 };
++static int all;
++module_param(all, int, 0444);
++MODULE_PARM_DESC(all, "Grab all legacy port devices, even if PCI (0=off, 1=on)");
+
+ struct legacy_data {
+ unsigned long timing;
+@@ -80,21 +77,107 @@ struct legacy_data {
+
+ };
+
++enum controller {
++ BIOS = 0,
++ SNOOP = 1,
++ PDC20230 = 2,
++ HT6560A = 3,
++ HT6560B = 4,
++ OPTI611A = 5,
++ OPTI46X = 6,
++ QDI6500 = 7,
++ QDI6580 = 8,
++ QDI6580DP = 9, /* Dual channel mode is different */
++ W83759A = 10,
++
++ UNKNOWN = -1
++};
++
++
++struct legacy_probe {
++ unsigned char *name;
++ unsigned long port;
++ unsigned int irq;
++ unsigned int slot;
++ enum controller type;
++ unsigned long private;
++};
++
++struct legacy_controller {
++ const char *name;
++ struct ata_port_operations *ops;
++ unsigned int pio_mask;
++ unsigned int flags;
++ int (*setup)(struct platform_device *, struct legacy_probe *probe,
++ struct legacy_data *data);
++};
++
++static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };
++
++static struct legacy_probe probe_list[NR_HOST];
+ static struct legacy_data legacy_data[NR_HOST];
+ static struct ata_host *legacy_host[NR_HOST];
+ static int nr_legacy_host;
+
+
+-static int probe_all; /* Set to check all ISA port ranges */
+-static int ht6560a; /* HT 6560A on primary 1, secondary 2, both 3 */
+-static int ht6560b; /* HT 6560A on primary 1, secondary 2, both 3 */
+-static int opti82c611a; /* Opti82c611A on primary 1, secondary 2, both 3 */
+-static int opti82c46x; /* Opti 82c465MV present (pri/sec autodetect) */
+-static int autospeed; /* Chip present which snoops speed changes */
+-static int pio_mask = 0x1F; /* PIO range for autospeed devices */
++static int probe_all; /* Set to check all ISA port ranges */
++static int ht6560a; /* HT 6560A on primary 1, second 2, both 3 */
++static int ht6560b; /* HT 6560A on primary 1, second 2, both 3 */
++static int opti82c611a; /* Opti82c611A on primary 1, sec 2, both 3 */
++static int opti82c46x; /* Opti 82c465MV present(pri/sec autodetect) */
++static int qdi; /* Set to probe QDI controllers */
++static int winbond; /* Set to probe Winbond controllers,
++ give I/O port if non standard */
++static int autospeed; /* Chip present which snoops speed changes */
++static int pio_mask = 0x1F; /* PIO range for autospeed devices */
+ static int iordy_mask = 0xFFFFFFFF; /* Use iordy if available */
+
+ /**
++ * legacy_probe_add - Add interface to probe list
++ * @port: Controller port
++ * @irq: IRQ number
++ * @type: Controller type
++ * @private: Controller specific info
++ *
++ * Add an entry into the probe list for ATA controllers. This is used
++ * to add the default ISA slots and then to build up the table
++ * further according to other ISA/VLB/Weird device scans
++ *
++ * An I/O port list is used to keep ordering stable and sane, as we
++ * don't have any good way to talk about ordering otherwise
++ */
++
++static int legacy_probe_add(unsigned long port, unsigned int irq,
++ enum controller type, unsigned long private)
++{
++ struct legacy_probe *lp = &probe_list[0];
++ int i;
++ struct legacy_probe *free = NULL;
++
++ for (i = 0; i < NR_HOST; i++) {
++ if (lp->port == 0 && free == NULL)
++ free = lp;
++ /* Matching port, or the correct slot for ordering */
++ if (lp->port == port || legacy_port[i] == port) {
++ free = lp;
++ break;
++ }
++ lp++;
++ }
++ if (free == NULL) {
++ printk(KERN_ERR "pata_legacy: Too many interfaces.\n");
++ return -1;
++ }
++ /* Fill in the entry for later probing */
++ free->port = port;
++ free->irq = irq;
++ free->type = type;
++ free->private = private;
++ return 0;
++}
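[Editor's note: `legacy_probe_add()` above keeps probe ordering stable by preferring the slot whose canonical ISA port matches, falling back to the first free slot. A trimmed sketch of that search (returning a slot index instead of a pointer, which is an illustrative simplification):]

```c
#define NR_HOST 6

struct probe_slot_entry { unsigned long port; };

static const unsigned long slot_port[NR_HOST] =
	{ 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 };

/* Matching port, or the correct slot for ordering; else first free. */
static int probe_slot(const struct probe_slot_entry *list, unsigned long port)
{
	int i, free = -1;

	for (i = 0; i < NR_HOST; i++) {
		if (list[i].port == 0 && free == -1)
			free = i;
		if (list[i].port == port || slot_port[i] == port)
			return i;
	}
	return free;	/* -1 when the table is full */
}

static int probe_slot_demo(void)
{
	struct probe_slot_entry list[NR_HOST] = { { 0 } };

	return probe_slot(list, 0x170);	/* secondary -> canonical slot 1 */
}
```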
++
++
++/**
+ * legacy_set_mode - mode setting
+ * @link: IDE link
+ * @unused: Device that failed when error is returned
+@@ -113,7 +196,8 @@ static int legacy_set_mode(struct ata_link *link, struct ata_device **unused)
+
+ ata_link_for_each_dev(dev, link) {
+ if (ata_dev_enabled(dev)) {
+- ata_dev_printk(dev, KERN_INFO, "configured for PIO\n");
++ ata_dev_printk(dev, KERN_INFO,
++ "configured for PIO\n");
+ dev->pio_mode = XFER_PIO_0;
+ dev->xfer_mode = XFER_PIO_0;
+ dev->xfer_shift = ATA_SHIFT_PIO;
+@@ -171,7 +255,7 @@ static struct ata_port_operations simple_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ static struct ata_port_operations legacy_port_ops = {
+@@ -198,15 +282,16 @@ static struct ata_port_operations legacy_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ /*
+ * Promise 20230C and 20620 support
+ *
+- * This controller supports PIO0 to PIO2. We set PIO timings conservatively to
+- * allow for 50MHz Vesa Local Bus. The 20620 DMA support is weird being DMA to
+- * controller and PIO'd to the host and not supported.
++ * This controller supports PIO0 to PIO2. We set PIO timings
++ * conservatively to allow for 50MHz Vesa Local Bus. The 20620 DMA
++ * support is weird being DMA to controller and PIO'd to the host
++ * and not supported.
+ */
+
+ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
+@@ -221,8 +306,7 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ local_irq_save(flags);
+
+ /* Unlock the control interface */
+- do
+- {
++ do {
+ inb(0x1F5);
+ outb(inb(0x1F2) | 0x80, 0x1F2);
+ inb(0x1F2);
+@@ -231,7 +315,7 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ inb(0x1F2);
+ inb(0x1F2);
+ }
+- while((inb(0x1F2) & 0x80) && --tries);
++ while ((inb(0x1F2) & 0x80) && --tries);
+
+ local_irq_restore(flags);
+
+@@ -249,13 +333,14 @@ static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
+
+ }
+
+-static void pdc_data_xfer_vlb(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
++static unsigned int pdc_data_xfer_vlb(struct ata_device *dev,
++ unsigned char *buf, unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
+- int slop = buflen & 3;
+- unsigned long flags;
++ if (ata_id_has_dword_io(dev->id)) {
++ struct ata_port *ap = dev->link->ap;
++ int slop = buflen & 3;
++ unsigned long flags;
+
+- if (ata_id_has_dword_io(adev->id)) {
+ local_irq_save(flags);
+
+ /* Perform the 32bit I/O synchronization sequence */
+@@ -264,26 +349,27 @@ static void pdc_data_xfer_vlb(struct ata_device *adev, unsigned char *buf, unsig
+ ioread8(ap->ioaddr.nsect_addr);
+
+ /* Now the data */
+-
+- if (write_data)
+- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+- else
++ if (rw == READ)
+ ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
++ else
++ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+
+ if (unlikely(slop)) {
+- __le32 pad = 0;
+- if (write_data) {
+- memcpy(&pad, buf + buflen - slop, slop);
+- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+- } else {
++ u32 pad;
++ if (rw == READ) {
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
+ memcpy(buf + buflen - slop, &pad, slop);
++ } else {
++ memcpy(&pad, buf + buflen - slop, slop);
++ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+ }
++ buflen += 4 - slop;
+ }
+ local_irq_restore(flags);
+- }
+- else
+- ata_data_xfer_noirq(adev, buf, buflen, write_data);
++ } else
++ buflen = ata_data_xfer_noirq(dev, buf, buflen, rw);
++
++ return buflen;
+ }
+
+ static struct ata_port_operations pdc20230_port_ops = {
+@@ -310,14 +396,14 @@ static struct ata_port_operations pdc20230_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ /*
+ * Holtek 6560A support
+ *
+- * This controller supports PIO0 to PIO2 (no IORDY even though higher timings
+- * can be loaded).
++ * This controller supports PIO0 to PIO2 (no IORDY even though higher
++ * timings can be loaded).
+ */
+
+ static void ht6560a_set_piomode(struct ata_port *ap, struct ata_device *adev)
+@@ -364,14 +450,14 @@ static struct ata_port_operations ht6560a_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ /*
+ * Holtek 6560B support
+ *
+- * This controller supports PIO0 to PIO4. We honour the BIOS/jumper FIFO setting
+- * unless we see an ATAPI device in which case we force it off.
++ * This controller supports PIO0 to PIO4. We honour the BIOS/jumper FIFO
++ * setting unless we see an ATAPI device in which case we force it off.
+ *
+ * FIXME: need to implement 2nd channel support.
+ */
+@@ -398,7 +484,7 @@ static void ht6560b_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ if (adev->class != ATA_DEV_ATA) {
+ u8 rconf = inb(0x3E6);
+ if (rconf & 0x24) {
+- rconf &= ~ 0x24;
++ rconf &= ~0x24;
+ outb(rconf, 0x3E6);
+ }
+ }
+@@ -423,13 +509,13 @@ static struct ata_port_operations ht6560b_port_ops = {
+ .qc_prep = ata_qc_prep,
+ .qc_issue = ata_qc_issue_prot,
+
+- .data_xfer = ata_data_xfer, /* FIXME: Check 32bit and noirq */
++ .data_xfer = ata_data_xfer, /* FIXME: Check 32bit and noirq */
+
+ .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ /*
+@@ -462,7 +548,8 @@ static u8 opti_syscfg(u8 reg)
+ * This controller supports PIO0 to PIO3.
+ */
+
+-static void opti82c611a_set_piomode(struct ata_port *ap, struct ata_device *adev)
++static void opti82c611a_set_piomode(struct ata_port *ap,
++ struct ata_device *adev)
+ {
+ u8 active, recover, setup;
+ struct ata_timing t;
+@@ -549,7 +636,7 @@ static struct ata_port_operations opti82c611a_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
+ /*
+@@ -681,77 +768,398 @@ static struct ata_port_operations opti82c46x_port_ops = {
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+- .port_start = ata_port_start,
++ .port_start = ata_sff_port_start,
+ };
+
++static void qdi6500_set_piomode(struct ata_port *ap, struct ata_device *adev)
++{
++ struct ata_timing t;
++ struct legacy_data *qdi = ap->host->private_data;
++ int active, recovery;
++ u8 timing;
++
++ /* Get the timing data in cycles */
++ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++
++ if (qdi->fast) {
++ active = 8 - FIT(t.active, 1, 8);
++ recovery = 18 - FIT(t.recover, 3, 18);
++ } else {
++ active = 9 - FIT(t.active, 2, 9);
++ recovery = 15 - FIT(t.recover, 0, 15);
++ }
++ timing = (recovery << 4) | active | 0x08;
++
++ qdi->clock[adev->devno] = timing;
++
++ outb(timing, qdi->timing);
++}
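[Editor's note: all three QDI `set_piomode` routines fold clamped active/recovery clock counts into one timing byte, `(recovery << 4) | active | 0x08`. The computation can be exercised standalone; `FIT` is modeled as a plain clamp and the function names are illustrative:]

```c
static int fit(int v, int min, int max)	/* FIT() is a clamp */
{
	if (v < min)
		return min;
	if (v > max)
		return max;
	return v;
}

/* Timing byte per the qdi65xx code: fast VLB uses the 8/18 ranges,
 * the slow path the 9/15 ranges; both keep each field in a nibble. */
static unsigned char qdi_timing(int t_active, int t_recover, int fast)
{
	int active, recovery;

	if (fast) {
		active = 8 - fit(t_active, 1, 8);
		recovery = 18 - fit(t_recover, 3, 18);
	} else {
		active = 9 - fit(t_active, 2, 9);
		recovery = 15 - fit(t_recover, 0, 15);
	}
	return (recovery << 4) | active | 0x08;
}
```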
+
+ /**
+- * legacy_init_one - attach a legacy interface
+- * @port: port number
+- * @io: I/O port start
+- * @ctrl: control port
++ * qdi6580dp_set_piomode - PIO setup for dual channel
++ * @ap: Port
++ * @adev: Device
+ * @irq: interrupt line
+ *
+- * Register an ISA bus IDE interface. Such interfaces are PIO and we
+- * assume do not support IRQ sharing.
++ * In dual channel mode the 6580 has one clock per channel and we have
++ * to software clockswitch in qc_issue_prot.
+ */
+
+-static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl, int irq)
++static void qdi6580dp_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ {
+- struct legacy_data *ld = &legacy_data[nr_legacy_host];
+- struct ata_host *host;
+- struct ata_port *ap;
+- struct platform_device *pdev;
+- struct ata_port_operations *ops = &legacy_port_ops;
+- void __iomem *io_addr, *ctrl_addr;
+- int pio_modes = pio_mask;
+- u32 mask = (1 << port);
+- u32 iordy = (iordy_mask & mask) ? 0: ATA_FLAG_NO_IORDY;
+- int ret;
++ struct ata_timing t;
++ struct legacy_data *qdi = ap->host->private_data;
++ int active, recovery;
++ u8 timing;
+
+- pdev = platform_device_register_simple(DRV_NAME, nr_legacy_host, NULL, 0);
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ /* Get the timing data in cycles */
++ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++
++ if (qdi->fast) {
++ active = 8 - FIT(t.active, 1, 8);
++ recovery = 18 - FIT(t.recover, 3, 18);
++ } else {
++ active = 9 - FIT(t.active, 2, 9);
++ recovery = 15 - FIT(t.recover, 0, 15);
++ }
++ timing = (recovery << 4) | active | 0x08;
+
+- ret = -EBUSY;
+- if (devm_request_region(&pdev->dev, io, 8, "pata_legacy") == NULL ||
+- devm_request_region(&pdev->dev, ctrl, 1, "pata_legacy") == NULL)
+- goto fail;
++ qdi->clock[adev->devno] = timing;
+
+- ret = -ENOMEM;
+- io_addr = devm_ioport_map(&pdev->dev, io, 8);
+- ctrl_addr = devm_ioport_map(&pdev->dev, ctrl, 1);
+- if (!io_addr || !ctrl_addr)
+- goto fail;
++ outb(timing, qdi->timing + 2 * ap->port_no);
++ /* Clear the FIFO */
++ if (adev->class != ATA_DEV_ATA)
++ outb(0x5F, qdi->timing + 3);
++}
+
+- if (ht6560a & mask) {
+- ops = &ht6560a_port_ops;
+- pio_modes = 0x07;
+- iordy = ATA_FLAG_NO_IORDY;
+- }
+- if (ht6560b & mask) {
+- ops = &ht6560b_port_ops;
+- pio_modes = 0x1F;
+- }
+- if (opti82c611a & mask) {
+- ops = &opti82c611a_port_ops;
+- pio_modes = 0x0F;
++/**
++ * qdi6580_set_piomode - PIO setup for single channel
++ * @ap: Port
++ * @adev: Device
++ *
++ * In single channel mode the 6580 has one clock per device and we can
++ * avoid the requirement to clock switch. We also have to load the timing
++ * into the right clock according to whether we are master or slave.
++ */
++
++static void qdi6580_set_piomode(struct ata_port *ap, struct ata_device *adev)
++{
++ struct ata_timing t;
++ struct legacy_data *qdi = ap->host->private_data;
++ int active, recovery;
++ u8 timing;
++
++ /* Get the timing data in cycles */
++ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++
++ if (qdi->fast) {
++ active = 8 - FIT(t.active, 1, 8);
++ recovery = 18 - FIT(t.recover, 3, 18);
++ } else {
++ active = 9 - FIT(t.active, 2, 9);
++ recovery = 15 - FIT(t.recover, 0, 15);
+ }
+- if (opti82c46x & mask) {
+- ops = &opti82c46x_port_ops;
+- pio_modes = 0x0F;
++ timing = (recovery << 4) | active | 0x08;
++ qdi->clock[adev->devno] = timing;
++ outb(timing, qdi->timing + 2 * adev->devno);
++ /* Clear the FIFO */
++ if (adev->class != ATA_DEV_ATA)
++ outb(0x5F, qdi->timing + 3);
++}
++
++/**
++ * qdi_qc_issue_prot - command issue
++ * @qc: command pending
++ *
++ * Called when the libata layer is about to issue a command. We wrap
++ * this interface so that we can load the correct ATA timings.
++ */
++
++static unsigned int qdi_qc_issue_prot(struct ata_queued_cmd *qc)
++{
++ struct ata_port *ap = qc->ap;
++ struct ata_device *adev = qc->dev;
++ struct legacy_data *qdi = ap->host->private_data;
++
++ if (qdi->clock[adev->devno] != qdi->last) {
++ if (adev->pio_mode) {
++ qdi->last = qdi->clock[adev->devno];
++ outb(qdi->clock[adev->devno], qdi->timing +
++ 2 * ap->port_no);
++ }
+ }
++ return ata_qc_issue_prot(qc);
++}
+
+- /* Probe for automatically detectable controllers */
++static unsigned int vlb32_data_xfer(struct ata_device *adev, unsigned char *buf,
++ unsigned int buflen, int rw)
++{
++ struct ata_port *ap = adev->link->ap;
++ int slop = buflen & 3;
+
+- if (io == 0x1F0 && ops == &legacy_port_ops) {
+- unsigned long flags;
++ if (ata_id_has_dword_io(adev->id)) {
++ if (rw == WRITE)
++ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
++ else
++ ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+
+- local_irq_save(flags);
++ if (unlikely(slop)) {
++ u32 pad;
++ if (rw == WRITE) {
++ memcpy(&pad, buf + buflen - slop, slop);
++ pad = le32_to_cpu(pad);
++ iowrite32(pad, ap->ioaddr.data_addr);
++ } else {
++ pad = ioread32(ap->ioaddr.data_addr);
++ pad = cpu_to_le32(pad);
++ memcpy(buf + buflen - slop, &pad, slop);
++ }
++ }
++ return (buflen + 3) & ~3;
++ } else
++ return ata_data_xfer(adev, buf, buflen, rw);
++}
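[Editor's note: `vlb32_data_xfer()` above moves whole dwords and pads a 1-3 byte tail through a bounce dword, so the length it reports is the request rounded up to a multiple of four. That rounding in isolation:]

```c
/* Dword-rounded transfer length, matching "return (buflen + 3) & ~3". */
static unsigned int vlb32_len(unsigned int buflen)
{
	return (buflen + 3) & ~3u;
}
```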
++
++static int qdi_port(struct platform_device *dev,
++ struct legacy_probe *lp, struct legacy_data *ld)
++{
++ if (devm_request_region(&dev->dev, lp->private, 4, "qdi") == NULL)
++ return -EBUSY;
++ ld->timing = lp->private;
++ return 0;
++}
++
++static struct ata_port_operations qdi6500_port_ops = {
++ .set_piomode = qdi6500_set_piomode,
++
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = qdi_qc_issue_prot,
++
++ .data_xfer = vlb32_data_xfer,
++
++ .irq_handler = ata_interrupt,
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_sff_port_start,
++};
++
++static struct ata_port_operations qdi6580_port_ops = {
++ .set_piomode = qdi6580_set_piomode,
++
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = ata_qc_issue_prot,
++
++ .data_xfer = vlb32_data_xfer,
++
++ .irq_handler = ata_interrupt,
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_sff_port_start,
++};
++
++static struct ata_port_operations qdi6580dp_port_ops = {
++ .set_piomode = qdi6580dp_set_piomode,
++
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = qdi_qc_issue_prot,
++
++ .data_xfer = vlb32_data_xfer,
++
++ .irq_handler = ata_interrupt,
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_sff_port_start,
++};
++
++static DEFINE_SPINLOCK(winbond_lock);
++
++static void winbond_writecfg(unsigned long port, u8 reg, u8 val)
++{
++ unsigned long flags;
++ spin_lock_irqsave(&winbond_lock, flags);
++ outb(reg, port + 0x01);
++ outb(val, port + 0x02);
++ spin_unlock_irqrestore(&winbond_lock, flags);
++}
++
++static u8 winbond_readcfg(unsigned long port, u8 reg)
++{
++ u8 val;
++
++ unsigned long flags;
++ spin_lock_irqsave(&winbond_lock, flags);
++ outb(reg, port + 0x01);
++ val = inb(port + 0x02);
++ spin_unlock_irqrestore(&winbond_lock, flags);
++
++ return val;
++}
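[Editor's note: the Winbond helpers above implement the classic index/data register pair: write the register number to port+1, then access the value at port+2, under a spinlock. A lock-free model with an array standing in for the I/O ports (the `wb_*` names are illustrative):]

```c
struct wb_cfg { unsigned char index; unsigned char regs[256]; };

static void wb_write(struct wb_cfg *w, unsigned char reg, unsigned char val)
{
	w->index = reg;			/* outb(reg, port + 0x01) */
	w->regs[w->index] = val;	/* outb(val, port + 0x02) */
}

static unsigned char wb_read(struct wb_cfg *w, unsigned char reg)
{
	w->index = reg;			/* outb(reg, port + 0x01) */
	return w->regs[w->index];	/* inb(port + 0x02) */
}

static unsigned char wb_demo(void)
{
	struct wb_cfg w = { 0 };

	wb_write(&w, 0x81, 0x40);
	return wb_read(&w, 0x81);
}
```

The real code must hold `winbond_lock` across both port accesses because another CPU interleaving an index write between the two steps would redirect the data access to the wrong register.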
++
++static void winbond_set_piomode(struct ata_port *ap, struct ata_device *adev)
++{
++ struct ata_timing t;
++ struct legacy_data *winbond = ap->host->private_data;
++ int active, recovery;
++ u8 reg;
++ int timing = 0x88 + (ap->port_no * 4) + (adev->devno * 2);
++
++ reg = winbond_readcfg(winbond->timing, 0x81);
++
++ /* Get the timing data in cycles */
++ if (reg & 0x40) /* Fast VLB bus, assume 50MHz */
++ ata_timing_compute(adev, adev->pio_mode, &t, 20000, 1000);
++ else
++ ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
++
++ active = (FIT(t.active, 3, 17) - 1) & 0x0F;
++ recovery = (FIT(t.recover, 1, 15) + 1) & 0x0F;
++ timing = (active << 4) | recovery;
++ winbond_writecfg(winbond->timing, timing, reg);
++
++ /* Load the setup timing */
++
++ reg = 0x35;
++ if (adev->class != ATA_DEV_ATA)
++ reg |= 0x08; /* FIFO off */
++ if (!ata_pio_need_iordy(adev))
++ reg |= 0x02; /* IORDY off */
++ reg |= (FIT(t.setup, 0, 3) << 6);
++ winbond_writecfg(winbond->timing, timing + 1, reg);
++}
++
++static int winbond_port(struct platform_device *dev,
++ struct legacy_probe *lp, struct legacy_data *ld)
++{
++ if (devm_request_region(&dev->dev, lp->private, 4, "winbond") == NULL)
++ return -EBUSY;
++ ld->timing = lp->private;
++ return 0;
++}
++
++static struct ata_port_operations winbond_port_ops = {
++ .set_piomode = winbond_set_piomode,
++
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = ata_qc_issue_prot,
++
++ .data_xfer = vlb32_data_xfer,
++
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
+
++ .port_start = ata_sff_port_start,
++};
++
++static struct legacy_controller controllers[] = {
++ {"BIOS", &legacy_port_ops, 0x1F,
++ ATA_FLAG_NO_IORDY, NULL },
++ {"Snooping", &simple_port_ops, 0x1F,
++ 0 , NULL },
++ {"PDC20230", &pdc20230_port_ops, 0x7,
++ ATA_FLAG_NO_IORDY, NULL },
++ {"HT6560A", &ht6560a_port_ops, 0x07,
++ ATA_FLAG_NO_IORDY, NULL },
++ {"HT6560B", &ht6560b_port_ops, 0x1F,
++ ATA_FLAG_NO_IORDY, NULL },
++ {"OPTI82C611A", &opti82c611a_port_ops, 0x0F,
++ 0 , NULL },
++ {"OPTI82C46X", &opti82c46x_port_ops, 0x0F,
++ 0 , NULL },
++ {"QDI6500", &qdi6500_port_ops, 0x07,
++ ATA_FLAG_NO_IORDY, qdi_port },
++ {"QDI6580", &qdi6580_port_ops, 0x1F,
++ 0 , qdi_port },
++ {"QDI6580DP", &qdi6580dp_port_ops, 0x1F,
++ 0 , qdi_port },
++ {"W83759A", &winbond_port_ops, 0x1F,
++ 0 , winbond_port }
++};
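[Editor's note: the `controllers[]` table above replaces the old chained `if` blocks: `probe_chip_type()` returns an `enum controller` value and `legacy_init_one()` indexes the table for ops, PIO mask and flags in one step. A trimmed sketch of that table-driven dispatch (only a few entries, and the lookup helper is illustrative):]

```c
enum controller { BIOS = 0, SNOOP = 1, PDC20230 = 2 };

struct legacy_controller_info {
	const char *name;
	unsigned int pio_mask;
};

static const struct legacy_controller_info ctrl_table[] = {
	{ "BIOS",     0x1F },
	{ "Snooping", 0x1F },
	{ "PDC20230", 0x07 },
};

static unsigned int pio_mask_for(enum controller type)
{
	return ctrl_table[type].pio_mask;	/* enum value == table index */
}
```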
++
++/**
++ * probe_chip_type - Discover controller
++ * @probe: Probe entry to check
++ *
++ * Probe an ATA port and identify the type of controller. We don't
++ * check if the controller appears to be driveless at this point.
++ */
++
++static __init int probe_chip_type(struct legacy_probe *probe)
++{
++ int mask = 1 << probe->slot;
++
++ if (winbond && (probe->port == 0x1F0 || probe->port == 0x170)) {
++ u8 reg = winbond_readcfg(winbond, 0x81);
++ reg |= 0x80; /* jumpered mode off */
++ winbond_writecfg(winbond, 0x81, reg);
++ reg = winbond_readcfg(winbond, 0x83);
++ reg |= 0xF0; /* local control */
++ winbond_writecfg(winbond, 0x83, reg);
++ reg = winbond_readcfg(winbond, 0x85);
++ reg |= 0xF0; /* programmable timing */
++ winbond_writecfg(winbond, 0x85, reg);
++
++ reg = winbond_readcfg(winbond, 0x81);
++
++ if (reg & mask)
++ return W83759A;
++ }
++ if (probe->port == 0x1F0) {
++ unsigned long flags;
++ local_irq_save(flags);
+ /* Probes */
+- inb(0x1F5);
+ outb(inb(0x1F2) | 0x80, 0x1F2);
++ inb(0x1F5);
+ inb(0x1F2);
+ inb(0x3F6);
+ inb(0x3F6);
+@@ -760,29 +1168,83 @@ static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl
+
+ if ((inb(0x1F2) & 0x80) == 0) {
+ /* PDC20230c or 20630 ? */
+- printk(KERN_INFO "PDC20230-C/20630 VLB ATA controller detected.\n");
+- pio_modes = 0x07;
+- ops = &pdc20230_port_ops;
+- iordy = ATA_FLAG_NO_IORDY;
++ printk(KERN_INFO "PDC20230-C/20630 VLB ATA controller"
++ " detected.\n");
+ udelay(100);
+ inb(0x1F5);
++ local_irq_restore(flags);
++ return PDC20230;
+ } else {
+ outb(0x55, 0x1F2);
+ inb(0x1F2);
+ inb(0x1F2);
+- if (inb(0x1F2) == 0x00) {
+- printk(KERN_INFO "PDC20230-B VLB ATA controller detected.\n");
+- }
++ if (inb(0x1F2) == 0x00)
++ printk(KERN_INFO "PDC20230-B VLB ATA "
++ "controller detected.\n");
++ local_irq_restore(flags);
++ return BIOS;
+ }
+ local_irq_restore(flags);
+ }
+
++ if (ht6560a & mask)
++ return HT6560A;
++ if (ht6560b & mask)
++ return HT6560B;
++ if (opti82c611a & mask)
++ return OPTI611A;
++ if (opti82c46x & mask)
++ return OPTI46X;
++ if (autospeed & mask)
++ return SNOOP;
++ return BIOS;
++}
++
++
++/**
++ * legacy_init_one - attach a legacy interface
++ * @pl: probe record
++ *
++ * Register an ISA bus IDE interface. Such interfaces are PIO and we
++ * assume they do not support IRQ sharing.
++ */
++
++static __init int legacy_init_one(struct legacy_probe *probe)
++{
++ struct legacy_controller *controller = &controllers[probe->type];
++ int pio_modes = controller->pio_mask;
++ unsigned long io = probe->port;
++ u32 mask = (1 << probe->slot);
++ struct ata_port_operations *ops = controller->ops;
++ struct legacy_data *ld = &legacy_data[probe->slot];
++ struct ata_host *host = NULL;
++ struct ata_port *ap;
++ struct platform_device *pdev;
++ struct ata_device *dev;
++ void __iomem *io_addr, *ctrl_addr;
++ u32 iordy = (iordy_mask & mask) ? 0: ATA_FLAG_NO_IORDY;
++ int ret;
+
+- /* Chip does mode setting by command snooping */
+- if (ops == &legacy_port_ops && (autospeed & mask))
+- ops = &simple_port_ops;
++ iordy |= controller->flags;
++
++ pdev = platform_device_register_simple(DRV_NAME, probe->slot, NULL, 0);
++ if (IS_ERR(pdev))
++ return PTR_ERR(pdev);
++
++ ret = -EBUSY;
++ if (devm_request_region(&pdev->dev, io, 8, "pata_legacy") == NULL ||
++ devm_request_region(&pdev->dev, io + 0x0206, 1,
++ "pata_legacy") == NULL)
++ goto fail;
+
+ ret = -ENOMEM;
++ io_addr = devm_ioport_map(&pdev->dev, io, 8);
++ ctrl_addr = devm_ioport_map(&pdev->dev, io + 0x0206, 1);
++ if (!io_addr || !ctrl_addr)
++ goto fail;
++ if (controller->setup)
++ if (controller->setup(pdev, probe, ld) < 0)
++ goto fail;
+ host = ata_host_alloc(&pdev->dev, 1);
+ if (!host)
+ goto fail;
+@@ -795,19 +1257,29 @@ static __init int legacy_init_one(int port, unsigned long io, unsigned long ctrl
+ ap->ioaddr.altstatus_addr = ctrl_addr;
+ ap->ioaddr.ctl_addr = ctrl_addr;
+ ata_std_ports(&ap->ioaddr);
+- ap->private_data = ld;
++ ap->host->private_data = ld;
+
+- ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io, ctrl);
++ ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io, io + 0x0206);
+
+- ret = ata_host_activate(host, irq, ata_interrupt, 0, &legacy_sht);
++ ret = ata_host_activate(host, probe->irq, ata_interrupt, 0,
++ &legacy_sht);
+ if (ret)
+ goto fail;
+-
+- legacy_host[nr_legacy_host++] = dev_get_drvdata(&pdev->dev);
+ ld->platform_dev = pdev;
+- return 0;
+
++ /* Nothing found means we drop the port as it's probably not there */
++
++ ret = -ENODEV;
++ ata_link_for_each_dev(dev, &ap->link) {
++ if (!ata_dev_absent(dev)) {
++ legacy_host[probe->slot] = host;
++ ld->platform_dev = pdev;
++ return 0;
++ }
++ }
+ fail:
++ if (host)
++ ata_host_detach(host);
+ platform_device_unregister(pdev);
+ return ret;
+ }
+@@ -818,13 +1290,15 @@ fail:
+ * @master: set this if we find an ATA master
+ * @master: set this if we find an ATA secondary
+ *
+- * A small number of vendors implemented early PCI ATA interfaces on bridge logic
+- * without the ATA interface being PCI visible. Where we have a matching PCI driver
+- * we must skip the relevant device here. If we don't know about it then the legacy
+- * driver is the right driver anyway.
++ * A small number of vendors implemented early PCI ATA interfaces
++ * on bridge logic without the ATA interface being PCI visible.
++ * Where we have a matching PCI driver we must skip the relevant
++ * device here. If we don't know about it then the legacy driver
++ * is the right driver anyway.
+ */
+
+-static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *secondary)
++static void __init legacy_check_special_cases(struct pci_dev *p, int *primary,
++ int *secondary)
+ {
+ /* Cyrix CS5510 pre SFF MWDMA ATA on the bridge */
+ if (p->vendor == 0x1078 && p->device == 0x0000) {
+@@ -840,7 +1314,8 @@ static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *sec
+ if (p->vendor == 0x8086 && p->device == 0x1234) {
+ u16 r;
+ pci_read_config_word(p, 0x6C, &r);
+- if (r & 0x8000) { /* ATA port enabled */
++ if (r & 0x8000) {
++ /* ATA port enabled */
+ if (r & 0x4000)
+ *secondary = 1;
+ else
+@@ -850,6 +1325,114 @@ static void legacy_check_special_cases(struct pci_dev *p, int *primary, int *sec
+ }
+ }
+
++static __init void probe_opti_vlb(void)
++{
++ /* If an OPTI 82C46X is present find out where the channels are */
++ static const char *optis[4] = {
++ "3/463MV", "5MV",
++ "5MVA", "5MVB"
++ };
++ u8 chans = 1;
++ u8 ctrl = (opti_syscfg(0x30) & 0xC0) >> 6;
++
++ opti82c46x = 3; /* Assume master and slave first */
++ printk(KERN_INFO DRV_NAME ": Opti 82C46%s chipset support.\n",
++ optis[ctrl]);
++ if (ctrl == 3)
++ chans = (opti_syscfg(0x3F) & 0x20) ? 2 : 1;
++ ctrl = opti_syscfg(0xAC);
++ /* Check enabled and this port is the 465MV port. On the
++ MVB we may have two channels */
++ if (ctrl & 8) {
++ if (chans == 2) {
++ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
++ legacy_probe_add(0x170, 15, OPTI46X, 0);
++ }
++ if (ctrl & 4)
++ legacy_probe_add(0x170, 15, OPTI46X, 0);
++ else
++ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
++ } else
++ legacy_probe_add(0x1F0, 14, OPTI46X, 0);
++}
++
++static __init void qdi65_identify_port(u8 r, u8 res, unsigned long port)
++{
++ static const unsigned long ide_port[2] = { 0x170, 0x1F0 };
++ /* Check card type */
++ if ((r & 0xF0) == 0xC0) {
++ /* QD6500: single channel */
++ if (r & 8)
++ /* Disabled ? */
++ return;
++ legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
++ QDI6500, port);
++ }
++ if (((r & 0xF0) == 0xA0) || (r & 0xF0) == 0x50) {
++ /* QD6580: dual channel */
++ if (!request_region(port + 2, 2, "pata_qdi")) {
++ release_region(port, 2);
++ return;
++ }
++ res = inb(port + 3);
++ /* Single channel mode ? */
++ if (res & 1)
++ legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
++ QDI6580, port);
++ else { /* Dual channel mode */
++ legacy_probe_add(0x1F0, 14, QDI6580DP, port);
++ /* port + 0x02, r & 0x04 */
++ legacy_probe_add(0x170, 15, QDI6580DP, port + 2);
++ }
++ release_region(port + 2, 2);
++ }
++}
++
++static __init void probe_qdi_vlb(void)
++{
++ unsigned long flags;
++ static const unsigned long qd_port[2] = { 0x30, 0xB0 };
++ int i;
++
++ /*
++ * Check each possible QD65xx base address
++ */
++
++ for (i = 0; i < 2; i++) {
++ unsigned long port = qd_port[i];
++ u8 r, res;
++
++
++ if (request_region(port, 2, "pata_qdi")) {
++ /* Check for a card */
++ local_irq_save(flags);
++ /* I have no h/w that needs this delay but it
++ is present in the historic code */
++ r = inb(port);
++ udelay(1);
++ outb(0x19, port);
++ udelay(1);
++ res = inb(port);
++ udelay(1);
++ outb(r, port);
++ udelay(1);
++ local_irq_restore(flags);
++
++ /* Fail */
++ if (res == 0x19) {
++ release_region(port, 2);
++ continue;
++ }
++ /* Passes the presence test */
++ r = inb(port + 1);
++ udelay(1);
++ /* Check port agrees with port set */
++ if ((r & 2) >> 1 == i)
++ qdi65_identify_port(r, res, port);
++ release_region(port, 2);
++ }
++ }
++}
+
+ /**
+ * legacy_init - attach legacy interfaces
+@@ -867,15 +1450,17 @@ static __init int legacy_init(void)
+ int ct = 0;
+ int primary = 0;
+ int secondary = 0;
+- int last_port = NR_HOST;
++ int pci_present = 0;
++ struct legacy_probe *pl = &probe_list[0];
++ int slot = 0;
+
+ struct pci_dev *p = NULL;
+
+ for_each_pci_dev(p) {
+ int r;
+- /* Check for any overlap of the system ATA mappings. Native mode controllers
+- stuck on these addresses or some devices in 'raid' mode won't be found by
+- the storage class test */
++ /* Check for any overlap of the system ATA mappings. Native
++ mode controllers stuck on these addresses or some devices
++ in 'raid' mode won't be found by the storage class test */
+ for (r = 0; r < 6; r++) {
+ if (pci_resource_start(p, r) == 0x1f0)
+ primary = 1;
+@@ -885,49 +1470,39 @@ static __init int legacy_init(void)
+ /* Check for special cases */
+ legacy_check_special_cases(p, &primary, &secondary);
+
+- /* If PCI bus is present then don't probe for tertiary legacy ports */
+- if (probe_all == 0)
+- last_port = 2;
++ /* If PCI bus is present then don't probe for tertiary
++ legacy ports */
++ pci_present = 1;
+ }
+
+- /* If an OPTI 82C46X is present find out where the channels are */
+- if (opti82c46x) {
+- static const char *optis[4] = {
+- "3/463MV", "5MV",
+- "5MVA", "5MVB"
+- };
+- u8 chans = 1;
+- u8 ctrl = (opti_syscfg(0x30) & 0xC0) >> 6;
+-
+- opti82c46x = 3; /* Assume master and slave first */
+- printk(KERN_INFO DRV_NAME ": Opti 82C46%s chipset support.\n", optis[ctrl]);
+- if (ctrl == 3)
+- chans = (opti_syscfg(0x3F) & 0x20) ? 2 : 1;
+- ctrl = opti_syscfg(0xAC);
+- /* Check enabled and this port is the 465MV port. On the
+- MVB we may have two channels */
+- if (ctrl & 8) {
+- if (ctrl & 4)
+- opti82c46x = 2; /* Slave */
+- else
+- opti82c46x = 1; /* Master */
+- if (chans == 2)
+- opti82c46x = 3; /* Master and Slave */
+- } /* Slave only */
+- else if (chans == 1)
+- opti82c46x = 1;
++ if (winbond == 1)
++ winbond = 0x130; /* Default port, alt is 1B0 */
++
++ if (primary == 0 || all)
++ legacy_probe_add(0x1F0, 14, UNKNOWN, 0);
++ if (secondary == 0 || all)
++ legacy_probe_add(0x170, 15, UNKNOWN, 0);
++
++ if (probe_all || !pci_present) {
++ /* ISA/VLB extra ports */
++ legacy_probe_add(0x1E8, 11, UNKNOWN, 0);
++ legacy_probe_add(0x168, 10, UNKNOWN, 0);
++ legacy_probe_add(0x1E0, 8, UNKNOWN, 0);
++ legacy_probe_add(0x160, 12, UNKNOWN, 0);
+ }
+
+- for (i = 0; i < last_port; i++) {
+- /* Skip primary if we have seen a PCI one */
+- if (i == 0 && primary == 1)
+- continue;
+- /* Skip secondary if we have seen a PCI one */
+- if (i == 1 && secondary == 1)
++ if (opti82c46x)
++ probe_opti_vlb();
++ if (qdi)
++ probe_qdi_vlb();
++
++ for (i = 0; i < NR_HOST; i++, pl++) {
++ if (pl->port == 0)
+ continue;
+- if (legacy_init_one(i, legacy_port[i],
+- legacy_port[i] + 0x0206,
+- legacy_irq[i]) == 0)
++ if (pl->type == UNKNOWN)
++ pl->type = probe_chip_type(pl);
++ pl->slot = slot++;
++ if (legacy_init_one(pl) == 0)
+ ct++;
+ }
+ if (ct != 0)
+@@ -941,11 +1516,8 @@ static __exit void legacy_exit(void)
+
+ for (i = 0; i < nr_legacy_host; i++) {
+ struct legacy_data *ld = &legacy_data[i];
+-
+ ata_host_detach(legacy_host[i]);
+ platform_device_unregister(ld->platform_dev);
+- if (ld->timing)
+- release_region(ld->timing, 2);
+ }
+ }
+
+@@ -960,9 +1532,9 @@ module_param(ht6560a, int, 0);
+ module_param(ht6560b, int, 0);
+ module_param(opti82c611a, int, 0);
+ module_param(opti82c46x, int, 0);
++module_param(qdi, int, 0);
+ module_param(pio_mask, int, 0);
+ module_param(iordy_mask, int, 0);
+
+ module_init(legacy_init);
+ module_exit(legacy_exit);
+-
+diff --git a/drivers/ata/pata_mpc52xx.c b/drivers/ata/pata_mpc52xx.c
+index 50c56e2..dc40162 100644
+--- a/drivers/ata/pata_mpc52xx.c
++++ b/drivers/ata/pata_mpc52xx.c
+@@ -364,7 +364,7 @@ mpc52xx_ata_probe(struct of_device *op, const struct of_device_id *match)
+ {
+ unsigned int ipb_freq;
+ struct resource res_mem;
+- int ata_irq = NO_IRQ;
++ int ata_irq;
+ struct mpc52xx_ata __iomem *ata_regs;
+ struct mpc52xx_ata_priv *priv;
+ int rv;
+diff --git a/drivers/ata/pata_ninja32.c b/drivers/ata/pata_ninja32.c
+new file mode 100644
+index 0000000..1c1b835
+--- /dev/null
++++ b/drivers/ata/pata_ninja32.c
+@@ -0,0 +1,214 @@
++/*
++ * pata_ninja32.c - Ninja32 PATA for new ATA layer
++ * (C) 2007 Red Hat Inc
++ * Alan Cox <alan at redhat.com>
++ *
++ * Note: The controller, like many controllers, has shared timings for
++ * PIO and DMA. We thus flip to the DMA timings in dma_start and flip back
++ * in the dma_stop function. Thus we actually don't need a set_dmamode
++ * method as the PIO method is always called and will set the right PIO
++ * timing parameters.
++ *
++ * The Ninja32 Cardbus is not a generic SFF controller. Instead it is
++ * laid out as follows off BAR 0. This is based upon Mark Lord's delkin
++ * driver and the extensive analysis done by the BSD developers, notably
++ * ITOH Yasufumi.
++ *
++ * Base + 0x00 IRQ Status
++ * Base + 0x01 IRQ control
++ * Base + 0x02 Chipset control
++ * Base + 0x04 VDMA and reset control + wait bits
++ * Base + 0x08 BMIMBA
++ * Base + 0x0C DMA Length
++ * Base + 0x10 Taskfile
++ * Base + 0x18 BMDMA Status ?
++ * Base + 0x1C
++ * Base + 0x1D Bus master control
++ * bit 0 = enable
++ * bit 1 = 0 write/1 read
++ * bit 2 = 1 sgtable
++ * bit 3 = go
++ * bit 4-6 wait bits
++ * bit 7 = done
++ * Base + 0x1E AltStatus
++ * Base + 0x1F timing register
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/init.h>
++#include <linux/blkdev.h>
++#include <linux/delay.h>
++#include <scsi/scsi_host.h>
++#include <linux/libata.h>
++
++#define DRV_NAME "pata_ninja32"
++#define DRV_VERSION "0.0.1"
++
++
++/**
++ * ninja32_set_piomode - set initial PIO mode data
++ * @ap: ATA interface
++ * @adev: ATA device
++ *
++ * Called to do the PIO mode setup. Our timing registers are shared
++ * but we want to set the PIO timing by default.
++ */
++
++static void ninja32_set_piomode(struct ata_port *ap, struct ata_device *adev)
++{
++ static u16 pio_timing[5] = {
++ 0xd6, 0x85, 0x44, 0x33, 0x13
++ };
++ iowrite8(pio_timing[adev->pio_mode - XFER_PIO_0],
++ ap->ioaddr.bmdma_addr + 0x1f);
++ ap->private_data = adev;
++}
++
++
++static void ninja32_dev_select(struct ata_port *ap, unsigned int device)
++{
++ struct ata_device *adev = &ap->link.device[device];
++ if (ap->private_data != adev) {
++ iowrite8(0xd6, ap->ioaddr.bmdma_addr + 0x1f);
++ ata_std_dev_select(ap, device);
++ ninja32_set_piomode(ap, adev);
++ }
++}
++
++static struct scsi_host_template ninja32_sht = {
++ .module = THIS_MODULE,
++ .name = DRV_NAME,
++ .ioctl = ata_scsi_ioctl,
++ .queuecommand = ata_scsi_queuecmd,
++ .can_queue = ATA_DEF_QUEUE,
++ .this_id = ATA_SHT_THIS_ID,
++ .sg_tablesize = LIBATA_MAX_PRD,
++ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
++ .emulated = ATA_SHT_EMULATED,
++ .use_clustering = ATA_SHT_USE_CLUSTERING,
++ .proc_name = DRV_NAME,
++ .dma_boundary = ATA_DMA_BOUNDARY,
++ .slave_configure = ata_scsi_slave_config,
++ .slave_destroy = ata_scsi_slave_destroy,
++ .bios_param = ata_std_bios_param,
++};
++
++static struct ata_port_operations ninja32_port_ops = {
++ .set_piomode = ninja32_set_piomode,
++ .mode_filter = ata_pci_default_filter,
++
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ninja32_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .bmdma_setup = ata_bmdma_setup,
++ .bmdma_start = ata_bmdma_start,
++ .bmdma_stop = ata_bmdma_stop,
++ .bmdma_status = ata_bmdma_status,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = ata_qc_issue_prot,
++
++ .data_xfer = ata_data_xfer,
++
++ .irq_handler = ata_interrupt,
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_sff_port_start,
++};
++
++static int ninja32_init_one(struct pci_dev *dev, const struct pci_device_id *id)
++{
++ struct ata_host *host;
++ struct ata_port *ap;
++ void __iomem *base;
++ int rc;
++
++ host = ata_host_alloc(&dev->dev, 1);
++ if (!host)
++ return -ENOMEM;
++ ap = host->ports[0];
++
++ /* Set up the PCI device */
++ rc = pcim_enable_device(dev);
++ if (rc)
++ return rc;
++ rc = pcim_iomap_regions(dev, 1 << 0, DRV_NAME);
++ if (rc == -EBUSY)
++ pcim_pin_device(dev);
++ if (rc)
++ return rc;
++
++ host->iomap = pcim_iomap_table(dev);
++ rc = pci_set_dma_mask(dev, ATA_DMA_MASK);
++ if (rc)
++ return rc;
++ rc = pci_set_consistent_dma_mask(dev, ATA_DMA_MASK);
++ if (rc)
++ return rc;
++ pci_set_master(dev);
++
++ /* Set up the register mappings */
++ base = host->iomap[0];
++ if (!base)
++ return -ENOMEM;
++ ap->ops = &ninja32_port_ops;
++ ap->pio_mask = 0x1F;
++ ap->flags |= ATA_FLAG_SLAVE_POSS;
++
++ ap->ioaddr.cmd_addr = base + 0x10;
++ ap->ioaddr.ctl_addr = base + 0x1E;
++ ap->ioaddr.altstatus_addr = base + 0x1E;
++ ap->ioaddr.bmdma_addr = base;
++ ata_std_ports(&ap->ioaddr);
++
++ iowrite8(0x05, base + 0x01); /* Enable interrupt lines */
++ iowrite8(0xB3, base + 0x02); /* Burst, ?? setup */
++ iowrite8(0x00, base + 0x04); /* WAIT0 ? */
++ /* FIXME: Should we disable them at remove ? */
++ return ata_host_activate(host, dev->irq, ata_interrupt,
++ IRQF_SHARED, &ninja32_sht);
++}
++
++static const struct pci_device_id ninja32[] = {
++ { 0x1145, 0xf021, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
++ { 0x1145, 0xf024, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
++ { },
++};
++
++static struct pci_driver ninja32_pci_driver = {
++ .name = DRV_NAME,
++ .id_table = ninja32,
++ .probe = ninja32_init_one,
++ .remove = ata_pci_remove_one
++};
++
++static int __init ninja32_init(void)
++{
++ return pci_register_driver(&ninja32_pci_driver);
++}
++
++static void __exit ninja32_exit(void)
++{
++ pci_unregister_driver(&ninja32_pci_driver);
++}
++
++MODULE_AUTHOR("Alan Cox");
++MODULE_DESCRIPTION("low-level driver for Ninja32 ATA");
++MODULE_LICENSE("GPL");
++MODULE_DEVICE_TABLE(pci, ninja32);
++MODULE_VERSION(DRV_VERSION);
++
++module_init(ninja32_init);
++module_exit(ninja32_exit);
+diff --git a/drivers/ata/pata_pcmcia.c b/drivers/ata/pata_pcmcia.c
+index fd36099..3e7f6a9 100644
+--- a/drivers/ata/pata_pcmcia.c
++++ b/drivers/ata/pata_pcmcia.c
+@@ -42,7 +42,7 @@
+
+
+ #define DRV_NAME "pata_pcmcia"
+-#define DRV_VERSION "0.3.2"
++#define DRV_VERSION "0.3.3"
+
+ /*
+ * Private data structure to glue stuff together
+@@ -86,6 +86,47 @@ static int pcmcia_set_mode(struct ata_link *link, struct ata_device **r_failed_d
+ return ata_do_set_mode(link, r_failed_dev);
+ }
+
++/**
++ * pcmcia_set_mode_8bit - PCMCIA specific mode setup
++ * @link: link
++ * @r_failed_dev: Return pointer for failed device
++ *
++ * For the simple emulated 8bit stuff the less we do the better.
++ */
++
++static int pcmcia_set_mode_8bit(struct ata_link *link,
++ struct ata_device **r_failed_dev)
++{
++ return 0;
++}
++
++/**
++ * ata_data_xfer_8bit - Transfer data by 8bit PIO
++ * @dev: device to target
++ * @buf: data buffer
++ * @buflen: buffer length
++ * @rw: read/write
++ *
++ * Transfer data from/to the device data register by 8 bit PIO.
++ *
++ * LOCKING:
++ * Inherited from caller.
++ */
++
++static unsigned int ata_data_xfer_8bit(struct ata_device *dev,
++ unsigned char *buf, unsigned int buflen, int rw)
++{
++ struct ata_port *ap = dev->link->ap;
++
++ if (rw == READ)
++ ioread8_rep(ap->ioaddr.data_addr, buf, buflen);
++ else
++ iowrite8_rep(ap->ioaddr.data_addr, buf, buflen);
++
++ return buflen;
++}
++
++
+ static struct scsi_host_template pcmcia_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+@@ -129,6 +170,31 @@ static struct ata_port_operations pcmcia_port_ops = {
+ .port_start = ata_sff_port_start,
+ };
+
++static struct ata_port_operations pcmcia_8bit_port_ops = {
++ .set_mode = pcmcia_set_mode_8bit,
++ .tf_load = ata_tf_load,
++ .tf_read = ata_tf_read,
++ .check_status = ata_check_status,
++ .exec_command = ata_exec_command,
++ .dev_select = ata_std_dev_select,
++
++ .freeze = ata_bmdma_freeze,
++ .thaw = ata_bmdma_thaw,
++ .error_handler = ata_bmdma_error_handler,
++ .post_internal_cmd = ata_bmdma_post_internal_cmd,
++ .cable_detect = ata_cable_40wire,
++
++ .qc_prep = ata_qc_prep,
++ .qc_issue = ata_qc_issue_prot,
++
++ .data_xfer = ata_data_xfer_8bit,
++
++ .irq_clear = ata_bmdma_irq_clear,
++ .irq_on = ata_irq_on,
++
++ .port_start = ata_sff_port_start,
++};
++
+ #define CS_CHECK(fn, ret) \
+ do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
+
+@@ -153,9 +219,12 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
+ cistpl_cftable_entry_t dflt;
+ } *stk = NULL;
+ cistpl_cftable_entry_t *cfg;
+- int pass, last_ret = 0, last_fn = 0, is_kme = 0, ret = -ENOMEM;
++ int pass, last_ret = 0, last_fn = 0, is_kme = 0, ret = -ENOMEM, p;
+ unsigned long io_base, ctl_base;
+ void __iomem *io_addr, *ctl_addr;
++ int n_ports = 1;
++
++ struct ata_port_operations *ops = &pcmcia_port_ops;
+
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (info == NULL)
+@@ -282,27 +351,32 @@ next_entry:
+ /* FIXME: Could be more ports at base + 0x10 but we only deal with
+ one right now */
+ if (pdev->io.NumPorts1 >= 0x20)
+- printk(KERN_WARNING DRV_NAME ": second channel not yet supported.\n");
++ n_ports = 2;
+
++ if (pdev->manf_id == 0x0097 && pdev->card_id == 0x1620)
++ ops = &pcmcia_8bit_port_ops;
+ /*
+ * Having done the PCMCIA plumbing the ATA side is relatively
+ * sane.
+ */
+ ret = -ENOMEM;
+- host = ata_host_alloc(&pdev->dev, 1);
++ host = ata_host_alloc(&pdev->dev, n_ports);
+ if (!host)
+ goto failed;
+- ap = host->ports[0];
+
+- ap->ops = &pcmcia_port_ops;
+- ap->pio_mask = 1; /* ISA so PIO 0 cycles */
+- ap->flags |= ATA_FLAG_SLAVE_POSS;
+- ap->ioaddr.cmd_addr = io_addr;
+- ap->ioaddr.altstatus_addr = ctl_addr;
+- ap->ioaddr.ctl_addr = ctl_addr;
+- ata_std_ports(&ap->ioaddr);
++ for (p = 0; p < n_ports; p++) {
++ ap = host->ports[p];
+
+- ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io_base, ctl_base);
++ ap->ops = ops;
++ ap->pio_mask = 1; /* ISA so PIO 0 cycles */
++ ap->flags |= ATA_FLAG_SLAVE_POSS;
++ ap->ioaddr.cmd_addr = io_addr + 0x10 * p;
++ ap->ioaddr.altstatus_addr = ctl_addr + 0x10 * p;
++ ap->ioaddr.ctl_addr = ctl_addr + 0x10 * p;
++ ata_std_ports(&ap->ioaddr);
++
++ ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", io_base, ctl_base);
++ }
+
+ /* activate */
+ ret = ata_host_activate(host, pdev->irq.AssignedIRQ, ata_interrupt,
+@@ -360,6 +434,7 @@ static struct pcmcia_device_id pcmcia_devices[] = {
+ PCMCIA_DEVICE_MANF_CARD(0x0032, 0x0704),
+ PCMCIA_DEVICE_MANF_CARD(0x0032, 0x2904),
+ PCMCIA_DEVICE_MANF_CARD(0x0045, 0x0401), /* SanDisk CFA */
++ PCMCIA_DEVICE_MANF_CARD(0x0097, 0x1620), /* TI emulated */
+ PCMCIA_DEVICE_MANF_CARD(0x0098, 0x0000), /* Toshiba */
+ PCMCIA_DEVICE_MANF_CARD(0x00a4, 0x002d),
+ PCMCIA_DEVICE_MANF_CARD(0x00ce, 0x0000), /* Samsung */
+diff --git a/drivers/ata/pata_pdc2027x.c b/drivers/ata/pata_pdc2027x.c
+index 2622577..028af5d 100644
+--- a/drivers/ata/pata_pdc2027x.c
++++ b/drivers/ata/pata_pdc2027x.c
+@@ -348,7 +348,7 @@ static unsigned long pdc2027x_mode_filter(struct ata_device *adev, unsigned long
+ ata_id_c_string(pair->id, model_num, ATA_ID_PROD,
+ ATA_ID_PROD_LEN + 1);
+ /* If the master is a maxtor in UDMA6 then the slave should not use UDMA 6 */
+- if (strstr(model_num, "Maxtor") == 0 && pair->dma_mode == XFER_UDMA_6)
++ if (strstr(model_num, "Maxtor") == NULL && pair->dma_mode == XFER_UDMA_6)
+ mask &= ~ (1 << (6 + ATA_SHIFT_UDMA));
+
+ return ata_pci_default_filter(adev, mask);
+diff --git a/drivers/ata/pata_pdc202xx_old.c b/drivers/ata/pata_pdc202xx_old.c
+index 6c9689b..3ed8667 100644
+--- a/drivers/ata/pata_pdc202xx_old.c
++++ b/drivers/ata/pata_pdc202xx_old.c
+@@ -168,8 +168,7 @@ static void pdc2026x_bmdma_start(struct ata_queued_cmd *qc)
+ pdc202xx_set_dmamode(ap, qc->dev);
+
+ /* Cases the state machine will not complete correctly without help */
+- if ((tf->flags & ATA_TFLAG_LBA48) || tf->protocol == ATA_PROT_ATAPI_DMA)
+- {
++ if ((tf->flags & ATA_TFLAG_LBA48) || tf->protocol == ATAPI_PROT_DMA) {
+ len = qc->nbytes / 2;
+
+ if (tf->flags & ATA_TFLAG_WRITE)
+@@ -208,7 +207,7 @@ static void pdc2026x_bmdma_stop(struct ata_queued_cmd *qc)
+ void __iomem *atapi_reg = master + 0x20 + (4 * ap->port_no);
+
+ /* Cases the state machine will not complete correctly */
+- if (tf->protocol == ATA_PROT_ATAPI_DMA || ( tf->flags & ATA_TFLAG_LBA48)) {
++ if (tf->protocol == ATAPI_PROT_DMA || (tf->flags & ATA_TFLAG_LBA48)) {
+ iowrite32(0, atapi_reg);
+ iowrite8(ioread8(clock) & ~sel66, clock);
+ }
+diff --git a/drivers/ata/pata_qdi.c b/drivers/ata/pata_qdi.c
+index a4c0e50..9f308ed 100644
+--- a/drivers/ata/pata_qdi.c
++++ b/drivers/ata/pata_qdi.c
+@@ -124,29 +124,33 @@ static unsigned int qdi_qc_issue_prot(struct ata_queued_cmd *qc)
+ return ata_qc_issue_prot(qc);
+ }
+
+-static void qdi_data_xfer(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
++static unsigned int qdi_data_xfer(struct ata_device *dev, unsigned char *buf,
++ unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
+- int slop = buflen & 3;
++ if (ata_id_has_dword_io(dev->id)) {
++ struct ata_port *ap = dev->link->ap;
++ int slop = buflen & 3;
+
+- if (ata_id_has_dword_io(adev->id)) {
+- if (write_data)
+- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+- else
++ if (rw == READ)
+ ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
++ else
++ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+
+ if (unlikely(slop)) {
+- __le32 pad = 0;
+- if (write_data) {
+- memcpy(&pad, buf + buflen - slop, slop);
+- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+- } else {
++ u32 pad;
++ if (rw == READ) {
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
+ memcpy(buf + buflen - slop, &pad, slop);
++ } else {
++ memcpy(&pad, buf + buflen - slop, slop);
++ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+ }
++ buflen += 4 - slop;
+ }
+ } else
+- ata_data_xfer(adev, buf, buflen, write_data);
++ buflen = ata_data_xfer(dev, buf, buflen, rw);
++
++ return buflen;
+ }
+
+ static struct scsi_host_template qdi_sht = {
+diff --git a/drivers/ata/pata_scc.c b/drivers/ata/pata_scc.c
+index ea2ef9f..55055b2 100644
+--- a/drivers/ata/pata_scc.c
++++ b/drivers/ata/pata_scc.c
+@@ -768,45 +768,47 @@ static u8 scc_bmdma_status (struct ata_port *ap)
+
+ /**
+ * scc_data_xfer - Transfer data by PIO
+- * @adev: device for this I/O
++ * @dev: device for this I/O
+ * @buf: data buffer
+ * @buflen: buffer length
+- * @write_data: read/write
++ * @rw: read/write
+ *
+ * Note: Original code is ata_data_xfer().
+ */
+
+-static void scc_data_xfer (struct ata_device *adev, unsigned char *buf,
+- unsigned int buflen, int write_data)
++static unsigned int scc_data_xfer (struct ata_device *dev, unsigned char *buf,
++ unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
++ struct ata_port *ap = dev->link->ap;
+ unsigned int words = buflen >> 1;
+ unsigned int i;
+ u16 *buf16 = (u16 *) buf;
+ void __iomem *mmio = ap->ioaddr.data_addr;
+
+ /* Transfer multiple of 2 bytes */
+- if (write_data) {
+- for (i = 0; i < words; i++)
+- out_be32(mmio, cpu_to_le16(buf16[i]));
+- } else {
++ if (rw == READ)
+ for (i = 0; i < words; i++)
+ buf16[i] = le16_to_cpu(in_be32(mmio));
+- }
++ else
++ for (i = 0; i < words; i++)
++ out_be32(mmio, cpu_to_le16(buf16[i]));
+
+ /* Transfer trailing 1 byte, if any. */
+ if (unlikely(buflen & 0x01)) {
+ u16 align_buf[1] = { 0 };
+ unsigned char *trailing_buf = buf + buflen - 1;
+
+- if (write_data) {
+- memcpy(align_buf, trailing_buf, 1);
+- out_be32(mmio, cpu_to_le16(align_buf[0]));
+- } else {
++ if (rw == READ) {
+ align_buf[0] = le16_to_cpu(in_be32(mmio));
+ memcpy(trailing_buf, align_buf, 1);
++ } else {
++ memcpy(align_buf, trailing_buf, 1);
++ out_be32(mmio, cpu_to_le16(align_buf[0]));
+ }
++ words++;
+ }
++
++ return words << 1;
+ }
+
+ /**
+diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c
+index 8bed888..9c523fb 100644
+--- a/drivers/ata/pata_serverworks.c
++++ b/drivers/ata/pata_serverworks.c
+@@ -41,7 +41,7 @@
+ #include <linux/libata.h>
+
+ #define DRV_NAME "pata_serverworks"
+-#define DRV_VERSION "0.4.2"
++#define DRV_VERSION "0.4.3"
+
+ #define SVWKS_CSB5_REVISION_NEW 0x92 /* min PCI_REVISION_ID for UDMA5 (A2.0) */
+ #define SVWKS_CSB6_REVISION 0xa0 /* min PCI_REVISION_ID for UDMA4 (A1.0) */
+@@ -102,7 +102,7 @@ static int osb4_cable(struct ata_port *ap) {
+ }
+
+ /**
+- * csb4_cable - CSB5/6 cable detect
++ * csb_cable - CSB5/6 cable detect
+ * @ap: ATA port to check
+ *
+ * Serverworks default arrangement is to use the drive side detection
+@@ -110,7 +110,7 @@ static int osb4_cable(struct ata_port *ap) {
+ */
+
+ static int csb_cable(struct ata_port *ap) {
+- return ATA_CBL_PATA80;
++ return ATA_CBL_PATA_UNK;
+ }
+
+ struct sv_cable_table {
+@@ -231,7 +231,6 @@ static unsigned long serverworks_csb_filter(struct ata_device *adev, unsigned lo
+ return ata_pci_default_filter(adev, mask);
+ }
+
+-
+ /**
+ * serverworks_set_piomode - set initial PIO mode data
+ * @ap: ATA interface
+@@ -243,7 +242,7 @@ static unsigned long serverworks_csb_filter(struct ata_device *adev, unsigned lo
+ static void serverworks_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ {
+ static const u8 pio_mode[] = { 0x5d, 0x47, 0x34, 0x22, 0x20 };
+- int offset = 1 + (2 * ap->port_no) - adev->devno;
++ int offset = 1 + 2 * ap->port_no - adev->devno;
+ int devbits = (2 * ap->port_no + adev->devno) * 4;
+ u16 csb5_pio;
+ struct pci_dev *pdev = to_pci_dev(ap->host->dev);
+diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
+index 453d72b..39627ab 100644
+--- a/drivers/ata/pata_via.c
++++ b/drivers/ata/pata_via.c
+@@ -185,7 +185,8 @@ static int via_cable_detect(struct ata_port *ap) {
+ if (ata66 & (0x10100000 >> (16 * ap->port_no)))
+ return ATA_CBL_PATA80;
+ /* Check with ACPI so we can spot BIOS reported SATA bridges */
+- if (ata_acpi_cbl_80wire(ap))
++ if (ata_acpi_init_gtm(ap) &&
++ ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
+ return ATA_CBL_PATA80;
+ return ATA_CBL_PATA40;
+ }
+diff --git a/drivers/ata/pata_winbond.c b/drivers/ata/pata_winbond.c
+index 7116a9e..99c92ed 100644
+--- a/drivers/ata/pata_winbond.c
++++ b/drivers/ata/pata_winbond.c
+@@ -92,29 +92,33 @@ static void winbond_set_piomode(struct ata_port *ap, struct ata_device *adev)
+ }
+
+
+-static void winbond_data_xfer(struct ata_device *adev, unsigned char *buf, unsigned int buflen, int write_data)
++static unsigned int winbond_data_xfer(struct ata_device *dev,
++ unsigned char *buf, unsigned int buflen, int rw)
+ {
+- struct ata_port *ap = adev->link->ap;
++ struct ata_port *ap = dev->link->ap;
+ int slop = buflen & 3;
+
+- if (ata_id_has_dword_io(adev->id)) {
+- if (write_data)
+- iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+- else
++ if (ata_id_has_dword_io(dev->id)) {
++ if (rw == READ)
+ ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
++ else
++ iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+
+ if (unlikely(slop)) {
+- __le32 pad = 0;
+- if (write_data) {
+- memcpy(&pad, buf + buflen - slop, slop);
+- iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+- } else {
++ u32 pad;
++ if (rw == READ) {
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
+ memcpy(buf + buflen - slop, &pad, slop);
++ } else {
++ memcpy(&pad, buf + buflen - slop, slop);
++ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+ }
++ buflen += 4 - slop;
+ }
+ } else
+- ata_data_xfer(adev, buf, buflen, write_data);
++ buflen = ata_data_xfer(dev, buf, buflen, rw);
++
++ return buflen;
+ }
+
+ static struct scsi_host_template winbond_sht = {
+@@ -191,7 +195,7 @@ static __init int winbond_init_one(unsigned long port)
+ reg = winbond_readcfg(port, 0x81);
+
+ if (!(reg & 0x03)) /* Disabled */
+- return 0;
++ return -ENODEV;
+
+ for (i = 0; i < 2 ; i ++) {
+ unsigned long cmd_port = 0x1F0 - (0x80 * i);
+diff --git a/drivers/ata/pdc_adma.c b/drivers/ata/pdc_adma.c
+index bd4c2a3..8e1b7e9 100644
+--- a/drivers/ata/pdc_adma.c
++++ b/drivers/ata/pdc_adma.c
+@@ -321,8 +321,9 @@ static int adma_fill_sg(struct ata_queued_cmd *qc)
+ u8 *buf = pp->pkt, *last_buf = NULL;
+ int i = (2 + buf[3]) * 8;
+ u8 pFLAGS = pORD | ((qc->tf.flags & ATA_TFLAG_WRITE) ? pDIRO : 0);
++ unsigned int si;
+
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u32 addr;
+ u32 len;
+
+@@ -455,7 +456,7 @@ static unsigned int adma_qc_issue(struct ata_queued_cmd *qc)
+ adma_packet_start(qc);
+ return 0;
+
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ BUG();
+ break;
+
+diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c
+index d015b4a..922d7b2 100644
+--- a/drivers/ata/sata_fsl.c
++++ b/drivers/ata/sata_fsl.c
+@@ -333,13 +333,14 @@ static unsigned int sata_fsl_fill_sg(struct ata_queued_cmd *qc, void *cmd_desc,
+ struct prde *prd_ptr_to_indirect_ext = NULL;
+ unsigned indirect_ext_segment_sz = 0;
+ dma_addr_t indirect_ext_segment_paddr;
++ unsigned int si;
+
+ VPRINTK("SATA FSL : cd = 0x%x, prd = 0x%x\n", cmd_desc, prd);
+
+ indirect_ext_segment_paddr = cmd_desc_paddr +
+ SATA_FSL_CMD_DESC_OFFSET_TO_PRDT + SATA_FSL_MAX_PRD_DIRECT * 16;
+
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ dma_addr_t sg_addr = sg_dma_address(sg);
+ u32 sg_len = sg_dma_len(sg);
+
+@@ -417,7 +418,7 @@ static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
+ }
+
+ /* setup "ACMD - atapi command" in cmd. desc. if this is ATAPI cmd */
+- if (is_atapi_taskfile(&qc->tf)) {
++ if (ata_is_atapi(qc->tf.protocol)) {
+ desc_info |= ATAPI_CMD;
+ memset((void *)&cd->acmd, 0, 32);
+ memcpy((void *)&cd->acmd, qc->cdb, qc->dev->cdb_len);
+diff --git a/drivers/ata/sata_inic162x.c b/drivers/ata/sata_inic162x.c
+index 323c087..96e614a 100644
+--- a/drivers/ata/sata_inic162x.c
++++ b/drivers/ata/sata_inic162x.c
+@@ -585,7 +585,7 @@ static struct ata_port_operations inic_port_ops = {
+ };
+
+ static struct ata_port_info inic_port_info = {
+- /* For some reason, ATA_PROT_ATAPI is broken on this
++ /* For some reason, ATAPI_PROT_PIO is broken on this
+ * controller, and no, PIO_POLLING doesn't fix it. It somehow
+ * manages to report the wrong ireason and ignoring ireason
+ * results in machine lock up. Tell libata to always prefer
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index 37b850a..7e72463 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -1136,9 +1136,10 @@ static void mv_fill_sg(struct ata_queued_cmd *qc)
+ struct mv_port_priv *pp = qc->ap->private_data;
+ struct scatterlist *sg;
+ struct mv_sg *mv_sg, *last_sg = NULL;
++ unsigned int si;
+
+ mv_sg = pp->sg_tbl;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ dma_addr_t addr = sg_dma_address(sg);
+ u32 sg_len = sg_dma_len(sg);
+
+diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c
+index ed5dc7c..a0f98fd 100644
+--- a/drivers/ata/sata_nv.c
++++ b/drivers/ata/sata_nv.c
+@@ -1336,21 +1336,18 @@ static void nv_adma_fill_aprd(struct ata_queued_cmd *qc,
+ static void nv_adma_fill_sg(struct ata_queued_cmd *qc, struct nv_adma_cpb *cpb)
+ {
+ struct nv_adma_port_priv *pp = qc->ap->private_data;
+- unsigned int idx;
+ struct nv_adma_prd *aprd;
+ struct scatterlist *sg;
++ unsigned int si;
+
+ VPRINTK("ENTER\n");
+
+- idx = 0;
+-
+- ata_for_each_sg(sg, qc) {
+- aprd = (idx < 5) ? &cpb->aprd[idx] :
+- &pp->aprd[NV_ADMA_SGTBL_LEN * qc->tag + (idx-5)];
+- nv_adma_fill_aprd(qc, sg, idx, aprd);
+- idx++;
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
++ aprd = (si < 5) ? &cpb->aprd[si] :
++ &pp->aprd[NV_ADMA_SGTBL_LEN * qc->tag + (si-5)];
++ nv_adma_fill_aprd(qc, sg, si, aprd);
+ }
+- if (idx > 5)
++ if (si > 5)
+ cpb->next_aprd = cpu_to_le64(((u64)(pp->aprd_dma + NV_ADMA_SGTBL_SZ * qc->tag)));
+ else
+ cpb->next_aprd = cpu_to_le64(0);
+@@ -1995,17 +1992,14 @@ static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct scatterlist *sg;
+- unsigned int idx;
+ struct nv_swncq_port_priv *pp = ap->private_data;
+ struct ata_prd *prd;
+-
+- WARN_ON(qc->__sg == NULL);
+- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
++ unsigned int si, idx;
+
+ prd = pp->prd + ATA_MAX_PRD * qc->tag;
+
+ idx = 0;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u32 addr, offset;
+ u32 sg_len, len;
+
+@@ -2027,8 +2021,7 @@ static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)
+ }
+ }
+
+- if (idx)
+- prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
++ prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
+ }
+
+ static unsigned int nv_swncq_issue_atacmd(struct ata_port *ap,
+diff --git a/drivers/ata/sata_promise.c b/drivers/ata/sata_promise.c
+index 7914def..a07d319 100644
+--- a/drivers/ata/sata_promise.c
++++ b/drivers/ata/sata_promise.c
+@@ -450,19 +450,19 @@ static void pdc_atapi_pkt(struct ata_queued_cmd *qc)
+ struct pdc_port_priv *pp = ap->private_data;
+ u8 *buf = pp->pkt;
+ u32 *buf32 = (u32 *) buf;
+- unsigned int dev_sel, feature, nbytes;
++ unsigned int dev_sel, feature;
+
+ /* set control bits (byte 0), zero delay seq id (byte 3),
+ * and seq id (byte 2)
+ */
+ switch (qc->tf.protocol) {
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ if (!(qc->tf.flags & ATA_TFLAG_WRITE))
+ buf32[0] = cpu_to_le32(PDC_PKT_READ);
+ else
+ buf32[0] = 0;
+ break;
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_NODATA:
+ buf32[0] = cpu_to_le32(PDC_PKT_NODATA);
+ break;
+ default:
+@@ -473,45 +473,37 @@ static void pdc_atapi_pkt(struct ata_queued_cmd *qc)
+ buf32[2] = 0; /* no next-packet */
+
+ /* select drive */
+- if (sata_scr_valid(&ap->link)) {
++ if (sata_scr_valid(&ap->link))
+ dev_sel = PDC_DEVICE_SATA;
+- } else {
+- dev_sel = ATA_DEVICE_OBS;
+- if (qc->dev->devno != 0)
+- dev_sel |= ATA_DEV1;
+- }
++ else
++ dev_sel = qc->tf.device;
++
+ buf[12] = (1 << 5) | ATA_REG_DEVICE;
+ buf[13] = dev_sel;
+ buf[14] = (1 << 5) | ATA_REG_DEVICE | PDC_PKT_CLEAR_BSY;
+ buf[15] = dev_sel; /* once more, waiting for BSY to clear */
+
+ buf[16] = (1 << 5) | ATA_REG_NSECT;
+- buf[17] = 0x00;
++ buf[17] = qc->tf.nsect;
+ buf[18] = (1 << 5) | ATA_REG_LBAL;
+- buf[19] = 0x00;
++ buf[19] = qc->tf.lbal;
+
+ /* set feature and byte counter registers */
+- if (qc->tf.protocol != ATA_PROT_ATAPI_DMA) {
++ if (qc->tf.protocol != ATAPI_PROT_DMA)
+ feature = PDC_FEATURE_ATAPI_PIO;
+- /* set byte counter register to real transfer byte count */
+- nbytes = qc->nbytes;
+- if (nbytes > 0xffff)
+- nbytes = 0xffff;
+- } else {
++ else
+ feature = PDC_FEATURE_ATAPI_DMA;
+- /* set byte counter register to 0 */
+- nbytes = 0;
+- }
++
+ buf[20] = (1 << 5) | ATA_REG_FEATURE;
+ buf[21] = feature;
+ buf[22] = (1 << 5) | ATA_REG_BYTEL;
+- buf[23] = nbytes & 0xFF;
++ buf[23] = qc->tf.lbam;
+ buf[24] = (1 << 5) | ATA_REG_BYTEH;
+- buf[25] = (nbytes >> 8) & 0xFF;
++ buf[25] = qc->tf.lbah;
+
+ /* send ATAPI packet command 0xA0 */
+ buf[26] = (1 << 5) | ATA_REG_CMD;
+- buf[27] = ATA_CMD_PACKET;
++ buf[27] = qc->tf.command;
+
+ /* select drive and check DRQ */
+ buf[28] = (1 << 5) | ATA_REG_DEVICE | PDC_PKT_WAIT_DRDY;
+@@ -541,17 +533,15 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct scatterlist *sg;
+- unsigned int idx;
+ const u32 SG_COUNT_ASIC_BUG = 41*4;
++ unsigned int si, idx;
++ u32 len;
+
+ if (!(qc->flags & ATA_QCFLAG_DMAMAP))
+ return;
+
+- WARN_ON(qc->__sg == NULL);
+- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
+-
+ idx = 0;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u32 addr, offset;
+ u32 sg_len, len;
+
+@@ -578,29 +568,27 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
+ }
+ }
+
+- if (idx) {
+- u32 len = le32_to_cpu(ap->prd[idx - 1].flags_len);
++ len = le32_to_cpu(ap->prd[idx - 1].flags_len);
+
+- if (len > SG_COUNT_ASIC_BUG) {
+- u32 addr;
++ if (len > SG_COUNT_ASIC_BUG) {
++ u32 addr;
+
+- VPRINTK("Splitting last PRD.\n");
++ VPRINTK("Splitting last PRD.\n");
+
+- addr = le32_to_cpu(ap->prd[idx - 1].addr);
+- ap->prd[idx - 1].flags_len = cpu_to_le32(len - SG_COUNT_ASIC_BUG);
+- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx - 1, addr, SG_COUNT_ASIC_BUG);
++ addr = le32_to_cpu(ap->prd[idx - 1].addr);
++ ap->prd[idx - 1].flags_len = cpu_to_le32(len - SG_COUNT_ASIC_BUG);
++ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx - 1, addr, SG_COUNT_ASIC_BUG);
+
+- addr = addr + len - SG_COUNT_ASIC_BUG;
+- len = SG_COUNT_ASIC_BUG;
+- ap->prd[idx].addr = cpu_to_le32(addr);
+- ap->prd[idx].flags_len = cpu_to_le32(len);
+- VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
++ addr = addr + len - SG_COUNT_ASIC_BUG;
++ len = SG_COUNT_ASIC_BUG;
++ ap->prd[idx].addr = cpu_to_le32(addr);
++ ap->prd[idx].flags_len = cpu_to_le32(len);
++ VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
+
+- idx++;
+- }
+-
+- ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
++ idx++;
+ }
++
++ ap->prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
+ }
+
+ static void pdc_qc_prep(struct ata_queued_cmd *qc)
+@@ -627,14 +615,14 @@ static void pdc_qc_prep(struct ata_queued_cmd *qc)
+ pdc_pkt_footer(&qc->tf, pp->pkt, i);
+ break;
+
+- case ATA_PROT_ATAPI:
++ case ATAPI_PROT_PIO:
+ pdc_fill_sg(qc);
+ break;
+
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ pdc_fill_sg(qc);
+ /*FALLTHROUGH*/
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_NODATA:
+ pdc_atapi_pkt(qc);
+ break;
+
+@@ -754,8 +742,8 @@ static inline unsigned int pdc_host_intr(struct ata_port *ap,
+ switch (qc->tf.protocol) {
+ case ATA_PROT_DMA:
+ case ATA_PROT_NODATA:
+- case ATA_PROT_ATAPI_DMA:
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_DMA:
++ case ATAPI_PROT_NODATA:
+ qc->err_mask |= ac_err_mask(ata_wait_idle(ap));
+ ata_qc_complete(qc);
+ handled = 1;
+@@ -900,7 +888,7 @@ static inline void pdc_packet_start(struct ata_queued_cmd *qc)
+ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
+ {
+ switch (qc->tf.protocol) {
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_NODATA:
+ if (qc->dev->flags & ATA_DFLAG_CDB_INTR)
+ break;
+ /*FALLTHROUGH*/
+@@ -908,7 +896,7 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
+ if (qc->tf.flags & ATA_TFLAG_POLLING)
+ break;
+ /*FALLTHROUGH*/
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ case ATA_PROT_DMA:
+ pdc_packet_start(qc);
+ return 0;
+@@ -922,16 +910,14 @@ static unsigned int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
+
+ static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
+ {
+- WARN_ON(tf->protocol == ATA_PROT_DMA ||
+- tf->protocol == ATA_PROT_ATAPI_DMA);
++ WARN_ON(tf->protocol == ATA_PROT_DMA || tf->protocol == ATAPI_PROT_DMA);
+ ata_tf_load(ap, tf);
+ }
+
+ static void pdc_exec_command_mmio(struct ata_port *ap,
+ const struct ata_taskfile *tf)
+ {
+- WARN_ON(tf->protocol == ATA_PROT_DMA ||
+- tf->protocol == ATA_PROT_ATAPI_DMA);
++ WARN_ON(tf->protocol == ATA_PROT_DMA || tf->protocol == ATAPI_PROT_DMA);
+ ata_exec_command(ap, tf);
+ }
+
+diff --git a/drivers/ata/sata_promise.h b/drivers/ata/sata_promise.h
+index 6ee5e19..00d6000 100644
+--- a/drivers/ata/sata_promise.h
++++ b/drivers/ata/sata_promise.h
+@@ -46,7 +46,7 @@ static inline unsigned int pdc_pkt_header(struct ata_taskfile *tf,
+ unsigned int devno, u8 *buf)
+ {
+ u8 dev_reg;
+- u32 *buf32 = (u32 *) buf;
++ __le32 *buf32 = (__le32 *) buf;
+
+ /* set control bits (byte 0), zero delay seq id (byte 3),
+ * and seq id (byte 2)
+diff --git a/drivers/ata/sata_qstor.c b/drivers/ata/sata_qstor.c
+index c68b241..91cc12c 100644
+--- a/drivers/ata/sata_qstor.c
++++ b/drivers/ata/sata_qstor.c
+@@ -287,14 +287,10 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
+ struct scatterlist *sg;
+ struct ata_port *ap = qc->ap;
+ struct qs_port_priv *pp = ap->private_data;
+- unsigned int nelem;
+ u8 *prd = pp->pkt + QS_CPB_BYTES;
++ unsigned int si;
+
+- WARN_ON(qc->__sg == NULL);
+- WARN_ON(qc->n_elem == 0 && qc->pad_len == 0);
+-
+- nelem = 0;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ u64 addr;
+ u32 len;
+
+@@ -306,12 +302,11 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
+ *(__le32 *)prd = cpu_to_le32(len);
+ prd += sizeof(u64);
+
+- VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", nelem,
++ VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", si,
+ (unsigned long long)addr, len);
+- nelem++;
+ }
+
+- return nelem;
++ return si;
+ }
+
+ static void qs_qc_prep(struct ata_queued_cmd *qc)
+@@ -376,7 +371,7 @@ static unsigned int qs_qc_issue(struct ata_queued_cmd *qc)
+ qs_packet_start(qc);
+ return 0;
+
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ BUG();
+ break;
+
+diff --git a/drivers/ata/sata_sil.c b/drivers/ata/sata_sil.c
+index f5119bf..0b8191b 100644
+--- a/drivers/ata/sata_sil.c
++++ b/drivers/ata/sata_sil.c
+@@ -416,15 +416,14 @@ static void sil_host_intr(struct ata_port *ap, u32 bmdma2)
+ */
+
+ /* Check the ATA_DFLAG_CDB_INTR flag is enough here.
+- * The flag was turned on only for atapi devices.
+- * No need to check is_atapi_taskfile(&qc->tf) again.
++ * The flag was turned on only for atapi devices. No
++ * need to check ata_is_atapi(qc->tf.protocol) again.
+ */
+ if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR))
+ goto err_hsm;
+ break;
+ case HSM_ST_LAST:
+- if (qc->tf.protocol == ATA_PROT_DMA ||
+- qc->tf.protocol == ATA_PROT_ATAPI_DMA) {
++ if (ata_is_dma(qc->tf.protocol)) {
+ /* clear DMA-Start bit */
+ ap->ops->bmdma_stop(qc);
+
+@@ -451,8 +450,7 @@ static void sil_host_intr(struct ata_port *ap, u32 bmdma2)
+ /* kick HSM in the ass */
+ ata_hsm_move(ap, qc, status, 0);
+
+- if (unlikely(qc->err_mask) && (qc->tf.protocol == ATA_PROT_DMA ||
+- qc->tf.protocol == ATA_PROT_ATAPI_DMA))
++ if (unlikely(qc->err_mask) && ata_is_dma(qc->tf.protocol))
+ ata_ehi_push_desc(ehi, "BMDMA2 stat 0x%x", bmdma2);
+
+ return;
+diff --git a/drivers/ata/sata_sil24.c b/drivers/ata/sata_sil24.c
+index 864c1c1..b4b1f91 100644
+--- a/drivers/ata/sata_sil24.c
++++ b/drivers/ata/sata_sil24.c
+@@ -813,8 +813,9 @@ static inline void sil24_fill_sg(struct ata_queued_cmd *qc,
+ {
+ struct scatterlist *sg;
+ struct sil24_sge *last_sge = NULL;
++ unsigned int si;
+
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ sge->addr = cpu_to_le64(sg_dma_address(sg));
+ sge->cnt = cpu_to_le32(sg_dma_len(sg));
+ sge->flags = 0;
+@@ -823,8 +824,7 @@ static inline void sil24_fill_sg(struct ata_queued_cmd *qc,
+ sge++;
+ }
+
+- if (likely(last_sge))
+- last_sge->flags = cpu_to_le32(SGE_TRM);
++ last_sge->flags = cpu_to_le32(SGE_TRM);
+ }
+
+ static int sil24_qc_defer(struct ata_queued_cmd *qc)
+@@ -852,9 +852,7 @@ static int sil24_qc_defer(struct ata_queued_cmd *qc)
+ * set.
+ *
+ */
+- int is_excl = (prot == ATA_PROT_ATAPI ||
+- prot == ATA_PROT_ATAPI_NODATA ||
+- prot == ATA_PROT_ATAPI_DMA ||
++ int is_excl = (ata_is_atapi(prot) ||
+ (qc->flags & ATA_QCFLAG_RESULT_TF));
+
+ if (unlikely(ap->excl_link)) {
+@@ -885,35 +883,21 @@ static void sil24_qc_prep(struct ata_queued_cmd *qc)
+
+ cb = &pp->cmd_block[sil24_tag(qc->tag)];
+
+- switch (qc->tf.protocol) {
+- case ATA_PROT_PIO:
+- case ATA_PROT_DMA:
+- case ATA_PROT_NCQ:
+- case ATA_PROT_NODATA:
++ if (!ata_is_atapi(qc->tf.protocol)) {
+ prb = &cb->ata.prb;
+ sge = cb->ata.sge;
+- break;
+-
+- case ATA_PROT_ATAPI:
+- case ATA_PROT_ATAPI_DMA:
+- case ATA_PROT_ATAPI_NODATA:
++ } else {
+ prb = &cb->atapi.prb;
+ sge = cb->atapi.sge;
+ memset(cb->atapi.cdb, 0, 32);
+ memcpy(cb->atapi.cdb, qc->cdb, qc->dev->cdb_len);
+
+- if (qc->tf.protocol != ATA_PROT_ATAPI_NODATA) {
++ if (ata_is_data(qc->tf.protocol)) {
+ if (qc->tf.flags & ATA_TFLAG_WRITE)
+ ctrl = PRB_CTRL_PACKET_WRITE;
+ else
+ ctrl = PRB_CTRL_PACKET_READ;
+ }
+- break;
+-
+- default:
+- prb = NULL; /* shut up, gcc */
+- sge = NULL;
+- BUG();
+ }
+
+ prb->ctrl = cpu_to_le16(ctrl);
+diff --git a/drivers/ata/sata_sx4.c b/drivers/ata/sata_sx4.c
+index 4d85718..e3d56bc 100644
+--- a/drivers/ata/sata_sx4.c
++++ b/drivers/ata/sata_sx4.c
+@@ -334,7 +334,7 @@ static inline void pdc20621_ata_sg(struct ata_taskfile *tf, u8 *buf,
+ {
+ u32 addr;
+ unsigned int dw = PDC_DIMM_APKT_PRD >> 2;
+- u32 *buf32 = (u32 *) buf;
++ __le32 *buf32 = (__le32 *) buf;
+
+ /* output ATA packet S/G table */
+ addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
+@@ -356,7 +356,7 @@ static inline void pdc20621_host_sg(struct ata_taskfile *tf, u8 *buf,
+ {
+ u32 addr;
+ unsigned int dw = PDC_DIMM_HPKT_PRD >> 2;
+- u32 *buf32 = (u32 *) buf;
++ __le32 *buf32 = (__le32 *) buf;
+
+ /* output Host DMA packet S/G table */
+ addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
+@@ -377,7 +377,7 @@ static inline unsigned int pdc20621_ata_pkt(struct ata_taskfile *tf,
+ unsigned int portno)
+ {
+ unsigned int i, dw;
+- u32 *buf32 = (u32 *) buf;
++ __le32 *buf32 = (__le32 *) buf;
+ u8 dev_reg;
+
+ unsigned int dimm_sg = PDC_20621_DIMM_BASE +
+@@ -429,7 +429,8 @@ static inline void pdc20621_host_pkt(struct ata_taskfile *tf, u8 *buf,
+ unsigned int portno)
+ {
+ unsigned int dw;
+- u32 tmp, *buf32 = (u32 *) buf;
++ u32 tmp;
++ __le32 *buf32 = (__le32 *) buf;
+
+ unsigned int host_sg = PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+@@ -473,7 +474,7 @@ static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
+ void __iomem *mmio = ap->host->iomap[PDC_MMIO_BAR];
+ void __iomem *dimm_mmio = ap->host->iomap[PDC_DIMM_BAR];
+ unsigned int portno = ap->port_no;
+- unsigned int i, idx, total_len = 0, sgt_len;
++ unsigned int i, si, idx, total_len = 0, sgt_len;
+ u32 *buf = (u32 *) &pp->dimm_buf[PDC_DIMM_HEADER_SZ];
+
+ WARN_ON(!(qc->flags & ATA_QCFLAG_DMAMAP));
+@@ -487,7 +488,7 @@ static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
+ * Build S/G table
+ */
+ idx = 0;
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ buf[idx++] = cpu_to_le32(sg_dma_address(sg));
+ buf[idx++] = cpu_to_le32(sg_dma_len(sg));
+ total_len += sg_dma_len(sg);
+@@ -700,7 +701,7 @@ static unsigned int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc)
+ pdc20621_packet_start(qc);
+ return 0;
+
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ BUG();
+ break;
+
+diff --git a/drivers/atm/ambassador.c b/drivers/atm/ambassador.c
+index b34b382..7b44a59 100644
+--- a/drivers/atm/ambassador.c
++++ b/drivers/atm/ambassador.c
+@@ -2163,7 +2163,6 @@ static int __devinit amb_init (amb_dev * dev)
+ static void setup_dev(amb_dev *dev, struct pci_dev *pci_dev)
+ {
+ unsigned char pool;
+- memset (dev, 0, sizeof(amb_dev));
+
+ // set up known dev items straight away
+ dev->pci_dev = pci_dev;
+@@ -2253,7 +2252,7 @@ static int __devinit amb_probe(struct pci_dev *pci_dev, const struct pci_device_
+ goto out_disable;
+ }
+
+- dev = kmalloc (sizeof(amb_dev), GFP_KERNEL);
++ dev = kzalloc(sizeof(amb_dev), GFP_KERNEL);
+ if (!dev) {
+ PRINTK (KERN_ERR, "out of memory!");
+ err = -ENOMEM;
+diff --git a/drivers/atm/he.c b/drivers/atm/he.c
+index 3b64a99..2e3395b 100644
+--- a/drivers/atm/he.c
++++ b/drivers/atm/he.c
+@@ -1,5 +1,3 @@
+-/* $Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $ */
+-
+ /*
+
+ he.c
+@@ -99,10 +97,6 @@
+ #define HPRINTK(fmt,args...) do { } while (0)
+ #endif /* HE_DEBUG */
+
+-/* version definition */
+-
+-static char *version = "$Id: he.c,v 1.18 2003/05/06 22:57:15 chas Exp $";
+-
+ /* declarations */
+
+ static int he_open(struct atm_vcc *vcc);
+@@ -366,7 +360,7 @@ he_init_one(struct pci_dev *pci_dev, const struct pci_device_id *pci_ent)
+ struct he_dev *he_dev = NULL;
+ int err = 0;
+
+- printk(KERN_INFO "he: %s\n", version);
++ printk(KERN_INFO "ATM he driver\n");
+
+ if (pci_enable_device(pci_dev))
+ return -EIO;
+@@ -1643,6 +1637,8 @@ he_stop(struct he_dev *he_dev)
+
+ if (he_dev->rbpl_base) {
+ #ifdef USE_RBPL_POOL
++ int i;
++
+ for (i = 0; i < CONFIG_RBPL_SIZE; ++i) {
+ void *cpuaddr = he_dev->rbpl_virt[i].virt;
+ dma_addr_t dma_handle = he_dev->rbpl_base[i].phys;
+@@ -1665,6 +1661,8 @@ he_stop(struct he_dev *he_dev)
+ #ifdef USE_RBPS
+ if (he_dev->rbps_base) {
+ #ifdef USE_RBPS_POOL
++ int i;
++
+ for (i = 0; i < CONFIG_RBPS_SIZE; ++i) {
+ void *cpuaddr = he_dev->rbps_virt[i].virt;
+ dma_addr_t dma_handle = he_dev->rbps_base[i].phys;
+@@ -2933,7 +2931,7 @@ he_proc_read(struct atm_dev *dev, loff_t *pos, char *page)
+
+ left = *pos;
+ if (!left--)
+- return sprintf(page, "%s\n", version);
++ return sprintf(page, "ATM he driver\n");
+
+ if (!left--)
+ return sprintf(page, "%s%s\n\n",
+diff --git a/drivers/base/Makefile b/drivers/base/Makefile
+index b39ea3f..63e09c0 100644
+--- a/drivers/base/Makefile
++++ b/drivers/base/Makefile
+@@ -11,6 +11,9 @@ obj-$(CONFIG_FW_LOADER) += firmware_class.o
+ obj-$(CONFIG_NUMA) += node.o
+ obj-$(CONFIG_MEMORY_HOTPLUG_SPARSE) += memory.o
+ obj-$(CONFIG_SMP) += topology.o
++ifeq ($(CONFIG_SYSFS),y)
++obj-$(CONFIG_MODULES) += module.o
++endif
+ obj-$(CONFIG_SYS_HYPERVISOR) += hypervisor.o
+
+ ifeq ($(CONFIG_DEBUG_DRIVER),y)
+diff --git a/drivers/base/attribute_container.c b/drivers/base/attribute_container.c
+index 7370d7c..3b43e8a 100644
+--- a/drivers/base/attribute_container.c
++++ b/drivers/base/attribute_container.c
+@@ -61,7 +61,7 @@ attribute_container_classdev_to_container(struct class_device *classdev)
+ }
+ EXPORT_SYMBOL_GPL(attribute_container_classdev_to_container);
+
+-static struct list_head attribute_container_list;
++static LIST_HEAD(attribute_container_list);
+
+ static DEFINE_MUTEX(attribute_container_mutex);
+
+@@ -320,9 +320,14 @@ attribute_container_add_attrs(struct class_device *classdev)
+ struct class_device_attribute **attrs = cont->attrs;
+ int i, error;
+
+- if (!attrs)
++ BUG_ON(attrs && cont->grp);
++
++ if (!attrs && !cont->grp)
+ return 0;
+
++ if (cont->grp)
++ return sysfs_create_group(&classdev->kobj, cont->grp);
++
+ for (i = 0; attrs[i]; i++) {
+ error = class_device_create_file(classdev, attrs[i]);
+ if (error)
+@@ -378,9 +383,14 @@ attribute_container_remove_attrs(struct class_device *classdev)
+ struct class_device_attribute **attrs = cont->attrs;
+ int i;
+
+- if (!attrs)
++ if (!attrs && !cont->grp)
+ return;
+
++ if (cont->grp) {
++ sysfs_remove_group(&classdev->kobj, cont->grp);
++ return ;
++ }
++
+ for (i = 0; attrs[i]; i++)
+ class_device_remove_file(classdev, attrs[i]);
+ }
+@@ -429,10 +439,3 @@ attribute_container_find_class_device(struct attribute_container *cont,
+ return cdev;
+ }
+ EXPORT_SYMBOL_GPL(attribute_container_find_class_device);
+-
+-int __init
+-attribute_container_init(void)
+-{
+- INIT_LIST_HEAD(&attribute_container_list);
+- return 0;
+-}
+diff --git a/drivers/base/base.h b/drivers/base/base.h
+index 10b2fb6..c044414 100644
+--- a/drivers/base/base.h
++++ b/drivers/base/base.h
+@@ -1,6 +1,42 @@
+
+-/* initialisation functions */
++/**
++ * struct bus_type_private - structure to hold the private to the driver core portions of the bus_type structure.
++ *
++ * @subsys - the struct kset that defines this bus. This is the main kobject
++ * @drivers_kset - the list of drivers associated with this bus
++ * @devices_kset - the list of devices associated with this bus
++ * @klist_devices - the klist to iterate over the @devices_kset
++ * @klist_drivers - the klist to iterate over the @drivers_kset
++ * @bus_notifier - the bus notifier list for anything that cares about things
++ * on this bus.
++ * @bus - pointer back to the struct bus_type that this structure is associated
++ * with.
++ *
++ * This structure is the one that is the actual kobject allowing struct
++ * bus_type to be statically allocated safely. Nothing outside of the driver
++ * core should ever touch these fields.
++ */
++struct bus_type_private {
++ struct kset subsys;
++ struct kset *drivers_kset;
++ struct kset *devices_kset;
++ struct klist klist_devices;
++ struct klist klist_drivers;
++ struct blocking_notifier_head bus_notifier;
++ unsigned int drivers_autoprobe:1;
++ struct bus_type *bus;
++};
++
++struct driver_private {
++ struct kobject kobj;
++ struct klist klist_devices;
++ struct klist_node knode_bus;
++ struct module_kobject *mkobj;
++ struct device_driver *driver;
++};
++#define to_driver(obj) container_of(obj, struct driver_private, kobj)
+
++/* initialisation functions */
+ extern int devices_init(void);
+ extern int buses_init(void);
+ extern int classes_init(void);
+@@ -13,17 +49,16 @@ static inline int hypervisor_init(void) { return 0; }
+ extern int platform_bus_init(void);
+ extern int system_bus_init(void);
+ extern int cpu_dev_init(void);
+-extern int attribute_container_init(void);
+
+-extern int bus_add_device(struct device * dev);
+-extern void bus_attach_device(struct device * dev);
+-extern void bus_remove_device(struct device * dev);
++extern int bus_add_device(struct device *dev);
++extern void bus_attach_device(struct device *dev);
++extern void bus_remove_device(struct device *dev);
+
+-extern int bus_add_driver(struct device_driver *);
+-extern void bus_remove_driver(struct device_driver *);
++extern int bus_add_driver(struct device_driver *drv);
++extern void bus_remove_driver(struct device_driver *drv);
+
+-extern void driver_detach(struct device_driver * drv);
+-extern int driver_probe_device(struct device_driver *, struct device *);
++extern void driver_detach(struct device_driver *drv);
++extern int driver_probe_device(struct device_driver *drv, struct device *dev);
+
+ extern void sysdev_shutdown(void);
+ extern int sysdev_suspend(pm_message_t state);
+@@ -44,4 +79,13 @@ extern char *make_class_name(const char *name, struct kobject *kobj);
+
+ extern int devres_release_all(struct device *dev);
+
+-extern struct kset devices_subsys;
++extern struct kset *devices_kset;
++
++#if defined(CONFIG_MODULES) && defined(CONFIG_SYSFS)
++extern void module_add_driver(struct module *mod, struct device_driver *drv);
++extern void module_remove_driver(struct device_driver *drv);
++#else
++static inline void module_add_driver(struct module *mod,
++ struct device_driver *drv) { }
++static inline void module_remove_driver(struct device_driver *drv) { }
++#endif
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 9a19b07..f484495 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -3,6 +3,8 @@
+ *
+ * Copyright (c) 2002-3 Patrick Mochel
+ * Copyright (c) 2002-3 Open Source Development Labs
++ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
++ * Copyright (c) 2007 Novell Inc.
+ *
+ * This file is released under the GPLv2
+ *
+@@ -17,14 +19,13 @@
+ #include "power/power.h"
+
+ #define to_bus_attr(_attr) container_of(_attr, struct bus_attribute, attr)
+-#define to_bus(obj) container_of(obj, struct bus_type, subsys.kobj)
++#define to_bus(obj) container_of(obj, struct bus_type_private, subsys.kobj)
+
+ /*
+ * sysfs bindings for drivers
+ */
+
+ #define to_drv_attr(_attr) container_of(_attr, struct driver_attribute, attr)
+-#define to_driver(obj) container_of(obj, struct device_driver, kobj)
+
+
+ static int __must_check bus_rescan_devices_helper(struct device *dev,
+@@ -32,37 +33,40 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
+
+ static struct bus_type *bus_get(struct bus_type *bus)
+ {
+- return bus ? container_of(kset_get(&bus->subsys),
+- struct bus_type, subsys) : NULL;
++ if (bus) {
++ kset_get(&bus->p->subsys);
++ return bus;
++ }
++ return NULL;
+ }
+
+ static void bus_put(struct bus_type *bus)
+ {
+- kset_put(&bus->subsys);
++ if (bus)
++ kset_put(&bus->p->subsys);
+ }
+
+-static ssize_t
+-drv_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
++static ssize_t drv_attr_show(struct kobject *kobj, struct attribute *attr,
++ char *buf)
+ {
+- struct driver_attribute * drv_attr = to_drv_attr(attr);
+- struct device_driver * drv = to_driver(kobj);
++ struct driver_attribute *drv_attr = to_drv_attr(attr);
++ struct driver_private *drv_priv = to_driver(kobj);
+ ssize_t ret = -EIO;
+
+ if (drv_attr->show)
+- ret = drv_attr->show(drv, buf);
++ ret = drv_attr->show(drv_priv->driver, buf);
+ return ret;
+ }
+
+-static ssize_t
+-drv_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * buf, size_t count)
++static ssize_t drv_attr_store(struct kobject *kobj, struct attribute *attr,
++ const char *buf, size_t count)
+ {
+- struct driver_attribute * drv_attr = to_drv_attr(attr);
+- struct device_driver * drv = to_driver(kobj);
++ struct driver_attribute *drv_attr = to_drv_attr(attr);
++ struct driver_private *drv_priv = to_driver(kobj);
+ ssize_t ret = -EIO;
+
+ if (drv_attr->store)
+- ret = drv_attr->store(drv, buf, count);
++ ret = drv_attr->store(drv_priv->driver, buf, count);
+ return ret;
+ }
+
+@@ -71,22 +75,12 @@ static struct sysfs_ops driver_sysfs_ops = {
+ .store = drv_attr_store,
+ };
+
+-
+-static void driver_release(struct kobject * kobj)
++static void driver_release(struct kobject *kobj)
+ {
+- /*
+- * Yes this is an empty release function, it is this way because struct
+- * device is always a static object, not a dynamic one. Yes, this is
+- * not nice and bad, but remember, drivers are code, reference counted
+- * by the module count, not a device, which is really data. And yes,
+- * in the future I do want to have all drivers be created dynamically,
+- * and am working toward that goal, but it will take a bit longer...
+- *
+- * But do not let this example give _anyone_ the idea that they can
+- * create a release function without any code in it at all, to do that
+- * is almost always wrong. If you have any questions about this,
+- * please send an email to <greg at kroah.com>
+- */
++ struct driver_private *drv_priv = to_driver(kobj);
++
++ pr_debug("driver: '%s': %s\n", kobject_name(kobj), __FUNCTION__);
++ kfree(drv_priv);
+ }
+
+ static struct kobj_type driver_ktype = {
+@@ -94,34 +88,30 @@ static struct kobj_type driver_ktype = {
+ .release = driver_release,
+ };
+
+-
+ /*
+ * sysfs bindings for buses
+ */
+-
+-
+-static ssize_t
+-bus_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
++static ssize_t bus_attr_show(struct kobject *kobj, struct attribute *attr,
++ char *buf)
+ {
+- struct bus_attribute * bus_attr = to_bus_attr(attr);
+- struct bus_type * bus = to_bus(kobj);
++ struct bus_attribute *bus_attr = to_bus_attr(attr);
++ struct bus_type_private *bus_priv = to_bus(kobj);
+ ssize_t ret = 0;
+
+ if (bus_attr->show)
+- ret = bus_attr->show(bus, buf);
++ ret = bus_attr->show(bus_priv->bus, buf);
+ return ret;
+ }
+
+-static ssize_t
+-bus_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * buf, size_t count)
++static ssize_t bus_attr_store(struct kobject *kobj, struct attribute *attr,
++ const char *buf, size_t count)
+ {
+- struct bus_attribute * bus_attr = to_bus_attr(attr);
+- struct bus_type * bus = to_bus(kobj);
++ struct bus_attribute *bus_attr = to_bus_attr(attr);
++ struct bus_type_private *bus_priv = to_bus(kobj);
+ ssize_t ret = 0;
+
+ if (bus_attr->store)
+- ret = bus_attr->store(bus, buf, count);
++ ret = bus_attr->store(bus_priv->bus, buf, count);
+ return ret;
+ }
+
+@@ -130,24 +120,26 @@ static struct sysfs_ops bus_sysfs_ops = {
+ .store = bus_attr_store,
+ };
+
+-int bus_create_file(struct bus_type * bus, struct bus_attribute * attr)
++int bus_create_file(struct bus_type *bus, struct bus_attribute *attr)
+ {
+ int error;
+ if (bus_get(bus)) {
+- error = sysfs_create_file(&bus->subsys.kobj, &attr->attr);
++ error = sysfs_create_file(&bus->p->subsys.kobj, &attr->attr);
+ bus_put(bus);
+ } else
+ error = -EINVAL;
+ return error;
+ }
++EXPORT_SYMBOL_GPL(bus_create_file);
+
+-void bus_remove_file(struct bus_type * bus, struct bus_attribute * attr)
++void bus_remove_file(struct bus_type *bus, struct bus_attribute *attr)
+ {
+ if (bus_get(bus)) {
+- sysfs_remove_file(&bus->subsys.kobj, &attr->attr);
++ sysfs_remove_file(&bus->p->subsys.kobj, &attr->attr);
+ bus_put(bus);
+ }
+ }
++EXPORT_SYMBOL_GPL(bus_remove_file);
+
+ static struct kobj_type bus_ktype = {
+ .sysfs_ops = &bus_sysfs_ops,
+@@ -166,7 +158,7 @@ static struct kset_uevent_ops bus_uevent_ops = {
+ .filter = bus_uevent_filter,
+ };
+
+-static decl_subsys(bus, &bus_ktype, &bus_uevent_ops);
++static struct kset *bus_kset;
+
+
+ #ifdef CONFIG_HOTPLUG
+@@ -224,10 +216,13 @@ static ssize_t driver_bind(struct device_driver *drv,
+ if (dev->parent)
+ up(&dev->parent->sem);
+
+- if (err > 0) /* success */
++ if (err > 0) {
++ /* success */
+ err = count;
+- else if (err == 0) /* driver didn't accept device */
++ } else if (err == 0) {
++ /* driver didn't accept device */
+ err = -ENODEV;
++ }
+ }
+ put_device(dev);
+ bus_put(bus);
+@@ -237,16 +232,16 @@ static DRIVER_ATTR(bind, S_IWUSR, NULL, driver_bind);
+
+ static ssize_t show_drivers_autoprobe(struct bus_type *bus, char *buf)
+ {
+- return sprintf(buf, "%d\n", bus->drivers_autoprobe);
++ return sprintf(buf, "%d\n", bus->p->drivers_autoprobe);
+ }
+
+ static ssize_t store_drivers_autoprobe(struct bus_type *bus,
+ const char *buf, size_t count)
+ {
+ if (buf[0] == '0')
+- bus->drivers_autoprobe = 0;
++ bus->p->drivers_autoprobe = 0;
+ else
+- bus->drivers_autoprobe = 1;
++ bus->p->drivers_autoprobe = 1;
+ return count;
+ }
+
+@@ -264,49 +259,49 @@ static ssize_t store_drivers_probe(struct bus_type *bus,
+ }
+ #endif
+
+-static struct device * next_device(struct klist_iter * i)
++static struct device *next_device(struct klist_iter *i)
+ {
+- struct klist_node * n = klist_next(i);
++ struct klist_node *n = klist_next(i);
+ return n ? container_of(n, struct device, knode_bus) : NULL;
+ }
+
+ /**
+- * bus_for_each_dev - device iterator.
+- * @bus: bus type.
+- * @start: device to start iterating from.
+- * @data: data for the callback.
+- * @fn: function to be called for each device.
++ * bus_for_each_dev - device iterator.
++ * @bus: bus type.
++ * @start: device to start iterating from.
++ * @data: data for the callback.
++ * @fn: function to be called for each device.
+ *
+- * Iterate over @bus's list of devices, and call @fn for each,
+- * passing it @data. If @start is not NULL, we use that device to
+- * begin iterating from.
++ * Iterate over @bus's list of devices, and call @fn for each,
++ * passing it @data. If @start is not NULL, we use that device to
++ * begin iterating from.
+ *
+- * We check the return of @fn each time. If it returns anything
+- * other than 0, we break out and return that value.
++ * We check the return of @fn each time. If it returns anything
++ * other than 0, we break out and return that value.
+ *
+- * NOTE: The device that returns a non-zero value is not retained
+- * in any way, nor is its refcount incremented. If the caller needs
+- * to retain this data, it should do, and increment the reference
+- * count in the supplied callback.
++ * NOTE: The device that returns a non-zero value is not retained
++ * in any way, nor is its refcount incremented. If the caller needs
++ * to retain this data, it should do, and increment the reference
++ * count in the supplied callback.
+ */
+-
+-int bus_for_each_dev(struct bus_type * bus, struct device * start,
+- void * data, int (*fn)(struct device *, void *))
++int bus_for_each_dev(struct bus_type *bus, struct device *start,
++ void *data, int (*fn)(struct device *, void *))
+ {
+ struct klist_iter i;
+- struct device * dev;
++ struct device *dev;
+ int error = 0;
+
+ if (!bus)
+ return -EINVAL;
+
+- klist_iter_init_node(&bus->klist_devices, &i,
++ klist_iter_init_node(&bus->p->klist_devices, &i,
+ (start ? &start->knode_bus : NULL));
+ while ((dev = next_device(&i)) && !error)
+ error = fn(dev, data);
+ klist_iter_exit(&i);
+ return error;
+ }
++EXPORT_SYMBOL_GPL(bus_for_each_dev);
+
+ /**
+ * bus_find_device - device iterator for locating a particular device.
+@@ -323,9 +318,9 @@ int bus_for_each_dev(struct bus_type * bus, struct device * start,
+ * if it does. If the callback returns non-zero, this function will
+ * return to the caller and not iterate over any more devices.
+ */
+-struct device * bus_find_device(struct bus_type *bus,
+- struct device *start, void *data,
+- int (*match)(struct device *, void *))
++struct device *bus_find_device(struct bus_type *bus,
++ struct device *start, void *data,
++ int (*match)(struct device *dev, void *data))
+ {
+ struct klist_iter i;
+ struct device *dev;
+@@ -333,7 +328,7 @@ struct device * bus_find_device(struct bus_type *bus,
+ if (!bus)
+ return NULL;
+
+- klist_iter_init_node(&bus->klist_devices, &i,
++ klist_iter_init_node(&bus->p->klist_devices, &i,
+ (start ? &start->knode_bus : NULL));
+ while ((dev = next_device(&i)))
+ if (match(dev, data) && get_device(dev))
+@@ -341,51 +336,57 @@ struct device * bus_find_device(struct bus_type *bus,
+ klist_iter_exit(&i);
+ return dev;
+ }
++EXPORT_SYMBOL_GPL(bus_find_device);
+
+-
+-static struct device_driver * next_driver(struct klist_iter * i)
++static struct device_driver *next_driver(struct klist_iter *i)
+ {
+- struct klist_node * n = klist_next(i);
+- return n ? container_of(n, struct device_driver, knode_bus) : NULL;
++ struct klist_node *n = klist_next(i);
++ struct driver_private *drv_priv;
++
++ if (n) {
++ drv_priv = container_of(n, struct driver_private, knode_bus);
++ return drv_priv->driver;
++ }
++ return NULL;
+ }
+
+ /**
+- * bus_for_each_drv - driver iterator
+- * @bus: bus we're dealing with.
+- * @start: driver to start iterating on.
+- * @data: data to pass to the callback.
+- * @fn: function to call for each driver.
++ * bus_for_each_drv - driver iterator
++ * @bus: bus we're dealing with.
++ * @start: driver to start iterating on.
++ * @data: data to pass to the callback.
++ * @fn: function to call for each driver.
+ *
+- * This is nearly identical to the device iterator above.
+- * We iterate over each driver that belongs to @bus, and call
+- * @fn for each. If @fn returns anything but 0, we break out
+- * and return it. If @start is not NULL, we use it as the head
+- * of the list.
++ * This is nearly identical to the device iterator above.
++ * We iterate over each driver that belongs to @bus, and call
++ * @fn for each. If @fn returns anything but 0, we break out
++ * and return it. If @start is not NULL, we use it as the head
++ * of the list.
+ *
+- * NOTE: we don't return the driver that returns a non-zero
+- * value, nor do we leave the reference count incremented for that
+- * driver. If the caller needs to know that info, it must set it
+- * in the callback. It must also be sure to increment the refcount
+- * so it doesn't disappear before returning to the caller.
++ * NOTE: we don't return the driver that returns a non-zero
++ * value, nor do we leave the reference count incremented for that
++ * driver. If the caller needs to know that info, it must set it
++ * in the callback. It must also be sure to increment the refcount
++ * so it doesn't disappear before returning to the caller.
+ */
+-
+-int bus_for_each_drv(struct bus_type * bus, struct device_driver * start,
+- void * data, int (*fn)(struct device_driver *, void *))
++int bus_for_each_drv(struct bus_type *bus, struct device_driver *start,
++ void *data, int (*fn)(struct device_driver *, void *))
+ {
+ struct klist_iter i;
+- struct device_driver * drv;
++ struct device_driver *drv;
+ int error = 0;
+
+ if (!bus)
+ return -EINVAL;
+
+- klist_iter_init_node(&bus->klist_drivers, &i,
+- start ? &start->knode_bus : NULL);
++ klist_iter_init_node(&bus->p->klist_drivers, &i,
++ start ? &start->p->knode_bus : NULL);
+ while ((drv = next_driver(&i)) && !error)
+ error = fn(drv, data);
+ klist_iter_exit(&i);
+ return error;
+ }
++EXPORT_SYMBOL_GPL(bus_for_each_drv);
+
+ static int device_add_attrs(struct bus_type *bus, struct device *dev)
+ {
+@@ -396,7 +397,7 @@ static int device_add_attrs(struct bus_type *bus, struct device *dev)
+ return 0;
+
+ for (i = 0; attr_name(bus->dev_attrs[i]); i++) {
+- error = device_create_file(dev,&bus->dev_attrs[i]);
++ error = device_create_file(dev, &bus->dev_attrs[i]);
+ if (error) {
+ while (--i >= 0)
+ device_remove_file(dev, &bus->dev_attrs[i]);
+@@ -406,13 +407,13 @@ static int device_add_attrs(struct bus_type *bus, struct device *dev)
+ return error;
+ }
+
+-static void device_remove_attrs(struct bus_type * bus, struct device * dev)
++static void device_remove_attrs(struct bus_type *bus, struct device *dev)
+ {
+ int i;
+
+ if (bus->dev_attrs) {
+ for (i = 0; attr_name(bus->dev_attrs[i]); i++)
+- device_remove_file(dev,&bus->dev_attrs[i]);
++ device_remove_file(dev, &bus->dev_attrs[i]);
+ }
+ }
+
+@@ -420,7 +421,7 @@ static void device_remove_attrs(struct bus_type * bus, struct device * dev)
+ static int make_deprecated_bus_links(struct device *dev)
+ {
+ return sysfs_create_link(&dev->kobj,
+- &dev->bus->subsys.kobj, "bus");
++ &dev->bus->p->subsys.kobj, "bus");
+ }
+
+ static void remove_deprecated_bus_links(struct device *dev)
+@@ -433,28 +434,28 @@ static inline void remove_deprecated_bus_links(struct device *dev) { }
+ #endif
+
+ /**
+- * bus_add_device - add device to bus
+- * @dev: device being added
++ * bus_add_device - add device to bus
++ * @dev: device being added
+ *
+- * - Add the device to its bus's list of devices.
+- * - Create link to device's bus.
++ * - Add the device to its bus's list of devices.
++ * - Create link to device's bus.
+ */
+-int bus_add_device(struct device * dev)
++int bus_add_device(struct device *dev)
+ {
+- struct bus_type * bus = bus_get(dev->bus);
++ struct bus_type *bus = bus_get(dev->bus);
+ int error = 0;
+
+ if (bus) {
+- pr_debug("bus %s: add device %s\n", bus->name, dev->bus_id);
++ pr_debug("bus: '%s': add device %s\n", bus->name, dev->bus_id);
+ error = device_add_attrs(bus, dev);
+ if (error)
+ goto out_put;
+- error = sysfs_create_link(&bus->devices.kobj,
++ error = sysfs_create_link(&bus->p->devices_kset->kobj,
+ &dev->kobj, dev->bus_id);
+ if (error)
+ goto out_id;
+ error = sysfs_create_link(&dev->kobj,
+- &dev->bus->subsys.kobj, "subsystem");
++ &dev->bus->p->subsys.kobj, "subsystem");
+ if (error)
+ goto out_subsys;
+ error = make_deprecated_bus_links(dev);
+@@ -466,7 +467,7 @@ int bus_add_device(struct device * dev)
+ out_deprecated:
+ sysfs_remove_link(&dev->kobj, "subsystem");
+ out_subsys:
+- sysfs_remove_link(&bus->devices.kobj, dev->bus_id);
++ sysfs_remove_link(&bus->p->devices_kset->kobj, dev->bus_id);
+ out_id:
+ device_remove_attrs(bus, dev);
+ out_put:
+@@ -475,56 +476,58 @@ out_put:
+ }
+
+ /**
+- * bus_attach_device - add device to bus
+- * @dev: device tried to attach to a driver
++ * bus_attach_device - add device to bus
++ * @dev: device tried to attach to a driver
+ *
+- * - Add device to bus's list of devices.
+- * - Try to attach to driver.
++ * - Add device to bus's list of devices.
++ * - Try to attach to driver.
+ */
+-void bus_attach_device(struct device * dev)
++void bus_attach_device(struct device *dev)
+ {
+ struct bus_type *bus = dev->bus;
+ int ret = 0;
+
+ if (bus) {
+ dev->is_registered = 1;
+- if (bus->drivers_autoprobe)
++ if (bus->p->drivers_autoprobe)
+ ret = device_attach(dev);
+ WARN_ON(ret < 0);
+ if (ret >= 0)
+- klist_add_tail(&dev->knode_bus, &bus->klist_devices);
++ klist_add_tail(&dev->knode_bus, &bus->p->klist_devices);
+ else
+ dev->is_registered = 0;
+ }
+ }
+
+ /**
+- * bus_remove_device - remove device from bus
+- * @dev: device to be removed
++ * bus_remove_device - remove device from bus
++ * @dev: device to be removed
+ *
+- * - Remove symlink from bus's directory.
+- * - Delete device from bus's list.
+- * - Detach from its driver.
+- * - Drop reference taken in bus_add_device().
++ * - Remove symlink from bus's directory.
++ * - Delete device from bus's list.
++ * - Detach from its driver.
++ * - Drop reference taken in bus_add_device().
+ */
+-void bus_remove_device(struct device * dev)
++void bus_remove_device(struct device *dev)
+ {
+ if (dev->bus) {
+ sysfs_remove_link(&dev->kobj, "subsystem");
+ remove_deprecated_bus_links(dev);
+- sysfs_remove_link(&dev->bus->devices.kobj, dev->bus_id);
++ sysfs_remove_link(&dev->bus->p->devices_kset->kobj,
++ dev->bus_id);
+ device_remove_attrs(dev->bus, dev);
+ if (dev->is_registered) {
+ dev->is_registered = 0;
+ klist_del(&dev->knode_bus);
+ }
+- pr_debug("bus %s: remove device %s\n", dev->bus->name, dev->bus_id);
++ pr_debug("bus: '%s': remove device %s\n",
++ dev->bus->name, dev->bus_id);
+ device_release_driver(dev);
+ bus_put(dev->bus);
+ }
+ }
+
+-static int driver_add_attrs(struct bus_type * bus, struct device_driver * drv)
++static int driver_add_attrs(struct bus_type *bus, struct device_driver *drv)
+ {
+ int error = 0;
+ int i;
+@@ -533,19 +536,19 @@ static int driver_add_attrs(struct bus_type * bus, struct device_driver * drv)
+ for (i = 0; attr_name(bus->drv_attrs[i]); i++) {
+ error = driver_create_file(drv, &bus->drv_attrs[i]);
+ if (error)
+- goto Err;
++ goto err;
+ }
+ }
+- Done:
++done:
+ return error;
+- Err:
++err:
+ while (--i >= 0)
+ driver_remove_file(drv, &bus->drv_attrs[i]);
+- goto Done;
++ goto done;
+ }
+
+-
+-static void driver_remove_attrs(struct bus_type * bus, struct device_driver * drv)
++static void driver_remove_attrs(struct bus_type *bus,
++ struct device_driver *drv)
+ {
+ int i;
+
+@@ -616,39 +619,46 @@ static ssize_t driver_uevent_store(struct device_driver *drv,
+ enum kobject_action action;
+
+ if (kobject_action_type(buf, count, &action) == 0)
+- kobject_uevent(&drv->kobj, action);
++ kobject_uevent(&drv->p->kobj, action);
+ return count;
+ }
+ static DRIVER_ATTR(uevent, S_IWUSR, NULL, driver_uevent_store);
+
+ /**
+- * bus_add_driver - Add a driver to the bus.
+- * @drv: driver.
+- *
++ * bus_add_driver - Add a driver to the bus.
++ * @drv: driver.
+ */
+ int bus_add_driver(struct device_driver *drv)
+ {
+- struct bus_type * bus = bus_get(drv->bus);
++ struct bus_type *bus;
++ struct driver_private *priv;
+ int error = 0;
+
++ bus = bus_get(drv->bus);
+ if (!bus)
+ return -EINVAL;
+
+- pr_debug("bus %s: add driver %s\n", bus->name, drv->name);
+- error = kobject_set_name(&drv->kobj, "%s", drv->name);
+- if (error)
+- goto out_put_bus;
+- drv->kobj.kset = &bus->drivers;
+- error = kobject_register(&drv->kobj);
++ pr_debug("bus: '%s': add driver %s\n", bus->name, drv->name);
++
++ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++ if (!priv)
++ return -ENOMEM;
++
++ klist_init(&priv->klist_devices, NULL, NULL);
++ priv->driver = drv;
++ drv->p = priv;
++ priv->kobj.kset = bus->p->drivers_kset;
++ error = kobject_init_and_add(&priv->kobj, &driver_ktype, NULL,
++ "%s", drv->name);
+ if (error)
+ goto out_put_bus;
+
+- if (drv->bus->drivers_autoprobe) {
++ if (drv->bus->p->drivers_autoprobe) {
+ error = driver_attach(drv);
+ if (error)
+ goto out_unregister;
+ }
+- klist_add_tail(&drv->knode_bus, &bus->klist_drivers);
++ klist_add_tail(&priv->knode_bus, &bus->p->klist_drivers);
+ module_add_driver(drv->owner, drv);
+
+ error = driver_create_file(drv, &driver_attr_uevent);
+@@ -669,24 +679,24 @@ int bus_add_driver(struct device_driver *drv)
+ __FUNCTION__, drv->name);
+ }
+
++ kobject_uevent(&priv->kobj, KOBJ_ADD);
+ return error;
+ out_unregister:
+- kobject_unregister(&drv->kobj);
++ kobject_put(&priv->kobj);
+ out_put_bus:
+ bus_put(bus);
+ return error;
+ }
+
+ /**
+- * bus_remove_driver - delete driver from bus's knowledge.
+- * @drv: driver.
++ * bus_remove_driver - delete driver from bus's knowledge.
++ * @drv: driver.
+ *
+- * Detach the driver from the devices it controls, and remove
+- * it from its bus's list of drivers. Finally, we drop the reference
+- * to the bus we took in bus_add_driver().
++ * Detach the driver from the devices it controls, and remove
++ * it from its bus's list of drivers. Finally, we drop the reference
++ * to the bus we took in bus_add_driver().
+ */
+-
+-void bus_remove_driver(struct device_driver * drv)
++void bus_remove_driver(struct device_driver *drv)
+ {
+ if (!drv->bus)
+ return;
+@@ -694,18 +704,17 @@ void bus_remove_driver(struct device_driver * drv)
+ remove_bind_files(drv);
+ driver_remove_attrs(drv->bus, drv);
+ driver_remove_file(drv, &driver_attr_uevent);
+- klist_remove(&drv->knode_bus);
+- pr_debug("bus %s: remove driver %s\n", drv->bus->name, drv->name);
++ klist_remove(&drv->p->knode_bus);
++ pr_debug("bus: '%s': remove driver %s\n", drv->bus->name, drv->name);
+ driver_detach(drv);
+ module_remove_driver(drv);
+- kobject_unregister(&drv->kobj);
++ kobject_put(&drv->p->kobj);
+ bus_put(drv->bus);
+ }
+
+-
+ /* Helper for bus_rescan_devices's iter */
+ static int __must_check bus_rescan_devices_helper(struct device *dev,
+- void *data)
++ void *data)
+ {
+ int ret = 0;
+
+@@ -727,10 +736,11 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
+ * attached and rescan it against existing drivers to see if it matches
+ * any by calling device_attach() for the unbound devices.
+ */
+-int bus_rescan_devices(struct bus_type * bus)
++int bus_rescan_devices(struct bus_type *bus)
+ {
+ return bus_for_each_dev(bus, NULL, NULL, bus_rescan_devices_helper);
+ }
++EXPORT_SYMBOL_GPL(bus_rescan_devices);
+
+ /**
+ * device_reprobe - remove driver for a device and probe for a new driver
+@@ -755,55 +765,55 @@ int device_reprobe(struct device *dev)
+ EXPORT_SYMBOL_GPL(device_reprobe);
+
+ /**
+- * find_bus - locate bus by name.
+- * @name: name of bus.
++ * find_bus - locate bus by name.
++ * @name: name of bus.
+ *
+- * Call kset_find_obj() to iterate over list of buses to
+- * find a bus by name. Return bus if found.
++ * Call kset_find_obj() to iterate over list of buses to
++ * find a bus by name. Return bus if found.
+ *
+- * Note that kset_find_obj increments bus' reference count.
++ * Note that kset_find_obj increments bus' reference count.
+ */
+ #if 0
+-struct bus_type * find_bus(char * name)
++struct bus_type *find_bus(char *name)
+ {
+- struct kobject * k = kset_find_obj(&bus_subsys.kset, name);
++ struct kobject *k = kset_find_obj(bus_kset, name);
+ return k ? to_bus(k) : NULL;
+ }
+ #endif /* 0 */
+
+
+ /**
+- * bus_add_attrs - Add default attributes for this bus.
+- * @bus: Bus that has just been registered.
++ * bus_add_attrs - Add default attributes for this bus.
++ * @bus: Bus that has just been registered.
+ */
+
+-static int bus_add_attrs(struct bus_type * bus)
++static int bus_add_attrs(struct bus_type *bus)
+ {
+ int error = 0;
+ int i;
+
+ if (bus->bus_attrs) {
+ for (i = 0; attr_name(bus->bus_attrs[i]); i++) {
+- error = bus_create_file(bus,&bus->bus_attrs[i]);
++ error = bus_create_file(bus, &bus->bus_attrs[i]);
+ if (error)
+- goto Err;
++ goto err;
+ }
+ }
+- Done:
++done:
+ return error;
+- Err:
++err:
+ while (--i >= 0)
+- bus_remove_file(bus,&bus->bus_attrs[i]);
+- goto Done;
++ bus_remove_file(bus, &bus->bus_attrs[i]);
++ goto done;
+ }
+
+-static void bus_remove_attrs(struct bus_type * bus)
++static void bus_remove_attrs(struct bus_type *bus)
+ {
+ int i;
+
+ if (bus->bus_attrs) {
+ for (i = 0; attr_name(bus->bus_attrs[i]); i++)
+- bus_remove_file(bus,&bus->bus_attrs[i]);
++ bus_remove_file(bus, &bus->bus_attrs[i]);
+ }
+ }
+
+@@ -827,32 +837,42 @@ static ssize_t bus_uevent_store(struct bus_type *bus,
+ enum kobject_action action;
+
+ if (kobject_action_type(buf, count, &action) == 0)
+- kobject_uevent(&bus->subsys.kobj, action);
++ kobject_uevent(&bus->p->subsys.kobj, action);
+ return count;
+ }
+ static BUS_ATTR(uevent, S_IWUSR, NULL, bus_uevent_store);
+
+ /**
+- * bus_register - register a bus with the system.
+- * @bus: bus.
++ * bus_register - register a bus with the system.
++ * @bus: bus.
+ *
+- * Once we have that, we registered the bus with the kobject
+- * infrastructure, then register the children subsystems it has:
+- * the devices and drivers that belong to the bus.
++ * Once we have that, we registered the bus with the kobject
++ * infrastructure, then register the children subsystems it has:
++ * the devices and drivers that belong to the bus.
+ */
+-int bus_register(struct bus_type * bus)
++int bus_register(struct bus_type *bus)
+ {
+ int retval;
++ struct bus_type_private *priv;
++
++ priv = kzalloc(sizeof(struct bus_type_private), GFP_KERNEL);
++ if (!priv)
++ return -ENOMEM;
+
+- BLOCKING_INIT_NOTIFIER_HEAD(&bus->bus_notifier);
++ priv->bus = bus;
++ bus->p = priv;
+
+- retval = kobject_set_name(&bus->subsys.kobj, "%s", bus->name);
++ BLOCKING_INIT_NOTIFIER_HEAD(&priv->bus_notifier);
++
++ retval = kobject_set_name(&priv->subsys.kobj, "%s", bus->name);
+ if (retval)
+ goto out;
+
+- bus->subsys.kobj.kset = &bus_subsys;
++ priv->subsys.kobj.kset = bus_kset;
++ priv->subsys.kobj.ktype = &bus_ktype;
++ priv->drivers_autoprobe = 1;
+
+- retval = subsystem_register(&bus->subsys);
++ retval = kset_register(&priv->subsys);
+ if (retval)
+ goto out;
+
+@@ -860,23 +880,23 @@ int bus_register(struct bus_type * bus)
+ if (retval)
+ goto bus_uevent_fail;
+
+- kobject_set_name(&bus->devices.kobj, "devices");
+- bus->devices.kobj.parent = &bus->subsys.kobj;
+- retval = kset_register(&bus->devices);
+- if (retval)
++ priv->devices_kset = kset_create_and_add("devices", NULL,
++ &priv->subsys.kobj);
++ if (!priv->devices_kset) {
++ retval = -ENOMEM;
+ goto bus_devices_fail;
++ }
+
+- kobject_set_name(&bus->drivers.kobj, "drivers");
+- bus->drivers.kobj.parent = &bus->subsys.kobj;
+- bus->drivers.ktype = &driver_ktype;
+- retval = kset_register(&bus->drivers);
+- if (retval)
++ priv->drivers_kset = kset_create_and_add("drivers", NULL,
++ &priv->subsys.kobj);
++ if (!priv->drivers_kset) {
++ retval = -ENOMEM;
+ goto bus_drivers_fail;
++ }
+
+- klist_init(&bus->klist_devices, klist_devices_get, klist_devices_put);
+- klist_init(&bus->klist_drivers, NULL, NULL);
++ klist_init(&priv->klist_devices, klist_devices_get, klist_devices_put);
++ klist_init(&priv->klist_drivers, NULL, NULL);
+
+- bus->drivers_autoprobe = 1;
+ retval = add_probe_files(bus);
+ if (retval)
+ goto bus_probe_files_fail;
+@@ -885,66 +905,73 @@ int bus_register(struct bus_type * bus)
+ if (retval)
+ goto bus_attrs_fail;
+
+- pr_debug("bus type '%s' registered\n", bus->name);
++ pr_debug("bus: '%s': registered\n", bus->name);
+ return 0;
+
+ bus_attrs_fail:
+ remove_probe_files(bus);
+ bus_probe_files_fail:
+- kset_unregister(&bus->drivers);
++ kset_unregister(bus->p->drivers_kset);
+ bus_drivers_fail:
+- kset_unregister(&bus->devices);
++ kset_unregister(bus->p->devices_kset);
+ bus_devices_fail:
+ bus_remove_file(bus, &bus_attr_uevent);
+ bus_uevent_fail:
+- subsystem_unregister(&bus->subsys);
++ kset_unregister(&bus->p->subsys);
++ kfree(bus->p);
+ out:
+ return retval;
+ }
++EXPORT_SYMBOL_GPL(bus_register);
+
+ /**
+- * bus_unregister - remove a bus from the system
+- * @bus: bus.
++ * bus_unregister - remove a bus from the system
++ * @bus: bus.
+ *
+- * Unregister the child subsystems and the bus itself.
+- * Finally, we call bus_put() to release the refcount
++ * Unregister the child subsystems and the bus itself.
++ * Finally, we call bus_put() to release the refcount
+ */
+-void bus_unregister(struct bus_type * bus)
++void bus_unregister(struct bus_type *bus)
+ {
+- pr_debug("bus %s: unregistering\n", bus->name);
++ pr_debug("bus: '%s': unregistering\n", bus->name);
+ bus_remove_attrs(bus);
+ remove_probe_files(bus);
+- kset_unregister(&bus->drivers);
+- kset_unregister(&bus->devices);
++ kset_unregister(bus->p->drivers_kset);
++ kset_unregister(bus->p->devices_kset);
+ bus_remove_file(bus, &bus_attr_uevent);
+- subsystem_unregister(&bus->subsys);
++ kset_unregister(&bus->p->subsys);
++ kfree(bus->p);
+ }
++EXPORT_SYMBOL_GPL(bus_unregister);
+
+ int bus_register_notifier(struct bus_type *bus, struct notifier_block *nb)
+ {
+- return blocking_notifier_chain_register(&bus->bus_notifier, nb);
++ return blocking_notifier_chain_register(&bus->p->bus_notifier, nb);
+ }
+ EXPORT_SYMBOL_GPL(bus_register_notifier);
+
+ int bus_unregister_notifier(struct bus_type *bus, struct notifier_block *nb)
+ {
+- return blocking_notifier_chain_unregister(&bus->bus_notifier, nb);
++ return blocking_notifier_chain_unregister(&bus->p->bus_notifier, nb);
+ }
+ EXPORT_SYMBOL_GPL(bus_unregister_notifier);
+
+-int __init buses_init(void)
++struct kset *bus_get_kset(struct bus_type *bus)
+ {
+- return subsystem_register(&bus_subsys);
++ return &bus->p->subsys;
+ }
++EXPORT_SYMBOL_GPL(bus_get_kset);
+
++struct klist *bus_get_device_klist(struct bus_type *bus)
++{
++ return &bus->p->klist_devices;
++}
++EXPORT_SYMBOL_GPL(bus_get_device_klist);
+
+-EXPORT_SYMBOL_GPL(bus_for_each_dev);
+-EXPORT_SYMBOL_GPL(bus_find_device);
+-EXPORT_SYMBOL_GPL(bus_for_each_drv);
+-
+-EXPORT_SYMBOL_GPL(bus_register);
+-EXPORT_SYMBOL_GPL(bus_unregister);
+-EXPORT_SYMBOL_GPL(bus_rescan_devices);
+-
+-EXPORT_SYMBOL_GPL(bus_create_file);
+-EXPORT_SYMBOL_GPL(bus_remove_file);
++int __init buses_init(void)
++{
++ bus_kset = kset_create_and_add("bus", &bus_uevent_ops, NULL);
++ if (!bus_kset)
++ return -ENOMEM;
++ return 0;
++}
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index a863bb0..59cf358 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -17,16 +17,17 @@
+ #include <linux/kdev_t.h>
+ #include <linux/err.h>
+ #include <linux/slab.h>
++#include <linux/genhd.h>
+ #include "base.h"
+
+ #define to_class_attr(_attr) container_of(_attr, struct class_attribute, attr)
+ #define to_class(obj) container_of(obj, struct class, subsys.kobj)
+
+-static ssize_t
+-class_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
++static ssize_t class_attr_show(struct kobject *kobj, struct attribute *attr,
++ char *buf)
+ {
+- struct class_attribute * class_attr = to_class_attr(attr);
+- struct class * dc = to_class(kobj);
++ struct class_attribute *class_attr = to_class_attr(attr);
++ struct class *dc = to_class(kobj);
+ ssize_t ret = -EIO;
+
+ if (class_attr->show)
+@@ -34,12 +35,11 @@ class_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
+ return ret;
+ }
+
+-static ssize_t
+-class_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * buf, size_t count)
++static ssize_t class_attr_store(struct kobject *kobj, struct attribute *attr,
++ const char *buf, size_t count)
+ {
+- struct class_attribute * class_attr = to_class_attr(attr);
+- struct class * dc = to_class(kobj);
++ struct class_attribute *class_attr = to_class_attr(attr);
++ struct class *dc = to_class(kobj);
+ ssize_t ret = -EIO;
+
+ if (class_attr->store)
+@@ -47,7 +47,7 @@ class_attr_store(struct kobject * kobj, struct attribute * attr,
+ return ret;
+ }
+
+-static void class_release(struct kobject * kobj)
++static void class_release(struct kobject *kobj)
+ {
+ struct class *class = to_class(kobj);
+
+@@ -71,20 +71,20 @@ static struct kobj_type class_ktype = {
+ };
+
+ /* Hotplug events for classes go to the class_obj subsys */
+-static decl_subsys(class, &class_ktype, NULL);
++static struct kset *class_kset;
+
+
+-int class_create_file(struct class * cls, const struct class_attribute * attr)
++int class_create_file(struct class *cls, const struct class_attribute *attr)
+ {
+ int error;
+- if (cls) {
++ if (cls)
+ error = sysfs_create_file(&cls->subsys.kobj, &attr->attr);
+- } else
++ else
+ error = -EINVAL;
+ return error;
+ }
+
+-void class_remove_file(struct class * cls, const struct class_attribute * attr)
++void class_remove_file(struct class *cls, const struct class_attribute *attr)
+ {
+ if (cls)
+ sysfs_remove_file(&cls->subsys.kobj, &attr->attr);
+@@ -93,48 +93,48 @@ void class_remove_file(struct class * cls, const struct class_attribute * attr)
+ static struct class *class_get(struct class *cls)
+ {
+ if (cls)
+- return container_of(kset_get(&cls->subsys), struct class, subsys);
++ return container_of(kset_get(&cls->subsys),
++ struct class, subsys);
+ return NULL;
+ }
+
+-static void class_put(struct class * cls)
++static void class_put(struct class *cls)
+ {
+ if (cls)
+ kset_put(&cls->subsys);
+ }
+
+-
+-static int add_class_attrs(struct class * cls)
++static int add_class_attrs(struct class *cls)
+ {
+ int i;
+ int error = 0;
+
+ if (cls->class_attrs) {
+ for (i = 0; attr_name(cls->class_attrs[i]); i++) {
+- error = class_create_file(cls,&cls->class_attrs[i]);
++ error = class_create_file(cls, &cls->class_attrs[i]);
+ if (error)
+- goto Err;
++ goto error;
+ }
+ }
+- Done:
++done:
+ return error;
+- Err:
++error:
+ while (--i >= 0)
+- class_remove_file(cls,&cls->class_attrs[i]);
+- goto Done;
++ class_remove_file(cls, &cls->class_attrs[i]);
++ goto done;
+ }
+
+-static void remove_class_attrs(struct class * cls)
++static void remove_class_attrs(struct class *cls)
+ {
+ int i;
+
+ if (cls->class_attrs) {
+ for (i = 0; attr_name(cls->class_attrs[i]); i++)
+- class_remove_file(cls,&cls->class_attrs[i]);
++ class_remove_file(cls, &cls->class_attrs[i]);
+ }
+ }
+
+-int class_register(struct class * cls)
++int class_register(struct class *cls)
+ {
+ int error;
+
+@@ -149,9 +149,16 @@ int class_register(struct class * cls)
+ if (error)
+ return error;
+
+- cls->subsys.kobj.kset = &class_subsys;
++#ifdef CONFIG_SYSFS_DEPRECATED
++ /* let the block class directory show up in the root of sysfs */
++ if (cls != &block_class)
++ cls->subsys.kobj.kset = class_kset;
++#else
++ cls->subsys.kobj.kset = class_kset;
++#endif
++ cls->subsys.kobj.ktype = &class_ktype;
+
+- error = subsystem_register(&cls->subsys);
++ error = kset_register(&cls->subsys);
+ if (!error) {
+ error = add_class_attrs(class_get(cls));
+ class_put(cls);
+@@ -159,11 +166,11 @@ int class_register(struct class * cls)
+ return error;
+ }
+
+-void class_unregister(struct class * cls)
++void class_unregister(struct class *cls)
+ {
+ pr_debug("device class '%s': unregistering\n", cls->name);
+ remove_class_attrs(cls);
+- subsystem_unregister(&cls->subsys);
++ kset_unregister(&cls->subsys);
+ }
+
+ static void class_create_release(struct class *cls)
+@@ -241,8 +248,8 @@ void class_destroy(struct class *cls)
+
+ /* Class Device Stuff */
+
+-int class_device_create_file(struct class_device * class_dev,
+- const struct class_device_attribute * attr)
++int class_device_create_file(struct class_device *class_dev,
++ const struct class_device_attribute *attr)
+ {
+ int error = -EINVAL;
+ if (class_dev)
+@@ -250,8 +257,8 @@ int class_device_create_file(struct class_device * class_dev,
+ return error;
+ }
+
+-void class_device_remove_file(struct class_device * class_dev,
+- const struct class_device_attribute * attr)
++void class_device_remove_file(struct class_device *class_dev,
++ const struct class_device_attribute *attr)
+ {
+ if (class_dev)
+ sysfs_remove_file(&class_dev->kobj, &attr->attr);
+@@ -273,12 +280,11 @@ void class_device_remove_bin_file(struct class_device *class_dev,
+ sysfs_remove_bin_file(&class_dev->kobj, attr);
+ }
+
+-static ssize_t
+-class_device_attr_show(struct kobject * kobj, struct attribute * attr,
+- char * buf)
++static ssize_t class_device_attr_show(struct kobject *kobj,
++ struct attribute *attr, char *buf)
+ {
+- struct class_device_attribute * class_dev_attr = to_class_dev_attr(attr);
+- struct class_device * cd = to_class_dev(kobj);
++ struct class_device_attribute *class_dev_attr = to_class_dev_attr(attr);
++ struct class_device *cd = to_class_dev(kobj);
+ ssize_t ret = 0;
+
+ if (class_dev_attr->show)
+@@ -286,12 +292,12 @@ class_device_attr_show(struct kobject * kobj, struct attribute * attr,
+ return ret;
+ }
+
+-static ssize_t
+-class_device_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * buf, size_t count)
++static ssize_t class_device_attr_store(struct kobject *kobj,
++ struct attribute *attr,
++ const char *buf, size_t count)
+ {
+- struct class_device_attribute * class_dev_attr = to_class_dev_attr(attr);
+- struct class_device * cd = to_class_dev(kobj);
++ struct class_device_attribute *class_dev_attr = to_class_dev_attr(attr);
++ struct class_device *cd = to_class_dev(kobj);
+ ssize_t ret = 0;
+
+ if (class_dev_attr->store)
+@@ -304,10 +310,10 @@ static struct sysfs_ops class_dev_sysfs_ops = {
+ .store = class_device_attr_store,
+ };
+
+-static void class_dev_release(struct kobject * kobj)
++static void class_dev_release(struct kobject *kobj)
+ {
+ struct class_device *cd = to_class_dev(kobj);
+- struct class * cls = cd->class;
++ struct class *cls = cd->class;
+
+ pr_debug("device class '%s': release.\n", cd->class_id);
+
+@@ -316,8 +322,8 @@ static void class_dev_release(struct kobject * kobj)
+ else if (cls->release)
+ cls->release(cd);
+ else {
+- printk(KERN_ERR "Class Device '%s' does not have a release() function, "
+- "it is broken and must be fixed.\n",
++ printk(KERN_ERR "Class Device '%s' does not have a release() "
++ "function, it is broken and must be fixed.\n",
+ cd->class_id);
+ WARN_ON(1);
+ }
+@@ -428,7 +434,8 @@ static int class_uevent(struct kset *kset, struct kobject *kobj,
+ add_uevent_var(env, "PHYSDEVBUS=%s", dev->bus->name);
+
+ if (dev->driver)
+- add_uevent_var(env, "PHYSDEVDRIVER=%s", dev->driver->name);
++ add_uevent_var(env, "PHYSDEVDRIVER=%s",
++ dev->driver->name);
+ }
+
+ if (class_dev->uevent) {
+@@ -452,43 +459,49 @@ static struct kset_uevent_ops class_uevent_ops = {
+ .uevent = class_uevent,
+ };
+
+-static decl_subsys(class_obj, &class_device_ktype, &class_uevent_ops);
+-
++/*
++ * DO NOT copy how this is created, kset_create_and_add() should be
++ * called, but this is a hold-over from the old-way and will be deleted
++ * entirely soon.
++ */
++static struct kset class_obj_subsys = {
++ .uevent_ops = &class_uevent_ops,
++};
+
+-static int class_device_add_attrs(struct class_device * cd)
++static int class_device_add_attrs(struct class_device *cd)
+ {
+ int i;
+ int error = 0;
+- struct class * cls = cd->class;
++ struct class *cls = cd->class;
+
+ if (cls->class_dev_attrs) {
+ for (i = 0; attr_name(cls->class_dev_attrs[i]); i++) {
+ error = class_device_create_file(cd,
+- &cls->class_dev_attrs[i]);
++ &cls->class_dev_attrs[i]);
+ if (error)
+- goto Err;
++ goto err;
+ }
+ }
+- Done:
++done:
+ return error;
+- Err:
++err:
+ while (--i >= 0)
+- class_device_remove_file(cd,&cls->class_dev_attrs[i]);
+- goto Done;
++ class_device_remove_file(cd, &cls->class_dev_attrs[i]);
++ goto done;
+ }
+
+-static void class_device_remove_attrs(struct class_device * cd)
++static void class_device_remove_attrs(struct class_device *cd)
+ {
+ int i;
+- struct class * cls = cd->class;
++ struct class *cls = cd->class;
+
+ if (cls->class_dev_attrs) {
+ for (i = 0; attr_name(cls->class_dev_attrs[i]); i++)
+- class_device_remove_file(cd,&cls->class_dev_attrs[i]);
++ class_device_remove_file(cd, &cls->class_dev_attrs[i]);
+ }
+ }
+
+-static int class_device_add_groups(struct class_device * cd)
++static int class_device_add_groups(struct class_device *cd)
+ {
+ int i;
+ int error = 0;
+@@ -498,7 +511,8 @@ static int class_device_add_groups(struct class_device * cd)
+ error = sysfs_create_group(&cd->kobj, cd->groups[i]);
+ if (error) {
+ while (--i >= 0)
+- sysfs_remove_group(&cd->kobj, cd->groups[i]);
++ sysfs_remove_group(&cd->kobj,
++ cd->groups[i]);
+ goto out;
+ }
+ }
+@@ -507,14 +521,12 @@ out:
+ return error;
+ }
+
+-static void class_device_remove_groups(struct class_device * cd)
++static void class_device_remove_groups(struct class_device *cd)
+ {
+ int i;
+- if (cd->groups) {
+- for (i = 0; cd->groups[i]; i++) {
++ if (cd->groups)
++ for (i = 0; cd->groups[i]; i++)
+ sysfs_remove_group(&cd->kobj, cd->groups[i]);
+- }
+- }
+ }
+
+ static ssize_t show_dev(struct class_device *class_dev, char *buf)
+@@ -537,8 +549,8 @@ static struct class_device_attribute class_uevent_attr =
+
+ void class_device_initialize(struct class_device *class_dev)
+ {
+- kobj_set_kset_s(class_dev, class_obj_subsys);
+- kobject_init(&class_dev->kobj);
++ class_dev->kobj.kset = &class_obj_subsys;
++ kobject_init(&class_dev->kobj, &class_device_ktype);
+ INIT_LIST_HEAD(&class_dev->node);
+ }
+
+@@ -566,16 +578,13 @@ int class_device_add(struct class_device *class_dev)
+ class_dev->class_id);
+
+ /* first, register with generic layer. */
+- error = kobject_set_name(&class_dev->kobj, "%s", class_dev->class_id);
+- if (error)
+- goto out2;
+-
+ if (parent_class_dev)
+ class_dev->kobj.parent = &parent_class_dev->kobj;
+ else
+ class_dev->kobj.parent = &parent_class->subsys.kobj;
+
+- error = kobject_add(&class_dev->kobj);
++ error = kobject_add(&class_dev->kobj, class_dev->kobj.parent,
++ "%s", class_dev->class_id);
+ if (error)
+ goto out2;
+
+@@ -642,7 +651,7 @@ int class_device_add(struct class_device *class_dev)
+ out3:
+ kobject_del(&class_dev->kobj);
+ out2:
+- if(parent_class_dev)
++ if (parent_class_dev)
+ class_device_put(parent_class_dev);
+ class_put(parent_class);
+ out1:
+@@ -659,9 +668,11 @@ int class_device_register(struct class_device *class_dev)
+ /**
+ * class_device_create - creates a class device and registers it with sysfs
+ * @cls: pointer to the struct class that this device should be registered to.
+- * @parent: pointer to the parent struct class_device of this new device, if any.
++ * @parent: pointer to the parent struct class_device of this new device, if
++ * any.
+ * @devt: the dev_t for the char device to be added.
+- * @device: a pointer to a struct device that is assiociated with this class device.
++ * @device: a pointer to a struct device that is associated with this class
++ * device.
+ * @fmt: string for the class device's name
+ *
+ * This function can be used by char device classes. A struct
+@@ -785,7 +796,7 @@ void class_device_destroy(struct class *cls, dev_t devt)
+ class_device_unregister(class_dev);
+ }
+
+-struct class_device * class_device_get(struct class_device *class_dev)
++struct class_device *class_device_get(struct class_device *class_dev)
+ {
+ if (class_dev)
+ return to_class_dev(kobject_get(&class_dev->kobj));
+@@ -798,6 +809,139 @@ void class_device_put(struct class_device *class_dev)
+ kobject_put(&class_dev->kobj);
+ }
+
++/**
++ * class_for_each_device - device iterator
++ * @class: the class we're iterating
++ * @data: data for the callback
++ * @fn: function to be called for each device
++ *
++ * Iterate over @class's list of devices, and call @fn for each,
++ * passing it @data.
++ *
++ * We check the return of @fn each time. If it returns anything
++ * other than 0, we break out and return that value.
++ *
++ * Note, we hold class->sem in this function, so it can not be
++ * re-acquired in @fn, otherwise it will self-deadlock. For
++ * example, calls to add or remove class members would be verboten.
++ */
++int class_for_each_device(struct class *class, void *data,
++ int (*fn)(struct device *, void *))
++{
++ struct device *dev;
++ int error = 0;
++
++ if (!class)
++ return -EINVAL;
++ down(&class->sem);
++ list_for_each_entry(dev, &class->devices, node) {
++ dev = get_device(dev);
++ if (dev) {
++ error = fn(dev, data);
++ put_device(dev);
++ } else
++ error = -ENODEV;
++ if (error)
++ break;
++ }
++ up(&class->sem);
++
++ return error;
++}
++EXPORT_SYMBOL_GPL(class_for_each_device);
++
++/**
++ * class_find_device - device iterator for locating a particular device
++ * @class: the class we're iterating
++ * @data: data for the match function
++ * @match: function to check device
++ *
++ * This is similar to the class_for_each_device() function above, but it
++ * returns a reference to a device that is 'found' for later use, as
++ * determined by the @match callback.
++ *
++ * The callback should return 0 if the device doesn't match and non-zero
++ * if it does. If the callback returns non-zero, this function will
++ * return to the caller and not iterate over any more devices.
++ *
++ * Note, you will need to drop the reference with put_device() after use.
++ *
++ * We hold class->sem in this function, so it can not be
++ * re-acquired in @match, otherwise it will self-deadlock. For
++ * example, calls to add or remove class members would be verboten.
++ */
++struct device *class_find_device(struct class *class, void *data,
++ int (*match)(struct device *, void *))
++{
++ struct device *dev;
++ int found = 0;
++
++ if (!class)
++ return NULL;
++
++ down(&class->sem);
++ list_for_each_entry(dev, &class->devices, node) {
++ dev = get_device(dev);
++ if (dev) {
++ if (match(dev, data)) {
++ found = 1;
++ break;
++ } else
++ put_device(dev);
++ } else
++ break;
++ }
++ up(&class->sem);
++
++ return found ? dev : NULL;
++}
++EXPORT_SYMBOL_GPL(class_find_device);
++
++/**
++ * class_find_child - device iterator for locating a particular class_device
++ * @class: the class we're iterating
++ * @data: data for the match function
++ * @match: function to check class_device
++ *
++ * This function returns a reference to a class_device that is 'found' for
++ * later use, as determined by the @match callback.
++ *
++ * The callback should return 0 if the class_device doesn't match and non-zero
++ * if it does. If the callback returns non-zero, this function will
++ * return to the caller and not iterate over any more class_devices.
++ *
++ * Note, you will need to drop the reference with class_device_put() after use.
++ *
++ * We hold class->sem in this function, so it can not be
++ * re-acquired in @match, otherwise it will self-deadlock. For
++ * example, calls to add or remove class members would be verboten.
++ */
++struct class_device *class_find_child(struct class *class, void *data,
++ int (*match)(struct class_device *, void *))
++{
++ struct class_device *dev;
++ int found = 0;
++
++ if (!class)
++ return NULL;
++
++ down(&class->sem);
++ list_for_each_entry(dev, &class->children, node) {
++ dev = class_device_get(dev);
++ if (dev) {
++ if (match(dev, data)) {
++ found = 1;
++ break;
++ } else
++ class_device_put(dev);
++ } else
++ break;
++ }
++ up(&class->sem);
++
++ return found ? dev : NULL;
++}
++EXPORT_SYMBOL_GPL(class_find_child);
+
+ int class_interface_register(struct class_interface *class_intf)
+ {
+@@ -829,7 +973,7 @@ int class_interface_register(struct class_interface *class_intf)
+
+ void class_interface_unregister(struct class_interface *class_intf)
+ {
+- struct class * parent = class_intf->class;
++ struct class *parent = class_intf->class;
+ struct class_device *class_dev;
+ struct device *dev;
+
+@@ -853,15 +997,14 @@ void class_interface_unregister(struct class_interface *class_intf)
+
+ int __init classes_init(void)
+ {
+- int retval;
+-
+- retval = subsystem_register(&class_subsys);
+- if (retval)
+- return retval;
++ class_kset = kset_create_and_add("class", NULL, NULL);
++ if (!class_kset)
++ return -ENOMEM;
+
+ /* ick, this is ugly, the things we go through to keep from showing up
+ * in sysfs... */
+ kset_init(&class_obj_subsys);
++ kobject_set_name(&class_obj_subsys.kobj, "class_obj");
+ if (!class_obj_subsys.kobj.parent)
+ class_obj_subsys.kobj.parent = &class_obj_subsys.kobj;
+ return 0;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 2683eac..edf3bbe 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -18,14 +18,14 @@
+ #include <linux/string.h>
+ #include <linux/kdev_t.h>
+ #include <linux/notifier.h>
+-
++#include <linux/genhd.h>
+ #include <asm/semaphore.h>
+
+ #include "base.h"
+ #include "power/power.h"
+
+-int (*platform_notify)(struct device * dev) = NULL;
+-int (*platform_notify_remove)(struct device * dev) = NULL;
++int (*platform_notify)(struct device *dev) = NULL;
++int (*platform_notify_remove)(struct device *dev) = NULL;
+
+ /*
+ * sysfs bindings for devices.
+@@ -51,11 +51,11 @@ EXPORT_SYMBOL(dev_driver_string);
+ #define to_dev(obj) container_of(obj, struct device, kobj)
+ #define to_dev_attr(_attr) container_of(_attr, struct device_attribute, attr)
+
+-static ssize_t
+-dev_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
++static ssize_t dev_attr_show(struct kobject *kobj, struct attribute *attr,
++ char *buf)
+ {
+- struct device_attribute * dev_attr = to_dev_attr(attr);
+- struct device * dev = to_dev(kobj);
++ struct device_attribute *dev_attr = to_dev_attr(attr);
++ struct device *dev = to_dev(kobj);
+ ssize_t ret = -EIO;
+
+ if (dev_attr->show)
+@@ -63,12 +63,11 @@ dev_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
+ return ret;
+ }
+
+-static ssize_t
+-dev_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * buf, size_t count)
++static ssize_t dev_attr_store(struct kobject *kobj, struct attribute *attr,
++ const char *buf, size_t count)
+ {
+- struct device_attribute * dev_attr = to_dev_attr(attr);
+- struct device * dev = to_dev(kobj);
++ struct device_attribute *dev_attr = to_dev_attr(attr);
++ struct device *dev = to_dev(kobj);
+ ssize_t ret = -EIO;
+
+ if (dev_attr->store)
+@@ -90,9 +89,9 @@ static struct sysfs_ops dev_sysfs_ops = {
+ * reaches 0. We forward the call to the device's release
+ * method, which should handle actually freeing the structure.
+ */
+-static void device_release(struct kobject * kobj)
++static void device_release(struct kobject *kobj)
+ {
+- struct device * dev = to_dev(kobj);
++ struct device *dev = to_dev(kobj);
+
+ if (dev->release)
+ dev->release(dev);
+@@ -101,8 +100,8 @@ static void device_release(struct kobject * kobj)
+ else if (dev->class && dev->class->dev_release)
+ dev->class->dev_release(dev);
+ else {
+- printk(KERN_ERR "Device '%s' does not have a release() function, "
+- "it is broken and must be fixed.\n",
++ printk(KERN_ERR "Device '%s' does not have a release() "
++ "function, it is broken and must be fixed.\n",
+ dev->bus_id);
+ WARN_ON(1);
+ }
+@@ -185,7 +184,8 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
+ add_uevent_var(env, "PHYSDEVBUS=%s", dev->bus->name);
+
+ if (dev->driver)
+- add_uevent_var(env, "PHYSDEVDRIVER=%s", dev->driver->name);
++ add_uevent_var(env, "PHYSDEVDRIVER=%s",
++ dev->driver->name);
+ }
+ #endif
+
+@@ -193,15 +193,16 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
+ if (dev->bus && dev->bus->uevent) {
+ retval = dev->bus->uevent(dev, env);
+ if (retval)
+- pr_debug ("%s: bus uevent() returned %d\n",
+- __FUNCTION__, retval);
++ pr_debug("device: '%s': %s: bus uevent() returned %d\n",
++ dev->bus_id, __FUNCTION__, retval);
+ }
+
+ /* have the class specific function add its stuff */
+ if (dev->class && dev->class->dev_uevent) {
+ retval = dev->class->dev_uevent(dev, env);
+ if (retval)
+- pr_debug("%s: class uevent() returned %d\n",
++ pr_debug("device: '%s': %s: class uevent() "
++ "returned %d\n", dev->bus_id,
+ __FUNCTION__, retval);
+ }
+
+@@ -209,7 +210,8 @@ static int dev_uevent(struct kset *kset, struct kobject *kobj,
+ if (dev->type && dev->type->uevent) {
+ retval = dev->type->uevent(dev, env);
+ if (retval)
+- pr_debug("%s: dev_type uevent() returned %d\n",
++ pr_debug("device: '%s': %s: dev_type uevent() "
++ "returned %d\n", dev->bus_id,
+ __FUNCTION__, retval);
+ }
+
+@@ -325,7 +327,8 @@ static int device_add_groups(struct device *dev,
+ error = sysfs_create_group(&dev->kobj, groups[i]);
+ if (error) {
+ while (--i >= 0)
+- sysfs_remove_group(&dev->kobj, groups[i]);
++ sysfs_remove_group(&dev->kobj,
++ groups[i]);
+ break;
+ }
+ }
+@@ -401,20 +404,15 @@ static ssize_t show_dev(struct device *dev, struct device_attribute *attr,
+ static struct device_attribute devt_attr =
+ __ATTR(dev, S_IRUGO, show_dev, NULL);
+
+-/*
+- * devices_subsys - structure to be registered with kobject core.
+- */
+-
+-decl_subsys(devices, &device_ktype, &device_uevent_ops);
+-
++/* kset to create /sys/devices/ */
++struct kset *devices_kset;
+
+ /**
+- * device_create_file - create sysfs attribute file for device.
+- * @dev: device.
+- * @attr: device attribute descriptor.
++ * device_create_file - create sysfs attribute file for device.
++ * @dev: device.
++ * @attr: device attribute descriptor.
+ */
+-
+-int device_create_file(struct device * dev, struct device_attribute * attr)
++int device_create_file(struct device *dev, struct device_attribute *attr)
+ {
+ int error = 0;
+ if (get_device(dev)) {
+@@ -425,12 +423,11 @@ int device_create_file(struct device * dev, struct device_attribute * attr)
+ }
+
+ /**
+- * device_remove_file - remove sysfs attribute file.
+- * @dev: device.
+- * @attr: device attribute descriptor.
++ * device_remove_file - remove sysfs attribute file.
++ * @dev: device.
++ * @attr: device attribute descriptor.
+ */
+-
+-void device_remove_file(struct device * dev, struct device_attribute * attr)
++void device_remove_file(struct device *dev, struct device_attribute *attr)
+ {
+ if (get_device(dev)) {
+ sysfs_remove_file(&dev->kobj, &attr->attr);
+@@ -511,22 +508,20 @@ static void klist_children_put(struct klist_node *n)
+ put_device(dev);
+ }
+
+-
+ /**
+- * device_initialize - init device structure.
+- * @dev: device.
++ * device_initialize - init device structure.
++ * @dev: device.
+ *
+- * This prepares the device for use by other layers,
+- * including adding it to the device hierarchy.
+- * It is the first half of device_register(), if called by
+- * that, though it can also be called separately, so one
+- * may use @dev's fields (e.g. the refcount).
++ * This prepares the device for use by other layers,
++ * including adding it to the device hierarchy.
++ * It is the first half of device_register(), if called by
++ * that, though it can also be called separately, so one
++ * may use @dev's fields (e.g. the refcount).
+ */
+-
+ void device_initialize(struct device *dev)
+ {
+- kobj_set_kset_s(dev, devices_subsys);
+- kobject_init(&dev->kobj);
++ dev->kobj.kset = devices_kset;
++ kobject_init(&dev->kobj, &device_ktype);
+ klist_init(&dev->klist_children, klist_children_get,
+ klist_children_put);
+ INIT_LIST_HEAD(&dev->dma_pools);
+@@ -539,36 +534,39 @@ void device_initialize(struct device *dev)
+ }
+
+ #ifdef CONFIG_SYSFS_DEPRECATED
+-static struct kobject * get_device_parent(struct device *dev,
+- struct device *parent)
++static struct kobject *get_device_parent(struct device *dev,
++ struct device *parent)
+ {
+- /*
+- * Set the parent to the class, not the parent device
+- * for topmost devices in class hierarchy.
+- * This keeps sysfs from having a symlink to make old
+- * udevs happy
+- */
++ /* class devices without a parent live in /sys/class/<classname>/ */
+ if (dev->class && (!parent || parent->class != dev->class))
+ return &dev->class->subsys.kobj;
++ /* all other devices keep their parent */
+ else if (parent)
+ return &parent->kobj;
+
+ return NULL;
+ }
++
++static inline void cleanup_device_parent(struct device *dev) {}
++static inline void cleanup_glue_dir(struct device *dev,
++ struct kobject *glue_dir) {}
+ #else
+ static struct kobject *virtual_device_parent(struct device *dev)
+ {
+ static struct kobject *virtual_dir = NULL;
+
+ if (!virtual_dir)
+- virtual_dir = kobject_add_dir(&devices_subsys.kobj, "virtual");
++ virtual_dir = kobject_create_and_add("virtual",
++ &devices_kset->kobj);
+
+ return virtual_dir;
+ }
+
+-static struct kobject * get_device_parent(struct device *dev,
+- struct device *parent)
++static struct kobject *get_device_parent(struct device *dev,
++ struct device *parent)
+ {
++ int retval;
++
+ if (dev->class) {
+ struct kobject *kobj = NULL;
+ struct kobject *parent_kobj;
+@@ -576,8 +574,8 @@ static struct kobject * get_device_parent(struct device *dev,
+
+ /*
+ * If we have no parent, we live in "virtual".
+- * Class-devices with a bus-device as parent, live
+- * in a class-directory to prevent namespace collisions.
++ * Class-devices with a non class-device as parent, live
++ * in a "glue" directory to prevent namespace collisions.
+ */
+ if (parent == NULL)
+ parent_kobj = virtual_device_parent(dev);
+@@ -598,25 +596,45 @@ static struct kobject * get_device_parent(struct device *dev,
+ return kobj;
+
+ /* or create a new class-directory at the parent device */
+- return kobject_kset_add_dir(&dev->class->class_dirs,
+- parent_kobj, dev->class->name);
++ k = kobject_create();
++ if (!k)
++ return NULL;
++ k->kset = &dev->class->class_dirs;
++ retval = kobject_add(k, parent_kobj, "%s", dev->class->name);
++ if (retval < 0) {
++ kobject_put(k);
++ return NULL;
++ }
++ /* do not emit an uevent for this simple "glue" directory */
++ return k;
+ }
+
+ if (parent)
+ return &parent->kobj;
+ return NULL;
+ }
++
++static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
++{
++ /* see if we live in a "glue" directory */
++ if (!dev->class || glue_dir->kset != &dev->class->class_dirs)
++ return;
++
++ kobject_put(glue_dir);
++}
++
++static void cleanup_device_parent(struct device *dev)
++{
++ cleanup_glue_dir(dev, dev->kobj.parent);
++}
+ #endif
+
+-static int setup_parent(struct device *dev, struct device *parent)
++static void setup_parent(struct device *dev, struct device *parent)
+ {
+ struct kobject *kobj;
+ kobj = get_device_parent(dev, parent);
+- if (IS_ERR(kobj))
+- return PTR_ERR(kobj);
+ if (kobj)
+ dev->kobj.parent = kobj;
+- return 0;
+ }
+
+ static int device_add_class_symlinks(struct device *dev)
+@@ -625,65 +643,76 @@ static int device_add_class_symlinks(struct device *dev)
+
+ if (!dev->class)
+ return 0;
++
+ error = sysfs_create_link(&dev->kobj, &dev->class->subsys.kobj,
+ "subsystem");
+ if (error)
+ goto out;
+- /*
+- * If this is not a "fake" compatible device, then create the
+- * symlink from the class to the device.
+- */
+- if (dev->kobj.parent != &dev->class->subsys.kobj) {
++
++#ifdef CONFIG_SYSFS_DEPRECATED
++ /* stacked class devices need a symlink in the class directory */
++ if (dev->kobj.parent != &dev->class->subsys.kobj &&
++ dev->type != &part_type) {
+ error = sysfs_create_link(&dev->class->subsys.kobj, &dev->kobj,
+ dev->bus_id);
+ if (error)
+ goto out_subsys;
+ }
+- if (dev->parent) {
+-#ifdef CONFIG_SYSFS_DEPRECATED
+- {
+- struct device *parent = dev->parent;
+- char *class_name;
+-
+- /*
+- * In old sysfs stacked class devices had 'device'
+- * link pointing to real device instead of parent
+- */
+- while (parent->class && !parent->bus && parent->parent)
+- parent = parent->parent;
+-
+- error = sysfs_create_link(&dev->kobj,
+- &parent->kobj,
+- "device");
+- if (error)
+- goto out_busid;
+
+- class_name = make_class_name(dev->class->name,
+- &dev->kobj);
+- if (class_name)
+- error = sysfs_create_link(&dev->parent->kobj,
+- &dev->kobj, class_name);
+- kfree(class_name);
+- if (error)
+- goto out_device;
+- }
+-#else
+- error = sysfs_create_link(&dev->kobj, &dev->parent->kobj,
++ if (dev->parent && dev->type != &part_type) {
++ struct device *parent = dev->parent;
++ char *class_name;
++
++ /*
++ * stacked class devices have the 'device' link
++ * pointing to the bus device instead of the parent
++ */
++ while (parent->class && !parent->bus && parent->parent)
++ parent = parent->parent;
++
++ error = sysfs_create_link(&dev->kobj,
++ &parent->kobj,
+ "device");
+ if (error)
+ goto out_busid;
+-#endif
++
++ class_name = make_class_name(dev->class->name,
++ &dev->kobj);
++ if (class_name)
++ error = sysfs_create_link(&dev->parent->kobj,
++ &dev->kobj, class_name);
++ kfree(class_name);
++ if (error)
++ goto out_device;
+ }
+ return 0;
+
+-#ifdef CONFIG_SYSFS_DEPRECATED
+ out_device:
+- if (dev->parent)
++ if (dev->parent && dev->type != &part_type)
+ sysfs_remove_link(&dev->kobj, "device");
+-#endif
+ out_busid:
+- if (dev->kobj.parent != &dev->class->subsys.kobj)
++ if (dev->kobj.parent != &dev->class->subsys.kobj &&
++ dev->type != &part_type)
+ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
++#else
++ /* link in the class directory pointing to the device */
++ error = sysfs_create_link(&dev->class->subsys.kobj, &dev->kobj,
++ dev->bus_id);
++ if (error)
++ goto out_subsys;
++
++ if (dev->parent && dev->type != &part_type) {
++ error = sysfs_create_link(&dev->kobj, &dev->parent->kobj,
++ "device");
++ if (error)
++ goto out_busid;
++ }
++ return 0;
++
++out_busid:
++ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
++#endif
++
+ out_subsys:
+ sysfs_remove_link(&dev->kobj, "subsystem");
+ out:
+@@ -694,8 +723,9 @@ static void device_remove_class_symlinks(struct device *dev)
+ {
+ if (!dev->class)
+ return;
+- if (dev->parent) {
++
+ #ifdef CONFIG_SYSFS_DEPRECATED
++ if (dev->parent && dev->type != &part_type) {
+ char *class_name;
+
+ class_name = make_class_name(dev->class->name, &dev->kobj);
+@@ -703,45 +733,59 @@ static void device_remove_class_symlinks(struct device *dev)
+ sysfs_remove_link(&dev->parent->kobj, class_name);
+ kfree(class_name);
+ }
+-#endif
+ sysfs_remove_link(&dev->kobj, "device");
+ }
+- if (dev->kobj.parent != &dev->class->subsys.kobj)
++
++ if (dev->kobj.parent != &dev->class->subsys.kobj &&
++ dev->type != &part_type)
+ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
++#else
++ if (dev->parent && dev->type != &part_type)
++ sysfs_remove_link(&dev->kobj, "device");
++
++ sysfs_remove_link(&dev->class->subsys.kobj, dev->bus_id);
++#endif
++
+ sysfs_remove_link(&dev->kobj, "subsystem");
+ }
+
+ /**
+- * device_add - add device to device hierarchy.
+- * @dev: device.
++ * device_add - add device to device hierarchy.
++ * @dev: device.
+ *
+- * This is part 2 of device_register(), though may be called
+- * separately _iff_ device_initialize() has been called separately.
++ * This is part 2 of device_register(), though may be called
++ * separately _iff_ device_initialize() has been called separately.
+ *
+- * This adds it to the kobject hierarchy via kobject_add(), adds it
+- * to the global and sibling lists for the device, then
+- * adds it to the other relevant subsystems of the driver model.
++ * This adds it to the kobject hierarchy via kobject_add(), adds it
++ * to the global and sibling lists for the device, then
++ * adds it to the other relevant subsystems of the driver model.
+ */
+ int device_add(struct device *dev)
+ {
+ struct device *parent = NULL;
+ struct class_interface *class_intf;
+- int error = -EINVAL;
++ int error;
++
++ error = pm_sleep_lock();
++ if (error) {
++ dev_warn(dev, "Suspicious %s during suspend\n", __FUNCTION__);
++ dump_stack();
++ return error;
++ }
+
+ dev = get_device(dev);
+- if (!dev || !strlen(dev->bus_id))
++ if (!dev || !strlen(dev->bus_id)) {
++ error = -EINVAL;
+ goto Error;
++ }
+
+- pr_debug("DEV: registering device: ID = '%s'\n", dev->bus_id);
++ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
+
+ parent = get_device(dev->parent);
+- error = setup_parent(dev, parent);
+- if (error)
+- goto Error;
++ setup_parent(dev, parent);
+
+ /* first, register with generic layer. */
+- kobject_set_name(&dev->kobj, "%s", dev->bus_id);
+- error = kobject_add(&dev->kobj);
++ error = kobject_add(&dev->kobj, dev->kobj.parent, "%s", dev->bus_id);
+ if (error)
+ goto Error;
+
+@@ -751,7 +795,7 @@ int device_add(struct device *dev)
+
+ /* notify clients of device entry (new way) */
+ if (dev->bus)
+- blocking_notifier_call_chain(&dev->bus->bus_notifier,
++ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ BUS_NOTIFY_ADD_DEVICE, dev);
+
+ error = device_create_file(dev, &uevent_attr);
+@@ -795,13 +839,14 @@ int device_add(struct device *dev)
+ }
+ Done:
+ put_device(dev);
++ pm_sleep_unlock();
+ return error;
+ BusError:
+ device_pm_remove(dev);
+ dpm_sysfs_remove(dev);
+ PMError:
+ if (dev->bus)
+- blocking_notifier_call_chain(&dev->bus->bus_notifier,
++ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ BUS_NOTIFY_DEL_DEVICE, dev);
+ device_remove_attrs(dev);
+ AttrsError:
+@@ -809,124 +854,84 @@ int device_add(struct device *dev)
+ SymlinkError:
+ if (MAJOR(dev->devt))
+ device_remove_file(dev, &devt_attr);
+-
+- if (dev->class) {
+- sysfs_remove_link(&dev->kobj, "subsystem");
+- /* If this is not a "fake" compatible device, remove the
+- * symlink from the class to the device. */
+- if (dev->kobj.parent != &dev->class->subsys.kobj)
+- sysfs_remove_link(&dev->class->subsys.kobj,
+- dev->bus_id);
+- if (parent) {
+-#ifdef CONFIG_SYSFS_DEPRECATED
+- char *class_name = make_class_name(dev->class->name,
+- &dev->kobj);
+- if (class_name)
+- sysfs_remove_link(&dev->parent->kobj,
+- class_name);
+- kfree(class_name);
+-#endif
+- sysfs_remove_link(&dev->kobj, "device");
+- }
+- }
+ ueventattrError:
+ device_remove_file(dev, &uevent_attr);
+ attrError:
+ kobject_uevent(&dev->kobj, KOBJ_REMOVE);
+ kobject_del(&dev->kobj);
+ Error:
++ cleanup_device_parent(dev);
+ if (parent)
+ put_device(parent);
+ goto Done;
+ }
+
+-
+ /**
+- * device_register - register a device with the system.
+- * @dev: pointer to the device structure
++ * device_register - register a device with the system.
++ * @dev: pointer to the device structure
+ *
+- * This happens in two clean steps - initialize the device
+- * and add it to the system. The two steps can be called
+- * separately, but this is the easiest and most common.
+- * I.e. you should only call the two helpers separately if
+- * have a clearly defined need to use and refcount the device
+- * before it is added to the hierarchy.
++ * This happens in two clean steps - initialize the device
++ * and add it to the system. The two steps can be called
++ * separately, but this is the easiest and most common.
++ * I.e. you should only call the two helpers separately if
++ * have a clearly defined need to use and refcount the device
++ * before it is added to the hierarchy.
+ */
+-
+ int device_register(struct device *dev)
+ {
+ device_initialize(dev);
+ return device_add(dev);
+ }
+
+-
+ /**
+- * get_device - increment reference count for device.
+- * @dev: device.
++ * get_device - increment reference count for device.
++ * @dev: device.
+ *
+- * This simply forwards the call to kobject_get(), though
+- * we do take care to provide for the case that we get a NULL
+- * pointer passed in.
++ * This simply forwards the call to kobject_get(), though
++ * we do take care to provide for the case that we get a NULL
++ * pointer passed in.
+ */
+-
+-struct device * get_device(struct device * dev)
++struct device *get_device(struct device *dev)
+ {
+ return dev ? to_dev(kobject_get(&dev->kobj)) : NULL;
+ }
+
+-
+ /**
+- * put_device - decrement reference count.
+- * @dev: device in question.
++ * put_device - decrement reference count.
++ * @dev: device in question.
+ */
+-void put_device(struct device * dev)
++void put_device(struct device *dev)
+ {
++ /* might_sleep(); */
+ if (dev)
+ kobject_put(&dev->kobj);
+ }
+
+-
+ /**
+- * device_del - delete device from system.
+- * @dev: device.
++ * device_del - delete device from system.
++ * @dev: device.
+ *
+- * This is the first part of the device unregistration
+- * sequence. This removes the device from the lists we control
+- * from here, has it removed from the other driver model
+- * subsystems it was added to in device_add(), and removes it
+- * from the kobject hierarchy.
++ * This is the first part of the device unregistration
++ * sequence. This removes the device from the lists we control
++ * from here, has it removed from the other driver model
++ * subsystems it was added to in device_add(), and removes it
++ * from the kobject hierarchy.
+ *
+- * NOTE: this should be called manually _iff_ device_add() was
+- * also called manually.
++ * NOTE: this should be called manually _iff_ device_add() was
++ * also called manually.
+ */
+-
+-void device_del(struct device * dev)
++void device_del(struct device *dev)
+ {
+- struct device * parent = dev->parent;
++ struct device *parent = dev->parent;
+ struct class_interface *class_intf;
+
++ device_pm_remove(dev);
+ if (parent)
+ klist_del(&dev->knode_parent);
+ if (MAJOR(dev->devt))
+ device_remove_file(dev, &devt_attr);
+ if (dev->class) {
+- sysfs_remove_link(&dev->kobj, "subsystem");
+- /* If this is not a "fake" compatible device, remove the
+- * symlink from the class to the device. */
+- if (dev->kobj.parent != &dev->class->subsys.kobj)
+- sysfs_remove_link(&dev->class->subsys.kobj,
+- dev->bus_id);
+- if (parent) {
+-#ifdef CONFIG_SYSFS_DEPRECATED
+- char *class_name = make_class_name(dev->class->name,
+- &dev->kobj);
+- if (class_name)
+- sysfs_remove_link(&dev->parent->kobj,
+- class_name);
+- kfree(class_name);
+-#endif
+- sysfs_remove_link(&dev->kobj, "device");
+- }
++ device_remove_class_symlinks(dev);
+
+ down(&dev->class->sem);
+ /* notify any interfaces that the device is now gone */
+@@ -936,31 +941,6 @@ void device_del(struct device * dev)
+ /* remove the device from the class list */
+ list_del_init(&dev->node);
+ up(&dev->class->sem);
+-
+- /* If we live in a parent class-directory, unreference it */
+- if (dev->kobj.parent->kset == &dev->class->class_dirs) {
+- struct device *d;
+- int other = 0;
+-
+- /*
+- * if we are the last child of our class, delete
+- * our class-directory at this parent
+- */
+- down(&dev->class->sem);
+- list_for_each_entry(d, &dev->class->devices, node) {
+- if (d == dev)
+- continue;
+- if (d->kobj.parent == dev->kobj.parent) {
+- other = 1;
+- break;
+- }
+- }
+- if (!other)
+- kobject_del(dev->kobj.parent);
+-
+- kobject_put(dev->kobj.parent);
+- up(&dev->class->sem);
+- }
+ }
+ device_remove_file(dev, &uevent_attr);
+ device_remove_attrs(dev);
+@@ -979,57 +959,55 @@ void device_del(struct device * dev)
+ if (platform_notify_remove)
+ platform_notify_remove(dev);
+ if (dev->bus)
+- blocking_notifier_call_chain(&dev->bus->bus_notifier,
++ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ BUS_NOTIFY_DEL_DEVICE, dev);
+- device_pm_remove(dev);
+ kobject_uevent(&dev->kobj, KOBJ_REMOVE);
++ cleanup_device_parent(dev);
+ kobject_del(&dev->kobj);
+- if (parent)
+- put_device(parent);
++ put_device(parent);
+ }
+
+ /**
+- * device_unregister - unregister device from system.
+- * @dev: device going away.
++ * device_unregister - unregister device from system.
++ * @dev: device going away.
+ *
+- * We do this in two parts, like we do device_register(). First,
+- * we remove it from all the subsystems with device_del(), then
+- * we decrement the reference count via put_device(). If that
+- * is the final reference count, the device will be cleaned up
+- * via device_release() above. Otherwise, the structure will
+- * stick around until the final reference to the device is dropped.
++ * We do this in two parts, like we do device_register(). First,
++ * we remove it from all the subsystems with device_del(), then
++ * we decrement the reference count via put_device(). If that
++ * is the final reference count, the device will be cleaned up
++ * via device_release() above. Otherwise, the structure will
++ * stick around until the final reference to the device is dropped.
+ */
+-void device_unregister(struct device * dev)
++void device_unregister(struct device *dev)
+ {
+- pr_debug("DEV: Unregistering device. ID = '%s'\n", dev->bus_id);
++ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
+ device_del(dev);
+ put_device(dev);
+ }
+
+-
+-static struct device * next_device(struct klist_iter * i)
++static struct device *next_device(struct klist_iter *i)
+ {
+- struct klist_node * n = klist_next(i);
++ struct klist_node *n = klist_next(i);
+ return n ? container_of(n, struct device, knode_parent) : NULL;
+ }
+
+ /**
+- * device_for_each_child - device child iterator.
+- * @parent: parent struct device.
+- * @data: data for the callback.
+- * @fn: function to be called for each device.
++ * device_for_each_child - device child iterator.
++ * @parent: parent struct device.
++ * @data: data for the callback.
++ * @fn: function to be called for each device.
+ *
+- * Iterate over @parent's child devices, and call @fn for each,
+- * passing it @data.
++ * Iterate over @parent's child devices, and call @fn for each,
++ * passing it @data.
+ *
+- * We check the return of @fn each time. If it returns anything
+- * other than 0, we break out and return that value.
++ * We check the return of @fn each time. If it returns anything
++ * other than 0, we break out and return that value.
+ */
+-int device_for_each_child(struct device * parent, void * data,
+- int (*fn)(struct device *, void *))
++int device_for_each_child(struct device *parent, void *data,
++ int (*fn)(struct device *dev, void *data))
+ {
+ struct klist_iter i;
+- struct device * child;
++ struct device *child;
+ int error = 0;
+
+ klist_iter_init(&parent->klist_children, &i);
+@@ -1054,8 +1032,8 @@ int device_for_each_child(struct device * parent, void * data,
+ * current device can be obtained, this function will return to the caller
+ * and not iterate over any more devices.
+ */
+-struct device * device_find_child(struct device *parent, void *data,
+- int (*match)(struct device *, void *))
++struct device *device_find_child(struct device *parent, void *data,
++ int (*match)(struct device *dev, void *data))
+ {
+ struct klist_iter i;
+ struct device *child;
+@@ -1073,7 +1051,10 @@ struct device * device_find_child(struct device *parent, void *data,
+
+ int __init devices_init(void)
+ {
+- return subsystem_register(&devices_subsys);
++ devices_kset = kset_create_and_add("devices", &device_uevent_ops, NULL);
++ if (!devices_kset)
++ return -ENOMEM;
++ return 0;
+ }
+
+ EXPORT_SYMBOL_GPL(device_for_each_child);
+@@ -1094,7 +1075,7 @@ EXPORT_SYMBOL_GPL(device_remove_file);
+
+ static void device_create_release(struct device *dev)
+ {
+- pr_debug("%s called for %s\n", __FUNCTION__, dev->bus_id);
++ pr_debug("device: '%s': %s\n", dev->bus_id, __FUNCTION__);
+ kfree(dev);
+ }
+
+@@ -1156,14 +1137,11 @@ error:
+ EXPORT_SYMBOL_GPL(device_create);
+
+ /**
+- * device_destroy - removes a device that was created with device_create()
++ * find_device - finds a device that was created with device_create()
+ * @class: pointer to the struct class that this device was registered with
+ * @devt: the dev_t of the device that was previously registered
+- *
+- * This call unregisters and cleans up a device that was created with a
+- * call to device_create().
+ */
+-void device_destroy(struct class *class, dev_t devt)
++static struct device *find_device(struct class *class, dev_t devt)
+ {
+ struct device *dev = NULL;
+ struct device *dev_tmp;
+@@ -1176,12 +1154,54 @@ void device_destroy(struct class *class, dev_t devt)
+ }
+ }
+ up(&class->sem);
++ return dev;
++}
++
++/**
++ * device_destroy - removes a device that was created with device_create()
++ * @class: pointer to the struct class that this device was registered with
++ * @devt: the dev_t of the device that was previously registered
++ *
++ * This call unregisters and cleans up a device that was created with a
++ * call to device_create().
++ */
++void device_destroy(struct class *class, dev_t devt)
++{
++ struct device *dev;
+
++ dev = find_device(class, devt);
+ if (dev)
+ device_unregister(dev);
+ }
+ EXPORT_SYMBOL_GPL(device_destroy);
+
++#ifdef CONFIG_PM_SLEEP
++/**
++ * destroy_suspended_device - asks the PM core to remove a suspended device
++ * @class: pointer to the struct class that this device was registered with
++ * @devt: the dev_t of the device that was previously registered
++ *
++ * This call notifies the PM core of the necessity to unregister a suspended
++ * device created with a call to device_create() (devices cannot be
++ * unregistered directly while suspended, since the PM core holds their
++ * semaphores at that time).
++ *
++ * It can only be called within the scope of a system sleep transition. In
++ * practice this means it has to be directly or indirectly invoked either by
++ * a suspend or resume method, or by the PM core (e.g. via
++ * disable_nonboot_cpus() or enable_nonboot_cpus()).
++ */
++void destroy_suspended_device(struct class *class, dev_t devt)
++{
++ struct device *dev;
++
++ dev = find_device(class, devt);
++ if (dev)
++ device_pm_schedule_removal(dev);
++}
++EXPORT_SYMBOL_GPL(destroy_suspended_device);
++#endif /* CONFIG_PM_SLEEP */
++
+ /**
+ * device_rename - renames a device
+ * @dev: the pointer to the struct device to be renamed
+@@ -1198,7 +1218,8 @@ int device_rename(struct device *dev, char *new_name)
+ if (!dev)
+ return -EINVAL;
+
+- pr_debug("DEVICE: renaming '%s' to '%s'\n", dev->bus_id, new_name);
++ pr_debug("device: '%s': %s: renaming to '%s'\n", dev->bus_id,
++ __FUNCTION__, new_name);
+
+ #ifdef CONFIG_SYSFS_DEPRECATED
+ if ((dev->class) && (dev->parent))
+@@ -1279,8 +1300,7 @@ static int device_move_class_links(struct device *dev,
+ class_name);
+ if (error)
+ sysfs_remove_link(&dev->kobj, "device");
+- }
+- else
++ } else
+ error = 0;
+ out:
+ kfree(class_name);
+@@ -1311,16 +1331,13 @@ int device_move(struct device *dev, struct device *new_parent)
+ return -EINVAL;
+
+ new_parent = get_device(new_parent);
+- new_parent_kobj = get_device_parent (dev, new_parent);
+- if (IS_ERR(new_parent_kobj)) {
+- error = PTR_ERR(new_parent_kobj);
+- put_device(new_parent);
+- goto out;
+- }
+- pr_debug("DEVICE: moving '%s' to '%s'\n", dev->bus_id,
+- new_parent ? new_parent->bus_id : "<NULL>");
++ new_parent_kobj = get_device_parent(dev, new_parent);
++
++ pr_debug("device: '%s': %s: moving to '%s'\n", dev->bus_id,
++ __FUNCTION__, new_parent ? new_parent->bus_id : "<NULL>");
+ error = kobject_move(&dev->kobj, new_parent_kobj);
+ if (error) {
++ cleanup_glue_dir(dev, new_parent_kobj);
+ put_device(new_parent);
+ goto out;
+ }
+@@ -1343,6 +1360,7 @@ int device_move(struct device *dev, struct device *new_parent)
+ klist_add_tail(&dev->knode_parent,
+ &old_parent->klist_children);
+ }
++ cleanup_glue_dir(dev, new_parent_kobj);
+ put_device(new_parent);
+ goto out;
+ }
+@@ -1352,5 +1370,23 @@ out:
+ put_device(dev);
+ return error;
+ }
+-
+ EXPORT_SYMBOL_GPL(device_move);
++
++/**
++ * device_shutdown - call ->shutdown() on each device to shutdown.
++ */
++void device_shutdown(void)
++{
++ struct device *dev, *devn;
++
++ list_for_each_entry_safe_reverse(dev, devn, &devices_kset->list,
++ kobj.entry) {
++ if (dev->bus && dev->bus->shutdown) {
++ dev_dbg(dev, "shutdown\n");
++ dev->bus->shutdown(dev);
++ } else if (dev->driver && dev->driver->shutdown) {
++ dev_dbg(dev, "shutdown\n");
++ dev->driver->shutdown(dev);
++ }
++ }
++}
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 4054507..c5885f5 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -14,7 +14,7 @@
+ #include "base.h"
+
+ struct sysdev_class cpu_sysdev_class = {
+- set_kset_name("cpu"),
++ .name = "cpu",
+ };
+ EXPORT_SYMBOL(cpu_sysdev_class);
+
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 7ac474d..a5cde94 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -1,18 +1,20 @@
+ /*
+- * drivers/base/dd.c - The core device/driver interactions.
++ * drivers/base/dd.c - The core device/driver interactions.
+ *
+- * This file contains the (sometimes tricky) code that controls the
+- * interactions between devices and drivers, which primarily includes
+- * driver binding and unbinding.
++ * This file contains the (sometimes tricky) code that controls the
++ * interactions between devices and drivers, which primarily includes
++ * driver binding and unbinding.
+ *
+- * All of this code used to exist in drivers/base/bus.c, but was
+- * relocated to here in the name of compartmentalization (since it wasn't
+- * strictly code just for the 'struct bus_type'.
++ * All of this code used to exist in drivers/base/bus.c, but was
++ * relocated to here in the name of compartmentalization (since it wasn't
++ * strictly code just for the 'struct bus_type'.
+ *
+- * Copyright (c) 2002-5 Patrick Mochel
+- * Copyright (c) 2002-3 Open Source Development Labs
++ * Copyright (c) 2002-5 Patrick Mochel
++ * Copyright (c) 2002-3 Open Source Development Labs
++ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
++ * Copyright (c) 2007 Novell Inc.
+ *
+- * This file is released under the GPLv2
++ * This file is released under the GPLv2
+ */
+
+ #include <linux/device.h>
+@@ -23,8 +25,6 @@
+ #include "base.h"
+ #include "power/power.h"
+
+-#define to_drv(node) container_of(node, struct device_driver, kobj.entry)
+-
+
+ static void driver_bound(struct device *dev)
+ {
+@@ -34,27 +34,27 @@ static void driver_bound(struct device *dev)
+ return;
+ }
+
+- pr_debug("bound device '%s' to driver '%s'\n",
+- dev->bus_id, dev->driver->name);
++ pr_debug("driver: '%s': %s: bound to device '%s'\n", dev->bus_id,
++ __FUNCTION__, dev->driver->name);
+
+ if (dev->bus)
+- blocking_notifier_call_chain(&dev->bus->bus_notifier,
++ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ BUS_NOTIFY_BOUND_DRIVER, dev);
+
+- klist_add_tail(&dev->knode_driver, &dev->driver->klist_devices);
++ klist_add_tail(&dev->knode_driver, &dev->driver->p->klist_devices);
+ }
+
+ static int driver_sysfs_add(struct device *dev)
+ {
+ int ret;
+
+- ret = sysfs_create_link(&dev->driver->kobj, &dev->kobj,
++ ret = sysfs_create_link(&dev->driver->p->kobj, &dev->kobj,
+ kobject_name(&dev->kobj));
+ if (ret == 0) {
+- ret = sysfs_create_link(&dev->kobj, &dev->driver->kobj,
++ ret = sysfs_create_link(&dev->kobj, &dev->driver->p->kobj,
+ "driver");
+ if (ret)
+- sysfs_remove_link(&dev->driver->kobj,
++ sysfs_remove_link(&dev->driver->p->kobj,
+ kobject_name(&dev->kobj));
+ }
+ return ret;
+@@ -65,24 +65,24 @@ static void driver_sysfs_remove(struct device *dev)
+ struct device_driver *drv = dev->driver;
+
+ if (drv) {
+- sysfs_remove_link(&drv->kobj, kobject_name(&dev->kobj));
++ sysfs_remove_link(&drv->p->kobj, kobject_name(&dev->kobj));
+ sysfs_remove_link(&dev->kobj, "driver");
+ }
+ }
+
+ /**
+- * device_bind_driver - bind a driver to one device.
+- * @dev: device.
++ * device_bind_driver - bind a driver to one device.
++ * @dev: device.
+ *
+- * Allow manual attachment of a driver to a device.
+- * Caller must have already set @dev->driver.
++ * Allow manual attachment of a driver to a device.
++ * Caller must have already set @dev->driver.
+ *
+- * Note that this does not modify the bus reference count
+- * nor take the bus's rwsem. Please verify those are accounted
+- * for before calling this. (It is ok to call with no other effort
+- * from a driver's probe() method.)
++ * Note that this does not modify the bus reference count
++ * nor take the bus's rwsem. Please verify those are accounted
++ * for before calling this. (It is ok to call with no other effort
++ * from a driver's probe() method.)
+ *
+- * This function must be called with @dev->sem held.
++ * This function must be called with @dev->sem held.
+ */
+ int device_bind_driver(struct device *dev)
+ {
+@@ -93,6 +93,7 @@ int device_bind_driver(struct device *dev)
+ driver_bound(dev);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(device_bind_driver);
+
+ static atomic_t probe_count = ATOMIC_INIT(0);
+ static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
+@@ -102,8 +103,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
+ int ret = 0;
+
+ atomic_inc(&probe_count);
+- pr_debug("%s: Probing driver %s with device %s\n",
+- drv->bus->name, drv->name, dev->bus_id);
++ pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
++ drv->bus->name, __FUNCTION__, drv->name, dev->bus_id);
+ WARN_ON(!list_empty(&dev->devres_head));
+
+ dev->driver = drv;
+@@ -125,8 +126,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
+
+ driver_bound(dev);
+ ret = 1;
+- pr_debug("%s: Bound Device %s to Driver %s\n",
+- drv->bus->name, dev->bus_id, drv->name);
++ pr_debug("bus: '%s': %s: bound device %s to driver %s\n",
++ drv->bus->name, __FUNCTION__, dev->bus_id, drv->name);
+ goto done;
+
+ probe_failed:
+@@ -183,7 +184,7 @@ int driver_probe_done(void)
+ * This function must be called with @dev->sem held. When called for a
+ * USB interface, @dev->parent->sem must be held as well.
+ */
+-int driver_probe_device(struct device_driver * drv, struct device * dev)
++int driver_probe_device(struct device_driver *drv, struct device *dev)
+ {
+ int ret = 0;
+
+@@ -192,8 +193,8 @@ int driver_probe_device(struct device_driver * drv, struct device * dev)
+ if (drv->bus->match && !drv->bus->match(dev, drv))
+ goto done;
+
+- pr_debug("%s: Matched Device %s with Driver %s\n",
+- drv->bus->name, dev->bus_id, drv->name);
++ pr_debug("bus: '%s': %s: matched device %s with driver %s\n",
++ drv->bus->name, __FUNCTION__, dev->bus_id, drv->name);
+
+ ret = really_probe(dev, drv);
+
+@@ -201,27 +202,27 @@ done:
+ return ret;
+ }
+
+-static int __device_attach(struct device_driver * drv, void * data)
++static int __device_attach(struct device_driver *drv, void *data)
+ {
+- struct device * dev = data;
++ struct device *dev = data;
+ return driver_probe_device(drv, dev);
+ }
+
+ /**
+- * device_attach - try to attach device to a driver.
+- * @dev: device.
++ * device_attach - try to attach device to a driver.
++ * @dev: device.
+ *
+- * Walk the list of drivers that the bus has and call
+- * driver_probe_device() for each pair. If a compatible
+- * pair is found, break out and return.
++ * Walk the list of drivers that the bus has and call
++ * driver_probe_device() for each pair. If a compatible
++ * pair is found, break out and return.
+ *
+- * Returns 1 if the device was bound to a driver;
+- * 0 if no matching device was found;
+- * -ENODEV if the device is not registered.
++ * Returns 1 if the device was bound to a driver;
++ * 0 if no matching device was found;
++ * -ENODEV if the device is not registered.
+ *
+- * When called for a USB interface, @dev->parent->sem must be held.
++ * When called for a USB interface, @dev->parent->sem must be held.
+ */
+-int device_attach(struct device * dev)
++int device_attach(struct device *dev)
+ {
+ int ret = 0;
+
+@@ -240,10 +241,11 @@ int device_attach(struct device * dev)
+ up(&dev->sem);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(device_attach);
+
+-static int __driver_attach(struct device * dev, void * data)
++static int __driver_attach(struct device *dev, void *data)
+ {
+- struct device_driver * drv = data;
++ struct device_driver *drv = data;
+
+ /*
+ * Lock device and try to bind to it. We drop the error
+@@ -268,35 +270,35 @@ static int __driver_attach(struct device * dev, void * data)
+ }
+
+ /**
+- * driver_attach - try to bind driver to devices.
+- * @drv: driver.
++ * driver_attach - try to bind driver to devices.
++ * @drv: driver.
+ *
+- * Walk the list of devices that the bus has on it and try to
+- * match the driver with each one. If driver_probe_device()
+- * returns 0 and the @dev->driver is set, we've found a
+- * compatible pair.
++ * Walk the list of devices that the bus has on it and try to
++ * match the driver with each one. If driver_probe_device()
++ * returns 0 and the @dev->driver is set, we've found a
++ * compatible pair.
+ */
+-int driver_attach(struct device_driver * drv)
++int driver_attach(struct device_driver *drv)
+ {
+ return bus_for_each_dev(drv->bus, NULL, drv, __driver_attach);
+ }
++EXPORT_SYMBOL_GPL(driver_attach);
+
+ /*
+- * __device_release_driver() must be called with @dev->sem held.
+- * When called for a USB interface, @dev->parent->sem must be held as well.
++ * __device_release_driver() must be called with @dev->sem held.
++ * When called for a USB interface, @dev->parent->sem must be held as well.
+ */
+-static void __device_release_driver(struct device * dev)
++static void __device_release_driver(struct device *dev)
+ {
+- struct device_driver * drv;
++ struct device_driver *drv;
+
+- drv = get_driver(dev->driver);
++ drv = dev->driver;
+ if (drv) {
+ driver_sysfs_remove(dev);
+ sysfs_remove_link(&dev->kobj, "driver");
+- klist_remove(&dev->knode_driver);
+
+ if (dev->bus)
+- blocking_notifier_call_chain(&dev->bus->bus_notifier,
++ blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ BUS_NOTIFY_UNBIND_DRIVER,
+ dev);
+
+@@ -306,18 +308,18 @@ static void __device_release_driver(struct device * dev)
+ drv->remove(dev);
+ devres_release_all(dev);
+ dev->driver = NULL;
+- put_driver(drv);
++ klist_remove(&dev->knode_driver);
+ }
+ }
+
+ /**
+- * device_release_driver - manually detach device from driver.
+- * @dev: device.
++ * device_release_driver - manually detach device from driver.
++ * @dev: device.
+ *
+- * Manually detach device from driver.
+- * When called for a USB interface, @dev->parent->sem must be held.
++ * Manually detach device from driver.
++ * When called for a USB interface, @dev->parent->sem must be held.
+ */
+-void device_release_driver(struct device * dev)
++void device_release_driver(struct device *dev)
+ {
+ /*
+ * If anyone calls device_release_driver() recursively from
+@@ -328,26 +330,26 @@ void device_release_driver(struct device * dev)
+ __device_release_driver(dev);
+ up(&dev->sem);
+ }
+-
++EXPORT_SYMBOL_GPL(device_release_driver);
+
+ /**
+ * driver_detach - detach driver from all devices it controls.
+ * @drv: driver.
+ */
+-void driver_detach(struct device_driver * drv)
++void driver_detach(struct device_driver *drv)
+ {
+- struct device * dev;
++ struct device *dev;
+
+ for (;;) {
+- spin_lock(&drv->klist_devices.k_lock);
+- if (list_empty(&drv->klist_devices.k_list)) {
+- spin_unlock(&drv->klist_devices.k_lock);
++ spin_lock(&drv->p->klist_devices.k_lock);
++ if (list_empty(&drv->p->klist_devices.k_list)) {
++ spin_unlock(&drv->p->klist_devices.k_lock);
+ break;
+ }
+- dev = list_entry(drv->klist_devices.k_list.prev,
++ dev = list_entry(drv->p->klist_devices.k_list.prev,
+ struct device, knode_driver.n_node);
+ get_device(dev);
+- spin_unlock(&drv->klist_devices.k_lock);
++ spin_unlock(&drv->p->klist_devices.k_lock);
+
+ if (dev->parent) /* Needed for USB */
+ down(&dev->parent->sem);
+@@ -360,9 +362,3 @@ void driver_detach(struct device_driver * drv)
+ put_device(dev);
+ }
+ }
+-
+-EXPORT_SYMBOL_GPL(device_bind_driver);
+-EXPORT_SYMBOL_GPL(device_release_driver);
+-EXPORT_SYMBOL_GPL(device_attach);
+-EXPORT_SYMBOL_GPL(driver_attach);
+-
+diff --git a/drivers/base/driver.c b/drivers/base/driver.c
+index eb11475..a35f041 100644
+--- a/drivers/base/driver.c
++++ b/drivers/base/driver.c
+@@ -3,6 +3,8 @@
+ *
+ * Copyright (c) 2002-3 Patrick Mochel
+ * Copyright (c) 2002-3 Open Source Development Labs
++ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
++ * Copyright (c) 2007 Novell Inc.
+ *
+ * This file is released under the GPLv2
+ *
+@@ -15,46 +17,42 @@
+ #include "base.h"
+
+ #define to_dev(node) container_of(node, struct device, driver_list)
+-#define to_drv(obj) container_of(obj, struct device_driver, kobj)
+
+
+-static struct device * next_device(struct klist_iter * i)
++static struct device *next_device(struct klist_iter *i)
+ {
+- struct klist_node * n = klist_next(i);
++ struct klist_node *n = klist_next(i);
+ return n ? container_of(n, struct device, knode_driver) : NULL;
+ }
+
+ /**
+- * driver_for_each_device - Iterator for devices bound to a driver.
+- * @drv: Driver we're iterating.
+- * @start: Device to begin with
+- * @data: Data to pass to the callback.
+- * @fn: Function to call for each device.
++ * driver_for_each_device - Iterator for devices bound to a driver.
++ * @drv: Driver we're iterating.
++ * @start: Device to begin with
++ * @data: Data to pass to the callback.
++ * @fn: Function to call for each device.
+ *
+- * Iterate over the @drv's list of devices calling @fn for each one.
++ * Iterate over the @drv's list of devices calling @fn for each one.
+ */
+-
+-int driver_for_each_device(struct device_driver * drv, struct device * start,
+- void * data, int (*fn)(struct device *, void *))
++int driver_for_each_device(struct device_driver *drv, struct device *start,
++ void *data, int (*fn)(struct device *, void *))
+ {
+ struct klist_iter i;
+- struct device * dev;
++ struct device *dev;
+ int error = 0;
+
+ if (!drv)
+ return -EINVAL;
+
+- klist_iter_init_node(&drv->klist_devices, &i,
++ klist_iter_init_node(&drv->p->klist_devices, &i,
+ start ? &start->knode_driver : NULL);
+ while ((dev = next_device(&i)) && !error)
+ error = fn(dev, data);
+ klist_iter_exit(&i);
+ return error;
+ }
+-
+ EXPORT_SYMBOL_GPL(driver_for_each_device);
+
+-
+ /**
+ * driver_find_device - device iterator for locating a particular device.
+ * @drv: The device's driver
+@@ -70,9 +68,9 @@ EXPORT_SYMBOL_GPL(driver_for_each_device);
+ * if it does. If the callback returns non-zero, this function will
+ * return to the caller and not iterate over any more devices.
+ */
+-struct device * driver_find_device(struct device_driver *drv,
+- struct device * start, void * data,
+- int (*match)(struct device *, void *))
++struct device *driver_find_device(struct device_driver *drv,
++ struct device *start, void *data,
++ int (*match)(struct device *dev, void *data))
+ {
+ struct klist_iter i;
+ struct device *dev;
+@@ -80,7 +78,7 @@ struct device * driver_find_device(struct device_driver *drv,
+ if (!drv)
+ return NULL;
+
+- klist_iter_init_node(&drv->klist_devices, &i,
++ klist_iter_init_node(&drv->p->klist_devices, &i,
+ (start ? &start->knode_driver : NULL));
+ while ((dev = next_device(&i)))
+ if (match(dev, data) && get_device(dev))
+@@ -91,111 +89,179 @@ struct device * driver_find_device(struct device_driver *drv,
+ EXPORT_SYMBOL_GPL(driver_find_device);
+
+ /**
+- * driver_create_file - create sysfs file for driver.
+- * @drv: driver.
+- * @attr: driver attribute descriptor.
++ * driver_create_file - create sysfs file for driver.
++ * @drv: driver.
++ * @attr: driver attribute descriptor.
+ */
+-
+-int driver_create_file(struct device_driver * drv, struct driver_attribute * attr)
++int driver_create_file(struct device_driver *drv,
++ struct driver_attribute *attr)
+ {
+ int error;
+ if (get_driver(drv)) {
+- error = sysfs_create_file(&drv->kobj, &attr->attr);
++ error = sysfs_create_file(&drv->p->kobj, &attr->attr);
+ put_driver(drv);
+ } else
+ error = -EINVAL;
+ return error;
+ }
+-
++EXPORT_SYMBOL_GPL(driver_create_file);
+
+ /**
+- * driver_remove_file - remove sysfs file for driver.
+- * @drv: driver.
+- * @attr: driver attribute descriptor.
++ * driver_remove_file - remove sysfs file for driver.
++ * @drv: driver.
++ * @attr: driver attribute descriptor.
+ */
+-
+-void driver_remove_file(struct device_driver * drv, struct driver_attribute * attr)
++void driver_remove_file(struct device_driver *drv,
++ struct driver_attribute *attr)
+ {
+ if (get_driver(drv)) {
+- sysfs_remove_file(&drv->kobj, &attr->attr);
++ sysfs_remove_file(&drv->p->kobj, &attr->attr);
+ put_driver(drv);
+ }
+ }
+-
++EXPORT_SYMBOL_GPL(driver_remove_file);
+
+ /**
+- * get_driver - increment driver reference count.
+- * @drv: driver.
++ * driver_add_kobj - add a kobject below the specified driver
++ *
++ * You really don't want to do this, this is only here due to one looney
++ * iseries driver, go poke those developers if you are annoyed about
++ * this...
+ */
+-struct device_driver * get_driver(struct device_driver * drv)
++int driver_add_kobj(struct device_driver *drv, struct kobject *kobj,
++ const char *fmt, ...)
+ {
+- return drv ? to_drv(kobject_get(&drv->kobj)) : NULL;
++ va_list args;
++ char *name;
++
++ va_start(args, fmt);
++ name = kvasprintf(GFP_KERNEL, fmt, args);
++ va_end(args);
++
++ if (!name)
++ return -ENOMEM;
++
++ return kobject_add(kobj, &drv->p->kobj, "%s", name);
+ }
++EXPORT_SYMBOL_GPL(driver_add_kobj);
++
++/**
++ * get_driver - increment driver reference count.
++ * @drv: driver.
++ */
++struct device_driver *get_driver(struct device_driver *drv)
++{
++ if (drv) {
++ struct driver_private *priv;
++ struct kobject *kobj;
+
++ kobj = kobject_get(&drv->p->kobj);
++ priv = to_driver(kobj);
++ return priv->driver;
++ }
++ return NULL;
++}
++EXPORT_SYMBOL_GPL(get_driver);
+
+ /**
+- * put_driver - decrement driver's refcount.
+- * @drv: driver.
++ * put_driver - decrement driver's refcount.
++ * @drv: driver.
+ */
+-void put_driver(struct device_driver * drv)
++void put_driver(struct device_driver *drv)
++{
++ kobject_put(&drv->p->kobj);
++}
++EXPORT_SYMBOL_GPL(put_driver);
++
++static int driver_add_groups(struct device_driver *drv,
++ struct attribute_group **groups)
+ {
+- kobject_put(&drv->kobj);
++ int error = 0;
++ int i;
++
++ if (groups) {
++ for (i = 0; groups[i]; i++) {
++ error = sysfs_create_group(&drv->p->kobj, groups[i]);
++ if (error) {
++ while (--i >= 0)
++ sysfs_remove_group(&drv->p->kobj,
++ groups[i]);
++ break;
++ }
++ }
++ }
++ return error;
++}
++
++static void driver_remove_groups(struct device_driver *drv,
++ struct attribute_group **groups)
++{
++ int i;
++
++ if (groups)
++ for (i = 0; groups[i]; i++)
++ sysfs_remove_group(&drv->p->kobj, groups[i]);
+ }
+
+ /**
+- * driver_register - register driver with bus
+- * @drv: driver to register
++ * driver_register - register driver with bus
++ * @drv: driver to register
+ *
+- * We pass off most of the work to the bus_add_driver() call,
+- * since most of the things we have to do deal with the bus
+- * structures.
++ * We pass off most of the work to the bus_add_driver() call,
++ * since most of the things we have to do deal with the bus
++ * structures.
+ */
+-int driver_register(struct device_driver * drv)
++int driver_register(struct device_driver *drv)
+ {
++ int ret;
++
+ if ((drv->bus->probe && drv->probe) ||
+ (drv->bus->remove && drv->remove) ||
+- (drv->bus->shutdown && drv->shutdown)) {
+- printk(KERN_WARNING "Driver '%s' needs updating - please use bus_type methods\n", drv->name);
+- }
+- klist_init(&drv->klist_devices, NULL, NULL);
+- return bus_add_driver(drv);
++ (drv->bus->shutdown && drv->shutdown))
++ printk(KERN_WARNING "Driver '%s' needs updating - please use "
++ "bus_type methods\n", drv->name);
++ ret = bus_add_driver(drv);
++ if (ret)
++ return ret;
++ ret = driver_add_groups(drv, drv->groups);
++ if (ret)
++ bus_remove_driver(drv);
++ return ret;
+ }
++EXPORT_SYMBOL_GPL(driver_register);
+
+ /**
+- * driver_unregister - remove driver from system.
+- * @drv: driver.
++ * driver_unregister - remove driver from system.
++ * @drv: driver.
+ *
+- * Again, we pass off most of the work to the bus-level call.
++ * Again, we pass off most of the work to the bus-level call.
+ */
+-
+-void driver_unregister(struct device_driver * drv)
++void driver_unregister(struct device_driver *drv)
+ {
++ driver_remove_groups(drv, drv->groups);
+ bus_remove_driver(drv);
+ }
++EXPORT_SYMBOL_GPL(driver_unregister);
+
+ /**
+- * driver_find - locate driver on a bus by its name.
+- * @name: name of the driver.
+- * @bus: bus to scan for the driver.
++ * driver_find - locate driver on a bus by its name.
++ * @name: name of the driver.
++ * @bus: bus to scan for the driver.
+ *
+- * Call kset_find_obj() to iterate over list of drivers on
+- * a bus to find driver by name. Return driver if found.
++ * Call kset_find_obj() to iterate over list of drivers on
++ * a bus to find driver by name. Return driver if found.
+ *
+- * Note that kset_find_obj increments driver's reference count.
++ * Note that kset_find_obj increments driver's reference count.
+ */
+ struct device_driver *driver_find(const char *name, struct bus_type *bus)
+ {
+- struct kobject *k = kset_find_obj(&bus->drivers, name);
+- if (k)
+- return to_drv(k);
++ struct kobject *k = kset_find_obj(bus->p->drivers_kset, name);
++ struct driver_private *priv;
++
++ if (k) {
++ priv = to_driver(k);
++ return priv->driver;
++ }
+ return NULL;
+ }
+-
+-EXPORT_SYMBOL_GPL(driver_register);
+-EXPORT_SYMBOL_GPL(driver_unregister);
+-EXPORT_SYMBOL_GPL(get_driver);
+-EXPORT_SYMBOL_GPL(put_driver);
+ EXPORT_SYMBOL_GPL(driver_find);
+-
+-EXPORT_SYMBOL_GPL(driver_create_file);
+-EXPORT_SYMBOL_GPL(driver_remove_file);
+diff --git a/drivers/base/firmware.c b/drivers/base/firmware.c
+index 90c8629..1138155 100644
+--- a/drivers/base/firmware.c
++++ b/drivers/base/firmware.c
+@@ -3,11 +3,11 @@
+ *
+ * Copyright (c) 2002-3 Patrick Mochel
+ * Copyright (c) 2002-3 Open Source Development Labs
++ * Copyright (c) 2007 Greg Kroah-Hartman <gregkh at suse.de>
++ * Copyright (c) 2007 Novell Inc.
+ *
+ * This file is released under the GPLv2
+- *
+ */
+-
+ #include <linux/kobject.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+@@ -15,23 +15,13 @@
+
+ #include "base.h"
+
+-static decl_subsys(firmware, NULL, NULL);
+-
+-int firmware_register(struct kset *s)
+-{
+- kobj_set_kset_s(s, firmware_subsys);
+- return subsystem_register(s);
+-}
+-
+-void firmware_unregister(struct kset *s)
+-{
+- subsystem_unregister(s);
+-}
++struct kobject *firmware_kobj;
++EXPORT_SYMBOL_GPL(firmware_kobj);
+
+ int __init firmware_init(void)
+ {
+- return subsystem_register(&firmware_subsys);
++ firmware_kobj = kobject_create_and_add("firmware", NULL);
++ if (!firmware_kobj)
++ return -ENOMEM;
++ return 0;
+ }
+-
+-EXPORT_SYMBOL_GPL(firmware_register);
+-EXPORT_SYMBOL_GPL(firmware_unregister);
+diff --git a/drivers/base/hypervisor.c b/drivers/base/hypervisor.c
+index 7080b41..6428cba 100644
+--- a/drivers/base/hypervisor.c
++++ b/drivers/base/hypervisor.c
+@@ -2,19 +2,23 @@
+ * hypervisor.c - /sys/hypervisor subsystem.
+ *
+ * Copyright (C) IBM Corp. 2006
++ * Copyright (C) 2007 Greg Kroah-Hartman <gregkh at suse.de>
++ * Copyright (C) 2007 Novell Inc.
+ *
+ * This file is released under the GPLv2
+ */
+
+ #include <linux/kobject.h>
+ #include <linux/device.h>
+-
+ #include "base.h"
+
+-decl_subsys(hypervisor, NULL, NULL);
+-EXPORT_SYMBOL_GPL(hypervisor_subsys);
++struct kobject *hypervisor_kobj;
++EXPORT_SYMBOL_GPL(hypervisor_kobj);
+
+ int __init hypervisor_init(void)
+ {
+- return subsystem_register(&hypervisor_subsys);
++ hypervisor_kobj = kobject_create_and_add("hypervisor", NULL);
++ if (!hypervisor_kobj)
++ return -ENOMEM;
++ return 0;
+ }
+diff --git a/drivers/base/init.c b/drivers/base/init.c
+index 3713815..7bd9b6a 100644
+--- a/drivers/base/init.c
++++ b/drivers/base/init.c
+@@ -1,10 +1,8 @@
+ /*
+- *
+ * Copyright (c) 2002-3 Patrick Mochel
+ * Copyright (c) 2002-3 Open Source Development Labs
+ *
+ * This file is released under the GPLv2
+- *
+ */
+
+ #include <linux/device.h>
+@@ -14,12 +12,11 @@
+ #include "base.h"
+
+ /**
+- * driver_init - initialize driver model.
++ * driver_init - initialize driver model.
+ *
+- * Call the driver model init functions to initialize their
+- * subsystems. Called early from init/main.c.
++ * Call the driver model init functions to initialize their
++ * subsystems. Called early from init/main.c.
+ */
+-
+ void __init driver_init(void)
+ {
+ /* These are the core pieces */
+@@ -36,5 +33,4 @@ void __init driver_init(void)
+ system_bus_init();
+ cpu_dev_init();
+ memory_dev_init();
+- attribute_container_init();
+ }
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 7868707..7ae413f 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -26,7 +26,7 @@
+ #define MEMORY_CLASS_NAME "memory"
+
+ static struct sysdev_class memory_sysdev_class = {
+- set_kset_name(MEMORY_CLASS_NAME),
++ .name = MEMORY_CLASS_NAME,
+ };
+
+ static const char *memory_uevent_name(struct kset *kset, struct kobject *kobj)
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+new file mode 100644
+index 0000000..103be9c
+--- /dev/null
++++ b/drivers/base/module.c
+@@ -0,0 +1,94 @@
++/*
++ * module.c - module sysfs fun for drivers
++ *
++ * This file is released under the GPLv2
++ *
++ */
++#include <linux/device.h>
++#include <linux/module.h>
++#include <linux/errno.h>
++#include <linux/string.h>
++#include "base.h"
++
++static char *make_driver_name(struct device_driver *drv)
++{
++ char *driver_name;
++
++ driver_name = kmalloc(strlen(drv->name) + strlen(drv->bus->name) + 2,
++ GFP_KERNEL);
++ if (!driver_name)
++ return NULL;
++
++ sprintf(driver_name, "%s:%s", drv->bus->name, drv->name);
++ return driver_name;
++}
++
++static void module_create_drivers_dir(struct module_kobject *mk)
++{
++ if (!mk || mk->drivers_dir)
++ return;
++
++ mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj);
++}
++
++void module_add_driver(struct module *mod, struct device_driver *drv)
++{
++ char *driver_name;
++ int no_warn;
++ struct module_kobject *mk = NULL;
++
++ if (!drv)
++ return;
++
++ if (mod)
++ mk = &mod->mkobj;
++ else if (drv->mod_name) {
++ struct kobject *mkobj;
++
++ /* Lookup built-in module entry in /sys/modules */
++ mkobj = kset_find_obj(module_kset, drv->mod_name);
++ if (mkobj) {
++ mk = container_of(mkobj, struct module_kobject, kobj);
++ /* remember our module structure */
++ drv->p->mkobj = mk;
++ /* kset_find_obj took a reference */
++ kobject_put(mkobj);
++ }
++ }
++
++ if (!mk)
++ return;
++
++ /* Don't check return codes; these calls are idempotent */
++ no_warn = sysfs_create_link(&drv->p->kobj, &mk->kobj, "module");
++ driver_name = make_driver_name(drv);
++ if (driver_name) {
++ module_create_drivers_dir(mk);
++ no_warn = sysfs_create_link(mk->drivers_dir, &drv->p->kobj,
++ driver_name);
++ kfree(driver_name);
++ }
++}
++
++void module_remove_driver(struct device_driver *drv)
++{
++ struct module_kobject *mk = NULL;
++ char *driver_name;
++
++ if (!drv)
++ return;
++
++ sysfs_remove_link(&drv->p->kobj, "module");
++
++ if (drv->owner)
++ mk = &drv->owner->mkobj;
++ else if (drv->p->mkobj)
++ mk = drv->p->mkobj;
++ if (mk && mk->drivers_dir) {
++ driver_name = make_driver_name(drv);
++ if (driver_name) {
++ sysfs_remove_link(mk->drivers_dir, driver_name);
++ kfree(driver_name);
++ }
++ }
++}
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 88eeed7..e59861f 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -15,7 +15,7 @@
+ #include <linux/device.h>
+
+ static struct sysdev_class node_class = {
+- set_kset_name("node"),
++ .name = "node",
+ };
+
+
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index fb56092..efaf282 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -20,7 +20,8 @@
+
+ #include "base.h"
+
+-#define to_platform_driver(drv) (container_of((drv), struct platform_driver, driver))
++#define to_platform_driver(drv) (container_of((drv), struct platform_driver, \
++ driver))
+
+ struct device platform_bus = {
+ .bus_id = "platform",
+@@ -28,14 +29,13 @@ struct device platform_bus = {
+ EXPORT_SYMBOL_GPL(platform_bus);
+
+ /**
+- * platform_get_resource - get a resource for a device
+- * @dev: platform device
+- * @type: resource type
+- * @num: resource index
++ * platform_get_resource - get a resource for a device
++ * @dev: platform device
++ * @type: resource type
++ * @num: resource index
+ */
+-struct resource *
+-platform_get_resource(struct platform_device *dev, unsigned int type,
+- unsigned int num)
++struct resource *platform_get_resource(struct platform_device *dev,
++ unsigned int type, unsigned int num)
+ {
+ int i;
+
+@@ -43,8 +43,7 @@ platform_get_resource(struct platform_device *dev, unsigned int type,
+ struct resource *r = &dev->resource[i];
+
+ if ((r->flags & (IORESOURCE_IO|IORESOURCE_MEM|
+- IORESOURCE_IRQ|IORESOURCE_DMA))
+- == type)
++ IORESOURCE_IRQ|IORESOURCE_DMA)) == type)
+ if (num-- == 0)
+ return r;
+ }
+@@ -53,9 +52,9 @@ platform_get_resource(struct platform_device *dev, unsigned int type,
+ EXPORT_SYMBOL_GPL(platform_get_resource);
+
+ /**
+- * platform_get_irq - get an IRQ for a device
+- * @dev: platform device
+- * @num: IRQ number index
++ * platform_get_irq - get an IRQ for a device
++ * @dev: platform device
++ * @num: IRQ number index
+ */
+ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ {
+@@ -66,14 +65,13 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ EXPORT_SYMBOL_GPL(platform_get_irq);
+
+ /**
+- * platform_get_resource_byname - get a resource for a device by name
+- * @dev: platform device
+- * @type: resource type
+- * @name: resource name
++ * platform_get_resource_byname - get a resource for a device by name
++ * @dev: platform device
++ * @type: resource type
++ * @name: resource name
+ */
+-struct resource *
+-platform_get_resource_byname(struct platform_device *dev, unsigned int type,
+- char *name)
++struct resource *platform_get_resource_byname(struct platform_device *dev,
++ unsigned int type, char *name)
+ {
+ int i;
+
+@@ -90,22 +88,23 @@ platform_get_resource_byname(struct platform_device *dev, unsigned int type,
+ EXPORT_SYMBOL_GPL(platform_get_resource_byname);
+
+ /**
+- * platform_get_irq - get an IRQ for a device
+- * @dev: platform device
+- * @name: IRQ name
++ * platform_get_irq - get an IRQ for a device
++ * @dev: platform device
++ * @name: IRQ name
+ */
+ int platform_get_irq_byname(struct platform_device *dev, char *name)
+ {
+- struct resource *r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name);
++ struct resource *r = platform_get_resource_byname(dev, IORESOURCE_IRQ,
++ name);
+
+ return r ? r->start : -ENXIO;
+ }
+ EXPORT_SYMBOL_GPL(platform_get_irq_byname);
+
+ /**
+- * platform_add_devices - add a numbers of platform devices
+- * @devs: array of platform devices to add
+- * @num: number of platform devices in array
++ * platform_add_devices - add a numbers of platform devices
++ * @devs: array of platform devices to add
++ * @num: number of platform devices in array
+ */
+ int platform_add_devices(struct platform_device **devs, int num)
+ {
+@@ -130,12 +129,11 @@ struct platform_object {
+ };
+
+ /**
+- * platform_device_put
+- * @pdev: platform device to free
++ * platform_device_put
++ * @pdev: platform device to free
+ *
+- * Free all memory associated with a platform device. This function
+- * must _only_ be externally called in error cases. All other usage
+- * is a bug.
++ * Free all memory associated with a platform device. This function must
++ * _only_ be externally called in error cases. All other usage is a bug.
+ */
+ void platform_device_put(struct platform_device *pdev)
+ {
+@@ -146,7 +144,8 @@ EXPORT_SYMBOL_GPL(platform_device_put);
+
+ static void platform_device_release(struct device *dev)
+ {
+- struct platform_object *pa = container_of(dev, struct platform_object, pdev.dev);
++ struct platform_object *pa = container_of(dev, struct platform_object,
++ pdev.dev);
+
+ kfree(pa->pdev.dev.platform_data);
+ kfree(pa->pdev.resource);
+@@ -154,12 +153,12 @@ static void platform_device_release(struct device *dev)
+ }
+
+ /**
+- * platform_device_alloc
+- * @name: base name of the device we're adding
+- * @id: instance id
++ * platform_device_alloc
++ * @name: base name of the device we're adding
++ * @id: instance id
+ *
+- * Create a platform device object which can have other objects attached
+- * to it, and which will have attached objects freed when it is released.
++ * Create a platform device object which can have other objects attached
++ * to it, and which will have attached objects freed when it is released.
+ */
+ struct platform_device *platform_device_alloc(const char *name, int id)
+ {
+@@ -179,16 +178,17 @@ struct platform_device *platform_device_alloc(const char *name, int id)
+ EXPORT_SYMBOL_GPL(platform_device_alloc);
+
+ /**
+- * platform_device_add_resources
+- * @pdev: platform device allocated by platform_device_alloc to add resources to
+- * @res: set of resources that needs to be allocated for the device
+- * @num: number of resources
++ * platform_device_add_resources
++ * @pdev: platform device allocated by platform_device_alloc to add resources to
++ * @res: set of resources that needs to be allocated for the device
++ * @num: number of resources
+ *
+- * Add a copy of the resources to the platform device. The memory
+- * associated with the resources will be freed when the platform
+- * device is released.
++ * Add a copy of the resources to the platform device. The memory
++ * associated with the resources will be freed when the platform device is
++ * released.
+ */
+-int platform_device_add_resources(struct platform_device *pdev, struct resource *res, unsigned int num)
++int platform_device_add_resources(struct platform_device *pdev,
++ struct resource *res, unsigned int num)
+ {
+ struct resource *r;
+
+@@ -203,16 +203,17 @@ int platform_device_add_resources(struct platform_device *pdev, struct resource
+ EXPORT_SYMBOL_GPL(platform_device_add_resources);
+
+ /**
+- * platform_device_add_data
+- * @pdev: platform device allocated by platform_device_alloc to add resources to
+- * @data: platform specific data for this platform device
+- * @size: size of platform specific data
++ * platform_device_add_data
++ * @pdev: platform device allocated by platform_device_alloc to add resources to
++ * @data: platform specific data for this platform device
++ * @size: size of platform specific data
+ *
+- * Add a copy of platform specific data to the platform device's platform_data
+- * pointer. The memory associated with the platform data will be freed
+- * when the platform device is released.
++ * Add a copy of platform specific data to the platform device's
++ * platform_data pointer. The memory associated with the platform data
++ * will be freed when the platform device is released.
+ */
+-int platform_device_add_data(struct platform_device *pdev, const void *data, size_t size)
++int platform_device_add_data(struct platform_device *pdev, const void *data,
++ size_t size)
+ {
+ void *d;
+
+@@ -226,11 +227,11 @@ int platform_device_add_data(struct platform_device *pdev, const void *data, siz
+ EXPORT_SYMBOL_GPL(platform_device_add_data);
+
+ /**
+- * platform_device_add - add a platform device to device hierarchy
+- * @pdev: platform device we're adding
++ * platform_device_add - add a platform device to device hierarchy
++ * @pdev: platform device we're adding
+ *
+- * This is part 2 of platform_device_register(), though may be called
+- * separately _iff_ pdev was allocated by platform_device_alloc().
++ * This is part 2 of platform_device_register(), though may be called
++ * separately _iff_ pdev was allocated by platform_device_alloc().
+ */
+ int platform_device_add(struct platform_device *pdev)
+ {
+@@ -289,13 +290,12 @@ int platform_device_add(struct platform_device *pdev)
+ EXPORT_SYMBOL_GPL(platform_device_add);
+
+ /**
+- * platform_device_del - remove a platform-level device
+- * @pdev: platform device we're removing
++ * platform_device_del - remove a platform-level device
++ * @pdev: platform device we're removing
+ *
+- * Note that this function will also release all memory- and port-based
+- * resources owned by the device (@dev->resource). This function
+- * must _only_ be externally called in error cases. All other usage
+- * is a bug.
++ * Note that this function will also release all memory- and port-based
++ * resources owned by the device (@dev->resource). This function must
++ * _only_ be externally called in error cases. All other usage is a bug.
+ */
+ void platform_device_del(struct platform_device *pdev)
+ {
+@@ -314,11 +314,10 @@ void platform_device_del(struct platform_device *pdev)
+ EXPORT_SYMBOL_GPL(platform_device_del);
+
+ /**
+- * platform_device_register - add a platform-level device
+- * @pdev: platform device we're adding
+- *
++ * platform_device_register - add a platform-level device
++ * @pdev: platform device we're adding
+ */
+-int platform_device_register(struct platform_device * pdev)
++int platform_device_register(struct platform_device *pdev)
+ {
+ device_initialize(&pdev->dev);
+ return platform_device_add(pdev);
+@@ -326,14 +325,14 @@ int platform_device_register(struct platform_device * pdev)
+ EXPORT_SYMBOL_GPL(platform_device_register);
+
+ /**
+- * platform_device_unregister - unregister a platform-level device
+- * @pdev: platform device we're unregistering
++ * platform_device_unregister - unregister a platform-level device
++ * @pdev: platform device we're unregistering
+ *
+- * Unregistration is done in 2 steps. First we release all resources
+- * and remove it from the subsystem, then we drop reference count by
+- * calling platform_device_put().
++ * Unregistration is done in 2 steps. First we release all resources
++ * and remove it from the subsystem, then we drop reference count by
++ * calling platform_device_put().
+ */
+-void platform_device_unregister(struct platform_device * pdev)
++void platform_device_unregister(struct platform_device *pdev)
+ {
+ platform_device_del(pdev);
+ platform_device_put(pdev);
+@@ -341,27 +340,29 @@ void platform_device_unregister(struct platform_device * pdev)
+ EXPORT_SYMBOL_GPL(platform_device_unregister);
+
+ /**
+- * platform_device_register_simple
+- * @name: base name of the device we're adding
+- * @id: instance id
+- * @res: set of resources that needs to be allocated for the device
+- * @num: number of resources
++ * platform_device_register_simple
++ * @name: base name of the device we're adding
++ * @id: instance id
++ * @res: set of resources that needs to be allocated for the device
++ * @num: number of resources
+ *
+- * This function creates a simple platform device that requires minimal
+- * resource and memory management. Canned release function freeing
+- * memory allocated for the device allows drivers using such devices
+- * to be unloaded without waiting for the last reference to the device
+- * to be dropped.
++ * This function creates a simple platform device that requires minimal
++ * resource and memory management. Canned release function freeing memory
++ * allocated for the device allows drivers using such devices to be
++ * unloaded without waiting for the last reference to the device to be
++ * dropped.
+ *
+- * This interface is primarily intended for use with legacy drivers
+- * which probe hardware directly. Because such drivers create sysfs
+- * device nodes themselves, rather than letting system infrastructure
+- * handle such device enumeration tasks, they don't fully conform to
+- * the Linux driver model. In particular, when such drivers are built
+- * as modules, they can't be "hotplugged".
++ * This interface is primarily intended for use with legacy drivers which
++ * probe hardware directly. Because such drivers create sysfs device nodes
++ * themselves, rather than letting system infrastructure handle such device
++ * enumeration tasks, they don't fully conform to the Linux driver model.
++ * In particular, when such drivers are built as modules, they can't be
++ * "hotplugged".
+ */
+-struct platform_device *platform_device_register_simple(char *name, int id,
+- struct resource *res, unsigned int num)
++struct platform_device *platform_device_register_simple(const char *name,
++ int id,
++ struct resource *res,
++ unsigned int num)
+ {
+ struct platform_device *pdev;
+ int retval;
+@@ -436,8 +437,8 @@ static int platform_drv_resume(struct device *_dev)
+ }
+
+ /**
+- * platform_driver_register
+- * @drv: platform driver structure
++ * platform_driver_register
++ * @drv: platform driver structure
+ */
+ int platform_driver_register(struct platform_driver *drv)
+ {
+@@ -457,8 +458,8 @@ int platform_driver_register(struct platform_driver *drv)
+ EXPORT_SYMBOL_GPL(platform_driver_register);
+
+ /**
+- * platform_driver_unregister
+- * @drv: platform driver structure
++ * platform_driver_unregister
++ * @drv: platform driver structure
+ */
+ void platform_driver_unregister(struct platform_driver *drv)
+ {
+@@ -497,12 +498,12 @@ int __init_or_module platform_driver_probe(struct platform_driver *drv,
+ * if the probe was successful, and make sure any forced probes of
+ * new devices fail.
+ */
+- spin_lock(&platform_bus_type.klist_drivers.k_lock);
++ spin_lock(&platform_bus_type.p->klist_drivers.k_lock);
+ drv->probe = NULL;
+- if (code == 0 && list_empty(&drv->driver.klist_devices.k_list))
++ if (code == 0 && list_empty(&drv->driver.p->klist_devices.k_list))
+ retval = -ENODEV;
+ drv->driver.probe = platform_drv_probe_fail;
+- spin_unlock(&platform_bus_type.klist_drivers.k_lock);
++ spin_unlock(&platform_bus_type.p->klist_drivers.k_lock);
+
+ if (code != retval)
+ platform_driver_unregister(drv);
+@@ -516,8 +517,8 @@ EXPORT_SYMBOL_GPL(platform_driver_probe);
+ * (b) sysfs attribute lets new-style coldplug recover from hotplug events
+ * mishandled before system is fully running: "modprobe $(cat modalias)"
+ */
+-static ssize_t
+-modalias_show(struct device *dev, struct device_attribute *a, char *buf)
++static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
++ char *buf)
+ {
+ struct platform_device *pdev = to_platform_device(dev);
+ int len = snprintf(buf, PAGE_SIZE, "platform:%s\n", pdev->name);
+@@ -538,26 +539,24 @@ static int platform_uevent(struct device *dev, struct kobj_uevent_env *env)
+ return 0;
+ }
+
+-
+ /**
+- * platform_match - bind platform device to platform driver.
+- * @dev: device.
+- * @drv: driver.
++ * platform_match - bind platform device to platform driver.
++ * @dev: device.
++ * @drv: driver.
+ *
+- * Platform device IDs are assumed to be encoded like this:
+- * "<name><instance>", where <name> is a short description of the
+- * type of device, like "pci" or "floppy", and <instance> is the
+- * enumerated instance of the device, like '0' or '42'.
+- * Driver IDs are simply "<name>".
+- * So, extract the <name> from the platform_device structure,
+- * and compare it against the name of the driver. Return whether
+- * they match or not.
++ * Platform device IDs are assumed to be encoded like this:
++ * "<name><instance>", where <name> is a short description of the type of
++ * device, like "pci" or "floppy", and <instance> is the enumerated
++ * instance of the device, like '0' or '42'. Driver IDs are simply
++ * "<name>". So, extract the <name> from the platform_device structure,
++ * and compare it against the name of the driver. Return whether they match
++ * or not.
+ */
+-
+-static int platform_match(struct device * dev, struct device_driver * drv)
++static int platform_match(struct device *dev, struct device_driver *drv)
+ {
+- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
++ struct platform_device *pdev;
+
++ pdev = container_of(dev, struct platform_device, dev);
+ return (strncmp(pdev->name, drv->name, BUS_ID_SIZE) == 0);
+ }
+
+@@ -574,9 +573,10 @@ static int platform_suspend(struct device *dev, pm_message_t mesg)
+ static int platform_suspend_late(struct device *dev, pm_message_t mesg)
+ {
+ struct platform_driver *drv = to_platform_driver(dev->driver);
+- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
++ struct platform_device *pdev;
+ int ret = 0;
+
++ pdev = container_of(dev, struct platform_device, dev);
+ if (dev->driver && drv->suspend_late)
+ ret = drv->suspend_late(pdev, mesg);
+
+@@ -586,16 +586,17 @@ static int platform_suspend_late(struct device *dev, pm_message_t mesg)
+ static int platform_resume_early(struct device *dev)
+ {
+ struct platform_driver *drv = to_platform_driver(dev->driver);
+- struct platform_device *pdev = container_of(dev, struct platform_device, dev);
++ struct platform_device *pdev;
+ int ret = 0;
+
++ pdev = container_of(dev, struct platform_device, dev);
+ if (dev->driver && drv->resume_early)
+ ret = drv->resume_early(pdev);
+
+ return ret;
+ }
+
+-static int platform_resume(struct device * dev)
++static int platform_resume(struct device *dev)
+ {
+ int ret = 0;
+
+diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
+index 44504e6..de28dfd 100644
+--- a/drivers/base/power/Makefile
++++ b/drivers/base/power/Makefile
+@@ -1,11 +1,6 @@
+-obj-y := shutdown.o
+ obj-$(CONFIG_PM) += sysfs.o
+ obj-$(CONFIG_PM_SLEEP) += main.o
+ obj-$(CONFIG_PM_TRACE) += trace.o
+
+-ifeq ($(CONFIG_DEBUG_DRIVER),y)
+-EXTRA_CFLAGS += -DDEBUG
+-endif
+-ifeq ($(CONFIG_PM_VERBOSE),y)
+-EXTRA_CFLAGS += -DDEBUG
+-endif
++ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
++ccflags-$(CONFIG_PM_VERBOSE) += -DDEBUG
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 691ffb6..200ed5f 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -24,20 +24,45 @@
+ #include <linux/mutex.h>
+ #include <linux/pm.h>
+ #include <linux/resume-trace.h>
++#include <linux/rwsem.h>
+
+ #include "../base.h"
+ #include "power.h"
+
++/*
++ * The entries in the dpm_active list are in a depth first order, simply
++ * because children are guaranteed to be discovered after parents, and
++ * are inserted at the back of the list on discovery.
++ *
++ * All the other lists are kept in the same order, for consistency.
++ * However the lists aren't always traversed in the same order.
++ * Semaphores must be acquired from the top (i.e., front) down
++ * and released in the opposite order. Devices must be suspended
++ * from the bottom (i.e., end) up and resumed in the opposite order.
++ * That way no parent will be suspended while it still has an active
++ * child.
++ *
++ * Since device_pm_add() may be called with a device semaphore held,
++ * we must never try to acquire a device semaphore while holding
++ * dpm_list_mutex.
++ */
++
+ LIST_HEAD(dpm_active);
++static LIST_HEAD(dpm_locked);
+ static LIST_HEAD(dpm_off);
+ static LIST_HEAD(dpm_off_irq);
++static LIST_HEAD(dpm_destroy);
+
+-static DEFINE_MUTEX(dpm_mtx);
+ static DEFINE_MUTEX(dpm_list_mtx);
+
+-int (*platform_enable_wakeup)(struct device *dev, int is_on);
++static DECLARE_RWSEM(pm_sleep_rwsem);
+
++int (*platform_enable_wakeup)(struct device *dev, int is_on);
+
++/**
++ * device_pm_add - add a device to the list of active devices
++ * @dev: Device to be added to the list
++ */
+ void device_pm_add(struct device *dev)
+ {
+ pr_debug("PM: Adding info for %s:%s\n",
+@@ -48,8 +73,36 @@ void device_pm_add(struct device *dev)
+ mutex_unlock(&dpm_list_mtx);
+ }
+
++/**
++ * device_pm_remove - remove a device from the list of active devices
++ * @dev: Device to be removed from the list
++ *
++ * This function also removes the device's PM-related sysfs attributes.
++ */
+ void device_pm_remove(struct device *dev)
+ {
++ /*
++ * If this function is called during a suspend, it will be blocked,
++ * because we're holding the device's semaphore at that time, which may
++ * lead to a deadlock. In that case we want to print a warning.
++ * However, it may also be called by unregister_dropped_devices() with
++ * the device's semaphore released, in which case the warning should
++ * not be printed.
++ */
++ if (down_trylock(&dev->sem)) {
++ if (down_read_trylock(&pm_sleep_rwsem)) {
++ /* No suspend in progress, wait on dev->sem */
++ down(&dev->sem);
++ up_read(&pm_sleep_rwsem);
++ } else {
++ /* Suspend in progress, we may deadlock */
++ dev_warn(dev, "Suspicious %s during suspend\n",
++ __FUNCTION__);
++ dump_stack();
++ /* The user has been warned ... */
++ down(&dev->sem);
++ }
++ }
+ pr_debug("PM: Removing info for %s:%s\n",
+ dev->bus ? dev->bus->name : "No Bus",
+ kobject_name(&dev->kobj));
+@@ -57,25 +110,124 @@ void device_pm_remove(struct device *dev)
+ dpm_sysfs_remove(dev);
+ list_del_init(&dev->power.entry);
+ mutex_unlock(&dpm_list_mtx);
++ up(&dev->sem);
++}
++
++/**
++ * device_pm_schedule_removal - schedule the removal of a suspended device
++ * @dev: Device to destroy
++ *
++ * Moves the device to the dpm_destroy list for further processing by
++ * unregister_dropped_devices().
++ */
++void device_pm_schedule_removal(struct device *dev)
++{
++ pr_debug("PM: Preparing for removal: %s:%s\n",
++ dev->bus ? dev->bus->name : "No Bus",
++ kobject_name(&dev->kobj));
++ mutex_lock(&dpm_list_mtx);
++ list_move_tail(&dev->power.entry, &dpm_destroy);
++ mutex_unlock(&dpm_list_mtx);
++}
++
++/**
++ * pm_sleep_lock - mutual exclusion for registration and suspend
++ *
++ * Returns 0 if no suspend is underway and device registration
++ * may proceed, otherwise -EBUSY.
++ */
++int pm_sleep_lock(void)
++{
++ if (down_read_trylock(&pm_sleep_rwsem))
++ return 0;
++
++ return -EBUSY;
++}
++
++/**
++ * pm_sleep_unlock - mutual exclusion for registration and suspend
++ *
++ * This routine undoes the effect of device_pm_add_lock
++ * when a device's registration is complete.
++ */
++void pm_sleep_unlock(void)
++{
++ up_read(&pm_sleep_rwsem);
+ }
+
+
+ /*------------------------- Resume routines -------------------------*/
+
+ /**
+- * resume_device - Restore state for one device.
++ * resume_device_early - Power on one device (early resume).
+ * @dev: Device.
+ *
++ * Must be called with interrupts disabled.
+ */
+-
+-static int resume_device(struct device * dev)
++static int resume_device_early(struct device *dev)
+ {
+ int error = 0;
+
+ TRACE_DEVICE(dev);
+ TRACE_RESUME(0);
+
+- down(&dev->sem);
++ if (dev->bus && dev->bus->resume_early) {
++ dev_dbg(dev, "EARLY resume\n");
++ error = dev->bus->resume_early(dev);
++ }
++
++ TRACE_RESUME(error);
++ return error;
++}
++
++/**
++ * dpm_power_up - Power on all regular (non-sysdev) devices.
++ *
++ * Walk the dpm_off_irq list and power each device up. This
++ * is used for devices that required they be powered down with
++ * interrupts disabled. As devices are powered on, they are moved
++ * to the dpm_off list.
++ *
++ * Must be called with interrupts disabled and only one CPU running.
++ */
++static void dpm_power_up(void)
++{
++
++ while (!list_empty(&dpm_off_irq)) {
++ struct list_head *entry = dpm_off_irq.next;
++ struct device *dev = to_device(entry);
++
++ list_move_tail(entry, &dpm_off);
++ resume_device_early(dev);
++ }
++}
++
++/**
++ * device_power_up - Turn on all devices that need special attention.
++ *
++ * Power on system devices, then devices that required we shut them down
++ * with interrupts disabled.
++ *
++ * Must be called with interrupts disabled.
++ */
++void device_power_up(void)
++{
++ sysdev_resume();
++ dpm_power_up();
++}
++EXPORT_SYMBOL_GPL(device_power_up);
++
++/**
++ * resume_device - Restore state for one device.
++ * @dev: Device.
++ *
++ */
++static int resume_device(struct device *dev)
++{
++ int error = 0;
++
++ TRACE_DEVICE(dev);
++ TRACE_RESUME(0);
+
+ if (dev->bus && dev->bus->resume) {
+ dev_dbg(dev,"resuming\n");
+@@ -92,126 +244,94 @@ static int resume_device(struct device * dev)
+ error = dev->class->resume(dev);
+ }
+
+- up(&dev->sem);
+-
+ TRACE_RESUME(error);
+ return error;
+ }
+
+-
+-static int resume_device_early(struct device * dev)
+-{
+- int error = 0;
+-
+- TRACE_DEVICE(dev);
+- TRACE_RESUME(0);
+- if (dev->bus && dev->bus->resume_early) {
+- dev_dbg(dev,"EARLY resume\n");
+- error = dev->bus->resume_early(dev);
+- }
+- TRACE_RESUME(error);
+- return error;
+-}
+-
+-/*
+- * Resume the devices that have either not gone through
+- * the late suspend, or that did go through it but also
+- * went through the early resume
++/**
++ * dpm_resume - Resume every device.
++ *
++ * Resume the devices that have either not gone through
++ * the late suspend, or that did go through it but also
++ * went through the early resume.
++ *
++ * Take devices from the dpm_off_list, resume them,
++ * and put them on the dpm_locked list.
+ */
+ static void dpm_resume(void)
+ {
+ mutex_lock(&dpm_list_mtx);
+ while(!list_empty(&dpm_off)) {
+- struct list_head * entry = dpm_off.next;
+- struct device * dev = to_device(entry);
+-
+- get_device(dev);
+- list_move_tail(entry, &dpm_active);
++ struct list_head *entry = dpm_off.next;
++ struct device *dev = to_device(entry);
+
++ list_move_tail(entry, &dpm_locked);
+ mutex_unlock(&dpm_list_mtx);
+ resume_device(dev);
+ mutex_lock(&dpm_list_mtx);
+- put_device(dev);
+ }
+ mutex_unlock(&dpm_list_mtx);
+ }
+
+-
+ /**
+- * device_resume - Restore state of each device in system.
++ * unlock_all_devices - Release each device's semaphore
+ *
+- * Walk the dpm_off list, remove each entry, resume the device,
+- * then add it to the dpm_active list.
++ * Go through the dpm_off list. Put each device on the dpm_active
++ * list and unlock it.
+ */
+-
+-void device_resume(void)
++static void unlock_all_devices(void)
+ {
+- might_sleep();
+- mutex_lock(&dpm_mtx);
+- dpm_resume();
+- mutex_unlock(&dpm_mtx);
+-}
+-
+-EXPORT_SYMBOL_GPL(device_resume);
++ mutex_lock(&dpm_list_mtx);
++ while (!list_empty(&dpm_locked)) {
++ struct list_head *entry = dpm_locked.prev;
++ struct device *dev = to_device(entry);
+
++ list_move(entry, &dpm_active);
++ up(&dev->sem);
++ }
++ mutex_unlock(&dpm_list_mtx);
++}
+
+ /**
+- * dpm_power_up - Power on some devices.
+- *
+- * Walk the dpm_off_irq list and power each device up. This
+- * is used for devices that required they be powered down with
+- * interrupts disabled. As devices are powered on, they are moved
+- * to the dpm_active list.
++ * unregister_dropped_devices - Unregister devices scheduled for removal
+ *
+- * Interrupts must be disabled when calling this.
++ * Unregister all devices on the dpm_destroy list.
+ */
+-
+-static void dpm_power_up(void)
++static void unregister_dropped_devices(void)
+ {
+- while(!list_empty(&dpm_off_irq)) {
+- struct list_head * entry = dpm_off_irq.next;
+- struct device * dev = to_device(entry);
++ mutex_lock(&dpm_list_mtx);
++ while (!list_empty(&dpm_destroy)) {
++ struct list_head *entry = dpm_destroy.next;
++ struct device *dev = to_device(entry);
+
+- list_move_tail(entry, &dpm_off);
+- resume_device_early(dev);
++ up(&dev->sem);
++ mutex_unlock(&dpm_list_mtx);
++ /* This also removes the device from the list */
++ device_unregister(dev);
++ mutex_lock(&dpm_list_mtx);
+ }
++ mutex_unlock(&dpm_list_mtx);
+ }
+
+-
+ /**
+- * device_power_up - Turn on all devices that need special attention.
++ * device_resume - Restore state of each device in system.
+ *
+- * Power on system devices then devices that required we shut them down
+- * with interrupts disabled.
+- * Called with interrupts disabled.
++ * Resume all the devices, unlock them all, and allow new
++ * devices to be registered once again.
+ */
+-
+-void device_power_up(void)
++void device_resume(void)
+ {
+- sysdev_resume();
+- dpm_power_up();
++ might_sleep();
++ dpm_resume();
++ unlock_all_devices();
++ unregister_dropped_devices();
++ up_write(&pm_sleep_rwsem);
+ }
+-
+-EXPORT_SYMBOL_GPL(device_power_up);
++EXPORT_SYMBOL_GPL(device_resume);
+
+
+ /*------------------------- Suspend routines -------------------------*/
+
+-/*
+- * The entries in the dpm_active list are in a depth first order, simply
+- * because children are guaranteed to be discovered after parents, and
+- * are inserted at the back of the list on discovery.
+- *
+- * All list on the suspend path are done in reverse order, so we operate
+- * on the leaves of the device tree (or forests, depending on how you want
+- * to look at it ;) first. As nodes are removed from the back of the list,
+- * they are inserted into the front of their destintation lists.
+- *
+- * Things are the reverse on the resume path - iterations are done in
+- * forward order, and nodes are inserted at the back of their destination
+- * lists. This way, the ancestors will be accessed before their descendents.
+- */
+-
+ static inline char *suspend_verb(u32 event)
+ {
+ switch (event) {
+@@ -222,7 +342,6 @@ static inline char *suspend_verb(u32 event)
+ }
+ }
+
+-
+ static void
+ suspend_device_dbg(struct device *dev, pm_message_t state, char *info)
+ {
+@@ -232,16 +351,73 @@ suspend_device_dbg(struct device *dev, pm_message_t state, char *info)
+ }
+
+ /**
+- * suspend_device - Save state of one device.
++ * suspend_device_late - Shut down one device (late suspend).
+ * @dev: Device.
+ * @state: Power state device is entering.
++ *
++ * This is called with interrupts off and only a single CPU running.
+ */
++static int suspend_device_late(struct device *dev, pm_message_t state)
++{
++ int error = 0;
+
+-static int suspend_device(struct device * dev, pm_message_t state)
++ if (dev->bus && dev->bus->suspend_late) {
++ suspend_device_dbg(dev, state, "LATE ");
++ error = dev->bus->suspend_late(dev, state);
++ suspend_report_result(dev->bus->suspend_late, error);
++ }
++ return error;
++}
++
++/**
++ * device_power_down - Shut down special devices.
++ * @state: Power state to enter.
++ *
++ * Power down devices that require interrupts to be disabled
++ * and move them from the dpm_off list to the dpm_off_irq list.
++ * Then power down system devices.
++ *
++ * Must be called with interrupts disabled and only one CPU running.
++ */
++int device_power_down(pm_message_t state)
++{
++ int error = 0;
++
++ while (!list_empty(&dpm_off)) {
++ struct list_head *entry = dpm_off.prev;
++ struct device *dev = to_device(entry);
++
++ list_del_init(&dev->power.entry);
++ error = suspend_device_late(dev, state);
++ if (error) {
++ printk(KERN_ERR "Could not power down device %s: "
++ "error %d\n",
++ kobject_name(&dev->kobj), error);
++ if (list_empty(&dev->power.entry))
++ list_add(&dev->power.entry, &dpm_off);
++ break;
++ }
++ if (list_empty(&dev->power.entry))
++ list_add(&dev->power.entry, &dpm_off_irq);
++ }
++
++ if (!error)
++ error = sysdev_suspend(state);
++ if (error)
++ dpm_power_up();
++ return error;
++}
++EXPORT_SYMBOL_GPL(device_power_down);
++
++/**
++ * suspend_device - Save state of one device.
++ * @dev: Device.
++ * @state: Power state device is entering.
++ */
++int suspend_device(struct device *dev, pm_message_t state)
+ {
+ int error = 0;
+
+- down(&dev->sem);
+ if (dev->power.power_state.event) {
+ dev_dbg(dev, "PM: suspend %d-->%d\n",
+ dev->power.power_state.event, state.event);
+@@ -264,123 +440,105 @@ static int suspend_device(struct device * dev, pm_message_t state)
+ error = dev->bus->suspend(dev, state);
+ suspend_report_result(dev->bus->suspend, error);
+ }
+- up(&dev->sem);
+- return error;
+-}
+-
+-
+-/*
+- * This is called with interrupts off, only a single CPU
+- * running. We can't acquire a mutex or semaphore (and we don't
+- * need the protection)
+- */
+-static int suspend_device_late(struct device *dev, pm_message_t state)
+-{
+- int error = 0;
+-
+- if (dev->bus && dev->bus->suspend_late) {
+- suspend_device_dbg(dev, state, "LATE ");
+- error = dev->bus->suspend_late(dev, state);
+- suspend_report_result(dev->bus->suspend_late, error);
+- }
+ return error;
+ }
+
+ /**
+- * device_suspend - Save state and stop all devices in system.
+- * @state: Power state to put each device in.
++ * dpm_suspend - Suspend every device.
++ * @state: Power state to put each device in.
+ *
+- * Walk the dpm_active list, call ->suspend() for each device, and move
+- * it to the dpm_off list.
++ * Walk the dpm_locked list. Suspend each device and move it
++ * to the dpm_off list.
+ *
+ * (For historical reasons, if it returns -EAGAIN, that used to mean
+ * that the device would be called again with interrupts disabled.
+ * These days, we use the "suspend_late()" callback for that, so we
+ * print a warning and consider it an error).
+- *
+- * If we get a different error, try and back out.
+- *
+- * If we hit a failure with any of the devices, call device_resume()
+- * above to bring the suspended devices back to life.
+- *
+ */
+-
+-int device_suspend(pm_message_t state)
++static int dpm_suspend(pm_message_t state)
+ {
+ int error = 0;
+
+- might_sleep();
+- mutex_lock(&dpm_mtx);
+ mutex_lock(&dpm_list_mtx);
+- while (!list_empty(&dpm_active) && error == 0) {
+- struct list_head * entry = dpm_active.prev;
+- struct device * dev = to_device(entry);
++ while (!list_empty(&dpm_locked)) {
++ struct list_head *entry = dpm_locked.prev;
++ struct device *dev = to_device(entry);
+
+- get_device(dev);
++ list_del_init(&dev->power.entry);
+ mutex_unlock(&dpm_list_mtx);
+-
+ error = suspend_device(dev, state);
+-
+- mutex_lock(&dpm_list_mtx);
+-
+- /* Check if the device got removed */
+- if (!list_empty(&dev->power.entry)) {
+- /* Move it to the dpm_off list */
+- if (!error)
+- list_move(&dev->power.entry, &dpm_off);
+- }
+- if (error)
++ if (error) {
+ printk(KERN_ERR "Could not suspend device %s: "
+- "error %d%s\n",
+- kobject_name(&dev->kobj), error,
+- error == -EAGAIN ? " (please convert to suspend_late)" : "");
+- put_device(dev);
++ "error %d%s\n",
++ kobject_name(&dev->kobj),
++ error,
++ (error == -EAGAIN ?
++ " (please convert to suspend_late)" :
++ ""));
++ mutex_lock(&dpm_list_mtx);
++ if (list_empty(&dev->power.entry))
++ list_add(&dev->power.entry, &dpm_locked);
++ mutex_unlock(&dpm_list_mtx);
++ break;
++ }
++ mutex_lock(&dpm_list_mtx);
++ if (list_empty(&dev->power.entry))
++ list_add(&dev->power.entry, &dpm_off);
+ }
+ mutex_unlock(&dpm_list_mtx);
+- if (error)
+- dpm_resume();
+
+- mutex_unlock(&dpm_mtx);
+ return error;
+ }
+
+-EXPORT_SYMBOL_GPL(device_suspend);
+-
+ /**
+- * device_power_down - Shut down special devices.
+- * @state: Power state to enter.
++ * lock_all_devices - Acquire every device's semaphore
+ *
+- * Walk the dpm_off_irq list, calling ->power_down() for each device that
+- * couldn't power down the device with interrupts enabled. When we're
+- * done, power down system devices.
++ * Go through the dpm_active list. Carefully lock each device's
++ * semaphore and put it in on the dpm_locked list.
+ */
+-
+-int device_power_down(pm_message_t state)
++static void lock_all_devices(void)
+ {
+- int error = 0;
+- struct device * dev;
++ mutex_lock(&dpm_list_mtx);
++ while (!list_empty(&dpm_active)) {
++ struct list_head *entry = dpm_active.next;
++ struct device *dev = to_device(entry);
+
+- while (!list_empty(&dpm_off)) {
+- struct list_head * entry = dpm_off.prev;
++ /* Required locking order is dev->sem first,
++ * then dpm_list_mutex. Hence this awkward code.
++ */
++ get_device(dev);
++ mutex_unlock(&dpm_list_mtx);
++ down(&dev->sem);
++ mutex_lock(&dpm_list_mtx);
+
+- dev = to_device(entry);
+- error = suspend_device_late(dev, state);
+- if (error)
+- goto Error;
+- list_move(&dev->power.entry, &dpm_off_irq);
++ if (list_empty(entry))
++ up(&dev->sem); /* Device was removed */
++ else
++ list_move_tail(entry, &dpm_locked);
++ put_device(dev);
+ }
++ mutex_unlock(&dpm_list_mtx);
++}
++
++/**
++ * device_suspend - Save state and stop all devices in system.
++ *
++ * Prevent new devices from being registered, then lock all devices
++ * and suspend them.
++ */
++int device_suspend(pm_message_t state)
++{
++ int error;
+
+- error = sysdev_suspend(state);
+- Done:
++ might_sleep();
++ down_write(&pm_sleep_rwsem);
++ lock_all_devices();
++ error = dpm_suspend(state);
++ if (error)
++ device_resume();
+ return error;
+- Error:
+- printk(KERN_ERR "Could not power down device %s: "
+- "error %d\n", kobject_name(&dev->kobj), error);
+- dpm_power_up();
+- goto Done;
+ }
+-
+-EXPORT_SYMBOL_GPL(device_power_down);
++EXPORT_SYMBOL_GPL(device_suspend);
+
+ void __suspend_report_result(const char *function, void *fn, int ret)
+ {
+diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
+index 379da4e..6f0dfca 100644
+--- a/drivers/base/power/power.h
++++ b/drivers/base/power/power.h
+@@ -1,10 +1,3 @@
+-/*
+- * shutdown.c
+- */
+-
+-extern void device_shutdown(void);
+-
+-
+ #ifdef CONFIG_PM_SLEEP
+
+ /*
+@@ -20,6 +13,9 @@ static inline struct device *to_device(struct list_head *entry)
+
+ extern void device_pm_add(struct device *);
+ extern void device_pm_remove(struct device *);
++extern void device_pm_schedule_removal(struct device *);
++extern int pm_sleep_lock(void);
++extern void pm_sleep_unlock(void);
+
+ #else /* CONFIG_PM_SLEEP */
+
+@@ -32,6 +28,15 @@ static inline void device_pm_remove(struct device *dev)
+ {
+ }
+
++static inline int pm_sleep_lock(void)
++{
++ return 0;
++}
++
++static inline void pm_sleep_unlock(void)
++{
++}
++
+ #endif
+
+ #ifdef CONFIG_PM
+diff --git a/drivers/base/power/shutdown.c b/drivers/base/power/shutdown.c
+deleted file mode 100644
+index 56e8eaa..0000000
+--- a/drivers/base/power/shutdown.c
++++ /dev/null
+@@ -1,48 +0,0 @@
+-/*
+- * shutdown.c - power management functions for the device tree.
+- *
+- * Copyright (c) 2002-3 Patrick Mochel
+- * 2002-3 Open Source Development Lab
+- *
+- * This file is released under the GPLv2
+- *
+- */
+-
+-#include <linux/device.h>
+-#include <asm/semaphore.h>
+-
+-#include "../base.h"
+-#include "power.h"
+-
+-#define to_dev(node) container_of(node, struct device, kobj.entry)
+-
+-
+-/**
+- * We handle system devices differently - we suspend and shut them
+- * down last and resume them first. That way, we don't do anything stupid like
+- * shutting down the interrupt controller before any devices..
+- *
+- * Note that there are not different stages for power management calls -
+- * they only get one called once when interrupts are disabled.
+- */
+-
+-
+-/**
+- * device_shutdown - call ->shutdown() on each device to shutdown.
+- */
+-void device_shutdown(void)
+-{
+- struct device * dev, *devn;
+-
+- list_for_each_entry_safe_reverse(dev, devn, &devices_subsys.list,
+- kobj.entry) {
+- if (dev->bus && dev->bus->shutdown) {
+- dev_dbg(dev, "shutdown\n");
+- dev->bus->shutdown(dev);
+- } else if (dev->driver && dev->driver->shutdown) {
+- dev_dbg(dev, "shutdown\n");
+- dev->driver->shutdown(dev);
+- }
+- }
+-}
+-
+diff --git a/drivers/base/sys.c b/drivers/base/sys.c
+index ac7ff6d..2f79c55 100644
+--- a/drivers/base/sys.c
++++ b/drivers/base/sys.c
+@@ -25,8 +25,6 @@
+
+ #include "base.h"
+
+-extern struct kset devices_subsys;
+-
+ #define to_sysdev(k) container_of(k, struct sys_device, kobj)
+ #define to_sysdev_attr(a) container_of(a, struct sysdev_attribute, attr)
+
+@@ -128,18 +126,17 @@ void sysdev_class_remove_file(struct sysdev_class *c,
+ }
+ EXPORT_SYMBOL_GPL(sysdev_class_remove_file);
+
+-/*
+- * declare system_subsys
+- */
+-static decl_subsys(system, &ktype_sysdev_class, NULL);
++static struct kset *system_kset;
+
+ int sysdev_class_register(struct sysdev_class * cls)
+ {
+ pr_debug("Registering sysdev class '%s'\n",
+ kobject_name(&cls->kset.kobj));
+ INIT_LIST_HEAD(&cls->drivers);
+- cls->kset.kobj.parent = &system_subsys.kobj;
+- cls->kset.kobj.kset = &system_subsys;
++ cls->kset.kobj.parent = &system_kset->kobj;
++ cls->kset.kobj.ktype = &ktype_sysdev_class;
++ cls->kset.kobj.kset = system_kset;
++ kobject_set_name(&cls->kset.kobj, cls->name);
+ return kset_register(&cls->kset);
+ }
+
+@@ -228,20 +225,15 @@ int sysdev_register(struct sys_device * sysdev)
+ if (!cls)
+ return -EINVAL;
+
++ pr_debug("Registering sys device '%s'\n", kobject_name(&sysdev->kobj));
++
+ /* Make sure the kset is set */
+ sysdev->kobj.kset = &cls->kset;
+
+- /* But make sure we point to the right type for sysfs translation */
+- sysdev->kobj.ktype = &ktype_sysdev;
+- error = kobject_set_name(&sysdev->kobj, "%s%d",
+- kobject_name(&cls->kset.kobj), sysdev->id);
+- if (error)
+- return error;
+-
+- pr_debug("Registering sys device '%s'\n", kobject_name(&sysdev->kobj));
+-
+ /* Register the object */
+- error = kobject_register(&sysdev->kobj);
++ error = kobject_init_and_add(&sysdev->kobj, &ktype_sysdev, NULL,
++ "%s%d", kobject_name(&cls->kset.kobj),
++ sysdev->id);
+
+ if (!error) {
+ struct sysdev_driver * drv;
+@@ -258,6 +250,7 @@ int sysdev_register(struct sys_device * sysdev)
+ }
+ mutex_unlock(&sysdev_drivers_lock);
+ }
++ kobject_uevent(&sysdev->kobj, KOBJ_ADD);
+ return error;
+ }
+
+@@ -272,7 +265,7 @@ void sysdev_unregister(struct sys_device * sysdev)
+ }
+ mutex_unlock(&sysdev_drivers_lock);
+
+- kobject_unregister(&sysdev->kobj);
++ kobject_put(&sysdev->kobj);
+ }
+
+
+@@ -298,8 +291,7 @@ void sysdev_shutdown(void)
+ pr_debug("Shutting Down System Devices\n");
+
+ mutex_lock(&sysdev_drivers_lock);
+- list_for_each_entry_reverse(cls, &system_subsys.list,
+- kset.kobj.entry) {
++ list_for_each_entry_reverse(cls, &system_kset->list, kset.kobj.entry) {
+ struct sys_device * sysdev;
+
+ pr_debug("Shutting down type '%s':\n",
+@@ -361,9 +353,7 @@ int sysdev_suspend(pm_message_t state)
+
+ pr_debug("Suspending System Devices\n");
+
+- list_for_each_entry_reverse(cls, &system_subsys.list,
+- kset.kobj.entry) {
+-
++ list_for_each_entry_reverse(cls, &system_kset->list, kset.kobj.entry) {
+ pr_debug("Suspending type '%s':\n",
+ kobject_name(&cls->kset.kobj));
+
+@@ -414,8 +404,7 @@ aux_driver:
+ }
+
+ /* resume other classes */
+- list_for_each_entry_continue(cls, &system_subsys.list,
+- kset.kobj.entry) {
++ list_for_each_entry_continue(cls, &system_kset->list, kset.kobj.entry) {
+ list_for_each_entry(err_dev, &cls->kset.list, kobj.entry) {
+ pr_debug(" %s\n", kobject_name(&err_dev->kobj));
+ __sysdev_resume(err_dev);
+@@ -440,7 +429,7 @@ int sysdev_resume(void)
+
+ pr_debug("Resuming System Devices\n");
+
+- list_for_each_entry(cls, &system_subsys.list, kset.kobj.entry) {
++ list_for_each_entry(cls, &system_kset->list, kset.kobj.entry) {
+ struct sys_device * sysdev;
+
+ pr_debug("Resuming type '%s':\n",
+@@ -458,8 +447,10 @@ int sysdev_resume(void)
+
+ int __init system_bus_init(void)
+ {
+- system_subsys.kobj.parent = &devices_subsys.kobj;
+- return subsystem_register(&system_subsys);
++ system_kset = kset_create_and_add("system", NULL, &devices_kset->kobj);
++ if (!system_kset)
++ return -ENOMEM;
++ return 0;
+ }
+
+ EXPORT_SYMBOL_GPL(sysdev_register);
+diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
+index 9030c37..cd03473 100644
+--- a/drivers/block/DAC960.c
++++ b/drivers/block/DAC960.c
+@@ -3455,19 +3455,12 @@ static inline bool DAC960_ProcessCompletedRequest(DAC960_Command_T *Command,
+ bool SuccessfulIO)
+ {
+ struct request *Request = Command->Request;
+- int UpToDate;
+-
+- UpToDate = 0;
+- if (SuccessfulIO)
+- UpToDate = 1;
++ int Error = SuccessfulIO ? 0 : -EIO;
+
+ pci_unmap_sg(Command->Controller->PCIDevice, Command->cmd_sglist,
+ Command->SegmentCount, Command->DmaDirection);
+
+- if (!end_that_request_first(Request, UpToDate, Command->BlockCount)) {
+- add_disk_randomness(Request->rq_disk);
+- end_that_request_last(Request, UpToDate);
+-
++ if (!__blk_end_request(Request, Error, Command->BlockCount << 9)) {
+ if (Command->Completion) {
+ complete(Command->Completion);
+ Command->Completion = NULL;
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index 4d0119e..f212285 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -105,6 +105,17 @@ config PARIDE
+ "MicroSolutions backpack protocol", "DataStor Commuter protocol"
+ etc.).
+
++config GDROM
++ tristate "SEGA Dreamcast GD-ROM drive"
++ depends on SH_DREAMCAST
++ help
++ A standard SEGA Dreamcast comes with a modified CD ROM drive called a
++ "GD-ROM" by SEGA to signify it is capable of reading special disks
++ with up to 1 GB of data. This drive will also read standard CD ROM
++ disks. Select this option to access any disks in your GD ROM drive.
++ Most users will want to say "Y" here.
++ You can also build this as a module which will be called gdrom.ko
++
+ source "drivers/block/paride/Kconfig"
+
+ config BLK_CPQ_DA
+diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
+index ad00b3d..826d123 100644
+--- a/drivers/block/aoe/aoeblk.c
++++ b/drivers/block/aoe/aoeblk.c
+@@ -15,8 +15,10 @@
+
+ static struct kmem_cache *buf_pool_cache;
+
+-static ssize_t aoedisk_show_state(struct gendisk * disk, char *page)
++static ssize_t aoedisk_show_state(struct device *dev,
++ struct device_attribute *attr, char *page)
+ {
++ struct gendisk *disk = dev_to_disk(dev);
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE,
+@@ -26,50 +28,47 @@ static ssize_t aoedisk_show_state(struct gendisk * disk, char *page)
+ (d->nopen && !(d->flags & DEVFL_UP)) ? ",closewait" : "");
+ /* I'd rather see nopen exported so we can ditch closewait */
+ }
+-static ssize_t aoedisk_show_mac(struct gendisk * disk, char *page)
++static ssize_t aoedisk_show_mac(struct device *dev,
++ struct device_attribute *attr, char *page)
+ {
++ struct gendisk *disk = dev_to_disk(dev);
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE, "%012llx\n",
+ (unsigned long long)mac_addr(d->addr));
+ }
+-static ssize_t aoedisk_show_netif(struct gendisk * disk, char *page)
++static ssize_t aoedisk_show_netif(struct device *dev,
++ struct device_attribute *attr, char *page)
+ {
++ struct gendisk *disk = dev_to_disk(dev);
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE, "%s\n", d->ifp->name);
+ }
+ /* firmware version */
+-static ssize_t aoedisk_show_fwver(struct gendisk * disk, char *page)
++static ssize_t aoedisk_show_fwver(struct device *dev,
++ struct device_attribute *attr, char *page)
+ {
++ struct gendisk *disk = dev_to_disk(dev);
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE, "0x%04x\n", (unsigned int) d->fw_ver);
+ }
+
+-static struct disk_attribute disk_attr_state = {
+- .attr = {.name = "state", .mode = S_IRUGO },
+- .show = aoedisk_show_state
+-};
+-static struct disk_attribute disk_attr_mac = {
+- .attr = {.name = "mac", .mode = S_IRUGO },
+- .show = aoedisk_show_mac
+-};
+-static struct disk_attribute disk_attr_netif = {
+- .attr = {.name = "netif", .mode = S_IRUGO },
+- .show = aoedisk_show_netif
+-};
+-static struct disk_attribute disk_attr_fwver = {
+- .attr = {.name = "firmware-version", .mode = S_IRUGO },
+- .show = aoedisk_show_fwver
++static DEVICE_ATTR(state, S_IRUGO, aoedisk_show_state, NULL);
++static DEVICE_ATTR(mac, S_IRUGO, aoedisk_show_mac, NULL);
++static DEVICE_ATTR(netif, S_IRUGO, aoedisk_show_netif, NULL);
++static struct device_attribute dev_attr_firmware_version = {
++ .attr = { .name = "firmware-version", .mode = S_IRUGO, .owner = THIS_MODULE },
++ .show = aoedisk_show_fwver,
+ };
+
+ static struct attribute *aoe_attrs[] = {
+- &disk_attr_state.attr,
+- &disk_attr_mac.attr,
+- &disk_attr_netif.attr,
+- &disk_attr_fwver.attr,
+- NULL
++ &dev_attr_state.attr,
++ &dev_attr_mac.attr,
++ &dev_attr_netif.attr,
++ &dev_attr_firmware_version.attr,
++ NULL,
+ };
+
+ static const struct attribute_group attr_group = {
+@@ -79,12 +78,12 @@ static const struct attribute_group attr_group = {
+ static int
+ aoedisk_add_sysfs(struct aoedev *d)
+ {
+- return sysfs_create_group(&d->gd->kobj, &attr_group);
++ return sysfs_create_group(&d->gd->dev.kobj, &attr_group);
+ }
+ void
+ aoedisk_rm_sysfs(struct aoedev *d)
+ {
+- sysfs_remove_group(&d->gd->kobj, &attr_group);
++ sysfs_remove_group(&d->gd->dev.kobj, &attr_group);
+ }
+
+ static int
+diff --git a/drivers/block/aoe/aoechr.c b/drivers/block/aoe/aoechr.c
+index 39e563e..d5480e3 100644
+--- a/drivers/block/aoe/aoechr.c
++++ b/drivers/block/aoe/aoechr.c
+@@ -259,9 +259,8 @@ aoechr_init(void)
+ return PTR_ERR(aoe_class);
+ }
+ for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
+- class_device_create(aoe_class, NULL,
+- MKDEV(AOE_MAJOR, chardevs[i].minor),
+- NULL, chardevs[i].name);
++ device_create(aoe_class, NULL,
++ MKDEV(AOE_MAJOR, chardevs[i].minor), chardevs[i].name);
+
+ return 0;
+ }
+@@ -272,7 +271,7 @@ aoechr_exit(void)
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
+- class_device_destroy(aoe_class, MKDEV(AOE_MAJOR, chardevs[i].minor));
++ device_destroy(aoe_class, MKDEV(AOE_MAJOR, chardevs[i].minor));
+ class_destroy(aoe_class);
+ unregister_chrdev(AOE_MAJOR, "aoechr");
+ }
+diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
+index 509b649..855ce8e 100644
+--- a/drivers/block/cciss.c
++++ b/drivers/block/cciss.c
+@@ -1187,17 +1187,6 @@ static int cciss_ioctl(struct inode *inode, struct file *filep,
+ }
+ }
+
+-static inline void complete_buffers(struct bio *bio, int status)
+-{
+- while (bio) {
+- struct bio *xbh = bio->bi_next;
+-
+- bio->bi_next = NULL;
+- bio_endio(bio, status ? 0 : -EIO);
+- bio = xbh;
+- }
+-}
+-
+ static void cciss_check_queues(ctlr_info_t *h)
+ {
+ int start_queue = h->next_to_run;
+@@ -1263,21 +1252,14 @@ static void cciss_softirq_done(struct request *rq)
+ pci_unmap_page(h->pdev, temp64.val, cmd->SG[i].Len, ddir);
+ }
+
+- complete_buffers(rq->bio, (rq->errors == 0));
+-
+- if (blk_fs_request(rq)) {
+- const int rw = rq_data_dir(rq);
+-
+- disk_stat_add(rq->rq_disk, sectors[rw], rq->nr_sectors);
+- }
+-
+ #ifdef CCISS_DEBUG
+ printk("Done with %p\n", rq);
+ #endif /* CCISS_DEBUG */
+
+- add_disk_randomness(rq->rq_disk);
++ if (blk_end_request(rq, (rq->errors == 0) ? 0 : -EIO, blk_rq_bytes(rq)))
++ BUG();
++
+ spin_lock_irqsave(&h->lock, flags);
+- end_that_request_last(rq, (rq->errors == 0));
+ cmd_free(h, cmd, 1);
+ cciss_check_queues(h);
+ spin_unlock_irqrestore(&h->lock, flags);
+@@ -2542,9 +2524,7 @@ after_error_processing:
+ resend_cciss_cmd(h, cmd);
+ return;
+ }
+- cmd->rq->data_len = 0;
+ cmd->rq->completion_data = cmd;
+- blk_add_trace_rq(cmd->rq->q, cmd->rq, BLK_TA_COMPLETE);
+ blk_complete_request(cmd->rq);
+ }
+
+diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
+index c8132d9..6919918 100644
+--- a/drivers/block/cpqarray.c
++++ b/drivers/block/cpqarray.c
+@@ -167,7 +167,6 @@ static void start_io(ctlr_info_t *h);
+
+ static inline void addQ(cmdlist_t **Qptr, cmdlist_t *c);
+ static inline cmdlist_t *removeQ(cmdlist_t **Qptr, cmdlist_t *c);
+-static inline void complete_buffers(struct bio *bio, int ok);
+ static inline void complete_command(cmdlist_t *cmd, int timeout);
+
+ static irqreturn_t do_ida_intr(int irq, void *dev_id);
+@@ -980,26 +979,13 @@ static void start_io(ctlr_info_t *h)
+ }
+ }
+
+-static inline void complete_buffers(struct bio *bio, int ok)
+-{
+- struct bio *xbh;
+-
+- while (bio) {
+- xbh = bio->bi_next;
+- bio->bi_next = NULL;
+-
+- bio_endio(bio, ok ? 0 : -EIO);
+-
+- bio = xbh;
+- }
+-}
+ /*
+ * Mark all buffers that cmd was responsible for
+ */
+ static inline void complete_command(cmdlist_t *cmd, int timeout)
+ {
+ struct request *rq = cmd->rq;
+- int ok=1;
++ int error = 0;
+ int i, ddir;
+
+ if (cmd->req.hdr.rcode & RCODE_NONFATAL &&
+@@ -1011,16 +997,17 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
+ if (cmd->req.hdr.rcode & RCODE_FATAL) {
+ printk(KERN_WARNING "Fatal error on ida/c%dd%d\n",
+ cmd->ctlr, cmd->hdr.unit);
+- ok = 0;
++ error = -EIO;
+ }
+ if (cmd->req.hdr.rcode & RCODE_INVREQ) {
+ printk(KERN_WARNING "Invalid request on ida/c%dd%d = (cmd=%x sect=%d cnt=%d sg=%d ret=%x)\n",
+ cmd->ctlr, cmd->hdr.unit, cmd->req.hdr.cmd,
+ cmd->req.hdr.blk, cmd->req.hdr.blk_cnt,
+ cmd->req.hdr.sg_cnt, cmd->req.hdr.rcode);
+- ok = 0;
++ error = -EIO;
+ }
+- if (timeout) ok = 0;
++ if (timeout)
++ error = -EIO;
+ /* unmap the DMA mapping for all the scatter gather elements */
+ if (cmd->req.hdr.cmd == IDA_READ)
+ ddir = PCI_DMA_FROMDEVICE;
+@@ -1030,18 +1017,9 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
+ pci_unmap_page(hba[cmd->ctlr]->pci_dev, cmd->req.sg[i].addr,
+ cmd->req.sg[i].size, ddir);
+
+- complete_buffers(rq->bio, ok);
+-
+- if (blk_fs_request(rq)) {
+- const int rw = rq_data_dir(rq);
+-
+- disk_stat_add(rq->rq_disk, sectors[rw], rq->nr_sectors);
+- }
+-
+- add_disk_randomness(rq->rq_disk);
+-
+ DBGPX(printk("Done with %p\n", rq););
+- end_that_request_last(rq, ok ? 1 : -EIO);
++ if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
++ BUG();
+ }
+
+ /*
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 639ed14..32c79a5 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -2287,21 +2287,19 @@ static int do_format(int drive, struct format_descr *tmp_format_req)
+ * =============================
+ */
+
+-static void floppy_end_request(struct request *req, int uptodate)
++static void floppy_end_request(struct request *req, int error)
+ {
+ unsigned int nr_sectors = current_count_sectors;
++ unsigned int drive = (unsigned long)req->rq_disk->private_data;
+
+ /* current_count_sectors can be zero if transfer failed */
+- if (!uptodate)
++ if (error)
+ nr_sectors = req->current_nr_sectors;
+- if (end_that_request_first(req, uptodate, nr_sectors))
++ if (__blk_end_request(req, error, nr_sectors << 9))
+ return;
+- add_disk_randomness(req->rq_disk);
+- floppy_off((long)req->rq_disk->private_data);
+- blkdev_dequeue_request(req);
+- end_that_request_last(req, uptodate);
+
+ /* We're done with the request */
++ floppy_off(drive);
+ current_req = NULL;
+ }
+
+@@ -2332,7 +2330,7 @@ static void request_done(int uptodate)
+
+ /* unlock chained buffers */
+ spin_lock_irqsave(q->queue_lock, flags);
+- floppy_end_request(req, 1);
++ floppy_end_request(req, 0);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ } else {
+ if (rq_data_dir(req) == WRITE) {
+@@ -2346,7 +2344,7 @@ static void request_done(int uptodate)
+ DRWE->last_error_generation = DRS->generation;
+ }
+ spin_lock_irqsave(q->queue_lock, flags);
+- floppy_end_request(req, 0);
++ floppy_end_request(req, -EIO);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ }
+ }
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b4c0888..ae31060 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -100,17 +100,15 @@ static const char *nbdcmd_to_ascii(int cmd)
+
+ static void nbd_end_request(struct request *req)
+ {
+- int uptodate = (req->errors == 0) ? 1 : 0;
++ int error = req->errors ? -EIO : 0;
+ struct request_queue *q = req->q;
+ unsigned long flags;
+
+ dprintk(DBG_BLKDEV, "%s: request %p: %s\n", req->rq_disk->disk_name,
+- req, uptodate? "done": "failed");
++ req, error ? "failed" : "done");
+
+ spin_lock_irqsave(q->queue_lock, flags);
+- if (!end_that_request_first(req, uptodate, req->nr_sectors)) {
+- end_that_request_last(req, uptodate);
+- }
++ __blk_end_request(req, error, req->nr_sectors << 9);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ }
+
+@@ -375,14 +373,17 @@ harderror:
+ return NULL;
+ }
+
+-static ssize_t pid_show(struct gendisk *disk, char *page)
++static ssize_t pid_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
+ {
+- return sprintf(page, "%ld\n",
++ struct gendisk *disk = dev_to_disk(dev);
++
++ return sprintf(buf, "%ld\n",
+ (long) ((struct nbd_device *)disk->private_data)->pid);
+ }
+
+-static struct disk_attribute pid_attr = {
+- .attr = { .name = "pid", .mode = S_IRUGO },
++static struct device_attribute pid_attr = {
++ .attr = { .name = "pid", .mode = S_IRUGO, .owner = THIS_MODULE },
+ .show = pid_show,
+ };
+
+@@ -394,7 +395,7 @@ static int nbd_do_it(struct nbd_device *lo)
+ BUG_ON(lo->magic != LO_MAGIC);
+
+ lo->pid = current->pid;
+- ret = sysfs_create_file(&lo->disk->kobj, &pid_attr.attr);
++ ret = sysfs_create_file(&lo->disk->dev.kobj, &pid_attr.attr);
+ if (ret) {
+ printk(KERN_ERR "nbd: sysfs_create_file failed!");
+ return ret;
+@@ -403,7 +404,7 @@ static int nbd_do_it(struct nbd_device *lo)
+ while ((req = nbd_read_stat(lo)) != NULL)
+ nbd_end_request(req);
+
+- sysfs_remove_file(&lo->disk->kobj, &pid_attr.attr);
++ sysfs_remove_file(&lo->disk->dev.kobj, &pid_attr.attr);
+ return 0;
+ }
+
+diff --git a/drivers/block/paride/pg.c b/drivers/block/paride/pg.c
+index d89e7d3..ab86e23 100644
+--- a/drivers/block/paride/pg.c
++++ b/drivers/block/paride/pg.c
+@@ -676,8 +676,8 @@ static int __init pg_init(void)
+ for (unit = 0; unit < PG_UNITS; unit++) {
+ struct pg *dev = &devices[unit];
+ if (dev->present)
+- class_device_create(pg_class, NULL, MKDEV(major, unit),
+- NULL, "pg%u", unit);
++ device_create(pg_class, NULL, MKDEV(major, unit),
++ "pg%u", unit);
+ }
+ err = 0;
+ goto out;
+@@ -695,7 +695,7 @@ static void __exit pg_exit(void)
+ for (unit = 0; unit < PG_UNITS; unit++) {
+ struct pg *dev = &devices[unit];
+ if (dev->present)
+- class_device_destroy(pg_class, MKDEV(major, unit));
++ device_destroy(pg_class, MKDEV(major, unit));
+ }
+ class_destroy(pg_class);
+ unregister_chrdev(major, name);
+diff --git a/drivers/block/paride/pt.c b/drivers/block/paride/pt.c
+index b91accf..76096ca 100644
+--- a/drivers/block/paride/pt.c
++++ b/drivers/block/paride/pt.c
+@@ -972,10 +972,10 @@ static int __init pt_init(void)
+
+ for (unit = 0; unit < PT_UNITS; unit++)
+ if (pt[unit].present) {
+- class_device_create(pt_class, NULL, MKDEV(major, unit),
+- NULL, "pt%d", unit);
+- class_device_create(pt_class, NULL, MKDEV(major, unit + 128),
+- NULL, "pt%dn", unit);
++ device_create(pt_class, NULL, MKDEV(major, unit),
++ "pt%d", unit);
++ device_create(pt_class, NULL, MKDEV(major, unit + 128),
++ "pt%dn", unit);
+ }
+ goto out;
+
+@@ -990,8 +990,8 @@ static void __exit pt_exit(void)
+ int unit;
+ for (unit = 0; unit < PT_UNITS; unit++)
+ if (pt[unit].present) {
+- class_device_destroy(pt_class, MKDEV(major, unit));
+- class_device_destroy(pt_class, MKDEV(major, unit + 128));
++ device_destroy(pt_class, MKDEV(major, unit));
++ device_destroy(pt_class, MKDEV(major, unit + 128));
+ }
+ class_destroy(pt_class);
+ unregister_chrdev(major, name);
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index 3535ef8..e9de171 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -110,17 +110,18 @@ static struct pktcdvd_kobj* pkt_kobj_create(struct pktcdvd_device *pd,
+ struct kobj_type* ktype)
+ {
+ struct pktcdvd_kobj *p;
++ int error;
++
+ p = kzalloc(sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return NULL;
+- kobject_set_name(&p->kobj, "%s", name);
+- p->kobj.parent = parent;
+- p->kobj.ktype = ktype;
+ p->pd = pd;
+- if (kobject_register(&p->kobj) != 0) {
++ error = kobject_init_and_add(&p->kobj, ktype, parent, "%s", name);
++ if (error) {
+ kobject_put(&p->kobj);
+ return NULL;
+ }
++ kobject_uevent(&p->kobj, KOBJ_ADD);
+ return p;
+ }
+ /*
+@@ -129,7 +130,7 @@ static struct pktcdvd_kobj* pkt_kobj_create(struct pktcdvd_device *pd,
+ static void pkt_kobj_remove(struct pktcdvd_kobj *p)
+ {
+ if (p)
+- kobject_unregister(&p->kobj);
++ kobject_put(&p->kobj);
+ }
+ /*
+ * default release function for pktcdvd kernel objects.
+@@ -301,18 +302,16 @@ static struct kobj_type kobj_pkt_type_wqueue = {
+ static void pkt_sysfs_dev_new(struct pktcdvd_device *pd)
+ {
+ if (class_pktcdvd) {
+- pd->clsdev = class_device_create(class_pktcdvd,
+- NULL, pd->pkt_dev,
+- NULL, "%s", pd->name);
+- if (IS_ERR(pd->clsdev))
+- pd->clsdev = NULL;
++ pd->dev = device_create(class_pktcdvd, NULL, pd->pkt_dev, "%s", pd->name);
++ if (IS_ERR(pd->dev))
++ pd->dev = NULL;
+ }
+- if (pd->clsdev) {
++ if (pd->dev) {
+ pd->kobj_stat = pkt_kobj_create(pd, "stat",
+- &pd->clsdev->kobj,
++ &pd->dev->kobj,
+ &kobj_pkt_type_stat);
+ pd->kobj_wqueue = pkt_kobj_create(pd, "write_queue",
+- &pd->clsdev->kobj,
++ &pd->dev->kobj,
+ &kobj_pkt_type_wqueue);
+ }
+ }
+@@ -322,7 +321,7 @@ static void pkt_sysfs_dev_remove(struct pktcdvd_device *pd)
+ pkt_kobj_remove(pd->kobj_stat);
+ pkt_kobj_remove(pd->kobj_wqueue);
+ if (class_pktcdvd)
+- class_device_destroy(class_pktcdvd, pd->pkt_dev);
++ device_destroy(class_pktcdvd, pd->pkt_dev);
+ }
+
+
+diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
+index e354bfc..7483f94 100644
+--- a/drivers/block/ps3disk.c
++++ b/drivers/block/ps3disk.c
+@@ -229,7 +229,7 @@ static irqreturn_t ps3disk_interrupt(int irq, void *data)
+ struct ps3_storage_device *dev = data;
+ struct ps3disk_private *priv;
+ struct request *req;
+- int res, read, uptodate;
++ int res, read, error;
+ u64 tag, status;
+ unsigned long num_sectors;
+ const char *op;
+@@ -270,21 +270,17 @@ static irqreturn_t ps3disk_interrupt(int irq, void *data)
+ if (status) {
+ dev_dbg(&dev->sbd.core, "%s:%u: %s failed 0x%lx\n", __func__,
+ __LINE__, op, status);
+- uptodate = 0;
++ error = -EIO;
+ } else {
+ dev_dbg(&dev->sbd.core, "%s:%u: %s completed\n", __func__,
+ __LINE__, op);
+- uptodate = 1;
++ error = 0;
+ if (read)
+ ps3disk_scatter_gather(dev, req, 0);
+ }
+
+ spin_lock(&priv->lock);
+- if (!end_that_request_first(req, uptodate, num_sectors)) {
+- add_disk_randomness(req->rq_disk);
+- blkdev_dequeue_request(req);
+- end_that_request_last(req, uptodate);
+- }
++ __blk_end_request(req, error, num_sectors << 9);
+ priv->req = NULL;
+ ps3disk_do_request(dev, priv->queue);
+ spin_unlock(&priv->lock);
+diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
+index fac4c6c..66e3015 100644
+--- a/drivers/block/sunvdc.c
++++ b/drivers/block/sunvdc.c
+@@ -212,12 +212,9 @@ static void vdc_end_special(struct vdc_port *port, struct vio_disk_desc *desc)
+ vdc_finish(&port->vio, -err, WAITING_FOR_GEN_CMD);
+ }
+
+-static void vdc_end_request(struct request *req, int uptodate, int num_sectors)
++static void vdc_end_request(struct request *req, int error, int num_sectors)
+ {
+- if (end_that_request_first(req, uptodate, num_sectors))
+- return;
+- add_disk_randomness(req->rq_disk);
+- end_that_request_last(req, uptodate);
++ __blk_end_request(req, error, num_sectors << 9);
+ }
+
+ static void vdc_end_one(struct vdc_port *port, struct vio_dring_state *dr,
+@@ -242,7 +239,7 @@ static void vdc_end_one(struct vdc_port *port, struct vio_dring_state *dr,
+
+ rqe->req = NULL;
+
+- vdc_end_request(req, !desc->status, desc->size >> 9);
++ vdc_end_request(req, (desc->status ? -EIO : 0), desc->size >> 9);
+
+ if (blk_queue_stopped(port->disk->queue))
+ blk_start_queue(port->disk->queue);
+@@ -456,7 +453,7 @@ static void do_vdc_request(struct request_queue *q)
+
+ blkdev_dequeue_request(req);
+ if (__send_request(req) < 0)
+- vdc_end_request(req, 0, req->hard_nr_sectors);
++ vdc_end_request(req, -EIO, req->hard_nr_sectors);
+ }
+ }
+
+diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
+index 52dc5e1..cd5674b 100644
+--- a/drivers/block/sx8.c
++++ b/drivers/block/sx8.c
+@@ -744,16 +744,14 @@ static unsigned int carm_fill_get_fw_ver(struct carm_host *host,
+
+ static inline void carm_end_request_queued(struct carm_host *host,
+ struct carm_request *crq,
+- int uptodate)
++ int error)
+ {
+ struct request *req = crq->rq;
+ int rc;
+
+- rc = end_that_request_first(req, uptodate, req->hard_nr_sectors);
++ rc = __blk_end_request(req, error, blk_rq_bytes(req));
+ assert(rc == 0);
+
+- end_that_request_last(req, uptodate);
+-
+ rc = carm_put_request(host, crq);
+ assert(rc == 0);
+ }
+@@ -793,9 +791,9 @@ static inline void carm_round_robin(struct carm_host *host)
+ }
+
+ static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq,
+- int is_ok)
++ int error)
+ {
+- carm_end_request_queued(host, crq, is_ok);
++ carm_end_request_queued(host, crq, error);
+ if (max_queue == 1)
+ carm_round_robin(host);
+ else if ((host->n_msgs <= CARM_MSG_LOW_WATER) &&
+@@ -873,14 +871,14 @@ queue_one_request:
+ sg = &crq->sg[0];
+ n_elem = blk_rq_map_sg(q, rq, sg);
+ if (n_elem <= 0) {
+- carm_end_rq(host, crq, 0);
++ carm_end_rq(host, crq, -EIO);
+ return; /* request with no s/g entries? */
+ }
+
+ /* map scatterlist to PCI bus addresses */
+ n_elem = pci_map_sg(host->pdev, sg, n_elem, pci_dir);
+ if (n_elem <= 0) {
+- carm_end_rq(host, crq, 0);
++ carm_end_rq(host, crq, -EIO);
+ return; /* request with no s/g entries? */
+ }
+ crq->n_elem = n_elem;
+@@ -941,7 +939,7 @@ queue_one_request:
+
+ static void carm_handle_array_info(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+- int is_ok)
++ int error)
+ {
+ struct carm_port *port;
+ u8 *msg_data = mem + sizeof(struct carm_array_info);
+@@ -952,9 +950,9 @@ static void carm_handle_array_info(struct carm_host *host,
+
+ DPRINTK("ENTER\n");
+
+- carm_end_rq(host, crq, is_ok);
++ carm_end_rq(host, crq, error);
+
+- if (!is_ok)
++ if (error)
+ goto out;
+ if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST)
+ goto out;
+@@ -1001,7 +999,7 @@ out:
+
+ static void carm_handle_scan_chan(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+- int is_ok)
++ int error)
+ {
+ u8 *msg_data = mem + IOC_SCAN_CHAN_OFFSET;
+ unsigned int i, dev_count = 0;
+@@ -1009,9 +1007,9 @@ static void carm_handle_scan_chan(struct carm_host *host,
+
+ DPRINTK("ENTER\n");
+
+- carm_end_rq(host, crq, is_ok);
++ carm_end_rq(host, crq, error);
+
+- if (!is_ok) {
++ if (error) {
+ new_state = HST_ERROR;
+ goto out;
+ }
+@@ -1033,23 +1031,23 @@ out:
+ }
+
+ static void carm_handle_generic(struct carm_host *host,
+- struct carm_request *crq, int is_ok,
++ struct carm_request *crq, int error,
+ int cur_state, int next_state)
+ {
+ DPRINTK("ENTER\n");
+
+- carm_end_rq(host, crq, is_ok);
++ carm_end_rq(host, crq, error);
+
+ assert(host->state == cur_state);
+- if (is_ok)
+- host->state = next_state;
+- else
++ if (error)
+ host->state = HST_ERROR;
++ else
++ host->state = next_state;
+ schedule_work(&host->fsm_task);
+ }
+
+ static inline void carm_handle_rw(struct carm_host *host,
+- struct carm_request *crq, int is_ok)
++ struct carm_request *crq, int error)
+ {
+ int pci_dir;
+
+@@ -1062,7 +1060,7 @@ static inline void carm_handle_rw(struct carm_host *host,
+
+ pci_unmap_sg(host->pdev, &crq->sg[0], crq->n_elem, pci_dir);
+
+- carm_end_rq(host, crq, is_ok);
++ carm_end_rq(host, crq, error);
+ }
+
+ static inline void carm_handle_resp(struct carm_host *host,
+@@ -1071,7 +1069,7 @@ static inline void carm_handle_resp(struct carm_host *host,
+ u32 handle = le32_to_cpu(ret_handle_le);
+ unsigned int msg_idx;
+ struct carm_request *crq;
+- int is_ok = (status == RMSG_OK);
++ int error = (status == RMSG_OK) ? 0 : -EIO;
+ u8 *mem;
+
+ VPRINTK("ENTER, handle == 0x%x\n", handle);
+@@ -1090,7 +1088,7 @@ static inline void carm_handle_resp(struct carm_host *host,
+ /* fast path */
+ if (likely(crq->msg_type == CARM_MSG_READ ||
+ crq->msg_type == CARM_MSG_WRITE)) {
+- carm_handle_rw(host, crq, is_ok);
++ carm_handle_rw(host, crq, error);
+ return;
+ }
+
+@@ -1100,7 +1098,7 @@ static inline void carm_handle_resp(struct carm_host *host,
+ case CARM_MSG_IOCTL: {
+ switch (crq->msg_subtype) {
+ case CARM_IOC_SCAN_CHAN:
+- carm_handle_scan_chan(host, crq, mem, is_ok);
++ carm_handle_scan_chan(host, crq, mem, error);
+ break;
+ default:
+ /* unknown / invalid response */
+@@ -1112,21 +1110,21 @@ static inline void carm_handle_resp(struct carm_host *host,
+ case CARM_MSG_MISC: {
+ switch (crq->msg_subtype) {
+ case MISC_ALLOC_MEM:
+- carm_handle_generic(host, crq, is_ok,
++ carm_handle_generic(host, crq, error,
+ HST_ALLOC_BUF, HST_SYNC_TIME);
+ break;
+ case MISC_SET_TIME:
+- carm_handle_generic(host, crq, is_ok,
++ carm_handle_generic(host, crq, error,
+ HST_SYNC_TIME, HST_GET_FW_VER);
+ break;
+ case MISC_GET_FW_VER: {
+ struct carm_fw_ver *ver = (struct carm_fw_ver *)
+ mem + sizeof(struct carm_msg_get_fw_ver);
+- if (is_ok) {
++ if (!error) {
+ host->fw_ver = le32_to_cpu(ver->version);
+ host->flags |= (ver->features & FL_FW_VER_MASK);
+ }
+- carm_handle_generic(host, crq, is_ok,
++ carm_handle_generic(host, crq, error,
+ HST_GET_FW_VER, HST_PORT_SCAN);
+ break;
+ }
+@@ -1140,7 +1138,7 @@ static inline void carm_handle_resp(struct carm_host *host,
+ case CARM_MSG_ARRAY: {
+ switch (crq->msg_subtype) {
+ case CARM_ARRAY_INFO:
+- carm_handle_array_info(host, crq, mem, is_ok);
++ carm_handle_array_info(host, crq, mem, error);
+ break;
+ default:
+ /* unknown / invalid response */
+@@ -1159,7 +1157,7 @@ static inline void carm_handle_resp(struct carm_host *host,
+ err_out:
+ printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n",
+ pci_name(host->pdev), crq->msg_type, crq->msg_subtype);
+- carm_end_rq(host, crq, 0);
++ carm_end_rq(host, crq, -EIO);
+ }
+
+ static inline void carm_handle_responses(struct carm_host *host)
+diff --git a/drivers/block/ub.c b/drivers/block/ub.c
+index 08e909d..c6179d6 100644
+--- a/drivers/block/ub.c
++++ b/drivers/block/ub.c
+@@ -808,16 +808,16 @@ static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
+
+ static void ub_end_rq(struct request *rq, unsigned int scsi_status)
+ {
+- int uptodate;
++ int error;
+
+ if (scsi_status == 0) {
+- uptodate = 1;
++ error = 0;
+ } else {
+- uptodate = 0;
++ error = -EIO;
+ rq->errors = scsi_status;
+ }
+- end_that_request_first(rq, uptodate, rq->hard_nr_sectors);
+- end_that_request_last(rq, uptodate);
++ if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
++ BUG();
+ }
+
+ static int ub_rw_cmd_retry(struct ub_dev *sc, struct ub_lun *lun,
+diff --git a/drivers/block/viodasd.c b/drivers/block/viodasd.c
+index ab5d404..9e61fca 100644
+--- a/drivers/block/viodasd.c
++++ b/drivers/block/viodasd.c
+@@ -229,13 +229,10 @@ static struct block_device_operations viodasd_fops = {
+ /*
+ * End a request
+ */
+-static void viodasd_end_request(struct request *req, int uptodate,
++static void viodasd_end_request(struct request *req, int error,
+ int num_sectors)
+ {
+- if (end_that_request_first(req, uptodate, num_sectors))
+- return;
+- add_disk_randomness(req->rq_disk);
+- end_that_request_last(req, uptodate);
++ __blk_end_request(req, error, num_sectors << 9);
+ }
+
+ /*
+@@ -374,12 +371,12 @@ static void do_viodasd_request(struct request_queue *q)
+ blkdev_dequeue_request(req);
+ /* check that request contains a valid command */
+ if (!blk_fs_request(req)) {
+- viodasd_end_request(req, 0, req->hard_nr_sectors);
++ viodasd_end_request(req, -EIO, req->hard_nr_sectors);
+ continue;
+ }
+ /* Try sending the request */
+ if (send_request(req) != 0)
+- viodasd_end_request(req, 0, req->hard_nr_sectors);
++ viodasd_end_request(req, -EIO, req->hard_nr_sectors);
+ }
+ }
+
+@@ -591,7 +588,7 @@ static int viodasd_handle_read_write(struct vioblocklpevent *bevent)
+ num_req_outstanding--;
+ spin_unlock_irqrestore(&viodasd_spinlock, irq_flags);
+
+- error = event->xRc != HvLpEvent_Rc_Good;
++ error = (event->xRc == HvLpEvent_Rc_Good) ? 0 : -EIO;
+ if (error) {
+ const struct vio_error_entry *err;
+ err = vio_lookup_rc(viodasd_err_table, bevent->sub_result);
+@@ -601,7 +598,7 @@ static int viodasd_handle_read_write(struct vioblocklpevent *bevent)
+ }
+ qlock = req->q->queue_lock;
+ spin_lock_irqsave(qlock, irq_flags);
+- viodasd_end_request(req, !error, num_sect);
++ viodasd_end_request(req, error, num_sect);
+ spin_unlock_irqrestore(qlock, irq_flags);
+
+ /* Finally, try to get more requests off of this device's queue */
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 2bdebcb..8afce67 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -452,7 +452,7 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ RING_IDX i, rp;
+ unsigned long flags;
+ struct blkfront_info *info = (struct blkfront_info *)dev_id;
+- int uptodate;
++ int error;
+
+ spin_lock_irqsave(&blkif_io_lock, flags);
+
+@@ -477,13 +477,13 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+
+ add_id_to_freelist(info, id);
+
+- uptodate = (bret->status == BLKIF_RSP_OKAY);
++ error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
+ switch (bret->operation) {
+ case BLKIF_OP_WRITE_BARRIER:
+ if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+ printk(KERN_WARNING "blkfront: %s: write barrier op failed\n",
+ info->gd->disk_name);
+- uptodate = -EOPNOTSUPP;
++ error = -EOPNOTSUPP;
+ info->feature_barrier = 0;
+ xlvbd_barrier(info);
+ }
+@@ -494,10 +494,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
+ "request: %x\n", bret->status);
+
+- ret = end_that_request_first(req, uptodate,
+- req->hard_nr_sectors);
++ ret = __blk_end_request(req, error, blk_rq_bytes(req));
+ BUG_ON(ret);
+- end_that_request_last(req, uptodate);
+ break;
+ default:
+ BUG();
+diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c
+index 82effce..78ebfff 100644
+--- a/drivers/block/xsysace.c
++++ b/drivers/block/xsysace.c
+@@ -483,7 +483,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
+ u32 status;
+ u16 val;
+ int count;
+- int i;
+
+ #if defined(DEBUG)
+ dev_dbg(ace->dev, "fsm_state=%i, id_req_count=%i\n",
+@@ -688,7 +687,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
+ }
+
+ /* Transfer the next buffer */
+- i = 16;
+ if (ace->fsm_task == ACE_TASK_WRITE)
+ ace->reg_ops->dataout(ace);
+ else
+@@ -702,8 +700,8 @@ static void ace_fsm_dostate(struct ace_device *ace)
+ }
+
+ /* bio finished; is there another one? */
+- i = ace->req->current_nr_sectors;
+- if (end_that_request_first(ace->req, 1, i)) {
++ if (__blk_end_request(ace->req, 0,
++ blk_rq_cur_bytes(ace->req))) {
+ /* dev_dbg(ace->dev, "next block; h=%li c=%i\n",
+ * ace->req->hard_nr_sectors,
+ * ace->req->current_nr_sectors);
+@@ -718,9 +716,6 @@ static void ace_fsm_dostate(struct ace_device *ace)
+ break;
+
+ case ACE_FSM_STATE_REQ_COMPLETE:
+- /* Complete the block request */
+- blkdev_dequeue_request(ace->req);
+- end_that_request_last(ace->req, 1);
+ ace->req = NULL;
+
+ /* Finished request; go to idle state */
+diff --git a/drivers/cdrom/Makefile b/drivers/cdrom/Makefile
+index 774c180..ecf85fd 100644
+--- a/drivers/cdrom/Makefile
++++ b/drivers/cdrom/Makefile
+@@ -11,3 +11,4 @@ obj-$(CONFIG_PARIDE_PCD) += cdrom.o
+ obj-$(CONFIG_CDROM_PKTCDVD) += cdrom.o
+
+ obj-$(CONFIG_VIOCD) += viocd.o cdrom.o
++obj-$(CONFIG_GDROM) += gdrom.o cdrom.o
+diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
+new file mode 100644
+index 0000000..4e2bbcc
+--- /dev/null
++++ b/drivers/cdrom/gdrom.c
+@@ -0,0 +1,867 @@
++/* GD ROM driver for the SEGA Dreamcast
++ * copyright Adrian McMenamin, 2007
++ * With thanks to Marcus Comstedt and Nathan Keynes
++ * for work in reversing PIO and DMA
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License along
++ * with this program; if not, write to the Free Software Foundation, Inc.,
++ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/fs.h>
++#include <linux/kernel.h>
++#include <linux/list.h>
++#include <linux/slab.h>
++#include <linux/dma-mapping.h>
++#include <linux/cdrom.h>
++#include <linux/genhd.h>
++#include <linux/bio.h>
++#include <linux/blkdev.h>
++#include <linux/interrupt.h>
++#include <linux/device.h>
++#include <linux/wait.h>
++#include <linux/workqueue.h>
++#include <linux/platform_device.h>
++#include <scsi/scsi.h>
++#include <asm/io.h>
++#include <asm/dma.h>
++#include <asm/delay.h>
++#include <asm/mach/dma.h>
++#include <asm/mach/sysasic.h>
++
++#define GDROM_DEV_NAME "gdrom"
++#define GD_SESSION_OFFSET 150
++
++/* GD Rom commands */
++#define GDROM_COM_SOFTRESET 0x08
++#define GDROM_COM_EXECDIAG 0x90
++#define GDROM_COM_PACKET 0xA0
++#define GDROM_COM_IDDEV 0xA1
++
++/* GD Rom registers */
++#define GDROM_BASE_REG 0xA05F7000
++#define GDROM_ALTSTATUS_REG (GDROM_BASE_REG + 0x18)
++#define GDROM_DATA_REG (GDROM_BASE_REG + 0x80)
++#define GDROM_ERROR_REG (GDROM_BASE_REG + 0x84)
++#define GDROM_INTSEC_REG (GDROM_BASE_REG + 0x88)
++#define GDROM_SECNUM_REG (GDROM_BASE_REG + 0x8C)
++#define GDROM_BCL_REG (GDROM_BASE_REG + 0x90)
++#define GDROM_BCH_REG (GDROM_BASE_REG + 0x94)
++#define GDROM_DSEL_REG (GDROM_BASE_REG + 0x98)
++#define GDROM_STATUSCOMMAND_REG (GDROM_BASE_REG + 0x9C)
++#define GDROM_RESET_REG (GDROM_BASE_REG + 0x4E4)
++
++#define GDROM_DMA_STARTADDR_REG (GDROM_BASE_REG + 0x404)
++#define GDROM_DMA_LENGTH_REG (GDROM_BASE_REG + 0x408)
++#define GDROM_DMA_DIRECTION_REG (GDROM_BASE_REG + 0x40C)
++#define GDROM_DMA_ENABLE_REG (GDROM_BASE_REG + 0x414)
++#define GDROM_DMA_STATUS_REG (GDROM_BASE_REG + 0x418)
++#define GDROM_DMA_WAIT_REG (GDROM_BASE_REG + 0x4A0)
++#define GDROM_DMA_ACCESS_CTRL_REG (GDROM_BASE_REG + 0x4B8)
++
++#define GDROM_HARD_SECTOR 2048
++#define BLOCK_LAYER_SECTOR 512
++#define GD_TO_BLK 4
++
++#define GDROM_DEFAULT_TIMEOUT (HZ * 7)
++
++static const struct {
++ int sense_key;
++ const char * const text;
++} sense_texts[] = {
++ {NO_SENSE, "OK"},
++ {RECOVERED_ERROR, "Recovered from error"},
++ {NOT_READY, "Device not ready"},
++ {MEDIUM_ERROR, "Disk not ready"},
++ {HARDWARE_ERROR, "Hardware error"},
++ {ILLEGAL_REQUEST, "Command has failed"},
++ {UNIT_ATTENTION, "Device needs attention - disk may have been changed"},
++ {DATA_PROTECT, "Data protection error"},
++ {ABORTED_COMMAND, "Command aborted"},
++};
++
++static struct platform_device *pd;
++static int gdrom_major;
++static DECLARE_WAIT_QUEUE_HEAD(command_queue);
++static DECLARE_WAIT_QUEUE_HEAD(request_queue);
++
++static DEFINE_SPINLOCK(gdrom_lock);
++static void gdrom_readdisk_dma(struct work_struct *work);
++static DECLARE_WORK(work, gdrom_readdisk_dma);
++static LIST_HEAD(gdrom_deferred);
++
++struct gdromtoc {
++ unsigned int entry[99];
++ unsigned int first, last;
++ unsigned int leadout;
++};
++
++static struct gdrom_unit {
++ struct gendisk *disk;
++ struct cdrom_device_info *cd_info;
++ int status;
++ int pending;
++ int transfer;
++ char disk_type;
++ struct gdromtoc *toc;
++ struct request_queue *gdrom_rq;
++} gd;
++
++struct gdrom_id {
++ char mid;
++ char modid;
++ char verid;
++ char padA[13];
++ char mname[16];
++ char modname[16];
++ char firmver[16];
++ char padB[16];
++};
++
++static int gdrom_getsense(short *bufstring);
++static int gdrom_packetcommand(struct cdrom_device_info *cd_info,
++ struct packet_command *command);
++static int gdrom_hardreset(struct cdrom_device_info *cd_info);
++
++static bool gdrom_is_busy(void)
++{
++ return (ctrl_inb(GDROM_ALTSTATUS_REG) & 0x80) != 0;
++}
++
++static bool gdrom_data_request(void)
++{
++ return (ctrl_inb(GDROM_ALTSTATUS_REG) & 0x88) == 8;
++}
++
++static bool gdrom_wait_clrbusy(void)
++{
++ unsigned long timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
++ while ((ctrl_inb(GDROM_ALTSTATUS_REG) & 0x80) &&
++ (time_before(jiffies, timeout)))
++ cpu_relax();
++ return time_before(jiffies, timeout + 1);
++}
++
++static bool gdrom_wait_busy_sleeps(void)
++{
++ unsigned long timeout;
++ /* Wait to get busy first */
++ timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
++ while (!gdrom_is_busy() && time_before(jiffies, timeout))
++ cpu_relax();
++ /* Now wait for busy to clear */
++ return gdrom_wait_clrbusy();
++}
++
++static void gdrom_identifydevice(void *buf)
++{
++ int c;
++ short *data = buf;
++ /* If the device won't clear it has probably
++ * been hit by a serious failure - but we'll
++ * try to return a sense key even so */
++ if (!gdrom_wait_clrbusy()) {
++ gdrom_getsense(NULL);
++ return;
++ }
++ ctrl_outb(GDROM_COM_IDDEV, GDROM_STATUSCOMMAND_REG);
++ if (!gdrom_wait_busy_sleeps()) {
++ gdrom_getsense(NULL);
++ return;
++ }
++ /* now read in the data */
++ for (c = 0; c < 40; c++)
++ data[c] = ctrl_inw(GDROM_DATA_REG);
++}
++
++static void gdrom_spicommand(void *spi_string, int buflen)
++{
++ short *cmd = spi_string;
++ unsigned long timeout;
++
++ /* ensure IRQ_WAIT is set */
++ ctrl_outb(0x08, GDROM_ALTSTATUS_REG);
++ /* specify how many bytes we expect back */
++ ctrl_outb(buflen & 0xFF, GDROM_BCL_REG);
++ ctrl_outb((buflen >> 8) & 0xFF, GDROM_BCH_REG);
++ /* other parameters */
++ ctrl_outb(0, GDROM_INTSEC_REG);
++ ctrl_outb(0, GDROM_SECNUM_REG);
++ ctrl_outb(0, GDROM_ERROR_REG);
++ /* Wait until we can go */
++ if (!gdrom_wait_clrbusy()) {
++ gdrom_getsense(NULL);
++ return;
++ }
++ timeout = jiffies + GDROM_DEFAULT_TIMEOUT;
++ ctrl_outb(GDROM_COM_PACKET, GDROM_STATUSCOMMAND_REG);
++ while (!gdrom_data_request() && time_before(jiffies, timeout))
++ cpu_relax();
++ if (!time_before(jiffies, timeout + 1)) {
++ gdrom_getsense(NULL);
++ return;
++ }
++ outsw(PHYSADDR(GDROM_DATA_REG), cmd, 6);
++}
++
++
++/* gdrom_command_executediagnostic:
++ * Used to probe for presence of working GDROM
++ * Restarts GDROM device and then applies standard ATA 3
++ * Execute Diagnostic Command: a return of '1' indicates device 0
++ * present and device 1 absent
++ */
++static char gdrom_execute_diagnostic(void)
++{
++ gdrom_hardreset(gd.cd_info);
++ if (!gdrom_wait_clrbusy())
++ return 0;
++ ctrl_outb(GDROM_COM_EXECDIAG, GDROM_STATUSCOMMAND_REG);
++ if (!gdrom_wait_busy_sleeps())
++ return 0;
++ return ctrl_inb(GDROM_ERROR_REG);
++}
++
++/*
++ * Prepare disk command
++ * byte 0 = 0x70
++ * byte 1 = 0x1f
++ */
++static int gdrom_preparedisk_cmd(void)
++{
++ struct packet_command *spin_command;
++ spin_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
++ if (!spin_command)
++ return -ENOMEM;
++ spin_command->cmd[0] = 0x70;
++ spin_command->cmd[2] = 0x1f;
++ spin_command->buflen = 0;
++ gd.pending = 1;
++ gdrom_packetcommand(gd.cd_info, spin_command);
++ /* 60 second timeout */
++ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
++ GDROM_DEFAULT_TIMEOUT);
++ gd.pending = 0;
++ kfree(spin_command);
++ if (gd.status & 0x01) {
++ /* log an error */
++ gdrom_getsense(NULL);
++ return -EIO;
++ }
++ return 0;
++}
++
++/*
++ * Read TOC command
++ * byte 0 = 0x14
++ * byte 1 = session
++ * byte 3 = sizeof TOC >> 8 ie upper byte
++ * byte 4 = sizeof TOC & 0xff ie lower byte
++ */
++static int gdrom_readtoc_cmd(struct gdromtoc *toc, int session)
++{
++ int tocsize;
++ struct packet_command *toc_command;
++ int err = 0;
++
++ toc_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
++ if (!toc_command)
++ return -ENOMEM;
++ tocsize = sizeof(struct gdromtoc);
++ toc_command->cmd[0] = 0x14;
++ toc_command->cmd[1] = session;
++ toc_command->cmd[3] = tocsize >> 8;
++ toc_command->cmd[4] = tocsize & 0xff;
++ toc_command->buflen = tocsize;
++ if (gd.pending) {
++ err = -EBUSY;
++ goto cleanup_readtoc_final;
++ }
++ gd.pending = 1;
++ gdrom_packetcommand(gd.cd_info, toc_command);
++ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
++ GDROM_DEFAULT_TIMEOUT);
++ if (gd.pending) {
++ err = -EINVAL;
++ goto cleanup_readtoc;
++ }
++ insw(PHYSADDR(GDROM_DATA_REG), toc, tocsize/2);
++ if (gd.status & 0x01)
++ err = -EINVAL;
++
++cleanup_readtoc:
++ gd.pending = 0;
++cleanup_readtoc_final:
++ kfree(toc_command);
++ return err;
++}
++
++/* TOC helpers */
++static int get_entry_lba(int track)
++{
++ return (cpu_to_be32(track & 0xffffff00) - GD_SESSION_OFFSET);
++}
++
++static int get_entry_q_ctrl(int track)
++{
++ return (track & 0x000000f0) >> 4;
++}
++
++static int get_entry_track(int track)
++{
++ return (track & 0x0000ff00) >> 8;
++}
++
++static int gdrom_get_last_session(struct cdrom_device_info *cd_info,
++ struct cdrom_multisession *ms_info)
++{
++ int fentry, lentry, track, data, tocuse, err;
++ if (!gd.toc)
++ return -ENOMEM;
++ tocuse = 1;
++ /* Check if GD-ROM */
++ err = gdrom_readtoc_cmd(gd.toc, 1);
++ /* Not a GD-ROM so check if standard CD-ROM */
++ if (err) {
++ tocuse = 0;
++ err = gdrom_readtoc_cmd(gd.toc, 0);
++ if (err) {
++ printk(KERN_INFO "GDROM: Could not get CD "
++ "table of contents\n");
++ return -ENXIO;
++ }
++ }
++
++ fentry = get_entry_track(gd.toc->first);
++ lentry = get_entry_track(gd.toc->last);
++ /* Find the first data track */
++ track = get_entry_track(gd.toc->last);
++ do {
++ data = gd.toc->entry[track - 1];
++ if (get_entry_q_ctrl(data))
++ break; /* ie a real data track */
++ track--;
++ } while (track >= fentry);
++
++ if ((track > 100) || (track < get_entry_track(gd.toc->first))) {
++ printk(KERN_INFO "GDROM: No data on the last "
++ "session of the CD\n");
++ gdrom_getsense(NULL);
++ return -ENXIO;
++ }
++
++ ms_info->addr_format = CDROM_LBA;
++ ms_info->addr.lba = get_entry_lba(data);
++ ms_info->xa_flag = 1;
++ return 0;
++}
++
++static int gdrom_open(struct cdrom_device_info *cd_info, int purpose)
++{
++ /* spin up the disk */
++ return gdrom_preparedisk_cmd();
++}
++
++/* this function is required even if empty */
++static void gdrom_release(struct cdrom_device_info *cd_info)
++{
++}
++
++static int gdrom_drivestatus(struct cdrom_device_info *cd_info, int ignore)
++{
++ /* read the sense key */
++ char sense = ctrl_inb(GDROM_ERROR_REG);
++ sense &= 0xF0;
++ if (sense == 0)
++ return CDS_DISC_OK;
++ if (sense == 0x20)
++ return CDS_DRIVE_NOT_READY;
++ /* default */
++ return CDS_NO_INFO;
++}
++
++static int gdrom_mediachanged(struct cdrom_device_info *cd_info, int ignore)
++{
++ /* check the sense key */
++ return (ctrl_inb(GDROM_ERROR_REG) & 0xF0) == 0x60;
++}
++
++/* reset the G1 bus */
++static int gdrom_hardreset(struct cdrom_device_info *cd_info)
++{
++ int count;
++ ctrl_outl(0x1fffff, GDROM_RESET_REG);
++ for (count = 0xa0000000; count < 0xa0200000; count += 4)
++ ctrl_inl(count);
++ return 0;
++}
++
++/* keep the function looking like the universal
++ * CD Rom specification - returning int */
++static int gdrom_packetcommand(struct cdrom_device_info *cd_info,
++ struct packet_command *command)
++{
++ gdrom_spicommand(&command->cmd, command->buflen);
++ return 0;
++}
++
++/* Get Sense SPI command
++ * From Marcus Comstedt
++ * cmd = 0x13
++ * cmd + 4 = length of returned buffer
++ * Returns 5 16 bit words
++ */
++static int gdrom_getsense(short *bufstring)
++{
++ struct packet_command *sense_command;
++ short sense[5];
++ int sense_key;
++ int err = -EIO;
++
++ sense_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
++ if (!sense_command)
++ return -ENOMEM;
++ sense_command->cmd[0] = 0x13;
++ sense_command->cmd[4] = 10;
++ sense_command->buflen = 10;
++ /* even if something is pending try to get
++ * the sense key if possible */
++ if (gd.pending && !gdrom_wait_clrbusy()) {
++ err = -EBUSY;
++ goto cleanup_sense_final;
++ }
++ gd.pending = 1;
++ gdrom_packetcommand(gd.cd_info, sense_command);
++ wait_event_interruptible_timeout(command_queue, gd.pending == 0,
++ GDROM_DEFAULT_TIMEOUT);
++ if (gd.pending)
++ goto cleanup_sense;
++ insw(PHYSADDR(GDROM_DATA_REG), &sense, sense_command->buflen/2);
++ if (sense[1] & 40) {
++ printk(KERN_INFO "GDROM: Drive not ready - command aborted\n");
++ goto cleanup_sense;
++ }
++ sense_key = sense[1] & 0x0F;
++ if (sense_key < ARRAY_SIZE(sense_texts))
++ printk(KERN_INFO "GDROM: %s\n", sense_texts[sense_key].text);
++ else
++ printk(KERN_ERR "GDROM: Unknown sense key: %d\n", sense_key);
++ if (bufstring) /* return addional sense data */
++ memcpy(bufstring, &sense[4], 2);
++ if (sense_key < 2)
++ err = 0;
++
++cleanup_sense:
++ gd.pending = 0;
++cleanup_sense_final:
++ kfree(sense_command);
++ return err;
++}
++
++static struct cdrom_device_ops gdrom_ops = {
++ .open = gdrom_open,
++ .release = gdrom_release,
++ .drive_status = gdrom_drivestatus,
++ .media_changed = gdrom_mediachanged,
++ .get_last_session = gdrom_get_last_session,
++ .reset = gdrom_hardreset,
++ .capability = CDC_MULTI_SESSION | CDC_MEDIA_CHANGED |
++ CDC_RESET | CDC_DRIVE_STATUS | CDC_CD_R,
++ .n_minors = 1,
++};
++
++static int gdrom_bdops_open(struct inode *inode, struct file *file)
++{
++ return cdrom_open(gd.cd_info, inode, file);
++}
++
++static int gdrom_bdops_release(struct inode *inode, struct file *file)
++{
++ return cdrom_release(gd.cd_info, file);
++}
++
++static int gdrom_bdops_mediachanged(struct gendisk *disk)
++{
++ return cdrom_media_changed(gd.cd_info);
++}
++
++static int gdrom_bdops_ioctl(struct inode *inode, struct file *file,
++ unsigned cmd, unsigned long arg)
++{
++ return cdrom_ioctl(file, gd.cd_info, inode, cmd, arg);
++}
++
++static struct block_device_operations gdrom_bdops = {
++ .owner = THIS_MODULE,
++ .open = gdrom_bdops_open,
++ .release = gdrom_bdops_release,
++ .media_changed = gdrom_bdops_mediachanged,
++ .ioctl = gdrom_bdops_ioctl,
++};
++
++static irqreturn_t gdrom_command_interrupt(int irq, void *dev_id)
++{
++ gd.status = ctrl_inb(GDROM_STATUSCOMMAND_REG);
++ if (gd.pending != 1)
++ return IRQ_HANDLED;
++ gd.pending = 0;
++ wake_up_interruptible(&command_queue);
++ return IRQ_HANDLED;
++}
++
++static irqreturn_t gdrom_dma_interrupt(int irq, void *dev_id)
++{
++ gd.status = ctrl_inb(GDROM_STATUSCOMMAND_REG);
++ if (gd.transfer != 1)
++ return IRQ_HANDLED;
++ gd.transfer = 0;
++ wake_up_interruptible(&request_queue);
++ return IRQ_HANDLED;
++}
++
++static int __devinit gdrom_set_interrupt_handlers(void)
++{
++ int err;
++
++ err = request_irq(HW_EVENT_GDROM_CMD, gdrom_command_interrupt,
++ IRQF_DISABLED, "gdrom_command", &gd);
++ if (err)
++ return err;
++ err = request_irq(HW_EVENT_GDROM_DMA, gdrom_dma_interrupt,
++ IRQF_DISABLED, "gdrom_dma", &gd);
++ if (err)
++ free_irq(HW_EVENT_GDROM_CMD, &gd);
++ return err;
++}
++
++/* Implement DMA read using SPI command
++ * 0 -> 0x30
++ * 1 -> mode
++ * 2 -> block >> 16
++ * 3 -> block >> 8
++ * 4 -> block
++ * 8 -> sectors >> 16
++ * 9 -> sectors >> 8
++ * 10 -> sectors
++ */
++static void gdrom_readdisk_dma(struct work_struct *work)
++{
++ int err, block, block_cnt;
++ struct packet_command *read_command;
++ struct list_head *elem, *next;
++ struct request *req;
++ unsigned long timeout;
++
++ if (list_empty(&gdrom_deferred))
++ return;
++ read_command = kzalloc(sizeof(struct packet_command), GFP_KERNEL);
++ if (!read_command)
++ return; /* get more memory later? */
++ read_command->cmd[0] = 0x30;
++ read_command->cmd[1] = 0x20;
++ spin_lock(&gdrom_lock);
++ list_for_each_safe(elem, next, &gdrom_deferred) {
++ req = list_entry(elem, struct request, queuelist);
++ spin_unlock(&gdrom_lock);
++ block = req->sector/GD_TO_BLK + GD_SESSION_OFFSET;
++ block_cnt = req->nr_sectors/GD_TO_BLK;
++ ctrl_outl(PHYSADDR(req->buffer), GDROM_DMA_STARTADDR_REG);
++ ctrl_outl(block_cnt * GDROM_HARD_SECTOR, GDROM_DMA_LENGTH_REG);
++ ctrl_outl(1, GDROM_DMA_DIRECTION_REG);
++ ctrl_outl(1, GDROM_DMA_ENABLE_REG);
++ read_command->cmd[2] = (block >> 16) & 0xFF;
++ read_command->cmd[3] = (block >> 8) & 0xFF;
++ read_command->cmd[4] = block & 0xFF;
++ read_command->cmd[8] = (block_cnt >> 16) & 0xFF;
++ read_command->cmd[9] = (block_cnt >> 8) & 0xFF;
++ read_command->cmd[10] = block_cnt & 0xFF;
++ /* set for DMA */
++ ctrl_outb(1, GDROM_ERROR_REG);
++ /* other registers */
++ ctrl_outb(0, GDROM_SECNUM_REG);
++ ctrl_outb(0, GDROM_BCL_REG);
++ ctrl_outb(0, GDROM_BCH_REG);
++ ctrl_outb(0, GDROM_DSEL_REG);
++ ctrl_outb(0, GDROM_INTSEC_REG);
++ /* Wait for registers to reset after any previous activity */
++ timeout = jiffies + HZ / 2;
++ while (gdrom_is_busy() && time_before(jiffies, timeout))
++ cpu_relax();
++ ctrl_outb(GDROM_COM_PACKET, GDROM_STATUSCOMMAND_REG);
++ timeout = jiffies + HZ / 2;
++ /* Wait for packet command to finish */
++ while (gdrom_is_busy() && time_before(jiffies, timeout))
++ cpu_relax();
++ gd.pending = 1;
++ gd.transfer = 1;
++ outsw(PHYSADDR(GDROM_DATA_REG), &read_command->cmd, 6);
++ timeout = jiffies + HZ / 2;
++ /* Wait for any pending DMA to finish */
++ while (ctrl_inb(GDROM_DMA_STATUS_REG) &&
++ time_before(jiffies, timeout))
++ cpu_relax();
++ /* start transfer */
++ ctrl_outb(1, GDROM_DMA_STATUS_REG);
++ wait_event_interruptible_timeout(request_queue,
++ gd.transfer == 0, GDROM_DEFAULT_TIMEOUT);
++ err = gd.transfer;
++ gd.transfer = 0;
++ gd.pending = 0;
++ /* now seek to take the request spinlock
++ * before handling ending the request */
++ spin_lock(&gdrom_lock);
++ list_del_init(&req->queuelist);
++ end_dequeued_request(req, 1 - err);
++ }
++ spin_unlock(&gdrom_lock);
++ kfree(read_command);
++}
++
++static void gdrom_request_handler_dma(struct request *req)
++{
++ /* dequeue, add to list of deferred work
++ * and then schedule workqueue */
++ blkdev_dequeue_request(req);
++ list_add_tail(&req->queuelist, &gdrom_deferred);
++ schedule_work(&work);
++}
++
++static void gdrom_request(struct request_queue *rq)
++{
++ struct request *req;
++
++ while ((req = elv_next_request(rq)) != NULL) {
++ if (!blk_fs_request(req)) {
++ printk(KERN_DEBUG "GDROM: Non-fs request ignored\n");
++ end_request(req, 0);
++ }
++ if (rq_data_dir(req) != READ) {
++ printk(KERN_NOTICE "GDROM: Read only device -");
++ printk(" write request ignored\n");
++ end_request(req, 0);
++ }
++ if (req->nr_sectors)
++ gdrom_request_handler_dma(req);
++ else
++ end_request(req, 0);
++ }
++}
++
++/* Print string identifying GD ROM device */
++static int __devinit gdrom_outputversion(void)
++{
++ struct gdrom_id *id;
++ char *model_name, *manuf_name, *firmw_ver;
++ int err = -ENOMEM;
++
++ /* query device ID */
++ id = kzalloc(sizeof(struct gdrom_id), GFP_KERNEL);
++ if (!id)
++ return err;
++ gdrom_identifydevice(id);
++ model_name = kstrndup(id->modname, 16, GFP_KERNEL);
++ if (!model_name)
++ goto free_id;
++ manuf_name = kstrndup(id->mname, 16, GFP_KERNEL);
++ if (!manuf_name)
++ goto free_model_name;
++ firmw_ver = kstrndup(id->firmver, 16, GFP_KERNEL);
++ if (!firmw_ver)
++ goto free_manuf_name;
++ printk(KERN_INFO "GDROM: %s from %s with firmware %s\n",
++ model_name, manuf_name, firmw_ver);
++ err = 0;
++ kfree(firmw_ver);
++free_manuf_name:
++ kfree(manuf_name);
++free_model_name:
++ kfree(model_name);
++free_id:
++ kfree(id);
++ return err;
++}
++
++/* set the default mode for DMA transfer */
++static int __devinit gdrom_init_dma_mode(void)
++{
++ ctrl_outb(0x13, GDROM_ERROR_REG);
++ ctrl_outb(0x22, GDROM_INTSEC_REG);
++ if (!gdrom_wait_clrbusy())
++ return -EBUSY;
++ ctrl_outb(0xEF, GDROM_STATUSCOMMAND_REG);
++ if (!gdrom_wait_busy_sleeps())
++ return -EBUSY;
++ /* Memory protection setting for GDROM DMA
++ * Bits 31 - 16 security: 0x8843
++ * Bits 15 and 7 reserved (0)
++ * Bits 14 - 8 start of transfer range in 1 MB blocks OR'ed with 0x80
++ * Bits 6 - 0 end of transfer range in 1 MB blocks OR'ed with 0x80
++ * (0x40 | 0x80) = start range at 0x0C000000
++ * (0x7F | 0x80) = end range at 0x0FFFFFFF */
++ ctrl_outl(0x8843407F, GDROM_DMA_ACCESS_CTRL_REG);
++ ctrl_outl(9, GDROM_DMA_WAIT_REG); /* DMA word setting */
++ return 0;
++}
++
++static void __devinit probe_gdrom_setupcd(void)
++{
++ gd.cd_info->ops = &gdrom_ops;
++ gd.cd_info->capacity = 1;
++ strcpy(gd.cd_info->name, GDROM_DEV_NAME);
++ gd.cd_info->mask = CDC_CLOSE_TRAY|CDC_OPEN_TRAY|CDC_LOCK|
++ CDC_SELECT_DISC;
++}
++
++static void __devinit probe_gdrom_setupdisk(void)
++{
++ gd.disk->major = gdrom_major;
++ gd.disk->first_minor = 1;
++ gd.disk->minors = 1;
++ strcpy(gd.disk->disk_name, GDROM_DEV_NAME);
++}
++
++static int __devinit probe_gdrom_setupqueue(void)
++{
++ blk_queue_hardsect_size(gd.gdrom_rq, GDROM_HARD_SECTOR);
++ /* using DMA so memory will need to be contiguous */
++ blk_queue_max_hw_segments(gd.gdrom_rq, 1);
++ /* set a large max size to get most from DMA */
++ blk_queue_max_segment_size(gd.gdrom_rq, 0x40000);
++ gd.disk->queue = gd.gdrom_rq;
++ return gdrom_init_dma_mode();
++}
++
++/*
++ * register this as a block device and as compliant with the
++ * universal CD Rom driver interface
++ */
++static int __devinit probe_gdrom(struct platform_device *devptr)
++{
++ int err;
++ /* Start the device */
++ if (gdrom_execute_diagnostic() != 1) {
++ printk(KERN_WARNING "GDROM: ATA Probe for GDROM failed.\n");
++ return -ENODEV;
++ }
++ /* Print out firmware ID */
++ if (gdrom_outputversion())
++ return -ENOMEM;
++ /* Register GDROM */
++ gdrom_major = register_blkdev(0, GDROM_DEV_NAME);
++ if (gdrom_major <= 0)
++ return gdrom_major;
++ printk(KERN_INFO "GDROM: Registered with major number %d\n",
++ gdrom_major);
++ /* Specify basic properties of drive */
++ gd.cd_info = kzalloc(sizeof(struct cdrom_device_info), GFP_KERNEL);
++ if (!gd.cd_info) {
++ err = -ENOMEM;
++ goto probe_fail_no_mem;
++ }
++ probe_gdrom_setupcd();
++ gd.disk = alloc_disk(1);
++ if (!gd.disk) {
++ err = -ENODEV;
++ goto probe_fail_no_disk;
++ }
++ probe_gdrom_setupdisk();
++ if (register_cdrom(gd.cd_info)) {
++ err = -ENODEV;
++ goto probe_fail_cdrom_register;
++ }
++ gd.disk->fops = &gdrom_bdops;
++ /* latch on to the interrupt */
++ err = gdrom_set_interrupt_handlers();
++ if (err)
++ goto probe_fail_cmdirq_register;
++ gd.gdrom_rq = blk_init_queue(gdrom_request, &gdrom_lock);
++ if (!gd.gdrom_rq)
++ goto probe_fail_requestq;
++
++ err = probe_gdrom_setupqueue();
++ if (err)
++ goto probe_fail_toc;
++
++ gd.toc = kzalloc(sizeof(struct gdromtoc), GFP_KERNEL);
++ if (!gd.toc)
++ goto probe_fail_toc;
++ add_disk(gd.disk);
++ return 0;
++
++probe_fail_toc:
++ blk_cleanup_queue(gd.gdrom_rq);
++probe_fail_requestq:
++ free_irq(HW_EVENT_GDROM_DMA, &gd);
++ free_irq(HW_EVENT_GDROM_CMD, &gd);
++probe_fail_cmdirq_register:
++probe_fail_cdrom_register:
++ del_gendisk(gd.disk);
++probe_fail_no_disk:
++ kfree(gd.cd_info);
++ unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
++ gdrom_major = 0;
++probe_fail_no_mem:
++ printk(KERN_WARNING "GDROM: Probe failed - error is 0x%X\n", err);
++ return err;
++}
++
++static int __devexit remove_gdrom(struct platform_device *devptr)
++{
++ flush_scheduled_work();
++ blk_cleanup_queue(gd.gdrom_rq);
++ free_irq(HW_EVENT_GDROM_CMD, &gd);
++ free_irq(HW_EVENT_GDROM_DMA, &gd);
++ del_gendisk(gd.disk);
++ if (gdrom_major)
++ unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
++ return unregister_cdrom(gd.cd_info);
++}
++
++static struct platform_driver gdrom_driver = {
++ .probe = probe_gdrom,
++ .remove = __devexit_p(remove_gdrom),
++ .driver = {
++ .name = GDROM_DEV_NAME,
++ },
++};
++
++static int __init init_gdrom(void)
++{
++ int rc;
++ gd.toc = NULL;
++ rc = platform_driver_register(&gdrom_driver);
++ if (rc)
++ return rc;
++ pd = platform_device_register_simple(GDROM_DEV_NAME, -1, NULL, 0);
++ if (IS_ERR(pd)) {
++ platform_driver_unregister(&gdrom_driver);
++ return PTR_ERR(pd);
++ }
++ return 0;
++}
++
++static void __exit exit_gdrom(void)
++{
++ platform_device_unregister(pd);
++ platform_driver_unregister(&gdrom_driver);
++ kfree(gd.toc);
++}
++
++module_init(init_gdrom);
++module_exit(exit_gdrom);
++MODULE_AUTHOR("Adrian McMenamin <adrian at mcmen.demon.co.uk>");
++MODULE_DESCRIPTION("SEGA Dreamcast GD-ROM Driver");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/cdrom/viocd.c b/drivers/cdrom/viocd.c
+index d8bb44b..8473b9f 100644
+--- a/drivers/cdrom/viocd.c
++++ b/drivers/cdrom/viocd.c
+@@ -289,7 +289,7 @@ static int send_request(struct request *req)
+ return 0;
+ }
+
+-static void viocd_end_request(struct request *req, int uptodate)
++static void viocd_end_request(struct request *req, int error)
+ {
+ int nsectors = req->hard_nr_sectors;
+
+@@ -302,11 +302,8 @@ static void viocd_end_request(struct request *req, int uptodate)
+ if (!nsectors)
+ nsectors = 1;
+
+- if (end_that_request_first(req, uptodate, nsectors))
++ if (__blk_end_request(req, error, nsectors << 9))
+ BUG();
+- add_disk_randomness(req->rq_disk);
+- blkdev_dequeue_request(req);
+- end_that_request_last(req, uptodate);
+ }
+
+ static int rwreq;
+@@ -317,11 +314,11 @@ static void do_viocd_request(struct request_queue *q)
+
+ while ((rwreq == 0) && ((req = elv_next_request(q)) != NULL)) {
+ if (!blk_fs_request(req))
+- viocd_end_request(req, 0);
++ viocd_end_request(req, -EIO);
+ else if (send_request(req) < 0) {
+ printk(VIOCD_KERN_WARNING
+ "unable to send message to OS/400!");
+- viocd_end_request(req, 0);
++ viocd_end_request(req, -EIO);
+ } else
+ rwreq++;
+ }
+@@ -532,9 +529,9 @@ return_complete:
+ "with rc %d:0x%04X: %s\n",
+ req, event->xRc,
+ bevent->sub_result, err->msg);
+- viocd_end_request(req, 0);
++ viocd_end_request(req, -EIO);
+ } else
+- viocd_end_request(req, 1);
++ viocd_end_request(req, 0);
+
+ /* restart handling of incoming requests */
+ spin_unlock_irqrestore(&viocd_reqlock, flags);
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 2e3a0d4..4666295 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -373,6 +373,16 @@ config ISTALLION
+ To compile this driver as a module, choose M here: the
+ module will be called istallion.
+
++config NOZOMI
++ tristate "HSDPA Broadband Wireless Data Card - Globe Trotter"
++ depends on PCI && EXPERIMENTAL
++ help
++	  If you have a Globe Trotter HSDPA Broadband Wireless Data
++	  Card (PCMCIA), say Y here.
++
++ To compile this driver as a module, choose M here, the module
++ will be called nozomi.
++
+ config A2232
+ tristate "Commodore A2232 serial support (EXPERIMENTAL)"
+ depends on EXPERIMENTAL && ZORRO && BROKEN_ON_SMP
+diff --git a/drivers/char/Makefile b/drivers/char/Makefile
+index 07304d5..96fc01e 100644
+--- a/drivers/char/Makefile
++++ b/drivers/char/Makefile
+@@ -26,6 +26,7 @@ obj-$(CONFIG_SERIAL167) += serial167.o
+ obj-$(CONFIG_CYCLADES) += cyclades.o
+ obj-$(CONFIG_STALLION) += stallion.o
+ obj-$(CONFIG_ISTALLION) += istallion.o
++obj-$(CONFIG_NOZOMI) += nozomi.o
+ obj-$(CONFIG_DIGIEPCA) += epca.o
+ obj-$(CONFIG_SPECIALIX) += specialix.o
+ obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
+diff --git a/drivers/char/agp/ali-agp.c b/drivers/char/agp/ali-agp.c
+index aa5ddb7..1ffb381 100644
+--- a/drivers/char/agp/ali-agp.c
++++ b/drivers/char/agp/ali-agp.c
+@@ -145,7 +145,6 @@ static void *m1541_alloc_page(struct agp_bridge_data *bridge)
+ void *addr = agp_generic_alloc_page(agp_bridge);
+ u32 temp;
+
+- global_flush_tlb();
+ if (!addr)
+ return NULL;
+
+@@ -162,7 +161,6 @@ static void ali_destroy_page(void * addr, int flags)
+ if (flags & AGP_PAGE_DESTROY_UNMAP) {
+ global_cache_flush(); /* is this really needed? --hch */
+ agp_generic_destroy_page(addr, flags);
+- global_flush_tlb();
+ } else
+ agp_generic_destroy_page(addr, flags);
+ }
+diff --git a/drivers/char/agp/backend.c b/drivers/char/agp/backend.c
+index 832ded2..2720882 100644
+--- a/drivers/char/agp/backend.c
++++ b/drivers/char/agp/backend.c
+@@ -147,7 +147,6 @@ static int agp_backend_initialize(struct agp_bridge_data *bridge)
+ printk(KERN_ERR PFX "unable to get memory for scratch page.\n");
+ return -ENOMEM;
+ }
+- flush_agp_mappings();
+
+ bridge->scratch_page_real = virt_to_gart(addr);
+ bridge->scratch_page =
+@@ -191,7 +190,6 @@ err_out:
+ if (bridge->driver->needs_scratch_page) {
+ bridge->driver->agp_destroy_page(gart_to_virt(bridge->scratch_page_real),
+ AGP_PAGE_DESTROY_UNMAP);
+- flush_agp_mappings();
+ bridge->driver->agp_destroy_page(gart_to_virt(bridge->scratch_page_real),
+ AGP_PAGE_DESTROY_FREE);
+ }
+@@ -219,7 +217,6 @@ static void agp_backend_cleanup(struct agp_bridge_data *bridge)
+ bridge->driver->needs_scratch_page) {
+ bridge->driver->agp_destroy_page(gart_to_virt(bridge->scratch_page_real),
+ AGP_PAGE_DESTROY_UNMAP);
+- flush_agp_mappings();
+ bridge->driver->agp_destroy_page(gart_to_virt(bridge->scratch_page_real),
+ AGP_PAGE_DESTROY_FREE);
+ }
+diff --git a/drivers/char/agp/generic.c b/drivers/char/agp/generic.c
+index 64b2f6d..1a4674c 100644
+--- a/drivers/char/agp/generic.c
++++ b/drivers/char/agp/generic.c
+@@ -197,7 +197,6 @@ void agp_free_memory(struct agp_memory *curr)
+ for (i = 0; i < curr->page_count; i++) {
+ curr->bridge->driver->agp_destroy_page(gart_to_virt(curr->memory[i]), AGP_PAGE_DESTROY_UNMAP);
+ }
+- flush_agp_mappings();
+ for (i = 0; i < curr->page_count; i++) {
+ curr->bridge->driver->agp_destroy_page(gart_to_virt(curr->memory[i]), AGP_PAGE_DESTROY_FREE);
+ }
+@@ -267,8 +266,6 @@ struct agp_memory *agp_allocate_memory(struct agp_bridge_data *bridge,
+ }
+ new->bridge = bridge;
+
+- flush_agp_mappings();
+-
+ return new;
+ }
+ EXPORT_SYMBOL(agp_allocate_memory);
+diff --git a/drivers/char/agp/i460-agp.c b/drivers/char/agp/i460-agp.c
+index e72a83e..76f581c 100644
+--- a/drivers/char/agp/i460-agp.c
++++ b/drivers/char/agp/i460-agp.c
+@@ -527,7 +527,6 @@ static void *i460_alloc_page (struct agp_bridge_data *bridge)
+
+ if (I460_IO_PAGE_SHIFT <= PAGE_SHIFT) {
+ page = agp_generic_alloc_page(agp_bridge);
+- global_flush_tlb();
+ } else
+ /* Returning NULL would cause problems */
+ /* AK: really dubious code. */
+@@ -539,7 +538,6 @@ static void i460_destroy_page (void *page, int flags)
+ {
+ if (I460_IO_PAGE_SHIFT <= PAGE_SHIFT) {
+ agp_generic_destroy_page(page, flags);
+- global_flush_tlb();
+ }
+ }
+
+diff --git a/drivers/char/agp/intel-agp.c b/drivers/char/agp/intel-agp.c
+index 03eac1e..189efb6 100644
+--- a/drivers/char/agp/intel-agp.c
++++ b/drivers/char/agp/intel-agp.c
+@@ -210,13 +210,11 @@ static void *i8xx_alloc_pages(void)
+ if (page == NULL)
+ return NULL;
+
+- if (change_page_attr(page, 4, PAGE_KERNEL_NOCACHE) < 0) {
+- change_page_attr(page, 4, PAGE_KERNEL);
+- global_flush_tlb();
++ if (set_pages_uc(page, 4) < 0) {
++ set_pages_wb(page, 4);
+ __free_pages(page, 2);
+ return NULL;
+ }
+- global_flush_tlb();
+ get_page(page);
+ atomic_inc(&agp_bridge->current_memory_agp);
+ return page_address(page);
+@@ -230,8 +228,7 @@ static void i8xx_destroy_pages(void *addr)
+ return;
+
+ page = virt_to_page(addr);
+- change_page_attr(page, 4, PAGE_KERNEL);
+- global_flush_tlb();
++ set_pages_wb(page, 4);
+ put_page(page);
+ __free_pages(page, 2);
+ atomic_dec(&agp_bridge->current_memory_agp);
+@@ -341,7 +338,6 @@ static struct agp_memory *alloc_agpphysmem_i8xx(size_t pg_count, int type)
+
+ switch (pg_count) {
+ case 1: addr = agp_bridge->driver->agp_alloc_page(agp_bridge);
+- global_flush_tlb();
+ break;
+ case 4:
+ /* kludge to get 4 physical pages for ARGB cursor */
+@@ -404,7 +400,6 @@ static void intel_i810_free_by_type(struct agp_memory *curr)
+ else {
+ agp_bridge->driver->agp_destroy_page(gart_to_virt(curr->memory[0]),
+ AGP_PAGE_DESTROY_UNMAP);
+- global_flush_tlb();
+ agp_bridge->driver->agp_destroy_page(gart_to_virt(curr->memory[0]),
+ AGP_PAGE_DESTROY_FREE);
+ }
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index 4c16778..465ad35 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -600,63 +600,6 @@ static int hpet_is_known(struct hpet_data *hdp)
+ return 0;
+ }
+
+-EXPORT_SYMBOL(hpet_alloc);
+-EXPORT_SYMBOL(hpet_register);
+-EXPORT_SYMBOL(hpet_unregister);
+-EXPORT_SYMBOL(hpet_control);
+-
+-int hpet_register(struct hpet_task *tp, int periodic)
+-{
+- unsigned int i;
+- u64 mask;
+- struct hpet_timer __iomem *timer;
+- struct hpet_dev *devp;
+- struct hpets *hpetp;
+-
+- switch (periodic) {
+- case 1:
+- mask = Tn_PER_INT_CAP_MASK;
+- break;
+- case 0:
+- mask = 0;
+- break;
+- default:
+- return -EINVAL;
+- }
+-
+- tp->ht_opaque = NULL;
+-
+- spin_lock_irq(&hpet_task_lock);
+- spin_lock(&hpet_lock);
+-
+- for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
+- for (timer = hpetp->hp_hpet->hpet_timers, i = 0;
+- i < hpetp->hp_ntimer; i++, timer++) {
+- if ((readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK)
+- != mask)
+- continue;
+-
+- devp = &hpetp->hp_dev[i];
+-
+- if (devp->hd_flags & HPET_OPEN || devp->hd_task) {
+- devp = NULL;
+- continue;
+- }
+-
+- tp->ht_opaque = devp;
+- devp->hd_task = tp;
+- break;
+- }
+-
+- spin_unlock(&hpet_lock);
+- spin_unlock_irq(&hpet_task_lock);
+-
+- if (tp->ht_opaque)
+- return 0;
+- else
+- return -EBUSY;
+-}
+-
+ static inline int hpet_tpcheck(struct hpet_task *tp)
+ {
+ struct hpet_dev *devp;
+@@ -706,24 +649,6 @@ int hpet_unregister(struct hpet_task *tp)
+ return 0;
+ }
+
+-int hpet_control(struct hpet_task *tp, unsigned int cmd, unsigned long arg)
+-{
+- struct hpet_dev *devp;
+- int err;
+-
+- if ((err = hpet_tpcheck(tp)))
+- return err;
+-
+- spin_lock_irq(&hpet_lock);
+- devp = tp->ht_opaque;
+- if (devp->hd_task != tp) {
+- spin_unlock_irq(&hpet_lock);
+- return -ENXIO;
+- }
+- spin_unlock_irq(&hpet_lock);
+- return hpet_ioctl_common(devp, cmd, arg, 1);
+-}
+-
+ static ctl_table hpet_table[] = {
+ {
+ .ctl_name = CTL_UNNUMBERED,
+@@ -806,14 +731,14 @@ static unsigned long hpet_calibrate(struct hpets *hpetp)
+
+ int hpet_alloc(struct hpet_data *hdp)
+ {
+- u64 cap, mcfg;
++ u64 cap, mcfg, hpet_config;
+ struct hpet_dev *devp;
+- u32 i, ntimer;
++ u32 i, ntimer, irq;
+ struct hpets *hpetp;
+ size_t siz;
+ struct hpet __iomem *hpet;
+ static struct hpets *last = NULL;
+- unsigned long period;
++ unsigned long period, irq_bitmap;
+ unsigned long long temp;
+
+ /*
+@@ -840,11 +765,47 @@ int hpet_alloc(struct hpet_data *hdp)
+ hpetp->hp_hpet_phys = hdp->hd_phys_address;
+
+ hpetp->hp_ntimer = hdp->hd_nirqs;
++ hpet = hpetp->hp_hpet;
+
+- for (i = 0; i < hdp->hd_nirqs; i++)
+- hpetp->hp_dev[i].hd_hdwirq = hdp->hd_irq[i];
++ /* Assign IRQs statically for legacy devices */
++ hpetp->hp_dev[0].hd_hdwirq = hdp->hd_irq[0];
++ hpetp->hp_dev[1].hd_hdwirq = hdp->hd_irq[1];
+
+- hpet = hpetp->hp_hpet;
++ /* Assign IRQs dynamically for the others */
++ for (i = 2, devp = &hpetp->hp_dev[2]; i < hdp->hd_nirqs; i++, devp++) {
++ struct hpet_timer __iomem *timer;
++
++ timer = &hpet->hpet_timers[devp - hpetp->hp_dev];
++
++ /* Check if there's already an IRQ assigned to the timer */
++ if (hdp->hd_irq[i]) {
++ hpetp->hp_dev[i].hd_hdwirq = hdp->hd_irq[i];
++ continue;
++ }
++
++ hpet_config = readq(&timer->hpet_config);
++ irq_bitmap = (hpet_config & Tn_INT_ROUTE_CAP_MASK)
++ >> Tn_INT_ROUTE_CAP_SHIFT;
++ if (!irq_bitmap)
++ irq = 0; /* No valid IRQ Assignable */
++ else {
++ irq = find_first_bit(&irq_bitmap, 32);
++ do {
++ hpet_config |= irq << Tn_INT_ROUTE_CNF_SHIFT;
++ writeq(hpet_config, &timer->hpet_config);
++
++ /*
++ * Verify whether we have written a valid
++ * IRQ number by reading it back again
++ */
++ hpet_config = readq(&timer->hpet_config);
++ if (irq == (hpet_config & Tn_INT_ROUTE_CNF_MASK)
++ >> Tn_INT_ROUTE_CNF_SHIFT)
++ break; /* Success */
++ } while ((irq = (find_next_bit(&irq_bitmap, 32, irq))));
++ }
++ hpetp->hp_dev[i].hd_hdwirq = irq;
++ }
+
+ cap = readq(&hpet->hpet_cap);
+
+@@ -875,7 +836,8 @@ int hpet_alloc(struct hpet_data *hdp)
+ hpetp->hp_which, hdp->hd_phys_address,
+ hpetp->hp_ntimer > 1 ? "s" : "");
+ for (i = 0; i < hpetp->hp_ntimer; i++)
+- printk("%s %d", i > 0 ? "," : "", hdp->hd_irq[i]);
++ printk("%s %d", i > 0 ? "," : "",
++ hpetp->hp_dev[i].hd_hdwirq);
+ printk("\n");
+
+ printk(KERN_INFO "hpet%u: %u %d-bit timers, %Lu Hz\n",
+diff --git a/drivers/char/hvc_console.c b/drivers/char/hvc_console.c
+index 8252f86..480fae2 100644
+--- a/drivers/char/hvc_console.c
++++ b/drivers/char/hvc_console.c
+@@ -27,7 +27,7 @@
+ #include <linux/init.h>
+ #include <linux/kbd_kern.h>
+ #include <linux/kernel.h>
+-#include <linux/kobject.h>
++#include <linux/kref.h>
+ #include <linux/kthread.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
+@@ -89,7 +89,7 @@ struct hvc_struct {
+ int irq_requested;
+ int irq;
+ struct list_head next;
+- struct kobject kobj; /* ref count & hvc_struct lifetime */
++ struct kref kref; /* ref count & hvc_struct lifetime */
+ };
+
+ /* dynamic list of hvc_struct instances */
+@@ -110,7 +110,7 @@ static int last_hvc = -1;
+
+ /*
+ * Do not call this function with either the hvc_structs_lock or the hvc_struct
+- * lock held. If successful, this function increments the kobject reference
++ * lock held. If successful, this function increments the kref reference
+ * count against the target hvc_struct so it should be released when finished.
+ */
+ static struct hvc_struct *hvc_get_by_index(int index)
+@@ -123,7 +123,7 @@ static struct hvc_struct *hvc_get_by_index(int index)
+ list_for_each_entry(hp, &hvc_structs, next) {
+ spin_lock_irqsave(&hp->lock, flags);
+ if (hp->index == index) {
+- kobject_get(&hp->kobj);
++ kref_get(&hp->kref);
+ spin_unlock_irqrestore(&hp->lock, flags);
+ spin_unlock(&hvc_structs_lock);
+ return hp;
+@@ -242,6 +242,23 @@ static int __init hvc_console_init(void)
+ }
+ console_initcall(hvc_console_init);
+
++/* callback when the kref ref count reaches zero. */
++static void destroy_hvc_struct(struct kref *kref)
++{
++ struct hvc_struct *hp = container_of(kref, struct hvc_struct, kref);
++ unsigned long flags;
++
++ spin_lock(&hvc_structs_lock);
++
++ spin_lock_irqsave(&hp->lock, flags);
++ list_del(&(hp->next));
++ spin_unlock_irqrestore(&hp->lock, flags);
++
++ spin_unlock(&hvc_structs_lock);
++
++ kfree(hp);
++}
++
+ /*
+ * hvc_instantiate() is an early console discovery method which locates
+ * consoles * prior to the vio subsystem discovering them. Hotplugged
+@@ -261,7 +278,7 @@ int hvc_instantiate(uint32_t vtermno, int index, struct hv_ops *ops)
+ /* make sure no no tty has been registered in this index */
+ hp = hvc_get_by_index(index);
+ if (hp) {
+- kobject_put(&hp->kobj);
++ kref_put(&hp->kref, destroy_hvc_struct);
+ return -1;
+ }
+
+@@ -318,9 +335,8 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ unsigned long flags;
+ int irq = 0;
+ int rc = 0;
+- struct kobject *kobjp;
+
+- /* Auto increments kobject reference if found. */
++ /* Auto increments kref reference if found. */
+ if (!(hp = hvc_get_by_index(tty->index)))
+ return -ENODEV;
+
+@@ -341,8 +357,6 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ if (irq)
+ hp->irq_requested = 1;
+
+- kobjp = &hp->kobj;
+-
+ spin_unlock_irqrestore(&hp->lock, flags);
+ /* check error, fallback to non-irq */
+ if (irq)
+@@ -352,7 +366,7 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ * If the request_irq() fails and we return an error. The tty layer
+ * will call hvc_close() after a failed open but we don't want to clean
+ * up there so we'll clean up here and clear out the previously set
+- * tty fields and return the kobject reference.
++ * tty fields and return the kref reference.
+ */
+ if (rc) {
+ spin_lock_irqsave(&hp->lock, flags);
+@@ -360,7 +374,7 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ hp->irq_requested = 0;
+ spin_unlock_irqrestore(&hp->lock, flags);
+ tty->driver_data = NULL;
+- kobject_put(kobjp);
++ kref_put(&hp->kref, destroy_hvc_struct);
+ printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
+ }
+ /* Force wakeup of the polling thread */
+@@ -372,7 +386,6 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ {
+ struct hvc_struct *hp;
+- struct kobject *kobjp;
+ int irq = 0;
+ unsigned long flags;
+
+@@ -382,7 +395,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ /*
+ * No driver_data means that this close was issued after a failed
+ * hvc_open by the tty layer's release_dev() function and we can just
+- * exit cleanly because the kobject reference wasn't made.
++ * exit cleanly because the kref reference wasn't made.
+ */
+ if (!tty->driver_data)
+ return;
+@@ -390,7 +403,6 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ hp = tty->driver_data;
+ spin_lock_irqsave(&hp->lock, flags);
+
+- kobjp = &hp->kobj;
+ if (--hp->count == 0) {
+ if (hp->irq_requested)
+ irq = hp->irq;
+@@ -417,7 +429,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ spin_unlock_irqrestore(&hp->lock, flags);
+ }
+
+- kobject_put(kobjp);
++ kref_put(&hp->kref, destroy_hvc_struct);
+ }
+
+ static void hvc_hangup(struct tty_struct *tty)
+@@ -426,7 +438,6 @@ static void hvc_hangup(struct tty_struct *tty)
+ unsigned long flags;
+ int irq = 0;
+ int temp_open_count;
+- struct kobject *kobjp;
+
+ if (!hp)
+ return;
+@@ -443,7 +454,6 @@ static void hvc_hangup(struct tty_struct *tty)
+ return;
+ }
+
+- kobjp = &hp->kobj;
+ temp_open_count = hp->count;
+ hp->count = 0;
+ hp->n_outbuf = 0;
+@@ -457,7 +467,7 @@ static void hvc_hangup(struct tty_struct *tty)
+ free_irq(irq, hp);
+ while(temp_open_count) {
+ --temp_open_count;
+- kobject_put(kobjp);
++ kref_put(&hp->kref, destroy_hvc_struct);
+ }
+ }
+
+@@ -729,27 +739,6 @@ static const struct tty_operations hvc_ops = {
+ .chars_in_buffer = hvc_chars_in_buffer,
+ };
+
+-/* callback when the kboject ref count reaches zero. */
+-static void destroy_hvc_struct(struct kobject *kobj)
+-{
+- struct hvc_struct *hp = container_of(kobj, struct hvc_struct, kobj);
+- unsigned long flags;
+-
+- spin_lock(&hvc_structs_lock);
+-
+- spin_lock_irqsave(&hp->lock, flags);
+- list_del(&(hp->next));
+- spin_unlock_irqrestore(&hp->lock, flags);
+-
+- spin_unlock(&hvc_structs_lock);
+-
+- kfree(hp);
+-}
+-
+-static struct kobj_type hvc_kobj_type = {
+- .release = destroy_hvc_struct,
+-};
+-
+ struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
+ struct hv_ops *ops, int outbuf_size)
+ {
+@@ -776,8 +765,7 @@ struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
+ hp->outbuf_size = outbuf_size;
+ hp->outbuf = &((char *)hp)[ALIGN(sizeof(*hp), sizeof(long))];
+
+- kobject_init(&hp->kobj);
+- hp->kobj.ktype = &hvc_kobj_type;
++ kref_init(&hp->kref);
+
+ spin_lock_init(&hp->lock);
+ spin_lock(&hvc_structs_lock);
+@@ -806,12 +794,10 @@ struct hvc_struct __devinit *hvc_alloc(uint32_t vtermno, int irq,
+ int __devexit hvc_remove(struct hvc_struct *hp)
+ {
+ unsigned long flags;
+- struct kobject *kobjp;
+ struct tty_struct *tty;
+
+ spin_lock_irqsave(&hp->lock, flags);
+ tty = hp->tty;
+- kobjp = &hp->kobj;
+
+ if (hp->index < MAX_NR_HVC_CONSOLES)
+ vtermnos[hp->index] = -1;
+@@ -821,12 +807,12 @@ int __devexit hvc_remove(struct hvc_struct *hp)
+ spin_unlock_irqrestore(&hp->lock, flags);
+
+ /*
+- * We 'put' the instance that was grabbed when the kobject instance
+- * was initialized using kobject_init(). Let the last holder of this
+- * kobject cause it to be removed, which will probably be the tty_hangup
++ * We 'put' the instance that was grabbed when the kref instance
++ * was initialized using kref_init(). Let the last holder of this
++ * kref cause it to be removed, which will probably be the tty_hangup
+ * below.
+ */
+- kobject_put(kobjp);
++ kref_put(&hp->kref, destroy_hvc_struct);
+
+ /*
+ * This function call will auto chain call hvc_hangup. The tty should
+diff --git a/drivers/char/hvcs.c b/drivers/char/hvcs.c
+index 69d8866..fd75590 100644
+--- a/drivers/char/hvcs.c
++++ b/drivers/char/hvcs.c
+@@ -57,11 +57,7 @@
+ * rescanning partner information upon a user's request.
+ *
+ * Each vty-server, prior to being exposed to this driver is reference counted
+- * using the 2.6 Linux kernel kobject construct. This kobject is also used by
+- * the vio bus to provide a vio device sysfs entry that this driver attaches
+- * device specific attributes to, including partner information. The vio bus
+- * framework also provides a sysfs entry for each vio driver. The hvcs driver
+- * provides driver attributes in this entry.
++ * using the 2.6 Linux kernel kref construct.
+ *
+ * For direction on installation and usage of this driver please reference
+ * Documentation/powerpc/hvcs.txt.
+@@ -71,7 +67,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
+-#include <linux/kobject.h>
++#include <linux/kref.h>
+ #include <linux/kthread.h>
+ #include <linux/list.h>
+ #include <linux/major.h>
+@@ -293,12 +289,12 @@ struct hvcs_struct {
+ int chars_in_buffer;
+
+ /*
+- * Any variable below the kobject is valid before a tty is connected and
++ * Any variable below the kref is valid before a tty is connected and
+ * stays valid after the tty is disconnected. These shouldn't be
++ * whacked until the kref refcount reaches zero though some entries
+ * may be changed via sysfs initiatives.
+ */
+- struct kobject kobj; /* ref count & hvcs_struct lifetime */
++ struct kref kref; /* ref count & hvcs_struct lifetime */
+ int connected; /* is the vty-server currently connected to a vty? */
+ uint32_t p_unit_address; /* partner unit address */
+ uint32_t p_partition_ID; /* partner partition ID */
+@@ -307,8 +303,8 @@ struct hvcs_struct {
+ struct vio_dev *vdev;
+ };
+
+-/* Required to back map a kobject to its containing object */
+-#define from_kobj(kobj) container_of(kobj, struct hvcs_struct, kobj)
++/* Required to back map a kref to its containing object */
++#define from_kref(k) container_of(k, struct hvcs_struct, kref)
+
+ static struct list_head hvcs_structs = LIST_HEAD_INIT(hvcs_structs);
+ static DEFINE_SPINLOCK(hvcs_structs_lock);
+@@ -334,7 +330,6 @@ static void hvcs_partner_free(struct hvcs_struct *hvcsd);
+ static int hvcs_enable_device(struct hvcs_struct *hvcsd,
+ uint32_t unit_address, unsigned int irq, struct vio_dev *dev);
+
+-static void destroy_hvcs_struct(struct kobject *kobj);
+ static int hvcs_open(struct tty_struct *tty, struct file *filp);
+ static void hvcs_close(struct tty_struct *tty, struct file *filp);
+ static void hvcs_hangup(struct tty_struct * tty);
+@@ -703,10 +698,10 @@ static void hvcs_return_index(int index)
+ hvcs_index_list[index] = -1;
+ }
+
+-/* callback when the kboject ref count reaches zero */
+-static void destroy_hvcs_struct(struct kobject *kobj)
++/* callback when the kref ref count reaches zero */
++static void destroy_hvcs_struct(struct kref *kref)
+ {
+- struct hvcs_struct *hvcsd = from_kobj(kobj);
++ struct hvcs_struct *hvcsd = from_kref(kref);
+ struct vio_dev *vdev;
+ unsigned long flags;
+
+@@ -743,10 +738,6 @@ static void destroy_hvcs_struct(struct kobject *kobj)
+ kfree(hvcsd);
+ }
+
+-static struct kobj_type hvcs_kobj_type = {
+- .release = destroy_hvcs_struct,
+-};
+-
+ static int hvcs_get_index(void)
+ {
+ int i;
+@@ -791,9 +782,7 @@ static int __devinit hvcs_probe(
+
+ spin_lock_init(&hvcsd->lock);
+ /* Automatically incs the refcount the first time */
+- kobject_init(&hvcsd->kobj);
+- /* Set up the callback for terminating the hvcs_struct's life */
+- hvcsd->kobj.ktype = &hvcs_kobj_type;
++ kref_init(&hvcsd->kref);
+
+ hvcsd->vdev = dev;
+ dev->dev.driver_data = hvcsd;
+@@ -844,7 +833,6 @@ static int __devexit hvcs_remove(struct vio_dev *dev)
+ {
+ struct hvcs_struct *hvcsd = dev->dev.driver_data;
+ unsigned long flags;
+- struct kobject *kobjp;
+ struct tty_struct *tty;
+
+ if (!hvcsd)
+@@ -856,15 +844,13 @@ static int __devexit hvcs_remove(struct vio_dev *dev)
+
+ tty = hvcsd->tty;
+
+- kobjp = &hvcsd->kobj;
+-
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * Let the last holder of this object cause it to be removed, which
+ * would probably be tty_hangup below.
+ */
+- kobject_put (kobjp);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+
+ /*
+ * The hangup is a scheduled function which will auto chain call
+@@ -1086,7 +1072,7 @@ static int hvcs_enable_device(struct hvcs_struct *hvcsd, uint32_t unit_address,
+ }
+
+ /*
+- * This always increments the kobject ref count if the call is successful.
++ * This always increments the kref ref count if the call is successful.
+ * Please remember to dec when you are done with the instance.
+ *
+ * NOTICE: Do NOT hold either the hvcs_struct.lock or hvcs_structs_lock when
+@@ -1103,7 +1089,7 @@ static struct hvcs_struct *hvcs_get_by_index(int index)
+ list_for_each_entry(hvcsd, &hvcs_structs, next) {
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (hvcsd->index == index) {
+- kobject_get(&hvcsd->kobj);
++ kref_get(&hvcsd->kref);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ spin_unlock(&hvcs_structs_lock);
+ return hvcsd;
+@@ -1129,14 +1115,13 @@ static int hvcs_open(struct tty_struct *tty, struct file *filp)
+ unsigned int irq;
+ struct vio_dev *vdev;
+ unsigned long unit_address;
+- struct kobject *kobjp;
+
+ if (tty->driver_data)
+ goto fast_open;
+
+ /*
+ * Is there a vty-server that shares the same index?
+- * This function increments the kobject index.
++ * This function increments the kref index.
+ */
+ if (!(hvcsd = hvcs_get_by_index(tty->index))) {
+ printk(KERN_WARNING "HVCS: open failed, no device associated"
+@@ -1181,7 +1166,7 @@ static int hvcs_open(struct tty_struct *tty, struct file *filp)
+ * and will grab the spinlock and free the connection if it fails.
+ */
+ if (((rc = hvcs_enable_device(hvcsd, unit_address, irq, vdev)))) {
+- kobject_put(&hvcsd->kobj);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+ printk(KERN_WARNING "HVCS: enable device failed.\n");
+ return rc;
+ }
+@@ -1192,17 +1177,11 @@ fast_open:
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+- if (!kobject_get(&hvcsd->kobj)) {
+- spin_unlock_irqrestore(&hvcsd->lock, flags);
+- printk(KERN_ERR "HVCS: Kobject of open"
+- " hvcs doesn't exist.\n");
+- return -EFAULT; /* Is this the right return value? */
+- }
+-
++ kref_get(&hvcsd->kref);
+ hvcsd->open_count++;
+-
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
++
+ open_success:
+ hvcs_kick();
+
+@@ -1212,9 +1191,8 @@ open_success:
+ return 0;
+
+ error_release:
+- kobjp = &hvcsd->kobj;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+- kobject_put(&hvcsd->kobj);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+
+ printk(KERN_WARNING "HVCS: partner connect failed.\n");
+ return retval;
+@@ -1224,7 +1202,6 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
+ {
+ struct hvcs_struct *hvcsd;
+ unsigned long flags;
+- struct kobject *kobjp;
+ int irq = NO_IRQ;
+
+ /*
+@@ -1245,7 +1222,6 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+- kobjp = &hvcsd->kobj;
+ if (--hvcsd->open_count == 0) {
+
+ vio_disable_interrupts(hvcsd->vdev);
+@@ -1270,7 +1246,7 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
+ tty->driver_data = NULL;
+
+ free_irq(irq, hvcsd);
+- kobject_put(kobjp);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+ return;
+ } else if (hvcsd->open_count < 0) {
+ printk(KERN_ERR "HVCS: vty-server@%X open_count: %d"
+@@ -1279,7 +1255,7 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
+ }
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+- kobject_put(kobjp);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+ }
+
+ static void hvcs_hangup(struct tty_struct * tty)
+@@ -1287,21 +1263,17 @@ static void hvcs_hangup(struct tty_struct * tty)
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+ int temp_open_count;
+- struct kobject *kobjp;
+ int irq = NO_IRQ;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+- /* Preserve this so that we know how many kobject refs to put */
++ /* Preserve this so that we know how many kref refs to put */
+ temp_open_count = hvcsd->open_count;
+
+ /*
+- * Don't kobject put inside the spinlock because the destruction
++ * Don't kref put inside the spinlock because the destruction
+ * callback may use the spinlock and it may get called before the
+- * spinlock has been released. Get a pointer to the kobject and
+- * kobject_put on that after releasing the spinlock.
++ * spinlock has been released.
+ */
+- kobjp = &hvcsd->kobj;
+-
+ vio_disable_interrupts(hvcsd->vdev);
+
+ hvcsd->todo_mask = 0;
+@@ -1324,7 +1296,7 @@ static void hvcs_hangup(struct tty_struct * tty)
+ free_irq(irq, hvcsd);
+
+ /*
+- * We need to kobject_put() for every open_count we have since the
++ * We need to kref_put() for every open_count we have since the
+ * tty_hangup() function doesn't invoke a close per open connection on a
+ * non-console device.
+ */
+@@ -1335,7 +1307,7 @@ static void hvcs_hangup(struct tty_struct * tty)
+ * NOTE: If this hangup was signaled from user space then the
+ * final put will never happen.
+ */
+- kobject_put(kobjp);
++ kref_put(&hvcsd->kref, destroy_hvcs_struct);
+ }
+ }
+
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index 556fd81..c422e87 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -28,6 +28,7 @@
+ #include <linux/kernel.h>
+ #include <linux/pci.h>
+ #include <linux/hw_random.h>
++#include <linux/delay.h>
+ #include <asm/io.h>
+
+
+@@ -52,11 +53,18 @@ MODULE_DEVICE_TABLE(pci, pci_tbl);
+ static struct pci_dev *amd_pdev;
+
+
+-static int amd_rng_data_present(struct hwrng *rng)
++static int amd_rng_data_present(struct hwrng *rng, int wait)
+ {
+ u32 pmbase = (u32)rng->priv;
++ int data, i;
+
+- return !!(inl(pmbase + 0xF4) & 1);
++ for (i = 0; i < 20; i++) {
++ data = !!(inl(pmbase + 0xF4) & 1);
++ if (data || !wait)
++ break;
++ udelay(10);
++ }
++ return data;
+ }
+
+ static int amd_rng_data_read(struct hwrng *rng, u32 *data)
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 26a860a..0118b98 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -66,11 +66,11 @@ static inline void hwrng_cleanup(struct hwrng *rng)
+ rng->cleanup(rng);
+ }
+
+-static inline int hwrng_data_present(struct hwrng *rng)
++static inline int hwrng_data_present(struct hwrng *rng, int wait)
+ {
+ if (!rng->data_present)
+ return 1;
+- return rng->data_present(rng);
++ return rng->data_present(rng, wait);
+ }
+
+ static inline int hwrng_data_read(struct hwrng *rng, u32 *data)
+@@ -94,8 +94,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ {
+ u32 data;
+ ssize_t ret = 0;
+- int i, err = 0;
+- int data_present;
++ int err = 0;
+ int bytes_read;
+
+ while (size) {
+@@ -107,21 +106,10 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ err = -ENODEV;
+ goto out;
+ }
+- if (filp->f_flags & O_NONBLOCK) {
+- data_present = hwrng_data_present(current_rng);
+- } else {
+- /* Some RNG require some time between data_reads to gather
+- * new entropy. Poll it.
+- */
+- for (i = 0; i < 20; i++) {
+- data_present = hwrng_data_present(current_rng);
+- if (data_present)
+- break;
+- udelay(10);
+- }
+- }
++
+ bytes_read = 0;
+- if (data_present)
++ if (hwrng_data_present(current_rng,
++ !(filp->f_flags & O_NONBLOCK)))
+ bytes_read = hwrng_data_read(current_rng, &data);
+ mutex_unlock(&rng_mutex);
+
+diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
+index 8e8658d..fed4ef5 100644
+--- a/drivers/char/hw_random/geode-rng.c
++++ b/drivers/char/hw_random/geode-rng.c
+@@ -28,6 +28,7 @@
+ #include <linux/kernel.h>
+ #include <linux/pci.h>
+ #include <linux/hw_random.h>
++#include <linux/delay.h>
+ #include <asm/io.h>
+
+
+@@ -61,11 +62,18 @@ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
+ return 4;
+ }
+
+-static int geode_rng_data_present(struct hwrng *rng)
++static int geode_rng_data_present(struct hwrng *rng, int wait)
+ {
+ void __iomem *mem = (void __iomem *)rng->priv;
++ int data, i;
+
+- return !!(readl(mem + GEODE_RNG_STATUS_REG));
++ for (i = 0; i < 20; i++) {
++ data = !!(readl(mem + GEODE_RNG_STATUS_REG));
++ if (data || !wait)
++ break;
++ udelay(10);
++ }
++ return data;
+ }
+
+
+diff --git a/drivers/char/hw_random/intel-rng.c b/drivers/char/hw_random/intel-rng.c
+index 753f460..5cc651e 100644
+--- a/drivers/char/hw_random/intel-rng.c
++++ b/drivers/char/hw_random/intel-rng.c
+@@ -29,6 +29,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/stop_machine.h>
++#include <linux/delay.h>
+ #include <asm/io.h>
+
+
+@@ -162,11 +163,19 @@ static inline u8 hwstatus_set(void __iomem *mem,
+ return hwstatus_get(mem);
+ }
+
+-static int intel_rng_data_present(struct hwrng *rng)
++static int intel_rng_data_present(struct hwrng *rng, int wait)
+ {
+ void __iomem *mem = (void __iomem *)rng->priv;
+-
+- return !!(readb(mem + INTEL_RNG_STATUS) & INTEL_RNG_DATA_PRESENT);
++ int data, i;
++
++ for (i = 0; i < 20; i++) {
++ data = !!(readb(mem + INTEL_RNG_STATUS) &
++ INTEL_RNG_DATA_PRESENT);
++ if (data || !wait)
++ break;
++ udelay(10);
++ }
++ return data;
+ }
+
+ static int intel_rng_data_read(struct hwrng *rng, u32 *data)
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index 3f35a1c..7e31995 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -29,6 +29,7 @@
+ #include <linux/err.h>
+ #include <linux/platform_device.h>
+ #include <linux/hw_random.h>
++#include <linux/delay.h>
+
+ #include <asm/io.h>
+
+@@ -65,9 +66,17 @@ static void omap_rng_write_reg(int reg, u32 val)
+ }
+
+ /* REVISIT: Does the status bit really work on 16xx? */
+-static int omap_rng_data_present(struct hwrng *rng)
++static int omap_rng_data_present(struct hwrng *rng, int wait)
+ {
+- return omap_rng_read_reg(RNG_STAT_REG) ? 0 : 1;
++ int data, i;
++
++ for (i = 0; i < 20; i++) {
++ data = omap_rng_read_reg(RNG_STAT_REG) ? 0 : 1;
++ if (data || !wait)
++ break;
++ udelay(10);
++ }
++ return data;
+ }
+
+ static int omap_rng_data_read(struct hwrng *rng, u32 *data)
+diff --git a/drivers/char/hw_random/pasemi-rng.c b/drivers/char/hw_random/pasemi-rng.c
+index fa6040b..e2ea210 100644
+--- a/drivers/char/hw_random/pasemi-rng.c
++++ b/drivers/char/hw_random/pasemi-rng.c
+@@ -23,6 +23,7 @@
+ #include <linux/kernel.h>
+ #include <linux/platform_device.h>
+ #include <linux/hw_random.h>
++#include <linux/delay.h>
+ #include <asm/of_platform.h>
+ #include <asm/io.h>
+
+@@ -41,12 +42,19 @@
+
+ #define MODULE_NAME "pasemi_rng"
+
+-static int pasemi_rng_data_present(struct hwrng *rng)
++static int pasemi_rng_data_present(struct hwrng *rng, int wait)
+ {
+ void __iomem *rng_regs = (void __iomem *)rng->priv;
+-
+- return (in_le32(rng_regs + SDCRNG_CTL_REG)
+- & SDCRNG_CTL_FVLD_M) ? 1 : 0;
++ int data, i;
++
++ for (i = 0; i < 20; i++) {
++ data = (in_le32(rng_regs + SDCRNG_CTL_REG)
++ & SDCRNG_CTL_FVLD_M) ? 1 : 0;
++ if (data || !wait)
++ break;
++ udelay(10);
++ }
++ return data;
+ }
+
+ static int pasemi_rng_data_read(struct hwrng *rng, u32 *data)
+diff --git a/drivers/char/hw_random/via-rng.c b/drivers/char/hw_random/via-rng.c
+index ec435cb..868e39f 100644
+--- a/drivers/char/hw_random/via-rng.c
++++ b/drivers/char/hw_random/via-rng.c
+@@ -27,6 +27,7 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/hw_random.h>
++#include <linux/delay.h>
+ #include <asm/io.h>
+ #include <asm/msr.h>
+ #include <asm/cpufeature.h>
+@@ -77,10 +78,11 @@ static inline u32 xstore(u32 *addr, u32 edx_in)
+ return eax_out;
+ }
+
+-static int via_rng_data_present(struct hwrng *rng)
++static int via_rng_data_present(struct hwrng *rng, int wait)
+ {
+ u32 bytes_out;
+ u32 *via_rng_datum = (u32 *)(&rng->priv);
++ int i;
+
+ /* We choose the recommended 1-byte-per-instruction RNG rate,
+ * for greater randomness at the expense of speed. Larger
+@@ -95,12 +97,15 @@ static int via_rng_data_present(struct hwrng *rng)
+ * completes.
+ */
+
+- *via_rng_datum = 0; /* paranoia, not really necessary */
+- bytes_out = xstore(via_rng_datum, VIA_RNG_CHUNK_1);
+- bytes_out &= VIA_XSTORE_CNT_MASK;
+- if (bytes_out == 0)
+- return 0;
+- return 1;
++ for (i = 0; i < 20; i++) {
++ *via_rng_datum = 0; /* paranoia, not really necessary */
++ bytes_out = xstore(via_rng_datum, VIA_RNG_CHUNK_1);
++ bytes_out &= VIA_XSTORE_CNT_MASK;
++ if (bytes_out || !wait)
++ break;
++ udelay(10);
++ }
++ return bytes_out ? 1 : 0;
+ }
+
+ static int via_rng_data_read(struct hwrng *rng, u32 *data)
+diff --git a/drivers/char/nozomi.c b/drivers/char/nozomi.c
+new file mode 100644
+index 0000000..6076e66
+--- /dev/null
++++ b/drivers/char/nozomi.c
+@@ -0,0 +1,1993 @@
++/*
++ * nozomi.c -- HSDPA driver Broadband Wireless Data Card - Globe Trotter
++ *
++ * Written by: Ulf Jakobsson,
++ * Jan Åkerfeldt,
++ * Stefan Thomasson,
++ *
++ * Maintained by: Paul Hardwick (p.hardwick at option.com)
++ *
++ * Patches:
++ * Locking code changes for Vodafone by Sphere Systems Ltd,
++ * Andrew Bird (ajb at spheresystems.co.uk )
++ * & Phil Sanderson
++ *
++ * Source has been ported from an implementation made by Filip Aben @ Option
++ *
++ * --------------------------------------------------------------------------
++ *
++ * Copyright (c) 2005,2006 Option Wireless Sweden AB
++ * Copyright (c) 2006 Sphere Systems Ltd
++ * Copyright (c) 2006 Option Wireless n/v
++ * All rights Reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ *
++ * --------------------------------------------------------------------------
++ */
++
++/*
++ * CHANGELOG
++ * Version 2.1d
++ * 11-November-2007 Jiri Slaby, Frank Seidel
++ * - Big rework of multicard support by Jiri
++ * - Major cleanups (semaphore to mutex, endianess, no major reservation)
++ * - Optimizations
++ *
++ * Version 2.1c
++ * 30-October-2007 Frank Seidel
++ * - Completed multicard support
++ * - Minor cleanups
++ *
++ * Version 2.1b
++ * 07-August-2007 Frank Seidel
++ * - Minor cleanups
++ * - theoretical multicard support
++ *
++ * Version 2.1
++ * 03-July-2006 Paul Hardwick
++ *
++ * - Stability Improvements. Incorporated spinlock wraps patch.
++ * - Updated for newer 2.6.14+ kernels (tty_buffer_request_room)
++ * - using __devexit macro for tty
++ *
++ *
++ * Version 2.0
++ * 08-feb-2006 15:34:10:Ulf
++ *
++ * -Fixed issue when not waking up line disipine layer, could probably result
++ * in better uplink performance for 2.4.
++ *
++ * -Fixed issue with big endian during initalization, now proper toggle flags
++ * are handled between preloader and maincode.
++ *
++ * -Fixed flow control issue.
++ *
++ * -Added support for setting DTR.
++ *
++ * -For 2.4 kernels, removing temporary buffer that's not needed.
++ *
++ * -Reading CTS only for modem port (only port that supports it).
++ *
++ * -Return 0 in write_room instead of netative value, it's not handled in
++ * upper layer.
++ *
++ * --------------------------------------------------------------------------
++ * Version 1.0
++ *
++ * First version of driver, only tested with card of type F32_2.
++ * Works fine with 2.4 and 2.6 kernels.
++ * Driver also support big endian architecture.
++ */
++
++/* Enable this to have a lot of debug printouts */
++#define DEBUG
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/ioport.h>
++#include <linux/tty.h>
++#include <linux/tty_driver.h>
++#include <linux/tty_flip.h>
++#include <linux/serial.h>
++#include <linux/interrupt.h>
++#include <linux/kmod.h>
++#include <linux/init.h>
++#include <linux/kfifo.h>
++#include <linux/uaccess.h>
++#include <asm/byteorder.h>
++
++#include <linux/delay.h>
++
++
++#define VERSION_STRING DRIVER_DESC " 2.1d (build date: " \
++ __DATE__ " " __TIME__ ")"
++
++/* Macros definitions */
++
++/* Default debug printout level */
++#define NOZOMI_DEBUG_LEVEL 0x00
++
++#define P_BUF_SIZE 128
++#define NFO(_err_flag_, args...) \
++do { \
++ char tmp[P_BUF_SIZE]; \
++ snprintf(tmp, sizeof(tmp), ##args); \
++ printk(_err_flag_ "[%d] %s(): %s\n", __LINE__, \
++ __FUNCTION__, tmp); \
++} while (0)
++
++#define DBG1(args...) D_(0x01, ##args)
++#define DBG2(args...) D_(0x02, ##args)
++#define DBG3(args...) D_(0x04, ##args)
++#define DBG4(args...) D_(0x08, ##args)
++#define DBG5(args...) D_(0x10, ##args)
++#define DBG6(args...) D_(0x20, ##args)
++#define DBG7(args...) D_(0x40, ##args)
++#define DBG8(args...) D_(0x80, ##args)
++
++#ifdef DEBUG
++/* Do we need this settable at runtime? */
++static int debug = NOZOMI_DEBUG_LEVEL;
++
++#define D(lvl, args...) do {if (lvl & debug) NFO(KERN_DEBUG, ##args); } \
++ while (0)
++#define D_(lvl, args...) D(lvl, ##args)
++
++/* These printouts are always printed */
++
++#else
++static int debug;
++#define D_(lvl, args...)
++#endif
++
++/* TODO: rewrite to optimize macros... */
++
++#define TMP_BUF_MAX 256
++
++#define DUMP(buf__,len__) \
++ do { \
++ char tbuf[TMP_BUF_MAX] = {0};\
++ if (len__ > 1) {\
++ snprintf(tbuf, len__ > TMP_BUF_MAX ? TMP_BUF_MAX : len__, "%s", buf__);\
++ if (tbuf[len__-2] == '\r') {\
++ tbuf[len__-2] = 'r';\
++ } \
++ DBG1("SENDING: '%s' (%d+n)", tbuf, len__);\
++ } else {\
++ DBG1("SENDING: '%s' (%d)", tbuf, len__);\
++ } \
++} while (0)
++
++/* Defines */
++#define NOZOMI_NAME "nozomi"
++#define NOZOMI_NAME_TTY "nozomi_tty"
++#define DRIVER_DESC "Nozomi driver"
++
++#define NTTY_TTY_MAXMINORS 256
++#define NTTY_FIFO_BUFFER_SIZE 8192
++
++/* Must be power of 2 */
++#define FIFO_BUFFER_SIZE_UL 8192
++
++/* Size of tmp send buffer to card */
++#define SEND_BUF_MAX 1024
++#define RECEIVE_BUF_MAX 4
++
++
++/* Define all types of vendors and devices to support */
++#define VENDOR1 0x1931 /* Vendor Option */
++#define DEVICE1 0x000c /* HSDPA card */
++
++#define R_IIR 0x0000 /* Interrupt Identity Register */
++#define R_FCR 0x0000 /* Flow Control Register */
++#define R_IER 0x0004 /* Interrupt Enable Register */
++
++#define CONFIG_MAGIC 0xEFEFFEFE
++#define TOGGLE_VALID 0x0000
++
++/* Definition of interrupt tokens */
++#define MDM_DL1 0x0001
++#define MDM_UL1 0x0002
++#define MDM_DL2 0x0004
++#define MDM_UL2 0x0008
++#define DIAG_DL1 0x0010
++#define DIAG_DL2 0x0020
++#define DIAG_UL 0x0040
++#define APP1_DL 0x0080
++#define APP1_UL 0x0100
++#define APP2_DL 0x0200
++#define APP2_UL 0x0400
++#define CTRL_DL 0x0800
++#define CTRL_UL 0x1000
++#define RESET 0x8000
++
++#define MDM_DL (MDM_DL1 | MDM_DL2)
++#define MDM_UL (MDM_UL1 | MDM_UL2)
++#define DIAG_DL (DIAG_DL1 | DIAG_DL2)
++
++/* modem signal definition */
++#define CTRL_DSR 0x0001
++#define CTRL_DCD 0x0002
++#define CTRL_RI 0x0004
++#define CTRL_CTS 0x0008
++
++#define CTRL_DTR 0x0001
++#define CTRL_RTS 0x0002
++
++#define MAX_PORT 4
++#define NOZOMI_MAX_PORTS 5
++#define NOZOMI_MAX_CARDS (NTTY_TTY_MAXMINORS / MAX_PORT)
++
++/* Type definitions */
++
++/*
++ * There are two types of nozomi cards,
++ * one with 2048 memory and with 8192 memory
++ */
++enum card_type {
++ F32_2 = 2048, /* 512 bytes downlink + uplink * 2 -> 2048 */
++ F32_8 = 8192, /* 3072 bytes downl. + 1024 bytes uplink * 2 -> 8192 */
++};
++
++/* Two different toggle channels exist */
++enum channel_type {
++ CH_A = 0,
++ CH_B = 1,
++};
++
++/* Port definition for the card regarding flow control */
++enum ctrl_port_type {
++ CTRL_CMD = 0,
++ CTRL_MDM = 1,
++ CTRL_DIAG = 2,
++ CTRL_APP1 = 3,
++ CTRL_APP2 = 4,
++ CTRL_ERROR = -1,
++};
++
++/* Ports that the nozomi has */
++enum port_type {
++ PORT_MDM = 0,
++ PORT_DIAG = 1,
++ PORT_APP1 = 2,
++ PORT_APP2 = 3,
++ PORT_CTRL = 4,
++ PORT_ERROR = -1,
++};
++
++#ifdef __BIG_ENDIAN
++/* Big endian */
++
++struct toggles {
++ unsigned enabled:5; /*
++ * Toggle fields are valid if enabled is 0,
++ * else A-channels must always be used.
++ */
++ unsigned diag_dl:1;
++ unsigned mdm_dl:1;
++ unsigned mdm_ul:1;
++} __attribute__ ((packed));
++
++/* Configuration table to read at startup of card */
++/* Is for now only needed during initialization phase */
++struct config_table {
++ u32 signature;
++ u16 product_information;
++ u16 version;
++ u8 pad3[3];
++ struct toggles toggle;
++ u8 pad1[4];
++ u16 dl_mdm_len1; /*
++ * If this is 64, it can hold
++ * 60 bytes + 4 that is length field
++ */
++ u16 dl_start;
++
++ u16 dl_diag_len1;
++ u16 dl_mdm_len2; /*
++ * If this is 64, it can hold
++ * 60 bytes + 4 that is length field
++ */
++ u16 dl_app1_len;
++
++ u16 dl_diag_len2;
++ u16 dl_ctrl_len;
++ u16 dl_app2_len;
++ u8 pad2[16];
++ u16 ul_mdm_len1;
++ u16 ul_start;
++ u16 ul_diag_len;
++ u16 ul_mdm_len2;
++ u16 ul_app1_len;
++ u16 ul_app2_len;
++ u16 ul_ctrl_len;
++} __attribute__ ((packed));
++
++/* This stores all control downlink flags */
++struct ctrl_dl {
++ u8 port;
++ unsigned reserved:4;
++ unsigned CTS:1;
++ unsigned RI:1;
++ unsigned DCD:1;
++ unsigned DSR:1;
++} __attribute__ ((packed));
++
++/* This stores all control uplink flags */
++struct ctrl_ul {
++ u8 port;
++ unsigned reserved:6;
++ unsigned RTS:1;
++ unsigned DTR:1;
++} __attribute__ ((packed));
++
++#else
++/* Little endian */
++
++/* This represents the toggle information */
++struct toggles {
++ unsigned mdm_ul:1;
++ unsigned mdm_dl:1;
++ unsigned diag_dl:1;
++ unsigned enabled:5; /*
++ * Toggle fields are valid if enabled is 0,
++ * else A-channels must always be used.
++ */
++} __attribute__ ((packed));
++
++/* Configuration table to read at startup of card */
++struct config_table {
++ u32 signature;
++ u16 version;
++ u16 product_information;
++ struct toggles toggle;
++ u8 pad1[7];
++ u16 dl_start;
++ u16 dl_mdm_len1; /*
++ * If this is 64, it can hold
++ * 60 bytes + 4 that is length field
++ */
++ u16 dl_mdm_len2;
++ u16 dl_diag_len1;
++ u16 dl_diag_len2;
++ u16 dl_app1_len;
++ u16 dl_app2_len;
++ u16 dl_ctrl_len;
++ u8 pad2[16];
++ u16 ul_start;
++ u16 ul_mdm_len2;
++ u16 ul_mdm_len1;
++ u16 ul_diag_len;
++ u16 ul_app1_len;
++ u16 ul_app2_len;
++ u16 ul_ctrl_len;
++} __attribute__ ((packed));
++
++/* This stores all control downlink flags */
++struct ctrl_dl {
++ unsigned DSR:1;
++ unsigned DCD:1;
++ unsigned RI:1;
++ unsigned CTS:1;
++ unsigned reserverd:4;
++ u8 port;
++} __attribute__ ((packed));
++
++/* This stores all control uplink flags */
++struct ctrl_ul {
++ unsigned DTR:1;
++ unsigned RTS:1;
++ unsigned reserved:6;
++ u8 port;
++} __attribute__ ((packed));
++#endif
++
++/* This holds all information that is needed regarding a port */
++struct port {
++ u8 update_flow_control;
++ struct ctrl_ul ctrl_ul;
++ struct ctrl_dl ctrl_dl;
++ struct kfifo *fifo_ul;
++ void __iomem *dl_addr[2];
++ u32 dl_size[2];
++ u8 toggle_dl;
++ void __iomem *ul_addr[2];
++ u32 ul_size[2];
++ u8 toggle_ul;
++ u16 token_dl;
++
++ struct tty_struct *tty;
++ int tty_open_count;
++ /* mutex to ensure one access patch to this port */
++ struct mutex tty_sem;
++ wait_queue_head_t tty_wait;
++ struct async_icount tty_icount;
++};
++
++/* Private data one for each card in the system */
++struct nozomi {
++ void __iomem *base_addr;
++ unsigned long flip;
++
++ /* Pointers to registers */
++ void __iomem *reg_iir;
++ void __iomem *reg_fcr;
++ void __iomem *reg_ier;
++
++ u16 last_ier;
++ enum card_type card_type;
++ struct config_table config_table; /* Configuration table */
++ struct pci_dev *pdev;
++ struct port port[NOZOMI_MAX_PORTS];
++ u8 *send_buf;
++
++ spinlock_t spin_mutex; /* secures access to registers and tty */
++
++ unsigned int index_start;
++ u32 open_ttys;
++};
++
++/* This is a data packet that is read or written to/from card */
++struct buffer {
++ u32 size; /* size is the length of the data buffer */
++ u8 *data;
++} __attribute__ ((packed));
++
++/* Global variables */
++static struct pci_device_id nozomi_pci_tbl[] = {
++ {PCI_DEVICE(VENDOR1, DEVICE1)},
++ {},
++};
++
++MODULE_DEVICE_TABLE(pci, nozomi_pci_tbl);
++
++static struct nozomi *ndevs[NOZOMI_MAX_CARDS];
++static struct tty_driver *ntty_driver;
++
++/*
++ * find card by tty_index
++ */
++static inline struct nozomi *get_dc_by_tty(const struct tty_struct *tty)
++{
++ return tty ? ndevs[tty->index / MAX_PORT] : NULL;
++}
++
++static inline struct port *get_port_by_tty(const struct tty_struct *tty)
++{
++ struct nozomi *ndev = get_dc_by_tty(tty);
++ return ndev ? &ndev->port[tty->index % MAX_PORT] : NULL;
++}
++
++/*
++ * TODO:
++ * -Optimize
++ * -Rewrite cleaner
++ */
++
++static void read_mem32(u32 *buf, const void __iomem *mem_addr_start,
++ u32 size_bytes)
++{
++ u32 i = 0;
++ const u32 *ptr = (__force u32 *) mem_addr_start;
++ u16 *buf16;
++
++ if (unlikely(!ptr || !buf))
++ goto out;
++
++ /* shortcut for extremely often used cases */
++ switch (size_bytes) {
++ case 2: /* 2 bytes */
++ buf16 = (u16 *) buf;
++ *buf16 = __le16_to_cpu(readw((void __iomem *)ptr));
++ goto out;
++ break;
++ case 4: /* 4 bytes */
++ *(buf) = __le32_to_cpu(readl((void __iomem *)ptr));
++ goto out;
++ break;
++ }
++
++ while (i < size_bytes) {
++ if (size_bytes - i == 2) {
++ /* Handle 2 bytes in the end */
++ buf16 = (u16 *) buf;
++ *(buf16) = __le16_to_cpu(readw((void __iomem *)ptr));
++ i += 2;
++ } else {
++ /* Read 4 bytes */
++ *(buf) = __le32_to_cpu(readl((void __iomem *)ptr));
++ i += 4;
++ }
++ buf++;
++ ptr++;
++ }
++out:
++ return;
++}
++
++/*
++ * TODO:
++ * -Optimize
++ * -Rewrite cleaner
++ */
++static u32 write_mem32(void __iomem *mem_addr_start, u32 *buf,
++ u32 size_bytes)
++{
++ u32 i = 0;
++ u32 *ptr = (__force u32 *) mem_addr_start;
++ u16 *buf16;
++
++ if (unlikely(!ptr || !buf))
++ return 0;
++
++ /* shortcut for extremely often used cases */
++ switch (size_bytes) {
++ case 2: /* 2 bytes */
++ buf16 = (u16 *) buf;
++ writew(__cpu_to_le16(*buf16), (void __iomem *)ptr);
++ return 2;
++ break;
++ case 1: /*
++ * also needs to write 4 bytes in this case
++ * so falling through..
++ */
++ case 4: /* 4 bytes */
++ writel(__cpu_to_le32(*buf), (void __iomem *)ptr);
++ return 4;
++ break;
++ }
++
++ while (i < size_bytes) {
++ if (size_bytes - i == 2) {
++ /* 2 bytes */
++ buf16 = (u16 *) buf;
++ writew(__cpu_to_le16(*buf16), (void __iomem *)ptr);
++ i += 2;
++ } else {
++ /* 4 bytes */
++ writel(__cpu_to_le32(*buf), (void __iomem *)ptr);
++ i += 4;
++ }
++ buf++;
++ ptr++;
++ }
++ return i;
++}
++
++/* Setup pointers to different channels and also setup buffer sizes. */
++static void setup_memory(struct nozomi *dc)
++{
++ void __iomem *offset = dc->base_addr + dc->config_table.dl_start;
++ /* The length reported is including the length field of 4 bytes,
++ * hence subtract with 4.
++ */
++ const u16 buff_offset = 4;
++
++ /* Modem port dl configuration */
++ dc->port[PORT_MDM].dl_addr[CH_A] = offset;
++ dc->port[PORT_MDM].dl_addr[CH_B] =
++ (offset += dc->config_table.dl_mdm_len1);
++ dc->port[PORT_MDM].dl_size[CH_A] =
++ dc->config_table.dl_mdm_len1 - buff_offset;
++ dc->port[PORT_MDM].dl_size[CH_B] =
++ dc->config_table.dl_mdm_len2 - buff_offset;
++
++ /* Diag port dl configuration */
++ dc->port[PORT_DIAG].dl_addr[CH_A] =
++ (offset += dc->config_table.dl_mdm_len2);
++ dc->port[PORT_DIAG].dl_size[CH_A] =
++ dc->config_table.dl_diag_len1 - buff_offset;
++ dc->port[PORT_DIAG].dl_addr[CH_B] =
++ (offset += dc->config_table.dl_diag_len1);
++ dc->port[PORT_DIAG].dl_size[CH_B] =
++ dc->config_table.dl_diag_len2 - buff_offset;
++
++ /* App1 port dl configuration */
++ dc->port[PORT_APP1].dl_addr[CH_A] =
++ (offset += dc->config_table.dl_diag_len2);
++ dc->port[PORT_APP1].dl_size[CH_A] =
++ dc->config_table.dl_app1_len - buff_offset;
++
++ /* App2 port dl configuration */
++ dc->port[PORT_APP2].dl_addr[CH_A] =
++ (offset += dc->config_table.dl_app1_len);
++ dc->port[PORT_APP2].dl_size[CH_A] =
++ dc->config_table.dl_app2_len - buff_offset;
++
++ /* Ctrl dl configuration */
++ dc->port[PORT_CTRL].dl_addr[CH_A] =
++ (offset += dc->config_table.dl_app2_len);
++ dc->port[PORT_CTRL].dl_size[CH_A] =
++ dc->config_table.dl_ctrl_len - buff_offset;
++
++ offset = dc->base_addr + dc->config_table.ul_start;
++
++ /* Modem Port ul configuration */
++ dc->port[PORT_MDM].ul_addr[CH_A] = offset;
++ dc->port[PORT_MDM].ul_size[CH_A] =
++ dc->config_table.ul_mdm_len1 - buff_offset;
++ dc->port[PORT_MDM].ul_addr[CH_B] =
++ (offset += dc->config_table.ul_mdm_len1);
++ dc->port[PORT_MDM].ul_size[CH_B] =
++ dc->config_table.ul_mdm_len2 - buff_offset;
++
++ /* Diag port ul configuration */
++ dc->port[PORT_DIAG].ul_addr[CH_A] =
++ (offset += dc->config_table.ul_mdm_len2);
++ dc->port[PORT_DIAG].ul_size[CH_A] =
++ dc->config_table.ul_diag_len - buff_offset;
++
++ /* App1 port ul configuration */
++ dc->port[PORT_APP1].ul_addr[CH_A] =
++ (offset += dc->config_table.ul_diag_len);
++ dc->port[PORT_APP1].ul_size[CH_A] =
++ dc->config_table.ul_app1_len - buff_offset;
++
++ /* App2 port ul configuration */
++ dc->port[PORT_APP2].ul_addr[CH_A] =
++ (offset += dc->config_table.ul_app1_len);
++ dc->port[PORT_APP2].ul_size[CH_A] =
++ dc->config_table.ul_app2_len - buff_offset;
++
++ /* Ctrl ul configuration */
++ dc->port[PORT_CTRL].ul_addr[CH_A] =
++ (offset += dc->config_table.ul_app2_len);
++ dc->port[PORT_CTRL].ul_size[CH_A] =
++ dc->config_table.ul_ctrl_len - buff_offset;
++}
++
++/* Dump config table under initalization phase */
++#ifdef DEBUG
++static void dump_table(const struct nozomi *dc)
++{
++ DBG3("signature: 0x%08X", dc->config_table.signature);
++ DBG3("version: 0x%04X", dc->config_table.version);
++ DBG3("product_information: 0x%04X", \
++ dc->config_table.product_information);
++ DBG3("toggle enabled: %d", dc->config_table.toggle.enabled);
++ DBG3("toggle up_mdm: %d", dc->config_table.toggle.mdm_ul);
++ DBG3("toggle dl_mdm: %d", dc->config_table.toggle.mdm_dl);
++ DBG3("toggle dl_dbg: %d", dc->config_table.toggle.diag_dl);
++
++ DBG3("dl_start: 0x%04X", dc->config_table.dl_start);
++ DBG3("dl_mdm_len0: 0x%04X, %d", dc->config_table.dl_mdm_len1,
++ dc->config_table.dl_mdm_len1);
++ DBG3("dl_mdm_len1: 0x%04X, %d", dc->config_table.dl_mdm_len2,
++ dc->config_table.dl_mdm_len2);
++ DBG3("dl_diag_len0: 0x%04X, %d", dc->config_table.dl_diag_len1,
++ dc->config_table.dl_diag_len1);
++ DBG3("dl_diag_len1: 0x%04X, %d", dc->config_table.dl_diag_len2,
++ dc->config_table.dl_diag_len2);
++ DBG3("dl_app1_len: 0x%04X, %d", dc->config_table.dl_app1_len,
++ dc->config_table.dl_app1_len);
++ DBG3("dl_app2_len: 0x%04X, %d", dc->config_table.dl_app2_len,
++ dc->config_table.dl_app2_len);
++ DBG3("dl_ctrl_len: 0x%04X, %d", dc->config_table.dl_ctrl_len,
++ dc->config_table.dl_ctrl_len);
++ DBG3("ul_start: 0x%04X, %d", dc->config_table.ul_start,
++ dc->config_table.ul_start);
++ DBG3("ul_mdm_len[0]: 0x%04X, %d", dc->config_table.ul_mdm_len1,
++ dc->config_table.ul_mdm_len1);
++ DBG3("ul_mdm_len[1]: 0x%04X, %d", dc->config_table.ul_mdm_len2,
++ dc->config_table.ul_mdm_len2);
++ DBG3("ul_diag_len: 0x%04X, %d", dc->config_table.ul_diag_len,
++ dc->config_table.ul_diag_len);
++ DBG3("ul_app1_len: 0x%04X, %d", dc->config_table.ul_app1_len,
++ dc->config_table.ul_app1_len);
++ DBG3("ul_app2_len: 0x%04X, %d", dc->config_table.ul_app2_len,
++ dc->config_table.ul_app2_len);
++ DBG3("ul_ctrl_len: 0x%04X, %d", dc->config_table.ul_ctrl_len,
++ dc->config_table.ul_ctrl_len);
++}
++#else
++static __inline__ void dump_table(const struct nozomi *dc) { }
++#endif
++
++/*
++ * Read configuration table from card under intalization phase
++ * Returns 1 if ok, else 0
++ */
++static int nozomi_read_config_table(struct nozomi *dc)
++{
++ read_mem32((u32 *) &dc->config_table, dc->base_addr + 0,
++ sizeof(struct config_table));
++
++ if (dc->config_table.signature != CONFIG_MAGIC) {
++ dev_err(&dc->pdev->dev, "ConfigTable Bad! 0x%08X != 0x%08X\n",
++ dc->config_table.signature, CONFIG_MAGIC);
++ return 0;
++ }
++
++ if ((dc->config_table.version == 0)
++ || (dc->config_table.toggle.enabled == TOGGLE_VALID)) {
++ int i;
++ DBG1("Second phase, configuring card");
++
++ setup_memory(dc);
++
++ dc->port[PORT_MDM].toggle_ul = dc->config_table.toggle.mdm_ul;
++ dc->port[PORT_MDM].toggle_dl = dc->config_table.toggle.mdm_dl;
++ dc->port[PORT_DIAG].toggle_dl = dc->config_table.toggle.diag_dl;
++ DBG1("toggle ports: MDM UL:%d MDM DL:%d, DIAG DL:%d",
++ dc->port[PORT_MDM].toggle_ul,
++ dc->port[PORT_MDM].toggle_dl, dc->port[PORT_DIAG].toggle_dl);
++
++ dump_table(dc);
++
++ for (i = PORT_MDM; i < MAX_PORT; i++) {
++ dc->port[i].fifo_ul =
++ kfifo_alloc(FIFO_BUFFER_SIZE_UL, GFP_ATOMIC, NULL);
++ memset(&dc->port[i].ctrl_dl, 0, sizeof(struct ctrl_dl));
++ memset(&dc->port[i].ctrl_ul, 0, sizeof(struct ctrl_ul));
++ }
++
++ /* Enable control channel */
++ dc->last_ier = dc->last_ier | CTRL_DL;
++ writew(dc->last_ier, dc->reg_ier);
++
++ dev_info(&dc->pdev->dev, "Initialization OK!\n");
++ return 1;
++ }
++
++ if ((dc->config_table.version > 0)
++ && (dc->config_table.toggle.enabled != TOGGLE_VALID)) {
++ u32 offset = 0;
++ DBG1("First phase: pushing upload buffers, clearing download");
++
++ dev_info(&dc->pdev->dev, "Version of card: %d\n",
++ dc->config_table.version);
++
++ /* Here we should disable all I/O over F32. */
++ setup_memory(dc);
++
++ /*
++ * We should send ALL channel pair tokens back along
++ * with reset token
++ */
++
++ /* push upload modem buffers */
++ write_mem32(dc->port[PORT_MDM].ul_addr[CH_A],
++ (u32 *) &offset, 4);
++ write_mem32(dc->port[PORT_MDM].ul_addr[CH_B],
++ (u32 *) &offset, 4);
++
++ writew(MDM_UL | DIAG_DL | MDM_DL, dc->reg_fcr);
++
++ DBG1("First phase done");
++ }
++
++ return 1;
++}
++
++/* Enable uplink interrupts */
++static void enable_transmit_ul(enum port_type port, struct nozomi *dc)
++{
++ u16 mask[NOZOMI_MAX_PORTS] = \
++ {MDM_UL, DIAG_UL, APP1_UL, APP2_UL, CTRL_UL};
++
++ if (port < NOZOMI_MAX_PORTS) {
++ dc->last_ier |= mask[port];
++ writew(dc->last_ier, dc->reg_ier);
++ } else {
++ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
++ }
++}
++
++/* Disable uplink interrupts */
++static void disable_transmit_ul(enum port_type port, struct nozomi *dc)
++{
++ u16 mask[NOZOMI_MAX_PORTS] = \
++ {~MDM_UL, ~DIAG_UL, ~APP1_UL, ~APP2_UL, ~CTRL_UL};
++
++ if (port < NOZOMI_MAX_PORTS) {
++ dc->last_ier &= mask[port];
++ writew(dc->last_ier, dc->reg_ier);
++ } else {
++ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
++ }
++}
++
++/* Enable downlink interrupts */
++static void enable_transmit_dl(enum port_type port, struct nozomi *dc)
++{
++ u16 mask[NOZOMI_MAX_PORTS] = \
++ {MDM_DL, DIAG_DL, APP1_DL, APP2_DL, CTRL_DL};
++
++ if (port < NOZOMI_MAX_PORTS) {
++ dc->last_ier |= mask[port];
++ writew(dc->last_ier, dc->reg_ier);
++ } else {
++ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
++ }
++}
++
++/* Disable downlink interrupts */
++static void disable_transmit_dl(enum port_type port, struct nozomi *dc)
++{
++	u16 mask[NOZOMI_MAX_PORTS] =
++		{~MDM_DL, ~DIAG_DL, ~APP1_DL, ~APP2_DL, ~CTRL_DL};
++
++ if (port < NOZOMI_MAX_PORTS) {
++ dc->last_ier &= mask[port];
++ writew(dc->last_ier, dc->reg_ier);
++ } else {
++ dev_err(&dc->pdev->dev, "Called with wrong port?\n");
++ }
++}
++
++/*
++ * Return 1 - send buffer to card and ack.
++ * Return 0 - don't ack, don't send buffer to card.
++ */
++static int send_data(enum port_type index, struct nozomi *dc)
++{
++ u32 size = 0;
++ struct port *port = &dc->port[index];
++ u8 toggle = port->toggle_ul;
++ void __iomem *addr = port->ul_addr[toggle];
++ u32 ul_size = port->ul_size[toggle];
++ struct tty_struct *tty = port->tty;
++
++ /* Get data from tty and place in buf for now */
++ size = __kfifo_get(port->fifo_ul, dc->send_buf,
++ ul_size < SEND_BUF_MAX ? ul_size : SEND_BUF_MAX);
++
++ if (size == 0) {
++ DBG4("No more data to send, disable link:");
++ return 0;
++ }
++
++ /* DUMP(buf, size); */
++
++ /* Write length + data */
++ write_mem32(addr, (u32 *) &size, 4);
++ write_mem32(addr + 4, (u32 *) dc->send_buf, size);
++
++ if (tty)
++ tty_wakeup(tty);
++
++ return 1;
++}
++
++/* If all data has been read, return 1, else 0 */
++static int receive_data(enum port_type index, struct nozomi *dc)
++{
++ u8 buf[RECEIVE_BUF_MAX] = { 0 };
++ int size;
++ u32 offset = 4;
++ struct port *port = &dc->port[index];
++ void __iomem *addr = port->dl_addr[port->toggle_dl];
++ struct tty_struct *tty = port->tty;
++ int i;
++
++ if (unlikely(!tty)) {
++ DBG1("tty not open for port: %d?", index);
++ return 1;
++ }
++
++ read_mem32((u32 *) &size, addr, 4);
++ /* DBG1( "%d bytes port: %d", size, index); */
++
++ if (test_bit(TTY_THROTTLED, &tty->flags)) {
++ DBG1("No room in tty, don't read data, don't ack interrupt, "
++ "disable interrupt");
++
++ /* disable interrupt in downlink... */
++ disable_transmit_dl(index, dc);
++ return 0;
++ }
++
++ if (unlikely(size == 0)) {
++ dev_err(&dc->pdev->dev, "size == 0?\n");
++ return 1;
++ }
++
++ tty_buffer_request_room(tty, size);
++
++ while (size > 0) {
++ read_mem32((u32 *) buf, addr + offset, RECEIVE_BUF_MAX);
++
++ if (size == 1) {
++ tty_insert_flip_char(tty, buf[0], TTY_NORMAL);
++ size = 0;
++ } else if (size < RECEIVE_BUF_MAX) {
++ size -= tty_insert_flip_string(tty, (char *) buf, size);
++ } else {
++			i = tty_insert_flip_string(tty,
++					(char *) buf, RECEIVE_BUF_MAX);
++ size -= i;
++ offset += i;
++ }
++ }
++
++ set_bit(index, &dc->flip);
++
++ return 1;
++}
++
++/* Debug for interrupts */
++#ifdef DEBUG
++static char *interrupt2str(u16 interrupt)
++{
++ static char buf[TMP_BUF_MAX];
++ char *p = buf;
++
++ interrupt & MDM_DL1 ? p += snprintf(p, TMP_BUF_MAX, "MDM_DL1 ") : NULL;
++ interrupt & MDM_DL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "MDM_DL2 ") : NULL;
++
++ interrupt & MDM_UL1 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "MDM_UL1 ") : NULL;
++ interrupt & MDM_UL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "MDM_UL2 ") : NULL;
++
++ interrupt & DIAG_DL1 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "DIAG_DL1 ") : NULL;
++ interrupt & DIAG_DL2 ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "DIAG_DL2 ") : NULL;
++
++ interrupt & DIAG_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "DIAG_UL ") : NULL;
++
++ interrupt & APP1_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "APP1_DL ") : NULL;
++ interrupt & APP2_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "APP2_DL ") : NULL;
++
++ interrupt & APP1_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "APP1_UL ") : NULL;
++ interrupt & APP2_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "APP2_UL ") : NULL;
++
++ interrupt & CTRL_DL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "CTRL_DL ") : NULL;
++ interrupt & CTRL_UL ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "CTRL_UL ") : NULL;
++
++ interrupt & RESET ? p += snprintf(p, TMP_BUF_MAX - (p - buf),
++ "RESET ") : NULL;
++
++ return buf;
++}
++#endif
++
++/*
++ * Receive flow control
++ * Return 1 - If ok, else 0
++ */
++static int receive_flow_control(struct nozomi *dc)
++{
++ enum port_type port = PORT_MDM;
++ struct ctrl_dl ctrl_dl;
++ struct ctrl_dl old_ctrl;
++ u16 enable_ier = 0;
++
++ read_mem32((u32 *) &ctrl_dl, dc->port[PORT_CTRL].dl_addr[CH_A], 2);
++
++ switch (ctrl_dl.port) {
++ case CTRL_CMD:
++ DBG1("The Base Band sends this value as a response to a "
++ "request for IMSI detach sent over the control "
++ "channel uplink (see section 7.6.1).");
++ break;
++ case CTRL_MDM:
++ port = PORT_MDM;
++ enable_ier = MDM_DL;
++ break;
++ case CTRL_DIAG:
++ port = PORT_DIAG;
++ enable_ier = DIAG_DL;
++ break;
++ case CTRL_APP1:
++ port = PORT_APP1;
++ enable_ier = APP1_DL;
++ break;
++ case CTRL_APP2:
++ port = PORT_APP2;
++ enable_ier = APP2_DL;
++ break;
++ default:
++ dev_err(&dc->pdev->dev,
++ "ERROR: flow control received for non-existing port\n");
++ return 0;
++	}
++
++ DBG1("0x%04X->0x%04X", *((u16 *)&dc->port[port].ctrl_dl),
++ *((u16 *)&ctrl_dl));
++
++ old_ctrl = dc->port[port].ctrl_dl;
++ dc->port[port].ctrl_dl = ctrl_dl;
++
++ if (old_ctrl.CTS == 1 && ctrl_dl.CTS == 0) {
++ DBG1("Disable interrupt (0x%04X) on port: %d",
++ enable_ier, port);
++ disable_transmit_ul(port, dc);
++
++ } else if (old_ctrl.CTS == 0 && ctrl_dl.CTS == 1) {
++
++ if (__kfifo_len(dc->port[port].fifo_ul)) {
++ DBG1("Enable interrupt (0x%04X) on port: %d",
++ enable_ier, port);
++ DBG1("Data in buffer [%d], enable transmit! ",
++ __kfifo_len(dc->port[port].fifo_ul));
++ enable_transmit_ul(port, dc);
++ } else {
++ DBG1("No data in buffer...");
++ }
++ }
++
++ if (*(u16 *)&old_ctrl == *(u16 *)&ctrl_dl) {
++ DBG1(" No change in mctrl");
++ return 1;
++ }
++ /* Update statistics */
++ if (old_ctrl.CTS != ctrl_dl.CTS)
++ dc->port[port].tty_icount.cts++;
++ if (old_ctrl.DSR != ctrl_dl.DSR)
++ dc->port[port].tty_icount.dsr++;
++ if (old_ctrl.RI != ctrl_dl.RI)
++ dc->port[port].tty_icount.rng++;
++ if (old_ctrl.DCD != ctrl_dl.DCD)
++ dc->port[port].tty_icount.dcd++;
++
++ wake_up_interruptible(&dc->port[port].tty_wait);
++
++ DBG1("port: %d DCD(%d), CTS(%d), RI(%d), DSR(%d)",
++ port,
++ dc->port[port].tty_icount.dcd, dc->port[port].tty_icount.cts,
++ dc->port[port].tty_icount.rng, dc->port[port].tty_icount.dsr);
++
++ return 1;
++}
++
++static enum ctrl_port_type port2ctrl(enum port_type port,
++ const struct nozomi *dc)
++{
++ switch (port) {
++ case PORT_MDM:
++ return CTRL_MDM;
++ case PORT_DIAG:
++ return CTRL_DIAG;
++ case PORT_APP1:
++ return CTRL_APP1;
++ case PORT_APP2:
++ return CTRL_APP2;
++ default:
++ dev_err(&dc->pdev->dev,
++			"ERROR: send flow control "
++			"received for non-existing port\n");
++	}
++ return CTRL_ERROR;
++}
++
++/*
++ * Send flow control, can only update one channel at a time
++ * Return 0 - If we have updated all flow control
++ * Return 1 - If more flow control remains to be updated; ack current and enable more
++ */
++static int send_flow_control(struct nozomi *dc)
++{
++ u32 i, more_flow_control_to_be_updated = 0;
++ u16 *ctrl;
++
++ for (i = PORT_MDM; i < MAX_PORT; i++) {
++ if (dc->port[i].update_flow_control) {
++ if (more_flow_control_to_be_updated) {
++ /* We have more flow control to be updated */
++ return 1;
++ }
++ dc->port[i].ctrl_ul.port = port2ctrl(i, dc);
++ ctrl = (u16 *)&dc->port[i].ctrl_ul;
++			write_mem32(dc->port[PORT_CTRL].ul_addr[0],
++					(u32 *) ctrl, 2);
++ dc->port[i].update_flow_control = 0;
++ more_flow_control_to_be_updated = 1;
++ }
++ }
++ return 0;
++}
++
++/*
++ * Handle downlink data; the ports handled are modem and diagnostics
++ * Return 1 - ok
++ * Return 0 - toggle fields are out of sync
++ */
++static int handle_data_dl(struct nozomi *dc, enum port_type port, u8 *toggle,
++ u16 read_iir, u16 mask1, u16 mask2)
++{
++ if (*toggle == 0 && read_iir & mask1) {
++ if (receive_data(port, dc)) {
++ writew(mask1, dc->reg_fcr);
++ *toggle = !(*toggle);
++ }
++
++ if (read_iir & mask2) {
++ if (receive_data(port, dc)) {
++ writew(mask2, dc->reg_fcr);
++ *toggle = !(*toggle);
++ }
++ }
++ } else if (*toggle == 1 && read_iir & mask2) {
++ if (receive_data(port, dc)) {
++ writew(mask2, dc->reg_fcr);
++ *toggle = !(*toggle);
++ }
++
++ if (read_iir & mask1) {
++ if (receive_data(port, dc)) {
++ writew(mask1, dc->reg_fcr);
++ *toggle = !(*toggle);
++ }
++ }
++ } else {
++ dev_err(&dc->pdev->dev, "port out of sync!, toggle:%d\n",
++ *toggle);
++ return 0;
++ }
++ return 1;
++}
++
++/*
++ * Handle uplink data, this is currently for the modem port
++ * Return 1 - ok
++ * Return 0 - toggle fields are out of sync
++ */
++static int handle_data_ul(struct nozomi *dc, enum port_type port, u16 read_iir)
++{
++ u8 *toggle = &(dc->port[port].toggle_ul);
++
++ if (*toggle == 0 && read_iir & MDM_UL1) {
++ dc->last_ier &= ~MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(port, dc)) {
++ writew(MDM_UL1, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ *toggle = !*toggle;
++ }
++
++ if (read_iir & MDM_UL2) {
++ dc->last_ier &= ~MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(port, dc)) {
++ writew(MDM_UL2, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ *toggle = !*toggle;
++ }
++ }
++
++ } else if (*toggle == 1 && read_iir & MDM_UL2) {
++ dc->last_ier &= ~MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(port, dc)) {
++ writew(MDM_UL2, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ *toggle = !*toggle;
++ }
++
++ if (read_iir & MDM_UL1) {
++ dc->last_ier &= ~MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(port, dc)) {
++ writew(MDM_UL1, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | MDM_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ *toggle = !*toggle;
++ }
++ }
++ } else {
++ writew(read_iir & MDM_UL, dc->reg_fcr);
++ dev_err(&dc->pdev->dev, "port out of sync!\n");
++ return 0;
++ }
++ return 1;
++}
++
++static irqreturn_t interrupt_handler(int irq, void *dev_id)
++{
++ struct nozomi *dc = dev_id;
++ unsigned int a;
++ u16 read_iir;
++
++ if (!dc)
++ return IRQ_NONE;
++
++ spin_lock(&dc->spin_mutex);
++ read_iir = readw(dc->reg_iir);
++
++ /* Card removed */
++ if (read_iir == (u16)-1)
++ goto none;
++ /*
++ * Just handle interrupt enabled in IER
++ * (by masking with dc->last_ier)
++ */
++ read_iir &= dc->last_ier;
++
++ if (read_iir == 0)
++ goto none;
++
++
++ DBG4("%s irq:0x%04X, prev:0x%04X", interrupt2str(read_iir), read_iir,
++ dc->last_ier);
++
++ if (read_iir & RESET) {
++ if (unlikely(!nozomi_read_config_table(dc))) {
++ dc->last_ier = 0x0;
++ writew(dc->last_ier, dc->reg_ier);
++ dev_err(&dc->pdev->dev, "Could not read status from "
++ "card, we should disable interface\n");
++ } else {
++ writew(RESET, dc->reg_fcr);
++ }
++ /* No more useful info if this was the reset interrupt. */
++ goto exit_handler;
++ }
++ if (read_iir & CTRL_UL) {
++ DBG1("CTRL_UL");
++ dc->last_ier &= ~CTRL_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_flow_control(dc)) {
++ writew(CTRL_UL, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | CTRL_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ }
++ }
++ if (read_iir & CTRL_DL) {
++ receive_flow_control(dc);
++ writew(CTRL_DL, dc->reg_fcr);
++ }
++ if (read_iir & MDM_DL) {
++ if (!handle_data_dl(dc, PORT_MDM,
++ &(dc->port[PORT_MDM].toggle_dl), read_iir,
++ MDM_DL1, MDM_DL2)) {
++ dev_err(&dc->pdev->dev, "MDM_DL out of sync!\n");
++ goto exit_handler;
++ }
++ }
++ if (read_iir & MDM_UL) {
++ if (!handle_data_ul(dc, PORT_MDM, read_iir)) {
++ dev_err(&dc->pdev->dev, "MDM_UL out of sync!\n");
++ goto exit_handler;
++ }
++ }
++ if (read_iir & DIAG_DL) {
++ if (!handle_data_dl(dc, PORT_DIAG,
++ &(dc->port[PORT_DIAG].toggle_dl), read_iir,
++ DIAG_DL1, DIAG_DL2)) {
++ dev_err(&dc->pdev->dev, "DIAG_DL out of sync!\n");
++ goto exit_handler;
++ }
++ }
++ if (read_iir & DIAG_UL) {
++ dc->last_ier &= ~DIAG_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(PORT_DIAG, dc)) {
++ writew(DIAG_UL, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | DIAG_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ }
++ }
++ if (read_iir & APP1_DL) {
++ if (receive_data(PORT_APP1, dc))
++ writew(APP1_DL, dc->reg_fcr);
++ }
++ if (read_iir & APP1_UL) {
++ dc->last_ier &= ~APP1_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(PORT_APP1, dc)) {
++ writew(APP1_UL, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | APP1_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ }
++ }
++ if (read_iir & APP2_DL) {
++ if (receive_data(PORT_APP2, dc))
++ writew(APP2_DL, dc->reg_fcr);
++ }
++ if (read_iir & APP2_UL) {
++ dc->last_ier &= ~APP2_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ if (send_data(PORT_APP2, dc)) {
++ writew(APP2_UL, dc->reg_fcr);
++ dc->last_ier = dc->last_ier | APP2_UL;
++ writew(dc->last_ier, dc->reg_ier);
++ }
++ }
++
++exit_handler:
++ spin_unlock(&dc->spin_mutex);
++ for (a = 0; a < NOZOMI_MAX_PORTS; a++)
++ if (test_and_clear_bit(a, &dc->flip))
++ tty_flip_buffer_push(dc->port[a].tty);
++ return IRQ_HANDLED;
++none:
++ spin_unlock(&dc->spin_mutex);
++ return IRQ_NONE;
++}
++
++static void nozomi_get_card_type(struct nozomi *dc)
++{
++ int i;
++ u32 size = 0;
++
++ for (i = 0; i < 6; i++)
++ size += pci_resource_len(dc->pdev, i);
++
++ /* Assume card type F32_8 if no match */
++ dc->card_type = size == 2048 ? F32_2 : F32_8;
++
++ dev_info(&dc->pdev->dev, "Card type is: %d\n", dc->card_type);
++}
++
++static void nozomi_setup_private_data(struct nozomi *dc)
++{
++ void __iomem *offset = dc->base_addr + dc->card_type / 2;
++ unsigned int i;
++
++ dc->reg_fcr = (void __iomem *)(offset + R_FCR);
++ dc->reg_iir = (void __iomem *)(offset + R_IIR);
++ dc->reg_ier = (void __iomem *)(offset + R_IER);
++ dc->last_ier = 0;
++ dc->flip = 0;
++
++ dc->port[PORT_MDM].token_dl = MDM_DL;
++ dc->port[PORT_DIAG].token_dl = DIAG_DL;
++ dc->port[PORT_APP1].token_dl = APP1_DL;
++ dc->port[PORT_APP2].token_dl = APP2_DL;
++
++ for (i = 0; i < MAX_PORT; i++)
++ init_waitqueue_head(&dc->port[i].tty_wait);
++}
++
++static ssize_t card_type_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct nozomi *dc = pci_get_drvdata(to_pci_dev(dev));
++
++ return sprintf(buf, "%d\n", dc->card_type);
++}
++static DEVICE_ATTR(card_type, 0444, card_type_show, NULL);
++
++static ssize_t open_ttys_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct nozomi *dc = pci_get_drvdata(to_pci_dev(dev));
++
++ return sprintf(buf, "%u\n", dc->open_ttys);
++}
++static DEVICE_ATTR(open_ttys, 0444, open_ttys_show, NULL);
++
++static void make_sysfs_files(struct nozomi *dc)
++{
++ if (device_create_file(&dc->pdev->dev, &dev_attr_card_type))
++ dev_err(&dc->pdev->dev,
++ "Could not create sysfs file for card_type\n");
++ if (device_create_file(&dc->pdev->dev, &dev_attr_open_ttys))
++ dev_err(&dc->pdev->dev,
++ "Could not create sysfs file for open_ttys\n");
++}
++
++static void remove_sysfs_files(struct nozomi *dc)
++{
++ device_remove_file(&dc->pdev->dev, &dev_attr_card_type);
++ device_remove_file(&dc->pdev->dev, &dev_attr_open_ttys);
++}
++
++/* Allocate memory for one device */
++static int __devinit nozomi_card_init(struct pci_dev *pdev,
++ const struct pci_device_id *ent)
++{
++ resource_size_t start;
++ int ret;
++ struct nozomi *dc = NULL;
++ int ndev_idx;
++ int i;
++
++ dev_dbg(&pdev->dev, "Init, new card found\n");
++
++ for (ndev_idx = 0; ndev_idx < ARRAY_SIZE(ndevs); ndev_idx++)
++ if (!ndevs[ndev_idx])
++ break;
++
++ if (ndev_idx >= ARRAY_SIZE(ndevs)) {
++ dev_err(&pdev->dev, "no free tty range for this card left\n");
++ ret = -EIO;
++ goto err;
++ }
++
++ dc = kzalloc(sizeof(struct nozomi), GFP_KERNEL);
++ if (unlikely(!dc)) {
++ dev_err(&pdev->dev, "Could not allocate memory\n");
++ ret = -ENOMEM;
++ goto err_free;
++ }
++
++ dc->pdev = pdev;
++
++ /* Find out what card type it is */
++ nozomi_get_card_type(dc);
++
++ ret = pci_enable_device(dc->pdev);
++ if (ret) {
++ dev_err(&pdev->dev, "Failed to enable PCI Device\n");
++ goto err_free;
++ }
++
++ start = pci_resource_start(dc->pdev, 0);
++ if (start == 0) {
++ dev_err(&pdev->dev, "No I/O address for card detected\n");
++ ret = -ENODEV;
++ goto err_disable_device;
++ }
++
++ ret = pci_request_regions(dc->pdev, NOZOMI_NAME);
++ if (ret) {
++ dev_err(&pdev->dev, "I/O address 0x%04x already in use\n",
++ (int) /* nozomi_private.io_addr */ 0);
++ goto err_disable_device;
++ }
++
++ dc->base_addr = ioremap(start, dc->card_type);
++ if (!dc->base_addr) {
++ dev_err(&pdev->dev, "Unable to map card MMIO\n");
++ ret = -ENODEV;
++ goto err_rel_regs;
++ }
++
++ dc->send_buf = kmalloc(SEND_BUF_MAX, GFP_KERNEL);
++ if (!dc->send_buf) {
++ dev_err(&pdev->dev, "Could not allocate send buffer?\n");
++ ret = -ENOMEM;
++ goto err_free_sbuf;
++ }
++
++ spin_lock_init(&dc->spin_mutex);
++
++ nozomi_setup_private_data(dc);
++
++ /* Disable all interrupts */
++ dc->last_ier = 0;
++ writew(dc->last_ier, dc->reg_ier);
++
++ ret = request_irq(pdev->irq, &interrupt_handler, IRQF_SHARED,
++ NOZOMI_NAME, dc);
++ if (unlikely(ret)) {
++ dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
++ goto err_free_sbuf;
++ }
++
++ DBG1("base_addr: %p", dc->base_addr);
++
++ make_sysfs_files(dc);
++
++ dc->index_start = ndev_idx * MAX_PORT;
++ ndevs[ndev_idx] = dc;
++
++ for (i = 0; i < MAX_PORT; i++) {
++ mutex_init(&dc->port[i].tty_sem);
++ dc->port[i].tty_open_count = 0;
++ dc->port[i].tty = NULL;
++ tty_register_device(ntty_driver, dc->index_start + i,
++ &pdev->dev);
++ }
++
++ /* Enable RESET interrupt. */
++ dc->last_ier = RESET;
++ writew(dc->last_ier, dc->reg_ier);
++
++ pci_set_drvdata(pdev, dc);
++
++ return 0;
++
++err_free_sbuf:
++ kfree(dc->send_buf);
++ iounmap(dc->base_addr);
++err_rel_regs:
++ pci_release_regions(pdev);
++err_disable_device:
++ pci_disable_device(pdev);
++err_free:
++ kfree(dc);
++err:
++ return ret;
++}
++
++static void __devexit tty_exit(struct nozomi *dc)
++{
++ unsigned int i;
++
++ DBG1(" ");
++
++ flush_scheduled_work();
++
++ for (i = 0; i < MAX_PORT; ++i)
++		if (dc->port[i].tty &&
++				list_empty(&dc->port[i].tty->hangup_work.entry))
++ tty_hangup(dc->port[i].tty);
++
++ while (dc->open_ttys)
++ msleep(1);
++
++ for (i = dc->index_start; i < dc->index_start + MAX_PORT; ++i)
++ tty_unregister_device(ntty_driver, i);
++}
++
++/* Deallocate memory for one device */
++static void __devexit nozomi_card_exit(struct pci_dev *pdev)
++{
++ int i;
++ struct ctrl_ul ctrl;
++ struct nozomi *dc = pci_get_drvdata(pdev);
++
++ /* Disable all interrupts */
++ dc->last_ier = 0;
++ writew(dc->last_ier, dc->reg_ier);
++
++ tty_exit(dc);
++
++ /* Send 0x0001, command card to resend the reset token. */
++ /* This is to get the reset when the module is reloaded. */
++ ctrl.port = 0x00;
++ ctrl.reserved = 0;
++ ctrl.RTS = 0;
++ ctrl.DTR = 1;
++ DBG1("sending flow control 0x%04X", *((u16 *)&ctrl));
++
++	/* Set up dc->reg addresses so we can use defines here */
++ write_mem32(dc->port[PORT_CTRL].ul_addr[0], (u32 *)&ctrl, 2);
++ writew(CTRL_UL, dc->reg_fcr); /* push the token to the card. */
++
++ remove_sysfs_files(dc);
++
++ free_irq(pdev->irq, dc);
++
++ for (i = 0; i < MAX_PORT; i++)
++ if (dc->port[i].fifo_ul)
++ kfifo_free(dc->port[i].fifo_ul);
++
++ kfree(dc->send_buf);
++
++ iounmap(dc->base_addr);
++
++ pci_release_regions(pdev);
++
++ pci_disable_device(pdev);
++
++ ndevs[dc->index_start / MAX_PORT] = NULL;
++
++ kfree(dc);
++}
++
++static void set_rts(const struct tty_struct *tty, int rts)
++{
++ struct port *port = get_port_by_tty(tty);
++
++ port->ctrl_ul.RTS = rts;
++ port->update_flow_control = 1;
++ enable_transmit_ul(PORT_CTRL, get_dc_by_tty(tty));
++}
++
++static void set_dtr(const struct tty_struct *tty, int dtr)
++{
++ struct port *port = get_port_by_tty(tty);
++
++ DBG1("SETTING DTR index: %d, dtr: %d", tty->index, dtr);
++
++ port->ctrl_ul.DTR = dtr;
++ port->update_flow_control = 1;
++ enable_transmit_ul(PORT_CTRL, get_dc_by_tty(tty));
++}
++
++/*
++ * ----------------------------------------------------------------------------
++ * TTY code
++ * ----------------------------------------------------------------------------
++ */
++
++/* Called when the userspace process opens the tty, /dev/noz*. */
++static int ntty_open(struct tty_struct *tty, struct file *file)
++{
++ struct port *port = get_port_by_tty(tty);
++ struct nozomi *dc = get_dc_by_tty(tty);
++ unsigned long flags;
++
++ if (!port || !dc)
++ return -ENODEV;
++
++ if (mutex_lock_interruptible(&port->tty_sem))
++ return -ERESTARTSYS;
++
++ port->tty_open_count++;
++ dc->open_ttys++;
++
++ /* Enable interrupt downlink for channel */
++ if (port->tty_open_count == 1) {
++ tty->low_latency = 1;
++ tty->driver_data = port;
++ port->tty = tty;
++ DBG1("open: %d", port->token_dl);
++ spin_lock_irqsave(&dc->spin_mutex, flags);
++ dc->last_ier = dc->last_ier | port->token_dl;
++ writew(dc->last_ier, dc->reg_ier);
++ spin_unlock_irqrestore(&dc->spin_mutex, flags);
++ }
++
++ mutex_unlock(&port->tty_sem);
++
++ return 0;
++}
++
++/* Called when the userspace process closes the tty, /dev/noz*. */
++static void ntty_close(struct tty_struct *tty, struct file *file)
++{
++ struct nozomi *dc = get_dc_by_tty(tty);
++ struct port *port = tty->driver_data;
++ unsigned long flags;
++
++ if (!dc || !port)
++ return;
++
++ if (mutex_lock_interruptible(&port->tty_sem))
++ return;
++
++ if (!port->tty_open_count)
++ goto exit;
++
++ dc->open_ttys--;
++ port->tty_open_count--;
++
++ if (port->tty_open_count == 0) {
++ DBG1("close: %d", port->token_dl);
++ spin_lock_irqsave(&dc->spin_mutex, flags);
++ dc->last_ier &= ~(port->token_dl);
++ writew(dc->last_ier, dc->reg_ier);
++ spin_unlock_irqrestore(&dc->spin_mutex, flags);
++ }
++
++exit:
++ mutex_unlock(&port->tty_sem);
++}
++
++/*
++ * Called when the userspace process writes to the tty (/dev/noz*).
++ * Data is inserted into a fifo, which is then read and transferred to the modem.
++ */
++static int ntty_write(struct tty_struct *tty, const unsigned char *buffer,
++ int count)
++{
++ int rval = -EINVAL;
++ struct nozomi *dc = get_dc_by_tty(tty);
++ struct port *port = tty->driver_data;
++ unsigned long flags;
++
++ /* DBG1( "WRITEx: %d, index = %d", count, index); */
++
++ if (!dc || !port)
++ return -ENODEV;
++
++ if (unlikely(!mutex_trylock(&port->tty_sem))) {
++ /*
++ * must test lock as tty layer wraps calls
++ * to this function with BKL
++ */
++ dev_err(&dc->pdev->dev, "Would have deadlocked - "
++ "return EAGAIN\n");
++ return -EAGAIN;
++ }
++
++ if (unlikely(!port->tty_open_count)) {
++ DBG1(" ");
++ goto exit;
++ }
++
++ rval = __kfifo_put(port->fifo_ul, (unsigned char *)buffer, count);
++
++ /* notify card */
++ if (unlikely(dc == NULL)) {
++ DBG1("No device context?");
++ goto exit;
++ }
++
++ spin_lock_irqsave(&dc->spin_mutex, flags);
++ /* CTS is only valid on the modem channel */
++ if (port == &(dc->port[PORT_MDM])) {
++ if (port->ctrl_dl.CTS) {
++ DBG4("Enable interrupt");
++ enable_transmit_ul(tty->index % MAX_PORT, dc);
++ } else {
++ dev_err(&dc->pdev->dev,
++ "CTS not active on modem port?\n");
++ }
++ } else {
++ enable_transmit_ul(tty->index % MAX_PORT, dc);
++ }
++ spin_unlock_irqrestore(&dc->spin_mutex, flags);
++
++exit:
++ mutex_unlock(&port->tty_sem);
++ return rval;
++}
++
++/*
++ * Calculate how much room is left in the device.
++ * This method is called by the upper tty layer.
++ * According to the n_tty.c sources it expects a value >= 0 and
++ * does not check for negative values.
++ */
++static int ntty_write_room(struct tty_struct *tty)
++{
++ struct port *port = tty->driver_data;
++ int room = 0;
++ struct nozomi *dc = get_dc_by_tty(tty);
++
++ if (!dc || !port)
++ return 0;
++ if (!mutex_trylock(&port->tty_sem))
++ return 0;
++
++ if (!port->tty_open_count)
++ goto exit;
++
++ room = port->fifo_ul->size - __kfifo_len(port->fifo_ul);
++
++exit:
++ mutex_unlock(&port->tty_sem);
++ return room;
++}
++
++/* Gets io control parameters */
++static int ntty_tiocmget(struct tty_struct *tty, struct file *file)
++{
++ struct port *port = tty->driver_data;
++ struct ctrl_dl *ctrl_dl = &port->ctrl_dl;
++ struct ctrl_ul *ctrl_ul = &port->ctrl_ul;
++
++ return (ctrl_ul->RTS ? TIOCM_RTS : 0) |
++ (ctrl_ul->DTR ? TIOCM_DTR : 0) |
++ (ctrl_dl->DCD ? TIOCM_CAR : 0) |
++ (ctrl_dl->RI ? TIOCM_RNG : 0) |
++ (ctrl_dl->DSR ? TIOCM_DSR : 0) |
++ (ctrl_dl->CTS ? TIOCM_CTS : 0);
++}
++
++/* Sets io controls parameters */
++static int ntty_tiocmset(struct tty_struct *tty, struct file *file,
++ unsigned int set, unsigned int clear)
++{
++ if (set & TIOCM_RTS)
++ set_rts(tty, 1);
++ else if (clear & TIOCM_RTS)
++ set_rts(tty, 0);
++
++ if (set & TIOCM_DTR)
++ set_dtr(tty, 1);
++ else if (clear & TIOCM_DTR)
++ set_dtr(tty, 0);
++
++ return 0;
++}
++
++static int ntty_cflags_changed(struct port *port, unsigned long flags,
++ struct async_icount *cprev)
++{
++ struct async_icount cnow = port->tty_icount;
++ int ret;
++
++ ret = ((flags & TIOCM_RNG) && (cnow.rng != cprev->rng)) ||
++ ((flags & TIOCM_DSR) && (cnow.dsr != cprev->dsr)) ||
++ ((flags & TIOCM_CD) && (cnow.dcd != cprev->dcd)) ||
++ ((flags & TIOCM_CTS) && (cnow.cts != cprev->cts));
++
++ *cprev = cnow;
++
++ return ret;
++}
++
++static int ntty_ioctl_tiocgicount(struct port *port, void __user *argp)
++{
++ struct async_icount cnow = port->tty_icount;
++ struct serial_icounter_struct icount;
++
++ icount.cts = cnow.cts;
++ icount.dsr = cnow.dsr;
++ icount.rng = cnow.rng;
++ icount.dcd = cnow.dcd;
++ icount.rx = cnow.rx;
++ icount.tx = cnow.tx;
++ icount.frame = cnow.frame;
++ icount.overrun = cnow.overrun;
++ icount.parity = cnow.parity;
++ icount.brk = cnow.brk;
++ icount.buf_overrun = cnow.buf_overrun;
++
++ return copy_to_user(argp, &icount, sizeof(icount));
++}
++
++static int ntty_ioctl(struct tty_struct *tty, struct file *file,
++ unsigned int cmd, unsigned long arg)
++{
++ struct port *port = tty->driver_data;
++ void __user *argp = (void __user *)arg;
++ int rval = -ENOIOCTLCMD;
++
++ DBG1("******** IOCTL, cmd: %d", cmd);
++
++ switch (cmd) {
++ case TIOCMIWAIT: {
++ struct async_icount cprev = port->tty_icount;
++
++ rval = wait_event_interruptible(port->tty_wait,
++ ntty_cflags_changed(port, arg, &cprev));
++ break;
++ } case TIOCGICOUNT:
++ rval = ntty_ioctl_tiocgicount(port, argp);
++ break;
++ default:
++ DBG1("ERR: 0x%08X, %d", cmd, cmd);
++ break;
++	}
++
++ return rval;
++}
++
++/*
++ * Called by the upper tty layer when tty buffers are ready
++ * to receive data again after a call to throttle.
++ */
++static void ntty_unthrottle(struct tty_struct *tty)
++{
++ struct nozomi *dc = get_dc_by_tty(tty);
++ unsigned long flags;
++
++ DBG1("UNTHROTTLE");
++ spin_lock_irqsave(&dc->spin_mutex, flags);
++ enable_transmit_dl(tty->index % MAX_PORT, dc);
++ set_rts(tty, 1);
++
++ spin_unlock_irqrestore(&dc->spin_mutex, flags);
++}
++
++/*
++ * Called by the upper tty layer when the tty buffers are almost full.
++ * The driver should stop sending more data.
++ */
++static void ntty_throttle(struct tty_struct *tty)
++{
++ struct nozomi *dc = get_dc_by_tty(tty);
++ unsigned long flags;
++
++ DBG1("THROTTLE");
++ spin_lock_irqsave(&dc->spin_mutex, flags);
++ set_rts(tty, 0);
++ spin_unlock_irqrestore(&dc->spin_mutex, flags);
++}
++
++/* just to discard single character writes */
++static void ntty_put_char(struct tty_struct *tty, unsigned char c)
++{
++ /* FIXME !!! */
++ DBG2("PUT CHAR Function: %c", c);
++}
++
++/* Returns number of chars in buffer, called by tty layer */
++static s32 ntty_chars_in_buffer(struct tty_struct *tty)
++{
++ struct port *port = tty->driver_data;
++ struct nozomi *dc = get_dc_by_tty(tty);
++ s32 rval;
++
++ if (unlikely(!dc || !port)) {
++ rval = -ENODEV;
++ goto exit_in_buffer;
++ }
++
++ if (unlikely(!port->tty_open_count)) {
++ dev_err(&dc->pdev->dev, "No tty open?\n");
++ rval = -ENODEV;
++ goto exit_in_buffer;
++ }
++
++ rval = __kfifo_len(port->fifo_ul);
++
++exit_in_buffer:
++ return rval;
++}
++
++static struct tty_operations tty_ops = {
++ .ioctl = ntty_ioctl,
++ .open = ntty_open,
++ .close = ntty_close,
++ .write = ntty_write,
++ .write_room = ntty_write_room,
++ .unthrottle = ntty_unthrottle,
++ .throttle = ntty_throttle,
++ .chars_in_buffer = ntty_chars_in_buffer,
++ .put_char = ntty_put_char,
++ .tiocmget = ntty_tiocmget,
++ .tiocmset = ntty_tiocmset,
++};
++
++/* Module initialization */
++static struct pci_driver nozomi_driver = {
++ .name = NOZOMI_NAME,
++ .id_table = nozomi_pci_tbl,
++ .probe = nozomi_card_init,
++ .remove = __devexit_p(nozomi_card_exit),
++};
++
++static __init int nozomi_init(void)
++{
++ int ret;
++
++ printk(KERN_INFO "Initializing %s\n", VERSION_STRING);
++
++ ntty_driver = alloc_tty_driver(NTTY_TTY_MAXMINORS);
++ if (!ntty_driver)
++ return -ENOMEM;
++
++ ntty_driver->owner = THIS_MODULE;
++ ntty_driver->driver_name = NOZOMI_NAME_TTY;
++ ntty_driver->name = "noz";
++ ntty_driver->major = 0;
++ ntty_driver->type = TTY_DRIVER_TYPE_SERIAL;
++ ntty_driver->subtype = SERIAL_TYPE_NORMAL;
++ ntty_driver->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
++ ntty_driver->init_termios = tty_std_termios;
++	ntty_driver->init_termios.c_cflag = B115200 | CS8 | CREAD |
++						HUPCL | CLOCAL;
++ ntty_driver->init_termios.c_ispeed = 115200;
++ ntty_driver->init_termios.c_ospeed = 115200;
++ tty_set_operations(ntty_driver, &tty_ops);
++
++ ret = tty_register_driver(ntty_driver);
++ if (ret) {
++ printk(KERN_ERR "Nozomi: failed to register ntty driver\n");
++ goto free_tty;
++ }
++
++ ret = pci_register_driver(&nozomi_driver);
++ if (ret) {
++ printk(KERN_ERR "Nozomi: can't register pci driver\n");
++ goto unr_tty;
++ }
++
++ return 0;
++unr_tty:
++ tty_unregister_driver(ntty_driver);
++free_tty:
++ put_tty_driver(ntty_driver);
++ return ret;
++}
++
++static __exit void nozomi_exit(void)
++{
++ printk(KERN_INFO "Unloading %s\n", DRIVER_DESC);
++ pci_unregister_driver(&nozomi_driver);
++ tty_unregister_driver(ntty_driver);
++ put_tty_driver(ntty_driver);
++}
++
++module_init(nozomi_init);
++module_exit(nozomi_exit);
++
++module_param(debug, int, S_IRUGO | S_IWUSR);
++
++MODULE_LICENSE("Dual BSD/GPL");
++MODULE_DESCRIPTION(DRIVER_DESC);
+diff --git a/drivers/char/rtc.c b/drivers/char/rtc.c
+index 0c66b80..78b151c 100644
+--- a/drivers/char/rtc.c
++++ b/drivers/char/rtc.c
+@@ -1,5 +1,5 @@
+ /*
+- * Real Time Clock interface for Linux
++ * Real Time Clock interface for Linux
+ *
+ * Copyright (C) 1996 Paul Gortmaker
+ *
+@@ -17,7 +17,7 @@
+ * has been received. If a RTC interrupt has already happened,
+ * it will output an unsigned long and then block. The output value
+ * contains the interrupt status in the low byte and the number of
+- * interrupts since the last read in the remaining high bytes. The
++ * interrupts since the last read in the remaining high bytes. The
+ * /dev/rtc interface can also be used with the select(2) call.
+ *
+ * This program is free software; you can redistribute it and/or
+@@ -104,12 +104,14 @@ static int rtc_has_irq = 1;
+
+ #ifndef CONFIG_HPET_EMULATE_RTC
+ #define is_hpet_enabled() 0
+-#define hpet_set_alarm_time(hrs, min, sec) 0
+-#define hpet_set_periodic_freq(arg) 0
+-#define hpet_mask_rtc_irq_bit(arg) 0
+-#define hpet_set_rtc_irq_bit(arg) 0
+-#define hpet_rtc_timer_init() do { } while (0)
+-#define hpet_rtc_dropped_irq() 0
++#define hpet_set_alarm_time(hrs, min, sec) 0
++#define hpet_set_periodic_freq(arg) 0
++#define hpet_mask_rtc_irq_bit(arg) 0
++#define hpet_set_rtc_irq_bit(arg) 0
++#define hpet_rtc_timer_init() do { } while (0)
++#define hpet_rtc_dropped_irq() 0
++#define hpet_register_irq_handler(h) 0
++#define hpet_unregister_irq_handler(h) 0
+ #ifdef RTC_IRQ
+ static irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ {
+@@ -147,7 +149,7 @@ static int rtc_ioctl(struct inode *inode, struct file *file,
+ static unsigned int rtc_poll(struct file *file, poll_table *wait);
+ #endif
+
+-static void get_rtc_alm_time (struct rtc_time *alm_tm);
++static void get_rtc_alm_time(struct rtc_time *alm_tm);
+ #ifdef RTC_IRQ
+ static void set_rtc_irq_bit_locked(unsigned char bit);
+ static void mask_rtc_irq_bit_locked(unsigned char bit);
+@@ -185,9 +187,9 @@ static int rtc_proc_open(struct inode *inode, struct file *file);
+ * rtc_status but before mod_timer is called, which would then reenable the
+ * timer (but you would need to have an awful timing before you'd trip on it)
+ */
+-static unsigned long rtc_status = 0; /* bitmapped status byte. */
+-static unsigned long rtc_freq = 0; /* Current periodic IRQ rate */
+-static unsigned long rtc_irq_data = 0; /* our output to the world */
++static unsigned long rtc_status; /* bitmapped status byte. */
++static unsigned long rtc_freq; /* Current periodic IRQ rate */
++static unsigned long rtc_irq_data; /* our output to the world */
+ static unsigned long rtc_max_user_freq = 64; /* > this, need CAP_SYS_RESOURCE */
+
+ #ifdef RTC_IRQ
+@@ -195,7 +197,7 @@ static unsigned long rtc_max_user_freq = 64; /* > this, need CAP_SYS_RESOURCE */
+ * rtc_task_lock nests inside rtc_lock.
+ */
+ static DEFINE_SPINLOCK(rtc_task_lock);
+-static rtc_task_t *rtc_callback = NULL;
++static rtc_task_t *rtc_callback;
+ #endif
+
+ /*
+@@ -205,7 +207,7 @@ static rtc_task_t *rtc_callback = NULL;
+
+ static unsigned long epoch = 1900; /* year corresponding to 0x00 */
+
+-static const unsigned char days_in_mo[] =
++static const unsigned char days_in_mo[] =
+ {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
+
+ /*
+@@ -242,7 +244,7 @@ irqreturn_t rtc_interrupt(int irq, void *dev_id)
+ * the last read in the remainder of rtc_irq_data.
+ */
+
+- spin_lock (&rtc_lock);
++ spin_lock(&rtc_lock);
+ rtc_irq_data += 0x100;
+ rtc_irq_data &= ~0xff;
+ if (is_hpet_enabled()) {
+@@ -259,16 +261,16 @@ irqreturn_t rtc_interrupt(int irq, void *dev_id)
+ if (rtc_status & RTC_TIMER_ON)
+ mod_timer(&rtc_irq_timer, jiffies + HZ/rtc_freq + 2*HZ/100);
+
+- spin_unlock (&rtc_lock);
++ spin_unlock(&rtc_lock);
+
+ /* Now do the rest of the actions */
+ spin_lock(&rtc_task_lock);
+ if (rtc_callback)
+ rtc_callback->func(rtc_callback->private_data);
+ spin_unlock(&rtc_task_lock);
+- wake_up_interruptible(&rtc_wait);
++ wake_up_interruptible(&rtc_wait);
+
+- kill_fasync (&rtc_async_queue, SIGIO, POLL_IN);
++ kill_fasync(&rtc_async_queue, SIGIO, POLL_IN);
+
+ return IRQ_HANDLED;
+ }
+@@ -335,7 +337,7 @@ static ssize_t rtc_read(struct file *file, char __user *buf,
+ DECLARE_WAITQUEUE(wait, current);
+ unsigned long data;
+ ssize_t retval;
+-
++
+ if (rtc_has_irq == 0)
+ return -EIO;
+
+@@ -358,11 +360,11 @@ static ssize_t rtc_read(struct file *file, char __user *buf,
+ * confusing. And no, xchg() is not the answer. */
+
+ __set_current_state(TASK_INTERRUPTIBLE);
+-
+- spin_lock_irq (&rtc_lock);
++
++ spin_lock_irq(&rtc_lock);
+ data = rtc_irq_data;
+ rtc_irq_data = 0;
+- spin_unlock_irq (&rtc_lock);
++ spin_unlock_irq(&rtc_lock);
+
+ if (data != 0)
+ break;
+@@ -378,10 +380,13 @@ static ssize_t rtc_read(struct file *file, char __user *buf,
+ schedule();
+ } while (1);
+
+- if (count == sizeof(unsigned int))
+- retval = put_user(data, (unsigned int __user *)buf) ?: sizeof(int);
+- else
+- retval = put_user(data, (unsigned long __user *)buf) ?: sizeof(long);
++ if (count == sizeof(unsigned int)) {
++ retval = put_user(data,
++ (unsigned int __user *)buf) ?: sizeof(int);
++ } else {
++ retval = put_user(data,
++ (unsigned long __user *)buf) ?: sizeof(long);
++ }
+ if (!retval)
+ retval = count;
+ out:
+@@ -394,7 +399,7 @@ static ssize_t rtc_read(struct file *file, char __user *buf,
+
+ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ {
+- struct rtc_time wtime;
++ struct rtc_time wtime;
+
+ #ifdef RTC_IRQ
+ if (rtc_has_irq == 0) {
+@@ -426,35 +431,41 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ }
+ case RTC_PIE_OFF: /* Mask periodic int. enab. bit */
+ {
+- unsigned long flags; /* can be called from isr via rtc_control() */
+- spin_lock_irqsave (&rtc_lock, flags);
++ /* can be called from isr via rtc_control() */
++ unsigned long flags;
++
++ spin_lock_irqsave(&rtc_lock, flags);
+ mask_rtc_irq_bit_locked(RTC_PIE);
+ if (rtc_status & RTC_TIMER_ON) {
+ rtc_status &= ~RTC_TIMER_ON;
+ del_timer(&rtc_irq_timer);
+ }
+- spin_unlock_irqrestore (&rtc_lock, flags);
++ spin_unlock_irqrestore(&rtc_lock, flags);
++
+ return 0;
+ }
+ case RTC_PIE_ON: /* Allow periodic ints */
+ {
+- unsigned long flags; /* can be called from isr via rtc_control() */
++ /* can be called from isr via rtc_control() */
++ unsigned long flags;
++
+ /*
+ * We don't really want Joe User enabling more
+ * than 64Hz of interrupts on a multi-user machine.
+ */
+ if (!kernel && (rtc_freq > rtc_max_user_freq) &&
+- (!capable(CAP_SYS_RESOURCE)))
++ (!capable(CAP_SYS_RESOURCE)))
+ return -EACCES;
+
+- spin_lock_irqsave (&rtc_lock, flags);
++ spin_lock_irqsave(&rtc_lock, flags);
+ if (!(rtc_status & RTC_TIMER_ON)) {
+ mod_timer(&rtc_irq_timer, jiffies + HZ/rtc_freq +
+ 2*HZ/100);
+ rtc_status |= RTC_TIMER_ON;
+ }
+ set_rtc_irq_bit_locked(RTC_PIE);
+- spin_unlock_irqrestore (&rtc_lock, flags);
++ spin_unlock_irqrestore(&rtc_lock, flags);
++
+ return 0;
+ }
+ case RTC_UIE_OFF: /* Mask ints from RTC updates. */
+@@ -477,7 +488,7 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ */
+ memset(&wtime, 0, sizeof(struct rtc_time));
+ get_rtc_alm_time(&wtime);
+- break;
++ break;
+ }
+ case RTC_ALM_SET: /* Store a time into the alarm */
+ {
+@@ -505,16 +516,21 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ */
+ }
+ if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) ||
+- RTC_ALWAYS_BCD)
+- {
+- if (sec < 60) BIN_TO_BCD(sec);
+- else sec = 0xff;
+-
+- if (min < 60) BIN_TO_BCD(min);
+- else min = 0xff;
+-
+- if (hrs < 24) BIN_TO_BCD(hrs);
+- else hrs = 0xff;
++ RTC_ALWAYS_BCD) {
++ if (sec < 60)
++ BIN_TO_BCD(sec);
++ else
++ sec = 0xff;
++
++ if (min < 60)
++ BIN_TO_BCD(min);
++ else
++ min = 0xff;
++
++ if (hrs < 24)
++ BIN_TO_BCD(hrs);
++ else
++ hrs = 0xff;
+ }
+ CMOS_WRITE(hrs, RTC_HOURS_ALARM);
+ CMOS_WRITE(min, RTC_MINUTES_ALARM);
+@@ -563,11 +579,12 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+
+ if (day > (days_in_mo[mon] + ((mon == 2) && leap_yr)))
+ return -EINVAL;
+-
++
+ if ((hrs >= 24) || (min >= 60) || (sec >= 60))
+ return -EINVAL;
+
+- if ((yrs -= epoch) > 255) /* They are unsigned */
++ yrs -= epoch;
++ if (yrs > 255) /* They are unsigned */
+ return -EINVAL;
+
+ spin_lock_irq(&rtc_lock);
+@@ -635,9 +652,10 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ {
+ int tmp = 0;
+ unsigned char val;
+- unsigned long flags; /* can be called from isr via rtc_control() */
++ /* can be called from isr via rtc_control() */
++ unsigned long flags;
+
+- /*
++ /*
+ * The max we can do is 8192Hz.
+ */
+ if ((arg < 2) || (arg > 8192))
+@@ -646,7 +664,8 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ * We don't really want Joe User generating more
+ * than 64Hz of interrupts on a multi-user machine.
+ */
+- if (!kernel && (arg > rtc_max_user_freq) && (!capable(CAP_SYS_RESOURCE)))
++ if (!kernel && (arg > rtc_max_user_freq) &&
++ !capable(CAP_SYS_RESOURCE))
+ return -EACCES;
+
+ while (arg > (1<<tmp))
+@@ -674,11 +693,11 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ #endif
+ case RTC_EPOCH_READ: /* Read the epoch. */
+ {
+- return put_user (epoch, (unsigned long __user *)arg);
++ return put_user(epoch, (unsigned long __user *)arg);
+ }
+ case RTC_EPOCH_SET: /* Set the epoch. */
+ {
+- /*
++ /*
+ * There were no RTC clocks before 1900.
+ */
+ if (arg < 1900)
+@@ -693,7 +712,8 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
+ default:
+ return -ENOTTY;
+ }
+- return copy_to_user((void __user *)arg, &wtime, sizeof wtime) ? -EFAULT : 0;
++ return copy_to_user((void __user *)arg,
++ &wtime, sizeof wtime) ? -EFAULT : 0;
+ }
+
+ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+@@ -712,26 +732,25 @@ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ * needed here. Or anywhere else in this driver. */
+ static int rtc_open(struct inode *inode, struct file *file)
+ {
+- spin_lock_irq (&rtc_lock);
++ spin_lock_irq(&rtc_lock);
+
+- if(rtc_status & RTC_IS_OPEN)
++ if (rtc_status & RTC_IS_OPEN)
+ goto out_busy;
+
+ rtc_status |= RTC_IS_OPEN;
+
+ rtc_irq_data = 0;
+- spin_unlock_irq (&rtc_lock);
++ spin_unlock_irq(&rtc_lock);
+ return 0;
+
+ out_busy:
+- spin_unlock_irq (&rtc_lock);
++ spin_unlock_irq(&rtc_lock);
+ return -EBUSY;
+ }
+
+-static int rtc_fasync (int fd, struct file *filp, int on)
+-
++static int rtc_fasync(int fd, struct file *filp, int on)
+ {
+- return fasync_helper (fd, filp, on, &rtc_async_queue);
++ return fasync_helper(fd, filp, on, &rtc_async_queue);
+ }
+
+ static int rtc_release(struct inode *inode, struct file *file)
+@@ -762,16 +781,16 @@ static int rtc_release(struct inode *inode, struct file *file)
+ }
+ spin_unlock_irq(&rtc_lock);
+
+- if (file->f_flags & FASYNC) {
+- rtc_fasync (-1, file, 0);
+- }
++ if (file->f_flags & FASYNC)
++ rtc_fasync(-1, file, 0);
+ no_irq:
+ #endif
+
+- spin_lock_irq (&rtc_lock);
++ spin_lock_irq(&rtc_lock);
+ rtc_irq_data = 0;
+ rtc_status &= ~RTC_IS_OPEN;
+- spin_unlock_irq (&rtc_lock);
++ spin_unlock_irq(&rtc_lock);
++
+ return 0;
+ }
+
+@@ -786,9 +805,9 @@ static unsigned int rtc_poll(struct file *file, poll_table *wait)
+
+ poll_wait(file, &rtc_wait, wait);
+
+- spin_lock_irq (&rtc_lock);
++ spin_lock_irq(&rtc_lock);
+ l = rtc_irq_data;
+- spin_unlock_irq (&rtc_lock);
++ spin_unlock_irq(&rtc_lock);
+
+ if (l != 0)
+ return POLLIN | POLLRDNORM;
+@@ -796,14 +815,6 @@ static unsigned int rtc_poll(struct file *file, poll_table *wait)
+ }
+ #endif
+
+-/*
+- * exported stuffs
+- */
+-
+-EXPORT_SYMBOL(rtc_register);
+-EXPORT_SYMBOL(rtc_unregister);
+-EXPORT_SYMBOL(rtc_control);
+-
+ int rtc_register(rtc_task_t *task)
+ {
+ #ifndef RTC_IRQ
+@@ -829,6 +840,7 @@ int rtc_register(rtc_task_t *task)
+ return 0;
+ #endif
+ }
++EXPORT_SYMBOL(rtc_register);
+
+ int rtc_unregister(rtc_task_t *task)
+ {
+@@ -845,7 +857,7 @@ int rtc_unregister(rtc_task_t *task)
+ return -ENXIO;
+ }
+ rtc_callback = NULL;
+-
++
+ /* disable controls */
+ if (!hpet_mask_rtc_irq_bit(RTC_PIE | RTC_AIE | RTC_UIE)) {
+ tmp = CMOS_READ(RTC_CONTROL);
+@@ -865,6 +877,7 @@ int rtc_unregister(rtc_task_t *task)
+ return 0;
+ #endif
+ }
++EXPORT_SYMBOL(rtc_unregister);
+
+ int rtc_control(rtc_task_t *task, unsigned int cmd, unsigned long arg)
+ {
+@@ -883,7 +896,7 @@ int rtc_control(rtc_task_t *task, unsigned int cmd, unsigned long arg)
+ return rtc_do_ioctl(cmd, arg, 1);
+ #endif
+ }
+-
++EXPORT_SYMBOL(rtc_control);
+
+ /*
+ * The various file operations we support.
+@@ -910,11 +923,11 @@ static struct miscdevice rtc_dev = {
+
+ #ifdef CONFIG_PROC_FS
+ static const struct file_operations rtc_proc_fops = {
+- .owner = THIS_MODULE,
+- .open = rtc_proc_open,
+- .read = seq_read,
+- .llseek = seq_lseek,
+- .release = single_release,
++ .owner = THIS_MODULE,
++ .open = rtc_proc_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = single_release,
+ };
+ #endif
+
+@@ -965,7 +978,7 @@ static int __init rtc_init(void)
+ #ifdef CONFIG_SPARC32
+ for_each_ebus(ebus) {
+ for_each_ebusdev(edev, ebus) {
+- if(strcmp(edev->prom_node->name, "rtc") == 0) {
++ if (strcmp(edev->prom_node->name, "rtc") == 0) {
+ rtc_port = edev->resource[0].start;
+ rtc_irq = edev->irqs[0];
+ goto found;
+@@ -986,7 +999,8 @@ found:
+ * XXX Interrupt pin #7 in Espresso is shared between RTC and
+ * PCI Slot 2 INTA# (and some INTx# in Slot 1).
+ */
+- if (request_irq(rtc_irq, rtc_interrupt, IRQF_SHARED, "rtc", (void *)&rtc_port)) {
++ if (request_irq(rtc_irq, rtc_interrupt, IRQF_SHARED, "rtc",
++ (void *)&rtc_port)) {
+ rtc_has_irq = 0;
+ printk(KERN_ERR "rtc: cannot register IRQ %d\n", rtc_irq);
+ return -EIO;
+@@ -1015,16 +1029,26 @@ no_irq:
+
+ #ifdef RTC_IRQ
+ if (is_hpet_enabled()) {
++ int err;
++
+ rtc_int_handler_ptr = hpet_rtc_interrupt;
++ err = hpet_register_irq_handler(rtc_interrupt);
++ if (err != 0) {
++ printk(KERN_WARNING "hpet_register_irq_handler failed "
++ "in rtc_init().");
++ return err;
++ }
+ } else {
+ rtc_int_handler_ptr = rtc_interrupt;
+ }
+
+- if(request_irq(RTC_IRQ, rtc_int_handler_ptr, IRQF_DISABLED, "rtc", NULL)) {
++ if (request_irq(RTC_IRQ, rtc_int_handler_ptr, IRQF_DISABLED,
++ "rtc", NULL)) {
+ /* Yeah right, seeing as irq 8 doesn't even hit the bus. */
+ rtc_has_irq = 0;
+ printk(KERN_ERR "rtc: IRQ %d is not free.\n", RTC_IRQ);
+ rtc_release_region();
++
+ return -EIO;
+ }
+ hpet_rtc_timer_init();
+@@ -1036,6 +1060,7 @@ no_irq:
+ if (misc_register(&rtc_dev)) {
+ #ifdef RTC_IRQ
+ free_irq(RTC_IRQ, NULL);
++ hpet_unregister_irq_handler(rtc_interrupt);
+ rtc_has_irq = 0;
+ #endif
+ rtc_release_region();
+@@ -1052,21 +1077,21 @@ no_irq:
+
+ #if defined(__alpha__) || defined(__mips__)
+ rtc_freq = HZ;
+-
++
+ /* Each operating system on an Alpha uses its own epoch.
+ Let's try to guess which one we are using now. */
+-
++
+ if (rtc_is_updating() != 0)
+ msleep(20);
+-
++
+ spin_lock_irq(&rtc_lock);
+ year = CMOS_READ(RTC_YEAR);
+ ctrl = CMOS_READ(RTC_CONTROL);
+ spin_unlock_irq(&rtc_lock);
+-
++
+ if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ BCD_TO_BIN(year); /* This should never happen... */
+-
++
+ if (year < 20) {
+ epoch = 2000;
+ guess = "SRM (post-2000)";
+@@ -1087,7 +1112,8 @@ no_irq:
+ #endif
+ }
+ if (guess)
+- printk(KERN_INFO "rtc: %s epoch (%lu) detected\n", guess, epoch);
++ printk(KERN_INFO "rtc: %s epoch (%lu) detected\n",
++ guess, epoch);
+ #endif
+ #ifdef RTC_IRQ
+ if (rtc_has_irq == 0)
+@@ -1096,8 +1122,12 @@ no_irq:
+ spin_lock_irq(&rtc_lock);
+ rtc_freq = 1024;
+ if (!hpet_set_periodic_freq(rtc_freq)) {
+- /* Initialize periodic freq. to CMOS reset default, which is 1024Hz */
+- CMOS_WRITE(((CMOS_READ(RTC_FREQ_SELECT) & 0xF0) | 0x06), RTC_FREQ_SELECT);
++ /*
++ * Initialize periodic frequency to CMOS reset default,
++ * which is 1024Hz
++ */
++ CMOS_WRITE(((CMOS_READ(RTC_FREQ_SELECT) & 0xF0) | 0x06),
++ RTC_FREQ_SELECT);
+ }
+ spin_unlock_irq(&rtc_lock);
+ no_irq2:
+@@ -1110,20 +1140,22 @@ no_irq2:
+ return 0;
+ }
+
+-static void __exit rtc_exit (void)
++static void __exit rtc_exit(void)
+ {
+ cleanup_sysctl();
+- remove_proc_entry ("driver/rtc", NULL);
++ remove_proc_entry("driver/rtc", NULL);
+ misc_deregister(&rtc_dev);
+
+ #ifdef CONFIG_SPARC32
+ if (rtc_has_irq)
+- free_irq (rtc_irq, &rtc_port);
++ free_irq(rtc_irq, &rtc_port);
+ #else
+ rtc_release_region();
+ #ifdef RTC_IRQ
+- if (rtc_has_irq)
+- free_irq (RTC_IRQ, NULL);
++ if (rtc_has_irq) {
++ free_irq(RTC_IRQ, NULL);
++ hpet_unregister_irq_handler(hpet_rtc_interrupt);
++ }
+ #endif
+ #endif /* CONFIG_SPARC32 */
+ }
+@@ -1133,14 +1165,14 @@ module_exit(rtc_exit);
+
+ #ifdef RTC_IRQ
+ /*
+- * At IRQ rates >= 4096Hz, an interrupt may get lost altogether.
++ * At IRQ rates >= 4096Hz, an interrupt may get lost altogether.
+ * (usually during an IDE disk interrupt, with IRQ unmasking off)
+ * Since the interrupt handler doesn't get called, the IRQ status
+ * byte doesn't get read, and the RTC stops generating interrupts.
+ * A timer is set, and will call this function if/when that happens.
+ * To get it out of this stalled state, we just read the status.
+ * At least a jiffy of interrupts (rtc_freq/HZ) will have been lost.
+- * (You *really* shouldn't be trying to use a non-realtime system
++ * (You *really* shouldn't be trying to use a non-realtime system
+ * for something that requires a steady > 1KHz signal anyways.)
+ */
+
+@@ -1148,7 +1180,7 @@ static void rtc_dropped_irq(unsigned long data)
+ {
+ unsigned long freq;
+
+- spin_lock_irq (&rtc_lock);
++ spin_lock_irq(&rtc_lock);
+
+ if (hpet_rtc_dropped_irq()) {
+ spin_unlock_irq(&rtc_lock);
+@@ -1167,13 +1199,15 @@ static void rtc_dropped_irq(unsigned long data)
+
+ spin_unlock_irq(&rtc_lock);
+
+- if (printk_ratelimit())
+- printk(KERN_WARNING "rtc: lost some interrupts at %ldHz.\n", freq);
++ if (printk_ratelimit()) {
++ printk(KERN_WARNING "rtc: lost some interrupts at %ldHz.\n",
++ freq);
++ }
+
+ /* Now we have new data */
+ wake_up_interruptible(&rtc_wait);
+
+- kill_fasync (&rtc_async_queue, SIGIO, POLL_IN);
++ kill_fasync(&rtc_async_queue, SIGIO, POLL_IN);
+ }
+ #endif
+
+@@ -1277,7 +1311,7 @@ void rtc_get_rtc_time(struct rtc_time *rtc_tm)
+ * can take just over 2ms. We wait 20ms. There is no need to
+ * to poll-wait (up to 1s - eeccch) for the falling edge of RTC_UIP.
+ * If you need to know *exactly* when a second has started, enable
+- * periodic update complete interrupts, (via ioctl) and then
++ * periodic update complete interrupts, (via ioctl) and then
+ * immediately read /dev/rtc which will block until you get the IRQ.
+ * Once the read clears, read the RTC time (again via ioctl). Easy.
+ */
+@@ -1307,8 +1341,7 @@ void rtc_get_rtc_time(struct rtc_time *rtc_tm)
+ ctrl = CMOS_READ(RTC_CONTROL);
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
+- if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+- {
++ if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(rtc_tm->tm_sec);
+ BCD_TO_BIN(rtc_tm->tm_min);
+ BCD_TO_BIN(rtc_tm->tm_hour);
+@@ -1326,7 +1359,8 @@ void rtc_get_rtc_time(struct rtc_time *rtc_tm)
+ * Account for differences between how the RTC uses the values
+ * and how they are defined in a struct rtc_time;
+ */
+- if ((rtc_tm->tm_year += (epoch - 1900)) <= 69)
++ rtc_tm->tm_year += epoch - 1900;
++ if (rtc_tm->tm_year <= 69)
+ rtc_tm->tm_year += 100;
+
+ rtc_tm->tm_mon--;
+@@ -1347,8 +1381,7 @@ static void get_rtc_alm_time(struct rtc_time *alm_tm)
+ ctrl = CMOS_READ(RTC_CONTROL);
+ spin_unlock_irq(&rtc_lock);
+
+- if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+- {
++ if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(alm_tm->tm_sec);
+ BCD_TO_BIN(alm_tm->tm_min);
+ BCD_TO_BIN(alm_tm->tm_hour);
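The rtc.c hunks above repeatedly convert RTC register values between binary and packed BCD via the kernel's BIN_TO_BCD/BCD_TO_BIN macros (and the alarm-set hunk now does so in properly braced if/else form). As a minimal user-space sketch of what those conversions do — helper names are ours, not the kernel's:

```c
#include <assert.h>

/* Convert a binary value in 0..99 to packed BCD, as BIN_TO_BCD does
 * for CMOS RTC registers: tens digit in the high nibble, ones digit
 * in the low nibble. */
static unsigned char bin2bcd(unsigned char val)
{
	return (unsigned char)(((val / 10) << 4) | (val % 10));
}

/* Inverse conversion, mirroring BCD_TO_BIN. */
static unsigned char bcd2bin(unsigned char val)
{
	return (unsigned char)(((val >> 4) * 10) + (val & 0x0f));
}
```

This is why a BCD-mode RTC stores 59 seconds as the byte 0x59, and why the driver must round-trip every time field through these helpers when RTC_DM_BINARY is clear.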
+diff --git a/drivers/connector/cn_queue.c b/drivers/connector/cn_queue.c
+index 12ceed5..5732ca3 100644
+--- a/drivers/connector/cn_queue.c
++++ b/drivers/connector/cn_queue.c
+@@ -104,7 +104,6 @@ int cn_queue_add_callback(struct cn_queue_dev *dev, char *name, struct cb_id *id
+ return -EINVAL;
+ }
+
+- cbq->nls = dev->nls;
+ cbq->seq = 0;
+ cbq->group = cbq->id.id.idx;
+
+@@ -146,7 +145,6 @@ struct cn_queue_dev *cn_queue_alloc_dev(char *name, struct sock *nls)
+ spin_lock_init(&dev->queue_lock);
+
+ dev->nls = nls;
+- dev->netlink_groups = 0;
+
+ dev->cn_queue = create_workqueue(dev->name);
+ if (!dev->cn_queue) {
+diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
+index bf9716b..fea2d3e 100644
+--- a/drivers/connector/connector.c
++++ b/drivers/connector/connector.c
+@@ -88,6 +88,7 @@ int cn_netlink_send(struct cn_msg *msg, u32 __group, gfp_t gfp_mask)
+ if (cn_cb_equal(&__cbq->id.id, &msg->id)) {
+ found = 1;
+ group = __cbq->group;
++ break;
+ }
+ }
+ spin_unlock_bh(&dev->cbdev->queue_lock);
+@@ -181,33 +182,14 @@ static int cn_call_callback(struct cn_msg *msg, void (*destruct_data)(void *), v
+ }
+
+ /*
+- * Skb receive helper - checks skb and msg size and calls callback
+- * helper.
+- */
+-static int __cn_rx_skb(struct sk_buff *skb, struct nlmsghdr *nlh)
+-{
+- u32 pid, uid, seq, group;
+- struct cn_msg *msg;
+-
+- pid = NETLINK_CREDS(skb)->pid;
+- uid = NETLINK_CREDS(skb)->uid;
+- seq = nlh->nlmsg_seq;
+- group = NETLINK_CB((skb)).dst_group;
+- msg = NLMSG_DATA(nlh);
+-
+- return cn_call_callback(msg, (void (*)(void *))kfree_skb, skb);
+-}
+-
+-/*
+ * Main netlink receiving function.
+ *
+- * It checks skb and netlink header sizes and calls the skb receive
+- * helper with a shared skb.
++ * It checks skb, netlink header and msg sizes, and calls callback helper.
+ */
+ static void cn_rx_skb(struct sk_buff *__skb)
+ {
++ struct cn_msg *msg;
+ struct nlmsghdr *nlh;
+- u32 len;
+ int err;
+ struct sk_buff *skb;
+
+@@ -223,11 +205,8 @@ static void cn_rx_skb(struct sk_buff *__skb)
+ return;
+ }
+
+- len = NLMSG_ALIGN(nlh->nlmsg_len);
+- if (len > skb->len)
+- len = skb->len;
+-
+- err = __cn_rx_skb(skb, nlh);
++ msg = NLMSG_DATA(nlh);
++ err = cn_call_callback(msg, (void (*)(void *))kfree_skb, skb);
+ if (err < 0)
+ kfree_skb(skb);
+ }
+@@ -441,8 +420,7 @@ static int __devinit cn_init(void)
+
+ dev->cbdev = cn_queue_alloc_dev("cqueue", dev->nls);
+ if (!dev->cbdev) {
+- if (dev->nls->sk_socket)
+- sock_release(dev->nls->sk_socket);
++ netlink_kernel_release(dev->nls);
+ return -EINVAL;
+ }
+
+@@ -452,8 +430,7 @@ static int __devinit cn_init(void)
+ if (err) {
+ cn_already_initialized = 0;
+ cn_queue_free_dev(dev->cbdev);
+- if (dev->nls->sk_socket)
+- sock_release(dev->nls->sk_socket);
++ netlink_kernel_release(dev->nls);
+ return -EINVAL;
+ }
+
+@@ -468,8 +445,7 @@ static void __devexit cn_fini(void)
+
+ cn_del_callback(&dev->id);
+ cn_queue_free_dev(dev->cbdev);
+- if (dev->nls->sk_socket)
+- sock_release(dev->nls->sk_socket);
++ netlink_kernel_release(dev->nls);
+ }
+
+ subsys_initcall(cn_init);
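The connector.c change adds a `break` once cn_netlink_send() finds the matching callback id, so the scan stops at the first match instead of letting a later duplicate overwrite `group`. A user-space sketch of the same pattern (structure and names are ours, modeled loosely on struct cb_id):

```c
#include <stddef.h>

struct cb_entry {
	unsigned int idx;	/* callback id, like struct cb_id */
	unsigned int val;
	unsigned int group;	/* netlink group it was registered for */
};

/* Return the group of the FIRST entry matching (idx, val), or -1.
 * The early return plays the role of the break added to
 * cn_netlink_send(): a later duplicate can no longer clobber the
 * result, and the scan stops as soon as a match is found. */
static int find_group(const struct cb_entry *tab, size_t n,
		      unsigned int idx, unsigned int val)
{
	for (size_t i = 0; i < n; i++)
		if (tab[i].idx == idx && tab[i].val == val)
			return (int)tab[i].group;
	return -1;
}
```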
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 79581fa..b730d67 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -828,11 +828,8 @@ static int cpufreq_add_dev (struct sys_device * sys_dev)
+ memcpy(&new_policy, policy, sizeof(struct cpufreq_policy));
+
+ /* prepare interface data */
+- policy->kobj.parent = &sys_dev->kobj;
+- policy->kobj.ktype = &ktype_cpufreq;
+- kobject_set_name(&policy->kobj, "cpufreq");
+-
+- ret = kobject_register(&policy->kobj);
++ ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &sys_dev->kobj,
++ "cpufreq");
+ if (ret) {
+ unlock_policy_rwsem_write(cpu);
+ goto err_out_driver_exit;
+@@ -902,6 +899,7 @@ static int cpufreq_add_dev (struct sys_device * sys_dev)
+ goto err_out_unregister;
+ }
+
++ kobject_uevent(&policy->kobj, KOBJ_ADD);
+ module_put(cpufreq_driver->owner);
+ dprintk("initialization complete\n");
+ cpufreq_debug_enable_ratelimit();
+@@ -915,7 +913,7 @@ err_out_unregister:
+ cpufreq_cpu_data[j] = NULL;
+ spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
+- kobject_unregister(&policy->kobj);
++ kobject_put(&policy->kobj);
+ wait_for_completion(&policy->kobj_unregister);
+
+ err_out_driver_exit:
+@@ -1032,8 +1030,6 @@ static int __cpufreq_remove_dev (struct sys_device * sys_dev)
+
+ unlock_policy_rwsem_write(cpu);
+
+- kobject_unregister(&data->kobj);
+-
+ kobject_put(&data->kobj);
+
+ /* we need to make sure that the underlying kobj is actually
+@@ -1608,7 +1604,7 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data,
+ memcpy(&policy->cpuinfo, &data->cpuinfo,
+ sizeof(struct cpufreq_cpuinfo));
+
+- if (policy->min > data->min && policy->min > policy->max) {
++ if (policy->min > data->max || policy->max < data->min) {
+ ret = -EINVAL;
+ goto error_out;
+ }
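The cpufreq.c hunk above fixes a validity check in __cpufreq_set_policy(): the old condition (`policy->min > data->min && policy->min > policy->max`) failed to reject many impossible requests, while the new one rejects exactly the policies whose [min, max] range is disjoint from the existing one. A small sketch of the corrected test (function name is ours):

```c
#include <stdbool.h>

/* The corrected cpufreq check: a requested policy [min, max] is
 * invalid when it does not intersect the current range
 * [cur_min, cur_max] at all.  Two intervals are disjoint iff one
 * starts after the other ends. */
static bool ranges_disjoint(unsigned int min, unsigned int max,
			    unsigned int cur_min, unsigned int cur_max)
{
	return min > cur_max || max < cur_min;
}
```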
+diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
+index 0f3515e..088ea74 100644
+--- a/drivers/cpuidle/sysfs.c
++++ b/drivers/cpuidle/sysfs.c
+@@ -277,7 +277,7 @@ static struct kobj_type ktype_state_cpuidle = {
+
+ static void inline cpuidle_free_state_kobj(struct cpuidle_device *device, int i)
+ {
+- kobject_unregister(&device->kobjs[i]->kobj);
++ kobject_put(&device->kobjs[i]->kobj);
+ wait_for_completion(&device->kobjs[i]->kobj_unregister);
+ kfree(device->kobjs[i]);
+ device->kobjs[i] = NULL;
+@@ -300,14 +300,13 @@ int cpuidle_add_state_sysfs(struct cpuidle_device *device)
+ kobj->state = &device->states[i];
+ init_completion(&kobj->kobj_unregister);
+
+- kobj->kobj.parent = &device->kobj;
+- kobj->kobj.ktype = &ktype_state_cpuidle;
+- kobject_set_name(&kobj->kobj, "state%d", i);
+- ret = kobject_register(&kobj->kobj);
++ ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle, &device->kobj,
++ "state%d", i);
+ if (ret) {
+ kfree(kobj);
+ goto error_state;
+ }
++ kobject_uevent(&kobj->kobj, KOBJ_ADD);
+ device->kobjs[i] = kobj;
+ }
+
+@@ -339,12 +338,14 @@ int cpuidle_add_sysfs(struct sys_device *sysdev)
+ {
+ int cpu = sysdev->id;
+ struct cpuidle_device *dev;
++ int error;
+
+ dev = per_cpu(cpuidle_devices, cpu);
+- dev->kobj.parent = &sysdev->kobj;
+- dev->kobj.ktype = &ktype_cpuidle;
+- kobject_set_name(&dev->kobj, "%s", "cpuidle");
+- return kobject_register(&dev->kobj);
++ error = kobject_init_and_add(&dev->kobj, &ktype_cpuidle, &sysdev->kobj,
++ "cpuidle");
++ if (!error)
++ kobject_uevent(&dev->kobj, KOBJ_ADD);
++ return error;
+ }
+
+ /**
+@@ -357,5 +358,5 @@ void cpuidle_remove_sysfs(struct sys_device *sysdev)
+ struct cpuidle_device *dev;
+
+ dev = per_cpu(cpuidle_devices, cpu);
+- kobject_unregister(&dev->kobj);
++ kobject_put(&dev->kobj);
+ }
+diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
+index ddd3a25..6b658d8 100644
+--- a/drivers/crypto/Kconfig
++++ b/drivers/crypto/Kconfig
+@@ -48,8 +48,6 @@ config CRYPTO_DEV_PADLOCK_SHA
+ If unsure say M. The compiled module will be
+ called padlock-sha.ko
+
+-source "arch/s390/crypto/Kconfig"
+-
+ config CRYPTO_DEV_GEODE
+ tristate "Support for the Geode LX AES engine"
+ depends on X86_32 && PCI
+@@ -83,4 +81,82 @@ config ZCRYPT_MONOLITHIC
+ that contains all parts of the crypto device driver (ap bus,
+ request router and all the card drivers).
+
++config CRYPTO_SHA1_S390
++ tristate "SHA1 digest algorithm"
++ depends on S390
++ select CRYPTO_ALGAPI
++ help
++ This is the s390 hardware accelerated implementation of the
++ SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
++
++config CRYPTO_SHA256_S390
++ tristate "SHA256 digest algorithm"
++ depends on S390
++ select CRYPTO_ALGAPI
++ help
++ This is the s390 hardware accelerated implementation of the
++ SHA256 secure hash standard (DFIPS 180-2).
++
++ This version of SHA implements a 256 bit hash with 128 bits of
++ security against collision attacks.
++
++config CRYPTO_DES_S390
++ tristate "DES and Triple DES cipher algorithms"
++ depends on S390
++ select CRYPTO_ALGAPI
++ select CRYPTO_BLKCIPHER
++ help
++	  This is the s390 hardware accelerated implementation of the
++ DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
++
++config CRYPTO_AES_S390
++ tristate "AES cipher algorithms"
++ depends on S390
++ select CRYPTO_ALGAPI
++ select CRYPTO_BLKCIPHER
++ help
++ This is the s390 hardware accelerated implementation of the
++ AES cipher algorithms (FIPS-197). AES uses the Rijndael
++ algorithm.
++
++ Rijndael appears to be consistently a very good performer in
++ both hardware and software across a wide range of computing
++ environments regardless of its use in feedback or non-feedback
++ modes. Its key setup time is excellent, and its key agility is
++ good. Rijndael's very low memory requirements make it very well
++ suited for restricted-space environments, in which it also
++ demonstrates excellent performance. Rijndael's operations are
++ among the easiest to defend against power and timing attacks.
++
++ On s390 the System z9-109 currently only supports the key size
++ of 128 bit.
++
++config S390_PRNG
++ tristate "Pseudo random number generator device driver"
++ depends on S390
++ default "m"
++ help
++ Select this option if you want to use the s390 pseudo random number
++ generator. The PRNG is part of the cryptographic processor functions
++ and uses triple-DES to generate secure random numbers like the
++ ANSI X9.17 standard. The PRNG is usable via the char device
++ /dev/prandom.
++
++config CRYPTO_DEV_HIFN_795X
++ tristate "Driver HIFN 795x crypto accelerator chips"
++ select CRYPTO_DES
++ select CRYPTO_ALGAPI
++ select CRYPTO_BLKCIPHER
++ select HW_RANDOM if CRYPTO_DEV_HIFN_795X_RNG
++ depends on PCI
++ help
++ This option allows you to have support for HIFN 795x crypto adapters.
++
++config CRYPTO_DEV_HIFN_795X_RNG
++ bool "HIFN 795x random number generator"
++ depends on CRYPTO_DEV_HIFN_795X
++ help
++ Select this option if you want to enable the random number generator
++ on the HIFN 795x crypto adapters.
++
+ endif # CRYPTO_HW
+diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
+index d070030..c0327f0 100644
+--- a/drivers/crypto/Makefile
++++ b/drivers/crypto/Makefile
+@@ -1,3 +1,4 @@
+ obj-$(CONFIG_CRYPTO_DEV_PADLOCK_AES) += padlock-aes.o
+ obj-$(CONFIG_CRYPTO_DEV_PADLOCK_SHA) += padlock-sha.o
+ obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
++obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
+diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
+index 711e246..4801162 100644
+--- a/drivers/crypto/geode-aes.c
++++ b/drivers/crypto/geode-aes.c
+@@ -13,44 +13,13 @@
+ #include <linux/crypto.h>
+ #include <linux/spinlock.h>
+ #include <crypto/algapi.h>
++#include <crypto/aes.h>
+
+ #include <asm/io.h>
+ #include <asm/delay.h>
+
+ #include "geode-aes.h"
+
+-/* Register definitions */
+-
+-#define AES_CTRLA_REG 0x0000
+-
+-#define AES_CTRL_START 0x01
+-#define AES_CTRL_DECRYPT 0x00
+-#define AES_CTRL_ENCRYPT 0x02
+-#define AES_CTRL_WRKEY 0x04
+-#define AES_CTRL_DCA 0x08
+-#define AES_CTRL_SCA 0x10
+-#define AES_CTRL_CBC 0x20
+-
+-#define AES_INTR_REG 0x0008
+-
+-#define AES_INTRA_PENDING (1 << 16)
+-#define AES_INTRB_PENDING (1 << 17)
+-
+-#define AES_INTR_PENDING (AES_INTRA_PENDING | AES_INTRB_PENDING)
+-#define AES_INTR_MASK 0x07
+-
+-#define AES_SOURCEA_REG 0x0010
+-#define AES_DSTA_REG 0x0014
+-#define AES_LENA_REG 0x0018
+-#define AES_WRITEKEY0_REG 0x0030
+-#define AES_WRITEIV0_REG 0x0040
+-
+-/* A very large counter that is used to gracefully bail out of an
+- * operation in case of trouble
+- */
+-
+-#define AES_OP_TIMEOUT 0x50000
+-
+ /* Static structures */
+
+ static void __iomem * _iobase;
+@@ -87,9 +56,10 @@ do_crypt(void *src, void *dst, int len, u32 flags)
+ /* Start the operation */
+ iowrite32(AES_CTRL_START | flags, _iobase + AES_CTRLA_REG);
+
+- do
++ do {
+ status = ioread32(_iobase + AES_INTR_REG);
+- while(!(status & AES_INTRA_PENDING) && --counter);
++ cpu_relax();
++ } while(!(status & AES_INTRA_PENDING) && --counter);
+
+ /* Clear the event */
+ iowrite32((status & 0xFF) | AES_INTRA_PENDING, _iobase + AES_INTR_REG);
+@@ -101,6 +71,7 @@ geode_aes_crypt(struct geode_aes_op *op)
+ {
+ u32 flags = 0;
+ unsigned long iflags;
++ int ret;
+
+ if (op->len == 0)
+ return 0;
+@@ -129,7 +100,8 @@ geode_aes_crypt(struct geode_aes_op *op)
+ _writefield(AES_WRITEKEY0_REG, op->key);
+ }
+
+- do_crypt(op->src, op->dst, op->len, flags);
++ ret = do_crypt(op->src, op->dst, op->len, flags);
++ BUG_ON(ret);
+
+ if (op->mode == AES_MODE_CBC)
+ _readfield(AES_WRITEIV0_REG, op->iv);
+@@ -141,18 +113,103 @@ geode_aes_crypt(struct geode_aes_op *op)
+
+ /* CRYPTO-API Functions */
+
+-static int
+-geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
++static int geode_setkey_cip(struct crypto_tfm *tfm, const u8 *key,
++ unsigned int len)
+ {
+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++ unsigned int ret;
+
+- if (len != AES_KEY_LENGTH) {
++ op->keylen = len;
++
++ if (len == AES_KEYSIZE_128) {
++ memcpy(op->key, key, len);
++ return 0;
++ }
++
++ if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
++ /* not supported at all */
+ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+- memcpy(op->key, key, len);
+- return 0;
++ /*
++ * The requested key size is not supported by HW, do a fallback
++ */
++ op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
++ op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
++
++ ret = crypto_cipher_setkey(op->fallback.cip, key, len);
++ if (ret) {
++ tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
++ tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
++ }
++ return ret;
++}
++
++static int geode_setkey_blk(struct crypto_tfm *tfm, const u8 *key,
++ unsigned int len)
++{
++ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++ unsigned int ret;
++
++ op->keylen = len;
++
++ if (len == AES_KEYSIZE_128) {
++ memcpy(op->key, key, len);
++ return 0;
++ }
++
++ if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
++ /* not supported at all */
++ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
++ return -EINVAL;
++ }
++
++ /*
++ * The requested key size is not supported by HW, do a fallback
++ */
++ op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
++ op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
++
++ ret = crypto_blkcipher_setkey(op->fallback.blk, key, len);
++ if (ret) {
++ tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
++ tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
++ }
++ return ret;
++}
++
++static int fallback_blk_dec(struct blkcipher_desc *desc,
++ struct scatterlist *dst, struct scatterlist *src,
++ unsigned int nbytes)
++{
++ unsigned int ret;
++ struct crypto_blkcipher *tfm;
++ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
++
++ tfm = desc->tfm;
++ desc->tfm = op->fallback.blk;
++
++ ret = crypto_blkcipher_decrypt_iv(desc, dst, src, nbytes);
++
++ desc->tfm = tfm;
++ return ret;
++}
++static int fallback_blk_enc(struct blkcipher_desc *desc,
++ struct scatterlist *dst, struct scatterlist *src,
++ unsigned int nbytes)
++{
++ unsigned int ret;
++ struct crypto_blkcipher *tfm;
++ struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
++
++ tfm = desc->tfm;
++ desc->tfm = op->fallback.blk;
++
++ ret = crypto_blkcipher_encrypt_iv(desc, dst, src, nbytes);
++
++ desc->tfm = tfm;
++ return ret;
+ }
+
+ static void
+@@ -160,8 +217,10 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+- if ((out == NULL) || (in == NULL))
++ if (unlikely(op->keylen != AES_KEYSIZE_128)) {
++ crypto_cipher_encrypt_one(op->fallback.cip, out, in);
+ return;
++ }
+
+ op->src = (void *) in;
+ op->dst = (void *) out;
+@@ -179,8 +238,10 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ {
+ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+- if ((out == NULL) || (in == NULL))
++ if (unlikely(op->keylen != AES_KEYSIZE_128)) {
++ crypto_cipher_decrypt_one(op->fallback.cip, out, in);
+ return;
++ }
+
+ op->src = (void *) in;
+ op->dst = (void *) out;
+@@ -192,24 +253,50 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+ geode_aes_crypt(op);
+ }
+
++static int fallback_init_cip(struct crypto_tfm *tfm)
++{
++ const char *name = tfm->__crt_alg->cra_name;
++ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++
++ op->fallback.cip = crypto_alloc_cipher(name, 0,
++ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
++
++ if (IS_ERR(op->fallback.cip)) {
++ printk(KERN_ERR "Error allocating fallback algo %s\n", name);
++		return PTR_ERR(op->fallback.cip);
++ }
++
++ return 0;
++}
++
++static void fallback_exit_cip(struct crypto_tfm *tfm)
++{
++ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++
++ crypto_free_cipher(op->fallback.cip);
++ op->fallback.cip = NULL;
++}
+
+ static struct crypto_alg geode_alg = {
+- .cra_name = "aes",
+- .cra_driver_name = "geode-aes-128",
+- .cra_priority = 300,
+- .cra_alignmask = 15,
+- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
++ .cra_name = "aes",
++ .cra_driver_name = "geode-aes",
++ .cra_priority = 300,
++ .cra_alignmask = 15,
++ .cra_flags = CRYPTO_ALG_TYPE_CIPHER |
++ CRYPTO_ALG_NEED_FALLBACK,
++ .cra_init = fallback_init_cip,
++ .cra_exit = fallback_exit_cip,
+ .cra_blocksize = AES_MIN_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct geode_aes_op),
+- .cra_module = THIS_MODULE,
+- .cra_list = LIST_HEAD_INIT(geode_alg.cra_list),
+- .cra_u = {
+- .cipher = {
+- .cia_min_keysize = AES_KEY_LENGTH,
+- .cia_max_keysize = AES_KEY_LENGTH,
+- .cia_setkey = geode_setkey,
+- .cia_encrypt = geode_encrypt,
+- .cia_decrypt = geode_decrypt
++ .cra_module = THIS_MODULE,
++ .cra_list = LIST_HEAD_INIT(geode_alg.cra_list),
++ .cra_u = {
++ .cipher = {
++ .cia_min_keysize = AES_MIN_KEY_SIZE,
++ .cia_max_keysize = AES_MAX_KEY_SIZE,
++ .cia_setkey = geode_setkey_cip,
++ .cia_encrypt = geode_encrypt,
++ .cia_decrypt = geode_decrypt
+ }
+ }
+ };
+@@ -223,8 +310,12 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (unlikely(op->keylen != AES_KEYSIZE_128))
++ return fallback_blk_dec(desc, dst, src, nbytes);
++
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
++ op->iv = walk.iv;
+
+ while((nbytes = walk.nbytes)) {
+ op->src = walk.src.virt.addr,
+@@ -233,13 +324,9 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
+ op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+ op->dir = AES_DIR_DECRYPT;
+
+- memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+-
+ ret = geode_aes_crypt(op);
+
+- memcpy(walk.iv, op->iv, AES_IV_LENGTH);
+ nbytes -= ret;
+-
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+@@ -255,8 +342,12 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ int err, ret;
+
++ if (unlikely(op->keylen != AES_KEYSIZE_128))
++ return fallback_blk_enc(desc, dst, src, nbytes);
++
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
++ op->iv = walk.iv;
+
+ while((nbytes = walk.nbytes)) {
+ op->src = walk.src.virt.addr,
+@@ -265,8 +356,6 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
+ op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+ op->dir = AES_DIR_ENCRYPT;
+
+- memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+-
+ ret = geode_aes_crypt(op);
+ nbytes -= ret;
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+@@ -275,22 +364,49 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
+ return err;
+ }
+
++static int fallback_init_blk(struct crypto_tfm *tfm)
++{
++ const char *name = tfm->__crt_alg->cra_name;
++ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++
++ op->fallback.blk = crypto_alloc_blkcipher(name, 0,
++ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
++
++ if (IS_ERR(op->fallback.blk)) {
++ printk(KERN_ERR "Error allocating fallback algo %s\n", name);
++ return PTR_ERR(op->fallback.blk);
++ }
++
++ return 0;
++}
++
++static void fallback_exit_blk(struct crypto_tfm *tfm)
++{
++ struct geode_aes_op *op = crypto_tfm_ctx(tfm);
++
++ crypto_free_blkcipher(op->fallback.blk);
++ op->fallback.blk = NULL;
++}
++
+ static struct crypto_alg geode_cbc_alg = {
+ .cra_name = "cbc(aes)",
+- .cra_driver_name = "cbc-aes-geode-128",
++ .cra_driver_name = "cbc-aes-geode",
+ .cra_priority = 400,
+- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
++ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
++ CRYPTO_ALG_NEED_FALLBACK,
++ .cra_init = fallback_init_blk,
++ .cra_exit = fallback_exit_blk,
+ .cra_blocksize = AES_MIN_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct geode_aes_op),
+ .cra_alignmask = 15,
+- .cra_type = &crypto_blkcipher_type,
+- .cra_module = THIS_MODULE,
+- .cra_list = LIST_HEAD_INIT(geode_cbc_alg.cra_list),
+- .cra_u = {
+- .blkcipher = {
+- .min_keysize = AES_KEY_LENGTH,
+- .max_keysize = AES_KEY_LENGTH,
+- .setkey = geode_setkey,
++ .cra_type = &crypto_blkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(geode_cbc_alg.cra_list),
+ .cra_u = {
@@ -180821,6 +253129,83 @@
if (lu->tgt->workarounds & SBP2_WORKAROUND_INQUIRY_36)
sdev->inquiry_len = 36;
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index 5e596a7..9008ed5 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -8,6 +8,8 @@
+ #include <linux/slab.h>
+ #include <asm/dmi.h>
+
+static char dmi_empty_string[] = "        ";
++
+ static char * __init dmi_string(const struct dmi_header *dm, u8 s)
+ {
+ const u8 *bp = ((u8 *) dm) + dm->length;
+@@ -21,11 +23,16 @@ static char * __init dmi_string(const struct dmi_header *dm, u8 s)
+ }
+
+ if (*bp != 0) {
+- str = dmi_alloc(strlen(bp) + 1);
++ size_t len = strlen(bp)+1;
++ size_t cmp_len = len > 8 ? 8 : len;
++
++ if (!memcmp(bp, dmi_empty_string, cmp_len))
++ return dmi_empty_string;
++ str = dmi_alloc(len);
+ if (str != NULL)
+ strcpy(str, bp);
+ else
+- printk(KERN_ERR "dmi_string: out of memory.\n");
++ printk(KERN_ERR "dmi_string: cannot allocate %Zu bytes.\n", len);
+ }
+ }
+
+@@ -175,12 +182,23 @@ static void __init dmi_save_devices(const struct dmi_header *dm)
+ }
+ }
+
++static struct dmi_device empty_oem_string_dev = {
++ .name = dmi_empty_string,
++};
++
+ static void __init dmi_save_oem_strings_devices(const struct dmi_header *dm)
+ {
+ int i, count = *(u8 *)(dm + 1);
+ struct dmi_device *dev;
+
+ for (i = 1; i <= count; i++) {
++ char *devname = dmi_string(dm, i);
++
++ if (!strcmp(devname, dmi_empty_string)) {
++ list_add(&empty_oem_string_dev.list, &dmi_devices);
++ continue;
++ }
++
+ dev = dmi_alloc(sizeof(*dev));
+ if (!dev) {
+ printk(KERN_ERR
+@@ -189,7 +207,7 @@ static void __init dmi_save_oem_strings_devices(const struct dmi_header *dm)
+ }
+
+ dev->type = DMI_DEV_TYPE_OEM_STRING;
+- dev->name = dmi_string(dm, i);
++ dev->name = devname;
+ dev->device_data = NULL;
+
+ list_add(&dev->list, &dmi_devices);
+@@ -331,9 +349,11 @@ void __init dmi_scan_machine(void)
+ rc = dmi_present(q);
+ if (!rc) {
+ dmi_available = 1;
++ dmi_iounmap(p, 0x10000);
+ return;
+ }
+ }
++ dmi_iounmap(p, 0x10000);
+ }
+ out: printk(KERN_INFO "DMI not present or invalid.\n");
+ }
diff --git a/drivers/firmware/edd.c b/drivers/firmware/edd.c
index 6942e06..d168223 100644
--- a/drivers/firmware/edd.c
@@ -196576,6 +268961,306 @@
- }
-}
-#endif
+diff --git a/drivers/ieee1394/Makefile b/drivers/ieee1394/Makefile
+index 489c133..1f8153b 100644
+--- a/drivers/ieee1394/Makefile
++++ b/drivers/ieee1394/Makefile
+@@ -15,3 +15,4 @@ obj-$(CONFIG_IEEE1394_SBP2) += sbp2.o
+ obj-$(CONFIG_IEEE1394_DV1394) += dv1394.o
+ obj-$(CONFIG_IEEE1394_ETH1394) += eth1394.o
+
++obj-$(CONFIG_PROVIDE_OHCI1394_DMA_INIT) += init_ohci1394_dma.o
+diff --git a/drivers/ieee1394/init_ohci1394_dma.c b/drivers/ieee1394/init_ohci1394_dma.c
+new file mode 100644
+index 0000000..ddaab6e
+--- /dev/null
++++ b/drivers/ieee1394/init_ohci1394_dma.c
+@@ -0,0 +1,285 @@
++/*
++ * init_ohci1394_dma.c - Initializes physical DMA on all OHCI 1394 controllers
++ *
++ * Copyright (C) 2006-2007 Bernhard Kaindl <bk at suse.de>
++ *
++ * Derived from drivers/ieee1394/ohci1394.c and arch/x86/kernel/early-quirks.c
++ * this file has functions to:
++ * - scan the PCI very early on boot for all OHCI 1394-compliant controllers
++ * - reset and initialize them and make them join the IEEE1394 bus and
++ * - enable physical DMA on them to allow remote debugging
++ *
++ * All code and data is marked as __init and __initdata, respectively,
++ * because during boot all OHCI1394 controllers may be claimed by the
++ * firewire stack, at which point this code should not touch them anymore.
++ *
++ * To use physical DMA after the initialization of the firewire stack,
++ * be sure that the stack enables it and (re-)attach after the bus reset
++ * which may be caused by the firewire stack initialization.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software Foundation,
++ * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++ */
++
++#include <linux/interrupt.h> /* for ohci1394.h */
++#include <linux/delay.h>
++#include <linux/pci.h> /* for PCI defines */
++#include <linux/init_ohci1394_dma.h>
++#include <asm/pci-direct.h> /* for direct PCI config space access */
++#include <asm/fixmap.h>
++
++#include "ieee1394_types.h"
++#include "ohci1394.h"
++
++int __initdata init_ohci1394_dma_early;
++
++/* Reads a PHY register of an OHCI-1394 controller */
++static inline u8 __init get_phy_reg(struct ti_ohci *ohci, u8 addr)
++{
++ int i;
++ quadlet_t r;
++
++ reg_write(ohci, OHCI1394_PhyControl, (addr << 8) | 0x00008000);
++
++ for (i = 0; i < OHCI_LOOP_COUNT; i++) {
++ if (reg_read(ohci, OHCI1394_PhyControl) & 0x80000000)
++ break;
++ mdelay(1);
++ }
++ r = reg_read(ohci, OHCI1394_PhyControl);
++
++ return (r & 0x00ff0000) >> 16;
++}
++
++/* Writes to a PHY register of an OHCI-1394 controller */
++static inline void __init set_phy_reg(struct ti_ohci *ohci, u8 addr, u8 data)
++{
++ int i;
++
++ reg_write(ohci, OHCI1394_PhyControl, (addr << 8) | data | 0x00004000);
++
++ for (i = 0; i < OHCI_LOOP_COUNT; i++) {
++ u32 r = reg_read(ohci, OHCI1394_PhyControl);
++ if (!(r & 0x00004000))
++ break;
++ mdelay(1);
++ }
++}
++
++/* Resets an OHCI-1394 controller (for sane state before initialization) */
++static inline void __init init_ohci1394_soft_reset(struct ti_ohci *ohci) {
++ int i;
++
++ reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_softReset);
++
++ for (i = 0; i < OHCI_LOOP_COUNT; i++) {
++ if (!(reg_read(ohci, OHCI1394_HCControlSet)
++ & OHCI1394_HCControl_softReset))
++ break;
++ mdelay(1);
++ }
++}
++
++/* Basic OHCI-1394 register and port initialization */
++static inline void __init init_ohci1394_initialize(struct ti_ohci *ohci)
++{
++ quadlet_t bus_options;
++ int num_ports, i;
++
++ /* Put some defaults to these undefined bus options */
++ bus_options = reg_read(ohci, OHCI1394_BusOptions);
++ bus_options |= 0x60000000; /* Enable CMC and ISC */
++ bus_options &= ~0x00ff0000; /* XXX: Set cyc_clk_acc to zero for now */
++ bus_options &= ~0x18000000; /* Disable PMC and BMC */
++ reg_write(ohci, OHCI1394_BusOptions, bus_options);
++
++ /* Set the bus number */
++ reg_write(ohci, OHCI1394_NodeID, 0x0000ffc0);
++
++ /* Enable posted writes */
++ reg_write(ohci, OHCI1394_HCControlSet,
++ OHCI1394_HCControl_postedWriteEnable);
++
++ /* Clear link control register */
++ reg_write(ohci, OHCI1394_LinkControlClear, 0xffffffff);
++
++ /* enable phys */
++ reg_write(ohci, OHCI1394_LinkControlSet,
++ OHCI1394_LinkControl_RcvPhyPkt);
++
++ /* Don't accept phy packets into AR request context */
++ reg_write(ohci, OHCI1394_LinkControlClear, 0x00000400);
++
++	/* Clear the isochronous interrupt masks */
++ reg_write(ohci, OHCI1394_IsoRecvIntMaskClear, 0xffffffff);
++ reg_write(ohci, OHCI1394_IsoRecvIntEventClear, 0xffffffff);
++ reg_write(ohci, OHCI1394_IsoXmitIntMaskClear, 0xffffffff);
++ reg_write(ohci, OHCI1394_IsoXmitIntEventClear, 0xffffffff);
++
++	/* Accept asynchronous transfer requests from all nodes for now */
++	reg_write(ohci, OHCI1394_AsReqFilterHiSet, 0x80000000);
++
++	/* Specify asynchronous transfer retries */
++ reg_write(ohci, OHCI1394_ATRetries,
++ OHCI1394_MAX_AT_REQ_RETRIES |
++ (OHCI1394_MAX_AT_RESP_RETRIES<<4) |
++ (OHCI1394_MAX_PHYS_RESP_RETRIES<<8));
++
++ /* We don't want hardware swapping */
++ reg_write(ohci, OHCI1394_HCControlClear, OHCI1394_HCControl_noByteSwap);
++
++ /* Enable link */
++ reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_linkEnable);
++
++ /* If anything is connected to a port, make sure it is enabled */
++ num_ports = get_phy_reg(ohci, 2) & 0xf;
++ for (i = 0; i < num_ports; i++) {
++ unsigned int status;
++
++ set_phy_reg(ohci, 7, i);
++ status = get_phy_reg(ohci, 8);
++
++ if (status & 0x20)
++ set_phy_reg(ohci, 8, status & ~1);
++ }
++}
++
++/**
++ * init_ohci1394_wait_for_busresets - wait until bus resets are completed
++ *
++ * OHCI1394 initialization itself and any device going on- or offline
++ * and any cable issue cause an IEEE1394 bus reset. The OHCI1394 spec
++ * specifies that physical DMA is disabled on each bus reset and it
++ * has to be enabled after each bus reset when needed. We resort
++ * to polling here because on early boot, we have no interrupts.
++ */
++static inline void __init init_ohci1394_wait_for_busresets(struct ti_ohci *ohci)
++{
++ int i, events;
++
++ for (i=0; i < 9; i++) {
++ mdelay(200);
++ events = reg_read(ohci, OHCI1394_IntEventSet);
++ if (events & OHCI1394_busReset)
++ reg_write(ohci, OHCI1394_IntEventClear,
++ OHCI1394_busReset);
++ }
++}
++
++/**
++ * init_ohci1394_enable_physical_dma - Enable physical DMA for remote debugging
++ * This enables remote DMA access over IEEE1394 from every host for the low
++ * 4GB of address space. DMA accesses above 4GB are not available currently.
++ */
++static inline void __init init_ohci1394_enable_physical_dma(struct ti_ohci *hci)
++{
++ reg_write(hci, OHCI1394_PhyReqFilterHiSet, 0xffffffff);
++ reg_write(hci, OHCI1394_PhyReqFilterLoSet, 0xffffffff);
++ reg_write(hci, OHCI1394_PhyUpperBound, 0xffff0000);
++}
++
++/**
++ * init_ohci1394_reset_and_init_dma - init controller and enable DMA
++ * This initializes the given controller and enables physical DMA engine in it.
++ */
++static inline void __init init_ohci1394_reset_and_init_dma(struct ti_ohci *ohci)
++{
++ /* Start off with a soft reset, clears everything to a sane state. */
++ init_ohci1394_soft_reset(ohci);
++
++ /* Accessing some registers without LPS enabled may cause lock up */
++ reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_LPS);
++
++ /* Disable and clear interrupts */
++ reg_write(ohci, OHCI1394_IntEventClear, 0xffffffff);
++ reg_write(ohci, OHCI1394_IntMaskClear, 0xffffffff);
++
++ mdelay(50); /* Wait 50msec to make sure we have full link enabled */
++
++ init_ohci1394_initialize(ohci);
++ /*
++ * The initialization causes at least one IEEE1394 bus reset. Enabling
++ * physical DMA only works *after* *all* bus resets have calmed down:
++ */
++ init_ohci1394_wait_for_busresets(ohci);
++
++ /* We had to wait and do this now if we want to debug early problems */
++ init_ohci1394_enable_physical_dma(ohci);
++}
++
++/**
++ * init_ohci1394_controller - Map the registers of the controller and init DMA
++ * This maps the registers of the specified controller and initializes it
++ */
++static inline void __init init_ohci1394_controller(int num, int slot, int func)
++{
++ unsigned long ohci_base;
++ struct ti_ohci ohci;
++
++ printk(KERN_INFO "init_ohci1394_dma: initializing OHCI-1394"
++ " at %02x:%02x.%x\n", num, slot, func);
++
++ ohci_base = read_pci_config(num, slot, func, PCI_BASE_ADDRESS_0+(0<<2))
++ & PCI_BASE_ADDRESS_MEM_MASK;
++
++ set_fixmap_nocache(FIX_OHCI1394_BASE, ohci_base);
++
++ ohci.registers = (void *)fix_to_virt(FIX_OHCI1394_BASE);
++
++ init_ohci1394_reset_and_init_dma(&ohci);
++}
++
++/**
++ * init_ohci1394_dma_on_all_controllers - scan for OHCI1394 controllers and init DMA on them
++ * Scans the whole PCI space for OHCI1394 controllers and inits DMA on them
++ */
++void __init init_ohci1394_dma_on_all_controllers(void)
++{
++ int num, slot, func;
++
++ if (!early_pci_allowed())
++ return;
++
++ /* Poor man's PCI discovery, the only thing we can do at early boot */
++ for (num = 0; num < 32; num++) {
++ for (slot = 0; slot < 32; slot++) {
++ for (func = 0; func < 8; func++) {
++ u32 class = read_pci_config(num,slot,func,
++ PCI_CLASS_REVISION);
++ if ((class == 0xffffffff))
++ continue; /* No device at this func */
++
++ if (class>>8 != PCI_CLASS_SERIAL_FIREWIRE_OHCI)
++ continue; /* Not an OHCI-1394 device */
++
++ init_ohci1394_controller(num, slot, func);
++ break; /* Assume one controller per device */
++ }
++ }
++ }
++ printk(KERN_INFO "init_ohci1394_dma: finished initializing OHCI DMA\n");
++}
++
++/**
++ * setup_ohci1394_dma - enables early OHCI1394 DMA initialization
++ */
++static int __init setup_ohci1394_dma(char *opt)
++{
++ if (!strcmp(opt, "early"))
++ init_ohci1394_dma_early = 1;
++ return 0;
++}
++
++/* passing ohci1394_dma=early on boot causes early OHCI1394 DMA initialization */
++early_param("ohci1394_dma", setup_ohci1394_dma);
diff --git a/drivers/ieee1394/nodemgr.c b/drivers/ieee1394/nodemgr.c
index 90dc75b..511e432 100644
--- a/drivers/ieee1394/nodemgr.c
@@ -205680,6 +278365,31 @@
struct srp_device {
struct list_head dev_list;
struct ib_device *dev;
+diff --git a/drivers/input/mouse/pc110pad.c b/drivers/input/mouse/pc110pad.c
+index 8991ab0..61cff83 100644
+--- a/drivers/input/mouse/pc110pad.c
++++ b/drivers/input/mouse/pc110pad.c
+@@ -39,6 +39,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/pci.h>
++#include <linux/delay.h>
+
+ #include <asm/io.h>
+ #include <asm/irq.h>
+@@ -62,8 +63,10 @@ static irqreturn_t pc110pad_interrupt(int irq, void *ptr)
+ int value = inb_p(pc110pad_io);
+ int handshake = inb_p(pc110pad_io + 2);
+
+- outb_p(handshake | 1, pc110pad_io + 2);
+- outb_p(handshake & ~1, pc110pad_io + 2);
++ outb(handshake | 1, pc110pad_io + 2);
++ udelay(2);
++ outb(handshake & ~1, pc110pad_io + 2);
++ udelay(2);
+ inb_p(0x64);
+
+ pc110pad_data[pc110pad_count++] = value;
diff --git a/drivers/input/touchscreen/corgi_ts.c b/drivers/input/touchscreen/corgi_ts.c
index b1b2e07..99d92f5 100644
--- a/drivers/input/touchscreen/corgi_ts.c
@@ -205787,10 +278497,75 @@
.suspend = kvm_suspend,
.resume = kvm_resume,
};
+diff --git a/drivers/kvm/svm.c b/drivers/kvm/svm.c
+index 4e04e49..ced4ac1 100644
+--- a/drivers/kvm/svm.c
++++ b/drivers/kvm/svm.c
+@@ -290,7 +290,7 @@ static void svm_hardware_enable(void *garbage)
+ #ifdef CONFIG_X86_64
+ struct desc_ptr gdt_descr;
+ #else
+- struct Xgt_desc_struct gdt_descr;
++ struct desc_ptr gdt_descr;
+ #endif
+ struct desc_struct *gdt;
+ int me = raw_smp_processor_id();
+diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
+index bb56ae3..5b397b6 100644
+--- a/drivers/kvm/vmx.c
++++ b/drivers/kvm/vmx.c
+@@ -524,7 +524,7 @@ static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
+ static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+ {
+ if (vcpu->rmode.active)
+- rflags |= IOPL_MASK | X86_EFLAGS_VM;
++ rflags |= X86_EFLAGS_IOPL | X86_EFLAGS_VM;
+ vmcs_writel(GUEST_RFLAGS, rflags);
+ }
+
+@@ -1050,7 +1050,7 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
+ vmcs_write32(GUEST_TR_AR_BYTES, vcpu->rmode.tr.ar);
+
+ flags = vmcs_readl(GUEST_RFLAGS);
+- flags &= ~(IOPL_MASK | X86_EFLAGS_VM);
++ flags &= ~(X86_EFLAGS_IOPL | X86_EFLAGS_VM);
+ flags |= (vcpu->rmode.save_iopl << IOPL_SHIFT);
+ vmcs_writel(GUEST_RFLAGS, flags);
+
+@@ -1107,9 +1107,9 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
+ vmcs_write32(GUEST_TR_AR_BYTES, 0x008b);
+
+ flags = vmcs_readl(GUEST_RFLAGS);
+- vcpu->rmode.save_iopl = (flags & IOPL_MASK) >> IOPL_SHIFT;
++ vcpu->rmode.save_iopl = (flags & X86_EFLAGS_IOPL) >> IOPL_SHIFT;
+
+- flags |= IOPL_MASK | X86_EFLAGS_VM;
++ flags |= X86_EFLAGS_IOPL | X86_EFLAGS_VM;
+
+ vmcs_writel(GUEST_RFLAGS, flags);
+ vmcs_writel(GUEST_CR4, vmcs_readl(GUEST_CR4) | X86_CR4_VME);
diff --git a/drivers/lguest/x86/core.c b/drivers/lguest/x86/core.c
-index 482aec2..96d0fd0 100644
+index 482aec2..44adb00 100644
--- a/drivers/lguest/x86/core.c
+++ b/drivers/lguest/x86/core.c
+@@ -94,7 +94,7 @@ static void copy_in_guest_info(struct lguest *lg, struct lguest_pages *pages)
+ /* Set up the two "TSS" members which tell the CPU what stack to use
+ * for traps which do directly into the Guest (ie. traps at privilege
+ * level 1). */
+- pages->state.guest_tss.esp1 = lg->esp1;
++ pages->state.guest_tss.sp1 = lg->esp1;
+ pages->state.guest_tss.ss1 = lg->ss1;
+
+ /* Copy direct-to-Guest trap entries. */
+@@ -416,7 +416,7 @@ void __init lguest_arch_host_init(void)
+ /* We know where we want the stack to be when the Guest enters
+ * the switcher: in pages->regs. The stack grows upwards, so
+ * we start it at the end of that structure. */
+- state->guest_tss.esp0 = (long)(&pages->regs + 1);
++ state->guest_tss.sp0 = (long)(&pages->regs + 1);
+ /* And this is the GDT entry to use for the stack: we keep a
+ * couple of special LGUEST entries. */
+ state->guest_tss.ss0 = LGUEST_DS;
@@ -459,7 +459,7 @@ void __init lguest_arch_host_init(void)
/* We don't need the complexity of CPUs coming and going while we're
@@ -262167,7 +334942,7 @@
dev->name);
dev->stats.rx_dropped++;
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
-index 9af05a2..5a2d1dd 100644
+index 9af05a2..6c57540 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -212,7 +212,7 @@ config MII
@@ -262232,16 +335007,17 @@
help
Say Y here if you have an Seeq based Ethernet network card. This is
used in many Silicon Graphics machines.
-@@ -1962,7 +1992,7 @@ config E1000_DISABLE_PACKET_SPLIT
+@@ -1979,6 +2009,9 @@ config E1000E
+ To compile this driver as a module, choose M here. The module
+ will be called e1000e.
- config E1000E
- tristate "Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support"
-- depends on PCI
-+ depends on PCI && EXPERIMENTAL
- ---help---
- This driver supports the PCI-Express Intel(R) PRO/1000 gigabit
- ethernet family of adapters. For PCI or PCI-X e1000 adapters,
-@@ -1989,6 +2019,28 @@ config IP1000
++config E1000E_ENABLED
++ def_bool E1000E != n
++
+ config IP1000
+ tristate "IP1000 Gigabit Ethernet support"
+ depends on PCI && EXPERIMENTAL
+@@ -1989,6 +2022,28 @@ config IP1000
To compile this driver as a module, choose M here: the module
will be called ipg. This is recommended.
@@ -262270,7 +335046,7 @@
source "drivers/net/ixp2000/Kconfig"
config MYRI_SBUS
-@@ -2560,6 +2612,7 @@ config PASEMI_MAC
+@@ -2560,6 +2615,7 @@ config PASEMI_MAC
tristate "PA Semi 1/10Gbit MAC"
depends on PPC64 && PCI
select PHYLIB
@@ -262278,7 +335054,7 @@
help
This driver supports the on-chip 1/10Gbit Ethernet controller on
PA Semi's PWRficient line of chips.
-@@ -2585,6 +2638,16 @@ config TEHUTI
+@@ -2585,6 +2641,16 @@ config TEHUTI
help
Tehuti Networks 10G Ethernet NIC
@@ -262295,7 +335071,7 @@
endif # NETDEV_10000
source "drivers/net/tokenring/Kconfig"
-@@ -3015,23 +3078,6 @@ config NET_FC
+@@ -3015,23 +3081,6 @@ config NET_FC
adaptor below. You also should have said Y to "SCSI support" and
"SCSI generic support".
@@ -292542,10 +365318,94 @@
/* Number of entries in the Multicast Table Array (MTA). */
diff --git a/drivers/net/e1000/e1000_main.c b/drivers/net/e1000/e1000_main.c
-index 76c0fa6..3111af6 100644
+index 76c0fa6..8c87940 100644
--- a/drivers/net/e1000/e1000_main.c
+++ b/drivers/net/e1000/e1000_main.c
-@@ -153,7 +153,7 @@ static void e1000_clean_tx_ring(struct e1000_adapter *adapter,
+@@ -47,6 +47,12 @@ static const char e1000_copyright[] = "Copyright (c) 1999-2006 Intel Corporation
+ * Macro expands to...
+ * {PCI_DEVICE(PCI_VENDOR_ID_INTEL, device_id)}
+ */
++#ifdef CONFIG_E1000E_ENABLED
++ #define PCIE(x)
++#else
++ #define PCIE(x) x,
++#endif
++
+ static struct pci_device_id e1000_pci_tbl[] = {
+ INTEL_E1000_ETHERNET_DEVICE(0x1000),
+ INTEL_E1000_ETHERNET_DEVICE(0x1001),
+@@ -73,14 +79,14 @@ static struct pci_device_id e1000_pci_tbl[] = {
+ INTEL_E1000_ETHERNET_DEVICE(0x1026),
+ INTEL_E1000_ETHERNET_DEVICE(0x1027),
+ INTEL_E1000_ETHERNET_DEVICE(0x1028),
+- INTEL_E1000_ETHERNET_DEVICE(0x1049),
+- INTEL_E1000_ETHERNET_DEVICE(0x104A),
+- INTEL_E1000_ETHERNET_DEVICE(0x104B),
+- INTEL_E1000_ETHERNET_DEVICE(0x104C),
+- INTEL_E1000_ETHERNET_DEVICE(0x104D),
+- INTEL_E1000_ETHERNET_DEVICE(0x105E),
+- INTEL_E1000_ETHERNET_DEVICE(0x105F),
+- INTEL_E1000_ETHERNET_DEVICE(0x1060),
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x1049))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x104A))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x104B))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x104C))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x104D))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x105E))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x105F))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x1060))
+ INTEL_E1000_ETHERNET_DEVICE(0x1075),
+ INTEL_E1000_ETHERNET_DEVICE(0x1076),
+ INTEL_E1000_ETHERNET_DEVICE(0x1077),
+@@ -89,28 +95,28 @@ static struct pci_device_id e1000_pci_tbl[] = {
+ INTEL_E1000_ETHERNET_DEVICE(0x107A),
+ INTEL_E1000_ETHERNET_DEVICE(0x107B),
+ INTEL_E1000_ETHERNET_DEVICE(0x107C),
+- INTEL_E1000_ETHERNET_DEVICE(0x107D),
+- INTEL_E1000_ETHERNET_DEVICE(0x107E),
+- INTEL_E1000_ETHERNET_DEVICE(0x107F),
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x107D))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x107E))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x107F))
+ INTEL_E1000_ETHERNET_DEVICE(0x108A),
+- INTEL_E1000_ETHERNET_DEVICE(0x108B),
+- INTEL_E1000_ETHERNET_DEVICE(0x108C),
+- INTEL_E1000_ETHERNET_DEVICE(0x1096),
+- INTEL_E1000_ETHERNET_DEVICE(0x1098),
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x108B))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x108C))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x1096))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x1098))
+ INTEL_E1000_ETHERNET_DEVICE(0x1099),
+- INTEL_E1000_ETHERNET_DEVICE(0x109A),
+- INTEL_E1000_ETHERNET_DEVICE(0x10A4),
+- INTEL_E1000_ETHERNET_DEVICE(0x10A5),
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x109A))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10A4))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10A5))
+ INTEL_E1000_ETHERNET_DEVICE(0x10B5),
+- INTEL_E1000_ETHERNET_DEVICE(0x10B9),
+- INTEL_E1000_ETHERNET_DEVICE(0x10BA),
+- INTEL_E1000_ETHERNET_DEVICE(0x10BB),
+- INTEL_E1000_ETHERNET_DEVICE(0x10BC),
+- INTEL_E1000_ETHERNET_DEVICE(0x10C4),
+- INTEL_E1000_ETHERNET_DEVICE(0x10C5),
+- INTEL_E1000_ETHERNET_DEVICE(0x10D5),
+- INTEL_E1000_ETHERNET_DEVICE(0x10D9),
+- INTEL_E1000_ETHERNET_DEVICE(0x10DA),
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10B9))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10BA))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10BB))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10BC))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10C4))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10C5))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10D5))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10D9))
++PCIE( INTEL_E1000_ETHERNET_DEVICE(0x10DA))
+ /* required last entry */
+ {0,}
+ };
+@@ -153,7 +159,7 @@ static void e1000_clean_tx_ring(struct e1000_adapter *adapter,
struct e1000_tx_ring *tx_ring);
static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring);
@@ -292554,7 +365414,7 @@
static void e1000_update_phy_info(unsigned long data);
static void e1000_watchdog(unsigned long data);
static void e1000_82547_tx_fifo_stall(unsigned long data);
-@@ -299,14 +299,14 @@ module_exit(e1000_exit_module);
+@@ -299,14 +305,14 @@ module_exit(e1000_exit_module);
static int e1000_request_irq(struct e1000_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
@@ -292571,7 +365431,7 @@
irq_flags = 0;
}
}
-@@ -514,7 +514,7 @@ static void e1000_configure(struct e1000_adapter *adapter)
+@@ -514,7 +520,7 @@ static void e1000_configure(struct e1000_adapter *adapter)
struct net_device *netdev = adapter->netdev;
int i;
@@ -292580,7 +365440,7 @@
e1000_restore_vlan(adapter);
e1000_init_manageability(adapter);
-@@ -845,6 +845,64 @@ e1000_reset(struct e1000_adapter *adapter)
+@@ -845,6 +851,64 @@ e1000_reset(struct e1000_adapter *adapter)
}
/**
@@ -292645,7 +365505,7 @@
* e1000_probe - Device Initialization Routine
* @pdev: PCI device information struct
* @ent: entry in e1000_pci_tbl
-@@ -927,7 +985,7 @@ e1000_probe(struct pci_dev *pdev,
+@@ -927,7 +991,7 @@ e1000_probe(struct pci_dev *pdev,
netdev->stop = &e1000_close;
netdev->hard_start_xmit = &e1000_xmit_frame;
netdev->get_stats = &e1000_get_stats;
@@ -292654,7 +365514,7 @@
netdev->set_mac_address = &e1000_set_mac;
netdev->change_mtu = &e1000_change_mtu;
netdev->do_ioctl = &e1000_ioctl;
-@@ -995,7 +1053,6 @@ e1000_probe(struct pci_dev *pdev,
+@@ -995,7 +1059,6 @@ e1000_probe(struct pci_dev *pdev,
adapter->en_mng_pt = e1000_enable_mng_pass_thru(&adapter->hw);
/* initialize eeprom parameters */
@@ -292662,7 +365522,7 @@
if (e1000_init_eeprom_params(&adapter->hw)) {
E1000_ERR("EEPROM initialization failed\n");
goto err_eeprom;
-@@ -1007,23 +1064,29 @@ e1000_probe(struct pci_dev *pdev,
+@@ -1007,23 +1070,29 @@ e1000_probe(struct pci_dev *pdev,
e1000_reset_hw(&adapter->hw);
/* make sure the EEPROM is good */
@@ -292702,7 +365562,7 @@
e1000_get_bus_info(&adapter->hw);
-@@ -2410,21 +2473,22 @@ e1000_set_mac(struct net_device *netdev, void *p)
+@@ -2410,21 +2479,22 @@ e1000_set_mac(struct net_device *netdev, void *p)
}
/**
@@ -292731,7 +365591,7 @@
uint32_t rctl;
uint32_t hash_value;
int i, rar_entries = E1000_RAR_ENTRIES;
-@@ -2447,9 +2511,16 @@ e1000_set_multi(struct net_device *netdev)
+@@ -2447,9 +2517,16 @@ e1000_set_multi(struct net_device *netdev)
rctl |= (E1000_RCTL_UPE | E1000_RCTL_MPE);
} else if (netdev->flags & IFF_ALLMULTI) {
rctl |= E1000_RCTL_MPE;
@@ -292750,7 +365610,7 @@
}
E1000_WRITE_REG(hw, RCTL, rctl);
-@@ -2459,7 +2530,10 @@ e1000_set_multi(struct net_device *netdev)
+@@ -2459,7 +2536,10 @@ e1000_set_multi(struct net_device *netdev)
if (hw->mac_type == e1000_82542_rev2_0)
e1000_enter_82542_rst(adapter);
@@ -292762,7 +365622,7 @@
* RAR 0 is used for the station MAC adddress
* if there are not 14 addresses, go ahead and clear the filters
* -- with 82571 controllers only 0-13 entries are filled here
-@@ -2467,8 +2541,11 @@ e1000_set_multi(struct net_device *netdev)
+@@ -2467,8 +2547,11 @@ e1000_set_multi(struct net_device *netdev)
mc_ptr = netdev->mc_list;
for (i = 1; i < rar_entries; i++) {
@@ -292776,7 +365636,7 @@
mc_ptr = mc_ptr->next;
} else {
E1000_WRITE_REG_ARRAY(hw, RA, i << 1, 0);
-@@ -2477,6 +2554,7 @@ e1000_set_multi(struct net_device *netdev)
+@@ -2477,6 +2560,7 @@ e1000_set_multi(struct net_device *netdev)
E1000_WRITE_FLUSH(hw);
}
}
@@ -292784,7 +365644,7 @@
/* clear the old settings from the multicast hash table */
-@@ -2488,7 +2566,7 @@ e1000_set_multi(struct net_device *netdev)
+@@ -2488,7 +2572,7 @@ e1000_set_multi(struct net_device *netdev)
/* load any remaining addresses into the hash table */
for (; mc_ptr; mc_ptr = mc_ptr->next) {
@@ -292793,7 +365653,7 @@
e1000_mta_set(hw, hash_value);
}
-@@ -3680,10 +3758,6 @@ e1000_update_stats(struct e1000_adapter *adapter)
+@@ -3680,10 +3764,6 @@ e1000_update_stats(struct e1000_adapter *adapter)
}
/* Fill out the OS statistics structure */
@@ -292804,7 +365664,7 @@
adapter->net_stats.multicast = adapter->stats.mprc;
adapter->net_stats.collisions = adapter->stats.colc;
-@@ -4059,6 +4133,8 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter,
+@@ -4059,6 +4139,8 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter,
}
adapter->total_tx_bytes += total_tx_bytes;
adapter->total_tx_packets += total_tx_packets;
@@ -292813,7 +365673,7 @@
return cleaned;
}
-@@ -4106,8 +4182,8 @@ e1000_rx_checksum(struct e1000_adapter *adapter,
+@@ -4106,8 +4188,8 @@ e1000_rx_checksum(struct e1000_adapter *adapter,
/* Hardware complements the payload checksum, so we undo it
* and then put the value in host order for further stack use.
*/
@@ -292824,7 +365684,7 @@
skb->ip_summed = CHECKSUM_COMPLETE;
}
adapter->hw_csum_good++;
-@@ -4281,6 +4357,8 @@ next_desc:
+@@ -4281,6 +4363,8 @@ next_desc:
adapter->total_rx_packets += total_rx_packets;
adapter->total_rx_bytes += total_rx_bytes;
@@ -292833,7 +365693,7 @@
return cleaned;
}
-@@ -4468,6 +4546,8 @@ next_desc:
+@@ -4468,6 +4552,8 @@ next_desc:
adapter->total_rx_packets += total_rx_packets;
adapter->total_rx_bytes += total_rx_bytes;
@@ -292842,7 +365702,7 @@
return cleaned;
}
-@@ -4631,7 +4711,7 @@ e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
+@@ -4631,7 +4717,7 @@ e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
rx_desc->read.buffer_addr[j+1] =
cpu_to_le64(ps_page_dma->ps_page_dma[j]);
} else
@@ -292851,7 +365711,7 @@
}
skb = netdev_alloc_skb(netdev,
-@@ -4874,22 +4954,6 @@ e1000_pci_clear_mwi(struct e1000_hw *hw)
+@@ -4874,22 +4960,6 @@ e1000_pci_clear_mwi(struct e1000_hw *hw)
pci_clear_mwi(adapter->pdev);
}
@@ -292874,7 +365734,7 @@
int
e1000_pcix_get_mmrbc(struct e1000_hw *hw)
{
-@@ -5095,7 +5159,7 @@ e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+@@ -5095,7 +5165,7 @@ e1000_suspend(struct pci_dev *pdev, pm_message_t state)
if (wufc) {
e1000_setup_rctl(adapter);
@@ -376500,5324 +449360,7233 @@
+ lq_sta->drv = priv;
#endif
- if (priv->assoc_station_added)
- priv->lq_mngr.lq_ready = 1;
+ if (priv->assoc_station_added)
+ priv->lq_mngr.lq_ready = 1;
+
+- rs_initialize_lq(priv, sta);
++ rs_initialize_lq(priv, conf, sta);
+ }
+
+-static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
+- struct iwl_rate *tx_mcs,
+- struct iwl_link_quality_cmd *lq_cmd)
++static void rs_fill_link_cmd(struct iwl4965_lq_sta *lq_sta,
++ struct iwl4965_rate *tx_mcs,
++ struct iwl4965_link_quality_cmd *lq_cmd)
+ {
+ int index = 0;
+ int rate_idx;
+ int repeat_rate = 0;
+ u8 ant_toggle_count = 0;
+ u8 use_ht_possible = 1;
+- struct iwl_rate new_rate;
+- struct iwl_scale_tbl_info tbl_type = { 0 };
++ struct iwl4965_rate new_rate;
++ struct iwl4965_scale_tbl_info tbl_type = { 0 };
+
+- rs_dbgfs_set_mcs(lq_data, tx_mcs, index);
++ /* Override starting rate (index 0) if needed for debug purposes */
++ rs_dbgfs_set_mcs(lq_sta, tx_mcs, index);
+
+- rs_get_tbl_info_from_mcs(tx_mcs, lq_data->phymode,
++ /* Interpret rate_n_flags */
++ rs_get_tbl_info_from_mcs(tx_mcs, lq_sta->phymode,
+ &tbl_type, &rate_idx);
+
++ /* How many times should we repeat the initial rate? */
+ if (is_legacy(tbl_type.lq_type)) {
+ ant_toggle_count = 1;
+ repeat_rate = IWL_NUMBER_TRY;
+@@ -1909,19 +2219,27 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
+
+ lq_cmd->general_params.mimo_delimiter =
+ is_mimo(tbl_type.lq_type) ? 1 : 0;
++
++ /* Fill 1st table entry (index 0) */
+ lq_cmd->rs_table[index].rate_n_flags =
+ cpu_to_le32(tx_mcs->rate_n_flags);
+ new_rate.rate_n_flags = tx_mcs->rate_n_flags;
+
+ if (is_mimo(tbl_type.lq_type) || (tbl_type.antenna_type == ANT_MAIN))
+- lq_cmd->general_params.single_stream_ant_msk = 1;
++ lq_cmd->general_params.single_stream_ant_msk
++ = LINK_QUAL_ANT_A_MSK;
+ else
+- lq_cmd->general_params.single_stream_ant_msk = 2;
++ lq_cmd->general_params.single_stream_ant_msk
++ = LINK_QUAL_ANT_B_MSK;
+
+ index++;
+ repeat_rate--;
+
++ /* Fill rest of rate table */
+ while (index < LINK_QUAL_MAX_RETRY_NUM) {
++ /* Repeat initial/next rate.
++ * For legacy IWL_NUMBER_TRY == 1, this loop will not execute.
++ * For HT IWL_HT_NUMBER_TRY == 3, this executes twice. */
+ while (repeat_rate > 0 && (index < LINK_QUAL_MAX_RETRY_NUM)) {
+ if (is_legacy(tbl_type.lq_type)) {
+ if (ant_toggle_count <
+@@ -1933,22 +2251,30 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
+ }
+ }
+
+- rs_dbgfs_set_mcs(lq_data, &new_rate, index);
++ /* Override next rate if needed for debug purposes */
++ rs_dbgfs_set_mcs(lq_sta, &new_rate, index);
++
++ /* Fill next table entry */
+ lq_cmd->rs_table[index].rate_n_flags =
+ cpu_to_le32(new_rate.rate_n_flags);
+ repeat_rate--;
+ index++;
+ }
+
+- rs_get_tbl_info_from_mcs(&new_rate, lq_data->phymode, &tbl_type,
++ rs_get_tbl_info_from_mcs(&new_rate, lq_sta->phymode, &tbl_type,
+ &rate_idx);
+
++ /* Indicate to uCode which entries might be MIMO.
++ * If initial rate was MIMO, this will finally end up
++ * as (IWL_HT_NUMBER_TRY * 2), after 2nd pass, otherwise 0. */
+ if (is_mimo(tbl_type.lq_type))
+ lq_cmd->general_params.mimo_delimiter = index;
+
+- rs_get_lower_rate(lq_data, &tbl_type, rate_idx,
++ /* Get next rate */
++ rs_get_lower_rate(lq_sta, &tbl_type, rate_idx,
+ use_ht_possible, &new_rate);
+
++ /* How many times should we repeat the next rate? */
+ if (is_legacy(tbl_type.lq_type)) {
+ if (ant_toggle_count < NUM_TRY_BEFORE_ANTENNA_TOGGLE)
+ ant_toggle_count++;
+@@ -1960,9 +2286,14 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
+ } else
+ repeat_rate = IWL_HT_NUMBER_TRY;
+
++ /* Don't allow HT rates after next pass.
++ * rs_get_lower_rate() will change type to LQ_A or LQ_G. */
+ use_ht_possible = 0;
+
+- rs_dbgfs_set_mcs(lq_data, &new_rate, index);
++ /* Override next rate if needed for debug purposes */
++ rs_dbgfs_set_mcs(lq_sta, &new_rate, index);
++
++ /* Fill next table entry */
+ lq_cmd->rs_table[index].rate_n_flags =
+ cpu_to_le32(new_rate.rate_n_flags);
+
+@@ -1987,27 +2318,27 @@ static void rs_free(void *priv_rate)
+
+ static void rs_clear(void *priv_rate)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *) priv_rate;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *) priv_rate;
+
+ IWL_DEBUG_RATE("enter\n");
+
+ priv->lq_mngr.lq_ready = 0;
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ if (priv->lq_mngr.agg_ctrl.granted_ba)
+ iwl4965_turn_off_agg(priv, TID_ALL_SPECIFIED);
+-#endif /*CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
+
+ IWL_DEBUG_RATE("leave\n");
+ }
+
+ static void rs_free_sta(void *priv, void *priv_sta)
+ {
+- struct iwl_rate_scale_priv *rs_priv = priv_sta;
++ struct iwl4965_lq_sta *lq_sta = priv_sta;
+
+ IWL_DEBUG_RATE("enter\n");
+- kfree(rs_priv);
++ kfree(lq_sta);
+ IWL_DEBUG_RATE("leave\n");
+ }
+
+@@ -2018,19 +2349,19 @@ static int open_file_generic(struct inode *inode, struct file *file)
+ file->private_data = inode->i_private;
+ return 0;
+ }
+-static void rs_dbgfs_set_mcs(struct iwl_rate_scale_priv *rs_priv,
+- struct iwl_rate *mcs, int index)
++static void rs_dbgfs_set_mcs(struct iwl4965_lq_sta *lq_sta,
++ struct iwl4965_rate *mcs, int index)
+ {
+ u32 base_rate;
+
+- if (rs_priv->phymode == (u8) MODE_IEEE80211A)
++ if (lq_sta->phymode == (u8) MODE_IEEE80211A)
+ base_rate = 0x800D;
+ else
+ base_rate = 0x820A;
+
+- if (rs_priv->dbg_fixed.rate_n_flags) {
++ if (lq_sta->dbg_fixed.rate_n_flags) {
+ if (index < 12)
+- mcs->rate_n_flags = rs_priv->dbg_fixed.rate_n_flags;
++ mcs->rate_n_flags = lq_sta->dbg_fixed.rate_n_flags;
+ else
+ mcs->rate_n_flags = base_rate;
+ IWL_DEBUG_RATE("Fixed rate ON\n");
+@@ -2043,7 +2374,7 @@ static void rs_dbgfs_set_mcs(struct iwl_rate_scale_priv *rs_priv,
+ static ssize_t rs_sta_dbgfs_scale_table_write(struct file *file,
+ const char __user *user_buf, size_t count, loff_t *ppos)
+ {
+- struct iwl_rate_scale_priv *rs_priv = file->private_data;
++ struct iwl4965_lq_sta *lq_sta = file->private_data;
+ char buf[64];
+ int buf_size;
+ u32 parsed_rate;
+@@ -2054,20 +2385,20 @@ static ssize_t rs_sta_dbgfs_scale_table_write(struct file *file,
+ return -EFAULT;
+
+ if (sscanf(buf, "%x", &parsed_rate) == 1)
+- rs_priv->dbg_fixed.rate_n_flags = parsed_rate;
++ lq_sta->dbg_fixed.rate_n_flags = parsed_rate;
+ else
+- rs_priv->dbg_fixed.rate_n_flags = 0;
++ lq_sta->dbg_fixed.rate_n_flags = 0;
+
+- rs_priv->active_rate = 0x0FFF;
+- rs_priv->active_siso_rate = 0x1FD0;
+- rs_priv->active_mimo_rate = 0x1FD0;
++ lq_sta->active_rate = 0x0FFF; /* 1 - 54 MBits, includes CCK */
++ lq_sta->active_siso_rate = 0x1FD0; /* 6 - 60 MBits, no 9, no CCK */
++ lq_sta->active_mimo_rate = 0x1FD0; /* 6 - 60 MBits, no 9, no CCK */
+
+ IWL_DEBUG_RATE("sta_id %d rate 0x%X\n",
+- rs_priv->lq.sta_id, rs_priv->dbg_fixed.rate_n_flags);
++ lq_sta->lq.sta_id, lq_sta->dbg_fixed.rate_n_flags);
+
+- if (rs_priv->dbg_fixed.rate_n_flags) {
+- rs_fill_link_cmd(rs_priv, &rs_priv->dbg_fixed, &rs_priv->lq);
+- rs_send_lq_cmd(rs_priv->drv, &rs_priv->lq, CMD_ASYNC);
++ if (lq_sta->dbg_fixed.rate_n_flags) {
++ rs_fill_link_cmd(lq_sta, &lq_sta->dbg_fixed, &lq_sta->lq);
++ rs_send_lq_cmd(lq_sta->drv, &lq_sta->lq, CMD_ASYNC);
+ }
+
+ return count;
+@@ -2080,38 +2411,38 @@ static ssize_t rs_sta_dbgfs_scale_table_read(struct file *file,
+ int desc = 0;
+ int i = 0;
+
+- struct iwl_rate_scale_priv *rs_priv = file->private_data;
++ struct iwl4965_lq_sta *lq_sta = file->private_data;
+
+- desc += sprintf(buff+desc, "sta_id %d\n", rs_priv->lq.sta_id);
++ desc += sprintf(buff+desc, "sta_id %d\n", lq_sta->lq.sta_id);
+ desc += sprintf(buff+desc, "failed=%d success=%d rate=0%X\n",
+- rs_priv->total_failed, rs_priv->total_success,
+- rs_priv->active_rate);
++ lq_sta->total_failed, lq_sta->total_success,
++ lq_sta->active_rate);
+ desc += sprintf(buff+desc, "fixed rate 0x%X\n",
+- rs_priv->dbg_fixed.rate_n_flags);
++ lq_sta->dbg_fixed.rate_n_flags);
+ desc += sprintf(buff+desc, "general:"
+ "flags=0x%X mimo-d=%d s-ant0x%x d-ant=0x%x\n",
+- rs_priv->lq.general_params.flags,
+- rs_priv->lq.general_params.mimo_delimiter,
+- rs_priv->lq.general_params.single_stream_ant_msk,
+- rs_priv->lq.general_params.dual_stream_ant_msk);
++ lq_sta->lq.general_params.flags,
++ lq_sta->lq.general_params.mimo_delimiter,
++ lq_sta->lq.general_params.single_stream_ant_msk,
++ lq_sta->lq.general_params.dual_stream_ant_msk);
+
+ desc += sprintf(buff+desc, "agg:"
+ "time_limit=%d dist_start_th=%d frame_cnt_limit=%d\n",
+- le16_to_cpu(rs_priv->lq.agg_params.agg_time_limit),
+- rs_priv->lq.agg_params.agg_dis_start_th,
+- rs_priv->lq.agg_params.agg_frame_cnt_limit);
++ le16_to_cpu(lq_sta->lq.agg_params.agg_time_limit),
++ lq_sta->lq.agg_params.agg_dis_start_th,
++ lq_sta->lq.agg_params.agg_frame_cnt_limit);
+
+ desc += sprintf(buff+desc,
+ "Start idx [0]=0x%x [1]=0x%x [2]=0x%x [3]=0x%x\n",
+- rs_priv->lq.general_params.start_rate_index[0],
+- rs_priv->lq.general_params.start_rate_index[1],
+- rs_priv->lq.general_params.start_rate_index[2],
+- rs_priv->lq.general_params.start_rate_index[3]);
++ lq_sta->lq.general_params.start_rate_index[0],
++ lq_sta->lq.general_params.start_rate_index[1],
++ lq_sta->lq.general_params.start_rate_index[2],
++ lq_sta->lq.general_params.start_rate_index[3]);
+
+
+ for (i = 0; i < LINK_QUAL_MAX_RETRY_NUM; i++)
+ desc += sprintf(buff+desc, " rate[%d] 0x%X\n",
+- i, le32_to_cpu(rs_priv->lq.rs_table[i].rate_n_flags));
++ i, le32_to_cpu(lq_sta->lq.rs_table[i].rate_n_flags));
+
+ return simple_read_from_buffer(user_buf, count, ppos, buff, desc);
+ }
+@@ -2128,22 +2459,22 @@ static ssize_t rs_sta_dbgfs_stats_table_read(struct file *file,
+ int desc = 0;
+ int i, j;
+
+- struct iwl_rate_scale_priv *rs_priv = file->private_data;
++ struct iwl4965_lq_sta *lq_sta = file->private_data;
+ for (i = 0; i < LQ_SIZE; i++) {
+ desc += sprintf(buff+desc, "%s type=%d SGI=%d FAT=%d DUP=%d\n"
+ "rate=0x%X\n",
+- rs_priv->active_tbl == i?"*":"x",
+- rs_priv->lq_info[i].lq_type,
+- rs_priv->lq_info[i].is_SGI,
+- rs_priv->lq_info[i].is_fat,
+- rs_priv->lq_info[i].is_dup,
+- rs_priv->lq_info[i].current_rate.rate_n_flags);
++ lq_sta->active_tbl == i?"*":"x",
++ lq_sta->lq_info[i].lq_type,
++ lq_sta->lq_info[i].is_SGI,
++ lq_sta->lq_info[i].is_fat,
++ lq_sta->lq_info[i].is_dup,
++ lq_sta->lq_info[i].current_rate.rate_n_flags);
+ for (j = 0; j < IWL_RATE_COUNT; j++) {
+ desc += sprintf(buff+desc,
+- "counter=%d success=%d %%=%d\n",
+- rs_priv->lq_info[i].win[j].counter,
+- rs_priv->lq_info[i].win[j].success_counter,
+- rs_priv->lq_info[i].win[j].success_ratio);
++ "counter=%d success=%d %%=%d\n",
++ lq_sta->lq_info[i].win[j].counter,
++ lq_sta->lq_info[i].win[j].success_counter,
++ lq_sta->lq_info[i].win[j].success_ratio);
+ }
+ }
+ return simple_read_from_buffer(user_buf, count, ppos, buff, desc);
+@@ -2157,20 +2488,20 @@ static const struct file_operations rs_sta_dbgfs_stats_table_ops = {
+ static void rs_add_debugfs(void *priv, void *priv_sta,
+ struct dentry *dir)
+ {
+- struct iwl_rate_scale_priv *rs_priv = priv_sta;
+- rs_priv->rs_sta_dbgfs_scale_table_file =
++ struct iwl4965_lq_sta *lq_sta = priv_sta;
++ lq_sta->rs_sta_dbgfs_scale_table_file =
+ debugfs_create_file("rate_scale_table", 0600, dir,
+- rs_priv, &rs_sta_dbgfs_scale_table_ops);
+- rs_priv->rs_sta_dbgfs_stats_table_file =
++ lq_sta, &rs_sta_dbgfs_scale_table_ops);
++ lq_sta->rs_sta_dbgfs_stats_table_file =
+ debugfs_create_file("rate_stats_table", 0600, dir,
+- rs_priv, &rs_sta_dbgfs_stats_table_ops);
++ lq_sta, &rs_sta_dbgfs_stats_table_ops);
+ }
+
+ static void rs_remove_debugfs(void *priv, void *priv_sta)
+ {
+- struct iwl_rate_scale_priv *rs_priv = priv_sta;
+- debugfs_remove(rs_priv->rs_sta_dbgfs_scale_table_file);
+- debugfs_remove(rs_priv->rs_sta_dbgfs_stats_table_file);
++ struct iwl4965_lq_sta *lq_sta = priv_sta;
++ debugfs_remove(lq_sta->rs_sta_dbgfs_scale_table_file);
++ debugfs_remove(lq_sta->rs_sta_dbgfs_stats_table_file);
+ }
+ #endif
+
+@@ -2191,13 +2522,13 @@ static struct rate_control_ops rs_ops = {
+ #endif
+ };
+
+-int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
++int iwl4965_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+ {
+ struct ieee80211_local *local = hw_to_local(hw);
+- struct iwl_priv *priv = hw->priv;
+- struct iwl_rate_scale_priv *rs_priv;
++ struct iwl4965_priv *priv = hw->priv;
++ struct iwl4965_lq_sta *lq_sta;
+ struct sta_info *sta;
+- int count = 0, i;
++ int cnt = 0, i;
+ u32 samples = 0, success = 0, good = 0;
+ unsigned long now = jiffies;
+ u32 max_time = 0;
+@@ -2213,10 +2544,10 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+ return sprintf(buf, "station %d not found\n", sta_id);
+ }
+
+- rs_priv = (void *)sta->rate_ctrl_priv;
++ lq_sta = (void *)sta->rate_ctrl_priv;
+
+- lq_type = rs_priv->lq_info[rs_priv->active_tbl].lq_type;
+- antenna = rs_priv->lq_info[rs_priv->active_tbl].antenna_type;
++ lq_type = lq_sta->lq_info[lq_sta->active_tbl].lq_type;
++ antenna = lq_sta->lq_info[lq_sta->active_tbl].antenna_type;
+
+ if (is_legacy(lq_type))
+ i = IWL_RATE_54M_INDEX;
+@@ -2225,35 +2556,35 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+ while (1) {
+ u64 mask;
+ int j;
+- int active = rs_priv->active_tbl;
++ int active = lq_sta->active_tbl;
+
+- count +=
+- sprintf(&buf[count], " %2dMbs: ", iwl_rates[i].ieee / 2);
++ cnt +=
++ sprintf(&buf[cnt], " %2dMbs: ", iwl4965_rates[i].ieee / 2);
+
+ mask = (1ULL << (IWL_RATE_MAX_WINDOW - 1));
+ for (j = 0; j < IWL_RATE_MAX_WINDOW; j++, mask >>= 1)
+- buf[count++] =
+- (rs_priv->lq_info[active].win[i].data & mask)
++ buf[cnt++] =
++ (lq_sta->lq_info[active].win[i].data & mask)
+ ? '1' : '0';
+
+- samples += rs_priv->lq_info[active].win[i].counter;
+- good += rs_priv->lq_info[active].win[i].success_counter;
+- success += rs_priv->lq_info[active].win[i].success_counter *
+- iwl_rates[i].ieee;
++ samples += lq_sta->lq_info[active].win[i].counter;
++ good += lq_sta->lq_info[active].win[i].success_counter;
++ success += lq_sta->lq_info[active].win[i].success_counter *
++ iwl4965_rates[i].ieee;
+
+- if (rs_priv->lq_info[active].win[i].stamp) {
++ if (lq_sta->lq_info[active].win[i].stamp) {
+ int delta =
+ jiffies_to_msecs(now -
+- rs_priv->lq_info[active].win[i].stamp);
++ lq_sta->lq_info[active].win[i].stamp);
+
+ if (delta > max_time)
+ max_time = delta;
+
+- count += sprintf(&buf[count], "%5dms\n", delta);
++ cnt += sprintf(&buf[cnt], "%5dms\n", delta);
+ } else
+- buf[count++] = '\n';
++ buf[cnt++] = '\n';
+
+- j = iwl_get_prev_ieee_rate(i);
++ j = iwl4965_get_prev_ieee_rate(i);
+ if (j == i)
+ break;
+ i = j;
+@@ -2261,37 +2592,38 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+
+ /* Display the average rate of all samples taken.
+ *
+- * NOTE: We multiple # of samples by 2 since the IEEE measurement
+- * added from iwl_rates is actually 2X the rate */
++ * NOTE: We multiply # of samples by 2 since the IEEE measurement
++ * added from iwl4965_rates is actually 2X the rate */
+ if (samples)
+- count += sprintf(&buf[count],
++ cnt += sprintf(&buf[cnt],
+ "\nAverage rate is %3d.%02dMbs over last %4dms\n"
+ "%3d%% success (%d good packets over %d tries)\n",
+ success / (2 * samples), (success * 5 / samples) % 10,
+ max_time, good * 100 / samples, good, samples);
+ else
+- count += sprintf(&buf[count], "\nAverage rate: 0Mbs\n");
+- count += sprintf(&buf[count], "\nrate scale type %d anntena %d "
++ cnt += sprintf(&buf[cnt], "\nAverage rate: 0Mbs\n");
++
++ cnt += sprintf(&buf[cnt], "\nrate scale type %d antenna %d "
+ "active_search %d rate index %d\n", lq_type, antenna,
+- rs_priv->search_better_tbl, sta->last_txrate);
++ lq_sta->search_better_tbl, sta->last_txrate);
+
+ sta_info_put(sta);
+- return count;
++ return cnt;
+ }
+
+-void iwl_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id)
++void iwl4965_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
+
+ priv->lq_mngr.lq_ready = 1;
+ }
+
+-void iwl_rate_control_register(struct ieee80211_hw *hw)
++void iwl4965_rate_control_register(struct ieee80211_hw *hw)
+ {
+ ieee80211_rate_control_register(&rs_ops);
+ }
+
+-void iwl_rate_control_unregister(struct ieee80211_hw *hw)
++void iwl4965_rate_control_unregister(struct ieee80211_hw *hw)
+ {
+ ieee80211_rate_control_unregister(&rs_ops);
+ }
+diff --git a/drivers/net/wireless/iwlwifi/iwl-4965-rs.h b/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
+index c6325f7..55f7073 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
++++ b/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
+@@ -29,11 +29,11 @@
+
+ #include "iwl-4965.h"
+
+-struct iwl_rate_info {
+- u8 plcp;
+- u8 plcp_siso;
+- u8 plcp_mimo;
+- u8 ieee;
++struct iwl4965_rate_info {
++ u8 plcp; /* uCode API: IWL_RATE_6M_PLCP, etc. */
++ u8 plcp_siso; /* uCode API: IWL_RATE_SISO_6M_PLCP, etc. */
++ u8 plcp_mimo; /* uCode API: IWL_RATE_MIMO_6M_PLCP, etc. */
++ u8 ieee; /* MAC header: IWL_RATE_6M_IEEE, etc. */
+ u8 prev_ieee; /* previous rate in IEEE speeds */
+ u8 next_ieee; /* next rate in IEEE speeds */
+ u8 prev_rs; /* previous rate used in rs algo */
+@@ -42,6 +42,10 @@ struct iwl_rate_info {
+ u8 next_rs_tgg; /* next rate used in TGG rs algo */
+ };
+
++/*
++ * These serve as indexes into
++ * struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT];
++ */
+ enum {
+ IWL_RATE_1M_INDEX = 0,
+ IWL_RATE_2M_INDEX,
+@@ -69,20 +73,21 @@ enum {
+ };
+
+ /* #define vs. enum to keep from defaulting to 'large integer' */
+-#define IWL_RATE_6M_MASK (1<<IWL_RATE_6M_INDEX)
+-#define IWL_RATE_9M_MASK (1<<IWL_RATE_9M_INDEX)
+-#define IWL_RATE_12M_MASK (1<<IWL_RATE_12M_INDEX)
+-#define IWL_RATE_18M_MASK (1<<IWL_RATE_18M_INDEX)
+-#define IWL_RATE_24M_MASK (1<<IWL_RATE_24M_INDEX)
+-#define IWL_RATE_36M_MASK (1<<IWL_RATE_36M_INDEX)
+-#define IWL_RATE_48M_MASK (1<<IWL_RATE_48M_INDEX)
+-#define IWL_RATE_54M_MASK (1<<IWL_RATE_54M_INDEX)
+-#define IWL_RATE_60M_MASK (1<<IWL_RATE_60M_INDEX)
+-#define IWL_RATE_1M_MASK (1<<IWL_RATE_1M_INDEX)
+-#define IWL_RATE_2M_MASK (1<<IWL_RATE_2M_INDEX)
+-#define IWL_RATE_5M_MASK (1<<IWL_RATE_5M_INDEX)
+-#define IWL_RATE_11M_MASK (1<<IWL_RATE_11M_INDEX)
+-
++#define IWL_RATE_6M_MASK (1 << IWL_RATE_6M_INDEX)
++#define IWL_RATE_9M_MASK (1 << IWL_RATE_9M_INDEX)
++#define IWL_RATE_12M_MASK (1 << IWL_RATE_12M_INDEX)
++#define IWL_RATE_18M_MASK (1 << IWL_RATE_18M_INDEX)
++#define IWL_RATE_24M_MASK (1 << IWL_RATE_24M_INDEX)
++#define IWL_RATE_36M_MASK (1 << IWL_RATE_36M_INDEX)
++#define IWL_RATE_48M_MASK (1 << IWL_RATE_48M_INDEX)
++#define IWL_RATE_54M_MASK (1 << IWL_RATE_54M_INDEX)
++#define IWL_RATE_60M_MASK (1 << IWL_RATE_60M_INDEX)
++#define IWL_RATE_1M_MASK (1 << IWL_RATE_1M_INDEX)
++#define IWL_RATE_2M_MASK (1 << IWL_RATE_2M_INDEX)
++#define IWL_RATE_5M_MASK (1 << IWL_RATE_5M_INDEX)
++#define IWL_RATE_11M_MASK (1 << IWL_RATE_11M_INDEX)
++
++/* 4965 uCode API values for legacy bit rates, both OFDM and CCK */
+ enum {
+ IWL_RATE_6M_PLCP = 13,
+ IWL_RATE_9M_PLCP = 15,
+@@ -99,7 +104,7 @@ enum {
+ IWL_RATE_11M_PLCP = 110,
+ };
+
+-/* OFDM HT rate plcp */
++/* 4965 uCode API values for OFDM high-throughput (HT) bit rates */
+ enum {
+ IWL_RATE_SISO_6M_PLCP = 0,
+ IWL_RATE_SISO_12M_PLCP = 1,
+@@ -121,6 +126,7 @@ enum {
+ IWL_RATE_MIMO_INVM_PLCP = IWL_RATE_SISO_INVM_PLCP,
+ };
+
++/* MAC header values for bit rates */
+ enum {
+ IWL_RATE_6M_IEEE = 12,
+ IWL_RATE_9M_IEEE = 18,
+@@ -163,20 +169,15 @@ enum {
+ (IWL_OFDM_BASIC_RATES_MASK | \
+ IWL_CCK_BASIC_RATES_MASK)
+
+-#define IWL_RATES_MASK ((1<<IWL_RATE_COUNT)-1)
++#define IWL_RATES_MASK ((1 << IWL_RATE_COUNT) - 1)
+
+ #define IWL_INVALID_VALUE -1
+
+ #define IWL_MIN_RSSI_VAL -100
+ #define IWL_MAX_RSSI_VAL 0
+
+-#define IWL_LEGACY_SWITCH_ANTENNA 0
+-#define IWL_LEGACY_SWITCH_SISO 1
+-#define IWL_LEGACY_SWITCH_MIMO 2
+-
+-#define IWL_RS_GOOD_RATIO 12800
+-
+-#define IWL_ACTION_LIMIT 3
++/* These values specify how many Tx frame attempts before
++ * searching for a new modulation mode */
+ #define IWL_LEGACY_FAILURE_LIMIT 160
+ #define IWL_LEGACY_SUCCESS_LIMIT 480
+ #define IWL_LEGACY_TABLE_COUNT 160
+@@ -185,82 +186,104 @@ enum {
+ #define IWL_NONE_LEGACY_SUCCESS_LIMIT 4500
+ #define IWL_NONE_LEGACY_TABLE_COUNT 1500
+
+-#define IWL_RATE_SCALE_SWITCH (10880)
++/* Success ratio (ACKed / attempted tx frames) values (perfect is 128 * 100) */
++#define IWL_RS_GOOD_RATIO 12800 /* 100% */
++#define IWL_RATE_SCALE_SWITCH 10880 /* 85% */
++#define IWL_RATE_HIGH_TH 10880 /* 85% */
++#define IWL_RATE_INCREASE_TH 8960 /* 70% */
++#define IWL_RATE_DECREASE_TH 1920 /* 15% */
+
++/* possible actions when in legacy mode */
++#define IWL_LEGACY_SWITCH_ANTENNA 0
++#define IWL_LEGACY_SWITCH_SISO 1
++#define IWL_LEGACY_SWITCH_MIMO 2
++
++/* possible actions when in siso mode */
+ #define IWL_SISO_SWITCH_ANTENNA 0
+ #define IWL_SISO_SWITCH_MIMO 1
+ #define IWL_SISO_SWITCH_GI 2
+
++/* possible actions when in mimo mode */
+ #define IWL_MIMO_SWITCH_ANTENNA_A 0
+ #define IWL_MIMO_SWITCH_ANTENNA_B 1
+ #define IWL_MIMO_SWITCH_GI 2
+
+-#define LQ_SIZE 2
++#define IWL_ACTION_LIMIT 3 /* # possible actions */
++
++#define LQ_SIZE 2 /* 2 mode tables: "Active" and "Search" */
+
+-extern const struct iwl_rate_info iwl_rates[IWL_RATE_COUNT];
++extern const struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT];
+
+-enum iwl_table_type {
++enum iwl4965_table_type {
+ LQ_NONE,
+- LQ_G,
++ LQ_G, /* legacy types */
+ LQ_A,
+- LQ_SISO,
++ LQ_SISO, /* high-throughput types */
+ LQ_MIMO,
+ LQ_MAX,
+ };
+
+-enum iwl_antenna_type {
++#define is_legacy(tbl) (((tbl) == LQ_G) || ((tbl) == LQ_A))
++#define is_siso(tbl) (((tbl) == LQ_SISO))
++#define is_mimo(tbl) (((tbl) == LQ_MIMO))
++#define is_Ht(tbl) (is_siso(tbl) || is_mimo(tbl))
++#define is_a_band(tbl) (((tbl) == LQ_A))
++#define is_g_and(tbl) (((tbl) == LQ_G))
++
++/* 4965 has 2 antennas/chains for Tx (but 3 for Rx) */
++enum iwl4965_antenna_type {
+ ANT_NONE,
+ ANT_MAIN,
+ ANT_AUX,
+ ANT_BOTH,
+ };
+
+-static inline u8 iwl_get_prev_ieee_rate(u8 rate_index)
++static inline u8 iwl4965_get_prev_ieee_rate(u8 rate_index)
+ {
+- u8 rate = iwl_rates[rate_index].prev_ieee;
++ u8 rate = iwl4965_rates[rate_index].prev_ieee;
+
+ if (rate == IWL_RATE_INVALID)
+ rate = rate_index;
+ return rate;
+ }
+
+-extern int iwl_rate_index_from_plcp(int plcp);
++extern int iwl4965_rate_index_from_plcp(int plcp);
+
+ /**
+- * iwl_fill_rs_info - Fill an output text buffer with the rate representation
++ * iwl4965_fill_rs_info - Fill an output text buffer with the rate representation
+ *
+ * NOTE: This is provided as a quick mechanism for a user to visualize
+- * the performance of the rate control alogirthm and is not meant to be
++ * the performance of the rate control algorithm and is not meant to be
+ * parsed software.
+ */
+-extern int iwl_fill_rs_info(struct ieee80211_hw *, char *buf, u8 sta_id);
++extern int iwl4965_fill_rs_info(struct ieee80211_hw *, char *buf, u8 sta_id);
+
+ /**
+- * iwl_rate_scale_init - Initialize the rate scale table based on assoc info
++ * iwl4965_rate_scale_init - Initialize the rate scale table based on assoc info
+ *
+- * The specific througput table used is based on the type of network
++ * The specific throughput table used is based on the type of network
+ * the associated with, including A, B, G, and G w/ TGG protection
+ */
+-extern void iwl_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id);
++extern void iwl4965_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id);
+
+ /**
+- * iwl_rate_control_register - Register the rate control algorithm callbacks
++ * iwl4965_rate_control_register - Register the rate control algorithm callbacks
+ *
+ * Since the rate control algorithm is hardware specific, there is no need
+ * or reason to place it as a stand alone module. The driver can call
+- * iwl_rate_control_register in order to register the rate control callbacks
++ * iwl4965_rate_control_register in order to register the rate control callbacks
+ * with the mac80211 subsystem. This should be performed prior to calling
+ * ieee80211_register_hw
+ *
+ */
+-extern void iwl_rate_control_register(struct ieee80211_hw *hw);
++extern void iwl4965_rate_control_register(struct ieee80211_hw *hw);
+
+ /**
+- * iwl_rate_control_unregister - Unregister the rate control callbacks
++ * iwl4965_rate_control_unregister - Unregister the rate control callbacks
+ *
+ * This should be called after calling ieee80211_unregister_hw, but before
+ * the driver is unloaded.
+ */
+-extern void iwl_rate_control_unregister(struct ieee80211_hw *hw);
++extern void iwl4965_rate_control_unregister(struct ieee80211_hw *hw);
+
+ #endif
+diff --git a/drivers/net/wireless/iwlwifi/iwl-4965.c b/drivers/net/wireless/iwlwifi/iwl-4965.c
+index 891f90d..04db34b 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-4965.c
++++ b/drivers/net/wireless/iwlwifi/iwl-4965.c
+@@ -36,13 +36,13 @@
+ #include <linux/wireless.h>
+ #include <net/mac80211.h>
+ #include <linux/etherdevice.h>
++#include <asm/unaligned.h>
+
+-#define IWL 4965
+-
+-#include "iwlwifi.h"
+ #include "iwl-4965.h"
+ #include "iwl-helpers.h"
+
++static void iwl4965_hw_card_show_info(struct iwl4965_priv *priv);
++
+ #define IWL_DECLARE_RATE_INFO(r, s, ip, in, rp, rn, pp, np) \
+ [IWL_RATE_##r##M_INDEX] = { IWL_RATE_##r##M_PLCP, \
+ IWL_RATE_SISO_##s##M_PLCP, \
+@@ -63,7 +63,7 @@
+ * maps to IWL_RATE_INVALID
+ *
+ */
+-const struct iwl_rate_info iwl_rates[IWL_RATE_COUNT] = {
++const struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT] = {
+ IWL_DECLARE_RATE_INFO(1, INV, INV, 2, INV, 2, INV, 2), /* 1mbps */
+ IWL_DECLARE_RATE_INFO(2, INV, 1, 5, 1, 5, 1, 5), /* 2mbps */
+ IWL_DECLARE_RATE_INFO(5, INV, 2, 6, 2, 11, 2, 11), /*5.5mbps */
+@@ -85,16 +85,16 @@ static int is_fat_channel(__le32 rxon_flags)
+ (rxon_flags & RXON_FLG_CHANNEL_MODE_MIXED_MSK);
+ }
+
+-static u8 is_single_stream(struct iwl_priv *priv)
++static u8 is_single_stream(struct iwl4965_priv *priv)
+ {
+-#ifdef CONFIG_IWLWIFI_HT
+- if (!priv->is_ht_enabled || !priv->current_assoc_ht.is_ht ||
+- (priv->active_rate_ht[1] == 0) ||
++#ifdef CONFIG_IWL4965_HT
++ if (!priv->current_ht_config.is_ht ||
++ (priv->current_ht_config.supp_mcs_set[1] == 0) ||
+ (priv->ps_mode == IWL_MIMO_PS_STATIC))
+ return 1;
+ #else
+ return 1;
+-#endif /*CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT */
+ return 0;
+ }
+
+@@ -104,7 +104,7 @@ static u8 is_single_stream(struct iwl_priv *priv)
+ * MIMO (dual stream) requires at least 2, but works better with 3.
+ * This does not determine *which* chains to use, just how many.
+ */
+-static int iwl4965_get_rx_chain_counter(struct iwl_priv *priv,
++static int iwl4965_get_rx_chain_counter(struct iwl4965_priv *priv,
+ u8 *idle_state, u8 *rx_state)
+ {
+ u8 is_single = is_single_stream(priv);
+@@ -133,32 +133,32 @@ static int iwl4965_get_rx_chain_counter(struct iwl_priv *priv,
+ return 0;
+ }
+
+-int iwl_hw_rxq_stop(struct iwl_priv *priv)
++int iwl4965_hw_rxq_stop(struct iwl4965_priv *priv)
+ {
+ int rc;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+- /* stop HW */
+- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
+- rc = iwl_poll_restricted_bit(priv, FH_MEM_RSSR_RX_STATUS_REG,
++ /* stop Rx DMA */
++ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
++ rc = iwl4965_poll_direct_bit(priv, FH_MEM_RSSR_RX_STATUS_REG,
+ (1 << 24), 1000);
+ if (rc < 0)
+ IWL_ERROR("Can't stop Rx DMA.\n");
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+ }
+
+-u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *addr)
++u8 iwl4965_hw_find_station(struct iwl4965_priv *priv, const u8 *addr)
+ {
+ int i;
+ int start = 0;
+@@ -190,104 +190,114 @@ u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *addr)
+ return ret;
+ }
+
+-static int iwl4965_nic_set_pwr_src(struct iwl_priv *priv, int pwr_max)
++static int iwl4965_nic_set_pwr_src(struct iwl4965_priv *priv, int pwr_max)
+ {
+- int rc = 0;
++ int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
+- if (rc) {
++ ret = iwl4965_grab_nic_access(priv);
++ if (ret) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+- return rc;
++ return ret;
+ }
+
+ if (!pwr_max) {
+ u32 val;
+
+- rc = pci_read_config_dword(priv->pci_dev, PCI_POWER_SOURCE,
++ ret = pci_read_config_dword(priv->pci_dev, PCI_POWER_SOURCE,
+ &val);
+
+ if (val & PCI_CFG_PMC_PME_FROM_D3COLD_SUPPORT)
+- iwl_set_bits_mask_restricted_reg(
+- priv, APMG_PS_CTRL_REG,
++ iwl4965_set_bits_mask_prph(priv, APMG_PS_CTRL_REG,
+ APMG_PS_CTRL_VAL_PWR_SRC_VAUX,
+ ~APMG_PS_CTRL_MSK_PWR_SRC);
+ } else
+- iwl_set_bits_mask_restricted_reg(
+- priv, APMG_PS_CTRL_REG,
++ iwl4965_set_bits_mask_prph(priv, APMG_PS_CTRL_REG,
+ APMG_PS_CTRL_VAL_PWR_SRC_VMAIN,
+ ~APMG_PS_CTRL_MSK_PWR_SRC);
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- return rc;
++ return ret;
+ }
+
+-static int iwl4965_rx_init(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
++static int iwl4965_rx_init(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
+ {
+ int rc;
+ unsigned long flags;
++ unsigned int rb_size;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+- /* stop HW */
+- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
++ if (iwl4965_param_amsdu_size_8K)
++ rb_size = FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_8K;
++ else
++ rb_size = FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K;
++
++ /* Stop Rx DMA */
++ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
++
++ /* Reset driver's Rx queue write index */
++ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_RBDCB_WPTR_REG, 0);
+
+- iwl_write_restricted(priv, FH_RSCSR_CHNL0_RBDCB_WPTR_REG, 0);
+- iwl_write_restricted(priv, FH_RSCSR_CHNL0_RBDCB_BASE_REG,
++ /* Tell device where to find RBD circular buffer in DRAM */
++ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_RBDCB_BASE_REG,
+ rxq->dma_addr >> 8);
+
+- iwl_write_restricted(priv, FH_RSCSR_CHNL0_STTS_WPTR_REG,
++ /* Tell device where in DRAM to update its Rx status */
++ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_STTS_WPTR_REG,
+ (priv->hw_setting.shared_phys +
+- offsetof(struct iwl_shared, val0)) >> 4);
++ offsetof(struct iwl4965_shared, val0)) >> 4);
+
+- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG,
++ /* Enable Rx DMA, enable host interrupt, Rx buffer size 4k, 256 RBDs */
++ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG,
+ FH_RCSR_RX_CONFIG_CHNL_EN_ENABLE_VAL |
+ FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL |
+- IWL_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K |
++ rb_size |
+ /*0x10 << 4 | */
+ (RX_QUEUE_SIZE_LOG <<
+ FH_RCSR_RX_CONFIG_RBDCB_SIZE_BITSHIFT));
+
+ /*
+- * iwl_write32(priv,CSR_INT_COAL_REG,0);
++ * iwl4965_write32(priv,CSR_INT_COAL_REG,0);
+ */
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+ }
+
+-static int iwl4965_kw_init(struct iwl_priv *priv)
++/* Tell 4965 where to find the "keep warm" buffer */
++static int iwl4965_kw_init(struct iwl4965_priv *priv)
+ {
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc)
+ goto out;
+
+- iwl_write_restricted(priv, IWL_FH_KW_MEM_ADDR_REG,
++ iwl4965_write_direct32(priv, IWL_FH_KW_MEM_ADDR_REG,
+ priv->kw.dma_addr >> 4);
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ out:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+-static int iwl4965_kw_alloc(struct iwl_priv *priv)
++static int iwl4965_kw_alloc(struct iwl4965_priv *priv)
+ {
+ struct pci_dev *dev = priv->pci_dev;
+- struct iwl_kw *kw = &priv->kw;
++ struct iwl4965_kw *kw = &priv->kw;
+
+ kw->size = IWL4965_KW_SIZE; /* TBW need set somewhere else */
+ kw->v_addr = pci_alloc_consistent(dev, kw->size, &kw->dma_addr);
+@@ -300,14 +310,19 @@ static int iwl4965_kw_alloc(struct iwl_priv *priv)
+ #define CHECK_AND_PRINT(x) ((eeprom_ch->flags & EEPROM_CHANNEL_##x) \
+ ? # x " " : "")
+
+-int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode, u16 channel,
+- const struct iwl_eeprom_channel *eeprom_ch,
++/**
++ * iwl4965_set_fat_chan_info - Copy fat channel info into driver's priv.
++ *
++ * Does not set up a command, or touch hardware.
++ */
++int iwl4965_set_fat_chan_info(struct iwl4965_priv *priv, int phymode, u16 channel,
++ const struct iwl4965_eeprom_channel *eeprom_ch,
+ u8 fat_extension_channel)
+ {
+- struct iwl_channel_info *ch_info;
++ struct iwl4965_channel_info *ch_info;
+
+- ch_info = (struct iwl_channel_info *)
+- iwl_get_channel_info(priv, phymode, channel);
++ ch_info = (struct iwl4965_channel_info *)
++ iwl4965_get_channel_info(priv, phymode, channel);
+
+ if (!is_channel_valid(ch_info))
+ return -1;
+@@ -340,10 +355,13 @@ int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode, u16 channel,
+ return 0;
+ }
+
+-static void iwl4965_kw_free(struct iwl_priv *priv)
++/**
++ * iwl4965_kw_free - Free the "keep warm" buffer
++ */
++static void iwl4965_kw_free(struct iwl4965_priv *priv)
+ {
+ struct pci_dev *dev = priv->pci_dev;
+- struct iwl_kw *kw = &priv->kw;
++ struct iwl4965_kw *kw = &priv->kw;
+
+ if (kw->v_addr) {
+ pci_free_consistent(dev, kw->size, kw->v_addr, kw->dma_addr);
+@@ -358,7 +376,7 @@ static void iwl4965_kw_free(struct iwl_priv *priv)
+ * @param priv
+ * @return error code
+ */
+-static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
++static int iwl4965_txq_ctx_reset(struct iwl4965_priv *priv)
+ {
+ int rc = 0;
+ int txq_id, slots_num;
+@@ -366,9 +384,10 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
+
+ iwl4965_kw_free(priv);
+
+- iwl_hw_txq_ctx_free(priv);
++ /* Free all tx/cmd queues and keep-warm buffer */
++ iwl4965_hw_txq_ctx_free(priv);
+
+- /* Tx CMD queue */
++ /* Alloc keep-warm buffer */
+ rc = iwl4965_kw_alloc(priv);
+ if (rc) {
+ IWL_ERROR("Keep Warm allocation failed");
+@@ -377,28 +396,31 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (unlikely(rc)) {
+ IWL_ERROR("TX reset failed");
+ spin_unlock_irqrestore(&priv->lock, flags);
+ goto error_reset;
+ }
+
+- iwl_write_restricted_reg(priv, SCD_TXFACT, 0);
+- iwl_release_restricted_access(priv);
++ /* Turn off all Tx DMA channels */
++ iwl4965_write_prph(priv, KDR_SCD_TXFACT, 0);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
++ /* Tell 4965 where to find the keep-warm buffer */
+ rc = iwl4965_kw_init(priv);
+ if (rc) {
+ IWL_ERROR("kw_init failed\n");
+ goto error_reset;
+ }
+
+- /* Tx queue(s) */
++ /* Alloc and init all (default 16) Tx queues,
++ * including the command queue (#4) */
+ for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++) {
+ slots_num = (txq_id == IWL_CMD_QUEUE_NUM) ?
+ TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
+- rc = iwl_tx_queue_init(priv, &priv->txq[txq_id], slots_num,
++ rc = iwl4965_tx_queue_init(priv, &priv->txq[txq_id], slots_num,
+ txq_id);
+ if (rc) {
+ IWL_ERROR("Tx %d queue init failed\n", txq_id);
+@@ -409,32 +431,32 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
+ return rc;
+
+ error:
+- iwl_hw_txq_ctx_free(priv);
++ iwl4965_hw_txq_ctx_free(priv);
+ error_reset:
+ iwl4965_kw_free(priv);
+ error_kw:
+ return rc;
+ }
+
+-int iwl_hw_nic_init(struct iwl_priv *priv)
++int iwl4965_hw_nic_init(struct iwl4965_priv *priv)
+ {
+ int rc;
+ unsigned long flags;
+- struct iwl_rx_queue *rxq = &priv->rxq;
++ struct iwl4965_rx_queue *rxq = &priv->rxq;
+ u8 rev_id;
+ u32 val;
+ u8 val_link;
+
+- iwl_power_init_handle(priv);
++ iwl4965_power_init_handle(priv);
+
+ /* nic_init */
+ spin_lock_irqsave(&priv->lock, flags);
+
+- iwl_set_bit(priv, CSR_GIO_CHICKEN_BITS,
++ iwl4965_set_bit(priv, CSR_GIO_CHICKEN_BITS,
+ CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER);
+
+- iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
+- rc = iwl_poll_bit(priv, CSR_GP_CNTRL,
++ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
++ rc = iwl4965_poll_bit(priv, CSR_GP_CNTRL,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
+ if (rc < 0) {
+@@ -443,26 +465,26 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
+ return rc;
+ }
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+- iwl_read_restricted_reg(priv, APMG_CLK_CTRL_REG);
++ iwl4965_read_prph(priv, APMG_CLK_CTRL_REG);
+
+- iwl_write_restricted_reg(priv, APMG_CLK_CTRL_REG,
++ iwl4965_write_prph(priv, APMG_CLK_CTRL_REG,
+ APMG_CLK_VAL_DMA_CLK_RQT |
+ APMG_CLK_VAL_BSM_CLK_RQT);
+- iwl_read_restricted_reg(priv, APMG_CLK_CTRL_REG);
++ iwl4965_read_prph(priv, APMG_CLK_CTRL_REG);
+
+ udelay(20);
+
+- iwl_set_bits_restricted_reg(priv, APMG_PCIDEV_STT_REG,
++ iwl4965_set_bits_prph(priv, APMG_PCIDEV_STT_REG,
+ APMG_PCIDEV_STT_VAL_L1_ACT_DIS);
+
+- iwl_release_restricted_access(priv);
+- iwl_write32(priv, CSR_INT_COALESCING, 512 / 32);
++ iwl4965_release_nic_access(priv);
++ iwl4965_write32(priv, CSR_INT_COALESCING, 512 / 32);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ /* Determine HW type */
+@@ -484,11 +506,6 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- /* Read the EEPROM */
+- rc = iwl_eeprom_init(priv);
+- if (rc)
+- return rc;
+-
+ if (priv->eeprom.calib_version < EEPROM_TX_POWER_VERSION_NEW) {
+ IWL_ERROR("Older EEPROM detected! Aborting.\n");
+ return -EINVAL;
+@@ -503,51 +520,53 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
+
+ /* set CSR_HW_CONFIG_REG for uCode use */
+
+- iwl_set_bit(priv, CSR_SW_VER, CSR_HW_IF_CONFIG_REG_BIT_KEDRON_R |
++ iwl4965_set_bit(priv, CSR_SW_VER, CSR_HW_IF_CONFIG_REG_BIT_KEDRON_R |
+ CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI |
+ CSR_HW_IF_CONFIG_REG_BIT_MAC_SI);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc < 0) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ IWL_DEBUG_INFO("Failed to init the card\n");
+ return rc;
+ }
+
+- iwl_read_restricted_reg(priv, APMG_PS_CTRL_REG);
+- iwl_set_bits_restricted_reg(priv, APMG_PS_CTRL_REG,
++ iwl4965_read_prph(priv, APMG_PS_CTRL_REG);
++ iwl4965_set_bits_prph(priv, APMG_PS_CTRL_REG,
+ APMG_PS_CTRL_VAL_RESET_REQ);
+ udelay(5);
+- iwl_clear_bits_restricted_reg(priv, APMG_PS_CTRL_REG,
++ iwl4965_clear_bits_prph(priv, APMG_PS_CTRL_REG,
+ APMG_PS_CTRL_VAL_RESET_REQ);
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- iwl_hw_card_show_info(priv);
++ iwl4965_hw_card_show_info(priv);
+
+ /* end nic_init */
+
+ /* Allocate the RX queue, or reset if it is already allocated */
+ if (!rxq->bd) {
+- rc = iwl_rx_queue_alloc(priv);
++ rc = iwl4965_rx_queue_alloc(priv);
+ if (rc) {
+ IWL_ERROR("Unable to initialize Rx queue\n");
+ return -ENOMEM;
+ }
+ } else
+- iwl_rx_queue_reset(priv, rxq);
++ iwl4965_rx_queue_reset(priv, rxq);
+
+- iwl_rx_replenish(priv);
++ iwl4965_rx_replenish(priv);
+
+ iwl4965_rx_init(priv, rxq);
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ rxq->need_update = 1;
+- iwl_rx_queue_update_write_ptr(priv, rxq);
++ iwl4965_rx_queue_update_write_ptr(priv, rxq);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
++
++ /* Allocate and init all Tx and Command queues */
+ rc = iwl4965_txq_ctx_reset(priv);
+ if (rc)
+ return rc;
+@@ -563,7 +582,7 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
+ return 0;
+ }
+
+-int iwl_hw_nic_stop_master(struct iwl_priv *priv)
++int iwl4965_hw_nic_stop_master(struct iwl4965_priv *priv)
+ {
+ int rc = 0;
+ u32 reg_val;
+@@ -572,16 +591,16 @@ int iwl_hw_nic_stop_master(struct iwl_priv *priv)
+ spin_lock_irqsave(&priv->lock, flags);
+
+ /* set stop master bit */
+- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_STOP_MASTER);
++ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_STOP_MASTER);
+
+- reg_val = iwl_read32(priv, CSR_GP_CNTRL);
++ reg_val = iwl4965_read32(priv, CSR_GP_CNTRL);
+
+ if (CSR_GP_CNTRL_REG_FLAG_MAC_POWER_SAVE ==
+ (reg_val & CSR_GP_CNTRL_REG_MSK_POWER_SAVE_TYPE))
+ IWL_DEBUG_INFO("Card in power save, master is already "
+ "stopped\n");
+ else {
+- rc = iwl_poll_bit(priv, CSR_RESET,
++ rc = iwl4965_poll_bit(priv, CSR_RESET,
+ CSR_RESET_REG_FLAG_MASTER_DISABLED,
+ CSR_RESET_REG_FLAG_MASTER_DISABLED, 100);
+ if (rc < 0) {
+@@ -596,65 +615,69 @@ int iwl_hw_nic_stop_master(struct iwl_priv *priv)
+ return rc;
+ }
+
+-void iwl_hw_txq_ctx_stop(struct iwl_priv *priv)
++/**
++ * iwl4965_hw_txq_ctx_stop - Stop all Tx DMA channels, free Tx queue memory
++ */
++void iwl4965_hw_txq_ctx_stop(struct iwl4965_priv *priv)
+ {
+
+ int txq_id;
+ unsigned long flags;
+
+- /* reset TFD queues */
++ /* Stop each Tx DMA channel, and wait for it to be idle */
+ for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++) {
+ spin_lock_irqsave(&priv->lock, flags);
+- if (iwl_grab_restricted_access(priv)) {
++ if (iwl4965_grab_nic_access(priv)) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ continue;
+ }
+
+- iwl_write_restricted(priv,
++ iwl4965_write_direct32(priv,
+ IWL_FH_TCSR_CHNL_TX_CONFIG_REG(txq_id),
+ 0x0);
+- iwl_poll_restricted_bit(priv, IWL_FH_TSSR_TX_STATUS_REG,
++ iwl4965_poll_direct_bit(priv, IWL_FH_TSSR_TX_STATUS_REG,
+ IWL_FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE
+ (txq_id), 200);
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+- iwl_hw_txq_ctx_free(priv);
++ /* Deallocate memory for all Tx queues */
++ iwl4965_hw_txq_ctx_free(priv);
+ }
+
+-int iwl_hw_nic_reset(struct iwl_priv *priv)
++int iwl4965_hw_nic_reset(struct iwl4965_priv *priv)
+ {
+ int rc = 0;
+ unsigned long flags;
+
+- iwl_hw_nic_stop_master(priv);
++ iwl4965_hw_nic_stop_master(priv);
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
++ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
+
+ udelay(10);
+
+- iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
+- rc = iwl_poll_bit(priv, CSR_RESET,
++ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
++ rc = iwl4965_poll_bit(priv, CSR_RESET,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25);
+
+ udelay(10);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (!rc) {
+- iwl_write_restricted_reg(priv, APMG_CLK_EN_REG,
++ iwl4965_write_prph(priv, APMG_CLK_EN_REG,
+ APMG_CLK_VAL_DMA_CLK_RQT |
+ APMG_CLK_VAL_BSM_CLK_RQT);
+
+ udelay(10);
+
+- iwl_set_bits_restricted_reg(priv, APMG_PCIDEV_STT_REG,
++ iwl4965_set_bits_prph(priv, APMG_PCIDEV_STT_REG,
+ APMG_PCIDEV_STT_VAL_L1_ACT_DIS);
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ }
+
+ clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
+@@ -684,7 +707,7 @@ int iwl_hw_nic_reset(struct iwl_priv *priv)
+ */
+ static void iwl4965_bg_statistics_periodic(unsigned long data)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)data;
+
+ queue_work(priv->workqueue, &priv->statistics_work);
+ }
+@@ -692,27 +715,27 @@ static void iwl4965_bg_statistics_periodic(unsigned long data)
+ /**
+ * iwl4965_bg_statistics_work - Send the statistics request to the hardware.
+ *
+- * This is queued by iwl_bg_statistics_periodic.
++ * This is queued by iwl4965_bg_statistics_periodic.
+ */
+ static void iwl4965_bg_statistics_work(struct work_struct *work)
+ {
+- struct iwl_priv *priv = container_of(work, struct iwl_priv,
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
+ statistics_work);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+ mutex_lock(&priv->mutex);
+- iwl_send_statistics_request(priv);
++ iwl4965_send_statistics_request(priv);
+ mutex_unlock(&priv->mutex);
+ }
+
+ #define CT_LIMIT_CONST 259
+ #define TM_CT_KILL_THRESHOLD 110
+
+-void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
++void iwl4965_rf_kill_ct_config(struct iwl4965_priv *priv)
+ {
+- struct iwl_ct_kill_config cmd;
++ struct iwl4965_ct_kill_config cmd;
+ u32 R1, R2, R3;
+ u32 temp_th;
+ u32 crit_temperature;
+@@ -720,7 +743,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
+ int rc = 0;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
+ CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+@@ -738,7 +761,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
+
+ crit_temperature = ((temp_th * (R3-R1))/CT_LIMIT_CONST) + R2;
+ cmd.critical_temperature_R = cpu_to_le32(crit_temperature);
+- rc = iwl_send_cmd_pdu(priv,
++ rc = iwl4965_send_cmd_pdu(priv,
+ REPLY_CT_KILL_CONFIG_CMD, sizeof(cmd), &cmd);
+ if (rc)
+ IWL_ERROR("REPLY_CT_KILL_CONFIG_CMD failed\n");
+@@ -746,7 +769,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
+ IWL_DEBUG_INFO("REPLY_CT_KILL_CONFIG_CMD succeeded\n");
+ }
+
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+
+ /* "false alarms" are signals that our DSP tries to lock onto,
+ * but then determines that they are either noise, or transmissions
+@@ -756,7 +779,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
+ * enough to receive all of our own network traffic, but not so
+ * high that our DSP gets too busy trying to lock onto non-network
+ * activity/noise. */
+-static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
++static int iwl4965_sens_energy_cck(struct iwl4965_priv *priv,
+ u32 norm_fa,
+ u32 rx_enable_time,
+ struct statistics_general_data *rx_info)
+@@ -782,7 +805,7 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
+ u32 false_alarms = norm_fa * 200 * 1024;
+ u32 max_false_alarms = MAX_FA_CCK * rx_enable_time;
+ u32 min_false_alarms = MIN_FA_CCK * rx_enable_time;
+- struct iwl_sensitivity_data *data = NULL;
++ struct iwl4965_sensitivity_data *data = NULL;
+
+ data = &(priv->sensitivity_data);
+
+@@ -792,11 +815,11 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
+ * This is background noise, which may include transmissions from other
+ * networks, measured during silence before our network's beacon */
+ silence_rssi_a = (u8)((rx_info->beacon_silence_rssi_a &
+- ALL_BAND_FILTER)>>8);
++ ALL_BAND_FILTER) >> 8);
+ silence_rssi_b = (u8)((rx_info->beacon_silence_rssi_b &
+- ALL_BAND_FILTER)>>8);
++ ALL_BAND_FILTER) >> 8);
+ silence_rssi_c = (u8)((rx_info->beacon_silence_rssi_c &
+- ALL_BAND_FILTER)>>8);
++ ALL_BAND_FILTER) >> 8);
+
+ val = max(silence_rssi_b, silence_rssi_c);
+ max_silence_rssi = max(silence_rssi_a, (u8) val);
+@@ -947,7 +970,7 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
+ }
+
+
+-static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
++static int iwl4965_sens_auto_corr_ofdm(struct iwl4965_priv *priv,
+ u32 norm_fa,
+ u32 rx_enable_time)
+ {
+@@ -955,7 +978,7 @@ static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
+ u32 false_alarms = norm_fa * 200 * 1024;
+ u32 max_false_alarms = MAX_FA_OFDM * rx_enable_time;
+ u32 min_false_alarms = MIN_FA_OFDM * rx_enable_time;
+- struct iwl_sensitivity_data *data = NULL;
++ struct iwl4965_sensitivity_data *data = NULL;
+
+ data = &(priv->sensitivity_data);
+
+@@ -1012,22 +1035,22 @@ static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
+ return 0;
+ }
+
+-static int iwl_sensitivity_callback(struct iwl_priv *priv,
+- struct iwl_cmd *cmd, struct sk_buff *skb)
++static int iwl4965_sensitivity_callback(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd, struct sk_buff *skb)
+ {
+ /* We didn't cache the SKB; let the caller free it */
+ return 1;
+ }
+
+ /* Prepare a SENSITIVITY_CMD, send to uCode if values have changed */
+-static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
++static int iwl4965_sensitivity_write(struct iwl4965_priv *priv, u8 flags)
+ {
+ int rc = 0;
+- struct iwl_sensitivity_cmd cmd ;
+- struct iwl_sensitivity_data *data = NULL;
+- struct iwl_host_cmd cmd_out = {
++ struct iwl4965_sensitivity_cmd cmd ;
++ struct iwl4965_sensitivity_data *data = NULL;
++ struct iwl4965_host_cmd cmd_out = {
+ .id = SENSITIVITY_CMD,
+- .len = sizeof(struct iwl_sensitivity_cmd),
++ .len = sizeof(struct iwl4965_sensitivity_cmd),
+ .meta.flags = flags,
+ .data = &cmd,
+ };
+@@ -1071,10 +1094,11 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
+ data->auto_corr_cck, data->auto_corr_cck_mrc,
+ data->nrg_th_cck);
+
++ /* Update uCode's "work" table, and copy it to DSP */
+ cmd.control = SENSITIVITY_CMD_CONTROL_WORK_TABLE;
+
+ if (flags & CMD_ASYNC)
+- cmd_out.meta.u.callback = iwl_sensitivity_callback;
++ cmd_out.meta.u.callback = iwl4965_sensitivity_callback;
+
+ /* Don't send command to uCode if nothing has changed */
+ if (!memcmp(&cmd.table[0], &(priv->sensitivity_tbl[0]),
+@@ -1087,7 +1111,7 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
+ memcpy(&(priv->sensitivity_tbl[0]), &(cmd.table[0]),
+ sizeof(u16)*HD_TABLE_SIZE);
+
+- rc = iwl_send_cmd(priv, &cmd_out);
++ rc = iwl4965_send_cmd(priv, &cmd_out);
+ if (!rc) {
+ IWL_DEBUG_CALIB("SENSITIVITY_CMD succeeded\n");
+ return rc;
+@@ -1096,11 +1120,11 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
+ return 0;
+ }
+
+-void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
++void iwl4965_init_sensitivity(struct iwl4965_priv *priv, u8 flags, u8 force)
+ {
+ int rc = 0;
+ int i;
+- struct iwl_sensitivity_data *data = NULL;
++ struct iwl4965_sensitivity_data *data = NULL;
+
+ IWL_DEBUG_CALIB("Start iwl4965_init_sensitivity\n");
+
+@@ -1110,7 +1134,7 @@ void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
+
+ /* Clear driver's sensitivity algo data */
+ data = &(priv->sensitivity_data);
+- memset(data, 0, sizeof(struct iwl_sensitivity_data));
++ memset(data, 0, sizeof(struct iwl4965_sensitivity_data));
+
+ data->num_in_cck_no_fa = 0;
+ data->nrg_curr_state = IWL_FA_TOO_MANY;
+@@ -1154,21 +1178,21 @@ void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
+ /* Reset differential Rx gains in NIC to prepare for chain noise calibration.
+ * Called after every association, but this runs only once!
+ * ... once chain noise is calibrated the first time, it's good forever. */
+-void iwl4965_chain_noise_reset(struct iwl_priv *priv)
++void iwl4965_chain_noise_reset(struct iwl4965_priv *priv)
+ {
+- struct iwl_chain_noise_data *data = NULL;
++ struct iwl4965_chain_noise_data *data = NULL;
+ int rc = 0;
+
+ data = &(priv->chain_noise_data);
+- if ((data->state == IWL_CHAIN_NOISE_ALIVE) && iwl_is_associated(priv)) {
+- struct iwl_calibration_cmd cmd;
++ if ((data->state == IWL_CHAIN_NOISE_ALIVE) && iwl4965_is_associated(priv)) {
++ struct iwl4965_calibration_cmd cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opCode = PHY_CALIBRATE_DIFF_GAIN_CMD;
+ cmd.diff_gain_a = 0;
+ cmd.diff_gain_b = 0;
+ cmd.diff_gain_c = 0;
+- rc = iwl_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
+ sizeof(cmd), &cmd);
+ msleep(4);
+ data->state = IWL_CHAIN_NOISE_ACCUMULATE;
+@@ -1183,10 +1207,10 @@ void iwl4965_chain_noise_reset(struct iwl_priv *priv)
+ * 1) Which antennas are connected.
+ * 2) Differential rx gain settings to balance the 3 receivers.
+ */
+-static void iwl4965_noise_calibration(struct iwl_priv *priv,
+- struct iwl_notif_statistics *stat_resp)
++static void iwl4965_noise_calibration(struct iwl4965_priv *priv,
++ struct iwl4965_notif_statistics *stat_resp)
+ {
+- struct iwl_chain_noise_data *data = NULL;
++ struct iwl4965_chain_noise_data *data = NULL;
+ int rc = 0;
+
+ u32 chain_noise_a;
+@@ -1385,7 +1409,7 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
+
+ /* Differential gain gets sent to uCode only once */
+ if (!data->radio_write) {
+- struct iwl_calibration_cmd cmd;
++ struct iwl4965_calibration_cmd cmd;
+ data->radio_write = 1;
+
+ memset(&cmd, 0, sizeof(cmd));
+@@ -1393,7 +1417,7 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
+ cmd.diff_gain_a = data->delta_gain_code[0];
+ cmd.diff_gain_b = data->delta_gain_code[1];
+ cmd.diff_gain_c = data->delta_gain_code[2];
+- rc = iwl_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
+ sizeof(cmd), &cmd);
+ if (rc)
+ IWL_DEBUG_CALIB("fail sending cmd "
+@@ -1416,8 +1440,8 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
+ return;
+ }
+
+-static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+- struct iwl_notif_statistics *resp)
++static void iwl4965_sensitivity_calibration(struct iwl4965_priv *priv,
++ struct iwl4965_notif_statistics *resp)
+ {
+ int rc = 0;
+ u32 rx_enable_time;
+@@ -1427,7 +1451,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+ u32 bad_plcp_ofdm;
+ u32 norm_fa_ofdm;
+ u32 norm_fa_cck;
+- struct iwl_sensitivity_data *data = NULL;
++ struct iwl4965_sensitivity_data *data = NULL;
+ struct statistics_rx_non_phy *rx_info = &(resp->rx.general);
+ struct statistics_rx *statistics = &(resp->rx);
+ unsigned long flags;
+@@ -1435,7 +1459,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+
+ data = &(priv->sensitivity_data);
+
+- if (!iwl_is_associated(priv)) {
++ if (!iwl4965_is_associated(priv)) {
+ IWL_DEBUG_CALIB("<< - not associated\n");
+ return;
+ }
+@@ -1523,7 +1547,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+
+ static void iwl4965_bg_sensitivity_work(struct work_struct *work)
+ {
+- struct iwl_priv *priv = container_of(work, struct iwl_priv,
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
+ sensitivity_work);
+
+ mutex_lock(&priv->mutex);
+@@ -1549,11 +1573,11 @@ static void iwl4965_bg_sensitivity_work(struct work_struct *work)
+ mutex_unlock(&priv->mutex);
+ return;
+ }
+-#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
++#endif /*CONFIG_IWL4965_SENSITIVITY*/
+
+ static void iwl4965_bg_txpower_work(struct work_struct *work)
+ {
+- struct iwl_priv *priv = container_of(work, struct iwl_priv,
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
+ txpower_work);
+
+ /* If a scan happened to start before we got here
+@@ -1569,7 +1593,7 @@ static void iwl4965_bg_txpower_work(struct work_struct *work)
+ /* Regardless of if we are assocaited, we must reconfigure the
+ * TX power since frames can be sent on non-radar channels while
+ * not associated */
+- iwl_hw_reg_send_txpower(priv);
++ iwl4965_hw_reg_send_txpower(priv);
+
+ /* Update last_temperature to keep is_calib_needed from running
+ * when it isn't needed... */
+@@ -1581,24 +1605,31 @@ static void iwl4965_bg_txpower_work(struct work_struct *work)
+ /*
+ * Acquire priv->lock before calling this function !
+ */
+-static void iwl4965_set_wr_ptrs(struct iwl_priv *priv, int txq_id, u32 index)
++static void iwl4965_set_wr_ptrs(struct iwl4965_priv *priv, int txq_id, u32 index)
+ {
+- iwl_write_restricted(priv, HBUS_TARG_WRPTR,
++ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR,
+ (index & 0xff) | (txq_id << 8));
+- iwl_write_restricted_reg(priv, SCD_QUEUE_RDPTR(txq_id), index);
++ iwl4965_write_prph(priv, KDR_SCD_QUEUE_RDPTR(txq_id), index);
+ }
+
+-/*
+- * Acquire priv->lock before calling this function !
++/**
++ * iwl4965_tx_queue_set_status - (optionally) start Tx/Cmd queue
++ * @tx_fifo_id: Tx DMA/FIFO channel (range 0-7) that the queue will feed
++ * @scd_retry: (1) Indicates queue will be used in aggregation mode
++ *
++ * NOTE: Acquire priv->lock before calling this function !
+ */
+-static void iwl4965_tx_queue_set_status(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq,
++static void iwl4965_tx_queue_set_status(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq,
+ int tx_fifo_id, int scd_retry)
+ {
+ int txq_id = txq->q.id;
++
++ /* Find out whether to activate Tx queue */
+ int active = test_bit(txq_id, &priv->txq_ctx_active_msk)?1:0;
+
+- iwl_write_restricted_reg(priv, SCD_QUEUE_STATUS_BITS(txq_id),
++ /* Set up and activate */
++ iwl4965_write_prph(priv, KDR_SCD_QUEUE_STATUS_BITS(txq_id),
+ (active << SCD_QUEUE_STTS_REG_POS_ACTIVE) |
+ (tx_fifo_id << SCD_QUEUE_STTS_REG_POS_TXF) |
+ (scd_retry << SCD_QUEUE_STTS_REG_POS_WSL) |
+@@ -1608,7 +1639,7 @@ static void iwl4965_tx_queue_set_status(struct iwl_priv *priv,
+ txq->sched_retry = scd_retry;
+
+ IWL_DEBUG_INFO("%s %s Queue %d on AC %d\n",
+- active ? "Activete" : "Deactivate",
++ active ? "Activate" : "Deactivate",
+ scd_retry ? "BA" : "AC", txq_id, tx_fifo_id);
+ }
+
+@@ -1622,17 +1653,17 @@ static const u16 default_queue_to_tx_fifo[] = {
+ IWL_TX_FIFO_HCCA_2
+ };
+
+-static inline void iwl4965_txq_ctx_activate(struct iwl_priv *priv, int txq_id)
++static inline void iwl4965_txq_ctx_activate(struct iwl4965_priv *priv, int txq_id)
+ {
+ set_bit(txq_id, &priv->txq_ctx_active_msk);
+ }
+
+-static inline void iwl4965_txq_ctx_deactivate(struct iwl_priv *priv, int txq_id)
++static inline void iwl4965_txq_ctx_deactivate(struct iwl4965_priv *priv, int txq_id)
+ {
+ clear_bit(txq_id, &priv->txq_ctx_active_msk);
+ }
+
+-int iwl4965_alive_notify(struct iwl_priv *priv)
++int iwl4965_alive_notify(struct iwl4965_priv *priv)
+ {
+ u32 a;
+ int i = 0;
+@@ -1641,45 +1672,55 @@ int iwl4965_alive_notify(struct iwl_priv *priv)
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ memset(&(priv->sensitivity_data), 0,
+- sizeof(struct iwl_sensitivity_data));
++ sizeof(struct iwl4965_sensitivity_data));
+ memset(&(priv->chain_noise_data), 0,
+- sizeof(struct iwl_chain_noise_data));
++ sizeof(struct iwl4965_chain_noise_data));
+ for (i = 0; i < NUM_RX_CHAINS; i++)
+ priv->chain_noise_data.delta_gain_code[i] =
+ CHAIN_NOISE_DELTA_GAIN_INIT_VAL;
+-#endif /* CONFIG_IWLWIFI_SENSITIVITY*/
+- rc = iwl_grab_restricted_access(priv);
++#endif /* CONFIG_IWL4965_SENSITIVITY*/
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+- priv->scd_base_addr = iwl_read_restricted_reg(priv, SCD_SRAM_BASE_ADDR);
++ /* Clear 4965's internal Tx Scheduler data base */
++ priv->scd_base_addr = iwl4965_read_prph(priv, KDR_SCD_SRAM_BASE_ADDR);
+ a = priv->scd_base_addr + SCD_CONTEXT_DATA_OFFSET;
+ for (; a < priv->scd_base_addr + SCD_TX_STTS_BITMAP_OFFSET; a += 4)
+- iwl_write_restricted_mem(priv, a, 0);
++ iwl4965_write_targ_mem(priv, a, 0);
+ for (; a < priv->scd_base_addr + SCD_TRANSLATE_TBL_OFFSET; a += 4)
+- iwl_write_restricted_mem(priv, a, 0);
++ iwl4965_write_targ_mem(priv, a, 0);
+ for (; a < sizeof(u16) * priv->hw_setting.max_txq_num; a += 4)
+- iwl_write_restricted_mem(priv, a, 0);
++ iwl4965_write_targ_mem(priv, a, 0);
+
+- iwl_write_restricted_reg(priv, SCD_DRAM_BASE_ADDR,
++ /* Tell 4965 where to find Tx byte count tables */
++ iwl4965_write_prph(priv, KDR_SCD_DRAM_BASE_ADDR,
+ (priv->hw_setting.shared_phys +
+- offsetof(struct iwl_shared, queues_byte_cnt_tbls)) >> 10);
+- iwl_write_restricted_reg(priv, SCD_QUEUECHAIN_SEL, 0);
++ offsetof(struct iwl4965_shared, queues_byte_cnt_tbls)) >> 10);
+
+- /* initiate the queues */
++ /* Disable chain mode for all queues */
++ iwl4965_write_prph(priv, KDR_SCD_QUEUECHAIN_SEL, 0);
++
++ /* Initialize each Tx queue (including the command queue) */
+ for (i = 0; i < priv->hw_setting.max_txq_num; i++) {
+- iwl_write_restricted_reg(priv, SCD_QUEUE_RDPTR(i), 0);
+- iwl_write_restricted(priv, HBUS_TARG_WRPTR, 0 | (i << 8));
+- iwl_write_restricted_mem(priv, priv->scd_base_addr +
++
++ /* TFD circular buffer read/write indexes */
++ iwl4965_write_prph(priv, KDR_SCD_QUEUE_RDPTR(i), 0);
++ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR, 0 | (i << 8));
++
++ /* Max Tx Window size for Scheduler-ACK mode */
++ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
+ SCD_CONTEXT_QUEUE_OFFSET(i),
+ (SCD_WIN_SIZE <<
+ SCD_QUEUE_CTX_REG1_WIN_SIZE_POS) &
+ SCD_QUEUE_CTX_REG1_WIN_SIZE_MSK);
+- iwl_write_restricted_mem(priv, priv->scd_base_addr +
++
++ /* Frame limit */
++ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
+ SCD_CONTEXT_QUEUE_OFFSET(i) +
+ sizeof(u32),
+ (SCD_FRAME_LIMIT <<
+@@ -1687,87 +1728,98 @@ int iwl4965_alive_notify(struct iwl_priv *priv)
+ SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK);
+
+ }
+- iwl_write_restricted_reg(priv, SCD_INTERRUPT_MASK,
++ iwl4965_write_prph(priv, KDR_SCD_INTERRUPT_MASK,
+ (1 << priv->hw_setting.max_txq_num) - 1);
+
+- iwl_write_restricted_reg(priv, SCD_TXFACT,
++ /* Activate all Tx DMA/FIFO channels */
++ iwl4965_write_prph(priv, KDR_SCD_TXFACT,
+ SCD_TXFACT_REG_TXFIFO_MASK(0, 7));
-- rs_initialize_lq(priv, sta);
-+ rs_initialize_lq(priv, conf, sta);
+ iwl4965_set_wr_ptrs(priv, IWL_CMD_QUEUE_NUM, 0);
+- /* map qos queues to fifos one-to-one */
++
++ /* Map each Tx/cmd queue to its corresponding fifo */
+ for (i = 0; i < ARRAY_SIZE(default_queue_to_tx_fifo); i++) {
+ int ac = default_queue_to_tx_fifo[i];
+ iwl4965_txq_ctx_activate(priv, i);
+ iwl4965_tx_queue_set_status(priv, &priv->txq[i], ac, 0);
+ }
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
}
--static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
-- struct iwl_rate *tx_mcs,
-- struct iwl_link_quality_cmd *lq_cmd)
-+static void rs_fill_link_cmd(struct iwl4965_lq_sta *lq_sta,
-+ struct iwl4965_rate *tx_mcs,
-+ struct iwl4965_link_quality_cmd *lq_cmd)
+-int iwl_hw_set_hw_setting(struct iwl_priv *priv)
++/**
++ * iwl4965_hw_set_hw_setting
++ *
++ * Called when initializing driver
++ */
++int iwl4965_hw_set_hw_setting(struct iwl4965_priv *priv)
{
- int index = 0;
- int rate_idx;
- int repeat_rate = 0;
- u8 ant_toggle_count = 0;
- u8 use_ht_possible = 1;
-- struct iwl_rate new_rate;
-- struct iwl_scale_tbl_info tbl_type = { 0 };
-+ struct iwl4965_rate new_rate;
-+ struct iwl4965_scale_tbl_info tbl_type = { 0 };
-
-- rs_dbgfs_set_mcs(lq_data, tx_mcs, index);
-+ /* Override starting rate (index 0) if needed for debug purposes */
-+ rs_dbgfs_set_mcs(lq_sta, tx_mcs, index);
++ /* Allocate area for Tx byte count tables and Rx queue status */
+ priv->hw_setting.shared_virt =
+ pci_alloc_consistent(priv->pci_dev,
+- sizeof(struct iwl_shared),
++ sizeof(struct iwl4965_shared),
+ &priv->hw_setting.shared_phys);
-- rs_get_tbl_info_from_mcs(tx_mcs, lq_data->phymode,
-+ /* Interpret rate_n_flags */
-+ rs_get_tbl_info_from_mcs(tx_mcs, lq_sta->phymode,
- &tbl_type, &rate_idx);
+ if (!priv->hw_setting.shared_virt)
+ return -1;
-+ /* How many times should we repeat the initial rate? */
- if (is_legacy(tbl_type.lq_type)) {
- ant_toggle_count = 1;
- repeat_rate = IWL_NUMBER_TRY;
-@@ -1909,19 +2219,27 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
+- memset(priv->hw_setting.shared_virt, 0, sizeof(struct iwl_shared));
++ memset(priv->hw_setting.shared_virt, 0, sizeof(struct iwl4965_shared));
- lq_cmd->general_params.mimo_delimiter =
- is_mimo(tbl_type.lq_type) ? 1 : 0;
-+
-+ /* Fill 1st table entry (index 0) */
- lq_cmd->rs_table[index].rate_n_flags =
- cpu_to_le32(tx_mcs->rate_n_flags);
- new_rate.rate_n_flags = tx_mcs->rate_n_flags;
+- priv->hw_setting.max_txq_num = iwl_param_queues_num;
++ priv->hw_setting.max_txq_num = iwl4965_param_queues_num;
+ priv->hw_setting.ac_queue_count = AC_NUM;
+-
+- priv->hw_setting.cck_flag = RATE_MCS_CCK_MSK;
+- priv->hw_setting.tx_cmd_len = sizeof(struct iwl_tx_cmd);
++ priv->hw_setting.tx_cmd_len = sizeof(struct iwl4965_tx_cmd);
+ priv->hw_setting.max_rxq_size = RX_QUEUE_SIZE;
+ priv->hw_setting.max_rxq_log = RX_QUEUE_SIZE_LOG;
+-
++ if (iwl4965_param_amsdu_size_8K)
++ priv->hw_setting.rx_buf_size = IWL_RX_BUF_SIZE_8K;
++ else
++ priv->hw_setting.rx_buf_size = IWL_RX_BUF_SIZE_4K;
++ priv->hw_setting.max_pkt_size = priv->hw_setting.rx_buf_size - 256;
+ priv->hw_setting.max_stations = IWL4965_STATION_COUNT;
+ priv->hw_setting.bcast_sta_id = IWL4965_BROADCAST_ID;
+ return 0;
+ }
- if (is_mimo(tbl_type.lq_type) || (tbl_type.antenna_type == ANT_MAIN))
-- lq_cmd->general_params.single_stream_ant_msk = 1;
-+ lq_cmd->general_params.single_stream_ant_msk
-+ = LINK_QUAL_ANT_A_MSK;
- else
-- lq_cmd->general_params.single_stream_ant_msk = 2;
-+ lq_cmd->general_params.single_stream_ant_msk
-+ = LINK_QUAL_ANT_B_MSK;
+ /**
+- * iwl_hw_txq_ctx_free - Free TXQ Context
++ * iwl4965_hw_txq_ctx_free - Free TXQ Context
+ *
+ * Destroy all TX DMA queues and structures
+ */
+-void iwl_hw_txq_ctx_free(struct iwl_priv *priv)
++void iwl4965_hw_txq_ctx_free(struct iwl4965_priv *priv)
+ {
+ int txq_id;
- index++;
- repeat_rate--;
+ /* Tx queues */
+ for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++)
+- iwl_tx_queue_free(priv, &priv->txq[txq_id]);
++ iwl4965_tx_queue_free(priv, &priv->txq[txq_id]);
-+ /* Fill rest of rate table */
- while (index < LINK_QUAL_MAX_RETRY_NUM) {
-+ /* Repeat initial/next rate.
-+ * For legacy IWL_NUMBER_TRY == 1, this loop will not execute.
-+ * For HT IWL_HT_NUMBER_TRY == 3, this executes twice. */
- while (repeat_rate > 0 && (index < LINK_QUAL_MAX_RETRY_NUM)) {
- if (is_legacy(tbl_type.lq_type)) {
- if (ant_toggle_count <
-@@ -1933,22 +2251,30 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
- }
- }
++ /* Keep-warm buffer */
+ iwl4965_kw_free(priv);
+ }
-- rs_dbgfs_set_mcs(lq_data, &new_rate, index);
-+ /* Override next rate if needed for debug purposes */
-+ rs_dbgfs_set_mcs(lq_sta, &new_rate, index);
-+
-+ /* Fill next table entry */
- lq_cmd->rs_table[index].rate_n_flags =
- cpu_to_le32(new_rate.rate_n_flags);
- repeat_rate--;
- index++;
- }
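The comments added in this hunk describe how the link-quality retry table is filled: the current rate is repeated `repeat_rate` times, then the driver falls back to a lower rate, until all LINK_QUAL_MAX_RETRY_NUM slots are set. A simplified, hedged sketch of that fill loop (rate fallback reduced to a plain index decrement for illustration; the driver's rs_get_lower_rate() is more involved):

```c
#define MAX_RETRY_NUM 16

/* Fill all retry slots: repeat each rate `repeat` times (repeat >= 1),
 * then step down to the next lower rate index, never going below 0. */
static int fill_retry_table(int start_rate, int repeat, int *table)
{
    int index = 0, rate = start_rate;

    while (index < MAX_RETRY_NUM) {
        int r;
        for (r = 0; r < repeat && index < MAX_RETRY_NUM; r++)
            table[index++] = rate;
        if (rate > 0)
            rate--;   /* fall back to the next lower rate */
    }
    return index;
}
```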
+ /**
+- * iwl_hw_txq_free_tfd - Free one TFD, those at index [txq->q.last_used]
++ * iwl4965_hw_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr]
+ *
+- * Does NOT advance any indexes
++ * Does NOT advance any TFD circular buffer read/write indexes
++ * Does NOT free the TFD itself (which is within circular buffer)
+ */
+-int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
++int iwl4965_hw_txq_free_tfd(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
+ {
+- struct iwl_tfd_frame *bd_tmp = (struct iwl_tfd_frame *)&txq->bd[0];
+- struct iwl_tfd_frame *bd = &bd_tmp[txq->q.last_used];
++ struct iwl4965_tfd_frame *bd_tmp = (struct iwl4965_tfd_frame *)&txq->bd[0];
++ struct iwl4965_tfd_frame *bd = &bd_tmp[txq->q.read_ptr];
+ struct pci_dev *dev = priv->pci_dev;
+ int i;
+ int counter = 0;
+ int index, is_odd;
-- rs_get_tbl_info_from_mcs(&new_rate, lq_data->phymode, &tbl_type,
-+ rs_get_tbl_info_from_mcs(&new_rate, lq_sta->phymode, &tbl_type,
- &rate_idx);
+- /* classify bd */
++ /* Host command buffers stay mapped in memory, nothing to clean */
+ if (txq->q.id == IWL_CMD_QUEUE_NUM)
+- /* nothing to cleanup after for host commands */
+ return 0;
-+ /* Indicate to uCode which entries might be MIMO.
-+ * If initial rate was MIMO, this will finally end up
-+ * as (IWL_HT_NUMBER_TRY * 2), after 2nd pass, otherwise 0. */
- if (is_mimo(tbl_type.lq_type))
- lq_cmd->general_params.mimo_delimiter = index;
+- /* sanity check */
++ /* Sanity check on number of chunks */
+ counter = IWL_GET_BITS(*bd, num_tbs);
+ if (counter > MAX_NUM_OF_TBS) {
+ IWL_ERROR("Too many chunks: %i\n", counter);
+@@ -1775,8 +1827,8 @@ int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
+ return 0;
+ }
-- rs_get_lower_rate(lq_data, &tbl_type, rate_idx,
-+ /* Get next rate */
-+ rs_get_lower_rate(lq_sta, &tbl_type, rate_idx,
- use_ht_possible, &new_rate);
+- /* unmap chunks if any */
+-
++ /* Unmap chunks, if any.
++ * TFD info for odd chunks is in a different format than for even chunks. */
+ for (i = 0; i < counter; i++) {
+ index = i / 2;
+ is_odd = i & 0x1;
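The index arithmetic above packs two transfer buffers into each TFD `pa[]` entry: chunk i lives in entry i/2, in the high half when i is odd. A minimal sketch of that mapping:

```c
/* Map a chunk number to its packed TFD slot: two chunks share one
 * pa[] entry, even chunks in the low half, odd chunks in the high half. */
static void tfd_chunk_slot(int i, int *index, int *is_odd)
{
    *index = i / 2;
    *is_odd = i & 0x1;
}
```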
+@@ -1796,19 +1848,20 @@ int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
+ IWL_GET_BITS(bd->pa[index], tb1_len),
+ PCI_DMA_TODEVICE);
-+ /* How many times should we repeat the next rate? */
- if (is_legacy(tbl_type.lq_type)) {
- if (ant_toggle_count < NUM_TRY_BEFORE_ANTENNA_TOGGLE)
- ant_toggle_count++;
-@@ -1960,9 +2286,14 @@ static void rs_fill_link_cmd(struct iwl_rate_scale_priv *lq_data,
- } else
- repeat_rate = IWL_HT_NUMBER_TRY;
+- if (txq->txb[txq->q.last_used].skb[i]) {
+- struct sk_buff *skb = txq->txb[txq->q.last_used].skb[i];
++ /* Free SKB, if any, for this chunk */
++ if (txq->txb[txq->q.read_ptr].skb[i]) {
++ struct sk_buff *skb = txq->txb[txq->q.read_ptr].skb[i];
-+ /* Don't allow HT rates after next pass.
-+ * rs_get_lower_rate() will change type to LQ_A or LQ_G. */
- use_ht_possible = 0;
+ dev_kfree_skb(skb);
+- txq->txb[txq->q.last_used].skb[i] = NULL;
++ txq->txb[txq->q.read_ptr].skb[i] = NULL;
+ }
+ }
+ return 0;
+ }
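As the rewritten kernel-doc notes, the function above frees the buffers referenced by the TFD at `txq->q.read_ptr` but never advances the index; the caller steps read_ptr around the circular buffer itself. A hedged sketch of that wrap-around step, assuming a power-of-two queue depth:

```c
/* Advance a circular-buffer index by one, wrapping at n_bd entries
 * (n_bd assumed here to be a power of two). */
static int queue_inc_wrap(int index, int n_bd)
{
    return (index + 1) & (n_bd - 1);
}
```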
-- rs_dbgfs_set_mcs(lq_data, &new_rate, index);
-+ /* Override next rate if needed for debug purposes */
-+ rs_dbgfs_set_mcs(lq_sta, &new_rate, index);
-+
-+ /* Fill next table entry */
- lq_cmd->rs_table[index].rate_n_flags =
- cpu_to_le32(new_rate.rate_n_flags);
+-int iwl_hw_reg_set_txpower(struct iwl_priv *priv, s8 power)
++int iwl4965_hw_reg_set_txpower(struct iwl4965_priv *priv, s8 power)
+ {
+- IWL_ERROR("TODO: Implement iwl_hw_reg_set_txpower!\n");
++ IWL_ERROR("TODO: Implement iwl4965_hw_reg_set_txpower!\n");
+ return -EINVAL;
+ }
-@@ -1987,27 +2318,27 @@ static void rs_free(void *priv_rate)
+@@ -1830,6 +1883,17 @@ static s32 iwl4965_math_div_round(s32 num, s32 denom, s32 *res)
+ return 1;
+ }
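The hunk header above references iwl4965_math_div_round(), whose body is not shown in this excerpt. The following is a hedged standalone sketch of a round-to-nearest signed division with that signature; the exact rounding arithmetic is an assumption:

```c
typedef int s32;

/* Round-to-nearest signed division; mirrors the iwl4965_math_div_round()
 * signature from the hunk header above. Returns 1 on success, matching
 * the "return 1" visible in the hunk. */
static int math_div_round(s32 num, s32 denom, s32 *res)
{
    s32 sign = 1;

    if (num < 0) { sign = -sign; num = -num; }
    if (denom < 0) { sign = -sign; denom = -denom; }

    /* Add half the divisor before truncating to round to nearest */
    *res = sign * ((num + denom / 2) / denom);
    return 1;
}
```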
- static void rs_clear(void *priv_rate)
++/**
++ * iwl4965_get_voltage_compensation - Power supply voltage comp for txpower
++ *
++ * Determines power supply voltage compensation for txpower calculations.
++ * Returns number of 1/2-dB steps to subtract from gain table index,
++ * to compensate for difference between power supply voltage during
++ * factory measurements, vs. current power supply voltage.
++ *
++ * Voltage indication is higher for lower voltage.
++ * Lower voltage requires more gain (lower gain table index).
++ */
+ static s32 iwl4965_get_voltage_compensation(s32 eeprom_voltage,
+ s32 current_voltage)
{
-- struct iwl_priv *priv = (struct iwl_priv *) priv_rate;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *) priv_rate;
+@@ -1850,12 +1914,12 @@ static s32 iwl4965_get_voltage_compensation(s32 eeprom_voltage,
+ return comp;
+ }
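The kernel-doc block added above describes the calculation in words; a minimal standalone sketch of it follows. The codes-per-step constant and the clamp range are assumptions for illustration, not values taken from this patch:

```c
typedef int s32;

#define VOLTAGE_CODES_PER_STEP 7   /* assumed: ADC codes per 1/2-dB step */

/* Number of 1/2-dB steps to subtract from the gain table index.
 * The voltage indication is inverted: a higher reading means a lower
 * supply voltage, and lower voltage needs more gain (lower index). */
static s32 voltage_compensation(s32 eeprom_voltage, s32 current_voltage)
{
    s32 comp = (current_voltage - eeprom_voltage) / VOLTAGE_CODES_PER_STEP;

    /* Ignore implausibly large differences (assumed clamp range) */
    if (comp < -2 || comp > 2)
        comp = 0;
    return comp;
}
```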
- IWL_DEBUG_RATE("enter\n");
+-static const struct iwl_channel_info *
+-iwl4965_get_channel_txpower_info(struct iwl_priv *priv, u8 phymode, u16 channel)
++static const struct iwl4965_channel_info *
++iwl4965_get_channel_txpower_info(struct iwl4965_priv *priv, u8 phymode, u16 channel)
+ {
+- const struct iwl_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
- priv->lq_mngr.lq_ready = 0;
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- if (priv->lq_mngr.agg_ctrl.granted_ba)
- iwl4965_turn_off_agg(priv, TID_ALL_SPECIFIED);
--#endif /*CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
+- ch_info = iwl_get_channel_info(priv, phymode, channel);
++ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
- IWL_DEBUG_RATE("leave\n");
+ if (!is_channel_valid(ch_info))
+ return NULL;
+@@ -1889,7 +1953,7 @@ static s32 iwl4965_get_tx_atten_grp(u16 channel)
+ return -1;
}
- static void rs_free_sta(void *priv, void *priv_sta)
+-static u32 iwl4965_get_sub_band(const struct iwl_priv *priv, u32 channel)
++static u32 iwl4965_get_sub_band(const struct iwl4965_priv *priv, u32 channel)
{
-- struct iwl_rate_scale_priv *rs_priv = priv_sta;
-+ struct iwl4965_lq_sta *lq_sta = priv_sta;
+ s32 b = -1;
- IWL_DEBUG_RATE("enter\n");
-- kfree(rs_priv);
-+ kfree(lq_sta);
- IWL_DEBUG_RATE("leave\n");
+@@ -1917,15 +1981,23 @@ static s32 iwl4965_interpolate_value(s32 x, s32 x1, s32 y1, s32 x2, s32 y2)
+ }
}
-@@ -2018,19 +2349,19 @@ static int open_file_generic(struct inode *inode, struct file *file)
- file->private_data = inode->i_private;
- return 0;
+-static int iwl4965_interpolate_chan(struct iwl_priv *priv, u32 channel,
+- struct iwl_eeprom_calib_ch_info *chan_info)
++/**
++ * iwl4965_interpolate_chan - Interpolate factory measurements for one channel
++ *
++ * Interpolates factory measurements from the two sample channels within a
++ * sub-band, to apply to channel of interest. Interpolation is proportional to
++ * differences in channel frequencies, which is proportional to differences
++ * in channel number.
++ */
++static int iwl4965_interpolate_chan(struct iwl4965_priv *priv, u32 channel,
++ struct iwl4965_eeprom_calib_ch_info *chan_info)
+ {
+ s32 s = -1;
+ u32 c;
+ u32 m;
+- const struct iwl_eeprom_calib_measure *m1;
+- const struct iwl_eeprom_calib_measure *m2;
+- struct iwl_eeprom_calib_measure *omeas;
++ const struct iwl4965_eeprom_calib_measure *m1;
++ const struct iwl4965_eeprom_calib_measure *m2;
++ struct iwl4965_eeprom_calib_measure *omeas;
+ u32 ch_i1;
+ u32 ch_i2;
+
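The kernel-doc above says interpolation between the two sample channels is proportional to differences in channel number. A hedged sketch of the helper named in the earlier hunk header (iwl4965_interpolate_value); truncating division is an assumption here, the driver may round to nearest instead:

```c
typedef int s32;

/* Linearly interpolate between factory measurements y1 (at channel x1)
 * and y2 (at channel x2) to estimate the value at channel x. */
static s32 interpolate_value(s32 x, s32 x1, s32 y1, s32 x2, s32 y2)
{
    if (x2 == x1)                /* degenerate case: one sample channel */
        return y1;
    return y1 + ((x - x1) * (y2 - y1)) / (x2 - x1);
}
```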
+@@ -2000,7 +2072,7 @@ static s32 back_off_table[] = {
+
+ /* Thermal compensation values for txpower for various frequency ranges ...
+ * ratios from 3:1 to 4.5:1 of degrees (Celsius) per half-dB gain adjust */
+-static struct iwl_txpower_comp_entry {
++static struct iwl4965_txpower_comp_entry {
+ s32 degrees_per_05db_a;
+ s32 degrees_per_05db_a_denom;
+ } tx_power_cmp_tble[CALIB_CH_GROUP_MAX] = {
+@@ -2250,9 +2322,9 @@ static const struct gain_entry gain_table[2][108] = {
+ }
+ };
+
+-static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
++static int iwl4965_fill_txpower_tbl(struct iwl4965_priv *priv, u8 band, u16 channel,
+ u8 is_fat, u8 ctrl_chan_high,
+- struct iwl_tx_power_db *tx_power_tbl)
++ struct iwl4965_tx_power_db *tx_power_tbl)
+ {
+ u8 saturation_power;
+ s32 target_power;
+@@ -2264,9 +2336,9 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
+ s32 txatten_grp = CALIB_CH_GROUP_MAX;
+ int i;
+ int c;
+- const struct iwl_channel_info *ch_info = NULL;
+- struct iwl_eeprom_calib_ch_info ch_eeprom_info;
+- const struct iwl_eeprom_calib_measure *measurement;
++ const struct iwl4965_channel_info *ch_info = NULL;
++ struct iwl4965_eeprom_calib_ch_info ch_eeprom_info;
++ const struct iwl4965_eeprom_calib_measure *measurement;
+ s16 voltage;
+ s32 init_voltage;
+ s32 voltage_compensation;
+@@ -2405,7 +2477,7 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
+ /* for each of 33 bit-rates (including 1 for CCK) */
+ for (i = 0; i < POWER_TABLE_NUM_ENTRIES; i++) {
+ u8 is_mimo_rate;
+- union iwl_tx_power_dual_stream tx_power;
++ union iwl4965_tx_power_dual_stream tx_power;
+
+ /* for mimo, reduce each chain's txpower by half
+ * (3dB, 6 steps), so total output power is regulatory
+@@ -2502,14 +2574,14 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
}
--static void rs_dbgfs_set_mcs(struct iwl_rate_scale_priv *rs_priv,
-- struct iwl_rate *mcs, int index)
-+static void rs_dbgfs_set_mcs(struct iwl4965_lq_sta *lq_sta,
-+ struct iwl4965_rate *mcs, int index)
+
+ /**
+- * iwl_hw_reg_send_txpower - Configure the TXPOWER level user limit
++ * iwl4965_hw_reg_send_txpower - Configure the TXPOWER level user limit
+ *
+ * Uses the active RXON for channel, band, and characteristics (fat, high)
+ * The power limit is taken from priv->user_txpower_limit.
+ */
+-int iwl_hw_reg_send_txpower(struct iwl_priv *priv)
++int iwl4965_hw_reg_send_txpower(struct iwl4965_priv *priv)
{
- u32 base_rate;
+- struct iwl_txpowertable_cmd cmd = { 0 };
++ struct iwl4965_txpowertable_cmd cmd = { 0 };
+ int rc = 0;
+ u8 band = 0;
+ u8 is_fat = 0;
+@@ -2541,23 +2613,23 @@ int iwl_hw_reg_send_txpower(struct iwl_priv *priv)
+ if (rc)
+ return rc;
-- if (rs_priv->phymode == (u8) MODE_IEEE80211A)
-+ if (lq_sta->phymode == (u8) MODE_IEEE80211A)
- base_rate = 0x800D;
- else
- base_rate = 0x820A;
+- rc = iwl_send_cmd_pdu(priv, REPLY_TX_PWR_TABLE_CMD, sizeof(cmd), &cmd);
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_TX_PWR_TABLE_CMD, sizeof(cmd), &cmd);
+ return rc;
+ }
-- if (rs_priv->dbg_fixed.rate_n_flags) {
-+ if (lq_sta->dbg_fixed.rate_n_flags) {
- if (index < 12)
-- mcs->rate_n_flags = rs_priv->dbg_fixed.rate_n_flags;
-+ mcs->rate_n_flags = lq_sta->dbg_fixed.rate_n_flags;
- else
- mcs->rate_n_flags = base_rate;
- IWL_DEBUG_RATE("Fixed rate ON\n");
-@@ -2043,7 +2374,7 @@ static void rs_dbgfs_set_mcs(struct iwl_rate_scale_priv *rs_priv,
- static ssize_t rs_sta_dbgfs_scale_table_write(struct file *file,
- const char __user *user_buf, size_t count, loff_t *ppos)
+-int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel)
++int iwl4965_hw_channel_switch(struct iwl4965_priv *priv, u16 channel)
{
-- struct iwl_rate_scale_priv *rs_priv = file->private_data;
-+ struct iwl4965_lq_sta *lq_sta = file->private_data;
- char buf[64];
- int buf_size;
- u32 parsed_rate;
-@@ -2054,20 +2385,20 @@ static ssize_t rs_sta_dbgfs_scale_table_write(struct file *file,
- return -EFAULT;
+ int rc;
+ u8 band = 0;
+ u8 is_fat = 0;
+ u8 ctrl_chan_high = 0;
+- struct iwl_channel_switch_cmd cmd = { 0 };
+- const struct iwl_channel_info *ch_info;
++ struct iwl4965_channel_switch_cmd cmd = { 0 };
++ const struct iwl4965_channel_info *ch_info;
- if (sscanf(buf, "%x", &parsed_rate) == 1)
-- rs_priv->dbg_fixed.rate_n_flags = parsed_rate;
-+ lq_sta->dbg_fixed.rate_n_flags = parsed_rate;
- else
-- rs_priv->dbg_fixed.rate_n_flags = 0;
-+ lq_sta->dbg_fixed.rate_n_flags = 0;
+ band = ((priv->phymode == MODE_IEEE80211B) ||
+ (priv->phymode == MODE_IEEE80211G));
-- rs_priv->active_rate = 0x0FFF;
-- rs_priv->active_siso_rate = 0x1FD0;
-- rs_priv->active_mimo_rate = 0x1FD0;
-+ lq_sta->active_rate = 0x0FFF; /* 1 - 54 MBits, includes CCK */
-+ lq_sta->active_siso_rate = 0x1FD0; /* 6 - 60 MBits, no 9, no CCK */
-+ lq_sta->active_mimo_rate = 0x1FD0; /* 6 - 60 MBits, no 9, no CCK */
+- ch_info = iwl_get_channel_info(priv, priv->phymode, channel);
++ ch_info = iwl4965_get_channel_info(priv, priv->phymode, channel);
- IWL_DEBUG_RATE("sta_id %d rate 0x%X\n",
-- rs_priv->lq.sta_id, rs_priv->dbg_fixed.rate_n_flags);
-+ lq_sta->lq.sta_id, lq_sta->dbg_fixed.rate_n_flags);
+ is_fat = is_fat_channel(priv->staging_rxon.flags);
-- if (rs_priv->dbg_fixed.rate_n_flags) {
-- rs_fill_link_cmd(rs_priv, &rs_priv->dbg_fixed, &rs_priv->lq);
-- rs_send_lq_cmd(rs_priv->drv, &rs_priv->lq, CMD_ASYNC);
-+ if (lq_sta->dbg_fixed.rate_n_flags) {
-+ rs_fill_link_cmd(lq_sta, &lq_sta->dbg_fixed, &lq_sta->lq);
-+ rs_send_lq_cmd(lq_sta->drv, &lq_sta->lq, CMD_ASYNC);
+@@ -2583,32 +2655,36 @@ int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel)
+ return rc;
}
- return count;
-@@ -2080,38 +2411,38 @@ static ssize_t rs_sta_dbgfs_scale_table_read(struct file *file,
- int desc = 0;
- int i = 0;
+- rc = iwl_send_cmd_pdu(priv, REPLY_CHANNEL_SWITCH, sizeof(cmd), &cmd);
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_CHANNEL_SWITCH, sizeof(cmd), &cmd);
+ return rc;
+ }
-- struct iwl_rate_scale_priv *rs_priv = file->private_data;
-+ struct iwl4965_lq_sta *lq_sta = file->private_data;
+ #define RTS_HCCA_RETRY_LIMIT 3
+ #define RTS_DFAULT_RETRY_LIMIT 60
-- desc += sprintf(buff+desc, "sta_id %d\n", rs_priv->lq.sta_id);
-+ desc += sprintf(buff+desc, "sta_id %d\n", lq_sta->lq.sta_id);
- desc += sprintf(buff+desc, "failed=%d success=%d rate=0%X\n",
-- rs_priv->total_failed, rs_priv->total_success,
-- rs_priv->active_rate);
-+ lq_sta->total_failed, lq_sta->total_success,
-+ lq_sta->active_rate);
- desc += sprintf(buff+desc, "fixed rate 0x%X\n",
-- rs_priv->dbg_fixed.rate_n_flags);
-+ lq_sta->dbg_fixed.rate_n_flags);
- desc += sprintf(buff+desc, "general:"
- "flags=0x%X mimo-d=%d s-ant0x%x d-ant=0x%x\n",
-- rs_priv->lq.general_params.flags,
-- rs_priv->lq.general_params.mimo_delimiter,
-- rs_priv->lq.general_params.single_stream_ant_msk,
-- rs_priv->lq.general_params.dual_stream_ant_msk);
-+ lq_sta->lq.general_params.flags,
-+ lq_sta->lq.general_params.mimo_delimiter,
-+ lq_sta->lq.general_params.single_stream_ant_msk,
-+ lq_sta->lq.general_params.dual_stream_ant_msk);
+-void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
+- struct iwl_cmd *cmd,
++void iwl4965_hw_build_tx_cmd_rate(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd,
+ struct ieee80211_tx_control *ctrl,
+ struct ieee80211_hdr *hdr, int sta_id,
+ int is_hcca)
+ {
+- u8 rate;
++ struct iwl4965_tx_cmd *tx = &cmd->cmd.tx;
+ u8 rts_retry_limit = 0;
+ u8 data_retry_limit = 0;
+- __le32 tx_flags;
+ u16 fc = le16_to_cpu(hdr->frame_control);
++ u8 rate_plcp;
++ u16 rate_flags = 0;
++ int rate_idx = min(ctrl->tx_rate & 0xffff, IWL_RATE_COUNT - 1);
- desc += sprintf(buff+desc, "agg:"
- "time_limit=%d dist_start_th=%d frame_cnt_limit=%d\n",
-- le16_to_cpu(rs_priv->lq.agg_params.agg_time_limit),
-- rs_priv->lq.agg_params.agg_dis_start_th,
-- rs_priv->lq.agg_params.agg_frame_cnt_limit);
-+ le16_to_cpu(lq_sta->lq.agg_params.agg_time_limit),
-+ lq_sta->lq.agg_params.agg_dis_start_th,
-+ lq_sta->lq.agg_params.agg_frame_cnt_limit);
+- tx_flags = cmd->cmd.tx.tx_flags;
+-
+- rate = iwl_rates[ctrl->tx_rate].plcp;
++ rate_plcp = iwl4965_rates[rate_idx].plcp;
- desc += sprintf(buff+desc,
- "Start idx [0]=0x%x [1]=0x%x [2]=0x%x [3]=0x%x\n",
-- rs_priv->lq.general_params.start_rate_index[0],
-- rs_priv->lq.general_params.start_rate_index[1],
-- rs_priv->lq.general_params.start_rate_index[2],
-- rs_priv->lq.general_params.start_rate_index[3]);
-+ lq_sta->lq.general_params.start_rate_index[0],
-+ lq_sta->lq.general_params.start_rate_index[1],
-+ lq_sta->lq.general_params.start_rate_index[2],
-+ lq_sta->lq.general_params.start_rate_index[3]);
+ rts_retry_limit = (is_hcca) ?
+ RTS_HCCA_RETRY_LIMIT : RTS_DFAULT_RETRY_LIMIT;
++ if ((rate_idx >= IWL_FIRST_CCK_RATE) && (rate_idx <= IWL_LAST_CCK_RATE))
++ rate_flags |= RATE_MCS_CCK_MSK;
++
++
+ if (ieee80211_is_probe_response(fc)) {
+ data_retry_limit = 3;
+ if (data_retry_limit < rts_retry_limit)
+@@ -2619,44 +2695,56 @@ void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
+ if (priv->data_retry_limit != -1)
+ data_retry_limit = priv->data_retry_limit;
- for (i = 0; i < LINK_QUAL_MAX_RETRY_NUM; i++)
- desc += sprintf(buff+desc, " rate[%d] 0x%X\n",
-- i, le32_to_cpu(rs_priv->lq.rs_table[i].rate_n_flags));
-+ i, le32_to_cpu(lq_sta->lq.rs_table[i].rate_n_flags));
+- if ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) {
++
++ if (ieee80211_is_data(fc)) {
++ tx->initial_rate_index = 0;
++ tx->tx_flags |= TX_CMD_FLG_STA_RATE_MSK;
++ } else {
+ switch (fc & IEEE80211_FCTL_STYPE) {
+ case IEEE80211_STYPE_AUTH:
+ case IEEE80211_STYPE_DEAUTH:
+ case IEEE80211_STYPE_ASSOC_REQ:
+ case IEEE80211_STYPE_REASSOC_REQ:
+- if (tx_flags & TX_CMD_FLG_RTS_MSK) {
+- tx_flags &= ~TX_CMD_FLG_RTS_MSK;
+- tx_flags |= TX_CMD_FLG_CTS_MSK;
++ if (tx->tx_flags & TX_CMD_FLG_RTS_MSK) {
++ tx->tx_flags &= ~TX_CMD_FLG_RTS_MSK;
++ tx->tx_flags |= TX_CMD_FLG_CTS_MSK;
+ }
+ break;
+ default:
+ break;
+ }
++
++ /* Alternate between antenna A and B for successive frames */
++ if (priv->use_ant_b_for_management_frame) {
++ priv->use_ant_b_for_management_frame = 0;
++ rate_flags |= RATE_MCS_ANT_B_MSK;
++ } else {
++ priv->use_ant_b_for_management_frame = 1;
++ rate_flags |= RATE_MCS_ANT_A_MSK;
++ }
+ }
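The antenna alternation added above keeps management frames from repeatedly going out on a possibly dead antenna during association. A minimal sketch of the toggle (the mask values here are illustrative, not the driver's RATE_MCS_ANT_* constants):

```c
#include <stdint.h>

#define ANT_A_MSK 0x4000   /* illustrative mask values */
#define ANT_B_MSK 0x8000

/* Toggle the antenna used for successive management frames,
 * mirroring the use_ant_b_for_management_frame logic above. */
static uint16_t next_mgmt_antenna(int *use_ant_b)
{
    if (*use_ant_b) {
        *use_ant_b = 0;
        return ANT_B_MSK;
    }
    *use_ant_b = 1;
    return ANT_A_MSK;
}
```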
- return simple_read_from_buffer(user_buf, count, ppos, buff, desc);
+- cmd->cmd.tx.rts_retry_limit = rts_retry_limit;
+- cmd->cmd.tx.data_retry_limit = data_retry_limit;
+- cmd->cmd.tx.rate_n_flags = iwl_hw_set_rate_n_flags(rate, 0);
+- cmd->cmd.tx.tx_flags = tx_flags;
++ tx->rts_retry_limit = rts_retry_limit;
++ tx->data_retry_limit = data_retry_limit;
++ tx->rate_n_flags = iwl4965_hw_set_rate_n_flags(rate_plcp, rate_flags);
}
-@@ -2128,22 +2459,22 @@ static ssize_t rs_sta_dbgfs_stats_table_read(struct file *file,
- int desc = 0;
- int i, j;
-- struct iwl_rate_scale_priv *rs_priv = file->private_data;
-+ struct iwl4965_lq_sta *lq_sta = file->private_data;
- for (i = 0; i < LQ_SIZE; i++) {
- desc += sprintf(buff+desc, "%s type=%d SGI=%d FAT=%d DUP=%d\n"
- "rate=0x%X\n",
-- rs_priv->active_tbl == i?"*":"x",
-- rs_priv->lq_info[i].lq_type,
-- rs_priv->lq_info[i].is_SGI,
-- rs_priv->lq_info[i].is_fat,
-- rs_priv->lq_info[i].is_dup,
-- rs_priv->lq_info[i].current_rate.rate_n_flags);
-+ lq_sta->active_tbl == i?"*":"x",
-+ lq_sta->lq_info[i].lq_type,
-+ lq_sta->lq_info[i].is_SGI,
-+ lq_sta->lq_info[i].is_fat,
-+ lq_sta->lq_info[i].is_dup,
-+ lq_sta->lq_info[i].current_rate.rate_n_flags);
- for (j = 0; j < IWL_RATE_COUNT; j++) {
- desc += sprintf(buff+desc,
-- "counter=%d success=%d %%=%d\n",
-- rs_priv->lq_info[i].win[j].counter,
-- rs_priv->lq_info[i].win[j].success_counter,
-- rs_priv->lq_info[i].win[j].success_ratio);
-+ "counter=%d success=%d %%=%d\n",
-+ lq_sta->lq_info[i].win[j].counter,
-+ lq_sta->lq_info[i].win[j].success_counter,
-+ lq_sta->lq_info[i].win[j].success_ratio);
- }
- }
- return simple_read_from_buffer(user_buf, count, ppos, buff, desc);
-@@ -2157,20 +2488,20 @@ static const struct file_operations rs_sta_dbgfs_stats_table_ops = {
- static void rs_add_debugfs(void *priv, void *priv_sta,
- struct dentry *dir)
+-int iwl_hw_get_rx_read(struct iwl_priv *priv)
++int iwl4965_hw_get_rx_read(struct iwl4965_priv *priv)
{
-- struct iwl_rate_scale_priv *rs_priv = priv_sta;
-- rs_priv->rs_sta_dbgfs_scale_table_file =
-+ struct iwl4965_lq_sta *lq_sta = priv_sta;
-+ lq_sta->rs_sta_dbgfs_scale_table_file =
- debugfs_create_file("rate_scale_table", 0600, dir,
-- rs_priv, &rs_sta_dbgfs_scale_table_ops);
-- rs_priv->rs_sta_dbgfs_stats_table_file =
-+ lq_sta, &rs_sta_dbgfs_scale_table_ops);
-+ lq_sta->rs_sta_dbgfs_stats_table_file =
- debugfs_create_file("rate_stats_table", 0600, dir,
-- rs_priv, &rs_sta_dbgfs_stats_table_ops);
-+ lq_sta, &rs_sta_dbgfs_stats_table_ops);
+- struct iwl_shared *shared_data = priv->hw_setting.shared_virt;
++ struct iwl4965_shared *shared_data = priv->hw_setting.shared_virt;
+
+ return IWL_GET_BITS(*shared_data, rb_closed_stts_rb_num);
}
- static void rs_remove_debugfs(void *priv, void *priv_sta)
+-int iwl_hw_get_temperature(struct iwl_priv *priv)
++int iwl4965_hw_get_temperature(struct iwl4965_priv *priv)
{
-- struct iwl_rate_scale_priv *rs_priv = priv_sta;
-- debugfs_remove(rs_priv->rs_sta_dbgfs_scale_table_file);
-- debugfs_remove(rs_priv->rs_sta_dbgfs_stats_table_file);
-+ struct iwl4965_lq_sta *lq_sta = priv_sta;
-+ debugfs_remove(lq_sta->rs_sta_dbgfs_scale_table_file);
-+ debugfs_remove(lq_sta->rs_sta_dbgfs_stats_table_file);
+ return priv->temperature;
}
- #endif
-
-@@ -2191,13 +2522,13 @@ static struct rate_control_ops rs_ops = {
- #endif
- };
--int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
-+int iwl4965_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+-unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
+- struct iwl_frame *frame, u8 rate)
++unsigned int iwl4965_hw_get_beacon_cmd(struct iwl4965_priv *priv,
++ struct iwl4965_frame *frame, u8 rate)
{
- struct ieee80211_local *local = hw_to_local(hw);
-- struct iwl_priv *priv = hw->priv;
-- struct iwl_rate_scale_priv *rs_priv;
-+ struct iwl4965_priv *priv = hw->priv;
-+ struct iwl4965_lq_sta *lq_sta;
- struct sta_info *sta;
-- int count = 0, i;
-+ int cnt = 0, i;
- u32 samples = 0, success = 0, good = 0;
- unsigned long now = jiffies;
- u32 max_time = 0;
-@@ -2213,10 +2544,10 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
- return sprintf(buf, "station %d not found\n", sta_id);
- }
-
-- rs_priv = (void *)sta->rate_ctrl_priv;
-+ lq_sta = (void *)sta->rate_ctrl_priv;
-
-- lq_type = rs_priv->lq_info[rs_priv->active_tbl].lq_type;
-- antenna = rs_priv->lq_info[rs_priv->active_tbl].antenna_type;
-+ lq_type = lq_sta->lq_info[lq_sta->active_tbl].lq_type;
-+ antenna = lq_sta->lq_info[lq_sta->active_tbl].antenna_type;
-
- if (is_legacy(lq_type))
- i = IWL_RATE_54M_INDEX;
-@@ -2225,35 +2556,35 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
- while (1) {
- u64 mask;
- int j;
-- int active = rs_priv->active_tbl;
-+ int active = lq_sta->active_tbl;
+- struct iwl_tx_beacon_cmd *tx_beacon_cmd;
++ struct iwl4965_tx_beacon_cmd *tx_beacon_cmd;
+ unsigned int frame_size;
-- count +=
-- sprintf(&buf[count], " %2dMbs: ", iwl_rates[i].ieee / 2);
-+ cnt +=
-+ sprintf(&buf[cnt], " %2dMbs: ", iwl4965_rates[i].ieee / 2);
+ tx_beacon_cmd = &frame->u.beacon;
+@@ -2665,9 +2753,9 @@ unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
+ tx_beacon_cmd->tx.sta_id = IWL4965_BROADCAST_ID;
+ tx_beacon_cmd->tx.stop_time.life_time = TX_CMD_LIFE_TIME_INFINITE;
- mask = (1ULL << (IWL_RATE_MAX_WINDOW - 1));
- for (j = 0; j < IWL_RATE_MAX_WINDOW; j++, mask >>= 1)
-- buf[count++] =
-- (rs_priv->lq_info[active].win[i].data & mask)
-+ buf[cnt++] =
-+ (lq_sta->lq_info[active].win[i].data & mask)
- ? '1' : '0';
+- frame_size = iwl_fill_beacon_frame(priv,
++ frame_size = iwl4965_fill_beacon_frame(priv,
+ tx_beacon_cmd->frame,
+- BROADCAST_ADDR,
++ iwl4965_broadcast_addr,
+ sizeof(frame->u) - sizeof(*tx_beacon_cmd));
-- samples += rs_priv->lq_info[active].win[i].counter;
-- good += rs_priv->lq_info[active].win[i].success_counter;
-- success += rs_priv->lq_info[active].win[i].success_counter *
-- iwl_rates[i].ieee;
-+ samples += lq_sta->lq_info[active].win[i].counter;
-+ good += lq_sta->lq_info[active].win[i].success_counter;
-+ success += lq_sta->lq_info[active].win[i].success_counter *
-+ iwl4965_rates[i].ieee;
+ BUG_ON(frame_size > MAX_MPDU_SIZE);
+@@ -2675,53 +2763,59 @@ unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
-- if (rs_priv->lq_info[active].win[i].stamp) {
-+ if (lq_sta->lq_info[active].win[i].stamp) {
- int delta =
- jiffies_to_msecs(now -
-- rs_priv->lq_info[active].win[i].stamp);
-+ lq_sta->lq_info[active].win[i].stamp);
+ if ((rate == IWL_RATE_1M_PLCP) || (rate >= IWL_RATE_2M_PLCP))
+ tx_beacon_cmd->tx.rate_n_flags =
+- iwl_hw_set_rate_n_flags(rate, RATE_MCS_CCK_MSK);
++ iwl4965_hw_set_rate_n_flags(rate, RATE_MCS_CCK_MSK);
+ else
+ tx_beacon_cmd->tx.rate_n_flags =
+- iwl_hw_set_rate_n_flags(rate, 0);
++ iwl4965_hw_set_rate_n_flags(rate, 0);
- if (delta > max_time)
- max_time = delta;
+ tx_beacon_cmd->tx.tx_flags = (TX_CMD_FLG_SEQ_CTL_MSK |
+ TX_CMD_FLG_TSF_MSK | TX_CMD_FLG_STA_RATE_MSK);
+ return (sizeof(*tx_beacon_cmd) + frame_size);
+ }
-- count += sprintf(&buf[count], "%5dms\n", delta);
-+ cnt += sprintf(&buf[cnt], "%5dms\n", delta);
- } else
-- buf[count++] = '\n';
-+ buf[cnt++] = '\n';
+-int iwl_hw_tx_queue_init(struct iwl_priv *priv, struct iwl_tx_queue *txq)
++/*
++ * Tell 4965 where to find the circular buffer of Tx Frame Descriptors for
++ * the given Tx queue, and enable the DMA channel used for that queue.
++ *
++ * 4965 supports up to 16 Tx queues in DRAM, mapped to up to 8 Tx DMA
++ * channels supported in hardware.
++ */
++int iwl4965_hw_tx_queue_init(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
+ {
+ int rc;
+ unsigned long flags;
+ int txq_id = txq->q.id;
-- j = iwl_get_prev_ieee_rate(i);
-+ j = iwl4965_get_prev_ieee_rate(i);
- if (j == i)
- break;
- i = j;
-@@ -2261,37 +2592,38 @@ int iwl_fill_rs_info(struct ieee80211_hw *hw, char *buf, u8 sta_id)
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
- /* Display the average rate of all samples taken.
- *
-- * NOTE: We multiple # of samples by 2 since the IEEE measurement
-- * added from iwl_rates is actually 2X the rate */
-+ * NOTE: We multiply # of samples by 2 since the IEEE measurement
-+ * added from iwl4965_rates is actually 2X the rate */
- if (samples)
-- count += sprintf(&buf[count],
-+ cnt += sprintf(&buf[cnt],
- "\nAverage rate is %3d.%02dMbs over last %4dms\n"
- "%3d%% success (%d good packets over %d tries)\n",
- success / (2 * samples), (success * 5 / samples) % 10,
- max_time, good * 100 / samples, good, samples);
- else
-- count += sprintf(&buf[count], "\nAverage rate: 0Mbs\n");
-- count += sprintf(&buf[count], "\nrate scale type %d anntena %d "
-+ cnt += sprintf(&buf[cnt], "\nAverage rate: 0Mbs\n");
+- iwl_write_restricted(priv, FH_MEM_CBBC_QUEUE(txq_id),
++ /* Circular buffer (TFD queue in DRAM) physical base address */
++ iwl4965_write_direct32(priv, FH_MEM_CBBC_QUEUE(txq_id),
+ txq->q.dma_addr >> 8);
+- iwl_write_restricted(
+
-+ cnt += sprintf(&buf[cnt], "\nrate scale type %d antenna %d "
- "active_search %d rate index %d\n", lq_type, antenna,
-- rs_priv->search_better_tbl, sta->last_txrate);
-+ lq_sta->search_better_tbl, sta->last_txrate);
++ /* Enable DMA channel, using same id as for TFD queue */
++ iwl4965_write_direct32(
+ priv, IWL_FH_TCSR_CHNL_TX_CONFIG_REG(txq_id),
+ IWL_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE |
+ IWL_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE_VAL);
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
- sta_info_put(sta);
-- return count;
-+ return cnt;
+ return 0;
}
--void iwl_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id)
-+void iwl4965_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id)
+-static inline u8 iwl4965_get_dma_hi_address(dma_addr_t addr)
+-{
+- return sizeof(addr) > sizeof(u32) ? (addr >> 16) >> 16 : 0;
+-}
+-
+-int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
++int iwl4965_hw_txq_attach_buf_to_tfd(struct iwl4965_priv *priv, void *ptr,
+ dma_addr_t addr, u16 len)
{
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
+ int index, is_odd;
+- struct iwl_tfd_frame *tfd = ptr;
++ struct iwl4965_tfd_frame *tfd = ptr;
+ u32 num_tbs = IWL_GET_BITS(*tfd, num_tbs);
- priv->lq_mngr.lq_ready = 1;
++ /* Each TFD can point to a maximum 20 Tx buffers */
+ if ((num_tbs >= MAX_NUM_OF_TBS) || (num_tbs < 0)) {
+ IWL_ERROR("Error can not send more than %d chunks\n",
+ MAX_NUM_OF_TBS);
+@@ -2734,7 +2828,7 @@ int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
+ if (!is_odd) {
+ tfd->pa[index].tb1_addr = cpu_to_le32(addr);
+ IWL_SET_BITS(tfd->pa[index], tb1_addr_hi,
+- iwl4965_get_dma_hi_address(addr));
++ iwl_get_dma_hi_address(addr));
+ IWL_SET_BITS(tfd->pa[index], tb1_len, len);
+ } else {
+ IWL_SET_BITS(tfd->pa[index], tb2_addr_lo16,
+@@ -2748,7 +2842,7 @@ int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
+ return 0;
}
--void iwl_rate_control_register(struct ieee80211_hw *hw)
-+void iwl4965_rate_control_register(struct ieee80211_hw *hw)
+-void iwl_hw_card_show_info(struct iwl_priv *priv)
++static void iwl4965_hw_card_show_info(struct iwl4965_priv *priv)
{
- ieee80211_rate_control_register(&rs_ops);
+ u16 hw_version = priv->eeprom.board_revision_4965;
+
+@@ -2763,32 +2857,41 @@ void iwl_hw_card_show_info(struct iwl_priv *priv)
+ #define IWL_TX_CRC_SIZE 4
+ #define IWL_TX_DELIMITER_SIZE 4
+
+-int iwl4965_tx_queue_update_wr_ptr(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq, u16 byte_cnt)
++/**
++ * iwl4965_tx_queue_update_wr_ptr - Set up entry in Tx byte-count array
++ */
++int iwl4965_tx_queue_update_wr_ptr(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq, u16 byte_cnt)
+ {
+ int len;
+ int txq_id = txq->q.id;
+- struct iwl_shared *shared_data = priv->hw_setting.shared_virt;
++ struct iwl4965_shared *shared_data = priv->hw_setting.shared_virt;
+
+ if (txq->need_update == 0)
+ return 0;
+
+ len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE;
+
++ /* Set up byte count within first 256 entries */
+ IWL_SET_BITS16(shared_data->queues_byte_cnt_tbls[txq_id].
+- tfd_offset[txq->q.first_empty], byte_cnt, len);
++ tfd_offset[txq->q.write_ptr], byte_cnt, len);
+
+- if (txq->q.first_empty < IWL4965_MAX_WIN_SIZE)
++ /* If within first 64 entries, duplicate at end */
++ if (txq->q.write_ptr < IWL4965_MAX_WIN_SIZE)
+ IWL_SET_BITS16(shared_data->queues_byte_cnt_tbls[txq_id].
+- tfd_offset[IWL4965_QUEUE_SIZE + txq->q.first_empty],
++ tfd_offset[IWL4965_QUEUE_SIZE + txq->q.write_ptr],
+ byte_cnt, len);
+
+ return 0;
}
--void iwl_rate_control_unregister(struct ieee80211_hw *hw)
-+void iwl4965_rate_control_unregister(struct ieee80211_hw *hw)
+-/* Set up Rx receiver/antenna/chain usage in "staging" RXON image.
+- * This should not be used for scan command ... it puts data in wrong place. */
+-void iwl4965_set_rxon_chain(struct iwl_priv *priv)
++/**
++ * iwl4965_set_rxon_chain - Set up Rx chain usage in "staging" RXON image
++ *
++ * Selects how many and which Rx receivers/antennas/chains to use.
++ * This should not be used for scan command ... it puts data in wrong place.
++ */
++void iwl4965_set_rxon_chain(struct iwl4965_priv *priv)
{
- ieee80211_rate_control_unregister(&rs_ops);
+ u8 is_single = is_single_stream(priv);
+ u8 idle_state, rx_state;
+@@ -2819,19 +2922,19 @@ void iwl4965_set_rxon_chain(struct iwl_priv *priv)
+ IWL_DEBUG_ASSOC("rx chain %X\n", priv->staging_rxon.rx_chain);
}
-diff --git a/drivers/net/wireless/iwlwifi/iwl-4965-rs.h b/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
-index c6325f7..55f7073 100644
---- a/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
-+++ b/drivers/net/wireless/iwlwifi/iwl-4965-rs.h
-@@ -29,11 +29,11 @@
- #include "iwl-4965.h"
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ /*
+ get the traffic load value for tid
+ */
+-static u32 iwl4965_tl_get_load(struct iwl_priv *priv, u8 tid)
++static u32 iwl4965_tl_get_load(struct iwl4965_priv *priv, u8 tid)
+ {
+ u32 load = 0;
+ u32 current_time = jiffies_to_msecs(jiffies);
+ u32 time_diff;
+ s32 index;
+ unsigned long flags;
+- struct iwl_traffic_load *tid_ptr = NULL;
++ struct iwl4965_traffic_load *tid_ptr = NULL;
--struct iwl_rate_info {
-- u8 plcp;
-- u8 plcp_siso;
-- u8 plcp_mimo;
-- u8 ieee;
-+struct iwl4965_rate_info {
-+ u8 plcp; /* uCode API: IWL_RATE_6M_PLCP, etc. */
-+ u8 plcp_siso; /* uCode API: IWL_RATE_SISO_6M_PLCP, etc. */
-+ u8 plcp_mimo; /* uCode API: IWL_RATE_MIMO_6M_PLCP, etc. */
-+ u8 ieee; /* MAC header: IWL_RATE_6M_IEEE, etc. */
- u8 prev_ieee; /* previous rate in IEEE speeds */
- u8 next_ieee; /* next rate in IEEE speeds */
- u8 prev_rs; /* previous rate used in rs algo */
-@@ -42,6 +42,10 @@ struct iwl_rate_info {
- u8 next_rs_tgg; /* next rate used in TGG rs algo */
+ if (tid >= TID_MAX_LOAD_COUNT)
+ return 0;
+@@ -2872,13 +2975,13 @@ static u32 iwl4965_tl_get_load(struct iwl_priv *priv, u8 tid)
+ increment traffic load value for tid and also remove
+ any old values if passed the certian time period
+ */
+-static void iwl4965_tl_add_packet(struct iwl_priv *priv, u8 tid)
++static void iwl4965_tl_add_packet(struct iwl4965_priv *priv, u8 tid)
+ {
+ u32 current_time = jiffies_to_msecs(jiffies);
+ u32 time_diff;
+ s32 index;
+ unsigned long flags;
+- struct iwl_traffic_load *tid_ptr = NULL;
++ struct iwl4965_traffic_load *tid_ptr = NULL;
+
+ if (tid >= TID_MAX_LOAD_COUNT)
+ return;
+@@ -2935,14 +3038,19 @@ enum HT_STATUS {
+ BA_STATUS_ACTIVE,
};
-+/*
-+ * These serve as indexes into
-+ * struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT];
+-static u8 iwl4964_tl_ba_avail(struct iwl_priv *priv)
++/**
++ * iwl4964_tl_ba_avail - Find out if an unused aggregation queue is available
+ */
- enum {
- IWL_RATE_1M_INDEX = 0,
- IWL_RATE_2M_INDEX,
-@@ -69,20 +73,21 @@ enum {
- };
++static u8 iwl4964_tl_ba_avail(struct iwl4965_priv *priv)
+ {
+ int i;
+- struct iwl_lq_mngr *lq;
++ struct iwl4965_lq_mngr *lq;
+ u8 count = 0;
+ u16 msk;
- /* #define vs. enum to keep from defaulting to 'large integer' */
--#define IWL_RATE_6M_MASK (1<<IWL_RATE_6M_INDEX)
--#define IWL_RATE_9M_MASK (1<<IWL_RATE_9M_INDEX)
--#define IWL_RATE_12M_MASK (1<<IWL_RATE_12M_INDEX)
--#define IWL_RATE_18M_MASK (1<<IWL_RATE_18M_INDEX)
--#define IWL_RATE_24M_MASK (1<<IWL_RATE_24M_INDEX)
--#define IWL_RATE_36M_MASK (1<<IWL_RATE_36M_INDEX)
--#define IWL_RATE_48M_MASK (1<<IWL_RATE_48M_INDEX)
--#define IWL_RATE_54M_MASK (1<<IWL_RATE_54M_INDEX)
--#define IWL_RATE_60M_MASK (1<<IWL_RATE_60M_INDEX)
--#define IWL_RATE_1M_MASK (1<<IWL_RATE_1M_INDEX)
--#define IWL_RATE_2M_MASK (1<<IWL_RATE_2M_INDEX)
--#define IWL_RATE_5M_MASK (1<<IWL_RATE_5M_INDEX)
--#define IWL_RATE_11M_MASK (1<<IWL_RATE_11M_INDEX)
--
-+#define IWL_RATE_6M_MASK (1 << IWL_RATE_6M_INDEX)
-+#define IWL_RATE_9M_MASK (1 << IWL_RATE_9M_INDEX)
-+#define IWL_RATE_12M_MASK (1 << IWL_RATE_12M_INDEX)
-+#define IWL_RATE_18M_MASK (1 << IWL_RATE_18M_INDEX)
-+#define IWL_RATE_24M_MASK (1 << IWL_RATE_24M_INDEX)
-+#define IWL_RATE_36M_MASK (1 << IWL_RATE_36M_INDEX)
-+#define IWL_RATE_48M_MASK (1 << IWL_RATE_48M_INDEX)
-+#define IWL_RATE_54M_MASK (1 << IWL_RATE_54M_INDEX)
-+#define IWL_RATE_60M_MASK (1 << IWL_RATE_60M_INDEX)
-+#define IWL_RATE_1M_MASK (1 << IWL_RATE_1M_INDEX)
-+#define IWL_RATE_2M_MASK (1 << IWL_RATE_2M_INDEX)
-+#define IWL_RATE_5M_MASK (1 << IWL_RATE_5M_INDEX)
-+#define IWL_RATE_11M_MASK (1 << IWL_RATE_11M_INDEX)
+- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
++ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
+
-+/* 4965 uCode API values for legacy bit rates, both OFDM and CCK */
- enum {
- IWL_RATE_6M_PLCP = 13,
- IWL_RATE_9M_PLCP = 15,
-@@ -99,7 +104,7 @@ enum {
- IWL_RATE_11M_PLCP = 110,
- };
++ /* Find out how many agg queues are in use */
+ for (i = 0; i < TID_MAX_LOAD_COUNT ; i++) {
+ msk = 1 << i;
+ if ((lq->agg_ctrl.granted_ba & msk) ||
+@@ -2956,10 +3064,10 @@ static u8 iwl4964_tl_ba_avail(struct iwl_priv *priv)
+ return 0;
+ }
--/* OFDM HT rate plcp */
-+/* 4965 uCode API values for OFDM high-throughput (HT) bit rates */
- enum {
- IWL_RATE_SISO_6M_PLCP = 0,
- IWL_RATE_SISO_12M_PLCP = 1,
-@@ -121,6 +126,7 @@ enum {
- IWL_RATE_MIMO_INVM_PLCP = IWL_RATE_SISO_INVM_PLCP,
- };
+-static void iwl4965_ba_status(struct iwl_priv *priv,
++static void iwl4965_ba_status(struct iwl4965_priv *priv,
+ u8 tid, enum HT_STATUS status);
-+/* MAC header values for bit rates */
- enum {
- IWL_RATE_6M_IEEE = 12,
- IWL_RATE_9M_IEEE = 18,
-@@ -163,20 +169,15 @@ enum {
- (IWL_OFDM_BASIC_RATES_MASK | \
- IWL_CCK_BASIC_RATES_MASK)
+-static int iwl4965_perform_addba(struct iwl_priv *priv, u8 tid, u32 length,
++static int iwl4965_perform_addba(struct iwl4965_priv *priv, u8 tid, u32 length,
+ u32 ba_timeout)
+ {
+ int rc;
+@@ -2971,7 +3079,7 @@ static int iwl4965_perform_addba(struct iwl_priv *priv, u8 tid, u32 length,
+ return rc;
+ }
--#define IWL_RATES_MASK ((1<<IWL_RATE_COUNT)-1)
-+#define IWL_RATES_MASK ((1 << IWL_RATE_COUNT) - 1)
+-static int iwl4965_perform_delba(struct iwl_priv *priv, u8 tid)
++static int iwl4965_perform_delba(struct iwl4965_priv *priv, u8 tid)
+ {
+ int rc;
- #define IWL_INVALID_VALUE -1
+@@ -2982,8 +3090,8 @@ static int iwl4965_perform_delba(struct iwl_priv *priv, u8 tid)
+ return rc;
+ }
- #define IWL_MIN_RSSI_VAL -100
- #define IWL_MAX_RSSI_VAL 0
+-static void iwl4965_turn_on_agg_for_tid(struct iwl_priv *priv,
+- struct iwl_lq_mngr *lq,
++static void iwl4965_turn_on_agg_for_tid(struct iwl4965_priv *priv,
++ struct iwl4965_lq_mngr *lq,
+ u8 auto_agg, u8 tid)
+ {
+ u32 tid_msk = (1 << tid);
+@@ -3030,12 +3138,12 @@ static void iwl4965_turn_on_agg_for_tid(struct iwl_priv *priv,
+ spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
+ }
--#define IWL_LEGACY_SWITCH_ANTENNA 0
--#define IWL_LEGACY_SWITCH_SISO 1
--#define IWL_LEGACY_SWITCH_MIMO 2
--
--#define IWL_RS_GOOD_RATIO 12800
--
--#define IWL_ACTION_LIMIT 3
-+/* These values specify how many Tx frame attempts before
-+ * searching for a new modulation mode */
- #define IWL_LEGACY_FAILURE_LIMIT 160
- #define IWL_LEGACY_SUCCESS_LIMIT 480
- #define IWL_LEGACY_TABLE_COUNT 160
-@@ -185,82 +186,104 @@ enum {
- #define IWL_NONE_LEGACY_SUCCESS_LIMIT 4500
- #define IWL_NONE_LEGACY_TABLE_COUNT 1500
+-static void iwl4965_turn_on_agg(struct iwl_priv *priv, u8 tid)
++static void iwl4965_turn_on_agg(struct iwl4965_priv *priv, u8 tid)
+ {
+- struct iwl_lq_mngr *lq;
++ struct iwl4965_lq_mngr *lq;
+ unsigned long flags;
--#define IWL_RATE_SCALE_SWITCH (10880)
-+/* Success ratio (ACKed / attempted tx frames) values (perfect is 128 * 100) */
-+#define IWL_RS_GOOD_RATIO 12800 /* 100% */
-+#define IWL_RATE_SCALE_SWITCH 10880 /* 85% */
-+#define IWL_RATE_HIGH_TH 10880 /* 85% */
-+#define IWL_RATE_INCREASE_TH 8960 /* 70% */
-+#define IWL_RATE_DECREASE_TH 1920 /* 15% */
+- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
++ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-+/* possible actions when in legacy mode */
-+#define IWL_LEGACY_SWITCH_ANTENNA 0
-+#define IWL_LEGACY_SWITCH_SISO 1
-+#define IWL_LEGACY_SWITCH_MIMO 2
-+
-+/* possible actions when in siso mode */
- #define IWL_SISO_SWITCH_ANTENNA 0
- #define IWL_SISO_SWITCH_MIMO 1
- #define IWL_SISO_SWITCH_GI 2
+ if ((tid < TID_MAX_LOAD_COUNT))
+ iwl4965_turn_on_agg_for_tid(priv, lq, lq->agg_ctrl.auto_agg,
+@@ -3055,13 +3163,13 @@ static void iwl4965_turn_on_agg(struct iwl_priv *priv, u8 tid)
-+/* possible actions when in mimo mode */
- #define IWL_MIMO_SWITCH_ANTENNA_A 0
- #define IWL_MIMO_SWITCH_ANTENNA_B 1
- #define IWL_MIMO_SWITCH_GI 2
+ }
--#define LQ_SIZE 2
-+#define IWL_ACTION_LIMIT 3 /* # possible actions */
-+
-+#define LQ_SIZE 2 /* 2 mode tables: "Active" and "Search" */
+-void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid)
++void iwl4965_turn_off_agg(struct iwl4965_priv *priv, u8 tid)
+ {
+ u32 tid_msk;
+- struct iwl_lq_mngr *lq;
++ struct iwl4965_lq_mngr *lq;
+ unsigned long flags;
--extern const struct iwl_rate_info iwl_rates[IWL_RATE_COUNT];
-+extern const struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT];
+- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
++ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
--enum iwl_table_type {
-+enum iwl4965_table_type {
- LQ_NONE,
-- LQ_G,
-+ LQ_G, /* legacy types */
- LQ_A,
-- LQ_SISO,
-+ LQ_SISO, /* high-throughput types */
- LQ_MIMO,
- LQ_MAX,
- };
+ if ((tid < TID_MAX_LOAD_COUNT)) {
+ tid_msk = 1 << tid;
+@@ -3084,14 +3192,17 @@ void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid)
+ }
+ }
--enum iwl_antenna_type {
-+#define is_legacy(tbl) (((tbl) == LQ_G) || ((tbl) == LQ_A))
-+#define is_siso(tbl) (((tbl) == LQ_SISO))
-+#define is_mimo(tbl) (((tbl) == LQ_MIMO))
-+#define is_Ht(tbl) (is_siso(tbl) || is_mimo(tbl))
-+#define is_a_band(tbl) (((tbl) == LQ_A))
-+#define is_g_and(tbl) (((tbl) == LQ_G))
-+
-+/* 4965 has 2 antennas/chains for Tx (but 3 for Rx) */
-+enum iwl4965_antenna_type {
- ANT_NONE,
- ANT_MAIN,
- ANT_AUX,
- ANT_BOTH,
- };
+-static void iwl4965_ba_status(struct iwl_priv *priv,
++/**
++ * iwl4965_ba_status - Update driver's link quality mgr with tid's HT status
++ */
++static void iwl4965_ba_status(struct iwl4965_priv *priv,
+ u8 tid, enum HT_STATUS status)
+ {
+- struct iwl_lq_mngr *lq;
++ struct iwl4965_lq_mngr *lq;
+ u32 tid_msk = (1 << tid);
+ unsigned long flags;
--static inline u8 iwl_get_prev_ieee_rate(u8 rate_index)
-+static inline u8 iwl4965_get_prev_ieee_rate(u8 rate_index)
+- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
++ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
+
+ if ((tid >= TID_MAX_LOAD_COUNT))
+ goto out;
+@@ -3124,14 +3235,14 @@ static void iwl4965_ba_status(struct iwl_priv *priv,
+
+ static void iwl4965_bg_agg_work(struct work_struct *work)
{
-- u8 rate = iwl_rates[rate_index].prev_ieee;
-+ u8 rate = iwl4965_rates[rate_index].prev_ieee;
+- struct iwl_priv *priv = container_of(work, struct iwl_priv,
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
+ agg_work);
- if (rate == IWL_RATE_INVALID)
- rate = rate_index;
- return rate;
- }
+ u32 tid;
+ u32 retry_tid;
+ u32 tid_msk;
+ unsigned long flags;
+- struct iwl_lq_mngr *lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
++ struct iwl4965_lq_mngr *lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
--extern int iwl_rate_index_from_plcp(int plcp);
-+extern int iwl4965_rate_index_from_plcp(int plcp);
+ spin_lock_irqsave(&priv->lq_mngr.lock, flags);
+ retry_tid = lq->agg_ctrl.tid_retry;
+@@ -3154,90 +3265,13 @@ static void iwl4965_bg_agg_work(struct work_struct *work)
+ spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
+ return;
+ }
+-#endif /*CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
- /**
-- * iwl_fill_rs_info - Fill an output text buffer with the rate representation
-+ * iwl4965_fill_rs_info - Fill an output text buffer with the rate representation
- *
- * NOTE: This is provided as a quick mechanism for a user to visualize
-- * the performance of the rate control alogirthm and is not meant to be
-+ * the performance of the rate control algorithm and is not meant to be
- * parsed software.
- */
--extern int iwl_fill_rs_info(struct ieee80211_hw *, char *buf, u8 sta_id);
-+extern int iwl4965_fill_rs_info(struct ieee80211_hw *, char *buf, u8 sta_id);
+-int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
+- u8 sta_id, dma_addr_t txcmd_phys,
+- struct ieee80211_hdr *hdr, u8 hdr_len,
+- struct ieee80211_tx_control *ctrl, void *sta_in)
++/* TODO: move this functionality to rate scaling */
++void iwl4965_tl_get_stats(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *hdr)
+ {
+- struct iwl_tx_cmd cmd;
+- struct iwl_tx_cmd *tx = (struct iwl_tx_cmd *)&out_cmd->cmd.payload[0];
+- dma_addr_t scratch_phys;
+- u8 unicast = 0;
+- u8 is_data = 1;
+- u16 fc;
+- u16 rate_flags;
+- int rate_index = min(ctrl->tx_rate & 0xffff, IWL_RATE_COUNT - 1);
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- __le16 *qc;
+-#endif /*CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
+-
+- unicast = !is_multicast_ether_addr(hdr->addr1);
+-
+- fc = le16_to_cpu(hdr->frame_control);
+- if ((fc & IEEE80211_FCTL_FTYPE) != IEEE80211_FTYPE_DATA)
+- is_data = 0;
+-
+- memcpy(&cmd, &(out_cmd->cmd.tx), sizeof(struct iwl_tx_cmd));
+- memset(tx, 0, sizeof(struct iwl_tx_cmd));
+- memcpy(tx->hdr, hdr, hdr_len);
+-
+- tx->len = cmd.len;
+- tx->driver_txop = cmd.driver_txop;
+- tx->stop_time.life_time = cmd.stop_time.life_time;
+- tx->tx_flags = cmd.tx_flags;
+- tx->sta_id = cmd.sta_id;
+- tx->tid_tspec = cmd.tid_tspec;
+- tx->timeout.pm_frame_timeout = cmd.timeout.pm_frame_timeout;
+- tx->next_frame_len = cmd.next_frame_len;
+-
+- tx->sec_ctl = cmd.sec_ctl;
+- memcpy(&(tx->key[0]), &(cmd.key[0]), 16);
+- tx->tx_flags = cmd.tx_flags;
+-
+- tx->rts_retry_limit = cmd.rts_retry_limit;
+- tx->data_retry_limit = cmd.data_retry_limit;
+-
+- scratch_phys = txcmd_phys + sizeof(struct iwl_cmd_header) +
+- offsetof(struct iwl_tx_cmd, scratch);
+- tx->dram_lsb_ptr = cpu_to_le32(scratch_phys);
+- tx->dram_msb_ptr = iwl4965_get_dma_hi_address(scratch_phys);
+-
+- /* Hard coded to start at the highest retry fallback position
+- * until the 4965 specific rate control algorithm is tied in */
+- tx->initial_rate_index = LINK_QUAL_MAX_RETRY_NUM - 1;
+-
+- /* Alternate between antenna A and B for successive frames */
+- if (priv->use_ant_b_for_management_frame) {
+- priv->use_ant_b_for_management_frame = 0;
+- rate_flags = RATE_MCS_ANT_B_MSK;
+- } else {
+- priv->use_ant_b_for_management_frame = 1;
+- rate_flags = RATE_MCS_ANT_A_MSK;
+- }
++ __le16 *qc = ieee80211_get_qos_ctrl(hdr);
- /**
-- * iwl_rate_scale_init - Initialize the rate scale table based on assoc info
-+ * iwl4965_rate_scale_init - Initialize the rate scale table based on assoc info
- *
-- * The specific througput table used is based on the type of network
-+ * The specific throughput table used is based on the type of network
- * the associated with, including A, B, G, and G w/ TGG protection
- */
--extern void iwl_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id);
-+extern void iwl4965_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id);
+- if (!unicast || !is_data) {
+- if ((rate_index >= IWL_FIRST_CCK_RATE) &&
+- (rate_index <= IWL_LAST_CCK_RATE))
+- rate_flags |= RATE_MCS_CCK_MSK;
+- } else {
+- tx->initial_rate_index = 0;
+- tx->tx_flags |= TX_CMD_FLG_STA_RATE_MSK;
+- }
+-
+- tx->rate_n_flags = iwl_hw_set_rate_n_flags(iwl_rates[rate_index].plcp,
+- rate_flags);
+-
+- if (ieee80211_is_back_request(fc))
+- tx->tx_flags |= TX_CMD_FLG_ACK_MSK |
+- TX_CMD_FLG_IMM_BA_RSP_MASK;
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- qc = ieee80211_get_qos_ctrl(hdr);
+ if (qc &&
+ (priv->iw_mode != IEEE80211_IF_TYPE_IBSS)) {
+ u8 tid = 0;
+@@ -3255,11 +3289,11 @@ int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
+ spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
+ schedule_work(&priv->agg_work);
+ }
+-#endif
+-#endif
+- return 0;
+ }
++#endif /*CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
++
/**
-- * iwl_rate_control_register - Register the rate control algorithm callbacks
-+ * iwl4965_rate_control_register - Register the rate control algorithm callbacks
+ * sign_extend - Sign extend a value using specified bit as sign-bit
*
- * Since the rate control algorithm is hardware specific, there is no need
- * or reason to place it as a stand alone module. The driver can call
-- * iwl_rate_control_register in order to register the rate control callbacks
-+ * iwl4965_rate_control_register in order to register the rate control callbacks
- * with the mac80211 subsystem. This should be performed prior to calling
- * ieee80211_register_hw
+@@ -3282,7 +3316,7 @@ static s32 sign_extend(u32 oper, int index)
*
+ * A return of <0 indicates bogus data in the statistics
*/
--extern void iwl_rate_control_register(struct ieee80211_hw *hw);
-+extern void iwl4965_rate_control_register(struct ieee80211_hw *hw);
+-int iwl4965_get_temperature(const struct iwl_priv *priv)
++int iwl4965_get_temperature(const struct iwl4965_priv *priv)
+ {
+ s32 temperature;
+ s32 vt;
+@@ -3305,11 +3339,12 @@ int iwl4965_get_temperature(const struct iwl_priv *priv)
+ }
- /**
-- * iwl_rate_control_unregister - Unregister the rate control callbacks
-+ * iwl4965_rate_control_unregister - Unregister the rate control callbacks
- *
- * This should be called after calling ieee80211_unregister_hw, but before
- * the driver is unloaded.
+ /*
+- * Temperature is only 23 bits so sign extend out to 32
++ * Temperature is only 23 bits, so sign extend out to 32.
+ *
+ * NOTE If we haven't received a statistics notification yet
+ * with an updated temperature, use R4 provided to us in the
+- * ALIVE response. */
++ * "initialize" ALIVE response.
++ */
+ if (!test_bit(STATUS_TEMPERATURE, &priv->status))
+ vt = sign_extend(R4, 23);
+ else
+@@ -3349,7 +3384,7 @@ int iwl4965_get_temperature(const struct iwl_priv *priv)
+ * Assumes caller will replace priv->last_temperature once calibration
+ * executed.
*/
--extern void iwl_rate_control_unregister(struct ieee80211_hw *hw);
-+extern void iwl4965_rate_control_unregister(struct ieee80211_hw *hw);
+-static int iwl4965_is_temp_calib_needed(struct iwl_priv *priv)
++static int iwl4965_is_temp_calib_needed(struct iwl4965_priv *priv)
+ {
+ int temp_diff;
+@@ -3382,7 +3417,7 @@ static int iwl4965_is_temp_calib_needed(struct iwl_priv *priv)
+ /* Calculate noise level, based on measurements during network silence just
+ * before arriving beacon. This measurement can be done only if we know
+ * exactly when to expect beacons, therefore only when we're associated. */
+-static void iwl4965_rx_calc_noise(struct iwl_priv *priv)
++static void iwl4965_rx_calc_noise(struct iwl4965_priv *priv)
+ {
+ struct statistics_rx_non_phy *rx_info
+ = &(priv->statistics.rx.general);
+@@ -3419,9 +3454,9 @@ static void iwl4965_rx_calc_noise(struct iwl_priv *priv)
+ priv->last_rx_noise);
+ }
+
+-void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
++void iwl4965_hw_rx_statistics(struct iwl4965_priv *priv, struct iwl4965_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
+ int change;
+ s32 temp;
+
+@@ -3448,7 +3483,7 @@ void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
+ if (unlikely(!test_bit(STATUS_SCANNING, &priv->status)) &&
+ (pkt->hdr.cmd == STATISTICS_NOTIFICATION)) {
+ iwl4965_rx_calc_noise(priv);
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ queue_work(priv->workqueue, &priv->sensitivity_work);
#endif
-diff --git a/drivers/net/wireless/iwlwifi/iwl-4965.c b/drivers/net/wireless/iwlwifi/iwl-4965.c
-index 891f90d..04db34b 100644
---- a/drivers/net/wireless/iwlwifi/iwl-4965.c
-+++ b/drivers/net/wireless/iwlwifi/iwl-4965.c
-@@ -36,13 +36,13 @@
- #include <linux/wireless.h>
- #include <net/mac80211.h>
- #include <linux/etherdevice.h>
-+#include <asm/unaligned.h>
+ }
+@@ -3483,12 +3518,117 @@ void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
+ queue_work(priv->workqueue, &priv->txpower_work);
+ }
--#define IWL 4965
+-static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
++static void iwl4965_add_radiotap(struct iwl4965_priv *priv,
++ struct sk_buff *skb,
++ struct iwl4965_rx_phy_res *rx_start,
++ struct ieee80211_rx_status *stats,
++ u32 ampdu_status)
++{
++ s8 signal = stats->ssi;
++ s8 noise = 0;
++ int rate = stats->rate;
++ u64 tsf = stats->mactime;
++ __le16 phy_flags_hw = rx_start->phy_flags;
++ struct iwl4965_rt_rx_hdr {
++ struct ieee80211_radiotap_header rt_hdr;
++ __le64 rt_tsf; /* TSF */
++ u8 rt_flags; /* radiotap packet flags */
++ u8 rt_rate; /* rate in 500kb/s */
++ __le16 rt_channelMHz; /* channel in MHz */
++ __le16 rt_chbitmask; /* channel bitfield */
++ s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
++ s8 rt_dbmnoise;
++ u8 rt_antenna; /* antenna number */
++ } __attribute__ ((packed)) *iwl4965_rt;
++
++ /* TODO: We won't have enough headroom for HT frames. Fix it later. */
++ if (skb_headroom(skb) < sizeof(*iwl4965_rt)) {
++ if (net_ratelimit())
++ printk(KERN_ERR "not enough headroom [%d] for "
++ "radiotap head [%zd]\n",
++ skb_headroom(skb), sizeof(*iwl4965_rt));
++ return;
++ }
++
++ /* put radiotap header in front of 802.11 header and data */
++ iwl4965_rt = (void *)skb_push(skb, sizeof(*iwl4965_rt));
++
++ /* initialise radiotap header */
++ iwl4965_rt->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
++ iwl4965_rt->rt_hdr.it_pad = 0;
++
++ /* total header + data */
++ put_unaligned(cpu_to_le16(sizeof(*iwl4965_rt)),
++ &iwl4965_rt->rt_hdr.it_len);
++
++ /* Indicate all the fields we add to the radiotap header */
++ put_unaligned(cpu_to_le32((1 << IEEE80211_RADIOTAP_TSFT) |
++ (1 << IEEE80211_RADIOTAP_FLAGS) |
++ (1 << IEEE80211_RADIOTAP_RATE) |
++ (1 << IEEE80211_RADIOTAP_CHANNEL) |
++ (1 << IEEE80211_RADIOTAP_DBM_ANTSIGNAL) |
++ (1 << IEEE80211_RADIOTAP_DBM_ANTNOISE) |
++ (1 << IEEE80211_RADIOTAP_ANTENNA)),
++ &iwl4965_rt->rt_hdr.it_present);
++
++ /* Zero the flags, we'll add to them as we go */
++ iwl4965_rt->rt_flags = 0;
++
++ put_unaligned(cpu_to_le64(tsf), &iwl4965_rt->rt_tsf);
++
++ iwl4965_rt->rt_dbmsignal = signal;
++ iwl4965_rt->rt_dbmnoise = noise;
++
++ /* Convert the channel frequency and set the flags */
++ put_unaligned(cpu_to_le16(stats->freq), &iwl4965_rt->rt_channelMHz);
++ if (!(phy_flags_hw & RX_RES_PHY_FLAGS_BAND_24_MSK))
++ put_unaligned(cpu_to_le16(IEEE80211_CHAN_OFDM |
++ IEEE80211_CHAN_5GHZ),
++ &iwl4965_rt->rt_chbitmask);
++ else if (phy_flags_hw & RX_RES_PHY_FLAGS_MOD_CCK_MSK)
++ put_unaligned(cpu_to_le16(IEEE80211_CHAN_CCK |
++ IEEE80211_CHAN_2GHZ),
++ &iwl4965_rt->rt_chbitmask);
++ else /* 802.11g */
++ put_unaligned(cpu_to_le16(IEEE80211_CHAN_OFDM |
++ IEEE80211_CHAN_2GHZ),
++ &iwl4965_rt->rt_chbitmask);
++
++ rate = iwl4965_rate_index_from_plcp(rate);
++ if (rate == -1)
++ iwl4965_rt->rt_rate = 0;
++ else
++ iwl4965_rt->rt_rate = iwl4965_rates[rate].ieee;
++
++ /*
++ * "antenna number"
++ *
++ * It seems that the antenna field in the phy flags value
++ * is actually a bitfield. This is undefined by radiotap,
++ * it wants an actual antenna number but I always get "7"
++ * for most legacy frames I receive indicating that the
++ * same frame was received on all three RX chains.
++ *
++ * I think this field should be removed in favour of a
++ * new 802.11n radiotap field "RX chains" that is defined
++ * as a bitmask.
++ */
++ iwl4965_rt->rt_antenna =
++ le16_to_cpu(phy_flags_hw & RX_RES_PHY_FLAGS_ANTENNA_MSK) >> 4;
++
++ /* set the preamble flag if appropriate */
++ if (phy_flags_hw & RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK)
++ iwl4965_rt->rt_flags |= IEEE80211_RADIOTAP_F_SHORTPRE;
++
++ stats->flag |= RX_FLAG_RADIOTAP;
++}
++
++static void iwl4965_handle_data_packet(struct iwl4965_priv *priv, int is_data,
+ int include_phy,
+- struct iwl_rx_mem_buffer *rxb,
++ struct iwl4965_rx_mem_buffer *rxb,
+ struct ieee80211_rx_status *stats)
+ {
+- struct iwl_rx_packet *pkt = (struct iwl_rx_packet *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
+ struct iwl4965_rx_phy_res *rx_start = (include_phy) ?
+ (struct iwl4965_rx_phy_res *)&(pkt->u.raw[0]) : NULL;
+ struct ieee80211_hdr *hdr;
+@@ -3524,9 +3664,8 @@ static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
+ rx_start->byte_count = amsdu->byte_count;
+ rx_end = (__le32 *) (((u8 *) hdr) + len);
+ }
+- if (len > 2342 || len < 16) {
+- IWL_DEBUG_DROP("byte count out of range [16,2342]"
+- " : %d\n", len);
++ if (len > priv->hw_setting.max_pkt_size || len < 16) {
++ IWL_WARNING("byte count out of range [16,4K] : %d\n", len);
+ return;
+ }
+
+@@ -3544,26 +3683,21 @@ static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
+ return;
+ }
+
+- if (priv->iw_mode == IEEE80211_IF_TYPE_MNTR) {
+- if (iwl_param_hwcrypto)
+- iwl_set_decrypted_flag(priv, rxb->skb,
+- ampdu_status, stats);
+- iwl_handle_data_packet_monitor(priv, rxb, hdr, len, stats, 0);
+- return;
+- }
-
--#include "iwlwifi.h"
- #include "iwl-4965.h"
- #include "iwl-helpers.h"
+ stats->flag = 0;
+ hdr = (struct ieee80211_hdr *)rxb->skb->data;
-+static void iwl4965_hw_card_show_info(struct iwl4965_priv *priv);
+- if (iwl_param_hwcrypto)
+- iwl_set_decrypted_flag(priv, rxb->skb, ampdu_status, stats);
++ if (iwl4965_param_hwcrypto)
++ iwl4965_set_decrypted_flag(priv, rxb->skb, ampdu_status, stats);
+
- #define IWL_DECLARE_RATE_INFO(r, s, ip, in, rp, rn, pp, np) \
- [IWL_RATE_##r##M_INDEX] = { IWL_RATE_##r##M_PLCP, \
- IWL_RATE_SISO_##s##M_PLCP, \
-@@ -63,7 +63,7 @@
- * maps to IWL_RATE_INVALID
- *
- */
--const struct iwl_rate_info iwl_rates[IWL_RATE_COUNT] = {
-+const struct iwl4965_rate_info iwl4965_rates[IWL_RATE_COUNT] = {
- IWL_DECLARE_RATE_INFO(1, INV, INV, 2, INV, 2, INV, 2), /* 1mbps */
- IWL_DECLARE_RATE_INFO(2, INV, 1, 5, 1, 5, 1, 5), /* 2mbps */
- IWL_DECLARE_RATE_INFO(5, INV, 2, 6, 2, 11, 2, 11), /*5.5mbps */
-@@ -85,16 +85,16 @@ static int is_fat_channel(__le32 rxon_flags)
- (rxon_flags & RXON_FLG_CHANNEL_MODE_MIXED_MSK);
++ if (priv->add_radiotap)
++ iwl4965_add_radiotap(priv, rxb->skb, rx_start, stats, ampdu_status);
+
+ ieee80211_rx_irqsafe(priv->hw, rxb->skb, stats);
+ priv->alloc_rxb_skb--;
+ rxb->skb = NULL;
+ #ifdef LED
+ priv->led_packets += len;
+- iwl_setup_activity_timer(priv);
++ iwl4965_setup_activity_timer(priv);
+ #endif
+ }
+
+@@ -3601,7 +3735,7 @@ static int iwl4965_calc_rssi(struct iwl4965_rx_phy_res *rx_resp)
+ return (max_rssi - agc - IWL_RSSI_OFFSET);
}
--static u8 is_single_stream(struct iwl_priv *priv)
-+static u8 is_single_stream(struct iwl4965_priv *priv)
- {
-#ifdef CONFIG_IWLWIFI_HT
-- if (!priv->is_ht_enabled || !priv->current_assoc_ht.is_ht ||
-- (priv->active_rate_ht[1] == 0) ||
+#ifdef CONFIG_IWL4965_HT
-+ if (!priv->current_ht_config.is_ht ||
-+ (priv->current_ht_config.supp_mcs_set[1] == 0) ||
- (priv->ps_mode == IWL_MIMO_PS_STATIC))
- return 1;
- #else
- return 1;
--#endif /*CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT */
+
+ /* Parsed Information Elements */
+ struct ieee802_11_elems {
+@@ -3673,9 +3807,37 @@ static int parse_elems(u8 *start, size_t len, struct ieee802_11_elems *elems)
+
return 0;
}
+-#endif /* CONFIG_IWLWIFI_HT */
-@@ -104,7 +104,7 @@ static u8 is_single_stream(struct iwl_priv *priv)
- * MIMO (dual stream) requires at least 2, but works better with 3.
- * This does not determine *which* chains to use, just how many.
- */
--static int iwl4965_get_rx_chain_counter(struct iwl_priv *priv,
-+static int iwl4965_get_rx_chain_counter(struct iwl4965_priv *priv,
- u8 *idle_state, u8 *rx_state)
+-static void iwl4965_sta_modify_ps_wake(struct iwl_priv *priv, int sta_id)
++void iwl4965_init_ht_hw_capab(struct ieee80211_ht_info *ht_info, int mode)
++{
++ ht_info->cap = 0;
++ memset(ht_info->supp_mcs_set, 0, 16);
++
++ ht_info->ht_supported = 1;
++
++ if (mode == MODE_IEEE80211A) {
++ ht_info->cap |= (u16)IEEE80211_HT_CAP_SUP_WIDTH;
++ ht_info->cap |= (u16)IEEE80211_HT_CAP_SGI_40;
++ ht_info->supp_mcs_set[4] = 0x01;
++ }
++ ht_info->cap |= (u16)IEEE80211_HT_CAP_GRN_FLD;
++ ht_info->cap |= (u16)IEEE80211_HT_CAP_SGI_20;
++ ht_info->cap |= (u16)(IEEE80211_HT_CAP_MIMO_PS &
++ (IWL_MIMO_PS_NONE << 2));
++ if (iwl4965_param_amsdu_size_8K) {
++ printk(KERN_DEBUG "iwl4965 in A-MSDU 8K support mode\n");
++ ht_info->cap |= (u16)IEEE80211_HT_CAP_MAX_AMSDU;
++ }
++
++ ht_info->ampdu_factor = CFG_HT_RX_AMPDU_FACTOR_DEF;
++ ht_info->ampdu_density = CFG_HT_MPDU_DENSITY_DEF;
++
++ ht_info->supp_mcs_set[0] = 0xFF;
++ ht_info->supp_mcs_set[1] = 0xFF;
++}
++#endif /* CONFIG_IWL4965_HT */
++
++static void iwl4965_sta_modify_ps_wake(struct iwl4965_priv *priv, int sta_id)
{
- u8 is_single = is_single_stream(priv);
-@@ -133,32 +133,32 @@ static int iwl4965_get_rx_chain_counter(struct iwl_priv *priv,
- return 0;
+ unsigned long flags;
+
+@@ -3686,13 +3848,13 @@ static void iwl4965_sta_modify_ps_wake(struct iwl_priv *priv, int sta_id)
+ priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
}
--int iwl_hw_rxq_stop(struct iwl_priv *priv)
-+int iwl4965_hw_rxq_stop(struct iwl4965_priv *priv)
+-static void iwl4965_update_ps_mode(struct iwl_priv *priv, u16 ps_bit, u8 *addr)
++static void iwl4965_update_ps_mode(struct iwl4965_priv *priv, u16 ps_bit, u8 *addr)
{
- int rc;
- unsigned long flags;
+ /* FIXME: need locking over ps_status ??? */
+- u8 sta_id = iwl_hw_find_station(priv, addr);
++ u8 sta_id = iwl4965_hw_find_station(priv, addr);
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
+ if (sta_id != IWL_INVALID_STATION) {
+ u8 sta_awake = priv->stations[sta_id].
+@@ -3707,12 +3869,14 @@ static void iwl4965_update_ps_mode(struct iwl_priv *priv, u16 ps_bit, u8 *addr)
}
+ }
-- /* stop HW */
-- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
-- rc = iwl_poll_restricted_bit(priv, FH_MEM_RSSR_RX_STATUS_REG,
-+ /* stop Rx DMA */
-+ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
-+ rc = iwl4965_poll_direct_bit(priv, FH_MEM_RSSR_RX_STATUS_REG,
- (1 << 24), 1000);
- if (rc < 0)
- IWL_ERROR("Can't stop Rx DMA.\n");
++#define IWL_DELAY_NEXT_SCAN_AFTER_ASSOC (HZ*6)
++
+ /* Called for REPLY_4965_RX (legacy ABG frames), or
+ * REPLY_RX_MPDU_CMD (HT high-throughput N frames). */
+-static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_rx(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
+ /* Use phy data (Rx signal strength, etc.) contained within
+ * this rx packet for legacy frames,
+ * or phy data cached from REPLY_RX_PHY_CMD for HT frames. */
+@@ -3731,11 +3895,8 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ (rx_start->phy_flags & RX_RES_PHY_FLAGS_BAND_24_MSK) ?
+ MODE_IEEE80211G : MODE_IEEE80211A,
+ .antenna = 0,
+- .rate = iwl_hw_get_rate(rx_start->rate_n_flags),
++ .rate = iwl4965_hw_get_rate(rx_start->rate_n_flags),
+ .flag = 0,
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- .ordered = 0
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+ };
+ u8 network_packet;
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
+@@ -3794,32 +3955,32 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ * which are gathered only when associated, and indicate noise
+ * only for the associated network channel ...
+ * Ignore these noise values while scanning (other channels) */
+- if (iwl_is_associated(priv) &&
++ if (iwl4965_is_associated(priv) &&
+ !test_bit(STATUS_SCANNING, &priv->status)) {
+ stats.noise = priv->last_rx_noise;
+- stats.signal = iwl_calc_sig_qual(stats.ssi, stats.noise);
++ stats.signal = iwl4965_calc_sig_qual(stats.ssi, stats.noise);
+ } else {
+ stats.noise = IWL_NOISE_MEAS_NOT_AVAILABLE;
+- stats.signal = iwl_calc_sig_qual(stats.ssi, 0);
++ stats.signal = iwl4965_calc_sig_qual(stats.ssi, 0);
+ }
- return 0;
- }
+ /* Reset beacon noise level if not associated. */
+- if (!iwl_is_associated(priv))
++ if (!iwl4965_is_associated(priv))
+ priv->last_rx_noise = IWL_NOISE_MEAS_NOT_AVAILABLE;
--u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *addr)
-+u8 iwl4965_hw_find_station(struct iwl4965_priv *priv, const u8 *addr)
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- /* TODO: Parts of iwl_report_frame are broken for 4965 */
+- if (iwl_debug_level & (IWL_DL_RX))
++#ifdef CONFIG_IWL4965_DEBUG
++ /* TODO: Parts of iwl4965_report_frame are broken for 4965 */
++ if (iwl4965_debug_level & (IWL_DL_RX))
+ /* Set "1" to report good data frames in groups of 100 */
+- iwl_report_frame(priv, pkt, header, 1);
++ iwl4965_report_frame(priv, pkt, header, 1);
+
+- if (iwl_debug_level & (IWL_DL_RX | IWL_DL_STATS))
++ if (iwl4965_debug_level & (IWL_DL_RX | IWL_DL_STATS))
+ IWL_DEBUG_RX("Rssi %d, noise %d, qual %d, TSF %lu\n",
+ stats.ssi, stats.noise, stats.signal,
+ (long unsigned int)le64_to_cpu(rx_start->timestamp));
+ #endif
+
+- network_packet = iwl_is_network_packet(priv, header);
++ network_packet = iwl4965_is_network_packet(priv, header);
+ if (network_packet) {
+ priv->last_rx_rssi = stats.ssi;
+ priv->last_beacon_time = priv->ucode_beacon_time;
+@@ -3863,27 +4024,31 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ break;
+
+ /*
+- * TODO: There is no callback function from upper
+- * stack to inform us when associated status. this
+- * work around to sniff assoc_resp management frame
+- * and finish the association process.
++ * TODO: Use the new callback function from
++ * mac80211 instead of sniffing these packets.
+ */
+ case IEEE80211_STYPE_ASSOC_RESP:
+ case IEEE80211_STYPE_REASSOC_RESP:
+ if (network_packet) {
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
+ u8 *pos = NULL;
+ struct ieee802_11_elems elems;
+-#endif /*CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT */
+ struct ieee80211_mgmt *mgnt =
+ (struct ieee80211_mgmt *)header;
+
++ /* We have just associated, give some
++ * time for the 4-way handshake if
++ * any. Don't start scan too early. */
++ priv->next_scan_jiffies = jiffies +
++ IWL_DELAY_NEXT_SCAN_AFTER_ASSOC;
++
+ priv->assoc_id = (~((1 << 15) | (1 << 14))
+ & le16_to_cpu(mgnt->u.assoc_resp.aid));
+ priv->assoc_capability =
+ le16_to_cpu(
+ mgnt->u.assoc_resp.capab_info);
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
+ pos = mgnt->u.assoc_resp.variable;
+ if (!parse_elems(pos,
+ len - (pos - (u8 *) mgnt),
+@@ -3892,7 +4057,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ elems.ht_cap_param)
+ break;
+ }
+-#endif /*CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT */
+ /* assoc_id is 0 no association */
+ if (!priv->assoc_id)
+ break;
+@@ -3907,7 +4072,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+
+ case IEEE80211_STYPE_PROBE_REQ:
+ if ((priv->iw_mode == IEEE80211_IF_TYPE_IBSS) &&
+- !iwl_is_associated(priv)) {
++ !iwl4965_is_associated(priv)) {
+ DECLARE_MAC_BUF(mac1);
+ DECLARE_MAC_BUF(mac2);
+ DECLARE_MAC_BUF(mac3);
+@@ -3924,7 +4089,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ break;
+
+ case IEEE80211_FTYPE_CTL:
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
+ switch (fc & IEEE80211_FCTL_STYPE) {
+ case IEEE80211_STYPE_BACK_REQ:
+ IWL_DEBUG_HT("IEEE80211_STYPE_BACK_REQ arrived\n");
+@@ -3935,7 +4100,6 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ break;
+ }
+ #endif
+-
+ break;
+
+ case IEEE80211_FTYPE_DATA: {
+@@ -3953,7 +4117,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+ print_mac(mac1, header->addr1),
+ print_mac(mac2, header->addr2),
+ print_mac(mac3, header->addr3));
+- else if (unlikely(is_duplicate_packet(priv, header)))
++ else if (unlikely(iwl4965_is_duplicate_packet(priv, header)))
+ IWL_DEBUG_DROP("Dropping (dup): %s, %s, %s\n",
+ print_mac(mac1, header->addr1),
+ print_mac(mac2, header->addr2),
+@@ -3971,22 +4135,22 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
+
+ /* Cache phy data (Rx signal strength, etc) for HT frame (REPLY_RX_PHY_CMD).
+ * This will be used later in iwl4965_rx_reply_rx() for REPLY_RX_MPDU_CMD. */
+-static void iwl4965_rx_reply_rx_phy(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_rx_phy(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- int i;
- int start = 0;
-@@ -190,104 +190,114 @@ u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *addr)
- return ret;
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
+ priv->last_phy_res[0] = 1;
+ memcpy(&priv->last_phy_res[1], &(pkt->u.raw[0]),
+ sizeof(struct iwl4965_rx_phy_res));
}
--static int iwl4965_nic_set_pwr_src(struct iwl_priv *priv, int pwr_max)
-+static int iwl4965_nic_set_pwr_src(struct iwl4965_priv *priv, int pwr_max)
+-static void iwl4965_rx_missed_beacon_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl4965_rx_missed_beacon_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
+
{
-- int rc = 0;
-+ int ret;
- unsigned long flags;
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_missed_beacon_notif *missed_beacon;
++#ifdef CONFIG_IWL4965_SENSITIVITY
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_missed_beacon_notif *missed_beacon;
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-- if (rc) {
-+ ret = iwl4965_grab_nic_access(priv);
-+ if (ret) {
- spin_unlock_irqrestore(&priv->lock, flags);
-- return rc;
-+ return ret;
+ missed_beacon = &pkt->u.missed_beacon;
+ if (le32_to_cpu(missed_beacon->consequtive_missed_beacons) > 5) {
+@@ -3999,13 +4163,18 @@ static void iwl4965_rx_missed_beacon_notif(struct iwl_priv *priv,
+ if (unlikely(!test_bit(STATUS_SCANNING, &priv->status)))
+ queue_work(priv->workqueue, &priv->sensitivity_work);
}
+-#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
++#endif /*CONFIG_IWL4965_SENSITIVITY*/
+ }
- if (!pwr_max) {
- u32 val;
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
-- rc = pci_read_config_dword(priv->pci_dev, PCI_POWER_SOURCE,
-+ ret = pci_read_config_dword(priv->pci_dev, PCI_POWER_SOURCE,
- &val);
+-static void iwl4965_set_tx_status(struct iwl_priv *priv, int txq_id, int idx,
++/**
++ * iwl4965_set_tx_status - Update driver's record of one Tx frame's status
++ *
++ * This will get sent to mac80211.
++ */
++static void iwl4965_set_tx_status(struct iwl4965_priv *priv, int txq_id, int idx,
+ u32 status, u32 retry_count, u32 rate)
+ {
+ struct ieee80211_tx_status *tx_status =
+@@ -4017,24 +4186,34 @@ static void iwl4965_set_tx_status(struct iwl_priv *priv, int txq_id, int idx,
+ }
- if (val & PCI_CFG_PMC_PME_FROM_D3COLD_SUPPORT)
-- iwl_set_bits_mask_restricted_reg(
-- priv, APMG_PS_CTRL_REG,
-+ iwl4965_set_bits_mask_prph(priv, APMG_PS_CTRL_REG,
- APMG_PS_CTRL_VAL_PWR_SRC_VAUX,
- ~APMG_PS_CTRL_MSK_PWR_SRC);
- } else
-- iwl_set_bits_mask_restricted_reg(
-- priv, APMG_PS_CTRL_REG,
-+ iwl4965_set_bits_mask_prph(priv, APMG_PS_CTRL_REG,
- APMG_PS_CTRL_VAL_PWR_SRC_VMAIN,
- ~APMG_PS_CTRL_MSK_PWR_SRC);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
+-static void iwl_sta_modify_enable_tid_tx(struct iwl_priv *priv,
++/**
++ * iwl4965_sta_modify_enable_tid_tx - Enable Tx for this TID in station table
++ */
++static void iwl4965_sta_modify_enable_tid_tx(struct iwl4965_priv *priv,
+ int sta_id, int tid)
+ {
+ unsigned long flags;
-- return rc;
-+ return ret;
++ /* Remove "disable" flag, to enable Tx for this TID */
+ spin_lock_irqsave(&priv->sta_lock, flags);
+ priv->stations[sta_id].sta.sta.modify_mask = STA_MODIFY_TID_DISABLE_TX;
+ priv->stations[sta_id].sta.tid_disable_tx &= cpu_to_le16(~(1 << tid));
+ priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
}
--static int iwl4965_rx_init(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
-+static int iwl4965_rx_init(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
- {
- int rc;
- unsigned long flags;
-+ unsigned int rb_size;
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
- }
+-static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
+- struct iwl_ht_agg *agg,
+- struct iwl_compressed_ba_resp*
++/**
++ * iwl4965_tx_status_reply_compressed_ba - Update tx status from block-ack
++ *
++ * Go through block-ack's bitmap of ACK'd frames, update driver's record of
++ * ACK vs. not. This gets sent to mac80211, then to rate scaling algo.
++ */
++static int iwl4965_tx_status_reply_compressed_ba(struct iwl4965_priv *priv,
++ struct iwl4965_ht_agg *agg,
++ struct iwl4965_compressed_ba_resp*
+ ba_resp)
-- /* stop HW */
-- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
-+ if (iwl4965_param_amsdu_size_8K)
-+ rb_size = FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_8K;
-+ else
-+ rb_size = FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K;
+ {
+@@ -4048,16 +4227,20 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
+ IWL_ERROR("Received BA when not expected\n");
+ return -EINVAL;
+ }
+
-+ /* Stop Rx DMA */
-+ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
++ /* Mark that the expected block-ack response arrived */
+ agg->wait_for_ba = 0;
+ IWL_DEBUG_TX_REPLY("BA %d %d\n", agg->start_idx, ba_resp->ba_seq_ctl);
+- sh = agg->start_idx - SEQ_TO_INDEX(ba_seq_ctl>>4);
+- if (sh < 0) /* tbw something is wrong with indeces */
+
-+ /* Reset driver's Rx queue write index */
-+ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_RBDCB_WPTR_REG, 0);
-
-- iwl_write_restricted(priv, FH_RSCSR_CHNL0_RBDCB_WPTR_REG, 0);
-- iwl_write_restricted(priv, FH_RSCSR_CHNL0_RBDCB_BASE_REG,
-+ /* Tell device where to find RBD circular buffer in DRAM */
-+ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_RBDCB_BASE_REG,
- rxq->dma_addr >> 8);
-
-- iwl_write_restricted(priv, FH_RSCSR_CHNL0_STTS_WPTR_REG,
-+ /* Tell device where in DRAM to update its Rx status */
-+ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_STTS_WPTR_REG,
- (priv->hw_setting.shared_phys +
-- offsetof(struct iwl_shared, val0)) >> 4);
-+ offsetof(struct iwl4965_shared, val0)) >> 4);
++ /* Calculate shift to align block-ack bits with our Tx window bits */
++ sh = agg->start_idx - SEQ_TO_INDEX(ba_seq_ctl >> 4);
++ if (sh < 0) /* tbw something is wrong with indices */
+ sh += 0x100;
-- iwl_write_restricted(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG,
-+ /* Enable Rx DMA, enable host interrupt, Rx buffer size 4k, 256 RBDs */
-+ iwl4965_write_direct32(priv, FH_MEM_RCSR_CHNL0_CONFIG_REG,
- FH_RCSR_RX_CONFIG_CHNL_EN_ENABLE_VAL |
- FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL |
-- IWL_FH_RCSR_RX_CONFIG_REG_VAL_RB_SIZE_4K |
-+ rb_size |
- /*0x10 << 4 | */
- (RX_QUEUE_SIZE_LOG <<
- FH_RCSR_RX_CONFIG_RBDCB_SIZE_BITSHIFT));
+- /* don't use 64 bits for now */
++ /* don't use 64-bit values for now */
+ bitmap0 = resp_bitmap0 >> sh;
+ bitmap1 = resp_bitmap1 >> sh;
+- bitmap0 |= (resp_bitmap1 & ((1<<sh)|((1<<sh)-1))) << (32 - sh);
++ bitmap0 |= (resp_bitmap1 & ((1 << sh) | ((1 << sh) - 1))) << (32 - sh);
- /*
-- * iwl_write32(priv,CSR_INT_COAL_REG,0);
-+ * iwl4965_write32(priv,CSR_INT_COAL_REG,0);
- */
+ if (agg->frame_count > (64 - sh)) {
+ IWL_DEBUG_TX_REPLY("more frames than bitmap size");
+@@ -4065,10 +4248,12 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
+ }
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
+ /* check for success or failure according to the
+- * transmitted bitmap and back bitmap */
++ * transmitted bitmap and block-ack bitmap */
+ bitmap0 &= agg->bitmap0;
+ bitmap1 &= agg->bitmap1;
++ /* For each frame attempted in aggregation,
++ * update driver's record of tx frame's status. */
+ for (i = 0; i < agg->frame_count ; i++) {
+ int idx = (agg->start_idx + i) & 0xff;
+ ack = bitmap0 & (1 << i);
+@@ -4084,20 +4269,36 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
return 0;
}
--static int iwl4965_kw_init(struct iwl_priv *priv)
-+/* Tell 4965 where to find the "keep warm" buffer */
-+static int iwl4965_kw_init(struct iwl4965_priv *priv)
+-static inline int iwl_queue_dec_wrap(int index, int n_bd)
++/**
++ * iwl4965_queue_dec_wrap - Decrement queue index, wrap back to end if needed
++ * @index -- current index
++ * @n_bd -- total number of entries in queue (s/b power of 2)
++ */
++static inline int iwl4965_queue_dec_wrap(int index, int n_bd)
{
- unsigned long flags;
- int rc;
-
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- goto out;
-
-- iwl_write_restricted(priv, IWL_FH_KW_MEM_ADDR_REG,
-+ iwl4965_write_direct32(priv, IWL_FH_KW_MEM_ADDR_REG,
- priv->kw.dma_addr >> 4);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- out:
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
+ return (index == 0) ? n_bd - 1 : index - 1;
}
--static int iwl4965_kw_alloc(struct iwl_priv *priv)
-+static int iwl4965_kw_alloc(struct iwl4965_priv *priv)
- {
- struct pci_dev *dev = priv->pci_dev;
-- struct iwl_kw *kw = &priv->kw;
-+ struct iwl4965_kw *kw = &priv->kw;
-
- kw->size = IWL4965_KW_SIZE; /* TBW need set somewhere else */
- kw->v_addr = pci_alloc_consistent(dev, kw->size, &kw->dma_addr);
-@@ -300,14 +310,19 @@ static int iwl4965_kw_alloc(struct iwl_priv *priv)
- #define CHECK_AND_PRINT(x) ((eeprom_ch->flags & EEPROM_CHANNEL_##x) \
- ? # x " " : "")
-
--int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode, u16 channel,
-- const struct iwl_eeprom_channel *eeprom_ch,
+-static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
+/**
-+ * iwl4965_set_fat_chan_info - Copy fat channel info into driver's priv.
++ * iwl4965_rx_reply_compressed_ba - Handler for REPLY_COMPRESSED_BA
+ *
-+ * Does not set up a command, or touch hardware.
++ * Handles block-acknowledge notification from device, which reports success
++ * of frames sent via aggregation.
+ */
-+int iwl4965_set_fat_chan_info(struct iwl4965_priv *priv, int phymode, u16 channel,
-+ const struct iwl4965_eeprom_channel *eeprom_ch,
- u8 fat_extension_channel)
++static void iwl4965_rx_reply_compressed_ba(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
-- struct iwl_channel_info *ch_info;
-+ struct iwl4965_channel_info *ch_info;
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_compressed_ba_resp *ba_resp = &pkt->u.compressed_ba;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_compressed_ba_resp *ba_resp = &pkt->u.compressed_ba;
+ int index;
+- struct iwl_tx_queue *txq = NULL;
+- struct iwl_ht_agg *agg;
++ struct iwl4965_tx_queue *txq = NULL;
++ struct iwl4965_ht_agg *agg;
++
++ /* "flow" corresponds to Tx queue */
+ u16 ba_resp_scd_flow = le16_to_cpu(ba_resp->scd_flow);
++
++ /* "ssn" is start of block-ack Tx window, corresponds to index
++ * (in Tx queue's circular buffer) of first TFD/frame in window */
+ u16 ba_resp_scd_ssn = le16_to_cpu(ba_resp->scd_ssn);
-- ch_info = (struct iwl_channel_info *)
-- iwl_get_channel_info(priv, phymode, channel);
-+ ch_info = (struct iwl4965_channel_info *)
-+ iwl4965_get_channel_info(priv, phymode, channel);
+ if (ba_resp_scd_flow >= ARRAY_SIZE(priv->txq)) {
+@@ -4107,9 +4308,11 @@ static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
+
+ txq = &priv->txq[ba_resp_scd_flow];
+ agg = &priv->stations[ba_resp->sta_id].tid[ba_resp->tid].agg;
+- index = iwl_queue_dec_wrap(ba_resp_scd_ssn & 0xff, txq->q.n_bd);
+
+- /* TODO: Need to get this copy more sefely - now good for debug */
++ /* Find index just before block-ack window */
++ index = iwl4965_queue_dec_wrap(ba_resp_scd_ssn & 0xff, txq->q.n_bd);
++
++ /* TODO: Need to get this copy more safely - now good for debug */
+ /*
+ {
+ DECLARE_MAC_BUF(mac);
+@@ -4132,23 +4335,36 @@ static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
+ agg->bitmap0);
+ }
+ */
++
++ /* Update driver's record of ACK vs. not for each frame in window */
+ iwl4965_tx_status_reply_compressed_ba(priv, agg, ba_resp);
+- /* releases all the TFDs until the SSN */
+- if (txq->q.last_used != (ba_resp_scd_ssn & 0xff))
+- iwl_tx_queue_reclaim(priv, ba_resp_scd_flow, index);
++
++ /* Release all TFDs before the SSN, i.e. all TFDs in front of
++ * block-ack window (we assume that they've been successfully
++ * transmitted ... if not, it's too late anyway). */
++ if (txq->q.read_ptr != (ba_resp_scd_ssn & 0xff))
++ iwl4965_tx_queue_reclaim(priv, ba_resp_scd_flow, index);
- if (!is_channel_valid(ch_info))
- return -1;
-@@ -340,10 +355,13 @@ int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode, u16 channel,
- return 0;
}
--static void iwl4965_kw_free(struct iwl_priv *priv)
+
+-static void iwl4965_tx_queue_stop_scheduler(struct iwl_priv *priv, u16 txq_id)
+/**
-+ * iwl4965_kw_free - Free the "keep warm" buffer
++ * iwl4965_tx_queue_stop_scheduler - Stop queue, but keep configuration
+ */
-+static void iwl4965_kw_free(struct iwl4965_priv *priv)
++static void iwl4965_tx_queue_stop_scheduler(struct iwl4965_priv *priv, u16 txq_id)
{
- struct pci_dev *dev = priv->pci_dev;
-- struct iwl_kw *kw = &priv->kw;
-+ struct iwl4965_kw *kw = &priv->kw;
+- iwl_write_restricted_reg(priv,
+- SCD_QUEUE_STATUS_BITS(txq_id),
++ /* Simply stop the queue, but don't change any configuration;
++ * the SCD_ACT_EN bit is the write-enable mask for the ACTIVE bit. */
++ iwl4965_write_prph(priv,
++ KDR_SCD_QUEUE_STATUS_BITS(txq_id),
+ (0 << SCD_QUEUE_STTS_REG_POS_ACTIVE)|
+ (1 << SCD_QUEUE_STTS_REG_POS_SCD_ACT_EN));
+ }
- if (kw->v_addr) {
- pci_free_consistent(dev, kw->size, kw->v_addr, kw->dma_addr);
-@@ -358,7 +376,7 @@ static void iwl4965_kw_free(struct iwl_priv *priv)
- * @param priv
- * @return error code
- */
--static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
-+static int iwl4965_txq_ctx_reset(struct iwl4965_priv *priv)
+-static int iwl4965_tx_queue_set_q2ratid(struct iwl_priv *priv, u16 ra_tid,
++/**
++ * iwl4965_tx_queue_set_q2ratid - Map unique receiver/tid combination to a queue
++ */
++static int iwl4965_tx_queue_set_q2ratid(struct iwl4965_priv *priv, u16 ra_tid,
+ u16 txq_id)
{
- int rc = 0;
- int txq_id, slots_num;
-@@ -366,9 +384,10 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
+ u32 tbl_dw_addr;
+@@ -4160,22 +4376,25 @@ static int iwl4965_tx_queue_set_q2ratid(struct iwl_priv *priv, u16 ra_tid,
+ tbl_dw_addr = priv->scd_base_addr +
+ SCD_TRANSLATE_TBL_OFFSET_QUEUE(txq_id);
- iwl4965_kw_free(priv);
+- tbl_dw = iwl_read_restricted_mem(priv, tbl_dw_addr);
++ tbl_dw = iwl4965_read_targ_mem(priv, tbl_dw_addr);
-- iwl_hw_txq_ctx_free(priv);
-+ /* Free all tx/cmd queues and keep-warm buffer */
-+ iwl4965_hw_txq_ctx_free(priv);
+ if (txq_id & 0x1)
+ tbl_dw = (scd_q2ratid << 16) | (tbl_dw & 0x0000FFFF);
+ else
+ tbl_dw = scd_q2ratid | (tbl_dw & 0xFFFF0000);
-- /* Tx CMD queue */
-+ /* Alloc keep-warm buffer */
- rc = iwl4965_kw_alloc(priv);
- if (rc) {
- IWL_ERROR("Keep Warm allocation failed");
-@@ -377,28 +396,31 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
+- iwl_write_restricted_mem(priv, tbl_dw_addr, tbl_dw);
++ iwl4965_write_targ_mem(priv, tbl_dw_addr, tbl_dw);
- spin_lock_irqsave(&priv->lock, flags);
+ return 0;
+ }
+
+ /**
+- * txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID
++ * iwl4965_tx_queue_agg_enable - Set up & enable aggregation for selected queue
++ *
++ * NOTE: txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID,
++ * i.e. it must be one of the higher queues used for aggregation
+ */
+-static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
++static int iwl4965_tx_queue_agg_enable(struct iwl4965_priv *priv, int txq_id,
+ int tx_fifo, int sta_id, int tid,
+ u16 ssn_idx)
+ {
+@@ -4189,43 +4408,48 @@ static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
+
+ ra_tid = BUILD_RAxTID(sta_id, tid);
+
+- iwl_sta_modify_enable_tid_tx(priv, sta_id, tid);
++ /* Modify device's station table to Tx this TID */
++ iwl4965_sta_modify_enable_tid_tx(priv, sta_id, tid);
+ spin_lock_irqsave(&priv->lock, flags);
- rc = iwl_grab_restricted_access(priv);
+ rc = iwl4965_grab_nic_access(priv);
- if (unlikely(rc)) {
- IWL_ERROR("TX reset failed");
+ if (rc) {
spin_unlock_irqrestore(&priv->lock, flags);
- goto error_reset;
+ return rc;
}
-- iwl_write_restricted_reg(priv, SCD_TXFACT, 0);
-- iwl_release_restricted_access(priv);
-+ /* Turn off all Tx DMA channels */
-+ iwl4965_write_prph(priv, KDR_SCD_TXFACT, 0);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
++ /* Stop this Tx queue before configuring it */
+ iwl4965_tx_queue_stop_scheduler(priv, txq_id);
-+ /* Tell 4965 where to find the keep-warm buffer */
- rc = iwl4965_kw_init(priv);
- if (rc) {
- IWL_ERROR("kw_init failed\n");
- goto error_reset;
- }
++ /* Map receiver-address / traffic-ID to this queue */
+ iwl4965_tx_queue_set_q2ratid(priv, ra_tid, txq_id);
-- /* Tx queue(s) */
-+ /* Alloc and init all (default 16) Tx queues,
-+ * including the command queue (#4) */
- for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++) {
- slots_num = (txq_id == IWL_CMD_QUEUE_NUM) ?
- TFD_CMD_SLOTS : TFD_TX_CMD_SLOTS;
-- rc = iwl_tx_queue_init(priv, &priv->txq[txq_id], slots_num,
-+ rc = iwl4965_tx_queue_init(priv, &priv->txq[txq_id], slots_num,
- txq_id);
- if (rc) {
- IWL_ERROR("Tx %d queue init failed\n", txq_id);
-@@ -409,32 +431,32 @@ static int iwl4965_txq_ctx_reset(struct iwl_priv *priv)
- return rc;
++ /* Set this queue as a chain-building queue */
++ iwl4965_set_bits_prph(priv, KDR_SCD_QUEUECHAIN_SEL, (1 << txq_id));
- error:
-- iwl_hw_txq_ctx_free(priv);
-+ iwl4965_hw_txq_ctx_free(priv);
- error_reset:
- iwl4965_kw_free(priv);
- error_kw:
- return rc;
- }
+- iwl_set_bits_restricted_reg(priv, SCD_QUEUECHAIN_SEL, (1<<txq_id));
+-
+- priv->txq[txq_id].q.last_used = (ssn_idx & 0xff);
+- priv->txq[txq_id].q.first_empty = (ssn_idx & 0xff);
+-
+- /* supposes that ssn_idx is valid (!= 0xFFF) */
++ /* Place first TFD at index corresponding to start sequence number.
++ * Assumes that ssn_idx is valid (!= 0xFFF) */
++ priv->txq[txq_id].q.read_ptr = (ssn_idx & 0xff);
++ priv->txq[txq_id].q.write_ptr = (ssn_idx & 0xff);
+ iwl4965_set_wr_ptrs(priv, txq_id, ssn_idx);
--int iwl_hw_nic_init(struct iwl_priv *priv)
-+int iwl4965_hw_nic_init(struct iwl4965_priv *priv)
- {
- int rc;
- unsigned long flags;
-- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl4965_rx_queue *rxq = &priv->rxq;
- u8 rev_id;
- u32 val;
- u8 val_link;
+- iwl_write_restricted_mem(priv,
++ /* Set up Tx window size and frame limit for this queue */
++ iwl4965_write_targ_mem(priv,
+ priv->scd_base_addr + SCD_CONTEXT_QUEUE_OFFSET(txq_id),
+ (SCD_WIN_SIZE << SCD_QUEUE_CTX_REG1_WIN_SIZE_POS) &
+ SCD_QUEUE_CTX_REG1_WIN_SIZE_MSK);
-- iwl_power_init_handle(priv);
-+ iwl4965_power_init_handle(priv);
+- iwl_write_restricted_mem(priv, priv->scd_base_addr +
++ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
+ SCD_CONTEXT_QUEUE_OFFSET(txq_id) + sizeof(u32),
+ (SCD_FRAME_LIMIT << SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS)
+ & SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK);
- /* nic_init */
- spin_lock_irqsave(&priv->lock, flags);
+- iwl_set_bits_restricted_reg(priv, SCD_INTERRUPT_MASK, (1 << txq_id));
++ iwl4965_set_bits_prph(priv, KDR_SCD_INTERRUPT_MASK, (1 << txq_id));
-- iwl_set_bit(priv, CSR_GIO_CHICKEN_BITS,
-+ iwl4965_set_bit(priv, CSR_GIO_CHICKEN_BITS,
- CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER);
++ /* Set up Status area in SRAM, map to Tx DMA/FIFO, activate the queue */
+ iwl4965_tx_queue_set_status(priv, &priv->txq[txq_id], tx_fifo, 1);
-- iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-- rc = iwl_poll_bit(priv, CSR_GP_CNTRL,
-+ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-+ rc = iwl4965_poll_bit(priv, CSR_GP_CNTRL,
- CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
- CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
- if (rc < 0) {
-@@ -443,26 +465,26 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
- return rc;
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+@@ -4234,7 +4458,7 @@ static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
+ /**
+ * txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID
+ */
+-static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
++static int iwl4965_tx_queue_agg_disable(struct iwl4965_priv *priv, u16 txq_id,
+ u16 ssn_idx, u8 tx_fifo)
+ {
+ unsigned long flags;
+@@ -4247,7 +4471,7 @@ static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
}
+ spin_lock_irqsave(&priv->lock, flags);
- rc = iwl_grab_restricted_access(priv);
+ rc = iwl4965_grab_nic_access(priv);
if (rc) {
spin_unlock_irqrestore(&priv->lock, flags);
return rc;
- }
+@@ -4255,56 +4479,50 @@ static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
-- iwl_read_restricted_reg(priv, APMG_CLK_CTRL_REG);
-+ iwl4965_read_prph(priv, APMG_CLK_CTRL_REG);
+ iwl4965_tx_queue_stop_scheduler(priv, txq_id);
-- iwl_write_restricted_reg(priv, APMG_CLK_CTRL_REG,
-+ iwl4965_write_prph(priv, APMG_CLK_CTRL_REG,
- APMG_CLK_VAL_DMA_CLK_RQT |
- APMG_CLK_VAL_BSM_CLK_RQT);
-- iwl_read_restricted_reg(priv, APMG_CLK_CTRL_REG);
-+ iwl4965_read_prph(priv, APMG_CLK_CTRL_REG);
+- iwl_clear_bits_restricted_reg(priv, SCD_QUEUECHAIN_SEL, (1 << txq_id));
++ iwl4965_clear_bits_prph(priv, KDR_SCD_QUEUECHAIN_SEL, (1 << txq_id));
- udelay(20);
+- priv->txq[txq_id].q.last_used = (ssn_idx & 0xff);
+- priv->txq[txq_id].q.first_empty = (ssn_idx & 0xff);
++ priv->txq[txq_id].q.read_ptr = (ssn_idx & 0xff);
++ priv->txq[txq_id].q.write_ptr = (ssn_idx & 0xff);
+ /* supposes that ssn_idx is valid (!= 0xFFF) */
+ iwl4965_set_wr_ptrs(priv, txq_id, ssn_idx);
-- iwl_set_bits_restricted_reg(priv, APMG_PCIDEV_STT_REG,
-+ iwl4965_set_bits_prph(priv, APMG_PCIDEV_STT_REG,
- APMG_PCIDEV_STT_VAL_L1_ACT_DIS);
+- iwl_clear_bits_restricted_reg(priv, SCD_INTERRUPT_MASK, (1 << txq_id));
++ iwl4965_clear_bits_prph(priv, KDR_SCD_INTERRUPT_MASK, (1 << txq_id));
+ iwl4965_txq_ctx_deactivate(priv, txq_id);
+ iwl4965_tx_queue_set_status(priv, &priv->txq[txq_id], tx_fifo, 0);
- iwl_release_restricted_access(priv);
-- iwl_write32(priv, CSR_INT_COALESCING, 512 / 32);
+ iwl4965_release_nic_access(priv);
-+ iwl4965_write32(priv, CSR_INT_COALESCING, 512 / 32);
spin_unlock_irqrestore(&priv->lock, flags);
- /* Determine HW type */
-@@ -484,11 +506,6 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
-
- spin_unlock_irqrestore(&priv->lock, flags);
+ return 0;
+ }
-- /* Read the EEPROM */
-- rc = iwl_eeprom_init(priv);
-- if (rc)
-- return rc;
+-#endif/* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
+-/*
+- * RATE SCALE CODE
+- */
+-int iwl4965_init_hw_rates(struct iwl_priv *priv, struct ieee80211_rate *rates)
+-{
+- return 0;
+-}
-
- if (priv->eeprom.calib_version < EEPROM_TX_POWER_VERSION_NEW) {
- IWL_ERROR("Older EEPROM detected! Aborting.\n");
- return -EINVAL;
-@@ -503,51 +520,53 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
++#endif/* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
- /* set CSR_HW_CONFIG_REG for uCode use */
+ /**
+ * iwl4965_add_station - Initialize a station's hardware rate table
+ *
+- * The uCode contains a table of fallback rates and retries per rate
++ * The uCode's station table contains a table of fallback rates
+ * for automatic fallback during transmission.
+ *
+- * NOTE: This initializes the table for a single retry per data rate
+- * which is not optimal. Setting up an intelligent retry per rate
+- * requires feedback from transmission, which isn't exposed through
+- * rc80211_simple which is what this driver is currently using.
++ * NOTE: This sets up a default set of values. These will be replaced later
++ * if the driver's iwl-4965-rs rate scaling algorithm is used, instead of
++ * rc80211_simple.
+ *
++ * NOTE: Run REPLY_ADD_STA command to set up station table entry, before
++ * calling this function (which runs REPLY_TX_LINK_QUALITY_CMD,
++ * which requires station table entry to exist).
+ */
+-void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
++void iwl4965_add_station(struct iwl4965_priv *priv, const u8 *addr, int is_ap)
+ {
+ int i, r;
+- struct iwl_link_quality_cmd link_cmd = {
++ struct iwl4965_link_quality_cmd link_cmd = {
+ .reserved1 = 0,
+ };
+ u16 rate_flags;
-- iwl_set_bit(priv, CSR_SW_VER, CSR_HW_IF_CONFIG_REG_BIT_KEDRON_R |
-+ iwl4965_set_bit(priv, CSR_SW_VER, CSR_HW_IF_CONFIG_REG_BIT_KEDRON_R |
- CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI |
- CSR_HW_IF_CONFIG_REG_BIT_MAC_SI);
+- /* Set up the rate scaling to start at 54M and fallback
+- * all the way to 1M in IEEE order and then spin on IEEE */
++ /* Set up the rate scaling to start at selected rate, fall back
++ * all the way down to 1M in IEEE order, and then spin on 1M */
+ if (is_ap)
+ r = IWL_RATE_54M_INDEX;
+ else if (priv->phymode == MODE_IEEE80211A)
+@@ -4317,11 +4535,13 @@ void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
+ if (r >= IWL_FIRST_CCK_RATE && r <= IWL_LAST_CCK_RATE)
+ rate_flags |= RATE_MCS_CCK_MSK;
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc < 0) {
- spin_unlock_irqrestore(&priv->lock, flags);
- IWL_DEBUG_INFO("Failed to init the card\n");
- return rc;
++ /* Use Tx antenna B only */
+ rate_flags |= RATE_MCS_ANT_B_MSK;
+ rate_flags &= ~RATE_MCS_ANT_A_MSK;
++
+ link_cmd.rs_table[i].rate_n_flags =
+- iwl_hw_set_rate_n_flags(iwl_rates[r].plcp, rate_flags);
+- r = iwl_get_prev_ieee_rate(r);
++ iwl4965_hw_set_rate_n_flags(iwl4965_rates[r].plcp, rate_flags);
++ r = iwl4965_get_prev_ieee_rate(r);
}
-- iwl_read_restricted_reg(priv, APMG_PS_CTRL_REG);
-- iwl_set_bits_restricted_reg(priv, APMG_PS_CTRL_REG,
-+ iwl4965_read_prph(priv, APMG_PS_CTRL_REG);
-+ iwl4965_set_bits_prph(priv, APMG_PS_CTRL_REG,
- APMG_PS_CTRL_VAL_RESET_REQ);
- udelay(5);
-- iwl_clear_bits_restricted_reg(priv, APMG_PS_CTRL_REG,
-+ iwl4965_clear_bits_prph(priv, APMG_PS_CTRL_REG,
- APMG_PS_CTRL_VAL_RESET_REQ);
-
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
-
-- iwl_hw_card_show_info(priv);
-+ iwl4965_hw_card_show_info(priv);
-
- /* end nic_init */
-
- /* Allocate the RX queue, or reset if it is already allocated */
- if (!rxq->bd) {
-- rc = iwl_rx_queue_alloc(priv);
-+ rc = iwl4965_rx_queue_alloc(priv);
- if (rc) {
- IWL_ERROR("Unable to initialize Rx queue\n");
- return -ENOMEM;
- }
- } else
-- iwl_rx_queue_reset(priv, rxq);
-+ iwl4965_rx_queue_reset(priv, rxq);
+ link_cmd.general_params.single_stream_ant_msk = 2;
+@@ -4332,18 +4552,18 @@ void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
+ /* Update the rate scaling for control frame Tx to AP */
+ link_cmd.sta_id = is_ap ? IWL_AP_ID : IWL4965_BROADCAST_ID;
-- iwl_rx_replenish(priv);
-+ iwl4965_rx_replenish(priv);
+- iwl_send_cmd_pdu(priv, REPLY_TX_LINK_QUALITY_CMD, sizeof(link_cmd),
++ iwl4965_send_cmd_pdu(priv, REPLY_TX_LINK_QUALITY_CMD, sizeof(link_cmd),
+ &link_cmd);
+ }
- iwl4965_rx_init(priv, rxq);
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
- spin_lock_irqsave(&priv->lock, flags);
+-static u8 iwl_is_channel_extension(struct iwl_priv *priv, int phymode,
++static u8 iwl4965_is_channel_extension(struct iwl4965_priv *priv, int phymode,
+ u16 channel, u8 extension_chan_offset)
+ {
+- const struct iwl_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
- rxq->need_update = 1;
-- iwl_rx_queue_update_write_ptr(priv, rxq);
-+ iwl4965_rx_queue_update_write_ptr(priv, rxq);
+- ch_info = iwl_get_channel_info(priv, phymode, channel);
++ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
+ if (!is_channel_valid(ch_info))
+ return 0;
- spin_unlock_irqrestore(&priv->lock, flags);
-+
-+ /* Allocate and init all Tx and Command queues */
- rc = iwl4965_txq_ctx_reset(priv);
- if (rc)
- return rc;
-@@ -563,7 +582,7 @@ int iwl_hw_nic_init(struct iwl_priv *priv)
+@@ -4357,36 +4577,37 @@ static u8 iwl_is_channel_extension(struct iwl_priv *priv, int phymode,
return 0;
}
--int iwl_hw_nic_stop_master(struct iwl_priv *priv)
-+int iwl4965_hw_nic_stop_master(struct iwl4965_priv *priv)
+-static u8 iwl_is_fat_tx_allowed(struct iwl_priv *priv,
+- const struct sta_ht_info *ht_info)
++static u8 iwl4965_is_fat_tx_allowed(struct iwl4965_priv *priv,
++ struct ieee80211_ht_info *sta_ht_inf)
{
- int rc = 0;
- u32 reg_val;
-@@ -572,16 +591,16 @@ int iwl_hw_nic_stop_master(struct iwl_priv *priv)
- spin_lock_irqsave(&priv->lock, flags);
++ struct iwl_ht_info *iwl_ht_conf = &priv->current_ht_config;
- /* set stop master bit */
-- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_STOP_MASTER);
-+ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_STOP_MASTER);
+- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
++ if ((!iwl_ht_conf->is_ht) ||
++ (iwl_ht_conf->supported_chan_width != IWL_CHANNEL_WIDTH_40MHZ) ||
++ (iwl_ht_conf->extension_chan_offset == IWL_EXT_CHANNEL_OFFSET_AUTO))
+ return 0;
-- reg_val = iwl_read32(priv, CSR_GP_CNTRL);
-+ reg_val = iwl4965_read32(priv, CSR_GP_CNTRL);
+- if (ht_info->supported_chan_width != IWL_CHANNEL_WIDTH_40MHZ)
+- return 0;
+-
+- if (ht_info->extension_chan_offset == IWL_EXT_CHANNEL_OFFSET_AUTO)
+- return 0;
++ if (sta_ht_inf) {
++ if ((!sta_ht_inf->ht_supported) ||
++ (!sta_ht_inf->cap & IEEE80211_HT_CAP_SUP_WIDTH))
++ return 0;
++ }
- if (CSR_GP_CNTRL_REG_FLAG_MAC_POWER_SAVE ==
- (reg_val & CSR_GP_CNTRL_REG_MSK_POWER_SAVE_TYPE))
- IWL_DEBUG_INFO("Card in power save, master is already "
- "stopped\n");
- else {
-- rc = iwl_poll_bit(priv, CSR_RESET,
-+ rc = iwl4965_poll_bit(priv, CSR_RESET,
- CSR_RESET_REG_FLAG_MASTER_DISABLED,
- CSR_RESET_REG_FLAG_MASTER_DISABLED, 100);
- if (rc < 0) {
-@@ -596,65 +615,69 @@ int iwl_hw_nic_stop_master(struct iwl_priv *priv)
- return rc;
+- /* no fat tx allowed on 2.4GHZ */
+- if (priv->phymode != MODE_IEEE80211A)
+- return 0;
+- return (iwl_is_channel_extension(priv, priv->phymode,
+- ht_info->control_channel,
+- ht_info->extension_chan_offset));
++ return (iwl4965_is_channel_extension(priv, priv->phymode,
++ iwl_ht_conf->control_channel,
++ iwl_ht_conf->extension_chan_offset));
}
--void iwl_hw_txq_ctx_stop(struct iwl_priv *priv)
-+/**
-+ * iwl4965_hw_txq_ctx_stop - Stop all Tx DMA channels, free Tx queue memory
-+ */
-+void iwl4965_hw_txq_ctx_stop(struct iwl4965_priv *priv)
+-void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
++void iwl4965_set_rxon_ht(struct iwl4965_priv *priv, struct iwl_ht_info *ht_info)
{
+- struct iwl_rxon_cmd *rxon = &priv->staging_rxon;
++ struct iwl4965_rxon_cmd *rxon = &priv->staging_rxon;
+ u32 val;
- int txq_id;
- unsigned long flags;
-
-- /* reset TFD queues */
-+ /* Stop each Tx DMA channel, and wait for it to be idle */
- for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++) {
- spin_lock_irqsave(&priv->lock, flags);
-- if (iwl_grab_restricted_access(priv)) {
-+ if (iwl4965_grab_nic_access(priv)) {
- spin_unlock_irqrestore(&priv->lock, flags);
- continue;
- }
+ if (!ht_info->is_ht)
+ return;
-- iwl_write_restricted(priv,
-+ iwl4965_write_direct32(priv,
- IWL_FH_TCSR_CHNL_TX_CONFIG_REG(txq_id),
- 0x0);
-- iwl_poll_restricted_bit(priv, IWL_FH_TSSR_TX_STATUS_REG,
-+ iwl4965_poll_direct_bit(priv, IWL_FH_TSSR_TX_STATUS_REG,
- IWL_FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE
- (txq_id), 200);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
+- if (iwl_is_fat_tx_allowed(priv, ht_info))
++ /* Set up channel bandwidth: 20 MHz only, or 20/40 mixed if fat ok */
++ if (iwl4965_is_fat_tx_allowed(priv, NULL))
+ rxon->flags |= RXON_FLG_CHANNEL_MODE_MIXED_MSK;
+ else
+ rxon->flags &= ~(RXON_FLG_CHANNEL_MODE_MIXED_MSK |
+@@ -4400,7 +4621,7 @@ void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
+ return;
}
-- iwl_hw_txq_ctx_free(priv);
-+ /* Deallocate memory for all Tx queues */
-+ iwl4965_hw_txq_ctx_free(priv);
- }
+- /* Note: control channel is oposit to extension channel */
++ /* Note: control channel is opposite of extension channel */
+ switch (ht_info->extension_chan_offset) {
+ case IWL_EXT_CHANNEL_OFFSET_ABOVE:
+ rxon->flags &= ~(RXON_FLG_CTRL_CHANNEL_LOC_HI_MSK);
+@@ -4416,66 +4637,56 @@ void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
+ break;
+ }
--int iwl_hw_nic_reset(struct iwl_priv *priv)
-+int iwl4965_hw_nic_reset(struct iwl4965_priv *priv)
- {
- int rc = 0;
- unsigned long flags;
+- val = ht_info->operating_mode;
++ val = ht_info->ht_protection;
-- iwl_hw_nic_stop_master(priv);
-+ iwl4965_hw_nic_stop_master(priv);
+ rxon->flags |= cpu_to_le32(val << RXON_FLG_HT_OPERATING_MODE_POS);
- spin_lock_irqsave(&priv->lock, flags);
+- priv->active_rate_ht[0] = ht_info->supp_rates[0];
+- priv->active_rate_ht[1] = ht_info->supp_rates[1];
+ iwl4965_set_rxon_chain(priv);
-- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
-+ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
+ IWL_DEBUG_ASSOC("supported HT rate 0x%X %X "
+ "rxon flags 0x%X operation mode :0x%X "
+ "extension channel offset 0x%x "
+ "control chan %d\n",
+- priv->active_rate_ht[0], priv->active_rate_ht[1],
+- le32_to_cpu(rxon->flags), ht_info->operating_mode,
++ ht_info->supp_mcs_set[0], ht_info->supp_mcs_set[1],
++ le32_to_cpu(rxon->flags), ht_info->ht_protection,
+ ht_info->extension_chan_offset,
+ ht_info->control_channel);
+ return;
+ }
- udelay(10);
+-void iwl4965_set_ht_add_station(struct iwl_priv *priv, u8 index)
++void iwl4965_set_ht_add_station(struct iwl4965_priv *priv, u8 index,
++ struct ieee80211_ht_info *sta_ht_inf)
+ {
+ __le32 sta_flags;
+- struct sta_ht_info *ht_info = &priv->current_assoc_ht;
-- iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-- rc = iwl_poll_bit(priv, CSR_RESET,
-+ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-+ rc = iwl4965_poll_bit(priv, CSR_RESET,
- CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
- CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25);
+- priv->current_channel_width = IWL_CHANNEL_WIDTH_20MHZ;
+- if (!ht_info->is_ht)
++ if (!sta_ht_inf || !sta_ht_inf->ht_supported)
+ goto done;
- udelay(10);
+ sta_flags = priv->stations[index].sta.station_flags;
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (!rc) {
-- iwl_write_restricted_reg(priv, APMG_CLK_EN_REG,
-+ iwl4965_write_prph(priv, APMG_CLK_EN_REG,
- APMG_CLK_VAL_DMA_CLK_RQT |
- APMG_CLK_VAL_BSM_CLK_RQT);
+- if (ht_info->tx_mimo_ps_mode == IWL_MIMO_PS_DYNAMIC)
++ if (((sta_ht_inf->cap & IEEE80211_HT_CAP_MIMO_PS >> 2))
++ == IWL_MIMO_PS_DYNAMIC)
+ sta_flags |= STA_FLG_RTS_MIMO_PROT_MSK;
+ else
+ sta_flags &= ~STA_FLG_RTS_MIMO_PROT_MSK;
- udelay(10);
+ sta_flags |= cpu_to_le32(
+- (u32)ht_info->ampdu_factor << STA_FLG_MAX_AGG_SIZE_POS);
++ (u32)sta_ht_inf->ampdu_factor << STA_FLG_MAX_AGG_SIZE_POS);
-- iwl_set_bits_restricted_reg(priv, APMG_PCIDEV_STT_REG,
-+ iwl4965_set_bits_prph(priv, APMG_PCIDEV_STT_REG,
- APMG_PCIDEV_STT_VAL_L1_ACT_DIS);
+ sta_flags |= cpu_to_le32(
+- (u32)ht_info->mpdu_density << STA_FLG_AGG_MPDU_DENSITY_POS);
+-
+- sta_flags &= (~STA_FLG_FAT_EN_MSK);
+- ht_info->tx_chan_width = IWL_CHANNEL_WIDTH_20MHZ;
+- ht_info->chan_width_cap = IWL_CHANNEL_WIDTH_20MHZ;
++ (u32)sta_ht_inf->ampdu_density << STA_FLG_AGG_MPDU_DENSITY_POS);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- }
+- if (iwl_is_fat_tx_allowed(priv, ht_info)) {
++ if (iwl4965_is_fat_tx_allowed(priv, sta_ht_inf))
+ sta_flags |= STA_FLG_FAT_EN_MSK;
+- ht_info->chan_width_cap = IWL_CHANNEL_WIDTH_40MHZ;
+- if (ht_info->supported_chan_width == IWL_CHANNEL_WIDTH_40MHZ)
+- ht_info->tx_chan_width = IWL_CHANNEL_WIDTH_40MHZ;
+- }
+- priv->current_channel_width = ht_info->tx_chan_width;
++ else
++ sta_flags &= (~STA_FLG_FAT_EN_MSK);
++
+ priv->stations[index].sta.station_flags = sta_flags;
+ done:
+ return;
+ }
- clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
-@@ -684,7 +707,7 @@ int iwl_hw_nic_reset(struct iwl_priv *priv)
- */
- static void iwl4965_bg_statistics_periodic(unsigned long data)
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+-
+-static void iwl4965_sta_modify_add_ba_tid(struct iwl_priv *priv,
++static void iwl4965_sta_modify_add_ba_tid(struct iwl4965_priv *priv,
+ int sta_id, int tid, u16 ssn)
{
-- struct iwl_priv *priv = (struct iwl_priv *)data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)data;
+ unsigned long flags;
+@@ -4488,10 +4699,10 @@ static void iwl4965_sta_modify_add_ba_tid(struct iwl_priv *priv,
+ priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
- queue_work(priv->workqueue, &priv->statistics_work);
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
}
-@@ -692,27 +715,27 @@ static void iwl4965_bg_statistics_periodic(unsigned long data)
- /**
- * iwl4965_bg_statistics_work - Send the statistics request to the hardware.
- *
-- * This is queued by iwl_bg_statistics_periodic.
-+ * This is queued by iwl4965_bg_statistics_periodic.
- */
- static void iwl4965_bg_statistics_work(struct work_struct *work)
- {
-- struct iwl_priv *priv = container_of(work, struct iwl_priv,
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
- statistics_work);
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+-static void iwl4965_sta_modify_del_ba_tid(struct iwl_priv *priv,
++static void iwl4965_sta_modify_del_ba_tid(struct iwl4965_priv *priv,
+ int sta_id, int tid)
+ {
+ unsigned long flags;
+@@ -4503,9 +4714,39 @@ static void iwl4965_sta_modify_del_ba_tid(struct iwl_priv *priv,
+ priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
- mutex_lock(&priv->mutex);
-- iwl_send_statistics_request(priv);
-+ iwl4965_send_statistics_request(priv);
- mutex_unlock(&priv->mutex);
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
++}
++
++int iwl4965_mac_ampdu_action(struct ieee80211_hw *hw,
++ enum ieee80211_ampdu_mlme_action action,
++ const u8 *addr, u16 tid, u16 ssn)
++{
++ struct iwl4965_priv *priv = hw->priv;
++ int sta_id;
++ DECLARE_MAC_BUF(mac);
++
++ IWL_DEBUG_HT("A-MPDU action on da=%s tid=%d ",
++ print_mac(mac, addr), tid);
++ sta_id = iwl4965_hw_find_station(priv, addr);
++ switch (action) {
++ case IEEE80211_AMPDU_RX_START:
++ IWL_DEBUG_HT("start Rx\n");
++ iwl4965_sta_modify_add_ba_tid(priv, sta_id, tid, ssn);
++ break;
++ case IEEE80211_AMPDU_RX_STOP:
++ IWL_DEBUG_HT("stop Rx\n");
++ iwl4965_sta_modify_del_ba_tid(priv, sta_id, tid);
++ break;
++ default:
++ IWL_DEBUG_HT("unknown\n");
++ return -EINVAL;
++ break;
++ }
++ return 0;
}
- #define CT_LIMIT_CONST 259
- #define TM_CT_KILL_THRESHOLD 110
++#ifdef CONFIG_IWL4965_HT_AGG
++
+ static const u16 default_tid_to_tx_fifo[] = {
+ IWL_TX_FIFO_AC1,
+ IWL_TX_FIFO_AC0,
+@@ -4526,7 +4767,13 @@ static const u16 default_tid_to_tx_fifo[] = {
+ IWL_TX_FIFO_AC3
+ };
--void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
-+void iwl4965_rf_kill_ct_config(struct iwl4965_priv *priv)
+-static int iwl_txq_ctx_activate_free(struct iwl_priv *priv)
++/*
++ * Find first available (lowest unused) Tx Queue, mark it "active".
++ * Called only when finding queue for aggregation.
++ * Should never return anything < 7, because they should already
++ * be in use as EDCA AC (0-3), Command (4), HCCA (5, 6).
++ */
++static int iwl4965_txq_ctx_activate_free(struct iwl4965_priv *priv)
{
-- struct iwl_ct_kill_config cmd;
-+ struct iwl4965_ct_kill_config cmd;
- u32 R1, R2, R3;
- u32 temp_th;
- u32 crit_temperature;
-@@ -720,7 +743,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
- int rc = 0;
-
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
- CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT);
- spin_unlock_irqrestore(&priv->lock, flags);
-
-@@ -738,7 +761,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
+ int txq_id;
- crit_temperature = ((temp_th * (R3-R1))/CT_LIMIT_CONST) + R2;
- cmd.critical_temperature_R = cpu_to_le32(crit_temperature);
-- rc = iwl_send_cmd_pdu(priv,
-+ rc = iwl4965_send_cmd_pdu(priv,
- REPLY_CT_KILL_CONFIG_CMD, sizeof(cmd), &cmd);
- if (rc)
- IWL_ERROR("REPLY_CT_KILL_CONFIG_CMD failed\n");
-@@ -746,7 +769,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
- IWL_DEBUG_INFO("REPLY_CT_KILL_CONFIG_CMD succeeded\n");
+@@ -4536,55 +4783,65 @@ static int iwl_txq_ctx_activate_free(struct iwl_priv *priv)
+ return -1;
}
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
-
- /* "false alarms" are signals that our DSP tries to lock onto,
- * but then determines that they are either noise, or transmissions
-@@ -756,7 +779,7 @@ void iwl4965_rf_kill_ct_config(struct iwl_priv *priv)
- * enough to receive all of our own network traffic, but not so
- * high that our DSP gets too busy trying to lock onto non-network
- * activity/noise. */
--static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
-+static int iwl4965_sens_energy_cck(struct iwl4965_priv *priv,
- u32 norm_fa,
- u32 rx_enable_time,
- struct statistics_general_data *rx_info)
-@@ -782,7 +805,7 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
- u32 false_alarms = norm_fa * 200 * 1024;
- u32 max_false_alarms = MAX_FA_CCK * rx_enable_time;
- u32 min_false_alarms = MIN_FA_CCK * rx_enable_time;
-- struct iwl_sensitivity_data *data = NULL;
-+ struct iwl4965_sensitivity_data *data = NULL;
+-int iwl_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da, u16 tid,
++int iwl4965_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da, u16 tid,
+ u16 *start_seq_num)
+ {
- data = &(priv->sensitivity_data);
+- struct iwl_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
+ int sta_id;
+ int tx_fifo;
+ int txq_id;
+ int ssn = -1;
+ unsigned long flags;
+- struct iwl_tid_data *tid_data;
++ struct iwl4965_tid_data *tid_data;
+ DECLARE_MAC_BUF(mac);
-@@ -792,11 +815,11 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
- * This is background noise, which may include transmissions from other
- * networks, measured during silence before our network's beacon */
- silence_rssi_a = (u8)((rx_info->beacon_silence_rssi_a &
-- ALL_BAND_FILTER)>>8);
-+ ALL_BAND_FILTER) >> 8);
- silence_rssi_b = (u8)((rx_info->beacon_silence_rssi_b &
-- ALL_BAND_FILTER)>>8);
-+ ALL_BAND_FILTER) >> 8);
- silence_rssi_c = (u8)((rx_info->beacon_silence_rssi_c &
-- ALL_BAND_FILTER)>>8);
-+ ALL_BAND_FILTER) >> 8);
++ /* Determine Tx DMA/FIFO channel for this Traffic ID */
+ if (likely(tid < ARRAY_SIZE(default_tid_to_tx_fifo)))
+ tx_fifo = default_tid_to_tx_fifo[tid];
+ else
+ return -EINVAL;
- val = max(silence_rssi_b, silence_rssi_c);
- max_silence_rssi = max(silence_rssi_a, (u8) val);
-@@ -947,7 +970,7 @@ static int iwl4965_sens_energy_cck(struct iwl_priv *priv,
- }
+- IWL_WARNING("iwl-AGG iwl_mac_ht_tx_agg_start on da=%s"
++ IWL_WARNING("iwl-AGG iwl4965_mac_ht_tx_agg_start on da=%s"
+ " tid=%d\n", print_mac(mac, da), tid);
+- sta_id = iwl_hw_find_station(priv, da);
++ /* Get index into station table */
++ sta_id = iwl4965_hw_find_station(priv, da);
+ if (sta_id == IWL_INVALID_STATION)
+ return -ENXIO;
--static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
-+static int iwl4965_sens_auto_corr_ofdm(struct iwl4965_priv *priv,
- u32 norm_fa,
- u32 rx_enable_time)
- {
-@@ -955,7 +978,7 @@ static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
- u32 false_alarms = norm_fa * 200 * 1024;
- u32 max_false_alarms = MAX_FA_OFDM * rx_enable_time;
- u32 min_false_alarms = MIN_FA_OFDM * rx_enable_time;
-- struct iwl_sensitivity_data *data = NULL;
-+ struct iwl4965_sensitivity_data *data = NULL;
+- txq_id = iwl_txq_ctx_activate_free(priv);
++ /* Find available Tx queue for aggregation */
++ txq_id = iwl4965_txq_ctx_activate_free(priv);
+ if (txq_id == -1)
+ return -ENXIO;
- data = &(priv->sensitivity_data);
+ spin_lock_irqsave(&priv->sta_lock, flags);
+ tid_data = &priv->stations[sta_id].tid[tid];
++
++ /* Get starting sequence number for 1st frame in block ack window.
++ * We'll use least signif byte as 1st frame's index into Tx queue. */
+ ssn = SEQ_TO_SN(tid_data->seq_number);
+ tid_data->agg.txq_id = txq_id;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
-@@ -1012,22 +1035,22 @@ static int iwl4965_sens_auto_corr_ofdm(struct iwl_priv *priv,
- return 0;
+ *start_seq_num = ssn;
++
++ /* Update driver's link quality manager */
+ iwl4965_ba_status(priv, tid, BA_STATUS_ACTIVE);
++
++ /* Set up and enable aggregation for selected Tx queue and FIFO */
+ return iwl4965_tx_queue_agg_enable(priv, txq_id, tx_fifo,
+ sta_id, tid, ssn);
}
--static int iwl_sensitivity_callback(struct iwl_priv *priv,
-- struct iwl_cmd *cmd, struct sk_buff *skb)
-+static int iwl4965_sensitivity_callback(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd, struct sk_buff *skb)
- {
- /* We didn't cache the SKB; let the caller free it */
- return 1;
- }
- /* Prepare a SENSITIVITY_CMD, send to uCode if values have changed */
--static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
-+static int iwl4965_sensitivity_write(struct iwl4965_priv *priv, u8 flags)
+-int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
++int iwl4965_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
+ int generator)
{
- int rc = 0;
-- struct iwl_sensitivity_cmd cmd ;
-- struct iwl_sensitivity_data *data = NULL;
-- struct iwl_host_cmd cmd_out = {
-+ struct iwl4965_sensitivity_cmd cmd ;
-+ struct iwl4965_sensitivity_data *data = NULL;
-+ struct iwl4965_host_cmd cmd_out = {
- .id = SENSITIVITY_CMD,
-- .len = sizeof(struct iwl_sensitivity_cmd),
-+ .len = sizeof(struct iwl4965_sensitivity_cmd),
- .meta.flags = flags,
- .data = &cmd,
- };
-@@ -1071,10 +1094,11 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
- data->auto_corr_cck, data->auto_corr_cck_mrc,
- data->nrg_th_cck);
-+ /* Update uCode's "work" table, and copy it to DSP */
- cmd.control = SENSITIVITY_CMD_CONTROL_WORK_TABLE;
+- struct iwl_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
+ int tx_fifo_id, txq_id, sta_id, ssn = -1;
+- struct iwl_tid_data *tid_data;
++ struct iwl4965_tid_data *tid_data;
+ int rc;
+ DECLARE_MAC_BUF(mac);
- if (flags & CMD_ASYNC)
-- cmd_out.meta.u.callback = iwl_sensitivity_callback;
-+ cmd_out.meta.u.callback = iwl4965_sensitivity_callback;
+@@ -4598,7 +4855,7 @@ int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
+ else
+ return -EINVAL;
- /* Don't send command to uCode if nothing has changed */
- if (!memcmp(&cmd.table[0], &(priv->sensitivity_tbl[0]),
-@@ -1087,7 +1111,7 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
- memcpy(&(priv->sensitivity_tbl[0]), &(cmd.table[0]),
- sizeof(u16)*HD_TABLE_SIZE);
+- sta_id = iwl_hw_find_station(priv, da);
++ sta_id = iwl4965_hw_find_station(priv, da);
-- rc = iwl_send_cmd(priv, &cmd_out);
-+ rc = iwl4965_send_cmd(priv, &cmd_out);
- if (!rc) {
- IWL_DEBUG_CALIB("SENSITIVITY_CMD succeeded\n");
+ if (sta_id == IWL_INVALID_STATION)
+ return -ENXIO;
+@@ -4613,45 +4870,18 @@ int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
return rc;
-@@ -1096,11 +1120,11 @@ static int iwl4965_sensitivity_write(struct iwl_priv *priv, u8 flags)
+
+ iwl4965_ba_status(priv, tid, BA_STATUS_INITIATOR_DELBA);
+- IWL_DEBUG_INFO("iwl_mac_ht_tx_agg_stop on da=%s tid=%d\n",
++ IWL_DEBUG_INFO("iwl4965_mac_ht_tx_agg_stop on da=%s tid=%d\n",
+ print_mac(mac, da), tid);
+
return 0;
}
--void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
-+void iwl4965_init_sensitivity(struct iwl4965_priv *priv, u8 flags, u8 force)
- {
- int rc = 0;
- int i;
-- struct iwl_sensitivity_data *data = NULL;
-+ struct iwl4965_sensitivity_data *data = NULL;
+-int iwl_mac_ht_rx_agg_start(struct ieee80211_hw *hw, u8 *da,
+- u16 tid, u16 start_seq_num)
+-{
+- struct iwl_priv *priv = hw->priv;
+- int sta_id;
+- DECLARE_MAC_BUF(mac);
- IWL_DEBUG_CALIB("Start iwl4965_init_sensitivity\n");
+- IWL_WARNING("iwl-AGG iwl_mac_ht_rx_agg_start on da=%s"
+- " tid=%d\n", print_mac(mac, da), tid);
+- sta_id = iwl_hw_find_station(priv, da);
+- iwl4965_sta_modify_add_ba_tid(priv, sta_id, tid, start_seq_num);
+- return 0;
+-}
+-
+-int iwl_mac_ht_rx_agg_stop(struct ieee80211_hw *hw, u8 *da,
+- u16 tid, int generator)
+-{
+- struct iwl_priv *priv = hw->priv;
+- int sta_id;
+- DECLARE_MAC_BUF(mac);
+-
+- IWL_WARNING("iwl-AGG iwl_mac_ht_rx_agg_stop on da=%s tid=%d\n",
+- print_mac(mac, da), tid);
+- sta_id = iwl_hw_find_station(priv, da);
+- iwl4965_sta_modify_del_ba_tid(priv, sta_id, tid);
+- return 0;
+-}
+-
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
-@@ -1110,7 +1134,7 @@ void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
+ /* Set up 4965-specific Rx frame reply handlers */
+-void iwl_hw_rx_handler_setup(struct iwl_priv *priv)
++void iwl4965_hw_rx_handler_setup(struct iwl4965_priv *priv)
+ {
+ /* Legacy Rx frames */
+ priv->rx_handlers[REPLY_4965_RX] = iwl4965_rx_reply_rx;
+@@ -4663,57 +4893,66 @@ void iwl_hw_rx_handler_setup(struct iwl_priv *priv)
+ priv->rx_handlers[MISSED_BEACONS_NOTIFICATION] =
+ iwl4965_rx_missed_beacon_notif;
- /* Clear driver's sensitivity algo data */
- data = &(priv->sensitivity_data);
-- memset(data, 0, sizeof(struct iwl_sensitivity_data));
-+ memset(data, 0, sizeof(struct iwl4965_sensitivity_data));
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ priv->rx_handlers[REPLY_COMPRESSED_BA] = iwl4965_rx_reply_compressed_ba;
+-#endif /* CONFIG_IWLWIFI_AGG */
+-#endif /* CONFIG_IWLWIFI */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
+ }
- data->num_in_cck_no_fa = 0;
- data->nrg_curr_state = IWL_FA_TOO_MANY;
-@@ -1154,21 +1178,21 @@ void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags, u8 force)
- /* Reset differential Rx gains in NIC to prepare for chain noise calibration.
- * Called after every association, but this runs only once!
- * ... once chain noise is calibrated the first time, it's good forever. */
--void iwl4965_chain_noise_reset(struct iwl_priv *priv)
-+void iwl4965_chain_noise_reset(struct iwl4965_priv *priv)
+-void iwl_hw_setup_deferred_work(struct iwl_priv *priv)
++void iwl4965_hw_setup_deferred_work(struct iwl4965_priv *priv)
{
-- struct iwl_chain_noise_data *data = NULL;
-+ struct iwl4965_chain_noise_data *data = NULL;
- int rc = 0;
-
- data = &(priv->chain_noise_data);
-- if ((data->state == IWL_CHAIN_NOISE_ALIVE) && iwl_is_associated(priv)) {
-- struct iwl_calibration_cmd cmd;
-+ if ((data->state == IWL_CHAIN_NOISE_ALIVE) && iwl4965_is_associated(priv)) {
-+ struct iwl4965_calibration_cmd cmd;
+ INIT_WORK(&priv->txpower_work, iwl4965_bg_txpower_work);
+ INIT_WORK(&priv->statistics_work, iwl4965_bg_statistics_work);
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ INIT_WORK(&priv->sensitivity_work, iwl4965_bg_sensitivity_work);
+ #endif
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ INIT_WORK(&priv->agg_work, iwl4965_bg_agg_work);
+-#endif /* CONFIG_IWLWIFI_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
+ init_timer(&priv->statistics_periodic);
+ priv->statistics_periodic.data = (unsigned long)priv;
+ priv->statistics_periodic.function = iwl4965_bg_statistics_periodic;
+ }
- memset(&cmd, 0, sizeof(cmd));
- cmd.opCode = PHY_CALIBRATE_DIFF_GAIN_CMD;
- cmd.diff_gain_a = 0;
- cmd.diff_gain_b = 0;
- cmd.diff_gain_c = 0;
-- rc = iwl_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
- sizeof(cmd), &cmd);
- msleep(4);
- data->state = IWL_CHAIN_NOISE_ACCUMULATE;
-@@ -1183,10 +1207,10 @@ void iwl4965_chain_noise_reset(struct iwl_priv *priv)
- * 1) Which antennas are connected.
- * 2) Differential rx gain settings to balance the 3 receivers.
- */
--static void iwl4965_noise_calibration(struct iwl_priv *priv,
-- struct iwl_notif_statistics *stat_resp)
-+static void iwl4965_noise_calibration(struct iwl4965_priv *priv,
-+ struct iwl4965_notif_statistics *stat_resp)
+-void iwl_hw_cancel_deferred_work(struct iwl_priv *priv)
++void iwl4965_hw_cancel_deferred_work(struct iwl4965_priv *priv)
{
-- struct iwl_chain_noise_data *data = NULL;
-+ struct iwl4965_chain_noise_data *data = NULL;
- int rc = 0;
+ del_timer_sync(&priv->statistics_periodic);
- u32 chain_noise_a;
-@@ -1385,7 +1409,7 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
+ cancel_delayed_work(&priv->init_alive_start);
+ }
- /* Differential gain gets sent to uCode only once */
- if (!data->radio_write) {
-- struct iwl_calibration_cmd cmd;
-+ struct iwl4965_calibration_cmd cmd;
- data->radio_write = 1;
+-struct pci_device_id iwl_hw_card_ids[] = {
+- {0x8086, 0x4229, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+- {0x8086, 0x4230, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
++struct pci_device_id iwl4965_hw_card_ids[] = {
++ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4229)},
++ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4230)},
+ {0}
+ };
- memset(&cmd, 0, sizeof(cmd));
-@@ -1393,7 +1417,7 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
- cmd.diff_gain_a = data->delta_gain_code[0];
- cmd.diff_gain_b = data->delta_gain_code[1];
- cmd.diff_gain_c = data->delta_gain_code[2];
-- rc = iwl_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_PHY_CALIBRATION_CMD,
- sizeof(cmd), &cmd);
- if (rc)
- IWL_DEBUG_CALIB("fail sending cmd "
-@@ -1416,8 +1440,8 @@ static void iwl4965_noise_calibration(struct iwl_priv *priv,
- return;
+-int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv)
++/*
++ * The device's EEPROM semaphore prevents conflicts between driver and uCode
++ * when accessing the EEPROM; each access is a series of pulses to/from the
++ * EEPROM chip, not a single event, so even reads could conflict if they
++ * weren't arbitrated by the semaphore.
++ */
++int iwl4965_eeprom_acquire_semaphore(struct iwl4965_priv *priv)
+ {
+ u16 count;
+ int rc;
+
+ for (count = 0; count < EEPROM_SEM_RETRY_LIMIT; count++) {
+- iwl_set_bit(priv, CSR_HW_IF_CONFIG_REG,
++ /* Request semaphore */
++ iwl4965_set_bit(priv, CSR_HW_IF_CONFIG_REG,
+ CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM);
+- rc = iwl_poll_bit(priv, CSR_HW_IF_CONFIG_REG,
++
++ /* See if we got it */
++ rc = iwl4965_poll_bit(priv, CSR_HW_IF_CONFIG_REG,
+ CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM,
+ CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM,
+ EEPROM_SEM_TIMEOUT);
+ if (rc >= 0) {
+- IWL_DEBUG_IO("Aqcuired semaphore after %d tries.\n",
++ IWL_DEBUG_IO("Acquired semaphore after %d tries.\n",
+ count+1);
+ return rc;
+ }
+@@ -4722,11 +4961,11 @@ int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv)
+ return rc;
}
--static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
-- struct iwl_notif_statistics *resp)
-+static void iwl4965_sensitivity_calibration(struct iwl4965_priv *priv,
-+ struct iwl4965_notif_statistics *resp)
+-inline void iwl_eeprom_release_semaphore(struct iwl_priv *priv)
++inline void iwl4965_eeprom_release_semaphore(struct iwl4965_priv *priv)
{
- int rc = 0;
- u32 rx_enable_time;
-@@ -1427,7 +1451,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
- u32 bad_plcp_ofdm;
- u32 norm_fa_ofdm;
- u32 norm_fa_cck;
-- struct iwl_sensitivity_data *data = NULL;
-+ struct iwl4965_sensitivity_data *data = NULL;
- struct statistics_rx_non_phy *rx_info = &(resp->rx.general);
- struct statistics_rx *statistics = &(resp->rx);
- unsigned long flags;
-@@ -1435,7 +1459,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+- iwl_clear_bit(priv, CSR_HW_IF_CONFIG_REG,
++ iwl4965_clear_bit(priv, CSR_HW_IF_CONFIG_REG,
+ CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM);
+ }
- data = &(priv->sensitivity_data);
-- if (!iwl_is_associated(priv)) {
-+ if (!iwl4965_is_associated(priv)) {
- IWL_DEBUG_CALIB("<< - not associated\n");
- return;
- }
-@@ -1523,7 +1547,7 @@ static void iwl4965_sensitivity_calibration(struct iwl_priv *priv,
+-MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);
++MODULE_DEVICE_TABLE(pci, iwl4965_hw_card_ids);
+diff --git a/drivers/net/wireless/iwlwifi/iwl-4965.h b/drivers/net/wireless/iwlwifi/iwl-4965.h
+index 4c70081..78bc148 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-4965.h
++++ b/drivers/net/wireless/iwlwifi/iwl-4965.h
+@@ -23,64 +23,777 @@
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ *
+ *****************************************************************************/
++/*
++ * Please use this file (iwl-4965.h) for driver implementation definitions.
++ * Please use iwl-4965-commands.h for uCode API definitions.
++ * Please use iwl-4965-hw.h for hardware-related definitions.
++ */
++
+ #ifndef __iwl_4965_h__
+ #define __iwl_4965_h__
- static void iwl4965_bg_sensitivity_work(struct work_struct *work)
- {
-- struct iwl_priv *priv = container_of(work, struct iwl_priv,
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
- sensitivity_work);
+-struct iwl_priv;
+-struct sta_ht_info;
++#include <linux/pci.h> /* for struct pci_device_id */
++#include <linux/kernel.h>
++#include <net/ieee80211_radiotap.h>
++
++/* Hardware specific file defines the PCI IDs table for that hardware module */
++extern struct pci_device_id iwl4965_hw_card_ids[];
++
++#define DRV_NAME "iwl4965"
++#include "iwl-4965-hw.h"
++#include "iwl-prph.h"
++#include "iwl-4965-debug.h"
++
++/* Default noise level to report when noise measurement is not available.
++ * This may be because we're:
++ * 1) Not associated (4965, no beacon statistics being sent to driver)
++ * 2) Scanning (noise measurement does not apply to associated channel)
++ * 3) Receiving CCK (3945 delivers noise info only for OFDM frames)
++ * Use default noise value of -127 ... this is below the range of measurable
++ * Rx dBm for either 3945 or 4965, so it can indicate "unmeasurable" to user.
++ * Also, -127 works better than 0 when averaging frames with/without
++ * noise info (e.g. averaging might be done in app); measured dBm values are
++ * always negative ... using a negative value as the default keeps all
++ * averages within an s8's (used in some apps) range of negative values. */
++#define IWL_NOISE_MEAS_NOT_AVAILABLE (-127)
++
++/* Module parameters accessible from iwl-*.c */
++extern int iwl4965_param_hwcrypto;
++extern int iwl4965_param_queues_num;
++extern int iwl4965_param_amsdu_size_8K;
++
++enum iwl4965_antenna {
++ IWL_ANTENNA_DIVERSITY,
++ IWL_ANTENNA_MAIN,
++ IWL_ANTENNA_AUX
++};
++
++/*
++ * RTS threshold here is total size [2347] minus 4 FCS bytes
++ * Per spec:
++ * a value of 0 means RTS on all data/management packets
++ * a value > max MSDU size means no RTS
++ * else RTS for data/management frames where MPDU is larger
++ * than RTS value.
++ */
++#define DEFAULT_RTS_THRESHOLD 2347U
++#define MIN_RTS_THRESHOLD 0U
++#define MAX_RTS_THRESHOLD 2347U
++#define MAX_MSDU_SIZE 2304U
++#define MAX_MPDU_SIZE 2346U
++#define DEFAULT_BEACON_INTERVAL 100U
++#define DEFAULT_SHORT_RETRY_LIMIT 7U
++#define DEFAULT_LONG_RETRY_LIMIT 4U
++
++struct iwl4965_rx_mem_buffer {
++ dma_addr_t dma_addr;
++ struct sk_buff *skb;
++ struct list_head list;
++};
++
++/*
++ * Generic queue structure
++ *
++ * Contains common data for Rx and Tx queues
++ */
++struct iwl4965_queue {
++ int n_bd; /* number of BDs in this queue */
++	int write_ptr;       /* first empty entry (index), host write ptr */
++	int read_ptr;        /* last used entry (index), host read ptr */
++ dma_addr_t dma_addr; /* physical addr for BD's */
++ int n_window; /* safe queue window */
++ u32 id;
++ int low_mark; /* low watermark, resume queue if free
++ * space more than this */
++ int high_mark; /* high watermark, stop queue if free
++ * space less than this */
++} __attribute__ ((packed));
++
++#define MAX_NUM_OF_TBS (20)
++
++/* One for each TFD */
++struct iwl4965_tx_info {
++ struct ieee80211_tx_status status;
++ struct sk_buff *skb[MAX_NUM_OF_TBS];
++};
++
++/**
++ * struct iwl4965_tx_queue - Tx Queue for DMA
++ * @q: generic Rx/Tx queue descriptor
++ * @bd: base of circular buffer of TFDs
++ * @cmd: array of command/Tx buffers
++ * @dma_addr_cmd: physical address of cmd/tx buffer array
++ * @txb: array of per-TFD driver data
++ * @need_update: indicates need to update read/write index
++ * @sched_retry: indicates queue uses high-throughput aggregation (HT AGG)
++ *
++ * A Tx queue consists of circular buffer of BDs (a.k.a. TFDs, transmit frame
++ * descriptors) and required locking structures.
++ */
++struct iwl4965_tx_queue {
++ struct iwl4965_queue q;
++ struct iwl4965_tfd_frame *bd;
++ struct iwl4965_cmd *cmd;
++ dma_addr_t dma_addr_cmd;
++ struct iwl4965_tx_info *txb;
++ int need_update;
++ int sched_retry;
++ int active;
++};
++
++#define IWL_NUM_SCAN_RATES (2)
++
++struct iwl4965_channel_tgd_info {
++ u8 type;
++ s8 max_power;
++};
++
++struct iwl4965_channel_tgh_info {
++ s64 last_radar_time;
++};
++
++/* current Tx power values to use, one for each rate for each channel.
++ * requested power is limited by:
++ * -- regulatory EEPROM limits for this channel
++ * -- hardware capabilities (clip-powers)
++ * -- spectrum management
++ * -- user preference (e.g. iwconfig)
++ * when requested power is set, base power index must also be set. */
++struct iwl4965_channel_power_info {
++ struct iwl4965_tx_power tpc; /* actual radio and DSP gain settings */
++	s8 power_table_index;	/* actual (compensated) index into gain table */
++ s8 base_power_index; /* gain index for power at factory temp. */
++ s8 requested_power; /* power (dBm) requested for this chnl/rate */
++};
++
++/* current scan Tx power values to use, one for each scan rate for each
++ * channel. */
++struct iwl4965_scan_power_info {
++ struct iwl4965_tx_power tpc; /* actual radio and DSP gain settings */
++	s8 power_table_index;	/* actual (compensated) index into gain table */
++ s8 requested_power; /* scan pwr (dBm) requested for chnl/rate */
++};
++
++/* For fat_extension_channel */
++enum {
++ HT_IE_EXT_CHANNEL_NONE = 0,
++ HT_IE_EXT_CHANNEL_ABOVE,
++ HT_IE_EXT_CHANNEL_INVALID,
++ HT_IE_EXT_CHANNEL_BELOW,
++ HT_IE_EXT_CHANNEL_MAX
++};
++
++/*
++ * One for each channel, holds all channel setup data
++ * Some of the fields (e.g. eeprom and flags/max_power_avg) are redundant
++ * with one another!
++ */
++#define IWL4965_MAX_RATE (33)
++
++struct iwl4965_channel_info {
++ struct iwl4965_channel_tgd_info tgd;
++ struct iwl4965_channel_tgh_info tgh;
++ struct iwl4965_eeprom_channel eeprom; /* EEPROM regulatory limit */
++ struct iwl4965_eeprom_channel fat_eeprom; /* EEPROM regulatory limit for
++ * FAT channel */
++
++ u8 channel; /* channel number */
++ u8 flags; /* flags copied from EEPROM */
++ s8 max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
++ s8 curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) limit */
++ s8 min_power; /* always 0 */
++ s8 scan_power; /* (dBm) regul. eeprom, direct scans, any rate */
++
++ u8 group_index; /* 0-4, maps channel to group1/2/3/4/5 */
++ u8 band_index; /* 0-4, maps channel to band1/2/3/4/5 */
++ u8 phymode; /* MODE_IEEE80211{A,B,G} */
++
++ /* Radio/DSP gain settings for each "normal" data Tx rate.
++ * These include, in addition to RF and DSP gain, a few fields for
++ * remembering/modifying gain settings (indexes). */
++ struct iwl4965_channel_power_info power_info[IWL4965_MAX_RATE];
++
++ /* FAT channel info */
++ s8 fat_max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
++ s8 fat_curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
++ s8 fat_min_power; /* always 0 */
++ s8 fat_scan_power; /* (dBm) eeprom, direct scans, any rate */
++ u8 fat_flags; /* flags copied from EEPROM */
++ u8 fat_extension_channel; /* HT_IE_EXT_CHANNEL_* */
++
++ /* Radio/DSP gain settings for each scan rate, for directed scans. */
++ struct iwl4965_scan_power_info scan_pwr_info[IWL_NUM_SCAN_RATES];
++};
++
++struct iwl4965_clip_group {
++ /* maximum power level to prevent clipping for each rate, derived by
++ * us from this band's saturation power in EEPROM */
++ const s8 clip_powers[IWL_MAX_RATES];
++};
++
++#include "iwl-4965-rs.h"
++
++#define IWL_TX_FIFO_AC0 0
++#define IWL_TX_FIFO_AC1 1
++#define IWL_TX_FIFO_AC2 2
++#define IWL_TX_FIFO_AC3 3
++#define IWL_TX_FIFO_HCCA_1 5
++#define IWL_TX_FIFO_HCCA_2 6
++#define IWL_TX_FIFO_NONE 7
++
++/* Minimum number of queues. MAX_NUM is defined in hw specific files */
++#define IWL_MIN_NUM_QUEUES 4
++
++/* Power management (not Tx power) structures */
++
++struct iwl4965_power_vec_entry {
++ struct iwl4965_powertable_cmd cmd;
++ u8 no_dtim;
++};
++#define IWL_POWER_RANGE_0 (0)
++#define IWL_POWER_RANGE_1 (1)
++
++#define IWL_POWER_MODE_CAM 0x00 /* Continuously Aware Mode, always on */
++#define IWL_POWER_INDEX_3 0x03
++#define IWL_POWER_INDEX_5 0x05
++#define IWL_POWER_AC 0x06
++#define IWL_POWER_BATTERY 0x07
++#define IWL_POWER_LIMIT 0x07
++#define IWL_POWER_MASK 0x0F
++#define IWL_POWER_ENABLED 0x10
++#define IWL_POWER_LEVEL(x) ((x) & IWL_POWER_MASK)
++
++struct iwl4965_power_mgr {
++ spinlock_t lock;
++ struct iwl4965_power_vec_entry pwr_range_0[IWL_POWER_AC];
++ struct iwl4965_power_vec_entry pwr_range_1[IWL_POWER_AC];
++ u8 active_index;
++ u32 dtim_val;
++};
++
++#define IEEE80211_DATA_LEN 2304
++#define IEEE80211_4ADDR_LEN 30
++#define IEEE80211_HLEN (IEEE80211_4ADDR_LEN)
++#define IEEE80211_FRAME_LEN (IEEE80211_DATA_LEN + IEEE80211_HLEN)
++
++struct iwl4965_frame {
++ union {
++ struct ieee80211_hdr frame;
++ struct iwl4965_tx_beacon_cmd beacon;
++ u8 raw[IEEE80211_FRAME_LEN];
++ u8 cmd[360];
++ } u;
++ struct list_head list;
++};
++
++#define SEQ_TO_QUEUE(x) ((x >> 8) & 0xbf)
++#define QUEUE_TO_SEQ(x) ((x & 0xbf) << 8)
++#define SEQ_TO_INDEX(x) (x & 0xff)
++#define INDEX_TO_SEQ(x) (x & 0xff)
++#define SEQ_HUGE_FRAME (0x4000)
++#define SEQ_RX_FRAME __constant_cpu_to_le16(0x8000)
++#define SEQ_TO_SN(seq) (((seq) & IEEE80211_SCTL_SEQ) >> 4)
++#define SN_TO_SEQ(ssn) (((ssn) << 4) & IEEE80211_SCTL_SEQ)
++#define MAX_SN ((IEEE80211_SCTL_SEQ) >> 4)
++
++enum {
++ /* CMD_SIZE_NORMAL = 0, */
++ CMD_SIZE_HUGE = (1 << 0),
++ /* CMD_SYNC = 0, */
++ CMD_ASYNC = (1 << 1),
++ /* CMD_NO_SKB = 0, */
++ CMD_WANT_SKB = (1 << 2),
++};
++
++struct iwl4965_cmd;
++struct iwl4965_priv;
++
++struct iwl4965_cmd_meta {
++ struct iwl4965_cmd_meta *source;
++ union {
++ struct sk_buff *skb;
++ int (*callback)(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd, struct sk_buff *skb);
++ } __attribute__ ((packed)) u;
++
++ /* The CMD_SIZE_HUGE flag bit indicates that the command
++ * structure is stored at the end of the shared queue memory. */
++ u32 flags;
++
++} __attribute__ ((packed));
++
++/**
++ * struct iwl4965_cmd
++ *
++ * For allocation of the command and tx queues, this establishes the overall
++ * size of the largest command we send to uCode, except for a scan command
++ * (which is relatively huge; space is allocated separately).
++ */
++struct iwl4965_cmd {
++ struct iwl4965_cmd_meta meta; /* driver data */
++ struct iwl4965_cmd_header hdr; /* uCode API */
++ union {
++ struct iwl4965_addsta_cmd addsta;
++ struct iwl4965_led_cmd led;
++ u32 flags;
++ u8 val8;
++ u16 val16;
++ u32 val32;
++ struct iwl4965_bt_cmd bt;
++ struct iwl4965_rxon_time_cmd rxon_time;
++ struct iwl4965_powertable_cmd powertable;
++ struct iwl4965_qosparam_cmd qosparam;
++ struct iwl4965_tx_cmd tx;
++ struct iwl4965_tx_beacon_cmd tx_beacon;
++ struct iwl4965_rxon_assoc_cmd rxon_assoc;
++ u8 *indirect;
++ u8 payload[360];
++ } __attribute__ ((packed)) cmd;
++} __attribute__ ((packed));
++
++struct iwl4965_host_cmd {
++ u8 id;
++ u16 len;
++ struct iwl4965_cmd_meta meta;
++ const void *data;
++};
++
++#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl4965_cmd) - \
++ sizeof(struct iwl4965_cmd_meta))
++
++/*
++ * RX related structures and functions
++ */
++#define RX_FREE_BUFFERS 64
++#define RX_LOW_WATERMARK 8
++
++#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
++#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
++#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
++
++/**
++ * struct iwl4965_rx_queue - Rx queue
++ * @processed: Internal index to last handled Rx packet
++ * @read: Shared index to newest available Rx buffer
++ * @write: Shared index to oldest written Rx packet
++ * @free_count: Number of pre-allocated buffers in rx_free
++ * @rx_free: list of free SKBs for use
++ * @rx_used: List of Rx buffers with no SKB
++ * @need_update: flag to indicate we need to update read/write index
++ *
++ * NOTE: rx_free and rx_used are used as a FIFO for iwl4965_rx_mem_buffers
++ */
++struct iwl4965_rx_queue {
++ __le32 *bd;
++ dma_addr_t dma_addr;
++ struct iwl4965_rx_mem_buffer pool[RX_QUEUE_SIZE + RX_FREE_BUFFERS];
++ struct iwl4965_rx_mem_buffer *queue[RX_QUEUE_SIZE];
++ u32 processed;
++ u32 read;
++ u32 write;
++ u32 free_count;
++ struct list_head rx_free;
++ struct list_head rx_used;
++ int need_update;
++ spinlock_t lock;
++};
++
++#define IWL_SUPPORTED_RATES_IE_LEN 8
++
++#define SCAN_INTERVAL 100
++
++#define MAX_A_CHANNELS 252
++#define MIN_A_CHANNELS 7
++
++#define MAX_B_CHANNELS 14
++#define MIN_B_CHANNELS 1
++
++#define STATUS_HCMD_ACTIVE 0 /* host command in progress */
++#define STATUS_INT_ENABLED 1
++#define STATUS_RF_KILL_HW 2
++#define STATUS_RF_KILL_SW 3
++#define STATUS_INIT 4
++#define STATUS_ALIVE 5
++#define STATUS_READY 6
++#define STATUS_TEMPERATURE 7
++#define STATUS_GEO_CONFIGURED 8
++#define STATUS_EXIT_PENDING 9
++#define STATUS_IN_SUSPEND 10
++#define STATUS_STATISTICS 11
++#define STATUS_SCANNING 12
++#define STATUS_SCAN_ABORTING 13
++#define STATUS_SCAN_HW 14
++#define STATUS_POWER_PMI 15
++#define STATUS_FW_ERROR 16
++#define STATUS_CONF_PENDING 17
++
++#define MAX_TID_COUNT 9
++
++#define IWL_INVALID_RATE 0xFF
++#define IWL_INVALID_VALUE -1
++
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
++/**
++ * struct iwl4965_ht_agg -- aggregation status while waiting for block-ack
++ * @txq_id: Tx queue used for Tx attempt
++ * @frame_count: # frames attempted by Tx command
++ * @wait_for_ba: Expect block-ack before next Tx reply
++ * @start_idx: Index of 1st Transmit Frame Descriptor (TFD) in Tx window
++ * @bitmap0: Low order bitmap, one bit for each frame pending ACK in Tx window
++ * @bitmap1: High order, one bit for each frame pending ACK in Tx window
++ * @rate_n_flags: Rate at which Tx was attempted
++ *
++ * If REPLY_TX indicates that aggregation was attempted, driver must wait
++ * for block ack (REPLY_COMPRESSED_BA). This struct stores tx reply info
++ * until block ack arrives.
++ */
++struct iwl4965_ht_agg {
++ u16 txq_id;
++ u16 frame_count;
++ u16 wait_for_ba;
++ u16 start_idx;
++ u32 bitmap0;
++ u32 bitmap1;
++ u32 rate_n_flags;
++};
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
++
++struct iwl4965_tid_data {
++ u16 seq_number;
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
++ struct iwl4965_ht_agg agg;
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
++};
++
++struct iwl4965_hw_key {
++ enum ieee80211_key_alg alg;
++ int keylen;
++ u8 key[32];
++};
++
++union iwl4965_ht_rate_supp {
++ u16 rates;
++ struct {
++ u8 siso_rate;
++ u8 mimo_rate;
++ };
++};
++
++#ifdef CONFIG_IWL4965_HT
++#define CFG_HT_RX_AMPDU_FACTOR_DEF (0x3)
++#define CFG_HT_MPDU_DENSITY_2USEC (0x5)
++#define CFG_HT_MPDU_DENSITY_DEF CFG_HT_MPDU_DENSITY_2USEC
++
++struct iwl_ht_info {
++ /* self configuration data */
++ u8 is_ht;
++ u8 supported_chan_width;
++ u16 tx_mimo_ps_mode;
++ u8 is_green_field;
++ u8 sgf; /* HT_SHORT_GI_* short guard interval */
++ u8 max_amsdu_size;
++ u8 ampdu_factor;
++ u8 mpdu_density;
++ u8 supp_mcs_set[16];
++ /* BSS related data */
++ u8 control_channel;
++ u8 extension_chan_offset;
++ u8 tx_chan_width;
++ u8 ht_protection;
++ u8 non_GF_STA_present;
++};
++#endif /*CONFIG_IWL4965_HT */
++
++#ifdef CONFIG_IWL4965_QOS
++
++union iwl4965_qos_capabity {
++ struct {
++ u8 edca_count:4; /* bit 0-3 */
++ u8 q_ack:1; /* bit 4 */
++ u8 queue_request:1; /* bit 5 */
++ u8 txop_request:1; /* bit 6 */
++ u8 reserved:1; /* bit 7 */
++ } q_AP;
++ struct {
++ u8 acvo_APSD:1; /* bit 0 */
++ u8 acvi_APSD:1; /* bit 1 */
++ u8 ac_bk_APSD:1; /* bit 2 */
++ u8 ac_be_APSD:1; /* bit 3 */
++ u8 q_ack:1; /* bit 4 */
++ u8 max_len:2; /* bit 5-6 */
++ u8 more_data_ack:1; /* bit 7 */
++ } q_STA;
++ u8 val;
++};
++
++/* QoS structures */
++struct iwl4965_qos_info {
++ int qos_enable;
++ int qos_active;
++ union iwl4965_qos_capabity qos_cap;
++ struct iwl4965_qosparam_cmd def_qos_parm;
++};
++#endif /*CONFIG_IWL4965_QOS */
++
++#define STA_PS_STATUS_WAKE 0
++#define STA_PS_STATUS_SLEEP 1
++
++struct iwl4965_station_entry {
++ struct iwl4965_addsta_cmd sta;
++ struct iwl4965_tid_data tid[MAX_TID_COUNT];
++ u8 used;
++ u8 ps_status;
++ struct iwl4965_hw_key keyinfo;
++};
++
++/* one for each uCode image (inst/data, boot/init/runtime) */
++struct fw_desc {
++ void *v_addr; /* access by driver */
++ dma_addr_t p_addr; /* access by card's busmaster DMA */
++ u32 len; /* bytes */
++};
++
++/* uCode file layout */
++struct iwl4965_ucode {
++ __le32 ver; /* major/minor/subminor */
++ __le32 inst_size; /* bytes of runtime instructions */
++ __le32 data_size; /* bytes of runtime data */
++ __le32 init_size; /* bytes of initialization instructions */
++ __le32 init_data_size; /* bytes of initialization data */
++ __le32 boot_size; /* bytes of bootstrap instructions */
++ u8 data[0]; /* data in same order as "size" elements */
++};
++
++#define IWL_IBSS_MAC_HASH_SIZE 32
++
++struct iwl4965_ibss_seq {
++ u8 mac[ETH_ALEN];
++ u16 seq_num;
++ u16 frag_num;
++ unsigned long packet_time;
++ struct list_head list;
++};
++
++/**
++ * struct iwl4965_driver_hw_info
++ * @max_txq_num: Max # Tx queues supported
++ * @ac_queue_count: # Tx queues for EDCA Access Categories (AC)
++ * @tx_cmd_len: Size of Tx command (but not including frame itself)
++ * @max_rxq_size: Max # Rx frames in Rx queue (must be power-of-2)
++ * @rx_buf_size: Size in bytes of each Rx buffer
++ * @max_rxq_log: Log-base-2 of max_rxq_size
++ * @max_stations: Max # stations in the station table
++ * @bcast_sta_id: Station table entry reserved for the broadcast station
++ * @shared_virt: Pointer to driver/uCode shared Tx Byte Counts and Rx status
++ * @shared_phys: Physical Pointer to Tx Byte Counts and Rx status
++ */
++struct iwl4965_driver_hw_info {
++ u16 max_txq_num;
++ u16 ac_queue_count;
++ u16 tx_cmd_len;
++ u16 max_rxq_size;
++ u32 rx_buf_size;
++ u32 max_pkt_size;
++ u16 max_rxq_log;
++ u8 max_stations;
++ u8 bcast_sta_id;
++ void *shared_virt;
++ dma_addr_t shared_phys;
++};
++
++#define HT_SHORT_GI_20MHZ_ONLY (1 << 0)
++#define HT_SHORT_GI_40MHZ_ONLY (1 << 1)
++
++
++#define IWL_RX_HDR(x) ((struct iwl4965_rx_frame_hdr *)(\
++ x->u.rx_frame.stats.payload + \
++ x->u.rx_frame.stats.phy_count))
++#define IWL_RX_END(x) ((struct iwl4965_rx_frame_end *)(\
++ IWL_RX_HDR(x)->payload + \
++ le16_to_cpu(IWL_RX_HDR(x)->len)))
++#define IWL_RX_STATS(x) (&x->u.rx_frame.stats)
++#define IWL_RX_DATA(x) (IWL_RX_HDR(x)->payload)
++
++
++/******************************************************************************
++ *
++ * Functions implemented in iwl-base.c which are forward declared here
++ * for use by iwl-*.c
++ *
++ *****************************************************************************/
++struct iwl4965_addsta_cmd;
++extern int iwl4965_send_add_station(struct iwl4965_priv *priv,
++ struct iwl4965_addsta_cmd *sta, u8 flags);
++extern u8 iwl4965_add_station_flags(struct iwl4965_priv *priv, const u8 *addr,
++ int is_ap, u8 flags, void *ht_data);
++extern int iwl4965_is_network_packet(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *header);
++extern int iwl4965_power_init_handle(struct iwl4965_priv *priv);
++extern int iwl4965_eeprom_init(struct iwl4965_priv *priv);
++#ifdef CONFIG_IWL4965_DEBUG
++extern void iwl4965_report_frame(struct iwl4965_priv *priv,
++ struct iwl4965_rx_packet *pkt,
++ struct ieee80211_hdr *header, int group100);
++#else
++static inline void iwl4965_report_frame(struct iwl4965_priv *priv,
++ struct iwl4965_rx_packet *pkt,
++ struct ieee80211_hdr *header,
++ int group100) {}
++#endif
++extern void iwl4965_handle_data_packet_monitor(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb,
++ void *data, short len,
++ struct ieee80211_rx_status *stats,
++ u16 phy_flags);
++extern int iwl4965_is_duplicate_packet(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *header);
++extern int iwl4965_rx_queue_alloc(struct iwl4965_priv *priv);
++extern void iwl4965_rx_queue_reset(struct iwl4965_priv *priv,
++ struct iwl4965_rx_queue *rxq);
++extern int iwl4965_calc_db_from_ratio(int sig_ratio);
++extern int iwl4965_calc_sig_qual(int rssi_dbm, int noise_dbm);
++extern int iwl4965_tx_queue_init(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq, int count, u32 id);
++extern void iwl4965_rx_replenish(void *data);
++extern void iwl4965_tx_queue_free(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq);
++extern int iwl4965_send_cmd_pdu(struct iwl4965_priv *priv, u8 id, u16 len,
++ const void *data);
++extern int __must_check iwl4965_send_cmd(struct iwl4965_priv *priv,
++ struct iwl4965_host_cmd *cmd);
++extern unsigned int iwl4965_fill_beacon_frame(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *hdr,
++ const u8 *dest, int left);
++extern int iwl4965_rx_queue_update_write_ptr(struct iwl4965_priv *priv,
++ struct iwl4965_rx_queue *q);
++extern int iwl4965_send_statistics_request(struct iwl4965_priv *priv);
++extern void iwl4965_set_decrypted_flag(struct iwl4965_priv *priv, struct sk_buff *skb,
++ u32 decrypt_res,
++ struct ieee80211_rx_status *stats);
++extern __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr);
++
++extern const u8 iwl4965_broadcast_addr[ETH_ALEN];
++
++/*
++ * Currently used by iwl-3945-rs... look at restructuring so that it doesn't
++ * call this... todo... fix that.
++ */
++extern u8 iwl4965_sync_station(struct iwl4965_priv *priv, int sta_id,
++ u16 tx_rate, u8 flags);
++
++/******************************************************************************
++ *
++ * Functions implemented in iwl-[34]*.c which are forward declared here
++ * for use by iwl-base.c
++ *
++ * NOTE:  The implementation of these functions is hardware specific
++ * which is why they are in the hardware specific files (vs. iwl-base.c)
++ *
++ * Naming convention --
++ *	iwl4965_	   <-- It's part of iwlwifi (should be changed to iwl4965_)
++ * iwl4965_hw_ <-- Hardware specific (implemented in iwl-XXXX.c by all HW)
++ * iwlXXXX_ <-- Hardware specific (implemented in iwl-XXXX.c for XXXX)
++ * iwl4965_bg_ <-- Called from work queue context
++ * iwl4965_mac_ <-- mac80211 callback
++ *
++ ****************************************************************************/
++extern void iwl4965_hw_rx_handler_setup(struct iwl4965_priv *priv);
++extern void iwl4965_hw_setup_deferred_work(struct iwl4965_priv *priv);
++extern void iwl4965_hw_cancel_deferred_work(struct iwl4965_priv *priv);
++extern int iwl4965_hw_rxq_stop(struct iwl4965_priv *priv);
++extern int iwl4965_hw_set_hw_setting(struct iwl4965_priv *priv);
++extern int iwl4965_hw_nic_init(struct iwl4965_priv *priv);
++extern int iwl4965_hw_nic_stop_master(struct iwl4965_priv *priv);
++extern void iwl4965_hw_txq_ctx_free(struct iwl4965_priv *priv);
++extern void iwl4965_hw_txq_ctx_stop(struct iwl4965_priv *priv);
++extern int iwl4965_hw_nic_reset(struct iwl4965_priv *priv);
++extern int iwl4965_hw_txq_attach_buf_to_tfd(struct iwl4965_priv *priv, void *tfd,
++ dma_addr_t addr, u16 len);
++extern int iwl4965_hw_txq_free_tfd(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq);
++extern int iwl4965_hw_get_temperature(struct iwl4965_priv *priv);
++extern int iwl4965_hw_tx_queue_init(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq);
++extern unsigned int iwl4965_hw_get_beacon_cmd(struct iwl4965_priv *priv,
++ struct iwl4965_frame *frame, u8 rate);
++extern int iwl4965_hw_get_rx_read(struct iwl4965_priv *priv);
++extern void iwl4965_hw_build_tx_cmd_rate(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd,
++ struct ieee80211_tx_control *ctrl,
++ struct ieee80211_hdr *hdr,
++ int sta_id, int tx_id);
++extern int iwl4965_hw_reg_send_txpower(struct iwl4965_priv *priv);
++extern int iwl4965_hw_reg_set_txpower(struct iwl4965_priv *priv, s8 power);
++extern void iwl4965_hw_rx_statistics(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb);
++extern void iwl4965_disable_events(struct iwl4965_priv *priv);
++extern int iwl4965_get_temperature(const struct iwl4965_priv *priv);
++
++/**
++ * iwl4965_hw_find_station - Find station id for a given BSSID
++ * @bssid: MAC address of station ID to find
++ *
++ * NOTE: This should not be hardware specific but the code has
++ * not yet been merged into a single common layer for managing the
++ * station tables.
++ */
++extern u8 iwl4965_hw_find_station(struct iwl4965_priv *priv, const u8 *bssid);
++
++extern int iwl4965_hw_channel_switch(struct iwl4965_priv *priv, u16 channel);
++extern int iwl4965_tx_queue_reclaim(struct iwl4965_priv *priv, int txq_id, int index);
++
++struct iwl4965_priv;
- mutex_lock(&priv->mutex);
-@@ -1549,11 +1573,11 @@ static void iwl4965_bg_sensitivity_work(struct work_struct *work)
- mutex_unlock(&priv->mutex);
- return;
- }
--#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
-+#endif /*CONFIG_IWL4965_SENSITIVITY*/
+ /*
+ * Forward declare iwl-4965.c functions for iwl-base.c
+ */
+-extern int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv);
+-extern void iwl_eeprom_release_semaphore(struct iwl_priv *priv);
++extern int iwl4965_eeprom_acquire_semaphore(struct iwl4965_priv *priv);
++extern void iwl4965_eeprom_release_semaphore(struct iwl4965_priv *priv);
+
+-extern int iwl4965_tx_queue_update_wr_ptr(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq,
++extern int iwl4965_tx_queue_update_wr_ptr(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq,
+ u16 byte_cnt);
+-extern void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr,
++extern void iwl4965_add_station(struct iwl4965_priv *priv, const u8 *addr,
+ int is_ap);
+-extern void iwl4965_set_rxon_ht(struct iwl_priv *priv,
+- struct sta_ht_info *ht_info);
+-
+-extern void iwl4965_set_rxon_chain(struct iwl_priv *priv);
+-extern int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
+- u8 sta_id, dma_addr_t txcmd_phys,
+- struct ieee80211_hdr *hdr, u8 hdr_len,
+- struct ieee80211_tx_control *ctrl, void *sta_in);
+-extern int iwl4965_init_hw_rates(struct iwl_priv *priv,
+- struct ieee80211_rate *rates);
+-extern int iwl4965_alive_notify(struct iwl_priv *priv);
+-extern void iwl4965_update_rate_scaling(struct iwl_priv *priv, u8 mode);
+-extern void iwl4965_set_ht_add_station(struct iwl_priv *priv, u8 index);
+-
+-extern void iwl4965_chain_noise_reset(struct iwl_priv *priv);
+-extern void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags,
++extern void iwl4965_set_rxon_chain(struct iwl4965_priv *priv);
++extern int iwl4965_alive_notify(struct iwl4965_priv *priv);
++extern void iwl4965_update_rate_scaling(struct iwl4965_priv *priv, u8 mode);
++extern void iwl4965_chain_noise_reset(struct iwl4965_priv *priv);
++extern void iwl4965_init_sensitivity(struct iwl4965_priv *priv, u8 flags,
+ u8 force);
+-extern int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode,
++extern int iwl4965_set_fat_chan_info(struct iwl4965_priv *priv, int phymode,
+ u16 channel,
+- const struct iwl_eeprom_channel *eeprom_ch,
++ const struct iwl4965_eeprom_channel *eeprom_ch,
+ u8 fat_extension_channel);
+-extern void iwl4965_rf_kill_ct_config(struct iwl_priv *priv);
++extern void iwl4965_rf_kill_ct_config(struct iwl4965_priv *priv);
+
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+-extern int iwl_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da,
++#ifdef CONFIG_IWL4965_HT
++extern void iwl4965_init_ht_hw_capab(struct ieee80211_ht_info *ht_info,
++ int mode);
++extern void iwl4965_set_rxon_ht(struct iwl4965_priv *priv,
++ struct iwl_ht_info *ht_info);
++extern void iwl4965_set_ht_add_station(struct iwl4965_priv *priv, u8 index,
++ struct ieee80211_ht_info *sta_ht_inf);
++extern int iwl4965_mac_ampdu_action(struct ieee80211_hw *hw,
++ enum ieee80211_ampdu_mlme_action action,
++ const u8 *addr, u16 tid, u16 ssn);
++#ifdef CONFIG_IWL4965_HT_AGG
++extern int iwl4965_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da,
+ u16 tid, u16 *start_seq_num);
+-extern int iwl_mac_ht_rx_agg_start(struct ieee80211_hw *hw, u8 *da,
+- u16 tid, u16 start_seq_num);
+-extern int iwl_mac_ht_rx_agg_stop(struct ieee80211_hw *hw, u8 *da,
++extern int iwl4965_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da,
+ u16 tid, int generator);
+-extern int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da,
+- u16 tid, int generator);
+-extern void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid);
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /*CONFIG_IWLWIFI_HT */
++extern void iwl4965_turn_off_agg(struct iwl4965_priv *priv, u8 tid);
++extern void iwl4965_tl_get_stats(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *hdr);
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /*CONFIG_IWL4965_HT */
+ /* Structures, enum, and defines specific to the 4965 */
+
+ #define IWL4965_KW_SIZE 0x1000 /*4k */
+
+-struct iwl_kw {
++struct iwl4965_kw {
+ dma_addr_t dma_addr;
+ void *v_addr;
+ size_t size;
+@@ -120,21 +833,9 @@ struct iwl_kw {
+ #define NRG_NUM_PREV_STAT_L 20
+ #define NUM_RX_CHAINS (3)
+
+-#define TX_POWER_IWL_ILLEGAL_VDET -100000
+ #define TX_POWER_IWL_ILLEGAL_VOLTAGE -10000
+-#define TX_POWER_IWL_CLOSED_LOOP_MIN_POWER 18
+-#define TX_POWER_IWL_CLOSED_LOOP_MAX_POWER 34
+-#define TX_POWER_IWL_VDET_SLOPE_BELOW_NOMINAL 17
+-#define TX_POWER_IWL_VDET_SLOPE_ABOVE_NOMINAL 20
+-#define TX_POWER_IWL_NOMINAL_POWER 26
+-#define TX_POWER_IWL_CLOSED_LOOP_ITERATION_LIMIT 1
+-#define TX_POWER_IWL_VOLTAGE_CODES_PER_03V 7
+-#define TX_POWER_IWL_DEGREES_PER_VDET_CODE 11
+-#define IWL_TX_POWER_MAX_NUM_PA_MEASUREMENTS 1
+-#define IWL_TX_POWER_CCK_COMPENSATION_B_STEP (9)
+-#define IWL_TX_POWER_CCK_COMPENSATION_C_STEP (5)
+-
+-struct iwl_traffic_load {
++
++struct iwl4965_traffic_load {
+ unsigned long time_stamp;
+ u32 packet_count[TID_QUEUE_MAX_SIZE];
+ u8 queue_count;
+@@ -142,8 +843,13 @@ struct iwl_traffic_load {
+ u32 total;
+ };
- static void iwl4965_bg_txpower_work(struct work_struct *work)
- {
-- struct iwl_priv *priv = container_of(work, struct iwl_priv,
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
- txpower_work);
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+-struct iwl_agg_control {
++#ifdef CONFIG_IWL4965_HT_AGG
++/**
++ * struct iwl4965_agg_control
++ * @requested_ba: bit map of tids requesting aggregation/block-ack
++ * @granted_ba: bit map of tids granted aggregation/block-ack
++ */
++struct iwl4965_agg_control {
+ unsigned long next_retry;
+ u32 wait_for_agg_status;
+ u32 tid_retry;
+@@ -152,13 +858,13 @@ struct iwl_agg_control {
+ u8 auto_agg;
+ u32 tid_traffic_load_threshold;
+ u32 ba_timeout;
+- struct iwl_traffic_load traffic_load[TID_MAX_LOAD_COUNT];
++ struct iwl4965_traffic_load traffic_load[TID_MAX_LOAD_COUNT];
+ };
+-#endif /*CONFIG_IWLWIFI_HT_AGG */
++#endif /*CONFIG_IWL4965_HT_AGG */
- /* If a scan happened to start before we got here
-@@ -1569,7 +1593,7 @@ static void iwl4965_bg_txpower_work(struct work_struct *work)
- /* Regardless of if we are assocaited, we must reconfigure the
- * TX power since frames can be sent on non-radar channels while
- * not associated */
-- iwl_hw_reg_send_txpower(priv);
-+ iwl4965_hw_reg_send_txpower(priv);
+-struct iwl_lq_mngr {
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- struct iwl_agg_control agg_ctrl;
++struct iwl4965_lq_mngr {
++#ifdef CONFIG_IWL4965_HT_AGG
++ struct iwl4965_agg_control agg_ctrl;
+ #endif
+ spinlock_t lock;
+ s32 max_window_size;
+@@ -179,22 +885,6 @@ struct iwl_lq_mngr {
+ #define CAL_NUM_OF_BEACONS 20
+ #define MAXIMUM_ALLOWED_PATHLOSS 15
- /* Update last_temperature to keep is_calib_needed from running
- * when it isn't needed... */
-@@ -1581,24 +1605,31 @@ static void iwl4965_bg_txpower_work(struct work_struct *work)
- /*
- * Acquire priv->lock before calling this function !
- */
--static void iwl4965_set_wr_ptrs(struct iwl_priv *priv, int txq_id, u32 index)
-+static void iwl4965_set_wr_ptrs(struct iwl4965_priv *priv, int txq_id, u32 index)
- {
-- iwl_write_restricted(priv, HBUS_TARG_WRPTR,
-+ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR,
- (index & 0xff) | (txq_id << 8));
-- iwl_write_restricted_reg(priv, SCD_QUEUE_RDPTR(txq_id), index);
-+ iwl4965_write_prph(priv, KDR_SCD_QUEUE_RDPTR(txq_id), index);
- }
+-/* Param table within SENSITIVITY_CMD */
+-#define HD_MIN_ENERGY_CCK_DET_INDEX (0)
+-#define HD_MIN_ENERGY_OFDM_DET_INDEX (1)
+-#define HD_AUTO_CORR32_X1_TH_ADD_MIN_INDEX (2)
+-#define HD_AUTO_CORR32_X1_TH_ADD_MIN_MRC_INDEX (3)
+-#define HD_AUTO_CORR40_X4_TH_ADD_MIN_MRC_INDEX (4)
+-#define HD_AUTO_CORR32_X4_TH_ADD_MIN_INDEX (5)
+-#define HD_AUTO_CORR32_X4_TH_ADD_MIN_MRC_INDEX (6)
+-#define HD_BARKER_CORR_TH_ADD_MIN_INDEX (7)
+-#define HD_BARKER_CORR_TH_ADD_MIN_MRC_INDEX (8)
+-#define HD_AUTO_CORR40_X4_TH_ADD_MIN_INDEX (9)
+-#define HD_OFDM_ENERGY_TH_IN_INDEX (10)
+-
+-#define SENSITIVITY_CMD_CONTROL_DEFAULT_TABLE __constant_cpu_to_le16(0)
+-#define SENSITIVITY_CMD_CONTROL_WORK_TABLE __constant_cpu_to_le16(1)
+-
+ #define CHAIN_NOISE_MAX_DELTA_GAIN_CODE 3
--/*
-- * Acquire priv->lock before calling this function !
-+/**
-+ * iwl4965_tx_queue_set_status - (optionally) start Tx/Cmd queue
-+ * @tx_fifo_id: Tx DMA/FIFO channel (range 0-7) that the queue will feed
-+ * @scd_retry: (1) Indicates queue will be used in aggregation mode
-+ *
-+ * NOTE: Acquire priv->lock before calling this function !
- */
--static void iwl4965_tx_queue_set_status(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq,
-+static void iwl4965_tx_queue_set_status(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq,
- int tx_fifo_id, int scd_retry)
- {
- int txq_id = txq->q.id;
-+
-+ /* Find out whether to activate Tx queue */
- int active = test_bit(txq_id, &priv->txq_ctx_active_msk)?1:0;
+ #define MAX_FA_OFDM 50
+@@ -222,8 +912,6 @@ struct iwl_lq_mngr {
+ #define AUTO_CORR_STEP_CCK 3
+ #define AUTO_CORR_MAX_TH_CCK 160
-- iwl_write_restricted_reg(priv, SCD_QUEUE_STATUS_BITS(txq_id),
-+ /* Set up and activate */
-+ iwl4965_write_prph(priv, KDR_SCD_QUEUE_STATUS_BITS(txq_id),
- (active << SCD_QUEUE_STTS_REG_POS_ACTIVE) |
- (tx_fifo_id << SCD_QUEUE_STTS_REG_POS_TXF) |
- (scd_retry << SCD_QUEUE_STTS_REG_POS_WSL) |
-@@ -1608,7 +1639,7 @@ static void iwl4965_tx_queue_set_status(struct iwl_priv *priv,
- txq->sched_retry = scd_retry;
+-#define NRG_ALG 0
+-#define AUTO_CORR_ALG 1
+ #define NRG_DIFF 2
+ #define NRG_STEP_CCK 2
+ #define NRG_MARGIN 8
+@@ -239,24 +927,24 @@ struct iwl_lq_mngr {
+ #define IN_BAND_FILTER 0xFF
+ #define MIN_AVERAGE_NOISE_MAX_VALUE 0xFFFFFFFF
- IWL_DEBUG_INFO("%s %s Queue %d on AC %d\n",
-- active ? "Activete" : "Deactivate",
-+ active ? "Activate" : "Deactivate",
- scd_retry ? "BA" : "AC", txq_id, tx_fifo_id);
- }
+-enum iwl_false_alarm_state {
++enum iwl4965_false_alarm_state {
+ IWL_FA_TOO_MANY = 0,
+ IWL_FA_TOO_FEW = 1,
+ IWL_FA_GOOD_RANGE = 2,
+ };
-@@ -1622,17 +1653,17 @@ static const u16 default_queue_to_tx_fifo[] = {
- IWL_TX_FIFO_HCCA_2
+-enum iwl_chain_noise_state {
++enum iwl4965_chain_noise_state {
+ IWL_CHAIN_NOISE_ALIVE = 0, /* must be 0 */
+ IWL_CHAIN_NOISE_ACCUMULATE = 1,
+ IWL_CHAIN_NOISE_CALIBRATED = 2,
};
--static inline void iwl4965_txq_ctx_activate(struct iwl_priv *priv, int txq_id)
-+static inline void iwl4965_txq_ctx_activate(struct iwl4965_priv *priv, int txq_id)
- {
- set_bit(txq_id, &priv->txq_ctx_active_msk);
- }
+-enum iwl_sensitivity_state {
++enum iwl4965_sensitivity_state {
+ IWL_SENS_CALIB_ALLOWED = 0,
+ IWL_SENS_CALIB_NEED_REINIT = 1,
+ };
--static inline void iwl4965_txq_ctx_deactivate(struct iwl_priv *priv, int txq_id)
-+static inline void iwl4965_txq_ctx_deactivate(struct iwl4965_priv *priv, int txq_id)
- {
- clear_bit(txq_id, &priv->txq_ctx_active_msk);
- }
+-enum iwl_calib_enabled_state {
++enum iwl4965_calib_enabled_state {
+ IWL_CALIB_DISABLED = 0, /* must be 0 */
+ IWL_CALIB_ENABLED = 1,
+ };
+@@ -271,7 +959,7 @@ struct statistics_general_data {
+ };
--int iwl4965_alive_notify(struct iwl_priv *priv)
-+int iwl4965_alive_notify(struct iwl4965_priv *priv)
- {
- u32 a;
- int i = 0;
-@@ -1641,45 +1672,55 @@ int iwl4965_alive_notify(struct iwl_priv *priv)
+ /* Sensitivity calib data */
+-struct iwl_sensitivity_data {
++struct iwl4965_sensitivity_data {
+ u32 auto_corr_ofdm;
+ u32 auto_corr_ofdm_mrc;
+ u32 auto_corr_ofdm_x1;
+@@ -300,7 +988,7 @@ struct iwl_sensitivity_data {
+ };
- spin_lock_irqsave(&priv->lock, flags);
+ /* Chain noise (differential Rx gain) calib data */
+-struct iwl_chain_noise_data {
++struct iwl4965_chain_noise_data {
+ u8 state;
+ u16 beacon_count;
+ u32 chain_noise_a;
+@@ -314,28 +1002,323 @@ struct iwl_chain_noise_data {
+ u8 radio_write;
+ };
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
+-/* IWL4965 */
+-#define RATE_MCS_CODE_MSK 0x7
+-#define RATE_MCS_MIMO_POS 3
+-#define RATE_MCS_MIMO_MSK 0x8
+-#define RATE_MCS_HT_DUP_POS 5
+-#define RATE_MCS_HT_DUP_MSK 0x20
+-#define RATE_MCS_FLAGS_POS 8
+-#define RATE_MCS_HT_POS 8
+-#define RATE_MCS_HT_MSK 0x100
+-#define RATE_MCS_CCK_POS 9
+-#define RATE_MCS_CCK_MSK 0x200
+-#define RATE_MCS_GF_POS 10
+-#define RATE_MCS_GF_MSK 0x400
+-
+-#define RATE_MCS_FAT_POS 11
+-#define RATE_MCS_FAT_MSK 0x800
+-#define RATE_MCS_DUP_POS 12
+-#define RATE_MCS_DUP_MSK 0x1000
+-#define RATE_MCS_SGI_POS 13
+-#define RATE_MCS_SGI_MSK 0x2000
+-
+-#define EEPROM_SEM_TIMEOUT 10
+-#define EEPROM_SEM_RETRY_LIMIT 1000
+-
+-#endif /* __iwl_4965_h__ */
++#define EEPROM_SEM_TIMEOUT 10 /* milliseconds */
++#define EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */
++
++
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
++
++enum {
++ MEASUREMENT_READY = (1 << 0),
++ MEASUREMENT_ACTIVE = (1 << 1),
++};
++
++#endif
++
++struct iwl4965_priv {
++
++ /* ieee device used by generic ieee processing code */
++ struct ieee80211_hw *hw;
++ struct ieee80211_channel *ieee_channels;
++ struct ieee80211_rate *ieee_rates;
++
++ /* temporary frame storage list */
++ struct list_head free_frames;
++ int frames_count;
++
++ u8 phymode;
++ int alloc_rxb_skb;
++ bool add_radiotap;
++
++ void (*rx_handlers[REPLY_MAX])(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb);
++
++ const struct ieee80211_hw_mode *modes;
++
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
++ /* spectrum measurement report caching */
++ struct iwl4965_spectrum_notification measure_report;
++ u8 measurement_status;
++#endif
++ /* ucode beacon time */
++ u32 ucode_beacon_time;
++
++ /* we allocate array of iwl4965_channel_info for NIC's valid channels.
++ * Access via channel # using indirect index array */
++ struct iwl4965_channel_info *channel_info; /* channel info array */
++ u8 channel_count; /* # of channels */
++
++ /* each calibration channel group in the EEPROM has a derived
++ * clip setting for each rate. */
++ const struct iwl4965_clip_group clip_groups[5];
++
++ /* thermal calibration */
++ s32 temperature; /* degrees Kelvin */
++ s32 last_temperature;
++
++ /* Scan related variables */
++ unsigned long last_scan_jiffies;
++ unsigned long next_scan_jiffies;
++ unsigned long scan_start;
++ unsigned long scan_pass_start;
++ unsigned long scan_start_tsf;
++ int scan_bands;
++ int one_direct_scan;
++ u8 direct_ssid_len;
++ u8 direct_ssid[IW_ESSID_MAX_SIZE];
++ struct iwl4965_scan_cmd *scan;
++ u8 only_active_channel;
++
++ /* spinlock */
++ spinlock_t lock; /* protect general shared data */
++ spinlock_t hcmd_lock; /* protect hcmd */
++ struct mutex mutex;
++
++ /* basic pci-network driver stuff */
++ struct pci_dev *pci_dev;
++
++ /* pci hardware address support */
++ void __iomem *hw_base;
++
++ /* uCode images, save to reload in case of failure */
++ struct fw_desc ucode_code; /* runtime inst */
++ struct fw_desc ucode_data; /* runtime data original */
++ struct fw_desc ucode_data_backup; /* runtime data save/restore */
++ struct fw_desc ucode_init; /* initialization inst */
++ struct fw_desc ucode_init_data; /* initialization data */
++ struct fw_desc ucode_boot; /* bootstrap inst */
++
++
++ struct iwl4965_rxon_time_cmd rxon_timing;
++
++ /* We declare this const so it can only be
++ * changed via explicit cast within the
++ * routines that actually update the physical
++ * hardware */
++ const struct iwl4965_rxon_cmd active_rxon;
++ struct iwl4965_rxon_cmd staging_rxon;
++
++ int error_recovering;
++ struct iwl4965_rxon_cmd recovery_rxon;
++
++ /* 1st responses from initialize and runtime uCode images.
++ * 4965's initialize alive response contains some calibration data. */
++ struct iwl4965_init_alive_resp card_alive_init;
++ struct iwl4965_alive_resp card_alive;
++
++#ifdef LED
++ /* LED related variables */
++ struct iwl4965_activity_blink activity;
++ unsigned long led_packets;
++ int led_state;
++#endif
++
++ u16 active_rate;
++ u16 active_rate_basic;
++
++ u8 call_post_assoc_from_beacon;
++ u8 assoc_station_added;
++ u8 use_ant_b_for_management_frame; /* Tx antenna selection */
++ u8 valid_antenna; /* Bit mask of antennas actually connected */
+#ifdef CONFIG_IWL4965_SENSITIVITY
- memset(&(priv->sensitivity_data), 0,
-- sizeof(struct iwl_sensitivity_data));
-+ sizeof(struct iwl4965_sensitivity_data));
- memset(&(priv->chain_noise_data), 0,
-- sizeof(struct iwl_chain_noise_data));
-+ sizeof(struct iwl4965_chain_noise_data));
- for (i = 0; i < NUM_RX_CHAINS; i++)
- priv->chain_noise_data.delta_gain_code[i] =
- CHAIN_NOISE_DELTA_GAIN_INIT_VAL;
--#endif /* CONFIG_IWLWIFI_SENSITIVITY*/
-- rc = iwl_grab_restricted_access(priv);
-+#endif /* CONFIG_IWL4965_SENSITIVITY*/
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
- }
-
-- priv->scd_base_addr = iwl_read_restricted_reg(priv, SCD_SRAM_BASE_ADDR);
-+	/* Clear 4965's internal Tx Scheduler database */
-+ priv->scd_base_addr = iwl4965_read_prph(priv, KDR_SCD_SRAM_BASE_ADDR);
- a = priv->scd_base_addr + SCD_CONTEXT_DATA_OFFSET;
- for (; a < priv->scd_base_addr + SCD_TX_STTS_BITMAP_OFFSET; a += 4)
-- iwl_write_restricted_mem(priv, a, 0);
-+ iwl4965_write_targ_mem(priv, a, 0);
- for (; a < priv->scd_base_addr + SCD_TRANSLATE_TBL_OFFSET; a += 4)
-- iwl_write_restricted_mem(priv, a, 0);
-+ iwl4965_write_targ_mem(priv, a, 0);
- for (; a < sizeof(u16) * priv->hw_setting.max_txq_num; a += 4)
-- iwl_write_restricted_mem(priv, a, 0);
-+ iwl4965_write_targ_mem(priv, a, 0);
-
-- iwl_write_restricted_reg(priv, SCD_DRAM_BASE_ADDR,
-+	/* Tell 4965 where to find Tx byte count tables */
-+ iwl4965_write_prph(priv, KDR_SCD_DRAM_BASE_ADDR,
- (priv->hw_setting.shared_phys +
-- offsetof(struct iwl_shared, queues_byte_cnt_tbls)) >> 10);
-- iwl_write_restricted_reg(priv, SCD_QUEUECHAIN_SEL, 0);
-+ offsetof(struct iwl4965_shared, queues_byte_cnt_tbls)) >> 10);
-
-- /* initiate the queues */
-+ /* Disable chain mode for all queues */
-+ iwl4965_write_prph(priv, KDR_SCD_QUEUECHAIN_SEL, 0);
++ struct iwl4965_sensitivity_data sensitivity_data;
++ struct iwl4965_chain_noise_data chain_noise_data;
++ u8 start_calib;
++ __le16 sensitivity_tbl[HD_TABLE_SIZE];
++#endif /*CONFIG_IWL4965_SENSITIVITY*/
+
-+ /* Initialize each Tx queue (including the command queue) */
- for (i = 0; i < priv->hw_setting.max_txq_num; i++) {
-- iwl_write_restricted_reg(priv, SCD_QUEUE_RDPTR(i), 0);
-- iwl_write_restricted(priv, HBUS_TARG_WRPTR, 0 | (i << 8));
-- iwl_write_restricted_mem(priv, priv->scd_base_addr +
++#ifdef CONFIG_IWL4965_HT
++ struct iwl_ht_info current_ht_config;
++#endif
++ u8 last_phy_res[100];
+
-+ /* TFD circular buffer read/write indexes */
-+ iwl4965_write_prph(priv, KDR_SCD_QUEUE_RDPTR(i), 0);
-+ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR, 0 | (i << 8));
++ /* Rate scaling data */
++ struct iwl4965_lq_mngr lq_mngr;
+
-+ /* Max Tx Window size for Scheduler-ACK mode */
-+ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
- SCD_CONTEXT_QUEUE_OFFSET(i),
- (SCD_WIN_SIZE <<
- SCD_QUEUE_CTX_REG1_WIN_SIZE_POS) &
- SCD_QUEUE_CTX_REG1_WIN_SIZE_MSK);
-- iwl_write_restricted_mem(priv, priv->scd_base_addr +
++ /* Rate scaling data */
++ s8 data_retry_limit;
++ u8 retry_rate;
+
-+ /* Frame limit */
-+ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
- SCD_CONTEXT_QUEUE_OFFSET(i) +
- sizeof(u32),
- (SCD_FRAME_LIMIT <<
-@@ -1687,87 +1728,98 @@ int iwl4965_alive_notify(struct iwl_priv *priv)
- SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK);
-
- }
-- iwl_write_restricted_reg(priv, SCD_INTERRUPT_MASK,
-+ iwl4965_write_prph(priv, KDR_SCD_INTERRUPT_MASK,
- (1 << priv->hw_setting.max_txq_num) - 1);
-
-- iwl_write_restricted_reg(priv, SCD_TXFACT,
-+ /* Activate all Tx DMA/FIFO channels */
-+ iwl4965_write_prph(priv, KDR_SCD_TXFACT,
- SCD_TXFACT_REG_TXFIFO_MASK(0, 7));
-
- iwl4965_set_wr_ptrs(priv, IWL_CMD_QUEUE_NUM, 0);
-- /* map qos queues to fifos one-to-one */
++ wait_queue_head_t wait_command_queue;
+
-+ /* Map each Tx/cmd queue to its corresponding fifo */
- for (i = 0; i < ARRAY_SIZE(default_queue_to_tx_fifo); i++) {
- int ac = default_queue_to_tx_fifo[i];
- iwl4965_txq_ctx_activate(priv, i);
- iwl4965_tx_queue_set_status(priv, &priv->txq[i], ac, 0);
- }
-
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
-
- return 0;
- }
-
--int iwl_hw_set_hw_setting(struct iwl_priv *priv)
-+/**
-+ * iwl4965_hw_set_hw_setting
-+ *
-+ * Called when initializing driver
-+ */
-+int iwl4965_hw_set_hw_setting(struct iwl4965_priv *priv)
- {
-+ /* Allocate area for Tx byte count tables and Rx queue status */
- priv->hw_setting.shared_virt =
- pci_alloc_consistent(priv->pci_dev,
-- sizeof(struct iwl_shared),
-+ sizeof(struct iwl4965_shared),
- &priv->hw_setting.shared_phys);
-
- if (!priv->hw_setting.shared_virt)
- return -1;
-
-- memset(priv->hw_setting.shared_virt, 0, sizeof(struct iwl_shared));
-+ memset(priv->hw_setting.shared_virt, 0, sizeof(struct iwl4965_shared));
-
-- priv->hw_setting.max_txq_num = iwl_param_queues_num;
-+ priv->hw_setting.max_txq_num = iwl4965_param_queues_num;
- priv->hw_setting.ac_queue_count = AC_NUM;
++ int activity_timer_active;
++
++ /* Rx and Tx DMA processing queues */
++ struct iwl4965_rx_queue rxq;
++ struct iwl4965_tx_queue txq[IWL_MAX_NUM_QUEUES];
++ unsigned long txq_ctx_active_msk;
++ struct iwl4965_kw kw; /* keep warm address */
++ u32 scd_base_addr; /* scheduler sram base address */
++
++ unsigned long status;
++ u32 config;
++
++	int last_rx_rssi;	/* From Rx packet statistics */
++ int last_rx_noise; /* From beacon statistics */
++
++ struct iwl4965_power_mgr power_data;
++
++ struct iwl4965_notif_statistics statistics;
++ unsigned long last_statistics_time;
++
++ /* context information */
++ u8 essid[IW_ESSID_MAX_SIZE];
++ u8 essid_len;
++ u16 rates_mask;
++
++ u32 power_mode;
++ u32 antenna;
++ u8 bssid[ETH_ALEN];
++ u16 rts_threshold;
++ u8 mac_addr[ETH_ALEN];
++
++ /*station table variables */
++ spinlock_t sta_lock;
++ int num_stations;
++ struct iwl4965_station_entry stations[IWL_STATION_COUNT];
++
++ /* Indication if ieee80211_ops->open has been called */
++ int is_open;
++
++ u8 mac80211_registered;
++ int is_abg;
++
++ u32 notif_missed_beacons;
++
++ /* Rx'd packet timing information */
++ u32 last_beacon_time;
++ u64 last_tsf;
++
++ /* Duplicate packet detection */
++ u16 last_seq_num;
++ u16 last_frag_num;
++ unsigned long last_packet_time;
++
++ /* Hash table for finding stations in IBSS network */
++ struct list_head ibss_mac_hash[IWL_IBSS_MAC_HASH_SIZE];
++
++ /* eeprom */
++ struct iwl4965_eeprom eeprom;
++
++ int iw_mode;
++
++ struct sk_buff *ibss_beacon;
++
++ /* Last Rx'd beacon timestamp */
++ u32 timestamp0;
++ u32 timestamp1;
++ u16 beacon_int;
++ struct iwl4965_driver_hw_info hw_setting;
++ struct ieee80211_vif *vif;
++
++ /* Current association information needed to configure the
++ * hardware */
++ u16 assoc_id;
++ u16 assoc_capability;
++ u8 ps_mode;
++
++#ifdef CONFIG_IWL4965_QOS
++ struct iwl4965_qos_info qos_data;
++#endif /*CONFIG_IWL4965_QOS */
++
++ struct workqueue_struct *workqueue;
++
++ struct work_struct up;
++ struct work_struct restart;
++ struct work_struct calibrated_work;
++ struct work_struct scan_completed;
++ struct work_struct rx_replenish;
++ struct work_struct rf_kill;
++ struct work_struct abort_scan;
++ struct work_struct update_link_led;
++ struct work_struct auth_work;
++ struct work_struct report_work;
++ struct work_struct request_scan;
++ struct work_struct beacon_update;
++
++ struct tasklet_struct irq_tasklet;
++
++ struct delayed_work init_alive_start;
++ struct delayed_work alive_start;
++ struct delayed_work activity_timer;
++ struct delayed_work thermal_periodic;
++ struct delayed_work gather_stats;
++ struct delayed_work scan_check;
++ struct delayed_work post_associate;
++
++#define IWL_DEFAULT_TX_POWER 0x0F
++ s8 user_txpower_limit;
++ s8 max_channel_txpower_limit;
++
++#ifdef CONFIG_PM
++ u32 pm_state[16];
++#endif
++
++#ifdef CONFIG_IWL4965_DEBUG
++ /* debugging info */
++ u32 framecnt_to_us;
++ atomic_t restrict_refcnt;
++#endif
++
++ struct work_struct txpower_work;
++#ifdef CONFIG_IWL4965_SENSITIVITY
++ struct work_struct sensitivity_work;
++#endif
++ struct work_struct statistics_work;
++ struct timer_list statistics_periodic;
++
++#ifdef CONFIG_IWL4965_HT_AGG
++ struct work_struct agg_work;
++#endif
++}; /*iwl4965_priv */
++
++static inline int iwl4965_is_associated(struct iwl4965_priv *priv)
++{
++ return (priv->active_rxon.filter_flags & RXON_FILTER_ASSOC_MSK) ? 1 : 0;
++}
++
++static inline int is_channel_valid(const struct iwl4965_channel_info *ch_info)
++{
++ if (ch_info == NULL)
++ return 0;
++ return (ch_info->flags & EEPROM_CHANNEL_VALID) ? 1 : 0;
++}
++
++static inline int is_channel_narrow(const struct iwl4965_channel_info *ch_info)
++{
++ return (ch_info->flags & EEPROM_CHANNEL_NARROW) ? 1 : 0;
++}
++
++static inline int is_channel_radar(const struct iwl4965_channel_info *ch_info)
++{
++ return (ch_info->flags & EEPROM_CHANNEL_RADAR) ? 1 : 0;
++}
++
++static inline u8 is_channel_a_band(const struct iwl4965_channel_info *ch_info)
++{
++ return ch_info->phymode == MODE_IEEE80211A;
++}
++
++static inline u8 is_channel_bg_band(const struct iwl4965_channel_info *ch_info)
++{
++ return ((ch_info->phymode == MODE_IEEE80211B) ||
++ (ch_info->phymode == MODE_IEEE80211G));
++}
++
++static inline int is_channel_passive(const struct iwl4965_channel_info *ch)
++{
++ return (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) ? 1 : 0;
++}
++
++static inline int is_channel_ibss(const struct iwl4965_channel_info *ch)
++{
++ return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 1 : 0;
++}
++
++extern const struct iwl4965_channel_info *iwl4965_get_channel_info(
++ const struct iwl4965_priv *priv, int phymode, u16 channel);
++
++/* Requires full declaration of iwl4965_priv before including */
++#include "iwl-4965-io.h"
++
++#endif /* __iwl4965_4965_h__ */
+diff --git a/drivers/net/wireless/iwlwifi/iwl-channel.h b/drivers/net/wireless/iwlwifi/iwl-channel.h
+deleted file mode 100644
+index 023c3f2..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-channel.h
++++ /dev/null
+@@ -1,161 +0,0 @@
+-/******************************************************************************
+- *
+- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of version 2 of the GNU General Public License as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+- * The full GNU General Public License is included in this distribution in the
+- * file called LICENSE.
+- *
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- *****************************************************************************/
+-#ifndef __iwl_channel_h__
+-#define __iwl_channel_h__
+-
+-#define IWL_NUM_SCAN_RATES (2)
+-
+-struct iwl_channel_tgd_info {
+- u8 type;
+- s8 max_power;
+-};
+-
+-struct iwl_channel_tgh_info {
+- s64 last_radar_time;
+-};
+-
+-/* current Tx power values to use, one for each rate for each channel.
+- * requested power is limited by:
+- * -- regulatory EEPROM limits for this channel
+- * -- hardware capabilities (clip-powers)
+- * -- spectrum management
+- * -- user preference (e.g. iwconfig)
+- * when requested power is set, base power index must also be set. */
+-struct iwl_channel_power_info {
+- struct iwl_tx_power tpc; /* actual radio and DSP gain settings */
+- s8 power_table_index; /* actual (compenst'd) index into gain table */
+- s8 base_power_index; /* gain index for power at factory temp. */
+- s8 requested_power; /* power (dBm) requested for this chnl/rate */
+-};
+-
+-/* current scan Tx power values to use, one for each scan rate for each
+- * channel. */
+-struct iwl_scan_power_info {
+- struct iwl_tx_power tpc; /* actual radio and DSP gain settings */
+- s8 power_table_index; /* actual (compenst'd) index into gain table */
+- s8 requested_power; /* scan pwr (dBm) requested for chnl/rate */
+-};
+-
+-/* Channel unlock period is 15 seconds. If no beacon or probe response
+- * has been received within 15 seconds on a locked channel then the channel
+- * remains locked. */
+-#define TX_UNLOCK_PERIOD 15
+-
+-/* CSA lock period is 15 seconds. If a CSA has been received on a channel in
+- * the last 15 seconds, the channel is locked */
+-#define CSA_LOCK_PERIOD 15
+-/*
+- * One for each channel, holds all channel setup data
+- * Some of the fields (e.g. eeprom and flags/max_power_avg) are redundant
+- * with one another!
+- */
+-#define IWL4965_MAX_RATE (33)
+-
+-struct iwl_channel_info {
+- struct iwl_channel_tgd_info tgd;
+- struct iwl_channel_tgh_info tgh;
+- struct iwl_eeprom_channel eeprom; /* EEPROM regulatory limit */
+- struct iwl_eeprom_channel fat_eeprom; /* EEPROM regulatory limit for
+- * FAT channel */
+-
+- u8 channel; /* channel number */
+- u8 flags; /* flags copied from EEPROM */
+- s8 max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
+- s8 curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
+- s8 min_power; /* always 0 */
+- s8 scan_power; /* (dBm) regul. eeprom, direct scans, any rate */
+-
+- u8 group_index; /* 0-4, maps channel to group1/2/3/4/5 */
+- u8 band_index; /* 0-4, maps channel to band1/2/3/4/5 */
+- u8 phymode; /* MODE_IEEE80211{A,B,G} */
+-
+- /* Radio/DSP gain settings for each "normal" data Tx rate.
+- * These include, in addition to RF and DSP gain, a few fields for
+- * remembering/modifying gain settings (indexes). */
+- struct iwl_channel_power_info power_info[IWL4965_MAX_RATE];
+-
+-#if IWL == 4965
+- /* FAT channel info */
+- s8 fat_max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
+- s8 fat_curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
+- s8 fat_min_power; /* always 0 */
+- s8 fat_scan_power; /* (dBm) eeprom, direct scans, any rate */
+- u8 fat_flags; /* flags copied from EEPROM */
+- u8 fat_extension_channel;
+-#endif
+-
+- /* Radio/DSP gain settings for each scan rate, for directed scans. */
+- struct iwl_scan_power_info scan_pwr_info[IWL_NUM_SCAN_RATES];
+-};
+-
+-struct iwl_clip_group {
+- /* maximum power level to prevent clipping for each rate, derived by
+- * us from this band's saturation power in EEPROM */
+- const s8 clip_powers[IWL_MAX_RATES];
+-};
+-
+-static inline int is_channel_valid(const struct iwl_channel_info *ch_info)
+-{
+- if (ch_info == NULL)
+- return 0;
+- return (ch_info->flags & EEPROM_CHANNEL_VALID) ? 1 : 0;
+-}
+-
+-static inline int is_channel_narrow(const struct iwl_channel_info *ch_info)
+-{
+- return (ch_info->flags & EEPROM_CHANNEL_NARROW) ? 1 : 0;
+-}
+-
+-static inline int is_channel_radar(const struct iwl_channel_info *ch_info)
+-{
+- return (ch_info->flags & EEPROM_CHANNEL_RADAR) ? 1 : 0;
+-}
+-
+-static inline u8 is_channel_a_band(const struct iwl_channel_info *ch_info)
+-{
+- return ch_info->phymode == MODE_IEEE80211A;
+-}
+-
+-static inline u8 is_channel_bg_band(const struct iwl_channel_info *ch_info)
+-{
+- return ((ch_info->phymode == MODE_IEEE80211B) ||
+- (ch_info->phymode == MODE_IEEE80211G));
+-}
+-
+-static inline int is_channel_passive(const struct iwl_channel_info *ch)
+-{
+- return (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) ? 1 : 0;
+-}
+-
+-static inline int is_channel_ibss(const struct iwl_channel_info *ch)
+-{
+- return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 1 : 0;
+-}
+-
+-extern const struct iwl_channel_info *iwl_get_channel_info(
+- const struct iwl_priv *priv, int phymode, u16 channel);
+-
+-#endif
+diff --git a/drivers/net/wireless/iwlwifi/iwl-commands.h b/drivers/net/wireless/iwlwifi/iwl-commands.h
+deleted file mode 100644
+index 9de8d7f..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-commands.h
++++ /dev/null
+@@ -1,1734 +0,0 @@
+-/******************************************************************************
+- *
+- * This file is provided under a dual BSD/GPLv2 license. When using or
+- * redistributing this file, you may do so under either license.
+- *
+- * GPL LICENSE SUMMARY
+- *
+- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of version 2 of the GNU Geeral Public License as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
+- * USA
+- *
+- * The full GNU General Public License is included in this distribution
+- * in the file called LICENSE.GPL.
+- *
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- * BSD LICENSE
+- *
+- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- * All rights reserved.
+- *
+- * Redistribution and use in source and binary forms, with or without
+- * modification, are permitted provided that the following conditions
+- * are met:
+- *
+- * * Redistributions of source code must retain the above copyright
+- * notice, this list of conditions and the following disclaimer.
+- * * Redistributions in binary form must reproduce the above copyright
+- * notice, this list of conditions and the following disclaimer in
+- * the documentation and/or other materials provided with the
+- * distribution.
+- * * Neither the name Intel Corporation nor the names of its
+- * contributors may be used to endorse or promote products derived
+- * from this software without specific prior written permission.
+- *
+- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+- *
+- *****************************************************************************/
+-
+-#ifndef __iwl_commands_h__
+-#define __iwl_commands_h__
+-
+-enum {
+- REPLY_ALIVE = 0x1,
+- REPLY_ERROR = 0x2,
+-
+- /* RXON and QOS commands */
+- REPLY_RXON = 0x10,
+- REPLY_RXON_ASSOC = 0x11,
+- REPLY_QOS_PARAM = 0x13,
+- REPLY_RXON_TIMING = 0x14,
+-
+- /* Multi-Station support */
+- REPLY_ADD_STA = 0x18,
+- REPLY_REMOVE_STA = 0x19, /* not used */
+- REPLY_REMOVE_ALL_STA = 0x1a, /* not used */
+-
+- /* RX, TX, LEDs */
+-#if IWL == 3945
+- REPLY_3945_RX = 0x1b, /* 3945 only */
+-#endif
+- REPLY_TX = 0x1c,
+- REPLY_RATE_SCALE = 0x47, /* 3945 only */
+- REPLY_LEDS_CMD = 0x48,
+- REPLY_TX_LINK_QUALITY_CMD = 0x4e, /* 4965 only */
+-
+- /* 802.11h related */
+- RADAR_NOTIFICATION = 0x70, /* not used */
+- REPLY_QUIET_CMD = 0x71, /* not used */
+- REPLY_CHANNEL_SWITCH = 0x72,
+- CHANNEL_SWITCH_NOTIFICATION = 0x73,
+- REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74,
+- SPECTRUM_MEASURE_NOTIFICATION = 0x75,
+-
+- /* Power Management */
+- POWER_TABLE_CMD = 0x77,
+- PM_SLEEP_NOTIFICATION = 0x7A,
+- PM_DEBUG_STATISTIC_NOTIFIC = 0x7B,
+-
+- /* Scan commands and notifications */
+- REPLY_SCAN_CMD = 0x80,
+- REPLY_SCAN_ABORT_CMD = 0x81,
+- SCAN_START_NOTIFICATION = 0x82,
+- SCAN_RESULTS_NOTIFICATION = 0x83,
+- SCAN_COMPLETE_NOTIFICATION = 0x84,
+-
+- /* IBSS/AP commands */
+- BEACON_NOTIFICATION = 0x90,
+- REPLY_TX_BEACON = 0x91,
+- WHO_IS_AWAKE_NOTIFICATION = 0x94, /* not used */
+-
+- /* Miscellaneous commands */
+- QUIET_NOTIFICATION = 0x96, /* not used */
+- REPLY_TX_PWR_TABLE_CMD = 0x97,
+- MEASURE_ABORT_NOTIFICATION = 0x99, /* not used */
+-
+- /* BT config command */
+- REPLY_BT_CONFIG = 0x9b,
+-
+- /* 4965 Statistics */
+- REPLY_STATISTICS_CMD = 0x9c,
+- STATISTICS_NOTIFICATION = 0x9d,
+-
+- /* RF-KILL commands and notifications */
+- REPLY_CARD_STATE_CMD = 0xa0,
+- CARD_STATE_NOTIFICATION = 0xa1,
+-
+- /* Missed beacons notification */
+- MISSED_BEACONS_NOTIFICATION = 0xa2,
+-
+-#if IWL == 4965
+- REPLY_CT_KILL_CONFIG_CMD = 0xa4,
+- SENSITIVITY_CMD = 0xa8,
+- REPLY_PHY_CALIBRATION_CMD = 0xb0,
+- REPLY_RX_PHY_CMD = 0xc0,
+- REPLY_RX_MPDU_CMD = 0xc1,
+- REPLY_4965_RX = 0xc3,
+- REPLY_COMPRESSED_BA = 0xc5,
+-#endif
+- REPLY_MAX = 0xff
+-};
+-
+-/******************************************************************************
+- * (0)
+- * Header
+- *
+- *****************************************************************************/
+-
+-#define IWL_CMD_FAILED_MSK 0x40
+-
+-struct iwl_cmd_header {
+- u8 cmd;
+- u8 flags;
+- /* We have 15 LSB to use as we please (MSB indicates
+- * a frame Rx'd from the HW). We encode the following
+- * information into the sequence field:
+- *
+- * 0:7 index in fifo
+- * 8:13 fifo selection
+- * 14:14 bit indicating if this packet references the 'extra'
+- * storage at the end of the memory queue
+- * 15:15 (Rx indication)
+- *
+- */
+- __le16 sequence;
+-
+- /* command data follows immediately */
+- u8 data[0];
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (0a)
+- * Alive and Error Commands & Responses:
+- *
+- *****************************************************************************/
+-
+-#define UCODE_VALID_OK __constant_cpu_to_le32(0x1)
+-#define INITIALIZE_SUBTYPE (9)
+-
+-/*
+- * REPLY_ALIVE = 0x1 (response only, not a command)
+- */
+-struct iwl_alive_resp {
+- u8 ucode_minor;
+- u8 ucode_major;
+- __le16 reserved1;
+- u8 sw_rev[8];
+- u8 ver_type;
+- u8 ver_subtype;
+- __le16 reserved2;
+- __le32 log_event_table_ptr;
+- __le32 error_event_table_ptr;
+- __le32 timestamp;
+- __le32 is_valid;
+-} __attribute__ ((packed));
+-
+-struct iwl_init_alive_resp {
+- u8 ucode_minor;
+- u8 ucode_major;
+- __le16 reserved1;
+- u8 sw_rev[8];
+- u8 ver_type;
+- u8 ver_subtype;
+- __le16 reserved2;
+- __le32 log_event_table_ptr;
+- __le32 error_event_table_ptr;
+- __le32 timestamp;
+- __le32 is_valid;
+-
+-#if IWL == 4965
+- /* calibration values from "initialize" uCode */
+- __le32 voltage; /* signed */
+- __le32 therm_r1[2]; /* signed 1st for normal, 2nd for FAT channel */
+- __le32 therm_r2[2]; /* signed */
+- __le32 therm_r3[2]; /* signed */
+- __le32 therm_r4[2]; /* signed */
+- __le32 tx_atten[5][2]; /* signed MIMO gain comp, 5 freq groups,
+- * 2 Tx chains */
+-#endif
+-} __attribute__ ((packed));
+-
+-union tsf {
+- u8 byte[8];
+- __le16 word[4];
+- __le32 dw[2];
+-};
+-
+-/*
+- * REPLY_ERROR = 0x2 (response only, not a command)
+- */
+-struct iwl_error_resp {
+- __le32 error_type;
+- u8 cmd_id;
+- u8 reserved1;
+- __le16 bad_cmd_seq_num;
+-#if IWL == 3945
+- __le16 reserved2;
+-#endif
+- __le32 error_info;
+- union tsf timestamp;
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (1)
+- * RXON Commands & Responses:
+- *
+- *****************************************************************************/
+-
+-/*
+- * Rx config defines & structure
+- */
+-/* rx_config device types */
+-enum {
+- RXON_DEV_TYPE_AP = 1,
+- RXON_DEV_TYPE_ESS = 3,
+- RXON_DEV_TYPE_IBSS = 4,
+- RXON_DEV_TYPE_SNIFFER = 6,
+-};
+-
+-/* rx_config flags */
+-/* band & modulation selection */
+-#define RXON_FLG_BAND_24G_MSK __constant_cpu_to_le32(1 << 0)
+-#define RXON_FLG_CCK_MSK __constant_cpu_to_le32(1 << 1)
+-/* auto detection enable */
+-#define RXON_FLG_AUTO_DETECT_MSK __constant_cpu_to_le32(1 << 2)
+-/* TGg protection when tx */
+-#define RXON_FLG_TGG_PROTECT_MSK __constant_cpu_to_le32(1 << 3)
+-/* cck short slot & preamble */
+-#define RXON_FLG_SHORT_SLOT_MSK __constant_cpu_to_le32(1 << 4)
+-#define RXON_FLG_SHORT_PREAMBLE_MSK __constant_cpu_to_le32(1 << 5)
+-/* antenna selection */
+-#define RXON_FLG_DIS_DIV_MSK __constant_cpu_to_le32(1 << 7)
+-#define RXON_FLG_ANT_SEL_MSK __constant_cpu_to_le32(0x0f00)
+-#define RXON_FLG_ANT_A_MSK __constant_cpu_to_le32(1 << 8)
+-#define RXON_FLG_ANT_B_MSK __constant_cpu_to_le32(1 << 9)
+-/* radar detection enable */
+-#define RXON_FLG_RADAR_DETECT_MSK __constant_cpu_to_le32(1 << 12)
+-#define RXON_FLG_TGJ_NARROW_BAND_MSK __constant_cpu_to_le32(1 << 13)
+-/* rx response to host with 8-byte TSF
+-* (according to ON_AIR deassertion) */
+-#define RXON_FLG_TSF2HOST_MSK __constant_cpu_to_le32(1 << 15)
+-
+-/* rx_config filter flags */
+-/* accept all data frames */
+-#define RXON_FILTER_PROMISC_MSK __constant_cpu_to_le32(1 << 0)
+-/* pass control & management to host */
+-#define RXON_FILTER_CTL2HOST_MSK __constant_cpu_to_le32(1 << 1)
+-/* accept multi-cast */
+-#define RXON_FILTER_ACCEPT_GRP_MSK __constant_cpu_to_le32(1 << 2)
+-/* don't decrypt uni-cast frames */
+-#define RXON_FILTER_DIS_DECRYPT_MSK __constant_cpu_to_le32(1 << 3)
+-/* don't decrypt multi-cast frames */
+-#define RXON_FILTER_DIS_GRP_DECRYPT_MSK __constant_cpu_to_le32(1 << 4)
+-/* STA is associated */
+-#define RXON_FILTER_ASSOC_MSK __constant_cpu_to_le32(1 << 5)
+-/* transfer to host non bssid beacons in associated state */
+-#define RXON_FILTER_BCON_AWARE_MSK __constant_cpu_to_le32(1 << 6)
+-
+-/*
+- * REPLY_RXON = 0x10 (command, has simple generic response)
+- */
+-struct iwl_rxon_cmd {
+- u8 node_addr[6];
+- __le16 reserved1;
+- u8 bssid_addr[6];
+- __le16 reserved2;
+- u8 wlap_bssid_addr[6];
+- __le16 reserved3;
+- u8 dev_type;
+- u8 air_propagation;
+-#if IWL == 3945
+- __le16 reserved4;
+-#elif IWL == 4965
+- __le16 rx_chain;
+-#endif
+- u8 ofdm_basic_rates;
+- u8 cck_basic_rates;
+- __le16 assoc_id;
+- __le32 flags;
+- __le32 filter_flags;
+- __le16 channel;
+-#if IWL == 3945
+- __le16 reserved5;
+-#elif IWL == 4965
+- u8 ofdm_ht_single_stream_basic_rates;
+- u8 ofdm_ht_dual_stream_basic_rates;
+-#endif
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_RXON_ASSOC = 0x11 (command, has simple generic response)
+- */
+-struct iwl_rxon_assoc_cmd {
+- __le32 flags;
+- __le32 filter_flags;
+- u8 ofdm_basic_rates;
+- u8 cck_basic_rates;
+-#if IWL == 4965
+- u8 ofdm_ht_single_stream_basic_rates;
+- u8 ofdm_ht_dual_stream_basic_rates;
+- __le16 rx_chain_select_flags;
+-#endif
+- __le16 reserved;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_RXON_TIMING = 0x14 (command, has simple generic response)
+- */
+-struct iwl_rxon_time_cmd {
+- union tsf timestamp;
+- __le16 beacon_interval;
+- __le16 atim_window;
+- __le32 beacon_init_val;
+- __le16 listen_interval;
+- __le16 reserved;
+-} __attribute__ ((packed));
+-
+-struct iwl_tx_power {
+- u8 tx_gain; /* gain for analog radio */
+- u8 dsp_atten; /* gain for DSP */
+-} __attribute__ ((packed));
+-
+-#if IWL == 3945
+-struct iwl_power_per_rate {
+- u8 rate; /* plcp */
+- struct iwl_tx_power tpc;
+- u8 reserved;
+-} __attribute__ ((packed));
+-
+-#elif IWL == 4965
+-#define POWER_TABLE_NUM_ENTRIES 33
+-#define POWER_TABLE_NUM_HT_OFDM_ENTRIES 32
+-#define POWER_TABLE_CCK_ENTRY 32
+-struct tx_power_dual_stream {
+- __le32 dw;
+-} __attribute__ ((packed));
+-
+-struct iwl_tx_power_db {
+- struct tx_power_dual_stream power_tbl[POWER_TABLE_NUM_ENTRIES];
+-} __attribute__ ((packed));
+-#endif
+-
+-/*
+- * REPLY_CHANNEL_SWITCH = 0x72 (command, has simple generic response)
+- */
+-struct iwl_channel_switch_cmd {
+- u8 band;
+- u8 expect_beacon;
+- __le16 channel;
+- __le32 rxon_flags;
+- __le32 rxon_filter_flags;
+- __le32 switch_time;
+-#if IWL == 3945
+- struct iwl_power_per_rate power[IWL_MAX_RATES];
+-#elif IWL == 4965
+- struct iwl_tx_power_db tx_power;
+-#endif
+-} __attribute__ ((packed));
+-
+-/*
+- * CHANNEL_SWITCH_NOTIFICATION = 0x73 (notification only, not a command)
+- */
+-struct iwl_csa_notification {
+- __le16 band;
+- __le16 channel;
+- __le32 status; /* 0 - OK, 1 - fail */
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (2)
+- * Quality-of-Service (QOS) Commands & Responses:
+- *
+- *****************************************************************************/
+-struct iwl_ac_qos {
+- __le16 cw_min;
+- __le16 cw_max;
+- u8 aifsn;
+- u8 reserved1;
+- __le16 edca_txop;
+-} __attribute__ ((packed));
+-
+-/* QoS flags defines */
+-#define QOS_PARAM_FLG_UPDATE_EDCA_MSK __constant_cpu_to_le32(0x01)
+-#define QOS_PARAM_FLG_TGN_MSK __constant_cpu_to_le32(0x02)
+-#define QOS_PARAM_FLG_TXOP_TYPE_MSK __constant_cpu_to_le32(0x10)
+-
+-/*
+- * TXFIFO Queue number defines
+- */
+-/* number of Access categories (AC) (EDCA), queues 0..3 */
+-#define AC_NUM 4
+-
+-/*
+- * REPLY_QOS_PARAM = 0x13 (command, has simple generic response)
+- */
+-struct iwl_qosparam_cmd {
+- __le32 qos_flags;
+- struct iwl_ac_qos ac[AC_NUM];
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (3)
+- * Add/Modify Stations Commands & Responses:
+- *
+- *****************************************************************************/
+-/*
+- * Multi station support
+- */
+-#define IWL_AP_ID 0
+-#define IWL_MULTICAST_ID 1
+-#define IWL_STA_ID 2
+-
+-#define IWL3945_BROADCAST_ID 24
+-#define IWL3945_STATION_COUNT 25
+-
+-#define IWL4965_BROADCAST_ID 31
+-#define IWL4965_STATION_COUNT 32
+-
+-#define IWL_STATION_COUNT 32 /* MAX(3945,4965)*/
+-#define IWL_INVALID_STATION 255
+-
+-#if IWL == 3945
+-#define STA_FLG_TX_RATE_MSK __constant_cpu_to_le32(1<<2);
+-#endif
+-#define STA_FLG_PWR_SAVE_MSK __constant_cpu_to_le32(1<<8);
+-
+-#define STA_CONTROL_MODIFY_MSK 0x01
+-
+-/* key flags __le16*/
+-#define STA_KEY_FLG_ENCRYPT_MSK __constant_cpu_to_le16(0x7)
+-#define STA_KEY_FLG_NO_ENC __constant_cpu_to_le16(0x0)
+-#define STA_KEY_FLG_WEP __constant_cpu_to_le16(0x1)
+-#define STA_KEY_FLG_CCMP __constant_cpu_to_le16(0x2)
+-#define STA_KEY_FLG_TKIP __constant_cpu_to_le16(0x3)
+-
+-#define STA_KEY_FLG_KEYID_POS 8
+-#define STA_KEY_FLG_INVALID __constant_cpu_to_le16(0x0800)
+-
+-/* modify flags */
+-#define STA_MODIFY_KEY_MASK 0x01
+-#define STA_MODIFY_TID_DISABLE_TX 0x02
+-#define STA_MODIFY_TX_RATE_MSK 0x04
+-#define STA_MODIFY_ADDBA_TID_MSK 0x08
+-#define STA_MODIFY_DELBA_TID_MSK 0x10
+-#define BUILD_RAxTID(sta_id, tid) (((sta_id) << 4) + (tid))
+-
+-/*
+- * Antenna masks:
+- * bit14:15 01 B inactive, A active
+- * 10 B active, A inactive
+- * 11 Both active
+- */
+-#define RATE_MCS_ANT_A_POS 14
+-#define RATE_MCS_ANT_B_POS 15
+-#define RATE_MCS_ANT_A_MSK 0x4000
+-#define RATE_MCS_ANT_B_MSK 0x8000
+-#define RATE_MCS_ANT_AB_MSK 0xc000
+-
+-struct iwl_keyinfo {
+- __le16 key_flags;
+- u8 tkip_rx_tsc_byte2; /* TSC[2] for key mix ph1 detection */
+- u8 reserved1;
+- __le16 tkip_rx_ttak[5]; /* 10-byte unicast TKIP TTAK */
+- __le16 reserved2;
+- u8 key[16]; /* 16-byte unicast decryption key */
+-} __attribute__ ((packed));
+-
+-struct sta_id_modify {
+- u8 addr[ETH_ALEN];
+- __le16 reserved1;
+- u8 sta_id;
+- u8 modify_mask;
+- __le16 reserved2;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_ADD_STA = 0x18 (command)
+- */
+-struct iwl_addsta_cmd {
+- u8 mode;
+- u8 reserved[3];
+- struct sta_id_modify sta;
+- struct iwl_keyinfo key;
+- __le32 station_flags;
+- __le32 station_flags_msk;
+- __le16 tid_disable_tx;
+-#if IWL == 3945
+- __le16 rate_n_flags;
+-#else
+- __le16 reserved1;
+-#endif
+- u8 add_immediate_ba_tid;
+- u8 remove_immediate_ba_tid;
+- __le16 add_immediate_ba_ssn;
+-#if IWL == 4965
+- __le32 reserved2;
+-#endif
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_ADD_STA = 0x18 (response)
+- */
+-struct iwl_add_sta_resp {
+- u8 status;
+-} __attribute__ ((packed));
+-
+-#define ADD_STA_SUCCESS_MSK 0x1
+-
+-/******************************************************************************
+- * (4)
+- * Rx Responses:
+- *
+- *****************************************************************************/
+-
+-struct iwl_rx_frame_stats {
+- u8 phy_count;
+- u8 id;
+- u8 rssi;
+- u8 agc;
+- __le16 sig_avg;
+- __le16 noise_diff;
+- u8 payload[0];
+-} __attribute__ ((packed));
+-
+-struct iwl_rx_frame_hdr {
+- __le16 channel;
+- __le16 phy_flags;
+- u8 reserved1;
+- u8 rate;
+- __le16 len;
+- u8 payload[0];
+-} __attribute__ ((packed));
+-
+-#define RX_RES_STATUS_NO_CRC32_ERROR __constant_cpu_to_le32(1 << 0)
+-#define RX_RES_STATUS_NO_RXE_OVERFLOW __constant_cpu_to_le32(1 << 1)
+-
+-#define RX_RES_PHY_FLAGS_BAND_24_MSK __constant_cpu_to_le16(1 << 0)
+-#define RX_RES_PHY_FLAGS_MOD_CCK_MSK __constant_cpu_to_le16(1 << 1)
+-#define RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK __constant_cpu_to_le16(1 << 2)
+-#define RX_RES_PHY_FLAGS_NARROW_BAND_MSK __constant_cpu_to_le16(1 << 3)
+-#define RX_RES_PHY_FLAGS_ANTENNA_MSK __constant_cpu_to_le16(0xf0)
+-
+-#define RX_RES_STATUS_SEC_TYPE_MSK (0x7 << 8)
+-#define RX_RES_STATUS_SEC_TYPE_NONE (0x0 << 8)
+-#define RX_RES_STATUS_SEC_TYPE_WEP (0x1 << 8)
+-#define RX_RES_STATUS_SEC_TYPE_CCMP (0x2 << 8)
+-#define RX_RES_STATUS_SEC_TYPE_TKIP (0x3 << 8)
+-
+-#define RX_RES_STATUS_DECRYPT_TYPE_MSK (0x3 << 11)
+-#define RX_RES_STATUS_NOT_DECRYPT (0x0 << 11)
+-#define RX_RES_STATUS_DECRYPT_OK (0x3 << 11)
+-#define RX_RES_STATUS_BAD_ICV_MIC (0x1 << 11)
+-#define RX_RES_STATUS_BAD_KEY_TTAK (0x2 << 11)
+-
+-struct iwl_rx_frame_end {
+- __le32 status;
+- __le64 timestamp;
+- __le32 beacon_timestamp;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_3945_RX = 0x1b (response only, not a command)
+- *
+- * NOTE: DO NOT dereference from casts to this structure
+- * It is provided only for calculating minimum data set size.
+- * The actual offsets of the hdr and end are dynamic based on
+- * stats.phy_count
+- */
+-struct iwl_rx_frame {
+- struct iwl_rx_frame_stats stats;
+- struct iwl_rx_frame_hdr hdr;
+- struct iwl_rx_frame_end end;
+-} __attribute__ ((packed));
+-
+-/* Fixed (non-configurable) rx data from phy */
+-#define RX_PHY_FLAGS_ANTENNAE_OFFSET (4)
+-#define RX_PHY_FLAGS_ANTENNAE_MASK (0x70)
+-#define IWL_AGC_DB_MASK (0x3f80) /* MASK(7,13) */
+-#define IWL_AGC_DB_POS (7)
+-struct iwl4965_rx_non_cfg_phy {
+- __le16 ant_selection; /* ant A bit 4, ant B bit 5, ant C bit 6 */
+- __le16 agc_info; /* agc code 0:6, agc dB 7:13, reserved 14:15 */
+- u8 rssi_info[6]; /* we use even entries, 0/2/4 for A/B/C rssi */
+- u8 pad[0];
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_4965_RX = 0xc3 (response only, not a command)
+- * Used only for legacy (non 11n) frames.
+- */
+-#define RX_RES_PHY_CNT 14
+-struct iwl4965_rx_phy_res {
+- u8 non_cfg_phy_cnt; /* non configurable DSP phy data byte count */
+- u8 cfg_phy_cnt; /* configurable DSP phy data byte count */
+- u8 stat_id; /* configurable DSP phy data set ID */
+- u8 reserved1;
+- __le64 timestamp; /* TSF at on air rise */
+- __le32 beacon_time_stamp; /* beacon at on-air rise */
+- __le16 phy_flags; /* general phy flags: band, modulation, ... */
+- __le16 channel; /* channel number */
+- __le16 non_cfg_phy[RX_RES_PHY_CNT]; /* upto 14 phy entries */
+- __le32 reserved2;
+- __le32 rate_n_flags;
+- __le16 byte_count; /* frame's byte-count */
+- __le16 reserved3;
+-} __attribute__ ((packed));
+-
+-struct iwl4965_rx_mpdu_res_start {
+- __le16 byte_count;
+- __le16 reserved;
+-} __attribute__ ((packed));
+-
+-
+-/******************************************************************************
+- * (5)
+- * Tx Commands & Responses:
+- *
+- *****************************************************************************/
+-
+-/* Tx flags */
+-#define TX_CMD_FLG_RTS_MSK __constant_cpu_to_le32(1 << 1)
+-#define TX_CMD_FLG_CTS_MSK __constant_cpu_to_le32(1 << 2)
+-#define TX_CMD_FLG_ACK_MSK __constant_cpu_to_le32(1 << 3)
+-#define TX_CMD_FLG_STA_RATE_MSK __constant_cpu_to_le32(1 << 4)
+-#define TX_CMD_FLG_IMM_BA_RSP_MASK __constant_cpu_to_le32(1 << 6)
+-#define TX_CMD_FLG_FULL_TXOP_PROT_MSK __constant_cpu_to_le32(1 << 7)
+-#define TX_CMD_FLG_ANT_SEL_MSK __constant_cpu_to_le32(0xf00)
+-#define TX_CMD_FLG_ANT_A_MSK __constant_cpu_to_le32(1 << 8)
+-#define TX_CMD_FLG_ANT_B_MSK __constant_cpu_to_le32(1 << 9)
+-
+-/* ucode ignores BT priority for this frame */
+-#define TX_CMD_FLG_BT_DIS_MSK __constant_cpu_to_le32(1 << 12)
+-
+-/* ucode overrides sequence control */
+-#define TX_CMD_FLG_SEQ_CTL_MSK __constant_cpu_to_le32(1 << 13)
+-
+-/* signal that this frame is non-last MPDU */
+-#define TX_CMD_FLG_MORE_FRAG_MSK __constant_cpu_to_le32(1 << 14)
+-
+-/* calculate TSF in outgoing frame */
+-#define TX_CMD_FLG_TSF_MSK __constant_cpu_to_le32(1 << 16)
+-
+-/* activate TX calibration. */
+-#define TX_CMD_FLG_CALIB_MSK __constant_cpu_to_le32(1 << 17)
+-
+-/* signals that 2 bytes pad was inserted
+- after the MAC header */
+-#define TX_CMD_FLG_MH_PAD_MSK __constant_cpu_to_le32(1 << 20)
+-
+-/* HCCA-AP - disable duration overwriting. */
+-#define TX_CMD_FLG_DUR_MSK __constant_cpu_to_le32(1 << 25)
+-
+-/*
+- * TX command security control
+- */
+-#define TX_CMD_SEC_WEP 0x01
+-#define TX_CMD_SEC_CCM 0x02
+-#define TX_CMD_SEC_TKIP 0x03
+-#define TX_CMD_SEC_MSK 0x03
+-#define TX_CMD_SEC_SHIFT 6
+-#define TX_CMD_SEC_KEY128 0x08
+-
+-/*
+- * TX command Frame life time
+- */
+-
+-struct iwl_dram_scratch {
+- u8 try_cnt;
+- u8 bt_kill_cnt;
+- __le16 reserved;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_TX = 0x1c (command)
+- */
+-struct iwl_tx_cmd {
+- __le16 len;
+- __le16 next_frame_len;
+- __le32 tx_flags;
+-#if IWL == 3945
+- u8 rate;
+- u8 sta_id;
+- u8 tid_tspec;
+-#elif IWL == 4965
+- struct iwl_dram_scratch scratch;
+- __le32 rate_n_flags;
+- u8 sta_id;
+-#endif
+- u8 sec_ctl;
+-#if IWL == 4965
+- u8 initial_rate_index;
+- u8 reserved;
+-#endif
+- u8 key[16];
+-#if IWL == 3945
+- union {
+- u8 byte[8];
+- __le16 word[4];
+- __le32 dw[2];
+- } tkip_mic;
+- __le32 next_frame_info;
+-#elif IWL == 4965
+- __le16 next_frame_flags;
+- __le16 reserved2;
+-#endif
+- union {
+- __le32 life_time;
+- __le32 attempt;
+- } stop_time;
+-#if IWL == 3945
+- u8 supp_rates[2];
+-#elif IWL == 4965
+- __le32 dram_lsb_ptr;
+- u8 dram_msb_ptr;
+-#endif
+- u8 rts_retry_limit; /*byte 50 */
+- u8 data_retry_limit; /*byte 51 */
+-#if IWL == 4965
+- u8 tid_tspec;
+-#endif
+- union {
+- __le16 pm_frame_timeout;
+- __le16 attempt_duration;
+- } timeout;
+- __le16 driver_txop;
+- u8 payload[0];
+- struct ieee80211_hdr hdr[0];
+-} __attribute__ ((packed));
+-
+-/* TX command response is sent after *all* transmission attempts.
+- *
+- * NOTES:
+- *
+- * TX_STATUS_FAIL_NEXT_FRAG
+- *
+- * If the fragment flag in the MAC header for the frame being transmitted
+- * is set and there is insufficient time to transmit the next frame, the
+- * TX status will be returned with 'TX_STATUS_FAIL_NEXT_FRAG'.
+- *
+- * TX_STATUS_FIFO_UNDERRUN
+- *
+- * Indicates the host did not provide bytes to the FIFO fast enough while
+- * a TX was in progress.
+- *
+- * TX_STATUS_FAIL_MGMNT_ABORT
+- *
+- * This status is only possible if the ABORT ON MGMT RX parameter was
+- * set to true with the TX command.
+- *
+- * If the MSB of the status parameter is set then an abort sequence is
+- * required. This sequence consists of the host activating the TX Abort
+- * control line, and then waiting for the TX Abort command response. This
+- * indicates that a the device is no longer in a transmit state, and that the
+- * command FIFO has been cleared. The host must then deactivate the TX Abort
+- * control line. Receiving is still allowed in this case.
+- */
+-enum {
+- TX_STATUS_SUCCESS = 0x01,
+- TX_STATUS_DIRECT_DONE = 0x02,
+- TX_STATUS_FAIL_SHORT_LIMIT = 0x82,
+- TX_STATUS_FAIL_LONG_LIMIT = 0x83,
+- TX_STATUS_FAIL_FIFO_UNDERRUN = 0x84,
+- TX_STATUS_FAIL_MGMNT_ABORT = 0x85,
+- TX_STATUS_FAIL_NEXT_FRAG = 0x86,
+- TX_STATUS_FAIL_LIFE_EXPIRE = 0x87,
+- TX_STATUS_FAIL_DEST_PS = 0x88,
+- TX_STATUS_FAIL_ABORTED = 0x89,
+- TX_STATUS_FAIL_BT_RETRY = 0x8a,
+- TX_STATUS_FAIL_STA_INVALID = 0x8b,
+- TX_STATUS_FAIL_FRAG_DROPPED = 0x8c,
+- TX_STATUS_FAIL_TID_DISABLE = 0x8d,
+- TX_STATUS_FAIL_FRAME_FLUSHED = 0x8e,
+- TX_STATUS_FAIL_INSUFFICIENT_CF_POLL = 0x8f,
+- TX_STATUS_FAIL_TX_LOCKED = 0x90,
+- TX_STATUS_FAIL_NO_BEACON_ON_RADAR = 0x91,
+-};
+-
+-#define TX_PACKET_MODE_REGULAR 0x0000
+-#define TX_PACKET_MODE_BURST_SEQ 0x0100
+-#define TX_PACKET_MODE_BURST_FIRST 0x0200
+-
+-enum {
+- TX_POWER_PA_NOT_ACTIVE = 0x0,
+-};
+-
+-enum {
+- TX_STATUS_MSK = 0x000000ff, /* bits 0:7 */
+- TX_STATUS_DELAY_MSK = 0x00000040,
+- TX_STATUS_ABORT_MSK = 0x00000080,
+- TX_PACKET_MODE_MSK = 0x0000ff00, /* bits 8:15 */
+- TX_FIFO_NUMBER_MSK = 0x00070000, /* bits 16:18 */
+- TX_RESERVED = 0x00780000, /* bits 19:22 */
+- TX_POWER_PA_DETECT_MSK = 0x7f800000, /* bits 23:30 */
+- TX_ABORT_REQUIRED_MSK = 0x80000000, /* bits 31:31 */
+-};
+-
+-/* *******************************
+- * TX aggregation state
+- ******************************* */
+-
+-enum {
+- AGG_TX_STATE_TRANSMITTED = 0x00,
+- AGG_TX_STATE_UNDERRUN_MSK = 0x01,
+- AGG_TX_STATE_BT_PRIO_MSK = 0x02,
+- AGG_TX_STATE_FEW_BYTES_MSK = 0x04,
+- AGG_TX_STATE_ABORT_MSK = 0x08,
+- AGG_TX_STATE_LAST_SENT_TTL_MSK = 0x10,
+- AGG_TX_STATE_LAST_SENT_TRY_CNT_MSK = 0x20,
+- AGG_TX_STATE_LAST_SENT_BT_KILL_MSK = 0x40,
+- AGG_TX_STATE_SCD_QUERY_MSK = 0x80,
+- AGG_TX_STATE_TEST_BAD_CRC32_MSK = 0x100,
+- AGG_TX_STATE_RESPONSE_MSK = 0x1ff,
+- AGG_TX_STATE_DUMP_TX_MSK = 0x200,
+- AGG_TX_STATE_DELAY_TX_MSK = 0x400
+-};
+-
+-#define AGG_TX_STATE_LAST_SENT_MSK \
+-(AGG_TX_STATE_LAST_SENT_TTL_MSK | \
+- AGG_TX_STATE_LAST_SENT_TRY_CNT_MSK | \
+- AGG_TX_STATE_LAST_SENT_BT_KILL_MSK)
+-
+-#define AGG_TX_STATE_TRY_CNT_POS 12
+-#define AGG_TX_STATE_TRY_CNT_MSK 0xf000
+-
+-#define AGG_TX_STATE_SEQ_NUM_POS 16
+-#define AGG_TX_STATE_SEQ_NUM_MSK 0xffff0000
+-
+-/*
+- * REPLY_TX = 0x1c (response)
+- */
+-#if IWL == 4965
+-struct iwl_tx_resp {
+- u8 frame_count; /* 1 no aggregation, >1 aggregation */
+- u8 bt_kill_count;
+- u8 failure_rts;
+- u8 failure_frame;
+- __le32 rate_n_flags;
+- __le16 wireless_media_time;
+- __le16 reserved;
+- __le32 pa_power1;
+- __le32 pa_power2;
+- __le32 status; /* TX status (for aggregation status of 1st frame) */
+-} __attribute__ ((packed));
+-
+-#elif IWL == 3945
+-struct iwl_tx_resp {
+- u8 failure_rts;
+- u8 failure_frame;
+- u8 bt_kill_count;
+- u8 rate;
+- __le32 wireless_media_time;
+- __le32 status; /* TX status (for aggregation status of 1st frame) */
+-} __attribute__ ((packed));
+-#endif
+-
+-/*
+- * REPLY_COMPRESSED_BA = 0xc5 (response only, not a command)
+- */
+-struct iwl_compressed_ba_resp {
+- __le32 sta_addr_lo32;
+- __le16 sta_addr_hi16;
+- __le16 reserved;
+- u8 sta_id;
+- u8 tid;
+- __le16 ba_seq_ctl;
+- __le32 ba_bitmap0;
+- __le32 ba_bitmap1;
+- __le16 scd_flow;
+- __le16 scd_ssn;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_TX_PWR_TABLE_CMD = 0x97 (command, has simple generic response)
+- */
+-struct iwl_txpowertable_cmd {
+- u8 band; /* 0: 5 GHz, 1: 2.4 GHz */
+- u8 reserved;
+- __le16 channel;
+-#if IWL == 3945
+- struct iwl_power_per_rate power[IWL_MAX_RATES];
+-#elif IWL == 4965
+- struct iwl_tx_power_db tx_power;
+-#endif
+-} __attribute__ ((packed));
+-
+-#if IWL == 3945
+-struct iwl_rate_scaling_info {
+- __le16 rate_n_flags;
+- u8 try_cnt;
+- u8 next_rate_index;
+-} __attribute__ ((packed));
+-
+-/**
+- * struct iwl_rate_scaling_cmd - Rate Scaling Command & Response
+- *
+- * REPLY_RATE_SCALE = 0x47 (command, has simple generic response)
+- *
+- * NOTE: The table of rates passed to the uCode via the
+- * RATE_SCALE command sets up the corresponding order of
+- * rates used for all related commands, including rate
+- * masks, etc.
+- *
+- * For example, if you set 9MB (PLCP 0x0f) as the first
+- * rate in the rate table, the bit mask for that rate
+- * when passed through ofdm_basic_rates on the REPLY_RXON
+- * command would be bit 0 (1<<0)
+- */
+-struct iwl_rate_scaling_cmd {
+- u8 table_id;
+- u8 reserved[3];
+- struct iwl_rate_scaling_info table[IWL_MAX_RATES];
+-} __attribute__ ((packed));
+-
+-#elif IWL == 4965
+-
+-/*RS_NEW_API: only TLC_RTS remains and moved to bit 0 */
+-#define LINK_QUAL_FLAGS_SET_STA_TLC_RTS_MSK (1<<0)
+-
+-#define LINK_QUAL_AC_NUM AC_NUM
+-#define LINK_QUAL_MAX_RETRY_NUM 16
+-
+-#define LINK_QUAL_ANT_A_MSK (1<<0)
+-#define LINK_QUAL_ANT_B_MSK (1<<1)
+-#define LINK_QUAL_ANT_MSK (LINK_QUAL_ANT_A_MSK|LINK_QUAL_ANT_B_MSK)
+-
+-struct iwl_link_qual_general_params {
+- u8 flags;
+- u8 mimo_delimiter;
+- u8 single_stream_ant_msk;
+- u8 dual_stream_ant_msk;
+- u8 start_rate_index[LINK_QUAL_AC_NUM];
+-} __attribute__ ((packed));
+-
+-struct iwl_link_qual_agg_params {
+- __le16 agg_time_limit;
+- u8 agg_dis_start_th;
+- u8 agg_frame_cnt_limit;
+- __le32 reserved;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_TX_LINK_QUALITY_CMD = 0x4e (command, has simple generic response)
+- */
+-struct iwl_link_quality_cmd {
+- u8 sta_id;
+- u8 reserved1;
+- __le16 control;
+- struct iwl_link_qual_general_params general_params;
+- struct iwl_link_qual_agg_params agg_params;
+- struct {
+- __le32 rate_n_flags;
+- } rs_table[LINK_QUAL_MAX_RETRY_NUM];
+- __le32 reserved2;
+-} __attribute__ ((packed));
+-#endif
+-
+-/*
+- * REPLY_BT_CONFIG = 0x9b (command, has simple generic response)
+- */
+-struct iwl_bt_cmd {
+- u8 flags;
+- u8 lead_time;
+- u8 max_kill;
+- u8 reserved;
+- __le32 kill_ack_mask;
+- __le32 kill_cts_mask;
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (6)
+- * Spectrum Management (802.11h) Commands, Responses, Notifications:
+- *
+- *****************************************************************************/
+-
+-/*
+- * Spectrum Management
+- */
+-#define MEASUREMENT_FILTER_FLAG (RXON_FILTER_PROMISC_MSK | \
+- RXON_FILTER_CTL2HOST_MSK | \
+- RXON_FILTER_ACCEPT_GRP_MSK | \
+- RXON_FILTER_DIS_DECRYPT_MSK | \
+- RXON_FILTER_DIS_GRP_DECRYPT_MSK | \
+- RXON_FILTER_ASSOC_MSK | \
+- RXON_FILTER_BCON_AWARE_MSK)
+-
+-struct iwl_measure_channel {
+- __le32 duration; /* measurement duration in extended beacon
+- * format */
+- u8 channel; /* channel to measure */
+- u8 type; /* see enum iwl_measure_type */
+- __le16 reserved;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74 (command)
+- */
+-struct iwl_spectrum_cmd {
+- __le16 len; /* number of bytes starting from token */
+- u8 token; /* token id */
+- u8 id; /* measurement id -- 0 or 1 */
+- u8 origin; /* 0 = TGh, 1 = other, 2 = TGk */
+- u8 periodic; /* 1 = periodic */
+- __le16 path_loss_timeout;
+- __le32 start_time; /* start time in extended beacon format */
+- __le32 reserved2;
+- __le32 flags; /* rxon flags */
+- __le32 filter_flags; /* rxon filter flags */
+- __le16 channel_count; /* minimum 1, maximum 10 */
+- __le16 reserved3;
+- struct iwl_measure_channel channels[10];
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74 (response)
+- */
+-struct iwl_spectrum_resp {
+- u8 token;
+- u8 id; /* id of the prior command replaced, or 0xff */
+- __le16 status; /* 0 - command will be handled
+- * 1 - cannot handle (conflicts with another
+- * measurement) */
+-} __attribute__ ((packed));
+-
+-enum iwl_measurement_state {
+- IWL_MEASUREMENT_START = 0,
+- IWL_MEASUREMENT_STOP = 1,
+-};
+-
+-enum iwl_measurement_status {
+- IWL_MEASUREMENT_OK = 0,
+- IWL_MEASUREMENT_CONCURRENT = 1,
+- IWL_MEASUREMENT_CSA_CONFLICT = 2,
+- IWL_MEASUREMENT_TGH_CONFLICT = 3,
+- /* 4-5 reserved */
+- IWL_MEASUREMENT_STOPPED = 6,
+- IWL_MEASUREMENT_TIMEOUT = 7,
+- IWL_MEASUREMENT_PERIODIC_FAILED = 8,
+-};
+-
+-#define NUM_ELEMENTS_IN_HISTOGRAM 8
+-
+-struct iwl_measurement_histogram {
+- __le32 ofdm[NUM_ELEMENTS_IN_HISTOGRAM]; /* in 0.8usec counts */
+- __le32 cck[NUM_ELEMENTS_IN_HISTOGRAM]; /* in 1usec counts */
+-} __attribute__ ((packed));
+-
+-/* clear channel availability counters */
+-struct iwl_measurement_cca_counters {
+- __le32 ofdm;
+- __le32 cck;
+-} __attribute__ ((packed));
+-
+-enum iwl_measure_type {
+- IWL_MEASURE_BASIC = (1 << 0),
+- IWL_MEASURE_CHANNEL_LOAD = (1 << 1),
+- IWL_MEASURE_HISTOGRAM_RPI = (1 << 2),
+- IWL_MEASURE_HISTOGRAM_NOISE = (1 << 3),
+- IWL_MEASURE_FRAME = (1 << 4),
+- /* bits 5:6 are reserved */
+- IWL_MEASURE_IDLE = (1 << 7),
+-};
+-
+-/*
+- * SPECTRUM_MEASURE_NOTIFICATION = 0x75 (notification only, not a command)
+- */
+-struct iwl_spectrum_notification {
+- u8 id; /* measurement id -- 0 or 1 */
+- u8 token;
+- u8 channel_index; /* index in measurement channel list */
+- u8 state; /* 0 - start, 1 - stop */
+- __le32 start_time; /* lower 32-bits of TSF */
+- u8 band; /* 0 - 5.2GHz, 1 - 2.4GHz */
+- u8 channel;
+- u8 type; /* see enum iwl_measurement_type */
+- u8 reserved1;
+- /* NOTE: cca_ofdm, cca_cck, basic_type, and histogram are only only
+- * valid if applicable for measurement type requested. */
+- __le32 cca_ofdm; /* cca fraction time in 40Mhz clock periods */
+- __le32 cca_cck; /* cca fraction time in 44Mhz clock periods */
+- __le32 cca_time; /* channel load time in usecs */
+- u8 basic_type; /* 0 - bss, 1 - ofdm preamble, 2 -
+- * unidentified */
+- u8 reserved2[3];
+- struct iwl_measurement_histogram histogram;
+- __le32 stop_time; /* lower 32-bits of TSF */
+- __le32 status; /* see iwl_measurement_status */
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (7)
+- * Power Management Commands, Responses, Notifications:
+- *
+- *****************************************************************************/
+-
+-/**
+- * struct iwl_powertable_cmd - Power Table Command
+- * @flags: See below:
+- *
+- * POWER_TABLE_CMD = 0x77 (command, has simple generic response)
+- *
+- * PM allow:
+- * bit 0 - '0' Driver not allow power management
+- * '1' Driver allow PM (use rest of parameters)
+- * uCode send sleep notifications:
+- * bit 1 - '0' Don't send sleep notification
+- * '1' send sleep notification (SEND_PM_NOTIFICATION)
+- * Sleep over DTIM
+- * bit 2 - '0' PM have to walk up every DTIM
+- * '1' PM could sleep over DTIM till listen Interval.
+- * PCI power managed
+- * bit 3 - '0' (PCI_LINK_CTRL & 0x1)
+- * '1' !(PCI_LINK_CTRL & 0x1)
+- * Force sleep Modes
+- * bit 31/30- '00' use both mac/xtal sleeps
+- * '01' force Mac sleep
+- * '10' force xtal sleep
+- * '11' Illegal set
+- *
+- * NOTE: if sleep_interval[SLEEP_INTRVL_TABLE_SIZE-1] > DTIM period then
+- * ucode assume sleep over DTIM is allowed and we don't need to wakeup
+- * for every DTIM.
+- */
+-#define IWL_POWER_VEC_SIZE 5
+-
+-
+-#if IWL == 3945
+-
+-#define IWL_POWER_DRIVER_ALLOW_SLEEP_MSK __constant_cpu_to_le32(1<<0)
+-#define IWL_POWER_SLEEP_OVER_DTIM_MSK __constant_cpu_to_le32(1<<2)
+-#define IWL_POWER_PCI_PM_MSK __constant_cpu_to_le32(1<<3)
+-struct iwl_powertable_cmd {
+- __le32 flags;
+- __le32 rx_data_timeout;
+- __le32 tx_data_timeout;
+- __le32 sleep_interval[IWL_POWER_VEC_SIZE];
+-} __attribute__((packed));
+-
+-#elif IWL == 4965
+-
+-#define IWL_POWER_DRIVER_ALLOW_SLEEP_MSK __constant_cpu_to_le16(1<<0)
+-#define IWL_POWER_SLEEP_OVER_DTIM_MSK __constant_cpu_to_le16(1<<2)
+-#define IWL_POWER_PCI_PM_MSK __constant_cpu_to_le16(1<<3)
+-
+-struct iwl_powertable_cmd {
+- __le16 flags;
+- u8 keep_alive_seconds;
+- u8 debug_flags;
+- __le32 rx_data_timeout;
+- __le32 tx_data_timeout;
+- __le32 sleep_interval[IWL_POWER_VEC_SIZE];
+- __le32 keep_alive_beacons;
+-} __attribute__ ((packed));
+-#endif
+-
+-/*
+- * PM_SLEEP_NOTIFICATION = 0x7A (notification only, not a command)
+- * 3945 and 4965 identical.
+- */
+-struct iwl_sleep_notification {
+- u8 pm_sleep_mode;
+- u8 pm_wakeup_src;
+- __le16 reserved;
+- __le32 sleep_time;
+- __le32 tsf_low;
+- __le32 bcon_timer;
+-} __attribute__ ((packed));
+-
+-/* Sleep states. 3945 and 4965 identical. */
+-enum {
+- IWL_PM_NO_SLEEP = 0,
+- IWL_PM_SLP_MAC = 1,
+- IWL_PM_SLP_FULL_MAC_UNASSOCIATE = 2,
+- IWL_PM_SLP_FULL_MAC_CARD_STATE = 3,
+- IWL_PM_SLP_PHY = 4,
+- IWL_PM_SLP_REPENT = 5,
+- IWL_PM_WAKEUP_BY_TIMER = 6,
+- IWL_PM_WAKEUP_BY_DRIVER = 7,
+- IWL_PM_WAKEUP_BY_RFKILL = 8,
+- /* 3 reserved */
+- IWL_PM_NUM_OF_MODES = 12,
+-};
+-
+-/*
+- * REPLY_CARD_STATE_CMD = 0xa0 (command, has simple generic response)
+- */
+-#define CARD_STATE_CMD_DISABLE 0x00 /* Put card to sleep */
+-#define CARD_STATE_CMD_ENABLE 0x01 /* Wake up card */
+-#define CARD_STATE_CMD_HALT 0x02 /* Power down permanently */
+-struct iwl_card_state_cmd {
+- __le32 status; /* CARD_STATE_CMD_* request new power state */
+-} __attribute__ ((packed));
+-
+-/*
+- * CARD_STATE_NOTIFICATION = 0xa1 (notification only, not a command)
+- */
+-struct iwl_card_state_notif {
+- __le32 flags;
+-} __attribute__ ((packed));
-
-- priv->hw_setting.cck_flag = RATE_MCS_CCK_MSK;
-- priv->hw_setting.tx_cmd_len = sizeof(struct iwl_tx_cmd);
-+ priv->hw_setting.tx_cmd_len = sizeof(struct iwl4965_tx_cmd);
- priv->hw_setting.max_rxq_size = RX_QUEUE_SIZE;
- priv->hw_setting.max_rxq_log = RX_QUEUE_SIZE_LOG;
+-#define HW_CARD_DISABLED 0x01
+-#define SW_CARD_DISABLED 0x02
+-#define RF_CARD_DISABLED 0x04
+-#define RXON_CARD_DISABLED 0x10
-
-+ if (iwl4965_param_amsdu_size_8K)
-+ priv->hw_setting.rx_buf_size = IWL_RX_BUF_SIZE_8K;
-+ else
-+ priv->hw_setting.rx_buf_size = IWL_RX_BUF_SIZE_4K;
-+ priv->hw_setting.max_pkt_size = priv->hw_setting.rx_buf_size - 256;
- priv->hw_setting.max_stations = IWL4965_STATION_COUNT;
- priv->hw_setting.bcast_sta_id = IWL4965_BROADCAST_ID;
- return 0;
- }
-
- /**
-- * iwl_hw_txq_ctx_free - Free TXQ Context
-+ * iwl4965_hw_txq_ctx_free - Free TXQ Context
- *
- * Destroy all TX DMA queues and structures
- */
--void iwl_hw_txq_ctx_free(struct iwl_priv *priv)
-+void iwl4965_hw_txq_ctx_free(struct iwl4965_priv *priv)
- {
- int txq_id;
-
- /* Tx queues */
- for (txq_id = 0; txq_id < priv->hw_setting.max_txq_num; txq_id++)
-- iwl_tx_queue_free(priv, &priv->txq[txq_id]);
-+ iwl4965_tx_queue_free(priv, &priv->txq[txq_id]);
-
-+ /* Keep-warm buffer */
- iwl4965_kw_free(priv);
- }
-
- /**
-- * iwl_hw_txq_free_tfd - Free one TFD, those at index [txq->q.last_used]
-+ * iwl4965_hw_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr]
- *
-- * Does NOT advance any indexes
-+ * Does NOT advance any TFD circular buffer read/write indexes
-+ * Does NOT free the TFD itself (which is within circular buffer)
- */
--int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
-+int iwl4965_hw_txq_free_tfd(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
- {
-- struct iwl_tfd_frame *bd_tmp = (struct iwl_tfd_frame *)&txq->bd[0];
-- struct iwl_tfd_frame *bd = &bd_tmp[txq->q.last_used];
-+ struct iwl4965_tfd_frame *bd_tmp = (struct iwl4965_tfd_frame *)&txq->bd[0];
-+ struct iwl4965_tfd_frame *bd = &bd_tmp[txq->q.read_ptr];
- struct pci_dev *dev = priv->pci_dev;
- int i;
- int counter = 0;
- int index, is_odd;
-
-- /* classify bd */
-+ /* Host command buffers stay mapped in memory, nothing to clean */
- if (txq->q.id == IWL_CMD_QUEUE_NUM)
-- /* nothing to cleanup after for host commands */
- return 0;
-
-- /* sanity check */
-+ /* Sanity check on number of chunks */
- counter = IWL_GET_BITS(*bd, num_tbs);
- if (counter > MAX_NUM_OF_TBS) {
- IWL_ERROR("Too many chunks: %i\n", counter);
-@@ -1775,8 +1827,8 @@ int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
- return 0;
- }
-
-- /* unmap chunks if any */
+-struct iwl_ct_kill_config {
+- __le32 reserved;
+- __le32 critical_temperature_M;
+- __le32 critical_temperature_R;
+-} __attribute__ ((packed));
-
-+ /* Unmap chunks, if any.
-+ * TFD info for odd chunks is different format than for even chunks. */
- for (i = 0; i < counter; i++) {
- index = i / 2;
- is_odd = i & 0x1;
-@@ -1796,19 +1848,20 @@ int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq)
- IWL_GET_BITS(bd->pa[index], tb1_len),
- PCI_DMA_TODEVICE);
-
-- if (txq->txb[txq->q.last_used].skb[i]) {
-- struct sk_buff *skb = txq->txb[txq->q.last_used].skb[i];
-+ /* Free SKB, if any, for this chunk */
-+ if (txq->txb[txq->q.read_ptr].skb[i]) {
-+ struct sk_buff *skb = txq->txb[txq->q.read_ptr].skb[i];
-
- dev_kfree_skb(skb);
-- txq->txb[txq->q.last_used].skb[i] = NULL;
-+ txq->txb[txq->q.read_ptr].skb[i] = NULL;
- }
- }
- return 0;
- }
-
--int iwl_hw_reg_set_txpower(struct iwl_priv *priv, s8 power)
-+int iwl4965_hw_reg_set_txpower(struct iwl4965_priv *priv, s8 power)
- {
-- IWL_ERROR("TODO: Implement iwl_hw_reg_set_txpower!\n");
-+ IWL_ERROR("TODO: Implement iwl4965_hw_reg_set_txpower!\n");
- return -EINVAL;
- }
-
-@@ -1830,6 +1883,17 @@ static s32 iwl4965_math_div_round(s32 num, s32 denom, s32 *res)
- return 1;
- }
-
-+/**
-+ * iwl4965_get_voltage_compensation - Power supply voltage comp for txpower
-+ *
-+ * Determines power supply voltage compensation for txpower calculations.
-+ * Returns number of 1/2-dB steps to subtract from gain table index,
-+ * to compensate for difference between power supply voltage during
-+ * factory measurements, vs. current power supply voltage.
-+ *
-+ * Voltage indication is higher for lower voltage.
-+ * Lower voltage requires more gain (lower gain table index).
-+ */
- static s32 iwl4965_get_voltage_compensation(s32 eeprom_voltage,
- s32 current_voltage)
- {
-@@ -1850,12 +1914,12 @@ static s32 iwl4965_get_voltage_compensation(s32 eeprom_voltage,
- return comp;
- }
-
--static const struct iwl_channel_info *
--iwl4965_get_channel_txpower_info(struct iwl_priv *priv, u8 phymode, u16 channel)
-+static const struct iwl4965_channel_info *
-+iwl4965_get_channel_txpower_info(struct iwl4965_priv *priv, u8 phymode, u16 channel)
- {
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
-
-- ch_info = iwl_get_channel_info(priv, phymode, channel);
-+ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
-
- if (!is_channel_valid(ch_info))
- return NULL;
-@@ -1889,7 +1953,7 @@ static s32 iwl4965_get_tx_atten_grp(u16 channel)
- return -1;
- }
-
--static u32 iwl4965_get_sub_band(const struct iwl_priv *priv, u32 channel)
-+static u32 iwl4965_get_sub_band(const struct iwl4965_priv *priv, u32 channel)
- {
- s32 b = -1;
-
-@@ -1917,15 +1981,23 @@ static s32 iwl4965_interpolate_value(s32 x, s32 x1, s32 y1, s32 x2, s32 y2)
- }
- }
-
--static int iwl4965_interpolate_chan(struct iwl_priv *priv, u32 channel,
-- struct iwl_eeprom_calib_ch_info *chan_info)
-+/**
-+ * iwl4965_interpolate_chan - Interpolate factory measurements for one channel
-+ *
-+ * Interpolates factory measurements from the two sample channels within a
-+ * sub-band, to apply to channel of interest. Interpolation is proportional to
-+ * differences in channel frequencies, which is proportional to differences
-+ * in channel number.
-+ */
-+static int iwl4965_interpolate_chan(struct iwl4965_priv *priv, u32 channel,
-+ struct iwl4965_eeprom_calib_ch_info *chan_info)
- {
- s32 s = -1;
- u32 c;
- u32 m;
-- const struct iwl_eeprom_calib_measure *m1;
-- const struct iwl_eeprom_calib_measure *m2;
-- struct iwl_eeprom_calib_measure *omeas;
-+ const struct iwl4965_eeprom_calib_measure *m1;
-+ const struct iwl4965_eeprom_calib_measure *m2;
-+ struct iwl4965_eeprom_calib_measure *omeas;
- u32 ch_i1;
- u32 ch_i2;
-
-@@ -2000,7 +2072,7 @@ static s32 back_off_table[] = {
-
- /* Thermal compensation values for txpower for various frequency ranges ...
- * ratios from 3:1 to 4.5:1 of degrees (Celsius) per half-dB gain adjust */
--static struct iwl_txpower_comp_entry {
-+static struct iwl4965_txpower_comp_entry {
- s32 degrees_per_05db_a;
- s32 degrees_per_05db_a_denom;
- } tx_power_cmp_tble[CALIB_CH_GROUP_MAX] = {
-@@ -2250,9 +2322,9 @@ static const struct gain_entry gain_table[2][108] = {
- }
- };
-
--static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
-+static int iwl4965_fill_txpower_tbl(struct iwl4965_priv *priv, u8 band, u16 channel,
- u8 is_fat, u8 ctrl_chan_high,
-- struct iwl_tx_power_db *tx_power_tbl)
-+ struct iwl4965_tx_power_db *tx_power_tbl)
- {
- u8 saturation_power;
- s32 target_power;
-@@ -2264,9 +2336,9 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
- s32 txatten_grp = CALIB_CH_GROUP_MAX;
- int i;
- int c;
-- const struct iwl_channel_info *ch_info = NULL;
-- struct iwl_eeprom_calib_ch_info ch_eeprom_info;
-- const struct iwl_eeprom_calib_measure *measurement;
-+ const struct iwl4965_channel_info *ch_info = NULL;
-+ struct iwl4965_eeprom_calib_ch_info ch_eeprom_info;
-+ const struct iwl4965_eeprom_calib_measure *measurement;
- s16 voltage;
- s32 init_voltage;
- s32 voltage_compensation;
-@@ -2405,7 +2477,7 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
- /* for each of 33 bit-rates (including 1 for CCK) */
- for (i = 0; i < POWER_TABLE_NUM_ENTRIES; i++) {
- u8 is_mimo_rate;
-- union iwl_tx_power_dual_stream tx_power;
-+ union iwl4965_tx_power_dual_stream tx_power;
-
- /* for mimo, reduce each chain's txpower by half
- * (3dB, 6 steps), so total output power is regulatory
-@@ -2502,14 +2574,14 @@ static int iwl4965_fill_txpower_tbl(struct iwl_priv *priv, u8 band, u16 channel,
- }
-
- /**
-- * iwl_hw_reg_send_txpower - Configure the TXPOWER level user limit
-+ * iwl4965_hw_reg_send_txpower - Configure the TXPOWER level user limit
- *
- * Uses the active RXON for channel, band, and characteristics (fat, high)
- * The power limit is taken from priv->user_txpower_limit.
- */
--int iwl_hw_reg_send_txpower(struct iwl_priv *priv)
-+int iwl4965_hw_reg_send_txpower(struct iwl4965_priv *priv)
- {
-- struct iwl_txpowertable_cmd cmd = { 0 };
-+ struct iwl4965_txpowertable_cmd cmd = { 0 };
- int rc = 0;
- u8 band = 0;
- u8 is_fat = 0;
-@@ -2541,23 +2613,23 @@ int iwl_hw_reg_send_txpower(struct iwl_priv *priv)
- if (rc)
- return rc;
-
-- rc = iwl_send_cmd_pdu(priv, REPLY_TX_PWR_TABLE_CMD, sizeof(cmd), &cmd);
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_TX_PWR_TABLE_CMD, sizeof(cmd), &cmd);
- return rc;
- }
-
--int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel)
-+int iwl4965_hw_channel_switch(struct iwl4965_priv *priv, u16 channel)
- {
- int rc;
- u8 band = 0;
- u8 is_fat = 0;
- u8 ctrl_chan_high = 0;
-- struct iwl_channel_switch_cmd cmd = { 0 };
-- const struct iwl_channel_info *ch_info;
-+ struct iwl4965_channel_switch_cmd cmd = { 0 };
-+ const struct iwl4965_channel_info *ch_info;
-
- band = ((priv->phymode == MODE_IEEE80211B) ||
- (priv->phymode == MODE_IEEE80211G));
-
-- ch_info = iwl_get_channel_info(priv, priv->phymode, channel);
-+ ch_info = iwl4965_get_channel_info(priv, priv->phymode, channel);
-
- is_fat = is_fat_channel(priv->staging_rxon.flags);
-
-@@ -2583,32 +2655,36 @@ int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel)
- return rc;
- }
-
-- rc = iwl_send_cmd_pdu(priv, REPLY_CHANNEL_SWITCH, sizeof(cmd), &cmd);
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_CHANNEL_SWITCH, sizeof(cmd), &cmd);
- return rc;
- }
-
- #define RTS_HCCA_RETRY_LIMIT 3
- #define RTS_DFAULT_RETRY_LIMIT 60
-
--void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
-- struct iwl_cmd *cmd,
-+void iwl4965_hw_build_tx_cmd_rate(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd,
- struct ieee80211_tx_control *ctrl,
- struct ieee80211_hdr *hdr, int sta_id,
- int is_hcca)
- {
-- u8 rate;
-+ struct iwl4965_tx_cmd *tx = &cmd->cmd.tx;
- u8 rts_retry_limit = 0;
- u8 data_retry_limit = 0;
-- __le32 tx_flags;
- u16 fc = le16_to_cpu(hdr->frame_control);
-+ u8 rate_plcp;
-+ u16 rate_flags = 0;
-+ int rate_idx = min(ctrl->tx_rate & 0xffff, IWL_RATE_COUNT - 1);
-
-- tx_flags = cmd->cmd.tx.tx_flags;
+-/******************************************************************************
+- * (8)
+- * Scan Commands, Responses, Notifications:
+- *
+- *****************************************************************************/
-
-- rate = iwl_rates[ctrl->tx_rate].plcp;
-+ rate_plcp = iwl4965_rates[rate_idx].plcp;
-
- rts_retry_limit = (is_hcca) ?
- RTS_HCCA_RETRY_LIMIT : RTS_DFAULT_RETRY_LIMIT;
-
-+ if ((rate_idx >= IWL_FIRST_CCK_RATE) && (rate_idx <= IWL_LAST_CCK_RATE))
-+ rate_flags |= RATE_MCS_CCK_MSK;
-+
-+
- if (ieee80211_is_probe_response(fc)) {
- data_retry_limit = 3;
- if (data_retry_limit < rts_retry_limit)
-@@ -2619,44 +2695,56 @@ void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
- if (priv->data_retry_limit != -1)
- data_retry_limit = priv->data_retry_limit;
-
-- if ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) {
-+
-+ if (ieee80211_is_data(fc)) {
-+ tx->initial_rate_index = 0;
-+ tx->tx_flags |= TX_CMD_FLG_STA_RATE_MSK;
-+ } else {
- switch (fc & IEEE80211_FCTL_STYPE) {
- case IEEE80211_STYPE_AUTH:
- case IEEE80211_STYPE_DEAUTH:
- case IEEE80211_STYPE_ASSOC_REQ:
- case IEEE80211_STYPE_REASSOC_REQ:
-- if (tx_flags & TX_CMD_FLG_RTS_MSK) {
-- tx_flags &= ~TX_CMD_FLG_RTS_MSK;
-- tx_flags |= TX_CMD_FLG_CTS_MSK;
-+ if (tx->tx_flags & TX_CMD_FLG_RTS_MSK) {
-+ tx->tx_flags &= ~TX_CMD_FLG_RTS_MSK;
-+ tx->tx_flags |= TX_CMD_FLG_CTS_MSK;
- }
- break;
- default:
- break;
- }
-+
-+ /* Alternate between antenna A and B for successive frames */
-+ if (priv->use_ant_b_for_management_frame) {
-+ priv->use_ant_b_for_management_frame = 0;
-+ rate_flags |= RATE_MCS_ANT_B_MSK;
-+ } else {
-+ priv->use_ant_b_for_management_frame = 1;
-+ rate_flags |= RATE_MCS_ANT_A_MSK;
-+ }
- }
-
-- cmd->cmd.tx.rts_retry_limit = rts_retry_limit;
-- cmd->cmd.tx.data_retry_limit = data_retry_limit;
-- cmd->cmd.tx.rate_n_flags = iwl_hw_set_rate_n_flags(rate, 0);
-- cmd->cmd.tx.tx_flags = tx_flags;
-+ tx->rts_retry_limit = rts_retry_limit;
-+ tx->data_retry_limit = data_retry_limit;
-+ tx->rate_n_flags = iwl4965_hw_set_rate_n_flags(rate_plcp, rate_flags);
- }
-
--int iwl_hw_get_rx_read(struct iwl_priv *priv)
-+int iwl4965_hw_get_rx_read(struct iwl4965_priv *priv)
- {
-- struct iwl_shared *shared_data = priv->hw_setting.shared_virt;
-+ struct iwl4965_shared *shared_data = priv->hw_setting.shared_virt;
-
- return IWL_GET_BITS(*shared_data, rb_closed_stts_rb_num);
- }
-
--int iwl_hw_get_temperature(struct iwl_priv *priv)
-+int iwl4965_hw_get_temperature(struct iwl4965_priv *priv)
- {
- return priv->temperature;
- }
-
--unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
-- struct iwl_frame *frame, u8 rate)
-+unsigned int iwl4965_hw_get_beacon_cmd(struct iwl4965_priv *priv,
-+ struct iwl4965_frame *frame, u8 rate)
- {
-- struct iwl_tx_beacon_cmd *tx_beacon_cmd;
-+ struct iwl4965_tx_beacon_cmd *tx_beacon_cmd;
- unsigned int frame_size;
-
- tx_beacon_cmd = &frame->u.beacon;
-@@ -2665,9 +2753,9 @@ unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
- tx_beacon_cmd->tx.sta_id = IWL4965_BROADCAST_ID;
- tx_beacon_cmd->tx.stop_time.life_time = TX_CMD_LIFE_TIME_INFINITE;
-
-- frame_size = iwl_fill_beacon_frame(priv,
-+ frame_size = iwl4965_fill_beacon_frame(priv,
- tx_beacon_cmd->frame,
-- BROADCAST_ADDR,
-+ iwl4965_broadcast_addr,
- sizeof(frame->u) - sizeof(*tx_beacon_cmd));
-
- BUG_ON(frame_size > MAX_MPDU_SIZE);
-@@ -2675,53 +2763,59 @@ unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
-
- if ((rate == IWL_RATE_1M_PLCP) || (rate >= IWL_RATE_2M_PLCP))
- tx_beacon_cmd->tx.rate_n_flags =
-- iwl_hw_set_rate_n_flags(rate, RATE_MCS_CCK_MSK);
-+ iwl4965_hw_set_rate_n_flags(rate, RATE_MCS_CCK_MSK);
- else
- tx_beacon_cmd->tx.rate_n_flags =
-- iwl_hw_set_rate_n_flags(rate, 0);
-+ iwl4965_hw_set_rate_n_flags(rate, 0);
-
- tx_beacon_cmd->tx.tx_flags = (TX_CMD_FLG_SEQ_CTL_MSK |
- TX_CMD_FLG_TSF_MSK | TX_CMD_FLG_STA_RATE_MSK);
- return (sizeof(*tx_beacon_cmd) + frame_size);
- }
-
--int iwl_hw_tx_queue_init(struct iwl_priv *priv, struct iwl_tx_queue *txq)
-+/*
-+ * Tell 4965 where to find circular buffer of Tx Frame Descriptors for
-+ * given Tx queue, and enable the DMA channel used for that queue.
-+ *
-+ * 4965 supports up to 16 Tx queues in DRAM, mapped to up to 8 Tx DMA
-+ * channels supported in hardware.
-+ */
-+int iwl4965_hw_tx_queue_init(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
- {
- int rc;
- unsigned long flags;
- int txq_id = txq->q.id;
-
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
- }
-
-- iwl_write_restricted(priv, FH_MEM_CBBC_QUEUE(txq_id),
-+ /* Circular buffer (TFD queue in DRAM) physical base address */
-+ iwl4965_write_direct32(priv, FH_MEM_CBBC_QUEUE(txq_id),
- txq->q.dma_addr >> 8);
-- iwl_write_restricted(
-+
-+ /* Enable DMA channel, using same id as for TFD queue */
-+ iwl4965_write_direct32(
- priv, IWL_FH_TCSR_CHNL_TX_CONFIG_REG(txq_id),
- IWL_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE |
- IWL_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE_VAL);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
-
- return 0;
- }
-
--static inline u8 iwl4965_get_dma_hi_address(dma_addr_t addr)
--{
-- return sizeof(addr) > sizeof(u32) ? (addr >> 16) >> 16 : 0;
--}
+-struct iwl_scan_channel {
+- /* type is defined as:
+- * 0:0 active (0 - passive)
+- * 1:4 SSID direct
+- * If 1 is set then corresponding SSID IE is transmitted in probe
+- * 5:7 reserved
+- */
+- u8 type;
+- u8 channel;
+- struct iwl_tx_power tpc;
+- __le16 active_dwell;
+- __le16 passive_dwell;
+-} __attribute__ ((packed));
-
--int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
-+int iwl4965_hw_txq_attach_buf_to_tfd(struct iwl4965_priv *priv, void *ptr,
- dma_addr_t addr, u16 len)
- {
- int index, is_odd;
-- struct iwl_tfd_frame *tfd = ptr;
-+ struct iwl4965_tfd_frame *tfd = ptr;
- u32 num_tbs = IWL_GET_BITS(*tfd, num_tbs);
-
-+ /* Each TFD can point to a maximum 20 Tx buffers */
- if ((num_tbs >= MAX_NUM_OF_TBS) || (num_tbs < 0)) {
- IWL_ERROR("Error can not send more than %d chunks\n",
- MAX_NUM_OF_TBS);
-@@ -2734,7 +2828,7 @@ int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
- if (!is_odd) {
- tfd->pa[index].tb1_addr = cpu_to_le32(addr);
- IWL_SET_BITS(tfd->pa[index], tb1_addr_hi,
-- iwl4965_get_dma_hi_address(addr));
-+ iwl_get_dma_hi_address(addr));
- IWL_SET_BITS(tfd->pa[index], tb1_len, len);
- } else {
- IWL_SET_BITS(tfd->pa[index], tb2_addr_lo16,
-@@ -2748,7 +2842,7 @@ int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *ptr,
- return 0;
- }
-
--void iwl_hw_card_show_info(struct iwl_priv *priv)
-+static void iwl4965_hw_card_show_info(struct iwl4965_priv *priv)
- {
- u16 hw_version = priv->eeprom.board_revision_4965;
-
-@@ -2763,32 +2857,41 @@ void iwl_hw_card_show_info(struct iwl_priv *priv)
- #define IWL_TX_CRC_SIZE 4
- #define IWL_TX_DELIMITER_SIZE 4
-
--int iwl4965_tx_queue_update_wr_ptr(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq, u16 byte_cnt)
-+/**
-+ * iwl4965_tx_queue_update_wr_ptr - Set up entry in Tx byte-count array
-+ */
-+int iwl4965_tx_queue_update_wr_ptr(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq, u16 byte_cnt)
- {
- int len;
- int txq_id = txq->q.id;
-- struct iwl_shared *shared_data = priv->hw_setting.shared_virt;
-+ struct iwl4965_shared *shared_data = priv->hw_setting.shared_virt;
-
- if (txq->need_update == 0)
- return 0;
-
- len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE;
-
-+ /* Set up byte count within first 256 entries */
- IWL_SET_BITS16(shared_data->queues_byte_cnt_tbls[txq_id].
-- tfd_offset[txq->q.first_empty], byte_cnt, len);
-+ tfd_offset[txq->q.write_ptr], byte_cnt, len);
-
-- if (txq->q.first_empty < IWL4965_MAX_WIN_SIZE)
-+ /* If within first 64 entries, duplicate at end */
-+ if (txq->q.write_ptr < IWL4965_MAX_WIN_SIZE)
- IWL_SET_BITS16(shared_data->queues_byte_cnt_tbls[txq_id].
-- tfd_offset[IWL4965_QUEUE_SIZE + txq->q.first_empty],
-+ tfd_offset[IWL4965_QUEUE_SIZE + txq->q.write_ptr],
- byte_cnt, len);
-
- return 0;
- }
-
--/* Set up Rx receiver/antenna/chain usage in "staging" RXON image.
-- * This should not be used for scan command ... it puts data in wrong place. */
--void iwl4965_set_rxon_chain(struct iwl_priv *priv)
-+/**
-+ * iwl4965_set_rxon_chain - Set up Rx chain usage in "staging" RXON image
-+ *
-+ * Selects how many and which Rx receivers/antennas/chains to use.
-+ * This should not be used for scan command ... it puts data in wrong place.
-+ */
-+void iwl4965_set_rxon_chain(struct iwl4965_priv *priv)
- {
- u8 is_single = is_single_stream(priv);
- u8 idle_state, rx_state;
-@@ -2819,19 +2922,19 @@ void iwl4965_set_rxon_chain(struct iwl_priv *priv)
- IWL_DEBUG_ASSOC("rx chain %X\n", priv->staging_rxon.rx_chain);
- }
-
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- /*
- get the traffic load value for tid
- */
--static u32 iwl4965_tl_get_load(struct iwl_priv *priv, u8 tid)
-+static u32 iwl4965_tl_get_load(struct iwl4965_priv *priv, u8 tid)
- {
- u32 load = 0;
- u32 current_time = jiffies_to_msecs(jiffies);
- u32 time_diff;
- s32 index;
- unsigned long flags;
-- struct iwl_traffic_load *tid_ptr = NULL;
-+ struct iwl4965_traffic_load *tid_ptr = NULL;
-
- if (tid >= TID_MAX_LOAD_COUNT)
- return 0;
-@@ -2872,13 +2975,13 @@ static u32 iwl4965_tl_get_load(struct iwl_priv *priv, u8 tid)
- increment traffic load value for tid and also remove
- any old values if passed the certain time period
- */
--static void iwl4965_tl_add_packet(struct iwl_priv *priv, u8 tid)
-+static void iwl4965_tl_add_packet(struct iwl4965_priv *priv, u8 tid)
- {
- u32 current_time = jiffies_to_msecs(jiffies);
- u32 time_diff;
- s32 index;
- unsigned long flags;
-- struct iwl_traffic_load *tid_ptr = NULL;
-+ struct iwl4965_traffic_load *tid_ptr = NULL;
-
- if (tid >= TID_MAX_LOAD_COUNT)
- return;
-@@ -2935,14 +3038,19 @@ enum HT_STATUS {
- BA_STATUS_ACTIVE,
- };
-
--static u8 iwl4964_tl_ba_avail(struct iwl_priv *priv)
-+/**
-+ * iwl4964_tl_ba_avail - Find out if an unused aggregation queue is available
-+ */
-+static u8 iwl4964_tl_ba_avail(struct iwl4965_priv *priv)
- {
- int i;
-- struct iwl_lq_mngr *lq;
-+ struct iwl4965_lq_mngr *lq;
- u8 count = 0;
- u16 msk;
-
-- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
-+ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-+
-+ /* Find out how many agg queues are in use */
- for (i = 0; i < TID_MAX_LOAD_COUNT ; i++) {
- msk = 1 << i;
- if ((lq->agg_ctrl.granted_ba & msk) ||
-@@ -2956,10 +3064,10 @@ static u8 iwl4964_tl_ba_avail(struct iwl_priv *priv)
- return 0;
- }
-
--static void iwl4965_ba_status(struct iwl_priv *priv,
-+static void iwl4965_ba_status(struct iwl4965_priv *priv,
- u8 tid, enum HT_STATUS status);
-
--static int iwl4965_perform_addba(struct iwl_priv *priv, u8 tid, u32 length,
-+static int iwl4965_perform_addba(struct iwl4965_priv *priv, u8 tid, u32 length,
- u32 ba_timeout)
- {
- int rc;
-@@ -2971,7 +3079,7 @@ static int iwl4965_perform_addba(struct iwl_priv *priv, u8 tid, u32 length,
- return rc;
- }
-
--static int iwl4965_perform_delba(struct iwl_priv *priv, u8 tid)
-+static int iwl4965_perform_delba(struct iwl4965_priv *priv, u8 tid)
- {
- int rc;
-
-@@ -2982,8 +3090,8 @@ static int iwl4965_perform_delba(struct iwl_priv *priv, u8 tid)
- return rc;
- }
-
--static void iwl4965_turn_on_agg_for_tid(struct iwl_priv *priv,
-- struct iwl_lq_mngr *lq,
-+static void iwl4965_turn_on_agg_for_tid(struct iwl4965_priv *priv,
-+ struct iwl4965_lq_mngr *lq,
- u8 auto_agg, u8 tid)
- {
- u32 tid_msk = (1 << tid);
-@@ -3030,12 +3138,12 @@ static void iwl4965_turn_on_agg_for_tid(struct iwl_priv *priv,
- spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
- }
-
--static void iwl4965_turn_on_agg(struct iwl_priv *priv, u8 tid)
-+static void iwl4965_turn_on_agg(struct iwl4965_priv *priv, u8 tid)
- {
-- struct iwl_lq_mngr *lq;
-+ struct iwl4965_lq_mngr *lq;
- unsigned long flags;
-
-- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
-+ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-
- if ((tid < TID_MAX_LOAD_COUNT))
- iwl4965_turn_on_agg_for_tid(priv, lq, lq->agg_ctrl.auto_agg,
-@@ -3055,13 +3163,13 @@ static void iwl4965_turn_on_agg(struct iwl_priv *priv, u8 tid)
-
- }
-
--void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid)
-+void iwl4965_turn_off_agg(struct iwl4965_priv *priv, u8 tid)
- {
- u32 tid_msk;
-- struct iwl_lq_mngr *lq;
-+ struct iwl4965_lq_mngr *lq;
- unsigned long flags;
-
-- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
-+ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-
- if ((tid < TID_MAX_LOAD_COUNT)) {
- tid_msk = 1 << tid;
-@@ -3084,14 +3192,17 @@ void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid)
- }
- }
-
--static void iwl4965_ba_status(struct iwl_priv *priv,
-+/**
-+ * iwl4965_ba_status - Update driver's link quality mgr with tid's HT status
-+ */
-+static void iwl4965_ba_status(struct iwl4965_priv *priv,
- u8 tid, enum HT_STATUS status)
- {
-- struct iwl_lq_mngr *lq;
-+ struct iwl4965_lq_mngr *lq;
- u32 tid_msk = (1 << tid);
- unsigned long flags;
-
-- lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
-+ lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-
- if ((tid >= TID_MAX_LOAD_COUNT))
- goto out;
-@@ -3124,14 +3235,14 @@ static void iwl4965_ba_status(struct iwl_priv *priv,
-
- static void iwl4965_bg_agg_work(struct work_struct *work)
- {
-- struct iwl_priv *priv = container_of(work, struct iwl_priv,
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv,
- agg_work);
-
- u32 tid;
- u32 retry_tid;
- u32 tid_msk;
- unsigned long flags;
-- struct iwl_lq_mngr *lq = (struct iwl_lq_mngr *)&(priv->lq_mngr);
-+ struct iwl4965_lq_mngr *lq = (struct iwl4965_lq_mngr *)&(priv->lq_mngr);
-
- spin_lock_irqsave(&priv->lq_mngr.lock, flags);
- retry_tid = lq->agg_ctrl.tid_retry;
-@@ -3154,90 +3265,13 @@ static void iwl4965_bg_agg_work(struct work_struct *work)
- spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
- return;
- }
--#endif /*CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-
--int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
-- u8 sta_id, dma_addr_t txcmd_phys,
-- struct ieee80211_hdr *hdr, u8 hdr_len,
-- struct ieee80211_tx_control *ctrl, void *sta_in)
-+/* TODO: move this functionality to rate scaling */
-+void iwl4965_tl_get_stats(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *hdr)
- {
-- struct iwl_tx_cmd cmd;
-- struct iwl_tx_cmd *tx = (struct iwl_tx_cmd *)&out_cmd->cmd.payload[0];
-- dma_addr_t scratch_phys;
-- u8 unicast = 0;
-- u8 is_data = 1;
-- u16 fc;
-- u16 rate_flags;
-- int rate_index = min(ctrl->tx_rate & 0xffff, IWL_RATE_COUNT - 1);
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- __le16 *qc;
--#endif /*CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
+-struct iwl_ssid_ie {
+- u8 id;
+- u8 len;
+- u8 ssid[32];
+-} __attribute__ ((packed));
-
-- unicast = !is_multicast_ether_addr(hdr->addr1);
+-#define PROBE_OPTION_MAX 0x4
+-#define TX_CMD_LIFE_TIME_INFINITE __constant_cpu_to_le32(0xFFFFFFFF)
+-#define IWL_GOOD_CRC_TH __constant_cpu_to_le16(1)
+-#define IWL_MAX_SCAN_SIZE 1024
-
-- fc = le16_to_cpu(hdr->frame_control);
-- if ((fc & IEEE80211_FCTL_FTYPE) != IEEE80211_FTYPE_DATA)
-- is_data = 0;
+-/*
+- * REPLY_SCAN_CMD = 0x80 (command)
+- */
+-struct iwl_scan_cmd {
+- __le16 len;
+- u8 reserved0;
+- u8 channel_count;
+- __le16 quiet_time; /* dwell only this long on quiet chnl
+- * (active scan) */
+- __le16 quiet_plcp_th; /* quiet chnl is < this # pkts (typ. 1) */
+- __le16 good_CRC_th; /* passive -> active promotion threshold */
+-#if IWL == 3945
+- __le16 reserved1;
+-#elif IWL == 4965
+- __le16 rx_chain;
+-#endif
+- __le32 max_out_time; /* max usec to be out of associated (service)
+- * chnl */
+- __le32 suspend_time; /* pause scan this long when returning to svc
+- * chnl.
+- * 3945 -- 31:24 # beacons, 19:0 additional usec,
+- * 4965 -- 31:22 # beacons, 21:0 additional usec.
+- */
+- __le32 flags;
+- __le32 filter_flags;
-
-- memcpy(&cmd, &(out_cmd->cmd.tx), sizeof(struct iwl_tx_cmd));
-- memset(tx, 0, sizeof(struct iwl_tx_cmd));
-- memcpy(tx->hdr, hdr, hdr_len);
+- struct iwl_tx_cmd tx_cmd;
+- struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
-
-- tx->len = cmd.len;
-- tx->driver_txop = cmd.driver_txop;
-- tx->stop_time.life_time = cmd.stop_time.life_time;
-- tx->tx_flags = cmd.tx_flags;
-- tx->sta_id = cmd.sta_id;
-- tx->tid_tspec = cmd.tid_tspec;
-- tx->timeout.pm_frame_timeout = cmd.timeout.pm_frame_timeout;
-- tx->next_frame_len = cmd.next_frame_len;
+- u8 data[0];
+- /*
+- * The channels start after the probe request payload and are of type:
+- *
+- * struct iwl_scan_channel channels[0];
+- *
+- * NOTE: Only one band of channels can be scanned per pass. You
+- * can not mix 2.4GHz channels and 5.2GHz channels and must
+- * request a scan multiple times (not concurrently)
+- *
+- */
+-} __attribute__ ((packed));
-
-- tx->sec_ctl = cmd.sec_ctl;
-- memcpy(&(tx->key[0]), &(cmd.key[0]), 16);
-- tx->tx_flags = cmd.tx_flags;
+-/* Can abort will notify by complete notification with abort status. */
+-#define CAN_ABORT_STATUS __constant_cpu_to_le32(0x1)
+-/* complete notification statuses */
+-#define ABORT_STATUS 0x2
-
-- tx->rts_retry_limit = cmd.rts_retry_limit;
-- tx->data_retry_limit = cmd.data_retry_limit;
+-/*
+- * REPLY_SCAN_CMD = 0x80 (response)
+- */
+-struct iwl_scanreq_notification {
+- __le32 status; /* 1: okay, 2: cannot fulfill request */
+-} __attribute__ ((packed));
-
-- scratch_phys = txcmd_phys + sizeof(struct iwl_cmd_header) +
-- offsetof(struct iwl_tx_cmd, scratch);
-- tx->dram_lsb_ptr = cpu_to_le32(scratch_phys);
-- tx->dram_msb_ptr = iwl4965_get_dma_hi_address(scratch_phys);
+-/*
+- * SCAN_START_NOTIFICATION = 0x82 (notification only, not a command)
+- */
+-struct iwl_scanstart_notification {
+- __le32 tsf_low;
+- __le32 tsf_high;
+- __le32 beacon_timer;
+- u8 channel;
+- u8 band;
+- u8 reserved[2];
+- __le32 status;
+-} __attribute__ ((packed));
-
-- /* Hard coded to start at the highest retry fallback position
-- * until the 4965 specific rate control algorithm is tied in */
-- tx->initial_rate_index = LINK_QUAL_MAX_RETRY_NUM - 1;
+-#define SCAN_OWNER_STATUS 0x1;
+-#define MEASURE_OWNER_STATUS 0x2;
-
-- /* Alternate between antenna A and B for successive frames */
-- if (priv->use_ant_b_for_management_frame) {
-- priv->use_ant_b_for_management_frame = 0;
-- rate_flags = RATE_MCS_ANT_B_MSK;
-- } else {
-- priv->use_ant_b_for_management_frame = 1;
-- rate_flags = RATE_MCS_ANT_A_MSK;
-- }
-+ __le16 *qc = ieee80211_get_qos_ctrl(hdr);
-
-- if (!unicast || !is_data) {
-- if ((rate_index >= IWL_FIRST_CCK_RATE) &&
-- (rate_index <= IWL_LAST_CCK_RATE))
-- rate_flags |= RATE_MCS_CCK_MSK;
-- } else {
-- tx->initial_rate_index = 0;
-- tx->tx_flags |= TX_CMD_FLG_STA_RATE_MSK;
-- }
+-#define NUMBER_OF_STATISTICS 1 /* first __le32 is good CRC */
+-/*
+- * SCAN_RESULTS_NOTIFICATION = 0x83 (notification only, not a command)
+- */
+-struct iwl_scanresults_notification {
+- u8 channel;
+- u8 band;
+- u8 reserved[2];
+- __le32 tsf_low;
+- __le32 tsf_high;
+- __le32 statistics[NUMBER_OF_STATISTICS];
+-} __attribute__ ((packed));
-
-- tx->rate_n_flags = iwl_hw_set_rate_n_flags(iwl_rates[rate_index].plcp,
-- rate_flags);
+-/*
+- * SCAN_COMPLETE_NOTIFICATION = 0x84 (notification only, not a command)
+- */
+-struct iwl_scancomplete_notification {
+- u8 scanned_channels;
+- u8 status;
+- u8 reserved;
+- u8 last_channel;
+- __le32 tsf_low;
+- __le32 tsf_high;
+-} __attribute__ ((packed));
-
-- if (ieee80211_is_back_request(fc))
-- tx->tx_flags |= TX_CMD_FLG_ACK_MSK |
-- TX_CMD_FLG_IMM_BA_RSP_MASK;
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- qc = ieee80211_get_qos_ctrl(hdr);
- if (qc &&
- (priv->iw_mode != IEEE80211_IF_TYPE_IBSS)) {
- u8 tid = 0;
-@@ -3255,11 +3289,11 @@ int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
- spin_unlock_irqrestore(&priv->lq_mngr.lock, flags);
- schedule_work(&priv->agg_work);
- }
+-
+-/******************************************************************************
+- * (9)
+- * IBSS/AP Commands and Notifications:
+- *
+- *****************************************************************************/
+-
+-/*
+- * BEACON_NOTIFICATION = 0x90 (notification only, not a command)
+- */
+-struct iwl_beacon_notif {
+- struct iwl_tx_resp beacon_notify_hdr;
+- __le32 low_tsf;
+- __le32 high_tsf;
+- __le32 ibss_mgr_status;
+-} __attribute__ ((packed));
+-
+-/*
+- * REPLY_TX_BEACON = 0x91 (command, has simple generic response)
+- */
+-struct iwl_tx_beacon_cmd {
+- struct iwl_tx_cmd tx;
+- __le16 tim_idx;
+- u8 tim_size;
+- u8 reserved1;
+- struct ieee80211_hdr frame[0]; /* beacon frame */
+-} __attribute__ ((packed));
+-
+-/******************************************************************************
+- * (10)
+- * Statistics Commands and Notifications:
+- *
+- *****************************************************************************/
+-
+-#define IWL_TEMP_CONVERT 260
+-
+-#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
+-#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
+-#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
+-
+-/* Used for passing to driver number of successes and failures per rate */
+-struct rate_histogram {
+- union {
+- __le32 a[SUP_RATE_11A_MAX_NUM_CHANNELS];
+- __le32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
+- __le32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
+- } success;
+- union {
+- __le32 a[SUP_RATE_11A_MAX_NUM_CHANNELS];
+- __le32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
+- __le32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
+- } failed;
+-} __attribute__ ((packed));
+-
+-/* statistics command response */
+-
+-struct statistics_rx_phy {
+- __le32 ina_cnt;
+- __le32 fina_cnt;
+- __le32 plcp_err;
+- __le32 crc32_err;
+- __le32 overrun_err;
+- __le32 early_overrun_err;
+- __le32 crc32_good;
+- __le32 false_alarm_cnt;
+- __le32 fina_sync_err_cnt;
+- __le32 sfd_timeout;
+- __le32 fina_timeout;
+- __le32 unresponded_rts;
+- __le32 rxe_frame_limit_overrun;
+- __le32 sent_ack_cnt;
+- __le32 sent_cts_cnt;
+-#if IWL == 4965
+- __le32 sent_ba_rsp_cnt;
+- __le32 dsp_self_kill;
+- __le32 mh_format_err;
+- __le32 re_acq_main_rssi_sum;
+- __le32 reserved3;
-#endif
+-} __attribute__ ((packed));
+-
+-#if IWL == 4965
+-struct statistics_rx_ht_phy {
+- __le32 plcp_err;
+- __le32 overrun_err;
+- __le32 early_overrun_err;
+- __le32 crc32_good;
+- __le32 crc32_err;
+- __le32 mh_format_err;
+- __le32 agg_crc32_good;
+- __le32 agg_mpdu_cnt;
+- __le32 agg_cnt;
+- __le32 reserved2;
+-} __attribute__ ((packed));
-#endif
-- return 0;
- }
-
-+#endif /*CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-+
- /**
- * sign_extend - Sign extend a value using specified bit as sign-bit
- *
-@@ -3282,7 +3316,7 @@ static s32 sign_extend(u32 oper, int index)
- *
- * A return of <0 indicates bogus data in the statistics
- */
--int iwl4965_get_temperature(const struct iwl_priv *priv)
-+int iwl4965_get_temperature(const struct iwl4965_priv *priv)
- {
- s32 temperature;
- s32 vt;
-@@ -3305,11 +3339,12 @@ int iwl4965_get_temperature(const struct iwl_priv *priv)
- }
-
- /*
-- * Temperature is only 23 bits so sign extend out to 32
-+ * Temperature is only 23 bits, so sign extend out to 32.
- *
- * NOTE If we haven't received a statistics notification yet
- * with an updated temperature, use R4 provided to us in the
-- * ALIVE response. */
-+ * "initialize" ALIVE response.
-+ */
- if (!test_bit(STATUS_TEMPERATURE, &priv->status))
- vt = sign_extend(R4, 23);
- else
-@@ -3349,7 +3384,7 @@ int iwl4965_get_temperature(const struct iwl_priv *priv)
- * Assumes caller will replace priv->last_temperature once calibration
- * executed.
- */
--static int iwl4965_is_temp_calib_needed(struct iwl_priv *priv)
-+static int iwl4965_is_temp_calib_needed(struct iwl4965_priv *priv)
- {
- int temp_diff;
-
-@@ -3382,7 +3417,7 @@ static int iwl4965_is_temp_calib_needed(struct iwl_priv *priv)
- /* Calculate noise level, based on measurements during network silence just
- * before arriving beacon. This measurement can be done only if we know
- * exactly when to expect beacons, therefore only when we're associated. */
--static void iwl4965_rx_calc_noise(struct iwl_priv *priv)
-+static void iwl4965_rx_calc_noise(struct iwl4965_priv *priv)
- {
- struct statistics_rx_non_phy *rx_info
- = &(priv->statistics.rx.general);
-@@ -3419,9 +3454,9 @@ static void iwl4965_rx_calc_noise(struct iwl_priv *priv)
- priv->last_rx_noise);
- }
-
--void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
-+void iwl4965_hw_rx_statistics(struct iwl4965_priv *priv, struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- int change;
- s32 temp;
-
-@@ -3448,7 +3483,7 @@ void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
- if (unlikely(!test_bit(STATUS_SCANNING, &priv->status)) &&
- (pkt->hdr.cmd == STATISTICS_NOTIFICATION)) {
- iwl4965_rx_calc_noise(priv);
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
- queue_work(priv->workqueue, &priv->sensitivity_work);
- #endif
- }
-@@ -3483,12 +3518,117 @@ void iwl_hw_rx_statistics(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
- queue_work(priv->workqueue, &priv->txpower_work);
- }
-
--static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
-+static void iwl4965_add_radiotap(struct iwl4965_priv *priv,
-+ struct sk_buff *skb,
-+ struct iwl4965_rx_phy_res *rx_start,
-+ struct ieee80211_rx_status *stats,
-+ u32 ampdu_status)
-+{
-+ s8 signal = stats->ssi;
-+ s8 noise = 0;
-+ int rate = stats->rate;
-+ u64 tsf = stats->mactime;
-+ __le16 phy_flags_hw = rx_start->phy_flags;
-+ struct iwl4965_rt_rx_hdr {
-+ struct ieee80211_radiotap_header rt_hdr;
-+ __le64 rt_tsf; /* TSF */
-+ u8 rt_flags; /* radiotap packet flags */
-+ u8 rt_rate; /* rate in 500kb/s */
-+ __le16 rt_channelMHz; /* channel in MHz */
-+ __le16 rt_chbitmask; /* channel bitfield */
-+ s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
-+ s8 rt_dbmnoise;
-+ u8 rt_antenna; /* antenna number */
-+ } __attribute__ ((packed)) *iwl4965_rt;
-+
-+ /* TODO: We won't have enough headroom for HT frames. Fix it later. */
-+ if (skb_headroom(skb) < sizeof(*iwl4965_rt)) {
-+ if (net_ratelimit())
-+ printk(KERN_ERR "not enough headroom [%d] for "
-+ "radiotap head [%zd]\n",
-+ skb_headroom(skb), sizeof(*iwl4965_rt));
-+ return;
-+ }
-+
-+ /* put radiotap header in front of 802.11 header and data */
-+ iwl4965_rt = (void *)skb_push(skb, sizeof(*iwl4965_rt));
-+
-+ /* initialise radiotap header */
-+ iwl4965_rt->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
-+ iwl4965_rt->rt_hdr.it_pad = 0;
-+
-+ /* total header + data */
-+ put_unaligned(cpu_to_le16(sizeof(*iwl4965_rt)),
-+ &iwl4965_rt->rt_hdr.it_len);
-+
-+ /* Indicate all the fields we add to the radiotap header */
-+ put_unaligned(cpu_to_le32((1 << IEEE80211_RADIOTAP_TSFT) |
-+ (1 << IEEE80211_RADIOTAP_FLAGS) |
-+ (1 << IEEE80211_RADIOTAP_RATE) |
-+ (1 << IEEE80211_RADIOTAP_CHANNEL) |
-+ (1 << IEEE80211_RADIOTAP_DBM_ANTSIGNAL) |
-+ (1 << IEEE80211_RADIOTAP_DBM_ANTNOISE) |
-+ (1 << IEEE80211_RADIOTAP_ANTENNA)),
-+ &iwl4965_rt->rt_hdr.it_present);
-+
-+ /* Zero the flags, we'll add to them as we go */
-+ iwl4965_rt->rt_flags = 0;
-+
-+ put_unaligned(cpu_to_le64(tsf), &iwl4965_rt->rt_tsf);
-+
-+ iwl4965_rt->rt_dbmsignal = signal;
-+ iwl4965_rt->rt_dbmnoise = noise;
-+
-+ /* Convert the channel frequency and set the flags */
-+ put_unaligned(cpu_to_le16(stats->freq), &iwl4965_rt->rt_channelMHz);
-+ if (!(phy_flags_hw & RX_RES_PHY_FLAGS_BAND_24_MSK))
-+ put_unaligned(cpu_to_le16(IEEE80211_CHAN_OFDM |
-+ IEEE80211_CHAN_5GHZ),
-+ &iwl4965_rt->rt_chbitmask);
-+ else if (phy_flags_hw & RX_RES_PHY_FLAGS_MOD_CCK_MSK)
-+ put_unaligned(cpu_to_le16(IEEE80211_CHAN_CCK |
-+ IEEE80211_CHAN_2GHZ),
-+ &iwl4965_rt->rt_chbitmask);
-+ else /* 802.11g */
-+ put_unaligned(cpu_to_le16(IEEE80211_CHAN_OFDM |
-+ IEEE80211_CHAN_2GHZ),
-+ &iwl4965_rt->rt_chbitmask);
-+
-+ rate = iwl4965_rate_index_from_plcp(rate);
-+ if (rate == -1)
-+ iwl4965_rt->rt_rate = 0;
-+ else
-+ iwl4965_rt->rt_rate = iwl4965_rates[rate].ieee;
-+
-+ /*
-+ * "antenna number"
-+ *
-+ * It seems that the antenna field in the phy flags value
-+ * is actually a bitfield. This is undefined by radiotap,
-+ * it wants an actual antenna number but I always get "7"
-+ * for most legacy frames I receive indicating that the
-+ * same frame was received on all three RX chains.
-+ *
-+ * I think this field should be removed in favour of a
-+ * new 802.11n radiotap field "RX chains" that is defined
-+ * as a bitmask.
-+ */
-+ iwl4965_rt->rt_antenna =
-+ le16_to_cpu(phy_flags_hw & RX_RES_PHY_FLAGS_ANTENNA_MSK) >> 4;
-+
-+ /* set the preamble flag if appropriate */
-+ if (phy_flags_hw & RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK)
-+ iwl4965_rt->rt_flags |= IEEE80211_RADIOTAP_F_SHORTPRE;
-+
-+ stats->flag |= RX_FLAG_RADIOTAP;
-+}
-+
-+static void iwl4965_handle_data_packet(struct iwl4965_priv *priv, int is_data,
- int include_phy,
-- struct iwl_rx_mem_buffer *rxb,
-+ struct iwl4965_rx_mem_buffer *rxb,
- struct ieee80211_rx_status *stats)
- {
-- struct iwl_rx_packet *pkt = (struct iwl_rx_packet *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
- struct iwl4965_rx_phy_res *rx_start = (include_phy) ?
- (struct iwl4965_rx_phy_res *)&(pkt->u.raw[0]) : NULL;
- struct ieee80211_hdr *hdr;
-@@ -3524,9 +3664,8 @@ static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
- rx_start->byte_count = amsdu->byte_count;
- rx_end = (__le32 *) (((u8 *) hdr) + len);
- }
-- if (len > 2342 || len < 16) {
-- IWL_DEBUG_DROP("byte count out of range [16,2342]"
-- " : %d\n", len);
-+ if (len > priv->hw_setting.max_pkt_size || len < 16) {
-+ IWL_WARNING("byte count out of range [16,4K] : %d\n", len);
- return;
- }
-
-@@ -3544,26 +3683,21 @@ static void iwl4965_handle_data_packet(struct iwl_priv *priv, int is_data,
- return;
- }
-
-- if (priv->iw_mode == IEEE80211_IF_TYPE_MNTR) {
-- if (iwl_param_hwcrypto)
-- iwl_set_decrypted_flag(priv, rxb->skb,
-- ampdu_status, stats);
-- iwl_handle_data_packet_monitor(priv, rxb, hdr, len, stats, 0);
-- return;
-- }
-
- stats->flag = 0;
- hdr = (struct ieee80211_hdr *)rxb->skb->data;
-
-- if (iwl_param_hwcrypto)
-- iwl_set_decrypted_flag(priv, rxb->skb, ampdu_status, stats);
-+ if (iwl4965_param_hwcrypto)
-+ iwl4965_set_decrypted_flag(priv, rxb->skb, ampdu_status, stats);
-+
-+ if (priv->add_radiotap)
-+ iwl4965_add_radiotap(priv, rxb->skb, rx_start, stats, ampdu_status);
-
- ieee80211_rx_irqsafe(priv->hw, rxb->skb, stats);
- priv->alloc_rxb_skb--;
- rxb->skb = NULL;
- #ifdef LED
- priv->led_packets += len;
-- iwl_setup_activity_timer(priv);
-+ iwl4965_setup_activity_timer(priv);
- #endif
- }
-
-@@ -3601,7 +3735,7 @@ static int iwl4965_calc_rssi(struct iwl4965_rx_phy_res *rx_resp)
- return (max_rssi - agc - IWL_RSSI_OFFSET);
- }
-
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
-
- /* Parsed Information Elements */
- struct ieee802_11_elems {
-@@ -3673,9 +3807,37 @@ static int parse_elems(u8 *start, size_t len, struct ieee802_11_elems *elems)
-
- return 0;
- }
--#endif /* CONFIG_IWLWIFI_HT */
-
--static void iwl4965_sta_modify_ps_wake(struct iwl_priv *priv, int sta_id)
-+void iwl4965_init_ht_hw_capab(struct ieee80211_ht_info *ht_info, int mode)
-+{
-+ ht_info->cap = 0;
-+ memset(ht_info->supp_mcs_set, 0, 16);
-+
-+ ht_info->ht_supported = 1;
-+
-+ if (mode == MODE_IEEE80211A) {
-+ ht_info->cap |= (u16)IEEE80211_HT_CAP_SUP_WIDTH;
-+ ht_info->cap |= (u16)IEEE80211_HT_CAP_SGI_40;
-+ ht_info->supp_mcs_set[4] = 0x01;
-+ }
-+ ht_info->cap |= (u16)IEEE80211_HT_CAP_GRN_FLD;
-+ ht_info->cap |= (u16)IEEE80211_HT_CAP_SGI_20;
-+ ht_info->cap |= (u16)(IEEE80211_HT_CAP_MIMO_PS &
-+ (IWL_MIMO_PS_NONE << 2));
-+ if (iwl4965_param_amsdu_size_8K) {
-+ printk(KERN_DEBUG "iwl4965 in A-MSDU 8K support mode\n");
-+ ht_info->cap |= (u16)IEEE80211_HT_CAP_MAX_AMSDU;
-+ }
-+
-+ ht_info->ampdu_factor = CFG_HT_RX_AMPDU_FACTOR_DEF;
-+ ht_info->ampdu_density = CFG_HT_MPDU_DENSITY_DEF;
-+
-+ ht_info->supp_mcs_set[0] = 0xFF;
-+ ht_info->supp_mcs_set[1] = 0xFF;
-+}
-+#endif /* CONFIG_IWL4965_HT */
-+
-+static void iwl4965_sta_modify_ps_wake(struct iwl4965_priv *priv, int sta_id)
- {
- unsigned long flags;
-
-@@ -3686,13 +3848,13 @@ static void iwl4965_sta_modify_ps_wake(struct iwl_priv *priv, int sta_id)
- priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
- }
-
--static void iwl4965_update_ps_mode(struct iwl_priv *priv, u16 ps_bit, u8 *addr)
-+static void iwl4965_update_ps_mode(struct iwl4965_priv *priv, u16 ps_bit, u8 *addr)
- {
- /* FIXME: need locking over ps_status ??? */
-- u8 sta_id = iwl_hw_find_station(priv, addr);
-+ u8 sta_id = iwl4965_hw_find_station(priv, addr);
-
- if (sta_id != IWL_INVALID_STATION) {
- u8 sta_awake = priv->stations[sta_id].
-@@ -3707,12 +3869,14 @@ static void iwl4965_update_ps_mode(struct iwl_priv *priv, u16 ps_bit, u8 *addr)
- }
- }
-
-+#define IWL_DELAY_NEXT_SCAN_AFTER_ASSOC (HZ*6)
-+
- /* Called for REPLY_4965_RX (legacy ABG frames), or
- * REPLY_RX_MPDU_CMD (HT high-throughput N frames). */
--static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_rx(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- /* Use phy data (Rx signal strength, etc.) contained within
- * this rx packet for legacy frames,
- * or phy data cached from REPLY_RX_PHY_CMD for HT frames. */
-@@ -3731,11 +3895,8 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- (rx_start->phy_flags & RX_RES_PHY_FLAGS_BAND_24_MSK) ?
- MODE_IEEE80211G : MODE_IEEE80211A,
- .antenna = 0,
-- .rate = iwl_hw_get_rate(rx_start->rate_n_flags),
-+ .rate = iwl4965_hw_get_rate(rx_start->rate_n_flags),
- .flag = 0,
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- .ordered = 0
--#endif /* CONFIG_IWLWIFI_HT_AGG */
- };
- u8 network_packet;
-
-@@ -3794,32 +3955,32 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- * which are gathered only when associated, and indicate noise
- * only for the associated network channel ...
- * Ignore these noise values while scanning (other channels) */
-- if (iwl_is_associated(priv) &&
-+ if (iwl4965_is_associated(priv) &&
- !test_bit(STATUS_SCANNING, &priv->status)) {
- stats.noise = priv->last_rx_noise;
-- stats.signal = iwl_calc_sig_qual(stats.ssi, stats.noise);
-+ stats.signal = iwl4965_calc_sig_qual(stats.ssi, stats.noise);
- } else {
- stats.noise = IWL_NOISE_MEAS_NOT_AVAILABLE;
-- stats.signal = iwl_calc_sig_qual(stats.ssi, 0);
-+ stats.signal = iwl4965_calc_sig_qual(stats.ssi, 0);
- }
-
- /* Reset beacon noise level if not associated. */
-- if (!iwl_is_associated(priv))
-+ if (!iwl4965_is_associated(priv))
- priv->last_rx_noise = IWL_NOISE_MEAS_NOT_AVAILABLE;
-
--#ifdef CONFIG_IWLWIFI_DEBUG
-- /* TODO: Parts of iwl_report_frame are broken for 4965 */
-- if (iwl_debug_level & (IWL_DL_RX))
-+#ifdef CONFIG_IWL4965_DEBUG
-+ /* TODO: Parts of iwl4965_report_frame are broken for 4965 */
-+ if (iwl4965_debug_level & (IWL_DL_RX))
- /* Set "1" to report good data frames in groups of 100 */
-- iwl_report_frame(priv, pkt, header, 1);
-+ iwl4965_report_frame(priv, pkt, header, 1);
-
-- if (iwl_debug_level & (IWL_DL_RX | IWL_DL_STATS))
-+ if (iwl4965_debug_level & (IWL_DL_RX | IWL_DL_STATS))
- IWL_DEBUG_RX("Rssi %d, noise %d, qual %d, TSF %lu\n",
- stats.ssi, stats.noise, stats.signal,
- (long unsigned int)le64_to_cpu(rx_start->timestamp));
- #endif
-
-- network_packet = iwl_is_network_packet(priv, header);
-+ network_packet = iwl4965_is_network_packet(priv, header);
- if (network_packet) {
- priv->last_rx_rssi = stats.ssi;
- priv->last_beacon_time = priv->ucode_beacon_time;
-@@ -3863,27 +4024,31 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- break;
-
- /*
-- * TODO: There is no callback function from upper
-- * stack to inform us when associated status. this
-- * work around to sniff assoc_resp management frame
-- * and finish the association process.
-+ * TODO: Use the new callback function from
-+ * mac80211 instead of sniffing these packets.
- */
- case IEEE80211_STYPE_ASSOC_RESP:
- case IEEE80211_STYPE_REASSOC_RESP:
- if (network_packet) {
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
- u8 *pos = NULL;
- struct ieee802_11_elems elems;
--#endif /*CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT */
- struct ieee80211_mgmt *mgnt =
- (struct ieee80211_mgmt *)header;
-
-+ /* We have just associated, give some
-+ * time for the 4-way handshake if
-+ * any. Don't start scan too early. */
-+ priv->next_scan_jiffies = jiffies +
-+ IWL_DELAY_NEXT_SCAN_AFTER_ASSOC;
-+
- priv->assoc_id = (~((1 << 15) | (1 << 14))
- & le16_to_cpu(mgnt->u.assoc_resp.aid));
- priv->assoc_capability =
- le16_to_cpu(
- mgnt->u.assoc_resp.capab_info);
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
- pos = mgnt->u.assoc_resp.variable;
- if (!parse_elems(pos,
- len - (pos - (u8 *) mgnt),
-@@ -3892,7 +4057,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- elems.ht_cap_param)
- break;
- }
--#endif /*CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT */
- /* assoc_id is 0 no association */
- if (!priv->assoc_id)
- break;
-@@ -3907,7 +4072,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
-
- case IEEE80211_STYPE_PROBE_REQ:
- if ((priv->iw_mode == IEEE80211_IF_TYPE_IBSS) &&
-- !iwl_is_associated(priv)) {
-+ !iwl4965_is_associated(priv)) {
- DECLARE_MAC_BUF(mac1);
- DECLARE_MAC_BUF(mac2);
- DECLARE_MAC_BUF(mac3);
-@@ -3924,7 +4089,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- break;
-
- case IEEE80211_FTYPE_CTL:
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
- switch (fc & IEEE80211_FCTL_STYPE) {
- case IEEE80211_STYPE_BACK_REQ:
- IWL_DEBUG_HT("IEEE80211_STYPE_BACK_REQ arrived\n");
-@@ -3935,7 +4100,6 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- break;
- }
- #endif
+-struct statistics_rx_non_phy {
+- __le32 bogus_cts; /* CTS received when not expecting CTS */
+- __le32 bogus_ack; /* ACK received when not expecting ACK */
+- __le32 non_bssid_frames; /* number of frames with BSSID that
+- * doesn't belong to the STA BSSID */
+- __le32 filtered_frames; /* count frames that were dumped in the
+- * filtering process */
+- __le32 non_channel_beacons; /* beacons with our bss id but not on
+- * our serving channel */
+-#if IWL == 4965
+- __le32 channel_beacons; /* beacons with our bss id and in our
+- * serving channel */
+- __le32 num_missed_bcon; /* number of missed beacons */
+- __le32 adc_rx_saturation_time; /* count in 0.8us units the time the
+- * ADC was in saturation */
+- __le32 ina_detection_search_time;/* total time (in 0.8us) searched
+- * for INA */
+- __le32 beacon_silence_rssi_a; /* RSSI silence after beacon frame */
+- __le32 beacon_silence_rssi_b; /* RSSI silence after beacon frame */
+- __le32 beacon_silence_rssi_c; /* RSSI silence after beacon frame */
+- __le32 interference_data_flag; /* flag for interference data
+- * availability. 1 when data is
+- * available. */
+- __le32 channel_load; /* counts RX Enable time */
+- __le32 dsp_false_alarms; /* DSP false alarm (both OFDM
+- * and CCK) counter */
+- __le32 beacon_rssi_a;
+- __le32 beacon_rssi_b;
+- __le32 beacon_rssi_c;
+- __le32 beacon_energy_a;
+- __le32 beacon_energy_b;
+- __le32 beacon_energy_c;
+-#endif
+-} __attribute__ ((packed));
-
- break;
-
- case IEEE80211_FTYPE_DATA: {
-@@ -3953,7 +4117,7 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
- print_mac(mac1, header->addr1),
- print_mac(mac2, header->addr2),
- print_mac(mac3, header->addr3));
-- else if (unlikely(is_duplicate_packet(priv, header)))
-+ else if (unlikely(iwl4965_is_duplicate_packet(priv, header)))
- IWL_DEBUG_DROP("Dropping (dup): %s, %s, %s\n",
- print_mac(mac1, header->addr1),
- print_mac(mac2, header->addr2),
-@@ -3971,22 +4135,22 @@ static void iwl4965_rx_reply_rx(struct iwl_priv *priv,
-
- /* Cache phy data (Rx signal strength, etc) for HT frame (REPLY_RX_PHY_CMD).
- * This will be used later in iwl4965_rx_reply_rx() for REPLY_RX_MPDU_CMD. */
--static void iwl4965_rx_reply_rx_phy(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_rx_phy(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- priv->last_phy_res[0] = 1;
- memcpy(&priv->last_phy_res[1], &(pkt->u.raw[0]),
- sizeof(struct iwl4965_rx_phy_res));
- }
-
--static void iwl4965_rx_missed_beacon_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_missed_beacon_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
-
- {
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_missed_beacon_notif *missed_beacon;
-+#ifdef CONFIG_IWL4965_SENSITIVITY
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_missed_beacon_notif *missed_beacon;
-
- missed_beacon = &pkt->u.missed_beacon;
- if (le32_to_cpu(missed_beacon->consequtive_missed_beacons) > 5) {
-@@ -3999,13 +4163,18 @@ static void iwl4965_rx_missed_beacon_notif(struct iwl_priv *priv,
- if (unlikely(!test_bit(STATUS_SCANNING, &priv->status)))
- queue_work(priv->workqueue, &priv->sensitivity_work);
- }
--#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
-+#endif /*CONFIG_IWL4965_SENSITIVITY*/
- }
-
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
-
--static void iwl4965_set_tx_status(struct iwl_priv *priv, int txq_id, int idx,
-+/**
-+ * iwl4965_set_tx_status - Update driver's record of one Tx frame's status
-+ *
-+ * This will get sent to mac80211.
-+ */
-+static void iwl4965_set_tx_status(struct iwl4965_priv *priv, int txq_id, int idx,
- u32 status, u32 retry_count, u32 rate)
- {
- struct ieee80211_tx_status *tx_status =
-@@ -4017,24 +4186,34 @@ static void iwl4965_set_tx_status(struct iwl_priv *priv, int txq_id, int idx,
- }
-
-
--static void iwl_sta_modify_enable_tid_tx(struct iwl_priv *priv,
-+/**
-+ * iwl4965_sta_modify_enable_tid_tx - Enable Tx for this TID in station table
-+ */
-+static void iwl4965_sta_modify_enable_tid_tx(struct iwl4965_priv *priv,
- int sta_id, int tid)
- {
- unsigned long flags;
-
-+ /* Remove "disable" flag, to enable Tx for this TID */
- spin_lock_irqsave(&priv->sta_lock, flags);
- priv->stations[sta_id].sta.sta.modify_mask = STA_MODIFY_TID_DISABLE_TX;
- priv->stations[sta_id].sta.tid_disable_tx &= cpu_to_le16(~(1 << tid));
- priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
- }
-
-
--static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
-- struct iwl_ht_agg *agg,
-- struct iwl_compressed_ba_resp*
-+/**
-+ * iwl4965_tx_status_reply_compressed_ba - Update tx status from block-ack
-+ *
-+ * Go through block-ack's bitmap of ACK'd frames, update driver's record of
-+ * ACK vs. not. This gets sent to mac80211, then to rate scaling algo.
-+ */
-+static int iwl4965_tx_status_reply_compressed_ba(struct iwl4965_priv *priv,
-+ struct iwl4965_ht_agg *agg,
-+ struct iwl4965_compressed_ba_resp*
- ba_resp)
-
- {
-@@ -4048,16 +4227,20 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
- IWL_ERROR("Received BA when not expected\n");
- return -EINVAL;
- }
-+
-+ /* Mark that the expected block-ack response arrived */
- agg->wait_for_ba = 0;
- IWL_DEBUG_TX_REPLY("BA %d %d\n", agg->start_idx, ba_resp->ba_seq_ctl);
-- sh = agg->start_idx - SEQ_TO_INDEX(ba_seq_ctl>>4);
-- if (sh < 0) /* tbw something is wrong with indeces */
-+
-+ /* Calculate shift to align block-ack bits with our Tx window bits */
-+ sh = agg->start_idx - SEQ_TO_INDEX(ba_seq_ctl >> 4);
-+ if (sh < 0) /* tbw something is wrong with indices */
- sh += 0x100;
-
-- /* don't use 64 bits for now */
-+ /* don't use 64-bit values for now */
- bitmap0 = resp_bitmap0 >> sh;
- bitmap1 = resp_bitmap1 >> sh;
-- bitmap0 |= (resp_bitmap1 & ((1<<sh)|((1<<sh)-1))) << (32 - sh);
-+ bitmap0 |= (resp_bitmap1 & ((1 << sh) | ((1 << sh) - 1))) << (32 - sh);
-
- if (agg->frame_count > (64 - sh)) {
- IWL_DEBUG_TX_REPLY("more frames than bitmap size");
-@@ -4065,10 +4248,12 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
- }
-
- /* check for success or failure according to the
-- * transmitted bitmap and back bitmap */
-+ * transmitted bitmap and block-ack bitmap */
- bitmap0 &= agg->bitmap0;
- bitmap1 &= agg->bitmap1;
-
-+ /* For each frame attempted in aggregation,
-+ * update driver's record of tx frame's status. */
- for (i = 0; i < agg->frame_count ; i++) {
- int idx = (agg->start_idx + i) & 0xff;
- ack = bitmap0 & (1 << i);
-@@ -4084,20 +4269,36 @@ static int iwl4965_tx_status_reply_compressed_ba(struct iwl_priv *priv,
- return 0;
- }
-
--static inline int iwl_queue_dec_wrap(int index, int n_bd)
-+/**
-+ * iwl4965_queue_dec_wrap - Decrement queue index, wrap back to end if needed
-+ * @index -- current index
-+ * @n_bd -- total number of entries in queue (s/b power of 2)
-+ */
-+static inline int iwl4965_queue_dec_wrap(int index, int n_bd)
- {
- return (index == 0) ? n_bd - 1 : index - 1;
- }
-
--static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+/**
-+ * iwl4965_rx_reply_compressed_ba - Handler for REPLY_COMPRESSED_BA
-+ *
-+ * Handles block-acknowledge notification from device, which reports success
-+ * of frames sent via aggregation.
-+ */
-+static void iwl4965_rx_reply_compressed_ba(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_compressed_ba_resp *ba_resp = &pkt->u.compressed_ba;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_compressed_ba_resp *ba_resp = &pkt->u.compressed_ba;
- int index;
-- struct iwl_tx_queue *txq = NULL;
-- struct iwl_ht_agg *agg;
-+ struct iwl4965_tx_queue *txq = NULL;
-+ struct iwl4965_ht_agg *agg;
-+
-+ /* "flow" corresponds to Tx queue */
- u16 ba_resp_scd_flow = le16_to_cpu(ba_resp->scd_flow);
-+
-+ /* "ssn" is start of block-ack Tx window, corresponds to index
-+ * (in Tx queue's circular buffer) of first TFD/frame in window */
- u16 ba_resp_scd_ssn = le16_to_cpu(ba_resp->scd_ssn);
-
- if (ba_resp_scd_flow >= ARRAY_SIZE(priv->txq)) {
-@@ -4107,9 +4308,11 @@ static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
-
- txq = &priv->txq[ba_resp_scd_flow];
- agg = &priv->stations[ba_resp->sta_id].tid[ba_resp->tid].agg;
-- index = iwl_queue_dec_wrap(ba_resp_scd_ssn & 0xff, txq->q.n_bd);
-
-- /* TODO: Need to get this copy more sefely - now good for debug */
-+ /* Find index just before block-ack window */
-+ index = iwl4965_queue_dec_wrap(ba_resp_scd_ssn & 0xff, txq->q.n_bd);
-+
-+ /* TODO: Need to get this copy more safely - now good for debug */
- /*
- {
- DECLARE_MAC_BUF(mac);
-@@ -4132,23 +4335,36 @@ static void iwl4965_rx_reply_compressed_ba(struct iwl_priv *priv,
- agg->bitmap0);
- }
- */
-+
-+ /* Update driver's record of ACK vs. not for each frame in window */
- iwl4965_tx_status_reply_compressed_ba(priv, agg, ba_resp);
-- /* releases all the TFDs until the SSN */
-- if (txq->q.last_used != (ba_resp_scd_ssn & 0xff))
-- iwl_tx_queue_reclaim(priv, ba_resp_scd_flow, index);
-+
-+ /* Release all TFDs before the SSN, i.e. all TFDs in front of
-+ * block-ack window (we assume that they've been successfully
-+ * transmitted ... if not, it's too late anyway). */
-+ if (txq->q.read_ptr != (ba_resp_scd_ssn & 0xff))
-+ iwl4965_tx_queue_reclaim(priv, ba_resp_scd_flow, index);
-
- }
-
-
--static void iwl4965_tx_queue_stop_scheduler(struct iwl_priv *priv, u16 txq_id)
-+/**
-+ * iwl4965_tx_queue_stop_scheduler - Stop queue, but keep configuration
-+ */
-+static void iwl4965_tx_queue_stop_scheduler(struct iwl4965_priv *priv, u16 txq_id)
- {
-- iwl_write_restricted_reg(priv,
-- SCD_QUEUE_STATUS_BITS(txq_id),
-+ /* Simply stop the queue, but don't change any configuration;
-+ * the SCD_ACT_EN bit is the write-enable mask for the ACTIVE bit. */
-+ iwl4965_write_prph(priv,
-+ KDR_SCD_QUEUE_STATUS_BITS(txq_id),
- (0 << SCD_QUEUE_STTS_REG_POS_ACTIVE)|
- (1 << SCD_QUEUE_STTS_REG_POS_SCD_ACT_EN));
- }
-
--static int iwl4965_tx_queue_set_q2ratid(struct iwl_priv *priv, u16 ra_tid,
-+/**
-+ * iwl4965_tx_queue_set_q2ratid - Map unique receiver/tid combination to a queue
-+ */
-+static int iwl4965_tx_queue_set_q2ratid(struct iwl4965_priv *priv, u16 ra_tid,
- u16 txq_id)
- {
- u32 tbl_dw_addr;
-@@ -4160,22 +4376,25 @@ static int iwl4965_tx_queue_set_q2ratid(struct iwl_priv *priv, u16 ra_tid,
- tbl_dw_addr = priv->scd_base_addr +
- SCD_TRANSLATE_TBL_OFFSET_QUEUE(txq_id);
-
-- tbl_dw = iwl_read_restricted_mem(priv, tbl_dw_addr);
-+ tbl_dw = iwl4965_read_targ_mem(priv, tbl_dw_addr);
-
- if (txq_id & 0x1)
- tbl_dw = (scd_q2ratid << 16) | (tbl_dw & 0x0000FFFF);
- else
- tbl_dw = scd_q2ratid | (tbl_dw & 0xFFFF0000);
-
-- iwl_write_restricted_mem(priv, tbl_dw_addr, tbl_dw);
-+ iwl4965_write_targ_mem(priv, tbl_dw_addr, tbl_dw);
-
- return 0;
- }
-
- /**
-- * txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID
-+ * iwl4965_tx_queue_agg_enable - Set up & enable aggregation for selected queue
-+ *
-+ * NOTE: txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID,
-+ * i.e. it must be one of the higher queues used for aggregation
- */
--static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
-+static int iwl4965_tx_queue_agg_enable(struct iwl4965_priv *priv, int txq_id,
- int tx_fifo, int sta_id, int tid,
- u16 ssn_idx)
- {
-@@ -4189,43 +4408,48 @@ static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
-
- ra_tid = BUILD_RAxTID(sta_id, tid);
-
-- iwl_sta_modify_enable_tid_tx(priv, sta_id, tid);
-+ /* Modify device's station table to Tx this TID */
-+ iwl4965_sta_modify_enable_tid_tx(priv, sta_id, tid);
-
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
- }
-
-+ /* Stop this Tx queue before configuring it */
- iwl4965_tx_queue_stop_scheduler(priv, txq_id);
-
-+ /* Map receiver-address / traffic-ID to this queue */
- iwl4965_tx_queue_set_q2ratid(priv, ra_tid, txq_id);
-
-+ /* Set this queue as a chain-building queue */
-+ iwl4965_set_bits_prph(priv, KDR_SCD_QUEUECHAIN_SEL, (1 << txq_id));
-
-- iwl_set_bits_restricted_reg(priv, SCD_QUEUECHAIN_SEL, (1<<txq_id));
+-struct statistics_rx {
+- struct statistics_rx_phy ofdm;
+- struct statistics_rx_phy cck;
+- struct statistics_rx_non_phy general;
+-#if IWL == 4965
+- struct statistics_rx_ht_phy ofdm_ht;
+-#endif
+-} __attribute__ ((packed));
-
-- priv->txq[txq_id].q.last_used = (ssn_idx & 0xff);
-- priv->txq[txq_id].q.first_empty = (ssn_idx & 0xff);
+-#if IWL == 4965
+-struct statistics_tx_non_phy_agg {
+- __le32 ba_timeout;
+- __le32 ba_reschedule_frames;
+- __le32 scd_query_agg_frame_cnt;
+- __le32 scd_query_no_agg;
+- __le32 scd_query_agg;
+- __le32 scd_query_mismatch;
+- __le32 frame_not_ready;
+- __le32 underrun;
+- __le32 bt_prio_kill;
+- __le32 rx_ba_rsp_cnt;
+- __le32 reserved2;
+- __le32 reserved3;
+-} __attribute__ ((packed));
+-#endif
-
-- /* supposes that ssn_idx is valid (!= 0xFFF) */
-+ /* Place first TFD at index corresponding to start sequence number.
-+ * Assumes that ssn_idx is valid (!= 0xFFF) */
-+ priv->txq[txq_id].q.read_ptr = (ssn_idx & 0xff);
-+ priv->txq[txq_id].q.write_ptr = (ssn_idx & 0xff);
- iwl4965_set_wr_ptrs(priv, txq_id, ssn_idx);
-
-- iwl_write_restricted_mem(priv,
-+ /* Set up Tx window size and frame limit for this queue */
-+ iwl4965_write_targ_mem(priv,
- priv->scd_base_addr + SCD_CONTEXT_QUEUE_OFFSET(txq_id),
- (SCD_WIN_SIZE << SCD_QUEUE_CTX_REG1_WIN_SIZE_POS) &
- SCD_QUEUE_CTX_REG1_WIN_SIZE_MSK);
-
-- iwl_write_restricted_mem(priv, priv->scd_base_addr +
-+ iwl4965_write_targ_mem(priv, priv->scd_base_addr +
- SCD_CONTEXT_QUEUE_OFFSET(txq_id) + sizeof(u32),
- (SCD_FRAME_LIMIT << SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS)
- & SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK);
-
-- iwl_set_bits_restricted_reg(priv, SCD_INTERRUPT_MASK, (1 << txq_id));
-+ iwl4965_set_bits_prph(priv, KDR_SCD_INTERRUPT_MASK, (1 << txq_id));
-
-+ /* Set up Status area in SRAM, map to Tx DMA/FIFO, activate the queue */
- iwl4965_tx_queue_set_status(priv, &priv->txq[txq_id], tx_fifo, 1);
-
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
-
- return 0;
-@@ -4234,7 +4458,7 @@ static int iwl4965_tx_queue_agg_enable(struct iwl_priv *priv, int txq_id,
- /**
- * txq_id must be greater than IWL_BACK_QUEUE_FIRST_ID
- */
--static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
-+static int iwl4965_tx_queue_agg_disable(struct iwl4965_priv *priv, u16 txq_id,
- u16 ssn_idx, u8 tx_fifo)
- {
- unsigned long flags;
-@@ -4247,7 +4471,7 @@ static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
- }
-
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
-@@ -4255,56 +4479,50 @@ static int iwl4965_tx_queue_agg_disable(struct iwl_priv *priv, u16 txq_id,
-
- iwl4965_tx_queue_stop_scheduler(priv, txq_id);
-
-- iwl_clear_bits_restricted_reg(priv, SCD_QUEUECHAIN_SEL, (1 << txq_id));
-+ iwl4965_clear_bits_prph(priv, KDR_SCD_QUEUECHAIN_SEL, (1 << txq_id));
-
-- priv->txq[txq_id].q.last_used = (ssn_idx & 0xff);
-- priv->txq[txq_id].q.first_empty = (ssn_idx & 0xff);
-+ priv->txq[txq_id].q.read_ptr = (ssn_idx & 0xff);
-+ priv->txq[txq_id].q.write_ptr = (ssn_idx & 0xff);
- /* supposes that ssn_idx is valid (!= 0xFFF) */
- iwl4965_set_wr_ptrs(priv, txq_id, ssn_idx);
-
-- iwl_clear_bits_restricted_reg(priv, SCD_INTERRUPT_MASK, (1 << txq_id));
-+ iwl4965_clear_bits_prph(priv, KDR_SCD_INTERRUPT_MASK, (1 << txq_id));
- iwl4965_txq_ctx_deactivate(priv, txq_id);
- iwl4965_tx_queue_set_status(priv, &priv->txq[txq_id], tx_fifo, 0);
-
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
-
- return 0;
- }
-
--#endif/* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
--/*
-- * RATE SCALE CODE
-- */
--int iwl4965_init_hw_rates(struct iwl_priv *priv, struct ieee80211_rate *rates)
--{
-- return 0;
--}
+-struct statistics_tx {
+- __le32 preamble_cnt;
+- __le32 rx_detected_cnt;
+- __le32 bt_prio_defer_cnt;
+- __le32 bt_prio_kill_cnt;
+- __le32 few_bytes_cnt;
+- __le32 cts_timeout;
+- __le32 ack_timeout;
+- __le32 expected_ack_cnt;
+- __le32 actual_ack_cnt;
+-#if IWL == 4965
+- __le32 dump_msdu_cnt;
+- __le32 burst_abort_next_frame_mismatch_cnt;
+- __le32 burst_abort_missing_next_frame_cnt;
+- __le32 cts_timeout_collision;
+- __le32 ack_or_ba_timeout_collision;
+- struct statistics_tx_non_phy_agg agg;
+-#endif
+-} __attribute__ ((packed));
-
-+#endif/* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-
- /**
- * iwl4965_add_station - Initialize a station's hardware rate table
- *
-- * The uCode contains a table of fallback rates and retries per rate
-+ * The uCode's station table contains a table of fallback rates
- * for automatic fallback during transmission.
- *
-- * NOTE: This initializes the table for a single retry per data rate
-- * which is not optimal. Setting up an intelligent retry per rate
-- * requires feedback from transmission, which isn't exposed through
-- * rc80211_simple which is what this driver is currently using.
-+ * NOTE: This sets up a default set of values. These will be replaced later
-+ * if the driver's iwl-4965-rs rate scaling algorithm is used, instead of
-+ * rc80211_simple.
- *
-+ * NOTE: Run REPLY_ADD_STA command to set up station table entry, before
-+ * calling this function (which runs REPLY_TX_LINK_QUALITY_CMD,
-+ * which requires station table entry to exist).
- */
--void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
-+void iwl4965_add_station(struct iwl4965_priv *priv, const u8 *addr, int is_ap)
- {
- int i, r;
-- struct iwl_link_quality_cmd link_cmd = {
-+ struct iwl4965_link_quality_cmd link_cmd = {
- .reserved1 = 0,
- };
- u16 rate_flags;
-
-- /* Set up the rate scaling to start at 54M and fallback
-- * all the way to 1M in IEEE order and then spin on IEEE */
-+ /* Set up the rate scaling to start at selected rate, fall back
-+ * all the way down to 1M in IEEE order, and then spin on 1M */
- if (is_ap)
- r = IWL_RATE_54M_INDEX;
- else if (priv->phymode == MODE_IEEE80211A)
-@@ -4317,11 +4535,13 @@ void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
- if (r >= IWL_FIRST_CCK_RATE && r <= IWL_LAST_CCK_RATE)
- rate_flags |= RATE_MCS_CCK_MSK;
-
-+ /* Use Tx antenna B only */
- rate_flags |= RATE_MCS_ANT_B_MSK;
- rate_flags &= ~RATE_MCS_ANT_A_MSK;
-+
- link_cmd.rs_table[i].rate_n_flags =
-- iwl_hw_set_rate_n_flags(iwl_rates[r].plcp, rate_flags);
-- r = iwl_get_prev_ieee_rate(r);
-+ iwl4965_hw_set_rate_n_flags(iwl4965_rates[r].plcp, rate_flags);
-+ r = iwl4965_get_prev_ieee_rate(r);
- }
-
- link_cmd.general_params.single_stream_ant_msk = 2;
-@@ -4332,18 +4552,18 @@ void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
- /* Update the rate scaling for control frame Tx to AP */
- link_cmd.sta_id = is_ap ? IWL_AP_ID : IWL4965_BROADCAST_ID;
-
-- iwl_send_cmd_pdu(priv, REPLY_TX_LINK_QUALITY_CMD, sizeof(link_cmd),
-+ iwl4965_send_cmd_pdu(priv, REPLY_TX_LINK_QUALITY_CMD, sizeof(link_cmd),
- &link_cmd);
- }
-
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
-
--static u8 iwl_is_channel_extension(struct iwl_priv *priv, int phymode,
-+static u8 iwl4965_is_channel_extension(struct iwl4965_priv *priv, int phymode,
- u16 channel, u8 extension_chan_offset)
- {
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
-
-- ch_info = iwl_get_channel_info(priv, phymode, channel);
-+ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
- if (!is_channel_valid(ch_info))
- return 0;
-
-@@ -4357,36 +4577,37 @@ static u8 iwl_is_channel_extension(struct iwl_priv *priv, int phymode,
- return 0;
- }
-
--static u8 iwl_is_fat_tx_allowed(struct iwl_priv *priv,
-- const struct sta_ht_info *ht_info)
-+static u8 iwl4965_is_fat_tx_allowed(struct iwl4965_priv *priv,
-+ struct ieee80211_ht_info *sta_ht_inf)
- {
-+ struct iwl_ht_info *iwl_ht_conf = &priv->current_ht_config;
-
-- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
-+ if ((!iwl_ht_conf->is_ht) ||
-+ (iwl_ht_conf->supported_chan_width != IWL_CHANNEL_WIDTH_40MHZ) ||
-+ (iwl_ht_conf->extension_chan_offset == IWL_EXT_CHANNEL_OFFSET_AUTO))
- return 0;
-
-- if (ht_info->supported_chan_width != IWL_CHANNEL_WIDTH_40MHZ)
-- return 0;
+-struct statistics_dbg {
+- __le32 burst_check;
+- __le32 burst_count;
+- __le32 reserved[4];
+-} __attribute__ ((packed));
+-
+-struct statistics_div {
+- __le32 tx_on_a;
+- __le32 tx_on_b;
+- __le32 exec_time;
+- __le32 probe_time;
+-#if IWL == 4965
+- __le32 reserved1;
+- __le32 reserved2;
+-#endif
+-} __attribute__ ((packed));
+-
+-struct statistics_general {
+- __le32 temperature;
+-#if IWL == 4965
+- __le32 temperature_m;
+-#endif
+- struct statistics_dbg dbg;
+- __le32 sleep_time;
+- __le32 slots_out;
+- __le32 slots_idle;
+- __le32 ttl_timestamp;
+- struct statistics_div div;
+-#if IWL == 4965
+- __le32 rx_enable_counter;
+- __le32 reserved1;
+- __le32 reserved2;
+- __le32 reserved3;
+-#endif
+-} __attribute__ ((packed));
-
-- if (ht_info->extension_chan_offset == IWL_EXT_CHANNEL_OFFSET_AUTO)
-- return 0;
-+ if (sta_ht_inf) {
-+ if ((!sta_ht_inf->ht_supported) ||
-+ (!sta_ht_inf->cap & IEEE80211_HT_CAP_SUP_WIDTH))
-+ return 0;
-+ }
-
-- /* no fat tx allowed on 2.4GHZ */
-- if (priv->phymode != MODE_IEEE80211A)
-- return 0;
-- return (iwl_is_channel_extension(priv, priv->phymode,
-- ht_info->control_channel,
-- ht_info->extension_chan_offset));
-+ return (iwl4965_is_channel_extension(priv, priv->phymode,
-+ iwl_ht_conf->control_channel,
-+ iwl_ht_conf->extension_chan_offset));
- }
-
--void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
-+void iwl4965_set_rxon_ht(struct iwl4965_priv *priv, struct iwl_ht_info *ht_info)
- {
-- struct iwl_rxon_cmd *rxon = &priv->staging_rxon;
-+ struct iwl4965_rxon_cmd *rxon = &priv->staging_rxon;
- u32 val;
-
- if (!ht_info->is_ht)
- return;
-
-- if (iwl_is_fat_tx_allowed(priv, ht_info))
-+ /* Set up channel bandwidth: 20 MHz only, or 20/40 mixed if fat ok */
-+ if (iwl4965_is_fat_tx_allowed(priv, NULL))
- rxon->flags |= RXON_FLG_CHANNEL_MODE_MIXED_MSK;
- else
- rxon->flags &= ~(RXON_FLG_CHANNEL_MODE_MIXED_MSK |
-@@ -4400,7 +4621,7 @@ void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
- return;
- }
-
-- /* Note: control channel is oposit to extension channel */
-+ /* Note: control channel is opposite of extension channel */
- switch (ht_info->extension_chan_offset) {
- case IWL_EXT_CHANNEL_OFFSET_ABOVE:
- rxon->flags &= ~(RXON_FLG_CTRL_CHANNEL_LOC_HI_MSK);
-@@ -4416,66 +4637,56 @@ void iwl4965_set_rxon_ht(struct iwl_priv *priv, struct sta_ht_info *ht_info)
- break;
- }
-
-- val = ht_info->operating_mode;
-+ val = ht_info->ht_protection;
-
- rxon->flags |= cpu_to_le32(val << RXON_FLG_HT_OPERATING_MODE_POS);
-
-- priv->active_rate_ht[0] = ht_info->supp_rates[0];
-- priv->active_rate_ht[1] = ht_info->supp_rates[1];
- iwl4965_set_rxon_chain(priv);
-
- IWL_DEBUG_ASSOC("supported HT rate 0x%X %X "
- "rxon flags 0x%X operation mode :0x%X "
- "extension channel offset 0x%x "
- "control chan %d\n",
-- priv->active_rate_ht[0], priv->active_rate_ht[1],
-- le32_to_cpu(rxon->flags), ht_info->operating_mode,
-+ ht_info->supp_mcs_set[0], ht_info->supp_mcs_set[1],
-+ le32_to_cpu(rxon->flags), ht_info->ht_protection,
- ht_info->extension_chan_offset,
- ht_info->control_channel);
- return;
- }
-
--void iwl4965_set_ht_add_station(struct iwl_priv *priv, u8 index)
-+void iwl4965_set_ht_add_station(struct iwl4965_priv *priv, u8 index,
-+ struct ieee80211_ht_info *sta_ht_inf)
- {
- __le32 sta_flags;
-- struct sta_ht_info *ht_info = &priv->current_assoc_ht;
-
-- priv->current_channel_width = IWL_CHANNEL_WIDTH_20MHZ;
-- if (!ht_info->is_ht)
-+ if (!sta_ht_inf || !sta_ht_inf->ht_supported)
- goto done;
-
- sta_flags = priv->stations[index].sta.station_flags;
-
-- if (ht_info->tx_mimo_ps_mode == IWL_MIMO_PS_DYNAMIC)
-+ if (((sta_ht_inf->cap & IEEE80211_HT_CAP_MIMO_PS >> 2))
-+ == IWL_MIMO_PS_DYNAMIC)
- sta_flags |= STA_FLG_RTS_MIMO_PROT_MSK;
- else
- sta_flags &= ~STA_FLG_RTS_MIMO_PROT_MSK;
-
- sta_flags |= cpu_to_le32(
-- (u32)ht_info->ampdu_factor << STA_FLG_MAX_AGG_SIZE_POS);
-+ (u32)sta_ht_inf->ampdu_factor << STA_FLG_MAX_AGG_SIZE_POS);
-
- sta_flags |= cpu_to_le32(
-- (u32)ht_info->mpdu_density << STA_FLG_AGG_MPDU_DENSITY_POS);
+-/*
+- * REPLY_STATISTICS_CMD = 0x9c,
+- * 3945 and 4965 identical.
+- *
+- * This command triggers an immediate response containing uCode statistics.
+- * The response is in the same format as STATISTICS_NOTIFICATION 0x9d, below.
+- *
+- * If the CLEAR_STATS configuration flag is set, uCode will clear its
+- * internal copy of the statistics (counters) after issuing the response.
+- * This flag does not affect STATISTICS_NOTIFICATIONs after beacons (see below).
+- *
+- * If the DISABLE_NOTIF configuration flag is set, uCode will not issue
+- * STATISTICS_NOTIFICATIONs after received beacons (see below). This flag
+- * does not affect the response to the REPLY_STATISTICS_CMD 0x9c itself.
+- */
+-#define IWL_STATS_CONF_CLEAR_STATS __constant_cpu_to_le32(0x1) /* see above */
+-#define IWL_STATS_CONF_DISABLE_NOTIF __constant_cpu_to_le32(0x2)/* see above */
+-struct iwl_statistics_cmd {
+- __le32 configuration_flags; /* IWL_STATS_CONF_* */
+-} __attribute__ ((packed));
-
-- sta_flags &= (~STA_FLG_FAT_EN_MSK);
-- ht_info->tx_chan_width = IWL_CHANNEL_WIDTH_20MHZ;
-- ht_info->chan_width_cap = IWL_CHANNEL_WIDTH_20MHZ;
-+ (u32)sta_ht_inf->ampdu_density << STA_FLG_AGG_MPDU_DENSITY_POS);
-
-- if (iwl_is_fat_tx_allowed(priv, ht_info)) {
-+ if (iwl4965_is_fat_tx_allowed(priv, sta_ht_inf))
- sta_flags |= STA_FLG_FAT_EN_MSK;
-- ht_info->chan_width_cap = IWL_CHANNEL_WIDTH_40MHZ;
-- if (ht_info->supported_chan_width == IWL_CHANNEL_WIDTH_40MHZ)
-- ht_info->tx_chan_width = IWL_CHANNEL_WIDTH_40MHZ;
-- }
-- priv->current_channel_width = ht_info->tx_chan_width;
-+ else
-+ sta_flags &= (~STA_FLG_FAT_EN_MSK);
-+
- priv->stations[index].sta.station_flags = sta_flags;
- done:
- return;
- }
-
--#ifdef CONFIG_IWLWIFI_HT_AGG
+-/*
+- * STATISTICS_NOTIFICATION = 0x9d (notification only, not a command)
+- *
+- * By default, uCode issues this notification after receiving a beacon
+- * while associated. To disable this behavior, set DISABLE_NOTIF flag in the
+- * REPLY_STATISTICS_CMD 0x9c, above.
+- *
+- * Statistics counters continue to increment beacon after beacon, but are
+- * cleared when changing channels or when driver issues REPLY_STATISTICS_CMD
+- * 0x9c with CLEAR_STATS bit set (see above).
+- *
+- * uCode also issues this notification during scans. uCode clears statistics
+- * appropriately so that each notification contains statistics for only the
+- * one channel that has just been scanned.
+- */
+-#define STATISTICS_REPLY_FLG_BAND_24G_MSK __constant_cpu_to_le32(0x2)
+-#define STATISTICS_REPLY_FLG_FAT_MODE_MSK __constant_cpu_to_le32(0x8)
+-struct iwl_notif_statistics {
+- __le32 flag;
+- struct statistics_rx rx;
+- struct statistics_tx tx;
+- struct statistics_general general;
+-} __attribute__ ((packed));
-
--static void iwl4965_sta_modify_add_ba_tid(struct iwl_priv *priv,
-+static void iwl4965_sta_modify_add_ba_tid(struct iwl4965_priv *priv,
- int sta_id, int tid, u16 ssn)
- {
- unsigned long flags;
-@@ -4488,10 +4699,10 @@ static void iwl4965_sta_modify_add_ba_tid(struct iwl_priv *priv,
- priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
- }
-
--static void iwl4965_sta_modify_del_ba_tid(struct iwl_priv *priv,
-+static void iwl4965_sta_modify_del_ba_tid(struct iwl4965_priv *priv,
- int sta_id, int tid)
- {
- unsigned long flags;
-@@ -4503,9 +4714,39 @@ static void iwl4965_sta_modify_del_ba_tid(struct iwl_priv *priv,
- priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, CMD_ASYNC);
-+}
-+
-+int iwl4965_mac_ampdu_action(struct ieee80211_hw *hw,
-+ enum ieee80211_ampdu_mlme_action action,
-+ const u8 *addr, u16 tid, u16 ssn)
-+{
-+ struct iwl4965_priv *priv = hw->priv;
-+ int sta_id;
-+ DECLARE_MAC_BUF(mac);
-+
-+ IWL_DEBUG_HT("A-MPDU action on da=%s tid=%d ",
-+ print_mac(mac, addr), tid);
-+ sta_id = iwl4965_hw_find_station(priv, addr);
-+ switch (action) {
-+ case IEEE80211_AMPDU_RX_START:
-+ IWL_DEBUG_HT("start Rx\n");
-+ iwl4965_sta_modify_add_ba_tid(priv, sta_id, tid, ssn);
-+ break;
-+ case IEEE80211_AMPDU_RX_STOP:
-+ IWL_DEBUG_HT("stop Rx\n");
-+ iwl4965_sta_modify_del_ba_tid(priv, sta_id, tid);
-+ break;
-+ default:
-+ IWL_DEBUG_HT("unknown\n");
-+ return -EINVAL;
-+ break;
-+ }
-+ return 0;
- }
-
-+#ifdef CONFIG_IWL4965_HT_AGG
-+
- static const u16 default_tid_to_tx_fifo[] = {
- IWL_TX_FIFO_AC1,
- IWL_TX_FIFO_AC0,
-@@ -4526,7 +4767,13 @@ static const u16 default_tid_to_tx_fifo[] = {
- IWL_TX_FIFO_AC3
- };
-
--static int iwl_txq_ctx_activate_free(struct iwl_priv *priv)
-+/*
-+ * Find first available (lowest unused) Tx Queue, mark it "active".
-+ * Called only when finding queue for aggregation.
-+ * Should never return anything < 7, because they should already
-+ * be in use as EDCA AC (0-3), Command (4), HCCA (5, 6).
-+ */
-+static int iwl4965_txq_ctx_activate_free(struct iwl4965_priv *priv)
- {
- int txq_id;
-
-@@ -4536,55 +4783,65 @@ static int iwl_txq_ctx_activate_free(struct iwl_priv *priv)
- return -1;
- }
-
--int iwl_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da, u16 tid,
-+int iwl4965_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da, u16 tid,
- u16 *start_seq_num)
- {
-
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- int sta_id;
- int tx_fifo;
- int txq_id;
- int ssn = -1;
- unsigned long flags;
-- struct iwl_tid_data *tid_data;
-+ struct iwl4965_tid_data *tid_data;
- DECLARE_MAC_BUF(mac);
-
-+ /* Determine Tx DMA/FIFO channel for this Traffic ID */
- if (likely(tid < ARRAY_SIZE(default_tid_to_tx_fifo)))
- tx_fifo = default_tid_to_tx_fifo[tid];
- else
- return -EINVAL;
-
-- IWL_WARNING("iwl-AGG iwl_mac_ht_tx_agg_start on da=%s"
-+ IWL_WARNING("iwl-AGG iwl4965_mac_ht_tx_agg_start on da=%s"
- " tid=%d\n", print_mac(mac, da), tid);
-
-- sta_id = iwl_hw_find_station(priv, da);
-+ /* Get index into station table */
-+ sta_id = iwl4965_hw_find_station(priv, da);
- if (sta_id == IWL_INVALID_STATION)
- return -ENXIO;
-
-- txq_id = iwl_txq_ctx_activate_free(priv);
-+ /* Find available Tx queue for aggregation */
-+ txq_id = iwl4965_txq_ctx_activate_free(priv);
- if (txq_id == -1)
- return -ENXIO;
-
- spin_lock_irqsave(&priv->sta_lock, flags);
- tid_data = &priv->stations[sta_id].tid[tid];
-+
-+ /* Get starting sequence number for 1st frame in block ack window.
-+ * We'll use least signif byte as 1st frame's index into Tx queue. */
- ssn = SEQ_TO_SN(tid_data->seq_number);
- tid_data->agg.txq_id = txq_id;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
- *start_seq_num = ssn;
-+
-+ /* Update driver's link quality manager */
- iwl4965_ba_status(priv, tid, BA_STATUS_ACTIVE);
-+
-+ /* Set up and enable aggregation for selected Tx queue and FIFO */
- return iwl4965_tx_queue_agg_enable(priv, txq_id, tx_fifo,
- sta_id, tid, ssn);
- }
-
-
--int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
-+int iwl4965_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
- int generator)
- {
-
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- int tx_fifo_id, txq_id, sta_id, ssn = -1;
-- struct iwl_tid_data *tid_data;
-+ struct iwl4965_tid_data *tid_data;
- int rc;
- DECLARE_MAC_BUF(mac);
-
-@@ -4598,7 +4855,7 @@ int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
- else
- return -EINVAL;
-
-- sta_id = iwl_hw_find_station(priv, da);
-+ sta_id = iwl4965_hw_find_station(priv, da);
-
- if (sta_id == IWL_INVALID_STATION)
- return -ENXIO;
-@@ -4613,45 +4870,18 @@ int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da, u16 tid,
- return rc;
-
- iwl4965_ba_status(priv, tid, BA_STATUS_INITIATOR_DELBA);
-- IWL_DEBUG_INFO("iwl_mac_ht_tx_agg_stop on da=%s tid=%d\n",
-+ IWL_DEBUG_INFO("iwl4965_mac_ht_tx_agg_stop on da=%s tid=%d\n",
- print_mac(mac, da), tid);
-
- return 0;
- }
-
--int iwl_mac_ht_rx_agg_start(struct ieee80211_hw *hw, u8 *da,
-- u16 tid, u16 start_seq_num)
--{
-- struct iwl_priv *priv = hw->priv;
-- int sta_id;
-- DECLARE_MAC_BUF(mac);
-
-- IWL_WARNING("iwl-AGG iwl_mac_ht_rx_agg_start on da=%s"
-- " tid=%d\n", print_mac(mac, da), tid);
-- sta_id = iwl_hw_find_station(priv, da);
-- iwl4965_sta_modify_add_ba_tid(priv, sta_id, tid, start_seq_num);
-- return 0;
--}
-
--int iwl_mac_ht_rx_agg_stop(struct ieee80211_hw *hw, u8 *da,
-- u16 tid, int generator)
--{
-- struct iwl_priv *priv = hw->priv;
-- int sta_id;
-- DECLARE_MAC_BUF(mac);
+-/*
+- * MISSED_BEACONS_NOTIFICATION = 0xa2 (notification only, not a command)
+- */
+-/* if ucode missed CONSECUTIVE_MISSED_BCONS_TH beacons in a row,
+- * then this notification will be sent. */
+-#define CONSECUTIVE_MISSED_BCONS_TH 20
-
-- IWL_WARNING("iwl-AGG iwl_mac_ht_rx_agg_stop on da=%s tid=%d\n",
-- print_mac(mac, da), tid);
-- sta_id = iwl_hw_find_station(priv, da);
-- iwl4965_sta_modify_del_ba_tid(priv, sta_id, tid);
-- return 0;
--}
+-struct iwl_missed_beacon_notif {
+- __le32 consequtive_missed_beacons;
+- __le32 total_missed_becons;
+- __le32 num_expected_beacons;
+- __le32 num_recvd_beacons;
+-} __attribute__ ((packed));
-
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-
- /* Set up 4965-specific Rx frame reply handlers */
--void iwl_hw_rx_handler_setup(struct iwl_priv *priv)
-+void iwl4965_hw_rx_handler_setup(struct iwl4965_priv *priv)
- {
- /* Legacy Rx frames */
- priv->rx_handlers[REPLY_4965_RX] = iwl4965_rx_reply_rx;
-@@ -4663,57 +4893,66 @@ void iwl_hw_rx_handler_setup(struct iwl_priv *priv)
- priv->rx_handlers[MISSED_BEACONS_NOTIFICATION] =
- iwl4965_rx_missed_beacon_notif;
-
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- priv->rx_handlers[REPLY_COMPRESSED_BA] = iwl4965_rx_reply_compressed_ba;
--#endif /* CONFIG_IWLWIFI_AGG */
--#endif /* CONFIG_IWLWIFI */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
- }
-
--void iwl_hw_setup_deferred_work(struct iwl_priv *priv)
-+void iwl4965_hw_setup_deferred_work(struct iwl4965_priv *priv)
- {
- INIT_WORK(&priv->txpower_work, iwl4965_bg_txpower_work);
- INIT_WORK(&priv->statistics_work, iwl4965_bg_statistics_work);
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
- INIT_WORK(&priv->sensitivity_work, iwl4965_bg_sensitivity_work);
- #endif
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- INIT_WORK(&priv->agg_work, iwl4965_bg_agg_work);
--#endif /* CONFIG_IWLWIFI_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
- init_timer(&priv->statistics_periodic);
- priv->statistics_periodic.data = (unsigned long)priv;
- priv->statistics_periodic.function = iwl4965_bg_statistics_periodic;
- }
-
--void iwl_hw_cancel_deferred_work(struct iwl_priv *priv)
-+void iwl4965_hw_cancel_deferred_work(struct iwl4965_priv *priv)
- {
- del_timer_sync(&priv->statistics_periodic);
-
- cancel_delayed_work(&priv->init_alive_start);
- }
-
--struct pci_device_id iwl_hw_card_ids[] = {
-- {0x8086, 0x4229, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
-- {0x8086, 0x4230, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
-+struct pci_device_id iwl4965_hw_card_ids[] = {
-+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4229)},
-+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4230)},
- {0}
- };
-
--int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv)
-+/*
-+ * The device's EEPROM semaphore prevents conflicts between driver and uCode
-+ * when accessing the EEPROM; each access is a series of pulses to/from the
-+ * EEPROM chip, not a single event, so even reads could conflict if they
-+ * weren't arbitrated by the semaphore.
-+ */
-+int iwl4965_eeprom_acquire_semaphore(struct iwl4965_priv *priv)
- {
- u16 count;
- int rc;
-
- for (count = 0; count < EEPROM_SEM_RETRY_LIMIT; count++) {
-- iwl_set_bit(priv, CSR_HW_IF_CONFIG_REG,
-+ /* Request semaphore */
-+ iwl4965_set_bit(priv, CSR_HW_IF_CONFIG_REG,
- CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM);
-- rc = iwl_poll_bit(priv, CSR_HW_IF_CONFIG_REG,
-+
-+ /* See if we got it */
-+ rc = iwl4965_poll_bit(priv, CSR_HW_IF_CONFIG_REG,
- CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM,
- CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM,
- EEPROM_SEM_TIMEOUT);
- if (rc >= 0) {
-- IWL_DEBUG_IO("Aqcuired semaphore after %d tries.\n",
-+ IWL_DEBUG_IO("Acquired semaphore after %d tries.\n",
- count+1);
- return rc;
- }
-@@ -4722,11 +4961,11 @@ int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv)
- return rc;
- }
-
--inline void iwl_eeprom_release_semaphore(struct iwl_priv *priv)
-+inline void iwl4965_eeprom_release_semaphore(struct iwl4965_priv *priv)
- {
-- iwl_clear_bit(priv, CSR_HW_IF_CONFIG_REG,
-+ iwl4965_clear_bit(priv, CSR_HW_IF_CONFIG_REG,
- CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM);
- }
-
-
--MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);
-+MODULE_DEVICE_TABLE(pci, iwl4965_hw_card_ids);
-diff --git a/drivers/net/wireless/iwlwifi/iwl-4965.h b/drivers/net/wireless/iwlwifi/iwl-4965.h
-index 4c70081..78bc148 100644
---- a/drivers/net/wireless/iwlwifi/iwl-4965.h
-+++ b/drivers/net/wireless/iwlwifi/iwl-4965.h
-@@ -23,64 +23,777 @@
- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
- *
- *****************************************************************************/
-+/*
-+ * Please use this file (iwl-4965.h) for driver implementation definitions.
-+ * Please use iwl-4965-commands.h for uCode API definitions.
-+ * Please use iwl-4965-hw.h for hardware-related definitions.
-+ */
-+
- #ifndef __iwl_4965_h__
- #define __iwl_4965_h__
-
--struct iwl_priv;
--struct sta_ht_info;
-+#include <linux/pci.h> /* for struct pci_device_id */
-+#include <linux/kernel.h>
-+#include <net/ieee80211_radiotap.h>
-+
-+/* Hardware specific file defines the PCI IDs table for that hardware module */
-+extern struct pci_device_id iwl4965_hw_card_ids[];
-+
-+#define DRV_NAME "iwl4965"
-+#include "iwl-4965-hw.h"
-+#include "iwl-prph.h"
-+#include "iwl-4965-debug.h"
-+
-+/* Default noise level to report when noise measurement is not available.
-+ * This may be because we're:
-+ * 1) Not associated (4965, no beacon statistics being sent to driver)
-+ * 2) Scanning (noise measurement does not apply to associated channel)
-+ * 3) Receiving CCK (3945 delivers noise info only for OFDM frames)
-+ * Use default noise value of -127 ... this is below the range of measurable
-+ * Rx dBm for either 3945 or 4965, so it can indicate "unmeasurable" to user.
-+ * Also, -127 works better than 0 when averaging frames with/without
-+ * noise info (e.g. averaging might be done in app); measured dBm values are
-+ * always negative ... using a negative value as the default keeps all
-+ * averages within an s8's (used in some apps) range of negative values. */
-+#define IWL_NOISE_MEAS_NOT_AVAILABLE (-127)
-+
-+/* Module parameters accessible from iwl-*.c */
-+extern int iwl4965_param_hwcrypto;
-+extern int iwl4965_param_queues_num;
-+extern int iwl4965_param_amsdu_size_8K;
-+
-+enum iwl4965_antenna {
-+ IWL_ANTENNA_DIVERSITY,
-+ IWL_ANTENNA_MAIN,
-+ IWL_ANTENNA_AUX
-+};
-+
-+/*
-+ * RTS threshold here is total size [2347] minus 4 FCS bytes
-+ * Per spec:
-+ * a value of 0 means RTS on all data/management packets
-+ * a value > max MSDU size means no RTS
-+ * else RTS for data/management frames where MPDU is larger
-+ * than RTS value.
-+ */
-+#define DEFAULT_RTS_THRESHOLD 2347U
-+#define MIN_RTS_THRESHOLD 0U
-+#define MAX_RTS_THRESHOLD 2347U
-+#define MAX_MSDU_SIZE 2304U
-+#define MAX_MPDU_SIZE 2346U
-+#define DEFAULT_BEACON_INTERVAL 100U
-+#define DEFAULT_SHORT_RETRY_LIMIT 7U
-+#define DEFAULT_LONG_RETRY_LIMIT 4U
-+
-+struct iwl4965_rx_mem_buffer {
-+ dma_addr_t dma_addr;
-+ struct sk_buff *skb;
-+ struct list_head list;
-+};
-+
-+/*
-+ * Generic queue structure
-+ *
-+ * Contains common data for Rx and Tx queues
-+ */
-+struct iwl4965_queue {
-+ int n_bd; /* number of BDs in this queue */
-+ int write_ptr; /* 1-st empty entry (index) host_w*/
-+ int read_ptr; /* last used entry (index) host_r*/
-+ dma_addr_t dma_addr; /* physical addr for BD's */
-+ int n_window; /* safe queue window */
-+ u32 id;
-+ int low_mark; /* low watermark, resume queue if free
-+ * space more than this */
-+ int high_mark; /* high watermark, stop queue if free
-+ * space less than this */
-+} __attribute__ ((packed));
-+
-+#define MAX_NUM_OF_TBS (20)
-+
-+/* One for each TFD */
-+struct iwl4965_tx_info {
-+ struct ieee80211_tx_status status;
-+ struct sk_buff *skb[MAX_NUM_OF_TBS];
-+};
-+
-+/**
-+ * struct iwl4965_tx_queue - Tx Queue for DMA
-+ * @q: generic Rx/Tx queue descriptor
-+ * @bd: base of circular buffer of TFDs
-+ * @cmd: array of command/Tx buffers
-+ * @dma_addr_cmd: physical address of cmd/tx buffer array
-+ * @txb: array of per-TFD driver data
-+ * @need_update: indicates need to update read/write index
-+ * @sched_retry: indicates queue is high-throughput aggregation (HT AGG) enabled
-+ *
-+ * A Tx queue consists of circular buffer of BDs (a.k.a. TFDs, transmit frame
-+ * descriptors) and required locking structures.
-+ */
-+struct iwl4965_tx_queue {
-+ struct iwl4965_queue q;
-+ struct iwl4965_tfd_frame *bd;
-+ struct iwl4965_cmd *cmd;
-+ dma_addr_t dma_addr_cmd;
-+ struct iwl4965_tx_info *txb;
-+ int need_update;
-+ int sched_retry;
-+ int active;
-+};
-+
-+#define IWL_NUM_SCAN_RATES (2)
-+
-+struct iwl4965_channel_tgd_info {
-+ u8 type;
-+ s8 max_power;
-+};
-+
-+struct iwl4965_channel_tgh_info {
-+ s64 last_radar_time;
-+};
-+
-+/* current Tx power values to use, one for each rate for each channel.
-+ * requested power is limited by:
-+ * -- regulatory EEPROM limits for this channel
-+ * -- hardware capabilities (clip-powers)
-+ * -- spectrum management
-+ * -- user preference (e.g. iwconfig)
-+ * when requested power is set, base power index must also be set. */
-+struct iwl4965_channel_power_info {
-+ struct iwl4965_tx_power tpc; /* actual radio and DSP gain settings */
-+ s8 power_table_index; /* actual (compenst'd) index into gain table */
-+ s8 base_power_index; /* gain index for power at factory temp. */
-+ s8 requested_power; /* power (dBm) requested for this chnl/rate */
-+};
-+
-+/* current scan Tx power values to use, one for each scan rate for each
-+ * channel. */
-+struct iwl4965_scan_power_info {
-+ struct iwl4965_tx_power tpc; /* actual radio and DSP gain settings */
-+ s8 power_table_index; /* actual (compenst'd) index into gain table */
-+ s8 requested_power; /* scan pwr (dBm) requested for chnl/rate */
-+};
-+
-+/* For fat_extension_channel */
-+enum {
-+ HT_IE_EXT_CHANNEL_NONE = 0,
-+ HT_IE_EXT_CHANNEL_ABOVE,
-+ HT_IE_EXT_CHANNEL_INVALID,
-+ HT_IE_EXT_CHANNEL_BELOW,
-+ HT_IE_EXT_CHANNEL_MAX
-+};
-+
-+/*
-+ * One for each channel, holds all channel setup data
-+ * Some of the fields (e.g. eeprom and flags/max_power_avg) are redundant
-+ * with one another!
-+ */
-+#define IWL4965_MAX_RATE (33)
-+
-+struct iwl4965_channel_info {
-+ struct iwl4965_channel_tgd_info tgd;
-+ struct iwl4965_channel_tgh_info tgh;
-+ struct iwl4965_eeprom_channel eeprom; /* EEPROM regulatory limit */
-+ struct iwl4965_eeprom_channel fat_eeprom; /* EEPROM regulatory limit for
-+ * FAT channel */
-+
-+ u8 channel; /* channel number */
-+ u8 flags; /* flags copied from EEPROM */
-+ s8 max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
-+ s8 curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) limit */
-+ s8 min_power; /* always 0 */
-+ s8 scan_power; /* (dBm) regul. eeprom, direct scans, any rate */
-+
-+ u8 group_index; /* 0-4, maps channel to group1/2/3/4/5 */
-+ u8 band_index; /* 0-4, maps channel to band1/2/3/4/5 */
-+ u8 phymode; /* MODE_IEEE80211{A,B,G} */
-+
-+ /* Radio/DSP gain settings for each "normal" data Tx rate.
-+ * These include, in addition to RF and DSP gain, a few fields for
-+ * remembering/modifying gain settings (indexes). */
-+ struct iwl4965_channel_power_info power_info[IWL4965_MAX_RATE];
-+
-+ /* FAT channel info */
-+ s8 fat_max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
-+ s8 fat_curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
-+ s8 fat_min_power; /* always 0 */
-+ s8 fat_scan_power; /* (dBm) eeprom, direct scans, any rate */
-+ u8 fat_flags; /* flags copied from EEPROM */
-+ u8 fat_extension_channel; /* HT_IE_EXT_CHANNEL_* */
-+
-+ /* Radio/DSP gain settings for each scan rate, for directed scans. */
-+ struct iwl4965_scan_power_info scan_pwr_info[IWL_NUM_SCAN_RATES];
-+};
-+
-+struct iwl4965_clip_group {
-+ /* maximum power level to prevent clipping for each rate, derived by
-+ * us from this band's saturation power in EEPROM */
-+ const s8 clip_powers[IWL_MAX_RATES];
-+};
-+
-+#include "iwl-4965-rs.h"
-+
-+#define IWL_TX_FIFO_AC0 0
-+#define IWL_TX_FIFO_AC1 1
-+#define IWL_TX_FIFO_AC2 2
-+#define IWL_TX_FIFO_AC3 3
-+#define IWL_TX_FIFO_HCCA_1 5
-+#define IWL_TX_FIFO_HCCA_2 6
-+#define IWL_TX_FIFO_NONE 7
-+
-+/* Minimum number of queues. MAX_NUM is defined in hw specific files */
-+#define IWL_MIN_NUM_QUEUES 4
-+
-+/* Power management (not Tx power) structures */
-+
-+struct iwl4965_power_vec_entry {
-+ struct iwl4965_powertable_cmd cmd;
-+ u8 no_dtim;
-+};
-+#define IWL_POWER_RANGE_0 (0)
-+#define IWL_POWER_RANGE_1 (1)
-+
-+#define IWL_POWER_MODE_CAM 0x00 /* Continuously Aware Mode, always on */
-+#define IWL_POWER_INDEX_3 0x03
-+#define IWL_POWER_INDEX_5 0x05
-+#define IWL_POWER_AC 0x06
-+#define IWL_POWER_BATTERY 0x07
-+#define IWL_POWER_LIMIT 0x07
-+#define IWL_POWER_MASK 0x0F
-+#define IWL_POWER_ENABLED 0x10
-+#define IWL_POWER_LEVEL(x) ((x) & IWL_POWER_MASK)
-+
-+struct iwl4965_power_mgr {
-+ spinlock_t lock;
-+ struct iwl4965_power_vec_entry pwr_range_0[IWL_POWER_AC];
-+ struct iwl4965_power_vec_entry pwr_range_1[IWL_POWER_AC];
-+ u8 active_index;
-+ u32 dtim_val;
-+};
-+
-+#define IEEE80211_DATA_LEN 2304
-+#define IEEE80211_4ADDR_LEN 30
-+#define IEEE80211_HLEN (IEEE80211_4ADDR_LEN)
-+#define IEEE80211_FRAME_LEN (IEEE80211_DATA_LEN + IEEE80211_HLEN)
-+
-+struct iwl4965_frame {
-+ union {
-+ struct ieee80211_hdr frame;
-+ struct iwl4965_tx_beacon_cmd beacon;
-+ u8 raw[IEEE80211_FRAME_LEN];
-+ u8 cmd[360];
-+ } u;
-+ struct list_head list;
-+};
-+
-+#define SEQ_TO_QUEUE(x) ((x >> 8) & 0xbf)
-+#define QUEUE_TO_SEQ(x) ((x & 0xbf) << 8)
-+#define SEQ_TO_INDEX(x) (x & 0xff)
-+#define INDEX_TO_SEQ(x) (x & 0xff)
-+#define SEQ_HUGE_FRAME (0x4000)
-+#define SEQ_RX_FRAME __constant_cpu_to_le16(0x8000)
-+#define SEQ_TO_SN(seq) (((seq) & IEEE80211_SCTL_SEQ) >> 4)
-+#define SN_TO_SEQ(ssn) (((ssn) << 4) & IEEE80211_SCTL_SEQ)
-+#define MAX_SN ((IEEE80211_SCTL_SEQ) >> 4)
-+
-+enum {
-+ /* CMD_SIZE_NORMAL = 0, */
-+ CMD_SIZE_HUGE = (1 << 0),
-+ /* CMD_SYNC = 0, */
-+ CMD_ASYNC = (1 << 1),
-+ /* CMD_NO_SKB = 0, */
-+ CMD_WANT_SKB = (1 << 2),
-+};
-+
-+struct iwl4965_cmd;
-+struct iwl4965_priv;
-+
-+struct iwl4965_cmd_meta {
-+ struct iwl4965_cmd_meta *source;
-+ union {
-+ struct sk_buff *skb;
-+ int (*callback)(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd, struct sk_buff *skb);
-+ } __attribute__ ((packed)) u;
-+
-+ /* The CMD_SIZE_HUGE flag bit indicates that the command
-+ * structure is stored at the end of the shared queue memory. */
-+ u32 flags;
-+
-+} __attribute__ ((packed));
-+
-+/**
-+ * struct iwl4965_cmd
-+ *
-+ * For allocation of the command and tx queues, this establishes the overall
-+ * size of the largest command we send to uCode, except for a scan command
-+ * (which is relatively huge; space is allocated separately).
-+ */
-+struct iwl4965_cmd {
-+ struct iwl4965_cmd_meta meta; /* driver data */
-+ struct iwl4965_cmd_header hdr; /* uCode API */
-+ union {
-+ struct iwl4965_addsta_cmd addsta;
-+ struct iwl4965_led_cmd led;
-+ u32 flags;
-+ u8 val8;
-+ u16 val16;
-+ u32 val32;
-+ struct iwl4965_bt_cmd bt;
-+ struct iwl4965_rxon_time_cmd rxon_time;
-+ struct iwl4965_powertable_cmd powertable;
-+ struct iwl4965_qosparam_cmd qosparam;
-+ struct iwl4965_tx_cmd tx;
-+ struct iwl4965_tx_beacon_cmd tx_beacon;
-+ struct iwl4965_rxon_assoc_cmd rxon_assoc;
-+ u8 *indirect;
-+ u8 payload[360];
-+ } __attribute__ ((packed)) cmd;
-+} __attribute__ ((packed));
-+
-+struct iwl4965_host_cmd {
-+ u8 id;
-+ u16 len;
-+ struct iwl4965_cmd_meta meta;
-+ const void *data;
-+};
-+
-+#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl4965_cmd) - \
-+ sizeof(struct iwl4965_cmd_meta))
-+
-+/*
-+ * RX related structures and functions
-+ */
-+#define RX_FREE_BUFFERS 64
-+#define RX_LOW_WATERMARK 8
-+
-+#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
-+#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
-+#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
-+
-+/**
-+ * struct iwl4965_rx_queue - Rx queue
-+ * @processed: Internal index to last handled Rx packet
-+ * @read: Shared index to newest available Rx buffer
-+ * @write: Shared index to oldest written Rx packet
-+ * @free_count: Number of pre-allocated buffers in rx_free
-+ * @rx_free: list of free SKBs for use
-+ * @rx_used: List of Rx buffers with no SKB
-+ * @need_update: flag to indicate we need to update read/write index
-+ *
-+ * NOTE: rx_free and rx_used are used as a FIFO for iwl4965_rx_mem_buffers
-+ */
-+struct iwl4965_rx_queue {
-+ __le32 *bd;
-+ dma_addr_t dma_addr;
-+ struct iwl4965_rx_mem_buffer pool[RX_QUEUE_SIZE + RX_FREE_BUFFERS];
-+ struct iwl4965_rx_mem_buffer *queue[RX_QUEUE_SIZE];
-+ u32 processed;
-+ u32 read;
-+ u32 write;
-+ u32 free_count;
-+ struct list_head rx_free;
-+ struct list_head rx_used;
-+ int need_update;
-+ spinlock_t lock;
-+};
-+
-+#define IWL_SUPPORTED_RATES_IE_LEN 8
-+
-+#define SCAN_INTERVAL 100
-+
-+#define MAX_A_CHANNELS 252
-+#define MIN_A_CHANNELS 7
-+
-+#define MAX_B_CHANNELS 14
-+#define MIN_B_CHANNELS 1
-+
-+#define STATUS_HCMD_ACTIVE 0 /* host command in progress */
-+#define STATUS_INT_ENABLED 1
-+#define STATUS_RF_KILL_HW 2
-+#define STATUS_RF_KILL_SW 3
-+#define STATUS_INIT 4
-+#define STATUS_ALIVE 5
-+#define STATUS_READY 6
-+#define STATUS_TEMPERATURE 7
-+#define STATUS_GEO_CONFIGURED 8
-+#define STATUS_EXIT_PENDING 9
-+#define STATUS_IN_SUSPEND 10
-+#define STATUS_STATISTICS 11
-+#define STATUS_SCANNING 12
-+#define STATUS_SCAN_ABORTING 13
-+#define STATUS_SCAN_HW 14
-+#define STATUS_POWER_PMI 15
-+#define STATUS_FW_ERROR 16
-+#define STATUS_CONF_PENDING 17
-+
-+#define MAX_TID_COUNT 9
-+
-+#define IWL_INVALID_RATE 0xFF
-+#define IWL_INVALID_VALUE -1
-+
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
-+/**
-+ * struct iwl4965_ht_agg -- aggregation status while waiting for block-ack
-+ * @txq_id: Tx queue used for Tx attempt
-+ * @frame_count: # frames attempted by Tx command
-+ * @wait_for_ba: Expect block-ack before next Tx reply
-+ * @start_idx: Index of 1st Transmit Frame Descriptor (TFD) in Tx window
-+ * @bitmap0: Low order bitmap, one bit for each frame pending ACK in Tx window
-+ * @bitmap1: High order, one bit for each frame pending ACK in Tx window
-+ * @rate_n_flags: Rate at which Tx was attempted
-+ *
-+ * If REPLY_TX indicates that aggregation was attempted, driver must wait
-+ * for block ack (REPLY_COMPRESSED_BA). This struct stores tx reply info
-+ * until block ack arrives.
-+ */
-+struct iwl4965_ht_agg {
-+ u16 txq_id;
-+ u16 frame_count;
-+ u16 wait_for_ba;
-+ u16 start_idx;
-+ u32 bitmap0;
-+ u32 bitmap1;
-+ u32 rate_n_flags;
-+};
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-+
-+struct iwl4965_tid_data {
-+ u16 seq_number;
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
-+ struct iwl4965_ht_agg agg;
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-+};
-+
-+struct iwl4965_hw_key {
-+ enum ieee80211_key_alg alg;
-+ int keylen;
-+ u8 key[32];
-+};
-+
-+union iwl4965_ht_rate_supp {
-+ u16 rates;
-+ struct {
-+ u8 siso_rate;
-+ u8 mimo_rate;
-+ };
-+};
-+
-+#ifdef CONFIG_IWL4965_HT
-+#define CFG_HT_RX_AMPDU_FACTOR_DEF (0x3)
-+#define CFG_HT_MPDU_DENSITY_2USEC (0x5)
-+#define CFG_HT_MPDU_DENSITY_DEF CFG_HT_MPDU_DENSITY_2USEC
-+
-+struct iwl_ht_info {
-+ /* self configuration data */
-+ u8 is_ht;
-+ u8 supported_chan_width;
-+ u16 tx_mimo_ps_mode;
-+ u8 is_green_field;
-+ u8 sgf; /* HT_SHORT_GI_* short guard interval */
-+ u8 max_amsdu_size;
-+ u8 ampdu_factor;
-+ u8 mpdu_density;
-+ u8 supp_mcs_set[16];
-+ /* BSS related data */
-+ u8 control_channel;
-+ u8 extension_chan_offset;
-+ u8 tx_chan_width;
-+ u8 ht_protection;
-+ u8 non_GF_STA_present;
-+};
-+#endif /*CONFIG_IWL4965_HT */
-+
-+#ifdef CONFIG_IWL4965_QOS
-+
-+union iwl4965_qos_capability {
-+ struct {
-+ u8 edca_count:4; /* bit 0-3 */
-+ u8 q_ack:1; /* bit 4 */
-+ u8 queue_request:1; /* bit 5 */
-+ u8 txop_request:1; /* bit 6 */
-+ u8 reserved:1; /* bit 7 */
-+ } q_AP;
-+ struct {
-+ u8 acvo_APSD:1; /* bit 0 */
-+ u8 acvi_APSD:1; /* bit 1 */
-+ u8 ac_bk_APSD:1; /* bit 2 */
-+ u8 ac_be_APSD:1; /* bit 3 */
-+ u8 q_ack:1; /* bit 4 */
-+ u8 max_len:2; /* bit 5-6 */
-+ u8 more_data_ack:1; /* bit 7 */
-+ } q_STA;
-+ u8 val;
-+};
-+
-+/* QoS structures */
-+struct iwl4965_qos_info {
-+ int qos_enable;
-+ int qos_active;
-+ union iwl4965_qos_capability qos_cap;
-+ struct iwl4965_qosparam_cmd def_qos_parm;
-+};
-+#endif /*CONFIG_IWL4965_QOS */
-+
-+#define STA_PS_STATUS_WAKE 0
-+#define STA_PS_STATUS_SLEEP 1
-+
-+struct iwl4965_station_entry {
-+ struct iwl4965_addsta_cmd sta;
-+ struct iwl4965_tid_data tid[MAX_TID_COUNT];
-+ u8 used;
-+ u8 ps_status;
-+ struct iwl4965_hw_key keyinfo;
-+};
-+
-+/* one for each uCode image (inst/data, boot/init/runtime) */
-+struct fw_desc {
-+ void *v_addr; /* access by driver */
-+ dma_addr_t p_addr; /* access by card's busmaster DMA */
-+ u32 len; /* bytes */
-+};
-+
-+/* uCode file layout */
-+struct iwl4965_ucode {
-+ __le32 ver; /* major/minor/subminor */
-+ __le32 inst_size; /* bytes of runtime instructions */
-+ __le32 data_size; /* bytes of runtime data */
-+ __le32 init_size; /* bytes of initialization instructions */
-+ __le32 init_data_size; /* bytes of initialization data */
-+ __le32 boot_size; /* bytes of bootstrap instructions */
-+ u8 data[0]; /* data in same order as "size" elements */
-+};
-+
-+#define IWL_IBSS_MAC_HASH_SIZE 32
-+
-+struct iwl4965_ibss_seq {
-+ u8 mac[ETH_ALEN];
-+ u16 seq_num;
-+ u16 frag_num;
-+ unsigned long packet_time;
-+ struct list_head list;
-+};
-+
-+/**
-+ * struct iwl4965_driver_hw_info
-+ * @max_txq_num: Max # Tx queues supported
-+ * @ac_queue_count: # Tx queues for EDCA Access Categories (AC)
-+ * @tx_cmd_len: Size of Tx command (but not including frame itself)
-+ * @max_rxq_size: Max # Rx frames in Rx queue (must be power-of-2)
-+ * @rx_buf_size: Size of each Rx buffer, in bytes
-+ * @max_rxq_log: Log-base-2 of max_rxq_size
-+ * @max_stations: Max number of entries in the station table
-+ * @bcast_sta_id: Station table index reserved for the broadcast station
-+ * @shared_virt: Pointer to driver/uCode shared Tx Byte Counts and Rx status
-+ * @shared_phys: Physical Pointer to Tx Byte Counts and Rx status
-+ */
-+struct iwl4965_driver_hw_info {
-+ u16 max_txq_num;
-+ u16 ac_queue_count;
-+ u16 tx_cmd_len;
-+ u16 max_rxq_size;
-+ u32 rx_buf_size;
-+ u32 max_pkt_size;
-+ u16 max_rxq_log;
-+ u8 max_stations;
-+ u8 bcast_sta_id;
-+ void *shared_virt;
-+ dma_addr_t shared_phys;
-+};
-+
-+#define HT_SHORT_GI_20MHZ_ONLY (1 << 0)
-+#define HT_SHORT_GI_40MHZ_ONLY (1 << 1)
-+
-+
-+#define IWL_RX_HDR(x) ((struct iwl4965_rx_frame_hdr *)(\
-+ x->u.rx_frame.stats.payload + \
-+ x->u.rx_frame.stats.phy_count))
-+#define IWL_RX_END(x) ((struct iwl4965_rx_frame_end *)(\
-+ IWL_RX_HDR(x)->payload + \
-+ le16_to_cpu(IWL_RX_HDR(x)->len)))
-+#define IWL_RX_STATS(x) (&x->u.rx_frame.stats)
-+#define IWL_RX_DATA(x) (IWL_RX_HDR(x)->payload)
-+
-+
-+/******************************************************************************
-+ *
-+ * Functions implemented in iwl-base.c which are forward declared here
-+ * for use by iwl-*.c
-+ *
-+ *****************************************************************************/
-+struct iwl4965_addsta_cmd;
-+extern int iwl4965_send_add_station(struct iwl4965_priv *priv,
-+ struct iwl4965_addsta_cmd *sta, u8 flags);
-+extern u8 iwl4965_add_station_flags(struct iwl4965_priv *priv, const u8 *addr,
-+ int is_ap, u8 flags, void *ht_data);
-+extern int iwl4965_is_network_packet(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *header);
-+extern int iwl4965_power_init_handle(struct iwl4965_priv *priv);
-+extern int iwl4965_eeprom_init(struct iwl4965_priv *priv);
-+#ifdef CONFIG_IWL4965_DEBUG
-+extern void iwl4965_report_frame(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_packet *pkt,
-+ struct ieee80211_hdr *header, int group100);
-+#else
-+static inline void iwl4965_report_frame(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_packet *pkt,
-+ struct ieee80211_hdr *header,
-+ int group100) {}
-+#endif
-+extern void iwl4965_handle_data_packet_monitor(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb,
-+ void *data, short len,
-+ struct ieee80211_rx_status *stats,
-+ u16 phy_flags);
-+extern int iwl4965_is_duplicate_packet(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *header);
-+extern int iwl4965_rx_queue_alloc(struct iwl4965_priv *priv);
-+extern void iwl4965_rx_queue_reset(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_queue *rxq);
-+extern int iwl4965_calc_db_from_ratio(int sig_ratio);
-+extern int iwl4965_calc_sig_qual(int rssi_dbm, int noise_dbm);
-+extern int iwl4965_tx_queue_init(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq, int count, u32 id);
-+extern void iwl4965_rx_replenish(void *data);
-+extern void iwl4965_tx_queue_free(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq);
-+extern int iwl4965_send_cmd_pdu(struct iwl4965_priv *priv, u8 id, u16 len,
-+ const void *data);
-+extern int __must_check iwl4965_send_cmd(struct iwl4965_priv *priv,
-+ struct iwl4965_host_cmd *cmd);
-+extern unsigned int iwl4965_fill_beacon_frame(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *hdr,
-+ const u8 *dest, int left);
-+extern int iwl4965_rx_queue_update_write_ptr(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_queue *q);
-+extern int iwl4965_send_statistics_request(struct iwl4965_priv *priv);
-+extern void iwl4965_set_decrypted_flag(struct iwl4965_priv *priv, struct sk_buff *skb,
-+ u32 decrypt_res,
-+ struct ieee80211_rx_status *stats);
-+extern __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr);
-+
-+extern const u8 iwl4965_broadcast_addr[ETH_ALEN];
-+
-+/*
-+ * TODO: Currently used by iwl-3945-rs; restructure so that it no longer
-+ * needs to call this.
-+ */
-+extern u8 iwl4965_sync_station(struct iwl4965_priv *priv, int sta_id,
-+ u16 tx_rate, u8 flags);
-+
-+/******************************************************************************
-+ *
-+ * Functions implemented in iwl-[34]*.c which are forward declared here
-+ * for use by iwl-base.c
-+ *
-+ * NOTE: The implementation of these functions are hardware specific
-+ * which is why they are in the hardware specific files (vs. iwl-base.c)
-+ *
-+ * Naming convention --
-+ * iwl4965_ <-- It's part of iwlwifi common code (not hardware specific)
-+ * iwl4965_hw_ <-- Hardware specific (implemented in iwl-XXXX.c by all HW)
-+ * iwlXXXX_ <-- Hardware specific (implemented in iwl-XXXX.c for XXXX)
-+ * iwl4965_bg_ <-- Called from work queue context
-+ * iwl4965_mac_ <-- mac80211 callback
-+ *
-+ ****************************************************************************/
-+extern void iwl4965_hw_rx_handler_setup(struct iwl4965_priv *priv);
-+extern void iwl4965_hw_setup_deferred_work(struct iwl4965_priv *priv);
-+extern void iwl4965_hw_cancel_deferred_work(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_rxq_stop(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_set_hw_setting(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_nic_init(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_nic_stop_master(struct iwl4965_priv *priv);
-+extern void iwl4965_hw_txq_ctx_free(struct iwl4965_priv *priv);
-+extern void iwl4965_hw_txq_ctx_stop(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_nic_reset(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_txq_attach_buf_to_tfd(struct iwl4965_priv *priv, void *tfd,
-+ dma_addr_t addr, u16 len);
-+extern int iwl4965_hw_txq_free_tfd(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq);
-+extern int iwl4965_hw_get_temperature(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_tx_queue_init(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq);
-+extern unsigned int iwl4965_hw_get_beacon_cmd(struct iwl4965_priv *priv,
-+ struct iwl4965_frame *frame, u8 rate);
-+extern int iwl4965_hw_get_rx_read(struct iwl4965_priv *priv);
-+extern void iwl4965_hw_build_tx_cmd_rate(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd,
-+ struct ieee80211_tx_control *ctrl,
-+ struct ieee80211_hdr *hdr,
-+ int sta_id, int tx_id);
-+extern int iwl4965_hw_reg_send_txpower(struct iwl4965_priv *priv);
-+extern int iwl4965_hw_reg_set_txpower(struct iwl4965_priv *priv, s8 power);
-+extern void iwl4965_hw_rx_statistics(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb);
-+extern void iwl4965_disable_events(struct iwl4965_priv *priv);
-+extern int iwl4965_get_temperature(const struct iwl4965_priv *priv);
-+
-+/**
-+ * iwl4965_hw_find_station - Find station id for a given BSSID
-+ * @bssid: MAC address of station ID to find
-+ *
-+ * NOTE: This should not be hardware specific but the code has
-+ * not yet been merged into a single common layer for managing the
-+ * station tables.
-+ */
-+extern u8 iwl4965_hw_find_station(struct iwl4965_priv *priv, const u8 *bssid);
-+
-+extern int iwl4965_hw_channel_switch(struct iwl4965_priv *priv, u16 channel);
-+extern int iwl4965_tx_queue_reclaim(struct iwl4965_priv *priv, int txq_id, int index);
-+
-+struct iwl4965_priv;
-
- /*
- * Forward declare iwl-4965.c functions for iwl-base.c
- */
--extern int iwl_eeprom_aqcuire_semaphore(struct iwl_priv *priv);
--extern void iwl_eeprom_release_semaphore(struct iwl_priv *priv);
-+extern int iwl4965_eeprom_acquire_semaphore(struct iwl4965_priv *priv);
-+extern void iwl4965_eeprom_release_semaphore(struct iwl4965_priv *priv);
-
--extern int iwl4965_tx_queue_update_wr_ptr(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq,
-+extern int iwl4965_tx_queue_update_wr_ptr(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq,
- u16 byte_cnt);
--extern void iwl4965_add_station(struct iwl_priv *priv, const u8 *addr,
-+extern void iwl4965_add_station(struct iwl4965_priv *priv, const u8 *addr,
- int is_ap);
--extern void iwl4965_set_rxon_ht(struct iwl_priv *priv,
-- struct sta_ht_info *ht_info);
+-/******************************************************************************
+- * (11)
+- * Rx Calibration Commands:
+- *
+- *****************************************************************************/
-
--extern void iwl4965_set_rxon_chain(struct iwl_priv *priv);
--extern int iwl4965_tx_cmd(struct iwl_priv *priv, struct iwl_cmd *out_cmd,
-- u8 sta_id, dma_addr_t txcmd_phys,
-- struct ieee80211_hdr *hdr, u8 hdr_len,
-- struct ieee80211_tx_control *ctrl, void *sta_in);
--extern int iwl4965_init_hw_rates(struct iwl_priv *priv,
-- struct ieee80211_rate *rates);
--extern int iwl4965_alive_notify(struct iwl_priv *priv);
--extern void iwl4965_update_rate_scaling(struct iwl_priv *priv, u8 mode);
--extern void iwl4965_set_ht_add_station(struct iwl_priv *priv, u8 index);
+-#define PHY_CALIBRATE_DIFF_GAIN_CMD (7)
+-#define HD_TABLE_SIZE (11)
-
--extern void iwl4965_chain_noise_reset(struct iwl_priv *priv);
--extern void iwl4965_init_sensitivity(struct iwl_priv *priv, u8 flags,
-+extern void iwl4965_set_rxon_chain(struct iwl4965_priv *priv);
-+extern int iwl4965_alive_notify(struct iwl4965_priv *priv);
-+extern void iwl4965_update_rate_scaling(struct iwl4965_priv *priv, u8 mode);
-+extern void iwl4965_chain_noise_reset(struct iwl4965_priv *priv);
-+extern void iwl4965_init_sensitivity(struct iwl4965_priv *priv, u8 flags,
- u8 force);
--extern int iwl4965_set_fat_chan_info(struct iwl_priv *priv, int phymode,
-+extern int iwl4965_set_fat_chan_info(struct iwl4965_priv *priv, int phymode,
- u16 channel,
-- const struct iwl_eeprom_channel *eeprom_ch,
-+ const struct iwl4965_eeprom_channel *eeprom_ch,
- u8 fat_extension_channel);
--extern void iwl4965_rf_kill_ct_config(struct iwl_priv *priv);
-+extern void iwl4965_rf_kill_ct_config(struct iwl4965_priv *priv);
-
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
--extern int iwl_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da,
-+#ifdef CONFIG_IWL4965_HT
-+extern void iwl4965_init_ht_hw_capab(struct ieee80211_ht_info *ht_info,
-+ int mode);
-+extern void iwl4965_set_rxon_ht(struct iwl4965_priv *priv,
-+ struct iwl_ht_info *ht_info);
-+extern void iwl4965_set_ht_add_station(struct iwl4965_priv *priv, u8 index,
-+ struct ieee80211_ht_info *sta_ht_inf);
-+extern int iwl4965_mac_ampdu_action(struct ieee80211_hw *hw,
-+ enum ieee80211_ampdu_mlme_action action,
-+ const u8 *addr, u16 tid, u16 ssn);
-+#ifdef CONFIG_IWL4965_HT_AGG
-+extern int iwl4965_mac_ht_tx_agg_start(struct ieee80211_hw *hw, u8 *da,
- u16 tid, u16 *start_seq_num);
--extern int iwl_mac_ht_rx_agg_start(struct ieee80211_hw *hw, u8 *da,
-- u16 tid, u16 start_seq_num);
--extern int iwl_mac_ht_rx_agg_stop(struct ieee80211_hw *hw, u8 *da,
-+extern int iwl4965_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da,
- u16 tid, int generator);
--extern int iwl_mac_ht_tx_agg_stop(struct ieee80211_hw *hw, u8 *da,
-- u16 tid, int generator);
--extern void iwl4965_turn_off_agg(struct iwl_priv *priv, u8 tid);
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /*CONFIG_IWLWIFI_HT */
-+extern void iwl4965_turn_off_agg(struct iwl4965_priv *priv, u8 tid);
-+extern void iwl4965_tl_get_stats(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *hdr);
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /*CONFIG_IWL4965_HT */
- /* Structures, enum, and defines specific to the 4965 */
-
- #define IWL4965_KW_SIZE 0x1000 /*4k */
-
--struct iwl_kw {
-+struct iwl4965_kw {
- dma_addr_t dma_addr;
- void *v_addr;
- size_t size;
-@@ -120,21 +833,9 @@ struct iwl_kw {
- #define NRG_NUM_PREV_STAT_L 20
- #define NUM_RX_CHAINS (3)
-
--#define TX_POWER_IWL_ILLEGAL_VDET -100000
- #define TX_POWER_IWL_ILLEGAL_VOLTAGE -10000
--#define TX_POWER_IWL_CLOSED_LOOP_MIN_POWER 18
--#define TX_POWER_IWL_CLOSED_LOOP_MAX_POWER 34
--#define TX_POWER_IWL_VDET_SLOPE_BELOW_NOMINAL 17
--#define TX_POWER_IWL_VDET_SLOPE_ABOVE_NOMINAL 20
--#define TX_POWER_IWL_NOMINAL_POWER 26
--#define TX_POWER_IWL_CLOSED_LOOP_ITERATION_LIMIT 1
--#define TX_POWER_IWL_VOLTAGE_CODES_PER_03V 7
--#define TX_POWER_IWL_DEGREES_PER_VDET_CODE 11
--#define IWL_TX_POWER_MAX_NUM_PA_MEASUREMENTS 1
--#define IWL_TX_POWER_CCK_COMPENSATION_B_STEP (9)
--#define IWL_TX_POWER_CCK_COMPENSATION_C_STEP (5)
+-struct iwl_sensitivity_cmd {
+- __le16 control;
+- __le16 table[HD_TABLE_SIZE];
+-} __attribute__ ((packed));
-
--struct iwl_traffic_load {
-+
-+struct iwl4965_traffic_load {
- unsigned long time_stamp;
- u32 packet_count[TID_QUEUE_MAX_SIZE];
- u8 queue_count;
-@@ -142,8 +843,13 @@ struct iwl_traffic_load {
- u32 total;
- };
-
--#ifdef CONFIG_IWLWIFI_HT_AGG
--struct iwl_agg_control {
-+#ifdef CONFIG_IWL4965_HT_AGG
-+/**
-+ * struct iwl4965_agg_control
-+ * @requested_ba: bit map of tids requesting aggregation/block-ack
-+ * @granted_ba: bit map of tids granted aggregation/block-ack
-+ */
-+struct iwl4965_agg_control {
- unsigned long next_retry;
- u32 wait_for_agg_status;
- u32 tid_retry;
-@@ -152,13 +858,13 @@ struct iwl_agg_control {
- u8 auto_agg;
- u32 tid_traffic_load_threshold;
- u32 ba_timeout;
-- struct iwl_traffic_load traffic_load[TID_MAX_LOAD_COUNT];
-+ struct iwl4965_traffic_load traffic_load[TID_MAX_LOAD_COUNT];
- };
--#endif /*CONFIG_IWLWIFI_HT_AGG */
-+#endif /*CONFIG_IWL4965_HT_AGG */
-
--struct iwl_lq_mngr {
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- struct iwl_agg_control agg_ctrl;
-+struct iwl4965_lq_mngr {
-+#ifdef CONFIG_IWL4965_HT_AGG
-+ struct iwl4965_agg_control agg_ctrl;
- #endif
- spinlock_t lock;
- s32 max_window_size;
-@@ -179,22 +885,6 @@ struct iwl_lq_mngr {
- #define CAL_NUM_OF_BEACONS 20
- #define MAXIMUM_ALLOWED_PATHLOSS 15
-
--/* Param table within SENSITIVITY_CMD */
--#define HD_MIN_ENERGY_CCK_DET_INDEX (0)
--#define HD_MIN_ENERGY_OFDM_DET_INDEX (1)
--#define HD_AUTO_CORR32_X1_TH_ADD_MIN_INDEX (2)
--#define HD_AUTO_CORR32_X1_TH_ADD_MIN_MRC_INDEX (3)
--#define HD_AUTO_CORR40_X4_TH_ADD_MIN_MRC_INDEX (4)
--#define HD_AUTO_CORR32_X4_TH_ADD_MIN_INDEX (5)
--#define HD_AUTO_CORR32_X4_TH_ADD_MIN_MRC_INDEX (6)
--#define HD_BARKER_CORR_TH_ADD_MIN_INDEX (7)
--#define HD_BARKER_CORR_TH_ADD_MIN_MRC_INDEX (8)
--#define HD_AUTO_CORR40_X4_TH_ADD_MIN_INDEX (9)
--#define HD_OFDM_ENERGY_TH_IN_INDEX (10)
+-struct iwl_calibration_cmd {
+- u8 opCode;
+- u8 flags;
+- __le16 reserved;
+- s8 diff_gain_a;
+- s8 diff_gain_b;
+- s8 diff_gain_c;
+- u8 reserved1;
+-} __attribute__ ((packed));
-
--#define SENSITIVITY_CMD_CONTROL_DEFAULT_TABLE __constant_cpu_to_le16(0)
--#define SENSITIVITY_CMD_CONTROL_WORK_TABLE __constant_cpu_to_le16(1)
+-/******************************************************************************
+- * (12)
+- * Miscellaneous Commands:
+- *
+- *****************************************************************************/
-
- #define CHAIN_NOISE_MAX_DELTA_GAIN_CODE 3
-
- #define MAX_FA_OFDM 50
-@@ -222,8 +912,6 @@ struct iwl_lq_mngr {
- #define AUTO_CORR_STEP_CCK 3
- #define AUTO_CORR_MAX_TH_CCK 160
-
--#define NRG_ALG 0
--#define AUTO_CORR_ALG 1
- #define NRG_DIFF 2
- #define NRG_STEP_CCK 2
- #define NRG_MARGIN 8
-@@ -239,24 +927,24 @@ struct iwl_lq_mngr {
- #define IN_BAND_FILTER 0xFF
- #define MIN_AVERAGE_NOISE_MAX_VALUE 0xFFFFFFFF
-
--enum iwl_false_alarm_state {
-+enum iwl4965_false_alarm_state {
- IWL_FA_TOO_MANY = 0,
- IWL_FA_TOO_FEW = 1,
- IWL_FA_GOOD_RANGE = 2,
- };
-
--enum iwl_chain_noise_state {
-+enum iwl4965_chain_noise_state {
- IWL_CHAIN_NOISE_ALIVE = 0, /* must be 0 */
- IWL_CHAIN_NOISE_ACCUMULATE = 1,
- IWL_CHAIN_NOISE_CALIBRATED = 2,
- };
-
--enum iwl_sensitivity_state {
-+enum iwl4965_sensitivity_state {
- IWL_SENS_CALIB_ALLOWED = 0,
- IWL_SENS_CALIB_NEED_REINIT = 1,
- };
-
--enum iwl_calib_enabled_state {
-+enum iwl4965_calib_enabled_state {
- IWL_CALIB_DISABLED = 0, /* must be 0 */
- IWL_CALIB_ENABLED = 1,
- };
-@@ -271,7 +959,7 @@ struct statistics_general_data {
- };
-
- /* Sensitivity calib data */
--struct iwl_sensitivity_data {
-+struct iwl4965_sensitivity_data {
- u32 auto_corr_ofdm;
- u32 auto_corr_ofdm_mrc;
- u32 auto_corr_ofdm_x1;
-@@ -300,7 +988,7 @@ struct iwl_sensitivity_data {
- };
-
- /* Chain noise (differential Rx gain) calib data */
--struct iwl_chain_noise_data {
-+struct iwl4965_chain_noise_data {
- u8 state;
- u16 beacon_count;
- u32 chain_noise_a;
-@@ -314,28 +1002,323 @@ struct iwl_chain_noise_data {
- u8 radio_write;
- };
-
--/* IWL4965 */
--#define RATE_MCS_CODE_MSK 0x7
--#define RATE_MCS_MIMO_POS 3
--#define RATE_MCS_MIMO_MSK 0x8
--#define RATE_MCS_HT_DUP_POS 5
--#define RATE_MCS_HT_DUP_MSK 0x20
--#define RATE_MCS_FLAGS_POS 8
--#define RATE_MCS_HT_POS 8
--#define RATE_MCS_HT_MSK 0x100
--#define RATE_MCS_CCK_POS 9
--#define RATE_MCS_CCK_MSK 0x200
--#define RATE_MCS_GF_POS 10
--#define RATE_MCS_GF_MSK 0x400
+-/*
+- * LEDs Command & Response
+- * REPLY_LEDS_CMD = 0x48 (command, has simple generic response)
+- *
+- * For each of 3 possible LEDs (Activity/Link/Tech, selected by "id" field),
+- * this command turns it on or off, or sets up a periodic blinking cycle.
+- */
+-struct iwl_led_cmd {
+- __le32 interval; /* "interval" in uSec */
+- u8 id; /* 1: Activity, 2: Link, 3: Tech */
+- u8 off; /* # intervals off while blinking;
+- * "0", with >0 "on" value, turns LED on */
+- u8 on; /* # intervals on while blinking;
+- * "0", regardless of "off", turns LED off */
+- u8 reserved;
+-} __attribute__ ((packed));
-
--#define RATE_MCS_FAT_POS 11
--#define RATE_MCS_FAT_MSK 0x800
--#define RATE_MCS_DUP_POS 12
--#define RATE_MCS_DUP_MSK 0x1000
--#define RATE_MCS_SGI_POS 13
--#define RATE_MCS_SGI_MSK 0x2000
+-/******************************************************************************
+- * (13)
+- * Union of all expected notifications/responses:
+- *
+- *****************************************************************************/
-
--#define EEPROM_SEM_TIMEOUT 10
--#define EEPROM_SEM_RETRY_LIMIT 1000
+-struct iwl_rx_packet {
+- __le32 len;
+- struct iwl_cmd_header hdr;
+- union {
+- struct iwl_alive_resp alive_frame;
+- struct iwl_rx_frame rx_frame;
+- struct iwl_tx_resp tx_resp;
+- struct iwl_spectrum_notification spectrum_notif;
+- struct iwl_csa_notification csa_notif;
+- struct iwl_error_resp err_resp;
+- struct iwl_card_state_notif card_state_notif;
+- struct iwl_beacon_notif beacon_status;
+- struct iwl_add_sta_resp add_sta;
+- struct iwl_sleep_notification sleep_notif;
+- struct iwl_spectrum_resp spectrum;
+- struct iwl_notif_statistics stats;
+-#if IWL == 4965
+- struct iwl_compressed_ba_resp compressed_ba;
+- struct iwl_missed_beacon_notif missed_beacon;
+-#endif
+- __le32 status;
+- u8 raw[0];
+- } u;
+-} __attribute__ ((packed));
-
--#endif /* __iwl_4965_h__ */
-+#define EEPROM_SEM_TIMEOUT 10 /* milliseconds */
-+#define EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */
-+
-+
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
-+
-+enum {
-+ MEASUREMENT_READY = (1 << 0),
-+ MEASUREMENT_ACTIVE = (1 << 1),
-+};
-+
-+#endif
-+
-+struct iwl4965_priv {
-+
-+ /* ieee device used by generic ieee processing code */
-+ struct ieee80211_hw *hw;
-+ struct ieee80211_channel *ieee_channels;
-+ struct ieee80211_rate *ieee_rates;
-+
-+ /* temporary frame storage list */
-+ struct list_head free_frames;
-+ int frames_count;
-+
-+ u8 phymode;
-+ int alloc_rxb_skb;
-+ bool add_radiotap;
-+
-+ void (*rx_handlers[REPLY_MAX])(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb);
-+
-+ const struct ieee80211_hw_mode *modes;
-+
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
-+ /* spectrum measurement report caching */
-+ struct iwl4965_spectrum_notification measure_report;
-+ u8 measurement_status;
-+#endif
-+ /* ucode beacon time */
-+ u32 ucode_beacon_time;
-+
-+ /* we allocate array of iwl4965_channel_info for NIC's valid channels.
-+ * Access via channel # using indirect index array */
-+ struct iwl4965_channel_info *channel_info; /* channel info array */
-+ u8 channel_count; /* # of channels */
-+
-+ /* each calibration channel group in the EEPROM has a derived
-+ * clip setting for each rate. */
-+ const struct iwl4965_clip_group clip_groups[5];
-+
-+ /* thermal calibration */
-+ s32 temperature; /* degrees Kelvin */
-+ s32 last_temperature;
-+
-+ /* Scan related variables */
-+ unsigned long last_scan_jiffies;
-+ unsigned long next_scan_jiffies;
-+ unsigned long scan_start;
-+ unsigned long scan_pass_start;
-+ unsigned long scan_start_tsf;
-+ int scan_bands;
-+ int one_direct_scan;
-+ u8 direct_ssid_len;
-+ u8 direct_ssid[IW_ESSID_MAX_SIZE];
-+ struct iwl4965_scan_cmd *scan;
-+ u8 only_active_channel;
-+
-+ /* spinlock */
-+ spinlock_t lock; /* protect general shared data */
-+ spinlock_t hcmd_lock; /* protect hcmd */
-+ struct mutex mutex;
-+
-+ /* basic pci-network driver stuff */
-+ struct pci_dev *pci_dev;
-+
-+ /* pci hardware address support */
-+ void __iomem *hw_base;
-+
-+ /* uCode images, save to reload in case of failure */
-+ struct fw_desc ucode_code; /* runtime inst */
-+ struct fw_desc ucode_data; /* runtime data original */
-+ struct fw_desc ucode_data_backup; /* runtime data save/restore */
-+ struct fw_desc ucode_init; /* initialization inst */
-+ struct fw_desc ucode_init_data; /* initialization data */
-+ struct fw_desc ucode_boot; /* bootstrap inst */
-+
-+
-+ struct iwl4965_rxon_time_cmd rxon_timing;
-+
-+ /* We declare this const so it can only be
-+ * changed via explicit cast within the
-+ * routines that actually update the physical
-+ * hardware */
-+ const struct iwl4965_rxon_cmd active_rxon;
-+ struct iwl4965_rxon_cmd staging_rxon;
-+
-+ int error_recovering;
-+ struct iwl4965_rxon_cmd recovery_rxon;
-+
-+ /* 1st responses from initialize and runtime uCode images.
-+ * 4965's initialize alive response contains some calibration data. */
-+ struct iwl4965_init_alive_resp card_alive_init;
-+ struct iwl4965_alive_resp card_alive;
-+
-+#ifdef LED
-+ /* LED related variables */
-+ struct iwl4965_activity_blink activity;
-+ unsigned long led_packets;
-+ int led_state;
-+#endif
-+
-+ u16 active_rate;
-+ u16 active_rate_basic;
-+
-+ u8 call_post_assoc_from_beacon;
-+ u8 assoc_station_added;
-+ u8 use_ant_b_for_management_frame; /* Tx antenna selection */
-+ u8 valid_antenna; /* Bit mask of antennas actually connected */
-+#ifdef CONFIG_IWL4965_SENSITIVITY
-+ struct iwl4965_sensitivity_data sensitivity_data;
-+ struct iwl4965_chain_noise_data chain_noise_data;
-+ u8 start_calib;
-+ __le16 sensitivity_tbl[HD_TABLE_SIZE];
-+#endif /*CONFIG_IWL4965_SENSITIVITY*/
-+
-+#ifdef CONFIG_IWL4965_HT
-+ struct iwl_ht_info current_ht_config;
-+#endif
-+ u8 last_phy_res[100];
-+
-+ /* Rate scaling data */
-+ struct iwl4965_lq_mngr lq_mngr;
-+
-+ /* Tx retry settings */
-+ s8 data_retry_limit;
-+ u8 retry_rate;
-+
-+ wait_queue_head_t wait_command_queue;
-+
-+ int activity_timer_active;
-+
-+ /* Rx and Tx DMA processing queues */
-+ struct iwl4965_rx_queue rxq;
-+ struct iwl4965_tx_queue txq[IWL_MAX_NUM_QUEUES];
-+ unsigned long txq_ctx_active_msk;
-+ struct iwl4965_kw kw; /* keep warm address */
-+ u32 scd_base_addr; /* scheduler sram base address */
-+
-+ unsigned long status;
-+ u32 config;
-+
-+ int last_rx_rssi; /* From Rx packet statistics */
-+ int last_rx_noise; /* From beacon statistics */
-+
-+ struct iwl4965_power_mgr power_data;
-+
-+ struct iwl4965_notif_statistics statistics;
-+ unsigned long last_statistics_time;
-+
-+ /* context information */
-+ u8 essid[IW_ESSID_MAX_SIZE];
-+ u8 essid_len;
-+ u16 rates_mask;
-+
-+ u32 power_mode;
-+ u32 antenna;
-+ u8 bssid[ETH_ALEN];
-+ u16 rts_threshold;
-+ u8 mac_addr[ETH_ALEN];
-+
-+ /*station table variables */
-+ spinlock_t sta_lock;
-+ int num_stations;
-+ struct iwl4965_station_entry stations[IWL_STATION_COUNT];
-+
-+ /* Indication if ieee80211_ops->open has been called */
-+ int is_open;
-+
-+ u8 mac80211_registered;
-+ int is_abg;
-+
-+ u32 notif_missed_beacons;
-+
-+ /* Rx'd packet timing information */
-+ u32 last_beacon_time;
-+ u64 last_tsf;
-+
-+ /* Duplicate packet detection */
-+ u16 last_seq_num;
-+ u16 last_frag_num;
-+ unsigned long last_packet_time;
-+
-+ /* Hash table for finding stations in IBSS network */
-+ struct list_head ibss_mac_hash[IWL_IBSS_MAC_HASH_SIZE];
-+
-+ /* eeprom */
-+ struct iwl4965_eeprom eeprom;
-+
-+ int iw_mode;
-+
-+ struct sk_buff *ibss_beacon;
-+
-+ /* Last Rx'd beacon timestamp */
-+ u32 timestamp0;
-+ u32 timestamp1;
-+ u16 beacon_int;
-+ struct iwl4965_driver_hw_info hw_setting;
-+ struct ieee80211_vif *vif;
-+
-+ /* Current association information needed to configure the
-+ * hardware */
-+ u16 assoc_id;
-+ u16 assoc_capability;
-+ u8 ps_mode;
-+
-+#ifdef CONFIG_IWL4965_QOS
-+ struct iwl4965_qos_info qos_data;
-+#endif /*CONFIG_IWL4965_QOS */
-+
-+ struct workqueue_struct *workqueue;
-+
-+ struct work_struct up;
-+ struct work_struct restart;
-+ struct work_struct calibrated_work;
-+ struct work_struct scan_completed;
-+ struct work_struct rx_replenish;
-+ struct work_struct rf_kill;
-+ struct work_struct abort_scan;
-+ struct work_struct update_link_led;
-+ struct work_struct auth_work;
-+ struct work_struct report_work;
-+ struct work_struct request_scan;
-+ struct work_struct beacon_update;
-+
-+ struct tasklet_struct irq_tasklet;
-+
-+ struct delayed_work init_alive_start;
-+ struct delayed_work alive_start;
-+ struct delayed_work activity_timer;
-+ struct delayed_work thermal_periodic;
-+ struct delayed_work gather_stats;
-+ struct delayed_work scan_check;
-+ struct delayed_work post_associate;
-+
-+#define IWL_DEFAULT_TX_POWER 0x0F
-+ s8 user_txpower_limit;
-+ s8 max_channel_txpower_limit;
-+
-+#ifdef CONFIG_PM
-+ u32 pm_state[16];
-+#endif
-+
-+#ifdef CONFIG_IWL4965_DEBUG
-+ /* debugging info */
-+ u32 framecnt_to_us;
-+ atomic_t restrict_refcnt;
-+#endif
-+
-+ struct work_struct txpower_work;
-+#ifdef CONFIG_IWL4965_SENSITIVITY
-+ struct work_struct sensitivity_work;
-+#endif
-+ struct work_struct statistics_work;
-+ struct timer_list statistics_periodic;
-+
-+#ifdef CONFIG_IWL4965_HT_AGG
-+ struct work_struct agg_work;
-+#endif
-+}; /*iwl4965_priv */
-+
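The struct above keeps two copies of the RXON configuration: a writable `staging_rxon` and a `const`-qualified `active_rxon` that, per the driver's own comment, is only updated via explicit cast inside the routines that touch the hardware. A minimal userland sketch of that staging/commit pattern (the `rxon_cfg` type and field names here are illustrative, not the driver's):

```c
#include <string.h>

/* Hypothetical stand-in for iwl4965_rxon_cmd; not the driver's layout. */
struct rxon_cfg {
	unsigned int channel;
	unsigned int filter_flags;
};

struct dev_state {
	/* const discourages stray writes; commit_rxon() is the one
	 * place that casts it away, mirroring the driver's comment. */
	const struct rxon_cfg active;
	struct rxon_cfg staging;
};

/* Copy the staged config over the active one in a single step. */
static void commit_rxon(struct dev_state *s)
{
	memcpy((void *)&s->active, &s->staging, sizeof(s->active));
}
```

Callers edit `staging` freely and call `commit_rxon()` only once the hardware has accepted the new configuration, so `active` always reflects what the device is actually running.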
-+static inline int iwl4965_is_associated(struct iwl4965_priv *priv)
-+{
-+ return (priv->active_rxon.filter_flags & RXON_FILTER_ASSOC_MSK) ? 1 : 0;
-+}
-+
-+static inline int is_channel_valid(const struct iwl4965_channel_info *ch_info)
-+{
-+ if (ch_info == NULL)
-+ return 0;
-+ return (ch_info->flags & EEPROM_CHANNEL_VALID) ? 1 : 0;
-+}
-+
-+static inline int is_channel_narrow(const struct iwl4965_channel_info *ch_info)
-+{
-+ return (ch_info->flags & EEPROM_CHANNEL_NARROW) ? 1 : 0;
-+}
-+
-+static inline int is_channel_radar(const struct iwl4965_channel_info *ch_info)
-+{
-+ return (ch_info->flags & EEPROM_CHANNEL_RADAR) ? 1 : 0;
-+}
-+
-+static inline u8 is_channel_a_band(const struct iwl4965_channel_info *ch_info)
-+{
-+ return ch_info->phymode == MODE_IEEE80211A;
-+}
-+
-+static inline u8 is_channel_bg_band(const struct iwl4965_channel_info *ch_info)
-+{
-+ return ((ch_info->phymode == MODE_IEEE80211B) ||
-+ (ch_info->phymode == MODE_IEEE80211G));
-+}
-+
-+static inline int is_channel_passive(const struct iwl4965_channel_info *ch)
-+{
-+ return (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) ? 1 : 0;
-+}
-+
-+static inline int is_channel_ibss(const struct iwl4965_channel_info *ch)
-+{
-+ return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 1 : 0;
-+}
-+
-+extern const struct iwl4965_channel_info *iwl4965_get_channel_info(
-+ const struct iwl4965_priv *priv, int phymode, u16 channel);
-+
-+/* Requires full declaration of iwl4965_priv before including */
-+#include "iwl-4965-io.h"
-+
-+#endif /* __iwl4965_4965_h__ */
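The `is_channel_*` helpers in the header above are thin tests against per-channel EEPROM flag bits. A self-contained sketch of the same idiom, with flag values matching the `EEPROM_CHANNEL_*` enum that appears later in this patch (the `channel_info` struct is a simplified stand-in):

```c
#include <stddef.h>

/* Flag bits as listed in the iwl-eeprom.h enum removed below. */
#define EEPROM_CHANNEL_VALID  (1 << 0)  /* usable for this SKU/geo */
#define EEPROM_CHANNEL_IBSS   (1 << 1)  /* usable as an IBSS channel */
#define EEPROM_CHANNEL_ACTIVE (1 << 3)  /* active scanning allowed */
#define EEPROM_CHANNEL_RADAR  (1 << 4)  /* radar detection required */

struct channel_info {
	unsigned char flags;
};

static int is_channel_valid(const struct channel_info *ch)
{
	if (ch == NULL)
		return 0;
	return (ch->flags & EEPROM_CHANNEL_VALID) ? 1 : 0;
}

/* "Passive" means active scanning is NOT allowed on the channel. */
static int is_channel_passive(const struct channel_info *ch)
{
	return (ch->flags & EEPROM_CHANNEL_ACTIVE) ? 0 : 1;
}
```

Note that only `is_channel_valid()` guards against a NULL pointer; the other helpers in the header assume the caller already validated the channel lookup.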
-diff --git a/drivers/net/wireless/iwlwifi/iwl-channel.h b/drivers/net/wireless/iwlwifi/iwl-channel.h
+-#define IWL_RX_FRAME_SIZE (4 + sizeof(struct iwl_rx_frame))
+-
+-#endif /* __iwl_commands_h__ */
+diff --git a/drivers/net/wireless/iwlwifi/iwl-debug.h b/drivers/net/wireless/iwlwifi/iwl-debug.h
deleted file mode 100644
-index 023c3f2..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-channel.h
+index 72318d7..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-debug.h
+++ /dev/null
-@@ -1,161 +0,0 @@
+@@ -1,152 +0,0 @@
-/******************************************************************************
- *
-- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
+- *
+- * Portions of this file are derived from the ipw3945 project.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of version 2 of the GNU General Public License as
@@ -381835,153 +456604,516 @@
- * The full GNU General Public License is included in this distribution in the
- * file called LICENSE.
- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- *****************************************************************************/
+-
+-#ifndef __iwl_debug_h__
+-#define __iwl_debug_h__
+-
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-extern u32 iwl_debug_level;
+-#define IWL_DEBUG(level, fmt, args...) \
+-do { if (iwl_debug_level & (level)) \
+- printk(KERN_ERR DRV_NAME": %c %s " fmt, \
+- in_interrupt() ? 'I' : 'U', __FUNCTION__ , ## args); } while (0)
+-
+-#define IWL_DEBUG_LIMIT(level, fmt, args...) \
+-do { if ((iwl_debug_level & (level)) && net_ratelimit()) \
+- printk(KERN_ERR DRV_NAME": %c %s " fmt, \
+- in_interrupt() ? 'I' : 'U', __FUNCTION__ , ## args); } while (0)
+-#else
+-static inline void IWL_DEBUG(int level, const char *fmt, ...)
+-{
+-}
+-static inline void IWL_DEBUG_LIMIT(int level, const char *fmt, ...)
+-{
+-}
+-#endif /* CONFIG_IWLWIFI_DEBUG */
+-
+-/*
+- * To use the debug system;
+- *
+- * If you are defining a new debug classification, simply add it to the #define
+- * list here in the form of:
+- *
+- * #define IWL_DL_xxxx VALUE
+- *
+- * shifting value to the left one bit from the previous entry. xxxx should be
+- * the name of the classification (for example, WEP)
+- *
+- * You then need to either add a IWL_xxxx_DEBUG() macro definition for your
+- * classification, or use IWL_DEBUG(IWL_DL_xxxx, ...) whenever you want
+- * to send output to that classification.
+- *
+- * To add your debug level to the list of levels seen when you perform
+- *
+- * % cat /proc/net/iwl/debug_level
+- *
+- * you simply need to add your entry to the iwl_debug_levels array.
+- *
+- * If you do not see debug_level in /proc/net/iwl then you do not have
+- * CONFIG_IWLWIFI_DEBUG defined in your kernel configuration
+- *
+- */
+-
+-#define IWL_DL_INFO (1<<0)
+-#define IWL_DL_MAC80211 (1<<1)
+-#define IWL_DL_HOST_COMMAND (1<<2)
+-#define IWL_DL_STATE (1<<3)
+-
+-#define IWL_DL_RADIO (1<<7)
+-#define IWL_DL_POWER (1<<8)
+-#define IWL_DL_TEMP (1<<9)
+-
+-#define IWL_DL_NOTIF (1<<10)
+-#define IWL_DL_SCAN (1<<11)
+-#define IWL_DL_ASSOC (1<<12)
+-#define IWL_DL_DROP (1<<13)
+-
+-#define IWL_DL_TXPOWER (1<<14)
+-
+-#define IWL_DL_AP (1<<15)
+-
+-#define IWL_DL_FW (1<<16)
+-#define IWL_DL_RF_KILL (1<<17)
+-#define IWL_DL_FW_ERRORS (1<<18)
+-
+-#define IWL_DL_LED (1<<19)
+-
+-#define IWL_DL_RATE (1<<20)
+-
+-#define IWL_DL_CALIB (1<<21)
+-#define IWL_DL_WEP (1<<22)
+-#define IWL_DL_TX (1<<23)
+-#define IWL_DL_RX (1<<24)
+-#define IWL_DL_ISR (1<<25)
+-#define IWL_DL_HT (1<<26)
+-#define IWL_DL_IO (1<<27)
+-#define IWL_DL_11H (1<<28)
+-
+-#define IWL_DL_STATS (1<<29)
+-#define IWL_DL_TX_REPLY (1<<30)
+-#define IWL_DL_QOS (1<<31)
+-
+-#define IWL_ERROR(f, a...) printk(KERN_ERR DRV_NAME ": " f, ## a)
+-#define IWL_WARNING(f, a...) printk(KERN_WARNING DRV_NAME ": " f, ## a)
+-#define IWL_DEBUG_INFO(f, a...) IWL_DEBUG(IWL_DL_INFO, f, ## a)
+-
+-#define IWL_DEBUG_MAC80211(f, a...) IWL_DEBUG(IWL_DL_MAC80211, f, ## a)
+-#define IWL_DEBUG_TEMP(f, a...) IWL_DEBUG(IWL_DL_TEMP, f, ## a)
+-#define IWL_DEBUG_SCAN(f, a...) IWL_DEBUG(IWL_DL_SCAN, f, ## a)
+-#define IWL_DEBUG_RX(f, a...) IWL_DEBUG(IWL_DL_RX, f, ## a)
+-#define IWL_DEBUG_TX(f, a...) IWL_DEBUG(IWL_DL_TX, f, ## a)
+-#define IWL_DEBUG_ISR(f, a...) IWL_DEBUG(IWL_DL_ISR, f, ## a)
+-#define IWL_DEBUG_LED(f, a...) IWL_DEBUG(IWL_DL_LED, f, ## a)
+-#define IWL_DEBUG_WEP(f, a...) IWL_DEBUG(IWL_DL_WEP, f, ## a)
+-#define IWL_DEBUG_HC(f, a...) IWL_DEBUG(IWL_DL_HOST_COMMAND, f, ## a)
+-#define IWL_DEBUG_CALIB(f, a...) IWL_DEBUG(IWL_DL_CALIB, f, ## a)
+-#define IWL_DEBUG_FW(f, a...) IWL_DEBUG(IWL_DL_FW, f, ## a)
+-#define IWL_DEBUG_RF_KILL(f, a...) IWL_DEBUG(IWL_DL_RF_KILL, f, ## a)
+-#define IWL_DEBUG_DROP(f, a...) IWL_DEBUG(IWL_DL_DROP, f, ## a)
+-#define IWL_DEBUG_DROP_LIMIT(f, a...) IWL_DEBUG_LIMIT(IWL_DL_DROP, f, ## a)
+-#define IWL_DEBUG_AP(f, a...) IWL_DEBUG(IWL_DL_AP, f, ## a)
+-#define IWL_DEBUG_TXPOWER(f, a...) IWL_DEBUG(IWL_DL_TXPOWER, f, ## a)
+-#define IWL_DEBUG_IO(f, a...) IWL_DEBUG(IWL_DL_IO, f, ## a)
+-#define IWL_DEBUG_RATE(f, a...) IWL_DEBUG(IWL_DL_RATE, f, ## a)
+-#define IWL_DEBUG_RATE_LIMIT(f, a...) IWL_DEBUG_LIMIT(IWL_DL_RATE, f, ## a)
+-#define IWL_DEBUG_NOTIF(f, a...) IWL_DEBUG(IWL_DL_NOTIF, f, ## a)
+-#define IWL_DEBUG_ASSOC(f, a...) IWL_DEBUG(IWL_DL_ASSOC | IWL_DL_INFO, f, ## a)
+-#define IWL_DEBUG_ASSOC_LIMIT(f, a...) \
+- IWL_DEBUG_LIMIT(IWL_DL_ASSOC | IWL_DL_INFO, f, ## a)
+-#define IWL_DEBUG_HT(f, a...) IWL_DEBUG(IWL_DL_HT, f, ## a)
+-#define IWL_DEBUG_STATS(f, a...) IWL_DEBUG(IWL_DL_STATS, f, ## a)
+-#define IWL_DEBUG_TX_REPLY(f, a...) IWL_DEBUG(IWL_DL_TX_REPLY, f, ## a)
+-#define IWL_DEBUG_QOS(f, a...) IWL_DEBUG(IWL_DL_QOS, f, ## a)
+-#define IWL_DEBUG_RADIO(f, a...) IWL_DEBUG(IWL_DL_RADIO, f, ## a)
+-#define IWL_DEBUG_POWER(f, a...) IWL_DEBUG(IWL_DL_POWER, f, ## a)
+-#define IWL_DEBUG_11H(f, a...) IWL_DEBUG(IWL_DL_11H, f, ## a)
+-
+-#endif
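The `IWL_DEBUG` machinery being removed above gates each printk on a 32-bit level mask, one bit per classification (`IWL_DL_*`), with per-class wrappers such as `IWL_DEBUG_RX()` expanding to a masked check. A userland sketch of that gating scheme, assuming GNU-style `##__VA_ARGS__` as the original macros do (`debug_level` stands in for `iwl_debug_level`):

```c
#include <stdio.h>

/* A few classification bits, in the style of the IWL_DL_* list above. */
#define DL_INFO (1u << 0)
#define DL_TX   (1u << 23)
#define DL_RX   (1u << 24)

static unsigned int debug_level;  /* runtime-settable mask */
static int emitted;               /* messages actually printed */

#define DBG(level, fmt, ...)                                    \
	do {                                                    \
		if (debug_level & (level)) {                    \
			fprintf(stderr, "%s: " fmt, __func__,   \
				##__VA_ARGS__);                 \
			emitted++;                              \
		}                                               \
	} while (0)

static void demo(void)
{
	DBG(DL_RX, "rx frame, len=%d\n", 42); /* printed iff DL_RX set */
	DBG(DL_TX, "tx done\n");              /* filtered when DL_TX clear */
}
```

Because the check is a single AND against a global, disabled classes cost one branch per call site, which is why the driver can afford debug statements on hot Rx/Tx paths.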
+diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom.h b/drivers/net/wireless/iwlwifi/iwl-eeprom.h
+deleted file mode 100644
+index e473c97..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-eeprom.h
++++ /dev/null
+@@ -1,336 +0,0 @@
+-/******************************************************************************
+- *
+- * This file is provided under a dual BSD/GPLv2 license. When using or
+- * redistributing this file, you may do so under either license.
+- *
+- * GPL LICENSE SUMMARY
+- *
+- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of version 2 of the GNU General Public License as

+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
+- * USA
+- *
+- * The full GNU General Public License is included in this distribution
+- * in the file called LICENSE.GPL.
+- *
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- * BSD LICENSE
+- *
+- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+- * All rights reserved.
+- *
+- * Redistribution and use in source and binary forms, with or without
+- * modification, are permitted provided that the following conditions
+- * are met:
+- *
+- * * Redistributions of source code must retain the above copyright
+- * notice, this list of conditions and the following disclaimer.
+- * * Redistributions in binary form must reproduce the above copyright
+- * notice, this list of conditions and the following disclaimer in
+- * the documentation and/or other materials provided with the
+- * distribution.
+- * * Neither the name Intel Corporation nor the names of its
+- * contributors may be used to endorse or promote products derived
+- * from this software without specific prior written permission.
+- *
+- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- *****************************************************************************/
--#ifndef __iwl_channel_h__
--#define __iwl_channel_h__
-
--#define IWL_NUM_SCAN_RATES (2)
+-#ifndef __iwl_eeprom_h__
+-#define __iwl_eeprom_h__
-
--struct iwl_channel_tgd_info {
-- u8 type;
-- s8 max_power;
--};
+-/*
+- * This file defines EEPROM related constants, enums, and inline functions.
+- *
+- */
-
--struct iwl_channel_tgh_info {
-- s64 last_radar_time;
--};
+-#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */
+-#define IWL_EEPROM_ACCESS_DELAY 10 /* uSec */
+-/* EEPROM field values */
+-#define ANTENNA_SWITCH_NORMAL 0
+-#define ANTENNA_SWITCH_INVERSE 1
-
--/* current Tx power values to use, one for each rate for each channel.
-- * requested power is limited by:
-- * -- regulatory EEPROM limits for this channel
-- * -- hardware capabilities (clip-powers)
-- * -- spectrum management
-- * -- user preference (e.g. iwconfig)
-- * when requested power is set, base power index must also be set. */
--struct iwl_channel_power_info {
-- struct iwl_tx_power tpc; /* actual radio and DSP gain settings */
-- s8 power_table_index; /* actual (compenst'd) index into gain table */
-- s8 base_power_index; /* gain index for power at factory temp. */
-- s8 requested_power; /* power (dBm) requested for this chnl/rate */
+-enum {
+- EEPROM_CHANNEL_VALID = (1 << 0), /* usable for this SKU/geo */
+- EEPROM_CHANNEL_IBSS = (1 << 1), /* usable as an IBSS channel */
+- /* Bit 2 Reserved */
+- EEPROM_CHANNEL_ACTIVE = (1 << 3), /* active scanning allowed */
+- EEPROM_CHANNEL_RADAR = (1 << 4), /* radar detection required */
+- EEPROM_CHANNEL_WIDE = (1 << 5),
+- EEPROM_CHANNEL_NARROW = (1 << 6),
+- EEPROM_CHANNEL_DFS = (1 << 7), /* dynamic freq selection candidate */
-};
-
--/* current scan Tx power values to use, one for each scan rate for each
-- * channel. */
--struct iwl_scan_power_info {
-- struct iwl_tx_power tpc; /* actual radio and DSP gain settings */
-- s8 power_table_index; /* actual (compenst'd) index into gain table */
-- s8 requested_power; /* scan pwr (dBm) requested for chnl/rate */
--};
+-/* EEPROM field lengths */
+-#define EEPROM_BOARD_PBA_NUMBER_LENGTH 11
-
--/* Channel unlock period is 15 seconds. If no beacon or probe response
-- * has been received within 15 seconds on a locked channel then the channel
-- * remains locked. */
--#define TX_UNLOCK_PERIOD 15
+-/* EEPROM field lengths */
+-#define EEPROM_BOARD_PBA_NUMBER_LENGTH 11
+-#define EEPROM_REGULATORY_SKU_ID_LENGTH 4
+-#define EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH 14
+-#define EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH 13
+-#define EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH 12
+-#define EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH 11
+-#define EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH 6
+-
+-#if IWL == 3945
+-#define EEPROM_REGULATORY_CHANNELS_LENGTH ( \
+- EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH)
+-#elif IWL == 4965
+-#define EEPROM_REGULATORY_BAND_24_FAT_CHANNELS_LENGTH 7
+-#define EEPROM_REGULATORY_BAND_52_FAT_CHANNELS_LENGTH 11
+-#define EEPROM_REGULATORY_CHANNELS_LENGTH ( \
+- EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND_24_FAT_CHANNELS_LENGTH + \
+- EEPROM_REGULATORY_BAND_52_FAT_CHANNELS_LENGTH)
+-#endif
+-
+-#define EEPROM_REGULATORY_NUMBER_OF_BANDS 5
+-
+-/* SKU Capabilities */
+-#define EEPROM_SKU_CAP_SW_RF_KILL_ENABLE (1 << 0)
+-#define EEPROM_SKU_CAP_HW_RF_KILL_ENABLE (1 << 1)
+-#define EEPROM_SKU_CAP_OP_MODE_MRC (1 << 7)
+-
+-/* *regulatory* channel data from eeprom, one for each channel */
+-struct iwl_eeprom_channel {
+- u8 flags; /* flags copied from EEPROM */
+- s8 max_power_avg; /* max power (dBm) on this chnl, limit 31 */
+-} __attribute__ ((packed));
-
--/* CSA lock period is 15 seconds. If a CSA has been received on a channel in
-- * the last 15 seconds, the channel is locked */
--#define CSA_LOCK_PERIOD 15
-/*
-- * One for each channel, holds all channel setup data
-- * Some of the fields (e.g. eeprom and flags/max_power_avg) are redundant
-- * with one another!
+- * Mapping of a Tx power level, at factory calibration temperature,
+- * to a radio/DSP gain table index.
+- * One for each of 5 "sample" power levels in each band.
+- * v_det is measured at the factory, using the 3945's built-in power amplifier
+- * (PA) output voltage detector. This same detector is used during Tx of
+- * long packets in normal operation to provide feedback as to proper output
+- * level.
+- * Data copied from EEPROM.
- */
--#define IWL4965_MAX_RATE (33)
+-struct iwl_eeprom_txpower_sample {
+- u8 gain_index; /* index into power (gain) setup table ... */
+- s8 power; /* ... for this pwr level for this chnl group */
+- u16 v_det; /* PA output voltage */
+-} __attribute__ ((packed));
-
--struct iwl_channel_info {
-- struct iwl_channel_tgd_info tgd;
-- struct iwl_channel_tgh_info tgh;
-- struct iwl_eeprom_channel eeprom; /* EEPROM regulatory limit */
-- struct iwl_eeprom_channel fat_eeprom; /* EEPROM regulatory limit for
-- * FAT channel */
+-/*
+- * Mappings of Tx power levels -> nominal radio/DSP gain table indexes.
+- * One for each channel group (a.k.a. "band") (1 for BG, 4 for A).
+- * Tx power setup code interpolates between the 5 "sample" power levels
+- * to determine the nominal setup for a requested power level.
+- * Data copied from EEPROM.
+- * DO NOT ALTER THIS STRUCTURE!!!
+- */
+-struct iwl_eeprom_txpower_group {
+- struct iwl_eeprom_txpower_sample samples[5]; /* 5 power levels */
+- s32 a, b, c, d, e; /* coefficients for voltage->power
+- * formula (signed) */
+- s32 Fa, Fb, Fc, Fd, Fe; /* these modify coeffs based on
+- * frequency (signed) */
+- s8 saturation_power; /* highest power possible by h/w in this
+- * band */
+- u8 group_channel; /* "representative" channel # in this band */
+- s16 temperature; /* h/w temperature at factory calib this band
+- * (signed) */
+-} __attribute__ ((packed));
-
-- u8 channel; /* channel number */
-- u8 flags; /* flags copied from EEPROM */
-- s8 max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
-- s8 curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
-- s8 min_power; /* always 0 */
-- s8 scan_power; /* (dBm) regul. eeprom, direct scans, any rate */
+-/*
+- * Temperature-based Tx-power compensation data, not band-specific.
+- * These coefficients are use to modify a/b/c/d/e coeffs based on
+- * difference between current temperature and factory calib temperature.
+- * Data copied from EEPROM.
+- */
+-struct iwl_eeprom_temperature_corr {
+- u32 Ta;
+- u32 Tb;
+- u32 Tc;
+- u32 Td;
+- u32 Te;
+-} __attribute__ ((packed));
-
-- u8 group_index; /* 0-4, maps channel to group1/2/3/4/5 */
-- u8 band_index; /* 0-4, maps channel to band1/2/3/4/5 */
-- u8 phymode; /* MODE_IEEE80211{A,B,G} */
+-#if IWL == 4965
+-#define EEPROM_TX_POWER_TX_CHAINS (2)
+-#define EEPROM_TX_POWER_BANDS (8)
+-#define EEPROM_TX_POWER_MEASUREMENTS (3)
+-#define EEPROM_TX_POWER_VERSION (2)
+-#define EEPROM_TX_POWER_VERSION_NEW (5)
-
-- /* Radio/DSP gain settings for each "normal" data Tx rate.
-- * These include, in addition to RF and DSP gain, a few fields for
-- * remembering/modifying gain settings (indexes). */
-- struct iwl_channel_power_info power_info[IWL4965_MAX_RATE];
+-struct iwl_eeprom_calib_measure {
+- u8 temperature;
+- u8 gain_idx;
+- u8 actual_pow;
+- s8 pa_det;
+-} __attribute__ ((packed));
+-
+-struct iwl_eeprom_calib_ch_info {
+- u8 ch_num;
+- struct iwl_eeprom_calib_measure measurements[EEPROM_TX_POWER_TX_CHAINS]
+- [EEPROM_TX_POWER_MEASUREMENTS];
+-} __attribute__ ((packed));
+-
+-struct iwl_eeprom_calib_subband_info {
+- u8 ch_from;
+- u8 ch_to;
+- struct iwl_eeprom_calib_ch_info ch1;
+- struct iwl_eeprom_calib_ch_info ch2;
+-} __attribute__ ((packed));
+-
+-struct iwl_eeprom_calib_info {
+- u8 saturation_power24;
+- u8 saturation_power52;
+- s16 voltage; /* signed */
+- struct iwl_eeprom_calib_subband_info band_info[EEPROM_TX_POWER_BANDS];
+-} __attribute__ ((packed));
-
--#if IWL == 4965
-- /* FAT channel info */
-- s8 fat_max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */
-- s8 fat_curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) */
-- s8 fat_min_power; /* always 0 */
-- s8 fat_scan_power; /* (dBm) eeprom, direct scans, any rate */
-- u8 fat_flags; /* flags copied from EEPROM */
-- u8 fat_extension_channel;
-#endif
-
-- /* Radio/DSP gain settings for each scan rate, for directed scans. */
-- struct iwl_scan_power_info scan_pwr_info[IWL_NUM_SCAN_RATES];
--};
+-struct iwl_eeprom {
+- u8 reserved0[16];
+-#define EEPROM_DEVICE_ID (2*0x08) /* 2 bytes */
+- u16 device_id; /* abs.ofs: 16 */
+- u8 reserved1[2];
+-#define EEPROM_PMC (2*0x0A) /* 2 bytes */
+- u16 pmc; /* abs.ofs: 20 */
+- u8 reserved2[20];
+-#define EEPROM_MAC_ADDRESS (2*0x15) /* 6 bytes */
+- u8 mac_address[6]; /* abs.ofs: 42 */
+- u8 reserved3[58];
+-#define EEPROM_BOARD_REVISION (2*0x35) /* 2 bytes */
+- u16 board_revision; /* abs.ofs: 106 */
+- u8 reserved4[11];
+-#define EEPROM_BOARD_PBA_NUMBER (2*0x3B+1) /* 9 bytes */
+- u8 board_pba_number[9]; /* abs.ofs: 119 */
+- u8 reserved5[8];
+-#define EEPROM_VERSION (2*0x44) /* 2 bytes */
+- u16 version; /* abs.ofs: 136 */
+-#define EEPROM_SKU_CAP (2*0x45) /* 1 bytes */
+- u8 sku_cap; /* abs.ofs: 138 */
+-#define EEPROM_LEDS_MODE (2*0x45+1) /* 1 bytes */
+- u8 leds_mode; /* abs.ofs: 139 */
+-#define EEPROM_OEM_MODE (2*0x46) /* 2 bytes */
+- u16 oem_mode;
+-#define EEPROM_WOWLAN_MODE (2*0x47) /* 2 bytes */
+- u16 wowlan_mode; /* abs.ofs: 142 */
+-#define EEPROM_LEDS_TIME_INTERVAL (2*0x48) /* 2 bytes */
+- u16 leds_time_interval; /* abs.ofs: 144 */
+-#define EEPROM_LEDS_OFF_TIME (2*0x49) /* 1 bytes */
+- u8 leds_off_time; /* abs.ofs: 146 */
+-#define EEPROM_LEDS_ON_TIME (2*0x49+1) /* 1 bytes */
+- u8 leds_on_time; /* abs.ofs: 147 */
+-#define EEPROM_ALMGOR_M_VERSION (2*0x4A) /* 1 bytes */
+- u8 almgor_m_version; /* abs.ofs: 148 */
+-#define EEPROM_ANTENNA_SWITCH_TYPE (2*0x4A+1) /* 1 bytes */
+- u8 antenna_switch_type; /* abs.ofs: 149 */
+-#if IWL == 3945
+- u8 reserved6[42];
+-#else
+- u8 reserved6[8];
+-#define EEPROM_4965_BOARD_REVISION (2*0x4F) /* 2 bytes */
+- u16 board_revision_4965; /* abs.ofs: 158 */
+- u8 reserved7[13];
+-#define EEPROM_4965_BOARD_PBA (2*0x56+1) /* 9 bytes */
+- u8 board_pba_number_4965[9]; /* abs.ofs: 173 */
+- u8 reserved8[10];
+-#endif
+-#define EEPROM_REGULATORY_SKU_ID (2*0x60) /* 4 bytes */
+- u8 sku_id[4]; /* abs.ofs: 192 */
+-#define EEPROM_REGULATORY_BAND_1 (2*0x62) /* 2 bytes */
+- u16 band_1_count; /* abs.ofs: 196 */
+-#define EEPROM_REGULATORY_BAND_1_CHANNELS (2*0x63) /* 28 bytes */
+- struct iwl_eeprom_channel band_1_channels[14]; /* abs.ofs: 196 */
+-#define EEPROM_REGULATORY_BAND_2 (2*0x71) /* 2 bytes */
+- u16 band_2_count; /* abs.ofs: 226 */
+-#define EEPROM_REGULATORY_BAND_2_CHANNELS (2*0x72) /* 26 bytes */
+- struct iwl_eeprom_channel band_2_channels[13]; /* abs.ofs: 228 */
+-#define EEPROM_REGULATORY_BAND_3 (2*0x7F) /* 2 bytes */
+- u16 band_3_count; /* abs.ofs: 254 */
+-#define EEPROM_REGULATORY_BAND_3_CHANNELS (2*0x80) /* 24 bytes */
+- struct iwl_eeprom_channel band_3_channels[12]; /* abs.ofs: 256 */
+-#define EEPROM_REGULATORY_BAND_4 (2*0x8C) /* 2 bytes */
+- u16 band_4_count; /* abs.ofs: 280 */
+-#define EEPROM_REGULATORY_BAND_4_CHANNELS (2*0x8D) /* 22 bytes */
+- struct iwl_eeprom_channel band_4_channels[11]; /* abs.ofs: 282 */
+-#define EEPROM_REGULATORY_BAND_5 (2*0x98) /* 2 bytes */
+- u16 band_5_count; /* abs.ofs: 304 */
+-#define EEPROM_REGULATORY_BAND_5_CHANNELS (2*0x99) /* 12 bytes */
+- struct iwl_eeprom_channel band_5_channels[6]; /* abs.ofs: 306 */
-
--struct iwl_clip_group {
-- /* maximum power level to prevent clipping for each rate, derived by
-- * us from this band's saturation power in EEPROM */
-- const s8 clip_powers[IWL_MAX_RATES];
--};
+-/* From here on out the EEPROM diverges between the 4965 and the 3945 */
+-#if IWL == 3945
-
--static inline int is_channel_valid(const struct iwl_channel_info *ch_info)
--{
-- if (ch_info == NULL)
-- return 0;
-- return (ch_info->flags & EEPROM_CHANNEL_VALID) ? 1 : 0;
--}
+- u8 reserved9[194];
-
--static inline int is_channel_narrow(const struct iwl_channel_info *ch_info)
--{
-- return (ch_info->flags & EEPROM_CHANNEL_NARROW) ? 1 : 0;
--}
+-#define EEPROM_TXPOWER_CALIB_GROUP0 0x200
+-#define EEPROM_TXPOWER_CALIB_GROUP1 0x240
+-#define EEPROM_TXPOWER_CALIB_GROUP2 0x280
+-#define EEPROM_TXPOWER_CALIB_GROUP3 0x2c0
+-#define EEPROM_TXPOWER_CALIB_GROUP4 0x300
+-#define IWL_NUM_TX_CALIB_GROUPS 5
+- struct iwl_eeprom_txpower_group groups[IWL_NUM_TX_CALIB_GROUPS];
+-/* abs.ofs: 512 */
+-#define EEPROM_CALIB_TEMPERATURE_CORRECT 0x340
+- struct iwl_eeprom_temperature_corr corrections; /* abs.ofs: 832 */
+- u8 reserved16[172]; /* fill out to full 1024 byte block */
-
--static inline int is_channel_radar(const struct iwl_channel_info *ch_info)
--{
-- return (ch_info->flags & EEPROM_CHANNEL_RADAR) ? 1 : 0;
--}
+-/* 4965AGN adds fat channel support */
+-#elif IWL == 4965
-
--static inline u8 is_channel_a_band(const struct iwl_channel_info *ch_info)
--{
-- return ch_info->phymode == MODE_IEEE80211A;
--}
+- u8 reserved10[2];
+-#define EEPROM_REGULATORY_BAND_24_FAT_CHANNELS (2*0xA0) /* 14 bytes */
+- struct iwl_eeprom_channel band_24_channels[7]; /* abs.ofs: 320 */
+- u8 reserved11[2];
+-#define EEPROM_REGULATORY_BAND_52_FAT_CHANNELS (2*0xA8) /* 22 bytes */
+- struct iwl_eeprom_channel band_52_channels[11]; /* abs.ofs: 336 */
+- u8 reserved12[6];
+-#define EEPROM_CALIB_VERSION_OFFSET (2*0xB6) /* 2 bytes */
+- u16 calib_version; /* abs.ofs: 364 */
+- u8 reserved13[2];
+-#define EEPROM_SATURATION_POWER_OFFSET (2*0xB8) /* 2 bytes */
+- u16 satruation_power; /* abs.ofs: 368 */
+- u8 reserved14[94];
+-#define EEPROM_IWL_CALIB_TXPOWER_OFFSET (2*0xE8) /* 48 bytes */
+- struct iwl_eeprom_calib_info calib_info; /* abs.ofs: 464 */
-
--static inline u8 is_channel_bg_band(const struct iwl_channel_info *ch_info)
--{
-- return ((ch_info->phymode == MODE_IEEE80211B) ||
-- (ch_info->phymode == MODE_IEEE80211G));
--}
+- u8 reserved16[140]; /* fill out to full 1024 byte block */
-
--static inline int is_channel_passive(const struct iwl_channel_info *ch)
--{
-- return (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) ? 1 : 0;
--}
+-#endif
-
--static inline int is_channel_ibss(const struct iwl_channel_info *ch)
--{
-- return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 1 : 0;
--}
+-} __attribute__ ((packed));
-
--extern const struct iwl_channel_info *iwl_get_channel_info(
-- const struct iwl_priv *priv, int phymode, u16 channel);
+-#define IWL_EEPROM_IMAGE_SIZE 1024
-
-#endif
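The offsets in `struct iwl_eeprom` above are written as `2*word_address` because the EEPROM is addressed in 16-bit words while the struct is laid out in bytes; an odd byte offset like `2*0x45+1` addresses the high byte of a word. The `abs.ofs:` comments in the listing are exactly these doubled values, which can be checked directly:

```c
/* Byte offsets derived from 16-bit EEPROM word addresses,
 * copied from the iwl-eeprom.h listing above. */
#define EEPROM_DEVICE_ID    (2*0x08)    /* abs.ofs: 16  */
#define EEPROM_MAC_ADDRESS  (2*0x15)    /* abs.ofs: 42  */
#define EEPROM_VERSION      (2*0x44)    /* abs.ofs: 136 */
#define EEPROM_SKU_CAP      (2*0x45)    /* abs.ofs: 138 */
/* High byte of word 0x45: */
#define EEPROM_LEDS_MODE    (2*0x45+1)  /* abs.ofs: 139 */
```

Keeping the word address visible in the macro makes it easy to cross-check the struct layout against the device's EEPROM map, at the cost of requiring the reserved-byte padding fields to line everything up.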
-diff --git a/drivers/net/wireless/iwlwifi/iwl-commands.h b/drivers/net/wireless/iwlwifi/iwl-commands.h
+diff --git a/drivers/net/wireless/iwlwifi/iwl-helpers.h b/drivers/net/wireless/iwlwifi/iwl-helpers.h
+index e2a8d95..cd2eb18 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-helpers.h
++++ b/drivers/net/wireless/iwlwifi/iwl-helpers.h
+@@ -252,4 +252,27 @@ static inline unsigned long elapsed_jiffies(unsigned long start,
+ return end + (MAX_JIFFY_OFFSET - start);
+ }
+
++static inline u8 iwl_get_dma_hi_address(dma_addr_t addr)
++{
++ return sizeof(addr) > sizeof(u32) ? (addr >> 16) >> 16 : 0;
++}
++
++/* TODO: Move fw_desc functions to iwl-pci.ko */
++static inline void iwl_free_fw_desc(struct pci_dev *pci_dev,
++ struct fw_desc *desc)
++{
++ if (desc->v_addr)
++ pci_free_consistent(pci_dev, desc->len,
++ desc->v_addr, desc->p_addr);
++ desc->v_addr = NULL;
++ desc->len = 0;
++}
++
++static inline int iwl_alloc_fw_desc(struct pci_dev *pci_dev,
++ struct fw_desc *desc)
++{
++ desc->v_addr = pci_alloc_consistent(pci_dev, desc->len, &desc->p_addr);
++ return (desc->v_addr != NULL) ? 0 : -ENOMEM;
++}
++
+ #endif /* __iwl_helpers_h__ */
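The new `iwl_get_dma_hi_address()` added above extracts the bits above 32 of a DMA address with two 16-bit shifts instead of a single `>> 32`: when `dma_addr_t` is a 32-bit type, a shift by the full type width is undefined in C, while `(addr >> 16) >> 16` is well-defined and the `sizeof` test folds the whole expression to 0 at compile time. A userland sketch of the same trick (using `uint64_t` in place of `dma_addr_t`):

```c
#include <stdint.h>

/* Mirrors iwl_get_dma_hi_address(): the double 16-bit shift stays
 * well-defined even when the address type is only 32 bits wide,
 * where a direct ">> 32" would be undefined behavior. */
static uint8_t dma_hi_address(uint64_t addr)
{
	return sizeof(addr) > sizeof(uint32_t) ? (addr >> 16) >> 16 : 0;
}
```

The `u8` return type means only bits 32-39 survive, which matches hardware registers that take an 8-bit "high address" field alongside a 32-bit low word; the adjacent `iwl_alloc_fw_desc()`/`iwl_free_fw_desc()` helpers manage the coherent DMA buffers whose addresses get split this way.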
+diff --git a/drivers/net/wireless/iwlwifi/iwl-hw.h b/drivers/net/wireless/iwlwifi/iwl-hw.h
deleted file mode 100644
-index 9de8d7f..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-commands.h
+index 1aa6fcd..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-hw.h
+++ /dev/null
-@@ -1,1734 +0,0 @@
+@@ -1,537 +0,0 @@
-/******************************************************************************
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
@@ -382042,3633 +457174,8957 @@
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-- *
- *****************************************************************************/
-
--#ifndef __iwl_commands_h__
--#define __iwl_commands_h__
+-#ifndef __iwlwifi_hw_h__
+-#define __iwlwifi_hw_h__
-
--enum {
-- REPLY_ALIVE = 0x1,
-- REPLY_ERROR = 0x2,
+-/*
+- * This file defines hardware constants common to 3945 and 4965.
+- *
+- * Device-specific constants are defined in iwl-3945-hw.h and iwl-4965-hw.h,
+- * although this file contains a few definitions for which the .c
+- * implementation is the same for 3945 and 4965, except for the value of
+- * a constant.
+- *
+- * uCode API constants are defined in iwl-commands.h.
+- *
+- * NOTE: DO NOT PUT OS IMPLEMENTATION-SPECIFIC DECLARATIONS HERE
+- *
+- * The iwl-*hw.h (and files they include) files should remain OS/driver
+- * implementation independent, declaring only the hardware interface.
+- */
-
-- /* RXON and QOS commands */
-- REPLY_RXON = 0x10,
-- REPLY_RXON_ASSOC = 0x11,
-- REPLY_QOS_PARAM = 0x13,
-- REPLY_RXON_TIMING = 0x14,
+-/* uCode queue management definitions */
+-#define IWL_CMD_QUEUE_NUM 4
+-#define IWL_CMD_FIFO_NUM 4
+-#define IWL_BACK_QUEUE_FIRST_ID 7
-
-- /* Multi-Station support */
-- REPLY_ADD_STA = 0x18,
-- REPLY_REMOVE_STA = 0x19, /* not used */
-- REPLY_REMOVE_ALL_STA = 0x1a, /* not used */
+-/* Tx rates */
+-#define IWL_CCK_RATES 4
+-#define IWL_OFDM_RATES 8
-
-- /* RX, TX, LEDs */
-#if IWL == 3945
-- REPLY_3945_RX = 0x1b, /* 3945 only */
+-#define IWL_HT_RATES 0
+-#elif IWL == 4965
+-#define IWL_HT_RATES 16
-#endif
-- REPLY_TX = 0x1c,
-- REPLY_RATE_SCALE = 0x47, /* 3945 only */
-- REPLY_LEDS_CMD = 0x48,
-- REPLY_TX_LINK_QUALITY_CMD = 0x4e, /* 4965 only */
-
-- /* 802.11h related */
-- RADAR_NOTIFICATION = 0x70, /* not used */
-- REPLY_QUIET_CMD = 0x71, /* not used */
-- REPLY_CHANNEL_SWITCH = 0x72,
-- CHANNEL_SWITCH_NOTIFICATION = 0x73,
-- REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74,
-- SPECTRUM_MEASURE_NOTIFICATION = 0x75,
+-#define IWL_MAX_RATES (IWL_CCK_RATES+IWL_OFDM_RATES+IWL_HT_RATES)
-
-- /* Power Management */
-- POWER_TABLE_CMD = 0x77,
-- PM_SLEEP_NOTIFICATION = 0x7A,
-- PM_DEBUG_STATISTIC_NOTIFIC = 0x7B,
+-/* Time constants */
+-#define SHORT_SLOT_TIME 9
+-#define LONG_SLOT_TIME 20
-
-- /* Scan commands and notifications */
-- REPLY_SCAN_CMD = 0x80,
-- REPLY_SCAN_ABORT_CMD = 0x81,
-- SCAN_START_NOTIFICATION = 0x82,
-- SCAN_RESULTS_NOTIFICATION = 0x83,
-- SCAN_COMPLETE_NOTIFICATION = 0x84,
+-/* RSSI to dBm */
+-#if IWL == 3945
+-#define IWL_RSSI_OFFSET 95
+-#elif IWL == 4965
+-#define IWL_RSSI_OFFSET 44
+-#endif
-
-- /* IBSS/AP commands */
-- BEACON_NOTIFICATION = 0x90,
-- REPLY_TX_BEACON = 0x91,
-- WHO_IS_AWAKE_NOTIFICATION = 0x94, /* not used */
+-#include "iwl-eeprom.h"
+-#include "iwl-commands.h"
-
-- /* Miscellaneous commands */
-- QUIET_NOTIFICATION = 0x96, /* not used */
-- REPLY_TX_PWR_TABLE_CMD = 0x97,
-- MEASURE_ABORT_NOTIFICATION = 0x99, /* not used */
+-#define PCI_LINK_CTRL 0x0F0
+-#define PCI_POWER_SOURCE 0x0C8
+-#define PCI_REG_WUM8 0x0E8
+-#define PCI_CFG_PMC_PME_FROM_D3COLD_SUPPORT (0x80000000)
-
-- /* BT config command */
-- REPLY_BT_CONFIG = 0x9b,
+-/*=== CSR (control and status registers) ===*/
+-#define CSR_BASE (0x000)
-
-- /* 4965 Statistics */
-- REPLY_STATISTICS_CMD = 0x9c,
-- STATISTICS_NOTIFICATION = 0x9d,
+-#define CSR_SW_VER (CSR_BASE+0x000)
+-#define CSR_HW_IF_CONFIG_REG (CSR_BASE+0x000) /* hardware interface config */
+-#define CSR_INT_COALESCING (CSR_BASE+0x004) /* accum ints, 32-usec units */
+-#define CSR_INT (CSR_BASE+0x008) /* host interrupt status/ack */
+-#define CSR_INT_MASK (CSR_BASE+0x00c) /* host interrupt enable */
+-#define CSR_FH_INT_STATUS (CSR_BASE+0x010) /* busmaster int status/ack*/
+-#define CSR_GPIO_IN (CSR_BASE+0x018) /* read external chip pins */
+-#define CSR_RESET (CSR_BASE+0x020) /* busmaster enable, NMI, etc*/
+-#define CSR_GP_CNTRL (CSR_BASE+0x024)
+-#define CSR_HW_REV (CSR_BASE+0x028)
+-#define CSR_EEPROM_REG (CSR_BASE+0x02c)
+-#define CSR_EEPROM_GP (CSR_BASE+0x030)
+-#define CSR_GP_UCODE (CSR_BASE+0x044)
+-#define CSR_UCODE_DRV_GP1 (CSR_BASE+0x054)
+-#define CSR_UCODE_DRV_GP1_SET (CSR_BASE+0x058)
+-#define CSR_UCODE_DRV_GP1_CLR (CSR_BASE+0x05c)
+-#define CSR_UCODE_DRV_GP2 (CSR_BASE+0x060)
+-#define CSR_LED_REG (CSR_BASE+0x094)
+-#define CSR_DRAM_INT_TBL_CTL (CSR_BASE+0x0A0)
+-#define CSR_GIO_CHICKEN_BITS (CSR_BASE+0x100)
+-#define CSR_ANA_PLL_CFG (CSR_BASE+0x20c)
+-#define CSR_HW_REV_WA_REG (CSR_BASE+0x22C)
-
-- /* RF-KILL commands and notifications */
-- REPLY_CARD_STATE_CMD = 0xa0,
-- CARD_STATE_NOTIFICATION = 0xa1,
+-/* HW I/F configuration */
+-#define CSR_HW_IF_CONFIG_REG_BIT_ALMAGOR_MB (0x00000100)
+-#define CSR_HW_IF_CONFIG_REG_BIT_ALMAGOR_MM (0x00000200)
+-#define CSR_HW_IF_CONFIG_REG_BIT_SKU_MRC (0x00000400)
+-#define CSR_HW_IF_CONFIG_REG_BIT_BOARD_TYPE (0x00000800)
+-#define CSR_HW_IF_CONFIG_REG_BITS_SILICON_TYPE_A (0x00000000)
+-#define CSR_HW_IF_CONFIG_REG_BITS_SILICON_TYPE_B (0x00001000)
+-#define CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM (0x00200000)
-
-- /* Missed beacons notification */
-- MISSED_BEACONS_NOTIFICATION = 0xa2,
+-/* interrupt flags in INTA, set by uCode or hardware (e.g. dma),
+- * acknowledged (reset) by host writing "1" to flagged bits. */
+-#define CSR_INT_BIT_FH_RX (1<<31) /* Rx DMA, cmd responses, FH_INT[17:16] */
+-#define CSR_INT_BIT_HW_ERR (1<<29) /* DMA hardware error FH_INT[31] */
+-#define CSR_INT_BIT_DNLD (1<<28) /* uCode Download */
+-#define CSR_INT_BIT_FH_TX (1<<27) /* Tx DMA FH_INT[1:0] */
+-#define CSR_INT_BIT_MAC_CLK_ACTV (1<<26) /* NIC controller's clock toggled on/off */
+-#define CSR_INT_BIT_SW_ERR (1<<25) /* uCode error */
+-#define CSR_INT_BIT_RF_KILL (1<<7) /* HW RFKILL switch GP_CNTRL[27] toggled */
+-#define CSR_INT_BIT_CT_KILL (1<<6) /* Critical temp (chip too hot) rfkill */
+-#define CSR_INT_BIT_SW_RX (1<<3) /* Rx, command responses, 3945 */
+-#define CSR_INT_BIT_WAKEUP (1<<1) /* NIC controller waking up (pwr mgmt) */
+-#define CSR_INT_BIT_ALIVE (1<<0) /* uCode interrupts once it initializes */
-
--#if IWL == 4965
-- REPLY_CT_KILL_CONFIG_CMD = 0xa4,
-- SENSITIVITY_CMD = 0xa8,
-- REPLY_PHY_CALIBRATION_CMD = 0xb0,
-- REPLY_RX_PHY_CMD = 0xc0,
-- REPLY_RX_MPDU_CMD = 0xc1,
-- REPLY_4965_RX = 0xc3,
-- REPLY_COMPRESSED_BA = 0xc5,
--#endif
-- REPLY_MAX = 0xff
--};
+-#define CSR_INI_SET_MASK (CSR_INT_BIT_FH_RX | \
+- CSR_INT_BIT_HW_ERR | \
+- CSR_INT_BIT_FH_TX | \
+- CSR_INT_BIT_SW_ERR | \
+- CSR_INT_BIT_RF_KILL | \
+- CSR_INT_BIT_SW_RX | \
+- CSR_INT_BIT_WAKEUP | \
+- CSR_INT_BIT_ALIVE)
-
--/******************************************************************************
-- * (0)
-- * Header
-- *
-- *****************************************************************************/
+-/* interrupt flags in FH (flow handler) (PCI busmaster DMA) */
+-#define CSR_FH_INT_BIT_ERR (1<<31) /* Error */
+-#define CSR_FH_INT_BIT_HI_PRIOR (1<<30) /* High priority Rx, bypass coalescing */
+-#define CSR_FH_INT_BIT_RX_CHNL2 (1<<18) /* Rx channel 2 (3945 only) */
+-#define CSR_FH_INT_BIT_RX_CHNL1 (1<<17) /* Rx channel 1 */
+-#define CSR_FH_INT_BIT_RX_CHNL0 (1<<16) /* Rx channel 0 */
+-#define CSR_FH_INT_BIT_TX_CHNL6 (1<<6) /* Tx channel 6 (3945 only) */
+-#define CSR_FH_INT_BIT_TX_CHNL1 (1<<1) /* Tx channel 1 */
+-#define CSR_FH_INT_BIT_TX_CHNL0 (1<<0) /* Tx channel 0 */
-
--#define IWL_CMD_FAILED_MSK 0x40
+-#define CSR_FH_INT_RX_MASK (CSR_FH_INT_BIT_HI_PRIOR | \
+- CSR_FH_INT_BIT_RX_CHNL2 | \
+- CSR_FH_INT_BIT_RX_CHNL1 | \
+- CSR_FH_INT_BIT_RX_CHNL0)
-
--struct iwl_cmd_header {
-- u8 cmd;
-- u8 flags;
-- /* We have 15 LSB to use as we please (MSB indicates
-- * a frame Rx'd from the HW). We encode the following
-- * information into the sequence field:
-- *
-- * 0:7 index in fifo
-- * 8:13 fifo selection
-- * 14:14 bit indicating if this packet references the 'extra'
-- * storage at the end of the memory queue
-- * 15:15 (Rx indication)
-- *
-- */
-- __le16 sequence;
+-#define CSR_FH_INT_TX_MASK (CSR_FH_INT_BIT_TX_CHNL6 | \
+- CSR_FH_INT_BIT_TX_CHNL1 | \
+- CSR_FH_INT_BIT_TX_CHNL0 )
-
-- /* command data follows immediately */
-- u8 data[0];
--} __attribute__ ((packed));
-
--/******************************************************************************
-- * (0a)
-- * Alive and Error Commands & Responses:
-- *
-- *****************************************************************************/
+-/* RESET */
+-#define CSR_RESET_REG_FLAG_NEVO_RESET (0x00000001)
+-#define CSR_RESET_REG_FLAG_FORCE_NMI (0x00000002)
+-#define CSR_RESET_REG_FLAG_SW_RESET (0x00000080)
+-#define CSR_RESET_REG_FLAG_MASTER_DISABLED (0x00000100)
+-#define CSR_RESET_REG_FLAG_STOP_MASTER (0x00000200)
-
--#define UCODE_VALID_OK __constant_cpu_to_le32(0x1)
--#define INITIALIZE_SUBTYPE (9)
+-/* GP (general purpose) CONTROL */
+-#define CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY (0x00000001)
+-#define CSR_GP_CNTRL_REG_FLAG_INIT_DONE (0x00000004)
+-#define CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ (0x00000008)
+-#define CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP (0x00000010)
-
--/*
-- * REPLY_ALIVE = 0x1 (response only, not a command)
-- */
--struct iwl_alive_resp {
-- u8 ucode_minor;
-- u8 ucode_major;
-- __le16 reserved1;
-- u8 sw_rev[8];
-- u8 ver_type;
-- u8 ver_subtype;
-- __le16 reserved2;
-- __le32 log_event_table_ptr;
-- __le32 error_event_table_ptr;
-- __le32 timestamp;
-- __le32 is_valid;
--} __attribute__ ((packed));
+-#define CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN (0x00000001)
-
--struct iwl_init_alive_resp {
-- u8 ucode_minor;
-- u8 ucode_major;
-- __le16 reserved1;
-- u8 sw_rev[8];
-- u8 ver_type;
-- u8 ver_subtype;
-- __le16 reserved2;
-- __le32 log_event_table_ptr;
-- __le32 error_event_table_ptr;
-- __le32 timestamp;
-- __le32 is_valid;
+-#define CSR_GP_CNTRL_REG_MSK_POWER_SAVE_TYPE (0x07000000)
+-#define CSR_GP_CNTRL_REG_FLAG_MAC_POWER_SAVE (0x04000000)
+-#define CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW (0x08000000)
-
--#if IWL == 4965
-- /* calibration values from "initialize" uCode */
-- __le32 voltage; /* signed */
-- __le32 therm_r1[2]; /* signed 1st for normal, 2nd for FAT channel */
-- __le32 therm_r2[2]; /* signed */
-- __le32 therm_r3[2]; /* signed */
-- __le32 therm_r4[2]; /* signed */
-- __le32 tx_atten[5][2]; /* signed MIMO gain comp, 5 freq groups,
-- * 2 Tx chains */
--#endif
--} __attribute__ ((packed));
-
--union tsf {
-- u8 byte[8];
-- __le16 word[4];
-- __le32 dw[2];
--};
+-/* EEPROM REG */
+-#define CSR_EEPROM_REG_READ_VALID_MSK (0x00000001)
+-#define CSR_EEPROM_REG_BIT_CMD (0x00000002)
-
--/*
-- * REPLY_ERROR = 0x2 (response only, not a command)
-- */
--struct iwl_error_resp {
-- __le32 error_type;
-- u8 cmd_id;
-- u8 reserved1;
-- __le16 bad_cmd_seq_num;
--#if IWL == 3945
-- __le16 reserved2;
--#endif
-- __le32 error_info;
-- union tsf timestamp;
--} __attribute__ ((packed));
+-/* EEPROM GP */
+-#define CSR_EEPROM_GP_VALID_MSK (0x00000006)
+-#define CSR_EEPROM_GP_BAD_SIGNATURE (0x00000000)
+-#define CSR_EEPROM_GP_IF_OWNER_MSK (0x00000180)
-
--/******************************************************************************
-- * (1)
-- * RXON Commands & Responses:
-- *
-- *****************************************************************************/
+-/* UCODE DRV GP */
+-#define CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP (0x00000001)
+-#define CSR_UCODE_SW_BIT_RFKILL (0x00000002)
+-#define CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED (0x00000004)
+-#define CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT (0x00000008)
-
--/*
-- * Rx config defines & structure
-- */
--/* rx_config device types */
--enum {
-- RXON_DEV_TYPE_AP = 1,
-- RXON_DEV_TYPE_ESS = 3,
-- RXON_DEV_TYPE_IBSS = 4,
-- RXON_DEV_TYPE_SNIFFER = 6,
--};
+-/* GPIO */
+-#define CSR_GPIO_IN_BIT_AUX_POWER (0x00000200)
+-#define CSR_GPIO_IN_VAL_VAUX_PWR_SRC (0x00000000)
+-#define CSR_GPIO_IN_VAL_VMAIN_PWR_SRC CSR_GPIO_IN_BIT_AUX_POWER
-
--/* rx_config flags */
--/* band & modulation selection */
--#define RXON_FLG_BAND_24G_MSK __constant_cpu_to_le32(1 << 0)
--#define RXON_FLG_CCK_MSK __constant_cpu_to_le32(1 << 1)
--/* auto detection enable */
--#define RXON_FLG_AUTO_DETECT_MSK __constant_cpu_to_le32(1 << 2)
--/* TGg protection when tx */
--#define RXON_FLG_TGG_PROTECT_MSK __constant_cpu_to_le32(1 << 3)
--/* cck short slot & preamble */
--#define RXON_FLG_SHORT_SLOT_MSK __constant_cpu_to_le32(1 << 4)
--#define RXON_FLG_SHORT_PREAMBLE_MSK __constant_cpu_to_le32(1 << 5)
--/* antenna selection */
--#define RXON_FLG_DIS_DIV_MSK __constant_cpu_to_le32(1 << 7)
--#define RXON_FLG_ANT_SEL_MSK __constant_cpu_to_le32(0x0f00)
--#define RXON_FLG_ANT_A_MSK __constant_cpu_to_le32(1 << 8)
--#define RXON_FLG_ANT_B_MSK __constant_cpu_to_le32(1 << 9)
--/* radar detection enable */
--#define RXON_FLG_RADAR_DETECT_MSK __constant_cpu_to_le32(1 << 12)
--#define RXON_FLG_TGJ_NARROW_BAND_MSK __constant_cpu_to_le32(1 << 13)
--/* rx response to host with 8-byte TSF
--* (according to ON_AIR deassertion) */
--#define RXON_FLG_TSF2HOST_MSK __constant_cpu_to_le32(1 << 15)
+-/* GI Chicken Bits */
+-#define CSR_GIO_CHICKEN_BITS_REG_BIT_L1A_NO_L0S_RX (0x00800000)
+-#define CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER (0x20000000)
-
--/* rx_config filter flags */
--/* accept all data frames */
--#define RXON_FILTER_PROMISC_MSK __constant_cpu_to_le32(1 << 0)
--/* pass control & management to host */
--#define RXON_FILTER_CTL2HOST_MSK __constant_cpu_to_le32(1 << 1)
--/* accept multi-cast */
--#define RXON_FILTER_ACCEPT_GRP_MSK __constant_cpu_to_le32(1 << 2)
--/* don't decrypt uni-cast frames */
--#define RXON_FILTER_DIS_DECRYPT_MSK __constant_cpu_to_le32(1 << 3)
--/* don't decrypt multi-cast frames */
--#define RXON_FILTER_DIS_GRP_DECRYPT_MSK __constant_cpu_to_le32(1 << 4)
--/* STA is associated */
--#define RXON_FILTER_ASSOC_MSK __constant_cpu_to_le32(1 << 5)
--/* transfer to host non bssid beacons in associated state */
--#define RXON_FILTER_BCON_AWARE_MSK __constant_cpu_to_le32(1 << 6)
+-/* CSR_ANA_PLL_CFG */
+-#define CSR_ANA_PLL_CFG_SH (0x00880300)
-
--/*
-- * REPLY_RXON = 0x10 (command, has simple generic response)
-- */
--struct iwl_rxon_cmd {
-- u8 node_addr[6];
-- __le16 reserved1;
-- u8 bssid_addr[6];
-- __le16 reserved2;
-- u8 wlap_bssid_addr[6];
-- __le16 reserved3;
-- u8 dev_type;
-- u8 air_propagation;
--#if IWL == 3945
-- __le16 reserved4;
--#elif IWL == 4965
-- __le16 rx_chain;
--#endif
-- u8 ofdm_basic_rates;
-- u8 cck_basic_rates;
-- __le16 assoc_id;
-- __le32 flags;
-- __le32 filter_flags;
-- __le16 channel;
--#if IWL == 3945
-- __le16 reserved5;
--#elif IWL == 4965
-- u8 ofdm_ht_single_stream_basic_rates;
-- u8 ofdm_ht_dual_stream_basic_rates;
--#endif
--} __attribute__ ((packed));
+-#define CSR_LED_REG_TRUN_ON (0x00000078)
+-#define CSR_LED_REG_TRUN_OFF (0x00000038)
+-#define CSR_LED_BSM_CTRL_MSK (0xFFFFFFDF)
-
--/*
-- * REPLY_RXON_ASSOC = 0x11 (command, has simple generic response)
-- */
--struct iwl_rxon_assoc_cmd {
-- __le32 flags;
-- __le32 filter_flags;
-- u8 ofdm_basic_rates;
-- u8 cck_basic_rates;
--#if IWL == 4965
-- u8 ofdm_ht_single_stream_basic_rates;
-- u8 ofdm_ht_dual_stream_basic_rates;
-- __le16 rx_chain_select_flags;
--#endif
-- __le16 reserved;
--} __attribute__ ((packed));
+-/* DRAM_INT_TBL_CTRL */
+-#define CSR_DRAM_INT_TBL_CTRL_EN (1<<31)
+-#define CSR_DRAM_INT_TBL_CTRL_WRAP_CHK (1<<27)
-
--/*
-- * REPLY_RXON_TIMING = 0x14 (command, has simple generic response)
-- */
--struct iwl_rxon_time_cmd {
-- union tsf timestamp;
-- __le16 beacon_interval;
-- __le16 atim_window;
-- __le32 beacon_init_val;
-- __le16 listen_interval;
-- __le16 reserved;
--} __attribute__ ((packed));
+-/*=== HBUS (Host-side Bus) ===*/
+-#define HBUS_BASE (0x400)
-
--struct iwl_tx_power {
-- u8 tx_gain; /* gain for analog radio */
-- u8 dsp_atten; /* gain for DSP */
--} __attribute__ ((packed));
+-#define HBUS_TARG_MEM_RADDR (HBUS_BASE+0x00c)
+-#define HBUS_TARG_MEM_WADDR (HBUS_BASE+0x010)
+-#define HBUS_TARG_MEM_WDAT (HBUS_BASE+0x018)
+-#define HBUS_TARG_MEM_RDAT (HBUS_BASE+0x01c)
+-#define HBUS_TARG_PRPH_WADDR (HBUS_BASE+0x044)
+-#define HBUS_TARG_PRPH_RADDR (HBUS_BASE+0x048)
+-#define HBUS_TARG_PRPH_WDAT (HBUS_BASE+0x04c)
+-#define HBUS_TARG_PRPH_RDAT (HBUS_BASE+0x050)
+-#define HBUS_TARG_WRPTR (HBUS_BASE+0x060)
-
--#if IWL == 3945
--struct iwl_power_per_rate {
-- u8 rate; /* plcp */
-- struct iwl_tx_power tpc;
-- u8 reserved;
--} __attribute__ ((packed));
+-#define HBUS_TARG_MBX_C (HBUS_BASE+0x030)
-
--#elif IWL == 4965
--#define POWER_TABLE_NUM_ENTRIES 33
--#define POWER_TABLE_NUM_HT_OFDM_ENTRIES 32
--#define POWER_TABLE_CCK_ENTRY 32
--struct tx_power_dual_stream {
-- __le32 dw;
--} __attribute__ ((packed));
-
--struct iwl_tx_power_db {
-- struct tx_power_dual_stream power_tbl[POWER_TABLE_NUM_ENTRIES];
--} __attribute__ ((packed));
--#endif
+-/* SCD (Scheduler) */
+-#define SCD_BASE (CSR_BASE + 0x2E00)
+-
+-#define SCD_MODE_REG (SCD_BASE + 0x000)
+-#define SCD_ARASTAT_REG (SCD_BASE + 0x004)
+-#define SCD_TXFACT_REG (SCD_BASE + 0x010)
+-#define SCD_TXF4MF_REG (SCD_BASE + 0x014)
+-#define SCD_TXF5MF_REG (SCD_BASE + 0x020)
+-#define SCD_SBYP_MODE_1_REG (SCD_BASE + 0x02C)
+-#define SCD_SBYP_MODE_2_REG (SCD_BASE + 0x030)
+-
+-/*=== FH (data Flow Handler) ===*/
+-#define FH_BASE (0x800)
+-
+-#define FH_CBCC_TABLE (FH_BASE+0x140)
+-#define FH_TFDB_TABLE (FH_BASE+0x180)
+-#define FH_RCSR_TABLE (FH_BASE+0x400)
+-#define FH_RSSR_TABLE (FH_BASE+0x4c0)
+-#define FH_TCSR_TABLE (FH_BASE+0x500)
+-#define FH_TSSR_TABLE (FH_BASE+0x680)
+-
+-/* TFDB (Transmit Frame Buffer Descriptor) */
+-#define FH_TFDB(_channel, buf) \
+- (FH_TFDB_TABLE+((_channel)*2+(buf))*0x28)
+-#define ALM_FH_TFDB_CHNL_BUF_CTRL_REG(_channel) \
+- (FH_TFDB_TABLE + 0x50 * _channel)
+-/* CBCC _channel is [0,2] */
+-#define FH_CBCC(_channel) (FH_CBCC_TABLE+(_channel)*0x8)
+-#define FH_CBCC_CTRL(_channel) (FH_CBCC(_channel)+0x00)
+-#define FH_CBCC_BASE(_channel) (FH_CBCC(_channel)+0x04)
+-
+-/* RCSR _channel is [0,2] */
+-#define FH_RCSR(_channel) (FH_RCSR_TABLE+(_channel)*0x40)
+-#define FH_RCSR_CONFIG(_channel) (FH_RCSR(_channel)+0x00)
+-#define FH_RCSR_RBD_BASE(_channel) (FH_RCSR(_channel)+0x04)
+-#define FH_RCSR_WPTR(_channel) (FH_RCSR(_channel)+0x20)
+-#define FH_RCSR_RPTR_ADDR(_channel) (FH_RCSR(_channel)+0x24)
-
--/*
-- * REPLY_CHANNEL_SWITCH = 0x72 (command, has simple generic response)
-- */
--struct iwl_channel_switch_cmd {
-- u8 band;
-- u8 expect_beacon;
-- __le16 channel;
-- __le32 rxon_flags;
-- __le32 rxon_filter_flags;
-- __le32 switch_time;
-#if IWL == 3945
-- struct iwl_power_per_rate power[IWL_MAX_RATES];
+-#define FH_RSCSR_CHNL0_WPTR (FH_RCSR_WPTR(0))
-#elif IWL == 4965
-- struct iwl_tx_power_db tx_power;
+-#define FH_RSCSR_CHNL0_WPTR (FH_RSCSR_CHNL0_RBDCB_WPTR_REG)
-#endif
--} __attribute__ ((packed));
-
--/*
-- * CHANNEL_SWITCH_NOTIFICATION = 0x73 (notification only, not a command)
-- */
--struct iwl_csa_notification {
-- __le16 band;
-- __le16 channel;
-- __le32 status; /* 0 - OK, 1 - fail */
--} __attribute__ ((packed));
+-/* RSSR */
+-#define FH_RSSR_CTRL (FH_RSSR_TABLE+0x000)
+-#define FH_RSSR_STATUS (FH_RSSR_TABLE+0x004)
+-/* TCSR */
+-#define FH_TCSR(_channel) (FH_TCSR_TABLE+(_channel)*0x20)
+-#define FH_TCSR_CONFIG(_channel) (FH_TCSR(_channel)+0x00)
+-#define FH_TCSR_CREDIT(_channel) (FH_TCSR(_channel)+0x04)
+-#define FH_TCSR_BUFF_STTS(_channel) (FH_TCSR(_channel)+0x08)
+-/* TSSR */
+-#define FH_TSSR_CBB_BASE (FH_TSSR_TABLE+0x000)
+-#define FH_TSSR_MSG_CONFIG (FH_TSSR_TABLE+0x008)
+-#define FH_TSSR_TX_STATUS (FH_TSSR_TABLE+0x010)
+-/* 18 - reserved */
-
--/******************************************************************************
-- * (2)
-- * Quality-of-Service (QOS) Commands & Responses:
-- *
-- *****************************************************************************/
--struct iwl_ac_qos {
-- __le16 cw_min;
-- __le16 cw_max;
-- u8 aifsn;
-- u8 reserved1;
-- __le16 edca_txop;
--} __attribute__ ((packed));
+-/* card static random access memory (SRAM) for processor data and instructs */
+-#define RTC_INST_LOWER_BOUND (0x000000)
+-#define RTC_DATA_LOWER_BOUND (0x800000)
-
--/* QoS flags defines */
--#define QOS_PARAM_FLG_UPDATE_EDCA_MSK __constant_cpu_to_le32(0x01)
--#define QOS_PARAM_FLG_TGN_MSK __constant_cpu_to_le32(0x02)
--#define QOS_PARAM_FLG_TXOP_TYPE_MSK __constant_cpu_to_le32(0x10)
-
--/*
-- * TXFIFO Queue number defines
-- */
--/* number of Access categories (AC) (EDCA), queues 0..3 */
--#define AC_NUM 4
+-/* DBM */
-
--/*
-- * REPLY_QOS_PARAM = 0x13 (command, has simple generic response)
-- */
--struct iwl_qosparam_cmd {
-- __le32 qos_flags;
-- struct iwl_ac_qos ac[AC_NUM];
--} __attribute__ ((packed));
+-#define ALM_FH_SRVC_CHNL (6)
-
--/******************************************************************************
-- * (3)
-- * Add/Modify Stations Commands & Responses:
-- *
-- *****************************************************************************/
--/*
-- * Multi station support
-- */
--#define IWL_AP_ID 0
--#define IWL_MULTICAST_ID 1
--#define IWL_STA_ID 2
+-#define ALM_FH_RCSR_RX_CONFIG_REG_POS_RBDC_SIZE (20)
+-#define ALM_FH_RCSR_RX_CONFIG_REG_POS_IRQ_RBTH (4)
-
--#define IWL3945_BROADCAST_ID 24
--#define IWL3945_STATION_COUNT 25
+-#define ALM_FH_RCSR_RX_CONFIG_REG_BIT_WR_STTS_EN (0x08000000)
-
--#define IWL4965_BROADCAST_ID 31
--#define IWL4965_STATION_COUNT 32
+-#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_DMA_CHNL_EN_ENABLE (0x80000000)
-
--#define IWL_STATION_COUNT 32 /* MAX(3945,4965)*/
--#define IWL_INVALID_STATION 255
+-#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_RDRBD_EN_ENABLE (0x20000000)
-
--#if IWL == 3945
--#define STA_FLG_TX_RATE_MSK __constant_cpu_to_le32(1<<2);
--#endif
--#define STA_FLG_PWR_SAVE_MSK __constant_cpu_to_le32(1<<8);
+-#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_MAX_FRAG_SIZE_128 (0x01000000)
-
--#define STA_CONTROL_MODIFY_MSK 0x01
+-#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_IRQ_DEST_INT_HOST (0x00001000)
-
--/* key flags __le16*/
--#define STA_KEY_FLG_ENCRYPT_MSK __constant_cpu_to_le16(0x7)
--#define STA_KEY_FLG_NO_ENC __constant_cpu_to_le16(0x0)
--#define STA_KEY_FLG_WEP __constant_cpu_to_le16(0x1)
--#define STA_KEY_FLG_CCMP __constant_cpu_to_le16(0x2)
--#define STA_KEY_FLG_TKIP __constant_cpu_to_le16(0x3)
+-#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_MSG_MODE_FH (0x00000000)
-
--#define STA_KEY_FLG_KEYID_POS 8
--#define STA_KEY_FLG_INVALID __constant_cpu_to_le16(0x0800)
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_TXF (0x00000000)
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_DRIVER (0x00000001)
-
--/* modify flags */
--#define STA_MODIFY_KEY_MASK 0x01
--#define STA_MODIFY_TID_DISABLE_TX 0x02
--#define STA_MODIFY_TX_RATE_MSK 0x04
--#define STA_MODIFY_ADDBA_TID_MSK 0x08
--#define STA_MODIFY_DELBA_TID_MSK 0x10
--#define BUILD_RAxTID(sta_id, tid) (((sta_id) << 4) + (tid))
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE_VAL (0x00000000)
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE_VAL (0x00000008)
-
--/*
-- * Antenna masks:
-- * bit14:15 01 B inactive, A active
-- * 10 B active, A inactive
-- * 11 Both active
-- */
--#define RATE_MCS_ANT_A_POS 14
--#define RATE_MCS_ANT_B_POS 15
--#define RATE_MCS_ANT_A_MSK 0x4000
--#define RATE_MCS_ANT_B_MSK 0x8000
--#define RATE_MCS_ANT_AB_MSK 0xc000
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_IFTFD (0x00200000)
-
--struct iwl_keyinfo {
-- __le16 key_flags;
-- u8 tkip_rx_tsc_byte2; /* TSC[2] for key mix ph1 detection */
-- u8 reserved1;
-- __le16 tkip_rx_ttak[5]; /* 10-byte unicast TKIP TTAK */
-- __le16 reserved2;
-- u8 key[16]; /* 16-byte unicast decryption key */
--} __attribute__ ((packed));
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_RTC_NOINT (0x00000000)
-
--struct sta_id_modify {
-- u8 addr[ETH_ALEN];
-- __le16 reserved1;
-- u8 sta_id;
-- u8 modify_mask;
-- __le16 reserved2;
--} __attribute__ ((packed));
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE (0x00000000)
+-#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE (0x80000000)
-
--/*
-- * REPLY_ADD_STA = 0x18 (command)
-- */
--struct iwl_addsta_cmd {
-- u8 mode;
-- u8 reserved[3];
-- struct sta_id_modify sta;
-- struct iwl_keyinfo key;
-- __le32 station_flags;
-- __le32 station_flags_msk;
-- __le16 tid_disable_tx;
--#if IWL == 3945
-- __le16 rate_n_flags;
--#else
-- __le16 reserved1;
--#endif
-- u8 add_immediate_ba_tid;
-- u8 remove_immediate_ba_tid;
-- __le16 add_immediate_ba_ssn;
--#if IWL == 4965
-- __le32 reserved2;
--#endif
--} __attribute__ ((packed));
+-#define ALM_FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_VALID (0x00004000)
-
--/*
-- * REPLY_ADD_STA = 0x18 (response)
-- */
--struct iwl_add_sta_resp {
-- u8 status;
--} __attribute__ ((packed));
+-#define ALM_FH_TCSR_CHNL_TX_BUF_STS_REG_BIT_TFDB_WPTR (0x00000001)
-
--#define ADD_STA_SUCCESS_MSK 0x1
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_SNOOP_RD_TXPD_ON (0xFF000000)
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RD_TXPD_ON (0x00FF0000)
-
--/******************************************************************************
-- * (4)
-- * Rx Responses:
-- *
-- *****************************************************************************/
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_MAX_FRAG_SIZE_128B (0x00000400)
-
--struct iwl_rx_frame_stats {
-- u8 phy_count;
-- u8 id;
-- u8 rssi;
-- u8 agc;
-- __le16 sig_avg;
-- __le16 noise_diff;
-- u8 payload[0];
--} __attribute__ ((packed));
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_SNOOP_RD_TFD_ON (0x00000100)
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RD_CBB_ON (0x00000080)
-
--struct iwl_rx_frame_hdr {
-- __le16 channel;
-- __le16 phy_flags;
-- u8 reserved1;
-- u8 rate;
-- __le16 len;
-- u8 payload[0];
--} __attribute__ ((packed));
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RSP_WAIT_TH (0x00000020)
+-#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_RSP_WAIT_TH (0x00000005)
-
--#define RX_RES_STATUS_NO_CRC32_ERROR __constant_cpu_to_le32(1 << 0)
--#define RX_RES_STATUS_NO_RXE_OVERFLOW __constant_cpu_to_le32(1 << 1)
+-#define ALM_TB_MAX_BYTES_COUNT (0xFFF0)
-
--#define RX_RES_PHY_FLAGS_BAND_24_MSK __constant_cpu_to_le16(1 << 0)
--#define RX_RES_PHY_FLAGS_MOD_CCK_MSK __constant_cpu_to_le16(1 << 1)
--#define RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK __constant_cpu_to_le16(1 << 2)
--#define RX_RES_PHY_FLAGS_NARROW_BAND_MSK __constant_cpu_to_le16(1 << 3)
--#define RX_RES_PHY_FLAGS_ANTENNA_MSK __constant_cpu_to_le16(0xf0)
+-#define ALM_FH_TSSR_TX_STATUS_REG_BIT_BUFS_EMPTY(_channel) \
+- ((1LU << _channel) << 24)
+-#define ALM_FH_TSSR_TX_STATUS_REG_BIT_NO_PEND_REQ(_channel) \
+- ((1LU << _channel) << 16)
-
--#define RX_RES_STATUS_SEC_TYPE_MSK (0x7 << 8)
--#define RX_RES_STATUS_SEC_TYPE_NONE (0x0 << 8)
--#define RX_RES_STATUS_SEC_TYPE_WEP (0x1 << 8)
--#define RX_RES_STATUS_SEC_TYPE_CCMP (0x2 << 8)
--#define RX_RES_STATUS_SEC_TYPE_TKIP (0x3 << 8)
+-#define ALM_FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE(_channel) \
+- (ALM_FH_TSSR_TX_STATUS_REG_BIT_BUFS_EMPTY(_channel) | \
+- ALM_FH_TSSR_TX_STATUS_REG_BIT_NO_PEND_REQ(_channel))
+-#define PCI_CFG_REV_ID_BIT_BASIC_SKU (0x40) /* bit 6 */
+-#define PCI_CFG_REV_ID_BIT_RTP (0x80) /* bit 7 */
-
--#define RX_RES_STATUS_DECRYPT_TYPE_MSK (0x3 << 11)
--#define RX_RES_STATUS_NOT_DECRYPT (0x0 << 11)
--#define RX_RES_STATUS_DECRYPT_OK (0x3 << 11)
--#define RX_RES_STATUS_BAD_ICV_MIC (0x1 << 11)
--#define RX_RES_STATUS_BAD_KEY_TTAK (0x2 << 11)
+-#define HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED (0x00000004)
-
--struct iwl_rx_frame_end {
-- __le32 status;
-- __le64 timestamp;
-- __le32 beacon_timestamp;
--} __attribute__ ((packed));
+-#define TFD_QUEUE_MIN 0
+-#define TFD_QUEUE_MAX 6
+-#define TFD_QUEUE_SIZE_MAX (256)
-
--/*
-- * REPLY_3945_RX = 0x1b (response only, not a command)
-- *
-- * NOTE: DO NOT dereference from casts to this structure
-- * It is provided only for calculating minimum data set size.
-- * The actual offsets of the hdr and end are dynamic based on
-- * stats.phy_count
-- */
--struct iwl_rx_frame {
-- struct iwl_rx_frame_stats stats;
-- struct iwl_rx_frame_hdr hdr;
-- struct iwl_rx_frame_end end;
--} __attribute__ ((packed));
+-/* spectrum and channel data structures */
+-#define IWL_NUM_SCAN_RATES (2)
-
--/* Fixed (non-configurable) rx data from phy */
--#define RX_PHY_FLAGS_ANTENNAE_OFFSET (4)
--#define RX_PHY_FLAGS_ANTENNAE_MASK (0x70)
--#define IWL_AGC_DB_MASK (0x3f80) /* MASK(7,13) */
--#define IWL_AGC_DB_POS (7)
--struct iwl4965_rx_non_cfg_phy {
-- __le16 ant_selection; /* ant A bit 4, ant B bit 5, ant C bit 6 */
-- __le16 agc_info; /* agc code 0:6, agc dB 7:13, reserved 14:15 */
-- u8 rssi_info[6]; /* we use even entries, 0/2/4 for A/B/C rssi */
-- u8 pad[0];
--} __attribute__ ((packed));
+-#define IWL_SCAN_FLAG_24GHZ (1<<0)
+-#define IWL_SCAN_FLAG_52GHZ (1<<1)
+-#define IWL_SCAN_FLAG_ACTIVE (1<<2)
+-#define IWL_SCAN_FLAG_DIRECT (1<<3)
-
--/*
-- * REPLY_4965_RX = 0xc3 (response only, not a command)
-- * Used only for legacy (non 11n) frames.
-- */
--#define RX_RES_PHY_CNT 14
--struct iwl4965_rx_phy_res {
-- u8 non_cfg_phy_cnt; /* non configurable DSP phy data byte count */
-- u8 cfg_phy_cnt; /* configurable DSP phy data byte count */
-- u8 stat_id; /* configurable DSP phy data set ID */
-- u8 reserved1;
-- __le64 timestamp; /* TSF at on air rise */
-- __le32 beacon_time_stamp; /* beacon at on-air rise */
-- __le16 phy_flags; /* general phy flags: band, modulation, ... */
-- __le16 channel; /* channel number */
-- __le16 non_cfg_phy[RX_RES_PHY_CNT]; /* upto 14 phy entries */
-- __le32 reserved2;
-- __le32 rate_n_flags;
-- __le16 byte_count; /* frame's byte-count */
-- __le16 reserved3;
--} __attribute__ ((packed));
+-#define IWL_MAX_CMD_SIZE 1024
-
--struct iwl4965_rx_mpdu_res_start {
-- __le16 byte_count;
-- __le16 reserved;
--} __attribute__ ((packed));
+-#define IWL_DEFAULT_TX_RETRY 15
+-#define IWL_MAX_TX_RETRY 16
-
+-/*********************************************/
-
--/******************************************************************************
-- * (5)
-- * Tx Commands & Responses:
-- *
-- *****************************************************************************/
+-#define RFD_SIZE 4
+-#define NUM_TFD_CHUNKS 4
-
--/* Tx flags */
--#define TX_CMD_FLG_RTS_MSK __constant_cpu_to_le32(1 << 1)
--#define TX_CMD_FLG_CTS_MSK __constant_cpu_to_le32(1 << 2)
--#define TX_CMD_FLG_ACK_MSK __constant_cpu_to_le32(1 << 3)
--#define TX_CMD_FLG_STA_RATE_MSK __constant_cpu_to_le32(1 << 4)
--#define TX_CMD_FLG_IMM_BA_RSP_MASK __constant_cpu_to_le32(1 << 6)
--#define TX_CMD_FLG_FULL_TXOP_PROT_MSK __constant_cpu_to_le32(1 << 7)
--#define TX_CMD_FLG_ANT_SEL_MSK __constant_cpu_to_le32(0xf00)
--#define TX_CMD_FLG_ANT_A_MSK __constant_cpu_to_le32(1 << 8)
--#define TX_CMD_FLG_ANT_B_MSK __constant_cpu_to_le32(1 << 9)
+-#define RX_QUEUE_SIZE 256
+-#define RX_QUEUE_MASK 255
+-#define RX_QUEUE_SIZE_LOG 8
-
--/* ucode ignores BT priority for this frame */
--#define TX_CMD_FLG_BT_DIS_MSK __constant_cpu_to_le32(1 << 12)
+-/* QoS definitions */
-
--/* ucode overrides sequence control */
--#define TX_CMD_FLG_SEQ_CTL_MSK __constant_cpu_to_le32(1 << 13)
+-#define CW_MIN_OFDM 15
+-#define CW_MAX_OFDM 1023
+-#define CW_MIN_CCK 31
+-#define CW_MAX_CCK 1023
-
--/* signal that this frame is non-last MPDU */
--#define TX_CMD_FLG_MORE_FRAG_MSK __constant_cpu_to_le32(1 << 14)
+-#define QOS_TX0_CW_MIN_OFDM CW_MIN_OFDM
+-#define QOS_TX1_CW_MIN_OFDM CW_MIN_OFDM
+-#define QOS_TX2_CW_MIN_OFDM ((CW_MIN_OFDM + 1) / 2 - 1)
+-#define QOS_TX3_CW_MIN_OFDM ((CW_MIN_OFDM + 1) / 4 - 1)
-
--/* calculate TSF in outgoing frame */
--#define TX_CMD_FLG_TSF_MSK __constant_cpu_to_le32(1 << 16)
+-#define QOS_TX0_CW_MIN_CCK CW_MIN_CCK
+-#define QOS_TX1_CW_MIN_CCK CW_MIN_CCK
+-#define QOS_TX2_CW_MIN_CCK ((CW_MIN_CCK + 1) / 2 - 1)
+-#define QOS_TX3_CW_MIN_CCK ((CW_MIN_CCK + 1) / 4 - 1)
-
--/* activate TX calibration. */
--#define TX_CMD_FLG_CALIB_MSK __constant_cpu_to_le32(1 << 17)
+-#define QOS_TX0_CW_MAX_OFDM CW_MAX_OFDM
+-#define QOS_TX1_CW_MAX_OFDM CW_MAX_OFDM
+-#define QOS_TX2_CW_MAX_OFDM CW_MIN_OFDM
+-#define QOS_TX3_CW_MAX_OFDM ((CW_MIN_OFDM + 1) / 2 - 1)
-
--/* signals that 2 bytes pad was inserted
-- after the MAC header */
--#define TX_CMD_FLG_MH_PAD_MSK __constant_cpu_to_le32(1 << 20)
+-#define QOS_TX0_CW_MAX_CCK CW_MAX_CCK
+-#define QOS_TX1_CW_MAX_CCK CW_MAX_CCK
+-#define QOS_TX2_CW_MAX_CCK CW_MIN_CCK
+-#define QOS_TX3_CW_MAX_CCK ((CW_MIN_CCK + 1) / 2 - 1)
-
--/* HCCA-AP - disable duration overwriting. */
--#define TX_CMD_FLG_DUR_MSK __constant_cpu_to_le32(1 << 25)
+-#define QOS_TX0_AIFS 3
+-#define QOS_TX1_AIFS 7
+-#define QOS_TX2_AIFS 2
+-#define QOS_TX3_AIFS 2
-
--/*
-- * TX command security control
-- */
--#define TX_CMD_SEC_WEP 0x01
--#define TX_CMD_SEC_CCM 0x02
--#define TX_CMD_SEC_TKIP 0x03
--#define TX_CMD_SEC_MSK 0x03
--#define TX_CMD_SEC_SHIFT 6
--#define TX_CMD_SEC_KEY128 0x08
+-#define QOS_TX0_ACM 0
+-#define QOS_TX1_ACM 0
+-#define QOS_TX2_ACM 0
+-#define QOS_TX3_ACM 0
+-
+-#define QOS_TX0_TXOP_LIMIT_CCK 0
+-#define QOS_TX1_TXOP_LIMIT_CCK 0
+-#define QOS_TX2_TXOP_LIMIT_CCK 6016
+-#define QOS_TX3_TXOP_LIMIT_CCK 3264
+-
+-#define QOS_TX0_TXOP_LIMIT_OFDM 0
+-#define QOS_TX1_TXOP_LIMIT_OFDM 0
+-#define QOS_TX2_TXOP_LIMIT_OFDM 3008
+-#define QOS_TX3_TXOP_LIMIT_OFDM 1504
+-
+-#define DEF_TX0_CW_MIN_OFDM CW_MIN_OFDM
+-#define DEF_TX1_CW_MIN_OFDM CW_MIN_OFDM
+-#define DEF_TX2_CW_MIN_OFDM CW_MIN_OFDM
+-#define DEF_TX3_CW_MIN_OFDM CW_MIN_OFDM
+-
+-#define DEF_TX0_CW_MIN_CCK CW_MIN_CCK
+-#define DEF_TX1_CW_MIN_CCK CW_MIN_CCK
+-#define DEF_TX2_CW_MIN_CCK CW_MIN_CCK
+-#define DEF_TX3_CW_MIN_CCK CW_MIN_CCK
+-
+-#define DEF_TX0_CW_MAX_OFDM CW_MAX_OFDM
+-#define DEF_TX1_CW_MAX_OFDM CW_MAX_OFDM
+-#define DEF_TX2_CW_MAX_OFDM CW_MAX_OFDM
+-#define DEF_TX3_CW_MAX_OFDM CW_MAX_OFDM
+-
+-#define DEF_TX0_CW_MAX_CCK CW_MAX_CCK
+-#define DEF_TX1_CW_MAX_CCK CW_MAX_CCK
+-#define DEF_TX2_CW_MAX_CCK CW_MAX_CCK
+-#define DEF_TX3_CW_MAX_CCK CW_MAX_CCK
+-
+-#define DEF_TX0_AIFS (2)
+-#define DEF_TX1_AIFS (2)
+-#define DEF_TX2_AIFS (2)
+-#define DEF_TX3_AIFS (2)
+-
+-#define DEF_TX0_ACM 0
+-#define DEF_TX1_ACM 0
+-#define DEF_TX2_ACM 0
+-#define DEF_TX3_ACM 0
+-
+-#define DEF_TX0_TXOP_LIMIT_CCK 0
+-#define DEF_TX1_TXOP_LIMIT_CCK 0
+-#define DEF_TX2_TXOP_LIMIT_CCK 0
+-#define DEF_TX3_TXOP_LIMIT_CCK 0
+-
+-#define DEF_TX0_TXOP_LIMIT_OFDM 0
+-#define DEF_TX1_TXOP_LIMIT_OFDM 0
+-#define DEF_TX2_TXOP_LIMIT_OFDM 0
+-#define DEF_TX3_TXOP_LIMIT_OFDM 0
+-
+-#define QOS_QOS_SETS 3
+-#define QOS_PARAM_SET_ACTIVE 0
+-#define QOS_PARAM_SET_DEF_CCK 1
+-#define QOS_PARAM_SET_DEF_OFDM 2
+-
+-#define CTRL_QOS_NO_ACK (0x0020)
+-#define DCT_FLAG_EXT_QOS_ENABLED (0x10)
+-
+-#define U32_PAD(n) ((4-(n))&0x3)
-
-/*
-- * TX command Frame life time
+- * Generic queue structure
+- *
+- * Contains common data for Rx and Tx queues
- */
+-#define TFD_CTL_COUNT_SET(n) (n<<24)
+-#define TFD_CTL_COUNT_GET(ctl) ((ctl>>24) & 7)
+-#define TFD_CTL_PAD_SET(n) (n<<28)
+-#define TFD_CTL_PAD_GET(ctl) (ctl>>28)
-
--struct iwl_dram_scratch {
-- u8 try_cnt;
-- u8 bt_kill_cnt;
-- __le16 reserved;
--} __attribute__ ((packed));
+-#define TFD_TX_CMD_SLOTS 256
+-#define TFD_CMD_SLOTS 32
+-
+-#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_cmd) - \
+- sizeof(struct iwl_cmd_meta))
-
-/*
-- * REPLY_TX = 0x1c (command)
+- * RX related structures and functions
- */
--struct iwl_tx_cmd {
-- __le16 len;
-- __le16 next_frame_len;
-- __le32 tx_flags;
--#if IWL == 3945
-- u8 rate;
-- u8 sta_id;
-- u8 tid_tspec;
--#elif IWL == 4965
-- struct iwl_dram_scratch scratch;
-- __le32 rate_n_flags;
-- u8 sta_id;
--#endif
-- u8 sec_ctl;
--#if IWL == 4965
-- u8 initial_rate_index;
-- u8 reserved;
--#endif
-- u8 key[16];
--#if IWL == 3945
-- union {
-- u8 byte[8];
-- __le16 word[4];
-- __le32 dw[2];
-- } tkip_mic;
-- __le32 next_frame_info;
--#elif IWL == 4965
-- __le16 next_frame_flags;
-- __le16 reserved2;
--#endif
-- union {
-- __le32 life_time;
-- __le32 attempt;
-- } stop_time;
--#if IWL == 3945
-- u8 supp_rates[2];
--#elif IWL == 4965
-- __le32 dram_lsb_ptr;
-- u8 dram_msb_ptr;
--#endif
-- u8 rts_retry_limit; /*byte 50 */
-- u8 data_retry_limit; /*byte 51 */
--#if IWL == 4965
-- u8 tid_tspec;
--#endif
-- union {
-- __le16 pm_frame_timeout;
-- __le16 attempt_duration;
-- } timeout;
-- __le16 driver_txop;
-- u8 payload[0];
-- struct ieee80211_hdr hdr[0];
--} __attribute__ ((packed));
+-#define RX_FREE_BUFFERS 64
+-#define RX_LOW_WATERMARK 8
-
--/* TX command response is sent after *all* transmission attempts.
+-#endif /* __iwlwifi_hw_h__ */
+diff --git a/drivers/net/wireless/iwlwifi/iwl-io.h b/drivers/net/wireless/iwlwifi/iwl-io.h
+deleted file mode 100644
+index 8a8b96f..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-io.h
++++ /dev/null
+@@ -1,470 +0,0 @@
+-/******************************************************************************
- *
-- * NOTES:
+- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
- *
-- * TX_STATUS_FAIL_NEXT_FRAG
+- * Portions of this file are derived from the ipw3945 project.
- *
-- * If the fragment flag in the MAC header for the frame being transmitted
-- * is set and there is insufficient time to transmit the next frame, the
-- * TX status will be returned with 'TX_STATUS_FAIL_NEXT_FRAG'.
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of version 2 of the GNU General Public License as
+- * published by the Free Software Foundation.
- *
-- * TX_STATUS_FIFO_UNDERRUN
+- * This program is distributed in the hope that it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+- * more details.
- *
-- * Indicates the host did not provide bytes to the FIFO fast enough while
-- * a TX was in progress.
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
- *
-- * TX_STATUS_FAIL_MGMNT_ABORT
+- * The full GNU General Public License is included in this distribution in the
+- * file called LICENSE.
- *
-- * This status is only possible if the ABORT ON MGMT RX parameter was
-- * set to true with the TX command.
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- *****************************************************************************/
+-
+-#ifndef __iwl_io_h__
+-#define __iwl_io_h__
+-
+-#include <linux/io.h>
+-
+-#include "iwl-debug.h"
+-
+-/*
+- * IO, register, and NIC memory access functions
+- *
+- * NOTE on naming convention and macro usage for these
+- *
+- * A single _ prefix before a an access function means that no state
+- * check or debug information is printed when that function is called.
+- *
+- * A double __ prefix before an access function means that state is checked
+- * (in the case of *restricted calls) and the current line number is printed
+- * in addition to any other debug output.
+- *
+- * The non-prefixed name is the #define that maps the caller into a
+- * #define that provides the caller's __LINE__ to the double prefix version.
+- *
+- * If you wish to call the function without any debug or state checking,
+- * you should use the single _ prefix version (as is used by dependent IO
+- * routines, for example _iwl_read_restricted calls the non-check version of
+- * _iwl_read32.)
+- *
+- * These declarations are *extremely* useful in quickly isolating code deltas
+- * which result in misconfiguring of the hardware I/O. In combination with
+- * git-bisect and the IO debug level you can quickly determine the specific
+- * commit which breaks the IO sequence to the hardware.
- *
-- * If the MSB of the status parameter is set then an abort sequence is
-- * required. This sequence consists of the host activating the TX Abort
-- * control line, and then waiting for the TX Abort command response. This
-- * indicates that a the device is no longer in a transmit state, and that the
-- * command FIFO has been cleared. The host must then deactivate the TX Abort
-- * control line. Receiving is still allowed in this case.
- */
--enum {
-- TX_STATUS_SUCCESS = 0x01,
-- TX_STATUS_DIRECT_DONE = 0x02,
-- TX_STATUS_FAIL_SHORT_LIMIT = 0x82,
-- TX_STATUS_FAIL_LONG_LIMIT = 0x83,
-- TX_STATUS_FAIL_FIFO_UNDERRUN = 0x84,
-- TX_STATUS_FAIL_MGMNT_ABORT = 0x85,
-- TX_STATUS_FAIL_NEXT_FRAG = 0x86,
-- TX_STATUS_FAIL_LIFE_EXPIRE = 0x87,
-- TX_STATUS_FAIL_DEST_PS = 0x88,
-- TX_STATUS_FAIL_ABORTED = 0x89,
-- TX_STATUS_FAIL_BT_RETRY = 0x8a,
-- TX_STATUS_FAIL_STA_INVALID = 0x8b,
-- TX_STATUS_FAIL_FRAG_DROPPED = 0x8c,
-- TX_STATUS_FAIL_TID_DISABLE = 0x8d,
-- TX_STATUS_FAIL_FRAME_FLUSHED = 0x8e,
-- TX_STATUS_FAIL_INSUFFICIENT_CF_POLL = 0x8f,
-- TX_STATUS_FAIL_TX_LOCKED = 0x90,
-- TX_STATUS_FAIL_NO_BEACON_ON_RADAR = 0x91,
--};
-
--#define TX_PACKET_MODE_REGULAR 0x0000
--#define TX_PACKET_MODE_BURST_SEQ 0x0100
--#define TX_PACKET_MODE_BURST_FIRST 0x0200
+-#define _iwl_write32(iwl, ofs, val) writel((val), (iwl)->hw_base + (ofs))
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_write32(const char *f, u32 l, struct iwl_priv *iwl,
+- u32 ofs, u32 val)
+-{
+- IWL_DEBUG_IO("write_direct32(0x%08X, 0x%08X) - %s %d\n",
+- (u32) (ofs), (u32) (val), f, l);
+- _iwl_write32(iwl, ofs, val);
+-}
+-#define iwl_write32(iwl, ofs, val) \
+- __iwl_write32(__FILE__, __LINE__, iwl, ofs, val)
+-#else
+-#define iwl_write32(iwl, ofs, val) _iwl_write32(iwl, ofs, val)
+-#endif
-
--enum {
-- TX_POWER_PA_NOT_ACTIVE = 0x0,
--};
+-#define _iwl_read32(iwl, ofs) readl((iwl)->hw_base + (ofs))
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline u32 __iwl_read32(char *f, u32 l, struct iwl_priv *iwl, u32 ofs)
+-{
+- IWL_DEBUG_IO("read_direct32(0x%08X) - %s %d\n", ofs, f, l);
+- return _iwl_read32(iwl, ofs);
+-}
+-#define iwl_read32(iwl, ofs) __iwl_read32(__FILE__, __LINE__, iwl, ofs)
+-#else
+-#define iwl_read32(p, o) _iwl_read32(p, o)
+-#endif
-
--enum {
-- TX_STATUS_MSK = 0x000000ff, /* bits 0:7 */
-- TX_STATUS_DELAY_MSK = 0x00000040,
-- TX_STATUS_ABORT_MSK = 0x00000080,
-- TX_PACKET_MODE_MSK = 0x0000ff00, /* bits 8:15 */
-- TX_FIFO_NUMBER_MSK = 0x00070000, /* bits 16:18 */
-- TX_RESERVED = 0x00780000, /* bits 19:22 */
-- TX_POWER_PA_DETECT_MSK = 0x7f800000, /* bits 23:30 */
-- TX_ABORT_REQUIRED_MSK = 0x80000000, /* bits 31:31 */
--};
+-static inline int _iwl_poll_bit(struct iwl_priv *priv, u32 addr,
+- u32 bits, u32 mask, int timeout)
+-{
+- int i = 0;
-
--/* *******************************
-- * TX aggregation state
-- ******************************* */
+- do {
+- if ((_iwl_read32(priv, addr) & mask) == (bits & mask))
+- return i;
+- mdelay(10);
+- i += 10;
+- } while (i < timeout);
-
--enum {
-- AGG_TX_STATE_TRANSMITTED = 0x00,
-- AGG_TX_STATE_UNDERRUN_MSK = 0x01,
-- AGG_TX_STATE_BT_PRIO_MSK = 0x02,
-- AGG_TX_STATE_FEW_BYTES_MSK = 0x04,
-- AGG_TX_STATE_ABORT_MSK = 0x08,
-- AGG_TX_STATE_LAST_SENT_TTL_MSK = 0x10,
-- AGG_TX_STATE_LAST_SENT_TRY_CNT_MSK = 0x20,
-- AGG_TX_STATE_LAST_SENT_BT_KILL_MSK = 0x40,
-- AGG_TX_STATE_SCD_QUERY_MSK = 0x80,
-- AGG_TX_STATE_TEST_BAD_CRC32_MSK = 0x100,
-- AGG_TX_STATE_RESPONSE_MSK = 0x1ff,
-- AGG_TX_STATE_DUMP_TX_MSK = 0x200,
-- AGG_TX_STATE_DELAY_TX_MSK = 0x400
--};
+- return -ETIMEDOUT;
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline int __iwl_poll_bit(const char *f, u32 l,
+- struct iwl_priv *priv, u32 addr,
+- u32 bits, u32 mask, int timeout)
+-{
+- int rc = _iwl_poll_bit(priv, addr, bits, mask, timeout);
+- if (unlikely(rc == -ETIMEDOUT))
+- IWL_DEBUG_IO
+- ("poll_bit(0x%08X, 0x%08X, 0x%08X) - timedout - %s %d\n",
+- addr, bits, mask, f, l);
+- else
+- IWL_DEBUG_IO
+- ("poll_bit(0x%08X, 0x%08X, 0x%08X) = 0x%08X - %s %d\n",
+- addr, bits, mask, rc, f, l);
+- return rc;
+-}
+-#define iwl_poll_bit(iwl, addr, bits, mask, timeout) \
+- __iwl_poll_bit(__FILE__, __LINE__, iwl, addr, bits, mask, timeout)
+-#else
+-#define iwl_poll_bit(p, a, b, m, t) _iwl_poll_bit(p, a, b, m, t)
+-#endif
-
--#define AGG_TX_STATE_LAST_SENT_MSK \
--(AGG_TX_STATE_LAST_SENT_TTL_MSK | \
-- AGG_TX_STATE_LAST_SENT_TRY_CNT_MSK | \
-- AGG_TX_STATE_LAST_SENT_BT_KILL_MSK)
+-static inline void _iwl_set_bit(struct iwl_priv *priv, u32 reg, u32 mask)
+-{
+- _iwl_write32(priv, reg, _iwl_read32(priv, reg) | mask);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_set_bit(const char *f, u32 l,
+- struct iwl_priv *priv, u32 reg, u32 mask)
+-{
+- u32 val = _iwl_read32(priv, reg) | mask;
+- IWL_DEBUG_IO("set_bit(0x%08X, 0x%08X) = 0x%08X\n", reg, mask, val);
+- _iwl_write32(priv, reg, val);
+-}
+-#define iwl_set_bit(p, r, m) __iwl_set_bit(__FILE__, __LINE__, p, r, m)
+-#else
+-#define iwl_set_bit(p, r, m) _iwl_set_bit(p, r, m)
+-#endif
-
--#define AGG_TX_STATE_TRY_CNT_POS 12
--#define AGG_TX_STATE_TRY_CNT_MSK 0xf000
+-static inline void _iwl_clear_bit(struct iwl_priv *priv, u32 reg, u32 mask)
+-{
+- _iwl_write32(priv, reg, _iwl_read32(priv, reg) & ~mask);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_clear_bit(const char *f, u32 l,
+- struct iwl_priv *priv, u32 reg, u32 mask)
+-{
+- u32 val = _iwl_read32(priv, reg) & ~mask;
+- IWL_DEBUG_IO("clear_bit(0x%08X, 0x%08X) = 0x%08X\n", reg, mask, val);
+- _iwl_write32(priv, reg, val);
+-}
+-#define iwl_clear_bit(p, r, m) __iwl_clear_bit(__FILE__, __LINE__, p, r, m)
+-#else
+-#define iwl_clear_bit(p, r, m) _iwl_clear_bit(p, r, m)
+-#endif
-
--#define AGG_TX_STATE_SEQ_NUM_POS 16
--#define AGG_TX_STATE_SEQ_NUM_MSK 0xffff0000
+-static inline int _iwl_grab_restricted_access(struct iwl_priv *priv)
+-{
+- int rc;
+- u32 gp_ctl;
-
--/*
-- * REPLY_TX = 0x1c (response)
-- */
--#if IWL == 4965
--struct iwl_tx_resp {
-- u8 frame_count; /* 1 no aggregation, >1 aggregation */
-- u8 bt_kill_count;
-- u8 failure_rts;
-- u8 failure_frame;
-- __le32 rate_n_flags;
-- __le16 wireless_media_time;
-- __le16 reserved;
-- __le32 pa_power1;
-- __le32 pa_power2;
-- __le32 status; /* TX status (for aggregation status of 1st frame) */
--} __attribute__ ((packed));
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (atomic_read(&priv->restrict_refcnt))
+- return 0;
+-#endif
+- if (test_bit(STATUS_RF_KILL_HW, &priv->status) ||
+- test_bit(STATUS_RF_KILL_SW, &priv->status)) {
+- IWL_WARNING("WARNING: Requesting MAC access during RFKILL "
+- "wakes up NIC\n");
-
--#elif IWL == 3945
--struct iwl_tx_resp {
-- u8 failure_rts;
-- u8 failure_frame;
-- u8 bt_kill_count;
-- u8 rate;
-- __le32 wireless_media_time;
-- __le32 status; /* TX status (for aggregation status of 1st frame) */
--} __attribute__ ((packed));
+- /* 10 msec allows time for NIC to complete its data save */
+- gp_ctl = _iwl_read32(priv, CSR_GP_CNTRL);
+- if (gp_ctl & CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY) {
+- IWL_DEBUG_RF_KILL("Wait for complete power-down, "
+- "gpctl = 0x%08x\n", gp_ctl);
+- mdelay(10);
+- } else
+- IWL_DEBUG_RF_KILL("power-down complete, "
+- "gpctl = 0x%08x\n", gp_ctl);
+- }
+-
+- /* this bit wakes up the NIC */
+- _iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+- rc = _iwl_poll_bit(priv, CSR_GP_CNTRL,
+- CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN,
+- (CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY |
+- CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP), 50);
+- if (rc < 0) {
+- IWL_ERROR("MAC is in deep sleep!\n");
+- return -EIO;
+- }
+-
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- atomic_inc(&priv->restrict_refcnt);
-#endif
+- return 0;
+-}
-
--/*
-- * REPLY_COMPRESSED_BA = 0xc5 (response only, not a command)
-- */
--struct iwl_compressed_ba_resp {
-- __le32 sta_addr_lo32;
-- __le16 sta_addr_hi16;
-- __le16 reserved;
-- u8 sta_id;
-- u8 tid;
-- __le16 ba_seq_ctl;
-- __le32 ba_bitmap0;
-- __le32 ba_bitmap1;
-- __le16 scd_flow;
-- __le16 scd_ssn;
--} __attribute__ ((packed));
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline int __iwl_grab_restricted_access(const char *f, u32 l,
+- struct iwl_priv *priv)
+-{
+- if (atomic_read(&priv->restrict_refcnt))
+- IWL_DEBUG_INFO("Grabbing access while already held at "
+- "line %d.\n", l);
-
--/*
-- * REPLY_TX_PWR_TABLE_CMD = 0x97 (command, has simple generic response)
-- */
--struct iwl_txpowertable_cmd {
-- u8 band; /* 0: 5 GHz, 1: 2.4 GHz */
-- u8 reserved;
-- __le16 channel;
--#if IWL == 3945
-- struct iwl_power_per_rate power[IWL_MAX_RATES];
--#elif IWL == 4965
-- struct iwl_tx_power_db tx_power;
+- IWL_DEBUG_IO("grabbing restricted access - %s %d\n", f, l);
+-
+- return _iwl_grab_restricted_access(priv);
+-}
+-#define iwl_grab_restricted_access(priv) \
+- __iwl_grab_restricted_access(__FILE__, __LINE__, priv)
+-#else
+-#define iwl_grab_restricted_access(priv) \
+- _iwl_grab_restricted_access(priv)
-#endif
--} __attribute__ ((packed));
-
--#if IWL == 3945
--struct iwl_rate_scaling_info {
-- __le16 rate_n_flags;
-- u8 try_cnt;
-- u8 next_rate_index;
--} __attribute__ ((packed));
+-static inline void _iwl_release_restricted_access(struct iwl_priv *priv)
+-{
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (atomic_dec_and_test(&priv->restrict_refcnt))
+-#endif
+- _iwl_clear_bit(priv, CSR_GP_CNTRL,
+- CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_release_restricted_access(const char *f, u32 l,
+- struct iwl_priv *priv)
+-{
+- if (atomic_read(&priv->restrict_refcnt) <= 0)
+- IWL_ERROR("Release unheld restricted access at line %d.\n", l);
-
--/**
-- * struct iwl_rate_scaling_cmd - Rate Scaling Command & Response
+- IWL_DEBUG_IO("releasing restricted access - %s %d\n", f, l);
+- _iwl_release_restricted_access(priv);
+-}
+-#define iwl_release_restricted_access(priv) \
+- __iwl_release_restricted_access(__FILE__, __LINE__, priv)
+-#else
+-#define iwl_release_restricted_access(priv) \
+- _iwl_release_restricted_access(priv)
+-#endif
+-
+-static inline u32 _iwl_read_restricted(struct iwl_priv *priv, u32 reg)
+-{
+- return _iwl_read32(priv, reg);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline u32 __iwl_read_restricted(const char *f, u32 l,
+- struct iwl_priv *priv, u32 reg)
+-{
+- u32 value = _iwl_read_restricted(priv, reg);
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from %s %d\n", f, l);
+- IWL_DEBUG_IO("read_restricted(0x%4X) = 0x%08x - %s %d \n", reg, value,
+- f, l);
+- return value;
+-}
+-#define iwl_read_restricted(priv, reg) \
+- __iwl_read_restricted(__FILE__, __LINE__, priv, reg)
+-#else
+-#define iwl_read_restricted _iwl_read_restricted
+-#endif
+-
+-static inline void _iwl_write_restricted(struct iwl_priv *priv,
+- u32 reg, u32 value)
+-{
+- _iwl_write32(priv, reg, value);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static void __iwl_write_restricted(u32 line,
+- struct iwl_priv *priv, u32 reg, u32 value)
+-{
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from line %d\n", line);
+- _iwl_write_restricted(priv, reg, value);
+-}
+-#define iwl_write_restricted(priv, reg, value) \
+- __iwl_write_restricted(__LINE__, priv, reg, value)
+-#else
+-#define iwl_write_restricted _iwl_write_restricted
+-#endif
+-
+-static inline void iwl_write_buffer_restricted(struct iwl_priv *priv,
+- u32 reg, u32 len, u32 *values)
+-{
+- u32 count = sizeof(u32);
+-
+- if ((priv != NULL) && (values != NULL)) {
+- for (; 0 < len; len -= count, reg += count, values++)
+- _iwl_write_restricted(priv, reg, *values);
+- }
+-}
+-
+-static inline int _iwl_poll_restricted_bit(struct iwl_priv *priv,
+- u32 addr, u32 mask, int timeout)
+-{
+- int i = 0;
+-
+- do {
+- if ((_iwl_read_restricted(priv, addr) & mask) == mask)
+- return i;
+- mdelay(10);
+- i += 10;
+- } while (i < timeout);
+-
+- return -ETIMEDOUT;
+-}
+-
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline int __iwl_poll_restricted_bit(const char *f, u32 l,
+- struct iwl_priv *priv,
+- u32 addr, u32 mask, int timeout)
+-{
+- int rc = _iwl_poll_restricted_bit(priv, addr, mask, timeout);
+-
+- if (unlikely(rc == -ETIMEDOUT))
+- IWL_DEBUG_IO("poll_restricted_bit(0x%08X, 0x%08X) - "
+- "timedout - %s %d\n", addr, mask, f, l);
+- else
+- IWL_DEBUG_IO("poll_restricted_bit(0x%08X, 0x%08X) = 0x%08X "
+- "- %s %d\n", addr, mask, rc, f, l);
+- return rc;
+-}
+-#define iwl_poll_restricted_bit(iwl, addr, mask, timeout) \
+- __iwl_poll_restricted_bit(__FILE__, __LINE__, iwl, addr, mask, timeout)
+-#else
+-#define iwl_poll_restricted_bit _iwl_poll_restricted_bit
+-#endif
+-
+-static inline u32 _iwl_read_restricted_reg(struct iwl_priv *priv, u32 reg)
+-{
+- _iwl_write_restricted(priv, HBUS_TARG_PRPH_RADDR, reg | (3 << 24));
+- return _iwl_read_restricted(priv, HBUS_TARG_PRPH_RDAT);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline u32 __iwl_read_restricted_reg(u32 line,
+- struct iwl_priv *priv, u32 reg)
+-{
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from line %d\n", line);
+- return _iwl_read_restricted_reg(priv, reg);
+-}
+-
+-#define iwl_read_restricted_reg(priv, reg) \
+- __iwl_read_restricted_reg(__LINE__, priv, reg)
+-#else
+-#define iwl_read_restricted_reg _iwl_read_restricted_reg
+-#endif
+-
+-static inline void _iwl_write_restricted_reg(struct iwl_priv *priv,
+- u32 addr, u32 val)
+-{
+- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WADDR,
+- ((addr & 0x0000FFFF) | (3 << 24)));
+- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WDAT, val);
+-}
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_write_restricted_reg(u32 line,
+- struct iwl_priv *priv,
+- u32 addr, u32 val)
+-{
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from line %d\n", line);
+- _iwl_write_restricted_reg(priv, addr, val);
+-}
+-
+-#define iwl_write_restricted_reg(priv, addr, val) \
+- __iwl_write_restricted_reg(__LINE__, priv, addr, val);
+-#else
+-#define iwl_write_restricted_reg _iwl_write_restricted_reg
+-#endif
+-
+-#define _iwl_set_bits_restricted_reg(priv, reg, mask) \
+- _iwl_write_restricted_reg(priv, reg, \
+- (_iwl_read_restricted_reg(priv, reg) | mask))
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_set_bits_restricted_reg(u32 line, struct iwl_priv
+- *priv, u32 reg, u32 mask)
+-{
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from line %d\n", line);
+- _iwl_set_bits_restricted_reg(priv, reg, mask);
+-}
+-#define iwl_set_bits_restricted_reg(priv, reg, mask) \
+- __iwl_set_bits_restricted_reg(__LINE__, priv, reg, mask)
+-#else
+-#define iwl_set_bits_restricted_reg _iwl_set_bits_restricted_reg
+-#endif
+-
+-#define _iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask) \
+- _iwl_write_restricted_reg( \
+- priv, reg, ((_iwl_read_restricted_reg(priv, reg) & mask) | bits))
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static inline void __iwl_set_bits_mask_restricted_reg(u32 line,
+- struct iwl_priv *priv, u32 reg, u32 bits, u32 mask)
+-{
+- if (!atomic_read(&priv->restrict_refcnt))
+- IWL_ERROR("Unrestricted access from line %d\n", line);
+- _iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask);
+-}
+-
+-#define iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask) \
+- __iwl_set_bits_mask_restricted_reg(__LINE__, priv, reg, bits, mask)
+-#else
+-#define iwl_set_bits_mask_restricted_reg _iwl_set_bits_mask_restricted_reg
+-#endif
+-
+-static inline void iwl_clear_bits_restricted_reg(struct iwl_priv
+- *priv, u32 reg, u32 mask)
+-{
+- u32 val = _iwl_read_restricted_reg(priv, reg);
+- _iwl_write_restricted_reg(priv, reg, (val & ~mask));
+-}
+-
+-static inline u32 iwl_read_restricted_mem(struct iwl_priv *priv, u32 addr)
+-{
+- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR, addr);
+- return iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
+-}
+-
+-static inline void iwl_write_restricted_mem(struct iwl_priv *priv, u32 addr,
+- u32 val)
+-{
+- iwl_write_restricted(priv, HBUS_TARG_MEM_WADDR, addr);
+- iwl_write_restricted(priv, HBUS_TARG_MEM_WDAT, val);
+-}
+-
+-static inline void iwl_write_restricted_mems(struct iwl_priv *priv, u32 addr,
+- u32 len, u32 *values)
+-{
+- iwl_write_restricted(priv, HBUS_TARG_MEM_WADDR, addr);
+- for (; 0 < len; len -= sizeof(u32), values++)
+- iwl_write_restricted(priv, HBUS_TARG_MEM_WDAT, *values);
+-}
+-
+-static inline void iwl_write_restricted_regs(struct iwl_priv *priv, u32 reg,
+- u32 len, u8 *values)
+-{
+- u32 reg_offset = reg;
+- u32 aligment = reg & 0x3;
+-
+- /* write any non-dword-aligned stuff at the beginning */
+- if (len < sizeof(u32)) {
+- if ((aligment + len) <= sizeof(u32)) {
+- u8 size;
+- u32 value = 0;
+- size = len - 1;
+- memcpy(&value, values, len);
+- reg_offset = (reg_offset & 0x0000FFFF);
+-
+- _iwl_write_restricted(priv,
+- HBUS_TARG_PRPH_WADDR,
+- (reg_offset | (size << 24)));
+- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WDAT,
+- value);
+- }
+-
+- return;
+- }
+-
+- /* now write all the dword-aligned stuff */
+- for (; reg_offset < (reg + len);
+- reg_offset += sizeof(u32), values += sizeof(u32))
+- _iwl_write_restricted_reg(priv, reg_offset, *((u32 *) values));
+-}
+-
+-#endif
+diff --git a/drivers/net/wireless/iwlwifi/iwl-priv.h b/drivers/net/wireless/iwlwifi/iwl-priv.h
+deleted file mode 100644
+index 6b490d0..0000000
+--- a/drivers/net/wireless/iwlwifi/iwl-priv.h
++++ /dev/null
+@@ -1,308 +0,0 @@
+-/******************************************************************************
- *
-- * REPLY_RATE_SCALE = 0x47 (command, has simple generic response)
+- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
- *
-- * NOTE: The table of rates passed to the uCode via the
-- * RATE_SCALE command sets up the corresponding order of
-- * rates used for all related commands, including rate
-- * masks, etc.
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of version 2 of the GNU General Public License as
+- * published by the Free Software Foundation.
- *
-- * For example, if you set 9MB (PLCP 0x0f) as the first
-- * rate in the rate table, the bit mask for that rate
-- * when passed through ofdm_basic_rates on the REPLY_RXON
-- * command would be bit 0 (1<<0)
-- */
--struct iwl_rate_scaling_cmd {
-- u8 table_id;
-- u8 reserved[3];
-- struct iwl_rate_scaling_info table[IWL_MAX_RATES];
--} __attribute__ ((packed));
--
--#elif IWL == 4965
--
--/*RS_NEW_API: only TLC_RTS remains and moved to bit 0 */
--#define LINK_QUAL_FLAGS_SET_STA_TLC_RTS_MSK (1<<0)
+- * This program is distributed in the hope that it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+- * The full GNU General Public License is included in this distribution in the
+- * file called LICENSE.
+- *
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- *****************************************************************************/
-
--#define LINK_QUAL_AC_NUM AC_NUM
--#define LINK_QUAL_MAX_RETRY_NUM 16
+-#ifndef __iwl_priv_h__
+-#define __iwl_priv_h__
-
--#define LINK_QUAL_ANT_A_MSK (1<<0)
--#define LINK_QUAL_ANT_B_MSK (1<<1)
--#define LINK_QUAL_ANT_MSK (LINK_QUAL_ANT_A_MSK|LINK_QUAL_ANT_B_MSK)
+-#include <linux/workqueue.h>
-
--struct iwl_link_qual_general_params {
-- u8 flags;
-- u8 mimo_delimiter;
-- u8 single_stream_ant_msk;
-- u8 dual_stream_ant_msk;
-- u8 start_rate_index[LINK_QUAL_AC_NUM];
--} __attribute__ ((packed));
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-
--struct iwl_link_qual_agg_params {
-- __le16 agg_time_limit;
-- u8 agg_dis_start_th;
-- u8 agg_frame_cnt_limit;
-- __le32 reserved;
--} __attribute__ ((packed));
+-enum {
+- MEASUREMENT_READY = (1 << 0),
+- MEASUREMENT_ACTIVE = (1 << 1),
+-};
-
--/*
-- * REPLY_TX_LINK_QUALITY_CMD = 0x4e (command, has simple generic response)
-- */
--struct iwl_link_quality_cmd {
-- u8 sta_id;
-- u8 reserved1;
-- __le16 control;
-- struct iwl_link_qual_general_params general_params;
-- struct iwl_link_qual_agg_params agg_params;
-- struct {
-- __le32 rate_n_flags;
-- } rs_table[LINK_QUAL_MAX_RETRY_NUM];
-- __le32 reserved2;
--} __attribute__ ((packed));
-#endif
-
--/*
-- * REPLY_BT_CONFIG = 0x9b (command, has simple generic response)
-- */
--struct iwl_bt_cmd {
-- u8 flags;
-- u8 lead_time;
-- u8 max_kill;
-- u8 reserved;
-- __le32 kill_ack_mask;
-- __le32 kill_cts_mask;
--} __attribute__ ((packed));
+-struct iwl_priv {
-
--/******************************************************************************
-- * (6)
-- * Spectrum Management (802.11h) Commands, Responses, Notifications:
-- *
-- *****************************************************************************/
+- /* ieee device used by generic ieee processing code */
+- struct ieee80211_hw *hw;
+- struct ieee80211_channel *ieee_channels;
+- struct ieee80211_rate *ieee_rates;
-
--/*
-- * Spectrum Management
-- */
--#define MEASUREMENT_FILTER_FLAG (RXON_FILTER_PROMISC_MSK | \
-- RXON_FILTER_CTL2HOST_MSK | \
-- RXON_FILTER_ACCEPT_GRP_MSK | \
-- RXON_FILTER_DIS_DECRYPT_MSK | \
-- RXON_FILTER_DIS_GRP_DECRYPT_MSK | \
-- RXON_FILTER_ASSOC_MSK | \
-- RXON_FILTER_BCON_AWARE_MSK)
+- /* temporary frame storage list */
+- struct list_head free_frames;
+- int frames_count;
-
--struct iwl_measure_channel {
-- __le32 duration; /* measurement duration in extended beacon
-- * format */
-- u8 channel; /* channel to measure */
-- u8 type; /* see enum iwl_measure_type */
-- __le16 reserved;
--} __attribute__ ((packed));
+- u8 phymode;
+- int alloc_rxb_skb;
-
--/*
-- * REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74 (command)
-- */
--struct iwl_spectrum_cmd {
-- __le16 len; /* number of bytes starting from token */
-- u8 token; /* token id */
-- u8 id; /* measurement id -- 0 or 1 */
-- u8 origin; /* 0 = TGh, 1 = other, 2 = TGk */
-- u8 periodic; /* 1 = periodic */
-- __le16 path_loss_timeout;
-- __le32 start_time; /* start time in extended beacon format */
-- __le32 reserved2;
-- __le32 flags; /* rxon flags */
-- __le32 filter_flags; /* rxon filter flags */
-- __le16 channel_count; /* minimum 1, maximum 10 */
-- __le16 reserved3;
-- struct iwl_measure_channel channels[10];
--} __attribute__ ((packed));
+- void (*rx_handlers[REPLY_MAX])(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb);
-
--/*
-- * REPLY_SPECTRUM_MEASUREMENT_CMD = 0x74 (response)
-- */
--struct iwl_spectrum_resp {
-- u8 token;
-- u8 id; /* id of the prior command replaced, or 0xff */
-- __le16 status; /* 0 - command will be handled
-- * 1 - cannot handle (conflicts with another
-- * measurement) */
--} __attribute__ ((packed));
+- const struct ieee80211_hw_mode *modes;
-
--enum iwl_measurement_state {
-- IWL_MEASUREMENT_START = 0,
-- IWL_MEASUREMENT_STOP = 1,
--};
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
+- /* spectrum measurement report caching */
+- struct iwl_spectrum_notification measure_report;
+- u8 measurement_status;
+-#endif
+- /* ucode beacon time */
+- u32 ucode_beacon_time;
-
--enum iwl_measurement_status {
-- IWL_MEASUREMENT_OK = 0,
-- IWL_MEASUREMENT_CONCURRENT = 1,
-- IWL_MEASUREMENT_CSA_CONFLICT = 2,
-- IWL_MEASUREMENT_TGH_CONFLICT = 3,
-- /* 4-5 reserved */
-- IWL_MEASUREMENT_STOPPED = 6,
-- IWL_MEASUREMENT_TIMEOUT = 7,
-- IWL_MEASUREMENT_PERIODIC_FAILED = 8,
--};
+- /* we allocate array of iwl_channel_info for NIC's valid channels.
+- * Access via channel # using indirect index array */
+- struct iwl_channel_info *channel_info; /* channel info array */
+- u8 channel_count; /* # of channels */
-
--#define NUM_ELEMENTS_IN_HISTOGRAM 8
+- /* each calibration channel group in the EEPROM has a derived
+- * clip setting for each rate. */
+- const struct iwl_clip_group clip_groups[5];
-
--struct iwl_measurement_histogram {
-- __le32 ofdm[NUM_ELEMENTS_IN_HISTOGRAM]; /* in 0.8usec counts */
-- __le32 cck[NUM_ELEMENTS_IN_HISTOGRAM]; /* in 1usec counts */
--} __attribute__ ((packed));
+- /* thermal calibration */
+- s32 temperature; /* degrees Kelvin */
+- s32 last_temperature;
-
--/* clear channel availability counters */
--struct iwl_measurement_cca_counters {
-- __le32 ofdm;
-- __le32 cck;
--} __attribute__ ((packed));
+- /* Scan related variables */
+- unsigned long last_scan_jiffies;
+- unsigned long scan_start;
+- unsigned long scan_pass_start;
+- unsigned long scan_start_tsf;
+- int scan_bands;
+- int one_direct_scan;
+- u8 direct_ssid_len;
+- u8 direct_ssid[IW_ESSID_MAX_SIZE];
+- struct iwl_scan_cmd *scan;
+- u8 only_active_channel;
-
--enum iwl_measure_type {
-- IWL_MEASURE_BASIC = (1 << 0),
-- IWL_MEASURE_CHANNEL_LOAD = (1 << 1),
-- IWL_MEASURE_HISTOGRAM_RPI = (1 << 2),
-- IWL_MEASURE_HISTOGRAM_NOISE = (1 << 3),
-- IWL_MEASURE_FRAME = (1 << 4),
-- /* bits 5:6 are reserved */
-- IWL_MEASURE_IDLE = (1 << 7),
--};
+- /* spinlock */
+- spinlock_t lock; /* protect general shared data */
+- spinlock_t hcmd_lock; /* protect hcmd */
+- struct mutex mutex;
-
--/*
-- * SPECTRUM_MEASURE_NOTIFICATION = 0x75 (notification only, not a command)
-- */
--struct iwl_spectrum_notification {
-- u8 id; /* measurement id -- 0 or 1 */
-- u8 token;
-- u8 channel_index; /* index in measurement channel list */
-- u8 state; /* 0 - start, 1 - stop */
-- __le32 start_time; /* lower 32-bits of TSF */
-- u8 band; /* 0 - 5.2GHz, 1 - 2.4GHz */
-- u8 channel;
-- u8 type; /* see enum iwl_measurement_type */
-- u8 reserved1;
-- /* NOTE: cca_ofdm, cca_cck, basic_type, and histogram are only only
-- * valid if applicable for measurement type requested. */
-- __le32 cca_ofdm; /* cca fraction time in 40Mhz clock periods */
-- __le32 cca_cck; /* cca fraction time in 44Mhz clock periods */
-- __le32 cca_time; /* channel load time in usecs */
-- u8 basic_type; /* 0 - bss, 1 - ofdm preamble, 2 -
-- * unidentified */
-- u8 reserved2[3];
-- struct iwl_measurement_histogram histogram;
-- __le32 stop_time; /* lower 32-bits of TSF */
-- __le32 status; /* see iwl_measurement_status */
--} __attribute__ ((packed));
+- /* basic pci-network driver stuff */
+- struct pci_dev *pci_dev;
-
--/******************************************************************************
-- * (7)
-- * Power Management Commands, Responses, Notifications:
-- *
-- *****************************************************************************/
+- /* pci hardware address support */
+- void __iomem *hw_base;
-
--/**
-- * struct iwl_powertable_cmd - Power Table Command
-- * @flags: See below:
-- *
-- * POWER_TABLE_CMD = 0x77 (command, has simple generic response)
-- *
-- * PM allow:
-- * bit 0 - '0' Driver not allow power management
-- * '1' Driver allow PM (use rest of parameters)
-- * uCode send sleep notifications:
-- * bit 1 - '0' Don't send sleep notification
-- * '1' send sleep notification (SEND_PM_NOTIFICATION)
-- * Sleep over DTIM
-- * bit 2 - '0' PM have to walk up every DTIM
-- * '1' PM could sleep over DTIM till listen Interval.
-- * PCI power managed
-- * bit 3 - '0' (PCI_LINK_CTRL & 0x1)
-- * '1' !(PCI_LINK_CTRL & 0x1)
-- * Force sleep Modes
-- * bit 31/30- '00' use both mac/xtal sleeps
-- * '01' force Mac sleep
-- * '10' force xtal sleep
-- * '11' Illegal set
-- *
-- * NOTE: if sleep_interval[SLEEP_INTRVL_TABLE_SIZE-1] > DTIM period then
-- * ucode assume sleep over DTIM is allowed and we don't need to wakeup
-- * for every DTIM.
-- */
--#define IWL_POWER_VEC_SIZE 5
+- /* uCode images, save to reload in case of failure */
+- struct fw_image_desc ucode_code; /* runtime inst */
+- struct fw_image_desc ucode_data; /* runtime data original */
+- struct fw_image_desc ucode_data_backup; /* runtime data save/restore */
+- struct fw_image_desc ucode_init; /* initialization inst */
+- struct fw_image_desc ucode_init_data; /* initialization data */
+- struct fw_image_desc ucode_boot; /* bootstrap inst */
-
-
--#if IWL == 3945
+- struct iwl_rxon_time_cmd rxon_timing;
-
--#define IWL_POWER_DRIVER_ALLOW_SLEEP_MSK __constant_cpu_to_le32(1<<0)
--#define IWL_POWER_SLEEP_OVER_DTIM_MSK __constant_cpu_to_le32(1<<2)
--#define IWL_POWER_PCI_PM_MSK __constant_cpu_to_le32(1<<3)
--struct iwl_powertable_cmd {
-- __le32 flags;
-- __le32 rx_data_timeout;
-- __le32 tx_data_timeout;
-- __le32 sleep_interval[IWL_POWER_VEC_SIZE];
--} __attribute__((packed));
+- /* We declare this const so it can only be
+- * changed via explicit cast within the
+- * routines that actually update the physical
+- * hardware */
+- const struct iwl_rxon_cmd active_rxon;
+- struct iwl_rxon_cmd staging_rxon;
-
--#elif IWL == 4965
+- int error_recovering;
+- struct iwl_rxon_cmd recovery_rxon;
-
--#define IWL_POWER_DRIVER_ALLOW_SLEEP_MSK __constant_cpu_to_le16(1<<0)
--#define IWL_POWER_SLEEP_OVER_DTIM_MSK __constant_cpu_to_le16(1<<2)
--#define IWL_POWER_PCI_PM_MSK __constant_cpu_to_le16(1<<3)
+- /* 1st responses from initialize and runtime uCode images.
+- * 4965's initialize alive response contains some calibration data. */
+- struct iwl_init_alive_resp card_alive_init;
+- struct iwl_alive_resp card_alive;
-
--struct iwl_powertable_cmd {
-- __le16 flags;
-- u8 keep_alive_seconds;
-- u8 debug_flags;
-- __le32 rx_data_timeout;
-- __le32 tx_data_timeout;
-- __le32 sleep_interval[IWL_POWER_VEC_SIZE];
-- __le32 keep_alive_beacons;
--} __attribute__ ((packed));
+-#ifdef LED
+- /* LED related variables */
+- struct iwl_activity_blink activity;
+- unsigned long led_packets;
+- int led_state;
-#endif
-
--/*
-- * PM_SLEEP_NOTIFICATION = 0x7A (notification only, not a command)
-- * 3945 and 4965 identical.
-- */
--struct iwl_sleep_notification {
-- u8 pm_sleep_mode;
-- u8 pm_wakeup_src;
-- __le16 reserved;
-- __le32 sleep_time;
-- __le32 tsf_low;
-- __le32 bcon_timer;
--} __attribute__ ((packed));
+- u16 active_rate;
+- u16 active_rate_basic;
-
--/* Sleep states. 3945 and 4965 identical. */
--enum {
-- IWL_PM_NO_SLEEP = 0,
-- IWL_PM_SLP_MAC = 1,
-- IWL_PM_SLP_FULL_MAC_UNASSOCIATE = 2,
-- IWL_PM_SLP_FULL_MAC_CARD_STATE = 3,
-- IWL_PM_SLP_PHY = 4,
-- IWL_PM_SLP_REPENT = 5,
-- IWL_PM_WAKEUP_BY_TIMER = 6,
-- IWL_PM_WAKEUP_BY_DRIVER = 7,
-- IWL_PM_WAKEUP_BY_RFKILL = 8,
-- /* 3 reserved */
-- IWL_PM_NUM_OF_MODES = 12,
--};
+- u8 call_post_assoc_from_beacon;
+- u8 assoc_station_added;
+-#if IWL == 4965
+- u8 use_ant_b_for_management_frame; /* Tx antenna selection */
+- /* HT variables */
+- u8 is_dup;
+- u8 is_ht_enabled;
+- u8 channel_width; /* 0=20MHZ, 1=40MHZ */
+- u8 current_channel_width;
+- u8 valid_antenna; /* Bit mask of antennas actually connected */
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
+- struct iwl_sensitivity_data sensitivity_data;
+- struct iwl_chain_noise_data chain_noise_data;
+- u8 start_calib;
+- __le16 sensitivity_tbl[HD_TABLE_SIZE];
+-#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
-
--/*
-- * REPLY_CARD_STATE_CMD = 0xa0 (command, has simple generic response)
-- */
--#define CARD_STATE_CMD_DISABLE 0x00 /* Put card to sleep */
--#define CARD_STATE_CMD_ENABLE 0x01 /* Wake up card */
--#define CARD_STATE_CMD_HALT 0x02 /* Power down permanently */
--struct iwl_card_state_cmd {
-- __le32 status; /* CARD_STATE_CMD_* request new power state */
--} __attribute__ ((packed));
+-#ifdef CONFIG_IWLWIFI_HT
+- struct sta_ht_info current_assoc_ht;
+-#endif
+- u8 active_rate_ht[2];
+- u8 last_phy_res[100];
-
--/*
-- * CARD_STATE_NOTIFICATION = 0xa1 (notification only, not a command)
-- */
--struct iwl_card_state_notif {
-- __le32 flags;
--} __attribute__ ((packed));
+- /* Rate scaling data */
+- struct iwl_lq_mngr lq_mngr;
+-#endif
-
--#define HW_CARD_DISABLED 0x01
--#define SW_CARD_DISABLED 0x02
--#define RF_CARD_DISABLED 0x04
--#define RXON_CARD_DISABLED 0x10
+- /* Rate scaling data */
+- s8 data_retry_limit;
+- u8 retry_rate;
-
--struct iwl_ct_kill_config {
-- __le32 reserved;
-- __le32 critical_temperature_M;
-- __le32 critical_temperature_R;
--} __attribute__ ((packed));
+- wait_queue_head_t wait_command_queue;
-
--/******************************************************************************
-- * (8)
-- * Scan Commands, Responses, Notifications:
-- *
-- *****************************************************************************/
+- int activity_timer_active;
-
--struct iwl_scan_channel {
-- /* type is defined as:
-- * 0:0 active (0 - passive)
-- * 1:4 SSID direct
-- * If 1 is set then corresponding SSID IE is transmitted in probe
-- * 5:7 reserved
-- */
-- u8 type;
-- u8 channel;
-- struct iwl_tx_power tpc;
-- __le16 active_dwell;
-- __le16 passive_dwell;
--} __attribute__ ((packed));
+- /* Rx and Tx DMA processing queues */
+- struct iwl_rx_queue rxq;
+- struct iwl_tx_queue txq[IWL_MAX_NUM_QUEUES];
+-#if IWL == 4965
+- unsigned long txq_ctx_active_msk;
+- struct iwl_kw kw; /* keep warm address */
+- u32 scd_base_addr; /* scheduler sram base address */
+-#endif
-
--struct iwl_ssid_ie {
-- u8 id;
-- u8 len;
-- u8 ssid[32];
--} __attribute__ ((packed));
+- unsigned long status;
+- u32 config;
-
--#define PROBE_OPTION_MAX 0x4
--#define TX_CMD_LIFE_TIME_INFINITE __constant_cpu_to_le32(0xFFFFFFFF)
--#define IWL_GOOD_CRC_TH __constant_cpu_to_le16(1)
--#define IWL_MAX_SCAN_SIZE 1024
+- int last_rx_rssi; /* From Rx packet statisitics */
+- int last_rx_noise; /* From beacon statistics */
-
--/*
-- * REPLY_SCAN_CMD = 0x80 (command)
-- */
--struct iwl_scan_cmd {
-- __le16 len;
-- u8 reserved0;
-- u8 channel_count;
-- __le16 quiet_time; /* dwell only this long on quiet chnl
-- * (active scan) */
-- __le16 quiet_plcp_th; /* quiet chnl is < this # pkts (typ. 1) */
-- __le16 good_CRC_th; /* passive -> active promotion threshold */
--#if IWL == 3945
-- __le16 reserved1;
--#elif IWL == 4965
-- __le16 rx_chain;
--#endif
-- __le32 max_out_time; /* max usec to be out of associated (service)
-- * chnl */
-- __le32 suspend_time; /* pause scan this long when returning to svc
-- * chnl.
-- * 3945 -- 31:24 # beacons, 19:0 additional usec,
-- * 4965 -- 31:22 # beacons, 21:0 additional usec.
-- */
-- __le32 flags;
-- __le32 filter_flags;
+- struct iwl_power_mgr power_data;
-
-- struct iwl_tx_cmd tx_cmd;
-- struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
+- struct iwl_notif_statistics statistics;
+- unsigned long last_statistics_time;
-
-- u8 data[0];
-- /*
-- * The channels start after the probe request payload and are of type:
-- *
-- * struct iwl_scan_channel channels[0];
-- *
-- * NOTE: Only one band of channels can be scanned per pass. You
-- * can not mix 2.4GHz channels and 5.2GHz channels and must
-- * request a scan multiple times (not concurrently)
-- *
-- */
--} __attribute__ ((packed));
+- /* context information */
+- u8 essid[IW_ESSID_MAX_SIZE];
+- u8 essid_len;
+- u16 rates_mask;
-
--/* Can abort will notify by complete notification with abort status. */
--#define CAN_ABORT_STATUS __constant_cpu_to_le32(0x1)
--/* complete notification statuses */
--#define ABORT_STATUS 0x2
+- u32 power_mode;
+- u32 antenna;
+- u8 bssid[ETH_ALEN];
+- u16 rts_threshold;
+- u8 mac_addr[ETH_ALEN];
-
--/*
-- * REPLY_SCAN_CMD = 0x80 (response)
-- */
--struct iwl_scanreq_notification {
-- __le32 status; /* 1: okay, 2: cannot fulfill request */
--} __attribute__ ((packed));
+- /*station table variables */
+- spinlock_t sta_lock;
+- int num_stations;
+- struct iwl_station_entry stations[IWL_STATION_COUNT];
-
--/*
-- * SCAN_START_NOTIFICATION = 0x82 (notification only, not a command)
-- */
--struct iwl_scanstart_notification {
-- __le32 tsf_low;
-- __le32 tsf_high;
-- __le32 beacon_timer;
-- u8 channel;
-- u8 band;
-- u8 reserved[2];
-- __le32 status;
--} __attribute__ ((packed));
+- /* Indication if ieee80211_ops->open has been called */
+- int is_open;
-
--#define SCAN_OWNER_STATUS 0x1;
--#define MEASURE_OWNER_STATUS 0x2;
+- u8 mac80211_registered;
+- int is_abg;
-
--#define NUMBER_OF_STATISTICS 1 /* first __le32 is good CRC */
--/*
-- * SCAN_RESULTS_NOTIFICATION = 0x83 (notification only, not a command)
-- */
--struct iwl_scanresults_notification {
-- u8 channel;
-- u8 band;
-- u8 reserved[2];
-- __le32 tsf_low;
-- __le32 tsf_high;
-- __le32 statistics[NUMBER_OF_STATISTICS];
--} __attribute__ ((packed));
+- u32 notif_missed_beacons;
-
--/*
-- * SCAN_COMPLETE_NOTIFICATION = 0x84 (notification only, not a command)
-- */
--struct iwl_scancomplete_notification {
-- u8 scanned_channels;
-- u8 status;
-- u8 reserved;
-- u8 last_channel;
-- __le32 tsf_low;
-- __le32 tsf_high;
--} __attribute__ ((packed));
+- /* Rx'd packet timing information */
+- u32 last_beacon_time;
+- u64 last_tsf;
-
+- /* Duplicate packet detection */
+- u16 last_seq_num;
+- u16 last_frag_num;
+- unsigned long last_packet_time;
+- struct list_head ibss_mac_hash[IWL_IBSS_MAC_HASH_SIZE];
-
--/******************************************************************************
-- * (9)
-- * IBSS/AP Commands and Notifications:
-- *
-- *****************************************************************************/
+- /* eeprom */
+- struct iwl_eeprom eeprom;
-
--/*
-- * BEACON_NOTIFICATION = 0x90 (notification only, not a command)
-- */
--struct iwl_beacon_notif {
-- struct iwl_tx_resp beacon_notify_hdr;
-- __le32 low_tsf;
-- __le32 high_tsf;
-- __le32 ibss_mgr_status;
--} __attribute__ ((packed));
+- int iw_mode;
-
--/*
-- * REPLY_TX_BEACON = 0x91 (command, has simple generic response)
-- */
--struct iwl_tx_beacon_cmd {
-- struct iwl_tx_cmd tx;
-- __le16 tim_idx;
-- u8 tim_size;
-- u8 reserved1;
-- struct ieee80211_hdr frame[0]; /* beacon frame */
--} __attribute__ ((packed));
+- struct sk_buff *ibss_beacon;
-
--/******************************************************************************
-- * (10)
-- * Statistics Commands and Notifications:
-- *
-- *****************************************************************************/
+- /* Last Rx'd beacon timestamp */
+- u32 timestamp0;
+- u32 timestamp1;
+- u16 beacon_int;
+- struct iwl_driver_hw_info hw_setting;
+- int interface_id;
-
--#define IWL_TEMP_CONVERT 260
+- /* Current association information needed to configure the
+- * hardware */
+- u16 assoc_id;
+- u16 assoc_capability;
+- u8 ps_mode;
-
--#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
--#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
--#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
+-#ifdef CONFIG_IWLWIFI_QOS
+- struct iwl_qos_info qos_data;
+-#endif /*CONFIG_IWLWIFI_QOS */
-
--/* Used for passing to driver number of successes and failures per rate */
--struct rate_histogram {
-- union {
-- __le32 a[SUP_RATE_11A_MAX_NUM_CHANNELS];
-- __le32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
-- __le32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
-- } success;
-- union {
-- __le32 a[SUP_RATE_11A_MAX_NUM_CHANNELS];
-- __le32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
-- __le32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
-- } failed;
--} __attribute__ ((packed));
+- struct workqueue_struct *workqueue;
-
--/* statistics command response */
+- struct work_struct up;
+- struct work_struct restart;
+- struct work_struct calibrated_work;
+- struct work_struct scan_completed;
+- struct work_struct rx_replenish;
+- struct work_struct rf_kill;
+- struct work_struct abort_scan;
+- struct work_struct update_link_led;
+- struct work_struct auth_work;
+- struct work_struct report_work;
+- struct work_struct request_scan;
+- struct work_struct beacon_update;
-
--struct statistics_rx_phy {
-- __le32 ina_cnt;
-- __le32 fina_cnt;
-- __le32 plcp_err;
-- __le32 crc32_err;
-- __le32 overrun_err;
-- __le32 early_overrun_err;
-- __le32 crc32_good;
-- __le32 false_alarm_cnt;
-- __le32 fina_sync_err_cnt;
-- __le32 sfd_timeout;
-- __le32 fina_timeout;
-- __le32 unresponded_rts;
-- __le32 rxe_frame_limit_overrun;
-- __le32 sent_ack_cnt;
-- __le32 sent_cts_cnt;
--#if IWL == 4965
-- __le32 sent_ba_rsp_cnt;
-- __le32 dsp_self_kill;
-- __le32 mh_format_err;
-- __le32 re_acq_main_rssi_sum;
-- __le32 reserved3;
--#endif
--} __attribute__ ((packed));
+- struct tasklet_struct irq_tasklet;
-
--#if IWL == 4965
--struct statistics_rx_ht_phy {
-- __le32 plcp_err;
-- __le32 overrun_err;
-- __le32 early_overrun_err;
-- __le32 crc32_good;
-- __le32 crc32_err;
-- __le32 mh_format_err;
-- __le32 agg_crc32_good;
-- __le32 agg_mpdu_cnt;
-- __le32 agg_cnt;
-- __le32 reserved2;
--} __attribute__ ((packed));
--#endif
+- struct delayed_work init_alive_start;
+- struct delayed_work alive_start;
+- struct delayed_work activity_timer;
+- struct delayed_work thermal_periodic;
+- struct delayed_work gather_stats;
+- struct delayed_work scan_check;
+- struct delayed_work post_associate;
-
--struct statistics_rx_non_phy {
-- __le32 bogus_cts; /* CTS received when not expecting CTS */
-- __le32 bogus_ack; /* ACK received when not expecting ACK */
-- __le32 non_bssid_frames; /* number of frames with BSSID that
-- * doesn't belong to the STA BSSID */
-- __le32 filtered_frames; /* count frames that were dumped in the
-- * filtering process */
-- __le32 non_channel_beacons; /* beacons with our bss id but not on
-- * our serving channel */
--#if IWL == 4965
-- __le32 channel_beacons; /* beacons with our bss id and in our
-- * serving channel */
-- __le32 num_missed_bcon; /* number of missed beacons */
-- __le32 adc_rx_saturation_time; /* count in 0.8us units the time the
-- * ADC was in saturation */
-- __le32 ina_detection_search_time;/* total time (in 0.8us) searched
-- * for INA */
-- __le32 beacon_silence_rssi_a; /* RSSI silence after beacon frame */
-- __le32 beacon_silence_rssi_b; /* RSSI silence after beacon frame */
-- __le32 beacon_silence_rssi_c; /* RSSI silence after beacon frame */
-- __le32 interference_data_flag; /* flag for interference data
-- * availability. 1 when data is
-- * available. */
-- __le32 channel_load; /* counts RX Enable time */
-- __le32 dsp_false_alarms; /* DSP false alarm (both OFDM
-- * and CCK) counter */
-- __le32 beacon_rssi_a;
-- __le32 beacon_rssi_b;
-- __le32 beacon_rssi_c;
-- __le32 beacon_energy_a;
-- __le32 beacon_energy_b;
-- __le32 beacon_energy_c;
--#endif
--} __attribute__ ((packed));
+-#define IWL_DEFAULT_TX_POWER 0x0F
+- s8 user_txpower_limit;
+- s8 max_channel_txpower_limit;
+- u32 cck_power_index_compensation;
-
--struct statistics_rx {
-- struct statistics_rx_phy ofdm;
-- struct statistics_rx_phy cck;
-- struct statistics_rx_non_phy general;
--#if IWL == 4965
-- struct statistics_rx_ht_phy ofdm_ht;
+-#ifdef CONFIG_PM
+- u32 pm_state[16];
-#endif
--} __attribute__ ((packed));
-
--#if IWL == 4965
--struct statistics_tx_non_phy_agg {
-- __le32 ba_timeout;
-- __le32 ba_reschedule_frames;
-- __le32 scd_query_agg_frame_cnt;
-- __le32 scd_query_no_agg;
-- __le32 scd_query_agg;
-- __le32 scd_query_mismatch;
-- __le32 frame_not_ready;
-- __le32 underrun;
-- __le32 bt_prio_kill;
-- __le32 rx_ba_rsp_cnt;
-- __le32 reserved2;
-- __le32 reserved3;
--} __attribute__ ((packed));
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- /* debugging info */
+- u32 framecnt_to_us;
+- atomic_t restrict_refcnt;
-#endif
-
--struct statistics_tx {
-- __le32 preamble_cnt;
-- __le32 rx_detected_cnt;
-- __le32 bt_prio_defer_cnt;
-- __le32 bt_prio_kill_cnt;
-- __le32 few_bytes_cnt;
-- __le32 cts_timeout;
-- __le32 ack_timeout;
-- __le32 expected_ack_cnt;
-- __le32 actual_ack_cnt;
-#if IWL == 4965
-- __le32 dump_msdu_cnt;
-- __le32 burst_abort_next_frame_mismatch_cnt;
-- __le32 burst_abort_missing_next_frame_cnt;
-- __le32 cts_timeout_collision;
-- __le32 ack_or_ba_timeout_collision;
-- struct statistics_tx_non_phy_agg agg;
+- struct work_struct txpower_work;
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
+- struct work_struct sensitivity_work;
-#endif
--} __attribute__ ((packed));
--
--struct statistics_dbg {
-- __le32 burst_check;
-- __le32 burst_count;
-- __le32 reserved[4];
--} __attribute__ ((packed));
+- struct work_struct statistics_work;
+- struct timer_list statistics_periodic;
-
--struct statistics_div {
-- __le32 tx_on_a;
-- __le32 tx_on_b;
-- __le32 exec_time;
-- __le32 probe_time;
--#if IWL == 4965
-- __le32 reserved1;
-- __le32 reserved2;
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- struct work_struct agg_work;
-#endif
--} __attribute__ ((packed));
-
--struct statistics_general {
-- __le32 temperature;
--#if IWL == 4965
-- __le32 temperature_m;
--#endif
-- struct statistics_dbg dbg;
-- __le32 sleep_time;
-- __le32 slots_out;
-- __le32 slots_idle;
-- __le32 ttl_timestamp;
-- struct statistics_div div;
--#if IWL == 4965
-- __le32 rx_enable_counter;
-- __le32 reserved1;
-- __le32 reserved2;
-- __le32 reserved3;
--#endif
--} __attribute__ ((packed));
+-#endif /* 4965 */
+-}; /*iwl_priv */
+-
+-#endif /* __iwl_priv_h__ */
+diff --git a/drivers/net/wireless/iwlwifi/iwl-prph.h b/drivers/net/wireless/iwlwifi/iwl-prph.h
+index 0df4114..4ba1216 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-prph.h
++++ b/drivers/net/wireless/iwlwifi/iwl-prph.h
+@@ -8,7 +8,7 @@
+ * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+- * it under the terms of version 2 of the GNU Geeral Public License as
++ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+@@ -63,7 +63,10 @@
+ #ifndef __iwl_prph_h__
+ #define __iwl_prph_h__
+
-
++/*
++ * Registers in this file are internal, not PCI bus memory mapped.
++ * Driver accesses these via HBUS_TARG_PRPH_* registers.
++ */
+ #define PRPH_BASE (0x00000)
+ #define PRPH_END (0xFFFFF)
+
+@@ -226,4 +229,58 @@
+ #define BSM_SRAM_SIZE (1024) /* bytes */
+
+
++/* 3945 Tx scheduler registers */
++#define ALM_SCD_BASE (PRPH_BASE + 0x2E00)
++#define ALM_SCD_MODE_REG (ALM_SCD_BASE + 0x000)
++#define ALM_SCD_ARASTAT_REG (ALM_SCD_BASE + 0x004)
++#define ALM_SCD_TXFACT_REG (ALM_SCD_BASE + 0x010)
++#define ALM_SCD_TXF4MF_REG (ALM_SCD_BASE + 0x014)
++#define ALM_SCD_TXF5MF_REG (ALM_SCD_BASE + 0x020)
++#define ALM_SCD_SBYP_MODE_1_REG (ALM_SCD_BASE + 0x02C)
++#define ALM_SCD_SBYP_MODE_2_REG (ALM_SCD_BASE + 0x030)
++
++/*
++ * 4965 Tx Scheduler registers.
++ * Details are documented in iwl-4965-hw.h
++ */
++#define KDR_SCD_BASE (PRPH_BASE + 0xa02c00)
++
++#define KDR_SCD_SRAM_BASE_ADDR (KDR_SCD_BASE + 0x0)
++#define KDR_SCD_EMPTY_BITS (KDR_SCD_BASE + 0x4)
++#define KDR_SCD_DRAM_BASE_ADDR (KDR_SCD_BASE + 0x10)
++#define KDR_SCD_AIT (KDR_SCD_BASE + 0x18)
++#define KDR_SCD_TXFACT (KDR_SCD_BASE + 0x1c)
++#define KDR_SCD_QUEUE_WRPTR(x) (KDR_SCD_BASE + 0x24 + (x) * 4)
++#define KDR_SCD_QUEUE_RDPTR(x) (KDR_SCD_BASE + 0x64 + (x) * 4)
++#define KDR_SCD_SETQUEUENUM (KDR_SCD_BASE + 0xa4)
++#define KDR_SCD_SET_TXSTAT_TXED (KDR_SCD_BASE + 0xa8)
++#define KDR_SCD_SET_TXSTAT_DONE (KDR_SCD_BASE + 0xac)
++#define KDR_SCD_SET_TXSTAT_NOT_SCHD (KDR_SCD_BASE + 0xb0)
++#define KDR_SCD_DECREASE_CREDIT (KDR_SCD_BASE + 0xb4)
++#define KDR_SCD_DECREASE_SCREDIT (KDR_SCD_BASE + 0xb8)
++#define KDR_SCD_LOAD_CREDIT (KDR_SCD_BASE + 0xbc)
++#define KDR_SCD_LOAD_SCREDIT (KDR_SCD_BASE + 0xc0)
++#define KDR_SCD_BAR (KDR_SCD_BASE + 0xc4)
++#define KDR_SCD_BAR_DW0 (KDR_SCD_BASE + 0xc8)
++#define KDR_SCD_BAR_DW1 (KDR_SCD_BASE + 0xcc)
++#define KDR_SCD_QUEUECHAIN_SEL (KDR_SCD_BASE + 0xd0)
++#define KDR_SCD_QUERY_REQ (KDR_SCD_BASE + 0xd8)
++#define KDR_SCD_QUERY_RES (KDR_SCD_BASE + 0xdc)
++#define KDR_SCD_PENDING_FRAMES (KDR_SCD_BASE + 0xe0)
++#define KDR_SCD_INTERRUPT_MASK (KDR_SCD_BASE + 0xe4)
++#define KDR_SCD_INTERRUPT_THRESHOLD (KDR_SCD_BASE + 0xe8)
++#define KDR_SCD_QUERY_MIN_FRAME_SIZE (KDR_SCD_BASE + 0x100)
++#define KDR_SCD_QUEUE_STATUS_BITS(x) (KDR_SCD_BASE + 0x104 + (x) * 4)
++
++/* SP SCD */
++#define SHL_SCD_BASE (PRPH_BASE + 0xa02c00)
++
++#define SHL_SCD_AIT (SHL_SCD_BASE + 0x0c)
++#define SHL_SCD_TXFACT (SHL_SCD_BASE + 0x10)
++#define SHL_SCD_QUEUE_WRPTR(x) (SHL_SCD_BASE + 0x18 + (x) * 4)
++#define SHL_SCD_QUEUE_RDPTR(x) (SHL_SCD_BASE + 0x68 + (x) * 4)
++#define SHL_SCD_QUEUECHAIN_SEL (SHL_SCD_BASE + 0xe8)
++#define SHL_SCD_AGGR_SEL (SHL_SCD_BASE + 0x248)
++#define SHL_SCD_INTERRUPT_MASK (SHL_SCD_BASE + 0x108)
++
+ #endif /* __iwl_prph_h__ */
+diff --git a/drivers/net/wireless/iwlwifi/iwl3945-base.c b/drivers/net/wireless/iwlwifi/iwl3945-base.c
+index 0b3ec7e..748ac12 100644
+--- a/drivers/net/wireless/iwlwifi/iwl3945-base.c
++++ b/drivers/net/wireless/iwlwifi/iwl3945-base.c
+@@ -27,16 +27,6 @@
+ *
+ *****************************************************************************/
+
-/*
-- * REPLY_STATISTICS_CMD = 0x9c,
-- * 3945 and 4965 identical.
-- *
-- * This command triggers an immediate response containing uCode statistics.
-- * The response is in the same format as STATISTICS_NOTIFICATION 0x9d, below.
-- *
-- * If the CLEAR_STATS configuration flag is set, uCode will clear its
-- * internal copy of the statistics (counters) after issuing the response.
-- * This flag does not affect STATISTICS_NOTIFICATIONs after beacons (see below).
+- * NOTE: This file (iwl-base.c) is used to build to multiple hardware targets
+- * by defining IWL to either 3945 or 4965. The Makefile used when building
+- * the base targets will create base-3945.o and base-4965.o
- *
-- * If the DISABLE_NOTIF configuration flag is set, uCode will not issue
-- * STATISTICS_NOTIFICATIONs after received beacons (see below). This flag
-- * does not affect the response to the REPLY_STATISTICS_CMD 0x9c itself.
+- * The eventual goal is to move as many of the #if IWL / #endif blocks out of
+- * this file and into the hardware specific implementation files (iwl-XXXX.c)
+- * and leave only the common (non #ifdef sprinkled) code in this file
- */
--#define IWL_STATS_CONF_CLEAR_STATS __constant_cpu_to_le32(0x1) /* see above */
--#define IWL_STATS_CONF_DISABLE_NOTIF __constant_cpu_to_le32(0x2)/* see above */
--struct iwl_statistics_cmd {
-- __le32 configuration_flags; /* IWL_STATS_CONF_* */
--} __attribute__ ((packed));
-
--/*
-- * STATISTICS_NOTIFICATION = 0x9d (notification only, not a command)
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/version.h>
+@@ -56,16 +46,16 @@
+
+ #include <asm/div64.h>
+
+-#define IWL 3945
+-
+-#include "iwlwifi.h"
+ #include "iwl-3945.h"
+ #include "iwl-helpers.h"
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-u32 iwl_debug_level;
++#ifdef CONFIG_IWL3945_DEBUG
++u32 iwl3945_debug_level;
+ #endif
+
++static int iwl3945_tx_queue_update_write_ptr(struct iwl3945_priv *priv,
++ struct iwl3945_tx_queue *txq);
++
+ /******************************************************************************
+ *
+ * module boiler plate
+@@ -73,13 +63,13 @@ u32 iwl_debug_level;
+ ******************************************************************************/
+
+ /* module parameters */
+-int iwl_param_disable_hw_scan;
+-int iwl_param_debug;
+-int iwl_param_disable; /* def: enable radio */
+-int iwl_param_antenna; /* def: 0 = both antennas (use diversity) */
+-int iwl_param_hwcrypto; /* def: using software encryption */
+-int iwl_param_qos_enable = 1;
+-int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
++static int iwl3945_param_disable_hw_scan; /* def: 0 = use 3945's h/w scan */
++static int iwl3945_param_debug; /* def: 0 = minimal debug log messages */
++static int iwl3945_param_disable; /* def: 0 = enable radio */
++static int iwl3945_param_antenna; /* def: 0 = both antennas (use diversity) */
++int iwl3945_param_hwcrypto; /* def: 0 = use software encryption */
++static int iwl3945_param_qos_enable = 1; /* def: 1 = use quality of service */
++int iwl3945_param_queues_num = IWL_MAX_NUM_QUEUES; /* def: 8 Tx queues */
+
+ /*
+ * module name, copyright, version, etc.
+@@ -89,19 +79,19 @@ int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
+ #define DRV_DESCRIPTION \
+ "Intel(R) PRO/Wireless 3945ABG/BG Network Connection driver for Linux"
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
++#ifdef CONFIG_IWL3945_DEBUG
+ #define VD "d"
+ #else
+ #define VD
+ #endif
+
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
+ #define VS "s"
+ #else
+ #define VS
+ #endif
+
+-#define IWLWIFI_VERSION "1.1.17k" VD VS
++#define IWLWIFI_VERSION "1.2.23k" VD VS
+ #define DRV_COPYRIGHT "Copyright(c) 2003-2007 Intel Corporation"
+ #define DRV_VERSION IWLWIFI_VERSION
+
+@@ -116,7 +106,7 @@ MODULE_VERSION(DRV_VERSION);
+ MODULE_AUTHOR(DRV_COPYRIGHT);
+ MODULE_LICENSE("GPL");
+
+-__le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
++static __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
+ {
+ u16 fc = le16_to_cpu(hdr->frame_control);
+ int hdr_len = ieee80211_get_hdrlen(fc);
+@@ -126,8 +116,8 @@ __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
+ return NULL;
+ }
+
+-static const struct ieee80211_hw_mode *iwl_get_hw_mode(
+- struct iwl_priv *priv, int mode)
++static const struct ieee80211_hw_mode *iwl3945_get_hw_mode(
++ struct iwl3945_priv *priv, int mode)
+ {
+ int i;
+
+@@ -138,7 +128,7 @@ static const struct ieee80211_hw_mode *iwl_get_hw_mode(
+ return NULL;
+ }
+
+-static int iwl_is_empty_essid(const char *essid, int essid_len)
++static int iwl3945_is_empty_essid(const char *essid, int essid_len)
+ {
+ /* Single white space is for Linksys APs */
+ if (essid_len == 1 && essid[0] == ' ')
+@@ -154,13 +144,13 @@ static int iwl_is_empty_essid(const char *essid, int essid_len)
+ return 1;
+ }
+
+-static const char *iwl_escape_essid(const char *essid, u8 essid_len)
++static const char *iwl3945_escape_essid(const char *essid, u8 essid_len)
+ {
+ static char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
+ const char *s = essid;
+ char *d = escaped;
+
+- if (iwl_is_empty_essid(essid, essid_len)) {
++ if (iwl3945_is_empty_essid(essid, essid_len)) {
+ memcpy(escaped, "<hidden>", sizeof("<hidden>"));
+ return escaped;
+ }
+@@ -178,10 +168,10 @@ static const char *iwl_escape_essid(const char *essid, u8 essid_len)
+ return escaped;
+ }
+
+-static void iwl_print_hex_dump(int level, void *p, u32 len)
++static void iwl3945_print_hex_dump(int level, void *p, u32 len)
+ {
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (!(iwl_debug_level & level))
++#ifdef CONFIG_IWL3945_DEBUG
++ if (!(iwl3945_debug_level & level))
+ return;
+
+ print_hex_dump(KERN_DEBUG, "iwl data: ", DUMP_PREFIX_OFFSET, 16, 1,
+@@ -194,24 +184,31 @@ static void iwl_print_hex_dump(int level, void *p, u32 len)
+ *
+ * Theory of operation
+ *
+- * A queue is a circular buffers with 'Read' and 'Write' pointers.
+- * 2 empty entries always kept in the buffer to protect from overflow.
++ * A Tx or Rx queue resides in host DRAM, and is comprised of a circular buffer
++ * of buffer descriptors, each of which points to one or more data buffers for
++ * the device to read from or fill. Driver and device exchange status of each
++ * queue via "read" and "write" pointers. Driver keeps minimum of 2 empty
++ * entries in each circular buffer, to protect against confusing empty and full
++ * queue states.
++ *
++ * The device reads or writes the data in the queues via the device's several
++ * DMA/FIFO channels. Each queue is mapped to a single DMA channel.
+ *
+ * For Tx queue, there are low mark and high mark limits. If, after queuing
+ * the packet for Tx, free space become < low mark, Tx queue stopped. When
+ * reclaiming packets (on 'tx done IRQ), if free space become > high mark,
+ * Tx queue resumed.
+ *
+- * The IWL operates with six queues, one receive queue in the device's
+- * sram, one transmit queue for sending commands to the device firmware,
+- * and four transmit queues for data.
++ * The 3945 operates with six queues: One receive queue, one transmit queue
++ * (#4) for sending commands to the device firmware, and four transmit queues
++ * (#0-3) for data tx via EDCA. An additional 2 HCCA queues are unused.
+ ***************************************************/
+
+-static int iwl_queue_space(const struct iwl_queue *q)
++static int iwl3945_queue_space(const struct iwl3945_queue *q)
+ {
+- int s = q->last_used - q->first_empty;
++ int s = q->read_ptr - q->write_ptr;
+
+- if (q->last_used > q->first_empty)
++ if (q->read_ptr > q->write_ptr)
+ s -= q->n_bd;
+
+ if (s <= 0)
+@@ -223,42 +220,55 @@ static int iwl_queue_space(const struct iwl_queue *q)
+ return s;
+ }
+
+-/* XXX: n_bd must be power-of-two size */
+-static inline int iwl_queue_inc_wrap(int index, int n_bd)
++/**
++ * iwl3945_queue_inc_wrap - increment queue index, wrap back to beginning
++ * @index -- current index
++ * @n_bd -- total number of entries in queue (must be power of 2)
++ */
++static inline int iwl3945_queue_inc_wrap(int index, int n_bd)
+ {
+ return ++index & (n_bd - 1);
+ }
+
+-/* XXX: n_bd must be power-of-two size */
+-static inline int iwl_queue_dec_wrap(int index, int n_bd)
++/**
++ * iwl3945_queue_dec_wrap - increment queue index, wrap back to end
++ * @index -- current index
++ * @n_bd -- total number of entries in queue (must be power of 2)
++ */
++static inline int iwl3945_queue_dec_wrap(int index, int n_bd)
+ {
+ return --index & (n_bd - 1);
+ }
+
+-static inline int x2_queue_used(const struct iwl_queue *q, int i)
++static inline int x2_queue_used(const struct iwl3945_queue *q, int i)
+ {
+- return q->first_empty > q->last_used ?
+- (i >= q->last_used && i < q->first_empty) :
+- !(i < q->last_used && i >= q->first_empty);
++ return q->write_ptr > q->read_ptr ?
++ (i >= q->read_ptr && i < q->write_ptr) :
++ !(i < q->read_ptr && i >= q->write_ptr);
+ }
+
+-static inline u8 get_cmd_index(struct iwl_queue *q, u32 index, int is_huge)
++static inline u8 get_cmd_index(struct iwl3945_queue *q, u32 index, int is_huge)
+ {
++ /* This is for scan command, the big buffer at end of command array */
+ if (is_huge)
+- return q->n_window;
++ return q->n_window; /* must be power of 2 */
+
++ /* Otherwise, use normal size buffers */
+ return index & (q->n_window - 1);
+ }
+
+-static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
++/**
++ * iwl3945_queue_init - Initialize queue's high/low-water and read/write indexes
++ */
++static int iwl3945_queue_init(struct iwl3945_priv *priv, struct iwl3945_queue *q,
+ int count, int slots_num, u32 id)
+ {
+ q->n_bd = count;
+ q->n_window = slots_num;
+ q->id = id;
+
+- /* count must be power-of-two size, otherwise iwl_queue_inc_wrap
+- * and iwl_queue_dec_wrap are broken. */
++ /* count must be power-of-two size, otherwise iwl3945_queue_inc_wrap
++ * and iwl3945_queue_dec_wrap are broken. */
+ BUG_ON(!is_power_of_2(count));
+
+ /* slots_num must be power-of-two size, otherwise
+@@ -273,27 +283,34 @@ static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
+ if (q->high_mark < 2)
+ q->high_mark = 2;
+
+- q->first_empty = q->last_used = 0;
++ q->write_ptr = q->read_ptr = 0;
+
+ return 0;
+ }
+
+-static int iwl_tx_queue_alloc(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq, u32 id)
++/**
++ * iwl3945_tx_queue_alloc - Alloc driver data and TFD CB for one Tx/cmd queue
++ */
++static int iwl3945_tx_queue_alloc(struct iwl3945_priv *priv,
++ struct iwl3945_tx_queue *txq, u32 id)
+ {
+ struct pci_dev *dev = priv->pci_dev;
+
++ /* Driver private data, only for Tx (not command) queues,
++ * not shared with device. */
+ if (id != IWL_CMD_QUEUE_NUM) {
+ txq->txb = kmalloc(sizeof(txq->txb[0]) *
+ TFD_QUEUE_SIZE_MAX, GFP_KERNEL);
+ if (!txq->txb) {
+- IWL_ERROR("kmalloc for auxilary BD "
++ IWL_ERROR("kmalloc for auxiliary BD "
+ "structures failed\n");
+ goto error;
+ }
+ } else
+ txq->txb = NULL;
+
++ /* Circular buffer of transmit frame descriptors (TFDs),
++ * shared with device */
+ txq->bd = pci_alloc_consistent(dev,
+ sizeof(txq->bd[0]) * TFD_QUEUE_SIZE_MAX,
+ &txq->q.dma_addr);
+@@ -316,24 +333,33 @@ static int iwl_tx_queue_alloc(struct iwl_priv *priv,
+ return -ENOMEM;
+ }
+
+-int iwl_tx_queue_init(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq, int slots_num, u32 txq_id)
++/**
++ * iwl3945_tx_queue_init - Allocate and initialize one tx/cmd queue
++ */
++int iwl3945_tx_queue_init(struct iwl3945_priv *priv,
++ struct iwl3945_tx_queue *txq, int slots_num, u32 txq_id)
+ {
+ struct pci_dev *dev = priv->pci_dev;
+ int len;
+ int rc = 0;
+
+- /* alocate command space + one big command for scan since scan
+- * command is very huge the system will not have two scan at the
+- * same time */
+- len = sizeof(struct iwl_cmd) * slots_num;
++ /*
++ * Alloc buffer array for commands (Tx or other types of commands).
++ * For the command queue (#4), allocate command space + one big
++ * command for scan, since scan command is very huge; the system will
++ * not have two scans at the same time, so only one is needed.
++ * For data Tx queues (all other queues), no super-size command
++ * space is needed.
++ */
++ len = sizeof(struct iwl3945_cmd) * slots_num;
+ if (txq_id == IWL_CMD_QUEUE_NUM)
+ len += IWL_MAX_SCAN_SIZE;
+ txq->cmd = pci_alloc_consistent(dev, len, &txq->dma_addr_cmd);
+ if (!txq->cmd)
+ return -ENOMEM;
+
+- rc = iwl_tx_queue_alloc(priv, txq, txq_id);
++ /* Alloc driver data array and TFD circular buffer */
++ rc = iwl3945_tx_queue_alloc(priv, txq, txq_id);
+ if (rc) {
+ pci_free_consistent(dev, len, txq->cmd, txq->dma_addr_cmd);
+
+@@ -342,26 +368,29 @@ int iwl_tx_queue_init(struct iwl_priv *priv,
+ txq->need_update = 0;
+
+ /* TFD_QUEUE_SIZE_MAX must be power-of-two size, otherwise
+- * iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
++ * iwl3945_queue_inc_wrap and iwl3945_queue_dec_wrap are broken. */
+ BUILD_BUG_ON(TFD_QUEUE_SIZE_MAX & (TFD_QUEUE_SIZE_MAX - 1));
+- iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
+
+- iwl_hw_tx_queue_init(priv, txq);
++ /* Initialize queue high/low-water, head/tail indexes */
++ iwl3945_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
++
++ /* Tell device where to find queue, enable DMA channel. */
++ iwl3945_hw_tx_queue_init(priv, txq);
+
+ return 0;
+ }
+
+ /**
+- * iwl_tx_queue_free - Deallocate DMA queue.
++ * iwl3945_tx_queue_free - Deallocate DMA queue.
+ * @txq: Transmit queue to deallocate.
+ *
+ * Empty queue by removing and destroying all BD's.
+- * Free all buffers. txq itself is not freed.
- *
-- * By default, uCode issues this notification after receiving a beacon
-- * while associated. To disable this behavior, set DISABLE_NOTIF flag in the
-- * REPLY_STATISTICS_CMD 0x9c, above.
++ * Free all buffers.
++ * 0-fill, but do not free "txq" descriptor structure.
+ */
+-void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
++void iwl3945_tx_queue_free(struct iwl3945_priv *priv, struct iwl3945_tx_queue *txq)
+ {
+- struct iwl_queue *q = &txq->q;
++ struct iwl3945_queue *q = &txq->q;
+ struct pci_dev *dev = priv->pci_dev;
+ int len;
+
+@@ -369,44 +398,47 @@ void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
+ return;
+
+ /* first, empty all BD's */
+- for (; q->first_empty != q->last_used;
+- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd))
+- iwl_hw_txq_free_tfd(priv, txq);
++ for (; q->write_ptr != q->read_ptr;
++ q->read_ptr = iwl3945_queue_inc_wrap(q->read_ptr, q->n_bd))
++ iwl3945_hw_txq_free_tfd(priv, txq);
+
+- len = sizeof(struct iwl_cmd) * q->n_window;
++ len = sizeof(struct iwl3945_cmd) * q->n_window;
+ if (q->id == IWL_CMD_QUEUE_NUM)
+ len += IWL_MAX_SCAN_SIZE;
+
++ /* De-alloc array of command/tx buffers */
+ pci_free_consistent(dev, len, txq->cmd, txq->dma_addr_cmd);
+
+- /* free buffers belonging to queue itself */
++ /* De-alloc circular buffer of TFDs */
+ if (txq->q.n_bd)
+- pci_free_consistent(dev, sizeof(struct iwl_tfd_frame) *
++ pci_free_consistent(dev, sizeof(struct iwl3945_tfd_frame) *
+ txq->q.n_bd, txq->bd, txq->q.dma_addr);
+
++ /* De-alloc array of per-TFD driver data */
+ if (txq->txb) {
+ kfree(txq->txb);
+ txq->txb = NULL;
+ }
+
+- /* 0 fill whole structure */
++ /* 0-fill queue descriptor structure */
+ memset(txq, 0, sizeof(*txq));
+ }
+
+-const u8 BROADCAST_ADDR[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
++const u8 iwl3945_broadcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
+
+ /*************** STATION TABLE MANAGEMENT ****
- *
-- * Statistics counters continue to increment beacon after beacon, but are
-- * cleared when changing channels or when driver issues REPLY_STATISTICS_CMD
-- * 0x9c with CLEAR_STATS bit set (see above).
+- * NOTE: This needs to be overhauled to better synchronize between
+- * how the iwl-4965.c is using iwl_hw_find_station vs. iwl-3945.c
- *
-- * uCode also issues this notification during scans. uCode clears statistics
-- * appropriately so that each notification contains statistics for only the
-- * one channel that has just been scanned.
-- */
--#define STATISTICS_REPLY_FLG_BAND_24G_MSK __constant_cpu_to_le32(0x2)
--#define STATISTICS_REPLY_FLG_FAT_MODE_MSK __constant_cpu_to_le32(0x8)
--struct iwl_notif_statistics {
-- __le32 flag;
-- struct statistics_rx rx;
-- struct statistics_tx tx;
-- struct statistics_general general;
--} __attribute__ ((packed));
+- * mac80211 should also be examined to determine if sta_info is duplicating
++ * mac80211 should be examined to determine if sta_info is duplicating
+ * the functionality provided here
+ */
+
+ /**************************************************************/
+-#if 0 /* temparary disable till we add real remove station */
+-static u8 iwl_remove_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
++#if 0 /* temporarily disabled till we add real remove station */
++/**
++ * iwl3945_remove_station - Remove driver's knowledge of station.
++ *
++ * NOTE: This does not remove station from device's station table.
++ */
++static u8 iwl3945_remove_station(struct iwl3945_priv *priv, const u8 *addr, int is_ap)
+ {
+ int index = IWL_INVALID_STATION;
+ int i;
+@@ -442,7 +474,13 @@ out:
+ return 0;
+ }
+ #endif
+-static void iwl_clear_stations_table(struct iwl_priv *priv)
++
++/**
++ * iwl3945_clear_stations_table - Clear the driver's station table
++ *
++ * NOTE: This does not clear or otherwise alter the device's station table.
++ */
++static void iwl3945_clear_stations_table(struct iwl3945_priv *priv)
+ {
+ unsigned long flags;
+
+@@ -454,12 +492,14 @@ static void iwl_clear_stations_table(struct iwl_priv *priv)
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+ }
+
-
+-u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
++/**
++ * iwl3945_add_station - Add station to station tables in driver and device
++ */
++u8 iwl3945_add_station(struct iwl3945_priv *priv, const u8 *addr, int is_ap, u8 flags)
+ {
+ int i;
+ int index = IWL_INVALID_STATION;
+- struct iwl_station_entry *station;
++ struct iwl3945_station_entry *station;
+ unsigned long flags_spin;
+ DECLARE_MAC_BUF(mac);
+ u8 rate;
+@@ -482,7 +522,7 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
+ index = i;
+ }
+
+- /* These twh conditions has the same outcome but keep them separate
++ /* These two conditions have the same outcome but keep them separate
+ since they have different meaning */
+ if (unlikely(index == IWL_INVALID_STATION)) {
+ spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
+@@ -500,30 +540,35 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
+ station->used = 1;
+ priv->num_stations++;
+
+- memset(&station->sta, 0, sizeof(struct iwl_addsta_cmd));
++ /* Set up the REPLY_ADD_STA command to send to device */
++ memset(&station->sta, 0, sizeof(struct iwl3945_addsta_cmd));
+ memcpy(station->sta.sta.addr, addr, ETH_ALEN);
+ station->sta.mode = 0;
+ station->sta.sta.sta_id = index;
+ station->sta.station_flags = 0;
+
+- rate = (priv->phymode == MODE_IEEE80211A) ? IWL_RATE_6M_PLCP :
+- IWL_RATE_1M_PLCP | priv->hw_setting.cck_flag;
++ if (priv->phymode == MODE_IEEE80211A)
++ rate = IWL_RATE_6M_PLCP;
++ else
++ rate = IWL_RATE_1M_PLCP;
+
+ /* Turn on both antennas for the station... */
+ station->sta.rate_n_flags =
+- iwl_hw_set_rate_n_flags(rate, RATE_MCS_ANT_AB_MSK);
++ iwl3945_hw_set_rate_n_flags(rate, RATE_MCS_ANT_AB_MSK);
+ station->current_rate.rate_n_flags =
+ le16_to_cpu(station->sta.rate_n_flags);
+
+ spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
+- iwl_send_add_station(priv, &station->sta, flags);
++
++ /* Add station to device's station table */
++ iwl3945_send_add_station(priv, &station->sta, flags);
+ return index;
+
+ }
+
+ /*************** DRIVER STATUS FUNCTIONS *****/
+
+-static inline int iwl_is_ready(struct iwl_priv *priv)
++static inline int iwl3945_is_ready(struct iwl3945_priv *priv)
+ {
+ /* The adapter is 'ready' if READY and GEO_CONFIGURED bits are
+ * set but EXIT_PENDING is not */
+@@ -532,29 +577,29 @@ static inline int iwl_is_ready(struct iwl_priv *priv)
+ !test_bit(STATUS_EXIT_PENDING, &priv->status);
+ }
+
+-static inline int iwl_is_alive(struct iwl_priv *priv)
++static inline int iwl3945_is_alive(struct iwl3945_priv *priv)
+ {
+ return test_bit(STATUS_ALIVE, &priv->status);
+ }
+
+-static inline int iwl_is_init(struct iwl_priv *priv)
++static inline int iwl3945_is_init(struct iwl3945_priv *priv)
+ {
+ return test_bit(STATUS_INIT, &priv->status);
+ }
+
+-static inline int iwl_is_rfkill(struct iwl_priv *priv)
++static inline int iwl3945_is_rfkill(struct iwl3945_priv *priv)
+ {
+ return test_bit(STATUS_RF_KILL_HW, &priv->status) ||
+ test_bit(STATUS_RF_KILL_SW, &priv->status);
+ }
+
+-static inline int iwl_is_ready_rf(struct iwl_priv *priv)
++static inline int iwl3945_is_ready_rf(struct iwl3945_priv *priv)
+ {
+
+- if (iwl_is_rfkill(priv))
++ if (iwl3945_is_rfkill(priv))
+ return 0;
+
+- return iwl_is_ready(priv);
++ return iwl3945_is_ready(priv);
+ }
+
+ /*************** HOST COMMAND QUEUE FUNCTIONS *****/
+@@ -613,7 +658,7 @@ static const char *get_cmd_string(u8 cmd)
+ #define HOST_COMPLETE_TIMEOUT (HZ / 2)
+
+ /**
+- * iwl_enqueue_hcmd - enqueue a uCode command
++ * iwl3945_enqueue_hcmd - enqueue a uCode command
+ * @priv: device private data pointer
+ * @cmd: a pointer to the ucode command structure
+ *
+@@ -621,13 +666,13 @@ static const char *get_cmd_string(u8 cmd)
+ * failed. On success, it returns the index (> 0) of the command in the
+ * command queue.
+ */
+-static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
++static int iwl3945_enqueue_hcmd(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
+ {
+- struct iwl_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
+- struct iwl_queue *q = &txq->q;
+- struct iwl_tfd_frame *tfd;
++ struct iwl3945_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
++ struct iwl3945_queue *q = &txq->q;
++ struct iwl3945_tfd_frame *tfd;
+ u32 *control_flags;
+- struct iwl_cmd *out_cmd;
++ struct iwl3945_cmd *out_cmd;
+ u32 idx;
+ u16 fix_size = (u16)(cmd->len + sizeof(out_cmd->hdr));
+ dma_addr_t phys_addr;
+@@ -642,19 +687,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ BUG_ON((fix_size > TFD_MAX_PAYLOAD_SIZE) &&
+ !(cmd->meta.flags & CMD_SIZE_HUGE));
+
+- if (iwl_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
++ if (iwl3945_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
+ IWL_ERROR("No space for Tx\n");
+ return -ENOSPC;
+ }
+
+ spin_lock_irqsave(&priv->hcmd_lock, flags);
+
+- tfd = &txq->bd[q->first_empty];
++ tfd = &txq->bd[q->write_ptr];
+ memset(tfd, 0, sizeof(*tfd));
+
+ control_flags = (u32 *) tfd;
+
+- idx = get_cmd_index(q, q->first_empty, cmd->meta.flags & CMD_SIZE_HUGE);
++ idx = get_cmd_index(q, q->write_ptr, cmd->meta.flags & CMD_SIZE_HUGE);
+ out_cmd = &txq->cmd[idx];
+
+ out_cmd->hdr.cmd = cmd->id;
+@@ -666,13 +711,13 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+
+ out_cmd->hdr.flags = 0;
+ out_cmd->hdr.sequence = cpu_to_le16(QUEUE_TO_SEQ(IWL_CMD_QUEUE_NUM) |
+- INDEX_TO_SEQ(q->first_empty));
++ INDEX_TO_SEQ(q->write_ptr));
+ if (out_cmd->meta.flags & CMD_SIZE_HUGE)
+ out_cmd->hdr.sequence |= cpu_to_le16(SEQ_HUGE_FRAME);
+
+ phys_addr = txq->dma_addr_cmd + sizeof(txq->cmd[0]) * idx +
+- offsetof(struct iwl_cmd, hdr);
+- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
++ offsetof(struct iwl3945_cmd, hdr);
++ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
+
+ pad = U32_PAD(cmd->len);
+ count = TFD_CTL_COUNT_GET(*control_flags);
+@@ -682,17 +727,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ "%d bytes at %d[%d]:%d\n",
+ get_cmd_string(out_cmd->hdr.cmd),
+ out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence),
+- fix_size, q->first_empty, idx, IWL_CMD_QUEUE_NUM);
++ fix_size, q->write_ptr, idx, IWL_CMD_QUEUE_NUM);
+
+ txq->need_update = 1;
+- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
+- ret = iwl_tx_queue_update_write_ptr(priv, txq);
++
++ /* Increment and update queue's write index */
++ q->write_ptr = iwl3945_queue_inc_wrap(q->write_ptr, q->n_bd);
++ ret = iwl3945_tx_queue_update_write_ptr(priv, txq);
+
+ spin_unlock_irqrestore(&priv->hcmd_lock, flags);
+ return ret ? ret : idx;
+ }
+
+-int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
++static int iwl3945_send_cmd_async(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
+ {
+ int ret;
+
+@@ -707,16 +754,16 @@ int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return -EBUSY;
+
+- ret = iwl_enqueue_hcmd(priv, cmd);
++ ret = iwl3945_enqueue_hcmd(priv, cmd);
+ if (ret < 0) {
+- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
++ IWL_ERROR("Error sending %s: iwl3945_enqueue_hcmd failed: %d\n",
+ get_cmd_string(cmd->id), ret);
+ return ret;
+ }
+ return 0;
+ }
+
+-int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
++static int iwl3945_send_cmd_sync(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
+ {
+ int cmd_idx;
+ int ret;
+@@ -738,10 +785,10 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ if (cmd->meta.flags & CMD_WANT_SKB)
+ cmd->meta.source = &cmd->meta;
+
+- cmd_idx = iwl_enqueue_hcmd(priv, cmd);
++ cmd_idx = iwl3945_enqueue_hcmd(priv, cmd);
+ if (cmd_idx < 0) {
+ ret = cmd_idx;
+- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
++ IWL_ERROR("Error sending %s: iwl3945_enqueue_hcmd failed: %d\n",
+ get_cmd_string(cmd->id), ret);
+ goto out;
+ }
+@@ -785,7 +832,7 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+
+ cancel:
+ if (cmd->meta.flags & CMD_WANT_SKB) {
+- struct iwl_cmd *qcmd;
++ struct iwl3945_cmd *qcmd;
+
+ /* Cancel the CMD_WANT_SKB flag for the cmd in the
+ * TX cmd queue. Otherwise in case the cmd comes
+@@ -804,47 +851,43 @@ out:
+ return ret;
+ }
+
+-int iwl_send_cmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
++int iwl3945_send_cmd(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
+ {
+- /* A command can not be asynchronous AND expect an SKB to be set. */
+- BUG_ON((cmd->meta.flags & CMD_ASYNC) &&
+- (cmd->meta.flags & CMD_WANT_SKB));
-
--/*
-- * MISSED_BEACONS_NOTIFICATION = 0xa2 (notification only, not a command)
-- */
--/* if ucode missed CONSECUTIVE_MISSED_BCONS_TH beacons in a row,
-- * then this notification will be sent. */
--#define CONSECUTIVE_MISSED_BCONS_TH 20
+ if (cmd->meta.flags & CMD_ASYNC)
+- return iwl_send_cmd_async(priv, cmd);
++ return iwl3945_send_cmd_async(priv, cmd);
+
+- return iwl_send_cmd_sync(priv, cmd);
++ return iwl3945_send_cmd_sync(priv, cmd);
+ }
+
+-int iwl_send_cmd_pdu(struct iwl_priv *priv, u8 id, u16 len, const void *data)
++int iwl3945_send_cmd_pdu(struct iwl3945_priv *priv, u8 id, u16 len, const void *data)
+ {
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_host_cmd cmd = {
+ .id = id,
+ .len = len,
+ .data = data,
+ };
+
+- return iwl_send_cmd_sync(priv, &cmd);
++ return iwl3945_send_cmd_sync(priv, &cmd);
+ }
+
+-static int __must_check iwl_send_cmd_u32(struct iwl_priv *priv, u8 id, u32 val)
++static int __must_check iwl3945_send_cmd_u32(struct iwl3945_priv *priv, u8 id, u32 val)
+ {
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_host_cmd cmd = {
+ .id = id,
+ .len = sizeof(val),
+ .data = &val,
+ };
+
+- return iwl_send_cmd_sync(priv, &cmd);
++ return iwl3945_send_cmd_sync(priv, &cmd);
+ }
+
+-int iwl_send_statistics_request(struct iwl_priv *priv)
++int iwl3945_send_statistics_request(struct iwl3945_priv *priv)
+ {
+- return iwl_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
++ return iwl3945_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
+ }
+
+ /**
+- * iwl_set_rxon_channel - Set the phymode and channel values in staging RXON
++ * iwl3945_set_rxon_channel - Set the phymode and channel values in staging RXON
+ * @phymode: MODE_IEEE80211A sets to 5.2GHz; all else set to 2.4GHz
+ * @channel: Any channel valid for the requested phymode
+
+@@ -853,9 +896,9 @@ int iwl_send_statistics_request(struct iwl_priv *priv)
+ * NOTE: Does not commit to the hardware; it sets appropriate bit fields
+ * in the staging RXON flag structure based on the phymode
+ */
+-static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
++static int iwl3945_set_rxon_channel(struct iwl3945_priv *priv, u8 phymode, u16 channel)
+ {
+- if (!iwl_get_channel_info(priv, phymode, channel)) {
++ if (!iwl3945_get_channel_info(priv, phymode, channel)) {
+ IWL_DEBUG_INFO("Could not set channel to %d [%d]\n",
+ channel, phymode);
+ return -EINVAL;
+@@ -879,13 +922,13 @@ static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
+ }
+
+ /**
+- * iwl_check_rxon_cmd - validate RXON structure is valid
++ * iwl3945_check_rxon_cmd - validate RXON structure is valid
+ *
+ * NOTE: This is really only useful during development and can eventually
+ * be #ifdef'd out once the driver is stable and folks aren't actively
+ * making changes
+ */
+-static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
++static int iwl3945_check_rxon_cmd(struct iwl3945_rxon_cmd *rxon)
+ {
+ int error = 0;
+ int counter = 1;
+@@ -951,21 +994,21 @@ static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
+ le16_to_cpu(rxon->channel));
+
+ if (error) {
+- IWL_ERROR("Not a valid iwl_rxon_assoc_cmd field values\n");
++ IWL_ERROR("Invalid iwl3945_rxon_assoc_cmd field values\n");
+ return -1;
+ }
+ return 0;
+ }
+
+ /**
+- * iwl_full_rxon_required - determine if RXON_ASSOC can be used in RXON commit
+- * @priv: staging_rxon is comapred to active_rxon
++ * iwl3945_full_rxon_required - check if full RXON (vs RXON_ASSOC) cmd is needed
++ * @priv: staging_rxon is compared to active_rxon
+ *
+- * If the RXON structure is changing sufficient to require a new
+- * tune or to clear and reset the RXON_FILTER_ASSOC_MSK then return 1
+- * to indicate a new tune is required.
++ * If the RXON structure is changing enough to require a new tune,
++ * or is clearing the RXON_FILTER_ASSOC_MSK, then return 1 to indicate that
++ * a new tune (full RXON command, rather than RXON_ASSOC cmd) is required.
+ */
+-static int iwl_full_rxon_required(struct iwl_priv *priv)
++static int iwl3945_full_rxon_required(struct iwl3945_priv *priv)
+ {
+
+ /* These items are only settable from the full RXON command */
+@@ -1000,19 +1043,19 @@ static int iwl_full_rxon_required(struct iwl_priv *priv)
+ return 0;
+ }
+
+-static int iwl_send_rxon_assoc(struct iwl_priv *priv)
++static int iwl3945_send_rxon_assoc(struct iwl3945_priv *priv)
+ {
+ int rc = 0;
+- struct iwl_rx_packet *res = NULL;
+- struct iwl_rxon_assoc_cmd rxon_assoc;
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_rx_packet *res = NULL;
++ struct iwl3945_rxon_assoc_cmd rxon_assoc;
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_RXON_ASSOC,
+ .len = sizeof(rxon_assoc),
+ .meta.flags = CMD_WANT_SKB,
+ .data = &rxon_assoc,
+ };
+- const struct iwl_rxon_cmd *rxon1 = &priv->staging_rxon;
+- const struct iwl_rxon_cmd *rxon2 = &priv->active_rxon;
++ const struct iwl3945_rxon_cmd *rxon1 = &priv->staging_rxon;
++ const struct iwl3945_rxon_cmd *rxon2 = &priv->active_rxon;
+
+ if ((rxon1->flags == rxon2->flags) &&
+ (rxon1->filter_flags == rxon2->filter_flags) &&
+@@ -1028,11 +1071,11 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
+ rxon_assoc.cck_basic_rates = priv->staging_rxon.cck_basic_rates;
+ rxon_assoc.reserved = 0;
+
+- rc = iwl_send_cmd_sync(priv, &cmd);
++ rc = iwl3945_send_cmd_sync(priv, &cmd);
+ if (rc)
+ return rc;
+
+- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
+ if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
+ IWL_ERROR("Bad return from REPLY_RXON_ASSOC command\n");
+ rc = -EIO;
+@@ -1045,21 +1088,21 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
+ }
+
+ /**
+- * iwl_commit_rxon - commit staging_rxon to hardware
++ * iwl3945_commit_rxon - commit staging_rxon to hardware
+ *
+- * The RXON command in staging_rxon is commited to the hardware and
++ * The RXON command in staging_rxon is committed to the hardware and
+ * the active_rxon structure is updated with the new data. This
+ * function correctly transitions out of the RXON_ASSOC_MSK state if
+ * a HW tune is required based on the RXON structure changes.
+ */
+-static int iwl_commit_rxon(struct iwl_priv *priv)
++static int iwl3945_commit_rxon(struct iwl3945_priv *priv)
+ {
+ /* cast away the const for active_rxon in this function */
+- struct iwl_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
++ struct iwl3945_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
+ int rc = 0;
+ DECLARE_MAC_BUF(mac);
+
+- if (!iwl_is_alive(priv))
++ if (!iwl3945_is_alive(priv))
+ return -1;
+
+ /* always get timestamp with Rx frame */
+@@ -1070,17 +1113,17 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ ~(RXON_FLG_DIS_DIV_MSK | RXON_FLG_ANT_SEL_MSK);
+ priv->staging_rxon.flags |= iwl3945_get_antenna_flags(priv);
+
+- rc = iwl_check_rxon_cmd(&priv->staging_rxon);
++ rc = iwl3945_check_rxon_cmd(&priv->staging_rxon);
+ if (rc) {
+ IWL_ERROR("Invalid RXON configuration. Not committing.\n");
+ return -EINVAL;
+ }
+
+ /* If we don't need to send a full RXON, we can use
+- * iwl_rxon_assoc_cmd which is used to reconfigure filter
++ * iwl3945_rxon_assoc_cmd which is used to reconfigure filter
+ * and other flags for the current radio configuration. */
+- if (!iwl_full_rxon_required(priv)) {
+- rc = iwl_send_rxon_assoc(priv);
++ if (!iwl3945_full_rxon_required(priv)) {
++ rc = iwl3945_send_rxon_assoc(priv);
+ if (rc) {
+ IWL_ERROR("Error setting RXON_ASSOC "
+ "configuration (%d).\n", rc);
+@@ -1096,13 +1139,13 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ * an RXON_ASSOC and the new config wants the associated mask enabled,
+ * we must clear the associated from the active configuration
+ * before we apply the new config */
+- if (iwl_is_associated(priv) &&
++ if (iwl3945_is_associated(priv) &&
+ (priv->staging_rxon.filter_flags & RXON_FILTER_ASSOC_MSK)) {
+ IWL_DEBUG_INFO("Toggling associated bit on current RXON\n");
+ active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+
+- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
+- sizeof(struct iwl_rxon_cmd),
++ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON,
++ sizeof(struct iwl3945_rxon_cmd),
+ &priv->active_rxon);
+
+ /* If the mask clearing failed then we set
+@@ -1125,8 +1168,8 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ print_mac(mac, priv->staging_rxon.bssid_addr));
+
+ /* Apply the new configuration */
+- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
+- sizeof(struct iwl_rxon_cmd), &priv->staging_rxon);
++ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON,
++ sizeof(struct iwl3945_rxon_cmd), &priv->staging_rxon);
+ if (rc) {
+ IWL_ERROR("Error setting new configuration (%d).\n", rc);
+ return rc;
+@@ -1134,18 +1177,18 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+
+ memcpy(active_rxon, &priv->staging_rxon, sizeof(*active_rxon));
+
+- iwl_clear_stations_table(priv);
++ iwl3945_clear_stations_table(priv);
+
+ /* If we issue a new RXON command which required a tune then we must
+ * send a new TXPOWER command or we won't be able to Tx any frames */
+- rc = iwl_hw_reg_send_txpower(priv);
++ rc = iwl3945_hw_reg_send_txpower(priv);
+ if (rc) {
+ IWL_ERROR("Error setting Tx power (%d).\n", rc);
+ return rc;
+ }
+
+ /* Add the broadcast address so we can send broadcast frames */
+- if (iwl_add_station(priv, BROADCAST_ADDR, 0, 0) ==
++ if (iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0) ==
+ IWL_INVALID_STATION) {
+ IWL_ERROR("Error adding BROADCAST address for transmit.\n");
+ return -EIO;
+@@ -1153,9 +1196,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+
+ /* If we have set the ASSOC_MSK and we are in BSS mode then
+ * add the IWL_AP_ID to the station rate table */
+- if (iwl_is_associated(priv) &&
++ if (iwl3945_is_associated(priv) &&
+ (priv->iw_mode == IEEE80211_IF_TYPE_STA))
+- if (iwl_add_station(priv, priv->active_rxon.bssid_addr, 1, 0)
++ if (iwl3945_add_station(priv, priv->active_rxon.bssid_addr, 1, 0)
+ == IWL_INVALID_STATION) {
+ IWL_ERROR("Error adding AP address for transmit.\n");
+ return -EIO;
+@@ -1172,9 +1215,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ return 0;
+ }
+
+-static int iwl_send_bt_config(struct iwl_priv *priv)
++static int iwl3945_send_bt_config(struct iwl3945_priv *priv)
+ {
+- struct iwl_bt_cmd bt_cmd = {
++ struct iwl3945_bt_cmd bt_cmd = {
+ .flags = 3,
+ .lead_time = 0xAA,
+ .max_kill = 1,
+@@ -1182,15 +1225,15 @@ static int iwl_send_bt_config(struct iwl_priv *priv)
+ .kill_cts_mask = 0,
+ };
+
+- return iwl_send_cmd_pdu(priv, REPLY_BT_CONFIG,
+- sizeof(struct iwl_bt_cmd), &bt_cmd);
++ return iwl3945_send_cmd_pdu(priv, REPLY_BT_CONFIG,
++ sizeof(struct iwl3945_bt_cmd), &bt_cmd);
+ }
+
+-static int iwl_send_scan_abort(struct iwl_priv *priv)
++static int iwl3945_send_scan_abort(struct iwl3945_priv *priv)
+ {
+ int rc = 0;
+- struct iwl_rx_packet *res;
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_rx_packet *res;
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_SCAN_ABORT_CMD,
+ .meta.flags = CMD_WANT_SKB,
+ };
+@@ -1203,13 +1246,13 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
+ return 0;
+ }
+
+- rc = iwl_send_cmd_sync(priv, &cmd);
++ rc = iwl3945_send_cmd_sync(priv, &cmd);
+ if (rc) {
+ clear_bit(STATUS_SCAN_ABORTING, &priv->status);
+ return rc;
+ }
+
+- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
+ if (res->u.status != CAN_ABORT_STATUS) {
+ /* The scan abort will return 1 for success or
+ * 2 for "failure". A failure condition can be
+@@ -1227,8 +1270,8 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
+ return rc;
+ }
+
+-static int iwl_card_state_sync_callback(struct iwl_priv *priv,
+- struct iwl_cmd *cmd,
++static int iwl3945_card_state_sync_callback(struct iwl3945_priv *priv,
++ struct iwl3945_cmd *cmd,
+ struct sk_buff *skb)
+ {
+ return 1;
+@@ -1237,16 +1280,16 @@ static int iwl_card_state_sync_callback(struct iwl_priv *priv,
+ /*
+ * CARD_STATE_CMD
+ *
+- * Use: Sets the internal card state to enable, disable, or halt
++ * Use: Sets the device's internal card state to enable, disable, or halt
+ *
+ * When in the 'enable' state the card operates as normal.
+ * When in the 'disable' state, the card enters into a low power mode.
+ * When in the 'halt' state, the card is shut down and must be fully
+ * restarted to come back on.
+ */
+-static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
++static int iwl3945_send_card_state(struct iwl3945_priv *priv, u32 flags, u8 meta_flag)
+ {
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_CARD_STATE_CMD,
+ .len = sizeof(u32),
+ .data = &flags,
+@@ -1254,22 +1297,22 @@ static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
+ };
+
+ if (meta_flag & CMD_ASYNC)
+- cmd.meta.u.callback = iwl_card_state_sync_callback;
++ cmd.meta.u.callback = iwl3945_card_state_sync_callback;
+
+- return iwl_send_cmd(priv, &cmd);
++ return iwl3945_send_cmd(priv, &cmd);
+ }
+
+-static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
+- struct iwl_cmd *cmd, struct sk_buff *skb)
++static int iwl3945_add_sta_sync_callback(struct iwl3945_priv *priv,
++ struct iwl3945_cmd *cmd, struct sk_buff *skb)
+ {
+- struct iwl_rx_packet *res = NULL;
++ struct iwl3945_rx_packet *res = NULL;
+
+ if (!skb) {
+ IWL_ERROR("Error: Response NULL in REPLY_ADD_STA.\n");
+ return 1;
+ }
+
+- res = (struct iwl_rx_packet *)skb->data;
++ res = (struct iwl3945_rx_packet *)skb->data;
+ if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
+ IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
+ res->hdr.flags);
+@@ -1287,29 +1330,29 @@ static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
+ return 1;
+ }
+
+-int iwl_send_add_station(struct iwl_priv *priv,
+- struct iwl_addsta_cmd *sta, u8 flags)
++int iwl3945_send_add_station(struct iwl3945_priv *priv,
++ struct iwl3945_addsta_cmd *sta, u8 flags)
+ {
+- struct iwl_rx_packet *res = NULL;
++ struct iwl3945_rx_packet *res = NULL;
+ int rc = 0;
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_ADD_STA,
+- .len = sizeof(struct iwl_addsta_cmd),
++ .len = sizeof(struct iwl3945_addsta_cmd),
+ .meta.flags = flags,
+ .data = sta,
+ };
+
+ if (flags & CMD_ASYNC)
+- cmd.meta.u.callback = iwl_add_sta_sync_callback;
++ cmd.meta.u.callback = iwl3945_add_sta_sync_callback;
+ else
+ cmd.meta.flags |= CMD_WANT_SKB;
+
+- rc = iwl_send_cmd(priv, &cmd);
++ rc = iwl3945_send_cmd(priv, &cmd);
+
+ if (rc || (flags & CMD_ASYNC))
+ return rc;
+
+- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
+ if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
+ IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
+ res->hdr.flags);
+@@ -1334,7 +1377,7 @@ int iwl_send_add_station(struct iwl_priv *priv,
+ return rc;
+ }
+
+-static int iwl_update_sta_key_info(struct iwl_priv *priv,
++static int iwl3945_update_sta_key_info(struct iwl3945_priv *priv,
+ struct ieee80211_key_conf *keyconf,
+ u8 sta_id)
+ {
+@@ -1350,7 +1393,6 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
+ break;
+ case ALG_TKIP:
+ case ALG_WEP:
+- return -EINVAL;
+ default:
+ return -EINVAL;
+ }
+@@ -1369,28 +1411,28 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+
+ IWL_DEBUG_INFO("hwcrypto: modify ucode station key info\n");
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
++ iwl3945_send_add_station(priv, &priv->stations[sta_id].sta, 0);
+ return 0;
+ }
+
+-static int iwl_clear_sta_key_info(struct iwl_priv *priv, u8 sta_id)
++static int iwl3945_clear_sta_key_info(struct iwl3945_priv *priv, u8 sta_id)
+ {
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->sta_lock, flags);
+- memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl_hw_key));
+- memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl_keyinfo));
++ memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl3945_hw_key));
++ memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl3945_keyinfo));
+ priv->stations[sta_id].sta.key.key_flags = STA_KEY_FLG_NO_ENC;
+ priv->stations[sta_id].sta.sta.modify_mask = STA_MODIFY_KEY_MASK;
+ priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+
+ IWL_DEBUG_INFO("hwcrypto: clear ucode station key info\n");
+- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
++ iwl3945_send_add_station(priv, &priv->stations[sta_id].sta, 0);
+ return 0;
+ }
+
+-static void iwl_clear_free_frames(struct iwl_priv *priv)
++static void iwl3945_clear_free_frames(struct iwl3945_priv *priv)
+ {
+ struct list_head *element;
+
+@@ -1400,7 +1442,7 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
+ while (!list_empty(&priv->free_frames)) {
+ element = priv->free_frames.next;
+ list_del(element);
+- kfree(list_entry(element, struct iwl_frame, list));
++ kfree(list_entry(element, struct iwl3945_frame, list));
+ priv->frames_count--;
+ }
+
+@@ -1411,9 +1453,9 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
+ }
+ }
+
+-static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
++static struct iwl3945_frame *iwl3945_get_free_frame(struct iwl3945_priv *priv)
+ {
+- struct iwl_frame *frame;
++ struct iwl3945_frame *frame;
+ struct list_head *element;
+ if (list_empty(&priv->free_frames)) {
+ frame = kzalloc(sizeof(*frame), GFP_KERNEL);
+@@ -1428,21 +1470,21 @@ static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
+
+ element = priv->free_frames.next;
+ list_del(element);
+- return list_entry(element, struct iwl_frame, list);
++ return list_entry(element, struct iwl3945_frame, list);
+ }
+
+-static void iwl_free_frame(struct iwl_priv *priv, struct iwl_frame *frame)
++static void iwl3945_free_frame(struct iwl3945_priv *priv, struct iwl3945_frame *frame)
+ {
+ memset(frame, 0, sizeof(*frame));
+ list_add(&frame->list, &priv->free_frames);
+ }
+
+-unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
++unsigned int iwl3945_fill_beacon_frame(struct iwl3945_priv *priv,
+ struct ieee80211_hdr *hdr,
+ const u8 *dest, int left)
+ {
+
+- if (!iwl_is_associated(priv) || !priv->ibss_beacon ||
++ if (!iwl3945_is_associated(priv) || !priv->ibss_beacon ||
+ ((priv->iw_mode != IEEE80211_IF_TYPE_IBSS) &&
+ (priv->iw_mode != IEEE80211_IF_TYPE_AP)))
+ return 0;
+@@ -1455,37 +1497,27 @@ unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
+ return priv->ibss_beacon->len;
+ }
+
+-static int iwl_rate_index_from_plcp(int plcp)
+-{
+- int i = 0;
-
--struct iwl_missed_beacon_notif {
-- __le32 consequtive_missed_beacons;
-- __le32 total_missed_becons;
-- __le32 num_expected_beacons;
-- __le32 num_recvd_beacons;
--} __attribute__ ((packed));
+- for (i = 0; i < IWL_RATE_COUNT; i++)
+- if (iwl_rates[i].plcp == plcp)
+- return i;
+- return -1;
+-}
-
--/******************************************************************************
-- * (11)
-- * Rx Calibration Commands:
+-static u8 iwl_rate_get_lowest_plcp(int rate_mask)
++static u8 iwl3945_rate_get_lowest_plcp(int rate_mask)
+ {
+ u8 i;
+
+ for (i = IWL_RATE_1M_INDEX; i != IWL_RATE_INVALID;
+- i = iwl_rates[i].next_ieee) {
++ i = iwl3945_rates[i].next_ieee) {
+ if (rate_mask & (1 << i))
+- return iwl_rates[i].plcp;
++ return iwl3945_rates[i].plcp;
+ }
+
+ return IWL_RATE_INVALID;
+ }
+
+-static int iwl_send_beacon_cmd(struct iwl_priv *priv)
++static int iwl3945_send_beacon_cmd(struct iwl3945_priv *priv)
+ {
+- struct iwl_frame *frame;
++ struct iwl3945_frame *frame;
+ unsigned int frame_size;
+ int rc;
+ u8 rate;
+
+- frame = iwl_get_free_frame(priv);
++ frame = iwl3945_get_free_frame(priv);
+
+ if (!frame) {
+ IWL_ERROR("Could not obtain free frame buffer for beacon "
+@@ -1494,22 +1526,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
+ }
+
+ if (!(priv->staging_rxon.flags & RXON_FLG_BAND_24G_MSK)) {
+- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic &
++ rate = iwl3945_rate_get_lowest_plcp(priv->active_rate_basic &
+ 0xFF0);
+ if (rate == IWL_INVALID_RATE)
+ rate = IWL_RATE_6M_PLCP;
+ } else {
+- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
++ rate = iwl3945_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
+ if (rate == IWL_INVALID_RATE)
+ rate = IWL_RATE_1M_PLCP;
+ }
+
+- frame_size = iwl_hw_get_beacon_cmd(priv, frame, rate);
++ frame_size = iwl3945_hw_get_beacon_cmd(priv, frame, rate);
+
+- rc = iwl_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
++ rc = iwl3945_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
+ &frame->u.cmd[0]);
+
+- iwl_free_frame(priv, frame);
++ iwl3945_free_frame(priv, frame);
+
+ return rc;
+ }
+@@ -1520,22 +1552,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
+ *
+ ******************************************************************************/
+
+-static void get_eeprom_mac(struct iwl_priv *priv, u8 *mac)
++static void get_eeprom_mac(struct iwl3945_priv *priv, u8 *mac)
+ {
+ memcpy(mac, priv->eeprom.mac_address, 6);
+ }
+
+ /**
+- * iwl_eeprom_init - read EEPROM contents
++ * iwl3945_eeprom_init - read EEPROM contents
+ *
+- * Load the EEPROM from adapter into priv->eeprom
++ * Load the EEPROM contents from adapter into priv->eeprom
+ *
+ * NOTE: This routine uses the non-debug IO access functions.
+ */
+-int iwl_eeprom_init(struct iwl_priv *priv)
++int iwl3945_eeprom_init(struct iwl3945_priv *priv)
+ {
+- u16 *e = (u16 *)&priv->eeprom;
+- u32 gp = iwl_read32(priv, CSR_EEPROM_GP);
++ __le16 *e = (__le16 *)&priv->eeprom;
++ u32 gp = iwl3945_read32(priv, CSR_EEPROM_GP);
+ u32 r;
+ int sz = sizeof(priv->eeprom);
+ int rc;
+@@ -1553,20 +1585,21 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+ return -ENOENT;
+ }
+
+- rc = iwl_eeprom_aqcuire_semaphore(priv);
++ /* Make sure driver (instead of uCode) is allowed to read EEPROM */
++ rc = iwl3945_eeprom_acquire_semaphore(priv);
+ if (rc < 0) {
+- IWL_ERROR("Failed to aqcuire EEPROM semaphore.\n");
++ IWL_ERROR("Failed to acquire EEPROM semaphore.\n");
+ return -ENOENT;
+ }
+
+ /* eeprom is an array of 16bit values */
+ for (addr = 0; addr < sz; addr += sizeof(u16)) {
+- _iwl_write32(priv, CSR_EEPROM_REG, addr << 1);
+- _iwl_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
++ _iwl3945_write32(priv, CSR_EEPROM_REG, addr << 1);
++ _iwl3945_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
+
+ for (i = 0; i < IWL_EEPROM_ACCESS_TIMEOUT;
+ i += IWL_EEPROM_ACCESS_DELAY) {
+- r = _iwl_read_restricted(priv, CSR_EEPROM_REG);
++ r = _iwl3945_read_direct32(priv, CSR_EEPROM_REG);
+ if (r & CSR_EEPROM_REG_READ_VALID_MSK)
+ break;
+ udelay(IWL_EEPROM_ACCESS_DELAY);
+@@ -1576,7 +1609,7 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+ IWL_ERROR("Time out reading EEPROM[%d]", addr);
+ return -ETIMEDOUT;
+ }
+- e[addr / 2] = le16_to_cpu(r >> 16);
++ e[addr / 2] = cpu_to_le16(r >> 16);
+ }
+
+ return 0;
+@@ -1587,22 +1620,17 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+ * Misc. internal state and helper functions
+ *
+ ******************************************************************************/
+-#ifdef CONFIG_IWLWIFI_DEBUG
++#ifdef CONFIG_IWL3945_DEBUG
+
+ /**
+- * iwl_report_frame - dump frame to syslog during debug sessions
++ * iwl3945_report_frame - dump frame to syslog during debug sessions
+ *
+- * hack this function to show different aspects of received frames,
++ * You may hack this function to show different aspects of received frames,
+ * including selective frame dumps.
+ * group100 parameter selects whether to show 1 out of 100 good frames.
- *
-- *****************************************************************************/
--
--#define PHY_CALIBRATE_DIFF_GAIN_CMD (7)
--#define HD_TABLE_SIZE (11)
--
--struct iwl_sensitivity_cmd {
-- __le16 control;
-- __le16 table[HD_TABLE_SIZE];
--} __attribute__ ((packed));
--
--struct iwl_calibration_cmd {
-- u8 opCode;
-- u8 flags;
-- __le16 reserved;
-- s8 diff_gain_a;
-- s8 diff_gain_b;
-- s8 diff_gain_c;
-- u8 reserved1;
--} __attribute__ ((packed));
+- * TODO: ieee80211_hdr stuff is common to 3945 and 4965, so frame type
+- * info output is okay, but some of this stuff (e.g. iwl_rx_frame_stats)
+- * is 3945-specific and gives bad output for 4965. Need to split the
+- * functionality, keep common stuff here.
+ */
+-void iwl_report_frame(struct iwl_priv *priv,
+- struct iwl_rx_packet *pkt,
++void iwl3945_report_frame(struct iwl3945_priv *priv,
++ struct iwl3945_rx_packet *pkt,
+ struct ieee80211_hdr *header, int group100)
+ {
+ u32 to_us;
+@@ -1624,9 +1652,9 @@ void iwl_report_frame(struct iwl_priv *priv,
+ u8 agc;
+ u16 sig_avg;
+ u16 noise_diff;
+- struct iwl_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
+- struct iwl_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
+- struct iwl_rx_frame_end *rx_end = IWL_RX_END(pkt);
++ struct iwl3945_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
++ struct iwl3945_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
++ struct iwl3945_rx_frame_end *rx_end = IWL_RX_END(pkt);
+ u8 *data = IWL_RX_DATA(pkt);
+
+ /* MAC header */
+@@ -1702,11 +1730,11 @@ void iwl_report_frame(struct iwl_priv *priv,
+ else
+ title = "Frame";
+
+- rate = iwl_rate_index_from_plcp(rate_sym);
++ rate = iwl3945_rate_index_from_plcp(rate_sym);
+ if (rate == -1)
+ rate = 0;
+ else
+- rate = iwl_rates[rate].ieee / 2;
++ rate = iwl3945_rates[rate].ieee / 2;
+
+ /* print frame summary.
+ * MAC addresses show just the last byte (for brevity),
+@@ -1728,25 +1756,25 @@ void iwl_report_frame(struct iwl_priv *priv,
+ }
+ }
+ if (print_dump)
+- iwl_print_hex_dump(IWL_DL_RX, data, length);
++ iwl3945_print_hex_dump(IWL_DL_RX, data, length);
+ }
+ #endif
+
+-static void iwl_unset_hw_setting(struct iwl_priv *priv)
++static void iwl3945_unset_hw_setting(struct iwl3945_priv *priv)
+ {
+ if (priv->hw_setting.shared_virt)
+ pci_free_consistent(priv->pci_dev,
+- sizeof(struct iwl_shared),
++ sizeof(struct iwl3945_shared),
+ priv->hw_setting.shared_virt,
+ priv->hw_setting.shared_phys);
+ }
+
+ /**
+- * iwl_supported_rate_to_ie - fill in the supported rate in IE field
++ * iwl3945_supported_rate_to_ie - fill in the supported rate in IE field
+ *
+ * return : set the bit for each supported rate insert in ie
+ */
+-static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
++static u16 iwl3945_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+ u16 basic_rate, int *left)
+ {
+ u16 ret_rates = 0, bit;
+@@ -1757,7 +1785,7 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+ for (bit = 1, i = 0; i < IWL_RATE_COUNT; i++, bit <<= 1) {
+ if (bit & supported_rate) {
+ ret_rates |= bit;
+- rates[*cnt] = iwl_rates[i].ieee |
++ rates[*cnt] = iwl3945_rates[i].ieee |
+ ((bit & basic_rate) ? 0x80 : 0x00);
+ (*cnt)++;
+ (*left)--;
+@@ -1771,9 +1799,9 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+ }
+
+ /**
+- * iwl_fill_probe_req - fill in all required fields and IE for probe request
++ * iwl3945_fill_probe_req - fill in all required fields and IE for probe request
+ */
+-static u16 iwl_fill_probe_req(struct iwl_priv *priv,
++static u16 iwl3945_fill_probe_req(struct iwl3945_priv *priv,
+ struct ieee80211_mgmt *frame,
+ int left, int is_direct)
+ {
+@@ -1789,9 +1817,9 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ len += 24;
+
+ frame->frame_control = cpu_to_le16(IEEE80211_STYPE_PROBE_REQ);
+- memcpy(frame->da, BROADCAST_ADDR, ETH_ALEN);
++ memcpy(frame->da, iwl3945_broadcast_addr, ETH_ALEN);
+ memcpy(frame->sa, priv->mac_addr, ETH_ALEN);
+- memcpy(frame->bssid, BROADCAST_ADDR, ETH_ALEN);
++ memcpy(frame->bssid, iwl3945_broadcast_addr, ETH_ALEN);
+ frame->seq_ctrl = 0;
+
+ /* fill in our indirect SSID IE */
+@@ -1834,11 +1862,11 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
+
+ cck_rates = IWL_CCK_RATES_MASK & active_rates;
+- ret_rates = iwl_supported_rate_to_ie(pos, cck_rates,
++ ret_rates = iwl3945_supported_rate_to_ie(pos, cck_rates,
+ priv->active_rate_basic, &left);
+ active_rates &= ~ret_rates;
+
+- ret_rates = iwl_supported_rate_to_ie(pos, active_rates,
++ ret_rates = iwl3945_supported_rate_to_ie(pos, active_rates,
+ priv->active_rate_basic, &left);
+ active_rates &= ~ret_rates;
+
+@@ -1855,7 +1883,7 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ /* ... fill it in... */
+ *pos++ = WLAN_EID_EXT_SUPP_RATES;
+ *pos = 0;
+- iwl_supported_rate_to_ie(pos, active_rates,
++ iwl3945_supported_rate_to_ie(pos, active_rates,
+ priv->active_rate_basic, &left);
+ if (*pos > 0)
+ len += 2 + *pos;
+@@ -1867,16 +1895,16 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ /*
+ * QoS support
+ */
+-#ifdef CONFIG_IWLWIFI_QOS
+-static int iwl_send_qos_params_command(struct iwl_priv *priv,
+- struct iwl_qosparam_cmd *qos)
++#ifdef CONFIG_IWL3945_QOS
++static int iwl3945_send_qos_params_command(struct iwl3945_priv *priv,
++ struct iwl3945_qosparam_cmd *qos)
+ {
+
+- return iwl_send_cmd_pdu(priv, REPLY_QOS_PARAM,
+- sizeof(struct iwl_qosparam_cmd), qos);
++ return iwl3945_send_cmd_pdu(priv, REPLY_QOS_PARAM,
++ sizeof(struct iwl3945_qosparam_cmd), qos);
+ }
+
+-static void iwl_reset_qos(struct iwl_priv *priv)
++static void iwl3945_reset_qos(struct iwl3945_priv *priv)
+ {
+ u16 cw_min = 15;
+ u16 cw_max = 1023;
+@@ -1963,13 +1991,10 @@ static void iwl_reset_qos(struct iwl_priv *priv)
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+-static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
++static void iwl3945_activate_qos(struct iwl3945_priv *priv, u8 force)
+ {
+ unsigned long flags;
+
+- if (priv == NULL)
+- return;
-
--/******************************************************************************
-- * (12)
-- * Miscellaneous Commands:
-- *
-- *****************************************************************************/
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+@@ -1990,16 +2015,16 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- if (force || iwl_is_associated(priv)) {
++ if (force || iwl3945_is_associated(priv)) {
+ IWL_DEBUG_QOS("send QoS cmd with Qos active %d \n",
+ priv->qos_data.qos_active);
+
+- iwl_send_qos_params_command(priv,
++ iwl3945_send_qos_params_command(priv,
+ &(priv->qos_data.def_qos_parm));
+ }
+ }
+
+-#endif /* CONFIG_IWLWIFI_QOS */
++#endif /* CONFIG_IWL3945_QOS */
+ /*
+ * Power management (not Tx power!) functions
+ */
+@@ -2017,7 +2042,7 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+
+ /* default power management (not Tx power) table values */
+ /* for tim 0-10 */
+-static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
++static struct iwl3945_power_vec_entry range_0[IWL_POWER_AC] = {
+ {{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
+ {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500), SLP_VEC(1, 2, 3, 4, 4)}, 0},
+ {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(300), SLP_VEC(2, 4, 6, 7, 7)}, 0},
+@@ -2027,7 +2052,7 @@ static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
+ };
+
+ /* for tim > 10 */
+-static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
++static struct iwl3945_power_vec_entry range_1[IWL_POWER_AC] = {
+ {{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
+ {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500),
+ SLP_VEC(1, 2, 3, 4, 0xFF)}, 0},
+@@ -2040,11 +2065,11 @@ static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
+ SLP_VEC(4, 7, 10, 10, 0xFF)}, 0}
+ };
+
+-int iwl_power_init_handle(struct iwl_priv *priv)
++int iwl3945_power_init_handle(struct iwl3945_priv *priv)
+ {
+ int rc = 0, i;
+- struct iwl_power_mgr *pow_data;
+- int size = sizeof(struct iwl_power_vec_entry) * IWL_POWER_AC;
++ struct iwl3945_power_mgr *pow_data;
++ int size = sizeof(struct iwl3945_power_vec_entry) * IWL_POWER_AC;
+ u16 pci_pm;
+
+ IWL_DEBUG_POWER("Initialize power \n");
+@@ -2063,7 +2088,7 @@ int iwl_power_init_handle(struct iwl_priv *priv)
+ if (rc != 0)
+ return 0;
+ else {
+- struct iwl_powertable_cmd *cmd;
++ struct iwl3945_powertable_cmd *cmd;
+
+ IWL_DEBUG_POWER("adjust power command flags\n");
+
+@@ -2079,15 +2104,15 @@ int iwl_power_init_handle(struct iwl_priv *priv)
+ return rc;
+ }
+
+-static int iwl_update_power_cmd(struct iwl_priv *priv,
+- struct iwl_powertable_cmd *cmd, u32 mode)
++static int iwl3945_update_power_cmd(struct iwl3945_priv *priv,
++ struct iwl3945_powertable_cmd *cmd, u32 mode)
+ {
+ int rc = 0, i;
+ u8 skip;
+ u32 max_sleep = 0;
+- struct iwl_power_vec_entry *range;
++ struct iwl3945_power_vec_entry *range;
+ u8 period = 0;
+- struct iwl_power_mgr *pow_data;
++ struct iwl3945_power_mgr *pow_data;
+
+ if (mode > IWL_POWER_INDEX_5) {
+ IWL_DEBUG_POWER("Error invalid power mode \n");
+@@ -2100,7 +2125,7 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
+ else
+ range = &pow_data->pwr_range_1[1];
+
+- memcpy(cmd, &range[mode].cmd, sizeof(struct iwl_powertable_cmd));
++ memcpy(cmd, &range[mode].cmd, sizeof(struct iwl3945_powertable_cmd));
+
+ #ifdef IWL_MAC80211_DISABLE
+ if (priv->assoc_network != NULL) {
+@@ -2143,14 +2168,14 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
+ return rc;
+ }
+
+-static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
++static int iwl3945_send_power_mode(struct iwl3945_priv *priv, u32 mode)
+ {
+- u32 final_mode = mode;
++ u32 uninitialized_var(final_mode);
+ int rc;
+- struct iwl_powertable_cmd cmd;
++ struct iwl3945_powertable_cmd cmd;
+
+ /* If on battery, set to 3,
+- * if plugged into AC power, set to CAM ("continuosly aware mode"),
++ * if plugged into AC power, set to CAM ("continuously aware mode"),
+ * else user level */
+ switch (mode) {
+ case IWL_POWER_BATTERY:
+@@ -2164,9 +2189,9 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
+ break;
+ }
+
+- iwl_update_power_cmd(priv, &cmd, final_mode);
++ iwl3945_update_power_cmd(priv, &cmd, final_mode);
+
+- rc = iwl_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
++ rc = iwl3945_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
+
+ if (final_mode == IWL_POWER_MODE_CAM)
+ clear_bit(STATUS_POWER_PMI, &priv->status);
+@@ -2176,7 +2201,7 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
+ return rc;
+ }
+
+-int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
++int iwl3945_is_network_packet(struct iwl3945_priv *priv, struct ieee80211_hdr *header)
+ {
+ /* Filter incoming packets to determine if they are targeted toward
+ * this network, discarding packets coming from ourselves */
+@@ -2206,7 +2231,7 @@ int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+
+ #define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
+
+-const char *iwl_get_tx_fail_reason(u32 status)
++static const char *iwl3945_get_tx_fail_reason(u32 status)
+ {
+ switch (status & TX_STATUS_MSK) {
+ case TX_STATUS_SUCCESS:
+@@ -2233,11 +2258,11 @@ const char *iwl_get_tx_fail_reason(u32 status)
+ }
+
+ /**
+- * iwl_scan_cancel - Cancel any currently executing HW scan
++ * iwl3945_scan_cancel - Cancel any currently executing HW scan
+ *
+ * NOTE: priv->mutex is not required before calling this function
+ */
+-static int iwl_scan_cancel(struct iwl_priv *priv)
++static int iwl3945_scan_cancel(struct iwl3945_priv *priv)
+ {
+ if (!test_bit(STATUS_SCAN_HW, &priv->status)) {
+ clear_bit(STATUS_SCANNING, &priv->status);
+@@ -2260,17 +2285,17 @@ static int iwl_scan_cancel(struct iwl_priv *priv)
+ }
+
+ /**
+- * iwl_scan_cancel_timeout - Cancel any currently executing HW scan
++ * iwl3945_scan_cancel_timeout - Cancel any currently executing HW scan
+ * @ms: amount of time to wait (in milliseconds) for scan to abort
+ *
+ * NOTE: priv->mutex must be held before calling this function
+ */
+-static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
++static int iwl3945_scan_cancel_timeout(struct iwl3945_priv *priv, unsigned long ms)
+ {
+ unsigned long now = jiffies;
+ int ret;
+
+- ret = iwl_scan_cancel(priv);
++ ret = iwl3945_scan_cancel(priv);
+ if (ret && ms) {
+ mutex_unlock(&priv->mutex);
+ while (!time_after(jiffies, now + msecs_to_jiffies(ms)) &&
+@@ -2284,7 +2309,7 @@ static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
+ return ret;
+ }
+
+-static void iwl_sequence_reset(struct iwl_priv *priv)
++static void iwl3945_sequence_reset(struct iwl3945_priv *priv)
+ {
+ /* Reset ieee stats */
+
+@@ -2295,13 +2320,13 @@ static void iwl_sequence_reset(struct iwl_priv *priv)
+ priv->last_frag_num = -1;
+ priv->last_packet_time = 0;
+
+- iwl_scan_cancel(priv);
++ iwl3945_scan_cancel(priv);
+ }
+
+ #define MAX_UCODE_BEACON_INTERVAL 1024
+ #define INTEL_CONN_LISTEN_INTERVAL __constant_cpu_to_le16(0xA)
+
+-static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
++static __le16 iwl3945_adjust_beacon_interval(u16 beacon_val)
+ {
+ u16 new_val = 0;
+ u16 beacon_factor = 0;
+@@ -2314,7 +2339,7 @@ static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
+ return cpu_to_le16(new_val);
+ }
+
+-static void iwl_setup_rxon_timing(struct iwl_priv *priv)
++static void iwl3945_setup_rxon_timing(struct iwl3945_priv *priv)
+ {
+ u64 interval_tm_unit;
+ u64 tsf, result;
+@@ -2344,14 +2369,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
+ priv->rxon_timing.beacon_interval =
+ cpu_to_le16(beacon_int);
+ priv->rxon_timing.beacon_interval =
+- iwl_adjust_beacon_interval(
++ iwl3945_adjust_beacon_interval(
+ le16_to_cpu(priv->rxon_timing.beacon_interval));
+ }
+
+ priv->rxon_timing.atim_window = 0;
+ } else {
+ priv->rxon_timing.beacon_interval =
+- iwl_adjust_beacon_interval(conf->beacon_int);
++ iwl3945_adjust_beacon_interval(conf->beacon_int);
+ /* TODO: we need to get atim_window from upper stack
+ * for now we set to 0 */
+ priv->rxon_timing.atim_window = 0;
+@@ -2370,14 +2395,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
+ le16_to_cpu(priv->rxon_timing.atim_window));
+ }
+
+-static int iwl_scan_initiate(struct iwl_priv *priv)
++static int iwl3945_scan_initiate(struct iwl3945_priv *priv)
+ {
+ if (priv->iw_mode == IEEE80211_IF_TYPE_AP) {
+ IWL_ERROR("APs don't scan.\n");
+ return 0;
+ }
+
+- if (!iwl_is_ready_rf(priv)) {
++ if (!iwl3945_is_ready_rf(priv)) {
+ IWL_DEBUG_SCAN("Aborting scan due to not ready.\n");
+ return -EIO;
+ }
+@@ -2404,9 +2429,9 @@ static int iwl_scan_initiate(struct iwl_priv *priv)
+ return 0;
+ }
+
+-static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
++static int iwl3945_set_rxon_hwcrypto(struct iwl3945_priv *priv, int hw_decrypt)
+ {
+- struct iwl_rxon_cmd *rxon = &priv->staging_rxon;
++ struct iwl3945_rxon_cmd *rxon = &priv->staging_rxon;
+
+ if (hw_decrypt)
+ rxon->filter_flags &= ~RXON_FILTER_DIS_DECRYPT_MSK;
+@@ -2416,7 +2441,7 @@ static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
+ return 0;
+ }
+
+-static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
++static void iwl3945_set_flags_for_phymode(struct iwl3945_priv *priv, u8 phymode)
+ {
+ if (phymode == MODE_IEEE80211A) {
+ priv->staging_rxon.flags &=
+@@ -2424,7 +2449,7 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
+ | RXON_FLG_CCK_MSK);
+ priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
+ } else {
+- /* Copied from iwl_bg_post_associate() */
++ /* Copied from iwl3945_bg_post_associate() */
+ if (priv->assoc_capability & WLAN_CAPABILITY_SHORT_SLOT_TIME)
+ priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
+ else
+@@ -2440,11 +2465,11 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
+ }
+
+ /*
+- * initilize rxon structure with default values fromm eeprom
++ * initialize rxon structure with default values from eeprom
+ */
+-static void iwl_connection_init_rx_config(struct iwl_priv *priv)
++static void iwl3945_connection_init_rx_config(struct iwl3945_priv *priv)
+ {
+- const struct iwl_channel_info *ch_info;
++ const struct iwl3945_channel_info *ch_info;
+
+ memset(&priv->staging_rxon, 0, sizeof(priv->staging_rxon));
+
+@@ -2481,7 +2506,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+ priv->staging_rxon.flags |= RXON_FLG_SHORT_PREAMBLE_MSK;
+ #endif
+
+- ch_info = iwl_get_channel_info(priv, priv->phymode,
++ ch_info = iwl3945_get_channel_info(priv, priv->phymode,
+ le16_to_cpu(priv->staging_rxon.channel));
+
+ if (!ch_info)
+@@ -2501,7 +2526,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+ else
+ priv->phymode = MODE_IEEE80211G;
+
+- iwl_set_flags_for_phymode(priv, priv->phymode);
++ iwl3945_set_flags_for_phymode(priv, priv->phymode);
+
+ priv->staging_rxon.ofdm_basic_rates =
+ (IWL_OFDM_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
+@@ -2509,15 +2534,12 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+ (IWL_CCK_RATES_MASK >> IWL_FIRST_CCK_RATE) & 0xF;
+ }
+
+-static int iwl_set_mode(struct iwl_priv *priv, int mode)
++static int iwl3945_set_mode(struct iwl3945_priv *priv, int mode)
+ {
+- if (!iwl_is_ready_rf(priv))
+- return -EAGAIN;
-
--/*
-- * LEDs Command & Response
-- * REPLY_LEDS_CMD = 0x48 (command, has simple generic response)
-- *
-- * For each of 3 possible LEDs (Activity/Link/Tech, selected by "id" field),
-- * this command turns it on or off, or sets up a periodic blinking cycle.
-- */
--struct iwl_led_cmd {
-- __le32 interval; /* "interval" in uSec */
-- u8 id; /* 1: Activity, 2: Link, 3: Tech */
-- u8 off; /* # intervals off while blinking;
-- * "0", with >0 "on" value, turns LED on */
-- u8 on; /* # intervals on while blinking;
-- * "0", regardless of "off", turns LED off */
-- u8 reserved;
--} __attribute__ ((packed));
+ if (mode == IEEE80211_IF_TYPE_IBSS) {
+- const struct iwl_channel_info *ch_info;
++ const struct iwl3945_channel_info *ch_info;
+
+- ch_info = iwl_get_channel_info(priv,
++ ch_info = iwl3945_get_channel_info(priv,
+ priv->phymode,
+ le16_to_cpu(priv->staging_rxon.channel));
+
+@@ -2528,32 +2550,36 @@ static int iwl_set_mode(struct iwl_priv *priv, int mode)
+ }
+ }
+
++ priv->iw_mode = mode;
++
++ iwl3945_connection_init_rx_config(priv);
++ memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
++
++ iwl3945_clear_stations_table(priv);
++
++ /* dont commit rxon if rf-kill is on*/
++ if (!iwl3945_is_ready_rf(priv))
++ return -EAGAIN;
++
+ cancel_delayed_work(&priv->scan_check);
+- if (iwl_scan_cancel_timeout(priv, 100)) {
++ if (iwl3945_scan_cancel_timeout(priv, 100)) {
+ IWL_WARNING("Aborted scan still in progress after 100ms\n");
+ IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
+ return -EAGAIN;
+ }
+
+- priv->iw_mode = mode;
-
--/******************************************************************************
-- * (13)
-- * Union of all expected notifications/responses:
-- *
-- *****************************************************************************/
+- iwl_connection_init_rx_config(priv);
+- memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
-
--struct iwl_rx_packet {
-- __le32 len;
-- struct iwl_cmd_header hdr;
-- union {
-- struct iwl_alive_resp alive_frame;
-- struct iwl_rx_frame rx_frame;
-- struct iwl_tx_resp tx_resp;
-- struct iwl_spectrum_notification spectrum_notif;
-- struct iwl_csa_notification csa_notif;
-- struct iwl_error_resp err_resp;
-- struct iwl_card_state_notif card_state_notif;
-- struct iwl_beacon_notif beacon_status;
-- struct iwl_add_sta_resp add_sta;
-- struct iwl_sleep_notification sleep_notif;
-- struct iwl_spectrum_resp spectrum;
-- struct iwl_notif_statistics stats;
--#if IWL == 4965
-- struct iwl_compressed_ba_resp compressed_ba;
-- struct iwl_missed_beacon_notif missed_beacon;
--#endif
-- __le32 status;
-- u8 raw[0];
-- } u;
--} __attribute__ ((packed));
+- iwl_clear_stations_table(priv);
-
--#define IWL_RX_FRAME_SIZE (4 + sizeof(struct iwl_rx_frame))
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+ return 0;
+ }
+
+-static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
++static void iwl3945_build_tx_cmd_hwcrypto(struct iwl3945_priv *priv,
+ struct ieee80211_tx_control *ctl,
+- struct iwl_cmd *cmd,
++ struct iwl3945_cmd *cmd,
+ struct sk_buff *skb_frag,
+ int last_frag)
+ {
+- struct iwl_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
++ struct iwl3945_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
+
+ switch (keyinfo->alg) {
+ case ALG_CCMP:
+@@ -2596,8 +2622,8 @@ static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
+ /*
+ * handle build REPLY_TX command notification.
+ */
+-static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+- struct iwl_cmd *cmd,
++static void iwl3945_build_tx_cmd_basic(struct iwl3945_priv *priv,
++ struct iwl3945_cmd *cmd,
+ struct ieee80211_tx_control *ctrl,
+ struct ieee80211_hdr *hdr,
+ int is_unicast, u8 std_id)
+@@ -2645,11 +2671,9 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+ if ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) {
+ if ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_ASSOC_REQ ||
+ (fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_REASSOC_REQ)
+- cmd->cmd.tx.timeout.pm_frame_timeout =
+- cpu_to_le16(3);
++ cmd->cmd.tx.timeout.pm_frame_timeout = cpu_to_le16(3);
+ else
+- cmd->cmd.tx.timeout.pm_frame_timeout =
+- cpu_to_le16(2);
++ cmd->cmd.tx.timeout.pm_frame_timeout = cpu_to_le16(2);
+ } else
+ cmd->cmd.tx.timeout.pm_frame_timeout = 0;
+
+@@ -2658,41 +2682,44 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+ cmd->cmd.tx.next_frame_len = 0;
+ }
+
+-static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
++/**
++ * iwl3945_get_sta_id - Find station's index within station table
++ */
++static int iwl3945_get_sta_id(struct iwl3945_priv *priv, struct ieee80211_hdr *hdr)
+ {
+ int sta_id;
+ u16 fc = le16_to_cpu(hdr->frame_control);
+
+- /* If this frame is broadcast or not data then use the broadcast
+- * station id */
++ /* If this frame is broadcast or management, use broadcast station id */
+ if (((fc & IEEE80211_FCTL_FTYPE) != IEEE80211_FTYPE_DATA) ||
+ is_multicast_ether_addr(hdr->addr1))
+ return priv->hw_setting.bcast_sta_id;
+
+ switch (priv->iw_mode) {
+
+- /* If this frame is part of a BSS network (we're a station), then
+- * we use the AP's station id */
++ /* If we are a client station in a BSS network, use the special
++ * AP station entry (that's the only station we communicate with) */
+ case IEEE80211_IF_TYPE_STA:
+ return IWL_AP_ID;
+
+ /* If we are an AP, then find the station, or use BCAST */
+ case IEEE80211_IF_TYPE_AP:
+- sta_id = iwl_hw_find_station(priv, hdr->addr1);
++ sta_id = iwl3945_hw_find_station(priv, hdr->addr1);
+ if (sta_id != IWL_INVALID_STATION)
+ return sta_id;
+ return priv->hw_setting.bcast_sta_id;
+
+- /* If this frame is part of a IBSS network, then we use the
+- * target specific station id */
++ /* If this frame is going out to an IBSS network, find the station,
++ * or create a new station table entry */
+ case IEEE80211_IF_TYPE_IBSS: {
+ DECLARE_MAC_BUF(mac);
+
+- sta_id = iwl_hw_find_station(priv, hdr->addr1);
++ /* Create new station table entry */
++ sta_id = iwl3945_hw_find_station(priv, hdr->addr1);
+ if (sta_id != IWL_INVALID_STATION)
+ return sta_id;
+
+- sta_id = iwl_add_station(priv, hdr->addr1, 0, CMD_ASYNC);
++ sta_id = iwl3945_add_station(priv, hdr->addr1, 0, CMD_ASYNC);
+
+ if (sta_id != IWL_INVALID_STATION)
+ return sta_id;
+@@ -2700,11 +2727,11 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
+ IWL_DEBUG_DROP("Station %s not in station map. "
+ "Defaulting to broadcast...\n",
+ print_mac(mac, hdr->addr1));
+- iwl_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
++ iwl3945_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
+ return priv->hw_setting.bcast_sta_id;
+ }
+ default:
+- IWL_WARNING("Unkown mode of operation: %d", priv->iw_mode);
++ IWL_WARNING("Unknown mode of operation: %d", priv->iw_mode);
+ return priv->hw_setting.bcast_sta_id;
+ }
+ }
+@@ -2712,18 +2739,18 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
+ /*
+ * start REPLY_TX command process
+ */
+-static int iwl_tx_skb(struct iwl_priv *priv,
++static int iwl3945_tx_skb(struct iwl3945_priv *priv,
+ struct sk_buff *skb, struct ieee80211_tx_control *ctl)
+ {
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+- struct iwl_tfd_frame *tfd;
++ struct iwl3945_tfd_frame *tfd;
+ u32 *control_flags;
+ int txq_id = ctl->queue;
+- struct iwl_tx_queue *txq = NULL;
+- struct iwl_queue *q = NULL;
++ struct iwl3945_tx_queue *txq = NULL;
++ struct iwl3945_queue *q = NULL;
+ dma_addr_t phys_addr;
+ dma_addr_t txcmd_phys;
+- struct iwl_cmd *out_cmd = NULL;
++ struct iwl3945_cmd *out_cmd = NULL;
+ u16 len, idx, len_org;
+ u8 id, hdr_len, unicast;
+ u8 sta_id;
+@@ -2735,13 +2762,13 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ int rc;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- if (iwl_is_rfkill(priv)) {
++ if (iwl3945_is_rfkill(priv)) {
+ IWL_DEBUG_DROP("Dropping - RF KILL\n");
+ goto drop_unlock;
+ }
+
+- if (!priv->interface_id) {
+- IWL_DEBUG_DROP("Dropping - !priv->interface_id\n");
++ if (!priv->vif) {
++ IWL_DEBUG_DROP("Dropping - !priv->vif\n");
+ goto drop_unlock;
+ }
+
+@@ -2755,7 +2782,7 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+
+ fc = le16_to_cpu(hdr->frame_control);
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
++#ifdef CONFIG_IWL3945_DEBUG
+ if (ieee80211_is_auth(fc))
+ IWL_DEBUG_TX("Sending AUTH frame\n");
+ else if (ieee80211_is_assoc_request(fc))
+@@ -2764,16 +2791,19 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ IWL_DEBUG_TX("Sending REASSOC frame\n");
+ #endif
+
+- if (!iwl_is_associated(priv) &&
++ /* drop all data frame if we are not associated */
++ if (!iwl3945_is_associated(priv) && !priv->assoc_id &&
+ ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_DATA)) {
+- IWL_DEBUG_DROP("Dropping - !iwl_is_associated\n");
++ IWL_DEBUG_DROP("Dropping - !iwl3945_is_associated\n");
+ goto drop_unlock;
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ hdr_len = ieee80211_get_hdrlen(fc);
+- sta_id = iwl_get_sta_id(priv, hdr);
++
++ /* Find (or create) index into station table for destination station */
++ sta_id = iwl3945_get_sta_id(priv, hdr);
+ if (sta_id == IWL_INVALID_STATION) {
+ DECLARE_MAC_BUF(mac);
+
+@@ -2794,32 +2824,54 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ __constant_cpu_to_le16(IEEE80211_SCTL_FRAG));
+ seq_number += 0x10;
+ }
++
++ /* Descriptor for chosen Tx queue */
+ txq = &priv->txq[txq_id];
+ q = &txq->q;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+- tfd = &txq->bd[q->first_empty];
++ /* Set up first empty TFD within this queue's circular TFD buffer */
++ tfd = &txq->bd[q->write_ptr];
+ memset(tfd, 0, sizeof(*tfd));
+ control_flags = (u32 *) tfd;
+- idx = get_cmd_index(q, q->first_empty, 0);
++ idx = get_cmd_index(q, q->write_ptr, 0);
+
+- memset(&(txq->txb[q->first_empty]), 0, sizeof(struct iwl_tx_info));
+- txq->txb[q->first_empty].skb[0] = skb;
+- memcpy(&(txq->txb[q->first_empty].status.control),
++ /* Set up driver data for this TFD */
++ memset(&(txq->txb[q->write_ptr]), 0, sizeof(struct iwl3945_tx_info));
++ txq->txb[q->write_ptr].skb[0] = skb;
++ memcpy(&(txq->txb[q->write_ptr].status.control),
+ ctl, sizeof(struct ieee80211_tx_control));
++
++ /* Init first empty entry in queue's array of Tx/cmd buffers */
+ out_cmd = &txq->cmd[idx];
+ memset(&out_cmd->hdr, 0, sizeof(out_cmd->hdr));
+ memset(&out_cmd->cmd.tx, 0, sizeof(out_cmd->cmd.tx));
++
++ /*
++ * Set up the Tx-command (not MAC!) header.
++ * Store the chosen Tx queue and TFD index within the sequence field;
++ * after Tx, uCode's Tx response will return this value so driver can
++ * locate the frame within the tx queue and do post-tx processing.
++ */
+ out_cmd->hdr.cmd = REPLY_TX;
+ out_cmd->hdr.sequence = cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
+- INDEX_TO_SEQ(q->first_empty)));
+- /* copy frags header */
++ INDEX_TO_SEQ(q->write_ptr)));
++
++ /* Copy MAC header from skb into command buffer */
+ memcpy(out_cmd->cmd.tx.hdr, hdr, hdr_len);
+
+- /* hdr = (struct ieee80211_hdr *)out_cmd->cmd.tx.hdr; */
++ /*
++ * Use the first empty entry in this queue's command buffer array
++ * to contain the Tx command and MAC header concatenated together
++ * (payload data will be in another buffer).
++ * Size of this varies, due to varying MAC header length.
++ * If end is not dword aligned, we'll have 2 extra bytes at the end
++ * of the MAC header (device reads on dword boundaries).
++ * We'll tell device about this padding later.
++ */
+ len = priv->hw_setting.tx_cmd_len +
+- sizeof(struct iwl_cmd_header) + hdr_len;
++ sizeof(struct iwl3945_cmd_header) + hdr_len;
+
+ len_org = len;
+ len = (len + 3) & ~3;
+@@ -2829,37 +2881,45 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ else
+ len_org = 0;
+
+- txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl_cmd) * idx +
+- offsetof(struct iwl_cmd, hdr);
++ /* Physical address of this Tx command's header (not MAC header!),
++ * within command buffer array. */
++ txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl3945_cmd) * idx +
++ offsetof(struct iwl3945_cmd, hdr);
+
+- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
++ /* Add buffer containing Tx command and MAC(!) header to TFD's
++ * first entry */
++ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
+
+ if (!(ctl->flags & IEEE80211_TXCTL_DO_NOT_ENCRYPT))
+- iwl_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
++ iwl3945_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
+
+- /* 802.11 null functions have no payload... */
++ /* Set up TFD's 2nd entry to point directly to remainder of skb,
++ * if any (802.11 null frames have no payload). */
+ len = skb->len - hdr_len;
+ if (len) {
+ phys_addr = pci_map_single(priv->pci_dev, skb->data + hdr_len,
+ len, PCI_DMA_TODEVICE);
+- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
++ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
+ }
+
+- /* If there is no payload, then only one TFD is used */
+ if (!len)
++ /* If there is no payload, then we use only one Tx buffer */
+ *control_flags = TFD_CTL_COUNT_SET(1);
+ else
++ /* Else use 2 buffers.
++ * Tell 3945 about any padding after MAC header */
+ *control_flags = TFD_CTL_COUNT_SET(2) |
+ TFD_CTL_PAD_SET(U32_PAD(len));
+
++ /* Total # bytes to be transmitted */
+ len = (u16)skb->len;
+ out_cmd->cmd.tx.len = cpu_to_le16(len);
+
+ /* TODO need this for burst mode later on */
+- iwl_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
++ iwl3945_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
+
+ /* set is_hcca to 0; it probably will never be implemented */
+- iwl_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
++ iwl3945_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
+
+ out_cmd->cmd.tx.tx_flags &= ~TX_CMD_FLG_ANT_A_MSK;
+ out_cmd->cmd.tx.tx_flags &= ~TX_CMD_FLG_ANT_B_MSK;
+@@ -2875,25 +2935,26 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ txq->need_update = 0;
+ }
+
+- iwl_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
++ iwl3945_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
+ sizeof(out_cmd->cmd.tx));
+
+- iwl_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
++ iwl3945_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
+ ieee80211_get_hdrlen(fc));
+
+- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
+- rc = iwl_tx_queue_update_write_ptr(priv, txq);
++ /* Tell device the write index *just past* this latest filled TFD */
++ q->write_ptr = iwl3945_queue_inc_wrap(q->write_ptr, q->n_bd);
++ rc = iwl3945_tx_queue_update_write_ptr(priv, txq);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ if (rc)
+ return rc;
+
+- if ((iwl_queue_space(q) < q->high_mark)
++ if ((iwl3945_queue_space(q) < q->high_mark)
+ && priv->mac80211_registered) {
+ if (wait_write_ptr) {
+ spin_lock_irqsave(&priv->lock, flags);
+ txq->need_update = 1;
+- iwl_tx_queue_update_write_ptr(priv, txq);
++ iwl3945_tx_queue_update_write_ptr(priv, txq);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+@@ -2908,13 +2969,13 @@ drop:
+ return -1;
+ }
+
+-static void iwl_set_rate(struct iwl_priv *priv)
++static void iwl3945_set_rate(struct iwl3945_priv *priv)
+ {
+ const struct ieee80211_hw_mode *hw = NULL;
+ struct ieee80211_rate *rate;
+ int i;
+
+- hw = iwl_get_hw_mode(priv, priv->phymode);
++ hw = iwl3945_get_hw_mode(priv, priv->phymode);
+ if (!hw) {
+ IWL_ERROR("Failed to set rate: unable to get hw mode\n");
+ return;
+@@ -2932,7 +2993,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+ if ((rate->val < IWL_RATE_COUNT) &&
+ (rate->flags & IEEE80211_RATE_SUPPORTED)) {
+ IWL_DEBUG_RATE("Adding rate index %d (plcp %d)%s\n",
+- rate->val, iwl_rates[rate->val].plcp,
++ rate->val, iwl3945_rates[rate->val].plcp,
+ (rate->flags & IEEE80211_RATE_BASIC) ?
+ "*" : "");
+ priv->active_rate |= (1 << rate->val);
+@@ -2940,7 +3001,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+ priv->active_rate_basic |= (1 << rate->val);
+ } else
+ IWL_DEBUG_RATE("Not adding rate %d (plcp %d)\n",
+- rate->val, iwl_rates[rate->val].plcp);
++ rate->val, iwl3945_rates[rate->val].plcp);
+ }
+
+ IWL_DEBUG_RATE("Set active_rate = %0x, active_rate_basic = %0x\n",
+@@ -2969,7 +3030,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+ (IWL_OFDM_BASIC_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
+ }
+
+-static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
++static void iwl3945_radio_kill_sw(struct iwl3945_priv *priv, int disable_radio)
+ {
+ unsigned long flags;
+
+@@ -2980,21 +3041,21 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+ disable_radio ? "OFF" : "ON");
+
+ if (disable_radio) {
+- iwl_scan_cancel(priv);
++ iwl3945_scan_cancel(priv);
+ /* FIXME: This is a workaround for AP */
+ if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_SET,
+ CSR_UCODE_SW_BIT_RFKILL);
+ spin_unlock_irqrestore(&priv->lock, flags);
+- iwl_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
++ iwl3945_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
+ set_bit(STATUS_RF_KILL_SW, &priv->status);
+ }
+ return;
+ }
+
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+
+ clear_bit(STATUS_RF_KILL_SW, &priv->status);
+ spin_unlock_irqrestore(&priv->lock, flags);
+@@ -3003,9 +3064,9 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+ msleep(10);
+
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_read32(priv, CSR_UCODE_DRV_GP1);
+- if (!iwl_grab_restricted_access(priv))
+- iwl_release_restricted_access(priv);
++ iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
++ if (!iwl3945_grab_nic_access(priv))
++ iwl3945_release_nic_access(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ if (test_bit(STATUS_RF_KILL_HW, &priv->status)) {
+@@ -3018,7 +3079,7 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+ return;
+ }
+
+-void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
++void iwl3945_set_decrypted_flag(struct iwl3945_priv *priv, struct sk_buff *skb,
+ u32 decrypt_res, struct ieee80211_rx_status *stats)
+ {
+ u16 fc =
+@@ -3050,97 +3111,9 @@ void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
+ }
+ }
+
+-void iwl_handle_data_packet_monitor(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb,
+- void *data, short len,
+- struct ieee80211_rx_status *stats,
+- u16 phy_flags)
+-{
+- struct iwl_rt_rx_hdr *iwl_rt;
-
--#endif /* __iwl_commands_h__ */
-diff --git a/drivers/net/wireless/iwlwifi/iwl-debug.h b/drivers/net/wireless/iwlwifi/iwl-debug.h
-deleted file mode 100644
-index 72318d7..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-debug.h
-+++ /dev/null
-@@ -1,152 +0,0 @@
--/******************************************************************************
-- *
-- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
-- *
-- * Portions of this file are derived from the ipw3945 project.
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of version 2 of the GNU General Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but WITHOUT
-- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-- * more details.
-- *
-- * You should have received a copy of the GNU General Public License along with
-- * this program; if not, write to the Free Software Foundation, Inc.,
-- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
-- *
-- * The full GNU General Public License is included in this distribution in the
-- * file called LICENSE.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- *****************************************************************************/
+- /* First cache any information we need before we overwrite
+- * the information provided in the skb from the hardware */
+- s8 signal = stats->ssi;
+- s8 noise = 0;
+- int rate = stats->rate;
+- u64 tsf = stats->mactime;
+- __le16 phy_flags_hw = cpu_to_le16(phy_flags);
-
--#ifndef __iwl_debug_h__
--#define __iwl_debug_h__
+- /* We received data from the HW, so stop the watchdog */
+- if (len > IWL_RX_BUF_SIZE - sizeof(*iwl_rt)) {
+- IWL_DEBUG_DROP("Dropping too large packet in monitor\n");
+- return;
+- }
-
--#ifdef CONFIG_IWLWIFI_DEBUG
--extern u32 iwl_debug_level;
--#define IWL_DEBUG(level, fmt, args...) \
--do { if (iwl_debug_level & (level)) \
-- printk(KERN_ERR DRV_NAME": %c %s " fmt, \
-- in_interrupt() ? 'I' : 'U', __FUNCTION__ , ## args); } while (0)
+- /* copy the frame data to write after where the radiotap header goes */
+- iwl_rt = (void *)rxb->skb->data;
+- memmove(iwl_rt->payload, data, len);
-
--#define IWL_DEBUG_LIMIT(level, fmt, args...) \
--do { if ((iwl_debug_level & (level)) && net_ratelimit()) \
-- printk(KERN_ERR DRV_NAME": %c %s " fmt, \
-- in_interrupt() ? 'I' : 'U', __FUNCTION__ , ## args); } while (0)
--#else
--static inline void IWL_DEBUG(int level, const char *fmt, ...)
--{
--}
--static inline void IWL_DEBUG_LIMIT(int level, const char *fmt, ...)
--{
--}
--#endif /* CONFIG_IWLWIFI_DEBUG */
+- iwl_rt->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
+- iwl_rt->rt_hdr.it_pad = 0; /* always good to zero */
-
--/*
-- * To use the debug system;
-- *
-- * If you are defining a new debug classification, simply add it to the #define
-- * list here in the form of:
-- *
-- * #define IWL_DL_xxxx VALUE
-- *
-- * shifting value to the left one bit from the previous entry. xxxx should be
-- * the name of the classification (for example, WEP)
-- *
-- * You then need to either add a IWL_xxxx_DEBUG() macro definition for your
-- * classification, or use IWL_DEBUG(IWL_DL_xxxx, ...) whenever you want
-- * to send output to that classification.
-- *
-- * To add your debug level to the list of levels seen when you perform
-- *
-- * % cat /proc/net/iwl/debug_level
-- *
-- * you simply need to add your entry to the iwl_debug_levels array.
-- *
-- * If you do not see debug_level in /proc/net/iwl then you do not have
-- * CONFIG_IWLWIFI_DEBUG defined in your kernel configuration
-- *
-- */
+- /* total header + data */
+- iwl_rt->rt_hdr.it_len = cpu_to_le16(sizeof(*iwl_rt));
-
--#define IWL_DL_INFO (1<<0)
--#define IWL_DL_MAC80211 (1<<1)
--#define IWL_DL_HOST_COMMAND (1<<2)
--#define IWL_DL_STATE (1<<3)
+- /* Set the size of the skb to the size of the frame */
+- skb_put(rxb->skb, sizeof(*iwl_rt) + len);
-
--#define IWL_DL_RADIO (1<<7)
--#define IWL_DL_POWER (1<<8)
--#define IWL_DL_TEMP (1<<9)
+- /* Big bitfield of all the fields we provide in radiotap */
+- iwl_rt->rt_hdr.it_present =
+- cpu_to_le32((1 << IEEE80211_RADIOTAP_TSFT) |
+- (1 << IEEE80211_RADIOTAP_FLAGS) |
+- (1 << IEEE80211_RADIOTAP_RATE) |
+- (1 << IEEE80211_RADIOTAP_CHANNEL) |
+- (1 << IEEE80211_RADIOTAP_DBM_ANTSIGNAL) |
+- (1 << IEEE80211_RADIOTAP_DBM_ANTNOISE) |
+- (1 << IEEE80211_RADIOTAP_ANTENNA));
-
--#define IWL_DL_NOTIF (1<<10)
--#define IWL_DL_SCAN (1<<11)
--#define IWL_DL_ASSOC (1<<12)
--#define IWL_DL_DROP (1<<13)
+- /* Zero the flags, we'll add to them as we go */
+- iwl_rt->rt_flags = 0;
-
--#define IWL_DL_TXPOWER (1<<14)
+- iwl_rt->rt_tsf = cpu_to_le64(tsf);
-
--#define IWL_DL_AP (1<<15)
+- /* Convert to dBm */
+- iwl_rt->rt_dbmsignal = signal;
+- iwl_rt->rt_dbmnoise = noise;
-
--#define IWL_DL_FW (1<<16)
--#define IWL_DL_RF_KILL (1<<17)
--#define IWL_DL_FW_ERRORS (1<<18)
+- /* Convert the channel frequency and set the flags */
+- iwl_rt->rt_channelMHz = cpu_to_le16(stats->freq);
+- if (!(phy_flags_hw & RX_RES_PHY_FLAGS_BAND_24_MSK))
+- iwl_rt->rt_chbitmask =
+- cpu_to_le16((IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ));
+- else if (phy_flags_hw & RX_RES_PHY_FLAGS_MOD_CCK_MSK)
+- iwl_rt->rt_chbitmask =
+- cpu_to_le16((IEEE80211_CHAN_CCK | IEEE80211_CHAN_2GHZ));
+- else /* 802.11g */
+- iwl_rt->rt_chbitmask =
+- cpu_to_le16((IEEE80211_CHAN_OFDM | IEEE80211_CHAN_2GHZ));
-
--#define IWL_DL_LED (1<<19)
+- rate = iwl_rate_index_from_plcp(rate);
+- if (rate == -1)
+- iwl_rt->rt_rate = 0;
+- else
+- iwl_rt->rt_rate = iwl_rates[rate].ieee;
-
--#define IWL_DL_RATE (1<<20)
+- /* antenna number */
+- iwl_rt->rt_antenna =
+- le16_to_cpu(phy_flags_hw & RX_RES_PHY_FLAGS_ANTENNA_MSK) >> 4;
-
--#define IWL_DL_CALIB (1<<21)
--#define IWL_DL_WEP (1<<22)
--#define IWL_DL_TX (1<<23)
--#define IWL_DL_RX (1<<24)
--#define IWL_DL_ISR (1<<25)
--#define IWL_DL_HT (1<<26)
--#define IWL_DL_IO (1<<27)
--#define IWL_DL_11H (1<<28)
+- /* set the preamble flag if we have it */
+- if (phy_flags_hw & RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK)
+- iwl_rt->rt_flags |= IEEE80211_RADIOTAP_F_SHORTPRE;
-
--#define IWL_DL_STATS (1<<29)
--#define IWL_DL_TX_REPLY (1<<30)
--#define IWL_DL_QOS (1<<31)
+- IWL_DEBUG_RX("Rx packet of %d bytes.\n", rxb->skb->len);
-
--#define IWL_ERROR(f, a...) printk(KERN_ERR DRV_NAME ": " f, ## a)
--#define IWL_WARNING(f, a...) printk(KERN_WARNING DRV_NAME ": " f, ## a)
--#define IWL_DEBUG_INFO(f, a...) IWL_DEBUG(IWL_DL_INFO, f, ## a)
+- stats->flag |= RX_FLAG_RADIOTAP;
+- ieee80211_rx_irqsafe(priv->hw, rxb->skb, stats);
+- rxb->skb = NULL;
+-}
-
--#define IWL_DEBUG_MAC80211(f, a...) IWL_DEBUG(IWL_DL_MAC80211, f, ## a)
--#define IWL_DEBUG_TEMP(f, a...) IWL_DEBUG(IWL_DL_TEMP, f, ## a)
--#define IWL_DEBUG_SCAN(f, a...) IWL_DEBUG(IWL_DL_SCAN, f, ## a)
--#define IWL_DEBUG_RX(f, a...) IWL_DEBUG(IWL_DL_RX, f, ## a)
--#define IWL_DEBUG_TX(f, a...) IWL_DEBUG(IWL_DL_TX, f, ## a)
--#define IWL_DEBUG_ISR(f, a...) IWL_DEBUG(IWL_DL_ISR, f, ## a)
--#define IWL_DEBUG_LED(f, a...) IWL_DEBUG(IWL_DL_LED, f, ## a)
--#define IWL_DEBUG_WEP(f, a...) IWL_DEBUG(IWL_DL_WEP, f, ## a)
--#define IWL_DEBUG_HC(f, a...) IWL_DEBUG(IWL_DL_HOST_COMMAND, f, ## a)
--#define IWL_DEBUG_CALIB(f, a...) IWL_DEBUG(IWL_DL_CALIB, f, ## a)
--#define IWL_DEBUG_FW(f, a...) IWL_DEBUG(IWL_DL_FW, f, ## a)
--#define IWL_DEBUG_RF_KILL(f, a...) IWL_DEBUG(IWL_DL_RF_KILL, f, ## a)
--#define IWL_DEBUG_DROP(f, a...) IWL_DEBUG(IWL_DL_DROP, f, ## a)
--#define IWL_DEBUG_DROP_LIMIT(f, a...) IWL_DEBUG_LIMIT(IWL_DL_DROP, f, ## a)
--#define IWL_DEBUG_AP(f, a...) IWL_DEBUG(IWL_DL_AP, f, ## a)
--#define IWL_DEBUG_TXPOWER(f, a...) IWL_DEBUG(IWL_DL_TXPOWER, f, ## a)
--#define IWL_DEBUG_IO(f, a...) IWL_DEBUG(IWL_DL_IO, f, ## a)
--#define IWL_DEBUG_RATE(f, a...) IWL_DEBUG(IWL_DL_RATE, f, ## a)
--#define IWL_DEBUG_RATE_LIMIT(f, a...) IWL_DEBUG_LIMIT(IWL_DL_RATE, f, ## a)
--#define IWL_DEBUG_NOTIF(f, a...) IWL_DEBUG(IWL_DL_NOTIF, f, ## a)
--#define IWL_DEBUG_ASSOC(f, a...) IWL_DEBUG(IWL_DL_ASSOC | IWL_DL_INFO, f, ## a)
--#define IWL_DEBUG_ASSOC_LIMIT(f, a...) \
-- IWL_DEBUG_LIMIT(IWL_DL_ASSOC | IWL_DL_INFO, f, ## a)
--#define IWL_DEBUG_HT(f, a...) IWL_DEBUG(IWL_DL_HT, f, ## a)
--#define IWL_DEBUG_STATS(f, a...) IWL_DEBUG(IWL_DL_STATS, f, ## a)
--#define IWL_DEBUG_TX_REPLY(f, a...) IWL_DEBUG(IWL_DL_TX_REPLY, f, ## a)
--#define IWL_DEBUG_QOS(f, a...) IWL_DEBUG(IWL_DL_QOS, f, ## a)
--#define IWL_DEBUG_RADIO(f, a...) IWL_DEBUG(IWL_DL_RADIO, f, ## a)
--#define IWL_DEBUG_POWER(f, a...) IWL_DEBUG(IWL_DL_POWER, f, ## a)
--#define IWL_DEBUG_11H(f, a...) IWL_DEBUG(IWL_DL_11H, f, ## a)
-
--#endif
-diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom.h b/drivers/net/wireless/iwlwifi/iwl-eeprom.h
-deleted file mode 100644
-index e473c97..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-eeprom.h
-+++ /dev/null
-@@ -1,336 +0,0 @@
--/******************************************************************************
-- *
-- * This file is provided under a dual BSD/GPLv2 license. When using or
-- * redistributing this file, you may do so under either license.
-- *
-- * GPL LICENSE SUMMARY
-- *
-- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of version 2 of the GNU Geeral Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but
-- * WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- * General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; if not, write to the Free Software
-- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
-- * USA
-- *
-- * The full GNU General Public License is included in this distribution
-- * in the file called LICENSE.GPL.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- * BSD LICENSE
-- *
-- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
-- * All rights reserved.
-- *
-- * Redistribution and use in source and binary forms, with or without
-- * modification, are permitted provided that the following conditions
-- * are met:
-- *
-- * * Redistributions of source code must retain the above copyright
-- * notice, this list of conditions and the following disclaimer.
-- * * Redistributions in binary form must reproduce the above copyright
-- * notice, this list of conditions and the following disclaimer in
-- * the documentation and/or other materials provided with the
-- * distribution.
-- * * Neither the name Intel Corporation nor the names of its
-- * contributors may be used to endorse or promote products derived
-- * from this software without specific prior written permission.
+ #define IWL_PACKET_RETRY_TIME HZ
+
+-int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
++int iwl3945_is_duplicate_packet(struct iwl3945_priv *priv, struct ieee80211_hdr *header)
+ {
+ u16 sc = le16_to_cpu(header->seq_ctrl);
+ u16 seq = (sc & IEEE80211_SCTL_SEQ) >> 4;
+@@ -3151,29 +3124,26 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+ switch (priv->iw_mode) {
+ case IEEE80211_IF_TYPE_IBSS:{
+ struct list_head *p;
+- struct iwl_ibss_seq *entry = NULL;
++ struct iwl3945_ibss_seq *entry = NULL;
+ u8 *mac = header->addr2;
+ int index = mac[5] & (IWL_IBSS_MAC_HASH_SIZE - 1);
+
+ __list_for_each(p, &priv->ibss_mac_hash[index]) {
+- entry =
+- list_entry(p, struct iwl_ibss_seq, list);
++ entry = list_entry(p, struct iwl3945_ibss_seq, list);
+ if (!compare_ether_addr(entry->mac, mac))
+ break;
+ }
+ if (p == &priv->ibss_mac_hash[index]) {
+ entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+ if (!entry) {
+- IWL_ERROR
+- ("Cannot malloc new mac entry\n");
++ IWL_ERROR("Cannot malloc new mac entry\n");
+ return 0;
+ }
+ memcpy(entry->mac, mac, ETH_ALEN);
+ entry->seq_num = seq;
+ entry->frag_num = frag;
+ entry->packet_time = jiffies;
+- list_add(&entry->list,
+- &priv->ibss_mac_hash[index]);
++ list_add(&entry->list, &priv->ibss_mac_hash[index]);
+ return 0;
+ }
+ last_seq = &entry->seq_num;
+@@ -3207,7 +3177,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+ return 1;
+ }
+
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
+
+ #include "iwl-spectrum.h"
+
+@@ -3222,7 +3192,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+ * the lower 3 bytes is the time in usec within one beacon interval
+ */
+
+-static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
++static u32 iwl3945_usecs_to_beacons(u32 usec, u32 beacon_interval)
+ {
+ u32 quot;
+ u32 rem;
+@@ -3241,7 +3211,7 @@ static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
+ * the same as HW timer counter counting down
+ */
+
+-static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
++static __le32 iwl3945_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
+ {
+ u32 base_low = base & BEACON_TIME_MASK_LOW;
+ u32 addon_low = addon & BEACON_TIME_MASK_LOW;
+@@ -3260,13 +3230,13 @@ static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
+ return cpu_to_le32(res);
+ }
+
+-static int iwl_get_measurement(struct iwl_priv *priv,
++static int iwl3945_get_measurement(struct iwl3945_priv *priv,
+ struct ieee80211_measurement_params *params,
+ u8 type)
+ {
+- struct iwl_spectrum_cmd spectrum;
+- struct iwl_rx_packet *res;
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_spectrum_cmd spectrum;
++ struct iwl3945_rx_packet *res;
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_SPECTRUM_MEASUREMENT_CMD,
+ .data = (void *)&spectrum,
+ .meta.flags = CMD_WANT_SKB,
+@@ -3276,9 +3246,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+ int spectrum_resp_status;
+ int duration = le16_to_cpu(params->duration);
+
+- if (iwl_is_associated(priv))
++ if (iwl3945_is_associated(priv))
+ add_time =
+- iwl_usecs_to_beacons(
++ iwl3945_usecs_to_beacons(
+ le64_to_cpu(params->start_time) - priv->last_tsf,
+ le16_to_cpu(priv->rxon_timing.beacon_interval));
+
+@@ -3291,9 +3261,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+ cmd.len = sizeof(spectrum);
+ spectrum.len = cpu_to_le16(cmd.len - sizeof(spectrum.len));
+
+- if (iwl_is_associated(priv))
++ if (iwl3945_is_associated(priv))
+ spectrum.start_time =
+- iwl_add_beacon_time(priv->last_beacon_time,
++ iwl3945_add_beacon_time(priv->last_beacon_time,
+ add_time,
+ le16_to_cpu(priv->rxon_timing.beacon_interval));
+ else
+@@ -3306,11 +3276,11 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+ spectrum.flags |= RXON_FLG_BAND_24G_MSK |
+ RXON_FLG_AUTO_DETECT_MSK | RXON_FLG_TGG_PROTECT_MSK;
+
+- rc = iwl_send_cmd_sync(priv, &cmd);
++ rc = iwl3945_send_cmd_sync(priv, &cmd);
+ if (rc)
+ return rc;
+
+- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
+ if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
+ IWL_ERROR("Bad return from REPLY_RX_ON_ASSOC command\n");
+ rc = -EIO;
+@@ -3320,9 +3290,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+ switch (spectrum_resp_status) {
+ case 0: /* Command will be handled */
+ if (res->u.spectrum.id != 0xff) {
+- IWL_DEBUG_INFO
+- ("Replaced existing measurement: %d\n",
+- res->u.spectrum.id);
++ IWL_DEBUG_INFO("Replaced existing measurement: %d\n",
++ res->u.spectrum.id);
+ priv->measurement_status &= ~MEASUREMENT_READY;
+ }
+ priv->measurement_status |= MEASUREMENT_ACTIVE;
+@@ -3340,8 +3309,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+ }
+ #endif
+
+-static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
+- struct iwl_tx_info *tx_sta)
++static void iwl3945_txstatus_to_ieee(struct iwl3945_priv *priv,
++ struct iwl3945_tx_info *tx_sta)
+ {
+
+ tx_sta->status.ack_signal = 0;
+@@ -3360,41 +3329,41 @@ static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
+ }
+
+ /**
+- * iwl_tx_queue_reclaim - Reclaim Tx queue entries no more used by NIC.
++ * iwl3945_tx_queue_reclaim - Reclaim Tx queue entries already Tx'd
+ *
+- * When FW advances 'R' index, all entries between old and
+- * new 'R' index need to be reclaimed. As result, some free space
+- * forms. If there is enough free space (> low mark), wake Tx queue.
++ * When FW advances 'R' index, all entries between old and new 'R' index
++ * need to be reclaimed. As result, some free space forms. If there is
++ * enough free space (> low mark), wake the stack that feeds us.
+ */
+-int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
++static int iwl3945_tx_queue_reclaim(struct iwl3945_priv *priv, int txq_id, int index)
+ {
+- struct iwl_tx_queue *txq = &priv->txq[txq_id];
+- struct iwl_queue *q = &txq->q;
++ struct iwl3945_tx_queue *txq = &priv->txq[txq_id];
++ struct iwl3945_queue *q = &txq->q;
+ int nfreed = 0;
+
+ if ((index >= q->n_bd) || (x2_queue_used(q, index) == 0)) {
+ IWL_ERROR("Read index for DMA queue txq id (%d), index %d, "
+ "is out of range [0-%d] %d %d.\n", txq_id,
+- index, q->n_bd, q->first_empty, q->last_used);
++ index, q->n_bd, q->write_ptr, q->read_ptr);
+ return 0;
+ }
+
+- for (index = iwl_queue_inc_wrap(index, q->n_bd);
+- q->last_used != index;
+- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd)) {
++ for (index = iwl3945_queue_inc_wrap(index, q->n_bd);
++ q->read_ptr != index;
++ q->read_ptr = iwl3945_queue_inc_wrap(q->read_ptr, q->n_bd)) {
+ if (txq_id != IWL_CMD_QUEUE_NUM) {
+- iwl_txstatus_to_ieee(priv,
+- &(txq->txb[txq->q.last_used]));
+- iwl_hw_txq_free_tfd(priv, txq);
++ iwl3945_txstatus_to_ieee(priv,
++ &(txq->txb[txq->q.read_ptr]));
++ iwl3945_hw_txq_free_tfd(priv, txq);
+ } else if (nfreed > 1) {
+ IWL_ERROR("HCMD skipped: index (%d) %d %d\n", index,
+- q->first_empty, q->last_used);
++ q->write_ptr, q->read_ptr);
+ queue_work(priv->workqueue, &priv->restart);
+ }
+ nfreed++;
+ }
+
+- if (iwl_queue_space(q) > q->low_mark && (txq_id >= 0) &&
++ if (iwl3945_queue_space(q) > q->low_mark && (txq_id >= 0) &&
+ (txq_id != IWL_CMD_QUEUE_NUM) &&
+ priv->mac80211_registered)
+ ieee80211_wake_queue(priv->hw, txq_id);
+@@ -3403,7 +3372,7 @@ int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
+ return nfreed;
+ }
+
+-static int iwl_is_tx_success(u32 status)
++static int iwl3945_is_tx_success(u32 status)
+ {
+ return (status & 0xFF) == 0x1;
+ }
+@@ -3413,27 +3382,30 @@ static int iwl_is_tx_success(u32 status)
+ * Generic RX handler implementations
+ *
+ ******************************************************************************/
+-static void iwl_rx_reply_tx(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++/**
++ * iwl3945_rx_reply_tx - Handle Tx response
++ */
++static void iwl3945_rx_reply_tx(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
+ u16 sequence = le16_to_cpu(pkt->hdr.sequence);
+ int txq_id = SEQ_TO_QUEUE(sequence);
+ int index = SEQ_TO_INDEX(sequence);
+- struct iwl_tx_queue *txq = &priv->txq[txq_id];
++ struct iwl3945_tx_queue *txq = &priv->txq[txq_id];
+ struct ieee80211_tx_status *tx_status;
+- struct iwl_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
++ struct iwl3945_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
+ u32 status = le32_to_cpu(tx_resp->status);
+
+ if ((index >= txq->q.n_bd) || (x2_queue_used(&txq->q, index) == 0)) {
+ IWL_ERROR("Read index for DMA queue txq_id (%d) index %d "
+ "is out of range [0-%d] %d %d\n", txq_id,
+- index, txq->q.n_bd, txq->q.first_empty,
+- txq->q.last_used);
++ index, txq->q.n_bd, txq->q.write_ptr,
++ txq->q.read_ptr);
+ return;
+ }
+
+- tx_status = &(txq->txb[txq->q.last_used].status);
++ tx_status = &(txq->txb[txq->q.read_ptr].status);
+
+ tx_status->retry_count = tx_resp->failure_frame;
+ tx_status->queue_number = status;
+@@ -3441,28 +3413,28 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
+ tx_status->queue_length |= tx_resp->failure_rts;
+
+ tx_status->flags =
+- iwl_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
++ iwl3945_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
+
+- tx_status->control.tx_rate = iwl_rate_index_from_plcp(tx_resp->rate);
++ tx_status->control.tx_rate = iwl3945_rate_index_from_plcp(tx_resp->rate);
+
+ IWL_DEBUG_TX("Tx queue %d Status %s (0x%08x) plcp rate %d retries %d\n",
+- txq_id, iwl_get_tx_fail_reason(status), status,
++ txq_id, iwl3945_get_tx_fail_reason(status), status,
+ tx_resp->rate, tx_resp->failure_frame);
+
+ IWL_DEBUG_TX_REPLY("Tx queue reclaim %d\n", index);
+ if (index != -1)
+- iwl_tx_queue_reclaim(priv, txq_id, index);
++ iwl3945_tx_queue_reclaim(priv, txq_id, index);
+
+ if (iwl_check_bits(status, TX_ABORT_REQUIRED_MSK))
+ IWL_ERROR("TODO: Implement Tx ABORT REQUIRED!!!\n");
+ }
+
+
+-static void iwl_rx_reply_alive(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_reply_alive(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_alive_resp *palive;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_alive_resp *palive;
+ struct delayed_work *pwork;
+
+ palive = &pkt->u.alive_frame;
+@@ -3476,14 +3448,14 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
+ IWL_DEBUG_INFO("Initialization Alive received.\n");
+ memcpy(&priv->card_alive_init,
+ &pkt->u.alive_frame,
+- sizeof(struct iwl_init_alive_resp));
++ sizeof(struct iwl3945_init_alive_resp));
+ pwork = &priv->init_alive_start;
+ } else {
+ IWL_DEBUG_INFO("Runtime Alive received.\n");
+ memcpy(&priv->card_alive, &pkt->u.alive_frame,
+- sizeof(struct iwl_alive_resp));
++ sizeof(struct iwl3945_alive_resp));
+ pwork = &priv->alive_start;
+- iwl_disable_events(priv);
++ iwl3945_disable_events(priv);
+ }
+
+ /* We delay the ALIVE response by 5ms to
+@@ -3495,19 +3467,19 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
+ IWL_WARNING("uCode did not respond OK.\n");
+ }
+
+-static void iwl_rx_reply_add_sta(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_reply_add_sta(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
+
+ IWL_DEBUG_RX("Received REPLY_ADD_STA: 0x%02X\n", pkt->u.status);
+ return;
+ }
+
+-static void iwl_rx_reply_error(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_reply_error(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
+
+ IWL_ERROR("Error Reply type 0x%08X cmd %s (0x%02X) "
+ "seq 0x%04X ser 0x%08X\n",
+@@ -3520,23 +3492,23 @@ static void iwl_rx_reply_error(struct iwl_priv *priv,
+
+ #define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
+
+-static void iwl_rx_csa(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_csa(struct iwl3945_priv *priv, struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_rxon_cmd *rxon = (void *)&priv->active_rxon;
+- struct iwl_csa_notification *csa = &(pkt->u.csa_notif);
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rxon_cmd *rxon = (void *)&priv->active_rxon;
++ struct iwl3945_csa_notification *csa = &(pkt->u.csa_notif);
+ IWL_DEBUG_11H("CSA notif: channel %d, status %d\n",
+ le16_to_cpu(csa->channel), le32_to_cpu(csa->status));
+ rxon->channel = csa->channel;
+ priv->staging_rxon.channel = csa->channel;
+ }
+
+-static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_spectrum_measure_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_spectrum_notification *report = &(pkt->u.spectrum_notif);
++#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_spectrum_notification *report = &(pkt->u.spectrum_notif);
+
+ if (!report->state) {
+ IWL_DEBUG(IWL_DL_11H | IWL_DL_INFO,
+@@ -3549,35 +3521,35 @@ static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
+ #endif
+ }
+
+-static void iwl_rx_pm_sleep_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_pm_sleep_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_sleep_notification *sleep = &(pkt->u.sleep_notif);
++#ifdef CONFIG_IWL3945_DEBUG
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_sleep_notification *sleep = &(pkt->u.sleep_notif);
+ IWL_DEBUG_RX("sleep mode: %d, src: %d\n",
+ sleep->pm_sleep_mode, sleep->pm_wakeup_src);
+ #endif
+ }
+
+-static void iwl_rx_pm_debug_statistics_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_pm_debug_statistics_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
+ IWL_DEBUG_RADIO("Dumping %d bytes of unhandled "
+ "notification for %s:\n",
+ le32_to_cpu(pkt->len), get_cmd_string(pkt->hdr.cmd));
+- iwl_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
++ iwl3945_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
+ }
+
+-static void iwl_bg_beacon_update(struct work_struct *work)
++static void iwl3945_bg_beacon_update(struct work_struct *work)
+ {
+- struct iwl_priv *priv =
+- container_of(work, struct iwl_priv, beacon_update);
++ struct iwl3945_priv *priv =
++ container_of(work, struct iwl3945_priv, beacon_update);
+ struct sk_buff *beacon;
+
+ /* Pull updated AP beacon from mac80211. will fail if not in AP mode */
+- beacon = ieee80211_beacon_get(priv->hw, priv->interface_id, NULL);
++ beacon = ieee80211_beacon_get(priv->hw, priv->vif, NULL);
+
+ if (!beacon) {
+ IWL_ERROR("update beacon failed\n");
+@@ -3592,15 +3564,15 @@ static void iwl_bg_beacon_update(struct work_struct *work)
+ priv->ibss_beacon = beacon;
+ mutex_unlock(&priv->mutex);
+
+- iwl_send_beacon_cmd(priv);
++ iwl3945_send_beacon_cmd(priv);
+ }
+
+-static void iwl_rx_beacon_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_beacon_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_beacon_notif *beacon = &(pkt->u.beacon_status);
++#ifdef CONFIG_IWL3945_DEBUG
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_beacon_notif *beacon = &(pkt->u.beacon_status);
+ u8 rate = beacon->beacon_notify_hdr.rate;
+
+ IWL_DEBUG_RX("beacon status %x retries %d iss %d "
+@@ -3618,25 +3590,25 @@ static void iwl_rx_beacon_notif(struct iwl_priv *priv,
+ }
+
+ /* Service response to REPLY_SCAN_CMD (0x80) */
+-static void iwl_rx_reply_scan(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_reply_scan(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_scanreq_notification *notif =
+- (struct iwl_scanreq_notification *)pkt->u.raw;
++#ifdef CONFIG_IWL3945_DEBUG
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_scanreq_notification *notif =
++ (struct iwl3945_scanreq_notification *)pkt->u.raw;
+
+ IWL_DEBUG_RX("Scan request status = 0x%x\n", notif->status);
+ #endif
+ }
+
+ /* Service SCAN_START_NOTIFICATION (0x82) */
+-static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_scan_start_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_scanstart_notification *notif =
+- (struct iwl_scanstart_notification *)pkt->u.raw;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_scanstart_notification *notif =
++ (struct iwl3945_scanstart_notification *)pkt->u.raw;
+ priv->scan_start_tsf = le32_to_cpu(notif->tsf_low);
+ IWL_DEBUG_SCAN("Scan start: "
+ "%d [802.11%s] "
+@@ -3648,12 +3620,12 @@ static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
+ }
+
+ /* Service SCAN_RESULTS_NOTIFICATION (0x83) */
+-static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_scan_results_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_scanresults_notification *notif =
+- (struct iwl_scanresults_notification *)pkt->u.raw;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_scanresults_notification *notif =
++ (struct iwl3945_scanresults_notification *)pkt->u.raw;
+
+ IWL_DEBUG_SCAN("Scan ch.res: "
+ "%d [802.11%s] "
+@@ -3669,14 +3641,15 @@ static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
+ (priv->last_scan_jiffies, jiffies)));
+
+ priv->last_scan_jiffies = jiffies;
++ priv->next_scan_jiffies = 0;
+ }
+
+ /* Service SCAN_COMPLETE_NOTIFICATION (0x84) */
+-static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_scan_complete_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
+- struct iwl_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
+
+ IWL_DEBUG_SCAN("Scan complete: %d channels (TSF 0x%08X:%08X) - %d\n",
+ scan_notif->scanned_channels,
+@@ -3711,6 +3684,7 @@ static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
+ }
+
+ priv->last_scan_jiffies = jiffies;
++ priv->next_scan_jiffies = 0;
+ IWL_DEBUG_INFO("Setting scan to off\n");
+
+ clear_bit(STATUS_SCANNING, &priv->status);
+@@ -3729,10 +3703,10 @@ reschedule:
+
+ /* Handle notification from uCode that card's power state is changing
+ * due to software, hardware, or critical temperature RFKILL */
+-static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_rx_card_state_notif(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
+ u32 flags = le32_to_cpu(pkt->u.card_state_notif.flags);
+ unsigned long status = priv->status;
+
+@@ -3740,7 +3714,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+ (flags & HW_CARD_DISABLED) ? "Kill" : "On",
+ (flags & SW_CARD_DISABLED) ? "Kill" : "On");
+
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_SET,
+ CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+
+ if (flags & HW_CARD_DISABLED)
+@@ -3754,7 +3728,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+ else
+ clear_bit(STATUS_RF_KILL_SW, &priv->status);
+
+- iwl_scan_cancel(priv);
++ iwl3945_scan_cancel(priv);
+
+ if ((test_bit(STATUS_RF_KILL_HW, &status) !=
+ test_bit(STATUS_RF_KILL_HW, &priv->status)) ||
+@@ -3766,7 +3740,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+ }
+
+ /**
+- * iwl_setup_rx_handlers - Initialize Rx handler callbacks
++ * iwl3945_setup_rx_handlers - Initialize Rx handler callbacks
+ *
+ * Setup the RX handlers for each of the reply types sent from the uCode
+ * to the host.
+@@ -3774,61 +3748,58 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+ * This function chains into the hardware specific files for them to setup
+ * any hardware specific handlers as well.
+ */
+-static void iwl_setup_rx_handlers(struct iwl_priv *priv)
++static void iwl3945_setup_rx_handlers(struct iwl3945_priv *priv)
+ {
+- priv->rx_handlers[REPLY_ALIVE] = iwl_rx_reply_alive;
+- priv->rx_handlers[REPLY_ADD_STA] = iwl_rx_reply_add_sta;
+- priv->rx_handlers[REPLY_ERROR] = iwl_rx_reply_error;
+- priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl_rx_csa;
++ priv->rx_handlers[REPLY_ALIVE] = iwl3945_rx_reply_alive;
++ priv->rx_handlers[REPLY_ADD_STA] = iwl3945_rx_reply_add_sta;
++ priv->rx_handlers[REPLY_ERROR] = iwl3945_rx_reply_error;
++ priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl3945_rx_csa;
+ priv->rx_handlers[SPECTRUM_MEASURE_NOTIFICATION] =
+- iwl_rx_spectrum_measure_notif;
+- priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl_rx_pm_sleep_notif;
++ iwl3945_rx_spectrum_measure_notif;
++ priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl3945_rx_pm_sleep_notif;
+ priv->rx_handlers[PM_DEBUG_STATISTIC_NOTIFIC] =
+- iwl_rx_pm_debug_statistics_notif;
+- priv->rx_handlers[BEACON_NOTIFICATION] = iwl_rx_beacon_notif;
+-
+- /* NOTE: iwl_rx_statistics is different based on whether
+- * the build is for the 3945 or the 4965. See the
+- * corresponding implementation in iwl-XXXX.c
+- *
+- * The same handler is used for both the REPLY to a
+- * discrete statistics request from the host as well as
+- * for the periodic statistics notification from the uCode
++ iwl3945_rx_pm_debug_statistics_notif;
++ priv->rx_handlers[BEACON_NOTIFICATION] = iwl3945_rx_beacon_notif;
++
++ /*
++ * The same handler is used for both the REPLY to a discrete
++ * statistics request from the host as well as for the periodic
++ * statistics notifications (after received beacons) from the uCode.
+ */
+- priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl_hw_rx_statistics;
+- priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl_hw_rx_statistics;
++ priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl3945_hw_rx_statistics;
++ priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl3945_hw_rx_statistics;
+
+- priv->rx_handlers[REPLY_SCAN_CMD] = iwl_rx_reply_scan;
+- priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl_rx_scan_start_notif;
++ priv->rx_handlers[REPLY_SCAN_CMD] = iwl3945_rx_reply_scan;
++ priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl3945_rx_scan_start_notif;
+ priv->rx_handlers[SCAN_RESULTS_NOTIFICATION] =
+- iwl_rx_scan_results_notif;
++ iwl3945_rx_scan_results_notif;
+ priv->rx_handlers[SCAN_COMPLETE_NOTIFICATION] =
+- iwl_rx_scan_complete_notif;
+- priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl_rx_card_state_notif;
+- priv->rx_handlers[REPLY_TX] = iwl_rx_reply_tx;
++ iwl3945_rx_scan_complete_notif;
++ priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl3945_rx_card_state_notif;
++ priv->rx_handlers[REPLY_TX] = iwl3945_rx_reply_tx;
+
+- /* Setup hardware specific Rx handlers */
+- iwl_hw_rx_handler_setup(priv);
++ /* Set up hardware specific Rx handlers */
++ iwl3945_hw_rx_handler_setup(priv);
+ }
+
+ /**
+- * iwl_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
++ * iwl3945_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
+ * @rxb: Rx buffer to reclaim
+ *
+ * If an Rx buffer has an async callback associated with it the callback
+ * will be executed. The attached skb (if present) will only be freed
+ * if the callback returns 1
+ */
+-static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb)
++static void iwl3945_tx_cmd_complete(struct iwl3945_priv *priv,
++ struct iwl3945_rx_mem_buffer *rxb)
+ {
+- struct iwl_rx_packet *pkt = (struct iwl_rx_packet *)rxb->skb->data;
++ struct iwl3945_rx_packet *pkt = (struct iwl3945_rx_packet *)rxb->skb->data;
+ u16 sequence = le16_to_cpu(pkt->hdr.sequence);
+ int txq_id = SEQ_TO_QUEUE(sequence);
+ int index = SEQ_TO_INDEX(sequence);
+ int huge = sequence & SEQ_HUGE_FRAME;
+ int cmd_index;
+- struct iwl_cmd *cmd;
++ struct iwl3945_cmd *cmd;
+
+ /* If a Tx command is being handled and it isn't in the actual
+ * command queue then there a command routing bug has been introduced
+@@ -3849,7 +3820,7 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+ !cmd->meta.u.callback(priv, cmd, rxb->skb))
+ rxb->skb = NULL;
+
+- iwl_tx_queue_reclaim(priv, txq_id, index);
++ iwl3945_tx_queue_reclaim(priv, txq_id, index);
+
+ if (!(cmd->meta.flags & CMD_ASYNC)) {
+ clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
+@@ -3879,10 +3850,10 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+ * The queue is empty (no good data) if WRITE = READ - 1, and is full if
+ * WRITE = READ.
+ *
+- * During initialization the host sets up the READ queue position to the first
++ * During initialization, the host sets up the READ queue position to the first
+ * INDEX position, and WRITE to the last (READ - 1 wrapped)
+ *
+- * When the firmware places a packet in a buffer it will advance the READ index
++ * When the firmware places a packet in a buffer, it will advance the READ index
+ * and fire the RX interrupt. The driver can then query the READ index and
+ * process as many packets as possible, moving the WRITE index forward as it
+ * resets the Rx queue buffers with new memory.
+@@ -3890,8 +3861,8 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+ * The management in the driver is as follows:
+ * + A list of pre-allocated SKBs is stored in iwl->rxq->rx_free. When
+ * iwl->rxq->free_count drops to or below RX_LOW_WATERMARK, work is scheduled
+- * to replensish the iwl->rxq->rx_free.
+- * + In iwl_rx_replenish (scheduled) if 'processed' != 'read' then the
++ * to replenish the iwl->rxq->rx_free.
++ * + In iwl3945_rx_replenish (scheduled) if 'processed' != 'read' then the
+ * iwl->rxq is replenished and the READ INDEX is updated (updating the
+ * 'processed' and 'read' driver indexes as well)
+ * + A received packet is processed and handed to the kernel network stack,
+@@ -3904,28 +3875,28 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+ *
+ * Driver sequence:
+ *
+- * iwl_rx_queue_alloc() Allocates rx_free
+- * iwl_rx_replenish() Replenishes rx_free list from rx_used, and calls
+- * iwl_rx_queue_restock
+- * iwl_rx_queue_restock() Moves available buffers from rx_free into Rx
++ * iwl3945_rx_queue_alloc() Allocates rx_free
++ * iwl3945_rx_replenish() Replenishes rx_free list from rx_used, and calls
++ * iwl3945_rx_queue_restock
++ * iwl3945_rx_queue_restock() Moves available buffers from rx_free into Rx
+ * queue, updates firmware pointers, and updates
+ * the WRITE index. If insufficient rx_free buffers
+- * are available, schedules iwl_rx_replenish
++ * are available, schedules iwl3945_rx_replenish
+ *
+ * -- enable interrupts --
+- * ISR - iwl_rx() Detach iwl_rx_mem_buffers from pool up to the
++ * ISR - iwl3945_rx() Detach iwl3945_rx_mem_buffers from pool up to the
+ * READ INDEX, detaching the SKB from the pool.
+ * Moves the packet buffer from queue to rx_used.
+- * Calls iwl_rx_queue_restock to refill any empty
++ * Calls iwl3945_rx_queue_restock to refill any empty
+ * slots.
+ * ...
+ *
+ */
+
+ /**
+- * iwl_rx_queue_space - Return number of free slots available in queue.
++ * iwl3945_rx_queue_space - Return number of free slots available in queue.
+ */
+-static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
++static int iwl3945_rx_queue_space(const struct iwl3945_rx_queue *q)
+ {
+ int s = q->read - q->write;
+ if (s <= 0)
+@@ -3938,15 +3909,9 @@ static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
+ }
+
+ /**
+- * iwl_rx_queue_update_write_ptr - Update the write pointer for the RX queue
- *
-- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+- * NOTE: This function has 3945 and 4965 specific code sections
+- * but is declared in base due to the majority of the
+- * implementation being the same (only a numeric constant is
+- * different)
- *
-- *****************************************************************************/
--
--#ifndef __iwl_eeprom_h__
--#define __iwl_eeprom_h__
--
--/*
-- * This file defines EEPROM related constants, enums, and inline functions.
++ * iwl3945_rx_queue_update_write_ptr - Update the write pointer for the RX queue
+ */
+-int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
++int iwl3945_rx_queue_update_write_ptr(struct iwl3945_priv *priv, struct iwl3945_rx_queue *q)
+ {
+ u32 reg = 0;
+ int rc = 0;
+@@ -3957,24 +3922,29 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
+ if (q->need_update == 0)
+ goto exit_unlock;
+
++ /* If power-saving is in use, make sure device is awake */
+ if (test_bit(STATUS_POWER_PMI, &priv->status)) {
+- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
++ reg = iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
+
+ if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
+- iwl_set_bit(priv, CSR_GP_CNTRL,
++ iwl3945_set_bit(priv, CSR_GP_CNTRL,
+ CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+ goto exit_unlock;
+ }
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc)
+ goto exit_unlock;
+
+- iwl_write_restricted(priv, FH_RSCSR_CHNL0_WPTR,
++ /* Device expects a multiple of 8 */
++ iwl3945_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
+ q->write & ~0x7);
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
++
++ /* Else device is assumed to be awake */
+ } else
+- iwl_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
++ /* Device expects a multiple of 8 */
++ iwl3945_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
+
+
+ q->need_update = 0;
+@@ -3985,42 +3955,43 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
+ }
+
+ /**
+- * iwl_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer pointer.
- *
-- */
--
--#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */
--#define IWL_EEPROM_ACCESS_DELAY 10 /* uSec */
--/* EEPROM field values */
--#define ANTENNA_SWITCH_NORMAL 0
--#define ANTENNA_SWITCH_INVERSE 1
--
--enum {
-- EEPROM_CHANNEL_VALID = (1 << 0), /* usable for this SKU/geo */
-- EEPROM_CHANNEL_IBSS = (1 << 1), /* usable as an IBSS channel */
-- /* Bit 2 Reserved */
-- EEPROM_CHANNEL_ACTIVE = (1 << 3), /* active scanning allowed */
-- EEPROM_CHANNEL_RADAR = (1 << 4), /* radar detection required */
-- EEPROM_CHANNEL_WIDE = (1 << 5),
-- EEPROM_CHANNEL_NARROW = (1 << 6),
-- EEPROM_CHANNEL_DFS = (1 << 7), /* dynamic freq selection candidate */
--};
--
--/* EEPROM field lengths */
--#define EEPROM_BOARD_PBA_NUMBER_LENGTH 11
--
--/* EEPROM field lengths */
--#define EEPROM_BOARD_PBA_NUMBER_LENGTH 11
--#define EEPROM_REGULATORY_SKU_ID_LENGTH 4
--#define EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH 14
--#define EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH 13
--#define EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH 12
--#define EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH 11
--#define EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH 6
--
--#if IWL == 3945
--#define EEPROM_REGULATORY_CHANNELS_LENGTH ( \
-- EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH)
--#elif IWL == 4965
--#define EEPROM_REGULATORY_BAND_24_FAT_CHANNELS_LENGTH 7
--#define EEPROM_REGULATORY_BAND_52_FAT_CHANNELS_LENGTH 11
--#define EEPROM_REGULATORY_CHANNELS_LENGTH ( \
-- EEPROM_REGULATORY_BAND1_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND2_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND3_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND4_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND5_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND_24_FAT_CHANNELS_LENGTH + \
-- EEPROM_REGULATORY_BAND_52_FAT_CHANNELS_LENGTH)
--#endif
--
--#define EEPROM_REGULATORY_NUMBER_OF_BANDS 5
--
--/* SKU Capabilities */
--#define EEPROM_SKU_CAP_SW_RF_KILL_ENABLE (1 << 0)
--#define EEPROM_SKU_CAP_HW_RF_KILL_ENABLE (1 << 1)
--#define EEPROM_SKU_CAP_OP_MODE_MRC (1 << 7)
--
--/* *regulatory* channel data from eeprom, one for each channel */
--struct iwl_eeprom_channel {
-- u8 flags; /* flags copied from EEPROM */
-- s8 max_power_avg; /* max power (dBm) on this chnl, limit 31 */
--} __attribute__ ((packed));
--
--/*
-- * Mapping of a Tx power level, at factory calibration temperature,
-- * to a radio/DSP gain table index.
-- * One for each of 5 "sample" power levels in each band.
-- * v_det is measured at the factory, using the 3945's built-in power amplifier
-- * (PA) output voltage detector. This same detector is used during Tx of
-- * long packets in normal operation to provide feedback as to proper output
-- * level.
-- * Data copied from EEPROM.
-- */
--struct iwl_eeprom_txpower_sample {
-- u8 gain_index; /* index into power (gain) setup table ... */
-- s8 power; /* ... for this pwr level for this chnl group */
-- u16 v_det; /* PA output voltage */
--} __attribute__ ((packed));
--
--/*
-- * Mappings of Tx power levels -> nominal radio/DSP gain table indexes.
-- * One for each channel group (a.k.a. "band") (1 for BG, 4 for A).
-- * Tx power setup code interpolates between the 5 "sample" power levels
-- * to determine the nominal setup for a requested power level.
-- * Data copied from EEPROM.
-- * DO NOT ALTER THIS STRUCTURE!!!
-- */
--struct iwl_eeprom_txpower_group {
-- struct iwl_eeprom_txpower_sample samples[5]; /* 5 power levels */
-- s32 a, b, c, d, e; /* coefficients for voltage->power
-- * formula (signed) */
-- s32 Fa, Fb, Fc, Fd, Fe; /* these modify coeffs based on
-- * frequency (signed) */
-- s8 saturation_power; /* highest power possible by h/w in this
-- * band */
-- u8 group_channel; /* "representative" channel # in this band */
-- s16 temperature; /* h/w temperature at factory calib this band
-- * (signed) */
--} __attribute__ ((packed));
--
--/*
-- * Temperature-based Tx-power compensation data, not band-specific.
-- * These coefficients are use to modify a/b/c/d/e coeffs based on
-- * difference between current temperature and factory calib temperature.
-- * Data copied from EEPROM.
-- */
--struct iwl_eeprom_temperature_corr {
-- u32 Ta;
-- u32 Tb;
-- u32 Tc;
-- u32 Td;
-- u32 Te;
--} __attribute__ ((packed));
--
--#if IWL == 4965
--#define EEPROM_TX_POWER_TX_CHAINS (2)
--#define EEPROM_TX_POWER_BANDS (8)
--#define EEPROM_TX_POWER_MEASUREMENTS (3)
--#define EEPROM_TX_POWER_VERSION (2)
--#define EEPROM_TX_POWER_VERSION_NEW (5)
--
--struct iwl_eeprom_calib_measure {
-- u8 temperature;
-- u8 gain_idx;
-- u8 actual_pow;
-- s8 pa_det;
--} __attribute__ ((packed));
--
--struct iwl_eeprom_calib_ch_info {
-- u8 ch_num;
-- struct iwl_eeprom_calib_measure measurements[EEPROM_TX_POWER_TX_CHAINS]
-- [EEPROM_TX_POWER_MEASUREMENTS];
--} __attribute__ ((packed));
--
--struct iwl_eeprom_calib_subband_info {
-- u8 ch_from;
-- u8 ch_to;
-- struct iwl_eeprom_calib_ch_info ch1;
-- struct iwl_eeprom_calib_ch_info ch2;
--} __attribute__ ((packed));
--
--struct iwl_eeprom_calib_info {
-- u8 saturation_power24;
-- u8 saturation_power52;
-- s16 voltage; /* signed */
-- struct iwl_eeprom_calib_subband_info band_info[EEPROM_TX_POWER_BANDS];
--} __attribute__ ((packed));
--
--#endif
--
--struct iwl_eeprom {
-- u8 reserved0[16];
--#define EEPROM_DEVICE_ID (2*0x08) /* 2 bytes */
-- u16 device_id; /* abs.ofs: 16 */
-- u8 reserved1[2];
--#define EEPROM_PMC (2*0x0A) /* 2 bytes */
-- u16 pmc; /* abs.ofs: 20 */
-- u8 reserved2[20];
--#define EEPROM_MAC_ADDRESS (2*0x15) /* 6 bytes */
-- u8 mac_address[6]; /* abs.ofs: 42 */
-- u8 reserved3[58];
--#define EEPROM_BOARD_REVISION (2*0x35) /* 2 bytes */
-- u16 board_revision; /* abs.ofs: 106 */
-- u8 reserved4[11];
--#define EEPROM_BOARD_PBA_NUMBER (2*0x3B+1) /* 9 bytes */
-- u8 board_pba_number[9]; /* abs.ofs: 119 */
-- u8 reserved5[8];
--#define EEPROM_VERSION (2*0x44) /* 2 bytes */
-- u16 version; /* abs.ofs: 136 */
--#define EEPROM_SKU_CAP (2*0x45) /* 1 bytes */
-- u8 sku_cap; /* abs.ofs: 138 */
--#define EEPROM_LEDS_MODE (2*0x45+1) /* 1 bytes */
-- u8 leds_mode; /* abs.ofs: 139 */
--#define EEPROM_OEM_MODE (2*0x46) /* 2 bytes */
-- u16 oem_mode;
--#define EEPROM_WOWLAN_MODE (2*0x47) /* 2 bytes */
-- u16 wowlan_mode; /* abs.ofs: 142 */
--#define EEPROM_LEDS_TIME_INTERVAL (2*0x48) /* 2 bytes */
-- u16 leds_time_interval; /* abs.ofs: 144 */
--#define EEPROM_LEDS_OFF_TIME (2*0x49) /* 1 bytes */
-- u8 leds_off_time; /* abs.ofs: 146 */
--#define EEPROM_LEDS_ON_TIME (2*0x49+1) /* 1 bytes */
-- u8 leds_on_time; /* abs.ofs: 147 */
--#define EEPROM_ALMGOR_M_VERSION (2*0x4A) /* 1 bytes */
-- u8 almgor_m_version; /* abs.ofs: 148 */
--#define EEPROM_ANTENNA_SWITCH_TYPE (2*0x4A+1) /* 1 bytes */
-- u8 antenna_switch_type; /* abs.ofs: 149 */
--#if IWL == 3945
-- u8 reserved6[42];
--#else
-- u8 reserved6[8];
--#define EEPROM_4965_BOARD_REVISION (2*0x4F) /* 2 bytes */
-- u16 board_revision_4965; /* abs.ofs: 158 */
-- u8 reserved7[13];
--#define EEPROM_4965_BOARD_PBA (2*0x56+1) /* 9 bytes */
-- u8 board_pba_number_4965[9]; /* abs.ofs: 173 */
-- u8 reserved8[10];
--#endif
--#define EEPROM_REGULATORY_SKU_ID (2*0x60) /* 4 bytes */
-- u8 sku_id[4]; /* abs.ofs: 192 */
--#define EEPROM_REGULATORY_BAND_1 (2*0x62) /* 2 bytes */
-- u16 band_1_count; /* abs.ofs: 196 */
--#define EEPROM_REGULATORY_BAND_1_CHANNELS (2*0x63) /* 28 bytes */
-- struct iwl_eeprom_channel band_1_channels[14]; /* abs.ofs: 196 */
--#define EEPROM_REGULATORY_BAND_2 (2*0x71) /* 2 bytes */
-- u16 band_2_count; /* abs.ofs: 226 */
--#define EEPROM_REGULATORY_BAND_2_CHANNELS (2*0x72) /* 26 bytes */
-- struct iwl_eeprom_channel band_2_channels[13]; /* abs.ofs: 228 */
--#define EEPROM_REGULATORY_BAND_3 (2*0x7F) /* 2 bytes */
-- u16 band_3_count; /* abs.ofs: 254 */
--#define EEPROM_REGULATORY_BAND_3_CHANNELS (2*0x80) /* 24 bytes */
-- struct iwl_eeprom_channel band_3_channels[12]; /* abs.ofs: 256 */
--#define EEPROM_REGULATORY_BAND_4 (2*0x8C) /* 2 bytes */
-- u16 band_4_count; /* abs.ofs: 280 */
--#define EEPROM_REGULATORY_BAND_4_CHANNELS (2*0x8D) /* 22 bytes */
-- struct iwl_eeprom_channel band_4_channels[11]; /* abs.ofs: 282 */
--#define EEPROM_REGULATORY_BAND_5 (2*0x98) /* 2 bytes */
-- u16 band_5_count; /* abs.ofs: 304 */
--#define EEPROM_REGULATORY_BAND_5_CHANNELS (2*0x99) /* 12 bytes */
-- struct iwl_eeprom_channel band_5_channels[6]; /* abs.ofs: 306 */
--
--/* From here on out the EEPROM diverges between the 4965 and the 3945 */
--#if IWL == 3945
--
-- u8 reserved9[194];
--
--#define EEPROM_TXPOWER_CALIB_GROUP0 0x200
--#define EEPROM_TXPOWER_CALIB_GROUP1 0x240
--#define EEPROM_TXPOWER_CALIB_GROUP2 0x280
--#define EEPROM_TXPOWER_CALIB_GROUP3 0x2c0
--#define EEPROM_TXPOWER_CALIB_GROUP4 0x300
--#define IWL_NUM_TX_CALIB_GROUPS 5
-- struct iwl_eeprom_txpower_group groups[IWL_NUM_TX_CALIB_GROUPS];
--/* abs.ofs: 512 */
--#define EEPROM_CALIB_TEMPERATURE_CORRECT 0x340
-- struct iwl_eeprom_temperature_corr corrections; /* abs.ofs: 832 */
-- u8 reserved16[172]; /* fill out to full 1024 byte block */
--
--/* 4965AGN adds fat channel support */
--#elif IWL == 4965
+- * NOTE: This function has 3945 and 4965 specific code paths in it.
++ * iwl3945_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer ptr
+ */
+-static inline __le32 iwl_dma_addr2rbd_ptr(struct iwl_priv *priv,
++static inline __le32 iwl3945_dma_addr2rbd_ptr(struct iwl3945_priv *priv,
+ dma_addr_t dma_addr)
+ {
+ return cpu_to_le32((u32)dma_addr);
+ }
+
+ /**
+- * iwl_rx_queue_restock - refill RX queue from pre-allocated pool
++ * iwl3945_rx_queue_restock - refill RX queue from pre-allocated pool
+ *
+- * If there are slots in the RX queue that need to be restocked,
++ * If there are slots in the RX queue that need to be restocked,
+ * and we have free pre-allocated buffers, fill the ranks as much
+- * as we can pulling from rx_free.
++ * as we can, pulling from rx_free.
+ *
+ * This moves the 'write' index forward to catch up with 'processed', and
+ * also updates the memory address in the firmware to reference the new
+ * target buffer.
+ */
+-int iwl_rx_queue_restock(struct iwl_priv *priv)
++static int iwl3945_rx_queue_restock(struct iwl3945_priv *priv)
+ {
+- struct iwl_rx_queue *rxq = &priv->rxq;
++ struct iwl3945_rx_queue *rxq = &priv->rxq;
+ struct list_head *element;
+- struct iwl_rx_mem_buffer *rxb;
++ struct iwl3945_rx_mem_buffer *rxb;
+ unsigned long flags;
+ int write, rc;
+
+ spin_lock_irqsave(&rxq->lock, flags);
+ write = rxq->write & ~0x7;
+- while ((iwl_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
++ while ((iwl3945_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
++ /* Get next free Rx buffer, remove from free list */
+ element = rxq->rx_free.next;
+- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
++ rxb = list_entry(element, struct iwl3945_rx_mem_buffer, list);
+ list_del(element);
+- rxq->bd[rxq->write] = iwl_dma_addr2rbd_ptr(priv, rxb->dma_addr);
++
++ /* Point to Rx buffer via next RBD in circular buffer */
++ rxq->bd[rxq->write] = iwl3945_dma_addr2rbd_ptr(priv, rxb->dma_addr);
+ rxq->queue[rxq->write] = rxb;
+ rxq->write = (rxq->write + 1) & RX_QUEUE_MASK;
+ rxq->free_count--;
+@@ -4032,13 +4003,14 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
+ queue_work(priv->workqueue, &priv->rx_replenish);
+
+
+- /* If we've added more space for the firmware to place data, tell it */
++ /* If we've added more space for the firmware to place data, tell it.
++ * Increment device's write pointer in multiples of 8. */
+ if ((write != (rxq->write & ~0x7))
+ || (abs(rxq->write - rxq->read) > 7)) {
+ spin_lock_irqsave(&rxq->lock, flags);
+ rxq->need_update = 1;
+ spin_unlock_irqrestore(&rxq->lock, flags);
+- rc = iwl_rx_queue_update_write_ptr(priv, rxq);
++ rc = iwl3945_rx_queue_update_write_ptr(priv, rxq);
+ if (rc)
+ return rc;
+ }
+@@ -4047,24 +4019,25 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
+ }
+
+ /**
+- * iwl_rx_replensih - Move all used packet from rx_used to rx_free
++ * iwl3945_rx_replenish - Move all used packet from rx_used to rx_free
+ *
+ * When moving to rx_free an SKB is allocated for the slot.
+ *
+- * Also restock the Rx queue via iwl_rx_queue_restock.
+- * This is called as a scheduled work item (except for during intialization)
++ * Also restock the Rx queue via iwl3945_rx_queue_restock.
++ * This is called as a scheduled work item (except for during initialization)
+ */
+-void iwl_rx_replenish(void *data)
++static void iwl3945_rx_allocate(struct iwl3945_priv *priv)
+ {
+- struct iwl_priv *priv = data;
+- struct iwl_rx_queue *rxq = &priv->rxq;
++ struct iwl3945_rx_queue *rxq = &priv->rxq;
+ struct list_head *element;
+- struct iwl_rx_mem_buffer *rxb;
++ struct iwl3945_rx_mem_buffer *rxb;
+ unsigned long flags;
+ spin_lock_irqsave(&rxq->lock, flags);
+ while (!list_empty(&rxq->rx_used)) {
+ element = rxq->rx_used.next;
+- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
++ rxb = list_entry(element, struct iwl3945_rx_mem_buffer, list);
++
++ /* Alloc a new receive buffer */
+ rxb->skb =
+ alloc_skb(IWL_RX_BUF_SIZE, __GFP_NOWARN | GFP_ATOMIC);
+ if (!rxb->skb) {
+@@ -4076,8 +4049,19 @@ void iwl_rx_replenish(void *data)
+ * more buffers it will schedule replenish */
+ break;
+ }
++
++ /* If radiotap head is required, reserve some headroom here.
++ * The physical head count is a variable rx_stats->phy_count.
++ * We reserve 4 bytes here. Plus these extra bytes, the
++ * headroom of the physical head should be enough for the
++ * radiotap head that iwl3945 supported. See iwl3945_rt.
++ */
++ skb_reserve(rxb->skb, 4);
++
+ priv->alloc_rxb_skb++;
+ list_del(element);
++
++ /* Get physical address of RB/SKB */
+ rxb->dma_addr =
+ pci_map_single(priv->pci_dev, rxb->skb->data,
+ IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
+@@ -4085,18 +4069,38 @@ void iwl_rx_replenish(void *data)
+ rxq->free_count++;
+ }
+ spin_unlock_irqrestore(&rxq->lock, flags);
++}
++
++/*
++ * this should be called while priv->lock is locked
++ */
++static void __iwl3945_rx_replenish(void *data)
++{
++ struct iwl3945_priv *priv = data;
++
++ iwl3945_rx_allocate(priv);
++ iwl3945_rx_queue_restock(priv);
++}
++
++
++void iwl3945_rx_replenish(void *data)
++{
++ struct iwl3945_priv *priv = data;
++ unsigned long flags;
++
++ iwl3945_rx_allocate(priv);
+
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_rx_queue_restock(priv);
++ iwl3945_rx_queue_restock(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+ /* Assumes that the skb field of the buffers in 'pool' is kept accurate.
+- * If an SKB has been detached, the POOL needs to have it's SKB set to NULL
++ * If an SKB has been detached, the POOL needs to have its SKB set to NULL
+ * This free routine walks the list of POOL entries and if SKB is set to
+ * non NULL it is unmapped and freed
+ */
+-void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
++static void iwl3945_rx_queue_free(struct iwl3945_priv *priv, struct iwl3945_rx_queue *rxq)
+ {
+ int i;
+ for (i = 0; i < RX_QUEUE_SIZE + RX_FREE_BUFFERS; i++) {
+@@ -4113,21 +4117,25 @@ void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
+ rxq->bd = NULL;
+ }
+
+-int iwl_rx_queue_alloc(struct iwl_priv *priv)
++int iwl3945_rx_queue_alloc(struct iwl3945_priv *priv)
+ {
+- struct iwl_rx_queue *rxq = &priv->rxq;
++ struct iwl3945_rx_queue *rxq = &priv->rxq;
+ struct pci_dev *dev = priv->pci_dev;
+ int i;
+
+ spin_lock_init(&rxq->lock);
+ INIT_LIST_HEAD(&rxq->rx_free);
+ INIT_LIST_HEAD(&rxq->rx_used);
++
++ /* Alloc the circular buffer of Read Buffer Descriptors (RBDs) */
+ rxq->bd = pci_alloc_consistent(dev, 4 * RX_QUEUE_SIZE, &rxq->dma_addr);
+ if (!rxq->bd)
+ return -ENOMEM;
++
+ /* Fill the rx_used queue with _all_ of the Rx buffers */
+ for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++)
+ list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
++
+ /* Set us so that we have processed and used all buffers, but have
+ * not restocked the Rx queue with fresh buffers */
+ rxq->read = rxq->write = 0;
+@@ -4136,7 +4144,7 @@ int iwl_rx_queue_alloc(struct iwl_priv *priv)
+ return 0;
+ }
+
+-void iwl_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
++void iwl3945_rx_queue_reset(struct iwl3945_priv *priv, struct iwl3945_rx_queue *rxq)
+ {
+ unsigned long flags;
+ int i;
+@@ -4183,7 +4191,7 @@ static u8 ratio2dB[100] = {
+ /* Calculates a relative dB value from a ratio of linear
+ * (i.e. not dB) signal levels.
+ * Conversion assumes that levels are voltages (20*log), not powers (10*log). */
+-int iwl_calc_db_from_ratio(int sig_ratio)
++int iwl3945_calc_db_from_ratio(int sig_ratio)
+ {
+ /* Anything above 1000:1 just report as 60 dB */
+ if (sig_ratio > 1000)
+@@ -4209,7 +4217,7 @@ int iwl_calc_db_from_ratio(int sig_ratio)
+ /* Calculate an indication of rx signal quality (a percentage, not dBm!).
+ * See http://www.ces.clemson.edu/linux/signal_quality.shtml for info
+ * about formulas used below. */
+-int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
++int iwl3945_calc_sig_qual(int rssi_dbm, int noise_dbm)
+ {
+ int sig_qual;
+ int degradation = PERFECT_RSSI - rssi_dbm;
+@@ -4244,24 +4252,30 @@ int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
+ }
+
+ /**
+- * iwl_rx_handle - Main entry function for receiving responses from the uCode
++ * iwl3945_rx_handle - Main entry function for receiving responses from uCode
+ *
+ * Uses the priv->rx_handlers callback function array to invoke
+ * the appropriate handlers, including command responses,
+ * frame-received notifications, and other notifications.
+ */
+-static void iwl_rx_handle(struct iwl_priv *priv)
++static void iwl3945_rx_handle(struct iwl3945_priv *priv)
+ {
+- struct iwl_rx_mem_buffer *rxb;
+- struct iwl_rx_packet *pkt;
+- struct iwl_rx_queue *rxq = &priv->rxq;
++ struct iwl3945_rx_mem_buffer *rxb;
++ struct iwl3945_rx_packet *pkt;
++ struct iwl3945_rx_queue *rxq = &priv->rxq;
+ u32 r, i;
+ int reclaim;
+ unsigned long flags;
++ u8 fill_rx = 0;
++ u32 count = 0;
+
+- r = iwl_hw_get_rx_read(priv);
++ /* uCode's read index (stored in shared DRAM) indicates the last Rx
++ * buffer that the driver may process (last buffer filled by ucode). */
++ r = iwl3945_hw_get_rx_read(priv);
+ i = rxq->read;
+
++ if (iwl3945_rx_queue_space(rxq) > (RX_QUEUE_SIZE / 2))
++ fill_rx = 1;
+ /* Rx interrupt, but nothing sent from uCode */
+ if (i == r)
+ IWL_DEBUG(IWL_DL_RX | IWL_DL_ISR, "r = %d, i = %d\n", r, i);
+@@ -4269,7 +4283,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ while (i != r) {
+ rxb = rxq->queue[i];
+
+- /* If an RXB doesn't have a queue slot associated with it
++ /* If an RXB doesn't have a Rx queue slot associated with it,
+ * then a bug has been introduced in the queue refilling
+ * routines -- catch it here */
+ BUG_ON(rxb == NULL);
+@@ -4279,7 +4293,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ pci_dma_sync_single_for_cpu(priv->pci_dev, rxb->dma_addr,
+ IWL_RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+- pkt = (struct iwl_rx_packet *)rxb->skb->data;
++ pkt = (struct iwl3945_rx_packet *)rxb->skb->data;
+
+ /* Reclaim a command buffer only if this packet is a response
+ * to a (driver-originated) command.
+@@ -4293,7 +4307,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+
+ /* Based on type of command response or notification,
+ * handle those that need handling via function in
+- * rx_handlers table. See iwl_setup_rx_handlers() */
++ * rx_handlers table. See iwl3945_setup_rx_handlers() */
+ if (priv->rx_handlers[pkt->hdr.cmd]) {
+ IWL_DEBUG(IWL_DL_HOST_COMMAND | IWL_DL_RX | IWL_DL_ISR,
+ "r = %d, i = %d, %s, 0x%02x\n", r, i,
+@@ -4308,11 +4322,11 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ }
+
+ if (reclaim) {
+- /* Invoke any callbacks, transfer the skb to caller,
+- * and fire off the (possibly) blocking iwl_send_cmd()
++ /* Invoke any callbacks, transfer the skb to caller, and
++ * fire off the (possibly) blocking iwl3945_send_cmd()
+ * as we reclaim the driver command queue */
+ if (rxb && rxb->skb)
+- iwl_tx_cmd_complete(priv, rxb);
++ iwl3945_tx_cmd_complete(priv, rxb);
+ else
+ IWL_WARNING("Claim null rxb?\n");
+ }
+@@ -4332,15 +4346,28 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ list_add_tail(&rxb->list, &priv->rxq.rx_used);
+ spin_unlock_irqrestore(&rxq->lock, flags);
+ i = (i + 1) & RX_QUEUE_MASK;
++ /* If there are a lot of unused frames,
++ * restock the Rx queue so ucode won't assert. */
++ if (fill_rx) {
++ count++;
++ if (count >= 8) {
++ priv->rxq.read = i;
++ __iwl3945_rx_replenish(priv);
++ count = 0;
++ }
++ }
+ }
+
+ /* Backtrack one entry */
+ priv->rxq.read = i;
+- iwl_rx_queue_restock(priv);
++ iwl3945_rx_queue_restock(priv);
+ }
+
+-int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq)
++/**
++ * iwl3945_tx_queue_update_write_ptr - Send new write index to hardware
++ */
++static int iwl3945_tx_queue_update_write_ptr(struct iwl3945_priv *priv,
++ struct iwl3945_tx_queue *txq)
+ {
+ u32 reg = 0;
+ int rc = 0;
+@@ -4354,41 +4381,41 @@ int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
+ /* wake up nic if it's powered down ...
+ * uCode will wake up, and interrupt us again, so next
+ * time we'll skip this part. */
+- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
++ reg = iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
+
+ if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
+ IWL_DEBUG_INFO("Requesting wakeup, GP1 = 0x%x\n", reg);
+- iwl_set_bit(priv, CSR_GP_CNTRL,
++ iwl3945_set_bit(priv, CSR_GP_CNTRL,
+ CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+ return rc;
+ }
+
+ /* restore this queue's parameters in nic hardware. */
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc)
+ return rc;
+- iwl_write_restricted(priv, HBUS_TARG_WRPTR,
+- txq->q.first_empty | (txq_id << 8));
+- iwl_release_restricted_access(priv);
++ iwl3945_write_direct32(priv, HBUS_TARG_WRPTR,
++ txq->q.write_ptr | (txq_id << 8));
++ iwl3945_release_nic_access(priv);
+
+ /* else not in power-save mode, uCode will never sleep when we're
+ * trying to tx (during RFKILL, we're not trying to tx). */
+ } else
+- iwl_write32(priv, HBUS_TARG_WRPTR,
+- txq->q.first_empty | (txq_id << 8));
++ iwl3945_write32(priv, HBUS_TARG_WRPTR,
++ txq->q.write_ptr | (txq_id << 8));
+
+ txq->need_update = 0;
+
+ return rc;
+ }
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
++#ifdef CONFIG_IWL3945_DEBUG
++static void iwl3945_print_rx_config_cmd(struct iwl3945_rxon_cmd *rxon)
+ {
+ DECLARE_MAC_BUF(mac);
+
+ IWL_DEBUG_RADIO("RX CONFIG:\n");
+- iwl_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
++ iwl3945_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
+ IWL_DEBUG_RADIO("u16 channel: 0x%x\n", le16_to_cpu(rxon->channel));
+ IWL_DEBUG_RADIO("u32 flags: 0x%08X\n", le32_to_cpu(rxon->flags));
+ IWL_DEBUG_RADIO("u32 filter_flags: 0x%08x\n",
+@@ -4405,24 +4432,24 @@ static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
+ }
+ #endif
+
+-static void iwl_enable_interrupts(struct iwl_priv *priv)
++static void iwl3945_enable_interrupts(struct iwl3945_priv *priv)
+ {
+ IWL_DEBUG_ISR("Enabling interrupts\n");
+ set_bit(STATUS_INT_ENABLED, &priv->status);
+- iwl_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
++ iwl3945_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
+ }
+
+-static inline void iwl_disable_interrupts(struct iwl_priv *priv)
++static inline void iwl3945_disable_interrupts(struct iwl3945_priv *priv)
+ {
+ clear_bit(STATUS_INT_ENABLED, &priv->status);
+
+ /* disable interrupts from uCode/NIC to host */
+- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
++ iwl3945_write32(priv, CSR_INT_MASK, 0x00000000);
+
+ /* acknowledge/clear/reset any interrupts still pending
+ * from uCode or flow handler (Rx/Tx DMA) */
+- iwl_write32(priv, CSR_INT, 0xffffffff);
+- iwl_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
++ iwl3945_write32(priv, CSR_INT, 0xffffffff);
++ iwl3945_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
+ IWL_DEBUG_ISR("Disabled interrupts\n");
+ }
+
+@@ -4449,7 +4476,7 @@ static const char *desc_lookup(int i)
+ #define ERROR_START_OFFSET (1 * sizeof(u32))
+ #define ERROR_ELEM_SIZE (7 * sizeof(u32))
+
+-static void iwl_dump_nic_error_log(struct iwl_priv *priv)
++static void iwl3945_dump_nic_error_log(struct iwl3945_priv *priv)
+ {
+ u32 i;
+ u32 desc, time, count, base, data1;
+@@ -4458,18 +4485,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+
+ base = le32_to_cpu(priv->card_alive.error_event_table_ptr);
+
+- if (!iwl_hw_valid_rtc_data_addr(base)) {
++ if (!iwl3945_hw_valid_rtc_data_addr(base)) {
+ IWL_ERROR("Not valid error log pointer 0x%08X\n", base);
+ return;
+ }
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc) {
+ IWL_WARNING("Can not read from adapter at this time.\n");
+ return;
+ }
+
+- count = iwl_read_restricted_mem(priv, base);
++ count = iwl3945_read_targ_mem(priv, base);
+
+ if (ERROR_START_OFFSET <= count * ERROR_ELEM_SIZE) {
+ IWL_ERROR("Start IWL Error Log Dump:\n");
+@@ -4482,19 +4509,19 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+ for (i = ERROR_START_OFFSET;
+ i < (count * ERROR_ELEM_SIZE) + ERROR_START_OFFSET;
+ i += ERROR_ELEM_SIZE) {
+- desc = iwl_read_restricted_mem(priv, base + i);
++ desc = iwl3945_read_targ_mem(priv, base + i);
+ time =
+- iwl_read_restricted_mem(priv, base + i + 1 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 1 * sizeof(u32));
+ blink1 =
+- iwl_read_restricted_mem(priv, base + i + 2 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 2 * sizeof(u32));
+ blink2 =
+- iwl_read_restricted_mem(priv, base + i + 3 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 3 * sizeof(u32));
+ ilink1 =
+- iwl_read_restricted_mem(priv, base + i + 4 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 4 * sizeof(u32));
+ ilink2 =
+- iwl_read_restricted_mem(priv, base + i + 5 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 5 * sizeof(u32));
+ data1 =
+- iwl_read_restricted_mem(priv, base + i + 6 * sizeof(u32));
++ iwl3945_read_targ_mem(priv, base + i + 6 * sizeof(u32));
+
+ IWL_ERROR
+ ("%-13s (#%d) %010u 0x%05X 0x%05X 0x%05X 0x%05X %u\n\n",
+@@ -4502,18 +4529,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+ ilink1, ilink2, data1);
+ }
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ }
+
+-#define EVENT_START_OFFSET (4 * sizeof(u32))
++#define EVENT_START_OFFSET (6 * sizeof(u32))
+
+ /**
+- * iwl_print_event_log - Dump error event log to syslog
++ * iwl3945_print_event_log - Dump error event log to syslog
+ *
+- * NOTE: Must be called with iwl_grab_restricted_access() already obtained!
++ * NOTE: Must be called with iwl3945_grab_nic_access() already obtained!
+ */
+-static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
++static void iwl3945_print_event_log(struct iwl3945_priv *priv, u32 start_idx,
+ u32 num_events, u32 mode)
+ {
+ u32 i;
+@@ -4537,21 +4564,21 @@ static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
+ /* "time" is actually "data" for mode 0 (no timestamp).
+ * place event id # at far right for easier visual parsing. */
+ for (i = 0; i < num_events; i++) {
+- ev = iwl_read_restricted_mem(priv, ptr);
++ ev = iwl3945_read_targ_mem(priv, ptr);
+ ptr += sizeof(u32);
+- time = iwl_read_restricted_mem(priv, ptr);
++ time = iwl3945_read_targ_mem(priv, ptr);
+ ptr += sizeof(u32);
+ if (mode == 0)
+ IWL_ERROR("0x%08x\t%04u\n", time, ev); /* data, ev */
+ else {
+- data = iwl_read_restricted_mem(priv, ptr);
++ data = iwl3945_read_targ_mem(priv, ptr);
+ ptr += sizeof(u32);
+ IWL_ERROR("%010u\t0x%08x\t%04u\n", time, data, ev);
+ }
+ }
+ }
+
+-static void iwl_dump_nic_event_log(struct iwl_priv *priv)
++static void iwl3945_dump_nic_event_log(struct iwl3945_priv *priv)
+ {
+ int rc;
+ u32 base; /* SRAM byte address of event log header */
+@@ -4562,29 +4589,29 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
+ u32 size; /* # entries that we'll print */
+
+ base = le32_to_cpu(priv->card_alive.log_event_table_ptr);
+- if (!iwl_hw_valid_rtc_data_addr(base)) {
++ if (!iwl3945_hw_valid_rtc_data_addr(base)) {
+ IWL_ERROR("Invalid event log pointer 0x%08X\n", base);
+ return;
+ }
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc) {
+ IWL_WARNING("Can not read from adapter at this time.\n");
+ return;
+ }
+
+ /* event log header */
+- capacity = iwl_read_restricted_mem(priv, base);
+- mode = iwl_read_restricted_mem(priv, base + (1 * sizeof(u32)));
+- num_wraps = iwl_read_restricted_mem(priv, base + (2 * sizeof(u32)));
+- next_entry = iwl_read_restricted_mem(priv, base + (3 * sizeof(u32)));
++ capacity = iwl3945_read_targ_mem(priv, base);
++ mode = iwl3945_read_targ_mem(priv, base + (1 * sizeof(u32)));
++ num_wraps = iwl3945_read_targ_mem(priv, base + (2 * sizeof(u32)));
++ next_entry = iwl3945_read_targ_mem(priv, base + (3 * sizeof(u32)));
+
+ size = num_wraps ? capacity : next_entry;
+
+ /* bail out if nothing in log */
+ if (size == 0) {
+ IWL_ERROR("Start IWL Event Log Dump: nothing in log\n");
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+ return;
+ }
+
+@@ -4594,31 +4621,31 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
+ /* if uCode has wrapped back to top of log, start at the oldest entry,
+ * i.e the next one that uCode would fill. */
+ if (num_wraps)
+- iwl_print_event_log(priv, next_entry,
++ iwl3945_print_event_log(priv, next_entry,
+ capacity - next_entry, mode);
+
+ /* (then/else) start at top of log */
+- iwl_print_event_log(priv, 0, next_entry, mode);
++ iwl3945_print_event_log(priv, 0, next_entry, mode);
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+ }
+
+ /**
+- * iwl_irq_handle_error - called for HW or SW error interrupt from card
++ * iwl3945_irq_handle_error - called for HW or SW error interrupt from card
+ */
+-static void iwl_irq_handle_error(struct iwl_priv *priv)
++static void iwl3945_irq_handle_error(struct iwl3945_priv *priv)
+ {
+- /* Set the FW error flag -- cleared on iwl_down */
++ /* Set the FW error flag -- cleared on iwl3945_down */
+ set_bit(STATUS_FW_ERROR, &priv->status);
+
+ /* Cancel currently queued command. */
+ clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (iwl_debug_level & IWL_DL_FW_ERRORS) {
+- iwl_dump_nic_error_log(priv);
+- iwl_dump_nic_event_log(priv);
+- iwl_print_rx_config_cmd(&priv->staging_rxon);
++#ifdef CONFIG_IWL3945_DEBUG
++ if (iwl3945_debug_level & IWL_DL_FW_ERRORS) {
++ iwl3945_dump_nic_error_log(priv);
++ iwl3945_dump_nic_event_log(priv);
++ iwl3945_print_rx_config_cmd(&priv->staging_rxon);
+ }
+ #endif
+
+@@ -4632,7 +4659,7 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
+ IWL_DEBUG(IWL_DL_INFO | IWL_DL_FW_ERRORS,
+ "Restarting adapter due to uCode error.\n");
+
+- if (iwl_is_associated(priv)) {
++ if (iwl3945_is_associated(priv)) {
+ memcpy(&priv->recovery_rxon, &priv->active_rxon,
+ sizeof(priv->recovery_rxon));
+ priv->error_recovering = 1;
+@@ -4641,16 +4668,16 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
+ }
+ }
+
+-static void iwl_error_recovery(struct iwl_priv *priv)
++static void iwl3945_error_recovery(struct iwl3945_priv *priv)
+ {
+ unsigned long flags;
+
+ memcpy(&priv->staging_rxon, &priv->recovery_rxon,
+ sizeof(priv->staging_rxon));
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+- iwl_add_station(priv, priv->bssid, 1, 0);
++ iwl3945_add_station(priv, priv->bssid, 1, 0);
+
+ spin_lock_irqsave(&priv->lock, flags);
+ priv->assoc_id = le16_to_cpu(priv->staging_rxon.assoc_id);
+@@ -4658,12 +4685,12 @@ static void iwl_error_recovery(struct iwl_priv *priv)
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+-static void iwl_irq_tasklet(struct iwl_priv *priv)
++static void iwl3945_irq_tasklet(struct iwl3945_priv *priv)
+ {
+ u32 inta, handled = 0;
+ u32 inta_fh;
+ unsigned long flags;
+-#ifdef CONFIG_IWLWIFI_DEBUG
++#ifdef CONFIG_IWL3945_DEBUG
+ u32 inta_mask;
+ #endif
+
+@@ -4672,18 +4699,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ /* Ack/clear/reset pending uCode interrupts.
+ * Note: Some bits in CSR_INT are "OR" of bits in CSR_FH_INT_STATUS,
+ * and will clear only when CSR_FH_INT_STATUS gets cleared. */
+- inta = iwl_read32(priv, CSR_INT);
+- iwl_write32(priv, CSR_INT, inta);
++ inta = iwl3945_read32(priv, CSR_INT);
++ iwl3945_write32(priv, CSR_INT, inta);
+
+ /* Ack/clear/reset pending flow-handler (DMA) interrupts.
+ * Any new interrupts that happen after this, either while we're
+ * in this tasklet, or later, will show up in next ISR/tasklet. */
+- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
+- iwl_write32(priv, CSR_FH_INT_STATUS, inta_fh);
++ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
++ iwl3945_write32(priv, CSR_FH_INT_STATUS, inta_fh);
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (iwl_debug_level & IWL_DL_ISR) {
+- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
++#ifdef CONFIG_IWL3945_DEBUG
++ if (iwl3945_debug_level & IWL_DL_ISR) {
++ /* just for debug */
++ inta_mask = iwl3945_read32(priv, CSR_INT_MASK);
+ IWL_DEBUG_ISR("inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
+ inta, inta_mask, inta_fh);
+ }
+@@ -4703,9 +4731,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ IWL_ERROR("Microcode HW error detected. Restarting.\n");
+
+ /* Tell the device to stop sending interrupts */
+- iwl_disable_interrupts(priv);
++ iwl3945_disable_interrupts(priv);
+
+- iwl_irq_handle_error(priv);
++ iwl3945_irq_handle_error(priv);
+
+ handled |= CSR_INT_BIT_HW_ERR;
+
+@@ -4714,8 +4742,8 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ return;
+ }
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (iwl_debug_level & (IWL_DL_ISR)) {
++#ifdef CONFIG_IWL3945_DEBUG
++ if (iwl3945_debug_level & (IWL_DL_ISR)) {
+ /* NIC fires this, but we don't use it, redundant with WAKEUP */
+ if (inta & CSR_INT_BIT_MAC_CLK_ACTV)
+ IWL_DEBUG_ISR("Microcode started or stopped.\n");
+@@ -4731,7 +4759,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ /* HW RF KILL switch toggled (4965 only) */
+ if (inta & CSR_INT_BIT_RF_KILL) {
+ int hw_rf_kill = 0;
+- if (!(iwl_read32(priv, CSR_GP_CNTRL) &
++ if (!(iwl3945_read32(priv, CSR_GP_CNTRL) &
+ CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
+ hw_rf_kill = 1;
+
+@@ -4761,20 +4789,20 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ if (inta & CSR_INT_BIT_SW_ERR) {
+ IWL_ERROR("Microcode SW error detected. Restarting 0x%X.\n",
+ inta);
+- iwl_irq_handle_error(priv);
++ iwl3945_irq_handle_error(priv);
+ handled |= CSR_INT_BIT_SW_ERR;
+ }
+
+ /* uCode wakes up after power-down sleep */
+ if (inta & CSR_INT_BIT_WAKEUP) {
+ IWL_DEBUG_ISR("Wakeup interrupt\n");
+- iwl_rx_queue_update_write_ptr(priv, &priv->rxq);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[0]);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[1]);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[2]);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[3]);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[4]);
+- iwl_tx_queue_update_write_ptr(priv, &priv->txq[5]);
++ iwl3945_rx_queue_update_write_ptr(priv, &priv->rxq);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[0]);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[1]);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[2]);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[3]);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[4]);
++ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[5]);
+
+ handled |= CSR_INT_BIT_WAKEUP;
+ }
+@@ -4783,19 +4811,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ * Rx "responses" (frame-received notification), and other
+ * notifications from uCode come through here*/
+ if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) {
+- iwl_rx_handle(priv);
++ iwl3945_rx_handle(priv);
+ handled |= (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX);
+ }
+
+ if (inta & CSR_INT_BIT_FH_TX) {
+ IWL_DEBUG_ISR("Tx interrupt\n");
+
+- iwl_write32(priv, CSR_FH_INT_STATUS, (1 << 6));
+- if (!iwl_grab_restricted_access(priv)) {
+- iwl_write_restricted(priv,
++ iwl3945_write32(priv, CSR_FH_INT_STATUS, (1 << 6));
++ if (!iwl3945_grab_nic_access(priv)) {
++ iwl3945_write_direct32(priv,
+ FH_TCSR_CREDIT
+ (ALM_FH_SRVC_CHNL), 0x0);
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+ }
+ handled |= CSR_INT_BIT_FH_TX;
+ }
+@@ -4810,13 +4838,13 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ }
+
+ /* Re-enable all interrupts */
+- iwl_enable_interrupts(priv);
++ iwl3945_enable_interrupts(priv);
+
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- if (iwl_debug_level & (IWL_DL_ISR)) {
+- inta = iwl_read32(priv, CSR_INT);
+- inta_mask = iwl_read32(priv, CSR_INT_MASK);
+- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
++#ifdef CONFIG_IWL3945_DEBUG
++ if (iwl3945_debug_level & (IWL_DL_ISR)) {
++ inta = iwl3945_read32(priv, CSR_INT);
++ inta_mask = iwl3945_read32(priv, CSR_INT_MASK);
++ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
+ IWL_DEBUG_ISR("End inta 0x%08x, enabled 0x%08x, fh 0x%08x, "
+ "flags 0x%08lx\n", inta, inta_mask, inta_fh, flags);
+ }
+@@ -4824,9 +4852,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+-static irqreturn_t iwl_isr(int irq, void *data)
++static irqreturn_t iwl3945_isr(int irq, void *data)
+ {
+- struct iwl_priv *priv = data;
++ struct iwl3945_priv *priv = data;
+ u32 inta, inta_mask;
+ u32 inta_fh;
+ if (!priv)
+@@ -4838,12 +4866,12 @@ static irqreturn_t iwl_isr(int irq, void *data)
+ * back-to-back ISRs and sporadic interrupts from our NIC.
+ * If we have something to service, the tasklet will re-enable ints.
+ * If we *don't* have something, we'll re-enable before leaving here. */
+- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
+- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
++ inta_mask = iwl3945_read32(priv, CSR_INT_MASK); /* just for debug */
++ iwl3945_write32(priv, CSR_INT_MASK, 0x00000000);
+
+ /* Discover which interrupts are active/pending */
+- inta = iwl_read32(priv, CSR_INT);
+- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
++ inta = iwl3945_read32(priv, CSR_INT);
++ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
+
+ /* Ignore interrupt if there's nothing in NIC to service.
+ * This may be due to IRQ shared with another device,
+@@ -4862,7 +4890,7 @@ static irqreturn_t iwl_isr(int irq, void *data)
+ IWL_DEBUG_ISR("ISR inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
+ inta, inta_mask, inta_fh);
+
+- /* iwl_irq_tasklet() will service interrupts and re-enable them */
++ /* iwl3945_irq_tasklet() will service interrupts and re-enable them */
+ tasklet_schedule(&priv->irq_tasklet);
+ unplugged:
+ spin_unlock(&priv->lock);
+@@ -4871,18 +4899,18 @@ unplugged:
+
+ none:
+ /* re-enable interrupts here since we don't have anything to service. */
+- iwl_enable_interrupts(priv);
++ iwl3945_enable_interrupts(priv);
+ spin_unlock(&priv->lock);
+ return IRQ_NONE;
+ }
+
+ /************************** EEPROM BANDS ****************************
+ *
+- * The iwl_eeprom_band definitions below provide the mapping from the
++ * The iwl3945_eeprom_band definitions below provide the mapping from the
+ * EEPROM contents to the specific channel number supported for each
+ * band.
+ *
+- * For example, iwl_priv->eeprom.band_3_channels[4] from the band_3
++ * For example, iwl3945_priv->eeprom.band_3_channels[4] from the band_3
+ * definition below maps to physical channel 42 in the 5.2GHz spectrum.
+ * The specific geography and calibration information for that channel
+ * is contained in the eeprom map itself.
+@@ -4908,58 +4936,58 @@ unplugged:
+ *********************************************************************/
+
+ /* 2.4 GHz */
+-static const u8 iwl_eeprom_band_1[14] = {
++static const u8 iwl3945_eeprom_band_1[14] = {
+ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
+ };
+
+ /* 5.2 GHz bands */
+-static const u8 iwl_eeprom_band_2[] = {
++static const u8 iwl3945_eeprom_band_2[] = { /* 4915-5080MHz */
+ 183, 184, 185, 187, 188, 189, 192, 196, 7, 8, 11, 12, 16
+ };
+
+-static const u8 iwl_eeprom_band_3[] = { /* 5205-5320MHz */
++static const u8 iwl3945_eeprom_band_3[] = { /* 5170-5320MHz */
+ 34, 36, 38, 40, 42, 44, 46, 48, 52, 56, 60, 64
+ };
+
+-static const u8 iwl_eeprom_band_4[] = { /* 5500-5700MHz */
++static const u8 iwl3945_eeprom_band_4[] = { /* 5500-5700MHz */
+ 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140
+ };
+
+-static const u8 iwl_eeprom_band_5[] = { /* 5725-5825MHz */
++static const u8 iwl3945_eeprom_band_5[] = { /* 5725-5825MHz */
+ 145, 149, 153, 157, 161, 165
+ };
+
+-static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
++static void iwl3945_init_band_reference(const struct iwl3945_priv *priv, int band,
+ int *eeprom_ch_count,
+- const struct iwl_eeprom_channel
++ const struct iwl3945_eeprom_channel
+ **eeprom_ch_info,
+ const u8 **eeprom_ch_index)
+ {
+ switch (band) {
+ case 1: /* 2.4GHz band */
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_1);
++ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_1);
+ *eeprom_ch_info = priv->eeprom.band_1_channels;
+- *eeprom_ch_index = iwl_eeprom_band_1;
++ *eeprom_ch_index = iwl3945_eeprom_band_1;
+ break;
+- case 2: /* 5.2GHz band */
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_2);
++ case 2: /* 4.9GHz band */
++ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_2);
+ *eeprom_ch_info = priv->eeprom.band_2_channels;
+- *eeprom_ch_index = iwl_eeprom_band_2;
++ *eeprom_ch_index = iwl3945_eeprom_band_2;
+ break;
+ case 3: /* 5.2GHz band */
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_3);
++ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_3);
+ *eeprom_ch_info = priv->eeprom.band_3_channels;
+- *eeprom_ch_index = iwl_eeprom_band_3;
++ *eeprom_ch_index = iwl3945_eeprom_band_3;
+ break;
+- case 4: /* 5.2GHz band */
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_4);
++ case 4: /* 5.5GHz band */
++ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_4);
+ *eeprom_ch_info = priv->eeprom.band_4_channels;
+- *eeprom_ch_index = iwl_eeprom_band_4;
++ *eeprom_ch_index = iwl3945_eeprom_band_4;
+ break;
+- case 5: /* 5.2GHz band */
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_5);
++ case 5: /* 5.7GHz band */
++ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_5);
+ *eeprom_ch_info = priv->eeprom.band_5_channels;
+- *eeprom_ch_index = iwl_eeprom_band_5;
++ *eeprom_ch_index = iwl3945_eeprom_band_5;
+ break;
+ default:
+ BUG();
+@@ -4967,7 +4995,12 @@ static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
+ }
+ }
+
+-const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
++/**
++ * iwl3945_get_channel_info - Find driver's private channel info
++ *
++ * Based on band and channel number.
++ */
++const struct iwl3945_channel_info *iwl3945_get_channel_info(const struct iwl3945_priv *priv,
+ int phymode, u16 channel)
+ {
+ int i;
+@@ -4994,13 +5027,16 @@ const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
+ #define CHECK_AND_PRINT(x) ((eeprom_ch_info[ch].flags & EEPROM_CHANNEL_##x) \
+ ? # x " " : "")
+
+-static int iwl_init_channel_map(struct iwl_priv *priv)
++/**
++ * iwl3945_init_channel_map - Set up driver's info for all possible channels
++ */
++static int iwl3945_init_channel_map(struct iwl3945_priv *priv)
+ {
+ int eeprom_ch_count = 0;
+ const u8 *eeprom_ch_index = NULL;
+- const struct iwl_eeprom_channel *eeprom_ch_info = NULL;
++ const struct iwl3945_eeprom_channel *eeprom_ch_info = NULL;
+ int band, ch;
+- struct iwl_channel_info *ch_info;
++ struct iwl3945_channel_info *ch_info;
+
+ if (priv->channel_count) {
+ IWL_DEBUG_INFO("Channel map already initialized.\n");
+@@ -5016,15 +5052,15 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+ IWL_DEBUG_INFO("Initializing regulatory info from EEPROM\n");
+
+ priv->channel_count =
+- ARRAY_SIZE(iwl_eeprom_band_1) +
+- ARRAY_SIZE(iwl_eeprom_band_2) +
+- ARRAY_SIZE(iwl_eeprom_band_3) +
+- ARRAY_SIZE(iwl_eeprom_band_4) +
+- ARRAY_SIZE(iwl_eeprom_band_5);
++ ARRAY_SIZE(iwl3945_eeprom_band_1) +
++ ARRAY_SIZE(iwl3945_eeprom_band_2) +
++ ARRAY_SIZE(iwl3945_eeprom_band_3) +
++ ARRAY_SIZE(iwl3945_eeprom_band_4) +
++ ARRAY_SIZE(iwl3945_eeprom_band_5);
+
+ IWL_DEBUG_INFO("Parsing data for %d channels.\n", priv->channel_count);
+
+- priv->channel_info = kzalloc(sizeof(struct iwl_channel_info) *
++ priv->channel_info = kzalloc(sizeof(struct iwl3945_channel_info) *
+ priv->channel_count, GFP_KERNEL);
+ if (!priv->channel_info) {
+ IWL_ERROR("Could not allocate channel_info\n");
+@@ -5039,7 +5075,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+ * what just in the EEPROM) */
+ for (band = 1; band <= 5; band++) {
+
+- iwl_init_band_reference(priv, band, &eeprom_ch_count,
++ iwl3945_init_band_reference(priv, band, &eeprom_ch_count,
+ &eeprom_ch_info, &eeprom_ch_index);
+
+ /* Loop through each band adding each of the channels */
+@@ -5103,6 +5139,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+ }
+ }
+
++ /* Set up txpower settings in driver for all channels */
+ if (iwl3945_txpower_set_from_eeprom(priv))
+ return -EIO;
+
+@@ -5132,7 +5169,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+ #define IWL_PASSIVE_DWELL_BASE (100)
+ #define IWL_CHANNEL_TUNE_TIME 5
+
+-static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
++static inline u16 iwl3945_get_active_dwell_time(struct iwl3945_priv *priv, int phymode)
+ {
+ if (phymode == MODE_IEEE80211A)
+ return IWL_ACTIVE_DWELL_TIME_52;
+@@ -5140,14 +5177,14 @@ static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
+ return IWL_ACTIVE_DWELL_TIME_24;
+ }
+
+-static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
++static u16 iwl3945_get_passive_dwell_time(struct iwl3945_priv *priv, int phymode)
+ {
+- u16 active = iwl_get_active_dwell_time(priv, phymode);
++ u16 active = iwl3945_get_active_dwell_time(priv, phymode);
+ u16 passive = (phymode != MODE_IEEE80211A) ?
+ IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_24 :
+ IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_52;
+
+- if (iwl_is_associated(priv)) {
++ if (iwl3945_is_associated(priv)) {
+ /* If we're associated, we clamp the maximum passive
+ * dwell time to be 98% of the beacon interval (minus
+ * 2 * channel tune time) */
+@@ -5163,30 +5200,30 @@ static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
+ return passive;
+ }
+
+-static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
++static int iwl3945_get_channels_for_scan(struct iwl3945_priv *priv, int phymode,
+ u8 is_active, u8 direct_mask,
+- struct iwl_scan_channel *scan_ch)
++ struct iwl3945_scan_channel *scan_ch)
+ {
+ const struct ieee80211_channel *channels = NULL;
+ const struct ieee80211_hw_mode *hw_mode;
+- const struct iwl_channel_info *ch_info;
++ const struct iwl3945_channel_info *ch_info;
+ u16 passive_dwell = 0;
+ u16 active_dwell = 0;
+ int added, i;
+
+- hw_mode = iwl_get_hw_mode(priv, phymode);
++ hw_mode = iwl3945_get_hw_mode(priv, phymode);
+ if (!hw_mode)
+ return 0;
+
+ channels = hw_mode->channels;
+
+- active_dwell = iwl_get_active_dwell_time(priv, phymode);
+- passive_dwell = iwl_get_passive_dwell_time(priv, phymode);
++ active_dwell = iwl3945_get_active_dwell_time(priv, phymode);
++ passive_dwell = iwl3945_get_passive_dwell_time(priv, phymode);
+
+ for (i = 0, added = 0; i < hw_mode->num_channels; i++) {
+ if (channels[i].chan ==
+ le16_to_cpu(priv->active_rxon.channel)) {
+- if (iwl_is_associated(priv)) {
++ if (iwl3945_is_associated(priv)) {
+ IWL_DEBUG_SCAN
+ ("Skipping current channel %d\n",
+ le16_to_cpu(priv->active_rxon.channel));
+@@ -5197,7 +5234,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+
+ scan_ch->channel = channels[i].chan;
+
+- ch_info = iwl_get_channel_info(priv, phymode, scan_ch->channel);
++ ch_info = iwl3945_get_channel_info(priv, phymode, scan_ch->channel);
+ if (!is_channel_valid(ch_info)) {
+ IWL_DEBUG_SCAN("Channel %d is INVALID for this SKU.\n",
+ scan_ch->channel);
+@@ -5219,7 +5256,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+ scan_ch->active_dwell = cpu_to_le16(active_dwell);
+ scan_ch->passive_dwell = cpu_to_le16(passive_dwell);
+
+- /* Set power levels to defaults */
++ /* Set txpower levels to defaults */
+ scan_ch->tpc.dsp_atten = 110;
+ /* scan_pwr_info->tpc.dsp_atten; */
+
+@@ -5229,8 +5266,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+ else {
+ scan_ch->tpc.tx_gain = ((1 << 5) | (5 << 3));
+ /* NOTE: if we were doing 6Mb OFDM for scans we'd use
+- * power level
+- scan_ch->tpc.tx_gain = ((1<<5) | (2 << 3)) | 3;
++ * power level:
++ * scan_ch->tpc.tx_gain = ((1 << 5) | (2 << 3)) | 3;
+ */
+ }
+
+@@ -5248,7 +5285,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+ return added;
+ }
+
+-static void iwl_reset_channel_flag(struct iwl_priv *priv)
++static void iwl3945_reset_channel_flag(struct iwl3945_priv *priv)
+ {
+ int i, j;
+ for (i = 0; i < 3; i++) {
+@@ -5258,13 +5295,13 @@ static void iwl_reset_channel_flag(struct iwl_priv *priv)
+ }
+ }
+
+-static void iwl_init_hw_rates(struct iwl_priv *priv,
++static void iwl3945_init_hw_rates(struct iwl3945_priv *priv,
+ struct ieee80211_rate *rates)
+ {
+ int i;
+
+ for (i = 0; i < IWL_RATE_COUNT; i++) {
+- rates[i].rate = iwl_rates[i].ieee * 5;
++ rates[i].rate = iwl3945_rates[i].ieee * 5;
+ rates[i].val = i; /* Rate scaling will work on indexes */
+ rates[i].val2 = i;
+ rates[i].flags = IEEE80211_RATE_SUPPORTED;
+@@ -5276,7 +5313,7 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
+ * If CCK 1M then set rate flag to CCK else CCK_2
+ * which is CCK | PREAMBLE2
+ */
+- rates[i].flags |= (iwl_rates[i].plcp == 10) ?
++ rates[i].flags |= (iwl3945_rates[i].plcp == 10) ?
+ IEEE80211_RATE_CCK : IEEE80211_RATE_CCK_2;
+ }
+
+@@ -5287,11 +5324,11 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
+ }
+
+ /**
+- * iwl_init_geos - Initialize mac80211's geo/channel info based from eeprom
++ * iwl3945_init_geos - Initialize mac80211's geo/channel info based from eeprom
+ */
+-static int iwl_init_geos(struct iwl_priv *priv)
++static int iwl3945_init_geos(struct iwl3945_priv *priv)
+ {
+- struct iwl_channel_info *ch;
++ struct iwl3945_channel_info *ch;
+ struct ieee80211_hw_mode *modes;
+ struct ieee80211_channel *channels;
+ struct ieee80211_channel *geo_ch;
+@@ -5337,7 +5374,7 @@ static int iwl_init_geos(struct iwl_priv *priv)
+
+ /* 5.2GHz channels start after the 2.4GHz channels */
+ modes[A].mode = MODE_IEEE80211A;
+- modes[A].channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)];
++ modes[A].channels = &channels[ARRAY_SIZE(iwl3945_eeprom_band_1)];
+ modes[A].rates = &rates[4];
+ modes[A].num_rates = 8; /* just OFDM */
+ modes[A].num_channels = 0;
+@@ -5357,7 +5394,7 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ priv->ieee_channels = channels;
+ priv->ieee_rates = rates;
+
+- iwl_init_hw_rates(priv, rates);
++ iwl3945_init_hw_rates(priv, rates);
+
+ for (i = 0, geo_ch = channels; i < priv->channel_count; i++) {
+ ch = &priv->channel_info[i];
+@@ -5440,57 +5477,21 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ *
+ ******************************************************************************/
+
+-static void iwl_dealloc_ucode_pci(struct iwl_priv *priv)
++static void iwl3945_dealloc_ucode_pci(struct iwl3945_priv *priv)
+ {
+- if (priv->ucode_code.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_code.len,
+- priv->ucode_code.v_addr,
+- priv->ucode_code.p_addr);
+- priv->ucode_code.v_addr = NULL;
+- }
+- if (priv->ucode_data.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_data.len,
+- priv->ucode_data.v_addr,
+- priv->ucode_data.p_addr);
+- priv->ucode_data.v_addr = NULL;
+- }
+- if (priv->ucode_data_backup.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_data_backup.len,
+- priv->ucode_data_backup.v_addr,
+- priv->ucode_data_backup.p_addr);
+- priv->ucode_data_backup.v_addr = NULL;
+- }
+- if (priv->ucode_init.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_init.len,
+- priv->ucode_init.v_addr,
+- priv->ucode_init.p_addr);
+- priv->ucode_init.v_addr = NULL;
+- }
+- if (priv->ucode_init_data.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_init_data.len,
+- priv->ucode_init_data.v_addr,
+- priv->ucode_init_data.p_addr);
+- priv->ucode_init_data.v_addr = NULL;
+- }
+- if (priv->ucode_boot.v_addr != NULL) {
+- pci_free_consistent(priv->pci_dev,
+- priv->ucode_boot.len,
+- priv->ucode_boot.v_addr,
+- priv->ucode_boot.p_addr);
+- priv->ucode_boot.v_addr = NULL;
+- }
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_code);
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_data);
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_data_backup);
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_init);
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_init_data);
++ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_boot);
+ }
+
+ /**
+- * iwl_verify_inst_full - verify runtime uCode image in card vs. host,
++ * iwl3945_verify_inst_full - verify runtime uCode image in card vs. host,
+ * looking at all data.
+ */
+-static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
++static int iwl3945_verify_inst_full(struct iwl3945_priv *priv, __le32 * image, u32 len)
+ {
+ u32 val;
+ u32 save_len = len;
+@@ -5499,18 +5500,18 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
+
+ IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc)
+ return rc;
+
+- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
++ iwl3945_write_direct32(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
+
+ errcnt = 0;
+ for (; len > 0; len -= sizeof(u32), image++) {
+ /* read data comes through single port, auto-incr addr */
+ /* NOTE: Use the debugless read so we don't flood kernel log
+ * if IWL_DL_IO is set */
+- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
++ val = _iwl3945_read_direct32(priv, HBUS_TARG_MEM_RDAT);
+ if (val != le32_to_cpu(*image)) {
+ IWL_ERROR("uCode INST section is invalid at "
+ "offset 0x%x, is 0x%x, s/b 0x%x\n",
+@@ -5522,22 +5523,21 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
+ }
+ }
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ if (!errcnt)
+- IWL_DEBUG_INFO
+- ("ucode image in INSTRUCTION memory is good\n");
++ IWL_DEBUG_INFO("ucode image in INSTRUCTION memory is good\n");
+
+ return rc;
+ }
+
+
+ /**
+- * iwl_verify_inst_sparse - verify runtime uCode image in card vs. host,
++ * iwl3945_verify_inst_sparse - verify runtime uCode image in card vs. host,
+ * using sample data 100 bytes apart. If these sample points are good,
+ * it's a pretty good bet that everything between them is good, too.
+ */
+-static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
++static int iwl3945_verify_inst_sparse(struct iwl3945_priv *priv, __le32 *image, u32 len)
+ {
+ u32 val;
+ int rc = 0;
+@@ -5546,7 +5546,7 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+
+ IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc)
+ return rc;
+
+@@ -5554,9 +5554,9 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+ /* read data comes through single port, auto-incr addr */
+ /* NOTE: Use the debugless read so we don't flood kernel log
+ * if IWL_DL_IO is set */
+- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR,
++ iwl3945_write_direct32(priv, HBUS_TARG_MEM_RADDR,
+ i + RTC_INST_LOWER_BOUND);
+- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
++ val = _iwl3945_read_direct32(priv, HBUS_TARG_MEM_RDAT);
+ if (val != le32_to_cpu(*image)) {
+ #if 0 /* Enable this if you want to see details */
+ IWL_ERROR("uCode INST section is invalid at "
+@@ -5570,17 +5570,17 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+ }
+ }
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ return rc;
+ }
+
+
+ /**
+- * iwl_verify_ucode - determine which instruction image is in SRAM,
++ * iwl3945_verify_ucode - determine which instruction image is in SRAM,
+ * and verify its contents
+ */
+-static int iwl_verify_ucode(struct iwl_priv *priv)
++static int iwl3945_verify_ucode(struct iwl3945_priv *priv)
+ {
+ __le32 *image;
+ u32 len;
+@@ -5589,7 +5589,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+ /* Try bootstrap */
+ image = (__le32 *)priv->ucode_boot.v_addr;
+ len = priv->ucode_boot.len;
+- rc = iwl_verify_inst_sparse(priv, image, len);
++ rc = iwl3945_verify_inst_sparse(priv, image, len);
+ if (rc == 0) {
+ IWL_DEBUG_INFO("Bootstrap uCode is good in inst SRAM\n");
+ return 0;
+@@ -5598,7 +5598,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+ /* Try initialize */
+ image = (__le32 *)priv->ucode_init.v_addr;
+ len = priv->ucode_init.len;
+- rc = iwl_verify_inst_sparse(priv, image, len);
++ rc = iwl3945_verify_inst_sparse(priv, image, len);
+ if (rc == 0) {
+ IWL_DEBUG_INFO("Initialize uCode is good in inst SRAM\n");
+ return 0;
+@@ -5607,7 +5607,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+ /* Try runtime/protocol */
+ image = (__le32 *)priv->ucode_code.v_addr;
+ len = priv->ucode_code.len;
+- rc = iwl_verify_inst_sparse(priv, image, len);
++ rc = iwl3945_verify_inst_sparse(priv, image, len);
+ if (rc == 0) {
+ IWL_DEBUG_INFO("Runtime uCode is good in inst SRAM\n");
+ return 0;
+@@ -5615,18 +5615,19 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+
+ IWL_ERROR("NO VALID UCODE IMAGE IN INSTRUCTION SRAM!!\n");
+
+- /* Show first several data entries in instruction SRAM.
+- * Selection of bootstrap image is arbitrary. */
++ /* Since nothing seems to match, show first several data entries in
++ * instruction SRAM, so maybe visual inspection will give a clue.
++ * Selection of bootstrap image (vs. other images) is arbitrary. */
+ image = (__le32 *)priv->ucode_boot.v_addr;
+ len = priv->ucode_boot.len;
+- rc = iwl_verify_inst_full(priv, image, len);
++ rc = iwl3945_verify_inst_full(priv, image, len);
+
+ return rc;
+ }
+
+
+ /* check contents of special bootstrap uCode SRAM */
+-static int iwl_verify_bsm(struct iwl_priv *priv)
++static int iwl3945_verify_bsm(struct iwl3945_priv *priv)
+ {
+ __le32 *image = priv->ucode_boot.v_addr;
+ u32 len = priv->ucode_boot.len;
+@@ -5636,11 +5637,11 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+ IWL_DEBUG_INFO("Begin verify bsm\n");
+
+ /* verify BSM SRAM contents */
+- val = iwl_read_restricted_reg(priv, BSM_WR_DWCOUNT_REG);
++ val = iwl3945_read_prph(priv, BSM_WR_DWCOUNT_REG);
+ for (reg = BSM_SRAM_LOWER_BOUND;
+ reg < BSM_SRAM_LOWER_BOUND + len;
+ reg += sizeof(u32), image ++) {
+- val = iwl_read_restricted_reg(priv, reg);
++ val = iwl3945_read_prph(priv, reg);
+ if (val != le32_to_cpu(*image)) {
+ IWL_ERROR("BSM uCode verification failed at "
+ "addr 0x%08X+%u (of %u), is 0x%x, s/b 0x%x\n",
+@@ -5657,7 +5658,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+ }
+
+ /**
+- * iwl_load_bsm - Load bootstrap instructions
++ * iwl3945_load_bsm - Load bootstrap instructions
+ *
+ * BSM operation:
+ *
+@@ -5688,7 +5689,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+ * the runtime uCode instructions and the backup data cache into SRAM,
+ * and re-launches the runtime uCode from where it left off.
+ */
+-static int iwl_load_bsm(struct iwl_priv *priv)
++static int iwl3945_load_bsm(struct iwl3945_priv *priv)
+ {
+ __le32 *image = priv->ucode_boot.v_addr;
+ u32 len = priv->ucode_boot.len;
+@@ -5708,8 +5709,8 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+ return -EINVAL;
+
+ /* Tell bootstrap uCode where to find the "Initialize" uCode
+- * in host DRAM ... bits 31:0 for 3945, bits 35:4 for 4965.
+- * NOTE: iwl_initialize_alive_start() will replace these values,
++ * in host DRAM ... host DRAM physical address bits 31:0 for 3945.
++ * NOTE: iwl3945_initialize_alive_start() will replace these values,
+ * after the "initialize" uCode has run, to point to
+ * runtime/protocol instructions and backup data cache. */
+ pinst = priv->ucode_init.p_addr;
+@@ -5717,42 +5718,42 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+ inst_len = priv->ucode_init.len;
+ data_len = priv->ucode_init_data.len;
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc)
+ return rc;
+
+- iwl_write_restricted_reg(priv, BSM_DRAM_INST_PTR_REG, pinst);
+- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
+- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
+- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
++ iwl3945_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
++ iwl3945_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
++ iwl3945_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
++ iwl3945_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
+
+ /* Fill BSM memory with bootstrap instructions */
+ for (reg_offset = BSM_SRAM_LOWER_BOUND;
+ reg_offset < BSM_SRAM_LOWER_BOUND + len;
+ reg_offset += sizeof(u32), image++)
+- _iwl_write_restricted_reg(priv, reg_offset,
++ _iwl3945_write_prph(priv, reg_offset,
+ le32_to_cpu(*image));
+
+- rc = iwl_verify_bsm(priv);
++ rc = iwl3945_verify_bsm(priv);
+ if (rc) {
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+ return rc;
+ }
+
+ /* Tell BSM to copy from BSM SRAM into instruction SRAM, when asked */
+- iwl_write_restricted_reg(priv, BSM_WR_MEM_SRC_REG, 0x0);
+- iwl_write_restricted_reg(priv, BSM_WR_MEM_DST_REG,
++ iwl3945_write_prph(priv, BSM_WR_MEM_SRC_REG, 0x0);
++ iwl3945_write_prph(priv, BSM_WR_MEM_DST_REG,
+ RTC_INST_LOWER_BOUND);
+- iwl_write_restricted_reg(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
++ iwl3945_write_prph(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
+
+ /* Load bootstrap code into instruction SRAM now,
+ * to prepare to load "initialize" uCode */
+- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
++ iwl3945_write_prph(priv, BSM_WR_CTRL_REG,
+ BSM_WR_CTRL_REG_BIT_START);
+
+ /* Wait for load of bootstrap uCode to finish */
+ for (i = 0; i < 100; i++) {
+- done = iwl_read_restricted_reg(priv, BSM_WR_CTRL_REG);
++ done = iwl3945_read_prph(priv, BSM_WR_CTRL_REG);
+ if (!(done & BSM_WR_CTRL_REG_BIT_START))
+ break;
+ udelay(10);
+@@ -5766,29 +5767,29 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+
+ /* Enable future boot loads whenever power management unit triggers it
+ * (e.g. when powering back up after power-save shutdown) */
+- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
++ iwl3945_write_prph(priv, BSM_WR_CTRL_REG,
+ BSM_WR_CTRL_REG_BIT_START_EN);
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ return 0;
+ }
+
+-static void iwl_nic_start(struct iwl_priv *priv)
++static void iwl3945_nic_start(struct iwl3945_priv *priv)
+ {
+ /* Remove all resets to allow NIC to operate */
+- iwl_write32(priv, CSR_RESET, 0);
++ iwl3945_write32(priv, CSR_RESET, 0);
+ }
+
+ /**
+- * iwl_read_ucode - Read uCode images from disk file.
++ * iwl3945_read_ucode - Read uCode images from disk file.
+ *
+ * Copy into buffers for card to fetch via bus-mastering
+ */
+-static int iwl_read_ucode(struct iwl_priv *priv)
++static int iwl3945_read_ucode(struct iwl3945_priv *priv)
+ {
+- struct iwl_ucode *ucode;
+- int rc = 0;
++ struct iwl3945_ucode *ucode;
++ int ret = 0;
+ const struct firmware *ucode_raw;
+ /* firmware file name contains uCode/driver compatibility version */
+ const char *name = "iwlwifi-3945" IWL3945_UCODE_API ".ucode";
+@@ -5798,9 +5799,10 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+
+ /* Ask kernel firmware_class module to get the boot firmware off disk.
+ * request_firmware() is synchronous, file is in memory on return. */
+- rc = request_firmware(&ucode_raw, name, &priv->pci_dev->dev);
+- if (rc < 0) {
+- IWL_ERROR("%s firmware file req failed: Reason %d\n", name, rc);
++ ret = request_firmware(&ucode_raw, name, &priv->pci_dev->dev);
++ if (ret < 0) {
++ IWL_ERROR("%s firmware file req failed: Reason %d\n",
++ name, ret);
+ goto error;
+ }
+
+@@ -5810,7 +5812,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ /* Make sure that we got at least our header! */
+ if (ucode_raw->size < sizeof(*ucode)) {
+ IWL_ERROR("File size way too small!\n");
+- rc = -EINVAL;
++ ret = -EINVAL;
+ goto err_release;
+ }
+
+@@ -5825,16 +5827,11 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ boot_size = le32_to_cpu(ucode->boot_size);
+
+ IWL_DEBUG_INFO("f/w package hdr ucode version = 0x%x\n", ver);
+- IWL_DEBUG_INFO("f/w package hdr runtime inst size = %u\n",
+- inst_size);
+- IWL_DEBUG_INFO("f/w package hdr runtime data size = %u\n",
+- data_size);
+- IWL_DEBUG_INFO("f/w package hdr init inst size = %u\n",
+- init_size);
+- IWL_DEBUG_INFO("f/w package hdr init data size = %u\n",
+- init_data_size);
+- IWL_DEBUG_INFO("f/w package hdr boot inst size = %u\n",
+- boot_size);
++ IWL_DEBUG_INFO("f/w package hdr runtime inst size = %u\n", inst_size);
++ IWL_DEBUG_INFO("f/w package hdr runtime data size = %u\n", data_size);
++ IWL_DEBUG_INFO("f/w package hdr init inst size = %u\n", init_size);
++ IWL_DEBUG_INFO("f/w package hdr init data size = %u\n", init_data_size);
++ IWL_DEBUG_INFO("f/w package hdr boot inst size = %u\n", boot_size);
+
+ /* Verify size of file vs. image size info in file's header */
+ if (ucode_raw->size < sizeof(*ucode) +
+@@ -5843,43 +5840,40 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+
+ IWL_DEBUG_INFO("uCode file size %d too small\n",
+ (int)ucode_raw->size);
+- rc = -EINVAL;
++ ret = -EINVAL;
+ goto err_release;
+ }
+
+ /* Verify that uCode images will fit in card's SRAM */
+ if (inst_size > IWL_MAX_INST_SIZE) {
+- IWL_DEBUG_INFO("uCode instr len %d too large to fit in card\n",
+- (int)inst_size);
+- rc = -EINVAL;
++ IWL_DEBUG_INFO("uCode instr len %d too large to fit in\n",
++ inst_size);
++ ret = -EINVAL;
+ goto err_release;
+ }
+
+ if (data_size > IWL_MAX_DATA_SIZE) {
+- IWL_DEBUG_INFO("uCode data len %d too large to fit in card\n",
+- (int)data_size);
+- rc = -EINVAL;
++ IWL_DEBUG_INFO("uCode data len %d too large to fit in\n",
++ data_size);
++ ret = -EINVAL;
+ goto err_release;
+ }
+ if (init_size > IWL_MAX_INST_SIZE) {
+- IWL_DEBUG_INFO
+- ("uCode init instr len %d too large to fit in card\n",
+- (int)init_size);
+- rc = -EINVAL;
++ IWL_DEBUG_INFO("uCode init instr len %d too large to fit in\n",
++ init_size);
++ ret = -EINVAL;
+ goto err_release;
+ }
+ if (init_data_size > IWL_MAX_DATA_SIZE) {
+- IWL_DEBUG_INFO
+- ("uCode init data len %d too large to fit in card\n",
+- (int)init_data_size);
+- rc = -EINVAL;
++ IWL_DEBUG_INFO("uCode init data len %d too large to fit in\n",
++ init_data_size);
++ ret = -EINVAL;
+ goto err_release;
+ }
+ if (boot_size > IWL_MAX_BSM_SIZE) {
+- IWL_DEBUG_INFO
+- ("uCode boot instr len %d too large to fit in bsm\n",
+- (int)boot_size);
+- rc = -EINVAL;
++ IWL_DEBUG_INFO("uCode boot instr len %d too large to fit in\n",
++ boot_size);
++ ret = -EINVAL;
+ goto err_release;
+ }
+
+@@ -5889,66 +5883,54 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ * 1) unmodified from disk
+ * 2) backup cache for save/restore during power-downs */
+ priv->ucode_code.len = inst_size;
+- priv->ucode_code.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_code.len,
+- &(priv->ucode_code.p_addr));
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_code);
+
+ priv->ucode_data.len = data_size;
+- priv->ucode_data.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_data.len,
+- &(priv->ucode_data.p_addr));
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_data);
+
+ priv->ucode_data_backup.len = data_size;
+- priv->ucode_data_backup.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_data_backup.len,
+- &(priv->ucode_data_backup.p_addr));
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_data_backup);
+
++ if (!priv->ucode_code.v_addr || !priv->ucode_data.v_addr ||
++ !priv->ucode_data_backup.v_addr)
++ goto err_pci_alloc;
+
+ /* Initialization instructions and data */
+- priv->ucode_init.len = init_size;
+- priv->ucode_init.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_init.len,
+- &(priv->ucode_init.p_addr));
-
-- u8 reserved10[2];
--#define EEPROM_REGULATORY_BAND_24_FAT_CHANNELS (2*0xA0) /* 14 bytes */
-- struct iwl_eeprom_channel band_24_channels[7]; /* abs.ofs: 320 */
-- u8 reserved11[2];
--#define EEPROM_REGULATORY_BAND_52_FAT_CHANNELS (2*0xA8) /* 22 bytes */
-- struct iwl_eeprom_channel band_52_channels[11]; /* abs.ofs: 336 */
-- u8 reserved12[6];
--#define EEPROM_CALIB_VERSION_OFFSET (2*0xB6) /* 2 bytes */
-- u16 calib_version; /* abs.ofs: 364 */
-- u8 reserved13[2];
--#define EEPROM_SATURATION_POWER_OFFSET (2*0xB8) /* 2 bytes */
-- u16 satruation_power; /* abs.ofs: 368 */
-- u8 reserved14[94];
--#define EEPROM_IWL_CALIB_TXPOWER_OFFSET (2*0xE8) /* 48 bytes */
-- struct iwl_eeprom_calib_info calib_info; /* abs.ofs: 464 */
+- priv->ucode_init_data.len = init_data_size;
+- priv->ucode_init_data.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_init_data.len,
+- &(priv->ucode_init_data.p_addr));
++ if (init_size && init_data_size) {
++ priv->ucode_init.len = init_size;
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_init);
++
++ priv->ucode_init_data.len = init_data_size;
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_init_data);
++
++ if (!priv->ucode_init.v_addr || !priv->ucode_init_data.v_addr)
++ goto err_pci_alloc;
++ }
+
+ /* Bootstrap (instructions only, no data) */
+- priv->ucode_boot.len = boot_size;
+- priv->ucode_boot.v_addr =
+- pci_alloc_consistent(priv->pci_dev,
+- priv->ucode_boot.len,
+- &(priv->ucode_boot.p_addr));
++ if (boot_size) {
++ priv->ucode_boot.len = boot_size;
++ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_boot);
+
+- if (!priv->ucode_code.v_addr || !priv->ucode_data.v_addr ||
+- !priv->ucode_init.v_addr || !priv->ucode_init_data.v_addr ||
+- !priv->ucode_boot.v_addr || !priv->ucode_data_backup.v_addr)
+- goto err_pci_alloc;
++ if (!priv->ucode_boot.v_addr)
++ goto err_pci_alloc;
++ }
+
+ /* Copy images into buffers for card's bus-master reads ... */
+
+ /* Runtime instructions (first block of data in file) */
+ src = &ucode->data[0];
+ len = priv->ucode_code.len;
+- IWL_DEBUG_INFO("Copying (but not loading) uCode instr len %d\n",
+- (int)len);
++ IWL_DEBUG_INFO("Copying (but not loading) uCode instr len %Zd\n", len);
+ memcpy(priv->ucode_code.v_addr, src, len);
+ IWL_DEBUG_INFO("uCode instr buf vaddr = 0x%p, paddr = 0x%08x\n",
+ priv->ucode_code.v_addr, (u32)priv->ucode_code.p_addr);
+
+ /* Runtime data (2nd block)
+- * NOTE: Copy into backup buffer will be done in iwl_up() */
++ * NOTE: Copy into backup buffer will be done in iwl3945_up() */
+ src = &ucode->data[inst_size];
+ len = priv->ucode_data.len;
+- IWL_DEBUG_INFO("Copying (but not loading) uCode data len %d\n",
+- (int)len);
++ IWL_DEBUG_INFO("Copying (but not loading) uCode data len %Zd\n", len);
+ memcpy(priv->ucode_data.v_addr, src, len);
+ memcpy(priv->ucode_data_backup.v_addr, src, len);
+
+@@ -5956,8 +5938,8 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ if (init_size) {
+ src = &ucode->data[inst_size + data_size];
+ len = priv->ucode_init.len;
+- IWL_DEBUG_INFO("Copying (but not loading) init instr len %d\n",
+- (int)len);
++ IWL_DEBUG_INFO("Copying (but not loading) init instr len %Zd\n",
++ len);
+ memcpy(priv->ucode_init.v_addr, src, len);
+ }
+
+@@ -5983,19 +5965,19 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+
+ err_pci_alloc:
+ IWL_ERROR("failed to allocate pci memory\n");
+- rc = -ENOMEM;
+- iwl_dealloc_ucode_pci(priv);
++ ret = -ENOMEM;
++ iwl3945_dealloc_ucode_pci(priv);
+
+ err_release:
+ release_firmware(ucode_raw);
+
+ error:
+- return rc;
++ return ret;
+ }
+
+
+ /**
+- * iwl_set_ucode_ptrs - Set uCode address location
++ * iwl3945_set_ucode_ptrs - Set uCode address location
+ *
+ * Tell initialization uCode where to find runtime uCode.
+ *
+@@ -6003,7 +5985,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ * We need to replace them to load runtime uCode inst and data,
+ * and to save runtime data when powering down.
+ */
+-static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
++static int iwl3945_set_ucode_ptrs(struct iwl3945_priv *priv)
+ {
+ dma_addr_t pinst;
+ dma_addr_t pdata;
+@@ -6015,24 +5997,24 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+ pdata = priv->ucode_data_backup.p_addr;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return rc;
+ }
+
+ /* Tell bootstrap uCode where to find image to load */
+- iwl_write_restricted_reg(priv, BSM_DRAM_INST_PTR_REG, pinst);
+- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
+- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
++ iwl3945_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
++ iwl3945_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
++ iwl3945_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
+ priv->ucode_data.len);
+
+ /* Inst bytecount must be last to set up, bit 31 signals uCode
+ * that all new ptr/size info is in place */
+- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG,
++ iwl3945_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG,
+ priv->ucode_code.len | BSM_DRAM_INST_LOAD);
+
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+@@ -6042,17 +6024,13 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+ }
+
+ /**
+- * iwl_init_alive_start - Called after REPLY_ALIVE notification receieved
++ * iwl3945_init_alive_start - Called after REPLY_ALIVE notification received
+ *
+ * Called after REPLY_ALIVE notification received from "initialize" uCode.
+ *
+- * The 4965 "initialize" ALIVE reply contains calibration data for:
+- * Voltage, temperature, and MIMO tx gain correction, now stored in priv
+- * (3945 does not contain this data).
+- *
+ * Tell "initialize" uCode to go ahead and load the runtime uCode.
+-*/
+-static void iwl_init_alive_start(struct iwl_priv *priv)
++ */
++static void iwl3945_init_alive_start(struct iwl3945_priv *priv)
+ {
+ /* Check alive response for "valid" sign from uCode */
+ if (priv->card_alive_init.is_valid != UCODE_VALID_OK) {
+@@ -6065,7 +6043,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+ /* Bootstrap uCode has loaded initialize uCode ... verify inst image.
+ * This is a paranoid check, because we would not have gotten the
+ * "initialize" alive if code weren't properly loaded. */
+- if (iwl_verify_ucode(priv)) {
++ if (iwl3945_verify_ucode(priv)) {
+ /* Runtime instruction load was bad;
+ * take it all the way back down so we can try again */
+ IWL_DEBUG_INFO("Bad \"initialize\" uCode load.\n");
+@@ -6076,7 +6054,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+ * load and launch runtime uCode, which will send us another "Alive"
+ * notification. */
+ IWL_DEBUG_INFO("Initialization Alive received.\n");
+- if (iwl_set_ucode_ptrs(priv)) {
++ if (iwl3945_set_ucode_ptrs(priv)) {
+ /* Runtime instruction load won't happen;
+ * take it all the way back down so we can try again */
+ IWL_DEBUG_INFO("Couldn't set up uCode pointers.\n");
+@@ -6090,11 +6068,11 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+
+
+ /**
+- * iwl_alive_start - called after REPLY_ALIVE notification received
++ * iwl3945_alive_start - called after REPLY_ALIVE notification received
+ * from protocol/runtime uCode (initialization uCode's
+- * Alive gets handled by iwl_init_alive_start()).
++ * Alive gets handled by iwl3945_init_alive_start()).
+ */
+-static void iwl_alive_start(struct iwl_priv *priv)
++static void iwl3945_alive_start(struct iwl3945_priv *priv)
+ {
+ int rc = 0;
+ int thermal_spin = 0;
+@@ -6112,30 +6090,30 @@ static void iwl_alive_start(struct iwl_priv *priv)
+ /* Initialize uCode has loaded Runtime uCode ... verify inst image.
+ * This is a paranoid check, because we would not have gotten the
+ * "runtime" alive if code weren't properly loaded. */
+- if (iwl_verify_ucode(priv)) {
++ if (iwl3945_verify_ucode(priv)) {
+ /* Runtime instruction load was bad;
+ * take it all the way back down so we can try again */
+ IWL_DEBUG_INFO("Bad runtime uCode load.\n");
+ goto restart;
+ }
+
+- iwl_clear_stations_table(priv);
++ iwl3945_clear_stations_table(priv);
+
+- rc = iwl_grab_restricted_access(priv);
++ rc = iwl3945_grab_nic_access(priv);
+ if (rc) {
+ IWL_WARNING("Can not read rfkill status from adapter\n");
+ return;
+ }
+
+- rfkill = iwl_read_restricted_reg(priv, APMG_RFKILL_REG);
++ rfkill = iwl3945_read_prph(priv, APMG_RFKILL_REG);
+ IWL_DEBUG_INFO("RFKILL status: 0x%x\n", rfkill);
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+
+ if (rfkill & 0x1) {
+ clear_bit(STATUS_RF_KILL_HW, &priv->status);
+ /* if rfkill is not on, then wait for thermal
+ * sensor in adapter to kick in */
+- while (iwl_hw_get_temperature(priv) == 0) {
++ while (iwl3945_hw_get_temperature(priv) == 0) {
+ thermal_spin++;
+ udelay(10);
+ }
+@@ -6146,68 +6124,49 @@ static void iwl_alive_start(struct iwl_priv *priv)
+ } else
+ set_bit(STATUS_RF_KILL_HW, &priv->status);
+
+- /* After the ALIVE response, we can process host commands */
++ /* After the ALIVE response, we can send commands to 3945 uCode */
+ set_bit(STATUS_ALIVE, &priv->status);
+
+ /* Clear out the uCode error bit if it is set */
+ clear_bit(STATUS_FW_ERROR, &priv->status);
+
+- rc = iwl_init_channel_map(priv);
++ rc = iwl3945_init_channel_map(priv);
+ if (rc) {
+ IWL_ERROR("initializing regulatory failed: %d\n", rc);
+ return;
+ }
+
+- iwl_init_geos(priv);
++ iwl3945_init_geos(priv);
++ iwl3945_reset_channel_flag(priv);
+
+- if (iwl_is_rfkill(priv))
++ if (iwl3945_is_rfkill(priv))
+ return;
+
+- if (!priv->mac80211_registered) {
+- /* Unlock so any user space entry points can call back into
+- * the driver without a deadlock... */
+- mutex_unlock(&priv->mutex);
+- iwl_rate_control_register(priv->hw);
+- rc = ieee80211_register_hw(priv->hw);
+- priv->hw->conf.beacon_int = 100;
+- mutex_lock(&priv->mutex);
-
-- u8 reserved16[140]; /* fill out to full 1024 byte block */
+- if (rc) {
+- iwl_rate_control_unregister(priv->hw);
+- IWL_ERROR("Failed to register network "
+- "device (error %d)\n", rc);
+- return;
+- }
-
--#endif
+- priv->mac80211_registered = 1;
-
--} __attribute__ ((packed));
+- iwl_reset_channel_flag(priv);
+- } else
+- ieee80211_start_queues(priv->hw);
++ ieee80211_start_queues(priv->hw);
+
+ priv->active_rate = priv->rates_mask;
+ priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
+
+- iwl_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
++ iwl3945_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
+
+- if (iwl_is_associated(priv)) {
+- struct iwl_rxon_cmd *active_rxon =
+- (struct iwl_rxon_cmd *)(&priv->active_rxon);
++ if (iwl3945_is_associated(priv)) {
++ struct iwl3945_rxon_cmd *active_rxon =
++ (struct iwl3945_rxon_cmd *)(&priv->active_rxon);
+
+ memcpy(&priv->staging_rxon, &priv->active_rxon,
+ sizeof(priv->staging_rxon));
+ active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+ } else {
+ /* Initialize our rx_config data */
+- iwl_connection_init_rx_config(priv);
++ iwl3945_connection_init_rx_config(priv);
+ memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
+ }
+
+- /* Configure BT coexistence */
+- iwl_send_bt_config(priv);
++ /* Configure Bluetooth device coexistence support */
++ iwl3945_send_bt_config(priv);
+
+ /* Configure the adapter for unassociated operation */
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+ /* At this point, the NIC is initialized and operational */
+ priv->notif_missed_beacons = 0;
+@@ -6216,9 +6175,10 @@ static void iwl_alive_start(struct iwl_priv *priv)
+ iwl3945_reg_txpower_periodic(priv);
+
+ IWL_DEBUG_INFO("ALIVE processing complete.\n");
++ wake_up_interruptible(&priv->wait_command_queue);
+
+ if (priv->error_recovering)
+- iwl_error_recovery(priv);
++ iwl3945_error_recovery(priv);
+
+ return;
+
+@@ -6226,9 +6186,9 @@ static void iwl_alive_start(struct iwl_priv *priv)
+ queue_work(priv->workqueue, &priv->restart);
+ }
+
+-static void iwl_cancel_deferred_work(struct iwl_priv *priv);
++static void iwl3945_cancel_deferred_work(struct iwl3945_priv *priv);
+
+-static void __iwl_down(struct iwl_priv *priv)
++static void __iwl3945_down(struct iwl3945_priv *priv)
+ {
+ unsigned long flags;
+ int exit_pending = test_bit(STATUS_EXIT_PENDING, &priv->status);
+@@ -6241,7 +6201,7 @@ static void __iwl_down(struct iwl_priv *priv)
+ if (!exit_pending)
+ set_bit(STATUS_EXIT_PENDING, &priv->status);
+
+- iwl_clear_stations_table(priv);
++ iwl3945_clear_stations_table(priv);
+
+ /* Unblock any waiting calls */
+ wake_up_interruptible_all(&priv->wait_command_queue);
+@@ -6252,17 +6212,17 @@ static void __iwl_down(struct iwl_priv *priv)
+ clear_bit(STATUS_EXIT_PENDING, &priv->status);
+
+ /* stop and reset the on-board processor */
+- iwl_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
++ iwl3945_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
+
+ /* tell the device to stop sending interrupts */
+- iwl_disable_interrupts(priv);
++ iwl3945_disable_interrupts(priv);
+
+ if (priv->mac80211_registered)
+ ieee80211_stop_queues(priv->hw);
+
+- /* If we have not previously called iwl_init() then
++ /* If we have not previously called iwl3945_init() then
+ * clear all bits but the RF Kill and SUSPEND bits and return */
+- if (!iwl_is_init(priv)) {
++ if (!iwl3945_is_init(priv)) {
+ priv->status = test_bit(STATUS_RF_KILL_HW, &priv->status) <<
+ STATUS_RF_KILL_HW |
+ test_bit(STATUS_RF_KILL_SW, &priv->status) <<
+@@ -6284,51 +6244,50 @@ static void __iwl_down(struct iwl_priv *priv)
+ STATUS_FW_ERROR;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- iwl_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
++ iwl3945_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- iwl_hw_txq_ctx_stop(priv);
+- iwl_hw_rxq_stop(priv);
++ iwl3945_hw_txq_ctx_stop(priv);
++ iwl3945_hw_rxq_stop(priv);
+
+ spin_lock_irqsave(&priv->lock, flags);
+- if (!iwl_grab_restricted_access(priv)) {
+- iwl_write_restricted_reg(priv, APMG_CLK_DIS_REG,
++ if (!iwl3945_grab_nic_access(priv)) {
++ iwl3945_write_prph(priv, APMG_CLK_DIS_REG,
+ APMG_CLK_VAL_DMA_CLK_RQT);
+- iwl_release_restricted_access(priv);
++ iwl3945_release_nic_access(priv);
+ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ udelay(5);
+
+- iwl_hw_nic_stop_master(priv);
+- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
+- iwl_hw_nic_reset(priv);
++ iwl3945_hw_nic_stop_master(priv);
++ iwl3945_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
++ iwl3945_hw_nic_reset(priv);
+
+ exit:
+- memset(&priv->card_alive, 0, sizeof(struct iwl_alive_resp));
++ memset(&priv->card_alive, 0, sizeof(struct iwl3945_alive_resp));
+
+ if (priv->ibss_beacon)
+ dev_kfree_skb(priv->ibss_beacon);
+ priv->ibss_beacon = NULL;
+
+ /* clear out any free frames */
+- iwl_clear_free_frames(priv);
++ iwl3945_clear_free_frames(priv);
+ }
+
+-static void iwl_down(struct iwl_priv *priv)
++static void iwl3945_down(struct iwl3945_priv *priv)
+ {
+ mutex_lock(&priv->mutex);
+- __iwl_down(priv);
++ __iwl3945_down(priv);
+ mutex_unlock(&priv->mutex);
+
+- iwl_cancel_deferred_work(priv);
++ iwl3945_cancel_deferred_work(priv);
+ }
+
+ #define MAX_HW_RESTARTS 5
+
+-static int __iwl_up(struct iwl_priv *priv)
++static int __iwl3945_up(struct iwl3945_priv *priv)
+ {
+- DECLARE_MAC_BUF(mac);
+ int rc, i;
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status)) {
+@@ -6339,7 +6298,19 @@ static int __iwl_up(struct iwl_priv *priv)
+ if (test_bit(STATUS_RF_KILL_SW, &priv->status)) {
+ IWL_WARNING("Radio disabled by SW RF kill (module "
+ "parameter)\n");
+- return 0;
++ return -ENODEV;
++ }
++
++ /* If platform's RF_KILL switch is NOT set to KILL */
++ if (iwl3945_read32(priv, CSR_GP_CNTRL) &
++ CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW)
++ clear_bit(STATUS_RF_KILL_HW, &priv->status);
++ else {
++ set_bit(STATUS_RF_KILL_HW, &priv->status);
++ if (!test_bit(STATUS_IN_SUSPEND, &priv->status)) {
++ IWL_WARNING("Radio disabled by HW RF Kill switch\n");
++ return -ENODEV;
++ }
+ }
+
+ if (!priv->ucode_data_backup.v_addr || !priv->ucode_data.v_addr) {
+@@ -6347,41 +6318,45 @@ static int __iwl_up(struct iwl_priv *priv)
+ return -EIO;
+ }
+
+- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
++ iwl3945_write32(priv, CSR_INT, 0xFFFFFFFF);
+
+- rc = iwl_hw_nic_init(priv);
++ rc = iwl3945_hw_nic_init(priv);
+ if (rc) {
+ IWL_ERROR("Unable to int nic\n");
+ return rc;
+ }
+
+ /* make sure rfkill handshake bits are cleared */
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR,
+ CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+
+ /* clear (again), then enable host interrupts */
+- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
+- iwl_enable_interrupts(priv);
++ iwl3945_write32(priv, CSR_INT, 0xFFFFFFFF);
++ iwl3945_enable_interrupts(priv);
+
+ /* really make sure rfkill handshake bits are cleared */
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+
+ /* Copy original ucode data image from disk into backup cache.
+ * This will be used to initialize the on-board processor's
+ * data SRAM for a clean start when the runtime program first loads. */
+ memcpy(priv->ucode_data_backup.v_addr, priv->ucode_data.v_addr,
+- priv->ucode_data.len);
++ priv->ucode_data.len);
++
++ /* We return success when we resume from suspend and rf_kill is on. */
++ if (test_bit(STATUS_RF_KILL_HW, &priv->status))
++ return 0;
+
+ for (i = 0; i < MAX_HW_RESTARTS; i++) {
+
+- iwl_clear_stations_table(priv);
++ iwl3945_clear_stations_table(priv);
+
+ /* load bootstrap state machine,
+ * load bootstrap program into processor's memory,
+ * prepare to load the "initialize" uCode */
+- rc = iwl_load_bsm(priv);
++ rc = iwl3945_load_bsm(priv);
+
+ if (rc) {
+ IWL_ERROR("Unable to set up bootstrap uCode: %d\n", rc);
+@@ -6389,14 +6364,7 @@ static int __iwl_up(struct iwl_priv *priv)
+ }
+
+ /* start card; "initialize" will load runtime ucode */
+- iwl_nic_start(priv);
-
--#define IWL_EEPROM_IMAGE_SIZE 1024
+- /* MAC Address location in EEPROM same for 3945/4965 */
+- get_eeprom_mac(priv, priv->mac_addr);
+- IWL_DEBUG_INFO("MAC address: %s\n",
+- print_mac(mac, priv->mac_addr));
-
--#endif
-diff --git a/drivers/net/wireless/iwlwifi/iwl-helpers.h b/drivers/net/wireless/iwlwifi/iwl-helpers.h
-index e2a8d95..cd2eb18 100644
---- a/drivers/net/wireless/iwlwifi/iwl-helpers.h
-+++ b/drivers/net/wireless/iwlwifi/iwl-helpers.h
-@@ -252,4 +252,27 @@ static inline unsigned long elapsed_jiffies(unsigned long start,
- return end + (MAX_JIFFY_OFFSET - start);
+- SET_IEEE80211_PERM_ADDR(priv->hw, priv->mac_addr);
++ iwl3945_nic_start(priv);
+
+ IWL_DEBUG_INFO(DRV_NAME " is coming up\n");
+
+@@ -6404,7 +6372,7 @@ static int __iwl_up(struct iwl_priv *priv)
+ }
+
+ set_bit(STATUS_EXIT_PENDING, &priv->status);
+- __iwl_down(priv);
++ __iwl3945_down(priv);
+
+ /* tried to restart and config the device for as long as our
+ * patience could withstand */
+@@ -6419,35 +6387,35 @@ static int __iwl_up(struct iwl_priv *priv)
+ *
+ *****************************************************************************/
+
+-static void iwl_bg_init_alive_start(struct work_struct *data)
++static void iwl3945_bg_init_alive_start(struct work_struct *data)
+ {
+- struct iwl_priv *priv =
+- container_of(data, struct iwl_priv, init_alive_start.work);
++ struct iwl3945_priv *priv =
++ container_of(data, struct iwl3945_priv, init_alive_start.work);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+ mutex_lock(&priv->mutex);
+- iwl_init_alive_start(priv);
++ iwl3945_init_alive_start(priv);
+ mutex_unlock(&priv->mutex);
}
-+static inline u8 iwl_get_dma_hi_address(dma_addr_t addr)
-+{
-+ return sizeof(addr) > sizeof(u32) ? (addr >> 16) >> 16 : 0;
-+}
+-static void iwl_bg_alive_start(struct work_struct *data)
++static void iwl3945_bg_alive_start(struct work_struct *data)
+ {
+- struct iwl_priv *priv =
+- container_of(data, struct iwl_priv, alive_start.work);
++ struct iwl3945_priv *priv =
++ container_of(data, struct iwl3945_priv, alive_start.work);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+ mutex_lock(&priv->mutex);
+- iwl_alive_start(priv);
++ iwl3945_alive_start(priv);
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_rf_kill(struct work_struct *work)
++static void iwl3945_bg_rf_kill(struct work_struct *work)
+ {
+- struct iwl_priv *priv = container_of(work, struct iwl_priv, rf_kill);
++ struct iwl3945_priv *priv = container_of(work, struct iwl3945_priv, rf_kill);
+
+ wake_up_interruptible(&priv->wait_command_queue);
+
+@@ -6456,7 +6424,7 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+
+ mutex_lock(&priv->mutex);
+
+- if (!iwl_is_rfkill(priv)) {
++ if (!iwl3945_is_rfkill(priv)) {
+ IWL_DEBUG(IWL_DL_INFO | IWL_DL_RF_KILL,
+ "HW and/or SW RF Kill no longer active, restarting "
+ "device\n");
+@@ -6477,10 +6445,10 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+
+ #define IWL_SCAN_CHECK_WATCHDOG (7 * HZ)
+
+-static void iwl_bg_scan_check(struct work_struct *data)
++static void iwl3945_bg_scan_check(struct work_struct *data)
+ {
+- struct iwl_priv *priv =
+- container_of(data, struct iwl_priv, scan_check.work);
++ struct iwl3945_priv *priv =
++ container_of(data, struct iwl3945_priv, scan_check.work);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+@@ -6493,22 +6461,22 @@ static void iwl_bg_scan_check(struct work_struct *data)
+ jiffies_to_msecs(IWL_SCAN_CHECK_WATCHDOG));
+
+ if (!test_bit(STATUS_EXIT_PENDING, &priv->status))
+- iwl_send_scan_abort(priv);
++ iwl3945_send_scan_abort(priv);
+ }
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_request_scan(struct work_struct *data)
++static void iwl3945_bg_request_scan(struct work_struct *data)
+ {
+- struct iwl_priv *priv =
+- container_of(data, struct iwl_priv, request_scan);
+- struct iwl_host_cmd cmd = {
++ struct iwl3945_priv *priv =
++ container_of(data, struct iwl3945_priv, request_scan);
++ struct iwl3945_host_cmd cmd = {
+ .id = REPLY_SCAN_CMD,
+- .len = sizeof(struct iwl_scan_cmd),
++ .len = sizeof(struct iwl3945_scan_cmd),
+ .meta.flags = CMD_SIZE_HUGE,
+ };
+ int rc = 0;
+- struct iwl_scan_cmd *scan;
++ struct iwl3945_scan_cmd *scan;
+ struct ieee80211_conf *conf = NULL;
+ u8 direct_mask;
+ int phymode;
+@@ -6517,7 +6485,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+
+ mutex_lock(&priv->mutex);
+
+- if (!iwl_is_ready(priv)) {
++ if (!iwl3945_is_ready(priv)) {
+ IWL_WARNING("request scan called when driver not ready.\n");
+ goto done;
+ }
+@@ -6546,7 +6514,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ goto done;
+ }
+
+- if (iwl_is_rfkill(priv)) {
++ if (iwl3945_is_rfkill(priv)) {
+ IWL_DEBUG_HC("Aborting scan due to RF Kill activation\n");
+ goto done;
+ }
+@@ -6562,7 +6530,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ }
+
+ if (!priv->scan) {
+- priv->scan = kmalloc(sizeof(struct iwl_scan_cmd) +
++ priv->scan = kmalloc(sizeof(struct iwl3945_scan_cmd) +
+ IWL_MAX_SCAN_SIZE, GFP_KERNEL);
+ if (!priv->scan) {
+ rc = -ENOMEM;
+@@ -6570,12 +6538,12 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ }
+ }
+ scan = priv->scan;
+- memset(scan, 0, sizeof(struct iwl_scan_cmd) + IWL_MAX_SCAN_SIZE);
++ memset(scan, 0, sizeof(struct iwl3945_scan_cmd) + IWL_MAX_SCAN_SIZE);
+
+ scan->quiet_plcp_th = IWL_PLCP_QUIET_THRESH;
+ scan->quiet_time = IWL_ACTIVE_QUIET_TIME;
+
+- if (iwl_is_associated(priv)) {
++ if (iwl3945_is_associated(priv)) {
+ u16 interval = 0;
+ u32 extra;
+ u32 suspend_time = 100;
+@@ -6612,14 +6580,14 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ if (priv->one_direct_scan) {
+ IWL_DEBUG_SCAN
+ ("Kicking off one direct scan for '%s'\n",
+- iwl_escape_essid(priv->direct_ssid,
++ iwl3945_escape_essid(priv->direct_ssid,
+ priv->direct_ssid_len));
+ scan->direct_scan[0].id = WLAN_EID_SSID;
+ scan->direct_scan[0].len = priv->direct_ssid_len;
+ memcpy(scan->direct_scan[0].ssid,
+ priv->direct_ssid, priv->direct_ssid_len);
+ direct_mask = 1;
+- } else if (!iwl_is_associated(priv) && priv->essid_len) {
++ } else if (!iwl3945_is_associated(priv) && priv->essid_len) {
+ scan->direct_scan[0].id = WLAN_EID_SSID;
+ scan->direct_scan[0].len = priv->essid_len;
+ memcpy(scan->direct_scan[0].ssid, priv->essid, priv->essid_len);
+@@ -6630,7 +6598,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ /* We don't build a direct scan probe request; the uCode will do
+ * that based on the direct_mask added to each channel entry */
+ scan->tx_cmd.len = cpu_to_le16(
+- iwl_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
++ iwl3945_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
+ IWL_MAX_SCAN_SIZE - sizeof(scan), 0));
+ scan->tx_cmd.tx_flags = TX_CMD_FLG_SEQ_CTL_MSK;
+ scan->tx_cmd.sta_id = priv->hw_setting.bcast_sta_id;
+@@ -6666,23 +6634,23 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ if (direct_mask)
+ IWL_DEBUG_SCAN
+ ("Initiating direct scan for %s.\n",
+- iwl_escape_essid(priv->essid, priv->essid_len));
++ iwl3945_escape_essid(priv->essid, priv->essid_len));
+ else
+ IWL_DEBUG_SCAN("Initiating indirect scan.\n");
+
+ scan->channel_count =
+- iwl_get_channels_for_scan(
++ iwl3945_get_channels_for_scan(
+ priv, phymode, 1, /* active */
+ direct_mask,
+ (void *)&scan->data[le16_to_cpu(scan->tx_cmd.len)]);
+
+ cmd.len += le16_to_cpu(scan->tx_cmd.len) +
+- scan->channel_count * sizeof(struct iwl_scan_channel);
++ scan->channel_count * sizeof(struct iwl3945_scan_channel);
+ cmd.data = scan;
+ scan->len = cpu_to_le16(cmd.len);
+
+ set_bit(STATUS_SCAN_HW, &priv->status);
+- rc = iwl_send_cmd_sync(priv, &cmd);
++ rc = iwl3945_send_cmd_sync(priv, &cmd);
+ if (rc)
+ goto done;
+
+@@ -6693,50 +6661,52 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ return;
+
+ done:
+- /* inform mac80211 sacn aborted */
++ /* inform mac80211 scan aborted */
+ queue_work(priv->workqueue, &priv->scan_completed);
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_up(struct work_struct *data)
++static void iwl3945_bg_up(struct work_struct *data)
+ {
+- struct iwl_priv *priv = container_of(data, struct iwl_priv, up);
++ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv, up);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+ mutex_lock(&priv->mutex);
+- __iwl_up(priv);
++ __iwl3945_up(priv);
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_restart(struct work_struct *data)
++static void iwl3945_bg_restart(struct work_struct *data)
+ {
+- struct iwl_priv *priv = container_of(data, struct iwl_priv, restart);
++ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv, restart);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+- iwl_down(priv);
++ iwl3945_down(priv);
+ queue_work(priv->workqueue, &priv->up);
+ }
+
+-static void iwl_bg_rx_replenish(struct work_struct *data)
++static void iwl3945_bg_rx_replenish(struct work_struct *data)
+ {
+- struct iwl_priv *priv =
+- container_of(data, struct iwl_priv, rx_replenish);
++ struct iwl3945_priv *priv =
++ container_of(data, struct iwl3945_priv, rx_replenish);
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
+ mutex_lock(&priv->mutex);
+- iwl_rx_replenish(priv);
++ iwl3945_rx_replenish(priv);
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_post_associate(struct work_struct *data)
++#define IWL_DELAY_NEXT_SCAN (HZ*2)
+
-+/* TODO: Move fw_desc functions to iwl-pci.ko */
-+static inline void iwl_free_fw_desc(struct pci_dev *pci_dev,
-+ struct fw_desc *desc)
-+{
-+ if (desc->v_addr)
-+ pci_free_consistent(pci_dev, desc->len,
-+ desc->v_addr, desc->p_addr);
-+ desc->v_addr = NULL;
-+ desc->len = 0;
-+}
++static void iwl3945_bg_post_associate(struct work_struct *data)
+ {
+- struct iwl_priv *priv = container_of(data, struct iwl_priv,
++ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv,
+ post_associate.work);
+
+ int rc = 0;
+@@ -6758,20 +6728,20 @@ static void iwl_bg_post_associate(struct work_struct *data)
+
+ mutex_lock(&priv->mutex);
+
+- if (!priv->interface_id || !priv->is_open) {
++ if (!priv->vif || !priv->is_open) {
+ mutex_unlock(&priv->mutex);
+ return;
+ }
+- iwl_scan_cancel_timeout(priv, 200);
++ iwl3945_scan_cancel_timeout(priv, 200);
+
+ conf = ieee80211_get_hw_conf(priv->hw);
+
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
+- iwl_setup_rxon_timing(priv);
+- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
++ memset(&priv->rxon_timing, 0, sizeof(struct iwl3945_rxon_time_cmd));
++ iwl3945_setup_rxon_timing(priv);
++ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON_TIMING,
+ sizeof(priv->rxon_timing), &priv->rxon_timing);
+ if (rc)
+ IWL_WARNING("REPLY_RXON_TIMING failed - "
+@@ -6800,75 +6770,81 @@ static void iwl_bg_post_associate(struct work_struct *data)
+
+ }
+
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+ switch (priv->iw_mode) {
+ case IEEE80211_IF_TYPE_STA:
+- iwl_rate_scale_init(priv->hw, IWL_AP_ID);
++ iwl3945_rate_scale_init(priv->hw, IWL_AP_ID);
+ break;
+
+ case IEEE80211_IF_TYPE_IBSS:
+
+ /* clear out the station table */
+- iwl_clear_stations_table(priv);
++ iwl3945_clear_stations_table(priv);
+
+- iwl_add_station(priv, BROADCAST_ADDR, 0, 0);
+- iwl_add_station(priv, priv->bssid, 0, 0);
++ iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0);
++ iwl3945_add_station(priv, priv->bssid, 0, 0);
+ iwl3945_sync_sta(priv, IWL_STA_ID,
+ (priv->phymode == MODE_IEEE80211A)?
+ IWL_RATE_6M_PLCP : IWL_RATE_1M_PLCP,
+ CMD_ASYNC);
+- iwl_rate_scale_init(priv->hw, IWL_STA_ID);
+- iwl_send_beacon_cmd(priv);
++ iwl3945_rate_scale_init(priv->hw, IWL_STA_ID);
++ iwl3945_send_beacon_cmd(priv);
+
+ break;
+
+ default:
+ IWL_ERROR("%s Should not be called in %d mode\n",
+- __FUNCTION__, priv->iw_mode);
++ __FUNCTION__, priv->iw_mode);
+ break;
+ }
+
+- iwl_sequence_reset(priv);
++ iwl3945_sequence_reset(priv);
+
+-#ifdef CONFIG_IWLWIFI_QOS
+- iwl_activate_qos(priv, 0);
+-#endif /* CONFIG_IWLWIFI_QOS */
++#ifdef CONFIG_IWL3945_QOS
++ iwl3945_activate_qos(priv, 0);
++#endif /* CONFIG_IWL3945_QOS */
++ /* we have just associated, don't start scan too early */
++ priv->next_scan_jiffies = jiffies + IWL_DELAY_NEXT_SCAN;
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_abort_scan(struct work_struct *work)
++static void iwl3945_bg_abort_scan(struct work_struct *work)
+ {
+- struct iwl_priv *priv = container_of(work, struct iwl_priv,
+- abort_scan);
++ struct iwl3945_priv *priv = container_of(work, struct iwl3945_priv, abort_scan);
+
+- if (!iwl_is_ready(priv))
++ if (!iwl3945_is_ready(priv))
+ return;
+
+ mutex_lock(&priv->mutex);
+
+ set_bit(STATUS_SCAN_ABORTING, &priv->status);
+- iwl_send_scan_abort(priv);
++ iwl3945_send_scan_abort(priv);
+
+ mutex_unlock(&priv->mutex);
+ }
+
+-static void iwl_bg_scan_completed(struct work_struct *work)
++static int iwl3945_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
+
-+static inline int iwl_alloc_fw_desc(struct pci_dev *pci_dev,
-+ struct fw_desc *desc)
-+{
-+ desc->v_addr = pci_alloc_consistent(pci_dev, desc->len, &desc->p_addr);
-+ return (desc->v_addr != NULL) ? 0 : -ENOMEM;
-+}
++static void iwl3945_bg_scan_completed(struct work_struct *work)
+ {
+- struct iwl_priv *priv =
+- container_of(work, struct iwl_priv, scan_completed);
++ struct iwl3945_priv *priv =
++ container_of(work, struct iwl3945_priv, scan_completed);
+
+ IWL_DEBUG(IWL_DL_INFO | IWL_DL_SCAN, "SCAN complete scan\n");
+
+ if (test_bit(STATUS_EXIT_PENDING, &priv->status))
+ return;
+
++ if (test_bit(STATUS_CONF_PENDING, &priv->status))
++ iwl3945_mac_config(priv->hw, ieee80211_get_hw_conf(priv->hw));
+
- #endif /* __iwl_helpers_h__ */
-diff --git a/drivers/net/wireless/iwlwifi/iwl-hw.h b/drivers/net/wireless/iwlwifi/iwl-hw.h
-deleted file mode 100644
-index 1aa6fcd..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-hw.h
-+++ /dev/null
-@@ -1,537 +0,0 @@
--/******************************************************************************
-- *
-- * This file is provided under a dual BSD/GPLv2 license. When using or
-- * redistributing this file, you may do so under either license.
-- *
-- * GPL LICENSE SUMMARY
-- *
-- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of version 2 of the GNU Geeral Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but
-- * WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- * General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; if not, write to the Free Software
-- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
-- * USA
-- *
-- * The full GNU General Public License is included in this distribution
-- * in the file called LICENSE.GPL.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- * BSD LICENSE
-- *
-- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
-- * All rights reserved.
-- *
-- * Redistribution and use in source and binary forms, with or without
-- * modification, are permitted provided that the following conditions
-- * are met:
-- *
-- * * Redistributions of source code must retain the above copyright
-- * notice, this list of conditions and the following disclaimer.
-- * * Redistributions in binary form must reproduce the above copyright
-- * notice, this list of conditions and the following disclaimer in
-- * the documentation and/or other materials provided with the
-- * distribution.
-- * * Neither the name Intel Corporation nor the names of its
-- * contributors may be used to endorse or promote products derived
-- * from this software without specific prior written permission.
-- *
-- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-- *****************************************************************************/
--
--#ifndef __iwlwifi_hw_h__
--#define __iwlwifi_hw_h__
--
--/*
-- * This file defines hardware constants common to 3945 and 4965.
-- *
-- * Device-specific constants are defined in iwl-3945-hw.h and iwl-4965-hw.h,
-- * although this file contains a few definitions for which the .c
-- * implementation is the same for 3945 and 4965, except for the value of
-- * a constant.
-- *
-- * uCode API constants are defined in iwl-commands.h.
-- *
-- * NOTE: DO NOT PUT OS IMPLEMENTATION-SPECIFIC DECLARATIONS HERE
-- *
-- * The iwl-*hw.h (and files they include) files should remain OS/driver
-- * implementation independent, declaring only the hardware interface.
-- */
--
--/* uCode queue management definitions */
--#define IWL_CMD_QUEUE_NUM 4
--#define IWL_CMD_FIFO_NUM 4
--#define IWL_BACK_QUEUE_FIRST_ID 7
--
--/* Tx rates */
--#define IWL_CCK_RATES 4
--#define IWL_OFDM_RATES 8
--
--#if IWL == 3945
--#define IWL_HT_RATES 0
--#elif IWL == 4965
--#define IWL_HT_RATES 16
--#endif
--
--#define IWL_MAX_RATES (IWL_CCK_RATES+IWL_OFDM_RATES+IWL_HT_RATES)
--
--/* Time constants */
--#define SHORT_SLOT_TIME 9
--#define LONG_SLOT_TIME 20
--
--/* RSSI to dBm */
--#if IWL == 3945
--#define IWL_RSSI_OFFSET 95
--#elif IWL == 4965
--#define IWL_RSSI_OFFSET 44
--#endif
--
--#include "iwl-eeprom.h"
--#include "iwl-commands.h"
--
--#define PCI_LINK_CTRL 0x0F0
--#define PCI_POWER_SOURCE 0x0C8
--#define PCI_REG_WUM8 0x0E8
--#define PCI_CFG_PMC_PME_FROM_D3COLD_SUPPORT (0x80000000)
--
--/*=== CSR (control and status registers) ===*/
--#define CSR_BASE (0x000)
--
--#define CSR_SW_VER (CSR_BASE+0x000)
--#define CSR_HW_IF_CONFIG_REG (CSR_BASE+0x000) /* hardware interface config */
--#define CSR_INT_COALESCING (CSR_BASE+0x004) /* accum ints, 32-usec units */
--#define CSR_INT (CSR_BASE+0x008) /* host interrupt status/ack */
--#define CSR_INT_MASK (CSR_BASE+0x00c) /* host interrupt enable */
--#define CSR_FH_INT_STATUS (CSR_BASE+0x010) /* busmaster int status/ack*/
--#define CSR_GPIO_IN (CSR_BASE+0x018) /* read external chip pins */
--#define CSR_RESET (CSR_BASE+0x020) /* busmaster enable, NMI, etc*/
--#define CSR_GP_CNTRL (CSR_BASE+0x024)
--#define CSR_HW_REV (CSR_BASE+0x028)
--#define CSR_EEPROM_REG (CSR_BASE+0x02c)
--#define CSR_EEPROM_GP (CSR_BASE+0x030)
--#define CSR_GP_UCODE (CSR_BASE+0x044)
--#define CSR_UCODE_DRV_GP1 (CSR_BASE+0x054)
--#define CSR_UCODE_DRV_GP1_SET (CSR_BASE+0x058)
--#define CSR_UCODE_DRV_GP1_CLR (CSR_BASE+0x05c)
--#define CSR_UCODE_DRV_GP2 (CSR_BASE+0x060)
--#define CSR_LED_REG (CSR_BASE+0x094)
--#define CSR_DRAM_INT_TBL_CTL (CSR_BASE+0x0A0)
--#define CSR_GIO_CHICKEN_BITS (CSR_BASE+0x100)
--#define CSR_ANA_PLL_CFG (CSR_BASE+0x20c)
--#define CSR_HW_REV_WA_REG (CSR_BASE+0x22C)
--
--/* HW I/F configuration */
--#define CSR_HW_IF_CONFIG_REG_BIT_ALMAGOR_MB (0x00000100)
--#define CSR_HW_IF_CONFIG_REG_BIT_ALMAGOR_MM (0x00000200)
--#define CSR_HW_IF_CONFIG_REG_BIT_SKU_MRC (0x00000400)
--#define CSR_HW_IF_CONFIG_REG_BIT_BOARD_TYPE (0x00000800)
--#define CSR_HW_IF_CONFIG_REG_BITS_SILICON_TYPE_A (0x00000000)
--#define CSR_HW_IF_CONFIG_REG_BITS_SILICON_TYPE_B (0x00001000)
--#define CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM (0x00200000)
--
--/* interrupt flags in INTA, set by uCode or hardware (e.g. dma),
-- * acknowledged (reset) by host writing "1" to flagged bits. */
--#define CSR_INT_BIT_FH_RX (1<<31) /* Rx DMA, cmd responses, FH_INT[17:16] */
--#define CSR_INT_BIT_HW_ERR (1<<29) /* DMA hardware error FH_INT[31] */
--#define CSR_INT_BIT_DNLD (1<<28) /* uCode Download */
--#define CSR_INT_BIT_FH_TX (1<<27) /* Tx DMA FH_INT[1:0] */
--#define CSR_INT_BIT_MAC_CLK_ACTV (1<<26) /* NIC controller's clock toggled on/off */
--#define CSR_INT_BIT_SW_ERR (1<<25) /* uCode error */
--#define CSR_INT_BIT_RF_KILL (1<<7) /* HW RFKILL switch GP_CNTRL[27] toggled */
--#define CSR_INT_BIT_CT_KILL (1<<6) /* Critical temp (chip too hot) rfkill */
--#define CSR_INT_BIT_SW_RX (1<<3) /* Rx, command responses, 3945 */
--#define CSR_INT_BIT_WAKEUP (1<<1) /* NIC controller waking up (pwr mgmt) */
--#define CSR_INT_BIT_ALIVE (1<<0) /* uCode interrupts once it initializes */
--
--#define CSR_INI_SET_MASK (CSR_INT_BIT_FH_RX | \
-- CSR_INT_BIT_HW_ERR | \
-- CSR_INT_BIT_FH_TX | \
-- CSR_INT_BIT_SW_ERR | \
-- CSR_INT_BIT_RF_KILL | \
-- CSR_INT_BIT_SW_RX | \
-- CSR_INT_BIT_WAKEUP | \
-- CSR_INT_BIT_ALIVE)
--
--/* interrupt flags in FH (flow handler) (PCI busmaster DMA) */
--#define CSR_FH_INT_BIT_ERR (1<<31) /* Error */
--#define CSR_FH_INT_BIT_HI_PRIOR (1<<30) /* High priority Rx, bypass coalescing */
--#define CSR_FH_INT_BIT_RX_CHNL2 (1<<18) /* Rx channel 2 (3945 only) */
--#define CSR_FH_INT_BIT_RX_CHNL1 (1<<17) /* Rx channel 1 */
--#define CSR_FH_INT_BIT_RX_CHNL0 (1<<16) /* Rx channel 0 */
--#define CSR_FH_INT_BIT_TX_CHNL6 (1<<6) /* Tx channel 6 (3945 only) */
--#define CSR_FH_INT_BIT_TX_CHNL1 (1<<1) /* Tx channel 1 */
--#define CSR_FH_INT_BIT_TX_CHNL0 (1<<0) /* Tx channel 0 */
--
--#define CSR_FH_INT_RX_MASK (CSR_FH_INT_BIT_HI_PRIOR | \
-- CSR_FH_INT_BIT_RX_CHNL2 | \
-- CSR_FH_INT_BIT_RX_CHNL1 | \
-- CSR_FH_INT_BIT_RX_CHNL0)
--
--#define CSR_FH_INT_TX_MASK (CSR_FH_INT_BIT_TX_CHNL6 | \
-- CSR_FH_INT_BIT_TX_CHNL1 | \
-- CSR_FH_INT_BIT_TX_CHNL0 )
--
--
--/* RESET */
--#define CSR_RESET_REG_FLAG_NEVO_RESET (0x00000001)
--#define CSR_RESET_REG_FLAG_FORCE_NMI (0x00000002)
--#define CSR_RESET_REG_FLAG_SW_RESET (0x00000080)
--#define CSR_RESET_REG_FLAG_MASTER_DISABLED (0x00000100)
--#define CSR_RESET_REG_FLAG_STOP_MASTER (0x00000200)
--
--/* GP (general purpose) CONTROL */
--#define CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY (0x00000001)
--#define CSR_GP_CNTRL_REG_FLAG_INIT_DONE (0x00000004)
--#define CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ (0x00000008)
--#define CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP (0x00000010)
--
--#define CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN (0x00000001)
--
--#define CSR_GP_CNTRL_REG_MSK_POWER_SAVE_TYPE (0x07000000)
--#define CSR_GP_CNTRL_REG_FLAG_MAC_POWER_SAVE (0x04000000)
--#define CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW (0x08000000)
--
--
--/* EEPROM REG */
--#define CSR_EEPROM_REG_READ_VALID_MSK (0x00000001)
--#define CSR_EEPROM_REG_BIT_CMD (0x00000002)
--
--/* EEPROM GP */
--#define CSR_EEPROM_GP_VALID_MSK (0x00000006)
--#define CSR_EEPROM_GP_BAD_SIGNATURE (0x00000000)
--#define CSR_EEPROM_GP_IF_OWNER_MSK (0x00000180)
--
--/* UCODE DRV GP */
--#define CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP (0x00000001)
--#define CSR_UCODE_SW_BIT_RFKILL (0x00000002)
--#define CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED (0x00000004)
--#define CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT (0x00000008)
--
--/* GPIO */
--#define CSR_GPIO_IN_BIT_AUX_POWER (0x00000200)
--#define CSR_GPIO_IN_VAL_VAUX_PWR_SRC (0x00000000)
--#define CSR_GPIO_IN_VAL_VMAIN_PWR_SRC CSR_GPIO_IN_BIT_AUX_POWER
--
--/* GI Chicken Bits */
--#define CSR_GIO_CHICKEN_BITS_REG_BIT_L1A_NO_L0S_RX (0x00800000)
--#define CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER (0x20000000)
--
--/* CSR_ANA_PLL_CFG */
--#define CSR_ANA_PLL_CFG_SH (0x00880300)
--
--#define CSR_LED_REG_TRUN_ON (0x00000078)
--#define CSR_LED_REG_TRUN_OFF (0x00000038)
--#define CSR_LED_BSM_CTRL_MSK (0xFFFFFFDF)
--
--/* DRAM_INT_TBL_CTRL */
--#define CSR_DRAM_INT_TBL_CTRL_EN (1<<31)
--#define CSR_DRAM_INT_TBL_CTRL_WRAP_CHK (1<<27)
--
--/*=== HBUS (Host-side Bus) ===*/
--#define HBUS_BASE (0x400)
--
--#define HBUS_TARG_MEM_RADDR (HBUS_BASE+0x00c)
--#define HBUS_TARG_MEM_WADDR (HBUS_BASE+0x010)
--#define HBUS_TARG_MEM_WDAT (HBUS_BASE+0x018)
--#define HBUS_TARG_MEM_RDAT (HBUS_BASE+0x01c)
--#define HBUS_TARG_PRPH_WADDR (HBUS_BASE+0x044)
--#define HBUS_TARG_PRPH_RADDR (HBUS_BASE+0x048)
--#define HBUS_TARG_PRPH_WDAT (HBUS_BASE+0x04c)
--#define HBUS_TARG_PRPH_RDAT (HBUS_BASE+0x050)
--#define HBUS_TARG_WRPTR (HBUS_BASE+0x060)
--
--#define HBUS_TARG_MBX_C (HBUS_BASE+0x030)
--
--
--/* SCD (Scheduler) */
--#define SCD_BASE (CSR_BASE + 0x2E00)
--
--#define SCD_MODE_REG (SCD_BASE + 0x000)
--#define SCD_ARASTAT_REG (SCD_BASE + 0x004)
--#define SCD_TXFACT_REG (SCD_BASE + 0x010)
--#define SCD_TXF4MF_REG (SCD_BASE + 0x014)
--#define SCD_TXF5MF_REG (SCD_BASE + 0x020)
--#define SCD_SBYP_MODE_1_REG (SCD_BASE + 0x02C)
--#define SCD_SBYP_MODE_2_REG (SCD_BASE + 0x030)
--
--/*=== FH (data Flow Handler) ===*/
--#define FH_BASE (0x800)
--
--#define FH_CBCC_TABLE (FH_BASE+0x140)
--#define FH_TFDB_TABLE (FH_BASE+0x180)
--#define FH_RCSR_TABLE (FH_BASE+0x400)
--#define FH_RSSR_TABLE (FH_BASE+0x4c0)
--#define FH_TCSR_TABLE (FH_BASE+0x500)
--#define FH_TSSR_TABLE (FH_BASE+0x680)
--
--/* TFDB (Transmit Frame Buffer Descriptor) */
--#define FH_TFDB(_channel, buf) \
-- (FH_TFDB_TABLE+((_channel)*2+(buf))*0x28)
--#define ALM_FH_TFDB_CHNL_BUF_CTRL_REG(_channel) \
-- (FH_TFDB_TABLE + 0x50 * _channel)
--/* CBCC _channel is [0,2] */
--#define FH_CBCC(_channel) (FH_CBCC_TABLE+(_channel)*0x8)
--#define FH_CBCC_CTRL(_channel) (FH_CBCC(_channel)+0x00)
--#define FH_CBCC_BASE(_channel) (FH_CBCC(_channel)+0x04)
--
--/* RCSR _channel is [0,2] */
--#define FH_RCSR(_channel) (FH_RCSR_TABLE+(_channel)*0x40)
--#define FH_RCSR_CONFIG(_channel) (FH_RCSR(_channel)+0x00)
--#define FH_RCSR_RBD_BASE(_channel) (FH_RCSR(_channel)+0x04)
--#define FH_RCSR_WPTR(_channel) (FH_RCSR(_channel)+0x20)
--#define FH_RCSR_RPTR_ADDR(_channel) (FH_RCSR(_channel)+0x24)
--
--#if IWL == 3945
--#define FH_RSCSR_CHNL0_WPTR (FH_RCSR_WPTR(0))
--#elif IWL == 4965
--#define FH_RSCSR_CHNL0_WPTR (FH_RSCSR_CHNL0_RBDCB_WPTR_REG)
--#endif
--
--/* RSSR */
--#define FH_RSSR_CTRL (FH_RSSR_TABLE+0x000)
--#define FH_RSSR_STATUS (FH_RSSR_TABLE+0x004)
--/* TCSR */
--#define FH_TCSR(_channel) (FH_TCSR_TABLE+(_channel)*0x20)
--#define FH_TCSR_CONFIG(_channel) (FH_TCSR(_channel)+0x00)
--#define FH_TCSR_CREDIT(_channel) (FH_TCSR(_channel)+0x04)
--#define FH_TCSR_BUFF_STTS(_channel) (FH_TCSR(_channel)+0x08)
--/* TSSR */
--#define FH_TSSR_CBB_BASE (FH_TSSR_TABLE+0x000)
--#define FH_TSSR_MSG_CONFIG (FH_TSSR_TABLE+0x008)
--#define FH_TSSR_TX_STATUS (FH_TSSR_TABLE+0x010)
--/* 18 - reserved */
--
--/* card static random access memory (SRAM) for processor data and instructs */
--#define RTC_INST_LOWER_BOUND (0x000000)
--#define RTC_DATA_LOWER_BOUND (0x800000)
--
--
--/* DBM */
--
--#define ALM_FH_SRVC_CHNL (6)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_POS_RBDC_SIZE (20)
--#define ALM_FH_RCSR_RX_CONFIG_REG_POS_IRQ_RBTH (4)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_BIT_WR_STTS_EN (0x08000000)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_DMA_CHNL_EN_ENABLE (0x80000000)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_RDRBD_EN_ENABLE (0x20000000)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_MAX_FRAG_SIZE_128 (0x01000000)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_IRQ_DEST_INT_HOST (0x00001000)
--
--#define ALM_FH_RCSR_RX_CONFIG_REG_VAL_MSG_MODE_FH (0x00000000)
--
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_TXF (0x00000000)
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_MSG_MODE_DRIVER (0x00000001)
--
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE_VAL (0x00000000)
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE_VAL (0x00000008)
--
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_IFTFD (0x00200000)
--
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_RTC_NOINT (0x00000000)
--
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE (0x00000000)
--#define ALM_FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE (0x80000000)
--
--#define ALM_FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_VALID (0x00004000)
--
--#define ALM_FH_TCSR_CHNL_TX_BUF_STS_REG_BIT_TFDB_WPTR (0x00000001)
--
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_SNOOP_RD_TXPD_ON (0xFF000000)
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RD_TXPD_ON (0x00FF0000)
--
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_MAX_FRAG_SIZE_128B (0x00000400)
--
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_SNOOP_RD_TFD_ON (0x00000100)
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RD_CBB_ON (0x00000080)
--
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_ORDER_RSP_WAIT_TH (0x00000020)
--#define ALM_FH_TSSR_TX_MSG_CONFIG_REG_VAL_RSP_WAIT_TH (0x00000005)
--
--#define ALM_TB_MAX_BYTES_COUNT (0xFFF0)
--
--#define ALM_FH_TSSR_TX_STATUS_REG_BIT_BUFS_EMPTY(_channel) \
-- ((1LU << _channel) << 24)
--#define ALM_FH_TSSR_TX_STATUS_REG_BIT_NO_PEND_REQ(_channel) \
-- ((1LU << _channel) << 16)
--
--#define ALM_FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE(_channel) \
-- (ALM_FH_TSSR_TX_STATUS_REG_BIT_BUFS_EMPTY(_channel) | \
-- ALM_FH_TSSR_TX_STATUS_REG_BIT_NO_PEND_REQ(_channel))
--#define PCI_CFG_REV_ID_BIT_BASIC_SKU (0x40) /* bit 6 */
--#define PCI_CFG_REV_ID_BIT_RTP (0x80) /* bit 7 */
--
--#define HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED (0x00000004)
--
--#define TFD_QUEUE_MIN 0
--#define TFD_QUEUE_MAX 6
--#define TFD_QUEUE_SIZE_MAX (256)
--
--/* spectrum and channel data structures */
--#define IWL_NUM_SCAN_RATES (2)
--
--#define IWL_SCAN_FLAG_24GHZ (1<<0)
--#define IWL_SCAN_FLAG_52GHZ (1<<1)
--#define IWL_SCAN_FLAG_ACTIVE (1<<2)
--#define IWL_SCAN_FLAG_DIRECT (1<<3)
--
--#define IWL_MAX_CMD_SIZE 1024
--
--#define IWL_DEFAULT_TX_RETRY 15
--#define IWL_MAX_TX_RETRY 16
--
--/*********************************************/
--
--#define RFD_SIZE 4
--#define NUM_TFD_CHUNKS 4
--
--#define RX_QUEUE_SIZE 256
--#define RX_QUEUE_MASK 255
--#define RX_QUEUE_SIZE_LOG 8
--
--/* QoS definitions */
--
--#define CW_MIN_OFDM 15
--#define CW_MAX_OFDM 1023
--#define CW_MIN_CCK 31
--#define CW_MAX_CCK 1023
--
--#define QOS_TX0_CW_MIN_OFDM CW_MIN_OFDM
--#define QOS_TX1_CW_MIN_OFDM CW_MIN_OFDM
--#define QOS_TX2_CW_MIN_OFDM ((CW_MIN_OFDM + 1) / 2 - 1)
--#define QOS_TX3_CW_MIN_OFDM ((CW_MIN_OFDM + 1) / 4 - 1)
--
--#define QOS_TX0_CW_MIN_CCK CW_MIN_CCK
--#define QOS_TX1_CW_MIN_CCK CW_MIN_CCK
--#define QOS_TX2_CW_MIN_CCK ((CW_MIN_CCK + 1) / 2 - 1)
--#define QOS_TX3_CW_MIN_CCK ((CW_MIN_CCK + 1) / 4 - 1)
--
--#define QOS_TX0_CW_MAX_OFDM CW_MAX_OFDM
--#define QOS_TX1_CW_MAX_OFDM CW_MAX_OFDM
--#define QOS_TX2_CW_MAX_OFDM CW_MIN_OFDM
--#define QOS_TX3_CW_MAX_OFDM ((CW_MIN_OFDM + 1) / 2 - 1)
--
--#define QOS_TX0_CW_MAX_CCK CW_MAX_CCK
--#define QOS_TX1_CW_MAX_CCK CW_MAX_CCK
--#define QOS_TX2_CW_MAX_CCK CW_MIN_CCK
--#define QOS_TX3_CW_MAX_CCK ((CW_MIN_CCK + 1) / 2 - 1)
--
--#define QOS_TX0_AIFS 3
--#define QOS_TX1_AIFS 7
--#define QOS_TX2_AIFS 2
--#define QOS_TX3_AIFS 2
--
--#define QOS_TX0_ACM 0
--#define QOS_TX1_ACM 0
--#define QOS_TX2_ACM 0
--#define QOS_TX3_ACM 0
--
--#define QOS_TX0_TXOP_LIMIT_CCK 0
--#define QOS_TX1_TXOP_LIMIT_CCK 0
--#define QOS_TX2_TXOP_LIMIT_CCK 6016
--#define QOS_TX3_TXOP_LIMIT_CCK 3264
--
--#define QOS_TX0_TXOP_LIMIT_OFDM 0
--#define QOS_TX1_TXOP_LIMIT_OFDM 0
--#define QOS_TX2_TXOP_LIMIT_OFDM 3008
--#define QOS_TX3_TXOP_LIMIT_OFDM 1504
--
--#define DEF_TX0_CW_MIN_OFDM CW_MIN_OFDM
--#define DEF_TX1_CW_MIN_OFDM CW_MIN_OFDM
--#define DEF_TX2_CW_MIN_OFDM CW_MIN_OFDM
--#define DEF_TX3_CW_MIN_OFDM CW_MIN_OFDM
--
--#define DEF_TX0_CW_MIN_CCK CW_MIN_CCK
--#define DEF_TX1_CW_MIN_CCK CW_MIN_CCK
--#define DEF_TX2_CW_MIN_CCK CW_MIN_CCK
--#define DEF_TX3_CW_MIN_CCK CW_MIN_CCK
--
--#define DEF_TX0_CW_MAX_OFDM CW_MAX_OFDM
--#define DEF_TX1_CW_MAX_OFDM CW_MAX_OFDM
--#define DEF_TX2_CW_MAX_OFDM CW_MAX_OFDM
--#define DEF_TX3_CW_MAX_OFDM CW_MAX_OFDM
--
--#define DEF_TX0_CW_MAX_CCK CW_MAX_CCK
--#define DEF_TX1_CW_MAX_CCK CW_MAX_CCK
--#define DEF_TX2_CW_MAX_CCK CW_MAX_CCK
--#define DEF_TX3_CW_MAX_CCK CW_MAX_CCK
--
--#define DEF_TX0_AIFS (2)
--#define DEF_TX1_AIFS (2)
--#define DEF_TX2_AIFS (2)
--#define DEF_TX3_AIFS (2)
--
--#define DEF_TX0_ACM 0
--#define DEF_TX1_ACM 0
--#define DEF_TX2_ACM 0
--#define DEF_TX3_ACM 0
--
--#define DEF_TX0_TXOP_LIMIT_CCK 0
--#define DEF_TX1_TXOP_LIMIT_CCK 0
--#define DEF_TX2_TXOP_LIMIT_CCK 0
--#define DEF_TX3_TXOP_LIMIT_CCK 0
--
--#define DEF_TX0_TXOP_LIMIT_OFDM 0
--#define DEF_TX1_TXOP_LIMIT_OFDM 0
--#define DEF_TX2_TXOP_LIMIT_OFDM 0
--#define DEF_TX3_TXOP_LIMIT_OFDM 0
--
--#define QOS_QOS_SETS 3
--#define QOS_PARAM_SET_ACTIVE 0
--#define QOS_PARAM_SET_DEF_CCK 1
--#define QOS_PARAM_SET_DEF_OFDM 2
--
--#define CTRL_QOS_NO_ACK (0x0020)
--#define DCT_FLAG_EXT_QOS_ENABLED (0x10)
--
--#define U32_PAD(n) ((4-(n))&0x3)
--
--/*
-- * Generic queue structure
-- *
-- * Contains common data for Rx and Tx queues
-- */
--#define TFD_CTL_COUNT_SET(n) (n<<24)
--#define TFD_CTL_COUNT_GET(ctl) ((ctl>>24) & 7)
--#define TFD_CTL_PAD_SET(n) (n<<28)
--#define TFD_CTL_PAD_GET(ctl) (ctl>>28)
--
--#define TFD_TX_CMD_SLOTS 256
--#define TFD_CMD_SLOTS 32
--
--#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_cmd) - \
-- sizeof(struct iwl_cmd_meta))
--
--/*
-- * RX related structures and functions
-- */
--#define RX_FREE_BUFFERS 64
--#define RX_LOW_WATERMARK 8
--
--#endif /* __iwlwifi_hw_h__ */
-diff --git a/drivers/net/wireless/iwlwifi/iwl-io.h b/drivers/net/wireless/iwlwifi/iwl-io.h
-deleted file mode 100644
-index 8a8b96f..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-io.h
-+++ /dev/null
-@@ -1,470 +0,0 @@
--/******************************************************************************
-- *
-- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
-- *
-- * Portions of this file are derived from the ipw3945 project.
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of version 2 of the GNU General Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but WITHOUT
-- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-- * more details.
-- *
-- * You should have received a copy of the GNU General Public License along with
-- * this program; if not, write to the Free Software Foundation, Inc.,
-- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
-- *
-- * The full GNU General Public License is included in this distribution in the
-- * file called LICENSE.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- *****************************************************************************/
--
--#ifndef __iwl_io_h__
--#define __iwl_io_h__
--
--#include <linux/io.h>
--
--#include "iwl-debug.h"
--
--/*
-- * IO, register, and NIC memory access functions
-- *
-- * NOTE on naming convention and macro usage for these
-- *
-- * A single _ prefix before a an access function means that no state
-- * check or debug information is printed when that function is called.
-- *
-- * A double __ prefix before an access function means that state is checked
-- * (in the case of *restricted calls) and the current line number is printed
-- * in addition to any other debug output.
-- *
-- * The non-prefixed name is the #define that maps the caller into a
-- * #define that provides the caller's __LINE__ to the double prefix version.
-- *
-- * If you wish to call the function without any debug or state checking,
-- * you should use the single _ prefix version (as is used by dependent IO
-- * routines, for example _iwl_read_restricted calls the non-check version of
-- * _iwl_read32.)
-- *
-- * These declarations are *extremely* useful in quickly isolating code deltas
-- * which result in misconfiguring of the hardware I/O. In combination with
-- * git-bisect and the IO debug level you can quickly determine the specific
-- * commit which breaks the IO sequence to the hardware.
-- *
-- */
--
--#define _iwl_write32(iwl, ofs, val) writel((val), (iwl)->hw_base + (ofs))
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_write32(const char *f, u32 l, struct iwl_priv *iwl,
-- u32 ofs, u32 val)
--{
-- IWL_DEBUG_IO("write_direct32(0x%08X, 0x%08X) - %s %d\n",
-- (u32) (ofs), (u32) (val), f, l);
-- _iwl_write32(iwl, ofs, val);
--}
--#define iwl_write32(iwl, ofs, val) \
-- __iwl_write32(__FILE__, __LINE__, iwl, ofs, val)
--#else
--#define iwl_write32(iwl, ofs, val) _iwl_write32(iwl, ofs, val)
--#endif
--
--#define _iwl_read32(iwl, ofs) readl((iwl)->hw_base + (ofs))
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline u32 __iwl_read32(char *f, u32 l, struct iwl_priv *iwl, u32 ofs)
--{
-- IWL_DEBUG_IO("read_direct32(0x%08X) - %s %d\n", ofs, f, l);
-- return _iwl_read32(iwl, ofs);
--}
--#define iwl_read32(iwl, ofs) __iwl_read32(__FILE__, __LINE__, iwl, ofs)
--#else
--#define iwl_read32(p, o) _iwl_read32(p, o)
--#endif
--
--static inline int _iwl_poll_bit(struct iwl_priv *priv, u32 addr,
-- u32 bits, u32 mask, int timeout)
--{
-- int i = 0;
--
-- do {
-- if ((_iwl_read32(priv, addr) & mask) == (bits & mask))
-- return i;
-- mdelay(10);
-- i += 10;
-- } while (i < timeout);
--
-- return -ETIMEDOUT;
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline int __iwl_poll_bit(const char *f, u32 l,
-- struct iwl_priv *priv, u32 addr,
-- u32 bits, u32 mask, int timeout)
--{
-- int rc = _iwl_poll_bit(priv, addr, bits, mask, timeout);
-- if (unlikely(rc == -ETIMEDOUT))
-- IWL_DEBUG_IO
-- ("poll_bit(0x%08X, 0x%08X, 0x%08X) - timedout - %s %d\n",
-- addr, bits, mask, f, l);
-- else
-- IWL_DEBUG_IO
-- ("poll_bit(0x%08X, 0x%08X, 0x%08X) = 0x%08X - %s %d\n",
-- addr, bits, mask, rc, f, l);
-- return rc;
--}
--#define iwl_poll_bit(iwl, addr, bits, mask, timeout) \
-- __iwl_poll_bit(__FILE__, __LINE__, iwl, addr, bits, mask, timeout)
--#else
--#define iwl_poll_bit(p, a, b, m, t) _iwl_poll_bit(p, a, b, m, t)
--#endif
--
--static inline void _iwl_set_bit(struct iwl_priv *priv, u32 reg, u32 mask)
--{
-- _iwl_write32(priv, reg, _iwl_read32(priv, reg) | mask);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_set_bit(const char *f, u32 l,
-- struct iwl_priv *priv, u32 reg, u32 mask)
--{
-- u32 val = _iwl_read32(priv, reg) | mask;
-- IWL_DEBUG_IO("set_bit(0x%08X, 0x%08X) = 0x%08X\n", reg, mask, val);
-- _iwl_write32(priv, reg, val);
--}
--#define iwl_set_bit(p, r, m) __iwl_set_bit(__FILE__, __LINE__, p, r, m)
--#else
--#define iwl_set_bit(p, r, m) _iwl_set_bit(p, r, m)
--#endif
--
--static inline void _iwl_clear_bit(struct iwl_priv *priv, u32 reg, u32 mask)
--{
-- _iwl_write32(priv, reg, _iwl_read32(priv, reg) & ~mask);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_clear_bit(const char *f, u32 l,
-- struct iwl_priv *priv, u32 reg, u32 mask)
--{
-- u32 val = _iwl_read32(priv, reg) & ~mask;
-- IWL_DEBUG_IO("clear_bit(0x%08X, 0x%08X) = 0x%08X\n", reg, mask, val);
-- _iwl_write32(priv, reg, val);
--}
--#define iwl_clear_bit(p, r, m) __iwl_clear_bit(__FILE__, __LINE__, p, r, m)
--#else
--#define iwl_clear_bit(p, r, m) _iwl_clear_bit(p, r, m)
--#endif
--
--static inline int _iwl_grab_restricted_access(struct iwl_priv *priv)
--{
-- int rc;
-- u32 gp_ctl;
--
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (atomic_read(&priv->restrict_refcnt))
+ ieee80211_scan_completed(priv->hw);
+
+ /* Since setting the TXPOWER may have been deferred while
+ * performing the scan, fire one off */
+ mutex_lock(&priv->mutex);
+- iwl_hw_reg_send_txpower(priv);
++ iwl3945_hw_reg_send_txpower(priv);
+ mutex_unlock(&priv->mutex);
+ }
+
+@@ -6878,50 +6854,123 @@ static void iwl_bg_scan_completed(struct work_struct *work)
+ *
+ *****************************************************************************/
+
+-static int iwl_mac_start(struct ieee80211_hw *hw)
++#define UCODE_READY_TIMEOUT (2 * HZ)
++
++static int iwl3945_mac_start(struct ieee80211_hw *hw)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
++ int ret;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
++ if (pci_enable_device(priv->pci_dev)) {
++ IWL_ERROR("Fail to pci_enable_device\n");
++ return -ENODEV;
++ }
++ pci_restore_state(priv->pci_dev);
++ pci_enable_msi(priv->pci_dev);
++
++ ret = request_irq(priv->pci_dev->irq, iwl3945_isr, IRQF_SHARED,
++ DRV_NAME, priv);
++ if (ret) {
++ IWL_ERROR("Error allocating IRQ %d\n", priv->pci_dev->irq);
++ goto out_disable_msi;
++ }
++
+ /* we should be verifying the device is ready to be opened */
+ mutex_lock(&priv->mutex);
+
+- priv->is_open = 1;
++ memset(&priv->staging_rxon, 0, sizeof(struct iwl3945_rxon_cmd));
++ /* fetch ucode file from disk, alloc and copy to bus-master buffers ...
++ * ucode filename and max sizes are card-specific. */
+
+- if (!iwl_is_rfkill(priv))
+- ieee80211_start_queues(priv->hw);
++ if (!priv->ucode_code.len) {
++ ret = iwl3945_read_ucode(priv);
++ if (ret) {
++ IWL_ERROR("Could not read microcode: %d\n", ret);
++ mutex_unlock(&priv->mutex);
++ goto out_release_irq;
++ }
++ }
++
++ ret = __iwl3945_up(priv);
+
+ mutex_unlock(&priv->mutex);
++
++ if (ret)
++ goto out_release_irq;
++
++ IWL_DEBUG_INFO("Start UP work.\n");
++
++ if (test_bit(STATUS_IN_SUSPEND, &priv->status))
++ return 0;
++
++ /* Wait for START_ALIVE from ucode. Otherwise callbacks from
++ * mac80211 will not be run successfully. */
++ ret = wait_event_interruptible_timeout(priv->wait_command_queue,
++ test_bit(STATUS_READY, &priv->status),
++ UCODE_READY_TIMEOUT);
++ if (!ret) {
++ if (!test_bit(STATUS_READY, &priv->status)) {
++ IWL_ERROR("Wait for START_ALIVE timeout after %dms.\n",
++ jiffies_to_msecs(UCODE_READY_TIMEOUT));
++ ret = -ETIMEDOUT;
++ goto out_release_irq;
++ }
++ }
++
++ priv->is_open = 1;
+ IWL_DEBUG_MAC80211("leave\n");
+ return 0;
++
++out_release_irq:
++ free_irq(priv->pci_dev->irq, priv);
++out_disable_msi:
++ pci_disable_msi(priv->pci_dev);
++ pci_disable_device(priv->pci_dev);
++ priv->is_open = 0;
++ IWL_DEBUG_MAC80211("leave - failed\n");
++ return ret;
+ }
+
+-static void iwl_mac_stop(struct ieee80211_hw *hw)
++static void iwl3945_mac_stop(struct ieee80211_hw *hw)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
++ if (!priv->is_open) {
++ IWL_DEBUG_MAC80211("leave - skip\n");
++ return;
++ }
+
+- mutex_lock(&priv->mutex);
+- /* stop mac, cancel any scan request and clear
+- * RXON_FILTER_ASSOC_MSK BIT
+- */
+ priv->is_open = 0;
+- iwl_scan_cancel_timeout(priv, 100);
+- cancel_delayed_work(&priv->post_associate);
+- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
+- mutex_unlock(&priv->mutex);
++
++ if (iwl3945_is_ready_rf(priv)) {
++ /* stop mac, cancel any scan request and clear
++ * RXON_FILTER_ASSOC_MSK BIT
++ */
++ mutex_lock(&priv->mutex);
++ iwl3945_scan_cancel_timeout(priv, 100);
++ cancel_delayed_work(&priv->post_associate);
++ mutex_unlock(&priv->mutex);
++ }
++
++ iwl3945_down(priv);
++
++ flush_workqueue(priv->workqueue);
++ free_irq(priv->pci_dev->irq, priv);
++ pci_disable_msi(priv->pci_dev);
++ pci_save_state(priv->pci_dev);
++ pci_disable_device(priv->pci_dev);
+
+ IWL_DEBUG_MAC80211("leave\n");
+ }
+
+-static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
++static int iwl3945_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct ieee80211_tx_control *ctl)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+@@ -6933,29 +6982,29 @@ static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
+ IWL_DEBUG_TX("dev->xmit(%d bytes) at rate 0x%02x\n", skb->len,
+ ctl->tx_rate);
+
+- if (iwl_tx_skb(priv, skb, ctl))
++ if (iwl3945_tx_skb(priv, skb, ctl))
+ dev_kfree_skb_any(skb);
+
+ IWL_DEBUG_MAC80211("leave\n");
+ return 0;
+ }
+
+-static int iwl_mac_add_interface(struct ieee80211_hw *hw,
++static int iwl3945_mac_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_if_init_conf *conf)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ unsigned long flags;
+ DECLARE_MAC_BUF(mac);
+
+- IWL_DEBUG_MAC80211("enter: id %d, type %d\n", conf->if_id, conf->type);
++ IWL_DEBUG_MAC80211("enter: type %d\n", conf->type);
+
+- if (priv->interface_id) {
+- IWL_DEBUG_MAC80211("leave - interface_id != 0\n");
++ if (priv->vif) {
++ IWL_DEBUG_MAC80211("leave - vif != NULL\n");
+ return -EOPNOTSUPP;
+ }
+
+ spin_lock_irqsave(&priv->lock, flags);
+- priv->interface_id = conf->if_id;
++ priv->vif = conf->vif;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+@@ -6966,106 +7015,108 @@ static int iwl_mac_add_interface(struct ieee80211_hw *hw,
+ memcpy(priv->mac_addr, conf->mac_addr, ETH_ALEN);
+ }
+
+- iwl_set_mode(priv, conf->type);
++ if (iwl3945_is_ready(priv))
++ iwl3945_set_mode(priv, conf->type);
+
+- IWL_DEBUG_MAC80211("leave\n");
+ mutex_unlock(&priv->mutex);
+
++ IWL_DEBUG_MAC80211("leave\n");
+ return 0;
+ }
+
+ /**
+- * iwl_mac_config - mac80211 config callback
++ * iwl3945_mac_config - mac80211 config callback
+ *
+ * We ignore conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME since it seems to
+ * be set inappropriately and the driver currently sets the hardware up to
+ * use it whenever needed.
+ */
+-static int iwl_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
++static int iwl3945_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+ {
+- struct iwl_priv *priv = hw->priv;
+- const struct iwl_channel_info *ch_info;
++ struct iwl3945_priv *priv = hw->priv;
++ const struct iwl3945_channel_info *ch_info;
+ unsigned long flags;
++ int ret = 0;
+
+ mutex_lock(&priv->mutex);
+ IWL_DEBUG_MAC80211("enter to channel %d\n", conf->channel);
+
+- if (!iwl_is_ready(priv)) {
++ priv->add_radiotap = !!(conf->flags & IEEE80211_CONF_RADIOTAP);
++
++ if (!iwl3945_is_ready(priv)) {
+ IWL_DEBUG_MAC80211("leave - not ready\n");
+- mutex_unlock(&priv->mutex);
+- return -EIO;
++ ret = -EIO;
++ goto out;
+ }
+
+- /* TODO: Figure out how to get ieee80211_local->sta_scanning w/ only
+- * what is exposed through include/ declrations */
+- if (unlikely(!iwl_param_disable_hw_scan &&
++ if (unlikely(!iwl3945_param_disable_hw_scan &&
+ test_bit(STATUS_SCANNING, &priv->status))) {
+ IWL_DEBUG_MAC80211("leave - scanning\n");
++ set_bit(STATUS_CONF_PENDING, &priv->status);
+ mutex_unlock(&priv->mutex);
+ return 0;
+ }
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+- ch_info = iwl_get_channel_info(priv, conf->phymode, conf->channel);
++ ch_info = iwl3945_get_channel_info(priv, conf->phymode, conf->channel);
+ if (!is_channel_valid(ch_info)) {
+ IWL_DEBUG_SCAN("Channel %d [%d] is INVALID for this SKU.\n",
+ conf->channel, conf->phymode);
+ IWL_DEBUG_MAC80211("leave - invalid channel\n");
+ spin_unlock_irqrestore(&priv->lock, flags);
+- mutex_unlock(&priv->mutex);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out;
+ }
+
+- iwl_set_rxon_channel(priv, conf->phymode, conf->channel);
++ iwl3945_set_rxon_channel(priv, conf->phymode, conf->channel);
+
+- iwl_set_flags_for_phymode(priv, conf->phymode);
++ iwl3945_set_flags_for_phymode(priv, conf->phymode);
+
+ /* The list of supported rates and rate mask can be different
+ * for each phymode; since the phymode may have changed, reset
+ * the rate mask to what mac80211 lists */
+- iwl_set_rate(priv);
++ iwl3945_set_rate(priv);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ #ifdef IEEE80211_CONF_CHANNEL_SWITCH
+ if (conf->flags & IEEE80211_CONF_CHANNEL_SWITCH) {
+- iwl_hw_channel_switch(priv, conf->channel);
+- mutex_unlock(&priv->mutex);
- return 0;
--#endif
-- if (test_bit(STATUS_RF_KILL_HW, &priv->status) ||
-- test_bit(STATUS_RF_KILL_SW, &priv->status)) {
-- IWL_WARNING("WARNING: Requesting MAC access during RFKILL "
-- "wakes up NIC\n");
--
-- /* 10 msec allows time for NIC to complete its data save */
-- gp_ctl = _iwl_read32(priv, CSR_GP_CNTRL);
-- if (gp_ctl & CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY) {
-- IWL_DEBUG_RF_KILL("Wait for complete power-down, "
-- "gpctl = 0x%08x\n", gp_ctl);
-- mdelay(10);
-- } else
-- IWL_DEBUG_RF_KILL("power-down complete, "
-- "gpctl = 0x%08x\n", gp_ctl);
-- }
--
-- /* this bit wakes up the NIC */
-- _iwl_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-- rc = _iwl_poll_bit(priv, CSR_GP_CNTRL,
-- CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN,
-- (CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY |
-- CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP), 50);
-- if (rc < 0) {
-- IWL_ERROR("MAC is in deep sleep!\n");
++ iwl3945_hw_channel_switch(priv, conf->channel);
++ goto out;
+ }
+ #endif
+
+- iwl_radio_kill_sw(priv, !conf->radio_enabled);
++ iwl3945_radio_kill_sw(priv, !conf->radio_enabled);
+
+ if (!conf->radio_enabled) {
+ IWL_DEBUG_MAC80211("leave - radio disabled\n");
+- mutex_unlock(&priv->mutex);
+- return 0;
++ goto out;
+ }
+
+- if (iwl_is_rfkill(priv)) {
++ if (iwl3945_is_rfkill(priv)) {
+ IWL_DEBUG_MAC80211("leave - RF kill\n");
+- mutex_unlock(&priv->mutex);
- return -EIO;
-- }
++ ret = -EIO;
++ goto out;
+ }
+
+- iwl_set_rate(priv);
++ iwl3945_set_rate(priv);
+
+ if (memcmp(&priv->active_rxon,
+ &priv->staging_rxon, sizeof(priv->staging_rxon)))
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+ else
+ IWL_DEBUG_INFO("No re-sending same RXON configuration.\n");
+
+ IWL_DEBUG_MAC80211("leave\n");
+
++out:
++ clear_bit(STATUS_CONF_PENDING, &priv->status);
+ mutex_unlock(&priv->mutex);
-
--#ifdef CONFIG_IWLWIFI_DEBUG
-- atomic_inc(&priv->restrict_refcnt);
--#endif
- return 0;
--}
--
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline int __iwl_grab_restricted_access(const char *f, u32 l,
-- struct iwl_priv *priv)
--{
-- if (atomic_read(&priv->restrict_refcnt))
-- IWL_DEBUG_INFO("Grabbing access while already held at "
-- "line %d.\n", l);
--
-- IWL_DEBUG_IO("grabbing restricted access - %s %d\n", f, l);
--
-- return _iwl_grab_restricted_access(priv);
--}
--#define iwl_grab_restricted_access(priv) \
-- __iwl_grab_restricted_access(__FILE__, __LINE__, priv)
--#else
--#define iwl_grab_restricted_access(priv) \
-- _iwl_grab_restricted_access(priv)
--#endif
--
--static inline void _iwl_release_restricted_access(struct iwl_priv *priv)
--{
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (atomic_dec_and_test(&priv->restrict_refcnt))
--#endif
-- _iwl_clear_bit(priv, CSR_GP_CNTRL,
-- CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_release_restricted_access(const char *f, u32 l,
-- struct iwl_priv *priv)
--{
-- if (atomic_read(&priv->restrict_refcnt) <= 0)
-- IWL_ERROR("Release unheld restricted access at line %d.\n", l);
--
-- IWL_DEBUG_IO("releasing restricted access - %s %d\n", f, l);
-- _iwl_release_restricted_access(priv);
--}
--#define iwl_release_restricted_access(priv) \
-- __iwl_release_restricted_access(__FILE__, __LINE__, priv)
--#else
--#define iwl_release_restricted_access(priv) \
-- _iwl_release_restricted_access(priv)
--#endif
--
--static inline u32 _iwl_read_restricted(struct iwl_priv *priv, u32 reg)
--{
-- return _iwl_read32(priv, reg);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline u32 __iwl_read_restricted(const char *f, u32 l,
-- struct iwl_priv *priv, u32 reg)
--{
-- u32 value = _iwl_read_restricted(priv, reg);
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from %s %d\n", f, l);
-- IWL_DEBUG_IO("read_restricted(0x%4X) = 0x%08x - %s %d \n", reg, value,
-- f, l);
-- return value;
--}
--#define iwl_read_restricted(priv, reg) \
-- __iwl_read_restricted(__FILE__, __LINE__, priv, reg)
--#else
--#define iwl_read_restricted _iwl_read_restricted
--#endif
--
--static inline void _iwl_write_restricted(struct iwl_priv *priv,
-- u32 reg, u32 value)
--{
-- _iwl_write32(priv, reg, value);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static void __iwl_write_restricted(u32 line,
-- struct iwl_priv *priv, u32 reg, u32 value)
--{
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from line %d\n", line);
-- _iwl_write_restricted(priv, reg, value);
--}
--#define iwl_write_restricted(priv, reg, value) \
-- __iwl_write_restricted(__LINE__, priv, reg, value)
--#else
--#define iwl_write_restricted _iwl_write_restricted
--#endif
++ return ret;
+ }
+
+-static void iwl_config_ap(struct iwl_priv *priv)
++static void iwl3945_config_ap(struct iwl3945_priv *priv)
+ {
+ int rc = 0;
+
+@@ -7077,12 +7128,12 @@ static void iwl_config_ap(struct iwl_priv *priv)
+
+ /* RXON - unassoc (to set timing command) */
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+
+ /* RXON Timing */
+- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
+- iwl_setup_rxon_timing(priv);
+- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
++ memset(&priv->rxon_timing, 0, sizeof(struct iwl3945_rxon_time_cmd));
++ iwl3945_setup_rxon_timing(priv);
++ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON_TIMING,
+ sizeof(priv->rxon_timing), &priv->rxon_timing);
+ if (rc)
+ IWL_WARNING("REPLY_RXON_TIMING failed - "
+@@ -7112,20 +7163,21 @@ static void iwl_config_ap(struct iwl_priv *priv)
+ }
+ /* restore RXON assoc */
+ priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
+- iwl_add_station(priv, BROADCAST_ADDR, 0, 0);
++ iwl3945_commit_rxon(priv);
++ iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0);
+ }
+- iwl_send_beacon_cmd(priv);
++ iwl3945_send_beacon_cmd(priv);
+
+ /* FIXME - we need to add code here to detect a totally new
+ * configuration, reset the AP, unassoc, rxon timing, assoc,
+ * clear sta table, add BCAST sta... */
+ }
+
+-static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
++static int iwl3945_mac_config_interface(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
+ struct ieee80211_if_conf *conf)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ DECLARE_MAC_BUF(mac);
+ unsigned long flags;
+ int rc;
+@@ -7142,9 +7194,11 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+ return 0;
+ }
+
++ if (!iwl3945_is_alive(priv))
++ return -EAGAIN;
++
+ mutex_lock(&priv->mutex);
+
+- IWL_DEBUG_MAC80211("enter: interface id %d\n", if_id);
+ if (conf->bssid)
+ IWL_DEBUG_MAC80211("bssid: %s\n",
+ print_mac(mac, conf->bssid));
+@@ -7161,8 +7215,8 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+ return 0;
+ }
+
+- if (priv->interface_id != if_id) {
+- IWL_DEBUG_MAC80211("leave - interface_id != if_id\n");
++ if (priv->vif != vif) {
++ IWL_DEBUG_MAC80211("leave - priv->vif != vif\n");
+ mutex_unlock(&priv->mutex);
+ return 0;
+ }
+@@ -7180,11 +7234,14 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+ priv->ibss_beacon = conf->beacon;
+ }
+
++ if (iwl3945_is_rfkill(priv))
++ goto done;
++
+ if (conf->bssid && !is_zero_ether_addr(conf->bssid) &&
+ !is_multicast_ether_addr(conf->bssid)) {
+ /* If there is currently a HW scan going on in the background
+ * then we need to cancel it else the RXON below will fail. */
+- if (iwl_scan_cancel_timeout(priv, 100)) {
++ if (iwl3945_scan_cancel_timeout(priv, 100)) {
+ IWL_WARNING("Aborted scan still in progress "
+ "after 100ms\n");
+ IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
+@@ -7200,20 +7257,21 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+ memcpy(priv->bssid, conf->bssid, ETH_ALEN);
+
+ if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
+- iwl_config_ap(priv);
++ iwl3945_config_ap(priv);
+ else {
+- rc = iwl_commit_rxon(priv);
++ rc = iwl3945_commit_rxon(priv);
+ if ((priv->iw_mode == IEEE80211_IF_TYPE_STA) && rc)
+- iwl_add_station(priv,
++ iwl3945_add_station(priv,
+ priv->active_rxon.bssid_addr, 1, 0);
+ }
+
+ } else {
+- iwl_scan_cancel_timeout(priv, 100);
++ iwl3945_scan_cancel_timeout(priv, 100);
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+ }
+
++ done:
+ spin_lock_irqsave(&priv->lock, flags);
+ if (!conf->ssid_len)
+ memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
+@@ -7229,34 +7287,35 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+ return 0;
+ }
+
+-static void iwl_configure_filter(struct ieee80211_hw *hw,
++static void iwl3945_configure_filter(struct ieee80211_hw *hw,
+ unsigned int changed_flags,
+ unsigned int *total_flags,
+ int mc_count, struct dev_addr_list *mc_list)
+ {
+ /*
+ * XXX: dummy
+- * see also iwl_connection_init_rx_config
++ * see also iwl3945_connection_init_rx_config
+ */
+ *total_flags = 0;
+ }
+
+-static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
++static void iwl3945_mac_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_if_init_conf *conf)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+ mutex_lock(&priv->mutex);
+
+- iwl_scan_cancel_timeout(priv, 100);
+- cancel_delayed_work(&priv->post_associate);
+- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
-
--static inline void iwl_write_buffer_restricted(struct iwl_priv *priv,
-- u32 reg, u32 len, u32 *values)
--{
-- u32 count = sizeof(u32);
+- if (priv->interface_id == conf->if_id) {
+- priv->interface_id = 0;
++ if (iwl3945_is_ready_rf(priv)) {
++ iwl3945_scan_cancel_timeout(priv, 100);
++ cancel_delayed_work(&priv->post_associate);
++ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
++ iwl3945_commit_rxon(priv);
++ }
++ if (priv->vif == conf->vif) {
++ priv->vif = NULL;
+ memset(priv->bssid, 0, ETH_ALEN);
+ memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
+ priv->essid_len = 0;
+@@ -7264,22 +7323,20 @@ static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
+ mutex_unlock(&priv->mutex);
+
+ IWL_DEBUG_MAC80211("leave\n");
-
-- if ((priv != NULL) && (values != NULL)) {
-- for (; 0 < len; len -= count, reg += count, values++)
-- _iwl_write_restricted(priv, reg, *values);
+ }
+
+-#define IWL_DELAY_NEXT_SCAN (HZ*2)
+-static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
++static int iwl3945_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
+ {
+ int rc = 0;
+ unsigned long flags;
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+ mutex_lock(&priv->mutex);
+ spin_lock_irqsave(&priv->lock, flags);
+
+- if (!iwl_is_ready_rf(priv)) {
++ if (!iwl3945_is_ready_rf(priv)) {
+ rc = -EIO;
+ IWL_DEBUG_MAC80211("leave - not ready or exit pending\n");
+ goto out_unlock;
+@@ -7291,17 +7348,21 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
+ goto out_unlock;
+ }
+
++ /* we don't schedule scan within next_scan_jiffies period */
++ if (priv->next_scan_jiffies &&
++ time_after(priv->next_scan_jiffies, jiffies)) {
++ rc = -EAGAIN;
++ goto out_unlock;
++ }
+ /* if we just finished scan ask for delay */
+- if (priv->last_scan_jiffies &&
+- time_after(priv->last_scan_jiffies + IWL_DELAY_NEXT_SCAN,
+- jiffies)) {
++ if (priv->last_scan_jiffies && time_after(priv->last_scan_jiffies +
++ IWL_DELAY_NEXT_SCAN, jiffies)) {
+ rc = -EAGAIN;
+ goto out_unlock;
+ }
+ if (len) {
+- IWL_DEBUG_SCAN("direct scan for "
+- "%s [%d]\n ",
+- iwl_escape_essid(ssid, len), (int)len);
++ IWL_DEBUG_SCAN("direct scan for %s [%d]\n ",
++ iwl3945_escape_essid(ssid, len), (int)len);
+
+ priv->one_direct_scan = 1;
+ priv->direct_ssid_len = (u8)
+@@ -7310,7 +7371,7 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
+ } else
+ priv->one_direct_scan = 0;
+
+- rc = iwl_scan_initiate(priv);
++ rc = iwl3945_scan_initiate(priv);
+
+ IWL_DEBUG_MAC80211("leave\n");
+
+@@ -7321,17 +7382,17 @@ out_unlock:
+ return rc;
+ }
+
+-static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
++static int iwl3945_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ const u8 *local_addr, const u8 *addr,
+ struct ieee80211_key_conf *key)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ int rc = 0;
+ u8 sta_id;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+- if (!iwl_param_hwcrypto) {
++ if (!iwl3945_param_hwcrypto) {
+ IWL_DEBUG_MAC80211("leave - hwcrypto disabled\n");
+ return -EOPNOTSUPP;
+ }
+@@ -7340,7 +7401,7 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ /* only support pairwise keys */
+ return -EOPNOTSUPP;
+
+- sta_id = iwl_hw_find_station(priv, addr);
++ sta_id = iwl3945_hw_find_station(priv, addr);
+ if (sta_id == IWL_INVALID_STATION) {
+ DECLARE_MAC_BUF(mac);
+
+@@ -7351,24 +7412,24 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+
+ mutex_lock(&priv->mutex);
+
+- iwl_scan_cancel_timeout(priv, 100);
++ iwl3945_scan_cancel_timeout(priv, 100);
+
+ switch (cmd) {
+ case SET_KEY:
+- rc = iwl_update_sta_key_info(priv, key, sta_id);
++ rc = iwl3945_update_sta_key_info(priv, key, sta_id);
+ if (!rc) {
+- iwl_set_rxon_hwcrypto(priv, 1);
+- iwl_commit_rxon(priv);
++ iwl3945_set_rxon_hwcrypto(priv, 1);
++ iwl3945_commit_rxon(priv);
+ key->hw_key_idx = sta_id;
+ IWL_DEBUG_MAC80211("set_key success, using hwcrypto\n");
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
+ }
+ break;
+ case DISABLE_KEY:
+- rc = iwl_clear_sta_key_info(priv, sta_id);
++ rc = iwl3945_clear_sta_key_info(priv, sta_id);
+ if (!rc) {
+- iwl_set_rxon_hwcrypto(priv, 0);
+- iwl_commit_rxon(priv);
++ iwl3945_set_rxon_hwcrypto(priv, 0);
++ iwl3945_commit_rxon(priv);
+ IWL_DEBUG_MAC80211("disable hwcrypto key\n");
+ }
+ break;
+@@ -7382,18 +7443,18 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ return rc;
+ }
+
+-static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
++static int iwl3945_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+ const struct ieee80211_tx_queue_params *params)
+ {
+- struct iwl_priv *priv = hw->priv;
+-#ifdef CONFIG_IWLWIFI_QOS
++ struct iwl3945_priv *priv = hw->priv;
++#ifdef CONFIG_IWL3945_QOS
+ unsigned long flags;
+ int q;
+-#endif /* CONFIG_IWL_QOS */
++#endif /* CONFIG_IWL3945_QOS */
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+- if (!iwl_is_ready_rf(priv)) {
++ if (!iwl3945_is_ready_rf(priv)) {
+ IWL_DEBUG_MAC80211("leave - RF not ready\n");
+ return -EIO;
+ }
+@@ -7403,7 +7464,7 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+ return 0;
+ }
+
+-#ifdef CONFIG_IWLWIFI_QOS
++#ifdef CONFIG_IWL3945_QOS
+ if (!priv->qos_data.qos_enable) {
+ priv->qos_data.qos_active = 0;
+ IWL_DEBUG_MAC80211("leave - qos not enabled\n");
+@@ -7426,30 +7487,30 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+
+ mutex_lock(&priv->mutex);
+ if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
+- iwl_activate_qos(priv, 1);
+- else if (priv->assoc_id && iwl_is_associated(priv))
+- iwl_activate_qos(priv, 0);
++ iwl3945_activate_qos(priv, 1);
++ else if (priv->assoc_id && iwl3945_is_associated(priv))
++ iwl3945_activate_qos(priv, 0);
+
+ mutex_unlock(&priv->mutex);
+
+-#endif /*CONFIG_IWLWIFI_QOS */
++#endif /*CONFIG_IWL3945_QOS */
+
+ IWL_DEBUG_MAC80211("leave\n");
+ return 0;
+ }
+
+-static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
++static int iwl3945_mac_get_tx_stats(struct ieee80211_hw *hw,
+ struct ieee80211_tx_queue_stats *stats)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ int i, avail;
+- struct iwl_tx_queue *txq;
+- struct iwl_queue *q;
++ struct iwl3945_tx_queue *txq;
++ struct iwl3945_queue *q;
+ unsigned long flags;
+
+ IWL_DEBUG_MAC80211("enter\n");
+
+- if (!iwl_is_ready_rf(priv)) {
++ if (!iwl3945_is_ready_rf(priv)) {
+ IWL_DEBUG_MAC80211("leave - RF not ready\n");
+ return -EIO;
+ }
+@@ -7459,7 +7520,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
+ for (i = 0; i < AC_NUM; i++) {
+ txq = &priv->txq[i];
+ q = &txq->q;
+- avail = iwl_queue_space(q);
++ avail = iwl3945_queue_space(q);
+
+ stats->data[i].len = q->n_window - avail;
+ stats->data[i].limit = q->n_window - q->high_mark;
+@@ -7473,7 +7534,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
+ return 0;
+ }
+
+-static int iwl_mac_get_stats(struct ieee80211_hw *hw,
++static int iwl3945_mac_get_stats(struct ieee80211_hw *hw,
+ struct ieee80211_low_level_stats *stats)
+ {
+ IWL_DEBUG_MAC80211("enter\n");
+@@ -7482,7 +7543,7 @@ static int iwl_mac_get_stats(struct ieee80211_hw *hw,
+ return 0;
+ }
+
+-static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
++static u64 iwl3945_mac_get_tsf(struct ieee80211_hw *hw)
+ {
+ IWL_DEBUG_MAC80211("enter\n");
+ IWL_DEBUG_MAC80211("leave\n");
+@@ -7490,16 +7551,16 @@ static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
+ return 0;
+ }
+
+-static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
++static void iwl3945_mac_reset_tsf(struct ieee80211_hw *hw)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ unsigned long flags;
+
+ mutex_lock(&priv->mutex);
+ IWL_DEBUG_MAC80211("enter\n");
+
+-#ifdef CONFIG_IWLWIFI_QOS
+- iwl_reset_qos(priv);
++#ifdef CONFIG_IWL3945_QOS
++ iwl3945_reset_qos(priv);
+ #endif
+ cancel_delayed_work(&priv->post_associate);
+
+@@ -7522,13 +7583,19 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
++ if (!iwl3945_is_ready_rf(priv)) {
++ IWL_DEBUG_MAC80211("leave - not ready\n");
++ mutex_unlock(&priv->mutex);
++ return;
++ }
++
+ /* we are restarting association process
+ * clear RXON_FILTER_ASSOC_MSK bit
+ */
+ if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
+- iwl_scan_cancel_timeout(priv, 100);
++ iwl3945_scan_cancel_timeout(priv, 100);
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+ }
+
+ /* Per mac80211.h: This is only used in IBSS mode... */
+@@ -7539,15 +7606,9 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+ return;
+ }
+
+- if (!iwl_is_ready_rf(priv)) {
+- IWL_DEBUG_MAC80211("leave - not ready\n");
+- mutex_unlock(&priv->mutex);
+- return;
- }
--}
--
--static inline int _iwl_poll_restricted_bit(struct iwl_priv *priv,
-- u32 addr, u32 mask, int timeout)
--{
-- int i = 0;
--
-- do {
-- if ((_iwl_read_restricted(priv, addr) & mask) == mask)
-- return i;
-- mdelay(10);
-- i += 10;
-- } while (i < timeout);
--
-- return -ETIMEDOUT;
--}
-
+ priv->only_active_channel = 0;
+
+- iwl_set_rate(priv);
++ iwl3945_set_rate(priv);
+
+ mutex_unlock(&priv->mutex);
+
+@@ -7555,16 +7616,16 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+
+ }
+
+-static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
++static int iwl3945_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
+ {
+- struct iwl_priv *priv = hw->priv;
++ struct iwl3945_priv *priv = hw->priv;
+ unsigned long flags;
+
+ mutex_lock(&priv->mutex);
+ IWL_DEBUG_MAC80211("enter\n");
+
+- if (!iwl_is_ready_rf(priv)) {
++ if (!iwl3945_is_ready_rf(priv)) {
+ IWL_DEBUG_MAC80211("leave - RF not ready\n");
+ mutex_unlock(&priv->mutex);
+ return -EIO;
+@@ -7588,8 +7649,8 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ IWL_DEBUG_MAC80211("leave\n");
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+-#ifdef CONFIG_IWLWIFI_QOS
+- iwl_reset_qos(priv);
++#ifdef CONFIG_IWL3945_QOS
++ iwl3945_reset_qos(priv);
+ #endif
+
+ queue_work(priv->workqueue, &priv->post_associate.work);
+@@ -7605,7 +7666,7 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ *
+ *****************************************************************************/
+
-#ifdef CONFIG_IWLWIFI_DEBUG
--static inline int __iwl_poll_restricted_bit(const char *f, u32 l,
-- struct iwl_priv *priv,
-- u32 addr, u32 mask, int timeout)
--{
-- int rc = _iwl_poll_restricted_bit(priv, addr, mask, timeout);
--
-- if (unlikely(rc == -ETIMEDOUT))
-- IWL_DEBUG_IO("poll_restricted_bit(0x%08X, 0x%08X) - "
-- "timedout - %s %d\n", addr, mask, f, l);
-- else
-- IWL_DEBUG_IO("poll_restricted_bit(0x%08X, 0x%08X) = 0x%08X "
-- "- %s %d\n", addr, mask, rc, f, l);
-- return rc;
--}
--#define iwl_poll_restricted_bit(iwl, addr, mask, timeout) \
-- __iwl_poll_restricted_bit(__FILE__, __LINE__, iwl, addr, mask, timeout)
--#else
--#define iwl_poll_restricted_bit _iwl_poll_restricted_bit
--#endif
++#ifdef CONFIG_IWL3945_DEBUG
+
+ /*
+ * The following adds a new attribute to the sysfs representation
+@@ -7617,7 +7678,7 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+
+ static ssize_t show_debug_level(struct device_driver *d, char *buf)
+ {
+- return sprintf(buf, "0x%08X\n", iwl_debug_level);
++ return sprintf(buf, "0x%08X\n", iwl3945_debug_level);
+ }
+ static ssize_t store_debug_level(struct device_driver *d,
+ const char *buf, size_t count)
+@@ -7630,7 +7691,7 @@ static ssize_t store_debug_level(struct device_driver *d,
+ printk(KERN_INFO DRV_NAME
+ ": %s is not in hex or decimal form.\n", buf);
+ else
+- iwl_debug_level = val;
++ iwl3945_debug_level = val;
+
+ return strnlen(buf, count);
+ }
+@@ -7638,7 +7699,7 @@ static ssize_t store_debug_level(struct device_driver *d,
+ static DRIVER_ATTR(debug_level, S_IWUSR | S_IRUGO,
+ show_debug_level, store_debug_level);
+
+-#endif /* CONFIG_IWLWIFI_DEBUG */
++#endif /* CONFIG_IWL3945_DEBUG */
+
+ static ssize_t show_rf_kill(struct device *d,
+ struct device_attribute *attr, char *buf)
+@@ -7649,7 +7710,7 @@ static ssize_t show_rf_kill(struct device *d,
+ * 2 - HW based RF kill active
+ * 3 - Both HW and SW based RF kill active
+ */
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ int val = (test_bit(STATUS_RF_KILL_SW, &priv->status) ? 0x1 : 0x0) |
+ (test_bit(STATUS_RF_KILL_HW, &priv->status) ? 0x2 : 0x0);
+
+@@ -7660,10 +7721,10 @@ static ssize_t store_rf_kill(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+
+ mutex_lock(&priv->mutex);
+- iwl_radio_kill_sw(priv, buf[0] == '1');
++ iwl3945_radio_kill_sw(priv, buf[0] == '1');
+ mutex_unlock(&priv->mutex);
+
+ return count;
+@@ -7674,12 +7735,12 @@ static DEVICE_ATTR(rf_kill, S_IWUSR | S_IRUGO, show_rf_kill, store_rf_kill);
+ static ssize_t show_temperature(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+
+- if (!iwl_is_alive(priv))
++ if (!iwl3945_is_alive(priv))
+ return -EAGAIN;
+
+- return sprintf(buf, "%d\n", iwl_hw_get_temperature(priv));
++ return sprintf(buf, "%d\n", iwl3945_hw_get_temperature(priv));
+ }
+
+ static DEVICE_ATTR(temperature, S_IRUGO, show_temperature, NULL);
+@@ -7688,15 +7749,15 @@ static ssize_t show_rs_window(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct iwl_priv *priv = d->driver_data;
+- return iwl_fill_rs_info(priv->hw, buf, IWL_AP_ID);
++ struct iwl3945_priv *priv = d->driver_data;
++ return iwl3945_fill_rs_info(priv->hw, buf, IWL_AP_ID);
+ }
+ static DEVICE_ATTR(rs_window, S_IRUGO, show_rs_window, NULL);
+
+ static ssize_t show_tx_power(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ return sprintf(buf, "%d\n", priv->user_txpower_limit);
+ }
+
+@@ -7704,7 +7765,7 @@ static ssize_t store_tx_power(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ char *p = (char *)buf;
+ u32 val;
+
+@@ -7713,7 +7774,7 @@ static ssize_t store_tx_power(struct device *d,
+ printk(KERN_INFO DRV_NAME
+ ": %s is not in decimal form.\n", buf);
+ else
+- iwl_hw_reg_set_txpower(priv, val);
++ iwl3945_hw_reg_set_txpower(priv, val);
+
+ return count;
+ }
+@@ -7723,7 +7784,7 @@ static DEVICE_ATTR(tx_power, S_IWUSR | S_IRUGO, show_tx_power, store_tx_power);
+ static ssize_t show_flags(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+
+ return sprintf(buf, "0x%04X\n", priv->active_rxon.flags);
+ }
+@@ -7732,19 +7793,19 @@ static ssize_t store_flags(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ u32 flags = simple_strtoul(buf, NULL, 0);
+
+ mutex_lock(&priv->mutex);
+ if (le32_to_cpu(priv->staging_rxon.flags) != flags) {
+ /* Cancel any currently running scans... */
+- if (iwl_scan_cancel_timeout(priv, 100))
++ if (iwl3945_scan_cancel_timeout(priv, 100))
+ IWL_WARNING("Could not cancel scan.\n");
+ else {
+ IWL_DEBUG_INFO("Committing rxon.flags = 0x%04X\n",
+ flags);
+ priv->staging_rxon.flags = cpu_to_le32(flags);
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+ }
+ }
+ mutex_unlock(&priv->mutex);
+@@ -7757,7 +7818,7 @@ static DEVICE_ATTR(flags, S_IWUSR | S_IRUGO, show_flags, store_flags);
+ static ssize_t show_filter_flags(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+
+ return sprintf(buf, "0x%04X\n",
+ le32_to_cpu(priv->active_rxon.filter_flags));
+@@ -7767,20 +7828,20 @@ static ssize_t store_filter_flags(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ u32 filter_flags = simple_strtoul(buf, NULL, 0);
+
+ mutex_lock(&priv->mutex);
+ if (le32_to_cpu(priv->staging_rxon.filter_flags) != filter_flags) {
+ /* Cancel any currently running scans... */
+- if (iwl_scan_cancel_timeout(priv, 100))
++ if (iwl3945_scan_cancel_timeout(priv, 100))
+ IWL_WARNING("Could not cancel scan.\n");
+ else {
+ IWL_DEBUG_INFO("Committing rxon.filter_flags = "
+ "0x%04X\n", filter_flags);
+ priv->staging_rxon.filter_flags =
+ cpu_to_le32(filter_flags);
+- iwl_commit_rxon(priv);
++ iwl3945_commit_rxon(priv);
+ }
+ }
+ mutex_unlock(&priv->mutex);
+@@ -7794,20 +7855,20 @@ static DEVICE_ATTR(filter_flags, S_IWUSR | S_IRUGO, show_filter_flags,
+ static ssize_t show_tune(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+
+ return sprintf(buf, "0x%04X\n",
+ (priv->phymode << 8) |
+ le16_to_cpu(priv->active_rxon.channel));
+ }
+
+-static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode);
++static void iwl3945_set_flags_for_phymode(struct iwl3945_priv *priv, u8 phymode);
+
+ static ssize_t store_tune(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
+ char *p = (char *)buf;
+ u16 tune = simple_strtoul(p, &p, 0);
+ u8 phymode = (tune >> 8) & 0xff;
+@@ -7818,9 +7879,9 @@ static ssize_t store_tune(struct device *d,
+ mutex_lock(&priv->mutex);
+ if ((le16_to_cpu(priv->staging_rxon.channel) != channel) ||
+ (priv->phymode != phymode)) {
+- const struct iwl_channel_info *ch_info;
++ const struct iwl3945_channel_info *ch_info;
+
+- ch_info = iwl_get_channel_info(priv, phymode, channel);
++ ch_info = iwl3945_get_channel_info(priv, phymode, channel);
+ if (!ch_info) {
+ IWL_WARNING("Requested invalid phymode/channel "
+ "combination: %d %d\n", phymode, channel);
+@@ -7829,18 +7890,18 @@ static ssize_t store_tune(struct device *d,
+ }
+
+ /* Cancel any currently running scans... */
+- if (iwl_scan_cancel_timeout(priv, 100))
++ if (iwl3945_scan_cancel_timeout(priv, 100))
+ IWL_WARNING("Could not cancel scan.\n");
+ else {
+ IWL_DEBUG_INFO("Committing phymode and "
+ "rxon.channel = %d %d\n",
+ phymode, channel);
+
+- iwl_set_rxon_channel(priv, phymode, channel);
+- iwl_set_flags_for_phymode(priv, phymode);
++ iwl3945_set_rxon_channel(priv, phymode, channel);
++ iwl3945_set_flags_for_phymode(priv, phymode);
+
+- iwl_set_rate(priv);
+- iwl_commit_rxon(priv);
++ iwl3945_set_rate(priv);
++ iwl3945_commit_rxon(priv);
+ }
+ }
+ mutex_unlock(&priv->mutex);
+@@ -7850,13 +7911,13 @@ static ssize_t store_tune(struct device *d,
+
+ static DEVICE_ATTR(tune, S_IWUSR | S_IRUGO, show_tune, store_tune);
+
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
+
+ static ssize_t show_measurement(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
+- struct iwl_spectrum_notification measure_report;
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_spectrum_notification measure_report;
+ u32 size = sizeof(measure_report), len = 0, ofs = 0;
+ u8 *data = (u8 *) & measure_report;
+ unsigned long flags;
+@@ -7888,7 +7949,7 @@ static ssize_t store_measurement(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ struct ieee80211_measurement_params params = {
+ .channel = le16_to_cpu(priv->active_rxon.channel),
+ .start_time = cpu_to_le64(priv->last_tsf),
+@@ -7914,19 +7975,19 @@ static ssize_t store_measurement(struct device *d,
+
+ IWL_DEBUG_INFO("Invoking measurement of type %d on "
+ "channel %d (for '%s')\n", type, params.channel, buf);
+- iwl_get_measurement(priv, ¶ms, type);
++ iwl3945_get_measurement(priv, ¶ms, type);
+
+ return count;
+ }
+
+ static DEVICE_ATTR(measurement, S_IRUSR | S_IWUSR,
+ show_measurement, store_measurement);
+-#endif /* CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT */
++#endif /* CONFIG_IWL3945_SPECTRUM_MEASUREMENT */
+
+ static ssize_t show_rate(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ unsigned long flags;
+ int i;
+
+@@ -7937,13 +7998,13 @@ static ssize_t show_rate(struct device *d,
+ i = priv->stations[IWL_STA_ID].current_rate.s.rate;
+ spin_unlock_irqrestore(&priv->sta_lock, flags);
+
+- i = iwl_rate_index_from_plcp(i);
++ i = iwl3945_rate_index_from_plcp(i);
+ if (i == -1)
+ return sprintf(buf, "0\n");
+
+ return sprintf(buf, "%d%s\n",
+- (iwl_rates[i].ieee >> 1),
+- (iwl_rates[i].ieee & 0x1) ? ".5" : "");
++ (iwl3945_rates[i].ieee >> 1),
++ (iwl3945_rates[i].ieee & 0x1) ? ".5" : "");
+ }
+
+ static DEVICE_ATTR(rate, S_IRUSR, show_rate, NULL);
+@@ -7952,7 +8013,7 @@ static ssize_t store_retry_rate(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+
+ priv->retry_rate = simple_strtoul(buf, NULL, 0);
+ if (priv->retry_rate <= 0)
+@@ -7964,7 +8025,7 @@ static ssize_t store_retry_rate(struct device *d,
+ static ssize_t show_retry_rate(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ return sprintf(buf, "%d", priv->retry_rate);
+ }
+
+@@ -7975,14 +8036,14 @@ static ssize_t store_power_level(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ int rc;
+ int mode;
+
+ mode = simple_strtoul(buf, NULL, 0);
+ mutex_lock(&priv->mutex);
+
+- if (!iwl_is_ready(priv)) {
++ if (!iwl3945_is_ready(priv)) {
+ rc = -EAGAIN;
+ goto out;
+ }
+@@ -7993,7 +8054,7 @@ static ssize_t store_power_level(struct device *d,
+ mode |= IWL_POWER_ENABLED;
+
+ if (mode != priv->power_mode) {
+- rc = iwl_send_power_mode(priv, IWL_POWER_LEVEL(mode));
++ rc = iwl3945_send_power_mode(priv, IWL_POWER_LEVEL(mode));
+ if (rc) {
+ IWL_DEBUG_MAC80211("failed setting power mode.\n");
+ goto out;
+@@ -8029,7 +8090,7 @@ static const s32 period_duration[] = {
+ static ssize_t show_power_level(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ int level = IWL_POWER_LEVEL(priv->power_mode);
+ char *p = buf;
+
+@@ -8064,18 +8125,18 @@ static DEVICE_ATTR(power_level, S_IWUSR | S_IRUSR, show_power_level,
+ static ssize_t show_channels(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+ int len = 0, i;
+ struct ieee80211_channel *channels = NULL;
+ const struct ieee80211_hw_mode *hw_mode = NULL;
+ int count = 0;
+
+- if (!iwl_is_ready(priv))
++ if (!iwl3945_is_ready(priv))
+ return -EAGAIN;
+
+- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211G);
++ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211G);
+ if (!hw_mode)
+- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211B);
++ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211B);
+ if (hw_mode) {
+ channels = hw_mode->channels;
+ count = hw_mode->num_channels;
+@@ -8102,7 +8163,7 @@ static ssize_t show_channels(struct device *d,
+ flag & IEEE80211_CHAN_W_ACTIVE_SCAN ?
+ "active/passive" : "passive only");
+
+- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211A);
++ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211A);
+ if (hw_mode) {
+ channels = hw_mode->channels;
+ count = hw_mode->num_channels;
+@@ -8138,17 +8199,17 @@ static DEVICE_ATTR(channels, S_IRUSR, show_channels, NULL);
+ static ssize_t show_statistics(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
+- u32 size = sizeof(struct iwl_notif_statistics);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ u32 size = sizeof(struct iwl3945_notif_statistics);
+ u32 len = 0, ofs = 0;
+ u8 *data = (u8 *) & priv->statistics;
+ int rc = 0;
+
+- if (!iwl_is_alive(priv))
++ if (!iwl3945_is_alive(priv))
+ return -EAGAIN;
+
+ mutex_lock(&priv->mutex);
+- rc = iwl_send_statistics_request(priv);
++ rc = iwl3945_send_statistics_request(priv);
+ mutex_unlock(&priv->mutex);
+
+ if (rc) {
+@@ -8176,9 +8237,9 @@ static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
+ static ssize_t show_antenna(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+
+- if (!iwl_is_alive(priv))
++ if (!iwl3945_is_alive(priv))
+ return -EAGAIN;
+
+ return sprintf(buf, "%d\n", priv->antenna);
+@@ -8189,7 +8250,7 @@ static ssize_t store_antenna(struct device *d,
+ const char *buf, size_t count)
+ {
+ int ant;
+- struct iwl_priv *priv = dev_get_drvdata(d);
++ struct iwl3945_priv *priv = dev_get_drvdata(d);
+
+ if (count == 0)
+ return 0;
+@@ -8201,7 +8262,7 @@ static ssize_t store_antenna(struct device *d,
+
+ if ((ant >= 0) && (ant <= 2)) {
+ IWL_DEBUG_INFO("Setting antenna select to %d.\n", ant);
+- priv->antenna = (enum iwl_antenna)ant;
++ priv->antenna = (enum iwl3945_antenna)ant;
+ } else
+ IWL_DEBUG_INFO("Bad antenna select value %d.\n", ant);
+
+@@ -8214,8 +8275,8 @@ static DEVICE_ATTR(antenna, S_IWUSR | S_IRUGO, show_antenna, store_antenna);
+ static ssize_t show_status(struct device *d,
+ struct device_attribute *attr, char *buf)
+ {
+- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
+- if (!iwl_is_alive(priv))
++ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ if (!iwl3945_is_alive(priv))
+ return -EAGAIN;
+ return sprintf(buf, "0x%08x\n", (int)priv->status);
+ }
+@@ -8229,7 +8290,7 @@ static ssize_t dump_error_log(struct device *d,
+ char *p = (char *)buf;
+
+ if (p[0] == '1')
+- iwl_dump_nic_error_log((struct iwl_priv *)d->driver_data);
++ iwl3945_dump_nic_error_log((struct iwl3945_priv *)d->driver_data);
+
+ return strnlen(buf, count);
+ }
+@@ -8243,7 +8304,7 @@ static ssize_t dump_event_log(struct device *d,
+ char *p = (char *)buf;
+
+ if (p[0] == '1')
+- iwl_dump_nic_event_log((struct iwl_priv *)d->driver_data);
++ iwl3945_dump_nic_event_log((struct iwl3945_priv *)d->driver_data);
+
+ return strnlen(buf, count);
+ }
+@@ -8256,34 +8317,34 @@ static DEVICE_ATTR(dump_events, S_IWUSR, NULL, dump_event_log);
+ *
+ *****************************************************************************/
+
+-static void iwl_setup_deferred_work(struct iwl_priv *priv)
++static void iwl3945_setup_deferred_work(struct iwl3945_priv *priv)
+ {
+ priv->workqueue = create_workqueue(DRV_NAME);
+
+ init_waitqueue_head(&priv->wait_command_queue);
+
+- INIT_WORK(&priv->up, iwl_bg_up);
+- INIT_WORK(&priv->restart, iwl_bg_restart);
+- INIT_WORK(&priv->rx_replenish, iwl_bg_rx_replenish);
+- INIT_WORK(&priv->scan_completed, iwl_bg_scan_completed);
+- INIT_WORK(&priv->request_scan, iwl_bg_request_scan);
+- INIT_WORK(&priv->abort_scan, iwl_bg_abort_scan);
+- INIT_WORK(&priv->rf_kill, iwl_bg_rf_kill);
+- INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
+- INIT_DELAYED_WORK(&priv->post_associate, iwl_bg_post_associate);
+- INIT_DELAYED_WORK(&priv->init_alive_start, iwl_bg_init_alive_start);
+- INIT_DELAYED_WORK(&priv->alive_start, iwl_bg_alive_start);
+- INIT_DELAYED_WORK(&priv->scan_check, iwl_bg_scan_check);
-
--static inline u32 _iwl_read_restricted_reg(struct iwl_priv *priv, u32 reg)
--{
-- _iwl_write_restricted(priv, HBUS_TARG_PRPH_RADDR, reg | (3 << 24));
-- return _iwl_read_restricted(priv, HBUS_TARG_PRPH_RDAT);
--}
+- iwl_hw_setup_deferred_work(priv);
++ INIT_WORK(&priv->up, iwl3945_bg_up);
++ INIT_WORK(&priv->restart, iwl3945_bg_restart);
++ INIT_WORK(&priv->rx_replenish, iwl3945_bg_rx_replenish);
++ INIT_WORK(&priv->scan_completed, iwl3945_bg_scan_completed);
++ INIT_WORK(&priv->request_scan, iwl3945_bg_request_scan);
++ INIT_WORK(&priv->abort_scan, iwl3945_bg_abort_scan);
++ INIT_WORK(&priv->rf_kill, iwl3945_bg_rf_kill);
++ INIT_WORK(&priv->beacon_update, iwl3945_bg_beacon_update);
++ INIT_DELAYED_WORK(&priv->post_associate, iwl3945_bg_post_associate);
++ INIT_DELAYED_WORK(&priv->init_alive_start, iwl3945_bg_init_alive_start);
++ INIT_DELAYED_WORK(&priv->alive_start, iwl3945_bg_alive_start);
++ INIT_DELAYED_WORK(&priv->scan_check, iwl3945_bg_scan_check);
++
++ iwl3945_hw_setup_deferred_work(priv);
+
+ tasklet_init(&priv->irq_tasklet, (void (*)(unsigned long))
+- iwl_irq_tasklet, (unsigned long)priv);
++ iwl3945_irq_tasklet, (unsigned long)priv);
+ }
+
+-static void iwl_cancel_deferred_work(struct iwl_priv *priv)
++static void iwl3945_cancel_deferred_work(struct iwl3945_priv *priv)
+ {
+- iwl_hw_cancel_deferred_work(priv);
++ iwl3945_hw_cancel_deferred_work(priv);
+
+ cancel_delayed_work_sync(&priv->init_alive_start);
+ cancel_delayed_work(&priv->scan_check);
+@@ -8292,14 +8353,14 @@ static void iwl_cancel_deferred_work(struct iwl_priv *priv)
+ cancel_work_sync(&priv->beacon_update);
+ }
+
+-static struct attribute *iwl_sysfs_entries[] = {
++static struct attribute *iwl3945_sysfs_entries[] = {
+ &dev_attr_antenna.attr,
+ &dev_attr_channels.attr,
+ &dev_attr_dump_errors.attr,
+ &dev_attr_dump_events.attr,
+ &dev_attr_flags.attr,
+ &dev_attr_filter_flags.attr,
+-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
+ &dev_attr_measurement.attr,
+ #endif
+ &dev_attr_power_level.attr,
+@@ -8316,45 +8377,48 @@ static struct attribute *iwl_sysfs_entries[] = {
+ NULL
+ };
+
+-static struct attribute_group iwl_attribute_group = {
++static struct attribute_group iwl3945_attribute_group = {
+ .name = NULL, /* put in device directory */
+- .attrs = iwl_sysfs_entries,
++ .attrs = iwl3945_sysfs_entries,
+ };
+
+-static struct ieee80211_ops iwl_hw_ops = {
+- .tx = iwl_mac_tx,
+- .start = iwl_mac_start,
+- .stop = iwl_mac_stop,
+- .add_interface = iwl_mac_add_interface,
+- .remove_interface = iwl_mac_remove_interface,
+- .config = iwl_mac_config,
+- .config_interface = iwl_mac_config_interface,
+- .configure_filter = iwl_configure_filter,
+- .set_key = iwl_mac_set_key,
+- .get_stats = iwl_mac_get_stats,
+- .get_tx_stats = iwl_mac_get_tx_stats,
+- .conf_tx = iwl_mac_conf_tx,
+- .get_tsf = iwl_mac_get_tsf,
+- .reset_tsf = iwl_mac_reset_tsf,
+- .beacon_update = iwl_mac_beacon_update,
+- .hw_scan = iwl_mac_hw_scan
++static struct ieee80211_ops iwl3945_hw_ops = {
++ .tx = iwl3945_mac_tx,
++ .start = iwl3945_mac_start,
++ .stop = iwl3945_mac_stop,
++ .add_interface = iwl3945_mac_add_interface,
++ .remove_interface = iwl3945_mac_remove_interface,
++ .config = iwl3945_mac_config,
++ .config_interface = iwl3945_mac_config_interface,
++ .configure_filter = iwl3945_configure_filter,
++ .set_key = iwl3945_mac_set_key,
++ .get_stats = iwl3945_mac_get_stats,
++ .get_tx_stats = iwl3945_mac_get_tx_stats,
++ .conf_tx = iwl3945_mac_conf_tx,
++ .get_tsf = iwl3945_mac_get_tsf,
++ .reset_tsf = iwl3945_mac_reset_tsf,
++ .beacon_update = iwl3945_mac_beacon_update,
++ .hw_scan = iwl3945_mac_hw_scan
+ };
+
+-static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
++static int iwl3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ int err = 0;
+ u32 pci_id;
+- struct iwl_priv *priv;
++ struct iwl3945_priv *priv;
+ struct ieee80211_hw *hw;
+ int i;
++ DECLARE_MAC_BUF(mac);
+
+- if (iwl_param_disable_hw_scan) {
++ /* Disabling hardware scan means that mac80211 will perform scans
++ * "the hard way", rather than using device's scan. */
++ if (iwl3945_param_disable_hw_scan) {
+ IWL_DEBUG_INFO("Disabling hw_scan\n");
+- iwl_hw_ops.hw_scan = NULL;
++ iwl3945_hw_ops.hw_scan = NULL;
+ }
+
+- if ((iwl_param_queues_num > IWL_MAX_NUM_QUEUES) ||
+- (iwl_param_queues_num < IWL_MIN_NUM_QUEUES)) {
++ if ((iwl3945_param_queues_num > IWL_MAX_NUM_QUEUES) ||
++ (iwl3945_param_queues_num < IWL_MIN_NUM_QUEUES)) {
+ IWL_ERROR("invalid queues_num, should be between %d and %d\n",
+ IWL_MIN_NUM_QUEUES, IWL_MAX_NUM_QUEUES);
+ err = -EINVAL;
+@@ -8363,7 +8427,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ /* mac80211 allocates memory for this device instance, including
+ * space for this driver's private structure */
+- hw = ieee80211_alloc_hw(sizeof(struct iwl_priv), &iwl_hw_ops);
++ hw = ieee80211_alloc_hw(sizeof(struct iwl3945_priv), &iwl3945_hw_ops);
+ if (hw == NULL) {
+ IWL_ERROR("Can not allocate network device\n");
+ err = -ENOMEM;
+@@ -8378,9 +8442,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ priv->hw = hw;
+
+ priv->pci_dev = pdev;
+- priv->antenna = (enum iwl_antenna)iwl_param_antenna;
-#ifdef CONFIG_IWLWIFI_DEBUG
--static inline u32 __iwl_read_restricted_reg(u32 line,
-- struct iwl_priv *priv, u32 reg)
--{
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from line %d\n", line);
-- return _iwl_read_restricted_reg(priv, reg);
--}
+- iwl_debug_level = iwl_param_debug;
++
++ /* Select antenna (may be helpful if only one antenna is connected) */
++ priv->antenna = (enum iwl3945_antenna)iwl3945_param_antenna;
++#ifdef CONFIG_IWL3945_DEBUG
++ iwl3945_debug_level = iwl3945_param_debug;
+ atomic_set(&priv->restrict_refcnt, 0);
+ #endif
+ priv->retry_rate = 1;
+@@ -8399,6 +8465,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ /* Tell mac80211 our Tx characteristics */
+ hw->flags = IEEE80211_HW_HOST_GEN_BEACON_TEMPLATE;
+
++ /* 4 EDCA QOS priorities */
+ hw->queues = 4;
+
+ spin_lock_init(&priv->lock);
+@@ -8419,7 +8486,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ pci_set_master(pdev);
+
+- iwl_clear_stations_table(priv);
++ /* Clear the driver's (not device's) station table */
++ iwl3945_clear_stations_table(priv);
+
+ priv->data_retry_limit = -1;
+ priv->ieee_channels = NULL;
+@@ -8438,9 +8506,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ err = pci_request_regions(pdev, DRV_NAME);
+ if (err)
+ goto out_pci_disable_device;
++
+ /* We disable the RETRY_TIMEOUT register (0x41) to keep
+ * PCI Tx retries from interfering with C3 CPU state */
+ pci_write_config_byte(pdev, 0x41, 0x00);
++
+ priv->hw_base = pci_iomap(pdev, 0, 0);
+ if (!priv->hw_base) {
+ err = -ENODEV;
+@@ -8453,7 +8523,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ /* Initialize module parameter values here */
+
+- if (iwl_param_disable) {
++ /* Disable radio (SW RF KILL) via parameter when loading driver */
++ if (iwl3945_param_disable) {
+ set_bit(STATUS_RF_KILL_SW, &priv->status);
+ IWL_DEBUG_INFO("Radio disabled.\n");
+ }
+@@ -8488,78 +8559,82 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ priv->is_abg ? "A" : "");
+
+ /* Device-specific setup */
+- if (iwl_hw_set_hw_setting(priv)) {
++ if (iwl3945_hw_set_hw_setting(priv)) {
+ IWL_ERROR("failed to set hw settings\n");
+- mutex_unlock(&priv->mutex);
+ goto out_iounmap;
+ }
+
+-#ifdef CONFIG_IWLWIFI_QOS
+- if (iwl_param_qos_enable)
++#ifdef CONFIG_IWL3945_QOS
++ if (iwl3945_param_qos_enable)
+ priv->qos_data.qos_enable = 1;
+
+- iwl_reset_qos(priv);
++ iwl3945_reset_qos(priv);
+
+ priv->qos_data.qos_active = 0;
+ priv->qos_data.qos_cap.val = 0;
+-#endif /* CONFIG_IWLWIFI_QOS */
++#endif /* CONFIG_IWL3945_QOS */
+
+- iwl_set_rxon_channel(priv, MODE_IEEE80211G, 6);
+- iwl_setup_deferred_work(priv);
+- iwl_setup_rx_handlers(priv);
++ iwl3945_set_rxon_channel(priv, MODE_IEEE80211G, 6);
++ iwl3945_setup_deferred_work(priv);
++ iwl3945_setup_rx_handlers(priv);
+
+ priv->rates_mask = IWL_RATES_MASK;
+ /* If power management is turned on, default to AC mode */
+ priv->power_mode = IWL_POWER_AC;
+ priv->user_txpower_limit = IWL_DEFAULT_TX_POWER;
+
+- pci_enable_msi(pdev);
++ iwl3945_disable_interrupts(priv);
+
+- err = request_irq(pdev->irq, iwl_isr, IRQF_SHARED, DRV_NAME, priv);
+- if (err) {
+- IWL_ERROR("Error allocating IRQ %d\n", pdev->irq);
+- goto out_disable_msi;
+- }
-
--#define iwl_read_restricted_reg(priv, reg) \
-- __iwl_read_restricted_reg(__LINE__, priv, reg)
--#else
--#define iwl_read_restricted_reg _iwl_read_restricted_reg
--#endif
+- mutex_lock(&priv->mutex);
-
--static inline void _iwl_write_restricted_reg(struct iwl_priv *priv,
-- u32 addr, u32 val)
--{
-- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WADDR,
-- ((addr & 0x0000FFFF) | (3 << 24)));
-- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WDAT, val);
--}
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_write_restricted_reg(u32 line,
-- struct iwl_priv *priv,
-- u32 addr, u32 val)
--{
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from line %d\n", line);
-- _iwl_write_restricted_reg(priv, addr, val);
--}
+- err = sysfs_create_group(&pdev->dev.kobj, &iwl_attribute_group);
++ err = sysfs_create_group(&pdev->dev.kobj, &iwl3945_attribute_group);
+ if (err) {
+ IWL_ERROR("failed to create sysfs device attributes\n");
+- mutex_unlock(&priv->mutex);
+ goto out_release_irq;
+ }
+
+- /* fetch ucode file from disk, alloc and copy to bus-master buffers ...
+- * ucode filename and max sizes are card-specific. */
+- err = iwl_read_ucode(priv);
++ /* nic init */
++ iwl3945_set_bit(priv, CSR_GIO_CHICKEN_BITS,
++ CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER);
++
++ iwl3945_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
++ err = iwl3945_poll_bit(priv, CSR_GP_CNTRL,
++ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
++ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
++ if (err < 0) {
++ IWL_DEBUG_INFO("Failed to init the card\n");
++ goto out_remove_sysfs;
++ }
++ /* Read the EEPROM */
++ err = iwl3945_eeprom_init(priv);
+ if (err) {
+- IWL_ERROR("Could not read microcode: %d\n", err);
+- mutex_unlock(&priv->mutex);
+- goto out_pci_alloc;
++ IWL_ERROR("Unable to init EEPROM\n");
++ goto out_remove_sysfs;
+ }
++ /* MAC Address location in EEPROM same for 3945/4965 */
++ get_eeprom_mac(priv, priv->mac_addr);
++ IWL_DEBUG_INFO("MAC address: %s\n", print_mac(mac, priv->mac_addr));
++ SET_IEEE80211_PERM_ADDR(priv->hw, priv->mac_addr);
+
+- mutex_unlock(&priv->mutex);
-
--#define iwl_write_restricted_reg(priv, addr, val) \
-- __iwl_write_restricted_reg(__LINE__, priv, addr, val);
--#else
--#define iwl_write_restricted_reg _iwl_write_restricted_reg
--#endif
+- IWL_DEBUG_INFO("Queing UP work.\n");
++ iwl3945_rate_control_register(priv->hw);
++ err = ieee80211_register_hw(priv->hw);
++ if (err) {
++ IWL_ERROR("Failed to register network device (error %d)\n", err);
++ goto out_remove_sysfs;
++ }
+
+- queue_work(priv->workqueue, &priv->up);
++ priv->hw->conf.beacon_int = 100;
++ priv->mac80211_registered = 1;
++ pci_save_state(pdev);
++ pci_disable_device(pdev);
+
+ return 0;
+
+- out_pci_alloc:
+- iwl_dealloc_ucode_pci(priv);
-
--#define _iwl_set_bits_restricted_reg(priv, reg, mask) \
-- _iwl_write_restricted_reg(priv, reg, \
-- (_iwl_read_restricted_reg(priv, reg) | mask))
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_set_bits_restricted_reg(u32 line, struct iwl_priv
-- *priv, u32 reg, u32 mask)
--{
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from line %d\n", line);
-- _iwl_set_bits_restricted_reg(priv, reg, mask);
--}
--#define iwl_set_bits_restricted_reg(priv, reg, mask) \
-- __iwl_set_bits_restricted_reg(__LINE__, priv, reg, mask)
--#else
--#define iwl_set_bits_restricted_reg _iwl_set_bits_restricted_reg
--#endif
+- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
++ out_remove_sysfs:
++ sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group);
+
+ out_release_irq:
+- free_irq(pdev->irq, priv);
-
--#define _iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask) \
-- _iwl_write_restricted_reg( \
-- priv, reg, ((_iwl_read_restricted_reg(priv, reg) & mask) | bits))
--#ifdef CONFIG_IWLWIFI_DEBUG
--static inline void __iwl_set_bits_mask_restricted_reg(u32 line,
-- struct iwl_priv *priv, u32 reg, u32 bits, u32 mask)
--{
-- if (!atomic_read(&priv->restrict_refcnt))
-- IWL_ERROR("Unrestricted access from line %d\n", line);
-- _iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask);
--}
+- out_disable_msi:
+- pci_disable_msi(pdev);
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
+- iwl_unset_hw_setting(priv);
++ iwl3945_unset_hw_setting(priv);
+
+ out_iounmap:
+ pci_iounmap(pdev, priv->hw_base);
+@@ -8574,9 +8649,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return err;
+ }
+
+-static void iwl_pci_remove(struct pci_dev *pdev)
++static void iwl3945_pci_remove(struct pci_dev *pdev)
+ {
+- struct iwl_priv *priv = pci_get_drvdata(pdev);
++ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
+ struct list_head *p, *q;
+ int i;
+
+@@ -8587,43 +8662,41 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+
+ set_bit(STATUS_EXIT_PENDING, &priv->status);
+
+- iwl_down(priv);
++ iwl3945_down(priv);
+
+ /* Free MAC hash list for ADHOC */
+ for (i = 0; i < IWL_IBSS_MAC_HASH_SIZE; i++) {
+ list_for_each_safe(p, q, &priv->ibss_mac_hash[i]) {
+ list_del(p);
+- kfree(list_entry(p, struct iwl_ibss_seq, list));
++ kfree(list_entry(p, struct iwl3945_ibss_seq, list));
+ }
+ }
+
+- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
++ sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group);
+
+- iwl_dealloc_ucode_pci(priv);
++ iwl3945_dealloc_ucode_pci(priv);
+
+ if (priv->rxq.bd)
+- iwl_rx_queue_free(priv, &priv->rxq);
+- iwl_hw_txq_ctx_free(priv);
++ iwl3945_rx_queue_free(priv, &priv->rxq);
++ iwl3945_hw_txq_ctx_free(priv);
+
+- iwl_unset_hw_setting(priv);
+- iwl_clear_stations_table(priv);
++ iwl3945_unset_hw_setting(priv);
++ iwl3945_clear_stations_table(priv);
+
+ if (priv->mac80211_registered) {
+ ieee80211_unregister_hw(priv->hw);
+- iwl_rate_control_unregister(priv->hw);
++ iwl3945_rate_control_unregister(priv->hw);
+ }
+
+ /*netif_stop_queue(dev); */
+ flush_workqueue(priv->workqueue);
+
+- /* ieee80211_unregister_hw calls iwl_mac_stop, which flushes
++ /* ieee80211_unregister_hw calls iwl3945_mac_stop, which flushes
+ * priv->workqueue... so we can't take down the workqueue
+ * until now... */
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
+
+- free_irq(pdev->irq, priv);
+- pci_disable_msi(pdev);
+ pci_iounmap(pdev, priv->hw_base);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+@@ -8642,93 +8715,31 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+
+ #ifdef CONFIG_PM
+
+-static int iwl_pci_suspend(struct pci_dev *pdev, pm_message_t state)
++static int iwl3945_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ {
+- struct iwl_priv *priv = pci_get_drvdata(pdev);
++ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
+
+- set_bit(STATUS_IN_SUSPEND, &priv->status);
-
--#define iwl_set_bits_mask_restricted_reg(priv, reg, bits, mask) \
-- __iwl_set_bits_mask_restricted_reg(__LINE__, priv, reg, bits, mask)
--#else
--#define iwl_set_bits_mask_restricted_reg _iwl_set_bits_mask_restricted_reg
--#endif
+- /* Take down the device; powers it off, etc. */
+- iwl_down(priv);
-
--static inline void iwl_clear_bits_restricted_reg(struct iwl_priv
-- *priv, u32 reg, u32 mask)
+- if (priv->mac80211_registered)
+- ieee80211_stop_queues(priv->hw);
++ if (priv->is_open) {
++ set_bit(STATUS_IN_SUSPEND, &priv->status);
++ iwl3945_mac_stop(priv->hw);
++ priv->is_open = 1;
++ }
+
+- pci_save_state(pdev);
+- pci_disable_device(pdev);
+ pci_set_power_state(pdev, PCI_D3hot);
+
+ return 0;
+ }
+
+-static void iwl_resume(struct iwl_priv *priv)
-{
-- u32 val = _iwl_read_restricted_reg(priv, reg);
-- _iwl_write_restricted_reg(priv, reg, (val & ~mask));
--}
+- unsigned long flags;
-
--static inline u32 iwl_read_restricted_mem(struct iwl_priv *priv, u32 addr)
--{
-- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR, addr);
-- return iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
--}
+- /* The following it a temporary work around due to the
+- * suspend / resume not fully initializing the NIC correctly.
+- * Without all of the following, resume will not attempt to take
+- * down the NIC (it shouldn't really need to) and will just try
+- * and bring the NIC back up. However that fails during the
+- * ucode verification process. This then causes iwl_down to be
+- * called *after* iwl_hw_nic_init() has succeeded -- which
+- * then lets the next init sequence succeed. So, we've
+- * replicated all of that NIC init code here... */
-
--static inline void iwl_write_restricted_mem(struct iwl_priv *priv, u32 addr,
-- u32 val)
--{
-- iwl_write_restricted(priv, HBUS_TARG_MEM_WADDR, addr);
-- iwl_write_restricted(priv, HBUS_TARG_MEM_WDAT, val);
--}
+- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
-
--static inline void iwl_write_restricted_mems(struct iwl_priv *priv, u32 addr,
-- u32 len, u32 *values)
--{
-- iwl_write_restricted(priv, HBUS_TARG_MEM_WADDR, addr);
-- for (; 0 < len; len -= sizeof(u32), values++)
-- iwl_write_restricted(priv, HBUS_TARG_MEM_WDAT, *values);
--}
+- iwl_hw_nic_init(priv);
-
--static inline void iwl_write_restricted_regs(struct iwl_priv *priv, u32 reg,
-- u32 len, u8 *values)
--{
-- u32 reg_offset = reg;
-- u32 aligment = reg & 0x3;
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
+- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-
-- /* write any non-dword-aligned stuff at the beginning */
-- if (len < sizeof(u32)) {
-- if ((aligment + len) <= sizeof(u32)) {
-- u8 size;
-- u32 value = 0;
-- size = len - 1;
-- memcpy(&value, values, len);
-- reg_offset = (reg_offset & 0x0000FFFF);
+- /* tell the device to stop sending interrupts */
+- iwl_disable_interrupts(priv);
-
-- _iwl_write_restricted(priv,
-- HBUS_TARG_PRPH_WADDR,
-- (reg_offset | (size << 24)));
-- _iwl_write_restricted(priv, HBUS_TARG_PRPH_WDAT,
-- value);
-- }
+- spin_lock_irqsave(&priv->lock, flags);
+- iwl_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-
-- return;
+- if (!iwl_grab_restricted_access(priv)) {
+- iwl_write_restricted_reg(priv, APMG_CLK_DIS_REG,
+- APMG_CLK_VAL_DMA_CLK_RQT);
+- iwl_release_restricted_access(priv);
- }
+- spin_unlock_irqrestore(&priv->lock, flags);
-
-- /* now write all the dword-aligned stuff */
-- for (; reg_offset < (reg + len);
-- reg_offset += sizeof(u32), values += sizeof(u32))
-- _iwl_write_restricted_reg(priv, reg_offset, *((u32 *) values));
--}
--
--#endif
-diff --git a/drivers/net/wireless/iwlwifi/iwl-priv.h b/drivers/net/wireless/iwlwifi/iwl-priv.h
-deleted file mode 100644
-index 6b490d0..0000000
---- a/drivers/net/wireless/iwlwifi/iwl-priv.h
-+++ /dev/null
-@@ -1,308 +0,0 @@
--/******************************************************************************
-- *
-- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of version 2 of the GNU General Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but WITHOUT
-- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-- * more details.
-- *
-- * You should have received a copy of the GNU General Public License along with
-- * this program; if not, write to the Free Software Foundation, Inc.,
-- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
-- *
-- * The full GNU General Public License is included in this distribution in the
-- * file called LICENSE.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- *****************************************************************************/
--
--#ifndef __iwl_priv_h__
--#define __iwl_priv_h__
--
--#include <linux/workqueue.h>
--
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
--
--enum {
-- MEASUREMENT_READY = (1 << 0),
-- MEASUREMENT_ACTIVE = (1 << 1),
--};
--
--#endif
--
--struct iwl_priv {
--
-- /* ieee device used by generic ieee processing code */
-- struct ieee80211_hw *hw;
-- struct ieee80211_channel *ieee_channels;
-- struct ieee80211_rate *ieee_rates;
--
-- /* temporary frame storage list */
-- struct list_head free_frames;
-- int frames_count;
--
-- u8 phymode;
-- int alloc_rxb_skb;
--
-- void (*rx_handlers[REPLY_MAX])(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb);
--
-- const struct ieee80211_hw_mode *modes;
--
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-- /* spectrum measurement report caching */
-- struct iwl_spectrum_notification measure_report;
-- u8 measurement_status;
--#endif
-- /* ucode beacon time */
-- u32 ucode_beacon_time;
--
-- /* we allocate array of iwl_channel_info for NIC's valid channels.
-- * Access via channel # using indirect index array */
-- struct iwl_channel_info *channel_info; /* channel info array */
-- u8 channel_count; /* # of channels */
--
-- /* each calibration channel group in the EEPROM has a derived
-- * clip setting for each rate. */
-- const struct iwl_clip_group clip_groups[5];
--
-- /* thermal calibration */
-- s32 temperature; /* degrees Kelvin */
-- s32 last_temperature;
--
-- /* Scan related variables */
-- unsigned long last_scan_jiffies;
-- unsigned long scan_start;
-- unsigned long scan_pass_start;
-- unsigned long scan_start_tsf;
-- int scan_bands;
-- int one_direct_scan;
-- u8 direct_ssid_len;
-- u8 direct_ssid[IW_ESSID_MAX_SIZE];
-- struct iwl_scan_cmd *scan;
-- u8 only_active_channel;
--
-- /* spinlock */
-- spinlock_t lock; /* protect general shared data */
-- spinlock_t hcmd_lock; /* protect hcmd */
-- struct mutex mutex;
--
-- /* basic pci-network driver stuff */
-- struct pci_dev *pci_dev;
--
-- /* pci hardware address support */
-- void __iomem *hw_base;
--
-- /* uCode images, save to reload in case of failure */
-- struct fw_image_desc ucode_code; /* runtime inst */
-- struct fw_image_desc ucode_data; /* runtime data original */
-- struct fw_image_desc ucode_data_backup; /* runtime data save/restore */
-- struct fw_image_desc ucode_init; /* initialization inst */
-- struct fw_image_desc ucode_init_data; /* initialization data */
-- struct fw_image_desc ucode_boot; /* bootstrap inst */
--
--
-- struct iwl_rxon_time_cmd rxon_timing;
--
-- /* We declare this const so it can only be
-- * changed via explicit cast within the
-- * routines that actually update the physical
-- * hardware */
-- const struct iwl_rxon_cmd active_rxon;
-- struct iwl_rxon_cmd staging_rxon;
--
-- int error_recovering;
-- struct iwl_rxon_cmd recovery_rxon;
--
-- /* 1st responses from initialize and runtime uCode images.
-- * 4965's initialize alive response contains some calibration data. */
-- struct iwl_init_alive_resp card_alive_init;
-- struct iwl_alive_resp card_alive;
--
--#ifdef LED
-- /* LED related variables */
-- struct iwl_activity_blink activity;
-- unsigned long led_packets;
-- int led_state;
--#endif
--
-- u16 active_rate;
-- u16 active_rate_basic;
--
-- u8 call_post_assoc_from_beacon;
-- u8 assoc_station_added;
--#if IWL == 4965
-- u8 use_ant_b_for_management_frame; /* Tx antenna selection */
-- /* HT variables */
-- u8 is_dup;
-- u8 is_ht_enabled;
-- u8 channel_width; /* 0=20MHZ, 1=40MHZ */
-- u8 current_channel_width;
-- u8 valid_antenna; /* Bit mask of antennas actually connected */
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-- struct iwl_sensitivity_data sensitivity_data;
-- struct iwl_chain_noise_data chain_noise_data;
-- u8 start_calib;
-- __le16 sensitivity_tbl[HD_TABLE_SIZE];
--#endif /*CONFIG_IWLWIFI_SENSITIVITY*/
--
--#ifdef CONFIG_IWLWIFI_HT
-- struct sta_ht_info current_assoc_ht;
--#endif
-- u8 active_rate_ht[2];
-- u8 last_phy_res[100];
--
-- /* Rate scaling data */
-- struct iwl_lq_mngr lq_mngr;
--#endif
--
-- /* Rate scaling data */
-- s8 data_retry_limit;
-- u8 retry_rate;
--
-- wait_queue_head_t wait_command_queue;
--
-- int activity_timer_active;
--
-- /* Rx and Tx DMA processing queues */
-- struct iwl_rx_queue rxq;
-- struct iwl_tx_queue txq[IWL_MAX_NUM_QUEUES];
--#if IWL == 4965
-- unsigned long txq_ctx_active_msk;
-- struct iwl_kw kw; /* keep warm address */
-- u32 scd_base_addr; /* scheduler sram base address */
--#endif
--
-- unsigned long status;
-- u32 config;
--
-- int last_rx_rssi; /* From Rx packet statisitics */
-- int last_rx_noise; /* From beacon statistics */
--
-- struct iwl_power_mgr power_data;
--
-- struct iwl_notif_statistics statistics;
-- unsigned long last_statistics_time;
--
-- /* context information */
-- u8 essid[IW_ESSID_MAX_SIZE];
-- u8 essid_len;
-- u16 rates_mask;
--
-- u32 power_mode;
-- u32 antenna;
-- u8 bssid[ETH_ALEN];
-- u16 rts_threshold;
-- u8 mac_addr[ETH_ALEN];
--
-- /*station table variables */
-- spinlock_t sta_lock;
-- int num_stations;
-- struct iwl_station_entry stations[IWL_STATION_COUNT];
--
-- /* Indication if ieee80211_ops->open has been called */
-- int is_open;
--
-- u8 mac80211_registered;
-- int is_abg;
--
-- u32 notif_missed_beacons;
--
-- /* Rx'd packet timing information */
-- u32 last_beacon_time;
-- u64 last_tsf;
--
-- /* Duplicate packet detection */
-- u16 last_seq_num;
-- u16 last_frag_num;
-- unsigned long last_packet_time;
-- struct list_head ibss_mac_hash[IWL_IBSS_MAC_HASH_SIZE];
--
-- /* eeprom */
-- struct iwl_eeprom eeprom;
--
-- int iw_mode;
--
-- struct sk_buff *ibss_beacon;
--
-- /* Last Rx'd beacon timestamp */
-- u32 timestamp0;
-- u32 timestamp1;
-- u16 beacon_int;
-- struct iwl_driver_hw_info hw_setting;
-- int interface_id;
--
-- /* Current association information needed to configure the
-- * hardware */
-- u16 assoc_id;
-- u16 assoc_capability;
-- u8 ps_mode;
--
--#ifdef CONFIG_IWLWIFI_QOS
-- struct iwl_qos_info qos_data;
--#endif /*CONFIG_IWLWIFI_QOS */
--
-- struct workqueue_struct *workqueue;
--
-- struct work_struct up;
-- struct work_struct restart;
-- struct work_struct calibrated_work;
-- struct work_struct scan_completed;
-- struct work_struct rx_replenish;
-- struct work_struct rf_kill;
-- struct work_struct abort_scan;
-- struct work_struct update_link_led;
-- struct work_struct auth_work;
-- struct work_struct report_work;
-- struct work_struct request_scan;
-- struct work_struct beacon_update;
+- udelay(5);
-
-- struct tasklet_struct irq_tasklet;
+- iwl_hw_nic_reset(priv);
-
-- struct delayed_work init_alive_start;
-- struct delayed_work alive_start;
-- struct delayed_work activity_timer;
-- struct delayed_work thermal_periodic;
-- struct delayed_work gather_stats;
-- struct delayed_work scan_check;
-- struct delayed_work post_associate;
+- /* Bring the device back up */
+- clear_bit(STATUS_IN_SUSPEND, &priv->status);
+- queue_work(priv->workqueue, &priv->up);
+-}
-
--#define IWL_DEFAULT_TX_POWER 0x0F
-- s8 user_txpower_limit;
-- s8 max_channel_txpower_limit;
-- u32 cck_power_index_compensation;
+-static int iwl_pci_resume(struct pci_dev *pdev)
++static int iwl3945_pci_resume(struct pci_dev *pdev)
+ {
+- struct iwl_priv *priv = pci_get_drvdata(pdev);
+- int err;
-
--#ifdef CONFIG_PM
-- u32 pm_state[16];
--#endif
+- printk(KERN_INFO "Coming out of suspend...\n");
++ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
+
+ pci_set_power_state(pdev, PCI_D0);
+- err = pci_enable_device(pdev);
+- pci_restore_state(pdev);
+
+- /*
+- * Suspend/Resume resets the PCI configuration space, so we have to
+- * re-disable the RETRY_TIMEOUT register (0x41) to keep PCI Tx retries
+- * from interfering with C3 CPU state. pci_restore_state won't help
+- * here since it only restores the first 64 bytes pci config header.
+- */
+- pci_write_config_byte(pdev, 0x41, 0x00);
-
+- iwl_resume(priv);
++ if (priv->is_open)
++ iwl3945_mac_start(priv->hw);
+
++ clear_bit(STATUS_IN_SUSPEND, &priv->status);
+ return 0;
+ }
+
+@@ -8740,33 +8751,33 @@ static int iwl_pci_resume(struct pci_dev *pdev)
+ *
+ *****************************************************************************/
+
+-static struct pci_driver iwl_driver = {
++static struct pci_driver iwl3945_driver = {
+ .name = DRV_NAME,
+- .id_table = iwl_hw_card_ids,
+- .probe = iwl_pci_probe,
+- .remove = __devexit_p(iwl_pci_remove),
++ .id_table = iwl3945_hw_card_ids,
++ .probe = iwl3945_pci_probe,
++ .remove = __devexit_p(iwl3945_pci_remove),
+ #ifdef CONFIG_PM
+- .suspend = iwl_pci_suspend,
+- .resume = iwl_pci_resume,
++ .suspend = iwl3945_pci_suspend,
++ .resume = iwl3945_pci_resume,
+ #endif
+ };
+
+-static int __init iwl_init(void)
++static int __init iwl3945_init(void)
+ {
+
+ int ret;
+ printk(KERN_INFO DRV_NAME ": " DRV_DESCRIPTION ", " DRV_VERSION "\n");
+ printk(KERN_INFO DRV_NAME ": " DRV_COPYRIGHT "\n");
+- ret = pci_register_driver(&iwl_driver);
++ ret = pci_register_driver(&iwl3945_driver);
+ if (ret) {
+ IWL_ERROR("Unable to initialize PCI module\n");
+ return ret;
+ }
+-#ifdef CONFIG_IWLWIFI_DEBUG
+- ret = driver_create_file(&iwl_driver.driver, &driver_attr_debug_level);
++#ifdef CONFIG_IWL3945_DEBUG
++ ret = driver_create_file(&iwl3945_driver.driver, &driver_attr_debug_level);
+ if (ret) {
+ IWL_ERROR("Unable to create driver sysfs file\n");
+- pci_unregister_driver(&iwl_driver);
++ pci_unregister_driver(&iwl3945_driver);
+ return ret;
+ }
+ #endif
+@@ -8774,32 +8785,32 @@ static int __init iwl_init(void)
+ return ret;
+ }
+
+-static void __exit iwl_exit(void)
++static void __exit iwl3945_exit(void)
+ {
-#ifdef CONFIG_IWLWIFI_DEBUG
-- /* debugging info */
-- u32 framecnt_to_us;
-- atomic_t restrict_refcnt;
--#endif
--
--#if IWL == 4965
-- struct work_struct txpower_work;
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-- struct work_struct sensitivity_work;
--#endif
-- struct work_struct statistics_work;
-- struct timer_list statistics_periodic;
--
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- struct work_struct agg_work;
--#endif
--
--#endif /* 4965 */
--}; /*iwl_priv */
--
--#endif /* __iwl_priv_h__ */
-diff --git a/drivers/net/wireless/iwlwifi/iwl-prph.h b/drivers/net/wireless/iwlwifi/iwl-prph.h
-index 0df4114..4ba1216 100644
---- a/drivers/net/wireless/iwlwifi/iwl-prph.h
-+++ b/drivers/net/wireless/iwlwifi/iwl-prph.h
-@@ -8,7 +8,7 @@
- * Copyright(c) 2005 - 2007 Intel Corporation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of version 2 of the GNU Geeral Public License as
-+ * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
-@@ -63,7 +63,10 @@
- #ifndef __iwl_prph_h__
- #define __iwl_prph_h__
+- driver_remove_file(&iwl_driver.driver, &driver_attr_debug_level);
++#ifdef CONFIG_IWL3945_DEBUG
++ driver_remove_file(&iwl3945_driver.driver, &driver_attr_debug_level);
+ #endif
+- pci_unregister_driver(&iwl_driver);
++ pci_unregister_driver(&iwl3945_driver);
+ }
--
-+/*
-+ * Registers in this file are internal, not PCI bus memory mapped.
-+ * Driver accesses these via HBUS_TARG_PRPH_* registers.
-+ */
- #define PRPH_BASE (0x00000)
- #define PRPH_END (0xFFFFF)
+-module_param_named(antenna, iwl_param_antenna, int, 0444);
++module_param_named(antenna, iwl3945_param_antenna, int, 0444);
+ MODULE_PARM_DESC(antenna, "select antenna (1=Main, 2=Aux, default 0 [both])");
+-module_param_named(disable, iwl_param_disable, int, 0444);
++module_param_named(disable, iwl3945_param_disable, int, 0444);
+ MODULE_PARM_DESC(disable, "manually disable the radio (default 0 [radio on])");
+-module_param_named(hwcrypto, iwl_param_hwcrypto, int, 0444);
++module_param_named(hwcrypto, iwl3945_param_hwcrypto, int, 0444);
+ MODULE_PARM_DESC(hwcrypto,
+ "using hardware crypto engine (default 0 [software])\n");
+-module_param_named(debug, iwl_param_debug, int, 0444);
++module_param_named(debug, iwl3945_param_debug, int, 0444);
+ MODULE_PARM_DESC(debug, "debug output mask");
+-module_param_named(disable_hw_scan, iwl_param_disable_hw_scan, int, 0444);
++module_param_named(disable_hw_scan, iwl3945_param_disable_hw_scan, int, 0444);
+ MODULE_PARM_DESC(disable_hw_scan, "disable hardware scanning (default 0)");
-@@ -226,4 +229,58 @@
- #define BSM_SRAM_SIZE (1024) /* bytes */
+-module_param_named(queues_num, iwl_param_queues_num, int, 0444);
++module_param_named(queues_num, iwl3945_param_queues_num, int, 0444);
+ MODULE_PARM_DESC(queues_num, "number of hw queues.");
+ /* QoS */
+-module_param_named(qos_enable, iwl_param_qos_enable, int, 0444);
++module_param_named(qos_enable, iwl3945_param_qos_enable, int, 0444);
+ MODULE_PARM_DESC(qos_enable, "enable all QoS functionality");
-+/* 3945 Tx scheduler registers */
-+#define ALM_SCD_BASE (PRPH_BASE + 0x2E00)
-+#define ALM_SCD_MODE_REG (ALM_SCD_BASE + 0x000)
-+#define ALM_SCD_ARASTAT_REG (ALM_SCD_BASE + 0x004)
-+#define ALM_SCD_TXFACT_REG (ALM_SCD_BASE + 0x010)
-+#define ALM_SCD_TXF4MF_REG (ALM_SCD_BASE + 0x014)
-+#define ALM_SCD_TXF5MF_REG (ALM_SCD_BASE + 0x020)
-+#define ALM_SCD_SBYP_MODE_1_REG (ALM_SCD_BASE + 0x02C)
-+#define ALM_SCD_SBYP_MODE_2_REG (ALM_SCD_BASE + 0x030)
-+
-+/*
-+ * 4965 Tx Scheduler registers.
-+ * Details are documented in iwl-4965-hw.h
-+ */
-+#define KDR_SCD_BASE (PRPH_BASE + 0xa02c00)
-+
-+#define KDR_SCD_SRAM_BASE_ADDR (KDR_SCD_BASE + 0x0)
-+#define KDR_SCD_EMPTY_BITS (KDR_SCD_BASE + 0x4)
-+#define KDR_SCD_DRAM_BASE_ADDR (KDR_SCD_BASE + 0x10)
-+#define KDR_SCD_AIT (KDR_SCD_BASE + 0x18)
-+#define KDR_SCD_TXFACT (KDR_SCD_BASE + 0x1c)
-+#define KDR_SCD_QUEUE_WRPTR(x) (KDR_SCD_BASE + 0x24 + (x) * 4)
-+#define KDR_SCD_QUEUE_RDPTR(x) (KDR_SCD_BASE + 0x64 + (x) * 4)
-+#define KDR_SCD_SETQUEUENUM (KDR_SCD_BASE + 0xa4)
-+#define KDR_SCD_SET_TXSTAT_TXED (KDR_SCD_BASE + 0xa8)
-+#define KDR_SCD_SET_TXSTAT_DONE (KDR_SCD_BASE + 0xac)
-+#define KDR_SCD_SET_TXSTAT_NOT_SCHD (KDR_SCD_BASE + 0xb0)
-+#define KDR_SCD_DECREASE_CREDIT (KDR_SCD_BASE + 0xb4)
-+#define KDR_SCD_DECREASE_SCREDIT (KDR_SCD_BASE + 0xb8)
-+#define KDR_SCD_LOAD_CREDIT (KDR_SCD_BASE + 0xbc)
-+#define KDR_SCD_LOAD_SCREDIT (KDR_SCD_BASE + 0xc0)
-+#define KDR_SCD_BAR (KDR_SCD_BASE + 0xc4)
-+#define KDR_SCD_BAR_DW0 (KDR_SCD_BASE + 0xc8)
-+#define KDR_SCD_BAR_DW1 (KDR_SCD_BASE + 0xcc)
-+#define KDR_SCD_QUEUECHAIN_SEL (KDR_SCD_BASE + 0xd0)
-+#define KDR_SCD_QUERY_REQ (KDR_SCD_BASE + 0xd8)
-+#define KDR_SCD_QUERY_RES (KDR_SCD_BASE + 0xdc)
-+#define KDR_SCD_PENDING_FRAMES (KDR_SCD_BASE + 0xe0)
-+#define KDR_SCD_INTERRUPT_MASK (KDR_SCD_BASE + 0xe4)
-+#define KDR_SCD_INTERRUPT_THRESHOLD (KDR_SCD_BASE + 0xe8)
-+#define KDR_SCD_QUERY_MIN_FRAME_SIZE (KDR_SCD_BASE + 0x100)
-+#define KDR_SCD_QUEUE_STATUS_BITS(x) (KDR_SCD_BASE + 0x104 + (x) * 4)
-+
-+/* SP SCD */
-+#define SHL_SCD_BASE (PRPH_BASE + 0xa02c00)
-+
-+#define SHL_SCD_AIT (SHL_SCD_BASE + 0x0c)
-+#define SHL_SCD_TXFACT (SHL_SCD_BASE + 0x10)
-+#define SHL_SCD_QUEUE_WRPTR(x) (SHL_SCD_BASE + 0x18 + (x) * 4)
-+#define SHL_SCD_QUEUE_RDPTR(x) (SHL_SCD_BASE + 0x68 + (x) * 4)
-+#define SHL_SCD_QUEUECHAIN_SEL (SHL_SCD_BASE + 0xe8)
-+#define SHL_SCD_AGGR_SEL (SHL_SCD_BASE + 0x248)
-+#define SHL_SCD_INTERRUPT_MASK (SHL_SCD_BASE + 0x108)
-+
- #endif /* __iwl_prph_h__ */
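[Editorial note: the scheduler register blocks added in the iwl-prph.h hunk above are plain base-plus-offset maps, with per-queue registers laid out at a fixed 4-byte stride. As a hedged illustration — a userspace model using the KDR_SCD_* values copied from the hunk, not driver code — the per-queue macros expand like this:]

```c
/* Userspace sketch of the 4965 Tx scheduler register map from the hunk
 * above: each per-queue register is a fixed base plus a 4-byte stride
 * per queue index. Values are copied from the KDR_SCD_* defines. */
#define PRPH_BASE               0x00000
#define KDR_SCD_BASE            (PRPH_BASE + 0xa02c00)
#define KDR_SCD_QUEUE_WRPTR(x)  (KDR_SCD_BASE + 0x24 + (x) * 4)
#define KDR_SCD_QUEUE_RDPTR(x)  (KDR_SCD_BASE + 0x64 + (x) * 4)
```

So queue 0's write pointer sits at 0xa02c24, queue 1's at 0xa02c28, and so on; the driver reaches these internal (non-PCI-mapped) addresses indirectly through the HBUS_TARG_PRPH_* registers, as the new comment in the hunk notes.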
-diff --git a/drivers/net/wireless/iwlwifi/iwl3945-base.c b/drivers/net/wireless/iwlwifi/iwl3945-base.c
-index 0b3ec7e..748ac12 100644
---- a/drivers/net/wireless/iwlwifi/iwl3945-base.c
-+++ b/drivers/net/wireless/iwlwifi/iwl3945-base.c
+-module_exit(iwl_exit);
+-module_init(iwl_init);
++module_exit(iwl3945_exit);
++module_init(iwl3945_init);
+diff --git a/drivers/net/wireless/iwlwifi/iwl4965-base.c b/drivers/net/wireless/iwlwifi/iwl4965-base.c
+index 15a45f4..c86da5c 100644
+--- a/drivers/net/wireless/iwlwifi/iwl4965-base.c
++++ b/drivers/net/wireless/iwlwifi/iwl4965-base.c
@@ -27,16 +27,6 @@
*
*****************************************************************************/
@@ -385686,29 +466142,34 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/version.h>
-@@ -56,16 +46,16 @@
+@@ -51,21 +41,20 @@
+ #include <linux/etherdevice.h>
+ #include <linux/if_arp.h>
+
+-#include <net/ieee80211_radiotap.h>
+ #include <net/mac80211.h>
#include <asm/div64.h>
--#define IWL 3945
+-#define IWL 4965
-
-#include "iwlwifi.h"
- #include "iwl-3945.h"
+ #include "iwl-4965.h"
#include "iwl-helpers.h"
-#ifdef CONFIG_IWLWIFI_DEBUG
-u32 iwl_debug_level;
-+#ifdef CONFIG_IWL3945_DEBUG
-+u32 iwl3945_debug_level;
++#ifdef CONFIG_IWL4965_DEBUG
++u32 iwl4965_debug_level;
#endif
-+static int iwl3945_tx_queue_update_write_ptr(struct iwl3945_priv *priv,
-+ struct iwl3945_tx_queue *txq);
++static int iwl4965_tx_queue_update_write_ptr(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq);
+
/******************************************************************************
*
* module boiler plate
-@@ -73,13 +63,13 @@ u32 iwl_debug_level;
+@@ -73,13 +62,14 @@ u32 iwl_debug_level;
******************************************************************************/
/* module parameters */
@@ -385719,29 +466180,30 @@
-int iwl_param_hwcrypto; /* def: using software encryption */
-int iwl_param_qos_enable = 1;
-int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
-+static int iwl3945_param_disable_hw_scan; /* def: 0 = use 3945's h/w scan */
-+static int iwl3945_param_debug; /* def: 0 = minimal debug log messages */
-+static int iwl3945_param_disable; /* def: 0 = enable radio */
-+static int iwl3945_param_antenna; /* def: 0 = both antennas (use diversity) */
-+int iwl3945_param_hwcrypto; /* def: 0 = use software encryption */
-+static int iwl3945_param_qos_enable = 1; /* def: 1 = use quality of service */
-+int iwl3945_param_queues_num = IWL_MAX_NUM_QUEUES; /* def: 8 Tx queues */
++static int iwl4965_param_disable_hw_scan; /* def: 0 = use 4965's h/w scan */
++static int iwl4965_param_debug; /* def: 0 = minimal debug log messages */
++static int iwl4965_param_disable; /* def: enable radio */
++static int iwl4965_param_antenna; /* def: 0 = both antennas (use diversity) */
++int iwl4965_param_hwcrypto; /* def: using software encryption */
++static int iwl4965_param_qos_enable = 1; /* def: 1 = use quality of service */
++int iwl4965_param_queues_num = IWL_MAX_NUM_QUEUES; /* def: 16 Tx queues */
++int iwl4965_param_amsdu_size_8K; /* def: enable 8K amsdu size */
/*
* module name, copyright, version, etc.
-@@ -89,19 +79,19 @@ int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
- #define DRV_DESCRIPTION \
- "Intel(R) PRO/Wireless 3945ABG/BG Network Connection driver for Linux"
+@@ -88,19 +78,19 @@ int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
+
+ #define DRV_DESCRIPTION "Intel(R) Wireless WiFi Link 4965AGN driver for Linux"
-#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL3945_DEBUG
++#ifdef CONFIG_IWL4965_DEBUG
#define VD "d"
#else
#define VD
#endif
-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
#define VS "s"
#else
#define VS
@@ -385752,66 +466214,57 @@
#define DRV_COPYRIGHT "Copyright(c) 2003-2007 Intel Corporation"
#define DRV_VERSION IWLWIFI_VERSION
-@@ -116,7 +106,7 @@ MODULE_VERSION(DRV_VERSION);
- MODULE_AUTHOR(DRV_COPYRIGHT);
- MODULE_LICENSE("GPL");
-
--__le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
-+static __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
- {
- u16 fc = le16_to_cpu(hdr->frame_control);
- int hdr_len = ieee80211_get_hdrlen(fc);
-@@ -126,8 +116,8 @@ __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
+@@ -125,8 +115,8 @@ __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
return NULL;
}
-static const struct ieee80211_hw_mode *iwl_get_hw_mode(
- struct iwl_priv *priv, int mode)
-+static const struct ieee80211_hw_mode *iwl3945_get_hw_mode(
-+ struct iwl3945_priv *priv, int mode)
++static const struct ieee80211_hw_mode *iwl4965_get_hw_mode(
++ struct iwl4965_priv *priv, int mode)
{
int i;
-@@ -138,7 +128,7 @@ static const struct ieee80211_hw_mode *iwl_get_hw_mode(
+@@ -137,7 +127,7 @@ static const struct ieee80211_hw_mode *iwl_get_hw_mode(
return NULL;
}
-static int iwl_is_empty_essid(const char *essid, int essid_len)
-+static int iwl3945_is_empty_essid(const char *essid, int essid_len)
++static int iwl4965_is_empty_essid(const char *essid, int essid_len)
{
/* Single white space is for Linksys APs */
if (essid_len == 1 && essid[0] == ' ')
-@@ -154,13 +144,13 @@ static int iwl_is_empty_essid(const char *essid, int essid_len)
+@@ -153,13 +143,13 @@ static int iwl_is_empty_essid(const char *essid, int essid_len)
return 1;
}
-static const char *iwl_escape_essid(const char *essid, u8 essid_len)
-+static const char *iwl3945_escape_essid(const char *essid, u8 essid_len)
++static const char *iwl4965_escape_essid(const char *essid, u8 essid_len)
{
static char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
const char *s = essid;
char *d = escaped;
- if (iwl_is_empty_essid(essid, essid_len)) {
-+ if (iwl3945_is_empty_essid(essid, essid_len)) {
++ if (iwl4965_is_empty_essid(essid, essid_len)) {
memcpy(escaped, "<hidden>", sizeof("<hidden>"));
return escaped;
}
-@@ -178,10 +168,10 @@ static const char *iwl_escape_essid(const char *essid, u8 essid_len)
+@@ -177,10 +167,10 @@ static const char *iwl_escape_essid(const char *essid, u8 essid_len)
return escaped;
}
-static void iwl_print_hex_dump(int level, void *p, u32 len)
-+static void iwl3945_print_hex_dump(int level, void *p, u32 len)
++static void iwl4965_print_hex_dump(int level, void *p, u32 len)
{
-#ifdef CONFIG_IWLWIFI_DEBUG
- if (!(iwl_debug_level & level))
-+#ifdef CONFIG_IWL3945_DEBUG
-+ if (!(iwl3945_debug_level & level))
++#ifdef CONFIG_IWL4965_DEBUG
++ if (!(iwl4965_debug_level & level))
return;
print_hex_dump(KERN_DEBUG, "iwl data: ", DUMP_PREFIX_OFFSET, 16, 1,
-@@ -194,24 +184,31 @@ static void iwl_print_hex_dump(int level, void *p, u32 len)
+@@ -193,24 +183,33 @@ static void iwl_print_hex_dump(int level, void *p, u32 len)
*
* Theory of operation
*
@@ -385835,13 +466288,15 @@
- * The IWL operates with six queues, one receive queue in the device's
- * sram, one transmit queue for sending commands to the device firmware,
- * and four transmit queues for data.
-+ * The 3945 operates with six queues: One receive queue, one transmit queue
-+ * (#4) for sending commands to the device firmware, and four transmit queues
-+ * (#0-3) for data tx via EDCA. An additional 2 HCCA queues are unused.
++ * The 4965 operates with up to 17 queues: One receive queue, one transmit
++ * queue (#4) for sending commands to the device firmware, and 15 other
++ * Tx queues that may be mapped to prioritized Tx DMA/FIFO channels.
++ *
++ * See more detailed info in iwl-4965-hw.h.
***************************************************/
-static int iwl_queue_space(const struct iwl_queue *q)
-+static int iwl3945_queue_space(const struct iwl3945_queue *q)
++static int iwl4965_queue_space(const struct iwl4965_queue *q)
{
- int s = q->last_used - q->first_empty;
+ int s = q->read_ptr - q->write_ptr;
@@ -385851,18 +466306,18 @@
s -= q->n_bd;
if (s <= 0)
-@@ -223,42 +220,55 @@ static int iwl_queue_space(const struct iwl_queue *q)
+@@ -222,42 +221,55 @@ static int iwl_queue_space(const struct iwl_queue *q)
return s;
}
-/* XXX: n_bd must be power-of-two size */
-static inline int iwl_queue_inc_wrap(int index, int n_bd)
+/**
-+ * iwl3945_queue_inc_wrap - increment queue index, wrap back to beginning
++ * iwl4965_queue_inc_wrap - increment queue index, wrap back to beginning
+ * @index -- current index
+ * @n_bd -- total number of entries in queue (must be power of 2)
+ */
-+static inline int iwl3945_queue_inc_wrap(int index, int n_bd)
++static inline int iwl4965_queue_inc_wrap(int index, int n_bd)
{
return ++index & (n_bd - 1);
}
@@ -385870,17 +466325,17 @@
-/* XXX: n_bd must be power-of-two size */
-static inline int iwl_queue_dec_wrap(int index, int n_bd)
+/**
-+ * iwl3945_queue_dec_wrap - increment queue index, wrap back to end
++ * iwl4965_queue_dec_wrap - decrement queue index, wrap back to end
+ * @index -- current index
+ * @n_bd -- total number of entries in queue (must be power of 2)
+ */
-+static inline int iwl3945_queue_dec_wrap(int index, int n_bd)
++static inline int iwl4965_queue_dec_wrap(int index, int n_bd)
{
return --index & (n_bd - 1);
}
-static inline int x2_queue_used(const struct iwl_queue *q, int i)
-+static inline int x2_queue_used(const struct iwl3945_queue *q, int i)
++static inline int x2_queue_used(const struct iwl4965_queue *q, int i)
{
- return q->first_empty > q->last_used ?
- (i >= q->last_used && i < q->first_empty) :
@@ -385891,7 +466346,7 @@
}
-static inline u8 get_cmd_index(struct iwl_queue *q, u32 index, int is_huge)
-+static inline u8 get_cmd_index(struct iwl3945_queue *q, u32 index, int is_huge)
++static inline u8 get_cmd_index(struct iwl4965_queue *q, u32 index, int is_huge)
{
+ /* This is for scan command, the big buffer at end of command array */
if (is_huge)
@@ -385904,9 +466359,9 @@
-static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
+/**
-+ * iwl3945_queue_init - Initialize queue's high/low-water and read/write indexes
++ * iwl4965_queue_init - Initialize queue's high/low-water and read/write indexes
+ */
-+static int iwl3945_queue_init(struct iwl3945_priv *priv, struct iwl3945_queue *q,
++static int iwl4965_queue_init(struct iwl4965_priv *priv, struct iwl4965_queue *q,
int count, int slots_num, u32 id)
{
q->n_bd = count;
@@ -385915,12 +466370,12 @@
- /* count must be power-of-two size, otherwise iwl_queue_inc_wrap
- * and iwl_queue_dec_wrap are broken. */
-+ /* count must be power-of-two size, otherwise iwl3945_queue_inc_wrap
-+ * and iwl3945_queue_dec_wrap are broken. */
++ /* count must be power-of-two size, otherwise iwl4965_queue_inc_wrap
++ * and iwl4965_queue_dec_wrap are broken. */
BUG_ON(!is_power_of_2(count));
/* slots_num must be power-of-two size, otherwise
-@@ -273,27 +283,34 @@ static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
+@@ -272,27 +284,34 @@ static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
if (q->high_mark < 2)
q->high_mark = 2;
@@ -385933,10 +466388,10 @@
-static int iwl_tx_queue_alloc(struct iwl_priv *priv,
- struct iwl_tx_queue *txq, u32 id)
+/**
-+ * iwl3945_tx_queue_alloc - Alloc driver data and TFD CB for one Tx/cmd queue
++ * iwl4965_tx_queue_alloc - Alloc driver data and TFD CB for one Tx/cmd queue
+ */
-+static int iwl3945_tx_queue_alloc(struct iwl3945_priv *priv,
-+ struct iwl3945_tx_queue *txq, u32 id)
++static int iwl4965_tx_queue_alloc(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq, u32 id)
{
struct pci_dev *dev = priv->pci_dev;
@@ -385959,17 +466414,17 @@
txq->bd = pci_alloc_consistent(dev,
sizeof(txq->bd[0]) * TFD_QUEUE_SIZE_MAX,
&txq->q.dma_addr);
-@@ -316,24 +333,33 @@ static int iwl_tx_queue_alloc(struct iwl_priv *priv,
+@@ -315,24 +334,33 @@ static int iwl_tx_queue_alloc(struct iwl_priv *priv,
return -ENOMEM;
}
-int iwl_tx_queue_init(struct iwl_priv *priv,
- struct iwl_tx_queue *txq, int slots_num, u32 txq_id)
+/**
-+ * iwl3945_tx_queue_init - Allocate and initialize one tx/cmd queue
++ * iwl4965_tx_queue_init - Allocate and initialize one tx/cmd queue
+ */
-+int iwl3945_tx_queue_init(struct iwl3945_priv *priv,
-+ struct iwl3945_tx_queue *txq, int slots_num, u32 txq_id)
++int iwl4965_tx_queue_init(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq, int slots_num, u32 txq_id)
{
struct pci_dev *dev = priv->pci_dev;
int len;
@@ -385984,10 +466439,10 @@
+ * For the command queue (#4), allocate command space + one big
+ * command for scan, since scan command is very huge; the system will
+ * not have two scans at the same time, so only one is needed.
-+ * For data Tx queues (all other queues), no super-size command
++ * For normal Tx queues (all other queues), no super-size command
+ * space is needed.
+ */
-+ len = sizeof(struct iwl3945_cmd) * slots_num;
++ len = sizeof(struct iwl4965_cmd) * slots_num;
if (txq_id == IWL_CMD_QUEUE_NUM)
len += IWL_MAX_SCAN_SIZE;
txq->cmd = pci_alloc_consistent(dev, len, &txq->dma_addr_cmd);
@@ -385996,32 +466451,32 @@
- rc = iwl_tx_queue_alloc(priv, txq, txq_id);
+ /* Alloc driver data array and TFD circular buffer */
-+ rc = iwl3945_tx_queue_alloc(priv, txq, txq_id);
++ rc = iwl4965_tx_queue_alloc(priv, txq, txq_id);
if (rc) {
pci_free_consistent(dev, len, txq->cmd, txq->dma_addr_cmd);
-@@ -342,26 +368,29 @@ int iwl_tx_queue_init(struct iwl_priv *priv,
+@@ -341,26 +369,29 @@ int iwl_tx_queue_init(struct iwl_priv *priv,
txq->need_update = 0;
/* TFD_QUEUE_SIZE_MAX must be power-of-two size, otherwise
- * iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
-+ * iwl3945_queue_inc_wrap and iwl3945_queue_dec_wrap are broken. */
++ * iwl4965_queue_inc_wrap and iwl4965_queue_dec_wrap are broken. */
BUILD_BUG_ON(TFD_QUEUE_SIZE_MAX & (TFD_QUEUE_SIZE_MAX - 1));
- iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
- iwl_hw_tx_queue_init(priv, txq);
-+ /* Initialize queue high/low-water, head/tail indexes */
-+ iwl3945_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
++ /* Initialize queue's high/low-water marks, and head/tail indexes */
++ iwl4965_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
+
-+ /* Tell device where to find queue, enable DMA channel. */
-+ iwl3945_hw_tx_queue_init(priv, txq);
++ /* Tell device where to find queue */
++ iwl4965_hw_tx_queue_init(priv, txq);
return 0;
}
/**
- * iwl_tx_queue_free - Deallocate DMA queue.
-+ * iwl3945_tx_queue_free - Deallocate DMA queue.
++ * iwl4965_tx_queue_free - Deallocate DMA queue.
* @txq: Transmit queue to deallocate.
*
* Empty queue by removing and destroying all BD's.
@@ -386031,14 +466486,14 @@
+ * 0-fill, but do not free "txq" descriptor structure.
*/
-void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
-+void iwl3945_tx_queue_free(struct iwl3945_priv *priv, struct iwl3945_tx_queue *txq)
++void iwl4965_tx_queue_free(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
{
- struct iwl_queue *q = &txq->q;
-+ struct iwl3945_queue *q = &txq->q;
++ struct iwl4965_queue *q = &txq->q;
struct pci_dev *dev = priv->pci_dev;
int len;
-@@ -369,44 +398,47 @@ void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
+@@ -368,45 +399,48 @@ void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
return;
/* first, empty all BD's */
@@ -386046,11 +466501,11 @@
- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd))
- iwl_hw_txq_free_tfd(priv, txq);
+ for (; q->write_ptr != q->read_ptr;
-+ q->read_ptr = iwl3945_queue_inc_wrap(q->read_ptr, q->n_bd))
-+ iwl3945_hw_txq_free_tfd(priv, txq);
++ q->read_ptr = iwl4965_queue_inc_wrap(q->read_ptr, q->n_bd))
++ iwl4965_hw_txq_free_tfd(priv, txq);
- len = sizeof(struct iwl_cmd) * q->n_window;
-+ len = sizeof(struct iwl3945_cmd) * q->n_window;
++ len = sizeof(struct iwl4965_cmd) * q->n_window;
if (q->id == IWL_CMD_QUEUE_NUM)
len += IWL_MAX_SCAN_SIZE;
@@ -386061,7 +466516,7 @@
+ /* De-alloc circular buffer of TFDs */
if (txq->q.n_bd)
- pci_free_consistent(dev, sizeof(struct iwl_tfd_frame) *
-+ pci_free_consistent(dev, sizeof(struct iwl3945_tfd_frame) *
++ pci_free_consistent(dev, sizeof(struct iwl4965_tfd_frame) *
txq->q.n_bd, txq->bd, txq->q.dma_addr);
+ /* De-alloc array of per-TFD driver data */
@@ -386076,7 +466531,7 @@
}
-const u8 BROADCAST_ADDR[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
-+const u8 iwl3945_broadcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
++const u8 iwl4965_broadcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
/*************** STATION TABLE MANAGEMENT ****
- *
@@ -386089,91 +466544,90 @@
*/
/**************************************************************/
+
-#if 0 /* temparary disable till we add real remove station */
-static u8 iwl_remove_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
+#if 0 /* temporary disable till we add real remove station */
+/**
-+ * iwl3945_remove_station - Remove driver's knowledge of station.
++ * iwl4965_remove_station - Remove driver's knowledge of station.
+ *
+ * NOTE: This does not remove station from device's station table.
+ */
-+static u8 iwl3945_remove_station(struct iwl3945_priv *priv, const u8 *addr, int is_ap)
++static u8 iwl4965_remove_station(struct iwl4965_priv *priv, const u8 *addr, int is_ap)
{
int index = IWL_INVALID_STATION;
int i;
-@@ -442,7 +474,13 @@ out:
- return 0;
+@@ -443,7 +477,12 @@ out:
}
#endif
+
-static void iwl_clear_stations_table(struct iwl_priv *priv)
-+
+/**
-+ * iwl3945_clear_stations_table - Clear the driver's station table
++ * iwl4965_clear_stations_table - Clear the driver's station table
+ *
+ * NOTE: This does not clear or otherwise alter the device's station table.
+ */
-+static void iwl3945_clear_stations_table(struct iwl3945_priv *priv)
++static void iwl4965_clear_stations_table(struct iwl4965_priv *priv)
{
unsigned long flags;
-@@ -454,12 +492,14 @@ static void iwl_clear_stations_table(struct iwl_priv *priv)
+@@ -455,11 +494,15 @@ static void iwl_clear_stations_table(struct iwl_priv *priv)
spin_unlock_irqrestore(&priv->sta_lock, flags);
}
--
-u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
+/**
-+ * iwl3945_add_station - Add station to station tables in driver and device
++ * iwl4965_add_station_flags - Add station to tables in driver and device
+ */
-+u8 iwl3945_add_station(struct iwl3945_priv *priv, const u8 *addr, int is_ap, u8 flags)
++u8 iwl4965_add_station_flags(struct iwl4965_priv *priv, const u8 *addr,
++ int is_ap, u8 flags, void *ht_data)
{
int i;
int index = IWL_INVALID_STATION;
- struct iwl_station_entry *station;
-+ struct iwl3945_station_entry *station;
++ struct iwl4965_station_entry *station;
unsigned long flags_spin;
DECLARE_MAC_BUF(mac);
- u8 rate;
-@@ -482,7 +522,7 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
- index = i;
+
+@@ -482,8 +525,8 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
}
+
- /* These twh conditions has the same outcome but keep them separate
-+ /* These two conditions has the same outcome but keep them separate
- since they have different meaning */
+- since they have different meaning */
++ /* These two conditions have the same outcome, but keep them separate
++ since they have different meanings */
if (unlikely(index == IWL_INVALID_STATION)) {
spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
-@@ -500,30 +540,35 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
+ return index;
+@@ -501,28 +544,32 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
station->used = 1;
priv->num_stations++;
- memset(&station->sta, 0, sizeof(struct iwl_addsta_cmd));
+ /* Set up the REPLY_ADD_STA command to send to device */
-+ memset(&station->sta, 0, sizeof(struct iwl3945_addsta_cmd));
++ memset(&station->sta, 0, sizeof(struct iwl4965_addsta_cmd));
memcpy(station->sta.sta.addr, addr, ETH_ALEN);
station->sta.mode = 0;
station->sta.sta.sta_id = index;
station->sta.station_flags = 0;
-- rate = (priv->phymode == MODE_IEEE80211A) ? IWL_RATE_6M_PLCP :
-- IWL_RATE_1M_PLCP | priv->hw_setting.cck_flag;
-+ if (priv->phymode == MODE_IEEE80211A)
-+ rate = IWL_RATE_6M_PLCP;
-+ else
-+ rate = IWL_RATE_1M_PLCP;
-
- /* Turn on both antennas for the station... */
- station->sta.rate_n_flags =
-- iwl_hw_set_rate_n_flags(rate, RATE_MCS_ANT_AB_MSK);
-+ iwl3945_hw_set_rate_n_flags(rate, RATE_MCS_ANT_AB_MSK);
- station->current_rate.rate_n_flags =
- le16_to_cpu(station->sta.rate_n_flags);
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
+ /* BCAST station and IBSS stations do not work in HT mode */
+ if (index != priv->hw_setting.bcast_sta_id &&
+ priv->iw_mode != IEEE80211_IF_TYPE_IBSS)
+- iwl4965_set_ht_add_station(priv, index);
+-#endif /*CONFIG_IWLWIFI_HT*/
++ iwl4965_set_ht_add_station(priv, index,
++ (struct ieee80211_ht_info *) ht_data);
++#endif /*CONFIG_IWL4965_HT*/
spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
- iwl_send_add_station(priv, &station->sta, flags);
+
+ /* Add station to device's station table */
-+ iwl3945_send_add_station(priv, &station->sta, flags);
++ iwl4965_send_add_station(priv, &station->sta, flags);
return index;
}
@@ -386181,80 +466635,80 @@
/*************** DRIVER STATUS FUNCTIONS *****/
-static inline int iwl_is_ready(struct iwl_priv *priv)
-+static inline int iwl3945_is_ready(struct iwl3945_priv *priv)
++static inline int iwl4965_is_ready(struct iwl4965_priv *priv)
{
/* The adapter is 'ready' if READY and GEO_CONFIGURED bits are
* set but EXIT_PENDING is not */
-@@ -532,29 +577,29 @@ static inline int iwl_is_ready(struct iwl_priv *priv)
+@@ -531,29 +578,29 @@ static inline int iwl_is_ready(struct iwl_priv *priv)
!test_bit(STATUS_EXIT_PENDING, &priv->status);
}
-static inline int iwl_is_alive(struct iwl_priv *priv)
-+static inline int iwl3945_is_alive(struct iwl3945_priv *priv)
++static inline int iwl4965_is_alive(struct iwl4965_priv *priv)
{
return test_bit(STATUS_ALIVE, &priv->status);
}
-static inline int iwl_is_init(struct iwl_priv *priv)
-+static inline int iwl3945_is_init(struct iwl3945_priv *priv)
++static inline int iwl4965_is_init(struct iwl4965_priv *priv)
{
return test_bit(STATUS_INIT, &priv->status);
}
-static inline int iwl_is_rfkill(struct iwl_priv *priv)
-+static inline int iwl3945_is_rfkill(struct iwl3945_priv *priv)
++static inline int iwl4965_is_rfkill(struct iwl4965_priv *priv)
{
return test_bit(STATUS_RF_KILL_HW, &priv->status) ||
test_bit(STATUS_RF_KILL_SW, &priv->status);
}
-static inline int iwl_is_ready_rf(struct iwl_priv *priv)
-+static inline int iwl3945_is_ready_rf(struct iwl3945_priv *priv)
++static inline int iwl4965_is_ready_rf(struct iwl4965_priv *priv)
{
- if (iwl_is_rfkill(priv))
-+ if (iwl3945_is_rfkill(priv))
++ if (iwl4965_is_rfkill(priv))
return 0;
- return iwl_is_ready(priv);
-+ return iwl3945_is_ready(priv);
++ return iwl4965_is_ready(priv);
}
/*************** HOST COMMAND QUEUE FUNCTIONS *****/
-@@ -613,7 +658,7 @@ static const char *get_cmd_string(u8 cmd)
+@@ -618,7 +665,7 @@ static const char *get_cmd_string(u8 cmd)
#define HOST_COMPLETE_TIMEOUT (HZ / 2)
/**
- * iwl_enqueue_hcmd - enqueue a uCode command
-+ * iwl3945_enqueue_hcmd - enqueue a uCode command
++ * iwl4965_enqueue_hcmd - enqueue a uCode command
* @priv: device private data point
* @cmd: a point to the ucode command structure
*
-@@ -621,13 +666,13 @@ static const char *get_cmd_string(u8 cmd)
+@@ -626,13 +673,13 @@ static const char *get_cmd_string(u8 cmd)
* failed. On success, it turns the index (> 0) of command in the
* command queue.
*/
-static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl3945_enqueue_hcmd(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
++static int iwl4965_enqueue_hcmd(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
{
- struct iwl_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
- struct iwl_queue *q = &txq->q;
- struct iwl_tfd_frame *tfd;
-+ struct iwl3945_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
-+ struct iwl3945_queue *q = &txq->q;
-+ struct iwl3945_tfd_frame *tfd;
++ struct iwl4965_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
++ struct iwl4965_queue *q = &txq->q;
++ struct iwl4965_tfd_frame *tfd;
u32 *control_flags;
- struct iwl_cmd *out_cmd;
-+ struct iwl3945_cmd *out_cmd;
++ struct iwl4965_cmd *out_cmd;
u32 idx;
u16 fix_size = (u16)(cmd->len + sizeof(out_cmd->hdr));
dma_addr_t phys_addr;
-@@ -642,19 +687,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+@@ -645,19 +692,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
BUG_ON((fix_size > TFD_MAX_PAYLOAD_SIZE) &&
!(cmd->meta.flags & CMD_SIZE_HUGE));
- if (iwl_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
-+ if (iwl3945_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
++ if (iwl4965_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
IWL_ERROR("No space for Tx\n");
return -ENOSPC;
}
@@ -386272,7 +466726,7 @@
out_cmd = &txq->cmd[idx];
out_cmd->hdr.cmd = cmd->id;
-@@ -666,13 +711,13 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+@@ -669,30 +716,34 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
out_cmd->hdr.flags = 0;
out_cmd->hdr.sequence = cpu_to_le16(QUEUE_TO_SEQ(IWL_CMD_QUEUE_NUM) |
@@ -386284,12 +466738,10 @@
phys_addr = txq->dma_addr_cmd + sizeof(txq->cmd[0]) * idx +
- offsetof(struct iwl_cmd, hdr);
- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
-+ offsetof(struct iwl3945_cmd, hdr);
-+ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
++ offsetof(struct iwl4965_cmd, hdr);
++ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
- pad = U32_PAD(cmd->len);
- count = TFD_CTL_COUNT_GET(*control_flags);
-@@ -682,17 +727,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ IWL_DEBUG_HC("Sending command %s (#%x), seq: 0x%04X, "
"%d bytes at %d[%d]:%d\n",
get_cmd_string(out_cmd->hdr.cmd),
out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence),
@@ -386297,31 +466749,34 @@
+ fix_size, q->write_ptr, idx, IWL_CMD_QUEUE_NUM);
txq->need_update = 1;
++
++ /* Set up entry in queue's byte count circular buffer */
+ ret = iwl4965_tx_queue_update_wr_ptr(priv, txq, 0);
- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
-- ret = iwl_tx_queue_update_write_ptr(priv, txq);
+- iwl_tx_queue_update_write_ptr(priv, txq);
+
+ /* Increment and update queue's write index */
-+ q->write_ptr = iwl3945_queue_inc_wrap(q->write_ptr, q->n_bd);
-+ ret = iwl3945_tx_queue_update_write_ptr(priv, txq);
++ q->write_ptr = iwl4965_queue_inc_wrap(q->write_ptr, q->n_bd);
++ iwl4965_tx_queue_update_write_ptr(priv, txq);
spin_unlock_irqrestore(&priv->hcmd_lock, flags);
return ret ? ret : idx;
}
-int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl3945_send_cmd_async(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
++static int iwl4965_send_cmd_async(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
{
int ret;
-@@ -707,16 +754,16 @@ int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+@@ -707,16 +758,16 @@ int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return -EBUSY;
- ret = iwl_enqueue_hcmd(priv, cmd);
-+ ret = iwl3945_enqueue_hcmd(priv, cmd);
++ ret = iwl4965_enqueue_hcmd(priv, cmd);
if (ret < 0) {
- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
-+ IWL_ERROR("Error sending %s: iwl3945_enqueue_hcmd failed: %d\n",
++ IWL_ERROR("Error sending %s: iwl4965_enqueue_hcmd failed: %d\n",
get_cmd_string(cmd->id), ret);
return ret;
}
@@ -386329,38 +466784,38 @@
}
-int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl3945_send_cmd_sync(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
++static int iwl4965_send_cmd_sync(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
{
int cmd_idx;
int ret;
-@@ -738,10 +785,10 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+@@ -738,10 +789,10 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
if (cmd->meta.flags & CMD_WANT_SKB)
cmd->meta.source = &cmd->meta;
- cmd_idx = iwl_enqueue_hcmd(priv, cmd);
-+ cmd_idx = iwl3945_enqueue_hcmd(priv, cmd);
++ cmd_idx = iwl4965_enqueue_hcmd(priv, cmd);
if (cmd_idx < 0) {
ret = cmd_idx;
- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
-+ IWL_ERROR("Error sending %s: iwl3945_enqueue_hcmd failed: %d\n",
++ IWL_ERROR("Error sending %s: iwl4965_enqueue_hcmd failed: %d\n",
get_cmd_string(cmd->id), ret);
goto out;
}
-@@ -785,7 +832,7 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+@@ -785,7 +836,7 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
cancel:
if (cmd->meta.flags & CMD_WANT_SKB) {
- struct iwl_cmd *qcmd;
-+ struct iwl3945_cmd *qcmd;
++ struct iwl4965_cmd *qcmd;
/* Cancel the CMD_WANT_SKB flag for the cmd in the
* TX cmd queue. Otherwise in case the cmd comes
-@@ -804,47 +851,43 @@ out:
+@@ -804,64 +855,75 @@ out:
return ret;
}
-int iwl_send_cmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+int iwl3945_send_cmd(struct iwl3945_priv *priv, struct iwl3945_host_cmd *cmd)
++int iwl4965_send_cmd(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
{
- /* A command can not be asynchronous AND expect an SKB to be set. */
- BUG_ON((cmd->meta.flags & CMD_ASYNC) &&
@@ -386368,87 +466823,125 @@
-
if (cmd->meta.flags & CMD_ASYNC)
- return iwl_send_cmd_async(priv, cmd);
-+ return iwl3945_send_cmd_async(priv, cmd);
++ return iwl4965_send_cmd_async(priv, cmd);
- return iwl_send_cmd_sync(priv, cmd);
-+ return iwl3945_send_cmd_sync(priv, cmd);
++ return iwl4965_send_cmd_sync(priv, cmd);
}
-int iwl_send_cmd_pdu(struct iwl_priv *priv, u8 id, u16 len, const void *data)
-+int iwl3945_send_cmd_pdu(struct iwl3945_priv *priv, u8 id, u16 len, const void *data)
++int iwl4965_send_cmd_pdu(struct iwl4965_priv *priv, u8 id, u16 len, const void *data)
{
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_host_cmd cmd = {
.id = id,
.len = len,
.data = data,
};
- return iwl_send_cmd_sync(priv, &cmd);
-+ return iwl3945_send_cmd_sync(priv, &cmd);
++ return iwl4965_send_cmd_sync(priv, &cmd);
}
-static int __must_check iwl_send_cmd_u32(struct iwl_priv *priv, u8 id, u32 val)
-+static int __must_check iwl3945_send_cmd_u32(struct iwl3945_priv *priv, u8 id, u32 val)
++static int __must_check iwl4965_send_cmd_u32(struct iwl4965_priv *priv, u8 id, u32 val)
{
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_host_cmd cmd = {
.id = id,
.len = sizeof(val),
.data = &val,
};
- return iwl_send_cmd_sync(priv, &cmd);
-+ return iwl3945_send_cmd_sync(priv, &cmd);
++ return iwl4965_send_cmd_sync(priv, &cmd);
}
-int iwl_send_statistics_request(struct iwl_priv *priv)
-+int iwl3945_send_statistics_request(struct iwl3945_priv *priv)
++int iwl4965_send_statistics_request(struct iwl4965_priv *priv)
{
- return iwl_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
-+ return iwl3945_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
++ return iwl4965_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
+ }
+
+ /**
+- * iwl_rxon_add_station - add station into station table.
++ * iwl4965_rxon_add_station - add station into station table.
+ *
+ * there is only one AP station with id= IWL_AP_ID
+- * NOTE: mutex must be held before calling the this fnction
+-*/
+-static int iwl_rxon_add_station(struct iwl_priv *priv,
++ * NOTE: mutex must be held before calling this fnction
++ */
++static int iwl4965_rxon_add_station(struct iwl4965_priv *priv,
+ const u8 *addr, int is_ap)
+ {
+ u8 sta_id;
+
+- sta_id = iwl_add_station(priv, addr, is_ap, 0);
++ /* Add station to device's station table */
++#ifdef CONFIG_IWL4965_HT
++ struct ieee80211_conf *conf = &priv->hw->conf;
++ struct ieee80211_ht_info *cur_ht_config = &conf->ht_conf;
++
++ if ((is_ap) &&
++ (conf->flags & IEEE80211_CONF_SUPPORT_HT_MODE) &&
++ (priv->iw_mode == IEEE80211_IF_TYPE_STA))
++ sta_id = iwl4965_add_station_flags(priv, addr, is_ap,
++ 0, cur_ht_config);
++ else
++#endif /* CONFIG_IWL4965_HT */
++ sta_id = iwl4965_add_station_flags(priv, addr, is_ap,
++ 0, NULL);
++
++ /* Set up default rate scaling table in device's station table */
+ iwl4965_add_station(priv, addr, is_ap);
+
+ return sta_id;
}
/**
- * iwl_set_rxon_channel - Set the phymode and channel values in staging RXON
-+ * iwl3945_set_rxon_channel - Set the phymode and channel values in staging RXON
++ * iwl4965_set_rxon_channel - Set the phymode and channel values in staging RXON
* @phymode: MODE_IEEE80211A sets to 5.2GHz; all else set to 2.4GHz
* @channel: Any channel valid for the requested phymode
-@@ -853,9 +896,9 @@ int iwl_send_statistics_request(struct iwl_priv *priv)
+@@ -870,9 +932,10 @@ static int iwl_rxon_add_station(struct iwl_priv *priv,
* NOTE: Does not commit to the hardware; it sets appropriate bit fields
* in the staging RXON flag structure based on the phymode
*/
-static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
-+static int iwl3945_set_rxon_channel(struct iwl3945_priv *priv, u8 phymode, u16 channel)
++static int iwl4965_set_rxon_channel(struct iwl4965_priv *priv, u8 phymode,
++ u16 channel)
{
- if (!iwl_get_channel_info(priv, phymode, channel)) {
-+ if (!iwl3945_get_channel_info(priv, phymode, channel)) {
++ if (!iwl4965_get_channel_info(priv, phymode, channel)) {
IWL_DEBUG_INFO("Could not set channel to %d [%d]\n",
channel, phymode);
return -EINVAL;
-@@ -879,13 +922,13 @@ static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
+@@ -896,13 +959,13 @@ static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
}
/**
- * iwl_check_rxon_cmd - validate RXON structure is valid
-+ * iwl3945_check_rxon_cmd - validate RXON structure is valid
++ * iwl4965_check_rxon_cmd - validate RXON structure is valid
*
* NOTE: This is really only useful during development and can eventually
* be #ifdef'd out once the driver is stable and folks aren't actively
* making changes
*/
-static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
-+static int iwl3945_check_rxon_cmd(struct iwl3945_rxon_cmd *rxon)
++static int iwl4965_check_rxon_cmd(struct iwl4965_rxon_cmd *rxon)
{
int error = 0;
int counter = 1;
-@@ -951,21 +994,21 @@ static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
+@@ -962,21 +1025,21 @@ static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
le16_to_cpu(rxon->channel));
if (error) {
- IWL_ERROR("Not a valid iwl_rxon_assoc_cmd field values\n");
-+ IWL_ERROR("Not a valid iwl3945_rxon_assoc_cmd field values\n");
++ IWL_ERROR("Not a valid iwl4965_rxon_assoc_cmd field values\n");
return -1;
}
return 0;
@@ -386457,7 +466950,7 @@
/**
- * iwl_full_rxon_required - determine if RXON_ASSOC can be used in RXON commit
- * @priv: staging_rxon is comapred to active_rxon
-+ * iwl3945_full_rxon_required - check if full RXON (vs RXON_ASSOC) cmd is needed
++ * iwl4965_full_rxon_required - check if full RXON (vs RXON_ASSOC) cmd is needed
+ * @priv: staging_rxon is compared to active_rxon
*
- * If the RXON structure is changing sufficient to require a new
@@ -386468,24 +466961,24 @@
+ * a new tune (full RXON command, rather than RXON_ASSOC cmd) is required.
*/
-static int iwl_full_rxon_required(struct iwl_priv *priv)
-+static int iwl3945_full_rxon_required(struct iwl3945_priv *priv)
++static int iwl4965_full_rxon_required(struct iwl4965_priv *priv)
{
/* These items are only settable from the full RXON command */
-@@ -1000,19 +1043,19 @@ static int iwl_full_rxon_required(struct iwl_priv *priv)
+@@ -1016,19 +1079,19 @@ static int iwl_full_rxon_required(struct iwl_priv *priv)
return 0;
}
-static int iwl_send_rxon_assoc(struct iwl_priv *priv)
-+static int iwl3945_send_rxon_assoc(struct iwl3945_priv *priv)
++static int iwl4965_send_rxon_assoc(struct iwl4965_priv *priv)
{
int rc = 0;
- struct iwl_rx_packet *res = NULL;
- struct iwl_rxon_assoc_cmd rxon_assoc;
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_rx_packet *res = NULL;
-+ struct iwl3945_rxon_assoc_cmd rxon_assoc;
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_rx_packet *res = NULL;
++ struct iwl4965_rxon_assoc_cmd rxon_assoc;
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_RXON_ASSOC,
.len = sizeof(rxon_assoc),
.meta.flags = CMD_WANT_SKB,
@@ -386493,31 +466986,31 @@
};
- const struct iwl_rxon_cmd *rxon1 = &priv->staging_rxon;
- const struct iwl_rxon_cmd *rxon2 = &priv->active_rxon;
-+ const struct iwl3945_rxon_cmd *rxon1 = &priv->staging_rxon;
-+ const struct iwl3945_rxon_cmd *rxon2 = &priv->active_rxon;
++ const struct iwl4965_rxon_cmd *rxon1 = &priv->staging_rxon;
++ const struct iwl4965_rxon_cmd *rxon2 = &priv->active_rxon;
if ((rxon1->flags == rxon2->flags) &&
(rxon1->filter_flags == rxon2->filter_flags) &&
-@@ -1028,11 +1071,11 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
- rxon_assoc.cck_basic_rates = priv->staging_rxon.cck_basic_rates;
- rxon_assoc.reserved = 0;
+@@ -1054,11 +1117,11 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
+ priv->staging_rxon.ofdm_ht_dual_stream_basic_rates;
+ rxon_assoc.rx_chain_select_flags = priv->staging_rxon.rx_chain;
- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl3945_send_cmd_sync(priv, &cmd);
++ rc = iwl4965_send_cmd_sync(priv, &cmd);
if (rc)
return rc;
- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
IWL_ERROR("Bad return from REPLY_RXON_ASSOC command\n");
rc = -EIO;
-@@ -1045,21 +1088,21 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
+@@ -1071,37 +1134,37 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
}
/**
- * iwl_commit_rxon - commit staging_rxon to hardware
-+ * iwl3945_commit_rxon - commit staging_rxon to hardware
++ * iwl4965_commit_rxon - commit staging_rxon to hardware
*
- * The RXON command in staging_rxon is commited to the hardware and
+ * The RXON command in staging_rxon is committed to the hardware and
@@ -386526,25 +467019,23 @@
* a HW tune is required based on the RXON structure changes.
*/
-static int iwl_commit_rxon(struct iwl_priv *priv)
-+static int iwl3945_commit_rxon(struct iwl3945_priv *priv)
++static int iwl4965_commit_rxon(struct iwl4965_priv *priv)
{
/* cast away the const for active_rxon in this function */
- struct iwl_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
-+ struct iwl3945_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
- int rc = 0;
++ struct iwl4965_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
DECLARE_MAC_BUF(mac);
+ int rc = 0;
- if (!iwl_is_alive(priv))
-+ if (!iwl3945_is_alive(priv))
++ if (!iwl4965_is_alive(priv))
return -1;
/* always get timestamp with Rx frame */
-@@ -1070,17 +1113,17 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
- ~(RXON_FLG_DIS_DIV_MSK | RXON_FLG_ANT_SEL_MSK);
- priv->staging_rxon.flags |= iwl3945_get_antenna_flags(priv);
+ priv->staging_rxon.flags |= RXON_FLG_TSF2HOST_MSK;
- rc = iwl_check_rxon_cmd(&priv->staging_rxon);
-+ rc = iwl3945_check_rxon_cmd(&priv->staging_rxon);
++ rc = iwl4965_check_rxon_cmd(&priv->staging_rxon);
if (rc) {
IWL_ERROR("Invalid RXON configuration. Not committing.\n");
return -EINVAL;
@@ -386552,138 +467043,162 @@
/* If we don't need to send a full RXON, we can use
- * iwl_rxon_assoc_cmd which is used to reconfigure filter
-+ * iwl3945_rxon_assoc_cmd which is used to reconfigure filter
++ * iwl4965_rxon_assoc_cmd which is used to reconfigure filter
* and other flags for the current radio configuration. */
- if (!iwl_full_rxon_required(priv)) {
- rc = iwl_send_rxon_assoc(priv);
-+ if (!iwl3945_full_rxon_required(priv)) {
-+ rc = iwl3945_send_rxon_assoc(priv);
++ if (!iwl4965_full_rxon_required(priv)) {
++ rc = iwl4965_send_rxon_assoc(priv);
if (rc) {
IWL_ERROR("Error setting RXON_ASSOC "
"configuration (%d).\n", rc);
-@@ -1096,13 +1139,13 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+@@ -1116,25 +1179,25 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ /* station table will be cleared */
+ priv->assoc_station_added = 0;
+
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ priv->sensitivity_data.state = IWL_SENS_CALIB_NEED_REINIT;
+ if (!priv->error_recovering)
+ priv->start_calib = 0;
+
+ iwl4965_init_sensitivity(priv, CMD_ASYNC, 1);
+-#endif /* CONFIG_IWLWIFI_SENSITIVITY */
++#endif /* CONFIG_IWL4965_SENSITIVITY */
+
+ /* If we are currently associated and the new config requires
* an RXON_ASSOC and the new config wants the associated mask enabled,
* we must clear the associated from the active configuration
* before we apply the new config */
- if (iwl_is_associated(priv) &&
-+ if (iwl3945_is_associated(priv) &&
++ if (iwl4965_is_associated(priv) &&
(priv->staging_rxon.filter_flags & RXON_FILTER_ASSOC_MSK)) {
IWL_DEBUG_INFO("Toggling associated bit on current RXON\n");
active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
- sizeof(struct iwl_rxon_cmd),
-+ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON,
-+ sizeof(struct iwl3945_rxon_cmd),
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON,
++ sizeof(struct iwl4965_rxon_cmd),
&priv->active_rxon);
/* If the mask clearing failed then we set
-@@ -1125,8 +1168,8 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+@@ -1157,35 +1220,35 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
print_mac(mac, priv->staging_rxon.bssid_addr));
/* Apply the new configuration */
- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
- sizeof(struct iwl_rxon_cmd), &priv->staging_rxon);
-+ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON,
-+ sizeof(struct iwl3945_rxon_cmd), &priv->staging_rxon);
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON,
++ sizeof(struct iwl4965_rxon_cmd), &priv->staging_rxon);
if (rc) {
IWL_ERROR("Error setting new configuration (%d).\n", rc);
return rc;
-@@ -1134,18 +1177,18 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
-
- memcpy(active_rxon, &priv->staging_rxon, sizeof(*active_rxon));
+ }
- iwl_clear_stations_table(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
+
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ if (!priv->error_recovering)
+ priv->start_calib = 0;
+
+ priv->sensitivity_data.state = IWL_SENS_CALIB_NEED_REINIT;
+ iwl4965_init_sensitivity(priv, CMD_ASYNC, 1);
+-#endif /* CONFIG_IWLWIFI_SENSITIVITY */
++#endif /* CONFIG_IWL4965_SENSITIVITY */
+
+ memcpy(active_rxon, &priv->staging_rxon, sizeof(*active_rxon));
/* If we issue a new RXON command which required a tune then we must
* send a new TXPOWER command or we won't be able to Tx any frames */
- rc = iwl_hw_reg_send_txpower(priv);
-+ rc = iwl3945_hw_reg_send_txpower(priv);
++ rc = iwl4965_hw_reg_send_txpower(priv);
if (rc) {
IWL_ERROR("Error setting Tx power (%d).\n", rc);
return rc;
}
/* Add the broadcast address so we can send broadcast frames */
-- if (iwl_add_station(priv, BROADCAST_ADDR, 0, 0) ==
-+ if (iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0) ==
+- if (iwl_rxon_add_station(priv, BROADCAST_ADDR, 0) ==
++ if (iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0) ==
IWL_INVALID_STATION) {
IWL_ERROR("Error adding BROADCAST address for transmit.\n");
return -EIO;
-@@ -1153,9 +1196,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+@@ -1193,9 +1256,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
/* If we have set the ASSOC_MSK and we are in BSS mode then
* add the IWL_AP_ID to the station rate table */
- if (iwl_is_associated(priv) &&
-+ if (iwl3945_is_associated(priv) &&
- (priv->iw_mode == IEEE80211_IF_TYPE_STA))
-- if (iwl_add_station(priv, priv->active_rxon.bssid_addr, 1, 0)
-+ if (iwl3945_add_station(priv, priv->active_rxon.bssid_addr, 1, 0)
++ if (iwl4965_is_associated(priv) &&
+ (priv->iw_mode == IEEE80211_IF_TYPE_STA)) {
+- if (iwl_rxon_add_station(priv, priv->active_rxon.bssid_addr, 1)
++ if (iwl4965_rxon_add_station(priv, priv->active_rxon.bssid_addr, 1)
== IWL_INVALID_STATION) {
IWL_ERROR("Error adding AP address for transmit.\n");
return -EIO;
-@@ -1172,9 +1215,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+@@ -1206,9 +1269,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
return 0;
}
-static int iwl_send_bt_config(struct iwl_priv *priv)
-+static int iwl3945_send_bt_config(struct iwl3945_priv *priv)
++static int iwl4965_send_bt_config(struct iwl4965_priv *priv)
{
- struct iwl_bt_cmd bt_cmd = {
-+ struct iwl3945_bt_cmd bt_cmd = {
++ struct iwl4965_bt_cmd bt_cmd = {
.flags = 3,
.lead_time = 0xAA,
.max_kill = 1,
-@@ -1182,15 +1225,15 @@ static int iwl_send_bt_config(struct iwl_priv *priv)
+@@ -1216,15 +1279,15 @@ static int iwl_send_bt_config(struct iwl_priv *priv)
.kill_cts_mask = 0,
};
- return iwl_send_cmd_pdu(priv, REPLY_BT_CONFIG,
- sizeof(struct iwl_bt_cmd), &bt_cmd);
-+ return iwl3945_send_cmd_pdu(priv, REPLY_BT_CONFIG,
-+ sizeof(struct iwl3945_bt_cmd), &bt_cmd);
++ return iwl4965_send_cmd_pdu(priv, REPLY_BT_CONFIG,
++ sizeof(struct iwl4965_bt_cmd), &bt_cmd);
}
-static int iwl_send_scan_abort(struct iwl_priv *priv)
-+static int iwl3945_send_scan_abort(struct iwl3945_priv *priv)
++static int iwl4965_send_scan_abort(struct iwl4965_priv *priv)
{
int rc = 0;
- struct iwl_rx_packet *res;
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_rx_packet *res;
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_rx_packet *res;
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_SCAN_ABORT_CMD,
.meta.flags = CMD_WANT_SKB,
};
-@@ -1203,13 +1246,13 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
+@@ -1237,13 +1300,13 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
return 0;
}
- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl3945_send_cmd_sync(priv, &cmd);
++ rc = iwl4965_send_cmd_sync(priv, &cmd);
if (rc) {
clear_bit(STATUS_SCAN_ABORTING, &priv->status);
return rc;
}
- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
if (res->u.status != CAN_ABORT_STATUS) {
/* The scan abort will return 1 for success or
* 2 for "failure". A failure condition can be
-@@ -1227,8 +1270,8 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
+@@ -1261,8 +1324,8 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
return rc;
}
-static int iwl_card_state_sync_callback(struct iwl_priv *priv,
- struct iwl_cmd *cmd,
-+static int iwl3945_card_state_sync_callback(struct iwl3945_priv *priv,
-+ struct iwl3945_cmd *cmd,
++static int iwl4965_card_state_sync_callback(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd,
struct sk_buff *skb)
{
return 1;
-@@ -1237,16 +1280,16 @@ static int iwl_card_state_sync_callback(struct iwl_priv *priv,
+@@ -1271,16 +1334,16 @@ static int iwl_card_state_sync_callback(struct iwl_priv *priv,
/*
* CARD_STATE_CMD
*
@@ -386696,31 +467211,31 @@
* restarted to come back on.
*/
-static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
-+static int iwl3945_send_card_state(struct iwl3945_priv *priv, u32 flags, u8 meta_flag)
++static int iwl4965_send_card_state(struct iwl4965_priv *priv, u32 flags, u8 meta_flag)
{
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_CARD_STATE_CMD,
.len = sizeof(u32),
.data = &flags,
-@@ -1254,22 +1297,22 @@ static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
+@@ -1288,22 +1351,22 @@ static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
};
if (meta_flag & CMD_ASYNC)
- cmd.meta.u.callback = iwl_card_state_sync_callback;
-+ cmd.meta.u.callback = iwl3945_card_state_sync_callback;
++ cmd.meta.u.callback = iwl4965_card_state_sync_callback;
- return iwl_send_cmd(priv, &cmd);
-+ return iwl3945_send_cmd(priv, &cmd);
++ return iwl4965_send_cmd(priv, &cmd);
}
-static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
- struct iwl_cmd *cmd, struct sk_buff *skb)
-+static int iwl3945_add_sta_sync_callback(struct iwl3945_priv *priv,
-+ struct iwl3945_cmd *cmd, struct sk_buff *skb)
++static int iwl4965_add_sta_sync_callback(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd, struct sk_buff *skb)
{
- struct iwl_rx_packet *res = NULL;
-+ struct iwl3945_rx_packet *res = NULL;
++ struct iwl4965_rx_packet *res = NULL;
if (!skb) {
IWL_ERROR("Error: Response NULL in REPLY_ADD_STA.\n");
@@ -386728,58 +467243,58 @@
}
- res = (struct iwl_rx_packet *)skb->data;
-+ res = (struct iwl3945_rx_packet *)skb->data;
++ res = (struct iwl4965_rx_packet *)skb->data;
if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
res->hdr.flags);
-@@ -1287,29 +1330,29 @@ static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
+@@ -1321,29 +1384,29 @@ static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
return 1;
}
-int iwl_send_add_station(struct iwl_priv *priv,
- struct iwl_addsta_cmd *sta, u8 flags)
-+int iwl3945_send_add_station(struct iwl3945_priv *priv,
-+ struct iwl3945_addsta_cmd *sta, u8 flags)
++int iwl4965_send_add_station(struct iwl4965_priv *priv,
++ struct iwl4965_addsta_cmd *sta, u8 flags)
{
- struct iwl_rx_packet *res = NULL;
-+ struct iwl3945_rx_packet *res = NULL;
++ struct iwl4965_rx_packet *res = NULL;
int rc = 0;
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_ADD_STA,
- .len = sizeof(struct iwl_addsta_cmd),
-+ .len = sizeof(struct iwl3945_addsta_cmd),
++ .len = sizeof(struct iwl4965_addsta_cmd),
.meta.flags = flags,
.data = sta,
};
if (flags & CMD_ASYNC)
- cmd.meta.u.callback = iwl_add_sta_sync_callback;
-+ cmd.meta.u.callback = iwl3945_add_sta_sync_callback;
++ cmd.meta.u.callback = iwl4965_add_sta_sync_callback;
else
cmd.meta.flags |= CMD_WANT_SKB;
- rc = iwl_send_cmd(priv, &cmd);
-+ rc = iwl3945_send_cmd(priv, &cmd);
++ rc = iwl4965_send_cmd(priv, &cmd);
if (rc || (flags & CMD_ASYNC))
return rc;
- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
res->hdr.flags);
-@@ -1334,7 +1377,7 @@ int iwl_send_add_station(struct iwl_priv *priv,
+@@ -1368,7 +1431,7 @@ int iwl_send_add_station(struct iwl_priv *priv,
return rc;
}
-static int iwl_update_sta_key_info(struct iwl_priv *priv,
-+static int iwl3945_update_sta_key_info(struct iwl3945_priv *priv,
++static int iwl4965_update_sta_key_info(struct iwl4965_priv *priv,
struct ieee80211_key_conf *keyconf,
u8 sta_id)
{
-@@ -1350,7 +1393,6 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
+@@ -1384,7 +1447,6 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
break;
case ALG_TKIP:
case ALG_WEP:
@@ -386787,25 +467302,25 @@
default:
return -EINVAL;
}
-@@ -1369,28 +1411,28 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
+@@ -1403,28 +1465,28 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
spin_unlock_irqrestore(&priv->sta_lock, flags);
IWL_DEBUG_INFO("hwcrypto: modify ucode station key info\n");
- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
-+ iwl3945_send_add_station(priv, &priv->stations[sta_id].sta, 0);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, 0);
return 0;
}
-static int iwl_clear_sta_key_info(struct iwl_priv *priv, u8 sta_id)
-+static int iwl3945_clear_sta_key_info(struct iwl3945_priv *priv, u8 sta_id)
++static int iwl4965_clear_sta_key_info(struct iwl4965_priv *priv, u8 sta_id)
{
unsigned long flags;
spin_lock_irqsave(&priv->sta_lock, flags);
- memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl_hw_key));
- memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl_keyinfo));
-+ memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl3945_hw_key));
-+ memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl3945_keyinfo));
++ memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl4965_hw_key));
++ memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl4965_keyinfo));
priv->stations[sta_id].sta.key.key_flags = STA_KEY_FLG_NO_ENC;
priv->stations[sta_id].sta.sta.modify_mask = STA_MODIFY_KEY_MASK;
priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
@@ -386813,147 +467328,162 @@
IWL_DEBUG_INFO("hwcrypto: clear ucode station key info\n");
- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
-+ iwl3945_send_add_station(priv, &priv->stations[sta_id].sta, 0);
++ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, 0);
return 0;
}
-static void iwl_clear_free_frames(struct iwl_priv *priv)
-+static void iwl3945_clear_free_frames(struct iwl3945_priv *priv)
++static void iwl4965_clear_free_frames(struct iwl4965_priv *priv)
{
struct list_head *element;
-@@ -1400,7 +1442,7 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
+@@ -1434,7 +1496,7 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
while (!list_empty(&priv->free_frames)) {
element = priv->free_frames.next;
list_del(element);
- kfree(list_entry(element, struct iwl_frame, list));
-+ kfree(list_entry(element, struct iwl3945_frame, list));
++ kfree(list_entry(element, struct iwl4965_frame, list));
priv->frames_count--;
}
-@@ -1411,9 +1453,9 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
+@@ -1445,9 +1507,9 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
}
}
-static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
-+static struct iwl3945_frame *iwl3945_get_free_frame(struct iwl3945_priv *priv)
++static struct iwl4965_frame *iwl4965_get_free_frame(struct iwl4965_priv *priv)
{
- struct iwl_frame *frame;
-+ struct iwl3945_frame *frame;
++ struct iwl4965_frame *frame;
struct list_head *element;
if (list_empty(&priv->free_frames)) {
frame = kzalloc(sizeof(*frame), GFP_KERNEL);
-@@ -1428,21 +1470,21 @@ static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
+@@ -1462,21 +1524,21 @@ static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
element = priv->free_frames.next;
list_del(element);
- return list_entry(element, struct iwl_frame, list);
-+ return list_entry(element, struct iwl3945_frame, list);
++ return list_entry(element, struct iwl4965_frame, list);
}
-static void iwl_free_frame(struct iwl_priv *priv, struct iwl_frame *frame)
-+static void iwl3945_free_frame(struct iwl3945_priv *priv, struct iwl3945_frame *frame)
++static void iwl4965_free_frame(struct iwl4965_priv *priv, struct iwl4965_frame *frame)
{
memset(frame, 0, sizeof(*frame));
list_add(&frame->list, &priv->free_frames);
}
-unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
-+unsigned int iwl3945_fill_beacon_frame(struct iwl3945_priv *priv,
++unsigned int iwl4965_fill_beacon_frame(struct iwl4965_priv *priv,
struct ieee80211_hdr *hdr,
const u8 *dest, int left)
{
- if (!iwl_is_associated(priv) || !priv->ibss_beacon ||
-+ if (!iwl3945_is_associated(priv) || !priv->ibss_beacon ||
++ if (!iwl4965_is_associated(priv) || !priv->ibss_beacon ||
((priv->iw_mode != IEEE80211_IF_TYPE_IBSS) &&
(priv->iw_mode != IEEE80211_IF_TYPE_AP)))
return 0;
-@@ -1455,37 +1497,27 @@ unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
+@@ -1489,10 +1551,11 @@ unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
return priv->ibss_beacon->len;
}
--static int iwl_rate_index_from_plcp(int plcp)
--{
-- int i = 0;
--
-- for (i = 0; i < IWL_RATE_COUNT; i++)
-- if (iwl_rates[i].plcp == plcp)
-- return i;
-- return -1;
--}
--
+-int iwl_rate_index_from_plcp(int plcp)
++int iwl4965_rate_index_from_plcp(int plcp)
+ {
+ int i = 0;
+
++ /* 4965 HT rate format */
+ if (plcp & RATE_MCS_HT_MSK) {
+ i = (plcp & 0xff);
+
+@@ -1506,35 +1569,37 @@ int iwl_rate_index_from_plcp(int plcp)
+ if ((i >= IWL_FIRST_OFDM_RATE) &&
+ (i <= IWL_LAST_OFDM_RATE))
+ return i;
++
++ /* 4965 legacy rate format, search for match in table */
+ } else {
+- for (i = 0; i < ARRAY_SIZE(iwl_rates); i++)
+- if (iwl_rates[i].plcp == (plcp &0xFF))
++ for (i = 0; i < ARRAY_SIZE(iwl4965_rates); i++)
++ if (iwl4965_rates[i].plcp == (plcp &0xFF))
+ return i;
+ }
+ return -1;
+ }
+
-static u8 iwl_rate_get_lowest_plcp(int rate_mask)
-+static u8 iwl3945_rate_get_lowest_plcp(int rate_mask)
++static u8 iwl4965_rate_get_lowest_plcp(int rate_mask)
{
u8 i;
for (i = IWL_RATE_1M_INDEX; i != IWL_RATE_INVALID;
- i = iwl_rates[i].next_ieee) {
-+ i = iwl3945_rates[i].next_ieee) {
++ i = iwl4965_rates[i].next_ieee) {
if (rate_mask & (1 << i))
- return iwl_rates[i].plcp;
-+ return iwl3945_rates[i].plcp;
++ return iwl4965_rates[i].plcp;
}
return IWL_RATE_INVALID;
}
-static int iwl_send_beacon_cmd(struct iwl_priv *priv)
-+static int iwl3945_send_beacon_cmd(struct iwl3945_priv *priv)
++static int iwl4965_send_beacon_cmd(struct iwl4965_priv *priv)
{
- struct iwl_frame *frame;
-+ struct iwl3945_frame *frame;
++ struct iwl4965_frame *frame;
unsigned int frame_size;
int rc;
u8 rate;
- frame = iwl_get_free_frame(priv);
-+ frame = iwl3945_get_free_frame(priv);
++ frame = iwl4965_get_free_frame(priv);
if (!frame) {
IWL_ERROR("Could not obtain free frame buffer for beacon "
-@@ -1494,22 +1526,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
+@@ -1543,22 +1608,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
}
if (!(priv->staging_rxon.flags & RXON_FLG_BAND_24G_MSK)) {
- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic &
-+ rate = iwl3945_rate_get_lowest_plcp(priv->active_rate_basic &
++ rate = iwl4965_rate_get_lowest_plcp(priv->active_rate_basic &
0xFF0);
if (rate == IWL_INVALID_RATE)
rate = IWL_RATE_6M_PLCP;
} else {
- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
-+ rate = iwl3945_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
++ rate = iwl4965_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
if (rate == IWL_INVALID_RATE)
rate = IWL_RATE_1M_PLCP;
}
- frame_size = iwl_hw_get_beacon_cmd(priv, frame, rate);
-+ frame_size = iwl3945_hw_get_beacon_cmd(priv, frame, rate);
++ frame_size = iwl4965_hw_get_beacon_cmd(priv, frame, rate);
- rc = iwl_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
-+ rc = iwl3945_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
&frame->u.cmd[0]);
- iwl_free_frame(priv, frame);
-+ iwl3945_free_frame(priv, frame);
++ iwl4965_free_frame(priv, frame);
return rc;
}
-@@ -1520,22 +1552,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
+@@ -1569,22 +1634,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
*
******************************************************************************/
-static void get_eeprom_mac(struct iwl_priv *priv, u8 *mac)
-+static void get_eeprom_mac(struct iwl3945_priv *priv, u8 *mac)
++static void get_eeprom_mac(struct iwl4965_priv *priv, u8 *mac)
{
memcpy(mac, priv->eeprom.mac_address, 6);
}
/**
- * iwl_eeprom_init - read EEPROM contents
-+ * iwl3945_eeprom_init - read EEPROM contents
++ * iwl4965_eeprom_init - read EEPROM contents
*
- * Load the EEPROM from adapter into priv->eeprom
+ * Load the EEPROM contents from adapter into priv->eeprom
@@ -386961,22 +467491,22 @@
* NOTE: This routine uses the non-debug IO access functions.
*/
-int iwl_eeprom_init(struct iwl_priv *priv)
-+int iwl3945_eeprom_init(struct iwl3945_priv *priv)
++int iwl4965_eeprom_init(struct iwl4965_priv *priv)
{
- u16 *e = (u16 *)&priv->eeprom;
- u32 gp = iwl_read32(priv, CSR_EEPROM_GP);
+ __le16 *e = (__le16 *)&priv->eeprom;
-+ u32 gp = iwl3945_read32(priv, CSR_EEPROM_GP);
++ u32 gp = iwl4965_read32(priv, CSR_EEPROM_GP);
u32 r;
int sz = sizeof(priv->eeprom);
int rc;
-@@ -1553,20 +1585,21 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+@@ -1602,20 +1667,21 @@ int iwl_eeprom_init(struct iwl_priv *priv)
return -ENOENT;
}
- rc = iwl_eeprom_aqcuire_semaphore(priv);
+ /* Make sure driver (instead of uCode) is allowed to read EEPROM */
-+ rc = iwl3945_eeprom_acquire_semaphore(priv);
++ rc = iwl4965_eeprom_acquire_semaphore(priv);
if (rc < 0) {
- IWL_ERROR("Failed to aqcuire EEPROM semaphore.\n");
+ IWL_ERROR("Failed to acquire EEPROM semaphore.\n");
@@ -386987,196 +467517,265 @@
for (addr = 0; addr < sz; addr += sizeof(u16)) {
- _iwl_write32(priv, CSR_EEPROM_REG, addr << 1);
- _iwl_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
-+ _iwl3945_write32(priv, CSR_EEPROM_REG, addr << 1);
-+ _iwl3945_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
++ _iwl4965_write32(priv, CSR_EEPROM_REG, addr << 1);
++ _iwl4965_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
for (i = 0; i < IWL_EEPROM_ACCESS_TIMEOUT;
i += IWL_EEPROM_ACCESS_DELAY) {
- r = _iwl_read_restricted(priv, CSR_EEPROM_REG);
-+ r = _iwl3945_read_direct32(priv, CSR_EEPROM_REG);
++ r = _iwl4965_read_direct32(priv, CSR_EEPROM_REG);
if (r & CSR_EEPROM_REG_READ_VALID_MSK)
break;
udelay(IWL_EEPROM_ACCESS_DELAY);
-@@ -1576,7 +1609,7 @@ int iwl_eeprom_init(struct iwl_priv *priv)
- IWL_ERROR("Time out reading EEPROM[%d]", addr);
- return -ETIMEDOUT;
+@@ -1626,12 +1692,12 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+ rc = -ETIMEDOUT;
+ goto done;
}
- e[addr / 2] = le16_to_cpu(r >> 16);
+ e[addr / 2] = cpu_to_le16(r >> 16);
}
+ rc = 0;
- return 0;
-@@ -1587,22 +1620,17 @@ int iwl_eeprom_init(struct iwl_priv *priv)
+ done:
+- iwl_eeprom_release_semaphore(priv);
++ iwl4965_eeprom_release_semaphore(priv);
+ return rc;
+ }
+
+@@ -1640,22 +1706,20 @@ done:
* Misc. internal state and helper functions
*
******************************************************************************/
-#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL3945_DEBUG
++#ifdef CONFIG_IWL4965_DEBUG
/**
- * iwl_report_frame - dump frame to syslog during debug sessions
-+ * iwl3945_report_frame - dump frame to syslog during debug sessions
++ * iwl4965_report_frame - dump frame to syslog during debug sessions
*
- * hack this function to show different aspects of received frames,
+ * You may hack this function to show different aspects of received frames,
* including selective frame dumps.
* group100 parameter selects whether to show 1 out of 100 good frames.
-- *
+ *
- * TODO: ieee80211_hdr stuff is common to 3945 and 4965, so frame type
- * info output is okay, but some of this stuff (e.g. iwl_rx_frame_stats)
- * is 3945-specific and gives bad output for 4965. Need to split the
- * functionality, keep common stuff here.
++ * TODO: This was originally written for 3945, need to audit for
++ * proper operation with 4965.
*/
-void iwl_report_frame(struct iwl_priv *priv,
- struct iwl_rx_packet *pkt,
-+void iwl3945_report_frame(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_packet *pkt,
++void iwl4965_report_frame(struct iwl4965_priv *priv,
++ struct iwl4965_rx_packet *pkt,
struct ieee80211_hdr *header, int group100)
{
u32 to_us;
-@@ -1624,9 +1652,9 @@ void iwl_report_frame(struct iwl_priv *priv,
+@@ -1677,9 +1741,9 @@ void iwl_report_frame(struct iwl_priv *priv,
u8 agc;
u16 sig_avg;
u16 noise_diff;
- struct iwl_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
- struct iwl_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
- struct iwl_rx_frame_end *rx_end = IWL_RX_END(pkt);
-+ struct iwl3945_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
-+ struct iwl3945_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
-+ struct iwl3945_rx_frame_end *rx_end = IWL_RX_END(pkt);
++ struct iwl4965_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
++ struct iwl4965_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
++ struct iwl4965_rx_frame_end *rx_end = IWL_RX_END(pkt);
u8 *data = IWL_RX_DATA(pkt);
/* MAC header */
-@@ -1702,11 +1730,11 @@ void iwl_report_frame(struct iwl_priv *priv,
+@@ -1755,11 +1819,11 @@ void iwl_report_frame(struct iwl_priv *priv,
else
title = "Frame";
- rate = iwl_rate_index_from_plcp(rate_sym);
-+ rate = iwl3945_rate_index_from_plcp(rate_sym);
++ rate = iwl4965_rate_index_from_plcp(rate_sym);
if (rate == -1)
rate = 0;
else
- rate = iwl_rates[rate].ieee / 2;
-+ rate = iwl3945_rates[rate].ieee / 2;
++ rate = iwl4965_rates[rate].ieee / 2;
/* print frame summary.
* MAC addresses show just the last byte (for brevity),
-@@ -1728,25 +1756,25 @@ void iwl_report_frame(struct iwl_priv *priv,
+@@ -1781,25 +1845,25 @@ void iwl_report_frame(struct iwl_priv *priv,
}
}
if (print_dump)
- iwl_print_hex_dump(IWL_DL_RX, data, length);
-+ iwl3945_print_hex_dump(IWL_DL_RX, data, length);
++ iwl4965_print_hex_dump(IWL_DL_RX, data, length);
}
#endif
-static void iwl_unset_hw_setting(struct iwl_priv *priv)
-+static void iwl3945_unset_hw_setting(struct iwl3945_priv *priv)
++static void iwl4965_unset_hw_setting(struct iwl4965_priv *priv)
{
if (priv->hw_setting.shared_virt)
pci_free_consistent(priv->pci_dev,
- sizeof(struct iwl_shared),
-+ sizeof(struct iwl3945_shared),
++ sizeof(struct iwl4965_shared),
priv->hw_setting.shared_virt,
priv->hw_setting.shared_phys);
}
/**
- * iwl_supported_rate_to_ie - fill in the supported rate in IE field
-+ * iwl3945_supported_rate_to_ie - fill in the supported rate in IE field
++ * iwl4965_supported_rate_to_ie - fill in the supported rate in IE field
*
* return : set the bit for each supported rate insert in ie
*/
-static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
-+static u16 iwl3945_supported_rate_to_ie(u8 *ie, u16 supported_rate,
++static u16 iwl4965_supported_rate_to_ie(u8 *ie, u16 supported_rate,
u16 basic_rate, int *left)
{
u16 ret_rates = 0, bit;
-@@ -1757,7 +1785,7 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+@@ -1810,7 +1874,7 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
for (bit = 1, i = 0; i < IWL_RATE_COUNT; i++, bit <<= 1) {
if (bit & supported_rate) {
ret_rates |= bit;
- rates[*cnt] = iwl_rates[i].ieee |
-+ rates[*cnt] = iwl3945_rates[i].ieee |
++ rates[*cnt] = iwl4965_rates[i].ieee |
((bit & basic_rate) ? 0x80 : 0x00);
(*cnt)++;
(*left)--;
-@@ -1771,9 +1799,9 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+@@ -1823,22 +1887,25 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
+ return ret_rates;
}
+-#ifdef CONFIG_IWLWIFI_HT
+-void static iwl_set_ht_capab(struct ieee80211_hw *hw,
+- struct ieee80211_ht_capability *ht_cap,
+- u8 use_wide_chan);
++#ifdef CONFIG_IWL4965_HT
++void static iwl4965_set_ht_capab(struct ieee80211_hw *hw,
++ struct ieee80211_ht_cap *ht_cap,
++ u8 use_current_config);
+ #endif
+
/**
- * iwl_fill_probe_req - fill in all required fields and IE for probe request
-+ * iwl3945_fill_probe_req - fill in all required fields and IE for probe request
++ * iwl4965_fill_probe_req - fill in all required fields and IE for probe request
*/
-static u16 iwl_fill_probe_req(struct iwl_priv *priv,
-+static u16 iwl3945_fill_probe_req(struct iwl3945_priv *priv,
++static u16 iwl4965_fill_probe_req(struct iwl4965_priv *priv,
struct ieee80211_mgmt *frame,
int left, int is_direct)
{
-@@ -1789,9 +1817,9 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ int len = 0;
+ u8 *pos = NULL;
+- u16 active_rates, ret_rates, cck_rates;
++ u16 active_rates, ret_rates, cck_rates, active_rate_basic;
++#ifdef CONFIG_IWL4965_HT
++ struct ieee80211_hw_mode *mode;
++#endif /* CONFIG_IWL4965_HT */
+
+ /* Make sure there is enough space for the probe request,
+ * two mandatory IEs and the data */
+@@ -1848,9 +1915,9 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
len += 24;
frame->frame_control = cpu_to_le16(IEEE80211_STYPE_PROBE_REQ);
- memcpy(frame->da, BROADCAST_ADDR, ETH_ALEN);
-+ memcpy(frame->da, iwl3945_broadcast_addr, ETH_ALEN);
++ memcpy(frame->da, iwl4965_broadcast_addr, ETH_ALEN);
memcpy(frame->sa, priv->mac_addr, ETH_ALEN);
- memcpy(frame->bssid, BROADCAST_ADDR, ETH_ALEN);
-+ memcpy(frame->bssid, iwl3945_broadcast_addr, ETH_ALEN);
++ memcpy(frame->bssid, iwl4965_broadcast_addr, ETH_ALEN);
frame->seq_ctrl = 0;
/* fill in our indirect SSID IE */
-@@ -1834,11 +1862,11 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
- priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
+@@ -1888,17 +1955,19 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ *pos++ = WLAN_EID_SUPP_RATES;
+ *pos = 0;
+
+- priv->active_rate = priv->rates_mask;
+- active_rates = priv->active_rate;
+- priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
++ /* exclude 60M rate */
++ active_rates = priv->rates_mask;
++ active_rates &= ~IWL_RATE_60M_MASK;
++
++ active_rate_basic = active_rates & IWL_BASIC_RATES_MASK;
cck_rates = IWL_CCK_RATES_MASK & active_rates;
- ret_rates = iwl_supported_rate_to_ie(pos, cck_rates,
-+ ret_rates = iwl3945_supported_rate_to_ie(pos, cck_rates,
- priv->active_rate_basic, &left);
+- priv->active_rate_basic, &left);
++ ret_rates = iwl4965_supported_rate_to_ie(pos, cck_rates,
++ active_rate_basic, &left);
active_rates &= ~ret_rates;
- ret_rates = iwl_supported_rate_to_ie(pos, active_rates,
-+ ret_rates = iwl3945_supported_rate_to_ie(pos, active_rates,
- priv->active_rate_basic, &left);
+- priv->active_rate_basic, &left);
++ ret_rates = iwl4965_supported_rate_to_ie(pos, active_rates,
++ active_rate_basic, &left);
active_rates &= ~ret_rates;
-@@ -1855,7 +1883,7 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+ len += 2 + *pos;
+@@ -1914,25 +1983,22 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
/* ... fill it in... */
*pos++ = WLAN_EID_EXT_SUPP_RATES;
*pos = 0;
- iwl_supported_rate_to_ie(pos, active_rates,
-+ iwl3945_supported_rate_to_ie(pos, active_rates,
- priv->active_rate_basic, &left);
+- priv->active_rate_basic, &left);
++ iwl4965_supported_rate_to_ie(pos, active_rates,
++ active_rate_basic, &left);
if (*pos > 0)
len += 2 + *pos;
-@@ -1867,16 +1895,16 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
+
+-#ifdef CONFIG_IWLWIFI_HT
+- if (is_direct && priv->is_ht_enabled) {
+- u8 use_wide_chan = 1;
+-
+- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
+- use_wide_chan = 0;
++#ifdef CONFIG_IWL4965_HT
++ mode = priv->hw->conf.mode;
++ if (mode->ht_info.ht_supported) {
+ pos += (*pos) + 1;
+ *pos++ = WLAN_EID_HT_CAPABILITY;
+- *pos++ = sizeof(struct ieee80211_ht_capability);
+- iwl_set_ht_capab(NULL, (struct ieee80211_ht_capability *)pos,
+- use_wide_chan);
+- len += 2 + sizeof(struct ieee80211_ht_capability);
++ *pos++ = sizeof(struct ieee80211_ht_cap);
++ iwl4965_set_ht_capab(priv->hw,
++ (struct ieee80211_ht_cap *)pos, 0);
++ len += 2 + sizeof(struct ieee80211_ht_cap);
+ }
+-#endif /*CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT */
+
+ fill_end:
+ return (u16)len;
+@@ -1941,16 +2007,16 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
/*
* QoS support
*/
-#ifdef CONFIG_IWLWIFI_QOS
-static int iwl_send_qos_params_command(struct iwl_priv *priv,
- struct iwl_qosparam_cmd *qos)
-+#ifdef CONFIG_IWL3945_QOS
-+static int iwl3945_send_qos_params_command(struct iwl3945_priv *priv,
-+ struct iwl3945_qosparam_cmd *qos)
++#ifdef CONFIG_IWL4965_QOS
++static int iwl4965_send_qos_params_command(struct iwl4965_priv *priv,
++ struct iwl4965_qosparam_cmd *qos)
{
- return iwl_send_cmd_pdu(priv, REPLY_QOS_PARAM,
- sizeof(struct iwl_qosparam_cmd), qos);
-+ return iwl3945_send_cmd_pdu(priv, REPLY_QOS_PARAM,
-+ sizeof(struct iwl3945_qosparam_cmd), qos);
++ return iwl4965_send_cmd_pdu(priv, REPLY_QOS_PARAM,
++ sizeof(struct iwl4965_qosparam_cmd), qos);
}
-static void iwl_reset_qos(struct iwl_priv *priv)
-+static void iwl3945_reset_qos(struct iwl3945_priv *priv)
++static void iwl4965_reset_qos(struct iwl4965_priv *priv)
{
u16 cw_min = 15;
u16 cw_max = 1023;
-@@ -1963,13 +1991,10 @@ static void iwl_reset_qos(struct iwl_priv *priv)
+@@ -2037,13 +2103,10 @@ static void iwl_reset_qos(struct iwl_priv *priv)
spin_unlock_irqrestore(&priv->lock, flags);
}
-static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
-+static void iwl3945_activate_qos(struct iwl3945_priv *priv, u8 force)
++static void iwl4965_activate_qos(struct iwl4965_priv *priv, u8 force)
{
unsigned long flags;
@@ -387186,109 +467785,124 @@
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
-@@ -1990,16 +2015,16 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+@@ -2057,23 +2120,28 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+ !priv->qos_data.qos_cap.q_AP.txop_request)
+ priv->qos_data.def_qos_parm.qos_flags |=
+ QOS_PARAM_FLG_TXOP_TYPE_MSK;
+-
+ if (priv->qos_data.qos_active)
+ priv->qos_data.def_qos_parm.qos_flags |=
+ QOS_PARAM_FLG_UPDATE_EDCA_MSK;
++#ifdef CONFIG_IWL4965_HT
++ if (priv->current_ht_config.is_ht)
++ priv->qos_data.def_qos_parm.qos_flags |= QOS_PARAM_FLG_TGN_MSK;
++#endif /* CONFIG_IWL4965_HT */
++
spin_unlock_irqrestore(&priv->lock, flags);
- if (force || iwl_is_associated(priv)) {
-+ if (force || iwl3945_is_associated(priv)) {
- IWL_DEBUG_QOS("send QoS cmd with Qos active %d \n",
- priv->qos_data.qos_active);
+- IWL_DEBUG_QOS("send QoS cmd with Qos active %d \n",
+- priv->qos_data.qos_active);
++ if (force || iwl4965_is_associated(priv)) {
++ IWL_DEBUG_QOS("send QoS cmd with Qos active=%d FLAGS=0x%X\n",
++ priv->qos_data.qos_active,
++ priv->qos_data.def_qos_parm.qos_flags);
- iwl_send_qos_params_command(priv,
-+ iwl3945_send_qos_params_command(priv,
++ iwl4965_send_qos_params_command(priv,
&(priv->qos_data.def_qos_parm));
}
}
-#endif /* CONFIG_IWLWIFI_QOS */
-+#endif /* CONFIG_IWL3945_QOS */
++#endif /* CONFIG_IWL4965_QOS */
/*
* Power management (not Tx power!) functions
*/
-@@ -2017,7 +2042,7 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+@@ -2091,7 +2159,7 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
/* default power management (not Tx power) table values */
/* for tim 0-10 */
-static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
-+static struct iwl3945_power_vec_entry range_0[IWL_POWER_AC] = {
++static struct iwl4965_power_vec_entry range_0[IWL_POWER_AC] = {
{{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
{{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500), SLP_VEC(1, 2, 3, 4, 4)}, 0},
{{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(300), SLP_VEC(2, 4, 6, 7, 7)}, 0},
-@@ -2027,7 +2052,7 @@ static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
+@@ -2101,7 +2169,7 @@ static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
};
/* for tim > 10 */
-static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
-+static struct iwl3945_power_vec_entry range_1[IWL_POWER_AC] = {
++static struct iwl4965_power_vec_entry range_1[IWL_POWER_AC] = {
{{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
{{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500),
SLP_VEC(1, 2, 3, 4, 0xFF)}, 0},
-@@ -2040,11 +2065,11 @@ static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
+@@ -2114,11 +2182,11 @@ static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
SLP_VEC(4, 7, 10, 10, 0xFF)}, 0}
};
-int iwl_power_init_handle(struct iwl_priv *priv)
-+int iwl3945_power_init_handle(struct iwl3945_priv *priv)
++int iwl4965_power_init_handle(struct iwl4965_priv *priv)
{
int rc = 0, i;
- struct iwl_power_mgr *pow_data;
- int size = sizeof(struct iwl_power_vec_entry) * IWL_POWER_AC;
-+ struct iwl3945_power_mgr *pow_data;
-+ int size = sizeof(struct iwl3945_power_vec_entry) * IWL_POWER_AC;
++ struct iwl4965_power_mgr *pow_data;
++ int size = sizeof(struct iwl4965_power_vec_entry) * IWL_POWER_AC;
u16 pci_pm;
IWL_DEBUG_POWER("Initialize power \n");
-@@ -2063,7 +2088,7 @@ int iwl_power_init_handle(struct iwl_priv *priv)
+@@ -2137,7 +2205,7 @@ int iwl_power_init_handle(struct iwl_priv *priv)
if (rc != 0)
return 0;
else {
- struct iwl_powertable_cmd *cmd;
-+ struct iwl3945_powertable_cmd *cmd;
++ struct iwl4965_powertable_cmd *cmd;
IWL_DEBUG_POWER("adjust power command flags\n");
-@@ -2079,15 +2104,15 @@ int iwl_power_init_handle(struct iwl_priv *priv)
+@@ -2153,15 +2221,15 @@ int iwl_power_init_handle(struct iwl_priv *priv)
return rc;
}
-static int iwl_update_power_cmd(struct iwl_priv *priv,
- struct iwl_powertable_cmd *cmd, u32 mode)
-+static int iwl3945_update_power_cmd(struct iwl3945_priv *priv,
-+ struct iwl3945_powertable_cmd *cmd, u32 mode)
++static int iwl4965_update_power_cmd(struct iwl4965_priv *priv,
++ struct iwl4965_powertable_cmd *cmd, u32 mode)
{
int rc = 0, i;
u8 skip;
u32 max_sleep = 0;
- struct iwl_power_vec_entry *range;
-+ struct iwl3945_power_vec_entry *range;
++ struct iwl4965_power_vec_entry *range;
u8 period = 0;
- struct iwl_power_mgr *pow_data;
-+ struct iwl3945_power_mgr *pow_data;
++ struct iwl4965_power_mgr *pow_data;
if (mode > IWL_POWER_INDEX_5) {
IWL_DEBUG_POWER("Error invalid power mode \n");
-@@ -2100,7 +2125,7 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
+@@ -2174,7 +2242,7 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
else
range = &pow_data->pwr_range_1[1];
- memcpy(cmd, &range[mode].cmd, sizeof(struct iwl_powertable_cmd));
-+ memcpy(cmd, &range[mode].cmd, sizeof(struct iwl3945_powertable_cmd));
++ memcpy(cmd, &range[mode].cmd, sizeof(struct iwl4965_powertable_cmd));
#ifdef IWL_MAC80211_DISABLE
if (priv->assoc_network != NULL) {
-@@ -2143,14 +2168,14 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
+@@ -2217,14 +2285,14 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
return rc;
}
-static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
-+static int iwl3945_send_power_mode(struct iwl3945_priv *priv, u32 mode)
++static int iwl4965_send_power_mode(struct iwl4965_priv *priv, u32 mode)
{
- u32 final_mode = mode;
+ u32 uninitialized_var(final_mode);
int rc;
- struct iwl_powertable_cmd cmd;
-+ struct iwl3945_powertable_cmd cmd;
++ struct iwl4965_powertable_cmd cmd;
/* If on battery, set to 3,
- * if plugged into AC power, set to CAM ("continuosly aware mode"),
@@ -387296,111 +467910,111 @@
* else user level */
switch (mode) {
case IWL_POWER_BATTERY:
-@@ -2164,9 +2189,9 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
- break;
- }
+@@ -2240,9 +2308,9 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
+
+ cmd.keep_alive_beacons = 0;
- iwl_update_power_cmd(priv, &cmd, final_mode);
-+ iwl3945_update_power_cmd(priv, &cmd, final_mode);
++ iwl4965_update_power_cmd(priv, &cmd, final_mode);
- rc = iwl_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
-+ rc = iwl3945_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
++ rc = iwl4965_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
if (final_mode == IWL_POWER_MODE_CAM)
clear_bit(STATUS_POWER_PMI, &priv->status);
-@@ -2176,7 +2201,7 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
+@@ -2252,7 +2320,7 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
return rc;
}
-int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
-+int iwl3945_is_network_packet(struct iwl3945_priv *priv, struct ieee80211_hdr *header)
++int iwl4965_is_network_packet(struct iwl4965_priv *priv, struct ieee80211_hdr *header)
{
/* Filter incoming packets to determine if they are targeted toward
* this network, discarding packets coming from ourselves */
-@@ -2206,7 +2231,7 @@ int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+@@ -2282,7 +2350,7 @@ int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
#define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
-const char *iwl_get_tx_fail_reason(u32 status)
-+static const char *iwl3945_get_tx_fail_reason(u32 status)
++static const char *iwl4965_get_tx_fail_reason(u32 status)
{
switch (status & TX_STATUS_MSK) {
case TX_STATUS_SUCCESS:
-@@ -2233,11 +2258,11 @@ const char *iwl_get_tx_fail_reason(u32 status)
+@@ -2309,11 +2377,11 @@ const char *iwl_get_tx_fail_reason(u32 status)
}
/**
- * iwl_scan_cancel - Cancel any currently executing HW scan
-+ * iwl3945_scan_cancel - Cancel any currently executing HW scan
++ * iwl4965_scan_cancel - Cancel any currently executing HW scan
*
* NOTE: priv->mutex is not required before calling this function
*/
-static int iwl_scan_cancel(struct iwl_priv *priv)
-+static int iwl3945_scan_cancel(struct iwl3945_priv *priv)
++static int iwl4965_scan_cancel(struct iwl4965_priv *priv)
{
if (!test_bit(STATUS_SCAN_HW, &priv->status)) {
clear_bit(STATUS_SCANNING, &priv->status);
-@@ -2260,17 +2285,17 @@ static int iwl_scan_cancel(struct iwl_priv *priv)
+@@ -2336,17 +2404,17 @@ static int iwl_scan_cancel(struct iwl_priv *priv)
}
/**
- * iwl_scan_cancel_timeout - Cancel any currently executing HW scan
-+ * iwl3945_scan_cancel_timeout - Cancel any currently executing HW scan
++ * iwl4965_scan_cancel_timeout - Cancel any currently executing HW scan
* @ms: amount of time to wait (in milliseconds) for scan to abort
*
* NOTE: priv->mutex must be held before calling this function
*/
-static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
-+static int iwl3945_scan_cancel_timeout(struct iwl3945_priv *priv, unsigned long ms)
++static int iwl4965_scan_cancel_timeout(struct iwl4965_priv *priv, unsigned long ms)
{
unsigned long now = jiffies;
int ret;
- ret = iwl_scan_cancel(priv);
-+ ret = iwl3945_scan_cancel(priv);
++ ret = iwl4965_scan_cancel(priv);
if (ret && ms) {
mutex_unlock(&priv->mutex);
while (!time_after(jiffies, now + msecs_to_jiffies(ms)) &&
-@@ -2284,7 +2309,7 @@ static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
+@@ -2360,7 +2428,7 @@ static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
return ret;
}
-static void iwl_sequence_reset(struct iwl_priv *priv)
-+static void iwl3945_sequence_reset(struct iwl3945_priv *priv)
++static void iwl4965_sequence_reset(struct iwl4965_priv *priv)
{
/* Reset ieee stats */
-@@ -2295,13 +2320,13 @@ static void iwl_sequence_reset(struct iwl_priv *priv)
+@@ -2371,13 +2439,13 @@ static void iwl_sequence_reset(struct iwl_priv *priv)
priv->last_frag_num = -1;
priv->last_packet_time = 0;
- iwl_scan_cancel(priv);
-+ iwl3945_scan_cancel(priv);
++ iwl4965_scan_cancel(priv);
}
- #define MAX_UCODE_BEACON_INTERVAL 1024
+ #define MAX_UCODE_BEACON_INTERVAL 4096
#define INTEL_CONN_LISTEN_INTERVAL __constant_cpu_to_le16(0xA)
-static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
-+static __le16 iwl3945_adjust_beacon_interval(u16 beacon_val)
++static __le16 iwl4965_adjust_beacon_interval(u16 beacon_val)
{
u16 new_val = 0;
u16 beacon_factor = 0;
-@@ -2314,7 +2339,7 @@ static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
+@@ -2390,7 +2458,7 @@ static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
return cpu_to_le16(new_val);
}
-static void iwl_setup_rxon_timing(struct iwl_priv *priv)
-+static void iwl3945_setup_rxon_timing(struct iwl3945_priv *priv)
++static void iwl4965_setup_rxon_timing(struct iwl4965_priv *priv)
{
u64 interval_tm_unit;
u64 tsf, result;
-@@ -2344,14 +2369,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
+@@ -2420,14 +2488,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
priv->rxon_timing.beacon_interval =
cpu_to_le16(beacon_int);
priv->rxon_timing.beacon_interval =
- iwl_adjust_beacon_interval(
-+ iwl3945_adjust_beacon_interval(
++ iwl4965_adjust_beacon_interval(
le16_to_cpu(priv->rxon_timing.beacon_interval));
}
@@ -387408,16 +468022,16 @@
} else {
priv->rxon_timing.beacon_interval =
- iwl_adjust_beacon_interval(conf->beacon_int);
-+ iwl3945_adjust_beacon_interval(conf->beacon_int);
++ iwl4965_adjust_beacon_interval(conf->beacon_int);
/* TODO: we need to get atim_window from upper stack
* for now we set to 0 */
priv->rxon_timing.atim_window = 0;
-@@ -2370,14 +2395,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
+@@ -2446,14 +2514,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
le16_to_cpu(priv->rxon_timing.atim_window));
}
-static int iwl_scan_initiate(struct iwl_priv *priv)
-+static int iwl3945_scan_initiate(struct iwl3945_priv *priv)
++static int iwl4965_scan_initiate(struct iwl4965_priv *priv)
{
if (priv->iw_mode == IEEE80211_IF_TYPE_AP) {
IWL_ERROR("APs don't scan.\n");
@@ -387425,41 +468039,41 @@
}
- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
IWL_DEBUG_SCAN("Aborting scan due to not ready.\n");
return -EIO;
}
-@@ -2404,9 +2429,9 @@ static int iwl_scan_initiate(struct iwl_priv *priv)
+@@ -2480,9 +2548,9 @@ static int iwl_scan_initiate(struct iwl_priv *priv)
return 0;
}
-static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
-+static int iwl3945_set_rxon_hwcrypto(struct iwl3945_priv *priv, int hw_decrypt)
++static int iwl4965_set_rxon_hwcrypto(struct iwl4965_priv *priv, int hw_decrypt)
{
- struct iwl_rxon_cmd *rxon = &priv->staging_rxon;
-+ struct iwl3945_rxon_cmd *rxon = &priv->staging_rxon;
++ struct iwl4965_rxon_cmd *rxon = &priv->staging_rxon;
if (hw_decrypt)
rxon->filter_flags &= ~RXON_FILTER_DIS_DECRYPT_MSK;
-@@ -2416,7 +2441,7 @@ static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
+@@ -2492,7 +2560,7 @@ static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
return 0;
}
-static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
-+static void iwl3945_set_flags_for_phymode(struct iwl3945_priv *priv, u8 phymode)
++static void iwl4965_set_flags_for_phymode(struct iwl4965_priv *priv, u8 phymode)
{
if (phymode == MODE_IEEE80211A) {
priv->staging_rxon.flags &=
-@@ -2424,7 +2449,7 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
+@@ -2500,7 +2568,7 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
| RXON_FLG_CCK_MSK);
priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
} else {
- /* Copied from iwl_bg_post_associate() */
-+ /* Copied from iwl3945_bg_post_associate() */
++ /* Copied from iwl4965_bg_post_associate() */
if (priv->assoc_capability & WLAN_CAPABILITY_SHORT_SLOT_TIME)
priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
else
-@@ -2440,11 +2465,11 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
+@@ -2516,11 +2584,11 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
}
/*
@@ -387467,68 +468081,68 @@
+ * initialize rxon structure with default values from eeprom
*/
-static void iwl_connection_init_rx_config(struct iwl_priv *priv)
-+static void iwl3945_connection_init_rx_config(struct iwl3945_priv *priv)
++static void iwl4965_connection_init_rx_config(struct iwl4965_priv *priv)
{
- const struct iwl_channel_info *ch_info;
-+ const struct iwl3945_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
memset(&priv->staging_rxon, 0, sizeof(priv->staging_rxon));
-@@ -2481,7 +2506,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+@@ -2557,7 +2625,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
priv->staging_rxon.flags |= RXON_FLG_SHORT_PREAMBLE_MSK;
#endif
- ch_info = iwl_get_channel_info(priv, priv->phymode,
-+ ch_info = iwl3945_get_channel_info(priv, priv->phymode,
++ ch_info = iwl4965_get_channel_info(priv, priv->phymode,
le16_to_cpu(priv->staging_rxon.channel));
if (!ch_info)
-@@ -2501,7 +2526,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+@@ -2577,7 +2645,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
else
priv->phymode = MODE_IEEE80211G;
- iwl_set_flags_for_phymode(priv, priv->phymode);
-+ iwl3945_set_flags_for_phymode(priv, priv->phymode);
++ iwl4965_set_flags_for_phymode(priv, priv->phymode);
priv->staging_rxon.ofdm_basic_rates =
(IWL_OFDM_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
-@@ -2509,15 +2534,12 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
- (IWL_CCK_RATES_MASK >> IWL_FIRST_CCK_RATE) & 0xF;
+@@ -2593,15 +2661,12 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
+ iwl4965_set_rxon_chain(priv);
}
-static int iwl_set_mode(struct iwl_priv *priv, int mode)
-+static int iwl3945_set_mode(struct iwl3945_priv *priv, int mode)
++static int iwl4965_set_mode(struct iwl4965_priv *priv, int mode)
{
- if (!iwl_is_ready_rf(priv))
- return -EAGAIN;
-
if (mode == IEEE80211_IF_TYPE_IBSS) {
- const struct iwl_channel_info *ch_info;
-+ const struct iwl3945_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
- ch_info = iwl_get_channel_info(priv,
-+ ch_info = iwl3945_get_channel_info(priv,
++ ch_info = iwl4965_get_channel_info(priv,
priv->phymode,
le16_to_cpu(priv->staging_rxon.channel));
-@@ -2528,32 +2550,36 @@ static int iwl_set_mode(struct iwl_priv *priv, int mode)
+@@ -2612,32 +2677,36 @@ static int iwl_set_mode(struct iwl_priv *priv, int mode)
}
}
+ priv->iw_mode = mode;
+
-+ iwl3945_connection_init_rx_config(priv);
++ iwl4965_connection_init_rx_config(priv);
+ memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
+
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
+
+ /* dont commit rxon if rf-kill is on*/
-+ if (!iwl3945_is_ready_rf(priv))
++ if (!iwl4965_is_ready_rf(priv))
+ return -EAGAIN;
+
cancel_delayed_work(&priv->scan_check);
- if (iwl_scan_cancel_timeout(priv, 100)) {
-+ if (iwl3945_scan_cancel_timeout(priv, 100)) {
++ if (iwl4965_scan_cancel_timeout(priv, 100)) {
IWL_WARNING("Aborted scan still in progress after 100ms\n");
IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
return -EAGAIN;
@@ -387542,36 +468156,47 @@
- iwl_clear_stations_table(priv);
-
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
return 0;
}
-static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
-+static void iwl3945_build_tx_cmd_hwcrypto(struct iwl3945_priv *priv,
++static void iwl4965_build_tx_cmd_hwcrypto(struct iwl4965_priv *priv,
struct ieee80211_tx_control *ctl,
- struct iwl_cmd *cmd,
-+ struct iwl3945_cmd *cmd,
++ struct iwl4965_cmd *cmd,
struct sk_buff *skb_frag,
int last_frag)
{
- struct iwl_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
-+ struct iwl3945_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
++ struct iwl4965_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
switch (keyinfo->alg) {
case ALG_CCMP:
-@@ -2596,8 +2622,8 @@ static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
+@@ -2680,8 +2749,8 @@ static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
/*
* handle build REPLY_TX command notification.
*/
-static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
- struct iwl_cmd *cmd,
-+static void iwl3945_build_tx_cmd_basic(struct iwl3945_priv *priv,
-+ struct iwl3945_cmd *cmd,
++static void iwl4965_build_tx_cmd_basic(struct iwl4965_priv *priv,
++ struct iwl4965_cmd *cmd,
struct ieee80211_tx_control *ctrl,
struct ieee80211_hdr *hdr,
int is_unicast, u8 std_id)
-@@ -2645,11 +2671,9 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+@@ -2703,6 +2772,10 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+ tx_flags |= TX_CMD_FLG_SEQ_CTL_MSK;
+ }
+
++ if (ieee80211_is_back_request(fc))
++ tx_flags |= TX_CMD_FLG_ACK_MSK | TX_CMD_FLG_IMM_BA_RSP_MASK;
++
++
+ cmd->cmd.tx.sta_id = std_id;
+ if (ieee80211_get_morefrag(hdr))
+ tx_flags |= TX_CMD_FLG_MORE_FRAG_MSK;
+@@ -2729,11 +2802,9 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
if ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) {
if ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_ASSOC_REQ ||
(fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_REASSOC_REQ)
@@ -387585,18 +468210,22 @@
} else
cmd->cmd.tx.timeout.pm_frame_timeout = 0;
-@@ -2658,41 +2682,44 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
+@@ -2742,40 +2813,47 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
cmd->cmd.tx.next_frame_len = 0;
}
-static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
+/**
-+ * iwl3945_get_sta_id - Find station's index within station table
++ * iwl4965_get_sta_id - Find station's index within station table
++ *
++ * If new IBSS station, create new entry in station table
+ */
-+static int iwl3945_get_sta_id(struct iwl3945_priv *priv, struct ieee80211_hdr *hdr)
++static int iwl4965_get_sta_id(struct iwl4965_priv *priv,
++ struct ieee80211_hdr *hdr)
{
int sta_id;
u16 fc = le16_to_cpu(hdr->frame_control);
+ DECLARE_MAC_BUF(mac);
- /* If this frame is broadcast or not data then use the broadcast
- * station id */
@@ -387617,7 +468246,7 @@
/* If we are an AP, then find the station, or use BCAST */
case IEEE80211_IF_TYPE_AP:
- sta_id = iwl_hw_find_station(priv, hdr->addr1);
-+ sta_id = iwl3945_hw_find_station(priv, hdr->addr1);
++ sta_id = iwl4965_hw_find_station(priv, hdr->addr1);
if (sta_id != IWL_INVALID_STATION)
return sta_id;
return priv->hw_setting.bcast_sta_id;
@@ -387626,64 +468255,64 @@
- * target specific station id */
+ /* If this frame is going out to an IBSS network, find the station,
+ * or create a new station table entry */
- case IEEE80211_IF_TYPE_IBSS: {
- DECLARE_MAC_BUF(mac);
-
+ case IEEE80211_IF_TYPE_IBSS:
- sta_id = iwl_hw_find_station(priv, hdr->addr1);
-+ /* Create new station table entry */
-+ sta_id = iwl3945_hw_find_station(priv, hdr->addr1);
++ sta_id = iwl4965_hw_find_station(priv, hdr->addr1);
if (sta_id != IWL_INVALID_STATION)
return sta_id;
- sta_id = iwl_add_station(priv, hdr->addr1, 0, CMD_ASYNC);
-+ sta_id = iwl3945_add_station(priv, hdr->addr1, 0, CMD_ASYNC);
++ /* Create new station table entry */
++ sta_id = iwl4965_add_station_flags(priv, hdr->addr1,
++ 0, CMD_ASYNC, NULL);
if (sta_id != IWL_INVALID_STATION)
return sta_id;
-@@ -2700,11 +2727,11 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
+@@ -2783,11 +2861,11 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
IWL_DEBUG_DROP("Station %s not in station map. "
"Defaulting to broadcast...\n",
print_mac(mac, hdr->addr1));
- iwl_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
-+ iwl3945_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
++ iwl4965_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
return priv->hw_setting.bcast_sta_id;
- }
+
default:
- IWL_WARNING("Unkown mode of operation: %d", priv->iw_mode);
+ IWL_WARNING("Unknown mode of operation: %d", priv->iw_mode);
return priv->hw_setting.bcast_sta_id;
}
}
-@@ -2712,18 +2739,18 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
+@@ -2795,18 +2873,19 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
/*
* start REPLY_TX command process
*/
-static int iwl_tx_skb(struct iwl_priv *priv,
-+static int iwl3945_tx_skb(struct iwl3945_priv *priv,
++static int iwl4965_tx_skb(struct iwl4965_priv *priv,
struct sk_buff *skb, struct ieee80211_tx_control *ctl)
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
- struct iwl_tfd_frame *tfd;
-+ struct iwl3945_tfd_frame *tfd;
++ struct iwl4965_tfd_frame *tfd;
u32 *control_flags;
int txq_id = ctl->queue;
- struct iwl_tx_queue *txq = NULL;
- struct iwl_queue *q = NULL;
-+ struct iwl3945_tx_queue *txq = NULL;
-+ struct iwl3945_queue *q = NULL;
++ struct iwl4965_tx_queue *txq = NULL;
++ struct iwl4965_queue *q = NULL;
dma_addr_t phys_addr;
dma_addr_t txcmd_phys;
- struct iwl_cmd *out_cmd = NULL;
-+ struct iwl3945_cmd *out_cmd = NULL;
++ dma_addr_t scratch_phys;
++ struct iwl4965_cmd *out_cmd = NULL;
u16 len, idx, len_org;
u8 id, hdr_len, unicast;
u8 sta_id;
-@@ -2735,13 +2762,13 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+@@ -2818,13 +2897,13 @@ static int iwl_tx_skb(struct iwl_priv *priv,
int rc;
spin_lock_irqsave(&priv->lock, flags);
- if (iwl_is_rfkill(priv)) {
-+ if (iwl3945_is_rfkill(priv)) {
++ if (iwl4965_is_rfkill(priv)) {
IWL_DEBUG_DROP("Dropping - RF KILL\n");
goto drop_unlock;
}
@@ -387695,25 +468324,25 @@
goto drop_unlock;
}
-@@ -2755,7 +2782,7 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+@@ -2838,7 +2917,7 @@ static int iwl_tx_skb(struct iwl_priv *priv,
fc = le16_to_cpu(hdr->frame_control);
-#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL3945_DEBUG
++#ifdef CONFIG_IWL4965_DEBUG
if (ieee80211_is_auth(fc))
IWL_DEBUG_TX("Sending AUTH frame\n");
else if (ieee80211_is_assoc_request(fc))
-@@ -2764,16 +2791,19 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+@@ -2847,16 +2926,19 @@ static int iwl_tx_skb(struct iwl_priv *priv,
IWL_DEBUG_TX("Sending REASSOC frame\n");
#endif
- if (!iwl_is_associated(priv) &&
+ /* drop all data frame if we are not associated */
-+ if (!iwl3945_is_associated(priv) && !priv->assoc_id &&
++ if (!iwl4965_is_associated(priv) && !priv->assoc_id &&
((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_DATA)) {
- IWL_DEBUG_DROP("Dropping - !iwl_is_associated\n");
-+ IWL_DEBUG_DROP("Dropping - !iwl3945_is_associated\n");
++ IWL_DEBUG_DROP("Dropping - !iwl4965_is_associated\n");
goto drop_unlock;
}
@@ -387723,13 +468352,25 @@
- sta_id = iwl_get_sta_id(priv, hdr);
+
+ /* Find (or create) index into station table for destination station */
-+ sta_id = iwl3945_get_sta_id(priv, hdr);
++ sta_id = iwl4965_get_sta_id(priv, hdr);
if (sta_id == IWL_INVALID_STATION) {
DECLARE_MAC_BUF(mac);
-@@ -2794,32 +2824,54 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+@@ -2876,40 +2958,62 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ (hdr->seq_ctrl &
__constant_cpu_to_le16(IEEE80211_SCTL_FRAG));
seq_number += 0x10;
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ /* aggregation is on for this <sta,tid> */
+ if (ctl->flags & IEEE80211_TXCTL_HT_MPDU_AGG)
+ txq_id = priv->stations[sta_id].tid[tid].agg.txq_id;
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
}
+
+ /* Descriptor for chosen Tx queue */
@@ -387750,12 +468391,12 @@
- txq->txb[q->first_empty].skb[0] = skb;
- memcpy(&(txq->txb[q->first_empty].status.control),
+ /* Set up driver data for this TFD */
-+ memset(&(txq->txb[q->write_ptr]), 0, sizeof(struct iwl3945_tx_info));
++ memset(&(txq->txb[q->write_ptr]), 0, sizeof(struct iwl4965_tx_info));
+ txq->txb[q->write_ptr].skb[0] = skb;
+ memcpy(&(txq->txb[q->write_ptr].status.control),
ctl, sizeof(struct ieee80211_tx_control));
+
-+ /* Init first empty entry in queue's array of Tx/cmd buffers */
++ /* Set up first empty entry in queue's array of Tx/cmd buffers */
out_cmd = &txq->cmd[idx];
memset(&out_cmd->hdr, 0, sizeof(out_cmd->hdr));
memset(&out_cmd->cmd.tx, 0, sizeof(out_cmd->cmd.tx));
@@ -387787,11 +468428,11 @@
+ */
len = priv->hw_setting.tx_cmd_len +
- sizeof(struct iwl_cmd_header) + hdr_len;
-+ sizeof(struct iwl3945_cmd_header) + hdr_len;
++ sizeof(struct iwl4965_cmd_header) + hdr_len;
len_org = len;
len = (len + 3) & ~3;
-@@ -2829,37 +2881,45 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+@@ -2919,36 +3023,53 @@ static int iwl_tx_skb(struct iwl_priv *priv,
else
len_org = 0;
@@ -387799,17 +468440,17 @@
- offsetof(struct iwl_cmd, hdr);
+ /* Physical address of this Tx command's header (not MAC header!),
+ * within command buffer array. */
-+ txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl3945_cmd) * idx +
-+ offsetof(struct iwl3945_cmd, hdr);
++ txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl4965_cmd) * idx +
++ offsetof(struct iwl4965_cmd, hdr);
- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
+ /* Add buffer containing Tx command and MAC(!) header to TFD's
+ * first entry */
-+ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
++ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
if (!(ctl->flags & IEEE80211_TXCTL_DO_NOT_ENCRYPT))
- iwl_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
-+ iwl3945_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
++ iwl4965_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
- /* 802.11 null functions have no payload... */
+ /* Set up TFD's 2nd entry to point directly to remainder of skb,
@@ -387819,18 +468460,12 @@
phys_addr = pci_map_single(priv->pci_dev, skb->data + hdr_len,
len, PCI_DMA_TODEVICE);
- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
-+ iwl3945_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
++ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
}
-- /* If there is no payload, then only one TFD is used */
- if (!len)
-+ /* If there is no payload, then we use only one Tx buffer */
- *control_flags = TFD_CTL_COUNT_SET(1);
- else
-+ /* Else use 2 buffers.
-+ * Tell 3945 about any padding after MAC header */
- *control_flags = TFD_CTL_COUNT_SET(2) |
- TFD_CTL_PAD_SET(U32_PAD(len));
++ /* Tell 4965 about any 2-byte padding after MAC header */
+ if (len_org)
+ out_cmd->cmd.tx.tx_flags |= TX_CMD_FLG_MH_PAD_MSK;
+ /* Total # bytes to be transmitted */
len = (u16)skb->len;
@@ -387838,105 +468473,123 @@
/* TODO need this for burst mode later on */
- iwl_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
-+ iwl3945_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
++ iwl4965_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
/* set is_hcca to 0; it probably will never be implemented */
- iwl_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
-+ iwl3945_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
++ iwl4965_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
++
++ scratch_phys = txcmd_phys + sizeof(struct iwl4965_cmd_header) +
++ offsetof(struct iwl4965_tx_cmd, scratch);
++ out_cmd->cmd.tx.dram_lsb_ptr = cpu_to_le32(scratch_phys);
++ out_cmd->cmd.tx.dram_msb_ptr = iwl_get_dma_hi_address(scratch_phys);
++
++#ifdef CONFIG_IWL4965_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++ /* TODO: move this functionality to rate scaling */
++ iwl4965_tl_get_stats(priv, hdr);
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /*CONFIG_IWL4965_HT */
- out_cmd->cmd.tx.tx_flags &= ~TX_CMD_FLG_ANT_A_MSK;
- out_cmd->cmd.tx.tx_flags &= ~TX_CMD_FLG_ANT_B_MSK;
-@@ -2875,25 +2935,26 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+- iwl4965_tx_cmd(priv, out_cmd, sta_id, txcmd_phys,
+- hdr, hdr_len, ctl, NULL);
+
+ if (!ieee80211_get_morefrag(hdr)) {
+ txq->need_update = 1;
+@@ -2961,27 +3082,29 @@ static int iwl_tx_skb(struct iwl_priv *priv,
txq->need_update = 0;
}
- iwl_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
-+ iwl3945_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
++ iwl4965_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
sizeof(out_cmd->cmd.tx));
- iwl_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
-+ iwl3945_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
++ iwl4965_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
ieee80211_get_hdrlen(fc));
++ /* Set up entry for this TFD in Tx byte-count array */
+ iwl4965_tx_queue_update_wr_ptr(priv, txq, len);
+
- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
- rc = iwl_tx_queue_update_write_ptr(priv, txq);
+ /* Tell device the write index *just past* this latest filled TFD */
-+ q->write_ptr = iwl3945_queue_inc_wrap(q->write_ptr, q->n_bd);
-+ rc = iwl3945_tx_queue_update_write_ptr(priv, txq);
++ q->write_ptr = iwl4965_queue_inc_wrap(q->write_ptr, q->n_bd);
++ rc = iwl4965_tx_queue_update_write_ptr(priv, txq);
spin_unlock_irqrestore(&priv->lock, flags);
if (rc)
return rc;
- if ((iwl_queue_space(q) < q->high_mark)
-+ if ((iwl3945_queue_space(q) < q->high_mark)
++ if ((iwl4965_queue_space(q) < q->high_mark)
&& priv->mac80211_registered) {
if (wait_write_ptr) {
spin_lock_irqsave(&priv->lock, flags);
txq->need_update = 1;
- iwl_tx_queue_update_write_ptr(priv, txq);
-+ iwl3945_tx_queue_update_write_ptr(priv, txq);
++ iwl4965_tx_queue_update_write_ptr(priv, txq);
spin_unlock_irqrestore(&priv->lock, flags);
}
-@@ -2908,13 +2969,13 @@ drop:
+@@ -2996,13 +3119,13 @@ drop:
return -1;
}
-static void iwl_set_rate(struct iwl_priv *priv)
-+static void iwl3945_set_rate(struct iwl3945_priv *priv)
++static void iwl4965_set_rate(struct iwl4965_priv *priv)
{
const struct ieee80211_hw_mode *hw = NULL;
struct ieee80211_rate *rate;
int i;
- hw = iwl_get_hw_mode(priv, priv->phymode);
-+ hw = iwl3945_get_hw_mode(priv, priv->phymode);
++ hw = iwl4965_get_hw_mode(priv, priv->phymode);
if (!hw) {
IWL_ERROR("Failed to set rate: unable to get hw mode\n");
return;
-@@ -2932,7 +2993,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+@@ -3020,7 +3143,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
if ((rate->val < IWL_RATE_COUNT) &&
(rate->flags & IEEE80211_RATE_SUPPORTED)) {
IWL_DEBUG_RATE("Adding rate index %d (plcp %d)%s\n",
- rate->val, iwl_rates[rate->val].plcp,
-+ rate->val, iwl3945_rates[rate->val].plcp,
++ rate->val, iwl4965_rates[rate->val].plcp,
(rate->flags & IEEE80211_RATE_BASIC) ?
"*" : "");
priv->active_rate |= (1 << rate->val);
-@@ -2940,7 +3001,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+@@ -3028,7 +3151,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
priv->active_rate_basic |= (1 << rate->val);
} else
IWL_DEBUG_RATE("Not adding rate %d (plcp %d)\n",
- rate->val, iwl_rates[rate->val].plcp);
-+ rate->val, iwl3945_rates[rate->val].plcp);
++ rate->val, iwl4965_rates[rate->val].plcp);
}
IWL_DEBUG_RATE("Set active_rate = %0x, active_rate_basic = %0x\n",
-@@ -2969,7 +3030,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
+@@ -3057,7 +3180,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
(IWL_OFDM_BASIC_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
}
-static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
-+static void iwl3945_radio_kill_sw(struct iwl3945_priv *priv, int disable_radio)
++static void iwl4965_radio_kill_sw(struct iwl4965_priv *priv, int disable_radio)
{
unsigned long flags;
-@@ -2980,21 +3041,21 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+@@ -3068,21 +3191,21 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
disable_radio ? "OFF" : "ON");
if (disable_radio) {
- iwl_scan_cancel(priv);
-+ iwl3945_scan_cancel(priv);
++ iwl4965_scan_cancel(priv);
/* FIXME: This is a workaround for AP */
if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
spin_lock_irqsave(&priv->lock, flags);
- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_SET,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
CSR_UCODE_SW_BIT_RFKILL);
spin_unlock_irqrestore(&priv->lock, flags);
- iwl_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
-+ iwl3945_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
++ iwl4965_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
set_bit(STATUS_RF_KILL_SW, &priv->status);
}
return;
@@ -387944,33 +468597,33 @@
spin_lock_irqsave(&priv->lock, flags);
- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
clear_bit(STATUS_RF_KILL_SW, &priv->status);
spin_unlock_irqrestore(&priv->lock, flags);
-@@ -3003,9 +3064,9 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+@@ -3091,9 +3214,9 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
msleep(10);
spin_lock_irqsave(&priv->lock, flags);
- iwl_read32(priv, CSR_UCODE_DRV_GP1);
- if (!iwl_grab_restricted_access(priv))
- iwl_release_restricted_access(priv);
-+ iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
-+ if (!iwl3945_grab_nic_access(priv))
-+ iwl3945_release_nic_access(priv);
++ iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
++ if (!iwl4965_grab_nic_access(priv))
++ iwl4965_release_nic_access(priv);
spin_unlock_irqrestore(&priv->lock, flags);
if (test_bit(STATUS_RF_KILL_HW, &priv->status)) {
-@@ -3018,7 +3079,7 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
+@@ -3106,7 +3229,7 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
return;
}
-void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
-+void iwl3945_set_decrypted_flag(struct iwl3945_priv *priv, struct sk_buff *skb,
++void iwl4965_set_decrypted_flag(struct iwl4965_priv *priv, struct sk_buff *skb,
u32 decrypt_res, struct ieee80211_rx_status *stats)
{
u16 fc =
-@@ -3050,97 +3111,9 @@ void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
+@@ -3138,97 +3261,10 @@ void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
}
}
@@ -388061,27 +468714,27 @@
- rxb->skb = NULL;
-}
-
--
+
#define IWL_PACKET_RETRY_TIME HZ
-int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
-+int iwl3945_is_duplicate_packet(struct iwl3945_priv *priv, struct ieee80211_hdr *header)
++int iwl4965_is_duplicate_packet(struct iwl4965_priv *priv, struct ieee80211_hdr *header)
{
u16 sc = le16_to_cpu(header->seq_ctrl);
u16 seq = (sc & IEEE80211_SCTL_SEQ) >> 4;
-@@ -3151,29 +3124,26 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+@@ -3239,29 +3275,26 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
switch (priv->iw_mode) {
case IEEE80211_IF_TYPE_IBSS:{
struct list_head *p;
- struct iwl_ibss_seq *entry = NULL;
-+ struct iwl3945_ibss_seq *entry = NULL;
++ struct iwl4965_ibss_seq *entry = NULL;
u8 *mac = header->addr2;
int index = mac[5] & (IWL_IBSS_MAC_HASH_SIZE - 1);
__list_for_each(p, &priv->ibss_mac_hash[index]) {
- entry =
- list_entry(p, struct iwl_ibss_seq, list);
-+ entry = list_entry(p, struct iwl3945_ibss_seq, list);
++ entry = list_entry(p, struct iwl4965_ibss_seq, list);
if (!compare_ether_addr(entry->mac, mac))
break;
}
@@ -388103,133 +468756,121 @@
return 0;
}
last_seq = &entry->seq_num;
-@@ -3207,7 +3177,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+@@ -3295,7 +3328,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
return 1;
}
-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
#include "iwl-spectrum.h"
-@@ -3222,7 +3192,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
+@@ -3310,7 +3343,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
* the lower 3 bytes is the time in usec within one beacon interval
*/
-static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
-+static u32 iwl3945_usecs_to_beacons(u32 usec, u32 beacon_interval)
++static u32 iwl4965_usecs_to_beacons(u32 usec, u32 beacon_interval)
{
u32 quot;
u32 rem;
-@@ -3241,7 +3211,7 @@ static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
+@@ -3329,7 +3362,7 @@ static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
* the same as HW timer counter counting down
*/
-static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
-+static __le32 iwl3945_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
++static __le32 iwl4965_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
{
u32 base_low = base & BEACON_TIME_MASK_LOW;
u32 addon_low = addon & BEACON_TIME_MASK_LOW;
-@@ -3260,13 +3230,13 @@ static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
+@@ -3348,13 +3381,13 @@ static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
return cpu_to_le32(res);
}
-static int iwl_get_measurement(struct iwl_priv *priv,
-+static int iwl3945_get_measurement(struct iwl3945_priv *priv,
++static int iwl4965_get_measurement(struct iwl4965_priv *priv,
struct ieee80211_measurement_params *params,
u8 type)
{
- struct iwl_spectrum_cmd spectrum;
- struct iwl_rx_packet *res;
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_spectrum_cmd spectrum;
-+ struct iwl3945_rx_packet *res;
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_spectrum_cmd spectrum;
++ struct iwl4965_rx_packet *res;
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_SPECTRUM_MEASUREMENT_CMD,
.data = (void *)&spectrum,
.meta.flags = CMD_WANT_SKB,
-@@ -3276,9 +3246,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+@@ -3364,9 +3397,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
int spectrum_resp_status;
int duration = le16_to_cpu(params->duration);
- if (iwl_is_associated(priv))
-+ if (iwl3945_is_associated(priv))
++ if (iwl4965_is_associated(priv))
add_time =
- iwl_usecs_to_beacons(
-+ iwl3945_usecs_to_beacons(
++ iwl4965_usecs_to_beacons(
le64_to_cpu(params->start_time) - priv->last_tsf,
le16_to_cpu(priv->rxon_timing.beacon_interval));
-@@ -3291,9 +3261,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+@@ -3379,9 +3412,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
cmd.len = sizeof(spectrum);
spectrum.len = cpu_to_le16(cmd.len - sizeof(spectrum.len));
- if (iwl_is_associated(priv))
-+ if (iwl3945_is_associated(priv))
++ if (iwl4965_is_associated(priv))
spectrum.start_time =
- iwl_add_beacon_time(priv->last_beacon_time,
-+ iwl3945_add_beacon_time(priv->last_beacon_time,
++ iwl4965_add_beacon_time(priv->last_beacon_time,
add_time,
le16_to_cpu(priv->rxon_timing.beacon_interval));
else
-@@ -3306,11 +3276,11 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+@@ -3394,11 +3427,11 @@ static int iwl_get_measurement(struct iwl_priv *priv,
spectrum.flags |= RXON_FLG_BAND_24G_MSK |
RXON_FLG_AUTO_DETECT_MSK | RXON_FLG_TGG_PROTECT_MSK;
- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl3945_send_cmd_sync(priv, &cmd);
++ rc = iwl4965_send_cmd_sync(priv, &cmd);
if (rc)
return rc;
- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl3945_rx_packet *)cmd.meta.u.skb->data;
++ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
IWL_ERROR("Bad return from REPLY_RX_ON_ASSOC command\n");
rc = -EIO;
-@@ -3320,9 +3290,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
- switch (spectrum_resp_status) {
- case 0: /* Command will be handled */
- if (res->u.spectrum.id != 0xff) {
-- IWL_DEBUG_INFO
-- ("Replaced existing measurement: %d\n",
-- res->u.spectrum.id);
-+ IWL_DEBUG_INFO("Replaced existing measurement: %d\n",
-+ res->u.spectrum.id);
- priv->measurement_status &= ~MEASUREMENT_READY;
- }
- priv->measurement_status |= MEASUREMENT_ACTIVE;
-@@ -3340,8 +3309,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
+@@ -3428,8 +3461,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
}
#endif
-static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
- struct iwl_tx_info *tx_sta)
-+static void iwl3945_txstatus_to_ieee(struct iwl3945_priv *priv,
-+ struct iwl3945_tx_info *tx_sta)
++static void iwl4965_txstatus_to_ieee(struct iwl4965_priv *priv,
++ struct iwl4965_tx_info *tx_sta)
{
tx_sta->status.ack_signal = 0;
-@@ -3360,41 +3329,41 @@ static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
+@@ -3448,41 +3481,41 @@ static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
}
/**
- * iwl_tx_queue_reclaim - Reclaim Tx queue entries no more used by NIC.
-+ * iwl3945_tx_queue_reclaim - Reclaim Tx queue entries already Tx'd
++ * iwl4965_tx_queue_reclaim - Reclaim Tx queue entries already Tx'd
*
- * When FW advances 'R' index, all entries between old and
- * new 'R' index need to be reclaimed. As result, some free space
- * forms. If there is enough free space (> low mark), wake Tx queue.
+ * When FW advances 'R' index, all entries between old and new 'R' index
-+ * need to be reclaimed. As result, some free space forms. If there is
++ * need to be reclaimed. As result, some free space forms. If there is
+ * enough free space (> low mark), wake the stack that feeds us.
*/
-int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
-+static int iwl3945_tx_queue_reclaim(struct iwl3945_priv *priv, int txq_id, int index)
++int iwl4965_tx_queue_reclaim(struct iwl4965_priv *priv, int txq_id, int index)
{
- struct iwl_tx_queue *txq = &priv->txq[txq_id];
- struct iwl_queue *q = &txq->q;
-+ struct iwl3945_tx_queue *txq = &priv->txq[txq_id];
-+ struct iwl3945_queue *q = &txq->q;
++ struct iwl4965_tx_queue *txq = &priv->txq[txq_id];
++ struct iwl4965_queue *q = &txq->q;
int nfreed = 0;
if ((index >= q->n_bd) || (x2_queue_used(q, index) == 0)) {
@@ -388243,16 +468884,16 @@
- for (index = iwl_queue_inc_wrap(index, q->n_bd);
- q->last_used != index;
- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd)) {
-+ for (index = iwl3945_queue_inc_wrap(index, q->n_bd);
++ for (index = iwl4965_queue_inc_wrap(index, q->n_bd);
+ q->read_ptr != index;
-+ q->read_ptr = iwl3945_queue_inc_wrap(q->read_ptr, q->n_bd)) {
++ q->read_ptr = iwl4965_queue_inc_wrap(q->read_ptr, q->n_bd)) {
if (txq_id != IWL_CMD_QUEUE_NUM) {
- iwl_txstatus_to_ieee(priv,
- &(txq->txb[txq->q.last_used]));
- iwl_hw_txq_free_tfd(priv, txq);
-+ iwl3945_txstatus_to_ieee(priv,
++ iwl4965_txstatus_to_ieee(priv,
+ &(txq->txb[txq->q.read_ptr]));
-+ iwl3945_hw_txq_free_tfd(priv, txq);
++ iwl4965_hw_txq_free_tfd(priv, txq);
} else if (nfreed > 1) {
IWL_ERROR("HCMD skipped: index (%d) %d %d\n", index,
- q->first_empty, q->last_used);
@@ -388263,43 +468904,168 @@
}
- if (iwl_queue_space(q) > q->low_mark && (txq_id >= 0) &&
-+ if (iwl3945_queue_space(q) > q->low_mark && (txq_id >= 0) &&
++ if (iwl4965_queue_space(q) > q->low_mark && (txq_id >= 0) &&
(txq_id != IWL_CMD_QUEUE_NUM) &&
priv->mac80211_registered)
ieee80211_wake_queue(priv->hw, txq_id);
-@@ -3403,7 +3372,7 @@ int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
+@@ -3491,7 +3524,7 @@ int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
return nfreed;
}
-static int iwl_is_tx_success(u32 status)
-+static int iwl3945_is_tx_success(u32 status)
++static int iwl4965_is_tx_success(u32 status)
{
- return (status & 0xFF) == 0x1;
- }
-@@ -3413,27 +3382,30 @@ static int iwl_is_tx_success(u32 status)
+ status &= TX_STATUS_MSK;
+ return (status == TX_STATUS_SUCCESS)
+@@ -3503,22 +3536,22 @@ static int iwl_is_tx_success(u32 status)
* Generic RX handler implementations
*
******************************************************************************/
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+
+-static inline int iwl_get_ra_sta_id(struct iwl_priv *priv,
++static inline int iwl4965_get_ra_sta_id(struct iwl4965_priv *priv,
+ struct ieee80211_hdr *hdr)
+ {
+ if (priv->iw_mode == IEEE80211_IF_TYPE_STA)
+ return IWL_AP_ID;
+ else {
+ u8 *da = ieee80211_get_DA(hdr);
+- return iwl_hw_find_station(priv, da);
++ return iwl4965_hw_find_station(priv, da);
+ }
+ }
+
+-static struct ieee80211_hdr *iwl_tx_queue_get_hdr(
+- struct iwl_priv *priv, int txq_id, int idx)
++static struct ieee80211_hdr *iwl4965_tx_queue_get_hdr(
++ struct iwl4965_priv *priv, int txq_id, int idx)
+ {
+ if (priv->txq[txq_id].txb[idx].skb[0])
+ return (struct ieee80211_hdr *)priv->txq[txq_id].
+@@ -3526,16 +3559,20 @@ static struct ieee80211_hdr *iwl_tx_queue_get_hdr(
+ return NULL;
+ }
+
+-static inline u32 iwl_get_scd_ssn(struct iwl_tx_resp *tx_resp)
++static inline u32 iwl4965_get_scd_ssn(struct iwl4965_tx_resp *tx_resp)
+ {
+ __le32 *scd_ssn = (__le32 *)((u32 *)&tx_resp->status +
+ tx_resp->frame_count);
+ return le32_to_cpu(*scd_ssn) & MAX_SN;
+
+ }
+-static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
+- struct iwl_ht_agg *agg,
+- struct iwl_tx_resp *tx_resp,
++
++/**
++ * iwl4965_tx_status_reply_tx - Handle Tx rspnse for frames in aggregation queue
++ */
++static int iwl4965_tx_status_reply_tx(struct iwl4965_priv *priv,
++ struct iwl4965_ht_agg *agg,
++ struct iwl4965_tx_resp *tx_resp,
+ u16 start_idx)
+ {
+ u32 status;
+@@ -3547,15 +3584,17 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
+ u16 seq;
+
+ if (agg->wait_for_ba)
+- IWL_DEBUG_TX_REPLY("got tx repsons w/o back\n");
++ IWL_DEBUG_TX_REPLY("got tx response w/o block-ack\n");
+
+ agg->frame_count = tx_resp->frame_count;
+ agg->start_idx = start_idx;
+ agg->rate_n_flags = le32_to_cpu(tx_resp->rate_n_flags);
+ agg->bitmap0 = agg->bitmap1 = 0;
+
++ /* # frames attempted by Tx command */
+ if (agg->frame_count == 1) {
+- struct iwl_tx_queue *txq ;
++ /* Only one frame was attempted; no block-ack will arrive */
++ struct iwl4965_tx_queue *txq ;
+ status = le32_to_cpu(frame_status[0]);
+
+ txq_id = agg->txq_id;
+@@ -3564,28 +3603,30 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
+ IWL_DEBUG_TX_REPLY("FrameCnt = %d, StartIdx=%d \n",
+ agg->frame_count, agg->start_idx);
+
+- tx_status = &(priv->txq[txq_id].txb[txq->q.last_used].status);
++ tx_status = &(priv->txq[txq_id].txb[txq->q.read_ptr].status);
+ tx_status->retry_count = tx_resp->failure_frame;
+ tx_status->queue_number = status & 0xff;
+ tx_status->queue_length = tx_resp->bt_kill_count;
+ tx_status->queue_length |= tx_resp->failure_rts;
+
+- tx_status->flags = iwl_is_tx_success(status)?
++ tx_status->flags = iwl4965_is_tx_success(status)?
+ IEEE80211_TX_STATUS_ACK : 0;
+ tx_status->control.tx_rate =
+- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags);
++ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags);
+ /* FIXME: code repetition end */
+
+ IWL_DEBUG_TX_REPLY("1 Frame 0x%x failure :%d\n",
+ status & 0xff, tx_resp->failure_frame);
+ IWL_DEBUG_TX_REPLY("Rate Info rate_n_flags=%x\n",
+- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags));
++ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags));
+
+ agg->wait_for_ba = 0;
+ } else {
++ /* Two or more frames were attempted; expect block-ack */
+ u64 bitmap = 0;
+ int start = agg->start_idx;
+
++ /* Construct bit-map of pending frames within Tx window */
+ for (i = 0; i < agg->frame_count; i++) {
+ u16 sc;
+ status = le32_to_cpu(frame_status[i]);
+@@ -3600,7 +3641,7 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
+ IWL_DEBUG_TX_REPLY("FrameCnt = %d, txq_id=%d idx=%d\n",
+ agg->frame_count, txq_id, idx);
+
+- hdr = iwl_tx_queue_get_hdr(priv, txq_id, idx);
++ hdr = iwl4965_tx_queue_get_hdr(priv, txq_id, idx);
+
+ sc = le16_to_cpu(hdr->seq_ctrl);
+ if (idx != (SEQ_TO_SN(sc) & 0xff)) {
+@@ -3649,19 +3690,22 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
+ #endif
+ #endif
+
-static void iwl_rx_reply_tx(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
+/**
-+ * iwl3945_rx_reply_tx - Handle Tx response
++ * iwl4965_rx_reply_tx - Handle standard (non-aggregation) Tx response
+ */
-+static void iwl3945_rx_reply_tx(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_tx(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
u16 sequence = le16_to_cpu(pkt->hdr.sequence);
int txq_id = SEQ_TO_QUEUE(sequence);
int index = SEQ_TO_INDEX(sequence);
- struct iwl_tx_queue *txq = &priv->txq[txq_id];
-+ struct iwl3945_tx_queue *txq = &priv->txq[txq_id];
++ struct iwl4965_tx_queue *txq = &priv->txq[txq_id];
struct ieee80211_tx_status *tx_status;
- struct iwl_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
-+ struct iwl3945_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
++ struct iwl4965_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
u32 status = le32_to_cpu(tx_resp->status);
-
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ int tid, sta_id;
+ #endif
+ #endif
+@@ -3669,18 +3713,18 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
if ((index >= txq->q.n_bd) || (x2_queue_used(&txq->q, index) == 0)) {
IWL_ERROR("Read index for DMA queue txq_id (%d) index %d "
"is out of range [0-%d] %d %d\n", txq_id,
@@ -388310,30 +469076,88 @@
return;
}
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ if (txq->sched_retry) {
+- const u32 scd_ssn = iwl_get_scd_ssn(tx_resp);
++ const u32 scd_ssn = iwl4965_get_scd_ssn(tx_resp);
+ struct ieee80211_hdr *hdr =
+- iwl_tx_queue_get_hdr(priv, txq_id, index);
+- struct iwl_ht_agg *agg = NULL;
++ iwl4965_tx_queue_get_hdr(priv, txq_id, index);
++ struct iwl4965_ht_agg *agg = NULL;
+ __le16 *qc = ieee80211_get_qos_ctrl(hdr);
+
+ if (qc == NULL) {
+@@ -3690,7 +3734,7 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
+
+ tid = le16_to_cpu(*qc) & 0xf;
+
+- sta_id = iwl_get_ra_sta_id(priv, hdr);
++ sta_id = iwl4965_get_ra_sta_id(priv, hdr);
+ if (unlikely(sta_id == IWL_INVALID_STATION)) {
+ IWL_ERROR("Station not known for\n");
+ return;
+@@ -3701,20 +3745,20 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
+ iwl4965_tx_status_reply_tx(priv, agg, tx_resp, index);
+
+ if ((tx_resp->frame_count == 1) &&
+- !iwl_is_tx_success(status)) {
++ !iwl4965_is_tx_success(status)) {
+ /* TODO: send BAR */
+ }
+
+- if ((txq->q.last_used != (scd_ssn & 0xff))) {
+- index = iwl_queue_dec_wrap(scd_ssn & 0xff, txq->q.n_bd);
++ if ((txq->q.read_ptr != (scd_ssn & 0xff))) {
++ index = iwl4965_queue_dec_wrap(scd_ssn & 0xff, txq->q.n_bd);
+ IWL_DEBUG_TX_REPLY("Retry scheduler reclaim scd_ssn "
+ "%d index %d\n", scd_ssn , index);
+- iwl_tx_queue_reclaim(priv, txq_id, index);
++ iwl4965_tx_queue_reclaim(priv, txq_id, index);
+ }
+ } else {
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
- tx_status = &(txq->txb[txq->q.last_used].status);
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
+ tx_status = &(txq->txb[txq->q.read_ptr].status);
tx_status->retry_count = tx_resp->failure_frame;
tx_status->queue_number = status;
-@@ -3441,28 +3413,28 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
+@@ -3722,35 +3766,35 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
tx_status->queue_length |= tx_resp->failure_rts;
tx_status->flags =
- iwl_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
-+ iwl3945_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
++ iwl4965_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
-- tx_status->control.tx_rate = iwl_rate_index_from_plcp(tx_resp->rate);
-+ tx_status->control.tx_rate = iwl3945_rate_index_from_plcp(tx_resp->rate);
+ tx_status->control.tx_rate =
+- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags);
++ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags);
- IWL_DEBUG_TX("Tx queue %d Status %s (0x%08x) plcp rate %d retries %d\n",
-- txq_id, iwl_get_tx_fail_reason(status), status,
-+ txq_id, iwl3945_get_tx_fail_reason(status), status,
- tx_resp->rate, tx_resp->failure_frame);
+ IWL_DEBUG_TX("Tx queue %d Status %s (0x%08x) rate_n_flags 0x%x "
+- "retries %d\n", txq_id, iwl_get_tx_fail_reason(status),
++ "retries %d\n", txq_id, iwl4965_get_tx_fail_reason(status),
+ status, le32_to_cpu(tx_resp->rate_n_flags),
+ tx_resp->failure_frame);
IWL_DEBUG_TX_REPLY("Tx queue reclaim %d\n", index);
if (index != -1)
- iwl_tx_queue_reclaim(priv, txq_id, index);
-+ iwl3945_tx_queue_reclaim(priv, txq_id, index);
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++ iwl4965_tx_queue_reclaim(priv, txq_id, index);
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
+ }
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
if (iwl_check_bits(status, TX_ABORT_REQUIRED_MSK))
IWL_ERROR("TODO: Implement Tx ABORT REQUIRED!!!\n");
@@ -388342,45 +469166,42 @@
-static void iwl_rx_reply_alive(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_reply_alive(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_alive(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_alive_resp *palive;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_alive_resp *palive;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_alive_resp *palive;
struct delayed_work *pwork;
palive = &pkt->u.alive_frame;
-@@ -3476,14 +3448,14 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
+@@ -3764,12 +3808,12 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
IWL_DEBUG_INFO("Initialization Alive received.\n");
memcpy(&priv->card_alive_init,
&pkt->u.alive_frame,
- sizeof(struct iwl_init_alive_resp));
-+ sizeof(struct iwl3945_init_alive_resp));
++ sizeof(struct iwl4965_init_alive_resp));
pwork = &priv->init_alive_start;
} else {
IWL_DEBUG_INFO("Runtime Alive received.\n");
memcpy(&priv->card_alive, &pkt->u.alive_frame,
- sizeof(struct iwl_alive_resp));
-+ sizeof(struct iwl3945_alive_resp));
++ sizeof(struct iwl4965_alive_resp));
pwork = &priv->alive_start;
-- iwl_disable_events(priv);
-+ iwl3945_disable_events(priv);
}
- /* We delay the ALIVE response by 5ms to
-@@ -3495,19 +3467,19 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
+@@ -3782,19 +3826,19 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
IWL_WARNING("uCode did not respond OK.\n");
}
-static void iwl_rx_reply_add_sta(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_reply_add_sta(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_add_sta(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
IWL_DEBUG_RX("Received REPLY_ADD_STA: 0x%02X\n", pkt->u.status);
return;
@@ -388388,27 +469209,27 @@
-static void iwl_rx_reply_error(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_reply_error(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_error(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
IWL_ERROR("Error Reply type 0x%08X cmd %s (0x%02X) "
"seq 0x%04X ser 0x%08X\n",
-@@ -3520,23 +3492,23 @@ static void iwl_rx_reply_error(struct iwl_priv *priv,
+@@ -3807,23 +3851,23 @@ static void iwl_rx_reply_error(struct iwl_priv *priv,
#define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
-static void iwl_rx_csa(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_csa(struct iwl3945_priv *priv, struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_csa(struct iwl4965_priv *priv, struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_rxon_cmd *rxon = (void *)&priv->active_rxon;
- struct iwl_csa_notification *csa = &(pkt->u.csa_notif);
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rxon_cmd *rxon = (void *)&priv->active_rxon;
-+ struct iwl3945_csa_notification *csa = &(pkt->u.csa_notif);
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rxon_cmd *rxon = (void *)&priv->active_rxon;
++ struct iwl4965_csa_notification *csa = &(pkt->u.csa_notif);
IWL_DEBUG_11H("CSA notif: channel %d, status %d\n",
le16_to_cpu(csa->channel), le32_to_cpu(csa->status));
rxon->channel = csa->channel;
@@ -388417,33 +469238,33 @@
-static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_spectrum_measure_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_spectrum_measure_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_spectrum_notification *report = &(pkt->u.spectrum_notif);
-+#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_spectrum_notification *report = &(pkt->u.spectrum_notif);
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_spectrum_notification *report = &(pkt->u.spectrum_notif);
if (!report->state) {
IWL_DEBUG(IWL_DL_11H | IWL_DL_INFO,
-@@ -3549,35 +3521,35 @@ static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
+@@ -3836,35 +3880,35 @@ static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
#endif
}
-static void iwl_rx_pm_sleep_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_pm_sleep_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_pm_sleep_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
-#ifdef CONFIG_IWLWIFI_DEBUG
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_sleep_notification *sleep = &(pkt->u.sleep_notif);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_sleep_notification *sleep = &(pkt->u.sleep_notif);
++#ifdef CONFIG_IWL4965_DEBUG
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_sleep_notification *sleep = &(pkt->u.sleep_notif);
IWL_DEBUG_RX("sleep mode: %d, src: %d\n",
sleep->pm_sleep_mode, sleep->pm_wakeup_src);
#endif
@@ -388451,25 +469272,25 @@
-static void iwl_rx_pm_debug_statistics_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_pm_debug_statistics_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_pm_debug_statistics_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
IWL_DEBUG_RADIO("Dumping %d bytes of unhandled "
"notification for %s:\n",
le32_to_cpu(pkt->len), get_cmd_string(pkt->hdr.cmd));
- iwl_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
-+ iwl3945_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
++ iwl4965_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
}
-static void iwl_bg_beacon_update(struct work_struct *work)
-+static void iwl3945_bg_beacon_update(struct work_struct *work)
++static void iwl4965_bg_beacon_update(struct work_struct *work)
{
- struct iwl_priv *priv =
- container_of(work, struct iwl_priv, beacon_update);
-+ struct iwl3945_priv *priv =
-+ container_of(work, struct iwl3945_priv, beacon_update);
++ struct iwl4965_priv *priv =
++ container_of(work, struct iwl4965_priv, beacon_update);
struct sk_buff *beacon;
/* Pull updated AP beacon from mac80211. will fail if not in AP mode */
@@ -388478,45 +469299,47 @@
if (!beacon) {
IWL_ERROR("update beacon failed\n");
-@@ -3592,15 +3564,15 @@ static void iwl_bg_beacon_update(struct work_struct *work)
+@@ -3879,16 +3923,16 @@ static void iwl_bg_beacon_update(struct work_struct *work)
priv->ibss_beacon = beacon;
mutex_unlock(&priv->mutex);
- iwl_send_beacon_cmd(priv);
-+ iwl3945_send_beacon_cmd(priv);
++ iwl4965_send_beacon_cmd(priv);
}
-static void iwl_rx_beacon_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_beacon_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_beacon_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
-#ifdef CONFIG_IWLWIFI_DEBUG
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_beacon_notif *beacon = &(pkt->u.beacon_status);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_beacon_notif *beacon = &(pkt->u.beacon_status);
- u8 rate = beacon->beacon_notify_hdr.rate;
+- u8 rate = iwl_hw_get_rate(beacon->beacon_notify_hdr.rate_n_flags);
++#ifdef CONFIG_IWL4965_DEBUG
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_beacon_notif *beacon = &(pkt->u.beacon_status);
++ u8 rate = iwl4965_hw_get_rate(beacon->beacon_notify_hdr.rate_n_flags);
IWL_DEBUG_RX("beacon status %x retries %d iss %d "
-@@ -3618,25 +3590,25 @@ static void iwl_rx_beacon_notif(struct iwl_priv *priv,
+ "tsf %d %d rate %d\n",
+@@ -3905,25 +3949,25 @@ static void iwl_rx_beacon_notif(struct iwl_priv *priv,
}
/* Service response to REPLY_SCAN_CMD (0x80) */
-static void iwl_rx_reply_scan(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_reply_scan(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_reply_scan(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
-#ifdef CONFIG_IWLWIFI_DEBUG
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_scanreq_notification *notif =
- (struct iwl_scanreq_notification *)pkt->u.raw;
-+#ifdef CONFIG_IWL3945_DEBUG
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_scanreq_notification *notif =
-+ (struct iwl3945_scanreq_notification *)pkt->u.raw;
++#ifdef CONFIG_IWL4965_DEBUG
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_scanreq_notification *notif =
++ (struct iwl4965_scanreq_notification *)pkt->u.raw;
IWL_DEBUG_RX("Scan request status = 0x%x\n", notif->status);
#endif
@@ -388525,37 +469348,37 @@
/* Service SCAN_START_NOTIFICATION (0x82) */
-static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_scan_start_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_scan_start_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_scanstart_notification *notif =
- (struct iwl_scanstart_notification *)pkt->u.raw;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_scanstart_notification *notif =
-+ (struct iwl3945_scanstart_notification *)pkt->u.raw;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_scanstart_notification *notif =
++ (struct iwl4965_scanstart_notification *)pkt->u.raw;
priv->scan_start_tsf = le32_to_cpu(notif->tsf_low);
IWL_DEBUG_SCAN("Scan start: "
"%d [802.11%s] "
-@@ -3648,12 +3620,12 @@ static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
+@@ -3935,12 +3979,12 @@ static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
}
/* Service SCAN_RESULTS_NOTIFICATION (0x83) */
-static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_scan_results_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_scan_results_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_scanresults_notification *notif =
- (struct iwl_scanresults_notification *)pkt->u.raw;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_scanresults_notification *notif =
-+ (struct iwl3945_scanresults_notification *)pkt->u.raw;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_scanresults_notification *notif =
++ (struct iwl4965_scanresults_notification *)pkt->u.raw;
IWL_DEBUG_SCAN("Scan ch.res: "
"%d [802.11%s] "
-@@ -3669,14 +3641,15 @@ static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
+@@ -3956,14 +4000,15 @@ static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
(priv->last_scan_jiffies, jiffies)));
priv->last_scan_jiffies = jiffies;
@@ -388565,17 +469388,17 @@
/* Service SCAN_COMPLETE_NOTIFICATION (0x84) */
-static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_scan_complete_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_scan_complete_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
- struct iwl_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
IWL_DEBUG_SCAN("Scan complete: %d channels (TSF 0x%08X:%08X) - %d\n",
scan_notif->scanned_channels,
-@@ -3711,6 +3684,7 @@ static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
+@@ -3998,6 +4043,7 @@ static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
}
priv->last_scan_jiffies = jiffies;
@@ -388583,67 +469406,106 @@
IWL_DEBUG_INFO("Setting scan to off\n");
clear_bit(STATUS_SCANNING, &priv->status);
-@@ -3729,10 +3703,10 @@ reschedule:
+@@ -4016,10 +4062,10 @@ reschedule:
/* Handle notification from uCode that card's power state is changing
* due to software, hardware, or critical temperature RFKILL */
-static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_rx_card_state_notif(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_rx_card_state_notif(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (void *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
u32 flags = le32_to_cpu(pkt->u.card_state_notif.flags);
unsigned long status = priv->status;
-@@ -3740,7 +3714,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- (flags & HW_CARD_DISABLED) ? "Kill" : "On",
- (flags & SW_CARD_DISABLED) ? "Kill" : "On");
+@@ -4030,35 +4076,35 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+ if (flags & (SW_CARD_DISABLED | HW_CARD_DISABLED |
+ RF_CARD_DISABLED)) {
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_SET,
- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
+ CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
- if (flags & HW_CARD_DISABLED)
-@@ -3754,7 +3728,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- else
+- if (!iwl_grab_restricted_access(priv)) {
+- iwl_write_restricted(
++ if (!iwl4965_grab_nic_access(priv)) {
++ iwl4965_write_direct32(
+ priv, HBUS_TARG_MBX_C,
+ HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED);
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ }
+
+ if (!(flags & RXON_CARD_DISABLED)) {
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
+ CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+- if (!iwl_grab_restricted_access(priv)) {
+- iwl_write_restricted(
++ if (!iwl4965_grab_nic_access(priv)) {
++ iwl4965_write_direct32(
+ priv, HBUS_TARG_MBX_C,
+ HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED);
+
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
+ }
+ }
+
+ if (flags & RF_CARD_DISABLED) {
+- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
+ CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT);
+- iwl_read32(priv, CSR_UCODE_DRV_GP1);
+- if (!iwl_grab_restricted_access(priv))
+- iwl_release_restricted_access(priv);
++ iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
++ if (!iwl4965_grab_nic_access(priv))
++ iwl4965_release_nic_access(priv);
+ }
+ }
+
+@@ -4074,7 +4120,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
clear_bit(STATUS_RF_KILL_SW, &priv->status);
-- iwl_scan_cancel(priv);
-+ iwl3945_scan_cancel(priv);
+ if (!(flags & RXON_CARD_DISABLED))
+- iwl_scan_cancel(priv);
++ iwl4965_scan_cancel(priv);
if ((test_bit(STATUS_RF_KILL_HW, &status) !=
test_bit(STATUS_RF_KILL_HW, &priv->status)) ||
-@@ -3766,7 +3740,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+@@ -4086,7 +4132,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
}
/**
- * iwl_setup_rx_handlers - Initialize Rx handler callbacks
-+ * iwl3945_setup_rx_handlers - Initialize Rx handler callbacks
++ * iwl4965_setup_rx_handlers - Initialize Rx handler callbacks
*
* Setup the RX handlers for each of the reply types sent from the uCode
* to the host.
-@@ -3774,61 +3748,58 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+@@ -4094,61 +4140,58 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
* This function chains into the hardware specific files for them to setup
* any hardware specific handlers as well.
*/
-static void iwl_setup_rx_handlers(struct iwl_priv *priv)
-+static void iwl3945_setup_rx_handlers(struct iwl3945_priv *priv)
++static void iwl4965_setup_rx_handlers(struct iwl4965_priv *priv)
{
- priv->rx_handlers[REPLY_ALIVE] = iwl_rx_reply_alive;
- priv->rx_handlers[REPLY_ADD_STA] = iwl_rx_reply_add_sta;
- priv->rx_handlers[REPLY_ERROR] = iwl_rx_reply_error;
- priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl_rx_csa;
-+ priv->rx_handlers[REPLY_ALIVE] = iwl3945_rx_reply_alive;
-+ priv->rx_handlers[REPLY_ADD_STA] = iwl3945_rx_reply_add_sta;
-+ priv->rx_handlers[REPLY_ERROR] = iwl3945_rx_reply_error;
-+ priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl3945_rx_csa;
++ priv->rx_handlers[REPLY_ALIVE] = iwl4965_rx_reply_alive;
++ priv->rx_handlers[REPLY_ADD_STA] = iwl4965_rx_reply_add_sta;
++ priv->rx_handlers[REPLY_ERROR] = iwl4965_rx_reply_error;
++ priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl4965_rx_csa;
priv->rx_handlers[SPECTRUM_MEASURE_NOTIFICATION] =
- iwl_rx_spectrum_measure_notif;
- priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl_rx_pm_sleep_notif;
-+ iwl3945_rx_spectrum_measure_notif;
-+ priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl3945_rx_pm_sleep_notif;
++ iwl4965_rx_spectrum_measure_notif;
++ priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl4965_rx_pm_sleep_notif;
priv->rx_handlers[PM_DEBUG_STATISTIC_NOTIFIC] =
- iwl_rx_pm_debug_statistics_notif;
- priv->rx_handlers[BEACON_NOTIFICATION] = iwl_rx_beacon_notif;
@@ -388655,8 +469517,8 @@
- * The same handler is used for both the REPLY to a
- * discrete statistics request from the host as well as
- * for the periodic statistics notification from the uCode
-+ iwl3945_rx_pm_debug_statistics_notif;
-+ priv->rx_handlers[BEACON_NOTIFICATION] = iwl3945_rx_beacon_notif;
++ iwl4965_rx_pm_debug_statistics_notif;
++ priv->rx_handlers[BEACON_NOTIFICATION] = iwl4965_rx_beacon_notif;
+
+ /*
+ * The same handler is used for both the REPLY to a discrete
@@ -388665,33 +469527,33 @@
*/
- priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl_hw_rx_statistics;
- priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl_hw_rx_statistics;
-+ priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl3945_hw_rx_statistics;
-+ priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl3945_hw_rx_statistics;
++ priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl4965_hw_rx_statistics;
++ priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl4965_hw_rx_statistics;
- priv->rx_handlers[REPLY_SCAN_CMD] = iwl_rx_reply_scan;
- priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl_rx_scan_start_notif;
-+ priv->rx_handlers[REPLY_SCAN_CMD] = iwl3945_rx_reply_scan;
-+ priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl3945_rx_scan_start_notif;
++ priv->rx_handlers[REPLY_SCAN_CMD] = iwl4965_rx_reply_scan;
++ priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl4965_rx_scan_start_notif;
priv->rx_handlers[SCAN_RESULTS_NOTIFICATION] =
- iwl_rx_scan_results_notif;
-+ iwl3945_rx_scan_results_notif;
++ iwl4965_rx_scan_results_notif;
priv->rx_handlers[SCAN_COMPLETE_NOTIFICATION] =
- iwl_rx_scan_complete_notif;
- priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl_rx_card_state_notif;
- priv->rx_handlers[REPLY_TX] = iwl_rx_reply_tx;
-+ iwl3945_rx_scan_complete_notif;
-+ priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl3945_rx_card_state_notif;
-+ priv->rx_handlers[REPLY_TX] = iwl3945_rx_reply_tx;
++ iwl4965_rx_scan_complete_notif;
++ priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl4965_rx_card_state_notif;
++ priv->rx_handlers[REPLY_TX] = iwl4965_rx_reply_tx;
- /* Setup hardware specific Rx handlers */
- iwl_hw_rx_handler_setup(priv);
+ /* Set up hardware specific Rx handlers */
-+ iwl3945_hw_rx_handler_setup(priv);
++ iwl4965_hw_rx_handler_setup(priv);
}
/**
- * iwl_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
-+ * iwl3945_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
++ * iwl4965_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
* @rxb: Rx buffer to reclaim
*
* If an Rx buffer has an async callback associated with it the callback
@@ -388700,31 +469562,46 @@
*/
-static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- struct iwl_rx_mem_buffer *rxb)
-+static void iwl3945_tx_cmd_complete(struct iwl3945_priv *priv,
-+ struct iwl3945_rx_mem_buffer *rxb)
++static void iwl4965_tx_cmd_complete(struct iwl4965_priv *priv,
++ struct iwl4965_rx_mem_buffer *rxb)
{
- struct iwl_rx_packet *pkt = (struct iwl_rx_packet *)rxb->skb->data;
-+ struct iwl3945_rx_packet *pkt = (struct iwl3945_rx_packet *)rxb->skb->data;
++ struct iwl4965_rx_packet *pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
u16 sequence = le16_to_cpu(pkt->hdr.sequence);
int txq_id = SEQ_TO_QUEUE(sequence);
int index = SEQ_TO_INDEX(sequence);
int huge = sequence & SEQ_HUGE_FRAME;
int cmd_index;
- struct iwl_cmd *cmd;
-+ struct iwl3945_cmd *cmd;
++ struct iwl4965_cmd *cmd;
/* If a Tx command is being handled and it isn't in the actual
* command queue then there a command routing bug has been introduced
-@@ -3849,7 +3820,7 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+@@ -4169,7 +4212,7 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
!cmd->meta.u.callback(priv, cmd, rxb->skb))
rxb->skb = NULL;
- iwl_tx_queue_reclaim(priv, txq_id, index);
-+ iwl3945_tx_queue_reclaim(priv, txq_id, index);
++ iwl4965_tx_queue_reclaim(priv, txq_id, index);
if (!(cmd->meta.flags & CMD_ASYNC)) {
clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
-@@ -3879,10 +3850,10 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+@@ -4181,9 +4224,11 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+ /*
+ * Rx theory of operation
+ *
+- * The host allocates 32 DMA target addresses and passes the host address
+- * to the firmware at register IWL_RFDS_TABLE_LOWER + N * RFD_SIZE where N is
+- * 0 to 31
++ * Driver allocates a circular buffer of Receive Buffer Descriptors (RBDs),
++ * each of which point to Receive Buffers to be filled by 4965. These get
++ * used not only for Rx frames, but for any command response or notification
++ * from the 4965. The driver and 4965 manage the Rx buffers by means
++ * of indexes into the circular buffer.
+ *
+ * Rx Queue Indexes
+ * The host/firmware share two index registers for managing the Rx buffers.
+@@ -4199,10 +4244,10 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
* The queue is empty (no good data) if WRITE = READ - 1, and is full if
* WRITE = READ.
*
@@ -388737,18 +469614,18 @@
* and fire the RX interrupt. The driver can then query the READ index and
* process as many packets as possible, moving the WRITE index forward as it
* resets the Rx queue buffers with new memory.
-@@ -3890,8 +3861,8 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+@@ -4210,8 +4255,8 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
* The management in the driver is as follows:
* + A list of pre-allocated SKBs is stored in iwl->rxq->rx_free. When
* iwl->rxq->free_count drops to or below RX_LOW_WATERMARK, work is scheduled
- * to replensish the iwl->rxq->rx_free.
- * + In iwl_rx_replenish (scheduled) if 'processed' != 'read' then the
+ * to replenish the iwl->rxq->rx_free.
-+ * + In iwl3945_rx_replenish (scheduled) if 'processed' != 'read' then the
++ * + In iwl4965_rx_replenish (scheduled) if 'processed' != 'read' then the
* iwl->rxq is replenished and the READ INDEX is updated (updating the
* 'processed' and 'read' driver indexes as well)
* + A received packet is processed and handed to the kernel network stack,
-@@ -3904,28 +3875,28 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
+@@ -4224,28 +4269,28 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
*
* Driver sequence:
*
@@ -388756,22 +469633,22 @@
- * iwl_rx_replenish() Replenishes rx_free list from rx_used, and calls
- * iwl_rx_queue_restock
- * iwl_rx_queue_restock() Moves available buffers from rx_free into Rx
-+ * iwl3945_rx_queue_alloc() Allocates rx_free
-+ * iwl3945_rx_replenish() Replenishes rx_free list from rx_used, and calls
-+ * iwl3945_rx_queue_restock
-+ * iwl3945_rx_queue_restock() Moves available buffers from rx_free into Rx
++ * iwl4965_rx_queue_alloc() Allocates rx_free
++ * iwl4965_rx_replenish() Replenishes rx_free list from rx_used, and calls
++ * iwl4965_rx_queue_restock
++ * iwl4965_rx_queue_restock() Moves available buffers from rx_free into Rx
* queue, updates firmware pointers, and updates
* the WRITE index. If insufficient rx_free buffers
- * are available, schedules iwl_rx_replenish
-+ * are available, schedules iwl3945_rx_replenish
++ * are available, schedules iwl4965_rx_replenish
*
* -- enable interrupts --
- * ISR - iwl_rx() Detach iwl_rx_mem_buffers from pool up to the
-+ * ISR - iwl3945_rx() Detach iwl3945_rx_mem_buffers from pool up to the
++ * ISR - iwl4965_rx() Detach iwl4965_rx_mem_buffers from pool up to the
* READ INDEX, detaching the SKB from the pool.
* Moves the packet buffer from queue to rx_used.
- * Calls iwl_rx_queue_restock to refill any empty
-+ * Calls iwl3945_rx_queue_restock to refill any empty
++ * Calls iwl4965_rx_queue_restock to refill any empty
* slots.
* ...
*
@@ -388779,14 +469656,14 @@
/**
- * iwl_rx_queue_space - Return number of free slots available in queue.
-+ * iwl3945_rx_queue_space - Return number of free slots available in queue.
++ * iwl4965_rx_queue_space - Return number of free slots available in queue.
*/
-static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
-+static int iwl3945_rx_queue_space(const struct iwl3945_rx_queue *q)
++static int iwl4965_rx_queue_space(const struct iwl4965_rx_queue *q)
{
int s = q->read - q->write;
if (s <= 0)
-@@ -3938,15 +3909,9 @@ static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
+@@ -4258,15 +4303,9 @@ static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
}
/**
@@ -388797,68 +469674,69 @@
- * implementation being the same (only a numeric constant is
- * different)
- *
-+ * iwl3945_rx_queue_update_write_ptr - Update the write pointer for the RX queue
++ * iwl4965_rx_queue_update_write_ptr - Update the write pointer for the RX queue
*/
-int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
-+int iwl3945_rx_queue_update_write_ptr(struct iwl3945_priv *priv, struct iwl3945_rx_queue *q)
++int iwl4965_rx_queue_update_write_ptr(struct iwl4965_priv *priv, struct iwl4965_rx_queue *q)
{
u32 reg = 0;
int rc = 0;
-@@ -3957,24 +3922,29 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
+@@ -4277,24 +4316,29 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
if (q->need_update == 0)
goto exit_unlock;
+ /* If power-saving is in use, make sure device is awake */
if (test_bit(STATUS_POWER_PMI, &priv->status)) {
- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
-+ reg = iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
++ reg = iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
- iwl_set_bit(priv, CSR_GP_CNTRL,
-+ iwl3945_set_bit(priv, CSR_GP_CNTRL,
++ iwl4965_set_bit(priv, CSR_GP_CNTRL,
CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
goto exit_unlock;
}
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc)
goto exit_unlock;
- iwl_write_restricted(priv, FH_RSCSR_CHNL0_WPTR,
+ /* Device expects a multiple of 8 */
-+ iwl3945_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
++ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
q->write & ~0x7);
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
+
+ /* Else device is assumed to be awake */
} else
- iwl_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
+ /* Device expects a multiple of 8 */
-+ iwl3945_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
++ iwl4965_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
q->need_update = 0;
-@@ -3985,42 +3955,43 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
+@@ -4305,11 +4349,9 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
}
/**
- * iwl_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer pointer.
- *
- * NOTE: This function has 3945 and 4965 specific code paths in it.
-+ * iwl3945_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer ptr
++ * iwl4965_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer ptr
*/
-static inline __le32 iwl_dma_addr2rbd_ptr(struct iwl_priv *priv,
-+static inline __le32 iwl3945_dma_addr2rbd_ptr(struct iwl3945_priv *priv,
++static inline __le32 iwl4965_dma_addr2rbd_ptr(struct iwl4965_priv *priv,
dma_addr_t dma_addr)
{
- return cpu_to_le32((u32)dma_addr);
- }
+ return cpu_to_le32((u32)(dma_addr >> 8));
+@@ -4317,31 +4359,34 @@ static inline __le32 iwl_dma_addr2rbd_ptr(struct iwl_priv *priv,
+
/**
- * iwl_rx_queue_restock - refill RX queue from pre-allocated pool
-+ * iwl3945_rx_queue_restock - refill RX queue from pre-allocated pool
++ * iwl4965_rx_queue_restock - refill RX queue from pre-allocated pool
*
- * If there are slots in the RX queue that need to be restocked,
+ * If there are slots in the RX queue that need to be restocked,
@@ -388871,33 +469749,33 @@
* target buffer.
*/
-int iwl_rx_queue_restock(struct iwl_priv *priv)
-+static int iwl3945_rx_queue_restock(struct iwl3945_priv *priv)
++static int iwl4965_rx_queue_restock(struct iwl4965_priv *priv)
{
- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl3945_rx_queue *rxq = &priv->rxq;
++ struct iwl4965_rx_queue *rxq = &priv->rxq;
struct list_head *element;
- struct iwl_rx_mem_buffer *rxb;
-+ struct iwl3945_rx_mem_buffer *rxb;
++ struct iwl4965_rx_mem_buffer *rxb;
unsigned long flags;
int write, rc;
spin_lock_irqsave(&rxq->lock, flags);
write = rxq->write & ~0x7;
- while ((iwl_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
-+ while ((iwl3945_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
++ while ((iwl4965_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
+ /* Get next free Rx buffer, remove from free list */
element = rxq->rx_free.next;
- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
-+ rxb = list_entry(element, struct iwl3945_rx_mem_buffer, list);
++ rxb = list_entry(element, struct iwl4965_rx_mem_buffer, list);
list_del(element);
- rxq->bd[rxq->write] = iwl_dma_addr2rbd_ptr(priv, rxb->dma_addr);
+
+ /* Point to Rx buffer via next RBD in circular buffer */
-+ rxq->bd[rxq->write] = iwl3945_dma_addr2rbd_ptr(priv, rxb->dma_addr);
++ rxq->bd[rxq->write] = iwl4965_dma_addr2rbd_ptr(priv, rxb->dma_addr);
rxq->queue[rxq->write] = rxb;
rxq->write = (rxq->write + 1) & RX_QUEUE_MASK;
rxq->free_count--;
-@@ -4032,13 +4003,14 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
+@@ -4353,13 +4398,14 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
queue_work(priv->workqueue, &priv->rx_replenish);
@@ -388910,65 +469788,59 @@
rxq->need_update = 1;
spin_unlock_irqrestore(&rxq->lock, flags);
- rc = iwl_rx_queue_update_write_ptr(priv, rxq);
-+ rc = iwl3945_rx_queue_update_write_ptr(priv, rxq);
++ rc = iwl4965_rx_queue_update_write_ptr(priv, rxq);
if (rc)
return rc;
}
-@@ -4047,24 +4019,25 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
+@@ -4368,26 +4414,28 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
}
/**
- * iwl_rx_replensih - Move all used packet from rx_used to rx_free
-+ * iwl3945_rx_replenish - Move all used packet from rx_used to rx_free
++ * iwl4965_rx_replenish - Move all used packet from rx_used to rx_free
*
* When moving to rx_free an SKB is allocated for the slot.
*
- * Also restock the Rx queue via iwl_rx_queue_restock.
- * This is called as a scheduled work item (except for during intialization)
-+ * Also restock the Rx queue via iwl3945_rx_queue_restock.
++ * Also restock the Rx queue via iwl4965_rx_queue_restock.
+ * This is called as a scheduled work item (except for during initialization)
*/
-void iwl_rx_replenish(void *data)
-+static void iwl3945_rx_allocate(struct iwl3945_priv *priv)
++static void iwl4965_rx_allocate(struct iwl4965_priv *priv)
{
- struct iwl_priv *priv = data;
- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl3945_rx_queue *rxq = &priv->rxq;
++ struct iwl4965_rx_queue *rxq = &priv->rxq;
struct list_head *element;
- struct iwl_rx_mem_buffer *rxb;
-+ struct iwl3945_rx_mem_buffer *rxb;
++ struct iwl4965_rx_mem_buffer *rxb;
unsigned long flags;
spin_lock_irqsave(&rxq->lock, flags);
while (!list_empty(&rxq->rx_used)) {
element = rxq->rx_used.next;
- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
-+ rxb = list_entry(element, struct iwl3945_rx_mem_buffer, list);
++ rxb = list_entry(element, struct iwl4965_rx_mem_buffer, list);
+
+ /* Alloc a new receive buffer */
rxb->skb =
- alloc_skb(IWL_RX_BUF_SIZE, __GFP_NOWARN | GFP_ATOMIC);
+- alloc_skb(IWL_RX_BUF_SIZE, __GFP_NOWARN | GFP_ATOMIC);
++ alloc_skb(priv->hw_setting.rx_buf_size,
++ __GFP_NOWARN | GFP_ATOMIC);
if (!rxb->skb) {
-@@ -4076,8 +4049,19 @@ void iwl_rx_replenish(void *data)
- * more buffers it will schedule replenish */
- break;
+ if (net_ratelimit())
+ printk(KERN_CRIT DRV_NAME
+@@ -4399,32 +4447,55 @@ void iwl_rx_replenish(void *data)
}
-+
-+ /* If radiotap head is required, reserve some headroom here.
-+ * The physical head count is a variable rx_stats->phy_count.
-+ * We reserve 4 bytes here. Plus these extra bytes, the
-+ * headroom of the physical head should be enough for the
-+ * radiotap head that iwl3945 supported. See iwl3945_rt.
-+ */
-+ skb_reserve(rxb->skb, 4);
-+
priv->alloc_rxb_skb++;
list_del(element);
+
+ /* Get physical address of RB/SKB */
rxb->dma_addr =
pci_map_single(priv->pci_dev, rxb->skb->data,
- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
-@@ -4085,18 +4069,38 @@ void iwl_rx_replenish(void *data)
+- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
++ priv->hw_setting.rx_buf_size, PCI_DMA_FROMDEVICE);
+ list_add_tail(&rxb->list, &rxq->rx_free);
rxq->free_count++;
}
spin_unlock_irqrestore(&rxq->lock, flags);
@@ -388976,26 +469848,26 @@
+
+/*
+ * this should be called while priv->lock is locked
-+ */
-+static void __iwl3945_rx_replenish(void *data)
++*/
++static void __iwl4965_rx_replenish(void *data)
+{
-+ struct iwl3945_priv *priv = data;
++ struct iwl4965_priv *priv = data;
+
-+ iwl3945_rx_allocate(priv);
-+ iwl3945_rx_queue_restock(priv);
++ iwl4965_rx_allocate(priv);
++ iwl4965_rx_queue_restock(priv);
+}
+
+
-+void iwl3945_rx_replenish(void *data)
++void iwl4965_rx_replenish(void *data)
+{
-+ struct iwl3945_priv *priv = data;
++ struct iwl4965_priv *priv = data;
+ unsigned long flags;
+
-+ iwl3945_rx_allocate(priv);
++ iwl4965_rx_allocate(priv);
spin_lock_irqsave(&priv->lock, flags);
- iwl_rx_queue_restock(priv);
-+ iwl3945_rx_queue_restock(priv);
++ iwl4965_rx_queue_restock(priv);
spin_unlock_irqrestore(&priv->lock, flags);
}
@@ -389006,19 +469878,28 @@
* non NULL it is unmapped and freed
*/
-void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
-+static void iwl3945_rx_queue_free(struct iwl3945_priv *priv, struct iwl3945_rx_queue *rxq)
++static void iwl4965_rx_queue_free(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
{
int i;
for (i = 0; i < RX_QUEUE_SIZE + RX_FREE_BUFFERS; i++) {
-@@ -4113,21 +4117,25 @@ void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
+ if (rxq->pool[i].skb != NULL) {
+ pci_unmap_single(priv->pci_dev,
+ rxq->pool[i].dma_addr,
+- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
++ priv->hw_setting.rx_buf_size,
++ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(rxq->pool[i].skb);
+ }
+ }
+@@ -4434,21 +4505,25 @@ void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
rxq->bd = NULL;
}
-int iwl_rx_queue_alloc(struct iwl_priv *priv)
-+int iwl3945_rx_queue_alloc(struct iwl3945_priv *priv)
++int iwl4965_rx_queue_alloc(struct iwl4965_priv *priv)
{
- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl3945_rx_queue *rxq = &priv->rxq;
++ struct iwl4965_rx_queue *rxq = &priv->rxq;
struct pci_dev *dev = priv->pci_dev;
int i;
@@ -389038,53 +469919,63 @@
/* Set us so that we have processed and used all buffers, but have
* not restocked the Rx queue with fresh buffers */
rxq->read = rxq->write = 0;
-@@ -4136,7 +4144,7 @@ int iwl_rx_queue_alloc(struct iwl_priv *priv)
+@@ -4457,7 +4532,7 @@ int iwl_rx_queue_alloc(struct iwl_priv *priv)
return 0;
}
-void iwl_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
-+void iwl3945_rx_queue_reset(struct iwl3945_priv *priv, struct iwl3945_rx_queue *rxq)
++void iwl4965_rx_queue_reset(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
{
unsigned long flags;
int i;
-@@ -4183,7 +4191,7 @@ static u8 ratio2dB[100] = {
+@@ -4471,7 +4546,8 @@ void iwl_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
+ if (rxq->pool[i].skb != NULL) {
+ pci_unmap_single(priv->pci_dev,
+ rxq->pool[i].dma_addr,
+- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
++ priv->hw_setting.rx_buf_size,
++ PCI_DMA_FROMDEVICE);
+ priv->alloc_rxb_skb--;
+ dev_kfree_skb(rxq->pool[i].skb);
+ rxq->pool[i].skb = NULL;
+@@ -4504,7 +4580,7 @@ static u8 ratio2dB[100] = {
/* Calculates a relative dB value from a ratio of linear
* (i.e. not dB) signal levels.
* Conversion assumes that levels are voltages (20*log), not powers (10*log). */
-int iwl_calc_db_from_ratio(int sig_ratio)
-+int iwl3945_calc_db_from_ratio(int sig_ratio)
++int iwl4965_calc_db_from_ratio(int sig_ratio)
{
- /* Anything above 1000:1 just report as 60 dB */
- if (sig_ratio > 1000)
-@@ -4209,7 +4217,7 @@ int iwl_calc_db_from_ratio(int sig_ratio)
+ /* 1000:1 or higher just report as 60 dB */
+ if (sig_ratio >= 1000)
+@@ -4530,7 +4606,7 @@ int iwl_calc_db_from_ratio(int sig_ratio)
/* Calculate an indication of rx signal quality (a percentage, not dBm!).
* See http://www.ces.clemson.edu/linux/signal_quality.shtml for info
* about formulas used below. */
-int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
-+int iwl3945_calc_sig_qual(int rssi_dbm, int noise_dbm)
++int iwl4965_calc_sig_qual(int rssi_dbm, int noise_dbm)
{
int sig_qual;
int degradation = PERFECT_RSSI - rssi_dbm;
-@@ -4244,24 +4252,30 @@ int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
+@@ -4565,32 +4641,39 @@ int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
}
/**
- * iwl_rx_handle - Main entry function for receiving responses from the uCode
-+ * iwl3945_rx_handle - Main entry function for receiving responses from uCode
++ * iwl4965_rx_handle - Main entry function for receiving responses from uCode
*
* Uses the priv->rx_handlers callback function array to invoke
* the appropriate handlers, including command responses,
* frame-received notifications, and other notifications.
*/
-static void iwl_rx_handle(struct iwl_priv *priv)
-+static void iwl3945_rx_handle(struct iwl3945_priv *priv)
++static void iwl4965_rx_handle(struct iwl4965_priv *priv)
{
- struct iwl_rx_mem_buffer *rxb;
- struct iwl_rx_packet *pkt;
- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl3945_rx_mem_buffer *rxb;
-+ struct iwl3945_rx_packet *pkt;
-+ struct iwl3945_rx_queue *rxq = &priv->rxq;
++ struct iwl4965_rx_mem_buffer *rxb;
++ struct iwl4965_rx_packet *pkt;
++ struct iwl4965_rx_queue *rxq = &priv->rxq;
u32 r, i;
int reclaim;
unsigned long flags;
@@ -389094,15 +469985,16 @@
- r = iwl_hw_get_rx_read(priv);
+ /* uCode's read index (stored in shared DRAM) indicates the last Rx
+ * buffer that the driver may process (last buffer filled by ucode). */
-+ r = iwl3945_hw_get_rx_read(priv);
++ r = iwl4965_hw_get_rx_read(priv);
i = rxq->read;
-+ if (iwl3945_rx_queue_space(rxq) > (RX_QUEUE_SIZE / 2))
-+ fill_rx = 1;
/* Rx interrupt, but nothing sent from uCode */
if (i == r)
IWL_DEBUG(IWL_DL_RX | IWL_DL_ISR, "r = %d, i = %d\n", r, i);
-@@ -4269,7 +4283,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+
++ if (iwl4965_rx_queue_space(rxq) > (RX_QUEUE_SIZE / 2))
++ fill_rx = 1;
++
while (i != r) {
rxb = rxq->queue[i];
@@ -389111,50 +470003,60 @@
* then a bug has been introduced in the queue refilling
* routines -- catch it here */
BUG_ON(rxb == NULL);
-@@ -4279,7 +4293,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+@@ -4598,9 +4681,9 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ rxq->queue[i] = NULL;
+
pci_dma_sync_single_for_cpu(priv->pci_dev, rxb->dma_addr,
- IWL_RX_BUF_SIZE,
+- IWL_RX_BUF_SIZE,
++ priv->hw_setting.rx_buf_size,
PCI_DMA_FROMDEVICE);
- pkt = (struct iwl_rx_packet *)rxb->skb->data;
-+ pkt = (struct iwl3945_rx_packet *)rxb->skb->data;
++ pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
/* Reclaim a command buffer only if this packet is a response
* to a (driver-originated) command.
-@@ -4293,7 +4307,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+@@ -4617,7 +4700,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
/* Based on type of command response or notification,
* handle those that need handling via function in
- * rx_handlers table. See iwl_setup_rx_handlers() */
-+ * rx_handlers table. See iwl3945_setup_rx_handlers() */
++ * rx_handlers table. See iwl4965_setup_rx_handlers() */
if (priv->rx_handlers[pkt->hdr.cmd]) {
IWL_DEBUG(IWL_DL_HOST_COMMAND | IWL_DL_RX | IWL_DL_ISR,
"r = %d, i = %d, %s, 0x%02x\n", r, i,
-@@ -4308,11 +4322,11 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+@@ -4632,11 +4715,11 @@ static void iwl_rx_handle(struct iwl_priv *priv)
}
if (reclaim) {
- /* Invoke any callbacks, transfer the skb to caller,
- * and fire off the (possibly) blocking iwl_send_cmd()
+ /* Invoke any callbacks, transfer the skb to caller, and
-+ * fire off the (possibly) blocking iwl3945_send_cmd()
++ * fire off the (possibly) blocking iwl4965_send_cmd()
* as we reclaim the driver command queue */
if (rxb && rxb->skb)
- iwl_tx_cmd_complete(priv, rxb);
-+ iwl3945_tx_cmd_complete(priv, rxb);
++ iwl4965_tx_cmd_complete(priv, rxb);
else
IWL_WARNING("Claim null rxb?\n");
}
-@@ -4332,15 +4346,28 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+@@ -4651,20 +4734,34 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+ }
+
+ pci_unmap_single(priv->pci_dev, rxb->dma_addr,
+- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
++ priv->hw_setting.rx_buf_size,
++ PCI_DMA_FROMDEVICE);
+ spin_lock_irqsave(&rxq->lock, flags);
list_add_tail(&rxb->list, &priv->rxq.rx_used);
spin_unlock_irqrestore(&rxq->lock, flags);
i = (i + 1) & RX_QUEUE_MASK;
+ /* If there are a lot of unused frames,
-+ * restock the Rx queue so ucode won't assert. */
++ * restock the Rx queue so ucode wont assert. */
+ if (fill_rx) {
+ count++;
+ if (count >= 8) {
+ priv->rxq.read = i;
-+ __iwl3945_rx_replenish(priv);
++ __iwl4965_rx_replenish(priv);
+ count = 0;
+ }
+ }
@@ -389163,52 +470065,52 @@
/* Backtrack one entry */
priv->rxq.read = i;
- iwl_rx_queue_restock(priv);
-+ iwl3945_rx_queue_restock(priv);
++ iwl4965_rx_queue_restock(priv);
}
-int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
- struct iwl_tx_queue *txq)
+/**
-+ * iwl3945_tx_queue_update_write_ptr - Send new write index to hardware
++ * iwl4965_tx_queue_update_write_ptr - Send new write index to hardware
+ */
-+static int iwl3945_tx_queue_update_write_ptr(struct iwl3945_priv *priv,
-+ struct iwl3945_tx_queue *txq)
++static int iwl4965_tx_queue_update_write_ptr(struct iwl4965_priv *priv,
++ struct iwl4965_tx_queue *txq)
{
u32 reg = 0;
int rc = 0;
-@@ -4354,41 +4381,41 @@ int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
+@@ -4678,41 +4775,41 @@ int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
/* wake up nic if it's powered down ...
* uCode will wake up, and interrupt us again, so next
* time we'll skip this part. */
- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
-+ reg = iwl3945_read32(priv, CSR_UCODE_DRV_GP1);
++ reg = iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
IWL_DEBUG_INFO("Requesting wakeup, GP1 = 0x%x\n", reg);
- iwl_set_bit(priv, CSR_GP_CNTRL,
-+ iwl3945_set_bit(priv, CSR_GP_CNTRL,
++ iwl4965_set_bit(priv, CSR_GP_CNTRL,
CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
return rc;
}
/* restore this queue's parameters in nic hardware. */
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc)
return rc;
- iwl_write_restricted(priv, HBUS_TARG_WRPTR,
- txq->q.first_empty | (txq_id << 8));
- iwl_release_restricted_access(priv);
-+ iwl3945_write_direct32(priv, HBUS_TARG_WRPTR,
++ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR,
+ txq->q.write_ptr | (txq_id << 8));
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
/* else not in power-save mode, uCode will never sleep when we're
* trying to tx (during RFKILL, we're not trying to tx). */
} else
- iwl_write32(priv, HBUS_TARG_WRPTR,
- txq->q.first_empty | (txq_id << 8));
-+ iwl3945_write32(priv, HBUS_TARG_WRPTR,
++ iwl4965_write32(priv, HBUS_TARG_WRPTR,
+ txq->q.write_ptr | (txq_id << 8));
txq->need_update = 0;
@@ -389218,145 +470120,141 @@
-#ifdef CONFIG_IWLWIFI_DEBUG
-static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
-+#ifdef CONFIG_IWL3945_DEBUG
-+static void iwl3945_print_rx_config_cmd(struct iwl3945_rxon_cmd *rxon)
++#ifdef CONFIG_IWL4965_DEBUG
++static void iwl4965_print_rx_config_cmd(struct iwl4965_rxon_cmd *rxon)
{
DECLARE_MAC_BUF(mac);
IWL_DEBUG_RADIO("RX CONFIG:\n");
- iwl_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
-+ iwl3945_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
++ iwl4965_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
IWL_DEBUG_RADIO("u16 channel: 0x%x\n", le16_to_cpu(rxon->channel));
IWL_DEBUG_RADIO("u32 flags: 0x%08X\n", le32_to_cpu(rxon->flags));
IWL_DEBUG_RADIO("u32 filter_flags: 0x%08x\n",
-@@ -4405,24 +4432,24 @@ static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
+@@ -4729,24 +4826,24 @@ static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
}
#endif
-static void iwl_enable_interrupts(struct iwl_priv *priv)
-+static void iwl3945_enable_interrupts(struct iwl3945_priv *priv)
++static void iwl4965_enable_interrupts(struct iwl4965_priv *priv)
{
IWL_DEBUG_ISR("Enabling interrupts\n");
set_bit(STATUS_INT_ENABLED, &priv->status);
- iwl_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
-+ iwl3945_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
++ iwl4965_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
}
-static inline void iwl_disable_interrupts(struct iwl_priv *priv)
-+static inline void iwl3945_disable_interrupts(struct iwl3945_priv *priv)
++static inline void iwl4965_disable_interrupts(struct iwl4965_priv *priv)
{
clear_bit(STATUS_INT_ENABLED, &priv->status);
/* disable interrupts from uCode/NIC to host */
- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
-+ iwl3945_write32(priv, CSR_INT_MASK, 0x00000000);
++ iwl4965_write32(priv, CSR_INT_MASK, 0x00000000);
/* acknowledge/clear/reset any interrupts still pending
* from uCode or flow handler (Rx/Tx DMA) */
- iwl_write32(priv, CSR_INT, 0xffffffff);
- iwl_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
-+ iwl3945_write32(priv, CSR_INT, 0xffffffff);
-+ iwl3945_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
++ iwl4965_write32(priv, CSR_INT, 0xffffffff);
++ iwl4965_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
IWL_DEBUG_ISR("Disabled interrupts\n");
}
-@@ -4449,7 +4476,7 @@ static const char *desc_lookup(int i)
+@@ -4773,7 +4870,7 @@ static const char *desc_lookup(int i)
#define ERROR_START_OFFSET (1 * sizeof(u32))
#define ERROR_ELEM_SIZE (7 * sizeof(u32))
-static void iwl_dump_nic_error_log(struct iwl_priv *priv)
-+static void iwl3945_dump_nic_error_log(struct iwl3945_priv *priv)
++static void iwl4965_dump_nic_error_log(struct iwl4965_priv *priv)
{
- u32 i;
+ u32 data2, line;
u32 desc, time, count, base, data1;
-@@ -4458,18 +4485,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+@@ -4782,18 +4879,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
base = le32_to_cpu(priv->card_alive.error_event_table_ptr);
- if (!iwl_hw_valid_rtc_data_addr(base)) {
-+ if (!iwl3945_hw_valid_rtc_data_addr(base)) {
++ if (!iwl4965_hw_valid_rtc_data_addr(base)) {
IWL_ERROR("Not valid error log pointer 0x%08X\n", base);
return;
}
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc) {
IWL_WARNING("Can not read from adapter at this time.\n");
return;
}
- count = iwl_read_restricted_mem(priv, base);
-+ count = iwl3945_read_targ_mem(priv, base);
++ count = iwl4965_read_targ_mem(priv, base);
if (ERROR_START_OFFSET <= count * ERROR_ELEM_SIZE) {
IWL_ERROR("Start IWL Error Log Dump:\n");
-@@ -4482,19 +4509,19 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
- for (i = ERROR_START_OFFSET;
- i < (count * ERROR_ELEM_SIZE) + ERROR_START_OFFSET;
- i += ERROR_ELEM_SIZE) {
-- desc = iwl_read_restricted_mem(priv, base + i);
-+ desc = iwl3945_read_targ_mem(priv, base + i);
- time =
-- iwl_read_restricted_mem(priv, base + i + 1 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 1 * sizeof(u32));
- blink1 =
-- iwl_read_restricted_mem(priv, base + i + 2 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 2 * sizeof(u32));
- blink2 =
-- iwl_read_restricted_mem(priv, base + i + 3 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 3 * sizeof(u32));
- ilink1 =
-- iwl_read_restricted_mem(priv, base + i + 4 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 4 * sizeof(u32));
- ilink2 =
-- iwl_read_restricted_mem(priv, base + i + 5 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 5 * sizeof(u32));
- data1 =
-- iwl_read_restricted_mem(priv, base + i + 6 * sizeof(u32));
-+ iwl3945_read_targ_mem(priv, base + i + 6 * sizeof(u32));
-
- IWL_ERROR
- ("%-13s (#%d) %010u 0x%05X 0x%05X 0x%05X 0x%05X %u\n\n",
-@@ -4502,18 +4529,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
- ilink1, ilink2, data1);
+@@ -4801,15 +4898,15 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+ priv->status, priv->config, count);
}
-- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
+- desc = iwl_read_restricted_mem(priv, base + 1 * sizeof(u32));
+- blink1 = iwl_read_restricted_mem(priv, base + 3 * sizeof(u32));
+- blink2 = iwl_read_restricted_mem(priv, base + 4 * sizeof(u32));
+- ilink1 = iwl_read_restricted_mem(priv, base + 5 * sizeof(u32));
+- ilink2 = iwl_read_restricted_mem(priv, base + 6 * sizeof(u32));
+- data1 = iwl_read_restricted_mem(priv, base + 7 * sizeof(u32));
+- data2 = iwl_read_restricted_mem(priv, base + 8 * sizeof(u32));
+- line = iwl_read_restricted_mem(priv, base + 9 * sizeof(u32));
+- time = iwl_read_restricted_mem(priv, base + 11 * sizeof(u32));
++ desc = iwl4965_read_targ_mem(priv, base + 1 * sizeof(u32));
++ blink1 = iwl4965_read_targ_mem(priv, base + 3 * sizeof(u32));
++ blink2 = iwl4965_read_targ_mem(priv, base + 4 * sizeof(u32));
++ ilink1 = iwl4965_read_targ_mem(priv, base + 5 * sizeof(u32));
++ ilink2 = iwl4965_read_targ_mem(priv, base + 6 * sizeof(u32));
++ data1 = iwl4965_read_targ_mem(priv, base + 7 * sizeof(u32));
++ data2 = iwl4965_read_targ_mem(priv, base + 8 * sizeof(u32));
++ line = iwl4965_read_targ_mem(priv, base + 9 * sizeof(u32));
++ time = iwl4965_read_targ_mem(priv, base + 11 * sizeof(u32));
+
+ IWL_ERROR("Desc Time "
+ "data1 data2 line\n");
+@@ -4819,17 +4916,17 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+ IWL_ERROR("0x%05X 0x%05X 0x%05X 0x%05X\n", blink1, blink2,
+ ilink1, ilink2);
+- iwl_release_restricted_access(priv);
++ iwl4965_release_nic_access(priv);
}
--#define EVENT_START_OFFSET (4 * sizeof(u32))
-+#define EVENT_START_OFFSET (6 * sizeof(u32))
+ #define EVENT_START_OFFSET (4 * sizeof(u32))
/**
- * iwl_print_event_log - Dump error event log to syslog
-+ * iwl3945_print_event_log - Dump error event log to syslog
++ * iwl4965_print_event_log - Dump error event log to syslog
*
- * NOTE: Must be called with iwl_grab_restricted_access() already obtained!
-+ * NOTE: Must be called with iwl3945_grab_nic_access() already obtained!
++ * NOTE: Must be called with iwl4965_grab_nic_access() already obtained!
*/
-static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
-+static void iwl3945_print_event_log(struct iwl3945_priv *priv, u32 start_idx,
++static void iwl4965_print_event_log(struct iwl4965_priv *priv, u32 start_idx,
u32 num_events, u32 mode)
{
u32 i;
-@@ -4537,21 +4564,21 @@ static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
+@@ -4853,21 +4950,21 @@ static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
/* "time" is actually "data" for mode 0 (no timestamp).
* place event id # at far right for easier visual parsing. */
for (i = 0; i < num_events; i++) {
- ev = iwl_read_restricted_mem(priv, ptr);
-+ ev = iwl3945_read_targ_mem(priv, ptr);
++ ev = iwl4965_read_targ_mem(priv, ptr);
ptr += sizeof(u32);
- time = iwl_read_restricted_mem(priv, ptr);
-+ time = iwl3945_read_targ_mem(priv, ptr);
++ time = iwl4965_read_targ_mem(priv, ptr);
ptr += sizeof(u32);
if (mode == 0)
IWL_ERROR("0x%08x\t%04u\n", time, ev); /* data, ev */
else {
- data = iwl_read_restricted_mem(priv, ptr);
-+ data = iwl3945_read_targ_mem(priv, ptr);
++ data = iwl4965_read_targ_mem(priv, ptr);
ptr += sizeof(u32);
IWL_ERROR("%010u\t0x%08x\t%04u\n", time, data, ev);
}
@@ -389364,22 +470262,22 @@
}
-static void iwl_dump_nic_event_log(struct iwl_priv *priv)
-+static void iwl3945_dump_nic_event_log(struct iwl3945_priv *priv)
++static void iwl4965_dump_nic_event_log(struct iwl4965_priv *priv)
{
int rc;
u32 base; /* SRAM byte address of event log header */
-@@ -4562,29 +4589,29 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
+@@ -4878,29 +4975,29 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
u32 size; /* # entries that we'll print */
base = le32_to_cpu(priv->card_alive.log_event_table_ptr);
- if (!iwl_hw_valid_rtc_data_addr(base)) {
-+ if (!iwl3945_hw_valid_rtc_data_addr(base)) {
++ if (!iwl4965_hw_valid_rtc_data_addr(base)) {
IWL_ERROR("Invalid event log pointer 0x%08X\n", base);
return;
}
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc) {
IWL_WARNING("Can not read from adapter at this time.\n");
return;
@@ -389390,10 +470288,10 @@
- mode = iwl_read_restricted_mem(priv, base + (1 * sizeof(u32)));
- num_wraps = iwl_read_restricted_mem(priv, base + (2 * sizeof(u32)));
- next_entry = iwl_read_restricted_mem(priv, base + (3 * sizeof(u32)));
-+ capacity = iwl3945_read_targ_mem(priv, base);
-+ mode = iwl3945_read_targ_mem(priv, base + (1 * sizeof(u32)));
-+ num_wraps = iwl3945_read_targ_mem(priv, base + (2 * sizeof(u32)));
-+ next_entry = iwl3945_read_targ_mem(priv, base + (3 * sizeof(u32)));
++ capacity = iwl4965_read_targ_mem(priv, base);
++ mode = iwl4965_read_targ_mem(priv, base + (1 * sizeof(u32)));
++ num_wraps = iwl4965_read_targ_mem(priv, base + (2 * sizeof(u32)));
++ next_entry = iwl4965_read_targ_mem(priv, base + (3 * sizeof(u32)));
size = num_wraps ? capacity : next_entry;
@@ -389401,35 +470299,35 @@
if (size == 0) {
IWL_ERROR("Start IWL Event Log Dump: nothing in log\n");
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
return;
}
-@@ -4594,31 +4621,31 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
+@@ -4910,31 +5007,31 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
/* if uCode has wrapped back to top of log, start at the oldest entry,
* i.e the next one that uCode would fill. */
if (num_wraps)
- iwl_print_event_log(priv, next_entry,
-+ iwl3945_print_event_log(priv, next_entry,
++ iwl4965_print_event_log(priv, next_entry,
capacity - next_entry, mode);
/* (then/else) start at top of log */
- iwl_print_event_log(priv, 0, next_entry, mode);
-+ iwl3945_print_event_log(priv, 0, next_entry, mode);
++ iwl4965_print_event_log(priv, 0, next_entry, mode);
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
}
/**
- * iwl_irq_handle_error - called for HW or SW error interrupt from card
-+ * iwl3945_irq_handle_error - called for HW or SW error interrupt from card
++ * iwl4965_irq_handle_error - called for HW or SW error interrupt from card
*/
-static void iwl_irq_handle_error(struct iwl_priv *priv)
-+static void iwl3945_irq_handle_error(struct iwl3945_priv *priv)
++static void iwl4965_irq_handle_error(struct iwl4965_priv *priv)
{
- /* Set the FW error flag -- cleared on iwl_down */
-+ /* Set the FW error flag -- cleared on iwl3945_down */
++ /* Set the FW error flag -- cleared on iwl4965_down */
set_bit(STATUS_FW_ERROR, &priv->status);
/* Cancel currently queued command. */
@@ -389440,29 +470338,29 @@
- iwl_dump_nic_error_log(priv);
- iwl_dump_nic_event_log(priv);
- iwl_print_rx_config_cmd(&priv->staging_rxon);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ if (iwl3945_debug_level & IWL_DL_FW_ERRORS) {
-+ iwl3945_dump_nic_error_log(priv);
-+ iwl3945_dump_nic_event_log(priv);
-+ iwl3945_print_rx_config_cmd(&priv->staging_rxon);
++#ifdef CONFIG_IWL4965_DEBUG
++ if (iwl4965_debug_level & IWL_DL_FW_ERRORS) {
++ iwl4965_dump_nic_error_log(priv);
++ iwl4965_dump_nic_event_log(priv);
++ iwl4965_print_rx_config_cmd(&priv->staging_rxon);
}
#endif
-@@ -4632,7 +4659,7 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
+@@ -4948,7 +5045,7 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
IWL_DEBUG(IWL_DL_INFO | IWL_DL_FW_ERRORS,
"Restarting adapter due to uCode error.\n");
- if (iwl_is_associated(priv)) {
-+ if (iwl3945_is_associated(priv)) {
++ if (iwl4965_is_associated(priv)) {
memcpy(&priv->recovery_rxon, &priv->active_rxon,
sizeof(priv->recovery_rxon));
priv->error_recovering = 1;
-@@ -4641,16 +4668,16 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
+@@ -4957,16 +5054,16 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
}
}
-static void iwl_error_recovery(struct iwl_priv *priv)
-+static void iwl3945_error_recovery(struct iwl3945_priv *priv)
++static void iwl4965_error_recovery(struct iwl4965_priv *priv)
{
unsigned long flags;
@@ -389470,93 +470368,106 @@
sizeof(priv->staging_rxon));
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
-- iwl_add_station(priv, priv->bssid, 1, 0);
-+ iwl3945_add_station(priv, priv->bssid, 1, 0);
+- iwl_rxon_add_station(priv, priv->bssid, 1);
++ iwl4965_rxon_add_station(priv, priv->bssid, 1);
spin_lock_irqsave(&priv->lock, flags);
priv->assoc_id = le16_to_cpu(priv->staging_rxon.assoc_id);
-@@ -4658,12 +4685,12 @@ static void iwl_error_recovery(struct iwl_priv *priv)
+@@ -4974,12 +5071,12 @@ static void iwl_error_recovery(struct iwl_priv *priv)
spin_unlock_irqrestore(&priv->lock, flags);
}
-static void iwl_irq_tasklet(struct iwl_priv *priv)
-+static void iwl3945_irq_tasklet(struct iwl3945_priv *priv)
++static void iwl4965_irq_tasklet(struct iwl4965_priv *priv)
{
u32 inta, handled = 0;
u32 inta_fh;
unsigned long flags;
-#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL3945_DEBUG
++#ifdef CONFIG_IWL4965_DEBUG
u32 inta_mask;
#endif
-@@ -4672,18 +4699,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -4988,18 +5085,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
/* Ack/clear/reset pending uCode interrupts.
* Note: Some bits in CSR_INT are "OR" of bits in CSR_FH_INT_STATUS,
* and will clear only when CSR_FH_INT_STATUS gets cleared. */
- inta = iwl_read32(priv, CSR_INT);
- iwl_write32(priv, CSR_INT, inta);
-+ inta = iwl3945_read32(priv, CSR_INT);
-+ iwl3945_write32(priv, CSR_INT, inta);
++ inta = iwl4965_read32(priv, CSR_INT);
++ iwl4965_write32(priv, CSR_INT, inta);
/* Ack/clear/reset pending flow-handler (DMA) interrupts.
* Any new interrupts that happen after this, either while we're
* in this tasklet, or later, will show up in next ISR/tasklet. */
- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
- iwl_write32(priv, CSR_FH_INT_STATUS, inta_fh);
-+ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
-+ iwl3945_write32(priv, CSR_FH_INT_STATUS, inta_fh);
++ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
++ iwl4965_write32(priv, CSR_FH_INT_STATUS, inta_fh);
-#ifdef CONFIG_IWLWIFI_DEBUG
- if (iwl_debug_level & IWL_DL_ISR) {
- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
-+#ifdef CONFIG_IWL3945_DEBUG
-+ if (iwl3945_debug_level & IWL_DL_ISR) {
++#ifdef CONFIG_IWL4965_DEBUG
++ if (iwl4965_debug_level & IWL_DL_ISR) {
+ /* just for debug */
-+ inta_mask = iwl3945_read32(priv, CSR_INT_MASK);
++ inta_mask = iwl4965_read32(priv, CSR_INT_MASK);
IWL_DEBUG_ISR("inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
inta, inta_mask, inta_fh);
}
-@@ -4703,9 +4731,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5019,9 +5117,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
IWL_ERROR("Microcode HW error detected. Restarting.\n");
/* Tell the device to stop sending interrupts */
- iwl_disable_interrupts(priv);
-+ iwl3945_disable_interrupts(priv);
++ iwl4965_disable_interrupts(priv);
- iwl_irq_handle_error(priv);
-+ iwl3945_irq_handle_error(priv);
++ iwl4965_irq_handle_error(priv);
handled |= CSR_INT_BIT_HW_ERR;
-@@ -4714,8 +4742,8 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5030,8 +5128,8 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
return;
}
-#ifdef CONFIG_IWLWIFI_DEBUG
- if (iwl_debug_level & (IWL_DL_ISR)) {
-+#ifdef CONFIG_IWL3945_DEBUG
-+ if (iwl3945_debug_level & (IWL_DL_ISR)) {
++#ifdef CONFIG_IWL4965_DEBUG
++ if (iwl4965_debug_level & (IWL_DL_ISR)) {
/* NIC fires this, but we don't use it, redundant with WAKEUP */
if (inta & CSR_INT_BIT_MAC_CLK_ACTV)
IWL_DEBUG_ISR("Microcode started or stopped.\n");
-@@ -4731,7 +4759,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- /* HW RF KILL switch toggled (4965 only) */
+@@ -5044,10 +5142,10 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ /* Safely ignore these bits for debug checks below */
+ inta &= ~(CSR_INT_BIT_MAC_CLK_ACTV | CSR_INT_BIT_ALIVE);
+
+- /* HW RF KILL switch toggled (4965 only) */
++ /* HW RF KILL switch toggled */
if (inta & CSR_INT_BIT_RF_KILL) {
int hw_rf_kill = 0;
- if (!(iwl_read32(priv, CSR_GP_CNTRL) &
-+ if (!(iwl3945_read32(priv, CSR_GP_CNTRL) &
++ if (!(iwl4965_read32(priv, CSR_GP_CNTRL) &
CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
hw_rf_kill = 1;
-@@ -4761,20 +4789,20 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5067,7 +5165,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ handled |= CSR_INT_BIT_RF_KILL;
+ }
+
+- /* Chip got too hot and stopped itself (4965 only) */
++ /* Chip got too hot and stopped itself */
+ if (inta & CSR_INT_BIT_CT_KILL) {
+ IWL_ERROR("Microcode CT kill error detected.\n");
+ handled |= CSR_INT_BIT_CT_KILL;
+@@ -5077,20 +5175,20 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
if (inta & CSR_INT_BIT_SW_ERR) {
IWL_ERROR("Microcode SW error detected. Restarting 0x%X.\n",
inta);
- iwl_irq_handle_error(priv);
-+ iwl3945_irq_handle_error(priv);
++ iwl4965_irq_handle_error(priv);
handled |= CSR_INT_BIT_SW_ERR;
}
@@ -389570,105 +470481,89 @@
- iwl_tx_queue_update_write_ptr(priv, &priv->txq[3]);
- iwl_tx_queue_update_write_ptr(priv, &priv->txq[4]);
- iwl_tx_queue_update_write_ptr(priv, &priv->txq[5]);
-+ iwl3945_rx_queue_update_write_ptr(priv, &priv->rxq);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[0]);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[1]);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[2]);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[3]);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[4]);
-+ iwl3945_tx_queue_update_write_ptr(priv, &priv->txq[5]);
++ iwl4965_rx_queue_update_write_ptr(priv, &priv->rxq);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[0]);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[1]);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[2]);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[3]);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[4]);
++ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[5]);
handled |= CSR_INT_BIT_WAKEUP;
}
-@@ -4783,19 +4811,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5099,7 +5197,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
* Rx "responses" (frame-received notification), and other
* notifications from uCode come through here*/
if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) {
- iwl_rx_handle(priv);
-+ iwl3945_rx_handle(priv);
++ iwl4965_rx_handle(priv);
handled |= (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX);
}
- if (inta & CSR_INT_BIT_FH_TX) {
- IWL_DEBUG_ISR("Tx interrupt\n");
-
-- iwl_write32(priv, CSR_FH_INT_STATUS, (1 << 6));
-- if (!iwl_grab_restricted_access(priv)) {
-- iwl_write_restricted(priv,
-+ iwl3945_write32(priv, CSR_FH_INT_STATUS, (1 << 6));
-+ if (!iwl3945_grab_nic_access(priv)) {
-+ iwl3945_write_direct32(priv,
- FH_TCSR_CREDIT
- (ALM_FH_SRVC_CHNL), 0x0);
-- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
- }
- handled |= CSR_INT_BIT_FH_TX;
- }
-@@ -4810,13 +4838,13 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5118,13 +5216,13 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
}
/* Re-enable all interrupts */
- iwl_enable_interrupts(priv);
-+ iwl3945_enable_interrupts(priv);
++ iwl4965_enable_interrupts(priv);
-#ifdef CONFIG_IWLWIFI_DEBUG
- if (iwl_debug_level & (IWL_DL_ISR)) {
- inta = iwl_read32(priv, CSR_INT);
- inta_mask = iwl_read32(priv, CSR_INT_MASK);
- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ if (iwl3945_debug_level & (IWL_DL_ISR)) {
-+ inta = iwl3945_read32(priv, CSR_INT);
-+ inta_mask = iwl3945_read32(priv, CSR_INT_MASK);
-+ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
++#ifdef CONFIG_IWL4965_DEBUG
++ if (iwl4965_debug_level & (IWL_DL_ISR)) {
++ inta = iwl4965_read32(priv, CSR_INT);
++ inta_mask = iwl4965_read32(priv, CSR_INT_MASK);
++ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
IWL_DEBUG_ISR("End inta 0x%08x, enabled 0x%08x, fh 0x%08x, "
"flags 0x%08lx\n", inta, inta_mask, inta_fh, flags);
}
-@@ -4824,9 +4852,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+@@ -5132,9 +5230,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
spin_unlock_irqrestore(&priv->lock, flags);
}
-static irqreturn_t iwl_isr(int irq, void *data)
-+static irqreturn_t iwl3945_isr(int irq, void *data)
++static irqreturn_t iwl4965_isr(int irq, void *data)
{
- struct iwl_priv *priv = data;
-+ struct iwl3945_priv *priv = data;
++ struct iwl4965_priv *priv = data;
u32 inta, inta_mask;
u32 inta_fh;
if (!priv)
-@@ -4838,12 +4866,12 @@ static irqreturn_t iwl_isr(int irq, void *data)
+@@ -5146,12 +5244,12 @@ static irqreturn_t iwl_isr(int irq, void *data)
* back-to-back ISRs and sporadic interrupts from our NIC.
* If we have something to service, the tasklet will re-enable ints.
* If we *don't* have something, we'll re-enable before leaving here. */
- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
-+ inta_mask = iwl3945_read32(priv, CSR_INT_MASK); /* just for debug */
-+ iwl3945_write32(priv, CSR_INT_MASK, 0x00000000);
++ inta_mask = iwl4965_read32(priv, CSR_INT_MASK); /* just for debug */
++ iwl4965_write32(priv, CSR_INT_MASK, 0x00000000);
/* Discover which interrupts are active/pending */
- inta = iwl_read32(priv, CSR_INT);
- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
-+ inta = iwl3945_read32(priv, CSR_INT);
-+ inta_fh = iwl3945_read32(priv, CSR_FH_INT_STATUS);
++ inta = iwl4965_read32(priv, CSR_INT);
++ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
/* Ignore interrupt if there's nothing in NIC to service.
* This may be due to IRQ shared with another device,
-@@ -4862,7 +4890,7 @@ static irqreturn_t iwl_isr(int irq, void *data)
+@@ -5171,7 +5269,7 @@ static irqreturn_t iwl_isr(int irq, void *data)
IWL_DEBUG_ISR("ISR inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
inta, inta_mask, inta_fh);
- /* iwl_irq_tasklet() will service interrupts and re-enable them */
-+ /* iwl3945_irq_tasklet() will service interrupts and re-enable them */
++ /* iwl4965_irq_tasklet() will service interrupts and re-enable them */
tasklet_schedule(&priv->irq_tasklet);
- unplugged:
- spin_unlock(&priv->lock);
-@@ -4871,18 +4899,18 @@ unplugged:
+
+ unplugged:
+@@ -5180,18 +5278,18 @@ static irqreturn_t iwl_isr(int irq, void *data)
none:
/* re-enable interrupts here since we don't have anything to service. */
- iwl_enable_interrupts(priv);
-+ iwl3945_enable_interrupts(priv);
++ iwl4965_enable_interrupts(priv);
spin_unlock(&priv->lock);
return IRQ_NONE;
}
@@ -389676,129 +470571,156 @@
/************************** EEPROM BANDS ****************************
*
- * The iwl_eeprom_band definitions below provide the mapping from the
-+ * The iwl3945_eeprom_band definitions below provide the mapping from the
++ * The iwl4965_eeprom_band definitions below provide the mapping from the
* EEPROM contents to the specific channel number supported for each
* band.
*
- * For example, iwl_priv->eeprom.band_3_channels[4] from the band_3
-+ * For example, iwl3945_priv->eeprom.band_3_channels[4] from the band_3
++ * For example, iwl4965_priv->eeprom.band_3_channels[4] from the band_3
* definition below maps to physical channel 42 in the 5.2GHz spectrum.
* The specific geography and calibration information for that channel
* is contained in the eeprom map itself.
-@@ -4908,58 +4936,58 @@ unplugged:
+@@ -5217,76 +5315,77 @@ static irqreturn_t iwl_isr(int irq, void *data)
*********************************************************************/
/* 2.4 GHz */
-static const u8 iwl_eeprom_band_1[14] = {
-+static const u8 iwl3945_eeprom_band_1[14] = {
++static const u8 iwl4965_eeprom_band_1[14] = {
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
};
/* 5.2 GHz bands */
-static const u8 iwl_eeprom_band_2[] = {
-+static const u8 iwl3945_eeprom_band_2[] = { /* 4915-5080MHz */
++static const u8 iwl4965_eeprom_band_2[] = { /* 4915-5080MHz */
183, 184, 185, 187, 188, 189, 192, 196, 7, 8, 11, 12, 16
};
-static const u8 iwl_eeprom_band_3[] = { /* 5205-5320MHz */
-+static const u8 iwl3945_eeprom_band_3[] = { /* 5170-5320MHz */
++static const u8 iwl4965_eeprom_band_3[] = { /* 5170-5320MHz */
34, 36, 38, 40, 42, 44, 46, 48, 52, 56, 60, 64
};
-static const u8 iwl_eeprom_band_4[] = { /* 5500-5700MHz */
-+static const u8 iwl3945_eeprom_band_4[] = { /* 5500-5700MHz */
++static const u8 iwl4965_eeprom_band_4[] = { /* 5500-5700MHz */
100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140
};
-static const u8 iwl_eeprom_band_5[] = { /* 5725-5825MHz */
-+static const u8 iwl3945_eeprom_band_5[] = { /* 5725-5825MHz */
++static const u8 iwl4965_eeprom_band_5[] = { /* 5725-5825MHz */
145, 149, 153, 157, 161, 165
};
+-static u8 iwl_eeprom_band_6[] = { /* 2.4 FAT channel */
++static u8 iwl4965_eeprom_band_6[] = { /* 2.4 FAT channel */
+ 1, 2, 3, 4, 5, 6, 7
+ };
+
+-static u8 iwl_eeprom_band_7[] = { /* 5.2 FAT channel */
++static u8 iwl4965_eeprom_band_7[] = { /* 5.2 FAT channel */
+ 36, 44, 52, 60, 100, 108, 116, 124, 132, 149, 157
+ };
+
-static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
-+static void iwl3945_init_band_reference(const struct iwl3945_priv *priv, int band,
++static void iwl4965_init_band_reference(const struct iwl4965_priv *priv,
++ int band,
int *eeprom_ch_count,
- const struct iwl_eeprom_channel
-+ const struct iwl3945_eeprom_channel
++ const struct iwl4965_eeprom_channel
**eeprom_ch_info,
const u8 **eeprom_ch_index)
{
switch (band) {
case 1: /* 2.4GHz band */
- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_1);
-+ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_1);
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_1);
*eeprom_ch_info = priv->eeprom.band_1_channels;
- *eeprom_ch_index = iwl_eeprom_band_1;
-+ *eeprom_ch_index = iwl3945_eeprom_band_1;
++ *eeprom_ch_index = iwl4965_eeprom_band_1;
break;
- case 2: /* 5.2GHz band */
- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_2);
+ case 2: /* 4.9GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_2);
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_2);
*eeprom_ch_info = priv->eeprom.band_2_channels;
- *eeprom_ch_index = iwl_eeprom_band_2;
-+ *eeprom_ch_index = iwl3945_eeprom_band_2;
++ *eeprom_ch_index = iwl4965_eeprom_band_2;
break;
case 3: /* 5.2GHz band */
- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_3);
-+ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_3);
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_3);
*eeprom_ch_info = priv->eeprom.band_3_channels;
- *eeprom_ch_index = iwl_eeprom_band_3;
-+ *eeprom_ch_index = iwl3945_eeprom_band_3;
++ *eeprom_ch_index = iwl4965_eeprom_band_3;
break;
- case 4: /* 5.2GHz band */
- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_4);
+ case 4: /* 5.5GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_4);
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_4);
*eeprom_ch_info = priv->eeprom.band_4_channels;
- *eeprom_ch_index = iwl_eeprom_band_4;
-+ *eeprom_ch_index = iwl3945_eeprom_band_4;
++ *eeprom_ch_index = iwl4965_eeprom_band_4;
break;
- case 5: /* 5.2GHz band */
- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_5);
+ case 5: /* 5.7GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl3945_eeprom_band_5);
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_5);
*eeprom_ch_info = priv->eeprom.band_5_channels;
- *eeprom_ch_index = iwl_eeprom_band_5;
-+ *eeprom_ch_index = iwl3945_eeprom_band_5;
++ *eeprom_ch_index = iwl4965_eeprom_band_5;
+ break;
+- case 6:
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_6);
++ case 6: /* 2.4GHz FAT channels */
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_6);
+ *eeprom_ch_info = priv->eeprom.band_24_channels;
+- *eeprom_ch_index = iwl_eeprom_band_6;
++ *eeprom_ch_index = iwl4965_eeprom_band_6;
+ break;
+- case 7:
+- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_7);
++ case 7: /* 5 GHz FAT channels */
++ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_7);
+ *eeprom_ch_info = priv->eeprom.band_52_channels;
+- *eeprom_ch_index = iwl_eeprom_band_7;
++ *eeprom_ch_index = iwl4965_eeprom_band_7;
break;
default:
BUG();
-@@ -4967,7 +4995,12 @@ static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
+@@ -5294,7 +5393,12 @@ static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
}
}
-const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
+/**
-+ * iwl3945_get_channel_info - Find driver's private channel info
++ * iwl4965_get_channel_info - Find driver's private channel info
+ *
+ * Based on band and channel number.
+ */
-+const struct iwl3945_channel_info *iwl3945_get_channel_info(const struct iwl3945_priv *priv,
++const struct iwl4965_channel_info *iwl4965_get_channel_info(const struct iwl4965_priv *priv,
int phymode, u16 channel)
{
int i;
-@@ -4994,13 +5027,16 @@ const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
+@@ -5321,13 +5425,16 @@ const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
#define CHECK_AND_PRINT(x) ((eeprom_ch_info[ch].flags & EEPROM_CHANNEL_##x) \
? # x " " : "")
-static int iwl_init_channel_map(struct iwl_priv *priv)
+/**
-+ * iwl3945_init_channel_map - Set up driver's info for all possible channels
++ * iwl4965_init_channel_map - Set up driver's info for all possible channels
+ */
-+static int iwl3945_init_channel_map(struct iwl3945_priv *priv)
++static int iwl4965_init_channel_map(struct iwl4965_priv *priv)
{
int eeprom_ch_count = 0;
const u8 *eeprom_ch_index = NULL;
- const struct iwl_eeprom_channel *eeprom_ch_info = NULL;
-+ const struct iwl3945_eeprom_channel *eeprom_ch_info = NULL;
++ const struct iwl4965_eeprom_channel *eeprom_ch_info = NULL;
int band, ch;
- struct iwl_channel_info *ch_info;
-+ struct iwl3945_channel_info *ch_info;
++ struct iwl4965_channel_info *ch_info;
if (priv->channel_count) {
IWL_DEBUG_INFO("Channel map already initialized.\n");
-@@ -5016,15 +5052,15 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+@@ -5343,15 +5450,15 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
IWL_DEBUG_INFO("Initializing regulatory info from EEPROM\n");
priv->channel_count =
@@ -389807,83 +470729,108 @@
- ARRAY_SIZE(iwl_eeprom_band_3) +
- ARRAY_SIZE(iwl_eeprom_band_4) +
- ARRAY_SIZE(iwl_eeprom_band_5);
-+ ARRAY_SIZE(iwl3945_eeprom_band_1) +
-+ ARRAY_SIZE(iwl3945_eeprom_band_2) +
-+ ARRAY_SIZE(iwl3945_eeprom_band_3) +
-+ ARRAY_SIZE(iwl3945_eeprom_band_4) +
-+ ARRAY_SIZE(iwl3945_eeprom_band_5);
++ ARRAY_SIZE(iwl4965_eeprom_band_1) +
++ ARRAY_SIZE(iwl4965_eeprom_band_2) +
++ ARRAY_SIZE(iwl4965_eeprom_band_3) +
++ ARRAY_SIZE(iwl4965_eeprom_band_4) +
++ ARRAY_SIZE(iwl4965_eeprom_band_5);
IWL_DEBUG_INFO("Parsing data for %d channels.\n", priv->channel_count);
- priv->channel_info = kzalloc(sizeof(struct iwl_channel_info) *
-+ priv->channel_info = kzalloc(sizeof(struct iwl3945_channel_info) *
++ priv->channel_info = kzalloc(sizeof(struct iwl4965_channel_info) *
priv->channel_count, GFP_KERNEL);
if (!priv->channel_info) {
IWL_ERROR("Could not allocate channel_info\n");
-@@ -5039,7 +5075,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+@@ -5366,7 +5473,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
* what just in the EEPROM) */
for (band = 1; band <= 5; band++) {
- iwl_init_band_reference(priv, band, &eeprom_ch_count,
-+ iwl3945_init_band_reference(priv, band, &eeprom_ch_count,
++ iwl4965_init_band_reference(priv, band, &eeprom_ch_count,
&eeprom_ch_info, &eeprom_ch_index);
/* Loop through each band adding each of the channels */
-@@ -5103,6 +5139,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+@@ -5430,14 +5537,17 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
}
}
-+ /* Set up txpower settings in driver for all channels */
- if (iwl3945_txpower_set_from_eeprom(priv))
- return -EIO;
++ /* Two additional EEPROM bands for 2.4 and 5 GHz FAT channels */
+ for (band = 6; band <= 7; band++) {
+ int phymode;
+ u8 fat_extension_chan;
-@@ -5132,7 +5169,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+- iwl_init_band_reference(priv, band, &eeprom_ch_count,
++ iwl4965_init_band_reference(priv, band, &eeprom_ch_count,
+ &eeprom_ch_info, &eeprom_ch_index);
+
++ /* EEPROM band 6 is 2.4, band 7 is 5 GHz */
+ phymode = (band == 6) ? MODE_IEEE80211B : MODE_IEEE80211A;
++
+ /* Loop through each band adding each of the channels */
+ for (ch = 0; ch < eeprom_ch_count; ch++) {
+
+@@ -5449,11 +5559,13 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
+ else
+ fat_extension_chan = HT_IE_EXT_CHANNEL_ABOVE;
+
++ /* Set up driver's info for lower half */
+ iwl4965_set_fat_chan_info(priv, phymode,
+ eeprom_ch_index[ch],
+ &(eeprom_ch_info[ch]),
+ fat_extension_chan);
+
++ /* Set up driver's info for upper half */
+ iwl4965_set_fat_chan_info(priv, phymode,
+ (eeprom_ch_index[ch] + 4),
+ &(eeprom_ch_info[ch]),
+@@ -5487,7 +5599,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
#define IWL_PASSIVE_DWELL_BASE (100)
#define IWL_CHANNEL_TUNE_TIME 5
-static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
-+static inline u16 iwl3945_get_active_dwell_time(struct iwl3945_priv *priv, int phymode)
++static inline u16 iwl4965_get_active_dwell_time(struct iwl4965_priv *priv, int phymode)
{
if (phymode == MODE_IEEE80211A)
return IWL_ACTIVE_DWELL_TIME_52;
-@@ -5140,14 +5177,14 @@ static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
+@@ -5495,14 +5607,14 @@ static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
return IWL_ACTIVE_DWELL_TIME_24;
}
-static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
-+static u16 iwl3945_get_passive_dwell_time(struct iwl3945_priv *priv, int phymode)
++static u16 iwl4965_get_passive_dwell_time(struct iwl4965_priv *priv, int phymode)
{
- u16 active = iwl_get_active_dwell_time(priv, phymode);
-+ u16 active = iwl3945_get_active_dwell_time(priv, phymode);
++ u16 active = iwl4965_get_active_dwell_time(priv, phymode);
u16 passive = (phymode != MODE_IEEE80211A) ?
IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_24 :
IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_52;
- if (iwl_is_associated(priv)) {
-+ if (iwl3945_is_associated(priv)) {
++ if (iwl4965_is_associated(priv)) {
/* If we're associated, we clamp the maximum passive
* dwell time to be 98% of the beacon interval (minus
* 2 * channel tune time) */
-@@ -5163,30 +5200,30 @@ static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
+@@ -5518,30 +5630,30 @@ static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
return passive;
}
-static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
-+static int iwl3945_get_channels_for_scan(struct iwl3945_priv *priv, int phymode,
++static int iwl4965_get_channels_for_scan(struct iwl4965_priv *priv, int phymode,
u8 is_active, u8 direct_mask,
- struct iwl_scan_channel *scan_ch)
-+ struct iwl3945_scan_channel *scan_ch)
++ struct iwl4965_scan_channel *scan_ch)
{
const struct ieee80211_channel *channels = NULL;
const struct ieee80211_hw_mode *hw_mode;
- const struct iwl_channel_info *ch_info;
-+ const struct iwl3945_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
u16 passive_dwell = 0;
u16 active_dwell = 0;
int added, i;
- hw_mode = iwl_get_hw_mode(priv, phymode);
-+ hw_mode = iwl3945_get_hw_mode(priv, phymode);
++ hw_mode = iwl4965_get_hw_mode(priv, phymode);
if (!hw_mode)
return 0;
@@ -389891,27 +470838,28 @@
- active_dwell = iwl_get_active_dwell_time(priv, phymode);
- passive_dwell = iwl_get_passive_dwell_time(priv, phymode);
-+ active_dwell = iwl3945_get_active_dwell_time(priv, phymode);
-+ passive_dwell = iwl3945_get_passive_dwell_time(priv, phymode);
++ active_dwell = iwl4965_get_active_dwell_time(priv, phymode);
++ passive_dwell = iwl4965_get_passive_dwell_time(priv, phymode);
for (i = 0, added = 0; i < hw_mode->num_channels; i++) {
if (channels[i].chan ==
le16_to_cpu(priv->active_rxon.channel)) {
- if (iwl_is_associated(priv)) {
-+ if (iwl3945_is_associated(priv)) {
++ if (iwl4965_is_associated(priv)) {
IWL_DEBUG_SCAN
("Skipping current channel %d\n",
le16_to_cpu(priv->active_rxon.channel));
-@@ -5197,7 +5234,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+@@ -5552,7 +5664,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
scan_ch->channel = channels[i].chan;
- ch_info = iwl_get_channel_info(priv, phymode, scan_ch->channel);
-+ ch_info = iwl3945_get_channel_info(priv, phymode, scan_ch->channel);
++ ch_info = iwl4965_get_channel_info(priv, phymode,
++ scan_ch->channel);
if (!is_channel_valid(ch_info)) {
IWL_DEBUG_SCAN("Channel %d is INVALID for this SKU.\n",
scan_ch->channel);
-@@ -5219,7 +5256,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+@@ -5574,7 +5687,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
scan_ch->active_dwell = cpu_to_le16(active_dwell);
scan_ch->passive_dwell = cpu_to_le16(passive_dwell);
@@ -389920,7 +470868,7 @@
scan_ch->tpc.dsp_atten = 110;
/* scan_pwr_info->tpc.dsp_atten; */
-@@ -5229,8 +5266,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+@@ -5584,8 +5697,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
else {
scan_ch->tpc.tx_gain = ((1 << 5) | (5 << 3));
/* NOTE: if we were doing 6Mb OFDM for scans we'd use
@@ -389931,79 +470879,134 @@
*/
}
-@@ -5248,7 +5285,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+@@ -5603,7 +5716,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
return added;
}
-static void iwl_reset_channel_flag(struct iwl_priv *priv)
-+static void iwl3945_reset_channel_flag(struct iwl3945_priv *priv)
++static void iwl4965_reset_channel_flag(struct iwl4965_priv *priv)
{
int i, j;
for (i = 0; i < 3; i++) {
-@@ -5258,13 +5295,13 @@ static void iwl_reset_channel_flag(struct iwl_priv *priv)
+@@ -5613,13 +5726,13 @@ static void iwl_reset_channel_flag(struct iwl_priv *priv)
}
}
-static void iwl_init_hw_rates(struct iwl_priv *priv,
-+static void iwl3945_init_hw_rates(struct iwl3945_priv *priv,
++static void iwl4965_init_hw_rates(struct iwl4965_priv *priv,
struct ieee80211_rate *rates)
{
int i;
for (i = 0; i < IWL_RATE_COUNT; i++) {
- rates[i].rate = iwl_rates[i].ieee * 5;
-+ rates[i].rate = iwl3945_rates[i].ieee * 5;
++ rates[i].rate = iwl4965_rates[i].ieee * 5;
rates[i].val = i; /* Rate scaling will work on indexes */
rates[i].val2 = i;
rates[i].flags = IEEE80211_RATE_SUPPORTED;
-@@ -5276,7 +5313,7 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
+@@ -5631,7 +5744,7 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
* If CCK 1M then set rate flag to CCK else CCK_2
* which is CCK | PREAMBLE2
*/
- rates[i].flags |= (iwl_rates[i].plcp == 10) ?
-+ rates[i].flags |= (iwl3945_rates[i].plcp == 10) ?
++ rates[i].flags |= (iwl4965_rates[i].plcp == 10) ?
IEEE80211_RATE_CCK : IEEE80211_RATE_CCK_2;
}
-@@ -5287,11 +5324,11 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
+@@ -5639,16 +5752,14 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
+ if (IWL_BASIC_RATES_MASK & (1 << i))
+ rates[i].flags |= IEEE80211_RATE_BASIC;
+ }
+-
+- iwl4965_init_hw_rates(priv, rates);
}
/**
- * iwl_init_geos - Initialize mac80211's geo/channel info based from eeprom
-+ * iwl3945_init_geos - Initialize mac80211's geo/channel info based from eeprom
++ * iwl4965_init_geos - Initialize mac80211's geo/channel info based from eeprom
*/
-static int iwl_init_geos(struct iwl_priv *priv)
-+static int iwl3945_init_geos(struct iwl3945_priv *priv)
++static int iwl4965_init_geos(struct iwl4965_priv *priv)
{
- struct iwl_channel_info *ch;
-+ struct iwl3945_channel_info *ch;
++ struct iwl4965_channel_info *ch;
struct ieee80211_hw_mode *modes;
struct ieee80211_channel *channels;
struct ieee80211_channel *geo_ch;
-@@ -5337,7 +5374,7 @@ static int iwl_init_geos(struct iwl_priv *priv)
+@@ -5658,10 +5769,8 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ A = 0,
+ B = 1,
+ G = 2,
+- A_11N = 3,
+- G_11N = 4,
+ };
+- int mode_count = 5;
++ int mode_count = 3;
+
+ if (priv->modes) {
+ IWL_DEBUG_INFO("Geography modes already initialized.\n");
+@@ -5696,11 +5805,14 @@ static int iwl_init_geos(struct iwl_priv *priv)
/* 5.2GHz channels start after the 2.4GHz channels */
modes[A].mode = MODE_IEEE80211A;
- modes[A].channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)];
-+ modes[A].channels = &channels[ARRAY_SIZE(iwl3945_eeprom_band_1)];
- modes[A].rates = &rates[4];
++ modes[A].channels = &channels[ARRAY_SIZE(iwl4965_eeprom_band_1)];
+ modes[A].rates = rates;
modes[A].num_rates = 8; /* just OFDM */
+ modes[A].rates = &rates[4];
modes[A].num_channels = 0;
-@@ -5357,7 +5394,7 @@ static int iwl_init_geos(struct iwl_priv *priv)
++#ifdef CONFIG_IWL4965_HT
++ iwl4965_init_ht_hw_capab(&modes[A].ht_info, MODE_IEEE80211A);
++#endif
+
+ modes[B].mode = MODE_IEEE80211B;
+ modes[B].channels = channels;
+@@ -5713,23 +5825,14 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ modes[G].rates = rates;
+ modes[G].num_rates = 12; /* OFDM & CCK */
+ modes[G].num_channels = 0;
+-
+- modes[G_11N].mode = MODE_IEEE80211G;
+- modes[G_11N].channels = channels;
+- modes[G_11N].num_rates = 13; /* OFDM & CCK */
+- modes[G_11N].rates = rates;
+- modes[G_11N].num_channels = 0;
+-
+- modes[A_11N].mode = MODE_IEEE80211A;
+- modes[A_11N].channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)];
+- modes[A_11N].rates = &rates[4];
+- modes[A_11N].num_rates = 9; /* just OFDM */
+- modes[A_11N].num_channels = 0;
++#ifdef CONFIG_IWL4965_HT
++ iwl4965_init_ht_hw_capab(&modes[G].ht_info, MODE_IEEE80211G);
++#endif
+
priv->ieee_channels = channels;
priv->ieee_rates = rates;
- iwl_init_hw_rates(priv, rates);
-+ iwl3945_init_hw_rates(priv, rates);
++ iwl4965_init_hw_rates(priv, rates);
for (i = 0, geo_ch = channels; i < priv->channel_count; i++) {
ch = &priv->channel_info[i];
-@@ -5440,57 +5477,21 @@ static int iwl_init_geos(struct iwl_priv *priv)
+@@ -5744,11 +5847,9 @@ static int iwl_init_geos(struct iwl_priv *priv)
+
+ if (is_channel_a_band(ch)) {
+ geo_ch = &modes[A].channels[modes[A].num_channels++];
+- modes[A_11N].num_channels++;
+ } else {
+ geo_ch = &modes[B].channels[modes[B].num_channels++];
+ modes[G].num_channels++;
+- modes[G_11N].num_channels++;
+ }
+
+ geo_ch->freq = ieee80211chan2mhz(ch->channel);
+@@ -5814,57 +5915,22 @@ static int iwl_init_geos(struct iwl_priv *priv)
*
******************************************************************************/
-static void iwl_dealloc_ucode_pci(struct iwl_priv *priv)
-+static void iwl3945_dealloc_ucode_pci(struct iwl3945_priv *priv)
++static void iwl4965_dealloc_ucode_pci(struct iwl4965_priv *priv)
{
- if (priv->ucode_code.v_addr != NULL) {
- pci_free_consistent(priv->pci_dev,
@@ -390057,25 +471060,26 @@
/**
- * iwl_verify_inst_full - verify runtime uCode image in card vs. host,
-+ * iwl3945_verify_inst_full - verify runtime uCode image in card vs. host,
++ * iwl4965_verify_inst_full - verify runtime uCode image in card vs. host,
* looking at all data.
*/
-static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
-+static int iwl3945_verify_inst_full(struct iwl3945_priv *priv, __le32 * image, u32 len)
++static int iwl4965_verify_inst_full(struct iwl4965_priv *priv, __le32 *image,
++ u32 len)
{
u32 val;
u32 save_len = len;
-@@ -5499,18 +5500,18 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
+@@ -5873,18 +5939,18 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc)
return rc;
- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
-+ iwl3945_write_direct32(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
++ iwl4965_write_direct32(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
errcnt = 0;
for (; len > 0; len -= sizeof(u32), image++) {
@@ -390083,64 +471087,60 @@
/* NOTE: Use the debugless read so we don't flood kernel log
* if IWL_DL_IO is set */
- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
-+ val = _iwl3945_read_direct32(priv, HBUS_TARG_MEM_RDAT);
++ val = _iwl4965_read_direct32(priv, HBUS_TARG_MEM_RDAT);
if (val != le32_to_cpu(*image)) {
IWL_ERROR("uCode INST section is invalid at "
"offset 0x%x, is 0x%x, s/b 0x%x\n",
-@@ -5522,22 +5523,21 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
+@@ -5896,7 +5962,7 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
}
}
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
if (!errcnt)
-- IWL_DEBUG_INFO
-- ("ucode image in INSTRUCTION memory is good\n");
-+ IWL_DEBUG_INFO("ucode image in INSTRUCTION memory is good\n");
-
- return rc;
- }
+ IWL_DEBUG_INFO
+@@ -5907,11 +5973,11 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
/**
- * iwl_verify_inst_sparse - verify runtime uCode image in card vs. host,
-+ * iwl3945_verify_inst_sparse - verify runtime uCode image in card vs. host,
++ * iwl4965_verify_inst_sparse - verify runtime uCode image in card vs. host,
* using sample data 100 bytes apart. If these sample points are good,
* it's a pretty good bet that everything between them is good, too.
*/
-static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
-+static int iwl3945_verify_inst_sparse(struct iwl3945_priv *priv, __le32 *image, u32 len)
++static int iwl4965_verify_inst_sparse(struct iwl4965_priv *priv, __le32 *image, u32 len)
{
u32 val;
int rc = 0;
-@@ -5546,7 +5546,7 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+@@ -5920,7 +5986,7 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc)
return rc;
-@@ -5554,9 +5554,9 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+@@ -5928,9 +5994,9 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
/* read data comes through single port, auto-incr addr */
/* NOTE: Use the debugless read so we don't flood kernel log
* if IWL_DL_IO is set */
- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR,
-+ iwl3945_write_direct32(priv, HBUS_TARG_MEM_RADDR,
++ iwl4965_write_direct32(priv, HBUS_TARG_MEM_RADDR,
i + RTC_INST_LOWER_BOUND);
- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
-+ val = _iwl3945_read_direct32(priv, HBUS_TARG_MEM_RDAT);
++ val = _iwl4965_read_direct32(priv, HBUS_TARG_MEM_RDAT);
if (val != le32_to_cpu(*image)) {
#if 0 /* Enable this if you want to see details */
IWL_ERROR("uCode INST section is invalid at "
-@@ -5570,17 +5570,17 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+@@ -5944,17 +6010,17 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
}
}
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
return rc;
}
@@ -390148,42 +471148,42 @@
/**
- * iwl_verify_ucode - determine which instruction image is in SRAM,
-+ * iwl3945_verify_ucode - determine which instruction image is in SRAM,
++ * iwl4965_verify_ucode - determine which instruction image is in SRAM,
* and verify its contents
*/
-static int iwl_verify_ucode(struct iwl_priv *priv)
-+static int iwl3945_verify_ucode(struct iwl3945_priv *priv)
++static int iwl4965_verify_ucode(struct iwl4965_priv *priv)
{
__le32 *image;
u32 len;
-@@ -5589,7 +5589,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+@@ -5963,7 +6029,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
/* Try bootstrap */
image = (__le32 *)priv->ucode_boot.v_addr;
len = priv->ucode_boot.len;
- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl3945_verify_inst_sparse(priv, image, len);
++ rc = iwl4965_verify_inst_sparse(priv, image, len);
if (rc == 0) {
IWL_DEBUG_INFO("Bootstrap uCode is good in inst SRAM\n");
return 0;
-@@ -5598,7 +5598,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+@@ -5972,7 +6038,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
/* Try initialize */
image = (__le32 *)priv->ucode_init.v_addr;
len = priv->ucode_init.len;
- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl3945_verify_inst_sparse(priv, image, len);
++ rc = iwl4965_verify_inst_sparse(priv, image, len);
if (rc == 0) {
IWL_DEBUG_INFO("Initialize uCode is good in inst SRAM\n");
return 0;
-@@ -5607,7 +5607,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+@@ -5981,7 +6047,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
/* Try runtime/protocol */
image = (__le32 *)priv->ucode_code.v_addr;
len = priv->ucode_code.len;
- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl3945_verify_inst_sparse(priv, image, len);
++ rc = iwl4965_verify_inst_sparse(priv, image, len);
if (rc == 0) {
IWL_DEBUG_INFO("Runtime uCode is good in inst SRAM\n");
return 0;
-@@ -5615,18 +5615,19 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+@@ -5989,18 +6055,19 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
IWL_ERROR("NO VALID UCODE IMAGE IN INSTRUCTION SRAM!!\n");
@@ -390195,7 +471195,7 @@
image = (__le32 *)priv->ucode_boot.v_addr;
len = priv->ucode_boot.len;
- rc = iwl_verify_inst_full(priv, image, len);
-+ rc = iwl3945_verify_inst_full(priv, image, len);
++ rc = iwl4965_verify_inst_full(priv, image, len);
return rc;
}
@@ -390203,59 +471203,59 @@
/* check contents of special bootstrap uCode SRAM */
-static int iwl_verify_bsm(struct iwl_priv *priv)
-+static int iwl3945_verify_bsm(struct iwl3945_priv *priv)
++static int iwl4965_verify_bsm(struct iwl4965_priv *priv)
{
__le32 *image = priv->ucode_boot.v_addr;
u32 len = priv->ucode_boot.len;
-@@ -5636,11 +5637,11 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+@@ -6010,11 +6077,11 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
IWL_DEBUG_INFO("Begin verify bsm\n");
/* verify BSM SRAM contents */
- val = iwl_read_restricted_reg(priv, BSM_WR_DWCOUNT_REG);
-+ val = iwl3945_read_prph(priv, BSM_WR_DWCOUNT_REG);
++ val = iwl4965_read_prph(priv, BSM_WR_DWCOUNT_REG);
for (reg = BSM_SRAM_LOWER_BOUND;
reg < BSM_SRAM_LOWER_BOUND + len;
reg += sizeof(u32), image ++) {
- val = iwl_read_restricted_reg(priv, reg);
-+ val = iwl3945_read_prph(priv, reg);
++ val = iwl4965_read_prph(priv, reg);
if (val != le32_to_cpu(*image)) {
IWL_ERROR("BSM uCode verification failed at "
"addr 0x%08X+%u (of %u), is 0x%x, s/b 0x%x\n",
-@@ -5657,7 +5658,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+@@ -6031,7 +6098,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
}
/**
- * iwl_load_bsm - Load bootstrap instructions
-+ * iwl3945_load_bsm - Load bootstrap instructions
++ * iwl4965_load_bsm - Load bootstrap instructions
*
* BSM operation:
*
-@@ -5688,7 +5689,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
+@@ -6062,7 +6129,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
* the runtime uCode instructions and the backup data cache into SRAM,
* and re-launches the runtime uCode from where it left off.
*/
-static int iwl_load_bsm(struct iwl_priv *priv)
-+static int iwl3945_load_bsm(struct iwl3945_priv *priv)
++static int iwl4965_load_bsm(struct iwl4965_priv *priv)
{
__le32 *image = priv->ucode_boot.v_addr;
u32 len = priv->ucode_boot.len;
-@@ -5708,8 +5709,8 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+@@ -6082,8 +6149,8 @@ static int iwl_load_bsm(struct iwl_priv *priv)
return -EINVAL;
/* Tell bootstrap uCode where to find the "Initialize" uCode
- * in host DRAM ... bits 31:0 for 3945, bits 35:4 for 4965.
- * NOTE: iwl_initialize_alive_start() will replace these values,
-+ * in host DRAM ... host DRAM physical address bits 31:0 for 3945.
-+ * NOTE: iwl3945_initialize_alive_start() will replace these values,
++ * in host DRAM ... host DRAM physical address bits 35:4 for 4965.
++ * NOTE: iwl4965_initialize_alive_start() will replace these values,
* after the "initialize" uCode has run, to point to
* runtime/protocol instructions and backup data cache. */
- pinst = priv->ucode_init.p_addr;
-@@ -5717,42 +5718,42 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+ pinst = priv->ucode_init.p_addr >> 4;
+@@ -6091,42 +6158,42 @@ static int iwl_load_bsm(struct iwl_priv *priv)
inst_len = priv->ucode_init.len;
data_len = priv->ucode_init_data.len;
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc)
return rc;
@@ -390263,88 +471263,89 @@
- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
-+ iwl3945_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
-+ iwl3945_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-+ iwl3945_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
-+ iwl3945_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
++ iwl4965_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
++ iwl4965_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
++ iwl4965_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
++ iwl4965_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
/* Fill BSM memory with bootstrap instructions */
for (reg_offset = BSM_SRAM_LOWER_BOUND;
reg_offset < BSM_SRAM_LOWER_BOUND + len;
reg_offset += sizeof(u32), image++)
- _iwl_write_restricted_reg(priv, reg_offset,
-+ _iwl3945_write_prph(priv, reg_offset,
++ _iwl4965_write_prph(priv, reg_offset,
le32_to_cpu(*image));
- rc = iwl_verify_bsm(priv);
-+ rc = iwl3945_verify_bsm(priv);
++ rc = iwl4965_verify_bsm(priv);
if (rc) {
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
return rc;
}
/* Tell BSM to copy from BSM SRAM into instruction SRAM, when asked */
- iwl_write_restricted_reg(priv, BSM_WR_MEM_SRC_REG, 0x0);
- iwl_write_restricted_reg(priv, BSM_WR_MEM_DST_REG,
-+ iwl3945_write_prph(priv, BSM_WR_MEM_SRC_REG, 0x0);
-+ iwl3945_write_prph(priv, BSM_WR_MEM_DST_REG,
++ iwl4965_write_prph(priv, BSM_WR_MEM_SRC_REG, 0x0);
++ iwl4965_write_prph(priv, BSM_WR_MEM_DST_REG,
RTC_INST_LOWER_BOUND);
- iwl_write_restricted_reg(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
-+ iwl3945_write_prph(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
++ iwl4965_write_prph(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
/* Load bootstrap code into instruction SRAM now,
* to prepare to load "initialize" uCode */
- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
-+ iwl3945_write_prph(priv, BSM_WR_CTRL_REG,
++ iwl4965_write_prph(priv, BSM_WR_CTRL_REG,
BSM_WR_CTRL_REG_BIT_START);
/* Wait for load of bootstrap uCode to finish */
for (i = 0; i < 100; i++) {
- done = iwl_read_restricted_reg(priv, BSM_WR_CTRL_REG);
-+ done = iwl3945_read_prph(priv, BSM_WR_CTRL_REG);
++ done = iwl4965_read_prph(priv, BSM_WR_CTRL_REG);
if (!(done & BSM_WR_CTRL_REG_BIT_START))
break;
udelay(10);
-@@ -5766,29 +5767,29 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+@@ -6140,29 +6207,30 @@ static int iwl_load_bsm(struct iwl_priv *priv)
/* Enable future boot loads whenever power management unit triggers it
* (e.g. when powering back up after power-save shutdown) */
- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
-+ iwl3945_write_prph(priv, BSM_WR_CTRL_REG,
++ iwl4965_write_prph(priv, BSM_WR_CTRL_REG,
BSM_WR_CTRL_REG_BIT_START_EN);
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
return 0;
}
-static void iwl_nic_start(struct iwl_priv *priv)
-+static void iwl3945_nic_start(struct iwl3945_priv *priv)
++static void iwl4965_nic_start(struct iwl4965_priv *priv)
{
/* Remove all resets to allow NIC to operate */
- iwl_write32(priv, CSR_RESET, 0);
-+ iwl3945_write32(priv, CSR_RESET, 0);
++ iwl4965_write32(priv, CSR_RESET, 0);
}
++
/**
- * iwl_read_ucode - Read uCode images from disk file.
-+ * iwl3945_read_ucode - Read uCode images from disk file.
++ * iwl4965_read_ucode - Read uCode images from disk file.
*
* Copy into buffers for card to fetch via bus-mastering
*/
-static int iwl_read_ucode(struct iwl_priv *priv)
-+static int iwl3945_read_ucode(struct iwl3945_priv *priv)
++static int iwl4965_read_ucode(struct iwl4965_priv *priv)
{
- struct iwl_ucode *ucode;
- int rc = 0;
-+ struct iwl3945_ucode *ucode;
-+ int ret = 0;
++ struct iwl4965_ucode *ucode;
++ int ret;
const struct firmware *ucode_raw;
- /* firmware file name contains uCode/driver compatibility version */
- const char *name = "iwlwifi-3945" IWL3945_UCODE_API ".ucode";
-@@ -5798,9 +5799,10 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ const char *name = "iwlwifi-4965" IWL4965_UCODE_API ".ucode";
+ u8 *src;
+@@ -6171,9 +6239,10 @@ static int iwl_read_ucode(struct iwl_priv *priv)
/* Ask kernel firmware_class module to get the boot firmware off disk.
* request_firmware() is synchronous, file is in memory on return. */
@@ -390354,11 +471355,11 @@
+ ret = request_firmware(&ucode_raw, name, &priv->pci_dev->dev);
+ if (ret < 0) {
+ IWL_ERROR("%s firmware file req failed: Reason %d\n",
-+ name, ret);
++ name, ret);
goto error;
}
-@@ -5810,7 +5812,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6183,7 +6252,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
/* Make sure that we got at least our header! */
if (ucode_raw->size < sizeof(*ucode)) {
IWL_ERROR("File size way too small!\n");
@@ -390367,29 +471368,7 @@
goto err_release;
}
-@@ -5825,16 +5827,11 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- boot_size = le32_to_cpu(ucode->boot_size);
-
- IWL_DEBUG_INFO("f/w package hdr ucode version = 0x%x\n", ver);
-- IWL_DEBUG_INFO("f/w package hdr runtime inst size = %u\n",
-- inst_size);
-- IWL_DEBUG_INFO("f/w package hdr runtime data size = %u\n",
-- data_size);
-- IWL_DEBUG_INFO("f/w package hdr init inst size = %u\n",
-- init_size);
-- IWL_DEBUG_INFO("f/w package hdr init data size = %u\n",
-- init_data_size);
-- IWL_DEBUG_INFO("f/w package hdr boot inst size = %u\n",
-- boot_size);
-+ IWL_DEBUG_INFO("f/w package hdr runtime inst size = %u\n", inst_size);
-+ IWL_DEBUG_INFO("f/w package hdr runtime data size = %u\n", data_size);
-+ IWL_DEBUG_INFO("f/w package hdr init inst size = %u\n", init_size);
-+ IWL_DEBUG_INFO("f/w package hdr init data size = %u\n", init_data_size);
-+ IWL_DEBUG_INFO("f/w package hdr boot inst size = %u\n", boot_size);
-
- /* Verify size of file vs. image size info in file's header */
- if (ucode_raw->size < sizeof(*ucode) +
-@@ -5843,43 +5840,40 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6216,43 +6285,43 @@ static int iwl_read_ucode(struct iwl_priv *priv)
IWL_DEBUG_INFO("uCode file size %d too small\n",
(int)ucode_raw->size);
@@ -390414,42 +471393,42 @@
- (int)data_size);
- rc = -EINVAL;
+ IWL_DEBUG_INFO("uCode data len %d too large to fit in\n",
-+ data_size);
++ data_size);
+ ret = -EINVAL;
goto err_release;
}
if (init_size > IWL_MAX_INST_SIZE) {
-- IWL_DEBUG_INFO
+ IWL_DEBUG_INFO
- ("uCode init instr len %d too large to fit in card\n",
- (int)init_size);
- rc = -EINVAL;
-+ IWL_DEBUG_INFO("uCode init instr len %d too large to fit in\n",
-+ init_size);
++ ("uCode init instr len %d too large to fit in\n",
++ init_size);
+ ret = -EINVAL;
goto err_release;
}
if (init_data_size > IWL_MAX_DATA_SIZE) {
-- IWL_DEBUG_INFO
+ IWL_DEBUG_INFO
- ("uCode init data len %d too large to fit in card\n",
- (int)init_data_size);
- rc = -EINVAL;
-+ IWL_DEBUG_INFO("uCode init data len %d too large to fit in\n",
-+ init_data_size);
++ ("uCode init data len %d too large to fit in\n",
++ init_data_size);
+ ret = -EINVAL;
goto err_release;
}
if (boot_size > IWL_MAX_BSM_SIZE) {
-- IWL_DEBUG_INFO
+ IWL_DEBUG_INFO
- ("uCode boot instr len %d too large to fit in bsm\n",
- (int)boot_size);
- rc = -EINVAL;
-+ IWL_DEBUG_INFO("uCode boot instr len %d too large to fit in\n",
-+ boot_size);
++ ("uCode boot instr len %d too large to fit in\n",
++ boot_size);
+ ret = -EINVAL;
goto err_release;
}
-@@ -5889,66 +5883,54 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6262,66 +6331,50 @@ static int iwl_read_ucode(struct iwl_priv *priv)
* 1) unmodified from disk
* 2) backup cache for save/restore during power-downs */
priv->ucode_code.len = inst_size;
@@ -390471,12 +471450,9 @@
- pci_alloc_consistent(priv->pci_dev,
- priv->ucode_data_backup.len,
- &(priv->ucode_data_backup.p_addr));
+-
+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_data_backup);
-+ if (!priv->ucode_code.v_addr || !priv->ucode_data.v_addr ||
-+ !priv->ucode_data_backup.v_addr)
-+ goto err_pci_alloc;
-
/* Initialization instructions and data */
- priv->ucode_init.len = init_size;
- priv->ucode_init.v_addr =
@@ -390532,7 +471508,7 @@
/* Runtime data (2nd block)
- * NOTE: Copy into backup buffer will be done in iwl_up() */
-+ * NOTE: Copy into backup buffer will be done in iwl3945_up() */
++ * NOTE: Copy into backup buffer will be done in iwl4965_up() */
src = &ucode->data[inst_size];
len = priv->ucode_data.len;
- IWL_DEBUG_INFO("Copying (but not loading) uCode data len %d\n",
@@ -390541,25 +471517,45 @@
memcpy(priv->ucode_data.v_addr, src, len);
memcpy(priv->ucode_data_backup.v_addr, src, len);
-@@ -5956,8 +5938,8 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6329,8 +6382,8 @@ static int iwl_read_ucode(struct iwl_priv *priv)
if (init_size) {
src = &ucode->data[inst_size + data_size];
len = priv->ucode_init.len;
- IWL_DEBUG_INFO("Copying (but not loading) init instr len %d\n",
- (int)len);
+ IWL_DEBUG_INFO("Copying (but not loading) init instr len %Zd\n",
-+ len);
++ len);
memcpy(priv->ucode_init.v_addr, src, len);
}
-@@ -5983,19 +5965,19 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6338,16 +6391,15 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ if (init_data_size) {
+ src = &ucode->data[inst_size + data_size + init_size];
+ len = priv->ucode_init_data.len;
+- IWL_DEBUG_INFO("Copying (but not loading) init data len %d\n",
+- (int)len);
++ IWL_DEBUG_INFO("Copying (but not loading) init data len %Zd\n",
++ len);
+ memcpy(priv->ucode_init_data.v_addr, src, len);
+ }
+
+ /* Bootstrap instructions (5th block) */
+ src = &ucode->data[inst_size + data_size + init_size + init_data_size];
+ len = priv->ucode_boot.len;
+- IWL_DEBUG_INFO("Copying (but not loading) boot instr len %d\n",
+- (int)len);
++ IWL_DEBUG_INFO("Copying (but not loading) boot instr len %Zd\n", len);
+ memcpy(priv->ucode_boot.v_addr, src, len);
+
+ /* We have our copies now, allow OS release its copies */
+@@ -6356,19 +6408,19 @@ static int iwl_read_ucode(struct iwl_priv *priv)
err_pci_alloc:
IWL_ERROR("failed to allocate pci memory\n");
- rc = -ENOMEM;
- iwl_dealloc_ucode_pci(priv);
+ ret = -ENOMEM;
-+ iwl3945_dealloc_ucode_pci(priv);
++ iwl4965_dealloc_ucode_pci(priv);
err_release:
release_firmware(ucode_raw);
@@ -390572,25 +471568,25 @@
/**
- * iwl_set_ucode_ptrs - Set uCode address location
-+ * iwl3945_set_ucode_ptrs - Set uCode address location
++ * iwl4965_set_ucode_ptrs - Set uCode address location
*
* Tell initialization uCode where to find runtime uCode.
*
-@@ -6003,7 +5985,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+@@ -6376,7 +6428,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
* We need to replace them to load runtime uCode inst and data,
* and to save runtime data when powering down.
*/
-static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
-+static int iwl3945_set_ucode_ptrs(struct iwl3945_priv *priv)
++static int iwl4965_set_ucode_ptrs(struct iwl4965_priv *priv)
{
dma_addr_t pinst;
dma_addr_t pdata;
-@@ -6015,24 +5997,24 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
- pdata = priv->ucode_data_backup.p_addr;
+@@ -6388,24 +6440,24 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+ pdata = priv->ucode_data_backup.p_addr >> 4;
spin_lock_irqsave(&priv->lock, flags);
- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
++ rc = iwl4965_grab_nic_access(priv);
if (rc) {
spin_unlock_irqrestore(&priv->lock, flags);
return rc;
@@ -390600,82 +471596,79 @@
- iwl_write_restricted_reg(priv, BSM_DRAM_INST_PTR_REG, pinst);
- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
-+ iwl3945_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
-+ iwl3945_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-+ iwl3945_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
++ iwl4965_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
++ iwl4965_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
++ iwl4965_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
priv->ucode_data.len);
/* Inst bytecount must be last to set up, bit 31 signals uCode
* that all new ptr/size info is in place */
- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG,
-+ iwl3945_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG,
++ iwl4965_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG,
priv->ucode_code.len | BSM_DRAM_INST_LOAD);
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
spin_unlock_irqrestore(&priv->lock, flags);
-@@ -6042,17 +6024,13 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+@@ -6415,7 +6467,7 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
}
/**
- * iwl_init_alive_start - Called after REPLY_ALIVE notification receieved
-+ * iwl3945_init_alive_start - Called after REPLY_ALIVE notification received
++ * iwl4965_init_alive_start - Called after REPLY_ALIVE notification received
*
* Called after REPLY_ALIVE notification received from "initialize" uCode.
*
-- * The 4965 "initialize" ALIVE reply contains calibration data for:
-- * Voltage, temperature, and MIMO tx gain correction, now stored in priv
-- * (3945 does not contain this data).
-- *
+@@ -6425,7 +6477,7 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+ *
* Tell "initialize" uCode to go ahead and load the runtime uCode.
--*/
+ */
-static void iwl_init_alive_start(struct iwl_priv *priv)
-+ */
-+static void iwl3945_init_alive_start(struct iwl3945_priv *priv)
++static void iwl4965_init_alive_start(struct iwl4965_priv *priv)
{
/* Check alive response for "valid" sign from uCode */
if (priv->card_alive_init.is_valid != UCODE_VALID_OK) {
-@@ -6065,7 +6043,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+@@ -6438,7 +6490,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
/* Bootstrap uCode has loaded initialize uCode ... verify inst image.
* This is a paranoid check, because we would not have gotten the
* "initialize" alive if code weren't properly loaded. */
- if (iwl_verify_ucode(priv)) {
-+ if (iwl3945_verify_ucode(priv)) {
++ if (iwl4965_verify_ucode(priv)) {
/* Runtime instruction load was bad;
* take it all the way back down so we can try again */
IWL_DEBUG_INFO("Bad \"initialize\" uCode load.\n");
-@@ -6076,7 +6054,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+@@ -6452,7 +6504,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
* load and launch runtime uCode, which will send us another "Alive"
* notification. */
IWL_DEBUG_INFO("Initialization Alive received.\n");
- if (iwl_set_ucode_ptrs(priv)) {
-+ if (iwl3945_set_ucode_ptrs(priv)) {
++ if (iwl4965_set_ucode_ptrs(priv)) {
/* Runtime instruction load won't happen;
* take it all the way back down so we can try again */
IWL_DEBUG_INFO("Couldn't set up uCode pointers.\n");
-@@ -6090,11 +6068,11 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+@@ -6466,11 +6518,11 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
/**
- * iwl_alive_start - called after REPLY_ALIVE notification received
-+ * iwl3945_alive_start - called after REPLY_ALIVE notification received
++ * iwl4965_alive_start - called after REPLY_ALIVE notification received
* from protocol/runtime uCode (initialization uCode's
- * Alive gets handled by iwl_init_alive_start()).
-+ * Alive gets handled by iwl3945_init_alive_start()).
++ * Alive gets handled by iwl4965_init_alive_start()).
*/
-static void iwl_alive_start(struct iwl_priv *priv)
-+static void iwl3945_alive_start(struct iwl3945_priv *priv)
++static void iwl4965_alive_start(struct iwl4965_priv *priv)
{
int rc = 0;
- int thermal_spin = 0;
-@@ -6112,30 +6090,30 @@ static void iwl_alive_start(struct iwl_priv *priv)
+
+@@ -6486,14 +6538,14 @@ static void iwl_alive_start(struct iwl_priv *priv)
/* Initialize uCode has loaded Runtime uCode ... verify inst image.
* This is a paranoid check, because we would not have gotten the
* "runtime" alive if code weren't properly loaded. */
- if (iwl_verify_ucode(priv)) {
-+ if (iwl3945_verify_ucode(priv)) {
++ if (iwl4965_verify_ucode(priv)) {
/* Runtime instruction load was bad;
* take it all the way back down so we can try again */
IWL_DEBUG_INFO("Bad runtime uCode load.\n");
@@ -390683,54 +471676,34 @@
}
- iwl_clear_stations_table(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl3945_grab_nic_access(priv);
+ rc = iwl4965_alive_notify(priv);
if (rc) {
- IWL_WARNING("Can not read rfkill status from adapter\n");
- return;
+@@ -6502,78 +6554,61 @@ static void iwl_alive_start(struct iwl_priv *priv)
+ goto restart;
}
-- rfkill = iwl_read_restricted_reg(priv, APMG_RFKILL_REG);
-+ rfkill = iwl3945_read_prph(priv, APMG_RFKILL_REG);
- IWL_DEBUG_INFO("RFKILL status: 0x%x\n", rfkill);
-- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
-
- if (rfkill & 0x1) {
- clear_bit(STATUS_RF_KILL_HW, &priv->status);
- /* if rfkill is not on, then wait for thermal
- * sensor in adapter to kick in */
-- while (iwl_hw_get_temperature(priv) == 0) {
-+ while (iwl3945_hw_get_temperature(priv) == 0) {
- thermal_spin++;
- udelay(10);
- }
-@@ -6146,68 +6124,49 @@ static void iwl_alive_start(struct iwl_priv *priv)
- } else
- set_bit(STATUS_RF_KILL_HW, &priv->status);
-
- /* After the ALIVE response, we can process host commands */
-+ /* After the ALIVE response, we can send commands to 3945 uCode */
++ /* After the ALIVE response, we can send host commands to 4965 uCode */
set_bit(STATUS_ALIVE, &priv->status);
/* Clear out the uCode error bit if it is set */
clear_bit(STATUS_FW_ERROR, &priv->status);
- rc = iwl_init_channel_map(priv);
-+ rc = iwl3945_init_channel_map(priv);
++ rc = iwl4965_init_channel_map(priv);
if (rc) {
IWL_ERROR("initializing regulatory failed: %d\n", rc);
return;
}
- iwl_init_geos(priv);
-+ iwl3945_init_geos(priv);
-+ iwl3945_reset_channel_flag(priv);
++ iwl4965_init_geos(priv);
++ iwl4965_reset_channel_flag(priv);
- if (iwl_is_rfkill(priv))
-+ if (iwl3945_is_rfkill(priv))
++ if (iwl4965_is_rfkill(priv))
return;
- if (!priv->mac80211_registered) {
@@ -390760,14 +471733,14 @@
priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
- iwl_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
-+ iwl3945_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
++ iwl4965_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
- if (iwl_is_associated(priv)) {
- struct iwl_rxon_cmd *active_rxon =
- (struct iwl_rxon_cmd *)(&priv->active_rxon);
-+ if (iwl3945_is_associated(priv)) {
-+ struct iwl3945_rxon_cmd *active_rxon =
-+ (struct iwl3945_rxon_cmd *)(&priv->active_rxon);
++ if (iwl4965_is_associated(priv)) {
++ struct iwl4965_rxon_cmd *active_rxon =
++ (struct iwl4965_rxon_cmd *)(&priv->active_rxon);
memcpy(&priv->staging_rxon, &priv->active_rxon,
sizeof(priv->staging_rxon));
@@ -390775,97 +471748,99 @@
} else {
/* Initialize our rx_config data */
- iwl_connection_init_rx_config(priv);
-+ iwl3945_connection_init_rx_config(priv);
++ iwl4965_connection_init_rx_config(priv);
memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
}
- /* Configure BT coexistence */
- iwl_send_bt_config(priv);
+ /* Configure Bluetooth device coexistence support */
-+ iwl3945_send_bt_config(priv);
++ iwl4965_send_bt_config(priv);
/* Configure the adapter for unassociated operation */
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
/* At this point, the NIC is initialized and operational */
priv->notif_missed_beacons = 0;
-@@ -6216,9 +6175,10 @@ static void iwl_alive_start(struct iwl_priv *priv)
- iwl3945_reg_txpower_periodic(priv);
+ set_bit(STATUS_READY, &priv->status);
+ iwl4965_rf_kill_ct_config(priv);
++
IWL_DEBUG_INFO("ALIVE processing complete.\n");
+ wake_up_interruptible(&priv->wait_command_queue);
if (priv->error_recovering)
- iwl_error_recovery(priv);
-+ iwl3945_error_recovery(priv);
++ iwl4965_error_recovery(priv);
return;
-@@ -6226,9 +6186,9 @@ static void iwl_alive_start(struct iwl_priv *priv)
+@@ -6581,9 +6616,9 @@ static void iwl_alive_start(struct iwl_priv *priv)
queue_work(priv->workqueue, &priv->restart);
}
-static void iwl_cancel_deferred_work(struct iwl_priv *priv);
-+static void iwl3945_cancel_deferred_work(struct iwl3945_priv *priv);
++static void iwl4965_cancel_deferred_work(struct iwl4965_priv *priv);
-static void __iwl_down(struct iwl_priv *priv)
-+static void __iwl3945_down(struct iwl3945_priv *priv)
++static void __iwl4965_down(struct iwl4965_priv *priv)
{
unsigned long flags;
int exit_pending = test_bit(STATUS_EXIT_PENDING, &priv->status);
-@@ -6241,7 +6201,7 @@ static void __iwl_down(struct iwl_priv *priv)
+@@ -6596,7 +6631,7 @@ static void __iwl_down(struct iwl_priv *priv)
if (!exit_pending)
set_bit(STATUS_EXIT_PENDING, &priv->status);
- iwl_clear_stations_table(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
/* Unblock any waiting calls */
wake_up_interruptible_all(&priv->wait_command_queue);
-@@ -6252,17 +6212,17 @@ static void __iwl_down(struct iwl_priv *priv)
+@@ -6607,17 +6642,17 @@ static void __iwl_down(struct iwl_priv *priv)
clear_bit(STATUS_EXIT_PENDING, &priv->status);
/* stop and reset the on-board processor */
- iwl_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
-+ iwl3945_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
++ iwl4965_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
/* tell the device to stop sending interrupts */
- iwl_disable_interrupts(priv);
-+ iwl3945_disable_interrupts(priv);
++ iwl4965_disable_interrupts(priv);
if (priv->mac80211_registered)
ieee80211_stop_queues(priv->hw);
- /* If we have not previously called iwl_init() then
-+ /* If we have not previously called iwl3945_init() then
++ /* If we have not previously called iwl4965_init() then
* clear all bits but the RF Kill and SUSPEND bits and return */
- if (!iwl_is_init(priv)) {
-+ if (!iwl3945_is_init(priv)) {
++ if (!iwl4965_is_init(priv)) {
priv->status = test_bit(STATUS_RF_KILL_HW, &priv->status) <<
STATUS_RF_KILL_HW |
test_bit(STATUS_RF_KILL_SW, &priv->status) <<
-@@ -6284,51 +6244,50 @@ static void __iwl_down(struct iwl_priv *priv)
+@@ -6639,53 +6674,52 @@ static void __iwl_down(struct iwl_priv *priv)
STATUS_FW_ERROR;
spin_lock_irqsave(&priv->lock, flags);
- iwl_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-+ iwl3945_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
++ iwl4965_clear_bit(priv, CSR_GP_CNTRL,
++ CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
spin_unlock_irqrestore(&priv->lock, flags);
- iwl_hw_txq_ctx_stop(priv);
- iwl_hw_rxq_stop(priv);
-+ iwl3945_hw_txq_ctx_stop(priv);
-+ iwl3945_hw_rxq_stop(priv);
++ iwl4965_hw_txq_ctx_stop(priv);
++ iwl4965_hw_rxq_stop(priv);
spin_lock_irqsave(&priv->lock, flags);
- if (!iwl_grab_restricted_access(priv)) {
- iwl_write_restricted_reg(priv, APMG_CLK_DIS_REG,
-+ if (!iwl3945_grab_nic_access(priv)) {
-+ iwl3945_write_prph(priv, APMG_CLK_DIS_REG,
++ if (!iwl4965_grab_nic_access(priv)) {
++ iwl4965_write_prph(priv, APMG_CLK_DIS_REG,
APMG_CLK_VAL_DMA_CLK_RQT);
- iwl_release_restricted_access(priv);
-+ iwl3945_release_nic_access(priv);
++ iwl4965_release_nic_access(priv);
}
spin_unlock_irqrestore(&priv->lock, flags);
@@ -390874,13 +471849,13 @@
- iwl_hw_nic_stop_master(priv);
- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
- iwl_hw_nic_reset(priv);
-+ iwl3945_hw_nic_stop_master(priv);
-+ iwl3945_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
-+ iwl3945_hw_nic_reset(priv);
++ iwl4965_hw_nic_stop_master(priv);
++ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
++ iwl4965_hw_nic_reset(priv);
exit:
- memset(&priv->card_alive, 0, sizeof(struct iwl_alive_resp));
-+ memset(&priv->card_alive, 0, sizeof(struct iwl3945_alive_resp));
++ memset(&priv->card_alive, 0, sizeof(struct iwl4965_alive_resp));
if (priv->ibss_beacon)
dev_kfree_skb(priv->ibss_beacon);
@@ -390888,31 +471863,33 @@
/* clear out any free frames */
- iwl_clear_free_frames(priv);
-+ iwl3945_clear_free_frames(priv);
++ iwl4965_clear_free_frames(priv);
}
-static void iwl_down(struct iwl_priv *priv)
-+static void iwl3945_down(struct iwl3945_priv *priv)
++static void iwl4965_down(struct iwl4965_priv *priv)
{
mutex_lock(&priv->mutex);
- __iwl_down(priv);
-+ __iwl3945_down(priv);
++ __iwl4965_down(priv);
mutex_unlock(&priv->mutex);
- iwl_cancel_deferred_work(priv);
-+ iwl3945_cancel_deferred_work(priv);
++ iwl4965_cancel_deferred_work(priv);
}
#define MAX_HW_RESTARTS 5
-static int __iwl_up(struct iwl_priv *priv)
-+static int __iwl3945_up(struct iwl3945_priv *priv)
++static int __iwl4965_up(struct iwl4965_priv *priv)
{
- DECLARE_MAC_BUF(mac);
int rc, i;
+- u32 hw_rf_kill = 0;
if (test_bit(STATUS_EXIT_PENDING, &priv->status)) {
-@@ -6339,7 +6298,19 @@ static int __iwl_up(struct iwl_priv *priv)
+ IWL_WARNING("Exit pending; will not bring the NIC up\n");
+@@ -6695,7 +6729,19 @@ static int __iwl_up(struct iwl_priv *priv)
if (test_bit(STATUS_RF_KILL_SW, &priv->status)) {
IWL_WARNING("Radio disabled by SW RF kill (module "
"parameter)\n");
@@ -390921,7 +471898,7 @@
+ }
+
+ /* If platform's RF_KILL switch is NOT set to KILL */
-+ if (iwl3945_read32(priv, CSR_GP_CNTRL) &
++ if (iwl4965_read32(priv, CSR_GP_CNTRL) &
+ CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW)
+ clear_bit(STATUS_RF_KILL_HW, &priv->status);
+ else {
@@ -390933,15 +471910,15 @@
}
if (!priv->ucode_data_backup.v_addr || !priv->ucode_data.v_addr) {
-@@ -6347,41 +6318,45 @@ static int __iwl_up(struct iwl_priv *priv)
+@@ -6703,53 +6749,45 @@ static int __iwl_up(struct iwl_priv *priv)
return -EIO;
}
- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
-+ iwl3945_write32(priv, CSR_INT, 0xFFFFFFFF);
++ iwl4965_write32(priv, CSR_INT, 0xFFFFFFFF);
- rc = iwl_hw_nic_init(priv);
-+ rc = iwl3945_hw_nic_init(priv);
++ rc = iwl4965_hw_nic_init(priv);
if (rc) {
IWL_ERROR("Unable to int nic\n");
return rc;
@@ -390950,21 +471927,21 @@
/* make sure rfkill handshake bits are cleared */
- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR,
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
/* clear (again), then enable host interrupts */
- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
- iwl_enable_interrupts(priv);
-+ iwl3945_write32(priv, CSR_INT, 0xFFFFFFFF);
-+ iwl3945_enable_interrupts(priv);
++ iwl4965_write32(priv, CSR_INT, 0xFFFFFFFF);
++ iwl4965_enable_interrupts(priv);
/* really make sure rfkill handshake bits are cleared */
- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl3945_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
++ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
/* Copy original ucode data image from disk into backup cache.
* This will be used to initialize the on-board processor's
@@ -390972,25 +471949,35 @@
memcpy(priv->ucode_data_backup.v_addr, priv->ucode_data.v_addr,
- priv->ucode_data.len);
+ priv->ucode_data.len);
-+
+
+- /* If platform's RF_KILL switch is set to KILL,
+- * wait for BIT_INT_RF_KILL interrupt before loading uCode
+- * and getting things started */
+- if (!(iwl_read32(priv, CSR_GP_CNTRL) &
+- CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
+- hw_rf_kill = 1;
+-
+- if (test_bit(STATUS_RF_KILL_HW, &priv->status) || hw_rf_kill) {
+- IWL_WARNING("Radio disabled by HW RF Kill switch\n");
+ /* We return success when we resume from suspend and rf_kill is on. */
+ if (test_bit(STATUS_RF_KILL_HW, &priv->status))
-+ return 0;
+ return 0;
+- }
for (i = 0; i < MAX_HW_RESTARTS; i++) {
- iwl_clear_stations_table(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
/* load bootstrap state machine,
* load bootstrap program into processor's memory,
* prepare to load the "initialize" uCode */
- rc = iwl_load_bsm(priv);
-+ rc = iwl3945_load_bsm(priv);
++ rc = iwl4965_load_bsm(priv);
if (rc) {
IWL_ERROR("Unable to set up bootstrap uCode: %d\n", rc);
-@@ -6389,14 +6364,7 @@ static int __iwl_up(struct iwl_priv *priv)
+@@ -6757,14 +6795,7 @@ static int __iwl_up(struct iwl_priv *priv)
}
/* start card; "initialize" will load runtime ucode */
@@ -391002,166 +471989,166 @@
- print_mac(mac, priv->mac_addr));
-
- SET_IEEE80211_PERM_ADDR(priv->hw, priv->mac_addr);
-+ iwl3945_nic_start(priv);
++ iwl4965_nic_start(priv);
IWL_DEBUG_INFO(DRV_NAME " is coming up\n");
-@@ -6404,7 +6372,7 @@ static int __iwl_up(struct iwl_priv *priv)
+@@ -6772,7 +6803,7 @@ static int __iwl_up(struct iwl_priv *priv)
}
set_bit(STATUS_EXIT_PENDING, &priv->status);
- __iwl_down(priv);
-+ __iwl3945_down(priv);
++ __iwl4965_down(priv);
/* tried to restart and config the device for as long as our
* patience could withstand */
-@@ -6419,35 +6387,35 @@ static int __iwl_up(struct iwl_priv *priv)
+@@ -6787,35 +6818,35 @@ static int __iwl_up(struct iwl_priv *priv)
*
*****************************************************************************/
-static void iwl_bg_init_alive_start(struct work_struct *data)
-+static void iwl3945_bg_init_alive_start(struct work_struct *data)
++static void iwl4965_bg_init_alive_start(struct work_struct *data)
{
- struct iwl_priv *priv =
- container_of(data, struct iwl_priv, init_alive_start.work);
-+ struct iwl3945_priv *priv =
-+ container_of(data, struct iwl3945_priv, init_alive_start.work);
++ struct iwl4965_priv *priv =
++ container_of(data, struct iwl4965_priv, init_alive_start.work);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
mutex_lock(&priv->mutex);
- iwl_init_alive_start(priv);
-+ iwl3945_init_alive_start(priv);
++ iwl4965_init_alive_start(priv);
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_alive_start(struct work_struct *data)
-+static void iwl3945_bg_alive_start(struct work_struct *data)
++static void iwl4965_bg_alive_start(struct work_struct *data)
{
- struct iwl_priv *priv =
- container_of(data, struct iwl_priv, alive_start.work);
-+ struct iwl3945_priv *priv =
-+ container_of(data, struct iwl3945_priv, alive_start.work);
++ struct iwl4965_priv *priv =
++ container_of(data, struct iwl4965_priv, alive_start.work);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
mutex_lock(&priv->mutex);
- iwl_alive_start(priv);
-+ iwl3945_alive_start(priv);
++ iwl4965_alive_start(priv);
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_rf_kill(struct work_struct *work)
-+static void iwl3945_bg_rf_kill(struct work_struct *work)
++static void iwl4965_bg_rf_kill(struct work_struct *work)
{
- struct iwl_priv *priv = container_of(work, struct iwl_priv, rf_kill);
-+ struct iwl3945_priv *priv = container_of(work, struct iwl3945_priv, rf_kill);
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv, rf_kill);
wake_up_interruptible(&priv->wait_command_queue);
-@@ -6456,7 +6424,7 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+@@ -6824,7 +6855,7 @@ static void iwl_bg_rf_kill(struct work_struct *work)
mutex_lock(&priv->mutex);
- if (!iwl_is_rfkill(priv)) {
-+ if (!iwl3945_is_rfkill(priv)) {
++ if (!iwl4965_is_rfkill(priv)) {
IWL_DEBUG(IWL_DL_INFO | IWL_DL_RF_KILL,
"HW and/or SW RF Kill no longer active, restarting "
"device\n");
-@@ -6477,10 +6445,10 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+@@ -6845,10 +6876,10 @@ static void iwl_bg_rf_kill(struct work_struct *work)
#define IWL_SCAN_CHECK_WATCHDOG (7 * HZ)
-static void iwl_bg_scan_check(struct work_struct *data)
-+static void iwl3945_bg_scan_check(struct work_struct *data)
++static void iwl4965_bg_scan_check(struct work_struct *data)
{
- struct iwl_priv *priv =
- container_of(data, struct iwl_priv, scan_check.work);
-+ struct iwl3945_priv *priv =
-+ container_of(data, struct iwl3945_priv, scan_check.work);
++ struct iwl4965_priv *priv =
++ container_of(data, struct iwl4965_priv, scan_check.work);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
-@@ -6493,22 +6461,22 @@ static void iwl_bg_scan_check(struct work_struct *data)
+@@ -6861,22 +6892,22 @@ static void iwl_bg_scan_check(struct work_struct *data)
jiffies_to_msecs(IWL_SCAN_CHECK_WATCHDOG));
if (!test_bit(STATUS_EXIT_PENDING, &priv->status))
- iwl_send_scan_abort(priv);
-+ iwl3945_send_scan_abort(priv);
++ iwl4965_send_scan_abort(priv);
}
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_request_scan(struct work_struct *data)
-+static void iwl3945_bg_request_scan(struct work_struct *data)
++static void iwl4965_bg_request_scan(struct work_struct *data)
{
- struct iwl_priv *priv =
- container_of(data, struct iwl_priv, request_scan);
- struct iwl_host_cmd cmd = {
-+ struct iwl3945_priv *priv =
-+ container_of(data, struct iwl3945_priv, request_scan);
-+ struct iwl3945_host_cmd cmd = {
++ struct iwl4965_priv *priv =
++ container_of(data, struct iwl4965_priv, request_scan);
++ struct iwl4965_host_cmd cmd = {
.id = REPLY_SCAN_CMD,
- .len = sizeof(struct iwl_scan_cmd),
-+ .len = sizeof(struct iwl3945_scan_cmd),
++ .len = sizeof(struct iwl4965_scan_cmd),
.meta.flags = CMD_SIZE_HUGE,
};
int rc = 0;
- struct iwl_scan_cmd *scan;
-+ struct iwl3945_scan_cmd *scan;
++ struct iwl4965_scan_cmd *scan;
struct ieee80211_conf *conf = NULL;
u8 direct_mask;
int phymode;
-@@ -6517,7 +6485,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6885,7 +6916,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
mutex_lock(&priv->mutex);
- if (!iwl_is_ready(priv)) {
-+ if (!iwl3945_is_ready(priv)) {
++ if (!iwl4965_is_ready(priv)) {
IWL_WARNING("request scan called when driver not ready.\n");
goto done;
}
-@@ -6546,7 +6514,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6914,7 +6945,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
goto done;
}
- if (iwl_is_rfkill(priv)) {
-+ if (iwl3945_is_rfkill(priv)) {
++ if (iwl4965_is_rfkill(priv)) {
IWL_DEBUG_HC("Aborting scan due to RF Kill activation\n");
goto done;
}
-@@ -6562,7 +6530,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6930,7 +6961,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
}
if (!priv->scan) {
- priv->scan = kmalloc(sizeof(struct iwl_scan_cmd) +
-+ priv->scan = kmalloc(sizeof(struct iwl3945_scan_cmd) +
++ priv->scan = kmalloc(sizeof(struct iwl4965_scan_cmd) +
IWL_MAX_SCAN_SIZE, GFP_KERNEL);
if (!priv->scan) {
rc = -ENOMEM;
-@@ -6570,12 +6538,12 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6938,12 +6969,12 @@ static void iwl_bg_request_scan(struct work_struct *data)
}
}
scan = priv->scan;
- memset(scan, 0, sizeof(struct iwl_scan_cmd) + IWL_MAX_SCAN_SIZE);
-+ memset(scan, 0, sizeof(struct iwl3945_scan_cmd) + IWL_MAX_SCAN_SIZE);
++ memset(scan, 0, sizeof(struct iwl4965_scan_cmd) + IWL_MAX_SCAN_SIZE);
scan->quiet_plcp_th = IWL_PLCP_QUIET_THRESH;
scan->quiet_time = IWL_ACTIVE_QUIET_TIME;
- if (iwl_is_associated(priv)) {
-+ if (iwl3945_is_associated(priv)) {
++ if (iwl4965_is_associated(priv)) {
u16 interval = 0;
u32 extra;
u32 suspend_time = 100;
-@@ -6612,14 +6580,14 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6973,14 +7004,14 @@ static void iwl_bg_request_scan(struct work_struct *data)
if (priv->one_direct_scan) {
IWL_DEBUG_SCAN
("Kicking off one direct scan for '%s'\n",
- iwl_escape_essid(priv->direct_ssid,
-+ iwl3945_escape_essid(priv->direct_ssid,
++ iwl4965_escape_essid(priv->direct_ssid,
priv->direct_ssid_len));
scan->direct_scan[0].id = WLAN_EID_SSID;
scan->direct_scan[0].len = priv->direct_ssid_len;
@@ -391169,48 +472156,66 @@
priv->direct_ssid, priv->direct_ssid_len);
direct_mask = 1;
- } else if (!iwl_is_associated(priv) && priv->essid_len) {
-+ } else if (!iwl3945_is_associated(priv) && priv->essid_len) {
++ } else if (!iwl4965_is_associated(priv) && priv->essid_len) {
scan->direct_scan[0].id = WLAN_EID_SSID;
scan->direct_scan[0].len = priv->essid_len;
memcpy(scan->direct_scan[0].ssid, priv->essid, priv->essid_len);
-@@ -6630,7 +6598,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -6991,7 +7022,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
/* We don't build a direct scan probe request; the uCode will do
* that based on the direct_mask added to each channel entry */
scan->tx_cmd.len = cpu_to_le16(
- iwl_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
-+ iwl3945_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
++ iwl4965_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
IWL_MAX_SCAN_SIZE - sizeof(scan), 0));
scan->tx_cmd.tx_flags = TX_CMD_FLG_SEQ_CTL_MSK;
scan->tx_cmd.sta_id = priv->hw_setting.bcast_sta_id;
-@@ -6666,23 +6634,23 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -7005,7 +7036,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+ case 2:
+ scan->flags = RXON_FLG_BAND_24G_MSK | RXON_FLG_AUTO_DETECT_MSK;
+ scan->tx_cmd.rate_n_flags =
+- iwl_hw_set_rate_n_flags(IWL_RATE_1M_PLCP,
++ iwl4965_hw_set_rate_n_flags(IWL_RATE_1M_PLCP,
+ RATE_MCS_ANT_B_MSK|RATE_MCS_CCK_MSK);
+
+ scan->good_CRC_th = 0;
+@@ -7014,7 +7045,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+
+ case 1:
+ scan->tx_cmd.rate_n_flags =
+- iwl_hw_set_rate_n_flags(IWL_RATE_6M_PLCP,
++ iwl4965_hw_set_rate_n_flags(IWL_RATE_6M_PLCP,
+ RATE_MCS_ANT_B_MSK);
+ scan->good_CRC_th = IWL_GOOD_CRC_TH;
+ phymode = MODE_IEEE80211A;
+@@ -7041,23 +7072,23 @@ static void iwl_bg_request_scan(struct work_struct *data)
if (direct_mask)
IWL_DEBUG_SCAN
("Initiating direct scan for %s.\n",
- iwl_escape_essid(priv->essid, priv->essid_len));
-+ iwl3945_escape_essid(priv->essid, priv->essid_len));
++ iwl4965_escape_essid(priv->essid, priv->essid_len));
else
IWL_DEBUG_SCAN("Initiating indirect scan.\n");
scan->channel_count =
- iwl_get_channels_for_scan(
-+ iwl3945_get_channels_for_scan(
++ iwl4965_get_channels_for_scan(
priv, phymode, 1, /* active */
direct_mask,
(void *)&scan->data[le16_to_cpu(scan->tx_cmd.len)]);
cmd.len += le16_to_cpu(scan->tx_cmd.len) +
- scan->channel_count * sizeof(struct iwl_scan_channel);
-+ scan->channel_count * sizeof(struct iwl3945_scan_channel);
++ scan->channel_count * sizeof(struct iwl4965_scan_channel);
cmd.data = scan;
scan->len = cpu_to_le16(cmd.len);
set_bit(STATUS_SCAN_HW, &priv->status);
- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl3945_send_cmd_sync(priv, &cmd);
++ rc = iwl4965_send_cmd_sync(priv, &cmd);
if (rc)
goto done;
-@@ -6693,50 +6661,52 @@ static void iwl_bg_request_scan(struct work_struct *data)
+@@ -7068,50 +7099,52 @@ static void iwl_bg_request_scan(struct work_struct *data)
return;
done:
@@ -391221,62 +472226,62 @@
}
-static void iwl_bg_up(struct work_struct *data)
-+static void iwl3945_bg_up(struct work_struct *data)
++static void iwl4965_bg_up(struct work_struct *data)
{
- struct iwl_priv *priv = container_of(data, struct iwl_priv, up);
-+ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv, up);
++ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv, up);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
mutex_lock(&priv->mutex);
- __iwl_up(priv);
-+ __iwl3945_up(priv);
++ __iwl4965_up(priv);
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_restart(struct work_struct *data)
-+static void iwl3945_bg_restart(struct work_struct *data)
++static void iwl4965_bg_restart(struct work_struct *data)
{
- struct iwl_priv *priv = container_of(data, struct iwl_priv, restart);
-+ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv, restart);
++ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv, restart);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
- iwl_down(priv);
-+ iwl3945_down(priv);
++ iwl4965_down(priv);
queue_work(priv->workqueue, &priv->up);
}
-static void iwl_bg_rx_replenish(struct work_struct *data)
-+static void iwl3945_bg_rx_replenish(struct work_struct *data)
++static void iwl4965_bg_rx_replenish(struct work_struct *data)
{
- struct iwl_priv *priv =
- container_of(data, struct iwl_priv, rx_replenish);
-+ struct iwl3945_priv *priv =
-+ container_of(data, struct iwl3945_priv, rx_replenish);
++ struct iwl4965_priv *priv =
++ container_of(data, struct iwl4965_priv, rx_replenish);
if (test_bit(STATUS_EXIT_PENDING, &priv->status))
return;
mutex_lock(&priv->mutex);
- iwl_rx_replenish(priv);
-+ iwl3945_rx_replenish(priv);
++ iwl4965_rx_replenish(priv);
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_post_associate(struct work_struct *data)
+#define IWL_DELAY_NEXT_SCAN (HZ*2)
+
-+static void iwl3945_bg_post_associate(struct work_struct *data)
++static void iwl4965_bg_post_associate(struct work_struct *data)
{
- struct iwl_priv *priv = container_of(data, struct iwl_priv,
-+ struct iwl3945_priv *priv = container_of(data, struct iwl3945_priv,
++ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv,
post_associate.work);
int rc = 0;
-@@ -6758,20 +6728,20 @@ static void iwl_bg_post_associate(struct work_struct *data)
+@@ -7133,20 +7166,20 @@ static void iwl_bg_post_associate(struct work_struct *data)
mutex_lock(&priv->mutex);
@@ -391286,107 +472291,131 @@
return;
}
- iwl_scan_cancel_timeout(priv, 200);
-+ iwl3945_scan_cancel_timeout(priv, 200);
++ iwl4965_scan_cancel_timeout(priv, 200);
conf = ieee80211_get_hw_conf(priv->hw);
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
- iwl_setup_rxon_timing(priv);
- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
-+ memset(&priv->rxon_timing, 0, sizeof(struct iwl3945_rxon_time_cmd));
-+ iwl3945_setup_rxon_timing(priv);
-+ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON_TIMING,
++ memset(&priv->rxon_timing, 0, sizeof(struct iwl4965_rxon_time_cmd));
++ iwl4965_setup_rxon_timing(priv);
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON_TIMING,
sizeof(priv->rxon_timing), &priv->rxon_timing);
if (rc)
IWL_WARNING("REPLY_RXON_TIMING failed - "
-@@ -6800,75 +6770,81 @@ static void iwl_bg_post_associate(struct work_struct *data)
+@@ -7154,15 +7187,10 @@ static void iwl_bg_post_associate(struct work_struct *data)
+
+ priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
+
+-#ifdef CONFIG_IWLWIFI_HT
+- if (priv->is_ht_enabled && priv->current_assoc_ht.is_ht)
+- iwl4965_set_rxon_ht(priv, &priv->current_assoc_ht);
+- else {
+- priv->active_rate_ht[0] = 0;
+- priv->active_rate_ht[1] = 0;
+- priv->current_channel_width = IWL_CHANNEL_WIDTH_20MHZ;
+- }
+-#endif /* CONFIG_IWLWIFI_HT*/
++#ifdef CONFIG_IWL4965_HT
++ if (priv->current_ht_config.is_ht)
++ iwl4965_set_rxon_ht(priv, &priv->current_ht_config);
++#endif /* CONFIG_IWL4965_HT*/
+ iwl4965_set_rxon_chain(priv);
+ priv->staging_rxon.assoc_id = cpu_to_le16(priv->assoc_id);
+
+@@ -7185,22 +7213,22 @@ static void iwl_bg_post_associate(struct work_struct *data)
}
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
switch (priv->iw_mode) {
case IEEE80211_IF_TYPE_STA:
- iwl_rate_scale_init(priv->hw, IWL_AP_ID);
-+ iwl3945_rate_scale_init(priv->hw, IWL_AP_ID);
++ iwl4965_rate_scale_init(priv->hw, IWL_AP_ID);
break;
case IEEE80211_IF_TYPE_IBSS:
/* clear out the station table */
- iwl_clear_stations_table(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
-- iwl_add_station(priv, BROADCAST_ADDR, 0, 0);
-- iwl_add_station(priv, priv->bssid, 0, 0);
-+ iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0);
-+ iwl3945_add_station(priv, priv->bssid, 0, 0);
- iwl3945_sync_sta(priv, IWL_STA_ID,
- (priv->phymode == MODE_IEEE80211A)?
- IWL_RATE_6M_PLCP : IWL_RATE_1M_PLCP,
- CMD_ASYNC);
+- iwl_rxon_add_station(priv, BROADCAST_ADDR, 0);
+- iwl_rxon_add_station(priv, priv->bssid, 0);
- iwl_rate_scale_init(priv->hw, IWL_STA_ID);
- iwl_send_beacon_cmd(priv);
-+ iwl3945_rate_scale_init(priv->hw, IWL_STA_ID);
-+ iwl3945_send_beacon_cmd(priv);
++ iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0);
++ iwl4965_rxon_add_station(priv, priv->bssid, 0);
++ iwl4965_rate_scale_init(priv->hw, IWL_STA_ID);
++ iwl4965_send_beacon_cmd(priv);
break;
- default:
- IWL_ERROR("%s Should not be called in %d mode\n",
-- __FUNCTION__, priv->iw_mode);
-+ __FUNCTION__, priv->iw_mode);
+@@ -7210,55 +7238,61 @@ static void iwl_bg_post_associate(struct work_struct *data)
break;
}
- iwl_sequence_reset(priv);
-+ iwl3945_sequence_reset(priv);
++ iwl4965_sequence_reset(priv);
+
+-#ifdef CONFIG_IWLWIFI_SENSITIVITY
++#ifdef CONFIG_IWL4965_SENSITIVITY
+ /* Enable Rx differential gain and sensitivity calibrations */
+ iwl4965_chain_noise_reset(priv);
+ priv->start_calib = 1;
+-#endif /* CONFIG_IWLWIFI_SENSITIVITY */
++#endif /* CONFIG_IWL4965_SENSITIVITY */
+
+ if (priv->iw_mode == IEEE80211_IF_TYPE_IBSS)
+ priv->assoc_station_added = 1;
-#ifdef CONFIG_IWLWIFI_QOS
- iwl_activate_qos(priv, 0);
-#endif /* CONFIG_IWLWIFI_QOS */
-+#ifdef CONFIG_IWL3945_QOS
-+ iwl3945_activate_qos(priv, 0);
-+#endif /* CONFIG_IWL3945_QOS */
++#ifdef CONFIG_IWL4965_QOS
++ iwl4965_activate_qos(priv, 0);
++#endif /* CONFIG_IWL4965_QOS */
+ /* we have just associated, don't start scan too early */
+ priv->next_scan_jiffies = jiffies + IWL_DELAY_NEXT_SCAN;
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_abort_scan(struct work_struct *work)
-+static void iwl3945_bg_abort_scan(struct work_struct *work)
++static void iwl4965_bg_abort_scan(struct work_struct *work)
{
- struct iwl_priv *priv = container_of(work, struct iwl_priv,
- abort_scan);
-+ struct iwl3945_priv *priv = container_of(work, struct iwl3945_priv, abort_scan);
++ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv, abort_scan);
- if (!iwl_is_ready(priv))
-+ if (!iwl3945_is_ready(priv))
++ if (!iwl4965_is_ready(priv))
return;
mutex_lock(&priv->mutex);
set_bit(STATUS_SCAN_ABORTING, &priv->status);
- iwl_send_scan_abort(priv);
-+ iwl3945_send_scan_abort(priv);
++ iwl4965_send_scan_abort(priv);
mutex_unlock(&priv->mutex);
}
-static void iwl_bg_scan_completed(struct work_struct *work)
-+static int iwl3945_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
++static int iwl4965_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
+
-+static void iwl3945_bg_scan_completed(struct work_struct *work)
++static void iwl4965_bg_scan_completed(struct work_struct *work)
{
- struct iwl_priv *priv =
- container_of(work, struct iwl_priv, scan_completed);
-+ struct iwl3945_priv *priv =
-+ container_of(work, struct iwl3945_priv, scan_completed);
++ struct iwl4965_priv *priv =
++ container_of(work, struct iwl4965_priv, scan_completed);
IWL_DEBUG(IWL_DL_INFO | IWL_DL_SCAN, "SCAN complete scan\n");
@@ -391394,7 +472423,7 @@
return;
+ if (test_bit(STATUS_CONF_PENDING, &priv->status))
-+ iwl3945_mac_config(priv->hw, ieee80211_get_hw_conf(priv->hw));
++ iwl4965_mac_config(priv->hw, ieee80211_get_hw_conf(priv->hw));
+
ieee80211_scan_completed(priv->hw);
@@ -391402,21 +472431,21 @@
* performing the scan, fire one off */
mutex_lock(&priv->mutex);
- iwl_hw_reg_send_txpower(priv);
-+ iwl3945_hw_reg_send_txpower(priv);
++ iwl4965_hw_reg_send_txpower(priv);
mutex_unlock(&priv->mutex);
}
-@@ -6878,50 +6854,123 @@ static void iwl_bg_scan_completed(struct work_struct *work)
+@@ -7268,50 +7302,123 @@ static void iwl_bg_scan_completed(struct work_struct *work)
*
*****************************************************************************/
-static int iwl_mac_start(struct ieee80211_hw *hw)
+#define UCODE_READY_TIMEOUT (2 * HZ)
+
-+static int iwl3945_mac_start(struct ieee80211_hw *hw)
++static int iwl4965_mac_start(struct ieee80211_hw *hw)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
+ int ret;
IWL_DEBUG_MAC80211("enter\n");
@@ -391428,7 +472457,7 @@
+ pci_restore_state(priv->pci_dev);
+ pci_enable_msi(priv->pci_dev);
+
-+ ret = request_irq(priv->pci_dev->irq, iwl3945_isr, IRQF_SHARED,
++ ret = request_irq(priv->pci_dev->irq, iwl4965_isr, IRQF_SHARED,
+ DRV_NAME, priv);
+ if (ret) {
+ IWL_ERROR("Error allocating IRQ %d\n", priv->pci_dev->irq);
@@ -391439,29 +472468,29 @@
mutex_lock(&priv->mutex);
- priv->is_open = 1;
-+ memset(&priv->staging_rxon, 0, sizeof(struct iwl3945_rxon_cmd));
++ memset(&priv->staging_rxon, 0, sizeof(struct iwl4965_rxon_cmd));
+ /* fetch ucode file from disk, alloc and copy to bus-master buffers ...
+ * ucode filename and max sizes are card-specific. */
-
-- if (!iwl_is_rfkill(priv))
-- ieee80211_start_queues(priv->hw);
++
+ if (!priv->ucode_code.len) {
-+ ret = iwl3945_read_ucode(priv);
++ ret = iwl4965_read_ucode(priv);
+ if (ret) {
+ IWL_ERROR("Could not read microcode: %d\n", ret);
+ mutex_unlock(&priv->mutex);
+ goto out_release_irq;
+ }
+ }
-+
-+ ret = __iwl3945_up(priv);
+
+- if (!iwl_is_rfkill(priv))
+- ieee80211_start_queues(priv->hw);
++ ret = __iwl4965_up(priv);
mutex_unlock(&priv->mutex);
+
+ if (ret)
+ goto out_release_irq;
+
-+ IWL_DEBUG_INFO("Start UP work.\n");
++ IWL_DEBUG_INFO("Start UP work done.\n");
+
+ if (test_bit(STATUS_IN_SUSPEND, &priv->status))
+ return 0;
@@ -391495,10 +472524,10 @@
}
-static void iwl_mac_stop(struct ieee80211_hw *hw)
-+static void iwl3945_mac_stop(struct ieee80211_hw *hw)
++static void iwl4965_mac_stop(struct ieee80211_hw *hw)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
IWL_DEBUG_MAC80211("enter\n");
@@ -391518,17 +472547,17 @@
- iwl_commit_rxon(priv);
- mutex_unlock(&priv->mutex);
+
-+ if (iwl3945_is_ready_rf(priv)) {
++ if (iwl4965_is_ready_rf(priv)) {
+ /* stop mac, cancel any scan request and clear
+ * RXON_FILTER_ASSOC_MSK BIT
+ */
+ mutex_lock(&priv->mutex);
-+ iwl3945_scan_cancel_timeout(priv, 100);
++ iwl4965_scan_cancel_timeout(priv, 100);
+ cancel_delayed_work(&priv->post_associate);
+ mutex_unlock(&priv->mutex);
+ }
+
-+ iwl3945_down(priv);
++ iwl4965_down(priv);
+
+ flush_workqueue(priv->workqueue);
+ free_irq(priv->pci_dev->irq, priv);
@@ -391540,20 +472569,20 @@
}
-static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
-+static int iwl3945_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
++static int iwl4965_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
struct ieee80211_tx_control *ctl)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
IWL_DEBUG_MAC80211("enter\n");
-@@ -6933,29 +6982,29 @@ static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
+@@ -7323,29 +7430,29 @@ static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
IWL_DEBUG_TX("dev->xmit(%d bytes) at rate 0x%02x\n", skb->len,
ctl->tx_rate);
- if (iwl_tx_skb(priv, skb, ctl))
-+ if (iwl3945_tx_skb(priv, skb, ctl))
++ if (iwl4965_tx_skb(priv, skb, ctl))
dev_kfree_skb_any(skb);
IWL_DEBUG_MAC80211("leave\n");
@@ -391561,11 +472590,11 @@
}
-static int iwl_mac_add_interface(struct ieee80211_hw *hw,
-+static int iwl3945_mac_add_interface(struct ieee80211_hw *hw,
++static int iwl4965_mac_add_interface(struct ieee80211_hw *hw,
struct ieee80211_if_init_conf *conf)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
unsigned long flags;
DECLARE_MAC_BUF(mac);
@@ -391576,7 +472605,7 @@
- IWL_DEBUG_MAC80211("leave - interface_id != 0\n");
+ if (priv->vif) {
+ IWL_DEBUG_MAC80211("leave - vif != NULL\n");
- return -EOPNOTSUPP;
+ return 0;
}
spin_lock_irqsave(&priv->lock, flags);
@@ -391585,15 +472614,16 @@
spin_unlock_irqrestore(&priv->lock, flags);
-@@ -6966,106 +7015,108 @@ static int iwl_mac_add_interface(struct ieee80211_hw *hw,
+@@ -7355,58 +7462,62 @@ static int iwl_mac_add_interface(struct ieee80211_hw *hw,
+ IWL_DEBUG_MAC80211("Set %s\n", print_mac(mac, conf->mac_addr));
memcpy(priv->mac_addr, conf->mac_addr, ETH_ALEN);
}
-
- iwl_set_mode(priv, conf->type);
-+ if (iwl3945_is_ready(priv))
-+ iwl3945_set_mode(priv, conf->type);
- IWL_DEBUG_MAC80211("leave\n");
++ if (iwl4965_is_ready(priv))
++ iwl4965_set_mode(priv, conf->type);
++
mutex_unlock(&priv->mutex);
+ IWL_DEBUG_MAC80211("leave\n");
@@ -391602,19 +472632,19 @@
/**
- * iwl_mac_config - mac80211 config callback
-+ * iwl3945_mac_config - mac80211 config callback
++ * iwl4965_mac_config - mac80211 config callback
*
* We ignore conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME since it seems to
* be set inappropriately and the driver currently sets the hardware up to
* use it whenever needed.
*/
-static int iwl_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
-+static int iwl3945_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
++static int iwl4965_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
{
- struct iwl_priv *priv = hw->priv;
- const struct iwl_channel_info *ch_info;
-+ struct iwl3945_priv *priv = hw->priv;
-+ const struct iwl3945_channel_info *ch_info;
++ struct iwl4965_priv *priv = hw->priv;
++ const struct iwl4965_channel_info *ch_info;
unsigned long flags;
+ int ret = 0;
@@ -391624,7 +472654,7 @@
- if (!iwl_is_ready(priv)) {
+ priv->add_radiotap = !!(conf->flags & IEEE80211_CONF_RADIOTAP);
+
-+ if (!iwl3945_is_ready(priv)) {
++ if (!iwl4965_is_ready(priv)) {
IWL_DEBUG_MAC80211("leave - not ready\n");
- mutex_unlock(&priv->mutex);
- return -EIO;
@@ -391635,7 +472665,7 @@
- /* TODO: Figure out how to get ieee80211_local->sta_scanning w/ only
- * what is exposed through include/ declrations */
- if (unlikely(!iwl_param_disable_hw_scan &&
-+ if (unlikely(!iwl3945_param_disable_hw_scan &&
++ if (unlikely(!iwl4965_param_disable_hw_scan &&
test_bit(STATUS_SCANNING, &priv->status))) {
IWL_DEBUG_MAC80211("leave - scanning\n");
+ set_bit(STATUS_CONF_PENDING, &priv->status);
@@ -391646,7 +472676,7 @@
spin_lock_irqsave(&priv->lock, flags);
- ch_info = iwl_get_channel_info(priv, conf->phymode, conf->channel);
-+ ch_info = iwl3945_get_channel_info(priv, conf->phymode, conf->channel);
++ ch_info = iwl4965_get_channel_info(priv, conf->phymode, conf->channel);
if (!is_channel_valid(ch_info)) {
IWL_DEBUG_SCAN("Channel %d [%d] is INVALID for this SKU.\n",
conf->channel, conf->phymode);
@@ -391658,17 +472688,29 @@
+ goto out;
}
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
+ /* if we are switching fron ht to 2.4 clear flags
+ * from any ht related info since 2.4 does not
+ * support ht */
+@@ -7416,57 +7527,56 @@ static int iwl_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+ #endif
+ )
+ priv->staging_rxon.flags = 0;
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT */
+
- iwl_set_rxon_channel(priv, conf->phymode, conf->channel);
-+ iwl3945_set_rxon_channel(priv, conf->phymode, conf->channel);
++ iwl4965_set_rxon_channel(priv, conf->phymode, conf->channel);
- iwl_set_flags_for_phymode(priv, conf->phymode);
-+ iwl3945_set_flags_for_phymode(priv, conf->phymode);
++ iwl4965_set_flags_for_phymode(priv, conf->phymode);
/* The list of supported rates and rate mask can be different
* for each phymode; since the phymode may have changed, reset
* the rate mask to what mac80211 lists */
- iwl_set_rate(priv);
-+ iwl3945_set_rate(priv);
++ iwl4965_set_rate(priv);
spin_unlock_irqrestore(&priv->lock, flags);
@@ -391677,13 +472719,13 @@
- iwl_hw_channel_switch(priv, conf->channel);
- mutex_unlock(&priv->mutex);
- return 0;
-+ iwl3945_hw_channel_switch(priv, conf->channel);
++ iwl4965_hw_channel_switch(priv, conf->channel);
+ goto out;
}
#endif
- iwl_radio_kill_sw(priv, !conf->radio_enabled);
-+ iwl3945_radio_kill_sw(priv, !conf->radio_enabled);
++ iwl4965_radio_kill_sw(priv, !conf->radio_enabled);
if (!conf->radio_enabled) {
IWL_DEBUG_MAC80211("leave - radio disabled\n");
@@ -391693,7 +472735,7 @@
}
- if (iwl_is_rfkill(priv)) {
-+ if (iwl3945_is_rfkill(priv)) {
++ if (iwl4965_is_rfkill(priv)) {
IWL_DEBUG_MAC80211("leave - RF kill\n");
- mutex_unlock(&priv->mutex);
- return -EIO;
@@ -391702,12 +472744,12 @@
}
- iwl_set_rate(priv);
-+ iwl3945_set_rate(priv);
++ iwl4965_set_rate(priv);
if (memcmp(&priv->active_rxon,
&priv->staging_rxon, sizeof(priv->staging_rxon)))
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
else
IWL_DEBUG_INFO("No re-sending same RXON configuration.\n");
@@ -391722,38 +472764,43 @@
}
-static void iwl_config_ap(struct iwl_priv *priv)
-+static void iwl3945_config_ap(struct iwl3945_priv *priv)
++static void iwl4965_config_ap(struct iwl4965_priv *priv)
{
int rc = 0;
-@@ -7077,12 +7128,12 @@ static void iwl_config_ap(struct iwl_priv *priv)
+@@ -7478,12 +7588,12 @@ static void iwl_config_ap(struct iwl_priv *priv)
/* RXON - unassoc (to set timing command) */
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
/* RXON Timing */
- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
- iwl_setup_rxon_timing(priv);
- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
-+ memset(&priv->rxon_timing, 0, sizeof(struct iwl3945_rxon_time_cmd));
-+ iwl3945_setup_rxon_timing(priv);
-+ rc = iwl3945_send_cmd_pdu(priv, REPLY_RXON_TIMING,
++ memset(&priv->rxon_timing, 0, sizeof(struct iwl4965_rxon_time_cmd));
++ iwl4965_setup_rxon_timing(priv);
++ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON_TIMING,
sizeof(priv->rxon_timing), &priv->rxon_timing);
if (rc)
IWL_WARNING("REPLY_RXON_TIMING failed - "
-@@ -7112,20 +7163,21 @@ static void iwl_config_ap(struct iwl_priv *priv)
+@@ -7515,23 +7625,24 @@ static void iwl_config_ap(struct iwl_priv *priv)
}
/* restore RXON assoc */
priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-- iwl_add_station(priv, BROADCAST_ADDR, 0, 0);
-+ iwl3945_commit_rxon(priv);
-+ iwl3945_add_station(priv, iwl3945_broadcast_addr, 0, 0);
+-#ifdef CONFIG_IWLWIFI_QOS
+- iwl_activate_qos(priv, 1);
++ iwl4965_commit_rxon(priv);
++#ifdef CONFIG_IWL4965_QOS
++ iwl4965_activate_qos(priv, 1);
+ #endif
+- iwl_rxon_add_station(priv, BROADCAST_ADDR, 0);
++ iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0);
}
- iwl_send_beacon_cmd(priv);
-+ iwl3945_send_beacon_cmd(priv);
++ iwl4965_send_beacon_cmd(priv);
/* FIXME - we need to add code here to detect a totally new
* configuration, reset the AP, unassoc, rxon timing, assoc,
@@ -391761,20 +472808,20 @@
}
-static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
-+static int iwl3945_mac_config_interface(struct ieee80211_hw *hw,
++static int iwl4965_mac_config_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
struct ieee80211_if_conf *conf)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
DECLARE_MAC_BUF(mac);
unsigned long flags;
int rc;
-@@ -7142,9 +7194,11 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+@@ -7546,9 +7657,11 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
return 0;
}
-+ if (!iwl3945_is_alive(priv))
++ if (!iwl4965_is_alive(priv))
+ return -EAGAIN;
+
mutex_lock(&priv->mutex);
@@ -391783,7 +472830,7 @@
if (conf->bssid)
IWL_DEBUG_MAC80211("bssid: %s\n",
print_mac(mac, conf->bssid));
-@@ -7161,8 +7215,8 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+@@ -7565,8 +7678,8 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
return 0;
}
@@ -391794,11 +472841,11 @@
mutex_unlock(&priv->mutex);
return 0;
}
-@@ -7180,11 +7234,14 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+@@ -7584,11 +7697,14 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
priv->ibss_beacon = conf->beacon;
}
-+ if (iwl3945_is_rfkill(priv))
++ if (iwl4965_is_rfkill(priv))
+ goto done;
+
if (conf->bssid && !is_zero_ether_addr(conf->bssid) &&
@@ -391806,43 +472853,43 @@
/* If there is currently a HW scan going on in the background
* then we need to cancel it else the RXON below will fail. */
- if (iwl_scan_cancel_timeout(priv, 100)) {
-+ if (iwl3945_scan_cancel_timeout(priv, 100)) {
++ if (iwl4965_scan_cancel_timeout(priv, 100)) {
IWL_WARNING("Aborted scan still in progress "
"after 100ms\n");
IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
-@@ -7200,20 +7257,21 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+@@ -7604,20 +7720,21 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
memcpy(priv->bssid, conf->bssid, ETH_ALEN);
if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
- iwl_config_ap(priv);
-+ iwl3945_config_ap(priv);
++ iwl4965_config_ap(priv);
else {
- rc = iwl_commit_rxon(priv);
-+ rc = iwl3945_commit_rxon(priv);
++ rc = iwl4965_commit_rxon(priv);
if ((priv->iw_mode == IEEE80211_IF_TYPE_STA) && rc)
-- iwl_add_station(priv,
-+ iwl3945_add_station(priv,
- priv->active_rxon.bssid_addr, 1, 0);
+- iwl_rxon_add_station(
++ iwl4965_rxon_add_station(
+ priv, priv->active_rxon.bssid_addr, 1);
}
} else {
- iwl_scan_cancel_timeout(priv, 100);
-+ iwl3945_scan_cancel_timeout(priv, 100);
++ iwl4965_scan_cancel_timeout(priv, 100);
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
}
+ done:
spin_lock_irqsave(&priv->lock, flags);
if (!conf->ssid_len)
memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
-@@ -7229,34 +7287,35 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
+@@ -7633,34 +7750,35 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
return 0;
}
-static void iwl_configure_filter(struct ieee80211_hw *hw,
-+static void iwl3945_configure_filter(struct ieee80211_hw *hw,
++static void iwl4965_configure_filter(struct ieee80211_hw *hw,
unsigned int changed_flags,
unsigned int *total_flags,
int mc_count, struct dev_addr_list *mc_list)
@@ -391850,17 +472897,17 @@
/*
* XXX: dummy
- * see also iwl_connection_init_rx_config
-+ * see also iwl3945_connection_init_rx_config
++ * see also iwl4965_connection_init_rx_config
*/
*total_flags = 0;
}
-static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
-+static void iwl3945_mac_remove_interface(struct ieee80211_hw *hw,
++static void iwl4965_mac_remove_interface(struct ieee80211_hw *hw,
struct ieee80211_if_init_conf *conf)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
IWL_DEBUG_MAC80211("enter\n");
@@ -391873,32 +472920,61 @@
-
- if (priv->interface_id == conf->if_id) {
- priv->interface_id = 0;
-+ if (iwl3945_is_ready_rf(priv)) {
-+ iwl3945_scan_cancel_timeout(priv, 100);
++ if (iwl4965_is_ready_rf(priv)) {
++ iwl4965_scan_cancel_timeout(priv, 100);
+ cancel_delayed_work(&priv->post_associate);
+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
+ }
+ if (priv->vif == conf->vif) {
+ priv->vif = NULL;
memset(priv->bssid, 0, ETH_ALEN);
memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
priv->essid_len = 0;
-@@ -7264,22 +7323,20 @@ static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
- mutex_unlock(&priv->mutex);
+@@ -7671,19 +7789,50 @@ static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
- IWL_DEBUG_MAC80211("leave\n");
--
}
-#define IWL_DELAY_NEXT_SCAN (HZ*2)
-static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
-+static int iwl3945_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
++static void iwl4965_bss_info_changed(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_bss_conf *bss_conf,
++ u32 changes)
++{
++ struct iwl4965_priv *priv = hw->priv;
++
++ if (changes & BSS_CHANGED_ERP_PREAMBLE) {
++ if (bss_conf->use_short_preamble)
++ priv->staging_rxon.flags |= RXON_FLG_SHORT_PREAMBLE_MSK;
++ else
++ priv->staging_rxon.flags &= ~RXON_FLG_SHORT_PREAMBLE_MSK;
++ }
++
++ if (changes & BSS_CHANGED_ERP_CTS_PROT) {
++ if (bss_conf->use_cts_prot && (priv->phymode != MODE_IEEE80211A))
++ priv->staging_rxon.flags |= RXON_FLG_TGG_PROTECT_MSK;
++ else
++ priv->staging_rxon.flags &= ~RXON_FLG_TGG_PROTECT_MSK;
++ }
++
++ if (changes & BSS_CHANGED_ASSOC) {
++ /*
++ * TODO:
++ * do stuff instead of sniffing assoc resp
++ */
++ }
++
++ if (iwl4965_is_associated(priv))
++ iwl4965_send_rxon_assoc(priv);
++}
++
++static int iwl4965_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
{
int rc = 0;
unsigned long flags;
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
IWL_DEBUG_MAC80211("enter\n");
@@ -391906,11 +472982,11 @@
spin_lock_irqsave(&priv->lock, flags);
- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
rc = -EIO;
IWL_DEBUG_MAC80211("leave - not ready or exit pending\n");
goto out_unlock;
-@@ -7291,17 +7348,21 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
+@@ -7695,17 +7844,21 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
goto out_unlock;
}
@@ -391934,65 +473010,66 @@
- "%s [%d]\n ",
- iwl_escape_essid(ssid, len), (int)len);
+ IWL_DEBUG_SCAN("direct scan for %s [%d]\n ",
-+ iwl3945_escape_essid(ssid, len), (int)len);
++ iwl4965_escape_essid(ssid, len), (int)len);
priv->one_direct_scan = 1;
priv->direct_ssid_len = (u8)
-@@ -7310,7 +7371,7 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
+@@ -7714,7 +7867,7 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
} else
priv->one_direct_scan = 0;
- rc = iwl_scan_initiate(priv);
-+ rc = iwl3945_scan_initiate(priv);
++ rc = iwl4965_scan_initiate(priv);
IWL_DEBUG_MAC80211("leave\n");
-@@ -7321,17 +7382,17 @@ out_unlock:
+@@ -7725,18 +7878,18 @@ out_unlock:
return rc;
}
-static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
-+static int iwl3945_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
++static int iwl4965_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
const u8 *local_addr, const u8 *addr,
struct ieee80211_key_conf *key)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
+ DECLARE_MAC_BUF(mac);
int rc = 0;
u8 sta_id;
IWL_DEBUG_MAC80211("enter\n");
- if (!iwl_param_hwcrypto) {
-+ if (!iwl3945_param_hwcrypto) {
++ if (!iwl4965_param_hwcrypto) {
IWL_DEBUG_MAC80211("leave - hwcrypto disabled\n");
return -EOPNOTSUPP;
}
-@@ -7340,7 +7401,7 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+@@ -7745,7 +7898,7 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
/* only support pairwise keys */
return -EOPNOTSUPP;
- sta_id = iwl_hw_find_station(priv, addr);
-+ sta_id = iwl3945_hw_find_station(priv, addr);
++ sta_id = iwl4965_hw_find_station(priv, addr);
if (sta_id == IWL_INVALID_STATION) {
- DECLARE_MAC_BUF(mac);
-
-@@ -7351,24 +7412,24 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ IWL_DEBUG_MAC80211("leave - %s not in station map.\n",
+ print_mac(mac, addr));
+@@ -7754,24 +7907,24 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
mutex_lock(&priv->mutex);
- iwl_scan_cancel_timeout(priv, 100);
-+ iwl3945_scan_cancel_timeout(priv, 100);
++ iwl4965_scan_cancel_timeout(priv, 100);
switch (cmd) {
case SET_KEY:
- rc = iwl_update_sta_key_info(priv, key, sta_id);
-+ rc = iwl3945_update_sta_key_info(priv, key, sta_id);
++ rc = iwl4965_update_sta_key_info(priv, key, sta_id);
if (!rc) {
- iwl_set_rxon_hwcrypto(priv, 1);
- iwl_commit_rxon(priv);
-+ iwl3945_set_rxon_hwcrypto(priv, 1);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_set_rxon_hwcrypto(priv, 1);
++ iwl4965_commit_rxon(priv);
key->hw_key_idx = sta_id;
IWL_DEBUG_MAC80211("set_key success, using hwcrypto\n");
key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
@@ -392000,141 +473077,166 @@
break;
case DISABLE_KEY:
- rc = iwl_clear_sta_key_info(priv, sta_id);
-+ rc = iwl3945_clear_sta_key_info(priv, sta_id);
++ rc = iwl4965_clear_sta_key_info(priv, sta_id);
if (!rc) {
- iwl_set_rxon_hwcrypto(priv, 0);
- iwl_commit_rxon(priv);
-+ iwl3945_set_rxon_hwcrypto(priv, 0);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_set_rxon_hwcrypto(priv, 0);
++ iwl4965_commit_rxon(priv);
IWL_DEBUG_MAC80211("disable hwcrypto key\n");
}
break;
-@@ -7382,18 +7443,18 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+@@ -7785,18 +7938,18 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
return rc;
}
-static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
-+static int iwl3945_mac_conf_tx(struct ieee80211_hw *hw, int queue,
++static int iwl4965_mac_conf_tx(struct ieee80211_hw *hw, int queue,
const struct ieee80211_tx_queue_params *params)
{
- struct iwl_priv *priv = hw->priv;
-#ifdef CONFIG_IWLWIFI_QOS
-+ struct iwl3945_priv *priv = hw->priv;
-+#ifdef CONFIG_IWL3945_QOS
++ struct iwl4965_priv *priv = hw->priv;
++#ifdef CONFIG_IWL4965_QOS
unsigned long flags;
int q;
-#endif /* CONFIG_IWL_QOS */
-+#endif /* CONFIG_IWL3945_QOS */
++#endif /* CONFIG_IWL4965_QOS */
IWL_DEBUG_MAC80211("enter\n");
- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
IWL_DEBUG_MAC80211("leave - RF not ready\n");
return -EIO;
}
-@@ -7403,7 +7464,7 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+@@ -7806,7 +7959,7 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
return 0;
}
-#ifdef CONFIG_IWLWIFI_QOS
-+#ifdef CONFIG_IWL3945_QOS
++#ifdef CONFIG_IWL4965_QOS
if (!priv->qos_data.qos_enable) {
priv->qos_data.qos_active = 0;
IWL_DEBUG_MAC80211("leave - qos not enabled\n");
-@@ -7426,30 +7487,30 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+@@ -7829,30 +7982,30 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
mutex_lock(&priv->mutex);
if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
- iwl_activate_qos(priv, 1);
- else if (priv->assoc_id && iwl_is_associated(priv))
- iwl_activate_qos(priv, 0);
-+ iwl3945_activate_qos(priv, 1);
-+ else if (priv->assoc_id && iwl3945_is_associated(priv))
-+ iwl3945_activate_qos(priv, 0);
++ iwl4965_activate_qos(priv, 1);
++ else if (priv->assoc_id && iwl4965_is_associated(priv))
++ iwl4965_activate_qos(priv, 0);
mutex_unlock(&priv->mutex);
-#endif /*CONFIG_IWLWIFI_QOS */
-+#endif /*CONFIG_IWL3945_QOS */
++#endif /*CONFIG_IWL4965_QOS */
IWL_DEBUG_MAC80211("leave\n");
return 0;
}
-static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
-+static int iwl3945_mac_get_tx_stats(struct ieee80211_hw *hw,
++static int iwl4965_mac_get_tx_stats(struct ieee80211_hw *hw,
struct ieee80211_tx_queue_stats *stats)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
int i, avail;
- struct iwl_tx_queue *txq;
- struct iwl_queue *q;
-+ struct iwl3945_tx_queue *txq;
-+ struct iwl3945_queue *q;
++ struct iwl4965_tx_queue *txq;
++ struct iwl4965_queue *q;
unsigned long flags;
IWL_DEBUG_MAC80211("enter\n");
- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
IWL_DEBUG_MAC80211("leave - RF not ready\n");
return -EIO;
}
-@@ -7459,7 +7520,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
+@@ -7862,7 +8015,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
for (i = 0; i < AC_NUM; i++) {
txq = &priv->txq[i];
q = &txq->q;
- avail = iwl_queue_space(q);
-+ avail = iwl3945_queue_space(q);
++ avail = iwl4965_queue_space(q);
stats->data[i].len = q->n_window - avail;
stats->data[i].limit = q->n_window - q->high_mark;
-@@ -7473,7 +7534,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
+@@ -7876,7 +8029,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
return 0;
}
-static int iwl_mac_get_stats(struct ieee80211_hw *hw,
-+static int iwl3945_mac_get_stats(struct ieee80211_hw *hw,
++static int iwl4965_mac_get_stats(struct ieee80211_hw *hw,
struct ieee80211_low_level_stats *stats)
{
IWL_DEBUG_MAC80211("enter\n");
-@@ -7482,7 +7543,7 @@ static int iwl_mac_get_stats(struct ieee80211_hw *hw,
+@@ -7885,7 +8038,7 @@ static int iwl_mac_get_stats(struct ieee80211_hw *hw,
return 0;
}
-static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
-+static u64 iwl3945_mac_get_tsf(struct ieee80211_hw *hw)
++static u64 iwl4965_mac_get_tsf(struct ieee80211_hw *hw)
{
IWL_DEBUG_MAC80211("enter\n");
IWL_DEBUG_MAC80211("leave\n");
-@@ -7490,16 +7551,16 @@ static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
+@@ -7893,35 +8046,35 @@ static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
return 0;
}
-static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
-+static void iwl3945_mac_reset_tsf(struct ieee80211_hw *hw)
++static void iwl4965_mac_reset_tsf(struct ieee80211_hw *hw)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
unsigned long flags;
mutex_lock(&priv->mutex);
IWL_DEBUG_MAC80211("enter\n");
+ priv->lq_mngr.lq_ready = 0;
+-#ifdef CONFIG_IWLWIFI_HT
++#ifdef CONFIG_IWL4965_HT
+ spin_lock_irqsave(&priv->lock, flags);
+- memset(&priv->current_assoc_ht, 0, sizeof(struct sta_ht_info));
++ memset(&priv->current_ht_config, 0, sizeof(struct iwl_ht_info));
+ spin_unlock_irqrestore(&priv->lock, flags);
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT_AGG
+ /* if (priv->lq_mngr.agg_ctrl.granted_ba)
+ iwl4965_turn_off_agg(priv, TID_ALL_SPECIFIED);*/
+
+- memset(&(priv->lq_mngr.agg_ctrl), 0, sizeof(struct iwl_agg_control));
++ memset(&(priv->lq_mngr.agg_ctrl), 0, sizeof(struct iwl4965_agg_control));
+ priv->lq_mngr.agg_ctrl.tid_traffic_load_threshold = 10;
+ priv->lq_mngr.agg_ctrl.ba_timeout = 5000;
+ priv->lq_mngr.agg_ctrl.auto_agg = 1;
+
+ if (priv->lq_mngr.agg_ctrl.auto_agg)
+ priv->lq_mngr.agg_ctrl.requested_ba = TID_ALL_ENABLED;
+-#endif /*CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /*CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
+
-#ifdef CONFIG_IWLWIFI_QOS
- iwl_reset_qos(priv);
-+#ifdef CONFIG_IWL3945_QOS
-+ iwl3945_reset_qos(priv);
++#ifdef CONFIG_IWL4965_QOS
++ iwl4965_reset_qos(priv);
#endif
- cancel_delayed_work(&priv->post_associate);
-@@ -7522,13 +7583,19 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+ cancel_delayed_work(&priv->post_associate);
+@@ -7946,13 +8099,19 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
spin_unlock_irqrestore(&priv->lock, flags);
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
+ IWL_DEBUG_MAC80211("leave - not ready\n");
+ mutex_unlock(&priv->mutex);
+ return;
@@ -392142,17 +473244,17 @@
+
/* we are restarting association process
* clear RXON_FILTER_ASSOC_MSK bit
- */
+ */
if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
- iwl_scan_cancel_timeout(priv, 100);
-+ iwl3945_scan_cancel_timeout(priv, 100);
++ iwl4965_scan_cancel_timeout(priv, 100);
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
}
/* Per mac80211.h: This is only used in IBSS mode... */
-@@ -7539,15 +7606,9 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+@@ -7963,32 +8122,25 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
return;
}
@@ -392165,123 +473267,379 @@
priv->only_active_channel = 0;
- iwl_set_rate(priv);
-+ iwl3945_set_rate(priv);
++ iwl4965_set_rate(priv);
mutex_unlock(&priv->mutex);
-@@ -7555,16 +7616,16 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
-
+ IWL_DEBUG_MAC80211("leave\n");
+-
}
-static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
-+static int iwl3945_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
++static int iwl4965_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
struct ieee80211_tx_control *control)
{
- struct iwl_priv *priv = hw->priv;
-+ struct iwl3945_priv *priv = hw->priv;
++ struct iwl4965_priv *priv = hw->priv;
unsigned long flags;
mutex_lock(&priv->mutex);
IWL_DEBUG_MAC80211("enter\n");
- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl3945_is_ready_rf(priv)) {
++ if (!iwl4965_is_ready_rf(priv)) {
IWL_DEBUG_MAC80211("leave - RF not ready\n");
mutex_unlock(&priv->mutex);
return -EIO;
-@@ -7588,8 +7649,8 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+@@ -8012,8 +8164,8 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
IWL_DEBUG_MAC80211("leave\n");
spin_unlock_irqrestore(&priv->lock, flags);
-#ifdef CONFIG_IWLWIFI_QOS
- iwl_reset_qos(priv);
-+#ifdef CONFIG_IWL3945_QOS
-+ iwl3945_reset_qos(priv);
++#ifdef CONFIG_IWL4965_QOS
++ iwl4965_reset_qos(priv);
#endif
- queue_work(priv->workqueue, &priv->post_associate.work);
-@@ -7605,7 +7666,7 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ queue_work(priv->workqueue, &priv->post_associate.work);
+@@ -8023,133 +8175,62 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ return 0;
+ }
+
+-#ifdef CONFIG_IWLWIFI_HT
+-union ht_cap_info {
+- struct {
+- u16 advanced_coding_cap :1;
+- u16 supported_chan_width_set :1;
+- u16 mimo_power_save_mode :2;
+- u16 green_field :1;
+- u16 short_GI20 :1;
+- u16 short_GI40 :1;
+- u16 tx_stbc :1;
+- u16 rx_stbc :1;
+- u16 beam_forming :1;
+- u16 delayed_ba :1;
+- u16 maximal_amsdu_size :1;
+- u16 cck_mode_at_40MHz :1;
+- u16 psmp_support :1;
+- u16 stbc_ctrl_frame_support :1;
+- u16 sig_txop_protection_support :1;
+- };
+- u16 val;
+-} __attribute__ ((packed));
+-
+-union ht_param_info{
+- struct {
+- u8 max_rx_ampdu_factor :2;
+- u8 mpdu_density :3;
+- u8 reserved :3;
+- };
+- u8 val;
+-} __attribute__ ((packed));
+-
+-union ht_exra_param_info {
+- struct {
+- u8 ext_chan_offset :2;
+- u8 tx_chan_width :1;
+- u8 rifs_mode :1;
+- u8 controlled_access_only :1;
+- u8 service_interval_granularity :3;
+- };
+- u8 val;
+-} __attribute__ ((packed));
+-
+-union ht_operation_mode{
+- struct {
+- u16 op_mode :2;
+- u16 non_GF :1;
+- u16 reserved :13;
+- };
+- u16 val;
+-} __attribute__ ((packed));
+-
++#ifdef CONFIG_IWL4965_HT
+
+-static int sta_ht_info_init(struct ieee80211_ht_capability *ht_cap,
+- struct ieee80211_ht_additional_info *ht_extra,
+- struct sta_ht_info *ht_info_ap,
+- struct sta_ht_info *ht_info)
++static void iwl4965_ht_info_fill(struct ieee80211_conf *conf,
++ struct iwl4965_priv *priv)
+ {
+- union ht_cap_info cap;
+- union ht_operation_mode op_mode;
+- union ht_param_info param_info;
+- union ht_exra_param_info extra_param_info;
++ struct iwl_ht_info *iwl_conf = &priv->current_ht_config;
++ struct ieee80211_ht_info *ht_conf = &conf->ht_conf;
++ struct ieee80211_ht_bss_info *ht_bss_conf = &conf->ht_bss_conf;
+
+ IWL_DEBUG_MAC80211("enter: \n");
+
+- if (!ht_info) {
+- IWL_DEBUG_MAC80211("leave: ht_info is NULL\n");
+- return -1;
++ if (!(conf->flags & IEEE80211_CONF_SUPPORT_HT_MODE)) {
++ iwl_conf->is_ht = 0;
++ return;
+ }
+
+- if (ht_cap) {
+- cap.val = (u16) le16_to_cpu(ht_cap->capabilities_info);
+- param_info.val = ht_cap->mac_ht_params_info;
+- ht_info->is_ht = 1;
+- if (cap.short_GI20)
+- ht_info->sgf |= 0x1;
+- if (cap.short_GI40)
+- ht_info->sgf |= 0x2;
+- ht_info->is_green_field = cap.green_field;
+- ht_info->max_amsdu_size = cap.maximal_amsdu_size;
+- ht_info->supported_chan_width = cap.supported_chan_width_set;
+- ht_info->tx_mimo_ps_mode = cap.mimo_power_save_mode;
+- memcpy(ht_info->supp_rates, ht_cap->supported_mcs_set, 16);
+-
+- ht_info->ampdu_factor = param_info.max_rx_ampdu_factor;
+- ht_info->mpdu_density = param_info.mpdu_density;
+-
+- IWL_DEBUG_MAC80211("SISO mask 0x%X MIMO mask 0x%X \n",
+- ht_cap->supported_mcs_set[0],
+- ht_cap->supported_mcs_set[1]);
+-
+- if (ht_info_ap) {
+- ht_info->control_channel = ht_info_ap->control_channel;
+- ht_info->extension_chan_offset =
+- ht_info_ap->extension_chan_offset;
+- ht_info->tx_chan_width = ht_info_ap->tx_chan_width;
+- ht_info->operating_mode = ht_info_ap->operating_mode;
+- }
+-
+- if (ht_extra) {
+- extra_param_info.val = ht_extra->ht_param;
+- ht_info->control_channel = ht_extra->control_chan;
+- ht_info->extension_chan_offset =
+- extra_param_info.ext_chan_offset;
+- ht_info->tx_chan_width = extra_param_info.tx_chan_width;
+- op_mode.val = (u16)
+- le16_to_cpu(ht_extra->operation_mode);
+- ht_info->operating_mode = op_mode.op_mode;
+- IWL_DEBUG_MAC80211("control channel %d\n",
+- ht_extra->control_chan);
+- }
+- } else
+- ht_info->is_ht = 0;
+-
++ iwl_conf->is_ht = 1;
++ priv->ps_mode = (u8)((ht_conf->cap & IEEE80211_HT_CAP_MIMO_PS) >> 2);
++
++ if (ht_conf->cap & IEEE80211_HT_CAP_SGI_20)
++ iwl_conf->sgf |= 0x1;
++ if (ht_conf->cap & IEEE80211_HT_CAP_SGI_40)
++ iwl_conf->sgf |= 0x2;
++
++ iwl_conf->is_green_field = !!(ht_conf->cap & IEEE80211_HT_CAP_GRN_FLD);
++ iwl_conf->max_amsdu_size =
++ !!(ht_conf->cap & IEEE80211_HT_CAP_MAX_AMSDU);
++ iwl_conf->supported_chan_width =
++ !!(ht_conf->cap & IEEE80211_HT_CAP_SUP_WIDTH);
++ iwl_conf->tx_mimo_ps_mode =
++ (u8)((ht_conf->cap & IEEE80211_HT_CAP_MIMO_PS) >> 2);
++ memcpy(iwl_conf->supp_mcs_set, ht_conf->supp_mcs_set, 16);
++
++ iwl_conf->control_channel = ht_bss_conf->primary_channel;
++ iwl_conf->extension_chan_offset =
++ ht_bss_conf->bss_cap & IEEE80211_HT_IE_CHA_SEC_OFFSET;
++ iwl_conf->tx_chan_width =
++ !!(ht_bss_conf->bss_cap & IEEE80211_HT_IE_CHA_WIDTH);
++ iwl_conf->ht_protection =
++ ht_bss_conf->bss_op_mode & IEEE80211_HT_IE_HT_PROTECTION;
++ iwl_conf->non_GF_STA_present =
++ !!(ht_bss_conf->bss_op_mode & IEEE80211_HT_IE_NON_GF_STA_PRSNT);
++
++ IWL_DEBUG_MAC80211("control channel %d\n",
++ iwl_conf->control_channel);
+ IWL_DEBUG_MAC80211("leave\n");
+- return 0;
+ }
+
+-static int iwl_mac_conf_ht(struct ieee80211_hw *hw,
+- struct ieee80211_ht_capability *ht_cap,
+- struct ieee80211_ht_additional_info *ht_extra)
++static int iwl4965_mac_conf_ht(struct ieee80211_hw *hw,
++ struct ieee80211_conf *conf)
+ {
+- struct iwl_priv *priv = hw->priv;
+- int rs;
++ struct iwl4965_priv *priv = hw->priv;
+
+ IWL_DEBUG_MAC80211("enter: \n");
+
+- rs = sta_ht_info_init(ht_cap, ht_extra, NULL, &priv->current_assoc_ht);
++ iwl4965_ht_info_fill(conf, priv);
+ iwl4965_set_rxon_chain(priv);
+
+ if (priv && priv->assoc_id &&
+@@ -8164,58 +8245,33 @@ static int iwl_mac_conf_ht(struct ieee80211_hw *hw,
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+- IWL_DEBUG_MAC80211("leave: control channel %d\n",
+- ht_extra->control_chan);
+- return rs;
+-
++ IWL_DEBUG_MAC80211("leave:\n");
++ return 0;
+ }
+
+-static void iwl_set_ht_capab(struct ieee80211_hw *hw,
+- struct ieee80211_ht_capability *ht_cap,
+- u8 use_wide_chan)
++static void iwl4965_set_ht_capab(struct ieee80211_hw *hw,
++ struct ieee80211_ht_cap *ht_cap,
++ u8 use_current_config)
+ {
+- union ht_cap_info cap;
+- union ht_param_info param_info;
+-
+- memset(&cap, 0, sizeof(union ht_cap_info));
+- memset(¶m_info, 0, sizeof(union ht_param_info));
+-
+- cap.maximal_amsdu_size = HT_IE_MAX_AMSDU_SIZE_4K;
+- cap.green_field = 1;
+- cap.short_GI20 = 1;
+- cap.short_GI40 = 1;
+- cap.supported_chan_width_set = use_wide_chan;
+- cap.mimo_power_save_mode = 0x3;
+-
+- param_info.max_rx_ampdu_factor = CFG_HT_RX_AMPDU_FACTOR_DEF;
+- param_info.mpdu_density = CFG_HT_MPDU_DENSITY_DEF;
+- ht_cap->capabilities_info = (__le16) cpu_to_le16(cap.val);
+- ht_cap->mac_ht_params_info = (u8) param_info.val;
++ struct ieee80211_conf *conf = &hw->conf;
++ struct ieee80211_hw_mode *mode = conf->mode;
+
+- ht_cap->supported_mcs_set[0] = 0xff;
+- ht_cap->supported_mcs_set[1] = 0xff;
+- ht_cap->supported_mcs_set[4] =
+- (cap.supported_chan_width_set) ? 0x1: 0x0;
++ if (use_current_config) {
++ ht_cap->cap_info = cpu_to_le16(conf->ht_conf.cap);
++ memcpy(ht_cap->supp_mcs_set,
++ conf->ht_conf.supp_mcs_set, 16);
++ } else {
++ ht_cap->cap_info = cpu_to_le16(mode->ht_info.cap);
++ memcpy(ht_cap->supp_mcs_set,
++ mode->ht_info.supp_mcs_set, 16);
++ }
++ ht_cap->ampdu_params_info =
++ (mode->ht_info.ampdu_factor & IEEE80211_HT_CAP_AMPDU_FACTOR) |
++ ((mode->ht_info.ampdu_density << 2) &
++ IEEE80211_HT_CAP_AMPDU_DENSITY);
+ }
+
+-static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
+- struct ieee80211_ht_capability *ht_cap)
+-{
+- u8 use_wide_channel = 1;
+- struct iwl_priv *priv = hw->priv;
+-
+- IWL_DEBUG_MAC80211("enter: \n");
+- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
+- use_wide_channel = 0;
+-
+- /* no fat tx allowed on 2.4GHZ */
+- if (priv->phymode != MODE_IEEE80211A)
+- use_wide_channel = 0;
+-
+- iwl_set_ht_capab(hw, ht_cap, use_wide_channel);
+- IWL_DEBUG_MAC80211("leave: \n");
+-}
+-#endif /*CONFIG_IWLWIFI_HT*/
++#endif /*CONFIG_IWL4965_HT*/
+
+ /*****************************************************************************
+ *
+@@ -8223,7 +8279,7 @@ static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
*
*****************************************************************************/
-#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL3945_DEBUG
++#ifdef CONFIG_IWL4965_DEBUG
/*
* The following adds a new attribute to the sysfs representation
-@@ -7617,7 +7678,7 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+@@ -8235,7 +8291,7 @@ static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
static ssize_t show_debug_level(struct device_driver *d, char *buf)
{
- return sprintf(buf, "0x%08X\n", iwl_debug_level);
-+ return sprintf(buf, "0x%08X\n", iwl3945_debug_level);
++ return sprintf(buf, "0x%08X\n", iwl4965_debug_level);
}
static ssize_t store_debug_level(struct device_driver *d,
const char *buf, size_t count)
-@@ -7630,7 +7691,7 @@ static ssize_t store_debug_level(struct device_driver *d,
+@@ -8248,7 +8304,7 @@ static ssize_t store_debug_level(struct device_driver *d,
printk(KERN_INFO DRV_NAME
": %s is not in hex or decimal form.\n", buf);
else
- iwl_debug_level = val;
-+ iwl3945_debug_level = val;
++ iwl4965_debug_level = val;
return strnlen(buf, count);
}
-@@ -7638,7 +7699,7 @@ static ssize_t store_debug_level(struct device_driver *d,
+@@ -8256,7 +8312,7 @@ static ssize_t store_debug_level(struct device_driver *d,
static DRIVER_ATTR(debug_level, S_IWUSR | S_IRUGO,
show_debug_level, store_debug_level);
-#endif /* CONFIG_IWLWIFI_DEBUG */
-+#endif /* CONFIG_IWL3945_DEBUG */
++#endif /* CONFIG_IWL4965_DEBUG */
static ssize_t show_rf_kill(struct device *d,
struct device_attribute *attr, char *buf)
-@@ -7649,7 +7710,7 @@ static ssize_t show_rf_kill(struct device *d,
+@@ -8267,7 +8323,7 @@ static ssize_t show_rf_kill(struct device *d,
* 2 - HW based RF kill active
* 3 - Both HW and SW based RF kill active
*/
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
int val = (test_bit(STATUS_RF_KILL_SW, &priv->status) ? 0x1 : 0x0) |
(test_bit(STATUS_RF_KILL_HW, &priv->status) ? 0x2 : 0x0);
-@@ -7660,10 +7721,10 @@ static ssize_t store_rf_kill(struct device *d,
+@@ -8278,10 +8334,10 @@ static ssize_t store_rf_kill(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
mutex_lock(&priv->mutex);
- iwl_radio_kill_sw(priv, buf[0] == '1');
-+ iwl3945_radio_kill_sw(priv, buf[0] == '1');
++ iwl4965_radio_kill_sw(priv, buf[0] == '1');
mutex_unlock(&priv->mutex);
return count;
-@@ -7674,12 +7735,12 @@ static DEVICE_ATTR(rf_kill, S_IWUSR | S_IRUGO, show_rf_kill, store_rf_kill);
+@@ -8292,12 +8348,12 @@ static DEVICE_ATTR(rf_kill, S_IWUSR | S_IRUGO, show_rf_kill, store_rf_kill);
static ssize_t show_temperature(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- if (!iwl_is_alive(priv))
-+ if (!iwl3945_is_alive(priv))
++ if (!iwl4965_is_alive(priv))
return -EAGAIN;
- return sprintf(buf, "%d\n", iwl_hw_get_temperature(priv));
-+ return sprintf(buf, "%d\n", iwl3945_hw_get_temperature(priv));
++ return sprintf(buf, "%d\n", iwl4965_hw_get_temperature(priv));
}
static DEVICE_ATTR(temperature, S_IRUGO, show_temperature, NULL);
-@@ -7688,15 +7749,15 @@ static ssize_t show_rs_window(struct device *d,
+@@ -8306,15 +8362,15 @@ static ssize_t show_rs_window(struct device *d,
struct device_attribute *attr,
char *buf)
{
- struct iwl_priv *priv = d->driver_data;
- return iwl_fill_rs_info(priv->hw, buf, IWL_AP_ID);
-+ struct iwl3945_priv *priv = d->driver_data;
-+ return iwl3945_fill_rs_info(priv->hw, buf, IWL_AP_ID);
++ struct iwl4965_priv *priv = d->driver_data;
++ return iwl4965_fill_rs_info(priv->hw, buf, IWL_AP_ID);
}
static DEVICE_ATTR(rs_window, S_IRUGO, show_rs_window, NULL);
@@ -392289,82 +473647,82 @@
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
return sprintf(buf, "%d\n", priv->user_txpower_limit);
}
-@@ -7704,7 +7765,7 @@ static ssize_t store_tx_power(struct device *d,
+@@ -8322,7 +8378,7 @@ static ssize_t store_tx_power(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
char *p = (char *)buf;
u32 val;
-@@ -7713,7 +7774,7 @@ static ssize_t store_tx_power(struct device *d,
+@@ -8331,7 +8387,7 @@ static ssize_t store_tx_power(struct device *d,
printk(KERN_INFO DRV_NAME
": %s is not in decimal form.\n", buf);
else
- iwl_hw_reg_set_txpower(priv, val);
-+ iwl3945_hw_reg_set_txpower(priv, val);
++ iwl4965_hw_reg_set_txpower(priv, val);
return count;
}
-@@ -7723,7 +7784,7 @@ static DEVICE_ATTR(tx_power, S_IWUSR | S_IRUGO, show_tx_power, store_tx_power);
+@@ -8341,7 +8397,7 @@ static DEVICE_ATTR(tx_power, S_IWUSR | S_IRUGO, show_tx_power, store_tx_power);
static ssize_t show_flags(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
return sprintf(buf, "0x%04X\n", priv->active_rxon.flags);
}
-@@ -7732,19 +7793,19 @@ static ssize_t store_flags(struct device *d,
+@@ -8350,19 +8406,19 @@ static ssize_t store_flags(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
u32 flags = simple_strtoul(buf, NULL, 0);
mutex_lock(&priv->mutex);
if (le32_to_cpu(priv->staging_rxon.flags) != flags) {
/* Cancel any currently running scans... */
- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl3945_scan_cancel_timeout(priv, 100))
++ if (iwl4965_scan_cancel_timeout(priv, 100))
IWL_WARNING("Could not cancel scan.\n");
else {
IWL_DEBUG_INFO("Committing rxon.flags = 0x%04X\n",
flags);
priv->staging_rxon.flags = cpu_to_le32(flags);
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
}
}
mutex_unlock(&priv->mutex);
-@@ -7757,7 +7818,7 @@ static DEVICE_ATTR(flags, S_IWUSR | S_IRUGO, show_flags, store_flags);
+@@ -8375,7 +8431,7 @@ static DEVICE_ATTR(flags, S_IWUSR | S_IRUGO, show_flags, store_flags);
static ssize_t show_filter_flags(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
return sprintf(buf, "0x%04X\n",
le32_to_cpu(priv->active_rxon.filter_flags));
-@@ -7767,20 +7828,20 @@ static ssize_t store_filter_flags(struct device *d,
+@@ -8385,20 +8441,20 @@ static ssize_t store_filter_flags(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
u32 filter_flags = simple_strtoul(buf, NULL, 0);
mutex_lock(&priv->mutex);
if (le32_to_cpu(priv->staging_rxon.filter_flags) != filter_flags) {
/* Cancel any currently running scans... */
- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl3945_scan_cancel_timeout(priv, 100))
++ if (iwl4965_scan_cancel_timeout(priv, 100))
IWL_WARNING("Could not cancel scan.\n");
else {
IWL_DEBUG_INFO("Committing rxon.filter_flags = "
@@ -392372,16 +473730,16 @@
priv->staging_rxon.filter_flags =
cpu_to_le32(filter_flags);
- iwl_commit_rxon(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_commit_rxon(priv);
}
}
mutex_unlock(&priv->mutex);
-@@ -7794,20 +7855,20 @@ static DEVICE_ATTR(filter_flags, S_IWUSR | S_IRUGO, show_filter_flags,
+@@ -8412,20 +8468,20 @@ static DEVICE_ATTR(filter_flags, S_IWUSR | S_IRUGO, show_filter_flags,
static ssize_t show_tune(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
return sprintf(buf, "0x%04X\n",
(priv->phymode << 8) |
@@ -392389,35 +473747,35 @@
}
-static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode);
-+static void iwl3945_set_flags_for_phymode(struct iwl3945_priv *priv, u8 phymode);
++static void iwl4965_set_flags_for_phymode(struct iwl4965_priv *priv, u8 phymode);
static ssize_t store_tune(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
char *p = (char *)buf;
u16 tune = simple_strtoul(p, &p, 0);
u8 phymode = (tune >> 8) & 0xff;
-@@ -7818,9 +7879,9 @@ static ssize_t store_tune(struct device *d,
+@@ -8436,9 +8492,9 @@ static ssize_t store_tune(struct device *d,
mutex_lock(&priv->mutex);
if ((le16_to_cpu(priv->staging_rxon.channel) != channel) ||
(priv->phymode != phymode)) {
- const struct iwl_channel_info *ch_info;
-+ const struct iwl3945_channel_info *ch_info;
++ const struct iwl4965_channel_info *ch_info;
- ch_info = iwl_get_channel_info(priv, phymode, channel);
-+ ch_info = iwl3945_get_channel_info(priv, phymode, channel);
++ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
if (!ch_info) {
IWL_WARNING("Requested invalid phymode/channel "
"combination: %d %d\n", phymode, channel);
-@@ -7829,18 +7890,18 @@ static ssize_t store_tune(struct device *d,
+@@ -8447,18 +8503,18 @@ static ssize_t store_tune(struct device *d,
}
/* Cancel any currently running scans... */
- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl3945_scan_cancel_timeout(priv, 100))
++ if (iwl4965_scan_cancel_timeout(priv, 100))
IWL_WARNING("Could not cancel scan.\n");
else {
IWL_DEBUG_INFO("Committing phymode and "
@@ -392426,48 +473784,48 @@
- iwl_set_rxon_channel(priv, phymode, channel);
- iwl_set_flags_for_phymode(priv, phymode);
-+ iwl3945_set_rxon_channel(priv, phymode, channel);
-+ iwl3945_set_flags_for_phymode(priv, phymode);
++ iwl4965_set_rxon_channel(priv, phymode, channel);
++ iwl4965_set_flags_for_phymode(priv, phymode);
- iwl_set_rate(priv);
- iwl_commit_rxon(priv);
-+ iwl3945_set_rate(priv);
-+ iwl3945_commit_rxon(priv);
++ iwl4965_set_rate(priv);
++ iwl4965_commit_rxon(priv);
}
}
mutex_unlock(&priv->mutex);
-@@ -7850,13 +7911,13 @@ static ssize_t store_tune(struct device *d,
+@@ -8468,13 +8524,13 @@ static ssize_t store_tune(struct device *d,
static DEVICE_ATTR(tune, S_IWUSR | S_IRUGO, show_tune, store_tune);
-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
static ssize_t show_measurement(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
- struct iwl_spectrum_notification measure_report;
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_spectrum_notification measure_report;
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_spectrum_notification measure_report;
u32 size = sizeof(measure_report), len = 0, ofs = 0;
u8 *data = (u8 *) & measure_report;
unsigned long flags;
-@@ -7888,7 +7949,7 @@ static ssize_t store_measurement(struct device *d,
+@@ -8506,7 +8562,7 @@ static ssize_t store_measurement(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
struct ieee80211_measurement_params params = {
.channel = le16_to_cpu(priv->active_rxon.channel),
.start_time = cpu_to_le64(priv->last_tsf),
-@@ -7914,19 +7975,19 @@ static ssize_t store_measurement(struct device *d,
+@@ -8532,20 +8588,20 @@ static ssize_t store_measurement(struct device *d,
IWL_DEBUG_INFO("Invoking measurement of type %d on "
"channel %d (for '%s')\n", type, params.channel, buf);
- iwl_get_measurement(priv, &params, type);
-+ iwl3945_get_measurement(priv, &params, type);
++ iwl4965_get_measurement(priv, &params, type);
return count;
}
@@ -392475,57 +473833,32 @@
static DEVICE_ATTR(measurement, S_IRUSR | S_IWUSR,
show_measurement, store_measurement);
-#endif /* CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT */
-+#endif /* CONFIG_IWL3945_SPECTRUM_MEASUREMENT */
-
- static ssize_t show_rate(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
- unsigned long flags;
- int i;
-
-@@ -7937,13 +7998,13 @@ static ssize_t show_rate(struct device *d,
- i = priv->stations[IWL_STA_ID].current_rate.s.rate;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
-
-- i = iwl_rate_index_from_plcp(i);
-+ i = iwl3945_rate_index_from_plcp(i);
- if (i == -1)
- return sprintf(buf, "0\n");
-
- return sprintf(buf, "%d%s\n",
-- (iwl_rates[i].ieee >> 1),
-- (iwl_rates[i].ieee & 0x1) ? ".5" : "");
-+ (iwl3945_rates[i].ieee >> 1),
-+ (iwl3945_rates[i].ieee & 0x1) ? ".5" : "");
- }
++#endif /* CONFIG_IWL4965_SPECTRUM_MEASUREMENT */
- static DEVICE_ATTR(rate, S_IRUSR, show_rate, NULL);
-@@ -7952,7 +8013,7 @@ static ssize_t store_retry_rate(struct device *d,
+ static ssize_t store_retry_rate(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
priv->retry_rate = simple_strtoul(buf, NULL, 0);
if (priv->retry_rate <= 0)
-@@ -7964,7 +8025,7 @@ static ssize_t store_retry_rate(struct device *d,
+@@ -8557,7 +8613,7 @@ static ssize_t store_retry_rate(struct device *d,
static ssize_t show_retry_rate(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "%d", priv->retry_rate);
}
-@@ -7975,14 +8036,14 @@ static ssize_t store_power_level(struct device *d,
+@@ -8568,14 +8624,14 @@ static ssize_t store_power_level(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
int rc;
int mode;
@@ -392533,147 +473866,147 @@
mutex_lock(&priv->mutex);
- if (!iwl_is_ready(priv)) {
-+ if (!iwl3945_is_ready(priv)) {
++ if (!iwl4965_is_ready(priv)) {
rc = -EAGAIN;
goto out;
}
-@@ -7993,7 +8054,7 @@ static ssize_t store_power_level(struct device *d,
+@@ -8586,7 +8642,7 @@ static ssize_t store_power_level(struct device *d,
mode |= IWL_POWER_ENABLED;
if (mode != priv->power_mode) {
- rc = iwl_send_power_mode(priv, IWL_POWER_LEVEL(mode));
-+ rc = iwl3945_send_power_mode(priv, IWL_POWER_LEVEL(mode));
++ rc = iwl4965_send_power_mode(priv, IWL_POWER_LEVEL(mode));
if (rc) {
IWL_DEBUG_MAC80211("failed setting power mode.\n");
goto out;
-@@ -8029,7 +8090,7 @@ static const s32 period_duration[] = {
+@@ -8622,7 +8678,7 @@ static const s32 period_duration[] = {
static ssize_t show_power_level(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
int level = IWL_POWER_LEVEL(priv->power_mode);
char *p = buf;
-@@ -8064,18 +8125,18 @@ static DEVICE_ATTR(power_level, S_IWUSR | S_IRUSR, show_power_level,
+@@ -8657,18 +8713,18 @@ static DEVICE_ATTR(power_level, S_IWUSR | S_IRUSR, show_power_level,
static ssize_t show_channels(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
int len = 0, i;
struct ieee80211_channel *channels = NULL;
const struct ieee80211_hw_mode *hw_mode = NULL;
int count = 0;
- if (!iwl_is_ready(priv))
-+ if (!iwl3945_is_ready(priv))
++ if (!iwl4965_is_ready(priv))
return -EAGAIN;
- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211G);
-+ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211G);
++ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211G);
if (!hw_mode)
- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211B);
-+ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211B);
++ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211B);
if (hw_mode) {
channels = hw_mode->channels;
count = hw_mode->num_channels;
-@@ -8102,7 +8163,7 @@ static ssize_t show_channels(struct device *d,
+@@ -8695,7 +8751,7 @@ static ssize_t show_channels(struct device *d,
flag & IEEE80211_CHAN_W_ACTIVE_SCAN ?
"active/passive" : "passive only");
- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211A);
-+ hw_mode = iwl3945_get_hw_mode(priv, MODE_IEEE80211A);
++ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211A);
if (hw_mode) {
channels = hw_mode->channels;
count = hw_mode->num_channels;
-@@ -8138,17 +8199,17 @@ static DEVICE_ATTR(channels, S_IRUSR, show_channels, NULL);
+@@ -8731,17 +8787,17 @@ static DEVICE_ATTR(channels, S_IRUSR, show_channels, NULL);
static ssize_t show_statistics(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
- u32 size = sizeof(struct iwl_notif_statistics);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
-+ u32 size = sizeof(struct iwl3945_notif_statistics);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
++ u32 size = sizeof(struct iwl4965_notif_statistics);
u32 len = 0, ofs = 0;
u8 *data = (u8 *) & priv->statistics;
int rc = 0;
- if (!iwl_is_alive(priv))
-+ if (!iwl3945_is_alive(priv))
++ if (!iwl4965_is_alive(priv))
return -EAGAIN;
mutex_lock(&priv->mutex);
- rc = iwl_send_statistics_request(priv);
-+ rc = iwl3945_send_statistics_request(priv);
++ rc = iwl4965_send_statistics_request(priv);
mutex_unlock(&priv->mutex);
if (rc) {
-@@ -8176,9 +8237,9 @@ static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
+@@ -8769,9 +8825,9 @@ static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
static ssize_t show_antenna(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
- if (!iwl_is_alive(priv))
-+ if (!iwl3945_is_alive(priv))
++ if (!iwl4965_is_alive(priv))
return -EAGAIN;
return sprintf(buf, "%d\n", priv->antenna);
-@@ -8189,7 +8250,7 @@ static ssize_t store_antenna(struct device *d,
+@@ -8782,7 +8838,7 @@ static ssize_t store_antenna(struct device *d,
const char *buf, size_t count)
{
int ant;
- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl3945_priv *priv = dev_get_drvdata(d);
++ struct iwl4965_priv *priv = dev_get_drvdata(d);
if (count == 0)
return 0;
-@@ -8201,7 +8262,7 @@ static ssize_t store_antenna(struct device *d,
+@@ -8794,7 +8850,7 @@ static ssize_t store_antenna(struct device *d,
if ((ant >= 0) && (ant <= 2)) {
IWL_DEBUG_INFO("Setting antenna select to %d.\n", ant);
- priv->antenna = (enum iwl_antenna)ant;
-+ priv->antenna = (enum iwl3945_antenna)ant;
++ priv->antenna = (enum iwl4965_antenna)ant;
} else
IWL_DEBUG_INFO("Bad antenna select value %d.\n", ant);
-@@ -8214,8 +8275,8 @@ static DEVICE_ATTR(antenna, S_IWUSR | S_IRUGO, show_antenna, store_antenna);
+@@ -8807,8 +8863,8 @@ static DEVICE_ATTR(antenna, S_IWUSR | S_IRUGO, show_antenna, store_antenna);
static ssize_t show_status(struct device *d,
struct device_attribute *attr, char *buf)
{
- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
- if (!iwl_is_alive(priv))
-+ struct iwl3945_priv *priv = (struct iwl3945_priv *)d->driver_data;
-+ if (!iwl3945_is_alive(priv))
++ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
++ if (!iwl4965_is_alive(priv))
return -EAGAIN;
return sprintf(buf, "0x%08x\n", (int)priv->status);
}
-@@ -8229,7 +8290,7 @@ static ssize_t dump_error_log(struct device *d,
+@@ -8822,7 +8878,7 @@ static ssize_t dump_error_log(struct device *d,
char *p = (char *)buf;
if (p[0] == '1')
- iwl_dump_nic_error_log((struct iwl_priv *)d->driver_data);
-+ iwl3945_dump_nic_error_log((struct iwl3945_priv *)d->driver_data);
++ iwl4965_dump_nic_error_log((struct iwl4965_priv *)d->driver_data);
return strnlen(buf, count);
}
-@@ -8243,7 +8304,7 @@ static ssize_t dump_event_log(struct device *d,
+@@ -8836,7 +8892,7 @@ static ssize_t dump_event_log(struct device *d,
char *p = (char *)buf;
if (p[0] == '1')
- iwl_dump_nic_event_log((struct iwl_priv *)d->driver_data);
-+ iwl3945_dump_nic_event_log((struct iwl3945_priv *)d->driver_data);
++ iwl4965_dump_nic_event_log((struct iwl4965_priv *)d->driver_data);
return strnlen(buf, count);
}
-@@ -8256,34 +8317,34 @@ static DEVICE_ATTR(dump_events, S_IWUSR, NULL, dump_event_log);
+@@ -8849,34 +8905,34 @@ static DEVICE_ATTR(dump_events, S_IWUSR, NULL, dump_event_log);
*
*****************************************************************************/
-static void iwl_setup_deferred_work(struct iwl_priv *priv)
-+static void iwl3945_setup_deferred_work(struct iwl3945_priv *priv)
++static void iwl4965_setup_deferred_work(struct iwl4965_priv *priv)
{
priv->workqueue = create_workqueue(DRV_NAME);
@@ -392693,40 +474026,40 @@
- INIT_DELAYED_WORK(&priv->scan_check, iwl_bg_scan_check);
-
- iwl_hw_setup_deferred_work(priv);
-+ INIT_WORK(&priv->up, iwl3945_bg_up);
-+ INIT_WORK(&priv->restart, iwl3945_bg_restart);
-+ INIT_WORK(&priv->rx_replenish, iwl3945_bg_rx_replenish);
-+ INIT_WORK(&priv->scan_completed, iwl3945_bg_scan_completed);
-+ INIT_WORK(&priv->request_scan, iwl3945_bg_request_scan);
-+ INIT_WORK(&priv->abort_scan, iwl3945_bg_abort_scan);
-+ INIT_WORK(&priv->rf_kill, iwl3945_bg_rf_kill);
-+ INIT_WORK(&priv->beacon_update, iwl3945_bg_beacon_update);
-+ INIT_DELAYED_WORK(&priv->post_associate, iwl3945_bg_post_associate);
-+ INIT_DELAYED_WORK(&priv->init_alive_start, iwl3945_bg_init_alive_start);
-+ INIT_DELAYED_WORK(&priv->alive_start, iwl3945_bg_alive_start);
-+ INIT_DELAYED_WORK(&priv->scan_check, iwl3945_bg_scan_check);
++ INIT_WORK(&priv->up, iwl4965_bg_up);
++ INIT_WORK(&priv->restart, iwl4965_bg_restart);
++ INIT_WORK(&priv->rx_replenish, iwl4965_bg_rx_replenish);
++ INIT_WORK(&priv->scan_completed, iwl4965_bg_scan_completed);
++ INIT_WORK(&priv->request_scan, iwl4965_bg_request_scan);
++ INIT_WORK(&priv->abort_scan, iwl4965_bg_abort_scan);
++ INIT_WORK(&priv->rf_kill, iwl4965_bg_rf_kill);
++ INIT_WORK(&priv->beacon_update, iwl4965_bg_beacon_update);
++ INIT_DELAYED_WORK(&priv->post_associate, iwl4965_bg_post_associate);
++ INIT_DELAYED_WORK(&priv->init_alive_start, iwl4965_bg_init_alive_start);
++ INIT_DELAYED_WORK(&priv->alive_start, iwl4965_bg_alive_start);
++ INIT_DELAYED_WORK(&priv->scan_check, iwl4965_bg_scan_check);
+
-+ iwl3945_hw_setup_deferred_work(priv);
++ iwl4965_hw_setup_deferred_work(priv);
tasklet_init(&priv->irq_tasklet, (void (*)(unsigned long))
- iwl_irq_tasklet, (unsigned long)priv);
-+ iwl3945_irq_tasklet, (unsigned long)priv);
++ iwl4965_irq_tasklet, (unsigned long)priv);
}
-static void iwl_cancel_deferred_work(struct iwl_priv *priv)
-+static void iwl3945_cancel_deferred_work(struct iwl3945_priv *priv)
++static void iwl4965_cancel_deferred_work(struct iwl4965_priv *priv)
{
- iwl_hw_cancel_deferred_work(priv);
-+ iwl3945_hw_cancel_deferred_work(priv);
++ iwl4965_hw_cancel_deferred_work(priv);
cancel_delayed_work_sync(&priv->init_alive_start);
cancel_delayed_work(&priv->scan_check);
-@@ -8292,14 +8353,14 @@ static void iwl_cancel_deferred_work(struct iwl_priv *priv)
+@@ -8885,14 +8941,14 @@ static void iwl_cancel_deferred_work(struct iwl_priv *priv)
cancel_work_sync(&priv->beacon_update);
}
-static struct attribute *iwl_sysfs_entries[] = {
-+static struct attribute *iwl3945_sysfs_entries[] = {
++static struct attribute *iwl4965_sysfs_entries[] = {
&dev_attr_antenna.attr,
&dev_attr_channels.attr,
&dev_attr_dump_errors.attr,
@@ -392734,19 +474067,19 @@
&dev_attr_flags.attr,
&dev_attr_filter_flags.attr,
-#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL3945_SPECTRUM_MEASUREMENT
++#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
&dev_attr_measurement.attr,
#endif
&dev_attr_power_level.attr,
-@@ -8316,45 +8377,48 @@ static struct attribute *iwl_sysfs_entries[] = {
+@@ -8908,54 +8964,56 @@ static struct attribute *iwl_sysfs_entries[] = {
NULL
};
-static struct attribute_group iwl_attribute_group = {
-+static struct attribute_group iwl3945_attribute_group = {
++static struct attribute_group iwl4965_attribute_group = {
.name = NULL, /* put in device directory */
- .attrs = iwl_sysfs_entries,
-+ .attrs = iwl3945_sysfs_entries,
++ .attrs = iwl4965_sysfs_entries,
};
-static struct ieee80211_ops iwl_hw_ops = {
@@ -392765,33 +474098,51 @@
- .get_tsf = iwl_mac_get_tsf,
- .reset_tsf = iwl_mac_reset_tsf,
- .beacon_update = iwl_mac_beacon_update,
+-#ifdef CONFIG_IWLWIFI_HT
+- .conf_ht = iwl_mac_conf_ht,
+- .get_ht_capab = iwl_mac_get_ht_capab,
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- .ht_tx_agg_start = iwl_mac_ht_tx_agg_start,
+- .ht_tx_agg_stop = iwl_mac_ht_tx_agg_stop,
+- .ht_rx_agg_start = iwl_mac_ht_rx_agg_start,
+- .ht_rx_agg_stop = iwl_mac_ht_rx_agg_stop,
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
- .hw_scan = iwl_mac_hw_scan
-+static struct ieee80211_ops iwl3945_hw_ops = {
-+ .tx = iwl3945_mac_tx,
-+ .start = iwl3945_mac_start,
-+ .stop = iwl3945_mac_stop,
-+ .add_interface = iwl3945_mac_add_interface,
-+ .remove_interface = iwl3945_mac_remove_interface,
-+ .config = iwl3945_mac_config,
-+ .config_interface = iwl3945_mac_config_interface,
-+ .configure_filter = iwl3945_configure_filter,
-+ .set_key = iwl3945_mac_set_key,
-+ .get_stats = iwl3945_mac_get_stats,
-+ .get_tx_stats = iwl3945_mac_get_tx_stats,
-+ .conf_tx = iwl3945_mac_conf_tx,
-+ .get_tsf = iwl3945_mac_get_tsf,
-+ .reset_tsf = iwl3945_mac_reset_tsf,
-+ .beacon_update = iwl3945_mac_beacon_update,
-+ .hw_scan = iwl3945_mac_hw_scan
++static struct ieee80211_ops iwl4965_hw_ops = {
++ .tx = iwl4965_mac_tx,
++ .start = iwl4965_mac_start,
++ .stop = iwl4965_mac_stop,
++ .add_interface = iwl4965_mac_add_interface,
++ .remove_interface = iwl4965_mac_remove_interface,
++ .config = iwl4965_mac_config,
++ .config_interface = iwl4965_mac_config_interface,
++ .configure_filter = iwl4965_configure_filter,
++ .set_key = iwl4965_mac_set_key,
++ .get_stats = iwl4965_mac_get_stats,
++ .get_tx_stats = iwl4965_mac_get_tx_stats,
++ .conf_tx = iwl4965_mac_conf_tx,
++ .get_tsf = iwl4965_mac_get_tsf,
++ .reset_tsf = iwl4965_mac_reset_tsf,
++ .beacon_update = iwl4965_mac_beacon_update,
++ .bss_info_changed = iwl4965_bss_info_changed,
++#ifdef CONFIG_IWL4965_HT
++ .conf_ht = iwl4965_mac_conf_ht,
++ .ampdu_action = iwl4965_mac_ampdu_action,
++#ifdef CONFIG_IWL4965_HT_AGG
++ .ht_tx_agg_start = iwl4965_mac_ht_tx_agg_start,
++ .ht_tx_agg_stop = iwl4965_mac_ht_tx_agg_stop,
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
++ .hw_scan = iwl4965_mac_hw_scan
};
-static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
-+static int iwl3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
++static int iwl4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
int err = 0;
- u32 pci_id;
- struct iwl_priv *priv;
-+ struct iwl3945_priv *priv;
++ struct iwl4965_priv *priv;
struct ieee80211_hw *hw;
int i;
+ DECLARE_MAC_BUF(mac);
@@ -392799,62 +474150,71 @@
- if (iwl_param_disable_hw_scan) {
+ /* Disabling hardware scan means that mac80211 will perform scans
+ * "the hard way", rather than using device's scan. */
-+ if (iwl3945_param_disable_hw_scan) {
++ if (iwl4965_param_disable_hw_scan) {
IWL_DEBUG_INFO("Disabling hw_scan\n");
- iwl_hw_ops.hw_scan = NULL;
-+ iwl3945_hw_ops.hw_scan = NULL;
++ iwl4965_hw_ops.hw_scan = NULL;
}
- if ((iwl_param_queues_num > IWL_MAX_NUM_QUEUES) ||
- (iwl_param_queues_num < IWL_MIN_NUM_QUEUES)) {
-+ if ((iwl3945_param_queues_num > IWL_MAX_NUM_QUEUES) ||
-+ (iwl3945_param_queues_num < IWL_MIN_NUM_QUEUES)) {
++ if ((iwl4965_param_queues_num > IWL_MAX_NUM_QUEUES) ||
++ (iwl4965_param_queues_num < IWL_MIN_NUM_QUEUES)) {
IWL_ERROR("invalid queues_num, should be between %d and %d\n",
IWL_MIN_NUM_QUEUES, IWL_MAX_NUM_QUEUES);
err = -EINVAL;
-@@ -8363,7 +8427,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -8964,7 +9022,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* mac80211 allocates memory for this device instance, including
* space for this driver's private structure */
- hw = ieee80211_alloc_hw(sizeof(struct iwl_priv), &iwl_hw_ops);
-+ hw = ieee80211_alloc_hw(sizeof(struct iwl3945_priv), &iwl3945_hw_ops);
++ hw = ieee80211_alloc_hw(sizeof(struct iwl4965_priv), &iwl4965_hw_ops);
if (hw == NULL) {
IWL_ERROR("Can not allocate network device\n");
err = -ENOMEM;
-@@ -8378,9 +8442,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -8979,9 +9037,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
priv->hw = hw;
priv->pci_dev = pdev;
- priv->antenna = (enum iwl_antenna)iwl_param_antenna;
-#ifdef CONFIG_IWLWIFI_DEBUG
- iwl_debug_level = iwl_param_debug;
-+
-+ /* Select antenna (may be helpful if only one antenna is connected) */
-+ priv->antenna = (enum iwl3945_antenna)iwl3945_param_antenna;
-+#ifdef CONFIG_IWL3945_DEBUG
-+ iwl3945_debug_level = iwl3945_param_debug;
++ priv->antenna = (enum iwl4965_antenna)iwl4965_param_antenna;
++#ifdef CONFIG_IWL4965_DEBUG
++ iwl4965_debug_level = iwl4965_param_debug;
atomic_set(&priv->restrict_refcnt, 0);
#endif
priv->retry_rate = 1;
-@@ -8399,6 +8465,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -9000,12 +9058,14 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Tell mac80211 our Tx characteristics */
hw->flags = IEEE80211_HW_HOST_GEN_BEACON_TEMPLATE;
-+ /* 4 EDCA QOS priorities */
++ /* Default value; 4 EDCA QOS priorities */
hw->queues = 4;
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
++#ifdef CONFIG_IWL4965_HT
++#ifdef CONFIG_IWL4965_HT_AGG
++ /* Enhanced value; more queues, to support 11n aggregation */
+ hw->queues = 16;
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
++#endif /* CONFIG_IWL4965_HT_AGG */
++#endif /* CONFIG_IWL4965_HT */
spin_lock_init(&priv->lock);
-@@ -8419,7 +8486,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ spin_lock_init(&priv->power_data.lock);
+@@ -9026,7 +9086,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_master(pdev);
- iwl_clear_stations_table(priv);
+ /* Clear the driver's (not device's) station table */
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_clear_stations_table(priv);
priv->data_retry_limit = -1;
priv->ieee_channels = NULL;
-@@ -8438,9 +8506,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -9045,9 +9106,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err = pci_request_regions(pdev, DRV_NAME);
if (err)
goto out_pci_disable_device;
@@ -392866,22 +474226,36 @@
priv->hw_base = pci_iomap(pdev, 0, 0);
if (!priv->hw_base) {
err = -ENODEV;
-@@ -8453,7 +8523,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -9060,7 +9123,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Initialize module parameter values here */
- if (iwl_param_disable) {
+ /* Disable radio (SW RF KILL) via parameter when loading driver */
-+ if (iwl3945_param_disable) {
++ if (iwl4965_param_disable) {
set_bit(STATUS_RF_KILL_SW, &priv->status);
IWL_DEBUG_INFO("Radio disabled.\n");
}
-@@ -8488,78 +8559,82 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- priv->is_abg ? "A" : "");
+@@ -9069,91 +9133,92 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ priv->ps_mode = 0;
+ priv->use_ant_b_for_management_frame = 1; /* start with ant B */
+- priv->is_ht_enabled = 1;
+- priv->channel_width = IWL_CHANNEL_WIDTH_40MHZ;
+ priv->valid_antenna = 0x7; /* assume all 3 connected */
+ priv->ps_mode = IWL_MIMO_PS_NONE;
+- priv->cck_power_index_compensation = iwl_read32(
+- priv, CSR_HW_REV_WA_REG);
+
++ /* Choose which receivers/antennas to use */
+ iwl4965_set_rxon_chain(priv);
+
+ printk(KERN_INFO DRV_NAME
+ ": Detected Intel Wireless WiFi Link 4965AGN\n");
/* Device-specific setup */
- if (iwl_hw_set_hw_setting(priv)) {
-+ if (iwl3945_hw_set_hw_setting(priv)) {
++ if (iwl4965_hw_set_hw_setting(priv)) {
IWL_ERROR("failed to set hw settings\n");
- mutex_unlock(&priv->mutex);
goto out_iounmap;
@@ -392889,24 +474263,24 @@
-#ifdef CONFIG_IWLWIFI_QOS
- if (iwl_param_qos_enable)
-+#ifdef CONFIG_IWL3945_QOS
-+ if (iwl3945_param_qos_enable)
++#ifdef CONFIG_IWL4965_QOS
++ if (iwl4965_param_qos_enable)
priv->qos_data.qos_enable = 1;
- iwl_reset_qos(priv);
-+ iwl3945_reset_qos(priv);
++ iwl4965_reset_qos(priv);
priv->qos_data.qos_active = 0;
priv->qos_data.qos_cap.val = 0;
-#endif /* CONFIG_IWLWIFI_QOS */
-+#endif /* CONFIG_IWL3945_QOS */
++#endif /* CONFIG_IWL4965_QOS */
- iwl_set_rxon_channel(priv, MODE_IEEE80211G, 6);
- iwl_setup_deferred_work(priv);
- iwl_setup_rx_handlers(priv);
-+ iwl3945_set_rxon_channel(priv, MODE_IEEE80211G, 6);
-+ iwl3945_setup_deferred_work(priv);
-+ iwl3945_setup_rx_handlers(priv);
++ iwl4965_set_rxon_channel(priv, MODE_IEEE80211G, 6);
++ iwl4965_setup_deferred_work(priv);
++ iwl4965_setup_rx_handlers(priv);
priv->rates_mask = IWL_RATES_MASK;
/* If power management is turned on, default to AC mode */
@@ -392914,8 +474288,7 @@
priv->user_txpower_limit = IWL_DEFAULT_TX_POWER;
- pci_enable_msi(pdev);
-+ iwl3945_disable_interrupts(priv);
-
+-
- err = request_irq(pdev->irq, iwl_isr, IRQF_SHARED, DRV_NAME, priv);
- if (err) {
- IWL_ERROR("Error allocating IRQ %d\n", pdev->irq);
@@ -392923,9 +474296,10 @@
- }
-
- mutex_lock(&priv->mutex);
--
++ iwl4965_disable_interrupts(priv);
+
- err = sysfs_create_group(&pdev->dev.kobj, &iwl_attribute_group);
-+ err = sysfs_create_group(&pdev->dev.kobj, &iwl3945_attribute_group);
++ err = sysfs_create_group(&pdev->dev.kobj, &iwl4965_attribute_group);
if (err) {
IWL_ERROR("failed to create sysfs device attributes\n");
- mutex_unlock(&priv->mutex);
@@ -392936,11 +474310,11 @@
- * ucode filename and max sizes are card-specific. */
- err = iwl_read_ucode(priv);
+ /* nic init */
-+ iwl3945_set_bit(priv, CSR_GIO_CHICKEN_BITS,
++ iwl4965_set_bit(priv, CSR_GIO_CHICKEN_BITS,
+ CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER);
+
-+ iwl3945_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-+ err = iwl3945_poll_bit(priv, CSR_GP_CNTRL,
++ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
++ err = iwl4965_poll_bit(priv, CSR_GP_CNTRL,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
+ if (err < 0) {
@@ -392948,7 +474322,7 @@
+ goto out_remove_sysfs;
+ }
+ /* Read the EEPROM */
-+ err = iwl3945_eeprom_init(priv);
++ err = iwl4965_eeprom_init(priv);
if (err) {
- IWL_ERROR("Could not read microcode: %d\n", err);
- mutex_unlock(&priv->mutex);
@@ -392964,7 +474338,7 @@
- mutex_unlock(&priv->mutex);
-
- IWL_DEBUG_INFO("Queing UP work.\n");
-+ iwl3945_rate_control_register(priv->hw);
++ iwl4965_rate_control_register(priv->hw);
+ err = ieee80211_register_hw(priv->hw);
+ if (err) {
+ IWL_ERROR("Failed to register network device (error %d)\n", err);
@@ -392984,7 +474358,7 @@
-
- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
+ out_remove_sysfs:
-+ sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group);
++ sysfs_remove_group(&pdev->dev.kobj, &iwl4965_attribute_group);
out_release_irq:
- free_irq(pdev->irq, priv);
@@ -392994,66 +474368,66 @@
destroy_workqueue(priv->workqueue);
priv->workqueue = NULL;
- iwl_unset_hw_setting(priv);
-+ iwl3945_unset_hw_setting(priv);
++ iwl4965_unset_hw_setting(priv);
out_iounmap:
pci_iounmap(pdev, priv->hw_base);
-@@ -8574,9 +8649,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -9168,9 +9233,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return err;
}
-static void iwl_pci_remove(struct pci_dev *pdev)
-+static void iwl3945_pci_remove(struct pci_dev *pdev)
++static void iwl4965_pci_remove(struct pci_dev *pdev)
{
- struct iwl_priv *priv = pci_get_drvdata(pdev);
-+ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
++ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
struct list_head *p, *q;
int i;
-@@ -8587,43 +8662,41 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+@@ -9181,43 +9246,41 @@ static void iwl_pci_remove(struct pci_dev *pdev)
set_bit(STATUS_EXIT_PENDING, &priv->status);
- iwl_down(priv);
-+ iwl3945_down(priv);
++ iwl4965_down(priv);
/* Free MAC hash list for ADHOC */
for (i = 0; i < IWL_IBSS_MAC_HASH_SIZE; i++) {
list_for_each_safe(p, q, &priv->ibss_mac_hash[i]) {
list_del(p);
- kfree(list_entry(p, struct iwl_ibss_seq, list));
-+ kfree(list_entry(p, struct iwl3945_ibss_seq, list));
++ kfree(list_entry(p, struct iwl4965_ibss_seq, list));
}
}
- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
-+ sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group);
++ sysfs_remove_group(&pdev->dev.kobj, &iwl4965_attribute_group);
- iwl_dealloc_ucode_pci(priv);
-+ iwl3945_dealloc_ucode_pci(priv);
++ iwl4965_dealloc_ucode_pci(priv);
if (priv->rxq.bd)
- iwl_rx_queue_free(priv, &priv->rxq);
- iwl_hw_txq_ctx_free(priv);
-+ iwl3945_rx_queue_free(priv, &priv->rxq);
-+ iwl3945_hw_txq_ctx_free(priv);
++ iwl4965_rx_queue_free(priv, &priv->rxq);
++ iwl4965_hw_txq_ctx_free(priv);
- iwl_unset_hw_setting(priv);
- iwl_clear_stations_table(priv);
-+ iwl3945_unset_hw_setting(priv);
-+ iwl3945_clear_stations_table(priv);
++ iwl4965_unset_hw_setting(priv);
++ iwl4965_clear_stations_table(priv);
if (priv->mac80211_registered) {
ieee80211_unregister_hw(priv->hw);
- iwl_rate_control_unregister(priv->hw);
-+ iwl3945_rate_control_unregister(priv->hw);
++ iwl4965_rate_control_unregister(priv->hw);
}
/*netif_stop_queue(dev); */
flush_workqueue(priv->workqueue);
- /* ieee80211_unregister_hw calls iwl_mac_stop, which flushes
-+ /* ieee80211_unregister_hw calls iwl3945_mac_stop, which flushes
++ /* ieee80211_unregister_hw calls iwl4965_mac_stop, which flushes
* priv->workqueue... so we can't take down the workqueue
* until now... */
destroy_workqueue(priv->workqueue);
@@ -393064,15 +474438,15 @@
pci_iounmap(pdev, priv->hw_base);
pci_release_regions(pdev);
pci_disable_device(pdev);
-@@ -8642,93 +8715,31 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+@@ -9236,93 +9299,31 @@ static void iwl_pci_remove(struct pci_dev *pdev)
#ifdef CONFIG_PM
-static int iwl_pci_suspend(struct pci_dev *pdev, pm_message_t state)
-+static int iwl3945_pci_suspend(struct pci_dev *pdev, pm_message_t state)
++static int iwl4965_pci_suspend(struct pci_dev *pdev, pm_message_t state)
{
- struct iwl_priv *priv = pci_get_drvdata(pdev);
-+ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
++ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
- set_bit(STATUS_IN_SUSPEND, &priv->status);
-
@@ -393083,7 +474457,7 @@
- ieee80211_stop_queues(priv->hw);
+ if (priv->is_open) {
+ set_bit(STATUS_IN_SUSPEND, &priv->status);
-+ iwl3945_mac_stop(priv->hw);
++ iwl4965_mac_stop(priv->hw);
+ priv->is_open = 1;
+ }
@@ -393095,7 +474469,8 @@
}
-static void iwl_resume(struct iwl_priv *priv)
--{
++static int iwl4965_pci_resume(struct pci_dev *pdev)
+ {
- unsigned long flags;
-
- /* The following it a temporary work around due to the
@@ -393142,18 +474517,17 @@
-}
-
-static int iwl_pci_resume(struct pci_dev *pdev)
-+static int iwl3945_pci_resume(struct pci_dev *pdev)
- {
+-{
- struct iwl_priv *priv = pci_get_drvdata(pdev);
- int err;
-
- printk(KERN_INFO "Coming out of suspend...\n");
-+ struct iwl3945_priv *priv = pci_get_drvdata(pdev);
++ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
pci_set_power_state(pdev, PCI_D0);
- err = pci_enable_device(pdev);
- pci_restore_state(pdev);
-
+-
- /*
- * Suspend/Resume resets the PCI configuration space, so we have to
- * re-disable the RETRY_TIMEOUT register (0x41) to keep PCI Tx retries
@@ -393161,127298 +474535,135056 @@
- * here since it only restores the first 64 bytes pci config header.
- */
- pci_write_config_byte(pdev, 0x41, 0x00);
--
+
- iwl_resume(priv);
+ if (priv->is_open)
-+ iwl3945_mac_start(priv->hw);
++ iwl4965_mac_start(priv->hw);
+ clear_bit(STATUS_IN_SUSPEND, &priv->status);
return 0;
}
-@@ -8740,33 +8751,33 @@ static int iwl_pci_resume(struct pci_dev *pdev)
+@@ -9334,33 +9335,33 @@ static int iwl_pci_resume(struct pci_dev *pdev)
*
*****************************************************************************/
-static struct pci_driver iwl_driver = {
-+static struct pci_driver iwl3945_driver = {
++static struct pci_driver iwl4965_driver = {
.name = DRV_NAME,
- .id_table = iwl_hw_card_ids,
- .probe = iwl_pci_probe,
- .remove = __devexit_p(iwl_pci_remove),
-+ .id_table = iwl3945_hw_card_ids,
-+ .probe = iwl3945_pci_probe,
-+ .remove = __devexit_p(iwl3945_pci_remove),
++ .id_table = iwl4965_hw_card_ids,
++ .probe = iwl4965_pci_probe,
++ .remove = __devexit_p(iwl4965_pci_remove),
#ifdef CONFIG_PM
- .suspend = iwl_pci_suspend,
- .resume = iwl_pci_resume,
-+ .suspend = iwl3945_pci_suspend,
-+ .resume = iwl3945_pci_resume,
++ .suspend = iwl4965_pci_suspend,
++ .resume = iwl4965_pci_resume,
#endif
};
-static int __init iwl_init(void)
-+static int __init iwl3945_init(void)
++static int __init iwl4965_init(void)
{
int ret;
printk(KERN_INFO DRV_NAME ": " DRV_DESCRIPTION ", " DRV_VERSION "\n");
printk(KERN_INFO DRV_NAME ": " DRV_COPYRIGHT "\n");
- ret = pci_register_driver(&iwl_driver);
-+ ret = pci_register_driver(&iwl3945_driver);
++ ret = pci_register_driver(&iwl4965_driver);
if (ret) {
IWL_ERROR("Unable to initialize PCI module\n");
return ret;
}
-#ifdef CONFIG_IWLWIFI_DEBUG
- ret = driver_create_file(&iwl_driver.driver, &driver_attr_debug_level);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ ret = driver_create_file(&iwl3945_driver.driver, &driver_attr_debug_level);
++#ifdef CONFIG_IWL4965_DEBUG
++ ret = driver_create_file(&iwl4965_driver.driver, &driver_attr_debug_level);
if (ret) {
IWL_ERROR("Unable to create driver sysfs file\n");
- pci_unregister_driver(&iwl_driver);
-+ pci_unregister_driver(&iwl3945_driver);
++ pci_unregister_driver(&iwl4965_driver);
return ret;
}
#endif
-@@ -8774,32 +8785,32 @@ static int __init iwl_init(void)
+@@ -9368,32 +9369,34 @@ static int __init iwl_init(void)
return ret;
}
-static void __exit iwl_exit(void)
-+static void __exit iwl3945_exit(void)
++static void __exit iwl4965_exit(void)
{
-#ifdef CONFIG_IWLWIFI_DEBUG
- driver_remove_file(&iwl_driver.driver, &driver_attr_debug_level);
-+#ifdef CONFIG_IWL3945_DEBUG
-+ driver_remove_file(&iwl3945_driver.driver, &driver_attr_debug_level);
++#ifdef CONFIG_IWL4965_DEBUG
++ driver_remove_file(&iwl4965_driver.driver, &driver_attr_debug_level);
#endif
- pci_unregister_driver(&iwl_driver);
-+ pci_unregister_driver(&iwl3945_driver);
++ pci_unregister_driver(&iwl4965_driver);
}
-module_param_named(antenna, iwl_param_antenna, int, 0444);
-+module_param_named(antenna, iwl3945_param_antenna, int, 0444);
++module_param_named(antenna, iwl4965_param_antenna, int, 0444);
MODULE_PARM_DESC(antenna, "select antenna (1=Main, 2=Aux, default 0 [both])");
-module_param_named(disable, iwl_param_disable, int, 0444);
-+module_param_named(disable, iwl3945_param_disable, int, 0444);
++module_param_named(disable, iwl4965_param_disable, int, 0444);
MODULE_PARM_DESC(disable, "manually disable the radio (default 0 [radio on])");
-module_param_named(hwcrypto, iwl_param_hwcrypto, int, 0444);
-+module_param_named(hwcrypto, iwl3945_param_hwcrypto, int, 0444);
++module_param_named(hwcrypto, iwl4965_param_hwcrypto, int, 0444);
MODULE_PARM_DESC(hwcrypto,
"using hardware crypto engine (default 0 [software])\n");
-module_param_named(debug, iwl_param_debug, int, 0444);
-+module_param_named(debug, iwl3945_param_debug, int, 0444);
++module_param_named(debug, iwl4965_param_debug, int, 0444);
MODULE_PARM_DESC(debug, "debug output mask");
-module_param_named(disable_hw_scan, iwl_param_disable_hw_scan, int, 0444);
-+module_param_named(disable_hw_scan, iwl3945_param_disable_hw_scan, int, 0444);
++module_param_named(disable_hw_scan, iwl4965_param_disable_hw_scan, int, 0444);
MODULE_PARM_DESC(disable_hw_scan, "disable hardware scanning (default 0)");
-module_param_named(queues_num, iwl_param_queues_num, int, 0444);
-+module_param_named(queues_num, iwl3945_param_queues_num, int, 0444);
++module_param_named(queues_num, iwl4965_param_queues_num, int, 0444);
MODULE_PARM_DESC(queues_num, "number of hw queues.");
/* QoS */
-module_param_named(qos_enable, iwl_param_qos_enable, int, 0444);
-+module_param_named(qos_enable, iwl3945_param_qos_enable, int, 0444);
++module_param_named(qos_enable, iwl4965_param_qos_enable, int, 0444);
MODULE_PARM_DESC(qos_enable, "enable all QoS functionality");
++module_param_named(amsdu_size_8K, iwl4965_param_amsdu_size_8K, int, 0444);
++MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size");
-module_exit(iwl_exit);
-module_init(iwl_init);
-+module_exit(iwl3945_exit);
-+module_init(iwl3945_init);
-diff --git a/drivers/net/wireless/iwlwifi/iwl4965-base.c b/drivers/net/wireless/iwlwifi/iwl4965-base.c
-index 15a45f4..c86da5c 100644
---- a/drivers/net/wireless/iwlwifi/iwl4965-base.c
-+++ b/drivers/net/wireless/iwlwifi/iwl4965-base.c
-@@ -27,16 +27,6 @@
- *
- *****************************************************************************/
-
++module_exit(iwl4965_exit);
++module_init(iwl4965_init);
+diff --git a/drivers/net/wireless/iwlwifi/iwlwifi.h b/drivers/net/wireless/iwlwifi/iwlwifi.h
+deleted file mode 100644
+index 432ce88..0000000
+--- a/drivers/net/wireless/iwlwifi/iwlwifi.h
++++ /dev/null
+@@ -1,708 +0,0 @@
+-/******************************************************************************
+- *
+- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
+- *
+- * Portions of this file are derived from the ipw3945 project, as well
+- * as portions of the ieee80211 subsystem header files.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of version 2 of the GNU General Public License as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+- * The full GNU General Public License is included in this distribution in the
+- * file called LICENSE.
+- *
+- * Contact Information:
+- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
+- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+- *
+- *****************************************************************************/
+-
+-#ifndef __iwlwifi_h__
+-#define __iwlwifi_h__
+-
+-#include <linux/pci.h> /* for struct pci_device_id */
+-#include <linux/kernel.h>
+-#include <net/ieee80211_radiotap.h>
+-
+-struct iwl_priv;
+-
+-/* Hardware specific file defines the PCI IDs table for that hardware module */
+-extern struct pci_device_id iwl_hw_card_ids[];
+-
+-#include "iwl-hw.h"
+-#if IWL == 3945
+-#define DRV_NAME "iwl3945"
+-#include "iwl-3945-hw.h"
+-#elif IWL == 4965
+-#define DRV_NAME "iwl4965"
+-#include "iwl-4965-hw.h"
+-#endif
+-
+-#include "iwl-prph.h"
+-
-/*
-- * NOTE: This file (iwl-base.c) is used to build to multiple hardware targets
-- * by defining IWL to either 3945 or 4965. The Makefile used when building
-- * the base targets will create base-3945.o and base-4965.o
+- * Driver implementation data structures, constants, inline
+- * functions
+- *
+- * NOTE: DO NOT PUT HARDWARE/UCODE SPECIFIC DECLRATIONS HERE
+- *
+- * Hardware specific declrations go into iwl-*hw.h
- *
-- * The eventual goal is to move as many of the #if IWL / #endif blocks out of
-- * this file and into the hardware specific implementation files (iwl-XXXX.c)
-- * and leave only the common (non #ifdef sprinkled) code in this file
- */
-
- #include <linux/kernel.h>
- #include <linux/module.h>
- #include <linux/version.h>
-@@ -51,21 +41,20 @@
- #include <linux/etherdevice.h>
- #include <linux/if_arp.h>
-
--#include <net/ieee80211_radiotap.h>
- #include <net/mac80211.h>
-
- #include <asm/div64.h>
-
--#define IWL 4965
+-#include "iwl-debug.h"
-
--#include "iwlwifi.h"
- #include "iwl-4965.h"
- #include "iwl-helpers.h"
-
--#ifdef CONFIG_IWLWIFI_DEBUG
--u32 iwl_debug_level;
-+#ifdef CONFIG_IWL4965_DEBUG
-+u32 iwl4965_debug_level;
- #endif
-
-+static int iwl4965_tx_queue_update_write_ptr(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq);
-+
- /******************************************************************************
- *
- * module boiler plate
-@@ -73,13 +62,14 @@ u32 iwl_debug_level;
- ******************************************************************************/
-
- /* module parameters */
--int iwl_param_disable_hw_scan;
--int iwl_param_debug;
--int iwl_param_disable; /* def: enable radio */
--int iwl_param_antenna; /* def: 0 = both antennas (use diversity) */
--int iwl_param_hwcrypto; /* def: using software encryption */
--int iwl_param_qos_enable = 1;
--int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
-+static int iwl4965_param_disable_hw_scan; /* def: 0 = use 4965's h/w scan */
-+static int iwl4965_param_debug; /* def: 0 = minimal debug log messages */
-+static int iwl4965_param_disable; /* def: enable radio */
-+static int iwl4965_param_antenna; /* def: 0 = both antennas (use diversity) */
-+int iwl4965_param_hwcrypto; /* def: using software encryption */
-+static int iwl4965_param_qos_enable = 1; /* def: 1 = use quality of service */
-+int iwl4965_param_queues_num = IWL_MAX_NUM_QUEUES; /* def: 16 Tx queues */
-+int iwl4965_param_amsdu_size_8K; /* def: enable 8K amsdu size */
-
- /*
- * module name, copyright, version, etc.
-@@ -88,19 +78,19 @@ int iwl_param_queues_num = IWL_MAX_NUM_QUEUES;
-
- #define DRV_DESCRIPTION "Intel(R) Wireless WiFi Link 4965AGN driver for Linux"
-
--#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL4965_DEBUG
- #define VD "d"
- #else
- #define VD
- #endif
-
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
- #define VS "s"
- #else
- #define VS
- #endif
-
--#define IWLWIFI_VERSION "1.1.17k" VD VS
-+#define IWLWIFI_VERSION "1.2.23k" VD VS
- #define DRV_COPYRIGHT "Copyright(c) 2003-2007 Intel Corporation"
- #define DRV_VERSION IWLWIFI_VERSION
-
-@@ -125,8 +115,8 @@ __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr)
- return NULL;
- }
-
--static const struct ieee80211_hw_mode *iwl_get_hw_mode(
-- struct iwl_priv *priv, int mode)
-+static const struct ieee80211_hw_mode *iwl4965_get_hw_mode(
-+ struct iwl4965_priv *priv, int mode)
- {
- int i;
-
-@@ -137,7 +127,7 @@ static const struct ieee80211_hw_mode *iwl_get_hw_mode(
- return NULL;
- }
-
--static int iwl_is_empty_essid(const char *essid, int essid_len)
-+static int iwl4965_is_empty_essid(const char *essid, int essid_len)
- {
- /* Single white space is for Linksys APs */
- if (essid_len == 1 && essid[0] == ' ')
-@@ -153,13 +143,13 @@ static int iwl_is_empty_essid(const char *essid, int essid_len)
- return 1;
- }
-
--static const char *iwl_escape_essid(const char *essid, u8 essid_len)
-+static const char *iwl4965_escape_essid(const char *essid, u8 essid_len)
- {
- static char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
- const char *s = essid;
- char *d = escaped;
-
-- if (iwl_is_empty_essid(essid, essid_len)) {
-+ if (iwl4965_is_empty_essid(essid, essid_len)) {
- memcpy(escaped, "<hidden>", sizeof("<hidden>"));
- return escaped;
- }
-@@ -177,10 +167,10 @@ static const char *iwl_escape_essid(const char *essid, u8 essid_len)
- return escaped;
- }
-
--static void iwl_print_hex_dump(int level, void *p, u32 len)
-+static void iwl4965_print_hex_dump(int level, void *p, u32 len)
- {
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (!(iwl_debug_level & level))
-+#ifdef CONFIG_IWL4965_DEBUG
-+ if (!(iwl4965_debug_level & level))
- return;
-
- print_hex_dump(KERN_DEBUG, "iwl data: ", DUMP_PREFIX_OFFSET, 16, 1,
-@@ -193,24 +183,33 @@ static void iwl_print_hex_dump(int level, void *p, u32 len)
- *
- * Theory of operation
- *
-- * A queue is a circular buffers with 'Read' and 'Write' pointers.
-- * 2 empty entries always kept in the buffer to protect from overflow.
-+ * A Tx or Rx queue resides in host DRAM, and is comprised of a circular buffer
-+ * of buffer descriptors, each of which points to one or more data buffers for
-+ * the device to read from or fill. Driver and device exchange status of each
-+ * queue via "read" and "write" pointers. Driver keeps minimum of 2 empty
-+ * entries in each circular buffer, to protect against confusing empty and full
-+ * queue states.
-+ *
-+ * The device reads or writes the data in the queues via the device's several
-+ * DMA/FIFO channels. Each queue is mapped to a single DMA channel.
- *
- * For Tx queue, there are low mark and high mark limits. If, after queuing
- * the packet for Tx, free space become < low mark, Tx queue stopped. When
- * reclaiming packets (on 'tx done IRQ), if free space become > high mark,
- * Tx queue resumed.
- *
-- * The IWL operates with six queues, one receive queue in the device's
-- * sram, one transmit queue for sending commands to the device firmware,
-- * and four transmit queues for data.
-+ * The 4965 operates with up to 17 queues: One receive queue, one transmit
-+ * queue (#4) for sending commands to the device firmware, and 15 other
-+ * Tx queues that may be mapped to prioritized Tx DMA/FIFO channels.
-+ *
-+ * See more detailed info in iwl-4965-hw.h.
- ***************************************************/
-
--static int iwl_queue_space(const struct iwl_queue *q)
-+static int iwl4965_queue_space(const struct iwl4965_queue *q)
- {
-- int s = q->last_used - q->first_empty;
-+ int s = q->read_ptr - q->write_ptr;
-
-- if (q->last_used > q->first_empty)
-+ if (q->read_ptr > q->write_ptr)
- s -= q->n_bd;
-
- if (s <= 0)
-@@ -222,42 +221,55 @@ static int iwl_queue_space(const struct iwl_queue *q)
- return s;
- }
-
--/* XXX: n_bd must be power-of-two size */
--static inline int iwl_queue_inc_wrap(int index, int n_bd)
-+/**
-+ * iwl4965_queue_inc_wrap - increment queue index, wrap back to beginning
-+ * @index -- current index
-+ * @n_bd -- total number of entries in queue (must be power of 2)
-+ */
-+static inline int iwl4965_queue_inc_wrap(int index, int n_bd)
- {
- return ++index & (n_bd - 1);
- }
-
--/* XXX: n_bd must be power-of-two size */
--static inline int iwl_queue_dec_wrap(int index, int n_bd)
-+/**
-+ * iwl4965_queue_dec_wrap - decrement queue index, wrap back to end
-+ * @index -- current index
-+ * @n_bd -- total number of entries in queue (must be power of 2)
-+ */
-+static inline int iwl4965_queue_dec_wrap(int index, int n_bd)
- {
- return --index & (n_bd - 1);
- }
-
--static inline int x2_queue_used(const struct iwl_queue *q, int i)
-+static inline int x2_queue_used(const struct iwl4965_queue *q, int i)
- {
-- return q->first_empty > q->last_used ?
-- (i >= q->last_used && i < q->first_empty) :
-- !(i < q->last_used && i >= q->first_empty);
-+ return q->write_ptr > q->read_ptr ?
-+ (i >= q->read_ptr && i < q->write_ptr) :
-+ !(i < q->read_ptr && i >= q->write_ptr);
- }
-
--static inline u8 get_cmd_index(struct iwl_queue *q, u32 index, int is_huge)
-+static inline u8 get_cmd_index(struct iwl4965_queue *q, u32 index, int is_huge)
- {
-+ /* This is for scan command, the big buffer at end of command array */
- if (is_huge)
-- return q->n_window;
-+ return q->n_window; /* must be power of 2 */
-
-+ /* Otherwise, use normal size buffers */
- return index & (q->n_window - 1);
- }
-
--static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
-+/**
-+ * iwl4965_queue_init - Initialize queue's high/low-water and read/write indexes
-+ */
-+static int iwl4965_queue_init(struct iwl4965_priv *priv, struct iwl4965_queue *q,
- int count, int slots_num, u32 id)
- {
- q->n_bd = count;
- q->n_window = slots_num;
- q->id = id;
-
-- /* count must be power-of-two size, otherwise iwl_queue_inc_wrap
-- * and iwl_queue_dec_wrap are broken. */
-+ /* count must be power-of-two size, otherwise iwl4965_queue_inc_wrap
-+ * and iwl4965_queue_dec_wrap are broken. */
- BUG_ON(!is_power_of_2(count));
-
- /* slots_num must be power-of-two size, otherwise
-@@ -272,27 +284,34 @@ static int iwl_queue_init(struct iwl_priv *priv, struct iwl_queue *q,
- if (q->high_mark < 2)
- q->high_mark = 2;
-
-- q->first_empty = q->last_used = 0;
-+ q->write_ptr = q->read_ptr = 0;
-
- return 0;
- }
-
--static int iwl_tx_queue_alloc(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq, u32 id)
-+/**
-+ * iwl4965_tx_queue_alloc - Alloc driver data and TFD CB for one Tx/cmd queue
-+ */
-+static int iwl4965_tx_queue_alloc(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq, u32 id)
- {
- struct pci_dev *dev = priv->pci_dev;
-
-+ /* Driver private data, only for Tx (not command) queues,
-+ * not shared with device. */
- if (id != IWL_CMD_QUEUE_NUM) {
- txq->txb = kmalloc(sizeof(txq->txb[0]) *
- TFD_QUEUE_SIZE_MAX, GFP_KERNEL);
- if (!txq->txb) {
-- IWL_ERROR("kmalloc for auxilary BD "
-+ IWL_ERROR("kmalloc for auxiliary BD "
- "structures failed\n");
- goto error;
- }
- } else
- txq->txb = NULL;
-
-+ /* Circular buffer of transmit frame descriptors (TFDs),
-+ * shared with device */
- txq->bd = pci_alloc_consistent(dev,
- sizeof(txq->bd[0]) * TFD_QUEUE_SIZE_MAX,
- &txq->q.dma_addr);
-@@ -315,24 +334,33 @@ static int iwl_tx_queue_alloc(struct iwl_priv *priv,
- return -ENOMEM;
- }
-
--int iwl_tx_queue_init(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq, int slots_num, u32 txq_id)
-+/**
-+ * iwl4965_tx_queue_init - Allocate and initialize one tx/cmd queue
-+ */
-+int iwl4965_tx_queue_init(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq, int slots_num, u32 txq_id)
- {
- struct pci_dev *dev = priv->pci_dev;
- int len;
- int rc = 0;
-
-- /* alocate command space + one big command for scan since scan
-- * command is very huge the system will not have two scan at the
-- * same time */
-- len = sizeof(struct iwl_cmd) * slots_num;
-+ /*
-+ * Alloc buffer array for commands (Tx or other types of commands).
-+ * For the command queue (#4), allocate command space + one big
-+ * command for scan, since scan command is very huge; the system will
-+ * not have two scans at the same time, so only one is needed.
-+ * For normal Tx queues (all other queues), no super-size command
-+ * space is needed.
-+ */
-+ len = sizeof(struct iwl4965_cmd) * slots_num;
- if (txq_id == IWL_CMD_QUEUE_NUM)
- len += IWL_MAX_SCAN_SIZE;
- txq->cmd = pci_alloc_consistent(dev, len, &txq->dma_addr_cmd);
- if (!txq->cmd)
- return -ENOMEM;
-
-- rc = iwl_tx_queue_alloc(priv, txq, txq_id);
-+ /* Alloc driver data array and TFD circular buffer */
-+ rc = iwl4965_tx_queue_alloc(priv, txq, txq_id);
- if (rc) {
- pci_free_consistent(dev, len, txq->cmd, txq->dma_addr_cmd);
-
-@@ -341,26 +369,29 @@ int iwl_tx_queue_init(struct iwl_priv *priv,
- txq->need_update = 0;
-
- /* TFD_QUEUE_SIZE_MAX must be power-of-two size, otherwise
-- * iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
-+ * iwl4965_queue_inc_wrap and iwl4965_queue_dec_wrap are broken. */
- BUILD_BUG_ON(TFD_QUEUE_SIZE_MAX & (TFD_QUEUE_SIZE_MAX - 1));
-- iwl_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
-
-- iwl_hw_tx_queue_init(priv, txq);
-+ /* Initialize queue's high/low-water marks, and head/tail indexes */
-+ iwl4965_queue_init(priv, &txq->q, TFD_QUEUE_SIZE_MAX, slots_num, txq_id);
-+
-+ /* Tell device where to find queue */
-+ iwl4965_hw_tx_queue_init(priv, txq);
-
- return 0;
- }
-
- /**
-- * iwl_tx_queue_free - Deallocate DMA queue.
-+ * iwl4965_tx_queue_free - Deallocate DMA queue.
- * @txq: Transmit queue to deallocate.
- *
- * Empty queue by removing and destroying all BD's.
-- * Free all buffers. txq itself is not freed.
+-/* Default noise level to report when noise measurement is not available.
+- * This may be because we're:
+- * 1) Not associated (4965, no beacon statistics being sent to driver)
+- * 2) Scanning (noise measurement does not apply to associated channel)
+- * 3) Receiving CCK (3945 delivers noise info only for OFDM frames)
+- * Use default noise value of -127 ... this is below the range of measurable
+- * Rx dBm for either 3945 or 4965, so it can indicate "unmeasurable" to user.
+- * Also, -127 works better than 0 when averaging frames with/without
+- * noise info (e.g. averaging might be done in app); measured dBm values are
+- * always negative ... using a negative value as the default keeps all
+- * averages within an s8's (used in some apps) range of negative values. */
+-#define IWL_NOISE_MEAS_NOT_AVAILABLE (-127)
+-
+-/* Module parameters accessible from iwl-*.c */
+-extern int iwl_param_disable_hw_scan;
+-extern int iwl_param_debug;
+-extern int iwl_param_mode;
+-extern int iwl_param_disable;
+-extern int iwl_param_antenna;
+-extern int iwl_param_hwcrypto;
+-extern int iwl_param_qos_enable;
+-extern int iwl_param_queues_num;
+-
+-enum iwl_antenna {
+- IWL_ANTENNA_DIVERSITY,
+- IWL_ANTENNA_MAIN,
+- IWL_ANTENNA_AUX
+-};
+-
+-/*
+- * RTS threshold here is total size [2347] minus 4 FCS bytes
+- * Per spec:
+- * a value of 0 means RTS on all data/management packets
+- * a value > max MSDU size means no RTS
+- * else RTS for data/management frames where MPDU is larger
+- * than RTS value.
+- */
+-#define DEFAULT_RTS_THRESHOLD 2347U
+-#define MIN_RTS_THRESHOLD 0U
+-#define MAX_RTS_THRESHOLD 2347U
+-#define MAX_MSDU_SIZE 2304U
+-#define MAX_MPDU_SIZE 2346U
+-#define DEFAULT_BEACON_INTERVAL 100U
+-#define DEFAULT_SHORT_RETRY_LIMIT 7U
+-#define DEFAULT_LONG_RETRY_LIMIT 4U
+-
+-struct iwl_rx_mem_buffer {
+- dma_addr_t dma_addr;
+- struct sk_buff *skb;
+- struct list_head list;
+-};
+-
+-struct iwl_rt_rx_hdr {
+- struct ieee80211_radiotap_header rt_hdr;
+- __le64 rt_tsf; /* TSF */
+- u8 rt_flags; /* radiotap packet flags */
+- u8 rt_rate; /* rate in 500kb/s */
+- __le16 rt_channelMHz; /* channel in MHz */
+- __le16 rt_chbitmask; /* channel bitfield */
+- s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
+- s8 rt_dbmnoise;
+- u8 rt_antenna; /* antenna number */
+- u8 payload[0]; /* payload... */
+-} __attribute__ ((packed));
+-
+-struct iwl_rt_tx_hdr {
+- struct ieee80211_radiotap_header rt_hdr;
+- u8 rt_rate; /* rate in 500kb/s */
+- __le16 rt_channel; /* channel in mHz */
+- __le16 rt_chbitmask; /* channel bitfield */
+- s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
+- u8 rt_antenna; /* antenna number */
+- u8 payload[0]; /* payload... */
+-} __attribute__ ((packed));
+-
+-/*
+- * Generic queue structure
- *
-+ * Free all buffers.
-+ * 0-fill, but do not free "txq" descriptor structure.
- */
--void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
-+void iwl4965_tx_queue_free(struct iwl4965_priv *priv, struct iwl4965_tx_queue *txq)
- {
-- struct iwl_queue *q = &txq->q;
-+ struct iwl4965_queue *q = &txq->q;
- struct pci_dev *dev = priv->pci_dev;
- int len;
-
-@@ -368,45 +399,48 @@ void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq)
- return;
-
- /* first, empty all BD's */
-- for (; q->first_empty != q->last_used;
-- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd))
-- iwl_hw_txq_free_tfd(priv, txq);
-+ for (; q->write_ptr != q->read_ptr;
-+ q->read_ptr = iwl4965_queue_inc_wrap(q->read_ptr, q->n_bd))
-+ iwl4965_hw_txq_free_tfd(priv, txq);
-
-- len = sizeof(struct iwl_cmd) * q->n_window;
-+ len = sizeof(struct iwl4965_cmd) * q->n_window;
- if (q->id == IWL_CMD_QUEUE_NUM)
- len += IWL_MAX_SCAN_SIZE;
-
-+ /* De-alloc array of command/tx buffers */
- pci_free_consistent(dev, len, txq->cmd, txq->dma_addr_cmd);
-
-- /* free buffers belonging to queue itself */
-+ /* De-alloc circular buffer of TFDs */
- if (txq->q.n_bd)
-- pci_free_consistent(dev, sizeof(struct iwl_tfd_frame) *
-+ pci_free_consistent(dev, sizeof(struct iwl4965_tfd_frame) *
- txq->q.n_bd, txq->bd, txq->q.dma_addr);
-
-+ /* De-alloc array of per-TFD driver data */
- if (txq->txb) {
- kfree(txq->txb);
- txq->txb = NULL;
- }
-
-- /* 0 fill whole structure */
-+ /* 0-fill queue descriptor structure */
- memset(txq, 0, sizeof(*txq));
- }
-
--const u8 BROADCAST_ADDR[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
-+const u8 iwl4965_broadcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
-
- /*************** STATION TABLE MANAGEMENT ****
+- * Contains common data for Rx and Tx queues
+- */
+-struct iwl_queue {
+- int n_bd; /* number of BDs in this queue */
+- int first_empty; /* 1-st empty entry (index) host_w*/
+- int last_used; /* last used entry (index) host_r*/
+- dma_addr_t dma_addr; /* physical addr for BD's */
+- int n_window; /* safe queue window */
+- u32 id;
+- int low_mark; /* low watermark, resume queue if free
+- * space more than this */
+- int high_mark; /* high watermark, stop queue if free
+- * space less than this */
+-} __attribute__ ((packed));
+-
+-#define MAX_NUM_OF_TBS (20)
+-
+-struct iwl_tx_info {
+- struct ieee80211_tx_status status;
+- struct sk_buff *skb[MAX_NUM_OF_TBS];
+-};
+-
+-/**
+- * struct iwl_tx_queue - Tx Queue for DMA
+- * @need_update: need to update read/write index
+- * @shed_retry: queue is HT AGG enabled
- *
-- * NOTE: This needs to be overhauled to better synchronize between
-- * how the iwl-4965.c is using iwl_hw_find_station vs. iwl-3945.c
+- * Queue consists of circular buffer of BD's and required locking structures.
+- */
+-struct iwl_tx_queue {
+- struct iwl_queue q;
+- struct iwl_tfd_frame *bd;
+- struct iwl_cmd *cmd;
+- dma_addr_t dma_addr_cmd;
+- struct iwl_tx_info *txb;
+- int need_update;
+- int sched_retry;
+- int active;
+-};
+-
+-#include "iwl-channel.h"
+-
+-#if IWL == 3945
+-#include "iwl-3945-rs.h"
+-#else
+-#include "iwl-4965-rs.h"
+-#endif
+-
+-#define IWL_TX_FIFO_AC0 0
+-#define IWL_TX_FIFO_AC1 1
+-#define IWL_TX_FIFO_AC2 2
+-#define IWL_TX_FIFO_AC3 3
+-#define IWL_TX_FIFO_HCCA_1 5
+-#define IWL_TX_FIFO_HCCA_2 6
+-#define IWL_TX_FIFO_NONE 7
+-
+-/* Minimum number of queues. MAX_NUM is defined in hw specific files */
+-#define IWL_MIN_NUM_QUEUES 4
+-
+-/* Power management (not Tx power) structures */
+-
+-struct iwl_power_vec_entry {
+- struct iwl_powertable_cmd cmd;
+- u8 no_dtim;
+-};
+-#define IWL_POWER_RANGE_0 (0)
+-#define IWL_POWER_RANGE_1 (1)
+-
+-#define IWL_POWER_MODE_CAM 0x00 /* Continuously Aware Mode, always on */
+-#define IWL_POWER_INDEX_3 0x03
+-#define IWL_POWER_INDEX_5 0x05
+-#define IWL_POWER_AC 0x06
+-#define IWL_POWER_BATTERY 0x07
+-#define IWL_POWER_LIMIT 0x07
+-#define IWL_POWER_MASK 0x0F
+-#define IWL_POWER_ENABLED 0x10
+-#define IWL_POWER_LEVEL(x) ((x) & IWL_POWER_MASK)
+-
+-struct iwl_power_mgr {
+- spinlock_t lock;
+- struct iwl_power_vec_entry pwr_range_0[IWL_POWER_AC];
+- struct iwl_power_vec_entry pwr_range_1[IWL_POWER_AC];
+- u8 active_index;
+- u32 dtim_val;
+-};
+-
+-#define IEEE80211_DATA_LEN 2304
+-#define IEEE80211_4ADDR_LEN 30
+-#define IEEE80211_HLEN (IEEE80211_4ADDR_LEN)
+-#define IEEE80211_FRAME_LEN (IEEE80211_DATA_LEN + IEEE80211_HLEN)
+-
+-struct iwl_frame {
+- union {
+- struct ieee80211_hdr frame;
+- struct iwl_tx_beacon_cmd beacon;
+- u8 raw[IEEE80211_FRAME_LEN];
+- u8 cmd[360];
+- } u;
+- struct list_head list;
+-};
+-
+-#define SEQ_TO_QUEUE(x) ((x >> 8) & 0xbf)
+-#define QUEUE_TO_SEQ(x) ((x & 0xbf) << 8)
+-#define SEQ_TO_INDEX(x) (x & 0xff)
+-#define INDEX_TO_SEQ(x) (x & 0xff)
+-#define SEQ_HUGE_FRAME (0x4000)
+-#define SEQ_RX_FRAME __constant_cpu_to_le16(0x8000)
+-#define SEQ_TO_SN(seq) (((seq) & IEEE80211_SCTL_SEQ) >> 4)
+-#define SN_TO_SEQ(ssn) (((ssn) << 4) & IEEE80211_SCTL_SEQ)
+-#define MAX_SN ((IEEE80211_SCTL_SEQ) >> 4)
+-
+-enum {
+- /* CMD_SIZE_NORMAL = 0, */
+- CMD_SIZE_HUGE = (1 << 0),
+- /* CMD_SYNC = 0, */
+- CMD_ASYNC = (1 << 1),
+- /* CMD_NO_SKB = 0, */
+- CMD_WANT_SKB = (1 << 2),
+-};
+-
+-struct iwl_cmd;
+-struct iwl_priv;
+-
+-struct iwl_cmd_meta {
+- struct iwl_cmd_meta *source;
+- union {
+- struct sk_buff *skb;
+- int (*callback)(struct iwl_priv *priv,
+- struct iwl_cmd *cmd, struct sk_buff *skb);
+- } __attribute__ ((packed)) u;
+-
+- /* The CMD_SIZE_HUGE flag bit indicates that the command
+- * structure is stored at the end of the shared queue memory. */
+- u32 flags;
+-
+-} __attribute__ ((packed));
+-
+-struct iwl_cmd {
+- struct iwl_cmd_meta meta;
+- struct iwl_cmd_header hdr;
+- union {
+- struct iwl_addsta_cmd addsta;
+- struct iwl_led_cmd led;
+- u32 flags;
+- u8 val8;
+- u16 val16;
+- u32 val32;
+- struct iwl_bt_cmd bt;
+- struct iwl_rxon_time_cmd rxon_time;
+- struct iwl_powertable_cmd powertable;
+- struct iwl_qosparam_cmd qosparam;
+- struct iwl_tx_cmd tx;
+- struct iwl_tx_beacon_cmd tx_beacon;
+- struct iwl_rxon_assoc_cmd rxon_assoc;
+- u8 *indirect;
+- u8 payload[360];
+- } __attribute__ ((packed)) cmd;
+-} __attribute__ ((packed));
+-
+-struct iwl_host_cmd {
+- u8 id;
+- u16 len;
+- struct iwl_cmd_meta meta;
+- const void *data;
+-};
+-
+-#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_cmd) - \
+- sizeof(struct iwl_cmd_meta))
+-
+-/*
+- * RX related structures and functions
+- */
+-#define RX_FREE_BUFFERS 64
+-#define RX_LOW_WATERMARK 8
+-
+-#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
+-#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
+-#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
+-
+-/**
+- * struct iwl_rx_queue - Rx queue
+- * @processed: Internal index to last handled Rx packet
+- * @read: Shared index to newest available Rx buffer
+- * @write: Shared index to oldest written Rx packet
+- * @free_count: Number of pre-allocated buffers in rx_free
+- * @rx_free: list of free SKBs for use
+- * @rx_used: List of Rx buffers with no SKB
+- * @need_update: flag to indicate we need to update read/write index
- *
-- * mac80211 should also be examined to determine if sta_info is duplicating
-+ * mac80211 should be examined to determine if sta_info is duplicating
- * the functionality provided here
- */
-
- /**************************************************************/
-
--#if 0 /* temparary disable till we add real remove station */
--static u8 iwl_remove_station(struct iwl_priv *priv, const u8 *addr, int is_ap)
-+#if 0 /* temporary disable till we add real remove station */
-+/**
-+ * iwl4965_remove_station - Remove driver's knowledge of station.
-+ *
-+ * NOTE: This does not remove station from device's station table.
-+ */
-+static u8 iwl4965_remove_station(struct iwl4965_priv *priv, const u8 *addr, int is_ap)
- {
- int index = IWL_INVALID_STATION;
- int i;
-@@ -443,7 +477,12 @@ out:
- }
- #endif
-
--static void iwl_clear_stations_table(struct iwl_priv *priv)
-+/**
-+ * iwl4965_clear_stations_table - Clear the driver's station table
-+ *
-+ * NOTE: This does not clear or otherwise alter the device's station table.
-+ */
-+static void iwl4965_clear_stations_table(struct iwl4965_priv *priv)
- {
- unsigned long flags;
-
-@@ -455,11 +494,15 @@ static void iwl_clear_stations_table(struct iwl_priv *priv)
- spin_unlock_irqrestore(&priv->sta_lock, flags);
- }
-
--u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
-+/**
-+ * iwl4965_add_station_flags - Add station to tables in driver and device
-+ */
-+u8 iwl4965_add_station_flags(struct iwl4965_priv *priv, const u8 *addr,
-+ int is_ap, u8 flags, void *ht_data)
- {
- int i;
- int index = IWL_INVALID_STATION;
-- struct iwl_station_entry *station;
-+ struct iwl4965_station_entry *station;
- unsigned long flags_spin;
- DECLARE_MAC_BUF(mac);
-
-@@ -482,8 +525,8 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
- }
-
-
-- /* These twh conditions has the same outcome but keep them separate
-- since they have different meaning */
-+ /* These two conditions have the same outcome, but keep them separate
-+ since they have different meanings */
- if (unlikely(index == IWL_INVALID_STATION)) {
- spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
- return index;
-@@ -501,28 +544,32 @@ u8 iwl_add_station(struct iwl_priv *priv, const u8 *addr, int is_ap, u8 flags)
- station->used = 1;
- priv->num_stations++;
-
-- memset(&station->sta, 0, sizeof(struct iwl_addsta_cmd));
-+ /* Set up the REPLY_ADD_STA command to send to device */
-+ memset(&station->sta, 0, sizeof(struct iwl4965_addsta_cmd));
- memcpy(station->sta.sta.addr, addr, ETH_ALEN);
- station->sta.mode = 0;
- station->sta.sta.sta_id = index;
- station->sta.station_flags = 0;
-
+- * NOTE: rx_free and rx_used are used as a FIFO for iwl_rx_mem_buffers
+- */
+-struct iwl_rx_queue {
+- __le32 *bd;
+- dma_addr_t dma_addr;
+- struct iwl_rx_mem_buffer pool[RX_QUEUE_SIZE + RX_FREE_BUFFERS];
+- struct iwl_rx_mem_buffer *queue[RX_QUEUE_SIZE];
+- u32 processed;
+- u32 read;
+- u32 write;
+- u32 free_count;
+- struct list_head rx_free;
+- struct list_head rx_used;
+- int need_update;
+- spinlock_t lock;
+-};
+-
+-#define IWL_SUPPORTED_RATES_IE_LEN 8
+-
+-#define SCAN_INTERVAL 100
+-
+-#define MAX_A_CHANNELS 252
+-#define MIN_A_CHANNELS 7
+-
+-#define MAX_B_CHANNELS 14
+-#define MIN_B_CHANNELS 1
+-
+-#define STATUS_HCMD_ACTIVE 0 /* host command in progress */
+-#define STATUS_INT_ENABLED 1
+-#define STATUS_RF_KILL_HW 2
+-#define STATUS_RF_KILL_SW 3
+-#define STATUS_INIT 4
+-#define STATUS_ALIVE 5
+-#define STATUS_READY 6
+-#define STATUS_TEMPERATURE 7
+-#define STATUS_GEO_CONFIGURED 8
+-#define STATUS_EXIT_PENDING 9
+-#define STATUS_IN_SUSPEND 10
+-#define STATUS_STATISTICS 11
+-#define STATUS_SCANNING 12
+-#define STATUS_SCAN_ABORTING 13
+-#define STATUS_SCAN_HW 14
+-#define STATUS_POWER_PMI 15
+-#define STATUS_FW_ERROR 16
+-
+-#define MAX_TID_COUNT 9
+-
+-#define IWL_INVALID_RATE 0xFF
+-#define IWL_INVALID_VALUE -1
+-
+-#if IWL == 4965
-#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
- /* BCAST station and IBSS stations do not work in HT mode */
- if (index != priv->hw_setting.bcast_sta_id &&
- priv->iw_mode != IEEE80211_IF_TYPE_IBSS)
-- iwl4965_set_ht_add_station(priv, index);
--#endif /*CONFIG_IWLWIFI_HT*/
-+ iwl4965_set_ht_add_station(priv, index,
-+ (struct ieee80211_ht_info *) ht_data);
-+#endif /*CONFIG_IWL4965_HT*/
-
- spin_unlock_irqrestore(&priv->sta_lock, flags_spin);
-- iwl_send_add_station(priv, &station->sta, flags);
-+
-+ /* Add station to device's station table */
-+ iwl4965_send_add_station(priv, &station->sta, flags);
- return index;
-
- }
-
- /*************** DRIVER STATUS FUNCTIONS *****/
-
--static inline int iwl_is_ready(struct iwl_priv *priv)
-+static inline int iwl4965_is_ready(struct iwl4965_priv *priv)
- {
- /* The adapter is 'ready' if READY and GEO_CONFIGURED bits are
- * set but EXIT_PENDING is not */
-@@ -531,29 +578,29 @@ static inline int iwl_is_ready(struct iwl_priv *priv)
- !test_bit(STATUS_EXIT_PENDING, &priv->status);
- }
-
--static inline int iwl_is_alive(struct iwl_priv *priv)
-+static inline int iwl4965_is_alive(struct iwl4965_priv *priv)
- {
- return test_bit(STATUS_ALIVE, &priv->status);
- }
-
--static inline int iwl_is_init(struct iwl_priv *priv)
-+static inline int iwl4965_is_init(struct iwl4965_priv *priv)
- {
- return test_bit(STATUS_INIT, &priv->status);
- }
-
--static inline int iwl_is_rfkill(struct iwl_priv *priv)
-+static inline int iwl4965_is_rfkill(struct iwl4965_priv *priv)
- {
- return test_bit(STATUS_RF_KILL_HW, &priv->status) ||
- test_bit(STATUS_RF_KILL_SW, &priv->status);
- }
-
--static inline int iwl_is_ready_rf(struct iwl_priv *priv)
-+static inline int iwl4965_is_ready_rf(struct iwl4965_priv *priv)
- {
-
-- if (iwl_is_rfkill(priv))
-+ if (iwl4965_is_rfkill(priv))
- return 0;
-
-- return iwl_is_ready(priv);
-+ return iwl4965_is_ready(priv);
- }
-
- /*************** HOST COMMAND QUEUE FUNCTIONS *****/
-@@ -618,7 +665,7 @@ static const char *get_cmd_string(u8 cmd)
- #define HOST_COMPLETE_TIMEOUT (HZ / 2)
-
- /**
-- * iwl_enqueue_hcmd - enqueue a uCode command
-+ * iwl4965_enqueue_hcmd - enqueue a uCode command
- * @priv: device private data point
- * @cmd: a point to the ucode command structure
- *
-@@ -626,13 +673,13 @@ static const char *get_cmd_string(u8 cmd)
- * failed. On success, it turns the index (> 0) of command in the
- * command queue.
- */
--static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl4965_enqueue_hcmd(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
- {
-- struct iwl_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
-- struct iwl_queue *q = &txq->q;
-- struct iwl_tfd_frame *tfd;
-+ struct iwl4965_tx_queue *txq = &priv->txq[IWL_CMD_QUEUE_NUM];
-+ struct iwl4965_queue *q = &txq->q;
-+ struct iwl4965_tfd_frame *tfd;
- u32 *control_flags;
-- struct iwl_cmd *out_cmd;
-+ struct iwl4965_cmd *out_cmd;
- u32 idx;
- u16 fix_size = (u16)(cmd->len + sizeof(out_cmd->hdr));
- dma_addr_t phys_addr;
-@@ -645,19 +692,19 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
- BUG_ON((fix_size > TFD_MAX_PAYLOAD_SIZE) &&
- !(cmd->meta.flags & CMD_SIZE_HUGE));
-
-- if (iwl_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
-+ if (iwl4965_queue_space(q) < ((cmd->meta.flags & CMD_ASYNC) ? 2 : 1)) {
- IWL_ERROR("No space for Tx\n");
- return -ENOSPC;
- }
-
- spin_lock_irqsave(&priv->hcmd_lock, flags);
-
-- tfd = &txq->bd[q->first_empty];
-+ tfd = &txq->bd[q->write_ptr];
- memset(tfd, 0, sizeof(*tfd));
-
- control_flags = (u32 *) tfd;
-
-- idx = get_cmd_index(q, q->first_empty, cmd->meta.flags & CMD_SIZE_HUGE);
-+ idx = get_cmd_index(q, q->write_ptr, cmd->meta.flags & CMD_SIZE_HUGE);
- out_cmd = &txq->cmd[idx];
-
- out_cmd->hdr.cmd = cmd->id;
-@@ -669,30 +716,34 @@ static int iwl_enqueue_hcmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-
- out_cmd->hdr.flags = 0;
- out_cmd->hdr.sequence = cpu_to_le16(QUEUE_TO_SEQ(IWL_CMD_QUEUE_NUM) |
-- INDEX_TO_SEQ(q->first_empty));
-+ INDEX_TO_SEQ(q->write_ptr));
- if (out_cmd->meta.flags & CMD_SIZE_HUGE)
- out_cmd->hdr.sequence |= cpu_to_le16(SEQ_HUGE_FRAME);
-
- phys_addr = txq->dma_addr_cmd + sizeof(txq->cmd[0]) * idx +
-- offsetof(struct iwl_cmd, hdr);
-- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
-+ offsetof(struct iwl4965_cmd, hdr);
-+ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, fix_size);
-
- IWL_DEBUG_HC("Sending command %s (#%x), seq: 0x%04X, "
- "%d bytes at %d[%d]:%d\n",
- get_cmd_string(out_cmd->hdr.cmd),
- out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence),
-- fix_size, q->first_empty, idx, IWL_CMD_QUEUE_NUM);
-+ fix_size, q->write_ptr, idx, IWL_CMD_QUEUE_NUM);
-
- txq->need_update = 1;
-+
-+ /* Set up entry in queue's byte count circular buffer */
- ret = iwl4965_tx_queue_update_wr_ptr(priv, txq, 0);
-- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
-- iwl_tx_queue_update_write_ptr(priv, txq);
-+
-+ /* Increment and update queue's write index */
-+ q->write_ptr = iwl4965_queue_inc_wrap(q->write_ptr, q->n_bd);
-+ iwl4965_tx_queue_update_write_ptr(priv, txq);
-
- spin_unlock_irqrestore(&priv->hcmd_lock, flags);
- return ret ? ret : idx;
- }
-
--int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl4965_send_cmd_async(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
- {
- int ret;
-
-@@ -707,16 +758,16 @@ int iwl_send_cmd_async(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return -EBUSY;
-
-- ret = iwl_enqueue_hcmd(priv, cmd);
-+ ret = iwl4965_enqueue_hcmd(priv, cmd);
- if (ret < 0) {
-- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
-+ IWL_ERROR("Error sending %s: iwl4965_enqueue_hcmd failed: %d\n",
- get_cmd_string(cmd->id), ret);
- return ret;
- }
- return 0;
- }
-
--int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+static int iwl4965_send_cmd_sync(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
- {
- int cmd_idx;
- int ret;
-@@ -738,10 +789,10 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
- if (cmd->meta.flags & CMD_WANT_SKB)
- cmd->meta.source = &cmd->meta;
-
-- cmd_idx = iwl_enqueue_hcmd(priv, cmd);
-+ cmd_idx = iwl4965_enqueue_hcmd(priv, cmd);
- if (cmd_idx < 0) {
- ret = cmd_idx;
-- IWL_ERROR("Error sending %s: iwl_enqueue_hcmd failed: %d\n",
-+ IWL_ERROR("Error sending %s: iwl4965_enqueue_hcmd failed: %d\n",
- get_cmd_string(cmd->id), ret);
- goto out;
- }
-@@ -785,7 +836,7 @@ int iwl_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-
- cancel:
- if (cmd->meta.flags & CMD_WANT_SKB) {
-- struct iwl_cmd *qcmd;
-+ struct iwl4965_cmd *qcmd;
-
- /* Cancel the CMD_WANT_SKB flag for the cmd in the
- * TX cmd queue. Otherwise in case the cmd comes
-@@ -804,64 +855,75 @@ out:
- return ret;
- }
-
--int iwl_send_cmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
-+int iwl4965_send_cmd(struct iwl4965_priv *priv, struct iwl4965_host_cmd *cmd)
- {
-- /* A command can not be asynchronous AND expect an SKB to be set. */
-- BUG_ON((cmd->meta.flags & CMD_ASYNC) &&
-- (cmd->meta.flags & CMD_WANT_SKB));
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+-struct iwl_ht_agg {
+- u16 txq_id;
+- u16 frame_count;
+- u16 wait_for_ba;
+- u16 start_idx;
+- u32 bitmap0;
+- u32 bitmap1;
+- u32 rate_n_flags;
+-};
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
+-#endif
-
- if (cmd->meta.flags & CMD_ASYNC)
-- return iwl_send_cmd_async(priv, cmd);
-+ return iwl4965_send_cmd_async(priv, cmd);
-
-- return iwl_send_cmd_sync(priv, cmd);
-+ return iwl4965_send_cmd_sync(priv, cmd);
- }
-
--int iwl_send_cmd_pdu(struct iwl_priv *priv, u8 id, u16 len, const void *data)
-+int iwl4965_send_cmd_pdu(struct iwl4965_priv *priv, u8 id, u16 len, const void *data)
- {
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_host_cmd cmd = {
- .id = id,
- .len = len,
- .data = data,
- };
-
-- return iwl_send_cmd_sync(priv, &cmd);
-+ return iwl4965_send_cmd_sync(priv, &cmd);
- }
-
--static int __must_check iwl_send_cmd_u32(struct iwl_priv *priv, u8 id, u32 val)
-+static int __must_check iwl4965_send_cmd_u32(struct iwl4965_priv *priv, u8 id, u32 val)
- {
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_host_cmd cmd = {
- .id = id,
- .len = sizeof(val),
- .data = &val,
- };
-
-- return iwl_send_cmd_sync(priv, &cmd);
-+ return iwl4965_send_cmd_sync(priv, &cmd);
- }
-
--int iwl_send_statistics_request(struct iwl_priv *priv)
-+int iwl4965_send_statistics_request(struct iwl4965_priv *priv)
- {
-- return iwl_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
-+ return iwl4965_send_cmd_u32(priv, REPLY_STATISTICS_CMD, 0);
- }
-
- /**
-- * iwl_rxon_add_station - add station into station table.
-+ * iwl4965_rxon_add_station - add station into station table.
- *
- * there is only one AP station with id= IWL_AP_ID
-- * NOTE: mutex must be held before calling the this fnction
+-struct iwl_tid_data {
+- u16 seq_number;
+-#if IWL == 4965
+-#ifdef CONFIG_IWLWIFI_HT
+-#ifdef CONFIG_IWLWIFI_HT_AGG
+- struct iwl_ht_agg agg;
+-#endif /* CONFIG_IWLWIFI_HT_AGG */
+-#endif /* CONFIG_IWLWIFI_HT */
+-#endif
+-};
+-
+-struct iwl_hw_key {
+- enum ieee80211_key_alg alg;
+- int keylen;
+- u8 key[32];
+-};
+-
+-union iwl_ht_rate_supp {
+- u16 rates;
+- struct {
+- u8 siso_rate;
+- u8 mimo_rate;
+- };
+-};
+-
+-#ifdef CONFIG_IWLWIFI_HT
+-#define CFG_HT_RX_AMPDU_FACTOR_DEF (0x3)
+-#define HT_IE_MAX_AMSDU_SIZE_4K (0)
+-#define CFG_HT_MPDU_DENSITY_2USEC (0x5)
+-#define CFG_HT_MPDU_DENSITY_DEF CFG_HT_MPDU_DENSITY_2USEC
+-
+-struct sta_ht_info {
+- u8 is_ht;
+- u16 rx_mimo_ps_mode;
+- u16 tx_mimo_ps_mode;
+- u16 control_channel;
+- u8 max_amsdu_size;
+- u8 ampdu_factor;
+- u8 mpdu_density;
+- u8 operating_mode;
+- u8 supported_chan_width;
+- u8 extension_chan_offset;
+- u8 is_green_field;
+- u8 sgf;
+- u8 supp_rates[16];
+- u8 tx_chan_width;
+- u8 chan_width_cap;
+-};
+-#endif /*CONFIG_IWLWIFI_HT */
+-
+-#ifdef CONFIG_IWLWIFI_QOS
+-
+-union iwl_qos_capabity {
+- struct {
+- u8 edca_count:4; /* bit 0-3 */
+- u8 q_ack:1; /* bit 4 */
+- u8 queue_request:1; /* bit 5 */
+- u8 txop_request:1; /* bit 6 */
+- u8 reserved:1; /* bit 7 */
+- } q_AP;
+- struct {
+- u8 acvo_APSD:1; /* bit 0 */
+- u8 acvi_APSD:1; /* bit 1 */
+- u8 ac_bk_APSD:1; /* bit 2 */
+- u8 ac_be_APSD:1; /* bit 3 */
+- u8 q_ack:1; /* bit 4 */
+- u8 max_len:2; /* bit 5-6 */
+- u8 more_data_ack:1; /* bit 7 */
+- } q_STA;
+- u8 val;
+-};
+-
+-/* QoS sturctures */
+-struct iwl_qos_info {
+- int qos_enable;
+- int qos_active;
+- union iwl_qos_capabity qos_cap;
+- struct iwl_qosparam_cmd def_qos_parm;
+-};
+-#endif /*CONFIG_IWLWIFI_QOS */
+-
+-#define STA_PS_STATUS_WAKE 0
+-#define STA_PS_STATUS_SLEEP 1
+-
+-struct iwl_station_entry {
+- struct iwl_addsta_cmd sta;
+- struct iwl_tid_data tid[MAX_TID_COUNT];
+-#if IWL == 3945
+- union {
+- struct {
+- u8 rate;
+- u8 flags;
+- } s;
+- u16 rate_n_flags;
+- } current_rate;
+-#endif
+- u8 used;
+- u8 ps_status;
+- struct iwl_hw_key keyinfo;
+-};
+-
+-/* one for each uCode image (inst/data, boot/init/runtime) */
+-struct fw_image_desc {
+- void *v_addr; /* access by driver */
+- dma_addr_t p_addr; /* access by card's busmaster DMA */
+- u32 len; /* bytes */
+-};
+-
+-/* uCode file layout */
+-struct iwl_ucode {
+- __le32 ver; /* major/minor/subminor */
+- __le32 inst_size; /* bytes of runtime instructions */
+- __le32 data_size; /* bytes of runtime data */
+- __le32 init_size; /* bytes of initialization instructions */
+- __le32 init_data_size; /* bytes of initialization data */
+- __le32 boot_size; /* bytes of bootstrap instructions */
+- u8 data[0]; /* data in same order as "size" elements */
+-};
+-
+-#define IWL_IBSS_MAC_HASH_SIZE 32
+-
+-struct iwl_ibss_seq {
+- u8 mac[ETH_ALEN];
+- u16 seq_num;
+- u16 frag_num;
+- unsigned long packet_time;
+- struct list_head list;
+-};
+-
+-struct iwl_driver_hw_info {
+- u16 max_txq_num;
+- u16 ac_queue_count;
+- u32 rx_buffer_size;
+- u16 tx_cmd_len;
+- u16 max_rxq_size;
+- u16 max_rxq_log;
+- u32 cck_flag;
+- u8 max_stations;
+- u8 bcast_sta_id;
+- void *shared_virt;
+- dma_addr_t shared_phys;
+-};
+-
+-
+-#define STA_FLG_RTS_MIMO_PROT_MSK __constant_cpu_to_le32(1 << 17)
+-#define STA_FLG_AGG_MPDU_8US_MSK __constant_cpu_to_le32(1 << 18)
+-#define STA_FLG_MAX_AGG_SIZE_POS (19)
+-#define STA_FLG_MAX_AGG_SIZE_MSK __constant_cpu_to_le32(3 << 19)
+-#define STA_FLG_FAT_EN_MSK __constant_cpu_to_le32(1 << 21)
+-#define STA_FLG_MIMO_DIS_MSK __constant_cpu_to_le32(1 << 22)
+-#define STA_FLG_AGG_MPDU_DENSITY_POS (23)
+-#define STA_FLG_AGG_MPDU_DENSITY_MSK __constant_cpu_to_le32(7 << 23)
+-#define HT_SHORT_GI_20MHZ_ONLY (1 << 0)
+-#define HT_SHORT_GI_40MHZ_ONLY (1 << 1)
+-
+-
+-#include "iwl-priv.h"
+-
+-/* Requires full declaration of iwl_priv before including */
+-#include "iwl-io.h"
+-
+-#define IWL_RX_HDR(x) ((struct iwl_rx_frame_hdr *)(\
+- x->u.rx_frame.stats.payload + \
+- x->u.rx_frame.stats.phy_count))
+-#define IWL_RX_END(x) ((struct iwl_rx_frame_end *)(\
+- IWL_RX_HDR(x)->payload + \
+- le16_to_cpu(IWL_RX_HDR(x)->len)))
+-#define IWL_RX_STATS(x) (&x->u.rx_frame.stats)
+-#define IWL_RX_DATA(x) (IWL_RX_HDR(x)->payload)
+-
+-
+-/******************************************************************************
+- *
+- * Functions implemented in iwl-base.c which are forward declared here
+- * for use by iwl-*.c
+- *
+- *****************************************************************************/
+-struct iwl_addsta_cmd;
+-extern int iwl_send_add_station(struct iwl_priv *priv,
+- struct iwl_addsta_cmd *sta, u8 flags);
+-extern const char *iwl_get_tx_fail_reason(u32 status);
+-extern u8 iwl_add_station(struct iwl_priv *priv, const u8 *bssid,
+- int is_ap, u8 flags);
+-extern int iwl_is_network_packet(struct iwl_priv *priv,
+- struct ieee80211_hdr *header);
+-extern int iwl_power_init_handle(struct iwl_priv *priv);
+-extern int iwl_eeprom_init(struct iwl_priv *priv);
+-#ifdef CONFIG_IWLWIFI_DEBUG
+-extern void iwl_report_frame(struct iwl_priv *priv,
+- struct iwl_rx_packet *pkt,
+- struct ieee80211_hdr *header, int group100);
+-#else
+-static inline void iwl_report_frame(struct iwl_priv *priv,
+- struct iwl_rx_packet *pkt,
+- struct ieee80211_hdr *header,
+- int group100) {}
+-#endif
+-extern int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq);
+-extern void iwl_handle_data_packet_monitor(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb,
+- void *data, short len,
+- struct ieee80211_rx_status *stats,
+- u16 phy_flags);
+-extern int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr
+- *header);
+-extern void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
+-extern int iwl_rx_queue_alloc(struct iwl_priv *priv);
+-extern void iwl_rx_queue_reset(struct iwl_priv *priv,
+- struct iwl_rx_queue *rxq);
+-extern int iwl_calc_db_from_ratio(int sig_ratio);
+-extern int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm);
+-extern int iwl_tx_queue_init(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq, int count, u32 id);
+-extern int iwl_rx_queue_restock(struct iwl_priv *priv);
+-extern void iwl_rx_replenish(void *data);
+-extern void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq);
+-extern int iwl_send_cmd_pdu(struct iwl_priv *priv, u8 id, u16 len,
+- const void *data);
+-extern int __must_check iwl_send_cmd_async(struct iwl_priv *priv,
+- struct iwl_host_cmd *cmd);
+-extern int __must_check iwl_send_cmd_sync(struct iwl_priv *priv,
+- struct iwl_host_cmd *cmd);
+-extern int __must_check iwl_send_cmd(struct iwl_priv *priv,
+- struct iwl_host_cmd *cmd);
+-extern unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
+- struct ieee80211_hdr *hdr,
+- const u8 *dest, int left);
+-extern int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv,
+- struct iwl_rx_queue *q);
+-extern int iwl_send_statistics_request(struct iwl_priv *priv);
+-extern void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
+- u32 decrypt_res,
+- struct ieee80211_rx_status *stats);
+-extern __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr);
+-
+-extern const u8 BROADCAST_ADDR[ETH_ALEN];
+-
+-/*
+- * Currently used by iwl-3945-rs... look at restructuring so that it doesn't
+- * call this... todo... fix that.
-*/
--static int iwl_rxon_add_station(struct iwl_priv *priv,
-+ * NOTE: mutex must be held before calling this fnction
-+ */
-+static int iwl4965_rxon_add_station(struct iwl4965_priv *priv,
- const u8 *addr, int is_ap)
- {
- u8 sta_id;
-
-- sta_id = iwl_add_station(priv, addr, is_ap, 0);
-+ /* Add station to device's station table */
-+#ifdef CONFIG_IWL4965_HT
-+ struct ieee80211_conf *conf = &priv->hw->conf;
-+ struct ieee80211_ht_info *cur_ht_config = &conf->ht_conf;
-+
-+ if ((is_ap) &&
-+ (conf->flags & IEEE80211_CONF_SUPPORT_HT_MODE) &&
-+ (priv->iw_mode == IEEE80211_IF_TYPE_STA))
-+ sta_id = iwl4965_add_station_flags(priv, addr, is_ap,
-+ 0, cur_ht_config);
-+ else
-+#endif /* CONFIG_IWL4965_HT */
-+ sta_id = iwl4965_add_station_flags(priv, addr, is_ap,
-+ 0, NULL);
-+
-+ /* Set up default rate scaling table in device's station table */
- iwl4965_add_station(priv, addr, is_ap);
-
- return sta_id;
- }
-
- /**
-- * iwl_set_rxon_channel - Set the phymode and channel values in staging RXON
-+ * iwl4965_set_rxon_channel - Set the phymode and channel values in staging RXON
- * @phymode: MODE_IEEE80211A sets to 5.2GHz; all else set to 2.4GHz
- * @channel: Any channel valid for the requested phymode
-
-@@ -870,9 +932,10 @@ static int iwl_rxon_add_station(struct iwl_priv *priv,
- * NOTE: Does not commit to the hardware; it sets appropriate bit fields
- * in the staging RXON flag structure based on the phymode
- */
--static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
-+static int iwl4965_set_rxon_channel(struct iwl4965_priv *priv, u8 phymode,
-+ u16 channel)
- {
-- if (!iwl_get_channel_info(priv, phymode, channel)) {
-+ if (!iwl4965_get_channel_info(priv, phymode, channel)) {
- IWL_DEBUG_INFO("Could not set channel to %d [%d]\n",
- channel, phymode);
- return -EINVAL;
-@@ -896,13 +959,13 @@ static int iwl_set_rxon_channel(struct iwl_priv *priv, u8 phymode, u16 channel)
- }
-
- /**
-- * iwl_check_rxon_cmd - validate RXON structure is valid
-+ * iwl4965_check_rxon_cmd - validate RXON structure is valid
- *
- * NOTE: This is really only useful during development and can eventually
- * be #ifdef'd out once the driver is stable and folks aren't actively
- * making changes
- */
--static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
-+static int iwl4965_check_rxon_cmd(struct iwl4965_rxon_cmd *rxon)
- {
- int error = 0;
- int counter = 1;
-@@ -962,21 +1025,21 @@ static int iwl_check_rxon_cmd(struct iwl_rxon_cmd *rxon)
- le16_to_cpu(rxon->channel));
-
- if (error) {
-- IWL_ERROR("Not a valid iwl_rxon_assoc_cmd field values\n");
-+ IWL_ERROR("Not a valid iwl4965_rxon_assoc_cmd field values\n");
- return -1;
- }
- return 0;
- }
-
- /**
-- * iwl_full_rxon_required - determine if RXON_ASSOC can be used in RXON commit
-- * @priv: staging_rxon is comapred to active_rxon
-+ * iwl4965_full_rxon_required - check if full RXON (vs RXON_ASSOC) cmd is needed
-+ * @priv: staging_rxon is compared to active_rxon
- *
-- * If the RXON structure is changing sufficient to require a new
-- * tune or to clear and reset the RXON_FILTER_ASSOC_MSK then return 1
-- * to indicate a new tune is required.
-+ * If the RXON structure is changing enough to require a new tune,
-+ * or is clearing the RXON_FILTER_ASSOC_MSK, then return 1 to indicate that
-+ * a new tune (full RXON command, rather than RXON_ASSOC cmd) is required.
- */
--static int iwl_full_rxon_required(struct iwl_priv *priv)
-+static int iwl4965_full_rxon_required(struct iwl4965_priv *priv)
- {
-
- /* These items are only settable from the full RXON command */
-@@ -1016,19 +1079,19 @@ static int iwl_full_rxon_required(struct iwl_priv *priv)
- return 0;
- }
+-extern u8 iwl_sync_station(struct iwl_priv *priv, int sta_id,
+- u16 tx_rate, u8 flags);
+-
+-static inline int iwl_is_associated(struct iwl_priv *priv)
+-{
+- return (priv->active_rxon.filter_flags & RXON_FILTER_ASSOC_MSK) ? 1 : 0;
+-}
+-
+-/******************************************************************************
+- *
+- * Functions implemented in iwl-[34]*.c which are forward declared here
+- * for use by iwl-base.c
+- *
+- * NOTE: The implementation of these functions are hardware specific
+- * which is why they are in the hardware specific files (vs. iwl-base.c)
+- *
+- * Naming convention --
+- * iwl_ <-- Its part of iwlwifi (should be changed to iwl_)
+- * iwl_hw_ <-- Hardware specific (implemented in iwl-XXXX.c by all HW)
+- * iwlXXXX_ <-- Hardware specific (implemented in iwl-XXXX.c for XXXX)
+- * iwl_bg_ <-- Called from work queue context
+- * iwl_mac_ <-- mac80211 callback
+- *
+- ****************************************************************************/
+-extern void iwl_hw_rx_handler_setup(struct iwl_priv *priv);
+-extern void iwl_hw_setup_deferred_work(struct iwl_priv *priv);
+-extern void iwl_hw_cancel_deferred_work(struct iwl_priv *priv);
+-extern int iwl_hw_rxq_stop(struct iwl_priv *priv);
+-extern int iwl_hw_set_hw_setting(struct iwl_priv *priv);
+-extern int iwl_hw_nic_init(struct iwl_priv *priv);
+-extern void iwl_hw_card_show_info(struct iwl_priv *priv);
+-extern int iwl_hw_nic_stop_master(struct iwl_priv *priv);
+-extern void iwl_hw_txq_ctx_free(struct iwl_priv *priv);
+-extern void iwl_hw_txq_ctx_stop(struct iwl_priv *priv);
+-extern int iwl_hw_nic_reset(struct iwl_priv *priv);
+-extern int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *tfd,
+- dma_addr_t addr, u16 len);
+-extern int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq);
+-extern int iwl_hw_get_temperature(struct iwl_priv *priv);
+-extern int iwl_hw_tx_queue_init(struct iwl_priv *priv,
+- struct iwl_tx_queue *txq);
+-extern unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
+- struct iwl_frame *frame, u8 rate);
+-extern int iwl_hw_get_rx_read(struct iwl_priv *priv);
+-extern void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
+- struct iwl_cmd *cmd,
+- struct ieee80211_tx_control *ctrl,
+- struct ieee80211_hdr *hdr,
+- int sta_id, int tx_id);
+-extern int iwl_hw_reg_send_txpower(struct iwl_priv *priv);
+-extern int iwl_hw_reg_set_txpower(struct iwl_priv *priv, s8 power);
+-extern void iwl_hw_rx_statistics(struct iwl_priv *priv,
+- struct iwl_rx_mem_buffer *rxb);
+-extern void iwl_disable_events(struct iwl_priv *priv);
+-extern int iwl4965_get_temperature(const struct iwl_priv *priv);
+-
+-/**
+- * iwl_hw_find_station - Find station id for a given BSSID
+- * @bssid: MAC address of station ID to find
+- *
+- * NOTE: This should not be hardware specific but the code has
+- * not yet been merged into a single common layer for managing the
+- * station tables.
+- */
+-extern u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *bssid);
+-
+-extern int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel);
+-extern int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index);
+-#endif
+diff --git a/drivers/net/wireless/libertas/11d.c b/drivers/net/wireless/libertas/11d.c
+index 9cf0211..5e10ce0 100644
+--- a/drivers/net/wireless/libertas/11d.c
++++ b/drivers/net/wireless/libertas/11d.c
+@@ -43,16 +43,14 @@ static struct chan_freq_power channel_freq_power_UN_BG[] = {
+ {14, 2484, TX_PWR_DEFAULT}
+ };
--static int iwl_send_rxon_assoc(struct iwl_priv *priv)
-+static int iwl4965_send_rxon_assoc(struct iwl4965_priv *priv)
+-static u8 wlan_region_2_code(u8 * region)
++static u8 lbs_region_2_code(u8 *region)
{
- int rc = 0;
-- struct iwl_rx_packet *res = NULL;
-- struct iwl_rxon_assoc_cmd rxon_assoc;
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_rx_packet *res = NULL;
-+ struct iwl4965_rxon_assoc_cmd rxon_assoc;
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_RXON_ASSOC,
- .len = sizeof(rxon_assoc),
- .meta.flags = CMD_WANT_SKB,
- .data = &rxon_assoc,
- };
-- const struct iwl_rxon_cmd *rxon1 = &priv->staging_rxon;
-- const struct iwl_rxon_cmd *rxon2 = &priv->active_rxon;
-+ const struct iwl4965_rxon_cmd *rxon1 = &priv->staging_rxon;
-+ const struct iwl4965_rxon_cmd *rxon2 = &priv->active_rxon;
-
- if ((rxon1->flags == rxon2->flags) &&
- (rxon1->filter_flags == rxon2->filter_flags) &&
-@@ -1054,11 +1117,11 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
- priv->staging_rxon.ofdm_ht_dual_stream_basic_rates;
- rxon_assoc.rx_chain_select_flags = priv->staging_rxon.rx_chain;
+ u8 i;
+- u8 size = sizeof(region_code_mapping)/
+- sizeof(struct region_code_mapping);
-- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl4965_send_cmd_sync(priv, &cmd);
- if (rc)
- return rc;
+ for (i = 0; region[i] && i < COUNTRY_CODE_LEN; i++)
+ region[i] = toupper(region[i]);
-- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
- if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
- IWL_ERROR("Bad return from REPLY_RXON_ASSOC command\n");
- rc = -EIO;
-@@ -1071,37 +1134,37 @@ static int iwl_send_rxon_assoc(struct iwl_priv *priv)
+- for (i = 0; i < size; i++) {
++ for (i = 0; i < ARRAY_SIZE(region_code_mapping); i++) {
+ if (!memcmp(region, region_code_mapping[i].region,
+ COUNTRY_CODE_LEN))
+ return (region_code_mapping[i].code);
+@@ -62,12 +60,11 @@ static u8 wlan_region_2_code(u8 * region)
+ return (region_code_mapping[0].code);
}
- /**
-- * iwl_commit_rxon - commit staging_rxon to hardware
-+ * iwl4965_commit_rxon - commit staging_rxon to hardware
- *
-- * The RXON command in staging_rxon is commited to the hardware and
-+ * The RXON command in staging_rxon is committed to the hardware and
- * the active_rxon structure is updated with the new data. This
- * function correctly transitions out of the RXON_ASSOC_MSK state if
- * a HW tune is required based on the RXON structure changes.
- */
--static int iwl_commit_rxon(struct iwl_priv *priv)
-+static int iwl4965_commit_rxon(struct iwl4965_priv *priv)
+-static u8 *wlan_code_2_region(u8 code)
++static u8 *lbs_code_2_region(u8 code)
{
- /* cast away the const for active_rxon in this function */
-- struct iwl_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
-+ struct iwl4965_rxon_cmd *active_rxon = (void *)&priv->active_rxon;
- DECLARE_MAC_BUF(mac);
- int rc = 0;
-
-- if (!iwl_is_alive(priv))
-+ if (!iwl4965_is_alive(priv))
- return -1;
-
- /* always get timestamp with Rx frame */
- priv->staging_rxon.flags |= RXON_FLG_TSF2HOST_MSK;
-
-- rc = iwl_check_rxon_cmd(&priv->staging_rxon);
-+ rc = iwl4965_check_rxon_cmd(&priv->staging_rxon);
- if (rc) {
- IWL_ERROR("Invalid RXON configuration. Not committing.\n");
- return -EINVAL;
- }
-
- /* If we don't need to send a full RXON, we can use
-- * iwl_rxon_assoc_cmd which is used to reconfigure filter
-+ * iwl4965_rxon_assoc_cmd which is used to reconfigure filter
- * and other flags for the current radio configuration. */
-- if (!iwl_full_rxon_required(priv)) {
-- rc = iwl_send_rxon_assoc(priv);
-+ if (!iwl4965_full_rxon_required(priv)) {
-+ rc = iwl4965_send_rxon_assoc(priv);
- if (rc) {
- IWL_ERROR("Error setting RXON_ASSOC "
- "configuration (%d).\n", rc);
-@@ -1116,25 +1179,25 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
- /* station table will be cleared */
- priv->assoc_station_added = 0;
-
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
- priv->sensitivity_data.state = IWL_SENS_CALIB_NEED_REINIT;
- if (!priv->error_recovering)
- priv->start_calib = 0;
-
- iwl4965_init_sensitivity(priv, CMD_ASYNC, 1);
--#endif /* CONFIG_IWLWIFI_SENSITIVITY */
-+#endif /* CONFIG_IWL4965_SENSITIVITY */
-
- /* If we are currently associated and the new config requires
- * an RXON_ASSOC and the new config wants the associated mask enabled,
- * we must clear the associated from the active configuration
- * before we apply the new config */
-- if (iwl_is_associated(priv) &&
-+ if (iwl4965_is_associated(priv) &&
- (priv->staging_rxon.filter_flags & RXON_FILTER_ASSOC_MSK)) {
- IWL_DEBUG_INFO("Toggling associated bit on current RXON\n");
- active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-
-- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
-- sizeof(struct iwl_rxon_cmd),
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON,
-+ sizeof(struct iwl4965_rxon_cmd),
- &priv->active_rxon);
-
- /* If the mask clearing failed then we set
-@@ -1157,35 +1220,35 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
- print_mac(mac, priv->staging_rxon.bssid_addr));
-
- /* Apply the new configuration */
-- rc = iwl_send_cmd_pdu(priv, REPLY_RXON,
-- sizeof(struct iwl_rxon_cmd), &priv->staging_rxon);
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON,
-+ sizeof(struct iwl4965_rxon_cmd), &priv->staging_rxon);
- if (rc) {
- IWL_ERROR("Error setting new configuration (%d).\n", rc);
- return rc;
- }
-
-- iwl_clear_stations_table(priv);
-+ iwl4965_clear_stations_table(priv);
-
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
- if (!priv->error_recovering)
- priv->start_calib = 0;
-
- priv->sensitivity_data.state = IWL_SENS_CALIB_NEED_REINIT;
- iwl4965_init_sensitivity(priv, CMD_ASYNC, 1);
--#endif /* CONFIG_IWLWIFI_SENSITIVITY */
-+#endif /* CONFIG_IWL4965_SENSITIVITY */
-
- memcpy(active_rxon, &priv->staging_rxon, sizeof(*active_rxon));
-
- /* If we issue a new RXON command which required a tune then we must
- * send a new TXPOWER command or we won't be able to Tx any frames */
-- rc = iwl_hw_reg_send_txpower(priv);
-+ rc = iwl4965_hw_reg_send_txpower(priv);
- if (rc) {
- IWL_ERROR("Error setting Tx power (%d).\n", rc);
- return rc;
+ u8 i;
+- u8 size = sizeof(region_code_mapping)
+- / sizeof(struct region_code_mapping);
+- for (i = 0; i < size; i++) {
++
++ for (i = 0; i < ARRAY_SIZE(region_code_mapping); i++) {
+ if (region_code_mapping[i].code == code)
+ return (region_code_mapping[i].region);
}
+@@ -82,7 +79,7 @@ static u8 *wlan_code_2_region(u8 code)
+ * @param nrchan number of channels
+ * @return the nrchan-th chan number
+ */
+-static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
++static u8 lbs_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 *chan)
+ /*find the nrchan-th chan after the firstchan*/
+ {
+ u8 i;
+@@ -90,8 +87,7 @@ static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
+ u8 cfp_no;
- /* Add the broadcast address so we can send broadcast frames */
-- if (iwl_rxon_add_station(priv, BROADCAST_ADDR, 0) ==
-+ if (iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0) ==
- IWL_INVALID_STATION) {
- IWL_ERROR("Error adding BROADCAST address for transmit.\n");
- return -EIO;
-@@ -1193,9 +1256,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ cfp = channel_freq_power_UN_BG;
+- cfp_no = sizeof(channel_freq_power_UN_BG) /
+- sizeof(struct chan_freq_power);
++ cfp_no = ARRAY_SIZE(channel_freq_power_UN_BG);
- /* If we have set the ASSOC_MSK and we are in BSS mode then
- * add the IWL_AP_ID to the station rate table */
-- if (iwl_is_associated(priv) &&
-+ if (iwl4965_is_associated(priv) &&
- (priv->iw_mode == IEEE80211_IF_TYPE_STA)) {
-- if (iwl_rxon_add_station(priv, priv->active_rxon.bssid_addr, 1)
-+ if (iwl4965_rxon_add_station(priv, priv->active_rxon.bssid_addr, 1)
- == IWL_INVALID_STATION) {
- IWL_ERROR("Error adding AP address for transmit.\n");
- return -EIO;
-@@ -1206,9 +1269,9 @@ static int iwl_commit_rxon(struct iwl_priv *priv)
+ for (i = 0; i < cfp_no; i++) {
+ if ((cfp + i)->channel == firstchan) {
+@@ -117,7 +113,7 @@ static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
+ * @param parsed_region_chan pointer to parsed_region_chan_11d
+ * @return TRUE; FALSE
+ */
+-static u8 wlan_channel_known_11d(u8 chan,
++static u8 lbs_channel_known_11d(u8 chan,
+ struct parsed_region_chan_11d * parsed_region_chan)
+ {
+ struct chan_power_11d *chanpwr = parsed_region_chan->chanpwr;
+@@ -138,19 +134,15 @@ static u8 wlan_channel_known_11d(u8 chan,
return 0;
}
--static int iwl_send_bt_config(struct iwl_priv *priv)
-+static int iwl4965_send_bt_config(struct iwl4965_priv *priv)
+-u32 libertas_chan_2_freq(u8 chan, u8 band)
++u32 lbs_chan_2_freq(u8 chan, u8 band)
{
-- struct iwl_bt_cmd bt_cmd = {
-+ struct iwl4965_bt_cmd bt_cmd = {
- .flags = 3,
- .lead_time = 0xAA,
- .max_kill = 1,
-@@ -1216,15 +1279,15 @@ static int iwl_send_bt_config(struct iwl_priv *priv)
- .kill_cts_mask = 0,
- };
-
-- return iwl_send_cmd_pdu(priv, REPLY_BT_CONFIG,
-- sizeof(struct iwl_bt_cmd), &bt_cmd);
-+ return iwl4965_send_cmd_pdu(priv, REPLY_BT_CONFIG,
-+ sizeof(struct iwl4965_bt_cmd), &bt_cmd);
- }
+ struct chan_freq_power *cf;
+- u16 cnt;
+ u16 i;
+ u32 freq = 0;
--static int iwl_send_scan_abort(struct iwl_priv *priv)
-+static int iwl4965_send_scan_abort(struct iwl4965_priv *priv)
- {
- int rc = 0;
-- struct iwl_rx_packet *res;
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_rx_packet *res;
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_SCAN_ABORT_CMD,
- .meta.flags = CMD_WANT_SKB,
- };
-@@ -1237,13 +1300,13 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
- return 0;
- }
+ cf = channel_freq_power_UN_BG;
+- cnt =
+- sizeof(channel_freq_power_UN_BG) /
+- sizeof(struct chan_freq_power);
-- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl4965_send_cmd_sync(priv, &cmd);
- if (rc) {
- clear_bit(STATUS_SCAN_ABORTING, &priv->status);
- return rc;
+- for (i = 0; i < cnt; i++) {
++ for (i = 0; i < ARRAY_SIZE(channel_freq_power_UN_BG); i++) {
+ if (chan == cf[i].channel)
+ freq = cf[i].freq;
}
+@@ -160,7 +152,7 @@ u32 libertas_chan_2_freq(u8 chan, u8 band)
-- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
- if (res->u.status != CAN_ABORT_STATUS) {
- /* The scan abort will return 1 for success or
- * 2 for "failure". A failure condition can be
-@@ -1261,8 +1324,8 @@ static int iwl_send_scan_abort(struct iwl_priv *priv)
- return rc;
- }
-
--static int iwl_card_state_sync_callback(struct iwl_priv *priv,
-- struct iwl_cmd *cmd,
-+static int iwl4965_card_state_sync_callback(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd,
- struct sk_buff *skb)
- {
- return 1;
-@@ -1271,16 +1334,16 @@ static int iwl_card_state_sync_callback(struct iwl_priv *priv,
- /*
- * CARD_STATE_CMD
- *
-- * Use: Sets the internal card state to enable, disable, or halt
-+ * Use: Sets the device's internal card state to enable, disable, or halt
- *
- * When in the 'enable' state the card operates as normal.
- * When in the 'disable' state, the card enters into a low power mode.
- * When in the 'halt' state, the card is shut down and must be fully
- * restarted to come back on.
- */
--static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
-+static int iwl4965_send_card_state(struct iwl4965_priv *priv, u32 flags, u8 meta_flag)
+ static int generate_domain_info_11d(struct parsed_region_chan_11d
+ *parsed_region_chan,
+- struct wlan_802_11d_domain_reg * domaininfo)
++ struct lbs_802_11d_domain_reg *domaininfo)
{
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_CARD_STATE_CMD,
- .len = sizeof(u32),
- .data = &flags,
-@@ -1288,22 +1351,22 @@ static int iwl_send_card_state(struct iwl_priv *priv, u32 flags, u8 meta_flag)
- };
-
- if (meta_flag & CMD_ASYNC)
-- cmd.meta.u.callback = iwl_card_state_sync_callback;
-+ cmd.meta.u.callback = iwl4965_card_state_sync_callback;
-
-- return iwl_send_cmd(priv, &cmd);
-+ return iwl4965_send_cmd(priv, &cmd);
- }
+ u8 nr_subband = 0;
--static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
-- struct iwl_cmd *cmd, struct sk_buff *skb)
-+static int iwl4965_add_sta_sync_callback(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd, struct sk_buff *skb)
+@@ -225,7 +217,7 @@ static int generate_domain_info_11d(struct parsed_region_chan_11d
+ * @param *parsed_region_chan pointer to parsed_region_chan_11d
+ * @return N/A
+ */
+-static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_chan,
++static void lbs_generate_parsed_region_chan_11d(struct region_channel *region_chan,
+ struct parsed_region_chan_11d *
+ parsed_region_chan)
{
-- struct iwl_rx_packet *res = NULL;
-+ struct iwl4965_rx_packet *res = NULL;
-
- if (!skb) {
- IWL_ERROR("Error: Response NULL in REPLY_ADD_STA.\n");
- return 1;
- }
-
-- res = (struct iwl_rx_packet *)skb->data;
-+ res = (struct iwl4965_rx_packet *)skb->data;
- if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
- IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
- res->hdr.flags);
-@@ -1321,29 +1384,29 @@ static int iwl_add_sta_sync_callback(struct iwl_priv *priv,
- return 1;
- }
+@@ -246,7 +238,7 @@ static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_
+ parsed_region_chan->band = region_chan->band;
+ parsed_region_chan->region = region_chan->region;
+ memcpy(parsed_region_chan->countrycode,
+- wlan_code_2_region(region_chan->region), COUNTRY_CODE_LEN);
++ lbs_code_2_region(region_chan->region), COUNTRY_CODE_LEN);
--int iwl_send_add_station(struct iwl_priv *priv,
-- struct iwl_addsta_cmd *sta, u8 flags)
-+int iwl4965_send_add_station(struct iwl4965_priv *priv,
-+ struct iwl4965_addsta_cmd *sta, u8 flags)
+ lbs_deb_11d("region 0x%x, band %d\n", parsed_region_chan->region,
+ parsed_region_chan->band);
+@@ -272,7 +264,7 @@ static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_
+ * @param chan chan
+ * @return TRUE;FALSE
+ */
+-static u8 wlan_region_chan_supported_11d(u8 region, u8 band, u8 chan)
++static u8 lbs_region_chan_supported_11d(u8 region, u8 band, u8 chan)
{
-- struct iwl_rx_packet *res = NULL;
-+ struct iwl4965_rx_packet *res = NULL;
- int rc = 0;
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_ADD_STA,
-- .len = sizeof(struct iwl_addsta_cmd),
-+ .len = sizeof(struct iwl4965_addsta_cmd),
- .meta.flags = flags,
- .data = sta,
- };
-
- if (flags & CMD_ASYNC)
-- cmd.meta.u.callback = iwl_add_sta_sync_callback;
-+ cmd.meta.u.callback = iwl4965_add_sta_sync_callback;
- else
- cmd.meta.flags |= CMD_WANT_SKB;
-
-- rc = iwl_send_cmd(priv, &cmd);
-+ rc = iwl4965_send_cmd(priv, &cmd);
+ struct chan_freq_power *cfp;
+ int cfp_no;
+@@ -281,7 +273,7 @@ static u8 wlan_region_chan_supported_11d(u8 region, u8 band, u8 chan)
- if (rc || (flags & CMD_ASYNC))
- return rc;
+ lbs_deb_enter(LBS_DEB_11D);
-- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
- if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
- IWL_ERROR("Bad return from REPLY_ADD_STA (0x%08X)\n",
- res->hdr.flags);
-@@ -1368,7 +1431,7 @@ int iwl_send_add_station(struct iwl_priv *priv,
- return rc;
- }
+- cfp = libertas_get_region_cfp_table(region, band, &cfp_no);
++ cfp = lbs_get_region_cfp_table(region, band, &cfp_no);
+ if (cfp == NULL)
+ return 0;
--static int iwl_update_sta_key_info(struct iwl_priv *priv,
-+static int iwl4965_update_sta_key_info(struct iwl4965_priv *priv,
- struct ieee80211_key_conf *keyconf,
- u8 sta_id)
- {
-@@ -1384,7 +1447,6 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
- break;
- case ALG_TKIP:
- case ALG_WEP:
-- return -EINVAL;
- default:
- return -EINVAL;
- }
-@@ -1403,28 +1465,28 @@ static int iwl_update_sta_key_info(struct iwl_priv *priv,
- spin_unlock_irqrestore(&priv->sta_lock, flags);
+@@ -346,7 +338,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
- IWL_DEBUG_INFO("hwcrypto: modify ucode station key info\n");
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, 0);
- return 0;
- }
+ /*Step1: check region_code */
+ parsed_region_chan->region = region =
+- wlan_region_2_code(countryinfo->countrycode);
++ lbs_region_2_code(countryinfo->countrycode);
--static int iwl_clear_sta_key_info(struct iwl_priv *priv, u8 sta_id)
-+static int iwl4965_clear_sta_key_info(struct iwl4965_priv *priv, u8 sta_id)
- {
- unsigned long flags;
+ lbs_deb_11d("regioncode=%x\n", (u8) parsed_region_chan->region);
+ lbs_deb_hex(LBS_DEB_11D, "countrycode", (char *)countryinfo->countrycode,
+@@ -375,7 +367,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
+ for (i = 0; idx < MAX_NO_OF_CHAN && i < nrchan; i++) {
+ /*step4: channel is supported? */
- spin_lock_irqsave(&priv->sta_lock, flags);
-- memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl_hw_key));
-- memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl_keyinfo));
-+ memset(&priv->stations[sta_id].keyinfo, 0, sizeof(struct iwl4965_hw_key));
-+ memset(&priv->stations[sta_id].sta.key, 0, sizeof(struct iwl4965_keyinfo));
- priv->stations[sta_id].sta.key.key_flags = STA_KEY_FLG_NO_ENC;
- priv->stations[sta_id].sta.sta.modify_mask = STA_MODIFY_KEY_MASK;
- priv->stations[sta_id].sta.mode = STA_CONTROL_MODIFY_MSK;
- spin_unlock_irqrestore(&priv->sta_lock, flags);
+- if (!wlan_get_chan_11d(band, firstchan, i, &curchan)) {
++ if (!lbs_get_chan_11d(band, firstchan, i, &curchan)) {
+ /* Chan is not found in UN table */
+ lbs_deb_11d("chan is not supported: %d \n", i);
+ break;
+@@ -383,7 +375,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
- IWL_DEBUG_INFO("hwcrypto: clear ucode station key info\n");
-- iwl_send_add_station(priv, &priv->stations[sta_id].sta, 0);
-+ iwl4965_send_add_station(priv, &priv->stations[sta_id].sta, 0);
- return 0;
- }
+ lastchan = curchan;
--static void iwl_clear_free_frames(struct iwl_priv *priv)
-+static void iwl4965_clear_free_frames(struct iwl4965_priv *priv)
+- if (wlan_region_chan_supported_11d
++ if (lbs_region_chan_supported_11d
+ (region, band, curchan)) {
+ /*step5: Check if curchan is supported by mrvl in region */
+ parsed_region_chan->chanpwr[idx].chan = curchan;
+@@ -419,14 +411,14 @@ done:
+ * @param parsed_region_chan pointer to parsed_region_chan_11d
+ * @return PASSIVE if chan is unknown; ACTIVE if chan is known
+ */
+-u8 libertas_get_scan_type_11d(u8 chan,
++u8 lbs_get_scan_type_11d(u8 chan,
+ struct parsed_region_chan_11d * parsed_region_chan)
{
- struct list_head *element;
-
-@@ -1434,7 +1496,7 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
- while (!list_empty(&priv->free_frames)) {
- element = priv->free_frames.next;
- list_del(element);
-- kfree(list_entry(element, struct iwl_frame, list));
-+ kfree(list_entry(element, struct iwl4965_frame, list));
- priv->frames_count--;
- }
+ u8 scan_type = CMD_SCAN_TYPE_PASSIVE;
-@@ -1445,9 +1507,9 @@ static void iwl_clear_free_frames(struct iwl_priv *priv)
- }
- }
+ lbs_deb_enter(LBS_DEB_11D);
--static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
-+static struct iwl4965_frame *iwl4965_get_free_frame(struct iwl4965_priv *priv)
- {
-- struct iwl_frame *frame;
-+ struct iwl4965_frame *frame;
- struct list_head *element;
- if (list_empty(&priv->free_frames)) {
- frame = kzalloc(sizeof(*frame), GFP_KERNEL);
-@@ -1462,21 +1524,21 @@ static struct iwl_frame *iwl_get_free_frame(struct iwl_priv *priv)
+- if (wlan_channel_known_11d(chan, parsed_region_chan)) {
++ if (lbs_channel_known_11d(chan, parsed_region_chan)) {
+ lbs_deb_11d("found, do active scan\n");
+ scan_type = CMD_SCAN_TYPE_ACTIVE;
+ } else {
+@@ -438,29 +430,29 @@ u8 libertas_get_scan_type_11d(u8 chan,
- element = priv->free_frames.next;
- list_del(element);
-- return list_entry(element, struct iwl_frame, list);
-+ return list_entry(element, struct iwl4965_frame, list);
}
--static void iwl_free_frame(struct iwl_priv *priv, struct iwl_frame *frame)
-+static void iwl4965_free_frame(struct iwl4965_priv *priv, struct iwl4965_frame *frame)
+-void libertas_init_11d(wlan_private * priv)
++void lbs_init_11d(struct lbs_private *priv)
{
- memset(frame, 0, sizeof(*frame));
- list_add(&frame->list, &priv->free_frames);
+- priv->adapter->enable11d = 0;
+- memset(&(priv->adapter->parsed_region_chan), 0,
++ priv->enable11d = 0;
++ memset(&(priv->parsed_region_chan), 0,
+ sizeof(struct parsed_region_chan_11d));
+ return;
}
--unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
-+unsigned int iwl4965_fill_beacon_frame(struct iwl4965_priv *priv,
- struct ieee80211_hdr *hdr,
- const u8 *dest, int left)
+ /**
+ * @brief This function sets DOMAIN INFO to FW
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @return 0; -1
+ */
+-static int set_domain_info_11d(wlan_private * priv)
++static int set_domain_info_11d(struct lbs_private *priv)
{
+ int ret;
-- if (!iwl_is_associated(priv) || !priv->ibss_beacon ||
-+ if (!iwl4965_is_associated(priv) || !priv->ibss_beacon ||
- ((priv->iw_mode != IEEE80211_IF_TYPE_IBSS) &&
- (priv->iw_mode != IEEE80211_IF_TYPE_AP)))
+- if (!priv->adapter->enable11d) {
++ if (!priv->enable11d) {
+ lbs_deb_11d("dnld domain Info with 11d disabled\n");
return 0;
-@@ -1489,10 +1551,11 @@ unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
- return priv->ibss_beacon->len;
- }
+ }
--int iwl_rate_index_from_plcp(int plcp)
-+int iwl4965_rate_index_from_plcp(int plcp)
- {
- int i = 0;
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11D_DOMAIN_INFO,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11D_DOMAIN_INFO,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP, 0, NULL);
+ if (ret)
+@@ -471,28 +463,27 @@ static int set_domain_info_11d(wlan_private * priv)
-+ /* 4965 HT rate format */
- if (plcp & RATE_MCS_HT_MSK) {
- i = (plcp & 0xff);
+ /**
+ * @brief This function setups scan channels
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @param band band
+ * @return 0
+ */
+-int libertas_set_universaltable(wlan_private * priv, u8 band)
++int lbs_set_universaltable(struct lbs_private *priv, u8 band)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ u16 size = sizeof(struct chan_freq_power);
+ u16 i = 0;
-@@ -1506,35 +1569,37 @@ int iwl_rate_index_from_plcp(int plcp)
- if ((i >= IWL_FIRST_OFDM_RATE) &&
- (i <= IWL_LAST_OFDM_RATE))
- return i;
-+
-+ /* 4965 legacy rate format, search for match in table */
- } else {
-- for (i = 0; i < ARRAY_SIZE(iwl_rates); i++)
-- if (iwl_rates[i].plcp == (plcp &0xFF))
-+ for (i = 0; i < ARRAY_SIZE(iwl4965_rates); i++)
-+ if (iwl4965_rates[i].plcp == (plcp &0xFF))
- return i;
- }
- return -1;
- }
+- memset(adapter->universal_channel, 0,
+- sizeof(adapter->universal_channel));
++ memset(priv->universal_channel, 0,
++ sizeof(priv->universal_channel));
--static u8 iwl_rate_get_lowest_plcp(int rate_mask)
-+static u8 iwl4965_rate_get_lowest_plcp(int rate_mask)
- {
- u8 i;
+- adapter->universal_channel[i].nrcfp =
++ priv->universal_channel[i].nrcfp =
+ sizeof(channel_freq_power_UN_BG) / size;
+ lbs_deb_11d("BG-band nrcfp %d\n",
+- adapter->universal_channel[i].nrcfp);
++ priv->universal_channel[i].nrcfp);
- for (i = IWL_RATE_1M_INDEX; i != IWL_RATE_INVALID;
-- i = iwl_rates[i].next_ieee) {
-+ i = iwl4965_rates[i].next_ieee) {
- if (rate_mask & (1 << i))
-- return iwl_rates[i].plcp;
-+ return iwl4965_rates[i].plcp;
- }
+- adapter->universal_channel[i].CFP = channel_freq_power_UN_BG;
+- adapter->universal_channel[i].valid = 1;
+- adapter->universal_channel[i].region = UNIVERSAL_REGION_CODE;
+- adapter->universal_channel[i].band = band;
++ priv->universal_channel[i].CFP = channel_freq_power_UN_BG;
++ priv->universal_channel[i].valid = 1;
++ priv->universal_channel[i].region = UNIVERSAL_REGION_CODE;
++ priv->universal_channel[i].band = band;
+ i++;
- return IWL_RATE_INVALID;
- }
+ return 0;
+@@ -500,21 +491,20 @@ int libertas_set_universaltable(wlan_private * priv, u8 band)
--static int iwl_send_beacon_cmd(struct iwl_priv *priv)
-+static int iwl4965_send_beacon_cmd(struct iwl4965_priv *priv)
+ /**
+ * @brief This function implements command CMD_802_11D_DOMAIN_INFO
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @param cmd pointer to cmd buffer
+ * @param cmdno cmd ID
+ * @param cmdOption cmd action
+ * @return 0
+ */
+-int libertas_cmd_802_11d_domain_info(wlan_private * priv,
++int lbs_cmd_802_11d_domain_info(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, u16 cmdno,
+ u16 cmdoption)
{
-- struct iwl_frame *frame;
-+ struct iwl4965_frame *frame;
- unsigned int frame_size;
- int rc;
- u8 rate;
-
-- frame = iwl_get_free_frame(priv);
-+ frame = iwl4965_get_free_frame(priv);
+ struct cmd_ds_802_11d_domain_info *pdomaininfo =
+ &cmd->params.domaininfo;
+ struct mrvlietypes_domainparamset *domain = &pdomaininfo->domain;
+- wlan_adapter *adapter = priv->adapter;
+- u8 nr_subband = adapter->domainreg.nr_subband;
++ u8 nr_subband = priv->domainreg.nr_subband;
- if (!frame) {
- IWL_ERROR("Could not obtain free frame buffer for beacon "
-@@ -1543,22 +1608,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
- }
+ lbs_deb_enter(LBS_DEB_11D);
- if (!(priv->staging_rxon.flags & RXON_FLG_BAND_24G_MSK)) {
-- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic &
-+ rate = iwl4965_rate_get_lowest_plcp(priv->active_rate_basic &
- 0xFF0);
- if (rate == IWL_INVALID_RATE)
- rate = IWL_RATE_6M_PLCP;
- } else {
-- rate = iwl_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
-+ rate = iwl4965_rate_get_lowest_plcp(priv->active_rate_basic & 0xF);
- if (rate == IWL_INVALID_RATE)
- rate = IWL_RATE_1M_PLCP;
+@@ -526,12 +516,12 @@ int libertas_cmd_802_11d_domain_info(wlan_private * priv,
+ cmd->size =
+ cpu_to_le16(sizeof(pdomaininfo->action) + S_DS_GEN);
+ lbs_deb_hex(LBS_DEB_11D, "802_11D_DOMAIN_INFO", (u8 *) cmd,
+- (int)(cmd->size));
++ le16_to_cpu(cmd->size));
+ goto done;
}
-- frame_size = iwl_hw_get_beacon_cmd(priv, frame, rate);
-+ frame_size = iwl4965_hw_get_beacon_cmd(priv, frame, rate);
-
-- rc = iwl_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_TX_BEACON, frame_size,
- &frame->u.cmd[0]);
+ domain->header.type = cpu_to_le16(TLV_TYPE_DOMAIN);
+- memcpy(domain->countrycode, adapter->domainreg.countrycode,
++ memcpy(domain->countrycode, priv->domainreg.countrycode,
+ sizeof(domain->countrycode));
-- iwl_free_frame(priv, frame);
-+ iwl4965_free_frame(priv, frame);
+ domain->header.len =
+@@ -539,7 +529,7 @@ int libertas_cmd_802_11d_domain_info(wlan_private * priv,
+ sizeof(domain->countrycode));
- return rc;
- }
-@@ -1569,22 +1634,22 @@ static int iwl_send_beacon_cmd(struct iwl_priv *priv)
- *
- ******************************************************************************/
+ if (nr_subband) {
+- memcpy(domain->subband, adapter->domainreg.subband,
++ memcpy(domain->subband, priv->domainreg.subband,
+ nr_subband * sizeof(struct ieeetypes_subbandset));
--static void get_eeprom_mac(struct iwl_priv *priv, u8 *mac)
-+static void get_eeprom_mac(struct iwl4965_priv *priv, u8 *mac)
- {
- memcpy(mac, priv->eeprom.mac_address, 6);
- }
+ cmd->size = cpu_to_le16(sizeof(pdomaininfo->action) +
+@@ -560,11 +550,11 @@ done:
/**
-- * iwl_eeprom_init - read EEPROM contents
-+ * iwl4965_eeprom_init - read EEPROM contents
- *
-- * Load the EEPROM from adapter into priv->eeprom
-+ * Load the EEPROM contents from adapter into priv->eeprom
- *
- * NOTE: This routine uses the non-debug IO access functions.
+ * @brief This function parses countryinfo from AP and download country info to FW
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @param resp pointer to command response buffer
+ * @return 0; -1
*/
--int iwl_eeprom_init(struct iwl_priv *priv)
-+int iwl4965_eeprom_init(struct iwl4965_priv *priv)
+-int libertas_ret_802_11d_domain_info(wlan_private * priv,
++int lbs_ret_802_11d_domain_info(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
-- u16 *e = (u16 *)&priv->eeprom;
-- u32 gp = iwl_read32(priv, CSR_EEPROM_GP);
-+ __le16 *e = (__le16 *)&priv->eeprom;
-+ u32 gp = iwl4965_read32(priv, CSR_EEPROM_GP);
- u32 r;
- int sz = sizeof(priv->eeprom);
- int rc;
-@@ -1602,20 +1667,21 @@ int iwl_eeprom_init(struct iwl_priv *priv)
- return -ENOENT;
- }
-
-- rc = iwl_eeprom_aqcuire_semaphore(priv);
-+ /* Make sure driver (instead of uCode) is allowed to read EEPROM */
-+ rc = iwl4965_eeprom_acquire_semaphore(priv);
- if (rc < 0) {
-- IWL_ERROR("Failed to aqcuire EEPROM semaphore.\n");
-+ IWL_ERROR("Failed to acquire EEPROM semaphore.\n");
- return -ENOENT;
- }
-
- /* eeprom is an array of 16bit values */
- for (addr = 0; addr < sz; addr += sizeof(u16)) {
-- _iwl_write32(priv, CSR_EEPROM_REG, addr << 1);
-- _iwl_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
-+ _iwl4965_write32(priv, CSR_EEPROM_REG, addr << 1);
-+ _iwl4965_clear_bit(priv, CSR_EEPROM_REG, CSR_EEPROM_REG_BIT_CMD);
-
- for (i = 0; i < IWL_EEPROM_ACCESS_TIMEOUT;
- i += IWL_EEPROM_ACCESS_DELAY) {
-- r = _iwl_read_restricted(priv, CSR_EEPROM_REG);
-+ r = _iwl4965_read_direct32(priv, CSR_EEPROM_REG);
- if (r & CSR_EEPROM_REG_READ_VALID_MSK)
- break;
- udelay(IWL_EEPROM_ACCESS_DELAY);
-@@ -1626,12 +1692,12 @@ int iwl_eeprom_init(struct iwl_priv *priv)
- rc = -ETIMEDOUT;
- goto done;
- }
-- e[addr / 2] = le16_to_cpu(r >> 16);
-+ e[addr / 2] = cpu_to_le16(r >> 16);
- }
- rc = 0;
-
- done:
-- iwl_eeprom_release_semaphore(priv);
-+ iwl4965_eeprom_release_semaphore(priv);
- return rc;
- }
-
-@@ -1640,22 +1706,20 @@ done:
- * Misc. internal state and helper functions
- *
- ******************************************************************************/
--#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL4965_DEBUG
+ struct cmd_ds_802_11d_domain_info *domaininfo = &resp->params.domaininforesp;
+@@ -606,31 +596,30 @@ int libertas_ret_802_11d_domain_info(wlan_private * priv,
/**
-- * iwl_report_frame - dump frame to syslog during debug sessions
-+ * iwl4965_report_frame - dump frame to syslog during debug sessions
- *
-- * hack this function to show different aspects of received frames,
-+ * You may hack this function to show different aspects of received frames,
- * including selective frame dumps.
- * group100 parameter selects whether to show 1 out of 100 good frames.
- *
-- * TODO: ieee80211_hdr stuff is common to 3945 and 4965, so frame type
-- * info output is okay, but some of this stuff (e.g. iwl_rx_frame_stats)
-- * is 3945-specific and gives bad output for 4965. Need to split the
-- * functionality, keep common stuff here.
-+ * TODO: This was originally written for 3945, need to audit for
-+ * proper operation with 4965.
+ * @brief This function parses countryinfo from AP and download country info to FW
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @return 0; -1
*/
--void iwl_report_frame(struct iwl_priv *priv,
-- struct iwl_rx_packet *pkt,
-+void iwl4965_report_frame(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_packet *pkt,
- struct ieee80211_hdr *header, int group100)
+-int libertas_parse_dnld_countryinfo_11d(wlan_private * priv,
++int lbs_parse_dnld_countryinfo_11d(struct lbs_private *priv,
+ struct bss_descriptor * bss)
{
- u32 to_us;
-@@ -1677,9 +1741,9 @@ void iwl_report_frame(struct iwl_priv *priv,
- u8 agc;
- u16 sig_avg;
- u16 noise_diff;
-- struct iwl_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
-- struct iwl_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
-- struct iwl_rx_frame_end *rx_end = IWL_RX_END(pkt);
-+ struct iwl4965_rx_frame_stats *rx_stats = IWL_RX_STATS(pkt);
-+ struct iwl4965_rx_frame_hdr *rx_hdr = IWL_RX_HDR(pkt);
-+ struct iwl4965_rx_frame_end *rx_end = IWL_RX_END(pkt);
- u8 *data = IWL_RX_DATA(pkt);
-
- /* MAC header */
-@@ -1755,11 +1819,11 @@ void iwl_report_frame(struct iwl_priv *priv,
- else
- title = "Frame";
+ int ret;
+- wlan_adapter *adapter = priv->adapter;
-- rate = iwl_rate_index_from_plcp(rate_sym);
-+ rate = iwl4965_rate_index_from_plcp(rate_sym);
- if (rate == -1)
- rate = 0;
- else
-- rate = iwl_rates[rate].ieee / 2;
-+ rate = iwl4965_rates[rate].ieee / 2;
+ lbs_deb_enter(LBS_DEB_11D);
+- if (priv->adapter->enable11d) {
+- memset(&adapter->parsed_region_chan, 0,
++ if (priv->enable11d) {
++ memset(&priv->parsed_region_chan, 0,
+ sizeof(struct parsed_region_chan_11d));
+ ret = parse_domain_info_11d(&bss->countryinfo, 0,
+- &adapter->parsed_region_chan);
++ &priv->parsed_region_chan);
- /* print frame summary.
- * MAC addresses show just the last byte (for brevity),
-@@ -1781,25 +1845,25 @@ void iwl_report_frame(struct iwl_priv *priv,
+ if (ret == -1) {
+ lbs_deb_11d("error parsing domain_info from AP\n");
+ goto done;
}
- }
- if (print_dump)
-- iwl_print_hex_dump(IWL_DL_RX, data, length);
-+ iwl4965_print_hex_dump(IWL_DL_RX, data, length);
- }
- #endif
--static void iwl_unset_hw_setting(struct iwl_priv *priv)
-+static void iwl4965_unset_hw_setting(struct iwl4965_priv *priv)
- {
- if (priv->hw_setting.shared_virt)
- pci_free_consistent(priv->pci_dev,
-- sizeof(struct iwl_shared),
-+ sizeof(struct iwl4965_shared),
- priv->hw_setting.shared_virt,
- priv->hw_setting.shared_phys);
- }
+- memset(&adapter->domainreg, 0,
+- sizeof(struct wlan_802_11d_domain_reg));
+- generate_domain_info_11d(&adapter->parsed_region_chan,
+- &adapter->domainreg);
++ memset(&priv->domainreg, 0,
++ sizeof(struct lbs_802_11d_domain_reg));
++ generate_domain_info_11d(&priv->parsed_region_chan,
++ &priv->domainreg);
- /**
-- * iwl_supported_rate_to_ie - fill in the supported rate in IE field
-+ * iwl4965_supported_rate_to_ie - fill in the supported rate in IE field
- *
- * return : set the bit for each supported rate insert in ie
- */
--static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
-+static u16 iwl4965_supported_rate_to_ie(u8 *ie, u16 supported_rate,
- u16 basic_rate, int *left)
- {
- u16 ret_rates = 0, bit;
-@@ -1810,7 +1874,7 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
- for (bit = 1, i = 0; i < IWL_RATE_COUNT; i++, bit <<= 1) {
- if (bit & supported_rate) {
- ret_rates |= bit;
-- rates[*cnt] = iwl_rates[i].ieee |
-+ rates[*cnt] = iwl4965_rates[i].ieee |
- ((bit & basic_rate) ? 0x80 : 0x00);
- (*cnt)++;
- (*left)--;
-@@ -1823,22 +1887,25 @@ static u16 iwl_supported_rate_to_ie(u8 *ie, u16 supported_rate,
- return ret_rates;
- }
+ ret = set_domain_info_11d(priv);
--#ifdef CONFIG_IWLWIFI_HT
--void static iwl_set_ht_capab(struct ieee80211_hw *hw,
-- struct ieee80211_ht_capability *ht_cap,
-- u8 use_wide_chan);
-+#ifdef CONFIG_IWL4965_HT
-+void static iwl4965_set_ht_capab(struct ieee80211_hw *hw,
-+ struct ieee80211_ht_cap *ht_cap,
-+ u8 use_current_config);
- #endif
+@@ -648,25 +637,23 @@ done:
/**
-- * iwl_fill_probe_req - fill in all required fields and IE for probe request
-+ * iwl4965_fill_probe_req - fill in all required fields and IE for probe request
+ * @brief This function generates 11D info from user specified regioncode and download to FW
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @return 0; -1
*/
--static u16 iwl_fill_probe_req(struct iwl_priv *priv,
-+static u16 iwl4965_fill_probe_req(struct iwl4965_priv *priv,
- struct ieee80211_mgmt *frame,
- int left, int is_direct)
+-int libertas_create_dnld_countryinfo_11d(wlan_private * priv)
++int lbs_create_dnld_countryinfo_11d(struct lbs_private *priv)
{
- int len = 0;
- u8 *pos = NULL;
-- u16 active_rates, ret_rates, cck_rates;
-+ u16 active_rates, ret_rates, cck_rates, active_rate_basic;
-+#ifdef CONFIG_IWL4965_HT
-+ struct ieee80211_hw_mode *mode;
-+#endif /* CONFIG_IWL4965_HT */
-
- /* Make sure there is enough space for the probe request,
- * two mandatory IEs and the data */
-@@ -1848,9 +1915,9 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
- len += 24;
-
- frame->frame_control = cpu_to_le16(IEEE80211_STYPE_PROBE_REQ);
-- memcpy(frame->da, BROADCAST_ADDR, ETH_ALEN);
-+ memcpy(frame->da, iwl4965_broadcast_addr, ETH_ALEN);
- memcpy(frame->sa, priv->mac_addr, ETH_ALEN);
-- memcpy(frame->bssid, BROADCAST_ADDR, ETH_ALEN);
-+ memcpy(frame->bssid, iwl4965_broadcast_addr, ETH_ALEN);
- frame->seq_ctrl = 0;
-
- /* fill in our indirect SSID IE */
-@@ -1888,17 +1955,19 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
- *pos++ = WLAN_EID_SUPP_RATES;
- *pos = 0;
+ int ret;
+- wlan_adapter *adapter = priv->adapter;
+ struct region_channel *region_chan;
+ u8 j;
-- priv->active_rate = priv->rates_mask;
-- active_rates = priv->active_rate;
-- priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
-+ /* exclude 60M rate */
-+ active_rates = priv->rates_mask;
-+ active_rates &= ~IWL_RATE_60M_MASK;
-+
-+ active_rate_basic = active_rates & IWL_BASIC_RATES_MASK;
+ lbs_deb_enter(LBS_DEB_11D);
+- lbs_deb_11d("curbssparams.band %d\n", adapter->curbssparams.band);
++ lbs_deb_11d("curbssparams.band %d\n", priv->curbssparams.band);
- cck_rates = IWL_CCK_RATES_MASK & active_rates;
-- ret_rates = iwl_supported_rate_to_ie(pos, cck_rates,
-- priv->active_rate_basic, &left);
-+ ret_rates = iwl4965_supported_rate_to_ie(pos, cck_rates,
-+ active_rate_basic, &left);
- active_rates &= ~ret_rates;
+- if (priv->adapter->enable11d) {
++ if (priv->enable11d) {
+ /* update parsed_region_chan_11; dnld domaininf to FW */
-- ret_rates = iwl_supported_rate_to_ie(pos, active_rates,
-- priv->active_rate_basic, &left);
-+ ret_rates = iwl4965_supported_rate_to_ie(pos, active_rates,
-+ active_rate_basic, &left);
- active_rates &= ~ret_rates;
+- for (j = 0; j < sizeof(adapter->region_channel) /
+- sizeof(adapter->region_channel[0]); j++) {
+- region_chan = &adapter->region_channel[j];
++ for (j = 0; j < ARRAY_SIZE(priv->region_channel); j++) {
++ region_chan = &priv->region_channel[j];
- len += 2 + *pos;
-@@ -1914,25 +1983,22 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
- /* ... fill it in... */
- *pos++ = WLAN_EID_EXT_SUPP_RATES;
- *pos = 0;
-- iwl_supported_rate_to_ie(pos, active_rates,
-- priv->active_rate_basic, &left);
-+ iwl4965_supported_rate_to_ie(pos, active_rates,
-+ active_rate_basic, &left);
- if (*pos > 0)
- len += 2 + *pos;
+ lbs_deb_11d("%d region_chan->band %d\n", j,
+ region_chan->band);
+@@ -674,29 +661,28 @@ int libertas_create_dnld_countryinfo_11d(wlan_private * priv)
+ if (!region_chan || !region_chan->valid
+ || !region_chan->CFP)
+ continue;
+- if (region_chan->band != adapter->curbssparams.band)
++ if (region_chan->band != priv->curbssparams.band)
+ continue;
+ break;
+ }
--#ifdef CONFIG_IWLWIFI_HT
-- if (is_direct && priv->is_ht_enabled) {
-- u8 use_wide_chan = 1;
--
-- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
-- use_wide_chan = 0;
-+#ifdef CONFIG_IWL4965_HT
-+ mode = priv->hw->conf.mode;
-+ if (mode->ht_info.ht_supported) {
- pos += (*pos) + 1;
- *pos++ = WLAN_EID_HT_CAPABILITY;
-- *pos++ = sizeof(struct ieee80211_ht_capability);
-- iwl_set_ht_capab(NULL, (struct ieee80211_ht_capability *)pos,
-- use_wide_chan);
-- len += 2 + sizeof(struct ieee80211_ht_capability);
-+ *pos++ = sizeof(struct ieee80211_ht_cap);
-+ iwl4965_set_ht_capab(priv->hw,
-+ (struct ieee80211_ht_cap *)pos, 0);
-+ len += 2 + sizeof(struct ieee80211_ht_cap);
- }
--#endif /*CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT */
+- if (j >= sizeof(adapter->region_channel) /
+- sizeof(adapter->region_channel[0])) {
++ if (j >= ARRAY_SIZE(priv->region_channel)) {
+ lbs_deb_11d("region_chan not found, band %d\n",
+- adapter->curbssparams.band);
++ priv->curbssparams.band);
+ ret = -1;
+ goto done;
+ }
- fill_end:
- return (u16)len;
-@@ -1941,16 +2007,16 @@ static u16 iwl_fill_probe_req(struct iwl_priv *priv,
- /*
- * QoS support
- */
--#ifdef CONFIG_IWLWIFI_QOS
--static int iwl_send_qos_params_command(struct iwl_priv *priv,
-- struct iwl_qosparam_cmd *qos)
-+#ifdef CONFIG_IWL4965_QOS
-+static int iwl4965_send_qos_params_command(struct iwl4965_priv *priv,
-+ struct iwl4965_qosparam_cmd *qos)
- {
+- memset(&adapter->parsed_region_chan, 0,
++ memset(&priv->parsed_region_chan, 0,
+ sizeof(struct parsed_region_chan_11d));
+- wlan_generate_parsed_region_chan_11d(region_chan,
+- &adapter->
++ lbs_generate_parsed_region_chan_11d(region_chan,
++ &priv->
+ parsed_region_chan);
-- return iwl_send_cmd_pdu(priv, REPLY_QOS_PARAM,
-- sizeof(struct iwl_qosparam_cmd), qos);
-+ return iwl4965_send_cmd_pdu(priv, REPLY_QOS_PARAM,
-+ sizeof(struct iwl4965_qosparam_cmd), qos);
- }
+- memset(&adapter->domainreg, 0,
+- sizeof(struct wlan_802_11d_domain_reg));
+- generate_domain_info_11d(&adapter->parsed_region_chan,
+- &adapter->domainreg);
++ memset(&priv->domainreg, 0,
++ sizeof(struct lbs_802_11d_domain_reg));
++ generate_domain_info_11d(&priv->parsed_region_chan,
++ &priv->domainreg);
--static void iwl_reset_qos(struct iwl_priv *priv)
-+static void iwl4965_reset_qos(struct iwl4965_priv *priv)
- {
- u16 cw_min = 15;
- u16 cw_max = 1023;
-@@ -2037,13 +2103,10 @@ static void iwl_reset_qos(struct iwl_priv *priv)
- spin_unlock_irqrestore(&priv->lock, flags);
- }
+ ret = set_domain_info_11d(priv);
--static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
-+static void iwl4965_activate_qos(struct iwl4965_priv *priv, u8 force)
- {
- unsigned long flags;
+diff --git a/drivers/net/wireless/libertas/11d.h b/drivers/net/wireless/libertas/11d.h
+index 3a6d1f8..811eea2 100644
+--- a/drivers/net/wireless/libertas/11d.h
++++ b/drivers/net/wireless/libertas/11d.h
+@@ -2,8 +2,8 @@
+ * This header file contains data structures and
+ * function declarations of 802.11d
+ */
+-#ifndef _WLAN_11D_
+-#define _WLAN_11D_
++#ifndef _LBS_11D_
++#define _LBS_11D_
-- if (priv == NULL)
-- return;
--
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+ #include "types.h"
+ #include "defs.h"
+@@ -52,7 +52,7 @@ struct cmd_ds_802_11d_domain_info {
+ } __attribute__ ((packed));
-@@ -2057,23 +2120,28 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
- !priv->qos_data.qos_cap.q_AP.txop_request)
- priv->qos_data.def_qos_parm.qos_flags |=
- QOS_PARAM_FLG_TXOP_TYPE_MSK;
--
- if (priv->qos_data.qos_active)
- priv->qos_data.def_qos_parm.qos_flags |=
- QOS_PARAM_FLG_UPDATE_EDCA_MSK;
+ /** domain regulatory information */
+-struct wlan_802_11d_domain_reg {
++struct lbs_802_11d_domain_reg {
+ /** country Code*/
+ u8 countrycode[COUNTRY_CODE_LEN];
+ /** No. of subband*/
+@@ -78,26 +78,28 @@ struct region_code_mapping {
+ u8 code;
+ };
-+#ifdef CONFIG_IWL4965_HT
-+ if (priv->current_ht_config.is_ht)
-+ priv->qos_data.def_qos_parm.qos_flags |= QOS_PARAM_FLG_TGN_MSK;
-+#endif /* CONFIG_IWL4965_HT */
+-u8 libertas_get_scan_type_11d(u8 chan,
++struct lbs_private;
+
- spin_unlock_irqrestore(&priv->lock, flags);
++u8 lbs_get_scan_type_11d(u8 chan,
+ struct parsed_region_chan_11d *parsed_region_chan);
-- if (force || iwl_is_associated(priv)) {
-- IWL_DEBUG_QOS("send QoS cmd with Qos active %d \n",
-- priv->qos_data.qos_active);
-+ if (force || iwl4965_is_associated(priv)) {
-+ IWL_DEBUG_QOS("send QoS cmd with Qos active=%d FLAGS=0x%X\n",
-+ priv->qos_data.qos_active,
-+ priv->qos_data.def_qos_parm.qos_flags);
+-u32 libertas_chan_2_freq(u8 chan, u8 band);
++u32 lbs_chan_2_freq(u8 chan, u8 band);
-- iwl_send_qos_params_command(priv,
-+ iwl4965_send_qos_params_command(priv,
- &(priv->qos_data.def_qos_parm));
- }
- }
+-void libertas_init_11d(wlan_private * priv);
++void lbs_init_11d(struct lbs_private *priv);
--#endif /* CONFIG_IWLWIFI_QOS */
-+#endif /* CONFIG_IWL4965_QOS */
- /*
- * Power management (not Tx power!) functions
- */
-@@ -2091,7 +2159,7 @@ static void iwl_activate_qos(struct iwl_priv *priv, u8 force)
+-int libertas_set_universaltable(wlan_private * priv, u8 band);
++int lbs_set_universaltable(struct lbs_private *priv, u8 band);
- /* default power management (not Tx power) table values */
- /* for tim 0-10 */
--static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
-+static struct iwl4965_power_vec_entry range_0[IWL_POWER_AC] = {
- {{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
- {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500), SLP_VEC(1, 2, 3, 4, 4)}, 0},
- {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(300), SLP_VEC(2, 4, 6, 7, 7)}, 0},
-@@ -2101,7 +2169,7 @@ static struct iwl_power_vec_entry range_0[IWL_POWER_AC] = {
- };
+-int libertas_cmd_802_11d_domain_info(wlan_private * priv,
++int lbs_cmd_802_11d_domain_info(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, u16 cmdno,
+ u16 cmdOption);
- /* for tim > 10 */
--static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
-+static struct iwl4965_power_vec_entry range_1[IWL_POWER_AC] = {
- {{NOSLP, SLP_TIMEOUT(0), SLP_TIMEOUT(0), SLP_VEC(0, 0, 0, 0, 0)}, 0},
- {{SLP, SLP_TIMEOUT(200), SLP_TIMEOUT(500),
- SLP_VEC(1, 2, 3, 4, 0xFF)}, 0},
-@@ -2114,11 +2182,11 @@ static struct iwl_power_vec_entry range_1[IWL_POWER_AC] = {
- SLP_VEC(4, 7, 10, 10, 0xFF)}, 0}
- };
+-int libertas_ret_802_11d_domain_info(wlan_private * priv,
++int lbs_ret_802_11d_domain_info(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
--int iwl_power_init_handle(struct iwl_priv *priv)
-+int iwl4965_power_init_handle(struct iwl4965_priv *priv)
- {
- int rc = 0, i;
-- struct iwl_power_mgr *pow_data;
-- int size = sizeof(struct iwl_power_vec_entry) * IWL_POWER_AC;
-+ struct iwl4965_power_mgr *pow_data;
-+ int size = sizeof(struct iwl4965_power_vec_entry) * IWL_POWER_AC;
- u16 pci_pm;
+ struct bss_descriptor;
+-int libertas_parse_dnld_countryinfo_11d(wlan_private * priv,
++int lbs_parse_dnld_countryinfo_11d(struct lbs_private *priv,
+ struct bss_descriptor * bss);
- IWL_DEBUG_POWER("Initialize power \n");
-@@ -2137,7 +2205,7 @@ int iwl_power_init_handle(struct iwl_priv *priv)
- if (rc != 0)
- return 0;
- else {
-- struct iwl_powertable_cmd *cmd;
-+ struct iwl4965_powertable_cmd *cmd;
+-int libertas_create_dnld_countryinfo_11d(wlan_private * priv);
++int lbs_create_dnld_countryinfo_11d(struct lbs_private *priv);
- IWL_DEBUG_POWER("adjust power command flags\n");
+-#endif /* _WLAN_11D_ */
++#endif
+diff --git a/drivers/net/wireless/libertas/README b/drivers/net/wireless/libertas/README
+index 0b133ce..d860fc3 100644
+--- a/drivers/net/wireless/libertas/README
++++ b/drivers/net/wireless/libertas/README
+@@ -195,45 +195,33 @@ setuserscan
-@@ -2153,15 +2221,15 @@ int iwl_power_init_handle(struct iwl_priv *priv)
- return rc;
- }
+ where [ARGS]:
--static int iwl_update_power_cmd(struct iwl_priv *priv,
-- struct iwl_powertable_cmd *cmd, u32 mode)
-+static int iwl4965_update_power_cmd(struct iwl4965_priv *priv,
-+ struct iwl4965_powertable_cmd *cmd, u32 mode)
- {
- int rc = 0, i;
- u8 skip;
- u32 max_sleep = 0;
-- struct iwl_power_vec_entry *range;
-+ struct iwl4965_power_vec_entry *range;
- u8 period = 0;
-- struct iwl_power_mgr *pow_data;
-+ struct iwl4965_power_mgr *pow_data;
+- chan=[chan#][band][mode] where band is [a,b,g] and mode is
+- blank for active or 'p' for passive
+ bssid=xx:xx:xx:xx:xx:xx specify a BSSID filter for the scan
+ ssid="[SSID]" specify a SSID filter for the scan
+ keep=[0 or 1] keep the previous scan results (1), discard (0)
+ dur=[scan time] time to scan for each channel in milliseconds
+- probes=[#] number of probe requests to send on each chan
+ type=[1,2,3] BSS type: 1 (Infra), 2(Adhoc), 3(Any)
- if (mode > IWL_POWER_INDEX_5) {
- IWL_DEBUG_POWER("Error invalid power mode \n");
-@@ -2174,7 +2242,7 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
- else
- range = &pow_data->pwr_range_1[1];
+- Any combination of the above arguments can be supplied on the command line.
+- If the chan token is absent, a full channel scan will be completed by
+- the driver. If the dur or probes tokens are absent, the driver default
+- setting will be used. The bssid and ssid fields, if blank,
+- will produce an unfiltered scan. The type field will default to 3 (Any)
+- and the keep field will default to 0 (Discard).
++ Any combination of the above arguments can be supplied on the command
++ line. If dur tokens are absent, the driver default setting will be used.
++ The bssid and ssid fields, if blank, will produce an unfiltered scan.
++ The type field will default to 3 (Any) and the keep field will default
++ to 0 (Discard).
-- memcpy(cmd, &range[mode].cmd, sizeof(struct iwl_powertable_cmd));
-+ memcpy(cmd, &range[mode].cmd, sizeof(struct iwl4965_powertable_cmd));
+ Examples:
+- 1) Perform an active scan on channels 1, 6, and 11 in the 'g' band:
+- echo "chan=1g,6g,11g" > setuserscan
++ 1) Perform a passive scan on all channels for 20 ms per channel:
++ echo "dur=20" > setuserscan
- #ifdef IWL_MAC80211_DISABLE
- if (priv->assoc_network != NULL) {
-@@ -2217,14 +2285,14 @@ static int iwl_update_power_cmd(struct iwl_priv *priv,
- return rc;
- }
+- 2) Perform a passive scan on channel 11 for 20 ms:
+- echo "chan=11gp dur=20" > setuserscan
++ 2) Perform an active scan for a specific SSID:
++ echo "ssid="TestAP"" > setuserscan
--static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
-+static int iwl4965_send_power_mode(struct iwl4965_priv *priv, u32 mode)
- {
-- u32 final_mode = mode;
-+ u32 uninitialized_var(final_mode);
- int rc;
-- struct iwl_powertable_cmd cmd;
-+ struct iwl4965_powertable_cmd cmd;
+- 3) Perform an active scan on channels 1, 6, and 11; and a passive scan on
+- channel 36 in the 'a' band:
+-
+- echo "chan=1g,6g,11g,36ap" > setuserscan
+-
+- 4) Perform an active scan on channel 6 and 36 for a specific SSID:
+- echo "chan=6g,36a ssid="TestAP"" > setuserscan
+-
+- 5) Scan all available channels (B/G, A bands) for a specific BSSID, keep
++ 3) Scan all available channels (B/G, A bands) for a specific BSSID, keep
+ the current scan table intact, update existing or append new scan data:
+ echo "bssid=00:50:43:20:12:82 keep=1" > setuserscan
- /* If on battery, set to 3,
-- * if plugged into AC power, set to CAM ("continuosly aware mode"),
-+ * if plugged into AC power, set to CAM ("continuously aware mode"),
- * else user level */
- switch (mode) {
- case IWL_POWER_BATTERY:
-@@ -2240,9 +2308,9 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
+- 6) Scan channel 6, for all infrastructure networks, sending two probe
+- requests. Keep the previous scan table intact. Update any duplicate
+- BSSID/SSID matches with the new scan data:
+- echo "chan=6g type=1 probes=2 keep=1" > setuserscan
++ 4) Scan for all infrastructure networks.
++ Keep the previous scan table intact. Update any duplicate BSSID/SSID
++ matches with the new scan data:
++ echo "type=1 keep=1" > setuserscan
- cmd.keep_alive_beacons = 0;
+ All entries in the scan table (not just the new scan data when keep=1)
+ will be displayed upon completion by use of the getscantable ioctl.
+diff --git a/drivers/net/wireless/libertas/assoc.c b/drivers/net/wireless/libertas/assoc.c
+index b61b176..c622e9b 100644
+--- a/drivers/net/wireless/libertas/assoc.c
++++ b/drivers/net/wireless/libertas/assoc.c
+@@ -9,39 +9,16 @@
+ #include "decl.h"
+ #include "hostcmd.h"
+ #include "host.h"
++#include "cmd.h"
-- iwl_update_power_cmd(priv, &cmd, final_mode);
-+ iwl4965_update_power_cmd(priv, &cmd, final_mode);
-- rc = iwl_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
-+ rc = iwl4965_send_cmd_pdu(priv, POWER_TABLE_CMD, sizeof(cmd), &cmd);
+ static const u8 bssid_any[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
+ static const u8 bssid_off[ETH_ALEN] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
- if (final_mode == IWL_POWER_MODE_CAM)
- clear_bit(STATUS_POWER_PMI, &priv->status);
-@@ -2252,7 +2320,7 @@ static int iwl_send_power_mode(struct iwl_priv *priv, u32 mode)
- return rc;
- }
+-static void print_assoc_req(const char * extra, struct assoc_request * assoc_req)
+-{
+- DECLARE_MAC_BUF(mac);
+- lbs_deb_assoc(
+- "#### Association Request: %s\n"
+- " flags: 0x%08lX\n"
+- " SSID: '%s'\n"
+- " channel: %d\n"
+- " band: %d\n"
+- " mode: %d\n"
+- " BSSID: %s\n"
+- " Encryption:%s%s%s\n"
+- " auth: %d\n",
+- extra, assoc_req->flags,
+- escape_essid(assoc_req->ssid, assoc_req->ssid_len),
+- assoc_req->channel, assoc_req->band, assoc_req->mode,
+- print_mac(mac, assoc_req->bssid),
+- assoc_req->secinfo.WPAenabled ? " WPA" : "",
+- assoc_req->secinfo.WPA2enabled ? " WPA2" : "",
+- assoc_req->secinfo.wep_enabled ? " WEP" : "",
+- assoc_req->secinfo.auth_mode);
+-}
+-
--int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
-+int iwl4965_is_network_packet(struct iwl4965_priv *priv, struct ieee80211_hdr *header)
+-static int assoc_helper_essid(wlan_private *priv,
++static int assoc_helper_essid(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- /* Filter incoming packets to determine if they are targeted toward
- * this network, discarding packets coming from ourselves */
-@@ -2282,7 +2350,7 @@ int iwl_is_network_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
-
- #define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+ struct bss_descriptor * bss;
+ int channel = -1;
+@@ -55,18 +32,17 @@ static int assoc_helper_essid(wlan_private *priv,
+ if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags))
+ channel = assoc_req->channel;
--const char *iwl_get_tx_fail_reason(u32 status)
-+static const char *iwl4965_get_tx_fail_reason(u32 status)
- {
- switch (status & TX_STATUS_MSK) {
- case TX_STATUS_SUCCESS:
-@@ -2309,11 +2377,11 @@ const char *iwl_get_tx_fail_reason(u32 status)
- }
+- lbs_deb_assoc("New SSID requested: '%s'\n",
++ lbs_deb_assoc("SSID '%s' requested\n",
+ escape_essid(assoc_req->ssid, assoc_req->ssid_len));
+ if (assoc_req->mode == IW_MODE_INFRA) {
+- libertas_send_specific_ssid_scan(priv, assoc_req->ssid,
++ lbs_send_specific_ssid_scan(priv, assoc_req->ssid,
+ assoc_req->ssid_len, 0);
- /**
-- * iwl_scan_cancel - Cancel any currently executing HW scan
-+ * iwl4965_scan_cancel - Cancel any currently executing HW scan
- *
- * NOTE: priv->mutex is not required before calling this function
- */
--static int iwl_scan_cancel(struct iwl_priv *priv)
-+static int iwl4965_scan_cancel(struct iwl4965_priv *priv)
- {
- if (!test_bit(STATUS_SCAN_HW, &priv->status)) {
- clear_bit(STATUS_SCANNING, &priv->status);
-@@ -2336,17 +2404,17 @@ static int iwl_scan_cancel(struct iwl_priv *priv)
- }
+- bss = libertas_find_ssid_in_list(adapter, assoc_req->ssid,
++ bss = lbs_find_ssid_in_list(priv, assoc_req->ssid,
+ assoc_req->ssid_len, NULL, IW_MODE_INFRA, channel);
+ if (bss != NULL) {
+- lbs_deb_assoc("SSID found in scan list, associating\n");
+ memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
+- ret = wlan_associate(priv, assoc_req);
++ ret = lbs_associate(priv, assoc_req);
+ } else {
+ lbs_deb_assoc("SSID not found; cannot associate\n");
+ }
+@@ -74,23 +50,23 @@ static int assoc_helper_essid(wlan_private *priv,
+ /* Scan for the network, do not save previous results. Stale
+ * scan data will cause us to join a non-existant adhoc network
+ */
+- libertas_send_specific_ssid_scan(priv, assoc_req->ssid,
++ lbs_send_specific_ssid_scan(priv, assoc_req->ssid,
+ assoc_req->ssid_len, 1);
- /**
-- * iwl_scan_cancel_timeout - Cancel any currently executing HW scan
-+ * iwl4965_scan_cancel_timeout - Cancel any currently executing HW scan
- * @ms: amount of time to wait (in milliseconds) for scan to abort
- *
- * NOTE: priv->mutex must be held before calling this function
- */
--static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
-+static int iwl4965_scan_cancel_timeout(struct iwl4965_priv *priv, unsigned long ms)
- {
- unsigned long now = jiffies;
- int ret;
+ /* Search for the requested SSID in the scan table */
+- bss = libertas_find_ssid_in_list(adapter, assoc_req->ssid,
++ bss = lbs_find_ssid_in_list(priv, assoc_req->ssid,
+ assoc_req->ssid_len, NULL, IW_MODE_ADHOC, channel);
+ if (bss != NULL) {
+ lbs_deb_assoc("SSID found, will join\n");
+ memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
+- libertas_join_adhoc_network(priv, assoc_req);
++ lbs_join_adhoc_network(priv, assoc_req);
+ } else {
+ /* else send START command */
+ lbs_deb_assoc("SSID not found, creating adhoc network\n");
+ memcpy(&assoc_req->bss.ssid, &assoc_req->ssid,
+ IW_ESSID_MAX_SIZE);
+ assoc_req->bss.ssid_len = assoc_req->ssid_len;
+- libertas_start_adhoc_network(priv, assoc_req);
++ lbs_start_adhoc_network(priv, assoc_req);
+ }
+ }
-- ret = iwl_scan_cancel(priv);
-+ ret = iwl4965_scan_cancel(priv);
- if (ret && ms) {
- mutex_unlock(&priv->mutex);
- while (!time_after(jiffies, now + msecs_to_jiffies(ms)) &&
-@@ -2360,7 +2428,7 @@ static int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms)
- return ret;
+@@ -99,10 +75,9 @@ static int assoc_helper_essid(wlan_private *priv,
}
--static void iwl_sequence_reset(struct iwl_priv *priv)
-+static void iwl4965_sequence_reset(struct iwl4965_priv *priv)
- {
- /* Reset ieee stats */
-@@ -2371,13 +2439,13 @@ static void iwl_sequence_reset(struct iwl_priv *priv)
- priv->last_frag_num = -1;
- priv->last_packet_time = 0;
+-static int assoc_helper_bssid(wlan_private *priv,
++static int assoc_helper_bssid(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+ struct bss_descriptor * bss;
+ DECLARE_MAC_BUF(mac);
+@@ -111,7 +86,7 @@ static int assoc_helper_bssid(wlan_private *priv,
+ print_mac(mac, assoc_req->bssid));
-- iwl_scan_cancel(priv);
-+ iwl4965_scan_cancel(priv);
- }
+ /* Search for index position in list for requested MAC */
+- bss = libertas_find_bssid_in_list(adapter, assoc_req->bssid,
++ bss = lbs_find_bssid_in_list(priv, assoc_req->bssid,
+ assoc_req->mode);
+ if (bss == NULL) {
+ lbs_deb_assoc("ASSOC: WAP: BSSID %s not found, "
+@@ -121,10 +96,10 @@ static int assoc_helper_bssid(wlan_private *priv,
- #define MAX_UCODE_BEACON_INTERVAL 4096
- #define INTEL_CONN_LISTEN_INTERVAL __constant_cpu_to_le16(0xA)
+ memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
+ if (assoc_req->mode == IW_MODE_INFRA) {
+- ret = wlan_associate(priv, assoc_req);
+- lbs_deb_assoc("ASSOC: wlan_associate(bssid) returned %d\n", ret);
++ ret = lbs_associate(priv, assoc_req);
++ lbs_deb_assoc("ASSOC: lbs_associate(bssid) returned %d\n", ret);
+ } else if (assoc_req->mode == IW_MODE_ADHOC) {
+- libertas_join_adhoc_network(priv, assoc_req);
++ lbs_join_adhoc_network(priv, assoc_req);
+ }
--static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
-+static __le16 iwl4965_adjust_beacon_interval(u16 beacon_val)
- {
- u16 new_val = 0;
- u16 beacon_factor = 0;
-@@ -2390,7 +2458,7 @@ static __le16 iwl_adjust_beacon_interval(u16 beacon_val)
- return cpu_to_le16(new_val);
+ out:
+@@ -133,11 +108,13 @@ out:
}
--static void iwl_setup_rxon_timing(struct iwl_priv *priv)
-+static void iwl4965_setup_rxon_timing(struct iwl4965_priv *priv)
+
+-static int assoc_helper_associate(wlan_private *priv,
++static int assoc_helper_associate(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- u64 interval_tm_unit;
- u64 tsf, result;
-@@ -2420,14 +2488,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
- priv->rxon_timing.beacon_interval =
- cpu_to_le16(beacon_int);
- priv->rxon_timing.beacon_interval =
-- iwl_adjust_beacon_interval(
-+ iwl4965_adjust_beacon_interval(
- le16_to_cpu(priv->rxon_timing.beacon_interval));
- }
+ int ret = 0, done = 0;
- priv->rxon_timing.atim_window = 0;
- } else {
- priv->rxon_timing.beacon_interval =
-- iwl_adjust_beacon_interval(conf->beacon_int);
-+ iwl4965_adjust_beacon_interval(conf->beacon_int);
- /* TODO: we need to get atim_window from upper stack
- * for now we set to 0 */
- priv->rxon_timing.atim_window = 0;
-@@ -2446,14 +2514,14 @@ static void iwl_setup_rxon_timing(struct iwl_priv *priv)
- le16_to_cpu(priv->rxon_timing.atim_window));
- }
++ lbs_deb_enter(LBS_DEB_ASSOC);
++
+ /* If we're given and 'any' BSSID, try associating based on SSID */
--static int iwl_scan_initiate(struct iwl_priv *priv)
-+static int iwl4965_scan_initiate(struct iwl4965_priv *priv)
- {
- if (priv->iw_mode == IEEE80211_IF_TYPE_AP) {
- IWL_ERROR("APs don't scan.\n");
- return 0;
+ if (test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
+@@ -145,42 +122,36 @@ static int assoc_helper_associate(wlan_private *priv,
+ && compare_ether_addr(bssid_off, assoc_req->bssid)) {
+ ret = assoc_helper_bssid(priv, assoc_req);
+ done = 1;
+- if (ret) {
+- lbs_deb_assoc("ASSOC: bssid: ret = %d\n", ret);
+- }
+ }
}
-- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl4965_is_ready_rf(priv)) {
- IWL_DEBUG_SCAN("Aborting scan due to not ready.\n");
- return -EIO;
+ if (!done && test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
+ ret = assoc_helper_essid(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC: bssid: ret = %d\n", ret);
+- }
}
-@@ -2480,9 +2548,9 @@ static int iwl_scan_initiate(struct iwl_priv *priv)
- return 0;
- }
--static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
-+static int iwl4965_set_rxon_hwcrypto(struct iwl4965_priv *priv, int hw_decrypt)
- {
-- struct iwl_rxon_cmd *rxon = &priv->staging_rxon;
-+ struct iwl4965_rxon_cmd *rxon = &priv->staging_rxon;
-
- if (hw_decrypt)
- rxon->filter_flags &= ~RXON_FILTER_DIS_DECRYPT_MSK;
-@@ -2492,7 +2560,7 @@ static int iwl_set_rxon_hwcrypto(struct iwl_priv *priv, int hw_decrypt)
- return 0;
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return ret;
}
--static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
-+static void iwl4965_set_flags_for_phymode(struct iwl4965_priv *priv, u8 phymode)
- {
- if (phymode == MODE_IEEE80211A) {
- priv->staging_rxon.flags &=
-@@ -2500,7 +2568,7 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
- | RXON_FLG_CCK_MSK);
- priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
- } else {
-- /* Copied from iwl_bg_post_associate() */
-+ /* Copied from iwl4965_bg_post_associate() */
- if (priv->assoc_capability & WLAN_CAPABILITY_SHORT_SLOT_TIME)
- priv->staging_rxon.flags |= RXON_FLG_SHORT_SLOT_MSK;
- else
-@@ -2516,11 +2584,11 @@ static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode)
- }
- /*
-- * initilize rxon structure with default values fromm eeprom
-+ * initialize rxon structure with default values from eeprom
- */
--static void iwl_connection_init_rx_config(struct iwl_priv *priv)
-+static void iwl4965_connection_init_rx_config(struct iwl4965_priv *priv)
+-static int assoc_helper_mode(wlan_private *priv,
++static int assoc_helper_mode(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
-
- memset(&priv->staging_rxon, 0, sizeof(priv->staging_rxon));
-
-@@ -2557,7 +2625,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
- priv->staging_rxon.flags |= RXON_FLG_SHORT_PREAMBLE_MSK;
- #endif
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
-- ch_info = iwl_get_channel_info(priv, priv->phymode,
-+ ch_info = iwl4965_get_channel_info(priv, priv->phymode,
- le16_to_cpu(priv->staging_rxon.channel));
+ lbs_deb_enter(LBS_DEB_ASSOC);
- if (!ch_info)
-@@ -2577,7 +2645,7 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
- else
- priv->phymode = MODE_IEEE80211G;
+- if (assoc_req->mode == adapter->mode)
++ if (assoc_req->mode == priv->mode)
+ goto done;
-- iwl_set_flags_for_phymode(priv, priv->phymode);
-+ iwl4965_set_flags_for_phymode(priv, priv->phymode);
+ if (assoc_req->mode == IW_MODE_INFRA) {
+- if (adapter->psstate != PS_STATE_FULL_POWER)
+- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
+- adapter->psmode = WLAN802_11POWERMODECAM;
++ if (priv->psstate != PS_STATE_FULL_POWER)
++ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
++ priv->psmode = LBS802_11POWERMODECAM;
+ }
- priv->staging_rxon.ofdm_basic_rates =
- (IWL_OFDM_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
-@@ -2593,15 +2661,12 @@ static void iwl_connection_init_rx_config(struct iwl_priv *priv)
- iwl4965_set_rxon_chain(priv);
+- adapter->mode = assoc_req->mode;
+- ret = libertas_prepare_and_send_command(priv,
++ priv->mode = assoc_req->mode;
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_SNMP_MIB,
+ 0, CMD_OPTION_WAITFORRSP,
+ OID_802_11_INFRASTRUCTURE_MODE,
+@@ -192,57 +163,76 @@ done:
}
--static int iwl_set_mode(struct iwl_priv *priv, int mode)
-+static int iwl4965_set_mode(struct iwl4965_priv *priv, int mode)
- {
-- if (!iwl_is_ready_rf(priv))
-- return -EAGAIN;
--
- if (mode == IEEE80211_IF_TYPE_IBSS) {
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
-
-- ch_info = iwl_get_channel_info(priv,
-+ ch_info = iwl4965_get_channel_info(priv,
- priv->phymode,
- le16_to_cpu(priv->staging_rxon.channel));
-
-@@ -2612,32 +2677,36 @@ static int iwl_set_mode(struct iwl_priv *priv, int mode)
- }
- }
-+ priv->iw_mode = mode;
-+
-+ iwl4965_connection_init_rx_config(priv);
-+ memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
-+
-+ iwl4965_clear_stations_table(priv);
+-static int update_channel(wlan_private * priv)
++int lbs_update_channel(struct lbs_private *priv)
+ {
+- /* the channel in f/w could be out of sync, get the current channel */
+- return libertas_prepare_and_send_command(priv, CMD_802_11_RF_CHANNEL,
+- CMD_OPT_802_11_RF_CHANNEL_GET,
+- CMD_OPTION_WAITFORRSP, 0, NULL);
++ int ret;
+
-+ /* dont commit rxon if rf-kill is on*/
-+ if (!iwl4965_is_ready_rf(priv))
-+ return -EAGAIN;
++ /* the channel in f/w could be out of sync; get the current channel */
++ lbs_deb_enter(LBS_DEB_ASSOC);
+
- cancel_delayed_work(&priv->scan_check);
-- if (iwl_scan_cancel_timeout(priv, 100)) {
-+ if (iwl4965_scan_cancel_timeout(priv, 100)) {
- IWL_WARNING("Aborted scan still in progress after 100ms\n");
- IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
- return -EAGAIN;
- }
-
-- priv->iw_mode = mode;
--
-- iwl_connection_init_rx_config(priv);
-- memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
--
-- iwl_clear_stations_table(priv);
--
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
-
- return 0;
++ ret = lbs_get_channel(priv);
++ if (ret > 0) {
++ priv->curbssparams.channel = ret;
++ ret = 0;
++ }
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
++ return ret;
}
--static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
-+static void iwl4965_build_tx_cmd_hwcrypto(struct iwl4965_priv *priv,
- struct ieee80211_tx_control *ctl,
-- struct iwl_cmd *cmd,
-+ struct iwl4965_cmd *cmd,
- struct sk_buff *skb_frag,
- int last_frag)
+-void libertas_sync_channel(struct work_struct *work)
++void lbs_sync_channel(struct work_struct *work)
{
-- struct iwl_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
-+ struct iwl4965_hw_key *keyinfo = &priv->stations[ctl->key_idx].keyinfo;
-
- switch (keyinfo->alg) {
- case ALG_CCMP:
-@@ -2680,8 +2749,8 @@ static void iwl_build_tx_cmd_hwcrypto(struct iwl_priv *priv,
- /*
- * handle build REPLY_TX command notification.
- */
--static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
-- struct iwl_cmd *cmd,
-+static void iwl4965_build_tx_cmd_basic(struct iwl4965_priv *priv,
-+ struct iwl4965_cmd *cmd,
- struct ieee80211_tx_control *ctrl,
- struct ieee80211_hdr *hdr,
- int is_unicast, u8 std_id)
-@@ -2703,6 +2772,10 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
- tx_flags |= TX_CMD_FLG_SEQ_CTL_MSK;
- }
-
-+ if (ieee80211_is_back_request(fc))
-+ tx_flags |= TX_CMD_FLG_ACK_MSK | TX_CMD_FLG_IMM_BA_RSP_MASK;
-+
-+
- cmd->cmd.tx.sta_id = std_id;
- if (ieee80211_get_morefrag(hdr))
- tx_flags |= TX_CMD_FLG_MORE_FRAG_MSK;
-@@ -2729,11 +2802,9 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
- if ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) {
- if ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_ASSOC_REQ ||
- (fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_REASSOC_REQ)
-- cmd->cmd.tx.timeout.pm_frame_timeout =
-- cpu_to_le16(3);
-+ cmd->cmd.tx.timeout.pm_frame_timeout = cpu_to_le16(3);
- else
-- cmd->cmd.tx.timeout.pm_frame_timeout =
-- cpu_to_le16(2);
-+ cmd->cmd.tx.timeout.pm_frame_timeout = cpu_to_le16(2);
- } else
- cmd->cmd.tx.timeout.pm_frame_timeout = 0;
+- wlan_private *priv = container_of(work, wlan_private, sync_channel);
++ struct lbs_private *priv = container_of(work, struct lbs_private,
++ sync_channel);
-@@ -2742,40 +2813,47 @@ static void iwl_build_tx_cmd_basic(struct iwl_priv *priv,
- cmd->cmd.tx.next_frame_len = 0;
+- if (update_channel(priv) != 0)
++ lbs_deb_enter(LBS_DEB_ASSOC);
++ if (lbs_update_channel(priv))
+ lbs_pr_info("Channel synchronization failed.");
++ lbs_deb_leave(LBS_DEB_ASSOC);
}
--static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
-+/**
-+ * iwl4965_get_sta_id - Find station's index within station table
-+ *
-+ * If new IBSS station, create new entry in station table
-+ */
-+static int iwl4965_get_sta_id(struct iwl4965_priv *priv,
-+ struct ieee80211_hdr *hdr)
+-static int assoc_helper_channel(wlan_private *priv,
++static int assoc_helper_channel(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- int sta_id;
- u16 fc = le16_to_cpu(hdr->frame_control);
- DECLARE_MAC_BUF(mac);
-
-- /* If this frame is broadcast or not data then use the broadcast
-- * station id */
-+ /* If this frame is broadcast or management, use broadcast station id */
- if (((fc & IEEE80211_FCTL_FTYPE) != IEEE80211_FTYPE_DATA) ||
- is_multicast_ether_addr(hdr->addr1))
- return priv->hw_setting.bcast_sta_id;
-
- switch (priv->iw_mode) {
-
-- /* If this frame is part of a BSS network (we're a station), then
-- * we use the AP's station id */
-+ /* If we are a client station in a BSS network, use the special
-+ * AP station entry (that's the only station we communicate with) */
- case IEEE80211_IF_TYPE_STA:
- return IWL_AP_ID;
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
- /* If we are an AP, then find the station, or use BCAST */
- case IEEE80211_IF_TYPE_AP:
-- sta_id = iwl_hw_find_station(priv, hdr->addr1);
-+ sta_id = iwl4965_hw_find_station(priv, hdr->addr1);
- if (sta_id != IWL_INVALID_STATION)
- return sta_id;
- return priv->hw_setting.bcast_sta_id;
+ lbs_deb_enter(LBS_DEB_ASSOC);
-- /* If this frame is part of a IBSS network, then we use the
-- * target specific station id */
-+ /* If this frame is going out to an IBSS network, find the station,
-+ * or create a new station table entry */
- case IEEE80211_IF_TYPE_IBSS:
-- sta_id = iwl_hw_find_station(priv, hdr->addr1);
-+ sta_id = iwl4965_hw_find_station(priv, hdr->addr1);
- if (sta_id != IWL_INVALID_STATION)
- return sta_id;
+- ret = update_channel(priv);
+- if (ret < 0) {
+- lbs_deb_assoc("ASSOC: channel: error getting channel.");
++ ret = lbs_update_channel(priv);
++ if (ret) {
++ lbs_deb_assoc("ASSOC: channel: error getting channel.\n");
++ goto done;
+ }
-- sta_id = iwl_add_station(priv, hdr->addr1, 0, CMD_ASYNC);
-+ /* Create new station table entry */
-+ sta_id = iwl4965_add_station_flags(priv, hdr->addr1,
-+ 0, CMD_ASYNC, NULL);
+- if (assoc_req->channel == adapter->curbssparams.channel)
++ if (assoc_req->channel == priv->curbssparams.channel)
+ goto done;
- if (sta_id != IWL_INVALID_STATION)
- return sta_id;
-@@ -2783,11 +2861,11 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
- IWL_DEBUG_DROP("Station %s not in station map. "
- "Defaulting to broadcast...\n",
- print_mac(mac, hdr->addr1));
-- iwl_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
-+ iwl4965_print_hex_dump(IWL_DL_DROP, (u8 *) hdr, sizeof(*hdr));
- return priv->hw_setting.bcast_sta_id;
++ if (priv->mesh_dev) {
++ /* Change mesh channel first; 21.p21 firmware won't let
++ you change channel otherwise (even though it'll return
++ an error to this */
++ lbs_mesh_config(priv, 0, assoc_req->channel);
++ }
++
+ lbs_deb_assoc("ASSOC: channel: %d -> %d\n",
+- adapter->curbssparams.channel, assoc_req->channel);
++ priv->curbssparams.channel, assoc_req->channel);
- default:
-- IWL_WARNING("Unkown mode of operation: %d", priv->iw_mode);
-+ IWL_WARNING("Unknown mode of operation: %d", priv->iw_mode);
- return priv->hw_setting.bcast_sta_id;
- }
- }
-@@ -2795,18 +2873,19 @@ static int iwl_get_sta_id(struct iwl_priv *priv, struct ieee80211_hdr *hdr)
- /*
- * start REPLY_TX command process
- */
--static int iwl_tx_skb(struct iwl_priv *priv,
-+static int iwl4965_tx_skb(struct iwl4965_priv *priv,
- struct sk_buff *skb, struct ieee80211_tx_control *ctl)
- {
- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-- struct iwl_tfd_frame *tfd;
-+ struct iwl4965_tfd_frame *tfd;
- u32 *control_flags;
- int txq_id = ctl->queue;
-- struct iwl_tx_queue *txq = NULL;
-- struct iwl_queue *q = NULL;
-+ struct iwl4965_tx_queue *txq = NULL;
-+ struct iwl4965_queue *q = NULL;
- dma_addr_t phys_addr;
- dma_addr_t txcmd_phys;
-- struct iwl_cmd *out_cmd = NULL;
-+ dma_addr_t scratch_phys;
-+ struct iwl4965_cmd *out_cmd = NULL;
- u16 len, idx, len_org;
- u8 id, hdr_len, unicast;
- u8 sta_id;
-@@ -2818,13 +2897,13 @@ static int iwl_tx_skb(struct iwl_priv *priv,
- int rc;
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_RF_CHANNEL,
+- CMD_OPT_802_11_RF_CHANNEL_SET,
+- CMD_OPTION_WAITFORRSP, 0, &assoc_req->channel);
+- if (ret < 0) {
+- lbs_deb_assoc("ASSOC: channel: error setting channel.");
+- }
++ ret = lbs_set_channel(priv, assoc_req->channel);
++ if (ret < 0)
++ lbs_deb_assoc("ASSOC: channel: error setting channel.\n");
- spin_lock_irqsave(&priv->lock, flags);
-- if (iwl_is_rfkill(priv)) {
-+ if (iwl4965_is_rfkill(priv)) {
- IWL_DEBUG_DROP("Dropping - RF KILL\n");
- goto drop_unlock;
+- ret = update_channel(priv);
+- if (ret < 0) {
+- lbs_deb_assoc("ASSOC: channel: error getting channel.");
++ /* FIXME: shouldn't need to grab the channel _again_ after setting
++ * it since the firmware is supposed to return the new channel, but
++ * whatever... */
++ ret = lbs_update_channel(priv);
++ if (ret) {
++ lbs_deb_assoc("ASSOC: channel: error getting channel.\n");
++ goto done;
}
-- if (!priv->interface_id) {
-- IWL_DEBUG_DROP("Dropping - !priv->interface_id\n");
-+ if (!priv->vif) {
-+ IWL_DEBUG_DROP("Dropping - !priv->vif\n");
- goto drop_unlock;
+- if (assoc_req->channel != adapter->curbssparams.channel) {
+- lbs_deb_assoc("ASSOC: channel: failed to update channel to %d",
++ if (assoc_req->channel != priv->curbssparams.channel) {
++ lbs_deb_assoc("ASSOC: channel: failed to update channel to %d\n",
+ assoc_req->channel);
+- goto done;
++ goto restore_mesh;
}
-@@ -2838,7 +2917,7 @@ static int iwl_tx_skb(struct iwl_priv *priv,
-
- fc = le16_to_cpu(hdr->frame_control);
-
--#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL4965_DEBUG
- if (ieee80211_is_auth(fc))
- IWL_DEBUG_TX("Sending AUTH frame\n");
- else if (ieee80211_is_assoc_request(fc))
-@@ -2847,16 +2926,19 @@ static int iwl_tx_skb(struct iwl_priv *priv,
- IWL_DEBUG_TX("Sending REASSOC frame\n");
- #endif
-
-- if (!iwl_is_associated(priv) &&
-+ /* drop all data frame if we are not associated */
-+ if (!iwl4965_is_associated(priv) && !priv->assoc_id &&
- ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_DATA)) {
-- IWL_DEBUG_DROP("Dropping - !iwl_is_associated\n");
-+ IWL_DEBUG_DROP("Dropping - !iwl4965_is_associated\n");
- goto drop_unlock;
+ if ( assoc_req->secinfo.wep_enabled
+@@ -255,83 +245,75 @@ static int assoc_helper_channel(wlan_private *priv,
}
- spin_unlock_irqrestore(&priv->lock, flags);
+ /* Must restart/rejoin adhoc networks after channel change */
+- set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
++ set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
- hdr_len = ieee80211_get_hdrlen(fc);
-- sta_id = iwl_get_sta_id(priv, hdr);
+-done:
++ restore_mesh:
++ if (priv->mesh_dev)
++ lbs_mesh_config(priv, 1, priv->curbssparams.channel);
+
-+ /* Find (or create) index into station table for destination station */
-+ sta_id = iwl4965_get_sta_id(priv, hdr);
- if (sta_id == IWL_INVALID_STATION) {
- DECLARE_MAC_BUF(mac);
++ done:
+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return ret;
+ }
-@@ -2876,40 +2958,62 @@ static int iwl_tx_skb(struct iwl_priv *priv,
- (hdr->seq_ctrl &
- __constant_cpu_to_le16(IEEE80211_SCTL_FRAG));
- seq_number += 0x10;
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- /* aggregation is on for this <sta,tid> */
- if (ctl->flags & IEEE80211_TXCTL_HT_MPDU_AGG)
- txq_id = priv->stations[sta_id].tid[tid].agg.txq_id;
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
- }
-+
-+ /* Descriptor for chosen Tx queue */
- txq = &priv->txq[txq_id];
- q = &txq->q;
- spin_lock_irqsave(&priv->lock, flags);
+-static int assoc_helper_wep_keys(wlan_private *priv,
+- struct assoc_request * assoc_req)
++static int assoc_helper_wep_keys(struct lbs_private *priv,
++ struct assoc_request *assoc_req)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ int i;
+ int ret = 0;
-- tfd = &txq->bd[q->first_empty];
-+ /* Set up first empty TFD within this queue's circular TFD buffer */
-+ tfd = &txq->bd[q->write_ptr];
- memset(tfd, 0, sizeof(*tfd));
- control_flags = (u32 *) tfd;
-- idx = get_cmd_index(q, q->first_empty, 0);
-+ idx = get_cmd_index(q, q->write_ptr, 0);
+ lbs_deb_enter(LBS_DEB_ASSOC);
-- memset(&(txq->txb[q->first_empty]), 0, sizeof(struct iwl_tx_info));
-- txq->txb[q->first_empty].skb[0] = skb;
-- memcpy(&(txq->txb[q->first_empty].status.control),
-+ /* Set up driver data for this TFD */
-+ memset(&(txq->txb[q->write_ptr]), 0, sizeof(struct iwl4965_tx_info));
-+ txq->txb[q->write_ptr].skb[0] = skb;
-+ memcpy(&(txq->txb[q->write_ptr].status.control),
- ctl, sizeof(struct ieee80211_tx_control));
-+
-+ /* Set up first empty entry in queue's array of Tx/cmd buffers */
- out_cmd = &txq->cmd[idx];
- memset(&out_cmd->hdr, 0, sizeof(out_cmd->hdr));
- memset(&out_cmd->cmd.tx, 0, sizeof(out_cmd->cmd.tx));
-+
-+ /*
-+ * Set up the Tx-command (not MAC!) header.
-+ * Store the chosen Tx queue and TFD index within the sequence field;
-+ * after Tx, uCode's Tx response will return this value so driver can
-+ * locate the frame within the tx queue and do post-tx processing.
-+ */
- out_cmd->hdr.cmd = REPLY_TX;
- out_cmd->hdr.sequence = cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
-- INDEX_TO_SEQ(q->first_empty)));
-- /* copy frags header */
-+ INDEX_TO_SEQ(q->write_ptr)));
-+
-+ /* Copy MAC header from skb into command buffer */
- memcpy(out_cmd->cmd.tx.hdr, hdr, hdr_len);
+ /* Set or remove WEP keys */
+- if ( assoc_req->wep_keys[0].len
+- || assoc_req->wep_keys[1].len
+- || assoc_req->wep_keys[2].len
+- || assoc_req->wep_keys[3].len) {
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_802_11_SET_WEP,
+- CMD_ACT_ADD,
+- CMD_OPTION_WAITFORRSP,
+- 0, assoc_req);
+- } else {
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_802_11_SET_WEP,
+- CMD_ACT_REMOVE,
+- CMD_OPTION_WAITFORRSP,
+- 0, NULL);
+- }
++ if (assoc_req->wep_keys[0].len || assoc_req->wep_keys[1].len ||
++ assoc_req->wep_keys[2].len || assoc_req->wep_keys[3].len)
++ ret = lbs_cmd_802_11_set_wep(priv, CMD_ACT_ADD, assoc_req);
++ else
++ ret = lbs_cmd_802_11_set_wep(priv, CMD_ACT_REMOVE, assoc_req);
-- /* hdr = (struct ieee80211_hdr *)out_cmd->cmd.tx.hdr; */
-+ /*
-+ * Use the first empty entry in this queue's command buffer array
-+ * to contain the Tx command and MAC header concatenated together
-+ * (payload data will be in another buffer).
-+ * Size of this varies, due to varying MAC header length.
-+ * If end is not dword aligned, we'll have 2 extra bytes at the end
-+ * of the MAC header (device reads on dword boundaries).
-+ * We'll tell device about this padding later.
-+ */
- len = priv->hw_setting.tx_cmd_len +
-- sizeof(struct iwl_cmd_header) + hdr_len;
-+ sizeof(struct iwl4965_cmd_header) + hdr_len;
+ if (ret)
+ goto out;
- len_org = len;
- len = (len + 3) & ~3;
-@@ -2919,36 +3023,53 @@ static int iwl_tx_skb(struct iwl_priv *priv,
+ /* enable/disable the MAC's WEP packet filter */
+ if (assoc_req->secinfo.wep_enabled)
+- adapter->currentpacketfilter |= CMD_ACT_MAC_WEP_ENABLE;
++ priv->currentpacketfilter |= CMD_ACT_MAC_WEP_ENABLE;
else
- len_org = 0;
+- adapter->currentpacketfilter &= ~CMD_ACT_MAC_WEP_ENABLE;
+- ret = libertas_set_mac_packet_filter(priv);
++ priv->currentpacketfilter &= ~CMD_ACT_MAC_WEP_ENABLE;
++
++ ret = lbs_set_mac_packet_filter(priv);
+ if (ret)
+ goto out;
-- txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl_cmd) * idx +
-- offsetof(struct iwl_cmd, hdr);
-+ /* Physical address of this Tx command's header (not MAC header!),
-+ * within command buffer array. */
-+ txcmd_phys = txq->dma_addr_cmd + sizeof(struct iwl4965_cmd) * idx +
-+ offsetof(struct iwl4965_cmd, hdr);
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
-- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
-+ /* Add buffer containing Tx command and MAC(!) header to TFD's
-+ * first entry */
-+ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, txcmd_phys, len);
+- /* Copy WEP keys into adapter wep key fields */
++ /* Copy WEP keys into priv wep key fields */
+ for (i = 0; i < 4; i++) {
+- memcpy(&adapter->wep_keys[i], &assoc_req->wep_keys[i],
+- sizeof(struct enc_key));
++ memcpy(&priv->wep_keys[i], &assoc_req->wep_keys[i],
++ sizeof(struct enc_key));
+ }
+- adapter->wep_tx_keyidx = assoc_req->wep_tx_keyidx;
++ priv->wep_tx_keyidx = assoc_req->wep_tx_keyidx;
- if (!(ctl->flags & IEEE80211_TXCTL_DO_NOT_ENCRYPT))
-- iwl_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
-+ iwl4965_build_tx_cmd_hwcrypto(priv, ctl, out_cmd, skb, 0);
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
-- /* 802.11 null functions have no payload... */
-+ /* Set up TFD's 2nd entry to point directly to remainder of skb,
-+ * if any (802.11 null frames have no payload). */
- len = skb->len - hdr_len;
- if (len) {
- phys_addr = pci_map_single(priv->pci_dev, skb->data + hdr_len,
- len, PCI_DMA_TODEVICE);
-- iwl_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
-+ iwl4965_hw_txq_attach_buf_to_tfd(priv, tfd, phys_addr, len);
- }
+ out:
+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return ret;
+ }
-+ /* Tell 4965 about any 2-byte padding after MAC header */
- if (len_org)
- out_cmd->cmd.tx.tx_flags |= TX_CMD_FLG_MH_PAD_MSK;
+-static int assoc_helper_secinfo(wlan_private *priv,
++static int assoc_helper_secinfo(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+- u32 do_wpa;
+- u32 rsn = 0;
++ uint16_t do_wpa;
++ uint16_t rsn = 0;
-+ /* Total # bytes to be transmitted */
- len = (u16)skb->len;
- out_cmd->cmd.tx.len = cpu_to_le16(len);
+ lbs_deb_enter(LBS_DEB_ASSOC);
- /* TODO need this for burst mode later on */
-- iwl_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
-+ iwl4965_build_tx_cmd_basic(priv, out_cmd, ctl, hdr, unicast, sta_id);
+- memcpy(&adapter->secinfo, &assoc_req->secinfo,
+- sizeof(struct wlan_802_11_security));
++ memcpy(&priv->secinfo, &assoc_req->secinfo,
++ sizeof(struct lbs_802_11_security));
- /* set is_hcca to 0; it probably will never be implemented */
-- iwl_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
-+ iwl4965_hw_build_tx_cmd_rate(priv, out_cmd, ctl, hdr, sta_id, 0);
-+
-+ scratch_phys = txcmd_phys + sizeof(struct iwl4965_cmd_header) +
-+ offsetof(struct iwl4965_tx_cmd, scratch);
-+ out_cmd->cmd.tx.dram_lsb_ptr = cpu_to_le32(scratch_phys);
-+ out_cmd->cmd.tx.dram_msb_ptr = iwl_get_dma_hi_address(scratch_phys);
-+
-+#ifdef CONFIG_IWL4965_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+ /* TODO: move this functionality to rate scaling */
-+ iwl4965_tl_get_stats(priv, hdr);
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /*CONFIG_IWL4965_HT */
+- ret = libertas_set_mac_packet_filter(priv);
++ ret = lbs_set_mac_packet_filter(priv);
+ if (ret)
+ goto out;
-- iwl4965_tx_cmd(priv, out_cmd, sta_id, txcmd_phys,
-- hdr, hdr_len, ctl, NULL);
+@@ -341,28 +323,19 @@ static int assoc_helper_secinfo(wlan_private *priv,
+ */
- if (!ieee80211_get_morefrag(hdr)) {
- txq->need_update = 1;
-@@ -2961,27 +3082,29 @@ static int iwl_tx_skb(struct iwl_priv *priv,
- txq->need_update = 0;
+ /* Get RSN enabled/disabled */
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_802_11_ENABLE_RSN,
+- CMD_ACT_GET,
+- CMD_OPTION_WAITFORRSP,
+- 0, &rsn);
++ ret = lbs_cmd_802_11_enable_rsn(priv, CMD_ACT_GET, &rsn);
+ if (ret) {
+- lbs_deb_assoc("Failed to get RSN status: %d", ret);
++ lbs_deb_assoc("Failed to get RSN status: %d\n", ret);
+ goto out;
}
-- iwl_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
-+ iwl4965_print_hex_dump(IWL_DL_TX, out_cmd->cmd.payload,
- sizeof(out_cmd->cmd.tx));
+ /* Don't re-enable RSN if it's already enabled */
+- do_wpa = (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled);
++ do_wpa = assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled;
+ if (do_wpa == rsn)
+ goto out;
-- iwl_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
-+ iwl4965_print_hex_dump(IWL_DL_TX, (u8 *)out_cmd->cmd.tx.hdr,
- ieee80211_get_hdrlen(fc));
+ /* Set RSN enabled/disabled */
+- rsn = do_wpa;
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_802_11_ENABLE_RSN,
+- CMD_ACT_SET,
+- CMD_OPTION_WAITFORRSP,
+- 0, &rsn);
++ ret = lbs_cmd_802_11_enable_rsn(priv, CMD_ACT_SET, &do_wpa);
-+ /* Set up entry for this TFD in Tx byte-count array */
- iwl4965_tx_queue_update_wr_ptr(priv, txq, len);
+ out:
+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+@@ -370,7 +343,7 @@ out:
+ }
-- q->first_empty = iwl_queue_inc_wrap(q->first_empty, q->n_bd);
-- rc = iwl_tx_queue_update_write_ptr(priv, txq);
-+ /* Tell device the write index *just past* this latest filled TFD */
-+ q->write_ptr = iwl4965_queue_inc_wrap(q->write_ptr, q->n_bd);
-+ rc = iwl4965_tx_queue_update_write_ptr(priv, txq);
- spin_unlock_irqrestore(&priv->lock, flags);
- if (rc)
- return rc;
+-static int assoc_helper_wpa_keys(wlan_private *priv,
++static int assoc_helper_wpa_keys(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
+ {
+ int ret = 0;
+@@ -385,7 +358,7 @@ static int assoc_helper_wpa_keys(wlan_private *priv,
-- if ((iwl_queue_space(q) < q->high_mark)
-+ if ((iwl4965_queue_space(q) < q->high_mark)
- && priv->mac80211_registered) {
- if (wait_write_ptr) {
- spin_lock_irqsave(&priv->lock, flags);
- txq->need_update = 1;
-- iwl_tx_queue_update_write_ptr(priv, txq);
-+ iwl4965_tx_queue_update_write_ptr(priv, txq);
- spin_unlock_irqrestore(&priv->lock, flags);
- }
+ if (test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
+ clear_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags);
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_KEY_MATERIAL,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP,
+@@ -399,7 +372,7 @@ static int assoc_helper_wpa_keys(wlan_private *priv,
+ if (test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)) {
+ clear_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags);
-@@ -2996,13 +3119,13 @@ drop:
- return -1;
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_KEY_MATERIAL,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP,
+@@ -413,20 +386,19 @@ out:
}
--static void iwl_set_rate(struct iwl_priv *priv)
-+static void iwl4965_set_rate(struct iwl4965_priv *priv)
+
+-static int assoc_helper_wpa_ie(wlan_private *priv,
++static int assoc_helper_wpa_ie(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- const struct ieee80211_hw_mode *hw = NULL;
- struct ieee80211_rate *rate;
- int i;
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
-- hw = iwl_get_hw_mode(priv, priv->phymode);
-+ hw = iwl4965_get_hw_mode(priv, priv->phymode);
- if (!hw) {
- IWL_ERROR("Failed to set rate: unable to get hw mode\n");
- return;
-@@ -3020,7 +3143,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
- if ((rate->val < IWL_RATE_COUNT) &&
- (rate->flags & IEEE80211_RATE_SUPPORTED)) {
- IWL_DEBUG_RATE("Adding rate index %d (plcp %d)%s\n",
-- rate->val, iwl_rates[rate->val].plcp,
-+ rate->val, iwl4965_rates[rate->val].plcp,
- (rate->flags & IEEE80211_RATE_BASIC) ?
- "*" : "");
- priv->active_rate |= (1 << rate->val);
-@@ -3028,7 +3151,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
- priv->active_rate_basic |= (1 << rate->val);
- } else
- IWL_DEBUG_RATE("Not adding rate %d (plcp %d)\n",
-- rate->val, iwl_rates[rate->val].plcp);
-+ rate->val, iwl4965_rates[rate->val].plcp);
+ lbs_deb_enter(LBS_DEB_ASSOC);
+
+ if (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled) {
+- memcpy(&adapter->wpa_ie, &assoc_req->wpa_ie, assoc_req->wpa_ie_len);
+- adapter->wpa_ie_len = assoc_req->wpa_ie_len;
++ memcpy(&priv->wpa_ie, &assoc_req->wpa_ie, assoc_req->wpa_ie_len);
++ priv->wpa_ie_len = assoc_req->wpa_ie_len;
+ } else {
+- memset(&adapter->wpa_ie, 0, MAX_WPA_IE_LEN);
+- adapter->wpa_ie_len = 0;
++ memset(&priv->wpa_ie, 0, MAX_WPA_IE_LEN);
++ priv->wpa_ie_len = 0;
}
- IWL_DEBUG_RATE("Set active_rate = %0x, active_rate_basic = %0x\n",
-@@ -3057,7 +3180,7 @@ static void iwl_set_rate(struct iwl_priv *priv)
- (IWL_OFDM_BASIC_RATES_MASK >> IWL_FIRST_OFDM_RATE) & 0xFF;
+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+@@ -434,55 +406,68 @@ static int assoc_helper_wpa_ie(wlan_private *priv,
}
--static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
-+static void iwl4965_radio_kill_sw(struct iwl4965_priv *priv, int disable_radio)
+
+-static int should_deauth_infrastructure(wlan_adapter *adapter,
++static int should_deauth_infrastructure(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- unsigned long flags;
+- if (adapter->connect_status != LIBERTAS_CONNECTED)
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_ASSOC);
++
++ if (priv->connect_status != LBS_CONNECTED)
+ return 0;
-@@ -3068,21 +3191,21 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
- disable_radio ? "OFF" : "ON");
+ if (test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
+- lbs_deb_assoc("Deauthenticating due to new SSID in "
+- " configuration request.\n");
+- return 1;
++ lbs_deb_assoc("Deauthenticating due to new SSID\n");
++ ret = 1;
++ goto out;
+ }
- if (disable_radio) {
-- iwl_scan_cancel(priv);
-+ iwl4965_scan_cancel(priv);
- /* FIXME: This is a workaround for AP */
- if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
- CSR_UCODE_SW_BIT_RFKILL);
- spin_unlock_irqrestore(&priv->lock, flags);
-- iwl_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
-+ iwl4965_send_card_state(priv, CARD_STATE_CMD_DISABLE, 0);
- set_bit(STATUS_RF_KILL_SW, &priv->status);
+ if (test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
+- if (adapter->secinfo.auth_mode != assoc_req->secinfo.auth_mode) {
+- lbs_deb_assoc("Deauthenticating due to updated security "
+- "info in configuration request.\n");
+- return 1;
++ if (priv->secinfo.auth_mode != assoc_req->secinfo.auth_mode) {
++ lbs_deb_assoc("Deauthenticating due to new security\n");
++ ret = 1;
++ goto out;
}
- return;
}
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+ if (test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
+- lbs_deb_assoc("Deauthenticating due to new BSSID in "
+- " configuration request.\n");
+- return 1;
++ lbs_deb_assoc("Deauthenticating due to new BSSID\n");
++ ret = 1;
++ goto out;
+ }
- clear_bit(STATUS_RF_KILL_SW, &priv->status);
- spin_unlock_irqrestore(&priv->lock, flags);
-@@ -3091,9 +3214,9 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
- msleep(10);
+ if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
+- lbs_deb_assoc("Deauthenticating due to channel switch.\n");
+- return 1;
++ lbs_deb_assoc("Deauthenticating due to channel switch\n");
++ ret = 1;
++ goto out;
+ }
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_read32(priv, CSR_UCODE_DRV_GP1);
-- if (!iwl_grab_restricted_access(priv))
-- iwl_release_restricted_access(priv);
-+ iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
-+ if (!iwl4965_grab_nic_access(priv))
-+ iwl4965_release_nic_access(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
+ /* FIXME: deal with 'auto' mode somehow */
+ if (test_bit(ASSOC_FLAG_MODE, &assoc_req->flags)) {
+- if (assoc_req->mode != IW_MODE_INFRA)
+- return 1;
++ if (assoc_req->mode != IW_MODE_INFRA) {
++ lbs_deb_assoc("Deauthenticating due to leaving "
++ "infra mode\n");
++ ret = 1;
++ goto out;
++ }
+ }
- if (test_bit(STATUS_RF_KILL_HW, &priv->status)) {
-@@ -3106,7 +3229,7 @@ static void iwl_radio_kill_sw(struct iwl_priv *priv, int disable_radio)
- return;
++out:
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return 0;
}
--void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
-+void iwl4965_set_decrypted_flag(struct iwl4965_priv *priv, struct sk_buff *skb,
- u32 decrypt_res, struct ieee80211_rx_status *stats)
+
+-static int should_stop_adhoc(wlan_adapter *adapter,
++static int should_stop_adhoc(struct lbs_private *priv,
+ struct assoc_request * assoc_req)
{
- u16 fc =
-@@ -3138,97 +3261,10 @@ void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
+- if (adapter->connect_status != LIBERTAS_CONNECTED)
++ lbs_deb_enter(LBS_DEB_ASSOC);
++
++ if (priv->connect_status != LBS_CONNECTED)
+ return 0;
+
+- if (libertas_ssid_cmp(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len,
++ if (lbs_ssid_cmp(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len,
+ assoc_req->ssid, assoc_req->ssid_len) != 0)
+ return 1;
+
+@@ -493,18 +478,19 @@ static int should_stop_adhoc(wlan_adapter *adapter,
}
- }
--void iwl_handle_data_packet_monitor(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb,
-- void *data, short len,
-- struct ieee80211_rx_status *stats,
-- u16 phy_flags)
--{
-- struct iwl_rt_rx_hdr *iwl_rt;
--
-- /* First cache any information we need before we overwrite
-- * the information provided in the skb from the hardware */
-- s8 signal = stats->ssi;
-- s8 noise = 0;
-- int rate = stats->rate;
-- u64 tsf = stats->mactime;
-- __le16 phy_flags_hw = cpu_to_le16(phy_flags);
--
-- /* We received data from the HW, so stop the watchdog */
-- if (len > IWL_RX_BUF_SIZE - sizeof(*iwl_rt)) {
-- IWL_DEBUG_DROP("Dropping too large packet in monitor\n");
-- return;
-- }
--
-- /* copy the frame data to write after where the radiotap header goes */
-- iwl_rt = (void *)rxb->skb->data;
-- memmove(iwl_rt->payload, data, len);
--
-- iwl_rt->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
-- iwl_rt->rt_hdr.it_pad = 0; /* always good to zero */
--
-- /* total header + data */
-- iwl_rt->rt_hdr.it_len = cpu_to_le16(sizeof(*iwl_rt));
--
-- /* Set the size of the skb to the size of the frame */
-- skb_put(rxb->skb, sizeof(*iwl_rt) + len);
--
-- /* Big bitfield of all the fields we provide in radiotap */
-- iwl_rt->rt_hdr.it_present =
-- cpu_to_le32((1 << IEEE80211_RADIOTAP_TSFT) |
-- (1 << IEEE80211_RADIOTAP_FLAGS) |
-- (1 << IEEE80211_RADIOTAP_RATE) |
-- (1 << IEEE80211_RADIOTAP_CHANNEL) |
-- (1 << IEEE80211_RADIOTAP_DBM_ANTSIGNAL) |
-- (1 << IEEE80211_RADIOTAP_DBM_ANTNOISE) |
-- (1 << IEEE80211_RADIOTAP_ANTENNA));
--
-- /* Zero the flags, we'll add to them as we go */
-- iwl_rt->rt_flags = 0;
--
-- iwl_rt->rt_tsf = cpu_to_le64(tsf);
--
-- /* Convert to dBm */
-- iwl_rt->rt_dbmsignal = signal;
-- iwl_rt->rt_dbmnoise = noise;
--
-- /* Convert the channel frequency and set the flags */
-- iwl_rt->rt_channelMHz = cpu_to_le16(stats->freq);
-- if (!(phy_flags_hw & RX_RES_PHY_FLAGS_BAND_24_MSK))
-- iwl_rt->rt_chbitmask =
-- cpu_to_le16((IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ));
-- else if (phy_flags_hw & RX_RES_PHY_FLAGS_MOD_CCK_MSK)
-- iwl_rt->rt_chbitmask =
-- cpu_to_le16((IEEE80211_CHAN_CCK | IEEE80211_CHAN_2GHZ));
-- else /* 802.11g */
-- iwl_rt->rt_chbitmask =
-- cpu_to_le16((IEEE80211_CHAN_OFDM | IEEE80211_CHAN_2GHZ));
--
-- rate = iwl_rate_index_from_plcp(rate);
-- if (rate == -1)
-- iwl_rt->rt_rate = 0;
-- else
-- iwl_rt->rt_rate = iwl_rates[rate].ieee;
--
-- /* antenna number */
-- iwl_rt->rt_antenna =
-- le16_to_cpu(phy_flags_hw & RX_RES_PHY_FLAGS_ANTENNA_MSK) >> 4;
--
-- /* set the preamble flag if we have it */
-- if (phy_flags_hw & RX_RES_PHY_FLAGS_SHORT_PREAMBLE_MSK)
-- iwl_rt->rt_flags |= IEEE80211_RADIOTAP_F_SHORTPRE;
--
-- IWL_DEBUG_RX("Rx packet of %d bytes.\n", rxb->skb->len);
--
-- stats->flag |= RX_FLAG_RADIOTAP;
-- ieee80211_rx_irqsafe(priv->hw, rxb->skb, stats);
-- rxb->skb = NULL;
--}
--
+ if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
+- if (assoc_req->channel != adapter->curbssparams.channel)
++ if (assoc_req->channel != priv->curbssparams.channel)
+ return 1;
+ }
- #define IWL_PACKET_RETRY_TIME HZ
++ lbs_deb_leave(LBS_DEB_ASSOC);
+ return 0;
+ }
--int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
-+int iwl4965_is_duplicate_packet(struct iwl4965_priv *priv, struct ieee80211_hdr *header)
+
+-void libertas_association_worker(struct work_struct *work)
++void lbs_association_worker(struct work_struct *work)
{
- u16 sc = le16_to_cpu(header->seq_ctrl);
- u16 seq = (sc & IEEE80211_SCTL_SEQ) >> 4;
-@@ -3239,29 +3275,26 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
- switch (priv->iw_mode) {
- case IEEE80211_IF_TYPE_IBSS:{
- struct list_head *p;
-- struct iwl_ibss_seq *entry = NULL;
-+ struct iwl4965_ibss_seq *entry = NULL;
- u8 *mac = header->addr2;
- int index = mac[5] & (IWL_IBSS_MAC_HASH_SIZE - 1);
+- wlan_private *priv = container_of(work, wlan_private, assoc_work.work);
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = container_of(work, struct lbs_private,
++ assoc_work.work);
+ struct assoc_request * assoc_req = NULL;
+ int ret = 0;
+ int find_any_ssid = 0;
+@@ -512,16 +498,33 @@ void libertas_association_worker(struct work_struct *work)
- __list_for_each(p, &priv->ibss_mac_hash[index]) {
-- entry =
-- list_entry(p, struct iwl_ibss_seq, list);
-+ entry = list_entry(p, struct iwl4965_ibss_seq, list);
- if (!compare_ether_addr(entry->mac, mac))
- break;
- }
- if (p == &priv->ibss_mac_hash[index]) {
- entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
- if (!entry) {
-- IWL_ERROR
-- ("Cannot malloc new mac entry\n");
-+ IWL_ERROR("Cannot malloc new mac entry\n");
- return 0;
+ lbs_deb_enter(LBS_DEB_ASSOC);
+
+- mutex_lock(&adapter->lock);
+- assoc_req = adapter->pending_assoc_req;
+- adapter->pending_assoc_req = NULL;
+- adapter->in_progress_assoc_req = assoc_req;
+- mutex_unlock(&adapter->lock);
++ mutex_lock(&priv->lock);
++ assoc_req = priv->pending_assoc_req;
++ priv->pending_assoc_req = NULL;
++ priv->in_progress_assoc_req = assoc_req;
++ mutex_unlock(&priv->lock);
+
+ if (!assoc_req)
+ goto done;
+
+- print_assoc_req(__func__, assoc_req);
++ lbs_deb_assoc(
++ "Association Request:\n"
++ " flags: 0x%08lx\n"
++ " SSID: '%s'\n"
++ " chann: %d\n"
++ " band: %d\n"
++ " mode: %d\n"
++ " BSSID: %s\n"
++ " secinfo: %s%s%s\n"
++ " auth_mode: %d\n",
++ assoc_req->flags,
++ escape_essid(assoc_req->ssid, assoc_req->ssid_len),
++ assoc_req->channel, assoc_req->band, assoc_req->mode,
++ print_mac(mac, assoc_req->bssid),
++ assoc_req->secinfo.WPAenabled ? " WPA" : "",
++ assoc_req->secinfo.WPA2enabled ? " WPA2" : "",
++ assoc_req->secinfo.wep_enabled ? " WEP" : "",
++ assoc_req->secinfo.auth_mode);
+
+ /* If 'any' SSID was specified, find an SSID to associate with */
+ if (test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)
+@@ -538,7 +541,7 @@ void libertas_association_worker(struct work_struct *work)
+ if (find_any_ssid) {
+ u8 new_mode;
+
+- ret = libertas_find_best_network_ssid(priv, assoc_req->ssid,
++ ret = lbs_find_best_network_ssid(priv, assoc_req->ssid,
+ &assoc_req->ssid_len, assoc_req->mode, &new_mode);
+ if (ret) {
+ lbs_deb_assoc("Could not find best network\n");
+@@ -557,18 +560,18 @@ void libertas_association_worker(struct work_struct *work)
+ * Check if the attributes being changing require deauthentication
+ * from the currently associated infrastructure access point.
+ */
+- if (adapter->mode == IW_MODE_INFRA) {
+- if (should_deauth_infrastructure(adapter, assoc_req)) {
+- ret = libertas_send_deauthentication(priv);
++ if (priv->mode == IW_MODE_INFRA) {
++ if (should_deauth_infrastructure(priv, assoc_req)) {
++ ret = lbs_send_deauthentication(priv);
+ if (ret) {
+ lbs_deb_assoc("Deauthentication due to new "
+ "configuration request failed: %d\n",
+ ret);
}
- memcpy(entry->mac, mac, ETH_ALEN);
- entry->seq_num = seq;
- entry->frag_num = frag;
- entry->packet_time = jiffies;
-- list_add(&entry->list,
-- &priv->ibss_mac_hash[index]);
-+ list_add(&entry->list, &priv->ibss_mac_hash[index]);
- return 0;
}
- last_seq = &entry->seq_num;
-@@ -3295,7 +3328,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
- return 1;
- }
+- } else if (adapter->mode == IW_MODE_ADHOC) {
+- if (should_stop_adhoc(adapter, assoc_req)) {
+- ret = libertas_stop_adhoc_network(priv);
++ } else if (priv->mode == IW_MODE_ADHOC) {
++ if (should_stop_adhoc(priv, assoc_req)) {
++ ret = lbs_stop_adhoc_network(priv);
+ if (ret) {
+ lbs_deb_assoc("Teardown of AdHoc network due to "
+ "new configuration request failed: %d\n",
+@@ -581,58 +584,40 @@ void libertas_association_worker(struct work_struct *work)
+ /* Send the various configuration bits to the firmware */
+ if (test_bit(ASSOC_FLAG_MODE, &assoc_req->flags)) {
+ ret = assoc_helper_mode(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) mode: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
+ if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
+ ret = assoc_helper_channel(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) channel: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
- #include "iwl-spectrum.h"
+ if ( test_bit(ASSOC_FLAG_WEP_KEYS, &assoc_req->flags)
+ || test_bit(ASSOC_FLAG_WEP_TX_KEYIDX, &assoc_req->flags)) {
+ ret = assoc_helper_wep_keys(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) wep_keys: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
-@@ -3310,7 +3343,7 @@ int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr *header)
- * the lower 3 bytes is the time in usec within one beacon interval
- */
+ if (test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
+ ret = assoc_helper_secinfo(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) secinfo: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
--static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
-+static u32 iwl4965_usecs_to_beacons(u32 usec, u32 beacon_interval)
- {
- u32 quot;
- u32 rem;
-@@ -3329,7 +3362,7 @@ static u32 iwl_usecs_to_beacons(u32 usec, u32 beacon_interval)
- * the same as HW timer counter counting down
- */
+ if (test_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags)) {
+ ret = assoc_helper_wpa_ie(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) wpa_ie: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
--static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
-+static __le32 iwl4965_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
- {
- u32 base_low = base & BEACON_TIME_MASK_LOW;
- u32 addon_low = addon & BEACON_TIME_MASK_LOW;
-@@ -3348,13 +3381,13 @@ static __le32 iwl_add_beacon_time(u32 base, u32 addon, u32 beacon_interval)
- return cpu_to_le32(res);
- }
+ if (test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)
+ || test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
+ ret = assoc_helper_wpa_keys(priv, assoc_req);
+- if (ret) {
+- lbs_deb_assoc("ASSOC(:%d) wpa_keys: ret = %d\n",
+- __LINE__, ret);
++ if (ret)
+ goto out;
+- }
+ }
--static int iwl_get_measurement(struct iwl_priv *priv,
-+static int iwl4965_get_measurement(struct iwl4965_priv *priv,
- struct ieee80211_measurement_params *params,
- u8 type)
- {
-- struct iwl_spectrum_cmd spectrum;
-- struct iwl_rx_packet *res;
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_spectrum_cmd spectrum;
-+ struct iwl4965_rx_packet *res;
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_SPECTRUM_MEASUREMENT_CMD,
- .data = (void *)&spectrum,
- .meta.flags = CMD_WANT_SKB,
-@@ -3364,9 +3397,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
- int spectrum_resp_status;
- int duration = le16_to_cpu(params->duration);
+ /* SSID/BSSID should be the _last_ config option set, because they
+@@ -644,28 +629,27 @@ void libertas_association_worker(struct work_struct *work)
-- if (iwl_is_associated(priv))
-+ if (iwl4965_is_associated(priv))
- add_time =
-- iwl_usecs_to_beacons(
-+ iwl4965_usecs_to_beacons(
- le64_to_cpu(params->start_time) - priv->last_tsf,
- le16_to_cpu(priv->rxon_timing.beacon_interval));
+ ret = assoc_helper_associate(priv, assoc_req);
+ if (ret) {
+- lbs_deb_assoc("ASSOC: association attempt unsuccessful: %d\n",
++ lbs_deb_assoc("ASSOC: association unsuccessful: %d\n",
+ ret);
+ success = 0;
+ }
-@@ -3379,9 +3412,9 @@ static int iwl_get_measurement(struct iwl_priv *priv,
- cmd.len = sizeof(spectrum);
- spectrum.len = cpu_to_le16(cmd.len - sizeof(spectrum.len));
+- if (adapter->connect_status != LIBERTAS_CONNECTED) {
+- lbs_deb_assoc("ASSOC: association attempt unsuccessful, "
+- "not connected.\n");
++ if (priv->connect_status != LBS_CONNECTED) {
++ lbs_deb_assoc("ASSOC: association unsuccessful, "
++ "not connected\n");
+ success = 0;
+ }
-- if (iwl_is_associated(priv))
-+ if (iwl4965_is_associated(priv))
- spectrum.start_time =
-- iwl_add_beacon_time(priv->last_beacon_time,
-+ iwl4965_add_beacon_time(priv->last_beacon_time,
- add_time,
- le16_to_cpu(priv->rxon_timing.beacon_interval));
- else
-@@ -3394,11 +3427,11 @@ static int iwl_get_measurement(struct iwl_priv *priv,
- spectrum.flags |= RXON_FLG_BAND_24G_MSK |
- RXON_FLG_AUTO_DETECT_MSK | RXON_FLG_TGG_PROTECT_MSK;
+ if (success) {
+- lbs_deb_assoc("ASSOC: association attempt successful. "
+- "Associated to '%s' (%s)\n",
+- escape_essid(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len),
+- print_mac(mac, adapter->curbssparams.bssid));
+- libertas_prepare_and_send_command(priv,
++ lbs_deb_assoc("ASSOC: associated to '%s', %s\n",
++ escape_essid(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len),
++ print_mac(mac, priv->curbssparams.bssid));
++ lbs_prepare_and_send_command(priv,
+ CMD_802_11_RSSI,
+ 0, CMD_OPTION_WAITFORRSP, 0, NULL);
-- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl4965_send_cmd_sync(priv, &cmd);
- if (rc)
- return rc;
+- libertas_prepare_and_send_command(priv,
++ lbs_prepare_and_send_command(priv,
+ CMD_802_11_GET_LOG,
+ 0, CMD_OPTION_WAITFORRSP, 0, NULL);
+ } else {
+@@ -679,9 +663,9 @@ out:
+ ret);
+ }
-- res = (struct iwl_rx_packet *)cmd.meta.u.skb->data;
-+ res = (struct iwl4965_rx_packet *)cmd.meta.u.skb->data;
- if (res->hdr.flags & IWL_CMD_FAILED_MSK) {
- IWL_ERROR("Bad return from REPLY_RX_ON_ASSOC command\n");
- rc = -EIO;
-@@ -3428,8 +3461,8 @@ static int iwl_get_measurement(struct iwl_priv *priv,
- }
- #endif
+- mutex_lock(&adapter->lock);
+- adapter->in_progress_assoc_req = NULL;
+- mutex_unlock(&adapter->lock);
++ mutex_lock(&priv->lock);
++ priv->in_progress_assoc_req = NULL;
++ mutex_unlock(&priv->lock);
+ kfree(assoc_req);
--static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
-- struct iwl_tx_info *tx_sta)
-+static void iwl4965_txstatus_to_ieee(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_info *tx_sta)
+ done:
+@@ -692,14 +676,15 @@ done:
+ /*
+ * Caller MUST hold any necessary locks
+ */
+-struct assoc_request * wlan_get_association_request(wlan_adapter *adapter)
++struct assoc_request *lbs_get_association_request(struct lbs_private *priv)
{
+ struct assoc_request * assoc_req;
- tx_sta->status.ack_signal = 0;
-@@ -3448,41 +3481,41 @@ static void iwl_txstatus_to_ieee(struct iwl_priv *priv,
- }
+- if (!adapter->pending_assoc_req) {
+- adapter->pending_assoc_req = kzalloc(sizeof(struct assoc_request),
++ lbs_deb_enter(LBS_DEB_ASSOC);
++ if (!priv->pending_assoc_req) {
++ priv->pending_assoc_req = kzalloc(sizeof(struct assoc_request),
+ GFP_KERNEL);
+- if (!adapter->pending_assoc_req) {
++ if (!priv->pending_assoc_req) {
+ lbs_pr_info("Not enough memory to allocate association"
+ " request!\n");
+ return NULL;
+@@ -709,60 +694,59 @@ struct assoc_request * wlan_get_association_request(wlan_adapter *adapter)
+ /* Copy current configuration attributes to the association request,
+ * but don't overwrite any that are already set.
+ */
+- assoc_req = adapter->pending_assoc_req;
++ assoc_req = priv->pending_assoc_req;
+ if (!test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
+- memcpy(&assoc_req->ssid, &adapter->curbssparams.ssid,
++ memcpy(&assoc_req->ssid, &priv->curbssparams.ssid,
+ IW_ESSID_MAX_SIZE);
+- assoc_req->ssid_len = adapter->curbssparams.ssid_len;
++ assoc_req->ssid_len = priv->curbssparams.ssid_len;
+ }
- /**
-- * iwl_tx_queue_reclaim - Reclaim Tx queue entries no more used by NIC.
-+ * iwl4965_tx_queue_reclaim - Reclaim Tx queue entries already Tx'd
- *
-- * When FW advances 'R' index, all entries between old and
-- * new 'R' index need to be reclaimed. As result, some free space
-- * forms. If there is enough free space (> low mark), wake Tx queue.
-+ * When FW advances 'R' index, all entries between old and new 'R' index
-+ * need to be reclaimed. As result, some free space forms. If there is
-+ * enough free space (> low mark), wake the stack that feeds us.
- */
--int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
-+int iwl4965_tx_queue_reclaim(struct iwl4965_priv *priv, int txq_id, int index)
- {
-- struct iwl_tx_queue *txq = &priv->txq[txq_id];
-- struct iwl_queue *q = &txq->q;
-+ struct iwl4965_tx_queue *txq = &priv->txq[txq_id];
-+ struct iwl4965_queue *q = &txq->q;
- int nfreed = 0;
+ if (!test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags))
+- assoc_req->channel = adapter->curbssparams.channel;
++ assoc_req->channel = priv->curbssparams.channel;
- if ((index >= q->n_bd) || (x2_queue_used(q, index) == 0)) {
- IWL_ERROR("Read index for DMA queue txq id (%d), index %d, "
- "is out of range [0-%d] %d %d.\n", txq_id,
-- index, q->n_bd, q->first_empty, q->last_used);
-+ index, q->n_bd, q->write_ptr, q->read_ptr);
- return 0;
+ if (!test_bit(ASSOC_FLAG_BAND, &assoc_req->flags))
+- assoc_req->band = adapter->curbssparams.band;
++ assoc_req->band = priv->curbssparams.band;
+
+ if (!test_bit(ASSOC_FLAG_MODE, &assoc_req->flags))
+- assoc_req->mode = adapter->mode;
++ assoc_req->mode = priv->mode;
+
+ if (!test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
+- memcpy(&assoc_req->bssid, adapter->curbssparams.bssid,
++ memcpy(&assoc_req->bssid, priv->curbssparams.bssid,
+ ETH_ALEN);
}
-- for (index = iwl_queue_inc_wrap(index, q->n_bd);
-- q->last_used != index;
-- q->last_used = iwl_queue_inc_wrap(q->last_used, q->n_bd)) {
-+ for (index = iwl4965_queue_inc_wrap(index, q->n_bd);
-+ q->read_ptr != index;
-+ q->read_ptr = iwl4965_queue_inc_wrap(q->read_ptr, q->n_bd)) {
- if (txq_id != IWL_CMD_QUEUE_NUM) {
-- iwl_txstatus_to_ieee(priv,
-- &(txq->txb[txq->q.last_used]));
-- iwl_hw_txq_free_tfd(priv, txq);
-+ iwl4965_txstatus_to_ieee(priv,
-+ &(txq->txb[txq->q.read_ptr]));
-+ iwl4965_hw_txq_free_tfd(priv, txq);
- } else if (nfreed > 1) {
- IWL_ERROR("HCMD skipped: index (%d) %d %d\n", index,
-- q->first_empty, q->last_used);
-+ q->write_ptr, q->read_ptr);
- queue_work(priv->workqueue, &priv->restart);
+ if (!test_bit(ASSOC_FLAG_WEP_KEYS, &assoc_req->flags)) {
+ int i;
+ for (i = 0; i < 4; i++) {
+- memcpy(&assoc_req->wep_keys[i], &adapter->wep_keys[i],
++ memcpy(&assoc_req->wep_keys[i], &priv->wep_keys[i],
+ sizeof(struct enc_key));
}
- nfreed++;
}
-- if (iwl_queue_space(q) > q->low_mark && (txq_id >= 0) &&
-+ if (iwl4965_queue_space(q) > q->low_mark && (txq_id >= 0) &&
- (txq_id != IWL_CMD_QUEUE_NUM) &&
- priv->mac80211_registered)
- ieee80211_wake_queue(priv->hw, txq_id);
-@@ -3491,7 +3524,7 @@ int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index)
- return nfreed;
- }
+ if (!test_bit(ASSOC_FLAG_WEP_TX_KEYIDX, &assoc_req->flags))
+- assoc_req->wep_tx_keyidx = adapter->wep_tx_keyidx;
++ assoc_req->wep_tx_keyidx = priv->wep_tx_keyidx;
--static int iwl_is_tx_success(u32 status)
-+static int iwl4965_is_tx_success(u32 status)
- {
- status &= TX_STATUS_MSK;
- return (status == TX_STATUS_SUCCESS)
-@@ -3503,22 +3536,22 @@ static int iwl_is_tx_success(u32 status)
- * Generic RX handler implementations
- *
- ******************************************************************************/
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
+ if (!test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)) {
+- memcpy(&assoc_req->wpa_mcast_key, &adapter->wpa_mcast_key,
++ memcpy(&assoc_req->wpa_mcast_key, &priv->wpa_mcast_key,
+ sizeof(struct enc_key));
+ }
--static inline int iwl_get_ra_sta_id(struct iwl_priv *priv,
-+static inline int iwl4965_get_ra_sta_id(struct iwl4965_priv *priv,
- struct ieee80211_hdr *hdr)
- {
- if (priv->iw_mode == IEEE80211_IF_TYPE_STA)
- return IWL_AP_ID;
- else {
- u8 *da = ieee80211_get_DA(hdr);
-- return iwl_hw_find_station(priv, da);
-+ return iwl4965_hw_find_station(priv, da);
+ if (!test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
+- memcpy(&assoc_req->wpa_unicast_key, &adapter->wpa_unicast_key,
++ memcpy(&assoc_req->wpa_unicast_key, &priv->wpa_unicast_key,
+ sizeof(struct enc_key));
+ }
+
+ if (!test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
+- memcpy(&assoc_req->secinfo, &adapter->secinfo,
+- sizeof(struct wlan_802_11_security));
++ memcpy(&assoc_req->secinfo, &priv->secinfo,
++ sizeof(struct lbs_802_11_security));
}
+
+ if (!test_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags)) {
+- memcpy(&assoc_req->wpa_ie, &adapter->wpa_ie,
++ memcpy(&assoc_req->wpa_ie, &priv->wpa_ie,
+ MAX_WPA_IE_LEN);
+- assoc_req->wpa_ie_len = adapter->wpa_ie_len;
++ assoc_req->wpa_ie_len = priv->wpa_ie_len;
+ }
+
+- print_assoc_req(__func__, assoc_req);
+-
++ lbs_deb_leave(LBS_DEB_ASSOC);
+ return assoc_req;
}
+diff --git a/drivers/net/wireless/libertas/assoc.h b/drivers/net/wireless/libertas/assoc.h
+index e09b749..08372bb 100644
+--- a/drivers/net/wireless/libertas/assoc.h
++++ b/drivers/net/wireless/libertas/assoc.h
+@@ -1,32 +1,12 @@
+ /* Copyright (C) 2006, Red Hat, Inc. */
--static struct ieee80211_hdr *iwl_tx_queue_get_hdr(
-- struct iwl_priv *priv, int txq_id, int idx)
-+static struct ieee80211_hdr *iwl4965_tx_queue_get_hdr(
-+ struct iwl4965_priv *priv, int txq_id, int idx)
+-#ifndef _WLAN_ASSOC_H_
+-#define _WLAN_ASSOC_H_
++#ifndef _LBS_ASSOC_H_
++#define _LBS_ASSOC_H_
+
+ #include "dev.h"
+
+-void libertas_association_worker(struct work_struct *work);
++void lbs_association_worker(struct work_struct *work);
++struct assoc_request *lbs_get_association_request(struct lbs_private *priv);
++void lbs_sync_channel(struct work_struct *work);
+
+-struct assoc_request * wlan_get_association_request(wlan_adapter *adapter);
+-
+-void libertas_sync_channel(struct work_struct *work);
+-
+-#define ASSOC_DELAY (HZ / 2)
+-static inline void wlan_postpone_association_work(wlan_private *priv)
+-{
+- if (priv->adapter->surpriseremoved)
+- return;
+- cancel_delayed_work(&priv->assoc_work);
+- queue_delayed_work(priv->work_thread, &priv->assoc_work, ASSOC_DELAY);
+-}
+-
+-static inline void wlan_cancel_association_work(wlan_private *priv)
+-{
+- cancel_delayed_work(&priv->assoc_work);
+- if (priv->adapter->pending_assoc_req) {
+- kfree(priv->adapter->pending_assoc_req);
+- priv->adapter->pending_assoc_req = NULL;
+- }
+-}
+-
+-#endif /* _WLAN_ASSOC_H */
++#endif /* _LBS_ASSOC_H */
+diff --git a/drivers/net/wireless/libertas/cmd.c b/drivers/net/wireless/libertas/cmd.c
+index be5cfd8..eab0203 100644
+--- a/drivers/net/wireless/libertas/cmd.c
++++ b/drivers/net/wireless/libertas/cmd.c
+@@ -11,47 +11,139 @@
+ #include "dev.h"
+ #include "join.h"
+ #include "wext.h"
++#include "cmd.h"
+
+-static void cleanup_cmdnode(struct cmd_ctrl_node *ptempnode);
++static struct cmd_ctrl_node *lbs_get_cmd_ctrl_node(struct lbs_private *priv);
++static void lbs_set_cmd_ctrl_node(struct lbs_private *priv,
++ struct cmd_ctrl_node *ptempnode,
++ void *pdata_buf);
+
+-static u16 commands_allowed_in_ps[] = {
+- CMD_802_11_RSSI,
+-};
+
+ /**
+- * @brief This function checks if the commans is allowed
+- * in PS mode not.
++ * @brief Checks whether a command is allowed in Power Save mode
+ *
+ * @param command the command ID
+- * @return TRUE or FALSE
++ * @return 1 if allowed, 0 if not allowed
+ */
+-static u8 is_command_allowed_in_ps(__le16 command)
++static u8 is_command_allowed_in_ps(u16 cmd)
{
- if (priv->txq[txq_id].txb[idx].skb[0])
- return (struct ieee80211_hdr *)priv->txq[txq_id].
-@@ -3526,16 +3559,20 @@ static struct ieee80211_hdr *iwl_tx_queue_get_hdr(
- return NULL;
+- int i;
+-
+- for (i = 0; i < ARRAY_SIZE(commands_allowed_in_ps); i++) {
+- if (command == cpu_to_le16(commands_allowed_in_ps[i]))
+- return 1;
++ switch (cmd) {
++ case CMD_802_11_RSSI:
++ return 1;
++ default:
++ break;
+ }
+-
+ return 0;
}
--static inline u32 iwl_get_scd_ssn(struct iwl_tx_resp *tx_resp)
-+static inline u32 iwl4965_get_scd_ssn(struct iwl4965_tx_resp *tx_resp)
+-static int wlan_cmd_hw_spec(wlan_private * priv, struct cmd_ds_command *cmd)
++/**
++ * @brief Updates the hardware details like MAC address and regulatory region
++ *
++ * @param priv A pointer to struct lbs_private structure
++ *
++ * @return 0 on success, error on failure
++ */
++int lbs_update_hw_spec(struct lbs_private *priv)
{
- __le32 *scd_ssn = (__le32 *)((u32 *)&tx_resp->status +
- tx_resp->frame_count);
- return le32_to_cpu(*scd_ssn) & MAX_SN;
+- struct cmd_ds_get_hw_spec *hwspec = &cmd->params.hwspec;
++ struct cmd_ds_get_hw_spec cmd;
++ int ret = -1;
++ u32 i;
++ DECLARE_MAC_BUF(mac);
+
+ lbs_deb_enter(LBS_DEB_CMD);
+
+- cmd->command = cpu_to_le16(CMD_GET_HW_SPEC);
+- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_get_hw_spec) + S_DS_GEN);
+- memcpy(hwspec->permanentaddr, priv->adapter->current_addr, ETH_ALEN);
++ memset(&cmd, 0, sizeof(cmd));
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ memcpy(cmd.permanentaddr, priv->current_addr, ETH_ALEN);
++ ret = lbs_cmd_with_response(priv, CMD_GET_HW_SPEC, &cmd);
++ if (ret)
++ goto out;
++
++ priv->fwcapinfo = le32_to_cpu(cmd.fwcapinfo);
++
++ /* The firmware release is in an interesting format: the patch
++ * level is in the most significant nibble ... so fix that: */
++ priv->fwrelease = le32_to_cpu(cmd.fwrelease);
++ priv->fwrelease = (priv->fwrelease << 8) |
++ (priv->fwrelease >> 24 & 0xff);
++
++ /* Some firmware capabilities:
++ * CF card firmware 5.0.16p0: cap 0x00000303
++ * USB dongle firmware 5.110.17p2: cap 0x00000303
++ */
++ printk("libertas: %s, fw %u.%u.%up%u, cap 0x%08x\n",
++ print_mac(mac, cmd.permanentaddr),
++ priv->fwrelease >> 24 & 0xff,
++ priv->fwrelease >> 16 & 0xff,
++ priv->fwrelease >> 8 & 0xff,
++ priv->fwrelease & 0xff,
++ priv->fwcapinfo);
++ lbs_deb_cmd("GET_HW_SPEC: hardware interface 0x%x, hardware spec 0x%04x\n",
++ cmd.hwifversion, cmd.version);
++
++ /* Clamp region code to 8-bit since FW spec indicates that it should
++ * only ever be 8-bit, even though the field size is 16-bit. Some firmware
++ * returns non-zero high 8 bits here.
++ */
++ priv->regioncode = le16_to_cpu(cmd.regioncode) & 0xFF;
++
++ for (i = 0; i < MRVDRV_MAX_REGION_CODE; i++) {
++ /* use the region code to search for the index */
++ if (priv->regioncode == lbs_region_code_to_index[i])
++ break;
++ }
++
++ /* if it's unidentified region code, use the default (USA) */
++ if (i >= MRVDRV_MAX_REGION_CODE) {
++ priv->regioncode = 0x10;
++ lbs_pr_info("unidentified region code; using the default (USA)\n");
++ }
++
++ if (priv->current_addr[0] == 0xff)
++ memmove(priv->current_addr, cmd.permanentaddr, ETH_ALEN);
++
++ memcpy(priv->dev->dev_addr, priv->current_addr, ETH_ALEN);
++ if (priv->mesh_dev)
++ memcpy(priv->mesh_dev->dev_addr, priv->current_addr, ETH_ALEN);
++
++ if (lbs_set_regiontable(priv, priv->regioncode, 0)) {
++ ret = -1;
++ goto out;
++ }
++ if (lbs_set_universaltable(priv, 0)) {
++ ret = -1;
++ goto out;
++ }
++
++out:
+ lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
++ return ret;
}
--static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
-- struct iwl_ht_agg *agg,
-- struct iwl_tx_resp *tx_resp,
+
+-static int wlan_cmd_802_11_ps_mode(wlan_private * priv,
++int lbs_host_sleep_cfg(struct lbs_private *priv, uint32_t criteria)
++{
++ struct cmd_ds_host_sleep cmd_config;
++ int ret;
+
-+/**
-+ * iwl4965_tx_status_reply_tx - Handle Tx rspnse for frames in aggregation queue
-+ */
-+static int iwl4965_tx_status_reply_tx(struct iwl4965_priv *priv,
-+ struct iwl4965_ht_agg *agg,
-+ struct iwl4965_tx_resp *tx_resp,
- u16 start_idx)
++ cmd_config.hdr.size = cpu_to_le16(sizeof(cmd_config));
++ cmd_config.criteria = cpu_to_le32(criteria);
++ cmd_config.gpio = priv->wol_gpio;
++ cmd_config.gap = priv->wol_gap;
++
++ ret = lbs_cmd_with_response(priv, CMD_802_11_HOST_SLEEP_CFG, &cmd_config);
++ if (!ret) {
++ lbs_deb_cmd("Set WOL criteria to %x\n", criteria);
++ priv->wol_criteria = criteria;
++ } else {
++ lbs_pr_info("HOST_SLEEP_CFG failed %d\n", ret);
++ }
++
++ return ret;
++}
++EXPORT_SYMBOL_GPL(lbs_host_sleep_cfg);
++
++static int lbs_cmd_802_11_ps_mode(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action)
{
- u32 status;
-@@ -3547,15 +3584,17 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
- u16 seq;
-
- if (agg->wait_for_ba)
-- IWL_DEBUG_TX_REPLY("got tx repsons w/o back\n");
-+ IWL_DEBUG_TX_REPLY("got tx response w/o block-ack\n");
+@@ -90,161 +182,161 @@ static int wlan_cmd_802_11_ps_mode(wlan_private * priv,
+ return 0;
+ }
- agg->frame_count = tx_resp->frame_count;
- agg->start_idx = start_idx;
- agg->rate_n_flags = le32_to_cpu(tx_resp->rate_n_flags);
- agg->bitmap0 = agg->bitmap1 = 0;
+-static int wlan_cmd_802_11_inactivity_timeout(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u16 cmd_action, void *pdata_buf)
++int lbs_cmd_802_11_inactivity_timeout(struct lbs_private *priv,
++ uint16_t cmd_action, uint16_t *timeout)
+ {
+- u16 *timeout = pdata_buf;
++ struct cmd_ds_802_11_inactivity_timeout cmd;
++ int ret;
-+ /* # frames attempted by Tx command */
- if (agg->frame_count == 1) {
-- struct iwl_tx_queue *txq ;
-+ /* Only one frame was attempted; no block-ack will arrive */
-+ struct iwl4965_tx_queue *txq ;
- status = le32_to_cpu(frame_status[0]);
+ lbs_deb_enter(LBS_DEB_CMD);
- txq_id = agg->txq_id;
-@@ -3564,28 +3603,30 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
- IWL_DEBUG_TX_REPLY("FrameCnt = %d, StartIdx=%d \n",
- agg->frame_count, agg->start_idx);
+- cmd->command = cpu_to_le16(CMD_802_11_INACTIVITY_TIMEOUT);
+- cmd->size =
+- cpu_to_le16(sizeof(struct cmd_ds_802_11_inactivity_timeout)
+- + S_DS_GEN);
++ cmd.hdr.command = cpu_to_le16(CMD_802_11_INACTIVITY_TIMEOUT);
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-- tx_status = &(priv->txq[txq_id].txb[txq->q.last_used].status);
-+ tx_status = &(priv->txq[txq_id].txb[txq->q.read_ptr].status);
- tx_status->retry_count = tx_resp->failure_frame;
- tx_status->queue_number = status & 0xff;
- tx_status->queue_length = tx_resp->bt_kill_count;
- tx_status->queue_length |= tx_resp->failure_rts;
+- cmd->params.inactivity_timeout.action = cpu_to_le16(cmd_action);
++ cmd.action = cpu_to_le16(cmd_action);
-- tx_status->flags = iwl_is_tx_success(status)?
-+ tx_status->flags = iwl4965_is_tx_success(status)?
- IEEE80211_TX_STATUS_ACK : 0;
- tx_status->control.tx_rate =
-- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags);
-+ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags);
- /* FIXME: code repetition end */
+- if (cmd_action)
+- cmd->params.inactivity_timeout.timeout = cpu_to_le16(*timeout);
++ if (cmd_action == CMD_ACT_SET)
++ cmd.timeout = cpu_to_le16(*timeout);
+ else
+- cmd->params.inactivity_timeout.timeout = 0;
++ cmd.timeout = 0;
- IWL_DEBUG_TX_REPLY("1 Frame 0x%x failure :%d\n",
- status & 0xff, tx_resp->failure_frame);
- IWL_DEBUG_TX_REPLY("Rate Info rate_n_flags=%x\n",
-- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags));
-+ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags));
+- lbs_deb_leave(LBS_DEB_CMD);
++ ret = lbs_cmd_with_response(priv, CMD_802_11_INACTIVITY_TIMEOUT, &cmd);
++
++ if (!ret)
++ *timeout = le16_to_cpu(cmd.timeout);
++
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ return 0;
+ }
- agg->wait_for_ba = 0;
- } else {
-+ /* Two or more frames were attempted; expect block-ack */
- u64 bitmap = 0;
- int start = agg->start_idx;
+-static int wlan_cmd_802_11_sleep_params(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u16 cmd_action)
++int lbs_cmd_802_11_sleep_params(struct lbs_private *priv, uint16_t cmd_action,
++ struct sleep_params *sp)
+ {
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ds_802_11_sleep_params *sp = &cmd->params.sleep_params;
++ struct cmd_ds_802_11_sleep_params cmd;
++ int ret;
-+ /* Construct bit-map of pending frames within Tx window */
- for (i = 0; i < agg->frame_count; i++) {
- u16 sc;
- status = le32_to_cpu(frame_status[i]);
-@@ -3600,7 +3641,7 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
- IWL_DEBUG_TX_REPLY("FrameCnt = %d, txq_id=%d idx=%d\n",
- agg->frame_count, txq_id, idx);
+ lbs_deb_enter(LBS_DEB_CMD);
-- hdr = iwl_tx_queue_get_hdr(priv, txq_id, idx);
-+ hdr = iwl4965_tx_queue_get_hdr(priv, txq_id, idx);
+- cmd->size = cpu_to_le16((sizeof(struct cmd_ds_802_11_sleep_params)) +
+- S_DS_GEN);
+- cmd->command = cpu_to_le16(CMD_802_11_SLEEP_PARAMS);
+-
+ if (cmd_action == CMD_ACT_GET) {
+- memset(&adapter->sp, 0, sizeof(struct sleep_params));
+- memset(sp, 0, sizeof(struct cmd_ds_802_11_sleep_params));
+- sp->action = cpu_to_le16(cmd_action);
+- } else if (cmd_action == CMD_ACT_SET) {
+- sp->action = cpu_to_le16(cmd_action);
+- sp->error = cpu_to_le16(adapter->sp.sp_error);
+- sp->offset = cpu_to_le16(adapter->sp.sp_offset);
+- sp->stabletime = cpu_to_le16(adapter->sp.sp_stabletime);
+- sp->calcontrol = (u8) adapter->sp.sp_calcontrol;
+- sp->externalsleepclk = (u8) adapter->sp.sp_extsleepclk;
+- sp->reserved = cpu_to_le16(adapter->sp.sp_reserved);
++ memset(&cmd, 0, sizeof(cmd));
++ } else {
++ cmd.error = cpu_to_le16(sp->sp_error);
++ cmd.offset = cpu_to_le16(sp->sp_offset);
++ cmd.stabletime = cpu_to_le16(sp->sp_stabletime);
++ cmd.calcontrol = sp->sp_calcontrol;
++ cmd.externalsleepclk = sp->sp_extsleepclk;
++ cmd.reserved = cpu_to_le16(sp->sp_reserved);
++ }
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(cmd_action);
++
++ ret = lbs_cmd_with_response(priv, CMD_802_11_SLEEP_PARAMS, &cmd);
++
++ if (!ret) {
++ lbs_deb_cmd("error 0x%x, offset 0x%x, stabletime 0x%x, "
++ "calcontrol 0x%x extsleepclk 0x%x\n",
++ le16_to_cpu(cmd.error), le16_to_cpu(cmd.offset),
++ le16_to_cpu(cmd.stabletime), cmd.calcontrol,
++ cmd.externalsleepclk);
++
++ sp->sp_error = le16_to_cpu(cmd.error);
++ sp->sp_offset = le16_to_cpu(cmd.offset);
++ sp->sp_stabletime = le16_to_cpu(cmd.stabletime);
++ sp->sp_calcontrol = cmd.calcontrol;
++ sp->sp_extsleepclk = cmd.externalsleepclk;
++ sp->sp_reserved = le16_to_cpu(cmd.reserved);
+ }
- sc = le16_to_cpu(hdr->seq_ctrl);
- if (idx != (SEQ_TO_SN(sc) & 0xff)) {
-@@ -3649,19 +3690,22 @@ static int iwl4965_tx_status_reply_tx(struct iwl_priv *priv,
- #endif
- #endif
+- lbs_deb_leave(LBS_DEB_CMD);
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ return 0;
+ }
--static void iwl_rx_reply_tx(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+/**
-+ * iwl4965_rx_reply_tx - Handle standard (non-aggregation) Tx response
-+ */
-+static void iwl4965_rx_reply_tx(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_set_wep(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u32 cmd_act,
+- void * pdata_buf)
++int lbs_cmd_802_11_set_wep(struct lbs_private *priv, uint16_t cmd_action,
++ struct assoc_request *assoc)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- u16 sequence = le16_to_cpu(pkt->hdr.sequence);
- int txq_id = SEQ_TO_QUEUE(sequence);
- int index = SEQ_TO_INDEX(sequence);
-- struct iwl_tx_queue *txq = &priv->txq[txq_id];
-+ struct iwl4965_tx_queue *txq = &priv->txq[txq_id];
- struct ieee80211_tx_status *tx_status;
-- struct iwl_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
-+ struct iwl4965_tx_resp *tx_resp = (void *)&pkt->u.raw[0];
- u32 status = le32_to_cpu(tx_resp->status);
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- int tid, sta_id;
- #endif
- #endif
-@@ -3669,18 +3713,18 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
- if ((index >= txq->q.n_bd) || (x2_queue_used(&txq->q, index) == 0)) {
- IWL_ERROR("Read index for DMA queue txq_id (%d) index %d "
- "is out of range [0-%d] %d %d\n", txq_id,
-- index, txq->q.n_bd, txq->q.first_empty,
-- txq->q.last_used);
-+ index, txq->q.n_bd, txq->q.write_ptr,
-+ txq->q.read_ptr);
- return;
- }
+- struct cmd_ds_802_11_set_wep *wep = &cmd->params.wep;
+- wlan_adapter *adapter = priv->adapter;
++ struct cmd_ds_802_11_set_wep cmd;
+ int ret = 0;
+- struct assoc_request * assoc_req = pdata_buf;
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
- if (txq->sched_retry) {
-- const u32 scd_ssn = iwl_get_scd_ssn(tx_resp);
-+ const u32 scd_ssn = iwl4965_get_scd_ssn(tx_resp);
- struct ieee80211_hdr *hdr =
-- iwl_tx_queue_get_hdr(priv, txq_id, index);
-- struct iwl_ht_agg *agg = NULL;
-+ iwl4965_tx_queue_get_hdr(priv, txq_id, index);
-+ struct iwl4965_ht_agg *agg = NULL;
- __le16 *qc = ieee80211_get_qos_ctrl(hdr);
+ lbs_deb_enter(LBS_DEB_CMD);
- if (qc == NULL) {
-@@ -3690,7 +3734,7 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
+- cmd->command = cpu_to_le16(CMD_802_11_SET_WEP);
+- cmd->size = cpu_to_le16(sizeof(*wep) + S_DS_GEN);
+-
+- if (cmd_act == CMD_ACT_ADD) {
+- int i;
++ cmd.hdr.command = cpu_to_le16(CMD_802_11_SET_WEP);
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
- tid = le16_to_cpu(*qc) & 0xf;
+- if (!assoc_req) {
+- lbs_deb_cmd("Invalid association request!");
+- ret = -1;
+- goto done;
+- }
++ cmd.action = cpu_to_le16(cmd_action);
-- sta_id = iwl_get_ra_sta_id(priv, hdr);
-+ sta_id = iwl4965_get_ra_sta_id(priv, hdr);
- if (unlikely(sta_id == IWL_INVALID_STATION)) {
- IWL_ERROR("Station not known for\n");
- return;
-@@ -3701,20 +3745,20 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
- iwl4965_tx_status_reply_tx(priv, agg, tx_resp, index);
+- wep->action = cpu_to_le16(CMD_ACT_ADD);
++ if (cmd_action == CMD_ACT_ADD) {
++ int i;
- if ((tx_resp->frame_count == 1) &&
-- !iwl_is_tx_success(status)) {
-+ !iwl4965_is_tx_success(status)) {
- /* TODO: send BAR */
- }
+ /* default tx key index */
+- wep->keyindex = cpu_to_le16((u16)(assoc_req->wep_tx_keyidx &
+- (u32)CMD_WEP_KEY_INDEX_MASK));
++ cmd.keyindex = cpu_to_le16(assoc->wep_tx_keyidx &
++ CMD_WEP_KEY_INDEX_MASK);
-- if ((txq->q.last_used != (scd_ssn & 0xff))) {
-- index = iwl_queue_dec_wrap(scd_ssn & 0xff, txq->q.n_bd);
-+ if ((txq->q.read_ptr != (scd_ssn & 0xff))) {
-+ index = iwl4965_queue_dec_wrap(scd_ssn & 0xff, txq->q.n_bd);
- IWL_DEBUG_TX_REPLY("Retry scheduler reclaim scd_ssn "
- "%d index %d\n", scd_ssn , index);
-- iwl_tx_queue_reclaim(priv, txq_id, index);
-+ iwl4965_tx_queue_reclaim(priv, txq_id, index);
+ /* Copy key types and material to host command structure */
+ for (i = 0; i < 4; i++) {
+- struct enc_key * pkey = &assoc_req->wep_keys[i];
++ struct enc_key *pkey = &assoc->wep_keys[i];
+
+ switch (pkey->len) {
+ case KEY_LEN_WEP_40:
+- wep->keytype[i] = CMD_TYPE_WEP_40_BIT;
+- memmove(&wep->keymaterial[i], pkey->key,
+- pkey->len);
++ cmd.keytype[i] = CMD_TYPE_WEP_40_BIT;
++ memmove(cmd.keymaterial[i], pkey->key, pkey->len);
+ lbs_deb_cmd("SET_WEP: add key %d (40 bit)\n", i);
+ break;
+ case KEY_LEN_WEP_104:
+- wep->keytype[i] = CMD_TYPE_WEP_104_BIT;
+- memmove(&wep->keymaterial[i], pkey->key,
+- pkey->len);
++ cmd.keytype[i] = CMD_TYPE_WEP_104_BIT;
++ memmove(cmd.keymaterial[i], pkey->key, pkey->len);
+ lbs_deb_cmd("SET_WEP: add key %d (104 bit)\n", i);
+ break;
+ case 0:
+ break;
+ default:
+ lbs_deb_cmd("SET_WEP: invalid key %d, length %d\n",
+- i, pkey->len);
++ i, pkey->len);
+ ret = -1;
+ goto done;
+ break;
+ }
}
- } else {
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-- tx_status = &(txq->txb[txq->q.last_used].status);
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-+ tx_status = &(txq->txb[txq->q.read_ptr].status);
+- } else if (cmd_act == CMD_ACT_REMOVE) {
++ } else if (cmd_action == CMD_ACT_REMOVE) {
+ /* ACT_REMOVE clears _all_ WEP keys */
+- wep->action = cpu_to_le16(CMD_ACT_REMOVE);
- tx_status->retry_count = tx_resp->failure_frame;
- tx_status->queue_number = status;
-@@ -3722,35 +3766,35 @@ static void iwl_rx_reply_tx(struct iwl_priv *priv,
- tx_status->queue_length |= tx_resp->failure_rts;
+ /* default tx key index */
+- wep->keyindex = cpu_to_le16((u16)(adapter->wep_tx_keyidx &
+- (u32)CMD_WEP_KEY_INDEX_MASK));
+- lbs_deb_cmd("SET_WEP: remove key %d\n", adapter->wep_tx_keyidx);
++ cmd.keyindex = cpu_to_le16(priv->wep_tx_keyidx &
++ CMD_WEP_KEY_INDEX_MASK);
++ lbs_deb_cmd("SET_WEP: remove key %d\n", priv->wep_tx_keyidx);
+ }
- tx_status->flags =
-- iwl_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
-+ iwl4965_is_tx_success(status) ? IEEE80211_TX_STATUS_ACK : 0;
+- ret = 0;
+-
++ ret = lbs_cmd_with_response(priv, CMD_802_11_SET_WEP, &cmd);
+ done:
+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ return ret;
+ }
- tx_status->control.tx_rate =
-- iwl_hw_get_rate_n_flags(tx_resp->rate_n_flags);
-+ iwl4965_hw_get_rate_n_flags(tx_resp->rate_n_flags);
+-static int wlan_cmd_802_11_enable_rsn(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u16 cmd_action,
+- void * pdata_buf)
++int lbs_cmd_802_11_enable_rsn(struct lbs_private *priv, uint16_t cmd_action,
++ uint16_t *enable)
+ {
+- struct cmd_ds_802_11_enable_rsn *penableRSN = &cmd->params.enbrsn;
+- u32 * enable = pdata_buf;
++ struct cmd_ds_802_11_enable_rsn cmd;
++ int ret;
- IWL_DEBUG_TX("Tx queue %d Status %s (0x%08x) rate_n_flags 0x%x "
-- "retries %d\n", txq_id, iwl_get_tx_fail_reason(status),
-+ "retries %d\n", txq_id, iwl4965_get_tx_fail_reason(status),
- status, le32_to_cpu(tx_resp->rate_n_flags),
- tx_resp->failure_frame);
+ lbs_deb_enter(LBS_DEB_CMD);
- IWL_DEBUG_TX_REPLY("Tx queue reclaim %d\n", index);
- if (index != -1)
-- iwl_tx_queue_reclaim(priv, txq_id, index);
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+ iwl4965_tx_queue_reclaim(priv, txq_id, index);
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
+- cmd->command = cpu_to_le16(CMD_802_11_ENABLE_RSN);
+- cmd->size = cpu_to_le16(sizeof(*penableRSN) + S_DS_GEN);
+- penableRSN->action = cpu_to_le16(cmd_action);
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(cmd_action);
+
+ if (cmd_action == CMD_ACT_SET) {
+ if (*enable)
+- penableRSN->enable = cpu_to_le16(CMD_ENABLE_RSN);
++ cmd.enable = cpu_to_le16(CMD_ENABLE_RSN);
+ else
+- penableRSN->enable = cpu_to_le16(CMD_DISABLE_RSN);
++ cmd.enable = cpu_to_le16(CMD_DISABLE_RSN);
+ lbs_deb_cmd("ENABLE_RSN: %d\n", *enable);
}
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
- if (iwl_check_bits(status, TX_ABORT_REQUIRED_MSK))
- IWL_ERROR("TODO: Implement Tx ABORT REQUIRED!!!\n");
- }
+- lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
+-}
++ ret = lbs_cmd_with_response(priv, CMD_802_11_ENABLE_RSN, &cmd);
++ if (!ret && cmd_action == CMD_ACT_GET)
++ *enable = le16_to_cpu(cmd.enable);
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
++ return ret;
++}
--static void iwl_rx_reply_alive(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_alive(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_alive_resp *palive;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_alive_resp *palive;
- struct delayed_work *pwork;
+ static void set_one_wpa_key(struct MrvlIEtype_keyParamSet * pkeyparamset,
+ struct enc_key * pkey)
+@@ -272,7 +364,7 @@ static void set_one_wpa_key(struct MrvlIEtype_keyParamSet * pkeyparamset,
+ lbs_deb_leave(LBS_DEB_CMD);
+ }
- palive = &pkt->u.alive_frame;
-@@ -3764,12 +3808,12 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
- IWL_DEBUG_INFO("Initialization Alive received.\n");
- memcpy(&priv->card_alive_init,
- &pkt->u.alive_frame,
-- sizeof(struct iwl_init_alive_resp));
-+ sizeof(struct iwl4965_init_alive_resp));
- pwork = &priv->init_alive_start;
- } else {
- IWL_DEBUG_INFO("Runtime Alive received.\n");
- memcpy(&priv->card_alive, &pkt->u.alive_frame,
-- sizeof(struct iwl_alive_resp));
-+ sizeof(struct iwl4965_alive_resp));
- pwork = &priv->alive_start;
- }
+-static int wlan_cmd_802_11_key_material(wlan_private * priv,
++static int lbs_cmd_802_11_key_material(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action,
+ u32 cmd_oid, void *pdata_buf)
+@@ -319,7 +411,7 @@ done:
+ return ret;
+ }
-@@ -3782,19 +3826,19 @@ static void iwl_rx_reply_alive(struct iwl_priv *priv,
- IWL_WARNING("uCode did not respond OK.\n");
+-static int wlan_cmd_802_11_reset(wlan_private * priv,
++static int lbs_cmd_802_11_reset(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, int cmd_action)
+ {
+ struct cmd_ds_802_11_reset *reset = &cmd->params.reset;
+@@ -334,7 +426,7 @@ static int wlan_cmd_802_11_reset(wlan_private * priv,
+ return 0;
}
--static void iwl_rx_reply_add_sta(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_add_sta(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_get_log(wlan_private * priv,
++static int lbs_cmd_802_11_get_log(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
+ lbs_deb_enter(LBS_DEB_CMD);
+@@ -346,7 +438,7 @@ static int wlan_cmd_802_11_get_log(wlan_private * priv,
+ return 0;
+ }
- IWL_DEBUG_RX("Received REPLY_ADD_STA: 0x%02X\n", pkt->u.status);
- return;
+-static int wlan_cmd_802_11_get_stat(wlan_private * priv,
++static int lbs_cmd_802_11_get_stat(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
+ {
+ lbs_deb_enter(LBS_DEB_CMD);
+@@ -358,13 +450,12 @@ static int wlan_cmd_802_11_get_stat(wlan_private * priv,
+ return 0;
}
--static void iwl_rx_reply_error(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_error(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
++static int lbs_cmd_802_11_snmp_mib(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ int cmd_action,
+ int cmd_oid, void *pdata_buf)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
+ struct cmd_ds_802_11_snmp_mib *pSNMPMIB = &cmd->params.smib;
+- wlan_adapter *adapter = priv->adapter;
+ u8 ucTemp;
- IWL_ERROR("Error Reply type 0x%08X cmd %s (0x%02X) "
- "seq 0x%04X ser 0x%08X\n",
-@@ -3807,23 +3851,23 @@ static void iwl_rx_reply_error(struct iwl_priv *priv,
+ lbs_deb_enter(LBS_DEB_CMD);
+@@ -380,7 +471,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
+ u8 mode = (u8) (size_t) pdata_buf;
+ pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
+ pSNMPMIB->oid = cpu_to_le16((u16) DESIRED_BSSTYPE_I);
+- pSNMPMIB->bufsize = sizeof(u8);
++ pSNMPMIB->bufsize = cpu_to_le16(sizeof(u8));
+ if (mode == IW_MODE_ADHOC) {
+ ucTemp = SNMP_MIB_VALUE_ADHOC;
+ } else {
+@@ -400,8 +491,8 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
+ pSNMPMIB->oid = cpu_to_le16((u16) DOT11D_I);
- #define TX_STATUS_ENTRY(x) case TX_STATUS_FAIL_ ## x: return #x
+ if (cmd_action == CMD_ACT_SET) {
+- pSNMPMIB->querytype = CMD_ACT_SET;
+- pSNMPMIB->bufsize = sizeof(u16);
++ pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
++ pSNMPMIB->bufsize = cpu_to_le16(sizeof(u16));
+ ulTemp = *(u32 *)pdata_buf;
+ *((__le16 *)(pSNMPMIB->value)) =
+ cpu_to_le16((u16) ulTemp);
+@@ -433,7 +524,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
+ {
--static void iwl_rx_csa(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_csa(struct iwl4965_priv *priv, struct iwl4965_rx_mem_buffer *rxb)
- {
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_rxon_cmd *rxon = (void *)&priv->active_rxon;
-- struct iwl_csa_notification *csa = &(pkt->u.csa_notif);
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rxon_cmd *rxon = (void *)&priv->active_rxon;
-+ struct iwl4965_csa_notification *csa = &(pkt->u.csa_notif);
- IWL_DEBUG_11H("CSA notif: channel %d, status %d\n",
- le16_to_cpu(csa->channel), le32_to_cpu(csa->status));
- rxon->channel = csa->channel;
- priv->staging_rxon.channel = csa->channel;
+ u32 ulTemp;
+- pSNMPMIB->oid = le16_to_cpu((u16) RTSTHRESH_I);
++ pSNMPMIB->oid = cpu_to_le16(RTSTHRESH_I);
+
+ if (cmd_action == CMD_ACT_GET) {
+ pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_GET);
+@@ -456,7 +547,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
+ pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
+ pSNMPMIB->bufsize = cpu_to_le16(sizeof(u16));
+ *((__le16 *)(pSNMPMIB->value)) =
+- cpu_to_le16((u16) adapter->txretrycount);
++ cpu_to_le16((u16) priv->txretrycount);
+ }
+
+ break;
+@@ -479,47 +570,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
+ return 0;
}
--static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_spectrum_measure_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_radio_control(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- int cmd_action)
+-{
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ds_802_11_radio_control *pradiocontrol = &cmd->params.radio;
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+-
+- cmd->size =
+- cpu_to_le16((sizeof(struct cmd_ds_802_11_radio_control)) +
+- S_DS_GEN);
+- cmd->command = cpu_to_le16(CMD_802_11_RADIO_CONTROL);
+-
+- pradiocontrol->action = cpu_to_le16(cmd_action);
+-
+- switch (adapter->preamble) {
+- case CMD_TYPE_SHORT_PREAMBLE:
+- pradiocontrol->control = cpu_to_le16(SET_SHORT_PREAMBLE);
+- break;
+-
+- case CMD_TYPE_LONG_PREAMBLE:
+- pradiocontrol->control = cpu_to_le16(SET_LONG_PREAMBLE);
+- break;
+-
+- case CMD_TYPE_AUTO_PREAMBLE:
+- default:
+- pradiocontrol->control = cpu_to_le16(SET_AUTO_PREAMBLE);
+- break;
+- }
+-
+- if (adapter->radioon)
+- pradiocontrol->control |= cpu_to_le16(TURN_ON_RF);
+- else
+- pradiocontrol->control &= cpu_to_le16(~TURN_ON_RF);
+-
+- lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
+-}
+-
+-static int wlan_cmd_802_11_rf_tx_power(wlan_private * priv,
++static int lbs_cmd_802_11_rf_tx_power(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action, void *pdata_buf)
{
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_spectrum_notification *report = &(pkt->u.spectrum_notif);
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_spectrum_notification *report = &(pkt->u.spectrum_notif);
-
- if (!report->state) {
- IWL_DEBUG(IWL_DL_11H | IWL_DL_INFO,
-@@ -3836,35 +3880,35 @@ static void iwl_rx_spectrum_measure_notif(struct iwl_priv *priv,
- #endif
+@@ -563,7 +614,7 @@ static int wlan_cmd_802_11_rf_tx_power(wlan_private * priv,
+ return 0;
}
--static void iwl_rx_pm_sleep_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_pm_sleep_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_monitor_mode(wlan_private * priv,
++static int lbs_cmd_802_11_monitor_mode(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action, void *pdata_buf)
{
--#ifdef CONFIG_IWLWIFI_DEBUG
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_sleep_notification *sleep = &(pkt->u.sleep_notif);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_sleep_notification *sleep = &(pkt->u.sleep_notif);
- IWL_DEBUG_RX("sleep mode: %d, src: %d\n",
- sleep->pm_sleep_mode, sleep->pm_wakeup_src);
- #endif
+@@ -583,13 +634,12 @@ static int wlan_cmd_802_11_monitor_mode(wlan_private * priv,
+ return 0;
}
--static void iwl_rx_pm_debug_statistics_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_pm_debug_statistics_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_rate_adapt_rateset(wlan_private * priv,
++static int lbs_cmd_802_11_rate_adapt_rateset(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- IWL_DEBUG_RADIO("Dumping %d bytes of unhandled "
- "notification for %s:\n",
- le32_to_cpu(pkt->len), get_cmd_string(pkt->hdr.cmd));
-- iwl_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
-+ iwl4965_print_hex_dump(IWL_DL_RADIO, pkt->u.raw, le32_to_cpu(pkt->len));
+ struct cmd_ds_802_11_rate_adapt_rateset
+ *rateadapt = &cmd->params.rateset;
+- wlan_adapter *adapter = priv->adapter;
+
+ lbs_deb_enter(LBS_DEB_CMD);
+ cmd->size =
+@@ -598,46 +648,100 @@ static int wlan_cmd_802_11_rate_adapt_rateset(wlan_private * priv,
+ cmd->command = cpu_to_le16(CMD_802_11_RATE_ADAPT_RATESET);
+
+ rateadapt->action = cpu_to_le16(cmd_action);
+- rateadapt->enablehwauto = cpu_to_le16(adapter->enablehwauto);
+- rateadapt->bitmap = cpu_to_le16(adapter->ratebitmap);
++ rateadapt->enablehwauto = cpu_to_le16(priv->enablehwauto);
++ rateadapt->bitmap = cpu_to_le16(priv->ratebitmap);
+
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
--static void iwl_bg_beacon_update(struct work_struct *work)
-+static void iwl4965_bg_beacon_update(struct work_struct *work)
+-static int wlan_cmd_802_11_data_rate(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u16 cmd_action)
++/**
++ * @brief Get the current data rate
++ *
++ * @param priv A pointer to struct lbs_private structure
++ *
++ * @return The data rate on success, error on failure
++ */
++int lbs_get_data_rate(struct lbs_private *priv)
{
-- struct iwl_priv *priv =
-- container_of(work, struct iwl_priv, beacon_update);
-+ struct iwl4965_priv *priv =
-+ container_of(work, struct iwl4965_priv, beacon_update);
- struct sk_buff *beacon;
+- struct cmd_ds_802_11_data_rate *pdatarate = &cmd->params.drate;
+- wlan_adapter *adapter = priv->adapter;
++ struct cmd_ds_802_11_data_rate cmd;
++ int ret = -1;
- /* Pull updated AP beacon from mac80211. will fail if not in AP mode */
-- beacon = ieee80211_beacon_get(priv->hw, priv->interface_id, NULL);
-+ beacon = ieee80211_beacon_get(priv->hw, priv->vif, NULL);
+ lbs_deb_enter(LBS_DEB_CMD);
- if (!beacon) {
- IWL_ERROR("update beacon failed\n");
-@@ -3879,16 +3923,16 @@ static void iwl_bg_beacon_update(struct work_struct *work)
- priv->ibss_beacon = beacon;
- mutex_unlock(&priv->mutex);
+- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_data_rate) +
+- S_DS_GEN);
+- cmd->command = cpu_to_le16(CMD_802_11_DATA_RATE);
+- memset(pdatarate, 0, sizeof(struct cmd_ds_802_11_data_rate));
+- pdatarate->action = cpu_to_le16(cmd_action);
+-
+- if (cmd_action == CMD_ACT_SET_TX_FIX_RATE) {
+- pdatarate->rates[0] = libertas_data_rate_to_fw_index(adapter->cur_rate);
+- lbs_deb_cmd("DATA_RATE: set fixed 0x%02X\n",
+- adapter->cur_rate);
+- } else if (cmd_action == CMD_ACT_SET_TX_AUTO) {
++ memset(&cmd, 0, sizeof(cmd));
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(CMD_ACT_GET_TX_RATE);
++
++ ret = lbs_cmd_with_response(priv, CMD_802_11_DATA_RATE, &cmd);
++ if (ret)
++ goto out;
++
++ lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) &cmd, sizeof (cmd));
++
++ ret = (int) lbs_fw_index_to_data_rate(cmd.rates[0]);
++ lbs_deb_cmd("DATA_RATE: current rate 0x%02x\n", ret);
++
++out:
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
++ return ret;
++}
++
++/**
++ * @brief Set the data rate
++ *
++ * @param priv A pointer to struct lbs_private structure
++ * @param rate The desired data rate, or 0 to clear a locked rate
++ *
++ * @return 0 on success, error on failure
++ */
++int lbs_set_data_rate(struct lbs_private *priv, u8 rate)
++{
++ struct cmd_ds_802_11_data_rate cmd;
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_CMD);
++
++ memset(&cmd, 0, sizeof(cmd));
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++
++ if (rate > 0) {
++ cmd.action = cpu_to_le16(CMD_ACT_SET_TX_FIX_RATE);
++ cmd.rates[0] = lbs_data_rate_to_fw_index(rate);
++ if (cmd.rates[0] == 0) {
++ lbs_deb_cmd("DATA_RATE: invalid requested rate of"
++ " 0x%02X\n", rate);
++ ret = 0;
++ goto out;
++ }
++ lbs_deb_cmd("DATA_RATE: set fixed 0x%02X\n", cmd.rates[0]);
++ } else {
++ cmd.action = cpu_to_le16(CMD_ACT_SET_TX_AUTO);
+ lbs_deb_cmd("DATA_RATE: setting auto\n");
+ }
-- iwl_send_beacon_cmd(priv);
-+ iwl4965_send_beacon_cmd(priv);
+- lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
++ ret = lbs_cmd_with_response(priv, CMD_802_11_DATA_RATE, &cmd);
++ if (ret)
++ goto out;
++
++ lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) &cmd, sizeof (cmd));
++
++ /* FIXME: get actual rates FW can do if this command actually returns
++ * all data rates supported.
++ */
++ priv->cur_rate = lbs_fw_index_to_data_rate(cmd.rates[0]);
++ lbs_deb_cmd("DATA_RATE: current rate is 0x%02x\n", priv->cur_rate);
++
++out:
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
++ return ret;
}
--static void iwl_rx_beacon_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_beacon_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_mac_multicast_adr(wlan_private * priv,
++static int lbs_cmd_mac_multicast_adr(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action)
{
--#ifdef CONFIG_IWLWIFI_DEBUG
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_beacon_notif *beacon = &(pkt->u.beacon_status);
-- u8 rate = iwl_hw_get_rate(beacon->beacon_notify_hdr.rate_n_flags);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_beacon_notif *beacon = &(pkt->u.beacon_status);
-+ u8 rate = iwl4965_hw_get_rate(beacon->beacon_notify_hdr.rate_n_flags);
+ struct cmd_ds_mac_multicast_adr *pMCastAdr = &cmd->params.madr;
+- wlan_adapter *adapter = priv->adapter;
- IWL_DEBUG_RX("beacon status %x retries %d iss %d "
- "tsf %d %d rate %d\n",
-@@ -3905,25 +3949,25 @@ static void iwl_rx_beacon_notif(struct iwl_priv *priv,
+ lbs_deb_enter(LBS_DEB_CMD);
+ cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mac_multicast_adr) +
+@@ -647,39 +751,79 @@ static int wlan_cmd_mac_multicast_adr(wlan_private * priv,
+ lbs_deb_cmd("MULTICAST_ADR: setting %d addresses\n", pMCastAdr->nr_of_adrs);
+ pMCastAdr->action = cpu_to_le16(cmd_action);
+ pMCastAdr->nr_of_adrs =
+- cpu_to_le16((u16) adapter->nr_of_multicastmacaddr);
+- memcpy(pMCastAdr->maclist, adapter->multicastlist,
+- adapter->nr_of_multicastmacaddr * ETH_ALEN);
++ cpu_to_le16((u16) priv->nr_of_multicastmacaddr);
++ memcpy(pMCastAdr->maclist, priv->multicastlist,
++ priv->nr_of_multicastmacaddr * ETH_ALEN);
+
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
- /* Service response to REPLY_SCAN_CMD (0x80) */
--static void iwl_rx_reply_scan(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_reply_scan(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_rf_channel(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- int option, void *pdata_buf)
++/**
++ * @brief Get the radio channel
++ *
++ * @param priv A pointer to struct lbs_private structure
++ *
++ * @return The channel on success, error on failure
++ */
++int lbs_get_channel(struct lbs_private *priv)
{
--#ifdef CONFIG_IWLWIFI_DEBUG
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_scanreq_notification *notif =
-- (struct iwl_scanreq_notification *)pkt->u.raw;
-+#ifdef CONFIG_IWL4965_DEBUG
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_scanreq_notification *notif =
-+ (struct iwl4965_scanreq_notification *)pkt->u.raw;
+- struct cmd_ds_802_11_rf_channel *rfchan = &cmd->params.rfchannel;
++ struct cmd_ds_802_11_rf_channel cmd;
++ int ret = 0;
- IWL_DEBUG_RX("Scan request status = 0x%x\n", notif->status);
- #endif
+ lbs_deb_enter(LBS_DEB_CMD);
+- cmd->command = cpu_to_le16(CMD_802_11_RF_CHANNEL);
+- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_rf_channel) +
+- S_DS_GEN);
+
+- if (option == CMD_OPT_802_11_RF_CHANNEL_SET) {
+- rfchan->currentchannel = cpu_to_le16(*((u16 *) pdata_buf));
+- }
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(CMD_OPT_802_11_RF_CHANNEL_GET);
+
+- rfchan->action = cpu_to_le16(option);
++ ret = lbs_cmd_with_response(priv, CMD_802_11_RF_CHANNEL, &cmd);
++ if (ret)
++ goto out;
+
+- lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
++ ret = le16_to_cpu(cmd.channel);
++ lbs_deb_cmd("current radio channel is %d\n", ret);
++
++out:
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
++ return ret;
++}
++
++/**
++ * @brief Set the radio channel
++ *
++ * @param priv A pointer to struct lbs_private structure
++ * @param channel The desired channel, or 0 to clear a locked channel
++ *
++ * @return 0 on success, error on failure
++ */
++int lbs_set_channel(struct lbs_private *priv, u8 channel)
++{
++ struct cmd_ds_802_11_rf_channel cmd;
++ u8 old_channel = priv->curbssparams.channel;
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_CMD);
++
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(CMD_OPT_802_11_RF_CHANNEL_SET);
++ cmd.channel = cpu_to_le16(channel);
++
++ ret = lbs_cmd_with_response(priv, CMD_802_11_RF_CHANNEL, &cmd);
++ if (ret)
++ goto out;
++
++ priv->curbssparams.channel = (uint8_t) le16_to_cpu(cmd.channel);
++ lbs_deb_cmd("channel switch from %d to %d\n", old_channel,
++ priv->curbssparams.channel);
++
++out:
++ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
++ return ret;
}
- /* Service SCAN_START_NOTIFICATION (0x82) */
--static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_scan_start_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_rssi(wlan_private * priv,
++static int lbs_cmd_802_11_rssi(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_scanstart_notification *notif =
-- (struct iwl_scanstart_notification *)pkt->u.raw;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_scanstart_notification *notif =
-+ (struct iwl4965_scanstart_notification *)pkt->u.raw;
- priv->scan_start_tsf = le32_to_cpu(notif->tsf_low);
- IWL_DEBUG_SCAN("Scan start: "
- "%d [802.11%s] "
-@@ -3935,12 +3979,12 @@ static void iwl_rx_scan_start_notif(struct iwl_priv *priv,
+- wlan_adapter *adapter = priv->adapter;
+
+ lbs_deb_enter(LBS_DEB_CMD);
+ cmd->command = cpu_to_le16(CMD_802_11_RSSI);
+@@ -687,28 +831,28 @@ static int wlan_cmd_802_11_rssi(wlan_private * priv,
+ cmd->params.rssi.N = cpu_to_le16(DEFAULT_BCN_AVG_FACTOR);
+
+ /* reset Beacon SNR/NF/RSSI values */
+- adapter->SNR[TYPE_BEACON][TYPE_NOAVG] = 0;
+- adapter->SNR[TYPE_BEACON][TYPE_AVG] = 0;
+- adapter->NF[TYPE_BEACON][TYPE_NOAVG] = 0;
+- adapter->NF[TYPE_BEACON][TYPE_AVG] = 0;
+- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG] = 0;
+- adapter->RSSI[TYPE_BEACON][TYPE_AVG] = 0;
++ priv->SNR[TYPE_BEACON][TYPE_NOAVG] = 0;
++ priv->SNR[TYPE_BEACON][TYPE_AVG] = 0;
++ priv->NF[TYPE_BEACON][TYPE_NOAVG] = 0;
++ priv->NF[TYPE_BEACON][TYPE_AVG] = 0;
++ priv->RSSI[TYPE_BEACON][TYPE_NOAVG] = 0;
++ priv->RSSI[TYPE_BEACON][TYPE_AVG] = 0;
+
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
- /* Service SCAN_RESULTS_NOTIFICATION (0x83) */
--static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_scan_results_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_reg_access(wlan_private * priv,
++static int lbs_cmd_reg_access(struct lbs_private *priv,
+ struct cmd_ds_command *cmdptr,
+ u8 cmd_action, void *pdata_buf)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_scanresults_notification *notif =
-- (struct iwl_scanresults_notification *)pkt->u.raw;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_scanresults_notification *notif =
-+ (struct iwl4965_scanresults_notification *)pkt->u.raw;
+- struct wlan_offset_value *offval;
++ struct lbs_offset_value *offval;
- IWL_DEBUG_SCAN("Scan ch.res: "
- "%d [802.11%s] "
-@@ -3956,14 +4000,15 @@ static void iwl_rx_scan_results_notif(struct iwl_priv *priv,
- (priv->last_scan_jiffies, jiffies)));
+ lbs_deb_enter(LBS_DEB_CMD);
- priv->last_scan_jiffies = jiffies;
-+ priv->next_scan_jiffies = 0;
+- offval = (struct wlan_offset_value *)pdata_buf;
++ offval = (struct lbs_offset_value *)pdata_buf;
+
+- switch (cmdptr->command) {
++ switch (le16_to_cpu(cmdptr->command)) {
+ case CMD_MAC_REG_ACCESS:
+ {
+ struct cmd_ds_mac_reg_access *macreg;
+@@ -773,11 +917,10 @@ static int wlan_cmd_reg_access(wlan_private * priv,
+ return 0;
}
- /* Service SCAN_COMPLETE_NOTIFICATION (0x84) */
--static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_scan_complete_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_mac_address(wlan_private * priv,
++static int lbs_cmd_802_11_mac_address(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-- struct iwl_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_scancomplete_notification *scan_notif = (void *)pkt->u.raw;
+- wlan_adapter *adapter = priv->adapter;
- IWL_DEBUG_SCAN("Scan complete: %d channels (TSF 0x%08X:%08X) - %d\n",
- scan_notif->scanned_channels,
-@@ -3998,6 +4043,7 @@ static void iwl_rx_scan_complete_notif(struct iwl_priv *priv,
- }
+ lbs_deb_enter(LBS_DEB_CMD);
+ cmd->command = cpu_to_le16(CMD_802_11_MAC_ADDRESS);
+@@ -789,19 +932,19 @@ static int wlan_cmd_802_11_mac_address(wlan_private * priv,
- priv->last_scan_jiffies = jiffies;
-+ priv->next_scan_jiffies = 0;
- IWL_DEBUG_INFO("Setting scan to off\n");
+ if (cmd_action == CMD_ACT_SET) {
+ memcpy(cmd->params.macadd.macadd,
+- adapter->current_addr, ETH_ALEN);
+- lbs_deb_hex(LBS_DEB_CMD, "SET_CMD: MAC addr", adapter->current_addr, 6);
++ priv->current_addr, ETH_ALEN);
++ lbs_deb_hex(LBS_DEB_CMD, "SET_CMD: MAC addr", priv->current_addr, 6);
+ }
- clear_bit(STATUS_SCANNING, &priv->status);
-@@ -4016,10 +4062,10 @@ reschedule:
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
+ }
- /* Handle notification from uCode that card's power state is changing
- * due to software, hardware, or critical temperature RFKILL */
--static void iwl_rx_card_state_notif(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_rx_card_state_notif(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-static int wlan_cmd_802_11_eeprom_access(wlan_private * priv,
++static int lbs_cmd_802_11_eeprom_access(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ int cmd_action, void *pdata_buf)
{
-- struct iwl_rx_packet *pkt = (void *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (void *)rxb->skb->data;
- u32 flags = le32_to_cpu(pkt->u.card_state_notif.flags);
- unsigned long status = priv->status;
+- struct wlan_ioctl_regrdwr *ea = pdata_buf;
++ struct lbs_ioctl_regrdwr *ea = pdata_buf;
-@@ -4030,35 +4076,35 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- if (flags & (SW_CARD_DISABLED | HW_CARD_DISABLED |
- RF_CARD_DISABLED)) {
+ lbs_deb_enter(LBS_DEB_CMD);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+@@ -819,7 +962,7 @@ static int wlan_cmd_802_11_eeprom_access(wlan_private * priv,
+ return 0;
+ }
-- if (!iwl_grab_restricted_access(priv)) {
-- iwl_write_restricted(
-+ if (!iwl4965_grab_nic_access(priv)) {
-+ iwl4965_write_direct32(
- priv, HBUS_TARG_MBX_C,
- HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED);
+-static int wlan_cmd_bt_access(wlan_private * priv,
++static int lbs_cmd_bt_access(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action, void *pdata_buf)
+ {
+@@ -857,7 +1000,7 @@ static int wlan_cmd_bt_access(wlan_private * priv,
+ return 0;
+ }
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- }
+-static int wlan_cmd_fwt_access(wlan_private * priv,
++static int lbs_cmd_fwt_access(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ u16 cmd_action, void *pdata_buf)
+ {
+@@ -879,47 +1022,72 @@ static int wlan_cmd_fwt_access(wlan_private * priv,
+ return 0;
+ }
- if (!(flags & RXON_CARD_DISABLED)) {
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
-- if (!iwl_grab_restricted_access(priv)) {
-- iwl_write_restricted(
-+ if (!iwl4965_grab_nic_access(priv)) {
-+ iwl4965_write_direct32(
- priv, HBUS_TARG_MBX_C,
- HBUS_TARG_MBX_C_REG_BIT_CMD_BLOCKED);
+-static int wlan_cmd_mesh_access(wlan_private * priv,
+- struct cmd_ds_command *cmd,
+- u16 cmd_action, void *pdata_buf)
++int lbs_mesh_access(struct lbs_private *priv, uint16_t cmd_action,
++ struct cmd_ds_mesh_access *cmd)
+ {
+- struct cmd_ds_mesh_access *mesh_access = &cmd->params.mesh;
++ int ret;
++
+ lbs_deb_enter_args(LBS_DEB_CMD, "action %d", cmd_action);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- }
- }
+- cmd->command = cpu_to_le16(CMD_MESH_ACCESS);
+- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mesh_access) + S_DS_GEN);
+- cmd->result = 0;
++ cmd->hdr.command = cpu_to_le16(CMD_MESH_ACCESS);
++ cmd->hdr.size = cpu_to_le16(sizeof(*cmd));
++ cmd->hdr.result = 0;
- if (flags & RF_CARD_DISABLED) {
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_SET,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_SET,
- CSR_UCODE_DRV_GP1_REG_BIT_CT_KILL_EXIT);
-- iwl_read32(priv, CSR_UCODE_DRV_GP1);
-- if (!iwl_grab_restricted_access(priv))
-- iwl_release_restricted_access(priv);
-+ iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
-+ if (!iwl4965_grab_nic_access(priv))
-+ iwl4965_release_nic_access(priv);
- }
- }
+- if (pdata_buf)
+- memcpy(mesh_access, pdata_buf, sizeof(*mesh_access));
+- else
+- memset(mesh_access, 0, sizeof(*mesh_access));
++ cmd->action = cpu_to_le16(cmd_action);
-@@ -4074,7 +4120,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- clear_bit(STATUS_RF_KILL_SW, &priv->status);
+- mesh_access->action = cpu_to_le16(cmd_action);
++ ret = lbs_cmd_with_response(priv, CMD_MESH_ACCESS, cmd);
- if (!(flags & RXON_CARD_DISABLED))
-- iwl_scan_cancel(priv);
-+ iwl4965_scan_cancel(priv);
+ lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
++ return ret;
+ }
++EXPORT_SYMBOL_GPL(lbs_mesh_access);
- if ((test_bit(STATUS_RF_KILL_HW, &status) !=
- test_bit(STATUS_RF_KILL_HW, &priv->status)) ||
-@@ -4086,7 +4132,7 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
+-static int wlan_cmd_set_boot2_ver(wlan_private * priv,
++int lbs_mesh_config(struct lbs_private *priv, uint16_t enable, uint16_t chan)
++{
++ struct cmd_ds_mesh_config cmd;
++
++ memset(&cmd, 0, sizeof(cmd));
++ cmd.action = cpu_to_le16(enable);
++ cmd.channel = cpu_to_le16(chan);
++ cmd.type = cpu_to_le16(priv->mesh_tlv);
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++
++ if (enable) {
++ cmd.length = cpu_to_le16(priv->mesh_ssid_len);
++ memcpy(cmd.data, priv->mesh_ssid, priv->mesh_ssid_len);
++ }
++ lbs_deb_cmd("mesh config enable %d TLV %x channel %d SSID %s\n",
++ enable, priv->mesh_tlv, chan,
++ escape_essid(priv->mesh_ssid, priv->mesh_ssid_len));
++ return lbs_cmd_with_response(priv, CMD_MESH_CONFIG, &cmd);
++}
++
++static int lbs_cmd_bcn_ctrl(struct lbs_private * priv,
+ struct cmd_ds_command *cmd,
+- u16 cmd_action, void *pdata_buf)
++ u16 cmd_action)
+ {
+- struct cmd_ds_set_boot2_ver *boot2_ver = &cmd->params.boot2_ver;
+- cmd->command = cpu_to_le16(CMD_SET_BOOT2_VER);
+- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_set_boot2_ver) + S_DS_GEN);
+- boot2_ver->version = priv->boot2_version;
++ struct cmd_ds_802_11_beacon_control
++ *bcn_ctrl = &cmd->params.bcn_ctrl;
++
++ lbs_deb_enter(LBS_DEB_CMD);
++ cmd->size =
++ cpu_to_le16(sizeof(struct cmd_ds_802_11_beacon_control)
++ + S_DS_GEN);
++ cmd->command = cpu_to_le16(CMD_802_11_BEACON_CTRL);
++
++ bcn_ctrl->action = cpu_to_le16(cmd_action);
++ bcn_ctrl->beacon_enable = cpu_to_le16(priv->beacon_enable);
++ bcn_ctrl->beacon_period = cpu_to_le16(priv->beacon_period);
++
++ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
- /**
-- * iwl_setup_rx_handlers - Initialize Rx handler callbacks
-+ * iwl4965_setup_rx_handlers - Initialize Rx handler callbacks
- *
- * Setup the RX handlers for each of the reply types sent from the uCode
- * to the host.
-@@ -4094,61 +4140,58 @@ static void iwl_rx_card_state_notif(struct iwl_priv *priv,
- * This function chains into the hardware specific files for them to setup
- * any hardware specific handlers as well.
- */
--static void iwl_setup_rx_handlers(struct iwl_priv *priv)
-+static void iwl4965_setup_rx_handlers(struct iwl4965_priv *priv)
+-/*
+- * Note: NEVER use libertas_queue_cmd() with addtail==0 other than for
+- * the command timer, because it does not account for queued commands.
+- */
+-void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u8 addtail)
++static void lbs_queue_cmd(struct lbs_private *priv,
++ struct cmd_ctrl_node *cmdnode)
{
-- priv->rx_handlers[REPLY_ALIVE] = iwl_rx_reply_alive;
-- priv->rx_handlers[REPLY_ADD_STA] = iwl_rx_reply_add_sta;
-- priv->rx_handlers[REPLY_ERROR] = iwl_rx_reply_error;
-- priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl_rx_csa;
-+ priv->rx_handlers[REPLY_ALIVE] = iwl4965_rx_reply_alive;
-+ priv->rx_handlers[REPLY_ADD_STA] = iwl4965_rx_reply_add_sta;
-+ priv->rx_handlers[REPLY_ERROR] = iwl4965_rx_reply_error;
-+ priv->rx_handlers[CHANNEL_SWITCH_NOTIFICATION] = iwl4965_rx_csa;
- priv->rx_handlers[SPECTRUM_MEASURE_NOTIFICATION] =
-- iwl_rx_spectrum_measure_notif;
-- priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl_rx_pm_sleep_notif;
-+ iwl4965_rx_spectrum_measure_notif;
-+ priv->rx_handlers[PM_SLEEP_NOTIFICATION] = iwl4965_rx_pm_sleep_notif;
- priv->rx_handlers[PM_DEBUG_STATISTIC_NOTIFIC] =
-- iwl_rx_pm_debug_statistics_notif;
-- priv->rx_handlers[BEACON_NOTIFICATION] = iwl_rx_beacon_notif;
+ unsigned long flags;
+- struct cmd_ds_command *cmdptr;
++ int addtail = 1;
+
+ lbs_deb_enter(LBS_DEB_HOST);
+
+@@ -927,118 +1095,87 @@ void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u
+ lbs_deb_host("QUEUE_CMD: cmdnode is NULL\n");
+ goto done;
+ }
-
-- /* NOTE: iwl_rx_statistics is different based on whether
-- * the build is for the 3945 or the 4965. See the
-- * corresponding implementation in iwl-XXXX.c
-- *
-- * The same handler is used for both the REPLY to a
-- * discrete statistics request from the host as well as
-- * for the periodic statistics notification from the uCode
-+ iwl4965_rx_pm_debug_statistics_notif;
-+ priv->rx_handlers[BEACON_NOTIFICATION] = iwl4965_rx_beacon_notif;
+- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
+- if (!cmdptr) {
+- lbs_deb_host("QUEUE_CMD: cmdptr is NULL\n");
++ if (!cmdnode->cmdbuf->size) {
++ lbs_deb_host("DNLD_CMD: cmd size is zero\n");
+ goto done;
+ }
++ cmdnode->result = 0;
+
+ /* Exit_PS command needs to be queued in the header always. */
+- if (cmdptr->command == CMD_802_11_PS_MODE) {
+- struct cmd_ds_802_11_ps_mode *psm = &cmdptr->params.psmode;
++ if (le16_to_cpu(cmdnode->cmdbuf->command) == CMD_802_11_PS_MODE) {
++ struct cmd_ds_802_11_ps_mode *psm = (void *) &cmdnode->cmdbuf[1];
+
-+ /*
-+ * The same handler is used for both the REPLY to a discrete
-+ * statistics request from the host as well as for the periodic
-+ * statistics notifications (after received beacons) from the uCode.
- */
-- priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl_hw_rx_statistics;
-- priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl_hw_rx_statistics;
-+ priv->rx_handlers[REPLY_STATISTICS_CMD] = iwl4965_hw_rx_statistics;
-+ priv->rx_handlers[STATISTICS_NOTIFICATION] = iwl4965_hw_rx_statistics;
+ if (psm->action == cpu_to_le16(CMD_SUBCMD_EXIT_PS)) {
+- if (adapter->psstate != PS_STATE_FULL_POWER)
++ if (priv->psstate != PS_STATE_FULL_POWER)
+ addtail = 0;
+ }
+ }
-- priv->rx_handlers[REPLY_SCAN_CMD] = iwl_rx_reply_scan;
-- priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl_rx_scan_start_notif;
-+ priv->rx_handlers[REPLY_SCAN_CMD] = iwl4965_rx_reply_scan;
-+ priv->rx_handlers[SCAN_START_NOTIFICATION] = iwl4965_rx_scan_start_notif;
- priv->rx_handlers[SCAN_RESULTS_NOTIFICATION] =
-- iwl_rx_scan_results_notif;
-+ iwl4965_rx_scan_results_notif;
- priv->rx_handlers[SCAN_COMPLETE_NOTIFICATION] =
-- iwl_rx_scan_complete_notif;
-- priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl_rx_card_state_notif;
-- priv->rx_handlers[REPLY_TX] = iwl_rx_reply_tx;
-+ iwl4965_rx_scan_complete_notif;
-+ priv->rx_handlers[CARD_STATE_NOTIFICATION] = iwl4965_rx_card_state_notif;
-+ priv->rx_handlers[REPLY_TX] = iwl4965_rx_reply_tx;
+- spin_lock_irqsave(&adapter->driver_lock, flags);
++ spin_lock_irqsave(&priv->driver_lock, flags);
-- /* Setup hardware specific Rx handlers */
-- iwl_hw_rx_handler_setup(priv);
-+ /* Set up hardware specific Rx handlers */
-+ iwl4965_hw_rx_handler_setup(priv);
+- if (addtail) {
+- list_add_tail((struct list_head *)cmdnode,
+- &adapter->cmdpendingq);
+- adapter->nr_cmd_pending++;
+- } else
+- list_add((struct list_head *)cmdnode, &adapter->cmdpendingq);
++ if (addtail)
++ list_add_tail(&cmdnode->list, &priv->cmdpendingq);
++ else
++ list_add(&cmdnode->list, &priv->cmdpendingq);
+
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+
+ lbs_deb_host("QUEUE_CMD: inserted command 0x%04x into cmdpendingq\n",
+- le16_to_cpu(((struct cmd_ds_gen*)cmdnode->bufvirtualaddr)->command));
++ le16_to_cpu(cmdnode->cmdbuf->command));
+
+ done:
+ lbs_deb_leave(LBS_DEB_HOST);
}
- /**
-- * iwl_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
-+ * iwl4965_tx_cmd_complete - Pull unused buffers off the queue and reclaim them
- * @rxb: Rx buffer to reclaim
- *
- * If an Rx buffer has an async callback associated with it the callback
- * will be executed. The attached skb (if present) will only be freed
- * if the callback returns 1
- */
--static void iwl_tx_cmd_complete(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb)
-+static void iwl4965_tx_cmd_complete(struct iwl4965_priv *priv,
-+ struct iwl4965_rx_mem_buffer *rxb)
+-/*
+- * TODO: Fix the issue when DownloadcommandToStation is being called the
+- * second time when the command times out. All the cmdptr->xxx are in little
+- * endian and therefore all the comparissions will fail.
+- * For now - we are not performing the endian conversion the second time - but
+- * for PS and DEEP_SLEEP we need to worry
+- */
+-static int DownloadcommandToStation(wlan_private * priv,
+- struct cmd_ctrl_node *cmdnode)
++static void lbs_submit_command(struct lbs_private *priv,
++ struct cmd_ctrl_node *cmdnode)
{
-- struct iwl_rx_packet *pkt = (struct iwl_rx_packet *)rxb->skb->data;
-+ struct iwl4965_rx_packet *pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
- u16 sequence = le16_to_cpu(pkt->hdr.sequence);
- int txq_id = SEQ_TO_QUEUE(sequence);
- int index = SEQ_TO_INDEX(sequence);
- int huge = sequence & SEQ_HUGE_FRAME;
- int cmd_index;
-- struct iwl_cmd *cmd;
-+ struct iwl4965_cmd *cmd;
-
- /* If a Tx command is being handled and it isn't in the actual
- * command queue then there a command routing bug has been introduced
-@@ -4169,7 +4212,7 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- !cmd->meta.u.callback(priv, cmd, rxb->skb))
- rxb->skb = NULL;
+ unsigned long flags;
+- struct cmd_ds_command *cmdptr;
+- wlan_adapter *adapter = priv->adapter;
+- int ret = -1;
+- u16 cmdsize;
+- u16 command;
++ struct cmd_header *cmd;
++ uint16_t cmdsize;
++ uint16_t command;
++ int timeo = 5 * HZ;
++ int ret;
-- iwl_tx_queue_reclaim(priv, txq_id, index);
-+ iwl4965_tx_queue_reclaim(priv, txq_id, index);
+ lbs_deb_enter(LBS_DEB_HOST);
- if (!(cmd->meta.flags & CMD_ASYNC)) {
- clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
-@@ -4181,9 +4224,11 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- /*
- * Rx theory of operation
- *
-- * The host allocates 32 DMA target addresses and passes the host address
-- * to the firmware at register IWL_RFDS_TABLE_LOWER + N * RFD_SIZE where N is
-- * 0 to 31
-+ * Driver allocates a circular buffer of Receive Buffer Descriptors (RBDs),
-+ * each of which point to Receive Buffers to be filled by 4965. These get
-+ * used not only for Rx frames, but for any command response or notification
-+ * from the 4965. The driver and 4965 manage the Rx buffers by means
-+ * of indexes into the circular buffer.
- *
- * Rx Queue Indexes
- * The host/firmware share two index registers for managing the Rx buffers.
-@@ -4199,10 +4244,10 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- * The queue is empty (no good data) if WRITE = READ - 1, and is full if
- * WRITE = READ.
- *
-- * During initialization the host sets up the READ queue position to the first
-+ * During initialization, the host sets up the READ queue position to the first
- * INDEX position, and WRITE to the last (READ - 1 wrapped)
- *
-- * When the firmware places a packet in a buffer it will advance the READ index
-+ * When the firmware places a packet in a buffer, it will advance the READ index
- * and fire the RX interrupt. The driver can then query the READ index and
- * process as many packets as possible, moving the WRITE index forward as it
- * resets the Rx queue buffers with new memory.
-@@ -4210,8 +4255,8 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- * The management in the driver is as follows:
- * + A list of pre-allocated SKBs is stored in iwl->rxq->rx_free. When
- * iwl->rxq->free_count drops to or below RX_LOW_WATERMARK, work is scheduled
-- * to replensish the iwl->rxq->rx_free.
-- * + In iwl_rx_replenish (scheduled) if 'processed' != 'read' then the
-+ * to replenish the iwl->rxq->rx_free.
-+ * + In iwl4965_rx_replenish (scheduled) if 'processed' != 'read' then the
- * iwl->rxq is replenished and the READ INDEX is updated (updating the
- * 'processed' and 'read' driver indexes as well)
- * + A received packet is processed and handed to the kernel network stack,
-@@ -4224,28 +4269,28 @@ static void iwl_tx_cmd_complete(struct iwl_priv *priv,
- *
- * Driver sequence:
- *
-- * iwl_rx_queue_alloc() Allocates rx_free
-- * iwl_rx_replenish() Replenishes rx_free list from rx_used, and calls
-- * iwl_rx_queue_restock
-- * iwl_rx_queue_restock() Moves available buffers from rx_free into Rx
-+ * iwl4965_rx_queue_alloc() Allocates rx_free
-+ * iwl4965_rx_replenish() Replenishes rx_free list from rx_used, and calls
-+ * iwl4965_rx_queue_restock
-+ * iwl4965_rx_queue_restock() Moves available buffers from rx_free into Rx
- * queue, updates firmware pointers, and updates
- * the WRITE index. If insufficient rx_free buffers
-- * are available, schedules iwl_rx_replenish
-+ * are available, schedules iwl4965_rx_replenish
- *
- * -- enable interrupts --
-- * ISR - iwl_rx() Detach iwl_rx_mem_buffers from pool up to the
-+ * ISR - iwl4965_rx() Detach iwl4965_rx_mem_buffers from pool up to the
- * READ INDEX, detaching the SKB from the pool.
- * Moves the packet buffer from queue to rx_used.
-- * Calls iwl_rx_queue_restock to refill any empty
-+ * Calls iwl4965_rx_queue_restock to refill any empty
- * slots.
- * ...
- *
- */
+- if (!adapter || !cmdnode) {
+- lbs_deb_host("DNLD_CMD: adapter or cmdmode is NULL\n");
+- goto done;
+- }
++ cmd = cmdnode->cmdbuf;
- /**
-- * iwl_rx_queue_space - Return number of free slots available in queue.
-+ * iwl4965_rx_queue_space - Return number of free slots available in queue.
- */
--static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
-+static int iwl4965_rx_queue_space(const struct iwl4965_rx_queue *q)
- {
- int s = q->read - q->write;
- if (s <= 0)
-@@ -4258,15 +4303,9 @@ static int iwl_rx_queue_space(const struct iwl_rx_queue *q)
- }
+- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ priv->cur_cmd = cmdnode;
++ priv->cur_cmd_retcode = 0;
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- /**
-- * iwl_rx_queue_update_write_ptr - Update the write pointer for the RX queue
-- *
-- * NOTE: This function has 3945 and 4965 specific code sections
-- * but is declared in base due to the majority of the
-- * implementation being the same (only a numeric constant is
-- * different)
-- *
-+ * iwl4965_rx_queue_update_write_ptr - Update the write pointer for the RX queue
- */
--int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
-+int iwl4965_rx_queue_update_write_ptr(struct iwl4965_priv *priv, struct iwl4965_rx_queue *q)
- {
- u32 reg = 0;
- int rc = 0;
-@@ -4277,24 +4316,29 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
- if (q->need_update == 0)
- goto exit_unlock;
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (!cmdptr || !cmdptr->size) {
+- lbs_deb_host("DNLD_CMD: cmdptr is NULL or zero\n");
+- __libertas_cleanup_and_insert_cmd(priv, cmdnode);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
+- goto done;
+- }
++ cmdsize = le16_to_cpu(cmd->size);
++ command = le16_to_cpu(cmd->command);
-+ /* If power-saving is in use, make sure device is awake */
- if (test_bit(STATUS_POWER_PMI, &priv->status)) {
-- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
-+ reg = iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
+- adapter->cur_cmd = cmdnode;
+- adapter->cur_cmd_retcode = 0;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ /* These commands take longer */
++ if (command == CMD_802_11_SCAN || command == CMD_802_11_ASSOCIATE ||
++ command == CMD_802_11_AUTHENTICATE)
++ timeo = 10 * HZ;
- if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
-- iwl_set_bit(priv, CSR_GP_CNTRL,
-+ iwl4965_set_bit(priv, CSR_GP_CNTRL,
- CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
- goto exit_unlock;
- }
+- cmdsize = cmdptr->size;
+- command = cpu_to_le16(cmdptr->command);
++ lbs_deb_host("DNLD_CMD: command 0x%04x, seq %d, size %d, jiffies %lu\n",
++ command, le16_to_cpu(cmd->seqnum), cmdsize, jiffies);
++ lbs_deb_hex(LBS_DEB_HOST, "DNLD_CMD", (void *) cmdnode->cmdbuf, cmdsize);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- goto exit_unlock;
+- lbs_deb_host("DNLD_CMD: command 0x%04x, size %d, jiffies %lu\n",
+- command, le16_to_cpu(cmdptr->size), jiffies);
+- lbs_deb_hex(LBS_DEB_HOST, "DNLD_CMD", cmdnode->bufvirtualaddr, cmdsize);
++ ret = priv->hw_host_to_card(priv, MVMS_CMD, (u8 *) cmd, cmdsize);
-- iwl_write_restricted(priv, FH_RSCSR_CHNL0_WPTR,
-+ /* Device expects a multiple of 8 */
-+ iwl4965_write_direct32(priv, FH_RSCSR_CHNL0_WPTR,
- q->write & ~0x7);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
-+
-+ /* Else device is assumed to be awake */
- } else
-- iwl_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
-+ /* Device expects a multiple of 8 */
-+ iwl4965_write32(priv, FH_RSCSR_CHNL0_WPTR, q->write & ~0x7);
+- cmdnode->cmdwaitqwoken = 0;
+- cmdsize = cpu_to_le16(cmdsize);
+-
+- ret = priv->hw_host_to_card(priv, MVMS_CMD, (u8 *) cmdptr, cmdsize);
+-
+- if (ret != 0) {
+- lbs_deb_host("DNLD_CMD: hw_host_to_card failed\n");
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- adapter->cur_cmd_retcode = ret;
+- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
+- adapter->nr_cmd_pending--;
+- adapter->cur_cmd = NULL;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
+- goto done;
+- }
+-
+- lbs_deb_cmd("DNLD_CMD: sent command 0x%04x, jiffies %lu\n", command, jiffies);
++ if (ret) {
++ lbs_pr_info("DNLD_CMD: hw_host_to_card failed: %d\n", ret);
++ /* Let the timer kick in and retry, and potentially reset
++ the whole thing if the condition persists */
++ timeo = HZ;
++ } else
++ lbs_deb_cmd("DNLD_CMD: sent command 0x%04x, jiffies %lu\n",
++ command, jiffies);
+ /* Setup the timer after transmit command */
+- if (command == CMD_802_11_SCAN || command == CMD_802_11_AUTHENTICATE
+- || command == CMD_802_11_ASSOCIATE)
+- mod_timer(&adapter->command_timer, jiffies + (10*HZ));
+- else
+- mod_timer(&adapter->command_timer, jiffies + (5*HZ));
+-
+- ret = 0;
++ mod_timer(&priv->command_timer, jiffies + timeo);
- q->need_update = 0;
-@@ -4305,11 +4349,9 @@ int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv, struct iwl_rx_queue *q)
+-done:
+- lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
+- return ret;
++ lbs_deb_leave(LBS_DEB_HOST);
}
- /**
-- * iwl_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer pointer.
-- *
-- * NOTE: This function has 3945 and 4965 specific code paths in it.
-+ * iwl4965_dma_addr2rbd_ptr - convert a DMA address to a uCode read buffer ptr
- */
--static inline __le32 iwl_dma_addr2rbd_ptr(struct iwl_priv *priv,
-+static inline __le32 iwl4965_dma_addr2rbd_ptr(struct iwl4965_priv *priv,
- dma_addr_t dma_addr)
+-static int wlan_cmd_mac_control(wlan_private * priv,
++static int lbs_cmd_mac_control(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
{
- return cpu_to_le32((u32)(dma_addr >> 8));
-@@ -4317,31 +4359,34 @@ static inline __le32 iwl_dma_addr2rbd_ptr(struct iwl_priv *priv,
+ struct cmd_ds_mac_control *mac = &cmd->params.macctrl;
+@@ -1047,7 +1184,7 @@ static int wlan_cmd_mac_control(wlan_private * priv,
+ cmd->command = cpu_to_le16(CMD_MAC_CONTROL);
+ cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mac_control) + S_DS_GEN);
+- mac->action = cpu_to_le16(priv->adapter->currentpacketfilter);
++ mac->action = cpu_to_le16(priv->currentpacketfilter);
+
+ lbs_deb_cmd("MAC_CONTROL: action 0x%x, size %d\n",
+ le16_to_cpu(mac->action), le16_to_cpu(cmd->size));
+@@ -1058,54 +1195,98 @@ static int wlan_cmd_mac_control(wlan_private * priv,
/**
-- * iwl_rx_queue_restock - refill RX queue from pre-allocated pool
-+ * iwl4965_rx_queue_restock - refill RX queue from pre-allocated pool
- *
-- * If there are slots in the RX queue that need to be restocked,
-+ * If there are slots in the RX queue that need to be restocked,
- * and we have free pre-allocated buffers, fill the ranks as much
-- * as we can pulling from rx_free.
-+ * as we can, pulling from rx_free.
- *
- * This moves the 'write' index forward to catch up with 'processed', and
- * also updates the memory address in the firmware to reference the new
- * target buffer.
+ * This function inserts command node to cmdfreeq
+- * after cleans it. Requires adapter->driver_lock held.
++ * after cleans it. Requires priv->driver_lock held.
*/
--int iwl_rx_queue_restock(struct iwl_priv *priv)
-+static int iwl4965_rx_queue_restock(struct iwl4965_priv *priv)
+-void __libertas_cleanup_and_insert_cmd(wlan_private * priv, struct cmd_ctrl_node *ptempcmd)
++static void __lbs_cleanup_and_insert_cmd(struct lbs_private *priv,
++ struct cmd_ctrl_node *cmdnode)
{
-- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl4965_rx_queue *rxq = &priv->rxq;
- struct list_head *element;
-- struct iwl_rx_mem_buffer *rxb;
-+ struct iwl4965_rx_mem_buffer *rxb;
- unsigned long flags;
- int write, rc;
+- wlan_adapter *adapter = priv->adapter;
++ lbs_deb_enter(LBS_DEB_HOST);
- spin_lock_irqsave(&rxq->lock, flags);
- write = rxq->write & ~0x7;
-- while ((iwl_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
-+ while ((iwl4965_rx_queue_space(rxq) > 0) && (rxq->free_count)) {
-+ /* Get next free Rx buffer, remove from free list */
- element = rxq->rx_free.next;
-- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
-+ rxb = list_entry(element, struct iwl4965_rx_mem_buffer, list);
- list_del(element);
-- rxq->bd[rxq->write] = iwl_dma_addr2rbd_ptr(priv, rxb->dma_addr);
+- if (!ptempcmd)
+- return;
++ if (!cmdnode)
++ goto out;
+
-+ /* Point to Rx buffer via next RBD in circular buffer */
-+ rxq->bd[rxq->write] = iwl4965_dma_addr2rbd_ptr(priv, rxb->dma_addr);
- rxq->queue[rxq->write] = rxb;
- rxq->write = (rxq->write + 1) & RX_QUEUE_MASK;
- rxq->free_count--;
-@@ -4353,13 +4398,14 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
- queue_work(priv->workqueue, &priv->rx_replenish);
-
++ cmdnode->callback = NULL;
++ cmdnode->callback_arg = 0;
-- /* If we've added more space for the firmware to place data, tell it */
-+ /* If we've added more space for the firmware to place data, tell it.
-+ * Increment device's write pointer in multiples of 8. */
- if ((write != (rxq->write & ~0x7))
- || (abs(rxq->write - rxq->read) > 7)) {
- spin_lock_irqsave(&rxq->lock, flags);
- rxq->need_update = 1;
- spin_unlock_irqrestore(&rxq->lock, flags);
-- rc = iwl_rx_queue_update_write_ptr(priv, rxq);
-+ rc = iwl4965_rx_queue_update_write_ptr(priv, rxq);
- if (rc)
- return rc;
- }
-@@ -4368,26 +4414,28 @@ int iwl_rx_queue_restock(struct iwl_priv *priv)
+- cleanup_cmdnode(ptempcmd);
+- list_add_tail((struct list_head *)ptempcmd, &adapter->cmdfreeq);
++ memset(cmdnode->cmdbuf, 0, LBS_CMD_BUFFER_SIZE);
++
++ list_add_tail(&cmdnode->list, &priv->cmdfreeq);
++ out:
++ lbs_deb_leave(LBS_DEB_HOST);
}
- /**
-- * iwl_rx_replensih - Move all used packet from rx_used to rx_free
-+ * iwl4965_rx_replenish - Move all used packet from rx_used to rx_free
- *
- * When moving to rx_free an SKB is allocated for the slot.
- *
-- * Also restock the Rx queue via iwl_rx_queue_restock.
-- * This is called as a scheduled work item (except for during intialization)
-+ * Also restock the Rx queue via iwl4965_rx_queue_restock.
-+ * This is called as a scheduled work item (except for during initialization)
- */
--void iwl_rx_replenish(void *data)
-+static void iwl4965_rx_allocate(struct iwl4965_priv *priv)
+-static void libertas_cleanup_and_insert_cmd(wlan_private * priv, struct cmd_ctrl_node *ptempcmd)
++static void lbs_cleanup_and_insert_cmd(struct lbs_private *priv,
++ struct cmd_ctrl_node *ptempcmd)
{
-- struct iwl_priv *priv = data;
-- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl4965_rx_queue *rxq = &priv->rxq;
- struct list_head *element;
-- struct iwl_rx_mem_buffer *rxb;
-+ struct iwl4965_rx_mem_buffer *rxb;
unsigned long flags;
- spin_lock_irqsave(&rxq->lock, flags);
- while (!list_empty(&rxq->rx_used)) {
- element = rxq->rx_used.next;
-- rxb = list_entry(element, struct iwl_rx_mem_buffer, list);
-+ rxb = list_entry(element, struct iwl4965_rx_mem_buffer, list);
+
+- spin_lock_irqsave(&priv->adapter->driver_lock, flags);
+- __libertas_cleanup_and_insert_cmd(priv, ptempcmd);
+- spin_unlock_irqrestore(&priv->adapter->driver_lock, flags);
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ __lbs_cleanup_and_insert_cmd(priv, ptempcmd);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ }
+
+-int libertas_set_radio_control(wlan_private * priv)
++void lbs_complete_command(struct lbs_private *priv, struct cmd_ctrl_node *cmd,
++ int result)
++{
++ if (cmd == priv->cur_cmd)
++ priv->cur_cmd_retcode = result;
+
-+ /* Alloc a new receive buffer */
- rxb->skb =
-- alloc_skb(IWL_RX_BUF_SIZE, __GFP_NOWARN | GFP_ATOMIC);
-+ alloc_skb(priv->hw_setting.rx_buf_size,
-+ __GFP_NOWARN | GFP_ATOMIC);
- if (!rxb->skb) {
- if (net_ratelimit())
- printk(KERN_CRIT DRV_NAME
-@@ -4399,32 +4447,55 @@ void iwl_rx_replenish(void *data)
- }
- priv->alloc_rxb_skb++;
- list_del(element);
++ cmd->result = result;
++ cmd->cmdwaitqwoken = 1;
++ wake_up_interruptible(&cmd->cmdwait_q);
+
-+ /* Get physical address of RB/SKB */
- rxb->dma_addr =
- pci_map_single(priv->pci_dev, rxb->skb->data,
-- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
-+ priv->hw_setting.rx_buf_size, PCI_DMA_FROMDEVICE);
- list_add_tail(&rxb->list, &rxq->rx_free);
- rxq->free_count++;
- }
- spin_unlock_irqrestore(&rxq->lock, flags);
++ if (!cmd->callback)
++ __lbs_cleanup_and_insert_cmd(priv, cmd);
++ priv->cur_cmd = NULL;
+}
+
-+/*
-+ * this should be called while priv->lock is locked
-+*/
-+static void __iwl4965_rx_replenish(void *data)
-+{
-+ struct iwl4965_priv *priv = data;
++int lbs_set_radio_control(struct lbs_private *priv)
+ {
+ int ret = 0;
++ struct cmd_ds_802_11_radio_control cmd;
+
+ lbs_deb_enter(LBS_DEB_CMD);
+
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_802_11_RADIO_CONTROL,
+- CMD_ACT_SET,
+- CMD_OPTION_WAITFORRSP, 0, NULL);
++ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
++ cmd.action = cpu_to_le16(CMD_ACT_SET);
+
-+ iwl4965_rx_allocate(priv);
-+ iwl4965_rx_queue_restock(priv);
-+}
++ switch (priv->preamble) {
++ case CMD_TYPE_SHORT_PREAMBLE:
++ cmd.control = cpu_to_le16(SET_SHORT_PREAMBLE);
++ break;
+
++ case CMD_TYPE_LONG_PREAMBLE:
++ cmd.control = cpu_to_le16(SET_LONG_PREAMBLE);
++ break;
+
-+void iwl4965_rx_replenish(void *data)
-+{
-+ struct iwl4965_priv *priv = data;
-+ unsigned long flags;
++ case CMD_TYPE_AUTO_PREAMBLE:
++ default:
++ cmd.control = cpu_to_le16(SET_AUTO_PREAMBLE);
++ break;
++ }
+
-+ iwl4965_rx_allocate(priv);
++ if (priv->radioon)
++ cmd.control |= cpu_to_le16(TURN_ON_RF);
++ else
++ cmd.control &= cpu_to_le16(~TURN_ON_RF);
++
++ lbs_deb_cmd("RADIO_SET: radio %d, preamble %d\n", priv->radioon,
++ priv->preamble);
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_rx_queue_restock(priv);
-+ iwl4965_rx_queue_restock(priv);
- spin_unlock_irqrestore(&priv->lock, flags);
- }
+- lbs_deb_cmd("RADIO_SET: radio %d, preamble %d\n",
+- priv->adapter->radioon, priv->adapter->preamble);
++ ret = lbs_cmd_with_response(priv, CMD_802_11_RADIO_CONTROL, &cmd);
- /* Assumes that the skb field of the buffers in 'pool' is kept accurate.
-- * If an SKB has been detached, the POOL needs to have it's SKB set to NULL
-+ * If an SKB has been detached, the POOL needs to have its SKB set to NULL
- * This free routine walks the list of POOL entries and if SKB is set to
- * non NULL it is unmapped and freed
- */
--void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
-+static void iwl4965_rx_queue_free(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
- {
- int i;
- for (i = 0; i < RX_QUEUE_SIZE + RX_FREE_BUFFERS; i++) {
- if (rxq->pool[i].skb != NULL) {
- pci_unmap_single(priv->pci_dev,
- rxq->pool[i].dma_addr,
-- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
-+ priv->hw_setting.rx_buf_size,
-+ PCI_DMA_FROMDEVICE);
- dev_kfree_skb(rxq->pool[i].skb);
- }
- }
-@@ -4434,21 +4505,25 @@ void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
- rxq->bd = NULL;
+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ return ret;
}
--int iwl_rx_queue_alloc(struct iwl_priv *priv)
-+int iwl4965_rx_queue_alloc(struct iwl4965_priv *priv)
+-int libertas_set_mac_packet_filter(wlan_private * priv)
++int lbs_set_mac_packet_filter(struct lbs_private *priv)
{
-- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl4965_rx_queue *rxq = &priv->rxq;
- struct pci_dev *dev = priv->pci_dev;
- int i;
+ int ret = 0;
- spin_lock_init(&rxq->lock);
- INIT_LIST_HEAD(&rxq->rx_free);
- INIT_LIST_HEAD(&rxq->rx_used);
-+
-+ /* Alloc the circular buffer of Read Buffer Descriptors (RBDs) */
- rxq->bd = pci_alloc_consistent(dev, 4 * RX_QUEUE_SIZE, &rxq->dma_addr);
- if (!rxq->bd)
- return -ENOMEM;
-+
- /* Fill the rx_used queue with _all_ of the Rx buffers */
- for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++)
- list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
-+
- /* Set us so that we have processed and used all buffers, but have
- * not restocked the Rx queue with fresh buffers */
- rxq->read = rxq->write = 0;
-@@ -4457,7 +4532,7 @@ int iwl_rx_queue_alloc(struct iwl_priv *priv)
- return 0;
- }
+ lbs_deb_enter(LBS_DEB_CMD);
--void iwl_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
-+void iwl4965_rx_queue_reset(struct iwl4965_priv *priv, struct iwl4965_rx_queue *rxq)
- {
- unsigned long flags;
- int i;
-@@ -4471,7 +4546,8 @@ void iwl_rx_queue_reset(struct iwl_priv *priv, struct iwl_rx_queue *rxq)
- if (rxq->pool[i].skb != NULL) {
- pci_unmap_single(priv->pci_dev,
- rxq->pool[i].dma_addr,
-- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
-+ priv->hw_setting.rx_buf_size,
-+ PCI_DMA_FROMDEVICE);
- priv->alloc_rxb_skb--;
- dev_kfree_skb(rxq->pool[i].skb);
- rxq->pool[i].skb = NULL;
-@@ -4504,7 +4580,7 @@ static u8 ratio2dB[100] = {
- /* Calculates a relative dB value from a ratio of linear
- * (i.e. not dB) signal levels.
- * Conversion assumes that levels are voltages (20*log), not powers (10*log). */
--int iwl_calc_db_from_ratio(int sig_ratio)
-+int iwl4965_calc_db_from_ratio(int sig_ratio)
- {
- /* 1000:1 or higher just report as 60 dB */
- if (sig_ratio >= 1000)
-@@ -4530,7 +4606,7 @@ int iwl_calc_db_from_ratio(int sig_ratio)
- /* Calculate an indication of rx signal quality (a percentage, not dBm!).
- * See http://www.ces.clemson.edu/linux/signal_quality.shtml for info
- * about formulas used below. */
--int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
-+int iwl4965_calc_sig_qual(int rssi_dbm, int noise_dbm)
- {
- int sig_qual;
- int degradation = PERFECT_RSSI - rssi_dbm;
-@@ -4565,32 +4641,39 @@ int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm)
- }
+ /* Send MAC control command to station */
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_MAC_CONTROL, 0, 0, 0, NULL);
+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+@@ -1115,7 +1296,7 @@ int libertas_set_mac_packet_filter(wlan_private * priv)
/**
-- * iwl_rx_handle - Main entry function for receiving responses from the uCode
-+ * iwl4965_rx_handle - Main entry function for receiving responses from uCode
+ * @brief This function prepare the command before send to firmware.
*
- * Uses the priv->rx_handlers callback function array to invoke
- * the appropriate handlers, including command responses,
- * frame-received notifications, and other notifications.
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param cmd_no command number
+ * @param cmd_action command action: GET or SET
+ * @param wait_option wait option: wait response or not
+@@ -1123,32 +1304,31 @@ int libertas_set_mac_packet_filter(wlan_private * priv)
+ * @param pdata_buf A pointer to informaion buffer
+ * @return 0 or -1
*/
--static void iwl_rx_handle(struct iwl_priv *priv)
-+static void iwl4965_rx_handle(struct iwl4965_priv *priv)
+-int libertas_prepare_and_send_command(wlan_private * priv,
++int lbs_prepare_and_send_command(struct lbs_private *priv,
+ u16 cmd_no,
+ u16 cmd_action,
+ u16 wait_option, u32 cmd_oid, void *pdata_buf)
{
-- struct iwl_rx_mem_buffer *rxb;
-- struct iwl_rx_packet *pkt;
-- struct iwl_rx_queue *rxq = &priv->rxq;
-+ struct iwl4965_rx_mem_buffer *rxb;
-+ struct iwl4965_rx_packet *pkt;
-+ struct iwl4965_rx_queue *rxq = &priv->rxq;
- u32 r, i;
- int reclaim;
+ int ret = 0;
+- wlan_adapter *adapter = priv->adapter;
+ struct cmd_ctrl_node *cmdnode;
+ struct cmd_ds_command *cmdptr;
unsigned long flags;
-+ u8 fill_rx = 0;
-+ u32 count = 0;
-- r = iwl_hw_get_rx_read(priv);
-+ /* uCode's read index (stored in shared DRAM) indicates the last Rx
-+ * buffer that the driver may process (last buffer filled by ucode). */
-+ r = iwl4965_hw_get_rx_read(priv);
- i = rxq->read;
+ lbs_deb_enter(LBS_DEB_HOST);
- /* Rx interrupt, but nothing sent from uCode */
- if (i == r)
- IWL_DEBUG(IWL_DL_RX | IWL_DL_ISR, "r = %d, i = %d\n", r, i);
+- if (!adapter) {
+- lbs_deb_host("PREP_CMD: adapter is NULL\n");
++ if (!priv) {
++ lbs_deb_host("PREP_CMD: priv is NULL\n");
+ ret = -1;
+ goto done;
+ }
-+ if (iwl4965_rx_queue_space(rxq) > (RX_QUEUE_SIZE / 2))
-+ fill_rx = 1;
-+
- while (i != r) {
- rxb = rxq->queue[i];
+- if (adapter->surpriseremoved) {
++ if (priv->surpriseremoved) {
+ lbs_deb_host("PREP_CMD: card removed\n");
+ ret = -1;
+ goto done;
+ }
-- /* If an RXB doesn't have a queue slot associated with it
-+ /* If an RXB doesn't have a Rx queue slot associated with it,
- * then a bug has been introduced in the queue refilling
- * routines -- catch it here */
- BUG_ON(rxb == NULL);
-@@ -4598,9 +4681,9 @@ static void iwl_rx_handle(struct iwl_priv *priv)
- rxq->queue[i] = NULL;
+- cmdnode = libertas_get_free_cmd_ctrl_node(priv);
++ cmdnode = lbs_get_cmd_ctrl_node(priv);
- pci_dma_sync_single_for_cpu(priv->pci_dev, rxb->dma_addr,
-- IWL_RX_BUF_SIZE,
-+ priv->hw_setting.rx_buf_size,
- PCI_DMA_FROMDEVICE);
-- pkt = (struct iwl_rx_packet *)rxb->skb->data;
-+ pkt = (struct iwl4965_rx_packet *)rxb->skb->data;
+ if (cmdnode == NULL) {
+ lbs_deb_host("PREP_CMD: cmdnode is NULL\n");
+@@ -1159,138 +1339,107 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ goto done;
+ }
- /* Reclaim a command buffer only if this packet is a response
- * to a (driver-originated) command.
-@@ -4617,7 +4700,7 @@ static void iwl_rx_handle(struct iwl_priv *priv)
+- libertas_set_cmd_ctrl_node(priv, cmdnode, cmd_oid, wait_option, pdata_buf);
++ lbs_set_cmd_ctrl_node(priv, cmdnode, pdata_buf);
- /* Based on type of command response or notification,
- * handle those that need handling via function in
-- * rx_handlers table. See iwl_setup_rx_handlers() */
-+ * rx_handlers table. See iwl4965_setup_rx_handlers() */
- if (priv->rx_handlers[pkt->hdr.cmd]) {
- IWL_DEBUG(IWL_DL_HOST_COMMAND | IWL_DL_RX | IWL_DL_ISR,
- "r = %d, i = %d, %s, 0x%02x\n", r, i,
-@@ -4632,11 +4715,11 @@ static void iwl_rx_handle(struct iwl_priv *priv)
- }
+- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
++ cmdptr = (struct cmd_ds_command *)cmdnode->cmdbuf;
- if (reclaim) {
-- /* Invoke any callbacks, transfer the skb to caller,
-- * and fire off the (possibly) blocking iwl_send_cmd()
-+ /* Invoke any callbacks, transfer the skb to caller, and
-+ * fire off the (possibly) blocking iwl4965_send_cmd()
- * as we reclaim the driver command queue */
- if (rxb && rxb->skb)
-- iwl_tx_cmd_complete(priv, rxb);
-+ iwl4965_tx_cmd_complete(priv, rxb);
- else
- IWL_WARNING("Claim null rxb?\n");
- }
-@@ -4651,20 +4734,34 @@ static void iwl_rx_handle(struct iwl_priv *priv)
- }
+ lbs_deb_host("PREP_CMD: command 0x%04x\n", cmd_no);
- pci_unmap_single(priv->pci_dev, rxb->dma_addr,
-- IWL_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
-+ priv->hw_setting.rx_buf_size,
-+ PCI_DMA_FROMDEVICE);
- spin_lock_irqsave(&rxq->lock, flags);
- list_add_tail(&rxb->list, &priv->rxq.rx_used);
- spin_unlock_irqrestore(&rxq->lock, flags);
- i = (i + 1) & RX_QUEUE_MASK;
-+ /* If there are a lot of unused frames,
-+ * restock the Rx queue so ucode wont assert. */
-+ if (fill_rx) {
-+ count++;
-+ if (count >= 8) {
-+ priv->rxq.read = i;
-+ __iwl4965_rx_replenish(priv);
-+ count = 0;
-+ }
-+ }
- }
+- if (!cmdptr) {
+- lbs_deb_host("PREP_CMD: cmdptr is NULL\n");
+- libertas_cleanup_and_insert_cmd(priv, cmdnode);
+- ret = -1;
+- goto done;
+- }
+-
+ /* Set sequence number, command and INT option */
+- adapter->seqnum++;
+- cmdptr->seqnum = cpu_to_le16(adapter->seqnum);
++ priv->seqnum++;
++ cmdptr->seqnum = cpu_to_le16(priv->seqnum);
- /* Backtrack one entry */
- priv->rxq.read = i;
-- iwl_rx_queue_restock(priv);
-+ iwl4965_rx_queue_restock(priv);
- }
+ cmdptr->command = cpu_to_le16(cmd_no);
+ cmdptr->result = 0;
--int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq)
-+/**
-+ * iwl4965_tx_queue_update_write_ptr - Send new write index to hardware
-+ */
-+static int iwl4965_tx_queue_update_write_ptr(struct iwl4965_priv *priv,
-+ struct iwl4965_tx_queue *txq)
- {
- u32 reg = 0;
- int rc = 0;
-@@ -4678,41 +4775,41 @@ int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
- /* wake up nic if it's powered down ...
- * uCode will wake up, and interrupt us again, so next
- * time we'll skip this part. */
-- reg = iwl_read32(priv, CSR_UCODE_DRV_GP1);
-+ reg = iwl4965_read32(priv, CSR_UCODE_DRV_GP1);
+ switch (cmd_no) {
+- case CMD_GET_HW_SPEC:
+- ret = wlan_cmd_hw_spec(priv, cmdptr);
+- break;
+ case CMD_802_11_PS_MODE:
+- ret = wlan_cmd_802_11_ps_mode(priv, cmdptr, cmd_action);
++ ret = lbs_cmd_802_11_ps_mode(priv, cmdptr, cmd_action);
+ break;
- if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
- IWL_DEBUG_INFO("Requesting wakeup, GP1 = 0x%x\n", reg);
-- iwl_set_bit(priv, CSR_GP_CNTRL,
-+ iwl4965_set_bit(priv, CSR_GP_CNTRL,
- CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
- return rc;
- }
+ case CMD_802_11_SCAN:
+- ret = libertas_cmd_80211_scan(priv, cmdptr, pdata_buf);
++ ret = lbs_cmd_80211_scan(priv, cmdptr, pdata_buf);
+ break;
- /* restore this queue's parameters in nic hardware. */
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- return rc;
-- iwl_write_restricted(priv, HBUS_TARG_WRPTR,
-- txq->q.first_empty | (txq_id << 8));
-- iwl_release_restricted_access(priv);
-+ iwl4965_write_direct32(priv, HBUS_TARG_WRPTR,
-+ txq->q.write_ptr | (txq_id << 8));
-+ iwl4965_release_nic_access(priv);
+ case CMD_MAC_CONTROL:
+- ret = wlan_cmd_mac_control(priv, cmdptr);
++ ret = lbs_cmd_mac_control(priv, cmdptr);
+ break;
- /* else not in power-save mode, uCode will never sleep when we're
- * trying to tx (during RFKILL, we're not trying to tx). */
- } else
-- iwl_write32(priv, HBUS_TARG_WRPTR,
-- txq->q.first_empty | (txq_id << 8));
-+ iwl4965_write32(priv, HBUS_TARG_WRPTR,
-+ txq->q.write_ptr | (txq_id << 8));
+ case CMD_802_11_ASSOCIATE:
+ case CMD_802_11_REASSOCIATE:
+- ret = libertas_cmd_80211_associate(priv, cmdptr, pdata_buf);
++ ret = lbs_cmd_80211_associate(priv, cmdptr, pdata_buf);
+ break;
- txq->need_update = 0;
+ case CMD_802_11_DEAUTHENTICATE:
+- ret = libertas_cmd_80211_deauthenticate(priv, cmdptr);
+- break;
+-
+- case CMD_802_11_SET_WEP:
+- ret = wlan_cmd_802_11_set_wep(priv, cmdptr, cmd_action, pdata_buf);
++ ret = lbs_cmd_80211_deauthenticate(priv, cmdptr);
+ break;
- return rc;
- }
+ case CMD_802_11_AD_HOC_START:
+- ret = libertas_cmd_80211_ad_hoc_start(priv, cmdptr, pdata_buf);
++ ret = lbs_cmd_80211_ad_hoc_start(priv, cmdptr, pdata_buf);
+ break;
+ case CMD_CODE_DNLD:
+ break;
--#ifdef CONFIG_IWLWIFI_DEBUG
--static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
-+#ifdef CONFIG_IWL4965_DEBUG
-+static void iwl4965_print_rx_config_cmd(struct iwl4965_rxon_cmd *rxon)
- {
- DECLARE_MAC_BUF(mac);
+ case CMD_802_11_RESET:
+- ret = wlan_cmd_802_11_reset(priv, cmdptr, cmd_action);
++ ret = lbs_cmd_802_11_reset(priv, cmdptr, cmd_action);
+ break;
- IWL_DEBUG_RADIO("RX CONFIG:\n");
-- iwl_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
-+ iwl4965_print_hex_dump(IWL_DL_RADIO, (u8 *) rxon, sizeof(*rxon));
- IWL_DEBUG_RADIO("u16 channel: 0x%x\n", le16_to_cpu(rxon->channel));
- IWL_DEBUG_RADIO("u32 flags: 0x%08X\n", le32_to_cpu(rxon->flags));
- IWL_DEBUG_RADIO("u32 filter_flags: 0x%08x\n",
-@@ -4729,24 +4826,24 @@ static void iwl_print_rx_config_cmd(struct iwl_rxon_cmd *rxon)
- }
- #endif
+ case CMD_802_11_GET_LOG:
+- ret = wlan_cmd_802_11_get_log(priv, cmdptr);
++ ret = lbs_cmd_802_11_get_log(priv, cmdptr);
+ break;
--static void iwl_enable_interrupts(struct iwl_priv *priv)
-+static void iwl4965_enable_interrupts(struct iwl4965_priv *priv)
- {
- IWL_DEBUG_ISR("Enabling interrupts\n");
- set_bit(STATUS_INT_ENABLED, &priv->status);
-- iwl_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
-+ iwl4965_write32(priv, CSR_INT_MASK, CSR_INI_SET_MASK);
- }
+ case CMD_802_11_AUTHENTICATE:
+- ret = libertas_cmd_80211_authenticate(priv, cmdptr, pdata_buf);
++ ret = lbs_cmd_80211_authenticate(priv, cmdptr, pdata_buf);
+ break;
--static inline void iwl_disable_interrupts(struct iwl_priv *priv)
-+static inline void iwl4965_disable_interrupts(struct iwl4965_priv *priv)
- {
- clear_bit(STATUS_INT_ENABLED, &priv->status);
+ case CMD_802_11_GET_STAT:
+- ret = wlan_cmd_802_11_get_stat(priv, cmdptr);
++ ret = lbs_cmd_802_11_get_stat(priv, cmdptr);
+ break;
- /* disable interrupts from uCode/NIC to host */
-- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
-+ iwl4965_write32(priv, CSR_INT_MASK, 0x00000000);
+ case CMD_802_11_SNMP_MIB:
+- ret = wlan_cmd_802_11_snmp_mib(priv, cmdptr,
++ ret = lbs_cmd_802_11_snmp_mib(priv, cmdptr,
+ cmd_action, cmd_oid, pdata_buf);
+ break;
- /* acknowledge/clear/reset any interrupts still pending
- * from uCode or flow handler (Rx/Tx DMA) */
-- iwl_write32(priv, CSR_INT, 0xffffffff);
-- iwl_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
-+ iwl4965_write32(priv, CSR_INT, 0xffffffff);
-+ iwl4965_write32(priv, CSR_FH_INT_STATUS, 0xffffffff);
- IWL_DEBUG_ISR("Disabled interrupts\n");
- }
+ case CMD_MAC_REG_ACCESS:
+ case CMD_BBP_REG_ACCESS:
+ case CMD_RF_REG_ACCESS:
+- ret = wlan_cmd_reg_access(priv, cmdptr, cmd_action, pdata_buf);
+- break;
+-
+- case CMD_802_11_RF_CHANNEL:
+- ret = wlan_cmd_802_11_rf_channel(priv, cmdptr,
+- cmd_action, pdata_buf);
++ ret = lbs_cmd_reg_access(priv, cmdptr, cmd_action, pdata_buf);
+ break;
-@@ -4773,7 +4870,7 @@ static const char *desc_lookup(int i)
- #define ERROR_START_OFFSET (1 * sizeof(u32))
- #define ERROR_ELEM_SIZE (7 * sizeof(u32))
+ case CMD_802_11_RF_TX_POWER:
+- ret = wlan_cmd_802_11_rf_tx_power(priv, cmdptr,
++ ret = lbs_cmd_802_11_rf_tx_power(priv, cmdptr,
+ cmd_action, pdata_buf);
+ break;
--static void iwl_dump_nic_error_log(struct iwl_priv *priv)
-+static void iwl4965_dump_nic_error_log(struct iwl4965_priv *priv)
- {
- u32 data2, line;
- u32 desc, time, count, base, data1;
-@@ -4782,18 +4879,18 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
+- case CMD_802_11_RADIO_CONTROL:
+- ret = wlan_cmd_802_11_radio_control(priv, cmdptr, cmd_action);
+- break;
+-
+- case CMD_802_11_DATA_RATE:
+- ret = wlan_cmd_802_11_data_rate(priv, cmdptr, cmd_action);
+- break;
+ case CMD_802_11_RATE_ADAPT_RATESET:
+- ret = wlan_cmd_802_11_rate_adapt_rateset(priv,
++ ret = lbs_cmd_802_11_rate_adapt_rateset(priv,
+ cmdptr, cmd_action);
+ break;
- base = le32_to_cpu(priv->card_alive.error_event_table_ptr);
+ case CMD_MAC_MULTICAST_ADR:
+- ret = wlan_cmd_mac_multicast_adr(priv, cmdptr, cmd_action);
++ ret = lbs_cmd_mac_multicast_adr(priv, cmdptr, cmd_action);
+ break;
-- if (!iwl_hw_valid_rtc_data_addr(base)) {
-+ if (!iwl4965_hw_valid_rtc_data_addr(base)) {
- IWL_ERROR("Not valid error log pointer 0x%08X\n", base);
- return;
- }
+ case CMD_802_11_MONITOR_MODE:
+- ret = wlan_cmd_802_11_monitor_mode(priv, cmdptr,
++ ret = lbs_cmd_802_11_monitor_mode(priv, cmdptr,
+ cmd_action, pdata_buf);
+ break;
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- IWL_WARNING("Can not read from adapter at this time.\n");
- return;
- }
+ case CMD_802_11_AD_HOC_JOIN:
+- ret = libertas_cmd_80211_ad_hoc_join(priv, cmdptr, pdata_buf);
++ ret = lbs_cmd_80211_ad_hoc_join(priv, cmdptr, pdata_buf);
+ break;
-- count = iwl_read_restricted_mem(priv, base);
-+ count = iwl4965_read_targ_mem(priv, base);
+ case CMD_802_11_RSSI:
+- ret = wlan_cmd_802_11_rssi(priv, cmdptr);
++ ret = lbs_cmd_802_11_rssi(priv, cmdptr);
+ break;
- if (ERROR_START_OFFSET <= count * ERROR_ELEM_SIZE) {
- IWL_ERROR("Start IWL Error Log Dump:\n");
-@@ -4801,15 +4898,15 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
- priv->status, priv->config, count);
- }
+ case CMD_802_11_AD_HOC_STOP:
+- ret = libertas_cmd_80211_ad_hoc_stop(priv, cmdptr);
+- break;
+-
+- case CMD_802_11_ENABLE_RSN:
+- ret = wlan_cmd_802_11_enable_rsn(priv, cmdptr, cmd_action,
+- pdata_buf);
++ ret = lbs_cmd_80211_ad_hoc_stop(priv, cmdptr);
+ break;
-- desc = iwl_read_restricted_mem(priv, base + 1 * sizeof(u32));
-- blink1 = iwl_read_restricted_mem(priv, base + 3 * sizeof(u32));
-- blink2 = iwl_read_restricted_mem(priv, base + 4 * sizeof(u32));
-- ilink1 = iwl_read_restricted_mem(priv, base + 5 * sizeof(u32));
-- ilink2 = iwl_read_restricted_mem(priv, base + 6 * sizeof(u32));
-- data1 = iwl_read_restricted_mem(priv, base + 7 * sizeof(u32));
-- data2 = iwl_read_restricted_mem(priv, base + 8 * sizeof(u32));
-- line = iwl_read_restricted_mem(priv, base + 9 * sizeof(u32));
-- time = iwl_read_restricted_mem(priv, base + 11 * sizeof(u32));
-+ desc = iwl4965_read_targ_mem(priv, base + 1 * sizeof(u32));
-+ blink1 = iwl4965_read_targ_mem(priv, base + 3 * sizeof(u32));
-+ blink2 = iwl4965_read_targ_mem(priv, base + 4 * sizeof(u32));
-+ ilink1 = iwl4965_read_targ_mem(priv, base + 5 * sizeof(u32));
-+ ilink2 = iwl4965_read_targ_mem(priv, base + 6 * sizeof(u32));
-+ data1 = iwl4965_read_targ_mem(priv, base + 7 * sizeof(u32));
-+ data2 = iwl4965_read_targ_mem(priv, base + 8 * sizeof(u32));
-+ line = iwl4965_read_targ_mem(priv, base + 9 * sizeof(u32));
-+ time = iwl4965_read_targ_mem(priv, base + 11 * sizeof(u32));
+ case CMD_802_11_KEY_MATERIAL:
+- ret = wlan_cmd_802_11_key_material(priv, cmdptr, cmd_action,
++ ret = lbs_cmd_802_11_key_material(priv, cmdptr, cmd_action,
+ cmd_oid, pdata_buf);
+ break;
- IWL_ERROR("Desc Time "
- "data1 data2 line\n");
-@@ -4819,17 +4916,17 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv)
- IWL_ERROR("0x%05X 0x%05X 0x%05X 0x%05X\n", blink1, blink2,
- ilink1, ilink2);
+@@ -1300,11 +1449,11 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ break;
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- }
+ case CMD_802_11_MAC_ADDRESS:
+- ret = wlan_cmd_802_11_mac_address(priv, cmdptr, cmd_action);
++ ret = lbs_cmd_802_11_mac_address(priv, cmdptr, cmd_action);
+ break;
- #define EVENT_START_OFFSET (4 * sizeof(u32))
+ case CMD_802_11_EEPROM_ACCESS:
+- ret = wlan_cmd_802_11_eeprom_access(priv, cmdptr,
++ ret = lbs_cmd_802_11_eeprom_access(priv, cmdptr,
+ cmd_action, pdata_buf);
+ break;
- /**
-- * iwl_print_event_log - Dump error event log to syslog
-+ * iwl4965_print_event_log - Dump error event log to syslog
- *
-- * NOTE: Must be called with iwl_grab_restricted_access() already obtained!
-+ * NOTE: Must be called with iwl4965_grab_nic_access() already obtained!
- */
--static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
-+static void iwl4965_print_event_log(struct iwl4965_priv *priv, u32 start_idx,
- u32 num_events, u32 mode)
- {
- u32 i;
-@@ -4853,21 +4950,21 @@ static void iwl_print_event_log(struct iwl_priv *priv, u32 start_idx,
- /* "time" is actually "data" for mode 0 (no timestamp).
- * place event id # at far right for easier visual parsing. */
- for (i = 0; i < num_events; i++) {
-- ev = iwl_read_restricted_mem(priv, ptr);
-+ ev = iwl4965_read_targ_mem(priv, ptr);
- ptr += sizeof(u32);
-- time = iwl_read_restricted_mem(priv, ptr);
-+ time = iwl4965_read_targ_mem(priv, ptr);
- ptr += sizeof(u32);
- if (mode == 0)
- IWL_ERROR("0x%08x\t%04u\n", time, ev); /* data, ev */
- else {
-- data = iwl_read_restricted_mem(priv, ptr);
-+ data = iwl4965_read_targ_mem(priv, ptr);
- ptr += sizeof(u32);
- IWL_ERROR("%010u\t0x%08x\t%04u\n", time, data, ev);
- }
- }
- }
+@@ -1322,19 +1471,10 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ goto done;
--static void iwl_dump_nic_event_log(struct iwl_priv *priv)
-+static void iwl4965_dump_nic_event_log(struct iwl4965_priv *priv)
- {
- int rc;
- u32 base; /* SRAM byte address of event log header */
-@@ -4878,29 +4975,29 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
- u32 size; /* # entries that we'll print */
+ case CMD_802_11D_DOMAIN_INFO:
+- ret = libertas_cmd_802_11d_domain_info(priv, cmdptr,
++ ret = lbs_cmd_802_11d_domain_info(priv, cmdptr,
+ cmd_no, cmd_action);
+ break;
- base = le32_to_cpu(priv->card_alive.log_event_table_ptr);
-- if (!iwl_hw_valid_rtc_data_addr(base)) {
-+ if (!iwl4965_hw_valid_rtc_data_addr(base)) {
- IWL_ERROR("Invalid event log pointer 0x%08X\n", base);
- return;
- }
+- case CMD_802_11_SLEEP_PARAMS:
+- ret = wlan_cmd_802_11_sleep_params(priv, cmdptr, cmd_action);
+- break;
+- case CMD_802_11_INACTIVITY_TIMEOUT:
+- ret = wlan_cmd_802_11_inactivity_timeout(priv, cmdptr,
+- cmd_action, pdata_buf);
+- libertas_set_cmd_ctrl_node(priv, cmdnode, 0, 0, pdata_buf);
+- break;
+-
+ case CMD_802_11_TPC_CFG:
+ cmdptr->command = cpu_to_le16(CMD_802_11_TPC_CFG);
+ cmdptr->size =
+@@ -1361,13 +1501,15 @@ int libertas_prepare_and_send_command(wlan_private * priv,
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- IWL_WARNING("Can not read from adapter at this time.\n");
- return;
- }
+ #define ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN 8
+ cmdptr->size =
+- cpu_to_le16(gpio->header.len + S_DS_GEN +
+- ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN);
+- gpio->header.len = cpu_to_le16(gpio->header.len);
++ cpu_to_le16(le16_to_cpu(gpio->header.len)
++ + S_DS_GEN
++ + ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN);
++ gpio->header.len = gpio->header.len;
- /* event log header */
-- capacity = iwl_read_restricted_mem(priv, base);
-- mode = iwl_read_restricted_mem(priv, base + (1 * sizeof(u32)));
-- num_wraps = iwl_read_restricted_mem(priv, base + (2 * sizeof(u32)));
-- next_entry = iwl_read_restricted_mem(priv, base + (3 * sizeof(u32)));
-+ capacity = iwl4965_read_targ_mem(priv, base);
-+ mode = iwl4965_read_targ_mem(priv, base + (1 * sizeof(u32)));
-+ num_wraps = iwl4965_read_targ_mem(priv, base + (2 * sizeof(u32)));
-+ next_entry = iwl4965_read_targ_mem(priv, base + (3 * sizeof(u32)));
+ ret = 0;
+ break;
+ }
++
+ case CMD_802_11_PWR_CFG:
+ cmdptr->command = cpu_to_le16(CMD_802_11_PWR_CFG);
+ cmdptr->size =
+@@ -1379,19 +1521,11 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ ret = 0;
+ break;
+ case CMD_BT_ACCESS:
+- ret = wlan_cmd_bt_access(priv, cmdptr, cmd_action, pdata_buf);
++ ret = lbs_cmd_bt_access(priv, cmdptr, cmd_action, pdata_buf);
+ break;
- size = num_wraps ? capacity : next_entry;
+ case CMD_FWT_ACCESS:
+- ret = wlan_cmd_fwt_access(priv, cmdptr, cmd_action, pdata_buf);
+- break;
+-
+- case CMD_MESH_ACCESS:
+- ret = wlan_cmd_mesh_access(priv, cmdptr, cmd_action, pdata_buf);
+- break;
+-
+- case CMD_SET_BOOT2_VER:
+- ret = wlan_cmd_set_boot2_ver(priv, cmdptr, cmd_action, pdata_buf);
++ ret = lbs_cmd_fwt_access(priv, cmdptr, cmd_action, pdata_buf);
+ break;
- /* bail out if nothing in log */
- if (size == 0) {
- IWL_ERROR("Start IWL Event Log Dump: nothing in log\n");
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- return;
+ case CMD_GET_TSF:
+@@ -1400,6 +1534,9 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ S_DS_GEN);
+ ret = 0;
+ break;
++ case CMD_802_11_BEACON_CTRL:
++ ret = lbs_cmd_bcn_ctrl(priv, cmdptr, cmd_action);
++ break;
+ default:
+ lbs_deb_host("PREP_CMD: unknown command 0x%04x\n", cmd_no);
+ ret = -1;
+@@ -1409,14 +1546,14 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ /* return error, since the command preparation failed */
+ if (ret != 0) {
+ lbs_deb_host("PREP_CMD: command preparation failed\n");
+- libertas_cleanup_and_insert_cmd(priv, cmdnode);
++ lbs_cleanup_and_insert_cmd(priv, cmdnode);
+ ret = -1;
+ goto done;
}
-@@ -4910,31 +5007,31 @@ static void iwl_dump_nic_event_log(struct iwl_priv *priv)
- /* if uCode has wrapped back to top of log, start at the oldest entry,
- * i.e the next one that uCode would fill. */
- if (num_wraps)
-- iwl_print_event_log(priv, next_entry,
-+ iwl4965_print_event_log(priv, next_entry,
- capacity - next_entry, mode);
+ cmdnode->cmdwaitqwoken = 0;
- /* (then/else) start at top of log */
-- iwl_print_event_log(priv, 0, next_entry, mode);
-+ iwl4965_print_event_log(priv, 0, next_entry, mode);
+- libertas_queue_cmd(adapter, cmdnode, 1);
++ lbs_queue_cmd(priv, cmdnode);
+ wake_up_interruptible(&priv->waitq);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ if (wait_option & CMD_OPTION_WAITFORRSP) {
+@@ -1426,67 +1563,60 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+ cmdnode->cmdwaitqwoken);
+ }
+
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->cur_cmd_retcode) {
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (priv->cur_cmd_retcode) {
+ lbs_deb_host("PREP_CMD: command failed with return code %d\n",
+- adapter->cur_cmd_retcode);
+- adapter->cur_cmd_retcode = 0;
++ priv->cur_cmd_retcode);
++ priv->cur_cmd_retcode = 0;
+ ret = -1;
+ }
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+
+ done:
+ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
+ return ret;
}
+-EXPORT_SYMBOL_GPL(libertas_prepare_and_send_command);
++EXPORT_SYMBOL_GPL(lbs_prepare_and_send_command);
/**
-- * iwl_irq_handle_error - called for HW or SW error interrupt from card
-+ * iwl4965_irq_handle_error - called for HW or SW error interrupt from card
+ * @brief This function allocates the command buffer and link
+ * it to command free queue.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return 0 or -1
*/
--static void iwl_irq_handle_error(struct iwl_priv *priv)
-+static void iwl4965_irq_handle_error(struct iwl4965_priv *priv)
+-int libertas_allocate_cmd_buffer(wlan_private * priv)
++int lbs_allocate_cmd_buffer(struct lbs_private *priv)
{
-- /* Set the FW error flag -- cleared on iwl_down */
-+ /* Set the FW error flag -- cleared on iwl4965_down */
- set_bit(STATUS_FW_ERROR, &priv->status);
+ int ret = 0;
+- u32 ulbufsize;
++ u32 bufsize;
+ u32 i;
+- struct cmd_ctrl_node *tempcmd_array;
+- u8 *ptempvirtualaddr;
+- wlan_adapter *adapter = priv->adapter;
++ struct cmd_ctrl_node *cmdarray;
- /* Cancel currently queued command. */
- clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
+ lbs_deb_enter(LBS_DEB_HOST);
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (iwl_debug_level & IWL_DL_FW_ERRORS) {
-- iwl_dump_nic_error_log(priv);
-- iwl_dump_nic_event_log(priv);
-- iwl_print_rx_config_cmd(&priv->staging_rxon);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ if (iwl4965_debug_level & IWL_DL_FW_ERRORS) {
-+ iwl4965_dump_nic_error_log(priv);
-+ iwl4965_dump_nic_event_log(priv);
-+ iwl4965_print_rx_config_cmd(&priv->staging_rxon);
+- /* Allocate and initialize cmdCtrlNode */
+- ulbufsize = sizeof(struct cmd_ctrl_node) * MRVDRV_NUM_OF_CMD_BUFFER;
+-
+- if (!(tempcmd_array = kzalloc(ulbufsize, GFP_KERNEL))) {
++ /* Allocate and initialize the command array */
++ bufsize = sizeof(struct cmd_ctrl_node) * LBS_NUM_CMD_BUFFERS;
++ if (!(cmdarray = kzalloc(bufsize, GFP_KERNEL))) {
+ lbs_deb_host("ALLOC_CMD_BUF: tempcmd_array is NULL\n");
+ ret = -1;
+ goto done;
}
- #endif
+- adapter->cmd_array = tempcmd_array;
++ priv->cmd_array = cmdarray;
-@@ -4948,7 +5045,7 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
- IWL_DEBUG(IWL_DL_INFO | IWL_DL_FW_ERRORS,
- "Restarting adapter due to uCode error.\n");
+- /* Allocate and initialize command buffers */
+- ulbufsize = MRVDRV_SIZE_OF_CMD_BUFFER;
+- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
+- if (!(ptempvirtualaddr = kzalloc(ulbufsize, GFP_KERNEL))) {
++ /* Allocate and initialize each command buffer in the command array */
++ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
++ cmdarray[i].cmdbuf = kzalloc(LBS_CMD_BUFFER_SIZE, GFP_KERNEL);
++ if (!cmdarray[i].cmdbuf) {
+ lbs_deb_host("ALLOC_CMD_BUF: ptempvirtualaddr is NULL\n");
+ ret = -1;
+ goto done;
+ }
+-
+- /* Update command buffer virtual */
+- tempcmd_array[i].bufvirtualaddr = ptempvirtualaddr;
+ }
-- if (iwl_is_associated(priv)) {
-+ if (iwl4965_is_associated(priv)) {
- memcpy(&priv->recovery_rxon, &priv->active_rxon,
- sizeof(priv->recovery_rxon));
- priv->error_recovering = 1;
-@@ -4957,16 +5054,16 @@ static void iwl_irq_handle_error(struct iwl_priv *priv)
+- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
+- init_waitqueue_head(&tempcmd_array[i].cmdwait_q);
+- libertas_cleanup_and_insert_cmd(priv, &tempcmd_array[i]);
++ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
++ init_waitqueue_head(&cmdarray[i].cmdwait_q);
++ lbs_cleanup_and_insert_cmd(priv, &cmdarray[i]);
}
- }
+-
+ ret = 0;
--static void iwl_error_recovery(struct iwl_priv *priv)
-+static void iwl4965_error_recovery(struct iwl4965_priv *priv)
+ done:
+@@ -1497,39 +1627,36 @@ done:
+ /**
+ * @brief This function frees the command buffer.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return 0 or -1
+ */
+-int libertas_free_cmd_buffer(wlan_private * priv)
++int lbs_free_cmd_buffer(struct lbs_private *priv)
{
- unsigned long flags;
+- u32 ulbufsize; /* Someone needs to die for this. Slowly and painfully */
++ struct cmd_ctrl_node *cmdarray;
+ unsigned int i;
+- struct cmd_ctrl_node *tempcmd_array;
+- wlan_adapter *adapter = priv->adapter;
- memcpy(&priv->staging_rxon, &priv->recovery_rxon,
- sizeof(priv->staging_rxon));
- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
+ lbs_deb_enter(LBS_DEB_HOST);
-- iwl_rxon_add_station(priv, priv->bssid, 1);
-+ iwl4965_rxon_add_station(priv, priv->bssid, 1);
+ /* need to check if cmd array is allocated or not */
+- if (adapter->cmd_array == NULL) {
++ if (priv->cmd_array == NULL) {
+ lbs_deb_host("FREE_CMD_BUF: cmd_array is NULL\n");
+ goto done;
+ }
- spin_lock_irqsave(&priv->lock, flags);
- priv->assoc_id = le16_to_cpu(priv->staging_rxon.assoc_id);
-@@ -4974,12 +5071,12 @@ static void iwl_error_recovery(struct iwl_priv *priv)
- spin_unlock_irqrestore(&priv->lock, flags);
- }
+- tempcmd_array = adapter->cmd_array;
++ cmdarray = priv->cmd_array;
--static void iwl_irq_tasklet(struct iwl_priv *priv)
-+static void iwl4965_irq_tasklet(struct iwl4965_priv *priv)
+ /* Release shared memory buffers */
+- ulbufsize = MRVDRV_SIZE_OF_CMD_BUFFER;
+- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
+- if (tempcmd_array[i].bufvirtualaddr) {
+- kfree(tempcmd_array[i].bufvirtualaddr);
+- tempcmd_array[i].bufvirtualaddr = NULL;
++ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
++ if (cmdarray[i].cmdbuf) {
++ kfree(cmdarray[i].cmdbuf);
++ cmdarray[i].cmdbuf = NULL;
+ }
+ }
+
+ /* Release cmd_ctrl_node */
+- if (adapter->cmd_array) {
+- kfree(adapter->cmd_array);
+- adapter->cmd_array = NULL;
++ if (priv->cmd_array) {
++ kfree(priv->cmd_array);
++ priv->cmd_array = NULL;
+ }
+
+ done:
+@@ -1541,34 +1668,31 @@ done:
+ * @brief This function gets a free command node if available in
+ * command free queue.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return cmd_ctrl_node A pointer to cmd_ctrl_node structure or NULL
+ */
+-struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv)
++static struct cmd_ctrl_node *lbs_get_cmd_ctrl_node(struct lbs_private *priv)
{
- u32 inta, handled = 0;
- u32 inta_fh;
+ struct cmd_ctrl_node *tempnode;
+- wlan_adapter *adapter = priv->adapter;
unsigned long flags;
--#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL4965_DEBUG
- u32 inta_mask;
- #endif
-@@ -4988,18 +5085,19 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- /* Ack/clear/reset pending uCode interrupts.
- * Note: Some bits in CSR_INT are "OR" of bits in CSR_FH_INT_STATUS,
- * and will clear only when CSR_FH_INT_STATUS gets cleared. */
-- inta = iwl_read32(priv, CSR_INT);
-- iwl_write32(priv, CSR_INT, inta);
-+ inta = iwl4965_read32(priv, CSR_INT);
-+ iwl4965_write32(priv, CSR_INT, inta);
+ lbs_deb_enter(LBS_DEB_HOST);
- /* Ack/clear/reset pending flow-handler (DMA) interrupts.
- * Any new interrupts that happen after this, either while we're
- * in this tasklet, or later, will show up in next ISR/tasklet. */
-- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
-- iwl_write32(priv, CSR_FH_INT_STATUS, inta_fh);
-+ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
-+ iwl4965_write32(priv, CSR_FH_INT_STATUS, inta_fh);
+- if (!adapter)
++ if (!priv)
+ return NULL;
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (iwl_debug_level & IWL_DL_ISR) {
-- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
-+#ifdef CONFIG_IWL4965_DEBUG
-+ if (iwl4965_debug_level & IWL_DL_ISR) {
-+ /* just for debug */
-+ inta_mask = iwl4965_read32(priv, CSR_INT_MASK);
- IWL_DEBUG_ISR("inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
- inta, inta_mask, inta_fh);
+- spin_lock_irqsave(&adapter->driver_lock, flags);
++ spin_lock_irqsave(&priv->driver_lock, flags);
+
+- if (!list_empty(&adapter->cmdfreeq)) {
+- tempnode = (struct cmd_ctrl_node *)adapter->cmdfreeq.next;
+- list_del((struct list_head *)tempnode);
++ if (!list_empty(&priv->cmdfreeq)) {
++ tempnode = list_first_entry(&priv->cmdfreeq,
++ struct cmd_ctrl_node, list);
++ list_del(&tempnode->list);
+ } else {
+ lbs_deb_host("GET_CMD_NODE: cmd_ctrl_node is not available\n");
+ tempnode = NULL;
}
-@@ -5019,9 +5117,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- IWL_ERROR("Microcode HW error detected. Restarting.\n");
- /* Tell the device to stop sending interrupts */
-- iwl_disable_interrupts(priv);
-+ iwl4965_disable_interrupts(priv);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
+-
+- if (tempnode)
+- cleanup_cmdnode(tempnode);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
-- iwl_irq_handle_error(priv);
-+ iwl4965_irq_handle_error(priv);
+ lbs_deb_leave(LBS_DEB_HOST);
+ return tempnode;
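The hunk above replaces the open-coded cast `(struct cmd_ctrl_node *)adapter->cmdfreeq.next` with `list_first_entry()`. The cast only works while the `list_head` happens to be the first member of `struct cmd_ctrl_node`; `list_first_entry()` derives the containing structure from the member's offset, so it stays correct if the field moves. A minimal userspace sketch of the same idiom (the `struct node` and simplified list macros here are invented stand-ins, not the driver's types):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified list machinery in the style of <linux/list.h>. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

#define list_first_entry(head, type, member) \
    container_of((head)->next, type, member)

/* The list_head is deliberately NOT the first member, so the old
 * open-coded cast (struct node *)head->next would yield a pointer
 * into the middle of the structure rather than its start. */
struct node {
    int payload;
    struct list_head list;
};
```

Because the lookup goes through `offsetof()`, reordering the members of `struct node` requires no change at any call site, which is exactly what the conversion in the patch buys.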
+@@ -1580,47 +1704,26 @@ struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv)
+ * @param ptempnode A pointer to cmdCtrlNode structure
+ * @return n/a
+ */
+-static void cleanup_cmdnode(struct cmd_ctrl_node *ptempnode)
+-{
+- lbs_deb_enter(LBS_DEB_HOST);
+-
+- if (!ptempnode)
+- return;
+- ptempnode->cmdwaitqwoken = 1;
+- wake_up_interruptible(&ptempnode->cmdwait_q);
+- ptempnode->status = 0;
+- ptempnode->cmd_oid = (u32) 0;
+- ptempnode->wait_option = 0;
+- ptempnode->pdata_buf = NULL;
+-
+- if (ptempnode->bufvirtualaddr != NULL)
+- memset(ptempnode->bufvirtualaddr, 0, MRVDRV_SIZE_OF_CMD_BUFFER);
+-
+- lbs_deb_leave(LBS_DEB_HOST);
+-}
- handled |= CSR_INT_BIT_HW_ERR;
+ /**
+ * @brief This function initializes the command node.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param ptempnode A pointer to cmd_ctrl_node structure
+- * @param cmd_oid cmd oid: treated as sub command
+- * @param wait_option wait option: wait response or not
+ * @param pdata_buf A pointer to information buffer
+ * @return 0 or -1
+ */
+-void libertas_set_cmd_ctrl_node(wlan_private * priv,
+- struct cmd_ctrl_node *ptempnode,
+- u32 cmd_oid, u16 wait_option, void *pdata_buf)
++static void lbs_set_cmd_ctrl_node(struct lbs_private *priv,
++ struct cmd_ctrl_node *ptempnode,
++ void *pdata_buf)
+ {
+ lbs_deb_enter(LBS_DEB_HOST);
-@@ -5030,8 +5128,8 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
+ if (!ptempnode)
return;
- }
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (iwl_debug_level & (IWL_DL_ISR)) {
-+#ifdef CONFIG_IWL4965_DEBUG
-+ if (iwl4965_debug_level & (IWL_DL_ISR)) {
- /* NIC fires this, but we don't use it, redundant with WAKEUP */
- if (inta & CSR_INT_BIT_MAC_CLK_ACTV)
- IWL_DEBUG_ISR("Microcode started or stopped.\n");
-@@ -5044,10 +5142,10 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- /* Safely ignore these bits for debug checks below */
- inta &= ~(CSR_INT_BIT_MAC_CLK_ACTV | CSR_INT_BIT_ALIVE);
+- ptempnode->cmd_oid = cmd_oid;
+- ptempnode->wait_option = wait_option;
+- ptempnode->pdata_buf = pdata_buf;
++ ptempnode->callback = NULL;
++ ptempnode->callback_arg = (unsigned long)pdata_buf;
-- /* HW RF KILL switch toggled (4965 only) */
-+ /* HW RF KILL switch toggled */
- if (inta & CSR_INT_BIT_RF_KILL) {
- int hw_rf_kill = 0;
-- if (!(iwl_read32(priv, CSR_GP_CNTRL) &
-+ if (!(iwl4965_read32(priv, CSR_GP_CNTRL) &
- CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
- hw_rf_kill = 1;
+ lbs_deb_leave(LBS_DEB_HOST);
+ }
+@@ -1630,60 +1733,58 @@ void libertas_set_cmd_ctrl_node(wlan_private * priv,
+ * pending queue. It will put firmware back to PS mode
+ * if applicable.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return 0 or -1
+ */
+-int libertas_execute_next_command(wlan_private * priv)
++int lbs_execute_next_command(struct lbs_private *priv)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ struct cmd_ctrl_node *cmdnode = NULL;
+- struct cmd_ds_command *cmdptr;
++ struct cmd_header *cmd;
+ unsigned long flags;
+ int ret = 0;
-@@ -5067,7 +5165,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- handled |= CSR_INT_BIT_RF_KILL;
+ // Debug group is LBS_DEB_THREAD and not LBS_DEB_HOST, because the
+- // only caller to us is libertas_thread() and we get even when a
++ // only caller to us is lbs_thread() and we get called even when a
+ // data packet is received
+ lbs_deb_enter(LBS_DEB_THREAD);
+
+- spin_lock_irqsave(&adapter->driver_lock, flags);
++ spin_lock_irqsave(&priv->driver_lock, flags);
+
+- if (adapter->cur_cmd) {
++ if (priv->cur_cmd) {
+ lbs_pr_alert( "EXEC_NEXT_CMD: already processing command!\n");
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ ret = -1;
+ goto done;
}
-- /* Chip got too hot and stopped itself (4965 only) */
-+ /* Chip got too hot and stopped itself */
- if (inta & CSR_INT_BIT_CT_KILL) {
- IWL_ERROR("Microcode CT kill error detected.\n");
- handled |= CSR_INT_BIT_CT_KILL;
-@@ -5077,20 +5175,20 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- if (inta & CSR_INT_BIT_SW_ERR) {
- IWL_ERROR("Microcode SW error detected. Restarting 0x%X.\n",
- inta);
-- iwl_irq_handle_error(priv);
-+ iwl4965_irq_handle_error(priv);
- handled |= CSR_INT_BIT_SW_ERR;
+- if (!list_empty(&adapter->cmdpendingq)) {
+- cmdnode = (struct cmd_ctrl_node *)
+- adapter->cmdpendingq.next;
++ if (!list_empty(&priv->cmdpendingq)) {
++ cmdnode = list_first_entry(&priv->cmdpendingq,
++ struct cmd_ctrl_node, list);
}
- /* uCode wakes up after power-down sleep */
- if (inta & CSR_INT_BIT_WAKEUP) {
- IWL_DEBUG_ISR("Wakeup interrupt\n");
-- iwl_rx_queue_update_write_ptr(priv, &priv->rxq);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[0]);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[1]);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[2]);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[3]);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[4]);
-- iwl_tx_queue_update_write_ptr(priv, &priv->txq[5]);
-+ iwl4965_rx_queue_update_write_ptr(priv, &priv->rxq);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[0]);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[1]);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[2]);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[3]);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[4]);
-+ iwl4965_tx_queue_update_write_ptr(priv, &priv->txq[5]);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- handled |= CSR_INT_BIT_WAKEUP;
- }
-@@ -5099,7 +5197,7 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- * Rx "responses" (frame-received notification), and other
- * notifications from uCode come through here*/
- if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) {
-- iwl_rx_handle(priv);
-+ iwl4965_rx_handle(priv);
- handled |= (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX);
- }
+ if (cmdnode) {
+- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
++ cmd = cmdnode->cmdbuf;
-@@ -5118,13 +5216,13 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- }
+- if (is_command_allowed_in_ps(cmdptr->command)) {
+- if ((adapter->psstate == PS_STATE_SLEEP) ||
+- (adapter->psstate == PS_STATE_PRE_SLEEP)) {
++ if (is_command_allowed_in_ps(le16_to_cpu(cmd->command))) {
++ if ((priv->psstate == PS_STATE_SLEEP) ||
++ (priv->psstate == PS_STATE_PRE_SLEEP)) {
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: cannot send cmd 0x%04x in psstate %d\n",
+- le16_to_cpu(cmdptr->command),
+- adapter->psstate);
++ le16_to_cpu(cmd->command),
++ priv->psstate);
+ ret = -1;
+ goto done;
+ }
+ lbs_deb_host("EXEC_NEXT_CMD: OK to send command "
+- "0x%04x in psstate %d\n",
+- le16_to_cpu(cmdptr->command),
+- adapter->psstate);
+- } else if (adapter->psstate != PS_STATE_FULL_POWER) {
++ "0x%04x in psstate %d\n",
++ le16_to_cpu(cmd->command), priv->psstate);
++ } else if (priv->psstate != PS_STATE_FULL_POWER) {
+ /*
+ * 1. Non-PS command:
+ * Queue it. set needtowakeup to TRUE if current state
+- * is SLEEP, otherwise call libertas_ps_wakeup to send Exit_PS.
++ * is SLEEP, otherwise call lbs_ps_wakeup to send Exit_PS.
+ * 2. PS command but not Exit_PS:
+ * Ignore it.
+ * 3. PS command Exit_PS:
+@@ -1691,18 +1792,17 @@ int libertas_execute_next_command(wlan_private * priv)
+ * otherwise send this command down to firmware
+ * immediately.
+ */
+- if (cmdptr->command !=
+- cpu_to_le16(CMD_802_11_PS_MODE)) {
++ if (cmd->command != cpu_to_le16(CMD_802_11_PS_MODE)) {
+ /* Prepare to send Exit PS,
+ * this non PS command will be sent later */
+- if ((adapter->psstate == PS_STATE_SLEEP)
+- || (adapter->psstate == PS_STATE_PRE_SLEEP)
++ if ((priv->psstate == PS_STATE_SLEEP)
++ || (priv->psstate == PS_STATE_PRE_SLEEP)
+ ) {
+ /* w/ new scheme, it will not reach here.
+ since it is blocked in main_thread. */
+- adapter->needtowakeup = 1;
++ priv->needtowakeup = 1;
+ } else
+- libertas_ps_wakeup(priv, 0);
++ lbs_ps_wakeup(priv, 0);
- /* Re-enable all interrupts */
-- iwl_enable_interrupts(priv);
-+ iwl4965_enable_interrupts(priv);
+ ret = 0;
+ goto done;
+@@ -1711,8 +1811,7 @@ int libertas_execute_next_command(wlan_private * priv)
+ * PS command. Ignore it if it is not Exit_PS.
+ * otherwise send it down immediately.
+ */
+- struct cmd_ds_802_11_ps_mode *psm =
+- &cmdptr->params.psmode;
++ struct cmd_ds_802_11_ps_mode *psm = (void *)&cmd[1];
--#ifdef CONFIG_IWLWIFI_DEBUG
-- if (iwl_debug_level & (IWL_DL_ISR)) {
-- inta = iwl_read32(priv, CSR_INT);
-- inta_mask = iwl_read32(priv, CSR_INT_MASK);
-- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ if (iwl4965_debug_level & (IWL_DL_ISR)) {
-+ inta = iwl4965_read32(priv, CSR_INT);
-+ inta_mask = iwl4965_read32(priv, CSR_INT_MASK);
-+ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
- IWL_DEBUG_ISR("End inta 0x%08x, enabled 0x%08x, fh 0x%08x, "
- "flags 0x%08lx\n", inta, inta_mask, inta_fh, flags);
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: PS cmd, action 0x%02x\n",
+@@ -1721,20 +1820,24 @@ int libertas_execute_next_command(wlan_private * priv)
+ cpu_to_le16(CMD_SUBCMD_EXIT_PS)) {
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: ignore ENTER_PS cmd\n");
+- list_del((struct list_head *)cmdnode);
+- libertas_cleanup_and_insert_cmd(priv, cmdnode);
++ list_del(&cmdnode->list);
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ lbs_complete_command(priv, cmdnode, 0);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+
+ ret = 0;
+ goto done;
+ }
+
+- if ((adapter->psstate == PS_STATE_SLEEP) ||
+- (adapter->psstate == PS_STATE_PRE_SLEEP)) {
++ if ((priv->psstate == PS_STATE_SLEEP) ||
++ (priv->psstate == PS_STATE_PRE_SLEEP)) {
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: ignore EXIT_PS cmd in sleep\n");
+- list_del((struct list_head *)cmdnode);
+- libertas_cleanup_and_insert_cmd(priv, cmdnode);
+- adapter->needtowakeup = 1;
++ list_del(&cmdnode->list);
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ lbs_complete_command(priv, cmdnode, 0);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ priv->needtowakeup = 1;
+
+ ret = 0;
+ goto done;
+@@ -1744,33 +1847,34 @@ int libertas_execute_next_command(wlan_private * priv)
+ "EXEC_NEXT_CMD: sending EXIT_PS\n");
+ }
+ }
+- list_del((struct list_head *)cmdnode);
++ list_del(&cmdnode->list);
+ lbs_deb_host("EXEC_NEXT_CMD: sending command 0x%04x\n",
+- le16_to_cpu(cmdptr->command));
+- DownloadcommandToStation(priv, cmdnode);
++ le16_to_cpu(cmd->command));
++ lbs_submit_command(priv, cmdnode);
+ } else {
+ /*
+ * check if in power save mode, if yes, put the device back
+ * to PS mode
+ */
+- if ((adapter->psmode != WLAN802_11POWERMODECAM) &&
+- (adapter->psstate == PS_STATE_FULL_POWER) &&
+- (adapter->connect_status == LIBERTAS_CONNECTED)) {
+- if (adapter->secinfo.WPAenabled ||
+- adapter->secinfo.WPA2enabled) {
++ if ((priv->psmode != LBS802_11POWERMODECAM) &&
++ (priv->psstate == PS_STATE_FULL_POWER) &&
++ ((priv->connect_status == LBS_CONNECTED) ||
++ (priv->mesh_connect_status == LBS_CONNECTED))) {
++ if (priv->secinfo.WPAenabled ||
++ priv->secinfo.WPA2enabled) {
+ /* check for valid WPA group keys */
+- if (adapter->wpa_mcast_key.len ||
+- adapter->wpa_unicast_key.len) {
++ if (priv->wpa_mcast_key.len ||
++ priv->wpa_unicast_key.len) {
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: WPA enabled and GTK_SET"
+ " go back to PS_SLEEP");
+- libertas_ps_sleep(priv, 0);
++ lbs_ps_sleep(priv, 0);
+ }
+ } else {
+ lbs_deb_host(
+ "EXEC_NEXT_CMD: cmdpendingq empty, "
+ "go back to PS_SLEEP");
+- libertas_ps_sleep(priv, 0);
++ lbs_ps_sleep(priv, 0);
+ }
+ }
}
-@@ -5132,9 +5230,9 @@ static void iwl_irq_tasklet(struct iwl_priv *priv)
- spin_unlock_irqrestore(&priv->lock, flags);
+@@ -1781,7 +1885,7 @@ done:
+ return ret;
}
--static irqreturn_t iwl_isr(int irq, void *data)
-+static irqreturn_t iwl4965_isr(int irq, void *data)
+-void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str)
++void lbs_send_iwevcustom_event(struct lbs_private *priv, s8 *str)
{
-- struct iwl_priv *priv = data;
-+ struct iwl4965_priv *priv = data;
- u32 inta, inta_mask;
- u32 inta_fh;
- if (!priv)
-@@ -5146,12 +5244,12 @@ static irqreturn_t iwl_isr(int irq, void *data)
- * back-to-back ISRs and sporadic interrupts from our NIC.
- * If we have something to service, the tasklet will re-enable ints.
- * If we *don't* have something, we'll re-enable before leaving here. */
-- inta_mask = iwl_read32(priv, CSR_INT_MASK); /* just for debug */
-- iwl_write32(priv, CSR_INT_MASK, 0x00000000);
-+ inta_mask = iwl4965_read32(priv, CSR_INT_MASK); /* just for debug */
-+ iwl4965_write32(priv, CSR_INT_MASK, 0x00000000);
+ union iwreq_data iwrq;
+ u8 buf[50];
+@@ -1805,10 +1909,9 @@ void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str)
+ lbs_deb_leave(LBS_DEB_WEXT);
+ }
- /* Discover which interrupts are active/pending */
-- inta = iwl_read32(priv, CSR_INT);
-- inta_fh = iwl_read32(priv, CSR_FH_INT_STATUS);
-+ inta = iwl4965_read32(priv, CSR_INT);
-+ inta_fh = iwl4965_read32(priv, CSR_FH_INT_STATUS);
+-static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
++static int sendconfirmsleep(struct lbs_private *priv, u8 *cmdptr, u16 size)
+ {
+ unsigned long flags;
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
- /* Ignore interrupt if there's nothing in NIC to service.
- * This may be due to IRQ shared with another device,
-@@ -5171,7 +5269,7 @@ static irqreturn_t iwl_isr(int irq, void *data)
- IWL_DEBUG_ISR("ISR inta 0x%08x, enabled 0x%08x, fh 0x%08x\n",
- inta, inta_mask, inta_fh);
+ lbs_deb_enter(LBS_DEB_HOST);
+@@ -1819,26 +1922,25 @@ static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
+ lbs_deb_hex(LBS_DEB_HOST, "sleep confirm command", cmdptr, size);
-- /* iwl_irq_tasklet() will service interrupts and re-enable them */
-+ /* iwl4965_irq_tasklet() will service interrupts and re-enable them */
- tasklet_schedule(&priv->irq_tasklet);
+ ret = priv->hw_host_to_card(priv, MVMS_CMD, cmdptr, size);
+- priv->dnld_sent = DNLD_RES_RECEIVED;
- unplugged:
-@@ -5180,18 +5278,18 @@ static irqreturn_t iwl_isr(int irq, void *data)
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->intcounter || adapter->currenttxskb)
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (priv->intcounter || priv->currenttxskb)
+ lbs_deb_host("SEND_SLEEPC_CMD: intcounter %d, currenttxskb %p\n",
+- adapter->intcounter, adapter->currenttxskb);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ priv->intcounter, priv->currenttxskb);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- none:
- /* re-enable interrupts here since we don't have anything to service. */
-- iwl_enable_interrupts(priv);
-+ iwl4965_enable_interrupts(priv);
- spin_unlock(&priv->lock);
- return IRQ_NONE;
- }
+ if (ret) {
+ lbs_pr_alert(
+ "SEND_SLEEPC_CMD: Host to Card failed for Confirm Sleep\n");
+ } else {
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (!adapter->intcounter) {
+- adapter->psstate = PS_STATE_SLEEP;
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (!priv->intcounter) {
++ priv->psstate = PS_STATE_SLEEP;
+ } else {
+ lbs_deb_host("SEND_SLEEPC_CMD: after sent, intcounter %d\n",
+- adapter->intcounter);
++ priv->intcounter);
+ }
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- /************************** EEPROM BANDS ****************************
- *
-- * The iwl_eeprom_band definitions below provide the mapping from the
-+ * The iwl4965_eeprom_band definitions below provide the mapping from the
- * EEPROM contents to the specific channel number supported for each
- * band.
- *
-- * For example, iwl_priv->eeprom.band_3_channels[4] from the band_3
-+ * For example, iwl4965_priv->eeprom.band_3_channels[4] from the band_3
- * definition below maps to physical channel 42 in the 5.2GHz spectrum.
- * The specific geography and calibration information for that channel
- * is contained in the eeprom map itself.
-@@ -5217,76 +5315,77 @@ static irqreturn_t iwl_isr(int irq, void *data)
- *********************************************************************/
+ lbs_deb_host("SEND_SLEEPC_CMD: sent confirm sleep\n");
+ }
+@@ -1847,7 +1949,7 @@ static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
+ return ret;
+ }
- /* 2.4 GHz */
--static const u8 iwl_eeprom_band_1[14] = {
-+static const u8 iwl4965_eeprom_band_1[14] = {
- 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
- };
+-void libertas_ps_sleep(wlan_private * priv, int wait_option)
++void lbs_ps_sleep(struct lbs_private *priv, int wait_option)
+ {
+ lbs_deb_enter(LBS_DEB_HOST);
- /* 5.2 GHz bands */
--static const u8 iwl_eeprom_band_2[] = {
-+static const u8 iwl4965_eeprom_band_2[] = { /* 4915-5080MHz */
- 183, 184, 185, 187, 188, 189, 192, 196, 7, 8, 11, 12, 16
- };
+@@ -1856,7 +1958,7 @@ void libertas_ps_sleep(wlan_private * priv, int wait_option)
+ * Remove this check if it is to be supported in IBSS mode also
+ */
--static const u8 iwl_eeprom_band_3[] = { /* 5205-5320MHz */
-+static const u8 iwl4965_eeprom_band_3[] = { /* 5170-5320MHz */
- 34, 36, 38, 40, 42, 44, 46, 48, 52, 56, 60, 64
- };
+- libertas_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
++ lbs_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
+ CMD_SUBCMD_ENTER_PS, wait_option, 0, NULL);
--static const u8 iwl_eeprom_band_4[] = { /* 5500-5700MHz */
-+static const u8 iwl4965_eeprom_band_4[] = { /* 5500-5700MHz */
- 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140
- };
+ lbs_deb_leave(LBS_DEB_HOST);
+@@ -1865,19 +1967,19 @@ void libertas_ps_sleep(wlan_private * priv, int wait_option)
+ /**
+ * @brief This function sends Exit_PS command to firmware.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param wait_option wait response or not
+ * @return n/a
+ */
+-void libertas_ps_wakeup(wlan_private * priv, int wait_option)
++void lbs_ps_wakeup(struct lbs_private *priv, int wait_option)
+ {
+ __le32 Localpsmode;
--static const u8 iwl_eeprom_band_5[] = { /* 5725-5825MHz */
-+static const u8 iwl4965_eeprom_band_5[] = { /* 5725-5825MHz */
- 145, 149, 153, 157, 161, 165
- };
+ lbs_deb_enter(LBS_DEB_HOST);
--static u8 iwl_eeprom_band_6[] = { /* 2.4 FAT channel */
-+static u8 iwl4965_eeprom_band_6[] = { /* 2.4 FAT channel */
- 1, 2, 3, 4, 5, 6, 7
- };
+- Localpsmode = cpu_to_le32(WLAN802_11POWERMODECAM);
++ Localpsmode = cpu_to_le32(LBS802_11POWERMODECAM);
--static u8 iwl_eeprom_band_7[] = { /* 5.2 FAT channel */
-+static u8 iwl4965_eeprom_band_7[] = { /* 5.2 FAT channel */
- 36, 44, 52, 60, 100, 108, 116, 124, 132, 149, 157
- };
+- libertas_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
++ lbs_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
+ CMD_SUBCMD_EXIT_PS,
+ wait_option, 0, &Localpsmode);
--static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
-+static void iwl4965_init_band_reference(const struct iwl4965_priv *priv,
-+ int band,
- int *eeprom_ch_count,
-- const struct iwl_eeprom_channel
-+ const struct iwl4965_eeprom_channel
- **eeprom_ch_info,
- const u8 **eeprom_ch_index)
+@@ -1888,37 +1990,36 @@ void libertas_ps_wakeup(wlan_private * priv, int wait_option)
+ * @brief This function checks condition and prepares to
+ * send sleep confirm command to firmware if ok.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param psmode Power Saving mode
+ * @return n/a
+ */
+-void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode)
++void lbs_ps_confirm_sleep(struct lbs_private *priv, u16 psmode)
{
- switch (band) {
- case 1: /* 2.4GHz band */
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_1);
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_1);
- *eeprom_ch_info = priv->eeprom.band_1_channels;
-- *eeprom_ch_index = iwl_eeprom_band_1;
-+ *eeprom_ch_index = iwl4965_eeprom_band_1;
- break;
-- case 2: /* 5.2GHz band */
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_2);
-+ case 2: /* 4.9GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_2);
- *eeprom_ch_info = priv->eeprom.band_2_channels;
-- *eeprom_ch_index = iwl_eeprom_band_2;
-+ *eeprom_ch_index = iwl4965_eeprom_band_2;
- break;
- case 3: /* 5.2GHz band */
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_3);
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_3);
- *eeprom_ch_info = priv->eeprom.band_3_channels;
-- *eeprom_ch_index = iwl_eeprom_band_3;
-+ *eeprom_ch_index = iwl4965_eeprom_band_3;
- break;
-- case 4: /* 5.2GHz band */
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_4);
-+ case 4: /* 5.5GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_4);
- *eeprom_ch_info = priv->eeprom.band_4_channels;
-- *eeprom_ch_index = iwl_eeprom_band_4;
-+ *eeprom_ch_index = iwl4965_eeprom_band_4;
- break;
-- case 5: /* 5.2GHz band */
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_5);
-+ case 5: /* 5.7GHz band */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_5);
- *eeprom_ch_info = priv->eeprom.band_5_channels;
-- *eeprom_ch_index = iwl_eeprom_band_5;
-+ *eeprom_ch_index = iwl4965_eeprom_band_5;
- break;
-- case 6:
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_6);
-+ case 6: /* 2.4GHz FAT channels */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_6);
- *eeprom_ch_info = priv->eeprom.band_24_channels;
-- *eeprom_ch_index = iwl_eeprom_band_6;
-+ *eeprom_ch_index = iwl4965_eeprom_band_6;
- break;
-- case 7:
-- *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_7);
-+ case 7: /* 5 GHz FAT channels */
-+ *eeprom_ch_count = ARRAY_SIZE(iwl4965_eeprom_band_7);
- *eeprom_ch_info = priv->eeprom.band_52_channels;
-- *eeprom_ch_index = iwl_eeprom_band_7;
-+ *eeprom_ch_index = iwl4965_eeprom_band_7;
- break;
- default:
- BUG();
-@@ -5294,7 +5393,12 @@ static void iwl_init_band_reference(const struct iwl_priv *priv, int band,
+ unsigned long flags = 0;
+- wlan_adapter *adapter = priv->adapter;
+ u8 allowed = 1;
+
+ lbs_deb_enter(LBS_DEB_HOST);
+
+ if (priv->dnld_sent) {
+ allowed = 0;
+- lbs_deb_host("dnld_sent was set");
++ lbs_deb_host("dnld_sent was set\n");
}
- }
--const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
-+/**
-+ * iwl4965_get_channel_info - Find driver's private channel info
-+ *
-+ * Based on band and channel number.
-+ */
-+const struct iwl4965_channel_info *iwl4965_get_channel_info(const struct iwl4965_priv *priv,
- int phymode, u16 channel)
- {
- int i;
-@@ -5321,13 +5425,16 @@ const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv,
- #define CHECK_AND_PRINT(x) ((eeprom_ch_info[ch].flags & EEPROM_CHANNEL_##x) \
- ? # x " " : "")
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->cur_cmd) {
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (priv->cur_cmd) {
+ allowed = 0;
+- lbs_deb_host("cur_cmd was set");
++ lbs_deb_host("cur_cmd was set\n");
+ }
+- if (adapter->intcounter > 0) {
++ if (priv->intcounter > 0) {
+ allowed = 0;
+- lbs_deb_host("intcounter %d", adapter->intcounter);
++ lbs_deb_host("intcounter %d\n", priv->intcounter);
+ }
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
--static int iwl_init_channel_map(struct iwl_priv *priv)
+ if (allowed) {
+- lbs_deb_host("sending libertas_ps_confirm_sleep\n");
+- sendconfirmsleep(priv, (u8 *) & adapter->libertas_ps_confirm_sleep,
++ lbs_deb_host("sending lbs_ps_confirm_sleep\n");
++ sendconfirmsleep(priv, (u8 *) & priv->lbs_ps_confirm_sleep,
+ sizeof(struct PS_CMD_ConfirmSleep));
+ } else {
+ lbs_deb_host("sleep confirm has been delayed\n");
+@@ -1926,3 +2027,123 @@ void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode)
+
+ lbs_deb_leave(LBS_DEB_HOST);
+ }
++
++
+/**
-+ * iwl4965_init_channel_map - Set up driver's info for all possible channels
++ * @brief Simple callback that copies response back into command
++ *
++ * @param priv A pointer to struct lbs_private structure
++ * @param extra A pointer to the original command structure for which
++ * 'resp' is a response
++ * @param resp A pointer to the command response
++ *
++ * @return 0 on success, error on failure
+ */
-+static int iwl4965_init_channel_map(struct iwl4965_priv *priv)
++int lbs_cmd_copyback(struct lbs_private *priv, unsigned long extra,
++ struct cmd_header *resp)
++{
++ struct cmd_header *buf = (void *)extra;
++ uint16_t copy_len;
++
++ lbs_deb_enter(LBS_DEB_CMD);
++
++ copy_len = min(le16_to_cpu(buf->size), le16_to_cpu(resp->size));
++ lbs_deb_cmd("Copying back %u bytes; command response was %u bytes, "
++ "copy back buffer was %u bytes\n", copy_len,
++ le16_to_cpu(resp->size), le16_to_cpu(buf->size));
++ memcpy(buf, resp, copy_len);
++
++ lbs_deb_leave(LBS_DEB_CMD);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(lbs_cmd_copyback);
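`lbs_cmd_copyback()` clamps the copy to the smaller of the two declared sizes, so a short response never overruns the caller's buffer and a large buffer never reads past the end of the response. The same clamp in plain host-order C (the `struct hdr` layout is a hypothetical stand-in for the driver's little-endian `struct cmd_header`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical fixed-layout header: a byte count followed by payload. */
struct hdr {
    uint16_t size;     /* total size of this buffer, in bytes */
    uint8_t data[62];
};

/* Copy back at most as many bytes as both sides declare valid. */
static uint16_t copyback(struct hdr *dst, const struct hdr *src)
{
    uint16_t copy_len = dst->size < src->size ? dst->size : src->size;
    memcpy(dst, src, copy_len);
    return copy_len;
}
```

Note that the header itself is inside the copied region, so after a copyback the destination's `size` field reflects the response, just as in the driver.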
++
++struct cmd_ctrl_node *__lbs_cmd_async(struct lbs_private *priv, uint16_t command,
++ struct cmd_header *in_cmd, int in_cmd_size,
++ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
++ unsigned long callback_arg)
++{
++ struct cmd_ctrl_node *cmdnode;
++
++ lbs_deb_enter(LBS_DEB_HOST);
++
++ if (priv->surpriseremoved) {
++ lbs_deb_host("PREP_CMD: card removed\n");
++ cmdnode = ERR_PTR(-ENOENT);
++ goto done;
++ }
++
++ cmdnode = lbs_get_cmd_ctrl_node(priv);
++ if (cmdnode == NULL) {
++ lbs_deb_host("PREP_CMD: cmdnode is NULL\n");
++
++ /* Wake up main thread to execute next command */
++ wake_up_interruptible(&priv->waitq);
++ cmdnode = ERR_PTR(-ENOBUFS);
++ goto done;
++ }
++
++ cmdnode->callback = callback;
++ cmdnode->callback_arg = callback_arg;
++
++ /* Copy the incoming command to the buffer */
++ memcpy(cmdnode->cmdbuf, in_cmd, in_cmd_size);
++
++ /* Set sequence number, clean result, move to buffer */
++ priv->seqnum++;
++ cmdnode->cmdbuf->command = cpu_to_le16(command);
++ cmdnode->cmdbuf->size = cpu_to_le16(in_cmd_size);
++ cmdnode->cmdbuf->seqnum = cpu_to_le16(priv->seqnum);
++ cmdnode->cmdbuf->result = 0;
++
++ lbs_deb_host("PREP_CMD: command 0x%04x\n", command);
++
++ /* here was the big old switch() statement, which is now obsolete,
++ * because the caller of lbs_cmd() sets up all of *cmd for us. */
++
++ cmdnode->cmdwaitqwoken = 0;
++ lbs_queue_cmd(priv, cmdnode);
++ wake_up_interruptible(&priv->waitq);
++
++ done:
++ lbs_deb_leave_args(LBS_DEB_HOST, "ret %p", cmdnode);
++ return cmdnode;
++}
++
++int __lbs_cmd(struct lbs_private *priv, uint16_t command,
++ struct cmd_header *in_cmd, int in_cmd_size,
++ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
++ unsigned long callback_arg)
++{
++ struct cmd_ctrl_node *cmdnode;
++ unsigned long flags;
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_HOST);
++
++ cmdnode = __lbs_cmd_async(priv, command, in_cmd, in_cmd_size,
++ callback, callback_arg);
++ if (IS_ERR(cmdnode)) {
++ ret = PTR_ERR(cmdnode);
++ goto done;
++ }
++
++ might_sleep();
++ wait_event_interruptible(cmdnode->cmdwait_q, cmdnode->cmdwaitqwoken);
++
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ ret = cmdnode->result;
++ if (ret)
++ lbs_pr_info("PREP_CMD: command 0x%04x failed: %d\n",
++ command, ret);
++
++ __lbs_cleanup_and_insert_cmd(priv, cmdnode);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++
++done:
++ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
++ return ret;
++}
++EXPORT_SYMBOL_GPL(__lbs_cmd);
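`__lbs_cmd()` is the synchronous wrapper over `__lbs_cmd_async()`: it queues the command, sleeps on `cmdwait_q` until the completion path sets `cmdwaitqwoken`, then reads the result. A hedged userspace sketch of that sync-over-async shape using pthreads (all names here are invented; the kernel uses `wait_event_interruptible()` and a wake from the command-response path instead of a condvar):

```c
#include <pthread.h>
#include <stddef.h>

/* A command node with a completion flag, roughly mirroring
 * cmdwaitqwoken / cmdwait_q / result in struct cmd_ctrl_node. */
struct cmd_node {
    int result;
    int done;
    pthread_mutex_t lock;
    pthread_cond_t wait_q;
};

/* Stand-in for the firmware response path that completes the command. */
static void *firmware_thread(void *arg)
{
    struct cmd_node *node = arg;
    pthread_mutex_lock(&node->lock);
    node->result = 0;             /* pretend the command succeeded */
    node->done = 1;
    pthread_cond_signal(&node->wait_q);
    pthread_mutex_unlock(&node->lock);
    return NULL;
}

/* Synchronous wrapper: "submit", then block until completion. */
static int run_cmd_sync(void)
{
    struct cmd_node node = { .result = -1, .done = 0,
                             .lock = PTHREAD_MUTEX_INITIALIZER,
                             .wait_q = PTHREAD_COND_INITIALIZER };
    pthread_t fw;

    pthread_create(&fw, NULL, firmware_thread, &node);
    pthread_mutex_lock(&node.lock);
    while (!node.done)
        pthread_cond_wait(&node.wait_q, &node.lock);
    pthread_mutex_unlock(&node.lock);
    pthread_join(fw, NULL);
    return node.result;
}
```

The split also explains the ownership comment in cmd.h: the synchronous caller reclaims the node itself after waking, while an async caller must free it from the callback.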
++
++
+diff --git a/drivers/net/wireless/libertas/cmd.h b/drivers/net/wireless/libertas/cmd.h
+new file mode 100644
+index 0000000..b9ab85c
+--- /dev/null
++++ b/drivers/net/wireless/libertas/cmd.h
+@@ -0,0 +1,61 @@
++/* Copyright (C) 2007, Red Hat, Inc. */
++
++#ifndef _LBS_CMD_H_
++#define _LBS_CMD_H_
++
++#include "hostcmd.h"
++#include "dev.h"
++
++/* lbs_cmd() infers the size of the buffer to copy data back into, from
++ the size of the target of the pointer. Since the command to be sent
++ may often be smaller, that size is set in cmd->size by the caller.*/
++#define lbs_cmd(priv, cmdnr, cmd, cb, cb_arg) ({ \
++ uint16_t __sz = le16_to_cpu((cmd)->hdr.size); \
++ (cmd)->hdr.size = cpu_to_le16(sizeof(*(cmd))); \
++ __lbs_cmd(priv, cmdnr, &(cmd)->hdr, __sz, cb, cb_arg); \
++})
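The `lbs_cmd()` macro above is a GCC/Clang statement expression (`({ ... })`): it captures the caller-supplied size out of `hdr.size` *before* stamping the full structure size into the header, and the last expression becomes the macro's value. A userspace sketch of the same capture-then-overwrite idiom (the `fake_hdr`/`send_cmd` names are invented for illustration, and the endianness conversions are dropped):

```c
#include <assert.h>
#include <stdint.h>

struct fake_hdr { uint16_t size; };
struct fake_cmd { struct fake_hdr hdr; uint8_t body[30]; };

/* Stand-in for __lbs_cmd(): just report the size it was handed. */
static int send_cmd(struct fake_hdr *hdr, uint16_t sz)
{
    (void)hdr;
    return sz;
}

/* Capture the caller's size, then stamp the full struct size into
 * the header, as lbs_cmd() does (minus cpu_to_le16/le16_to_cpu). */
#define fake_lbs_cmd(cmd) ({                        \
    uint16_t __sz = (cmd)->hdr.size;                \
    (cmd)->hdr.size = (uint16_t)sizeof(*(cmd));     \
    send_cmd(&(cmd)->hdr, __sz);                    \
})
```

This mirrors the comment in cmd.h: the wire command may be smaller than the structure, so the caller sets the real length in `size`, while the response side still sees how big the copy-back buffer is.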
++
++#define lbs_cmd_with_response(priv, cmdnr, cmd) \
++ lbs_cmd(priv, cmdnr, cmd, lbs_cmd_copyback, (unsigned long) (cmd))
++
++/* __lbs_cmd() will free the cmdnode and return success/failure.
++ __lbs_cmd_async() requires that the callback free the cmdnode */
++struct cmd_ctrl_node *__lbs_cmd_async(struct lbs_private *priv, uint16_t command,
++ struct cmd_header *in_cmd, int in_cmd_size,
++ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
++ unsigned long callback_arg);
++int __lbs_cmd(struct lbs_private *priv, uint16_t command,
++ struct cmd_header *in_cmd, int in_cmd_size,
++ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
++ unsigned long callback_arg);
++
++int lbs_cmd_copyback(struct lbs_private *priv, unsigned long extra,
++ struct cmd_header *resp);
++
++int lbs_update_hw_spec(struct lbs_private *priv);
++
++int lbs_mesh_access(struct lbs_private *priv, uint16_t cmd_action,
++ struct cmd_ds_mesh_access *cmd);
++
++int lbs_get_data_rate(struct lbs_private *priv);
++int lbs_set_data_rate(struct lbs_private *priv, u8 rate);
++
++int lbs_get_channel(struct lbs_private *priv);
++int lbs_set_channel(struct lbs_private *priv, u8 channel);
++
++int lbs_mesh_config(struct lbs_private *priv, uint16_t enable, uint16_t chan);
++
++int lbs_host_sleep_cfg(struct lbs_private *priv, uint32_t criteria);
++int lbs_suspend(struct lbs_private *priv);
++int lbs_resume(struct lbs_private *priv);
++
++int lbs_cmd_802_11_inactivity_timeout(struct lbs_private *priv,
++ uint16_t cmd_action, uint16_t *timeout);
++int lbs_cmd_802_11_sleep_params(struct lbs_private *priv, uint16_t cmd_action,
++ struct sleep_params *sp);
++int lbs_cmd_802_11_set_wep(struct lbs_private *priv, uint16_t cmd_action,
++ struct assoc_request *assoc);
++int lbs_cmd_802_11_enable_rsn(struct lbs_private *priv, uint16_t cmd_action,
++ uint16_t *enable);
++
++#endif /* _LBS_CMD_H */
+diff --git a/drivers/net/wireless/libertas/cmdresp.c b/drivers/net/wireless/libertas/cmdresp.c
+index 8f90892..159216a 100644
+--- a/drivers/net/wireless/libertas/cmdresp.c
++++ b/drivers/net/wireless/libertas/cmdresp.c
+@@ -20,18 +20,17 @@
+ * reports disconnect to upper layer, clean tx/rx packets,
+ * reset link state etc.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return n/a
+ */
+-void libertas_mac_event_disconnected(wlan_private * priv)
++void lbs_mac_event_disconnected(struct lbs_private *priv)
{
- int eeprom_ch_count = 0;
- const u8 *eeprom_ch_index = NULL;
-- const struct iwl_eeprom_channel *eeprom_ch_info = NULL;
-+ const struct iwl4965_eeprom_channel *eeprom_ch_info = NULL;
- int band, ch;
-- struct iwl_channel_info *ch_info;
-+ struct iwl4965_channel_info *ch_info;
-
- if (priv->channel_count) {
- IWL_DEBUG_INFO("Channel map already initialized.\n");
-@@ -5343,15 +5450,15 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
- IWL_DEBUG_INFO("Initializing regulatory info from EEPROM\n");
-
- priv->channel_count =
-- ARRAY_SIZE(iwl_eeprom_band_1) +
-- ARRAY_SIZE(iwl_eeprom_band_2) +
-- ARRAY_SIZE(iwl_eeprom_band_3) +
-- ARRAY_SIZE(iwl_eeprom_band_4) +
-- ARRAY_SIZE(iwl_eeprom_band_5);
-+ ARRAY_SIZE(iwl4965_eeprom_band_1) +
-+ ARRAY_SIZE(iwl4965_eeprom_band_2) +
-+ ARRAY_SIZE(iwl4965_eeprom_band_3) +
-+ ARRAY_SIZE(iwl4965_eeprom_band_4) +
-+ ARRAY_SIZE(iwl4965_eeprom_band_5);
-
- IWL_DEBUG_INFO("Parsing data for %d channels.\n", priv->channel_count);
-
-- priv->channel_info = kzalloc(sizeof(struct iwl_channel_info) *
-+ priv->channel_info = kzalloc(sizeof(struct iwl4965_channel_info) *
- priv->channel_count, GFP_KERNEL);
- if (!priv->channel_info) {
- IWL_ERROR("Could not allocate channel_info\n");
-@@ -5366,7 +5473,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
- * what just in the EEPROM) */
- for (band = 1; band <= 5; band++) {
+- wlan_adapter *adapter = priv->adapter;
+ union iwreq_data wrqu;
-- iwl_init_band_reference(priv, band, &eeprom_ch_count,
-+ iwl4965_init_band_reference(priv, band, &eeprom_ch_count,
- &eeprom_ch_info, &eeprom_ch_index);
+- if (adapter->connect_status != LIBERTAS_CONNECTED)
++ if (priv->connect_status != LBS_CONNECTED)
+ return;
- /* Loop through each band adding each of the channels */
-@@ -5430,14 +5537,17 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
- }
- }
+- lbs_deb_enter(LBS_DEB_CMD);
++ lbs_deb_enter(LBS_DEB_ASSOC);
-+ /* Two additional EEPROM bands for 2.4 and 5 GHz FAT channels */
- for (band = 6; band <= 7; band++) {
- int phymode;
- u8 fat_extension_chan;
+ memset(wrqu.ap_addr.sa_data, 0x00, ETH_ALEN);
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+@@ -44,40 +43,36 @@ void libertas_mac_event_disconnected(wlan_private * priv)
+ msleep_interruptible(1000);
+ wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
-- iwl_init_band_reference(priv, band, &eeprom_ch_count,
-+ iwl4965_init_band_reference(priv, band, &eeprom_ch_count,
- &eeprom_ch_info, &eeprom_ch_index);
+- /* Free Tx and Rx packets */
+- kfree_skb(priv->adapter->currenttxskb);
+- priv->adapter->currenttxskb = NULL;
+-
+ /* report disconnect to upper layer */
+ netif_stop_queue(priv->dev);
+ netif_carrier_off(priv->dev);
-+ /* EEPROM band 6 is 2.4, band 7 is 5 GHz */
- phymode = (band == 6) ? MODE_IEEE80211B : MODE_IEEE80211A;
++ /* Free Tx and Rx packets */
++ kfree_skb(priv->currenttxskb);
++ priv->currenttxskb = NULL;
++ priv->tx_pending_len = 0;
+
- /* Loop through each band adding each of the channels */
- for (ch = 0; ch < eeprom_ch_count; ch++) {
+ /* reset SNR/NF/RSSI values */
+- memset(adapter->SNR, 0x00, sizeof(adapter->SNR));
+- memset(adapter->NF, 0x00, sizeof(adapter->NF));
+- memset(adapter->RSSI, 0x00, sizeof(adapter->RSSI));
+- memset(adapter->rawSNR, 0x00, sizeof(adapter->rawSNR));
+- memset(adapter->rawNF, 0x00, sizeof(adapter->rawNF));
+- adapter->nextSNRNF = 0;
+- adapter->numSNRNF = 0;
+- lbs_deb_cmd("current SSID '%s', length %u\n",
+- escape_essid(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len),
+- adapter->curbssparams.ssid_len);
+-
+- adapter->connect_status = LIBERTAS_DISCONNECTED;
++ memset(priv->SNR, 0x00, sizeof(priv->SNR));
++ memset(priv->NF, 0x00, sizeof(priv->NF));
++ memset(priv->RSSI, 0x00, sizeof(priv->RSSI));
++ memset(priv->rawSNR, 0x00, sizeof(priv->rawSNR));
++ memset(priv->rawNF, 0x00, sizeof(priv->rawNF));
++ priv->nextSNRNF = 0;
++ priv->numSNRNF = 0;
++ priv->connect_status = LBS_DISCONNECTED;
-@@ -5449,11 +5559,13 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
- else
- fat_extension_chan = HT_IE_EXT_CHANNEL_ABOVE;
+ /* Clear out associated SSID and BSSID since connection is
+ * no longer valid.
+ */
+- memset(&adapter->curbssparams.bssid, 0, ETH_ALEN);
+- memset(&adapter->curbssparams.ssid, 0, IW_ESSID_MAX_SIZE);
+- adapter->curbssparams.ssid_len = 0;
++ memset(&priv->curbssparams.bssid, 0, ETH_ALEN);
++ memset(&priv->curbssparams.ssid, 0, IW_ESSID_MAX_SIZE);
++ priv->curbssparams.ssid_len = 0;
-+ /* Set up driver's info for lower half */
- iwl4965_set_fat_chan_info(priv, phymode,
- eeprom_ch_index[ch],
- &(eeprom_ch_info[ch]),
- fat_extension_chan);
+- if (adapter->psstate != PS_STATE_FULL_POWER) {
++ if (priv->psstate != PS_STATE_FULL_POWER) {
+ /* make firmware to exit PS mode */
+ lbs_deb_cmd("disconnected, so exit PS mode\n");
+- libertas_ps_wakeup(priv, 0);
++ lbs_ps_wakeup(priv, 0);
+ }
+ lbs_deb_leave(LBS_DEB_CMD);
+ }
+@@ -85,11 +80,11 @@ void libertas_mac_event_disconnected(wlan_private * priv)
+ /**
+ * @brief This function handles MIC failure event.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @para event the event id
+ * @return n/a
+ */
+-static void handle_mic_failureevent(wlan_private * priv, u32 event)
++static void handle_mic_failureevent(struct lbs_private *priv, u32 event)
+ {
+ char buf[50];
-+ /* Set up driver's info for upper half */
- iwl4965_set_fat_chan_info(priv, phymode,
- (eeprom_ch_index[ch] + 4),
- &(eeprom_ch_info[ch]),
-@@ -5487,7 +5599,7 @@ static int iwl_init_channel_map(struct iwl_priv *priv)
- #define IWL_PASSIVE_DWELL_BASE (100)
- #define IWL_CHANNEL_TUNE_TIME 5
+@@ -104,15 +99,14 @@ static void handle_mic_failureevent(wlan_private * priv, u32 event)
+ strcat(buf, "multicast ");
+ }
--static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
-+static inline u16 iwl4965_get_active_dwell_time(struct iwl4965_priv *priv, int phymode)
- {
- if (phymode == MODE_IEEE80211A)
- return IWL_ACTIVE_DWELL_TIME_52;
-@@ -5495,14 +5607,14 @@ static inline u16 iwl_get_active_dwell_time(struct iwl_priv *priv, int phymode)
- return IWL_ACTIVE_DWELL_TIME_24;
+- libertas_send_iwevcustom_event(priv, buf);
++ lbs_send_iwevcustom_event(priv, buf);
+ lbs_deb_leave(LBS_DEB_CMD);
}
--static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
-+static u16 iwl4965_get_passive_dwell_time(struct iwl4965_priv *priv, int phymode)
+-static int wlan_ret_reg_access(wlan_private * priv,
++static int lbs_ret_reg_access(struct lbs_private *priv,
+ u16 type, struct cmd_ds_command *resp)
{
-- u16 active = iwl_get_active_dwell_time(priv, phymode);
-+ u16 active = iwl4965_get_active_dwell_time(priv, phymode);
- u16 passive = (phymode != MODE_IEEE80211A) ?
- IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_24 :
- IWL_PASSIVE_DWELL_BASE + IWL_PASSIVE_DWELL_TIME_52;
+ int ret = 0;
+- wlan_adapter *adapter = priv->adapter;
-- if (iwl_is_associated(priv)) {
-+ if (iwl4965_is_associated(priv)) {
- /* If we're associated, we clamp the maximum passive
- * dwell time to be 98% of the beacon interval (minus
- * 2 * channel tune time) */
-@@ -5518,30 +5630,30 @@ static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, int phymode)
- return passive;
- }
+ lbs_deb_enter(LBS_DEB_CMD);
--static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
-+static int iwl4965_get_channels_for_scan(struct iwl4965_priv *priv, int phymode,
- u8 is_active, u8 direct_mask,
-- struct iwl_scan_channel *scan_ch)
-+ struct iwl4965_scan_channel *scan_ch)
- {
- const struct ieee80211_channel *channels = NULL;
- const struct ieee80211_hw_mode *hw_mode;
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
- u16 passive_dwell = 0;
- u16 active_dwell = 0;
- int added, i;
+@@ -121,8 +115,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
+ {
+ struct cmd_ds_mac_reg_access *reg = &resp->params.macreg;
-- hw_mode = iwl_get_hw_mode(priv, phymode);
-+ hw_mode = iwl4965_get_hw_mode(priv, phymode);
- if (!hw_mode)
- return 0;
+- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
+- adapter->offsetvalue.value = le32_to_cpu(reg->value);
++ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
++ priv->offsetvalue.value = le32_to_cpu(reg->value);
+ break;
+ }
- channels = hw_mode->channels;
+@@ -130,8 +124,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
+ {
+ struct cmd_ds_bbp_reg_access *reg = &resp->params.bbpreg;
-- active_dwell = iwl_get_active_dwell_time(priv, phymode);
-- passive_dwell = iwl_get_passive_dwell_time(priv, phymode);
-+ active_dwell = iwl4965_get_active_dwell_time(priv, phymode);
-+ passive_dwell = iwl4965_get_passive_dwell_time(priv, phymode);
+- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
+- adapter->offsetvalue.value = reg->value;
++ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
++ priv->offsetvalue.value = reg->value;
+ break;
+ }
- for (i = 0, added = 0; i < hw_mode->num_channels; i++) {
- if (channels[i].chan ==
- le16_to_cpu(priv->active_rxon.channel)) {
-- if (iwl_is_associated(priv)) {
-+ if (iwl4965_is_associated(priv)) {
- IWL_DEBUG_SCAN
- ("Skipping current channel %d\n",
- le16_to_cpu(priv->active_rxon.channel));
-@@ -5552,7 +5664,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
+@@ -139,8 +133,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
+ {
+ struct cmd_ds_rf_reg_access *reg = &resp->params.rfreg;
- scan_ch->channel = channels[i].chan;
+- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
+- adapter->offsetvalue.value = reg->value;
++ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
++ priv->offsetvalue.value = reg->value;
+ break;
+ }
-- ch_info = iwl_get_channel_info(priv, phymode, scan_ch->channel);
-+ ch_info = iwl4965_get_channel_info(priv, phymode,
-+ scan_ch->channel);
- if (!is_channel_valid(ch_info)) {
- IWL_DEBUG_SCAN("Channel %d is INVALID for this SKU.\n",
- scan_ch->channel);
-@@ -5574,7 +5687,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
- scan_ch->active_dwell = cpu_to_le16(active_dwell);
- scan_ch->passive_dwell = cpu_to_le16(passive_dwell);
+@@ -152,112 +146,23 @@ static int wlan_ret_reg_access(wlan_private * priv,
+ return ret;
+ }
-- /* Set power levels to defaults */
-+ /* Set txpower levels to defaults */
- scan_ch->tpc.dsp_atten = 110;
- /* scan_pwr_info->tpc.dsp_atten; */
+-static int wlan_ret_get_hw_spec(wlan_private * priv,
+- struct cmd_ds_command *resp)
+-{
+- u32 i;
+- struct cmd_ds_get_hw_spec *hwspec = &resp->params.hwspec;
+- wlan_adapter *adapter = priv->adapter;
+- int ret = 0;
+- DECLARE_MAC_BUF(mac);
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+-
+- adapter->fwcapinfo = le32_to_cpu(hwspec->fwcapinfo);
+-
+- memcpy(adapter->fwreleasenumber, hwspec->fwreleasenumber, 4);
+-
+- lbs_deb_cmd("GET_HW_SPEC: firmware release %u.%u.%up%u\n",
+- adapter->fwreleasenumber[2], adapter->fwreleasenumber[1],
+- adapter->fwreleasenumber[0], adapter->fwreleasenumber[3]);
+- lbs_deb_cmd("GET_HW_SPEC: MAC addr %s\n",
+- print_mac(mac, hwspec->permanentaddr));
+- lbs_deb_cmd("GET_HW_SPEC: hardware interface 0x%x, hardware spec 0x%04x\n",
+- hwspec->hwifversion, hwspec->version);
+-
+- /* Clamp region code to 8-bit since FW spec indicates that it should
+- * only ever be 8-bit, even though the field size is 16-bit. Some firmware
+- * returns non-zero high 8 bits here.
+- */
+- adapter->regioncode = le16_to_cpu(hwspec->regioncode) & 0xFF;
+-
+- for (i = 0; i < MRVDRV_MAX_REGION_CODE; i++) {
+- /* use the region code to search for the index */
+- if (adapter->regioncode == libertas_region_code_to_index[i]) {
+- break;
+- }
+- }
+-
+- /* if it's unidentified region code, use the default (USA) */
+- if (i >= MRVDRV_MAX_REGION_CODE) {
+- adapter->regioncode = 0x10;
+- lbs_pr_info("unidentified region code; using the default (USA)\n");
+- }
+-
+- if (adapter->current_addr[0] == 0xff)
+- memmove(adapter->current_addr, hwspec->permanentaddr, ETH_ALEN);
+-
+- memcpy(priv->dev->dev_addr, adapter->current_addr, ETH_ALEN);
+- if (priv->mesh_dev)
+- memcpy(priv->mesh_dev->dev_addr, adapter->current_addr, ETH_ALEN);
+-
+- if (libertas_set_regiontable(priv, adapter->regioncode, 0)) {
+- ret = -1;
+- goto done;
+- }
+-
+- if (libertas_set_universaltable(priv, 0)) {
+- ret = -1;
+- goto done;
+- }
+-
+-done:
+- lbs_deb_enter_args(LBS_DEB_CMD, "ret %d", ret);
+- return ret;
+-}
+-
+-static int wlan_ret_802_11_sleep_params(wlan_private * priv,
+- struct cmd_ds_command *resp)
+-{
+- struct cmd_ds_802_11_sleep_params *sp = &resp->params.sleep_params;
+- wlan_adapter *adapter = priv->adapter;
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+-
+- lbs_deb_cmd("error 0x%x, offset 0x%x, stabletime 0x%x, calcontrol 0x%x "
+- "extsleepclk 0x%x\n", le16_to_cpu(sp->error),
+- le16_to_cpu(sp->offset), le16_to_cpu(sp->stabletime),
+- sp->calcontrol, sp->externalsleepclk);
+-
+- adapter->sp.sp_error = le16_to_cpu(sp->error);
+- adapter->sp.sp_offset = le16_to_cpu(sp->offset);
+- adapter->sp.sp_stabletime = le16_to_cpu(sp->stabletime);
+- adapter->sp.sp_calcontrol = sp->calcontrol;
+- adapter->sp.sp_extsleepclk = sp->externalsleepclk;
+- adapter->sp.sp_reserved = le16_to_cpu(sp->reserved);
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+- return 0;
+-}
+-
+-static int wlan_ret_802_11_stat(wlan_private * priv,
++static int lbs_ret_802_11_stat(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
+ {
+ lbs_deb_enter(LBS_DEB_CMD);
+-/* currently adapter->wlan802_11Stat is unused
++/* currently priv->wlan802_11Stat is unused
-@@ -5584,8 +5697,8 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
- else {
- scan_ch->tpc.tx_gain = ((1 << 5) | (5 << 3));
- /* NOTE: if we were doing 6Mb OFDM for scans we'd use
-- * power level
-- scan_ch->tpc.tx_gain = ((1<<5) | (2 << 3)) | 3;
-+ * power level:
-+ * scan_ch->tpc.tx_gain = ((1 << 5) | (2 << 3)) | 3;
- */
- }
+ struct cmd_ds_802_11_get_stat *p11Stat = &resp->params.gstat;
+- wlan_adapter *adapter = priv->adapter;
-@@ -5603,7 +5716,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, int phymode,
- return added;
+ // TODO Convert it to Big endian befor copy
+- memcpy(&adapter->wlan802_11Stat,
++ memcpy(&priv->wlan802_11Stat,
+ p11Stat, sizeof(struct cmd_ds_802_11_get_stat));
+ */
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
--static void iwl_reset_channel_flag(struct iwl_priv *priv)
-+static void iwl4965_reset_channel_flag(struct iwl4965_priv *priv)
+-static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
++static int lbs_ret_802_11_snmp_mib(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- int i, j;
- for (i = 0; i < 3; i++) {
-@@ -5613,13 +5726,13 @@ static void iwl_reset_channel_flag(struct iwl_priv *priv)
- }
+ struct cmd_ds_802_11_snmp_mib *smib = &resp->params.smib;
+@@ -273,22 +178,22 @@ static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
+ if (querytype == CMD_ACT_GET) {
+ switch (oid) {
+ case FRAGTHRESH_I:
+- priv->adapter->fragthsd =
++ priv->fragthsd =
+ le16_to_cpu(*((__le16 *)(smib->value)));
+ lbs_deb_cmd("SNMP_RESP: frag threshold %u\n",
+- priv->adapter->fragthsd);
++ priv->fragthsd);
+ break;
+ case RTSTHRESH_I:
+- priv->adapter->rtsthsd =
++ priv->rtsthsd =
+ le16_to_cpu(*((__le16 *)(smib->value)));
+ lbs_deb_cmd("SNMP_RESP: rts threshold %u\n",
+- priv->adapter->rtsthsd);
++ priv->rtsthsd);
+ break;
+ case SHORT_RETRYLIM_I:
+- priv->adapter->txretrycount =
++ priv->txretrycount =
+ le16_to_cpu(*((__le16 *)(smib->value)));
+ lbs_deb_cmd("SNMP_RESP: tx retry count %u\n",
+- priv->adapter->rtsthsd);
++ priv->rtsthsd);
+ break;
+ default:
+ break;
+@@ -299,12 +204,11 @@ static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
+ return 0;
}
--static void iwl_init_hw_rates(struct iwl_priv *priv,
-+static void iwl4965_init_hw_rates(struct iwl4965_priv *priv,
- struct ieee80211_rate *rates)
+-static int wlan_ret_802_11_key_material(wlan_private * priv,
++static int lbs_ret_802_11_key_material(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- int i;
+ struct cmd_ds_802_11_key_material *pkeymaterial =
+ &resp->params.keymaterial;
+- wlan_adapter *adapter = priv->adapter;
+ u16 action = le16_to_cpu(pkeymaterial->action);
- for (i = 0; i < IWL_RATE_COUNT; i++) {
-- rates[i].rate = iwl_rates[i].ieee * 5;
-+ rates[i].rate = iwl4965_rates[i].ieee * 5;
- rates[i].val = i; /* Rate scaling will work on indexes */
- rates[i].val2 = i;
- rates[i].flags = IEEE80211_RATE_SUPPORTED;
-@@ -5631,7 +5744,7 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
- * If CCK 1M then set rate flag to CCK else CCK_2
- * which is CCK | PREAMBLE2
- */
-- rates[i].flags |= (iwl_rates[i].plcp == 10) ?
-+ rates[i].flags |= (iwl4965_rates[i].plcp == 10) ?
- IEEE80211_RATE_CCK : IEEE80211_RATE_CCK_2;
- }
+ lbs_deb_enter(LBS_DEB_CMD);
+@@ -332,9 +236,9 @@ static int wlan_ret_802_11_key_material(wlan_private * priv,
+ break;
-@@ -5639,16 +5752,14 @@ static void iwl_init_hw_rates(struct iwl_priv *priv,
- if (IWL_BASIC_RATES_MASK & (1 << i))
- rates[i].flags |= IEEE80211_RATE_BASIC;
- }
--
-- iwl4965_init_hw_rates(priv, rates);
+ if (key_flags & KEY_INFO_WPA_UNICAST)
+- pkey = &adapter->wpa_unicast_key;
++ pkey = &priv->wpa_unicast_key;
+ else if (key_flags & KEY_INFO_WPA_MCAST)
+- pkey = &adapter->wpa_mcast_key;
++ pkey = &priv->wpa_mcast_key;
+ else
+ break;
+
+@@ -355,134 +259,85 @@ static int wlan_ret_802_11_key_material(wlan_private * priv,
+ return 0;
}
- /**
-- * iwl_init_geos - Initialize mac80211's geo/channel info based from eeprom
-+ * iwl4965_init_geos - Initialize mac80211's geo/channel info based from eeprom
- */
--static int iwl_init_geos(struct iwl_priv *priv)
-+static int iwl4965_init_geos(struct iwl4965_priv *priv)
+-static int wlan_ret_802_11_mac_address(wlan_private * priv,
++static int lbs_ret_802_11_mac_address(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
-- struct iwl_channel_info *ch;
-+ struct iwl4965_channel_info *ch;
- struct ieee80211_hw_mode *modes;
- struct ieee80211_channel *channels;
- struct ieee80211_channel *geo_ch;
-@@ -5658,10 +5769,8 @@ static int iwl_init_geos(struct iwl_priv *priv)
- A = 0,
- B = 1,
- G = 2,
-- A_11N = 3,
-- G_11N = 4,
- };
-- int mode_count = 5;
-+ int mode_count = 3;
-
- if (priv->modes) {
- IWL_DEBUG_INFO("Geography modes already initialized.\n");
-@@ -5696,11 +5805,14 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ struct cmd_ds_802_11_mac_address *macadd = &resp->params.macadd;
+- wlan_adapter *adapter = priv->adapter;
- /* 5.2GHz channels start after the 2.4GHz channels */
- modes[A].mode = MODE_IEEE80211A;
-- modes[A].channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)];
-+ modes[A].channels = &channels[ARRAY_SIZE(iwl4965_eeprom_band_1)];
- modes[A].rates = rates;
- modes[A].num_rates = 8; /* just OFDM */
- modes[A].rates = &rates[4];
- modes[A].num_channels = 0;
-+#ifdef CONFIG_IWL4965_HT
-+ iwl4965_init_ht_hw_capab(&modes[A].ht_info, MODE_IEEE80211A);
-+#endif
+ lbs_deb_enter(LBS_DEB_CMD);
- modes[B].mode = MODE_IEEE80211B;
- modes[B].channels = channels;
-@@ -5713,23 +5825,14 @@ static int iwl_init_geos(struct iwl_priv *priv)
- modes[G].rates = rates;
- modes[G].num_rates = 12; /* OFDM & CCK */
- modes[G].num_channels = 0;
--
-- modes[G_11N].mode = MODE_IEEE80211G;
-- modes[G_11N].channels = channels;
-- modes[G_11N].num_rates = 13; /* OFDM & CCK */
-- modes[G_11N].rates = rates;
-- modes[G_11N].num_channels = 0;
--
-- modes[A_11N].mode = MODE_IEEE80211A;
-- modes[A_11N].channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)];
-- modes[A_11N].rates = &rates[4];
-- modes[A_11N].num_rates = 9; /* just OFDM */
-- modes[A_11N].num_channels = 0;
-+#ifdef CONFIG_IWL4965_HT
-+ iwl4965_init_ht_hw_capab(&modes[G].ht_info, MODE_IEEE80211G);
-+#endif
+- memcpy(adapter->current_addr, macadd->macadd, ETH_ALEN);
++ memcpy(priv->current_addr, macadd->macadd, ETH_ALEN);
- priv->ieee_channels = channels;
- priv->ieee_rates = rates;
+ lbs_deb_enter(LBS_DEB_CMD);
+ return 0;
+ }
-- iwl_init_hw_rates(priv, rates);
-+ iwl4965_init_hw_rates(priv, rates);
+-static int wlan_ret_802_11_rf_tx_power(wlan_private * priv,
++static int lbs_ret_802_11_rf_tx_power(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
+ {
+ struct cmd_ds_802_11_rf_tx_power *rtp = &resp->params.txp;
+- wlan_adapter *adapter = priv->adapter;
- for (i = 0, geo_ch = channels; i < priv->channel_count; i++) {
- ch = &priv->channel_info[i];
-@@ -5744,11 +5847,9 @@ static int iwl_init_geos(struct iwl_priv *priv)
+ lbs_deb_enter(LBS_DEB_CMD);
- if (is_channel_a_band(ch)) {
- geo_ch = &modes[A].channels[modes[A].num_channels++];
-- modes[A_11N].num_channels++;
- } else {
- geo_ch = &modes[B].channels[modes[B].num_channels++];
- modes[G].num_channels++;
-- modes[G_11N].num_channels++;
- }
+- adapter->txpowerlevel = le16_to_cpu(rtp->currentlevel);
++ priv->txpowerlevel = le16_to_cpu(rtp->currentlevel);
- geo_ch->freq = ieee80211chan2mhz(ch->channel);
-@@ -5814,57 +5915,22 @@ static int iwl_init_geos(struct iwl_priv *priv)
- *
- ******************************************************************************/
+- lbs_deb_cmd("TX power currently %d\n", adapter->txpowerlevel);
++ lbs_deb_cmd("TX power currently %d\n", priv->txpowerlevel);
--static void iwl_dealloc_ucode_pci(struct iwl_priv *priv)
-+static void iwl4965_dealloc_ucode_pci(struct iwl4965_priv *priv)
- {
-- if (priv->ucode_code.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_code.len,
-- priv->ucode_code.v_addr,
-- priv->ucode_code.p_addr);
-- priv->ucode_code.v_addr = NULL;
-- }
-- if (priv->ucode_data.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_data.len,
-- priv->ucode_data.v_addr,
-- priv->ucode_data.p_addr);
-- priv->ucode_data.v_addr = NULL;
-- }
-- if (priv->ucode_data_backup.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_data_backup.len,
-- priv->ucode_data_backup.v_addr,
-- priv->ucode_data_backup.p_addr);
-- priv->ucode_data_backup.v_addr = NULL;
-- }
-- if (priv->ucode_init.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_init.len,
-- priv->ucode_init.v_addr,
-- priv->ucode_init.p_addr);
-- priv->ucode_init.v_addr = NULL;
-- }
-- if (priv->ucode_init_data.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_init_data.len,
-- priv->ucode_init_data.v_addr,
-- priv->ucode_init_data.p_addr);
-- priv->ucode_init_data.v_addr = NULL;
-- }
-- if (priv->ucode_boot.v_addr != NULL) {
-- pci_free_consistent(priv->pci_dev,
-- priv->ucode_boot.len,
-- priv->ucode_boot.v_addr,
-- priv->ucode_boot.p_addr);
-- priv->ucode_boot.v_addr = NULL;
-- }
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_code);
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_data);
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_data_backup);
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_init);
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_init_data);
-+ iwl_free_fw_desc(priv->pci_dev, &priv->ucode_boot);
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
- /**
-- * iwl_verify_inst_full - verify runtime uCode image in card vs. host,
-+ * iwl4965_verify_inst_full - verify runtime uCode image in card vs. host,
- * looking at all data.
- */
--static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
-+static int iwl4965_verify_inst_full(struct iwl4965_priv *priv, __le32 *image,
-+ u32 len)
+-static int wlan_ret_802_11_rate_adapt_rateset(wlan_private * priv,
++static int lbs_ret_802_11_rate_adapt_rateset(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- u32 val;
- u32 save_len = len;
-@@ -5873,18 +5939,18 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
-
- IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
-
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- return rc;
+ struct cmd_ds_802_11_rate_adapt_rateset *rates = &resp->params.rateset;
+- wlan_adapter *adapter = priv->adapter;
-- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
-+ iwl4965_write_direct32(priv, HBUS_TARG_MEM_RADDR, RTC_INST_LOWER_BOUND);
+ lbs_deb_enter(LBS_DEB_CMD);
- errcnt = 0;
- for (; len > 0; len -= sizeof(u32), image++) {
- /* read data comes through single port, auto-incr addr */
- /* NOTE: Use the debugless read so we don't flood kernel log
- * if IWL_DL_IO is set */
-- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
-+ val = _iwl4965_read_direct32(priv, HBUS_TARG_MEM_RDAT);
- if (val != le32_to_cpu(*image)) {
- IWL_ERROR("uCode INST section is invalid at "
- "offset 0x%x, is 0x%x, s/b 0x%x\n",
-@@ -5896,7 +5962,7 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
- }
+ if (rates->action == CMD_ACT_GET) {
+- adapter->enablehwauto = le16_to_cpu(rates->enablehwauto);
+- adapter->ratebitmap = le16_to_cpu(rates->bitmap);
++ priv->enablehwauto = le16_to_cpu(rates->enablehwauto);
++ priv->ratebitmap = le16_to_cpu(rates->bitmap);
}
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
+ }
- if (!errcnt)
- IWL_DEBUG_INFO
-@@ -5907,11 +5973,11 @@ static int iwl_verify_inst_full(struct iwl_priv *priv, __le32 * image, u32 len)
+-static int wlan_ret_802_11_data_rate(wlan_private * priv,
+- struct cmd_ds_command *resp)
+-{
+- struct cmd_ds_802_11_data_rate *pdatarate = &resp->params.drate;
+- wlan_adapter *adapter = priv->adapter;
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+-
+- lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) pdatarate,
+- sizeof(struct cmd_ds_802_11_data_rate));
+-
+- /* FIXME: get actual rates FW can do if this command actually returns
+- * all data rates supported.
+- */
+- adapter->cur_rate = libertas_fw_index_to_data_rate(pdatarate->rates[0]);
+- lbs_deb_cmd("DATA_RATE: current rate 0x%02x\n", adapter->cur_rate);
+-
+- lbs_deb_leave(LBS_DEB_CMD);
+- return 0;
+-}
+-
+-static int wlan_ret_802_11_rf_channel(wlan_private * priv,
+- struct cmd_ds_command *resp)
+-{
+- struct cmd_ds_802_11_rf_channel *rfchannel = &resp->params.rfchannel;
+- wlan_adapter *adapter = priv->adapter;
+- u16 action = le16_to_cpu(rfchannel->action);
+- u16 newchannel = le16_to_cpu(rfchannel->currentchannel);
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+-
+- if (action == CMD_OPT_802_11_RF_CHANNEL_GET
+- && adapter->curbssparams.channel != newchannel) {
+- lbs_deb_cmd("channel switch from %d to %d\n",
+- adapter->curbssparams.channel, newchannel);
+-
+- /* Update the channel again */
+- adapter->curbssparams.channel = newchannel;
+- }
+-
+- lbs_deb_enter(LBS_DEB_CMD);
+- return 0;
+-}
+-
+-static int wlan_ret_802_11_rssi(wlan_private * priv,
++static int lbs_ret_802_11_rssi(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
+ {
+ struct cmd_ds_802_11_rssi_rsp *rssirsp = &resp->params.rssirsp;
+- wlan_adapter *adapter = priv->adapter;
+ lbs_deb_enter(LBS_DEB_CMD);
- /**
-- * iwl_verify_inst_sparse - verify runtime uCode image in card vs. host,
-+ * iwl4965_verify_inst_sparse - verify runtime uCode image in card vs. host,
- * using sample data 100 bytes apart. If these sample points are good,
- * it's a pretty good bet that everything between them is good, too.
- */
--static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
-+static int iwl4965_verify_inst_sparse(struct iwl4965_priv *priv, __le32 *image, u32 len)
- {
- u32 val;
- int rc = 0;
-@@ -5920,7 +5986,7 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
+ /* store the non average value */
+- adapter->SNR[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->SNR);
+- adapter->NF[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->noisefloor);
++ priv->SNR[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->SNR);
++ priv->NF[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->noisefloor);
- IWL_DEBUG_INFO("ucode inst image size is %u\n", len);
+- adapter->SNR[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgSNR);
+- adapter->NF[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgnoisefloor);
++ priv->SNR[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgSNR);
++ priv->NF[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgnoisefloor);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- return rc;
+- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG] =
+- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_NOAVG],
+- adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
++ priv->RSSI[TYPE_BEACON][TYPE_NOAVG] =
++ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_NOAVG],
++ priv->NF[TYPE_BEACON][TYPE_NOAVG]);
-@@ -5928,9 +5994,9 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
- /* read data comes through single port, auto-incr addr */
- /* NOTE: Use the debugless read so we don't flood kernel log
- * if IWL_DL_IO is set */
-- iwl_write_restricted(priv, HBUS_TARG_MEM_RADDR,
-+ iwl4965_write_direct32(priv, HBUS_TARG_MEM_RADDR,
- i + RTC_INST_LOWER_BOUND);
-- val = _iwl_read_restricted(priv, HBUS_TARG_MEM_RDAT);
-+ val = _iwl4965_read_direct32(priv, HBUS_TARG_MEM_RDAT);
- if (val != le32_to_cpu(*image)) {
- #if 0 /* Enable this if you want to see details */
- IWL_ERROR("uCode INST section is invalid at "
-@@ -5944,17 +6010,17 @@ static int iwl_verify_inst_sparse(struct iwl_priv *priv, __le32 *image, u32 len)
- }
- }
+- adapter->RSSI[TYPE_BEACON][TYPE_AVG] =
+- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_AVG] / AVG_SCALE,
+- adapter->NF[TYPE_BEACON][TYPE_AVG] / AVG_SCALE);
++ priv->RSSI[TYPE_BEACON][TYPE_AVG] =
++ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_AVG] / AVG_SCALE,
++ priv->NF[TYPE_BEACON][TYPE_AVG] / AVG_SCALE);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ lbs_deb_cmd("RSSI: beacon %d, avg %d\n",
+- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG],
+- adapter->RSSI[TYPE_BEACON][TYPE_AVG]);
++ priv->RSSI[TYPE_BEACON][TYPE_NOAVG],
++ priv->RSSI[TYPE_BEACON][TYPE_AVG]);
- return rc;
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
+-static int wlan_ret_802_11_eeprom_access(wlan_private * priv,
++static int lbs_ret_802_11_eeprom_access(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
+ {
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_ioctl_regrdwr *pbuf;
+- pbuf = (struct wlan_ioctl_regrdwr *) adapter->prdeeprom;
++ struct lbs_ioctl_regrdwr *pbuf;
++ pbuf = (struct lbs_ioctl_regrdwr *) priv->prdeeprom;
- /**
-- * iwl_verify_ucode - determine which instruction image is in SRAM,
-+ * iwl4965_verify_ucode - determine which instruction image is in SRAM,
- * and verify its contents
- */
--static int iwl_verify_ucode(struct iwl_priv *priv)
-+static int iwl4965_verify_ucode(struct iwl4965_priv *priv)
+ lbs_deb_enter_args(LBS_DEB_CMD, "len %d",
+ le16_to_cpu(resp->params.rdeeprom.bytecount));
+@@ -503,46 +358,45 @@ static int wlan_ret_802_11_eeprom_access(wlan_private * priv,
+ return 0;
+ }
+
+-static int wlan_ret_get_log(wlan_private * priv,
++static int lbs_ret_get_log(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- __le32 *image;
- u32 len;
-@@ -5963,7 +6029,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
- /* Try bootstrap */
- image = (__le32 *)priv->ucode_boot.v_addr;
- len = priv->ucode_boot.len;
-- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl4965_verify_inst_sparse(priv, image, len);
- if (rc == 0) {
- IWL_DEBUG_INFO("Bootstrap uCode is good in inst SRAM\n");
- return 0;
-@@ -5972,7 +6038,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
- /* Try initialize */
- image = (__le32 *)priv->ucode_init.v_addr;
- len = priv->ucode_init.len;
-- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl4965_verify_inst_sparse(priv, image, len);
- if (rc == 0) {
- IWL_DEBUG_INFO("Initialize uCode is good in inst SRAM\n");
- return 0;
-@@ -5981,7 +6047,7 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
- /* Try runtime/protocol */
- image = (__le32 *)priv->ucode_code.v_addr;
- len = priv->ucode_code.len;
-- rc = iwl_verify_inst_sparse(priv, image, len);
-+ rc = iwl4965_verify_inst_sparse(priv, image, len);
- if (rc == 0) {
- IWL_DEBUG_INFO("Runtime uCode is good in inst SRAM\n");
- return 0;
-@@ -5989,18 +6055,19 @@ static int iwl_verify_ucode(struct iwl_priv *priv)
+ struct cmd_ds_802_11_get_log *logmessage = &resp->params.glog;
+- wlan_adapter *adapter = priv->adapter;
- IWL_ERROR("NO VALID UCODE IMAGE IN INSTRUCTION SRAM!!\n");
+ lbs_deb_enter(LBS_DEB_CMD);
-- /* Show first several data entries in instruction SRAM.
-- * Selection of bootstrap image is arbitrary. */
-+ /* Since nothing seems to match, show first several data entries in
-+ * instruction SRAM, so maybe visual inspection will give a clue.
-+ * Selection of bootstrap image (vs. other images) is arbitrary. */
- image = (__le32 *)priv->ucode_boot.v_addr;
- len = priv->ucode_boot.len;
-- rc = iwl_verify_inst_full(priv, image, len);
-+ rc = iwl4965_verify_inst_full(priv, image, len);
+ /* Stored little-endian */
+- memcpy(&adapter->logmsg, logmessage, sizeof(struct cmd_ds_802_11_get_log));
++ memcpy(&priv->logmsg, logmessage, sizeof(struct cmd_ds_802_11_get_log));
- return rc;
+ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
}
-
- /* check contents of special bootstrap uCode SRAM */
--static int iwl_verify_bsm(struct iwl_priv *priv)
-+static int iwl4965_verify_bsm(struct iwl4965_priv *priv)
+-static int libertas_ret_802_11_enable_rsn(wlan_private * priv,
+- struct cmd_ds_command *resp)
++static int lbs_ret_802_11_bcn_ctrl(struct lbs_private * priv,
++ struct cmd_ds_command *resp)
{
- __le32 *image = priv->ucode_boot.v_addr;
- u32 len = priv->ucode_boot.len;
-@@ -6010,11 +6077,11 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
- IWL_DEBUG_INFO("Begin verify bsm\n");
+- struct cmd_ds_802_11_enable_rsn *enable_rsn = &resp->params.enbrsn;
+- wlan_adapter *adapter = priv->adapter;
+- u32 * pdata_buf = adapter->cur_cmd->pdata_buf;
++ struct cmd_ds_802_11_beacon_control *bcn_ctrl =
++ &resp->params.bcn_ctrl;
- /* verify BSM SRAM contents */
-- val = iwl_read_restricted_reg(priv, BSM_WR_DWCOUNT_REG);
-+ val = iwl4965_read_prph(priv, BSM_WR_DWCOUNT_REG);
- for (reg = BSM_SRAM_LOWER_BOUND;
- reg < BSM_SRAM_LOWER_BOUND + len;
- reg += sizeof(u32), image ++) {
-- val = iwl_read_restricted_reg(priv, reg);
-+ val = iwl4965_read_prph(priv, reg);
- if (val != le32_to_cpu(*image)) {
- IWL_ERROR("BSM uCode verification failed at "
- "addr 0x%08X+%u (of %u), is 0x%x, s/b 0x%x\n",
-@@ -6031,7 +6098,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
- }
+ lbs_deb_enter(LBS_DEB_CMD);
- /**
-- * iwl_load_bsm - Load bootstrap instructions
-+ * iwl4965_load_bsm - Load bootstrap instructions
- *
- * BSM operation:
- *
-@@ -6062,7 +6129,7 @@ static int iwl_verify_bsm(struct iwl_priv *priv)
- * the runtime uCode instructions and the backup data cache into SRAM,
- * and re-launches the runtime uCode from where it left off.
- */
--static int iwl_load_bsm(struct iwl_priv *priv)
-+static int iwl4965_load_bsm(struct iwl4965_priv *priv)
- {
- __le32 *image = priv->ucode_boot.v_addr;
- u32 len = priv->ucode_boot.len;
-@@ -6082,8 +6149,8 @@ static int iwl_load_bsm(struct iwl_priv *priv)
- return -EINVAL;
+- if (enable_rsn->action == cpu_to_le16(CMD_ACT_GET)) {
+- if (pdata_buf)
+- *pdata_buf = (u32) le16_to_cpu(enable_rsn->enable);
++ if (bcn_ctrl->action == cpu_to_le16(CMD_ACT_GET)) {
++ priv->beacon_enable = (u8) le16_to_cpu(bcn_ctrl->beacon_enable);
++ priv->beacon_period = le16_to_cpu(bcn_ctrl->beacon_period);
+ }
- /* Tell bootstrap uCode where to find the "Initialize" uCode
-- * in host DRAM ... bits 31:0 for 3945, bits 35:4 for 4965.
-- * NOTE: iwl_initialize_alive_start() will replace these values,
-+ * in host DRAM ... host DRAM physical address bits 35:4 for 4965.
-+ * NOTE: iwl4965_initialize_alive_start() will replace these values,
- * after the "initialize" uCode has run, to point to
- * runtime/protocol instructions and backup data cache. */
- pinst = priv->ucode_init.p_addr >> 4;
-@@ -6091,42 +6158,42 @@ static int iwl_load_bsm(struct iwl_priv *priv)
- inst_len = priv->ucode_init.len;
- data_len = priv->ucode_init_data.len;
+- lbs_deb_leave(LBS_DEB_CMD);
++ lbs_deb_leave(LBS_DEB_CMD);
+ return 0;
+ }
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc)
- return rc;
+-static inline int handle_cmd_response(u16 respcmd,
+- struct cmd_ds_command *resp,
+- wlan_private *priv)
++static inline int handle_cmd_response(struct lbs_private *priv,
++ unsigned long dummy,
++ struct cmd_header *cmd_response)
+ {
++ struct cmd_ds_command *resp = (struct cmd_ds_command *) cmd_response;
+ int ret = 0;
+ unsigned long flags;
+- wlan_adapter *adapter = priv->adapter;
++ uint16_t respcmd = le16_to_cpu(resp->command);
-- iwl_write_restricted_reg(priv, BSM_DRAM_INST_PTR_REG, pinst);
-- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
-- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
-+ iwl4965_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
-+ iwl4965_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-+ iwl4965_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG, inst_len);
-+ iwl4965_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG, data_len);
+ lbs_deb_enter(LBS_DEB_HOST);
- /* Fill BSM memory with bootstrap instructions */
- for (reg_offset = BSM_SRAM_LOWER_BOUND;
- reg_offset < BSM_SRAM_LOWER_BOUND + len;
- reg_offset += sizeof(u32), image++)
-- _iwl_write_restricted_reg(priv, reg_offset,
-+ _iwl4965_write_prph(priv, reg_offset,
- le32_to_cpu(*image));
+@@ -550,218 +404,213 @@ static inline int handle_cmd_response(u16 respcmd,
+ case CMD_RET(CMD_MAC_REG_ACCESS):
+ case CMD_RET(CMD_BBP_REG_ACCESS):
+ case CMD_RET(CMD_RF_REG_ACCESS):
+- ret = wlan_ret_reg_access(priv, respcmd, resp);
+- break;
+-
+- case CMD_RET(CMD_GET_HW_SPEC):
+- ret = wlan_ret_get_hw_spec(priv, resp);
++ ret = lbs_ret_reg_access(priv, respcmd, resp);
+ break;
-- rc = iwl_verify_bsm(priv);
-+ rc = iwl4965_verify_bsm(priv);
- if (rc) {
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
- return rc;
- }
+ case CMD_RET(CMD_802_11_SCAN):
+- ret = libertas_ret_80211_scan(priv, resp);
++ ret = lbs_ret_80211_scan(priv, resp);
+ break;
- /* Tell BSM to copy from BSM SRAM into instruction SRAM, when asked */
-- iwl_write_restricted_reg(priv, BSM_WR_MEM_SRC_REG, 0x0);
-- iwl_write_restricted_reg(priv, BSM_WR_MEM_DST_REG,
-+ iwl4965_write_prph(priv, BSM_WR_MEM_SRC_REG, 0x0);
-+ iwl4965_write_prph(priv, BSM_WR_MEM_DST_REG,
- RTC_INST_LOWER_BOUND);
-- iwl_write_restricted_reg(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
-+ iwl4965_write_prph(priv, BSM_WR_DWCOUNT_REG, len / sizeof(u32));
+ case CMD_RET(CMD_802_11_GET_LOG):
+- ret = wlan_ret_get_log(priv, resp);
++ ret = lbs_ret_get_log(priv, resp);
+ break;
- /* Load bootstrap code into instruction SRAM now,
- * to prepare to load "initialize" uCode */
-- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
-+ iwl4965_write_prph(priv, BSM_WR_CTRL_REG,
- BSM_WR_CTRL_REG_BIT_START);
+ case CMD_RET_802_11_ASSOCIATE:
+ case CMD_RET(CMD_802_11_ASSOCIATE):
+ case CMD_RET(CMD_802_11_REASSOCIATE):
+- ret = libertas_ret_80211_associate(priv, resp);
++ ret = lbs_ret_80211_associate(priv, resp);
+ break;
- /* Wait for load of bootstrap uCode to finish */
- for (i = 0; i < 100; i++) {
-- done = iwl_read_restricted_reg(priv, BSM_WR_CTRL_REG);
-+ done = iwl4965_read_prph(priv, BSM_WR_CTRL_REG);
- if (!(done & BSM_WR_CTRL_REG_BIT_START))
- break;
- udelay(10);
-@@ -6140,29 +6207,30 @@ static int iwl_load_bsm(struct iwl_priv *priv)
+ case CMD_RET(CMD_802_11_DISASSOCIATE):
+ case CMD_RET(CMD_802_11_DEAUTHENTICATE):
+- ret = libertas_ret_80211_disassociate(priv, resp);
++ ret = lbs_ret_80211_disassociate(priv, resp);
+ break;
- /* Enable future boot loads whenever power management unit triggers it
- * (e.g. when powering back up after power-save shutdown) */
-- iwl_write_restricted_reg(priv, BSM_WR_CTRL_REG,
-+ iwl4965_write_prph(priv, BSM_WR_CTRL_REG,
- BSM_WR_CTRL_REG_BIT_START_EN);
+ case CMD_RET(CMD_802_11_AD_HOC_START):
+ case CMD_RET(CMD_802_11_AD_HOC_JOIN):
+- ret = libertas_ret_80211_ad_hoc_start(priv, resp);
++ ret = lbs_ret_80211_ad_hoc_start(priv, resp);
+ break;
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ case CMD_RET(CMD_802_11_GET_STAT):
+- ret = wlan_ret_802_11_stat(priv, resp);
++ ret = lbs_ret_802_11_stat(priv, resp);
+ break;
- return 0;
- }
+ case CMD_RET(CMD_802_11_SNMP_MIB):
+- ret = wlan_ret_802_11_snmp_mib(priv, resp);
++ ret = lbs_ret_802_11_snmp_mib(priv, resp);
+ break;
--static void iwl_nic_start(struct iwl_priv *priv)
-+static void iwl4965_nic_start(struct iwl4965_priv *priv)
- {
- /* Remove all resets to allow NIC to operate */
-- iwl_write32(priv, CSR_RESET, 0);
-+ iwl4965_write32(priv, CSR_RESET, 0);
- }
+ case CMD_RET(CMD_802_11_RF_TX_POWER):
+- ret = wlan_ret_802_11_rf_tx_power(priv, resp);
++ ret = lbs_ret_802_11_rf_tx_power(priv, resp);
+ break;
-+
- /**
-- * iwl_read_ucode - Read uCode images from disk file.
-+ * iwl4965_read_ucode - Read uCode images from disk file.
- *
- * Copy into buffers for card to fetch via bus-mastering
- */
--static int iwl_read_ucode(struct iwl_priv *priv)
-+static int iwl4965_read_ucode(struct iwl4965_priv *priv)
- {
-- struct iwl_ucode *ucode;
-- int rc = 0;
-+ struct iwl4965_ucode *ucode;
-+ int ret;
- const struct firmware *ucode_raw;
- const char *name = "iwlwifi-4965" IWL4965_UCODE_API ".ucode";
- u8 *src;
-@@ -6171,9 +6239,10 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+ case CMD_RET(CMD_802_11_SET_AFC):
+ case CMD_RET(CMD_802_11_GET_AFC):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- memmove(adapter->cur_cmd->pdata_buf, &resp->params.afc,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.afc,
+ sizeof(struct cmd_ds_802_11_afc));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- /* Ask kernel firmware_class module to get the boot firmware off disk.
- * request_firmware() is synchronous, file is in memory on return. */
-- rc = request_firmware(&ucode_raw, name, &priv->pci_dev->dev);
-- if (rc < 0) {
-- IWL_ERROR("%s firmware file req failed: Reason %d\n", name, rc);
-+ ret = request_firmware(&ucode_raw, name, &priv->pci_dev->dev);
-+ if (ret < 0) {
-+ IWL_ERROR("%s firmware file req failed: Reason %d\n",
-+ name, ret);
- goto error;
- }
+ break;
-@@ -6183,7 +6252,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- /* Make sure that we got at least our header! */
- if (ucode_raw->size < sizeof(*ucode)) {
- IWL_ERROR("File size way too small!\n");
-- rc = -EINVAL;
-+ ret = -EINVAL;
- goto err_release;
- }
+ case CMD_RET(CMD_MAC_MULTICAST_ADR):
+ case CMD_RET(CMD_MAC_CONTROL):
+- case CMD_RET(CMD_802_11_SET_WEP):
+ case CMD_RET(CMD_802_11_RESET):
+ case CMD_RET(CMD_802_11_AUTHENTICATE):
+- case CMD_RET(CMD_802_11_RADIO_CONTROL):
+ case CMD_RET(CMD_802_11_BEACON_STOP):
+ break;
-@@ -6216,43 +6285,43 @@ static int iwl_read_ucode(struct iwl_priv *priv)
+- case CMD_RET(CMD_802_11_ENABLE_RSN):
+- ret = libertas_ret_802_11_enable_rsn(priv, resp);
+- break;
+-
+- case CMD_RET(CMD_802_11_DATA_RATE):
+- ret = wlan_ret_802_11_data_rate(priv, resp);
+- break;
+ case CMD_RET(CMD_802_11_RATE_ADAPT_RATESET):
+- ret = wlan_ret_802_11_rate_adapt_rateset(priv, resp);
+- break;
+- case CMD_RET(CMD_802_11_RF_CHANNEL):
+- ret = wlan_ret_802_11_rf_channel(priv, resp);
++ ret = lbs_ret_802_11_rate_adapt_rateset(priv, resp);
+ break;
- IWL_DEBUG_INFO("uCode file size %d too small\n",
- (int)ucode_raw->size);
-- rc = -EINVAL;
-+ ret = -EINVAL;
- goto err_release;
- }
+ case CMD_RET(CMD_802_11_RSSI):
+- ret = wlan_ret_802_11_rssi(priv, resp);
++ ret = lbs_ret_802_11_rssi(priv, resp);
+ break;
- /* Verify that uCode images will fit in card's SRAM */
- if (inst_size > IWL_MAX_INST_SIZE) {
-- IWL_DEBUG_INFO("uCode instr len %d too large to fit in card\n",
-- (int)inst_size);
-- rc = -EINVAL;
-+ IWL_DEBUG_INFO("uCode instr len %d too large to fit in\n",
-+ inst_size);
-+ ret = -EINVAL;
- goto err_release;
- }
+ case CMD_RET(CMD_802_11_MAC_ADDRESS):
+- ret = wlan_ret_802_11_mac_address(priv, resp);
++ ret = lbs_ret_802_11_mac_address(priv, resp);
+ break;
- if (data_size > IWL_MAX_DATA_SIZE) {
-- IWL_DEBUG_INFO("uCode data len %d too large to fit in card\n",
-- (int)data_size);
-- rc = -EINVAL;
-+ IWL_DEBUG_INFO("uCode data len %d too large to fit in\n",
-+ data_size);
-+ ret = -EINVAL;
- goto err_release;
- }
- if (init_size > IWL_MAX_INST_SIZE) {
- IWL_DEBUG_INFO
-- ("uCode init instr len %d too large to fit in card\n",
-- (int)init_size);
-- rc = -EINVAL;
-+ ("uCode init instr len %d too large to fit in\n",
-+ init_size);
-+ ret = -EINVAL;
- goto err_release;
- }
- if (init_data_size > IWL_MAX_DATA_SIZE) {
- IWL_DEBUG_INFO
-- ("uCode init data len %d too large to fit in card\n",
-- (int)init_data_size);
-- rc = -EINVAL;
-+ ("uCode init data len %d too large to fit in\n",
-+ init_data_size);
-+ ret = -EINVAL;
- goto err_release;
- }
- if (boot_size > IWL_MAX_BSM_SIZE) {
- IWL_DEBUG_INFO
-- ("uCode boot instr len %d too large to fit in bsm\n",
-- (int)boot_size);
-- rc = -EINVAL;
-+ ("uCode boot instr len %d too large to fit in\n",
-+ boot_size);
-+ ret = -EINVAL;
- goto err_release;
- }
+ case CMD_RET(CMD_802_11_AD_HOC_STOP):
+- ret = libertas_ret_80211_ad_hoc_stop(priv, resp);
++ ret = lbs_ret_80211_ad_hoc_stop(priv, resp);
+ break;
-@@ -6262,66 +6331,50 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- * 1) unmodified from disk
- * 2) backup cache for save/restore during power-downs */
- priv->ucode_code.len = inst_size;
-- priv->ucode_code.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_code.len,
-- &(priv->ucode_code.p_addr));
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_code);
+ case CMD_RET(CMD_802_11_KEY_MATERIAL):
+- ret = wlan_ret_802_11_key_material(priv, resp);
++ ret = lbs_ret_802_11_key_material(priv, resp);
+ break;
- priv->ucode_data.len = data_size;
-- priv->ucode_data.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_data.len,
-- &(priv->ucode_data.p_addr));
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_data);
+ case CMD_RET(CMD_802_11_EEPROM_ACCESS):
+- ret = wlan_ret_802_11_eeprom_access(priv, resp);
++ ret = lbs_ret_802_11_eeprom_access(priv, resp);
+ break;
- priv->ucode_data_backup.len = data_size;
-- priv->ucode_data_backup.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_data_backup.len,
-- &(priv->ucode_data_backup.p_addr));
+ case CMD_RET(CMD_802_11D_DOMAIN_INFO):
+- ret = libertas_ret_802_11d_domain_info(priv, resp);
+- break;
-
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_data_backup);
+- case CMD_RET(CMD_802_11_SLEEP_PARAMS):
+- ret = wlan_ret_802_11_sleep_params(priv, resp);
+- break;
+- case CMD_RET(CMD_802_11_INACTIVITY_TIMEOUT):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- *((u16 *) adapter->cur_cmd->pdata_buf) =
+- le16_to_cpu(resp->params.inactivity_timeout.timeout);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ ret = lbs_ret_802_11d_domain_info(priv, resp);
+ break;
- /* Initialization instructions and data */
-- priv->ucode_init.len = init_size;
-- priv->ucode_init.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_init.len,
-- &(priv->ucode_init.p_addr));
--
-- priv->ucode_init_data.len = init_data_size;
-- priv->ucode_init_data.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_init_data.len,
-- &(priv->ucode_init_data.p_addr));
-+ if (init_size && init_data_size) {
-+ priv->ucode_init.len = init_size;
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_init);
-+
-+ priv->ucode_init_data.len = init_data_size;
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_init_data);
+ case CMD_RET(CMD_802_11_TPC_CFG):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- memmove(adapter->cur_cmd->pdata_buf, &resp->params.tpccfg,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.tpccfg,
+ sizeof(struct cmd_ds_802_11_tpc_cfg));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ break;
+ case CMD_RET(CMD_802_11_LED_GPIO_CTRL):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- memmove(adapter->cur_cmd->pdata_buf, &resp->params.ledgpio,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.ledgpio,
+ sizeof(struct cmd_ds_802_11_led_ctrl));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ break;
+
-+ if (!priv->ucode_init.v_addr || !priv->ucode_init_data.v_addr)
-+ goto err_pci_alloc;
-+ }
+ case CMD_RET(CMD_802_11_PWR_CFG):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- memmove(adapter->cur_cmd->pdata_buf, &resp->params.pwrcfg,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.pwrcfg,
+ sizeof(struct cmd_ds_802_11_pwr_cfg));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- /* Bootstrap (instructions only, no data) */
-- priv->ucode_boot.len = boot_size;
-- priv->ucode_boot.v_addr =
-- pci_alloc_consistent(priv->pci_dev,
-- priv->ucode_boot.len,
-- &(priv->ucode_boot.p_addr));
-+ if (boot_size) {
-+ priv->ucode_boot.len = boot_size;
-+ iwl_alloc_fw_desc(priv->pci_dev, &priv->ucode_boot);
+ break;
-- if (!priv->ucode_code.v_addr || !priv->ucode_data.v_addr ||
-- !priv->ucode_init.v_addr || !priv->ucode_init_data.v_addr ||
-- !priv->ucode_boot.v_addr || !priv->ucode_data_backup.v_addr)
-- goto err_pci_alloc;
-+ if (!priv->ucode_boot.v_addr)
-+ goto err_pci_alloc;
-+ }
+ case CMD_RET(CMD_GET_TSF):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- memcpy(priv->adapter->cur_cmd->pdata_buf,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ memcpy((void *)priv->cur_cmd->callback_arg,
+ &resp->params.gettsf.tsfvalue, sizeof(u64));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ break;
+ case CMD_RET(CMD_BT_ACCESS):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->cur_cmd->pdata_buf)
+- memcpy(adapter->cur_cmd->pdata_buf,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (priv->cur_cmd->callback_arg)
++ memcpy((void *)priv->cur_cmd->callback_arg,
+ &resp->params.bt.addr1, 2 * ETH_ALEN);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ break;
+ case CMD_RET(CMD_FWT_ACCESS):
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->cur_cmd->pdata_buf)
+- memcpy(adapter->cur_cmd->pdata_buf, &resp->params.fwt,
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ if (priv->cur_cmd->callback_arg)
++ memcpy((void *)priv->cur_cmd->callback_arg, &resp->params.fwt,
+ sizeof(resp->params.fwt));
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ break;
+- case CMD_RET(CMD_MESH_ACCESS):
+- if (adapter->cur_cmd->pdata_buf)
+- memcpy(adapter->cur_cmd->pdata_buf, &resp->params.mesh,
+- sizeof(resp->params.mesh));
++ case CMD_RET(CMD_802_11_BEACON_CTRL):
++ ret = lbs_ret_802_11_bcn_ctrl(priv, resp);
+ break;
++
+ default:
+ lbs_deb_host("CMD_RESP: unknown cmd response 0x%04x\n",
+- resp->command);
++ le16_to_cpu(resp->command));
+ break;
+ }
+ lbs_deb_leave(LBS_DEB_HOST);
+ return ret;
+ }
- /* Copy images into buffers for card's bus-master reads ... */
+-int libertas_process_rx_command(wlan_private * priv)
++int lbs_process_rx_command(struct lbs_private *priv)
+ {
+- u16 respcmd;
+- struct cmd_ds_command *resp;
+- wlan_adapter *adapter = priv->adapter;
++ uint16_t respcmd, curcmd;
++ struct cmd_header *resp;
+ int ret = 0;
+- ulong flags;
+- u16 result;
++ unsigned long flags;
++ uint16_t result;
- /* Runtime instructions (first block of data in file) */
- src = &ucode->data[0];
- len = priv->ucode_code.len;
-- IWL_DEBUG_INFO("Copying (but not loading) uCode instr len %d\n",
-- (int)len);
-+ IWL_DEBUG_INFO("Copying (but not loading) uCode instr len %Zd\n", len);
- memcpy(priv->ucode_code.v_addr, src, len);
- IWL_DEBUG_INFO("uCode instr buf vaddr = 0x%p, paddr = 0x%08x\n",
- priv->ucode_code.v_addr, (u32)priv->ucode_code.p_addr);
+ lbs_deb_enter(LBS_DEB_HOST);
- /* Runtime data (2nd block)
-- * NOTE: Copy into backup buffer will be done in iwl_up() */
-+ * NOTE: Copy into backup buffer will be done in iwl4965_up() */
- src = &ucode->data[inst_size];
- len = priv->ucode_data.len;
-- IWL_DEBUG_INFO("Copying (but not loading) uCode data len %d\n",
-- (int)len);
-+ IWL_DEBUG_INFO("Copying (but not loading) uCode data len %Zd\n", len);
- memcpy(priv->ucode_data.v_addr, src, len);
- memcpy(priv->ucode_data_backup.v_addr, src, len);
+- /* Now we got response from FW, cancel the command timer */
+- del_timer(&adapter->command_timer);
+-
+- mutex_lock(&adapter->lock);
+- spin_lock_irqsave(&adapter->driver_lock, flags);
++ mutex_lock(&priv->lock);
++ spin_lock_irqsave(&priv->driver_lock, flags);
-@@ -6329,8 +6382,8 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- if (init_size) {
- src = &ucode->data[inst_size + data_size];
- len = priv->ucode_init.len;
-- IWL_DEBUG_INFO("Copying (but not loading) init instr len %d\n",
-- (int)len);
-+ IWL_DEBUG_INFO("Copying (but not loading) init instr len %Zd\n",
-+ len);
- memcpy(priv->ucode_init.v_addr, src, len);
+- if (!adapter->cur_cmd) {
++ if (!priv->cur_cmd) {
+ lbs_deb_host("CMD_RESP: cur_cmd is NULL\n");
+ ret = -1;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ goto done;
}
+- resp = (struct cmd_ds_command *)(adapter->cur_cmd->bufvirtualaddr);
++
++ resp = (void *)priv->upld_buf;
++
++ curcmd = le16_to_cpu(resp->command);
-@@ -6338,16 +6391,15 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- if (init_data_size) {
- src = &ucode->data[inst_size + data_size + init_size];
- len = priv->ucode_init_data.len;
-- IWL_DEBUG_INFO("Copying (but not loading) init data len %d\n",
-- (int)len);
-+ IWL_DEBUG_INFO("Copying (but not loading) init data len %Zd\n",
-+ len);
- memcpy(priv->ucode_init_data.v_addr, src, len);
- }
+ respcmd = le16_to_cpu(resp->command);
+ result = le16_to_cpu(resp->result);
- /* Bootstrap instructions (5th block) */
- src = &ucode->data[inst_size + data_size + init_size + init_data_size];
- len = priv->ucode_boot.len;
-- IWL_DEBUG_INFO("Copying (but not loading) boot instr len %d\n",
-- (int)len);
-+ IWL_DEBUG_INFO("Copying (but not loading) boot instr len %Zd\n", len);
- memcpy(priv->ucode_boot.v_addr, src, len);
+- lbs_deb_host("CMD_RESP: response 0x%04x, size %d, jiffies %lu\n",
+- respcmd, priv->upld_len, jiffies);
+- lbs_deb_hex(LBS_DEB_HOST, "CMD_RESP", adapter->cur_cmd->bufvirtualaddr,
+- priv->upld_len);
+-
+- if (!(respcmd & 0x8000)) {
+- lbs_deb_host("invalid response!\n");
+- adapter->cur_cmd_retcode = -1;
+- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
+- adapter->nr_cmd_pending--;
+- adapter->cur_cmd = NULL;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ lbs_deb_host("CMD_RESP: response 0x%04x, seq %d, size %d, jiffies %lu\n",
++ respcmd, le16_to_cpu(resp->seqnum), priv->upld_len, jiffies);
++ lbs_deb_hex(LBS_DEB_HOST, "CMD_RESP", (void *) resp, priv->upld_len);
++
++ if (le16_to_cpu(resp->seqnum) != priv->seqnum) {
++ lbs_pr_info("Received CMD_RESP with invalid sequence %d (expected %d)\n",
++ le16_to_cpu(resp->seqnum), priv->seqnum);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ ret = -1;
++ goto done;
++ }
++ if (respcmd != CMD_RET(curcmd) &&
++ respcmd != CMD_802_11_ASSOCIATE && curcmd != CMD_RET_802_11_ASSOCIATE) {
++ lbs_pr_info("Invalid CMD_RESP %x to command %x!\n", respcmd, curcmd);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ ret = -1;
++ goto done;
++ }
++
++ if (resp->result == cpu_to_le16(0x0004)) {
++ /* 0x0004 means -EAGAIN. Drop the response, let it time out
++ and be resubmitted */
++ lbs_pr_info("Firmware returns DEFER to command %x. Will let it time out...\n",
++ le16_to_cpu(resp->command));
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ ret = -1;
+ goto done;
+ }
- /* We have our copies now, allow OS release its copies */
-@@ -6356,19 +6408,19 @@ static int iwl_read_ucode(struct iwl_priv *priv)
++ /* Now we got response from FW, cancel the command timer */
++ del_timer(&priv->command_timer);
++ priv->cmd_timed_out = 0;
++ if (priv->nr_retries) {
++ lbs_pr_info("Received result %x to command %x after %d retries\n",
++ result, curcmd, priv->nr_retries);
++ priv->nr_retries = 0;
++ }
++
+ /* Store the response code to cur_cmd_retcode. */
+- adapter->cur_cmd_retcode = result;;
++ priv->cur_cmd_retcode = result;
- err_pci_alloc:
- IWL_ERROR("failed to allocate pci memory\n");
-- rc = -ENOMEM;
-- iwl_dealloc_ucode_pci(priv);
-+ ret = -ENOMEM;
-+ iwl4965_dealloc_ucode_pci(priv);
+ if (respcmd == CMD_RET(CMD_802_11_PS_MODE)) {
+- struct cmd_ds_802_11_ps_mode *psmode = &resp->params.psmode;
++ struct cmd_ds_802_11_ps_mode *psmode = (void *) &resp[1];
+ u16 action = le16_to_cpu(psmode->action);
- err_release:
- release_firmware(ucode_raw);
+ lbs_deb_host(
+@@ -774,54 +623,45 @@ int libertas_process_rx_command(wlan_private * priv)
+ /*
+ * We should not re-try enter-ps command in
+ * ad-hoc mode. It takes place in
+- * libertas_execute_next_command().
++ * lbs_execute_next_command().
+ */
+- if (adapter->mode == IW_MODE_ADHOC &&
++ if (priv->mode == IW_MODE_ADHOC &&
+ action == CMD_SUBCMD_ENTER_PS)
+- adapter->psmode = WLAN802_11POWERMODECAM;
++ priv->psmode = LBS802_11POWERMODECAM;
+ } else if (action == CMD_SUBCMD_ENTER_PS) {
+- adapter->needtowakeup = 0;
+- adapter->psstate = PS_STATE_AWAKE;
++ priv->needtowakeup = 0;
++ priv->psstate = PS_STATE_AWAKE;
- error:
-- return rc;
-+ return ret;
- }
+ lbs_deb_host("CMD_RESP: ENTER_PS command response\n");
+- if (adapter->connect_status != LIBERTAS_CONNECTED) {
++ if (priv->connect_status != LBS_CONNECTED) {
+ /*
+ * When Deauth Event received before Enter_PS command
+ * response, We need to wake up the firmware.
+ */
+ lbs_deb_host(
+- "disconnected, invoking libertas_ps_wakeup\n");
++ "disconnected, invoking lbs_ps_wakeup\n");
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
+- mutex_unlock(&adapter->lock);
+- libertas_ps_wakeup(priv, 0);
+- mutex_lock(&adapter->lock);
+- spin_lock_irqsave(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ mutex_unlock(&priv->lock);
++ lbs_ps_wakeup(priv, 0);
++ mutex_lock(&priv->lock);
++ spin_lock_irqsave(&priv->driver_lock, flags);
+ }
+ } else if (action == CMD_SUBCMD_EXIT_PS) {
+- adapter->needtowakeup = 0;
+- adapter->psstate = PS_STATE_FULL_POWER;
++ priv->needtowakeup = 0;
++ priv->psstate = PS_STATE_FULL_POWER;
+ lbs_deb_host("CMD_RESP: EXIT_PS command response\n");
+ } else {
+ lbs_deb_host("CMD_RESP: PS action 0x%X\n", action);
+ }
- /**
-- * iwl_set_ucode_ptrs - Set uCode address location
-+ * iwl4965_set_ucode_ptrs - Set uCode address location
- *
- * Tell initialization uCode where to find runtime uCode.
- *
-@@ -6376,7 +6428,7 @@ static int iwl_read_ucode(struct iwl_priv *priv)
- * We need to replace them to load runtime uCode inst and data,
- * and to save runtime data when powering down.
- */
--static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
-+static int iwl4965_set_ucode_ptrs(struct iwl4965_priv *priv)
- {
- dma_addr_t pinst;
- dma_addr_t pdata;
-@@ -6388,24 +6440,24 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
- pdata = priv->ucode_data_backup.p_addr >> 4;
+- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
+- adapter->nr_cmd_pending--;
+- adapter->cur_cmd = NULL;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ lbs_complete_command(priv, priv->cur_cmd, result);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
- spin_lock_irqsave(&priv->lock, flags);
-- rc = iwl_grab_restricted_access(priv);
-+ rc = iwl4965_grab_nic_access(priv);
- if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
- return rc;
+ ret = 0;
+ goto done;
}
- /* Tell bootstrap uCode where to find image to load */
-- iwl_write_restricted_reg(priv, BSM_DRAM_INST_PTR_REG, pinst);
-- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-- iwl_write_restricted_reg(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
-+ iwl4965_write_prph(priv, BSM_DRAM_INST_PTR_REG, pinst);
-+ iwl4965_write_prph(priv, BSM_DRAM_DATA_PTR_REG, pdata);
-+ iwl4965_write_prph(priv, BSM_DRAM_DATA_BYTECOUNT_REG,
- priv->ucode_data.len);
+- if (adapter->cur_cmd->cmdflags & CMD_F_HOSTCMD) {
+- /* Copy the response back to response buffer */
+- memcpy(adapter->cur_cmd->pdata_buf, resp,
+- le16_to_cpu(resp->size));
+- adapter->cur_cmd->cmdflags &= ~CMD_F_HOSTCMD;
+- }
+-
+ /* If the command is not successful, cleanup and return failure */
+ if ((result != 0 || !(respcmd & 0x8000))) {
+ lbs_deb_host("CMD_RESP: error 0x%04x in command reply 0x%04x\n",
+@@ -836,106 +676,132 @@ int libertas_process_rx_command(wlan_private * priv)
+ break;
- /* Inst bytecount must be last to set up, bit 31 signals uCode
- * that all new ptr/size info is in place */
-- iwl_write_restricted_reg(priv, BSM_DRAM_INST_BYTECOUNT_REG,
-+ iwl4965_write_prph(priv, BSM_DRAM_INST_BYTECOUNT_REG,
- priv->ucode_code.len | BSM_DRAM_INST_LOAD);
+ }
+-
+- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
+- adapter->nr_cmd_pending--;
+- adapter->cur_cmd = NULL;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ lbs_complete_command(priv, priv->cur_cmd, result);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ ret = -1;
+ goto done;
+ }
- spin_unlock_irqrestore(&priv->lock, flags);
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
-@@ -6415,7 +6467,7 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
+- ret = handle_cmd_response(respcmd, resp, priv);
++ if (priv->cur_cmd && priv->cur_cmd->callback) {
++ ret = priv->cur_cmd->callback(priv, priv->cur_cmd->callback_arg,
++ resp);
++ } else
++ ret = handle_cmd_response(priv, 0, resp);
+
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- if (adapter->cur_cmd) {
++ spin_lock_irqsave(&priv->driver_lock, flags);
++
++ if (priv->cur_cmd) {
+ /* Clean up and Put current command back to cmdfreeq */
+- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
+- adapter->nr_cmd_pending--;
+- WARN_ON(adapter->nr_cmd_pending > 128);
+- adapter->cur_cmd = NULL;
++ lbs_complete_command(priv, priv->cur_cmd, result);
+ }
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
+
+ done:
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
+ return ret;
}
- /**
-- * iwl_init_alive_start - Called after REPLY_ALIVE notification receieved
-+ * iwl4965_init_alive_start - Called after REPLY_ALIVE notification received
- *
- * Called after REPLY_ALIVE notification received from "initialize" uCode.
- *
-@@ -6425,7 +6477,7 @@ static int iwl_set_ucode_ptrs(struct iwl_priv *priv)
- *
- * Tell "initialize" uCode to go ahead and load the runtime uCode.
- */
--static void iwl_init_alive_start(struct iwl_priv *priv)
-+static void iwl4965_init_alive_start(struct iwl4965_priv *priv)
+-int libertas_process_event(wlan_private * priv)
++static int lbs_send_confirmwake(struct lbs_private *priv)
++{
++ struct cmd_header *cmd = &priv->lbs_ps_confirm_wake;
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_HOST);
++
++ cmd->command = cpu_to_le16(CMD_802_11_WAKEUP_CONFIRM);
++ cmd->size = cpu_to_le16(sizeof(*cmd));
++ cmd->seqnum = cpu_to_le16(++priv->seqnum);
++ cmd->result = 0;
++
++ lbs_deb_host("SEND_WAKEC_CMD: before download\n");
++
++ lbs_deb_hex(LBS_DEB_HOST, "wake confirm command", (void *)cmd, sizeof(*cmd));
++
++ ret = priv->hw_host_to_card(priv, MVMS_CMD, (void *)cmd, sizeof(*cmd));
++ if (ret)
++ lbs_pr_alert("SEND_WAKEC_CMD: Host to Card failed for Confirm Wake\n");
++
++ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
++ return ret;
++}
++
++int lbs_process_event(struct lbs_private *priv)
{
- /* Check alive response for "valid" sign from uCode */
- if (priv->card_alive_init.is_valid != UCODE_VALID_OK) {
-@@ -6438,7 +6490,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
- /* Bootstrap uCode has loaded initialize uCode ... verify inst image.
- * This is a paranoid check, because we would not have gotten the
- * "initialize" alive if code weren't properly loaded. */
-- if (iwl_verify_ucode(priv)) {
-+ if (iwl4965_verify_ucode(priv)) {
- /* Runtime instruction load was bad;
- * take it all the way back down so we can try again */
- IWL_DEBUG_INFO("Bad \"initialize\" uCode load.\n");
-@@ -6452,7 +6504,7 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
- * load and launch runtime uCode, which will send us another "Alive"
- * notification. */
- IWL_DEBUG_INFO("Initialization Alive received.\n");
-- if (iwl_set_ucode_ptrs(priv)) {
-+ if (iwl4965_set_ucode_ptrs(priv)) {
- /* Runtime instruction load won't happen;
- * take it all the way back down so we can try again */
- IWL_DEBUG_INFO("Couldn't set up uCode pointers.\n");
-@@ -6466,11 +6518,11 @@ static void iwl_init_alive_start(struct iwl_priv *priv)
+ int ret = 0;
+- wlan_adapter *adapter = priv->adapter;
+ u32 eventcause;
+ lbs_deb_enter(LBS_DEB_CMD);
- /**
-- * iwl_alive_start - called after REPLY_ALIVE notification received
-+ * iwl4965_alive_start - called after REPLY_ALIVE notification received
- * from protocol/runtime uCode (initialization uCode's
-- * Alive gets handled by iwl_init_alive_start()).
-+ * Alive gets handled by iwl4965_init_alive_start()).
- */
--static void iwl_alive_start(struct iwl_priv *priv)
-+static void iwl4965_alive_start(struct iwl4965_priv *priv)
- {
- int rc = 0;
+- spin_lock_irq(&adapter->driver_lock);
+- eventcause = adapter->eventcause;
+- spin_unlock_irq(&adapter->driver_lock);
++ spin_lock_irq(&priv->driver_lock);
++ eventcause = priv->eventcause >> SBI_EVENT_CAUSE_SHIFT;
++ spin_unlock_irq(&priv->driver_lock);
-@@ -6486,14 +6538,14 @@ static void iwl_alive_start(struct iwl_priv *priv)
- /* Initialize uCode has loaded Runtime uCode ... verify inst image.
- * This is a paranoid check, because we would not have gotten the
- * "runtime" alive if code weren't properly loaded. */
-- if (iwl_verify_ucode(priv)) {
-+ if (iwl4965_verify_ucode(priv)) {
- /* Runtime instruction load was bad;
- * take it all the way back down so we can try again */
- IWL_DEBUG_INFO("Bad runtime uCode load.\n");
- goto restart;
- }
+- lbs_deb_cmd("event cause 0x%x\n", eventcause);
++ lbs_deb_cmd("event cause %d\n", eventcause);
-- iwl_clear_stations_table(priv);
-+ iwl4965_clear_stations_table(priv);
+- switch (eventcause >> SBI_EVENT_CAUSE_SHIFT) {
++ switch (eventcause) {
+ case MACREG_INT_CODE_LINK_SENSED:
+ lbs_deb_cmd("EVENT: MACREG_INT_CODE_LINK_SENSED\n");
+ break;
- rc = iwl4965_alive_notify(priv);
- if (rc) {
-@@ -6502,78 +6554,61 @@ static void iwl_alive_start(struct iwl_priv *priv)
- goto restart;
- }
+ case MACREG_INT_CODE_DEAUTHENTICATED:
+ lbs_deb_cmd("EVENT: deauthenticated\n");
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
+ break;
-- /* After the ALIVE response, we can process host commands */
-+ /* After the ALIVE response, we can send host commands to 4965 uCode */
- set_bit(STATUS_ALIVE, &priv->status);
+ case MACREG_INT_CODE_DISASSOCIATED:
+ lbs_deb_cmd("EVENT: disassociated\n");
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
+ break;
- /* Clear out the uCode error bit if it is set */
- clear_bit(STATUS_FW_ERROR, &priv->status);
+- case MACREG_INT_CODE_LINK_LOSE_NO_SCAN:
++ case MACREG_INT_CODE_LINK_LOST_NO_SCAN:
+ lbs_deb_cmd("EVENT: link lost\n");
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
+ break;
-- rc = iwl_init_channel_map(priv);
-+ rc = iwl4965_init_channel_map(priv);
- if (rc) {
- IWL_ERROR("initializing regulatory failed: %d\n", rc);
- return;
- }
+ case MACREG_INT_CODE_PS_SLEEP:
+ lbs_deb_cmd("EVENT: sleep\n");
-- iwl_init_geos(priv);
-+ iwl4965_init_geos(priv);
-+ iwl4965_reset_channel_flag(priv);
+ /* handle unexpected PS SLEEP event */
+- if (adapter->psstate == PS_STATE_FULL_POWER) {
++ if (priv->psstate == PS_STATE_FULL_POWER) {
+ lbs_deb_cmd(
+ "EVENT: in FULL POWER mode, ignoreing PS_SLEEP\n");
+ break;
+ }
+- adapter->psstate = PS_STATE_PRE_SLEEP;
++ priv->psstate = PS_STATE_PRE_SLEEP;
-- if (iwl_is_rfkill(priv))
-+ if (iwl4965_is_rfkill(priv))
- return;
+- libertas_ps_confirm_sleep(priv, (u16) adapter->psmode);
++ lbs_ps_confirm_sleep(priv, (u16) priv->psmode);
-- if (!priv->mac80211_registered) {
-- /* Unlock so any user space entry points can call back into
-- * the driver without a deadlock... */
-- mutex_unlock(&priv->mutex);
-- iwl_rate_control_register(priv->hw);
-- rc = ieee80211_register_hw(priv->hw);
-- priv->hw->conf.beacon_int = 100;
-- mutex_lock(&priv->mutex);
--
-- if (rc) {
-- iwl_rate_control_unregister(priv->hw);
-- IWL_ERROR("Failed to register network "
-- "device (error %d)\n", rc);
-- return;
-- }
--
-- priv->mac80211_registered = 1;
+ break;
+
++ case MACREG_INT_CODE_HOST_AWAKE:
++ lbs_deb_cmd("EVENT: HOST_AWAKE\n");
++ lbs_send_confirmwake(priv);
++ break;
++
+ case MACREG_INT_CODE_PS_AWAKE:
+ lbs_deb_cmd("EVENT: awake\n");
-
-- iwl_reset_channel_flag(priv);
-- } else
-- ieee80211_start_queues(priv->hw);
-+ ieee80211_start_queues(priv->hw);
+ /* handle unexpected PS AWAKE event */
+- if (adapter->psstate == PS_STATE_FULL_POWER) {
++ if (priv->psstate == PS_STATE_FULL_POWER) {
+ lbs_deb_cmd(
+ "EVENT: In FULL POWER mode - ignore PS AWAKE\n");
+ break;
+ }
- priv->active_rate = priv->rates_mask;
- priv->active_rate_basic = priv->rates_mask & IWL_BASIC_RATES_MASK;
+- adapter->psstate = PS_STATE_AWAKE;
++ priv->psstate = PS_STATE_AWAKE;
-- iwl_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
-+ iwl4965_send_power_mode(priv, IWL_POWER_LEVEL(priv->power_mode));
+- if (adapter->needtowakeup) {
++ if (priv->needtowakeup) {
+ /*
+ * wait for the command processing to finish
+ * before resuming sending
+- * adapter->needtowakeup will be set to FALSE
+- * in libertas_ps_wakeup()
++ * priv->needtowakeup will be set to FALSE
++ * in lbs_ps_wakeup()
+ */
+ lbs_deb_cmd("waking up ...\n");
+- libertas_ps_wakeup(priv, 0);
++ lbs_ps_wakeup(priv, 0);
+ }
+ break;
-- if (iwl_is_associated(priv)) {
-- struct iwl_rxon_cmd *active_rxon =
-- (struct iwl_rxon_cmd *)(&priv->active_rxon);
-+ if (iwl4965_is_associated(priv)) {
-+ struct iwl4965_rxon_cmd *active_rxon =
-+ (struct iwl4965_rxon_cmd *)(&priv->active_rxon);
+@@ -979,24 +845,24 @@ int libertas_process_event(wlan_private * priv)
+ break;
+ }
+ lbs_pr_info("EVENT: MESH_AUTO_STARTED\n");
+- adapter->connect_status = LIBERTAS_CONNECTED;
+- if (priv->mesh_open == 1) {
+- netif_wake_queue(priv->mesh_dev);
++ priv->mesh_connect_status = LBS_CONNECTED;
++ if (priv->mesh_open) {
+ netif_carrier_on(priv->mesh_dev);
++ if (!priv->tx_pending_len)
++ netif_wake_queue(priv->mesh_dev);
+ }
+- adapter->mode = IW_MODE_ADHOC;
++ priv->mode = IW_MODE_ADHOC;
+ schedule_work(&priv->sync_channel);
+ break;
- memcpy(&priv->staging_rxon, &priv->active_rxon,
- sizeof(priv->staging_rxon));
- active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
- } else {
- /* Initialize our rx_config data */
-- iwl_connection_init_rx_config(priv);
-+ iwl4965_connection_init_rx_config(priv);
- memcpy(priv->staging_rxon.node_addr, priv->mac_addr, ETH_ALEN);
+ default:
+- lbs_pr_alert("EVENT: unknown event id 0x%04x\n",
+- eventcause >> SBI_EVENT_CAUSE_SHIFT);
++ lbs_pr_alert("EVENT: unknown event id %d\n", eventcause);
+ break;
}
-- /* Configure BT coexistence */
-- iwl_send_bt_config(priv);
-+ /* Configure Bluetooth device coexistence support */
-+ iwl4965_send_bt_config(priv);
-
- /* Configure the adapter for unassociated operation */
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
-
- /* At this point, the NIC is initialized and operational */
- priv->notif_missed_beacons = 0;
- set_bit(STATUS_READY, &priv->status);
+- spin_lock_irq(&adapter->driver_lock);
+- adapter->eventcause = 0;
+- spin_unlock_irq(&adapter->driver_lock);
++ spin_lock_irq(&priv->driver_lock);
++ priv->eventcause = 0;
++ spin_unlock_irq(&priv->driver_lock);
- iwl4965_rf_kill_ct_config(priv);
-+
- IWL_DEBUG_INFO("ALIVE processing complete.\n");
-+ wake_up_interruptible(&priv->wait_command_queue);
+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ return ret;
+diff --git a/drivers/net/wireless/libertas/debugfs.c b/drivers/net/wireless/libertas/debugfs.c
+index 0bda0b5..fd67b77 100644
+--- a/drivers/net/wireless/libertas/debugfs.c
++++ b/drivers/net/wireless/libertas/debugfs.c
+@@ -10,15 +10,16 @@
+ #include "decl.h"
+ #include "host.h"
+ #include "debugfs.h"
++#include "cmd.h"
- if (priv->error_recovering)
-- iwl_error_recovery(priv);
-+ iwl4965_error_recovery(priv);
+-static struct dentry *libertas_dir = NULL;
++static struct dentry *lbs_dir;
+ static char *szStates[] = {
+ "Connected",
+ "Disconnected"
+ };
- return;
+ #ifdef PROC_DEBUG
+-static void libertas_debug_init(wlan_private * priv, struct net_device *dev);
++static void lbs_debug_init(struct lbs_private *priv, struct net_device *dev);
+ #endif
-@@ -6581,9 +6616,9 @@ static void iwl_alive_start(struct iwl_priv *priv)
- queue_work(priv->workqueue, &priv->restart);
- }
+ static int open_file_generic(struct inode *inode, struct file *file)
+@@ -35,19 +36,19 @@ static ssize_t write_file_dummy(struct file *file, const char __user *buf,
--static void iwl_cancel_deferred_work(struct iwl_priv *priv);
-+static void iwl4965_cancel_deferred_work(struct iwl4965_priv *priv);
+ static const size_t len = PAGE_SIZE;
--static void __iwl_down(struct iwl_priv *priv)
-+static void __iwl4965_down(struct iwl4965_priv *priv)
+-static ssize_t libertas_dev_info(struct file *file, char __user *userbuf,
++static ssize_t lbs_dev_info(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
{
- unsigned long flags;
- int exit_pending = test_bit(STATUS_EXIT_PENDING, &priv->status);
-@@ -6596,7 +6631,7 @@ static void __iwl_down(struct iwl_priv *priv)
- if (!exit_pending)
- set_bit(STATUS_EXIT_PENDING, &priv->status);
-
-- iwl_clear_stations_table(priv);
-+ iwl4965_clear_stations_table(priv);
-
- /* Unblock any waiting calls */
- wake_up_interruptible_all(&priv->wait_command_queue);
-@@ -6607,17 +6642,17 @@ static void __iwl_down(struct iwl_priv *priv)
- clear_bit(STATUS_EXIT_PENDING, &priv->status);
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ size_t pos = 0;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
+ ssize_t res;
- /* stop and reset the on-board processor */
-- iwl_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
-+ iwl4965_write32(priv, CSR_RESET, CSR_RESET_REG_FLAG_NEVO_RESET);
+ pos += snprintf(buf+pos, len-pos, "state = %s\n",
+- szStates[priv->adapter->connect_status]);
++ szStates[priv->connect_status]);
+ pos += snprintf(buf+pos, len-pos, "region_code = %02x\n",
+- (u32) priv->adapter->regioncode);
++ (u32) priv->regioncode);
- /* tell the device to stop sending interrupts */
-- iwl_disable_interrupts(priv);
-+ iwl4965_disable_interrupts(priv);
+ res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- if (priv->mac80211_registered)
- ieee80211_stop_queues(priv->hw);
+@@ -56,10 +57,10 @@ static ssize_t libertas_dev_info(struct file *file, char __user *userbuf,
+ }
-- /* If we have not previously called iwl_init() then
-+ /* If we have not previously called iwl4965_init() then
- * clear all bits but the RF Kill and SUSPEND bits and return */
-- if (!iwl_is_init(priv)) {
-+ if (!iwl4965_is_init(priv)) {
- priv->status = test_bit(STATUS_RF_KILL_HW, &priv->status) <<
- STATUS_RF_KILL_HW |
- test_bit(STATUS_RF_KILL_SW, &priv->status) <<
-@@ -6639,53 +6674,52 @@ static void __iwl_down(struct iwl_priv *priv)
- STATUS_FW_ERROR;
- spin_lock_irqsave(&priv->lock, flags);
-- iwl_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-+ iwl4965_clear_bit(priv, CSR_GP_CNTRL,
-+ CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
- spin_unlock_irqrestore(&priv->lock, flags);
+-static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
++static ssize_t lbs_getscantable(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ size_t pos = 0;
+ int numscansdone = 0, res;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+@@ -70,8 +71,8 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
+ pos += snprintf(buf+pos, len-pos,
+ "# | ch | rssi | bssid | cap | Qual | SSID \n");
-- iwl_hw_txq_ctx_stop(priv);
-- iwl_hw_rxq_stop(priv);
-+ iwl4965_hw_txq_ctx_stop(priv);
-+ iwl4965_hw_rxq_stop(priv);
+- mutex_lock(&priv->adapter->lock);
+- list_for_each_entry (iter_bss, &priv->adapter->network_list, list) {
++ mutex_lock(&priv->lock);
++ list_for_each_entry (iter_bss, &priv->network_list, list) {
+ u16 ibss = (iter_bss->capability & WLAN_CAPABILITY_IBSS);
+ u16 privacy = (iter_bss->capability & WLAN_CAPABILITY_PRIVACY);
+ u16 spectrum_mgmt = (iter_bss->capability & WLAN_CAPABILITY_SPECTRUM_MGMT);
+@@ -90,7 +91,7 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
- spin_lock_irqsave(&priv->lock, flags);
-- if (!iwl_grab_restricted_access(priv)) {
-- iwl_write_restricted_reg(priv, APMG_CLK_DIS_REG,
-+ if (!iwl4965_grab_nic_access(priv)) {
-+ iwl4965_write_prph(priv, APMG_CLK_DIS_REG,
- APMG_CLK_VAL_DMA_CLK_RQT);
-- iwl_release_restricted_access(priv);
-+ iwl4965_release_nic_access(priv);
+ numscansdone++;
}
- spin_unlock_irqrestore(&priv->lock, flags);
-
- udelay(5);
-
-- iwl_hw_nic_stop_master(priv);
-- iwl_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
-- iwl_hw_nic_reset(priv);
-+ iwl4965_hw_nic_stop_master(priv);
-+ iwl4965_set_bit(priv, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
-+ iwl4965_hw_nic_reset(priv);
-
- exit:
-- memset(&priv->card_alive, 0, sizeof(struct iwl_alive_resp));
-+ memset(&priv->card_alive, 0, sizeof(struct iwl4965_alive_resp));
+- mutex_unlock(&priv->adapter->lock);
++ mutex_unlock(&priv->lock);
- if (priv->ibss_beacon)
- dev_kfree_skb(priv->ibss_beacon);
- priv->ibss_beacon = NULL;
+ res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- /* clear out any free frames */
-- iwl_clear_free_frames(priv);
-+ iwl4965_clear_free_frames(priv);
+@@ -98,83 +99,75 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
+ return res;
}
--static void iwl_down(struct iwl_priv *priv)
-+static void iwl4965_down(struct iwl4965_priv *priv)
+-static ssize_t libertas_sleepparams_write(struct file *file,
++static ssize_t lbs_sleepparams_write(struct file *file,
+ const char __user *user_buf, size_t count,
+ loff_t *ppos)
{
- mutex_lock(&priv->mutex);
-- __iwl_down(priv);
-+ __iwl4965_down(priv);
- mutex_unlock(&priv->mutex);
+- wlan_private *priv = file->private_data;
+- ssize_t buf_size, res;
++ struct lbs_private *priv = file->private_data;
++ ssize_t buf_size, ret;
++ struct sleep_params sp;
+ int p1, p2, p3, p4, p5, p6;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
-- iwl_cancel_deferred_work(priv);
-+ iwl4965_cancel_deferred_work(priv);
- }
+ buf_size = min(count, len - 1);
+ if (copy_from_user(buf, user_buf, buf_size)) {
+- res = -EFAULT;
++ ret = -EFAULT;
+ goto out_unlock;
+ }
+- res = sscanf(buf, "%d %d %d %d %d %d", &p1, &p2, &p3, &p4, &p5, &p6);
+- if (res != 6) {
+- res = -EFAULT;
++ ret = sscanf(buf, "%d %d %d %d %d %d", &p1, &p2, &p3, &p4, &p5, &p6);
++ if (ret != 6) {
++ ret = -EINVAL;
+ goto out_unlock;
+ }
+- priv->adapter->sp.sp_error = p1;
+- priv->adapter->sp.sp_offset = p2;
+- priv->adapter->sp.sp_stabletime = p3;
+- priv->adapter->sp.sp_calcontrol = p4;
+- priv->adapter->sp.sp_extsleepclk = p5;
+- priv->adapter->sp.sp_reserved = p6;
+-
+- res = libertas_prepare_and_send_command(priv,
+- CMD_802_11_SLEEP_PARAMS,
+- CMD_ACT_SET,
+- CMD_OPTION_WAITFORRSP, 0, NULL);
+-
+- if (!res)
+- res = count;
+- else
+- res = -EINVAL;
++ sp.sp_error = p1;
++ sp.sp_offset = p2;
++ sp.sp_stabletime = p3;
++ sp.sp_calcontrol = p4;
++ sp.sp_extsleepclk = p5;
++ sp.sp_reserved = p6;
++
++ ret = lbs_cmd_802_11_sleep_params(priv, CMD_ACT_SET, &sp);
++ if (!ret)
++ ret = count;
++ else if (ret > 0)
++ ret = -EINVAL;
- #define MAX_HW_RESTARTS 5
+ out_unlock:
+ free_page(addr);
+- return res;
++ return ret;
+ }
--static int __iwl_up(struct iwl_priv *priv)
-+static int __iwl4965_up(struct iwl4965_priv *priv)
+-static ssize_t libertas_sleepparams_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_sleepparams_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- DECLARE_MAC_BUF(mac);
- int rc, i;
-- u32 hw_rf_kill = 0;
-
- if (test_bit(STATUS_EXIT_PENDING, &priv->status)) {
- IWL_WARNING("Exit pending; will not bring the NIC up\n");
-@@ -6695,7 +6729,19 @@ static int __iwl_up(struct iwl_priv *priv)
- if (test_bit(STATUS_RF_KILL_SW, &priv->status)) {
- IWL_WARNING("Radio disabled by SW RF kill (module "
- "parameter)\n");
-- return 0;
-+ return -ENODEV;
-+ }
-+
-+ /* If platform's RF_KILL switch is NOT set to KILL */
-+ if (iwl4965_read32(priv, CSR_GP_CNTRL) &
-+ CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW)
-+ clear_bit(STATUS_RF_KILL_HW, &priv->status);
-+ else {
-+ set_bit(STATUS_RF_KILL_HW, &priv->status);
-+ if (!test_bit(STATUS_IN_SUSPEND, &priv->status)) {
-+ IWL_WARNING("Radio disabled by HW RF Kill switch\n");
-+ return -ENODEV;
-+ }
- }
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res;
++ struct lbs_private *priv = file->private_data;
++ ssize_t ret;
+ size_t pos = 0;
++ struct sleep_params sp;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
- if (!priv->ucode_data_backup.v_addr || !priv->ucode_data.v_addr) {
-@@ -6703,53 +6749,45 @@ static int __iwl_up(struct iwl_priv *priv)
- return -EIO;
- }
+- res = libertas_prepare_and_send_command(priv,
+- CMD_802_11_SLEEP_PARAMS,
+- CMD_ACT_GET,
+- CMD_OPTION_WAITFORRSP, 0, NULL);
+- if (res) {
+- res = -EFAULT;
++ ret = lbs_cmd_802_11_sleep_params(priv, CMD_ACT_GET, &sp);
++ if (ret)
+ goto out_unlock;
+- }
-- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
-+ iwl4965_write32(priv, CSR_INT, 0xFFFFFFFF);
+- pos += snprintf(buf, len, "%d %d %d %d %d %d\n", adapter->sp.sp_error,
+- adapter->sp.sp_offset, adapter->sp.sp_stabletime,
+- adapter->sp.sp_calcontrol, adapter->sp.sp_extsleepclk,
+- adapter->sp.sp_reserved);
++ pos += snprintf(buf, len, "%d %d %d %d %d %d\n", sp.sp_error,
++ sp.sp_offset, sp.sp_stabletime,
++ sp.sp_calcontrol, sp.sp_extsleepclk,
++ sp.sp_reserved);
-- rc = iwl_hw_nic_init(priv);
-+ rc = iwl4965_hw_nic_init(priv);
- if (rc) {
- IWL_ERROR("Unable to int nic\n");
- return rc;
- }
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
++ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- /* make sure rfkill handshake bits are cleared */
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR,
- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+ out_unlock:
+ free_page(addr);
+- return res;
++ return ret;
+ }
- /* clear (again), then enable host interrupts */
-- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
-- iwl_enable_interrupts(priv);
-+ iwl4965_write32(priv, CSR_INT, 0xFFFFFFFF);
-+ iwl4965_enable_interrupts(priv);
+-static ssize_t libertas_extscan(struct file *file, const char __user *userbuf,
++static ssize_t lbs_extscan(struct file *file, const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ union iwreq_data wrqu;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+@@ -186,7 +179,7 @@ static ssize_t libertas_extscan(struct file *file, const char __user *userbuf,
+ goto out_unlock;
+ }
- /* really make sure rfkill handshake bits are cleared */
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-+ iwl4965_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+- libertas_send_specific_ssid_scan(priv, buf, strlen(buf)-1, 0);
++ lbs_send_specific_ssid_scan(priv, buf, strlen(buf)-1, 0);
- /* Copy original ucode data image from disk into backup cache.
- * This will be used to initialize the on-board processor's
- * data SRAM for a clean start when the runtime program first loads. */
- memcpy(priv->ucode_data_backup.v_addr, priv->ucode_data.v_addr,
-- priv->ucode_data.len);
-+ priv->ucode_data.len);
+ memset(&wrqu, 0, sizeof(union iwreq_data));
+ wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
+@@ -196,45 +189,8 @@ out_unlock:
+ return count;
+ }
-- /* If platform's RF_KILL switch is set to KILL,
-- * wait for BIT_INT_RF_KILL interrupt before loading uCode
-- * and getting things started */
-- if (!(iwl_read32(priv, CSR_GP_CNTRL) &
-- CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
-- hw_rf_kill = 1;
+-static int libertas_parse_chan(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg, int dur)
+-{
+- char *start, *end, *hold, *str;
+- int i = 0;
-
-- if (test_bit(STATUS_RF_KILL_HW, &priv->status) || hw_rf_kill) {
-- IWL_WARNING("Radio disabled by HW RF Kill switch\n");
-+ /* We return success when we resume from suspend and rf_kill is on. */
-+ if (test_bit(STATUS_RF_KILL_HW, &priv->status))
- return 0;
+- start = strstr(buf, "chan=");
+- if (!start)
+- return -EINVAL;
+- start += 5;
+- end = strchr(start, ' ');
+- if (!end)
+- end = buf + count;
+- hold = kzalloc((end - start)+1, GFP_KERNEL);
+- if (!hold)
+- return -ENOMEM;
+- strncpy(hold, start, end - start);
+- hold[(end-start)+1] = '\0';
+- while(hold && (str = strsep(&hold, ","))) {
+- int chan;
+- char band, passive = 0;
+- sscanf(str, "%d%c%c", &chan, &band, &passive);
+- scan_cfg->chanlist[i].channumber = chan;
+- scan_cfg->chanlist[i].scantype = passive ? 1 : 0;
+- if (band == 'b' || band == 'g')
+- scan_cfg->chanlist[i].radiotype = 0;
+- else if (band == 'a')
+- scan_cfg->chanlist[i].radiotype = 1;
+-
+- scan_cfg->chanlist[i].scantime = dur;
+- i++;
- }
+-
+- kfree(hold);
+- return i;
+-}
+-
+-static void libertas_parse_bssid(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg)
++static void lbs_parse_bssid(char *buf, size_t count,
++ struct lbs_ioctl_user_scan_cfg *scan_cfg)
+ {
+ char *hold;
+ unsigned int mac[ETH_ALEN];
+@@ -243,12 +199,13 @@ static void libertas_parse_bssid(char *buf, size_t count,
+ if (!hold)
+ return;
+ hold += 6;
+- sscanf(hold, MAC_FMT, mac, mac+1, mac+2, mac+3, mac+4, mac+5);
++ sscanf(hold, "%02x:%02x:%02x:%02x:%02x:%02x",
++ mac, mac+1, mac+2, mac+3, mac+4, mac+5);
+ memcpy(scan_cfg->bssid, mac, ETH_ALEN);
+ }
- for (i = 0; i < MAX_HW_RESTARTS; i++) {
-
-- iwl_clear_stations_table(priv);
-+ iwl4965_clear_stations_table(priv);
+-static void libertas_parse_ssid(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg)
++static void lbs_parse_ssid(char *buf, size_t count,
++ struct lbs_ioctl_user_scan_cfg *scan_cfg)
+ {
+ char *hold, *end;
+ ssize_t size;
+@@ -267,7 +224,7 @@ static void libertas_parse_ssid(char *buf, size_t count,
+ return;
+ }
- /* load bootstrap state machine,
- * load bootstrap program into processor's memory,
- * prepare to load the "initialize" uCode */
-- rc = iwl_load_bsm(priv);
-+ rc = iwl4965_load_bsm(priv);
+-static int libertas_parse_clear(char *buf, size_t count, const char *tag)
++static int lbs_parse_clear(char *buf, size_t count, const char *tag)
+ {
+ char *hold;
+ int val;
+@@ -284,8 +241,8 @@ static int libertas_parse_clear(char *buf, size_t count, const char *tag)
+ return val;
+ }
- if (rc) {
- IWL_ERROR("Unable to set up bootstrap uCode: %d\n", rc);
-@@ -6757,14 +6795,7 @@ static int __iwl_up(struct iwl_priv *priv)
- }
+-static int libertas_parse_dur(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg)
++static int lbs_parse_dur(char *buf, size_t count,
++ struct lbs_ioctl_user_scan_cfg *scan_cfg)
+ {
+ char *hold;
+ int val;
+@@ -299,25 +256,8 @@ static int libertas_parse_dur(char *buf, size_t count,
+ return val;
+ }
- /* start card; "initialize" will load runtime ucode */
-- iwl_nic_start(priv);
+-static void libertas_parse_probes(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg)
+-{
+- char *hold;
+- int val;
-
-- /* MAC Address location in EEPROM same for 3945/4965 */
-- get_eeprom_mac(priv, priv->mac_addr);
-- IWL_DEBUG_INFO("MAC address: %s\n",
-- print_mac(mac, priv->mac_addr));
+- hold = strstr(buf, "probes=");
+- if (!hold)
+- return;
+- hold += 7;
+- sscanf(hold, "%d", &val);
-
-- SET_IEEE80211_PERM_ADDR(priv->hw, priv->mac_addr);
-+ iwl4965_nic_start(priv);
+- scan_cfg->numprobes = val;
+-
+- return;
+-}
+-
+-static void libertas_parse_type(char *buf, size_t count,
+- struct wlan_ioctl_user_scan_cfg *scan_cfg)
++static void lbs_parse_type(char *buf, size_t count,
++ struct lbs_ioctl_user_scan_cfg *scan_cfg)
+ {
+ char *hold;
+ int val;
+@@ -337,1036 +277,324 @@ static void libertas_parse_type(char *buf, size_t count,
+ return;
+ }
- IWL_DEBUG_INFO(DRV_NAME " is coming up\n");
+-static ssize_t libertas_setuserscan(struct file *file,
++static ssize_t lbs_setuserscan(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+- struct wlan_ioctl_user_scan_cfg *scan_cfg;
++ struct lbs_ioctl_user_scan_cfg *scan_cfg;
+ union iwreq_data wrqu;
+ int dur;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
++ char *buf = (char *)get_zeroed_page(GFP_KERNEL);
-@@ -6772,7 +6803,7 @@ static int __iwl_up(struct iwl_priv *priv)
+- scan_cfg = kzalloc(sizeof(struct wlan_ioctl_user_scan_cfg), GFP_KERNEL);
+- if (!scan_cfg)
++ if (!buf)
+ return -ENOMEM;
+
+ buf_size = min(count, len - 1);
+ if (copy_from_user(buf, userbuf, buf_size)) {
+ res = -EFAULT;
+- goto out_unlock;
++ goto out_buf;
++ }
++
++ scan_cfg = kzalloc(sizeof(struct lbs_ioctl_user_scan_cfg), GFP_KERNEL);
++ if (!scan_cfg) {
++ res = -ENOMEM;
++ goto out_buf;
}
++ res = count;
++
++ scan_cfg->bsstype = LBS_SCAN_BSS_TYPE_ANY;
- set_bit(STATUS_EXIT_PENDING, &priv->status);
-- __iwl_down(priv);
-+ __iwl4965_down(priv);
+- scan_cfg->bsstype = WLAN_SCAN_BSS_TYPE_ANY;
++ dur = lbs_parse_dur(buf, count, scan_cfg);
++ lbs_parse_bssid(buf, count, scan_cfg);
++ scan_cfg->clear_bssid = lbs_parse_clear(buf, count, "clear_bssid=");
++ lbs_parse_ssid(buf, count, scan_cfg);
++ scan_cfg->clear_ssid = lbs_parse_clear(buf, count, "clear_ssid=");
++ lbs_parse_type(buf, count, scan_cfg);
- /* tried to restart and config the device for as long as our
- * patience could withstand */
-@@ -6787,35 +6818,35 @@ static int __iwl_up(struct iwl_priv *priv)
- *
- *****************************************************************************/
+- dur = libertas_parse_dur(buf, count, scan_cfg);
+- libertas_parse_chan(buf, count, scan_cfg, dur);
+- libertas_parse_bssid(buf, count, scan_cfg);
+- scan_cfg->clear_bssid = libertas_parse_clear(buf, count, "clear_bssid=");
+- libertas_parse_ssid(buf, count, scan_cfg);
+- scan_cfg->clear_ssid = libertas_parse_clear(buf, count, "clear_ssid=");
+- libertas_parse_probes(buf, count, scan_cfg);
+- libertas_parse_type(buf, count, scan_cfg);
++ lbs_scan_networks(priv, scan_cfg, 1);
++ wait_event_interruptible(priv->cmd_pending,
++ priv->surpriseremoved || !priv->last_scanned_channel);
--static void iwl_bg_init_alive_start(struct work_struct *data)
-+static void iwl4965_bg_init_alive_start(struct work_struct *data)
- {
-- struct iwl_priv *priv =
-- container_of(data, struct iwl_priv, init_alive_start.work);
-+ struct iwl4965_priv *priv =
-+ container_of(data, struct iwl4965_priv, init_alive_start.work);
+- wlan_scan_networks(priv, scan_cfg, 1);
+- wait_event_interruptible(priv->adapter->cmd_pending,
+- !priv->adapter->nr_cmd_pending);
++ if (priv->surpriseremoved)
++ goto out_scan_cfg;
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+ memset(&wrqu, 0x00, sizeof(union iwreq_data));
+ wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
- mutex_lock(&priv->mutex);
-- iwl_init_alive_start(priv);
-+ iwl4965_init_alive_start(priv);
- mutex_unlock(&priv->mutex);
+-out_unlock:
+- free_page(addr);
++ out_scan_cfg:
+ kfree(scan_cfg);
+- return count;
++ out_buf:
++ free_page((unsigned long)buf);
++ return res;
}
--static void iwl_bg_alive_start(struct work_struct *data)
-+static void iwl4965_bg_alive_start(struct work_struct *data)
- {
-- struct iwl_priv *priv =
-- container_of(data, struct iwl_priv, alive_start.work);
-+ struct iwl4965_priv *priv =
-+ container_of(data, struct iwl4965_priv, alive_start.work);
+-static int libertas_event_initcmd(wlan_private *priv, void **response_buf,
+- struct cmd_ctrl_node **cmdnode,
+- struct cmd_ds_command **cmd)
+-{
+- u16 wait_option = CMD_OPTION_WAITFORRSP;
+-
+- if (!(*cmdnode = libertas_get_free_cmd_ctrl_node(priv))) {
+- lbs_deb_debugfs("failed libertas_get_free_cmd_ctrl_node\n");
+- return -ENOMEM;
+- }
+- if (!(*response_buf = kmalloc(3000, GFP_KERNEL))) {
+- lbs_deb_debugfs("failed to allocate response buffer!\n");
+- return -ENOMEM;
+- }
+- libertas_set_cmd_ctrl_node(priv, *cmdnode, 0, wait_option, NULL);
+- init_waitqueue_head(&(*cmdnode)->cmdwait_q);
+- (*cmdnode)->pdata_buf = *response_buf;
+- (*cmdnode)->cmdflags |= CMD_F_HOSTCMD;
+- (*cmdnode)->cmdwaitqwoken = 0;
+- *cmd = (struct cmd_ds_command *)(*cmdnode)->bufvirtualaddr;
+- (*cmd)->command = cpu_to_le16(CMD_802_11_SUBSCRIBE_EVENT);
+- (*cmd)->seqnum = cpu_to_le16(++priv->adapter->seqnum);
+- (*cmd)->result = 0;
+- return 0;
+-}
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+-static ssize_t libertas_lowrssi_read(struct file *file, char __user *userbuf,
+- size_t count, loff_t *ppos)
++/*
++ * When calling CMD_802_11_SUBSCRIBE_EVENT with CMD_ACT_GET, we might
++ * get a bunch of vendor-specific TLVs (a.k.a. IEs) back from the
++ * firmware. Here's an example:
++ * 04 01 02 00 00 00 05 01 02 00 00 00 06 01 02 00
++ * 00 00 07 01 02 00 3c 00 00 00 00 00 00 00 03 03
++ * 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
++ *
++ * The 04 01 is the TLV type (here TLV_TYPE_RSSI_LOW), 02 00 is the length,
++ * 00 00 are the data bytes of this TLV. For this TLV, their meaning is
++ * defined in mrvlietypes_thresholds
++ *
++ * This function searches this TLV data chunk for a given TLV type
++ * and returns a pointer to the first data byte of the TLV, or NULL
++ * if the TLV hasn't been found.
++ */
++static void *lbs_tlv_find(uint16_t tlv_type, const uint8_t *tlv, uint16_t size)
+ {
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
++ struct mrvlietypesheader *tlv_h;
++ uint16_t length;
+ ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
- mutex_lock(&priv->mutex);
-- iwl_alive_start(priv);
-+ iwl4965_alive_start(priv);
- mutex_unlock(&priv->mutex);
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_rssithreshold *Lowrssi;
+- case __constant_cpu_to_le16(TLV_TYPE_RSSI_LOW):
+- Lowrssi = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
+- Lowrssi->rssivalue,
+- Lowrssi->rssifreq,
+- (event->events & cpu_to_le16(0x0001))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
+- break;
+- }
+- }
+-
+- kfree(response_buf);
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
+-}
+-
+-static u16 libertas_get_events_bitmap(wlan_private *priv)
+-{
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res;
+- u16 event_bitmap;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- return res;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- return 0;
+- }
+-
+- if (le16_to_cpu(pcmdptr->command) != CMD_RET(CMD_802_11_SUBSCRIBE_EVENT)) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- return 0;
+- }
+-
+- event = (struct cmd_ds_802_11_subscribe_event *)(response_buf + S_DS_GEN);
+- event_bitmap = le16_to_cpu(event->events);
+- kfree(response_buf);
+- return event_bitmap;
++ while (pos < size) {
++ tlv_h = (struct mrvlietypesheader *) tlv;
++ if (!tlv_h->len)
++ return NULL;
++ if (tlv_h->type == cpu_to_le16(tlv_type))
++ return tlv_h;
++ length = le16_to_cpu(tlv_h->len) + sizeof(*tlv_h);
++ pos += length;
++ tlv += length;
++ }
++ return NULL;
}
--static void iwl_bg_rf_kill(struct work_struct *work)
-+static void iwl4965_bg_rf_kill(struct work_struct *work)
+-static ssize_t libertas_lowrssi_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
+-{
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_rssithreshold *rssi_threshold;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+
+- buf_size = min(count, len - 1);
+- if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+-
+- event_bitmap = libertas_get_events_bitmap(priv);
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_rssithreshold));
+-
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- rssi_threshold = (struct mrvlietypes_rssithreshold *)(ptr);
+- rssi_threshold->header.type = cpu_to_le16(0x0104);
+- rssi_threshold->header.len = cpu_to_le16(2);
+- rssi_threshold->rssivalue = value;
+- rssi_threshold->rssifreq = freq;
+- event_bitmap |= subscribed ? 0x0001 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- res = count;
+-out_unlock:
+- free_page(addr);
+- return res;
+-}
+-
+-static ssize_t libertas_lowsnr_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_threshold_read(uint16_t tlv_type, uint16_t event_mask,
++ struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv = container_of(work, struct iwl_priv, rf_kill);
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv, rf_kill);
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
+- ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
++ struct cmd_ds_802_11_subscribe_event *subscribed;
++ struct mrvlietypes_thresholds *got;
++ struct lbs_private *priv = file->private_data;
++ ssize_t ret = 0;
++ size_t pos = 0;
++ char *buf;
++ u8 value;
++ u8 freq;
++ int events = 0;
- wake_up_interruptible(&priv->wait_command_queue);
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
++ buf = (char *)get_zeroed_page(GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
-@@ -6824,7 +6855,7 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
++ subscribed = kzalloc(sizeof(*subscribed), GFP_KERNEL);
++ if (!subscribed) {
++ ret = -ENOMEM;
++ goto out_page;
+ }
- mutex_lock(&priv->mutex);
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_snrthreshold *LowSnr;
+- case __constant_cpu_to_le16(TLV_TYPE_SNR_LOW):
+- LowSnr = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
+- LowSnr->snrvalue,
+- LowSnr->snrfreq,
+- (event->events & cpu_to_le16(0x0002))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
+- break;
+- }
+- }
++ subscribed->hdr.size = cpu_to_le16(sizeof(*subscribed));
++ subscribed->action = cpu_to_le16(CMD_ACT_GET);
-- if (!iwl_is_rfkill(priv)) {
-+ if (!iwl4965_is_rfkill(priv)) {
- IWL_DEBUG(IWL_DL_INFO | IWL_DL_RF_KILL,
- "HW and/or SW RF Kill no longer active, restarting "
- "device\n");
-@@ -6845,10 +6876,10 @@ static void iwl_bg_rf_kill(struct work_struct *work)
+- kfree(response_buf);
++ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, subscribed);
++ if (ret)
++ goto out_cmd;
- #define IWL_SCAN_CHECK_WATCHDOG (7 * HZ)
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
+-}
++ got = lbs_tlv_find(tlv_type, subscribed->tlv, sizeof(subscribed->tlv));
++ if (got) {
++ value = got->value;
++ freq = got->freq;
++ events = le16_to_cpu(subscribed->events);
--static void iwl_bg_scan_check(struct work_struct *data)
-+static void iwl4965_bg_scan_check(struct work_struct *data)
- {
-- struct iwl_priv *priv =
-- container_of(data, struct iwl_priv, scan_check.work);
-+ struct iwl4965_priv *priv =
-+ container_of(data, struct iwl4965_priv, scan_check.work);
+-static ssize_t libertas_lowsnr_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
+-{
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_snrthreshold *snr_threshold;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- buf_size = min(count, len - 1);
+- if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+-
+- event_bitmap = libertas_get_events_bitmap(priv);
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_snrthreshold));
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- snr_threshold = (struct mrvlietypes_snrthreshold *)(ptr);
+- snr_threshold->header.type = cpu_to_le16(TLV_TYPE_SNR_LOW);
+- snr_threshold->header.len = cpu_to_le16(2);
+- snr_threshold->snrvalue = value;
+- snr_threshold->snrfreq = freq;
+- event_bitmap |= subscribed ? 0x0002 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
++ pos += snprintf(buf, len, "%d %d %d\n", value, freq,
++ !!(events & event_mask));
+ }
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
-@@ -6861,22 +6892,22 @@ static void iwl_bg_scan_check(struct work_struct *data)
- jiffies_to_msecs(IWL_SCAN_CHECK_WATCHDOG));
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
++ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- if (!test_bit(STATUS_EXIT_PENDING, &priv->status))
-- iwl_send_scan_abort(priv);
-+ iwl4965_send_scan_abort(priv);
- }
- mutex_unlock(&priv->mutex);
+- res = count;
++ out_cmd:
++ kfree(subscribed);
+
+-out_unlock:
+- free_page(addr);
+- return res;
++ out_page:
++ free_page((unsigned long)buf);
++ return ret;
}
--static void iwl_bg_request_scan(struct work_struct *data)
-+static void iwl4965_bg_request_scan(struct work_struct *data)
- {
-- struct iwl_priv *priv =
-- container_of(data, struct iwl_priv, request_scan);
-- struct iwl_host_cmd cmd = {
-+ struct iwl4965_priv *priv =
-+ container_of(data, struct iwl4965_priv, request_scan);
-+ struct iwl4965_host_cmd cmd = {
- .id = REPLY_SCAN_CMD,
-- .len = sizeof(struct iwl_scan_cmd),
-+ .len = sizeof(struct iwl4965_scan_cmd),
- .meta.flags = CMD_SIZE_HUGE,
- };
- int rc = 0;
-- struct iwl_scan_cmd *scan;
-+ struct iwl4965_scan_cmd *scan;
- struct ieee80211_conf *conf = NULL;
- u8 direct_mask;
- int phymode;
-@@ -6885,7 +6916,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+-static ssize_t libertas_failcount_read(struct file *file, char __user *userbuf,
+- size_t count, loff_t *ppos)
+-{
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
+- ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_failurecount *failcount;
+- case __constant_cpu_to_le16(TLV_TYPE_FAILCOUNT):
+- failcount = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
+- failcount->failvalue,
+- failcount->Failfreq,
+- (event->events & cpu_to_le16(0x0004))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_failurecount);
+- break;
+- }
+- }
+-
+- kfree(response_buf);
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
+-}
- mutex_lock(&priv->mutex);
+-static ssize_t libertas_failcount_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
++static ssize_t lbs_threshold_write(uint16_t tlv_type, uint16_t event_mask,
++ struct file *file,
++ const char __user *userbuf, size_t count,
++ loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_failurecount *failcount;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
++ struct cmd_ds_802_11_subscribe_event *events;
++ struct mrvlietypes_thresholds *tlv;
++ struct lbs_private *priv = file->private_data;
++ ssize_t buf_size;
++ int value, freq, new_mask;
++ uint16_t curr_mask;
++ char *buf;
++ int ret;
++
++ buf = (char *)get_zeroed_page(GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
-- if (!iwl_is_ready(priv)) {
-+ if (!iwl4965_is_ready(priv)) {
- IWL_WARNING("request scan called when driver not ready.\n");
- goto done;
- }
-@@ -6914,7 +6945,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
- goto done;
+ buf_size = min(count, len - 1);
+ if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
++ ret = -EFAULT;
++ goto out_page;
}
-
-- if (iwl_is_rfkill(priv)) {
-+ if (iwl4965_is_rfkill(priv)) {
- IWL_DEBUG_HC("Aborting scan due to RF Kill activation\n");
- goto done;
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
++ ret = sscanf(buf, "%d %d %d", &value, &freq, &new_mask);
++ if (ret != 3) {
++ ret = -EINVAL;
++ goto out_page;
}
-@@ -6930,7 +6961,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+-
+- event_bitmap = libertas_get_events_bitmap(priv);
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_failurecount));
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- failcount = (struct mrvlietypes_failurecount *)(ptr);
+- failcount->header.type = cpu_to_le16(TLV_TYPE_FAILCOUNT);
+- failcount->header.len = cpu_to_le16(2);
+- failcount->failvalue = value;
+- failcount->Failfreq = freq;
+- event_bitmap |= subscribed ? 0x0004 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = (struct cmd_ds_command *)response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
++ events = kzalloc(sizeof(*events), GFP_KERNEL);
++ if (!events) {
++ ret = -ENOMEM;
++ goto out_page;
}
- if (!priv->scan) {
-- priv->scan = kmalloc(sizeof(struct iwl_scan_cmd) +
-+ priv->scan = kmalloc(sizeof(struct iwl4965_scan_cmd) +
- IWL_MAX_SCAN_SIZE, GFP_KERNEL);
- if (!priv->scan) {
- rc = -ENOMEM;
-@@ -6938,12 +6969,12 @@ static void iwl_bg_request_scan(struct work_struct *data)
- }
- }
- scan = priv->scan;
-- memset(scan, 0, sizeof(struct iwl_scan_cmd) + IWL_MAX_SCAN_SIZE);
-+ memset(scan, 0, sizeof(struct iwl4965_scan_cmd) + IWL_MAX_SCAN_SIZE);
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
++ events->hdr.size = cpu_to_le16(sizeof(*events));
++ events->action = cpu_to_le16(CMD_ACT_GET);
- scan->quiet_plcp_th = IWL_PLCP_QUIET_THRESH;
- scan->quiet_time = IWL_ACTIVE_QUIET_TIME;
+- res = count;
+-out_unlock:
+- free_page(addr);
+- return res;
+-}
+-
+-static ssize_t libertas_bcnmiss_read(struct file *file, char __user *userbuf,
+- size_t count, loff_t *ppos)
+-{
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
+- ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
++ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, events);
++ if (ret)
++ goto out_events;
-- if (iwl_is_associated(priv)) {
-+ if (iwl4965_is_associated(priv)) {
- u16 interval = 0;
- u32 extra;
- u32 suspend_time = 100;
-@@ -6973,14 +7004,14 @@ static void iwl_bg_request_scan(struct work_struct *data)
- if (priv->one_direct_scan) {
- IWL_DEBUG_SCAN
- ("Kicking off one direct scan for '%s'\n",
-- iwl_escape_essid(priv->direct_ssid,
-+ iwl4965_escape_essid(priv->direct_ssid,
- priv->direct_ssid_len));
- scan->direct_scan[0].id = WLAN_EID_SSID;
- scan->direct_scan[0].len = priv->direct_ssid_len;
- memcpy(scan->direct_scan[0].ssid,
- priv->direct_ssid, priv->direct_ssid_len);
- direct_mask = 1;
-- } else if (!iwl_is_associated(priv) && priv->essid_len) {
-+ } else if (!iwl4965_is_associated(priv) && priv->essid_len) {
- scan->direct_scan[0].id = WLAN_EID_SSID;
- scan->direct_scan[0].len = priv->essid_len;
- memcpy(scan->direct_scan[0].ssid, priv->essid, priv->essid_len);
-@@ -6991,7 +7022,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
- /* We don't build a direct scan probe request; the uCode will do
- * that based on the direct_mask added to each channel entry */
- scan->tx_cmd.len = cpu_to_le16(
-- iwl_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
-+ iwl4965_fill_probe_req(priv, (struct ieee80211_mgmt *)scan->data,
- IWL_MAX_SCAN_SIZE - sizeof(scan), 0));
- scan->tx_cmd.tx_flags = TX_CMD_FLG_SEQ_CTL_MSK;
- scan->tx_cmd.sta_id = priv->hw_setting.bcast_sta_id;
-@@ -7005,7 +7036,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
- case 2:
- scan->flags = RXON_FLG_BAND_24G_MSK | RXON_FLG_AUTO_DETECT_MSK;
- scan->tx_cmd.rate_n_flags =
-- iwl_hw_set_rate_n_flags(IWL_RATE_1M_PLCP,
-+ iwl4965_hw_set_rate_n_flags(IWL_RATE_1M_PLCP,
- RATE_MCS_ANT_B_MSK|RATE_MCS_CCK_MSK);
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
++ curr_mask = le16_to_cpu(events->events);
- scan->good_CRC_th = 0;
-@@ -7014,7 +7045,7 @@ static void iwl_bg_request_scan(struct work_struct *data)
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
++ if (new_mask)
++ new_mask = curr_mask | event_mask;
++ else
++ new_mask = curr_mask & ~event_mask;
- case 1:
- scan->tx_cmd.rate_n_flags =
-- iwl_hw_set_rate_n_flags(IWL_RATE_6M_PLCP,
-+ iwl4965_hw_set_rate_n_flags(IWL_RATE_6M_PLCP,
- RATE_MCS_ANT_B_MSK);
- scan->good_CRC_th = IWL_GOOD_CRC_TH;
- phymode = MODE_IEEE80211A;
-@@ -7041,23 +7072,23 @@ static void iwl_bg_request_scan(struct work_struct *data)
- if (direct_mask)
- IWL_DEBUG_SCAN
- ("Initiating direct scan for %s.\n",
-- iwl_escape_essid(priv->essid, priv->essid_len));
-+ iwl4965_escape_essid(priv->essid, priv->essid_len));
- else
- IWL_DEBUG_SCAN("Initiating indirect scan.\n");
+- pcmdptr = response_buf;
++ /* Now everything is set and we can send stuff down to the firmware */
- scan->channel_count =
-- iwl_get_channels_for_scan(
-+ iwl4965_get_channels_for_scan(
- priv, phymode, 1, /* active */
- direct_mask,
- (void *)&scan->data[le16_to_cpu(scan->tx_cmd.len)]);
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- free_page(addr);
+- kfree(response_buf);
+- return 0;
+- }
++ tlv = (void *)events->tlv;
- cmd.len += le16_to_cpu(scan->tx_cmd.len) +
-- scan->channel_count * sizeof(struct iwl_scan_channel);
-+ scan->channel_count * sizeof(struct iwl4965_scan_channel);
- cmd.data = scan;
- scan->len = cpu_to_le16(cmd.len);
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- free_page(addr);
+- kfree(response_buf);
+- return 0;
+- }
++ events->action = cpu_to_le16(CMD_ACT_SET);
++ events->events = cpu_to_le16(new_mask);
++ tlv->header.type = cpu_to_le16(tlv_type);
++ tlv->header.len = cpu_to_le16(sizeof(*tlv) - sizeof(tlv->header));
++ tlv->value = value;
++ if (tlv_type != TLV_TYPE_BCNMISS)
++ tlv->freq = freq;
- set_bit(STATUS_SCAN_HW, &priv->status);
-- rc = iwl_send_cmd_sync(priv, &cmd);
-+ rc = iwl4965_send_cmd_sync(priv, &cmd);
- if (rc)
- goto done;
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_beaconsmissed *bcnmiss;
+- case __constant_cpu_to_le16(TLV_TYPE_BCNMISS):
+- bcnmiss = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d N/A %d\n",
+- bcnmiss->beaconmissed,
+- (event->events & cpu_to_le16(0x0008))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_beaconsmissed);
+- break;
+- }
+- }
++ /* The command header, the event mask, and the one TLV */
++ events->hdr.size = cpu_to_le16(sizeof(events->hdr) + 2 + sizeof(*tlv));
-@@ -7068,50 +7099,52 @@ static void iwl_bg_request_scan(struct work_struct *data)
- return;
+- kfree(response_buf);
++ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, events);
- done:
-- /* inform mac80211 sacn aborted */
-+ /* inform mac80211 scan aborted */
- queue_work(priv->workqueue, &priv->scan_completed);
- mutex_unlock(&priv->mutex);
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
++ if (!ret)
++ ret = count;
++ out_events:
++ kfree(events);
++ out_page:
++ free_page((unsigned long)buf);
++ return ret;
}
--static void iwl_bg_up(struct work_struct *data)
-+static void iwl4965_bg_up(struct work_struct *data)
+-static ssize_t libertas_bcnmiss_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
++
++static ssize_t lbs_lowrssi_read(struct file *file, char __user *userbuf,
++ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv = container_of(data, struct iwl_priv, up);
-+ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv, up);
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_beaconsmissed *bcnmiss;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
++ return lbs_threshold_read(TLV_TYPE_RSSI_LOW, CMD_SUBSCRIBE_RSSI_LOW,
++ file, userbuf, count, ppos);
++}
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+- buf_size = min(count, len - 1);
+- if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
- mutex_lock(&priv->mutex);
-- __iwl_up(priv);
-+ __iwl4965_up(priv);
- mutex_unlock(&priv->mutex);
- }
+- event_bitmap = libertas_get_events_bitmap(priv);
++static ssize_t lbs_lowrssi_write(struct file *file, const char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_write(TLV_TYPE_RSSI_LOW, CMD_SUBSCRIBE_RSSI_LOW,
++ file, userbuf, count, ppos);
++}
--static void iwl_bg_restart(struct work_struct *data)
-+static void iwl4965_bg_restart(struct work_struct *data)
- {
-- struct iwl_priv *priv = container_of(data, struct iwl_priv, restart);
-+ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv, restart);
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_beaconsmissed));
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- bcnmiss = (struct mrvlietypes_beaconsmissed *)(ptr);
+- bcnmiss->header.type = cpu_to_le16(TLV_TYPE_BCNMISS);
+- bcnmiss->header.len = cpu_to_le16(2);
+- bcnmiss->beaconmissed = value;
+- event_bitmap |= subscribed ? 0x0008 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
++static ssize_t lbs_lowsnr_read(struct file *file, char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_read(TLV_TYPE_SNR_LOW, CMD_SUBSCRIBE_SNR_LOW,
++ file, userbuf, count, ppos);
++}
-- iwl_down(priv);
-+ iwl4965_down(priv);
- queue_work(priv->workqueue, &priv->up);
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- free_page(addr);
+- kfree(response_buf);
+- return 0;
+- }
+
+- res = count;
+-out_unlock:
+- free_page(addr);
+- return res;
++static ssize_t lbs_lowsnr_write(struct file *file, const char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_write(TLV_TYPE_SNR_LOW, CMD_SUBSCRIBE_SNR_LOW,
++ file, userbuf, count, ppos);
}
--static void iwl_bg_rx_replenish(struct work_struct *data)
-+static void iwl4965_bg_rx_replenish(struct work_struct *data)
+-static ssize_t libertas_highrssi_read(struct file *file, char __user *userbuf,
++
++static ssize_t lbs_failcount_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv =
-- container_of(data, struct iwl_priv, rx_replenish);
-+ struct iwl4965_priv *priv =
-+ container_of(data, struct iwl4965_priv, rx_replenish);
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
+- ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_rssithreshold *Highrssi;
+- case __constant_cpu_to_le16(TLV_TYPE_RSSI_HIGH):
+- Highrssi = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
+- Highrssi->rssivalue,
+- Highrssi->rssifreq,
+- (event->events & cpu_to_le16(0x0010))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
+- break;
+- }
+- }
+-
+- kfree(response_buf);
+-
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
++ return lbs_threshold_read(TLV_TYPE_FAILCOUNT, CMD_SUBSCRIBE_FAILCOUNT,
++ file, userbuf, count, ppos);
+ }
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+-static ssize_t libertas_highrssi_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
+-{
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_rssithreshold *rssi_threshold;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- buf_size = min(count, len - 1);
+- if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
- mutex_lock(&priv->mutex);
-- iwl_rx_replenish(priv);
-+ iwl4965_rx_replenish(priv);
- mutex_unlock(&priv->mutex);
+- event_bitmap = libertas_get_events_bitmap(priv);
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_rssithreshold));
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- rssi_threshold = (struct mrvlietypes_rssithreshold *)(ptr);
+- rssi_threshold->header.type = cpu_to_le16(TLV_TYPE_RSSI_HIGH);
+- rssi_threshold->header.len = cpu_to_le16(2);
+- rssi_threshold->rssivalue = value;
+- rssi_threshold->rssifreq = freq;
+- event_bitmap |= subscribed ? 0x0010 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- return 0;
+- }
++static ssize_t lbs_failcount_write(struct file *file, const char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_write(TLV_TYPE_FAILCOUNT, CMD_SUBSCRIBE_FAILCOUNT,
++ file, userbuf, count, ppos);
++}
+
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- return 0;
+- }
+
+- res = count;
+-out_unlock:
+- free_page(addr);
+- return res;
++static ssize_t lbs_highrssi_read(struct file *file, char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_read(TLV_TYPE_RSSI_HIGH, CMD_SUBSCRIBE_RSSI_HIGH,
++ file, userbuf, count, ppos);
}
--static void iwl_bg_post_associate(struct work_struct *data)
-+#define IWL_DELAY_NEXT_SCAN (HZ*2)
+-static ssize_t libertas_highsnr_read(struct file *file, char __user *userbuf,
+
-+static void iwl4965_bg_post_associate(struct work_struct *data)
++static ssize_t lbs_highrssi_write(struct file *file, const char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv = container_of(data, struct iwl_priv,
-+ struct iwl4965_priv *priv = container_of(data, struct iwl4965_priv,
- post_associate.work);
-
- int rc = 0;
-@@ -7133,20 +7166,20 @@ static void iwl_bg_post_associate(struct work_struct *data)
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- void *response_buf;
+- int res, cmd_len;
+- ssize_t pos = 0;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0) {
+- free_page(addr);
+- return res;
+- }
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_GET);
+- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
++ return lbs_threshold_write(TLV_TYPE_RSSI_HIGH, CMD_SUBSCRIBE_RSSI_HIGH,
++ file, userbuf, count, ppos);
++}
- mutex_lock(&priv->mutex);
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
-- if (!priv->interface_id || !priv->is_open) {
-+ if (!priv->vif || !priv->is_open) {
- mutex_unlock(&priv->mutex);
- return;
- }
-- iwl_scan_cancel_timeout(priv, 200);
-+ iwl4965_scan_cancel_timeout(priv, 200);
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
+-
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- event = (void *)(response_buf + S_DS_GEN);
+- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
+- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
+- switch (header->type) {
+- struct mrvlietypes_snrthreshold *HighSnr;
+- case __constant_cpu_to_le16(TLV_TYPE_SNR_HIGH):
+- HighSnr = (void *)(response_buf + cmd_len);
+- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
+- HighSnr->snrvalue,
+- HighSnr->snrfreq,
+- (event->events & cpu_to_le16(0x0020))?1:0);
+- default:
+- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
+- break;
+- }
+- }
++static ssize_t lbs_highsnr_read(struct file *file, char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_read(TLV_TYPE_SNR_HIGH, CMD_SUBSCRIBE_SNR_HIGH,
++ file, userbuf, count, ppos);
++}
- conf = ieee80211_get_hw_conf(priv->hw);
+- kfree(response_buf);
- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
+- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- free_page(addr);
+- return res;
++static ssize_t lbs_highsnr_write(struct file *file, const char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_write(TLV_TYPE_SNR_HIGH, CMD_SUBSCRIBE_SNR_HIGH,
++ file, userbuf, count, ppos);
+ }
-- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
-- iwl_setup_rxon_timing(priv);
-- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
-+ memset(&priv->rxon_timing, 0, sizeof(struct iwl4965_rxon_time_cmd));
-+ iwl4965_setup_rxon_timing(priv);
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON_TIMING,
- sizeof(priv->rxon_timing), &priv->rxon_timing);
- if (rc)
- IWL_WARNING("REPLY_RXON_TIMING failed - "
-@@ -7154,15 +7187,10 @@ static void iwl_bg_post_associate(struct work_struct *data)
+-static ssize_t libertas_highsnr_write(struct file *file,
+- const char __user *userbuf,
+- size_t count, loff_t *ppos)
++static ssize_t lbs_bcnmiss_read(struct file *file, char __user *userbuf,
++ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- ssize_t res, buf_size;
+- int value, freq, subscribed, cmd_len;
+- struct cmd_ctrl_node *pcmdnode;
+- struct cmd_ds_command *pcmdptr;
+- struct cmd_ds_802_11_subscribe_event *event;
+- struct mrvlietypes_snrthreshold *snr_threshold;
+- void *response_buf;
+- u16 event_bitmap;
+- u8 *ptr;
+- unsigned long addr = get_zeroed_page(GFP_KERNEL);
+- char *buf = (char *)addr;
+-
+- buf_size = min(count, len - 1);
+- if (copy_from_user(buf, userbuf, buf_size)) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
+- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
+- if (res != 3) {
+- res = -EFAULT;
+- goto out_unlock;
+- }
++ return lbs_threshold_read(TLV_TYPE_BCNMISS, CMD_SUBSCRIBE_BCNMISS,
++ file, userbuf, count, ppos);
++}
- priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
+- event_bitmap = libertas_get_events_bitmap(priv);
--#ifdef CONFIG_IWLWIFI_HT
-- if (priv->is_ht_enabled && priv->current_assoc_ht.is_ht)
-- iwl4965_set_rxon_ht(priv, &priv->current_assoc_ht);
-- else {
-- priv->active_rate_ht[0] = 0;
-- priv->active_rate_ht[1] = 0;
-- priv->current_channel_width = IWL_CHANNEL_WIDTH_20MHZ;
+- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
+- if (res < 0)
+- goto out_unlock;
+-
+- event = &pcmdptr->params.subscribe_event;
+- event->action = cpu_to_le16(CMD_ACT_SET);
+- pcmdptr->size = cpu_to_le16(S_DS_GEN +
+- sizeof(struct cmd_ds_802_11_subscribe_event) +
+- sizeof(struct mrvlietypes_snrthreshold));
+- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
+- ptr = (u8*) pcmdptr+cmd_len;
+- snr_threshold = (struct mrvlietypes_snrthreshold *)(ptr);
+- snr_threshold->header.type = cpu_to_le16(TLV_TYPE_SNR_HIGH);
+- snr_threshold->header.len = cpu_to_le16(2);
+- snr_threshold->snrvalue = value;
+- snr_threshold->snrfreq = freq;
+- event_bitmap |= subscribed ? 0x0020 : 0x0;
+- event->events = cpu_to_le16(event_bitmap);
+-
+- libertas_queue_cmd(adapter, pcmdnode, 1);
+- wake_up_interruptible(&priv->waitq);
+-
+- /* Sleep until response is generated by FW */
+- wait_event_interruptible(pcmdnode->cmdwait_q,
+- pcmdnode->cmdwaitqwoken);
+-
+- pcmdptr = response_buf;
+-
+- if (pcmdptr->result) {
+- lbs_pr_err("%s: fail, result=%d\n", __func__,
+- le16_to_cpu(pcmdptr->result));
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
- }
--#endif /* CONFIG_IWLWIFI_HT*/
-+#ifdef CONFIG_IWL4965_HT
-+ if (priv->current_ht_config.is_ht)
-+ iwl4965_set_rxon_ht(priv, &priv->current_ht_config);
-+#endif /* CONFIG_IWL4965_HT*/
- iwl4965_set_rxon_chain(priv);
- priv->staging_rxon.assoc_id = cpu_to_le16(priv->assoc_id);
++static ssize_t lbs_bcnmiss_write(struct file *file, const char __user *userbuf,
++ size_t count, loff_t *ppos)
++{
++ return lbs_threshold_write(TLV_TYPE_BCNMISS, CMD_SUBSCRIBE_BCNMISS,
++ file, userbuf, count, ppos);
++}
-@@ -7185,22 +7213,22 @@ static void iwl_bg_post_associate(struct work_struct *data)
+- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
+- lbs_pr_err("command response incorrect!\n");
+- kfree(response_buf);
+- free_page(addr);
+- return 0;
+- }
- }
+- res = count;
+-out_unlock:
+- free_page(addr);
+- return res;
+-}
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
+-static ssize_t libertas_rdmac_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_rdmac_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_offset_value offval;
++ struct lbs_private *priv = file->private_data;
++ struct lbs_offset_value offval;
+ ssize_t pos = 0;
+ int ret;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+@@ -1375,23 +603,23 @@ static ssize_t libertas_rdmac_read(struct file *file, char __user *userbuf,
+ offval.offset = priv->mac_offset;
+ offval.value = 0;
- switch (priv->iw_mode) {
- case IEEE80211_IF_TYPE_STA:
-- iwl_rate_scale_init(priv->hw, IWL_AP_ID);
-+ iwl4965_rate_scale_init(priv->hw, IWL_AP_ID);
- break;
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_MAC_REG_ACCESS, 0,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+ pos += snprintf(buf+pos, len-pos, "MAC[0x%x] = 0x%08x\n",
+- priv->mac_offset, adapter->offsetvalue.value);
++ priv->mac_offset, priv->offsetvalue.value);
- case IEEE80211_IF_TYPE_IBSS:
+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+ free_page(addr);
+ return ret;
+ }
- /* clear out the station table */
-- iwl_clear_stations_table(priv);
-+ iwl4965_clear_stations_table(priv);
+-static ssize_t libertas_rdmac_write(struct file *file,
++static ssize_t lbs_rdmac_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
+@@ -1408,15 +636,15 @@ out_unlock:
+ return res;
+ }
-- iwl_rxon_add_station(priv, BROADCAST_ADDR, 0);
-- iwl_rxon_add_station(priv, priv->bssid, 0);
-- iwl_rate_scale_init(priv->hw, IWL_STA_ID);
-- iwl_send_beacon_cmd(priv);
-+ iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0);
-+ iwl4965_rxon_add_station(priv, priv->bssid, 0);
-+ iwl4965_rate_scale_init(priv->hw, IWL_STA_ID);
-+ iwl4965_send_beacon_cmd(priv);
+-static ssize_t libertas_wrmac_write(struct file *file,
++static ssize_t lbs_wrmac_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
- break;
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ u32 offset, value;
+- struct wlan_offset_value offval;
++ struct lbs_offset_value offval;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
-@@ -7210,55 +7238,61 @@ static void iwl_bg_post_associate(struct work_struct *data)
- break;
- }
+@@ -1433,7 +661,7 @@ static ssize_t libertas_wrmac_write(struct file *file,
-- iwl_sequence_reset(priv);
-+ iwl4965_sequence_reset(priv);
+ offval.offset = offset;
+ offval.value = value;
+- res = libertas_prepare_and_send_command(priv,
++ res = lbs_prepare_and_send_command(priv,
+ CMD_MAC_REG_ACCESS, 1,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+@@ -1444,12 +672,11 @@ out_unlock:
+ return res;
+ }
--#ifdef CONFIG_IWLWIFI_SENSITIVITY
-+#ifdef CONFIG_IWL4965_SENSITIVITY
- /* Enable Rx differential gain and sensitivity calibrations */
- iwl4965_chain_noise_reset(priv);
- priv->start_calib = 1;
--#endif /* CONFIG_IWLWIFI_SENSITIVITY */
-+#endif /* CONFIG_IWL4965_SENSITIVITY */
+-static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_rdbbp_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_offset_value offval;
++ struct lbs_private *priv = file->private_data;
++ struct lbs_offset_value offval;
+ ssize_t pos = 0;
+ int ret;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+@@ -1458,12 +685,12 @@ static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
+ offval.offset = priv->bbp_offset;
+ offval.value = 0;
- if (priv->iw_mode == IEEE80211_IF_TYPE_IBSS)
- priv->assoc_station_added = 1;
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_BBP_REG_ACCESS, 0,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+ pos += snprintf(buf+pos, len-pos, "BBP[0x%x] = 0x%08x\n",
+- priv->bbp_offset, adapter->offsetvalue.value);
++ priv->bbp_offset, priv->offsetvalue.value);
--#ifdef CONFIG_IWLWIFI_QOS
-- iwl_activate_qos(priv, 0);
--#endif /* CONFIG_IWLWIFI_QOS */
-+#ifdef CONFIG_IWL4965_QOS
-+ iwl4965_activate_qos(priv, 0);
-+#endif /* CONFIG_IWL4965_QOS */
-+ /* we have just associated, don't start scan too early */
-+ priv->next_scan_jiffies = jiffies + IWL_DELAY_NEXT_SCAN;
- mutex_unlock(&priv->mutex);
+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+ free_page(addr);
+@@ -1471,11 +698,11 @@ static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
+ return ret;
}
--static void iwl_bg_abort_scan(struct work_struct *work)
-+static void iwl4965_bg_abort_scan(struct work_struct *work)
+-static ssize_t libertas_rdbbp_write(struct file *file,
++static ssize_t lbs_rdbbp_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv = container_of(work, struct iwl_priv,
-- abort_scan);
-+ struct iwl4965_priv *priv = container_of(work, struct iwl4965_priv, abort_scan);
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
+@@ -1492,15 +719,15 @@ out_unlock:
+ return res;
+ }
-- if (!iwl_is_ready(priv))
-+ if (!iwl4965_is_ready(priv))
- return;
+-static ssize_t libertas_wrbbp_write(struct file *file,
++static ssize_t lbs_wrbbp_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
- mutex_lock(&priv->mutex);
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ u32 offset, value;
+- struct wlan_offset_value offval;
++ struct lbs_offset_value offval;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
- set_bit(STATUS_SCAN_ABORTING, &priv->status);
-- iwl_send_scan_abort(priv);
-+ iwl4965_send_scan_abort(priv);
+@@ -1517,7 +744,7 @@ static ssize_t libertas_wrbbp_write(struct file *file,
- mutex_unlock(&priv->mutex);
+ offval.offset = offset;
+ offval.value = value;
+- res = libertas_prepare_and_send_command(priv,
++ res = lbs_prepare_and_send_command(priv,
+ CMD_BBP_REG_ACCESS, 1,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+@@ -1528,12 +755,11 @@ out_unlock:
+ return res;
}
--static void iwl_bg_scan_completed(struct work_struct *work)
-+static int iwl4965_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
-+
-+static void iwl4965_bg_scan_completed(struct work_struct *work)
+-static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_rdrf_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv =
-- container_of(work, struct iwl_priv, scan_completed);
-+ struct iwl4965_priv *priv =
-+ container_of(work, struct iwl4965_priv, scan_completed);
-
- IWL_DEBUG(IWL_DL_INFO | IWL_DL_SCAN, "SCAN complete scan\n");
-
- if (test_bit(STATUS_EXIT_PENDING, &priv->status))
- return;
+- wlan_private *priv = file->private_data;
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_offset_value offval;
++ struct lbs_private *priv = file->private_data;
++ struct lbs_offset_value offval;
+ ssize_t pos = 0;
+ int ret;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+@@ -1542,12 +768,12 @@ static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
+ offval.offset = priv->rf_offset;
+ offval.value = 0;
-+ if (test_bit(STATUS_CONF_PENDING, &priv->status))
-+ iwl4965_mac_config(priv->hw, ieee80211_get_hw_conf(priv->hw));
-+
- ieee80211_scan_completed(priv->hw);
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_RF_REG_ACCESS, 0,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+ pos += snprintf(buf+pos, len-pos, "RF[0x%x] = 0x%08x\n",
+- priv->rf_offset, adapter->offsetvalue.value);
++ priv->rf_offset, priv->offsetvalue.value);
- /* Since setting the TXPOWER may have been deferred while
- * performing the scan, fire one off */
- mutex_lock(&priv->mutex);
-- iwl_hw_reg_send_txpower(priv);
-+ iwl4965_hw_reg_send_txpower(priv);
- mutex_unlock(&priv->mutex);
+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+ free_page(addr);
+@@ -1555,11 +781,11 @@ static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
+ return ret;
}
-@@ -7268,50 +7302,123 @@ static void iwl_bg_scan_completed(struct work_struct *work)
- *
- *****************************************************************************/
-
--static int iwl_mac_start(struct ieee80211_hw *hw)
-+#define UCODE_READY_TIMEOUT (2 * HZ)
-+
-+static int iwl4965_mac_start(struct ieee80211_hw *hw)
+-static ssize_t libertas_rdrf_write(struct file *file,
++static ssize_t lbs_rdrf_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
{
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
-+ int ret;
-
- IWL_DEBUG_MAC80211("enter\n");
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
+@@ -1576,15 +802,15 @@ out_unlock:
+ return res;
+ }
-+ if (pci_enable_device(priv->pci_dev)) {
-+ IWL_ERROR("Fail to pci_enable_device\n");
-+ return -ENODEV;
-+ }
-+ pci_restore_state(priv->pci_dev);
-+ pci_enable_msi(priv->pci_dev);
-+
-+ ret = request_irq(priv->pci_dev->irq, iwl4965_isr, IRQF_SHARED,
-+ DRV_NAME, priv);
-+ if (ret) {
-+ IWL_ERROR("Error allocating IRQ %d\n", priv->pci_dev->irq);
-+ goto out_disable_msi;
-+ }
-+
- /* we should be verifying the device is ready to be opened */
- mutex_lock(&priv->mutex);
+-static ssize_t libertas_wrrf_write(struct file *file,
++static ssize_t lbs_wrrf_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
-- priv->is_open = 1;
-+ memset(&priv->staging_rxon, 0, sizeof(struct iwl4965_rxon_cmd));
-+ /* fetch ucode file from disk, alloc and copy to bus-master buffers ...
-+ * ucode filename and max sizes are card-specific. */
-+
-+ if (!priv->ucode_code.len) {
-+ ret = iwl4965_read_ucode(priv);
-+ if (ret) {
-+ IWL_ERROR("Could not read microcode: %d\n", ret);
-+ mutex_unlock(&priv->mutex);
-+ goto out_release_irq;
-+ }
-+ }
+- wlan_private *priv = file->private_data;
++ struct lbs_private *priv = file->private_data;
+ ssize_t res, buf_size;
+ u32 offset, value;
+- struct wlan_offset_value offval;
++ struct lbs_offset_value offval;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *)addr;
-- if (!iwl_is_rfkill(priv))
-- ieee80211_start_queues(priv->hw);
-+ ret = __iwl4965_up(priv);
+@@ -1601,7 +827,7 @@ static ssize_t libertas_wrrf_write(struct file *file,
- mutex_unlock(&priv->mutex);
-+
-+ if (ret)
-+ goto out_release_irq;
-+
-+ IWL_DEBUG_INFO("Start UP work done.\n");
-+
-+ if (test_bit(STATUS_IN_SUSPEND, &priv->status))
-+ return 0;
-+
-+ /* Wait for START_ALIVE from ucode. Otherwise callbacks from
-+ * mac80211 will not be run successfully. */
-+ ret = wait_event_interruptible_timeout(priv->wait_command_queue,
-+ test_bit(STATUS_READY, &priv->status),
-+ UCODE_READY_TIMEOUT);
-+ if (!ret) {
-+ if (!test_bit(STATUS_READY, &priv->status)) {
-+ IWL_ERROR("Wait for START_ALIVE timeout after %dms.\n",
-+ jiffies_to_msecs(UCODE_READY_TIMEOUT));
-+ ret = -ETIMEDOUT;
-+ goto out_release_irq;
-+ }
-+ }
-+
-+ priv->is_open = 1;
- IWL_DEBUG_MAC80211("leave\n");
- return 0;
-+
-+out_release_irq:
-+ free_irq(priv->pci_dev->irq, priv);
-+out_disable_msi:
-+ pci_disable_msi(priv->pci_dev);
-+ pci_disable_device(priv->pci_dev);
-+ priv->is_open = 0;
-+ IWL_DEBUG_MAC80211("leave - failed\n");
-+ return ret;
+ offval.offset = offset;
+ offval.value = value;
+- res = libertas_prepare_and_send_command(priv,
++ res = lbs_prepare_and_send_command(priv,
+ CMD_RF_REG_ACCESS, 1,
+ CMD_OPTION_WAITFORRSP, 0, &offval);
+ mdelay(10);
+@@ -1619,69 +845,69 @@ out_unlock:
+ .write = (fwrite), \
}
--static void iwl_mac_stop(struct ieee80211_hw *hw)
-+static void iwl4965_mac_stop(struct ieee80211_hw *hw)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
+-struct libertas_debugfs_files {
++struct lbs_debugfs_files {
+ char *name;
+ int perm;
+ struct file_operations fops;
+ };
- IWL_DEBUG_MAC80211("enter\n");
+-static struct libertas_debugfs_files debugfs_files[] = {
+- { "info", 0444, FOPS(libertas_dev_info, write_file_dummy), },
+- { "getscantable", 0444, FOPS(libertas_getscantable,
++static struct lbs_debugfs_files debugfs_files[] = {
++ { "info", 0444, FOPS(lbs_dev_info, write_file_dummy), },
++ { "getscantable", 0444, FOPS(lbs_getscantable,
+ write_file_dummy), },
+- { "sleepparams", 0644, FOPS(libertas_sleepparams_read,
+- libertas_sleepparams_write), },
+- { "extscan", 0600, FOPS(NULL, libertas_extscan), },
+- { "setuserscan", 0600, FOPS(NULL, libertas_setuserscan), },
++ { "sleepparams", 0644, FOPS(lbs_sleepparams_read,
++ lbs_sleepparams_write), },
++ { "extscan", 0600, FOPS(NULL, lbs_extscan), },
++ { "setuserscan", 0600, FOPS(NULL, lbs_setuserscan), },
+ };
-+ if (!priv->is_open) {
-+ IWL_DEBUG_MAC80211("leave - skip\n");
-+ return;
-+ }
+-static struct libertas_debugfs_files debugfs_events_files[] = {
+- {"low_rssi", 0644, FOPS(libertas_lowrssi_read,
+- libertas_lowrssi_write), },
+- {"low_snr", 0644, FOPS(libertas_lowsnr_read,
+- libertas_lowsnr_write), },
+- {"failure_count", 0644, FOPS(libertas_failcount_read,
+- libertas_failcount_write), },
+- {"beacon_missed", 0644, FOPS(libertas_bcnmiss_read,
+- libertas_bcnmiss_write), },
+- {"high_rssi", 0644, FOPS(libertas_highrssi_read,
+- libertas_highrssi_write), },
+- {"high_snr", 0644, FOPS(libertas_highsnr_read,
+- libertas_highsnr_write), },
++static struct lbs_debugfs_files debugfs_events_files[] = {
++ {"low_rssi", 0644, FOPS(lbs_lowrssi_read,
++ lbs_lowrssi_write), },
++ {"low_snr", 0644, FOPS(lbs_lowsnr_read,
++ lbs_lowsnr_write), },
++ {"failure_count", 0644, FOPS(lbs_failcount_read,
++ lbs_failcount_write), },
++ {"beacon_missed", 0644, FOPS(lbs_bcnmiss_read,
++ lbs_bcnmiss_write), },
++ {"high_rssi", 0644, FOPS(lbs_highrssi_read,
++ lbs_highrssi_write), },
++ {"high_snr", 0644, FOPS(lbs_highsnr_read,
++ lbs_highsnr_write), },
+ };
-- mutex_lock(&priv->mutex);
-- /* stop mac, cancel any scan request and clear
-- * RXON_FILTER_ASSOC_MSK BIT
-- */
- priv->is_open = 0;
-- iwl_scan_cancel_timeout(priv, 100);
-- cancel_delayed_work(&priv->post_associate);
-- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-- mutex_unlock(&priv->mutex);
-+
-+ if (iwl4965_is_ready_rf(priv)) {
-+ /* stop mac, cancel any scan request and clear
-+ * RXON_FILTER_ASSOC_MSK BIT
-+ */
-+ mutex_lock(&priv->mutex);
-+ iwl4965_scan_cancel_timeout(priv, 100);
-+ cancel_delayed_work(&priv->post_associate);
-+ mutex_unlock(&priv->mutex);
-+ }
-+
-+ iwl4965_down(priv);
-+
-+ flush_workqueue(priv->workqueue);
-+ free_irq(priv->pci_dev->irq, priv);
-+ pci_disable_msi(priv->pci_dev);
-+ pci_save_state(priv->pci_dev);
-+ pci_disable_device(priv->pci_dev);
+-static struct libertas_debugfs_files debugfs_regs_files[] = {
+- {"rdmac", 0644, FOPS(libertas_rdmac_read, libertas_rdmac_write), },
+- {"wrmac", 0600, FOPS(NULL, libertas_wrmac_write), },
+- {"rdbbp", 0644, FOPS(libertas_rdbbp_read, libertas_rdbbp_write), },
+- {"wrbbp", 0600, FOPS(NULL, libertas_wrbbp_write), },
+- {"rdrf", 0644, FOPS(libertas_rdrf_read, libertas_rdrf_write), },
+- {"wrrf", 0600, FOPS(NULL, libertas_wrrf_write), },
++static struct lbs_debugfs_files debugfs_regs_files[] = {
++ {"rdmac", 0644, FOPS(lbs_rdmac_read, lbs_rdmac_write), },
++ {"wrmac", 0600, FOPS(NULL, lbs_wrmac_write), },
++ {"rdbbp", 0644, FOPS(lbs_rdbbp_read, lbs_rdbbp_write), },
++ {"wrbbp", 0600, FOPS(NULL, lbs_wrbbp_write), },
++ {"rdrf", 0644, FOPS(lbs_rdrf_read, lbs_rdrf_write), },
++ {"wrrf", 0600, FOPS(NULL, lbs_wrrf_write), },
+ };
- IWL_DEBUG_MAC80211("leave\n");
+-void libertas_debugfs_init(void)
++void lbs_debugfs_init(void)
+ {
+- if (!libertas_dir)
+- libertas_dir = debugfs_create_dir("libertas_wireless", NULL);
++ if (!lbs_dir)
++ lbs_dir = debugfs_create_dir("lbs_wireless", NULL);
+
+ return;
}
--static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
-+static int iwl4965_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
- struct ieee80211_tx_control *ctl)
+-void libertas_debugfs_remove(void)
++void lbs_debugfs_remove(void)
{
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
+- if (libertas_dir)
+- debugfs_remove(libertas_dir);
++ if (lbs_dir)
++ debugfs_remove(lbs_dir);
+ return;
+ }
- IWL_DEBUG_MAC80211("enter\n");
+-void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev)
++void lbs_debugfs_init_one(struct lbs_private *priv, struct net_device *dev)
+ {
+ int i;
+- struct libertas_debugfs_files *files;
+- if (!libertas_dir)
++ struct lbs_debugfs_files *files;
++ if (!lbs_dir)
+ goto exit;
-@@ -7323,29 +7430,29 @@ static int iwl_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
- IWL_DEBUG_TX("dev->xmit(%d bytes) at rate 0x%02x\n", skb->len,
- ctl->tx_rate);
+- priv->debugfs_dir = debugfs_create_dir(dev->name, libertas_dir);
++ priv->debugfs_dir = debugfs_create_dir(dev->name, lbs_dir);
+ if (!priv->debugfs_dir)
+ goto exit;
-- if (iwl_tx_skb(priv, skb, ctl))
-+ if (iwl4965_tx_skb(priv, skb, ctl))
- dev_kfree_skb_any(skb);
+@@ -1721,13 +947,13 @@ void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev)
+ }
- IWL_DEBUG_MAC80211("leave\n");
- return 0;
+ #ifdef PROC_DEBUG
+- libertas_debug_init(priv, dev);
++ lbs_debug_init(priv, dev);
+ #endif
+ exit:
+ return;
}
--static int iwl_mac_add_interface(struct ieee80211_hw *hw,
-+static int iwl4965_mac_add_interface(struct ieee80211_hw *hw,
- struct ieee80211_if_init_conf *conf)
+-void libertas_debugfs_remove_one(wlan_private *priv)
++void lbs_debugfs_remove_one(struct lbs_private *priv)
{
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- unsigned long flags;
- DECLARE_MAC_BUF(mac);
-
-- IWL_DEBUG_MAC80211("enter: id %d, type %d\n", conf->if_id, conf->type);
-+ IWL_DEBUG_MAC80211("enter: type %d\n", conf->type);
+ int i;
-- if (priv->interface_id) {
-- IWL_DEBUG_MAC80211("leave - interface_id != 0\n");
-+ if (priv->vif) {
-+ IWL_DEBUG_MAC80211("leave - vif != NULL\n");
- return 0;
- }
+@@ -1754,8 +980,8 @@ void libertas_debugfs_remove_one(wlan_private *priv)
- spin_lock_irqsave(&priv->lock, flags);
-- priv->interface_id = conf->if_id;
-+ priv->vif = conf->vif;
+ #ifdef PROC_DEBUG
- spin_unlock_irqrestore(&priv->lock, flags);
+-#define item_size(n) (FIELD_SIZEOF(wlan_adapter, n))
+-#define item_addr(n) (offsetof(wlan_adapter, n))
++#define item_size(n) (FIELD_SIZEOF(struct lbs_private, n))
++#define item_addr(n) (offsetof(struct lbs_private, n))
-@@ -7355,58 +7462,62 @@ static int iwl_mac_add_interface(struct ieee80211_hw *hw,
- IWL_DEBUG_MAC80211("Set %s\n", print_mac(mac, conf->mac_addr));
- memcpy(priv->mac_addr, conf->mac_addr, ETH_ALEN);
- }
-- iwl_set_mode(priv, conf->type);
-- IWL_DEBUG_MAC80211("leave\n");
-+ if (iwl4965_is_ready(priv))
-+ iwl4965_set_mode(priv, conf->type);
-+
- mutex_unlock(&priv->mutex);
+ struct debug_data {
+@@ -1764,7 +990,7 @@ struct debug_data {
+ size_t addr;
+ };
-+ IWL_DEBUG_MAC80211("leave\n");
- return 0;
+-/* To debug any member of wlan_adapter, simply add one line here.
++/* To debug any member of struct lbs_private, simply add one line here.
+ */
+ static struct debug_data items[] = {
+ {"intcounter", item_size(intcounter), item_addr(intcounter)},
+@@ -1785,7 +1011,7 @@ static int num_of_items = ARRAY_SIZE(items);
+ * @param data data to output
+ * @return number of output data
+ */
+-static ssize_t wlan_debugfs_read(struct file *file, char __user *userbuf,
++static ssize_t lbs_debugfs_read(struct file *file, char __user *userbuf,
+ size_t count, loff_t *ppos)
+ {
+ int val = 0;
+@@ -1829,7 +1055,7 @@ static ssize_t wlan_debugfs_read(struct file *file, char __user *userbuf,
+ * @param data data to write
+ * @return number of data
+ */
+-static ssize_t wlan_debugfs_write(struct file *f, const char __user *buf,
++static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
+ size_t cnt, loff_t *ppos)
+ {
+ int r, i;
+@@ -1881,21 +1107,21 @@ static ssize_t wlan_debugfs_write(struct file *f, const char __user *buf,
+ return (ssize_t)cnt;
}
+-static struct file_operations libertas_debug_fops = {
++static struct file_operations lbs_debug_fops = {
+ .owner = THIS_MODULE,
+ .open = open_file_generic,
+- .write = wlan_debugfs_write,
+- .read = wlan_debugfs_read,
++ .write = lbs_debugfs_write,
++ .read = lbs_debugfs_read,
+ };
+
/**
-- * iwl_mac_config - mac80211 config callback
-+ * iwl4965_mac_config - mac80211 config callback
+ * @brief create debug proc file
*
- * We ignore conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME since it seems to
- * be set inappropriately and the driver currently sets the hardware up to
- * use it whenever needed.
+- * @param priv pointer wlan_private
++ * @param priv pointer struct lbs_private
+ * @param dev pointer net_device
+ * @return N/A
*/
--static int iwl_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
-+static int iwl4965_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+-static void libertas_debug_init(wlan_private * priv, struct net_device *dev)
++static void lbs_debug_init(struct lbs_private *priv, struct net_device *dev)
{
-- struct iwl_priv *priv = hw->priv;
-- const struct iwl_channel_info *ch_info;
-+ struct iwl4965_priv *priv = hw->priv;
-+ const struct iwl4965_channel_info *ch_info;
- unsigned long flags;
-+ int ret = 0;
-
- mutex_lock(&priv->mutex);
- IWL_DEBUG_MAC80211("enter to channel %d\n", conf->channel);
-
-- if (!iwl_is_ready(priv)) {
-+ priv->add_radiotap = !!(conf->flags & IEEE80211_CONF_RADIOTAP);
-+
-+ if (!iwl4965_is_ready(priv)) {
- IWL_DEBUG_MAC80211("leave - not ready\n");
-- mutex_unlock(&priv->mutex);
-- return -EIO;
-+ ret = -EIO;
-+ goto out;
- }
-
-- /* TODO: Figure out how to get ieee80211_local->sta_scanning w/ only
-- * what is exposed through include/ declrations */
-- if (unlikely(!iwl_param_disable_hw_scan &&
-+ if (unlikely(!iwl4965_param_disable_hw_scan &&
- test_bit(STATUS_SCANNING, &priv->status))) {
- IWL_DEBUG_MAC80211("leave - scanning\n");
-+ set_bit(STATUS_CONF_PENDING, &priv->status);
- mutex_unlock(&priv->mutex);
- return 0;
- }
+ int i;
- spin_lock_irqsave(&priv->lock, flags);
+@@ -1903,11 +1129,10 @@ static void libertas_debug_init(wlan_private * priv, struct net_device *dev)
+ return;
-- ch_info = iwl_get_channel_info(priv, conf->phymode, conf->channel);
-+ ch_info = iwl4965_get_channel_info(priv, conf->phymode, conf->channel);
- if (!is_channel_valid(ch_info)) {
- IWL_DEBUG_SCAN("Channel %d [%d] is INVALID for this SKU.\n",
- conf->channel, conf->phymode);
- IWL_DEBUG_MAC80211("leave - invalid channel\n");
- spin_unlock_irqrestore(&priv->lock, flags);
-- mutex_unlock(&priv->mutex);
-- return -EINVAL;
-+ ret = -EINVAL;
-+ goto out;
- }
+ for (i = 0; i < num_of_items; i++)
+- items[i].addr += (size_t) priv->adapter;
++ items[i].addr += (size_t) priv;
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
- /* if we are switching fron ht to 2.4 clear flags
- * from any ht related info since 2.4 does not
- * support ht */
-@@ -7416,57 +7527,56 @@ static int iwl_mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+ priv->debugfs_debug = debugfs_create_file("debug", 0644,
+ priv->debugfs_dir, &items[0],
+- &libertas_debug_fops);
++ &lbs_debug_fops);
+ }
#endif
- )
- priv->staging_rxon.flags = 0;
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT */
+-
+diff --git a/drivers/net/wireless/libertas/debugfs.h b/drivers/net/wireless/libertas/debugfs.h
+index 880a11b..f2b9c7f 100644
+--- a/drivers/net/wireless/libertas/debugfs.h
++++ b/drivers/net/wireless/libertas/debugfs.h
+@@ -1,6 +1,10 @@
+-void libertas_debugfs_init(void);
+-void libertas_debugfs_remove(void);
++#ifndef _LBS_DEBUGFS_H_
++#define _LBS_DEBUGFS_H_
-- iwl_set_rxon_channel(priv, conf->phymode, conf->channel);
-+ iwl4965_set_rxon_channel(priv, conf->phymode, conf->channel);
+-void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev);
+-void libertas_debugfs_remove_one(wlan_private *priv);
++void lbs_debugfs_init(void);
++void lbs_debugfs_remove(void);
-- iwl_set_flags_for_phymode(priv, conf->phymode);
-+ iwl4965_set_flags_for_phymode(priv, conf->phymode);
++void lbs_debugfs_init_one(struct lbs_private *priv, struct net_device *dev);
++void lbs_debugfs_remove_one(struct lbs_private *priv);
++
++#endif
+diff --git a/drivers/net/wireless/libertas/decl.h b/drivers/net/wireless/libertas/decl.h
+index 87fea9d..aaacd9b 100644
+--- a/drivers/net/wireless/libertas/decl.h
++++ b/drivers/net/wireless/libertas/decl.h
+@@ -3,80 +3,74 @@
+ * functions defined in other source files
+ */
- /* The list of supported rates and rate mask can be different
- * for each phymode; since the phymode may have changed, reset
- * the rate mask to what mac80211 lists */
-- iwl_set_rate(priv);
-+ iwl4965_set_rate(priv);
+-#ifndef _WLAN_DECL_H_
+-#define _WLAN_DECL_H_
++#ifndef _LBS_DECL_H_
++#define _LBS_DECL_H_
- spin_unlock_irqrestore(&priv->lock, flags);
+ #include <linux/device.h>
- #ifdef IEEE80211_CONF_CHANNEL_SWITCH
- if (conf->flags & IEEE80211_CONF_CHANNEL_SWITCH) {
-- iwl_hw_channel_switch(priv, conf->channel);
-- mutex_unlock(&priv->mutex);
-- return 0;
-+ iwl4965_hw_channel_switch(priv, conf->channel);
-+ goto out;
- }
- #endif
+ #include "defs.h"
-- iwl_radio_kill_sw(priv, !conf->radio_enabled);
-+ iwl4965_radio_kill_sw(priv, !conf->radio_enabled);
+ /** Function Prototype Declaration */
+-struct wlan_private;
++struct lbs_private;
+ struct sk_buff;
+ struct net_device;
+-
+-int libertas_set_mac_packet_filter(wlan_private * priv);
+-
+-void libertas_send_tx_feedback(wlan_private * priv);
+-
+-int libertas_free_cmd_buffer(wlan_private * priv);
+ struct cmd_ctrl_node;
+-struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv);
++struct cmd_ds_command;
- if (!conf->radio_enabled) {
- IWL_DEBUG_MAC80211("leave - radio disabled\n");
-- mutex_unlock(&priv->mutex);
-- return 0;
-+ goto out;
- }
+-void libertas_set_cmd_ctrl_node(wlan_private * priv,
+- struct cmd_ctrl_node *ptempnode,
+- u32 cmd_oid, u16 wait_option, void *pdata_buf);
++int lbs_set_mac_packet_filter(struct lbs_private *priv);
-- if (iwl_is_rfkill(priv)) {
-+ if (iwl4965_is_rfkill(priv)) {
- IWL_DEBUG_MAC80211("leave - RF kill\n");
-- mutex_unlock(&priv->mutex);
-- return -EIO;
-+ ret = -EIO;
-+ goto out;
- }
+-int libertas_prepare_and_send_command(wlan_private * priv,
+- u16 cmd_no,
+- u16 cmd_action,
+- u16 wait_option, u32 cmd_oid, void *pdata_buf);
++void lbs_send_tx_feedback(struct lbs_private *priv);
-- iwl_set_rate(priv);
-+ iwl4965_set_rate(priv);
+-void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u8 addtail);
++int lbs_free_cmd_buffer(struct lbs_private *priv);
- if (memcmp(&priv->active_rxon,
- &priv->staging_rxon, sizeof(priv->staging_rxon)))
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
- else
- IWL_DEBUG_INFO("No re-sending same RXON configuration.\n");
+-int libertas_allocate_cmd_buffer(wlan_private * priv);
+-int libertas_execute_next_command(wlan_private * priv);
+-int libertas_process_event(wlan_private * priv);
+-void libertas_interrupt(struct net_device *);
+-int libertas_set_radio_control(wlan_private * priv);
+-u32 libertas_fw_index_to_data_rate(u8 index);
+-u8 libertas_data_rate_to_fw_index(u32 rate);
+-void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen);
++int lbs_prepare_and_send_command(struct lbs_private *priv,
++ u16 cmd_no,
++ u16 cmd_action,
++ u16 wait_option, u32 cmd_oid, void *pdata_buf);
- IWL_DEBUG_MAC80211("leave\n");
+-void libertas_upload_rx_packet(wlan_private * priv, struct sk_buff *skb);
++int lbs_allocate_cmd_buffer(struct lbs_private *priv);
++int lbs_execute_next_command(struct lbs_private *priv);
++int lbs_process_event(struct lbs_private *priv);
++void lbs_interrupt(struct lbs_private *priv);
++int lbs_set_radio_control(struct lbs_private *priv);
++u32 lbs_fw_index_to_data_rate(u8 index);
++u8 lbs_data_rate_to_fw_index(u32 rate);
++void lbs_get_fwversion(struct lbs_private *priv,
++ char *fwversion,
++ int maxlen);
-+out:
-+ clear_bit(STATUS_CONF_PENDING, &priv->status);
- mutex_unlock(&priv->mutex);
+ /** The proc fs interface */
+-int libertas_process_rx_command(wlan_private * priv);
+-int libertas_process_tx(wlan_private * priv, struct sk_buff *skb);
+-void __libertas_cleanup_and_insert_cmd(wlan_private * priv,
+- struct cmd_ctrl_node *ptempcmd);
-
-- return 0;
-+ return ret;
- }
-
--static void iwl_config_ap(struct iwl_priv *priv)
-+static void iwl4965_config_ap(struct iwl4965_priv *priv)
- {
- int rc = 0;
+-int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band);
+-
+-int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *);
++int lbs_process_rx_command(struct lbs_private *priv);
++void lbs_complete_command(struct lbs_private *priv, struct cmd_ctrl_node *cmd,
++ int result);
++int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev);
++int lbs_set_regiontable(struct lbs_private *priv, u8 region, u8 band);
-@@ -7478,12 +7588,12 @@ static void iwl_config_ap(struct iwl_priv *priv)
+-void libertas_ps_sleep(wlan_private * priv, int wait_option);
+-void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode);
+-void libertas_ps_wakeup(wlan_private * priv, int wait_option);
++int lbs_process_rxed_packet(struct lbs_private *priv, struct sk_buff *);
- /* RXON - unassoc (to set timing command) */
- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
+-void libertas_tx_runqueue(wlan_private *priv);
++void lbs_ps_sleep(struct lbs_private *priv, int wait_option);
++void lbs_ps_confirm_sleep(struct lbs_private *priv, u16 psmode);
++void lbs_ps_wakeup(struct lbs_private *priv, int wait_option);
- /* RXON Timing */
-- memset(&priv->rxon_timing, 0, sizeof(struct iwl_rxon_time_cmd));
-- iwl_setup_rxon_timing(priv);
-- rc = iwl_send_cmd_pdu(priv, REPLY_RXON_TIMING,
-+ memset(&priv->rxon_timing, 0, sizeof(struct iwl4965_rxon_time_cmd));
-+ iwl4965_setup_rxon_timing(priv);
-+ rc = iwl4965_send_cmd_pdu(priv, REPLY_RXON_TIMING,
- sizeof(priv->rxon_timing), &priv->rxon_timing);
- if (rc)
- IWL_WARNING("REPLY_RXON_TIMING failed - "
-@@ -7515,23 +7625,24 @@ static void iwl_config_ap(struct iwl_priv *priv)
- }
- /* restore RXON assoc */
- priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
--#ifdef CONFIG_IWLWIFI_QOS
-- iwl_activate_qos(priv, 1);
-+ iwl4965_commit_rxon(priv);
-+#ifdef CONFIG_IWL4965_QOS
-+ iwl4965_activate_qos(priv, 1);
- #endif
-- iwl_rxon_add_station(priv, BROADCAST_ADDR, 0);
-+ iwl4965_rxon_add_station(priv, iwl4965_broadcast_addr, 0);
- }
-- iwl_send_beacon_cmd(priv);
-+ iwl4965_send_beacon_cmd(priv);
+-struct chan_freq_power *libertas_find_cfp_by_band_and_channel(
+- wlan_adapter * adapter, u8 band, u16 channel);
++struct chan_freq_power *lbs_find_cfp_by_band_and_channel(
++ struct lbs_private *priv,
++ u8 band,
++ u16 channel);
- /* FIXME - we need to add code here to detect a totally new
- * configuration, reset the AP, unassoc, rxon timing, assoc,
- * clear sta table, add BCAST sta... */
- }
+-void libertas_mac_event_disconnected(wlan_private * priv);
++void lbs_mac_event_disconnected(struct lbs_private *priv);
--static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
-+static int iwl4965_mac_config_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
- struct ieee80211_if_conf *conf)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- DECLARE_MAC_BUF(mac);
- unsigned long flags;
- int rc;
-@@ -7546,9 +7657,11 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
- return 0;
- }
+-void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str);
++void lbs_send_iwevcustom_event(struct lbs_private *priv, s8 *str);
-+ if (!iwl4965_is_alive(priv))
-+ return -EAGAIN;
+ /* main.c */
+-struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band,
+- int *cfp_no);
+-wlan_private *libertas_add_card(void *card, struct device *dmdev);
+-int libertas_remove_card(wlan_private *priv);
+-int libertas_start_card(wlan_private *priv);
+-int libertas_stop_card(wlan_private *priv);
+-int libertas_add_mesh(wlan_private *priv, struct device *dev);
+-void libertas_remove_mesh(wlan_private *priv);
+-int libertas_reset_device(wlan_private *priv);
+-
+-#endif /* _WLAN_DECL_H_ */
++struct chan_freq_power *lbs_get_region_cfp_table(u8 region,
++ u8 band,
++ int *cfp_no);
++struct lbs_private *lbs_add_card(void *card, struct device *dmdev);
++int lbs_remove_card(struct lbs_private *priv);
++int lbs_start_card(struct lbs_private *priv);
++int lbs_stop_card(struct lbs_private *priv);
++int lbs_reset_device(struct lbs_private *priv);
++void lbs_host_to_card_done(struct lbs_private *priv);
+
- mutex_lock(&priv->mutex);
++int lbs_update_channel(struct lbs_private *priv);
++#endif
+diff --git a/drivers/net/wireless/libertas/defs.h b/drivers/net/wireless/libertas/defs.h
+index 3a0c9be..3053cc2 100644
+--- a/drivers/net/wireless/libertas/defs.h
++++ b/drivers/net/wireless/libertas/defs.h
+@@ -2,8 +2,8 @@
+ * This header file contains global constant/enum definitions,
+ * global variable declaration.
+ */
+-#ifndef _WLAN_DEFS_H_
+-#define _WLAN_DEFS_H_
++#ifndef _LBS_DEFS_H_
++#define _LBS_DEFS_H_
-- IWL_DEBUG_MAC80211("enter: interface id %d\n", if_id);
- if (conf->bssid)
- IWL_DEBUG_MAC80211("bssid: %s\n",
- print_mac(mac, conf->bssid));
-@@ -7565,8 +7678,8 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
- return 0;
- }
+ #include <linux/spinlock.h>
-- if (priv->interface_id != if_id) {
-- IWL_DEBUG_MAC80211("leave - interface_id != if_id\n");
-+ if (priv->vif != vif) {
-+ IWL_DEBUG_MAC80211("leave - priv->vif != vif\n");
- mutex_unlock(&priv->mutex);
- return 0;
- }
-@@ -7584,11 +7697,14 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
- priv->ibss_beacon = conf->beacon;
- }
+@@ -41,11 +41,11 @@
+ #define LBS_DEB_HEX 0x00200000
+ #define LBS_DEB_SDIO 0x00400000
-+ if (iwl4965_is_rfkill(priv))
-+ goto done;
-+
- if (conf->bssid && !is_zero_ether_addr(conf->bssid) &&
- !is_multicast_ether_addr(conf->bssid)) {
- /* If there is currently a HW scan going on in the background
- * then we need to cancel it else the RXON below will fail. */
-- if (iwl_scan_cancel_timeout(priv, 100)) {
-+ if (iwl4965_scan_cancel_timeout(priv, 100)) {
- IWL_WARNING("Aborted scan still in progress "
- "after 100ms\n");
- IWL_DEBUG_MAC80211("leaving - scan abort failed.\n");
-@@ -7604,20 +7720,21 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
- memcpy(priv->bssid, conf->bssid, ETH_ALEN);
+-extern unsigned int libertas_debug;
++extern unsigned int lbs_debug;
- if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
-- iwl_config_ap(priv);
-+ iwl4965_config_ap(priv);
- else {
-- rc = iwl_commit_rxon(priv);
-+ rc = iwl4965_commit_rxon(priv);
- if ((priv->iw_mode == IEEE80211_IF_TYPE_STA) && rc)
-- iwl_rxon_add_station(
-+ iwl4965_rxon_add_station(
- priv, priv->active_rxon.bssid_addr, 1);
- }
+ #ifdef DEBUG
+ #define LBS_DEB_LL(grp, grpnam, fmt, args...) \
+-do { if ((libertas_debug & (grp)) == (grp)) \
++do { if ((lbs_debug & (grp)) == (grp)) \
+ printk(KERN_DEBUG DRV_NAME grpnam "%s: " fmt, \
+ in_interrupt() ? " (INT)" : "", ## args); } while (0)
+ #else
+@@ -96,8 +96,8 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
+ int i = 0;
- } else {
-- iwl_scan_cancel_timeout(priv, 100);
-+ iwl4965_scan_cancel_timeout(priv, 100);
- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
- }
+ if (len &&
+- (libertas_debug & LBS_DEB_HEX) &&
+- (libertas_debug & grp))
++ (lbs_debug & LBS_DEB_HEX) &&
++ (lbs_debug & grp))
+ {
+ for (i = 1; i <= len; i++) {
+ if ((i & 0xf) == 1) {
+@@ -132,15 +132,22 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
+ */
-+ done:
- spin_lock_irqsave(&priv->lock, flags);
- if (!conf->ssid_len)
- memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
-@@ -7633,34 +7750,35 @@ static int iwl_mac_config_interface(struct ieee80211_hw *hw, int if_id,
- return 0;
- }
+ #define MRVDRV_MAX_MULTICAST_LIST_SIZE 32
+-#define MRVDRV_NUM_OF_CMD_BUFFER 10
+-#define MRVDRV_SIZE_OF_CMD_BUFFER (2 * 1024)
++#define LBS_NUM_CMD_BUFFERS 10
++#define LBS_CMD_BUFFER_SIZE (2 * 1024)
+ #define MRVDRV_MAX_CHANNEL_SIZE 14
+ #define MRVDRV_ASSOCIATION_TIME_OUT 255
+ #define MRVDRV_SNAP_HEADER_LEN 8
--static void iwl_configure_filter(struct ieee80211_hw *hw,
-+static void iwl4965_configure_filter(struct ieee80211_hw *hw,
- unsigned int changed_flags,
- unsigned int *total_flags,
- int mc_count, struct dev_addr_list *mc_list)
- {
- /*
- * XXX: dummy
-- * see also iwl_connection_init_rx_config
-+ * see also iwl4965_connection_init_rx_config
- */
- *total_flags = 0;
- }
+-#define WLAN_UPLD_SIZE 2312
++#define LBS_UPLD_SIZE 2312
+ #define DEV_NAME_LEN 32
--static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
-+static void iwl4965_mac_remove_interface(struct ieee80211_hw *hw,
- struct ieee80211_if_init_conf *conf)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
++/* Wake criteria for HOST_SLEEP_CFG command */
++#define EHS_WAKE_ON_BROADCAST_DATA 0x0001
++#define EHS_WAKE_ON_UNICAST_DATA 0x0002
++#define EHS_WAKE_ON_MAC_EVENT 0x0004
++#define EHS_WAKE_ON_MULTICAST_DATA 0x0008
++#define EHS_REMOVE_WAKEUP 0xFFFFFFFF
++
+ /** Misc constants */
+ /* This section defines 802.11 specific contants */
- IWL_DEBUG_MAC80211("enter\n");
+@@ -257,17 +264,11 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
- mutex_lock(&priv->mutex);
+ #define MAX_LEDS 8
-- iwl_scan_cancel_timeout(priv, 100);
-- cancel_delayed_work(&priv->post_associate);
-- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
+-#define IS_MESH_FRAME(x) (x->cb[6])
+-#define SET_MESH_FRAME(x) (x->cb[6]=1)
+-#define UNSET_MESH_FRAME(x) (x->cb[6]=0)
-
-- if (priv->interface_id == conf->if_id) {
-- priv->interface_id = 0;
-+ if (iwl4965_is_ready_rf(priv)) {
-+ iwl4965_scan_cancel_timeout(priv, 100);
-+ cancel_delayed_work(&priv->post_associate);
-+ priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-+ iwl4965_commit_rxon(priv);
-+ }
-+ if (priv->vif == conf->vif) {
-+ priv->vif = NULL;
- memset(priv->bssid, 0, ETH_ALEN);
- memset(priv->essid, 0, IW_ESSID_MAX_SIZE);
- priv->essid_len = 0;
-@@ -7671,19 +7789,50 @@ static void iwl_mac_remove_interface(struct ieee80211_hw *hw,
-
- }
-
--#define IWL_DELAY_NEXT_SCAN (HZ*2)
--static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
-+static void iwl4965_bss_info_changed(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_bss_conf *bss_conf,
-+ u32 changes)
-+{
-+ struct iwl4965_priv *priv = hw->priv;
-+
-+ if (changes & BSS_CHANGED_ERP_PREAMBLE) {
-+ if (bss_conf->use_short_preamble)
-+ priv->staging_rxon.flags |= RXON_FLG_SHORT_PREAMBLE_MSK;
-+ else
-+ priv->staging_rxon.flags &= ~RXON_FLG_SHORT_PREAMBLE_MSK;
-+ }
-+
-+ if (changes & BSS_CHANGED_ERP_CTS_PROT) {
-+ if (bss_conf->use_cts_prot && (priv->phymode != MODE_IEEE80211A))
-+ priv->staging_rxon.flags |= RXON_FLG_TGG_PROTECT_MSK;
-+ else
-+ priv->staging_rxon.flags &= ~RXON_FLG_TGG_PROTECT_MSK;
-+ }
-+
-+ if (changes & BSS_CHANGED_ASSOC) {
-+ /*
-+ * TODO:
-+ * do stuff instead of sniffing assoc resp
-+ */
-+ }
-+
-+ if (iwl4965_is_associated(priv))
-+ iwl4965_send_rxon_assoc(priv);
-+}
-+
-+static int iwl4965_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
- {
- int rc = 0;
- unsigned long flags;
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
+ /** Global Variable Declaration */
+-typedef struct _wlan_private wlan_private;
+-typedef struct _wlan_adapter wlan_adapter;
+-extern const char libertas_driver_version[];
+-extern u16 libertas_region_code_to_index[MRVDRV_MAX_REGION_CODE];
++extern const char lbs_driver_version[];
++extern u16 lbs_region_code_to_index[MRVDRV_MAX_REGION_CODE];
- IWL_DEBUG_MAC80211("enter\n");
+-extern u8 libertas_bg_rates[MAX_RATES];
++extern u8 lbs_bg_rates[MAX_RATES];
- mutex_lock(&priv->mutex);
- spin_lock_irqsave(&priv->lock, flags);
+ /** ENUM definition*/
+ /** SNRNF_TYPE */
+@@ -284,13 +285,13 @@ enum SNRNF_DATA {
+ MAX_TYPE_AVG
+ };
-- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl4965_is_ready_rf(priv)) {
- rc = -EIO;
- IWL_DEBUG_MAC80211("leave - not ready or exit pending\n");
- goto out_unlock;
-@@ -7695,17 +7844,21 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
- goto out_unlock;
- }
+-/** WLAN_802_11_POWER_MODE */
+-enum WLAN_802_11_POWER_MODE {
+- WLAN802_11POWERMODECAM,
+- WLAN802_11POWERMODEMAX_PSP,
+- WLAN802_11POWERMODEFAST_PSP,
++/** LBS_802_11_POWER_MODE */
++enum LBS_802_11_POWER_MODE {
++ LBS802_11POWERMODECAM,
++ LBS802_11POWERMODEMAX_PSP,
++ LBS802_11POWERMODEFAST_PSP,
+ /*not a real mode, defined as an upper bound */
+- WLAN802_11POWEMODEMAX
++ LBS802_11POWEMODEMAX
+ };
-+ /* we don't schedule scan within next_scan_jiffies period */
-+ if (priv->next_scan_jiffies &&
-+ time_after(priv->next_scan_jiffies, jiffies)) {
-+ rc = -EAGAIN;
-+ goto out_unlock;
-+ }
- /* if we just finished scan ask for delay */
-- if (priv->last_scan_jiffies &&
-- time_after(priv->last_scan_jiffies + IWL_DELAY_NEXT_SCAN,
-- jiffies)) {
-+ if (priv->last_scan_jiffies && time_after(priv->last_scan_jiffies +
-+ IWL_DELAY_NEXT_SCAN, jiffies)) {
- rc = -EAGAIN;
- goto out_unlock;
- }
- if (len) {
-- IWL_DEBUG_SCAN("direct scan for "
-- "%s [%d]\n ",
-- iwl_escape_essid(ssid, len), (int)len);
-+ IWL_DEBUG_SCAN("direct scan for %s [%d]\n ",
-+ iwl4965_escape_essid(ssid, len), (int)len);
+ /** PS_STATE */
+@@ -308,16 +309,16 @@ enum DNLD_STATE {
+ DNLD_CMD_SENT
+ };
- priv->one_direct_scan = 1;
- priv->direct_ssid_len = (u8)
-@@ -7714,7 +7867,7 @@ static int iwl_mac_hw_scan(struct ieee80211_hw *hw, u8 *ssid, size_t len)
- } else
- priv->one_direct_scan = 0;
+-/** WLAN_MEDIA_STATE */
+-enum WLAN_MEDIA_STATE {
+- LIBERTAS_CONNECTED,
+- LIBERTAS_DISCONNECTED
++/** LBS_MEDIA_STATE */
++enum LBS_MEDIA_STATE {
++ LBS_CONNECTED,
++ LBS_DISCONNECTED
+ };
-- rc = iwl_scan_initiate(priv);
-+ rc = iwl4965_scan_initiate(priv);
+-/** WLAN_802_11_PRIVACY_FILTER */
+-enum WLAN_802_11_PRIVACY_FILTER {
+- WLAN802_11PRIVFILTERACCEPTALL,
+- WLAN802_11PRIVFILTER8021XWEP
++/** LBS_802_11_PRIVACY_FILTER */
++enum LBS_802_11_PRIVACY_FILTER {
++ LBS802_11PRIVFILTERACCEPTALL,
++ LBS802_11PRIVFILTER8021XWEP
+ };
- IWL_DEBUG_MAC80211("leave\n");
+ /** mv_ms_type */
+@@ -382,4 +383,4 @@ enum SNMP_MIB_VALUE_e {
+ #define FWT_DEFAULT_SLEEPMODE 0
+ #define FWT_DEFAULT_SNR 0
-@@ -7725,18 +7878,18 @@ out_unlock:
- return rc;
- }
+-#endif /* _WLAN_DEFS_H_ */
++#endif
+diff --git a/drivers/net/wireless/libertas/dev.h b/drivers/net/wireless/libertas/dev.h
+index 1fb807a..58d7ef6 100644
+--- a/drivers/net/wireless/libertas/dev.h
++++ b/drivers/net/wireless/libertas/dev.h
+@@ -1,21 +1,20 @@
+ /**
+ * This file contains definitions and data structures specific
+ * to Marvell 802.11 NIC. It contains the Device Information
+- * structure wlan_adapter.
++ * structure struct lbs_private..
+ */
+-#ifndef _WLAN_DEV_H_
+-#define _WLAN_DEV_H_
++#ifndef _LBS_DEV_H_
++#define _LBS_DEV_H_
--static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
-+static int iwl4965_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
- const u8 *local_addr, const u8 *addr,
- struct ieee80211_key_conf *key)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- DECLARE_MAC_BUF(mac);
- int rc = 0;
- u8 sta_id;
+ #include <linux/netdevice.h>
+ #include <linux/wireless.h>
+ #include <linux/ethtool.h>
+ #include <linux/debugfs.h>
+-#include <net/ieee80211.h>
- IWL_DEBUG_MAC80211("enter\n");
+ #include "defs.h"
+ #include "scan.h"
-- if (!iwl_param_hwcrypto) {
-+ if (!iwl4965_param_hwcrypto) {
- IWL_DEBUG_MAC80211("leave - hwcrypto disabled\n");
- return -EOPNOTSUPP;
- }
-@@ -7745,7 +7898,7 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
- /* only support pairwise keys */
- return -EOPNOTSUPP;
+-extern struct ethtool_ops libertas_ethtool_ops;
++extern struct ethtool_ops lbs_ethtool_ops;
-- sta_id = iwl_hw_find_station(priv, addr);
-+ sta_id = iwl4965_hw_find_station(priv, addr);
- if (sta_id == IWL_INVALID_STATION) {
- IWL_DEBUG_MAC80211("leave - %s not in station map.\n",
- print_mac(mac, addr));
-@@ -7754,24 +7907,24 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ #define MAX_BSSID_PER_CHANNEL 16
- mutex_lock(&priv->mutex);
+@@ -53,7 +52,7 @@ struct region_channel {
+ struct chan_freq_power *CFP;
+ };
-- iwl_scan_cancel_timeout(priv, 100);
-+ iwl4965_scan_cancel_timeout(priv, 100);
+-struct wlan_802_11_security {
++struct lbs_802_11_security {
+ u8 WPAenabled;
+ u8 WPA2enabled;
+ u8 wep_enabled;
+@@ -78,16 +77,16 @@ struct current_bss_params {
- switch (cmd) {
- case SET_KEY:
-- rc = iwl_update_sta_key_info(priv, key, sta_id);
-+ rc = iwl4965_update_sta_key_info(priv, key, sta_id);
- if (!rc) {
-- iwl_set_rxon_hwcrypto(priv, 1);
-- iwl_commit_rxon(priv);
-+ iwl4965_set_rxon_hwcrypto(priv, 1);
-+ iwl4965_commit_rxon(priv);
- key->hw_key_idx = sta_id;
- IWL_DEBUG_MAC80211("set_key success, using hwcrypto\n");
- key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
- }
- break;
- case DISABLE_KEY:
-- rc = iwl_clear_sta_key_info(priv, sta_id);
-+ rc = iwl4965_clear_sta_key_info(priv, sta_id);
- if (!rc) {
-- iwl_set_rxon_hwcrypto(priv, 0);
-- iwl_commit_rxon(priv);
-+ iwl4965_set_rxon_hwcrypto(priv, 0);
-+ iwl4965_commit_rxon(priv);
- IWL_DEBUG_MAC80211("disable hwcrypto key\n");
- }
- break;
-@@ -7785,18 +7938,18 @@ static int iwl_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
- return rc;
- }
+ /** sleep_params */
+ struct sleep_params {
+- u16 sp_error;
+- u16 sp_offset;
+- u16 sp_stabletime;
+- u8 sp_calcontrol;
+- u8 sp_extsleepclk;
+- u16 sp_reserved;
++ uint16_t sp_error;
++ uint16_t sp_offset;
++ uint16_t sp_stabletime;
++ uint8_t sp_calcontrol;
++ uint8_t sp_extsleepclk;
++ uint16_t sp_reserved;
+ };
--static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
-+static int iwl4965_mac_conf_tx(struct ieee80211_hw *hw, int queue,
- const struct ieee80211_tx_queue_params *params)
- {
-- struct iwl_priv *priv = hw->priv;
--#ifdef CONFIG_IWLWIFI_QOS
-+ struct iwl4965_priv *priv = hw->priv;
-+#ifdef CONFIG_IWL4965_QOS
- unsigned long flags;
- int q;
--#endif /* CONFIG_IWL_QOS */
-+#endif /* CONFIG_IWL4965_QOS */
+ /* Mesh statistics */
+-struct wlan_mesh_stats {
++struct lbs_mesh_stats {
+ u32 fwd_bcast_cnt; /* Fwd: Broadcast counter */
+ u32 fwd_unicast_cnt; /* Fwd: Unicast counter */
+ u32 fwd_drop_ttl; /* Fwd: TTL zero */
+@@ -99,26 +98,22 @@ struct wlan_mesh_stats {
+ };
- IWL_DEBUG_MAC80211("enter\n");
+ /** Private structure for the MV device */
+-struct _wlan_private {
+- int open;
++struct lbs_private {
+ int mesh_open;
+ int infra_open;
+ int mesh_autostart_enabled;
+- __le16 boot2_version;
-- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl4965_is_ready_rf(priv)) {
- IWL_DEBUG_MAC80211("leave - RF not ready\n");
- return -EIO;
- }
-@@ -7806,7 +7959,7 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
- return 0;
- }
+ char name[DEV_NAME_LEN];
--#ifdef CONFIG_IWLWIFI_QOS
-+#ifdef CONFIG_IWL4965_QOS
- if (!priv->qos_data.qos_enable) {
- priv->qos_data.qos_active = 0;
- IWL_DEBUG_MAC80211("leave - qos not enabled\n");
-@@ -7829,30 +7982,30 @@ static int iwl_mac_conf_tx(struct ieee80211_hw *hw, int queue,
+ void *card;
+- wlan_adapter *adapter;
+ struct net_device *dev;
- mutex_lock(&priv->mutex);
- if (priv->iw_mode == IEEE80211_IF_TYPE_AP)
-- iwl_activate_qos(priv, 1);
-- else if (priv->assoc_id && iwl_is_associated(priv))
-- iwl_activate_qos(priv, 0);
-+ iwl4965_activate_qos(priv, 1);
-+ else if (priv->assoc_id && iwl4965_is_associated(priv))
-+ iwl4965_activate_qos(priv, 0);
+ struct net_device_stats stats;
+ struct net_device *mesh_dev; /* Virtual device */
+ struct net_device *rtap_net_dev;
+- struct ieee80211_device *ieee;
- mutex_unlock(&priv->mutex);
+ struct iw_statistics wstats;
+- struct wlan_mesh_stats mstats;
++ struct lbs_mesh_stats mstats;
+ struct dentry *debugfs_dir;
+ struct dentry *debugfs_debug;
+ struct dentry *debugfs_files[6];
+@@ -136,15 +131,13 @@ struct _wlan_private {
+ /** Upload length */
+ u32 upld_len;
+ /* Upload buffer */
+- u8 upld_buf[WLAN_UPLD_SIZE];
++ u8 upld_buf[LBS_UPLD_SIZE];
+ /* Download sent:
+ bit0 1/0=data_sent/data_tx_done,
+ bit1 1/0=cmd_sent/cmd_tx_done,
+ all other bits reserved 0 */
+ u8 dnld_sent;
--#endif /*CONFIG_IWLWIFI_QOS */
-+#endif /*CONFIG_IWL4965_QOS */
+- struct device *hotplug_device;
+-
+ /** thread to service interrupts */
+ struct task_struct *main_thread;
+ wait_queue_head_t waitq;
+@@ -155,65 +148,29 @@ struct _wlan_private {
+ struct work_struct sync_channel;
- IWL_DEBUG_MAC80211("leave\n");
- return 0;
- }
+ /** Hardware access */
+- int (*hw_host_to_card) (wlan_private * priv, u8 type, u8 * payload, u16 nb);
+- int (*hw_get_int_status) (wlan_private * priv, u8 *);
+- int (*hw_read_event_cause) (wlan_private *);
+-};
++ int (*hw_host_to_card) (struct lbs_private *priv, u8 type, u8 *payload, u16 nb);
++ int (*hw_get_int_status) (struct lbs_private *priv, u8 *);
++ int (*hw_read_event_cause) (struct lbs_private *);
--static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
-+static int iwl4965_mac_get_tx_stats(struct ieee80211_hw *hw,
- struct ieee80211_tx_queue_stats *stats)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- int i, avail;
-- struct iwl_tx_queue *txq;
-- struct iwl_queue *q;
-+ struct iwl4965_tx_queue *txq;
-+ struct iwl4965_queue *q;
- unsigned long flags;
+-/** Association request
+- *
+- * Encapsulates all the options that describe a specific assocation request
+- * or configuration of the wireless card's radio, mode, and security settings.
+- */
+-struct assoc_request {
+-#define ASSOC_FLAG_SSID 1
+-#define ASSOC_FLAG_CHANNEL 2
+-#define ASSOC_FLAG_BAND 3
+-#define ASSOC_FLAG_MODE 4
+-#define ASSOC_FLAG_BSSID 5
+-#define ASSOC_FLAG_WEP_KEYS 6
+-#define ASSOC_FLAG_WEP_TX_KEYIDX 7
+-#define ASSOC_FLAG_WPA_MCAST_KEY 8
+-#define ASSOC_FLAG_WPA_UCAST_KEY 9
+-#define ASSOC_FLAG_SECINFO 10
+-#define ASSOC_FLAG_WPA_IE 11
+- unsigned long flags;
++ /* Wake On LAN */
++ uint32_t wol_criteria;
++ uint8_t wol_gpio;
++ uint8_t wol_gap;
- IWL_DEBUG_MAC80211("enter\n");
+- u8 ssid[IW_ESSID_MAX_SIZE + 1];
+- u8 ssid_len;
+- u8 channel;
+- u8 band;
+- u8 mode;
+- u8 bssid[ETH_ALEN];
+-
+- /** WEP keys */
+- struct enc_key wep_keys[4];
+- u16 wep_tx_keyidx;
+-
+- /** WPA keys */
+- struct enc_key wpa_mcast_key;
+- struct enc_key wpa_unicast_key;
++ /* was struct lbs_adapter from here... */
-- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl4965_is_ready_rf(priv)) {
- IWL_DEBUG_MAC80211("leave - RF not ready\n");
- return -EIO;
- }
-@@ -7862,7 +8015,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
- for (i = 0; i < AC_NUM; i++) {
- txq = &priv->txq[i];
- q = &txq->q;
-- avail = iwl_queue_space(q);
-+ avail = iwl4965_queue_space(q);
+- struct wlan_802_11_security secinfo;
+-
+- /** WPA Information Elements*/
+- u8 wpa_ie[MAX_WPA_IE_LEN];
+- u8 wpa_ie_len;
+-
+- /* BSS to associate with for infrastructure of Ad-Hoc join */
+- struct bss_descriptor bss;
+-};
+-
+-/** Wlan adapter data structure*/
+-struct _wlan_adapter {
++ /** Wlan adapter data structure*/
+ /** STATUS variables */
+- u8 fwreleasenumber[4];
++ u32 fwrelease;
+ u32 fwcapinfo;
+ /* protected with big lock */
- stats->data[i].len = q->n_window - avail;
- stats->data[i].limit = q->n_window - q->high_mark;
-@@ -7876,7 +8029,7 @@ static int iwl_mac_get_tx_stats(struct ieee80211_hw *hw,
- return 0;
- }
+ struct mutex lock;
--static int iwl_mac_get_stats(struct ieee80211_hw *hw,
-+static int iwl4965_mac_get_stats(struct ieee80211_hw *hw,
- struct ieee80211_low_level_stats *stats)
- {
- IWL_DEBUG_MAC80211("enter\n");
-@@ -7885,7 +8038,7 @@ static int iwl_mac_get_stats(struct ieee80211_hw *hw,
- return 0;
- }
+- u8 tmptxbuf[WLAN_UPLD_SIZE];
++ /* TX packet ready to be sent... */
++ int tx_pending_len; /* -1 while building packet */
++
++ u8 tx_pending_buf[LBS_UPLD_SIZE];
+ /* protected by hard_start_xmit serialization */
--static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
-+static u64 iwl4965_mac_get_tsf(struct ieee80211_hw *hw)
- {
- IWL_DEBUG_MAC80211("enter\n");
- IWL_DEBUG_MAC80211("leave\n");
-@@ -7893,35 +8046,35 @@ static u64 iwl_mac_get_tsf(struct ieee80211_hw *hw)
- return 0;
- }
+ /** command-related variables */
+@@ -231,8 +188,7 @@ struct _wlan_adapter {
+ struct list_head cmdpendingq;
--static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
-+static void iwl4965_mac_reset_tsf(struct ieee80211_hw *hw)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- unsigned long flags;
+ wait_queue_head_t cmd_pending;
+- u8 nr_cmd_pending;
+- /* command related variables protected by adapter->driver_lock */
++ /* command related variables protected by priv->driver_lock */
- mutex_lock(&priv->mutex);
- IWL_DEBUG_MAC80211("enter\n");
+ /** Async and Sync Event variables */
+ u32 intcounter;
+@@ -244,17 +200,18 @@ struct _wlan_adapter {
- priv->lq_mngr.lq_ready = 0;
--#ifdef CONFIG_IWLWIFI_HT
-+#ifdef CONFIG_IWL4965_HT
- spin_lock_irqsave(&priv->lock, flags);
-- memset(&priv->current_assoc_ht, 0, sizeof(struct sta_ht_info));
-+ memset(&priv->current_ht_config, 0, sizeof(struct iwl_ht_info));
- spin_unlock_irqrestore(&priv->lock, flags);
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT_AGG
- /* if (priv->lq_mngr.agg_ctrl.granted_ba)
- iwl4965_turn_off_agg(priv, TID_ALL_SPECIFIED);*/
+ /** Timers */
+ struct timer_list command_timer;
+-
+- /* TX queue used in PS mode */
+- spinlock_t txqueue_lock;
+- struct sk_buff *tx_queue_ps[NR_TX_QUEUE];
+- unsigned int tx_queue_idx;
++ int nr_retries;
++ int cmd_timed_out;
-- memset(&(priv->lq_mngr.agg_ctrl), 0, sizeof(struct iwl_agg_control));
-+ memset(&(priv->lq_mngr.agg_ctrl), 0, sizeof(struct iwl4965_agg_control));
- priv->lq_mngr.agg_ctrl.tid_traffic_load_threshold = 10;
- priv->lq_mngr.agg_ctrl.ba_timeout = 5000;
- priv->lq_mngr.agg_ctrl.auto_agg = 1;
+ u8 hisregcpy;
- if (priv->lq_mngr.agg_ctrl.auto_agg)
- priv->lq_mngr.agg_ctrl.requested_ba = TID_ALL_ENABLED;
--#endif /*CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /*CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
+ /** current ssid/bssid related parameters*/
+ struct current_bss_params curbssparams;
--#ifdef CONFIG_IWLWIFI_QOS
-- iwl_reset_qos(priv);
-+#ifdef CONFIG_IWL4965_QOS
-+ iwl4965_reset_qos(priv);
- #endif
++ uint16_t mesh_tlv;
++ u8 mesh_ssid[IW_ESSID_MAX_SIZE + 1];
++ u8 mesh_ssid_len;
++
+ /* IW_MODE_* */
+ u8 mode;
- cancel_delayed_work(&priv->post_associate);
-@@ -7946,13 +8099,19 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
+@@ -263,6 +220,8 @@ struct _wlan_adapter {
+ struct list_head network_free_list;
+ struct bss_descriptor *networks;
- spin_unlock_irqrestore(&priv->lock, flags);
++ u16 beacon_period;
++ u8 beacon_enable;
+ u8 adhoccreate;
-+ if (!iwl4965_is_ready_rf(priv)) {
-+ IWL_DEBUG_MAC80211("leave - not ready\n");
-+ mutex_unlock(&priv->mutex);
-+ return;
-+ }
-+
- /* we are restarting association process
- * clear RXON_FILTER_ASSOC_MSK bit
- */
- if (priv->iw_mode != IEEE80211_IF_TYPE_AP) {
-- iwl_scan_cancel_timeout(priv, 100);
-+ iwl4965_scan_cancel_timeout(priv, 100);
- priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
- }
+ /** capability Info used in Association, start, join */
+@@ -286,11 +245,11 @@ struct _wlan_adapter {
- /* Per mac80211.h: This is only used in IBSS mode... */
-@@ -7963,32 +8122,25 @@ static void iwl_mac_reset_tsf(struct ieee80211_hw *hw)
- return;
- }
+ /** Tx-related variables (for single packet tx) */
+ struct sk_buff *currenttxskb;
+- u16 TxLockFlag;
-- if (!iwl_is_ready_rf(priv)) {
-- IWL_DEBUG_MAC80211("leave - not ready\n");
-- mutex_unlock(&priv->mutex);
-- return;
-- }
--
- priv->only_active_channel = 0;
+ /** NIC Operation characteristics */
+ u16 currentpacketfilter;
+ u32 connect_status;
++ u32 mesh_connect_status;
+ u16 regioncode;
+ u16 txpowerlevel;
-- iwl_set_rate(priv);
-+ iwl4965_set_rate(priv);
+@@ -300,15 +259,17 @@ struct _wlan_adapter {
+ u16 psmode; /* Wlan802_11PowermodeCAM=disable
+ Wlan802_11PowermodeMAX_PSP=enable */
+ u32 psstate;
++ char ps_supported;
+ u8 needtowakeup;
- mutex_unlock(&priv->mutex);
+- struct PS_CMD_ConfirmSleep libertas_ps_confirm_sleep;
++ struct PS_CMD_ConfirmSleep lbs_ps_confirm_sleep;
++ struct cmd_header lbs_ps_confirm_wake;
- IWL_DEBUG_MAC80211("leave\n");
--
- }
+ struct assoc_request * pending_assoc_req;
+ struct assoc_request * in_progress_assoc_req;
--static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
-+static int iwl4965_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- struct ieee80211_tx_control *control)
- {
-- struct iwl_priv *priv = hw->priv;
-+ struct iwl4965_priv *priv = hw->priv;
- unsigned long flags;
+ /** Encryption parameter */
+- struct wlan_802_11_security secinfo;
++ struct lbs_802_11_security secinfo;
- mutex_lock(&priv->mutex);
- IWL_DEBUG_MAC80211("enter\n");
+ /** WEP keys */
+ struct enc_key wep_keys[4];
+@@ -338,9 +299,6 @@ struct _wlan_adapter {
+ u8 cur_rate;
+ u8 auto_rate;
-- if (!iwl_is_ready_rf(priv)) {
-+ if (!iwl4965_is_ready_rf(priv)) {
- IWL_DEBUG_MAC80211("leave - RF not ready\n");
- mutex_unlock(&priv->mutex);
- return -EIO;
-@@ -8012,8 +8164,8 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- IWL_DEBUG_MAC80211("leave\n");
- spin_unlock_irqrestore(&priv->lock, flags);
+- /** sleep_params */
+- struct sleep_params sp;
+-
+ /** RF calibration data */
--#ifdef CONFIG_IWLWIFI_QOS
-- iwl_reset_qos(priv);
-+#ifdef CONFIG_IWL4965_QOS
-+ iwl4965_reset_qos(priv);
- #endif
+ #define MAX_REGION_CHANNEL_NUM 2
+@@ -350,7 +308,7 @@ struct _wlan_adapter {
+ struct region_channel universal_channel[MAX_REGION_CHANNEL_NUM];
- queue_work(priv->workqueue, &priv->post_associate.work);
-@@ -8023,133 +8175,62 @@ static int iwl_mac_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- return 0;
- }
+ /** 11D and Domain Regulatory Data */
+- struct wlan_802_11d_domain_reg domainreg;
++ struct lbs_802_11d_domain_reg domainreg;
+ struct parsed_region_chan_11d parsed_region_chan;
--#ifdef CONFIG_IWLWIFI_HT
--union ht_cap_info {
-- struct {
-- u16 advanced_coding_cap :1;
-- u16 supported_chan_width_set :1;
-- u16 mimo_power_save_mode :2;
-- u16 green_field :1;
-- u16 short_GI20 :1;
-- u16 short_GI40 :1;
-- u16 tx_stbc :1;
-- u16 rx_stbc :1;
-- u16 beam_forming :1;
-- u16 delayed_ba :1;
-- u16 maximal_amsdu_size :1;
-- u16 cck_mode_at_40MHz :1;
-- u16 psmp_support :1;
-- u16 stbc_ctrl_frame_support :1;
-- u16 sig_txop_protection_support :1;
-- };
-- u16 val;
--} __attribute__ ((packed));
--
--union ht_param_info{
-- struct {
-- u8 max_rx_ampdu_factor :2;
-- u8 mpdu_density :3;
-- u8 reserved :3;
-- };
-- u8 val;
--} __attribute__ ((packed));
--
--union ht_exra_param_info {
-- struct {
-- u8 ext_chan_offset :2;
-- u8 tx_chan_width :1;
-- u8 rifs_mode :1;
-- u8 controlled_access_only :1;
-- u8 service_interval_granularity :3;
-- };
-- u8 val;
--} __attribute__ ((packed));
--
--union ht_operation_mode{
-- struct {
-- u16 op_mode :2;
-- u16 non_GF :1;
-- u16 reserved :13;
-- };
-- u16 val;
--} __attribute__ ((packed));
--
-+#ifdef CONFIG_IWL4965_HT
+ /** FSM variable for 11d support */
+@@ -358,14 +316,57 @@ struct _wlan_adapter {
--static int sta_ht_info_init(struct ieee80211_ht_capability *ht_cap,
-- struct ieee80211_ht_additional_info *ht_extra,
-- struct sta_ht_info *ht_info_ap,
-- struct sta_ht_info *ht_info)
-+static void iwl4965_ht_info_fill(struct ieee80211_conf *conf,
-+ struct iwl4965_priv *priv)
- {
-- union ht_cap_info cap;
-- union ht_operation_mode op_mode;
-- union ht_param_info param_info;
-- union ht_exra_param_info extra_param_info;
-+ struct iwl_ht_info *iwl_conf = &priv->current_ht_config;
-+ struct ieee80211_ht_info *ht_conf = &conf->ht_conf;
-+ struct ieee80211_ht_bss_info *ht_bss_conf = &conf->ht_bss_conf;
+ /** MISCELLANEOUS */
+ u8 *prdeeprom;
+- struct wlan_offset_value offsetvalue;
++ struct lbs_offset_value offsetvalue;
- IWL_DEBUG_MAC80211("enter: \n");
+ struct cmd_ds_802_11_get_log logmsg;
-- if (!ht_info) {
-- IWL_DEBUG_MAC80211("leave: ht_info is NULL\n");
-- return -1;
-+ if (!(conf->flags & IEEE80211_CONF_SUPPORT_HT_MODE)) {
-+ iwl_conf->is_ht = 0;
-+ return;
- }
+ u32 monitormode;
++ int last_scanned_channel;
+ u8 fw_ready;
++};
++
++/** Association request
++ *
++ * Encapsulates all the options that describe a specific assocation request
++ * or configuration of the wireless card's radio, mode, and security settings.
++ */
++struct assoc_request {
++#define ASSOC_FLAG_SSID 1
++#define ASSOC_FLAG_CHANNEL 2
++#define ASSOC_FLAG_BAND 3
++#define ASSOC_FLAG_MODE 4
++#define ASSOC_FLAG_BSSID 5
++#define ASSOC_FLAG_WEP_KEYS 6
++#define ASSOC_FLAG_WEP_TX_KEYIDX 7
++#define ASSOC_FLAG_WPA_MCAST_KEY 8
++#define ASSOC_FLAG_WPA_UCAST_KEY 9
++#define ASSOC_FLAG_SECINFO 10
++#define ASSOC_FLAG_WPA_IE 11
++ unsigned long flags;
-- if (ht_cap) {
-- cap.val = (u16) le16_to_cpu(ht_cap->capabilities_info);
-- param_info.val = ht_cap->mac_ht_params_info;
-- ht_info->is_ht = 1;
-- if (cap.short_GI20)
-- ht_info->sgf |= 0x1;
-- if (cap.short_GI40)
-- ht_info->sgf |= 0x2;
-- ht_info->is_green_field = cap.green_field;
-- ht_info->max_amsdu_size = cap.maximal_amsdu_size;
-- ht_info->supported_chan_width = cap.supported_chan_width_set;
-- ht_info->tx_mimo_ps_mode = cap.mimo_power_save_mode;
-- memcpy(ht_info->supp_rates, ht_cap->supported_mcs_set, 16);
--
-- ht_info->ampdu_factor = param_info.max_rx_ampdu_factor;
-- ht_info->mpdu_density = param_info.mpdu_density;
--
-- IWL_DEBUG_MAC80211("SISO mask 0x%X MIMO mask 0x%X \n",
-- ht_cap->supported_mcs_set[0],
-- ht_cap->supported_mcs_set[1]);
--
-- if (ht_info_ap) {
-- ht_info->control_channel = ht_info_ap->control_channel;
-- ht_info->extension_chan_offset =
-- ht_info_ap->extension_chan_offset;
-- ht_info->tx_chan_width = ht_info_ap->tx_chan_width;
-- ht_info->operating_mode = ht_info_ap->operating_mode;
-- }
--
-- if (ht_extra) {
-- extra_param_info.val = ht_extra->ht_param;
-- ht_info->control_channel = ht_extra->control_chan;
-- ht_info->extension_chan_offset =
-- extra_param_info.ext_chan_offset;
-- ht_info->tx_chan_width = extra_param_info.tx_chan_width;
-- op_mode.val = (u16)
-- le16_to_cpu(ht_extra->operation_mode);
-- ht_info->operating_mode = op_mode.op_mode;
-- IWL_DEBUG_MAC80211("control channel %d\n",
-- ht_extra->control_chan);
-- }
-- } else
-- ht_info->is_ht = 0;
--
-+ iwl_conf->is_ht = 1;
-+ priv->ps_mode = (u8)((ht_conf->cap & IEEE80211_HT_CAP_MIMO_PS) >> 2);
+- u8 last_scanned_channel;
++ u8 ssid[IW_ESSID_MAX_SIZE + 1];
++ u8 ssid_len;
++ u8 channel;
++ u8 band;
++ u8 mode;
++ u8 bssid[ETH_ALEN];
+
-+ if (ht_conf->cap & IEEE80211_HT_CAP_SGI_20)
-+ iwl_conf->sgf |= 0x1;
-+ if (ht_conf->cap & IEEE80211_HT_CAP_SGI_40)
-+ iwl_conf->sgf |= 0x2;
++ /** WEP keys */
++ struct enc_key wep_keys[4];
++ u16 wep_tx_keyidx;
+
-+ iwl_conf->is_green_field = !!(ht_conf->cap & IEEE80211_HT_CAP_GRN_FLD);
-+ iwl_conf->max_amsdu_size =
-+ !!(ht_conf->cap & IEEE80211_HT_CAP_MAX_AMSDU);
-+ iwl_conf->supported_chan_width =
-+ !!(ht_conf->cap & IEEE80211_HT_CAP_SUP_WIDTH);
-+ iwl_conf->tx_mimo_ps_mode =
-+ (u8)((ht_conf->cap & IEEE80211_HT_CAP_MIMO_PS) >> 2);
-+ memcpy(iwl_conf->supp_mcs_set, ht_conf->supp_mcs_set, 16);
++ /** WPA keys */
++ struct enc_key wpa_mcast_key;
++ struct enc_key wpa_unicast_key;
+
-+ iwl_conf->control_channel = ht_bss_conf->primary_channel;
-+ iwl_conf->extension_chan_offset =
-+ ht_bss_conf->bss_cap & IEEE80211_HT_IE_CHA_SEC_OFFSET;
-+ iwl_conf->tx_chan_width =
-+ !!(ht_bss_conf->bss_cap & IEEE80211_HT_IE_CHA_WIDTH);
-+ iwl_conf->ht_protection =
-+ ht_bss_conf->bss_op_mode & IEEE80211_HT_IE_HT_PROTECTION;
-+ iwl_conf->non_GF_STA_present =
-+ !!(ht_bss_conf->bss_op_mode & IEEE80211_HT_IE_NON_GF_STA_PRSNT);
++ struct lbs_802_11_security secinfo;
+
-+ IWL_DEBUG_MAC80211("control channel %d\n",
-+ iwl_conf->control_channel);
- IWL_DEBUG_MAC80211("leave\n");
-- return 0;
- }
-
--static int iwl_mac_conf_ht(struct ieee80211_hw *hw,
-- struct ieee80211_ht_capability *ht_cap,
-- struct ieee80211_ht_additional_info *ht_extra)
-+static int iwl4965_mac_conf_ht(struct ieee80211_hw *hw,
-+ struct ieee80211_conf *conf)
- {
-- struct iwl_priv *priv = hw->priv;
-- int rs;
-+ struct iwl4965_priv *priv = hw->priv;
++ /** WPA Information Elements*/
++ u8 wpa_ie[MAX_WPA_IE_LEN];
++ u8 wpa_ie_len;
++
++ /* BSS to associate with for infrastructure of Ad-Hoc join */
++ struct bss_descriptor bss;
+ };
- IWL_DEBUG_MAC80211("enter: \n");
+-#endif /* _WLAN_DEV_H_ */
++#endif
+diff --git a/drivers/net/wireless/libertas/ethtool.c b/drivers/net/wireless/libertas/ethtool.c
+index 3dae152..21e6f98 100644
+--- a/drivers/net/wireless/libertas/ethtool.c
++++ b/drivers/net/wireless/libertas/ethtool.c
+@@ -8,6 +8,8 @@
+ #include "dev.h"
+ #include "join.h"
+ #include "wext.h"
++#include "cmd.h"
++
+ static const char * mesh_stat_strings[]= {
+ "drop_duplicate_bcast",
+ "drop_ttl_zero",
+@@ -19,35 +21,34 @@ static const char * mesh_stat_strings[]= {
+ "tx_failed_cnt"
+ };
-- rs = sta_ht_info_init(ht_cap, ht_extra, NULL, &priv->current_assoc_ht);
-+ iwl4965_ht_info_fill(conf, priv);
- iwl4965_set_rxon_chain(priv);
+-static void libertas_ethtool_get_drvinfo(struct net_device *dev,
++static void lbs_ethtool_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+ {
+- wlan_private *priv = (wlan_private *) dev->priv;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+ char fwver[32];
- if (priv && priv->assoc_id &&
-@@ -8164,58 +8245,33 @@ static int iwl_mac_conf_ht(struct ieee80211_hw *hw,
- spin_unlock_irqrestore(&priv->lock, flags);
- }
+- libertas_get_fwversion(priv->adapter, fwver, sizeof(fwver) - 1);
++ lbs_get_fwversion(priv, fwver, sizeof(fwver) - 1);
-- IWL_DEBUG_MAC80211("leave: control channel %d\n",
-- ht_extra->control_chan);
-- return rs;
--
-+ IWL_DEBUG_MAC80211("leave:\n");
-+ return 0;
+ strcpy(info->driver, "libertas");
+- strcpy(info->version, libertas_driver_version);
++ strcpy(info->version, lbs_driver_version);
+ strcpy(info->fw_version, fwver);
}
--static void iwl_set_ht_capab(struct ieee80211_hw *hw,
-- struct ieee80211_ht_capability *ht_cap,
-- u8 use_wide_chan)
-+static void iwl4965_set_ht_capab(struct ieee80211_hw *hw,
-+ struct ieee80211_ht_cap *ht_cap,
-+ u8 use_current_config)
- {
-- union ht_cap_info cap;
-- union ht_param_info param_info;
--
-- memset(&cap, 0, sizeof(union ht_cap_info));
-- memset(&param_info, 0, sizeof(union ht_param_info));
--
-- cap.maximal_amsdu_size = HT_IE_MAX_AMSDU_SIZE_4K;
-- cap.green_field = 1;
-- cap.short_GI20 = 1;
-- cap.short_GI40 = 1;
-- cap.supported_chan_width_set = use_wide_chan;
-- cap.mimo_power_save_mode = 0x3;
--
-- param_info.max_rx_ampdu_factor = CFG_HT_RX_AMPDU_FACTOR_DEF;
-- param_info.mpdu_density = CFG_HT_MPDU_DENSITY_DEF;
-- ht_cap->capabilities_info = (__le16) cpu_to_le16(cap.val);
-- ht_cap->mac_ht_params_info = (u8) param_info.val;
-+ struct ieee80211_conf *conf = &hw->conf;
-+ struct ieee80211_hw_mode *mode = conf->mode;
+ /* All 8388 parts have 16KiB EEPROM size at the time of writing.
+ * In case that changes this needs fixing.
+ */
+-#define LIBERTAS_EEPROM_LEN 16384
++#define LBS_EEPROM_LEN 16384
-- ht_cap->supported_mcs_set[0] = 0xff;
-- ht_cap->supported_mcs_set[1] = 0xff;
-- ht_cap->supported_mcs_set[4] =
-- (cap.supported_chan_width_set) ? 0x1: 0x0;
-+ if (use_current_config) {
-+ ht_cap->cap_info = cpu_to_le16(conf->ht_conf.cap);
-+ memcpy(ht_cap->supp_mcs_set,
-+ conf->ht_conf.supp_mcs_set, 16);
-+ } else {
-+ ht_cap->cap_info = cpu_to_le16(mode->ht_info.cap);
-+ memcpy(ht_cap->supp_mcs_set,
-+ mode->ht_info.supp_mcs_set, 16);
-+ }
-+ ht_cap->ampdu_params_info =
-+ (mode->ht_info.ampdu_factor & IEEE80211_HT_CAP_AMPDU_FACTOR) |
-+ ((mode->ht_info.ampdu_density << 2) &
-+ IEEE80211_HT_CAP_AMPDU_DENSITY);
+-static int libertas_ethtool_get_eeprom_len(struct net_device *dev)
++static int lbs_ethtool_get_eeprom_len(struct net_device *dev)
+ {
+- return LIBERTAS_EEPROM_LEN;
++ return LBS_EEPROM_LEN;
}
--static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
-- struct ieee80211_ht_capability *ht_cap)
--{
-- u8 use_wide_channel = 1;
-- struct iwl_priv *priv = hw->priv;
--
-- IWL_DEBUG_MAC80211("enter: \n");
-- if (priv->channel_width != IWL_CHANNEL_WIDTH_40MHZ)
-- use_wide_channel = 0;
--
-- /* no fat tx allowed on 2.4GHZ */
-- if (priv->phymode != MODE_IEEE80211A)
-- use_wide_channel = 0;
--
-- iwl_set_ht_capab(hw, ht_cap, use_wide_channel);
-- IWL_DEBUG_MAC80211("leave: \n");
--}
--#endif /*CONFIG_IWLWIFI_HT*/
-+#endif /*CONFIG_IWL4965_HT*/
+-static int libertas_ethtool_get_eeprom(struct net_device *dev,
++static int lbs_ethtool_get_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom, u8 * bytes)
+ {
+- wlan_private *priv = (wlan_private *) dev->priv;
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_ioctl_regrdwr regctrl;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
++ struct lbs_ioctl_regrdwr regctrl;
+ char *ptr;
+ int ret;
- /*****************************************************************************
- *
-@@ -8223,7 +8279,7 @@ static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
- *
- *****************************************************************************/
+@@ -55,47 +56,47 @@ static int libertas_ethtool_get_eeprom(struct net_device *dev,
+ regctrl.offset = eeprom->offset;
+ regctrl.NOB = eeprom->len;
--#ifdef CONFIG_IWLWIFI_DEBUG
-+#ifdef CONFIG_IWL4965_DEBUG
+- if (eeprom->offset + eeprom->len > LIBERTAS_EEPROM_LEN)
++ if (eeprom->offset + eeprom->len > LBS_EEPROM_LEN)
+ return -EINVAL;
- /*
- * The following adds a new attribute to the sysfs representation
-@@ -8235,7 +8291,7 @@ static void iwl_mac_get_ht_capab(struct ieee80211_hw *hw,
+ // mutex_lock(&priv->mutex);
- static ssize_t show_debug_level(struct device_driver *d, char *buf)
- {
-- return sprintf(buf, "0x%08X\n", iwl_debug_level);
-+ return sprintf(buf, "0x%08X\n", iwl4965_debug_level);
- }
- static ssize_t store_debug_level(struct device_driver *d,
- const char *buf, size_t count)
-@@ -8248,7 +8304,7 @@ static ssize_t store_debug_level(struct device_driver *d,
- printk(KERN_INFO DRV_NAME
- ": %s is not in hex or decimal form.\n", buf);
- else
-- iwl_debug_level = val;
-+ iwl4965_debug_level = val;
+- adapter->prdeeprom = kmalloc(eeprom->len+sizeof(regctrl), GFP_KERNEL);
+- if (!adapter->prdeeprom)
++ priv->prdeeprom = kmalloc(eeprom->len+sizeof(regctrl), GFP_KERNEL);
++ if (!priv->prdeeprom)
+ return -ENOMEM;
+- memcpy(adapter->prdeeprom, &regctrl, sizeof(regctrl));
++ memcpy(priv->prdeeprom, &regctrl, sizeof(regctrl));
- return strnlen(buf, count);
- }
-@@ -8256,7 +8312,7 @@ static ssize_t store_debug_level(struct device_driver *d,
- static DRIVER_ATTR(debug_level, S_IWUSR | S_IRUGO,
- show_debug_level, store_debug_level);
+ /* +14 is for action, offset, and NOB in
+ * response */
+ lbs_deb_ethtool("action:%d offset: %x NOB: %02x\n",
+ regctrl.action, regctrl.offset, regctrl.NOB);
--#endif /* CONFIG_IWLWIFI_DEBUG */
-+#endif /* CONFIG_IWL4965_DEBUG */
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_EEPROM_ACCESS,
+ regctrl.action,
+ CMD_OPTION_WAITFORRSP, 0,
+ &regctrl);
- static ssize_t show_rf_kill(struct device *d,
- struct device_attribute *attr, char *buf)
-@@ -8267,7 +8323,7 @@ static ssize_t show_rf_kill(struct device *d,
- * 2 - HW based RF kill active
- * 3 - Both HW and SW based RF kill active
- */
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- int val = (test_bit(STATUS_RF_KILL_SW, &priv->status) ? 0x1 : 0x0) |
- (test_bit(STATUS_RF_KILL_HW, &priv->status) ? 0x2 : 0x0);
+ if (ret) {
+- if (adapter->prdeeprom)
+- kfree(adapter->prdeeprom);
++ if (priv->prdeeprom)
++ kfree(priv->prdeeprom);
+ goto done;
+ }
-@@ -8278,10 +8334,10 @@ static ssize_t store_rf_kill(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
+ mdelay(10);
- mutex_lock(&priv->mutex);
-- iwl_radio_kill_sw(priv, buf[0] == '1');
-+ iwl4965_radio_kill_sw(priv, buf[0] == '1');
- mutex_unlock(&priv->mutex);
+- ptr = (char *)adapter->prdeeprom;
++ ptr = (char *)priv->prdeeprom;
- return count;
-@@ -8292,12 +8348,12 @@ static DEVICE_ATTR(rf_kill, S_IWUSR | S_IRUGO, show_rf_kill, store_rf_kill);
- static ssize_t show_temperature(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
+ /* skip the command header, but include the "value" u32 variable */
+- ptr = ptr + sizeof(struct wlan_ioctl_regrdwr) - 4;
++ ptr = ptr + sizeof(struct lbs_ioctl_regrdwr) - 4;
-- if (!iwl_is_alive(priv))
-+ if (!iwl4965_is_alive(priv))
- return -EAGAIN;
+ /*
+ * Return the result back to the user
+ */
+ memcpy(bytes, ptr, eeprom->len);
-- return sprintf(buf, "%d\n", iwl_hw_get_temperature(priv));
-+ return sprintf(buf, "%d\n", iwl4965_hw_get_temperature(priv));
- }
+- if (adapter->prdeeprom)
+- kfree(adapter->prdeeprom);
++ if (priv->prdeeprom)
++ kfree(priv->prdeeprom);
+ // mutex_unlock(&priv->mutex);
- static DEVICE_ATTR(temperature, S_IRUGO, show_temperature, NULL);
-@@ -8306,15 +8362,15 @@ static ssize_t show_rs_window(struct device *d,
- struct device_attribute *attr,
- char *buf)
- {
-- struct iwl_priv *priv = d->driver_data;
-- return iwl_fill_rs_info(priv->hw, buf, IWL_AP_ID);
-+ struct iwl4965_priv *priv = d->driver_data;
-+ return iwl4965_fill_rs_info(priv->hw, buf, IWL_AP_ID);
+ ret = 0;
+@@ -105,17 +106,17 @@ done:
+ return ret;
}
- static DEVICE_ATTR(rs_window, S_IRUGO, show_rs_window, NULL);
- static ssize_t show_tx_power(struct device *d,
- struct device_attribute *attr, char *buf)
+-static void libertas_ethtool_get_stats(struct net_device * dev,
++static void lbs_ethtool_get_stats(struct net_device * dev,
+ struct ethtool_stats * stats, u64 * data)
{
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- return sprintf(buf, "%d\n", priv->user_txpower_limit);
- }
+- wlan_private *priv = dev->priv;
++ struct lbs_private *priv = dev->priv;
+ struct cmd_ds_mesh_access mesh_access;
+ int ret;
-@@ -8322,7 +8378,7 @@ static ssize_t store_tx_power(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- char *p = (char *)buf;
- u32 val;
+ lbs_deb_enter(LBS_DEB_ETHTOOL);
-@@ -8331,7 +8387,7 @@ static ssize_t store_tx_power(struct device *d,
- printk(KERN_INFO DRV_NAME
- ": %s is not in decimal form.\n", buf);
- else
-- iwl_hw_reg_set_txpower(priv, val);
-+ iwl4965_hw_reg_set_txpower(priv, val);
+ /* Get Mesh Statistics */
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_MESH_ACCESS, CMD_ACT_MESH_GET_STATS,
+ CMD_OPTION_WAITFORRSP, 0, &mesh_access);
- return count;
+@@ -143,7 +144,7 @@ static void libertas_ethtool_get_stats(struct net_device * dev,
+ lbs_deb_enter(LBS_DEB_ETHTOOL);
}
-@@ -8341,7 +8397,7 @@ static DEVICE_ATTR(tx_power, S_IWUSR | S_IRUGO, show_tx_power, store_tx_power);
- static ssize_t show_flags(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- return sprintf(buf, "0x%04X\n", priv->active_rxon.flags);
- }
-@@ -8350,19 +8406,19 @@ static ssize_t store_flags(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
+-static int libertas_ethtool_get_sset_count(struct net_device * dev, int sset)
++static int lbs_ethtool_get_sset_count(struct net_device * dev, int sset)
{
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- u32 flags = simple_strtoul(buf, NULL, 0);
-
- mutex_lock(&priv->mutex);
- if (le32_to_cpu(priv->staging_rxon.flags) != flags) {
- /* Cancel any currently running scans... */
-- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl4965_scan_cancel_timeout(priv, 100))
- IWL_WARNING("Could not cancel scan.\n");
- else {
- IWL_DEBUG_INFO("Committing rxon.flags = 0x%04X\n",
- flags);
- priv->staging_rxon.flags = cpu_to_le32(flags);
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
- }
+ switch (sset) {
+ case ETH_SS_STATS:
+@@ -153,7 +154,7 @@ static int libertas_ethtool_get_sset_count(struct net_device * dev, int sset)
}
- mutex_unlock(&priv->mutex);
-@@ -8375,7 +8431,7 @@ static DEVICE_ATTR(flags, S_IWUSR | S_IRUGO, show_flags, store_flags);
- static ssize_t show_filter_flags(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
-
- return sprintf(buf, "0x%04X\n",
- le32_to_cpu(priv->active_rxon.filter_flags));
-@@ -8385,20 +8441,20 @@ static ssize_t store_filter_flags(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- u32 filter_flags = simple_strtoul(buf, NULL, 0);
+ }
- mutex_lock(&priv->mutex);
- if (le32_to_cpu(priv->staging_rxon.filter_flags) != filter_flags) {
- /* Cancel any currently running scans... */
-- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl4965_scan_cancel_timeout(priv, 100))
- IWL_WARNING("Could not cancel scan.\n");
- else {
- IWL_DEBUG_INFO("Committing rxon.filter_flags = "
- "0x%04X\n", filter_flags);
- priv->staging_rxon.filter_flags =
- cpu_to_le32(filter_flags);
-- iwl_commit_rxon(priv);
-+ iwl4965_commit_rxon(priv);
- }
- }
- mutex_unlock(&priv->mutex);
-@@ -8412,20 +8468,20 @@ static DEVICE_ATTR(filter_flags, S_IWUSR | S_IRUGO, show_filter_flags,
- static ssize_t show_tune(struct device *d,
- struct device_attribute *attr, char *buf)
+-static void libertas_ethtool_get_strings (struct net_device * dev,
++static void lbs_ethtool_get_strings(struct net_device *dev,
+ u32 stringset,
+ u8 * s)
{
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
-
- return sprintf(buf, "0x%04X\n",
- (priv->phymode << 8) |
- le16_to_cpu(priv->active_rxon.channel));
+@@ -173,12 +174,57 @@ static void libertas_ethtool_get_strings (struct net_device * dev,
+ lbs_deb_enter(LBS_DEB_ETHTOOL);
}
--static void iwl_set_flags_for_phymode(struct iwl_priv *priv, u8 phymode);
-+static void iwl4965_set_flags_for_phymode(struct iwl4965_priv *priv, u8 phymode);
+-struct ethtool_ops libertas_ethtool_ops = {
+- .get_drvinfo = libertas_ethtool_get_drvinfo,
+- .get_eeprom = libertas_ethtool_get_eeprom,
+- .get_eeprom_len = libertas_ethtool_get_eeprom_len,
+- .get_sset_count = libertas_ethtool_get_sset_count,
+- .get_ethtool_stats = libertas_ethtool_get_stats,
+- .get_strings = libertas_ethtool_get_strings,
++static void lbs_ethtool_get_wol(struct net_device *dev,
++ struct ethtool_wolinfo *wol)
++{
++ struct lbs_private *priv = dev->priv;
++
++ if (priv->wol_criteria == 0xffffffff) {
++ /* Interface driver didn't configure wake */
++ wol->supported = wol->wolopts = 0;
++ return;
++ }
++
++ wol->supported = WAKE_UCAST|WAKE_MCAST|WAKE_BCAST|WAKE_PHY;
++
++ if (priv->wol_criteria & EHS_WAKE_ON_UNICAST_DATA)
++ wol->wolopts |= WAKE_UCAST;
++ if (priv->wol_criteria & EHS_WAKE_ON_MULTICAST_DATA)
++ wol->wolopts |= WAKE_MCAST;
++ if (priv->wol_criteria & EHS_WAKE_ON_BROADCAST_DATA)
++ wol->wolopts |= WAKE_BCAST;
++ if (priv->wol_criteria & EHS_WAKE_ON_MAC_EVENT)
++ wol->wolopts |= WAKE_PHY;
++}
++
++static int lbs_ethtool_set_wol(struct net_device *dev,
++ struct ethtool_wolinfo *wol)
++{
++ struct lbs_private *priv = dev->priv;
++ uint32_t criteria = 0;
++
++ if (priv->wol_criteria == 0xffffffff && wol->wolopts)
++ return -EOPNOTSUPP;
++
++ if (wol->wolopts & ~(WAKE_UCAST|WAKE_MCAST|WAKE_BCAST|WAKE_PHY))
++ return -EOPNOTSUPP;
++
++ if (wol->wolopts & WAKE_UCAST) criteria |= EHS_WAKE_ON_UNICAST_DATA;
++ if (wol->wolopts & WAKE_MCAST) criteria |= EHS_WAKE_ON_MULTICAST_DATA;
++ if (wol->wolopts & WAKE_BCAST) criteria |= EHS_WAKE_ON_BROADCAST_DATA;
++ if (wol->wolopts & WAKE_PHY) criteria |= EHS_WAKE_ON_MAC_EVENT;
++
++ return lbs_host_sleep_cfg(priv, criteria);
++}
++
++struct ethtool_ops lbs_ethtool_ops = {
++ .get_drvinfo = lbs_ethtool_get_drvinfo,
++ .get_eeprom = lbs_ethtool_get_eeprom,
++ .get_eeprom_len = lbs_ethtool_get_eeprom_len,
++ .get_sset_count = lbs_ethtool_get_sset_count,
++ .get_ethtool_stats = lbs_ethtool_get_stats,
++ .get_strings = lbs_ethtool_get_strings,
++ .get_wol = lbs_ethtool_get_wol,
++ .set_wol = lbs_ethtool_set_wol,
+ };
- static ssize_t store_tune(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
- char *p = (char *)buf;
- u16 tune = simple_strtoul(p, &p, 0);
- u8 phymode = (tune >> 8) & 0xff;
-@@ -8436,9 +8492,9 @@ static ssize_t store_tune(struct device *d,
- mutex_lock(&priv->mutex);
- if ((le16_to_cpu(priv->staging_rxon.channel) != channel) ||
- (priv->phymode != phymode)) {
-- const struct iwl_channel_info *ch_info;
-+ const struct iwl4965_channel_info *ch_info;
+diff --git a/drivers/net/wireless/libertas/host.h b/drivers/net/wireless/libertas/host.h
+index b37ddbc..1aa0407 100644
+--- a/drivers/net/wireless/libertas/host.h
++++ b/drivers/net/wireless/libertas/host.h
+@@ -2,25 +2,25 @@
+ * This file contains definitions of WLAN commands.
+ */
-- ch_info = iwl_get_channel_info(priv, phymode, channel);
-+ ch_info = iwl4965_get_channel_info(priv, phymode, channel);
- if (!ch_info) {
- IWL_WARNING("Requested invalid phymode/channel "
- "combination: %d %d\n", phymode, channel);
-@@ -8447,18 +8503,18 @@ static ssize_t store_tune(struct device *d,
- }
+-#ifndef _HOST_H_
+-#define _HOST_H_
++#ifndef _LBS_HOST_H_
++#define _LBS_HOST_H_
- /* Cancel any currently running scans... */
-- if (iwl_scan_cancel_timeout(priv, 100))
-+ if (iwl4965_scan_cancel_timeout(priv, 100))
- IWL_WARNING("Could not cancel scan.\n");
- else {
- IWL_DEBUG_INFO("Committing phymode and "
- "rxon.channel = %d %d\n",
- phymode, channel);
+ /** PUBLIC DEFINITIONS */
+-#define DEFAULT_AD_HOC_CHANNEL 6
+-#define DEFAULT_AD_HOC_CHANNEL_A 36
++#define DEFAULT_AD_HOC_CHANNEL 6
++#define DEFAULT_AD_HOC_CHANNEL_A 36
-- iwl_set_rxon_channel(priv, phymode, channel);
-- iwl_set_flags_for_phymode(priv, phymode);
-+ iwl4965_set_rxon_channel(priv, phymode, channel);
-+ iwl4965_set_flags_for_phymode(priv, phymode);
+ /** IEEE 802.11 oids */
+-#define OID_802_11_SSID 0x00008002
+-#define OID_802_11_INFRASTRUCTURE_MODE 0x00008008
+-#define OID_802_11_FRAGMENTATION_THRESHOLD 0x00008009
+-#define OID_802_11_RTS_THRESHOLD 0x0000800A
+-#define OID_802_11_TX_ANTENNA_SELECTED 0x0000800D
+-#define OID_802_11_SUPPORTED_RATES 0x0000800E
+-#define OID_802_11_STATISTICS 0x00008012
+-#define OID_802_11_TX_RETRYCOUNT 0x0000801D
+-#define OID_802_11D_ENABLE 0x00008020
+-
+-#define CMD_OPTION_WAITFORRSP 0x0002
++#define OID_802_11_SSID 0x00008002
++#define OID_802_11_INFRASTRUCTURE_MODE 0x00008008
++#define OID_802_11_FRAGMENTATION_THRESHOLD 0x00008009
++#define OID_802_11_RTS_THRESHOLD 0x0000800A
++#define OID_802_11_TX_ANTENNA_SELECTED 0x0000800D
++#define OID_802_11_SUPPORTED_RATES 0x0000800E
++#define OID_802_11_STATISTICS 0x00008012
++#define OID_802_11_TX_RETRYCOUNT 0x0000801D
++#define OID_802_11D_ENABLE 0x00008020
++
++#define CMD_OPTION_WAITFORRSP 0x0002
-- iwl_set_rate(priv);
-- iwl_commit_rxon(priv);
-+ iwl4965_set_rate(priv);
-+ iwl4965_commit_rxon(priv);
- }
- }
- mutex_unlock(&priv->mutex);
-@@ -8468,13 +8524,13 @@ static ssize_t store_tune(struct device *d,
+ /** Host command IDs */
- static DEVICE_ATTR(tune, S_IWUSR | S_IRUGO, show_tune, store_tune);
+@@ -30,192 +30,189 @@
+ #define CMD_RET(cmd) (0x8000 | cmd)
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
+ /* Return command convention exceptions: */
+-#define CMD_RET_802_11_ASSOCIATE 0x8012
++#define CMD_RET_802_11_ASSOCIATE 0x8012
- static ssize_t show_measurement(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-- struct iwl_spectrum_notification measure_report;
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_spectrum_notification measure_report;
- u32 size = sizeof(measure_report), len = 0, ofs = 0;
- u8 *data = (u8 *) & measure_report;
- unsigned long flags;
-@@ -8506,7 +8562,7 @@ static ssize_t store_measurement(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
- struct ieee80211_measurement_params params = {
- .channel = le16_to_cpu(priv->active_rxon.channel),
- .start_time = cpu_to_le64(priv->last_tsf),
-@@ -8532,20 +8588,20 @@ static ssize_t store_measurement(struct device *d,
+ /* Command codes */
+-#define CMD_CODE_DNLD 0x0002
+-#define CMD_GET_HW_SPEC 0x0003
+-#define CMD_EEPROM_UPDATE 0x0004
+-#define CMD_802_11_RESET 0x0005
+-#define CMD_802_11_SCAN 0x0006
+-#define CMD_802_11_GET_LOG 0x000b
+-#define CMD_MAC_MULTICAST_ADR 0x0010
+-#define CMD_802_11_AUTHENTICATE 0x0011
+-#define CMD_802_11_EEPROM_ACCESS 0x0059
+-#define CMD_802_11_ASSOCIATE 0x0050
+-#define CMD_802_11_SET_WEP 0x0013
+-#define CMD_802_11_GET_STAT 0x0014
+-#define CMD_802_3_GET_STAT 0x0015
+-#define CMD_802_11_SNMP_MIB 0x0016
+-#define CMD_MAC_REG_MAP 0x0017
+-#define CMD_BBP_REG_MAP 0x0018
+-#define CMD_MAC_REG_ACCESS 0x0019
+-#define CMD_BBP_REG_ACCESS 0x001a
+-#define CMD_RF_REG_ACCESS 0x001b
+-#define CMD_802_11_RADIO_CONTROL 0x001c
+-#define CMD_802_11_RF_CHANNEL 0x001d
+-#define CMD_802_11_RF_TX_POWER 0x001e
+-#define CMD_802_11_RSSI 0x001f
+-#define CMD_802_11_RF_ANTENNA 0x0020
+-
+-#define CMD_802_11_PS_MODE 0x0021
+-
+-#define CMD_802_11_DATA_RATE 0x0022
+-#define CMD_RF_REG_MAP 0x0023
+-#define CMD_802_11_DEAUTHENTICATE 0x0024
+-#define CMD_802_11_REASSOCIATE 0x0025
+-#define CMD_802_11_DISASSOCIATE 0x0026
+-#define CMD_MAC_CONTROL 0x0028
+-#define CMD_802_11_AD_HOC_START 0x002b
+-#define CMD_802_11_AD_HOC_JOIN 0x002c
+-
+-#define CMD_802_11_QUERY_TKIP_REPLY_CNTRS 0x002e
+-#define CMD_802_11_ENABLE_RSN 0x002f
+-#define CMD_802_11_PAIRWISE_TSC 0x0036
+-#define CMD_802_11_GROUP_TSC 0x0037
+-#define CMD_802_11_KEY_MATERIAL 0x005e
+-
+-#define CMD_802_11_SET_AFC 0x003c
+-#define CMD_802_11_GET_AFC 0x003d
+-
+-#define CMD_802_11_AD_HOC_STOP 0x0040
+-
+-#define CMD_802_11_BEACON_STOP 0x0049
+-
+-#define CMD_802_11_MAC_ADDRESS 0x004D
+-#define CMD_802_11_EEPROM_ACCESS 0x0059
+-
+-#define CMD_802_11_BAND_CONFIG 0x0058
+-
+-#define CMD_802_11D_DOMAIN_INFO 0x005b
+-
+-#define CMD_802_11_SLEEP_PARAMS 0x0066
+-
+-#define CMD_802_11_INACTIVITY_TIMEOUT 0x0067
+-
+-#define CMD_802_11_TPC_CFG 0x0072
+-#define CMD_802_11_PWR_CFG 0x0073
+-
+-#define CMD_802_11_LED_GPIO_CTRL 0x004e
+-
+-#define CMD_802_11_SUBSCRIBE_EVENT 0x0075
+-
+-#define CMD_802_11_RATE_ADAPT_RATESET 0x0076
+-
+-#define CMD_802_11_TX_RATE_QUERY 0x007f
+-
+-#define CMD_GET_TSF 0x0080
+-
+-#define CMD_BT_ACCESS 0x0087
+-
+-#define CMD_FWT_ACCESS 0x0095
+-
+-#define CMD_802_11_MONITOR_MODE 0x0098
+-
+-#define CMD_MESH_ACCESS 0x009b
+-
+-#define CMD_SET_BOOT2_VER 0x00a5
++#define CMD_CODE_DNLD 0x0002
++#define CMD_GET_HW_SPEC 0x0003
++#define CMD_EEPROM_UPDATE 0x0004
++#define CMD_802_11_RESET 0x0005
++#define CMD_802_11_SCAN 0x0006
++#define CMD_802_11_GET_LOG 0x000b
++#define CMD_MAC_MULTICAST_ADR 0x0010
++#define CMD_802_11_AUTHENTICATE 0x0011
++#define CMD_802_11_EEPROM_ACCESS 0x0059
++#define CMD_802_11_ASSOCIATE 0x0050
++#define CMD_802_11_SET_WEP 0x0013
++#define CMD_802_11_GET_STAT 0x0014
++#define CMD_802_3_GET_STAT 0x0015
++#define CMD_802_11_SNMP_MIB 0x0016
++#define CMD_MAC_REG_MAP 0x0017
++#define CMD_BBP_REG_MAP 0x0018
++#define CMD_MAC_REG_ACCESS 0x0019
++#define CMD_BBP_REG_ACCESS 0x001a
++#define CMD_RF_REG_ACCESS 0x001b
++#define CMD_802_11_RADIO_CONTROL 0x001c
++#define CMD_802_11_RF_CHANNEL 0x001d
++#define CMD_802_11_RF_TX_POWER 0x001e
++#define CMD_802_11_RSSI 0x001f
++#define CMD_802_11_RF_ANTENNA 0x0020
++#define CMD_802_11_PS_MODE 0x0021
++#define CMD_802_11_DATA_RATE 0x0022
++#define CMD_RF_REG_MAP 0x0023
++#define CMD_802_11_DEAUTHENTICATE 0x0024
++#define CMD_802_11_REASSOCIATE 0x0025
++#define CMD_802_11_DISASSOCIATE 0x0026
++#define CMD_MAC_CONTROL 0x0028
++#define CMD_802_11_AD_HOC_START 0x002b
++#define CMD_802_11_AD_HOC_JOIN 0x002c
++#define CMD_802_11_QUERY_TKIP_REPLY_CNTRS 0x002e
++#define CMD_802_11_ENABLE_RSN 0x002f
++#define CMD_802_11_PAIRWISE_TSC 0x0036
++#define CMD_802_11_GROUP_TSC 0x0037
++#define CMD_802_11_SET_AFC 0x003c
++#define CMD_802_11_GET_AFC 0x003d
++#define CMD_802_11_AD_HOC_STOP 0x0040
++#define CMD_802_11_HOST_SLEEP_CFG 0x0043
++#define CMD_802_11_WAKEUP_CONFIRM 0x0044
++#define CMD_802_11_HOST_SLEEP_ACTIVATE 0x0045
++#define CMD_802_11_BEACON_STOP 0x0049
++#define CMD_802_11_MAC_ADDRESS 0x004d
++#define CMD_802_11_LED_GPIO_CTRL 0x004e
++#define CMD_802_11_EEPROM_ACCESS 0x0059
++#define CMD_802_11_BAND_CONFIG 0x0058
++#define CMD_802_11D_DOMAIN_INFO 0x005b
++#define CMD_802_11_KEY_MATERIAL 0x005e
++#define CMD_802_11_SLEEP_PARAMS 0x0066
++#define CMD_802_11_INACTIVITY_TIMEOUT 0x0067
++#define CMD_802_11_SLEEP_PERIOD 0x0068
++#define CMD_802_11_TPC_CFG 0x0072
++#define CMD_802_11_PWR_CFG 0x0073
++#define CMD_802_11_FW_WAKE_METHOD 0x0074
++#define CMD_802_11_SUBSCRIBE_EVENT 0x0075
++#define CMD_802_11_RATE_ADAPT_RATESET 0x0076
++#define CMD_802_11_TX_RATE_QUERY 0x007f
++#define CMD_GET_TSF 0x0080
++#define CMD_BT_ACCESS 0x0087
++#define CMD_FWT_ACCESS 0x0095
++#define CMD_802_11_MONITOR_MODE 0x0098
++#define CMD_MESH_ACCESS 0x009b
++#define CMD_MESH_CONFIG 0x00a3
++#define CMD_SET_BOOT2_VER 0x00a5
++#define CMD_802_11_BEACON_CTRL 0x00b0
- IWL_DEBUG_INFO("Invoking measurement of type %d on "
- "channel %d (for '%s')\n", type, params.channel, buf);
-- iwl_get_measurement(priv, &params, type);
-+ iwl4965_get_measurement(priv, &params, type);
+ /* For the IEEE Power Save */
+-#define CMD_SUBCMD_ENTER_PS 0x0030
+-#define CMD_SUBCMD_EXIT_PS 0x0031
+-#define CMD_SUBCMD_SLEEP_CONFIRMED 0x0034
+-#define CMD_SUBCMD_FULL_POWERDOWN 0x0035
+-#define CMD_SUBCMD_FULL_POWERUP 0x0036
+-
+-#define CMD_ENABLE_RSN 0x0001
+-#define CMD_DISABLE_RSN 0x0000
++#define CMD_SUBCMD_ENTER_PS 0x0030
++#define CMD_SUBCMD_EXIT_PS 0x0031
++#define CMD_SUBCMD_SLEEP_CONFIRMED 0x0034
++#define CMD_SUBCMD_FULL_POWERDOWN 0x0035
++#define CMD_SUBCMD_FULL_POWERUP 0x0036
- return count;
- }
+-#define CMD_ACT_SET 0x0001
+-#define CMD_ACT_GET 0x0000
++#define CMD_ENABLE_RSN 0x0001
++#define CMD_DISABLE_RSN 0x0000
- static DEVICE_ATTR(measurement, S_IRUSR | S_IWUSR,
- show_measurement, store_measurement);
--#endif /* CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT */
-+#endif /* CONFIG_IWL4965_SPECTRUM_MEASUREMENT */
+-#define CMD_ACT_GET_AES (CMD_ACT_GET + 2)
+-#define CMD_ACT_SET_AES (CMD_ACT_SET + 2)
+-#define CMD_ACT_REMOVE_AES (CMD_ACT_SET + 3)
++#define CMD_ACT_GET 0x0000
++#define CMD_ACT_SET 0x0001
++#define CMD_ACT_GET_AES 0x0002
++#define CMD_ACT_SET_AES 0x0003
++#define CMD_ACT_REMOVE_AES 0x0004
- static ssize_t store_retry_rate(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
+ /* Define action or option for CMD_802_11_SET_WEP */
+-#define CMD_ACT_ADD 0x0002
+-#define CMD_ACT_REMOVE 0x0004
+-#define CMD_ACT_USE_DEFAULT 0x0008
++#define CMD_ACT_ADD 0x0002
++#define CMD_ACT_REMOVE 0x0004
++#define CMD_ACT_USE_DEFAULT 0x0008
- priv->retry_rate = simple_strtoul(buf, NULL, 0);
- if (priv->retry_rate <= 0)
-@@ -8557,7 +8613,7 @@ static ssize_t store_retry_rate(struct device *d,
- static ssize_t show_retry_rate(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
- return sprintf(buf, "%d", priv->retry_rate);
- }
+-#define CMD_TYPE_WEP_40_BIT 0x01
+-#define CMD_TYPE_WEP_104_BIT 0x02
++#define CMD_TYPE_WEP_40_BIT 0x01
++#define CMD_TYPE_WEP_104_BIT 0x02
-@@ -8568,14 +8624,14 @@ static ssize_t store_power_level(struct device *d,
- struct device_attribute *attr,
- const char *buf, size_t count)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
- int rc;
- int mode;
+-#define CMD_NUM_OF_WEP_KEYS 4
++#define CMD_NUM_OF_WEP_KEYS 4
- mode = simple_strtoul(buf, NULL, 0);
- mutex_lock(&priv->mutex);
+-#define CMD_WEP_KEY_INDEX_MASK 0x3fff
++#define CMD_WEP_KEY_INDEX_MASK 0x3fff
-- if (!iwl_is_ready(priv)) {
-+ if (!iwl4965_is_ready(priv)) {
- rc = -EAGAIN;
- goto out;
- }
-@@ -8586,7 +8642,7 @@ static ssize_t store_power_level(struct device *d,
- mode |= IWL_POWER_ENABLED;
+ /* Define action or option for CMD_802_11_RESET */
+-#define CMD_ACT_HALT 0x0003
++#define CMD_ACT_HALT 0x0003
- if (mode != priv->power_mode) {
-- rc = iwl_send_power_mode(priv, IWL_POWER_LEVEL(mode));
-+ rc = iwl4965_send_power_mode(priv, IWL_POWER_LEVEL(mode));
- if (rc) {
- IWL_DEBUG_MAC80211("failed setting power mode.\n");
- goto out;
-@@ -8622,7 +8678,7 @@ static const s32 period_duration[] = {
- static ssize_t show_power_level(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
- int level = IWL_POWER_LEVEL(priv->power_mode);
- char *p = buf;
+ /* Define action or option for CMD_802_11_SCAN */
+-#define CMD_BSS_TYPE_BSS 0x0001
+-#define CMD_BSS_TYPE_IBSS 0x0002
+-#define CMD_BSS_TYPE_ANY 0x0003
++#define CMD_BSS_TYPE_BSS 0x0001
++#define CMD_BSS_TYPE_IBSS 0x0002
++#define CMD_BSS_TYPE_ANY 0x0003
-@@ -8657,18 +8713,18 @@ static DEVICE_ATTR(power_level, S_IWUSR | S_IRUSR, show_power_level,
- static ssize_t show_channels(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
- int len = 0, i;
- struct ieee80211_channel *channels = NULL;
- const struct ieee80211_hw_mode *hw_mode = NULL;
- int count = 0;
+ /* Define action or option for CMD_802_11_SCAN */
+-#define CMD_SCAN_TYPE_ACTIVE 0x0000
+-#define CMD_SCAN_TYPE_PASSIVE 0x0001
++#define CMD_SCAN_TYPE_ACTIVE 0x0000
++#define CMD_SCAN_TYPE_PASSIVE 0x0001
-- if (!iwl_is_ready(priv))
-+ if (!iwl4965_is_ready(priv))
- return -EAGAIN;
+ #define CMD_SCAN_RADIO_TYPE_BG 0
-- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211G);
-+ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211G);
- if (!hw_mode)
-- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211B);
-+ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211B);
- if (hw_mode) {
- channels = hw_mode->channels;
- count = hw_mode->num_channels;
-@@ -8695,7 +8751,7 @@ static ssize_t show_channels(struct device *d,
- flag & IEEE80211_CHAN_W_ACTIVE_SCAN ?
- "active/passive" : "passive only");
+-#define CMD_SCAN_PROBE_DELAY_TIME 0
++#define CMD_SCAN_PROBE_DELAY_TIME 0
-- hw_mode = iwl_get_hw_mode(priv, MODE_IEEE80211A);
-+ hw_mode = iwl4965_get_hw_mode(priv, MODE_IEEE80211A);
- if (hw_mode) {
- channels = hw_mode->channels;
- count = hw_mode->num_channels;
-@@ -8731,17 +8787,17 @@ static DEVICE_ATTR(channels, S_IRUSR, show_channels, NULL);
- static ssize_t show_statistics(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-- u32 size = sizeof(struct iwl_notif_statistics);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
-+ u32 size = sizeof(struct iwl4965_notif_statistics);
- u32 len = 0, ofs = 0;
- u8 *data = (u8 *) & priv->statistics;
- int rc = 0;
+ /* Define action or option for CMD_MAC_CONTROL */
+-#define CMD_ACT_MAC_RX_ON 0x0001
+-#define CMD_ACT_MAC_TX_ON 0x0002
+-#define CMD_ACT_MAC_LOOPBACK_ON 0x0004
+-#define CMD_ACT_MAC_WEP_ENABLE 0x0008
+-#define CMD_ACT_MAC_INT_ENABLE 0x0010
+-#define CMD_ACT_MAC_MULTICAST_ENABLE 0x0020
+-#define CMD_ACT_MAC_BROADCAST_ENABLE 0x0040
+-#define CMD_ACT_MAC_PROMISCUOUS_ENABLE 0x0080
+-#define CMD_ACT_MAC_ALL_MULTICAST_ENABLE 0x0100
+-#define CMD_ACT_MAC_STRICT_PROTECTION_ENABLE 0x0400
++#define CMD_ACT_MAC_RX_ON 0x0001
++#define CMD_ACT_MAC_TX_ON 0x0002
++#define CMD_ACT_MAC_LOOPBACK_ON 0x0004
++#define CMD_ACT_MAC_WEP_ENABLE 0x0008
++#define CMD_ACT_MAC_INT_ENABLE 0x0010
++#define CMD_ACT_MAC_MULTICAST_ENABLE 0x0020
++#define CMD_ACT_MAC_BROADCAST_ENABLE 0x0040
++#define CMD_ACT_MAC_PROMISCUOUS_ENABLE 0x0080
++#define CMD_ACT_MAC_ALL_MULTICAST_ENABLE 0x0100
++#define CMD_ACT_MAC_STRICT_PROTECTION_ENABLE 0x0400
-- if (!iwl_is_alive(priv))
-+ if (!iwl4965_is_alive(priv))
- return -EAGAIN;
+ /* Define action or option for CMD_802_11_RADIO_CONTROL */
+-#define CMD_TYPE_AUTO_PREAMBLE 0x0001
+-#define CMD_TYPE_SHORT_PREAMBLE 0x0002
+-#define CMD_TYPE_LONG_PREAMBLE 0x0003
+-
+-#define TURN_ON_RF 0x01
+-#define RADIO_ON 0x01
+-#define RADIO_OFF 0x00
+-
+-#define SET_AUTO_PREAMBLE 0x05
+-#define SET_SHORT_PREAMBLE 0x03
+-#define SET_LONG_PREAMBLE 0x01
++#define CMD_TYPE_AUTO_PREAMBLE 0x0001
++#define CMD_TYPE_SHORT_PREAMBLE 0x0002
++#define CMD_TYPE_LONG_PREAMBLE 0x0003
++
++/* Event flags for CMD_802_11_SUBSCRIBE_EVENT */
++#define CMD_SUBSCRIBE_RSSI_LOW 0x0001
++#define CMD_SUBSCRIBE_SNR_LOW 0x0002
++#define CMD_SUBSCRIBE_FAILCOUNT 0x0004
++#define CMD_SUBSCRIBE_BCNMISS 0x0008
++#define CMD_SUBSCRIBE_RSSI_HIGH 0x0010
++#define CMD_SUBSCRIBE_SNR_HIGH 0x0020
++
++#define TURN_ON_RF 0x01
++#define RADIO_ON 0x01
++#define RADIO_OFF 0x00
++
++#define SET_AUTO_PREAMBLE 0x05
++#define SET_SHORT_PREAMBLE 0x03
++#define SET_LONG_PREAMBLE 0x01
- mutex_lock(&priv->mutex);
-- rc = iwl_send_statistics_request(priv);
-+ rc = iwl4965_send_statistics_request(priv);
- mutex_unlock(&priv->mutex);
+ /* Define action or option for CMD_802_11_RF_CHANNEL */
+-#define CMD_OPT_802_11_RF_CHANNEL_GET 0x00
+-#define CMD_OPT_802_11_RF_CHANNEL_SET 0x01
++#define CMD_OPT_802_11_RF_CHANNEL_GET 0x00
++#define CMD_OPT_802_11_RF_CHANNEL_SET 0x01
- if (rc) {
-@@ -8769,9 +8825,9 @@ static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
- static ssize_t show_antenna(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
+ /* Define action or option for CMD_802_11_RF_TX_POWER */
+-#define CMD_ACT_TX_POWER_OPT_GET 0x0000
+-#define CMD_ACT_TX_POWER_OPT_SET_HIGH 0x8007
+-#define CMD_ACT_TX_POWER_OPT_SET_MID 0x8004
+-#define CMD_ACT_TX_POWER_OPT_SET_LOW 0x8000
++#define CMD_ACT_TX_POWER_OPT_GET 0x0000
++#define CMD_ACT_TX_POWER_OPT_SET_HIGH 0x8007
++#define CMD_ACT_TX_POWER_OPT_SET_MID 0x8004
++#define CMD_ACT_TX_POWER_OPT_SET_LOW 0x8000
-- if (!iwl_is_alive(priv))
-+ if (!iwl4965_is_alive(priv))
- return -EAGAIN;
+-#define CMD_ACT_TX_POWER_INDEX_HIGH 0x0007
+-#define CMD_ACT_TX_POWER_INDEX_MID 0x0004
+-#define CMD_ACT_TX_POWER_INDEX_LOW 0x0000
++#define CMD_ACT_TX_POWER_INDEX_HIGH 0x0007
++#define CMD_ACT_TX_POWER_INDEX_MID 0x0004
++#define CMD_ACT_TX_POWER_INDEX_LOW 0x0000
- return sprintf(buf, "%d\n", priv->antenna);
-@@ -8782,7 +8838,7 @@ static ssize_t store_antenna(struct device *d,
- const char *buf, size_t count)
- {
- int ant;
-- struct iwl_priv *priv = dev_get_drvdata(d);
-+ struct iwl4965_priv *priv = dev_get_drvdata(d);
+ /* Define action or option for CMD_802_11_DATA_RATE */
+-#define CMD_ACT_SET_TX_AUTO 0x0000
+-#define CMD_ACT_SET_TX_FIX_RATE 0x0001
+-#define CMD_ACT_GET_TX_RATE 0x0002
++#define CMD_ACT_SET_TX_AUTO 0x0000
++#define CMD_ACT_SET_TX_FIX_RATE 0x0001
++#define CMD_ACT_GET_TX_RATE 0x0002
- if (count == 0)
- return 0;
-@@ -8794,7 +8850,7 @@ static ssize_t store_antenna(struct device *d,
+-#define CMD_ACT_SET_RX 0x0001
+-#define CMD_ACT_SET_TX 0x0002
+-#define CMD_ACT_SET_BOTH 0x0003
+-#define CMD_ACT_GET_RX 0x0004
+-#define CMD_ACT_GET_TX 0x0008
+-#define CMD_ACT_GET_BOTH 0x000c
++#define CMD_ACT_SET_RX 0x0001
++#define CMD_ACT_SET_TX 0x0002
++#define CMD_ACT_SET_BOTH 0x0003
++#define CMD_ACT_GET_RX 0x0004
++#define CMD_ACT_GET_TX 0x0008
++#define CMD_ACT_GET_BOTH 0x000c
- if ((ant >= 0) && (ant <= 2)) {
- IWL_DEBUG_INFO("Setting antenna select to %d.\n", ant);
-- priv->antenna = (enum iwl_antenna)ant;
-+ priv->antenna = (enum iwl4965_antenna)ant;
- } else
- IWL_DEBUG_INFO("Bad antenna select value %d.\n", ant);
+ /* Define action or option for CMD_802_11_PS_MODE */
+-#define CMD_TYPE_CAM 0x0000
+-#define CMD_TYPE_MAX_PSP 0x0001
+-#define CMD_TYPE_FAST_PSP 0x0002
++#define CMD_TYPE_CAM 0x0000
++#define CMD_TYPE_MAX_PSP 0x0001
++#define CMD_TYPE_FAST_PSP 0x0002
++
++/* Options for CMD_802_11_FW_WAKE_METHOD */
++#define CMD_WAKE_METHOD_UNCHANGED 0x0000
++#define CMD_WAKE_METHOD_COMMAND_INT 0x0001
++#define CMD_WAKE_METHOD_GPIO 0x0002
-@@ -8807,8 +8863,8 @@ static DEVICE_ATTR(antenna, S_IWUSR | S_IRUGO, show_antenna, store_antenna);
- static ssize_t show_status(struct device *d,
- struct device_attribute *attr, char *buf)
- {
-- struct iwl_priv *priv = (struct iwl_priv *)d->driver_data;
-- if (!iwl_is_alive(priv))
-+ struct iwl4965_priv *priv = (struct iwl4965_priv *)d->driver_data;
-+ if (!iwl4965_is_alive(priv))
- return -EAGAIN;
- return sprintf(buf, "0x%08x\n", (int)priv->status);
- }
-@@ -8822,7 +8878,7 @@ static ssize_t dump_error_log(struct device *d,
- char *p = (char *)buf;
+ /* Define action or option for CMD_BT_ACCESS */
+ enum cmd_bt_access_opts {
+@@ -237,8 +234,8 @@ enum cmd_fwt_access_opts {
+ CMD_ACT_FWT_ACCESS_DEL,
+ CMD_ACT_FWT_ACCESS_LOOKUP,
+ CMD_ACT_FWT_ACCESS_LIST,
+- CMD_ACT_FWT_ACCESS_LIST_route,
+- CMD_ACT_FWT_ACCESS_LIST_neighbor,
++ CMD_ACT_FWT_ACCESS_LIST_ROUTE,
++ CMD_ACT_FWT_ACCESS_LIST_NEIGHBOR,
+ CMD_ACT_FWT_ACCESS_RESET,
+ CMD_ACT_FWT_ACCESS_CLEANUP,
+ CMD_ACT_FWT_ACCESS_TIME,
+@@ -264,27 +261,36 @@ enum cmd_mesh_access_opts {
+ };
- if (p[0] == '1')
-- iwl_dump_nic_error_log((struct iwl_priv *)d->driver_data);
-+ iwl4965_dump_nic_error_log((struct iwl4965_priv *)d->driver_data);
+ /** Card Event definition */
+-#define MACREG_INT_CODE_TX_PPA_FREE 0x00000000
+-#define MACREG_INT_CODE_TX_DMA_DONE 0x00000001
+-#define MACREG_INT_CODE_LINK_LOSE_W_SCAN 0x00000002
+-#define MACREG_INT_CODE_LINK_LOSE_NO_SCAN 0x00000003
+-#define MACREG_INT_CODE_LINK_SENSED 0x00000004
+-#define MACREG_INT_CODE_CMD_FINISHED 0x00000005
+-#define MACREG_INT_CODE_MIB_CHANGED 0x00000006
+-#define MACREG_INT_CODE_INIT_DONE 0x00000007
+-#define MACREG_INT_CODE_DEAUTHENTICATED 0x00000008
+-#define MACREG_INT_CODE_DISASSOCIATED 0x00000009
+-#define MACREG_INT_CODE_PS_AWAKE 0x0000000a
+-#define MACREG_INT_CODE_PS_SLEEP 0x0000000b
+-#define MACREG_INT_CODE_MIC_ERR_MULTICAST 0x0000000d
+-#define MACREG_INT_CODE_MIC_ERR_UNICAST 0x0000000e
+-#define MACREG_INT_CODE_WM_AWAKE 0x0000000f
+-#define MACREG_INT_CODE_ADHOC_BCN_LOST 0x00000011
+-#define MACREG_INT_CODE_RSSI_LOW 0x00000019
+-#define MACREG_INT_CODE_SNR_LOW 0x0000001a
+-#define MACREG_INT_CODE_MAX_FAIL 0x0000001b
+-#define MACREG_INT_CODE_RSSI_HIGH 0x0000001c
+-#define MACREG_INT_CODE_SNR_HIGH 0x0000001d
+-#define MACREG_INT_CODE_MESH_AUTO_STARTED 0x00000023
+-
+-#endif /* _HOST_H_ */
++#define MACREG_INT_CODE_TX_PPA_FREE 0
++#define MACREG_INT_CODE_TX_DMA_DONE 1
++#define MACREG_INT_CODE_LINK_LOST_W_SCAN 2
++#define MACREG_INT_CODE_LINK_LOST_NO_SCAN 3
++#define MACREG_INT_CODE_LINK_SENSED 4
++#define MACREG_INT_CODE_CMD_FINISHED 5
++#define MACREG_INT_CODE_MIB_CHANGED 6
++#define MACREG_INT_CODE_INIT_DONE 7
++#define MACREG_INT_CODE_DEAUTHENTICATED 8
++#define MACREG_INT_CODE_DISASSOCIATED 9
++#define MACREG_INT_CODE_PS_AWAKE 10
++#define MACREG_INT_CODE_PS_SLEEP 11
++#define MACREG_INT_CODE_MIC_ERR_MULTICAST 13
++#define MACREG_INT_CODE_MIC_ERR_UNICAST 14
++#define MACREG_INT_CODE_WM_AWAKE 15
++#define MACREG_INT_CODE_DEEP_SLEEP_AWAKE 16
++#define MACREG_INT_CODE_ADHOC_BCN_LOST 17
++#define MACREG_INT_CODE_HOST_AWAKE 18
++#define MACREG_INT_CODE_STOP_TX 19
++#define MACREG_INT_CODE_START_TX 20
++#define MACREG_INT_CODE_CHANNEL_SWITCH 21
++#define MACREG_INT_CODE_MEASUREMENT_RDY 22
++#define MACREG_INT_CODE_WMM_CHANGE 23
++#define MACREG_INT_CODE_BG_SCAN_REPORT 24
++#define MACREG_INT_CODE_RSSI_LOW 25
++#define MACREG_INT_CODE_SNR_LOW 26
++#define MACREG_INT_CODE_MAX_FAIL 27
++#define MACREG_INT_CODE_RSSI_HIGH 28
++#define MACREG_INT_CODE_SNR_HIGH 29
++#define MACREG_INT_CODE_MESH_AUTO_STARTED 35
++#define MACREG_INT_CODE_FIRMWARE_READY 48
++
++#endif
+diff --git a/drivers/net/wireless/libertas/hostcmd.h b/drivers/net/wireless/libertas/hostcmd.h
+index e1045dc..d35b015 100644
+--- a/drivers/net/wireless/libertas/hostcmd.h
++++ b/drivers/net/wireless/libertas/hostcmd.h
+@@ -2,8 +2,8 @@
+ * This file contains the function prototypes, data structure
+ * and defines for all the host/station commands
+ */
+-#ifndef __HOSTCMD__H
+-#define __HOSTCMD__H
++#ifndef _LBS_HOSTCMD_H
++#define _LBS_HOSTCMD_H
- return strnlen(buf, count);
- }
-@@ -8836,7 +8892,7 @@ static ssize_t dump_event_log(struct device *d,
- char *p = (char *)buf;
+ #include <linux/wireless.h>
+ #include "11d.h"
+@@ -65,19 +65,21 @@ struct rxpd {
+ u8 reserved[3];
+ };
- if (p[0] == '1')
-- iwl_dump_nic_event_log((struct iwl_priv *)d->driver_data);
-+ iwl4965_dump_nic_event_log((struct iwl4965_priv *)d->driver_data);
++struct cmd_header {
++ __le16 command;
++ __le16 size;
++ __le16 seqnum;
++ __le16 result;
++} __attribute__ ((packed));
++
+ struct cmd_ctrl_node {
+- /* CMD link list */
+ struct list_head list;
+- u32 status;
+- /* CMD ID */
+- u32 cmd_oid;
+- /*CMD wait option: wait for finish or no wait */
+- u16 wait_option;
+- /* command parameter */
+- void *pdata_buf;
+- /*command data */
+- u8 *bufvirtualaddr;
+- u16 cmdflags;
++ int result;
++ /* command response */
++ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *);
++ unsigned long callback_arg;
++ /* command data */
++ struct cmd_header *cmdbuf;
+ /* wait queue */
+ u16 cmdwaitqwoken;
+ wait_queue_head_t cmdwait_q;
+@@ -86,13 +88,13 @@ struct cmd_ctrl_node {
+ /* Generic structure to hold all key types. */
+ struct enc_key {
+ u16 len;
+- u16 flags; /* KEY_INFO_* from wlan_defs.h */
+- u16 type; /* KEY_TYPE_* from wlan_defs.h */
++ u16 flags; /* KEY_INFO_* from defs.h */
++ u16 type; /* KEY_TYPE_* from defs.h */
+ u8 key[32];
+ };
- return strnlen(buf, count);
- }
-@@ -8849,34 +8905,34 @@ static DEVICE_ATTR(dump_events, S_IWUSR, NULL, dump_event_log);
- *
- *****************************************************************************/
+-/* wlan_offset_value */
+-struct wlan_offset_value {
++/* lbs_offset_value */
++struct lbs_offset_value {
+ u32 offset;
+ u32 value;
+ };
+@@ -104,14 +106,19 @@ struct cmd_ds_gen {
+ __le16 size;
+ __le16 seqnum;
+ __le16 result;
++ void *cmdresp[0];
+ };
--static void iwl_setup_deferred_work(struct iwl_priv *priv)
-+static void iwl4965_setup_deferred_work(struct iwl4965_priv *priv)
- {
- priv->workqueue = create_workqueue(DRV_NAME);
+ #define S_DS_GEN sizeof(struct cmd_ds_gen)
++
++
+ /*
+ * Define data structure for CMD_GET_HW_SPEC
+ * This structure defines the response for the GET_HW_SPEC command
+ */
+ struct cmd_ds_get_hw_spec {
++ struct cmd_header hdr;
++
+ /* HW Interface version number */
+ __le16 hwifversion;
+ /* HW version number */
+@@ -129,8 +136,8 @@ struct cmd_ds_get_hw_spec {
+ /* Number of antenna used */
+ __le16 nr_antenna;
- init_waitqueue_head(&priv->wait_command_queue);
+- /* FW release number, example 1,2,3,4 = 3.2.1p4 */
+- u8 fwreleasenumber[4];
++ /* FW release number, example 0x01030304 = 2.3.4p1 */
++ __le32 fwrelease;
-- INIT_WORK(&priv->up, iwl_bg_up);
-- INIT_WORK(&priv->restart, iwl_bg_restart);
-- INIT_WORK(&priv->rx_replenish, iwl_bg_rx_replenish);
-- INIT_WORK(&priv->scan_completed, iwl_bg_scan_completed);
-- INIT_WORK(&priv->request_scan, iwl_bg_request_scan);
-- INIT_WORK(&priv->abort_scan, iwl_bg_abort_scan);
-- INIT_WORK(&priv->rf_kill, iwl_bg_rf_kill);
-- INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
-- INIT_DELAYED_WORK(&priv->post_associate, iwl_bg_post_associate);
-- INIT_DELAYED_WORK(&priv->init_alive_start, iwl_bg_init_alive_start);
-- INIT_DELAYED_WORK(&priv->alive_start, iwl_bg_alive_start);
-- INIT_DELAYED_WORK(&priv->scan_check, iwl_bg_scan_check);
--
-- iwl_hw_setup_deferred_work(priv);
-+ INIT_WORK(&priv->up, iwl4965_bg_up);
-+ INIT_WORK(&priv->restart, iwl4965_bg_restart);
-+ INIT_WORK(&priv->rx_replenish, iwl4965_bg_rx_replenish);
-+ INIT_WORK(&priv->scan_completed, iwl4965_bg_scan_completed);
-+ INIT_WORK(&priv->request_scan, iwl4965_bg_request_scan);
-+ INIT_WORK(&priv->abort_scan, iwl4965_bg_abort_scan);
-+ INIT_WORK(&priv->rf_kill, iwl4965_bg_rf_kill);
-+ INIT_WORK(&priv->beacon_update, iwl4965_bg_beacon_update);
-+ INIT_DELAYED_WORK(&priv->post_associate, iwl4965_bg_post_associate);
-+ INIT_DELAYED_WORK(&priv->init_alive_start, iwl4965_bg_init_alive_start);
-+ INIT_DELAYED_WORK(&priv->alive_start, iwl4965_bg_alive_start);
-+ INIT_DELAYED_WORK(&priv->scan_check, iwl4965_bg_scan_check);
+ /* Base Address of TxPD queue */
+ __le32 wcb_base;
+@@ -149,8 +156,17 @@ struct cmd_ds_802_11_reset {
+ };
+
+ struct cmd_ds_802_11_subscribe_event {
++ struct cmd_header hdr;
+
-+ iwl4965_hw_setup_deferred_work(priv);
+ __le16 action;
+ __le16 events;
++
++ /* A TLV to the CMD_802_11_SUBSCRIBE_EVENT command can contain a
++ * number of TLVs. From the v5.1 manual, those TLVs would add up to
++ * 40 bytes. However, future firmware might add additional TLVs, so I
++ * bump this up a bit.
++ */
++ uint8_t tlv[128];
+ };
- tasklet_init(&priv->irq_tasklet, (void (*)(unsigned long))
-- iwl_irq_tasklet, (unsigned long)priv);
-+ iwl4965_irq_tasklet, (unsigned long)priv);
- }
+ /*
+@@ -242,6 +258,8 @@ struct cmd_ds_802_11_ad_hoc_result {
+ };
--static void iwl_cancel_deferred_work(struct iwl_priv *priv)
-+static void iwl4965_cancel_deferred_work(struct iwl4965_priv *priv)
- {
-- iwl_hw_cancel_deferred_work(priv);
-+ iwl4965_hw_cancel_deferred_work(priv);
+ struct cmd_ds_802_11_set_wep {
++ struct cmd_header hdr;
++
+ /* ACT_ADD, ACT_REMOVE or ACT_ENABLE */
+ __le16 action;
- cancel_delayed_work_sync(&priv->init_alive_start);
- cancel_delayed_work(&priv->scan_check);
-@@ -8885,14 +8941,14 @@ static void iwl_cancel_deferred_work(struct iwl_priv *priv)
- cancel_work_sync(&priv->beacon_update);
- }
+@@ -249,8 +267,8 @@ struct cmd_ds_802_11_set_wep {
+ __le16 keyindex;
--static struct attribute *iwl_sysfs_entries[] = {
-+static struct attribute *iwl4965_sysfs_entries[] = {
- &dev_attr_antenna.attr,
- &dev_attr_channels.attr,
- &dev_attr_dump_errors.attr,
- &dev_attr_dump_events.attr,
- &dev_attr_flags.attr,
- &dev_attr_filter_flags.attr,
--#ifdef CONFIG_IWLWIFI_SPECTRUM_MEASUREMENT
-+#ifdef CONFIG_IWL4965_SPECTRUM_MEASUREMENT
- &dev_attr_measurement.attr,
- #endif
- &dev_attr_power_level.attr,
-@@ -8908,54 +8964,56 @@ static struct attribute *iwl_sysfs_entries[] = {
- NULL
+ /* 40, 128bit or TXWEP */
+- u8 keytype[4];
+- u8 keymaterial[4][16];
++ uint8_t keytype[4];
++ uint8_t keymaterial[4][16];
};
--static struct attribute_group iwl_attribute_group = {
-+static struct attribute_group iwl4965_attribute_group = {
- .name = NULL, /* put in device directory */
-- .attrs = iwl_sysfs_entries,
-+ .attrs = iwl4965_sysfs_entries,
+ struct cmd_ds_802_3_get_stat {
+@@ -328,11 +346,21 @@ struct cmd_ds_rf_reg_access {
};
--static struct ieee80211_ops iwl_hw_ops = {
-- .tx = iwl_mac_tx,
-- .start = iwl_mac_start,
-- .stop = iwl_mac_stop,
-- .add_interface = iwl_mac_add_interface,
-- .remove_interface = iwl_mac_remove_interface,
-- .config = iwl_mac_config,
-- .config_interface = iwl_mac_config_interface,
-- .configure_filter = iwl_configure_filter,
-- .set_key = iwl_mac_set_key,
-- .get_stats = iwl_mac_get_stats,
-- .get_tx_stats = iwl_mac_get_tx_stats,
-- .conf_tx = iwl_mac_conf_tx,
-- .get_tsf = iwl_mac_get_tsf,
-- .reset_tsf = iwl_mac_reset_tsf,
-- .beacon_update = iwl_mac_beacon_update,
--#ifdef CONFIG_IWLWIFI_HT
-- .conf_ht = iwl_mac_conf_ht,
-- .get_ht_capab = iwl_mac_get_ht_capab,
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- .ht_tx_agg_start = iwl_mac_ht_tx_agg_start,
-- .ht_tx_agg_stop = iwl_mac_ht_tx_agg_stop,
-- .ht_rx_agg_start = iwl_mac_ht_rx_agg_start,
-- .ht_rx_agg_stop = iwl_mac_ht_rx_agg_stop,
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-- .hw_scan = iwl_mac_hw_scan
-+static struct ieee80211_ops iwl4965_hw_ops = {
-+ .tx = iwl4965_mac_tx,
-+ .start = iwl4965_mac_start,
-+ .stop = iwl4965_mac_stop,
-+ .add_interface = iwl4965_mac_add_interface,
-+ .remove_interface = iwl4965_mac_remove_interface,
-+ .config = iwl4965_mac_config,
-+ .config_interface = iwl4965_mac_config_interface,
-+ .configure_filter = iwl4965_configure_filter,
-+ .set_key = iwl4965_mac_set_key,
-+ .get_stats = iwl4965_mac_get_stats,
-+ .get_tx_stats = iwl4965_mac_get_tx_stats,
-+ .conf_tx = iwl4965_mac_conf_tx,
-+ .get_tsf = iwl4965_mac_get_tsf,
-+ .reset_tsf = iwl4965_mac_reset_tsf,
-+ .beacon_update = iwl4965_mac_beacon_update,
-+ .bss_info_changed = iwl4965_bss_info_changed,
-+#ifdef CONFIG_IWL4965_HT
-+ .conf_ht = iwl4965_mac_conf_ht,
-+ .ampdu_action = iwl4965_mac_ampdu_action,
-+#ifdef CONFIG_IWL4965_HT_AGG
-+ .ht_tx_agg_start = iwl4965_mac_ht_tx_agg_start,
-+ .ht_tx_agg_stop = iwl4965_mac_ht_tx_agg_stop,
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
-+ .hw_scan = iwl4965_mac_hw_scan
+ struct cmd_ds_802_11_radio_control {
++ struct cmd_header hdr;
++
+ __le16 action;
+ __le16 control;
};
--static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
-+static int iwl4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- {
- int err = 0;
-- struct iwl_priv *priv;
-+ struct iwl4965_priv *priv;
- struct ieee80211_hw *hw;
- int i;
-+ DECLARE_MAC_BUF(mac);
++struct cmd_ds_802_11_beacon_control {
++ __le16 action;
++ __le16 beacon_enable;
++ __le16 beacon_period;
++};
++
+ struct cmd_ds_802_11_sleep_params {
++ struct cmd_header hdr;
++
+ /* ACT_GET/ACT_SET */
+ __le16 action;
-- if (iwl_param_disable_hw_scan) {
-+ /* Disabling hardware scan means that mac80211 will perform scans
-+ * "the hard way", rather than using the device's scan. */
-+ if (iwl4965_param_disable_hw_scan) {
- IWL_DEBUG_INFO("Disabling hw_scan\n");
-- iwl_hw_ops.hw_scan = NULL;
-+ iwl4965_hw_ops.hw_scan = NULL;
- }
+@@ -346,16 +374,18 @@ struct cmd_ds_802_11_sleep_params {
+ __le16 stabletime;
-- if ((iwl_param_queues_num > IWL_MAX_NUM_QUEUES) ||
-- (iwl_param_queues_num < IWL_MIN_NUM_QUEUES)) {
-+ if ((iwl4965_param_queues_num > IWL_MAX_NUM_QUEUES) ||
-+ (iwl4965_param_queues_num < IWL_MIN_NUM_QUEUES)) {
- IWL_ERROR("invalid queues_num, should be between %d and %d\n",
- IWL_MIN_NUM_QUEUES, IWL_MAX_NUM_QUEUES);
- err = -EINVAL;
-@@ -8964,7 +9022,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ /* control periodic calibration */
+- u8 calcontrol;
++ uint8_t calcontrol;
- /* mac80211 allocates memory for this device instance, including
- * space for this driver's private structure */
-- hw = ieee80211_alloc_hw(sizeof(struct iwl_priv), &iwl_hw_ops);
-+ hw = ieee80211_alloc_hw(sizeof(struct iwl4965_priv), &iwl4965_hw_ops);
- if (hw == NULL) {
- IWL_ERROR("Can not allocate network device\n");
- err = -ENOMEM;
-@@ -8979,9 +9037,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- priv->hw = hw;
+ /* control the use of external sleep clock */
+- u8 externalsleepclk;
++ uint8_t externalsleepclk;
- priv->pci_dev = pdev;
-- priv->antenna = (enum iwl_antenna)iwl_param_antenna;
--#ifdef CONFIG_IWLWIFI_DEBUG
-- iwl_debug_level = iwl_param_debug;
-+ priv->antenna = (enum iwl4965_antenna)iwl4965_param_antenna;
-+#ifdef CONFIG_IWL4965_DEBUG
-+ iwl4965_debug_level = iwl4965_param_debug;
- atomic_set(&priv->restrict_refcnt, 0);
- #endif
- priv->retry_rate = 1;
-@@ -9000,12 +9058,14 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- /* Tell mac80211 our Tx characteristics */
- hw->flags = IEEE80211_HW_HOST_GEN_BEACON_TEMPLATE;
+ /* reserved field, should be set to zero */
+ __le16 reserved;
+ };
-+ /* Default value; 4 EDCA QOS priorities */
- hw->queues = 4;
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-+#ifdef CONFIG_IWL4965_HT
-+#ifdef CONFIG_IWL4965_HT_AGG
-+ /* Enhanced value; more queues, to support 11n aggregation */
- hw->queues = 16;
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
-+#endif /* CONFIG_IWL4965_HT_AGG */
-+#endif /* CONFIG_IWL4965_HT */
+ struct cmd_ds_802_11_inactivity_timeout {
++ struct cmd_header hdr;
++
+ /* ACT_GET/ACT_SET */
+ __le16 action;
- spin_lock_init(&priv->lock);
- spin_lock_init(&priv->power_data.lock);
-@@ -9026,7 +9086,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+@@ -364,11 +394,13 @@ struct cmd_ds_802_11_inactivity_timeout {
+ };
- pci_set_master(pdev);
+ struct cmd_ds_802_11_rf_channel {
++ struct cmd_header hdr;
++
+ __le16 action;
+- __le16 currentchannel;
+- __le16 rftype;
+- __le16 reserved;
+- u8 channellist[32];
++ __le16 channel;
++ __le16 rftype; /* unused */
++ __le16 reserved; /* unused */
++ u8 channellist[32]; /* unused */
+ };
-- iwl_clear_stations_table(priv);
-+ /* Clear the driver's (not device's) station table */
-+ iwl4965_clear_stations_table(priv);
+ struct cmd_ds_802_11_rssi {
+@@ -406,13 +438,29 @@ struct cmd_ds_802_11_rf_antenna {
+ };
- priv->data_retry_limit = -1;
- priv->ieee_channels = NULL;
-@@ -9045,9 +9106,11 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- err = pci_request_regions(pdev, DRV_NAME);
- if (err)
- goto out_pci_disable_device;
+ struct cmd_ds_802_11_monitor_mode {
+- u16 action;
+- u16 mode;
++ __le16 action;
++ __le16 mode;
+ };
+
+ struct cmd_ds_set_boot2_ver {
+- u16 action;
+- u16 version;
++ struct cmd_header hdr;
+
- /* We disable the RETRY_TIMEOUT register (0x41) to keep
- * PCI Tx retries from interfering with C3 CPU state */
- pci_write_config_byte(pdev, 0x41, 0x00);
++ __le16 action;
++ __le16 version;
++};
+
- priv->hw_base = pci_iomap(pdev, 0, 0);
- if (!priv->hw_base) {
- err = -ENODEV;
-@@ -9060,7 +9123,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
++struct cmd_ds_802_11_fw_wake_method {
++ struct cmd_header hdr;
++
++ __le16 action;
++ __le16 method;
++};
++
++struct cmd_ds_802_11_sleep_period {
++ struct cmd_header hdr;
++
++ __le16 action;
++ __le16 period;
+ };
- /* Initialize module parameter values here */
+ struct cmd_ds_802_11_ps_mode {
+@@ -437,6 +485,8 @@ struct PS_CMD_ConfirmSleep {
+ };
-- if (iwl_param_disable) {
-+ /* Disable radio (SW RF KILL) via parameter when loading driver */
-+ if (iwl4965_param_disable) {
- set_bit(STATUS_RF_KILL_SW, &priv->status);
- IWL_DEBUG_INFO("Radio disabled.\n");
- }
-@@ -9069,91 +9133,92 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ struct cmd_ds_802_11_data_rate {
++ struct cmd_header hdr;
++
+ __le16 action;
+ __le16 reserved;
+ u8 rates[MAX_RATES];
+@@ -488,6 +538,8 @@ struct cmd_ds_802_11_ad_hoc_join {
+ } __attribute__ ((packed));
- priv->ps_mode = 0;
- priv->use_ant_b_for_management_frame = 1; /* start with ant B */
-- priv->is_ht_enabled = 1;
-- priv->channel_width = IWL_CHANNEL_WIDTH_40MHZ;
- priv->valid_antenna = 0x7; /* assume all 3 connected */
- priv->ps_mode = IWL_MIMO_PS_NONE;
-- priv->cck_power_index_compensation = iwl_read32(
-- priv, CSR_HW_REV_WA_REG);
+ struct cmd_ds_802_11_enable_rsn {
++ struct cmd_header hdr;
++
+ __le16 action;
+ __le16 enable;
+ } __attribute__ ((packed));
+@@ -512,6 +564,13 @@ struct MrvlIEtype_keyParamSet {
+ u8 key[32];
+ };
-+ /* Choose which receivers/antennas to use */
- iwl4965_set_rxon_chain(priv);
++struct cmd_ds_host_sleep {
++ struct cmd_header hdr;
++ __le32 criteria;
++ uint8_t gpio;
++ uint8_t gap;
++} __attribute__ ((packed));
++
+ struct cmd_ds_802_11_key_material {
+ __le16 action;
+ struct MrvlIEtype_keyParamSet keyParamSet[2];
+@@ -598,7 +657,21 @@ struct cmd_ds_fwt_access {
+ u8 prec[ETH_ALEN];
+ } __attribute__ ((packed));
- printk(KERN_INFO DRV_NAME
- ": Detected Intel Wireless WiFi Link 4965AGN\n");
++
++struct cmd_ds_mesh_config {
++ struct cmd_header hdr;
++
++ __le16 action;
++ __le16 channel;
++ __le16 type;
++ __le16 length;
++ u8 data[128]; /* last position reserved */
++} __attribute__ ((packed));
++
++
+ struct cmd_ds_mesh_access {
++ struct cmd_header hdr;
++
+ __le16 action;
+ __le32 data[32]; /* last position reserved */
+ } __attribute__ ((packed));
+@@ -615,14 +688,12 @@ struct cmd_ds_command {
- /* Device-specific setup */
-- if (iwl_hw_set_hw_setting(priv)) {
-+ if (iwl4965_hw_set_hw_setting(priv)) {
- IWL_ERROR("failed to set hw settings\n");
-- mutex_unlock(&priv->mutex);
- goto out_iounmap;
- }
+ /* command Body */
+ union {
+- struct cmd_ds_get_hw_spec hwspec;
+ struct cmd_ds_802_11_ps_mode psmode;
+ struct cmd_ds_802_11_scan scan;
+ struct cmd_ds_802_11_scan_rsp scanresp;
+ struct cmd_ds_mac_control macctrl;
+ struct cmd_ds_802_11_associate associate;
+ struct cmd_ds_802_11_deauthenticate deauth;
+- struct cmd_ds_802_11_set_wep wep;
+ struct cmd_ds_802_11_ad_hoc_start ads;
+ struct cmd_ds_802_11_reset reset;
+ struct cmd_ds_802_11_ad_hoc_result result;
+@@ -634,17 +705,13 @@ struct cmd_ds_command {
+ struct cmd_ds_802_11_rf_tx_power txp;
+ struct cmd_ds_802_11_rf_antenna rant;
+ struct cmd_ds_802_11_monitor_mode monitor;
+- struct cmd_ds_802_11_data_rate drate;
+ struct cmd_ds_802_11_rate_adapt_rateset rateset;
+ struct cmd_ds_mac_multicast_adr madr;
+ struct cmd_ds_802_11_ad_hoc_join adj;
+- struct cmd_ds_802_11_radio_control radio;
+- struct cmd_ds_802_11_rf_channel rfchannel;
+ struct cmd_ds_802_11_rssi rssi;
+ struct cmd_ds_802_11_rssi_rsp rssirsp;
+ struct cmd_ds_802_11_disassociate dassociate;
+ struct cmd_ds_802_11_mac_address macadd;
+- struct cmd_ds_802_11_enable_rsn enbrsn;
+ struct cmd_ds_802_11_key_material keymaterial;
+ struct cmd_ds_mac_reg_access macreg;
+ struct cmd_ds_bbp_reg_access bbpreg;
+@@ -654,8 +721,6 @@ struct cmd_ds_command {
+ struct cmd_ds_802_11d_domain_info domaininfo;
+ struct cmd_ds_802_11d_domain_info domaininforesp;
--#ifdef CONFIG_IWLWIFI_QOS
-- if (iwl_param_qos_enable)
-+#ifdef CONFIG_IWL4965_QOS
-+ if (iwl4965_param_qos_enable)
- priv->qos_data.qos_enable = 1;
+- struct cmd_ds_802_11_sleep_params sleep_params;
+- struct cmd_ds_802_11_inactivity_timeout inactivity_timeout;
+ struct cmd_ds_802_11_tpc_cfg tpccfg;
+ struct cmd_ds_802_11_pwr_cfg pwrcfg;
+ struct cmd_ds_802_11_afc afc;
+@@ -664,10 +729,8 @@ struct cmd_ds_command {
+ struct cmd_tx_rate_query txrate;
+ struct cmd_ds_bt_access bt;
+ struct cmd_ds_fwt_access fwt;
+- struct cmd_ds_mesh_access mesh;
+- struct cmd_ds_set_boot2_ver boot2_ver;
+ struct cmd_ds_get_tsf gettsf;
+- struct cmd_ds_802_11_subscribe_event subscribe_event;
++ struct cmd_ds_802_11_beacon_control bcn_ctrl;
+ } params;
+ } __attribute__ ((packed));
-- iwl_reset_qos(priv);
-+ iwl4965_reset_qos(priv);
+diff --git a/drivers/net/wireless/libertas/if_cs.c b/drivers/net/wireless/libertas/if_cs.c
+index ba4fc2b..4b5ab9a 100644
+--- a/drivers/net/wireless/libertas/if_cs.c
++++ b/drivers/net/wireless/libertas/if_cs.c
+@@ -57,7 +57,7 @@ MODULE_LICENSE("GPL");
- priv->qos_data.qos_active = 0;
- priv->qos_data.qos_cap.val = 0;
--#endif /* CONFIG_IWLWIFI_QOS */
-+#endif /* CONFIG_IWL4965_QOS */
+ struct if_cs_card {
+ struct pcmcia_device *p_dev;
+- wlan_private *priv;
++ struct lbs_private *priv;
+ void __iomem *iobase;
+ };
-- iwl_set_rxon_channel(priv, MODE_IEEE80211G, 6);
-- iwl_setup_deferred_work(priv);
-- iwl_setup_rx_handlers(priv);
-+ iwl4965_set_rxon_channel(priv, MODE_IEEE80211G, 6);
-+ iwl4965_setup_deferred_work(priv);
-+ iwl4965_setup_rx_handlers(priv);
+@@ -243,7 +243,7 @@ static inline void if_cs_disable_ints(struct if_cs_card *card)
- priv->rates_mask = IWL_RATES_MASK;
- /* If power management is turned on, default to AC mode */
- priv->power_mode = IWL_POWER_AC;
- priv->user_txpower_limit = IWL_DEFAULT_TX_POWER;
+ static irqreturn_t if_cs_interrupt(int irq, void *data)
+ {
+- struct if_cs_card *card = (struct if_cs_card *)data;
++ struct if_cs_card *card = data;
+ u16 int_cause;
-- pci_enable_msi(pdev);
+ lbs_deb_enter(LBS_DEB_CS);
+@@ -253,25 +253,20 @@ static irqreturn_t if_cs_interrupt(int irq, void *data)
+ /* Not for us */
+ return IRQ_NONE;
+
+- } else if(int_cause == 0xffff) {
++ } else if (int_cause == 0xffff) {
+ /* Read in junk, the card has probably been removed */
+- card->priv->adapter->surpriseremoved = 1;
++ card->priv->surpriseremoved = 1;
+
+ } else {
+- if(int_cause & IF_CS_H_IC_TX_OVER) {
+- card->priv->dnld_sent = DNLD_RES_RECEIVED;
+- if (!card->priv->adapter->cur_cmd)
+- wake_up_interruptible(&card->priv->waitq);
-
-- err = request_irq(pdev->irq, iwl_isr, IRQF_SHARED, DRV_NAME, priv);
-- if (err) {
-- IWL_ERROR("Error allocating IRQ %d\n", pdev->irq);
-- goto out_disable_msi;
-- }
+- if (card->priv->adapter->connect_status == LIBERTAS_CONNECTED)
+- netif_wake_queue(card->priv->dev);
+- }
++ if (int_cause & IF_CS_H_IC_TX_OVER)
++ lbs_host_to_card_done(card->priv);
+
+ /* clear interrupt */
+ if_cs_write16(card, IF_CS_C_INT_CAUSE, int_cause & IF_CS_C_IC_MASK);
+ }
-
-- mutex_lock(&priv->mutex);
-+ iwl4965_disable_interrupts(priv);
+- libertas_interrupt(card->priv->dev);
++ spin_lock(&card->priv->driver_lock);
++ lbs_interrupt(card->priv);
++ spin_unlock(&card->priv->driver_lock);
-- err = sysfs_create_group(&pdev->dev.kobj, &iwl_attribute_group);
-+ err = sysfs_create_group(&pdev->dev.kobj, &iwl4965_attribute_group);
- if (err) {
- IWL_ERROR("failed to create sysfs device attributes\n");
-- mutex_unlock(&priv->mutex);
- goto out_release_irq;
+ return IRQ_HANDLED;
+ }
+@@ -286,7 +281,7 @@ static irqreturn_t if_cs_interrupt(int irq, void *data)
+ /*
+ * Called from if_cs_host_to_card to send a command to the hardware
+ */
+-static int if_cs_send_cmd(wlan_private *priv, u8 *buf, u16 nb)
++static int if_cs_send_cmd(struct lbs_private *priv, u8 *buf, u16 nb)
+ {
+ struct if_cs_card *card = (struct if_cs_card *)priv->card;
+ int ret = -1;
+@@ -331,7 +326,7 @@ done:
+ /*
+ * Called from if_cs_host_to_card to send a data to the hardware
+ */
+-static void if_cs_send_data(wlan_private *priv, u8 *buf, u16 nb)
++static void if_cs_send_data(struct lbs_private *priv, u8 *buf, u16 nb)
+ {
+ struct if_cs_card *card = (struct if_cs_card *)priv->card;
+
+@@ -354,7 +349,7 @@ static void if_cs_send_data(wlan_private *priv, u8 *buf, u16 nb)
+ /*
+ * Get the command result out of the card.
+ */
+-static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
++static int if_cs_receive_cmdres(struct lbs_private *priv, u8 *data, u32 *len)
+ {
+ int ret = -1;
+ u16 val;
+@@ -369,7 +364,7 @@ static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
}
-- /* fetch ucode file from disk, alloc and copy to bus-master buffers ...
-- * ucode filename and max sizes are card-specific. */
-- err = iwl_read_ucode(priv);
-+ /* nic init */
-+ iwl4965_set_bit(priv, CSR_GIO_CHICKEN_BITS,
-+ CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER);
-+
-+ iwl4965_set_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
-+ err = iwl4965_poll_bit(priv, CSR_GP_CNTRL,
-+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
-+ CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
-+ if (err < 0) {
-+ IWL_DEBUG_INFO("Failed to init the card\n");
-+ goto out_remove_sysfs;
-+ }
-+ /* Read the EEPROM */
-+ err = iwl4965_eeprom_init(priv);
- if (err) {
-- IWL_ERROR("Could not read microcode: %d\n", err);
-- mutex_unlock(&priv->mutex);
-- goto out_pci_alloc;
-+ IWL_ERROR("Unable to init EEPROM\n");
-+ goto out_remove_sysfs;
+ *len = if_cs_read16(priv->card, IF_CS_C_CMD_LEN);
+- if ((*len == 0) || (*len > MRVDRV_SIZE_OF_CMD_BUFFER)) {
++ if ((*len == 0) || (*len > LBS_CMD_BUFFER_SIZE)) {
+ lbs_pr_err("card cmd buffer has invalid # of bytes (%d)\n", *len);
+ goto out;
}
-+ /* MAC Address location in EEPROM same for 3945/4965 */
-+ get_eeprom_mac(priv, priv->mac_addr);
-+ IWL_DEBUG_INFO("MAC address: %s\n", print_mac(mac, priv->mac_addr));
-+ SET_IEEE80211_PERM_ADDR(priv->hw, priv->mac_addr);
+@@ -379,6 +374,9 @@ static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
+ if (*len & 1)
+ data[*len-1] = if_cs_read8(priv->card, IF_CS_C_CMD);
-- mutex_unlock(&priv->mutex);
--
-- IWL_DEBUG_INFO("Queing UP work.\n");
-+ iwl4965_rate_control_register(priv->hw);
-+ err = ieee80211_register_hw(priv->hw);
-+ if (err) {
-+ IWL_ERROR("Failed to register network device (error %d)\n", err);
-+ goto out_remove_sysfs;
-+ }
++ /* This is a workaround for firmware that reports too many
++ * bytes */
++ *len -= 8;
+ ret = 0;
+ out:
+ lbs_deb_leave_args(LBS_DEB_CS, "ret %d, len %d", ret, *len);
+@@ -386,7 +384,7 @@ out:
+ }
-- queue_work(priv->workqueue, &priv->up);
-+ priv->hw->conf.beacon_int = 100;
-+ priv->mac80211_registered = 1;
-+ pci_save_state(pdev);
-+ pci_disable_device(pdev);
- return 0;
+-static struct sk_buff *if_cs_receive_data(wlan_private *priv)
++static struct sk_buff *if_cs_receive_data(struct lbs_private *priv)
+ {
+ struct sk_buff *skb = NULL;
+ u16 len;
+@@ -616,7 +614,10 @@ done:
+ /********************************************************************/
-- out_pci_alloc:
-- iwl_dealloc_ucode_pci(priv);
--
-- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
-+ out_remove_sysfs:
-+ sysfs_remove_group(&pdev->dev.kobj, &iwl4965_attribute_group);
+ /* Send commands or data packets to the card */
+-static int if_cs_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
++static int if_cs_host_to_card(struct lbs_private *priv,
++ u8 type,
++ u8 *buf,
++ u16 nb)
+ {
+ int ret = -1;
- out_release_irq:
-- free_irq(pdev->irq, priv);
+@@ -641,18 +642,16 @@ static int if_cs_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
+ }
+
+
+-static int if_cs_get_int_status(wlan_private *priv, u8 *ireg)
++static int if_cs_get_int_status(struct lbs_private *priv, u8 *ireg)
+ {
+ struct if_cs_card *card = (struct if_cs_card *)priv->card;
+- //wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+ u16 int_cause;
+- u8 *cmdbuf;
+ *ireg = 0;
+
+ lbs_deb_enter(LBS_DEB_CS);
+
+- if (priv->adapter->surpriseremoved)
++ if (priv->surpriseremoved)
+ goto out;
+
+ int_cause = if_cs_read16(card, IF_CS_C_INT_CAUSE) & IF_CS_C_IC_MASK;
+@@ -668,7 +667,7 @@ sbi_get_int_status_exit:
+ /* is there a data packet for us? */
+ if (*ireg & IF_CS_C_S_RX_UPLD_RDY) {
+ struct sk_buff *skb = if_cs_receive_data(priv);
+- libertas_process_rxed_packet(priv, skb);
++ lbs_process_rxed_packet(priv, skb);
+ *ireg &= ~IF_CS_C_S_RX_UPLD_RDY;
+ }
+
+@@ -678,31 +677,24 @@ sbi_get_int_status_exit:
+
+ /* Card has a command result for us */
+ if (*ireg & IF_CS_C_S_CMD_UPLD_RDY) {
+- spin_lock(&priv->adapter->driver_lock);
+- if (!priv->adapter->cur_cmd) {
+- cmdbuf = priv->upld_buf;
+- priv->adapter->hisregcpy &= ~IF_CS_C_S_RX_UPLD_RDY;
+- } else {
+- cmdbuf = priv->adapter->cur_cmd->bufvirtualaddr;
+- }
-
-- out_disable_msi:
-- pci_disable_msi(pdev);
- destroy_workqueue(priv->workqueue);
- priv->workqueue = NULL;
-- iwl_unset_hw_setting(priv);
-+ iwl4965_unset_hw_setting(priv);
+- ret = if_cs_receive_cmdres(priv, cmdbuf, &priv->upld_len);
+- spin_unlock(&priv->adapter->driver_lock);
++ spin_lock(&priv->driver_lock);
++ ret = if_cs_receive_cmdres(priv, priv->upld_buf, &priv->upld_len);
++ spin_unlock(&priv->driver_lock);
+ if (ret < 0)
+ lbs_pr_err("could not receive cmd from card\n");
+ }
- out_iounmap:
- pci_iounmap(pdev, priv->hw_base);
-@@ -9168,9 +9233,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- return err;
+ out:
+- lbs_deb_leave_args(LBS_DEB_CS, "ret %d, ireg 0x%x, hisregcpy 0x%x", ret, *ireg, priv->adapter->hisregcpy);
++ lbs_deb_leave_args(LBS_DEB_CS, "ret %d, ireg 0x%x, hisregcpy 0x%x", ret, *ireg, priv->hisregcpy);
+ return ret;
}
--static void iwl_pci_remove(struct pci_dev *pdev)
-+static void iwl4965_pci_remove(struct pci_dev *pdev)
+
+-static int if_cs_read_event_cause(wlan_private *priv)
++static int if_cs_read_event_cause(struct lbs_private *priv)
{
-- struct iwl_priv *priv = pci_get_drvdata(pdev);
-+ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
- struct list_head *p, *q;
- int i;
+ lbs_deb_enter(LBS_DEB_CS);
-@@ -9181,43 +9246,41 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+- priv->adapter->eventcause = (if_cs_read16(priv->card, IF_CS_C_STATUS) & IF_CS_C_S_STATUS_MASK) >> 5;
++ priv->eventcause = (if_cs_read16(priv->card, IF_CS_C_STATUS) & IF_CS_C_S_STATUS_MASK) >> 5;
+ if_cs_write16(priv->card, IF_CS_H_INT_CAUSE, IF_CS_H_IC_HOST_EVENT);
- set_bit(STATUS_EXIT_PENDING, &priv->status);
+ return 0;
+@@ -746,7 +738,7 @@ static void if_cs_release(struct pcmcia_device *p_dev)
+ static int if_cs_probe(struct pcmcia_device *p_dev)
+ {
+ int ret = -ENOMEM;
+- wlan_private *priv;
++ struct lbs_private *priv;
+ struct if_cs_card *card;
+ /* CIS parsing */
+ tuple_t tuple;
+@@ -856,7 +848,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
+ goto out2;
-- iwl_down(priv);
-+ iwl4965_down(priv);
+ /* Make this card known to the libertas driver */
+- priv = libertas_add_card(card, &p_dev->dev);
++ priv = lbs_add_card(card, &p_dev->dev);
+ if (!priv) {
+ ret = -ENOMEM;
+ goto out2;
+@@ -869,7 +861,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
+ priv->hw_get_int_status = if_cs_get_int_status;
+ priv->hw_read_event_cause = if_cs_read_event_cause;
- /* Free MAC hash list for ADHOC */
- for (i = 0; i < IWL_IBSS_MAC_HASH_SIZE; i++) {
- list_for_each_safe(p, q, &priv->ibss_mac_hash[i]) {
- list_del(p);
-- kfree(list_entry(p, struct iwl_ibss_seq, list));
-+ kfree(list_entry(p, struct iwl4965_ibss_seq, list));
- }
+- priv->adapter->fw_ready = 1;
++ priv->fw_ready = 1;
+
+ /* Now actually get the IRQ */
+ ret = request_irq(p_dev->irq.AssignedIRQ, if_cs_interrupt,
+@@ -885,7 +877,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
+ if_cs_enable_ints(card);
+
+ /* And finally bring the card up */
+- if (libertas_start_card(priv) != 0) {
++ if (lbs_start_card(priv) != 0) {
+ lbs_pr_err("could not activate card\n");
+ goto out3;
}
+@@ -894,7 +886,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
+ goto out;
-- sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
-+ sysfs_remove_group(&pdev->dev.kobj, &iwl4965_attribute_group);
+ out3:
+- libertas_remove_card(priv);
++ lbs_remove_card(priv);
+ out2:
+ ioport_unmap(card->iobase);
+ out1:
+@@ -917,8 +909,8 @@ static void if_cs_detach(struct pcmcia_device *p_dev)
-- iwl_dealloc_ucode_pci(priv);
-+ iwl4965_dealloc_ucode_pci(priv);
+ lbs_deb_enter(LBS_DEB_CS);
- if (priv->rxq.bd)
-- iwl_rx_queue_free(priv, &priv->rxq);
-- iwl_hw_txq_ctx_free(priv);
-+ iwl4965_rx_queue_free(priv, &priv->rxq);
-+ iwl4965_hw_txq_ctx_free(priv);
+- libertas_stop_card(card->priv);
+- libertas_remove_card(card->priv);
++ lbs_stop_card(card->priv);
++ lbs_remove_card(card->priv);
+ if_cs_disable_ints(card);
+ if_cs_release(p_dev);
+ kfree(card);
+@@ -939,7 +931,7 @@ static struct pcmcia_device_id if_cs_ids[] = {
+ MODULE_DEVICE_TABLE(pcmcia, if_cs_ids);
-- iwl_unset_hw_setting(priv);
-- iwl_clear_stations_table(priv);
-+ iwl4965_unset_hw_setting(priv);
-+ iwl4965_clear_stations_table(priv);
- if (priv->mac80211_registered) {
- ieee80211_unregister_hw(priv->hw);
-- iwl_rate_control_unregister(priv->hw);
-+ iwl4965_rate_control_unregister(priv->hw);
- }
+-static struct pcmcia_driver libertas_driver = {
++static struct pcmcia_driver lbs_driver = {
+ .owner = THIS_MODULE,
+ .drv = {
+ .name = DRV_NAME,
+@@ -955,7 +947,7 @@ static int __init if_cs_init(void)
+ int ret;
+
+ lbs_deb_enter(LBS_DEB_CS);
+- ret = pcmcia_register_driver(&libertas_driver);
++ ret = pcmcia_register_driver(&lbs_driver);
+ lbs_deb_leave(LBS_DEB_CS);
+ return ret;
+ }
+@@ -964,7 +956,7 @@ static int __init if_cs_init(void)
+ static void __exit if_cs_exit(void)
+ {
+ lbs_deb_enter(LBS_DEB_CS);
+- pcmcia_unregister_driver(&libertas_driver);
++ pcmcia_unregister_driver(&lbs_driver);
+ lbs_deb_leave(LBS_DEB_CS);
+ }
+
+diff --git a/drivers/net/wireless/libertas/if_sdio.c b/drivers/net/wireless/libertas/if_sdio.c
+index 4f1efb1..eed7320 100644
+--- a/drivers/net/wireless/libertas/if_sdio.c
++++ b/drivers/net/wireless/libertas/if_sdio.c
+@@ -19,7 +19,7 @@
+ * current block size.
+ *
+ * As SDIO is still new to the kernel, it is unfortunately common with
+- * bugs in the host controllers related to that. One such bug is that
++ * bugs in the host controllers related to that. One such bug is that
+ * controllers cannot do transfers that aren't a multiple of 4 bytes.
+ * If you don't have time to fix the host controller driver, you can
+ * work around the problem by modifying if_sdio_host_to_card() and
+@@ -40,11 +40,11 @@
+ #include "dev.h"
+ #include "if_sdio.h"
+
+-static char *libertas_helper_name = NULL;
+-module_param_named(helper_name, libertas_helper_name, charp, 0644);
++static char *lbs_helper_name = NULL;
++module_param_named(helper_name, lbs_helper_name, charp, 0644);
- /*netif_stop_queue(dev); */
- flush_workqueue(priv->workqueue);
+-static char *libertas_fw_name = NULL;
+-module_param_named(fw_name, libertas_fw_name, charp, 0644);
++static char *lbs_fw_name = NULL;
++module_param_named(fw_name, lbs_fw_name, charp, 0644);
-- /* ieee80211_unregister_hw calls iwl_mac_stop, which flushes
-+ /* ieee80211_unregister_hw calls iwl4965_mac_stop, which flushes
- * priv->workqueue... so we can't take down the workqueue
- * until now... */
- destroy_workqueue(priv->workqueue);
- priv->workqueue = NULL;
+ static const struct sdio_device_id if_sdio_ids[] = {
+ { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_LIBERTAS) },
+@@ -82,7 +82,7 @@ struct if_sdio_packet {
-- free_irq(pdev->irq, priv);
-- pci_disable_msi(pdev);
- pci_iounmap(pdev, priv->hw_base);
- pci_release_regions(pdev);
- pci_disable_device(pdev);
-@@ -9236,93 +9299,31 @@ static void iwl_pci_remove(struct pci_dev *pdev)
+ struct if_sdio_card {
+ struct sdio_func *func;
+- wlan_private *priv;
++ struct lbs_private *priv;
- #ifdef CONFIG_PM
+ int model;
+ unsigned long ioport;
+@@ -134,32 +134,26 @@ static int if_sdio_handle_cmd(struct if_sdio_card *card,
--static int iwl_pci_suspend(struct pci_dev *pdev, pm_message_t state)
-+static int iwl4965_pci_suspend(struct pci_dev *pdev, pm_message_t state)
- {
-- struct iwl_priv *priv = pci_get_drvdata(pdev);
-+ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
+ lbs_deb_enter(LBS_DEB_SDIO);
-- set_bit(STATUS_IN_SUSPEND, &priv->status);
--
-- /* Take down the device; powers it off, etc. */
-- iwl_down(priv);
+- spin_lock_irqsave(&card->priv->adapter->driver_lock, flags);
++ spin_lock_irqsave(&card->priv->driver_lock, flags);
+
+- if (!card->priv->adapter->cur_cmd) {
+- lbs_deb_sdio("discarding spurious response\n");
+- ret = 0;
+- goto out;
+- }
-
-- if (priv->mac80211_registered)
-- ieee80211_stop_queues(priv->hw);
-+ if (priv->is_open) {
-+ set_bit(STATUS_IN_SUSPEND, &priv->status);
-+ iwl4965_mac_stop(priv->hw);
-+ priv->is_open = 1;
-+ }
+- if (size > MRVDRV_SIZE_OF_CMD_BUFFER) {
++ if (size > LBS_CMD_BUFFER_SIZE) {
+ lbs_deb_sdio("response packet too large (%d bytes)\n",
+ (int)size);
+ ret = -E2BIG;
+ goto out;
+ }
-- pci_save_state(pdev);
-- pci_disable_device(pdev);
- pci_set_power_state(pdev, PCI_D3hot);
+- memcpy(card->priv->adapter->cur_cmd->bufvirtualaddr, buffer, size);
++ memcpy(card->priv->upld_buf, buffer, size);
+ card->priv->upld_len = size;
- return 0;
- }
+ card->int_cause |= MRVDRV_CMD_UPLD_RDY;
--static void iwl_resume(struct iwl_priv *priv)
-+static int iwl4965_pci_resume(struct pci_dev *pdev)
- {
-- unsigned long flags;
--
-- /* The following it a temporary work around due to the
-- * suspend / resume not fully initializing the NIC correctly.
-- * Without all of the following, resume will not attempt to take
-- * down the NIC (it shouldn't really need to) and will just try
-- * and bring the NIC back up. However that fails during the
-- * ucode verification process. This then causes iwl_down to be
-- * called *after* iwl_hw_nic_init() has succeeded -- which
-- * then lets the next init sequence succeed. So, we've
-- * replicated all of that NIC init code here... */
--
-- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
--
-- iwl_hw_nic_init(priv);
--
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR,
-- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
-- iwl_write32(priv, CSR_INT, 0xFFFFFFFF);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-- iwl_write32(priv, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
--
-- /* tell the device to stop sending interrupts */
-- iwl_disable_interrupts(priv);
--
-- spin_lock_irqsave(&priv->lock, flags);
-- iwl_clear_bit(priv, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
--
-- if (!iwl_grab_restricted_access(priv)) {
-- iwl_write_restricted_reg(priv, APMG_CLK_DIS_REG,
-- APMG_CLK_VAL_DMA_CLK_RQT);
-- iwl_release_restricted_access(priv);
-- }
-- spin_unlock_irqrestore(&priv->lock, flags);
--
-- udelay(5);
--
-- iwl_hw_nic_reset(priv);
--
-- /* Bring the device back up */
-- clear_bit(STATUS_IN_SUSPEND, &priv->status);
-- queue_work(priv->workqueue, &priv->up);
--}
--
--static int iwl_pci_resume(struct pci_dev *pdev)
--{
-- struct iwl_priv *priv = pci_get_drvdata(pdev);
-- int err;
--
-- printk(KERN_INFO "Coming out of suspend...\n");
-+ struct iwl4965_priv *priv = pci_get_drvdata(pdev);
+- libertas_interrupt(card->priv->dev);
++ lbs_interrupt(card->priv);
- pci_set_power_state(pdev, PCI_D0);
-- err = pci_enable_device(pdev);
-- pci_restore_state(pdev);
--
-- /*
-- * Suspend/Resume resets the PCI configuration space, so we have to
-- * re-disable the RETRY_TIMEOUT register (0x41) to keep PCI Tx retries
-- * from interfering with C3 CPU state. pci_restore_state won't help
-- * here since it only restores the first 64 bytes pci config header.
-- */
-- pci_write_config_byte(pdev, 0x41, 0x00);
+ ret = 0;
-- iwl_resume(priv);
-+ if (priv->is_open)
-+ iwl4965_mac_start(priv->hw);
+ out:
+- spin_unlock_irqrestore(&card->priv->adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&card->priv->driver_lock, flags);
-+ clear_bit(STATUS_IN_SUSPEND, &priv->status);
- return 0;
- }
+ lbs_deb_leave_args(LBS_DEB_SDIO, "ret %d", ret);
-@@ -9334,33 +9335,33 @@ static int iwl_pci_resume(struct pci_dev *pdev)
- *
- *****************************************************************************/
+@@ -194,7 +188,7 @@ static int if_sdio_handle_data(struct if_sdio_card *card,
--static struct pci_driver iwl_driver = {
-+static struct pci_driver iwl4965_driver = {
- .name = DRV_NAME,
-- .id_table = iwl_hw_card_ids,
-- .probe = iwl_pci_probe,
-- .remove = __devexit_p(iwl_pci_remove),
-+ .id_table = iwl4965_hw_card_ids,
-+ .probe = iwl4965_pci_probe,
-+ .remove = __devexit_p(iwl4965_pci_remove),
- #ifdef CONFIG_PM
-- .suspend = iwl_pci_suspend,
-- .resume = iwl_pci_resume,
-+ .suspend = iwl4965_pci_suspend,
-+ .resume = iwl4965_pci_resume,
- #endif
- };
+ memcpy(data, buffer, size);
--static int __init iwl_init(void)
-+static int __init iwl4965_init(void)
- {
+- libertas_process_rxed_packet(card->priv, skb);
++ lbs_process_rxed_packet(card->priv, skb);
- int ret;
- printk(KERN_INFO DRV_NAME ": " DRV_DESCRIPTION ", " DRV_VERSION "\n");
- printk(KERN_INFO DRV_NAME ": " DRV_COPYRIGHT "\n");
-- ret = pci_register_driver(&iwl_driver);
-+ ret = pci_register_driver(&iwl4965_driver);
- if (ret) {
- IWL_ERROR("Unable to initialize PCI module\n");
- return ret;
- }
--#ifdef CONFIG_IWLWIFI_DEBUG
-- ret = driver_create_file(&iwl_driver.driver, &driver_attr_debug_level);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ ret = driver_create_file(&iwl4965_driver.driver, &driver_attr_debug_level);
- if (ret) {
- IWL_ERROR("Unable to create driver sysfs file\n");
-- pci_unregister_driver(&iwl_driver);
-+ pci_unregister_driver(&iwl4965_driver);
- return ret;
+ ret = 0;
+
+@@ -231,14 +225,14 @@ static int if_sdio_handle_event(struct if_sdio_card *card,
+ event <<= SBI_EVENT_CAUSE_SHIFT;
}
- #endif
-@@ -9368,32 +9369,34 @@ static int __init iwl_init(void)
- return ret;
- }
--static void __exit iwl_exit(void)
-+static void __exit iwl4965_exit(void)
- {
--#ifdef CONFIG_IWLWIFI_DEBUG
-- driver_remove_file(&iwl_driver.driver, &driver_attr_debug_level);
-+#ifdef CONFIG_IWL4965_DEBUG
-+ driver_remove_file(&iwl4965_driver.driver, &driver_attr_debug_level);
- #endif
-- pci_unregister_driver(&iwl_driver);
-+ pci_unregister_driver(&iwl4965_driver);
- }
+- spin_lock_irqsave(&card->priv->adapter->driver_lock, flags);
++ spin_lock_irqsave(&card->priv->driver_lock, flags);
--module_param_named(antenna, iwl_param_antenna, int, 0444);
-+module_param_named(antenna, iwl4965_param_antenna, int, 0444);
- MODULE_PARM_DESC(antenna, "select antenna (1=Main, 2=Aux, default 0 [both])");
--module_param_named(disable, iwl_param_disable, int, 0444);
-+module_param_named(disable, iwl4965_param_disable, int, 0444);
- MODULE_PARM_DESC(disable, "manually disable the radio (default 0 [radio on])");
--module_param_named(hwcrypto, iwl_param_hwcrypto, int, 0444);
-+module_param_named(hwcrypto, iwl4965_param_hwcrypto, int, 0444);
- MODULE_PARM_DESC(hwcrypto,
- "using hardware crypto engine (default 0 [software])\n");
--module_param_named(debug, iwl_param_debug, int, 0444);
-+module_param_named(debug, iwl4965_param_debug, int, 0444);
- MODULE_PARM_DESC(debug, "debug output mask");
--module_param_named(disable_hw_scan, iwl_param_disable_hw_scan, int, 0444);
-+module_param_named(disable_hw_scan, iwl4965_param_disable_hw_scan, int, 0444);
- MODULE_PARM_DESC(disable_hw_scan, "disable hardware scanning (default 0)");
+ card->event = event;
+ card->int_cause |= MRVDRV_CARDEVENT;
--module_param_named(queues_num, iwl_param_queues_num, int, 0444);
-+module_param_named(queues_num, iwl4965_param_queues_num, int, 0444);
- MODULE_PARM_DESC(queues_num, "number of hw queues.");
+- libertas_interrupt(card->priv->dev);
++ lbs_interrupt(card->priv);
- /* QoS */
--module_param_named(qos_enable, iwl_param_qos_enable, int, 0444);
-+module_param_named(qos_enable, iwl4965_param_qos_enable, int, 0444);
- MODULE_PARM_DESC(qos_enable, "enable all QoS functionality");
-+module_param_named(amsdu_size_8K, iwl4965_param_amsdu_size_8K, int, 0444);
-+MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size");
+- spin_unlock_irqrestore(&card->priv->adapter->driver_lock, flags);
++ spin_unlock_irqrestore(&card->priv->driver_lock, flags);
--module_exit(iwl_exit);
--module_init(iwl_init);
-+module_exit(iwl4965_exit);
-+module_init(iwl4965_init);
-diff --git a/drivers/net/wireless/iwlwifi/iwlwifi.h b/drivers/net/wireless/iwlwifi/iwlwifi.h
-deleted file mode 100644
-index 432ce88..0000000
---- a/drivers/net/wireless/iwlwifi/iwlwifi.h
-+++ /dev/null
-@@ -1,708 +0,0 @@
--/******************************************************************************
-- *
-- * Copyright(c) 2003 - 2007 Intel Corporation. All rights reserved.
-- *
-- * Portions of this file are derived from the ipw3945 project, as well
-- * as portions of the ieee80211 subsystem header files.
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of version 2 of the GNU General Public License as
-- * published by the Free Software Foundation.
-- *
-- * This program is distributed in the hope that it will be useful, but WITHOUT
-- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
-- * more details.
-- *
-- * You should have received a copy of the GNU General Public License along with
-- * this program; if not, write to the Free Software Foundation, Inc.,
-- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
-- *
-- * The full GNU General Public License is included in this distribution in the
-- * file called LICENSE.
-- *
-- * Contact Information:
-- * James P. Ketrenos <ipw2100-admin at linux.intel.com>
-- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-- *
-- *****************************************************************************/
--
--#ifndef __iwlwifi_h__
--#define __iwlwifi_h__
--
--#include <linux/pci.h> /* for struct pci_device_id */
--#include <linux/kernel.h>
--#include <net/ieee80211_radiotap.h>
--
--struct iwl_priv;
--
--/* Hardware specific file defines the PCI IDs table for that hardware module */
--extern struct pci_device_id iwl_hw_card_ids[];
--
--#include "iwl-hw.h"
--#if IWL == 3945
--#define DRV_NAME "iwl3945"
--#include "iwl-3945-hw.h"
--#elif IWL == 4965
--#define DRV_NAME "iwl4965"
--#include "iwl-4965-hw.h"
--#endif
--
--#include "iwl-prph.h"
--
--/*
-- * Driver implementation data structures, constants, inline
-- * functions
-- *
-- * NOTE: DO NOT PUT HARDWARE/UCODE SPECIFIC DECLRATIONS HERE
-- *
-- * Hardware specific declrations go into iwl-*hw.h
-- *
-- */
--
--#include "iwl-debug.h"
--
--/* Default noise level to report when noise measurement is not available.
-- * This may be because we're:
-- * 1) Not associated (4965, no beacon statistics being sent to driver)
-- * 2) Scanning (noise measurement does not apply to associated channel)
-- * 3) Receiving CCK (3945 delivers noise info only for OFDM frames)
-- * Use default noise value of -127 ... this is below the range of measurable
-- * Rx dBm for either 3945 or 4965, so it can indicate "unmeasurable" to user.
-- * Also, -127 works better than 0 when averaging frames with/without
-- * noise info (e.g. averaging might be done in app); measured dBm values are
-- * always negative ... using a negative value as the default keeps all
-- * averages within an s8's (used in some apps) range of negative values. */
--#define IWL_NOISE_MEAS_NOT_AVAILABLE (-127)
--
--/* Module parameters accessible from iwl-*.c */
--extern int iwl_param_disable_hw_scan;
--extern int iwl_param_debug;
--extern int iwl_param_mode;
--extern int iwl_param_disable;
--extern int iwl_param_antenna;
--extern int iwl_param_hwcrypto;
--extern int iwl_param_qos_enable;
--extern int iwl_param_queues_num;
--
--enum iwl_antenna {
-- IWL_ANTENNA_DIVERSITY,
-- IWL_ANTENNA_MAIN,
-- IWL_ANTENNA_AUX
--};
--
--/*
-- * RTS threshold here is total size [2347] minus 4 FCS bytes
-- * Per spec:
-- * a value of 0 means RTS on all data/management packets
-- * a value > max MSDU size means no RTS
-- * else RTS for data/management frames where MPDU is larger
-- * than RTS value.
-- */
--#define DEFAULT_RTS_THRESHOLD 2347U
--#define MIN_RTS_THRESHOLD 0U
--#define MAX_RTS_THRESHOLD 2347U
--#define MAX_MSDU_SIZE 2304U
--#define MAX_MPDU_SIZE 2346U
--#define DEFAULT_BEACON_INTERVAL 100U
--#define DEFAULT_SHORT_RETRY_LIMIT 7U
--#define DEFAULT_LONG_RETRY_LIMIT 4U
--
--struct iwl_rx_mem_buffer {
-- dma_addr_t dma_addr;
-- struct sk_buff *skb;
-- struct list_head list;
--};
--
--struct iwl_rt_rx_hdr {
-- struct ieee80211_radiotap_header rt_hdr;
-- __le64 rt_tsf; /* TSF */
-- u8 rt_flags; /* radiotap packet flags */
-- u8 rt_rate; /* rate in 500kb/s */
-- __le16 rt_channelMHz; /* channel in MHz */
-- __le16 rt_chbitmask; /* channel bitfield */
-- s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
-- s8 rt_dbmnoise;
-- u8 rt_antenna; /* antenna number */
-- u8 payload[0]; /* payload... */
--} __attribute__ ((packed));
--
--struct iwl_rt_tx_hdr {
-- struct ieee80211_radiotap_header rt_hdr;
-- u8 rt_rate; /* rate in 500kb/s */
-- __le16 rt_channel; /* channel in mHz */
-- __le16 rt_chbitmask; /* channel bitfield */
-- s8 rt_dbmsignal; /* signal in dBm, kluged to signed */
-- u8 rt_antenna; /* antenna number */
-- u8 payload[0]; /* payload... */
--} __attribute__ ((packed));
--
--/*
-- * Generic queue structure
-- *
-- * Contains common data for Rx and Tx queues
-- */
--struct iwl_queue {
-- int n_bd; /* number of BDs in this queue */
-- int first_empty; /* 1-st empty entry (index) host_w*/
-- int last_used; /* last used entry (index) host_r*/
-- dma_addr_t dma_addr; /* physical addr for BD's */
-- int n_window; /* safe queue window */
-- u32 id;
-- int low_mark; /* low watermark, resume queue if free
-- * space more than this */
-- int high_mark; /* high watermark, stop queue if free
-- * space less than this */
--} __attribute__ ((packed));
--
--#define MAX_NUM_OF_TBS (20)
--
--struct iwl_tx_info {
-- struct ieee80211_tx_status status;
-- struct sk_buff *skb[MAX_NUM_OF_TBS];
--};
--
--/**
-- * struct iwl_tx_queue - Tx Queue for DMA
-- * @need_update: need to update read/write index
-- * @shed_retry: queue is HT AGG enabled
-- *
-- * Queue consists of circular buffer of BD's and required locking structures.
-- */
--struct iwl_tx_queue {
-- struct iwl_queue q;
-- struct iwl_tfd_frame *bd;
-- struct iwl_cmd *cmd;
-- dma_addr_t dma_addr_cmd;
-- struct iwl_tx_info *txb;
-- int need_update;
-- int sched_retry;
-- int active;
--};
--
--#include "iwl-channel.h"
--
--#if IWL == 3945
--#include "iwl-3945-rs.h"
--#else
--#include "iwl-4965-rs.h"
--#endif
--
--#define IWL_TX_FIFO_AC0 0
--#define IWL_TX_FIFO_AC1 1
--#define IWL_TX_FIFO_AC2 2
--#define IWL_TX_FIFO_AC3 3
--#define IWL_TX_FIFO_HCCA_1 5
--#define IWL_TX_FIFO_HCCA_2 6
--#define IWL_TX_FIFO_NONE 7
--
--/* Minimum number of queues. MAX_NUM is defined in hw specific files */
--#define IWL_MIN_NUM_QUEUES 4
--
--/* Power management (not Tx power) structures */
--
--struct iwl_power_vec_entry {
-- struct iwl_powertable_cmd cmd;
-- u8 no_dtim;
--};
--#define IWL_POWER_RANGE_0 (0)
--#define IWL_POWER_RANGE_1 (1)
--
--#define IWL_POWER_MODE_CAM 0x00 /* Continuously Aware Mode, always on */
--#define IWL_POWER_INDEX_3 0x03
--#define IWL_POWER_INDEX_5 0x05
--#define IWL_POWER_AC 0x06
--#define IWL_POWER_BATTERY 0x07
--#define IWL_POWER_LIMIT 0x07
--#define IWL_POWER_MASK 0x0F
--#define IWL_POWER_ENABLED 0x10
--#define IWL_POWER_LEVEL(x) ((x) & IWL_POWER_MASK)
--
--struct iwl_power_mgr {
-- spinlock_t lock;
-- struct iwl_power_vec_entry pwr_range_0[IWL_POWER_AC];
-- struct iwl_power_vec_entry pwr_range_1[IWL_POWER_AC];
-- u8 active_index;
-- u32 dtim_val;
--};
--
--#define IEEE80211_DATA_LEN 2304
--#define IEEE80211_4ADDR_LEN 30
--#define IEEE80211_HLEN (IEEE80211_4ADDR_LEN)
--#define IEEE80211_FRAME_LEN (IEEE80211_DATA_LEN + IEEE80211_HLEN)
--
--struct iwl_frame {
-- union {
-- struct ieee80211_hdr frame;
-- struct iwl_tx_beacon_cmd beacon;
-- u8 raw[IEEE80211_FRAME_LEN];
-- u8 cmd[360];
-- } u;
-- struct list_head list;
--};
--
--#define SEQ_TO_QUEUE(x) ((x >> 8) & 0xbf)
--#define QUEUE_TO_SEQ(x) ((x & 0xbf) << 8)
--#define SEQ_TO_INDEX(x) (x & 0xff)
--#define INDEX_TO_SEQ(x) (x & 0xff)
--#define SEQ_HUGE_FRAME (0x4000)
--#define SEQ_RX_FRAME __constant_cpu_to_le16(0x8000)
--#define SEQ_TO_SN(seq) (((seq) & IEEE80211_SCTL_SEQ) >> 4)
--#define SN_TO_SEQ(ssn) (((ssn) << 4) & IEEE80211_SCTL_SEQ)
--#define MAX_SN ((IEEE80211_SCTL_SEQ) >> 4)
--
--enum {
-- /* CMD_SIZE_NORMAL = 0, */
-- CMD_SIZE_HUGE = (1 << 0),
-- /* CMD_SYNC = 0, */
-- CMD_ASYNC = (1 << 1),
-- /* CMD_NO_SKB = 0, */
-- CMD_WANT_SKB = (1 << 2),
--};
--
--struct iwl_cmd;
--struct iwl_priv;
--
--struct iwl_cmd_meta {
-- struct iwl_cmd_meta *source;
-- union {
-- struct sk_buff *skb;
-- int (*callback)(struct iwl_priv *priv,
-- struct iwl_cmd *cmd, struct sk_buff *skb);
-- } __attribute__ ((packed)) u;
--
-- /* The CMD_SIZE_HUGE flag bit indicates that the command
-- * structure is stored at the end of the shared queue memory. */
-- u32 flags;
--
--} __attribute__ ((packed));
--
--struct iwl_cmd {
-- struct iwl_cmd_meta meta;
-- struct iwl_cmd_header hdr;
-- union {
-- struct iwl_addsta_cmd addsta;
-- struct iwl_led_cmd led;
-- u32 flags;
-- u8 val8;
-- u16 val16;
-- u32 val32;
-- struct iwl_bt_cmd bt;
-- struct iwl_rxon_time_cmd rxon_time;
-- struct iwl_powertable_cmd powertable;
-- struct iwl_qosparam_cmd qosparam;
-- struct iwl_tx_cmd tx;
-- struct iwl_tx_beacon_cmd tx_beacon;
-- struct iwl_rxon_assoc_cmd rxon_assoc;
-- u8 *indirect;
-- u8 payload[360];
-- } __attribute__ ((packed)) cmd;
--} __attribute__ ((packed));
--
--struct iwl_host_cmd {
-- u8 id;
-- u16 len;
-- struct iwl_cmd_meta meta;
-- const void *data;
--};
--
--#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_cmd) - \
-- sizeof(struct iwl_cmd_meta))
--
--/*
-- * RX related structures and functions
-- */
--#define RX_FREE_BUFFERS 64
--#define RX_LOW_WATERMARK 8
--
--#define SUP_RATE_11A_MAX_NUM_CHANNELS 8
--#define SUP_RATE_11B_MAX_NUM_CHANNELS 4
--#define SUP_RATE_11G_MAX_NUM_CHANNELS 12
--
--/**
-- * struct iwl_rx_queue - Rx queue
-- * @processed: Internal index to last handled Rx packet
-- * @read: Shared index to newest available Rx buffer
-- * @write: Shared index to oldest written Rx packet
-- * @free_count: Number of pre-allocated buffers in rx_free
-- * @rx_free: list of free SKBs for use
-- * @rx_used: List of Rx buffers with no SKB
-- * @need_update: flag to indicate we need to update read/write index
-- *
-- * NOTE: rx_free and rx_used are used as a FIFO for iwl_rx_mem_buffers
-- */
--struct iwl_rx_queue {
-- __le32 *bd;
-- dma_addr_t dma_addr;
-- struct iwl_rx_mem_buffer pool[RX_QUEUE_SIZE + RX_FREE_BUFFERS];
-- struct iwl_rx_mem_buffer *queue[RX_QUEUE_SIZE];
-- u32 processed;
-- u32 read;
-- u32 write;
-- u32 free_count;
-- struct list_head rx_free;
-- struct list_head rx_used;
-- int need_update;
-- spinlock_t lock;
--};
--
--#define IWL_SUPPORTED_RATES_IE_LEN 8
--
--#define SCAN_INTERVAL 100
--
--#define MAX_A_CHANNELS 252
--#define MIN_A_CHANNELS 7
--
--#define MAX_B_CHANNELS 14
--#define MIN_B_CHANNELS 1
--
--#define STATUS_HCMD_ACTIVE 0 /* host command in progress */
--#define STATUS_INT_ENABLED 1
--#define STATUS_RF_KILL_HW 2
--#define STATUS_RF_KILL_SW 3
--#define STATUS_INIT 4
--#define STATUS_ALIVE 5
--#define STATUS_READY 6
--#define STATUS_TEMPERATURE 7
--#define STATUS_GEO_CONFIGURED 8
--#define STATUS_EXIT_PENDING 9
--#define STATUS_IN_SUSPEND 10
--#define STATUS_STATISTICS 11
--#define STATUS_SCANNING 12
--#define STATUS_SCAN_ABORTING 13
--#define STATUS_SCAN_HW 14
--#define STATUS_POWER_PMI 15
--#define STATUS_FW_ERROR 16
--
--#define MAX_TID_COUNT 9
--
--#define IWL_INVALID_RATE 0xFF
--#define IWL_INVALID_VALUE -1
--
--#if IWL == 4965
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
--struct iwl_ht_agg {
-- u16 txq_id;
-- u16 frame_count;
-- u16 wait_for_ba;
-- u16 start_idx;
-- u32 bitmap0;
-- u32 bitmap1;
-- u32 rate_n_flags;
--};
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
--#endif
--
--struct iwl_tid_data {
-- u16 seq_number;
--#if IWL == 4965
--#ifdef CONFIG_IWLWIFI_HT
--#ifdef CONFIG_IWLWIFI_HT_AGG
-- struct iwl_ht_agg agg;
--#endif /* CONFIG_IWLWIFI_HT_AGG */
--#endif /* CONFIG_IWLWIFI_HT */
--#endif
--};
--
--struct iwl_hw_key {
-- enum ieee80211_key_alg alg;
-- int keylen;
-- u8 key[32];
--};
--
--union iwl_ht_rate_supp {
-- u16 rates;
-- struct {
-- u8 siso_rate;
-- u8 mimo_rate;
-- };
--};
--
--#ifdef CONFIG_IWLWIFI_HT
--#define CFG_HT_RX_AMPDU_FACTOR_DEF (0x3)
--#define HT_IE_MAX_AMSDU_SIZE_4K (0)
--#define CFG_HT_MPDU_DENSITY_2USEC (0x5)
--#define CFG_HT_MPDU_DENSITY_DEF CFG_HT_MPDU_DENSITY_2USEC
--
--struct sta_ht_info {
-- u8 is_ht;
-- u16 rx_mimo_ps_mode;
-- u16 tx_mimo_ps_mode;
-- u16 control_channel;
-- u8 max_amsdu_size;
-- u8 ampdu_factor;
-- u8 mpdu_density;
-- u8 operating_mode;
-- u8 supported_chan_width;
-- u8 extension_chan_offset;
-- u8 is_green_field;
-- u8 sgf;
-- u8 supp_rates[16];
-- u8 tx_chan_width;
-- u8 chan_width_cap;
--};
--#endif /*CONFIG_IWLWIFI_HT */
--
--#ifdef CONFIG_IWLWIFI_QOS
--
--union iwl_qos_capabity {
-- struct {
-- u8 edca_count:4; /* bit 0-3 */
-- u8 q_ack:1; /* bit 4 */
-- u8 queue_request:1; /* bit 5 */
-- u8 txop_request:1; /* bit 6 */
-- u8 reserved:1; /* bit 7 */
-- } q_AP;
-- struct {
-- u8 acvo_APSD:1; /* bit 0 */
-- u8 acvi_APSD:1; /* bit 1 */
-- u8 ac_bk_APSD:1; /* bit 2 */
-- u8 ac_be_APSD:1; /* bit 3 */
-- u8 q_ack:1; /* bit 4 */
-- u8 max_len:2; /* bit 5-6 */
-- u8 more_data_ack:1; /* bit 7 */
-- } q_STA;
-- u8 val;
--};
--
--/* QoS sturctures */
--struct iwl_qos_info {
-- int qos_enable;
-- int qos_active;
-- union iwl_qos_capabity qos_cap;
-- struct iwl_qosparam_cmd def_qos_parm;
--};
--#endif /*CONFIG_IWLWIFI_QOS */
--
--#define STA_PS_STATUS_WAKE 0
--#define STA_PS_STATUS_SLEEP 1
--
--struct iwl_station_entry {
-- struct iwl_addsta_cmd sta;
-- struct iwl_tid_data tid[MAX_TID_COUNT];
--#if IWL == 3945
-- union {
-- struct {
-- u8 rate;
-- u8 flags;
-- } s;
-- u16 rate_n_flags;
-- } current_rate;
--#endif
-- u8 used;
-- u8 ps_status;
-- struct iwl_hw_key keyinfo;
--};
--
--/* one for each uCode image (inst/data, boot/init/runtime) */
--struct fw_image_desc {
-- void *v_addr; /* access by driver */
-- dma_addr_t p_addr; /* access by card's busmaster DMA */
-- u32 len; /* bytes */
--};
--
--/* uCode file layout */
--struct iwl_ucode {
-- __le32 ver; /* major/minor/subminor */
-- __le32 inst_size; /* bytes of runtime instructions */
-- __le32 data_size; /* bytes of runtime data */
-- __le32 init_size; /* bytes of initialization instructions */
-- __le32 init_data_size; /* bytes of initialization data */
-- __le32 boot_size; /* bytes of bootstrap instructions */
-- u8 data[0]; /* data in same order as "size" elements */
--};
--
--#define IWL_IBSS_MAC_HASH_SIZE 32
--
--struct iwl_ibss_seq {
-- u8 mac[ETH_ALEN];
-- u16 seq_num;
-- u16 frag_num;
-- unsigned long packet_time;
-- struct list_head list;
--};
--
--struct iwl_driver_hw_info {
-- u16 max_txq_num;
-- u16 ac_queue_count;
-- u32 rx_buffer_size;
-- u16 tx_cmd_len;
-- u16 max_rxq_size;
-- u16 max_rxq_log;
-- u32 cck_flag;
-- u8 max_stations;
-- u8 bcast_sta_id;
-- void *shared_virt;
-- dma_addr_t shared_phys;
--};
--
--
--#define STA_FLG_RTS_MIMO_PROT_MSK __constant_cpu_to_le32(1 << 17)
--#define STA_FLG_AGG_MPDU_8US_MSK __constant_cpu_to_le32(1 << 18)
--#define STA_FLG_MAX_AGG_SIZE_POS (19)
--#define STA_FLG_MAX_AGG_SIZE_MSK __constant_cpu_to_le32(3 << 19)
--#define STA_FLG_FAT_EN_MSK __constant_cpu_to_le32(1 << 21)
--#define STA_FLG_MIMO_DIS_MSK __constant_cpu_to_le32(1 << 22)
--#define STA_FLG_AGG_MPDU_DENSITY_POS (23)
--#define STA_FLG_AGG_MPDU_DENSITY_MSK __constant_cpu_to_le32(7 << 23)
--#define HT_SHORT_GI_20MHZ_ONLY (1 << 0)
--#define HT_SHORT_GI_40MHZ_ONLY (1 << 1)
--
--
--#include "iwl-priv.h"
--
--/* Requires full declaration of iwl_priv before including */
--#include "iwl-io.h"
--
--#define IWL_RX_HDR(x) ((struct iwl_rx_frame_hdr *)(\
-- x->u.rx_frame.stats.payload + \
-- x->u.rx_frame.stats.phy_count))
--#define IWL_RX_END(x) ((struct iwl_rx_frame_end *)(\
-- IWL_RX_HDR(x)->payload + \
-- le16_to_cpu(IWL_RX_HDR(x)->len)))
--#define IWL_RX_STATS(x) (&x->u.rx_frame.stats)
--#define IWL_RX_DATA(x) (IWL_RX_HDR(x)->payload)
--
--
--/******************************************************************************
-- *
-- * Functions implemented in iwl-base.c which are forward declared here
-- * for use by iwl-*.c
-- *
-- *****************************************************************************/
--struct iwl_addsta_cmd;
--extern int iwl_send_add_station(struct iwl_priv *priv,
-- struct iwl_addsta_cmd *sta, u8 flags);
--extern const char *iwl_get_tx_fail_reason(u32 status);
--extern u8 iwl_add_station(struct iwl_priv *priv, const u8 *bssid,
-- int is_ap, u8 flags);
--extern int iwl_is_network_packet(struct iwl_priv *priv,
-- struct ieee80211_hdr *header);
--extern int iwl_power_init_handle(struct iwl_priv *priv);
--extern int iwl_eeprom_init(struct iwl_priv *priv);
--#ifdef CONFIG_IWLWIFI_DEBUG
--extern void iwl_report_frame(struct iwl_priv *priv,
-- struct iwl_rx_packet *pkt,
-- struct ieee80211_hdr *header, int group100);
--#else
--static inline void iwl_report_frame(struct iwl_priv *priv,
-- struct iwl_rx_packet *pkt,
-- struct ieee80211_hdr *header,
-- int group100) {}
--#endif
--extern int iwl_tx_queue_update_write_ptr(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq);
--extern void iwl_handle_data_packet_monitor(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb,
-- void *data, short len,
-- struct ieee80211_rx_status *stats,
-- u16 phy_flags);
--extern int is_duplicate_packet(struct iwl_priv *priv, struct ieee80211_hdr
-- *header);
--extern void iwl_rx_queue_free(struct iwl_priv *priv, struct iwl_rx_queue *rxq);
--extern int iwl_rx_queue_alloc(struct iwl_priv *priv);
--extern void iwl_rx_queue_reset(struct iwl_priv *priv,
-- struct iwl_rx_queue *rxq);
--extern int iwl_calc_db_from_ratio(int sig_ratio);
--extern int iwl_calc_sig_qual(int rssi_dbm, int noise_dbm);
--extern int iwl_tx_queue_init(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq, int count, u32 id);
--extern int iwl_rx_queue_restock(struct iwl_priv *priv);
--extern void iwl_rx_replenish(void *data);
--extern void iwl_tx_queue_free(struct iwl_priv *priv, struct iwl_tx_queue *txq);
--extern int iwl_send_cmd_pdu(struct iwl_priv *priv, u8 id, u16 len,
-- const void *data);
--extern int __must_check iwl_send_cmd_async(struct iwl_priv *priv,
-- struct iwl_host_cmd *cmd);
--extern int __must_check iwl_send_cmd_sync(struct iwl_priv *priv,
-- struct iwl_host_cmd *cmd);
--extern int __must_check iwl_send_cmd(struct iwl_priv *priv,
-- struct iwl_host_cmd *cmd);
--extern unsigned int iwl_fill_beacon_frame(struct iwl_priv *priv,
-- struct ieee80211_hdr *hdr,
-- const u8 *dest, int left);
--extern int iwl_rx_queue_update_write_ptr(struct iwl_priv *priv,
-- struct iwl_rx_queue *q);
--extern int iwl_send_statistics_request(struct iwl_priv *priv);
--extern void iwl_set_decrypted_flag(struct iwl_priv *priv, struct sk_buff *skb,
-- u32 decrypt_res,
-- struct ieee80211_rx_status *stats);
--extern __le16 *ieee80211_get_qos_ctrl(struct ieee80211_hdr *hdr);
--
--extern const u8 BROADCAST_ADDR[ETH_ALEN];
--
--/*
-- * Currently used by iwl-3945-rs... look at restructuring so that it doesn't
-- * call this... todo... fix that.
--*/
--extern u8 iwl_sync_station(struct iwl_priv *priv, int sta_id,
-- u16 tx_rate, u8 flags);
--
--static inline int iwl_is_associated(struct iwl_priv *priv)
--{
-- return (priv->active_rxon.filter_flags & RXON_FILTER_ASSOC_MSK) ? 1 : 0;
--}
--
--/******************************************************************************
-- *
-- * Functions implemented in iwl-[34]*.c which are forward declared here
-- * for use by iwl-base.c
-- *
-- * NOTE: The implementation of these functions are hardware specific
-- * which is why they are in the hardware specific files (vs. iwl-base.c)
-- *
-- * Naming convention --
-- * iwl_ <-- Its part of iwlwifi (should be changed to iwl_)
-- * iwl_hw_ <-- Hardware specific (implemented in iwl-XXXX.c by all HW)
-- * iwlXXXX_ <-- Hardware specific (implemented in iwl-XXXX.c for XXXX)
-- * iwl_bg_ <-- Called from work queue context
-- * iwl_mac_ <-- mac80211 callback
-- *
-- ****************************************************************************/
--extern void iwl_hw_rx_handler_setup(struct iwl_priv *priv);
--extern void iwl_hw_setup_deferred_work(struct iwl_priv *priv);
--extern void iwl_hw_cancel_deferred_work(struct iwl_priv *priv);
--extern int iwl_hw_rxq_stop(struct iwl_priv *priv);
--extern int iwl_hw_set_hw_setting(struct iwl_priv *priv);
--extern int iwl_hw_nic_init(struct iwl_priv *priv);
--extern void iwl_hw_card_show_info(struct iwl_priv *priv);
--extern int iwl_hw_nic_stop_master(struct iwl_priv *priv);
--extern void iwl_hw_txq_ctx_free(struct iwl_priv *priv);
--extern void iwl_hw_txq_ctx_stop(struct iwl_priv *priv);
--extern int iwl_hw_nic_reset(struct iwl_priv *priv);
--extern int iwl_hw_txq_attach_buf_to_tfd(struct iwl_priv *priv, void *tfd,
-- dma_addr_t addr, u16 len);
--extern int iwl_hw_txq_free_tfd(struct iwl_priv *priv, struct iwl_tx_queue *txq);
--extern int iwl_hw_get_temperature(struct iwl_priv *priv);
--extern int iwl_hw_tx_queue_init(struct iwl_priv *priv,
-- struct iwl_tx_queue *txq);
--extern unsigned int iwl_hw_get_beacon_cmd(struct iwl_priv *priv,
-- struct iwl_frame *frame, u8 rate);
--extern int iwl_hw_get_rx_read(struct iwl_priv *priv);
--extern void iwl_hw_build_tx_cmd_rate(struct iwl_priv *priv,
-- struct iwl_cmd *cmd,
-- struct ieee80211_tx_control *ctrl,
-- struct ieee80211_hdr *hdr,
-- int sta_id, int tx_id);
--extern int iwl_hw_reg_send_txpower(struct iwl_priv *priv);
--extern int iwl_hw_reg_set_txpower(struct iwl_priv *priv, s8 power);
--extern void iwl_hw_rx_statistics(struct iwl_priv *priv,
-- struct iwl_rx_mem_buffer *rxb);
--extern void iwl_disable_events(struct iwl_priv *priv);
--extern int iwl4965_get_temperature(const struct iwl_priv *priv);
--
--/**
-- * iwl_hw_find_station - Find station id for a given BSSID
-- * @bssid: MAC address of station ID to find
-- *
-- * NOTE: This should not be hardware specific but the code has
-- * not yet been merged into a single common layer for managing the
-- * station tables.
-- */
--extern u8 iwl_hw_find_station(struct iwl_priv *priv, const u8 *bssid);
--
--extern int iwl_hw_channel_switch(struct iwl_priv *priv, u16 channel);
--extern int iwl_tx_queue_reclaim(struct iwl_priv *priv, int txq_id, int index);
--#endif
-diff --git a/drivers/net/wireless/libertas/11d.c b/drivers/net/wireless/libertas/11d.c
-index 9cf0211..5e10ce0 100644
---- a/drivers/net/wireless/libertas/11d.c
-+++ b/drivers/net/wireless/libertas/11d.c
-@@ -43,16 +43,14 @@ static struct chan_freq_power channel_freq_power_UN_BG[] = {
- {14, 2484, TX_PWR_DEFAULT}
- };
+ ret = 0;
--static u8 wlan_region_2_code(u8 * region)
-+static u8 lbs_region_2_code(u8 *region)
- {
- u8 i;
-- u8 size = sizeof(region_code_mapping)/
-- sizeof(struct region_code_mapping);
+@@ -454,7 +448,7 @@ static int if_sdio_prog_helper(struct if_sdio_card *card)
- for (i = 0; region[i] && i < COUNTRY_CODE_LEN; i++)
- region[i] = toupper(region[i]);
+ chunk_size = min(size, (size_t)60);
-- for (i = 0; i < size; i++) {
-+ for (i = 0; i < ARRAY_SIZE(region_code_mapping); i++) {
- if (!memcmp(region, region_code_mapping[i].region,
- COUNTRY_CODE_LEN))
- return (region_code_mapping[i].code);
-@@ -62,12 +60,11 @@ static u8 wlan_region_2_code(u8 * region)
- return (region_code_mapping[0].code);
- }
+- *((u32*)chunk_buffer) = cpu_to_le32(chunk_size);
++ *((__le32*)chunk_buffer) = cpu_to_le32(chunk_size);
+ memcpy(chunk_buffer + 4, firmware, chunk_size);
+ /*
+ lbs_deb_sdio("sending %d bytes chunk\n", chunk_size);
+@@ -694,7 +688,8 @@ out:
+ /* Libertas callbacks */
+ /*******************************************************************/
--static u8 *wlan_code_2_region(u8 code)
-+static u8 *lbs_code_2_region(u8 code)
- {
- u8 i;
-- u8 size = sizeof(region_code_mapping)
-- / sizeof(struct region_code_mapping);
-- for (i = 0; i < size; i++) {
-+
-+ for (i = 0; i < ARRAY_SIZE(region_code_mapping); i++) {
- if (region_code_mapping[i].code == code)
- return (region_code_mapping[i].region);
- }
-@@ -82,7 +79,7 @@ static u8 *wlan_code_2_region(u8 code)
- * @param nrchan number of channels
- * @return the nrchan-th chan number
- */
--static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
-+static u8 lbs_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 *chan)
- /*find the nrchan-th chan after the firstchan*/
+-static int if_sdio_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
++static int if_sdio_host_to_card(struct lbs_private *priv,
++ u8 type, u8 *buf, u16 nb)
{
- u8 i;
-@@ -90,8 +87,7 @@ static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
- u8 cfp_no;
-
- cfp = channel_freq_power_UN_BG;
-- cfp_no = sizeof(channel_freq_power_UN_BG) /
-- sizeof(struct chan_freq_power);
-+ cfp_no = ARRAY_SIZE(channel_freq_power_UN_BG);
+ int ret;
+ struct if_sdio_card *card;
+@@ -775,7 +770,7 @@ out:
+ return ret;
+ }
- for (i = 0; i < cfp_no; i++) {
- if ((cfp + i)->channel == firstchan) {
-@@ -117,7 +113,7 @@ static u8 wlan_get_chan_11d(u8 band, u8 firstchan, u8 nrchan, u8 * chan)
- * @param parsed_region_chan pointer to parsed_region_chan_11d
- * @return TRUE; FALSE
- */
--static u8 wlan_channel_known_11d(u8 chan,
-+static u8 lbs_channel_known_11d(u8 chan,
- struct parsed_region_chan_11d * parsed_region_chan)
+-static int if_sdio_get_int_status(wlan_private *priv, u8 *ireg)
++static int if_sdio_get_int_status(struct lbs_private *priv, u8 *ireg)
{
- struct chan_power_11d *chanpwr = parsed_region_chan->chanpwr;
-@@ -138,19 +134,15 @@ static u8 wlan_channel_known_11d(u8 chan,
+ struct if_sdio_card *card;
+
+@@ -791,7 +786,7 @@ static int if_sdio_get_int_status(wlan_private *priv, u8 *ireg)
return 0;
}
--u32 libertas_chan_2_freq(u8 chan, u8 band)
-+u32 lbs_chan_2_freq(u8 chan, u8 band)
+-static int if_sdio_read_event_cause(wlan_private *priv)
++static int if_sdio_read_event_cause(struct lbs_private *priv)
{
- struct chan_freq_power *cf;
-- u16 cnt;
- u16 i;
- u32 freq = 0;
-
- cf = channel_freq_power_UN_BG;
-- cnt =
-- sizeof(channel_freq_power_UN_BG) /
-- sizeof(struct chan_freq_power);
+ struct if_sdio_card *card;
-- for (i = 0; i < cnt; i++) {
-+ for (i = 0; i < ARRAY_SIZE(channel_freq_power_UN_BG); i++) {
- if (chan == cf[i].channel)
- freq = cf[i].freq;
- }
-@@ -160,7 +152,7 @@ u32 libertas_chan_2_freq(u8 chan, u8 band)
+@@ -799,7 +794,7 @@ static int if_sdio_read_event_cause(wlan_private *priv)
- static int generate_domain_info_11d(struct parsed_region_chan_11d
- *parsed_region_chan,
-- struct wlan_802_11d_domain_reg * domaininfo)
-+ struct lbs_802_11d_domain_reg *domaininfo)
- {
- u8 nr_subband = 0;
+ card = priv->card;
-@@ -225,7 +217,7 @@ static int generate_domain_info_11d(struct parsed_region_chan_11d
- * @param *parsed_region_chan pointer to parsed_region_chan_11d
- * @return N/A
- */
--static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_chan,
-+static void lbs_generate_parsed_region_chan_11d(struct region_channel *region_chan,
- struct parsed_region_chan_11d *
- parsed_region_chan)
- {
-@@ -246,7 +238,7 @@ static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_
- parsed_region_chan->band = region_chan->band;
- parsed_region_chan->region = region_chan->region;
- memcpy(parsed_region_chan->countrycode,
-- wlan_code_2_region(region_chan->region), COUNTRY_CODE_LEN);
-+ lbs_code_2_region(region_chan->region), COUNTRY_CODE_LEN);
+- priv->adapter->eventcause = card->event;
++ priv->eventcause = card->event;
- lbs_deb_11d("region 0x%x, band %d\n", parsed_region_chan->region,
- parsed_region_chan->band);
-@@ -272,7 +264,7 @@ static void wlan_generate_parsed_region_chan_11d(struct region_channel * region_
- * @param chan chan
- * @return TRUE;FALSE
- */
--static u8 wlan_region_chan_supported_11d(u8 region, u8 band, u8 chan)
-+static u8 lbs_region_chan_supported_11d(u8 region, u8 band, u8 chan)
- {
- struct chan_freq_power *cfp;
- int cfp_no;
-@@ -281,7 +273,7 @@ static u8 wlan_region_chan_supported_11d(u8 region, u8 band, u8 chan)
+ lbs_deb_leave(LBS_DEB_SDIO);
- lbs_deb_enter(LBS_DEB_11D);
+@@ -834,12 +829,9 @@ static void if_sdio_interrupt(struct sdio_func *func)
+ * Ignore the define name, this really means the card has
+ * successfully received the command.
+ */
+- if (cause & IF_SDIO_H_INT_DNLD) {
+- if ((card->priv->dnld_sent == DNLD_DATA_SENT) &&
+- (card->priv->adapter->connect_status == LIBERTAS_CONNECTED))
+- netif_wake_queue(card->priv->dev);
+- card->priv->dnld_sent = DNLD_RES_RECEIVED;
+- }
++ if (cause & IF_SDIO_H_INT_DNLD)
++ lbs_host_to_card_done(card->priv);
++
-- cfp = libertas_get_region_cfp_table(region, band, &cfp_no);
-+ cfp = lbs_get_region_cfp_table(region, band, &cfp_no);
- if (cfp == NULL)
- return 0;
+ if (cause & IF_SDIO_H_INT_UPLD) {
+ ret = if_sdio_card_to_host(card);
+@@ -857,7 +849,7 @@ static int if_sdio_probe(struct sdio_func *func,
+ const struct sdio_device_id *id)
+ {
+ struct if_sdio_card *card;
+- wlan_private *priv;
++ struct lbs_private *priv;
+ int ret, i;
+ unsigned int model;
+ struct if_sdio_packet *packet;
+@@ -905,15 +897,15 @@ static int if_sdio_probe(struct sdio_func *func,
+ card->helper = if_sdio_models[i].helper;
+ card->firmware = if_sdio_models[i].firmware;
-@@ -346,7 +338,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
+- if (libertas_helper_name) {
++ if (lbs_helper_name) {
+ lbs_deb_sdio("overriding helper firmware: %s\n",
+- libertas_helper_name);
+- card->helper = libertas_helper_name;
++ lbs_helper_name);
++ card->helper = lbs_helper_name;
+ }
- /*Step1: check region_code */
- parsed_region_chan->region = region =
-- wlan_region_2_code(countryinfo->countrycode);
-+ lbs_region_2_code(countryinfo->countrycode);
+- if (libertas_fw_name) {
+- lbs_deb_sdio("overriding firmware: %s\n", libertas_fw_name);
+- card->firmware = libertas_fw_name;
++ if (lbs_fw_name) {
++ lbs_deb_sdio("overriding firmware: %s\n", lbs_fw_name);
++ card->firmware = lbs_fw_name;
+ }
- lbs_deb_11d("regioncode=%x\n", (u8) parsed_region_chan->region);
- lbs_deb_hex(LBS_DEB_11D, "countrycode", (char *)countryinfo->countrycode,
-@@ -375,7 +367,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
- for (i = 0; idx < MAX_NO_OF_CHAN && i < nrchan; i++) {
- /*step4: channel is supported? */
+ sdio_claim_host(func);
+@@ -951,7 +943,7 @@ static int if_sdio_probe(struct sdio_func *func,
+ if (ret)
+ goto reclaim;
-- if (!wlan_get_chan_11d(band, firstchan, i, &curchan)) {
-+ if (!lbs_get_chan_11d(band, firstchan, i, &curchan)) {
- /* Chan is not found in UN table */
- lbs_deb_11d("chan is not supported: %d \n", i);
- break;
-@@ -383,7 +375,7 @@ static int parse_domain_info_11d(struct ieeetypes_countryinfofullset*
+- priv = libertas_add_card(card, &func->dev);
++ priv = lbs_add_card(card, &func->dev);
+ if (!priv) {
+ ret = -ENOMEM;
+ goto reclaim;
+@@ -964,7 +956,7 @@ static int if_sdio_probe(struct sdio_func *func,
+ priv->hw_get_int_status = if_sdio_get_int_status;
+ priv->hw_read_event_cause = if_sdio_read_event_cause;
- lastchan = curchan;
+- priv->adapter->fw_ready = 1;
++ priv->fw_ready = 1;
-- if (wlan_region_chan_supported_11d
-+ if (lbs_region_chan_supported_11d
- (region, band, curchan)) {
- /*step5: Check if curchan is supported by mrvl in region */
- parsed_region_chan->chanpwr[idx].chan = curchan;
-@@ -419,14 +411,14 @@ done:
- * @param parsed_region_chan pointer to parsed_region_chan_11d
- * @return PASSIVE if chan is unknown; ACTIVE if chan is known
- */
--u8 libertas_get_scan_type_11d(u8 chan,
-+u8 lbs_get_scan_type_11d(u8 chan,
- struct parsed_region_chan_11d * parsed_region_chan)
- {
- u8 scan_type = CMD_SCAN_TYPE_PASSIVE;
+ /*
+ * Enable interrupts now that everything is set up
+@@ -975,7 +967,7 @@ static int if_sdio_probe(struct sdio_func *func,
+ if (ret)
+ goto reclaim;
- lbs_deb_enter(LBS_DEB_11D);
+- ret = libertas_start_card(priv);
++ ret = lbs_start_card(priv);
+ if (ret)
+ goto err_activate_card;
-- if (wlan_channel_known_11d(chan, parsed_region_chan)) {
-+ if (lbs_channel_known_11d(chan, parsed_region_chan)) {
- lbs_deb_11d("found, do active scan\n");
- scan_type = CMD_SCAN_TYPE_ACTIVE;
- } else {
-@@ -438,29 +430,29 @@ u8 libertas_get_scan_type_11d(u8 chan,
+@@ -987,7 +979,7 @@ out:
+ err_activate_card:
+ flush_scheduled_work();
+ free_netdev(priv->dev);
+- kfree(priv->adapter);
++ kfree(priv);
+ reclaim:
+ sdio_claim_host(func);
+ release_int:
+@@ -1017,11 +1009,11 @@ static void if_sdio_remove(struct sdio_func *func)
- }
+ card = sdio_get_drvdata(func);
--void libertas_init_11d(wlan_private * priv)
-+void lbs_init_11d(struct lbs_private *priv)
- {
-- priv->adapter->enable11d = 0;
-- memset(&(priv->adapter->parsed_region_chan), 0,
-+ priv->enable11d = 0;
-+ memset(&(priv->parsed_region_chan), 0,
- sizeof(struct parsed_region_chan_11d));
- return;
- }
+- card->priv->adapter->surpriseremoved = 1;
++ card->priv->surpriseremoved = 1;
- /**
- * @brief This function sets DOMAIN INFO to FW
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @return 0; -1
- */
--static int set_domain_info_11d(wlan_private * priv)
-+static int set_domain_info_11d(struct lbs_private *priv)
- {
- int ret;
+ lbs_deb_sdio("call remove card\n");
+- libertas_stop_card(card->priv);
+- libertas_remove_card(card->priv);
++ lbs_stop_card(card->priv);
++ lbs_remove_card(card->priv);
-- if (!priv->adapter->enable11d) {
-+ if (!priv->enable11d) {
- lbs_deb_11d("dnld domain Info with 11d disabled\n");
- return 0;
- }
+ flush_scheduled_work();
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11D_DOMAIN_INFO,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11D_DOMAIN_INFO,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP, 0, NULL);
- if (ret)
-@@ -471,28 +463,27 @@ static int set_domain_info_11d(wlan_private * priv)
+@@ -1052,7 +1044,7 @@ static struct sdio_driver if_sdio_driver = {
+ /* Module functions */
+ /*******************************************************************/
- /**
- * @brief This function setups scan channels
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @param band band
- * @return 0
- */
--int libertas_set_universaltable(wlan_private * priv, u8 band)
-+int lbs_set_universaltable(struct lbs_private *priv, u8 band)
+-static int if_sdio_init_module(void)
++static int __init if_sdio_init_module(void)
{
-- wlan_adapter *adapter = priv->adapter;
- u16 size = sizeof(struct chan_freq_power);
- u16 i = 0;
+ int ret = 0;
-- memset(adapter->universal_channel, 0,
-- sizeof(adapter->universal_channel));
-+ memset(priv->universal_channel, 0,
-+ sizeof(priv->universal_channel));
+@@ -1068,7 +1060,7 @@ static int if_sdio_init_module(void)
+ return ret;
+ }
-- adapter->universal_channel[i].nrcfp =
-+ priv->universal_channel[i].nrcfp =
- sizeof(channel_freq_power_UN_BG) / size;
- lbs_deb_11d("BG-band nrcfp %d\n",
-- adapter->universal_channel[i].nrcfp);
-+ priv->universal_channel[i].nrcfp);
+-static void if_sdio_exit_module(void)
++static void __exit if_sdio_exit_module(void)
+ {
+ lbs_deb_enter(LBS_DEB_SDIO);
-- adapter->universal_channel[i].CFP = channel_freq_power_UN_BG;
-- adapter->universal_channel[i].valid = 1;
-- adapter->universal_channel[i].region = UNIVERSAL_REGION_CODE;
-- adapter->universal_channel[i].band = band;
-+ priv->universal_channel[i].CFP = channel_freq_power_UN_BG;
-+ priv->universal_channel[i].valid = 1;
-+ priv->universal_channel[i].region = UNIVERSAL_REGION_CODE;
-+ priv->universal_channel[i].band = band;
- i++;
+diff --git a/drivers/net/wireless/libertas/if_sdio.h b/drivers/net/wireless/libertas/if_sdio.h
+index dfcaea7..533bdfb 100644
+--- a/drivers/net/wireless/libertas/if_sdio.h
++++ b/drivers/net/wireless/libertas/if_sdio.h
+@@ -9,8 +9,8 @@
+ * your option) any later version.
+ */
- return 0;
-@@ -500,21 +491,20 @@ int libertas_set_universaltable(wlan_private * priv, u8 band)
+-#ifndef LIBERTAS_IF_SDIO_H
+-#define LIBERTAS_IF_SDIO_H
++#ifndef _LBS_IF_SDIO_H
++#define _LBS_IF_SDIO_H
- /**
- * @brief This function implements command CMD_802_11D_DOMAIN_INFO
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @param cmd pointer to cmd buffer
- * @param cmdno cmd ID
- * @param cmdOption cmd action
- * @return 0
- */
--int libertas_cmd_802_11d_domain_info(wlan_private * priv,
-+int lbs_cmd_802_11d_domain_info(struct lbs_private *priv,
- struct cmd_ds_command *cmd, u16 cmdno,
- u16 cmdoption)
- {
- struct cmd_ds_802_11d_domain_info *pdomaininfo =
- &cmd->params.domaininfo;
- struct mrvlietypes_domainparamset *domain = &pdomaininfo->domain;
-- wlan_adapter *adapter = priv->adapter;
-- u8 nr_subband = adapter->domainreg.nr_subband;
-+ u8 nr_subband = priv->domainreg.nr_subband;
+ #define IF_SDIO_IOPORT 0x00
- lbs_deb_enter(LBS_DEB_11D);
+diff --git a/drivers/net/wireless/libertas/if_usb.c b/drivers/net/wireless/libertas/if_usb.c
+index cb59f46..75aed9d 100644
+--- a/drivers/net/wireless/libertas/if_usb.c
++++ b/drivers/net/wireless/libertas/if_usb.c
+@@ -5,7 +5,6 @@
+ #include <linux/moduleparam.h>
+ #include <linux/firmware.h>
+ #include <linux/netdevice.h>
+-#include <linux/list.h>
+ #include <linux/usb.h>
-@@ -526,12 +516,12 @@ int libertas_cmd_802_11d_domain_info(wlan_private * priv,
- cmd->size =
- cpu_to_le16(sizeof(pdomaininfo->action) + S_DS_GEN);
- lbs_deb_hex(LBS_DEB_11D, "802_11D_DOMAIN_INFO", (u8 *) cmd,
-- (int)(cmd->size));
-+ le16_to_cpu(cmd->size));
- goto done;
- }
+ #define DRV_NAME "usb8xxx"
+@@ -14,24 +13,16 @@
+ #include "decl.h"
+ #include "defs.h"
+ #include "dev.h"
++#include "cmd.h"
+ #include "if_usb.h"
- domain->header.type = cpu_to_le16(TLV_TYPE_DOMAIN);
-- memcpy(domain->countrycode, adapter->domainreg.countrycode,
-+ memcpy(domain->countrycode, priv->domainreg.countrycode,
- sizeof(domain->countrycode));
+-#define MESSAGE_HEADER_LEN 4
+-
+-static const char usbdriver_name[] = "usb8xxx";
+-static u8 *default_fw_name = "usb8388.bin";
++#define INSANEDEBUG 0
++#define lbs_deb_usb2(...) do { if (INSANEDEBUG) lbs_deb_usbd(__VA_ARGS__); } while (0)
- domain->header.len =
-@@ -539,7 +529,7 @@ int libertas_cmd_802_11d_domain_info(wlan_private * priv,
- sizeof(domain->countrycode));
+-static char *libertas_fw_name = NULL;
+-module_param_named(fw_name, libertas_fw_name, charp, 0644);
++#define MESSAGE_HEADER_LEN 4
- if (nr_subband) {
-- memcpy(domain->subband, adapter->domainreg.subband,
-+ memcpy(domain->subband, priv->domainreg.subband,
- nr_subband * sizeof(struct ieeetypes_subbandset));
+-/*
+- * We need to send a RESET command to all USB devices before
+- * we tear down the USB connection. Otherwise we would not
+- * be able to re-init device the device if the module gets
+- * loaded again. This is a list of all initialized USB devices,
+- * for the reset code see if_usb_reset_device()
+-*/
+-static LIST_HEAD(usb_devices);
++static char *lbs_fw_name = "usb8388.bin";
++module_param_named(fw_name, lbs_fw_name, charp, 0644);
- cmd->size = cpu_to_le16(sizeof(pdomaininfo->action) +
-@@ -560,11 +550,11 @@ done:
+ static struct usb_device_id if_usb_table[] = {
+ /* Enter the device signature inside */
+@@ -44,14 +35,16 @@ MODULE_DEVICE_TABLE(usb, if_usb_table);
- /**
- * @brief This function parses countryinfo from AP and download country info to FW
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @param resp pointer to command response buffer
- * @return 0; -1
- */
--int libertas_ret_802_11d_domain_info(wlan_private * priv,
-+int lbs_ret_802_11d_domain_info(struct lbs_private *priv,
- struct cmd_ds_command *resp)
- {
- struct cmd_ds_802_11d_domain_info *domaininfo = &resp->params.domaininforesp;
-@@ -606,31 +596,30 @@ int libertas_ret_802_11d_domain_info(wlan_private * priv,
+ static void if_usb_receive(struct urb *urb);
+ static void if_usb_receive_fwload(struct urb *urb);
+-static int if_usb_prog_firmware(struct usb_card_rec *cardp);
+-static int if_usb_host_to_card(wlan_private * priv, u8 type, u8 * payload, u16 nb);
+-static int if_usb_get_int_status(wlan_private * priv, u8 *);
+-static int if_usb_read_event_cause(wlan_private *);
+-static int usb_tx_block(struct usb_card_rec *cardp, u8 *payload, u16 nb);
+-static void if_usb_free(struct usb_card_rec *cardp);
+-static int if_usb_submit_rx_urb(struct usb_card_rec *cardp);
+-static int if_usb_reset_device(struct usb_card_rec *cardp);
++static int if_usb_prog_firmware(struct if_usb_card *cardp);
++static int if_usb_host_to_card(struct lbs_private *priv, uint8_t type,
++ uint8_t *payload, uint16_t nb);
++static int if_usb_get_int_status(struct lbs_private *priv, uint8_t *);
++static int if_usb_read_event_cause(struct lbs_private *);
++static int usb_tx_block(struct if_usb_card *cardp, uint8_t *payload,
++ uint16_t nb);
++static void if_usb_free(struct if_usb_card *cardp);
++static int if_usb_submit_rx_urb(struct if_usb_card *cardp);
++static int if_usb_reset_device(struct if_usb_card *cardp);
/**
- * @brief This function parses countryinfo from AP and download country info to FW
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @return 0; -1
+ * @brief call back function to handle the status of the URB
+@@ -60,37 +53,22 @@ static int if_usb_reset_device(struct usb_card_rec *cardp);
*/
--int libertas_parse_dnld_countryinfo_11d(wlan_private * priv,
-+int lbs_parse_dnld_countryinfo_11d(struct lbs_private *priv,
- struct bss_descriptor * bss)
+ static void if_usb_write_bulk_callback(struct urb *urb)
{
- int ret;
-- wlan_adapter *adapter = priv->adapter;
-
- lbs_deb_enter(LBS_DEB_11D);
-- if (priv->adapter->enable11d) {
-- memset(&adapter->parsed_region_chan, 0,
-+ if (priv->enable11d) {
-+ memset(&priv->parsed_region_chan, 0,
- sizeof(struct parsed_region_chan_11d));
- ret = parse_domain_info_11d(&bss->countryinfo, 0,
-- &adapter->parsed_region_chan);
-+ &priv->parsed_region_chan);
+- struct usb_card_rec *cardp = (struct usb_card_rec *) urb->context;
++ struct if_usb_card *cardp = (struct if_usb_card *) urb->context;
- if (ret == -1) {
- lbs_deb_11d("error parsing domain_info from AP\n");
- goto done;
- }
+ /* handle the transmission complete validations */
-- memset(&adapter->domainreg, 0,
-- sizeof(struct wlan_802_11d_domain_reg));
-- generate_domain_info_11d(&adapter->parsed_region_chan,
-- &adapter->domainreg);
-+ memset(&priv->domainreg, 0,
-+ sizeof(struct lbs_802_11d_domain_reg));
-+ generate_domain_info_11d(&priv->parsed_region_chan,
-+ &priv->domainreg);
+ if (urb->status == 0) {
+- wlan_private *priv = cardp->priv;
++ struct lbs_private *priv = cardp->priv;
- ret = set_domain_info_11d(priv);
+- /*
+- lbs_deb_usbd(&urb->dev->dev, "URB status is successfull\n");
+- lbs_deb_usbd(&urb->dev->dev, "Actual length transmitted %d\n",
+- urb->actual_length);
+- */
++ lbs_deb_usb2(&urb->dev->dev, "URB status is successful\n");
++ lbs_deb_usb2(&urb->dev->dev, "Actual length transmitted %d\n",
++ urb->actual_length);
-@@ -648,25 +637,23 @@ done:
+ /* Used for both firmware TX and regular TX. priv isn't
+ * valid at firmware load time.
+ */
+- if (priv) {
+- wlan_adapter *adapter = priv->adapter;
+- struct net_device *dev = priv->dev;
+-
+- priv->dnld_sent = DNLD_RES_RECEIVED;
+-
+- /* Wake main thread if commands are pending */
+- if (!adapter->cur_cmd)
+- wake_up_interruptible(&priv->waitq);
+-
+- if ((adapter->connect_status == LIBERTAS_CONNECTED)) {
+- netif_wake_queue(dev);
+- netif_wake_queue(priv->mesh_dev);
+- }
+- }
++ if (priv)
++ lbs_host_to_card_done(priv);
+ } else {
+ /* print the failure status number for debug */
+ lbs_pr_info("URB in failure status: %d\n", urb->status);
+@@ -101,10 +79,10 @@ static void if_usb_write_bulk_callback(struct urb *urb)
/**
- * @brief This function generates 11D info from user specified regioncode and download to FW
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @return 0; -1
+ * @brief free tx/rx urb, skb and rx buffer
+- * @param cardp pointer usb_card_rec
++ * @param cardp pointer if_usb_card
+ * @return N/A
*/
--int libertas_create_dnld_countryinfo_11d(wlan_private * priv)
-+int lbs_create_dnld_countryinfo_11d(struct lbs_private *priv)
+-static void if_usb_free(struct usb_card_rec *cardp)
++static void if_usb_free(struct if_usb_card *cardp)
{
- int ret;
-- wlan_adapter *adapter = priv->adapter;
- struct region_channel *region_chan;
- u8 j;
-
- lbs_deb_enter(LBS_DEB_11D);
-- lbs_deb_11d("curbssparams.band %d\n", adapter->curbssparams.band);
-+ lbs_deb_11d("curbssparams.band %d\n", priv->curbssparams.band);
+ lbs_deb_enter(LBS_DEB_USB);
-- if (priv->adapter->enable11d) {
-+ if (priv->enable11d) {
- /* update parsed_region_chan_11; dnld domaininf to FW */
+@@ -118,12 +96,58 @@ static void if_usb_free(struct usb_card_rec *cardp)
+ usb_free_urb(cardp->rx_urb);
+ cardp->rx_urb = NULL;
-- for (j = 0; j < sizeof(adapter->region_channel) /
-- sizeof(adapter->region_channel[0]); j++) {
-- region_chan = &adapter->region_channel[j];
-+ for (j = 0; j < ARRAY_SIZE(priv->region_channel); j++) {
-+ region_chan = &priv->region_channel[j];
+- kfree(cardp->bulk_out_buffer);
+- cardp->bulk_out_buffer = NULL;
++ kfree(cardp->ep_out_buf);
++ cardp->ep_out_buf = NULL;
- lbs_deb_11d("%d region_chan->band %d\n", j,
- region_chan->band);
-@@ -674,29 +661,28 @@ int libertas_create_dnld_countryinfo_11d(wlan_private * priv)
- if (!region_chan || !region_chan->valid
- || !region_chan->CFP)
- continue;
-- if (region_chan->band != adapter->curbssparams.band)
-+ if (region_chan->band != priv->curbssparams.band)
- continue;
- break;
- }
+ lbs_deb_leave(LBS_DEB_USB);
+ }
-- if (j >= sizeof(adapter->region_channel) /
-- sizeof(adapter->region_channel[0])) {
-+ if (j >= ARRAY_SIZE(priv->region_channel)) {
- lbs_deb_11d("region_chan not found, band %d\n",
-- adapter->curbssparams.band);
-+ priv->curbssparams.band);
- ret = -1;
- goto done;
- }
++static void if_usb_setup_firmware(struct lbs_private *priv)
++{
++ struct if_usb_card *cardp = priv->card;
++ struct cmd_ds_set_boot2_ver b2_cmd;
++ struct cmd_ds_802_11_fw_wake_method wake_method;
++
++ b2_cmd.hdr.size = cpu_to_le16(sizeof(b2_cmd));
++ b2_cmd.action = 0;
++ b2_cmd.version = cardp->boot2_version;
++
++ if (lbs_cmd_with_response(priv, CMD_SET_BOOT2_VER, &b2_cmd))
++ lbs_deb_usb("Setting boot2 version failed\n");
++
++ priv->wol_gpio = 2; /* Wake via GPIO2... */
++ priv->wol_gap = 20; /* ... after 20ms */
++ lbs_host_sleep_cfg(priv, EHS_WAKE_ON_UNICAST_DATA);
++
++ wake_method.hdr.size = cpu_to_le16(sizeof(wake_method));
++ wake_method.action = cpu_to_le16(CMD_ACT_GET);
++ if (lbs_cmd_with_response(priv, CMD_802_11_FW_WAKE_METHOD, &wake_method)) {
++ lbs_pr_info("Firmware does not seem to support PS mode\n");
++ } else {
++ if (le16_to_cpu(wake_method.method) == CMD_WAKE_METHOD_COMMAND_INT) {
++ lbs_deb_usb("Firmware seems to support PS with wake-via-command\n");
++ priv->ps_supported = 1;
++ } else {
++ /* The versions which boot up this way don't seem to
++ work even if we set it to the command interrupt */
++ lbs_pr_info("Firmware doesn't wake via command interrupt; disabling PS mode\n");
++ }
++ }
++}
++
++static void if_usb_fw_timeo(unsigned long priv)
++{
++ struct if_usb_card *cardp = (void *)priv;
++
++ if (cardp->fwdnldover) {
++ lbs_deb_usb("Download complete, no event. Assuming success\n");
++ } else {
++ lbs_pr_err("Download timed out\n");
++ cardp->surprise_removed = 1;
++ }
++ wake_up(&cardp->fw_wq);
++}
++
+ /**
+ * @brief sets the configuration values
+ * @param ifnum interface number
+@@ -136,23 +160,26 @@ static int if_usb_probe(struct usb_interface *intf,
+ struct usb_device *udev;
+ struct usb_host_interface *iface_desc;
+ struct usb_endpoint_descriptor *endpoint;
+- wlan_private *priv;
+- struct usb_card_rec *cardp;
++ struct lbs_private *priv;
++ struct if_usb_card *cardp;
+ int i;
-- memset(&adapter->parsed_region_chan, 0,
-+ memset(&priv->parsed_region_chan, 0,
- sizeof(struct parsed_region_chan_11d));
-- wlan_generate_parsed_region_chan_11d(region_chan,
-- &adapter->
-+ lbs_generate_parsed_region_chan_11d(region_chan,
-+ &priv->
- parsed_region_chan);
+ udev = interface_to_usbdev(intf);
-- memset(&adapter->domainreg, 0,
-- sizeof(struct wlan_802_11d_domain_reg));
-- generate_domain_info_11d(&adapter->parsed_region_chan,
-- &adapter->domainreg);
-+ memset(&priv->domainreg, 0,
-+ sizeof(struct lbs_802_11d_domain_reg));
-+ generate_domain_info_11d(&priv->parsed_region_chan,
-+ &priv->domainreg);
+- cardp = kzalloc(sizeof(struct usb_card_rec), GFP_KERNEL);
++ cardp = kzalloc(sizeof(struct if_usb_card), GFP_KERNEL);
+ if (!cardp) {
+ lbs_pr_err("Out of memory allocating private data.\n");
+ goto error;
+ }
- ret = set_domain_info_11d(priv);
++ setup_timer(&cardp->fw_timeout, if_usb_fw_timeo, (unsigned long)cardp);
++ init_waitqueue_head(&cardp->fw_wq);
++
+ cardp->udev = udev;
+ iface_desc = intf->cur_altsetting;
-diff --git a/drivers/net/wireless/libertas/11d.h b/drivers/net/wireless/libertas/11d.h
-index 3a6d1f8..811eea2 100644
---- a/drivers/net/wireless/libertas/11d.h
-+++ b/drivers/net/wireless/libertas/11d.h
-@@ -2,8 +2,8 @@
- * This header file contains data structures and
- * function declarations of 802.11d
- */
--#ifndef _WLAN_11D_
--#define _WLAN_11D_
-+#ifndef _LBS_11D_
-+#define _LBS_11D_
+ lbs_deb_usbd(&udev->dev, "bcdUSB = 0x%X bDeviceClass = 0x%X"
+- " bDeviceSubClass = 0x%X, bDeviceProtocol = 0x%X\n",
++ " bDeviceSubClass = 0x%X, bDeviceProtocol = 0x%X\n",
+ le16_to_cpu(udev->descriptor.bcdUSB),
+ udev->descriptor.bDeviceClass,
+ udev->descriptor.bDeviceSubClass,
+@@ -160,92 +187,62 @@ static int if_usb_probe(struct usb_interface *intf,
- #include "types.h"
- #include "defs.h"
-@@ -52,7 +52,7 @@ struct cmd_ds_802_11d_domain_info {
- } __attribute__ ((packed));
+ for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
+ endpoint = &iface_desc->endpoint[i].desc;
+- if ((endpoint->bEndpointAddress & USB_ENDPOINT_DIR_MASK)
+- && ((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
+- USB_ENDPOINT_XFER_BULK)) {
+- /* we found a bulk in endpoint */
+- lbs_deb_usbd(&udev->dev, "Bulk in size is %d\n",
+- le16_to_cpu(endpoint->wMaxPacketSize));
+- if (!(cardp->rx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
+- lbs_deb_usbd(&udev->dev,
+- "Rx URB allocation failed\n");
+- goto dealloc;
+- }
+- cardp->rx_urb_recall = 0;
+-
+- cardp->bulk_in_size =
+- le16_to_cpu(endpoint->wMaxPacketSize);
+- cardp->bulk_in_endpointAddr =
+- (endpoint->
+- bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+- lbs_deb_usbd(&udev->dev, "in_endpoint = %d\n",
+- endpoint->bEndpointAddress);
+- }
++ if (usb_endpoint_is_bulk_in(endpoint)) {
++ cardp->ep_in_size = le16_to_cpu(endpoint->wMaxPacketSize);
++ cardp->ep_in = usb_endpoint_num(endpoint);
- /** domain regulatory information */
--struct wlan_802_11d_domain_reg {
-+struct lbs_802_11d_domain_reg {
- /** country Code*/
- u8 countrycode[COUNTRY_CODE_LEN];
- /** No. of subband*/
-@@ -78,26 +78,28 @@ struct region_code_mapping {
- u8 code;
- };
+- if (((endpoint->
+- bEndpointAddress & USB_ENDPOINT_DIR_MASK) ==
+- USB_DIR_OUT)
+- && ((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
+- USB_ENDPOINT_XFER_BULK)) {
+- /* We found bulk out endpoint */
+- if (!(cardp->tx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
+- lbs_deb_usbd(&udev->dev,
+- "Tx URB allocation failed\n");
+- goto dealloc;
+- }
++ lbs_deb_usbd(&udev->dev, "in_endpoint = %d\n", cardp->ep_in);
++ lbs_deb_usbd(&udev->dev, "Bulk in size is %d\n", cardp->ep_in_size);
--u8 libertas_get_scan_type_11d(u8 chan,
-+struct lbs_private;
+- cardp->bulk_out_size =
+- le16_to_cpu(endpoint->wMaxPacketSize);
+- lbs_deb_usbd(&udev->dev,
+- "Bulk out size is %d\n",
+- le16_to_cpu(endpoint->wMaxPacketSize));
+- cardp->bulk_out_endpointAddr =
+- endpoint->bEndpointAddress;
+- lbs_deb_usbd(&udev->dev, "out_endpoint = %d\n",
+- endpoint->bEndpointAddress);
+- cardp->bulk_out_buffer =
+- kmalloc(MRVDRV_ETH_TX_PACKET_BUFFER_SIZE,
+- GFP_KERNEL);
+-
+- if (!cardp->bulk_out_buffer) {
+- lbs_deb_usbd(&udev->dev,
+- "Could not allocate buffer\n");
+- goto dealloc;
+- }
++ } else if (usb_endpoint_is_bulk_out(endpoint)) {
++ cardp->ep_out_size = le16_to_cpu(endpoint->wMaxPacketSize);
++ cardp->ep_out = usb_endpoint_num(endpoint);
+
-+u8 lbs_get_scan_type_11d(u8 chan,
- struct parsed_region_chan_11d *parsed_region_chan);
++ lbs_deb_usbd(&udev->dev, "out_endpoint = %d\n", cardp->ep_out);
++ lbs_deb_usbd(&udev->dev, "Bulk out size is %d\n", cardp->ep_out_size);
+ }
+ }
++ if (!cardp->ep_out_size || !cardp->ep_in_size) {
++ lbs_deb_usbd(&udev->dev, "Endpoints not found\n");
++ goto dealloc;
++ }
++ if (!(cardp->rx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
++ lbs_deb_usbd(&udev->dev, "Rx URB allocation failed\n");
++ goto dealloc;
++ }
++ if (!(cardp->tx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
++ lbs_deb_usbd(&udev->dev, "Tx URB allocation failed\n");
++ goto dealloc;
++ }
++ cardp->ep_out_buf = kmalloc(MRVDRV_ETH_TX_PACKET_BUFFER_SIZE, GFP_KERNEL);
++ if (!cardp->ep_out_buf) {
++ lbs_deb_usbd(&udev->dev, "Could not allocate buffer\n");
++ goto dealloc;
++ }
--u32 libertas_chan_2_freq(u8 chan, u8 band);
-+u32 lbs_chan_2_freq(u8 chan, u8 band);
+ /* Upload firmware */
+- cardp->rinfo.cardp = cardp;
+ if (if_usb_prog_firmware(cardp)) {
+- lbs_deb_usbd(&udev->dev, "FW upload failed");
++ lbs_deb_usbd(&udev->dev, "FW upload failed\n");
+ goto err_prog_firmware;
+ }
--void libertas_init_11d(wlan_private * priv);
-+void lbs_init_11d(struct lbs_private *priv);
+- if (!(priv = libertas_add_card(cardp, &udev->dev)))
++ if (!(priv = lbs_add_card(cardp, &udev->dev)))
+ goto err_prog_firmware;
--int libertas_set_universaltable(wlan_private * priv, u8 band);
-+int lbs_set_universaltable(struct lbs_private *priv, u8 band);
+ cardp->priv = priv;
+-
+- if (libertas_add_mesh(priv, &udev->dev))
+- goto err_add_mesh;
+-
+- cardp->eth_dev = priv->dev;
++ cardp->priv->fw_ready = 1;
--int libertas_cmd_802_11d_domain_info(wlan_private * priv,
-+int lbs_cmd_802_11d_domain_info(struct lbs_private *priv,
- struct cmd_ds_command *cmd, u16 cmdno,
- u16 cmdOption);
+ priv->hw_host_to_card = if_usb_host_to_card;
+ priv->hw_get_int_status = if_usb_get_int_status;
+ priv->hw_read_event_cause = if_usb_read_event_cause;
+- priv->boot2_version = udev->descriptor.bcdDevice;
++ cardp->boot2_version = udev->descriptor.bcdDevice;
--int libertas_ret_802_11d_domain_info(wlan_private * priv,
-+int lbs_ret_802_11d_domain_info(struct lbs_private *priv,
- struct cmd_ds_command *resp);
+- /* Delay 200 ms to waiting for the FW ready */
+ if_usb_submit_rx_urb(cardp);
+- msleep_interruptible(200);
+- priv->adapter->fw_ready = 1;
- struct bss_descriptor;
--int libertas_parse_dnld_countryinfo_11d(wlan_private * priv,
-+int lbs_parse_dnld_countryinfo_11d(struct lbs_private *priv,
- struct bss_descriptor * bss);
+- if (libertas_start_card(priv))
++ if (lbs_start_card(priv))
+ goto err_start_card;
--int libertas_create_dnld_countryinfo_11d(wlan_private * priv);
-+int lbs_create_dnld_countryinfo_11d(struct lbs_private *priv);
+- list_add_tail(&cardp->list, &usb_devices);
++ if_usb_setup_firmware(priv);
--#endif /* _WLAN_11D_ */
-+#endif
-diff --git a/drivers/net/wireless/libertas/README b/drivers/net/wireless/libertas/README
-index 0b133ce..d860fc3 100644
---- a/drivers/net/wireless/libertas/README
-+++ b/drivers/net/wireless/libertas/README
-@@ -195,45 +195,33 @@ setuserscan
+ usb_get_dev(udev);
+ usb_set_intfdata(intf, cardp);
+@@ -253,9 +250,7 @@ static int if_usb_probe(struct usb_interface *intf,
+ return 0;
- where [ARGS]:
+ err_start_card:
+- libertas_remove_mesh(priv);
+-err_add_mesh:
+- libertas_remove_card(priv);
++ lbs_remove_card(priv);
+ err_prog_firmware:
+ if_usb_reset_device(cardp);
+ dealloc:
+@@ -272,23 +267,17 @@ error:
+ */
+ static void if_usb_disconnect(struct usb_interface *intf)
+ {
+- struct usb_card_rec *cardp = usb_get_intfdata(intf);
+- wlan_private *priv = (wlan_private *) cardp->priv;
++ struct if_usb_card *cardp = usb_get_intfdata(intf);
++ struct lbs_private *priv = (struct lbs_private *) cardp->priv;
-- chan=[chan#][band][mode] where band is [a,b,g] and mode is
-- blank for active or 'p' for passive
- bssid=xx:xx:xx:xx:xx:xx specify a BSSID filter for the scan
- ssid="[SSID]" specify a SSID filter for the scan
- keep=[0 or 1] keep the previous scan results (1), discard (0)
- dur=[scan time] time to scan for each channel in milliseconds
-- probes=[#] number of probe requests to send on each chan
- type=[1,2,3] BSS type: 1 (Infra), 2(Adhoc), 3(Any)
+ lbs_deb_enter(LBS_DEB_MAIN);
-- Any combination of the above arguments can be supplied on the command line.
-- If the chan token is absent, a full channel scan will be completed by
-- the driver. If the dur or probes tokens are absent, the driver default
-- setting will be used. The bssid and ssid fields, if blank,
-- will produce an unfiltered scan. The type field will default to 3 (Any)
-- and the keep field will default to 0 (Discard).
-+ Any combination of the above arguments can be supplied on the command
-+ line. If dur tokens are absent, the driver default setting will be used.
-+ The bssid and ssid fields, if blank, will produce an unfiltered scan.
-+ The type field will default to 3 (Any) and the keep field will default
-+ to 0 (Discard).
+- /* Update Surprise removed to TRUE */
+ cardp->surprise_removed = 1;
- Examples:
-- 1) Perform an active scan on channels 1, 6, and 11 in the 'g' band:
-- echo "chan=1g,6g,11g" > setuserscan
-+ 1) Perform a passive scan on all channels for 20 ms per channel:
-+ echo "dur=20" > setuserscan
+- list_del(&cardp->list);
+-
+ if (priv) {
+- wlan_adapter *adapter = priv->adapter;
+-
+- adapter->surpriseremoved = 1;
+- libertas_stop_card(priv);
+- libertas_remove_mesh(priv);
+- libertas_remove_card(priv);
++ priv->surpriseremoved = 1;
++ lbs_stop_card(priv);
++ lbs_remove_card(priv);
+ }
-- 2) Perform a passive scan on channel 11 for 20 ms:
-- echo "chan=11gp dur=20" > setuserscan
-+ 2) Perform an active scan for a specific SSID:
-+ echo "ssid="TestAP"" > setuserscan
+ /* Unlink and free urb */
+@@ -302,102 +291,82 @@ static void if_usb_disconnect(struct usb_interface *intf)
-- 3) Perform an active scan on channels 1, 6, and 11; and a passive scan on
-- channel 36 in the 'a' band:
+ /**
+ * @brief This function download FW
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @return 0
+ */
+-static int if_prog_firmware(struct usb_card_rec *cardp)
++static int if_usb_send_fw_pkt(struct if_usb_card *cardp)
+ {
+- struct FWData *fwdata;
+- struct fwheader *fwheader;
+- u8 *firmware = cardp->fw->data;
-
-- echo "chan=1g,6g,11g,36ap" > setuserscan
+- fwdata = kmalloc(sizeof(struct FWData), GFP_ATOMIC);
-
-- 4) Perform an active scan on channel 6 and 36 for a specific SSID:
-- echo "chan=6g,36a ssid="TestAP"" > setuserscan
+- if (!fwdata)
+- return -1;
-
-- 5) Scan all available channels (B/G, A bands) for a specific BSSID, keep
-+ 3) Scan all available channels (B/G, A bands) for a specific BSSID, keep
- the current scan table intact, update existing or append new scan data:
- echo "bssid=00:50:43:20:12:82 keep=1" > setuserscan
+- fwheader = &fwdata->fwheader;
++ struct fwdata *fwdata = cardp->ep_out_buf;
++ uint8_t *firmware = cardp->fw->data;
-- 6) Scan channel 6, for all infrastructure networks, sending two probe
-- requests. Keep the previous scan table intact. Update any duplicate
-- BSSID/SSID matches with the new scan data:
-- echo "chan=6g type=1 probes=2 keep=1" > setuserscan
-+ 4) Scan for all infrastructure networks.
-+ Keep the previous scan table intact. Update any duplicate BSSID/SSID
-+ matches with the new scan data:
-+ echo "type=1 keep=1" > setuserscan
++ /* If we got a CRC failure on the last block, back
++ up and retry it */
+ if (!cardp->CRC_OK) {
+ cardp->totalbytes = cardp->fwlastblksent;
+- cardp->fwseqnum = cardp->lastseqnum - 1;
++ cardp->fwseqnum--;
+ }
- All entries in the scan table (not just the new scan data when keep=1)
- will be displayed upon completion by use of the getscantable ioctl.
-diff --git a/drivers/net/wireless/libertas/assoc.c b/drivers/net/wireless/libertas/assoc.c
-index b61b176..c622e9b 100644
---- a/drivers/net/wireless/libertas/assoc.c
-+++ b/drivers/net/wireless/libertas/assoc.c
-@@ -9,39 +9,16 @@
- #include "decl.h"
- #include "hostcmd.h"
- #include "host.h"
-+#include "cmd.h"
+- /*
+- lbs_deb_usbd(&cardp->udev->dev, "totalbytes = %d\n",
+- cardp->totalbytes);
+- */
++ lbs_deb_usb2(&cardp->udev->dev, "totalbytes = %d\n",
++ cardp->totalbytes);
+- memcpy(fwheader, &firmware[cardp->totalbytes],
++ /* struct fwdata (which we sent to the card) has an
++ extra __le32 field in between the header and the data,
++ which is not in the struct fwheader in the actual
++ firmware binary. Insert the seqnum in the middle... */
++ memcpy(&fwdata->hdr, &firmware[cardp->totalbytes],
+ sizeof(struct fwheader));
- static const u8 bssid_any[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
- static const u8 bssid_off[ETH_ALEN] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+ cardp->fwlastblksent = cardp->totalbytes;
+ cardp->totalbytes += sizeof(struct fwheader);
--static void print_assoc_req(const char * extra, struct assoc_request * assoc_req)
--{
-- DECLARE_MAC_BUF(mac);
-- lbs_deb_assoc(
-- "#### Association Request: %s\n"
-- " flags: 0x%08lX\n"
-- " SSID: '%s'\n"
-- " channel: %d\n"
-- " band: %d\n"
-- " mode: %d\n"
-- " BSSID: %s\n"
-- " Encryption:%s%s%s\n"
-- " auth: %d\n",
-- extra, assoc_req->flags,
-- escape_essid(assoc_req->ssid, assoc_req->ssid_len),
-- assoc_req->channel, assoc_req->band, assoc_req->mode,
-- print_mac(mac, assoc_req->bssid),
-- assoc_req->secinfo.WPAenabled ? " WPA" : "",
-- assoc_req->secinfo.WPA2enabled ? " WPA2" : "",
-- assoc_req->secinfo.wep_enabled ? " WEP" : "",
-- assoc_req->secinfo.auth_mode);
--}
--
+- /* lbs_deb_usbd(&cardp->udev->dev,"Copy Data\n"); */
+ memcpy(fwdata->data, &firmware[cardp->totalbytes],
+- le32_to_cpu(fwdata->fwheader.datalength));
++ le32_to_cpu(fwdata->hdr.datalength));
--static int assoc_helper_essid(wlan_private *priv,
-+static int assoc_helper_essid(struct lbs_private *priv,
- struct assoc_request * assoc_req)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- struct bss_descriptor * bss;
- int channel = -1;
-@@ -55,18 +32,17 @@ static int assoc_helper_essid(wlan_private *priv,
- if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags))
- channel = assoc_req->channel;
+- /*
+- lbs_deb_usbd(&cardp->udev->dev,
+- "Data length = %d\n", le32_to_cpu(fwdata->fwheader.datalength));
+- */
++ lbs_deb_usb2(&cardp->udev->dev, "Data length = %d\n",
++ le32_to_cpu(fwdata->hdr.datalength));
-- lbs_deb_assoc("New SSID requested: '%s'\n",
-+ lbs_deb_assoc("SSID '%s' requested\n",
- escape_essid(assoc_req->ssid, assoc_req->ssid_len));
- if (assoc_req->mode == IW_MODE_INFRA) {
-- libertas_send_specific_ssid_scan(priv, assoc_req->ssid,
-+ lbs_send_specific_ssid_scan(priv, assoc_req->ssid,
- assoc_req->ssid_len, 0);
+- cardp->fwseqnum = cardp->fwseqnum + 1;
++ fwdata->seqnum = cpu_to_le32(++cardp->fwseqnum);
++ cardp->totalbytes += le32_to_cpu(fwdata->hdr.datalength);
-- bss = libertas_find_ssid_in_list(adapter, assoc_req->ssid,
-+ bss = lbs_find_ssid_in_list(priv, assoc_req->ssid,
- assoc_req->ssid_len, NULL, IW_MODE_INFRA, channel);
- if (bss != NULL) {
-- lbs_deb_assoc("SSID found in scan list, associating\n");
- memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
-- ret = wlan_associate(priv, assoc_req);
-+ ret = lbs_associate(priv, assoc_req);
- } else {
- lbs_deb_assoc("SSID not found; cannot associate\n");
- }
-@@ -74,23 +50,23 @@ static int assoc_helper_essid(wlan_private *priv,
- /* Scan for the network, do not save previous results. Stale
- * scan data will cause us to join a non-existant adhoc network
- */
-- libertas_send_specific_ssid_scan(priv, assoc_req->ssid,
-+ lbs_send_specific_ssid_scan(priv, assoc_req->ssid,
- assoc_req->ssid_len, 1);
+- fwdata->seqnum = cpu_to_le32(cardp->fwseqnum);
+- cardp->lastseqnum = cardp->fwseqnum;
+- cardp->totalbytes += le32_to_cpu(fwdata->fwheader.datalength);
++ usb_tx_block(cardp, cardp->ep_out_buf, sizeof(struct fwdata) +
++ le32_to_cpu(fwdata->hdr.datalength));
++
++ if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_DATA_TO_RECV)) {
++ lbs_deb_usb2(&cardp->udev->dev, "There are data to follow\n");
++ lbs_deb_usb2(&cardp->udev->dev, "seqnum = %d totalbytes = %d\n",
++ cardp->fwseqnum, cardp->totalbytes);
++ } else if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
++ lbs_deb_usb2(&cardp->udev->dev, "Host has finished FW downloading\n");
++ lbs_deb_usb2(&cardp->udev->dev, "Downloading FW JUMP BLOCK\n");
- /* Search for the requested SSID in the scan table */
-- bss = libertas_find_ssid_in_list(adapter, assoc_req->ssid,
-+ bss = lbs_find_ssid_in_list(priv, assoc_req->ssid,
- assoc_req->ssid_len, NULL, IW_MODE_ADHOC, channel);
- if (bss != NULL) {
- lbs_deb_assoc("SSID found, will join\n");
- memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
-- libertas_join_adhoc_network(priv, assoc_req);
-+ lbs_join_adhoc_network(priv, assoc_req);
- } else {
- /* else send START command */
- lbs_deb_assoc("SSID not found, creating adhoc network\n");
- memcpy(&assoc_req->bss.ssid, &assoc_req->ssid,
- IW_ESSID_MAX_SIZE);
- assoc_req->bss.ssid_len = assoc_req->ssid_len;
-- libertas_start_adhoc_network(priv, assoc_req);
-+ lbs_start_adhoc_network(priv, assoc_req);
- }
+- if (fwheader->dnldcmd == cpu_to_le32(FW_HAS_DATA_TO_RECV)) {
+- /*
+- lbs_deb_usbd(&cardp->udev->dev, "There are data to follow\n");
+- lbs_deb_usbd(&cardp->udev->dev,
+- "seqnum = %d totalbytes = %d\n", cardp->fwseqnum,
+- cardp->totalbytes);
+- */
+- memcpy(cardp->bulk_out_buffer, fwheader, FW_DATA_XMIT_SIZE);
+- usb_tx_block(cardp, cardp->bulk_out_buffer, FW_DATA_XMIT_SIZE);
+-
+- } else if (fwdata->fwheader.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
+- /*
+- lbs_deb_usbd(&cardp->udev->dev,
+- "Host has finished FW downloading\n");
+- lbs_deb_usbd(&cardp->udev->dev,
+- "Donwloading FW JUMP BLOCK\n");
+- */
+- memcpy(cardp->bulk_out_buffer, fwheader, FW_DATA_XMIT_SIZE);
+- usb_tx_block(cardp, cardp->bulk_out_buffer, FW_DATA_XMIT_SIZE);
+ cardp->fwfinalblk = 1;
}
-@@ -99,10 +75,9 @@ static int assoc_helper_essid(wlan_private *priv,
- }
+- /*
+- lbs_deb_usbd(&cardp->udev->dev,
+- "The firmware download is done size is %d\n",
+- cardp->totalbytes);
+- */
+-
+- kfree(fwdata);
++ lbs_deb_usb2(&cardp->udev->dev, "Firmware download done; size %d\n",
++ cardp->totalbytes);
+ return 0;
+ }
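[Editor's note: the comment added in this hunk explains that the block sent to the card carries an extra little-endian sequence number between the firmware header and its data, which is why the code copies the header and the data with two separate memcpy() calls and fills seqnum in between. A minimal userspace sketch of that layout follows; the field names and types are illustrative stand-ins, not the driver's real definitions.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the driver's fwheader/fwdata structures:
 * the firmware file holds only header + data, while the block sent to
 * the card inserts a 32-bit sequence number between the two. */
struct fwheader_sketch {
	uint32_t dnldcmd;
	uint32_t baseaddr;
	uint32_t datalength;
	uint32_t CRC;
};

struct fwdata_sketch {
	struct fwheader_sketch hdr;
	uint32_t seqnum;	/* added by the host; not present in the file */
	uint8_t data[];		/* datalength bytes follow */
};
```

[Because seqnum sits in the middle, a single straight copy from the firmware image into the transmit buffer would misplace every byte after the header.]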
--static int assoc_helper_bssid(wlan_private *priv,
-+static int assoc_helper_bssid(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+-static int if_usb_reset_device(struct usb_card_rec *cardp)
++static int if_usb_reset_device(struct if_usb_card *cardp)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- struct bss_descriptor * bss;
- DECLARE_MAC_BUF(mac);
-@@ -111,7 +86,7 @@ static int assoc_helper_bssid(wlan_private *priv,
- print_mac(mac, assoc_req->bssid));
++ struct cmd_ds_command *cmd = cardp->ep_out_buf + 4;
+ int ret;
+- wlan_private * priv = cardp->priv;
- /* Search for index position in list for requested MAC */
-- bss = libertas_find_bssid_in_list(adapter, assoc_req->bssid,
-+ bss = lbs_find_bssid_in_list(priv, assoc_req->bssid,
- assoc_req->mode);
- if (bss == NULL) {
- lbs_deb_assoc("ASSOC: WAP: BSSID %s not found, "
-@@ -121,10 +96,10 @@ static int assoc_helper_bssid(wlan_private *priv,
+ lbs_deb_enter(LBS_DEB_USB);
- memcpy(&assoc_req->bss, bss, sizeof(struct bss_descriptor));
- if (assoc_req->mode == IW_MODE_INFRA) {
-- ret = wlan_associate(priv, assoc_req);
-- lbs_deb_assoc("ASSOC: wlan_associate(bssid) returned %d\n", ret);
-+ ret = lbs_associate(priv, assoc_req);
-+ lbs_deb_assoc("ASSOC: lbs_associate(bssid) returned %d\n", ret);
- } else if (assoc_req->mode == IW_MODE_ADHOC) {
-- libertas_join_adhoc_network(priv, assoc_req);
-+ lbs_join_adhoc_network(priv, assoc_req);
- }
+- /* Try a USB port reset first, if that fails send the reset
+- * command to the firmware.
+- */
++ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_REQUEST);
++
++ cmd->command = cpu_to_le16(CMD_802_11_RESET);
++ cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_reset) + S_DS_GEN);
++ cmd->result = cpu_to_le16(0);
++ cmd->seqnum = cpu_to_le16(0x5a5a);
++ cmd->params.reset.action = cpu_to_le16(CMD_ACT_HALT);
++ usb_tx_block(cardp, cardp->ep_out_buf, 4 + S_DS_GEN + sizeof(struct cmd_ds_802_11_reset));
++
++ msleep(100);
+ ret = usb_reset_device(cardp->udev);
+- if (!ret && priv) {
+- msleep(10);
+- ret = libertas_reset_device(priv);
+- msleep(10);
+- }
++ msleep(100);
- out:
-@@ -133,11 +108,13 @@ out:
- }
+ lbs_deb_leave_args(LBS_DEB_USB, "ret %d", ret);
+@@ -406,12 +375,12 @@ static int if_usb_reset_device(struct usb_card_rec *cardp)
--static int assoc_helper_associate(wlan_private *priv,
-+static int assoc_helper_associate(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+ /**
+ * @brief This function transfer the data to the device.
+- * @param priv pointer to wlan_private
++ * @param priv pointer to struct lbs_private
+ * @param payload pointer to payload data
+ * @param nb data length
+ * @return 0 or -1
+ */
+-static int usb_tx_block(struct usb_card_rec *cardp, u8 * payload, u16 nb)
++static int usb_tx_block(struct if_usb_card *cardp, uint8_t *payload, uint16_t nb)
{
- int ret = 0, done = 0;
+ int ret = -1;
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-+
- /* If we're given and 'any' BSSID, try associating based on SSID */
+@@ -423,17 +392,16 @@ static int usb_tx_block(struct usb_card_rec *cardp, u8 * payload, u16 nb)
- if (test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
-@@ -145,42 +122,36 @@ static int assoc_helper_associate(wlan_private *priv,
- && compare_ether_addr(bssid_off, assoc_req->bssid)) {
- ret = assoc_helper_bssid(priv, assoc_req);
- done = 1;
-- if (ret) {
-- lbs_deb_assoc("ASSOC: bssid: ret = %d\n", ret);
-- }
- }
- }
+ usb_fill_bulk_urb(cardp->tx_urb, cardp->udev,
+ usb_sndbulkpipe(cardp->udev,
+- cardp->bulk_out_endpointAddr),
++ cardp->ep_out),
+ payload, nb, if_usb_write_bulk_callback, cardp);
- if (!done && test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
- ret = assoc_helper_essid(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC: bssid: ret = %d\n", ret);
-- }
+ cardp->tx_urb->transfer_flags |= URB_ZERO_PACKET;
+
+ if ((ret = usb_submit_urb(cardp->tx_urb, GFP_ATOMIC))) {
+- /* transfer failed */
+- lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb failed\n");
++ lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb failed: %d\n", ret);
+ ret = -1;
+ } else {
+- /* lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb success\n"); */
++ lbs_deb_usb2(&cardp->udev->dev, "usb_submit_urb success\n");
+ ret = 0;
}
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+@@ -441,11 +409,10 @@ tx_ret:
return ret;
}
-
--static int assoc_helper_mode(wlan_private *priv,
-+static int assoc_helper_mode(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+-static int __if_usb_submit_rx_urb(struct usb_card_rec *cardp,
++static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ void (*callbackfn)(struct urb *urb))
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
+ struct sk_buff *skb;
+- struct read_cb_info *rinfo = &cardp->rinfo;
+ int ret = -1;
- lbs_deb_enter(LBS_DEB_ASSOC);
+ if (!(skb = dev_alloc_skb(MRVDRV_ETH_RX_PACKET_BUFFER_SIZE))) {
+@@ -453,25 +420,25 @@ static int __if_usb_submit_rx_urb(struct usb_card_rec *cardp,
+ goto rx_ret;
+ }
-- if (assoc_req->mode == adapter->mode)
-+ if (assoc_req->mode == priv->mode)
- goto done;
+- rinfo->skb = skb;
++ cardp->rx_skb = skb;
- if (assoc_req->mode == IW_MODE_INFRA) {
-- if (adapter->psstate != PS_STATE_FULL_POWER)
-- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
-- adapter->psmode = WLAN802_11POWERMODECAM;
-+ if (priv->psstate != PS_STATE_FULL_POWER)
-+ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
-+ priv->psmode = LBS802_11POWERMODECAM;
+ /* Fill the receive configuration URB and initialise the Rx call back */
+ usb_fill_bulk_urb(cardp->rx_urb, cardp->udev,
+- usb_rcvbulkpipe(cardp->udev,
+- cardp->bulk_in_endpointAddr),
++ usb_rcvbulkpipe(cardp->udev, cardp->ep_in),
+ (void *) (skb->tail + (size_t) IPFIELD_ALIGN_OFFSET),
+ MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
+- rinfo);
++ cardp);
+
+ cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+
+- /* lbs_deb_usbd(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb); */
++ lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
+ if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
+- /* handle failure conditions */
+- lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed\n");
++ lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
++ kfree_skb(skb);
++ cardp->rx_skb = NULL;
+ ret = -1;
+ } else {
+- /* lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB success\n"); */
++ lbs_deb_usb2(&cardp->udev->dev, "Submit Rx URB success\n");
+ ret = 0;
}
-- adapter->mode = assoc_req->mode;
-- ret = libertas_prepare_and_send_command(priv,
-+ priv->mode = assoc_req->mode;
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_SNMP_MIB,
- 0, CMD_OPTION_WAITFORRSP,
- OID_802_11_INFRASTRUCTURE_MODE,
-@@ -192,57 +163,76 @@ done:
+@@ -479,58 +446,78 @@ rx_ret:
+ return ret;
}
-
--static int update_channel(wlan_private * priv)
-+int lbs_update_channel(struct lbs_private *priv)
+-static int if_usb_submit_rx_urb_fwload(struct usb_card_rec *cardp)
++static int if_usb_submit_rx_urb_fwload(struct if_usb_card *cardp)
{
-- /* the channel in f/w could be out of sync, get the current channel */
-- return libertas_prepare_and_send_command(priv, CMD_802_11_RF_CHANNEL,
-- CMD_OPT_802_11_RF_CHANNEL_GET,
-- CMD_OPTION_WAITFORRSP, 0, NULL);
-+ int ret;
-+
-+ /* the channel in f/w could be out of sync; get the current channel */
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-+
-+ ret = lbs_get_channel(priv);
-+ if (ret > 0) {
-+ priv->curbssparams.channel = ret;
-+ ret = 0;
-+ }
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
-+ return ret;
+ return __if_usb_submit_rx_urb(cardp, &if_usb_receive_fwload);
}
--void libertas_sync_channel(struct work_struct *work)
-+void lbs_sync_channel(struct work_struct *work)
+-static int if_usb_submit_rx_urb(struct usb_card_rec *cardp)
++static int if_usb_submit_rx_urb(struct if_usb_card *cardp)
{
-- wlan_private *priv = container_of(work, wlan_private, sync_channel);
-+ struct lbs_private *priv = container_of(work, struct lbs_private,
-+ sync_channel);
-
-- if (update_channel(priv) != 0)
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-+ if (lbs_update_channel(priv))
- lbs_pr_info("Channel synchronization failed.");
-+ lbs_deb_leave(LBS_DEB_ASSOC);
+ return __if_usb_submit_rx_urb(cardp, &if_usb_receive);
}
--static int assoc_helper_channel(wlan_private *priv,
-+static int assoc_helper_channel(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+ static void if_usb_receive_fwload(struct urb *urb)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
-
- lbs_deb_enter(LBS_DEB_ASSOC);
+- struct read_cb_info *rinfo = (struct read_cb_info *)urb->context;
+- struct sk_buff *skb = rinfo->skb;
+- struct usb_card_rec *cardp = (struct usb_card_rec *)rinfo->cardp;
++ struct if_usb_card *cardp = urb->context;
++ struct sk_buff *skb = cardp->rx_skb;
+ struct fwsyncheader *syncfwheader;
+- struct bootcmdrespStr bootcmdresp;
++ struct bootcmdresp bootcmdresp;
-- ret = update_channel(priv);
-- if (ret < 0) {
-- lbs_deb_assoc("ASSOC: channel: error getting channel.");
-+ ret = lbs_update_channel(priv);
-+ if (ret) {
-+ lbs_deb_assoc("ASSOC: channel: error getting channel.\n");
-+ goto done;
+ if (urb->status) {
+ lbs_deb_usbd(&cardp->udev->dev,
+- "URB status is failed during fw load\n");
++ "URB status is failed during fw load\n");
+ kfree_skb(skb);
+ return;
}
-- if (assoc_req->channel == adapter->curbssparams.channel)
-+ if (assoc_req->channel == priv->curbssparams.channel)
- goto done;
-
-+ if (priv->mesh_dev) {
-+ /* Change mesh channel first; 21.p21 firmware won't let
-+ you change channel otherwise (even though it'll return
-+ an error to this */
-+ lbs_mesh_config(priv, 0, assoc_req->channel);
+- if (cardp->bootcmdresp == 0) {
++ if (cardp->fwdnldover) {
++ __le32 *tmp = (__le32 *)(skb->data + IPFIELD_ALIGN_OFFSET);
++
++ if (tmp[0] == cpu_to_le32(CMD_TYPE_INDICATION) &&
++ tmp[1] == cpu_to_le32(MACREG_INT_CODE_FIRMWARE_READY)) {
++ lbs_pr_info("Firmware ready event received\n");
++ wake_up(&cardp->fw_wq);
++ } else {
++ lbs_deb_usb("Waiting for confirmation; got %x %x\n",
++ le32_to_cpu(tmp[0]), le32_to_cpu(tmp[1]));
++ if_usb_submit_rx_urb_fwload(cardp);
++ }
++ kfree_skb(skb);
++ return;
+ }
++ if (cardp->bootcmdresp <= 0) {
+ memcpy (&bootcmdresp, skb->data + IPFIELD_ALIGN_OFFSET,
+ sizeof(bootcmdresp));
+
- lbs_deb_assoc("ASSOC: channel: %d -> %d\n",
-- adapter->curbssparams.channel, assoc_req->channel);
-+ priv->curbssparams.channel, assoc_req->channel);
+ if (le16_to_cpu(cardp->udev->descriptor.bcdDevice) < 0x3106) {
+ kfree_skb(skb);
+ if_usb_submit_rx_urb_fwload(cardp);
+ cardp->bootcmdresp = 1;
+ lbs_deb_usbd(&cardp->udev->dev,
+- "Received valid boot command response\n");
++ "Received valid boot command response\n");
+ return;
+ }
+- if (bootcmdresp.u32magicnumber != cpu_to_le32(BOOT_CMD_MAGIC_NUMBER)) {
+- lbs_pr_info(
+- "boot cmd response wrong magic number (0x%x)\n",
+- le32_to_cpu(bootcmdresp.u32magicnumber));
+- } else if (bootcmdresp.u8cmd_tag != BOOT_CMD_FW_BY_USB) {
+- lbs_pr_info(
+- "boot cmd response cmd_tag error (%d)\n",
+- bootcmdresp.u8cmd_tag);
+- } else if (bootcmdresp.u8result != BOOT_CMD_RESP_OK) {
+- lbs_pr_info(
+- "boot cmd response result error (%d)\n",
+- bootcmdresp.u8result);
++ if (bootcmdresp.magic != cpu_to_le32(BOOT_CMD_MAGIC_NUMBER)) {
++ if (bootcmdresp.magic == cpu_to_le32(CMD_TYPE_REQUEST) ||
++ bootcmdresp.magic == cpu_to_le32(CMD_TYPE_DATA) ||
++ bootcmdresp.magic == cpu_to_le32(CMD_TYPE_INDICATION)) {
++ if (!cardp->bootcmdresp)
++ lbs_pr_info("Firmware already seems alive; resetting\n");
++ cardp->bootcmdresp = -1;
++ } else {
++ lbs_pr_info("boot cmd response wrong magic number (0x%x)\n",
++ le32_to_cpu(bootcmdresp.magic));
++ }
++ } else if (bootcmdresp.cmd != BOOT_CMD_FW_BY_USB) {
++ lbs_pr_info("boot cmd response cmd_tag error (%d)\n",
++ bootcmdresp.cmd);
++ } else if (bootcmdresp.result != BOOT_CMD_RESP_OK) {
++ lbs_pr_info("boot cmd response result error (%d)\n",
++ bootcmdresp.result);
+ } else {
+ cardp->bootcmdresp = 1;
+ lbs_deb_usbd(&cardp->udev->dev,
+- "Received valid boot command response\n");
++ "Received valid boot command response\n");
+ }
+ kfree_skb(skb);
+ if_usb_submit_rx_urb_fwload(cardp);
+@@ -545,50 +532,47 @@ static void if_usb_receive_fwload(struct urb *urb)
+ }
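[Editor's note: the magic-number dispatch added in if_usb_receive_fwload() above distinguishes three cases: a valid boot response, a reply carrying one of the normal command-type magics (meaning the firmware is already running and the device should be reset), and a genuinely bad magic. A sketch of that classification follows; the constant values are assumptions for illustration, not taken from this patch.]

```c
#include <assert.h>
#include <stdint.h>

enum boot_resp_class { RESP_VALID, RESP_FW_ALIVE, RESP_BAD_MAGIC };

/* Mirrors the dispatch in the hunk above.  The magic values here are
 * placeholders standing in for BOOT_CMD_MAGIC_NUMBER and the
 * CMD_TYPE_* constants defined elsewhere in the driver. */
static enum boot_resp_class classify_boot_resp(uint32_t magic)
{
	const uint32_t boot_magic     = 0x4c56524dU; /* assumed BOOT_CMD_MAGIC_NUMBER */
	const uint32_t cmd_request    = 0xf00dfaceU; /* assumed CMD_TYPE_REQUEST */
	const uint32_t cmd_data       = 0xbeadc0deU; /* assumed CMD_TYPE_DATA */
	const uint32_t cmd_indication = 0xbeeffaceU; /* assumed CMD_TYPE_INDICATION */

	if (magic == boot_magic)
		return RESP_VALID;
	/* A command-type magic means the firmware already answered as a
	 * running firmware would, so the boot stage can be skipped. */
	if (magic == cmd_request || magic == cmd_data || magic == cmd_indication)
		return RESP_FW_ALIVE;
	return RESP_BAD_MAGIC;
}
```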
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_RF_CHANNEL,
-- CMD_OPT_802_11_RF_CHANNEL_SET,
-- CMD_OPTION_WAITFORRSP, 0, &assoc_req->channel);
-- if (ret < 0) {
-- lbs_deb_assoc("ASSOC: channel: error setting channel.");
-- }
-+ ret = lbs_set_channel(priv, assoc_req->channel);
-+ if (ret < 0)
-+ lbs_deb_assoc("ASSOC: channel: error setting channel.\n");
+ memcpy(syncfwheader, skb->data + IPFIELD_ALIGN_OFFSET,
+- sizeof(struct fwsyncheader));
++ sizeof(struct fwsyncheader));
-- ret = update_channel(priv);
-- if (ret < 0) {
-- lbs_deb_assoc("ASSOC: channel: error getting channel.");
-+ /* FIXME: shouldn't need to grab the channel _again_ after setting
-+ * it since the firmware is supposed to return the new channel, but
-+ * whatever... */
-+ ret = lbs_update_channel(priv);
-+ if (ret) {
-+ lbs_deb_assoc("ASSOC: channel: error getting channel.\n");
-+ goto done;
+ if (!syncfwheader->cmd) {
+- /*
+- lbs_deb_usbd(&cardp->udev->dev,
+- "FW received Blk with correct CRC\n");
+- lbs_deb_usbd(&cardp->udev->dev,
+- "FW received Blk seqnum = %d\n",
+- syncfwheader->seqnum);
+- */
++ lbs_deb_usb2(&cardp->udev->dev, "FW received Blk with correct CRC\n");
++ lbs_deb_usb2(&cardp->udev->dev, "FW received Blk seqnum = %d\n",
++ le32_to_cpu(syncfwheader->seqnum));
+ cardp->CRC_OK = 1;
+ } else {
+- lbs_deb_usbd(&cardp->udev->dev,
+- "FW received Blk with CRC error\n");
++ lbs_deb_usbd(&cardp->udev->dev, "FW received Blk with CRC error\n");
+ cardp->CRC_OK = 0;
}
-- if (assoc_req->channel != adapter->curbssparams.channel) {
-- lbs_deb_assoc("ASSOC: channel: failed to update channel to %d",
-+ if (assoc_req->channel != priv->curbssparams.channel) {
-+ lbs_deb_assoc("ASSOC: channel: failed to update channel to %d\n",
- assoc_req->channel);
-- goto done;
-+ goto restore_mesh;
- }
+ kfree_skb(skb);
- if ( assoc_req->secinfo.wep_enabled
-@@ -255,83 +245,75 @@ static int assoc_helper_channel(wlan_private *priv,
++ /* reschedule timer for 200ms hence */
++ mod_timer(&cardp->fw_timeout, jiffies + (HZ/5));
++
+ if (cardp->fwfinalblk) {
+ cardp->fwdnldover = 1;
+ goto exit;
}
- /* Must restart/rejoin adhoc networks after channel change */
-- set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
-+ set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
+- if_prog_firmware(cardp);
++ if_usb_send_fw_pkt(cardp);
--done:
-+ restore_mesh:
-+ if (priv->mesh_dev)
-+ lbs_mesh_config(priv, 1, priv->curbssparams.channel);
++ exit:
+ if_usb_submit_rx_urb_fwload(cardp);
+-exit:
+
-+ done:
- lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return ret;
+ kfree(syncfwheader);
+
+ return;
+-
}
+ #define MRVDRV_MIN_PKT_LEN 30
--static int assoc_helper_wep_keys(wlan_private *priv,
-- struct assoc_request * assoc_req)
-+static int assoc_helper_wep_keys(struct lbs_private *priv,
-+ struct assoc_request *assoc_req)
+ static inline void process_cmdtypedata(int recvlength, struct sk_buff *skb,
+- struct usb_card_rec *cardp,
+- wlan_private *priv)
++ struct if_usb_card *cardp,
++ struct lbs_private *priv)
{
-- wlan_adapter *adapter = priv->adapter;
- int i;
- int ret = 0;
+- if (recvlength > MRVDRV_ETH_RX_PACKET_BUFFER_SIZE +
+- MESSAGE_HEADER_LEN || recvlength < MRVDRV_MIN_PKT_LEN) {
+- lbs_deb_usbd(&cardp->udev->dev,
+- "Packet length is Invalid\n");
++ if (recvlength > MRVDRV_ETH_RX_PACKET_BUFFER_SIZE + MESSAGE_HEADER_LEN
++ || recvlength < MRVDRV_MIN_PKT_LEN) {
++ lbs_deb_usbd(&cardp->udev->dev, "Packet length is Invalid\n");
+ kfree_skb(skb);
+ return;
+ }
+@@ -596,19 +580,19 @@ static inline void process_cmdtypedata(int recvlength, struct sk_buff *skb,
+ skb_reserve(skb, IPFIELD_ALIGN_OFFSET);
+ skb_put(skb, recvlength);
+ skb_pull(skb, MESSAGE_HEADER_LEN);
+- libertas_process_rxed_packet(priv, skb);
++
++ lbs_process_rxed_packet(priv, skb);
+ priv->upld_len = (recvlength - MESSAGE_HEADER_LEN);
+ }
- lbs_deb_enter(LBS_DEB_ASSOC);
+-static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
++static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
+ struct sk_buff *skb,
+- struct usb_card_rec *cardp,
+- wlan_private *priv)
++ struct if_usb_card *cardp,
++ struct lbs_private *priv)
+ {
+- u8 *cmdbuf;
+- if (recvlength > MRVDRV_SIZE_OF_CMD_BUFFER) {
++ if (recvlength > LBS_CMD_BUFFER_SIZE) {
+ lbs_deb_usbd(&cardp->udev->dev,
+- "The receive buffer is too large\n");
++ "The receive buffer is too large\n");
+ kfree_skb(skb);
+ return;
+ }
+@@ -616,28 +600,17 @@ static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
+ if (!in_interrupt())
+ BUG();
- /* Set or remove WEP keys */
-- if ( assoc_req->wep_keys[0].len
-- || assoc_req->wep_keys[1].len
-- || assoc_req->wep_keys[2].len
-- || assoc_req->wep_keys[3].len) {
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_802_11_SET_WEP,
-- CMD_ACT_ADD,
-- CMD_OPTION_WAITFORRSP,
-- 0, assoc_req);
-- } else {
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_802_11_SET_WEP,
-- CMD_ACT_REMOVE,
-- CMD_OPTION_WAITFORRSP,
-- 0, NULL);
-- }
-+ if (assoc_req->wep_keys[0].len || assoc_req->wep_keys[1].len ||
-+ assoc_req->wep_keys[2].len || assoc_req->wep_keys[3].len)
-+ ret = lbs_cmd_802_11_set_wep(priv, CMD_ACT_ADD, assoc_req);
-+ else
-+ ret = lbs_cmd_802_11_set_wep(priv, CMD_ACT_REMOVE, assoc_req);
+- spin_lock(&priv->adapter->driver_lock);
+- /* take care of cur_cmd = NULL case by reading the
+- * data to clear the interrupt */
+- if (!priv->adapter->cur_cmd) {
+- cmdbuf = priv->upld_buf;
+- priv->adapter->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
+- } else
+- cmdbuf = priv->adapter->cur_cmd->bufvirtualaddr;
+-
++ spin_lock(&priv->driver_lock);
+ cardp->usb_int_cause |= MRVDRV_CMD_UPLD_RDY;
+ priv->upld_len = (recvlength - MESSAGE_HEADER_LEN);
+- memcpy(cmdbuf, recvbuff + MESSAGE_HEADER_LEN,
+- priv->upld_len);
++ memcpy(priv->upld_buf, recvbuff + MESSAGE_HEADER_LEN, priv->upld_len);
- if (ret)
- goto out;
+ kfree_skb(skb);
+- libertas_interrupt(priv->dev);
+- spin_unlock(&priv->adapter->driver_lock);
++ lbs_interrupt(priv);
++ spin_unlock(&priv->driver_lock);
- /* enable/disable the MAC's WEP packet filter */
- if (assoc_req->secinfo.wep_enabled)
-- adapter->currentpacketfilter |= CMD_ACT_MAC_WEP_ENABLE;
-+ priv->currentpacketfilter |= CMD_ACT_MAC_WEP_ENABLE;
- else
-- adapter->currentpacketfilter &= ~CMD_ACT_MAC_WEP_ENABLE;
-- ret = libertas_set_mac_packet_filter(priv);
-+ priv->currentpacketfilter &= ~CMD_ACT_MAC_WEP_ENABLE;
-+
-+ ret = lbs_set_mac_packet_filter(priv);
- if (ret)
- goto out;
+ lbs_deb_usbd(&cardp->udev->dev,
+ "Wake up main thread to handle cmd response\n");
+-
+- return;
+ }
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
+ /**
+@@ -649,35 +622,33 @@ static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
+ */
+ static void if_usb_receive(struct urb *urb)
+ {
+- struct read_cb_info *rinfo = (struct read_cb_info *)urb->context;
+- struct sk_buff *skb = rinfo->skb;
+- struct usb_card_rec *cardp = (struct usb_card_rec *) rinfo->cardp;
+- wlan_private * priv = cardp->priv;
+-
++ struct if_usb_card *cardp = urb->context;
++ struct sk_buff *skb = cardp->rx_skb;
++ struct lbs_private *priv = cardp->priv;
+ int recvlength = urb->actual_length;
+- u8 *recvbuff = NULL;
+- u32 recvtype = 0;
++ uint8_t *recvbuff = NULL;
++ uint32_t recvtype = 0;
++ __le32 *pkt = (__le32 *)(skb->data + IPFIELD_ALIGN_OFFSET);
-- /* Copy WEP keys into adapter wep key fields */
-+ /* Copy WEP keys into priv wep key fields */
- for (i = 0; i < 4; i++) {
-- memcpy(&adapter->wep_keys[i], &assoc_req->wep_keys[i],
-- sizeof(struct enc_key));
-+ memcpy(&priv->wep_keys[i], &assoc_req->wep_keys[i],
-+ sizeof(struct enc_key));
- }
-- adapter->wep_tx_keyidx = assoc_req->wep_tx_keyidx;
-+ priv->wep_tx_keyidx = assoc_req->wep_tx_keyidx;
+ lbs_deb_enter(LBS_DEB_USB);
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+ if (recvlength) {
+- __le32 tmp;
+-
+ if (urb->status) {
+- lbs_deb_usbd(&cardp->udev->dev,
+- "URB status is failed\n");
++ lbs_deb_usbd(&cardp->udev->dev, "RX URB failed: %d\n",
++ urb->status);
+ kfree_skb(skb);
+ goto setup_for_next;
+ }
- out:
- lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return ret;
- }
+ recvbuff = skb->data + IPFIELD_ALIGN_OFFSET;
+- memcpy(&tmp, recvbuff, sizeof(u32));
+- recvtype = le32_to_cpu(tmp);
++ recvtype = le32_to_cpu(pkt[0]);
+ lbs_deb_usbd(&cardp->udev->dev,
+ "Recv length = 0x%x, Recv type = 0x%X\n",
+ recvlength, recvtype);
+- } else if (urb->status)
++ } else if (urb->status) {
++ kfree_skb(skb);
+ goto rx_exit;
++ }
--static int assoc_helper_secinfo(wlan_private *priv,
-+static int assoc_helper_secinfo(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+ switch (recvtype) {
+ case CMD_TYPE_DATA:
+@@ -690,24 +661,28 @@ static void if_usb_receive(struct urb *urb)
+
+ case CMD_TYPE_INDICATION:
+ /* Event cause handling */
+- spin_lock(&priv->adapter->driver_lock);
+- cardp->usb_event_cause = le32_to_cpu(*(__le32 *) (recvbuff + MESSAGE_HEADER_LEN));
++ spin_lock(&priv->driver_lock);
++
++ cardp->usb_event_cause = le32_to_cpu(pkt[1]);
++
+ lbs_deb_usbd(&cardp->udev->dev,"**EVENT** 0x%X\n",
+- cardp->usb_event_cause);
++ cardp->usb_event_cause);
++
++ /* Icky undocumented magic special case */
+ if (cardp->usb_event_cause & 0xffff0000) {
+- libertas_send_tx_feedback(priv);
+- spin_unlock(&priv->adapter->driver_lock);
++ lbs_send_tx_feedback(priv);
++ spin_unlock(&priv->driver_lock);
+ break;
+ }
+ cardp->usb_event_cause <<= 3;
+ cardp->usb_int_cause |= MRVDRV_CARDEVENT;
+ kfree_skb(skb);
+- libertas_interrupt(priv->dev);
+- spin_unlock(&priv->adapter->driver_lock);
++ lbs_interrupt(priv);
++ spin_unlock(&priv->driver_lock);
+ goto rx_exit;
+ default:
+ lbs_deb_usbd(&cardp->udev->dev, "Unknown command type 0x%X\n",
+- recvtype);
++ recvtype);
+ kfree_skb(skb);
+ break;
+ }
+@@ -720,58 +695,54 @@ rx_exit:
+
+ /**
+ * @brief This function downloads data to FW
+- * @param priv pointer to wlan_private structure
++ * @param priv pointer to struct lbs_private structure
+ * @param type type of data
+ * @param buf pointer to data buffer
+ * @param len number of bytes
+ * @return 0 or -1
+ */
+-static int if_usb_host_to_card(wlan_private * priv, u8 type, u8 * payload, u16 nb)
++static int if_usb_host_to_card(struct lbs_private *priv, uint8_t type,
++ uint8_t *payload, uint16_t nb)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
-- u32 do_wpa;
-- u32 rsn = 0;
-+ uint16_t do_wpa;
-+ uint16_t rsn = 0;
+- struct usb_card_rec *cardp = (struct usb_card_rec *)priv->card;
++ struct if_usb_card *cardp = priv->card;
- lbs_deb_enter(LBS_DEB_ASSOC);
+ lbs_deb_usbd(&cardp->udev->dev,"*** type = %u\n", type);
+ lbs_deb_usbd(&cardp->udev->dev,"size after = %d\n", nb);
-- memcpy(&adapter->secinfo, &assoc_req->secinfo,
-- sizeof(struct wlan_802_11_security));
-+ memcpy(&priv->secinfo, &assoc_req->secinfo,
-+ sizeof(struct lbs_802_11_security));
+ if (type == MVMS_CMD) {
+- __le32 tmp = cpu_to_le32(CMD_TYPE_REQUEST);
++ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_REQUEST);
+ priv->dnld_sent = DNLD_CMD_SENT;
+- memcpy(cardp->bulk_out_buffer, (u8 *) & tmp,
+- MESSAGE_HEADER_LEN);
+-
+ } else {
+- __le32 tmp = cpu_to_le32(CMD_TYPE_DATA);
++ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_DATA);
+ priv->dnld_sent = DNLD_DATA_SENT;
+- memcpy(cardp->bulk_out_buffer, (u8 *) & tmp,
+- MESSAGE_HEADER_LEN);
+ }
-- ret = libertas_set_mac_packet_filter(priv);
-+ ret = lbs_set_mac_packet_filter(priv);
- if (ret)
- goto out;
+- memcpy((cardp->bulk_out_buffer + MESSAGE_HEADER_LEN), payload, nb);
++ memcpy((cardp->ep_out_buf + MESSAGE_HEADER_LEN), payload, nb);
-@@ -341,28 +323,19 @@ static int assoc_helper_secinfo(wlan_private *priv,
- */
+- return usb_tx_block(cardp, cardp->bulk_out_buffer,
+- nb + MESSAGE_HEADER_LEN);
++ return usb_tx_block(cardp, cardp->ep_out_buf, nb + MESSAGE_HEADER_LEN);
+ }
- /* Get RSN enabled/disabled */
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_802_11_ENABLE_RSN,
-- CMD_ACT_GET,
-- CMD_OPTION_WAITFORRSP,
-- 0, &rsn);
-+ ret = lbs_cmd_802_11_enable_rsn(priv, CMD_ACT_GET, &rsn);
- if (ret) {
-- lbs_deb_assoc("Failed to get RSN status: %d", ret);
-+ lbs_deb_assoc("Failed to get RSN status: %d\n", ret);
- goto out;
- }
+-/* called with adapter->driver_lock held */
+-static int if_usb_get_int_status(wlan_private * priv, u8 * ireg)
++/* called with priv->driver_lock held */
++static int if_usb_get_int_status(struct lbs_private *priv, uint8_t *ireg)
+ {
+- struct usb_card_rec *cardp = priv->card;
++ struct if_usb_card *cardp = priv->card;
- /* Don't re-enable RSN if it's already enabled */
-- do_wpa = (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled);
-+ do_wpa = assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled;
- if (do_wpa == rsn)
- goto out;
+ *ireg = cardp->usb_int_cause;
+ cardp->usb_int_cause = 0;
- /* Set RSN enabled/disabled */
-- rsn = do_wpa;
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_802_11_ENABLE_RSN,
-- CMD_ACT_SET,
-- CMD_OPTION_WAITFORRSP,
-- 0, &rsn);
-+ ret = lbs_cmd_802_11_enable_rsn(priv, CMD_ACT_SET, &do_wpa);
+- lbs_deb_usbd(&cardp->udev->dev,"Int cause is 0x%X\n", *ireg);
++ lbs_deb_usbd(&cardp->udev->dev, "Int cause is 0x%X\n", *ireg);
- out:
- lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
-@@ -370,7 +343,7 @@ out:
+ return 0;
}
+-static int if_usb_read_event_cause(wlan_private * priv)
++static int if_usb_read_event_cause(struct lbs_private *priv)
+ {
+- struct usb_card_rec *cardp = priv->card;
++ struct if_usb_card *cardp = priv->card;
+
+- priv->adapter->eventcause = cardp->usb_event_cause;
++ priv->eventcause = cardp->usb_event_cause;
+ /* Re-submit rx urb here to avoid event lost issue */
+ if_usb_submit_rx_urb(cardp);
++
+ return 0;
+ }
--static int assoc_helper_wpa_keys(wlan_private *priv,
-+static int assoc_helper_wpa_keys(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+@@ -781,20 +752,17 @@ static int if_usb_read_event_cause(wlan_private * priv)
+ * 2:Boot from FW in EEPROM
+ * @return 0
+ */
+-static int if_usb_issue_boot_command(struct usb_card_rec *cardp, int ivalue)
++static int if_usb_issue_boot_command(struct if_usb_card *cardp, int ivalue)
{
- int ret = 0;
-@@ -385,7 +358,7 @@ static int assoc_helper_wpa_keys(wlan_private *priv,
+- struct bootcmdstr sbootcmd;
+- int i;
++ struct bootcmd *bootcmd = cardp->ep_out_buf;
- if (test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
- clear_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags);
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_KEY_MATERIAL,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP,
-@@ -399,7 +372,7 @@ static int assoc_helper_wpa_keys(wlan_private *priv,
- if (test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)) {
- clear_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags);
+ /* Prepare command */
+- sbootcmd.u32magicnumber = cpu_to_le32(BOOT_CMD_MAGIC_NUMBER);
+- sbootcmd.u8cmd_tag = ivalue;
+- for (i=0; i<11; i++)
+- sbootcmd.au8dumy[i]=0x00;
+- memcpy(cardp->bulk_out_buffer, &sbootcmd, sizeof(struct bootcmdstr));
++ bootcmd->magic = cpu_to_le32(BOOT_CMD_MAGIC_NUMBER);
++ bootcmd->cmd = ivalue;
++ memset(bootcmd->pad, 0, sizeof(bootcmd->pad));
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_KEY_MATERIAL,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP,
-@@ -413,20 +386,19 @@ out:
+ /* Issue command */
+- usb_tx_block(cardp, cardp->bulk_out_buffer, sizeof(struct bootcmdstr));
++ usb_tx_block(cardp, cardp->ep_out_buf, sizeof(*bootcmd));
+
+ return 0;
+ }
+@@ -807,10 +775,10 @@ static int if_usb_issue_boot_command(struct usb_card_rec *cardp, int ivalue)
+ * len image length
+ * @return 0 or -1
+ */
+-static int check_fwfile_format(u8 *data, u32 totlen)
++static int check_fwfile_format(uint8_t *data, uint32_t totlen)
+ {
+- u32 bincmd, exit;
+- u32 blksize, offset, len;
++ uint32_t bincmd, exit;
++ uint32_t blksize, offset, len;
+ int ret;
+
+ ret = 1;
+@@ -848,7 +816,7 @@ static int check_fwfile_format(u8 *data, u32 totlen)
}
--static int assoc_helper_wpa_ie(wlan_private *priv,
-+static int assoc_helper_wpa_ie(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+-static int if_usb_prog_firmware(struct usb_card_rec *cardp)
++static int if_usb_prog_firmware(struct if_usb_card *cardp)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
+ int i = 0;
+ static int reset_count = 10;
+@@ -856,10 +824,10 @@ static int if_usb_prog_firmware(struct usb_card_rec *cardp)
- lbs_deb_enter(LBS_DEB_ASSOC);
+ lbs_deb_enter(LBS_DEB_USB);
- if (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled) {
-- memcpy(&adapter->wpa_ie, &assoc_req->wpa_ie, assoc_req->wpa_ie_len);
-- adapter->wpa_ie_len = assoc_req->wpa_ie_len;
-+ memcpy(&priv->wpa_ie, &assoc_req->wpa_ie, assoc_req->wpa_ie_len);
-+ priv->wpa_ie_len = assoc_req->wpa_ie_len;
- } else {
-- memset(&adapter->wpa_ie, 0, MAX_WPA_IE_LEN);
-- adapter->wpa_ie_len = 0;
-+ memset(&priv->wpa_ie, 0, MAX_WPA_IE_LEN);
-+ priv->wpa_ie_len = 0;
+- if ((ret = request_firmware(&cardp->fw, libertas_fw_name,
++ if ((ret = request_firmware(&cardp->fw, lbs_fw_name,
+ &cardp->udev->dev)) < 0) {
+ lbs_pr_err("request_firmware() failed with %#x\n", ret);
+- lbs_pr_err("firmware %s not found\n", libertas_fw_name);
++ lbs_pr_err("firmware %s not found\n", lbs_fw_name);
+ goto done;
}
- lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
-@@ -434,55 +406,68 @@ static int assoc_helper_wpa_ie(wlan_private *priv,
- }
+@@ -886,7 +854,7 @@ restart:
+ } while (cardp->bootcmdresp == 0 && j < 10);
+ } while (cardp->bootcmdresp == 0 && i < 5);
+
+- if (cardp->bootcmdresp == 0) {
++ if (cardp->bootcmdresp <= 0) {
+ if (--reset_count >= 0) {
+ if_usb_reset_device(cardp);
+ goto restart;
+@@ -904,15 +872,14 @@ restart:
+ cardp->totalbytes = 0;
+ cardp->fwfinalblk = 0;
+- if_prog_firmware(cardp);
++ /* Send the first firmware packet... */
++ if_usb_send_fw_pkt(cardp);
--static int should_deauth_infrastructure(wlan_adapter *adapter,
-+static int should_deauth_infrastructure(struct lbs_private *priv,
- struct assoc_request * assoc_req)
- {
-- if (adapter->connect_status != LIBERTAS_CONNECTED)
-+ int ret = 0;
-+
-+ lbs_deb_enter(LBS_DEB_ASSOC);
+- do {
+- lbs_deb_usbd(&cardp->udev->dev,"Wlan sched timeout\n");
+- i++;
+- msleep_interruptible(100);
+- if (cardp->surprise_removed || i >= 20)
+- break;
+- } while (!cardp->fwdnldover);
++ /* ... and wait for the process to complete */
++ wait_event_interruptible(cardp->fw_wq, cardp->surprise_removed || cardp->fwdnldover);
+
-+ if (priv->connect_status != LBS_CONNECTED)
- return 0;
++ del_timer_sync(&cardp->fw_timeout);
++ usb_kill_urb(cardp->rx_urb);
- if (test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
-- lbs_deb_assoc("Deauthenticating due to new SSID in "
-- " configuration request.\n");
-- return 1;
-+ lbs_deb_assoc("Deauthenticating due to new SSID\n");
-+ ret = 1;
-+ goto out;
+ if (!cardp->fwdnldover) {
+ lbs_pr_info("failed to load fw, resetting device!\n");
+@@ -926,11 +893,11 @@ restart:
+ goto release_fw;
}
- if (test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
-- if (adapter->secinfo.auth_mode != assoc_req->secinfo.auth_mode) {
-- lbs_deb_assoc("Deauthenticating due to updated security "
-- "info in configuration request.\n");
-- return 1;
-+ if (priv->secinfo.auth_mode != assoc_req->secinfo.auth_mode) {
-+ lbs_deb_assoc("Deauthenticating due to new security\n");
-+ ret = 1;
-+ goto out;
- }
- }
+-release_fw:
++ release_fw:
+ release_firmware(cardp->fw);
+ cardp->fw = NULL;
- if (test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
-- lbs_deb_assoc("Deauthenticating due to new BSSID in "
-- " configuration request.\n");
-- return 1;
-+ lbs_deb_assoc("Deauthenticating due to new BSSID\n");
-+ ret = 1;
-+ goto out;
- }
+-done:
++ done:
+ lbs_deb_leave_args(LBS_DEB_USB, "ret %d", ret);
+ return ret;
+ }
+@@ -939,66 +906,38 @@ done:
+ #ifdef CONFIG_PM
+ static int if_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+- struct usb_card_rec *cardp = usb_get_intfdata(intf);
+- wlan_private *priv = cardp->priv;
++ struct if_usb_card *cardp = usb_get_intfdata(intf);
++ struct lbs_private *priv = cardp->priv;
++ int ret;
- if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
-- lbs_deb_assoc("Deauthenticating due to channel switch.\n");
-- return 1;
-+ lbs_deb_assoc("Deauthenticating due to channel switch\n");
-+ ret = 1;
+ lbs_deb_enter(LBS_DEB_USB);
+
+- if (priv->adapter->psstate != PS_STATE_FULL_POWER)
++ if (priv->psstate != PS_STATE_FULL_POWER)
+ return -1;
+
+- if (priv->mesh_dev && !priv->mesh_autostart_enabled) {
+- /* Mesh autostart must be activated while sleeping
+- * On resume it will go back to the current state
+- */
+- struct cmd_ds_mesh_access mesh_access;
+- memset(&mesh_access, 0, sizeof(mesh_access));
+- mesh_access.data[0] = cpu_to_le32(1);
+- libertas_prepare_and_send_command(priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+- }
+-
+- netif_device_detach(cardp->eth_dev);
+- netif_device_detach(priv->mesh_dev);
++ ret = lbs_suspend(priv);
++ if (ret)
+ goto out;
- }
- /* FIXME: deal with 'auto' mode somehow */
- if (test_bit(ASSOC_FLAG_MODE, &assoc_req->flags)) {
-- if (assoc_req->mode != IW_MODE_INFRA)
-- return 1;
-+ if (assoc_req->mode != IW_MODE_INFRA) {
-+ lbs_deb_assoc("Deauthenticating due to leaving "
-+ "infra mode\n");
-+ ret = 1;
-+ goto out;
-+ }
- }
+ /* Unlink tx & rx urb */
+ usb_kill_urb(cardp->tx_urb);
+ usb_kill_urb(cardp->rx_urb);
-+out:
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return 0;
+- cardp->rx_urb_recall = 1;
+-
++ out:
+ lbs_deb_leave(LBS_DEB_USB);
+- return 0;
++ return ret;
}
-
--static int should_stop_adhoc(wlan_adapter *adapter,
-+static int should_stop_adhoc(struct lbs_private *priv,
- struct assoc_request * assoc_req)
+ static int if_usb_resume(struct usb_interface *intf)
{
-- if (adapter->connect_status != LIBERTAS_CONNECTED)
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-+
-+ if (priv->connect_status != LBS_CONNECTED)
- return 0;
+- struct usb_card_rec *cardp = usb_get_intfdata(intf);
+- wlan_private *priv = cardp->priv;
++ struct if_usb_card *cardp = usb_get_intfdata(intf);
++ struct lbs_private *priv = cardp->priv;
-- if (libertas_ssid_cmp(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len,
-+ if (lbs_ssid_cmp(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len,
- assoc_req->ssid, assoc_req->ssid_len) != 0)
- return 1;
+ lbs_deb_enter(LBS_DEB_USB);
-@@ -493,18 +478,19 @@ static int should_stop_adhoc(wlan_adapter *adapter,
- }
+- cardp->rx_urb_recall = 0;
+-
+- if_usb_submit_rx_urb(cardp->priv);
++ if_usb_submit_rx_urb(cardp);
- if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
-- if (assoc_req->channel != adapter->curbssparams.channel)
-+ if (assoc_req->channel != priv->curbssparams.channel)
- return 1;
- }
+- netif_device_attach(cardp->eth_dev);
+- netif_device_attach(priv->mesh_dev);
+-
+- if (priv->mesh_dev && !priv->mesh_autostart_enabled) {
+- /* Mesh autostart was activated while sleeping
+- * Disable it if appropriate
+- */
+- struct cmd_ds_mesh_access mesh_access;
+- memset(&mesh_access, 0, sizeof(mesh_access));
+- mesh_access.data[0] = cpu_to_le32(0);
+- libertas_prepare_and_send_command(priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+- }
++ lbs_resume(priv);
-+ lbs_deb_leave(LBS_DEB_ASSOC);
+ lbs_deb_leave(LBS_DEB_USB);
return 0;
- }
+@@ -1009,46 +948,30 @@ static int if_usb_resume(struct usb_interface *intf)
+ #endif
+ static struct usb_driver if_usb_driver = {
+- /* driver name */
+- .name = usbdriver_name,
+- /* probe function name */
++ .name = DRV_NAME,
+ .probe = if_usb_probe,
+- /* disconnect function name */
+ .disconnect = if_usb_disconnect,
+- /* device signature table */
+ .id_table = if_usb_table,
+ .suspend = if_usb_suspend,
+ .resume = if_usb_resume,
+ };
--void libertas_association_worker(struct work_struct *work)
-+void lbs_association_worker(struct work_struct *work)
+-static int if_usb_init_module(void)
++static int __init if_usb_init_module(void)
{
-- wlan_private *priv = container_of(work, wlan_private, assoc_work.work);
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = container_of(work, struct lbs_private,
-+ assoc_work.work);
- struct assoc_request * assoc_req = NULL;
int ret = 0;
- int find_any_ssid = 0;
-@@ -512,16 +498,33 @@ void libertas_association_worker(struct work_struct *work)
- lbs_deb_enter(LBS_DEB_ASSOC);
+ lbs_deb_enter(LBS_DEB_MAIN);
-- mutex_lock(&adapter->lock);
-- assoc_req = adapter->pending_assoc_req;
-- adapter->pending_assoc_req = NULL;
-- adapter->in_progress_assoc_req = assoc_req;
-- mutex_unlock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-+ assoc_req = priv->pending_assoc_req;
-+ priv->pending_assoc_req = NULL;
-+ priv->in_progress_assoc_req = assoc_req;
-+ mutex_unlock(&priv->lock);
+- if (libertas_fw_name == NULL) {
+- libertas_fw_name = default_fw_name;
+- }
+-
+ ret = usb_register(&if_usb_driver);
- if (!assoc_req)
- goto done;
+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
+ return ret;
+ }
-- print_assoc_req(__func__, assoc_req);
-+ lbs_deb_assoc(
-+ "Association Request:\n"
-+ " flags: 0x%08lx\n"
-+ " SSID: '%s'\n"
-+ " chann: %d\n"
-+ " band: %d\n"
-+ " mode: %d\n"
-+ " BSSID: %s\n"
-+ " secinfo: %s%s%s\n"
-+ " auth_mode: %d\n",
-+ assoc_req->flags,
-+ escape_essid(assoc_req->ssid, assoc_req->ssid_len),
-+ assoc_req->channel, assoc_req->band, assoc_req->mode,
-+ print_mac(mac, assoc_req->bssid),
-+ assoc_req->secinfo.WPAenabled ? " WPA" : "",
-+ assoc_req->secinfo.WPA2enabled ? " WPA2" : "",
-+ assoc_req->secinfo.wep_enabled ? " WEP" : "",
-+ assoc_req->secinfo.auth_mode);
+-static void if_usb_exit_module(void)
++static void __exit if_usb_exit_module(void)
+ {
+- struct usb_card_rec *cardp, *cardp_temp;
+-
+ lbs_deb_enter(LBS_DEB_MAIN);
- /* If 'any' SSID was specified, find an SSID to associate with */
- if (test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)
-@@ -538,7 +541,7 @@ void libertas_association_worker(struct work_struct *work)
- if (find_any_ssid) {
- u8 new_mode;
+- list_for_each_entry_safe(cardp, cardp_temp, &usb_devices, list) {
+- libertas_prepare_and_send_command(cardp->priv, CMD_802_11_RESET,
+- CMD_ACT_HALT, 0, 0, NULL);
+- }
+-
+- /* API unregisters the driver from USB subsystem */
+ usb_deregister(&if_usb_driver);
-- ret = libertas_find_best_network_ssid(priv, assoc_req->ssid,
-+ ret = lbs_find_best_network_ssid(priv, assoc_req->ssid,
- &assoc_req->ssid_len, assoc_req->mode, &new_mode);
- if (ret) {
- lbs_deb_assoc("Could not find best network\n");
-@@ -557,18 +560,18 @@ void libertas_association_worker(struct work_struct *work)
- * Check if the attributes being changing require deauthentication
- * from the currently associated infrastructure access point.
- */
-- if (adapter->mode == IW_MODE_INFRA) {
-- if (should_deauth_infrastructure(adapter, assoc_req)) {
-- ret = libertas_send_deauthentication(priv);
-+ if (priv->mode == IW_MODE_INFRA) {
-+ if (should_deauth_infrastructure(priv, assoc_req)) {
-+ ret = lbs_send_deauthentication(priv);
- if (ret) {
- lbs_deb_assoc("Deauthentication due to new "
- "configuration request failed: %d\n",
- ret);
- }
- }
-- } else if (adapter->mode == IW_MODE_ADHOC) {
-- if (should_stop_adhoc(adapter, assoc_req)) {
-- ret = libertas_stop_adhoc_network(priv);
-+ } else if (priv->mode == IW_MODE_ADHOC) {
-+ if (should_stop_adhoc(priv, assoc_req)) {
-+ ret = lbs_stop_adhoc_network(priv);
- if (ret) {
- lbs_deb_assoc("Teardown of AdHoc network due to "
- "new configuration request failed: %d\n",
-@@ -581,58 +584,40 @@ void libertas_association_worker(struct work_struct *work)
- /* Send the various configuration bits to the firmware */
- if (test_bit(ASSOC_FLAG_MODE, &assoc_req->flags)) {
- ret = assoc_helper_mode(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) mode: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+ lbs_deb_leave(LBS_DEB_MAIN);
+@@ -1058,5 +981,5 @@ module_init(if_usb_init_module);
+ module_exit(if_usb_exit_module);
- if (test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags)) {
- ret = assoc_helper_channel(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) channel: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+ MODULE_DESCRIPTION("8388 USB WLAN Driver");
+-MODULE_AUTHOR("Marvell International Ltd.");
++MODULE_AUTHOR("Marvell International Ltd. and Red Hat, Inc.");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/net/wireless/libertas/if_usb.h b/drivers/net/wireless/libertas/if_usb.h
+index e07a10e..e4829a3 100644
+--- a/drivers/net/wireless/libertas/if_usb.h
++++ b/drivers/net/wireless/libertas/if_usb.h
+@@ -1,79 +1,76 @@
+-#ifndef _LIBERTAS_IF_USB_H
+-#define _LIBERTAS_IF_USB_H
++#ifndef _LBS_IF_USB_H
++#define _LBS_IF_USB_H
- if ( test_bit(ASSOC_FLAG_WEP_KEYS, &assoc_req->flags)
- || test_bit(ASSOC_FLAG_WEP_TX_KEYIDX, &assoc_req->flags)) {
- ret = assoc_helper_wep_keys(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) wep_keys: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+-#include <linux/list.h>
++#include <linux/wait.h>
++#include <linux/timer.h>
++
++struct lbs_private;
- if (test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
- ret = assoc_helper_secinfo(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) secinfo: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+ /**
+ * This file contains definition for USB interface.
+ */
+-#define CMD_TYPE_REQUEST 0xF00DFACE
+-#define CMD_TYPE_DATA 0xBEADC0DE
+-#define CMD_TYPE_INDICATION 0xBEEFFACE
++#define CMD_TYPE_REQUEST 0xF00DFACE
++#define CMD_TYPE_DATA 0xBEADC0DE
++#define CMD_TYPE_INDICATION 0xBEEFFACE
- if (test_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags)) {
- ret = assoc_helper_wpa_ie(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) wpa_ie: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+-#define IPFIELD_ALIGN_OFFSET 2
++#define IPFIELD_ALIGN_OFFSET 2
- if (test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)
- || test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
- ret = assoc_helper_wpa_keys(priv, assoc_req);
-- if (ret) {
-- lbs_deb_assoc("ASSOC(:%d) wpa_keys: ret = %d\n",
-- __LINE__, ret);
-+ if (ret)
- goto out;
-- }
- }
+-#define BOOT_CMD_FW_BY_USB 0x01
+-#define BOOT_CMD_FW_IN_EEPROM 0x02
+-#define BOOT_CMD_UPDATE_BOOT2 0x03
+-#define BOOT_CMD_UPDATE_FW 0x04
+-#define BOOT_CMD_MAGIC_NUMBER 0x4C56524D /* M=>0x4D,R=>0x52,V=>0x56,L=>0x4C */
++#define BOOT_CMD_FW_BY_USB 0x01
++#define BOOT_CMD_FW_IN_EEPROM 0x02
++#define BOOT_CMD_UPDATE_BOOT2 0x03
++#define BOOT_CMD_UPDATE_FW 0x04
++#define BOOT_CMD_MAGIC_NUMBER 0x4C56524D /* LVRM */
- /* SSID/BSSID should be the _last_ config option set, because they
-@@ -644,28 +629,27 @@ void libertas_association_worker(struct work_struct *work)
+-struct bootcmdstr
++struct bootcmd
+ {
+- __le32 u32magicnumber;
+- u8 u8cmd_tag;
+- u8 au8dumy[11];
++ __le32 magic;
++ uint8_t cmd;
++ uint8_t pad[11];
+ };
- ret = assoc_helper_associate(priv, assoc_req);
- if (ret) {
-- lbs_deb_assoc("ASSOC: association attempt unsuccessful: %d\n",
-+ lbs_deb_assoc("ASSOC: association unsuccessful: %d\n",
- ret);
- success = 0;
- }
+-#define BOOT_CMD_RESP_OK 0x0001
+-#define BOOT_CMD_RESP_FAIL 0x0000
++#define BOOT_CMD_RESP_OK 0x0001
++#define BOOT_CMD_RESP_FAIL 0x0000
-- if (adapter->connect_status != LIBERTAS_CONNECTED) {
-- lbs_deb_assoc("ASSOC: association attempt unsuccessful, "
-- "not connected.\n");
-+ if (priv->connect_status != LBS_CONNECTED) {
-+ lbs_deb_assoc("ASSOC: association unsuccessful, "
-+ "not connected\n");
- success = 0;
- }
+-struct bootcmdrespStr
++struct bootcmdresp
+ {
+- __le32 u32magicnumber;
+- u8 u8cmd_tag;
+- u8 u8result;
+- u8 au8dumy[2];
+-};
+-
+-/* read callback private data */
+-struct read_cb_info {
+- struct usb_card_rec *cardp;
+- struct sk_buff *skb;
++ __le32 magic;
++ uint8_t cmd;
++ uint8_t result;
++ uint8_t pad[2];
+ };
- if (success) {
-- lbs_deb_assoc("ASSOC: association attempt successful. "
-- "Associated to '%s' (%s)\n",
-- escape_essid(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len),
-- print_mac(mac, adapter->curbssparams.bssid));
-- libertas_prepare_and_send_command(priv,
-+ lbs_deb_assoc("ASSOC: associated to '%s', %s\n",
-+ escape_essid(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len),
-+ print_mac(mac, priv->curbssparams.bssid));
-+ lbs_prepare_and_send_command(priv,
- CMD_802_11_RSSI,
- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
+ /** USB card description structure*/
+-struct usb_card_rec {
+- struct list_head list;
+- struct net_device *eth_dev;
++struct if_usb_card {
+ struct usb_device *udev;
+ struct urb *rx_urb, *tx_urb;
+- void *priv;
+- struct read_cb_info rinfo;
++ struct lbs_private *priv;
-- libertas_prepare_and_send_command(priv,
-+ lbs_prepare_and_send_command(priv,
- CMD_802_11_GET_LOG,
- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
- } else {
-@@ -679,9 +663,9 @@ out:
- ret);
- }
+- int bulk_in_size;
+- u8 bulk_in_endpointAddr;
++ struct sk_buff *rx_skb;
++ uint32_t usb_event_cause;
++ uint8_t usb_int_cause;
-- mutex_lock(&adapter->lock);
-- adapter->in_progress_assoc_req = NULL;
-- mutex_unlock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-+ priv->in_progress_assoc_req = NULL;
-+ mutex_unlock(&priv->lock);
- kfree(assoc_req);
+- u8 *bulk_out_buffer;
+- int bulk_out_size;
+- u8 bulk_out_endpointAddr;
++ uint8_t ep_in;
++ uint8_t ep_out;
- done:
-@@ -692,14 +676,15 @@ done:
- /*
- * Caller MUST hold any necessary locks
- */
--struct assoc_request * wlan_get_association_request(wlan_adapter *adapter)
-+struct assoc_request *lbs_get_association_request(struct lbs_private *priv)
- {
- struct assoc_request * assoc_req;
+- const struct firmware *fw;
+- u8 CRC_OK;
+- u32 fwseqnum;
+- u32 lastseqnum;
+- u32 totalbytes;
+- u32 fwlastblksent;
+- u8 fwdnldover;
+- u8 fwfinalblk;
+- u8 surprise_removed;
++ int8_t bootcmdresp;
-- if (!adapter->pending_assoc_req) {
-- adapter->pending_assoc_req = kzalloc(sizeof(struct assoc_request),
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-+ if (!priv->pending_assoc_req) {
-+ priv->pending_assoc_req = kzalloc(sizeof(struct assoc_request),
- GFP_KERNEL);
-- if (!adapter->pending_assoc_req) {
-+ if (!priv->pending_assoc_req) {
- lbs_pr_info("Not enough memory to allocate association"
- " request!\n");
- return NULL;
-@@ -709,60 +694,59 @@ struct assoc_request * wlan_get_association_request(wlan_adapter *adapter)
- /* Copy current configuration attributes to the association request,
- * but don't overwrite any that are already set.
- */
-- assoc_req = adapter->pending_assoc_req;
-+ assoc_req = priv->pending_assoc_req;
- if (!test_bit(ASSOC_FLAG_SSID, &assoc_req->flags)) {
-- memcpy(&assoc_req->ssid, &adapter->curbssparams.ssid,
-+ memcpy(&assoc_req->ssid, &priv->curbssparams.ssid,
- IW_ESSID_MAX_SIZE);
-- assoc_req->ssid_len = adapter->curbssparams.ssid_len;
-+ assoc_req->ssid_len = priv->curbssparams.ssid_len;
- }
+- u32 usb_event_cause;
+- u8 usb_int_cause;
++ int ep_in_size;
- if (!test_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags))
-- assoc_req->channel = adapter->curbssparams.channel;
-+ assoc_req->channel = priv->curbssparams.channel;
+- u8 rx_urb_recall;
++ void *ep_out_buf;
++ int ep_out_size;
- if (!test_bit(ASSOC_FLAG_BAND, &assoc_req->flags))
-- assoc_req->band = adapter->curbssparams.band;
-+ assoc_req->band = priv->curbssparams.band;
+- u8 bootcmdresp;
++ const struct firmware *fw;
++ struct timer_list fw_timeout;
++ wait_queue_head_t fw_wq;
++ uint32_t fwseqnum;
++ uint32_t totalbytes;
++ uint32_t fwlastblksent;
++ uint8_t CRC_OK;
++ uint8_t fwdnldover;
++ uint8_t fwfinalblk;
++ uint8_t surprise_removed;
++
++ __le16 boot2_version;
+ };
- if (!test_bit(ASSOC_FLAG_MODE, &assoc_req->flags))
-- assoc_req->mode = adapter->mode;
-+ assoc_req->mode = priv->mode;
+ /** fwheader */
+@@ -86,10 +83,10 @@ struct fwheader {
- if (!test_bit(ASSOC_FLAG_BSSID, &assoc_req->flags)) {
-- memcpy(&assoc_req->bssid, adapter->curbssparams.bssid,
-+ memcpy(&assoc_req->bssid, priv->curbssparams.bssid,
- ETH_ALEN);
- }
+ #define FW_MAX_DATA_BLK_SIZE 600
+ /** FWData */
+-struct FWData {
+- struct fwheader fwheader;
++struct fwdata {
++ struct fwheader hdr;
+ __le32 seqnum;
+- u8 data[FW_MAX_DATA_BLK_SIZE];
++ uint8_t data[0];
+ };
- if (!test_bit(ASSOC_FLAG_WEP_KEYS, &assoc_req->flags)) {
- int i;
- for (i = 0; i < 4; i++) {
-- memcpy(&assoc_req->wep_keys[i], &adapter->wep_keys[i],
-+ memcpy(&assoc_req->wep_keys[i], &priv->wep_keys[i],
- sizeof(struct enc_key));
- }
- }
+ /** fwsyncheader */
+@@ -101,7 +98,5 @@ struct fwsyncheader {
+ #define FW_HAS_DATA_TO_RECV 0x00000001
+ #define FW_HAS_LAST_BLOCK 0x00000004
- if (!test_bit(ASSOC_FLAG_WEP_TX_KEYIDX, &assoc_req->flags))
-- assoc_req->wep_tx_keyidx = adapter->wep_tx_keyidx;
-+ assoc_req->wep_tx_keyidx = priv->wep_tx_keyidx;
+-#define FW_DATA_XMIT_SIZE \
+- sizeof(struct fwheader) + le32_to_cpu(fwdata->fwheader.datalength) + sizeof(u32)
- if (!test_bit(ASSOC_FLAG_WPA_MCAST_KEY, &assoc_req->flags)) {
-- memcpy(&assoc_req->wpa_mcast_key, &adapter->wpa_mcast_key,
-+ memcpy(&assoc_req->wpa_mcast_key, &priv->wpa_mcast_key,
- sizeof(struct enc_key));
- }
+ #endif
+diff --git a/drivers/net/wireless/libertas/join.c b/drivers/net/wireless/libertas/join.c
+index dc24a05..2d45080 100644
+--- a/drivers/net/wireless/libertas/join.c
++++ b/drivers/net/wireless/libertas/join.c
+@@ -30,16 +30,18 @@
+ * NOTE: Setting the MSB of the basic rates need to be taken
+ * care, either before or after calling this function
+ *
+- * @param adapter A pointer to wlan_adapter structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param rate1 the buffer which keeps input and output
+ * @param rate1_size the size of rate1 buffer; new size of buffer on return
+ *
+ * @return 0 or -1
+ */
+-static int get_common_rates(wlan_adapter * adapter, u8 * rates, u16 *rates_size)
++static int get_common_rates(struct lbs_private *priv,
++ u8 *rates,
++ u16 *rates_size)
+ {
+- u8 *card_rates = libertas_bg_rates;
+- size_t num_card_rates = sizeof(libertas_bg_rates);
++ u8 *card_rates = lbs_bg_rates;
++ size_t num_card_rates = sizeof(lbs_bg_rates);
+ int ret = 0, i, j;
+ u8 tmp[30];
+ size_t tmp_size = 0;
+@@ -55,15 +57,15 @@ static int get_common_rates(wlan_adapter * adapter, u8 * rates, u16 *rates_size)
+ lbs_deb_hex(LBS_DEB_JOIN, "AP rates ", rates, *rates_size);
+ lbs_deb_hex(LBS_DEB_JOIN, "card rates ", card_rates, num_card_rates);
+ lbs_deb_hex(LBS_DEB_JOIN, "common rates", tmp, tmp_size);
+- lbs_deb_join("Tx datarate is currently 0x%X\n", adapter->cur_rate);
++ lbs_deb_join("TX data rate 0x%02x\n", priv->cur_rate);
- if (!test_bit(ASSOC_FLAG_WPA_UCAST_KEY, &assoc_req->flags)) {
-- memcpy(&assoc_req->wpa_unicast_key, &adapter->wpa_unicast_key,
-+ memcpy(&assoc_req->wpa_unicast_key, &priv->wpa_unicast_key,
- sizeof(struct enc_key));
+- if (!adapter->auto_rate) {
++ if (!priv->auto_rate) {
+ for (i = 0; i < tmp_size; i++) {
+- if (tmp[i] == adapter->cur_rate)
++ if (tmp[i] == priv->cur_rate)
+ goto done;
+ }
+ lbs_pr_alert("Previously set fixed data rate %#x isn't "
+- "compatible with the network.\n", adapter->cur_rate);
++ "compatible with the network.\n", priv->cur_rate);
+ ret = -1;
+ goto done;
}
+@@ -85,7 +87,7 @@ done:
+ * @param rates buffer of data rates
+ * @param len size of buffer
+ */
+-static void libertas_set_basic_rate_flags(u8 * rates, size_t len)
++static void lbs_set_basic_rate_flags(u8 *rates, size_t len)
+ {
+ int i;
- if (!test_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags)) {
-- memcpy(&assoc_req->secinfo, &adapter->secinfo,
-- sizeof(struct wlan_802_11_security));
-+ memcpy(&assoc_req->secinfo, &priv->secinfo,
-+ sizeof(struct lbs_802_11_security));
- }
+@@ -104,7 +106,7 @@ static void libertas_set_basic_rate_flags(u8 * rates, size_t len)
+ * @param rates buffer of data rates
+ * @param len size of buffer
+ */
+-void libertas_unset_basic_rate_flags(u8 * rates, size_t len)
++void lbs_unset_basic_rate_flags(u8 *rates, size_t len)
+ {
+ int i;
- if (!test_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags)) {
-- memcpy(&assoc_req->wpa_ie, &adapter->wpa_ie,
-+ memcpy(&assoc_req->wpa_ie, &priv->wpa_ie,
- MAX_WPA_IE_LEN);
-- assoc_req->wpa_ie_len = adapter->wpa_ie_len;
-+ assoc_req->wpa_ie_len = priv->wpa_ie_len;
- }
+@@ -116,19 +118,18 @@ void libertas_unset_basic_rate_flags(u8 * rates, size_t len)
+ /**
+ * @brief Associate to a specific BSS discovered in a scan
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param pbssdesc Pointer to the BSS descriptor to associate with.
+ *
+ * @return 0-success, otherwise fail
+ */
+-int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req)
++int lbs_associate(struct lbs_private *priv, struct assoc_request *assoc_req)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ int ret;
-- print_assoc_req(__func__, assoc_req);
--
-+ lbs_deb_leave(LBS_DEB_ASSOC);
- return assoc_req;
- }
-diff --git a/drivers/net/wireless/libertas/assoc.h b/drivers/net/wireless/libertas/assoc.h
-index e09b749..08372bb 100644
---- a/drivers/net/wireless/libertas/assoc.h
-+++ b/drivers/net/wireless/libertas/assoc.h
-@@ -1,32 +1,12 @@
- /* Copyright (C) 2006, Red Hat, Inc. */
+- lbs_deb_enter(LBS_DEB_JOIN);
++ lbs_deb_enter(LBS_DEB_ASSOC);
--#ifndef _WLAN_ASSOC_H_
--#define _WLAN_ASSOC_H_
-+#ifndef _LBS_ASSOC_H_
-+#define _LBS_ASSOC_H_
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AUTHENTICATE,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AUTHENTICATE,
+ 0, CMD_OPTION_WAITFORRSP,
+ 0, assoc_req->bss.bssid);
- #include "dev.h"
+@@ -136,50 +137,50 @@ int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req)
+ goto done;
--void libertas_association_worker(struct work_struct *work);
-+void lbs_association_worker(struct work_struct *work);
-+struct assoc_request *lbs_get_association_request(struct lbs_private *priv);
-+void lbs_sync_channel(struct work_struct *work);
+ /* set preamble to firmware */
+- if ( (adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
++ if ( (priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
+ && (assoc_req->bss.capability & WLAN_CAPABILITY_SHORT_PREAMBLE))
+- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
++ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
+ else
+- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
++ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
--struct assoc_request * wlan_get_association_request(wlan_adapter *adapter);
--
--void libertas_sync_channel(struct work_struct *work);
--
--#define ASSOC_DELAY (HZ / 2)
--static inline void wlan_postpone_association_work(wlan_private *priv)
--{
-- if (priv->adapter->surpriseremoved)
-- return;
-- cancel_delayed_work(&priv->assoc_work);
-- queue_delayed_work(priv->work_thread, &priv->assoc_work, ASSOC_DELAY);
--}
--
--static inline void wlan_cancel_association_work(wlan_private *priv)
--{
-- cancel_delayed_work(&priv->assoc_work);
-- if (priv->adapter->pending_assoc_req) {
-- kfree(priv->adapter->pending_assoc_req);
-- priv->adapter->pending_assoc_req = NULL;
-- }
--}
--
--#endif /* _WLAN_ASSOC_H */
-+#endif /* _LBS_ASSOC_H */
-diff --git a/drivers/net/wireless/libertas/cmd.c b/drivers/net/wireless/libertas/cmd.c
-index be5cfd8..eab0203 100644
---- a/drivers/net/wireless/libertas/cmd.c
-+++ b/drivers/net/wireless/libertas/cmd.c
-@@ -11,47 +11,139 @@
- #include "dev.h"
- #include "join.h"
- #include "wext.h"
-+#include "cmd.h"
+- libertas_set_radio_control(priv);
++ lbs_set_radio_control(priv);
--static void cleanup_cmdnode(struct cmd_ctrl_node *ptempnode);
-+static struct cmd_ctrl_node *lbs_get_cmd_ctrl_node(struct lbs_private *priv);
-+static void lbs_set_cmd_ctrl_node(struct lbs_private *priv,
-+ struct cmd_ctrl_node *ptempnode,
-+ void *pdata_buf);
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_ASSOCIATE,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_ASSOCIATE,
+ 0, CMD_OPTION_WAITFORRSP, 0, assoc_req);
--static u16 commands_allowed_in_ps[] = {
-- CMD_802_11_RSSI,
--};
+ done:
+- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return ret;
+ }
/**
-- * @brief This function checks if the commans is allowed
-- * in PS mode not.
-+ * @brief Checks whether a command is allowed in Power Save mode
+ * @brief Start an Adhoc Network
*
- * @param command the command ID
-- * @return TRUE or FALSE
-+ * @return 1 if allowed, 0 if not allowed
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param adhocssid The ssid of the Adhoc Network
+ * @return 0--success, -1--fail
*/
--static u8 is_command_allowed_in_ps(__le16 command)
-+static u8 is_command_allowed_in_ps(u16 cmd)
+-int libertas_start_adhoc_network(wlan_private * priv, struct assoc_request * assoc_req)
++int lbs_start_adhoc_network(struct lbs_private *priv,
++ struct assoc_request *assoc_req)
{
-- int i;
--
-- for (i = 0; i < ARRAY_SIZE(commands_allowed_in_ps); i++) {
-- if (command == cpu_to_le16(commands_allowed_in_ps[i]))
-- return 1;
-+ switch (cmd) {
-+ case CMD_802_11_RSSI:
-+ return 1;
-+ default:
-+ break;
+- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+
+- adapter->adhoccreate = 1;
++ priv->adhoccreate = 1;
+
+- if (adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE) {
++ if (priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE) {
+ lbs_deb_join("AdhocStart: Short preamble\n");
+- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
++ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
+ } else {
+ lbs_deb_join("AdhocStart: Long preamble\n");
+- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
++ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
}
--
- return 0;
- }
--static int wlan_cmd_hw_spec(wlan_private * priv, struct cmd_ds_command *cmd)
-+/**
-+ * @brief Updates the hardware details like MAC address and regulatory region
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ *
-+ * @return 0 on success, error on failure
-+ */
-+int lbs_update_hw_spec(struct lbs_private *priv)
+- libertas_set_radio_control(priv);
++ lbs_set_radio_control(priv);
+
+ lbs_deb_join("AdhocStart: channel = %d\n", assoc_req->channel);
+ lbs_deb_join("AdhocStart: band = %d\n", assoc_req->band);
+
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_START,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_START,
+ 0, CMD_OPTION_WAITFORRSP, 0, assoc_req);
+
+ return ret;
+@@ -188,34 +189,34 @@ int libertas_start_adhoc_network(wlan_private * priv, struct assoc_request * ass
+ /**
+ * @brief Join an adhoc network found in a previous scan
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param pbssdesc Pointer to a BSS descriptor found in a previous scan
+ * to attempt to join
+ *
+ * @return 0--success, -1--fail
+ */
+-int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * assoc_req)
++int lbs_join_adhoc_network(struct lbs_private *priv,
++ struct assoc_request *assoc_req)
{
-- struct cmd_ds_get_hw_spec *hwspec = &cmd->params.hwspec;
-+ struct cmd_ds_get_hw_spec cmd;
-+ int ret = -1;
-+ u32 i;
-+ DECLARE_MAC_BUF(mac);
+- wlan_adapter *adapter = priv->adapter;
+ struct bss_descriptor * bss = &assoc_req->bss;
+ int ret = 0;
- lbs_deb_enter(LBS_DEB_CMD);
+ lbs_deb_join("%s: Current SSID '%s', ssid length %u\n",
+ __func__,
+- escape_essid(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len),
+- adapter->curbssparams.ssid_len);
++ escape_essid(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len),
++ priv->curbssparams.ssid_len);
+ lbs_deb_join("%s: requested ssid '%s', ssid length %u\n",
+ __func__, escape_essid(bss->ssid, bss->ssid_len),
+ bss->ssid_len);
-- cmd->command = cpu_to_le16(CMD_GET_HW_SPEC);
-- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_get_hw_spec) + S_DS_GEN);
-- memcpy(hwspec->permanentaddr, priv->adapter->current_addr, ETH_ALEN);
-+ memset(&cmd, 0, sizeof(cmd));
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ memcpy(cmd.permanentaddr, priv->current_addr, ETH_ALEN);
-+ ret = lbs_cmd_with_response(priv, CMD_GET_HW_SPEC, &cmd);
-+ if (ret)
-+ goto out;
-+
-+ priv->fwcapinfo = le32_to_cpu(cmd.fwcapinfo);
-+
-+ /* The firmware release is in an interesting format: the patch
-+ * level is in the most significant nibble ... so fix that: */
-+ priv->fwrelease = le32_to_cpu(cmd.fwrelease);
-+ priv->fwrelease = (priv->fwrelease << 8) |
-+ (priv->fwrelease >> 24 & 0xff);
-+
-+ /* Some firmware capabilities:
-+ * CF card firmware 5.0.16p0: cap 0x00000303
-+ * USB dongle firmware 5.110.17p2: cap 0x00000303
-+ */
-+ printk("libertas: %s, fw %u.%u.%up%u, cap 0x%08x\n",
-+ print_mac(mac, cmd.permanentaddr),
-+ priv->fwrelease >> 24 & 0xff,
-+ priv->fwrelease >> 16 & 0xff,
-+ priv->fwrelease >> 8 & 0xff,
-+ priv->fwrelease & 0xff,
-+ priv->fwcapinfo);
-+ lbs_deb_cmd("GET_HW_SPEC: hardware interface 0x%x, hardware spec 0x%04x\n",
-+ cmd.hwifversion, cmd.version);
-+
-+ /* Clamp region code to 8-bit since FW spec indicates that it should
-+ * only ever be 8-bit, even though the field size is 16-bit. Some firmware
-+ * returns non-zero high 8 bits here.
-+ */
-+ priv->regioncode = le16_to_cpu(cmd.regioncode) & 0xFF;
-+
-+ for (i = 0; i < MRVDRV_MAX_REGION_CODE; i++) {
-+ /* use the region code to search for the index */
-+ if (priv->regioncode == lbs_region_code_to_index[i])
-+ break;
-+ }
-+
-+ /* if it's unidentified region code, use the default (USA) */
-+ if (i >= MRVDRV_MAX_REGION_CODE) {
-+ priv->regioncode = 0x10;
-+ lbs_pr_info("unidentified region code; using the default (USA)\n");
-+ }
-+
-+ if (priv->current_addr[0] == 0xff)
-+ memmove(priv->current_addr, cmd.permanentaddr, ETH_ALEN);
-+
-+ memcpy(priv->dev->dev_addr, priv->current_addr, ETH_ALEN);
-+ if (priv->mesh_dev)
-+ memcpy(priv->mesh_dev->dev_addr, priv->current_addr, ETH_ALEN);
-+
-+ if (lbs_set_regiontable(priv, priv->regioncode, 0)) {
-+ ret = -1;
-+ goto out;
-+ }
+ /* check if the requested SSID is already joined */
+- if ( adapter->curbssparams.ssid_len
+- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len,
++ if ( priv->curbssparams.ssid_len
++ && !lbs_ssid_cmp(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len,
+ bss->ssid, bss->ssid_len)
+- && (adapter->mode == IW_MODE_ADHOC)
+- && (adapter->connect_status == LIBERTAS_CONNECTED)) {
++ && (priv->mode == IW_MODE_ADHOC)
++ && (priv->connect_status == LBS_CONNECTED)) {
+ union iwreq_data wrqu;
-+ if (lbs_set_universaltable(priv, 0)) {
-+ ret = -1;
-+ goto out;
-+ }
-+
-+out:
- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
-+ return ret;
- }
+ lbs_deb_join("ADHOC_J_CMD: New ad-hoc SSID is the same as "
+@@ -225,7 +226,7 @@ int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * asso
+ * request really was successful, even if just a null-op.
+ */
+ memset(&wrqu, 0, sizeof(wrqu));
+- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid,
++ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid,
+ ETH_ALEN);
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+ wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+@@ -235,22 +236,22 @@ int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * asso
+ /* Use shortpreamble only when both creator and card supports
+ short preamble */
+ if ( !(bss->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
+- || !(adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)) {
++ || !(priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)) {
+ lbs_deb_join("AdhocJoin: Long preamble\n");
+- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
++ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
+ } else {
+ lbs_deb_join("AdhocJoin: Short preamble\n");
+- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
++ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
+ }
--static int wlan_cmd_802_11_ps_mode(wlan_private * priv,
-+int lbs_host_sleep_cfg(struct lbs_private *priv, uint32_t criteria)
-+{
-+ struct cmd_ds_host_sleep cmd_config;
-+ int ret;
-+
-+ cmd_config.hdr.size = cpu_to_le16(sizeof(cmd_config));
-+ cmd_config.criteria = cpu_to_le32(criteria);
-+ cmd_config.gpio = priv->wol_gpio;
-+ cmd_config.gap = priv->wol_gap;
-+
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_HOST_SLEEP_CFG, &cmd_config);
-+ if (!ret) {
-+ lbs_deb_cmd("Set WOL criteria to %x\n", criteria);
-+ priv->wol_criteria = criteria;
-+ } else {
-+ lbs_pr_info("HOST_SLEEP_CFG failed %d\n", ret);
-+ }
-+
-+ return ret;
-+}
-+EXPORT_SYMBOL_GPL(lbs_host_sleep_cfg);
-+
-+static int lbs_cmd_802_11_ps_mode(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action)
- {
-@@ -90,161 +182,161 @@ static int wlan_cmd_802_11_ps_mode(wlan_private * priv,
- return 0;
- }
+- libertas_set_radio_control(priv);
++ lbs_set_radio_control(priv);
--static int wlan_cmd_802_11_inactivity_timeout(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u16 cmd_action, void *pdata_buf)
-+int lbs_cmd_802_11_inactivity_timeout(struct lbs_private *priv,
-+ uint16_t cmd_action, uint16_t *timeout)
- {
-- u16 *timeout = pdata_buf;
-+ struct cmd_ds_802_11_inactivity_timeout cmd;
-+ int ret;
+ lbs_deb_join("AdhocJoin: channel = %d\n", assoc_req->channel);
+ lbs_deb_join("AdhocJoin: band = %c\n", assoc_req->band);
- lbs_deb_enter(LBS_DEB_CMD);
+- adapter->adhoccreate = 0;
++ priv->adhoccreate = 0;
-- cmd->command = cpu_to_le16(CMD_802_11_INACTIVITY_TIMEOUT);
-- cmd->size =
-- cpu_to_le16(sizeof(struct cmd_ds_802_11_inactivity_timeout)
-- + S_DS_GEN);
-+ cmd.hdr.command = cpu_to_le16(CMD_802_11_INACTIVITY_TIMEOUT);
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_JOIN,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_JOIN,
+ 0, CMD_OPTION_WAITFORRSP,
+ OID_802_11_SSID, assoc_req);
-- cmd->params.inactivity_timeout.action = cpu_to_le16(cmd_action);
-+ cmd.action = cpu_to_le16(cmd_action);
+@@ -258,38 +259,37 @@ out:
+ return ret;
+ }
-- if (cmd_action)
-- cmd->params.inactivity_timeout.timeout = cpu_to_le16(*timeout);
-+ if (cmd_action == CMD_ACT_SET)
-+ cmd.timeout = cpu_to_le16(*timeout);
- else
-- cmd->params.inactivity_timeout.timeout = 0;
-+ cmd.timeout = 0;
+-int libertas_stop_adhoc_network(wlan_private * priv)
++int lbs_stop_adhoc_network(struct lbs_private *priv)
+ {
+- return libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_STOP,
++ return lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_STOP,
+ 0, CMD_OPTION_WAITFORRSP, 0, NULL);
+ }
-- lbs_deb_leave(LBS_DEB_CMD);
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_INACTIVITY_TIMEOUT, &cmd);
-+
-+ if (!ret)
-+ *timeout = le16_to_cpu(cmd.timeout);
-+
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
- return 0;
+ /**
+ * @brief Send Deauthentication Request
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return 0--success, -1--fail
+ */
+-int libertas_send_deauthentication(wlan_private * priv)
++int lbs_send_deauthentication(struct lbs_private *priv)
+ {
+- return libertas_prepare_and_send_command(priv, CMD_802_11_DEAUTHENTICATE,
++ return lbs_prepare_and_send_command(priv, CMD_802_11_DEAUTHENTICATE,
+ 0, CMD_OPTION_WAITFORRSP, 0, NULL);
}
--static int wlan_cmd_802_11_sleep_params(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u16 cmd_action)
-+int lbs_cmd_802_11_sleep_params(struct lbs_private *priv, uint16_t cmd_action,
-+ struct sleep_params *sp)
+ /**
+ * @brief This function prepares command of authenticate.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param cmd A pointer to cmd_ds_command structure
+ * @param pdata_buf Void cast of pointer to a BSSID to authenticate with
+ *
+ * @return 0 or -1
+ */
+-int libertas_cmd_80211_authenticate(wlan_private * priv,
++int lbs_cmd_80211_authenticate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf)
{
- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ds_802_11_sleep_params *sp = &cmd->params.sleep_params;
-+ struct cmd_ds_802_11_sleep_params cmd;
-+ int ret;
-
- lbs_deb_enter(LBS_DEB_CMD);
+ struct cmd_ds_802_11_authenticate *pauthenticate = &cmd->params.auth;
+ int ret = -1;
+ u8 *bssid = pdata_buf;
+@@ -302,7 +302,7 @@ int libertas_cmd_80211_authenticate(wlan_private * priv,
+ + S_DS_GEN);
-- cmd->size = cpu_to_le16((sizeof(struct cmd_ds_802_11_sleep_params)) +
-- S_DS_GEN);
-- cmd->command = cpu_to_le16(CMD_802_11_SLEEP_PARAMS);
--
- if (cmd_action == CMD_ACT_GET) {
-- memset(&adapter->sp, 0, sizeof(struct sleep_params));
-- memset(sp, 0, sizeof(struct cmd_ds_802_11_sleep_params));
-- sp->action = cpu_to_le16(cmd_action);
-- } else if (cmd_action == CMD_ACT_SET) {
-- sp->action = cpu_to_le16(cmd_action);
-- sp->error = cpu_to_le16(adapter->sp.sp_error);
-- sp->offset = cpu_to_le16(adapter->sp.sp_offset);
-- sp->stabletime = cpu_to_le16(adapter->sp.sp_stabletime);
-- sp->calcontrol = (u8) adapter->sp.sp_calcontrol;
-- sp->externalsleepclk = (u8) adapter->sp.sp_extsleepclk;
-- sp->reserved = cpu_to_le16(adapter->sp.sp_reserved);
-+ memset(&cmd, 0, sizeof(cmd));
-+ } else {
-+ cmd.error = cpu_to_le16(sp->sp_error);
-+ cmd.offset = cpu_to_le16(sp->sp_offset);
-+ cmd.stabletime = cpu_to_le16(sp->sp_stabletime);
-+ cmd.calcontrol = sp->sp_calcontrol;
-+ cmd.externalsleepclk = sp->sp_extsleepclk;
-+ cmd.reserved = cpu_to_le16(sp->sp_reserved);
-+ }
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(cmd_action);
-+
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_SLEEP_PARAMS, &cmd);
-+
-+ if (!ret) {
-+ lbs_deb_cmd("error 0x%x, offset 0x%x, stabletime 0x%x, "
-+ "calcontrol 0x%x extsleepclk 0x%x\n",
-+ le16_to_cpu(cmd.error), le16_to_cpu(cmd.offset),
-+ le16_to_cpu(cmd.stabletime), cmd.calcontrol,
-+ cmd.externalsleepclk);
-+
-+ sp->sp_error = le16_to_cpu(cmd.error);
-+ sp->sp_offset = le16_to_cpu(cmd.offset);
-+ sp->sp_stabletime = le16_to_cpu(cmd.stabletime);
-+ sp->sp_calcontrol = cmd.calcontrol;
-+ sp->sp_extsleepclk = cmd.externalsleepclk;
-+ sp->sp_reserved = le16_to_cpu(cmd.reserved);
+ /* translate auth mode to 802.11 defined wire value */
+- switch (adapter->secinfo.auth_mode) {
++ switch (priv->secinfo.auth_mode) {
+ case IW_AUTH_ALG_OPEN_SYSTEM:
+ pauthenticate->authtype = 0x00;
+ break;
+@@ -314,13 +314,13 @@ int libertas_cmd_80211_authenticate(wlan_private * priv,
+ break;
+ default:
+ lbs_deb_join("AUTH_CMD: invalid auth alg 0x%X\n",
+- adapter->secinfo.auth_mode);
++ priv->secinfo.auth_mode);
+ goto out;
}
-- lbs_deb_leave(LBS_DEB_CMD);
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+ memcpy(pauthenticate->macaddr, bssid, ETH_ALEN);
+
+- lbs_deb_join("AUTH_CMD: BSSID is : %s auth=0x%X\n",
++ lbs_deb_join("AUTH_CMD: BSSID %s, auth 0x%x\n",
+ print_mac(mac, bssid), pauthenticate->authtype);
+ ret = 0;
+
+@@ -329,10 +329,9 @@ out:
+ return ret;
+ }
+
+-int libertas_cmd_80211_deauthenticate(wlan_private * priv,
++int lbs_cmd_80211_deauthenticate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ struct cmd_ds_802_11_deauthenticate *dauth = &cmd->params.deauth;
+
+ lbs_deb_enter(LBS_DEB_JOIN);
+@@ -342,7 +341,7 @@ int libertas_cmd_80211_deauthenticate(wlan_private * priv,
+ S_DS_GEN);
+
+ /* set AP MAC address */
+- memmove(dauth->macaddr, adapter->curbssparams.bssid, ETH_ALEN);
++ memmove(dauth->macaddr, priv->curbssparams.bssid, ETH_ALEN);
+
+ /* Reason code 3 = Station is leaving */
+ #define REASON_CODE_STA_LEAVING 3
+@@ -352,10 +351,9 @@ int libertas_cmd_80211_deauthenticate(wlan_private * priv,
return 0;
}
--static int wlan_cmd_802_11_set_wep(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u32 cmd_act,
-- void * pdata_buf)
-+int lbs_cmd_802_11_set_wep(struct lbs_private *priv, uint16_t cmd_action,
-+ struct assoc_request *assoc)
+-int libertas_cmd_80211_associate(wlan_private * priv,
++int lbs_cmd_80211_associate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, void *pdata_buf)
{
-- struct cmd_ds_802_11_set_wep *wep = &cmd->params.wep;
- wlan_adapter *adapter = priv->adapter;
-+ struct cmd_ds_802_11_set_wep cmd;
+ struct cmd_ds_802_11_associate *passo = &cmd->params.associate;
int ret = 0;
-- struct assoc_request * assoc_req = pdata_buf;
+ struct assoc_request * assoc_req = pdata_buf;
+@@ -368,11 +366,11 @@ int libertas_cmd_80211_associate(wlan_private * priv,
+ struct mrvlietypes_ratesparamset *rates;
+ struct mrvlietypes_rsnparamset *rsn;
- lbs_deb_enter(LBS_DEB_CMD);
+- lbs_deb_enter(LBS_DEB_JOIN);
++ lbs_deb_enter(LBS_DEB_ASSOC);
-- cmd->command = cpu_to_le16(CMD_802_11_SET_WEP);
-- cmd->size = cpu_to_le16(sizeof(*wep) + S_DS_GEN);
--
-- if (cmd_act == CMD_ACT_ADD) {
-- int i;
-+ cmd.hdr.command = cpu_to_le16(CMD_802_11_SET_WEP);
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
+ pos = (u8 *) passo;
-- if (!assoc_req) {
-- lbs_deb_cmd("Invalid association request!");
-- ret = -1;
-- goto done;
-- }
-+ cmd.action = cpu_to_le16(cmd_action);
+- if (!adapter) {
++ if (!priv) {
+ ret = -1;
+ goto done;
+ }
+@@ -416,22 +414,22 @@ int libertas_cmd_80211_associate(wlan_private * priv,
+ rates->header.type = cpu_to_le16(TLV_TYPE_RATES);
+ memcpy(&rates->rates, &bss->rates, MAX_RATES);
+ tmplen = MAX_RATES;
+- if (get_common_rates(adapter, rates->rates, &tmplen)) {
++ if (get_common_rates(priv, rates->rates, &tmplen)) {
+ ret = -1;
+ goto done;
+ }
+ pos += sizeof(rates->header) + tmplen;
+ rates->header.len = cpu_to_le16(tmplen);
+- lbs_deb_join("ASSOC_CMD: num rates = %u\n", tmplen);
++ lbs_deb_assoc("ASSOC_CMD: num rates %u\n", tmplen);
-- wep->action = cpu_to_le16(CMD_ACT_ADD);
-+ if (cmd_action == CMD_ACT_ADD) {
-+ int i;
+ /* Copy the infra. association rates into Current BSS state structure */
+- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
+- memcpy(&adapter->curbssparams.rates, &rates->rates, tmplen);
++ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
++ memcpy(&priv->curbssparams.rates, &rates->rates, tmplen);
- /* default tx key index */
-- wep->keyindex = cpu_to_le16((u16)(assoc_req->wep_tx_keyidx &
-- (u32)CMD_WEP_KEY_INDEX_MASK));
-+ cmd.keyindex = cpu_to_le16(assoc->wep_tx_keyidx &
-+ CMD_WEP_KEY_INDEX_MASK);
+ /* Set MSB on basic rates as the firmware requires, but _after_
+ * copying to current bss rates.
+ */
+- libertas_set_basic_rate_flags(rates->rates, tmplen);
++ lbs_set_basic_rate_flags(rates->rates, tmplen);
- /* Copy key types and material to host command structure */
- for (i = 0; i < 4; i++) {
-- struct enc_key * pkey = &assoc_req->wep_keys[i];
-+ struct enc_key *pkey = &assoc->wep_keys[i];
+ if (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled) {
+ rsn = (struct mrvlietypes_rsnparamset *) pos;
+@@ -446,9 +444,9 @@ int libertas_cmd_80211_associate(wlan_private * priv,
+ }
- switch (pkey->len) {
- case KEY_LEN_WEP_40:
-- wep->keytype[i] = CMD_TYPE_WEP_40_BIT;
-- memmove(&wep->keymaterial[i], pkey->key,
-- pkey->len);
-+ cmd.keytype[i] = CMD_TYPE_WEP_40_BIT;
-+ memmove(cmd.keymaterial[i], pkey->key, pkey->len);
- lbs_deb_cmd("SET_WEP: add key %d (40 bit)\n", i);
- break;
- case KEY_LEN_WEP_104:
-- wep->keytype[i] = CMD_TYPE_WEP_104_BIT;
-- memmove(&wep->keymaterial[i], pkey->key,
-- pkey->len);
-+ cmd.keytype[i] = CMD_TYPE_WEP_104_BIT;
-+ memmove(cmd.keymaterial[i], pkey->key, pkey->len);
- lbs_deb_cmd("SET_WEP: add key %d (104 bit)\n", i);
- break;
- case 0:
- break;
- default:
- lbs_deb_cmd("SET_WEP: invalid key %d, length %d\n",
-- i, pkey->len);
-+ i, pkey->len);
- ret = -1;
- goto done;
- break;
- }
- }
-- } else if (cmd_act == CMD_ACT_REMOVE) {
-+ } else if (cmd_action == CMD_ACT_REMOVE) {
- /* ACT_REMOVE clears _all_ WEP keys */
-- wep->action = cpu_to_le16(CMD_ACT_REMOVE);
+ /* update curbssparams */
+- adapter->curbssparams.channel = bss->phyparamset.dsparamset.currentchan;
++ priv->curbssparams.channel = bss->phyparamset.dsparamset.currentchan;
- /* default tx key index */
-- wep->keyindex = cpu_to_le16((u16)(adapter->wep_tx_keyidx &
-- (u32)CMD_WEP_KEY_INDEX_MASK));
-- lbs_deb_cmd("SET_WEP: remove key %d\n", adapter->wep_tx_keyidx);
-+ cmd.keyindex = cpu_to_le16(priv->wep_tx_keyidx &
-+ CMD_WEP_KEY_INDEX_MASK);
-+ lbs_deb_cmd("SET_WEP: remove key %d\n", priv->wep_tx_keyidx);
+- if (libertas_parse_dnld_countryinfo_11d(priv, bss)) {
++ if (lbs_parse_dnld_countryinfo_11d(priv, bss)) {
+ ret = -1;
+ goto done;
}
+@@ -460,18 +458,16 @@ int libertas_cmd_80211_associate(wlan_private * priv,
+ if (bss->mode == IW_MODE_INFRA)
+ tmpcap |= WLAN_CAPABILITY_ESS;
+ passo->capability = cpu_to_le16(tmpcap);
+- lbs_deb_join("ASSOC_CMD: capability=%4X CAPINFO_MASK=%4X\n",
+- tmpcap, CAPINFO_MASK);
++ lbs_deb_assoc("ASSOC_CMD: capability 0x%04x\n", tmpcap);
-- ret = 0;
--
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_SET_WEP, &cmd);
done:
- lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
+- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
return ret;
}
--static int wlan_cmd_802_11_enable_rsn(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u16 cmd_action,
-- void * pdata_buf)
-+int lbs_cmd_802_11_enable_rsn(struct lbs_private *priv, uint16_t cmd_action,
-+ uint16_t *enable)
+-int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_start(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, void *pdata_buf)
{
-- struct cmd_ds_802_11_enable_rsn *penableRSN = &cmd->params.enbrsn;
-- u32 * enable = pdata_buf;
-+ struct cmd_ds_802_11_enable_rsn cmd;
-+ int ret;
-
- lbs_deb_enter(LBS_DEB_CMD);
+- wlan_adapter *adapter = priv->adapter;
+ struct cmd_ds_802_11_ad_hoc_start *adhs = &cmd->params.ads;
+ int ret = 0;
+ int cmdappendsize = 0;
+@@ -481,7 +477,7 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
-- cmd->command = cpu_to_le16(CMD_802_11_ENABLE_RSN);
-- cmd->size = cpu_to_le16(sizeof(*penableRSN) + S_DS_GEN);
-- penableRSN->action = cpu_to_le16(cmd_action);
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(cmd_action);
+ lbs_deb_enter(LBS_DEB_JOIN);
- if (cmd_action == CMD_ACT_SET) {
- if (*enable)
-- penableRSN->enable = cpu_to_le16(CMD_ENABLE_RSN);
-+ cmd.enable = cpu_to_le16(CMD_ENABLE_RSN);
- else
-- penableRSN->enable = cpu_to_le16(CMD_DISABLE_RSN);
-+ cmd.enable = cpu_to_le16(CMD_DISABLE_RSN);
- lbs_deb_cmd("ENABLE_RSN: %d\n", *enable);
+- if (!adapter) {
++ if (!priv) {
+ ret = -1;
+ goto done;
}
+@@ -491,7 +487,7 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
+ /*
+ * Fill in the parameters for 2 data structures:
+ * 1. cmd_ds_802_11_ad_hoc_start command
+- * 2. adapter->scantable[i]
++ * 2. priv->scantable[i]
+ *
+ * Driver will fill up SSID, bsstype,IBSS param, Physical Param,
+ * probe delay, and cap info.
+@@ -509,8 +505,10 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
-- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
--}
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_ENABLE_RSN, &cmd);
-+ if (!ret && cmd_action == CMD_ACT_GET)
-+ *enable = le16_to_cpu(cmd.enable);
+ /* set the BSS type */
+ adhs->bsstype = CMD_BSS_TYPE_IBSS;
+- adapter->mode = IW_MODE_ADHOC;
+- adhs->beaconperiod = cpu_to_le16(MRVDRV_BEACON_INTERVAL);
++ priv->mode = IW_MODE_ADHOC;
++ if (priv->beacon_period == 0)
++ priv->beacon_period = MRVDRV_BEACON_INTERVAL;
++ adhs->beaconperiod = cpu_to_le16(priv->beacon_period);
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-+ return ret;
-+}
+ /* set Physical param set */
+ #define DS_PARA_IE_ID 3
+@@ -548,24 +546,24 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
+ adhs->probedelay = cpu_to_le16(CMD_SCAN_PROBE_DELAY_TIME);
- static void set_one_wpa_key(struct MrvlIEtype_keyParamSet * pkeyparamset,
- struct enc_key * pkey)
-@@ -272,7 +364,7 @@ static void set_one_wpa_key(struct MrvlIEtype_keyParamSet * pkeyparamset,
- lbs_deb_leave(LBS_DEB_CMD);
- }
+ memset(adhs->rates, 0, sizeof(adhs->rates));
+- ratesize = min(sizeof(adhs->rates), sizeof(libertas_bg_rates));
+- memcpy(adhs->rates, libertas_bg_rates, ratesize);
++ ratesize = min(sizeof(adhs->rates), sizeof(lbs_bg_rates));
++ memcpy(adhs->rates, lbs_bg_rates, ratesize);
--static int wlan_cmd_802_11_key_material(wlan_private * priv,
-+static int lbs_cmd_802_11_key_material(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action,
- u32 cmd_oid, void *pdata_buf)
-@@ -319,7 +411,7 @@ done:
- return ret;
- }
+ /* Copy the ad-hoc creating rates into Current BSS state structure */
+- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
+- memcpy(&adapter->curbssparams.rates, &adhs->rates, ratesize);
++ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
++ memcpy(&priv->curbssparams.rates, &adhs->rates, ratesize);
--static int wlan_cmd_802_11_reset(wlan_private * priv,
-+static int lbs_cmd_802_11_reset(struct lbs_private *priv,
- struct cmd_ds_command *cmd, int cmd_action)
- {
- struct cmd_ds_802_11_reset *reset = &cmd->params.reset;
-@@ -334,7 +426,7 @@ static int wlan_cmd_802_11_reset(wlan_private * priv,
- return 0;
- }
+ /* Set MSB on basic rates as the firmware requires, but _after_
+ * copying to current bss rates.
+ */
+- libertas_set_basic_rate_flags(adhs->rates, ratesize);
++ lbs_set_basic_rate_flags(adhs->rates, ratesize);
--static int wlan_cmd_802_11_get_log(wlan_private * priv,
-+static int lbs_cmd_802_11_get_log(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
- {
- lbs_deb_enter(LBS_DEB_CMD);
-@@ -346,7 +438,7 @@ static int wlan_cmd_802_11_get_log(wlan_private * priv,
- return 0;
+ lbs_deb_join("ADHOC_S_CMD: rates=%02x %02x %02x %02x \n",
+ adhs->rates[0], adhs->rates[1], adhs->rates[2], adhs->rates[3]);
+
+ lbs_deb_join("ADHOC_S_CMD: AD HOC Start command is ready\n");
+
+- if (libertas_create_dnld_countryinfo_11d(priv)) {
++ if (lbs_create_dnld_countryinfo_11d(priv)) {
+ lbs_deb_join("ADHOC_S_CMD: dnld_countryinfo_11d failed\n");
+ ret = -1;
+ goto done;
+@@ -580,7 +578,7 @@ done:
+ return ret;
}
--static int wlan_cmd_802_11_get_stat(wlan_private * priv,
-+static int lbs_cmd_802_11_get_stat(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
+-int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_stop(struct lbs_private *priv,
+ struct cmd_ds_command *cmd)
{
- lbs_deb_enter(LBS_DEB_CMD);
-@@ -358,13 +450,12 @@ static int wlan_cmd_802_11_get_stat(wlan_private * priv,
+ cmd->command = cpu_to_le16(CMD_802_11_AD_HOC_STOP);
+@@ -589,10 +587,9 @@ int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
return 0;
}
--static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
-+static int lbs_cmd_802_11_snmp_mib(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- int cmd_action,
- int cmd_oid, void *pdata_buf)
+-int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_join(struct lbs_private *priv,
+ struct cmd_ds_command *cmd, void *pdata_buf)
{
- struct cmd_ds_802_11_snmp_mib *pSNMPMIB = &cmd->params.smib;
- wlan_adapter *adapter = priv->adapter;
- u8 ucTemp;
+ struct cmd_ds_802_11_ad_hoc_join *join_cmd = &cmd->params.adj;
+ struct assoc_request * assoc_req = pdata_buf;
+ struct bss_descriptor *bss = &assoc_req->bss;
+@@ -633,26 +630,26 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
+ /* probedelay */
+ join_cmd->probedelay = cpu_to_le16(CMD_SCAN_PROBE_DELAY_TIME);
- lbs_deb_enter(LBS_DEB_CMD);
-@@ -380,7 +471,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
- u8 mode = (u8) (size_t) pdata_buf;
- pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
- pSNMPMIB->oid = cpu_to_le16((u16) DESIRED_BSSTYPE_I);
-- pSNMPMIB->bufsize = sizeof(u8);
-+ pSNMPMIB->bufsize = cpu_to_le16(sizeof(u8));
- if (mode == IW_MODE_ADHOC) {
- ucTemp = SNMP_MIB_VALUE_ADHOC;
- } else {
-@@ -400,8 +491,8 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
- pSNMPMIB->oid = cpu_to_le16((u16) DOT11D_I);
+- adapter->curbssparams.channel = bss->channel;
++ priv->curbssparams.channel = bss->channel;
- if (cmd_action == CMD_ACT_SET) {
-- pSNMPMIB->querytype = CMD_ACT_SET;
-- pSNMPMIB->bufsize = sizeof(u16);
-+ pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
-+ pSNMPMIB->bufsize = cpu_to_le16(sizeof(u16));
- ulTemp = *(u32 *)pdata_buf;
- *((__le16 *)(pSNMPMIB->value)) =
- cpu_to_le16((u16) ulTemp);
-@@ -433,7 +524,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
- {
+ /* Copy Data rates from the rates recorded in scan response */
+ memset(join_cmd->bss.rates, 0, sizeof(join_cmd->bss.rates));
+ ratesize = min_t(u16, sizeof(join_cmd->bss.rates), MAX_RATES);
+ memcpy(join_cmd->bss.rates, bss->rates, ratesize);
+- if (get_common_rates(adapter, join_cmd->bss.rates, &ratesize)) {
++ if (get_common_rates(priv, join_cmd->bss.rates, &ratesize)) {
+ lbs_deb_join("ADHOC_J_CMD: get_common_rates returns error.\n");
+ ret = -1;
+ goto done;
+ }
- u32 ulTemp;
-- pSNMPMIB->oid = le16_to_cpu((u16) RTSTHRESH_I);
-+ pSNMPMIB->oid = cpu_to_le16(RTSTHRESH_I);
+ /* Copy the ad-hoc creating rates into Current BSS state structure */
+- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
+- memcpy(&adapter->curbssparams.rates, join_cmd->bss.rates, ratesize);
++ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
++ memcpy(&priv->curbssparams.rates, join_cmd->bss.rates, ratesize);
- if (cmd_action == CMD_ACT_GET) {
- pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_GET);
-@@ -456,7 +547,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
- pSNMPMIB->querytype = cpu_to_le16(CMD_ACT_SET);
- pSNMPMIB->bufsize = cpu_to_le16(sizeof(u16));
- *((__le16 *)(pSNMPMIB->value)) =
-- cpu_to_le16((u16) adapter->txretrycount);
-+ cpu_to_le16((u16) priv->txretrycount);
+ /* Set MSB on basic rates as the firmware requires, but _after_
+ * copying to current bss rates.
+ */
+- libertas_set_basic_rate_flags(join_cmd->bss.rates, ratesize);
++ lbs_set_basic_rate_flags(join_cmd->bss.rates, ratesize);
+
+ join_cmd->bss.ssparamset.ibssparamset.atimwindow =
+ cpu_to_le16(bss->atimwindow);
+@@ -663,12 +660,12 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
+ join_cmd->bss.capability = cpu_to_le16(tmp);
+ }
+
+- if (adapter->psmode == WLAN802_11POWERMODEMAX_PSP) {
++ if (priv->psmode == LBS802_11POWERMODEMAX_PSP) {
+ /* wake up first */
+ __le32 Localpsmode;
+
+- Localpsmode = cpu_to_le32(WLAN802_11POWERMODECAM);
+- ret = libertas_prepare_and_send_command(priv,
++ Localpsmode = cpu_to_le32(LBS802_11POWERMODECAM);
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_PS_MODE,
+ CMD_ACT_SET,
+ 0, 0, &Localpsmode);
+@@ -679,7 +676,7 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
}
+ }
- break;
-@@ -479,47 +570,7 @@ static int wlan_cmd_802_11_snmp_mib(wlan_private * priv,
- return 0;
+- if (libertas_parse_dnld_countryinfo_11d(priv, bss)) {
++ if (lbs_parse_dnld_countryinfo_11d(priv, bss)) {
+ ret = -1;
+ goto done;
+ }
+@@ -692,24 +689,23 @@ done:
+ return ret;
}
--static int wlan_cmd_802_11_radio_control(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- int cmd_action)
--{
+-int libertas_ret_80211_associate(wlan_private * priv,
++int lbs_ret_80211_associate(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
+ {
- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ds_802_11_radio_control *pradiocontrol = &cmd->params.radio;
--
-- lbs_deb_enter(LBS_DEB_CMD);
--
-- cmd->size =
-- cpu_to_le16((sizeof(struct cmd_ds_802_11_radio_control)) +
-- S_DS_GEN);
-- cmd->command = cpu_to_le16(CMD_802_11_RADIO_CONTROL);
--
-- pradiocontrol->action = cpu_to_le16(cmd_action);
--
-- switch (adapter->preamble) {
-- case CMD_TYPE_SHORT_PREAMBLE:
-- pradiocontrol->control = cpu_to_le16(SET_SHORT_PREAMBLE);
-- break;
+ int ret = 0;
+ union iwreq_data wrqu;
+ struct ieeetypes_assocrsp *passocrsp;
+ struct bss_descriptor * bss;
+ u16 status_code;
+
+- lbs_deb_enter(LBS_DEB_JOIN);
++ lbs_deb_enter(LBS_DEB_ASSOC);
+
+- if (!adapter->in_progress_assoc_req) {
+- lbs_deb_join("ASSOC_RESP: no in-progress association request\n");
++ if (!priv->in_progress_assoc_req) {
++ lbs_deb_assoc("ASSOC_RESP: no in-progress assoc request\n");
+ ret = -1;
+ goto done;
+ }
+- bss = &adapter->in_progress_assoc_req->bss;
++ bss = &priv->in_progress_assoc_req->bss;
+
+ passocrsp = (struct ieeetypes_assocrsp *) & resp->params;
+
+@@ -734,96 +730,83 @@ int libertas_ret_80211_associate(wlan_private * priv,
+ status_code = le16_to_cpu(passocrsp->statuscode);
+ switch (status_code) {
+ case 0x00:
+- lbs_deb_join("ASSOC_RESP: Association succeeded\n");
+ break;
+ case 0x01:
+- lbs_deb_join("ASSOC_RESP: Association failed; invalid "
+- "parameters (status code %d)\n", status_code);
++ lbs_deb_assoc("ASSOC_RESP: invalid parameters\n");
+ break;
+ case 0x02:
+- lbs_deb_join("ASSOC_RESP: Association failed; internal timer "
+- "expired while waiting for the AP (status code %d)"
+- "\n", status_code);
++ lbs_deb_assoc("ASSOC_RESP: internal timer "
++ "expired while waiting for the AP\n");
+ break;
+ case 0x03:
+- lbs_deb_join("ASSOC_RESP: Association failed; association "
+- "was refused by the AP (status code %d)\n",
+- status_code);
++ lbs_deb_assoc("ASSOC_RESP: association "
++ "refused by AP\n");
+ break;
+ case 0x04:
+- lbs_deb_join("ASSOC_RESP: Association failed; authentication "
+- "was refused by the AP (status code %d)\n",
+- status_code);
++ lbs_deb_assoc("ASSOC_RESP: authentication "
++ "refused by AP\n");
+ break;
+ default:
+- lbs_deb_join("ASSOC_RESP: Association failed; reason unknown "
+- "(status code %d)\n", status_code);
++ lbs_deb_assoc("ASSOC_RESP: failure reason 0x%02x "
++ " unknown\n", status_code);
+ break;
+ }
+
+ if (status_code) {
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
+ ret = -1;
+ goto done;
+ }
+
+- lbs_deb_hex(LBS_DEB_JOIN, "ASSOC_RESP", (void *)&resp->params,
++ lbs_deb_hex(LBS_DEB_ASSOC, "ASSOC_RESP", (void *)&resp->params,
+ le16_to_cpu(resp->size) - S_DS_GEN);
+
+ /* Send a Media Connected event, according to the Spec */
+- adapter->connect_status = LIBERTAS_CONNECTED;
-
-- case CMD_TYPE_LONG_PREAMBLE:
-- pradiocontrol->control = cpu_to_le16(SET_LONG_PREAMBLE);
-- break;
+- lbs_deb_join("ASSOC_RESP: assocated to '%s'\n",
+- escape_essid(bss->ssid, bss->ssid_len));
++ priv->connect_status = LBS_CONNECTED;
+
+ /* Update current SSID and BSSID */
+- memcpy(&adapter->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
+- adapter->curbssparams.ssid_len = bss->ssid_len;
+- memcpy(adapter->curbssparams.bssid, bss->bssid, ETH_ALEN);
++ memcpy(&priv->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
++ priv->curbssparams.ssid_len = bss->ssid_len;
++ memcpy(priv->curbssparams.bssid, bss->bssid, ETH_ALEN);
+
+- lbs_deb_join("ASSOC_RESP: currentpacketfilter is %x\n",
+- adapter->currentpacketfilter);
++ lbs_deb_assoc("ASSOC_RESP: currentpacketfilter is 0x%x\n",
++ priv->currentpacketfilter);
+
+- adapter->SNR[TYPE_RXPD][TYPE_AVG] = 0;
+- adapter->NF[TYPE_RXPD][TYPE_AVG] = 0;
++ priv->SNR[TYPE_RXPD][TYPE_AVG] = 0;
++ priv->NF[TYPE_RXPD][TYPE_AVG] = 0;
+
+- memset(adapter->rawSNR, 0x00, sizeof(adapter->rawSNR));
+- memset(adapter->rawNF, 0x00, sizeof(adapter->rawNF));
+- adapter->nextSNRNF = 0;
+- adapter->numSNRNF = 0;
++ memset(priv->rawSNR, 0x00, sizeof(priv->rawSNR));
++ memset(priv->rawNF, 0x00, sizeof(priv->rawNF));
++ priv->nextSNRNF = 0;
++ priv->numSNRNF = 0;
+
+ netif_carrier_on(priv->dev);
+- netif_wake_queue(priv->dev);
-
-- case CMD_TYPE_AUTO_PREAMBLE:
-- default:
-- pradiocontrol->control = cpu_to_le16(SET_AUTO_PREAMBLE);
-- break;
+- if (priv->mesh_dev) {
+- netif_carrier_on(priv->mesh_dev);
+- netif_wake_queue(priv->mesh_dev);
- }
--
-- if (adapter->radioon)
-- pradiocontrol->control |= cpu_to_le16(TURN_ON_RF);
-- else
-- pradiocontrol->control &= cpu_to_le16(~TURN_ON_RF);
--
-- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
--}
--
--static int wlan_cmd_802_11_rf_tx_power(wlan_private * priv,
-+static int lbs_cmd_802_11_rf_tx_power(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action, void *pdata_buf)
- {
-@@ -563,7 +614,7 @@ static int wlan_cmd_802_11_rf_tx_power(wlan_private * priv,
- return 0;
++ if (!priv->tx_pending_len)
++ netif_wake_queue(priv->dev);
+
+- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid, ETH_ALEN);
++ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid, ETH_ALEN);
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+ wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+
+ done:
+- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
++ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
+ return ret;
}
--static int wlan_cmd_802_11_monitor_mode(wlan_private * priv,
-+static int lbs_cmd_802_11_monitor_mode(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action, void *pdata_buf)
+-int libertas_ret_80211_disassociate(wlan_private * priv,
++int lbs_ret_80211_disassociate(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
-@@ -583,13 +634,12 @@ static int wlan_cmd_802_11_monitor_mode(wlan_private * priv,
+ lbs_deb_enter(LBS_DEB_JOIN);
+
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
+
+ lbs_deb_leave(LBS_DEB_JOIN);
return 0;
}
--static int wlan_cmd_802_11_rate_adapt_rateset(wlan_private * priv,
-+static int lbs_cmd_802_11_rate_adapt_rateset(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action)
+-int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
++int lbs_ret_80211_ad_hoc_start(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- struct cmd_ds_802_11_rate_adapt_rateset
- *rateadapt = &cmd->params.rateset;
- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+ u16 command = le16_to_cpu(resp->command);
+ u16 result = le16_to_cpu(resp->result);
+@@ -840,20 +823,20 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
+ lbs_deb_join("ADHOC_RESP: command = %x\n", command);
+ lbs_deb_join("ADHOC_RESP: result = %x\n", result);
- lbs_deb_enter(LBS_DEB_CMD);
- cmd->size =
-@@ -598,46 +648,100 @@ static int wlan_cmd_802_11_rate_adapt_rateset(wlan_private * priv,
- cmd->command = cpu_to_le16(CMD_802_11_RATE_ADAPT_RATESET);
+- if (!adapter->in_progress_assoc_req) {
++ if (!priv->in_progress_assoc_req) {
+ lbs_deb_join("ADHOC_RESP: no in-progress association request\n");
+ ret = -1;
+ goto done;
+ }
+- bss = &adapter->in_progress_assoc_req->bss;
++ bss = &priv->in_progress_assoc_req->bss;
- rateadapt->action = cpu_to_le16(cmd_action);
-- rateadapt->enablehwauto = cpu_to_le16(adapter->enablehwauto);
-- rateadapt->bitmap = cpu_to_le16(adapter->ratebitmap);
-+ rateadapt->enablehwauto = cpu_to_le16(priv->enablehwauto);
-+ rateadapt->bitmap = cpu_to_le16(priv->ratebitmap);
+ /*
+ * Join result code 0 --> SUCCESS
+ */
+ if (result) {
+ lbs_deb_join("ADHOC_RESP: failed\n");
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- libertas_mac_event_disconnected(priv);
++ if (priv->connect_status == LBS_CONNECTED) {
++ lbs_mac_event_disconnected(priv);
+ }
+ ret = -1;
+ goto done;
+@@ -867,7 +850,7 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
+ escape_essid(bss->ssid, bss->ssid_len));
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
- }
+ /* Send a Media Connected event, according to the Spec */
+- adapter->connect_status = LIBERTAS_CONNECTED;
++ priv->connect_status = LBS_CONNECTED;
--static int wlan_cmd_802_11_data_rate(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u16 cmd_action)
-+/**
-+ * @brief Get the current data rate
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ *
-+ * @return The data rate on success, error on failure
-+ */
-+int lbs_get_data_rate(struct lbs_private *priv)
- {
-- struct cmd_ds_802_11_data_rate *pdatarate = &cmd->params.drate;
-- wlan_adapter *adapter = priv->adapter;
-+ struct cmd_ds_802_11_data_rate cmd;
-+ int ret = -1;
+ if (command == CMD_RET(CMD_802_11_AD_HOC_START)) {
+ /* Update the created network descriptor with the new BSSID */
+@@ -875,27 +858,23 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
+ }
- lbs_deb_enter(LBS_DEB_CMD);
+ /* Set the BSSID from the joined/started descriptor */
+- memcpy(&adapter->curbssparams.bssid, bss->bssid, ETH_ALEN);
++ memcpy(&priv->curbssparams.bssid, bss->bssid, ETH_ALEN);
-- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_data_rate) +
-- S_DS_GEN);
-- cmd->command = cpu_to_le16(CMD_802_11_DATA_RATE);
-- memset(pdatarate, 0, sizeof(struct cmd_ds_802_11_data_rate));
-- pdatarate->action = cpu_to_le16(cmd_action);
+ /* Set the new SSID to current SSID */
+- memcpy(&adapter->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
+- adapter->curbssparams.ssid_len = bss->ssid_len;
++ memcpy(&priv->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
++ priv->curbssparams.ssid_len = bss->ssid_len;
+
+ netif_carrier_on(priv->dev);
+- netif_wake_queue(priv->dev);
-
-- if (cmd_action == CMD_ACT_SET_TX_FIX_RATE) {
-- pdatarate->rates[0] = libertas_data_rate_to_fw_index(adapter->cur_rate);
-- lbs_deb_cmd("DATA_RATE: set fixed 0x%02X\n",
-- adapter->cur_rate);
-- } else if (cmd_action == CMD_ACT_SET_TX_AUTO) {
-+ memset(&cmd, 0, sizeof(cmd));
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(CMD_ACT_GET_TX_RATE);
-+
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_DATA_RATE, &cmd);
-+ if (ret)
-+ goto out;
-+
-+ lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) &cmd, sizeof (cmd));
-+
-+ ret = (int) lbs_fw_index_to_data_rate(cmd.rates[0]);
-+ lbs_deb_cmd("DATA_RATE: current rate 0x%02x\n", ret);
-+
-+out:
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-+ return ret;
-+}
-+
-+/**
-+ * @brief Set the data rate
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ * @param rate The desired data rate, or 0 to clear a locked rate
-+ *
-+ * @return 0 on success, error on failure
-+ */
-+int lbs_set_data_rate(struct lbs_private *priv, u8 rate)
-+{
-+ struct cmd_ds_802_11_data_rate cmd;
-+ int ret = 0;
-+
-+ lbs_deb_enter(LBS_DEB_CMD);
-+
-+ memset(&cmd, 0, sizeof(cmd));
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+
-+ if (rate > 0) {
-+ cmd.action = cpu_to_le16(CMD_ACT_SET_TX_FIX_RATE);
-+ cmd.rates[0] = lbs_data_rate_to_fw_index(rate);
-+ if (cmd.rates[0] == 0) {
-+ lbs_deb_cmd("DATA_RATE: invalid requested rate of"
-+ " 0x%02X\n", rate);
-+ ret = 0;
-+ goto out;
-+ }
-+ lbs_deb_cmd("DATA_RATE: set fixed 0x%02X\n", cmd.rates[0]);
-+ } else {
-+ cmd.action = cpu_to_le16(CMD_ACT_SET_TX_AUTO);
- lbs_deb_cmd("DATA_RATE: setting auto\n");
- }
+- if (priv->mesh_dev) {
+- netif_carrier_on(priv->mesh_dev);
+- netif_wake_queue(priv->mesh_dev);
+- }
++ if (!priv->tx_pending_len)
++ netif_wake_queue(priv->dev);
-- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_DATA_RATE, &cmd);
-+ if (ret)
-+ goto out;
-+
-+ lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) &cmd, sizeof (cmd));
-+
-+ /* FIXME: get actual rates FW can do if this command actually returns
-+ * all data rates supported.
-+ */
-+ priv->cur_rate = lbs_fw_index_to_data_rate(cmd.rates[0]);
-+ lbs_deb_cmd("DATA_RATE: current rate is 0x%02x\n", priv->cur_rate);
-+
-+out:
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-+ return ret;
+ memset(&wrqu, 0, sizeof(wrqu));
+- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid, ETH_ALEN);
++ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid, ETH_ALEN);
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+ wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+
+ lbs_deb_join("ADHOC_RESP: - Joined/Started Ad Hoc\n");
+- lbs_deb_join("ADHOC_RESP: channel = %d\n", adapter->curbssparams.channel);
++ lbs_deb_join("ADHOC_RESP: channel = %d\n", priv->curbssparams.channel);
+ lbs_deb_join("ADHOC_RESP: BSSID = %s\n",
+ print_mac(mac, padhocresult->bssid));
+
+@@ -904,12 +883,12 @@ done:
+ return ret;
}
--static int wlan_cmd_mac_multicast_adr(wlan_private * priv,
-+static int lbs_cmd_mac_multicast_adr(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action)
+-int libertas_ret_80211_ad_hoc_stop(wlan_private * priv,
++int lbs_ret_80211_ad_hoc_stop(struct lbs_private *priv,
+ struct cmd_ds_command *resp)
{
- struct cmd_ds_mac_multicast_adr *pMCastAdr = &cmd->params.madr;
-- wlan_adapter *adapter = priv->adapter;
+ lbs_deb_enter(LBS_DEB_JOIN);
- lbs_deb_enter(LBS_DEB_CMD);
- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mac_multicast_adr) +
-@@ -647,39 +751,79 @@ static int wlan_cmd_mac_multicast_adr(wlan_private * priv,
- lbs_deb_cmd("MULTICAST_ADR: setting %d addresses\n", pMCastAdr->nr_of_adrs);
- pMCastAdr->action = cpu_to_le16(cmd_action);
- pMCastAdr->nr_of_adrs =
-- cpu_to_le16((u16) adapter->nr_of_multicastmacaddr);
-- memcpy(pMCastAdr->maclist, adapter->multicastlist,
-- adapter->nr_of_multicastmacaddr * ETH_ALEN);
-+ cpu_to_le16((u16) priv->nr_of_multicastmacaddr);
-+ memcpy(pMCastAdr->maclist, priv->multicastlist,
-+ priv->nr_of_multicastmacaddr * ETH_ALEN);
+- libertas_mac_event_disconnected(priv);
++ lbs_mac_event_disconnected(priv);
- lbs_deb_leave(LBS_DEB_CMD);
+ lbs_deb_leave(LBS_DEB_JOIN);
return 0;
- }
+diff --git a/drivers/net/wireless/libertas/join.h b/drivers/net/wireless/libertas/join.h
+index 894a072..c617d07 100644
+--- a/drivers/net/wireless/libertas/join.h
++++ b/drivers/net/wireless/libertas/join.h
+@@ -2,52 +2,52 @@
+ * Interface for the wlan infrastructure and adhoc join routines
+ *
+ * Driver interface functions and type declarations for the join module
+- * implemented in wlan_join.c. Process all start/join requests for
++ * implemented in join.c. Process all start/join requests for
+ * both adhoc and infrastructure networks
+ */
+-#ifndef _WLAN_JOIN_H
+-#define _WLAN_JOIN_H
++#ifndef _LBS_JOIN_H
++#define _LBS_JOIN_H
--static int wlan_cmd_802_11_rf_channel(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- int option, void *pdata_buf)
-+/**
-+ * @brief Get the radio channel
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ *
-+ * @return The channel on success, error on failure
-+ */
-+int lbs_get_channel(struct lbs_private *priv)
- {
-- struct cmd_ds_802_11_rf_channel *rfchan = &cmd->params.rfchannel;
-+ struct cmd_ds_802_11_rf_channel cmd;
-+ int ret = 0;
+ #include "defs.h"
+ #include "dev.h"
- lbs_deb_enter(LBS_DEB_CMD);
-- cmd->command = cpu_to_le16(CMD_802_11_RF_CHANNEL);
-- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_rf_channel) +
-- S_DS_GEN);
+ struct cmd_ds_command;
+-int libertas_cmd_80211_authenticate(wlan_private * priv,
++int lbs_cmd_80211_authenticate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf);
+-int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_join(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf);
+-int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_stop(struct lbs_private *priv,
+ struct cmd_ds_command *cmd);
+-int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
++int lbs_cmd_80211_ad_hoc_start(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf);
+-int libertas_cmd_80211_deauthenticate(wlan_private * priv,
++int lbs_cmd_80211_deauthenticate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd);
+-int libertas_cmd_80211_associate(wlan_private * priv,
++int lbs_cmd_80211_associate(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf);
-- if (option == CMD_OPT_802_11_RF_CHANNEL_SET) {
-- rfchan->currentchannel = cpu_to_le16(*((u16 *) pdata_buf));
-- }
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(CMD_OPT_802_11_RF_CHANNEL_GET);
+-int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
++int lbs_ret_80211_ad_hoc_start(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
+-int libertas_ret_80211_ad_hoc_stop(wlan_private * priv,
++int lbs_ret_80211_ad_hoc_stop(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
+-int libertas_ret_80211_disassociate(wlan_private * priv,
++int lbs_ret_80211_disassociate(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
+-int libertas_ret_80211_associate(wlan_private * priv,
++int lbs_ret_80211_associate(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
-- rfchan->action = cpu_to_le16(option);
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_RF_CHANNEL, &cmd);
-+ if (ret)
-+ goto out;
+-int libertas_start_adhoc_network(wlan_private * priv,
++int lbs_start_adhoc_network(struct lbs_private *priv,
+ struct assoc_request * assoc_req);
+-int libertas_join_adhoc_network(wlan_private * priv,
++int lbs_join_adhoc_network(struct lbs_private *priv,
+ struct assoc_request * assoc_req);
+-int libertas_stop_adhoc_network(wlan_private * priv);
++int lbs_stop_adhoc_network(struct lbs_private *priv);
-- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
-+ ret = le16_to_cpu(cmd.channel);
-+ lbs_deb_cmd("current radio channel is %d\n", ret);
-+
-+out:
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-+ return ret;
-+}
-+
-+/**
-+ * @brief Set the radio channel
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ * @param channel The desired channel, or 0 to clear a locked channel
-+ *
-+ * @return 0 on success, error on failure
-+ */
-+int lbs_set_channel(struct lbs_private *priv, u8 channel)
-+{
-+ struct cmd_ds_802_11_rf_channel cmd;
-+ u8 old_channel = priv->curbssparams.channel;
-+ int ret = 0;
-+
-+ lbs_deb_enter(LBS_DEB_CMD);
-+
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(CMD_OPT_802_11_RF_CHANNEL_SET);
-+ cmd.channel = cpu_to_le16(channel);
-+
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_RF_CHANNEL, &cmd);
-+ if (ret)
-+ goto out;
-+
-+ priv->curbssparams.channel = (uint8_t) le16_to_cpu(cmd.channel);
-+ lbs_deb_cmd("channel switch from %d to %d\n", old_channel,
-+ priv->curbssparams.channel);
-+
-+out:
-+ lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-+ return ret;
- }
+-int libertas_send_deauthentication(wlan_private * priv);
++int lbs_send_deauthentication(struct lbs_private *priv);
--static int wlan_cmd_802_11_rssi(wlan_private * priv,
-+static int lbs_cmd_802_11_rssi(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
- {
-- wlan_adapter *adapter = priv->adapter;
+-int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req);
++int lbs_associate(struct lbs_private *priv, struct assoc_request *assoc_req);
- lbs_deb_enter(LBS_DEB_CMD);
- cmd->command = cpu_to_le16(CMD_802_11_RSSI);
-@@ -687,28 +831,28 @@ static int wlan_cmd_802_11_rssi(wlan_private * priv,
- cmd->params.rssi.N = cpu_to_le16(DEFAULT_BCN_AVG_FACTOR);
+-void libertas_unset_basic_rate_flags(u8 * rates, size_t len);
++void lbs_unset_basic_rate_flags(u8 *rates, size_t len);
- /* reset Beacon SNR/NF/RSSI values */
-- adapter->SNR[TYPE_BEACON][TYPE_NOAVG] = 0;
-- adapter->SNR[TYPE_BEACON][TYPE_AVG] = 0;
-- adapter->NF[TYPE_BEACON][TYPE_NOAVG] = 0;
-- adapter->NF[TYPE_BEACON][TYPE_AVG] = 0;
-- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG] = 0;
-- adapter->RSSI[TYPE_BEACON][TYPE_AVG] = 0;
-+ priv->SNR[TYPE_BEACON][TYPE_NOAVG] = 0;
-+ priv->SNR[TYPE_BEACON][TYPE_AVG] = 0;
-+ priv->NF[TYPE_BEACON][TYPE_NOAVG] = 0;
-+ priv->NF[TYPE_BEACON][TYPE_AVG] = 0;
-+ priv->RSSI[TYPE_BEACON][TYPE_NOAVG] = 0;
-+ priv->RSSI[TYPE_BEACON][TYPE_AVG] = 0;
+ #endif
+diff --git a/drivers/net/wireless/libertas/main.c b/drivers/net/wireless/libertas/main.c
+index 1823b48..84fb49c 100644
+--- a/drivers/net/wireless/libertas/main.c
++++ b/drivers/net/wireless/libertas/main.c
+@@ -6,7 +6,6 @@
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
- }
+ #include <linux/moduleparam.h>
+ #include <linux/delay.h>
+-#include <linux/freezer.h>
+ #include <linux/etherdevice.h>
+ #include <linux/netdevice.h>
+ #include <linux/if_arp.h>
+@@ -22,9 +21,10 @@
+ #include "debugfs.h"
+ #include "assoc.h"
+ #include "join.h"
++#include "cmd.h"
--static int wlan_cmd_reg_access(wlan_private * priv,
-+static int lbs_cmd_reg_access(struct lbs_private *priv,
- struct cmd_ds_command *cmdptr,
- u8 cmd_action, void *pdata_buf)
- {
-- struct wlan_offset_value *offval;
-+ struct lbs_offset_value *offval;
+ #define DRIVER_RELEASE_VERSION "323.p0"
+-const char libertas_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
++const char lbs_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
+ #ifdef DEBUG
+ "-dbg"
+ #endif
+@@ -32,80 +32,80 @@ const char libertas_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
- lbs_deb_enter(LBS_DEB_CMD);
-- offval = (struct wlan_offset_value *)pdata_buf;
-+ offval = (struct lbs_offset_value *)pdata_buf;
+ /* Module parameters */
+-unsigned int libertas_debug = 0;
+-module_param(libertas_debug, int, 0644);
+-EXPORT_SYMBOL_GPL(libertas_debug);
++unsigned int lbs_debug;
++EXPORT_SYMBOL_GPL(lbs_debug);
++module_param_named(libertas_debug, lbs_debug, int, 0644);
-- switch (cmdptr->command) {
-+ switch (le16_to_cpu(cmdptr->command)) {
- case CMD_MAC_REG_ACCESS:
- {
- struct cmd_ds_mac_reg_access *macreg;
-@@ -773,11 +917,10 @@ static int wlan_cmd_reg_access(wlan_private * priv,
- return 0;
- }
--static int wlan_cmd_802_11_mac_address(wlan_private * priv,
-+static int lbs_cmd_802_11_mac_address(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action)
- {
-- wlan_adapter *adapter = priv->adapter;
+-#define WLAN_TX_PWR_DEFAULT 20 /*100mW */
+-#define WLAN_TX_PWR_US_DEFAULT 20 /*100mW */
+-#define WLAN_TX_PWR_JP_DEFAULT 16 /*50mW */
+-#define WLAN_TX_PWR_FR_DEFAULT 20 /*100mW */
+-#define WLAN_TX_PWR_EMEA_DEFAULT 20 /*100mW */
++#define LBS_TX_PWR_DEFAULT 20 /*100mW */
++#define LBS_TX_PWR_US_DEFAULT 20 /*100mW */
++#define LBS_TX_PWR_JP_DEFAULT 16 /*50mW */
++#define LBS_TX_PWR_FR_DEFAULT 20 /*100mW */
++#define LBS_TX_PWR_EMEA_DEFAULT 20 /*100mW */
- lbs_deb_enter(LBS_DEB_CMD);
- cmd->command = cpu_to_le16(CMD_802_11_MAC_ADDRESS);
-@@ -789,19 +932,19 @@ static int wlan_cmd_802_11_mac_address(wlan_private * priv,
+ /* Format { channel, frequency (MHz), maxtxpower } */
+ /* band: 'B/G', region: USA FCC/Canada IC */
+ static struct chan_freq_power channel_freq_power_US_BG[] = {
+- {1, 2412, WLAN_TX_PWR_US_DEFAULT},
+- {2, 2417, WLAN_TX_PWR_US_DEFAULT},
+- {3, 2422, WLAN_TX_PWR_US_DEFAULT},
+- {4, 2427, WLAN_TX_PWR_US_DEFAULT},
+- {5, 2432, WLAN_TX_PWR_US_DEFAULT},
+- {6, 2437, WLAN_TX_PWR_US_DEFAULT},
+- {7, 2442, WLAN_TX_PWR_US_DEFAULT},
+- {8, 2447, WLAN_TX_PWR_US_DEFAULT},
+- {9, 2452, WLAN_TX_PWR_US_DEFAULT},
+- {10, 2457, WLAN_TX_PWR_US_DEFAULT},
+- {11, 2462, WLAN_TX_PWR_US_DEFAULT}
++ {1, 2412, LBS_TX_PWR_US_DEFAULT},
++ {2, 2417, LBS_TX_PWR_US_DEFAULT},
++ {3, 2422, LBS_TX_PWR_US_DEFAULT},
++ {4, 2427, LBS_TX_PWR_US_DEFAULT},
++ {5, 2432, LBS_TX_PWR_US_DEFAULT},
++ {6, 2437, LBS_TX_PWR_US_DEFAULT},
++ {7, 2442, LBS_TX_PWR_US_DEFAULT},
++ {8, 2447, LBS_TX_PWR_US_DEFAULT},
++ {9, 2452, LBS_TX_PWR_US_DEFAULT},
++ {10, 2457, LBS_TX_PWR_US_DEFAULT},
++ {11, 2462, LBS_TX_PWR_US_DEFAULT}
+ };
- if (cmd_action == CMD_ACT_SET) {
- memcpy(cmd->params.macadd.macadd,
-- adapter->current_addr, ETH_ALEN);
-- lbs_deb_hex(LBS_DEB_CMD, "SET_CMD: MAC addr", adapter->current_addr, 6);
-+ priv->current_addr, ETH_ALEN);
-+ lbs_deb_hex(LBS_DEB_CMD, "SET_CMD: MAC addr", priv->current_addr, 6);
- }
+ /* band: 'B/G', region: Europe ETSI */
+ static struct chan_freq_power channel_freq_power_EU_BG[] = {
+- {1, 2412, WLAN_TX_PWR_EMEA_DEFAULT},
+- {2, 2417, WLAN_TX_PWR_EMEA_DEFAULT},
+- {3, 2422, WLAN_TX_PWR_EMEA_DEFAULT},
+- {4, 2427, WLAN_TX_PWR_EMEA_DEFAULT},
+- {5, 2432, WLAN_TX_PWR_EMEA_DEFAULT},
+- {6, 2437, WLAN_TX_PWR_EMEA_DEFAULT},
+- {7, 2442, WLAN_TX_PWR_EMEA_DEFAULT},
+- {8, 2447, WLAN_TX_PWR_EMEA_DEFAULT},
+- {9, 2452, WLAN_TX_PWR_EMEA_DEFAULT},
+- {10, 2457, WLAN_TX_PWR_EMEA_DEFAULT},
+- {11, 2462, WLAN_TX_PWR_EMEA_DEFAULT},
+- {12, 2467, WLAN_TX_PWR_EMEA_DEFAULT},
+- {13, 2472, WLAN_TX_PWR_EMEA_DEFAULT}
++ {1, 2412, LBS_TX_PWR_EMEA_DEFAULT},
++ {2, 2417, LBS_TX_PWR_EMEA_DEFAULT},
++ {3, 2422, LBS_TX_PWR_EMEA_DEFAULT},
++ {4, 2427, LBS_TX_PWR_EMEA_DEFAULT},
++ {5, 2432, LBS_TX_PWR_EMEA_DEFAULT},
++ {6, 2437, LBS_TX_PWR_EMEA_DEFAULT},
++ {7, 2442, LBS_TX_PWR_EMEA_DEFAULT},
++ {8, 2447, LBS_TX_PWR_EMEA_DEFAULT},
++ {9, 2452, LBS_TX_PWR_EMEA_DEFAULT},
++ {10, 2457, LBS_TX_PWR_EMEA_DEFAULT},
++ {11, 2462, LBS_TX_PWR_EMEA_DEFAULT},
++ {12, 2467, LBS_TX_PWR_EMEA_DEFAULT},
++ {13, 2472, LBS_TX_PWR_EMEA_DEFAULT}
+ };
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
- }
+ /* band: 'B/G', region: Spain */
+ static struct chan_freq_power channel_freq_power_SPN_BG[] = {
+- {10, 2457, WLAN_TX_PWR_DEFAULT},
+- {11, 2462, WLAN_TX_PWR_DEFAULT}
++ {10, 2457, LBS_TX_PWR_DEFAULT},
++ {11, 2462, LBS_TX_PWR_DEFAULT}
+ };
--static int wlan_cmd_802_11_eeprom_access(wlan_private * priv,
-+static int lbs_cmd_802_11_eeprom_access(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- int cmd_action, void *pdata_buf)
- {
-- struct wlan_ioctl_regrdwr *ea = pdata_buf;
-+ struct lbs_ioctl_regrdwr *ea = pdata_buf;
+ /* band: 'B/G', region: France */
+ static struct chan_freq_power channel_freq_power_FR_BG[] = {
+- {10, 2457, WLAN_TX_PWR_FR_DEFAULT},
+- {11, 2462, WLAN_TX_PWR_FR_DEFAULT},
+- {12, 2467, WLAN_TX_PWR_FR_DEFAULT},
+- {13, 2472, WLAN_TX_PWR_FR_DEFAULT}
++ {10, 2457, LBS_TX_PWR_FR_DEFAULT},
++ {11, 2462, LBS_TX_PWR_FR_DEFAULT},
++ {12, 2467, LBS_TX_PWR_FR_DEFAULT},
++ {13, 2472, LBS_TX_PWR_FR_DEFAULT}
+ };
- lbs_deb_enter(LBS_DEB_CMD);
+ /* band: 'B/G', region: Japan */
+ static struct chan_freq_power channel_freq_power_JPN_BG[] = {
+- {1, 2412, WLAN_TX_PWR_JP_DEFAULT},
+- {2, 2417, WLAN_TX_PWR_JP_DEFAULT},
+- {3, 2422, WLAN_TX_PWR_JP_DEFAULT},
+- {4, 2427, WLAN_TX_PWR_JP_DEFAULT},
+- {5, 2432, WLAN_TX_PWR_JP_DEFAULT},
+- {6, 2437, WLAN_TX_PWR_JP_DEFAULT},
+- {7, 2442, WLAN_TX_PWR_JP_DEFAULT},
+- {8, 2447, WLAN_TX_PWR_JP_DEFAULT},
+- {9, 2452, WLAN_TX_PWR_JP_DEFAULT},
+- {10, 2457, WLAN_TX_PWR_JP_DEFAULT},
+- {11, 2462, WLAN_TX_PWR_JP_DEFAULT},
+- {12, 2467, WLAN_TX_PWR_JP_DEFAULT},
+- {13, 2472, WLAN_TX_PWR_JP_DEFAULT},
+- {14, 2484, WLAN_TX_PWR_JP_DEFAULT}
++ {1, 2412, LBS_TX_PWR_JP_DEFAULT},
++ {2, 2417, LBS_TX_PWR_JP_DEFAULT},
++ {3, 2422, LBS_TX_PWR_JP_DEFAULT},
++ {4, 2427, LBS_TX_PWR_JP_DEFAULT},
++ {5, 2432, LBS_TX_PWR_JP_DEFAULT},
++ {6, 2437, LBS_TX_PWR_JP_DEFAULT},
++ {7, 2442, LBS_TX_PWR_JP_DEFAULT},
++ {8, 2447, LBS_TX_PWR_JP_DEFAULT},
++ {9, 2452, LBS_TX_PWR_JP_DEFAULT},
++ {10, 2457, LBS_TX_PWR_JP_DEFAULT},
++ {11, 2462, LBS_TX_PWR_JP_DEFAULT},
++ {12, 2467, LBS_TX_PWR_JP_DEFAULT},
++ {13, 2472, LBS_TX_PWR_JP_DEFAULT},
++ {14, 2484, LBS_TX_PWR_JP_DEFAULT}
+ };
-@@ -819,7 +962,7 @@ static int wlan_cmd_802_11_eeprom_access(wlan_private * priv,
- return 0;
- }
+ /**
+@@ -153,13 +153,13 @@ static struct region_cfp_table region_cfp_table[] = {
+ /**
+ * the table to keep region code
+ */
+-u16 libertas_region_code_to_index[MRVDRV_MAX_REGION_CODE] =
++u16 lbs_region_code_to_index[MRVDRV_MAX_REGION_CODE] =
+ { 0x10, 0x20, 0x30, 0x31, 0x32, 0x40 };
--static int wlan_cmd_bt_access(wlan_private * priv,
-+static int lbs_cmd_bt_access(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action, void *pdata_buf)
- {
-@@ -857,7 +1000,7 @@ static int wlan_cmd_bt_access(wlan_private * priv,
- return 0;
- }
+ /**
+ * 802.11b/g supported bitrates (in 500Kb/s units)
+ */
+-u8 libertas_bg_rates[MAX_RATES] =
++u8 lbs_bg_rates[MAX_RATES] =
+ { 0x02, 0x04, 0x0b, 0x16, 0x0c, 0x12, 0x18, 0x24, 0x30, 0x48, 0x60, 0x6c,
+ 0x00, 0x00 };
--static int wlan_cmd_fwt_access(wlan_private * priv,
-+static int lbs_cmd_fwt_access(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- u16 cmd_action, void *pdata_buf)
+@@ -179,7 +179,7 @@ static u8 fw_data_rates[MAX_RATES] =
+ * @param idx The index of data rate
+ * @return data rate or 0
+ */
+-u32 libertas_fw_index_to_data_rate(u8 idx)
++u32 lbs_fw_index_to_data_rate(u8 idx)
{
-@@ -879,47 +1022,72 @@ static int wlan_cmd_fwt_access(wlan_private * priv,
- return 0;
- }
+ if (idx >= sizeof(fw_data_rates))
+ idx = 0;
+@@ -192,7 +192,7 @@ u32 libertas_fw_index_to_data_rate(u8 idx)
+ * @param rate data rate
+ * @return index or 0
+ */
+-u8 libertas_data_rate_to_fw_index(u32 rate)
++u8 lbs_data_rate_to_fw_index(u32 rate)
+ {
+ u8 i;
--static int wlan_cmd_mesh_access(wlan_private * priv,
-- struct cmd_ds_command *cmd,
-- u16 cmd_action, void *pdata_buf)
-+int lbs_mesh_access(struct lbs_private *priv, uint16_t cmd_action,
-+ struct cmd_ds_mesh_access *cmd)
+@@ -213,16 +213,18 @@ u8 libertas_data_rate_to_fw_index(u32 rate)
+ /**
+ * @brief Get function for sysfs attribute anycast_mask
+ */
+-static ssize_t libertas_anycast_get(struct device * dev,
++static ssize_t lbs_anycast_get(struct device *dev,
+ struct device_attribute *attr, char * buf)
{
-- struct cmd_ds_mesh_access *mesh_access = &cmd->params.mesh;
++ struct lbs_private *priv = to_net_dev(dev)->priv;
+ struct cmd_ds_mesh_access mesh_access;
+ int ret;
-+
- lbs_deb_enter_args(LBS_DEB_CMD, "action %d", cmd_action);
-- cmd->command = cpu_to_le16(CMD_MESH_ACCESS);
-- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mesh_access) + S_DS_GEN);
-- cmd->result = 0;
-+ cmd->hdr.command = cpu_to_le16(CMD_MESH_ACCESS);
-+ cmd->hdr.size = cpu_to_le16(sizeof(*cmd));
-+ cmd->hdr.result = 0;
+ memset(&mesh_access, 0, sizeof(mesh_access));
+- libertas_prepare_and_send_command(to_net_dev(dev)->priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_GET_ANYCAST,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
++
++ ret = lbs_mesh_access(priv, CMD_ACT_MESH_GET_ANYCAST, &mesh_access);
++ if (ret)
++ return ret;
-- if (pdata_buf)
-- memcpy(mesh_access, pdata_buf, sizeof(*mesh_access));
-- else
-- memset(mesh_access, 0, sizeof(*mesh_access));
-+ cmd->action = cpu_to_le16(cmd_action);
+ return snprintf(buf, 12, "0x%X\n", le32_to_cpu(mesh_access.data[0]));
+ }
+@@ -230,244 +232,191 @@ static ssize_t libertas_anycast_get(struct device * dev,
+ /**
+ * @brief Set function for sysfs attribute anycast_mask
+ */
+-static ssize_t libertas_anycast_set(struct device * dev,
++static ssize_t lbs_anycast_set(struct device *dev,
+ struct device_attribute *attr, const char * buf, size_t count)
+ {
++ struct lbs_private *priv = to_net_dev(dev)->priv;
+ struct cmd_ds_mesh_access mesh_access;
+ uint32_t datum;
++ int ret;
-- mesh_access->action = cpu_to_le16(cmd_action);
-+ ret = lbs_cmd_with_response(priv, CMD_MESH_ACCESS, cmd);
+ memset(&mesh_access, 0, sizeof(mesh_access));
+ sscanf(buf, "%x", &datum);
+ mesh_access.data[0] = cpu_to_le32(datum);
- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
-+ return ret;
+- libertas_prepare_and_send_command((to_net_dev(dev))->priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_SET_ANYCAST,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
++ ret = lbs_mesh_access(priv, CMD_ACT_MESH_SET_ANYCAST, &mesh_access);
++ if (ret)
++ return ret;
++
+ return strlen(buf);
}
-+EXPORT_SYMBOL_GPL(lbs_mesh_access);
--static int wlan_cmd_set_boot2_ver(wlan_private * priv,
-+int lbs_mesh_config(struct lbs_private *priv, uint16_t enable, uint16_t chan)
-+{
-+ struct cmd_ds_mesh_config cmd;
-+
-+ memset(&cmd, 0, sizeof(cmd));
-+ cmd.action = cpu_to_le16(enable);
-+ cmd.channel = cpu_to_le16(chan);
-+ cmd.type = cpu_to_le16(priv->mesh_tlv);
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+
-+ if (enable) {
-+ cmd.length = cpu_to_le16(priv->mesh_ssid_len);
-+ memcpy(cmd.data, priv->mesh_ssid, priv->mesh_ssid_len);
-+ }
-+ lbs_deb_cmd("mesh config enable %d TLV %x channel %d SSID %s\n",
-+ enable, priv->mesh_tlv, chan,
-+ escape_essid(priv->mesh_ssid, priv->mesh_ssid_len));
-+ return lbs_cmd_with_response(priv, CMD_MESH_CONFIG, &cmd);
-+}
+-int libertas_add_rtap(wlan_private *priv);
+-void libertas_remove_rtap(wlan_private *priv);
++static int lbs_add_rtap(struct lbs_private *priv);
++static void lbs_remove_rtap(struct lbs_private *priv);
++static int lbs_add_mesh(struct lbs_private *priv);
++static void lbs_remove_mesh(struct lbs_private *priv);
+
-+static int lbs_cmd_bcn_ctrl(struct lbs_private * priv,
- struct cmd_ds_command *cmd,
-- u16 cmd_action, void *pdata_buf)
-+ u16 cmd_action)
+
+ /**
+ * Get function for sysfs attribute rtap
+ */
+-static ssize_t libertas_rtap_get(struct device * dev,
++static ssize_t lbs_rtap_get(struct device *dev,
+ struct device_attribute *attr, char * buf)
{
-- struct cmd_ds_set_boot2_ver *boot2_ver = &cmd->params.boot2_ver;
-- cmd->command = cpu_to_le16(CMD_SET_BOOT2_VER);
-- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_set_boot2_ver) + S_DS_GEN);
-- boot2_ver->version = priv->boot2_version;
-+ struct cmd_ds_802_11_beacon_control
-+ *bcn_ctrl = &cmd->params.bcn_ctrl;
-+
-+ lbs_deb_enter(LBS_DEB_CMD);
-+ cmd->size =
-+ cpu_to_le16(sizeof(struct cmd_ds_802_11_beacon_control)
-+ + S_DS_GEN);
-+ cmd->command = cpu_to_le16(CMD_802_11_BEACON_CTRL);
-+
-+ bcn_ctrl->action = cpu_to_le16(cmd_action);
-+ bcn_ctrl->beacon_enable = cpu_to_le16(priv->beacon_enable);
-+ bcn_ctrl->beacon_period = cpu_to_le16(priv->beacon_period);
-+
-+ lbs_deb_leave(LBS_DEB_CMD);
- return 0;
+- wlan_private *priv = (wlan_private *) (to_net_dev(dev))->priv;
+- wlan_adapter *adapter = priv->adapter;
+- return snprintf(buf, 5, "0x%X\n", adapter->monitormode);
++ struct lbs_private *priv = to_net_dev(dev)->priv;
++ return snprintf(buf, 5, "0x%X\n", priv->monitormode);
}
--/*
-- * Note: NEVER use libertas_queue_cmd() with addtail==0 other than for
-- * the command timer, because it does not account for queued commands.
-- */
--void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u8 addtail)
-+static void lbs_queue_cmd(struct lbs_private *priv,
-+ struct cmd_ctrl_node *cmdnode)
+ /**
+ * Set function for sysfs attribute rtap
+ */
+-static ssize_t libertas_rtap_set(struct device * dev,
++static ssize_t lbs_rtap_set(struct device *dev,
+ struct device_attribute *attr, const char * buf, size_t count)
{
- unsigned long flags;
-- struct cmd_ds_command *cmdptr;
-+ int addtail = 1;
-
- lbs_deb_enter(LBS_DEB_HOST);
+ int monitor_mode;
+- wlan_private *priv = (wlan_private *) (to_net_dev(dev))->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = to_net_dev(dev)->priv;
-@@ -927,118 +1095,87 @@ void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u
- lbs_deb_host("QUEUE_CMD: cmdnode is NULL\n");
- goto done;
- }
--
-- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
-- if (!cmdptr) {
-- lbs_deb_host("QUEUE_CMD: cmdptr is NULL\n");
-+ if (!cmdnode->cmdbuf->size) {
-+ lbs_deb_host("DNLD_CMD: cmd size is zero\n");
- goto done;
+ sscanf(buf, "%x", &monitor_mode);
+- if (monitor_mode != WLAN_MONITOR_OFF) {
+- if(adapter->monitormode == monitor_mode)
++ if (monitor_mode != LBS_MONITOR_OFF) {
++ if(priv->monitormode == monitor_mode)
+ return strlen(buf);
+- if (adapter->monitormode == WLAN_MONITOR_OFF) {
+- if (adapter->mode == IW_MODE_INFRA)
+- libertas_send_deauthentication(priv);
+- else if (adapter->mode == IW_MODE_ADHOC)
+- libertas_stop_adhoc_network(priv);
+- libertas_add_rtap(priv);
++ if (priv->monitormode == LBS_MONITOR_OFF) {
++ if (priv->infra_open || priv->mesh_open)
++ return -EBUSY;
++ if (priv->mode == IW_MODE_INFRA)
++ lbs_send_deauthentication(priv);
++ else if (priv->mode == IW_MODE_ADHOC)
++ lbs_stop_adhoc_network(priv);
++ lbs_add_rtap(priv);
+ }
+- adapter->monitormode = monitor_mode;
++ priv->monitormode = monitor_mode;
}
-+ cmdnode->result = 0;
- /* Exit_PS command needs to be queued in the header always. */
-- if (cmdptr->command == CMD_802_11_PS_MODE) {
-- struct cmd_ds_802_11_ps_mode *psm = &cmdptr->params.psmode;
-+ if (le16_to_cpu(cmdnode->cmdbuf->command) == CMD_802_11_PS_MODE) {
-+ struct cmd_ds_802_11_ps_mode *psm = (void *) &cmdnode->cmdbuf[1];
+ else {
+- if(adapter->monitormode == WLAN_MONITOR_OFF)
++ if (priv->monitormode == LBS_MONITOR_OFF)
+ return strlen(buf);
+- adapter->monitormode = WLAN_MONITOR_OFF;
+- libertas_remove_rtap(priv);
+- netif_wake_queue(priv->dev);
+- netif_wake_queue(priv->mesh_dev);
++ priv->monitormode = LBS_MONITOR_OFF;
++ lbs_remove_rtap(priv);
+
- if (psm->action == cpu_to_le16(CMD_SUBCMD_EXIT_PS)) {
-- if (adapter->psstate != PS_STATE_FULL_POWER)
-+ if (priv->psstate != PS_STATE_FULL_POWER)
- addtail = 0;
- }
++ if (priv->currenttxskb) {
++ dev_kfree_skb_any(priv->currenttxskb);
++ priv->currenttxskb = NULL;
++ }
++
++ /* Wake queues, command thread, etc. */
++ lbs_host_to_card_done(priv);
}
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-
-- if (addtail) {
-- list_add_tail((struct list_head *)cmdnode,
-- &adapter->cmdpendingq);
-- adapter->nr_cmd_pending++;
-- } else
-- list_add((struct list_head *)cmdnode, &adapter->cmdpendingq);
-+ if (addtail)
-+ list_add_tail(&cmdnode->list, &priv->cmdpendingq);
-+ else
-+ list_add(&cmdnode->list, &priv->cmdpendingq);
-
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- libertas_prepare_and_send_command(priv,
++ lbs_prepare_and_send_command(priv,
+ CMD_802_11_MONITOR_MODE, CMD_ACT_SET,
+- CMD_OPTION_WAITFORRSP, 0, &adapter->monitormode);
++ CMD_OPTION_WAITFORRSP, 0, &priv->monitormode);
+ return strlen(buf);
+ }
- lbs_deb_host("QUEUE_CMD: inserted command 0x%04x into cmdpendingq\n",
-- le16_to_cpu(((struct cmd_ds_gen*)cmdnode->bufvirtualaddr)->command));
-+ le16_to_cpu(cmdnode->cmdbuf->command));
+ /**
+- * libertas_rtap attribute to be exported per mshX interface
+- * through sysfs (/sys/class/net/mshX/libertas-rtap)
++ * lbs_rtap attribute to be exported per ethX interface
++ * through sysfs (/sys/class/net/ethX/lbs_rtap)
+ */
+-static DEVICE_ATTR(libertas_rtap, 0644, libertas_rtap_get,
+- libertas_rtap_set );
++static DEVICE_ATTR(lbs_rtap, 0644, lbs_rtap_get, lbs_rtap_set );
- done:
- lbs_deb_leave(LBS_DEB_HOST);
+ /**
+- * anycast_mask attribute to be exported per mshX interface
+- * through sysfs (/sys/class/net/mshX/anycast_mask)
++ * Get function for sysfs attribute mesh
+ */
+-static DEVICE_ATTR(anycast_mask, 0644, libertas_anycast_get, libertas_anycast_set);
+-
+-static ssize_t libertas_autostart_enabled_get(struct device * dev,
++static ssize_t lbs_mesh_get(struct device *dev,
+ struct device_attribute *attr, char * buf)
+ {
+- struct cmd_ds_mesh_access mesh_access;
+-
+- memset(&mesh_access, 0, sizeof(mesh_access));
+- libertas_prepare_and_send_command(to_net_dev(dev)->priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_GET_AUTOSTART_ENABLED,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+-
+- return sprintf(buf, "%d\n", le32_to_cpu(mesh_access.data[0]));
++ struct lbs_private *priv = to_net_dev(dev)->priv;
++ return snprintf(buf, 5, "0x%X\n", !!priv->mesh_dev);
}
--/*
-- * TODO: Fix the issue when DownloadcommandToStation is being called the
-- * second time when the command times out. All the cmdptr->xxx are in little
-- * endian and therefore all the comparissions will fail.
-- * For now - we are not performing the endian conversion the second time - but
-- * for PS and DEEP_SLEEP we need to worry
-- */
--static int DownloadcommandToStation(wlan_private * priv,
-- struct cmd_ctrl_node *cmdnode)
-+static void lbs_submit_command(struct lbs_private *priv,
-+ struct cmd_ctrl_node *cmdnode)
+-static ssize_t libertas_autostart_enabled_set(struct device * dev,
++/**
++ * Set function for sysfs attribute mesh
++ */
++static ssize_t lbs_mesh_set(struct device *dev,
+ struct device_attribute *attr, const char * buf, size_t count)
{
- unsigned long flags;
-- struct cmd_ds_command *cmdptr;
-- wlan_adapter *adapter = priv->adapter;
-- int ret = -1;
-- u16 cmdsize;
-- u16 command;
-+ struct cmd_header *cmd;
-+ uint16_t cmdsize;
-+ uint16_t command;
-+ int timeo = 5 * HZ;
-+ int ret;
+- struct cmd_ds_mesh_access mesh_access;
+- uint32_t datum;
+- wlan_private * priv = (to_net_dev(dev))->priv;
++ struct lbs_private *priv = to_net_dev(dev)->priv;
++ int enable;
+ int ret;
- lbs_deb_enter(LBS_DEB_HOST);
+- memset(&mesh_access, 0, sizeof(mesh_access));
+- sscanf(buf, "%d", &datum);
+- mesh_access.data[0] = cpu_to_le32(datum);
++ sscanf(buf, "%x", &enable);
++ enable = !!enable;
++ if (enable == !!priv->mesh_dev)
++ return count;
++
++ ret = lbs_mesh_config(priv, enable, priv->curbssparams.channel);
++ if (ret)
++ return ret;
-- if (!adapter || !cmdnode) {
-- lbs_deb_host("DNLD_CMD: adapter or cmdmode is NULL\n");
-- goto done;
-- }
-+ cmd = cmdnode->cmdbuf;
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+- if (ret == 0)
+- priv->mesh_autostart_enabled = datum ? 1 : 0;
++ if (enable)
++ lbs_add_mesh(priv);
++ else
++ lbs_remove_mesh(priv);
-- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ priv->cur_cmd = cmdnode;
-+ priv->cur_cmd_retcode = 0;
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- return strlen(buf);
++ return count;
+ }
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (!cmdptr || !cmdptr->size) {
-- lbs_deb_host("DNLD_CMD: cmdptr is NULL or zero\n");
-- __libertas_cleanup_and_insert_cmd(priv, cmdnode);
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-- goto done;
+-static DEVICE_ATTR(autostart_enabled, 0644,
+- libertas_autostart_enabled_get, libertas_autostart_enabled_set);
++/**
++ * lbs_mesh attribute to be exported per ethX interface
++ * through sysfs (/sys/class/net/ethX/lbs_mesh)
++ */
++static DEVICE_ATTR(lbs_mesh, 0644, lbs_mesh_get, lbs_mesh_set);
+
+-static struct attribute *libertas_mesh_sysfs_entries[] = {
++/**
++ * anycast_mask attribute to be exported per mshX interface
++ * through sysfs (/sys/class/net/mshX/anycast_mask)
++ */
++static DEVICE_ATTR(anycast_mask, 0644, lbs_anycast_get, lbs_anycast_set);
++
++static struct attribute *lbs_mesh_sysfs_entries[] = {
+ &dev_attr_anycast_mask.attr,
+- &dev_attr_autostart_enabled.attr,
+ NULL,
+ };
+
+-static struct attribute_group libertas_mesh_attr_group = {
+- .attrs = libertas_mesh_sysfs_entries,
++static struct attribute_group lbs_mesh_attr_group = {
++ .attrs = lbs_mesh_sysfs_entries,
+ };
+
+ /**
+- * @brief Check if the device can be open and wait if necessary.
+- *
+- * @param dev A pointer to net_device structure
+- * @return 0
+- *
+- * For USB adapter, on some systems the device open handler will be
+- * called before FW ready. Use the following flag check and wait
+- * function to work around the issue.
+- *
+- */
+-static int pre_open_check(struct net_device *dev)
+-{
+- wlan_private *priv = (wlan_private *) dev->priv;
+- wlan_adapter *adapter = priv->adapter;
+- int i = 0;
+-
+- while (!adapter->fw_ready && i < 20) {
+- i++;
+- msleep_interruptible(100);
- }
-+ cmdsize = le16_to_cpu(cmd->size);
-+ command = le16_to_cpu(cmd->command);
+- if (!adapter->fw_ready) {
+- lbs_pr_err("firmware not ready\n");
+- return -1;
+- }
+-
+- return 0;
+-}
+-
+-/**
+- * @brief This function opens the device
++ * @brief This function opens the ethX or mshX interface
+ *
+ * @param dev A pointer to net_device structure
+- * @return 0
++ * @return 0 or -EBUSY if monitor mode active
+ */
+-static int libertas_dev_open(struct net_device *dev)
++static int lbs_dev_open(struct net_device *dev)
+ {
+- wlan_private *priv = (wlan_private *) dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv ;
++ int ret = 0;
-- adapter->cur_cmd = cmdnode;
-- adapter->cur_cmd_retcode = 0;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ /* These commands take longer */
-+ if (command == CMD_802_11_SCAN || command == CMD_802_11_ASSOCIATE ||
-+ command == CMD_802_11_AUTHENTICATE)
-+ timeo = 10 * HZ;
+ lbs_deb_enter(LBS_DEB_NET);
-- cmdsize = cmdptr->size;
-- command = cpu_to_le16(cmdptr->command);
-+ lbs_deb_host("DNLD_CMD: command 0x%04x, seq %d, size %d, jiffies %lu\n",
-+ command, le16_to_cpu(cmd->seqnum), cmdsize, jiffies);
-+ lbs_deb_hex(LBS_DEB_HOST, "DNLD_CMD", (void *) cmdnode->cmdbuf, cmdsize);
+- priv->open = 1;
++ spin_lock_irq(&priv->driver_lock);
-- lbs_deb_host("DNLD_CMD: command 0x%04x, size %d, jiffies %lu\n",
-- command, le16_to_cpu(cmdptr->size), jiffies);
-- lbs_deb_hex(LBS_DEB_HOST, "DNLD_CMD", cmdnode->bufvirtualaddr, cmdsize);
-+ ret = priv->hw_host_to_card(priv, MVMS_CMD, (u8 *) cmd, cmdsize);
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- netif_carrier_on(priv->dev);
+- if (priv->mesh_dev)
+- netif_carrier_on(priv->mesh_dev);
+- } else {
+- netif_carrier_off(priv->dev);
+- if (priv->mesh_dev)
+- netif_carrier_off(priv->mesh_dev);
++ if (priv->monitormode != LBS_MONITOR_OFF) {
++ ret = -EBUSY;
++ goto out;
+ }
-- cmdnode->cmdwaitqwoken = 0;
-- cmdsize = cpu_to_le16(cmdsize);
+- lbs_deb_leave(LBS_DEB_NET);
+- return 0;
+-}
+-/**
+- * @brief This function opens the mshX interface
+- *
+- * @param dev A pointer to net_device structure
+- * @return 0
+- */
+-static int libertas_mesh_open(struct net_device *dev)
+-{
+- wlan_private *priv = (wlan_private *) dev->priv ;
-
-- ret = priv->hw_host_to_card(priv, MVMS_CMD, (u8 *) cmdptr, cmdsize);
+- if (pre_open_check(dev) == -1)
+- return -1;
+- priv->mesh_open = 1 ;
+- netif_wake_queue(priv->mesh_dev);
+- if (priv->infra_open == 0)
+- return libertas_dev_open(priv->dev) ;
+- return 0;
+-}
-
-- if (ret != 0) {
-- lbs_deb_host("DNLD_CMD: hw_host_to_card failed\n");
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- adapter->cur_cmd_retcode = ret;
-- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
-- adapter->nr_cmd_pending--;
-- adapter->cur_cmd = NULL;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-- goto done;
-- }
+-/**
+- * @brief This function opens the ethX interface
+- *
+- * @param dev A pointer to net_device structure
+- * @return 0
+- */
+-static int libertas_open(struct net_device *dev)
+-{
+- wlan_private *priv = (wlan_private *) dev->priv ;
-
-- lbs_deb_cmd("DNLD_CMD: sent command 0x%04x, jiffies %lu\n", command, jiffies);
-+ if (ret) {
-+ lbs_pr_info("DNLD_CMD: hw_host_to_card failed: %d\n", ret);
-+ /* Let the timer kick in and retry, and potentially reset
-+ the whole thing if the condition persists */
-+ timeo = HZ;
-+ } else
-+ lbs_deb_cmd("DNLD_CMD: sent command 0x%04x, jiffies %lu\n",
-+ command, jiffies);
-
- /* Setup the timer after transmit command */
-- if (command == CMD_802_11_SCAN || command == CMD_802_11_AUTHENTICATE
-- || command == CMD_802_11_ASSOCIATE)
-- mod_timer(&adapter->command_timer, jiffies + (10*HZ));
-- else
-- mod_timer(&adapter->command_timer, jiffies + (5*HZ));
+- if(pre_open_check(dev) == -1)
+- return -1;
+- priv->infra_open = 1 ;
+- netif_wake_queue(priv->dev);
+- if (priv->open == 0)
+- return libertas_dev_open(priv->dev) ;
+- return 0;
+-}
-
-- ret = 0;
-+ mod_timer(&priv->command_timer, jiffies + timeo);
-
--done:
-- lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
-- return ret;
-+ lbs_deb_leave(LBS_DEB_HOST);
- }
+-static int libertas_dev_close(struct net_device *dev)
+-{
+- wlan_private *priv = dev->priv;
++ if (dev == priv->mesh_dev) {
++ priv->mesh_open = 1;
++ priv->mesh_connect_status = LBS_CONNECTED;
++ netif_carrier_on(dev);
++ } else {
++ priv->infra_open = 1;
--static int wlan_cmd_mac_control(wlan_private * priv,
-+static int lbs_cmd_mac_control(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
- {
- struct cmd_ds_mac_control *mac = &cmd->params.macctrl;
-@@ -1047,7 +1184,7 @@ static int wlan_cmd_mac_control(wlan_private * priv,
+- lbs_deb_enter(LBS_DEB_NET);
++ if (priv->connect_status == LBS_CONNECTED)
++ netif_carrier_on(dev);
++ else
++ netif_carrier_off(dev);
++ }
- cmd->command = cpu_to_le16(CMD_MAC_CONTROL);
- cmd->size = cpu_to_le16(sizeof(struct cmd_ds_mac_control) + S_DS_GEN);
-- mac->action = cpu_to_le16(priv->adapter->currentpacketfilter);
-+ mac->action = cpu_to_le16(priv->currentpacketfilter);
+- netif_carrier_off(priv->dev);
+- priv->open = 0;
++ if (!priv->tx_pending_len)
++ netif_wake_queue(dev);
++ out:
- lbs_deb_cmd("MAC_CONTROL: action 0x%x, size %d\n",
- le16_to_cpu(mac->action), le16_to_cpu(cmd->size));
-@@ -1058,54 +1195,98 @@ static int wlan_cmd_mac_control(wlan_private * priv,
+- lbs_deb_leave(LBS_DEB_NET);
+- return 0;
++ spin_unlock_irq(&priv->driver_lock);
++ lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
++ return ret;
+ }
/**
- * This function inserts command node to cmdfreeq
-- * after cleans it. Requires adapter->driver_lock held.
-+ * after cleans it. Requires priv->driver_lock held.
+@@ -476,16 +425,23 @@ static int libertas_dev_close(struct net_device *dev)
+ * @param dev A pointer to net_device structure
+ * @return 0
*/
--void __libertas_cleanup_and_insert_cmd(wlan_private * priv, struct cmd_ctrl_node *ptempcmd)
-+static void __lbs_cleanup_and_insert_cmd(struct lbs_private *priv,
-+ struct cmd_ctrl_node *cmdnode)
+-static int libertas_mesh_close(struct net_device *dev)
++static int lbs_mesh_stop(struct net_device *dev)
{
-- wlan_adapter *adapter = priv->adapter;
-+ lbs_deb_enter(LBS_DEB_HOST);
-
-- if (!ptempcmd)
-- return;
-+ if (!cmdnode)
-+ goto out;
+- wlan_private *priv = (wlan_private *) (dev->priv);
++ struct lbs_private *priv = (struct lbs_private *) (dev->priv);
+
-+ cmdnode->callback = NULL;
-+ cmdnode->callback_arg = 0;
++ lbs_deb_enter(LBS_DEB_MESH);
++ spin_lock_irq(&priv->driver_lock);
-- cleanup_cmdnode(ptempcmd);
-- list_add_tail((struct list_head *)ptempcmd, &adapter->cmdfreeq);
-+ memset(cmdnode->cmdbuf, 0, LBS_CMD_BUFFER_SIZE);
+ priv->mesh_open = 0;
+- netif_stop_queue(priv->mesh_dev);
+- if (priv->infra_open == 0)
+- return libertas_dev_close(dev);
+- else
+- return 0;
++ priv->mesh_connect_status = LBS_DISCONNECTED;
+
-+ list_add_tail(&cmdnode->list, &priv->cmdfreeq);
-+ out:
-+ lbs_deb_leave(LBS_DEB_HOST);
++ netif_stop_queue(dev);
++ netif_carrier_off(dev);
++
++ spin_unlock_irq(&priv->driver_lock);
++
++ lbs_deb_leave(LBS_DEB_MESH);
++ return 0;
}
--static void libertas_cleanup_and_insert_cmd(wlan_private * priv, struct cmd_ctrl_node *ptempcmd)
-+static void lbs_cleanup_and_insert_cmd(struct lbs_private *priv,
-+ struct cmd_ctrl_node *ptempcmd)
+ /**
+@@ -494,134 +450,86 @@ static int libertas_mesh_close(struct net_device *dev)
+ * @param dev A pointer to net_device structure
+ * @return 0
+ */
+-static int libertas_close(struct net_device *dev)
+-{
+- wlan_private *priv = (wlan_private *) dev->priv;
+-
+- netif_stop_queue(dev);
+- priv->infra_open = 0;
+- if (priv->mesh_open == 0)
+- return libertas_dev_close(dev);
+- else
+- return 0;
+-}
+-
+-
+-static int libertas_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static int lbs_eth_stop(struct net_device *dev)
{
- unsigned long flags;
+- int ret = 0;
+- wlan_private *priv = dev->priv;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
-- spin_lock_irqsave(&priv->adapter->driver_lock, flags);
-- __libertas_cleanup_and_insert_cmd(priv, ptempcmd);
-- spin_unlock_irqrestore(&priv->adapter->driver_lock, flags);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ __lbs_cleanup_and_insert_cmd(priv, ptempcmd);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ lbs_deb_enter(LBS_DEB_NET);
+
+- if (priv->dnld_sent || priv->adapter->TxLockFlag) {
+- priv->stats.tx_dropped++;
+- goto done;
+- }
+-
+- netif_stop_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_stop_queue(priv->mesh_dev);
++ spin_lock_irq(&priv->driver_lock);
++ priv->infra_open = 0;
++ netif_stop_queue(dev);
++ spin_unlock_irq(&priv->driver_lock);
+
+- if (libertas_process_tx(priv, skb) == 0)
+- dev->trans_start = jiffies;
+-done:
+- lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
+- return ret;
++ lbs_deb_leave(LBS_DEB_NET);
++ return 0;
}
--int libertas_set_radio_control(wlan_private * priv)
-+void lbs_complete_command(struct lbs_private *priv, struct cmd_ctrl_node *cmd,
-+ int result)
-+{
-+ if (cmd == priv->cur_cmd)
-+ priv->cur_cmd_retcode = result;
-+
-+ cmd->result = result;
-+ cmd->cmdwaitqwoken = 1;
-+ wake_up_interruptible(&cmd->cmdwait_q);
-+
-+ if (!cmd->callback)
-+ __lbs_cleanup_and_insert_cmd(priv, cmd);
-+ priv->cur_cmd = NULL;
-+}
-+
-+int lbs_set_radio_control(struct lbs_private *priv)
+-/**
+- * @brief Mark mesh packets and handover them to libertas_hard_start_xmit
+- *
+- */
+-static int libertas_mesh_pre_start_xmit(struct sk_buff *skb,
+- struct net_device *dev)
++static void lbs_tx_timeout(struct net_device *dev)
{
- int ret = 0;
-+ struct cmd_ds_802_11_radio_control cmd;
+- wlan_private *priv = dev->priv;
+- int ret;
+-
+- lbs_deb_enter(LBS_DEB_MESH);
+- if(priv->adapter->monitormode != WLAN_MONITOR_OFF) {
+- netif_stop_queue(dev);
+- return -EOPNOTSUPP;
+- }
+-
+- SET_MESH_FRAME(skb);
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
- lbs_deb_enter(LBS_DEB_CMD);
+- ret = libertas_hard_start_xmit(skb, priv->dev);
+- lbs_deb_leave_args(LBS_DEB_MESH, "ret %d", ret);
+- return ret;
+-}
++ lbs_deb_enter(LBS_DEB_TX);
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_802_11_RADIO_CONTROL,
-- CMD_ACT_SET,
-- CMD_OPTION_WAITFORRSP, 0, NULL);
-+ cmd.hdr.size = cpu_to_le16(sizeof(cmd));
-+ cmd.action = cpu_to_le16(CMD_ACT_SET);
-+
-+ switch (priv->preamble) {
-+ case CMD_TYPE_SHORT_PREAMBLE:
-+ cmd.control = cpu_to_le16(SET_SHORT_PREAMBLE);
-+ break;
-+
-+ case CMD_TYPE_LONG_PREAMBLE:
-+ cmd.control = cpu_to_le16(SET_LONG_PREAMBLE);
-+ break;
-+
-+ case CMD_TYPE_AUTO_PREAMBLE:
-+ default:
-+ cmd.control = cpu_to_le16(SET_AUTO_PREAMBLE);
-+ break;
-+ }
-+
-+ if (priv->radioon)
-+ cmd.control |= cpu_to_le16(TURN_ON_RF);
-+ else
-+ cmd.control &= cpu_to_le16(~TURN_ON_RF);
-+
-+ lbs_deb_cmd("RADIO_SET: radio %d, preamble %d\n", priv->radioon,
-+ priv->preamble);
+-/**
+- * @brief Mark non-mesh packets and handover them to libertas_hard_start_xmit
+- *
+- */
+-static int libertas_pre_start_xmit(struct sk_buff *skb, struct net_device *dev)
+-{
+- wlan_private *priv = dev->priv;
+- int ret;
++ lbs_pr_err("tx watch dog timeout\n");
-- lbs_deb_cmd("RADIO_SET: radio %d, preamble %d\n",
-- priv->adapter->radioon, priv->adapter->preamble);
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_RADIO_CONTROL, &cmd);
+- lbs_deb_enter(LBS_DEB_NET);
++ dev->trans_start = jiffies;
- lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
- return ret;
+- if(priv->adapter->monitormode != WLAN_MONITOR_OFF) {
+- netif_stop_queue(dev);
+- return -EOPNOTSUPP;
++ if (priv->currenttxskb) {
++ priv->eventcause = 0x01000000;
++ lbs_send_tx_feedback(priv);
+ }
++ /* XX: Shouldn't we also call into the hw-specific driver
++ to kick it somehow? */
++ lbs_host_to_card_done(priv);
+
+- UNSET_MESH_FRAME(skb);
++ /* More often than not, this actually happens because the
++ firmware has crapped itself -- rather than just a very
++ busy medium. So send a harmless command, and if/when
++ _that_ times out, we'll kick it in the head. */
++ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
++ 0, 0, NULL);
+
+- ret = libertas_hard_start_xmit(skb, dev);
+- lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
+- return ret;
++ lbs_deb_leave(LBS_DEB_TX);
}
--int libertas_set_mac_packet_filter(wlan_private * priv)
-+int lbs_set_mac_packet_filter(struct lbs_private *priv)
+-static void libertas_tx_timeout(struct net_device *dev)
++void lbs_host_to_card_done(struct lbs_private *priv)
{
- int ret = 0;
+- wlan_private *priv = (wlan_private *) dev->priv;
++ unsigned long flags;
- lbs_deb_enter(LBS_DEB_CMD);
+- lbs_deb_enter(LBS_DEB_TX);
++ lbs_deb_enter(LBS_DEB_THREAD);
- /* Send MAC control command to station */
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_MAC_CONTROL, 0, 0, 0, NULL);
+- lbs_pr_err("tx watch dog timeout\n");
++ spin_lock_irqsave(&priv->driver_lock, flags);
+
+ priv->dnld_sent = DNLD_RES_RECEIVED;
+- dev->trans_start = jiffies;
+
+- if (priv->adapter->currenttxskb) {
+- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
+- /* If we are here, we have not received feedback from
+- the previous packet. Assume TX_FAIL and move on. */
+- priv->adapter->eventcause = 0x01000000;
+- libertas_send_tx_feedback(priv);
+- } else
+- wake_up_interruptible(&priv->waitq);
+- } else if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
+- netif_wake_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_wake_queue(priv->mesh_dev);
+- }
++ /* Wake main thread if commands are pending */
++ if (!priv->cur_cmd || priv->tx_pending_len > 0)
++ wake_up_interruptible(&priv->waitq);
+
+- lbs_deb_leave(LBS_DEB_TX);
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ lbs_deb_leave(LBS_DEB_THREAD);
+ }
++EXPORT_SYMBOL_GPL(lbs_host_to_card_done);
- lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
-@@ -1115,7 +1296,7 @@ int libertas_set_mac_packet_filter(wlan_private * priv)
/**
- * @brief This function prepare the command before send to firmware.
+ * @brief This function returns the network statistics
*
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param cmd_no command number
- * @param cmd_action command action: GET or SET
- * @param wait_option wait option: wait response or not
-@@ -1123,32 +1304,31 @@ int libertas_set_mac_packet_filter(wlan_private * priv)
- * @param pdata_buf A pointer to informaion buffer
- * @return 0 or -1
+- * @param dev A pointer to wlan_private structure
++ * @param dev A pointer to struct lbs_private structure
+ * @return A pointer to net_device_stats structure
*/
--int libertas_prepare_and_send_command(wlan_private * priv,
-+int lbs_prepare_and_send_command(struct lbs_private *priv,
- u16 cmd_no,
- u16 cmd_action,
- u16 wait_option, u32 cmd_oid, void *pdata_buf)
+-static struct net_device_stats *libertas_get_stats(struct net_device *dev)
++static struct net_device_stats *lbs_get_stats(struct net_device *dev)
+ {
+- wlan_private *priv = (wlan_private *) dev->priv;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+
++ lbs_deb_enter(LBS_DEB_NET);
+ return &priv->stats;
+ }
+
+-static int libertas_set_mac_address(struct net_device *dev, void *addr)
++static int lbs_set_mac_address(struct net_device *dev, void *addr)
{
int ret = 0;
+- wlan_private *priv = (wlan_private *) dev->priv;
- wlan_adapter *adapter = priv->adapter;
- struct cmd_ctrl_node *cmdnode;
- struct cmd_ds_command *cmdptr;
- unsigned long flags;
++ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+ struct sockaddr *phwaddr = addr;
- lbs_deb_enter(LBS_DEB_HOST);
+ lbs_deb_enter(LBS_DEB_NET);
+@@ -629,15 +537,15 @@ static int libertas_set_mac_address(struct net_device *dev, void *addr)
+ /* In case it was called from the mesh device */
+ dev = priv->dev ;
-- if (!adapter) {
-- lbs_deb_host("PREP_CMD: adapter is NULL\n");
-+ if (!priv) {
-+ lbs_deb_host("PREP_CMD: priv is NULL\n");
- ret = -1;
- goto done;
- }
+- memset(adapter->current_addr, 0, ETH_ALEN);
++ memset(priv->current_addr, 0, ETH_ALEN);
-- if (adapter->surpriseremoved) {
-+ if (priv->surpriseremoved) {
- lbs_deb_host("PREP_CMD: card removed\n");
- ret = -1;
- goto done;
- }
+ /* dev->dev_addr is 8 bytes */
+ lbs_deb_hex(LBS_DEB_NET, "dev->dev_addr", dev->dev_addr, ETH_ALEN);
-- cmdnode = libertas_get_free_cmd_ctrl_node(priv);
-+ cmdnode = lbs_get_cmd_ctrl_node(priv);
+ lbs_deb_hex(LBS_DEB_NET, "addr", phwaddr->sa_data, ETH_ALEN);
+- memcpy(adapter->current_addr, phwaddr->sa_data, ETH_ALEN);
++ memcpy(priv->current_addr, phwaddr->sa_data, ETH_ALEN);
- if (cmdnode == NULL) {
- lbs_deb_host("PREP_CMD: cmdnode is NULL\n");
-@@ -1159,138 +1339,107 @@ int libertas_prepare_and_send_command(wlan_private * priv,
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_MAC_ADDRESS,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_MAC_ADDRESS,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP, 0, NULL);
+
+@@ -647,89 +555,86 @@ static int libertas_set_mac_address(struct net_device *dev, void *addr)
goto done;
}
-- libertas_set_cmd_ctrl_node(priv, cmdnode, cmd_oid, wait_option, pdata_buf);
-+ lbs_set_cmd_ctrl_node(priv, cmdnode, pdata_buf);
+- lbs_deb_hex(LBS_DEB_NET, "adapter->macaddr", adapter->current_addr, ETH_ALEN);
+- memcpy(dev->dev_addr, adapter->current_addr, ETH_ALEN);
++ lbs_deb_hex(LBS_DEB_NET, "priv->macaddr", priv->current_addr, ETH_ALEN);
++ memcpy(dev->dev_addr, priv->current_addr, ETH_ALEN);
+ if (priv->mesh_dev)
+- memcpy(priv->mesh_dev->dev_addr, adapter->current_addr, ETH_ALEN);
++ memcpy(priv->mesh_dev->dev_addr, priv->current_addr, ETH_ALEN);
-- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
-+ cmdptr = (struct cmd_ds_command *)cmdnode->cmdbuf;
+ done:
+ lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
+ return ret;
+ }
- lbs_deb_host("PREP_CMD: command 0x%04x\n", cmd_no);
+-static int libertas_copy_multicast_address(wlan_adapter * adapter,
++static int lbs_copy_multicast_address(struct lbs_private *priv,
+ struct net_device *dev)
+ {
+ int i = 0;
+ struct dev_mc_list *mcptr = dev->mc_list;
-- if (!cmdptr) {
-- lbs_deb_host("PREP_CMD: cmdptr is NULL\n");
-- libertas_cleanup_and_insert_cmd(priv, cmdnode);
-- ret = -1;
-- goto done;
-- }
+ for (i = 0; i < dev->mc_count; i++) {
+- memcpy(&adapter->multicastlist[i], mcptr->dmi_addr, ETH_ALEN);
++ memcpy(&priv->multicastlist[i], mcptr->dmi_addr, ETH_ALEN);
+ mcptr = mcptr->next;
+ }
-
- /* Set sequence number, command and INT option */
-- adapter->seqnum++;
-- cmdptr->seqnum = cpu_to_le16(adapter->seqnum);
-+ priv->seqnum++;
-+ cmdptr->seqnum = cpu_to_le16(priv->seqnum);
-
- cmdptr->command = cpu_to_le16(cmd_no);
- cmdptr->result = 0;
+ return i;
+-
+ }
- switch (cmd_no) {
-- case CMD_GET_HW_SPEC:
-- ret = wlan_cmd_hw_spec(priv, cmdptr);
-- break;
- case CMD_802_11_PS_MODE:
-- ret = wlan_cmd_802_11_ps_mode(priv, cmdptr, cmd_action);
-+ ret = lbs_cmd_802_11_ps_mode(priv, cmdptr, cmd_action);
- break;
+-static void libertas_set_multicast_list(struct net_device *dev)
++static void lbs_set_multicast_list(struct net_device *dev)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int oldpacketfilter;
+ DECLARE_MAC_BUF(mac);
- case CMD_802_11_SCAN:
-- ret = libertas_cmd_80211_scan(priv, cmdptr, pdata_buf);
-+ ret = lbs_cmd_80211_scan(priv, cmdptr, pdata_buf);
- break;
+ lbs_deb_enter(LBS_DEB_NET);
- case CMD_MAC_CONTROL:
-- ret = wlan_cmd_mac_control(priv, cmdptr);
-+ ret = lbs_cmd_mac_control(priv, cmdptr);
- break;
+- oldpacketfilter = adapter->currentpacketfilter;
++ oldpacketfilter = priv->currentpacketfilter;
- case CMD_802_11_ASSOCIATE:
- case CMD_802_11_REASSOCIATE:
-- ret = libertas_cmd_80211_associate(priv, cmdptr, pdata_buf);
-+ ret = lbs_cmd_80211_associate(priv, cmdptr, pdata_buf);
- break;
+ if (dev->flags & IFF_PROMISC) {
+ lbs_deb_net("enable promiscuous mode\n");
+- adapter->currentpacketfilter |=
++ priv->currentpacketfilter |=
+ CMD_ACT_MAC_PROMISCUOUS_ENABLE;
+- adapter->currentpacketfilter &=
++ priv->currentpacketfilter &=
+ ~(CMD_ACT_MAC_ALL_MULTICAST_ENABLE |
+ CMD_ACT_MAC_MULTICAST_ENABLE);
+ } else {
+ /* Multicast */
+- adapter->currentpacketfilter &=
++ priv->currentpacketfilter &=
+ ~CMD_ACT_MAC_PROMISCUOUS_ENABLE;
- case CMD_802_11_DEAUTHENTICATE:
-- ret = libertas_cmd_80211_deauthenticate(priv, cmdptr);
-- break;
--
-- case CMD_802_11_SET_WEP:
-- ret = wlan_cmd_802_11_set_wep(priv, cmdptr, cmd_action, pdata_buf);
-+ ret = lbs_cmd_80211_deauthenticate(priv, cmdptr);
- break;
+ if (dev->flags & IFF_ALLMULTI || dev->mc_count >
+ MRVDRV_MAX_MULTICAST_LIST_SIZE) {
+ lbs_deb_net( "enabling all multicast\n");
+- adapter->currentpacketfilter |=
++ priv->currentpacketfilter |=
+ CMD_ACT_MAC_ALL_MULTICAST_ENABLE;
+- adapter->currentpacketfilter &=
++ priv->currentpacketfilter &=
+ ~CMD_ACT_MAC_MULTICAST_ENABLE;
+ } else {
+- adapter->currentpacketfilter &=
++ priv->currentpacketfilter &=
+ ~CMD_ACT_MAC_ALL_MULTICAST_ENABLE;
- case CMD_802_11_AD_HOC_START:
-- ret = libertas_cmd_80211_ad_hoc_start(priv, cmdptr, pdata_buf);
-+ ret = lbs_cmd_80211_ad_hoc_start(priv, cmdptr, pdata_buf);
- break;
- case CMD_CODE_DNLD:
- break;
+ if (!dev->mc_count) {
+ lbs_deb_net("no multicast addresses, "
+ "disabling multicast\n");
+- adapter->currentpacketfilter &=
++ priv->currentpacketfilter &=
+ ~CMD_ACT_MAC_MULTICAST_ENABLE;
+ } else {
+ int i;
- case CMD_802_11_RESET:
-- ret = wlan_cmd_802_11_reset(priv, cmdptr, cmd_action);
-+ ret = lbs_cmd_802_11_reset(priv, cmdptr, cmd_action);
- break;
+- adapter->currentpacketfilter |=
++ priv->currentpacketfilter |=
+ CMD_ACT_MAC_MULTICAST_ENABLE;
- case CMD_802_11_GET_LOG:
-- ret = wlan_cmd_802_11_get_log(priv, cmdptr);
-+ ret = lbs_cmd_802_11_get_log(priv, cmdptr);
- break;
+- adapter->nr_of_multicastmacaddr =
+- libertas_copy_multicast_address(adapter, dev);
++ priv->nr_of_multicastmacaddr =
++ lbs_copy_multicast_address(priv, dev);
- case CMD_802_11_AUTHENTICATE:
-- ret = libertas_cmd_80211_authenticate(priv, cmdptr, pdata_buf);
-+ ret = lbs_cmd_80211_authenticate(priv, cmdptr, pdata_buf);
- break;
+ lbs_deb_net("multicast addresses: %d\n",
+ dev->mc_count);
- case CMD_802_11_GET_STAT:
-- ret = wlan_cmd_802_11_get_stat(priv, cmdptr);
-+ ret = lbs_cmd_802_11_get_stat(priv, cmdptr);
- break;
+ for (i = 0; i < dev->mc_count; i++) {
+- lbs_deb_net("Multicast address %d:%s\n",
++ lbs_deb_net("Multicast address %d: %s\n",
+ i, print_mac(mac,
+- adapter->multicastlist[i]));
++ priv->multicastlist[i]));
+ }
+ /* send multicast addresses to firmware */
+- libertas_prepare_and_send_command(priv,
++ lbs_prepare_and_send_command(priv,
+ CMD_MAC_MULTICAST_ADR,
+ CMD_ACT_SET, 0, 0,
+ NULL);
+@@ -737,26 +642,25 @@ static void libertas_set_multicast_list(struct net_device *dev)
+ }
+ }
- case CMD_802_11_SNMP_MIB:
-- ret = wlan_cmd_802_11_snmp_mib(priv, cmdptr,
-+ ret = lbs_cmd_802_11_snmp_mib(priv, cmdptr,
- cmd_action, cmd_oid, pdata_buf);
- break;
+- if (adapter->currentpacketfilter != oldpacketfilter) {
+- libertas_set_mac_packet_filter(priv);
++ if (priv->currentpacketfilter != oldpacketfilter) {
++ lbs_set_mac_packet_filter(priv);
+ }
- case CMD_MAC_REG_ACCESS:
- case CMD_BBP_REG_ACCESS:
- case CMD_RF_REG_ACCESS:
-- ret = wlan_cmd_reg_access(priv, cmdptr, cmd_action, pdata_buf);
-- break;
--
-- case CMD_802_11_RF_CHANNEL:
-- ret = wlan_cmd_802_11_rf_channel(priv, cmdptr,
-- cmd_action, pdata_buf);
-+ ret = lbs_cmd_reg_access(priv, cmdptr, cmd_action, pdata_buf);
- break;
+ lbs_deb_leave(LBS_DEB_NET);
+ }
- case CMD_802_11_RF_TX_POWER:
-- ret = wlan_cmd_802_11_rf_tx_power(priv, cmdptr,
-+ ret = lbs_cmd_802_11_rf_tx_power(priv, cmdptr,
- cmd_action, pdata_buf);
- break;
+ /**
+- * @brief This function handles the major jobs in the WLAN driver.
++ * @brief This function handles the major jobs in the LBS driver.
+ * It handles all events generated by firmware, RX data received
+ * from firmware and TX data sent from kernel.
+ *
+- * @param data A pointer to wlan_thread structure
++ * @param data A pointer to lbs_thread structure
+ * @return 0
+ */
+-static int libertas_thread(void *data)
++static int lbs_thread(void *data)
+ {
+ struct net_device *dev = data;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ wait_queue_t wait;
+ u8 ireg = 0;
-- case CMD_802_11_RADIO_CONTROL:
-- ret = wlan_cmd_802_11_radio_control(priv, cmdptr, cmd_action);
-- break;
--
-- case CMD_802_11_DATA_RATE:
-- ret = wlan_cmd_802_11_data_rate(priv, cmdptr, cmd_action);
-- break;
- case CMD_802_11_RATE_ADAPT_RATESET:
-- ret = wlan_cmd_802_11_rate_adapt_rateset(priv,
-+ ret = lbs_cmd_802_11_rate_adapt_rateset(priv,
- cmdptr, cmd_action);
- break;
+@@ -764,215 +668,291 @@ static int libertas_thread(void *data)
- case CMD_MAC_MULTICAST_ADR:
-- ret = wlan_cmd_mac_multicast_adr(priv, cmdptr, cmd_action);
-+ ret = lbs_cmd_mac_multicast_adr(priv, cmdptr, cmd_action);
- break;
+ init_waitqueue_entry(&wait, current);
- case CMD_802_11_MONITOR_MODE:
-- ret = wlan_cmd_802_11_monitor_mode(priv, cmdptr,
-+ ret = lbs_cmd_802_11_monitor_mode(priv, cmdptr,
- cmd_action, pdata_buf);
- break;
+- set_freezable();
+ for (;;) {
+- lbs_deb_thread( "main-thread 111: intcounter=%d "
+- "currenttxskb=%p dnld_sent=%d\n",
+- adapter->intcounter,
+- adapter->currenttxskb, priv->dnld_sent);
++ int shouldsleep;
++
++ lbs_deb_thread( "main-thread 111: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
++ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
- case CMD_802_11_AD_HOC_JOIN:
-- ret = libertas_cmd_80211_ad_hoc_join(priv, cmdptr, pdata_buf);
-+ ret = lbs_cmd_80211_ad_hoc_join(priv, cmdptr, pdata_buf);
- break;
+ add_wait_queue(&priv->waitq, &wait);
+ set_current_state(TASK_INTERRUPTIBLE);
+- spin_lock_irq(&adapter->driver_lock);
+- if ((adapter->psstate == PS_STATE_SLEEP) ||
+- (!adapter->intcounter
+- && (priv->dnld_sent || adapter->cur_cmd ||
+- list_empty(&adapter->cmdpendingq)))) {
+- lbs_deb_thread(
+- "main-thread sleeping... Conn=%d IntC=%d PS_mode=%d PS_State=%d\n",
+- adapter->connect_status, adapter->intcounter,
+- adapter->psmode, adapter->psstate);
+- spin_unlock_irq(&adapter->driver_lock);
++ spin_lock_irq(&priv->driver_lock);
++
++ if (kthread_should_stop())
++ shouldsleep = 0; /* Bye */
++ else if (priv->surpriseremoved)
++ shouldsleep = 1; /* We need to wait until we're _told_ to die */
++ else if (priv->psstate == PS_STATE_SLEEP)
++ shouldsleep = 1; /* Sleep mode. Nothing we can do till it wakes */
++ else if (priv->intcounter)
++ shouldsleep = 0; /* Interrupt pending. Deal with it now */
++ else if (priv->cmd_timed_out)
++ shouldsleep = 0; /* Command timed out. Recover */
++ else if (!priv->fw_ready)
++ shouldsleep = 1; /* Firmware not ready. We're waiting for it */
++ else if (priv->dnld_sent)
++ shouldsleep = 1; /* Something is en route to the device already */
++ else if (priv->tx_pending_len > 0)
++ shouldsleep = 0; /* We've a packet to send */
++ else if (priv->cur_cmd)
++ shouldsleep = 1; /* Can't send a command; one already running */
++ else if (!list_empty(&priv->cmdpendingq))
++ shouldsleep = 0; /* We have a command to send */
++ else
++ shouldsleep = 1; /* No command */
++
++ if (shouldsleep) {
++ lbs_deb_thread("main-thread sleeping... Conn=%d IntC=%d PS_mode=%d PS_State=%d\n",
++ priv->connect_status, priv->intcounter,
++ priv->psmode, priv->psstate);
++ spin_unlock_irq(&priv->driver_lock);
+ schedule();
+ } else
+- spin_unlock_irq(&adapter->driver_lock);
++ spin_unlock_irq(&priv->driver_lock);
- case CMD_802_11_RSSI:
-- ret = wlan_cmd_802_11_rssi(priv, cmdptr);
-+ ret = lbs_cmd_802_11_rssi(priv, cmdptr);
- break;
+- lbs_deb_thread(
+- "main-thread 222 (waking up): intcounter=%d currenttxskb=%p "
+- "dnld_sent=%d\n", adapter->intcounter,
+- adapter->currenttxskb, priv->dnld_sent);
++ lbs_deb_thread("main-thread 222 (waking up): intcounter=%d currenttxskb=%p dnld_sent=%d\n",
++ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
- case CMD_802_11_AD_HOC_STOP:
-- ret = libertas_cmd_80211_ad_hoc_stop(priv, cmdptr);
-- break;
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&priv->waitq, &wait);
+- try_to_freeze();
-
-- case CMD_802_11_ENABLE_RSN:
-- ret = wlan_cmd_802_11_enable_rsn(priv, cmdptr, cmd_action,
-- pdata_buf);
-+ ret = lbs_cmd_80211_ad_hoc_stop(priv, cmdptr);
- break;
+- lbs_deb_thread("main-thread 333: intcounter=%d currenttxskb=%p "
+- "dnld_sent=%d\n",
+- adapter->intcounter,
+- adapter->currenttxskb, priv->dnld_sent);
+-
+- if (kthread_should_stop()
+- || adapter->surpriseremoved) {
+- lbs_deb_thread(
+- "main-thread: break from main thread: surpriseremoved=0x%x\n",
+- adapter->surpriseremoved);
++
++ lbs_deb_thread("main-thread 333: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
++ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
++
++ if (kthread_should_stop()) {
++ lbs_deb_thread("main-thread: break from main thread\n");
+ break;
+ }
- case CMD_802_11_KEY_MATERIAL:
-- ret = wlan_cmd_802_11_key_material(priv, cmdptr, cmd_action,
-+ ret = lbs_cmd_802_11_key_material(priv, cmdptr, cmd_action,
- cmd_oid, pdata_buf);
- break;
++ if (priv->surpriseremoved) {
++ lbs_deb_thread("adapter removed; waiting to die...\n");
++ continue;
++ }
++
++ spin_lock_irq(&priv->driver_lock);
-@@ -1300,11 +1449,11 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- break;
+- spin_lock_irq(&adapter->driver_lock);
+- if (adapter->intcounter) {
++ if (priv->intcounter) {
+ u8 int_status;
+- adapter->intcounter = 0;
++
++ priv->intcounter = 0;
+ int_status = priv->hw_get_int_status(priv, &ireg);
- case CMD_802_11_MAC_ADDRESS:
-- ret = wlan_cmd_802_11_mac_address(priv, cmdptr, cmd_action);
-+ ret = lbs_cmd_802_11_mac_address(priv, cmdptr, cmd_action);
- break;
+ if (int_status) {
+- lbs_deb_thread(
+- "main-thread: reading HOST_INT_STATUS_REG failed\n");
+- spin_unlock_irq(&adapter->driver_lock);
++ lbs_deb_thread("main-thread: reading HOST_INT_STATUS_REG failed\n");
++ spin_unlock_irq(&priv->driver_lock);
+ continue;
+ }
+- adapter->hisregcpy |= ireg;
++ priv->hisregcpy |= ireg;
+ }
- case CMD_802_11_EEPROM_ACCESS:
-- ret = wlan_cmd_802_11_eeprom_access(priv, cmdptr,
-+ ret = lbs_cmd_802_11_eeprom_access(priv, cmdptr,
- cmd_action, pdata_buf);
- break;
+- lbs_deb_thread("main-thread 444: intcounter=%d currenttxskb=%p "
+- "dnld_sent=%d\n",
+- adapter->intcounter,
+- adapter->currenttxskb, priv->dnld_sent);
++ lbs_deb_thread("main-thread 444: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
++ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
-@@ -1322,19 +1471,10 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- goto done;
+ /* command response? */
+- if (adapter->hisregcpy & MRVDRV_CMD_UPLD_RDY) {
++ if (priv->hisregcpy & MRVDRV_CMD_UPLD_RDY) {
+ lbs_deb_thread("main-thread: cmd response ready\n");
- case CMD_802_11D_DOMAIN_INFO:
-- ret = libertas_cmd_802_11d_domain_info(priv, cmdptr,
-+ ret = lbs_cmd_802_11d_domain_info(priv, cmdptr,
- cmd_no, cmd_action);
- break;
+- adapter->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
+- spin_unlock_irq(&adapter->driver_lock);
+- libertas_process_rx_command(priv);
+- spin_lock_irq(&adapter->driver_lock);
++ priv->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
++ spin_unlock_irq(&priv->driver_lock);
++ lbs_process_rx_command(priv);
++ spin_lock_irq(&priv->driver_lock);
+ }
-- case CMD_802_11_SLEEP_PARAMS:
-- ret = wlan_cmd_802_11_sleep_params(priv, cmdptr, cmd_action);
-- break;
-- case CMD_802_11_INACTIVITY_TIMEOUT:
-- ret = wlan_cmd_802_11_inactivity_timeout(priv, cmdptr,
-- cmd_action, pdata_buf);
-- libertas_set_cmd_ctrl_node(priv, cmdnode, 0, 0, pdata_buf);
-- break;
--
- case CMD_802_11_TPC_CFG:
- cmdptr->command = cpu_to_le16(CMD_802_11_TPC_CFG);
- cmdptr->size =
-@@ -1361,13 +1501,15 @@ int libertas_prepare_and_send_command(wlan_private * priv,
++ if (priv->cmd_timed_out && priv->cur_cmd) {
++ struct cmd_ctrl_node *cmdnode = priv->cur_cmd;
++
++ if (++priv->nr_retries > 10) {
++ lbs_pr_info("Excessive timeouts submitting command %x\n",
++ le16_to_cpu(cmdnode->cmdbuf->command));
++ lbs_complete_command(priv, cmdnode, -ETIMEDOUT);
++ priv->nr_retries = 0;
++ } else {
++ priv->cur_cmd = NULL;
++ lbs_pr_info("requeueing command %x due to timeout (#%d)\n",
++ le16_to_cpu(cmdnode->cmdbuf->command), priv->nr_retries);
++
++ /* Stick it back at the _top_ of the pending queue
++ for immediate resubmission */
++ list_add(&cmdnode->list, &priv->cmdpendingq);
++ }
++ }
++ priv->cmd_timed_out = 0;
++
+ /* Any Card Event */
+- if (adapter->hisregcpy & MRVDRV_CARDEVENT) {
++ if (priv->hisregcpy & MRVDRV_CARDEVENT) {
+ lbs_deb_thread("main-thread: Card Event Activity\n");
- #define ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN 8
- cmdptr->size =
-- cpu_to_le16(gpio->header.len + S_DS_GEN +
-- ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN);
-- gpio->header.len = cpu_to_le16(gpio->header.len);
-+ cpu_to_le16(le16_to_cpu(gpio->header.len)
-+ + S_DS_GEN
-+ + ACTION_NUMLED_TLVTYPE_LEN_FIELDS_LEN);
-+ gpio->header.len = gpio->header.len;
+- adapter->hisregcpy &= ~MRVDRV_CARDEVENT;
++ priv->hisregcpy &= ~MRVDRV_CARDEVENT;
- ret = 0;
- break;
- }
+ if (priv->hw_read_event_cause(priv)) {
+- lbs_pr_alert(
+- "main-thread: hw_read_event_cause failed\n");
+- spin_unlock_irq(&adapter->driver_lock);
++ lbs_pr_alert("main-thread: hw_read_event_cause failed\n");
++ spin_unlock_irq(&priv->driver_lock);
+ continue;
+ }
+- spin_unlock_irq(&adapter->driver_lock);
+- libertas_process_event(priv);
++ spin_unlock_irq(&priv->driver_lock);
++ lbs_process_event(priv);
+ } else
+- spin_unlock_irq(&adapter->driver_lock);
++ spin_unlock_irq(&priv->driver_lock);
+
- case CMD_802_11_PWR_CFG:
- cmdptr->command = cpu_to_le16(CMD_802_11_PWR_CFG);
- cmdptr->size =
-@@ -1379,19 +1521,11 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- ret = 0;
- break;
- case CMD_BT_ACCESS:
-- ret = wlan_cmd_bt_access(priv, cmdptr, cmd_action, pdata_buf);
-+ ret = lbs_cmd_bt_access(priv, cmdptr, cmd_action, pdata_buf);
- break;
++ if (!priv->fw_ready)
++ continue;
- case CMD_FWT_ACCESS:
-- ret = wlan_cmd_fwt_access(priv, cmdptr, cmd_action, pdata_buf);
-- break;
--
-- case CMD_MESH_ACCESS:
-- ret = wlan_cmd_mesh_access(priv, cmdptr, cmd_action, pdata_buf);
-- break;
+ /* Check if we need to confirm Sleep Request received previously */
+- if (adapter->psstate == PS_STATE_PRE_SLEEP) {
+- if (!priv->dnld_sent && !adapter->cur_cmd) {
+- if (adapter->connect_status ==
+- LIBERTAS_CONNECTED) {
+- lbs_deb_thread(
+- "main_thread: PRE_SLEEP--intcounter=%d currenttxskb=%p "
+- "dnld_sent=%d cur_cmd=%p, confirm now\n",
+- adapter->intcounter,
+- adapter->currenttxskb,
+- priv->dnld_sent,
+- adapter->cur_cmd);
-
-- case CMD_SET_BOOT2_VER:
-- ret = wlan_cmd_set_boot2_ver(priv, cmdptr, cmd_action, pdata_buf);
-+ ret = lbs_cmd_fwt_access(priv, cmdptr, cmd_action, pdata_buf);
- break;
-
- case CMD_GET_TSF:
-@@ -1400,6 +1534,9 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- S_DS_GEN);
- ret = 0;
- break;
-+ case CMD_802_11_BEACON_CTRL:
-+ ret = lbs_cmd_bcn_ctrl(priv, cmdptr, cmd_action);
-+ break;
- default:
- lbs_deb_host("PREP_CMD: unknown command 0x%04x\n", cmd_no);
- ret = -1;
-@@ -1409,14 +1546,14 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- /* return error, since the command preparation failed */
- if (ret != 0) {
- lbs_deb_host("PREP_CMD: command preparation failed\n");
-- libertas_cleanup_and_insert_cmd(priv, cmdnode);
-+ lbs_cleanup_and_insert_cmd(priv, cmdnode);
- ret = -1;
- goto done;
- }
+- libertas_ps_confirm_sleep(priv,
+- (u16) adapter->psmode);
+- } else {
+- /* workaround for firmware sending
+- * deauth/linkloss event immediately
+- * after sleep request, remove this
+- * after firmware fixes it
+- */
+- adapter->psstate = PS_STATE_AWAKE;
+- lbs_pr_alert(
+- "main-thread: ignore PS_SleepConfirm in non-connected state\n");
+- }
++ if (priv->psstate == PS_STATE_PRE_SLEEP &&
++ !priv->dnld_sent && !priv->cur_cmd) {
++ if (priv->connect_status == LBS_CONNECTED) {
++ lbs_deb_thread("main_thread: PRE_SLEEP--intcounter=%d currenttxskb=%p dnld_sent=%d cur_cmd=%p, confirm now\n",
++ priv->intcounter, priv->currenttxskb, priv->dnld_sent, priv->cur_cmd);
++
++ lbs_ps_confirm_sleep(priv, (u16) priv->psmode);
++ } else {
++ /* workaround for firmware sending
++ * deauth/linkloss event immediately
++ * after sleep request; remove this
++ * after firmware fixes it
++ */
++ priv->psstate = PS_STATE_AWAKE;
++ lbs_pr_alert("main-thread: ignore PS_SleepConfirm in non-connected state\n");
+ }
+ }
- cmdnode->cmdwaitqwoken = 0;
+ /* The PS state is changed during processing of Sleep Request
+ * event above
+ */
+- if ((priv->adapter->psstate == PS_STATE_SLEEP) ||
+- (priv->adapter->psstate == PS_STATE_PRE_SLEEP))
++ if ((priv->psstate == PS_STATE_SLEEP) ||
++ (priv->psstate == PS_STATE_PRE_SLEEP))
+ continue;
-- libertas_queue_cmd(adapter, cmdnode, 1);
-+ lbs_queue_cmd(priv, cmdnode);
- wake_up_interruptible(&priv->waitq);
+ /* Execute the next command */
+- if (!priv->dnld_sent && !priv->adapter->cur_cmd)
+- libertas_execute_next_command(priv);
++ if (!priv->dnld_sent && !priv->cur_cmd)
++ lbs_execute_next_command(priv);
- if (wait_option & CMD_OPTION_WAITFORRSP) {
-@@ -1426,67 +1563,60 @@ int libertas_prepare_and_send_command(wlan_private * priv,
- cmdnode->cmdwaitqwoken);
+ /* Wake-up command waiters which can't sleep in
+- * libertas_prepare_and_send_command
++ * lbs_prepare_and_send_command
+ */
+- if (!adapter->nr_cmd_pending)
+- wake_up_all(&adapter->cmd_pending);
+-
+- libertas_tx_runqueue(priv);
++ if (!list_empty(&priv->cmdpendingq))
++ wake_up_all(&priv->cmd_pending);
++
++ spin_lock_irq(&priv->driver_lock);
++ if (!priv->dnld_sent && priv->tx_pending_len > 0) {
++ int ret = priv->hw_host_to_card(priv, MVMS_DAT,
++ priv->tx_pending_buf,
++ priv->tx_pending_len);
++ if (ret) {
++ lbs_deb_tx("host_to_card failed %d\n", ret);
++ priv->dnld_sent = DNLD_RES_RECEIVED;
++ }
++ priv->tx_pending_len = 0;
++ if (!priv->currenttxskb) {
++ /* We can wake the queues immediately if we aren't
++ waiting for TX feedback */
++ if (priv->connect_status == LBS_CONNECTED)
++ netif_wake_queue(priv->dev);
++ if (priv->mesh_dev &&
++ priv->mesh_connect_status == LBS_CONNECTED)
++ netif_wake_queue(priv->mesh_dev);
++ }
++ }
++ spin_unlock_irq(&priv->driver_lock);
}
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->cur_cmd_retcode) {
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (priv->cur_cmd_retcode) {
- lbs_deb_host("PREP_CMD: command failed with return code %d\n",
-- adapter->cur_cmd_retcode);
-- adapter->cur_cmd_retcode = 0;
-+ priv->cur_cmd_retcode);
-+ priv->cur_cmd_retcode = 0;
- ret = -1;
- }
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- del_timer(&adapter->command_timer);
+- adapter->nr_cmd_pending = 0;
+- wake_up_all(&adapter->cmd_pending);
++ del_timer(&priv->command_timer);
++ wake_up_all(&priv->cmd_pending);
- done:
- lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
- return ret;
+ lbs_deb_leave(LBS_DEB_THREAD);
+ return 0;
}
--EXPORT_SYMBOL_GPL(libertas_prepare_and_send_command);
-+EXPORT_SYMBOL_GPL(lbs_prepare_and_send_command);
++static int lbs_suspend_callback(struct lbs_private *priv, unsigned long dummy,
++ struct cmd_header *cmd)
++{
++ lbs_deb_enter(LBS_DEB_FW);
++
++ netif_device_detach(priv->dev);
++ if (priv->mesh_dev)
++ netif_device_detach(priv->mesh_dev);
++
++ priv->fw_ready = 0;
++ lbs_deb_leave(LBS_DEB_FW);
++ return 0;
++}
++
++int lbs_suspend(struct lbs_private *priv)
++{
++ struct cmd_header cmd;
++ int ret;
++
++ lbs_deb_enter(LBS_DEB_FW);
++
++ if (priv->wol_criteria == 0xffffffff) {
++ lbs_pr_info("Suspend attempt without configuring wake params!\n");
++ return -EINVAL;
++ }
++
++ memset(&cmd, 0, sizeof(cmd));
++
++ ret = __lbs_cmd(priv, CMD_802_11_HOST_SLEEP_ACTIVATE, &cmd,
++ sizeof(cmd), lbs_suspend_callback, 0);
++ if (ret)
++ lbs_pr_info("HOST_SLEEP_ACTIVATE failed: %d\n", ret);
++
++ lbs_deb_leave_args(LBS_DEB_FW, "ret %d", ret);
++ return ret;
++}
++EXPORT_SYMBOL_GPL(lbs_suspend);
++
++int lbs_resume(struct lbs_private *priv)
++{
++ lbs_deb_enter(LBS_DEB_FW);
++
++ priv->fw_ready = 1;
++
++ /* Firmware doesn't seem to give us RX packets any more
++ until we send it some command. Might as well update */
++ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
++ 0, 0, NULL);
++
++ netif_device_attach(priv->dev);
++ if (priv->mesh_dev)
++ netif_device_attach(priv->mesh_dev);
++
++ lbs_deb_leave(LBS_DEB_FW);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(lbs_resume);
++
/**
- * @brief This function allocates the command buffer and link
- * it to command free queue.
+ * @brief This function downloads firmware image, gets
+ * HW spec from firmware and set basic parameters to
+ * firmware.
*
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return 0 or -1
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return 0 or -1
*/
--int libertas_allocate_cmd_buffer(wlan_private * priv)
-+int lbs_allocate_cmd_buffer(struct lbs_private *priv)
+-static int wlan_setup_firmware(wlan_private * priv)
++static int lbs_setup_firmware(struct lbs_private *priv)
{
- int ret = 0;
-- u32 ulbufsize;
-+ u32 bufsize;
- u32 i;
-- struct cmd_ctrl_node *tempcmd_array;
-- u8 *ptempvirtualaddr;
+ int ret = -1;
- wlan_adapter *adapter = priv->adapter;
-+ struct cmd_ctrl_node *cmdarray;
+- struct cmd_ds_mesh_access mesh_access;
- lbs_deb_enter(LBS_DEB_HOST);
+ lbs_deb_enter(LBS_DEB_FW);
-- /* Allocate and initialize cmdCtrlNode */
-- ulbufsize = sizeof(struct cmd_ctrl_node) * MRVDRV_NUM_OF_CMD_BUFFER;
+ /*
+ * Read MAC address from HW
+ */
+- memset(adapter->current_addr, 0xff, ETH_ALEN);
-
-- if (!(tempcmd_array = kzalloc(ulbufsize, GFP_KERNEL))) {
-+ /* Allocate and initialize the command array */
-+ bufsize = sizeof(struct cmd_ctrl_node) * LBS_NUM_CMD_BUFFERS;
-+ if (!(cmdarray = kzalloc(bufsize, GFP_KERNEL))) {
- lbs_deb_host("ALLOC_CMD_BUF: tempcmd_array is NULL\n");
+- ret = libertas_prepare_and_send_command(priv, CMD_GET_HW_SPEC,
+- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
+-
++ memset(priv->current_addr, 0xff, ETH_ALEN);
++ ret = lbs_update_hw_spec(priv);
+ if (ret) {
ret = -1;
goto done;
}
-- adapter->cmd_array = tempcmd_array;
-+ priv->cmd_array = cmdarray;
-- /* Allocate and initialize command buffers */
-- ulbufsize = MRVDRV_SIZE_OF_CMD_BUFFER;
-- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
-- if (!(ptempvirtualaddr = kzalloc(ulbufsize, GFP_KERNEL))) {
-+ /* Allocate and initialize each command buffer in the command array */
-+ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
-+ cmdarray[i].cmdbuf = kzalloc(LBS_CMD_BUFFER_SIZE, GFP_KERNEL);
-+ if (!cmdarray[i].cmdbuf) {
- lbs_deb_host("ALLOC_CMD_BUF: ptempvirtualaddr is NULL\n");
- ret = -1;
- goto done;
- }
+- libertas_set_mac_packet_filter(priv);
-
-- /* Update command buffer virtual */
-- tempcmd_array[i].bufvirtualaddr = ptempvirtualaddr;
- }
+- /* Get the supported Data rates */
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_DATA_RATE,
+- CMD_ACT_GET_TX_RATE,
+- CMD_OPTION_WAITFORRSP, 0, NULL);
++ lbs_set_mac_packet_filter(priv);
-- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
-- init_waitqueue_head(&tempcmd_array[i].cmdwait_q);
-- libertas_cleanup_and_insert_cmd(priv, &tempcmd_array[i]);
-+ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
-+ init_waitqueue_head(&cmdarray[i].cmdwait_q);
-+ lbs_cleanup_and_insert_cmd(priv, &cmdarray[i]);
+- if (ret) {
++ ret = lbs_get_data_rate(priv);
++ if (ret < 0) {
+ ret = -1;
+ goto done;
}
+
+- /* Disable mesh autostart */
+- if (priv->mesh_dev) {
+- memset(&mesh_access, 0, sizeof(mesh_access));
+- mesh_access.data[0] = cpu_to_le32(0);
+- ret = libertas_prepare_and_send_command(priv,
+- CMD_MESH_ACCESS,
+- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
+- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+- if (ret) {
+- ret = -1;
+- goto done;
+- }
+- priv->mesh_autostart_enabled = 0;
+- }
+-
+- /* Set the boot2 version in firmware */
+- ret = libertas_prepare_and_send_command(priv, CMD_SET_BOOT2_VER,
+- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
-
ret = 0;
-
done:
-@@ -1497,39 +1627,36 @@ done:
- /**
- * @brief This function frees the command buffer.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return 0 or -1
+ lbs_deb_leave_args(LBS_DEB_FW, "ret %d", ret);
+@@ -985,164 +965,130 @@ done:
*/
--int libertas_free_cmd_buffer(wlan_private * priv)
-+int lbs_free_cmd_buffer(struct lbs_private *priv)
+ static void command_timer_fn(unsigned long data)
{
-- u32 ulbufsize; /* Someone needs to die for this. Slowly and painfully */
-+ struct cmd_ctrl_node *cmdarray;
- unsigned int i;
-- struct cmd_ctrl_node *tempcmd_array;
+- wlan_private *priv = (wlan_private *)data;
- wlan_adapter *adapter = priv->adapter;
+- struct cmd_ctrl_node *ptempnode;
+- struct cmd_ds_command *cmd;
++ struct lbs_private *priv = (struct lbs_private *)data;
+ unsigned long flags;
- lbs_deb_enter(LBS_DEB_HOST);
+- ptempnode = adapter->cur_cmd;
+- if (ptempnode == NULL) {
+- lbs_deb_fw("ptempnode empty\n");
+- return;
+- }
++ lbs_deb_enter(LBS_DEB_CMD);
++ spin_lock_irqsave(&priv->driver_lock, flags);
- /* need to check if cmd array is allocated or not */
-- if (adapter->cmd_array == NULL) {
-+ if (priv->cmd_array == NULL) {
- lbs_deb_host("FREE_CMD_BUF: cmd_array is NULL\n");
- goto done;
+- cmd = (struct cmd_ds_command *)ptempnode->bufvirtualaddr;
+- if (!cmd) {
+- lbs_deb_fw("cmd is NULL\n");
+- return;
++ if (!priv->cur_cmd) {
++ lbs_pr_info("Command timer expired; no pending command\n");
++ goto out;
}
-- tempcmd_array = adapter->cmd_array;
-+ cmdarray = priv->cmd_array;
+- lbs_deb_fw("command_timer_fn fired, cmd %x\n", cmd->command);
+-
+- if (!adapter->fw_ready)
+- return;
+-
+- spin_lock_irqsave(&adapter->driver_lock, flags);
+- adapter->cur_cmd = NULL;
+- spin_unlock_irqrestore(&adapter->driver_lock, flags);
+-
+- lbs_deb_fw("re-sending same command because of timeout\n");
+- libertas_queue_cmd(adapter, ptempnode, 0);
++ lbs_pr_info("Command %x timed out\n", le16_to_cpu(priv->cur_cmd->cmdbuf->command));
- /* Release shared memory buffers */
-- ulbufsize = MRVDRV_SIZE_OF_CMD_BUFFER;
-- for (i = 0; i < MRVDRV_NUM_OF_CMD_BUFFER; i++) {
-- if (tempcmd_array[i].bufvirtualaddr) {
-- kfree(tempcmd_array[i].bufvirtualaddr);
-- tempcmd_array[i].bufvirtualaddr = NULL;
-+ for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
-+ if (cmdarray[i].cmdbuf) {
-+ kfree(cmdarray[i].cmdbuf);
-+ cmdarray[i].cmdbuf = NULL;
- }
++ priv->cmd_timed_out = 1;
+ wake_up_interruptible(&priv->waitq);
+-
+- return;
++out:
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ lbs_deb_leave(LBS_DEB_CMD);
+ }
+
+-static int libertas_init_adapter(wlan_private * priv)
++static int lbs_init_adapter(struct lbs_private *priv)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ size_t bufsize;
+ int i, ret = 0;
+
++ lbs_deb_enter(LBS_DEB_MAIN);
++
+ /* Allocate buffer to store the BSSID list */
+ bufsize = MAX_NETWORK_COUNT * sizeof(struct bss_descriptor);
+- adapter->networks = kzalloc(bufsize, GFP_KERNEL);
+- if (!adapter->networks) {
++ priv->networks = kzalloc(bufsize, GFP_KERNEL);
++ if (!priv->networks) {
+ lbs_pr_err("Out of memory allocating beacons\n");
+ ret = -1;
+ goto out;
}
- /* Release cmd_ctrl_node */
-- if (adapter->cmd_array) {
-- kfree(adapter->cmd_array);
-- adapter->cmd_array = NULL;
-+ if (priv->cmd_array) {
-+ kfree(priv->cmd_array);
-+ priv->cmd_array = NULL;
+ /* Initialize scan result lists */
+- INIT_LIST_HEAD(&adapter->network_free_list);
+- INIT_LIST_HEAD(&adapter->network_list);
++ INIT_LIST_HEAD(&priv->network_free_list);
++ INIT_LIST_HEAD(&priv->network_list);
+ for (i = 0; i < MAX_NETWORK_COUNT; i++) {
+- list_add_tail(&adapter->networks[i].list,
+- &adapter->network_free_list);
++ list_add_tail(&priv->networks[i].list,
++ &priv->network_free_list);
}
- done:
-@@ -1541,34 +1668,31 @@ done:
- * @brief This function gets a free command node if available in
- * command free queue.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return cmd_ctrl_node A pointer to cmd_ctrl_node structure or NULL
- */
--struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv)
-+static struct cmd_ctrl_node *lbs_get_cmd_ctrl_node(struct lbs_private *priv)
- {
- struct cmd_ctrl_node *tempnode;
-- wlan_adapter *adapter = priv->adapter;
- unsigned long flags;
+- adapter->libertas_ps_confirm_sleep.seqnum = cpu_to_le16(++adapter->seqnum);
+- adapter->libertas_ps_confirm_sleep.command =
++ priv->lbs_ps_confirm_sleep.seqnum = cpu_to_le16(++priv->seqnum);
++ priv->lbs_ps_confirm_sleep.command =
+ cpu_to_le16(CMD_802_11_PS_MODE);
+- adapter->libertas_ps_confirm_sleep.size =
++ priv->lbs_ps_confirm_sleep.size =
+ cpu_to_le16(sizeof(struct PS_CMD_ConfirmSleep));
+- adapter->libertas_ps_confirm_sleep.action =
++ priv->lbs_ps_confirm_sleep.action =
+ cpu_to_le16(CMD_SUBCMD_SLEEP_CONFIRMED);
- lbs_deb_enter(LBS_DEB_HOST);
+- memset(adapter->current_addr, 0xff, ETH_ALEN);
+-
+- adapter->connect_status = LIBERTAS_DISCONNECTED;
+- adapter->secinfo.auth_mode = IW_AUTH_ALG_OPEN_SYSTEM;
+- adapter->mode = IW_MODE_INFRA;
+- adapter->curbssparams.channel = DEFAULT_AD_HOC_CHANNEL;
+- adapter->currentpacketfilter = CMD_ACT_MAC_RX_ON | CMD_ACT_MAC_TX_ON;
+- adapter->radioon = RADIO_ON;
+- adapter->auto_rate = 1;
+- adapter->capability = WLAN_CAPABILITY_SHORT_PREAMBLE;
+- adapter->psmode = WLAN802_11POWERMODECAM;
+- adapter->psstate = PS_STATE_FULL_POWER;
++ memset(priv->current_addr, 0xff, ETH_ALEN);
-- if (!adapter)
-+ if (!priv)
- return NULL;
+- mutex_init(&adapter->lock);
++ priv->connect_status = LBS_DISCONNECTED;
++ priv->mesh_connect_status = LBS_DISCONNECTED;
++ priv->secinfo.auth_mode = IW_AUTH_ALG_OPEN_SYSTEM;
++ priv->mode = IW_MODE_INFRA;
++ priv->curbssparams.channel = DEFAULT_AD_HOC_CHANNEL;
++ priv->currentpacketfilter = CMD_ACT_MAC_RX_ON | CMD_ACT_MAC_TX_ON;
++ priv->radioon = RADIO_ON;
++ priv->auto_rate = 1;
++ priv->capability = WLAN_CAPABILITY_SHORT_PREAMBLE;
++ priv->psmode = LBS802_11POWERMODECAM;
++ priv->psstate = PS_STATE_FULL_POWER;
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
+- memset(&adapter->tx_queue_ps, 0, NR_TX_QUEUE*sizeof(struct sk_buff*));
+- adapter->tx_queue_idx = 0;
+- spin_lock_init(&adapter->txqueue_lock);
++ mutex_init(&priv->lock);
-- if (!list_empty(&adapter->cmdfreeq)) {
-- tempnode = (struct cmd_ctrl_node *)adapter->cmdfreeq.next;
-- list_del((struct list_head *)tempnode);
-+ if (!list_empty(&priv->cmdfreeq)) {
-+ tempnode = list_first_entry(&priv->cmdfreeq,
-+ struct cmd_ctrl_node, list);
-+ list_del(&tempnode->list);
- } else {
- lbs_deb_host("GET_CMD_NODE: cmd_ctrl_node is not available\n");
- tempnode = NULL;
+- setup_timer(&adapter->command_timer, command_timer_fn,
+- (unsigned long)priv);
++ setup_timer(&priv->command_timer, command_timer_fn,
++ (unsigned long)priv);
+
+- INIT_LIST_HEAD(&adapter->cmdfreeq);
+- INIT_LIST_HEAD(&adapter->cmdpendingq);
++ INIT_LIST_HEAD(&priv->cmdfreeq);
++ INIT_LIST_HEAD(&priv->cmdpendingq);
+
+- spin_lock_init(&adapter->driver_lock);
+- init_waitqueue_head(&adapter->cmd_pending);
+- adapter->nr_cmd_pending = 0;
++ spin_lock_init(&priv->driver_lock);
++ init_waitqueue_head(&priv->cmd_pending);
+
+ /* Allocate the command buffers */
+- if (libertas_allocate_cmd_buffer(priv)) {
++ if (lbs_allocate_cmd_buffer(priv)) {
+ lbs_pr_err("Out of memory allocating command buffers\n");
+ ret = -1;
}
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
--
-- if (tempnode)
-- cleanup_cmdnode(tempnode);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ out:
++ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
++
+ return ret;
+ }
- lbs_deb_leave(LBS_DEB_HOST);
- return tempnode;
-@@ -1580,47 +1704,26 @@ struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv)
- * @param ptempnode A pointer to cmdCtrlNode structure
- * @return n/a
- */
--static void cleanup_cmdnode(struct cmd_ctrl_node *ptempnode)
--{
-- lbs_deb_enter(LBS_DEB_HOST);
+-static void libertas_free_adapter(wlan_private * priv)
++static void lbs_free_adapter(struct lbs_private *priv)
+ {
+- wlan_adapter *adapter = priv->adapter;
-
-- if (!ptempnode)
+- if (!adapter) {
+- lbs_deb_fw("why double free adapter?\n");
- return;
-- ptempnode->cmdwaitqwoken = 1;
-- wake_up_interruptible(&ptempnode->cmdwait_q);
-- ptempnode->status = 0;
-- ptempnode->cmd_oid = (u32) 0;
-- ptempnode->wait_option = 0;
-- ptempnode->pdata_buf = NULL;
+- }
-
-- if (ptempnode->bufvirtualaddr != NULL)
-- memset(ptempnode->bufvirtualaddr, 0, MRVDRV_SIZE_OF_CMD_BUFFER);
+- lbs_deb_fw("free command buffer\n");
+- libertas_free_cmd_buffer(priv);
-
-- lbs_deb_leave(LBS_DEB_HOST);
--}
+- lbs_deb_fw("free command_timer\n");
+- del_timer(&adapter->command_timer);
++ lbs_deb_enter(LBS_DEB_MAIN);
+
+- lbs_deb_fw("free scan results table\n");
+- kfree(adapter->networks);
+- adapter->networks = NULL;
++ lbs_free_cmd_buffer(priv);
++ del_timer(&priv->command_timer);
++ kfree(priv->networks);
++ priv->networks = NULL;
+
+- /* Free the adapter object itself */
+- lbs_deb_fw("free adapter\n");
+- kfree(adapter);
+- priv->adapter = NULL;
++ lbs_deb_leave(LBS_DEB_MAIN);
+ }
/**
- * @brief This function initializes the command node.
+ * @brief This function adds the card. it will probe the
+- * card, allocate the wlan_priv and initialize the device.
++ * card, allocate the lbs_priv and initialize the device.
*
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param ptempnode A pointer to cmd_ctrl_node structure
-- * @param cmd_oid cmd oid: treated as sub command
-- * @param wait_option wait option: wait response or not
- * @param pdata_buf A pointer to informaion buffer
- * @return 0 or -1
+ * @param card A pointer to card
+- * @return A pointer to wlan_private structure
++ * @return A pointer to struct lbs_private structure
*/
--void libertas_set_cmd_ctrl_node(wlan_private * priv,
-- struct cmd_ctrl_node *ptempnode,
-- u32 cmd_oid, u16 wait_option, void *pdata_buf)
-+static void lbs_set_cmd_ctrl_node(struct lbs_private *priv,
-+ struct cmd_ctrl_node *ptempnode,
-+ void *pdata_buf)
+-wlan_private *libertas_add_card(void *card, struct device *dmdev)
++struct lbs_private *lbs_add_card(void *card, struct device *dmdev)
{
- lbs_deb_enter(LBS_DEB_HOST);
+ struct net_device *dev = NULL;
+- wlan_private *priv = NULL;
++ struct lbs_private *priv = NULL;
- if (!ptempnode)
- return;
+- lbs_deb_enter(LBS_DEB_NET);
++ lbs_deb_enter(LBS_DEB_MAIN);
-- ptempnode->cmd_oid = cmd_oid;
-- ptempnode->wait_option = wait_option;
-- ptempnode->pdata_buf = pdata_buf;
-+ ptempnode->callback = NULL;
-+ ptempnode->callback_arg = (unsigned long)pdata_buf;
+ /* Allocate an Ethernet device and register it */
+- if (!(dev = alloc_etherdev(sizeof(wlan_private)))) {
++ dev = alloc_etherdev(sizeof(struct lbs_private));
++ if (!dev) {
+ lbs_pr_err("init ethX device failed\n");
+ goto done;
+ }
+ priv = dev->priv;
- lbs_deb_leave(LBS_DEB_HOST);
+- /* allocate buffer for wlan_adapter */
+- if (!(priv->adapter = kzalloc(sizeof(wlan_adapter), GFP_KERNEL))) {
+- lbs_pr_err("allocate buffer for wlan_adapter failed\n");
+- goto err_kzalloc;
+- }
+-
+- if (libertas_init_adapter(priv)) {
++ if (lbs_init_adapter(priv)) {
+ lbs_pr_err("failed to initialize adapter structure.\n");
+ goto err_init_adapter;
+ }
+@@ -1151,81 +1097,78 @@ wlan_private *libertas_add_card(void *card, struct device *dmdev)
+ priv->card = card;
+ priv->mesh_open = 0;
+ priv->infra_open = 0;
+- priv->hotplug_device = dmdev;
+
+ /* Setup the OS Interface to our functions */
+- dev->open = libertas_open;
+- dev->hard_start_xmit = libertas_pre_start_xmit;
+- dev->stop = libertas_close;
+- dev->set_mac_address = libertas_set_mac_address;
+- dev->tx_timeout = libertas_tx_timeout;
+- dev->get_stats = libertas_get_stats;
++ dev->open = lbs_dev_open;
++ dev->hard_start_xmit = lbs_hard_start_xmit;
++ dev->stop = lbs_eth_stop;
++ dev->set_mac_address = lbs_set_mac_address;
++ dev->tx_timeout = lbs_tx_timeout;
++ dev->get_stats = lbs_get_stats;
+ dev->watchdog_timeo = 5 * HZ;
+- dev->ethtool_ops = &libertas_ethtool_ops;
++ dev->ethtool_ops = &lbs_ethtool_ops;
+ #ifdef WIRELESS_EXT
+- dev->wireless_handlers = (struct iw_handler_def *)&libertas_handler_def;
++ dev->wireless_handlers = (struct iw_handler_def *)&lbs_handler_def;
+ #endif
+ dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
+- dev->set_multicast_list = libertas_set_multicast_list;
++ dev->set_multicast_list = lbs_set_multicast_list;
+
+ SET_NETDEV_DEV(dev, dmdev);
+
+ priv->rtap_net_dev = NULL;
+- if (device_create_file(dmdev, &dev_attr_libertas_rtap))
+- goto err_init_adapter;
+
+ lbs_deb_thread("Starting main thread...\n");
+ init_waitqueue_head(&priv->waitq);
+- priv->main_thread = kthread_run(libertas_thread, dev, "libertas_main");
++ priv->main_thread = kthread_run(lbs_thread, dev, "lbs_main");
+ if (IS_ERR(priv->main_thread)) {
+ lbs_deb_thread("Error creating main thread.\n");
+- goto err_kthread_run;
++ goto err_init_adapter;
+ }
+
+- priv->work_thread = create_singlethread_workqueue("libertas_worker");
+- INIT_DELAYED_WORK(&priv->assoc_work, libertas_association_worker);
+- INIT_DELAYED_WORK(&priv->scan_work, libertas_scan_worker);
+- INIT_WORK(&priv->sync_channel, libertas_sync_channel);
++ priv->work_thread = create_singlethread_workqueue("lbs_worker");
++ INIT_DELAYED_WORK(&priv->assoc_work, lbs_association_worker);
++ INIT_DELAYED_WORK(&priv->scan_work, lbs_scan_worker);
++ INIT_WORK(&priv->sync_channel, lbs_sync_channel);
+
+- goto done;
++ sprintf(priv->mesh_ssid, "mesh");
++ priv->mesh_ssid_len = 4;
+
+-err_kthread_run:
+- device_remove_file(dmdev, &dev_attr_libertas_rtap);
++ priv->wol_criteria = 0xffffffff;
++ priv->wol_gpio = 0xff;
+
+-err_init_adapter:
+- libertas_free_adapter(priv);
++ goto done;
+
+-err_kzalloc:
++err_init_adapter:
++ lbs_free_adapter(priv);
+ free_netdev(dev);
+ priv = NULL;
+
+ done:
+- lbs_deb_leave_args(LBS_DEB_NET, "priv %p", priv);
++ lbs_deb_leave_args(LBS_DEB_MAIN, "priv %p", priv);
+ return priv;
}
-@@ -1630,60 +1733,58 @@ void libertas_set_cmd_ctrl_node(wlan_private * priv,
- * pending queue. It will put fimware back to PS mode
- * if applicable.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return 0 or -1
- */
--int libertas_execute_next_command(wlan_private * priv)
-+int lbs_execute_next_command(struct lbs_private *priv)
+-EXPORT_SYMBOL_GPL(libertas_add_card);
++EXPORT_SYMBOL_GPL(lbs_add_card);
+
+
+-int libertas_remove_card(wlan_private *priv)
++int lbs_remove_card(struct lbs_private *priv)
{
- wlan_adapter *adapter = priv->adapter;
- struct cmd_ctrl_node *cmdnode = NULL;
-- struct cmd_ds_command *cmdptr;
-+ struct cmd_header *cmd;
- unsigned long flags;
- int ret = 0;
+ struct net_device *dev = priv->dev;
+ union iwreq_data wrqu;
- // Debug group is LBS_DEB_THREAD and not LBS_DEB_HOST, because the
-- // only caller to us is libertas_thread() and we get even when a
-+ // only caller to us is lbs_thread() and we get even when a
- // data packet is received
- lbs_deb_enter(LBS_DEB_THREAD);
+ lbs_deb_enter(LBS_DEB_MAIN);
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
+- libertas_remove_rtap(priv);
++ lbs_remove_mesh(priv);
++ lbs_remove_rtap(priv);
-- if (adapter->cur_cmd) {
-+ if (priv->cur_cmd) {
- lbs_pr_alert( "EXEC_NEXT_CMD: already processing command!\n");
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- ret = -1;
- goto done;
- }
+ dev = priv->dev;
+- device_remove_file(priv->hotplug_device, &dev_attr_libertas_rtap);
-- if (!list_empty(&adapter->cmdpendingq)) {
-- cmdnode = (struct cmd_ctrl_node *)
-- adapter->cmdpendingq.next;
-+ if (!list_empty(&priv->cmdpendingq)) {
-+ cmdnode = list_first_entry(&priv->cmdpendingq,
-+ struct cmd_ctrl_node, list);
+ cancel_delayed_work(&priv->scan_work);
+ cancel_delayed_work(&priv->assoc_work);
+ destroy_workqueue(priv->work_thread);
+
+- if (adapter->psmode == WLAN802_11POWERMODEMAX_PSP) {
+- adapter->psmode = WLAN802_11POWERMODECAM;
+- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
++ if (priv->psmode == LBS802_11POWERMODEMAX_PSP) {
++ priv->psmode = LBS802_11POWERMODECAM;
++ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
}
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ memset(wrqu.ap_addr.sa_data, 0xaa, ETH_ALEN);
+@@ -1233,10 +1176,10 @@ int libertas_remove_card(wlan_private *priv)
+ wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
- if (cmdnode) {
-- cmdptr = (struct cmd_ds_command *)cmdnode->bufvirtualaddr;
-+ cmd = cmdnode->cmdbuf;
+ /* Stop the thread servicing the interrupts */
+- adapter->surpriseremoved = 1;
++ priv->surpriseremoved = 1;
+ kthread_stop(priv->main_thread);
-- if (is_command_allowed_in_ps(cmdptr->command)) {
-- if ((adapter->psstate == PS_STATE_SLEEP) ||
-- (adapter->psstate == PS_STATE_PRE_SLEEP)) {
-+ if (is_command_allowed_in_ps(le16_to_cpu(cmd->command))) {
-+ if ((priv->psstate == PS_STATE_SLEEP) ||
-+ (priv->psstate == PS_STATE_PRE_SLEEP)) {
- lbs_deb_host(
- "EXEC_NEXT_CMD: cannot send cmd 0x%04x in psstate %d\n",
-- le16_to_cpu(cmdptr->command),
-- adapter->psstate);
-+ le16_to_cpu(cmd->command),
-+ priv->psstate);
- ret = -1;
- goto done;
- }
- lbs_deb_host("EXEC_NEXT_CMD: OK to send command "
-- "0x%04x in psstate %d\n",
-- le16_to_cpu(cmdptr->command),
-- adapter->psstate);
-- } else if (adapter->psstate != PS_STATE_FULL_POWER) {
-+ "0x%04x in psstate %d\n",
-+ le16_to_cpu(cmd->command), priv->psstate);
-+ } else if (priv->psstate != PS_STATE_FULL_POWER) {
- /*
- * 1. Non-PS command:
- * Queue it. set needtowakeup to TRUE if current state
-- * is SLEEP, otherwise call libertas_ps_wakeup to send Exit_PS.
-+ * is SLEEP, otherwise call lbs_ps_wakeup to send Exit_PS.
- * 2. PS command but not Exit_PS:
- * Ignore it.
- * 3. PS command Exit_PS:
-@@ -1691,18 +1792,17 @@ int libertas_execute_next_command(wlan_private * priv)
- * otherwise send this command down to firmware
- * immediately.
- */
-- if (cmdptr->command !=
-- cpu_to_le16(CMD_802_11_PS_MODE)) {
-+ if (cmd->command != cpu_to_le16(CMD_802_11_PS_MODE)) {
- /* Prepare to send Exit PS,
- * this non PS command will be sent later */
-- if ((adapter->psstate == PS_STATE_SLEEP)
-- || (adapter->psstate == PS_STATE_PRE_SLEEP)
-+ if ((priv->psstate == PS_STATE_SLEEP)
-+ || (priv->psstate == PS_STATE_PRE_SLEEP)
- ) {
- /* w/ new scheme, it will not reach here.
- since it is blocked in main_thread. */
-- adapter->needtowakeup = 1;
-+ priv->needtowakeup = 1;
- } else
-- libertas_ps_wakeup(priv, 0);
-+ lbs_ps_wakeup(priv, 0);
+- libertas_free_adapter(priv);
++ lbs_free_adapter(priv);
- ret = 0;
- goto done;
-@@ -1711,8 +1811,7 @@ int libertas_execute_next_command(wlan_private * priv)
- * PS command. Ignore it if it is not Exit_PS.
- * otherwise send it down immediately.
- */
-- struct cmd_ds_802_11_ps_mode *psm =
-- &cmdptr->params.psmode;
-+ struct cmd_ds_802_11_ps_mode *psm = (void *)&cmd[1];
+ priv->dev = NULL;
+ free_netdev(dev);
+@@ -1244,10 +1187,10 @@ int libertas_remove_card(wlan_private *priv)
+ lbs_deb_leave(LBS_DEB_MAIN);
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(libertas_remove_card);
++EXPORT_SYMBOL_GPL(lbs_remove_card);
- lbs_deb_host(
- "EXEC_NEXT_CMD: PS cmd, action 0x%02x\n",
-@@ -1721,20 +1820,24 @@ int libertas_execute_next_command(wlan_private * priv)
- cpu_to_le16(CMD_SUBCMD_EXIT_PS)) {
- lbs_deb_host(
- "EXEC_NEXT_CMD: ignore ENTER_PS cmd\n");
-- list_del((struct list_head *)cmdnode);
-- libertas_cleanup_and_insert_cmd(priv, cmdnode);
-+ list_del(&cmdnode->list);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ lbs_complete_command(priv, cmdnode, 0);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- ret = 0;
- goto done;
- }
+-int libertas_start_card(wlan_private *priv)
++int lbs_start_card(struct lbs_private *priv)
+ {
+ struct net_device *dev = priv->dev;
+ int ret = -1;
+@@ -1255,19 +1198,52 @@ int libertas_start_card(wlan_private *priv)
+ lbs_deb_enter(LBS_DEB_MAIN);
-- if ((adapter->psstate == PS_STATE_SLEEP) ||
-- (adapter->psstate == PS_STATE_PRE_SLEEP)) {
-+ if ((priv->psstate == PS_STATE_SLEEP) ||
-+ (priv->psstate == PS_STATE_PRE_SLEEP)) {
- lbs_deb_host(
- "EXEC_NEXT_CMD: ignore EXIT_PS cmd in sleep\n");
-- list_del((struct list_head *)cmdnode);
-- libertas_cleanup_and_insert_cmd(priv, cmdnode);
-- adapter->needtowakeup = 1;
-+ list_del(&cmdnode->list);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ lbs_complete_command(priv, cmdnode, 0);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ priv->needtowakeup = 1;
+ /* poke the firmware */
+- ret = wlan_setup_firmware(priv);
++ ret = lbs_setup_firmware(priv);
+ if (ret)
+ goto done;
- ret = 0;
- goto done;
-@@ -1744,33 +1847,34 @@ int libertas_execute_next_command(wlan_private * priv)
- "EXEC_NEXT_CMD: sending EXIT_PS\n");
- }
- }
-- list_del((struct list_head *)cmdnode);
-+ list_del(&cmdnode->list);
- lbs_deb_host("EXEC_NEXT_CMD: sending command 0x%04x\n",
-- le16_to_cpu(cmdptr->command));
-- DownloadcommandToStation(priv, cmdnode);
-+ le16_to_cpu(cmd->command));
-+ lbs_submit_command(priv, cmdnode);
- } else {
- /*
- * check if in power save mode, if yes, put the device back
- * to PS mode
- */
-- if ((adapter->psmode != WLAN802_11POWERMODECAM) &&
-- (adapter->psstate == PS_STATE_FULL_POWER) &&
-- (adapter->connect_status == LIBERTAS_CONNECTED)) {
-- if (adapter->secinfo.WPAenabled ||
-- adapter->secinfo.WPA2enabled) {
-+ if ((priv->psmode != LBS802_11POWERMODECAM) &&
-+ (priv->psstate == PS_STATE_FULL_POWER) &&
-+ ((priv->connect_status == LBS_CONNECTED) ||
-+ (priv->mesh_connect_status == LBS_CONNECTED))) {
-+ if (priv->secinfo.WPAenabled ||
-+ priv->secinfo.WPA2enabled) {
- /* check for valid WPA group keys */
-- if (adapter->wpa_mcast_key.len ||
-- adapter->wpa_unicast_key.len) {
-+ if (priv->wpa_mcast_key.len ||
-+ priv->wpa_unicast_key.len) {
- lbs_deb_host(
- "EXEC_NEXT_CMD: WPA enabled and GTK_SET"
- " go back to PS_SLEEP");
-- libertas_ps_sleep(priv, 0);
-+ lbs_ps_sleep(priv, 0);
- }
- } else {
- lbs_deb_host(
- "EXEC_NEXT_CMD: cmdpendingq empty, "
- "go back to PS_SLEEP");
-- libertas_ps_sleep(priv, 0);
-+ lbs_ps_sleep(priv, 0);
- }
- }
+ /* init 802.11d */
+- libertas_init_11d(priv);
++ lbs_init_11d(priv);
+
+ if (register_netdev(dev)) {
+ lbs_pr_err("cannot register ethX device\n");
+ goto done;
}
-@@ -1781,7 +1885,7 @@ done:
++ if (device_create_file(&dev->dev, &dev_attr_lbs_rtap))
++ lbs_pr_err("cannot register lbs_rtap attribute\n");
++
++ lbs_update_channel(priv);
++
++ /* 5.0.16p0 is known to NOT support any mesh */
++ if (priv->fwrelease > 0x05001000) {
++ /* Enable mesh, if supported, and work out which TLV it uses.
++ 0x100 + 291 is an unofficial value used in 5.110.20.pXX
++ 0x100 + 37 is the official value used in 5.110.21.pXX
++ but we check them in that order because 20.pXX doesn't
++ give an error -- it just silently fails. */
++
++ /* 5.110.20.pXX firmware will fail the command if the channel
++ doesn't match the existing channel. But only if the TLV
++ is correct. If the channel is wrong, _BOTH_ versions will
++ give an error to 0x100+291, and allow 0x100+37 to succeed.
++ It's just that 5.110.20.pXX will not have done anything
++ useful */
++
++ priv->mesh_tlv = 0x100 + 291;
++ if (lbs_mesh_config(priv, 1, priv->curbssparams.channel)) {
++ priv->mesh_tlv = 0x100 + 37;
++ if (lbs_mesh_config(priv, 1, priv->curbssparams.channel))
++ priv->mesh_tlv = 0;
++ }
++ if (priv->mesh_tlv) {
++ lbs_add_mesh(priv);
+
+- libertas_debugfs_init_one(priv, dev);
++ if (device_create_file(&dev->dev, &dev_attr_lbs_mesh))
++ lbs_pr_err("cannot register lbs_mesh attribute\n");
++ }
++ }
++
++ lbs_debugfs_init_one(priv, dev);
+
+ lbs_pr_info("%s: Marvell WLAN 802.11 adapter\n", dev->name);
+
+@@ -1277,10 +1253,10 @@ done:
+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
return ret;
}
+-EXPORT_SYMBOL_GPL(libertas_start_card);
++EXPORT_SYMBOL_GPL(lbs_start_card);
--void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str)
-+void lbs_send_iwevcustom_event(struct lbs_private *priv, s8 *str)
- {
- union iwreq_data iwrq;
- u8 buf[50];
-@@ -1805,10 +1909,9 @@ void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str)
- lbs_deb_leave(LBS_DEB_WEXT);
- }
--static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
-+static int sendconfirmsleep(struct lbs_private *priv, u8 *cmdptr, u16 size)
+-int libertas_stop_card(wlan_private *priv)
++int lbs_stop_card(struct lbs_private *priv)
{
- unsigned long flags;
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
-
- lbs_deb_enter(LBS_DEB_HOST);
-@@ -1819,26 +1922,25 @@ static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
- lbs_deb_hex(LBS_DEB_HOST, "sleep confirm command", cmdptr, size);
+ struct net_device *dev = priv->dev;
+ int ret = -1;
+@@ -1292,31 +1268,35 @@ int libertas_stop_card(wlan_private *priv)
+ netif_stop_queue(priv->dev);
+ netif_carrier_off(priv->dev);
- ret = priv->hw_host_to_card(priv, MVMS_CMD, cmdptr, size);
-- priv->dnld_sent = DNLD_RES_RECEIVED;
+- libertas_debugfs_remove_one(priv);
++ lbs_debugfs_remove_one(priv);
++ device_remove_file(&dev->dev, &dev_attr_lbs_rtap);
++ if (priv->mesh_tlv)
++ device_remove_file(&dev->dev, &dev_attr_lbs_mesh);
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->intcounter || adapter->currenttxskb)
+ /* Flush pending command nodes */
+- spin_lock_irqsave(&priv->adapter->driver_lock, flags);
+- list_for_each_entry(cmdnode, &priv->adapter->cmdpendingq, list) {
+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (priv->intcounter || priv->currenttxskb)
- lbs_deb_host("SEND_SLEEPC_CMD: intcounter %d, currenttxskb %p\n",
-- adapter->intcounter, adapter->currenttxskb);
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ priv->intcounter, priv->currenttxskb);
++ list_for_each_entry(cmdnode, &priv->cmdpendingq, list) {
++ cmdnode->result = -ENOENT;
+ cmdnode->cmdwaitqwoken = 1;
+ wake_up_interruptible(&cmdnode->cmdwait_q);
+ }
+- spin_unlock_irqrestore(&priv->adapter->driver_lock, flags);
+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- if (ret) {
- lbs_pr_alert(
- "SEND_SLEEPC_CMD: Host to Card failed for Confirm Sleep\n");
- } else {
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (!adapter->intcounter) {
-- adapter->psstate = PS_STATE_SLEEP;
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (!priv->intcounter) {
-+ priv->psstate = PS_STATE_SLEEP;
- } else {
- lbs_deb_host("SEND_SLEEPC_CMD: after sent, intcounter %d\n",
-- adapter->intcounter);
-+ priv->intcounter);
- }
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ unregister_netdev(dev);
- lbs_deb_host("SEND_SLEEPC_CMD: sent confirm sleep\n");
- }
-@@ -1847,7 +1949,7 @@ static int sendconfirmsleep(wlan_private * priv, u8 * cmdptr, u16 size)
+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
return ret;
}
+-EXPORT_SYMBOL_GPL(libertas_stop_card);
++EXPORT_SYMBOL_GPL(lbs_stop_card);
--void libertas_ps_sleep(wlan_private * priv, int wait_option)
-+void lbs_ps_sleep(struct lbs_private *priv, int wait_option)
- {
- lbs_deb_enter(LBS_DEB_HOST);
-
-@@ -1856,7 +1958,7 @@ void libertas_ps_sleep(wlan_private * priv, int wait_option)
- * Remove this check if it is to be supported in IBSS mode also
- */
-
-- libertas_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
-+ lbs_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
- CMD_SUBCMD_ENTER_PS, wait_option, 0, NULL);
- lbs_deb_leave(LBS_DEB_HOST);
-@@ -1865,19 +1967,19 @@ void libertas_ps_sleep(wlan_private * priv, int wait_option)
/**
- * @brief This function sends Exit_PS command to firmware.
+ * @brief This function adds mshX interface
*
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param wait_option wait response or not
- * @return n/a
+- * @param priv A pointer to the wlan_private structure
++ * @param priv A pointer to the struct lbs_private structure
+ * @return 0 if successful, -X otherwise
*/
--void libertas_ps_wakeup(wlan_private * priv, int wait_option)
-+void lbs_ps_wakeup(struct lbs_private *priv, int wait_option)
+-int libertas_add_mesh(wlan_private *priv, struct device *dev)
++static int lbs_add_mesh(struct lbs_private *priv)
{
- __le32 Localpsmode;
+ struct net_device *mesh_dev = NULL;
+ int ret = 0;
+@@ -1332,16 +1312,16 @@ int libertas_add_mesh(wlan_private *priv, struct device *dev)
+ mesh_dev->priv = priv;
+ priv->mesh_dev = mesh_dev;
- lbs_deb_enter(LBS_DEB_HOST);
+- mesh_dev->open = libertas_mesh_open;
+- mesh_dev->hard_start_xmit = libertas_mesh_pre_start_xmit;
+- mesh_dev->stop = libertas_mesh_close;
+- mesh_dev->get_stats = libertas_get_stats;
+- mesh_dev->set_mac_address = libertas_set_mac_address;
+- mesh_dev->ethtool_ops = &libertas_ethtool_ops;
++ mesh_dev->open = lbs_dev_open;
++ mesh_dev->hard_start_xmit = lbs_hard_start_xmit;
++ mesh_dev->stop = lbs_mesh_stop;
++ mesh_dev->get_stats = lbs_get_stats;
++ mesh_dev->set_mac_address = lbs_set_mac_address;
++ mesh_dev->ethtool_ops = &lbs_ethtool_ops;
+ memcpy(mesh_dev->dev_addr, priv->dev->dev_addr,
+ sizeof(priv->dev->dev_addr));
-- Localpsmode = cpu_to_le32(WLAN802_11POWERMODECAM);
-+ Localpsmode = cpu_to_le32(LBS802_11POWERMODECAM);
+- SET_NETDEV_DEV(priv->mesh_dev, dev);
++ SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
-- libertas_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
-+ lbs_prepare_and_send_command(priv, CMD_802_11_PS_MODE,
- CMD_SUBCMD_EXIT_PS,
- wait_option, 0, &Localpsmode);
+ #ifdef WIRELESS_EXT
+ mesh_dev->wireless_handlers = (struct iw_handler_def *)&mesh_handler_def;
+@@ -1353,7 +1333,7 @@ int libertas_add_mesh(wlan_private *priv, struct device *dev)
+ goto err_free;
+ }
-@@ -1888,37 +1990,36 @@ void libertas_ps_wakeup(wlan_private * priv, int wait_option)
- * @brief This function checks condition and prepares to
- * send sleep confirm command to firmware if ok.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param psmode Power Saving mode
- * @return n/a
- */
--void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode)
-+void lbs_ps_confirm_sleep(struct lbs_private *priv, u16 psmode)
- {
- unsigned long flags =0;
-- wlan_adapter *adapter = priv->adapter;
- u8 allowed = 1;
+- ret = sysfs_create_group(&(mesh_dev->dev.kobj), &libertas_mesh_attr_group);
++ ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
+ if (ret)
+ goto err_unregister;
- lbs_deb_enter(LBS_DEB_HOST);
+@@ -1371,33 +1351,28 @@ done:
+ lbs_deb_leave_args(LBS_DEB_MESH, "ret %d", ret);
+ return ret;
+ }
+-EXPORT_SYMBOL_GPL(libertas_add_mesh);
++EXPORT_SYMBOL_GPL(lbs_add_mesh);
- if (priv->dnld_sent) {
- allowed = 0;
-- lbs_deb_host("dnld_sent was set");
-+ lbs_deb_host("dnld_sent was set\n");
- }
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->cur_cmd) {
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (priv->cur_cmd) {
- allowed = 0;
-- lbs_deb_host("cur_cmd was set");
-+ lbs_deb_host("cur_cmd was set\n");
- }
-- if (adapter->intcounter > 0) {
-+ if (priv->intcounter > 0) {
- allowed = 0;
-- lbs_deb_host("intcounter %d", adapter->intcounter);
-+ lbs_deb_host("intcounter %d\n", priv->intcounter);
- }
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+-void libertas_remove_mesh(wlan_private *priv)
++static void lbs_remove_mesh(struct lbs_private *priv)
+ {
+ struct net_device *mesh_dev;
- if (allowed) {
-- lbs_deb_host("sending libertas_ps_confirm_sleep\n");
-- sendconfirmsleep(priv, (u8 *) & adapter->libertas_ps_confirm_sleep,
-+ lbs_deb_host("sending lbs_ps_confirm_sleep\n");
-+ sendconfirmsleep(priv, (u8 *) & priv->lbs_ps_confirm_sleep,
- sizeof(struct PS_CMD_ConfirmSleep));
- } else {
- lbs_deb_host("sleep confirm has been delayed\n");
-@@ -1926,3 +2027,123 @@ void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode)
+- lbs_deb_enter(LBS_DEB_MAIN);
+-
+- if (!priv)
+- goto out;
- lbs_deb_leave(LBS_DEB_HOST);
+ mesh_dev = priv->mesh_dev;
++ if (!mesh_dev)
++ return;
+
++ lbs_deb_enter(LBS_DEB_MESH);
+ netif_stop_queue(mesh_dev);
+ netif_carrier_off(priv->mesh_dev);
+-
+- sysfs_remove_group(&(mesh_dev->dev.kobj), &libertas_mesh_attr_group);
++ sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
+ unregister_netdev(mesh_dev);
+-
+- priv->mesh_dev = NULL ;
++ priv->mesh_dev = NULL;
+ free_netdev(mesh_dev);
+-
+-out:
+- lbs_deb_leave(LBS_DEB_MAIN);
++ lbs_deb_leave(LBS_DEB_MESH);
}
-+
-+
-+/**
-+ * @brief Simple callback that copies response back into command
-+ *
-+ * @param priv A pointer to struct lbs_private structure
-+ * @param extra A pointer to the original command structure for which
-+ * 'resp' is a response
-+ * @param resp A pointer to the command response
-+ *
-+ * @return 0 on success, error on failure
-+ */
-+int lbs_cmd_copyback(struct lbs_private *priv, unsigned long extra,
-+ struct cmd_header *resp)
-+{
-+ struct cmd_header *buf = (void *)extra;
-+ uint16_t copy_len;
-+
-+ lbs_deb_enter(LBS_DEB_CMD);
-+
-+ copy_len = min(le16_to_cpu(buf->size), le16_to_cpu(resp->size));
-+ lbs_deb_cmd("Copying back %u bytes; command response was %u bytes, "
-+ "copy back buffer was %u bytes\n", copy_len,
-+ le16_to_cpu(resp->size), le16_to_cpu(buf->size));
-+ memcpy(buf, resp, copy_len);
-+
-+ lbs_deb_leave(LBS_DEB_CMD);
-+ return 0;
-+}
-+EXPORT_SYMBOL_GPL(lbs_cmd_copyback);
-+
-+struct cmd_ctrl_node *__lbs_cmd_async(struct lbs_private *priv, uint16_t command,
-+ struct cmd_header *in_cmd, int in_cmd_size,
-+ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
-+ unsigned long callback_arg)
-+{
-+ struct cmd_ctrl_node *cmdnode;
-+
-+ lbs_deb_enter(LBS_DEB_HOST);
-+
-+ if (priv->surpriseremoved) {
-+ lbs_deb_host("PREP_CMD: card removed\n");
-+ cmdnode = ERR_PTR(-ENOENT);
-+ goto done;
-+ }
-+
-+ cmdnode = lbs_get_cmd_ctrl_node(priv);
-+ if (cmdnode == NULL) {
-+ lbs_deb_host("PREP_CMD: cmdnode is NULL\n");
-+
-+ /* Wake up main thread to execute next command */
-+ wake_up_interruptible(&priv->waitq);
-+ cmdnode = ERR_PTR(-ENOBUFS);
-+ goto done;
-+ }
-+
-+ cmdnode->callback = callback;
-+ cmdnode->callback_arg = callback_arg;
-+
-+ /* Copy the incoming command to the buffer */
-+ memcpy(cmdnode->cmdbuf, in_cmd, in_cmd_size);
-+
-+ /* Set sequence number, clean result, move to buffer */
-+ priv->seqnum++;
-+ cmdnode->cmdbuf->command = cpu_to_le16(command);
-+ cmdnode->cmdbuf->size = cpu_to_le16(in_cmd_size);
-+ cmdnode->cmdbuf->seqnum = cpu_to_le16(priv->seqnum);
-+ cmdnode->cmdbuf->result = 0;
-+
-+ lbs_deb_host("PREP_CMD: command 0x%04x\n", command);
-+
-+ /* here was the big old switch() statement, which is now obsolete,
-+ * because the caller of lbs_cmd() sets up all of *cmd for us. */
-+
-+ cmdnode->cmdwaitqwoken = 0;
-+ lbs_queue_cmd(priv, cmdnode);
-+ wake_up_interruptible(&priv->waitq);
-+
-+ done:
-+ lbs_deb_leave_args(LBS_DEB_HOST, "ret %p", cmdnode);
-+ return cmdnode;
-+}
-+
-+int __lbs_cmd(struct lbs_private *priv, uint16_t command,
-+ struct cmd_header *in_cmd, int in_cmd_size,
-+ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
-+ unsigned long callback_arg)
-+{
-+ struct cmd_ctrl_node *cmdnode;
-+ unsigned long flags;
-+ int ret = 0;
-+
-+ lbs_deb_enter(LBS_DEB_HOST);
-+
-+ cmdnode = __lbs_cmd_async(priv, command, in_cmd, in_cmd_size,
-+ callback, callback_arg);
-+ if (IS_ERR(cmdnode)) {
-+ ret = PTR_ERR(cmdnode);
-+ goto done;
-+ }
-+
-+ might_sleep();
-+ wait_event_interruptible(cmdnode->cmdwait_q, cmdnode->cmdwaitqwoken);
-+
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ ret = cmdnode->result;
-+ if (ret)
-+ lbs_pr_info("PREP_CMD: command 0x%04x failed: %d\n",
-+ command, ret);
-+
-+ __lbs_cleanup_and_insert_cmd(priv, cmdnode);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+
-+done:
-+ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
-+ return ret;
-+}
-+EXPORT_SYMBOL_GPL(__lbs_cmd);
-+
-+
-diff --git a/drivers/net/wireless/libertas/cmd.h b/drivers/net/wireless/libertas/cmd.h
-new file mode 100644
-index 0000000..b9ab85c
---- /dev/null
-+++ b/drivers/net/wireless/libertas/cmd.h
-@@ -0,0 +1,61 @@
-+/* Copyright (C) 2007, Red Hat, Inc. */
-+
-+#ifndef _LBS_CMD_H_
-+#define _LBS_CMD_H_
-+
-+#include "hostcmd.h"
-+#include "dev.h"
-+
-+/* lbs_cmd() infers the size of the buffer to copy data back into, from
-+ the size of the target of the pointer. Since the command to be sent
-+ may often be smaller, that size is set in cmd->size by the caller.*/
-+#define lbs_cmd(priv, cmdnr, cmd, cb, cb_arg) ({ \
-+ uint16_t __sz = le16_to_cpu((cmd)->hdr.size); \
-+ (cmd)->hdr.size = cpu_to_le16(sizeof(*(cmd))); \
-+ __lbs_cmd(priv, cmdnr, &(cmd)->hdr, __sz, cb, cb_arg); \
-+})
-+
-+#define lbs_cmd_with_response(priv, cmdnr, cmd) \
-+ lbs_cmd(priv, cmdnr, cmd, lbs_cmd_copyback, (unsigned long) (cmd))
-+
-+/* __lbs_cmd() will free the cmdnode and return success/failure.
-+ __lbs_cmd_async() requires that the callback free the cmdnode */
-+struct cmd_ctrl_node *__lbs_cmd_async(struct lbs_private *priv, uint16_t command,
-+ struct cmd_header *in_cmd, int in_cmd_size,
-+ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
-+ unsigned long callback_arg);
-+int __lbs_cmd(struct lbs_private *priv, uint16_t command,
-+ struct cmd_header *in_cmd, int in_cmd_size,
-+ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *),
-+ unsigned long callback_arg);
-+
-+int lbs_cmd_copyback(struct lbs_private *priv, unsigned long extra,
-+ struct cmd_header *resp);
-+
-+int lbs_update_hw_spec(struct lbs_private *priv);
-+
-+int lbs_mesh_access(struct lbs_private *priv, uint16_t cmd_action,
-+ struct cmd_ds_mesh_access *cmd);
-+
-+int lbs_get_data_rate(struct lbs_private *priv);
-+int lbs_set_data_rate(struct lbs_private *priv, u8 rate);
-+
-+int lbs_get_channel(struct lbs_private *priv);
-+int lbs_set_channel(struct lbs_private *priv, u8 channel);
-+
-+int lbs_mesh_config(struct lbs_private *priv, uint16_t enable, uint16_t chan);
-+
-+int lbs_host_sleep_cfg(struct lbs_private *priv, uint32_t criteria);
-+int lbs_suspend(struct lbs_private *priv);
-+int lbs_resume(struct lbs_private *priv);
-+
-+int lbs_cmd_802_11_inactivity_timeout(struct lbs_private *priv,
-+ uint16_t cmd_action, uint16_t *timeout);
-+int lbs_cmd_802_11_sleep_params(struct lbs_private *priv, uint16_t cmd_action,
-+ struct sleep_params *sp);
-+int lbs_cmd_802_11_set_wep(struct lbs_private *priv, uint16_t cmd_action,
-+ struct assoc_request *assoc);
-+int lbs_cmd_802_11_enable_rsn(struct lbs_private *priv, uint16_t cmd_action,
-+ uint16_t *enable);
-+
-+#endif /* _LBS_CMD_H */
-diff --git a/drivers/net/wireless/libertas/cmdresp.c b/drivers/net/wireless/libertas/cmdresp.c
-index 8f90892..159216a 100644
---- a/drivers/net/wireless/libertas/cmdresp.c
-+++ b/drivers/net/wireless/libertas/cmdresp.c
-@@ -20,18 +20,17 @@
- * reports disconnect to upper layer, clean tx/rx packets,
- * reset link state etc.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return n/a
+-EXPORT_SYMBOL_GPL(libertas_remove_mesh);
++EXPORT_SYMBOL_GPL(lbs_remove_mesh);
+
+ /**
+ * @brief This function finds the CFP in
+@@ -1408,7 +1383,7 @@ EXPORT_SYMBOL_GPL(libertas_remove_mesh);
+ * @param cfp_no A pointer to CFP number
+ * @return A pointer to CFP
*/
--void libertas_mac_event_disconnected(wlan_private * priv)
-+void lbs_mac_event_disconnected(struct lbs_private *priv)
+-struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band, int *cfp_no)
++struct chan_freq_power *lbs_get_region_cfp_table(u8 region, u8 band, int *cfp_no)
+ {
+ int i, end;
+
+@@ -1430,9 +1405,8 @@ struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band, int *c
+ return NULL;
+ }
+
+-int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band)
++int lbs_set_regiontable(struct lbs_private *priv, u8 region, u8 band)
{
- wlan_adapter *adapter = priv->adapter;
- union iwreq_data wrqu;
+ int ret = 0;
+ int i = 0;
-- if (adapter->connect_status != LIBERTAS_CONNECTED)
-+ if (priv->connect_status != LBS_CONNECTED)
- return;
+@@ -1441,24 +1415,22 @@ int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band)
-- lbs_deb_enter(LBS_DEB_CMD);
-+ lbs_deb_enter(LBS_DEB_ASSOC);
+ lbs_deb_enter(LBS_DEB_MAIN);
- memset(wrqu.ap_addr.sa_data, 0x00, ETH_ALEN);
- wrqu.ap_addr.sa_family = ARPHRD_ETHER;
-@@ -44,40 +43,36 @@ void libertas_mac_event_disconnected(wlan_private * priv)
- msleep_interruptible(1000);
- wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+- memset(adapter->region_channel, 0, sizeof(adapter->region_channel));
++ memset(priv->region_channel, 0, sizeof(priv->region_channel));
-- /* Free Tx and Rx packets */
-- kfree_skb(priv->adapter->currenttxskb);
-- priv->adapter->currenttxskb = NULL;
+- {
+- cfp = libertas_get_region_cfp_table(region, band, &cfp_no);
+- if (cfp != NULL) {
+- adapter->region_channel[i].nrcfp = cfp_no;
+- adapter->region_channel[i].CFP = cfp;
+- } else {
+- lbs_deb_main("wrong region code %#x in band B/G\n",
+- region);
+- ret = -1;
+- goto out;
+- }
+- adapter->region_channel[i].valid = 1;
+- adapter->region_channel[i].region = region;
+- adapter->region_channel[i].band = band;
+- i++;
++ cfp = lbs_get_region_cfp_table(region, band, &cfp_no);
++ if (cfp != NULL) {
++ priv->region_channel[i].nrcfp = cfp_no;
++ priv->region_channel[i].CFP = cfp;
++ } else {
++ lbs_deb_main("wrong region code %#x in band B/G\n",
++ region);
++ ret = -1;
++ goto out;
+ }
++ priv->region_channel[i].valid = 1;
++ priv->region_channel[i].region = region;
++ priv->region_channel[i].band = band;
++ i++;
+ out:
+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
+ return ret;
+@@ -1472,58 +1444,46 @@ out:
+ * @param dev A pointer to net_device structure
+ * @return n/a
+ */
+-void libertas_interrupt(struct net_device *dev)
++void lbs_interrupt(struct lbs_private *priv)
+ {
+- wlan_private *priv = dev->priv;
-
- /* report disconnect to upper layer */
- netif_stop_queue(priv->dev);
- netif_carrier_off(priv->dev);
+ lbs_deb_enter(LBS_DEB_THREAD);
-+ /* Free Tx and Rx packets */
-+ kfree_skb(priv->currenttxskb);
-+ priv->currenttxskb = NULL;
-+ priv->tx_pending_len = 0;
-+
- /* reset SNR/NF/RSSI values */
-- memset(adapter->SNR, 0x00, sizeof(adapter->SNR));
-- memset(adapter->NF, 0x00, sizeof(adapter->NF));
-- memset(adapter->RSSI, 0x00, sizeof(adapter->RSSI));
-- memset(adapter->rawSNR, 0x00, sizeof(adapter->rawSNR));
-- memset(adapter->rawNF, 0x00, sizeof(adapter->rawNF));
-- adapter->nextSNRNF = 0;
-- adapter->numSNRNF = 0;
-- lbs_deb_cmd("current SSID '%s', length %u\n",
-- escape_essid(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len),
-- adapter->curbssparams.ssid_len);
+- lbs_deb_thread("libertas_interrupt: intcounter=%d\n",
+- priv->adapter->intcounter);
-
-- adapter->connect_status = LIBERTAS_DISCONNECTED;
-+ memset(priv->SNR, 0x00, sizeof(priv->SNR));
-+ memset(priv->NF, 0x00, sizeof(priv->NF));
-+ memset(priv->RSSI, 0x00, sizeof(priv->RSSI));
-+ memset(priv->rawSNR, 0x00, sizeof(priv->rawSNR));
-+ memset(priv->rawNF, 0x00, sizeof(priv->rawNF));
-+ priv->nextSNRNF = 0;
-+ priv->numSNRNF = 0;
-+ priv->connect_status = LBS_DISCONNECTED;
-
- /* Clear out associated SSID and BSSID since connection is
- * no longer valid.
- */
-- memset(&adapter->curbssparams.bssid, 0, ETH_ALEN);
-- memset(&adapter->curbssparams.ssid, 0, IW_ESSID_MAX_SIZE);
-- adapter->curbssparams.ssid_len = 0;
-+ memset(&priv->curbssparams.bssid, 0, ETH_ALEN);
-+ memset(&priv->curbssparams.ssid, 0, IW_ESSID_MAX_SIZE);
-+ priv->curbssparams.ssid_len = 0;
+- priv->adapter->intcounter++;
+-
+- if (priv->adapter->psstate == PS_STATE_SLEEP) {
+- priv->adapter->psstate = PS_STATE_AWAKE;
+- netif_wake_queue(dev);
+- if (priv->mesh_dev)
+- netif_wake_queue(priv->mesh_dev);
+- }
+-
++ lbs_deb_thread("lbs_interrupt: intcounter=%d\n", priv->intcounter);
++ priv->intcounter++;
++ if (priv->psstate == PS_STATE_SLEEP)
++ priv->psstate = PS_STATE_AWAKE;
+ wake_up_interruptible(&priv->waitq);
-- if (adapter->psstate != PS_STATE_FULL_POWER) {
-+ if (priv->psstate != PS_STATE_FULL_POWER) {
- /* make firmware to exit PS mode */
- lbs_deb_cmd("disconnected, so exit PS mode\n");
-- libertas_ps_wakeup(priv, 0);
-+ lbs_ps_wakeup(priv, 0);
- }
- lbs_deb_leave(LBS_DEB_CMD);
+ lbs_deb_leave(LBS_DEB_THREAD);
}
-@@ -85,11 +80,11 @@ void libertas_mac_event_disconnected(wlan_private * priv)
- /**
- * @brief This function handles MIC failure event.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @para event the event id
- * @return n/a
- */
--static void handle_mic_failureevent(wlan_private * priv, u32 event)
-+static void handle_mic_failureevent(struct lbs_private *priv, u32 event)
+-EXPORT_SYMBOL_GPL(libertas_interrupt);
++EXPORT_SYMBOL_GPL(lbs_interrupt);
+
+-int libertas_reset_device(wlan_private *priv)
++int lbs_reset_device(struct lbs_private *priv)
{
- char buf[50];
+ int ret;
-@@ -104,15 +99,14 @@ static void handle_mic_failureevent(wlan_private * priv, u32 event)
- strcat(buf, "multicast ");
- }
+ lbs_deb_enter(LBS_DEB_MAIN);
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_RESET,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_RESET,
+ CMD_ACT_HALT, 0, 0, NULL);
+ msleep_interruptible(10);
-- libertas_send_iwevcustom_event(priv, buf);
-+ lbs_send_iwevcustom_event(priv, buf);
- lbs_deb_leave(LBS_DEB_CMD);
+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
+ return ret;
}
+-EXPORT_SYMBOL_GPL(libertas_reset_device);
++EXPORT_SYMBOL_GPL(lbs_reset_device);
--static int wlan_ret_reg_access(wlan_private * priv,
-+static int lbs_ret_reg_access(struct lbs_private *priv,
- u16 type, struct cmd_ds_command *resp)
+-static int libertas_init_module(void)
++static int __init lbs_init_module(void)
{
- int ret = 0;
-- wlan_adapter *adapter = priv->adapter;
+ lbs_deb_enter(LBS_DEB_MAIN);
+- libertas_debugfs_init();
++ lbs_debugfs_init();
+ lbs_deb_leave(LBS_DEB_MAIN);
+ return 0;
+ }
- lbs_deb_enter(LBS_DEB_CMD);
+-static void libertas_exit_module(void)
++static void __exit lbs_exit_module(void)
+ {
+ lbs_deb_enter(LBS_DEB_MAIN);
+-
+- libertas_debugfs_remove();
+-
++ lbs_debugfs_remove();
+ lbs_deb_leave(LBS_DEB_MAIN);
+ }
-@@ -121,8 +115,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
- {
- struct cmd_ds_mac_reg_access *reg = &resp->params.macreg;
+@@ -1531,79 +1491,89 @@ static void libertas_exit_module(void)
+ * rtap interface support fuctions
+ */
-- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-- adapter->offsetvalue.value = le32_to_cpu(reg->value);
-+ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-+ priv->offsetvalue.value = le32_to_cpu(reg->value);
- break;
- }
+-static int libertas_rtap_open(struct net_device *dev)
++static int lbs_rtap_open(struct net_device *dev)
+ {
+- netif_carrier_off(dev);
+- netif_stop_queue(dev);
+- return 0;
++ /* Yes, _stop_ the queue. Because we don't support injection */
++ lbs_deb_enter(LBS_DEB_MAIN);
++ netif_carrier_off(dev);
++ netif_stop_queue(dev);
++ lbs_deb_leave(LBS_DEB_LEAVE);
++ return 0;
+ }
-@@ -130,8 +124,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
- {
- struct cmd_ds_bbp_reg_access *reg = &resp->params.bbpreg;
+-static int libertas_rtap_stop(struct net_device *dev)
++static int lbs_rtap_stop(struct net_device *dev)
+ {
+- return 0;
++ lbs_deb_enter(LBS_DEB_MAIN);
++ lbs_deb_leave(LBS_DEB_MAIN);
++ return 0;
+ }
-- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-- adapter->offsetvalue.value = reg->value;
-+ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-+ priv->offsetvalue.value = reg->value;
- break;
- }
+-static int libertas_rtap_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static int lbs_rtap_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+- netif_stop_queue(dev);
+- return -EOPNOTSUPP;
++ netif_stop_queue(dev);
++ return NETDEV_TX_BUSY;
+ }
-@@ -139,8 +133,8 @@ static int wlan_ret_reg_access(wlan_private * priv,
- {
- struct cmd_ds_rf_reg_access *reg = &resp->params.rfreg;
+-static struct net_device_stats *libertas_rtap_get_stats(struct net_device *dev)
++static struct net_device_stats *lbs_rtap_get_stats(struct net_device *dev)
+ {
+- wlan_private *priv = dev->priv;
+- return &priv->ieee->stats;
++ struct lbs_private *priv = dev->priv;
++ lbs_deb_enter(LBS_DEB_NET);
++ return &priv->stats;
+ }
-- adapter->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-- adapter->offsetvalue.value = reg->value;
-+ priv->offsetvalue.offset = (u32)le16_to_cpu(reg->offset);
-+ priv->offsetvalue.value = reg->value;
- break;
- }
-@@ -152,112 +146,23 @@ static int wlan_ret_reg_access(wlan_private * priv,
- return ret;
+-void libertas_remove_rtap(wlan_private *priv)
++static void lbs_remove_rtap(struct lbs_private *priv)
+ {
++ lbs_deb_enter(LBS_DEB_MAIN);
+ if (priv->rtap_net_dev == NULL)
+ return;
+ unregister_netdev(priv->rtap_net_dev);
+- free_ieee80211(priv->rtap_net_dev);
++ free_netdev(priv->rtap_net_dev);
+ priv->rtap_net_dev = NULL;
++ lbs_deb_leave(LBS_DEB_MAIN);
}
--static int wlan_ret_get_hw_spec(wlan_private * priv,
-- struct cmd_ds_command *resp)
--{
-- u32 i;
-- struct cmd_ds_get_hw_spec *hwspec = &resp->params.hwspec;
-- wlan_adapter *adapter = priv->adapter;
-- int ret = 0;
-- DECLARE_MAC_BUF(mac);
--
-- lbs_deb_enter(LBS_DEB_CMD);
--
-- adapter->fwcapinfo = le32_to_cpu(hwspec->fwcapinfo);
--
-- memcpy(adapter->fwreleasenumber, hwspec->fwreleasenumber, 4);
--
-- lbs_deb_cmd("GET_HW_SPEC: firmware release %u.%u.%up%u\n",
-- adapter->fwreleasenumber[2], adapter->fwreleasenumber[1],
-- adapter->fwreleasenumber[0], adapter->fwreleasenumber[3]);
-- lbs_deb_cmd("GET_HW_SPEC: MAC addr %s\n",
-- print_mac(mac, hwspec->permanentaddr));
-- lbs_deb_cmd("GET_HW_SPEC: hardware interface 0x%x, hardware spec 0x%04x\n",
-- hwspec->hwifversion, hwspec->version);
--
-- /* Clamp region code to 8-bit since FW spec indicates that it should
-- * only ever be 8-bit, even though the field size is 16-bit. Some firmware
-- * returns non-zero high 8 bits here.
-- */
-- adapter->regioncode = le16_to_cpu(hwspec->regioncode) & 0xFF;
--
-- for (i = 0; i < MRVDRV_MAX_REGION_CODE; i++) {
-- /* use the region code to search for the index */
-- if (adapter->regioncode == libertas_region_code_to_index[i]) {
-- break;
-- }
-- }
--
-- /* if it's unidentified region code, use the default (USA) */
-- if (i >= MRVDRV_MAX_REGION_CODE) {
-- adapter->regioncode = 0x10;
-- lbs_pr_info("unidentified region code; using the default (USA)\n");
-- }
--
-- if (adapter->current_addr[0] == 0xff)
-- memmove(adapter->current_addr, hwspec->permanentaddr, ETH_ALEN);
--
-- memcpy(priv->dev->dev_addr, adapter->current_addr, ETH_ALEN);
-- if (priv->mesh_dev)
-- memcpy(priv->mesh_dev->dev_addr, adapter->current_addr, ETH_ALEN);
--
-- if (libertas_set_regiontable(priv, adapter->regioncode, 0)) {
-- ret = -1;
-- goto done;
-- }
--
-- if (libertas_set_universaltable(priv, 0)) {
-- ret = -1;
-- goto done;
-- }
--
--done:
-- lbs_deb_enter_args(LBS_DEB_CMD, "ret %d", ret);
-- return ret;
--}
--
--static int wlan_ret_802_11_sleep_params(wlan_private * priv,
-- struct cmd_ds_command *resp)
--{
-- struct cmd_ds_802_11_sleep_params *sp = &resp->params.sleep_params;
-- wlan_adapter *adapter = priv->adapter;
+-int libertas_add_rtap(wlan_private *priv)
++static int lbs_add_rtap(struct lbs_private *priv)
+ {
+- int rc = 0;
-
-- lbs_deb_enter(LBS_DEB_CMD);
+- if (priv->rtap_net_dev)
+- return -EPERM;
-
-- lbs_deb_cmd("error 0x%x, offset 0x%x, stabletime 0x%x, calcontrol 0x%x "
-- "extsleepclk 0x%x\n", le16_to_cpu(sp->error),
-- le16_to_cpu(sp->offset), le16_to_cpu(sp->stabletime),
-- sp->calcontrol, sp->externalsleepclk);
+- priv->rtap_net_dev = alloc_ieee80211(0);
+- if (priv->rtap_net_dev == NULL)
+- return -ENOMEM;
-
-- adapter->sp.sp_error = le16_to_cpu(sp->error);
-- adapter->sp.sp_offset = le16_to_cpu(sp->offset);
-- adapter->sp.sp_stabletime = le16_to_cpu(sp->stabletime);
-- adapter->sp.sp_calcontrol = sp->calcontrol;
-- adapter->sp.sp_extsleepclk = sp->externalsleepclk;
-- adapter->sp.sp_reserved = le16_to_cpu(sp->reserved);
-
-- lbs_deb_enter(LBS_DEB_CMD);
+- priv->ieee = netdev_priv(priv->rtap_net_dev);
++ int ret = 0;
++ struct net_device *rtap_dev;
+
+- strcpy(priv->rtap_net_dev->name, "rtap%d");
++ lbs_deb_enter(LBS_DEB_MAIN);
++ if (priv->rtap_net_dev) {
++ ret = -EPERM;
++ goto out;
++ }
+
+- priv->rtap_net_dev->type = ARPHRD_IEEE80211_RADIOTAP;
+- priv->rtap_net_dev->open = libertas_rtap_open;
+- priv->rtap_net_dev->stop = libertas_rtap_stop;
+- priv->rtap_net_dev->get_stats = libertas_rtap_get_stats;
+- priv->rtap_net_dev->hard_start_xmit = libertas_rtap_hard_start_xmit;
+- priv->rtap_net_dev->set_multicast_list = libertas_set_multicast_list;
+- priv->rtap_net_dev->priv = priv;
++ rtap_dev = alloc_netdev(0, "rtap%d", ether_setup);
++ if (rtap_dev == NULL) {
++ ret = -ENOMEM;
++ goto out;
++ }
+
+- priv->ieee->iw_mode = IW_MODE_MONITOR;
++ memcpy(rtap_dev->dev_addr, priv->current_addr, ETH_ALEN);
++ rtap_dev->type = ARPHRD_IEEE80211_RADIOTAP;
++ rtap_dev->open = lbs_rtap_open;
++ rtap_dev->stop = lbs_rtap_stop;
++ rtap_dev->get_stats = lbs_rtap_get_stats;
++ rtap_dev->hard_start_xmit = lbs_rtap_hard_start_xmit;
++ rtap_dev->set_multicast_list = lbs_set_multicast_list;
++ rtap_dev->priv = priv;
+
+- rc = register_netdev(priv->rtap_net_dev);
+- if (rc) {
+- free_ieee80211(priv->rtap_net_dev);
+- priv->rtap_net_dev = NULL;
+- return rc;
++ ret = register_netdev(rtap_dev);
++ if (ret) {
++ free_netdev(rtap_dev);
++ goto out;
+ }
++ priv->rtap_net_dev = rtap_dev;
+
- return 0;
--}
--
--static int wlan_ret_802_11_stat(wlan_private * priv,
-+static int lbs_ret_802_11_stat(struct lbs_private *priv,
- struct cmd_ds_command *resp)
++out:
++ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
++ return ret;
+ }
+
+
+-module_init(libertas_init_module);
+-module_exit(libertas_exit_module);
++module_init(lbs_init_module);
++module_exit(lbs_exit_module);
+
+ MODULE_DESCRIPTION("Libertas WLAN Driver Library");
+ MODULE_AUTHOR("Marvell International Ltd.");
+diff --git a/drivers/net/wireless/libertas/rx.c b/drivers/net/wireless/libertas/rx.c
+index 0420e5b..149557a 100644
+--- a/drivers/net/wireless/libertas/rx.c
++++ b/drivers/net/wireless/libertas/rx.c
+@@ -35,134 +35,114 @@ struct rx80211packethdr {
+ void *eth80211_hdr;
+ } __attribute__ ((packed));
+
+-static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb);
++static int process_rxed_802_11_packet(struct lbs_private *priv,
++ struct sk_buff *skb);
+
+ /**
+ * @brief This function computes the avgSNR .
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return avgSNR
+ */
+-static u8 wlan_getavgsnr(wlan_private * priv)
++static u8 lbs_getavgsnr(struct lbs_private *priv)
{
- lbs_deb_enter(LBS_DEB_CMD);
--/* currently adapter->wlan802_11Stat is unused
-+/* currently priv->wlan802_11Stat is unused
+ u8 i;
+ u16 temp = 0;
+- wlan_adapter *adapter = priv->adapter;
+- if (adapter->numSNRNF == 0)
++ if (priv->numSNRNF == 0)
+ return 0;
+- for (i = 0; i < adapter->numSNRNF; i++)
+- temp += adapter->rawSNR[i];
+- return (u8) (temp / adapter->numSNRNF);
++ for (i = 0; i < priv->numSNRNF; i++)
++ temp += priv->rawSNR[i];
++ return (u8) (temp / priv->numSNRNF);
- struct cmd_ds_802_11_get_stat *p11Stat = &resp->params.gstat;
+ }
+
+ /**
+ * @brief This function computes the AvgNF
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @return AvgNF
+ */
+-static u8 wlan_getavgnf(wlan_private * priv)
++static u8 lbs_getavgnf(struct lbs_private *priv)
+ {
+ u8 i;
+ u16 temp = 0;
- wlan_adapter *adapter = priv->adapter;
+- if (adapter->numSNRNF == 0)
++ if (priv->numSNRNF == 0)
+ return 0;
+- for (i = 0; i < adapter->numSNRNF; i++)
+- temp += adapter->rawNF[i];
+- return (u8) (temp / adapter->numSNRNF);
++ for (i = 0; i < priv->numSNRNF; i++)
++ temp += priv->rawNF[i];
++ return (u8) (temp / priv->numSNRNF);
- // TODO Convert it to Big endian befor copy
-- memcpy(&adapter->wlan802_11Stat,
-+ memcpy(&priv->wlan802_11Stat,
- p11Stat, sizeof(struct cmd_ds_802_11_get_stat));
- */
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
}
--static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
-+static int lbs_ret_802_11_snmp_mib(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+ /**
+ * @brief This function save the raw SNR/NF to our internel buffer
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param prxpd A pointer to rxpd structure of received packet
+ * @return n/a
+ */
+-static void wlan_save_rawSNRNF(wlan_private * priv, struct rxpd *p_rx_pd)
++static void lbs_save_rawSNRNF(struct lbs_private *priv, struct rxpd *p_rx_pd)
{
- struct cmd_ds_802_11_snmp_mib *smib = &resp->params.smib;
-@@ -273,22 +178,22 @@ static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
- if (querytype == CMD_ACT_GET) {
- switch (oid) {
- case FRAGTHRESH_I:
-- priv->adapter->fragthsd =
-+ priv->fragthsd =
- le16_to_cpu(*((__le16 *)(smib->value)));
- lbs_deb_cmd("SNMP_RESP: frag threshold %u\n",
-- priv->adapter->fragthsd);
-+ priv->fragthsd);
- break;
- case RTSTHRESH_I:
-- priv->adapter->rtsthsd =
-+ priv->rtsthsd =
- le16_to_cpu(*((__le16 *)(smib->value)));
- lbs_deb_cmd("SNMP_RESP: rts threshold %u\n",
-- priv->adapter->rtsthsd);
-+ priv->rtsthsd);
- break;
- case SHORT_RETRYLIM_I:
-- priv->adapter->txretrycount =
-+ priv->txretrycount =
- le16_to_cpu(*((__le16 *)(smib->value)));
- lbs_deb_cmd("SNMP_RESP: tx retry count %u\n",
-- priv->adapter->rtsthsd);
-+ priv->rtsthsd);
- break;
- default:
- break;
-@@ -299,12 +204,11 @@ static int wlan_ret_802_11_snmp_mib(wlan_private * priv,
- return 0;
+- wlan_adapter *adapter = priv->adapter;
+- if (adapter->numSNRNF < DEFAULT_DATA_AVG_FACTOR)
+- adapter->numSNRNF++;
+- adapter->rawSNR[adapter->nextSNRNF] = p_rx_pd->snr;
+- adapter->rawNF[adapter->nextSNRNF] = p_rx_pd->nf;
+- adapter->nextSNRNF++;
+- if (adapter->nextSNRNF >= DEFAULT_DATA_AVG_FACTOR)
+- adapter->nextSNRNF = 0;
++ if (priv->numSNRNF < DEFAULT_DATA_AVG_FACTOR)
++ priv->numSNRNF++;
++ priv->rawSNR[priv->nextSNRNF] = p_rx_pd->snr;
++ priv->rawNF[priv->nextSNRNF] = p_rx_pd->nf;
++ priv->nextSNRNF++;
++ if (priv->nextSNRNF >= DEFAULT_DATA_AVG_FACTOR)
++ priv->nextSNRNF = 0;
+ return;
}
--static int wlan_ret_802_11_key_material(wlan_private * priv,
-+static int lbs_ret_802_11_key_material(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+ /**
+ * @brief This function computes the RSSI in received packet.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param prxpd A pointer to rxpd structure of received packet
+ * @return n/a
+ */
+-static void wlan_compute_rssi(wlan_private * priv, struct rxpd *p_rx_pd)
++static void lbs_compute_rssi(struct lbs_private *priv, struct rxpd *p_rx_pd)
{
- struct cmd_ds_802_11_key_material *pkeymaterial =
- &resp->params.keymaterial;
- wlan_adapter *adapter = priv->adapter;
- u16 action = le16_to_cpu(pkeymaterial->action);
- lbs_deb_enter(LBS_DEB_CMD);
-@@ -332,9 +236,9 @@ static int wlan_ret_802_11_key_material(wlan_private * priv,
- break;
+ lbs_deb_enter(LBS_DEB_RX);
- if (key_flags & KEY_INFO_WPA_UNICAST)
-- pkey = &adapter->wpa_unicast_key;
-+ pkey = &priv->wpa_unicast_key;
- else if (key_flags & KEY_INFO_WPA_MCAST)
-- pkey = &adapter->wpa_mcast_key;
-+ pkey = &priv->wpa_mcast_key;
- else
- break;
+ lbs_deb_rx("rxpd: SNR %d, NF %d\n", p_rx_pd->snr, p_rx_pd->nf);
+ lbs_deb_rx("before computing SNR: SNR-avg = %d, NF-avg = %d\n",
+- adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
+- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
++ priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
++ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-@@ -355,134 +259,85 @@ static int wlan_ret_802_11_key_material(wlan_private * priv,
- return 0;
- }
+- adapter->SNR[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->snr;
+- adapter->NF[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->nf;
+- wlan_save_rawSNRNF(priv, p_rx_pd);
++ priv->SNR[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->snr;
++ priv->NF[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->nf;
++ lbs_save_rawSNRNF(priv, p_rx_pd);
--static int wlan_ret_802_11_mac_address(wlan_private * priv,
-+static int lbs_ret_802_11_mac_address(struct lbs_private *priv,
- struct cmd_ds_command *resp)
- {
- struct cmd_ds_802_11_mac_address *macadd = &resp->params.macadd;
-- wlan_adapter *adapter = priv->adapter;
+- adapter->SNR[TYPE_RXPD][TYPE_AVG] = wlan_getavgsnr(priv) * AVG_SCALE;
+- adapter->NF[TYPE_RXPD][TYPE_AVG] = wlan_getavgnf(priv) * AVG_SCALE;
++ priv->SNR[TYPE_RXPD][TYPE_AVG] = lbs_getavgsnr(priv) * AVG_SCALE;
++ priv->NF[TYPE_RXPD][TYPE_AVG] = lbs_getavgnf(priv) * AVG_SCALE;
+ lbs_deb_rx("after computing SNR: SNR-avg = %d, NF-avg = %d\n",
+- adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
+- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
++ priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
++ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
- lbs_deb_enter(LBS_DEB_CMD);
+- adapter->RSSI[TYPE_RXPD][TYPE_NOAVG] =
+- CAL_RSSI(adapter->SNR[TYPE_RXPD][TYPE_NOAVG],
+- adapter->NF[TYPE_RXPD][TYPE_NOAVG]);
++ priv->RSSI[TYPE_RXPD][TYPE_NOAVG] =
++ CAL_RSSI(priv->SNR[TYPE_RXPD][TYPE_NOAVG],
++ priv->NF[TYPE_RXPD][TYPE_NOAVG]);
-- memcpy(adapter->current_addr, macadd->macadd, ETH_ALEN);
-+ memcpy(priv->current_addr, macadd->macadd, ETH_ALEN);
+- adapter->RSSI[TYPE_RXPD][TYPE_AVG] =
+- CAL_RSSI(adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
+- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
++ priv->RSSI[TYPE_RXPD][TYPE_AVG] =
++ CAL_RSSI(priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
++ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
- lbs_deb_enter(LBS_DEB_CMD);
- return 0;
+ lbs_deb_leave(LBS_DEB_RX);
}
--static int wlan_ret_802_11_rf_tx_power(wlan_private * priv,
-+static int lbs_ret_802_11_rf_tx_power(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+-void libertas_upload_rx_packet(wlan_private * priv, struct sk_buff *skb)
+-{
+- lbs_deb_rx("skb->data %p\n", skb->data);
+-
+- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
+- skb->protocol = eth_type_trans(skb, priv->rtap_net_dev);
+- } else {
+- if (priv->mesh_dev && IS_MESH_FRAME(skb))
+- skb->protocol = eth_type_trans(skb, priv->mesh_dev);
+- else
+- skb->protocol = eth_type_trans(skb, priv->dev);
+- }
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
+- netif_rx(skb);
+-}
+-
+ /**
+ * @brief This function processes received packet and forwards it
+ * to kernel/upper layer
+ *
+- * @param priv A pointer to wlan_private
++ * @param priv A pointer to struct lbs_private
+ * @param skb A pointer to skb which includes the received packet
+ * @return 0 or -1
+ */
+-int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
++int lbs_process_rxed_packet(struct lbs_private *priv, struct sk_buff *skb)
{
- struct cmd_ds_802_11_rf_tx_power *rtp = &resp->params.txp;
- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
+-
++ struct net_device *dev = priv->dev;
+ struct rxpackethdr *p_rx_pkt;
+ struct rxpd *p_rx_pd;
- lbs_deb_enter(LBS_DEB_CMD);
+@@ -173,15 +153,15 @@ int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
-- adapter->txpowerlevel = le16_to_cpu(rtp->currentlevel);
-+ priv->txpowerlevel = le16_to_cpu(rtp->currentlevel);
+ lbs_deb_enter(LBS_DEB_RX);
-- lbs_deb_cmd("TX power currently %d\n", adapter->txpowerlevel);
-+ lbs_deb_cmd("TX power currently %d\n", priv->txpowerlevel);
+- if (priv->adapter->monitormode != WLAN_MONITOR_OFF)
++ skb->ip_summed = CHECKSUM_NONE;
++
++ if (priv->monitormode != LBS_MONITOR_OFF)
+ return process_rxed_802_11_packet(priv, skb);
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
+ p_rx_pkt = (struct rxpackethdr *) skb->data;
+ p_rx_pd = &p_rx_pkt->rx_pd;
+- if (p_rx_pd->rx_control & RxPD_MESH_FRAME)
+- SET_MESH_FRAME(skb);
+- else
+- UNSET_MESH_FRAME(skb);
++ if (priv->mesh_dev && (p_rx_pd->rx_control & RxPD_MESH_FRAME))
++ dev = priv->mesh_dev;
+
+ lbs_deb_hex(LBS_DEB_RX, "RX Data: Before chop rxpd", skb->data,
+ min_t(unsigned int, skb->len, 100));
+@@ -257,23 +237,27 @@ int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
+ /* Take the data rate from the rxpd structure
+ * only if the rate is auto
+ */
+- if (adapter->auto_rate)
+- adapter->cur_rate = libertas_fw_index_to_data_rate(p_rx_pd->rx_rate);
++ if (priv->auto_rate)
++ priv->cur_rate = lbs_fw_index_to_data_rate(p_rx_pd->rx_rate);
+
+- wlan_compute_rssi(priv, p_rx_pd);
++ lbs_compute_rssi(priv, p_rx_pd);
+
+ lbs_deb_rx("rx data: size of actual packet %d\n", skb->len);
+ priv->stats.rx_bytes += skb->len;
+ priv->stats.rx_packets++;
+
+- libertas_upload_rx_packet(priv, skb);
++ skb->protocol = eth_type_trans(skb, dev);
++ if (in_interrupt())
++ netif_rx(skb);
++ else
++ netif_rx_ni(skb);
+
+ ret = 0;
+ done:
+ lbs_deb_leave_args(LBS_DEB_RX, "ret %d", ret);
+ return ret;
}
+-EXPORT_SYMBOL_GPL(libertas_process_rxed_packet);
++EXPORT_SYMBOL_GPL(lbs_process_rxed_packet);
--static int wlan_ret_802_11_rate_adapt_rateset(wlan_private * priv,
-+static int lbs_ret_802_11_rate_adapt_rateset(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+ /**
+ * @brief This function converts Tx/Rx rates from the Marvell WLAN format
+@@ -319,13 +303,13 @@ static u8 convert_mv_rate_to_radiotap(u8 rate)
+ * @brief This function processes a received 802.11 packet and forwards it
+ * to kernel/upper layer
+ *
+- * @param priv A pointer to wlan_private
++ * @param priv A pointer to struct lbs_private
+ * @param skb A pointer to skb which includes the received packet
+ * @return 0 or -1
+ */
+-static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
++static int process_rxed_802_11_packet(struct lbs_private *priv,
++ struct sk_buff *skb)
{
- struct cmd_ds_802_11_rate_adapt_rateset *rates = &resp->params.rateset;
- wlan_adapter *adapter = priv->adapter;
+ int ret = 0;
- lbs_deb_enter(LBS_DEB_CMD);
+ struct rx80211packethdr *p_rx_pkt;
+@@ -341,9 +325,10 @@ static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
+ // lbs_deb_hex(LBS_DEB_RX, "RX Data: Before chop rxpd", skb->data, min(skb->len, 100));
- if (rates->action == CMD_ACT_GET) {
-- adapter->enablehwauto = le16_to_cpu(rates->enablehwauto);
-- adapter->ratebitmap = le16_to_cpu(rates->bitmap);
-+ priv->enablehwauto = le16_to_cpu(rates->enablehwauto);
-+ priv->ratebitmap = le16_to_cpu(rates->bitmap);
+ if (skb->len < (ETH_HLEN + 8 + sizeof(struct rxpd))) {
+- lbs_deb_rx("rx err: frame received wit bad length\n");
++ lbs_deb_rx("rx err: frame received with bad length\n");
+ priv->stats.rx_length_errors++;
+- ret = 0;
++ ret = -EINVAL;
++ kfree(skb);
+ goto done;
}
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
- }
+@@ -359,65 +344,56 @@ static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
+ skb->len, sizeof(struct rxpd), skb->len - sizeof(struct rxpd));
--static int wlan_ret_802_11_data_rate(wlan_private * priv,
-- struct cmd_ds_command *resp)
--{
-- struct cmd_ds_802_11_data_rate *pdatarate = &resp->params.drate;
-- wlan_adapter *adapter = priv->adapter;
--
-- lbs_deb_enter(LBS_DEB_CMD);
--
-- lbs_deb_hex(LBS_DEB_CMD, "DATA_RATE_RESP", (u8 *) pdatarate,
-- sizeof(struct cmd_ds_802_11_data_rate));
--
-- /* FIXME: get actual rates FW can do if this command actually returns
-- * all data rates supported.
-- */
-- adapter->cur_rate = libertas_fw_index_to_data_rate(pdatarate->rates[0]);
-- lbs_deb_cmd("DATA_RATE: current rate 0x%02x\n", adapter->cur_rate);
--
-- lbs_deb_leave(LBS_DEB_CMD);
-- return 0;
--}
--
--static int wlan_ret_802_11_rf_channel(wlan_private * priv,
-- struct cmd_ds_command *resp)
--{
-- struct cmd_ds_802_11_rf_channel *rfchannel = &resp->params.rfchannel;
-- wlan_adapter *adapter = priv->adapter;
-- u16 action = le16_to_cpu(rfchannel->action);
-- u16 newchannel = le16_to_cpu(rfchannel->currentchannel);
--
-- lbs_deb_enter(LBS_DEB_CMD);
--
-- if (action == CMD_OPT_802_11_RF_CHANNEL_GET
-- && adapter->curbssparams.channel != newchannel) {
-- lbs_deb_cmd("channel switch from %d to %d\n",
-- adapter->curbssparams.channel, newchannel);
--
-- /* Update the channel again */
-- adapter->curbssparams.channel = newchannel;
+ /* create the exported radio header */
+- if(priv->adapter->monitormode == WLAN_MONITOR_OFF) {
+- /* no radio header */
+- /* chop the rxpd */
+- skb_pull(skb, sizeof(struct rxpd));
- }
+
+- else {
+- /* radiotap header */
+- radiotap_hdr.hdr.it_version = 0;
+- /* XXX must check this value for pad */
+- radiotap_hdr.hdr.it_pad = 0;
+- radiotap_hdr.hdr.it_len = cpu_to_le16 (sizeof(struct rx_radiotap_hdr));
+- radiotap_hdr.hdr.it_present = cpu_to_le32 (RX_RADIOTAP_PRESENT);
+- /* unknown values */
+- radiotap_hdr.flags = 0;
+- radiotap_hdr.chan_freq = 0;
+- radiotap_hdr.chan_flags = 0;
+- radiotap_hdr.antenna = 0;
+- /* known values */
+- radiotap_hdr.rate = convert_mv_rate_to_radiotap(prxpd->rx_rate);
+- /* XXX must check no carryout */
+- radiotap_hdr.antsignal = prxpd->snr + prxpd->nf;
+- radiotap_hdr.rx_flags = 0;
+- if (!(prxpd->status & cpu_to_le16(MRVDRV_RXPD_STATUS_OK)))
+- radiotap_hdr.rx_flags |= IEEE80211_RADIOTAP_F_RX_BADFCS;
+- //memset(radiotap_hdr.pad, 0x11, IEEE80211_RADIOTAP_HDRLEN - 18);
-
-- lbs_deb_enter(LBS_DEB_CMD);
-- return 0;
--}
+- /* chop the rxpd */
+- skb_pull(skb, sizeof(struct rxpd));
-
--static int wlan_ret_802_11_rssi(wlan_private * priv,
-+static int lbs_ret_802_11_rssi(struct lbs_private *priv,
- struct cmd_ds_command *resp)
- {
- struct cmd_ds_802_11_rssi_rsp *rssirsp = &resp->params.rssirsp;
-- wlan_adapter *adapter = priv->adapter;
+- /* add space for the new radio header */
+- if ((skb_headroom(skb) < sizeof(struct rx_radiotap_hdr)) &&
+- pskb_expand_head(skb, sizeof(struct rx_radiotap_hdr), 0,
+- GFP_ATOMIC)) {
+- lbs_pr_alert("%s: couldn't pskb_expand_head\n",
+- __func__);
+- }
+-
+- pradiotap_hdr =
+- (struct rx_radiotap_hdr *)skb_push(skb,
+- sizeof(struct
+- rx_radiotap_hdr));
+- memcpy(pradiotap_hdr, &radiotap_hdr,
+- sizeof(struct rx_radiotap_hdr));
++ /* radiotap header */
++ radiotap_hdr.hdr.it_version = 0;
++ /* XXX must check this value for pad */
++ radiotap_hdr.hdr.it_pad = 0;
++ radiotap_hdr.hdr.it_len = cpu_to_le16 (sizeof(struct rx_radiotap_hdr));
++ radiotap_hdr.hdr.it_present = cpu_to_le32 (RX_RADIOTAP_PRESENT);
++ /* unknown values */
++ radiotap_hdr.flags = 0;
++ radiotap_hdr.chan_freq = 0;
++ radiotap_hdr.chan_flags = 0;
++ radiotap_hdr.antenna = 0;
++ /* known values */
++ radiotap_hdr.rate = convert_mv_rate_to_radiotap(prxpd->rx_rate);
++ /* XXX must check no carryout */
++ radiotap_hdr.antsignal = prxpd->snr + prxpd->nf;
++ radiotap_hdr.rx_flags = 0;
++ if (!(prxpd->status & cpu_to_le16(MRVDRV_RXPD_STATUS_OK)))
++ radiotap_hdr.rx_flags |= IEEE80211_RADIOTAP_F_RX_BADFCS;
++ //memset(radiotap_hdr.pad, 0x11, IEEE80211_RADIOTAP_HDRLEN - 18);
++
++ /* chop the rxpd */
++ skb_pull(skb, sizeof(struct rxpd));
++
++ /* add space for the new radio header */
++ if ((skb_headroom(skb) < sizeof(struct rx_radiotap_hdr)) &&
++ pskb_expand_head(skb, sizeof(struct rx_radiotap_hdr), 0, GFP_ATOMIC)) {
++ lbs_pr_alert("%s: couldn't pskb_expand_head\n", __func__);
++ ret = -ENOMEM;
++ kfree_skb(skb);
++ goto done;
+ }
- lbs_deb_enter(LBS_DEB_CMD);
++ pradiotap_hdr = (void *)skb_push(skb, sizeof(struct rx_radiotap_hdr));
++ memcpy(pradiotap_hdr, &radiotap_hdr, sizeof(struct rx_radiotap_hdr));
++
+ /* Take the data rate from the rxpd structure
+ * only if the rate is auto
+ */
+- if (adapter->auto_rate)
+- adapter->cur_rate = libertas_fw_index_to_data_rate(prxpd->rx_rate);
++ if (priv->auto_rate)
++ priv->cur_rate = lbs_fw_index_to_data_rate(prxpd->rx_rate);
- /* store the non average value */
-- adapter->SNR[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->SNR);
-- adapter->NF[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->noisefloor);
-+ priv->SNR[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->SNR);
-+ priv->NF[TYPE_BEACON][TYPE_NOAVG] = le16_to_cpu(rssirsp->noisefloor);
+- wlan_compute_rssi(priv, prxpd);
++ lbs_compute_rssi(priv, prxpd);
-- adapter->SNR[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgSNR);
-- adapter->NF[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgnoisefloor);
-+ priv->SNR[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgSNR);
-+ priv->NF[TYPE_BEACON][TYPE_AVG] = le16_to_cpu(rssirsp->avgnoisefloor);
+ lbs_deb_rx("rx data: size of actual packet %d\n", skb->len);
+ priv->stats.rx_bytes += skb->len;
+ priv->stats.rx_packets++;
-- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG] =
-- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_NOAVG],
-- adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
-+ priv->RSSI[TYPE_BEACON][TYPE_NOAVG] =
-+ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_NOAVG],
-+ priv->NF[TYPE_BEACON][TYPE_NOAVG]);
+- libertas_upload_rx_packet(priv, skb);
++ skb->protocol = eth_type_trans(skb, priv->rtap_net_dev);
++ netif_rx(skb);
-- adapter->RSSI[TYPE_BEACON][TYPE_AVG] =
-- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_AVG] / AVG_SCALE,
-- adapter->NF[TYPE_BEACON][TYPE_AVG] / AVG_SCALE);
-+ priv->RSSI[TYPE_BEACON][TYPE_AVG] =
-+ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_AVG] / AVG_SCALE,
-+ priv->NF[TYPE_BEACON][TYPE_AVG] / AVG_SCALE);
+ ret = 0;
- lbs_deb_cmd("RSSI: beacon %d, avg %d\n",
-- adapter->RSSI[TYPE_BEACON][TYPE_NOAVG],
-- adapter->RSSI[TYPE_BEACON][TYPE_AVG]);
-+ priv->RSSI[TYPE_BEACON][TYPE_NOAVG],
-+ priv->RSSI[TYPE_BEACON][TYPE_AVG]);
+diff --git a/drivers/net/wireless/libertas/scan.c b/drivers/net/wireless/libertas/scan.c
+index ad1e67d..9a61188 100644
+--- a/drivers/net/wireless/libertas/scan.c
++++ b/drivers/net/wireless/libertas/scan.c
+@@ -39,9 +39,8 @@
+ //! Memory needed to store a max number/size SSID TLV for a firmware scan
+ #define SSID_TLV_MAX_SIZE (1 * sizeof(struct mrvlietypes_ssidparamset))
- lbs_deb_leave(LBS_DEB_CMD);
- return 0;
+-//! Maximum memory needed for a wlan_scan_cmd_config with all TLVs at max
+-#define MAX_SCAN_CFG_ALLOC (sizeof(struct wlan_scan_cmd_config) \
+- + sizeof(struct mrvlietypes_numprobes) \
++//! Maximum memory needed for a lbs_scan_cmd_config with all TLVs at max
++#define MAX_SCAN_CFG_ALLOC (sizeof(struct lbs_scan_cmd_config) \
+ + CHAN_TLV_MAX_SIZE \
+ + SSID_TLV_MAX_SIZE)
+
+@@ -80,7 +79,23 @@ static inline void clear_bss_descriptor (struct bss_descriptor * bss)
+ memset(bss, 0, offsetof(struct bss_descriptor, list));
}
--static int wlan_ret_802_11_eeprom_access(wlan_private * priv,
-+static int lbs_ret_802_11_eeprom_access(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+-static inline int match_bss_no_security(struct wlan_802_11_security * secinfo,
++/**
++ * @brief Compare two SSIDs
++ *
++ * @param ssid1 A pointer to ssid to compare
++ * @param ssid2 A pointer to ssid to compare
++ *
++ * @return 0: ssid is same, otherwise is different
++ */
++int lbs_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
++{
++ if (ssid1_len != ssid2_len)
++ return -1;
++
++ return memcmp(ssid1, ssid2, ssid1_len);
++}
++
++static inline int match_bss_no_security(struct lbs_802_11_security *secinfo,
+ struct bss_descriptor * match_bss)
{
-- wlan_adapter *adapter = priv->adapter;
-- struct wlan_ioctl_regrdwr *pbuf;
-- pbuf = (struct wlan_ioctl_regrdwr *) adapter->prdeeprom;
-+ struct lbs_ioctl_regrdwr *pbuf;
-+ pbuf = (struct lbs_ioctl_regrdwr *) priv->prdeeprom;
-
- lbs_deb_enter_args(LBS_DEB_CMD, "len %d",
- le16_to_cpu(resp->params.rdeeprom.bytecount));
-@@ -503,46 +358,45 @@ static int wlan_ret_802_11_eeprom_access(wlan_private * priv,
+ if ( !secinfo->wep_enabled
+@@ -94,7 +109,7 @@ static inline int match_bss_no_security(struct wlan_802_11_security * secinfo,
return 0;
}
--static int wlan_ret_get_log(wlan_private * priv,
-+static int lbs_ret_get_log(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+-static inline int match_bss_static_wep(struct wlan_802_11_security * secinfo,
++static inline int match_bss_static_wep(struct lbs_802_11_security *secinfo,
+ struct bss_descriptor * match_bss)
{
- struct cmd_ds_802_11_get_log *logmessage = &resp->params.glog;
-- wlan_adapter *adapter = priv->adapter;
-
- lbs_deb_enter(LBS_DEB_CMD);
-
- /* Stored little-endian */
-- memcpy(&adapter->logmsg, logmessage, sizeof(struct cmd_ds_802_11_get_log));
-+ memcpy(&priv->logmsg, logmessage, sizeof(struct cmd_ds_802_11_get_log));
-
- lbs_deb_leave(LBS_DEB_CMD);
+ if ( secinfo->wep_enabled
+@@ -106,7 +121,7 @@ static inline int match_bss_static_wep(struct wlan_802_11_security * secinfo,
return 0;
}
--static int libertas_ret_802_11_enable_rsn(wlan_private * priv,
-- struct cmd_ds_command *resp)
-+static int lbs_ret_802_11_bcn_ctrl(struct lbs_private * priv,
-+ struct cmd_ds_command *resp)
+-static inline int match_bss_wpa(struct wlan_802_11_security * secinfo,
++static inline int match_bss_wpa(struct lbs_802_11_security *secinfo,
+ struct bss_descriptor * match_bss)
{
-- struct cmd_ds_802_11_enable_rsn *enable_rsn = &resp->params.enbrsn;
-- wlan_adapter *adapter = priv->adapter;
-- u32 * pdata_buf = adapter->cur_cmd->pdata_buf;
-+ struct cmd_ds_802_11_beacon_control *bcn_ctrl =
-+ &resp->params.bcn_ctrl;
-
- lbs_deb_enter(LBS_DEB_CMD);
+ if ( !secinfo->wep_enabled
+@@ -121,7 +136,7 @@ static inline int match_bss_wpa(struct wlan_802_11_security * secinfo,
+ return 0;
+ }
-- if (enable_rsn->action == cpu_to_le16(CMD_ACT_GET)) {
-- if (pdata_buf)
-- *pdata_buf = (u32) le16_to_cpu(enable_rsn->enable);
-+ if (bcn_ctrl->action == CMD_ACT_GET) {
-+ priv->beacon_enable = (u8) le16_to_cpu(bcn_ctrl->beacon_enable);
-+ priv->beacon_period = le16_to_cpu(bcn_ctrl->beacon_period);
- }
+-static inline int match_bss_wpa2(struct wlan_802_11_security * secinfo,
++static inline int match_bss_wpa2(struct lbs_802_11_security *secinfo,
+ struct bss_descriptor * match_bss)
+ {
+ if ( !secinfo->wep_enabled
+@@ -136,7 +151,7 @@ static inline int match_bss_wpa2(struct wlan_802_11_security * secinfo,
+ return 0;
+ }
-- lbs_deb_leave(LBS_DEB_CMD);
-+ lbs_deb_enter(LBS_DEB_CMD);
+-static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
++static inline int match_bss_dynamic_wep(struct lbs_802_11_security *secinfo,
+ struct bss_descriptor * match_bss)
+ {
+ if ( !secinfo->wep_enabled
+@@ -150,6 +165,18 @@ static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
return 0;
}
--static inline int handle_cmd_response(u16 respcmd,
-- struct cmd_ds_command *resp,
-- wlan_private *priv)
-+static inline int handle_cmd_response(struct lbs_private *priv,
-+ unsigned long dummy,
-+ struct cmd_header *cmd_response)
++static inline int is_same_network(struct bss_descriptor *src,
++ struct bss_descriptor *dst)
++{
++ /* A network is only a duplicate if the channel, BSSID, and ESSID
++ * all match. We treat all <hidden> with the same BSSID and channel
++ * as one network */
++ return ((src->ssid_len == dst->ssid_len) &&
++ (src->channel == dst->channel) &&
++ !compare_ether_addr(src->bssid, dst->bssid) &&
++ !memcmp(src->ssid, dst->ssid, src->ssid_len));
++}
++
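The `is_same_network()` helper added above treats two scan entries as duplicates only when channel, BSSID, and ESSID all agree. A minimal userspace sketch follows; the trimmed `struct bss_descriptor` is an assumption (the real driver struct has many more fields), and the kernel's `compare_ether_addr()` is replaced by a plain 6-byte `memcmp()`, which matches its zero-on-equal semantics.

```c
#include <string.h>

typedef unsigned char u8;

/* Trimmed stand-in for the driver's bss_descriptor; only the fields
 * consulted by the duplicate test are kept. */
struct bss_descriptor {
	u8 bssid[6];
	u8 ssid[32];
	u8 ssid_len;
	u8 channel;
};

/* A network is only a duplicate if the channel, BSSID, and ESSID all
 * match; hidden SSIDs with the same BSSID and channel collapse into
 * one network because their ssid_len values are equal (zero). */
static int is_same_network(struct bss_descriptor *src,
			   struct bss_descriptor *dst)
{
	return ((src->ssid_len == dst->ssid_len) &&
		(src->channel == dst->channel) &&
		!memcmp(src->bssid, dst->bssid, 6) &&
		!memcmp(src->ssid, dst->ssid, src->ssid_len));
}
```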
+ /**
+ * @brief Check if a scanned network is compatible with the driver settings
+ *
+@@ -163,13 +190,13 @@ static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
+ * 0 0 0 0 !=NONE 1 0 0 yes Dynamic WEP
+ *
+ *
+- * @param adapter A pointer to wlan_adapter
++ * @param priv A pointer to struct lbs_private
+ * @param index Index in scantable to check against current driver settings
+ * @param mode Network mode: Infrastructure or IBSS
+ *
+ * @return Index in scantable, or error code if negative
+ */
+-static int is_network_compatible(wlan_adapter * adapter,
++static int is_network_compatible(struct lbs_private *priv,
+ struct bss_descriptor * bss, u8 mode)
{
-+ struct cmd_ds_command *resp = (struct cmd_ds_command *) cmd_response;
- int ret = 0;
- unsigned long flags;
-- wlan_adapter *adapter = priv->adapter;
-+ uint16_t respcmd = le16_to_cpu(resp->command);
+ int matched = 0;
+@@ -179,34 +206,34 @@ static int is_network_compatible(wlan_adapter * adapter,
+ if (bss->mode != mode)
+ goto done;
- lbs_deb_enter(LBS_DEB_HOST);
+- if ((matched = match_bss_no_security(&adapter->secinfo, bss))) {
++ if ((matched = match_bss_no_security(&priv->secinfo, bss))) {
+ goto done;
+- } else if ((matched = match_bss_static_wep(&adapter->secinfo, bss))) {
++ } else if ((matched = match_bss_static_wep(&priv->secinfo, bss))) {
+ goto done;
+- } else if ((matched = match_bss_wpa(&adapter->secinfo, bss))) {
++ } else if ((matched = match_bss_wpa(&priv->secinfo, bss))) {
+ lbs_deb_scan(
+- "is_network_compatible() WPA: wpa_ie=%#x "
+- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s "
+- "privacy=%#x\n", bss->wpa_ie[0], bss->rsn_ie[0],
+- adapter->secinfo.wep_enabled ? "e" : "d",
+- adapter->secinfo.WPAenabled ? "e" : "d",
+- adapter->secinfo.WPA2enabled ? "e" : "d",
++ "is_network_compatible() WPA: wpa_ie 0x%x "
++ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s "
++ "privacy 0x%x\n", bss->wpa_ie[0], bss->rsn_ie[0],
++ priv->secinfo.wep_enabled ? "e" : "d",
++ priv->secinfo.WPAenabled ? "e" : "d",
++ priv->secinfo.WPA2enabled ? "e" : "d",
+ (bss->capability & WLAN_CAPABILITY_PRIVACY));
+ goto done;
+- } else if ((matched = match_bss_wpa2(&adapter->secinfo, bss))) {
++ } else if ((matched = match_bss_wpa2(&priv->secinfo, bss))) {
+ lbs_deb_scan(
+- "is_network_compatible() WPA2: wpa_ie=%#x "
+- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s "
+- "privacy=%#x\n", bss->wpa_ie[0], bss->rsn_ie[0],
+- adapter->secinfo.wep_enabled ? "e" : "d",
+- adapter->secinfo.WPAenabled ? "e" : "d",
+- adapter->secinfo.WPA2enabled ? "e" : "d",
++ "is_network_compatible() WPA2: wpa_ie 0x%x "
++ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s "
++ "privacy 0x%x\n", bss->wpa_ie[0], bss->rsn_ie[0],
++ priv->secinfo.wep_enabled ? "e" : "d",
++ priv->secinfo.WPAenabled ? "e" : "d",
++ priv->secinfo.WPA2enabled ? "e" : "d",
+ (bss->capability & WLAN_CAPABILITY_PRIVACY));
+ goto done;
+- } else if ((matched = match_bss_dynamic_wep(&adapter->secinfo, bss))) {
++ } else if ((matched = match_bss_dynamic_wep(&priv->secinfo, bss))) {
+ lbs_deb_scan(
+ "is_network_compatible() dynamic WEP: "
+- "wpa_ie=%#x wpa2_ie=%#x privacy=%#x\n",
++ "wpa_ie 0x%x wpa2_ie 0x%x privacy 0x%x\n",
+ bss->wpa_ie[0], bss->rsn_ie[0],
+ (bss->capability & WLAN_CAPABILITY_PRIVACY));
+ goto done;
+@@ -214,12 +241,12 @@ static int is_network_compatible(wlan_adapter * adapter,
-@@ -550,218 +404,213 @@ static inline int handle_cmd_response(u16 respcmd,
- case CMD_RET(CMD_MAC_REG_ACCESS):
- case CMD_RET(CMD_BBP_REG_ACCESS):
- case CMD_RET(CMD_RF_REG_ACCESS):
-- ret = wlan_ret_reg_access(priv, respcmd, resp);
-- break;
--
-- case CMD_RET(CMD_GET_HW_SPEC):
-- ret = wlan_ret_get_hw_spec(priv, resp);
-+ ret = lbs_ret_reg_access(priv, respcmd, resp);
- break;
+ /* bss security settings don't match those configured on card */
+ lbs_deb_scan(
+- "is_network_compatible() FAILED: wpa_ie=%#x "
+- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s privacy=%#x\n",
++ "is_network_compatible() FAILED: wpa_ie 0x%x "
++ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s privacy 0x%x\n",
+ bss->wpa_ie[0], bss->rsn_ie[0],
+- adapter->secinfo.wep_enabled ? "e" : "d",
+- adapter->secinfo.WPAenabled ? "e" : "d",
+- adapter->secinfo.WPA2enabled ? "e" : "d",
++ priv->secinfo.wep_enabled ? "e" : "d",
++ priv->secinfo.WPAenabled ? "e" : "d",
++ priv->secinfo.WPA2enabled ? "e" : "d",
+ (bss->capability & WLAN_CAPABILITY_PRIVACY));
- case CMD_RET(CMD_802_11_SCAN):
-- ret = libertas_ret_80211_scan(priv, resp);
-+ ret = lbs_ret_80211_scan(priv, resp);
- break;
+ done:
+@@ -227,22 +254,6 @@ done:
+ return matched;
+ }
- case CMD_RET(CMD_802_11_GET_LOG):
-- ret = wlan_ret_get_log(priv, resp);
-+ ret = lbs_ret_get_log(priv, resp);
- break;
+-/**
+- * @brief Compare two SSIDs
+- *
+- * @param ssid1 A pointer to ssid to compare
+- * @param ssid2 A pointer to ssid to compare
+- *
+- * @return 0--ssid is same, otherwise is different
+- */
+-int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
+-{
+- if (ssid1_len != ssid2_len)
+- return -1;
+-
+- return memcmp(ssid1, ssid2, ssid1_len);
+-}
+-
- case CMD_RET_802_11_ASSOCIATE:
- case CMD_RET(CMD_802_11_ASSOCIATE):
- case CMD_RET(CMD_802_11_REASSOCIATE):
-- ret = libertas_ret_80211_associate(priv, resp);
-+ ret = lbs_ret_80211_associate(priv, resp);
- break;
- case CMD_RET(CMD_802_11_DISASSOCIATE):
- case CMD_RET(CMD_802_11_DEAUTHENTICATE):
-- ret = libertas_ret_80211_disassociate(priv, resp);
-+ ret = lbs_ret_80211_disassociate(priv, resp);
- break;
- case CMD_RET(CMD_802_11_AD_HOC_START):
- case CMD_RET(CMD_802_11_AD_HOC_JOIN):
-- ret = libertas_ret_80211_ad_hoc_start(priv, resp);
-+ ret = lbs_ret_80211_ad_hoc_start(priv, resp);
- break;
+@@ -252,17 +263,27 @@ int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
+ /* */
+ /*********************************************************************/
- case CMD_RET(CMD_802_11_GET_STAT):
-- ret = wlan_ret_802_11_stat(priv, resp);
-+ ret = lbs_ret_802_11_stat(priv, resp);
- break;
++void lbs_scan_worker(struct work_struct *work)
++{
++ struct lbs_private *priv =
++ container_of(work, struct lbs_private, scan_work.work);
++
++ lbs_deb_enter(LBS_DEB_SCAN);
++ lbs_scan_networks(priv, NULL, 0);
++ lbs_deb_leave(LBS_DEB_SCAN);
++}
++
- case CMD_RET(CMD_802_11_SNMP_MIB):
-- ret = wlan_ret_802_11_snmp_mib(priv, resp);
-+ ret = lbs_ret_802_11_snmp_mib(priv, resp);
- break;
+ /**
+ * @brief Create a channel list for the driver to scan based on region info
+ *
+- * Only used from wlan_scan_setup_scan_config()
++ * Only used from lbs_scan_setup_scan_config()
+ *
+ * Use the driver region/band information to construct a comprehensive list
+ * of channels to scan. This routine is used for any scan that is not
+ * provided a specific channel list to scan.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param scanchanlist Output parameter: resulting channel list to scan
+ * @param filteredscan Flag indicating whether or not a BSSID or SSID filter
+ * is being sent in the command to firmware. Used to
+@@ -272,12 +293,11 @@ int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
+ *
+ * @return void
+ */
+-static void wlan_scan_create_channel_list(wlan_private * priv,
++static int lbs_scan_create_channel_list(struct lbs_private *priv,
+ struct chanscanparamset * scanchanlist,
+ u8 filteredscan)
+ {
- case CMD_RET(CMD_802_11_RF_TX_POWER):
-- ret = wlan_ret_802_11_rf_tx_power(priv, resp);
-+ ret = lbs_ret_802_11_rf_tx_power(priv, resp);
- break;
+- wlan_adapter *adapter = priv->adapter;
+ struct region_channel *scanregion;
+ struct chan_freq_power *cfp;
+ int rgnidx;
+@@ -285,8 +305,6 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
+ int nextchan;
+ u8 scantype;
- case CMD_RET(CMD_802_11_SET_AFC):
- case CMD_RET(CMD_802_11_GET_AFC):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- memmove(adapter->cur_cmd->pdata_buf, &resp->params.afc,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.afc,
- sizeof(struct cmd_ds_802_11_afc));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- lbs_deb_enter_args(LBS_DEB_SCAN, "filteredscan %d", filteredscan);
+-
+ chanidx = 0;
- break;
+ /* Set the default scan type to the user specified type, will later
+@@ -295,21 +313,22 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
+ */
+ scantype = CMD_SCAN_TYPE_ACTIVE;
- case CMD_RET(CMD_MAC_MULTICAST_ADR):
- case CMD_RET(CMD_MAC_CONTROL):
-- case CMD_RET(CMD_802_11_SET_WEP):
- case CMD_RET(CMD_802_11_RESET):
- case CMD_RET(CMD_802_11_AUTHENTICATE):
-- case CMD_RET(CMD_802_11_RADIO_CONTROL):
- case CMD_RET(CMD_802_11_BEACON_STOP):
- break;
+- for (rgnidx = 0; rgnidx < ARRAY_SIZE(adapter->region_channel); rgnidx++) {
+- if (priv->adapter->enable11d &&
+- adapter->connect_status != LIBERTAS_CONNECTED) {
++ for (rgnidx = 0; rgnidx < ARRAY_SIZE(priv->region_channel); rgnidx++) {
++ if (priv->enable11d &&
++ (priv->connect_status != LBS_CONNECTED) &&
++ (priv->mesh_connect_status != LBS_CONNECTED)) {
+ /* Scan all the supported chan for the first scan */
+- if (!adapter->universal_channel[rgnidx].valid)
++ if (!priv->universal_channel[rgnidx].valid)
+ continue;
+- scanregion = &adapter->universal_channel[rgnidx];
++ scanregion = &priv->universal_channel[rgnidx];
-- case CMD_RET(CMD_802_11_ENABLE_RSN):
-- ret = libertas_ret_802_11_enable_rsn(priv, resp);
-- break;
--
-- case CMD_RET(CMD_802_11_DATA_RATE):
-- ret = wlan_ret_802_11_data_rate(priv, resp);
-- break;
- case CMD_RET(CMD_802_11_RATE_ADAPT_RATESET):
-- ret = wlan_ret_802_11_rate_adapt_rateset(priv, resp);
-- break;
-- case CMD_RET(CMD_802_11_RF_CHANNEL):
-- ret = wlan_ret_802_11_rf_channel(priv, resp);
-+ ret = lbs_ret_802_11_rate_adapt_rateset(priv, resp);
- break;
+ /* clear the parsed_region_chan for the first scan */
+- memset(&adapter->parsed_region_chan, 0x00,
+- sizeof(adapter->parsed_region_chan));
++ memset(&priv->parsed_region_chan, 0x00,
++ sizeof(priv->parsed_region_chan));
+ } else {
+- if (!adapter->region_channel[rgnidx].valid)
++ if (!priv->region_channel[rgnidx].valid)
+ continue;
+- scanregion = &adapter->region_channel[rgnidx];
++ scanregion = &priv->region_channel[rgnidx];
+ }
- case CMD_RET(CMD_802_11_RSSI):
-- ret = wlan_ret_802_11_rssi(priv, resp);
-+ ret = lbs_ret_802_11_rssi(priv, resp);
- break;
+ for (nextchan = 0;
+@@ -317,10 +336,10 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
- case CMD_RET(CMD_802_11_MAC_ADDRESS):
-- ret = wlan_ret_802_11_mac_address(priv, resp);
-+ ret = lbs_ret_802_11_mac_address(priv, resp);
- break;
+ cfp = scanregion->CFP + nextchan;
- case CMD_RET(CMD_802_11_AD_HOC_STOP):
-- ret = libertas_ret_80211_ad_hoc_stop(priv, resp);
-+ ret = lbs_ret_80211_ad_hoc_stop(priv, resp);
- break;
+- if (priv->adapter->enable11d) {
++ if (priv->enable11d) {
+ scantype =
+- libertas_get_scan_type_11d(cfp->channel,
+- &adapter->
++ lbs_get_scan_type_11d(cfp->channel,
++ &priv->
+ parsed_region_chan);
+ }
- case CMD_RET(CMD_802_11_KEY_MATERIAL):
-- ret = wlan_ret_802_11_key_material(priv, resp);
-+ ret = lbs_ret_802_11_key_material(priv, resp);
- break;
+@@ -353,453 +372,151 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
+ }
+ }
+ }
++ return chanidx;
+ }
- case CMD_RET(CMD_802_11_EEPROM_ACCESS):
-- ret = wlan_ret_802_11_eeprom_access(priv, resp);
-+ ret = lbs_ret_802_11_eeprom_access(priv, resp);
- break;
- case CMD_RET(CMD_802_11D_DOMAIN_INFO):
-- ret = libertas_ret_802_11d_domain_info(priv, resp);
-- break;
+-/* Delayed partial scan worker */
+-void libertas_scan_worker(struct work_struct *work)
++/*
++ * Add SSID TLV of the form:
++ *
++ * TLV-ID SSID 00 00
++ * length 06 00
++ * ssid 4d 4e 54 45 53 54
++ */
++static int lbs_scan_add_ssid_tlv(u8 *tlv,
++ const struct lbs_ioctl_user_scan_cfg *user_cfg)
+ {
+- wlan_private *priv = container_of(work, wlan_private, scan_work.work);
-
-- case CMD_RET(CMD_802_11_SLEEP_PARAMS):
-- ret = wlan_ret_802_11_sleep_params(priv, resp);
-- break;
-- case CMD_RET(CMD_802_11_INACTIVITY_TIMEOUT):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- *((u16 *) adapter->cur_cmd->pdata_buf) =
-- le16_to_cpu(resp->params.inactivity_timeout.timeout);
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ ret = lbs_ret_802_11d_domain_info(priv, resp);
- break;
+- wlan_scan_networks(priv, NULL, 0);
++ struct mrvlietypes_ssidparamset *ssid_tlv =
++ (struct mrvlietypes_ssidparamset *)tlv;
++ ssid_tlv->header.type = cpu_to_le16(TLV_TYPE_SSID);
++ ssid_tlv->header.len = cpu_to_le16(user_cfg->ssid_len);
++ memcpy(ssid_tlv->ssid, user_cfg->ssid, user_cfg->ssid_len);
++ return sizeof(ssid_tlv->header) + user_cfg->ssid_len;
+ }
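The byte layout documented in the new `lbs_scan_add_ssid_tlv()` comment (type `00 00`, little-endian length, raw SSID) can be reproduced without the kernel's `mrvlietypes_ssidparamset` struct. The sketch below packs the header by hand; the function name and explicit byte stores are illustrative stand-ins for the `cpu_to_le16()` struct assignments in the patch.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Hand-packed equivalent of the SSID TLV built above:
 *   TLV-ID SSID  00 00
 *   length       <ssid_len, little-endian u16>
 *   ssid         <raw bytes>
 * Returns header size + payload size, as the driver helper does. */
static size_t add_ssid_tlv(uint8_t *tlv, const uint8_t *ssid, uint8_t ssid_len)
{
	tlv[0] = 0x00;		/* TLV type 0x0000 (SSID), little-endian */
	tlv[1] = 0x00;
	tlv[2] = ssid_len;	/* length, little-endian */
	tlv[3] = 0x00;
	memcpy(tlv + 4, ssid, ssid_len);
	return 4 + ssid_len;
}
```

For the "MNTEST" example in the comment this emits `00 00 06 00 4d 4e 54 45 53 54`, ten bytes in total.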
- case CMD_RET(CMD_802_11_TPC_CFG):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- memmove(adapter->cur_cmd->pdata_buf, &resp->params.tpccfg,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.tpccfg,
- sizeof(struct cmd_ds_802_11_tpc_cfg));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
- case CMD_RET(CMD_802_11_LED_GPIO_CTRL):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- memmove(adapter->cur_cmd->pdata_buf, &resp->params.ledgpio,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.ledgpio,
- sizeof(struct cmd_ds_802_11_led_ctrl));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
-+
- case CMD_RET(CMD_802_11_PWR_CFG):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- memmove(adapter->cur_cmd->pdata_buf, &resp->params.pwrcfg,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ memmove((void *)priv->cur_cmd->callback_arg, &resp->params.pwrcfg,
- sizeof(struct cmd_ds_802_11_pwr_cfg));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
+-/**
+- * @brief Construct a wlan_scan_cmd_config structure to use in issue scan cmds
+- *
+- * Application layer or other functions can invoke wlan_scan_networks
+- * with a scan configuration supplied in a wlan_ioctl_user_scan_cfg struct.
+- * This structure is used as the basis of one or many wlan_scan_cmd_config
+- * commands that are sent to the command processing module and sent to
+- * firmware.
+- *
+- * Create a wlan_scan_cmd_config based on the following user supplied
+- * parameters (if present):
+- * - SSID filter
+- * - BSSID filter
+- * - Number of Probes to be sent
+- * - channel list
+- *
+- * If the SSID or BSSID filter is not present, disable/clear the filter.
+- * If the number of probes is not set, use the adapter default setting
+- * Qualify the channel
++/*
++ * Add CHANLIST TLV of the form
+ *
+- * @param priv A pointer to wlan_private structure
+- * @param puserscanin NULL or pointer to scan configuration parameters
+- * @param ppchantlvout Output parameter: Pointer to the start of the
+- * channel TLV portion of the output scan config
+- * @param pscanchanlist Output parameter: Pointer to the resulting channel
+- * list to scan
+- * @param pmaxchanperscan Output parameter: Number of channels to scan for
+- * each issuance of the firmware scan command
+- * @param pfilteredscan Output parameter: Flag indicating whether or not
+- * a BSSID or SSID filter is being sent in the
+- * command to firmware. Used to increase the number
+- * of channels sent in a scan command and to
+- * disable the firmware channel scan filter.
+- * @param pscancurrentonly Output parameter: Flag indicating whether or not
+- * we are only scanning our current active channel
++ * TLV-ID CHANLIST 01 01
++ * length 5b 00
++ * channel 1 00 01 00 00 00 64 00
++ * radio type 00
++ * channel 01
++ * scan type 00
++ * min scan time 00 00
++ * max scan time 64 00
++ * channel 2 00 02 00 00 00 64 00
++ * channel 3 00 03 00 00 00 64 00
++ * channel 4 00 04 00 00 00 64 00
++ * channel 5 00 05 00 00 00 64 00
++ * channel 6 00 06 00 00 00 64 00
++ * channel 7 00 07 00 00 00 64 00
++ * channel 8 00 08 00 00 00 64 00
++ * channel 9 00 09 00 00 00 64 00
++ * channel 10 00 0a 00 00 00 64 00
++ * channel 11 00 0b 00 00 00 64 00
++ * channel 12 00 0c 00 00 00 64 00
++ * channel 13 00 0d 00 00 00 64 00
+ *
+- * @return resulting scan configuration
+ */
+-static struct wlan_scan_cmd_config *
+-wlan_scan_setup_scan_config(wlan_private * priv,
+- const struct wlan_ioctl_user_scan_cfg * puserscanin,
+- struct mrvlietypes_chanlistparamset ** ppchantlvout,
+- struct chanscanparamset * pscanchanlist,
+- int *pmaxchanperscan,
+- u8 * pfilteredscan,
+- u8 * pscancurrentonly)
++static int lbs_scan_add_chanlist_tlv(u8 *tlv,
++ struct chanscanparamset *chan_list,
++ int chan_count)
+ {
+- struct mrvlietypes_numprobes *pnumprobestlv;
+- struct mrvlietypes_ssidparamset *pssidtlv;
+- struct wlan_scan_cmd_config * pscancfgout = NULL;
+- u8 *ptlvpos;
+- u16 numprobes;
+- int chanidx;
+- int scantype;
+- int scandur;
+- int channel;
+- int radiotype;
+-
+- lbs_deb_enter(LBS_DEB_SCAN);
+-
+- pscancfgout = kzalloc(MAX_SCAN_CFG_ALLOC, GFP_KERNEL);
+- if (pscancfgout == NULL)
+- goto out;
+-
+- /* The tlvbufferlen is calculated for each scan command. The TLVs added
+- * in this routine will be preserved since the routine that sends
+- * the command will append channelTLVs at *ppchantlvout. The difference
+- * between the *ppchantlvout and the tlvbuffer start will be used
+- * to calculate the size of anything we add in this routine.
+- */
+- pscancfgout->tlvbufferlen = 0;
+-
+- /* Running tlv pointer. Assigned to ppchantlvout at end of function
+- * so later routines know where channels can be added to the command buf
+- */
+- ptlvpos = pscancfgout->tlvbuffer;
+-
+- /*
+-	 * Set the initial scan parameters for progressive scanning. If a specific
+- * BSSID or SSID is used, the number of channels in the scan command
+- * will be increased to the absolute maximum
+- */
+- *pmaxchanperscan = MRVDRV_CHANNELS_PER_SCAN_CMD;
+-
+- /* Initialize the scan as un-filtered by firmware, set to TRUE below if
+- * a SSID or BSSID filter is sent in the command
+- */
+- *pfilteredscan = 0;
+-
+- /* Initialize the scan as not being only on the current channel. If
+- * the channel list is customized, only contains one channel, and
+- * is the active channel, this is set true and data flow is not halted.
+- */
+- *pscancurrentonly = 0;
+-
+- if (puserscanin) {
+- /* Set the bss type scan filter, use adapter setting if unset */
+- pscancfgout->bsstype =
+- puserscanin->bsstype ? puserscanin->bsstype : CMD_BSS_TYPE_ANY;
+-
+- /* Set the number of probes to send, use adapter setting if unset */
+- numprobes = puserscanin->numprobes ? puserscanin->numprobes : 0;
+-
+- /*
+- * Set the BSSID filter to the incoming configuration,
+- * if non-zero. If not set, it will remain disabled (all zeros).
+- */
+- memcpy(pscancfgout->bssid, puserscanin->bssid,
+- sizeof(pscancfgout->bssid));
+-
+- if (puserscanin->ssid_len) {
+- pssidtlv =
+- (struct mrvlietypes_ssidparamset *) pscancfgout->
+- tlvbuffer;
+- pssidtlv->header.type = cpu_to_le16(TLV_TYPE_SSID);
+- pssidtlv->header.len = cpu_to_le16(puserscanin->ssid_len);
+- memcpy(pssidtlv->ssid, puserscanin->ssid,
+- puserscanin->ssid_len);
+- ptlvpos += sizeof(pssidtlv->header) + puserscanin->ssid_len;
+- }
+-
+- /*
+- * The default number of channels sent in the command is low to
+- * ensure the response buffer from the firmware does not truncate
+- * scan results. That is not an issue with an SSID or BSSID
+- * filter applied to the scan results in the firmware.
+- */
+- if ( puserscanin->ssid_len
+- || (compare_ether_addr(pscancfgout->bssid, &zeromac[0]) != 0)) {
+- *pmaxchanperscan = MRVDRV_MAX_CHANNELS_PER_SCAN;
+- *pfilteredscan = 1;
+- }
+- } else {
+- pscancfgout->bsstype = CMD_BSS_TYPE_ANY;
+- numprobes = 0;
+- }
+-
+- /* If the input config or adapter has the number of Probes set, add tlv */
+- if (numprobes) {
+- pnumprobestlv = (struct mrvlietypes_numprobes *) ptlvpos;
+- pnumprobestlv->header.type = cpu_to_le16(TLV_TYPE_NUMPROBES);
+- pnumprobestlv->header.len = cpu_to_le16(2);
+- pnumprobestlv->numprobes = cpu_to_le16(numprobes);
+-
+- ptlvpos += sizeof(*pnumprobestlv);
+- }
+-
+- /*
+- * Set the output for the channel TLV to the address in the tlv buffer
+-	 * past any TLVs that were added in this function (SSID, numprobes).
+- * channel TLVs will be added past this for each scan command, preserving
+- * the TLVs that were previously added.
+- */
+- *ppchantlvout = (struct mrvlietypes_chanlistparamset *) ptlvpos;
+-
+- if (!puserscanin || !puserscanin->chanlist[0].channumber) {
+- /* Create a default channel scan list */
+- lbs_deb_scan("creating full region channel list\n");
+- wlan_scan_create_channel_list(priv, pscanchanlist,
+- *pfilteredscan);
+- goto out;
+- }
+-
+- for (chanidx = 0;
+- chanidx < WLAN_IOCTL_USER_SCAN_CHAN_MAX
+- && puserscanin->chanlist[chanidx].channumber; chanidx++) {
+-
+- channel = puserscanin->chanlist[chanidx].channumber;
+- (pscanchanlist + chanidx)->channumber = channel;
+-
+- radiotype = puserscanin->chanlist[chanidx].radiotype;
+- (pscanchanlist + chanidx)->radiotype = radiotype;
+-
+- scantype = puserscanin->chanlist[chanidx].scantype;
+-
+- if (scantype == CMD_SCAN_TYPE_PASSIVE) {
+- (pscanchanlist +
+- chanidx)->chanscanmode.passivescan = 1;
+- } else {
+- (pscanchanlist +
+- chanidx)->chanscanmode.passivescan = 0;
+- }
+-
+- if (puserscanin->chanlist[chanidx].scantime) {
+- scandur = puserscanin->chanlist[chanidx].scantime;
+- } else {
+- if (scantype == CMD_SCAN_TYPE_PASSIVE) {
+- scandur = MRVDRV_PASSIVE_SCAN_CHAN_TIME;
+- } else {
+- scandur = MRVDRV_ACTIVE_SCAN_CHAN_TIME;
+- }
+- }
+-
+- (pscanchanlist + chanidx)->minscantime =
+- cpu_to_le16(scandur);
+- (pscanchanlist + chanidx)->maxscantime =
+- cpu_to_le16(scandur);
+- }
+-
+- /* Check if we are only scanning the current channel */
+- if ((chanidx == 1) &&
+- (puserscanin->chanlist[0].channumber ==
+- priv->adapter->curbssparams.channel)) {
+- *pscancurrentonly = 1;
+- lbs_deb_scan("scanning current channel only");
+- }
+-
+-out:
+- return pscancfgout;
++ size_t size = sizeof(struct chanscanparamset) * chan_count;
++ struct mrvlietypes_chanlistparamset *chan_tlv =
++ (struct mrvlietypes_chanlistparamset *) tlv;
++
++ chan_tlv->header.type = cpu_to_le16(TLV_TYPE_CHANLIST);
++ memcpy(chan_tlv->chanscanparam, chan_list, size);
++ chan_tlv->header.len = cpu_to_le16(size);
++ return sizeof(chan_tlv->header) + size;
+ }
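The CHANLIST TLV comment above shows one 7-byte parameter set per channel behind a type `01 01` header whose length is channel count times 7 (13 channels gives `5b 00`). A userspace sketch of that packing follows; `struct chan_param` is a simplified stand-in for the driver's `chanscanparamset`, and the bytes are stored by hand instead of via `cpu_to_le16()`.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified 7-byte stand-in for chanscanparamset, matching the
 * per-channel layout in the TLV comment: radio type, channel,
 * scan type, min scan time, max scan time. */
struct chan_param {
	uint8_t radiotype;
	uint8_t channumber;
	uint8_t scanmode;
	uint16_t minscantime;
	uint16_t maxscantime;
} __attribute__((packed));

/* Pack the CHANLIST TLV: type 0x0101, little-endian length, then the
 * channel parameter sets copied verbatim, as the driver helper does. */
static size_t add_chanlist_tlv(uint8_t *tlv, const struct chan_param *list,
			       int count)
{
	size_t size = sizeof(struct chan_param) * count;

	tlv[0] = 0x01;		/* TLV type 0x0101 (CHANLIST) */
	tlv[1] = 0x01;
	tlv[2] = size & 0xff;	/* length, little-endian */
	tlv[3] = size >> 8;
	memcpy(tlv + 4, list, size);
	return 4 + size;
}
```

With 13 channels the length field is 91 (`0x5b`), matching the `length 5b 00` line in the comment above.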
- case CMD_RET(CMD_GET_TSF):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- memcpy(priv->adapter->cur_cmd->pdata_buf,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ memcpy((void *)priv->cur_cmd->callback_arg,
- &resp->params.gettsf.tsfvalue, sizeof(u64));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
- case CMD_RET(CMD_BT_ACCESS):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->cur_cmd->pdata_buf)
-- memcpy(adapter->cur_cmd->pdata_buf,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (priv->cur_cmd->callback_arg)
-+ memcpy((void *)priv->cur_cmd->callback_arg,
- &resp->params.bt.addr1, 2 * ETH_ALEN);
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
- case CMD_RET(CMD_FWT_ACCESS):
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->cur_cmd->pdata_buf)
-- memcpy(adapter->cur_cmd->pdata_buf, &resp->params.fwt,
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ if (priv->cur_cmd->callback_arg)
-+ memcpy((void *)priv->cur_cmd->callback_arg, &resp->params.fwt,
- sizeof(resp->params.fwt));
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- break;
-- case CMD_RET(CMD_MESH_ACCESS):
-- if (adapter->cur_cmd->pdata_buf)
-- memcpy(adapter->cur_cmd->pdata_buf, &resp->params.mesh,
-- sizeof(resp->params.mesh));
-+ case CMD_RET(CMD_802_11_BEACON_CTRL):
-+ ret = lbs_ret_802_11_bcn_ctrl(priv, resp);
- break;
+-/**
+- * @brief Construct and send multiple scan config commands to the firmware
+- *
+- * Only used from wlan_scan_networks()
+- *
+- * Previous routines have created a wlan_scan_cmd_config with any requested
+- * TLVs. This function splits the channel TLV into maxchanperscan lists
+- * and sends the portion of the channel TLV along with the other TLVs
+- * to the wlan_cmd routines for execution in the firmware.
+
- default:
- lbs_deb_host("CMD_RESP: unknown cmd response 0x%04x\n",
-- resp->command);
-+ le16_to_cpu(resp->command));
- break;
++/*
++ * Add RATES TLV of the form
+ *
+- * @param priv A pointer to wlan_private structure
+- * @param maxchanperscan Maximum number channels to be included in each
+- * scan command sent to firmware
+- * @param filteredscan Flag indicating whether or not a BSSID or SSID
+- * filter is being used for the firmware command
+- * scan command sent to firmware
+- * @param pscancfgout Scan configuration used for this scan.
+- * @param pchantlvout Pointer in the pscancfgout where the channel TLV
+- * should start. This is past any other TLVs that
+- * must be sent down in each firmware command.
+- * @param pscanchanlist List of channels to scan in maxchanperscan segments
++ * TLV-ID RATES 01 00
++ * length 0e 00
++ * rates 82 84 8b 96 0c 12 18 24 30 48 60 6c
+ *
+- * @return 0 or error return otherwise
++ * The rates are in lbs_bg_rates[], but for the 802.11b
++ * rates the high bit isn't set.
+ */
+-static int wlan_scan_channel_list(wlan_private * priv,
+- int maxchanperscan,
+- u8 filteredscan,
+- struct wlan_scan_cmd_config * pscancfgout,
+- struct mrvlietypes_chanlistparamset * pchantlvout,
+- struct chanscanparamset * pscanchanlist,
+- const struct wlan_ioctl_user_scan_cfg * puserscanin,
+- int full_scan)
++static int lbs_scan_add_rates_tlv(u8 *tlv)
+ {
+- struct chanscanparamset *ptmpchan;
+- struct chanscanparamset *pstartchan;
+- u8 scanband;
+- int doneearly;
+- int tlvidx;
+- int ret = 0;
+- int scanned = 0;
+- union iwreq_data wrqu;
+-
+- lbs_deb_enter_args(LBS_DEB_SCAN, "maxchanperscan %d, filteredscan %d, "
+- "full_scan %d", maxchanperscan, filteredscan, full_scan);
+-
+- if (!pscancfgout || !pchantlvout || !pscanchanlist) {
+- lbs_deb_scan("pscancfgout, pchantlvout or "
+- "pscanchanlist is NULL\n");
+- ret = -1;
+- goto out;
+- }
+-
+- pchantlvout->header.type = cpu_to_le16(TLV_TYPE_CHANLIST);
+-
+- /* Set the temp channel struct pointer to the start of the desired list */
+- ptmpchan = pscanchanlist;
+-
+- if (priv->adapter->last_scanned_channel && !puserscanin)
+- ptmpchan += priv->adapter->last_scanned_channel;
+-
+- /* Loop through the desired channel list, sending a new firmware scan
+- * commands for each maxchanperscan channels (or for 1,6,11 individually
+- * if configured accordingly)
+- */
+- while (ptmpchan->channumber) {
+-
+- tlvidx = 0;
+- pchantlvout->header.len = 0;
+- scanband = ptmpchan->radiotype;
+- pstartchan = ptmpchan;
+- doneearly = 0;
+-
+- /* Construct the channel TLV for the scan command. Continue to
+- * insert channel TLVs until:
+- * - the tlvidx hits the maximum configured per scan command
+- * - the next channel to insert is 0 (end of desired channel list)
+- * - doneearly is set (controlling individual scanning of 1,6,11)
+- */
+- while (tlvidx < maxchanperscan && ptmpchan->channumber
+- && !doneearly && scanned < 2) {
+-
+- lbs_deb_scan("channel %d, radio %d, passive %d, "
+- "dischanflt %d, maxscantime %d\n",
+- ptmpchan->channumber,
+- ptmpchan->radiotype,
+- ptmpchan->chanscanmode.passivescan,
+- ptmpchan->chanscanmode.disablechanfilt,
+- ptmpchan->maxscantime);
+-
+- /* Copy the current channel TLV to the command being prepared */
+- memcpy(pchantlvout->chanscanparam + tlvidx,
+- ptmpchan, sizeof(pchantlvout->chanscanparam));
+-
+- /* Increment the TLV header length by the size appended */
+- /* Ew, it would be _so_ nice if we could just declare the
+- variable little-endian and let GCC handle it for us */
+- pchantlvout->header.len =
+- cpu_to_le16(le16_to_cpu(pchantlvout->header.len) +
+- sizeof(pchantlvout->chanscanparam));
+-
+- /*
+- * The tlv buffer length is set to the number of bytes of the
+- * between the channel tlv pointer and the start of the
+- * tlv buffer. This compensates for any TLVs that were appended
+- * before the channel list.
+- */
+- pscancfgout->tlvbufferlen = ((u8 *) pchantlvout
+- - pscancfgout->tlvbuffer);
+-
+- /* Add the size of the channel tlv header and the data length */
+- pscancfgout->tlvbufferlen +=
+- (sizeof(pchantlvout->header)
+- + le16_to_cpu(pchantlvout->header.len));
+-
+- /* Increment the index to the channel tlv we are constructing */
+- tlvidx++;
+-
+- doneearly = 0;
+-
+- /* Stop the loop if the *current* channel is in the 1,6,11 set
+- * and we are not filtering on a BSSID or SSID.
+- */
+- if (!filteredscan && (ptmpchan->channumber == 1
+- || ptmpchan->channumber == 6
+- || ptmpchan->channumber == 11)) {
+- doneearly = 1;
+- }
+-
+- /* Increment the tmp pointer to the next channel to be scanned */
+- ptmpchan++;
+- scanned++;
+-
+- /* Stop the loop if the *next* channel is in the 1,6,11 set.
+- * This will cause it to be the only channel scanned on the next
+- * interation
+- */
+- if (!filteredscan && (ptmpchan->channumber == 1
+- || ptmpchan->channumber == 6
+- || ptmpchan->channumber == 11)) {
+- doneearly = 1;
+- }
+- }
+-
+- /* Send the scan command to the firmware with the specified cfg */
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SCAN, 0,
+- 0, 0, pscancfgout);
+- if (scanned >= 2 && !full_scan) {
+- ret = 0;
+- goto done;
+- }
+- scanned = 0;
+- }
+-
+-done:
+- priv->adapter->last_scanned_channel = ptmpchan->channumber;
+-
+- if (priv->adapter->last_scanned_channel) {
+- /* Schedule the next part of the partial scan */
+- if (!full_scan && !priv->adapter->surpriseremoved) {
+- cancel_delayed_work(&priv->scan_work);
+- queue_delayed_work(priv->work_thread, &priv->scan_work,
+- msecs_to_jiffies(300));
+- }
+- } else {
+- /* All done, tell userspace the scan table has been updated */
+- memset(&wrqu, 0, sizeof(union iwreq_data));
+- wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
++ int i;
++ struct mrvlietypes_ratesparamset *rate_tlv =
++ (struct mrvlietypes_ratesparamset *) tlv;
++
++ rate_tlv->header.type = cpu_to_le16(TLV_TYPE_RATES);
++ tlv += sizeof(rate_tlv->header);
++ for (i = 0; i < MAX_RATES; i++) {
++ *tlv = lbs_bg_rates[i];
++ if (*tlv == 0)
++ break;
++ /* This code makes sure that the 802.11b rates (1 MBit/s, 2
++ MBit/s, 5.5 MBit/s and 11 MBit/s) get the high bit set.
++ Note that the values are MBit/s * 2, to mark them as
++ basic rates so that the firmware likes it better */
++ if (*tlv == 0x02 || *tlv == 0x04 ||
++ *tlv == 0x0b || *tlv == 0x16)
++ *tlv |= 0x80;
++ tlv++;
}
- lbs_deb_leave(LBS_DEB_HOST);
- return ret;
+-
+-out:
+- lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
+- return ret;
++ rate_tlv->header.len = cpu_to_le16(i);
++ return sizeof(rate_tlv->header) + i;
}
--int libertas_process_rx_command(wlan_private * priv)
-+int lbs_process_rx_command(struct lbs_private *priv)
++
+ /*
+- * Only used from wlan_scan_networks()
+-*/
+-static void clear_selected_scan_list_entries(wlan_adapter *adapter,
+- const struct wlan_ioctl_user_scan_cfg *scan_cfg)
++ * Generate the CMD_802_11_SCAN command with the proper tlv
++ * for a bunch of channels.
++ */
++static int lbs_do_scan(struct lbs_private *priv,
++ u8 bsstype,
++ struct chanscanparamset *chan_list,
++ int chan_count,
++ const struct lbs_ioctl_user_scan_cfg *user_cfg)
{
-- u16 respcmd;
-- struct cmd_ds_command *resp;
-- wlan_adapter *adapter = priv->adapter;
-+ uint16_t respcmd, curcmd;
-+ struct cmd_header *resp;
- int ret = 0;
-- ulong flags;
-- u16 result;
-+ unsigned long flags;
-+ uint16_t result;
+- struct bss_descriptor *bss;
+- struct bss_descriptor *safe;
+- u32 clear_ssid_flag = 0, clear_bssid_flag = 0;
++ int ret = -ENOMEM;
++ struct lbs_scan_cmd_config *scan_cmd;
++ u8 *tlv; /* pointer into our current, growing TLV storage area */
- lbs_deb_enter(LBS_DEB_HOST);
+- lbs_deb_enter(LBS_DEB_SCAN);
++ lbs_deb_enter_args(LBS_DEB_SCAN, "bsstype %d, chanlist[].chan %d, "
++ "chan_count %d",
++ bsstype, chan_list[0].channumber, chan_count);
-- /* Now we got response from FW, cancel the command timer */
-- del_timer(&adapter->command_timer);
+- if (!scan_cfg)
++ /* create the fixed part for scan command */
++ scan_cmd = kzalloc(MAX_SCAN_CFG_ALLOC, GFP_KERNEL);
++ if (scan_cmd == NULL)
+ goto out;
+-
+- if (scan_cfg->clear_ssid && scan_cfg->ssid_len)
+- clear_ssid_flag = 1;
+-
+- if (scan_cfg->clear_bssid
+- && (compare_ether_addr(scan_cfg->bssid, &zeromac[0]) != 0)
+- && (compare_ether_addr(scan_cfg->bssid, &bcastmac[0]) != 0)) {
+- clear_bssid_flag = 1;
+- }
+-
+- if (!clear_ssid_flag && !clear_bssid_flag)
+- goto out;
-
- mutex_lock(&adapter->lock);
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-+ mutex_lock(&priv->lock);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-
-- if (!adapter->cur_cmd) {
-+ if (!priv->cur_cmd) {
- lbs_deb_host("CMD_RESP: cur_cmd is NULL\n");
- ret = -1;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- goto done;
- }
-- resp = (struct cmd_ds_command *)(adapter->cur_cmd->bufvirtualaddr);
-+
-+ resp = (void *)priv->upld_buf;
-+
-+ curcmd = le16_to_cpu(resp->command);
-
- respcmd = le16_to_cpu(resp->command);
- result = le16_to_cpu(resp->result);
-
-- lbs_deb_host("CMD_RESP: response 0x%04x, size %d, jiffies %lu\n",
-- respcmd, priv->upld_len, jiffies);
-- lbs_deb_hex(LBS_DEB_HOST, "CMD_RESP", adapter->cur_cmd->bufvirtualaddr,
-- priv->upld_len);
+- list_for_each_entry_safe (bss, safe, &adapter->network_list, list) {
+- u32 clear = 0;
-
-- if (!(respcmd & 0x8000)) {
-- lbs_deb_host("invalid response!\n");
-- adapter->cur_cmd_retcode = -1;
-- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
-- adapter->nr_cmd_pending--;
-- adapter->cur_cmd = NULL;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ lbs_deb_host("CMD_RESP: response 0x%04x, seq %d, size %d, jiffies %lu\n",
-+ respcmd, le16_to_cpu(resp->seqnum), priv->upld_len, jiffies);
-+ lbs_deb_hex(LBS_DEB_HOST, "CMD_RESP", (void *) resp, priv->upld_len);
+- /* Check for an SSID match */
+- if ( clear_ssid_flag
+- && (bss->ssid_len == scan_cfg->ssid_len)
+- && !memcmp(bss->ssid, scan_cfg->ssid, bss->ssid_len))
+- clear = 1;
+-
+- /* Check for a BSSID match */
+- if ( clear_bssid_flag
+- && !compare_ether_addr(bss->bssid, scan_cfg->bssid))
+- clear = 1;
+-
+- if (clear) {
+- list_move_tail (&bss->list, &adapter->network_free_list);
+- clear_bss_descriptor(bss);
+- }
+- }
+- mutex_unlock(&adapter->lock);
++ tlv = scan_cmd->tlvbuffer;
++ if (user_cfg)
++ memcpy(scan_cmd->bssid, user_cfg->bssid, ETH_ALEN);
++ scan_cmd->bsstype = bsstype;
+
-+ if (resp->seqnum != resp->seqnum) {
-+ lbs_pr_info("Received CMD_RESP with invalid sequence %d (expected %d)\n",
-+ le16_to_cpu(resp->seqnum), le16_to_cpu(resp->seqnum));
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ ret = -1;
-+ goto done;
-+ }
-+ if (respcmd != CMD_RET(curcmd) &&
-+ respcmd != CMD_802_11_ASSOCIATE && curcmd != CMD_RET_802_11_ASSOCIATE) {
-+ lbs_pr_info("Invalid CMD_RESP %x to command %x!\n", respcmd, curcmd);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ ret = -1;
-+ goto done;
-+ }
++ /* add TLVs */
++ if (user_cfg && user_cfg->ssid_len)
++ tlv += lbs_scan_add_ssid_tlv(tlv, user_cfg);
++ if (chan_list && chan_count)
++ tlv += lbs_scan_add_chanlist_tlv(tlv, chan_list, chan_count);
++ tlv += lbs_scan_add_rates_tlv(tlv);
+
-+ if (resp->result == cpu_to_le16(0x0004)) {
-+ /* 0x0004 means -EAGAIN. Drop the response, let it time out
-+ and be resubmitted */
-+ lbs_pr_info("Firmware returns DEFER to command %x. Will let it time out...\n",
-+ le16_to_cpu(resp->command));
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
- ret = -1;
- goto done;
- }
-
-+ /* Now we got response from FW, cancel the command timer */
-+ del_timer(&priv->command_timer);
-+ priv->cmd_timed_out = 0;
-+ if (priv->nr_retries) {
-+ lbs_pr_info("Received result %x to command %x after %d retries\n",
-+ result, curcmd, priv->nr_retries);
-+ priv->nr_retries = 0;
-+ }
++ /* This is the final data we are about to send */
++ scan_cmd->tlvbufferlen = tlv - scan_cmd->tlvbuffer;
++ lbs_deb_hex(LBS_DEB_SCAN, "SCAN_CMD", (void *)scan_cmd, 1+6);
++ lbs_deb_hex(LBS_DEB_SCAN, "SCAN_TLV", scan_cmd->tlvbuffer,
++ scan_cmd->tlvbufferlen);
+
- /* Store the response code to cur_cmd_retcode. */
-- adapter->cur_cmd_retcode = result;;
-+ priv->cur_cmd_retcode = result;
-
- if (respcmd == CMD_RET(CMD_802_11_PS_MODE)) {
-- struct cmd_ds_802_11_ps_mode *psmode = &resp->params.psmode;
-+ struct cmd_ds_802_11_ps_mode *psmode = (void *) &resp[1];
- u16 action = le16_to_cpu(psmode->action);
-
- lbs_deb_host(
-@@ -774,54 +623,45 @@ int libertas_process_rx_command(wlan_private * priv)
- /*
- * We should not re-try enter-ps command in
- * ad-hoc mode. It takes place in
-- * libertas_execute_next_command().
-+ * lbs_execute_next_command().
- */
-- if (adapter->mode == IW_MODE_ADHOC &&
-+ if (priv->mode == IW_MODE_ADHOC &&
- action == CMD_SUBCMD_ENTER_PS)
-- adapter->psmode = WLAN802_11POWERMODECAM;
-+ priv->psmode = LBS802_11POWERMODECAM;
- } else if (action == CMD_SUBCMD_ENTER_PS) {
-- adapter->needtowakeup = 0;
-- adapter->psstate = PS_STATE_AWAKE;
-+ priv->needtowakeup = 0;
-+ priv->psstate = PS_STATE_AWAKE;
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SCAN, 0,
++ CMD_OPTION_WAITFORRSP, 0, scan_cmd);
+ out:
+- lbs_deb_leave(LBS_DEB_SCAN);
++ kfree(scan_cmd);
++ lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
++ return ret;
+ }
- lbs_deb_host("CMD_RESP: ENTER_PS command response\n");
-- if (adapter->connect_status != LIBERTAS_CONNECTED) {
-+ if (priv->connect_status != LBS_CONNECTED) {
- /*
- * When Deauth Event received before Enter_PS command
- * response, We need to wake up the firmware.
- */
- lbs_deb_host(
-- "disconnected, invoking libertas_ps_wakeup\n");
-+ "disconnected, invoking lbs_ps_wakeup\n");
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-- mutex_unlock(&adapter->lock);
-- libertas_ps_wakeup(priv, 0);
-- mutex_lock(&adapter->lock);
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ mutex_unlock(&priv->lock);
-+ lbs_ps_wakeup(priv, 0);
-+ mutex_lock(&priv->lock);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
- }
- } else if (action == CMD_SUBCMD_EXIT_PS) {
-- adapter->needtowakeup = 0;
-- adapter->psstate = PS_STATE_FULL_POWER;
-+ priv->needtowakeup = 0;
-+ priv->psstate = PS_STATE_FULL_POWER;
- lbs_deb_host("CMD_RESP: EXIT_PS command response\n");
- } else {
- lbs_deb_host("CMD_RESP: PS action 0x%X\n", action);
- }
+@@ -812,32 +529,32 @@ out:
+ * order to send the appropriate scan commands to firmware to populate or
+ * update the internal driver scan table
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param puserscanin Pointer to the input configuration for the requested
+ * scan.
+- * @param full_scan ???
+ *
+ * @return 0 or < 0 if error
+ */
+-int wlan_scan_networks(wlan_private * priv,
+- const struct wlan_ioctl_user_scan_cfg * puserscanin,
++int lbs_scan_networks(struct lbs_private *priv,
++ const struct lbs_ioctl_user_scan_cfg *user_cfg,
+ int full_scan)
+ {
+- wlan_adapter * adapter = priv->adapter;
+- struct mrvlietypes_chanlistparamset *pchantlvout;
+- struct chanscanparamset * scan_chan_list = NULL;
+- struct wlan_scan_cmd_config * scan_cfg = NULL;
+- u8 filteredscan;
+- u8 scancurrentchanonly;
+- int maxchanperscan;
+- int ret;
++ int ret = -ENOMEM;
++ struct chanscanparamset *chan_list;
++ struct chanscanparamset *curr_chans;
++ int chan_count;
++ u8 bsstype = CMD_BSS_TYPE_ANY;
++ int numchannels = MRVDRV_CHANNELS_PER_SCAN_CMD;
++ int filteredscan = 0;
++ union iwreq_data wrqu;
+ #ifdef CONFIG_LIBERTAS_DEBUG
+- struct bss_descriptor * iter_bss;
++ struct bss_descriptor *iter;
+ int i = 0;
+ DECLARE_MAC_BUF(mac);
+ #endif
-- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
-- adapter->nr_cmd_pending--;
-- adapter->cur_cmd = NULL;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ lbs_complete_command(priv, priv->cur_cmd, result);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- lbs_deb_enter_args(LBS_DEB_SCAN, "full_scan %d", full_scan);
++ lbs_deb_enter_args(LBS_DEB_SCAN, "full_scan %d",
++ full_scan);
- ret = 0;
- goto done;
+ /* Cancel any partial outstanding partial scans if this scan
+ * is a full scan.
+@@ -845,90 +562,138 @@ int wlan_scan_networks(wlan_private * priv,
+ if (full_scan && delayed_work_pending(&priv->scan_work))
+ cancel_delayed_work(&priv->scan_work);
+
+- scan_chan_list = kzalloc(sizeof(struct chanscanparamset) *
+- WLAN_IOCTL_USER_SCAN_CHAN_MAX, GFP_KERNEL);
+- if (scan_chan_list == NULL) {
+- ret = -ENOMEM;
++ /* Determine scan parameters */
++ if (user_cfg) {
++ if (user_cfg->bsstype)
++ bsstype = user_cfg->bsstype;
++ if (compare_ether_addr(user_cfg->bssid, &zeromac[0]) != 0) {
++ numchannels = MRVDRV_MAX_CHANNELS_PER_SCAN;
++ filteredscan = 1;
++ }
++ }
++ lbs_deb_scan("numchannels %d, bsstype %d, "
++ "filteredscan %d\n",
++ numchannels, bsstype, filteredscan);
++
++ /* Create list of channels to scan */
++ chan_list = kzalloc(sizeof(struct chanscanparamset) *
++ LBS_IOCTL_USER_SCAN_CHAN_MAX, GFP_KERNEL);
++ if (!chan_list) {
++ lbs_pr_alert("SCAN: chan_list empty\n");
+ goto out;
}
-- if (adapter->cur_cmd->cmdflags & CMD_F_HOSTCMD) {
-- /* Copy the response back to response buffer */
-- memcpy(adapter->cur_cmd->pdata_buf, resp,
-- le16_to_cpu(resp->size));
-- adapter->cur_cmd->cmdflags &= ~CMD_F_HOSTCMD;
-- }
--
- /* If the command is not successful, cleanup and return failure */
- if ((result != 0 || !(respcmd & 0x8000))) {
- lbs_deb_host("CMD_RESP: error 0x%04x in command reply 0x%04x\n",
-@@ -836,106 +676,132 @@ int libertas_process_rx_command(wlan_private * priv)
- break;
+- scan_cfg = wlan_scan_setup_scan_config(priv,
+- puserscanin,
+- &pchantlvout,
+- scan_chan_list,
+- &maxchanperscan,
+- &filteredscan,
+- &scancurrentchanonly);
+- if (scan_cfg == NULL) {
+- ret = -ENOMEM;
+- goto out;
++ /* We want to scan all channels */
++ chan_count = lbs_scan_create_channel_list(priv, chan_list,
++ filteredscan);
++
++ netif_stop_queue(priv->dev);
++ netif_carrier_off(priv->dev);
++ if (priv->mesh_dev) {
++ netif_stop_queue(priv->mesh_dev);
++ netif_carrier_off(priv->mesh_dev);
++ }
++
++ /* Prepare to continue an interrupted scan */
++ lbs_deb_scan("chan_count %d, last_scanned_channel %d\n",
++ chan_count, priv->last_scanned_channel);
++ curr_chans = chan_list;
++ /* advance channel list by already-scanned-channels */
++ if (priv->last_scanned_channel > 0) {
++ curr_chans += priv->last_scanned_channel;
++ chan_count -= priv->last_scanned_channel;
+ }
- }
--
-- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
-- adapter->nr_cmd_pending--;
-- adapter->cur_cmd = NULL;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ lbs_complete_command(priv, priv->cur_cmd, result);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- clear_selected_scan_list_entries(adapter, puserscanin);
++ /* Send scan command(s)
++ * numchannels contains the number of channels we should maximally scan
++ * chan_count is the total number of channels to scan
++ */
- ret = -1;
- goto done;
- }
+- /* Keep the data path active if we are only scanning our current channel */
+- if (!scancurrentchanonly) {
+- netif_stop_queue(priv->dev);
+- netif_carrier_off(priv->dev);
+- if (priv->mesh_dev) {
+- netif_stop_queue(priv->mesh_dev);
+- netif_carrier_off(priv->mesh_dev);
++ while (chan_count) {
++ int to_scan = min(numchannels, chan_count);
++ lbs_deb_scan("scanning %d of %d channels\n",
++ to_scan, chan_count);
++ ret = lbs_do_scan(priv, bsstype, curr_chans,
++ to_scan, user_cfg);
++ if (ret) {
++ lbs_pr_err("SCAN_CMD failed\n");
++ goto out2;
++ }
++ curr_chans += to_scan;
++ chan_count -= to_scan;
++
++ /* somehow schedule the next part of the scan */
++ if (chan_count &&
++ !full_scan &&
++ !priv->surpriseremoved) {
++ /* -1 marks just that we're currently scanning */
++ if (priv->last_scanned_channel < 0)
++ priv->last_scanned_channel = to_scan;
++ else
++ priv->last_scanned_channel += to_scan;
++ cancel_delayed_work(&priv->scan_work);
++ queue_delayed_work(priv->work_thread, &priv->scan_work,
++ msecs_to_jiffies(300));
++ /* skip over GIWSCAN event */
++ goto out;
+ }
+- }
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+- ret = wlan_scan_channel_list(priv,
+- maxchanperscan,
+- filteredscan,
+- scan_cfg,
+- pchantlvout,
+- scan_chan_list,
+- puserscanin,
+- full_scan);
++ }
++ memset(&wrqu, 0, sizeof(union iwreq_data));
++ wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
-- ret = handle_cmd_response(respcmd, resp, priv);
-+ if (priv->cur_cmd && priv->cur_cmd->callback) {
-+ ret = priv->cur_cmd->callback(priv, priv->cur_cmd->callback_arg,
-+ resp);
-+ } else
-+ ret = handle_cmd_response(priv, 0, resp);
+ #ifdef CONFIG_LIBERTAS_DEBUG
+ /* Dump the scan table */
+- mutex_lock(&adapter->lock);
+- lbs_deb_scan("The scan table contains:\n");
+- list_for_each_entry (iter_bss, &adapter->network_list, list) {
+- lbs_deb_scan("scan %02d, %s, RSSI, %d, SSID '%s'\n",
+- i++, print_mac(mac, iter_bss->bssid), (s32) iter_bss->rssi,
+- escape_essid(iter_bss->ssid, iter_bss->ssid_len));
+- }
+- mutex_unlock(&adapter->lock);
++ mutex_lock(&priv->lock);
++ lbs_deb_scan("scan table:\n");
++ list_for_each_entry(iter, &priv->network_list, list)
++ lbs_deb_scan("%02d: BSSID %s, RSSI %d, SSID '%s'\n",
++ i++, print_mac(mac, iter->bssid), (s32) iter->rssi,
++ escape_essid(iter->ssid, iter->ssid_len));
++ mutex_unlock(&priv->lock);
+ #endif
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- if (adapter->cur_cmd) {
-+ spin_lock_irqsave(&priv->driver_lock, flags);
+- if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
++out2:
++ priv->last_scanned_channel = 0;
+
-+ if (priv->cur_cmd) {
- /* Clean up and Put current command back to cmdfreeq */
-- __libertas_cleanup_and_insert_cmd(priv, adapter->cur_cmd);
-- adapter->nr_cmd_pending--;
-- WARN_ON(adapter->nr_cmd_pending > 128);
-- adapter->cur_cmd = NULL;
-+ lbs_complete_command(priv, priv->cur_cmd, result);
++out:
++ if (priv->connect_status == LBS_CONNECTED) {
+ netif_carrier_on(priv->dev);
+- netif_wake_queue(priv->dev);
+- if (priv->mesh_dev) {
+- netif_carrier_on(priv->mesh_dev);
++ if (!priv->tx_pending_len)
++ netif_wake_queue(priv->dev);
++ }
++ if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED)) {
++ netif_carrier_on(priv->mesh_dev);
++ if (!priv->tx_pending_len)
+ netif_wake_queue(priv->mesh_dev);
+- }
}
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+-
+-out:
+- if (scan_cfg)
+- kfree(scan_cfg);
+-
+- if (scan_chan_list)
+- kfree(scan_chan_list);
++ kfree(chan_list);
- done:
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
- lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
+ lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
return ret;
}
--int libertas_process_event(wlan_private * priv)
-+static int lbs_send_confirmwake(struct lbs_private *priv)
-+{
-+ struct cmd_header *cmd = &priv->lbs_ps_confirm_wake;
-+ int ret = 0;
-+
-+ lbs_deb_enter(LBS_DEB_HOST);
-+
-+ cmd->command = cpu_to_le16(CMD_802_11_WAKEUP_CONFIRM);
-+ cmd->size = cpu_to_le16(sizeof(*cmd));
-+ cmd->seqnum = cpu_to_le16(++priv->seqnum);
-+ cmd->result = 0;
+
-+ lbs_deb_host("SEND_WAKEC_CMD: before download\n");
-+
-+ lbs_deb_hex(LBS_DEB_HOST, "wake confirm command", (void *)cmd, sizeof(*cmd));
+
-+ ret = priv->hw_host_to_card(priv, MVMS_CMD, (void *)cmd, sizeof(*cmd));
-+ if (ret)
-+ lbs_pr_alert("SEND_WAKEC_CMD: Host to Card failed for Confirm Wake\n");
+
-+ lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
-+ return ret;
-+}
++/*********************************************************************/
++/* */
++/* Result interpretation */
++/* */
++/*********************************************************************/
+
-+int lbs_process_event(struct lbs_private *priv)
+ /**
+ * @brief Interpret a BSS scan response returned from the firmware
+ *
+ * Parse the various fixed fields and IEs passed back for a a BSS probe
+- * response or beacon from the scan command. Record information as needed
+- * in the scan table struct bss_descriptor for that entry.
++ * response or beacon from the scan command. Record information as needed
++ * in the scan table struct bss_descriptor for that entry.
+ *
+ * @param bss Output parameter: Pointer to the BSS Entry
+ *
+ * @return 0 or -1
+ */
+-static int libertas_process_bss(struct bss_descriptor * bss,
++static int lbs_process_bss(struct bss_descriptor *bss,
+ u8 ** pbeaconinfo, int *bytesleft)
{
- int ret = 0;
-- wlan_adapter *adapter = priv->adapter;
- u32 eventcause;
+ struct ieeetypes_fhparamset *pFH;
+@@ -946,7 +711,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- lbs_deb_enter(LBS_DEB_CMD);
+ if (*bytesleft >= sizeof(beaconsize)) {
+ /* Extract & convert beacon size from the command buffer */
+- beaconsize = le16_to_cpu(get_unaligned((u16 *)*pbeaconinfo));
++ beaconsize = le16_to_cpu(get_unaligned((__le16 *)*pbeaconinfo));
+ *bytesleft -= sizeof(beaconsize);
+ *pbeaconinfo += sizeof(beaconsize);
+ }
+@@ -967,7 +732,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ *bytesleft -= beaconsize;
-- spin_lock_irq(&adapter->driver_lock);
-- eventcause = adapter->eventcause;
-- spin_unlock_irq(&adapter->driver_lock);
-+ spin_lock_irq(&priv->driver_lock);
-+ eventcause = priv->eventcause >> SBI_EVENT_CAUSE_SHIFT;
-+ spin_unlock_irq(&priv->driver_lock);
+ memcpy(bss->bssid, pos, ETH_ALEN);
+- lbs_deb_scan("process_bss: AP BSSID %s\n", print_mac(mac, bss->bssid));
++ lbs_deb_scan("process_bss: BSSID %s\n", print_mac(mac, bss->bssid));
+ pos += ETH_ALEN;
-- lbs_deb_cmd("event cause 0x%x\n", eventcause);
-+ lbs_deb_cmd("event cause %d\n", eventcause);
+ if ((end - pos) < 12) {
+@@ -983,7 +748,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-- switch (eventcause >> SBI_EVENT_CAUSE_SHIFT) {
-+ switch (eventcause) {
- case MACREG_INT_CODE_LINK_SENSED:
- lbs_deb_cmd("EVENT: MACREG_INT_CODE_LINK_SENSED\n");
- break;
+ /* RSSI is 1 byte long */
+ bss->rssi = *pos;
+- lbs_deb_scan("process_bss: RSSI=%02X\n", *pos);
++ lbs_deb_scan("process_bss: RSSI %d\n", *pos);
+ pos++;
- case MACREG_INT_CODE_DEAUTHENTICATED:
- lbs_deb_cmd("EVENT: deauthenticated\n");
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
- break;
+ /* time stamp is 8 bytes long */
+@@ -995,18 +760,18 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- case MACREG_INT_CODE_DISASSOCIATED:
- lbs_deb_cmd("EVENT: disassociated\n");
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
- break;
+ /* capability information is 2 bytes long */
+ bss->capability = le16_to_cpup((void *) pos);
+- lbs_deb_scan("process_bss: capabilities = 0x%4X\n", bss->capability);
++ lbs_deb_scan("process_bss: capabilities 0x%04x\n", bss->capability);
+ pos += 2;
-- case MACREG_INT_CODE_LINK_LOSE_NO_SCAN:
-+ case MACREG_INT_CODE_LINK_LOST_NO_SCAN:
- lbs_deb_cmd("EVENT: link lost\n");
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
- break;
+ if (bss->capability & WLAN_CAPABILITY_PRIVACY)
+- lbs_deb_scan("process_bss: AP WEP enabled\n");
++ lbs_deb_scan("process_bss: WEP enabled\n");
+ if (bss->capability & WLAN_CAPABILITY_IBSS)
+ bss->mode = IW_MODE_ADHOC;
+ else
+ bss->mode = IW_MODE_INFRA;
- case MACREG_INT_CODE_PS_SLEEP:
- lbs_deb_cmd("EVENT: sleep\n");
+ /* rest of the current buffer are IE's */
+- lbs_deb_scan("process_bss: IE length for this AP = %zd\n", end - pos);
++ lbs_deb_scan("process_bss: IE len %zd\n", end - pos);
+ lbs_deb_hex(LBS_DEB_SCAN, "process_bss: IE info", pos, end - pos);
- /* handle unexpected PS SLEEP event */
-- if (adapter->psstate == PS_STATE_FULL_POWER) {
-+ if (priv->psstate == PS_STATE_FULL_POWER) {
- lbs_deb_cmd(
- "EVENT: in FULL POWER mode, ignoreing PS_SLEEP\n");
+ /* process variable IE */
+@@ -1024,7 +789,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ case MFIE_TYPE_SSID:
+ bss->ssid_len = elem->len;
+ memcpy(bss->ssid, elem->data, elem->len);
+- lbs_deb_scan("ssid '%s', ssid length %u\n",
++ lbs_deb_scan("got SSID IE: '%s', len %u\n",
+ escape_essid(bss->ssid, bss->ssid_len),
+ bss->ssid_len);
+ break;
+@@ -1033,16 +798,14 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ n_basic_rates = min_t(u8, MAX_RATES, elem->len);
+ memcpy(bss->rates, elem->data, n_basic_rates);
+ got_basic_rates = 1;
++ lbs_deb_scan("got RATES IE\n");
break;
- }
-- adapter->psstate = PS_STATE_PRE_SLEEP;
-+ priv->psstate = PS_STATE_PRE_SLEEP;
-- libertas_ps_confirm_sleep(priv, (u16) adapter->psmode);
-+ lbs_ps_confirm_sleep(priv, (u16) priv->psmode);
+ case MFIE_TYPE_FH_SET:
+ pFH = (struct ieeetypes_fhparamset *) pos;
+ memmove(&bss->phyparamset.fhparamset, pFH,
+ sizeof(struct ieeetypes_fhparamset));
+-#if 0 /* I think we can store these LE */
+- bss->phyparamset.fhparamset.dwelltime
+- = le16_to_cpu(bss->phyparamset.fhparamset.dwelltime);
+-#endif
++ lbs_deb_scan("got FH IE\n");
+ break;
- break;
+ case MFIE_TYPE_DS_SET:
+@@ -1050,31 +813,31 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ bss->channel = pDS->currentchan;
+ memcpy(&bss->phyparamset.dsparamset, pDS,
+ sizeof(struct ieeetypes_dsparamset));
++ lbs_deb_scan("got DS IE, channel %d\n", bss->channel);
+ break;
-+ case MACREG_INT_CODE_HOST_AWAKE:
-+ lbs_deb_cmd("EVENT: HOST_AWAKE\n");
-+ lbs_send_confirmwake(priv);
-+ break;
-+
- case MACREG_INT_CODE_PS_AWAKE:
- lbs_deb_cmd("EVENT: awake\n");
--
- /* handle unexpected PS AWAKE event */
-- if (adapter->psstate == PS_STATE_FULL_POWER) {
-+ if (priv->psstate == PS_STATE_FULL_POWER) {
- lbs_deb_cmd(
- "EVENT: In FULL POWER mode - ignore PS AWAKE\n");
+ case MFIE_TYPE_CF_SET:
+ pCF = (struct ieeetypes_cfparamset *) pos;
+ memcpy(&bss->ssparamset.cfparamset, pCF,
+ sizeof(struct ieeetypes_cfparamset));
++ lbs_deb_scan("got CF IE\n");
break;
- }
-- adapter->psstate = PS_STATE_AWAKE;
-+ priv->psstate = PS_STATE_AWAKE;
+ case MFIE_TYPE_IBSS_SET:
+ pibss = (struct ieeetypes_ibssparamset *) pos;
+- bss->atimwindow = le32_to_cpu(pibss->atimwindow);
++ bss->atimwindow = le16_to_cpu(pibss->atimwindow);
+ memmove(&bss->ssparamset.ibssparamset, pibss,
+ sizeof(struct ieeetypes_ibssparamset));
+-#if 0
+- bss->ssparamset.ibssparamset.atimwindow
+- = le16_to_cpu(bss->ssparamset.ibssparamset.atimwindow);
+-#endif
++ lbs_deb_scan("got IBSS IE\n");
+ break;
-- if (adapter->needtowakeup) {
-+ if (priv->needtowakeup) {
- /*
- * wait for the command processing to finish
- * before resuming sending
-- * adapter->needtowakeup will be set to FALSE
-- * in libertas_ps_wakeup()
-+ * priv->needtowakeup will be set to FALSE
-+ * in lbs_ps_wakeup()
+ case MFIE_TYPE_COUNTRY:
+ pcountryinfo = (struct ieeetypes_countryinfoset *) pos;
++ lbs_deb_scan("got COUNTRY IE\n");
+ if (pcountryinfo->len < sizeof(pcountryinfo->countrycode)
+ || pcountryinfo->len > 254) {
+ lbs_deb_scan("process_bss: 11D- Err "
+- "CountryInfo len =%d min=%zd max=254\n",
++ "CountryInfo len %d, min %zd, max 254\n",
+ pcountryinfo->len,
+ sizeof(pcountryinfo->countrycode));
+ ret = -1;
+@@ -1093,8 +856,11 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ * already found. Data rate IE should come before
+ * extended supported rate IE
*/
- lbs_deb_cmd("waking up ...\n");
-- libertas_ps_wakeup(priv, 0);
-+ lbs_ps_wakeup(priv, 0);
- }
- break;
+- if (!got_basic_rates)
++ lbs_deb_scan("got RATESEX IE\n");
++ if (!got_basic_rates) {
++ lbs_deb_scan("... but ignoring it\n");
+ break;
++ }
-@@ -979,24 +845,24 @@ int libertas_process_event(wlan_private * priv)
+ n_ex_rates = elem->len;
+ if (n_basic_rates + n_ex_rates > MAX_RATES)
+@@ -1113,24 +879,36 @@ static int libertas_process_bss(struct bss_descriptor * bss,
+ bss->wpa_ie_len = min(elem->len + 2,
+ MAX_WPA_IE_LEN);
+ memcpy(bss->wpa_ie, elem, bss->wpa_ie_len);
+- lbs_deb_hex(LBS_DEB_SCAN, "process_bss: WPA IE", bss->wpa_ie,
++ lbs_deb_scan("got WPA IE\n");
++ lbs_deb_hex(LBS_DEB_SCAN, "WPA IE", bss->wpa_ie,
+ elem->len);
+ } else if (elem->len >= MARVELL_MESH_IE_LENGTH &&
+ elem->data[0] == 0x00 &&
+ elem->data[1] == 0x50 &&
+ elem->data[2] == 0x43 &&
+ elem->data[3] == 0x04) {
++ lbs_deb_scan("got mesh IE\n");
+ bss->mesh = 1;
++ } else {
++ lbs_deb_scan("got generic IE: "
++ "%02x:%02x:%02x:%02x, len %d\n",
++ elem->data[0], elem->data[1],
++ elem->data[2], elem->data[3],
++ elem->len);
+ }
break;
- }
- lbs_pr_info("EVENT: MESH_AUTO_STARTED\n");
-- adapter->connect_status = LIBERTAS_CONNECTED;
-- if (priv->mesh_open == 1) {
-- netif_wake_queue(priv->mesh_dev);
-+ priv->mesh_connect_status = LBS_CONNECTED;
-+ if (priv->mesh_open) {
- netif_carrier_on(priv->mesh_dev);
-+ if (!priv->tx_pending_len)
-+ netif_wake_queue(priv->mesh_dev);
- }
-- adapter->mode = IW_MODE_ADHOC;
-+ priv->mode = IW_MODE_ADHOC;
- schedule_work(&priv->sync_channel);
- break;
-
- default:
-- lbs_pr_alert("EVENT: unknown event id 0x%04x\n",
-- eventcause >> SBI_EVENT_CAUSE_SHIFT);
-+ lbs_pr_alert("EVENT: unknown event id %d\n", eventcause);
- break;
- }
-
-- spin_lock_irq(&adapter->driver_lock);
-- adapter->eventcause = 0;
-- spin_unlock_irq(&adapter->driver_lock);
-+ spin_lock_irq(&priv->driver_lock);
-+ priv->eventcause = 0;
-+ spin_unlock_irq(&priv->driver_lock);
-
- lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
- return ret;
-diff --git a/drivers/net/wireless/libertas/debugfs.c b/drivers/net/wireless/libertas/debugfs.c
-index 0bda0b5..fd67b77 100644
---- a/drivers/net/wireless/libertas/debugfs.c
-+++ b/drivers/net/wireless/libertas/debugfs.c
-@@ -10,15 +10,16 @@
- #include "decl.h"
- #include "host.h"
- #include "debugfs.h"
-+#include "cmd.h"
-
--static struct dentry *libertas_dir = NULL;
-+static struct dentry *lbs_dir;
- static char *szStates[] = {
- "Connected",
- "Disconnected"
- };
-
- #ifdef PROC_DEBUG
--static void libertas_debug_init(wlan_private * priv, struct net_device *dev);
-+static void lbs_debug_init(struct lbs_private *priv, struct net_device *dev);
- #endif
-
- static int open_file_generic(struct inode *inode, struct file *file)
-@@ -35,19 +36,19 @@ static ssize_t write_file_dummy(struct file *file, const char __user *buf,
-
- static const size_t len = PAGE_SIZE;
--static ssize_t libertas_dev_info(struct file *file, char __user *userbuf,
-+static ssize_t lbs_dev_info(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
- {
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- size_t pos = 0;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
- ssize_t res;
+ case MFIE_TYPE_RSN:
++ lbs_deb_scan("got RSN IE\n");
+ bss->rsn_ie_len = min(elem->len + 2, MAX_WPA_IE_LEN);
+ memcpy(bss->rsn_ie, elem, bss->rsn_ie_len);
+- lbs_deb_hex(LBS_DEB_SCAN, "process_bss: RSN_IE", bss->rsn_ie, elem->len);
++ lbs_deb_hex(LBS_DEB_SCAN, "process_bss: RSN_IE",
++ bss->rsn_ie, elem->len);
+ break;
- pos += snprintf(buf+pos, len-pos, "state = %s\n",
-- szStates[priv->adapter->connect_status]);
-+ szStates[priv->connect_status]);
- pos += snprintf(buf+pos, len-pos, "region_code = %02x\n",
-- (u32) priv->adapter->regioncode);
-+ (u32) priv->regioncode);
+ default:
++ lbs_deb_scan("got IE 0x%04x, len %d\n",
++ elem->id, elem->len);
+ break;
+ }
- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+@@ -1139,7 +917,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-@@ -56,10 +57,10 @@ static ssize_t libertas_dev_info(struct file *file, char __user *userbuf,
- }
+ /* Timestamp */
+ bss->last_scanned = jiffies;
+- libertas_unset_basic_rate_flags(bss->rates, sizeof(bss->rates));
++ lbs_unset_basic_rate_flags(bss->rates, sizeof(bss->rates));
+ ret = 0;
--static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
-+static ssize_t lbs_getscantable(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+@@ -1153,13 +931,13 @@ done:
+ *
+ * Used in association code
+ *
+- * @param adapter A pointer to wlan_adapter
++ * @param priv A pointer to struct lbs_private
+ * @param bssid BSSID to find in the scan list
+ * @param mode Network mode: Infrastructure or IBSS
+ *
+ * @return index in BSSID list, or error return code (< 0)
+ */
+-struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
++struct bss_descriptor *lbs_find_bssid_in_list(struct lbs_private *priv,
+ u8 * bssid, u8 mode)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- size_t pos = 0;
- int numscansdone = 0, res;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-@@ -70,8 +71,8 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
- pos += snprintf(buf+pos, len-pos,
- "# | ch | rssi | bssid | cap | Qual | SSID \n");
-
-- mutex_lock(&priv->adapter->lock);
-- list_for_each_entry (iter_bss, &priv->adapter->network_list, list) {
+ struct bss_descriptor * iter_bss;
+@@ -1177,14 +955,14 @@ struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
+ * continue past a matched bssid that is not compatible in case there
+ * is an AP with multiple SSIDs assigned to the same BSSID
+ */
+- mutex_lock(&adapter->lock);
+- list_for_each_entry (iter_bss, &adapter->network_list, list) {
+ mutex_lock(&priv->lock);
+ list_for_each_entry (iter_bss, &priv->network_list, list) {
- u16 ibss = (iter_bss->capability & WLAN_CAPABILITY_IBSS);
- u16 privacy = (iter_bss->capability & WLAN_CAPABILITY_PRIVACY);
- u16 spectrum_mgmt = (iter_bss->capability & WLAN_CAPABILITY_SPECTRUM_MGMT);
-@@ -90,7 +91,7 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
-
- numscansdone++;
+ if (compare_ether_addr(iter_bss->bssid, bssid))
+ continue; /* bssid doesn't match */
+ switch (mode) {
+ case IW_MODE_INFRA:
+ case IW_MODE_ADHOC:
+- if (!is_network_compatible(adapter, iter_bss, mode))
++ if (!is_network_compatible(priv, iter_bss, mode))
+ break;
+ found_bss = iter_bss;
+ break;
+@@ -1193,7 +971,7 @@ struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
+ break;
+ }
}
-- mutex_unlock(&priv->adapter->lock);
+- mutex_unlock(&adapter->lock);
+ mutex_unlock(&priv->lock);
- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+ out:
+ lbs_deb_leave_args(LBS_DEB_SCAN, "found_bss %p", found_bss);
+@@ -1205,14 +983,14 @@ out:
+ *
+ * Used in association code
+ *
+- * @param adapter A pointer to wlan_adapter
++ * @param priv A pointer to struct lbs_private
+ * @param ssid SSID to find in the list
+ * @param bssid BSSID to qualify the SSID selection (if provided)
+ * @param mode Network mode: Infrastructure or IBSS
+ *
+ * @return index in BSSID list
+ */
+-struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
++struct bss_descriptor *lbs_find_ssid_in_list(struct lbs_private *priv,
+ u8 *ssid, u8 ssid_len, u8 * bssid, u8 mode,
+ int channel)
+ {
+@@ -1223,14 +1001,14 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
-@@ -98,83 +99,75 @@ static ssize_t libertas_getscantable(struct file *file, char __user *userbuf,
- return res;
- }
+ lbs_deb_enter(LBS_DEB_SCAN);
--static ssize_t libertas_sleepparams_write(struct file *file,
-+static ssize_t lbs_sleepparams_write(struct file *file,
- const char __user *user_buf, size_t count,
- loff_t *ppos)
- {
-- wlan_private *priv = file->private_data;
-- ssize_t buf_size, res;
-+ struct lbs_private *priv = file->private_data;
-+ ssize_t buf_size, ret;
-+ struct sleep_params sp;
- int p1, p2, p3, p4, p5, p6;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
- buf_size = min(count, len - 1);
- if (copy_from_user(buf, user_buf, buf_size)) {
-- res = -EFAULT;
-+ ret = -EFAULT;
- goto out_unlock;
- }
-- res = sscanf(buf, "%d %d %d %d %d %d", &p1, &p2, &p3, &p4, &p5, &p6);
-- if (res != 6) {
-- res = -EFAULT;
-+ ret = sscanf(buf, "%d %d %d %d %d %d", &p1, &p2, &p3, &p4, &p5, &p6);
-+ if (ret != 6) {
-+ ret = -EINVAL;
- goto out_unlock;
+- list_for_each_entry (iter_bss, &adapter->network_list, list) {
++ list_for_each_entry (iter_bss, &priv->network_list, list) {
+ if ( !tmp_oldest
+ || (iter_bss->last_scanned < tmp_oldest->last_scanned))
+ tmp_oldest = iter_bss;
+
+- if (libertas_ssid_cmp(iter_bss->ssid, iter_bss->ssid_len,
++ if (lbs_ssid_cmp(iter_bss->ssid, iter_bss->ssid_len,
+ ssid, ssid_len) != 0)
+ continue; /* ssid doesn't match */
+ if (bssid && compare_ether_addr(iter_bss->bssid, bssid) != 0)
+@@ -1241,7 +1019,7 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
+ switch (mode) {
+ case IW_MODE_INFRA:
+ case IW_MODE_ADHOC:
+- if (!is_network_compatible(adapter, iter_bss, mode))
++ if (!is_network_compatible(priv, iter_bss, mode))
+ break;
+
+ if (bssid) {
+@@ -1266,7 +1044,7 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
}
-- priv->adapter->sp.sp_error = p1;
-- priv->adapter->sp.sp_offset = p2;
-- priv->adapter->sp.sp_stabletime = p3;
-- priv->adapter->sp.sp_calcontrol = p4;
-- priv->adapter->sp.sp_extsleepclk = p5;
-- priv->adapter->sp.sp_reserved = p6;
--
-- res = libertas_prepare_and_send_command(priv,
-- CMD_802_11_SLEEP_PARAMS,
-- CMD_ACT_SET,
-- CMD_OPTION_WAITFORRSP, 0, NULL);
--
-- if (!res)
-- res = count;
-- else
-- res = -EINVAL;
-+ sp.sp_error = p1;
-+ sp.sp_offset = p2;
-+ sp.sp_stabletime = p3;
-+ sp.sp_calcontrol = p4;
-+ sp.sp_extsleepclk = p5;
-+ sp.sp_reserved = p6;
-+
-+ ret = lbs_cmd_802_11_sleep_params(priv, CMD_ACT_SET, &sp);
-+ if (!ret)
-+ ret = count;
-+ else if (ret > 0)
-+ ret = -EINVAL;
- out_unlock:
- free_page(addr);
-- return res;
-+ return ret;
+ out:
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+ lbs_deb_leave_args(LBS_DEB_SCAN, "found_bss %p", found_bss);
+ return found_bss;
}
-
--static ssize_t libertas_sleepparams_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_sleepparams_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+@@ -1277,12 +1055,13 @@ out:
+ * Search the scan table for the best SSID that also matches the current
+ * adapter network preference (infrastructure or adhoc)
+ *
+- * @param adapter A pointer to wlan_adapter
++ * @param priv A pointer to struct lbs_private
+ *
+ * @return index in BSSID list
+ */
+-static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * adapter,
+- u8 mode)
++static struct bss_descriptor *lbs_find_best_ssid_in_list(
++ struct lbs_private *priv,
++ u8 mode)
{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res;
-+ struct lbs_private *priv = file->private_data;
-+ ssize_t ret;
- size_t pos = 0;
-+ struct sleep_params sp;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
+ u8 bestrssi = 0;
+ struct bss_descriptor * iter_bss;
+@@ -1290,13 +1069,13 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
-- res = libertas_prepare_and_send_command(priv,
-- CMD_802_11_SLEEP_PARAMS,
-- CMD_ACT_GET,
-- CMD_OPTION_WAITFORRSP, 0, NULL);
-- if (res) {
-- res = -EFAULT;
-+ ret = lbs_cmd_802_11_sleep_params(priv, CMD_ACT_GET, &sp);
-+ if (ret)
- goto out_unlock;
-- }
+ lbs_deb_enter(LBS_DEB_SCAN);
-- pos += snprintf(buf, len, "%d %d %d %d %d %d\n", adapter->sp.sp_error,
-- adapter->sp.sp_offset, adapter->sp.sp_stabletime,
-- adapter->sp.sp_calcontrol, adapter->sp.sp_extsleepclk,
-- adapter->sp.sp_reserved);
-+ pos += snprintf(buf, len, "%d %d %d %d %d %d\n", sp.sp_error,
-+ sp.sp_offset, sp.sp_stabletime,
-+ sp.sp_calcontrol, sp.sp_extsleepclk,
-+ sp.sp_reserved);
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+- list_for_each_entry (iter_bss, &adapter->network_list, list) {
++ list_for_each_entry (iter_bss, &priv->network_list, list) {
+ switch (mode) {
+ case IW_MODE_INFRA:
+ case IW_MODE_ADHOC:
+- if (!is_network_compatible(adapter, iter_bss, mode))
++ if (!is_network_compatible(priv, iter_bss, mode))
+ break;
+ if (SCAN_RSSI(iter_bss->rssi) <= bestrssi)
+ break;
+@@ -1313,7 +1092,7 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
+ }
+ }
- out_unlock:
- free_page(addr);
-- return res;
-+ return ret;
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+ lbs_deb_leave_args(LBS_DEB_SCAN, "best_bss %p", best_bss);
+ return best_bss;
}
-
--static ssize_t libertas_extscan(struct file *file, const char __user *userbuf,
-+static ssize_t lbs_extscan(struct file *file, const char __user *userbuf,
- size_t count, loff_t *ppos)
+@@ -1323,27 +1102,24 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
+ *
+ * Used from association worker.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param pSSID A pointer to AP's ssid
+ *
+ * @return 0--success, otherwise--fail
+ */
+-int libertas_find_best_network_ssid(wlan_private * priv,
++int lbs_find_best_network_ssid(struct lbs_private *priv,
+ u8 *out_ssid, u8 *out_ssid_len, u8 preferred_mode, u8 *out_mode)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- union iwreq_data wrqu;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-@@ -186,7 +179,7 @@ static ssize_t libertas_extscan(struct file *file, const char __user *userbuf,
- goto out_unlock;
- }
+- wlan_adapter *adapter = priv->adapter;
+ int ret = -1;
+ struct bss_descriptor * found;
-- libertas_send_specific_ssid_scan(priv, buf, strlen(buf)-1, 0);
-+ lbs_send_specific_ssid_scan(priv, buf, strlen(buf)-1, 0);
+ lbs_deb_enter(LBS_DEB_SCAN);
- memset(&wrqu, 0, sizeof(union iwreq_data));
- wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
-@@ -196,45 +189,8 @@ out_unlock:
- return count;
+- wlan_scan_networks(priv, NULL, 1);
+- if (adapter->surpriseremoved)
++ lbs_scan_networks(priv, NULL, 1);
++ if (priv->surpriseremoved)
+ goto out;
+
+- wait_event_interruptible(adapter->cmd_pending, !adapter->nr_cmd_pending);
+-
+- found = libertas_find_best_ssid_in_list(adapter, preferred_mode);
++ found = lbs_find_best_ssid_in_list(priv, preferred_mode);
+ if (found && (found->ssid_len > 0)) {
+ memcpy(out_ssid, &found->ssid, IW_ESSID_MAX_SIZE);
+ *out_ssid_len = found->ssid_len;
+@@ -1356,57 +1132,24 @@ out:
+ return ret;
}
--static int libertas_parse_chan(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg, int dur)
+-/**
+- * @brief Scan Network
+- *
+- * @param dev A pointer to net_device structure
+- * @param info A pointer to iw_request_info structure
+- * @param vwrq A pointer to iw_param structure
+- * @param extra A pointer to extra data buf
+- *
+- * @return 0 --success, otherwise fail
+- */
+-int libertas_set_scan(struct net_device *dev, struct iw_request_info *info,
+- struct iw_param *vwrq, char *extra)
-{
-- char *start, *end, *hold, *str;
-- int i = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
-
-- start = strstr(buf, "chan=");
-- if (!start)
-- return -EINVAL;
-- start += 5;
-- end = strchr(start, ' ');
-- if (!end)
-- end = buf + count;
-- hold = kzalloc((end - start)+1, GFP_KERNEL);
-- if (!hold)
-- return -ENOMEM;
-- strncpy(hold, start, end - start);
-- hold[(end-start)+1] = '\0';
-- while(hold && (str = strsep(&hold, ","))) {
-- int chan;
-- char band, passive = 0;
-- sscanf(str, "%d%c%c", &chan, &band, &passive);
-- scan_cfg->chanlist[i].channumber = chan;
-- scan_cfg->chanlist[i].scantype = passive ? 1 : 0;
-- if (band == 'b' || band == 'g')
-- scan_cfg->chanlist[i].radiotype = 0;
-- else if (band == 'a')
-- scan_cfg->chanlist[i].radiotype = 1;
+- lbs_deb_enter(LBS_DEB_SCAN);
-
-- scan_cfg->chanlist[i].scantime = dur;
-- i++;
+- if (!delayed_work_pending(&priv->scan_work)) {
+- queue_delayed_work(priv->work_thread, &priv->scan_work,
+- msecs_to_jiffies(50));
- }
-
-- kfree(hold);
-- return i;
+- if (adapter->surpriseremoved)
+- return -1;
+-
+- lbs_deb_leave(LBS_DEB_SCAN);
+- return 0;
-}
-
--static void libertas_parse_bssid(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg)
-+static void lbs_parse_bssid(char *buf, size_t count,
-+ struct lbs_ioctl_user_scan_cfg *scan_cfg)
- {
- char *hold;
- unsigned int mac[ETH_ALEN];
-@@ -243,12 +199,13 @@ static void libertas_parse_bssid(char *buf, size_t count,
- if (!hold)
- return;
- hold += 6;
-- sscanf(hold, MAC_FMT, mac, mac+1, mac+2, mac+3, mac+4, mac+5);
-+ sscanf(hold, "%02x:%02x:%02x:%02x:%02x:%02x",
-+ mac, mac+1, mac+2, mac+3, mac+4, mac+5);
- memcpy(scan_cfg->bssid, mac, ETH_ALEN);
- }
--static void libertas_parse_ssid(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg)
-+static void lbs_parse_ssid(char *buf, size_t count,
-+ struct lbs_ioctl_user_scan_cfg *scan_cfg)
+ /**
+ * @brief Send a scan command for all available channels filtered on a spec
+ *
+ * Used in association code and from debugfs
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param ssid A pointer to the SSID to scan for
+ * @param ssid_len Length of the SSID
+ * @param clear_ssid Should existing scan results with this SSID
+ * be cleared?
+- * @param prequestedssid A pointer to AP's ssid
+- * @param keeppreviousscan Flag used to save/clear scan table before scan
+ *
+ * @return 0-success, otherwise fail
+ */
+-int libertas_send_specific_ssid_scan(wlan_private * priv,
++int lbs_send_specific_ssid_scan(struct lbs_private *priv,
+ u8 *ssid, u8 ssid_len, u8 clear_ssid)
{
- char *hold, *end;
- ssize_t size;
-@@ -267,7 +224,7 @@ static void libertas_parse_ssid(char *buf, size_t count,
- return;
- }
+- wlan_adapter *adapter = priv->adapter;
+- struct wlan_ioctl_user_scan_cfg scancfg;
++ struct lbs_ioctl_user_scan_cfg scancfg;
+ int ret = 0;
--static int libertas_parse_clear(char *buf, size_t count, const char *tag)
-+static int lbs_parse_clear(char *buf, size_t count, const char *tag)
- {
- char *hold;
- int val;
-@@ -284,8 +241,8 @@ static int libertas_parse_clear(char *buf, size_t count, const char *tag)
- return val;
- }
+ lbs_deb_enter_args(LBS_DEB_SCAN, "SSID '%s', clear %d",
+@@ -1420,12 +1163,11 @@ int libertas_send_specific_ssid_scan(wlan_private * priv,
+ scancfg.ssid_len = ssid_len;
+ scancfg.clear_ssid = clear_ssid;
--static int libertas_parse_dur(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg)
-+static int lbs_parse_dur(char *buf, size_t count,
-+ struct lbs_ioctl_user_scan_cfg *scan_cfg)
- {
- char *hold;
- int val;
-@@ -299,25 +256,8 @@ static int libertas_parse_dur(char *buf, size_t count,
- return val;
- }
+- wlan_scan_networks(priv, &scancfg, 1);
+- if (adapter->surpriseremoved) {
++ lbs_scan_networks(priv, &scancfg, 1);
++ if (priv->surpriseremoved) {
+ ret = -1;
+ goto out;
+ }
+- wait_event_interruptible(adapter->cmd_pending, !adapter->nr_cmd_pending);
--static void libertas_parse_probes(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg)
--{
-- char *hold;
-- int val;
--
-- hold = strstr(buf, "probes=");
-- if (!hold)
-- return;
-- hold += 7;
-- sscanf(hold, "%d", &val);
--
-- scan_cfg->numprobes = val;
--
-- return;
--}
--
--static void libertas_parse_type(char *buf, size_t count,
-- struct wlan_ioctl_user_scan_cfg *scan_cfg)
-+static void lbs_parse_type(char *buf, size_t count,
-+ struct lbs_ioctl_user_scan_cfg *scan_cfg)
- {
- char *hold;
- int val;
-@@ -337,1036 +277,324 @@ static void libertas_parse_type(char *buf, size_t count,
- return;
- }
+ out:
+ lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
+@@ -1441,13 +1183,13 @@ out:
+ /* */
+ /*********************************************************************/
--static ssize_t libertas_setuserscan(struct file *file,
-+static ssize_t lbs_setuserscan(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
++
+ #define MAX_CUSTOM_LEN 64
+
+-static inline char *libertas_translate_scan(wlan_private *priv,
++static inline char *lbs_translate_scan(struct lbs_private *priv,
+ char *start, char *stop,
+ struct bss_descriptor *bss)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
-- struct wlan_ioctl_user_scan_cfg *scan_cfg;
-+ struct lbs_ioctl_user_scan_cfg *scan_cfg;
- union iwreq_data wrqu;
- int dur;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
-+ char *buf = (char *)get_zeroed_page(GFP_KERNEL);
+- wlan_adapter *adapter = priv->adapter;
+ struct chan_freq_power *cfp;
+ char *current_val; /* For rates */
+ struct iw_event iwe; /* Temporary buffer */
+@@ -1459,14 +1201,14 @@ static inline char *libertas_translate_scan(wlan_private *priv,
-- scan_cfg = kzalloc(sizeof(struct wlan_ioctl_user_scan_cfg), GFP_KERNEL);
-- if (!scan_cfg)
-+ if (!buf)
- return -ENOMEM;
+ lbs_deb_enter(LBS_DEB_SCAN);
- buf_size = min(count, len - 1);
- if (copy_from_user(buf, userbuf, buf_size)) {
- res = -EFAULT;
-- goto out_unlock;
-+ goto out_buf;
-+ }
-+
-+ scan_cfg = kzalloc(sizeof(struct lbs_ioctl_user_scan_cfg), GFP_KERNEL);
-+ if (!scan_cfg) {
-+ res = -ENOMEM;
-+ goto out_buf;
+- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0, bss->channel);
++ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, bss->channel);
+ if (!cfp) {
+ lbs_deb_scan("Invalid channel number %d\n", bss->channel);
+ start = NULL;
+ goto out;
}
-+ res = count;
-+
-+ scan_cfg->bsstype = LBS_SCAN_BSS_TYPE_ANY;
-
-- scan_cfg->bsstype = WLAN_SCAN_BSS_TYPE_ANY;
-+ dur = lbs_parse_dur(buf, count, scan_cfg);
-+ lbs_parse_bssid(buf, count, scan_cfg);
-+ scan_cfg->clear_bssid = lbs_parse_clear(buf, count, "clear_bssid=");
-+ lbs_parse_ssid(buf, count, scan_cfg);
-+ scan_cfg->clear_ssid = lbs_parse_clear(buf, count, "clear_ssid=");
-+ lbs_parse_type(buf, count, scan_cfg);
-
-- dur = libertas_parse_dur(buf, count, scan_cfg);
-- libertas_parse_chan(buf, count, scan_cfg, dur);
-- libertas_parse_bssid(buf, count, scan_cfg);
-- scan_cfg->clear_bssid = libertas_parse_clear(buf, count, "clear_bssid=");
-- libertas_parse_ssid(buf, count, scan_cfg);
-- scan_cfg->clear_ssid = libertas_parse_clear(buf, count, "clear_ssid=");
-- libertas_parse_probes(buf, count, scan_cfg);
-- libertas_parse_type(buf, count, scan_cfg);
-+ lbs_scan_networks(priv, scan_cfg, 1);
-+ wait_event_interruptible(priv->cmd_pending,
-+ priv->surpriseremoved || !priv->last_scanned_channel);
-- wlan_scan_networks(priv, scan_cfg, 1);
-- wait_event_interruptible(priv->adapter->cmd_pending,
-- !priv->adapter->nr_cmd_pending);
-+ if (priv->surpriseremoved)
-+ goto out_scan_cfg;
+- /* First entry *MUST* be the AP BSSID */
++ /* First entry *MUST* be the BSSID */
+ iwe.cmd = SIOCGIWAP;
+ iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
+ memcpy(iwe.u.ap_addr.sa_data, &bss->bssid, ETH_ALEN);
+@@ -1502,25 +1244,25 @@ static inline char *libertas_translate_scan(wlan_private *priv,
+ if (iwe.u.qual.qual > 100)
+ iwe.u.qual.qual = 100;
- memset(&wrqu, 0x00, sizeof(union iwreq_data));
- wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
+- if (adapter->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
++ if (priv->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
+ iwe.u.qual.noise = MRVDRV_NF_DEFAULT_SCAN_VALUE;
+ } else {
+ iwe.u.qual.noise =
+- CAL_NF(adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
++ CAL_NF(priv->NF[TYPE_BEACON][TYPE_NOAVG]);
+ }
--out_unlock:
-- free_page(addr);
-+ out_scan_cfg:
- kfree(scan_cfg);
-- return count;
-+ out_buf:
-+ free_page((unsigned long)buf);
-+ return res;
+ /* Locally created ad-hoc BSSs won't have beacons if this is the
+ * only station in the adhoc network; so get signal strength
+ * from receive statistics.
+ */
+- if ((adapter->mode == IW_MODE_ADHOC)
+- && adapter->adhoccreate
+- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len,
++ if ((priv->mode == IW_MODE_ADHOC)
++ && priv->adhoccreate
++ && !lbs_ssid_cmp(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len,
+ bss->ssid, bss->ssid_len)) {
+ int snr, nf;
+- snr = adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
+- nf = adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
++ snr = priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
++ nf = priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
+ iwe.u.qual.level = CAL_RSSI(snr, nf);
+ }
+ start = iwe_stream_add_event(start, stop, &iwe, IW_EV_QUAL_LEN);
+@@ -1549,10 +1291,10 @@ static inline char *libertas_translate_scan(wlan_private *priv,
+ stop, &iwe, IW_EV_PARAM_LEN);
+ }
+ if ((bss->mode == IW_MODE_ADHOC)
+- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len,
++ && !lbs_ssid_cmp(priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len,
+ bss->ssid, bss->ssid_len)
+- && adapter->adhoccreate) {
++ && priv->adhoccreate) {
+ iwe.u.bitrate.value = 22 * 500000;
+ current_val = iwe_stream_add_value(start, current_val,
+ stop, &iwe, IW_EV_PARAM_LEN);
+@@ -1596,6 +1338,54 @@ out:
+ return start;
}
--static int libertas_event_initcmd(wlan_private *priv, void **response_buf,
-- struct cmd_ctrl_node **cmdnode,
-- struct cmd_ds_command **cmd)
--{
-- u16 wait_option = CMD_OPTION_WAITFORRSP;
--
-- if (!(*cmdnode = libertas_get_free_cmd_ctrl_node(priv))) {
-- lbs_deb_debugfs("failed libertas_get_free_cmd_ctrl_node\n");
-- return -ENOMEM;
-- }
-- if (!(*response_buf = kmalloc(3000, GFP_KERNEL))) {
-- lbs_deb_debugfs("failed to allocate response buffer!\n");
-- return -ENOMEM;
-- }
-- libertas_set_cmd_ctrl_node(priv, *cmdnode, 0, wait_option, NULL);
-- init_waitqueue_head(&(*cmdnode)->cmdwait_q);
-- (*cmdnode)->pdata_buf = *response_buf;
-- (*cmdnode)->cmdflags |= CMD_F_HOSTCMD;
-- (*cmdnode)->cmdwaitqwoken = 0;
-- *cmd = (struct cmd_ds_command *)(*cmdnode)->bufvirtualaddr;
-- (*cmd)->command = cpu_to_le16(CMD_802_11_SUBSCRIBE_EVENT);
-- (*cmd)->seqnum = cpu_to_le16(++priv->adapter->seqnum);
-- (*cmd)->result = 0;
-- return 0;
--}
-
--static ssize_t libertas_lowrssi_read(struct file *file, char __user *userbuf,
-- size_t count, loff_t *ppos)
-+/*
-+ * When calling CMD_802_11_SUBSCRIBE_EVENT with CMD_ACT_GET, me might
-+ * get a bunch of vendor-specific TLVs (a.k.a. IEs) back from the
-+ * firmware. Here's an example:
-+ * 04 01 02 00 00 00 05 01 02 00 00 00 06 01 02 00
-+ * 00 00 07 01 02 00 3c 00 00 00 00 00 00 00 03 03
-+ * 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
++
++/**
++ * @brief Handle Scan Network ioctl
+ *
-+ * The 04 01 is the TLV type (here TLV_TYPE_RSSI_LOW), 02 00 is the length,
-+ * 00 00 are the data bytes of this TLV. For this TLV, their meaning is
-+ * defined in mrvlietypes_thresholds
++ * @param dev A pointer to net_device structure
++ * @param info A pointer to iw_request_info structure
++ * @param vwrq A pointer to iw_param structure
++ * @param extra A pointer to extra data buf
+ *
-+ * This function searches in this TLV data chunk for a given TLV type
-+ * and returns a pointer to the first data byte of the TLV, or to NULL
-+ * if the TLV hasn't been found.
++ * @return 0 --success, otherwise fail
+ */
-+static void *lbs_tlv_find(uint16_t tlv_type, const uint8_t *tlv, uint16_t size)
++int lbs_set_scan(struct net_device *dev, struct iw_request_info *info,
++ struct iw_param *wrqu, char *extra)
++{
++ struct lbs_private *priv = dev->priv;
++
++ lbs_deb_enter(LBS_DEB_SCAN);
++
++ if (!netif_running(dev))
++ return -ENETDOWN;
++
++ /* mac80211 does this:
++ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
++ if (sdata->type != IEEE80211_IF_TYPE_xxx)
++ return -EOPNOTSUPP;
++
++ if (wrqu->data.length == sizeof(struct iw_scan_req) &&
++ wrqu->data.flags & IW_SCAN_THIS_ESSID) {
++ req = (struct iw_scan_req *)extra;
++ ssid = req->essid;
++ ssid_len = req->essid_len;
++ }
++ */
++
++ if (!delayed_work_pending(&priv->scan_work))
++ queue_delayed_work(priv->work_thread, &priv->scan_work,
++ msecs_to_jiffies(50));
++ /* set marker that currently a scan is taking place */
++ priv->last_scanned_channel = -1;
++
++ if (priv->surpriseremoved)
++ return -EIO;
++
++ lbs_deb_leave(LBS_DEB_SCAN);
++ return 0;
++}
++
++
+ /**
+ * @brief Handle Retrieve scan table ioctl
+ *
+@@ -1606,12 +1396,11 @@ out:
+ *
+ * @return 0 --success, otherwise fail
+ */
+-int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
++int lbs_get_scan(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
-- wlan_private *priv = file->private_data;
+ #define SCAN_ITEM_SIZE 128
+- wlan_private *priv = dev->priv;
- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-+ struct mrvlietypesheader *tlv_h;
-+ uint16_t length;
- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
-- }
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
++ struct lbs_private *priv = dev->priv;
+ int err = 0;
+ char *ev = extra;
+ char *stop = ev + dwrq->length;
+@@ -1620,14 +1409,18 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
--
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_rssithreshold *Lowrssi;
-- case __constant_cpu_to_le16(TLV_TYPE_RSSI_LOW):
-- Lowrssi = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
-- Lowrssi->rssivalue,
-- Lowrssi->rssifreq,
-- (event->events & cpu_to_le16(0x0001))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
-- break;
-- }
-- }
--
-- kfree(response_buf);
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
--}
--
--static u16 libertas_get_events_bitmap(wlan_private *priv)
--{
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res;
-- u16 event_bitmap;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- return res;
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- return 0;
-- }
+ lbs_deb_enter(LBS_DEB_SCAN);
+
++ /* iwlist should wait until the current scan is finished */
++ if (priv->last_scanned_channel)
++ return -EAGAIN;
++
+ /* Update RSSI if current BSS is a locally created ad-hoc BSS */
+- if ((adapter->mode == IW_MODE_ADHOC) && adapter->adhoccreate) {
+- libertas_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
++ if ((priv->mode == IW_MODE_ADHOC) && priv->adhoccreate) {
++ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
+ CMD_OPTION_WAITFORRSP, 0, NULL);
+ }
+
+- mutex_lock(&adapter->lock);
+- list_for_each_entry_safe (iter_bss, safe, &adapter->network_list, list) {
++ mutex_lock(&priv->lock);
++ list_for_each_entry_safe (iter_bss, safe, &priv->network_list, list) {
+ char * next_ev;
+ unsigned long stale_time;
+
+@@ -1644,18 +1437,18 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
+ stale_time = iter_bss->last_scanned + DEFAULT_MAX_SCAN_AGE;
+ if (time_after(jiffies, stale_time)) {
+ list_move_tail (&iter_bss->list,
+- &adapter->network_free_list);
++ &priv->network_free_list);
+ clear_bss_descriptor(iter_bss);
+ continue;
+ }
+
+ /* Translate to WE format this entry */
+- next_ev = libertas_translate_scan(priv, ev, stop, iter_bss);
++ next_ev = lbs_translate_scan(priv, ev, stop, iter_bss);
+ if (next_ev == NULL)
+ continue;
+ ev = next_ev;
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+
+ dwrq->length = (ev - extra);
+ dwrq->flags = 0;
+@@ -1677,24 +1470,25 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
+ /**
+ * @brief Prepare a scan command to be sent to the firmware
+ *
+- * Called from libertas_prepare_and_send_command() in cmd.c
++ * Called via lbs_prepare_and_send_command(priv, CMD_802_11_SCAN, ...)
++ * from cmd.c
+ *
+ * Sends a fixed lenght data part (specifying the BSS type and BSSID filters)
+ * as well as a variable number/length of TLVs to the firmware.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param cmd A pointer to cmd_ds_command structure to be sent to
+ * firmware with the cmd_DS_801_11_SCAN structure
+- * @param pdata_buf Void pointer cast of a wlan_scan_cmd_config struct used
++ * @param pdata_buf Void pointer cast of a lbs_scan_cmd_config struct used
+ * to set the fields/TLVs for the command sent to firmware
+ *
+ * @return 0 or -1
+ */
+-int libertas_cmd_80211_scan(wlan_private * priv,
+- struct cmd_ds_command *cmd, void *pdata_buf)
++int lbs_cmd_80211_scan(struct lbs_private *priv,
++ struct cmd_ds_command *cmd, void *pdata_buf)
+ {
+ struct cmd_ds_802_11_scan *pscan = &cmd->params.scan;
+- struct wlan_scan_cmd_config *pscancfg = pdata_buf;
++ struct lbs_scan_cmd_config *pscancfg = pdata_buf;
+
+ lbs_deb_enter(LBS_DEB_SCAN);
+
+@@ -1703,32 +1497,14 @@ int libertas_cmd_80211_scan(wlan_private * priv,
+ memcpy(pscan->bssid, pscancfg->bssid, ETH_ALEN);
+ memcpy(pscan->tlvbuffer, pscancfg->tlvbuffer, pscancfg->tlvbufferlen);
+
+- cmd->command = cpu_to_le16(CMD_802_11_SCAN);
-
-- if (le16_to_cpu(pcmdptr->command) != CMD_RET(CMD_802_11_SUBSCRIBE_EVENT)) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- return 0;
-- }
+ /* size is equal to the sizeof(fixed portions) + the TLV len + header */
+ cmd->size = cpu_to_le16(sizeof(pscan->bsstype) + ETH_ALEN
+ + pscancfg->tlvbufferlen + S_DS_GEN);
+
+- lbs_deb_scan("SCAN_CMD: command 0x%04x, size %d, seqnum %d\n",
+- le16_to_cpu(cmd->command), le16_to_cpu(cmd->size),
+- le16_to_cpu(cmd->seqnum));
-
-- event = (struct cmd_ds_802_11_subscribe_event *)(response_buf + S_DS_GEN);
-- event_bitmap = le16_to_cpu(event->events);
-- kfree(response_buf);
-- return event_bitmap;
-+ while (pos < size) {
-+ tlv_h = (struct mrvlietypesheader *) tlv;
-+ if (!tlv_h->len)
-+ return NULL;
-+ if (tlv_h->type == cpu_to_le16(tlv_type))
-+ return tlv_h;
-+ length = le16_to_cpu(tlv_h->len) + sizeof(*tlv_h);
-+ pos += length;
-+ tlv += length;
-+ }
-+ return NULL;
+ lbs_deb_leave(LBS_DEB_SCAN);
+ return 0;
}
--static ssize_t libertas_lowrssi_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
+-static inline int is_same_network(struct bss_descriptor *src,
+- struct bss_descriptor *dst)
-{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_rssithreshold *rssi_threshold;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
-
-- buf_size = min(count, len - 1);
-- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
--
-- event_bitmap = libertas_get_events_bitmap(priv);
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_rssithreshold));
--
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- rssi_threshold = (struct mrvlietypes_rssithreshold *)(ptr);
-- rssi_threshold->header.type = cpu_to_le16(0x0104);
-- rssi_threshold->header.len = cpu_to_le16(2);
-- rssi_threshold->rssivalue = value;
-- rssi_threshold->rssifreq = freq;
-- event_bitmap |= subscribed ? 0x0001 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
--
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
--
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
--
-- res = count;
--out_unlock:
-- free_page(addr);
-- return res;
+- /* A network is only a duplicate if the channel, BSSID, and ESSID
+- * all match. We treat all <hidden> with the same BSSID and channel
+- * as one network */
+- return ((src->ssid_len == dst->ssid_len) &&
+- (src->channel == dst->channel) &&
+- !compare_ether_addr(src->bssid, dst->bssid) &&
+- !memcmp(src->ssid, dst->ssid, src->ssid_len));
-}
-
--static ssize_t libertas_lowsnr_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_threshold_read(uint16_t tlv_type, uint16_t event_mask,
-+ struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+ /**
+ * @brief This function handles the command response of scan
+ *
+@@ -1750,14 +1526,13 @@ static inline int is_same_network(struct bss_descriptor *src,
+ * | bufsize and sizeof the fixed fields above) |
+ * .-----------------------------------------------------------.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param resp A pointer to cmd_ds_command
+ *
+ * @return 0 or -1
+ */
+-int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
++int lbs_ret_80211_scan(struct lbs_private *priv, struct cmd_ds_command *resp)
{
-- wlan_private *priv = file->private_data;
- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
-- }
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
-+ struct cmd_ds_802_11_subscribe_event *subscribed;
-+ struct mrvlietypes_thresholds *got;
-+ struct lbs_private *priv = file->private_data;
-+ ssize_t ret = 0;
-+ size_t pos = 0;
-+ char *buf;
-+ u8 value;
-+ u8 freq;
-+ int events = 0;
+ struct cmd_ds_802_11_scan_rsp *pscan;
+ struct bss_descriptor * iter_bss;
+ struct bss_descriptor * safe;
+@@ -1771,11 +1546,11 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
+ lbs_deb_enter(LBS_DEB_SCAN);
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-+ buf = (char *)get_zeroed_page(GFP_KERNEL);
-+ if (!buf)
-+ return -ENOMEM;
+ /* Prune old entries from scan table */
+- list_for_each_entry_safe (iter_bss, safe, &adapter->network_list, list) {
++ list_for_each_entry_safe (iter_bss, safe, &priv->network_list, list) {
+ unsigned long stale_time = iter_bss->last_scanned + DEFAULT_MAX_SCAN_AGE;
+ if (time_before(jiffies, stale_time))
+ continue;
+- list_move_tail (&iter_bss->list, &adapter->network_free_list);
++ list_move_tail (&iter_bss->list, &priv->network_free_list);
+ clear_bss_descriptor(iter_bss);
+ }
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-+ subscribed = kzalloc(sizeof(*subscribed), GFP_KERNEL);
-+ if (!subscribed) {
-+ ret = -ENOMEM;
-+ goto out_page;
+@@ -1789,12 +1564,11 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
+ goto done;
}
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_snrthreshold *LowSnr;
-- case __constant_cpu_to_le16(TLV_TYPE_SNR_LOW):
-- LowSnr = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
-- LowSnr->snrvalue,
-- LowSnr->snrfreq,
-- (event->events & cpu_to_le16(0x0002))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
-- break;
-- }
-- }
-+ subscribed->hdr.size = cpu_to_le16(sizeof(*subscribed));
-+ subscribed->action = cpu_to_le16(CMD_ACT_GET);
+- bytesleft = le16_to_cpu(get_unaligned((u16*)&pscan->bssdescriptsize));
++ bytesleft = le16_to_cpu(pscan->bssdescriptsize);
+ lbs_deb_scan("SCAN_RESP: bssdescriptsize %d\n", bytesleft);
-- kfree(response_buf);
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, subscribed);
-+ if (ret)
-+ goto out_cmd;
+- scanrespsize = le16_to_cpu(get_unaligned((u16*)&resp->size));
+- lbs_deb_scan("SCAN_RESP: returned %d AP before parsing\n",
+- pscan->nr_sets);
++ scanrespsize = le16_to_cpu(resp->size);
++ lbs_deb_scan("SCAN_RESP: scan results %d\n", pscan->nr_sets);
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
--}
-+ got = lbs_tlv_find(tlv_type, subscribed->tlv, sizeof(subscribed->tlv));
-+ if (got) {
-+ value = got->value;
-+ freq = got->freq;
-+ events = le16_to_cpu(subscribed->events);
+ pbssinfo = pscan->bssdesc_and_tlvbuffer;
--static ssize_t libertas_lowsnr_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
--{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_snrthreshold *snr_threshold;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- buf_size = min(count, len - 1);
-- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
--
-- event_bitmap = libertas_get_events_bitmap(priv);
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_snrthreshold));
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- snr_threshold = (struct mrvlietypes_snrthreshold *)(ptr);
-- snr_threshold->header.type = cpu_to_le16(TLV_TYPE_SNR_LOW);
-- snr_threshold->header.len = cpu_to_le16(2);
-- snr_threshold->snrvalue = value;
-- snr_threshold->snrfreq = freq;
-- event_bitmap |= subscribed ? 0x0002 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
--
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-+ pos += snprintf(buf, len, "%d %d %d\n", value, freq,
-+ !!(events & event_mask));
- }
+@@ -1821,14 +1595,14 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
+ /* Process the data fields and IEs returned for this BSS */
+ memset(&new, 0, sizeof (struct bss_descriptor));
+- if (libertas_process_bss(&new, &pbssinfo, &bytesleft) != 0) {
++ if (lbs_process_bss(&new, &pbssinfo, &bytesleft) != 0) {
+ /* error parsing the scan response, skipped */
+ lbs_deb_scan("SCAN_RESP: process_bss returned ERROR\n");
+ continue;
+ }
-- res = count;
-+ out_cmd:
-+ kfree(subscribed);
+ /* Try to find this bss in the scan table */
+- list_for_each_entry (iter_bss, &adapter->network_list, list) {
++ list_for_each_entry (iter_bss, &priv->network_list, list) {
+ if (is_same_network(iter_bss, &new)) {
+ found = iter_bss;
+ break;
+@@ -1842,21 +1616,21 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
+ if (found) {
+ /* found, clear it */
+ clear_bss_descriptor(found);
+- } else if (!list_empty(&adapter->network_free_list)) {
++ } else if (!list_empty(&priv->network_free_list)) {
+ /* Pull one from the free list */
+- found = list_entry(adapter->network_free_list.next,
++ found = list_entry(priv->network_free_list.next,
+ struct bss_descriptor, list);
+- list_move_tail(&found->list, &adapter->network_list);
++ list_move_tail(&found->list, &priv->network_list);
+ } else if (oldest) {
+ /* If there are no more slots, expire the oldest */
+ found = oldest;
+ clear_bss_descriptor(found);
+- list_move_tail(&found->list, &adapter->network_list);
++ list_move_tail(&found->list, &priv->network_list);
+ } else {
+ continue;
+ }
--out_unlock:
-- free_page(addr);
-- return res;
-+ out_page:
-+ free_page((unsigned long)buf);
-+ return ret;
- }
+- lbs_deb_scan("SCAN_RESP: BSSID = %s\n",
++ lbs_deb_scan("SCAN_RESP: BSSID %s\n",
+ print_mac(mac, new.bssid));
--static ssize_t libertas_failcount_read(struct file *file, char __user *userbuf,
-- size_t count, loff_t *ppos)
--{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
-- }
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
--
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
+ /* Copy the locally created newbssentry to the scan table */
+diff --git a/drivers/net/wireless/libertas/scan.h b/drivers/net/wireless/libertas/scan.h
+index c29c031..319f70d 100644
+--- a/drivers/net/wireless/libertas/scan.h
++++ b/drivers/net/wireless/libertas/scan.h
+@@ -2,10 +2,10 @@
+ * Interface for the wlan network scan routines
+ *
+ * Driver interface functions and type declarations for the scan module
+- * implemented in wlan_scan.c.
++ * implemented in scan.c.
+ */
+-#ifndef _WLAN_SCAN_H
+-#define _WLAN_SCAN_H
++#ifndef _LBS_SCAN_H
++#define _LBS_SCAN_H
+
+ #include <net/ieee80211.h>
+ #include "hostcmd.h"
+@@ -13,38 +13,38 @@
+ /**
+ * @brief Maximum number of channels that can be sent in a setuserscan ioctl
+ *
+- * @sa wlan_ioctl_user_scan_cfg
++ * @sa lbs_ioctl_user_scan_cfg
+ */
+-#define WLAN_IOCTL_USER_SCAN_CHAN_MAX 50
++#define LBS_IOCTL_USER_SCAN_CHAN_MAX 50
+
+-//! Infrastructure BSS scan type in wlan_scan_cmd_config
+-#define WLAN_SCAN_BSS_TYPE_BSS 1
++//! Infrastructure BSS scan type in lbs_scan_cmd_config
++#define LBS_SCAN_BSS_TYPE_BSS 1
+
+-//! Adhoc BSS scan type in wlan_scan_cmd_config
+-#define WLAN_SCAN_BSS_TYPE_IBSS 2
++//! Adhoc BSS scan type in lbs_scan_cmd_config
++#define LBS_SCAN_BSS_TYPE_IBSS 2
+
+-//! Adhoc or Infrastructure BSS scan type in wlan_scan_cmd_config, no filter
+-#define WLAN_SCAN_BSS_TYPE_ANY 3
++//! Adhoc or Infrastructure BSS scan type in lbs_scan_cmd_config, no filter
++#define LBS_SCAN_BSS_TYPE_ANY 3
+
+ /**
+ * @brief Structure used internally in the wlan driver to configure a scan.
+ *
+ * Sent to the command processing module to configure the firmware
+- * scan command prepared by libertas_cmd_80211_scan.
++ * scan command prepared by lbs_cmd_80211_scan.
+ *
+- * @sa wlan_scan_networks
++ * @sa lbs_scan_networks
+ *
+ */
+-struct wlan_scan_cmd_config {
++struct lbs_scan_cmd_config {
+ /**
+ * @brief BSS type to be sent in the firmware command
+ *
+ * Field can be used to restrict the types of networks returned in the
+ * scan. valid settings are:
+ *
+- * - WLAN_SCAN_BSS_TYPE_BSS (infrastructure)
+- * - WLAN_SCAN_BSS_TYPE_IBSS (adhoc)
+- * - WLAN_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
++ * - LBS_SCAN_BSS_TYPE_BSS (infrastructure)
++ * - LBS_SCAN_BSS_TYPE_IBSS (adhoc)
++ * - LBS_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
+ */
+ u8 bsstype;
+
+@@ -68,12 +68,12 @@ struct wlan_scan_cmd_config {
+ };
+
+ /**
+- * @brief IOCTL channel sub-structure sent in wlan_ioctl_user_scan_cfg
++ * @brief IOCTL channel sub-structure sent in lbs_ioctl_user_scan_cfg
+ *
+ * Multiple instances of this structure are included in the IOCTL command
+ * to configure a instance of a scan on the specific channel.
+ */
+-struct wlan_ioctl_user_scan_chan {
++struct lbs_ioctl_user_scan_chan {
+ u8 channumber; //!< channel Number to scan
+ u8 radiotype; //!< Radio type: 'B/G' band = 0, 'A' band = 1
+ u8 scantype; //!< Scan type: Active = 0, Passive = 1
+@@ -83,31 +83,26 @@ struct wlan_ioctl_user_scan_chan {
+ /**
+ * @brief IOCTL input structure to configure an immediate scan cmd to firmware
+ *
+- * Used in the setuserscan (WLAN_SET_USER_SCAN) private ioctl. Specifies
++ * Used in the setuserscan (LBS_SET_USER_SCAN) private ioctl. Specifies
+ * a number of parameters to be used in general for the scan as well
+- * as a channel list (wlan_ioctl_user_scan_chan) for each scan period
++ * as a channel list (lbs_ioctl_user_scan_chan) for each scan period
+ * desired.
+ *
+- * @sa libertas_set_user_scan_ioctl
++ * @sa lbs_set_user_scan_ioctl
+ */
+-struct wlan_ioctl_user_scan_cfg {
++struct lbs_ioctl_user_scan_cfg {
+ /**
+ * @brief BSS type to be sent in the firmware command
+ *
+ * Field can be used to restrict the types of networks returned in the
+ * scan. valid settings are:
+ *
+- * - WLAN_SCAN_BSS_TYPE_BSS (infrastructure)
+- * - WLAN_SCAN_BSS_TYPE_IBSS (adhoc)
+- * - WLAN_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
++ * - LBS_SCAN_BSS_TYPE_BSS (infrastructure)
++ * - LBS_SCAN_BSS_TYPE_IBSS (adhoc)
++ * - LBS_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
+ */
+ u8 bsstype;
+
+- /**
+- * @brief Configure the number of probe requests for active chan scans
+- */
+- u8 numprobes;
-
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_failurecount *failcount;
-- case __constant_cpu_to_le16(TLV_TYPE_FAILCOUNT):
-- failcount = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
-- failcount->failvalue,
-- failcount->Failfreq,
-- (event->events & cpu_to_le16(0x0004))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_failurecount);
-- break;
-- }
-- }
+ /**
+ * @brief BSSID filter sent in the firmware command to limit the results
+ */
+@@ -124,11 +119,6 @@ struct wlan_ioctl_user_scan_cfg {
+
+ /* Clear existing scan results matching this SSID */
+ u8 clear_ssid;
-
-- kfree(response_buf);
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
--}
+- /**
+- * @brief Variable number (fixed maximum) of channels to scan up
+- */
+- struct wlan_ioctl_user_scan_chan chanlist[WLAN_IOCTL_USER_SCAN_CHAN_MAX];
+ };
+
+ /**
+@@ -174,30 +164,30 @@ struct bss_descriptor {
+ struct list_head list;
+ };
+
+-int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len);
++int lbs_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len);
+
+-struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
+- u8 *ssid, u8 ssid_len, u8 * bssid, u8 mode,
+- int channel);
++struct bss_descriptor *lbs_find_ssid_in_list(struct lbs_private *priv,
++ u8 *ssid, u8 ssid_len, u8 *bssid, u8 mode,
++ int channel);
+
+-struct bss_descriptor * libertas_find_bssid_in_list(wlan_adapter * adapter,
+- u8 * bssid, u8 mode);
++struct bss_descriptor *lbs_find_bssid_in_list(struct lbs_private *priv,
++ u8 *bssid, u8 mode);
+
+-int libertas_find_best_network_ssid(wlan_private * priv, u8 *out_ssid,
++int lbs_find_best_network_ssid(struct lbs_private *priv, u8 *out_ssid,
+ u8 *out_ssid_len, u8 preferred_mode, u8 *out_mode);
+
+-int libertas_send_specific_ssid_scan(wlan_private * priv, u8 *ssid,
++int lbs_send_specific_ssid_scan(struct lbs_private *priv, u8 *ssid,
+ u8 ssid_len, u8 clear_ssid);
+
+-int libertas_cmd_80211_scan(wlan_private * priv,
++int lbs_cmd_80211_scan(struct lbs_private *priv,
+ struct cmd_ds_command *cmd,
+ void *pdata_buf);
--static ssize_t libertas_failcount_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
-+static ssize_t lbs_threshold_write(uint16_t tlv_type, uint16_t event_mask,
-+ struct file *file,
-+ const char __user *userbuf, size_t count,
-+ loff_t *ppos)
+-int libertas_ret_80211_scan(wlan_private * priv,
++int lbs_ret_80211_scan(struct lbs_private *priv,
+ struct cmd_ds_command *resp);
+
+-int wlan_scan_networks(wlan_private * priv,
+- const struct wlan_ioctl_user_scan_cfg * puserscanin,
++int lbs_scan_networks(struct lbs_private *priv,
++ const struct lbs_ioctl_user_scan_cfg *puserscanin,
+ int full_scan);
+
+ struct ifreq;
+@@ -205,11 +195,11 @@ struct ifreq;
+ struct iw_point;
+ struct iw_param;
+ struct iw_request_info;
+-int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
++int lbs_get_scan(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra);
+-int libertas_set_scan(struct net_device *dev, struct iw_request_info *info,
++int lbs_set_scan(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra);
+
+-void libertas_scan_worker(struct work_struct *work);
++void lbs_scan_worker(struct work_struct *work);
+
+-#endif /* _WLAN_SCAN_H */
++#endif
+diff --git a/drivers/net/wireless/libertas/tx.c b/drivers/net/wireless/libertas/tx.c
+index fbec06c..00d95f7 100644
+--- a/drivers/net/wireless/libertas/tx.c
++++ b/drivers/net/wireless/libertas/tx.c
+@@ -2,6 +2,7 @@
+ * This file contains the handling of TX in wlan driver.
+ */
+ #include <linux/netdevice.h>
++#include <linux/etherdevice.h>
+
+ #include "hostcmd.h"
+ #include "radiotap.h"
+@@ -49,188 +50,122 @@ static u32 convert_radiotap_rate_to_mv(u8 rate)
+ }
+
+ /**
+- * @brief This function processes a single packet and sends
+- * to IF layer
++ * @brief This function checks the conditions and sends packet to IF
++ * layer if everything is ok.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param skb A pointer to skb which includes TX packet
+ * @return 0 or -1
+ */
+-static int SendSinglePacket(wlan_private * priv, struct sk_buff *skb)
++int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_failurecount *failcount;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
-+ struct cmd_ds_802_11_subscribe_event *events;
-+ struct mrvlietypes_thresholds *tlv;
-+ struct lbs_private *priv = file->private_data;
-+ ssize_t buf_size;
-+ int value, freq, new_mask;
-+ uint16_t curr_mask;
-+ char *buf;
+- int ret = 0;
+- struct txpd localtxpd;
+- struct txpd *plocaltxpd = &localtxpd;
+- u8 *p802x_hdr;
+- struct tx_radiotap_hdr *pradiotap_hdr;
+- u32 new_rate;
+- u8 *ptr = priv->adapter->tmptxbuf;
++ unsigned long flags;
++ struct lbs_private *priv = dev->priv;
++ struct txpd *txpd;
++ char *p802x_hdr;
++ uint16_t pkt_len;
+ int ret;
-+
-+ buf = (char *)get_zeroed_page(GFP_KERNEL);
-+ if (!buf)
-+ return -ENOMEM;
- buf_size = min(count, len - 1);
- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-+ ret = -EFAULT;
-+ goto out_page;
- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
-+ ret = sscanf(buf, "%d %d %d", &value, &freq, &new_mask);
-+ if (ret != 3) {
-+ ret = -EINVAL;
-+ goto out_page;
- }
--
-- event_bitmap = libertas_get_events_bitmap(priv);
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_failurecount));
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- failcount = (struct mrvlietypes_failurecount *)(ptr);
-- failcount->header.type = cpu_to_le16(TLV_TYPE_FAILCOUNT);
-- failcount->header.len = cpu_to_le16(2);
-- failcount->failvalue = value;
-- failcount->Failfreq = freq;
-- event_bitmap |= subscribed ? 0x0004 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
--
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = (struct cmd_ds_command *)response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-+ events = kzalloc(sizeof(*events), GFP_KERNEL);
-+ if (!events) {
-+ ret = -ENOMEM;
-+ goto out_page;
- }
+ lbs_deb_enter(LBS_DEB_TX);
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-+ events->hdr.size = cpu_to_le16(sizeof(*events));
-+ events->action = cpu_to_le16(CMD_ACT_GET);
+- if (priv->adapter->surpriseremoved)
+- return -1;
++ ret = NETDEV_TX_OK;
++
++ /* We need to protect against the queues being restarted before
++ we get round to stopping them */
++ spin_lock_irqsave(&priv->driver_lock, flags);
++
++ if (priv->surpriseremoved)
++ goto free;
-- res = count;
--out_unlock:
-- free_page(addr);
-- return res;
--}
--
--static ssize_t libertas_bcnmiss_read(struct file *file, char __user *userbuf,
-- size_t count, loff_t *ppos)
--{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
-- }
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, events);
-+ if (ret)
-+ goto out_events;
+ if (!skb->len || (skb->len > MRVDRV_ETH_TX_PACKET_BUFFER_SIZE)) {
+ lbs_deb_tx("tx err: skb length %d 0 or > %zd\n",
+ skb->len, MRVDRV_ETH_TX_PACKET_BUFFER_SIZE);
+- ret = -1;
+- goto done;
++ /* We'll never manage to send this one; drop it and return 'OK' */
++
++ priv->stats.tx_dropped++;
++ priv->stats.tx_errors++;
++ goto free;
+ }
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
-+ curr_mask = le16_to_cpu(events->events);
+- memset(plocaltxpd, 0, sizeof(struct txpd));
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
-+ if (new_mask)
-+ new_mask = curr_mask | event_mask;
-+ else
-+ new_mask = curr_mask & ~event_mask;
+- plocaltxpd->tx_packet_length = cpu_to_le16(skb->len);
++ netif_stop_queue(priv->dev);
++ if (priv->mesh_dev)
++ netif_stop_queue(priv->mesh_dev);
-- pcmdptr = response_buf;
-+ /* Now everything is set and we can send stuff down to the firmware */
+- /* offset of actual data */
+- plocaltxpd->tx_packet_location = cpu_to_le32(sizeof(struct txpd));
++ if (priv->tx_pending_len) {
++ /* This can happen if packets come in on the mesh and eth
++ device simultaneously -- there's no mutual exclusion on
++ hard_start_xmit() calls between devices. */
++ lbs_deb_tx("Packet on %s while busy\n", dev->name);
++ ret = NETDEV_TX_BUSY;
++ goto unlock;
++ }
++
++ priv->tx_pending_len = -1;
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++
++ lbs_deb_hex(LBS_DEB_TX, "TX Data", skb->data, min_t(unsigned int, skb->len, 100));
++
++ txpd = (void *)priv->tx_pending_buf;
++ memset(txpd, 0, sizeof(struct txpd));
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- free_page(addr);
-- kfree(response_buf);
-- return 0;
-- }
-+ tlv = (void *)events->tlv;
+ p802x_hdr = skb->data;
+- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
++ pkt_len = skb->len;
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- free_page(addr);
-- kfree(response_buf);
-- return 0;
-- }
-+ events->action = cpu_to_le16(CMD_ACT_SET);
-+ events->events = cpu_to_le16(new_mask);
-+ tlv->header.type = cpu_to_le16(tlv_type);
-+ tlv->header.len = cpu_to_le16(sizeof(*tlv) - sizeof(tlv->header));
-+ tlv->value = value;
-+ if (tlv_type != TLV_TYPE_BCNMISS)
-+ tlv->freq = freq;
+- /* locate radiotap header */
+- pradiotap_hdr = (struct tx_radiotap_hdr *)skb->data;
++ if (dev == priv->rtap_net_dev) {
++ struct tx_radiotap_hdr *rtap_hdr = (void *)skb->data;
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_beaconsmissed *bcnmiss;
-- case __constant_cpu_to_le16(TLV_TYPE_BCNMISS):
-- bcnmiss = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d N/A %d\n",
-- bcnmiss->beaconmissed,
-- (event->events & cpu_to_le16(0x0008))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_beaconsmissed);
-- break;
+ /* set txpd fields from the radiotap header */
+- new_rate = convert_radiotap_rate_to_mv(pradiotap_hdr->rate);
+- if (new_rate != 0) {
+- /* use new tx_control[4:0] */
+- plocaltxpd->tx_control = cpu_to_le32(new_rate);
- }
-- }
-+ /* The command header, the event mask, and the one TLV */
-+ events->hdr.size = cpu_to_le16(sizeof(events->hdr) + 2 + sizeof(*tlv));
++ txpd->tx_control = cpu_to_le32(convert_radiotap_rate_to_mv(rtap_hdr->rate));
-- kfree(response_buf);
-+ ret = lbs_cmd_with_response(priv, CMD_802_11_SUBSCRIBE_EVENT, events);
+ /* skip the radiotap header */
+- p802x_hdr += sizeof(struct tx_radiotap_hdr);
+- plocaltxpd->tx_packet_length =
+- cpu_to_le16(le16_to_cpu(plocaltxpd->tx_packet_length)
+- - sizeof(struct tx_radiotap_hdr));
++ p802x_hdr += sizeof(*rtap_hdr);
++ pkt_len -= sizeof(*rtap_hdr);
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
-+ if (!ret)
-+ ret = count;
-+ out_events:
-+ kfree(events);
-+ out_page:
-+ free_page((unsigned long)buf);
-+ return ret;
- }
++ /* copy destination address from 802.11 header */
++ memcpy(txpd->tx_dest_addr_high, p802x_hdr + 4, ETH_ALEN);
++ } else {
++ /* copy destination address from 802.3 header */
++ memcpy(txpd->tx_dest_addr_high, p802x_hdr, ETH_ALEN);
+ }
+- /* copy destination address from 802.3 or 802.11 header */
+- if (priv->adapter->monitormode != WLAN_MONITOR_OFF)
+- memcpy(plocaltxpd->tx_dest_addr_high, p802x_hdr + 4, ETH_ALEN);
+- else
+- memcpy(plocaltxpd->tx_dest_addr_high, p802x_hdr, ETH_ALEN);
--static ssize_t libertas_bcnmiss_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
-+
-+static ssize_t lbs_lowrssi_read(struct file *file, char __user *userbuf,
-+ size_t count, loff_t *ppos)
- {
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_beaconsmissed *bcnmiss;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
-+ return lbs_threshold_read(TLV_TYPE_RSSI_LOW, CMD_SUBSCRIBE_RSSI_LOW,
-+ file, userbuf, count, ppos);
-+}
+- lbs_deb_hex(LBS_DEB_TX, "txpd", (u8 *) plocaltxpd, sizeof(struct txpd));
++ txpd->tx_packet_length = cpu_to_le16(pkt_len);
++ txpd->tx_packet_location = cpu_to_le32(sizeof(struct txpd));
-- buf_size = min(count, len - 1);
-- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
+- if (IS_MESH_FRAME(skb)) {
+- plocaltxpd->tx_control |= cpu_to_le32(TxPD_MESH_FRAME);
- }
++ if (dev == priv->mesh_dev)
++ txpd->tx_control |= cpu_to_le32(TxPD_MESH_FRAME);
-- event_bitmap = libertas_get_events_bitmap(priv);
-+static ssize_t lbs_lowrssi_write(struct file *file, const char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_write(TLV_TYPE_RSSI_LOW, CMD_SUBSCRIBE_RSSI_LOW,
-+ file, userbuf, count, ppos);
-+}
+- memcpy(ptr, plocaltxpd, sizeof(struct txpd));
++ lbs_deb_hex(LBS_DEB_TX, "txpd", (u8 *) &txpd, sizeof(struct txpd));
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
+- ptr += sizeof(struct txpd);
++ lbs_deb_hex(LBS_DEB_TX, "Tx Data", (u8 *) p802x_hdr, le16_to_cpu(txpd->tx_packet_length));
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_beaconsmissed));
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- bcnmiss = (struct mrvlietypes_beaconsmissed *)(ptr);
-- bcnmiss->header.type = cpu_to_le16(TLV_TYPE_BCNMISS);
-- bcnmiss->header.len = cpu_to_le16(2);
-- bcnmiss->beaconmissed = value;
-- event_bitmap |= subscribed ? 0x0008 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
--
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-+static ssize_t lbs_lowsnr_read(struct file *file, char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_read(TLV_TYPE_SNR_LOW, CMD_SUBSCRIBE_SNR_LOW,
-+ file, userbuf, count, ppos);
-+}
+- lbs_deb_hex(LBS_DEB_TX, "Tx Data", (u8 *) p802x_hdr, le16_to_cpu(plocaltxpd->tx_packet_length));
+- memcpy(ptr, p802x_hdr, le16_to_cpu(plocaltxpd->tx_packet_length));
+- ret = priv->hw_host_to_card(priv, MVMS_DAT,
+- priv->adapter->tmptxbuf,
+- le16_to_cpu(plocaltxpd->tx_packet_length) +
+- sizeof(struct txpd));
++ memcpy(&txpd[1], p802x_hdr, le16_to_cpu(txpd->tx_packet_length));
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- free_page(addr);
-- kfree(response_buf);
-- return 0;
+- if (ret) {
+- lbs_deb_tx("tx err: hw_host_to_card returned 0x%X\n", ret);
+- goto done;
- }
++ spin_lock_irqsave(&priv->driver_lock, flags);
++ priv->tx_pending_len = pkt_len + sizeof(struct txpd);
-- res = count;
--out_unlock:
-- free_page(addr);
-- return res;
-+static ssize_t lbs_lowsnr_write(struct file *file, const char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_write(TLV_TYPE_SNR_LOW, CMD_SUBSCRIBE_SNR_LOW,
-+ file, userbuf, count, ppos);
- }
+- lbs_deb_tx("SendSinglePacket succeeds\n");
++ lbs_deb_tx("%s lined up packet\n", __func__);
--static ssize_t libertas_highrssi_read(struct file *file, char __user *userbuf,
+-done:
+- if (!ret) {
+- priv->stats.tx_packets++;
+- priv->stats.tx_bytes += skb->len;
+- } else {
+- priv->stats.tx_dropped++;
+- priv->stats.tx_errors++;
+- }
++ priv->stats.tx_packets++;
++ priv->stats.tx_bytes += skb->len;
+
+- if (!ret && priv->adapter->monitormode != WLAN_MONITOR_OFF) {
++ dev->trans_start = jiffies;
+
-+static ssize_t lbs_failcount_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
- {
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
++ if (priv->monitormode != LBS_MONITOR_OFF) {
+ /* Keep the skb to echo it back once Tx feedback is
+ received from FW */
+ skb_orphan(skb);
+- /* stop processing outgoing pkts */
+- netif_stop_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_stop_queue(priv->mesh_dev);
+- /* freeze any packets already in our queues */
+- priv->adapter->TxLockFlag = 1;
+- } else {
+- dev_kfree_skb_any(skb);
+- priv->adapter->currenttxskb = NULL;
- }
+
+- lbs_deb_leave_args(LBS_DEB_TX, "ret %d", ret);
+- return ret;
+-}
-
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
+-void libertas_tx_runqueue(wlan_private *priv)
+-{
+- wlan_adapter *adapter = priv->adapter;
+- int i;
-
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_rssithreshold *Highrssi;
-- case __constant_cpu_to_le16(TLV_TYPE_RSSI_HIGH):
-- Highrssi = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
-- Highrssi->rssivalue,
-- Highrssi->rssifreq,
-- (event->events & cpu_to_le16(0x0010))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
-- break;
-- }
+- spin_lock(&adapter->txqueue_lock);
+- for (i = 0; i < adapter->tx_queue_idx; i++) {
+- struct sk_buff *skb = adapter->tx_queue_ps[i];
+- spin_unlock(&adapter->txqueue_lock);
+- SendSinglePacket(priv, skb);
+- spin_lock(&adapter->txqueue_lock);
- }
+- adapter->tx_queue_idx = 0;
+- spin_unlock(&adapter->txqueue_lock);
+-}
-
-- kfree(response_buf);
--
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
-+ return lbs_threshold_read(TLV_TYPE_FAILCOUNT, CMD_SUBSCRIBE_FAILCOUNT,
-+ file, userbuf, count, ppos);
- }
-
--static ssize_t libertas_highrssi_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
+-static void wlan_tx_queue(wlan_private *priv, struct sk_buff *skb)
-{
-- wlan_private *priv = file->private_data;
- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_rssithreshold *rssi_threshold;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
-
-- buf_size = min(count, len - 1);
-- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
+- spin_lock(&adapter->txqueue_lock);
+-
+- WARN_ON(priv->adapter->tx_queue_idx >= NR_TX_QUEUE);
+- adapter->tx_queue_ps[adapter->tx_queue_idx++] = skb;
+- if (adapter->tx_queue_idx == NR_TX_QUEUE) {
+- netif_stop_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_stop_queue(priv->mesh_dev);
++ /* Keep the skb around for when we get feedback */
++ priv->currenttxskb = skb;
+ } else {
+- netif_start_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_start_queue(priv->mesh_dev);
- }
-
-- event_bitmap = libertas_get_events_bitmap(priv);
-
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
+- spin_unlock(&adapter->txqueue_lock);
+-}
-
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_rssithreshold));
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- rssi_threshold = (struct mrvlietypes_rssithreshold *)(ptr);
-- rssi_threshold->header.type = cpu_to_le16(TLV_TYPE_RSSI_HIGH);
-- rssi_threshold->header.len = cpu_to_le16(2);
-- rssi_threshold->rssivalue = value;
-- rssi_threshold->rssifreq = freq;
-- event_bitmap |= subscribed ? 0x0010 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
+-/**
+- * @brief This function checks the conditions and sends packet to IF
+- * layer if everything is ok.
+- *
+- * @param priv A pointer to wlan_private structure
+- * @return n/a
+- */
+-int libertas_process_tx(wlan_private * priv, struct sk_buff *skb)
+-{
+- int ret = -1;
-
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
+- lbs_deb_enter(LBS_DEB_TX);
+- lbs_deb_hex(LBS_DEB_TX, "TX Data", skb->data, min_t(unsigned int, skb->len, 100));
-
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
+- if (priv->dnld_sent) {
+- lbs_pr_alert( "TX error: dnld_sent = %d, not sending\n",
+- priv->dnld_sent);
+- goto done;
+- }
-
-- pcmdptr = response_buf;
+- if ((priv->adapter->psstate == PS_STATE_SLEEP) ||
+- (priv->adapter->psstate == PS_STATE_PRE_SLEEP)) {
+- wlan_tx_queue(priv, skb);
+- return ret;
++ free:
++ dev_kfree_skb_any(skb);
+ }
++ unlock:
++ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ wake_up(&priv->waitq);
+
+- priv->adapter->currenttxskb = skb;
-
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- return 0;
-- }
-+static ssize_t lbs_failcount_write(struct file *file, const char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_write(TLV_TYPE_FAILCOUNT, CMD_SUBSCRIBE_FAILCOUNT,
-+ file, userbuf, count, ppos);
-+}
+- ret = SendSinglePacket(priv, skb);
+-done:
+ lbs_deb_leave_args(LBS_DEB_TX, "ret %d", ret);
+ return ret;
+ }
+@@ -239,24 +174,23 @@ done:
+ * @brief This function sends to the host the last transmitted packet,
+ * filling the radiotap headers with transmission information.
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param status A 32 bit value containing transmission status.
+ *
+ * @returns void
+ */
+-void libertas_send_tx_feedback(wlan_private * priv)
++void lbs_send_tx_feedback(struct lbs_private *priv)
+ {
+- wlan_adapter *adapter = priv->adapter;
+ struct tx_radiotap_hdr *radiotap_hdr;
+- u32 status = adapter->eventcause;
++ u32 status = priv->eventcause;
+ int txfail;
+ int try_count;
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- return 0;
-- }
+- if (adapter->monitormode == WLAN_MONITOR_OFF ||
+- adapter->currenttxskb == NULL)
++ if (priv->monitormode == LBS_MONITOR_OFF ||
++ priv->currenttxskb == NULL)
+ return;
-- res = count;
--out_unlock:
-- free_page(addr);
-- return res;
-+static ssize_t lbs_highrssi_read(struct file *file, char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_read(TLV_TYPE_RSSI_HIGH, CMD_SUBSCRIBE_RSSI_HIGH,
-+ file, userbuf, count, ppos);
- }
+- radiotap_hdr = (struct tx_radiotap_hdr *)adapter->currenttxskb->data;
++ radiotap_hdr = (struct tx_radiotap_hdr *)priv->currenttxskb->data;
--static ssize_t libertas_highsnr_read(struct file *file, char __user *userbuf,
+ txfail = (status >> 24);
+
+@@ -269,14 +203,19 @@ void libertas_send_tx_feedback(wlan_private * priv)
+ #endif
+ try_count = (status >> 16) & 0xff;
+ radiotap_hdr->data_retries = (try_count) ?
+- (1 + adapter->txretrycount - try_count) : 0;
+- libertas_upload_rx_packet(priv, adapter->currenttxskb);
+- adapter->currenttxskb = NULL;
+- priv->adapter->TxLockFlag = 0;
+- if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
++ (1 + priv->txretrycount - try_count) : 0;
+
-+static ssize_t lbs_highrssi_write(struct file *file, const char __user *userbuf,
- size_t count, loff_t *ppos)
- {
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- void *response_buf;
-- int res, cmd_len;
-- ssize_t pos = 0;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0) {
-- free_page(addr);
-- return res;
++
++ priv->currenttxskb->protocol = eth_type_trans(priv->currenttxskb,
++ priv->rtap_net_dev);
++ netif_rx(priv->currenttxskb);
++
++ priv->currenttxskb = NULL;
++
++ if (priv->connect_status == LBS_CONNECTED)
+ netif_wake_queue(priv->dev);
+- if (priv->mesh_dev)
+- netif_wake_queue(priv->mesh_dev);
- }
++
++ if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED))
++ netif_wake_queue(priv->mesh_dev);
+ }
+-EXPORT_SYMBOL_GPL(libertas_send_tx_feedback);
++EXPORT_SYMBOL_GPL(lbs_send_tx_feedback);
+diff --git a/drivers/net/wireless/libertas/types.h b/drivers/net/wireless/libertas/types.h
+index a43a5f6..f0d5795 100644
+--- a/drivers/net/wireless/libertas/types.h
++++ b/drivers/net/wireless/libertas/types.h
+@@ -1,8 +1,8 @@
+ /**
+ * This header file contains definition for global types
+ */
+-#ifndef _WLAN_TYPES_
+-#define _WLAN_TYPES_
++#ifndef _LBS_TYPES_H_
++#define _LBS_TYPES_H_
+
+ #include <linux/if_ether.h>
+ #include <asm/byteorder.h>
+@@ -201,22 +201,11 @@ struct mrvlietypes_powercapability {
+ s8 maxpower;
+ } __attribute__ ((packed));
+
+-struct mrvlietypes_rssithreshold {
++/* used in CMD_802_11_SUBSCRIBE_EVENT for SNR, RSSI and Failure */
++struct mrvlietypes_thresholds {
+ struct mrvlietypesheader header;
+- u8 rssivalue;
+- u8 rssifreq;
+-} __attribute__ ((packed));
-
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_GET);
-- pcmdptr->size = cpu_to_le16(sizeof(*event) + S_DS_GEN);
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
+-struct mrvlietypes_snrthreshold {
+- struct mrvlietypesheader header;
+- u8 snrvalue;
+- u8 snrfreq;
+-} __attribute__ ((packed));
-
-- pcmdptr = response_buf;
-+ return lbs_threshold_write(TLV_TYPE_RSSI_HIGH, CMD_SUBSCRIBE_RSSI_HIGH,
-+ file, userbuf, count, ppos);
-+}
+-struct mrvlietypes_failurecount {
+- struct mrvlietypesheader header;
+- u8 failvalue;
+- u8 Failfreq;
++ u8 value;
++ u8 freq;
+ } __attribute__ ((packed));
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
+ struct mrvlietypes_beaconsmissed {
+@@ -250,4 +239,4 @@ struct mrvlietypes_ledgpio {
+ struct led_pin ledpin[1];
+ } __attribute__ ((packed));
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
--
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- event = (void *)(response_buf + S_DS_GEN);
-- while (cmd_len < le16_to_cpu(pcmdptr->size)) {
-- struct mrvlietypesheader *header = (void *)(response_buf + cmd_len);
-- switch (header->type) {
-- struct mrvlietypes_snrthreshold *HighSnr;
-- case __constant_cpu_to_le16(TLV_TYPE_SNR_HIGH):
-- HighSnr = (void *)(response_buf + cmd_len);
-- pos += snprintf(buf+pos, len-pos, "%d %d %d\n",
-- HighSnr->snrvalue,
-- HighSnr->snrfreq,
-- (event->events & cpu_to_le16(0x0020))?1:0);
-- default:
-- cmd_len += sizeof(struct mrvlietypes_snrthreshold);
-- break;
-- }
-- }
-+static ssize_t lbs_highsnr_read(struct file *file, char __user *userbuf,
-+ size_t count, loff_t *ppos)
+-#endif /* _WLAN_TYPES_ */
++#endif
+diff --git a/drivers/net/wireless/libertas/wext.c b/drivers/net/wireless/libertas/wext.c
+index 395b788..e8bfc26 100644
+--- a/drivers/net/wireless/libertas/wext.c
++++ b/drivers/net/wireless/libertas/wext.c
+@@ -19,30 +19,47 @@
+ #include "join.h"
+ #include "wext.h"
+ #include "assoc.h"
++#include "cmd.h"
++
++
++static inline void lbs_postpone_association_work(struct lbs_private *priv)
+{
-+ return lbs_threshold_read(TLV_TYPE_SNR_HIGH, CMD_SUBSCRIBE_SNR_HIGH,
-+ file, userbuf, count, ppos);
++ if (priv->surpriseremoved)
++ return;
++ cancel_delayed_work(&priv->assoc_work);
++ queue_delayed_work(priv->work_thread, &priv->assoc_work, HZ / 2);
+}
-
-- kfree(response_buf);
-
-- res = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
-- free_page(addr);
-- return res;
-+static ssize_t lbs_highsnr_write(struct file *file, const char __user *userbuf,
-+ size_t count, loff_t *ppos)
++
++static inline void lbs_cancel_association_work(struct lbs_private *priv)
+{
-+ return lbs_threshold_write(TLV_TYPE_SNR_HIGH, CMD_SUBSCRIBE_SNR_HIGH,
-+ file, userbuf, count, ppos);
- }
++ cancel_delayed_work(&priv->assoc_work);
++ kfree(priv->pending_assoc_req);
++ priv->pending_assoc_req = NULL;
++}
--static ssize_t libertas_highsnr_write(struct file *file,
-- const char __user *userbuf,
-- size_t count, loff_t *ppos)
-+static ssize_t lbs_bcnmiss_read(struct file *file, char __user *userbuf,
-+ size_t count, loff_t *ppos)
+
+ /**
+ * @brief Find the channel frequency power info with specific channel
+ *
+- * @param adapter A pointer to wlan_adapter structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param band it can be BAND_A, BAND_G or BAND_B
+ * @param channel the channel for looking
+ * @return A pointer to struct chan_freq_power structure or NULL if not find.
+ */
+-struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * adapter,
+- u8 band, u16 channel)
++struct chan_freq_power *lbs_find_cfp_by_band_and_channel(
++ struct lbs_private *priv,
++ u8 band,
++ u16 channel)
{
-- wlan_private *priv = file->private_data;
-- wlan_adapter *adapter = priv->adapter;
-- ssize_t res, buf_size;
-- int value, freq, subscribed, cmd_len;
-- struct cmd_ctrl_node *pcmdnode;
-- struct cmd_ds_command *pcmdptr;
-- struct cmd_ds_802_11_subscribe_event *event;
-- struct mrvlietypes_snrthreshold *snr_threshold;
-- void *response_buf;
-- u16 event_bitmap;
-- u8 *ptr;
-- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-- char *buf = (char *)addr;
--
-- buf_size = min(count, len - 1);
-- if (copy_from_user(buf, userbuf, buf_size)) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-- res = sscanf(buf, "%d %d %d", &value, &freq, &subscribed);
-- if (res != 3) {
-- res = -EFAULT;
-- goto out_unlock;
-- }
-+ return lbs_threshold_read(TLV_TYPE_BCNMISS, CMD_SUBSCRIBE_BCNMISS,
-+ file, userbuf, count, ppos);
-+}
+ struct chan_freq_power *cfp = NULL;
+ struct region_channel *rc;
+- int count = sizeof(adapter->region_channel) /
+- sizeof(adapter->region_channel[0]);
+ int i, j;
-- event_bitmap = libertas_get_events_bitmap(priv);
+- for (j = 0; !cfp && (j < count); j++) {
+- rc = &adapter->region_channel[j];
++ for (j = 0; !cfp && (j < ARRAY_SIZE(priv->region_channel)); j++) {
++ rc = &priv->region_channel[j];
-- res = libertas_event_initcmd(priv, &response_buf, &pcmdnode, &pcmdptr);
-- if (res < 0)
-- goto out_unlock;
--
-- event = &pcmdptr->params.subscribe_event;
-- event->action = cpu_to_le16(CMD_ACT_SET);
-- pcmdptr->size = cpu_to_le16(S_DS_GEN +
-- sizeof(struct cmd_ds_802_11_subscribe_event) +
-- sizeof(struct mrvlietypes_snrthreshold));
-- cmd_len = S_DS_GEN + sizeof(struct cmd_ds_802_11_subscribe_event);
-- ptr = (u8*) pcmdptr+cmd_len;
-- snr_threshold = (struct mrvlietypes_snrthreshold *)(ptr);
-- snr_threshold->header.type = cpu_to_le16(TLV_TYPE_SNR_HIGH);
-- snr_threshold->header.len = cpu_to_le16(2);
-- snr_threshold->snrvalue = value;
-- snr_threshold->snrfreq = freq;
-- event_bitmap |= subscribed ? 0x0020 : 0x0;
-- event->events = cpu_to_le16(event_bitmap);
--
-- libertas_queue_cmd(adapter, pcmdnode, 1);
-- wake_up_interruptible(&priv->waitq);
--
-- /* Sleep until response is generated by FW */
-- wait_event_interruptible(pcmdnode->cmdwait_q,
-- pcmdnode->cmdwaitqwoken);
--
-- pcmdptr = response_buf;
--
-- if (pcmdptr->result) {
-- lbs_pr_err("%s: fail, result=%d\n", __func__,
-- le16_to_cpu(pcmdptr->result));
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
-+static ssize_t lbs_bcnmiss_write(struct file *file, const char __user *userbuf,
-+ size_t count, loff_t *ppos)
-+{
-+ return lbs_threshold_write(TLV_TYPE_BCNMISS, CMD_SUBSCRIBE_BCNMISS,
-+ file, userbuf, count, ppos);
-+}
+- if (adapter->enable11d)
+- rc = &adapter->universal_channel[j];
++ if (priv->enable11d)
++ rc = &priv->universal_channel[j];
+ if (!rc->valid || !rc->CFP)
+ continue;
+ if (rc->band != band)
+@@ -56,7 +73,7 @@ struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * ada
+ }
-- if (pcmdptr->command != cpu_to_le16(CMD_RET(CMD_802_11_SUBSCRIBE_EVENT))) {
-- lbs_pr_err("command response incorrect!\n");
-- kfree(response_buf);
-- free_page(addr);
-- return 0;
-- }
+ if (!cfp && channel)
+- lbs_deb_wext("libertas_find_cfp_by_band_and_channel: can't find "
++ lbs_deb_wext("lbs_find_cfp_by_band_and_channel: can't find "
+ "cfp by band %d / channel %d\n", band, channel);
-- res = count;
--out_unlock:
-- free_page(addr);
-- return res;
--}
+ return cfp;
+@@ -65,25 +82,25 @@ struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * ada
+ /**
+ * @brief Find the channel frequency power info with specific frequency
+ *
+- * @param adapter A pointer to wlan_adapter structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param band it can be BAND_A, BAND_G or BAND_B
+ * @param freq the frequency for looking
+ * @return A pointer to struct chan_freq_power structure or NULL if not find.
+ */
+-static struct chan_freq_power *find_cfp_by_band_and_freq(wlan_adapter * adapter,
+- u8 band, u32 freq)
++static struct chan_freq_power *find_cfp_by_band_and_freq(
++ struct lbs_private *priv,
++ u8 band,
++ u32 freq)
+ {
+ struct chan_freq_power *cfp = NULL;
+ struct region_channel *rc;
+- int count = sizeof(adapter->region_channel) /
+- sizeof(adapter->region_channel[0]);
+ int i, j;
--static ssize_t libertas_rdmac_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_rdmac_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+- for (j = 0; !cfp && (j < count); j++) {
+- rc = &adapter->region_channel[j];
++ for (j = 0; !cfp && (j < ARRAY_SIZE(priv->region_channel)); j++) {
++ rc = &priv->region_channel[j];
+
+- if (adapter->enable11d)
+- rc = &adapter->universal_channel[j];
++ if (priv->enable11d)
++ rc = &priv->universal_channel[j];
+ if (!rc->valid || !rc->CFP)
+ continue;
+ if (rc->band != band)
+@@ -107,22 +124,21 @@ static struct chan_freq_power *find_cfp_by_band_and_freq(wlan_adapter * adapter,
+ /**
+ * @brief Set Radio On/OFF
+ *
+- * @param priv A pointer to wlan_private structure
++ * @param priv A pointer to struct lbs_private structure
+ * @option Radio Option
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_radio_ioctl(wlan_private * priv, u8 option)
++static int lbs_radio_ioctl(struct lbs_private *priv, u8 option)
{
-- wlan_private *priv = file->private_data;
+ int ret = 0;
- wlan_adapter *adapter = priv->adapter;
-- struct wlan_offset_value offval;
-+ struct lbs_private *priv = file->private_data;
-+ struct lbs_offset_value offval;
- ssize_t pos = 0;
- int ret;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-@@ -1375,23 +603,23 @@ static ssize_t libertas_rdmac_read(struct file *file, char __user *userbuf,
- offval.offset = priv->mac_offset;
- offval.value = 0;
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_MAC_REG_ACCESS, 0,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
- pos += snprintf(buf+pos, len-pos, "MAC[0x%x] = 0x%08x\n",
-- priv->mac_offset, adapter->offsetvalue.value);
-+ priv->mac_offset, priv->offsetvalue.value);
+ lbs_deb_enter(LBS_DEB_WEXT);
- ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- free_page(addr);
- return ret;
+- if (adapter->radioon != option) {
++ if (priv->radioon != option) {
+ lbs_deb_wext("switching radio %s\n", option ? "on" : "off");
+- adapter->radioon = option;
++ priv->radioon = option;
+
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_RADIO_CONTROL,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP, 0, NULL);
+@@ -135,22 +151,23 @@ static int wlan_radio_ioctl(wlan_private * priv, u8 option)
+ /**
+ * @brief Copy active data rates based on adapter mode and status
+ *
+- * @param adapter A pointer to wlan_adapter structure
++ * @param priv A pointer to struct lbs_private structure
+ * @param rate The buf to return the active rates
+ */
+-static void copy_active_data_rates(wlan_adapter * adapter, u8 * rates)
++static void copy_active_data_rates(struct lbs_private *priv, u8 *rates)
+ {
+ lbs_deb_enter(LBS_DEB_WEXT);
+
+- if (adapter->connect_status != LIBERTAS_CONNECTED)
+- memcpy(rates, libertas_bg_rates, MAX_RATES);
++ if ((priv->connect_status != LBS_CONNECTED) &&
++ (priv->mesh_connect_status != LBS_CONNECTED))
++ memcpy(rates, lbs_bg_rates, MAX_RATES);
+ else
+- memcpy(rates, adapter->curbssparams.rates, MAX_RATES);
++ memcpy(rates, priv->curbssparams.rates, MAX_RATES);
+
+ lbs_deb_leave(LBS_DEB_WEXT);
}
--static ssize_t libertas_rdmac_write(struct file *file,
-+static ssize_t lbs_rdmac_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_get_name(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_name(struct net_device *dev, struct iw_request_info *info,
+ char *cwrq, char *extra)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
-@@ -1408,15 +636,15 @@ out_unlock:
- return res;
+
+@@ -163,22 +180,21 @@ static int wlan_get_name(struct net_device *dev, struct iw_request_info *info,
+ return 0;
}
--static ssize_t libertas_wrmac_write(struct file *file,
-+static ssize_t lbs_wrmac_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_get_freq(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_freq(struct net_device *dev, struct iw_request_info *info,
+ struct iw_freq *fwrq, char *extra)
{
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct chan_freq_power *cfp;
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- u32 offset, value;
-- struct wlan_offset_value offval;
-+ struct lbs_offset_value offval;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
+ lbs_deb_enter(LBS_DEB_WEXT);
-@@ -1433,7 +661,7 @@ static ssize_t libertas_wrmac_write(struct file *file,
+- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0,
+- adapter->curbssparams.channel);
++ cfp = lbs_find_cfp_by_band_and_channel(priv, 0,
++ priv->curbssparams.channel);
- offval.offset = offset;
- offval.value = value;
-- res = libertas_prepare_and_send_command(priv,
-+ res = lbs_prepare_and_send_command(priv,
- CMD_MAC_REG_ACCESS, 1,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
-@@ -1444,12 +672,11 @@ out_unlock:
- return res;
+ if (!cfp) {
+- if (adapter->curbssparams.channel)
++ if (priv->curbssparams.channel)
+ lbs_deb_wext("invalid channel %d\n",
+- adapter->curbssparams.channel);
++ priv->curbssparams.channel);
+ return -EINVAL;
+ }
+
+@@ -190,16 +206,15 @@ static int wlan_get_freq(struct net_device *dev, struct iw_request_info *info,
+ return 0;
}
--static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_rdbbp_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_get_wap(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_wap(struct net_device *dev, struct iw_request_info *info,
+ struct sockaddr *awrq, char *extra)
{
-- wlan_private *priv = file->private_data;
+- wlan_private *priv = dev->priv;
- wlan_adapter *adapter = priv->adapter;
-- struct wlan_offset_value offval;
-+ struct lbs_private *priv = file->private_data;
-+ struct lbs_offset_value offval;
- ssize_t pos = 0;
- int ret;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-@@ -1458,12 +685,12 @@ static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
- offval.offset = priv->bbp_offset;
- offval.value = 0;
++ struct lbs_private *priv = dev->priv;
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_BBP_REG_ACCESS, 0,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
- pos += snprintf(buf+pos, len-pos, "BBP[0x%x] = 0x%08x\n",
-- priv->bbp_offset, adapter->offsetvalue.value);
-+ priv->bbp_offset, priv->offsetvalue.value);
+ lbs_deb_enter(LBS_DEB_WEXT);
- ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- free_page(addr);
-@@ -1471,11 +698,11 @@ static ssize_t libertas_rdbbp_read(struct file *file, char __user *userbuf,
- return ret;
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- memcpy(awrq->sa_data, adapter->curbssparams.bssid, ETH_ALEN);
++ if (priv->connect_status == LBS_CONNECTED) {
++ memcpy(awrq->sa_data, priv->curbssparams.bssid, ETH_ALEN);
+ } else {
+ memset(awrq->sa_data, 0, ETH_ALEN);
+ }
+@@ -209,11 +224,10 @@ static int wlan_get_wap(struct net_device *dev, struct iw_request_info *info,
+ return 0;
}
--static ssize_t libertas_rdbbp_write(struct file *file,
-+static ssize_t lbs_rdbbp_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_set_nick(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_nick(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
-@@ -1492,15 +719,15 @@ out_unlock:
- return res;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+
+ lbs_deb_enter(LBS_DEB_WEXT);
+
+@@ -225,25 +239,24 @@ static int wlan_set_nick(struct net_device *dev, struct iw_request_info *info,
+ return -E2BIG;
+ }
+
+- mutex_lock(&adapter->lock);
+- memset(adapter->nodename, 0, sizeof(adapter->nodename));
+- memcpy(adapter->nodename, extra, dwrq->length);
+- mutex_unlock(&adapter->lock);
++ mutex_lock(&priv->lock);
++ memset(priv->nodename, 0, sizeof(priv->nodename));
++ memcpy(priv->nodename, extra, dwrq->length);
++ mutex_unlock(&priv->lock);
+
+ lbs_deb_leave(LBS_DEB_WEXT);
+ return 0;
}
--static ssize_t libertas_wrbbp_write(struct file *file,
-+static ssize_t lbs_wrbbp_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_get_nick(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_nick(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- u32 offset, value;
-- struct wlan_offset_value offval;
-+ struct lbs_offset_value offval;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
+ lbs_deb_enter(LBS_DEB_WEXT);
-@@ -1517,7 +744,7 @@ static ssize_t libertas_wrbbp_write(struct file *file,
+- dwrq->length = strlen(adapter->nodename);
+- memcpy(extra, adapter->nodename, dwrq->length);
++ dwrq->length = strlen(priv->nodename);
++ memcpy(extra, priv->nodename, dwrq->length);
+ extra[dwrq->length] = '\0';
- offval.offset = offset;
- offval.value = value;
-- res = libertas_prepare_and_send_command(priv,
-+ res = lbs_prepare_and_send_command(priv,
- CMD_BBP_REG_ACCESS, 1,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
-@@ -1528,12 +755,11 @@ out_unlock:
- return res;
+ dwrq->flags = 1; /* active */
+@@ -255,14 +268,13 @@ static int wlan_get_nick(struct net_device *dev, struct iw_request_info *info,
+ static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+
+ lbs_deb_enter(LBS_DEB_WEXT);
+
+ /* Use nickname to indicate that mesh is on */
+
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
++ if (priv->mesh_connect_status == LBS_CONNECTED) {
+ strncpy(extra, "Mesh", 12);
+ extra[12] = '\0';
+ dwrq->length = strlen(extra);
+@@ -277,25 +289,24 @@ static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
+ return 0;
}
--static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_rdrf_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_set_rts(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_rts(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
-- wlan_private *priv = file->private_data;
+ int ret = 0;
+- wlan_private *priv = dev->priv;
- wlan_adapter *adapter = priv->adapter;
-- struct wlan_offset_value offval;
-+ struct lbs_private *priv = file->private_data;
-+ struct lbs_offset_value offval;
- ssize_t pos = 0;
- int ret;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
-@@ -1542,12 +768,12 @@ static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
- offval.offset = priv->rf_offset;
- offval.value = 0;
++ struct lbs_private *priv = dev->priv;
+ u32 rthr = vwrq->value;
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_RF_REG_ACCESS, 0,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
- pos += snprintf(buf+pos, len-pos, "RF[0x%x] = 0x%08x\n",
-- priv->rf_offset, adapter->offsetvalue.value);
-+ priv->rf_offset, priv->offsetvalue.value);
+ lbs_deb_enter(LBS_DEB_WEXT);
- ret = simple_read_from_buffer(userbuf, count, ppos, buf, pos);
- free_page(addr);
-@@ -1555,11 +781,11 @@ static ssize_t libertas_rdrf_read(struct file *file, char __user *userbuf,
+ if (vwrq->disabled) {
+- adapter->rtsthsd = rthr = MRVDRV_RTS_MAX_VALUE;
++ priv->rtsthsd = rthr = MRVDRV_RTS_MAX_VALUE;
+ } else {
+ if (rthr < MRVDRV_RTS_MIN_VALUE || rthr > MRVDRV_RTS_MAX_VALUE)
+ return -EINVAL;
+- adapter->rtsthsd = rthr;
++ priv->rtsthsd = rthr;
+ }
+
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
+ CMD_ACT_SET, CMD_OPTION_WAITFORRSP,
+ OID_802_11_RTS_THRESHOLD, &rthr);
+
+@@ -303,23 +314,22 @@ static int wlan_set_rts(struct net_device *dev, struct iw_request_info *info,
return ret;
}
--static ssize_t libertas_rdrf_write(struct file *file,
-+static ssize_t lbs_rdrf_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_get_rts(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_rts(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
-@@ -1576,15 +802,15 @@ out_unlock:
- return res;
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+
+ lbs_deb_enter(LBS_DEB_WEXT);
+
+- adapter->rtsthsd = 0;
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
++ priv->rtsthsd = 0;
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
+ CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
+ OID_802_11_RTS_THRESHOLD, NULL);
+ if (ret)
+ goto out;
+
+- vwrq->value = adapter->rtsthsd;
++ vwrq->value = priv->rtsthsd;
+ vwrq->disabled = ((vwrq->value < MRVDRV_RTS_MIN_VALUE)
+ || (vwrq->value > MRVDRV_RTS_MAX_VALUE));
+ vwrq->fixed = 1;
+@@ -329,26 +339,25 @@ out:
+ return ret;
}
--static ssize_t libertas_wrrf_write(struct file *file,
-+static ssize_t lbs_wrrf_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+-static int wlan_set_frag(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_frag(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
+ int ret = 0;
+ u32 fthr = vwrq->value;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
-- wlan_private *priv = file->private_data;
-+ struct lbs_private *priv = file->private_data;
- ssize_t res, buf_size;
- u32 offset, value;
-- struct wlan_offset_value offval;
-+ struct lbs_offset_value offval;
- unsigned long addr = get_zeroed_page(GFP_KERNEL);
- char *buf = (char *)addr;
+ lbs_deb_enter(LBS_DEB_WEXT);
-@@ -1601,7 +827,7 @@ static ssize_t libertas_wrrf_write(struct file *file,
+ if (vwrq->disabled) {
+- adapter->fragthsd = fthr = MRVDRV_FRAG_MAX_VALUE;
++ priv->fragthsd = fthr = MRVDRV_FRAG_MAX_VALUE;
+ } else {
+ if (fthr < MRVDRV_FRAG_MIN_VALUE
+ || fthr > MRVDRV_FRAG_MAX_VALUE)
+ return -EINVAL;
+- adapter->fragthsd = fthr;
++ priv->fragthsd = fthr;
+ }
- offval.offset = offset;
- offval.value = value;
-- res = libertas_prepare_and_send_command(priv,
-+ res = lbs_prepare_and_send_command(priv,
- CMD_RF_REG_ACCESS, 1,
- CMD_OPTION_WAITFORRSP, 0, &offval);
- mdelay(10);
-@@ -1619,69 +845,69 @@ out_unlock:
- .write = (fwrite), \
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
+ CMD_ACT_SET, CMD_OPTION_WAITFORRSP,
+ OID_802_11_FRAGMENTATION_THRESHOLD, &fthr);
+
+@@ -356,24 +365,23 @@ static int wlan_set_frag(struct net_device *dev, struct iw_request_info *info,
+ return ret;
}
--struct libertas_debugfs_files {
-+struct lbs_debugfs_files {
- char *name;
- int perm;
- struct file_operations fops;
- };
+-static int wlan_get_frag(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_frag(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
--static struct libertas_debugfs_files debugfs_files[] = {
-- { "info", 0444, FOPS(libertas_dev_info, write_file_dummy), },
-- { "getscantable", 0444, FOPS(libertas_getscantable,
-+static struct lbs_debugfs_files debugfs_files[] = {
-+ { "info", 0444, FOPS(lbs_dev_info, write_file_dummy), },
-+ { "getscantable", 0444, FOPS(lbs_getscantable,
- write_file_dummy), },
-- { "sleepparams", 0644, FOPS(libertas_sleepparams_read,
-- libertas_sleepparams_write), },
-- { "extscan", 0600, FOPS(NULL, libertas_extscan), },
-- { "setuserscan", 0600, FOPS(NULL, libertas_setuserscan), },
-+ { "sleepparams", 0644, FOPS(lbs_sleepparams_read,
-+ lbs_sleepparams_write), },
-+ { "extscan", 0600, FOPS(NULL, lbs_extscan), },
-+ { "setuserscan", 0600, FOPS(NULL, lbs_setuserscan), },
- };
+ lbs_deb_enter(LBS_DEB_WEXT);
--static struct libertas_debugfs_files debugfs_events_files[] = {
-- {"low_rssi", 0644, FOPS(libertas_lowrssi_read,
-- libertas_lowrssi_write), },
-- {"low_snr", 0644, FOPS(libertas_lowsnr_read,
-- libertas_lowsnr_write), },
-- {"failure_count", 0644, FOPS(libertas_failcount_read,
-- libertas_failcount_write), },
-- {"beacon_missed", 0644, FOPS(libertas_bcnmiss_read,
-- libertas_bcnmiss_write), },
-- {"high_rssi", 0644, FOPS(libertas_highrssi_read,
-- libertas_highrssi_write), },
-- {"high_snr", 0644, FOPS(libertas_highsnr_read,
-- libertas_highsnr_write), },
-+static struct lbs_debugfs_files debugfs_events_files[] = {
-+ {"low_rssi", 0644, FOPS(lbs_lowrssi_read,
-+ lbs_lowrssi_write), },
-+ {"low_snr", 0644, FOPS(lbs_lowsnr_read,
-+ lbs_lowsnr_write), },
-+ {"failure_count", 0644, FOPS(lbs_failcount_read,
-+ lbs_failcount_write), },
-+ {"beacon_missed", 0644, FOPS(lbs_bcnmiss_read,
-+ lbs_bcnmiss_write), },
-+ {"high_rssi", 0644, FOPS(lbs_highrssi_read,
-+ lbs_highrssi_write), },
-+ {"high_snr", 0644, FOPS(lbs_highsnr_read,
-+ lbs_highsnr_write), },
- };
+- adapter->fragthsd = 0;
+- ret = libertas_prepare_and_send_command(priv,
++ priv->fragthsd = 0;
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_SNMP_MIB,
+ CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
+ OID_802_11_FRAGMENTATION_THRESHOLD, NULL);
+ if (ret)
+ goto out;
--static struct libertas_debugfs_files debugfs_regs_files[] = {
-- {"rdmac", 0644, FOPS(libertas_rdmac_read, libertas_rdmac_write), },
-- {"wrmac", 0600, FOPS(NULL, libertas_wrmac_write), },
-- {"rdbbp", 0644, FOPS(libertas_rdbbp_read, libertas_rdbbp_write), },
-- {"wrbbp", 0600, FOPS(NULL, libertas_wrbbp_write), },
-- {"rdrf", 0644, FOPS(libertas_rdrf_read, libertas_rdrf_write), },
-- {"wrrf", 0600, FOPS(NULL, libertas_wrrf_write), },
-+static struct lbs_debugfs_files debugfs_regs_files[] = {
-+ {"rdmac", 0644, FOPS(lbs_rdmac_read, lbs_rdmac_write), },
-+ {"wrmac", 0600, FOPS(NULL, lbs_wrmac_write), },
-+ {"rdbbp", 0644, FOPS(lbs_rdbbp_read, lbs_rdbbp_write), },
-+ {"wrbbp", 0600, FOPS(NULL, lbs_wrbbp_write), },
-+ {"rdrf", 0644, FOPS(lbs_rdrf_read, lbs_rdrf_write), },
-+ {"wrrf", 0600, FOPS(NULL, lbs_wrrf_write), },
- };
+- vwrq->value = adapter->fragthsd;
++ vwrq->value = priv->fragthsd;
+ vwrq->disabled = ((vwrq->value < MRVDRV_FRAG_MIN_VALUE)
+ || (vwrq->value > MRVDRV_FRAG_MAX_VALUE));
+ vwrq->fixed = 1;
+@@ -383,15 +391,14 @@ out:
+ return ret;
+ }
--void libertas_debugfs_init(void)
-+void lbs_debugfs_init(void)
+-static int wlan_get_mode(struct net_device *dev,
++static int lbs_get_mode(struct net_device *dev,
+ struct iw_request_info *info, u32 * uwrq, char *extra)
{
-- if (!libertas_dir)
-- libertas_dir = debugfs_create_dir("libertas_wireless", NULL);
-+ if (!lbs_dir)
-+ lbs_dir = debugfs_create_dir("lbs_wireless", NULL);
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
- return;
- }
+ lbs_deb_enter(LBS_DEB_WEXT);
--void libertas_debugfs_remove(void)
-+void lbs_debugfs_remove(void)
- {
-- if (libertas_dir)
-- debugfs_remove(libertas_dir);
-+ if (lbs_dir)
-+ debugfs_remove(lbs_dir);
- return;
+- *uwrq = adapter->mode;
++ *uwrq = priv->mode;
+
+ lbs_deb_leave(LBS_DEB_WEXT);
+ return 0;
+@@ -409,17 +416,16 @@ static int mesh_wlan_get_mode(struct net_device *dev,
+ return 0;
}
--void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev)
-+void lbs_debugfs_init_one(struct lbs_private *priv, struct net_device *dev)
+-static int wlan_get_txpow(struct net_device *dev,
++static int lbs_get_txpow(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
- int i;
-- struct libertas_debugfs_files *files;
-- if (!libertas_dir)
-+ struct lbs_debugfs_files *files;
-+ if (!lbs_dir)
- goto exit;
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
-- priv->debugfs_dir = debugfs_create_dir(dev->name, libertas_dir);
-+ priv->debugfs_dir = debugfs_create_dir(dev->name, lbs_dir);
- if (!priv->debugfs_dir)
- goto exit;
+ lbs_deb_enter(LBS_DEB_WEXT);
-@@ -1721,13 +947,13 @@ void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev)
- }
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_RF_TX_POWER,
+ CMD_ACT_TX_POWER_OPT_GET,
+ CMD_OPTION_WAITFORRSP, 0, NULL);
+@@ -427,10 +433,10 @@ static int wlan_get_txpow(struct net_device *dev,
+ if (ret)
+ goto out;
- #ifdef PROC_DEBUG
-- libertas_debug_init(priv, dev);
-+ lbs_debug_init(priv, dev);
- #endif
- exit:
- return;
+- lbs_deb_wext("tx power level %d dbm\n", adapter->txpowerlevel);
+- vwrq->value = adapter->txpowerlevel;
++ lbs_deb_wext("tx power level %d dbm\n", priv->txpowerlevel);
++ vwrq->value = priv->txpowerlevel;
+ vwrq->fixed = 1;
+- if (adapter->radioon) {
++ if (priv->radioon) {
+ vwrq->disabled = 0;
+ vwrq->flags = IW_TXPOW_DBM;
+ } else {
+@@ -442,12 +448,11 @@ out:
+ return ret;
}
--void libertas_debugfs_remove_one(wlan_private *priv)
-+void lbs_debugfs_remove_one(struct lbs_private *priv)
+-static int wlan_set_retry(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_retry(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
- int i;
-
-@@ -1754,8 +980,8 @@ void libertas_debugfs_remove_one(wlan_private *priv)
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
- #ifdef PROC_DEBUG
+ lbs_deb_enter(LBS_DEB_WEXT);
--#define item_size(n) (FIELD_SIZEOF(wlan_adapter, n))
--#define item_addr(n) (offsetof(wlan_adapter, n))
-+#define item_size(n) (FIELD_SIZEOF(struct lbs_private, n))
-+#define item_addr(n) (offsetof(struct lbs_private, n))
+@@ -460,9 +465,9 @@ static int wlan_set_retry(struct net_device *dev, struct iw_request_info *info,
+ return -EINVAL;
+ /* Adding 1 to convert retry count to try count */
+- adapter->txretrycount = vwrq->value + 1;
++ priv->txretrycount = vwrq->value + 1;
- struct debug_data {
-@@ -1764,7 +990,7 @@ struct debug_data {
- size_t addr;
- };
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
++ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
+ CMD_ACT_SET,
+ CMD_OPTION_WAITFORRSP,
+ OID_802_11_TX_RETRYCOUNT, NULL);
+@@ -478,17 +483,16 @@ out:
+ return ret;
+ }
--/* To debug any member of wlan_adapter, simply add one line here.
-+/* To debug any member of struct lbs_private, simply add one line here.
- */
- static struct debug_data items[] = {
- {"intcounter", item_size(intcounter), item_addr(intcounter)},
-@@ -1785,7 +1011,7 @@ static int num_of_items = ARRAY_SIZE(items);
- * @param data data to output
- * @return number of output data
- */
--static ssize_t wlan_debugfs_read(struct file *file, char __user *userbuf,
-+static ssize_t lbs_debugfs_read(struct file *file, char __user *userbuf,
- size_t count, loff_t *ppos)
- {
- int val = 0;
-@@ -1829,7 +1055,7 @@ static ssize_t wlan_debugfs_read(struct file *file, char __user *userbuf,
- * @param data data to write
- * @return number of data
- */
--static ssize_t wlan_debugfs_write(struct file *f, const char __user *buf,
-+static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
- size_t cnt, loff_t *ppos)
+-static int wlan_get_retry(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_retry(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
{
- int r, i;
-@@ -1881,21 +1107,21 @@ static ssize_t wlan_debugfs_write(struct file *f, const char __user *buf,
- return (ssize_t)cnt;
- }
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int ret = 0;
--static struct file_operations libertas_debug_fops = {
-+static struct file_operations lbs_debug_fops = {
- .owner = THIS_MODULE,
- .open = open_file_generic,
-- .write = wlan_debugfs_write,
-- .read = wlan_debugfs_read,
-+ .write = lbs_debugfs_write,
-+ .read = lbs_debugfs_read,
- };
+ lbs_deb_enter(LBS_DEB_WEXT);
- /**
- * @brief create debug proc file
- *
-- * @param priv pointer wlan_private
-+ * @param priv pointer struct lbs_private
- * @param dev pointer net_device
- * @return N/A
+- adapter->txretrycount = 0;
+- ret = libertas_prepare_and_send_command(priv,
++ priv->txretrycount = 0;
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_SNMP_MIB,
+ CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
+ OID_802_11_TX_RETRYCOUNT, NULL);
+@@ -499,7 +503,7 @@ static int wlan_get_retry(struct net_device *dev, struct iw_request_info *info,
+ if (!vwrq->flags) {
+ vwrq->flags = IW_RETRY_LIMIT;
+ /* Subtract 1 to convert try count to retry count */
+- vwrq->value = adapter->txretrycount - 1;
++ vwrq->value = priv->txretrycount - 1;
+ }
+
+ out:
+@@ -546,12 +550,11 @@ static inline void sort_channels(struct iw_freq *freq, int num)
+ * @param extra A pointer to extra data buf
+ * @return 0 --success, otherwise fail
*/
--static void libertas_debug_init(wlan_private * priv, struct net_device *dev)
-+static void lbs_debug_init(struct lbs_private *priv, struct net_device *dev)
+-static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_range(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
- int i;
+ int i, j;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct iw_range *range = (struct iw_range *)extra;
+ struct chan_freq_power *cfp;
+ u8 rates[MAX_RATES + 1];
+@@ -567,7 +570,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
+ range->max_nwid = 0;
-@@ -1903,11 +1129,10 @@ static void libertas_debug_init(wlan_private * priv, struct net_device *dev)
- return;
+ memset(rates, 0, sizeof(rates));
+- copy_active_data_rates(adapter, rates);
++ copy_active_data_rates(priv, rates);
+ range->num_bitrates = strnlen(rates, IW_MAX_BITRATES);
+ for (i = 0; i < range->num_bitrates; i++)
+ range->bitrate[i] = rates[i] * 500000;
+@@ -576,13 +579,14 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
+ range->num_bitrates);
- for (i = 0; i < num_of_items; i++)
-- items[i].addr += (size_t) priv->adapter;
-+ items[i].addr += (size_t) priv;
+ range->num_frequency = 0;
+- if (priv->adapter->enable11d &&
+- adapter->connect_status == LIBERTAS_CONNECTED) {
++ if (priv->enable11d &&
++ (priv->connect_status == LBS_CONNECTED ||
++ priv->mesh_connect_status == LBS_CONNECTED)) {
+ u8 chan_no;
+ u8 band;
- priv->debugfs_debug = debugfs_create_file("debug", 0644,
- priv->debugfs_dir, &items[0],
-- &libertas_debug_fops);
-+ &lbs_debug_fops);
+ struct parsed_region_chan_11d *parsed_region_chan =
+- &adapter->parsed_region_chan;
++ &priv->parsed_region_chan;
+
+ if (parsed_region_chan == NULL) {
+ lbs_deb_wext("11d: parsed_region_chan is NULL\n");
+@@ -598,7 +602,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
+ lbs_deb_wext("chan_no %d\n", chan_no);
+ range->freq[range->num_frequency].i = (long)chan_no;
+ range->freq[range->num_frequency].m =
+- (long)libertas_chan_2_freq(chan_no, band) * 100000;
++ (long)lbs_chan_2_freq(chan_no, band) * 100000;
+ range->freq[range->num_frequency].e = 1;
+ range->num_frequency++;
+ }
+@@ -606,13 +610,12 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
+ }
+ if (!flag) {
+ for (j = 0; (range->num_frequency < IW_MAX_FREQUENCIES)
+- && (j < sizeof(adapter->region_channel)
+- / sizeof(adapter->region_channel[0])); j++) {
+- cfp = adapter->region_channel[j].CFP;
++ && (j < ARRAY_SIZE(priv->region_channel)); j++) {
++ cfp = priv->region_channel[j].CFP;
+ for (i = 0; (range->num_frequency < IW_MAX_FREQUENCIES)
+- && adapter->region_channel[j].valid
++ && priv->region_channel[j].valid
+ && cfp
+- && (i < adapter->region_channel[j].nrcfp); i++) {
++ && (i < priv->region_channel[j].nrcfp); i++) {
+ range->freq[range->num_frequency].i =
+ (long)cfp->channel;
+ range->freq[range->num_frequency].m =
+@@ -712,7 +715,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
+ IW_EVENT_CAPA_MASK(SIOCGIWSCAN));
+ range->event_capa[1] = IW_EVENT_CAPA_K_1;
+
+- if (adapter->fwcapinfo & FW_CAPINFO_WPA) {
++ if (priv->fwcapinfo & FW_CAPINFO_WPA) {
+ range->enc_capa = IW_ENC_CAPA_WPA
+ | IW_ENC_CAPA_WPA2
+ | IW_ENC_CAPA_CIPHER_TKIP
+@@ -724,22 +727,28 @@ out:
+ return 0;
}
- #endif
--
-diff --git a/drivers/net/wireless/libertas/debugfs.h b/drivers/net/wireless/libertas/debugfs.h
-index 880a11b..f2b9c7f 100644
---- a/drivers/net/wireless/libertas/debugfs.h
-+++ b/drivers/net/wireless/libertas/debugfs.h
-@@ -1,6 +1,10 @@
--void libertas_debugfs_init(void);
--void libertas_debugfs_remove(void);
-+#ifndef _LBS_DEBUGFS_H_
-+#define _LBS_DEBUGFS_H_
--void libertas_debugfs_init_one(wlan_private *priv, struct net_device *dev);
--void libertas_debugfs_remove_one(wlan_private *priv);
-+void lbs_debugfs_init(void);
-+void lbs_debugfs_remove(void);
+-static int wlan_set_power(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_power(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
-+void lbs_debugfs_init_one(struct lbs_private *priv, struct net_device *dev);
-+void lbs_debugfs_remove_one(struct lbs_private *priv);
+ lbs_deb_enter(LBS_DEB_WEXT);
+
++ if (!priv->ps_supported) {
++ if (vwrq->disabled)
++ return 0;
++ else
++ return -EINVAL;
++ }
+
-+#endif
-diff --git a/drivers/net/wireless/libertas/decl.h b/drivers/net/wireless/libertas/decl.h
-index 87fea9d..aaacd9b 100644
---- a/drivers/net/wireless/libertas/decl.h
-+++ b/drivers/net/wireless/libertas/decl.h
-@@ -3,80 +3,74 @@
- * functions defined in other source files
- */
+ /* PS is currently supported only in Infrastructure mode
+ * Remove this check if it is to be supported in IBSS mode also
+ */
--#ifndef _WLAN_DECL_H_
--#define _WLAN_DECL_H_
-+#ifndef _LBS_DECL_H_
-+#define _LBS_DECL_H_
+ if (vwrq->disabled) {
+- adapter->psmode = WLAN802_11POWERMODECAM;
+- if (adapter->psstate != PS_STATE_FULL_POWER) {
+- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
++ priv->psmode = LBS802_11POWERMODECAM;
++ if (priv->psstate != PS_STATE_FULL_POWER) {
++ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
+ }
- #include <linux/device.h>
+ return 0;
+@@ -754,33 +763,32 @@ static int wlan_set_power(struct net_device *dev, struct iw_request_info *info,
+ return -EINVAL;
+ }
- #include "defs.h"
+- if (adapter->psmode != WLAN802_11POWERMODECAM) {
++ if (priv->psmode != LBS802_11POWERMODECAM) {
+ return 0;
+ }
- /** Function Prototype Declaration */
--struct wlan_private;
-+struct lbs_private;
- struct sk_buff;
- struct net_device;
--
--int libertas_set_mac_packet_filter(wlan_private * priv);
--
--void libertas_send_tx_feedback(wlan_private * priv);
--
--int libertas_free_cmd_buffer(wlan_private * priv);
- struct cmd_ctrl_node;
--struct cmd_ctrl_node *libertas_get_free_cmd_ctrl_node(wlan_private * priv);
-+struct cmd_ds_command;
+- adapter->psmode = WLAN802_11POWERMODEMAX_PSP;
++ priv->psmode = LBS802_11POWERMODEMAX_PSP;
--void libertas_set_cmd_ctrl_node(wlan_private * priv,
-- struct cmd_ctrl_node *ptempnode,
-- u32 cmd_oid, u16 wait_option, void *pdata_buf);
-+int lbs_set_mac_packet_filter(struct lbs_private *priv);
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- libertas_ps_sleep(priv, CMD_OPTION_WAITFORRSP);
++ if (priv->connect_status == LBS_CONNECTED) {
++ lbs_ps_sleep(priv, CMD_OPTION_WAITFORRSP);
+ }
--int libertas_prepare_and_send_command(wlan_private * priv,
-- u16 cmd_no,
-- u16 cmd_action,
-- u16 wait_option, u32 cmd_oid, void *pdata_buf);
-+void lbs_send_tx_feedback(struct lbs_private *priv);
+ lbs_deb_leave(LBS_DEB_WEXT);
+ return 0;
+ }
--void libertas_queue_cmd(wlan_adapter * adapter, struct cmd_ctrl_node *cmdnode, u8 addtail);
-+int lbs_free_cmd_buffer(struct lbs_private *priv);
+-static int wlan_get_power(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_power(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int mode;
--int libertas_allocate_cmd_buffer(wlan_private * priv);
--int libertas_execute_next_command(wlan_private * priv);
--int libertas_process_event(wlan_private * priv);
--void libertas_interrupt(struct net_device *);
--int libertas_set_radio_control(wlan_private * priv);
--u32 libertas_fw_index_to_data_rate(u8 index);
--u8 libertas_data_rate_to_fw_index(u32 rate);
--void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen);
-+int lbs_prepare_and_send_command(struct lbs_private *priv,
-+ u16 cmd_no,
-+ u16 cmd_action,
-+ u16 wait_option, u32 cmd_oid, void *pdata_buf);
+ lbs_deb_enter(LBS_DEB_WEXT);
--void libertas_upload_rx_packet(wlan_private * priv, struct sk_buff *skb);
-+int lbs_allocate_cmd_buffer(struct lbs_private *priv);
-+int lbs_execute_next_command(struct lbs_private *priv);
-+int lbs_process_event(struct lbs_private *priv);
-+void lbs_interrupt(struct lbs_private *priv);
-+int lbs_set_radio_control(struct lbs_private *priv);
-+u32 lbs_fw_index_to_data_rate(u8 index);
-+u8 lbs_data_rate_to_fw_index(u32 rate);
-+void lbs_get_fwversion(struct lbs_private *priv,
-+ char *fwversion,
-+ int maxlen);
+- mode = adapter->psmode;
++ mode = priv->psmode;
- /** The proc fs interface */
--int libertas_process_rx_command(wlan_private * priv);
--int libertas_process_tx(wlan_private * priv, struct sk_buff *skb);
--void __libertas_cleanup_and_insert_cmd(wlan_private * priv,
-- struct cmd_ctrl_node *ptempcmd);
--
--int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band);
--
--int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *);
-+int lbs_process_rx_command(struct lbs_private *priv);
-+void lbs_complete_command(struct lbs_private *priv, struct cmd_ctrl_node *cmd,
-+ int result);
-+int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev);
-+int lbs_set_regiontable(struct lbs_private *priv, u8 region, u8 band);
+- if ((vwrq->disabled = (mode == WLAN802_11POWERMODECAM))
+- || adapter->connect_status == LIBERTAS_DISCONNECTED)
++ if ((vwrq->disabled = (mode == LBS802_11POWERMODECAM))
++ || priv->connect_status == LBS_DISCONNECTED)
+ {
+ goto out;
+ }
+@@ -792,7 +800,7 @@ out:
+ return 0;
+ }
--void libertas_ps_sleep(wlan_private * priv, int wait_option);
--void libertas_ps_confirm_sleep(wlan_private * priv, u16 psmode);
--void libertas_ps_wakeup(wlan_private * priv, int wait_option);
-+int lbs_process_rxed_packet(struct lbs_private *priv, struct sk_buff *);
+-static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
++static struct iw_statistics *lbs_get_wireless_stats(struct net_device *dev)
+ {
+ enum {
+ POOR = 30,
+@@ -802,8 +810,7 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
+ EXCELLENT = 95,
+ PERFECT = 100
+ };
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ u32 rssi_qual;
+ u32 tx_qual;
+ u32 quality = 0;
+@@ -813,22 +820,23 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
--void libertas_tx_runqueue(wlan_private *priv);
-+void lbs_ps_sleep(struct lbs_private *priv, int wait_option);
-+void lbs_ps_confirm_sleep(struct lbs_private *priv, u16 psmode);
-+void lbs_ps_wakeup(struct lbs_private *priv, int wait_option);
+ lbs_deb_enter(LBS_DEB_WEXT);
--struct chan_freq_power *libertas_find_cfp_by_band_and_channel(
-- wlan_adapter * adapter, u8 band, u16 channel);
-+struct chan_freq_power *lbs_find_cfp_by_band_and_channel(
-+ struct lbs_private *priv,
-+ u8 band,
-+ u16 channel);
+- priv->wstats.status = adapter->mode;
++ priv->wstats.status = priv->mode;
--void libertas_mac_event_disconnected(wlan_private * priv);
-+void lbs_mac_event_disconnected(struct lbs_private *priv);
+ /* If we're not associated, all quality values are meaningless */
+- if (adapter->connect_status != LIBERTAS_CONNECTED)
++ if ((priv->connect_status != LBS_CONNECTED) &&
++ (priv->mesh_connect_status != LBS_CONNECTED))
+ goto out;
--void libertas_send_iwevcustom_event(wlan_private * priv, s8 * str);
-+void lbs_send_iwevcustom_event(struct lbs_private *priv, s8 *str);
+ /* Quality by RSSI */
+ priv->wstats.qual.level =
+- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_NOAVG],
+- adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
++ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_NOAVG],
++ priv->NF[TYPE_BEACON][TYPE_NOAVG]);
- /* main.c */
--struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band,
-- int *cfp_no);
--wlan_private *libertas_add_card(void *card, struct device *dmdev);
--int libertas_remove_card(wlan_private *priv);
--int libertas_start_card(wlan_private *priv);
--int libertas_stop_card(wlan_private *priv);
--int libertas_add_mesh(wlan_private *priv, struct device *dev);
--void libertas_remove_mesh(wlan_private *priv);
--int libertas_reset_device(wlan_private *priv);
--
--#endif /* _WLAN_DECL_H_ */
-+struct chan_freq_power *lbs_get_region_cfp_table(u8 region,
-+ u8 band,
-+ int *cfp_no);
-+struct lbs_private *lbs_add_card(void *card, struct device *dmdev);
-+int lbs_remove_card(struct lbs_private *priv);
-+int lbs_start_card(struct lbs_private *priv);
-+int lbs_stop_card(struct lbs_private *priv);
-+int lbs_reset_device(struct lbs_private *priv);
-+void lbs_host_to_card_done(struct lbs_private *priv);
-+
-+int lbs_update_channel(struct lbs_private *priv);
-+#endif
-diff --git a/drivers/net/wireless/libertas/defs.h b/drivers/net/wireless/libertas/defs.h
-index 3a0c9be..3053cc2 100644
---- a/drivers/net/wireless/libertas/defs.h
-+++ b/drivers/net/wireless/libertas/defs.h
-@@ -2,8 +2,8 @@
- * This header file contains global constant/enum definitions,
- * global variable declaration.
- */
--#ifndef _WLAN_DEFS_H_
--#define _WLAN_DEFS_H_
-+#ifndef _LBS_DEFS_H_
-+#define _LBS_DEFS_H_
+- if (adapter->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
++ if (priv->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
+ priv->wstats.qual.noise = MRVDRV_NF_DEFAULT_SCAN_VALUE;
+ } else {
+ priv->wstats.qual.noise =
+- CAL_NF(adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
++ CAL_NF(priv->NF[TYPE_BEACON][TYPE_NOAVG]);
+ }
- #include <linux/spinlock.h>
+ lbs_deb_wext("signal level %#x\n", priv->wstats.qual.level);
+@@ -852,7 +860,7 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
+ /* Quality by TX errors */
+ priv->wstats.discard.retries = priv->stats.tx_errors;
-@@ -41,11 +41,11 @@
- #define LBS_DEB_HEX 0x00200000
- #define LBS_DEB_SDIO 0x00400000
+- tx_retries = le32_to_cpu(adapter->logmsg.retry);
++ tx_retries = le32_to_cpu(priv->logmsg.retry);
--extern unsigned int libertas_debug;
-+extern unsigned int lbs_debug;
+ if (tx_retries > 75)
+ tx_qual = (90 - tx_retries) * POOR / 15;
+@@ -868,10 +876,10 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
+ (PERFECT - VERY_GOOD) / 50 + VERY_GOOD;
+ quality = min(quality, tx_qual);
- #ifdef DEBUG
- #define LBS_DEB_LL(grp, grpnam, fmt, args...) \
--do { if ((libertas_debug & (grp)) == (grp)) \
-+do { if ((lbs_debug & (grp)) == (grp)) \
- printk(KERN_DEBUG DRV_NAME grpnam "%s: " fmt, \
- in_interrupt() ? " (INT)" : "", ## args); } while (0)
- #else
-@@ -96,8 +96,8 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
- int i = 0;
+- priv->wstats.discard.code = le32_to_cpu(adapter->logmsg.wepundecryptable);
+- priv->wstats.discard.fragment = le32_to_cpu(adapter->logmsg.rxfrag);
++ priv->wstats.discard.code = le32_to_cpu(priv->logmsg.wepundecryptable);
++ priv->wstats.discard.fragment = le32_to_cpu(priv->logmsg.rxfrag);
+ priv->wstats.discard.retries = tx_retries;
+- priv->wstats.discard.misc = le32_to_cpu(adapter->logmsg.ackfailure);
++ priv->wstats.discard.misc = le32_to_cpu(priv->logmsg.ackfailure);
- if (len &&
-- (libertas_debug & LBS_DEB_HEX) &&
-- (libertas_debug & grp))
-+ (lbs_debug & LBS_DEB_HEX) &&
-+ (lbs_debug & grp))
- {
- for (i = 1; i <= len; i++) {
- if ((i & 0xf) == 1) {
-@@ -132,15 +132,22 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
- */
+ /* Calculate quality */
+ priv->wstats.qual.qual = min_t(u8, quality, 100);
+@@ -879,9 +887,9 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
+ stats_valid = 1;
- #define MRVDRV_MAX_MULTICAST_LIST_SIZE 32
--#define MRVDRV_NUM_OF_CMD_BUFFER 10
--#define MRVDRV_SIZE_OF_CMD_BUFFER (2 * 1024)
-+#define LBS_NUM_CMD_BUFFERS 10
-+#define LBS_CMD_BUFFER_SIZE (2 * 1024)
- #define MRVDRV_MAX_CHANNEL_SIZE 14
- #define MRVDRV_ASSOCIATION_TIME_OUT 255
- #define MRVDRV_SNAP_HEADER_LEN 8
+ /* update stats asynchronously for future calls */
+- libertas_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
++ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
+ 0, 0, NULL);
+- libertas_prepare_and_send_command(priv, CMD_802_11_GET_LOG, 0,
++ lbs_prepare_and_send_command(priv, CMD_802_11_GET_LOG, 0,
+ 0, 0, NULL);
+ out:
+ if (!stats_valid) {
+@@ -901,19 +909,18 @@ out:
--#define WLAN_UPLD_SIZE 2312
-+#define LBS_UPLD_SIZE 2312
- #define DEV_NAME_LEN 32
+ }
-+/* Wake criteria for HOST_SLEEP_CFG command */
-+#define EHS_WAKE_ON_BROADCAST_DATA 0x0001
-+#define EHS_WAKE_ON_UNICAST_DATA 0x0002
-+#define EHS_WAKE_ON_MAC_EVENT 0x0004
-+#define EHS_WAKE_ON_MULTICAST_DATA 0x0008
-+#define EHS_REMOVE_WAKEUP 0xFFFFFFFF
-+
- /** Misc constants */
- /* This section defines 802.11 specific contants */
+-static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_freq(struct net_device *dev, struct iw_request_info *info,
+ struct iw_freq *fwrq, char *extra)
+ {
+ int ret = -EINVAL;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct chan_freq_power *cfp;
+ struct assoc_request * assoc_req;
-@@ -257,17 +264,11 @@ static inline void lbs_deb_hex(unsigned int grp, const char *prompt, u8 *buf, in
+ lbs_deb_enter(LBS_DEB_WEXT);
- #define MAX_LEDS 8
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ goto out;
+@@ -923,7 +930,7 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
+ if (fwrq->e == 1) {
+ long f = fwrq->m / 100000;
--#define IS_MESH_FRAME(x) (x->cb[6])
--#define SET_MESH_FRAME(x) (x->cb[6]=1)
--#define UNSET_MESH_FRAME(x) (x->cb[6]=0)
--
- /** Global Variable Declaration */
--typedef struct _wlan_private wlan_private;
--typedef struct _wlan_adapter wlan_adapter;
--extern const char libertas_driver_version[];
--extern u16 libertas_region_code_to_index[MRVDRV_MAX_REGION_CODE];
-+extern const char lbs_driver_version[];
-+extern u16 lbs_region_code_to_index[MRVDRV_MAX_REGION_CODE];
+- cfp = find_cfp_by_band_and_freq(adapter, 0, f);
++ cfp = find_cfp_by_band_and_freq(priv, 0, f);
+ if (!cfp) {
+ lbs_deb_wext("invalid freq %ld\n", f);
+ goto out;
+@@ -938,7 +945,7 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
+ goto out;
+ }
--extern u8 libertas_bg_rates[MAX_RATES];
-+extern u8 lbs_bg_rates[MAX_RATES];
+- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0, fwrq->m);
++ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, fwrq->m);
+ if (!cfp) {
+ goto out;
+ }
+@@ -949,23 +956,71 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
+ out:
+ if (ret == 0) {
+ set_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ } else {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
++ }
++ mutex_unlock(&priv->lock);
++
++ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
++ return ret;
++}
++
++static int lbs_mesh_set_freq(struct net_device *dev,
++ struct iw_request_info *info,
++ struct iw_freq *fwrq, char *extra)
++{
++ struct lbs_private *priv = dev->priv;
++ struct chan_freq_power *cfp;
++ int ret = -EINVAL;
++
++ lbs_deb_enter(LBS_DEB_WEXT);
++
++ /* If setting by frequency, convert to a channel */
++ if (fwrq->e == 1) {
++ long f = fwrq->m / 100000;
++
++ cfp = find_cfp_by_band_and_freq(priv, 0, f);
++ if (!cfp) {
++ lbs_deb_wext("invalid freq %ld\n", f);
++ goto out;
++ }
++
++ fwrq->e = 0;
++ fwrq->m = (int) cfp->channel;
++ }
++
++ /* Setting by channel number */
++ if (fwrq->m > 1000 || fwrq->e > 0) {
++ goto out;
++ }
++
++ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, fwrq->m);
++ if (!cfp) {
++ goto out;
++ }
++
++ if (fwrq->m != priv->curbssparams.channel) {
++ lbs_deb_wext("mesh channel change forces eth disconnect\n");
++ if (priv->mode == IW_MODE_INFRA)
++ lbs_send_deauthentication(priv);
++ else if (priv->mode == IW_MODE_ADHOC)
++ lbs_stop_adhoc_network(priv);
+ }
+- mutex_unlock(&adapter->lock);
++ lbs_mesh_config(priv, 1, fwrq->m);
++ lbs_update_channel(priv);
++ ret = 0;
- /** ENUM definition*/
- /** SNRNF_TYPE */
-@@ -284,13 +285,13 @@ enum SNRNF_DATA {
- MAX_TYPE_AVG
- };
++out:
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+ }
--/** WLAN_802_11_POWER_MODE */
--enum WLAN_802_11_POWER_MODE {
-- WLAN802_11POWERMODECAM,
-- WLAN802_11POWERMODEMAX_PSP,
-- WLAN802_11POWERMODEFAST_PSP,
-+/** LBS_802_11_POWER_MODE */
-+enum LBS_802_11_POWER_MODE {
-+ LBS802_11POWERMODECAM,
-+ LBS802_11POWERMODEMAX_PSP,
-+ LBS802_11POWERMODEFAST_PSP,
- /*not a real mode, defined as an upper bound */
-- WLAN802_11POWEMODEMAX
-+ LBS802_11POWEMODEMAX
- };
+-static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_rate(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
+- u32 new_rate;
+- u16 action;
++ struct lbs_private *priv = dev->priv;
++ u8 new_rate = 0;
+ int ret = -EINVAL;
+ u8 rates[MAX_RATES + 1];
- /** PS_STATE */
-@@ -308,16 +309,16 @@ enum DNLD_STATE {
- DNLD_CMD_SENT
- };
+@@ -974,15 +1029,14 @@ static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
--/** WLAN_MEDIA_STATE */
--enum WLAN_MEDIA_STATE {
-- LIBERTAS_CONNECTED,
-- LIBERTAS_DISCONNECTED
-+/** LBS_MEDIA_STATE */
-+enum LBS_MEDIA_STATE {
-+ LBS_CONNECTED,
-+ LBS_DISCONNECTED
- };
+ /* Auto rate? */
+ if (vwrq->value == -1) {
+- action = CMD_ACT_SET_TX_AUTO;
+- adapter->auto_rate = 1;
+- adapter->cur_rate = 0;
++ priv->auto_rate = 1;
++ priv->cur_rate = 0;
+ } else {
+ if (vwrq->value % 100000)
+ goto out;
--/** WLAN_802_11_PRIVACY_FILTER */
--enum WLAN_802_11_PRIVACY_FILTER {
-- WLAN802_11PRIVFILTERACCEPTALL,
-- WLAN802_11PRIVFILTER8021XWEP
-+/** LBS_802_11_PRIVACY_FILTER */
-+enum LBS_802_11_PRIVACY_FILTER {
-+ LBS802_11PRIVFILTERACCEPTALL,
-+ LBS802_11PRIVFILTER8021XWEP
- };
+ memset(rates, 0, sizeof(rates));
+- copy_active_data_rates(adapter, rates);
++ copy_active_data_rates(priv, rates);
+ new_rate = vwrq->value / 500000;
+ if (!memchr(rates, new_rate, sizeof(rates))) {
+ lbs_pr_alert("fixed data rate 0x%X out of range\n",
+@@ -990,31 +1044,28 @@ static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
+ goto out;
+ }
- /** mv_ms_type */
-@@ -382,4 +383,4 @@ enum SNMP_MIB_VALUE_e {
- #define FWT_DEFAULT_SLEEPMODE 0
- #define FWT_DEFAULT_SNR 0
+- adapter->cur_rate = new_rate;
+- action = CMD_ACT_SET_TX_FIX_RATE;
+- adapter->auto_rate = 0;
++ priv->cur_rate = new_rate;
++ priv->auto_rate = 0;
+ }
--#endif /* _WLAN_DEFS_H_ */
-+#endif
-diff --git a/drivers/net/wireless/libertas/dev.h b/drivers/net/wireless/libertas/dev.h
-index 1fb807a..58d7ef6 100644
---- a/drivers/net/wireless/libertas/dev.h
-+++ b/drivers/net/wireless/libertas/dev.h
-@@ -1,21 +1,20 @@
- /**
- * This file contains definitions and data structures specific
- * to Marvell 802.11 NIC. It contains the Device Information
-- * structure wlan_adapter.
-+ * structure struct lbs_private..
- */
--#ifndef _WLAN_DEV_H_
--#define _WLAN_DEV_H_
-+#ifndef _LBS_DEV_H_
-+#define _LBS_DEV_H_
+- ret = libertas_prepare_and_send_command(priv, CMD_802_11_DATA_RATE,
+- action, CMD_OPTION_WAITFORRSP, 0, NULL);
++ ret = lbs_set_data_rate(priv, new_rate);
- #include <linux/netdevice.h>
- #include <linux/wireless.h>
- #include <linux/ethtool.h>
- #include <linux/debugfs.h>
--#include <net/ieee80211.h>
+ out:
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+ }
- #include "defs.h"
- #include "scan.h"
+-static int wlan_get_rate(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_rate(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
--extern struct ethtool_ops libertas_ethtool_ops;
-+extern struct ethtool_ops lbs_ethtool_ops;
+ lbs_deb_enter(LBS_DEB_WEXT);
- #define MAX_BSSID_PER_CHANNEL 16
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- vwrq->value = adapter->cur_rate * 500000;
++ if (priv->connect_status == LBS_CONNECTED) {
++ vwrq->value = priv->cur_rate * 500000;
-@@ -53,7 +52,7 @@ struct region_channel {
- struct chan_freq_power *CFP;
- };
+- if (adapter->auto_rate)
++ if (priv->auto_rate)
+ vwrq->fixed = 0;
+ else
+ vwrq->fixed = 1;
+@@ -1028,12 +1079,11 @@ static int wlan_get_rate(struct net_device *dev, struct iw_request_info *info,
+ return 0;
+ }
--struct wlan_802_11_security {
-+struct lbs_802_11_security {
- u8 WPAenabled;
- u8 WPA2enabled;
- u8 wep_enabled;
-@@ -78,16 +77,16 @@ struct current_bss_params {
+-static int wlan_set_mode(struct net_device *dev,
++static int lbs_set_mode(struct net_device *dev,
+ struct iw_request_info *info, u32 * uwrq, char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct assoc_request * assoc_req;
- /** sleep_params */
- struct sleep_params {
-- u16 sp_error;
-- u16 sp_offset;
-- u16 sp_stabletime;
-- u8 sp_calcontrol;
-- u8 sp_extsleepclk;
-- u16 sp_reserved;
-+ uint16_t sp_error;
-+ uint16_t sp_offset;
-+ uint16_t sp_stabletime;
-+ uint8_t sp_calcontrol;
-+ uint8_t sp_extsleepclk;
-+ uint16_t sp_reserved;
- };
+ lbs_deb_enter(LBS_DEB_WEXT);
+@@ -1046,18 +1096,18 @@ static int wlan_set_mode(struct net_device *dev,
+ goto out;
+ }
- /* Mesh statistics */
--struct wlan_mesh_stats {
-+struct lbs_mesh_stats {
- u32 fwd_bcast_cnt; /* Fwd: Broadcast counter */
- u32 fwd_unicast_cnt; /* Fwd: Unicast counter */
- u32 fwd_drop_ttl; /* Fwd: TTL zero */
-@@ -99,26 +98,22 @@ struct wlan_mesh_stats {
- };
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ } else {
+ assoc_req->mode = *uwrq;
+ set_bit(ASSOC_FLAG_MODE, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ lbs_deb_wext("Switching to mode: 0x%x\n", *uwrq);
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
- /** Private structure for the MV device */
--struct _wlan_private {
-- int open;
-+struct lbs_private {
- int mesh_open;
- int infra_open;
- int mesh_autostart_enabled;
-- __le16 boot2_version;
+ out:
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+@@ -1074,23 +1124,22 @@ out:
+ * @param extra A pointer to extra data buf
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_get_encode(struct net_device *dev,
++static int lbs_get_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq, u8 * extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int index = (dwrq->flags & IW_ENCODE_INDEX) - 1;
- char name[DEV_NAME_LEN];
+ lbs_deb_enter(LBS_DEB_WEXT);
- void *card;
-- wlan_adapter *adapter;
- struct net_device *dev;
+ lbs_deb_wext("flags 0x%x, index %d, length %d, wep_tx_keyidx %d\n",
+- dwrq->flags, index, dwrq->length, adapter->wep_tx_keyidx);
++ dwrq->flags, index, dwrq->length, priv->wep_tx_keyidx);
- struct net_device_stats stats;
- struct net_device *mesh_dev; /* Virtual device */
- struct net_device *rtap_net_dev;
-- struct ieee80211_device *ieee;
+ dwrq->flags = 0;
- struct iw_statistics wstats;
-- struct wlan_mesh_stats mstats;
-+ struct lbs_mesh_stats mstats;
- struct dentry *debugfs_dir;
- struct dentry *debugfs_debug;
- struct dentry *debugfs_files[6];
-@@ -136,15 +131,13 @@ struct _wlan_private {
- /** Upload length */
- u32 upld_len;
- /* Upload buffer */
-- u8 upld_buf[WLAN_UPLD_SIZE];
-+ u8 upld_buf[LBS_UPLD_SIZE];
- /* Download sent:
- bit0 1/0=data_sent/data_tx_done,
- bit1 1/0=cmd_sent/cmd_tx_done,
- all other bits reserved 0 */
- u8 dnld_sent;
+ /* Authentication method */
+- switch (adapter->secinfo.auth_mode) {
++ switch (priv->secinfo.auth_mode) {
+ case IW_AUTH_ALG_OPEN_SYSTEM:
+ dwrq->flags = IW_ENCODE_OPEN;
+ break;
+@@ -1104,41 +1153,32 @@ static int wlan_get_encode(struct net_device *dev,
+ break;
+ }
-- struct device *hotplug_device;
+- if ( adapter->secinfo.wep_enabled
+- || adapter->secinfo.WPAenabled
+- || adapter->secinfo.WPA2enabled) {
+- dwrq->flags &= ~IW_ENCODE_DISABLED;
+- } else {
+- dwrq->flags |= IW_ENCODE_DISABLED;
+- }
-
- /** thread to service interrupts */
- struct task_struct *main_thread;
- wait_queue_head_t waitq;
-@@ -155,65 +148,29 @@ struct _wlan_private {
- struct work_struct sync_channel;
+ memset(extra, 0, 16);
- /** Hardware access */
-- int (*hw_host_to_card) (wlan_private * priv, u8 type, u8 * payload, u16 nb);
-- int (*hw_get_int_status) (wlan_private * priv, u8 *);
-- int (*hw_read_event_cause) (wlan_private *);
--};
-+ int (*hw_host_to_card) (struct lbs_private *priv, u8 type, u8 *payload, u16 nb);
-+ int (*hw_get_int_status) (struct lbs_private *priv, u8 *);
-+ int (*hw_read_event_cause) (struct lbs_private *);
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
--/** Association request
-- *
-- * Encapsulates all the options that describe a specific assocation request
-- * or configuration of the wireless card's radio, mode, and security settings.
-- */
--struct assoc_request {
--#define ASSOC_FLAG_SSID 1
--#define ASSOC_FLAG_CHANNEL 2
--#define ASSOC_FLAG_BAND 3
--#define ASSOC_FLAG_MODE 4
--#define ASSOC_FLAG_BSSID 5
--#define ASSOC_FLAG_WEP_KEYS 6
--#define ASSOC_FLAG_WEP_TX_KEYIDX 7
--#define ASSOC_FLAG_WPA_MCAST_KEY 8
--#define ASSOC_FLAG_WPA_UCAST_KEY 9
--#define ASSOC_FLAG_SECINFO 10
--#define ASSOC_FLAG_WPA_IE 11
-- unsigned long flags;
-+ /* Wake On LAN */
-+ uint32_t wol_criteria;
-+ uint8_t wol_gpio;
-+ uint8_t wol_gap;
+ /* Default to returning current transmit key */
+ if (index < 0)
+- index = adapter->wep_tx_keyidx;
++ index = priv->wep_tx_keyidx;
-- u8 ssid[IW_ESSID_MAX_SIZE + 1];
-- u8 ssid_len;
-- u8 channel;
-- u8 band;
-- u8 mode;
-- u8 bssid[ETH_ALEN];
--
-- /** WEP keys */
-- struct enc_key wep_keys[4];
-- u16 wep_tx_keyidx;
--
-- /** WPA keys */
-- struct enc_key wpa_mcast_key;
-- struct enc_key wpa_unicast_key;
-+ /* was struct lbs_adapter from here... */
+- if ((adapter->wep_keys[index].len) && adapter->secinfo.wep_enabled) {
+- memcpy(extra, adapter->wep_keys[index].key,
+- adapter->wep_keys[index].len);
+- dwrq->length = adapter->wep_keys[index].len;
++ if ((priv->wep_keys[index].len) && priv->secinfo.wep_enabled) {
++ memcpy(extra, priv->wep_keys[index].key,
++ priv->wep_keys[index].len);
++ dwrq->length = priv->wep_keys[index].len;
-- struct wlan_802_11_security secinfo;
--
-- /** WPA Information Elements*/
-- u8 wpa_ie[MAX_WPA_IE_LEN];
-- u8 wpa_ie_len;
--
-- /* BSS to associate with for infrastructure of Ad-Hoc join */
-- struct bss_descriptor bss;
--};
--
--/** Wlan adapter data structure*/
--struct _wlan_adapter {
-+ /** Wlan adapter data structure*/
- /** STATUS variables */
-- u8 fwreleasenumber[4];
-+ u32 fwrelease;
- u32 fwcapinfo;
- /* protected with big lock */
+ dwrq->flags |= (index + 1);
+ /* Return WEP enabled */
+ dwrq->flags &= ~IW_ENCODE_DISABLED;
+- } else if ((adapter->secinfo.WPAenabled)
+- || (adapter->secinfo.WPA2enabled)) {
++ } else if ((priv->secinfo.WPAenabled)
++ || (priv->secinfo.WPA2enabled)) {
+ /* return WPA enabled */
+ dwrq->flags &= ~IW_ENCODE_DISABLED;
++ dwrq->flags |= IW_ENCODE_NOKEY;
+ } else {
+ dwrq->flags |= IW_ENCODE_DISABLED;
+ }
- struct mutex lock;
+- mutex_unlock(&adapter->lock);
+-
+- dwrq->flags |= IW_ENCODE_NOKEY;
++ mutex_unlock(&priv->lock);
-- u8 tmptxbuf[WLAN_UPLD_SIZE];
-+ /* TX packet ready to be sent... */
-+ int tx_pending_len; /* -1 while building packet */
-+
-+ u8 tx_pending_buf[LBS_UPLD_SIZE];
- /* protected by hard_start_xmit serialization */
+ lbs_deb_wext("key: %02x:%02x:%02x:%02x:%02x:%02x, keylen %d\n",
+ extra[0], extra[1], extra[2],
+@@ -1160,7 +1200,7 @@ static int wlan_get_encode(struct net_device *dev,
+ * @param set_tx_key Force set TX key (1 = yes, 0 = no)
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_set_wep_key(struct assoc_request *assoc_req,
++static int lbs_set_wep_key(struct assoc_request *assoc_req,
+ const char *key_material,
+ u16 key_length,
+ u16 index,
+@@ -1278,20 +1318,19 @@ static void disable_wpa(struct assoc_request *assoc_req)
+ * @param extra A pointer to extra data buf
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_set_encode(struct net_device *dev,
++static int lbs_set_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct assoc_request * assoc_req;
+ u16 is_default = 0, index = 0, set_tx_key = 0;
- /** command-related variables */
-@@ -231,8 +188,7 @@ struct _wlan_adapter {
- struct list_head cmdpendingq;
+ lbs_deb_enter(LBS_DEB_WEXT);
- wait_queue_head_t cmd_pending;
-- u8 nr_cmd_pending;
-- /* command related variables protected by adapter->driver_lock */
-+ /* command related variables protected by priv->driver_lock */
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1317,7 +1356,7 @@ static int wlan_set_encode(struct net_device *dev,
+ if (!assoc_req->secinfo.wep_enabled || (dwrq->length == 0 && !is_default))
+ set_tx_key = 1;
- /** Async and Sync Event variables */
- u32 intcounter;
-@@ -244,17 +200,18 @@ struct _wlan_adapter {
+- ret = wlan_set_wep_key(assoc_req, extra, dwrq->length, index, set_tx_key);
++ ret = lbs_set_wep_key(assoc_req, extra, dwrq->length, index, set_tx_key);
+ if (ret)
+ goto out;
- /** Timers */
- struct timer_list command_timer;
--
-- /* TX queue used in PS mode */
-- spinlock_t txqueue_lock;
-- struct sk_buff *tx_queue_ps[NR_TX_QUEUE];
-- unsigned int tx_queue_idx;
-+ int nr_retries;
-+ int cmd_timed_out;
+@@ -1335,11 +1374,11 @@ static int wlan_set_encode(struct net_device *dev,
+ out:
+ if (ret == 0) {
+ set_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ } else {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
- u8 hisregcpy;
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+@@ -1354,14 +1393,13 @@ out:
+ * @param extra A pointer to extra data buf
+ * @return 0 on success, otherwise failure
+ */
+-static int wlan_get_encodeext(struct net_device *dev,
++static int lbs_get_encodeext(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq,
+ char *extra)
+ {
+ int ret = -EINVAL;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
+ int index, max_key_len;
- /** current ssid/bssid related parameters*/
- struct current_bss_params curbssparams;
+@@ -1377,46 +1415,46 @@ static int wlan_get_encodeext(struct net_device *dev,
+ goto out;
+ index--;
+ } else {
+- index = adapter->wep_tx_keyidx;
++ index = priv->wep_tx_keyidx;
+ }
-+ uint16_t mesh_tlv;
-+ u8 mesh_ssid[IW_ESSID_MAX_SIZE + 1];
-+ u8 mesh_ssid_len;
-+
- /* IW_MODE_* */
- u8 mode;
+- if (!ext->ext_flags & IW_ENCODE_EXT_GROUP_KEY &&
++ if (!(ext->ext_flags & IW_ENCODE_EXT_GROUP_KEY) &&
+ ext->alg != IW_ENCODE_ALG_WEP) {
+- if (index != 0 || adapter->mode != IW_MODE_INFRA)
++ if (index != 0 || priv->mode != IW_MODE_INFRA)
+ goto out;
+ }
-@@ -263,6 +220,8 @@ struct _wlan_adapter {
- struct list_head network_free_list;
- struct bss_descriptor *networks;
+ dwrq->flags = index + 1;
+ memset(ext, 0, sizeof(*ext));
-+ u16 beacon_period;
-+ u8 beacon_enable;
- u8 adhoccreate;
+- if ( !adapter->secinfo.wep_enabled
+- && !adapter->secinfo.WPAenabled
+- && !adapter->secinfo.WPA2enabled) {
++ if ( !priv->secinfo.wep_enabled
++ && !priv->secinfo.WPAenabled
++ && !priv->secinfo.WPA2enabled) {
+ ext->alg = IW_ENCODE_ALG_NONE;
+ ext->key_len = 0;
+ dwrq->flags |= IW_ENCODE_DISABLED;
+ } else {
+ u8 *key = NULL;
- /** capability Info used in Association, start, join */
-@@ -286,11 +245,11 @@ struct _wlan_adapter {
+- if ( adapter->secinfo.wep_enabled
+- && !adapter->secinfo.WPAenabled
+- && !adapter->secinfo.WPA2enabled) {
++ if ( priv->secinfo.wep_enabled
++ && !priv->secinfo.WPAenabled
++ && !priv->secinfo.WPA2enabled) {
+ /* WEP */
+ ext->alg = IW_ENCODE_ALG_WEP;
+- ext->key_len = adapter->wep_keys[index].len;
+- key = &adapter->wep_keys[index].key[0];
+- } else if ( !adapter->secinfo.wep_enabled
+- && (adapter->secinfo.WPAenabled ||
+- adapter->secinfo.WPA2enabled)) {
++ ext->key_len = priv->wep_keys[index].len;
++ key = &priv->wep_keys[index].key[0];
++ } else if ( !priv->secinfo.wep_enabled
++ && (priv->secinfo.WPAenabled ||
++ priv->secinfo.WPA2enabled)) {
+ /* WPA */
+ struct enc_key * pkey = NULL;
- /** Tx-related variables (for single packet tx) */
- struct sk_buff *currenttxskb;
-- u16 TxLockFlag;
+- if ( adapter->wpa_mcast_key.len
+- && (adapter->wpa_mcast_key.flags & KEY_INFO_WPA_ENABLED))
+- pkey = &adapter->wpa_mcast_key;
+- else if ( adapter->wpa_unicast_key.len
+- && (adapter->wpa_unicast_key.flags & KEY_INFO_WPA_ENABLED))
+- pkey = &adapter->wpa_unicast_key;
++ if ( priv->wpa_mcast_key.len
++ && (priv->wpa_mcast_key.flags & KEY_INFO_WPA_ENABLED))
++ pkey = &priv->wpa_mcast_key;
++ else if ( priv->wpa_unicast_key.len
++ && (priv->wpa_unicast_key.flags & KEY_INFO_WPA_ENABLED))
++ pkey = &priv->wpa_unicast_key;
- /** NIC Operation characteristics */
- u16 currentpacketfilter;
- u32 connect_status;
-+ u32 mesh_connect_status;
- u16 regioncode;
- u16 txpowerlevel;
+ if (pkey) {
+ if (pkey->type == KEY_TYPE_ID_AES) {
+@@ -1461,22 +1499,21 @@ out:
+ * @param extra A pointer to extra data buf
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_set_encodeext(struct net_device *dev,
++static int lbs_set_encodeext(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq,
+ char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
+ int alg = ext->alg;
+ struct assoc_request * assoc_req;
-@@ -300,15 +259,17 @@ struct _wlan_adapter {
- u16 psmode; /* Wlan802_11PowermodeCAM=disable
- Wlan802_11PowermodeMAX_PSP=enable */
- u32 psstate;
-+ char ps_supported;
- u8 needtowakeup;
+ lbs_deb_enter(LBS_DEB_WEXT);
-- struct PS_CMD_ConfirmSleep libertas_ps_confirm_sleep;
-+ struct PS_CMD_ConfirmSleep lbs_ps_confirm_sleep;
-+ struct cmd_header lbs_ps_confirm_wake;
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1503,7 +1540,7 @@ static int wlan_set_encodeext(struct net_device *dev,
+ set_tx_key = 1;
- struct assoc_request * pending_assoc_req;
- struct assoc_request * in_progress_assoc_req;
+ /* Copy key to driver */
+- ret = wlan_set_wep_key (assoc_req, ext->key, ext->key_len, index,
++ ret = lbs_set_wep_key(assoc_req, ext->key, ext->key_len, index,
+ set_tx_key);
+ if (ret)
+ goto out;
+@@ -1576,31 +1613,30 @@ static int wlan_set_encodeext(struct net_device *dev,
- /** Encryption parameter */
-- struct wlan_802_11_security secinfo;
-+ struct lbs_802_11_security secinfo;
+ out:
+ if (ret == 0) {
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ } else {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
- /** WEP keys */
- struct enc_key wep_keys[4];
-@@ -338,9 +299,6 @@ struct _wlan_adapter {
- u8 cur_rate;
- u8 auto_rate;
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+ }
-- /** sleep_params */
-- struct sleep_params sp;
--
- /** RF calibration data */
- #define MAX_REGION_CHANNEL_NUM 2
-@@ -350,7 +308,7 @@ struct _wlan_adapter {
- struct region_channel universal_channel[MAX_REGION_CHANNEL_NUM];
+-static int wlan_set_genie(struct net_device *dev,
++static int lbs_set_genie(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq,
+ char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int ret = 0;
+ struct assoc_request * assoc_req;
- /** 11D and Domain Regulatory Data */
-- struct wlan_802_11d_domain_reg domainreg;
-+ struct lbs_802_11d_domain_reg domainreg;
- struct parsed_region_chan_11d parsed_region_chan;
+ lbs_deb_enter(LBS_DEB_WEXT);
- /** FSM variable for 11d support */
-@@ -358,14 +316,57 @@ struct _wlan_adapter {
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1616,46 +1652,45 @@ static int wlan_set_genie(struct net_device *dev,
+ memcpy(&assoc_req->wpa_ie[0], extra, dwrq->length);
+ assoc_req->wpa_ie_len = dwrq->length;
+ } else {
+- memset(&assoc_req->wpa_ie[0], 0, sizeof(adapter->wpa_ie));
++ memset(&assoc_req->wpa_ie[0], 0, sizeof(priv->wpa_ie));
+ assoc_req->wpa_ie_len = 0;
+ }
- /** MISCELLANEOUS */
- u8 *prdeeprom;
-- struct wlan_offset_value offsetvalue;
-+ struct lbs_offset_value offsetvalue;
+ out:
+ if (ret == 0) {
+ set_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ } else {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
- struct cmd_ds_802_11_get_log logmsg;
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+ }
- u32 monitormode;
-+ int last_scanned_channel;
- u8 fw_ready;
-+};
-+
-+/** Association request
-+ *
-+ * Encapsulates all the options that describe a specific assocation request
-+ * or configuration of the wireless card's radio, mode, and security settings.
-+ */
-+struct assoc_request {
-+#define ASSOC_FLAG_SSID 1
-+#define ASSOC_FLAG_CHANNEL 2
-+#define ASSOC_FLAG_BAND 3
-+#define ASSOC_FLAG_MODE 4
-+#define ASSOC_FLAG_BSSID 5
-+#define ASSOC_FLAG_WEP_KEYS 6
-+#define ASSOC_FLAG_WEP_TX_KEYIDX 7
-+#define ASSOC_FLAG_WPA_MCAST_KEY 8
-+#define ASSOC_FLAG_WPA_UCAST_KEY 9
-+#define ASSOC_FLAG_SECINFO 10
-+#define ASSOC_FLAG_WPA_IE 11
-+ unsigned long flags;
+-static int wlan_get_genie(struct net_device *dev,
++static int lbs_get_genie(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_point *dwrq,
+ char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
-- u8 last_scanned_channel;
-+ u8 ssid[IW_ESSID_MAX_SIZE + 1];
-+ u8 ssid_len;
-+ u8 channel;
-+ u8 band;
-+ u8 mode;
-+ u8 bssid[ETH_ALEN];
-+
-+ /** WEP keys */
-+ struct enc_key wep_keys[4];
-+ u16 wep_tx_keyidx;
-+
-+ /** WPA keys */
-+ struct enc_key wpa_mcast_key;
-+ struct enc_key wpa_unicast_key;
-+
-+ struct lbs_802_11_security secinfo;
-+
-+ /** WPA Information Elements*/
-+ u8 wpa_ie[MAX_WPA_IE_LEN];
-+ u8 wpa_ie_len;
-+
-+ /* BSS to associate with for infrastructure of Ad-Hoc join */
-+ struct bss_descriptor bss;
- };
+ lbs_deb_enter(LBS_DEB_WEXT);
--#endif /* _WLAN_DEV_H_ */
-+#endif
-diff --git a/drivers/net/wireless/libertas/ethtool.c b/drivers/net/wireless/libertas/ethtool.c
-index 3dae152..21e6f98 100644
---- a/drivers/net/wireless/libertas/ethtool.c
-+++ b/drivers/net/wireless/libertas/ethtool.c
-@@ -8,6 +8,8 @@
- #include "dev.h"
- #include "join.h"
- #include "wext.h"
-+#include "cmd.h"
-+
- static const char * mesh_stat_strings[]= {
- "drop_duplicate_bcast",
- "drop_ttl_zero",
-@@ -19,35 +21,34 @@ static const char * mesh_stat_strings[]= {
- "tx_failed_cnt"
- };
+- if (adapter->wpa_ie_len == 0) {
++ if (priv->wpa_ie_len == 0) {
+ dwrq->length = 0;
+ goto out;
+ }
--static void libertas_ethtool_get_drvinfo(struct net_device *dev,
-+static void lbs_ethtool_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
- {
-- wlan_private *priv = (wlan_private *) dev->priv;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
- char fwver[32];
+- if (dwrq->length < adapter->wpa_ie_len) {
++ if (dwrq->length < priv->wpa_ie_len) {
+ ret = -E2BIG;
+ goto out;
+ }
-- libertas_get_fwversion(priv->adapter, fwver, sizeof(fwver) - 1);
-+ lbs_get_fwversion(priv, fwver, sizeof(fwver) - 1);
+- dwrq->length = adapter->wpa_ie_len;
+- memcpy(extra, &adapter->wpa_ie[0], adapter->wpa_ie_len);
++ dwrq->length = priv->wpa_ie_len;
++ memcpy(extra, &priv->wpa_ie[0], priv->wpa_ie_len);
- strcpy(info->driver, "libertas");
-- strcpy(info->version, libertas_driver_version);
-+ strcpy(info->version, lbs_driver_version);
- strcpy(info->fw_version, fwver);
+ out:
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+@@ -1663,21 +1698,20 @@ out:
}
- /* All 8388 parts have 16KiB EEPROM size at the time of writing.
- * In case that changes this needs fixing.
- */
--#define LIBERTAS_EEPROM_LEN 16384
-+#define LBS_EEPROM_LEN 16384
--static int libertas_ethtool_get_eeprom_len(struct net_device *dev)
-+static int lbs_ethtool_get_eeprom_len(struct net_device *dev)
+-static int wlan_set_auth(struct net_device *dev,
++static int lbs_set_auth(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_param *dwrq,
+ char *extra)
{
-- return LIBERTAS_EEPROM_LEN;
-+ return LBS_EEPROM_LEN;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct assoc_request * assoc_req;
+ int ret = 0;
+ int updated = 0;
+
+ lbs_deb_enter(LBS_DEB_WEXT);
+
+- mutex_lock(&adapter->lock);
+- assoc_req = wlan_get_association_request(adapter);
++ mutex_lock(&priv->lock);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1752,44 +1786,43 @@ out:
+ if (ret == 0) {
+ if (updated)
+ set_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ } else if (ret != -EOPNOTSUPP) {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ }
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
}
--static int libertas_ethtool_get_eeprom(struct net_device *dev,
-+static int lbs_ethtool_get_eeprom(struct net_device *dev,
- struct ethtool_eeprom *eeprom, u8 * bytes)
+-static int wlan_get_auth(struct net_device *dev,
++static int lbs_get_auth(struct net_device *dev,
+ struct iw_request_info *info,
+ struct iw_param *dwrq,
+ char *extra)
{
-- wlan_private *priv = (wlan_private *) dev->priv;
+ int ret = 0;
+- wlan_private *priv = dev->priv;
- wlan_adapter *adapter = priv->adapter;
-- struct wlan_ioctl_regrdwr regctrl;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
-+ struct lbs_ioctl_regrdwr regctrl;
- char *ptr;
- int ret;
++ struct lbs_private *priv = dev->priv;
-@@ -55,47 +56,47 @@ static int libertas_ethtool_get_eeprom(struct net_device *dev,
- regctrl.offset = eeprom->offset;
- regctrl.NOB = eeprom->len;
+ lbs_deb_enter(LBS_DEB_WEXT);
-- if (eeprom->offset + eeprom->len > LIBERTAS_EEPROM_LEN)
-+ if (eeprom->offset + eeprom->len > LBS_EEPROM_LEN)
- return -EINVAL;
+ switch (dwrq->flags & IW_AUTH_INDEX) {
+ case IW_AUTH_WPA_VERSION:
+ dwrq->value = 0;
+- if (adapter->secinfo.WPAenabled)
++ if (priv->secinfo.WPAenabled)
+ dwrq->value |= IW_AUTH_WPA_VERSION_WPA;
+- if (adapter->secinfo.WPA2enabled)
++ if (priv->secinfo.WPA2enabled)
+ dwrq->value |= IW_AUTH_WPA_VERSION_WPA2;
+ if (!dwrq->value)
+ dwrq->value |= IW_AUTH_WPA_VERSION_DISABLED;
+ break;
- // mutex_lock(&priv->mutex);
+ case IW_AUTH_80211_AUTH_ALG:
+- dwrq->value = adapter->secinfo.auth_mode;
++ dwrq->value = priv->secinfo.auth_mode;
+ break;
-- adapter->prdeeprom = kmalloc(eeprom->len+sizeof(regctrl), GFP_KERNEL);
-- if (!adapter->prdeeprom)
-+ priv->prdeeprom = kmalloc(eeprom->len+sizeof(regctrl), GFP_KERNEL);
-+ if (!priv->prdeeprom)
- return -ENOMEM;
-- memcpy(adapter->prdeeprom, &regctrl, sizeof(regctrl));
-+ memcpy(priv->prdeeprom, &regctrl, sizeof(regctrl));
+ case IW_AUTH_WPA_ENABLED:
+- if (adapter->secinfo.WPAenabled && adapter->secinfo.WPA2enabled)
++ if (priv->secinfo.WPAenabled && priv->secinfo.WPA2enabled)
+ dwrq->value = 1;
+ break;
- /* +14 is for action, offset, and NOB in
- * response */
- lbs_deb_ethtool("action:%d offset: %x NOB: %02x\n",
- regctrl.action, regctrl.offset, regctrl.NOB);
+@@ -1802,25 +1835,24 @@ static int wlan_get_auth(struct net_device *dev,
+ }
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_EEPROM_ACCESS,
- regctrl.action,
- CMD_OPTION_WAITFORRSP, 0,
- &regctrl);
- if (ret) {
-- if (adapter->prdeeprom)
-- kfree(adapter->prdeeprom);
-+ if (priv->prdeeprom)
-+ kfree(priv->prdeeprom);
- goto done;
- }
+-static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_txpow(struct net_device *dev, struct iw_request_info *info,
+ struct iw_param *vwrq, char *extra)
+ {
+ int ret = 0;
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
- mdelay(10);
+ u16 dbm;
-- ptr = (char *)adapter->prdeeprom;
-+ ptr = (char *)priv->prdeeprom;
+ lbs_deb_enter(LBS_DEB_WEXT);
- /* skip the command header, but include the "value" u32 variable */
-- ptr = ptr + sizeof(struct wlan_ioctl_regrdwr) - 4;
-+ ptr = ptr + sizeof(struct lbs_ioctl_regrdwr) - 4;
+ if (vwrq->disabled) {
+- wlan_radio_ioctl(priv, RADIO_OFF);
++ lbs_radio_ioctl(priv, RADIO_OFF);
+ return 0;
+ }
- /*
- * Return the result back to the user
- */
- memcpy(bytes, ptr, eeprom->len);
+- adapter->preamble = CMD_TYPE_AUTO_PREAMBLE;
++ priv->preamble = CMD_TYPE_AUTO_PREAMBLE;
-- if (adapter->prdeeprom)
-- kfree(adapter->prdeeprom);
-+ if (priv->prdeeprom)
-+ kfree(priv->prdeeprom);
- // mutex_unlock(&priv->mutex);
+- wlan_radio_ioctl(priv, RADIO_ON);
++ lbs_radio_ioctl(priv, RADIO_ON);
- ret = 0;
-@@ -105,17 +106,17 @@ done:
- return ret;
+ /* Userspace check in iwrange if it should use dBm or mW,
+ * therefore this should never happen... Jean II */
+@@ -1836,7 +1868,7 @@ static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
+
+ lbs_deb_wext("txpower set %d dbm\n", dbm);
+
+- ret = libertas_prepare_and_send_command(priv,
++ ret = lbs_prepare_and_send_command(priv,
+ CMD_802_11_RF_TX_POWER,
+ CMD_ACT_TX_POWER_OPT_SET_LOW,
+ CMD_OPTION_WAITFORRSP, 0, (void *)&dbm);
+@@ -1845,11 +1877,10 @@ static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
+ return ret;
}
--static void libertas_ethtool_get_stats(struct net_device * dev,
-+static void lbs_ethtool_get_stats(struct net_device * dev,
- struct ethtool_stats * stats, u64 * data)
+-static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
++static int lbs_get_essid(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
+ struct lbs_private *priv = dev->priv;
- struct cmd_ds_mesh_access mesh_access;
- int ret;
- lbs_deb_enter(LBS_DEB_ETHTOOL);
+ lbs_deb_enter(LBS_DEB_WEXT);
- /* Get Mesh Statistics */
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_MESH_ACCESS, CMD_ACT_MESH_GET_STATS,
- CMD_OPTION_WAITFORRSP, 0, &mesh_access);
+@@ -1861,19 +1892,19 @@ static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
+ /*
+ * Get the current SSID
+ */
+- if (adapter->connect_status == LIBERTAS_CONNECTED) {
+- memcpy(extra, adapter->curbssparams.ssid,
+- adapter->curbssparams.ssid_len);
+- extra[adapter->curbssparams.ssid_len] = '\0';
++ if (priv->connect_status == LBS_CONNECTED) {
++ memcpy(extra, priv->curbssparams.ssid,
++ priv->curbssparams.ssid_len);
++ extra[priv->curbssparams.ssid_len] = '\0';
+ } else {
+ memset(extra, 0, 32);
+- extra[adapter->curbssparams.ssid_len] = '\0';
++ extra[priv->curbssparams.ssid_len] = '\0';
+ }
+ /*
+ * If none, we may want to get the one that was set
+ */
-@@ -143,7 +144,7 @@ static void libertas_ethtool_get_stats(struct net_device * dev,
- lbs_deb_enter(LBS_DEB_ETHTOOL);
+- dwrq->length = adapter->curbssparams.ssid_len;
++ dwrq->length = priv->curbssparams.ssid_len;
+
+ dwrq->flags = 1; /* active */
+
+@@ -1881,11 +1912,10 @@ static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
+ return 0;
}
--static int libertas_ethtool_get_sset_count(struct net_device * dev, int sset)
-+static int lbs_ethtool_get_sset_count(struct net_device * dev, int sset)
+-static int wlan_set_essid(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_essid(struct net_device *dev, struct iw_request_info *info,
+ struct iw_point *dwrq, char *extra)
{
- switch (sset) {
- case ETH_SS_STATS:
-@@ -153,7 +154,7 @@ static int libertas_ethtool_get_sset_count(struct net_device * dev, int sset)
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ int ret = 0;
+ u8 ssid[IW_ESSID_MAX_SIZE];
+ u8 ssid_len = 0;
+@@ -1918,10 +1948,10 @@ static int wlan_set_essid(struct net_device *dev, struct iw_request_info *info,
}
- }
--static void libertas_ethtool_get_strings (struct net_device * dev,
-+static void lbs_ethtool_get_strings(struct net_device *dev,
- u32 stringset,
- u8 * s)
- {
-@@ -173,12 +174,57 @@ static void libertas_ethtool_get_strings (struct net_device * dev,
- lbs_deb_enter(LBS_DEB_ETHTOOL);
- }
+ out:
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
+ if (ret == 0) {
+ /* Get or create the current association request */
+- assoc_req = wlan_get_association_request(adapter);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+ ret = -ENOMEM;
+ } else {
+@@ -1929,17 +1959,65 @@ out:
+ memcpy(&assoc_req->ssid, &ssid, IW_ESSID_MAX_SIZE);
+ assoc_req->ssid_len = ssid_len;
+ set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ }
+ }
--struct ethtool_ops libertas_ethtool_ops = {
-- .get_drvinfo = libertas_ethtool_get_drvinfo,
-- .get_eeprom = libertas_ethtool_get_eeprom,
-- .get_eeprom_len = libertas_ethtool_get_eeprom_len,
-- .get_sset_count = libertas_ethtool_get_sset_count,
-- .get_ethtool_stats = libertas_ethtool_get_stats,
-- .get_strings = libertas_ethtool_get_strings,
-+static void lbs_ethtool_get_wol(struct net_device *dev,
-+ struct ethtool_wolinfo *wol)
-+{
-+ struct lbs_private *priv = dev->priv;
-+
-+ if (priv->wol_criteria == 0xffffffff) {
-+ /* Interface driver didn't configure wake */
-+ wol->supported = wol->wolopts = 0;
-+ return;
+ /* Cancel the association request if there was an error */
+ if (ret != 0) {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ }
+
-+ wol->supported = WAKE_UCAST|WAKE_MCAST|WAKE_BCAST|WAKE_PHY;
++ mutex_unlock(&priv->lock);
+
-+ if (priv->wol_criteria & EHS_WAKE_ON_UNICAST_DATA)
-+ wol->wolopts |= WAKE_UCAST;
-+ if (priv->wol_criteria & EHS_WAKE_ON_MULTICAST_DATA)
-+ wol->wolopts |= WAKE_MCAST;
-+ if (priv->wol_criteria & EHS_WAKE_ON_BROADCAST_DATA)
-+ wol->wolopts |= WAKE_BCAST;
-+ if (priv->wol_criteria & EHS_WAKE_ON_MAC_EVENT)
-+ wol->wolopts |= WAKE_PHY;
++ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
++ return ret;
+}
+
-+static int lbs_ethtool_set_wol(struct net_device *dev,
-+ struct ethtool_wolinfo *wol)
++static int lbs_mesh_get_essid(struct net_device *dev,
++ struct iw_request_info *info,
++ struct iw_point *dwrq, char *extra)
+{
+ struct lbs_private *priv = dev->priv;
-+ uint32_t criteria = 0;
+
-+ if (priv->wol_criteria == 0xffffffff && wol->wolopts)
-+ return -EOPNOTSUPP;
++ lbs_deb_enter(LBS_DEB_WEXT);
+
-+ if (wol->wolopts & ~(WAKE_UCAST|WAKE_MCAST|WAKE_BCAST|WAKE_PHY))
-+ return -EOPNOTSUPP;
++ memcpy(extra, priv->mesh_ssid, priv->mesh_ssid_len);
+
-+ if (wol->wolopts & WAKE_UCAST) criteria |= EHS_WAKE_ON_UNICAST_DATA;
-+ if (wol->wolopts & WAKE_MCAST) criteria |= EHS_WAKE_ON_MULTICAST_DATA;
-+ if (wol->wolopts & WAKE_BCAST) criteria |= EHS_WAKE_ON_BROADCAST_DATA;
-+ if (wol->wolopts & WAKE_PHY) criteria |= EHS_WAKE_ON_MAC_EVENT;
++ dwrq->length = priv->mesh_ssid_len;
+
-+ return lbs_host_sleep_cfg(priv, criteria);
++ dwrq->flags = 1; /* active */
++
++ lbs_deb_leave(LBS_DEB_WEXT);
++ return 0;
+}
+
-+struct ethtool_ops lbs_ethtool_ops = {
-+ .get_drvinfo = lbs_ethtool_get_drvinfo,
-+ .get_eeprom = lbs_ethtool_get_eeprom,
-+ .get_eeprom_len = lbs_ethtool_get_eeprom_len,
-+ .get_sset_count = lbs_ethtool_get_sset_count,
-+ .get_ethtool_stats = lbs_ethtool_get_stats,
-+ .get_strings = lbs_ethtool_get_strings,
-+ .get_wol = lbs_ethtool_get_wol,
-+ .set_wol = lbs_ethtool_set_wol,
- };
-
-diff --git a/drivers/net/wireless/libertas/host.h b/drivers/net/wireless/libertas/host.h
-index b37ddbc..1aa0407 100644
---- a/drivers/net/wireless/libertas/host.h
-+++ b/drivers/net/wireless/libertas/host.h
-@@ -2,25 +2,25 @@
- * This file contains definitions of WLAN commands.
- */
-
--#ifndef _HOST_H_
--#define _HOST_H_
-+#ifndef _LBS_HOST_H_
-+#define _LBS_HOST_H_
++static int lbs_mesh_set_essid(struct net_device *dev,
++ struct iw_request_info *info,
++ struct iw_point *dwrq, char *extra)
++{
++ struct lbs_private *priv = dev->priv;
++ int ret = 0;
++
++ lbs_deb_enter(LBS_DEB_WEXT);
++
++ /* Check the size of the string */
++ if (dwrq->length > IW_ESSID_MAX_SIZE) {
++ ret = -E2BIG;
++ goto out;
+ }
- /** PUBLIC DEFINITIONS */
--#define DEFAULT_AD_HOC_CHANNEL 6
--#define DEFAULT_AD_HOC_CHANNEL_A 36
-+#define DEFAULT_AD_HOC_CHANNEL 6
-+#define DEFAULT_AD_HOC_CHANNEL_A 36
+- mutex_unlock(&adapter->lock);
++ if (!dwrq->flags || !dwrq->length) {
++ ret = -EINVAL;
++ goto out;
++ } else {
++ /* Specific SSID requested */
++ memcpy(priv->mesh_ssid, extra, dwrq->length);
++ priv->mesh_ssid_len = dwrq->length;
++ }
- /** IEEE 802.11 oids */
--#define OID_802_11_SSID 0x00008002
--#define OID_802_11_INFRASTRUCTURE_MODE 0x00008008
--#define OID_802_11_FRAGMENTATION_THRESHOLD 0x00008009
--#define OID_802_11_RTS_THRESHOLD 0x0000800A
--#define OID_802_11_TX_ANTENNA_SELECTED 0x0000800D
--#define OID_802_11_SUPPORTED_RATES 0x0000800E
--#define OID_802_11_STATISTICS 0x00008012
--#define OID_802_11_TX_RETRYCOUNT 0x0000801D
--#define OID_802_11D_ENABLE 0x00008020
--
--#define CMD_OPTION_WAITFORRSP 0x0002
-+#define OID_802_11_SSID 0x00008002
-+#define OID_802_11_INFRASTRUCTURE_MODE 0x00008008
-+#define OID_802_11_FRAGMENTATION_THRESHOLD 0x00008009
-+#define OID_802_11_RTS_THRESHOLD 0x0000800A
-+#define OID_802_11_TX_ANTENNA_SELECTED 0x0000800D
-+#define OID_802_11_SUPPORTED_RATES 0x0000800E
-+#define OID_802_11_STATISTICS 0x00008012
-+#define OID_802_11_TX_RETRYCOUNT 0x0000801D
-+#define OID_802_11D_ENABLE 0x00008020
-+
-+#define CMD_OPTION_WAITFORRSP 0x0002
++ lbs_mesh_config(priv, 1, priv->curbssparams.channel);
++ out:
+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
+ return ret;
+ }
+@@ -1953,11 +2031,10 @@ out:
+ * @param extra A pointer to extra data buf
+ * @return 0 --success, otherwise fail
+ */
+-static int wlan_set_wap(struct net_device *dev, struct iw_request_info *info,
++static int lbs_set_wap(struct net_device *dev, struct iw_request_info *info,
+ struct sockaddr *awrq, char *extra)
+ {
+- wlan_private *priv = dev->priv;
+- wlan_adapter *adapter = priv->adapter;
++ struct lbs_private *priv = dev->priv;
+ struct assoc_request * assoc_req;
+ int ret = 0;
+ DECLARE_MAC_BUF(mac);
+@@ -1969,44 +2046,38 @@ static int wlan_set_wap(struct net_device *dev, struct iw_request_info *info,
- /** Host command IDs */
+ lbs_deb_wext("ASSOC: WAP: sa_data %s\n", print_mac(mac, awrq->sa_data));
-@@ -30,192 +30,189 @@
- #define CMD_RET(cmd) (0x8000 | cmd)
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
- /* Return command convention exceptions: */
--#define CMD_RET_802_11_ASSOCIATE 0x8012
-+#define CMD_RET_802_11_ASSOCIATE 0x8012
+ /* Get or create the current association request */
+- assoc_req = wlan_get_association_request(adapter);
++ assoc_req = lbs_get_association_request(priv);
+ if (!assoc_req) {
+- wlan_cancel_association_work(priv);
++ lbs_cancel_association_work(priv);
+ ret = -ENOMEM;
+ } else {
+ /* Copy the BSSID to the association request */
+ memcpy(&assoc_req->bssid, awrq->sa_data, ETH_ALEN);
+ set_bit(ASSOC_FLAG_BSSID, &assoc_req->flags);
+- wlan_postpone_association_work(priv);
++ lbs_postpone_association_work(priv);
+ }
- /* Command codes */
--#define CMD_CODE_DNLD 0x0002
--#define CMD_GET_HW_SPEC 0x0003
--#define CMD_EEPROM_UPDATE 0x0004
--#define CMD_802_11_RESET 0x0005
--#define CMD_802_11_SCAN 0x0006
--#define CMD_802_11_GET_LOG 0x000b
--#define CMD_MAC_MULTICAST_ADR 0x0010
--#define CMD_802_11_AUTHENTICATE 0x0011
--#define CMD_802_11_EEPROM_ACCESS 0x0059
--#define CMD_802_11_ASSOCIATE 0x0050
--#define CMD_802_11_SET_WEP 0x0013
--#define CMD_802_11_GET_STAT 0x0014
--#define CMD_802_3_GET_STAT 0x0015
--#define CMD_802_11_SNMP_MIB 0x0016
--#define CMD_MAC_REG_MAP 0x0017
--#define CMD_BBP_REG_MAP 0x0018
--#define CMD_MAC_REG_ACCESS 0x0019
--#define CMD_BBP_REG_ACCESS 0x001a
--#define CMD_RF_REG_ACCESS 0x001b
--#define CMD_802_11_RADIO_CONTROL 0x001c
--#define CMD_802_11_RF_CHANNEL 0x001d
--#define CMD_802_11_RF_TX_POWER 0x001e
--#define CMD_802_11_RSSI 0x001f
--#define CMD_802_11_RF_ANTENNA 0x0020
--
--#define CMD_802_11_PS_MODE 0x0021
--
--#define CMD_802_11_DATA_RATE 0x0022
--#define CMD_RF_REG_MAP 0x0023
--#define CMD_802_11_DEAUTHENTICATE 0x0024
--#define CMD_802_11_REASSOCIATE 0x0025
--#define CMD_802_11_DISASSOCIATE 0x0026
--#define CMD_MAC_CONTROL 0x0028
--#define CMD_802_11_AD_HOC_START 0x002b
--#define CMD_802_11_AD_HOC_JOIN 0x002c
--
--#define CMD_802_11_QUERY_TKIP_REPLY_CNTRS 0x002e
--#define CMD_802_11_ENABLE_RSN 0x002f
--#define CMD_802_11_PAIRWISE_TSC 0x0036
--#define CMD_802_11_GROUP_TSC 0x0037
--#define CMD_802_11_KEY_MATERIAL 0x005e
--
--#define CMD_802_11_SET_AFC 0x003c
--#define CMD_802_11_GET_AFC 0x003d
--
--#define CMD_802_11_AD_HOC_STOP 0x0040
--
--#define CMD_802_11_BEACON_STOP 0x0049
--
--#define CMD_802_11_MAC_ADDRESS 0x004D
--#define CMD_802_11_EEPROM_ACCESS 0x0059
--
--#define CMD_802_11_BAND_CONFIG 0x0058
--
--#define CMD_802_11D_DOMAIN_INFO 0x005b
--
--#define CMD_802_11_SLEEP_PARAMS 0x0066
--
--#define CMD_802_11_INACTIVITY_TIMEOUT 0x0067
--
--#define CMD_802_11_TPC_CFG 0x0072
--#define CMD_802_11_PWR_CFG 0x0073
--
--#define CMD_802_11_LED_GPIO_CTRL 0x004e
--
--#define CMD_802_11_SUBSCRIBE_EVENT 0x0075
--
--#define CMD_802_11_RATE_ADAPT_RATESET 0x0076
--
--#define CMD_802_11_TX_RATE_QUERY 0x007f
--
--#define CMD_GET_TSF 0x0080
--
--#define CMD_BT_ACCESS 0x0087
--
--#define CMD_FWT_ACCESS 0x0095
--
--#define CMD_802_11_MONITOR_MODE 0x0098
--
--#define CMD_MESH_ACCESS 0x009b
--
--#define CMD_SET_BOOT2_VER 0x00a5
-+#define CMD_CODE_DNLD 0x0002
-+#define CMD_GET_HW_SPEC 0x0003
-+#define CMD_EEPROM_UPDATE 0x0004
-+#define CMD_802_11_RESET 0x0005
-+#define CMD_802_11_SCAN 0x0006
-+#define CMD_802_11_GET_LOG 0x000b
-+#define CMD_MAC_MULTICAST_ADR 0x0010
-+#define CMD_802_11_AUTHENTICATE 0x0011
-+#define CMD_802_11_EEPROM_ACCESS 0x0059
-+#define CMD_802_11_ASSOCIATE 0x0050
-+#define CMD_802_11_SET_WEP 0x0013
-+#define CMD_802_11_GET_STAT 0x0014
-+#define CMD_802_3_GET_STAT 0x0015
-+#define CMD_802_11_SNMP_MIB 0x0016
-+#define CMD_MAC_REG_MAP 0x0017
-+#define CMD_BBP_REG_MAP 0x0018
-+#define CMD_MAC_REG_ACCESS 0x0019
-+#define CMD_BBP_REG_ACCESS 0x001a
-+#define CMD_RF_REG_ACCESS 0x001b
-+#define CMD_802_11_RADIO_CONTROL 0x001c
-+#define CMD_802_11_RF_CHANNEL 0x001d
-+#define CMD_802_11_RF_TX_POWER 0x001e
-+#define CMD_802_11_RSSI 0x001f
-+#define CMD_802_11_RF_ANTENNA 0x0020
-+#define CMD_802_11_PS_MODE 0x0021
-+#define CMD_802_11_DATA_RATE 0x0022
-+#define CMD_RF_REG_MAP 0x0023
-+#define CMD_802_11_DEAUTHENTICATE 0x0024
-+#define CMD_802_11_REASSOCIATE 0x0025
-+#define CMD_802_11_DISASSOCIATE 0x0026
-+#define CMD_MAC_CONTROL 0x0028
-+#define CMD_802_11_AD_HOC_START 0x002b
-+#define CMD_802_11_AD_HOC_JOIN 0x002c
-+#define CMD_802_11_QUERY_TKIP_REPLY_CNTRS 0x002e
-+#define CMD_802_11_ENABLE_RSN 0x002f
-+#define CMD_802_11_PAIRWISE_TSC 0x0036
-+#define CMD_802_11_GROUP_TSC 0x0037
-+#define CMD_802_11_SET_AFC 0x003c
-+#define CMD_802_11_GET_AFC 0x003d
-+#define CMD_802_11_AD_HOC_STOP 0x0040
-+#define CMD_802_11_HOST_SLEEP_CFG 0x0043
-+#define CMD_802_11_WAKEUP_CONFIRM 0x0044
-+#define CMD_802_11_HOST_SLEEP_ACTIVATE 0x0045
-+#define CMD_802_11_BEACON_STOP 0x0049
-+#define CMD_802_11_MAC_ADDRESS 0x004d
-+#define CMD_802_11_LED_GPIO_CTRL 0x004e
-+#define CMD_802_11_EEPROM_ACCESS 0x0059
-+#define CMD_802_11_BAND_CONFIG 0x0058
-+#define CMD_802_11D_DOMAIN_INFO 0x005b
-+#define CMD_802_11_KEY_MATERIAL 0x005e
-+#define CMD_802_11_SLEEP_PARAMS 0x0066
-+#define CMD_802_11_INACTIVITY_TIMEOUT 0x0067
-+#define CMD_802_11_SLEEP_PERIOD 0x0068
-+#define CMD_802_11_TPC_CFG 0x0072
-+#define CMD_802_11_PWR_CFG 0x0073
-+#define CMD_802_11_FW_WAKE_METHOD 0x0074
-+#define CMD_802_11_SUBSCRIBE_EVENT 0x0075
-+#define CMD_802_11_RATE_ADAPT_RATESET 0x0076
-+#define CMD_802_11_TX_RATE_QUERY 0x007f
-+#define CMD_GET_TSF 0x0080
-+#define CMD_BT_ACCESS 0x0087
-+#define CMD_FWT_ACCESS 0x0095
-+#define CMD_802_11_MONITOR_MODE 0x0098
-+#define CMD_MESH_ACCESS 0x009b
-+#define CMD_MESH_CONFIG 0x00a3
-+#define CMD_SET_BOOT2_VER 0x00a5
-+#define CMD_802_11_BEACON_CTRL 0x00b0
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
- /* For the IEEE Power Save */
--#define CMD_SUBCMD_ENTER_PS 0x0030
--#define CMD_SUBCMD_EXIT_PS 0x0031
--#define CMD_SUBCMD_SLEEP_CONFIRMED 0x0034
--#define CMD_SUBCMD_FULL_POWERDOWN 0x0035
--#define CMD_SUBCMD_FULL_POWERUP 0x0036
--
--#define CMD_ENABLE_RSN 0x0001
--#define CMD_DISABLE_RSN 0x0000
-+#define CMD_SUBCMD_ENTER_PS 0x0030
-+#define CMD_SUBCMD_EXIT_PS 0x0031
-+#define CMD_SUBCMD_SLEEP_CONFIRMED 0x0034
-+#define CMD_SUBCMD_FULL_POWERDOWN 0x0035
-+#define CMD_SUBCMD_FULL_POWERUP 0x0036
+ return ret;
+ }
--#define CMD_ACT_SET 0x0001
--#define CMD_ACT_GET 0x0000
-+#define CMD_ENABLE_RSN 0x0001
-+#define CMD_DISABLE_RSN 0x0000
+-void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen)
++void lbs_get_fwversion(struct lbs_private *priv, char *fwversion, int maxlen)
+ {
+ char fwver[32];
--#define CMD_ACT_GET_AES (CMD_ACT_GET + 2)
--#define CMD_ACT_SET_AES (CMD_ACT_SET + 2)
--#define CMD_ACT_REMOVE_AES (CMD_ACT_SET + 3)
-+#define CMD_ACT_GET 0x0000
-+#define CMD_ACT_SET 0x0001
-+#define CMD_ACT_GET_AES 0x0002
-+#define CMD_ACT_SET_AES 0x0003
-+#define CMD_ACT_REMOVE_AES 0x0004
+- mutex_lock(&adapter->lock);
++ mutex_lock(&priv->lock);
- /* Define action or option for CMD_802_11_SET_WEP */
--#define CMD_ACT_ADD 0x0002
--#define CMD_ACT_REMOVE 0x0004
--#define CMD_ACT_USE_DEFAULT 0x0008
-+#define CMD_ACT_ADD 0x0002
-+#define CMD_ACT_REMOVE 0x0004
-+#define CMD_ACT_USE_DEFAULT 0x0008
+- if (adapter->fwreleasenumber[3] == 0)
+- sprintf(fwver, "%u.%u.%u",
+- adapter->fwreleasenumber[2],
+- adapter->fwreleasenumber[1],
+- adapter->fwreleasenumber[0]);
+- else
+- sprintf(fwver, "%u.%u.%u.p%u",
+- adapter->fwreleasenumber[2],
+- adapter->fwreleasenumber[1],
+- adapter->fwreleasenumber[0],
+- adapter->fwreleasenumber[3]);
++ sprintf(fwver, "%u.%u.%u.p%u",
++ priv->fwrelease >> 24 & 0xff,
++ priv->fwrelease >> 16 & 0xff,
++ priv->fwrelease >> 8 & 0xff,
++ priv->fwrelease & 0xff);
--#define CMD_TYPE_WEP_40_BIT 0x01
--#define CMD_TYPE_WEP_104_BIT 0x02
-+#define CMD_TYPE_WEP_40_BIT 0x01
-+#define CMD_TYPE_WEP_104_BIT 0x02
+- mutex_unlock(&adapter->lock);
++ mutex_unlock(&priv->lock);
+ snprintf(fwversion, maxlen, fwver);
+ }
--#define CMD_NUM_OF_WEP_KEYS 4
-+#define CMD_NUM_OF_WEP_KEYS 4
+@@ -2014,19 +2085,19 @@ void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen)
+ /*
+ * iwconfig settable callbacks
+ */
+-static const iw_handler wlan_handler[] = {
++static const iw_handler lbs_handler[] = {
+ (iw_handler) NULL, /* SIOCSIWCOMMIT */
+- (iw_handler) wlan_get_name, /* SIOCGIWNAME */
++ (iw_handler) lbs_get_name, /* SIOCGIWNAME */
+ (iw_handler) NULL, /* SIOCSIWNWID */
+ (iw_handler) NULL, /* SIOCGIWNWID */
+- (iw_handler) wlan_set_freq, /* SIOCSIWFREQ */
+- (iw_handler) wlan_get_freq, /* SIOCGIWFREQ */
+- (iw_handler) wlan_set_mode, /* SIOCSIWMODE */
+- (iw_handler) wlan_get_mode, /* SIOCGIWMODE */
++ (iw_handler) lbs_set_freq, /* SIOCSIWFREQ */
++ (iw_handler) lbs_get_freq, /* SIOCGIWFREQ */
++ (iw_handler) lbs_set_mode, /* SIOCSIWMODE */
++ (iw_handler) lbs_get_mode, /* SIOCGIWMODE */
+ (iw_handler) NULL, /* SIOCSIWSENS */
+ (iw_handler) NULL, /* SIOCGIWSENS */
+ (iw_handler) NULL, /* SIOCSIWRANGE */
+- (iw_handler) wlan_get_range, /* SIOCGIWRANGE */
++ (iw_handler) lbs_get_range, /* SIOCGIWRANGE */
+ (iw_handler) NULL, /* SIOCSIWPRIV */
+ (iw_handler) NULL, /* SIOCGIWPRIV */
+ (iw_handler) NULL, /* SIOCSIWSTATS */
+@@ -2035,56 +2106,56 @@ static const iw_handler wlan_handler[] = {
+ iw_handler_get_spy, /* SIOCGIWSPY */
+ iw_handler_set_thrspy, /* SIOCSIWTHRSPY */
+ iw_handler_get_thrspy, /* SIOCGIWTHRSPY */
+- (iw_handler) wlan_set_wap, /* SIOCSIWAP */
+- (iw_handler) wlan_get_wap, /* SIOCGIWAP */
++ (iw_handler) lbs_set_wap, /* SIOCSIWAP */
++ (iw_handler) lbs_get_wap, /* SIOCGIWAP */
+ (iw_handler) NULL, /* SIOCSIWMLME */
+ (iw_handler) NULL, /* SIOCGIWAPLIST - deprecated */
+- (iw_handler) libertas_set_scan, /* SIOCSIWSCAN */
+- (iw_handler) libertas_get_scan, /* SIOCGIWSCAN */
+- (iw_handler) wlan_set_essid, /* SIOCSIWESSID */
+- (iw_handler) wlan_get_essid, /* SIOCGIWESSID */
+- (iw_handler) wlan_set_nick, /* SIOCSIWNICKN */
+- (iw_handler) wlan_get_nick, /* SIOCGIWNICKN */
++ (iw_handler) lbs_set_scan, /* SIOCSIWSCAN */
++ (iw_handler) lbs_get_scan, /* SIOCGIWSCAN */
++ (iw_handler) lbs_set_essid, /* SIOCSIWESSID */
++ (iw_handler) lbs_get_essid, /* SIOCGIWESSID */
++ (iw_handler) lbs_set_nick, /* SIOCSIWNICKN */
++ (iw_handler) lbs_get_nick, /* SIOCGIWNICKN */
+ (iw_handler) NULL, /* -- hole -- */
+ (iw_handler) NULL, /* -- hole -- */
+- (iw_handler) wlan_set_rate, /* SIOCSIWRATE */
+- (iw_handler) wlan_get_rate, /* SIOCGIWRATE */
+- (iw_handler) wlan_set_rts, /* SIOCSIWRTS */
+- (iw_handler) wlan_get_rts, /* SIOCGIWRTS */
+- (iw_handler) wlan_set_frag, /* SIOCSIWFRAG */
+- (iw_handler) wlan_get_frag, /* SIOCGIWFRAG */
+- (iw_handler) wlan_set_txpow, /* SIOCSIWTXPOW */
+- (iw_handler) wlan_get_txpow, /* SIOCGIWTXPOW */
+- (iw_handler) wlan_set_retry, /* SIOCSIWRETRY */
+- (iw_handler) wlan_get_retry, /* SIOCGIWRETRY */
+- (iw_handler) wlan_set_encode, /* SIOCSIWENCODE */
+- (iw_handler) wlan_get_encode, /* SIOCGIWENCODE */
+- (iw_handler) wlan_set_power, /* SIOCSIWPOWER */
+- (iw_handler) wlan_get_power, /* SIOCGIWPOWER */
++ (iw_handler) lbs_set_rate, /* SIOCSIWRATE */
++ (iw_handler) lbs_get_rate, /* SIOCGIWRATE */
++ (iw_handler) lbs_set_rts, /* SIOCSIWRTS */
++ (iw_handler) lbs_get_rts, /* SIOCGIWRTS */
++ (iw_handler) lbs_set_frag, /* SIOCSIWFRAG */
++ (iw_handler) lbs_get_frag, /* SIOCGIWFRAG */
++ (iw_handler) lbs_set_txpow, /* SIOCSIWTXPOW */
++ (iw_handler) lbs_get_txpow, /* SIOCGIWTXPOW */
++ (iw_handler) lbs_set_retry, /* SIOCSIWRETRY */
++ (iw_handler) lbs_get_retry, /* SIOCGIWRETRY */
++ (iw_handler) lbs_set_encode, /* SIOCSIWENCODE */
++ (iw_handler) lbs_get_encode, /* SIOCGIWENCODE */
++ (iw_handler) lbs_set_power, /* SIOCSIWPOWER */
++ (iw_handler) lbs_get_power, /* SIOCGIWPOWER */
+ (iw_handler) NULL, /* -- hole -- */
+ (iw_handler) NULL, /* -- hole -- */
+- (iw_handler) wlan_set_genie, /* SIOCSIWGENIE */
+- (iw_handler) wlan_get_genie, /* SIOCGIWGENIE */
+- (iw_handler) wlan_set_auth, /* SIOCSIWAUTH */
+- (iw_handler) wlan_get_auth, /* SIOCGIWAUTH */
+- (iw_handler) wlan_set_encodeext,/* SIOCSIWENCODEEXT */
+- (iw_handler) wlan_get_encodeext,/* SIOCGIWENCODEEXT */
++ (iw_handler) lbs_set_genie, /* SIOCSIWGENIE */
++ (iw_handler) lbs_get_genie, /* SIOCGIWGENIE */
++ (iw_handler) lbs_set_auth, /* SIOCSIWAUTH */
++ (iw_handler) lbs_get_auth, /* SIOCGIWAUTH */
++ (iw_handler) lbs_set_encodeext,/* SIOCSIWENCODEEXT */
++ (iw_handler) lbs_get_encodeext,/* SIOCGIWENCODEEXT */
+ (iw_handler) NULL, /* SIOCSIWPMKSA */
+ };
--#define CMD_WEP_KEY_INDEX_MASK 0x3fff
-+#define CMD_WEP_KEY_INDEX_MASK 0x3fff
+ static const iw_handler mesh_wlan_handler[] = {
+ (iw_handler) NULL, /* SIOCSIWCOMMIT */
+- (iw_handler) wlan_get_name, /* SIOCGIWNAME */
++ (iw_handler) lbs_get_name, /* SIOCGIWNAME */
+ (iw_handler) NULL, /* SIOCSIWNWID */
+ (iw_handler) NULL, /* SIOCGIWNWID */
+- (iw_handler) wlan_set_freq, /* SIOCSIWFREQ */
+- (iw_handler) wlan_get_freq, /* SIOCGIWFREQ */
++ (iw_handler) lbs_mesh_set_freq, /* SIOCSIWFREQ */
++ (iw_handler) lbs_get_freq, /* SIOCGIWFREQ */
+ (iw_handler) NULL, /* SIOCSIWMODE */
+ (iw_handler) mesh_wlan_get_mode, /* SIOCGIWMODE */
+ (iw_handler) NULL, /* SIOCSIWSENS */
+ (iw_handler) NULL, /* SIOCGIWSENS */
+ (iw_handler) NULL, /* SIOCSIWRANGE */
+- (iw_handler) wlan_get_range, /* SIOCGIWRANGE */
++ (iw_handler) lbs_get_range, /* SIOCGIWRANGE */
+ (iw_handler) NULL, /* SIOCSIWPRIV */
+ (iw_handler) NULL, /* SIOCGIWPRIV */
+ (iw_handler) NULL, /* SIOCSIWSTATS */
+@@ -2097,46 +2168,46 @@ static const iw_handler mesh_wlan_handler[] = {
+ (iw_handler) NULL, /* SIOCGIWAP */
+ (iw_handler) NULL, /* SIOCSIWMLME */
+ (iw_handler) NULL, /* SIOCGIWAPLIST - deprecated */
+- (iw_handler) libertas_set_scan, /* SIOCSIWSCAN */
+- (iw_handler) libertas_get_scan, /* SIOCGIWSCAN */
+- (iw_handler) NULL, /* SIOCSIWESSID */
+- (iw_handler) NULL, /* SIOCGIWESSID */
++ (iw_handler) lbs_set_scan, /* SIOCSIWSCAN */
++ (iw_handler) lbs_get_scan, /* SIOCGIWSCAN */
++ (iw_handler) lbs_mesh_set_essid,/* SIOCSIWESSID */
++ (iw_handler) lbs_mesh_get_essid,/* SIOCGIWESSID */
+ (iw_handler) NULL, /* SIOCSIWNICKN */
+ (iw_handler) mesh_get_nick, /* SIOCGIWNICKN */
+ (iw_handler) NULL, /* -- hole -- */
+ (iw_handler) NULL, /* -- hole -- */
+- (iw_handler) wlan_set_rate, /* SIOCSIWRATE */
+- (iw_handler) wlan_get_rate, /* SIOCGIWRATE */
+- (iw_handler) wlan_set_rts, /* SIOCSIWRTS */
+- (iw_handler) wlan_get_rts, /* SIOCGIWRTS */
+- (iw_handler) wlan_set_frag, /* SIOCSIWFRAG */
+- (iw_handler) wlan_get_frag, /* SIOCGIWFRAG */
+- (iw_handler) wlan_set_txpow, /* SIOCSIWTXPOW */
+- (iw_handler) wlan_get_txpow, /* SIOCGIWTXPOW */
+- (iw_handler) wlan_set_retry, /* SIOCSIWRETRY */
+- (iw_handler) wlan_get_retry, /* SIOCGIWRETRY */
+- (iw_handler) wlan_set_encode, /* SIOCSIWENCODE */
+- (iw_handler) wlan_get_encode, /* SIOCGIWENCODE */
+- (iw_handler) wlan_set_power, /* SIOCSIWPOWER */
+- (iw_handler) wlan_get_power, /* SIOCGIWPOWER */
++ (iw_handler) lbs_set_rate, /* SIOCSIWRATE */
++ (iw_handler) lbs_get_rate, /* SIOCGIWRATE */
++ (iw_handler) lbs_set_rts, /* SIOCSIWRTS */
++ (iw_handler) lbs_get_rts, /* SIOCGIWRTS */
++ (iw_handler) lbs_set_frag, /* SIOCSIWFRAG */
++ (iw_handler) lbs_get_frag, /* SIOCGIWFRAG */
++ (iw_handler) lbs_set_txpow, /* SIOCSIWTXPOW */
++ (iw_handler) lbs_get_txpow, /* SIOCGIWTXPOW */
++ (iw_handler) lbs_set_retry, /* SIOCSIWRETRY */
++ (iw_handler) lbs_get_retry, /* SIOCGIWRETRY */
++ (iw_handler) lbs_set_encode, /* SIOCSIWENCODE */
++ (iw_handler) lbs_get_encode, /* SIOCGIWENCODE */
++ (iw_handler) lbs_set_power, /* SIOCSIWPOWER */
++ (iw_handler) lbs_get_power, /* SIOCGIWPOWER */
+ (iw_handler) NULL, /* -- hole -- */
+ (iw_handler) NULL, /* -- hole -- */
+- (iw_handler) wlan_set_genie, /* SIOCSIWGENIE */
+- (iw_handler) wlan_get_genie, /* SIOCGIWGENIE */
+- (iw_handler) wlan_set_auth, /* SIOCSIWAUTH */
+- (iw_handler) wlan_get_auth, /* SIOCGIWAUTH */
+- (iw_handler) wlan_set_encodeext,/* SIOCSIWENCODEEXT */
+- (iw_handler) wlan_get_encodeext,/* SIOCGIWENCODEEXT */
++ (iw_handler) lbs_set_genie, /* SIOCSIWGENIE */
++ (iw_handler) lbs_get_genie, /* SIOCGIWGENIE */
++ (iw_handler) lbs_set_auth, /* SIOCSIWAUTH */
++ (iw_handler) lbs_get_auth, /* SIOCGIWAUTH */
++ (iw_handler) lbs_set_encodeext,/* SIOCSIWENCODEEXT */
++ (iw_handler) lbs_get_encodeext,/* SIOCGIWENCODEEXT */
+ (iw_handler) NULL, /* SIOCSIWPMKSA */
+ };
+-struct iw_handler_def libertas_handler_def = {
+- .num_standard = ARRAY_SIZE(wlan_handler),
+- .standard = (iw_handler *) wlan_handler,
+- .get_wireless_stats = wlan_get_wireless_stats,
++struct iw_handler_def lbs_handler_def = {
++ .num_standard = ARRAY_SIZE(lbs_handler),
++ .standard = (iw_handler *) lbs_handler,
++ .get_wireless_stats = lbs_get_wireless_stats,
+ };
- /* Define action or option for CMD_802_11_RESET */
--#define CMD_ACT_HALT 0x0003
-+#define CMD_ACT_HALT 0x0003
+ struct iw_handler_def mesh_handler_def = {
+ .num_standard = ARRAY_SIZE(mesh_wlan_handler),
+ .standard = (iw_handler *) mesh_wlan_handler,
+- .get_wireless_stats = wlan_get_wireless_stats,
++ .get_wireless_stats = lbs_get_wireless_stats,
+ };
+diff --git a/drivers/net/wireless/libertas/wext.h b/drivers/net/wireless/libertas/wext.h
+index 6aa444c..a563d9a 100644
+--- a/drivers/net/wireless/libertas/wext.h
++++ b/drivers/net/wireless/libertas/wext.h
+@@ -1,11 +1,11 @@
+ /**
+ * This file contains definition for IOCTL call.
+ */
+-#ifndef _WLAN_WEXT_H_
+-#define _WLAN_WEXT_H_
++#ifndef _LBS_WEXT_H_
++#define _LBS_WEXT_H_
- /* Define action or option for CMD_802_11_SCAN */
--#define CMD_BSS_TYPE_BSS 0x0001
--#define CMD_BSS_TYPE_IBSS 0x0002
--#define CMD_BSS_TYPE_ANY 0x0003
-+#define CMD_BSS_TYPE_BSS 0x0001
-+#define CMD_BSS_TYPE_IBSS 0x0002
-+#define CMD_BSS_TYPE_ANY 0x0003
+-/** wlan_ioctl_regrdwr */
+-struct wlan_ioctl_regrdwr {
++/** lbs_ioctl_regrdwr */
++struct lbs_ioctl_regrdwr {
+ /** Which register to access */
+ u16 whichreg;
+ /** Read or Write */
+@@ -15,9 +15,9 @@ struct wlan_ioctl_regrdwr {
+ u32 value;
+ };
- /* Define action or option for CMD_802_11_SCAN */
--#define CMD_SCAN_TYPE_ACTIVE 0x0000
--#define CMD_SCAN_TYPE_PASSIVE 0x0001
-+#define CMD_SCAN_TYPE_ACTIVE 0x0000
-+#define CMD_SCAN_TYPE_PASSIVE 0x0001
+-#define WLAN_MONITOR_OFF 0
++#define LBS_MONITOR_OFF 0
- #define CMD_SCAN_RADIO_TYPE_BG 0
+-extern struct iw_handler_def libertas_handler_def;
++extern struct iw_handler_def lbs_handler_def;
+ extern struct iw_handler_def mesh_handler_def;
--#define CMD_SCAN_PROBE_DELAY_TIME 0
-+#define CMD_SCAN_PROBE_DELAY_TIME 0
+-#endif /* _WLAN_WEXT_H_ */
++#endif
+diff --git a/drivers/net/wireless/orinoco.c b/drivers/net/wireless/orinoco.c
+index ca6c2da..6d13a0d 100644
+--- a/drivers/net/wireless/orinoco.c
++++ b/drivers/net/wireless/orinoco.c
+@@ -270,6 +270,37 @@ static inline void set_port_type(struct orinoco_private *priv)
+ }
+ }
- /* Define action or option for CMD_MAC_CONTROL */
--#define CMD_ACT_MAC_RX_ON 0x0001
--#define CMD_ACT_MAC_TX_ON 0x0002
--#define CMD_ACT_MAC_LOOPBACK_ON 0x0004
--#define CMD_ACT_MAC_WEP_ENABLE 0x0008
--#define CMD_ACT_MAC_INT_ENABLE 0x0010
--#define CMD_ACT_MAC_MULTICAST_ENABLE 0x0020
--#define CMD_ACT_MAC_BROADCAST_ENABLE 0x0040
--#define CMD_ACT_MAC_PROMISCUOUS_ENABLE 0x0080
--#define CMD_ACT_MAC_ALL_MULTICAST_ENABLE 0x0100
--#define CMD_ACT_MAC_STRICT_PROTECTION_ENABLE 0x0400
-+#define CMD_ACT_MAC_RX_ON 0x0001
-+#define CMD_ACT_MAC_TX_ON 0x0002
-+#define CMD_ACT_MAC_LOOPBACK_ON 0x0004
-+#define CMD_ACT_MAC_WEP_ENABLE 0x0008
-+#define CMD_ACT_MAC_INT_ENABLE 0x0010
-+#define CMD_ACT_MAC_MULTICAST_ENABLE 0x0020
-+#define CMD_ACT_MAC_BROADCAST_ENABLE 0x0040
-+#define CMD_ACT_MAC_PROMISCUOUS_ENABLE 0x0080
-+#define CMD_ACT_MAC_ALL_MULTICAST_ENABLE 0x0100
-+#define CMD_ACT_MAC_STRICT_PROTECTION_ENABLE 0x0400
++#define ORINOCO_MAX_BSS_COUNT 64
++static int orinoco_bss_data_allocate(struct orinoco_private *priv)
++{
++ if (priv->bss_data)
++ return 0;
++
++ priv->bss_data =
++ kzalloc(ORINOCO_MAX_BSS_COUNT * sizeof(bss_element), GFP_KERNEL);
++ if (!priv->bss_data) {
++ printk(KERN_WARNING "Out of memory allocating beacons");
++ return -ENOMEM;
++ }
++ return 0;
++}
++
++static void orinoco_bss_data_free(struct orinoco_private *priv)
++{
++ kfree(priv->bss_data);
++ priv->bss_data = NULL;
++}
++
++static void orinoco_bss_data_init(struct orinoco_private *priv)
++{
++ int i;
++
++ INIT_LIST_HEAD(&priv->bss_free_list);
++ INIT_LIST_HEAD(&priv->bss_list);
++ for (i = 0; i < ORINOCO_MAX_BSS_COUNT; i++)
++ list_add_tail(&priv->bss_data[i].list, &priv->bss_free_list);
++}
++
+ /********************************************************************/
+ /* Device methods */
+ /********************************************************************/
+@@ -1083,6 +1114,124 @@ static void orinoco_send_wevents(struct work_struct *work)
+ orinoco_unlock(priv, &flags);
+ }
- /* Define action or option for CMD_802_11_RADIO_CONTROL */
--#define CMD_TYPE_AUTO_PREAMBLE 0x0001
--#define CMD_TYPE_SHORT_PREAMBLE 0x0002
--#define CMD_TYPE_LONG_PREAMBLE 0x0003
--
--#define TURN_ON_RF 0x01
--#define RADIO_ON 0x01
--#define RADIO_OFF 0x00
--
--#define SET_AUTO_PREAMBLE 0x05
--#define SET_SHORT_PREAMBLE 0x03
--#define SET_LONG_PREAMBLE 0x01
-+#define CMD_TYPE_AUTO_PREAMBLE 0x0001
-+#define CMD_TYPE_SHORT_PREAMBLE 0x0002
-+#define CMD_TYPE_LONG_PREAMBLE 0x0003
+
-+/* Event flags for CMD_802_11_SUBSCRIBE_EVENT */
-+#define CMD_SUBSCRIBE_RSSI_LOW 0x0001
-+#define CMD_SUBSCRIBE_SNR_LOW 0x0002
-+#define CMD_SUBSCRIBE_FAILCOUNT 0x0004
-+#define CMD_SUBSCRIBE_BCNMISS 0x0008
-+#define CMD_SUBSCRIBE_RSSI_HIGH 0x0010
-+#define CMD_SUBSCRIBE_SNR_HIGH 0x0020
++static inline void orinoco_clear_scan_results(struct orinoco_private *priv,
++ unsigned long scan_age)
++{
++ bss_element *bss;
++ bss_element *tmp_bss;
++
++ /* Blow away current list of scan results */
++ list_for_each_entry_safe(bss, tmp_bss, &priv->bss_list, list) {
++ if (!scan_age ||
++ time_after(jiffies, bss->last_scanned + scan_age)) {
++ list_move_tail(&bss->list, &priv->bss_free_list);
++ /* Don't blow away ->list, just BSS data */
++ memset(bss, 0, sizeof(bss->bss));
++ bss->last_scanned = 0;
++ }
++ }
++}
++
++static int orinoco_process_scan_results(struct net_device *dev,
++ unsigned char *buf,
++ int len)
++{
++ struct orinoco_private *priv = netdev_priv(dev);
++ int offset; /* In the scan data */
++ union hermes_scan_info *atom;
++ int atom_len;
++
++ switch (priv->firmware_type) {
++ case FIRMWARE_TYPE_AGERE:
++ atom_len = sizeof(struct agere_scan_apinfo);
++ offset = 0;
++ break;
++ case FIRMWARE_TYPE_SYMBOL:
++ /* Lack of documentation necessitates this hack.
++ * Different firmwares have 68 or 76 byte long atoms.
++ * We try modulo first. If the length divides by both,
++ * we check what would be the channel in the second
++ * frame for a 68-byte atom. 76-byte atoms have 0 there.
++ * Valid channel cannot be 0. */
++ if (len % 76)
++ atom_len = 68;
++ else if (len % 68)
++ atom_len = 76;
++ else if (len >= 1292 && buf[68] == 0)
++ atom_len = 76;
++ else
++ atom_len = 68;
++ offset = 0;
++ break;
++ case FIRMWARE_TYPE_INTERSIL:
++ offset = 4;
++ if (priv->has_hostscan) {
++ atom_len = le16_to_cpup((__le16 *)buf);
++ /* Sanity check for atom_len */
++ if (atom_len < sizeof(struct prism2_scan_apinfo)) {
++ printk(KERN_ERR "%s: Invalid atom_len in scan "
++ "data: %d\n", dev->name, atom_len);
++ return -EIO;
++ }
++ } else
++ atom_len = offsetof(struct prism2_scan_apinfo, atim);
++ break;
++ default:
++ return -EOPNOTSUPP;
++ }
++
++ /* Check that we got an whole number of atoms */
++ if ((len - offset) % atom_len) {
++ printk(KERN_ERR "%s: Unexpected scan data length %d, "
++ "atom_len %d, offset %d\n", dev->name, len,
++ atom_len, offset);
++ return -EIO;
++ }
++
++ orinoco_clear_scan_results(priv, msecs_to_jiffies(15000));
++
++ /* Read the entries one by one */
++ for (; offset + atom_len <= len; offset += atom_len) {
++ int found = 0;
++ bss_element *bss = NULL;
++
++ /* Get next atom */
++ atom = (union hermes_scan_info *) (buf + offset);
++
++ /* Try to update an existing bss first */
++ list_for_each_entry(bss, &priv->bss_list, list) {
++ if (compare_ether_addr(bss->bss.a.bssid, atom->a.bssid))
++ continue;
++ if (le16_to_cpu(bss->bss.a.essid_len) !=
++ le16_to_cpu(atom->a.essid_len))
++ continue;
++ if (memcmp(bss->bss.a.essid, atom->a.essid,
++ le16_to_cpu(atom->a.essid_len)))
++ continue;
++ found = 1;
++ break;
++ }
++
++ /* Grab a bss off the free list */
++ if (!found && !list_empty(&priv->bss_free_list)) {
++ bss = list_entry(priv->bss_free_list.next,
++ bss_element, list);
++ list_del(priv->bss_free_list.next);
++
++ list_add_tail(&bss->list, &priv->bss_list);
++ }
++
++ if (bss) {
++ /* Always update the BSS to get latest beacon info */
++ memcpy(&bss->bss, atom, sizeof(bss->bss));
++ bss->last_scanned = jiffies;
++ }
++ }
+
-+#define TURN_ON_RF 0x01
-+#define RADIO_ON 0x01
-+#define RADIO_OFF 0x00
++ return 0;
++}
+
-+#define SET_AUTO_PREAMBLE 0x05
-+#define SET_SHORT_PREAMBLE 0x03
-+#define SET_LONG_PREAMBLE 0x01
+ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
+ {
+ struct orinoco_private *priv = netdev_priv(dev);
+@@ -1208,6 +1357,9 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
+ union iwreq_data wrqu;
+ unsigned char *buf;
- /* Define action or option for CMD_802_11_RF_CHANNEL */
--#define CMD_OPT_802_11_RF_CHANNEL_GET 0x00
--#define CMD_OPT_802_11_RF_CHANNEL_SET 0x01
-+#define CMD_OPT_802_11_RF_CHANNEL_GET 0x00
-+#define CMD_OPT_802_11_RF_CHANNEL_SET 0x01
++ /* Scan is no longer in progress */
++ priv->scan_inprogress = 0;
++
+ /* Sanity check */
+ if (len > 4096) {
+ printk(KERN_WARNING "%s: Scan results too large (%d bytes)\n",
+@@ -1215,15 +1367,6 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
+ break;
+ }
- /* Define action or option for CMD_802_11_RF_TX_POWER */
--#define CMD_ACT_TX_POWER_OPT_GET 0x0000
--#define CMD_ACT_TX_POWER_OPT_SET_HIGH 0x8007
--#define CMD_ACT_TX_POWER_OPT_SET_MID 0x8004
--#define CMD_ACT_TX_POWER_OPT_SET_LOW 0x8000
-+#define CMD_ACT_TX_POWER_OPT_GET 0x0000
-+#define CMD_ACT_TX_POWER_OPT_SET_HIGH 0x8007
-+#define CMD_ACT_TX_POWER_OPT_SET_MID 0x8004
-+#define CMD_ACT_TX_POWER_OPT_SET_LOW 0x8000
+- /* We are a strict producer. If the previous scan results
+- * have not been consumed, we just have to drop this
+- * frame. We can't remove the previous results ourselves,
+- * that would be *very* racy... Jean II */
+- if (priv->scan_result != NULL) {
+- printk(KERN_WARNING "%s: Previous scan results not consumed, dropping info frame.\n", dev->name);
+- break;
+- }
+-
+ /* Allocate buffer for results */
+ buf = kmalloc(len, GFP_ATOMIC);
+ if (buf == NULL)
+@@ -1248,18 +1391,17 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
+ }
+ #endif /* ORINOCO_DEBUG */
--#define CMD_ACT_TX_POWER_INDEX_HIGH 0x0007
--#define CMD_ACT_TX_POWER_INDEX_MID 0x0004
--#define CMD_ACT_TX_POWER_INDEX_LOW 0x0000
-+#define CMD_ACT_TX_POWER_INDEX_HIGH 0x0007
-+#define CMD_ACT_TX_POWER_INDEX_MID 0x0004
-+#define CMD_ACT_TX_POWER_INDEX_LOW 0x0000
+- /* Allow the clients to access the results */
+- priv->scan_len = len;
+- priv->scan_result = buf;
+-
+- /* Send an empty event to user space.
+- * We don't send the received data on the event because
+- * it would require us to do complex transcoding, and
+- * we want to minimise the work done in the irq handler
+- * Use a request to extract the data - Jean II */
+- wrqu.data.length = 0;
+- wrqu.data.flags = 0;
+- wireless_send_event(dev, SIOCGIWSCAN, &wrqu, NULL);
++ if (orinoco_process_scan_results(dev, buf, len) == 0) {
++ /* Send an empty event to user space.
++ * We don't send the received data on the event because
++ * it would require us to do complex transcoding, and
++ * we want to minimise the work done in the irq handler
++ * Use a request to extract the data - Jean II */
++ wrqu.data.length = 0;
++ wrqu.data.flags = 0;
++ wireless_send_event(dev, SIOCGIWSCAN, &wrqu, NULL);
++ }
++ kfree(buf);
+ }
+ break;
+ case HERMES_INQ_SEC_STAT_AGERE:
+@@ -1896,8 +2038,7 @@ static void orinoco_reset(struct work_struct *work)
+ orinoco_unlock(priv, &flags);
- /* Define action or option for CMD_802_11_DATA_RATE */
--#define CMD_ACT_SET_TX_AUTO 0x0000
--#define CMD_ACT_SET_TX_FIX_RATE 0x0001
--#define CMD_ACT_GET_TX_RATE 0x0002
-+#define CMD_ACT_SET_TX_AUTO 0x0000
-+#define CMD_ACT_SET_TX_FIX_RATE 0x0001
-+#define CMD_ACT_GET_TX_RATE 0x0002
+ /* Scanning support: Cleanup of driver struct */
+- kfree(priv->scan_result);
+- priv->scan_result = NULL;
++ orinoco_clear_scan_results(priv, 0);
+ priv->scan_inprogress = 0;
--#define CMD_ACT_SET_RX 0x0001
--#define CMD_ACT_SET_TX 0x0002
--#define CMD_ACT_SET_BOTH 0x0003
--#define CMD_ACT_GET_RX 0x0004
--#define CMD_ACT_GET_TX 0x0008
--#define CMD_ACT_GET_BOTH 0x000c
-+#define CMD_ACT_SET_RX 0x0001
-+#define CMD_ACT_SET_TX 0x0002
-+#define CMD_ACT_SET_BOTH 0x0003
-+#define CMD_ACT_GET_RX 0x0004
-+#define CMD_ACT_GET_TX 0x0008
-+#define CMD_ACT_GET_BOTH 0x000c
+ if (priv->hard_reset) {
+@@ -2412,6 +2553,10 @@ struct net_device *alloc_orinocodev(int sizeof_card,
+ else
+ priv->card = NULL;
- /* Define action or option for CMD_802_11_PS_MODE */
--#define CMD_TYPE_CAM 0x0000
--#define CMD_TYPE_MAX_PSP 0x0001
--#define CMD_TYPE_FAST_PSP 0x0002
-+#define CMD_TYPE_CAM 0x0000
-+#define CMD_TYPE_MAX_PSP 0x0001
-+#define CMD_TYPE_FAST_PSP 0x0002
++ if (orinoco_bss_data_allocate(priv))
++ goto err_out_free;
++ orinoco_bss_data_init(priv);
+
-+/* Options for CMD_802_11_FW_WAKE_METHOD */
-+#define CMD_WAKE_METHOD_UNCHANGED 0x0000
-+#define CMD_WAKE_METHOD_COMMAND_INT 0x0001
-+#define CMD_WAKE_METHOD_GPIO 0x0002
+ /* Setup / override net_device fields */
+ dev->init = orinoco_init;
+ dev->hard_start_xmit = orinoco_xmit;
+@@ -2447,13 +2592,16 @@ struct net_device *alloc_orinocodev(int sizeof_card,
- /* Define action or option for CMD_BT_ACCESS */
- enum cmd_bt_access_opts {
-@@ -237,8 +234,8 @@ enum cmd_fwt_access_opts {
- CMD_ACT_FWT_ACCESS_DEL,
- CMD_ACT_FWT_ACCESS_LOOKUP,
- CMD_ACT_FWT_ACCESS_LIST,
-- CMD_ACT_FWT_ACCESS_LIST_route,
-- CMD_ACT_FWT_ACCESS_LIST_neighbor,
-+ CMD_ACT_FWT_ACCESS_LIST_ROUTE,
-+ CMD_ACT_FWT_ACCESS_LIST_NEIGHBOR,
- CMD_ACT_FWT_ACCESS_RESET,
- CMD_ACT_FWT_ACCESS_CLEANUP,
- CMD_ACT_FWT_ACCESS_TIME,
-@@ -264,27 +261,36 @@ enum cmd_mesh_access_opts {
- };
+ return dev;
- /** Card Event definition */
--#define MACREG_INT_CODE_TX_PPA_FREE 0x00000000
--#define MACREG_INT_CODE_TX_DMA_DONE 0x00000001
--#define MACREG_INT_CODE_LINK_LOSE_W_SCAN 0x00000002
--#define MACREG_INT_CODE_LINK_LOSE_NO_SCAN 0x00000003
--#define MACREG_INT_CODE_LINK_SENSED 0x00000004
--#define MACREG_INT_CODE_CMD_FINISHED 0x00000005
--#define MACREG_INT_CODE_MIB_CHANGED 0x00000006
--#define MACREG_INT_CODE_INIT_DONE 0x00000007
--#define MACREG_INT_CODE_DEAUTHENTICATED 0x00000008
--#define MACREG_INT_CODE_DISASSOCIATED 0x00000009
--#define MACREG_INT_CODE_PS_AWAKE 0x0000000a
--#define MACREG_INT_CODE_PS_SLEEP 0x0000000b
--#define MACREG_INT_CODE_MIC_ERR_MULTICAST 0x0000000d
--#define MACREG_INT_CODE_MIC_ERR_UNICAST 0x0000000e
--#define MACREG_INT_CODE_WM_AWAKE 0x0000000f
--#define MACREG_INT_CODE_ADHOC_BCN_LOST 0x00000011
--#define MACREG_INT_CODE_RSSI_LOW 0x00000019
--#define MACREG_INT_CODE_SNR_LOW 0x0000001a
--#define MACREG_INT_CODE_MAX_FAIL 0x0000001b
--#define MACREG_INT_CODE_RSSI_HIGH 0x0000001c
--#define MACREG_INT_CODE_SNR_HIGH 0x0000001d
--#define MACREG_INT_CODE_MESH_AUTO_STARTED 0x00000023
--
--#endif /* _HOST_H_ */
-+#define MACREG_INT_CODE_TX_PPA_FREE 0
-+#define MACREG_INT_CODE_TX_DMA_DONE 1
-+#define MACREG_INT_CODE_LINK_LOST_W_SCAN 2
-+#define MACREG_INT_CODE_LINK_LOST_NO_SCAN 3
-+#define MACREG_INT_CODE_LINK_SENSED 4
-+#define MACREG_INT_CODE_CMD_FINISHED 5
-+#define MACREG_INT_CODE_MIB_CHANGED 6
-+#define MACREG_INT_CODE_INIT_DONE 7
-+#define MACREG_INT_CODE_DEAUTHENTICATED 8
-+#define MACREG_INT_CODE_DISASSOCIATED 9
-+#define MACREG_INT_CODE_PS_AWAKE 10
-+#define MACREG_INT_CODE_PS_SLEEP 11
-+#define MACREG_INT_CODE_MIC_ERR_MULTICAST 13
-+#define MACREG_INT_CODE_MIC_ERR_UNICAST 14
-+#define MACREG_INT_CODE_WM_AWAKE 15
-+#define MACREG_INT_CODE_DEEP_SLEEP_AWAKE 16
-+#define MACREG_INT_CODE_ADHOC_BCN_LOST 17
-+#define MACREG_INT_CODE_HOST_AWAKE 18
-+#define MACREG_INT_CODE_STOP_TX 19
-+#define MACREG_INT_CODE_START_TX 20
-+#define MACREG_INT_CODE_CHANNEL_SWITCH 21
-+#define MACREG_INT_CODE_MEASUREMENT_RDY 22
-+#define MACREG_INT_CODE_WMM_CHANGE 23
-+#define MACREG_INT_CODE_BG_SCAN_REPORT 24
-+#define MACREG_INT_CODE_RSSI_LOW 25
-+#define MACREG_INT_CODE_SNR_LOW 26
-+#define MACREG_INT_CODE_MAX_FAIL 27
-+#define MACREG_INT_CODE_RSSI_HIGH 28
-+#define MACREG_INT_CODE_SNR_HIGH 29
-+#define MACREG_INT_CODE_MESH_AUTO_STARTED 35
-+#define MACREG_INT_CODE_FIRMWARE_READY 48
-+
-+#endif
-diff --git a/drivers/net/wireless/libertas/hostcmd.h b/drivers/net/wireless/libertas/hostcmd.h
-index e1045dc..d35b015 100644
---- a/drivers/net/wireless/libertas/hostcmd.h
-+++ b/drivers/net/wireless/libertas/hostcmd.h
-@@ -2,8 +2,8 @@
- * This file contains the function prototypes, data structure
- * and defines for all the host/station commands
- */
--#ifndef __HOSTCMD__H
--#define __HOSTCMD__H
-+#ifndef _LBS_HOSTCMD_H
-+#define _LBS_HOSTCMD_H
++err_out_free:
++ free_netdev(dev);
++ return NULL;
+ }
- #include <linux/wireless.h>
- #include "11d.h"
-@@ -65,19 +65,21 @@ struct rxpd {
- u8 reserved[3];
- };
+ void free_orinocodev(struct net_device *dev)
+ {
+ struct orinoco_private *priv = netdev_priv(dev);
-+struct cmd_header {
-+ __le16 command;
-+ __le16 size;
-+ __le16 seqnum;
-+ __le16 result;
-+} __attribute__ ((packed));
-+
- struct cmd_ctrl_node {
-- /* CMD link list */
- struct list_head list;
-- u32 status;
-- /* CMD ID */
-- u32 cmd_oid;
-- /*CMD wait option: wait for finish or no wait */
-- u16 wait_option;
-- /* command parameter */
-- void *pdata_buf;
-- /*command data */
-- u8 *bufvirtualaddr;
-- u16 cmdflags;
-+ int result;
-+ /* command response */
-+ int (*callback)(struct lbs_private *, unsigned long, struct cmd_header *);
-+ unsigned long callback_arg;
-+ /* command data */
-+ struct cmd_header *cmdbuf;
- /* wait queue */
- u16 cmdwaitqwoken;
- wait_queue_head_t cmdwait_q;
-@@ -86,13 +88,13 @@ struct cmd_ctrl_node {
- /* Generic structure to hold all key types. */
- struct enc_key {
- u16 len;
-- u16 flags; /* KEY_INFO_* from wlan_defs.h */
-- u16 type; /* KEY_TYPE_* from wlan_defs.h */
-+ u16 flags; /* KEY_INFO_* from defs.h */
-+ u16 type; /* KEY_TYPE_* from defs.h */
- u8 key[32];
- };
+- kfree(priv->scan_result);
++ orinoco_bss_data_free(priv);
+ free_netdev(dev);
+ }
--/* wlan_offset_value */
--struct wlan_offset_value {
-+/* lbs_offset_value */
-+struct lbs_offset_value {
- u32 offset;
- u32 value;
- };
-@@ -104,14 +106,19 @@ struct cmd_ds_gen {
- __le16 size;
- __le16 seqnum;
- __le16 result;
-+ void *cmdresp[0];
- };
+@@ -3841,23 +3989,10 @@ static int orinoco_ioctl_setscan(struct net_device *dev,
+ * we access scan variables in priv is critical.
+ * o scan_inprogress : not touched by irq handler
+ * o scan_mode : not touched by irq handler
+- * o scan_result : irq is strict producer, non-irq is strict
+- * consumer.
+ * o scan_len : synchronised with scan_result
+ * Before modifying anything on those variables, please think hard !
+ * Jean II */
- #define S_DS_GEN sizeof(struct cmd_ds_gen)
+- /* If there is still some left-over scan results, get rid of it */
+- if (priv->scan_result != NULL) {
+- /* What's likely is that a client did crash or was killed
+- * between triggering the scan request and reading the
+- * results, so we need to reset everything.
+- * Some clients that are too slow may suffer from that...
+- * Jean II */
+- kfree(priv->scan_result);
+- priv->scan_result = NULL;
+- }
+-
+ /* Save flags */
+ priv->scan_mode = srq->flags;
+
+@@ -3905,169 +4040,125 @@ static int orinoco_ioctl_setscan(struct net_device *dev,
+ return err;
+ }
+
++#define MAX_CUSTOM_LEN 64
+
+ /* Translate scan data returned from the card to a card independant
+ * format that the Wireless Tools will understand - Jean II
+ * Return message length or -errno for fatal errors */
+-static inline int orinoco_translate_scan(struct net_device *dev,
+- char *buffer,
+- char *scan,
+- int scan_len)
++static inline char *orinoco_translate_scan(struct net_device *dev,
++ char *current_ev,
++ char *end_buf,
++ union hermes_scan_info *bss,
++ unsigned int last_scanned)
+ {
+ struct orinoco_private *priv = netdev_priv(dev);
+- int offset; /* In the scan data */
+- union hermes_scan_info *atom;
+- int atom_len;
+ u16 capabilities;
+ u16 channel;
+ struct iw_event iwe; /* Temporary buffer */
+- char * current_ev = buffer;
+- char * end_buf = buffer + IW_SCAN_MAX_DATA;
+-
+- switch (priv->firmware_type) {
+- case FIRMWARE_TYPE_AGERE:
+- atom_len = sizeof(struct agere_scan_apinfo);
+- offset = 0;
+- break;
+- case FIRMWARE_TYPE_SYMBOL:
+- /* Lack of documentation necessitates this hack.
+- * Different firmwares have 68 or 76 byte long atoms.
+- * We try modulo first. If the length divides by both,
+- * we check what would be the channel in the second
+- * frame for a 68-byte atom. 76-byte atoms have 0 there.
+- * Valid channel cannot be 0. */
+- if (scan_len % 76)
+- atom_len = 68;
+- else if (scan_len % 68)
+- atom_len = 76;
+- else if (scan_len >= 1292 && scan[68] == 0)
+- atom_len = 76;
++ char *p;
++ char custom[MAX_CUSTOM_LEN];
+
- /*
- * Define data structure for CMD_GET_HW_SPEC
- * This structure defines the response for the GET_HW_SPEC command
- */
- struct cmd_ds_get_hw_spec {
-+ struct cmd_header hdr;
++ /* First entry *MUST* be the AP MAC address */
++ iwe.cmd = SIOCGIWAP;
++ iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
++ memcpy(iwe.u.ap_addr.sa_data, bss->a.bssid, ETH_ALEN);
++ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_ADDR_LEN);
+
- /* HW Interface version number */
- __le16 hwifversion;
- /* HW version number */
-@@ -129,8 +136,8 @@ struct cmd_ds_get_hw_spec {
- /* Number of antenna used */
- __le16 nr_antenna;
-
-- /* FW release number, example 1,2,3,4 = 3.2.1p4 */
-- u8 fwreleasenumber[4];
-+ /* FW release number, example 0x01030304 = 2.3.4p1 */
-+ __le32 fwrelease;
-
- /* Base Address of TxPD queue */
- __le32 wcb_base;
-@@ -149,8 +156,17 @@ struct cmd_ds_802_11_reset {
- };
-
- struct cmd_ds_802_11_subscribe_event {
-+ struct cmd_header hdr;
++ /* Other entries will be displayed in the order we give them */
+
- __le16 action;
- __le16 events;
++ /* Add the ESSID */
++ iwe.u.data.length = le16_to_cpu(bss->a.essid_len);
++ if (iwe.u.data.length > 32)
++ iwe.u.data.length = 32;
++ iwe.cmd = SIOCGIWESSID;
++ iwe.u.data.flags = 1;
++ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, bss->a.essid);
+
-+ /* A TLV to the CMD_802_11_SUBSCRIBE_EVENT command can contain a
-+ * number of TLVs. From the v5.1 manual, those TLVs would add up to
-+ * 40 bytes. However, future firmware might add additional TLVs, so I
-+ * bump this up a bit.
-+ */
-+ uint8_t tlv[128];
- };
-
- /*
-@@ -242,6 +258,8 @@ struct cmd_ds_802_11_ad_hoc_result {
- };
-
- struct cmd_ds_802_11_set_wep {
-+ struct cmd_header hdr;
++ /* Add mode */
++ iwe.cmd = SIOCGIWMODE;
++ capabilities = le16_to_cpu(bss->a.capabilities);
++ if (capabilities & 0x3) {
++ if (capabilities & 0x1)
++ iwe.u.mode = IW_MODE_MASTER;
+ else
+- atom_len = 68;
+- offset = 0;
+- break;
+- case FIRMWARE_TYPE_INTERSIL:
+- offset = 4;
+- if (priv->has_hostscan) {
+- atom_len = le16_to_cpup((__le16 *)scan);
+- /* Sanity check for atom_len */
+- if (atom_len < sizeof(struct prism2_scan_apinfo)) {
+- printk(KERN_ERR "%s: Invalid atom_len in scan data: %d\n",
+- dev->name, atom_len);
+- return -EIO;
+- }
+- } else
+- atom_len = offsetof(struct prism2_scan_apinfo, atim);
+- break;
+- default:
+- return -EOPNOTSUPP;
+- }
+-
+- /* Check that we got an whole number of atoms */
+- if ((scan_len - offset) % atom_len) {
+- printk(KERN_ERR "%s: Unexpected scan data length %d, "
+- "atom_len %d, offset %d\n", dev->name, scan_len,
+- atom_len, offset);
+- return -EIO;
+- }
+-
+- /* Read the entries one by one */
+- for (; offset + atom_len <= scan_len; offset += atom_len) {
+- /* Get next atom */
+- atom = (union hermes_scan_info *) (scan + offset);
+-
+- /* First entry *MUST* be the AP MAC address */
+- iwe.cmd = SIOCGIWAP;
+- iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
+- memcpy(iwe.u.ap_addr.sa_data, atom->a.bssid, ETH_ALEN);
+- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_ADDR_LEN);
+-
+- /* Other entries will be displayed in the order we give them */
+-
+- /* Add the ESSID */
+- iwe.u.data.length = le16_to_cpu(atom->a.essid_len);
+- if (iwe.u.data.length > 32)
+- iwe.u.data.length = 32;
+- iwe.cmd = SIOCGIWESSID;
+- iwe.u.data.flags = 1;
+- current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, atom->a.essid);
+-
+- /* Add mode */
+- iwe.cmd = SIOCGIWMODE;
+- capabilities = le16_to_cpu(atom->a.capabilities);
+- if (capabilities & 0x3) {
+- if (capabilities & 0x1)
+- iwe.u.mode = IW_MODE_MASTER;
+- else
+- iwe.u.mode = IW_MODE_ADHOC;
+- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_UINT_LEN);
+- }
+-
+- channel = atom->s.channel;
+- if ( (channel >= 1) && (channel <= NUM_CHANNELS) ) {
+- /* Add frequency */
+- iwe.cmd = SIOCGIWFREQ;
+- iwe.u.freq.m = channel_frequency[channel-1] * 100000;
+- iwe.u.freq.e = 1;
+- current_ev = iwe_stream_add_event(current_ev, end_buf,
+- &iwe, IW_EV_FREQ_LEN);
+- }
++ iwe.u.mode = IW_MODE_ADHOC;
++ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_UINT_LEN);
++ }
+
- /* ACT_ADD, ACT_REMOVE or ACT_ENABLE */
- __le16 action;
-
-@@ -249,8 +267,8 @@ struct cmd_ds_802_11_set_wep {
- __le16 keyindex;
-
- /* 40, 128bit or TXWEP */
-- u8 keytype[4];
-- u8 keymaterial[4][16];
-+ uint8_t keytype[4];
-+ uint8_t keymaterial[4][16];
- };
-
- struct cmd_ds_802_3_get_stat {
-@@ -328,11 +346,21 @@ struct cmd_ds_rf_reg_access {
- };
++ channel = bss->s.channel;
++ if ((channel >= 1) && (channel <= NUM_CHANNELS)) {
++ /* Add frequency */
++ iwe.cmd = SIOCGIWFREQ;
++ iwe.u.freq.m = channel_frequency[channel-1] * 100000;
++ iwe.u.freq.e = 1;
++ current_ev = iwe_stream_add_event(current_ev, end_buf,
++ &iwe, IW_EV_FREQ_LEN);
++ }
++
++ /* Add quality statistics */
++ iwe.cmd = IWEVQUAL;
++ iwe.u.qual.updated = 0x10; /* no link quality */
++ iwe.u.qual.level = (__u8) le16_to_cpu(bss->a.level) - 0x95;
++ iwe.u.qual.noise = (__u8) le16_to_cpu(bss->a.noise) - 0x95;
++ /* Wireless tools prior to 27.pre22 will show link quality
++ * anyway, so we provide a reasonable value. */
++ if (iwe.u.qual.level > iwe.u.qual.noise)
++ iwe.u.qual.qual = iwe.u.qual.level - iwe.u.qual.noise;
++ else
++ iwe.u.qual.qual = 0;
++ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_QUAL_LEN);
- struct cmd_ds_802_11_radio_control {
-+ struct cmd_header hdr;
+- /* Add quality statistics */
+- iwe.cmd = IWEVQUAL;
+- iwe.u.qual.updated = 0x10; /* no link quality */
+- iwe.u.qual.level = (__u8) le16_to_cpu(atom->a.level) - 0x95;
+- iwe.u.qual.noise = (__u8) le16_to_cpu(atom->a.noise) - 0x95;
+- /* Wireless tools prior to 27.pre22 will show link quality
+- * anyway, so we provide a reasonable value. */
+- if (iwe.u.qual.level > iwe.u.qual.noise)
+- iwe.u.qual.qual = iwe.u.qual.level - iwe.u.qual.noise;
+- else
+- iwe.u.qual.qual = 0;
+- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_QUAL_LEN);
++ /* Add encryption capability */
++ iwe.cmd = SIOCGIWENCODE;
++ if (capabilities & 0x10)
++ iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
++ else
++ iwe.u.data.flags = IW_ENCODE_DISABLED;
++ iwe.u.data.length = 0;
++ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, bss->a.essid);
+
- __le16 action;
- __le16 control;
- };
++ /* Add EXTRA: Age to display seconds since last beacon/probe response
++ * for given network. */
++ iwe.cmd = IWEVCUSTOM;
++ p = custom;
++ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
++ " Last beacon: %dms ago",
++ jiffies_to_msecs(jiffies - last_scanned));
++ iwe.u.data.length = p - custom;
++ if (iwe.u.data.length)
++ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, custom);
++
++ /* Bit rate is not available in Lucent/Agere firmwares */
++ if (priv->firmware_type != FIRMWARE_TYPE_AGERE) {
++ char *current_val = current_ev + IW_EV_LCP_LEN;
++ int i;
++ int step;
-+struct cmd_ds_802_11_beacon_control {
-+ __le16 action;
-+ __le16 beacon_enable;
-+ __le16 beacon_period;
-+};
+- /* Add encryption capability */
+- iwe.cmd = SIOCGIWENCODE;
+- if (capabilities & 0x10)
+- iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
++ if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL)
++ step = 2;
+ else
+- iwe.u.data.flags = IW_ENCODE_DISABLED;
+- iwe.u.data.length = 0;
+- current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, atom->a.essid);
+-
+- /* Bit rate is not available in Lucent/Agere firmwares */
+- if (priv->firmware_type != FIRMWARE_TYPE_AGERE) {
+- char * current_val = current_ev + IW_EV_LCP_LEN;
+- int i;
+- int step;
+-
+- if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL)
+- step = 2;
+- else
+- step = 1;
+-
+- iwe.cmd = SIOCGIWRATE;
+- /* Those two flags are ignored... */
+- iwe.u.bitrate.fixed = iwe.u.bitrate.disabled = 0;
+- /* Max 10 values */
+- for (i = 0; i < 10; i += step) {
+- /* NULL terminated */
+- if (atom->p.rates[i] == 0x0)
+- break;
+- /* Bit rate given in 500 kb/s units (+ 0x80) */
+- iwe.u.bitrate.value = ((atom->p.rates[i] & 0x7f) * 500000);
+- current_val = iwe_stream_add_value(current_ev, current_val,
+- end_buf, &iwe,
+- IW_EV_PARAM_LEN);
+- }
+- /* Check if we added any event */
+- if ((current_val - current_ev) > IW_EV_LCP_LEN)
+- current_ev = current_val;
++ step = 1;
+
- struct cmd_ds_802_11_sleep_params {
-+ struct cmd_header hdr;
++ iwe.cmd = SIOCGIWRATE;
++ /* Those two flags are ignored... */
++ iwe.u.bitrate.fixed = iwe.u.bitrate.disabled = 0;
++ /* Max 10 values */
++ for (i = 0; i < 10; i += step) {
++ /* NULL terminated */
++ if (bss->p.rates[i] == 0x0)
++ break;
++ /* Bit rate given in 500 kb/s units (+ 0x80) */
++ iwe.u.bitrate.value = ((bss->p.rates[i] & 0x7f) * 500000);
++ current_val = iwe_stream_add_value(current_ev, current_val,
++ end_buf, &iwe,
++ IW_EV_PARAM_LEN);
+ }
+-
+- /* The other data in the scan result are not really
+- * interesting, so for now drop it - Jean II */
++ /* Check if we added any event */
++ if ((current_val - current_ev) > IW_EV_LCP_LEN)
++ current_ev = current_val;
+ }
+- return current_ev - buffer;
+
- /* ACT_GET/ACT_SET */
- __le16 action;
++ return current_ev;
+ }
-@@ -346,16 +374,18 @@ struct cmd_ds_802_11_sleep_params {
- __le16 stabletime;
+ /* Return results of a scan */
+@@ -4077,68 +4168,45 @@ static int orinoco_ioctl_getscan(struct net_device *dev,
+ char *extra)
+ {
+ struct orinoco_private *priv = netdev_priv(dev);
++ bss_element *bss;
+ int err = 0;
+ unsigned long flags;
++ char *current_ev = extra;
- /* control periodic calibration */
-- u8 calcontrol;
-+ uint8_t calcontrol;
+ if (orinoco_lock(priv, &flags) != 0)
+ return -EBUSY;
- /* control the use of external sleep clock */
-- u8 externalsleepclk;
-+ uint8_t externalsleepclk;
+- /* If no results yet, ask to try again later */
+- if (priv->scan_result == NULL) {
+- if (priv->scan_inprogress)
+- /* Important note : we don't want to block the caller
+- * until results are ready for various reasons.
+- * First, managing wait queues is complex and racy.
+- * Second, we grab some rtnetlink lock before comming
+- * here (in dev_ioctl()).
+- * Third, we generate an Wireless Event, so the
+- * caller can wait itself on that - Jean II */
+- err = -EAGAIN;
+- else
+- /* Client error, no scan results...
+- * The caller need to restart the scan. */
+- err = -ENODATA;
+- } else {
+- /* We have some results to push back to user space */
+-
+- /* Translate to WE format */
+- int ret = orinoco_translate_scan(dev, extra,
+- priv->scan_result,
+- priv->scan_len);
+-
+- if (ret < 0) {
+- err = ret;
+- kfree(priv->scan_result);
+- priv->scan_result = NULL;
+- } else {
+- srq->length = ret;
++ if (priv->scan_inprogress) {
++ /* Important note : we don't want to block the caller
++ * until results are ready for various reasons.
++ * First, managing wait queues is complex and racy.
++ * Second, we grab some rtnetlink lock before comming
++ * here (in dev_ioctl()).
++ * Third, we generate an Wireless Event, so the
++ * caller can wait itself on that - Jean II */
++ err = -EAGAIN;
++ goto out;
++ }
- /* reserved field, should be set to zero */
- __le16 reserved;
- };
+- /* Return flags */
+- srq->flags = (__u16) priv->scan_mode;
++ list_for_each_entry(bss, &priv->bss_list, list) {
++ /* Translate to WE format this entry */
++ current_ev = orinoco_translate_scan(dev, current_ev,
++ extra + srq->length,
++ &bss->bss,
++ bss->last_scanned);
- struct cmd_ds_802_11_inactivity_timeout {
-+ struct cmd_header hdr;
+- /* In any case, Scan results will be cleaned up in the
+- * reset function and when exiting the driver.
+- * The person triggering the scanning may never come to
+- * pick the results, so we need to do it in those places.
+- * Jean II */
+-
+-#ifdef SCAN_SINGLE_READ
+- /* If you enable this option, only one client (the first
+- * one) will be able to read the result (and only one
+- * time). If there is multiple concurent clients that
+- * want to read scan results, this behavior is not
+- * advisable - Jean II */
+- kfree(priv->scan_result);
+- priv->scan_result = NULL;
+-#endif /* SCAN_SINGLE_READ */
+- /* Here, if too much time has elapsed since last scan,
+- * we may want to clean up scan results... - Jean II */
++ /* Check if there is space for one more entry */
++ if ((extra + srq->length - current_ev) <= IW_EV_ADDR_LEN) {
++ /* Ask user space to try again with a bigger buffer */
++ err = -E2BIG;
++ goto out;
+ }
+-
+- /* Scan is no longer in progress */
+- priv->scan_inprogress = 0;
+ }
+-
+
- /* ACT_GET/ACT_SET */
- __le16 action;
++ srq->length = (current_ev - extra);
++ srq->flags = (__u16) priv->scan_mode;
++
++out:
+ orinoco_unlock(priv, &flags);
+ return err;
+ }
+diff --git a/drivers/net/wireless/orinoco.h b/drivers/net/wireless/orinoco.h
+index 4720fb2..c6b1858 100644
+--- a/drivers/net/wireless/orinoco.h
++++ b/drivers/net/wireless/orinoco.h
+@@ -36,6 +36,12 @@ typedef enum {
+ FIRMWARE_TYPE_SYMBOL
+ } fwtype_t;
-@@ -364,11 +394,13 @@ struct cmd_ds_802_11_inactivity_timeout {
- };
++typedef struct {
++ union hermes_scan_info bss;
++ unsigned long last_scanned;
++ struct list_head list;
++} bss_element;
++
+ struct orinoco_private {
+ void *card; /* Pointer to card dependent structure */
+ int (*hard_reset)(struct orinoco_private *);
+@@ -105,10 +111,12 @@ struct orinoco_private {
+ int promiscuous, mc_count;
- struct cmd_ds_802_11_rf_channel {
-+ struct cmd_header hdr;
+ /* Scanning support */
++ struct list_head bss_list;
++ struct list_head bss_free_list;
++ bss_element *bss_data;
+
- __le16 action;
-- __le16 currentchannel;
-- __le16 rftype;
-- __le16 reserved;
-- u8 channellist[32];
-+ __le16 channel;
-+ __le16 rftype; /* unused */
-+ __le16 reserved; /* unused */
-+ u8 channellist[32]; /* unused */
+ int scan_inprogress; /* Scan pending... */
+ u32 scan_mode; /* Type of scan done */
+- char * scan_result; /* Result of previous scan */
+- int scan_len; /* Lenght of result */
};
- struct cmd_ds_802_11_rssi {
-@@ -406,13 +438,29 @@ struct cmd_ds_802_11_rf_antenna {
- };
+ #ifdef ORINOCO_DEBUG
+diff --git a/drivers/net/wireless/p54common.c b/drivers/net/wireless/p54common.c
+index 1437db0..5cda49a 100644
+--- a/drivers/net/wireless/p54common.c
++++ b/drivers/net/wireless/p54common.c
+@@ -54,7 +54,7 @@ void p54_parse_firmware(struct ieee80211_hw *dev, const struct firmware *fw)
+ u32 code = le32_to_cpu(bootrec->code);
+ switch (code) {
+ case BR_CODE_COMPONENT_ID:
+- switch (be32_to_cpu(*bootrec->data)) {
++ switch (be32_to_cpu(*(__be32 *)bootrec->data)) {
+ case FW_FMAC:
+ printk(KERN_INFO "p54: FreeMAC firmware\n");
+ break;
+@@ -78,14 +78,14 @@ void p54_parse_firmware(struct ieee80211_hw *dev, const struct firmware *fw)
+ fw_version = (unsigned char*)bootrec->data;
+ break;
+ case BR_CODE_DESCR:
+- priv->rx_start = le32_to_cpu(bootrec->data[1]);
++ priv->rx_start = le32_to_cpu(((__le32 *)bootrec->data)[1]);
+ /* FIXME add sanity checking */
+- priv->rx_end = le32_to_cpu(bootrec->data[2]) - 0x3500;
++ priv->rx_end = le32_to_cpu(((__le32 *)bootrec->data)[2]) - 0x3500;
+ break;
+ case BR_CODE_EXPOSED_IF:
+ exp_if = (struct bootrec_exp_if *) bootrec->data;
+ for (i = 0; i < (len * sizeof(*exp_if) / 4); i++)
+- if (exp_if[i].if_id == 0x1a)
++ if (exp_if[i].if_id == cpu_to_le16(0x1a))
+ priv->fw_var = le16_to_cpu(exp_if[i].variant);
+ break;
+ case BR_CODE_DEPENDENT_IF:
+@@ -314,6 +314,7 @@ static void p54_rx_data(struct ieee80211_hw *dev, struct sk_buff *skb)
+ rx_status.phymode = MODE_IEEE80211G;
+ rx_status.antenna = hdr->antenna;
+ rx_status.mactime = le64_to_cpu(hdr->timestamp);
++ rx_status.flag |= RX_FLAG_TSFT;
- struct cmd_ds_802_11_monitor_mode {
-- u16 action;
-- u16 mode;
-+ __le16 action;
-+ __le16 mode;
- };
+ skb_pull(skb, sizeof(*hdr));
+ skb_trim(skb, le16_to_cpu(hdr->len));
+@@ -374,7 +375,7 @@ static void p54_rx_frame_sent(struct ieee80211_hw *dev, struct sk_buff *skb)
+ if ((entry_hdr->magic1 & cpu_to_le16(0x4000)) != 0)
+ pad = entry_data->align[0];
- struct cmd_ds_set_boot2_ver {
-- u16 action;
-- u16 version;
-+ struct cmd_header hdr;
-+
-+ __le16 action;
-+ __le16 version;
-+};
-+
-+struct cmd_ds_802_11_fw_wake_method {
-+ struct cmd_header hdr;
-+
-+ __le16 action;
-+ __le16 method;
-+};
-+
-+struct cmd_ds_802_11_sleep_period {
-+ struct cmd_header hdr;
-+
-+ __le16 action;
-+ __le16 period;
- };
+- if (!status.control.flags & IEEE80211_TXCTL_NO_ACK) {
++ if (!(status.control.flags & IEEE80211_TXCTL_NO_ACK)) {
+ if (!(payload->status & 0x01))
+ status.flags |= IEEE80211_TX_STATUS_ACK;
+ else
+@@ -853,7 +854,8 @@ static int p54_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
+ return ret;
+ }
- struct cmd_ds_802_11_ps_mode {
-@@ -437,6 +485,8 @@ struct PS_CMD_ConfirmSleep {
- };
+-static int p54_config_interface(struct ieee80211_hw *dev, int if_id,
++static int p54_config_interface(struct ieee80211_hw *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_if_conf *conf)
+ {
+ struct p54_common *priv = dev->priv;
+diff --git a/drivers/net/wireless/p54pci.c b/drivers/net/wireless/p54pci.c
+index 410b543..fa52772 100644
+--- a/drivers/net/wireless/p54pci.c
++++ b/drivers/net/wireless/p54pci.c
+@@ -48,10 +48,10 @@ static int p54p_upload_firmware(struct ieee80211_hw *dev)
+ const struct firmware *fw_entry = NULL;
+ __le32 reg;
+ int err;
+- u32 *data;
++ __le32 *data;
+ u32 remains, left, device_addr;
- struct cmd_ds_802_11_data_rate {
-+ struct cmd_header hdr;
-+
- __le16 action;
- __le16 reserved;
- u8 rates[MAX_RATES];
-@@ -488,6 +538,8 @@ struct cmd_ds_802_11_ad_hoc_join {
- } __attribute__ ((packed));
+- P54P_WRITE(int_enable, 0);
++ P54P_WRITE(int_enable, cpu_to_le32(0));
+ P54P_READ(int_enable);
+ udelay(10);
- struct cmd_ds_802_11_enable_rsn {
-+ struct cmd_header hdr;
-+
- __le16 action;
- __le16 enable;
- } __attribute__ ((packed));
-@@ -512,6 +564,13 @@ struct MrvlIEtype_keyParamSet {
- u8 key[32];
- };
+@@ -82,7 +82,7 @@ static int p54p_upload_firmware(struct ieee80211_hw *dev)
-+struct cmd_ds_host_sleep {
-+ struct cmd_header hdr;
-+ __le32 criteria;
-+ uint8_t gpio;
-+ uint8_t gap;
-+} __attribute__ ((packed));
-+
- struct cmd_ds_802_11_key_material {
- __le16 action;
- struct MrvlIEtype_keyParamSet keyParamSet[2];
-@@ -598,7 +657,21 @@ struct cmd_ds_fwt_access {
- u8 prec[ETH_ALEN];
- } __attribute__ ((packed));
+ p54_parse_firmware(dev, fw_entry);
-+
-+struct cmd_ds_mesh_config {
-+ struct cmd_header hdr;
-+
-+ __le16 action;
-+ __le16 channel;
-+ __le16 type;
-+ __le16 length;
-+ u8 data[128]; /* last position reserved */
-+} __attribute__ ((packed));
-+
-+
- struct cmd_ds_mesh_access {
-+ struct cmd_header hdr;
-+
- __le16 action;
- __le32 data[32]; /* last position reserved */
- } __attribute__ ((packed));
-@@ -615,14 +688,12 @@ struct cmd_ds_command {
+- data = (u32 *) fw_entry->data;
++ data = (__le32 *) fw_entry->data;
+ remains = fw_entry->size;
+ device_addr = ISL38XX_DEV_FIRMWARE_ADDR;
+ while (remains) {
+@@ -141,6 +141,7 @@ static irqreturn_t p54p_simple_interrupt(int irq, void *dev_id)
+ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ {
+ struct p54p_priv *priv = dev->priv;
++ struct p54p_ring_control *ring_control = priv->ring_control;
+ int err;
+ struct p54_control_hdr *hdr;
+ void *eeprom;
+@@ -164,8 +165,8 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ goto out;
+ }
- /* command Body */
- union {
-- struct cmd_ds_get_hw_spec hwspec;
- struct cmd_ds_802_11_ps_mode psmode;
- struct cmd_ds_802_11_scan scan;
- struct cmd_ds_802_11_scan_rsp scanresp;
- struct cmd_ds_mac_control macctrl;
- struct cmd_ds_802_11_associate associate;
- struct cmd_ds_802_11_deauthenticate deauth;
-- struct cmd_ds_802_11_set_wep wep;
- struct cmd_ds_802_11_ad_hoc_start ads;
- struct cmd_ds_802_11_reset reset;
- struct cmd_ds_802_11_ad_hoc_result result;
-@@ -634,17 +705,13 @@ struct cmd_ds_command {
- struct cmd_ds_802_11_rf_tx_power txp;
- struct cmd_ds_802_11_rf_antenna rant;
- struct cmd_ds_802_11_monitor_mode monitor;
-- struct cmd_ds_802_11_data_rate drate;
- struct cmd_ds_802_11_rate_adapt_rateset rateset;
- struct cmd_ds_mac_multicast_adr madr;
- struct cmd_ds_802_11_ad_hoc_join adj;
-- struct cmd_ds_802_11_radio_control radio;
-- struct cmd_ds_802_11_rf_channel rfchannel;
- struct cmd_ds_802_11_rssi rssi;
- struct cmd_ds_802_11_rssi_rsp rssirsp;
- struct cmd_ds_802_11_disassociate dassociate;
- struct cmd_ds_802_11_mac_address macadd;
-- struct cmd_ds_802_11_enable_rsn enbrsn;
- struct cmd_ds_802_11_key_material keymaterial;
- struct cmd_ds_mac_reg_access macreg;
- struct cmd_ds_bbp_reg_access bbpreg;
-@@ -654,8 +721,6 @@ struct cmd_ds_command {
- struct cmd_ds_802_11d_domain_info domaininfo;
- struct cmd_ds_802_11d_domain_info domaininforesp;
+- memset(priv->ring_control, 0, sizeof(*priv->ring_control));
+- P54P_WRITE(ring_control_base, priv->ring_control_dma);
++ memset(ring_control, 0, sizeof(*ring_control));
++ P54P_WRITE(ring_control_base, cpu_to_le32(priv->ring_control_dma));
+ P54P_READ(ring_control_base);
+ udelay(10);
-- struct cmd_ds_802_11_sleep_params sleep_params;
-- struct cmd_ds_802_11_inactivity_timeout inactivity_timeout;
- struct cmd_ds_802_11_tpc_cfg tpccfg;
- struct cmd_ds_802_11_pwr_cfg pwrcfg;
- struct cmd_ds_802_11_afc afc;
-@@ -664,10 +729,8 @@ struct cmd_ds_command {
- struct cmd_tx_rate_query txrate;
- struct cmd_ds_bt_access bt;
- struct cmd_ds_fwt_access fwt;
-- struct cmd_ds_mesh_access mesh;
-- struct cmd_ds_set_boot2_ver boot2_ver;
- struct cmd_ds_get_tsf gettsf;
-- struct cmd_ds_802_11_subscribe_event subscribe_event;
-+ struct cmd_ds_802_11_beacon_control bcn_ctrl;
- } params;
- } __attribute__ ((packed));
+@@ -194,14 +195,14 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ tx_mapping = pci_map_single(priv->pdev, (void *)hdr,
+ EEPROM_READBACK_LEN, PCI_DMA_TODEVICE);
-diff --git a/drivers/net/wireless/libertas/if_cs.c b/drivers/net/wireless/libertas/if_cs.c
-index ba4fc2b..4b5ab9a 100644
---- a/drivers/net/wireless/libertas/if_cs.c
-+++ b/drivers/net/wireless/libertas/if_cs.c
-@@ -57,7 +57,7 @@ MODULE_LICENSE("GPL");
+- priv->ring_control->rx_mgmt[0].host_addr = cpu_to_le32(rx_mapping);
+- priv->ring_control->rx_mgmt[0].len = cpu_to_le16(0x2010);
+- priv->ring_control->tx_data[0].host_addr = cpu_to_le32(tx_mapping);
+- priv->ring_control->tx_data[0].device_addr = hdr->req_id;
+- priv->ring_control->tx_data[0].len = cpu_to_le16(EEPROM_READBACK_LEN);
++ ring_control->rx_mgmt[0].host_addr = cpu_to_le32(rx_mapping);
++ ring_control->rx_mgmt[0].len = cpu_to_le16(0x2010);
++ ring_control->tx_data[0].host_addr = cpu_to_le32(tx_mapping);
++ ring_control->tx_data[0].device_addr = hdr->req_id;
++ ring_control->tx_data[0].len = cpu_to_le16(EEPROM_READBACK_LEN);
- struct if_cs_card {
- struct pcmcia_device *p_dev;
-- wlan_private *priv;
-+ struct lbs_private *priv;
- void __iomem *iobase;
- };
+- priv->ring_control->host_idx[2] = cpu_to_le32(1);
+- priv->ring_control->host_idx[1] = cpu_to_le32(1);
++ ring_control->host_idx[2] = cpu_to_le32(1);
++ ring_control->host_idx[1] = cpu_to_le32(1);
-@@ -243,7 +243,7 @@ static inline void if_cs_disable_ints(struct if_cs_card *card)
+ wmb();
+ mdelay(100);
+@@ -215,8 +216,8 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ pci_unmap_single(priv->pdev, rx_mapping,
+ 0x2010, PCI_DMA_FROMDEVICE);
- static irqreturn_t if_cs_interrupt(int irq, void *data)
+- alen = le16_to_cpu(priv->ring_control->rx_mgmt[0].len);
+- if (le32_to_cpu(priv->ring_control->device_idx[2]) != 1 ||
++ alen = le16_to_cpu(ring_control->rx_mgmt[0].len);
++ if (le32_to_cpu(ring_control->device_idx[2]) != 1 ||
+ alen < 0x10) {
+ printk(KERN_ERR "%s (prism54pci): Cannot read eeprom!\n",
+ pci_name(priv->pdev));
+@@ -228,7 +229,7 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+
+ out:
+ kfree(eeprom);
+- P54P_WRITE(int_enable, 0);
++ P54P_WRITE(int_enable, cpu_to_le32(0));
+ P54P_READ(int_enable);
+ udelay(10);
+ free_irq(priv->pdev->irq, priv);
+@@ -239,16 +240,17 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ static void p54p_refill_rx_ring(struct ieee80211_hw *dev)
{
-- struct if_cs_card *card = (struct if_cs_card *)data;
-+ struct if_cs_card *card = data;
- u16 int_cause;
+ struct p54p_priv *priv = dev->priv;
++ struct p54p_ring_control *ring_control = priv->ring_control;
+ u32 limit, host_idx, idx;
- lbs_deb_enter(LBS_DEB_CS);
-@@ -253,25 +253,20 @@ static irqreturn_t if_cs_interrupt(int irq, void *data)
- /* Not for us */
- return IRQ_NONE;
+- host_idx = le32_to_cpu(priv->ring_control->host_idx[0]);
++ host_idx = le32_to_cpu(ring_control->host_idx[0]);
+ limit = host_idx;
+- limit -= le32_to_cpu(priv->ring_control->device_idx[0]);
+- limit = ARRAY_SIZE(priv->ring_control->rx_data) - limit;
++ limit -= le32_to_cpu(ring_control->device_idx[0]);
++ limit = ARRAY_SIZE(ring_control->rx_data) - limit;
-- } else if(int_cause == 0xffff) {
-+ } else if (int_cause == 0xffff) {
- /* Read in junk, the card has probably been removed */
-- card->priv->adapter->surpriseremoved = 1;
-+ card->priv->surpriseremoved = 1;
+- idx = host_idx % ARRAY_SIZE(priv->ring_control->rx_data);
++ idx = host_idx % ARRAY_SIZE(ring_control->rx_data);
+ while (limit-- > 1) {
+- struct p54p_desc *desc = &priv->ring_control->rx_data[idx];
++ struct p54p_desc *desc = &ring_control->rx_data[idx];
- } else {
-- if(int_cause & IF_CS_H_IC_TX_OVER) {
-- card->priv->dnld_sent = DNLD_RES_RECEIVED;
-- if (!card->priv->adapter->cur_cmd)
-- wake_up_interruptible(&card->priv->waitq);
--
-- if (card->priv->adapter->connect_status == LIBERTAS_CONNECTED)
-- netif_wake_queue(card->priv->dev);
-- }
-+ if (int_cause & IF_CS_H_IC_TX_OVER)
-+ lbs_host_to_card_done(card->priv);
+ if (!desc->host_addr) {
+ struct sk_buff *skb;
+@@ -270,22 +272,23 @@ static void p54p_refill_rx_ring(struct ieee80211_hw *dev)
- /* clear interrupt */
- if_cs_write16(card, IF_CS_C_INT_CAUSE, int_cause & IF_CS_C_IC_MASK);
+ idx++;
+ host_idx++;
+- idx %= ARRAY_SIZE(priv->ring_control->rx_data);
++ idx %= ARRAY_SIZE(ring_control->rx_data);
}
--
-- libertas_interrupt(card->priv->dev);
-+ spin_lock(&card->priv->driver_lock);
-+ lbs_interrupt(card->priv);
-+ spin_unlock(&card->priv->driver_lock);
- return IRQ_HANDLED;
+ wmb();
+- priv->ring_control->host_idx[0] = cpu_to_le32(host_idx);
++ ring_control->host_idx[0] = cpu_to_le32(host_idx);
}
-@@ -286,7 +281,7 @@ static irqreturn_t if_cs_interrupt(int irq, void *data)
- /*
- * Called from if_cs_host_to_card to send a command to the hardware
- */
--static int if_cs_send_cmd(wlan_private *priv, u8 *buf, u16 nb)
-+static int if_cs_send_cmd(struct lbs_private *priv, u8 *buf, u16 nb)
- {
- struct if_cs_card *card = (struct if_cs_card *)priv->card;
- int ret = -1;
-@@ -331,7 +326,7 @@ done:
- /*
- * Called from if_cs_host_to_card to send a data to the hardware
- */
--static void if_cs_send_data(wlan_private *priv, u8 *buf, u16 nb)
-+static void if_cs_send_data(struct lbs_private *priv, u8 *buf, u16 nb)
- {
- struct if_cs_card *card = (struct if_cs_card *)priv->card;
-@@ -354,7 +349,7 @@ static void if_cs_send_data(wlan_private *priv, u8 *buf, u16 nb)
- /*
- * Get the command result out of the card.
- */
--static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
-+static int if_cs_receive_cmdres(struct lbs_private *priv, u8 *data, u32 *len)
+ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
{
- int ret = -1;
- u16 val;
-@@ -369,7 +364,7 @@ static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
- }
+ struct ieee80211_hw *dev = dev_id;
+ struct p54p_priv *priv = dev->priv;
++ struct p54p_ring_control *ring_control = priv->ring_control;
+ __le32 reg;
- *len = if_cs_read16(priv->card, IF_CS_C_CMD_LEN);
-- if ((*len == 0) || (*len > MRVDRV_SIZE_OF_CMD_BUFFER)) {
-+ if ((*len == 0) || (*len > LBS_CMD_BUFFER_SIZE)) {
- lbs_pr_err("card cmd buffer has invalid # of bytes (%d)\n", *len);
- goto out;
+ spin_lock(&priv->lock);
+ reg = P54P_READ(int_ident);
+- if (unlikely(reg == 0xFFFFFFFF)) {
++ if (unlikely(reg == cpu_to_le32(0xFFFFFFFF))) {
+ spin_unlock(&priv->lock);
+ return IRQ_HANDLED;
}
-@@ -379,6 +374,9 @@ static int if_cs_receive_cmdres(wlan_private *priv, u8* data, u32 *len)
- if (*len & 1)
- data[*len-1] = if_cs_read8(priv->card, IF_CS_C_CMD);
-
-+ /* This is a workaround for a firmware that reports too much
-+ * bytes */
-+ *len -= 8;
- ret = 0;
- out:
- lbs_deb_leave_args(LBS_DEB_CS, "ret %d, len %d", ret, *len);
-@@ -386,7 +384,7 @@ out:
- }
+@@ -298,12 +301,12 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
+ struct p54p_desc *desc;
+ u32 idx, i;
+ i = priv->tx_idx;
+- i %= ARRAY_SIZE(priv->ring_control->tx_data);
+- priv->tx_idx = idx = le32_to_cpu(priv->ring_control->device_idx[1]);
+- idx %= ARRAY_SIZE(priv->ring_control->tx_data);
++ i %= ARRAY_SIZE(ring_control->tx_data);
++ priv->tx_idx = idx = le32_to_cpu(ring_control->device_idx[1]);
++ idx %= ARRAY_SIZE(ring_control->tx_data);
+ while (i != idx) {
+- desc = &priv->ring_control->tx_data[i];
++ desc = &ring_control->tx_data[i];
+ if (priv->tx_buf[i]) {
+ kfree(priv->tx_buf[i]);
+ priv->tx_buf[i] = NULL;
+@@ -318,17 +321,17 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
+ desc->flags = 0;
--static struct sk_buff *if_cs_receive_data(wlan_private *priv)
-+static struct sk_buff *if_cs_receive_data(struct lbs_private *priv)
- {
- struct sk_buff *skb = NULL;
- u16 len;
-@@ -616,7 +614,10 @@ done:
- /********************************************************************/
+ i++;
+- i %= ARRAY_SIZE(priv->ring_control->tx_data);
++ i %= ARRAY_SIZE(ring_control->tx_data);
+ }
- /* Send commands or data packets to the card */
--static int if_cs_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
-+static int if_cs_host_to_card(struct lbs_private *priv,
-+ u8 type,
-+ u8 *buf,
-+ u16 nb)
- {
- int ret = -1;
+ i = priv->rx_idx;
+- i %= ARRAY_SIZE(priv->ring_control->rx_data);
+- priv->rx_idx = idx = le32_to_cpu(priv->ring_control->device_idx[0]);
+- idx %= ARRAY_SIZE(priv->ring_control->rx_data);
++ i %= ARRAY_SIZE(ring_control->rx_data);
++ priv->rx_idx = idx = le32_to_cpu(ring_control->device_idx[0]);
++ idx %= ARRAY_SIZE(ring_control->rx_data);
+ while (i != idx) {
+ u16 len;
+ struct sk_buff *skb;
+- desc = &priv->ring_control->rx_data[i];
++ desc = &ring_control->rx_data[i];
+ len = le16_to_cpu(desc->len);
+ skb = priv->rx_buf[i];
-@@ -641,18 +642,16 @@ static int if_cs_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
- }
+@@ -347,7 +350,7 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
+ }
+ i++;
+- i %= ARRAY_SIZE(priv->ring_control->rx_data);
++ i %= ARRAY_SIZE(ring_control->rx_data);
+ }
--static int if_cs_get_int_status(wlan_private *priv, u8 *ireg)
-+static int if_cs_get_int_status(struct lbs_private *priv, u8 *ireg)
+ p54p_refill_rx_ring(dev);
+@@ -366,6 +369,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
+ size_t len, int free_on_tx)
{
- struct if_cs_card *card = (struct if_cs_card *)priv->card;
-- //wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- u16 int_cause;
-- u8 *cmdbuf;
- *ireg = 0;
+ struct p54p_priv *priv = dev->priv;
++ struct p54p_ring_control *ring_control = priv->ring_control;
+ unsigned long flags;
+ struct p54p_desc *desc;
+ dma_addr_t mapping;
+@@ -373,19 +377,19 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
- lbs_deb_enter(LBS_DEB_CS);
+ spin_lock_irqsave(&priv->lock, flags);
-- if (priv->adapter->surpriseremoved)
-+ if (priv->surpriseremoved)
- goto out;
+- device_idx = le32_to_cpu(priv->ring_control->device_idx[1]);
+- idx = le32_to_cpu(priv->ring_control->host_idx[1]);
+- i = idx % ARRAY_SIZE(priv->ring_control->tx_data);
++ device_idx = le32_to_cpu(ring_control->device_idx[1]);
++ idx = le32_to_cpu(ring_control->host_idx[1]);
++ i = idx % ARRAY_SIZE(ring_control->tx_data);
- int_cause = if_cs_read16(card, IF_CS_C_INT_CAUSE) & IF_CS_C_IC_MASK;
-@@ -668,7 +667,7 @@ sbi_get_int_status_exit:
- /* is there a data packet for us? */
- if (*ireg & IF_CS_C_S_RX_UPLD_RDY) {
- struct sk_buff *skb = if_cs_receive_data(priv);
-- libertas_process_rxed_packet(priv, skb);
-+ lbs_process_rxed_packet(priv, skb);
- *ireg &= ~IF_CS_C_S_RX_UPLD_RDY;
- }
+ mapping = pci_map_single(priv->pdev, data, len, PCI_DMA_TODEVICE);
+- desc = &priv->ring_control->tx_data[i];
++ desc = &ring_control->tx_data[i];
+ desc->host_addr = cpu_to_le32(mapping);
+ desc->device_addr = data->req_id;
+ desc->len = cpu_to_le16(len);
+ desc->flags = 0;
-@@ -678,31 +677,24 @@ sbi_get_int_status_exit:
+ wmb();
+- priv->ring_control->host_idx[1] = cpu_to_le32(idx + 1);
++ ring_control->host_idx[1] = cpu_to_le32(idx + 1);
- /* Card has a command result for us */
- if (*ireg & IF_CS_C_S_CMD_UPLD_RDY) {
-- spin_lock(&priv->adapter->driver_lock);
-- if (!priv->adapter->cur_cmd) {
-- cmdbuf = priv->upld_buf;
-- priv->adapter->hisregcpy &= ~IF_CS_C_S_RX_UPLD_RDY;
-- } else {
-- cmdbuf = priv->adapter->cur_cmd->bufvirtualaddr;
-- }
--
-- ret = if_cs_receive_cmdres(priv, cmdbuf, &priv->upld_len);
-- spin_unlock(&priv->adapter->driver_lock);
-+ spin_lock(&priv->driver_lock);
-+ ret = if_cs_receive_cmdres(priv, priv->upld_buf, &priv->upld_len);
-+ spin_unlock(&priv->driver_lock);
- if (ret < 0)
- lbs_pr_err("could not receive cmd from card\n");
- }
+ if (free_on_tx)
+ priv->tx_buf[i] = data;
+@@ -397,7 +401,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
- out:
-- lbs_deb_leave_args(LBS_DEB_CS, "ret %d, ireg 0x%x, hisregcpy 0x%x", ret, *ireg, priv->adapter->hisregcpy);
-+ lbs_deb_leave_args(LBS_DEB_CS, "ret %d, ireg 0x%x, hisregcpy 0x%x", ret, *ireg, priv->hisregcpy);
- return ret;
+ /* FIXME: unlikely to happen because the device usually runs out of
+ memory before we fill the ring up, but we can make it impossible */
+- if (idx - device_idx > ARRAY_SIZE(priv->ring_control->tx_data) - 2)
++ if (idx - device_idx > ARRAY_SIZE(ring_control->tx_data) - 2)
+ printk(KERN_INFO "%s: tx overflow.\n", wiphy_name(dev->wiphy));
}
+@@ -421,7 +425,7 @@ static int p54p_open(struct ieee80211_hw *dev)
--static int if_cs_read_event_cause(wlan_private *priv)
-+static int if_cs_read_event_cause(struct lbs_private *priv)
- {
- lbs_deb_enter(LBS_DEB_CS);
-
-- priv->adapter->eventcause = (if_cs_read16(priv->card, IF_CS_C_STATUS) & IF_CS_C_S_STATUS_MASK) >> 5;
-+ priv->eventcause = (if_cs_read16(priv->card, IF_CS_C_STATUS) & IF_CS_C_S_STATUS_MASK) >> 5;
- if_cs_write16(priv->card, IF_CS_H_INT_CAUSE, IF_CS_H_IC_HOST_EVENT);
+ p54p_upload_firmware(dev);
- return 0;
-@@ -746,7 +738,7 @@ static void if_cs_release(struct pcmcia_device *p_dev)
- static int if_cs_probe(struct pcmcia_device *p_dev)
+- P54P_WRITE(ring_control_base, priv->ring_control_dma);
++ P54P_WRITE(ring_control_base, cpu_to_le32(priv->ring_control_dma));
+ P54P_READ(ring_control_base);
+ wmb();
+ udelay(10);
+@@ -457,10 +461,11 @@ static int p54p_open(struct ieee80211_hw *dev)
+ static void p54p_stop(struct ieee80211_hw *dev)
{
- int ret = -ENOMEM;
-- wlan_private *priv;
-+ struct lbs_private *priv;
- struct if_cs_card *card;
- /* CIS parsing */
- tuple_t tuple;
-@@ -856,7 +848,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
- goto out2;
+ struct p54p_priv *priv = dev->priv;
++ struct p54p_ring_control *ring_control = priv->ring_control;
+ unsigned int i;
+ struct p54p_desc *desc;
- /* Make this card known to the libertas driver */
-- priv = libertas_add_card(card, &p_dev->dev);
-+ priv = lbs_add_card(card, &p_dev->dev);
- if (!priv) {
- ret = -ENOMEM;
- goto out2;
-@@ -869,7 +861,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
- priv->hw_get_int_status = if_cs_get_int_status;
- priv->hw_read_event_cause = if_cs_read_event_cause;
+- P54P_WRITE(int_enable, 0);
++ P54P_WRITE(int_enable, cpu_to_le32(0));
+ P54P_READ(int_enable);
+ udelay(10);
-- priv->adapter->fw_ready = 1;
-+ priv->fw_ready = 1;
+@@ -469,7 +474,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
+ P54P_WRITE(dev_int, cpu_to_le32(ISL38XX_DEV_INT_RESET));
- /* Now actually get the IRQ */
- ret = request_irq(p_dev->irq.AssignedIRQ, if_cs_interrupt,
-@@ -885,7 +877,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
- if_cs_enable_ints(card);
+ for (i = 0; i < ARRAY_SIZE(priv->rx_buf); i++) {
+- desc = &priv->ring_control->rx_data[i];
++ desc = &ring_control->rx_data[i];
+ if (desc->host_addr)
+ pci_unmap_single(priv->pdev, le32_to_cpu(desc->host_addr),
+ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
+@@ -478,7 +483,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
+ }
- /* And finally bring the card up */
-- if (libertas_start_card(priv) != 0) {
-+ if (lbs_start_card(priv) != 0) {
- lbs_pr_err("could not activate card\n");
- goto out3;
+ for (i = 0; i < ARRAY_SIZE(priv->tx_buf); i++) {
+- desc = &priv->ring_control->tx_data[i];
++ desc = &ring_control->tx_data[i];
+ if (desc->host_addr)
+ pci_unmap_single(priv->pdev, le32_to_cpu(desc->host_addr),
+ le16_to_cpu(desc->len), PCI_DMA_TODEVICE);
+@@ -487,7 +492,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
+ priv->tx_buf[i] = NULL;
}
-@@ -894,7 +886,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
- goto out;
- out3:
-- libertas_remove_card(priv);
-+ lbs_remove_card(priv);
- out2:
- ioport_unmap(card->iobase);
- out1:
-@@ -917,8 +909,8 @@ static void if_cs_detach(struct pcmcia_device *p_dev)
+- memset(priv->ring_control, 0, sizeof(*priv->ring_control));
++	memset(ring_control, 0, sizeof(*ring_control));
+ }
- lbs_deb_enter(LBS_DEB_CS);
+ static int __devinit p54p_probe(struct pci_dev *pdev,
+diff --git a/drivers/net/wireless/p54pci.h b/drivers/net/wireless/p54pci.h
+index 52feb59..5bedd7a 100644
+--- a/drivers/net/wireless/p54pci.h
++++ b/drivers/net/wireless/p54pci.h
+@@ -85,8 +85,8 @@ struct p54p_ring_control {
+ struct p54p_desc tx_mgmt[4];
+ } __attribute__ ((packed));
-- libertas_stop_card(card->priv);
-- libertas_remove_card(card->priv);
-+ lbs_stop_card(card->priv);
-+ lbs_remove_card(card->priv);
- if_cs_disable_ints(card);
- if_cs_release(p_dev);
- kfree(card);
-@@ -939,7 +931,7 @@ static struct pcmcia_device_id if_cs_ids[] = {
- MODULE_DEVICE_TABLE(pcmcia, if_cs_ids);
+-#define P54P_READ(r) __raw_readl(&priv->map->r)
+-#define P54P_WRITE(r, val) __raw_writel((__force u32)(val), &priv->map->r)
++#define P54P_READ(r) (__force __le32)__raw_readl(&priv->map->r)
++#define P54P_WRITE(r, val) __raw_writel((__force u32)(__le32)(val), &priv->map->r)
+ struct p54p_priv {
+ struct p54_common common;
+diff --git a/drivers/net/wireless/prism54/isl_38xx.h b/drivers/net/wireless/prism54/isl_38xx.h
+index 3fadcb6..19c33d3 100644
+--- a/drivers/net/wireless/prism54/isl_38xx.h
++++ b/drivers/net/wireless/prism54/isl_38xx.h
+@@ -138,14 +138,14 @@ isl38xx_w32_flush(void __iomem *base, u32 val, unsigned long offset)
+ #define MAX_FRAGMENT_SIZE_RX 1600
--static struct pcmcia_driver libertas_driver = {
-+static struct pcmcia_driver lbs_driver = {
- .owner = THIS_MODULE,
- .drv = {
- .name = DRV_NAME,
-@@ -955,7 +947,7 @@ static int __init if_cs_init(void)
- int ret;
+ typedef struct {
+- u32 address; /* physical address on host */
+- u16 size; /* packet size */
+- u16 flags; /* set of bit-wise flags */
++ __le32 address; /* physical address on host */
++ __le16 size; /* packet size */
++ __le16 flags; /* set of bit-wise flags */
+ } isl38xx_fragment;
- lbs_deb_enter(LBS_DEB_CS);
-- ret = pcmcia_register_driver(&libertas_driver);
-+ ret = pcmcia_register_driver(&lbs_driver);
- lbs_deb_leave(LBS_DEB_CS);
- return ret;
- }
-@@ -964,7 +956,7 @@ static int __init if_cs_init(void)
- static void __exit if_cs_exit(void)
- {
- lbs_deb_enter(LBS_DEB_CS);
-- pcmcia_unregister_driver(&libertas_driver);
-+ pcmcia_unregister_driver(&lbs_driver);
- lbs_deb_leave(LBS_DEB_CS);
- }
+ struct isl38xx_cb {
+- u32 driver_curr_frag[ISL38XX_CB_QCOUNT];
+- u32 device_curr_frag[ISL38XX_CB_QCOUNT];
++ __le32 driver_curr_frag[ISL38XX_CB_QCOUNT];
++ __le32 device_curr_frag[ISL38XX_CB_QCOUNT];
+ isl38xx_fragment rx_data_low[ISL38XX_CB_RX_QSIZE];
+ isl38xx_fragment tx_data_low[ISL38XX_CB_TX_QSIZE];
+ isl38xx_fragment rx_data_high[ISL38XX_CB_RX_QSIZE];
+diff --git a/drivers/net/wireless/prism54/isl_ioctl.c b/drivers/net/wireless/prism54/isl_ioctl.c
+index 6d80ca4..1b595a6 100644
+--- a/drivers/net/wireless/prism54/isl_ioctl.c
++++ b/drivers/net/wireless/prism54/isl_ioctl.c
+@@ -165,8 +165,7 @@ prism54_update_stats(struct work_struct *work)
+ struct obj_bss bss, *bss2;
+ union oid_res_t r;
-diff --git a/drivers/net/wireless/libertas/if_sdio.c b/drivers/net/wireless/libertas/if_sdio.c
-index 4f1efb1..eed7320 100644
---- a/drivers/net/wireless/libertas/if_sdio.c
-+++ b/drivers/net/wireless/libertas/if_sdio.c
-@@ -19,7 +19,7 @@
- * current block size.
- *
- * As SDIO is still new to the kernel, it is unfortunately common with
-- * bugs in the host controllers related to that. One such bug is that
-+ * bugs in the host controllers related to that. One such bug is that
- * controllers cannot do transfers that aren't a multiple of 4 bytes.
- * If you don't have time to fix the host controller driver, you can
- * work around the problem by modifying if_sdio_host_to_card() and
-@@ -40,11 +40,11 @@
- #include "dev.h"
- #include "if_sdio.h"
+- if (down_interruptible(&priv->stats_sem))
+- return;
++ down(&priv->stats_sem);
--static char *libertas_helper_name = NULL;
--module_param_named(helper_name, libertas_helper_name, charp, 0644);
-+static char *lbs_helper_name = NULL;
-+module_param_named(helper_name, lbs_helper_name, charp, 0644);
+ /* Noise floor.
+ * I'm not sure if the unit is dBm.
+@@ -1118,7 +1117,7 @@ prism54_set_encode(struct net_device *ndev, struct iw_request_info *info,
+ mgt_set_request(priv, DOT11_OID_DEFKEYID, 0,
+ &index);
+ } else {
+- if (!dwrq->flags & IW_ENCODE_MODE) {
++ if (!(dwrq->flags & IW_ENCODE_MODE)) {
+ /* we cannot do anything. Complain. */
+ return -EINVAL;
+ }
+@@ -1793,8 +1792,7 @@ prism54_clear_mac(struct islpci_acl *acl)
+ struct list_head *ptr, *next;
+ struct mac_entry *entry;
--static char *libertas_fw_name = NULL;
--module_param_named(fw_name, libertas_fw_name, charp, 0644);
-+static char *lbs_fw_name = NULL;
-+module_param_named(fw_name, lbs_fw_name, charp, 0644);
+- if (down_interruptible(&acl->sem))
+- return;
++ down(&acl->sem);
- static const struct sdio_device_id if_sdio_ids[] = {
- { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_LIBERTAS) },
-@@ -82,7 +82,7 @@ struct if_sdio_packet {
+ if (acl->size == 0) {
+ up(&acl->sem);
+@@ -2116,8 +2114,7 @@ prism54_wpa_bss_ie_add(islpci_private *priv, u8 *bssid,
+ if (wpa_ie_len > MAX_WPA_IE_LEN)
+ wpa_ie_len = MAX_WPA_IE_LEN;
- struct if_sdio_card {
- struct sdio_func *func;
-- wlan_private *priv;
-+ struct lbs_private *priv;
+- if (down_interruptible(&priv->wpa_sem))
+- return;
++ down(&priv->wpa_sem);
- int model;
- unsigned long ioport;
-@@ -134,32 +134,26 @@ static int if_sdio_handle_cmd(struct if_sdio_card *card,
+ /* try to use existing entry */
+ list_for_each(ptr, &priv->bss_wpa_list) {
+@@ -2178,8 +2175,7 @@ prism54_wpa_bss_ie_get(islpci_private *priv, u8 *bssid, u8 *wpa_ie)
+ struct islpci_bss_wpa_ie *bss = NULL;
+ size_t len = 0;
- lbs_deb_enter(LBS_DEB_SDIO);
+- if (down_interruptible(&priv->wpa_sem))
+- return 0;
++ down(&priv->wpa_sem);
-- spin_lock_irqsave(&card->priv->adapter->driver_lock, flags);
-+ spin_lock_irqsave(&card->priv->driver_lock, flags);
+ list_for_each(ptr, &priv->bss_wpa_list) {
+ bss = list_entry(ptr, struct islpci_bss_wpa_ie, list);
+@@ -2610,7 +2606,7 @@ prism2_ioctl_set_encryption(struct net_device *dev,
+ mgt_set_request(priv, DOT11_OID_DEFKEYID, 0,
+ &index);
+ } else {
+- if (!param->u.crypt.flags & IW_ENCODE_MODE) {
++ if (!(param->u.crypt.flags & IW_ENCODE_MODE)) {
+ /* we cannot do anything. Complain. */
+ return -EINVAL;
+ }
+diff --git a/drivers/net/wireless/prism54/islpci_dev.c b/drivers/net/wireless/prism54/islpci_dev.c
+index 219dd65..dbb538c 100644
+--- a/drivers/net/wireless/prism54/islpci_dev.c
++++ b/drivers/net/wireless/prism54/islpci_dev.c
+@@ -861,7 +861,7 @@ islpci_setup(struct pci_dev *pdev)
+ init_waitqueue_head(&priv->reset_done);
-- if (!card->priv->adapter->cur_cmd) {
-- lbs_deb_sdio("discarding spurious response\n");
-- ret = 0;
-- goto out;
-- }
--
-- if (size > MRVDRV_SIZE_OF_CMD_BUFFER) {
-+ if (size > LBS_CMD_BUFFER_SIZE) {
- lbs_deb_sdio("response packet too large (%d bytes)\n",
- (int)size);
- ret = -E2BIG;
- goto out;
+ /* init the queue read locks, process wait counter */
+- sema_init(&priv->mgmt_sem, 1);
++ mutex_init(&priv->mgmt_lock);
+ priv->mgmt_received = NULL;
+ init_waitqueue_head(&priv->mgmt_wqueue);
+ sema_init(&priv->stats_sem, 1);
+diff --git a/drivers/net/wireless/prism54/islpci_dev.h b/drivers/net/wireless/prism54/islpci_dev.h
+index 736666d..4e0182c 100644
+--- a/drivers/net/wireless/prism54/islpci_dev.h
++++ b/drivers/net/wireless/prism54/islpci_dev.h
+@@ -26,6 +26,7 @@
+ #include <linux/wireless.h>
+ #include <net/iw_handler.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+
+ #include "isl_38xx.h"
+ #include "isl_oid.h"
+@@ -164,7 +165,7 @@ typedef struct {
+ wait_queue_head_t reset_done;
+
+ /* used by islpci_mgt_transaction */
+- struct semaphore mgmt_sem; /* serialize access to mailbox and wqueue */
++ struct mutex mgmt_lock; /* serialize access to mailbox and wqueue */
+ struct islpci_mgmtframe *mgmt_received; /* mbox for incoming frame */
+ wait_queue_head_t mgmt_wqueue; /* waitqueue for mbox */
+
+diff --git a/drivers/net/wireless/prism54/islpci_eth.c b/drivers/net/wireless/prism54/islpci_eth.c
+index f49eb06..762e85b 100644
+--- a/drivers/net/wireless/prism54/islpci_eth.c
++++ b/drivers/net/wireless/prism54/islpci_eth.c
+@@ -471,7 +471,7 @@ islpci_eth_receive(islpci_private *priv)
+ wmb();
+
+ /* increment the driver read pointer */
+- add_le32p((u32 *) &control_block->
++ add_le32p(&control_block->
+ driver_curr_frag[ISL38XX_CB_RX_DATA_LQ], 1);
}
-- memcpy(card->priv->adapter->cur_cmd->bufvirtualaddr, buffer, size);
-+ memcpy(card->priv->upld_buf, buffer, size);
- card->priv->upld_len = size;
+diff --git a/drivers/net/wireless/prism54/islpci_eth.h b/drivers/net/wireless/prism54/islpci_eth.h
+index 5bf820d..61454d3 100644
+--- a/drivers/net/wireless/prism54/islpci_eth.h
++++ b/drivers/net/wireless/prism54/islpci_eth.h
+@@ -23,15 +23,15 @@
+ #include "islpci_dev.h"
- card->int_cause |= MRVDRV_CMD_UPLD_RDY;
+ struct rfmon_header {
+- u16 unk0; /* = 0x0000 */
+- u16 length; /* = 0x1400 */
+- u32 clock; /* 1MHz clock */
++ __le16 unk0; /* = 0x0000 */
++ __le16 length; /* = 0x1400 */
++ __le32 clock; /* 1MHz clock */
+ u8 flags;
+ u8 unk1;
+ u8 rate;
+ u8 unk2;
+- u16 freq;
+- u16 unk3;
++ __le16 freq;
++ __le16 unk3;
+ u8 rssi;
+ u8 padding[3];
+ } __attribute__ ((packed));
+@@ -47,20 +47,20 @@ struct rx_annex_header {
+ #define P80211CAPTURE_VERSION 0x80211001
-- libertas_interrupt(card->priv->dev);
-+ lbs_interrupt(card->priv);
+ struct avs_80211_1_header {
+- uint32_t version;
+- uint32_t length;
+- uint64_t mactime;
+- uint64_t hosttime;
+- uint32_t phytype;
+- uint32_t channel;
+- uint32_t datarate;
+- uint32_t antenna;
+- uint32_t priority;
+- uint32_t ssi_type;
+- int32_t ssi_signal;
+- int32_t ssi_noise;
+- uint32_t preamble;
+- uint32_t encoding;
++ __be32 version;
++ __be32 length;
++ __be64 mactime;
++ __be64 hosttime;
++ __be32 phytype;
++ __be32 channel;
++ __be32 datarate;
++ __be32 antenna;
++ __be32 priority;
++ __be32 ssi_type;
++ __be32 ssi_signal;
++ __be32 ssi_noise;
++ __be32 preamble;
++ __be32 encoding;
+ };
- ret = 0;
+ void islpci_eth_cleanup_transmit(islpci_private *, isl38xx_control_block *);
+diff --git a/drivers/net/wireless/prism54/islpci_mgt.c b/drivers/net/wireless/prism54/islpci_mgt.c
+index 2246f79..f7c677e 100644
+--- a/drivers/net/wireless/prism54/islpci_mgt.c
++++ b/drivers/net/wireless/prism54/islpci_mgt.c
+@@ -460,7 +460,7 @@ islpci_mgt_transaction(struct net_device *ndev,
- out:
-- spin_unlock_irqrestore(&card->priv->adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&card->priv->driver_lock, flags);
+ *recvframe = NULL;
- lbs_deb_leave_args(LBS_DEB_SDIO, "ret %d", ret);
+- if (down_interruptible(&priv->mgmt_sem))
++ if (mutex_lock_interruptible(&priv->mgmt_lock))
+ return -ERESTARTSYS;
-@@ -194,7 +188,7 @@ static int if_sdio_handle_data(struct if_sdio_card *card,
+ prepare_to_wait(&priv->mgmt_wqueue, &wait, TASK_UNINTERRUPTIBLE);
+@@ -504,7 +504,7 @@ islpci_mgt_transaction(struct net_device *ndev,
+ /* TODO: we should reset the device here */
+ out:
+ finish_wait(&priv->mgmt_wqueue, &wait);
+- up(&priv->mgmt_sem);
++ mutex_unlock(&priv->mgmt_lock);
+ return err;
+ }
- memcpy(data, buffer, size);
+diff --git a/drivers/net/wireless/prism54/islpci_mgt.h b/drivers/net/wireless/prism54/islpci_mgt.h
+index fc53b58..f91a88f 100644
+--- a/drivers/net/wireless/prism54/islpci_mgt.h
++++ b/drivers/net/wireless/prism54/islpci_mgt.h
+@@ -86,7 +86,7 @@ extern int pc_debug;
+ #define PIMFOR_FLAG_LITTLE_ENDIAN 0x02
-- libertas_process_rxed_packet(card->priv, skb);
-+ lbs_process_rxed_packet(card->priv, skb);
+ static inline void
+-add_le32p(u32 * le_number, u32 add)
++add_le32p(__le32 * le_number, u32 add)
+ {
+ *le_number = cpu_to_le32(le32_to_cpup(le_number) + add);
+ }
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index f87fe10..f3858ee 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -44,6 +44,7 @@
+ #include <linux/ioport.h>
+ #include <linux/skbuff.h>
+ #include <linux/ethtool.h>
++#include <linux/ieee80211.h>
- ret = 0;
+ #include <pcmcia/cs_types.h>
+ #include <pcmcia/cs.h>
+@@ -997,13 +998,13 @@ static int ray_hw_xmit(unsigned char* data, int len, struct net_device* dev,
+ static int translate_frame(ray_dev_t *local, struct tx_msg __iomem *ptx, unsigned char *data,
+ int len)
+ {
+- unsigned short int proto = ((struct ethhdr *)data)->h_proto;
++ __be16 proto = ((struct ethhdr *)data)->h_proto;
+ if (ntohs(proto) >= 1536) { /* DIX II ethernet frame */
+ DEBUG(3,"ray_cs translate_frame DIX II\n");
+ /* Copy LLC header to card buffer */
+ memcpy_toio(&ptx->var, eth2_llc, sizeof(eth2_llc));
+ memcpy_toio( ((void __iomem *)&ptx->var) + sizeof(eth2_llc), (UCHAR *)&proto, 2);
+- if ((proto == 0xf380) || (proto == 0x3781)) {
++ if (proto == htons(ETH_P_AARP) || proto == htons(ETH_P_IPX)) {
+ /* This is the selective translation table, only 2 entries */
+ writeb(0xf8, &((struct snaphdr_t __iomem *)ptx->var)->org[3]);
+ }
+@@ -1014,7 +1015,7 @@ static int translate_frame(ray_dev_t *local, struct tx_msg __iomem *ptx, unsigne
+ }
+ else { /* already 802 type, and proto is length */
+ DEBUG(3,"ray_cs translate_frame 802\n");
+- if (proto == 0xffff) { /* evil netware IPX 802.3 without LLC */
++ if (proto == htons(0xffff)) { /* evil netware IPX 802.3 without LLC */
+ DEBUG(3,"ray_cs translate_frame evil IPX\n");
+ memcpy_toio(&ptx->var, data + ETH_HLEN, len - ETH_HLEN);
+ return 0 - ETH_HLEN;
+@@ -1780,19 +1781,19 @@ static struct net_device_stats *ray_get_stats(struct net_device *dev)
+ }
+ if (readb(&p->mrx_overflow_for_host))
+ {
+- local->stats.rx_over_errors += ntohs(readb(&p->mrx_overflow));
++ local->stats.rx_over_errors += swab16(readw(&p->mrx_overflow));
+ writeb(0,&p->mrx_overflow);
+ writeb(0,&p->mrx_overflow_for_host);
+ }
+ if (readb(&p->mrx_checksum_error_for_host))
+ {
+- local->stats.rx_crc_errors += ntohs(readb(&p->mrx_checksum_error));
++ local->stats.rx_crc_errors += swab16(readw(&p->mrx_checksum_error));
+ writeb(0,&p->mrx_checksum_error);
+ writeb(0,&p->mrx_checksum_error_for_host);
+ }
+ if (readb(&p->rx_hec_error_for_host))
+ {
+- local->stats.rx_frame_errors += ntohs(readb(&p->rx_hec_error));
++ local->stats.rx_frame_errors += swab16(readw(&p->rx_hec_error));
+ writeb(0,&p->rx_hec_error);
+ writeb(0,&p->rx_hec_error_for_host);
+ }
+@@ -2316,32 +2317,17 @@ static void rx_data(struct net_device *dev, struct rcs __iomem *prcs, unsigned i
+ static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
+ {
+ snaphdr_t *psnap = (snaphdr_t *)(skb->data + RX_MAC_HEADER_LENGTH);
+- struct mac_header *pmac = (struct mac_header *)skb->data;
+- unsigned short type = *(unsigned short *)psnap->ethertype;
+- unsigned int xsap = *(unsigned int *)psnap & 0x00ffffff;
+- unsigned int org = (*(unsigned int *)psnap->org) & 0x00ffffff;
++ struct ieee80211_hdr *pmac = (struct ieee80211_hdr *)skb->data;
++ __be16 type = *(__be16 *)psnap->ethertype;
+ int delta;
+ struct ethhdr *peth;
+ UCHAR srcaddr[ADDRLEN];
+ UCHAR destaddr[ADDRLEN];
++ static UCHAR org_bridge[3] = {0, 0, 0xf8};
++ static UCHAR org_1042[3] = {0, 0, 0};
-@@ -231,14 +225,14 @@ static int if_sdio_handle_event(struct if_sdio_card *card,
- event <<= SBI_EVENT_CAUSE_SHIFT;
- }
+- if (pmac->frame_ctl_2 & FC2_FROM_DS) {
+- if (pmac->frame_ctl_2 & FC2_TO_DS) { /* AP to AP */
+- memcpy(destaddr, pmac->addr_3, ADDRLEN);
+- memcpy(srcaddr, ((unsigned char *)pmac->addr_3) + ADDRLEN, ADDRLEN);
+- } else { /* AP to terminal */
+- memcpy(destaddr, pmac->addr_1, ADDRLEN);
+- memcpy(srcaddr, pmac->addr_3, ADDRLEN);
+- }
+- } else { /* Terminal to AP */
+- if (pmac->frame_ctl_2 & FC2_TO_DS) {
+- memcpy(destaddr, pmac->addr_3, ADDRLEN);
+- memcpy(srcaddr, pmac->addr_2, ADDRLEN);
+- } else { /* Adhoc */
+- memcpy(destaddr, pmac->addr_1, ADDRLEN);
+- memcpy(srcaddr, pmac->addr_2, ADDRLEN);
+- }
+- }
++ memcpy(destaddr, ieee80211_get_DA(pmac), ADDRLEN);
++ memcpy(srcaddr, ieee80211_get_SA(pmac), ADDRLEN);
-- spin_lock_irqsave(&card->priv->adapter->driver_lock, flags);
-+ spin_lock_irqsave(&card->priv->driver_lock, flags);
+ #ifdef PCMCIA_DEBUG
+ if (pc_debug > 3) {
+@@ -2349,33 +2335,34 @@ static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
+ printk(KERN_DEBUG "skb->data before untranslate");
+ for (i=0;i<64;i++)
+ printk("%02x ",skb->data[i]);
+- printk("\n" KERN_DEBUG "type = %08x, xsap = %08x, org = %08x\n",
+- type,xsap,org);
++ printk("\n" KERN_DEBUG "type = %08x, xsap = %02x%02x%02x, org = %02x%02x%02x\n",
++ ntohs(type),
++ psnap->dsap, psnap->ssap, psnap->ctrl,
++ psnap->org[0], psnap->org[1], psnap->org[2]);
+ printk(KERN_DEBUG "untranslate skb->data = %p\n",skb->data);
+ }
+ #endif
- card->event = event;
- card->int_cause |= MRVDRV_CARDEVENT;
+- if ( xsap != SNAP_ID) {
++ if (psnap->dsap != 0xaa || psnap->ssap != 0xaa || psnap->ctrl != 3) {
+ /* not a snap type so leave it alone */
+- DEBUG(3,"ray_cs untranslate NOT SNAP %x\n", *(unsigned int *)psnap & 0x00ffffff);
++ DEBUG(3,"ray_cs untranslate NOT SNAP %02x %02x %02x\n",
++ psnap->dsap, psnap->ssap, psnap->ctrl);
-- libertas_interrupt(card->priv->dev);
-+ lbs_interrupt(card->priv);
+ delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
+ peth = (struct ethhdr *)(skb->data + delta);
+ peth->h_proto = htons(len - RX_MAC_HEADER_LENGTH);
+ }
+ else { /* Its a SNAP */
+- if (org == BRIDGE_ENCAP) { /* EtherII and nuke the LLC */
++ if (memcmp(psnap->org, org_bridge, 3) == 0) { /* EtherII and nuke the LLC */
+ DEBUG(3,"ray_cs untranslate Bridge encap\n");
+ delta = RX_MAC_HEADER_LENGTH
+ + sizeof(struct snaphdr_t) - ETH_HLEN;
+ peth = (struct ethhdr *)(skb->data + delta);
+ peth->h_proto = type;
+- }
+- else {
+- if (org == RFC1042_ENCAP) {
+- switch (type) {
+- case RAY_IPX_TYPE:
+- case APPLEARP_TYPE:
++ } else if (memcmp(psnap->org, org_1042, 3) == 0) {
++ switch (ntohs(type)) {
++ case ETH_P_IPX:
++ case ETH_P_AARP:
+ DEBUG(3,"ray_cs untranslate RFC IPX/AARP\n");
+ delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
+ peth = (struct ethhdr *)(skb->data + delta);
+@@ -2389,14 +2376,12 @@ static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
+ peth->h_proto = type;
+ break;
+ }
+- }
+- else {
++ } else {
+ printk("ray_cs untranslate very confused by packet\n");
+ delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
+ peth = (struct ethhdr *)(skb->data + delta);
+ peth->h_proto = type;
+- }
+- }
++ }
+ }
+ /* TBD reserve skb_reserve(skb, delta); */
+ skb_pull(skb, delta);
+diff --git a/drivers/net/wireless/rt2x00/rt2400pci.c b/drivers/net/wireless/rt2x00/rt2400pci.c
+index 31c1dd2..d6cba13 100644
+--- a/drivers/net/wireless/rt2x00/rt2400pci.c
++++ b/drivers/net/wireless/rt2x00/rt2400pci.c
+@@ -24,11 +24,6 @@
+ Supported chipsets: RT2460.
+ */
-- spin_unlock_irqrestore(&card->priv->adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&card->priv->driver_lock, flags);
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2400pci"
+-
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+@@ -54,7 +49,7 @@
+ * the access attempt is considered to have failed,
+ * and we will print an error.
+ */
+-static u32 rt2400pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
++static u32 rt2400pci_bbp_check(struct rt2x00_dev *rt2x00dev)
+ {
+ u32 reg;
+ unsigned int i;
+@@ -69,7 +64,7 @@ static u32 rt2400pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
+ return reg;
+ }
- ret = 0;
+-static void rt2400pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2400pci_bbp_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u8 value)
+ {
+ u32 reg;
+@@ -95,7 +90,7 @@ static void rt2400pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
+ rt2x00pci_register_write(rt2x00dev, BBPCSR, reg);
+ }
-@@ -454,7 +448,7 @@ static int if_sdio_prog_helper(struct if_sdio_card *card)
+-static void rt2400pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
++static void rt2400pci_bbp_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u8 *value)
+ {
+ u32 reg;
+@@ -132,7 +127,7 @@ static void rt2400pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ *value = rt2x00_get_field32(reg, BBPCSR_VALUE);
+ }
- chunk_size = min(size, (size_t)60);
+-static void rt2400pci_rf_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2400pci_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u32 value)
+ {
+ u32 reg;
+@@ -195,13 +190,13 @@ static void rt2400pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
-- *((u32*)chunk_buffer) = cpu_to_le32(chunk_size);
-+ *((__le32*)chunk_buffer) = cpu_to_le32(chunk_size);
- memcpy(chunk_buffer + 4, firmware, chunk_size);
- /*
- lbs_deb_sdio("sending %d bytes chunk\n", chunk_size);
-@@ -694,7 +688,8 @@ out:
- /* Libertas callbacks */
- /*******************************************************************/
+-static void rt2400pci_read_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2400pci_read_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
+ }
--static int if_sdio_host_to_card(wlan_private *priv, u8 type, u8 *buf, u16 nb)
-+static int if_sdio_host_to_card(struct lbs_private *priv,
-+ u8 type, u8 *buf, u16 nb)
+-static void rt2400pci_write_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2400pci_write_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
{
- int ret;
- struct if_sdio_card *card;
-@@ -775,7 +770,7 @@ out:
- return ret;
+ rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
+@@ -285,7 +280,7 @@ static void rt2400pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
+ */
+ rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+ rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 1);
+- rt2x00_set_field32(&reg, CSR14_TBCN, 1);
++ rt2x00_set_field32(&reg, CSR14_TBCN, (tsf_sync == TSF_SYNC_BEACON));
+ rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+ rt2x00_set_field32(&reg, CSR14_TSF_SYNC, tsf_sync);
+ rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+@@ -397,7 +392,7 @@ static void rt2400pci_config_txpower(struct rt2x00_dev *rt2x00dev, int txpower)
+ }
+
+ static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+- int antenna_tx, int antenna_rx)
++ struct antenna_setup *ant)
+ {
+ u8 r1;
+ u8 r4;
+@@ -408,14 +403,20 @@ static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the TX antenna.
+ */
+- switch (antenna_tx) {
+- case ANTENNA_SW_DIVERSITY:
++ switch (ant->tx) {
+ case ANTENNA_HW_DIVERSITY:
+ rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 1);
+ break;
+ case ANTENNA_A:
+ rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 0);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However we are not going to complain about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 2);
+ break;
+@@ -424,14 +425,20 @@ static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the RX antenna.
+ */
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+ break;
+ case ANTENNA_A:
+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 0);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However we are not going to complain about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
+ break;
+@@ -485,9 +492,7 @@ static void rt2400pci_config(struct rt2x00_dev *rt2x00dev,
+ rt2400pci_config_txpower(rt2x00dev,
+ libconf->conf->power_level);
+ if (flags & CONFIG_UPDATE_ANTENNA)
+- rt2400pci_config_antenna(rt2x00dev,
+- libconf->conf->antenna_sel_tx,
+- libconf->conf->antenna_sel_rx);
++ rt2400pci_config_antenna(rt2x00dev, &libconf->ant);
+ if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
+ rt2400pci_config_duration(rt2x00dev, libconf);
}
+@@ -514,18 +519,10 @@ static void rt2400pci_enable_led(struct rt2x00_dev *rt2x00dev)
--static int if_sdio_get_int_status(wlan_private *priv, u8 *ireg)
-+static int if_sdio_get_int_status(struct lbs_private *priv, u8 *ireg)
- {
- struct if_sdio_card *card;
-
-@@ -791,7 +786,7 @@ static int if_sdio_get_int_status(wlan_private *priv, u8 *ireg)
- return 0;
+ rt2x00_set_field32(&reg, LEDCSR_ON_PERIOD, 70);
+ rt2x00_set_field32(&reg, LEDCSR_OFF_PERIOD, 30);
+-
+- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 0);
+- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 0);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
+- } else {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
+- }
+-
++ rt2x00_set_field32(&reg, LEDCSR_LINK,
++ (rt2x00dev->led_mode != LED_MODE_ASUS));
++ rt2x00_set_field32(&reg, LEDCSR_ACTIVITY,
++ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
+ rt2x00pci_register_write(rt2x00dev, LEDCSR, reg);
}
--static int if_sdio_read_event_cause(wlan_private *priv)
-+static int if_sdio_read_event_cause(struct lbs_private *priv)
+@@ -542,7 +539,8 @@ static void rt2400pci_disable_led(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Link tuning
+ */
+-static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev)
++static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual)
{
- struct if_sdio_card *card;
+ u32 reg;
+ u8 bbp;
+@@ -551,13 +549,13 @@ static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev)
+ * Update FCS error count from register.
+ */
+ rt2x00pci_register_read(rt2x00dev, CNT0, &reg);
+- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
++ qual->rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
-@@ -799,7 +794,7 @@ static int if_sdio_read_event_cause(wlan_private *priv)
+ /*
+ * Update False CCA count from register.
+ */
+ rt2400pci_bbp_read(rt2x00dev, 39, &bbp);
+- rt2x00dev->link.false_cca = bbp;
++ qual->false_cca = bbp;
+ }
- card = priv->card;
+ static void rt2400pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
+@@ -582,10 +580,10 @@ static void rt2400pci_link_tuner(struct rt2x00_dev *rt2x00dev)
+ */
+ rt2400pci_bbp_read(rt2x00dev, 13, &reg);
-- priv->adapter->eventcause = card->event;
-+ priv->eventcause = card->event;
+- if (rt2x00dev->link.false_cca > 512 && reg < 0x20) {
++ if (rt2x00dev->link.qual.false_cca > 512 && reg < 0x20) {
+ rt2400pci_bbp_write(rt2x00dev, 13, ++reg);
+ rt2x00dev->link.vgc_level = reg;
+- } else if (rt2x00dev->link.false_cca < 100 && reg > 0x08) {
++ } else if (rt2x00dev->link.qual.false_cca < 100 && reg > 0x08) {
+ rt2400pci_bbp_write(rt2x00dev, 13, --reg);
+ rt2x00dev->link.vgc_level = reg;
+ }
+@@ -594,65 +592,43 @@ static void rt2400pci_link_tuner(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Initialization functions.
+ */
+-static void rt2400pci_init_rxring(struct rt2x00_dev *rt2x00dev)
++static void rt2400pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
+ {
+- struct data_ring *ring = rt2x00dev->rx;
+- struct data_desc *rxd;
+- unsigned int i;
++ __le32 *rxd = entry->priv;
+ u32 word;
- lbs_deb_leave(LBS_DEB_SDIO);
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
+-
+- for (i = 0; i < ring->stats.limit; i++) {
+- rxd = ring->entry[i].priv;
+-
+- rt2x00_desc_read(rxd, 2, &word);
+- rt2x00_set_field32(&word, RXD_W2_BUFFER_LENGTH,
+- ring->data_size);
+- rt2x00_desc_write(rxd, 2, word);
+-
+- rt2x00_desc_read(rxd, 1, &word);
+- rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(rxd, 1, word);
++ rt2x00_desc_read(rxd, 2, &word);
++ rt2x00_set_field32(&word, RXD_W2_BUFFER_LENGTH, entry->ring->data_size);
++ rt2x00_desc_write(rxd, 2, word);
-@@ -834,12 +829,9 @@ static void if_sdio_interrupt(struct sdio_func *func)
- * Ignore the define name, this really means the card has
- * successfully received the command.
- */
-- if (cause & IF_SDIO_H_INT_DNLD) {
-- if ((card->priv->dnld_sent == DNLD_DATA_SENT) &&
-- (card->priv->adapter->connect_status == LIBERTAS_CONNECTED))
-- netif_wake_queue(card->priv->dev);
-- card->priv->dnld_sent = DNLD_RES_RECEIVED;
+- rt2x00_desc_read(rxd, 0, &word);
+- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
+- rt2x00_desc_write(rxd, 0, word);
- }
-+ if (cause & IF_SDIO_H_INT_DNLD)
-+ lbs_host_to_card_done(card->priv);
-+
++ rt2x00_desc_read(rxd, 1, &word);
++ rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS, entry->data_dma);
++ rt2x00_desc_write(rxd, 1, word);
- if (cause & IF_SDIO_H_INT_UPLD) {
- ret = if_sdio_card_to_host(card);
-@@ -857,7 +849,7 @@ static int if_sdio_probe(struct sdio_func *func,
- const struct sdio_device_id *id)
- {
- struct if_sdio_card *card;
-- wlan_private *priv;
-+ struct lbs_private *priv;
- int ret, i;
- unsigned int model;
- struct if_sdio_packet *packet;
-@@ -905,15 +897,15 @@ static int if_sdio_probe(struct sdio_func *func,
- card->helper = if_sdio_models[i].helper;
- card->firmware = if_sdio_models[i].firmware;
+- rt2x00_ring_index_clear(rt2x00dev->rx);
++ rt2x00_desc_read(rxd, 0, &word);
++ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
++ rt2x00_desc_write(rxd, 0, word);
+ }
-- if (libertas_helper_name) {
-+ if (lbs_helper_name) {
- lbs_deb_sdio("overriding helper firmware: %s\n",
-- libertas_helper_name);
-- card->helper = libertas_helper_name;
-+ lbs_helper_name);
-+ card->helper = lbs_helper_name;
- }
+-static void rt2400pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
++static void rt2400pci_init_txentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
+ {
+- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
+- struct data_desc *txd;
+- unsigned int i;
++ __le32 *txd = entry->priv;
+ u32 word;
-- if (libertas_fw_name) {
-- lbs_deb_sdio("overriding firmware: %s\n", libertas_fw_name);
-- card->firmware = libertas_fw_name;
-+ if (lbs_fw_name) {
-+ lbs_deb_sdio("overriding firmware: %s\n", lbs_fw_name);
-+ card->firmware = lbs_fw_name;
- }
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
+-
+- for (i = 0; i < ring->stats.limit; i++) {
+- txd = ring->entry[i].priv;
+-
+- rt2x00_desc_read(txd, 1, &word);
+- rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(txd, 1, word);
+-
+- rt2x00_desc_read(txd, 2, &word);
+- rt2x00_set_field32(&word, TXD_W2_BUFFER_LENGTH,
+- ring->data_size);
+- rt2x00_desc_write(txd, 2, word);
++ rt2x00_desc_read(txd, 1, &word);
++ rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS, entry->data_dma);
++ rt2x00_desc_write(txd, 1, word);
- sdio_claim_host(func);
-@@ -951,7 +943,7 @@ static int if_sdio_probe(struct sdio_func *func,
- if (ret)
- goto reclaim;
+- rt2x00_desc_read(txd, 0, &word);
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
+- rt2x00_desc_write(txd, 0, word);
+- }
++ rt2x00_desc_read(txd, 2, &word);
++ rt2x00_set_field32(&word, TXD_W2_BUFFER_LENGTH, entry->ring->data_size);
++ rt2x00_desc_write(txd, 2, word);
-- priv = libertas_add_card(card, &func->dev);
-+ priv = lbs_add_card(card, &func->dev);
- if (!priv) {
- ret = -ENOMEM;
- goto reclaim;
-@@ -964,7 +956,7 @@ static int if_sdio_probe(struct sdio_func *func,
- priv->hw_get_int_status = if_sdio_get_int_status;
- priv->hw_read_event_cause = if_sdio_read_event_cause;
+- rt2x00_ring_index_clear(ring);
++ rt2x00_desc_read(txd, 0, &word);
++ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
++ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
++ rt2x00_desc_write(txd, 0, word);
+ }
-- priv->adapter->fw_ready = 1;
-+ priv->fw_ready = 1;
+ static int rt2400pci_init_rings(struct rt2x00_dev *rt2x00dev)
+@@ -660,15 +636,6 @@ static int rt2400pci_init_rings(struct rt2x00_dev *rt2x00dev)
+ u32 reg;
/*
- * Enable interrupts now that everything is set up
-@@ -975,7 +967,7 @@ static int if_sdio_probe(struct sdio_func *func,
- if (ret)
- goto reclaim;
+- * Initialize rings.
+- */
+- rt2400pci_init_rxring(rt2x00dev);
+- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
+- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
+- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_AFTER_BEACON);
+- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
+-
+- /*
+ * Initialize registers.
+ */
+ rt2x00pci_register_read(rt2x00dev, TXCSR2, &reg);
+@@ -1014,53 +981,37 @@ static int rt2400pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ * TX descriptor initialization
+ */
+ static void rt2400pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
++ struct sk_buff *skb,
+ struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+ struct ieee80211_tx_control *control)
+ {
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ __le32 *txd = skbdesc->desc;
+ u32 word;
+- u32 signal = 0;
+- u32 service = 0;
+- u32 length_high = 0;
+- u32 length_low = 0;
+-
+- /*
+- * The PLCP values should be treated as if they
+- * were BBP values.
+- */
+- rt2x00_set_field32(&signal, BBPCSR_VALUE, desc->signal);
+- rt2x00_set_field32(&signal, BBPCSR_REGNUM, 5);
+- rt2x00_set_field32(&signal, BBPCSR_BUSY, 1);
+-
+- rt2x00_set_field32(&service, BBPCSR_VALUE, desc->service);
+- rt2x00_set_field32(&service, BBPCSR_REGNUM, 6);
+- rt2x00_set_field32(&service, BBPCSR_BUSY, 1);
+-
+- rt2x00_set_field32(&length_high, BBPCSR_VALUE, desc->length_high);
+- rt2x00_set_field32(&length_high, BBPCSR_REGNUM, 7);
+- rt2x00_set_field32(&length_high, BBPCSR_BUSY, 1);
+-
+- rt2x00_set_field32(&length_low, BBPCSR_VALUE, desc->length_low);
+- rt2x00_set_field32(&length_low, BBPCSR_REGNUM, 8);
+- rt2x00_set_field32(&length_low, BBPCSR_BUSY, 1);
-- ret = libertas_start_card(priv);
-+ ret = lbs_start_card(priv);
- if (ret)
- goto err_activate_card;
+ /*
+ * Start writing the descriptor words.
+ */
+ rt2x00_desc_read(txd, 2, &word);
+- rt2x00_set_field32(&word, TXD_W2_DATABYTE_COUNT, length);
++ rt2x00_set_field32(&word, TXD_W2_DATABYTE_COUNT, skbdesc->data_len);
+ rt2x00_desc_write(txd, 2, word);
-@@ -987,7 +979,7 @@ out:
- err_activate_card:
- flush_scheduled_work();
- free_netdev(priv->dev);
-- kfree(priv->adapter);
-+ kfree(priv);
- reclaim:
- sdio_claim_host(func);
- release_int:
-@@ -1017,11 +1009,11 @@ static void if_sdio_remove(struct sdio_func *func)
+ rt2x00_desc_read(txd, 3, &word);
+- rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL, signal);
+- rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE, service);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL, desc->signal);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL_REGNUM, 5);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL_BUSY, 1);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE, desc->service);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE_REGNUM, 6);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE_BUSY, 1);
+ rt2x00_desc_write(txd, 3, word);
- card = sdio_get_drvdata(func);
+ rt2x00_desc_read(txd, 4, &word);
+- rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_LOW, length_low);
+- rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_HIGH, length_high);
++ rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_LOW, desc->length_low);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_LOW_REGNUM, 8);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_LOW_BUSY, 1);
++ rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_HIGH, desc->length_high);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_HIGH_REGNUM, 7);
++ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_HIGH_BUSY, 1);
+ rt2x00_desc_write(txd, 4, word);
-- card->priv->adapter->surpriseremoved = 1;
-+ card->priv->surpriseremoved = 1;
+ rt2x00_desc_read(txd, 0, &word);
+@@ -1069,7 +1020,7 @@ static void rt2400pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
+ test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_ACK,
+- !(control->flags & IEEE80211_TXCTL_NO_ACK));
++ test_bit(ENTRY_TXD_ACK, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
+ test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_RTS,
+@@ -1099,12 +1050,12 @@ static void rt2400pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+ }
- lbs_deb_sdio("call remove card\n");
-- libertas_stop_card(card->priv);
-- libertas_remove_card(card->priv);
-+ lbs_stop_card(card->priv);
-+ lbs_remove_card(card->priv);
+ rt2x00pci_register_read(rt2x00dev, TXCSR0, &reg);
+- if (queue == IEEE80211_TX_QUEUE_DATA0)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA1)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_TX, 1);
+- else if (queue == IEEE80211_TX_QUEUE_AFTER_BEACON)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM, 1);
++ rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO,
++ (queue == IEEE80211_TX_QUEUE_DATA0));
++ rt2x00_set_field32(&reg, TXCSR0_KICK_TX,
++ (queue == IEEE80211_TX_QUEUE_DATA1));
++ rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM,
++ (queue == IEEE80211_TX_QUEUE_AFTER_BEACON));
+ rt2x00pci_register_write(rt2x00dev, TXCSR0, reg);
+ }
- flush_scheduled_work();
+@@ -1114,7 +1065,7 @@ static void rt2400pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+ static void rt2400pci_fill_rxdone(struct data_entry *entry,
+ struct rxdata_entry_desc *desc)
+ {
+- struct data_desc *rxd = entry->priv;
++ __le32 *rxd = entry->priv;
+ u32 word0;
+ u32 word2;
-@@ -1052,7 +1044,7 @@ static struct sdio_driver if_sdio_driver = {
- /* Module functions */
- /*******************************************************************/
+@@ -1135,6 +1086,7 @@ static void rt2400pci_fill_rxdone(struct data_entry *entry,
+ entry->ring->rt2x00dev->rssi_offset;
+ desc->ofdm = 0;
+ desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
++ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
+ }
--static int if_sdio_init_module(void)
-+static int __init if_sdio_init_module(void)
+ /*
+@@ -1144,7 +1096,7 @@ static void rt2400pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
{
- int ret = 0;
+ struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
+ struct data_entry *entry;
+- struct data_desc *txd;
++ __le32 *txd;
+ u32 word;
+ int tx_status;
+ int retry;
+@@ -1164,26 +1116,8 @@ static void rt2400pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
+ tx_status = rt2x00_get_field32(word, TXD_W0_RESULT);
+ retry = rt2x00_get_field32(word, TXD_W0_RETRY_COUNT);
-@@ -1068,7 +1060,7 @@ static int if_sdio_init_module(void)
- return ret;
+- rt2x00lib_txdone(entry, tx_status, retry);
+-
+- /*
+- * Make this entry available for reuse.
+- */
+- entry->flags = 0;
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_desc_write(txd, 0, word);
+- rt2x00_ring_index_done_inc(ring);
++ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
+ }
+-
+- /*
+- * If the data ring was full before the txdone handler
+- * we must make sure the packet queue in the mac80211 stack
+- * is reenabled when the txdone handler has finished.
+- */
+- entry = ring->entry;
+- if (!rt2x00_ring_full(ring))
+- ieee80211_wake_queue(rt2x00dev->hw,
+- entry->tx_status.control.queue);
}
--static void if_sdio_exit_module(void)
-+static void __exit if_sdio_exit_module(void)
+ static irqreturn_t rt2400pci_interrupt(int irq, void *dev_instance)
+@@ -1315,12 +1249,23 @@ static int rt2400pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Identify default antenna configuration.
+ */
+- rt2x00dev->hw->conf.antenna_sel_tx =
++ rt2x00dev->default_ant.tx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
+- rt2x00dev->hw->conf.antenna_sel_rx =
++ rt2x00dev->default_ant.rx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
+
+ /*
++ * When the eeprom indicates SW_DIVERSITY use HW_DIVERSITY instead.
++ * I am not 100% sure about this, but the legacy drivers do not
++ * indicate that antenna swapping in software is required when
++ * diversity is enabled.
++ */
++ if (rt2x00dev->default_ant.tx == ANTENNA_SW_DIVERSITY)
++ rt2x00dev->default_ant.tx = ANTENNA_HW_DIVERSITY;
++ if (rt2x00dev->default_ant.rx == ANTENNA_SW_DIVERSITY)
++ rt2x00dev->default_ant.rx = ANTENNA_HW_DIVERSITY;
++
++ /*
+ * Store led mode, for correct led behaviour.
+ */
+ rt2x00dev->led_mode =
+@@ -1447,7 +1392,6 @@ static void rt2400pci_configure_filter(struct ieee80211_hw *hw,
+ struct dev_addr_list *mc_list)
{
- lbs_deb_enter(LBS_DEB_SDIO);
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct interface *intf = &rt2x00dev->interface;
+ u32 reg;
-diff --git a/drivers/net/wireless/libertas/if_sdio.h b/drivers/net/wireless/libertas/if_sdio.h
-index dfcaea7..533bdfb 100644
---- a/drivers/net/wireless/libertas/if_sdio.h
-+++ b/drivers/net/wireless/libertas/if_sdio.h
-@@ -9,8 +9,8 @@
- * your option) any later version.
- */
+ /*
+@@ -1466,21 +1410,18 @@ static void rt2400pci_configure_filter(struct ieee80211_hw *hw,
+ * Apply some rules to the filters:
+ * - Some filters imply different filters to be set.
+ * - Some things we can't filter out at all.
+- * - Some filters are set based on interface type.
+ */
+ *total_flags |= FIF_ALLMULTI;
+ if (*total_flags & FIF_OTHER_BSS ||
+ *total_flags & FIF_PROMISC_IN_BSS)
+ *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
+- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
+- *total_flags |= FIF_PROMISC_IN_BSS;
--#ifndef LIBERTAS_IF_SDIO_H
--#define LIBERTAS_IF_SDIO_H
-+#ifndef _LBS_IF_SDIO_H
-+#define _LBS_IF_SDIO_H
+ /*
+ * Check if there is any work left for us.
+ */
+- if (intf->filter == *total_flags)
++ if (rt2x00dev->packet_filter == *total_flags)
+ return;
+- intf->filter = *total_flags;
++ rt2x00dev->packet_filter = *total_flags;
- #define IF_SDIO_IOPORT 0x00
+ /*
+ * Start configuration steps.
+@@ -1583,7 +1524,7 @@ static const struct ieee80211_ops rt2400pci_mac80211_ops = {
+ .configure_filter = rt2400pci_configure_filter,
+ .get_stats = rt2x00mac_get_stats,
+ .set_retry_limit = rt2400pci_set_retry_limit,
+- .erp_ie_changed = rt2x00mac_erp_ie_changed,
++ .bss_info_changed = rt2x00mac_bss_info_changed,
+ .conf_tx = rt2400pci_conf_tx,
+ .get_tx_stats = rt2x00mac_get_tx_stats,
+ .get_tsf = rt2400pci_get_tsf,
+@@ -1597,6 +1538,8 @@ static const struct rt2x00lib_ops rt2400pci_rt2x00_ops = {
+ .probe_hw = rt2400pci_probe_hw,
+ .initialize = rt2x00pci_initialize,
+ .uninitialize = rt2x00pci_uninitialize,
++ .init_rxentry = rt2400pci_init_rxentry,
++ .init_txentry = rt2400pci_init_txentry,
+ .set_device_state = rt2400pci_set_device_state,
+ .rfkill_poll = rt2400pci_rfkill_poll,
+ .link_stats = rt2400pci_link_stats,
+@@ -1614,7 +1557,7 @@ static const struct rt2x00lib_ops rt2400pci_rt2x00_ops = {
+ };
-diff --git a/drivers/net/wireless/libertas/if_usb.c b/drivers/net/wireless/libertas/if_usb.c
-index cb59f46..75aed9d 100644
---- a/drivers/net/wireless/libertas/if_usb.c
-+++ b/drivers/net/wireless/libertas/if_usb.c
-@@ -5,7 +5,6 @@
- #include <linux/moduleparam.h>
- #include <linux/firmware.h>
- #include <linux/netdevice.h>
--#include <linux/list.h>
- #include <linux/usb.h>
+ static const struct rt2x00_ops rt2400pci_ops = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .rxd_size = RXD_DESC_SIZE,
+ .txd_size = TXD_DESC_SIZE,
+ .eeprom_size = EEPROM_SIZE,
+@@ -1642,7 +1585,7 @@ MODULE_DEVICE_TABLE(pci, rt2400pci_device_table);
+ MODULE_LICENSE("GPL");
- #define DRV_NAME "usb8xxx"
-@@ -14,24 +13,16 @@
- #include "decl.h"
- #include "defs.h"
- #include "dev.h"
-+#include "cmd.h"
- #include "if_usb.h"
+ static struct pci_driver rt2400pci_driver = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .id_table = rt2400pci_device_table,
+ .probe = rt2x00pci_probe,
+ .remove = __devexit_p(rt2x00pci_remove),
+diff --git a/drivers/net/wireless/rt2x00/rt2400pci.h b/drivers/net/wireless/rt2x00/rt2400pci.h
+index ae22501..369aac6 100644
+--- a/drivers/net/wireless/rt2x00/rt2400pci.h
++++ b/drivers/net/wireless/rt2x00/rt2400pci.h
+@@ -803,8 +803,8 @@
+ /*
+ * DMA descriptor defines.
+ */
+-#define TXD_DESC_SIZE ( 8 * sizeof(struct data_desc) )
+-#define RXD_DESC_SIZE ( 8 * sizeof(struct data_desc) )
++#define TXD_DESC_SIZE ( 8 * sizeof(__le32) )
++#define RXD_DESC_SIZE ( 8 * sizeof(__le32) )
--#define MESSAGE_HEADER_LEN 4
--
--static const char usbdriver_name[] = "usb8xxx";
--static u8 *default_fw_name = "usb8388.bin";
-+#define INSANEDEBUG 0
-+#define lbs_deb_usb2(...) do { if (INSANEDEBUG) lbs_deb_usbd(__VA_ARGS__); } while (0)
+ /*
+ * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
+@@ -839,11 +839,21 @@
--static char *libertas_fw_name = NULL;
--module_param_named(fw_name, libertas_fw_name, charp, 0644);
-+#define MESSAGE_HEADER_LEN 4
+ /*
+ * Word3 & 4: PLCP information
+- */
+-#define TXD_W3_PLCP_SIGNAL FIELD32(0x0000ffff)
+-#define TXD_W3_PLCP_SERVICE FIELD32(0xffff0000)
+-#define TXD_W4_PLCP_LENGTH_LOW FIELD32(0x0000ffff)
+-#define TXD_W4_PLCP_LENGTH_HIGH FIELD32(0xffff0000)
++ * The PLCP values should be treated as if they were BBP values.
++ */
++#define TXD_W3_PLCP_SIGNAL FIELD32(0x000000ff)
++#define TXD_W3_PLCP_SIGNAL_REGNUM FIELD32(0x00007f00)
++#define TXD_W3_PLCP_SIGNAL_BUSY FIELD32(0x00008000)
++#define TXD_W3_PLCP_SERVICE FIELD32(0x00ff0000)
++#define TXD_W3_PLCP_SERVICE_REGNUM FIELD32(0x7f000000)
++#define TXD_W3_PLCP_SERVICE_BUSY FIELD32(0x80000000)
++
++#define TXD_W4_PLCP_LENGTH_LOW FIELD32(0x000000ff)
++#define TXD_W3_PLCP_LENGTH_LOW_REGNUM FIELD32(0x00007f00)
++#define TXD_W3_PLCP_LENGTH_LOW_BUSY FIELD32(0x00008000)
++#define TXD_W4_PLCP_LENGTH_HIGH FIELD32(0x00ff0000)
++#define TXD_W3_PLCP_LENGTH_HIGH_REGNUM FIELD32(0x7f000000)
++#define TXD_W3_PLCP_LENGTH_HIGH_BUSY FIELD32(0x80000000)
+
+ /*
+ * Word5
+diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c
+index 702321c..e874fdc 100644
+--- a/drivers/net/wireless/rt2x00/rt2500pci.c
++++ b/drivers/net/wireless/rt2x00/rt2500pci.c
+@@ -24,11 +24,6 @@
+ Supported chipsets: RT2560.
+ */
-/*
-- * We need to send a RESET command to all USB devices before
-- * we tear down the USB connection. Otherwise we would not
-- * be able to re-init device the device if the module gets
-- * loaded again. This is a list of all initialized USB devices,
-- * for the reset code see if_usb_reset_device()
--*/
--static LIST_HEAD(usb_devices);
-+static char *lbs_fw_name = "usb8388.bin";
-+module_param_named(fw_name, lbs_fw_name, charp, 0644);
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2500pci"
+-
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+@@ -54,7 +49,7 @@
+ * the access attempt is considered to have failed,
+ * and we will print an error.
+ */
+-static u32 rt2500pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
++static u32 rt2500pci_bbp_check(struct rt2x00_dev *rt2x00dev)
+ {
+ u32 reg;
+ unsigned int i;
+@@ -69,7 +64,7 @@ static u32 rt2500pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
+ return reg;
+ }
- static struct usb_device_id if_usb_table[] = {
- /* Enter the device signature inside */
-@@ -44,14 +35,16 @@ MODULE_DEVICE_TABLE(usb, if_usb_table);
+-static void rt2500pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2500pci_bbp_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u8 value)
+ {
+ u32 reg;
+@@ -95,7 +90,7 @@ static void rt2500pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
+ rt2x00pci_register_write(rt2x00dev, BBPCSR, reg);
+ }
- static void if_usb_receive(struct urb *urb);
- static void if_usb_receive_fwload(struct urb *urb);
--static int if_usb_prog_firmware(struct usb_card_rec *cardp);
--static int if_usb_host_to_card(wlan_private * priv, u8 type, u8 * payload, u16 nb);
--static int if_usb_get_int_status(wlan_private * priv, u8 *);
--static int if_usb_read_event_cause(wlan_private *);
--static int usb_tx_block(struct usb_card_rec *cardp, u8 *payload, u16 nb);
--static void if_usb_free(struct usb_card_rec *cardp);
--static int if_usb_submit_rx_urb(struct usb_card_rec *cardp);
--static int if_usb_reset_device(struct usb_card_rec *cardp);
-+static int if_usb_prog_firmware(struct if_usb_card *cardp);
-+static int if_usb_host_to_card(struct lbs_private *priv, uint8_t type,
-+ uint8_t *payload, uint16_t nb);
-+static int if_usb_get_int_status(struct lbs_private *priv, uint8_t *);
-+static int if_usb_read_event_cause(struct lbs_private *);
-+static int usb_tx_block(struct if_usb_card *cardp, uint8_t *payload,
-+ uint16_t nb);
-+static void if_usb_free(struct if_usb_card *cardp);
-+static int if_usb_submit_rx_urb(struct if_usb_card *cardp);
-+static int if_usb_reset_device(struct if_usb_card *cardp);
+-static void rt2500pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
++static void rt2500pci_bbp_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u8 *value)
+ {
+ u32 reg;
+@@ -132,7 +127,7 @@ static void rt2500pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ *value = rt2x00_get_field32(reg, BBPCSR_VALUE);
+ }
- /**
- * @brief call back function to handle the status of the URB
-@@ -60,37 +53,22 @@ static int if_usb_reset_device(struct usb_card_rec *cardp);
- */
- static void if_usb_write_bulk_callback(struct urb *urb)
+-static void rt2500pci_rf_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2500pci_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u32 value)
{
-- struct usb_card_rec *cardp = (struct usb_card_rec *) urb->context;
-+ struct if_usb_card *cardp = (struct if_usb_card *) urb->context;
+ u32 reg;
+@@ -195,13 +190,13 @@ static void rt2500pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
- /* handle the transmission complete validations */
+-static void rt2500pci_read_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2500pci_read_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
+ }
- if (urb->status == 0) {
-- wlan_private *priv = cardp->priv;
-+ struct lbs_private *priv = cardp->priv;
+-static void rt2500pci_write_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2500pci_write_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
+ {
+ rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
+@@ -289,7 +284,7 @@ static void rt2500pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
+ */
+ rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+ rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 1);
+- rt2x00_set_field32(&reg, CSR14_TBCN, 1);
++ rt2x00_set_field32(&reg, CSR14_TBCN, (tsf_sync == TSF_SYNC_BEACON));
+ rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+ rt2x00_set_field32(&reg, CSR14_TSF_SYNC, tsf_sync);
+ rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+@@ -424,7 +419,7 @@ static void rt2500pci_config_txpower(struct rt2x00_dev *rt2x00dev,
+ }
-- /*
-- lbs_deb_usbd(&urb->dev->dev, "URB status is successfull\n");
-- lbs_deb_usbd(&urb->dev->dev, "Actual length transmitted %d\n",
-- urb->actual_length);
-- */
-+ lbs_deb_usb2(&urb->dev->dev, "URB status is successful\n");
-+ lbs_deb_usb2(&urb->dev->dev, "Actual length transmitted %d\n",
-+ urb->actual_length);
+ static void rt2500pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx, const int antenna_rx)
++ struct antenna_setup *ant)
+ {
+ u32 reg;
+ u8 r14;
+@@ -437,18 +432,20 @@ static void rt2500pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the TX antenna.
+ */
+- switch (antenna_tx) {
+- case ANTENNA_SW_DIVERSITY:
+- case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
+- rt2x00_set_field32(&reg, BBPCSR1_CCK, 2);
+- rt2x00_set_field32(&reg, BBPCSR1_OFDM, 2);
+- break;
++ switch (ant->tx) {
+ case ANTENNA_A:
+ rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 0);
+ rt2x00_set_field32(&reg, BBPCSR1_CCK, 0);
+ rt2x00_set_field32(&reg, BBPCSR1_OFDM, 0);
+ break;
++ case ANTENNA_HW_DIVERSITY:
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However, we are not going to complain about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
+ rt2x00_set_field32(&reg, BBPCSR1_CCK, 2);
+@@ -459,14 +456,18 @@ static void rt2500pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the RX antenna.
+ */
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
+- case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
+- break;
++ switch (ant->rx) {
+ case ANTENNA_A:
+ rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 0);
+ break;
++ case ANTENNA_HW_DIVERSITY:
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However, we are not going to complain about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
+ break;
+@@ -541,9 +542,7 @@ static void rt2500pci_config(struct rt2x00_dev *rt2x00dev,
+ rt2500pci_config_txpower(rt2x00dev,
+ libconf->conf->power_level);
+ if (flags & CONFIG_UPDATE_ANTENNA)
+- rt2500pci_config_antenna(rt2x00dev,
+- libconf->conf->antenna_sel_tx,
+- libconf->conf->antenna_sel_rx);
++ rt2500pci_config_antenna(rt2x00dev, &libconf->ant);
+ if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
+ rt2500pci_config_duration(rt2x00dev, libconf);
+ }
+@@ -559,18 +558,10 @@ static void rt2500pci_enable_led(struct rt2x00_dev *rt2x00dev)
- /* Used for both firmware TX and regular TX. priv isn't
- * valid at firmware load time.
- */
-- if (priv) {
-- wlan_adapter *adapter = priv->adapter;
-- struct net_device *dev = priv->dev;
--
-- priv->dnld_sent = DNLD_RES_RECEIVED;
+ rt2x00_set_field32(&reg, LEDCSR_ON_PERIOD, 70);
+ rt2x00_set_field32(&reg, LEDCSR_OFF_PERIOD, 30);
-
-- /* Wake main thread if commands are pending */
-- if (!adapter->cur_cmd)
-- wake_up_interruptible(&priv->waitq);
+- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 0);
+- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 0);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
+- } else {
+- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
+- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
+- }
-
-- if ((adapter->connect_status == LIBERTAS_CONNECTED)) {
-- netif_wake_queue(dev);
-- netif_wake_queue(priv->mesh_dev);
-- }
-- }
-+ if (priv)
-+ lbs_host_to_card_done(priv);
- } else {
- /* print the failure status number for debug */
- lbs_pr_info("URB in failure status: %d\n", urb->status);
-@@ -101,10 +79,10 @@ static void if_usb_write_bulk_callback(struct urb *urb)
++ rt2x00_set_field32(&reg, LEDCSR_LINK,
++ (rt2x00dev->led_mode != LED_MODE_ASUS));
++ rt2x00_set_field32(&reg, LEDCSR_ACTIVITY,
++ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
+ rt2x00pci_register_write(rt2x00dev, LEDCSR, reg);
+ }
- /**
- * @brief free tx/rx urb, skb and rx buffer
-- * @param cardp pointer usb_card_rec
-+ * @param cardp pointer if_usb_card
- * @return N/A
+@@ -587,7 +578,8 @@ static void rt2500pci_disable_led(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Link tuning
*/
--static void if_usb_free(struct usb_card_rec *cardp)
-+static void if_usb_free(struct if_usb_card *cardp)
+-static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev)
++static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual)
{
- lbs_deb_enter(LBS_DEB_USB);
-
-@@ -118,12 +96,58 @@ static void if_usb_free(struct usb_card_rec *cardp)
- usb_free_urb(cardp->rx_urb);
- cardp->rx_urb = NULL;
+ u32 reg;
-- kfree(cardp->bulk_out_buffer);
-- cardp->bulk_out_buffer = NULL;
-+ kfree(cardp->ep_out_buf);
-+ cardp->ep_out_buf = NULL;
+@@ -595,13 +587,13 @@ static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev)
+ * Update FCS error count from register.
+ */
+ rt2x00pci_register_read(rt2x00dev, CNT0, &reg);
+- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
++ qual->rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
- lbs_deb_leave(LBS_DEB_USB);
+ /*
+ * Update False CCA count from register.
+ */
+ rt2x00pci_register_read(rt2x00dev, CNT3, &reg);
+- rt2x00dev->link.false_cca = rt2x00_get_field32(reg, CNT3_FALSE_CCA);
++ qual->false_cca = rt2x00_get_field32(reg, CNT3_FALSE_CCA);
}
-+static void if_usb_setup_firmware(struct lbs_private *priv)
-+{
-+ struct if_usb_card *cardp = priv->card;
-+ struct cmd_ds_set_boot2_ver b2_cmd;
-+ struct cmd_ds_802_11_fw_wake_method wake_method;
-+
-+ b2_cmd.hdr.size = cpu_to_le16(sizeof(b2_cmd));
-+ b2_cmd.action = 0;
-+ b2_cmd.version = cardp->boot2_version;
-+
-+ if (lbs_cmd_with_response(priv, CMD_SET_BOOT2_VER, &b2_cmd))
-+ lbs_deb_usb("Setting boot2 version failed\n");
-+
-+ priv->wol_gpio = 2; /* Wake via GPIO2... */
-+ priv->wol_gap = 20; /* ... after 20ms */
-+ lbs_host_sleep_cfg(priv, EHS_WAKE_ON_UNICAST_DATA);
-+
-+ wake_method.hdr.size = cpu_to_le16(sizeof(wake_method));
-+ wake_method.action = cpu_to_le16(CMD_ACT_GET);
-+ if (lbs_cmd_with_response(priv, CMD_802_11_FW_WAKE_METHOD, &wake_method)) {
-+ lbs_pr_info("Firmware does not seem to support PS mode\n");
-+ } else {
-+ if (le16_to_cpu(wake_method.method) == CMD_WAKE_METHOD_COMMAND_INT) {
-+ lbs_deb_usb("Firmware seems to support PS with wake-via-command\n");
-+ priv->ps_supported = 1;
-+ } else {
-+ /* The versions which boot up this way don't seem to
-+ work even if we set it to the command interrupt */
-+ lbs_pr_info("Firmware doesn't wake via command interrupt; disabling PS mode\n");
-+ }
-+ }
-+}
-+
-+static void if_usb_fw_timeo(unsigned long priv)
-+{
-+ struct if_usb_card *cardp = (void *)priv;
-+
-+ if (cardp->fwdnldover) {
-+ lbs_deb_usb("Download complete, no event. Assuming success\n");
-+ } else {
-+ lbs_pr_err("Download timed out\n");
-+ cardp->surprise_removed = 1;
-+ }
-+ wake_up(&cardp->fw_wq);
-+}
-+
- /**
- * @brief sets the configuration values
- * @param ifnum interface number
-@@ -136,23 +160,26 @@ static int if_usb_probe(struct usb_interface *intf,
- struct usb_device *udev;
- struct usb_host_interface *iface_desc;
- struct usb_endpoint_descriptor *endpoint;
-- wlan_private *priv;
-- struct usb_card_rec *cardp;
-+ struct lbs_private *priv;
-+ struct if_usb_card *cardp;
- int i;
-
- udev = interface_to_usbdev(intf);
-
-- cardp = kzalloc(sizeof(struct usb_card_rec), GFP_KERNEL);
-+ cardp = kzalloc(sizeof(struct if_usb_card), GFP_KERNEL);
- if (!cardp) {
- lbs_pr_err("Out of memory allocating private data.\n");
- goto error;
+ static void rt2500pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
+@@ -679,10 +671,10 @@ dynamic_cca_tune:
+ * R17 is inside the dynamic tuning range,
+ * start tuning the link based on the false cca counter.
+ */
+- if (rt2x00dev->link.false_cca > 512 && r17 < 0x40) {
++ if (rt2x00dev->link.qual.false_cca > 512 && r17 < 0x40) {
+ rt2500pci_bbp_write(rt2x00dev, 17, ++r17);
+ rt2x00dev->link.vgc_level = r17;
+- } else if (rt2x00dev->link.false_cca < 100 && r17 > 0x32) {
++ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > 0x32) {
+ rt2500pci_bbp_write(rt2x00dev, 17, --r17);
+ rt2x00dev->link.vgc_level = r17;
}
+@@ -691,55 +683,35 @@ dynamic_cca_tune:
+ /*
+ * Initialization functions.
+ */
+-static void rt2500pci_init_rxring(struct rt2x00_dev *rt2x00dev)
++static void rt2500pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
+ {
+- struct data_ring *ring = rt2x00dev->rx;
+- struct data_desc *rxd;
+- unsigned int i;
++ __le32 *rxd = entry->priv;
+ u32 word;
-+ setup_timer(&cardp->fw_timeout, if_usb_fw_timeo, (unsigned long)cardp);
-+ init_waitqueue_head(&cardp->fw_wq);
-+
- cardp->udev = udev;
- iface_desc = intf->cur_altsetting;
-
- lbs_deb_usbd(&udev->dev, "bcdUSB = 0x%X bDeviceClass = 0x%X"
-- " bDeviceSubClass = 0x%X, bDeviceProtocol = 0x%X\n",
-+ " bDeviceSubClass = 0x%X, bDeviceProtocol = 0x%X\n",
- le16_to_cpu(udev->descriptor.bcdUSB),
- udev->descriptor.bDeviceClass,
- udev->descriptor.bDeviceSubClass,
-@@ -160,92 +187,62 @@ static int if_usb_probe(struct usb_interface *intf,
-
- for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
- endpoint = &iface_desc->endpoint[i].desc;
-- if ((endpoint->bEndpointAddress & USB_ENDPOINT_DIR_MASK)
-- && ((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
-- USB_ENDPOINT_XFER_BULK)) {
-- /* we found a bulk in endpoint */
-- lbs_deb_usbd(&udev->dev, "Bulk in size is %d\n",
-- le16_to_cpu(endpoint->wMaxPacketSize));
-- if (!(cardp->rx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
-- lbs_deb_usbd(&udev->dev,
-- "Rx URB allocation failed\n");
-- goto dealloc;
-- }
-- cardp->rx_urb_recall = 0;
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
-
-- cardp->bulk_in_size =
-- le16_to_cpu(endpoint->wMaxPacketSize);
-- cardp->bulk_in_endpointAddr =
-- (endpoint->
-- bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
-- lbs_deb_usbd(&udev->dev, "in_endpoint = %d\n",
-- endpoint->bEndpointAddress);
-- }
-+ if (usb_endpoint_is_bulk_in(endpoint)) {
-+ cardp->ep_in_size = le16_to_cpu(endpoint->wMaxPacketSize);
-+ cardp->ep_in = usb_endpoint_num(endpoint);
-
-- if (((endpoint->
-- bEndpointAddress & USB_ENDPOINT_DIR_MASK) ==
-- USB_DIR_OUT)
-- && ((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
-- USB_ENDPOINT_XFER_BULK)) {
-- /* We found bulk out endpoint */
-- if (!(cardp->tx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
-- lbs_deb_usbd(&udev->dev,
-- "Tx URB allocation failed\n");
-- goto dealloc;
-- }
-+ lbs_deb_usbd(&udev->dev, "in_endpoint = %d\n", cardp->ep_in);
-+ lbs_deb_usbd(&udev->dev, "Bulk in size is %d\n", cardp->ep_in_size);
-
-- cardp->bulk_out_size =
-- le16_to_cpu(endpoint->wMaxPacketSize);
-- lbs_deb_usbd(&udev->dev,
-- "Bulk out size is %d\n",
-- le16_to_cpu(endpoint->wMaxPacketSize));
-- cardp->bulk_out_endpointAddr =
-- endpoint->bEndpointAddress;
-- lbs_deb_usbd(&udev->dev, "out_endpoint = %d\n",
-- endpoint->bEndpointAddress);
-- cardp->bulk_out_buffer =
-- kmalloc(MRVDRV_ETH_TX_PACKET_BUFFER_SIZE,
-- GFP_KERNEL);
+- for (i = 0; i < ring->stats.limit; i++) {
+- rxd = ring->entry[i].priv;
-
-- if (!cardp->bulk_out_buffer) {
-- lbs_deb_usbd(&udev->dev,
-- "Could not allocate buffer\n");
-- goto dealloc;
-- }
-+ } else if (usb_endpoint_is_bulk_out(endpoint)) {
-+ cardp->ep_out_size = le16_to_cpu(endpoint->wMaxPacketSize);
-+ cardp->ep_out = usb_endpoint_num(endpoint);
-+
-+ lbs_deb_usbd(&udev->dev, "out_endpoint = %d\n", cardp->ep_out);
-+ lbs_deb_usbd(&udev->dev, "Bulk out size is %d\n", cardp->ep_out_size);
- }
- }
-+ if (!cardp->ep_out_size || !cardp->ep_in_size) {
-+ lbs_deb_usbd(&udev->dev, "Endpoints not found\n");
-+ goto dealloc;
-+ }
-+ if (!(cardp->rx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
-+ lbs_deb_usbd(&udev->dev, "Rx URB allocation failed\n");
-+ goto dealloc;
-+ }
-+ if (!(cardp->tx_urb = usb_alloc_urb(0, GFP_KERNEL))) {
-+ lbs_deb_usbd(&udev->dev, "Tx URB allocation failed\n");
-+ goto dealloc;
-+ }
-+ cardp->ep_out_buf = kmalloc(MRVDRV_ETH_TX_PACKET_BUFFER_SIZE, GFP_KERNEL);
-+ if (!cardp->ep_out_buf) {
-+ lbs_deb_usbd(&udev->dev, "Could not allocate buffer\n");
-+ goto dealloc;
-+ }
+- rt2x00_desc_read(rxd, 1, &word);
+- rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(rxd, 1, word);
+-
+- rt2x00_desc_read(rxd, 0, &word);
+- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
+- rt2x00_desc_write(rxd, 0, word);
+- }
++ rt2x00_desc_read(rxd, 1, &word);
++ rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS, entry->data_dma);
++ rt2x00_desc_write(rxd, 1, word);
- /* Upload firmware */
-- cardp->rinfo.cardp = cardp;
- if (if_usb_prog_firmware(cardp)) {
-- lbs_deb_usbd(&udev->dev, "FW upload failed");
-+ lbs_deb_usbd(&udev->dev, "FW upload failed\n");
- goto err_prog_firmware;
- }
+- rt2x00_ring_index_clear(rt2x00dev->rx);
++ rt2x00_desc_read(rxd, 0, &word);
++ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
++ rt2x00_desc_write(rxd, 0, word);
+ }
-- if (!(priv = libertas_add_card(cardp, &udev->dev)))
-+ if (!(priv = lbs_add_card(cardp, &udev->dev)))
- goto err_prog_firmware;
+-static void rt2500pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
++static void rt2500pci_init_txentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
+ {
+- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
+- struct data_desc *txd;
+- unsigned int i;
++ __le32 *txd = entry->priv;
+ u32 word;
- cardp->priv = priv;
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
-
-- if (libertas_add_mesh(priv, &udev->dev))
-- goto err_add_mesh;
+- for (i = 0; i < ring->stats.limit; i++) {
+- txd = ring->entry[i].priv;
-
-- cardp->eth_dev = priv->dev;
-+ cardp->priv->fw_ready = 1;
+- rt2x00_desc_read(txd, 1, &word);
+- rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(txd, 1, word);
+-
+- rt2x00_desc_read(txd, 0, &word);
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
+- rt2x00_desc_write(txd, 0, word);
+- }
++ rt2x00_desc_read(txd, 1, &word);
++ rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS, entry->data_dma);
++ rt2x00_desc_write(txd, 1, word);
- priv->hw_host_to_card = if_usb_host_to_card;
- priv->hw_get_int_status = if_usb_get_int_status;
- priv->hw_read_event_cause = if_usb_read_event_cause;
-- priv->boot2_version = udev->descriptor.bcdDevice;
-+ cardp->boot2_version = udev->descriptor.bcdDevice;
+- rt2x00_ring_index_clear(ring);
++ rt2x00_desc_read(txd, 0, &word);
++ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
++ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
++ rt2x00_desc_write(txd, 0, word);
+ }
-- /* Delay 200 ms to waiting for the FW ready */
- if_usb_submit_rx_urb(cardp);
-- msleep_interruptible(200);
-- priv->adapter->fw_ready = 1;
+ static int rt2500pci_init_rings(struct rt2x00_dev *rt2x00dev)
+@@ -747,15 +719,6 @@ static int rt2500pci_init_rings(struct rt2x00_dev *rt2x00dev)
+ u32 reg;
-- if (libertas_start_card(priv))
-+ if (lbs_start_card(priv))
- goto err_start_card;
+ /*
+- * Initialize rings.
+- */
+- rt2500pci_init_rxring(rt2x00dev);
+- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
+- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
+- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_AFTER_BEACON);
+- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
+-
+- /*
+ * Initialize registers.
+ */
+ rt2x00pci_register_read(rt2x00dev, TXCSR2, &reg);
+@@ -1170,12 +1133,12 @@ static int rt2500pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ * TX descriptor initialization
+ */
+ static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
++ struct sk_buff *skb,
+ struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+ struct ieee80211_tx_control *control)
+ {
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ __le32 *txd = skbdesc->desc;
+ u32 word;
-- list_add_tail(&cardp->list, &usb_devices);
-+ if_usb_setup_firmware(priv);
+ /*
+@@ -1206,7 +1169,7 @@ static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
+ test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_ACK,
+- !(control->flags & IEEE80211_TXCTL_NO_ACK));
++ test_bit(ENTRY_TXD_ACK, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
+ test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_OFDM,
+@@ -1216,7 +1179,7 @@ static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_RETRY_MODE,
+ !!(control->flags &
+ IEEE80211_TXCTL_LONG_RETRY_LIMIT));
+- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
++ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
+ rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
+ rt2x00_desc_write(txd, 0, word);
+ }
+@@ -1239,12 +1202,12 @@ static void rt2500pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+ }
- usb_get_dev(udev);
- usb_set_intfdata(intf, cardp);
-@@ -253,9 +250,7 @@ static int if_usb_probe(struct usb_interface *intf,
- return 0;
+ rt2x00pci_register_read(rt2x00dev, TXCSR0, &reg);
+- if (queue == IEEE80211_TX_QUEUE_DATA0)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA1)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_TX, 1);
+- else if (queue == IEEE80211_TX_QUEUE_AFTER_BEACON)
+- rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM, 1);
++ rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO,
++ (queue == IEEE80211_TX_QUEUE_DATA0));
++ rt2x00_set_field32(&reg, TXCSR0_KICK_TX,
++ (queue == IEEE80211_TX_QUEUE_DATA1));
++ rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM,
++ (queue == IEEE80211_TX_QUEUE_AFTER_BEACON));
+ rt2x00pci_register_write(rt2x00dev, TXCSR0, reg);
+ }
- err_start_card:
-- libertas_remove_mesh(priv);
--err_add_mesh:
-- libertas_remove_card(priv);
-+ lbs_remove_card(priv);
- err_prog_firmware:
- if_usb_reset_device(cardp);
- dealloc:
-@@ -272,23 +267,17 @@ error:
- */
- static void if_usb_disconnect(struct usb_interface *intf)
+@@ -1254,7 +1217,7 @@ static void rt2500pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+ static void rt2500pci_fill_rxdone(struct data_entry *entry,
+ struct rxdata_entry_desc *desc)
{
-- struct usb_card_rec *cardp = usb_get_intfdata(intf);
-- wlan_private *priv = (wlan_private *) cardp->priv;
-+ struct if_usb_card *cardp = usb_get_intfdata(intf);
-+ struct lbs_private *priv = (struct lbs_private *) cardp->priv;
+- struct data_desc *rxd = entry->priv;
++ __le32 *rxd = entry->priv;
+ u32 word0;
+ u32 word2;
- lbs_deb_enter(LBS_DEB_MAIN);
+@@ -1272,6 +1235,7 @@ static void rt2500pci_fill_rxdone(struct data_entry *entry,
+ entry->ring->rt2x00dev->rssi_offset;
+ desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
+ desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
++ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
+ }
-- /* Update Surprise removed to TRUE */
- cardp->surprise_removed = 1;
+ /*
+@@ -1281,7 +1245,7 @@ static void rt2500pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
+ {
+ struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
+ struct data_entry *entry;
+- struct data_desc *txd;
++ __le32 *txd;
+ u32 word;
+ int tx_status;
+ int retry;
+@@ -1301,26 +1265,8 @@ static void rt2500pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
+ tx_status = rt2x00_get_field32(word, TXD_W0_RESULT);
+ retry = rt2x00_get_field32(word, TXD_W0_RETRY_COUNT);
-- list_del(&cardp->list);
--
- if (priv) {
-- wlan_adapter *adapter = priv->adapter;
+- rt2x00lib_txdone(entry, tx_status, retry);
-
-- adapter->surpriseremoved = 1;
-- libertas_stop_card(priv);
-- libertas_remove_mesh(priv);
-- libertas_remove_card(priv);
-+ priv->surpriseremoved = 1;
-+ lbs_stop_card(priv);
-+ lbs_remove_card(priv);
+- /*
+- * Make this entry available for reuse.
+- */
+- entry->flags = 0;
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_desc_write(txd, 0, word);
+- rt2x00_ring_index_done_inc(ring);
++ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
}
+-
+- /*
+- * If the data ring was full before the txdone handler
+- * we must make sure the packet queue in the mac80211 stack
+- * is reenabled when the txdone handler has finished.
+- */
+- entry = ring->entry;
+- if (!rt2x00_ring_full(ring))
+- ieee80211_wake_queue(rt2x00dev->hw,
+- entry->tx_status.control.queue);
+ }
- /* Unlink and free urb */
-@@ -302,102 +291,82 @@ static void if_usb_disconnect(struct usb_interface *intf)
+ static irqreturn_t rt2500pci_interrupt(int irq, void *dev_instance)
+@@ -1420,9 +1366,12 @@ static int rt2500pci_validate_eeprom(struct rt2x00_dev *rt2x00dev)
+ rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
+ if (word == 0xffff) {
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 0);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 0);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE, 0);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
++ ANTENNA_SW_DIVERSITY);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
++ ANTENNA_SW_DIVERSITY);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE,
++ LED_MODE_DEFAULT);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RF_TYPE, RF2522);
+@@ -1481,9 +1430,9 @@ static int rt2500pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Identify default antenna configuration.
+ */
+- rt2x00dev->hw->conf.antenna_sel_tx =
++ rt2x00dev->default_ant.tx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
+- rt2x00dev->hw->conf.antenna_sel_rx =
++ rt2x00dev->default_ant.rx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
- /**
- * @brief This function download FW
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @return 0
- */
--static int if_prog_firmware(struct usb_card_rec *cardp)
-+static int if_usb_send_fw_pkt(struct if_usb_card *cardp)
+ /*
+@@ -1774,7 +1723,6 @@ static void rt2500pci_configure_filter(struct ieee80211_hw *hw,
+ struct dev_addr_list *mc_list)
{
-- struct FWData *fwdata;
-- struct fwheader *fwheader;
-- u8 *firmware = cardp->fw->data;
--
-- fwdata = kmalloc(sizeof(struct FWData), GFP_ATOMIC);
--
-- if (!fwdata)
-- return -1;
--
-- fwheader = &fwdata->fwheader;
-+ struct fwdata *fwdata = cardp->ep_out_buf;
-+ uint8_t *firmware = cardp->fw->data;
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct interface *intf = &rt2x00dev->interface;
+ u32 reg;
-+ /* If we got a CRC failure on the last block, back
-+ up and retry it */
- if (!cardp->CRC_OK) {
- cardp->totalbytes = cardp->fwlastblksent;
-- cardp->fwseqnum = cardp->lastseqnum - 1;
-+ cardp->fwseqnum--;
- }
+ /*
+@@ -1793,22 +1741,19 @@ static void rt2500pci_configure_filter(struct ieee80211_hw *hw,
+ * Apply some rules to the filters:
+ * - Some filters imply different filters to be set.
+ * - Some things we can't filter out at all.
+- * - Some filters are set based on interface type.
+ */
+ if (mc_count)
+ *total_flags |= FIF_ALLMULTI;
+ if (*total_flags & FIF_OTHER_BSS ||
+ *total_flags & FIF_PROMISC_IN_BSS)
+ *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
+- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
+- *total_flags |= FIF_PROMISC_IN_BSS;
-- /*
-- lbs_deb_usbd(&cardp->udev->dev, "totalbytes = %d\n",
-- cardp->totalbytes);
-- */
-+ lbs_deb_usb2(&cardp->udev->dev, "totalbytes = %d\n",
-+ cardp->totalbytes);
+ /*
+ * Check if there is any work left for us.
+ */
+- if (intf->filter == *total_flags)
++ if (rt2x00dev->packet_filter == *total_flags)
+ return;
+- intf->filter = *total_flags;
++ rt2x00dev->packet_filter = *total_flags;
-- memcpy(fwheader, &firmware[cardp->totalbytes],
-+ /* struct fwdata (which we sent to the card) has an
-+ extra __le32 field in between the header and the data,
-+ which is not in the struct fwheader in the actual
-+ firmware binary. Insert the seqnum in the middle... */
-+ memcpy(&fwdata->hdr, &firmware[cardp->totalbytes],
- sizeof(struct fwheader));
+ /*
+ * Start configuration steps.
+@@ -1890,7 +1835,7 @@ static const struct ieee80211_ops rt2500pci_mac80211_ops = {
+ .configure_filter = rt2500pci_configure_filter,
+ .get_stats = rt2x00mac_get_stats,
+ .set_retry_limit = rt2500pci_set_retry_limit,
+- .erp_ie_changed = rt2x00mac_erp_ie_changed,
++ .bss_info_changed = rt2x00mac_bss_info_changed,
+ .conf_tx = rt2x00mac_conf_tx,
+ .get_tx_stats = rt2x00mac_get_tx_stats,
+ .get_tsf = rt2500pci_get_tsf,
+@@ -1904,6 +1849,8 @@ static const struct rt2x00lib_ops rt2500pci_rt2x00_ops = {
+ .probe_hw = rt2500pci_probe_hw,
+ .initialize = rt2x00pci_initialize,
+ .uninitialize = rt2x00pci_uninitialize,
++ .init_rxentry = rt2500pci_init_rxentry,
++ .init_txentry = rt2500pci_init_txentry,
+ .set_device_state = rt2500pci_set_device_state,
+ .rfkill_poll = rt2500pci_rfkill_poll,
+ .link_stats = rt2500pci_link_stats,
+@@ -1921,7 +1868,7 @@ static const struct rt2x00lib_ops rt2500pci_rt2x00_ops = {
+ };
- cardp->fwlastblksent = cardp->totalbytes;
- cardp->totalbytes += sizeof(struct fwheader);
+ static const struct rt2x00_ops rt2500pci_ops = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .rxd_size = RXD_DESC_SIZE,
+ .txd_size = TXD_DESC_SIZE,
+ .eeprom_size = EEPROM_SIZE,
+@@ -1949,7 +1896,7 @@ MODULE_DEVICE_TABLE(pci, rt2500pci_device_table);
+ MODULE_LICENSE("GPL");
-- /* lbs_deb_usbd(&cardp->udev->dev,"Copy Data\n"); */
- memcpy(fwdata->data, &firmware[cardp->totalbytes],
-- le32_to_cpu(fwdata->fwheader.datalength));
-+ le32_to_cpu(fwdata->hdr.datalength));
+ static struct pci_driver rt2500pci_driver = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .id_table = rt2500pci_device_table,
+ .probe = rt2x00pci_probe,
+ .remove = __devexit_p(rt2x00pci_remove),
+diff --git a/drivers/net/wireless/rt2x00/rt2500pci.h b/drivers/net/wireless/rt2x00/rt2500pci.h
+index d92aa56..92ba090 100644
+--- a/drivers/net/wireless/rt2x00/rt2500pci.h
++++ b/drivers/net/wireless/rt2x00/rt2500pci.h
+@@ -1082,8 +1082,8 @@
+ /*
+ * DMA descriptor defines.
+ */
+-#define TXD_DESC_SIZE ( 11 * sizeof(struct data_desc) )
+-#define RXD_DESC_SIZE ( 11 * sizeof(struct data_desc) )
++#define TXD_DESC_SIZE ( 11 * sizeof(__le32) )
++#define RXD_DESC_SIZE ( 11 * sizeof(__le32) )
-- /*
-- lbs_deb_usbd(&cardp->udev->dev,
-- "Data length = %d\n", le32_to_cpu(fwdata->fwheader.datalength));
-- */
-+ lbs_deb_usb2(&cardp->udev->dev, "Data length = %d\n",
-+ le32_to_cpu(fwdata->hdr.datalength));
+ /*
+ * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
+diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c
+index 18b1f91..86ded40 100644
+--- a/drivers/net/wireless/rt2x00/rt2500usb.c
++++ b/drivers/net/wireless/rt2x00/rt2500usb.c
+@@ -24,11 +24,6 @@
+ Supported chipsets: RT2570.
+ */
-- cardp->fwseqnum = cardp->fwseqnum + 1;
-+ fwdata->seqnum = cpu_to_le32(++cardp->fwseqnum);
-+ cardp->totalbytes += le32_to_cpu(fwdata->hdr.datalength);
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2500usb"
+-
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+@@ -52,8 +47,10 @@
+ * between each attempt. When the busy bit is still set at that time,
+ * the access attempt is considered to have failed,
+ * and we will print an error.
++ * If the usb_cache_mutex is already held then the _lock variants must
++ * be used instead.
+ */
+-static inline void rt2500usb_register_read(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2500usb_register_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ u16 *value)
+ {
+@@ -64,8 +61,18 @@ static inline void rt2500usb_register_read(const struct rt2x00_dev *rt2x00dev,
+ *value = le16_to_cpu(reg);
+ }
-- fwdata->seqnum = cpu_to_le32(cardp->fwseqnum);
-- cardp->lastseqnum = cardp->fwseqnum;
-- cardp->totalbytes += le32_to_cpu(fwdata->fwheader.datalength);
-+ usb_tx_block(cardp, cardp->ep_out_buf, sizeof(struct fwdata) +
-+ le32_to_cpu(fwdata->hdr.datalength));
+-static inline void rt2500usb_register_multiread(const struct rt2x00_dev
+- *rt2x00dev,
++static inline void rt2500usb_register_read_lock(struct rt2x00_dev *rt2x00dev,
++ const unsigned int offset,
++ u16 *value)
++{
++ __le16 reg;
++ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_READ,
++ USB_VENDOR_REQUEST_IN, offset,
++ &reg, sizeof(u16), REGISTER_TIMEOUT);
++ *value = le16_to_cpu(reg);
++}
+
-+ if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_DATA_TO_RECV)) {
-+ lbs_deb_usb2(&cardp->udev->dev, "There are data to follow\n");
-+ lbs_deb_usb2(&cardp->udev->dev, "seqnum = %d totalbytes = %d\n",
-+ cardp->fwseqnum, cardp->totalbytes);
-+ } else if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
-+ lbs_deb_usb2(&cardp->udev->dev, "Host has finished FW downloading\n");
-+ lbs_deb_usb2(&cardp->udev->dev, "Donwloading FW JUMP BLOCK\n");
-
-- if (fwheader->dnldcmd == cpu_to_le32(FW_HAS_DATA_TO_RECV)) {
-- /*
-- lbs_deb_usbd(&cardp->udev->dev, "There are data to follow\n");
-- lbs_deb_usbd(&cardp->udev->dev,
-- "seqnum = %d totalbytes = %d\n", cardp->fwseqnum,
-- cardp->totalbytes);
-- */
-- memcpy(cardp->bulk_out_buffer, fwheader, FW_DATA_XMIT_SIZE);
-- usb_tx_block(cardp, cardp->bulk_out_buffer, FW_DATA_XMIT_SIZE);
--
-- } else if (fwdata->fwheader.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
-- /*
-- lbs_deb_usbd(&cardp->udev->dev,
-- "Host has finished FW downloading\n");
-- lbs_deb_usbd(&cardp->udev->dev,
-- "Donwloading FW JUMP BLOCK\n");
-- */
-- memcpy(cardp->bulk_out_buffer, fwheader, FW_DATA_XMIT_SIZE);
-- usb_tx_block(cardp, cardp->bulk_out_buffer, FW_DATA_XMIT_SIZE);
- cardp->fwfinalblk = 1;
- }
++static inline void rt2500usb_register_multiread(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ void *value, const u16 length)
+ {
+@@ -75,7 +82,7 @@ static inline void rt2500usb_register_multiread(const struct rt2x00_dev
+ value, length, timeout);
+ }
-- /*
-- lbs_deb_usbd(&cardp->udev->dev,
-- "The firmware download is done size is %d\n",
-- cardp->totalbytes);
-- */
--
-- kfree(fwdata);
-+ lbs_deb_usb2(&cardp->udev->dev, "Firmware download done; size %d\n",
-+ cardp->totalbytes);
+-static inline void rt2500usb_register_write(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2500usb_register_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ u16 value)
+ {
+@@ -85,8 +92,17 @@ static inline void rt2500usb_register_write(const struct rt2x00_dev *rt2x00dev,
+ &reg, sizeof(u16), REGISTER_TIMEOUT);
+ }
- return 0;
+-static inline void rt2500usb_register_multiwrite(const struct rt2x00_dev
+- *rt2x00dev,
++static inline void rt2500usb_register_write_lock(struct rt2x00_dev *rt2x00dev,
++ const unsigned int offset,
++ u16 value)
++{
++ __le16 reg = cpu_to_le16(value);
++ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_WRITE,
++ USB_VENDOR_REQUEST_OUT, offset,
++ &reg, sizeof(u16), REGISTER_TIMEOUT);
++}
++
++static inline void rt2500usb_register_multiwrite(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ void *value, const u16 length)
+ {
+@@ -96,13 +112,13 @@ static inline void rt2500usb_register_multiwrite(const struct rt2x00_dev
+ value, length, timeout);
}
--static int if_usb_reset_device(struct usb_card_rec *cardp)
-+static int if_usb_reset_device(struct if_usb_card *cardp)
+-static u16 rt2500usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
++static u16 rt2500usb_bbp_check(struct rt2x00_dev *rt2x00dev)
{
-+ struct cmd_ds_command *cmd = cardp->ep_out_buf + 4;
- int ret;
-- wlan_private * priv = cardp->priv;
+ u16 reg;
+ unsigned int i;
- lbs_deb_enter(LBS_DEB_USB);
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+- rt2500usb_register_read(rt2x00dev, PHY_CSR8, &reg);
++ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR8, &reg);
+ if (!rt2x00_get_field16(reg, PHY_CSR8_BUSY))
+ break;
+ udelay(REGISTER_BUSY_DELAY);
+@@ -111,17 +127,20 @@ static u16 rt2500usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
+ return reg;
+ }
-- /* Try a USB port reset first, if that fails send the reset
-- * command to the firmware.
-- */
-+ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_REQUEST);
-+
-+ cmd->command = cpu_to_le16(CMD_802_11_RESET);
-+ cmd->size = cpu_to_le16(sizeof(struct cmd_ds_802_11_reset) + S_DS_GEN);
-+ cmd->result = cpu_to_le16(0);
-+ cmd->seqnum = cpu_to_le16(0x5a5a);
-+ cmd->params.reset.action = cpu_to_le16(CMD_ACT_HALT);
-+ usb_tx_block(cardp, cardp->ep_out_buf, 4 + S_DS_GEN + sizeof(struct cmd_ds_802_11_reset));
+-static void rt2500usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2500usb_bbp_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u8 value)
+ {
+ u16 reg;
+
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
+
-+ msleep(100);
- ret = usb_reset_device(cardp->udev);
-- if (!ret && priv) {
-- msleep(10);
-- ret = libertas_reset_device(priv);
-- msleep(10);
-- }
-+ msleep(100);
+ /*
+ * Wait until the BBP becomes ready.
+ */
+ reg = rt2500usb_bbp_check(rt2x00dev);
+ if (rt2x00_get_field16(reg, PHY_CSR8_BUSY)) {
+ ERROR(rt2x00dev, "PHY_CSR8 register busy. Write failed.\n");
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ return;
+ }
- lbs_deb_leave_args(LBS_DEB_USB, "ret %d", ret);
+@@ -133,14 +152,18 @@ static void rt2500usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field16(&reg, PHY_CSR7_REG_ID, word);
+ rt2x00_set_field16(&reg, PHY_CSR7_READ_CONTROL, 0);
-@@ -406,12 +375,12 @@ static int if_usb_reset_device(struct usb_card_rec *cardp)
+- rt2500usb_register_write(rt2x00dev, PHY_CSR7, reg);
++ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR7, reg);
++
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ }
- /**
- * @brief This function transfer the data to the device.
-- * @param priv pointer to wlan_private
-+ * @param priv pointer to struct lbs_private
- * @param payload pointer to payload data
- * @param nb data length
- * @return 0 or -1
- */
--static int usb_tx_block(struct usb_card_rec *cardp, u8 * payload, u16 nb)
-+static int usb_tx_block(struct if_usb_card *cardp, uint8_t *payload, uint16_t nb)
+-static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
++static void rt2500usb_bbp_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u8 *value)
{
- int ret = -1;
-
-@@ -423,17 +392,16 @@ static int usb_tx_block(struct usb_card_rec *cardp, u8 * payload, u16 nb)
+ u16 reg;
- usb_fill_bulk_urb(cardp->tx_urb, cardp->udev,
- usb_sndbulkpipe(cardp->udev,
-- cardp->bulk_out_endpointAddr),
-+ cardp->ep_out),
- payload, nb, if_usb_write_bulk_callback, cardp);
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
++
+ /*
+ * Wait until the BBP becomes ready.
+ */
+@@ -157,7 +180,7 @@ static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field16(&reg, PHY_CSR7_REG_ID, word);
+ rt2x00_set_field16(&reg, PHY_CSR7_READ_CONTROL, 1);
- cardp->tx_urb->transfer_flags |= URB_ZERO_PACKET;
+- rt2500usb_register_write(rt2x00dev, PHY_CSR7, reg);
++ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR7, reg);
- if ((ret = usb_submit_urb(cardp->tx_urb, GFP_ATOMIC))) {
-- /* transfer failed */
-- lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb failed\n");
-+ lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb failed: %d\n", ret);
- ret = -1;
- } else {
-- /* lbs_deb_usbd(&cardp->udev->dev, "usb_submit_urb success\n"); */
-+ lbs_deb_usb2(&cardp->udev->dev, "usb_submit_urb success\n");
- ret = 0;
+ /*
+ * Wait until the BBP becomes ready.
+@@ -166,14 +189,17 @@ static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ if (rt2x00_get_field16(reg, PHY_CSR8_BUSY)) {
+ ERROR(rt2x00dev, "PHY_CSR8 register busy. Read failed.\n");
+ *value = 0xff;
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ return;
}
-@@ -441,11 +409,10 @@ tx_ret:
- return ret;
+- rt2500usb_register_read(rt2x00dev, PHY_CSR7, &reg);
++ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR7, &reg);
+ *value = rt2x00_get_field16(reg, PHY_CSR7_DATA);
++
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
}
--static int __if_usb_submit_rx_urb(struct usb_card_rec *cardp,
-+static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
- void (*callbackfn)(struct urb *urb))
+-static void rt2500usb_rf_write(const struct rt2x00_dev *rt2x00dev,
++static void rt2500usb_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u32 value)
{
- struct sk_buff *skb;
-- struct read_cb_info *rinfo = &cardp->rinfo;
- int ret = -1;
+ u16 reg;
+@@ -182,20 +208,23 @@ static void rt2500usb_rf_write(const struct rt2x00_dev *rt2x00dev,
+ if (!word)
+ return;
- if (!(skb = dev_alloc_skb(MRVDRV_ETH_RX_PACKET_BUFFER_SIZE))) {
-@@ -453,25 +420,25 @@ static int __if_usb_submit_rx_urb(struct usb_card_rec *cardp,
- goto rx_ret;
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
++
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+- rt2500usb_register_read(rt2x00dev, PHY_CSR10, &reg);
++ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR10, &reg);
+ if (!rt2x00_get_field16(reg, PHY_CSR10_RF_BUSY))
+ goto rf_write;
+ udelay(REGISTER_BUSY_DELAY);
}
-- rinfo->skb = skb;
-+ cardp->rx_skb = skb;
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ ERROR(rt2x00dev, "PHY_CSR10 register busy. Write failed.\n");
+ return;
- /* Fill the receive configuration URB and initialise the Rx call back */
- usb_fill_bulk_urb(cardp->rx_urb, cardp->udev,
-- usb_rcvbulkpipe(cardp->udev,
-- cardp->bulk_in_endpointAddr),
-+ usb_rcvbulkpipe(cardp->udev, cardp->ep_in),
- (void *) (skb->tail + (size_t) IPFIELD_ALIGN_OFFSET),
- MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
-- rinfo);
-+ cardp);
+ rf_write:
+ reg = 0;
+ rt2x00_set_field16(&reg, PHY_CSR9_RF_VALUE, value);
+- rt2500usb_register_write(rt2x00dev, PHY_CSR9, reg);
++ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR9, reg);
- cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+ reg = 0;
+ rt2x00_set_field16(&reg, PHY_CSR10_RF_VALUE, value >> 16);
+@@ -203,20 +232,22 @@ rf_write:
+ rt2x00_set_field16(&reg, PHY_CSR10_RF_IF_SELECT, 0);
+ rt2x00_set_field16(&reg, PHY_CSR10_RF_BUSY, 1);
-- /* lbs_deb_usbd(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb); */
-+ lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
- if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
-- /* handle failure conditions */
-- lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed\n");
-+ lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
-+ kfree_skb(skb);
-+ cardp->rx_skb = NULL;
- ret = -1;
- } else {
-- /* lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB success\n"); */
-+ lbs_deb_usb2(&cardp->udev->dev, "Submit Rx URB success\n");
- ret = 0;
- }
+- rt2500usb_register_write(rt2x00dev, PHY_CSR10, reg);
++ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR10, reg);
+ rt2x00_rf_write(rt2x00dev, word, value);
++
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ }
-@@ -479,58 +446,78 @@ rx_ret:
- return ret;
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u16)) )
+
+-static void rt2500usb_read_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2500usb_read_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ rt2500usb_register_read(rt2x00dev, CSR_OFFSET(word), (u16 *) data);
}
--static int if_usb_submit_rx_urb_fwload(struct usb_card_rec *cardp)
-+static int if_usb_submit_rx_urb_fwload(struct if_usb_card *cardp)
+-static void rt2500usb_write_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt2500usb_write_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
{
- return __if_usb_submit_rx_urb(cardp, &if_usb_receive_fwload);
+ rt2500usb_register_write(rt2x00dev, CSR_OFFSET(word), data);
+@@ -296,7 +327,8 @@ static void rt2500usb_config_type(struct rt2x00_dev *rt2x00dev, const int type,
+
+ rt2500usb_register_read(rt2x00dev, TXRX_CSR19, &reg);
+ rt2x00_set_field16(&reg, TXRX_CSR19_TSF_COUNT, 1);
+- rt2x00_set_field16(&reg, TXRX_CSR19_TBCN, 1);
++ rt2x00_set_field16(&reg, TXRX_CSR19_TBCN,
++ (tsf_sync == TSF_SYNC_BEACON));
+ rt2x00_set_field16(&reg, TXRX_CSR19_BEACON_GEN, 0);
+ rt2x00_set_field16(&reg, TXRX_CSR19_TSF_SYNC, tsf_sync);
+ rt2500usb_register_write(rt2x00dev, TXRX_CSR19, reg);
+@@ -385,7 +417,7 @@ static void rt2500usb_config_txpower(struct rt2x00_dev *rt2x00dev,
}
--static int if_usb_submit_rx_urb(struct usb_card_rec *cardp)
-+static int if_usb_submit_rx_urb(struct if_usb_card *cardp)
+ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx, const int antenna_rx)
++ struct antenna_setup *ant)
{
- return __if_usb_submit_rx_urb(cardp, &if_usb_receive);
+ u8 r2;
+ u8 r14;
+@@ -400,8 +432,7 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the TX antenna.
+ */
+- switch (antenna_tx) {
+- case ANTENNA_SW_DIVERSITY:
++ switch (ant->tx) {
+ case ANTENNA_HW_DIVERSITY:
+ rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 1);
+ rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 1);
+@@ -412,6 +443,13 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 0);
+ rt2x00_set_field16(&csr6, PHY_CSR6_OFDM, 0);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitely. However we are nog going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
+ rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 2);
+@@ -422,14 +460,20 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Configure the RX antenna.
+ */
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+ rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 1);
+ break;
+ case ANTENNA_A:
+ rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 0);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitely. However we are nog going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+ rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
+ break;
+@@ -487,9 +531,7 @@ static void rt2500usb_config(struct rt2x00_dev *rt2x00dev,
+ rt2500usb_config_txpower(rt2x00dev,
+ libconf->conf->power_level);
+ if (flags & CONFIG_UPDATE_ANTENNA)
+- rt2500usb_config_antenna(rt2x00dev,
+- libconf->conf->antenna_sel_tx,
+- libconf->conf->antenna_sel_rx);
++ rt2500usb_config_antenna(rt2x00dev, &libconf->ant);
+ if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
+ rt2500usb_config_duration(rt2x00dev, libconf);
}
+@@ -507,18 +549,10 @@ static void rt2500usb_enable_led(struct rt2x00_dev *rt2x00dev)
+ rt2500usb_register_write(rt2x00dev, MAC_CSR21, reg);
- static void if_usb_receive_fwload(struct urb *urb)
+ rt2500usb_register_read(rt2x00dev, MAC_CSR20, &reg);
+-
+- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
+- rt2x00_set_field16(&reg, MAC_CSR20_LINK, 1);
+- rt2x00_set_field16(&reg, MAC_CSR20_ACTIVITY, 0);
+- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
+- rt2x00_set_field16(&reg, MAC_CSR20_LINK, 0);
+- rt2x00_set_field16(&reg, MAC_CSR20_ACTIVITY, 1);
+- } else {
+- rt2x00_set_field16(&reg, MAC_CSR20_LINK, 1);
+- rt2x00_set_field16(&reg, MAC_CSR20_ACTIVITY, 1);
+- }
+-
++ rt2x00_set_field16(&reg, MAC_CSR20_LINK,
++ (rt2x00dev->led_mode != LED_MODE_ASUS));
++ rt2x00_set_field16(&reg, MAC_CSR20_ACTIVITY,
++ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
+ rt2500usb_register_write(rt2x00dev, MAC_CSR20, reg);
+ }
+
+@@ -535,7 +569,8 @@ static void rt2500usb_disable_led(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Link tuning
+ */
+-static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev)
++static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual)
{
-- struct read_cb_info *rinfo = (struct read_cb_info *)urb->context;
-- struct sk_buff *skb = rinfo->skb;
-- struct usb_card_rec *cardp = (struct usb_card_rec *)rinfo->cardp;
-+ struct if_usb_card *cardp = urb->context;
-+ struct sk_buff *skb = cardp->rx_skb;
- struct fwsyncheader *syncfwheader;
-- struct bootcmdrespStr bootcmdresp;
-+ struct bootcmdresp bootcmdresp;
+ u16 reg;
- if (urb->status) {
- lbs_deb_usbd(&cardp->udev->dev,
-- "URB status is failed during fw load\n");
-+ "URB status is failed during fw load\n");
- kfree_skb(skb);
- return;
- }
+@@ -543,14 +578,13 @@ static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev)
+ * Update FCS error count from register.
+ */
+ rt2500usb_register_read(rt2x00dev, STA_CSR0, &reg);
+- rt2x00dev->link.rx_failed = rt2x00_get_field16(reg, STA_CSR0_FCS_ERROR);
++ qual->rx_failed = rt2x00_get_field16(reg, STA_CSR0_FCS_ERROR);
-- if (cardp->bootcmdresp == 0) {
-+ if (cardp->fwdnldover) {
-+ __le32 *tmp = (__le32 *)(skb->data + IPFIELD_ALIGN_OFFSET);
-+
-+ if (tmp[0] == cpu_to_le32(CMD_TYPE_INDICATION) &&
-+ tmp[1] == cpu_to_le32(MACREG_INT_CODE_FIRMWARE_READY)) {
-+ lbs_pr_info("Firmware ready event received\n");
-+ wake_up(&cardp->fw_wq);
-+ } else {
-+ lbs_deb_usb("Waiting for confirmation; got %x %x\n",
-+ le32_to_cpu(tmp[0]), le32_to_cpu(tmp[1]));
-+ if_usb_submit_rx_urb_fwload(cardp);
-+ }
-+ kfree_skb(skb);
-+ return;
-+ }
-+ if (cardp->bootcmdresp <= 0) {
- memcpy (&bootcmdresp, skb->data + IPFIELD_ALIGN_OFFSET,
- sizeof(bootcmdresp));
-+
- if (le16_to_cpu(cardp->udev->descriptor.bcdDevice) < 0x3106) {
- kfree_skb(skb);
- if_usb_submit_rx_urb_fwload(cardp);
- cardp->bootcmdresp = 1;
- lbs_deb_usbd(&cardp->udev->dev,
-- "Received valid boot command response\n");
-+ "Received valid boot command response\n");
- return;
- }
-- if (bootcmdresp.u32magicnumber != cpu_to_le32(BOOT_CMD_MAGIC_NUMBER)) {
-- lbs_pr_info(
-- "boot cmd response wrong magic number (0x%x)\n",
-- le32_to_cpu(bootcmdresp.u32magicnumber));
-- } else if (bootcmdresp.u8cmd_tag != BOOT_CMD_FW_BY_USB) {
-- lbs_pr_info(
-- "boot cmd response cmd_tag error (%d)\n",
-- bootcmdresp.u8cmd_tag);
-- } else if (bootcmdresp.u8result != BOOT_CMD_RESP_OK) {
-- lbs_pr_info(
-- "boot cmd response result error (%d)\n",
-- bootcmdresp.u8result);
-+ if (bootcmdresp.magic != cpu_to_le32(BOOT_CMD_MAGIC_NUMBER)) {
-+ if (bootcmdresp.magic == cpu_to_le32(CMD_TYPE_REQUEST) ||
-+ bootcmdresp.magic == cpu_to_le32(CMD_TYPE_DATA) ||
-+ bootcmdresp.magic == cpu_to_le32(CMD_TYPE_INDICATION)) {
-+ if (!cardp->bootcmdresp)
-+ lbs_pr_info("Firmware already seems alive; resetting\n");
-+ cardp->bootcmdresp = -1;
-+ } else {
-+ lbs_pr_info("boot cmd response wrong magic number (0x%x)\n",
-+ le32_to_cpu(bootcmdresp.magic));
-+ }
-+ } else if (bootcmdresp.cmd != BOOT_CMD_FW_BY_USB) {
-+ lbs_pr_info("boot cmd response cmd_tag error (%d)\n",
-+ bootcmdresp.cmd);
-+ } else if (bootcmdresp.result != BOOT_CMD_RESP_OK) {
-+ lbs_pr_info("boot cmd response result error (%d)\n",
-+ bootcmdresp.result);
- } else {
- cardp->bootcmdresp = 1;
- lbs_deb_usbd(&cardp->udev->dev,
-- "Received valid boot command response\n");
-+ "Received valid boot command response\n");
- }
- kfree_skb(skb);
- if_usb_submit_rx_urb_fwload(cardp);
-@@ -545,50 +532,47 @@ static void if_usb_receive_fwload(struct urb *urb)
- }
+ /*
+ * Update False CCA count from register.
+ */
+ rt2500usb_register_read(rt2x00dev, STA_CSR3, &reg);
+- rt2x00dev->link.false_cca =
+- rt2x00_get_field16(reg, STA_CSR3_FALSE_CCA_ERROR);
++ qual->false_cca = rt2x00_get_field16(reg, STA_CSR3_FALSE_CCA_ERROR);
+ }
- memcpy(syncfwheader, skb->data + IPFIELD_ALIGN_OFFSET,
-- sizeof(struct fwsyncheader));
-+ sizeof(struct fwsyncheader));
+ static void rt2500usb_reset_tuner(struct rt2x00_dev *rt2x00dev)
+@@ -673,10 +707,10 @@ static void rt2500usb_link_tuner(struct rt2x00_dev *rt2x00dev)
+ if (r17 > up_bound) {
+ rt2500usb_bbp_write(rt2x00dev, 17, up_bound);
+ rt2x00dev->link.vgc_level = up_bound;
+- } else if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
++ } else if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
+ rt2500usb_bbp_write(rt2x00dev, 17, ++r17);
+ rt2x00dev->link.vgc_level = r17;
+- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
++ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
+ rt2500usb_bbp_write(rt2x00dev, 17, --r17);
+ rt2x00dev->link.vgc_level = r17;
+ }
+@@ -755,9 +789,11 @@ static int rt2500usb_init_registers(struct rt2x00_dev *rt2x00dev)
- if (!syncfwheader->cmd) {
-- /*
-- lbs_deb_usbd(&cardp->udev->dev,
-- "FW received Blk with correct CRC\n");
-- lbs_deb_usbd(&cardp->udev->dev,
-- "FW received Blk seqnum = %d\n",
-- syncfwheader->seqnum);
-- */
-+ lbs_deb_usb2(&cardp->udev->dev, "FW received Blk with correct CRC\n");
-+ lbs_deb_usb2(&cardp->udev->dev, "FW received Blk seqnum = %d\n",
-+ le32_to_cpu(syncfwheader->seqnum));
- cardp->CRC_OK = 1;
+ if (rt2x00_rev(&rt2x00dev->chip) >= RT2570_VERSION_C) {
+ rt2500usb_register_read(rt2x00dev, PHY_CSR2, &reg);
+- reg &= ~0x0002;
++ rt2x00_set_field16(&reg, PHY_CSR2_LNA, 0);
} else {
-- lbs_deb_usbd(&cardp->udev->dev,
-- "FW received Blk with CRC error\n");
-+ lbs_deb_usbd(&cardp->udev->dev, "FW received Blk with CRC error\n");
- cardp->CRC_OK = 0;
+- reg = 0x3002;
++ reg = 0;
++ rt2x00_set_field16(&reg, PHY_CSR2_LNA, 1);
++ rt2x00_set_field16(&reg, PHY_CSR2_LNA_MODE, 3);
}
+ rt2500usb_register_write(rt2x00dev, PHY_CSR2, reg);
- kfree_skb(skb);
-
-+ /* reschedule timer for 200ms hence */
-+ mod_timer(&cardp->fw_timeout, jiffies + (HZ/5));
-+
- if (cardp->fwfinalblk) {
- cardp->fwdnldover = 1;
- goto exit;
+@@ -884,8 +920,6 @@ static int rt2500usb_enable_radio(struct rt2x00_dev *rt2x00dev)
+ return -EIO;
}
-- if_prog_firmware(cardp);
-+ if_usb_send_fw_pkt(cardp);
+- rt2x00usb_enable_radio(rt2x00dev);
+-
+ /*
+ * Enable LED
+ */
+@@ -988,12 +1022,12 @@ static int rt2500usb_set_device_state(struct rt2x00_dev *rt2x00dev,
+ * TX descriptor initialization
+ */
+ static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
++ struct sk_buff *skb,
+ struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+ struct ieee80211_tx_control *control)
+ {
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ __le32 *txd = skbdesc->desc;
+ u32 word;
+
+ /*
+@@ -1018,7 +1052,7 @@ static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
+ test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_ACK,
+- !(control->flags & IEEE80211_TXCTL_NO_ACK));
++ test_bit(ENTRY_TXD_ACK, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
+ test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_OFDM,
+@@ -1026,7 +1060,7 @@ static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_NEW_SEQ,
+ !!(control->flags & IEEE80211_TXCTL_FIRST_FRAGMENT));
+ rt2x00_set_field32(&word, TXD_W0_IFS, desc->ifs);
+- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
++ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
+ rt2x00_set_field32(&word, TXD_W0_CIPHER, CIPHER_NONE);
+ rt2x00_desc_write(txd, 0, word);
+ }
+@@ -1079,10 +1113,10 @@ static void rt2500usb_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+ static void rt2500usb_fill_rxdone(struct data_entry *entry,
+ struct rxdata_entry_desc *desc)
+ {
++ struct skb_desc *skbdesc = get_skb_desc(entry->skb);
+ struct urb *urb = entry->priv;
+- struct data_desc *rxd = (struct data_desc *)(entry->skb->data +
+- (urb->actual_length -
+- entry->ring->desc_size));
++ __le32 *rxd = (__le32 *)(entry->skb->data +
++ (urb->actual_length - entry->ring->desc_size));
+ u32 word0;
+ u32 word1;
-+ exit:
- if_usb_submit_rx_urb_fwload(cardp);
--exit:
-+
- kfree(syncfwheader);
+@@ -1103,8 +1137,15 @@ static void rt2500usb_fill_rxdone(struct data_entry *entry,
+ entry->ring->rt2x00dev->rssi_offset;
+ desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
+ desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
++ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
- return;
--
+- return;
++ /*
++ * Set descriptor and data pointer.
++ */
++ skbdesc->desc = entry->skb->data + desc->size;
++ skbdesc->desc_len = entry->ring->desc_size;
++ skbdesc->data = entry->skb->data;
++ skbdesc->data_len = desc->size;
}
- #define MRVDRV_MIN_PKT_LEN 30
+ /*
+@@ -1163,9 +1204,12 @@ static int rt2500usb_validate_eeprom(struct rt2x00_dev *rt2x00dev)
+ rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
+ if (word == 0xffff) {
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 0);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 0);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE, 0);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
++ ANTENNA_SW_DIVERSITY);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
++ ANTENNA_SW_DIVERSITY);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE,
++ LED_MODE_DEFAULT);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RF_TYPE, RF2522);
+@@ -1275,12 +1319,23 @@ static int rt2500usb_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Identify default antenna configuration.
+ */
+- rt2x00dev->hw->conf.antenna_sel_tx =
++ rt2x00dev->default_ant.tx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
+- rt2x00dev->hw->conf.antenna_sel_rx =
++ rt2x00dev->default_ant.rx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
- static inline void process_cmdtypedata(int recvlength, struct sk_buff *skb,
-- struct usb_card_rec *cardp,
-- wlan_private *priv)
-+ struct if_usb_card *cardp,
-+ struct lbs_private *priv)
- {
-- if (recvlength > MRVDRV_ETH_RX_PACKET_BUFFER_SIZE +
-- MESSAGE_HEADER_LEN || recvlength < MRVDRV_MIN_PKT_LEN) {
-- lbs_deb_usbd(&cardp->udev->dev,
-- "Packet length is Invalid\n");
-+ if (recvlength > MRVDRV_ETH_RX_PACKET_BUFFER_SIZE + MESSAGE_HEADER_LEN
-+ || recvlength < MRVDRV_MIN_PKT_LEN) {
-+ lbs_deb_usbd(&cardp->udev->dev, "Packet length is Invalid\n");
- kfree_skb(skb);
- return;
- }
-@@ -596,19 +580,19 @@ static inline void process_cmdtypedata(int recvlength, struct sk_buff *skb,
- skb_reserve(skb, IPFIELD_ALIGN_OFFSET);
- skb_put(skb, recvlength);
- skb_pull(skb, MESSAGE_HEADER_LEN);
-- libertas_process_rxed_packet(priv, skb);
+ /*
++ * When the EEPROM indicates SW_DIVERSITY, use HW_DIVERSITY instead.
++ * I am not 100% sure about this, but the legacy drivers do not
++ * indicate that antenna swapping in software is required when
++ * diversity is enabled.
++ */
++ if (rt2x00dev->default_ant.tx == ANTENNA_SW_DIVERSITY)
++ rt2x00dev->default_ant.tx = ANTENNA_HW_DIVERSITY;
++ if (rt2x00dev->default_ant.rx == ANTENNA_SW_DIVERSITY)
++ rt2x00dev->default_ant.rx = ANTENNA_HW_DIVERSITY;
+
-+ lbs_process_rxed_packet(priv, skb);
- priv->upld_len = (recvlength - MESSAGE_HEADER_LEN);
- }
-
--static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
-+static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
- struct sk_buff *skb,
-- struct usb_card_rec *cardp,
-- wlan_private *priv)
-+ struct if_usb_card *cardp,
-+ struct lbs_private *priv)
++ /*
+ * Store led mode, for correct led behaviour.
+ */
+ rt2x00dev->led_mode =
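The SW_DIVERSITY to HW_DIVERSITY fallback in the hunk above is a simple substitution on the EEPROM-derived antenna value. A minimal standalone sketch of that rule (plain C, outside the driver; the enum values are assumptions for the sketch, only the SW/HW diversity distinction matters):

```c
#include <assert.h>

/* Antenna constants in the style of rt2x00.h; the numeric values
 * here are assumptions for this sketch. */
enum antenna {
	ANTENNA_SW_DIVERSITY = 0,
	ANTENNA_A = 1,
	ANTENNA_B = 2,
	ANTENNA_HW_DIVERSITY = 3,
};

/* The patch's EEPROM fallback: when the EEPROM asks for software
 * diversity, use hardware diversity instead, since the legacy
 * drivers never indicate software antenna swapping for this case. */
static enum antenna resolve_default_ant(enum antenna eeprom_ant)
{
	if (eeprom_ant == ANTENNA_SW_DIVERSITY)
		return ANTENNA_HW_DIVERSITY;
	return eeprom_ant;
}
```

The same substitution is applied independently to the TX and RX defaults read from EEPROM_ANTENNA_TX_DEFAULT and EEPROM_ANTENNA_RX_DEFAULT.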
+@@ -1562,7 +1617,6 @@ static void rt2500usb_configure_filter(struct ieee80211_hw *hw,
+ struct dev_addr_list *mc_list)
{
-- u8 *cmdbuf;
-- if (recvlength > MRVDRV_SIZE_OF_CMD_BUFFER) {
-+ if (recvlength > LBS_CMD_BUFFER_SIZE) {
- lbs_deb_usbd(&cardp->udev->dev,
-- "The receive buffer is too large\n");
-+ "The receive buffer is too large\n");
- kfree_skb(skb);
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct interface *intf = &rt2x00dev->interface;
+ u16 reg;
+
+ /*
+@@ -1581,22 +1635,19 @@ static void rt2500usb_configure_filter(struct ieee80211_hw *hw,
+ * Apply some rules to the filters:
+ * - Some filters imply different filters to be set.
+ * - Some things we can't filter out at all.
+- * - Some filters are set based on interface type.
+ */
+ if (mc_count)
+ *total_flags |= FIF_ALLMULTI;
+ if (*total_flags & FIF_OTHER_BSS ||
+ *total_flags & FIF_PROMISC_IN_BSS)
+ *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
+- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
+- *total_flags |= FIF_PROMISC_IN_BSS;
+
+ /*
+ * Check if there is any work left for us.
+ */
+- if (intf->filter == *total_flags)
++ if (rt2x00dev->packet_filter == *total_flags)
return;
- }
-@@ -616,28 +600,17 @@ static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
- if (!in_interrupt())
- BUG();
+- intf->filter = *total_flags;
++ rt2x00dev->packet_filter = *total_flags;
-- spin_lock(&priv->adapter->driver_lock);
-- /* take care of cur_cmd = NULL case by reading the
-- * data to clear the interrupt */
-- if (!priv->adapter->cur_cmd) {
-- cmdbuf = priv->upld_buf;
-- priv->adapter->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
-- } else
-- cmdbuf = priv->adapter->cur_cmd->bufvirtualaddr;
--
-+ spin_lock(&priv->driver_lock);
- cardp->usb_int_cause |= MRVDRV_CMD_UPLD_RDY;
- priv->upld_len = (recvlength - MESSAGE_HEADER_LEN);
-- memcpy(cmdbuf, recvbuff + MESSAGE_HEADER_LEN,
-- priv->upld_len);
-+ memcpy(priv->upld_buf, recvbuff + MESSAGE_HEADER_LEN, priv->upld_len);
+ /*
+ * When in atomic context, reschedule and let rt2x00lib
+@@ -1638,8 +1689,8 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+ struct usb_device *usb_dev =
+ interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
+- struct data_ring *ring =
+- rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
++ struct skb_desc *desc;
++ struct data_ring *ring;
+ struct data_entry *beacon;
+ struct data_entry *guardian;
+ int pipe = usb_sndbulkpipe(usb_dev, 1);
+@@ -1651,6 +1702,7 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
+ * initialization.
+ */
+ control->queue = IEEE80211_TX_QUEUE_BEACON;
++ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
- kfree_skb(skb);
-- libertas_interrupt(priv->dev);
-- spin_unlock(&priv->adapter->driver_lock);
-+ lbs_interrupt(priv);
-+ spin_unlock(&priv->driver_lock);
+ /*
+ * Obtain 2 entries, one for the guardian byte,
+@@ -1661,23 +1713,34 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
+ beacon = rt2x00_get_data_entry(ring);
- lbs_deb_usbd(&cardp->udev->dev,
- "Wake up main thread to handle cmd response\n");
--
-- return;
- }
+ /*
+- * First we create the beacon.
++ * Add the descriptor in front of the skb.
+ */
+ skb_push(skb, ring->desc_size);
+ memset(skb->data, 0, ring->desc_size);
- /**
-@@ -649,35 +622,33 @@ static inline void process_cmdrequest(int recvlength, u8 *recvbuff,
- */
- static void if_usb_receive(struct urb *urb)
- {
-- struct read_cb_info *rinfo = (struct read_cb_info *)urb->context;
-- struct sk_buff *skb = rinfo->skb;
-- struct usb_card_rec *cardp = (struct usb_card_rec *) rinfo->cardp;
-- wlan_private * priv = cardp->priv;
--
-+ struct if_usb_card *cardp = urb->context;
-+ struct sk_buff *skb = cardp->rx_skb;
-+ struct lbs_private *priv = cardp->priv;
- int recvlength = urb->actual_length;
-- u8 *recvbuff = NULL;
-- u32 recvtype = 0;
-+ uint8_t *recvbuff = NULL;
-+ uint32_t recvtype = 0;
-+ __le32 *pkt = (__le32 *)(skb->data + IPFIELD_ALIGN_OFFSET);
+- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
+- (struct ieee80211_hdr *)(skb->data +
+- ring->desc_size),
+- skb->len - ring->desc_size, control);
++ /*
++ * Fill in skb descriptor
++ */
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len - ring->desc_size;
++ desc->desc = skb->data;
++ desc->data = skb->data + ring->desc_size;
++ desc->ring = ring;
++ desc->entry = beacon;
++
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
- lbs_deb_enter(LBS_DEB_USB);
++ /*
++ * USB devices cannot blindly pass the skb->len as the
++ * length of the data to usb_fill_bulk_urb. Pass the skb
++ * to the driver to determine what the length should be.
++ */
+ length = rt2500usb_get_tx_data_len(rt2x00dev, skb);
- if (recvlength) {
-- __le32 tmp;
+ usb_fill_bulk_urb(beacon->priv, usb_dev, pipe,
+ skb->data, length, rt2500usb_beacondone, beacon);
+
+- beacon->skb = skb;
-
- if (urb->status) {
-- lbs_deb_usbd(&cardp->udev->dev,
-- "URB status is failed\n");
-+ lbs_deb_usbd(&cardp->udev->dev, "RX URB failed: %d\n",
-+ urb->status);
- kfree_skb(skb);
- goto setup_for_next;
- }
+ /*
+ * Second we need to create the guardian byte.
+ * We only need a single byte, so lets recycle
+@@ -1710,7 +1773,7 @@ static const struct ieee80211_ops rt2500usb_mac80211_ops = {
+ .config_interface = rt2x00mac_config_interface,
+ .configure_filter = rt2500usb_configure_filter,
+ .get_stats = rt2x00mac_get_stats,
+- .erp_ie_changed = rt2x00mac_erp_ie_changed,
++ .bss_info_changed = rt2x00mac_bss_info_changed,
+ .conf_tx = rt2x00mac_conf_tx,
+ .get_tx_stats = rt2x00mac_get_tx_stats,
+ .beacon_update = rt2500usb_beacon_update,
+@@ -1720,6 +1783,8 @@ static const struct rt2x00lib_ops rt2500usb_rt2x00_ops = {
+ .probe_hw = rt2500usb_probe_hw,
+ .initialize = rt2x00usb_initialize,
+ .uninitialize = rt2x00usb_uninitialize,
++ .init_rxentry = rt2x00usb_init_rxentry,
++ .init_txentry = rt2x00usb_init_txentry,
+ .set_device_state = rt2500usb_set_device_state,
+ .link_stats = rt2500usb_link_stats,
+ .reset_tuner = rt2500usb_reset_tuner,
+@@ -1737,7 +1802,7 @@ static const struct rt2x00lib_ops rt2500usb_rt2x00_ops = {
+ };
- recvbuff = skb->data + IPFIELD_ALIGN_OFFSET;
-- memcpy(&tmp, recvbuff, sizeof(u32));
-- recvtype = le32_to_cpu(tmp);
-+ recvtype = le32_to_cpu(pkt[0]);
- lbs_deb_usbd(&cardp->udev->dev,
- "Recv length = 0x%x, Recv type = 0x%X\n",
- recvlength, recvtype);
-- } else if (urb->status)
-+ } else if (urb->status) {
-+ kfree_skb(skb);
- goto rx_exit;
-+ }
+ static const struct rt2x00_ops rt2500usb_ops = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .rxd_size = RXD_DESC_SIZE,
+ .txd_size = TXD_DESC_SIZE,
+ .eeprom_size = EEPROM_SIZE,
+@@ -1809,7 +1874,7 @@ MODULE_DEVICE_TABLE(usb, rt2500usb_device_table);
+ MODULE_LICENSE("GPL");
- switch (recvtype) {
- case CMD_TYPE_DATA:
-@@ -690,24 +661,28 @@ static void if_usb_receive(struct urb *urb)
+ static struct usb_driver rt2500usb_driver = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .id_table = rt2500usb_device_table,
+ .probe = rt2x00usb_probe,
+ .disconnect = rt2x00usb_disconnect,
+diff --git a/drivers/net/wireless/rt2x00/rt2500usb.h b/drivers/net/wireless/rt2x00/rt2500usb.h
+index b18d56e..9e04337 100644
+--- a/drivers/net/wireless/rt2x00/rt2500usb.h
++++ b/drivers/net/wireless/rt2x00/rt2500usb.h
+@@ -430,10 +430,21 @@
- case CMD_TYPE_INDICATION:
- /* Event cause handling */
-- spin_lock(&priv->adapter->driver_lock);
-- cardp->usb_event_cause = le32_to_cpu(*(__le32 *) (recvbuff + MESSAGE_HEADER_LEN));
-+ spin_lock(&priv->driver_lock);
-+
-+ cardp->usb_event_cause = le32_to_cpu(pkt[1]);
+ /*
+ * MAC configuration registers.
++ */
+
- lbs_deb_usbd(&cardp->udev->dev,"**EVENT** 0x%X\n",
-- cardp->usb_event_cause);
-+ cardp->usb_event_cause);
++/*
+ * PHY_CSR2: TX MAC configuration.
+- * PHY_CSR3: RX MAC configuration.
++ * NOTE: Both register fields are complete dummies;
++ * documentation and legacy drivers are unclear on
++ * what this register means or what fields exist.
+ */
+ #define PHY_CSR2 0x04c4
++#define PHY_CSR2_LNA FIELD16(0x0002)
++#define PHY_CSR2_LNA_MODE FIELD16(0x3000)
+
-+ /* Icky undocumented magic special case */
- if (cardp->usb_event_cause & 0xffff0000) {
-- libertas_send_tx_feedback(priv);
-- spin_unlock(&priv->adapter->driver_lock);
-+ lbs_send_tx_feedback(priv);
-+ spin_unlock(&priv->driver_lock);
- break;
- }
- cardp->usb_event_cause <<= 3;
- cardp->usb_int_cause |= MRVDRV_CARDEVENT;
- kfree_skb(skb);
-- libertas_interrupt(priv->dev);
-- spin_unlock(&priv->adapter->driver_lock);
-+ lbs_interrupt(priv);
-+ spin_unlock(&priv->driver_lock);
- goto rx_exit;
- default:
- lbs_deb_usbd(&cardp->udev->dev, "Unknown command type 0x%X\n",
-- recvtype);
-+ recvtype);
- kfree_skb(skb);
- break;
- }
-@@ -720,58 +695,54 @@ rx_exit:
++/*
++ * PHY_CSR3: RX MAC configuration.
++ */
+ #define PHY_CSR3 0x04c6
- /**
- * @brief This function downloads data to FW
-- * @param priv pointer to wlan_private structure
-+ * @param priv pointer to struct lbs_private structure
- * @param type type of data
- * @param buf pointer to data buffer
- * @param len number of bytes
- * @return 0 or -1
+ /*
+@@ -692,8 +703,8 @@
+ /*
+ * DMA descriptor defines.
*/
--static int if_usb_host_to_card(wlan_private * priv, u8 type, u8 * payload, u16 nb)
-+static int if_usb_host_to_card(struct lbs_private *priv, uint8_t type,
-+ uint8_t *payload, uint16_t nb)
- {
-- struct usb_card_rec *cardp = (struct usb_card_rec *)priv->card;
-+ struct if_usb_card *cardp = priv->card;
-
- lbs_deb_usbd(&cardp->udev->dev,"*** type = %u\n", type);
- lbs_deb_usbd(&cardp->udev->dev,"size after = %d\n", nb);
+-#define TXD_DESC_SIZE ( 5 * sizeof(struct data_desc) )
+-#define RXD_DESC_SIZE ( 4 * sizeof(struct data_desc) )
++#define TXD_DESC_SIZE ( 5 * sizeof(__le32) )
++#define RXD_DESC_SIZE ( 4 * sizeof(__le32) )
- if (type == MVMS_CMD) {
-- __le32 tmp = cpu_to_le32(CMD_TYPE_REQUEST);
-+ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_REQUEST);
- priv->dnld_sent = DNLD_CMD_SENT;
-- memcpy(cardp->bulk_out_buffer, (u8 *) & tmp,
-- MESSAGE_HEADER_LEN);
--
- } else {
-- __le32 tmp = cpu_to_le32(CMD_TYPE_DATA);
-+ *(__le32 *)cardp->ep_out_buf = cpu_to_le32(CMD_TYPE_DATA);
- priv->dnld_sent = DNLD_DATA_SENT;
-- memcpy(cardp->bulk_out_buffer, (u8 *) & tmp,
-- MESSAGE_HEADER_LEN);
- }
+ /*
+ * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
+diff --git a/drivers/net/wireless/rt2x00/rt2x00.h b/drivers/net/wireless/rt2x00/rt2x00.h
+index c8f16f1..05927b9 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00.h
++++ b/drivers/net/wireless/rt2x00/rt2x00.h
+@@ -31,6 +31,8 @@
+ #include <linux/skbuff.h>
+ #include <linux/workqueue.h>
+ #include <linux/firmware.h>
++#include <linux/mutex.h>
++#include <linux/etherdevice.h>
-- memcpy((cardp->bulk_out_buffer + MESSAGE_HEADER_LEN), payload, nb);
-+ memcpy((cardp->ep_out_buf + MESSAGE_HEADER_LEN), payload, nb);
+ #include <net/mac80211.h>
-- return usb_tx_block(cardp, cardp->bulk_out_buffer,
-- nb + MESSAGE_HEADER_LEN);
-+ return usb_tx_block(cardp, cardp->ep_out_buf, nb + MESSAGE_HEADER_LEN);
- }
+@@ -40,9 +42,8 @@
--/* called with adapter->driver_lock held */
--static int if_usb_get_int_status(wlan_private * priv, u8 * ireg)
-+/* called with priv->driver_lock held */
-+static int if_usb_get_int_status(struct lbs_private *priv, uint8_t *ireg)
- {
-- struct usb_card_rec *cardp = priv->card;
-+ struct if_usb_card *cardp = priv->card;
+ /*
+ * Module information.
+- * DRV_NAME should be set within the individual module source files.
+ */
+-#define DRV_VERSION "2.0.10"
++#define DRV_VERSION "2.0.14"
+ #define DRV_PROJECT "http://rt2x00.serialmonkey.com"
- *ireg = cardp->usb_int_cause;
- cardp->usb_int_cause = 0;
+ /*
+@@ -55,7 +56,7 @@
-- lbs_deb_usbd(&cardp->udev->dev,"Int cause is 0x%X\n", *ireg);
-+ lbs_deb_usbd(&cardp->udev->dev, "Int cause is 0x%X\n", *ireg);
+ #define DEBUG_PRINTK_PROBE(__kernlvl, __lvl, __msg, __args...) \
+ printk(__kernlvl "%s -> %s: %s - " __msg, \
+- DRV_NAME, __FUNCTION__, __lvl, ##__args)
++ KBUILD_MODNAME, __FUNCTION__, __lvl, ##__args)
- return 0;
+ #ifdef CONFIG_RT2X00_DEBUG
+ #define DEBUG_PRINTK(__dev, __kernlvl, __lvl, __msg, __args...) \
+@@ -133,20 +134,26 @@
+ */
+ static inline int is_rts_frame(u16 fc)
+ {
+- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
+- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_RTS));
++ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
++ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_RTS));
}
--static int if_usb_read_event_cause(wlan_private * priv)
-+static int if_usb_read_event_cause(struct lbs_private *priv)
+ static inline int is_cts_frame(u16 fc)
{
-- struct usb_card_rec *cardp = priv->card;
-+ struct if_usb_card *cardp = priv->card;
+- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
+- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_CTS));
++ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
++ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_CTS));
+ }
-- priv->adapter->eventcause = cardp->usb_event_cause;
-+ priv->eventcause = cardp->usb_event_cause;
- /* Re-submit rx urb here to avoid event lost issue */
- if_usb_submit_rx_urb(cardp);
+ static inline int is_probe_resp(u16 fc)
+ {
+- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
+- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_PROBE_RESP));
++ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
++ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_PROBE_RESP));
++}
+
- return 0;
++static inline int is_beacon(u16 fc)
++{
++ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
++ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_BEACON));
}
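The hunk above drops the `!!` from these helpers because `&&` in C already evaluates to 0 or 1. A standalone sketch of the frame-control test (the mask values mirror mac80211's ieee80211.h definitions; verify against your kernel headers):

```c
#include <assert.h>
#include <stdint.h>

/* Frame-control masks matching the IEEE80211_FCTL_*, IEEE80211_FTYPE_*
 * and IEEE80211_STYPE_* constants used in the patch. */
#define FCTL_FTYPE	0x000c
#define FCTL_STYPE	0x00f0
#define FTYPE_MGMT	0x0000
#define FTYPE_CTL	0x0004
#define STYPE_BEACON	0x0080
#define STYPE_RTS	0x00b0

/* The removed `!!` was redundant: `&&` already yields a 0/1 result. */
static int is_rts_frame(uint16_t fc)
{
	return ((fc & FCTL_FTYPE) == FTYPE_CTL) &&
	       ((fc & FCTL_STYPE) == STYPE_RTS);
}

static int is_beacon(uint16_t fc)
{
	return ((fc & FCTL_FTYPE) == FTYPE_MGMT) &&
	       ((fc & FCTL_STYPE) == STYPE_BEACON);
}
```

Each helper checks the frame type and subtype fields independently, so a beacon (management/0x0080) never matches the RTS test (control/0x00b0).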
-@@ -781,20 +752,17 @@ static int if_usb_read_event_cause(wlan_private * priv)
- * 2:Boot from FW in EEPROM
- * @return 0
+ /*
+@@ -180,18 +187,17 @@ struct rf_channel {
+ };
+
+ /*
+- * To optimize the quality of the link we need to store
+- * the quality of received frames and periodically
+- * optimize the link.
++ * Antenna setup values.
*/
--static int if_usb_issue_boot_command(struct usb_card_rec *cardp, int ivalue)
-+static int if_usb_issue_boot_command(struct if_usb_card *cardp, int ivalue)
- {
-- struct bootcmdstr sbootcmd;
-- int i;
-+ struct bootcmd *bootcmd = cardp->ep_out_buf;
+-struct link {
+- /*
+- * Link tuner counter
+- * The number of times the link has been tuned
+- * since the radio has been switched on.
+- */
+- u32 count;
++struct antenna_setup {
++ enum antenna rx;
++ enum antenna tx;
++};
- /* Prepare command */
-- sbootcmd.u32magicnumber = cpu_to_le32(BOOT_CMD_MAGIC_NUMBER);
-- sbootcmd.u8cmd_tag = ivalue;
-- for (i=0; i<11; i++)
-- sbootcmd.au8dumy[i]=0x00;
-- memcpy(cardp->bulk_out_buffer, &sbootcmd, sizeof(struct bootcmdstr));
-+ bootcmd->magic = cpu_to_le32(BOOT_CMD_MAGIC_NUMBER);
-+ bootcmd->cmd = ivalue;
-+ memset(bootcmd->pad, 0, sizeof(bootcmd->pad));
++/*
++ * Quality statistics about the currently active link.
++ */
++struct link_qual {
+ /*
+ * Statistics required for Link tuning.
+ * For the average RSSI value we use the "Walking average" approach.
+@@ -211,7 +217,6 @@ struct link {
+ * the new values correctly, allowing effective link tuning.
+ */
+ int avg_rssi;
+- int vgc_level;
+ int false_cca;
- /* Issue command */
-- usb_tx_block(cardp, cardp->bulk_out_buffer, sizeof(struct bootcmdstr));
-+ usb_tx_block(cardp, cardp->ep_out_buf, sizeof(*bootcmd));
+ /*
+@@ -240,6 +245,72 @@ struct link {
+ #define WEIGHT_RSSI 20
+ #define WEIGHT_RX 40
+ #define WEIGHT_TX 40
++};
++
++/*
++ * Antenna settings about the currently active link.
++ */
++struct link_ant {
++ /*
++ * Antenna flags
++ */
++ unsigned int flags;
++#define ANTENNA_RX_DIVERSITY 0x00000001
++#define ANTENNA_TX_DIVERSITY 0x00000002
++#define ANTENNA_MODE_SAMPLE 0x00000004
++
++ /*
++ * Currently active TX/RX antenna setup.
++ * When software diversity is used, this will indicate
++ * which antenna is actually used at this time.
++ */
++ struct antenna_setup active;
++
++ /*
++ * RSSI information for the different antennas.
++ * These statistics are used to determine when
++ * to switch antenna when using software diversity.
++ *
++ * rssi[0] -> Antenna A RSSI
++ * rssi[1] -> Antenna B RSSI
++ */
++ int rssi_history[2];
++
++ /*
++ * Current RSSI average of the currently active antenna.
++ * Similar to the avg_rssi in the link_qual structure
++ * this value is updated by using the walking average.
++ */
++ int rssi_ant;
++};
++
++/*
++ * To optimize the quality of the link we need to store
++ * the quality of received frames and periodically
++ * optimize the link.
++ */
++struct link {
++ /*
++ * Link tuner counter
++ * The number of times the link has been tuned
++ * since the radio has been switched on.
++ */
++ u32 count;
++
++ /*
++ * Quality measurement values.
++ */
++ struct link_qual qual;
++
++ /*
++ * TX/RX antenna setup.
++ */
++ struct link_ant ant;
++
++ /*
++ * Active VGC level
++ */
++ int vgc_level;
- return 0;
- }
-@@ -807,10 +775,10 @@ static int if_usb_issue_boot_command(struct usb_card_rec *cardp, int ivalue)
- * len image length
- * @return 0 or -1
- */
--static int check_fwfile_format(u8 *data, u32 totlen)
-+static int check_fwfile_format(uint8_t *data, uint32_t totlen)
- {
-- u32 bincmd, exit;
-- u32 blksize, offset, len;
-+ uint32_t bincmd, exit;
-+ uint32_t blksize, offset, len;
- int ret;
+ /*
+ * Work structure for scheduling periodic link tuning.
+@@ -248,36 +319,47 @@ struct link {
+ };
- ret = 1;
-@@ -848,7 +816,7 @@ static int check_fwfile_format(u8 *data, u32 totlen)
- }
+ /*
+- * Clear all counters inside the link structure.
+- * This can be easiest achieved by memsetting everything
+- * except for the work structure at the end.
++ * Small helper macro to work with moving/walking averages.
+ */
+-static inline void rt2x00_clear_link(struct link *link)
+-{
+- memset(link, 0x00, sizeof(*link) - sizeof(link->work));
+- link->rx_percentage = 50;
+- link->tx_percentage = 50;
+-}
++#define MOVING_AVERAGE(__avg, __val, __samples) \
++ ( (((__avg) * ((__samples) - 1)) + (__val)) / (__samples) )
+ /*
+- * Update the rssi using the walking average approach.
++ * When we lack RSSI information, return something less than -80 to
++ * tell the driver to tune the device to maximum sensitivity.
+ */
+-static inline void rt2x00_update_link_rssi(struct link *link, int rssi)
+-{
+- if (!link->avg_rssi)
+- link->avg_rssi = rssi;
+- else
+- link->avg_rssi = ((link->avg_rssi * 7) + rssi) / 8;
+-}
++#define DEFAULT_RSSI ( -128 )
--static int if_usb_prog_firmware(struct usb_card_rec *cardp)
-+static int if_usb_prog_firmware(struct if_usb_card *cardp)
+ /*
+- * When the avg_rssi is unset or no frames have been received),
+- * we need to return the default value which needs to be less
+- * than -80 so the device will select the maximum sensitivity.
++ * Link quality access functions.
+ */
+ static inline int rt2x00_get_link_rssi(struct link *link)
{
- int i = 0;
- static int reset_count = 10;
-@@ -856,10 +824,10 @@ static int if_usb_prog_firmware(struct usb_card_rec *cardp)
-
- lbs_deb_enter(LBS_DEB_USB);
+- return (link->avg_rssi && link->rx_success) ? link->avg_rssi : -128;
++ if (link->qual.avg_rssi && link->qual.rx_success)
++ return link->qual.avg_rssi;
++ return DEFAULT_RSSI;
++}
++
++static inline int rt2x00_get_link_ant_rssi(struct link *link)
++{
++ if (link->ant.rssi_ant && link->qual.rx_success)
++ return link->ant.rssi_ant;
++ return DEFAULT_RSSI;
++}
++
++static inline int rt2x00_get_link_ant_rssi_history(struct link *link,
++ enum antenna ant)
++{
++ if (link->ant.rssi_history[ant - ANTENNA_A])
++ return link->ant.rssi_history[ant - ANTENNA_A];
++ return DEFAULT_RSSI;
++}
++
++static inline int rt2x00_update_ant_rssi(struct link *link, int rssi)
++{
++ int old_rssi = link->ant.rssi_history[link->ant.active.rx - ANTENNA_A];
++ link->ant.rssi_history[link->ant.active.rx - ANTENNA_A] = rssi;
++ return old_rssi;
+ }
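The hunk above replaces the open-coded walking average in rt2x00_update_link_rssi() with the MOVING_AVERAGE macro. A minimal standalone sketch of the arithmetic (plain C, outside the kernel), showing that the macro with 8 samples reproduces the old fixed 7/8 weighting:

```c
#include <assert.h>

/* Mirror of the MOVING_AVERAGE macro the patch introduces: the old
 * average is weighted (samples - 1) times against one new sample. */
#define MOVING_AVERAGE(__avg, __val, __samples) \
	( (((__avg) * ((__samples) - 1)) + (__val)) / (__samples) )

/* Equivalent of the removed rt2x00_update_link_rssi(): seed the
 * average on the first sample, then walk it with 8 samples. */
static int update_link_rssi(int avg_rssi, int rssi)
{
	if (!avg_rssi)
		return rssi;
	return MOVING_AVERAGE(avg_rssi, rssi, 8);
}
```

For example, an average of -60 dBm updated with a -52 dBm sample gives ((-60 * 7) + -52) / 8 = -59, identical to the old `((link->avg_rssi * 7) + rssi) / 8` computation.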
-- if ((ret = request_firmware(&cardp->fw, libertas_fw_name,
-+ if ((ret = request_firmware(&cardp->fw, lbs_fw_name,
- &cardp->udev->dev)) < 0) {
- lbs_pr_err("request_firmware() failed with %#x\n", ret);
-- lbs_pr_err("firmware %s not found\n", libertas_fw_name);
-+ lbs_pr_err("firmware %s not found\n", lbs_fw_name);
- goto done;
- }
+ /*
+@@ -290,14 +372,12 @@ struct interface {
+ * to us by the 80211 stack, and is used to request
+ * new beacons.
+ */
+- int id;
++ struct ieee80211_vif *id;
-@@ -886,7 +854,7 @@ restart:
- } while (cardp->bootcmdresp == 0 && j < 10);
- } while (cardp->bootcmdresp == 0 && i < 5);
+ /*
+ * Current working type (IEEE80211_IF_TYPE_*).
+- * When set to INVALID_INTERFACE, no interface is configured.
+ */
+ int type;
+-#define INVALID_INTERFACE IEEE80211_IF_TYPE_INVALID
-- if (cardp->bootcmdresp == 0) {
-+ if (cardp->bootcmdresp <= 0) {
- if (--reset_count >= 0) {
- if_usb_reset_device(cardp);
- goto restart;
-@@ -904,15 +872,14 @@ restart:
- cardp->totalbytes = 0;
- cardp->fwfinalblk = 0;
+ /*
+ * MAC of the device.
+@@ -308,11 +388,6 @@ struct interface {
+ * BSSID of the AP to associate with.
+ */
+ u8 bssid[ETH_ALEN];
+-
+- /*
+- * Store the packet filter mode for the current interface.
+- */
+- unsigned int filter;
+ };
-- if_prog_firmware(cardp);
-+ /* Send the first firmware packet... */
-+ if_usb_send_fw_pkt(cardp);
+ static inline int is_interface_present(struct interface *intf)
+@@ -362,6 +437,8 @@ struct rt2x00lib_conf {
+ struct ieee80211_conf *conf;
+ struct rf_channel rf;
-- do {
-- lbs_deb_usbd(&cardp->udev->dev,"Wlan sched timeout\n");
-- i++;
-- msleep_interruptible(100);
-- if (cardp->surprise_removed || i >= 20)
-- break;
-- } while (!cardp->fwdnldover);
-+ /* ... and wait for the process to complete */
-+ wait_event_interruptible(cardp->fw_wq, cardp->surprise_removed || cardp->fwdnldover);
++ struct antenna_setup ant;
+
-+ del_timer_sync(&cardp->fw_timeout);
-+ usb_kill_urb(cardp->rx_urb);
+ int phymode;
- if (!cardp->fwdnldover) {
- lbs_pr_info("failed to load fw, resetting device!\n");
-@@ -926,11 +893,11 @@ restart:
- goto release_fw;
- }
+ int basic_rates;
+@@ -397,12 +474,21 @@ struct rt2x00lib_ops {
+ void (*uninitialize) (struct rt2x00_dev *rt2x00dev);
--release_fw:
-+ release_fw:
- release_firmware(cardp->fw);
- cardp->fw = NULL;
+ /*
++ * Ring initialization handlers
++ */
++ void (*init_rxentry) (struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry);
++ void (*init_txentry) (struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry);
++
++ /*
+ * Radio control handlers.
+ */
+ int (*set_device_state) (struct rt2x00_dev *rt2x00dev,
+ enum dev_state state);
+ int (*rfkill_poll) (struct rt2x00_dev *rt2x00dev);
+- void (*link_stats) (struct rt2x00_dev *rt2x00dev);
++ void (*link_stats) (struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual);
+ void (*reset_tuner) (struct rt2x00_dev *rt2x00dev);
+ void (*link_tuner) (struct rt2x00_dev *rt2x00dev);
--done:
-+ done:
- lbs_deb_leave_args(LBS_DEB_USB, "ret %d", ret);
- return ret;
- }
-@@ -939,66 +906,38 @@ done:
- #ifdef CONFIG_PM
- static int if_usb_suspend(struct usb_interface *intf, pm_message_t message)
- {
-- struct usb_card_rec *cardp = usb_get_intfdata(intf);
-- wlan_private *priv = cardp->priv;
-+ struct if_usb_card *cardp = usb_get_intfdata(intf);
-+ struct lbs_private *priv = cardp->priv;
-+ int ret;
+@@ -410,10 +496,8 @@ struct rt2x00lib_ops {
+ * TX control handlers
+ */
+ void (*write_tx_desc) (struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
++ struct sk_buff *skb,
+ struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+ struct ieee80211_tx_control *control);
+ int (*write_tx_data) (struct rt2x00_dev *rt2x00dev,
+ struct data_ring *ring, struct sk_buff *skb,
+@@ -545,7 +629,7 @@ struct rt2x00_dev {
+ * required for deregistration of debugfs.
+ */
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+- const struct rt2x00debug_intf *debugfs_intf;
++ struct rt2x00debug_intf *debugfs_intf;
+ #endif /* CONFIG_RT2X00_LIB_DEBUGFS */
- lbs_deb_enter(LBS_DEB_USB);
+ /*
+@@ -566,6 +650,13 @@ struct rt2x00_dev {
+ struct hw_mode_spec spec;
-- if (priv->adapter->psstate != PS_STATE_FULL_POWER)
-+ if (priv->psstate != PS_STATE_FULL_POWER)
- return -1;
+ /*
++ * This is the default TX/RX antenna setup as indicated
++ * by the device's EEPROM. When mac80211 sets its
++ * antenna value to 0 we should be using these values.
++ */
++ struct antenna_setup default_ant;
++
++ /*
+ * Register pointers
+ * csr_addr: Base register address. (PCI)
+ * csr_cache: CSR cache for usb_control_msg. (USB)
+@@ -574,6 +665,25 @@ struct rt2x00_dev {
+ void *csr_cache;
-- if (priv->mesh_dev && !priv->mesh_autostart_enabled) {
-- /* Mesh autostart must be activated while sleeping
-- * On resume it will go back to the current state
-- */
-- struct cmd_ds_mesh_access mesh_access;
-- memset(&mesh_access, 0, sizeof(mesh_access));
-- mesh_access.data[0] = cpu_to_le32(1);
-- libertas_prepare_and_send_command(priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
-- }
--
-- netif_device_detach(cardp->eth_dev);
-- netif_device_detach(priv->mesh_dev);
-+ ret = lbs_suspend(priv);
-+ if (ret)
-+ goto out;
+ /*
++ * Mutex to protect register accesses on USB devices.
++ * There are two reasons this is needed: one is to ensure
++ * use of the csr_cache (for USB devices) by one thread
++ * isn't corrupted by another thread trying to access it.
++ * The other is that access to BBP and RF registers
++ * requires multiple bus transactions, and if another thread
++ * attempted to access one of those registers at the same
++ * time, one of the writes could silently fail.
++ */
++ struct mutex usb_cache_mutex;
++
++ /*
++ * Current packet filter configuration for the device.
++ * This contains all currently active FIF_* flags sent
++ * to us by mac80211 during configure_filter().
++ */
++ unsigned int packet_filter;
++
++ /*
+ * Interface configuration.
+ */
+ struct interface interface;
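The usb_cache_mutex comment above describes serializing multi-transaction register access. A hypothetical userspace sketch of the pattern (not driver code; the struct and function names are invented for illustration, with pthreads standing in for the kernel mutex):

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative stand-in for the device structure: one mutex guards
 * the cached CSR word, as usb_cache_mutex does in struct rt2x00_dev. */
struct fake_dev {
	pthread_mutex_t usb_cache_mutex;
	uint32_t csr_cache;
};

/* BBP/RF access takes multiple bus transactions; holding the mutex
 * across the whole read-modify-write sequence means no other thread
 * can interleave between the read and the write. */
static void reg_set_bits(struct fake_dev *dev, uint32_t bits)
{
	pthread_mutex_lock(&dev->usb_cache_mutex);
	uint32_t word = dev->csr_cache;		/* transaction 1: read  */
	dev->csr_cache = word | bits;		/* transaction 2: write */
	pthread_mutex_unlock(&dev->usb_cache_mutex);
}

static uint32_t demo_update(void)
{
	struct fake_dev dev = { PTHREAD_MUTEX_INITIALIZER, 0x0010 };
	reg_set_bits(&dev, 0x0002);
	return dev.csr_cache;
}
```

Without the lock, two concurrent read-modify-write sequences could both read the same stale word and one update would be silently lost, which is exactly the failure mode the comment warns about.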
+@@ -697,13 +807,13 @@ struct rt2x00_dev {
+ * Generic RF access.
+ * The RF is being accessed by word index.
+ */
+-static inline void rt2x00_rf_read(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00_rf_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ *data = rt2x00dev->rf[word];
+ }
- /* Unlink tx & rx urb */
- usb_kill_urb(cardp->tx_urb);
- usb_kill_urb(cardp->rx_urb);
+-static inline void rt2x00_rf_write(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
+ {
+ rt2x00dev->rf[word] = data;
+@@ -713,19 +823,19 @@ static inline void rt2x00_rf_write(const struct rt2x00_dev *rt2x00dev,
+ * Generic EEPROM access.
+ * The EEPROM is being accessed by word index.
+ */
+-static inline void *rt2x00_eeprom_addr(const struct rt2x00_dev *rt2x00dev,
++static inline void *rt2x00_eeprom_addr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word)
+ {
+ return (void *)&rt2x00dev->eeprom[word];
+ }
-- cardp->rx_urb_recall = 1;
--
-+ out:
- lbs_deb_leave(LBS_DEB_USB);
-- return 0;
-+ return ret;
+-static inline void rt2x00_eeprom_read(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00_eeprom_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u16 *data)
+ {
+ *data = le16_to_cpu(rt2x00dev->eeprom[word]);
}
- static int if_usb_resume(struct usb_interface *intf)
+-static inline void rt2x00_eeprom_write(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00_eeprom_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u16 data)
{
-- struct usb_card_rec *cardp = usb_get_intfdata(intf);
-- wlan_private *priv = cardp->priv;
-+ struct if_usb_card *cardp = usb_get_intfdata(intf);
-+ struct lbs_private *priv = cardp->priv;
+ rt2x00dev->eeprom[word] = cpu_to_le16(data);
+@@ -804,9 +914,7 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
+ * TX descriptor initializer
+ */
+ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
++ struct sk_buff *skb,
+ struct ieee80211_tx_control *control);
- lbs_deb_enter(LBS_DEB_USB);
+ /*
+@@ -821,14 +929,17 @@ int rt2x00mac_add_interface(struct ieee80211_hw *hw,
+ void rt2x00mac_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_if_init_conf *conf);
+ int rt2x00mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
+-int rt2x00mac_config_interface(struct ieee80211_hw *hw, int if_id,
++int rt2x00mac_config_interface(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
+ struct ieee80211_if_conf *conf);
+ int rt2x00mac_get_stats(struct ieee80211_hw *hw,
+ struct ieee80211_low_level_stats *stats);
+ int rt2x00mac_get_tx_stats(struct ieee80211_hw *hw,
+ struct ieee80211_tx_queue_stats *stats);
+-void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
+- int cts_protection, int preamble);
++void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_bss_conf *bss_conf,
++ u32 changes);
+ int rt2x00mac_conf_tx(struct ieee80211_hw *hw, int queue,
+ const struct ieee80211_tx_queue_params *params);
-- cardp->rx_urb_recall = 0;
--
-- if_usb_submit_rx_urb(cardp->priv);
-+ if_usb_submit_rx_urb(cardp);
+diff --git a/drivers/net/wireless/rt2x00/rt2x00config.c b/drivers/net/wireless/rt2x00/rt2x00config.c
+index 12914cf..72cfe00 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00config.c
++++ b/drivers/net/wireless/rt2x00/rt2x00config.c
+@@ -23,11 +23,6 @@
+ Abstract: rt2x00 generic configuration routines.
+ */
-- netif_device_attach(cardp->eth_dev);
-- netif_device_attach(priv->mesh_dev);
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
-
-- if (priv->mesh_dev && !priv->mesh_autostart_enabled) {
-- /* Mesh autostart was activated while sleeping
-- * Disable it if appropriate
-- */
-- struct cmd_ds_mesh_access mesh_access;
-- memset(&mesh_access, 0, sizeof(mesh_access));
-- mesh_access.data[0] = cpu_to_le32(0);
-- libertas_prepare_and_send_command(priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
-- }
-+ lbs_resume(priv);
-
- lbs_deb_leave(LBS_DEB_USB);
- return 0;
-@@ -1009,46 +948,30 @@ static int if_usb_resume(struct usb_interface *intf)
- #endif
+ #include <linux/kernel.h>
+ #include <linux/module.h>
- static struct usb_driver if_usb_driver = {
-- /* driver name */
-- .name = usbdriver_name,
-- /* probe function name */
-+ .name = DRV_NAME,
- .probe = if_usb_probe,
-- /* disconnect function name */
- .disconnect = if_usb_disconnect,
-- /* device signature table */
- .id_table = if_usb_table,
- .suspend = if_usb_suspend,
- .resume = if_usb_resume,
- };
+@@ -94,12 +89,44 @@ void rt2x00lib_config_type(struct rt2x00_dev *rt2x00dev, const int type)
+ rt2x00dev->ops->lib->config_type(rt2x00dev, type, tsf_sync);
+ }
--static int if_usb_init_module(void)
-+static int __init if_usb_init_module(void)
++void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
++ enum antenna rx, enum antenna tx)
++{
++ struct rt2x00lib_conf libconf;
++
++ libconf.ant.rx = rx;
++ libconf.ant.tx = tx;
++
++ /*
++ * Antenna setup changes require the RX to be disabled,
++ * else the changes will be ignored by the device.
++ */
++ if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
++ rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
++
++ /*
++ * Write new antenna setup to device and reset the link tuner.
++ * The latter is required since we need to recalibrate the
++ * noise-sensitivity ratio for the new setup.
++ */
++ rt2x00dev->ops->lib->config(rt2x00dev, CONFIG_UPDATE_ANTENNA, &libconf);
++ rt2x00lib_reset_link_tuner(rt2x00dev);
++
++ rt2x00dev->link.ant.active.rx = libconf.ant.rx;
++ rt2x00dev->link.ant.active.tx = libconf.ant.tx;
++
++ if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
++ rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++}
++
+ void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
+ struct ieee80211_conf *conf, const int force_config)
{
- int ret = 0;
+ struct rt2x00lib_conf libconf;
+ struct ieee80211_hw_mode *mode;
+ struct ieee80211_rate *rate;
++ struct antenna_setup *default_ant = &rt2x00dev->default_ant;
++ struct antenna_setup *active_ant = &rt2x00dev->link.ant.active;
+ int flags = 0;
+ int short_slot_time;
- lbs_deb_enter(LBS_DEB_MAIN);
+@@ -122,7 +149,39 @@ void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
+ flags |= CONFIG_UPDATE_CHANNEL;
+ if (rt2x00dev->tx_power != conf->power_level)
+ flags |= CONFIG_UPDATE_TXPOWER;
+- if (rt2x00dev->rx_status.antenna == conf->antenna_sel_rx)
++
++ /*
++ * Determining changes in the antenna setup requires several checks:
++ * antenna_sel_{r,t}x = 0
++ * -> Does active_{r,t}x match default_{r,t}x
++ * -> Is default_{r,t}x SW_DIVERSITY
++ * antenna_sel_{r,t}x = 1/2
++ * -> Does active_{r,t}x match antenna_sel_{r,t}x
++ * The reason for not updating the antenna while SW diversity
++ * is in use is simple: software diversity means that we
++ * should switch between the antennas based on link
++ * quality. This means that the current antenna is good enough
++ * to work with until the link tuner decides that an antenna
++ * switch should be performed.
++ */
++ if (!conf->antenna_sel_rx &&
++ default_ant->rx != ANTENNA_SW_DIVERSITY &&
++ default_ant->rx != active_ant->rx)
++ flags |= CONFIG_UPDATE_ANTENNA;
++ else if (conf->antenna_sel_rx &&
++ conf->antenna_sel_rx != active_ant->rx)
++ flags |= CONFIG_UPDATE_ANTENNA;
++ else if (active_ant->rx == ANTENNA_SW_DIVERSITY)
++ flags |= CONFIG_UPDATE_ANTENNA;
++
++ if (!conf->antenna_sel_tx &&
++ default_ant->tx != ANTENNA_SW_DIVERSITY &&
++ default_ant->tx != active_ant->tx)
++ flags |= CONFIG_UPDATE_ANTENNA;
++ else if (conf->antenna_sel_tx &&
++ conf->antenna_sel_tx != active_ant->tx)
++ flags |= CONFIG_UPDATE_ANTENNA;
++ else if (active_ant->tx == ANTENNA_SW_DIVERSITY)
+ flags |= CONFIG_UPDATE_ANTENNA;
-- if (libertas_fw_name == NULL) {
-- libertas_fw_name = default_fw_name;
-- }
--
- ret = usb_register(&if_usb_driver);
+ /*
+@@ -171,6 +230,22 @@ config:
+ sizeof(libconf.rf));
+ }
- lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
- return ret;
- }
++ if (flags & CONFIG_UPDATE_ANTENNA) {
++ if (conf->antenna_sel_rx)
++ libconf.ant.rx = conf->antenna_sel_rx;
++ else if (default_ant->rx != ANTENNA_SW_DIVERSITY)
++ libconf.ant.rx = default_ant->rx;
++ else if (active_ant->rx == ANTENNA_SW_DIVERSITY)
++ libconf.ant.rx = ANTENNA_B;
++
++ if (conf->antenna_sel_tx)
++ libconf.ant.tx = conf->antenna_sel_tx;
++ else if (default_ant->tx != ANTENNA_SW_DIVERSITY)
++ libconf.ant.tx = default_ant->tx;
++ else if (active_ant->tx == ANTENNA_SW_DIVERSITY)
++ libconf.ant.tx = ANTENNA_B;
++ }
++
+ if (flags & CONFIG_UPDATE_SLOT_TIME) {
+ short_slot_time = conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME;
--static void if_usb_exit_module(void)
-+static void __exit if_usb_exit_module(void)
- {
-- struct usb_card_rec *cardp, *cardp_temp;
--
- lbs_deb_enter(LBS_DEB_MAIN);
+@@ -196,10 +271,17 @@ config:
+ if (flags & (CONFIG_UPDATE_CHANNEL | CONFIG_UPDATE_ANTENNA))
+ rt2x00lib_reset_link_tuner(rt2x00dev);
-- list_for_each_entry_safe(cardp, cardp_temp, &usb_devices, list) {
-- libertas_prepare_and_send_command(cardp->priv, CMD_802_11_RESET,
-- CMD_ACT_HALT, 0, 0, NULL);
-- }
+- rt2x00dev->curr_hwmode = libconf.phymode;
+- rt2x00dev->rx_status.phymode = conf->phymode;
++ if (flags & CONFIG_UPDATE_PHYMODE) {
++ rt2x00dev->curr_hwmode = libconf.phymode;
++ rt2x00dev->rx_status.phymode = conf->phymode;
++ }
++
+ rt2x00dev->rx_status.freq = conf->freq;
+ rt2x00dev->rx_status.channel = conf->channel;
+ rt2x00dev->tx_power = conf->power_level;
+- rt2x00dev->rx_status.antenna = conf->antenna_sel_rx;
++
++ if (flags & CONFIG_UPDATE_ANTENNA) {
++ rt2x00dev->link.ant.active.rx = libconf.ant.rx;
++ rt2x00dev->link.ant.active.tx = libconf.ant.tx;
++ }
+ }
+diff --git a/drivers/net/wireless/rt2x00/rt2x00debug.c b/drivers/net/wireless/rt2x00/rt2x00debug.c
+index 9275d6f..b44a9f4 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00debug.c
++++ b/drivers/net/wireless/rt2x00/rt2x00debug.c
+@@ -23,18 +23,15 @@
+ Abstract: rt2x00 debugfs specific routines.
+ */
+
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
-
-- /* API unregisters the driver from USB subsystem */
- usb_deregister(&if_usb_driver);
+ #include <linux/debugfs.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/poll.h>
+ #include <linux/uaccess.h>
- lbs_deb_leave(LBS_DEB_MAIN);
-@@ -1058,5 +981,5 @@ module_init(if_usb_init_module);
- module_exit(if_usb_exit_module);
+ #include "rt2x00.h"
+ #include "rt2x00lib.h"
++#include "rt2x00dump.h"
- MODULE_DESCRIPTION("8388 USB WLAN Driver");
--MODULE_AUTHOR("Marvell International Ltd.");
-+MODULE_AUTHOR("Marvell International Ltd. and Red Hat, Inc.");
- MODULE_LICENSE("GPL");
-diff --git a/drivers/net/wireless/libertas/if_usb.h b/drivers/net/wireless/libertas/if_usb.h
-index e07a10e..e4829a3 100644
---- a/drivers/net/wireless/libertas/if_usb.h
-+++ b/drivers/net/wireless/libertas/if_usb.h
-@@ -1,79 +1,76 @@
--#ifndef _LIBERTAS_IF_USB_H
--#define _LIBERTAS_IF_USB_H
-+#ifndef _LBS_IF_USB_H
-+#define _LBS_IF_USB_H
+ #define PRINT_LINE_LEN_MAX 32
--#include <linux/list.h>
-+#include <linux/wait.h>
-+#include <linux/timer.h>
+@@ -55,18 +52,22 @@ struct rt2x00debug_intf {
+ /*
+ * Debugfs entries for:
+ * - driver folder
+- * - driver file
+- * - chipset file
+- * - device flags file
+- * - register offset/value files
+- * - eeprom offset/value files
+- * - bbp offset/value files
+- * - rf offset/value files
++ * - driver file
++ * - chipset file
++ * - device flags file
++ * - register folder
++ * - csr offset/value files
++ * - eeprom offset/value files
++ * - bbp offset/value files
++ * - rf offset/value files
++ * - frame dump folder
++ * - frame dump file
+ */
+ struct dentry *driver_folder;
+ struct dentry *driver_entry;
+ struct dentry *chipset_entry;
+ struct dentry *dev_flags;
++ struct dentry *register_folder;
+ struct dentry *csr_off_entry;
+ struct dentry *csr_val_entry;
+ struct dentry *eeprom_off_entry;
+@@ -75,6 +76,24 @@ struct rt2x00debug_intf {
+ struct dentry *bbp_val_entry;
+ struct dentry *rf_off_entry;
+ struct dentry *rf_val_entry;
++ struct dentry *frame_folder;
++ struct dentry *frame_dump_entry;
+
-+struct lbs_private;
-
- /**
- * This file contains definition for USB interface.
- */
--#define CMD_TYPE_REQUEST 0xF00DFACE
--#define CMD_TYPE_DATA 0xBEADC0DE
--#define CMD_TYPE_INDICATION 0xBEEFFACE
-+#define CMD_TYPE_REQUEST 0xF00DFACE
-+#define CMD_TYPE_DATA 0xBEADC0DE
-+#define CMD_TYPE_INDICATION 0xBEEFFACE
-
--#define IPFIELD_ALIGN_OFFSET 2
-+#define IPFIELD_ALIGN_OFFSET 2
-
--#define BOOT_CMD_FW_BY_USB 0x01
--#define BOOT_CMD_FW_IN_EEPROM 0x02
--#define BOOT_CMD_UPDATE_BOOT2 0x03
--#define BOOT_CMD_UPDATE_FW 0x04
--#define BOOT_CMD_MAGIC_NUMBER 0x4C56524D /* M=>0x4D,R=>0x52,V=>0x56,L=>0x4C */
-+#define BOOT_CMD_FW_BY_USB 0x01
-+#define BOOT_CMD_FW_IN_EEPROM 0x02
-+#define BOOT_CMD_UPDATE_BOOT2 0x03
-+#define BOOT_CMD_UPDATE_FW 0x04
-+#define BOOT_CMD_MAGIC_NUMBER 0x4C56524D /* LVRM */
++ /*
++ * The frame dump file only allows a single reader,
++ * so we need to store the current state here.
++ */
++ unsigned long frame_dump_flags;
++#define FRAME_DUMP_FILE_OPEN 1
++
++ /*
++ * We queue each frame before dumping it to the user.
++ * Each read command passes a single skb structure,
++ * so we should be prepared to queue multiple sk buffers
++ * before sending them to userspace.
++ */
++ struct sk_buff_head frame_dump_skbqueue;
++ wait_queue_head_t frame_dump_waitqueue;
--struct bootcmdstr
-+struct bootcmd
- {
-- __le32 u32magicnumber;
-- u8 u8cmd_tag;
-- u8 au8dumy[11];
-+ __le32 magic;
-+ uint8_t cmd;
-+ uint8_t pad[11];
+ /*
+ * Driver and chipset files will use a data buffer
+@@ -93,6 +112,59 @@ struct rt2x00debug_intf {
+ unsigned int offset_rf;
};
--#define BOOT_CMD_RESP_OK 0x0001
--#define BOOT_CMD_RESP_FAIL 0x0000
-+#define BOOT_CMD_RESP_OK 0x0001
-+#define BOOT_CMD_RESP_FAIL 0x0000
-
--struct bootcmdrespStr
-+struct bootcmdresp
++void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev,
++ struct sk_buff *skb)
++{
++ struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
++ struct skb_desc *desc = get_skb_desc(skb);
++ struct sk_buff *skbcopy;
++ struct rt2x00dump_hdr *dump_hdr;
++ struct timeval timestamp;
++
++ do_gettimeofday(×tamp);
++
++ if (!test_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags))
++ return;
++
++ if (skb_queue_len(&intf->frame_dump_skbqueue) > 20) {
++ DEBUG(rt2x00dev, "txrx dump queue length exceeded.\n");
++ return;
++ }
++
++ skbcopy = alloc_skb(sizeof(*dump_hdr) + desc->desc_len + desc->data_len,
++ GFP_ATOMIC);
++ if (!skbcopy) {
++ DEBUG(rt2x00dev, "Failed to copy skb for dump.\n");
++ return;
++ }
++
++ dump_hdr = (struct rt2x00dump_hdr *)skb_put(skbcopy, sizeof(*dump_hdr));
++ dump_hdr->version = cpu_to_le32(DUMP_HEADER_VERSION);
++ dump_hdr->header_length = cpu_to_le32(sizeof(*dump_hdr));
++ dump_hdr->desc_length = cpu_to_le32(desc->desc_len);
++ dump_hdr->data_length = cpu_to_le32(desc->data_len);
++ dump_hdr->chip_rt = cpu_to_le16(rt2x00dev->chip.rt);
++ dump_hdr->chip_rf = cpu_to_le16(rt2x00dev->chip.rf);
++ dump_hdr->chip_rev = cpu_to_le32(rt2x00dev->chip.rev);
++ dump_hdr->type = cpu_to_le16(desc->frame_type);
++ dump_hdr->ring_index = desc->ring->queue_idx;
++ dump_hdr->entry_index = desc->entry->entry_idx;
++ dump_hdr->timestamp_sec = cpu_to_le32(timestamp.tv_sec);
++ dump_hdr->timestamp_usec = cpu_to_le32(timestamp.tv_usec);
++
++ memcpy(skb_put(skbcopy, desc->desc_len), desc->desc, desc->desc_len);
++ memcpy(skb_put(skbcopy, desc->data_len), desc->data, desc->data_len);
++
++ skb_queue_tail(&intf->frame_dump_skbqueue, skbcopy);
++ wake_up_interruptible(&intf->frame_dump_waitqueue);
++
++ /*
++ * Verify that the file has not been closed while we were working.
++ */
++ if (!test_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags))
++ skb_queue_purge(&intf->frame_dump_skbqueue);
++}
++
+ static int rt2x00debug_file_open(struct inode *inode, struct file *file)
{
-- __le32 u32magicnumber;
-- u8 u8cmd_tag;
-- u8 u8result;
-- u8 au8dumy[2];
--};
--
--/* read callback private data */
--struct read_cb_info {
-- struct usb_card_rec *cardp;
-- struct sk_buff *skb;
-+ __le32 magic;
-+ uint8_t cmd;
-+ uint8_t result;
-+ uint8_t pad[2];
- };
-
- /** USB card description structure*/
--struct usb_card_rec {
-- struct list_head list;
-- struct net_device *eth_dev;
-+struct if_usb_card {
- struct usb_device *udev;
- struct urb *rx_urb, *tx_urb;
-- void *priv;
-- struct read_cb_info rinfo;
-+ struct lbs_private *priv;
-
-- int bulk_in_size;
-- u8 bulk_in_endpointAddr;
-+ struct sk_buff *rx_skb;
-+ uint32_t usb_event_cause;
-+ uint8_t usb_int_cause;
-
-- u8 *bulk_out_buffer;
-- int bulk_out_size;
-- u8 bulk_out_endpointAddr;
-+ uint8_t ep_in;
-+ uint8_t ep_out;
+ struct rt2x00debug_intf *intf = inode->i_private;
+@@ -114,13 +186,96 @@ static int rt2x00debug_file_release(struct inode *inode, struct file *file)
+ return 0;
+ }
-- const struct firmware *fw;
-- u8 CRC_OK;
-- u32 fwseqnum;
-- u32 lastseqnum;
-- u32 totalbytes;
-- u32 fwlastblksent;
-- u8 fwdnldover;
-- u8 fwfinalblk;
-- u8 surprise_removed;
-+ int8_t bootcmdresp;
++static int rt2x00debug_open_ring_dump(struct inode *inode, struct file *file)
++{
++ struct rt2x00debug_intf *intf = inode->i_private;
++ int retval;
++
++ retval = rt2x00debug_file_open(inode, file);
++ if (retval)
++ return retval;
++
++ if (test_and_set_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags)) {
++ rt2x00debug_file_release(inode, file);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
++static int rt2x00debug_release_ring_dump(struct inode *inode, struct file *file)
++{
++ struct rt2x00debug_intf *intf = inode->i_private;
++
++ skb_queue_purge(&intf->frame_dump_skbqueue);
++
++ clear_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags);
++
++ return rt2x00debug_file_release(inode, file);
++}
++
++static ssize_t rt2x00debug_read_ring_dump(struct file *file,
++ char __user *buf,
++ size_t length,
++ loff_t *offset)
++{
++ struct rt2x00debug_intf *intf = file->private_data;
++ struct sk_buff *skb;
++ size_t status;
++ int retval;
++
++ if (file->f_flags & O_NONBLOCK)
++ return -EAGAIN;
++
++ retval =
++ wait_event_interruptible(intf->frame_dump_waitqueue,
++ (skb =
++ skb_dequeue(&intf->frame_dump_skbqueue)));
++ if (retval)
++ return retval;
++
++ status = min((size_t)skb->len, length);
++ if (copy_to_user(buf, skb->data, status)) {
++ status = -EFAULT;
++ goto exit;
++ }
++
++ *offset += status;
++
++exit:
++ kfree_skb(skb);
++
++ return status;
++}
++
++static unsigned int rt2x00debug_poll_ring_dump(struct file *file,
++ poll_table *wait)
++{
++ struct rt2x00debug_intf *intf = file->private_data;
++
++ poll_wait(file, &intf->frame_dump_waitqueue, wait);
++
++ if (!skb_queue_empty(&intf->frame_dump_skbqueue))
++ return POLLOUT | POLLWRNORM;
++
++ return 0;
++}
++
++static const struct file_operations rt2x00debug_fop_ring_dump = {
++ .owner = THIS_MODULE,
++ .read = rt2x00debug_read_ring_dump,
++ .poll = rt2x00debug_poll_ring_dump,
++ .open = rt2x00debug_open_ring_dump,
++ .release = rt2x00debug_release_ring_dump,
++};
++
+ #define RT2X00DEBUGFS_OPS_READ(__name, __format, __type) \
+ static ssize_t rt2x00debug_read_##__name(struct file *file, \
+ char __user *buf, \
+ size_t length, \
+ loff_t *offset) \
+ { \
+- struct rt2x00debug_intf *intf = file->private_data; \
++ struct rt2x00debug_intf *intf = file->private_data; \
+ const struct rt2x00debug *debug = intf->debug; \
+ char line[16]; \
+ size_t size; \
+@@ -150,7 +305,7 @@ static ssize_t rt2x00debug_write_##__name(struct file *file, \
+ size_t length, \
+ loff_t *offset) \
+ { \
+- struct rt2x00debug_intf *intf = file->private_data; \
++ struct rt2x00debug_intf *intf = file->private_data; \
+ const struct rt2x00debug *debug = intf->debug; \
+ char line[16]; \
+ size_t size; \
+@@ -254,11 +409,15 @@ static struct dentry *rt2x00debug_create_file_chipset(const char *name,
+ const struct rt2x00debug *debug = intf->debug;
+ char *data;
-- u32 usb_event_cause;
-- u8 usb_int_cause;
-+ int ep_in_size;
+- data = kzalloc(4 * PRINT_LINE_LEN_MAX, GFP_KERNEL);
++ data = kzalloc(8 * PRINT_LINE_LEN_MAX, GFP_KERNEL);
+ if (!data)
+ return NULL;
-- u8 rx_urb_recall;
-+ void *ep_out_buf;
-+ int ep_out_size;
+ blob->data = data;
++ data += sprintf(data, "rt chip: %04x\n", intf->rt2x00dev->chip.rt);
++ data += sprintf(data, "rf chip: %04x\n", intf->rt2x00dev->chip.rf);
++ data += sprintf(data, "revision:%08x\n", intf->rt2x00dev->chip.rev);
++ data += sprintf(data, "\n");
+ data += sprintf(data, "csr length: %d\n", debug->csr.word_count);
+ data += sprintf(data, "eeprom length: %d\n", debug->eeprom.word_count);
+ data += sprintf(data, "bbp length: %d\n", debug->bbp.word_count);
+@@ -306,12 +465,17 @@ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
+ if (IS_ERR(intf->dev_flags))
+ goto exit;
-- u8 bootcmdresp;
-+ const struct firmware *fw;
-+ struct timer_list fw_timeout;
-+ wait_queue_head_t fw_wq;
-+ uint32_t fwseqnum;
-+ uint32_t totalbytes;
-+ uint32_t fwlastblksent;
-+ uint8_t CRC_OK;
-+ uint8_t fwdnldover;
-+ uint8_t fwfinalblk;
-+ uint8_t surprise_removed;
+-#define RT2X00DEBUGFS_CREATE_ENTRY(__intf, __name) \
++ intf->register_folder =
++ debugfs_create_dir("register", intf->driver_folder);
++ if (IS_ERR(intf->register_folder))
++ goto exit;
+
-+ __le16 boot2_version;
- };
++#define RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(__intf, __name) \
+ ({ \
+ (__intf)->__name##_off_entry = \
+ debugfs_create_u32(__stringify(__name) "_offset", \
+ S_IRUGO | S_IWUSR, \
+- (__intf)->driver_folder, \
++ (__intf)->register_folder, \
+ &(__intf)->offset_##__name); \
+ if (IS_ERR((__intf)->__name##_off_entry)) \
+ goto exit; \
+@@ -319,18 +483,32 @@ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
+ (__intf)->__name##_val_entry = \
+ debugfs_create_file(__stringify(__name) "_value", \
+ S_IRUGO | S_IWUSR, \
+- (__intf)->driver_folder, \
++ (__intf)->register_folder, \
+ (__intf), &rt2x00debug_fop_##__name);\
+ if (IS_ERR((__intf)->__name##_val_entry)) \
+ goto exit; \
+ })
- /** fwheader */
-@@ -86,10 +83,10 @@ struct fwheader {
+- RT2X00DEBUGFS_CREATE_ENTRY(intf, csr);
+- RT2X00DEBUGFS_CREATE_ENTRY(intf, eeprom);
+- RT2X00DEBUGFS_CREATE_ENTRY(intf, bbp);
+- RT2X00DEBUGFS_CREATE_ENTRY(intf, rf);
++ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, csr);
++ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, eeprom);
++ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, bbp);
++ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, rf);
- #define FW_MAX_DATA_BLK_SIZE 600
- /** FWData */
--struct FWData {
-- struct fwheader fwheader;
-+struct fwdata {
-+ struct fwheader hdr;
- __le32 seqnum;
-- u8 data[FW_MAX_DATA_BLK_SIZE];
-+ uint8_t data[0];
- };
+-#undef RT2X00DEBUGFS_CREATE_ENTRY
++#undef RT2X00DEBUGFS_CREATE_REGISTER_ENTRY
++
++ intf->frame_folder =
++ debugfs_create_dir("frame", intf->driver_folder);
++ if (IS_ERR(intf->frame_folder))
++ goto exit;
++
++ intf->frame_dump_entry =
++ debugfs_create_file("dump", S_IRUGO, intf->frame_folder,
++ intf, &rt2x00debug_fop_ring_dump);
++ if (IS_ERR(intf->frame_dump_entry))
++ goto exit;
++
++ skb_queue_head_init(&intf->frame_dump_skbqueue);
++ init_waitqueue_head(&intf->frame_dump_waitqueue);
- /** fwsyncheader */
-@@ -101,7 +98,5 @@ struct fwsyncheader {
- #define FW_HAS_DATA_TO_RECV 0x00000001
- #define FW_HAS_LAST_BLOCK 0x00000004
+ return;
--#define FW_DATA_XMIT_SIZE \
-- sizeof(struct fwheader) + le32_to_cpu(fwdata->fwheader.datalength) + sizeof(u32)
+@@ -343,11 +521,15 @@ exit:
- #endif
-diff --git a/drivers/net/wireless/libertas/join.c b/drivers/net/wireless/libertas/join.c
-index dc24a05..2d45080 100644
---- a/drivers/net/wireless/libertas/join.c
-+++ b/drivers/net/wireless/libertas/join.c
-@@ -30,16 +30,18 @@
- * NOTE: Setting the MSB of the basic rates need to be taken
- * care, either before or after calling this function
- *
-- * @param adapter A pointer to wlan_adapter structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param rate1 the buffer which keeps input and output
- * @param rate1_size the size of rate1 buffer; new size of buffer on return
- *
- * @return 0 or -1
- */
--static int get_common_rates(wlan_adapter * adapter, u8 * rates, u16 *rates_size)
-+static int get_common_rates(struct lbs_private *priv,
-+ u8 *rates,
-+ u16 *rates_size)
+ void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
{
-- u8 *card_rates = libertas_bg_rates;
-- size_t num_card_rates = sizeof(libertas_bg_rates);
-+ u8 *card_rates = lbs_bg_rates;
-+ size_t num_card_rates = sizeof(lbs_bg_rates);
- int ret = 0, i, j;
- u8 tmp[30];
- size_t tmp_size = 0;
-@@ -55,15 +57,15 @@ static int get_common_rates(wlan_adapter * adapter, u8 * rates, u16 *rates_size)
- lbs_deb_hex(LBS_DEB_JOIN, "AP rates ", rates, *rates_size);
- lbs_deb_hex(LBS_DEB_JOIN, "card rates ", card_rates, num_card_rates);
- lbs_deb_hex(LBS_DEB_JOIN, "common rates", tmp, tmp_size);
-- lbs_deb_join("Tx datarate is currently 0x%X\n", adapter->cur_rate);
-+ lbs_deb_join("TX data rate 0x%02x\n", priv->cur_rate);
+- const struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
++ struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
-- if (!adapter->auto_rate) {
-+ if (!priv->auto_rate) {
- for (i = 0; i < tmp_size; i++) {
-- if (tmp[i] == adapter->cur_rate)
-+ if (tmp[i] == priv->cur_rate)
- goto done;
- }
- lbs_pr_alert("Previously set fixed data rate %#x isn't "
-- "compatible with the network.\n", adapter->cur_rate);
-+ "compatible with the network.\n", priv->cur_rate);
- ret = -1;
- goto done;
- }
-@@ -85,7 +87,7 @@ done:
- * @param rates buffer of data rates
- * @param len size of buffer
- */
--static void libertas_set_basic_rate_flags(u8 * rates, size_t len)
-+static void lbs_set_basic_rate_flags(u8 *rates, size_t len)
- {
- int i;
+ if (unlikely(!intf))
+ return;
-@@ -104,7 +106,7 @@ static void libertas_set_basic_rate_flags(u8 * rates, size_t len)
- * @param rates buffer of data rates
- * @param len size of buffer
- */
--void libertas_unset_basic_rate_flags(u8 * rates, size_t len)
-+void lbs_unset_basic_rate_flags(u8 *rates, size_t len)
- {
- int i;
++ skb_queue_purge(&intf->frame_dump_skbqueue);
++
++ debugfs_remove(intf->frame_dump_entry);
++ debugfs_remove(intf->frame_folder);
+ debugfs_remove(intf->rf_val_entry);
+ debugfs_remove(intf->rf_off_entry);
+ debugfs_remove(intf->bbp_val_entry);
+@@ -356,6 +538,7 @@ void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
+ debugfs_remove(intf->eeprom_off_entry);
+ debugfs_remove(intf->csr_val_entry);
+ debugfs_remove(intf->csr_off_entry);
++ debugfs_remove(intf->register_folder);
+ debugfs_remove(intf->dev_flags);
+ debugfs_remove(intf->chipset_entry);
+ debugfs_remove(intf->driver_entry);
+diff --git a/drivers/net/wireless/rt2x00/rt2x00debug.h b/drivers/net/wireless/rt2x00/rt2x00debug.h
+index 860e8fa..d37efbd 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00debug.h
++++ b/drivers/net/wireless/rt2x00/rt2x00debug.h
+@@ -30,9 +30,9 @@ struct rt2x00_dev;
-@@ -116,19 +118,18 @@ void libertas_unset_basic_rate_flags(u8 * rates, size_t len)
- /**
- * @brief Associate to a specific BSS discovered in a scan
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param pbssdesc Pointer to the BSS descriptor to associate with.
- *
- * @return 0-success, otherwise fail
+ #define RT2X00DEBUGFS_REGISTER_ENTRY(__name, __type) \
+ struct reg##__name { \
+- void (*read)(const struct rt2x00_dev *rt2x00dev, \
++ void (*read)(struct rt2x00_dev *rt2x00dev, \
+ const unsigned int word, __type *data); \
+- void (*write)(const struct rt2x00_dev *rt2x00dev, \
++ void (*write)(struct rt2x00_dev *rt2x00dev, \
+ const unsigned int word, __type data); \
+ \
+ unsigned int word_size; \
+diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c
+index ff399f8..c4be2ac 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00dev.c
++++ b/drivers/net/wireless/rt2x00/rt2x00dev.c
+@@ -23,16 +23,12 @@
+ Abstract: rt2x00 generic device routines.
*/
--int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req)
-+int lbs_associate(struct lbs_private *priv, struct assoc_request *assoc_req)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret;
-
-- lbs_deb_enter(LBS_DEB_JOIN);
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AUTHENTICATE,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AUTHENTICATE,
- 0, CMD_OPTION_WAITFORRSP,
- 0, assoc_req->bss.bssid);
-
-@@ -136,50 +137,50 @@ int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req)
- goto done;
-
- /* set preamble to firmware */
-- if ( (adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
-+ if ( (priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
- && (assoc_req->bss.capability & WLAN_CAPABILITY_SHORT_PREAMBLE))
-- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
-+ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
- else
-- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
-+ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
-- libertas_set_radio_control(priv);
-+ lbs_set_radio_control(priv);
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_ASSOCIATE,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_ASSOCIATE,
- 0, CMD_OPTION_WAITFORRSP, 0, assoc_req);
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
+-
+ #include <linux/kernel.h>
+ #include <linux/module.h>
- done:
-- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return ret;
- }
+ #include "rt2x00.h"
+ #include "rt2x00lib.h"
++#include "rt2x00dump.h"
- /**
- * @brief Start an Adhoc Network
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param adhocssid The ssid of the Adhoc Network
- * @return 0--success, -1--fail
+ /*
+ * Ring handler.
+@@ -67,7 +63,21 @@ EXPORT_SYMBOL_GPL(rt2x00lib_get_ring);
*/
--int libertas_start_adhoc_network(wlan_private * priv, struct assoc_request * assoc_req)
-+int lbs_start_adhoc_network(struct lbs_private *priv,
-+ struct assoc_request *assoc_req)
+ static void rt2x00lib_start_link_tuner(struct rt2x00_dev *rt2x00dev)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
-
-- adapter->adhoccreate = 1;
-+ priv->adhoccreate = 1;
-
-- if (adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE) {
-+ if (priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE) {
- lbs_deb_join("AdhocStart: Short preamble\n");
-- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
-+ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
- } else {
- lbs_deb_join("AdhocStart: Long preamble\n");
-- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
-+ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
- }
-
-- libertas_set_radio_control(priv);
-+ lbs_set_radio_control(priv);
-
- lbs_deb_join("AdhocStart: channel = %d\n", assoc_req->channel);
- lbs_deb_join("AdhocStart: band = %d\n", assoc_req->band);
+- rt2x00_clear_link(&rt2x00dev->link);
++ rt2x00dev->link.count = 0;
++ rt2x00dev->link.vgc_level = 0;
++
++ memset(&rt2x00dev->link.qual, 0, sizeof(rt2x00dev->link.qual));
++
++ /*
++ * The RX and TX percentage should start at 50%;
++ * this assures we get at least some
++ * decent value when the link tuner starts.
++ * The value will be dropped and overwritten with
++ * the correct (measured) value anyway during the
++ * first run of the link tuner.
++ */
++ rt2x00dev->link.qual.rx_percentage = 50;
++ rt2x00dev->link.qual.tx_percentage = 50;
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_START,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_START,
- 0, CMD_OPTION_WAITFORRSP, 0, assoc_req);
+ /*
+ * Reset the link tuner.
+@@ -93,6 +103,46 @@ void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev)
+ }
- return ret;
-@@ -188,34 +189,34 @@ int libertas_start_adhoc_network(wlan_private * priv, struct assoc_request * ass
- /**
- * @brief Join an adhoc network found in a previous scan
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param pbssdesc Pointer to a BSS descriptor found in a previous scan
- * to attempt to join
- *
- * @return 0--success, -1--fail
+ /*
++ * Ring initialization
++ */
++static void rt2x00lib_init_rxrings(struct rt2x00_dev *rt2x00dev)
++{
++ struct data_ring *ring = rt2x00dev->rx;
++ unsigned int i;
++
++ if (!rt2x00dev->ops->lib->init_rxentry)
++ return;
++
++ if (ring->data_addr)
++ memset(ring->data_addr, 0, rt2x00_get_ring_size(ring));
++
++ for (i = 0; i < ring->stats.limit; i++)
++ rt2x00dev->ops->lib->init_rxentry(rt2x00dev, &ring->entry[i]);
++
++ rt2x00_ring_index_clear(ring);
++}
++
++static void rt2x00lib_init_txrings(struct rt2x00_dev *rt2x00dev)
++{
++ struct data_ring *ring;
++ unsigned int i;
++
++ if (!rt2x00dev->ops->lib->init_txentry)
++ return;
++
++ txringall_for_each(rt2x00dev, ring) {
++ if (ring->data_addr)
++ memset(ring->data_addr, 0, rt2x00_get_ring_size(ring));
++
++ for (i = 0; i < ring->stats.limit; i++)
++ rt2x00dev->ops->lib->init_txentry(rt2x00dev,
++ &ring->entry[i]);
++
++ rt2x00_ring_index_clear(ring);
++ }
++}
++
++/*
+ * Radio control handlers.
*/
--int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * assoc_req)
-+int lbs_join_adhoc_network(struct lbs_private *priv,
-+ struct assoc_request *assoc_req)
- {
-- wlan_adapter *adapter = priv->adapter;
- struct bss_descriptor * bss = &assoc_req->bss;
- int ret = 0;
+ int rt2x00lib_enable_radio(struct rt2x00_dev *rt2x00dev)
+@@ -108,6 +158,12 @@ int rt2x00lib_enable_radio(struct rt2x00_dev *rt2x00dev)
+ return 0;
- lbs_deb_join("%s: Current SSID '%s', ssid length %u\n",
- __func__,
-- escape_essid(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len),
-- adapter->curbssparams.ssid_len);
-+ escape_essid(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len),
-+ priv->curbssparams.ssid_len);
- lbs_deb_join("%s: requested ssid '%s', ssid length %u\n",
- __func__, escape_essid(bss->ssid, bss->ssid_len),
- bss->ssid_len);
+ /*
++ * Initialize all data rings.
++ */
++ rt2x00lib_init_rxrings(rt2x00dev);
++ rt2x00lib_init_txrings(rt2x00dev);
++
++ /*
+ * Enable radio.
+ */
+ status = rt2x00dev->ops->lib->set_device_state(rt2x00dev,
+@@ -179,26 +235,153 @@ void rt2x00lib_toggle_rx(struct rt2x00_dev *rt2x00dev, enum dev_state state)
+ rt2x00lib_start_link_tuner(rt2x00dev);
+ }
- /* check if the requested SSID is already joined */
-- if ( adapter->curbssparams.ssid_len
-- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len,
-+ if ( priv->curbssparams.ssid_len
-+ && !lbs_ssid_cmp(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len,
- bss->ssid, bss->ssid_len)
-- && (adapter->mode == IW_MODE_ADHOC)
-- && (adapter->connect_status == LIBERTAS_CONNECTED)) {
-+ && (priv->mode == IW_MODE_ADHOC)
-+ && (priv->connect_status == LBS_CONNECTED)) {
- union iwreq_data wrqu;
+-static void rt2x00lib_precalculate_link_signal(struct link *link)
++static void rt2x00lib_evaluate_antenna_sample(struct rt2x00_dev *rt2x00dev)
+ {
+- if (link->rx_failed || link->rx_success)
+- link->rx_percentage =
+- (link->rx_success * 100) /
+- (link->rx_failed + link->rx_success);
++ enum antenna rx = rt2x00dev->link.ant.active.rx;
++ enum antenna tx = rt2x00dev->link.ant.active.tx;
++ int sample_a =
++ rt2x00_get_link_ant_rssi_history(&rt2x00dev->link, ANTENNA_A);
++ int sample_b =
++ rt2x00_get_link_ant_rssi_history(&rt2x00dev->link, ANTENNA_B);
++
++ /*
++ * We are done sampling. Now we should evaluate the results.
++ */
++ rt2x00dev->link.ant.flags &= ~ANTENNA_MODE_SAMPLE;
++
++ /*
++ * During the last period we have sampled the RSSI
++ * from both antenna's. It now is time to determine
++ * which antenna demonstrated the best performance.
++ * When we are already on the antenna with the best
++ * performance, then there really is nothing for us
++ * left to do.
++ */
++ if (sample_a == sample_b)
++ return;
++
++ if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) {
++ if (sample_a > sample_b && rx == ANTENNA_B)
++ rx = ANTENNA_A;
++ else if (rx == ANTENNA_A)
++ rx = ANTENNA_B;
++ }
++
++ if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY) {
++ if (sample_a > sample_b && tx == ANTENNA_B)
++ tx = ANTENNA_A;
++ else if (tx == ANTENNA_A)
++ tx = ANTENNA_B;
++ }
++
++ rt2x00lib_config_antenna(rt2x00dev, rx, tx);
++}
++
++static void rt2x00lib_evaluate_antenna_eval(struct rt2x00_dev *rt2x00dev)
++{
++ enum antenna rx = rt2x00dev->link.ant.active.rx;
++ enum antenna tx = rt2x00dev->link.ant.active.tx;
++ int rssi_curr = rt2x00_get_link_ant_rssi(&rt2x00dev->link);
++ int rssi_old = rt2x00_update_ant_rssi(&rt2x00dev->link, rssi_curr);
++
++ /*
++ * Legacy driver indicates that we should swap antenna's
++ * when the difference in RSSI is greater that 5. This
++ * also should be done when the RSSI was actually better
++ * then the previous sample.
++ * When the difference exceeds the threshold we should
++ * sample the rssi from the other antenna to make a valid
++ * comparison between the 2 antennas.
++ */
++ if ((rssi_curr - rssi_old) > -5 || (rssi_curr - rssi_old) < 5)
++ return;
++
++ rt2x00dev->link.ant.flags |= ANTENNA_MODE_SAMPLE;
++
++ if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY)
++ rx = (rx == ANTENNA_A) ? ANTENNA_B : ANTENNA_A;
++
++ if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)
++ tx = (tx == ANTENNA_A) ? ANTENNA_B : ANTENNA_A;
++
++ rt2x00lib_config_antenna(rt2x00dev, rx, tx);
++}
++
++static void rt2x00lib_evaluate_antenna(struct rt2x00_dev *rt2x00dev)
++{
++ /*
++ * Determine if software diversity is enabled for
++ * either the TX or RX antenna (or both).
++ * Always perform this check since within the link
++ * tuner interval the configuration might have changed.
++ */
++ rt2x00dev->link.ant.flags &= ~ANTENNA_RX_DIVERSITY;
++ rt2x00dev->link.ant.flags &= ~ANTENNA_TX_DIVERSITY;
++
++ if (rt2x00dev->hw->conf.antenna_sel_rx == 0 &&
++ rt2x00dev->default_ant.rx != ANTENNA_SW_DIVERSITY)
++ rt2x00dev->link.ant.flags |= ANTENNA_RX_DIVERSITY;
++ if (rt2x00dev->hw->conf.antenna_sel_tx == 0 &&
++ rt2x00dev->default_ant.tx != ANTENNA_SW_DIVERSITY)
++ rt2x00dev->link.ant.flags |= ANTENNA_TX_DIVERSITY;
++
++ if (!(rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) &&
++ !(rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)) {
++ rt2x00dev->link.ant.flags &= ~ANTENNA_MODE_SAMPLE;
++ return;
++ }
++
++ /*
++ * If we have only sampled the data over the last period
++ * we should now harvest the data. Otherwise just evaluate
++ * the data. The latter should only be performed once
++ * every 2 seconds.
++ */
++ if (rt2x00dev->link.ant.flags & ANTENNA_MODE_SAMPLE)
++ rt2x00lib_evaluate_antenna_sample(rt2x00dev);
++ else if (rt2x00dev->link.count & 1)
++ rt2x00lib_evaluate_antenna_eval(rt2x00dev);
++}
++
++static void rt2x00lib_update_link_stats(struct link *link, int rssi)
++{
++ int avg_rssi = rssi;
++
++ /*
++ * Update global RSSI
++ */
++ if (link->qual.avg_rssi)
++ avg_rssi = MOVING_AVERAGE(link->qual.avg_rssi, rssi, 8);
++ link->qual.avg_rssi = avg_rssi;
++
++ /*
++ * Update antenna RSSI
++ */
++ if (link->ant.rssi_ant)
++ rssi = MOVING_AVERAGE(link->ant.rssi_ant, rssi, 8);
++ link->ant.rssi_ant = rssi;
++}
++
++static void rt2x00lib_precalculate_link_signal(struct link_qual *qual)
++{
++ if (qual->rx_failed || qual->rx_success)
++ qual->rx_percentage =
++ (qual->rx_success * 100) /
++ (qual->rx_failed + qual->rx_success);
+ else
+- link->rx_percentage = 50;
++ qual->rx_percentage = 50;
- lbs_deb_join("ADHOC_J_CMD: New ad-hoc SSID is the same as "
-@@ -225,7 +226,7 @@ int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * asso
- * request really was successful, even if just a null-op.
- */
- memset(&wrqu, 0, sizeof(wrqu));
-- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid,
-+ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid,
- ETH_ALEN);
- wrqu.ap_addr.sa_family = ARPHRD_ETHER;
- wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
-@@ -235,22 +236,22 @@ int libertas_join_adhoc_network(wlan_private * priv, struct assoc_request * asso
- /* Use shortpreamble only when both creator and card supports
- short preamble */
- if ( !(bss->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)
-- || !(adapter->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)) {
-+ || !(priv->capability & WLAN_CAPABILITY_SHORT_PREAMBLE)) {
- lbs_deb_join("AdhocJoin: Long preamble\n");
-- adapter->preamble = CMD_TYPE_LONG_PREAMBLE;
-+ priv->preamble = CMD_TYPE_LONG_PREAMBLE;
- } else {
- lbs_deb_join("AdhocJoin: Short preamble\n");
-- adapter->preamble = CMD_TYPE_SHORT_PREAMBLE;
-+ priv->preamble = CMD_TYPE_SHORT_PREAMBLE;
- }
+- if (link->tx_failed || link->tx_success)
+- link->tx_percentage =
+- (link->tx_success * 100) /
+- (link->tx_failed + link->tx_success);
++ if (qual->tx_failed || qual->tx_success)
++ qual->tx_percentage =
++ (qual->tx_success * 100) /
++ (qual->tx_failed + qual->tx_success);
+ else
+- link->tx_percentage = 50;
++ qual->tx_percentage = 50;
-- libertas_set_radio_control(priv);
-+ lbs_set_radio_control(priv);
+- link->rx_success = 0;
+- link->rx_failed = 0;
+- link->tx_success = 0;
+- link->tx_failed = 0;
++ qual->rx_success = 0;
++ qual->rx_failed = 0;
++ qual->tx_success = 0;
++ qual->tx_failed = 0;
+ }
- lbs_deb_join("AdhocJoin: channel = %d\n", assoc_req->channel);
- lbs_deb_join("AdhocJoin: band = %c\n", assoc_req->band);
+ static int rt2x00lib_calculate_link_signal(struct rt2x00_dev *rt2x00dev,
+@@ -225,8 +408,8 @@ static int rt2x00lib_calculate_link_signal(struct rt2x00_dev *rt2x00dev,
+ * defines to calculate the current link signal.
+ */
+ signal = ((WEIGHT_RSSI * rssi_percentage) +
+- (WEIGHT_TX * rt2x00dev->link.tx_percentage) +
+- (WEIGHT_RX * rt2x00dev->link.rx_percentage)) / 100;
++ (WEIGHT_TX * rt2x00dev->link.qual.tx_percentage) +
++ (WEIGHT_RX * rt2x00dev->link.qual.rx_percentage)) / 100;
-- adapter->adhoccreate = 0;
-+ priv->adhoccreate = 0;
+ return (signal > 100) ? 100 : signal;
+ }
+@@ -246,10 +429,9 @@ static void rt2x00lib_link_tuner(struct work_struct *work)
+ /*
+ * Update statistics.
+ */
+- rt2x00dev->ops->lib->link_stats(rt2x00dev);
+-
++ rt2x00dev->ops->lib->link_stats(rt2x00dev, &rt2x00dev->link.qual);
+ rt2x00dev->low_level_stats.dot11FCSErrorCount +=
+- rt2x00dev->link.rx_failed;
++ rt2x00dev->link.qual.rx_failed;
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_JOIN,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_JOIN,
- 0, CMD_OPTION_WAITFORRSP,
- OID_802_11_SSID, assoc_req);
+ /*
+ * Only perform the link tuning when Link tuning
+@@ -259,10 +441,15 @@ static void rt2x00lib_link_tuner(struct work_struct *work)
+ rt2x00dev->ops->lib->link_tuner(rt2x00dev);
-@@ -258,38 +259,37 @@ out:
- return ret;
- }
+ /*
++ * Evaluate antenna setup.
++ */
++ rt2x00lib_evaluate_antenna(rt2x00dev);
++
++ /*
+ * Precalculate a portion of the link signal which is
+ * in based on the tx/rx success/failure counters.
+ */
+- rt2x00lib_precalculate_link_signal(&rt2x00dev->link);
++ rt2x00lib_precalculate_link_signal(&rt2x00dev->link.qual);
--int libertas_stop_adhoc_network(wlan_private * priv)
-+int lbs_stop_adhoc_network(struct lbs_private *priv)
+ /*
+ * Increase tuner counter, and reschedule the next link tuner run.
+@@ -276,7 +463,7 @@ static void rt2x00lib_packetfilter_scheduled(struct work_struct *work)
{
-- return libertas_prepare_and_send_command(priv, CMD_802_11_AD_HOC_STOP,
-+ return lbs_prepare_and_send_command(priv, CMD_802_11_AD_HOC_STOP,
- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
- }
+ struct rt2x00_dev *rt2x00dev =
+ container_of(work, struct rt2x00_dev, filter_work);
+- unsigned int filter = rt2x00dev->interface.filter;
++ unsigned int filter = rt2x00dev->packet_filter;
- /**
- * @brief Send Deauthentication Request
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return 0--success, -1--fail
- */
--int libertas_send_deauthentication(wlan_private * priv)
-+int lbs_send_deauthentication(struct lbs_private *priv)
- {
-- return libertas_prepare_and_send_command(priv, CMD_802_11_DEAUTHENTICATE,
-+ return lbs_prepare_and_send_command(priv, CMD_802_11_DEAUTHENTICATE,
- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
- }
+ /*
+ * Since we had stored the filter inside interface.filter,
+@@ -284,7 +471,7 @@ static void rt2x00lib_packetfilter_scheduled(struct work_struct *work)
+ * assume nothing has changed (*total_flags will be compared
+ * to interface.filter to determine if any action is required).
+ */
+- rt2x00dev->interface.filter = 0;
++ rt2x00dev->packet_filter = 0;
- /**
- * @brief This function prepares command of authenticate.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param cmd A pointer to cmd_ds_command structure
- * @param pdata_buf Void cast of pointer to a BSSID to authenticate with
- *
- * @return 0 or -1
- */
--int libertas_cmd_80211_authenticate(wlan_private * priv,
-+int lbs_cmd_80211_authenticate(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf)
+ rt2x00dev->ops->hw->configure_filter(rt2x00dev->hw,
+ filter, &filter, 0, NULL);
+@@ -294,10 +481,17 @@ static void rt2x00lib_configuration_scheduled(struct work_struct *work)
{
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_authenticate *pauthenticate = &cmd->params.auth;
- int ret = -1;
- u8 *bssid = pdata_buf;
-@@ -302,7 +302,7 @@ int libertas_cmd_80211_authenticate(wlan_private * priv,
- + S_DS_GEN);
+ struct rt2x00_dev *rt2x00dev =
+ container_of(work, struct rt2x00_dev, config_work);
+- int preamble = !test_bit(CONFIG_SHORT_PREAMBLE, &rt2x00dev->flags);
++ struct ieee80211_bss_conf bss_conf;
- /* translate auth mode to 802.11 defined wire value */
-- switch (adapter->secinfo.auth_mode) {
-+ switch (priv->secinfo.auth_mode) {
- case IW_AUTH_ALG_OPEN_SYSTEM:
- pauthenticate->authtype = 0x00;
- break;
-@@ -314,13 +314,13 @@ int libertas_cmd_80211_authenticate(wlan_private * priv,
- break;
- default:
- lbs_deb_join("AUTH_CMD: invalid auth alg 0x%X\n",
-- adapter->secinfo.auth_mode);
-+ priv->secinfo.auth_mode);
- goto out;
- }
+- rt2x00mac_erp_ie_changed(rt2x00dev->hw,
+- IEEE80211_ERP_CHANGE_PREAMBLE, 0, preamble);
++ bss_conf.use_short_preamble =
++ test_bit(CONFIG_SHORT_PREAMBLE, &rt2x00dev->flags);
++
++ /*
++ * FIXME: shouldn't invoke it this way because all other contents
++ * of bss_conf is invalid.
++ */
++ rt2x00mac_bss_info_changed(rt2x00dev->hw, rt2x00dev->interface.id,
++ &bss_conf, BSS_CHANGED_ERP_PREAMBLE);
+ }
- memcpy(pauthenticate->macaddr, bssid, ETH_ALEN);
+ /*
+@@ -350,8 +544,8 @@ void rt2x00lib_txdone(struct data_entry *entry,
+ tx_status->ack_signal = 0;
+ tx_status->excessive_retries = (status == TX_FAIL_RETRY);
+ tx_status->retry_count = retry;
+- rt2x00dev->link.tx_success += success;
+- rt2x00dev->link.tx_failed += retry + fail;
++ rt2x00dev->link.qual.tx_success += success;
++ rt2x00dev->link.qual.tx_failed += retry + fail;
-- lbs_deb_join("AUTH_CMD: BSSID is : %s auth=0x%X\n",
-+ lbs_deb_join("AUTH_CMD: BSSID %s, auth 0x%x\n",
- print_mac(mac, bssid), pauthenticate->authtype);
- ret = 0;
+ if (!(tx_status->control.flags & IEEE80211_TXCTL_NO_ACK)) {
+ if (success)
+@@ -371,9 +565,11 @@ void rt2x00lib_txdone(struct data_entry *entry,
+ }
-@@ -329,10 +329,9 @@ out:
- return ret;
+ /*
+- * Send the tx_status to mac80211,
+- * that method also cleans up the skb structure.
++ * Send the tx_status to mac80211 & debugfs.
++ * mac80211 will clean up the skb structure.
+ */
++ get_skb_desc(entry->skb)->frame_type = DUMP_FRAME_TXDONE;
++ rt2x00debug_dump_frame(rt2x00dev, entry->skb);
+ ieee80211_tx_status_irqsafe(rt2x00dev->hw, entry->skb, tx_status);
+ entry->skb = NULL;
}
+@@ -386,8 +582,10 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
+ struct ieee80211_rx_status *rx_status = &rt2x00dev->rx_status;
+ struct ieee80211_hw_mode *mode;
+ struct ieee80211_rate *rate;
++ struct ieee80211_hdr *hdr;
+ unsigned int i;
+ int val = 0;
++ u16 fc;
--int libertas_cmd_80211_deauthenticate(wlan_private * priv,
-+int lbs_cmd_80211_deauthenticate(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
- {
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_deauthenticate *dauth = &cmd->params.deauth;
-
- lbs_deb_enter(LBS_DEB_JOIN);
-@@ -342,7 +341,7 @@ int libertas_cmd_80211_deauthenticate(wlan_private * priv,
- S_DS_GEN);
+ /*
+ * Update RX statistics.
+@@ -412,17 +610,28 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
+ }
+ }
- /* set AP MAC address */
-- memmove(dauth->macaddr, adapter->curbssparams.bssid, ETH_ALEN);
-+ memmove(dauth->macaddr, priv->curbssparams.bssid, ETH_ALEN);
+- rt2x00_update_link_rssi(&rt2x00dev->link, desc->rssi);
+- rt2x00dev->link.rx_success++;
++ /*
++ * Only update link status if this is a beacon frame carrying our bssid.
++ */
++ hdr = (struct ieee80211_hdr*)skb->data;
++ fc = le16_to_cpu(hdr->frame_control);
++ if (is_beacon(fc) && desc->my_bss)
++ rt2x00lib_update_link_stats(&rt2x00dev->link, desc->rssi);
++
++ rt2x00dev->link.qual.rx_success++;
++
+ rx_status->rate = val;
+ rx_status->signal =
+ rt2x00lib_calculate_link_signal(rt2x00dev, desc->rssi);
+ rx_status->ssi = desc->rssi;
+ rx_status->flag = desc->flags;
++ rx_status->antenna = rt2x00dev->link.ant.active.rx;
- /* Reason code 3 = Station is leaving */
- #define REASON_CODE_STA_LEAVING 3
-@@ -352,10 +351,9 @@ int libertas_cmd_80211_deauthenticate(wlan_private * priv,
- return 0;
+ /*
+- * Send frame to mac80211
++ * Send frame to mac80211 & debugfs
+ */
++ get_skb_desc(skb)->frame_type = DUMP_FRAME_RXDONE;
++ rt2x00debug_dump_frame(rt2x00dev, skb);
+ ieee80211_rx_irqsafe(rt2x00dev->hw, skb, rx_status);
}
-
--int libertas_cmd_80211_associate(wlan_private * priv,
-+int lbs_cmd_80211_associate(struct lbs_private *priv,
- struct cmd_ds_command *cmd, void *pdata_buf)
+ EXPORT_SYMBOL_GPL(rt2x00lib_rxdone);
+@@ -431,36 +640,25 @@ EXPORT_SYMBOL_GPL(rt2x00lib_rxdone);
+ * TX descriptor initializer
+ */
+ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
++ struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
{
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_associate *passo = &cmd->params.associate;
- int ret = 0;
- struct assoc_request * assoc_req = pdata_buf;
-@@ -368,11 +366,11 @@ int libertas_cmd_80211_associate(wlan_private * priv,
- struct mrvlietypes_ratesparamset *rates;
- struct mrvlietypes_rsnparamset *rsn;
+ struct txdata_entry_desc desc;
+- struct data_ring *ring;
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ struct ieee80211_hdr *ieee80211hdr = skbdesc->data;
+ int tx_rate;
+ int bitrate;
++ int length;
+ int duration;
+ int residual;
+ u16 frame_control;
+ u16 seq_ctrl;
-- lbs_deb_enter(LBS_DEB_JOIN);
-+ lbs_deb_enter(LBS_DEB_ASSOC);
+- /*
+- * Make sure the descriptor is properly cleared.
+- */
+- memset(&desc, 0x00, sizeof(desc));
++ memset(&desc, 0, sizeof(desc));
- pos = (u8 *) passo;
+- /*
+- * Get ring pointer, if we fail to obtain the
+- * correct ring, then use the first TX ring.
+- */
+- ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
+- if (!ring)
+- ring = rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
+-
+- desc.cw_min = ring->tx_params.cw_min;
+- desc.cw_max = ring->tx_params.cw_max;
+- desc.aifs = ring->tx_params.aifs;
++ desc.cw_min = skbdesc->ring->tx_params.cw_min;
++ desc.cw_max = skbdesc->ring->tx_params.cw_max;
++ desc.aifs = skbdesc->ring->tx_params.aifs;
-- if (!adapter) {
-+ if (!priv) {
- ret = -1;
- goto done;
- }
-@@ -416,22 +414,22 @@ int libertas_cmd_80211_associate(wlan_private * priv,
- rates->header.type = cpu_to_le16(TLV_TYPE_RATES);
- memcpy(&rates->rates, &bss->rates, MAX_RATES);
- tmplen = MAX_RATES;
-- if (get_common_rates(adapter, rates->rates, &tmplen)) {
-+ if (get_common_rates(priv, rates->rates, &tmplen)) {
- ret = -1;
- goto done;
+ /*
+ * Identify queue
+@@ -482,12 +680,21 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ tx_rate = control->tx_rate;
+
+ /*
++ * Check whether this frame is to be acked
++ */
++ if (!(control->flags & IEEE80211_TXCTL_NO_ACK))
++ __set_bit(ENTRY_TXD_ACK, &desc.flags);
++
++ /*
+ * Check if this is a RTS/CTS frame
+ */
+ if (is_rts_frame(frame_control) || is_cts_frame(frame_control)) {
+ __set_bit(ENTRY_TXD_BURST, &desc.flags);
+- if (is_rts_frame(frame_control))
++ if (is_rts_frame(frame_control)) {
+ __set_bit(ENTRY_TXD_RTS_FRAME, &desc.flags);
++ __set_bit(ENTRY_TXD_ACK, &desc.flags);
++ } else
++ __clear_bit(ENTRY_TXD_ACK, &desc.flags);
+ if (control->rts_cts_rate)
+ tx_rate = control->rts_cts_rate;
}
- pos += sizeof(rates->header) + tmplen;
- rates->header.len = cpu_to_le16(tmplen);
-- lbs_deb_join("ASSOC_CMD: num rates = %u\n", tmplen);
-+ lbs_deb_assoc("ASSOC_CMD: num rates %u\n", tmplen);
+@@ -532,17 +739,18 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ desc.signal = DEVICE_GET_RATE_FIELD(tx_rate, PLCP);
+ desc.service = 0x04;
- /* Copy the infra. association rates into Current BSS state structure */
-- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
-- memcpy(&adapter->curbssparams.rates, &rates->rates, tmplen);
-+ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
-+ memcpy(&priv->curbssparams.rates, &rates->rates, tmplen);
++ length = skbdesc->data_len + FCS_LEN;
+ if (test_bit(ENTRY_TXD_OFDM_RATE, &desc.flags)) {
+- desc.length_high = ((length + FCS_LEN) >> 6) & 0x3f;
+- desc.length_low = ((length + FCS_LEN) & 0x3f);
++ desc.length_high = (length >> 6) & 0x3f;
++ desc.length_low = length & 0x3f;
+ } else {
+ bitrate = DEVICE_GET_RATE_FIELD(tx_rate, RATE);
- /* Set MSB on basic rates as the firmware requires, but _after_
- * copying to current bss rates.
- */
-- libertas_set_basic_rate_flags(rates->rates, tmplen);
-+ lbs_set_basic_rate_flags(rates->rates, tmplen);
+ /*
+ * Convert length to microseconds.
+ */
+- residual = get_duration_res(length + FCS_LEN, bitrate);
+- duration = get_duration(length + FCS_LEN, bitrate);
++ residual = get_duration_res(length, bitrate);
++ duration = get_duration(length, bitrate);
- if (assoc_req->secinfo.WPAenabled || assoc_req->secinfo.WPA2enabled) {
- rsn = (struct mrvlietypes_rsnparamset *) pos;
-@@ -446,9 +444,9 @@ int libertas_cmd_80211_associate(wlan_private * priv,
+ if (residual != 0) {
+ duration++;
+@@ -565,8 +773,22 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ desc.signal |= 0x08;
}
- /* update curbssparams */
-- adapter->curbssparams.channel = bss->phyparamset.dsparamset.currentchan;
-+ priv->curbssparams.channel = bss->phyparamset.dsparamset.currentchan;
+- rt2x00dev->ops->lib->write_tx_desc(rt2x00dev, txd, &desc,
+- ieee80211hdr, length, control);
++ rt2x00dev->ops->lib->write_tx_desc(rt2x00dev, skb, &desc, control);
++
++ /*
++ * Update ring entry.
++ */
++ skbdesc->entry->skb = skb;
++ memcpy(&skbdesc->entry->tx_status.control, control, sizeof(*control));
++
++ /*
++ * The frame has been completely initialized and ready
++ * for sending to the device. The caller will push the
++ * frame to the device, but we are going to push the
++ * frame to debugfs here.
++ */
++ skbdesc->frame_type = DUMP_FRAME_TX;
++ rt2x00debug_dump_frame(rt2x00dev, skb);
+ }
+ EXPORT_SYMBOL_GPL(rt2x00lib_write_tx_desc);
-- if (libertas_parse_dnld_countryinfo_11d(priv, bss)) {
-+ if (lbs_parse_dnld_countryinfo_11d(priv, bss)) {
- ret = -1;
- goto done;
+@@ -809,6 +1031,7 @@ static int rt2x00lib_alloc_entries(struct data_ring *ring,
+ entry[i].flags = 0;
+ entry[i].ring = ring;
+ entry[i].skb = NULL;
++ entry[i].entry_idx = i;
}
-@@ -460,18 +458,16 @@ int libertas_cmd_80211_associate(wlan_private * priv,
- if (bss->mode == IW_MODE_INFRA)
- tmpcap |= WLAN_CAPABILITY_ESS;
- passo->capability = cpu_to_le16(tmpcap);
-- lbs_deb_join("ASSOC_CMD: capability=%4X CAPINFO_MASK=%4X\n",
-- tmpcap, CAPINFO_MASK);
-+ lbs_deb_assoc("ASSOC_CMD: capability 0x%04x\n", tmpcap);
- done:
-- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return ret;
+ ring->entry = entry;
+@@ -866,7 +1089,7 @@ static void rt2x00lib_free_ring_entries(struct rt2x00_dev *rt2x00dev)
+ }
}
--int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_start(struct lbs_private *priv,
- struct cmd_ds_command *cmd, void *pdata_buf)
+-void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
++static void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
{
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_ad_hoc_start *adhs = &cmd->params.ads;
- int ret = 0;
- int cmdappendsize = 0;
-@@ -481,7 +477,7 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
+ if (!__test_and_clear_bit(DEVICE_INITIALIZED, &rt2x00dev->flags))
+ return;
+@@ -887,7 +1110,7 @@ void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
+ rt2x00lib_free_ring_entries(rt2x00dev);
+ }
- lbs_deb_enter(LBS_DEB_JOIN);
+-int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev)
++static int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev)
+ {
+ int status;
+
+@@ -930,12 +1153,65 @@ exit:
+ return status;
+ }
+
++int rt2x00lib_start(struct rt2x00_dev *rt2x00dev)
++{
++ int retval;
++
++ if (test_bit(DEVICE_STARTED, &rt2x00dev->flags))
++ return 0;
++
++ /*
++ * If this is the first interface which is added,
++ * we should load the firmware now.
++ */
++ if (test_bit(DRIVER_REQUIRE_FIRMWARE, &rt2x00dev->flags)) {
++ retval = rt2x00lib_load_firmware(rt2x00dev);
++ if (retval)
++ return retval;
++ }
++
++ /*
++ * Initialize the device.
++ */
++ retval = rt2x00lib_initialize(rt2x00dev);
++ if (retval)
++ return retval;
++
++ /*
++ * Enable radio.
++ */
++ retval = rt2x00lib_enable_radio(rt2x00dev);
++ if (retval) {
++ rt2x00lib_uninitialize(rt2x00dev);
++ return retval;
++ }
++
++ __set_bit(DEVICE_STARTED, &rt2x00dev->flags);
++
++ return 0;
++}
++
++void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev)
++{
++ if (!test_bit(DEVICE_STARTED, &rt2x00dev->flags))
++ return;
++
++ /*
++ * Perhaps we can add something smarter here,
++ * but for now just disabling the radio should do.
++ */
++ rt2x00lib_disable_radio(rt2x00dev);
++
++ __clear_bit(DEVICE_STARTED, &rt2x00dev->flags);
++}
++
+ /*
+ * driver allocation handlers.
+ */
+ static int rt2x00lib_alloc_rings(struct rt2x00_dev *rt2x00dev)
+ {
+ struct data_ring *ring;
++ unsigned int index;
-- if (!adapter) {
-+ if (!priv) {
- ret = -1;
- goto done;
- }
-@@ -491,7 +487,7 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
/*
- * Fill in the parameters for 2 data structures:
- * 1. cmd_ds_802_11_ad_hoc_start command
-- * 2. adapter->scantable[i]
-+ * 2. priv->scantable[i]
- *
- * Driver will fill up SSID, bsstype,IBSS param, Physical Param,
- * probe delay, and cap info.
-@@ -509,8 +505,10 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
+ * We need the following rings:
+@@ -963,11 +1239,18 @@ static int rt2x00lib_alloc_rings(struct rt2x00_dev *rt2x00dev)
- /* set the BSS type */
- adhs->bsstype = CMD_BSS_TYPE_IBSS;
-- adapter->mode = IW_MODE_ADHOC;
-- adhs->beaconperiod = cpu_to_le16(MRVDRV_BEACON_INTERVAL);
-+ priv->mode = IW_MODE_ADHOC;
-+ if (priv->beacon_period == 0)
-+ priv->beacon_period = MRVDRV_BEACON_INTERVAL;
-+ adhs->beaconperiod = cpu_to_le16(priv->beacon_period);
+ /*
+ * Initialize ring parameters.
+- * cw_min: 2^5 = 32.
+- * cw_max: 2^10 = 1024.
++ * RX: queue_idx = 0
++ * TX: queue_idx = IEEE80211_TX_QUEUE_DATA0 + index
++ * TX: cw_min: 2^5 = 32.
++ * TX: cw_max: 2^10 = 1024.
+ */
+- ring_for_each(rt2x00dev, ring) {
++ rt2x00dev->rx->rt2x00dev = rt2x00dev;
++ rt2x00dev->rx->queue_idx = 0;
++
++ index = IEEE80211_TX_QUEUE_DATA0;
++ txring_for_each(rt2x00dev, ring) {
+ ring->rt2x00dev = rt2x00dev;
++ ring->queue_idx = index++;
+ ring->tx_params.aifs = 2;
+ ring->tx_params.cw_min = 5;
+ ring->tx_params.cw_max = 10;
+@@ -1008,7 +1291,7 @@ int rt2x00lib_probe_dev(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Reset current working type.
+ */
+- rt2x00dev->interface.type = INVALID_INTERFACE;
++ rt2x00dev->interface.type = IEEE80211_IF_TYPE_INVALID;
- /* set Physical param set */
- #define DS_PARA_IE_ID 3
-@@ -548,24 +546,24 @@ int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
- adhs->probedelay = cpu_to_le16(CMD_SCAN_PROBE_DELAY_TIME);
+ /*
+ * Allocate ring array.
+@@ -1112,7 +1395,7 @@ int rt2x00lib_suspend(struct rt2x00_dev *rt2x00dev, pm_message_t state)
+ * Disable radio and unitialize all items
+ * that must be recreated on resume.
+ */
+- rt2x00mac_stop(rt2x00dev->hw);
++ rt2x00lib_stop(rt2x00dev);
+ rt2x00lib_uninitialize(rt2x00dev);
+ rt2x00debug_deregister(rt2x00dev);
- memset(adhs->rates, 0, sizeof(adhs->rates));
-- ratesize = min(sizeof(adhs->rates), sizeof(libertas_bg_rates));
-- memcpy(adhs->rates, libertas_bg_rates, ratesize);
-+ ratesize = min(sizeof(adhs->rates), sizeof(lbs_bg_rates));
-+ memcpy(adhs->rates, lbs_bg_rates, ratesize);
+@@ -1134,7 +1417,6 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
+ int retval;
- /* Copy the ad-hoc creating rates into Current BSS state structure */
-- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
-- memcpy(&adapter->curbssparams.rates, &adhs->rates, ratesize);
-+ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
-+ memcpy(&priv->curbssparams.rates, &adhs->rates, ratesize);
+ NOTICE(rt2x00dev, "Waking up.\n");
+- __set_bit(DEVICE_PRESENT, &rt2x00dev->flags);
- /* Set MSB on basic rates as the firmware requires, but _after_
- * copying to current bss rates.
+ /*
+ * Open the debugfs entry.
+@@ -1150,7 +1432,7 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Reinitialize device and all active interfaces.
*/
-- libertas_set_basic_rate_flags(adhs->rates, ratesize);
-+ lbs_set_basic_rate_flags(adhs->rates, ratesize);
+- retval = rt2x00mac_start(rt2x00dev->hw);
++ retval = rt2x00lib_start(rt2x00dev);
+ if (retval)
+ goto exit;
- lbs_deb_join("ADHOC_S_CMD: rates=%02x %02x %02x %02x \n",
- adhs->rates[0], adhs->rates[1], adhs->rates[2], adhs->rates[3]);
+@@ -1166,6 +1448,11 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
+ rt2x00lib_config_type(rt2x00dev, intf->type);
- lbs_deb_join("ADHOC_S_CMD: AD HOC Start command is ready\n");
+ /*
++ * We are ready again to receive requests from mac80211.
++ */
++ __set_bit(DEVICE_PRESENT, &rt2x00dev->flags);
++
++ /*
+ * It is possible that during that mac80211 has attempted
+ * to send frames while we were suspending or resuming.
+ * In that case we have disabled the TX queue and should
+diff --git a/drivers/net/wireless/rt2x00/rt2x00dump.h b/drivers/net/wireless/rt2x00/rt2x00dump.h
+new file mode 100644
+index 0000000..99f3f36
+--- /dev/null
++++ b/drivers/net/wireless/rt2x00/rt2x00dump.h
+@@ -0,0 +1,121 @@
++/*
++ Copyright (C) 2004 - 2007 rt2x00 SourceForge Project
++ <http://rt2x00.serialmonkey.com>
++
++ This program is free software; you can redistribute it and/or modify
++ it under the terms of the GNU General Public License as published by
++ the Free Software Foundation; either version 2 of the License, or
++ (at your option) any later version.
++
++ This program is distributed in the hope that it will be useful,
++ but WITHOUT ANY WARRANTY; without even the implied warranty of
++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ GNU General Public License for more details.
++
++ You should have received a copy of the GNU General Public License
++ along with this program; if not, write to the
++ Free Software Foundation, Inc.,
++ 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++ */
++
++/*
++ Module: rt2x00dump
++ Abstract: Data structures for the rt2x00debug & userspace.
++ */
++
++#ifndef RT2X00DUMP_H
++#define RT2X00DUMP_H
++
++/**
++ * DOC: Introduction
++ *
++ * This header is intended to be exported to userspace,
++ * to make the structures and enumerations available to userspace
++ * applications. This means that all data types should be exportable.
++ *
++ * When rt2x00 is compiled with debugfs support enabled,
++ * it is possible to capture all data coming in and out of the device
++ * by reading the frame dump file. This file can have only a single reader.
++ * The following frames will be reported:
++ * - All incoming frames (rx)
++ * - All outgoing frames (tx, including beacon and atim)
++ * - All completed frames (txdone including atim)
++ *
++ * The data is send to the file using the following format:
++ *
++ * [rt2x00dump header][hardware descriptor][ieee802.11 frame]
++ *
++ * rt2x00dump header: The description of the dumped frame, as well as
++ * additional information usefull for debugging. See &rt2x00dump_hdr.
++ * hardware descriptor: Descriptor that was used to receive or transmit
++ * the frame.
++ * ieee802.11 frame: The actual frame that was received or transmitted.
++ */
++
++/**
++ * enum rt2x00_dump_type - Frame type
++ *
++ * These values are used for the @type member of &rt2x00dump_hdr.
++ * @DUMP_FRAME_RXDONE: This frame has been received by the hardware.
++ * @DUMP_FRAME_TX: This frame is queued for transmission to the hardware.
++ * @DUMP_FRAME_TXDONE: This frame indicates the device has handled
++ * the tx event which has either succeeded or failed. A frame
++ * with this type should also have been reported with as a
++ * %DUMP_FRAME_TX frame.
++ */
++enum rt2x00_dump_type {
++ DUMP_FRAME_RXDONE = 1,
++ DUMP_FRAME_TX = 2,
++ DUMP_FRAME_TXDONE = 3,
++};
++
++/**
++ * struct rt2x00dump_hdr - Dump frame header
++ *
++ * Each frame dumped to the debugfs file starts with this header
++ * attached. This header contains the description of the actual
++ * frame which was dumped.
++ *
++ * New fields inside the structure must be appended to the end of
++ * the structure. This way userspace tools compiled for earlier
++ * header versions can still correctly handle the frame dump
++ * (although they will not handle all data passed to them in the dump).
++ *
++ * @version: Header version should always be set to %DUMP_HEADER_VERSION.
++ * This field must be checked by userspace to determine if it can
++ * handle this frame.
++ * @header_length: The length of the &rt2x00dump_hdr structure. This is
++ * used for compatibility reasons so userspace can easily determine
++ * the location of the next field in the dump.
++ * @desc_length: The length of the device descriptor.
++ * @data_length: The length of the frame data (including the ieee802.11 header.
++ * @chip_rt: RT chipset
++ * @chip_rf: RF chipset
++ * @chip_rev: Chipset revision
++ * @type: The frame type (&rt2x00_dump_type)
++ * @ring_index: The index number of the data ring.
++ * @entry_index: The index number of the entry inside the data ring.
++ * @timestamp_sec: Timestamp - seconds
++ * @timestamp_usec: Timestamp - microseconds
++ */
++struct rt2x00dump_hdr {
++ __le32 version;
++#define DUMP_HEADER_VERSION 2
++
++ __le32 header_length;
++ __le32 desc_length;
++ __le32 data_length;
++
++ __le16 chip_rt;
++ __le16 chip_rf;
++ __le32 chip_rev;
++
++ __le16 type;
++ __u8 ring_index;
++ __u8 entry_index;
++
++ __le32 timestamp_sec;
++ __le32 timestamp_usec;
++};
++
++#endif /* RT2X00DUMP_H */
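For reference, the fixed 36-byte header layout added above can be decoded in userspace roughly as follows. This is a sketch only: the mirrored struct, the `le16`/`le32` helpers, and `parse_hdr` are illustrative names, not taken from any real rt2x00 debug tool. It assumes the dump stream lays the fields out back-to-back with no padding, all multi-byte fields little-endian, as the `__le16`/`__le32` declarations suggest.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical userspace mirror of struct rt2x00dump_hdr. */
struct dump_hdr {
	uint32_t version;        /* should equal DUMP_HEADER_VERSION (2) */
	uint32_t header_length;  /* offset of the hardware descriptor */
	uint32_t desc_length;
	uint32_t data_length;
	uint16_t chip_rt;
	uint16_t chip_rf;
	uint32_t chip_rev;
	uint16_t type;           /* enum rt2x00_dump_type */
	uint8_t  ring_index;
	uint8_t  entry_index;
	uint32_t timestamp_sec;
	uint32_t timestamp_usec;
};

static uint32_t le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint16_t le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | p[1] << 8);
}

/*
 * Parse one dump header from buf.  Returns header_length, i.e. the
 * offset at which the hardware descriptor starts: because new fields
 * are only ever appended, userspace must seek by this value rather
 * than by sizeof() its own (possibly older) struct definition.
 */
static size_t parse_hdr(const uint8_t *buf, struct dump_hdr *h)
{
	h->version        = le32(buf + 0);
	h->header_length  = le32(buf + 4);
	h->desc_length    = le32(buf + 8);
	h->data_length    = le32(buf + 12);
	h->chip_rt        = le16(buf + 16);
	h->chip_rf        = le16(buf + 18);
	h->chip_rev       = le32(buf + 20);
	h->type           = le16(buf + 24);
	h->ring_index     = buf[26];
	h->entry_index    = buf[27];
	h->timestamp_sec  = le32(buf + 28);
	h->timestamp_usec = le32(buf + 32);
	return h->header_length;
}
```

A tool built this way checks `version` against the `DUMP_HEADER_VERSION` it was compiled for, then reads `desc_length` bytes of hardware descriptor followed by `data_length` bytes of 802.11 frame, which matches the `[rt2x00dump header][hardware descriptor][ieee802.11 frame]` layout documented above.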
+diff --git a/drivers/net/wireless/rt2x00/rt2x00firmware.c b/drivers/net/wireless/rt2x00/rt2x00firmware.c
+index 236025f..0a475e4 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00firmware.c
++++ b/drivers/net/wireless/rt2x00/rt2x00firmware.c
+@@ -23,11 +23,6 @@
+ Abstract: rt2x00 firmware loading routines.
+ */
-- if (libertas_create_dnld_countryinfo_11d(priv)) {
-+ if (lbs_create_dnld_countryinfo_11d(priv)) {
- lbs_deb_join("ADHOC_S_CMD: dnld_countryinfo_11d failed\n");
- ret = -1;
- goto done;
-@@ -580,7 +578,7 @@ done:
- return ret;
- }
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
+-
+ #include <linux/crc-itu-t.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+diff --git a/drivers/net/wireless/rt2x00/rt2x00lib.h b/drivers/net/wireless/rt2x00/rt2x00lib.h
+index 06d9bc0..1adbd28 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00lib.h
++++ b/drivers/net/wireless/rt2x00/rt2x00lib.h
+@@ -44,8 +44,8 @@ void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev);
+ /*
+ * Initialization handlers.
+ */
+-int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev);
+-void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev);
++int rt2x00lib_start(struct rt2x00_dev *rt2x00dev);
++void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev);
--int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_stop(struct lbs_private *priv,
- struct cmd_ds_command *cmd)
- {
- cmd->command = cpu_to_le16(CMD_802_11_AD_HOC_STOP);
-@@ -589,10 +587,9 @@ int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
- return 0;
- }
+ /*
+ * Configuration handlers.
+@@ -53,6 +53,8 @@ void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev);
+ void rt2x00lib_config_mac_addr(struct rt2x00_dev *rt2x00dev, u8 *mac);
+ void rt2x00lib_config_bssid(struct rt2x00_dev *rt2x00dev, u8 *bssid);
+ void rt2x00lib_config_type(struct rt2x00_dev *rt2x00dev, const int type);
++void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
++ enum antenna rx, enum antenna tx);
+ void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
+ struct ieee80211_conf *conf, const int force_config);
--int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_join(struct lbs_private *priv,
- struct cmd_ds_command *cmd, void *pdata_buf)
+@@ -78,6 +80,7 @@ static inline void rt2x00lib_free_firmware(struct rt2x00_dev *rt2x00dev)
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev);
+ void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev);
++void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev, struct sk_buff *skb);
+ #else
+ static inline void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
{
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_ad_hoc_join *join_cmd = &cmd->params.adj;
- struct assoc_request * assoc_req = pdata_buf;
- struct bss_descriptor *bss = &assoc_req->bss;
-@@ -633,26 +630,26 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
- /* probedelay */
- join_cmd->probedelay = cpu_to_le16(CMD_SCAN_PROBE_DELAY_TIME);
-
-- adapter->curbssparams.channel = bss->channel;
-+ priv->curbssparams.channel = bss->channel;
+@@ -86,6 +89,11 @@ static inline void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
+ static inline void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
+ {
+ }
++
++static inline void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev,
++ struct sk_buff *skb)
++{
++}
+ #endif /* CONFIG_RT2X00_LIB_DEBUGFS */
- /* Copy Data rates from the rates recorded in scan response */
- memset(join_cmd->bss.rates, 0, sizeof(join_cmd->bss.rates));
- ratesize = min_t(u16, sizeof(join_cmd->bss.rates), MAX_RATES);
- memcpy(join_cmd->bss.rates, bss->rates, ratesize);
-- if (get_common_rates(adapter, join_cmd->bss.rates, &ratesize)) {
-+ if (get_common_rates(priv, join_cmd->bss.rates, &ratesize)) {
- lbs_deb_join("ADHOC_J_CMD: get_common_rates returns error.\n");
- ret = -1;
- goto done;
- }
+ /*
+diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c
+index 85ea8a8..e3f15e5 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00mac.c
++++ b/drivers/net/wireless/rt2x00/rt2x00mac.c
+@@ -23,11 +23,6 @@
+ Abstract: rt2x00 generic mac80211 routines.
+ */
- /* Copy the ad-hoc creating rates into Current BSS state structure */
-- memset(&adapter->curbssparams.rates, 0, sizeof(adapter->curbssparams.rates));
-- memcpy(&adapter->curbssparams.rates, join_cmd->bss.rates, ratesize);
-+ memset(&priv->curbssparams.rates, 0, sizeof(priv->curbssparams.rates));
-+ memcpy(&priv->curbssparams.rates, join_cmd->bss.rates, ratesize);
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
+-
+ #include <linux/kernel.h>
+ #include <linux/module.h>
- /* Set MSB on basic rates as the firmware requires, but _after_
- * copying to current bss rates.
+@@ -89,7 +84,7 @@ int rt2x00mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
*/
-- libertas_set_basic_rate_flags(join_cmd->bss.rates, ratesize);
-+ lbs_set_basic_rate_flags(join_cmd->bss.rates, ratesize);
-
- join_cmd->bss.ssparamset.ibssparamset.atimwindow =
- cpu_to_le16(bss->atimwindow);
-@@ -663,12 +660,12 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
- join_cmd->bss.capability = cpu_to_le16(tmp);
+ if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags)) {
+ ieee80211_stop_queues(hw);
+- return 0;
++ return NETDEV_TX_OK;
}
-- if (adapter->psmode == WLAN802_11POWERMODEMAX_PSP) {
-+ if (priv->psmode == LBS802_11POWERMODEMAX_PSP) {
- /* wake up first */
- __le32 Localpsmode;
+ /*
+@@ -115,15 +110,24 @@ int rt2x00mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
+ if (!is_rts_frame(frame_control) && !is_cts_frame(frame_control) &&
+ (control->flags & (IEEE80211_TXCTL_USE_RTS_CTS |
+ IEEE80211_TXCTL_USE_CTS_PROTECT))) {
+- if (rt2x00_ring_free(ring) <= 1)
++ if (rt2x00_ring_free(ring) <= 1) {
++ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+ return NETDEV_TX_BUSY;
++ }
-- Localpsmode = cpu_to_le32(WLAN802_11POWERMODECAM);
-- ret = libertas_prepare_and_send_command(priv,
-+ Localpsmode = cpu_to_le32(LBS802_11POWERMODECAM);
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_PS_MODE,
- CMD_ACT_SET,
- 0, 0, &Localpsmode);
-@@ -679,7 +676,7 @@ int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
- }
+- if (rt2x00mac_tx_rts_cts(rt2x00dev, ring, skb, control))
++ if (rt2x00mac_tx_rts_cts(rt2x00dev, ring, skb, control)) {
++ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+ return NETDEV_TX_BUSY;
++ }
}
-- if (libertas_parse_dnld_countryinfo_11d(priv, bss)) {
-+ if (lbs_parse_dnld_countryinfo_11d(priv, bss)) {
- ret = -1;
- goto done;
- }
-@@ -692,24 +689,23 @@ done:
- return ret;
- }
+- if (rt2x00dev->ops->lib->write_tx_data(rt2x00dev, ring, skb, control))
++ if (rt2x00dev->ops->lib->write_tx_data(rt2x00dev, ring, skb, control)) {
++ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+ return NETDEV_TX_BUSY;
++ }
++
++ if (rt2x00_ring_full(ring))
++ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
--int libertas_ret_80211_associate(wlan_private * priv,
-+int lbs_ret_80211_associate(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+ if (rt2x00dev->ops->lib->kick_tx_queue)
+ rt2x00dev->ops->lib->kick_tx_queue(rt2x00dev, control->queue);
+@@ -135,41 +139,11 @@ EXPORT_SYMBOL_GPL(rt2x00mac_tx);
+ int rt2x00mac_start(struct ieee80211_hw *hw)
{
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- union iwreq_data wrqu;
- struct ieeetypes_assocrsp *passocrsp;
- struct bss_descriptor * bss;
- u16 status_code;
-
-- lbs_deb_enter(LBS_DEB_JOIN);
-+ lbs_deb_enter(LBS_DEB_ASSOC);
-
-- if (!adapter->in_progress_assoc_req) {
-- lbs_deb_join("ASSOC_RESP: no in-progress association request\n");
-+ if (!priv->in_progress_assoc_req) {
-+ lbs_deb_assoc("ASSOC_RESP: no in-progress assoc request\n");
- ret = -1;
- goto done;
- }
-- bss = &adapter->in_progress_assoc_req->bss;
-+ bss = &priv->in_progress_assoc_req->bss;
-
- passocrsp = (struct ieeetypes_assocrsp *) & resp->params;
-
-@@ -734,96 +730,83 @@ int libertas_ret_80211_associate(wlan_private * priv,
- status_code = le16_to_cpu(passocrsp->statuscode);
- switch (status_code) {
- case 0x00:
-- lbs_deb_join("ASSOC_RESP: Association succeeded\n");
- break;
- case 0x01:
-- lbs_deb_join("ASSOC_RESP: Association failed; invalid "
-- "parameters (status code %d)\n", status_code);
-+ lbs_deb_assoc("ASSOC_RESP: invalid parameters\n");
- break;
- case 0x02:
-- lbs_deb_join("ASSOC_RESP: Association failed; internal timer "
-- "expired while waiting for the AP (status code %d)"
-- "\n", status_code);
-+ lbs_deb_assoc("ASSOC_RESP: internal timer "
-+ "expired while waiting for the AP\n");
- break;
- case 0x03:
-- lbs_deb_join("ASSOC_RESP: Association failed; association "
-- "was refused by the AP (status code %d)\n",
-- status_code);
-+ lbs_deb_assoc("ASSOC_RESP: association "
-+ "refused by AP\n");
- break;
- case 0x04:
-- lbs_deb_join("ASSOC_RESP: Association failed; authentication "
-- "was refused by the AP (status code %d)\n",
-- status_code);
-+ lbs_deb_assoc("ASSOC_RESP: authentication "
-+ "refused by AP\n");
- break;
- default:
-- lbs_deb_join("ASSOC_RESP: Association failed; reason unknown "
-- "(status code %d)\n", status_code);
-+ lbs_deb_assoc("ASSOC_RESP: failure reason 0x%02x "
-+ " unknown\n", status_code);
- break;
- }
-
- if (status_code) {
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
- ret = -1;
- goto done;
- }
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- int status;
-- lbs_deb_hex(LBS_DEB_JOIN, "ASSOC_RESP", (void *)&resp->params,
-+ lbs_deb_hex(LBS_DEB_ASSOC, "ASSOC_RESP", (void *)&resp->params,
- le16_to_cpu(resp->size) - S_DS_GEN);
+- if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags) ||
+- test_bit(DEVICE_STARTED, &rt2x00dev->flags))
++ if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags))
+ return 0;
- /* Send a Media Connected event, according to the Spec */
-- adapter->connect_status = LIBERTAS_CONNECTED;
+- /*
+- * If this is the first interface which is added,
+- * we should load the firmware now.
+- */
+- if (test_bit(DRIVER_REQUIRE_FIRMWARE, &rt2x00dev->flags)) {
+- status = rt2x00lib_load_firmware(rt2x00dev);
+- if (status)
+- return status;
+- }
-
-- lbs_deb_join("ASSOC_RESP: assocated to '%s'\n",
-- escape_essid(bss->ssid, bss->ssid_len));
-+ priv->connect_status = LBS_CONNECTED;
+- /*
+- * Initialize the device.
+- */
+- status = rt2x00lib_initialize(rt2x00dev);
+- if (status)
+- return status;
+-
+- /*
+- * Enable radio.
+- */
+- status = rt2x00lib_enable_radio(rt2x00dev);
+- if (status) {
+- rt2x00lib_uninitialize(rt2x00dev);
+- return status;
+- }
+-
+- __set_bit(DEVICE_STARTED, &rt2x00dev->flags);
+-
+- return 0;
++ return rt2x00lib_start(rt2x00dev);
+ }
+ EXPORT_SYMBOL_GPL(rt2x00mac_start);
- /* Update current SSID and BSSID */
-- memcpy(&adapter->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
-- adapter->curbssparams.ssid_len = bss->ssid_len;
-- memcpy(adapter->curbssparams.bssid, bss->bssid, ETH_ALEN);
-+ memcpy(&priv->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
-+ priv->curbssparams.ssid_len = bss->ssid_len;
-+ memcpy(priv->curbssparams.bssid, bss->bssid, ETH_ALEN);
+@@ -180,13 +154,7 @@ void rt2x00mac_stop(struct ieee80211_hw *hw)
+ if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags))
+ return;
-- lbs_deb_join("ASSOC_RESP: currentpacketfilter is %x\n",
-- adapter->currentpacketfilter);
-+ lbs_deb_assoc("ASSOC_RESP: currentpacketfilter is 0x%x\n",
-+ priv->currentpacketfilter);
+- /*
+- * Perhaps we can add something smarter here,
+- * but for now just disabling the radio should do.
+- */
+- rt2x00lib_disable_radio(rt2x00dev);
+-
+- __clear_bit(DEVICE_STARTED, &rt2x00dev->flags);
++ rt2x00lib_stop(rt2x00dev);
+ }
+ EXPORT_SYMBOL_GPL(rt2x00mac_stop);
-- adapter->SNR[TYPE_RXPD][TYPE_AVG] = 0;
-- adapter->NF[TYPE_RXPD][TYPE_AVG] = 0;
-+ priv->SNR[TYPE_RXPD][TYPE_AVG] = 0;
-+ priv->NF[TYPE_RXPD][TYPE_AVG] = 0;
+@@ -213,7 +181,7 @@ int rt2x00mac_add_interface(struct ieee80211_hw *hw,
+ is_interface_present(intf))
+ return -ENOBUFS;
-- memset(adapter->rawSNR, 0x00, sizeof(adapter->rawSNR));
-- memset(adapter->rawNF, 0x00, sizeof(adapter->rawNF));
-- adapter->nextSNRNF = 0;
-- adapter->numSNRNF = 0;
-+ memset(priv->rawSNR, 0x00, sizeof(priv->rawSNR));
-+ memset(priv->rawNF, 0x00, sizeof(priv->rawNF));
-+ priv->nextSNRNF = 0;
-+ priv->numSNRNF = 0;
+- intf->id = conf->if_id;
++ intf->id = conf->vif;
+ intf->type = conf->type;
+ if (conf->type == IEEE80211_IF_TYPE_AP)
+ memcpy(&intf->bssid, conf->mac_addr, ETH_ALEN);
+@@ -247,7 +215,7 @@ void rt2x00mac_remove_interface(struct ieee80211_hw *hw,
+ return;
- netif_carrier_on(priv->dev);
-- netif_wake_queue(priv->dev);
--
-- if (priv->mesh_dev) {
-- netif_carrier_on(priv->mesh_dev);
-- netif_wake_queue(priv->mesh_dev);
-- }
-+ if (!priv->tx_pending_len)
-+ netif_wake_queue(priv->dev);
+ intf->id = 0;
+- intf->type = INVALID_INTERFACE;
++ intf->type = IEEE80211_IF_TYPE_INVALID;
+ memset(&intf->bssid, 0x00, ETH_ALEN);
+ memset(&intf->mac, 0x00, ETH_ALEN);
-- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid, ETH_ALEN);
-+ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid, ETH_ALEN);
- wrqu.ap_addr.sa_family = ARPHRD_ETHER;
- wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+@@ -297,7 +265,8 @@ int rt2x00mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+ }
+ EXPORT_SYMBOL_GPL(rt2x00mac_config);
- done:
-- lbs_deb_leave_args(LBS_DEB_JOIN, "ret %d", ret);
-+ lbs_deb_leave_args(LBS_DEB_ASSOC, "ret %d", ret);
- return ret;
+-int rt2x00mac_config_interface(struct ieee80211_hw *hw, int if_id,
++int rt2x00mac_config_interface(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
+ struct ieee80211_if_conf *conf)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+@@ -373,23 +342,27 @@ int rt2x00mac_get_tx_stats(struct ieee80211_hw *hw,
}
+ EXPORT_SYMBOL_GPL(rt2x00mac_get_tx_stats);
--int libertas_ret_80211_disassociate(wlan_private * priv,
-+int lbs_ret_80211_disassociate(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+-void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
+- int cts_protection, int preamble)
++void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_bss_conf *bss_conf,
++ u32 changes)
{
- lbs_deb_enter(LBS_DEB_JOIN);
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+ int short_preamble;
+ int ack_timeout;
+ int ack_consume_time;
+ int difs;
++ int preamble;
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
+ /*
+ * We only support changing preamble mode.
+ */
+- if (!(changes & IEEE80211_ERP_CHANGE_PREAMBLE))
++ if (!(changes & BSS_CHANGED_ERP_PREAMBLE))
+ return;
- lbs_deb_leave(LBS_DEB_JOIN);
- return 0;
+- short_preamble = !preamble;
+- preamble = !!(preamble) ? PREAMBLE : SHORT_PREAMBLE;
++ short_preamble = bss_conf->use_short_preamble;
++ preamble = bss_conf->use_short_preamble ?
++ SHORT_PREAMBLE : PREAMBLE;
+
+ difs = (hw->conf.flags & IEEE80211_CONF_SHORT_SLOT_TIME) ?
+ SHORT_DIFS : DIFS;
+@@ -405,7 +378,7 @@ void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
+ rt2x00dev->ops->lib->config_preamble(rt2x00dev, short_preamble,
+ ack_timeout, ack_consume_time);
}
+-EXPORT_SYMBOL_GPL(rt2x00mac_erp_ie_changed);
++EXPORT_SYMBOL_GPL(rt2x00mac_bss_info_changed);
--int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
-+int lbs_ret_80211_ad_hoc_start(struct lbs_private *priv,
- struct cmd_ds_command *resp)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- u16 command = le16_to_cpu(resp->command);
- u16 result = le16_to_cpu(resp->result);
-@@ -840,20 +823,20 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
- lbs_deb_join("ADHOC_RESP: command = %x\n", command);
- lbs_deb_join("ADHOC_RESP: result = %x\n", result);
+ int rt2x00mac_conf_tx(struct ieee80211_hw *hw, int queue,
+ const struct ieee80211_tx_queue_params *params)
+diff --git a/drivers/net/wireless/rt2x00/rt2x00pci.c b/drivers/net/wireless/rt2x00/rt2x00pci.c
+index 04663eb..804a998 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00pci.c
++++ b/drivers/net/wireless/rt2x00/rt2x00pci.c
+@@ -23,11 +23,6 @@
+ Abstract: rt2x00 generic pci device routines.
+ */
-- if (!adapter->in_progress_assoc_req) {
-+ if (!priv->in_progress_assoc_req) {
- lbs_deb_join("ADHOC_RESP: no in-progress association request\n");
- ret = -1;
- goto done;
- }
-- bss = &adapter->in_progress_assoc_req->bss;
-+ bss = &priv->in_progress_assoc_req->bss;
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00pci"
+-
+ #include <linux/dma-mapping.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -43,9 +38,9 @@ int rt2x00pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct data_ring *ring =
+- rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
+- struct data_entry *entry = rt2x00_get_data_entry(ring);
++ struct skb_desc *desc;
++ struct data_ring *ring;
++ struct data_entry *entry;
/*
- * Join result code 0 --> SUCCESS
+ * Just in case mac80211 doesn't set this correctly,
+@@ -53,14 +48,22 @@ int rt2x00pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ * initialization.
*/
- if (result) {
- lbs_deb_join("ADHOC_RESP: failed\n");
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- libertas_mac_event_disconnected(priv);
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ lbs_mac_event_disconnected(priv);
- }
- ret = -1;
- goto done;
-@@ -867,7 +850,7 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
- escape_essid(bss->ssid, bss->ssid_len));
+ control->queue = IEEE80211_TX_QUEUE_BEACON;
++ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
++ entry = rt2x00_get_data_entry(ring);
- /* Send a Media Connected event, according to the Spec */
-- adapter->connect_status = LIBERTAS_CONNECTED;
-+ priv->connect_status = LBS_CONNECTED;
+ /*
+- * Update the beacon entry.
++ * Fill in skb descriptor
+ */
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len;
++ desc->desc = entry->priv;
++ desc->data = skb->data;
++ desc->ring = ring;
++ desc->entry = entry;
++
+ memcpy(entry->data_addr, skb->data, skb->len);
+- rt2x00lib_write_tx_desc(rt2x00dev, entry->priv,
+- (struct ieee80211_hdr *)skb->data,
+- skb->len, control);
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
- if (command == CMD_RET(CMD_802_11_AD_HOC_START)) {
- /* Update the created network descriptor with the new BSSID */
-@@ -875,27 +858,23 @@ int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
- }
+ /*
+ * Enable beacon generation.
+@@ -78,15 +81,13 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ struct data_ring *ring, struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
+ {
+- struct ieee80211_hdr *ieee80211hdr = (struct ieee80211_hdr *)skb->data;
+ struct data_entry *entry = rt2x00_get_data_entry(ring);
+- struct data_desc *txd = entry->priv;
++ __le32 *txd = entry->priv;
++ struct skb_desc *desc;
+ u32 word;
- /* Set the BSSID from the joined/started descriptor */
-- memcpy(&adapter->curbssparams.bssid, bss->bssid, ETH_ALEN);
-+ memcpy(&priv->curbssparams.bssid, bss->bssid, ETH_ALEN);
+- if (rt2x00_ring_full(ring)) {
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
++ if (rt2x00_ring_full(ring))
+ return -EINVAL;
+- }
- /* Set the new SSID to current SSID */
-- memcpy(&adapter->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
-- adapter->curbssparams.ssid_len = bss->ssid_len;
-+ memcpy(&priv->curbssparams.ssid, &bss->ssid, IW_ESSID_MAX_SIZE);
-+ priv->curbssparams.ssid_len = bss->ssid_len;
+ rt2x00_desc_read(txd, 0, &word);
- netif_carrier_on(priv->dev);
-- netif_wake_queue(priv->dev);
--
-- if (priv->mesh_dev) {
-- netif_carrier_on(priv->mesh_dev);
-- netif_wake_queue(priv->mesh_dev);
-- }
-+ if (!priv->tx_pending_len)
-+ netif_wake_queue(priv->dev);
+@@ -96,37 +97,42 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ "Arrived at non-free entry in the non-full queue %d.\n"
+ "Please file bug report to %s.\n",
+ control->queue, DRV_PROJECT);
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+ return -EINVAL;
+ }
- memset(&wrqu, 0, sizeof(wrqu));
-- memcpy(wrqu.ap_addr.sa_data, adapter->curbssparams.bssid, ETH_ALEN);
-+ memcpy(wrqu.ap_addr.sa_data, priv->curbssparams.bssid, ETH_ALEN);
- wrqu.ap_addr.sa_family = ARPHRD_ETHER;
- wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+- entry->skb = skb;
+- memcpy(&entry->tx_status.control, control, sizeof(*control));
++ /*
++ * Fill in skb descriptor
++ */
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len;
++ desc->desc = entry->priv;
++ desc->data = skb->data;
++ desc->ring = ring;
++ desc->entry = entry;
++
+ memcpy(entry->data_addr, skb->data, skb->len);
+- rt2x00lib_write_tx_desc(rt2x00dev, txd, ieee80211hdr,
+- skb->len, control);
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
- lbs_deb_join("ADHOC_RESP: - Joined/Started Ad Hoc\n");
-- lbs_deb_join("ADHOC_RESP: channel = %d\n", adapter->curbssparams.channel);
-+ lbs_deb_join("ADHOC_RESP: channel = %d\n", priv->curbssparams.channel);
- lbs_deb_join("ADHOC_RESP: BSSID = %s\n",
- print_mac(mac, padhocresult->bssid));
+ rt2x00_ring_index_inc(ring);
-@@ -904,12 +883,12 @@ done:
- return ret;
+- if (rt2x00_ring_full(ring))
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+-
+ return 0;
}
+ EXPORT_SYMBOL_GPL(rt2x00pci_write_tx_data);
--int libertas_ret_80211_ad_hoc_stop(wlan_private * priv,
-+int lbs_ret_80211_ad_hoc_stop(struct lbs_private *priv,
- struct cmd_ds_command *resp)
+ /*
+- * RX data handlers.
++ * TX/RX data handlers.
+ */
+ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
{
- lbs_deb_enter(LBS_DEB_JOIN);
+ struct data_ring *ring = rt2x00dev->rx;
+ struct data_entry *entry;
+- struct data_desc *rxd;
+ struct sk_buff *skb;
+ struct ieee80211_hdr *hdr;
++ struct skb_desc *skbdesc;
+ struct rxdata_entry_desc desc;
+ int header_size;
++ __le32 *rxd;
+ int align;
+ u32 word;
-- libertas_mac_event_disconnected(priv);
-+ lbs_mac_event_disconnected(priv);
+@@ -138,7 +144,7 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+ if (rt2x00_get_field32(word, RXD_ENTRY_OWNER_NIC))
+ break;
- lbs_deb_leave(LBS_DEB_JOIN);
- return 0;
-diff --git a/drivers/net/wireless/libertas/join.h b/drivers/net/wireless/libertas/join.h
-index 894a072..c617d07 100644
---- a/drivers/net/wireless/libertas/join.h
-+++ b/drivers/net/wireless/libertas/join.h
-@@ -2,52 +2,52 @@
- * Interface for the wlan infrastructure and adhoc join routines
- *
- * Driver interface functions and type declarations for the join module
-- * implemented in wlan_join.c. Process all start/join requests for
-+ * implemented in join.c. Process all start/join requests for
- * both adhoc and infrastructure networks
- */
--#ifndef _WLAN_JOIN_H
--#define _WLAN_JOIN_H
-+#ifndef _LBS_JOIN_H
-+#define _LBS_JOIN_H
+- memset(&desc, 0x00, sizeof(desc));
++ memset(&desc, 0, sizeof(desc));
+ rt2x00dev->ops->lib->fill_rxdone(entry, &desc);
- #include "defs.h"
- #include "dev.h"
+ hdr = (struct ieee80211_hdr *)entry->data_addr;
+@@ -163,6 +169,17 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+ memcpy(skb_put(skb, desc.size), entry->data_addr, desc.size);
- struct cmd_ds_command;
--int libertas_cmd_80211_authenticate(wlan_private * priv,
-+int lbs_cmd_80211_authenticate(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf);
--int libertas_cmd_80211_ad_hoc_join(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_join(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf);
--int libertas_cmd_80211_ad_hoc_stop(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_stop(struct lbs_private *priv,
- struct cmd_ds_command *cmd);
--int libertas_cmd_80211_ad_hoc_start(wlan_private * priv,
-+int lbs_cmd_80211_ad_hoc_start(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf);
--int libertas_cmd_80211_deauthenticate(wlan_private * priv,
-+int lbs_cmd_80211_deauthenticate(struct lbs_private *priv,
- struct cmd_ds_command *cmd);
--int libertas_cmd_80211_associate(wlan_private * priv,
-+int lbs_cmd_80211_associate(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf);
+ /*
++ * Fill in skb descriptor
++ */
++ skbdesc = get_skb_desc(skb);
++ skbdesc->desc_len = entry->ring->desc_size;
++ skbdesc->data_len = skb->len;
++ skbdesc->desc = entry->priv;
++ skbdesc->data = skb->data;
++ skbdesc->ring = ring;
++ skbdesc->entry = entry;
++
++ /*
+ * Send the frame to rt2x00lib for further processing.
+ */
+ rt2x00lib_rxdone(entry, skb, &desc);
+@@ -177,6 +194,37 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+ }
+ EXPORT_SYMBOL_GPL(rt2x00pci_rxdone);
--int libertas_ret_80211_ad_hoc_start(wlan_private * priv,
-+int lbs_ret_80211_ad_hoc_start(struct lbs_private *priv,
- struct cmd_ds_command *resp);
--int libertas_ret_80211_ad_hoc_stop(wlan_private * priv,
-+int lbs_ret_80211_ad_hoc_stop(struct lbs_private *priv,
- struct cmd_ds_command *resp);
--int libertas_ret_80211_disassociate(wlan_private * priv,
-+int lbs_ret_80211_disassociate(struct lbs_private *priv,
- struct cmd_ds_command *resp);
--int libertas_ret_80211_associate(wlan_private * priv,
-+int lbs_ret_80211_associate(struct lbs_private *priv,
- struct cmd_ds_command *resp);
++void rt2x00pci_txdone(struct rt2x00_dev *rt2x00dev, struct data_entry *entry,
++ const int tx_status, const int retry)
++{
++ u32 word;
++
++ rt2x00lib_txdone(entry, tx_status, retry);
++
++ /*
++ * Make this entry available for reuse.
++ */
++ entry->flags = 0;
++
++ rt2x00_desc_read(entry->priv, 0, &word);
++ rt2x00_set_field32(&word, TXD_ENTRY_OWNER_NIC, 0);
++ rt2x00_set_field32(&word, TXD_ENTRY_VALID, 0);
++ rt2x00_desc_write(entry->priv, 0, word);
++
++ rt2x00_ring_index_done_inc(entry->ring);
++
++ /*
++ * If the data ring was full before the txdone handler ran,
++ * we must make sure the packet queue in the mac80211 stack
++ * is re-enabled when the txdone handler has finished.
++ */
++ if (!rt2x00_ring_full(entry->ring))
++ ieee80211_wake_queue(rt2x00dev->hw,
++ entry->tx_status.control.queue);
++
++}
++EXPORT_SYMBOL_GPL(rt2x00pci_txdone);
++
+ /*
+ * Device initialization handlers.
+ */
+diff --git a/drivers/net/wireless/rt2x00/rt2x00pci.h b/drivers/net/wireless/rt2x00/rt2x00pci.h
+index 82adeac..2d1eb81 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00pci.h
++++ b/drivers/net/wireless/rt2x00/rt2x00pci.h
+@@ -57,7 +57,7 @@
+ /*
+ * Register access.
+ */
+-static inline void rt2x00pci_register_read(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00pci_register_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned long offset,
+ u32 *value)
+ {
+@@ -65,14 +65,14 @@ static inline void rt2x00pci_register_read(const struct rt2x00_dev *rt2x00dev,
+ }
--int libertas_start_adhoc_network(wlan_private * priv,
-+int lbs_start_adhoc_network(struct lbs_private *priv,
- struct assoc_request * assoc_req);
--int libertas_join_adhoc_network(wlan_private * priv,
-+int lbs_join_adhoc_network(struct lbs_private *priv,
- struct assoc_request * assoc_req);
--int libertas_stop_adhoc_network(wlan_private * priv);
-+int lbs_stop_adhoc_network(struct lbs_private *priv);
+ static inline void
+-rt2x00pci_register_multiread(const struct rt2x00_dev *rt2x00dev,
++rt2x00pci_register_multiread(struct rt2x00_dev *rt2x00dev,
+ const unsigned long offset,
+ void *value, const u16 length)
+ {
+ memcpy_fromio(value, rt2x00dev->csr_addr + offset, length);
+ }
--int libertas_send_deauthentication(wlan_private * priv);
-+int lbs_send_deauthentication(struct lbs_private *priv);
+-static inline void rt2x00pci_register_write(const struct rt2x00_dev *rt2x00dev,
++static inline void rt2x00pci_register_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned long offset,
+ u32 value)
+ {
+@@ -80,7 +80,7 @@ static inline void rt2x00pci_register_write(const struct rt2x00_dev *rt2x00dev,
+ }
--int wlan_associate(wlan_private * priv, struct assoc_request * assoc_req);
-+int lbs_associate(struct lbs_private *priv, struct assoc_request *assoc_req);
+ static inline void
+-rt2x00pci_register_multiwrite(const struct rt2x00_dev *rt2x00dev,
++rt2x00pci_register_multiwrite(struct rt2x00_dev *rt2x00dev,
+ const unsigned long offset,
+ void *value, const u16 length)
+ {
+@@ -101,9 +101,11 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ struct ieee80211_tx_control *control);
--void libertas_unset_basic_rate_flags(u8 * rates, size_t len);
-+void lbs_unset_basic_rate_flags(u8 *rates, size_t len);
+ /*
+- * RX data handlers.
++ * RX/TX data handlers.
+ */
+ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev);
++void rt2x00pci_txdone(struct rt2x00_dev *rt2x00dev, struct data_entry *entry,
++ const int tx_status, const int retry);
- #endif
-diff --git a/drivers/net/wireless/libertas/main.c b/drivers/net/wireless/libertas/main.c
-index 1823b48..84fb49c 100644
---- a/drivers/net/wireless/libertas/main.c
-+++ b/drivers/net/wireless/libertas/main.c
-@@ -6,7 +6,6 @@
+ /*
+ * Device initialization handlers.
+diff --git a/drivers/net/wireless/rt2x00/rt2x00rfkill.c b/drivers/net/wireless/rt2x00/rt2x00rfkill.c
+index a0f8b8e..34a96d4 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00rfkill.c
++++ b/drivers/net/wireless/rt2x00/rt2x00rfkill.c
+@@ -23,11 +23,6 @@
+ Abstract: rt2x00 rfkill routines.
+ */
- #include <linux/moduleparam.h>
- #include <linux/delay.h>
--#include <linux/freezer.h>
- #include <linux/etherdevice.h>
- #include <linux/netdevice.h>
- #include <linux/if_arp.h>
-@@ -22,9 +21,10 @@
- #include "debugfs.h"
- #include "assoc.h"
- #include "join.h"
-+#include "cmd.h"
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00lib"
+-
+ #include <linux/input-polldev.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -68,8 +63,10 @@ static void rt2x00rfkill_poll(struct input_polled_dev *poll_dev)
+ struct rt2x00_dev *rt2x00dev = poll_dev->private;
+ int state = rt2x00dev->ops->lib->rfkill_poll(rt2x00dev);
- #define DRIVER_RELEASE_VERSION "323.p0"
--const char libertas_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
-+const char lbs_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
- #ifdef DEBUG
- "-dbg"
- #endif
-@@ -32,80 +32,80 @@ const char libertas_driver_version[] = "COMM-USB8388-" DRIVER_RELEASE_VERSION
+- if (rt2x00dev->rfkill->state != state)
++ if (rt2x00dev->rfkill->state != state) {
+ input_report_key(poll_dev->input, KEY_WLAN, 1);
++ input_report_key(poll_dev->input, KEY_WLAN, 0);
++ }
+ }
+ int rt2x00rfkill_register(struct rt2x00_dev *rt2x00dev)
+@@ -92,6 +89,13 @@ int rt2x00rfkill_register(struct rt2x00_dev *rt2x00dev)
+ return retval;
+ }
- /* Module parameters */
--unsigned int libertas_debug = 0;
--module_param(libertas_debug, int, 0644);
--EXPORT_SYMBOL_GPL(libertas_debug);
-+unsigned int lbs_debug;
-+EXPORT_SYMBOL_GPL(lbs_debug);
-+module_param_named(libertas_debug, lbs_debug, int, 0644);
++ /*
++ * Force initial poll which will detect the initial device state,
++ * and correctly sends the signal to the rfkill layer about this
++ * state.
++ */
++ rt2x00rfkill_poll(rt2x00dev->poll_dev);
++
+ return 0;
+ }
+@@ -114,26 +118,41 @@ int rt2x00rfkill_allocate(struct rt2x00_dev *rt2x00dev)
+ rt2x00dev->rfkill = rfkill_allocate(device, RFKILL_TYPE_WLAN);
+ if (!rt2x00dev->rfkill) {
+ ERROR(rt2x00dev, "Failed to allocate rfkill handler.\n");
+- return -ENOMEM;
++ goto exit;
+ }
--#define WLAN_TX_PWR_DEFAULT 20 /*100mW */
--#define WLAN_TX_PWR_US_DEFAULT 20 /*100mW */
--#define WLAN_TX_PWR_JP_DEFAULT 16 /*50mW */
--#define WLAN_TX_PWR_FR_DEFAULT 20 /*100mW */
--#define WLAN_TX_PWR_EMEA_DEFAULT 20 /*100mW */
-+#define LBS_TX_PWR_DEFAULT 20 /*100mW */
-+#define LBS_TX_PWR_US_DEFAULT 20 /*100mW */
-+#define LBS_TX_PWR_JP_DEFAULT 16 /*50mW */
-+#define LBS_TX_PWR_FR_DEFAULT 20 /*100mW */
-+#define LBS_TX_PWR_EMEA_DEFAULT 20 /*100mW */
+ rt2x00dev->rfkill->name = rt2x00dev->ops->name;
+ rt2x00dev->rfkill->data = rt2x00dev;
+- rt2x00dev->rfkill->state = rt2x00dev->ops->lib->rfkill_poll(rt2x00dev);
++ rt2x00dev->rfkill->state = -1;
+ rt2x00dev->rfkill->toggle_radio = rt2x00rfkill_toggle_radio;
- /* Format { channel, frequency (MHz), maxtxpower } */
- /* band: 'B/G', region: USA FCC/Canada IC */
- static struct chan_freq_power channel_freq_power_US_BG[] = {
-- {1, 2412, WLAN_TX_PWR_US_DEFAULT},
-- {2, 2417, WLAN_TX_PWR_US_DEFAULT},
-- {3, 2422, WLAN_TX_PWR_US_DEFAULT},
-- {4, 2427, WLAN_TX_PWR_US_DEFAULT},
-- {5, 2432, WLAN_TX_PWR_US_DEFAULT},
-- {6, 2437, WLAN_TX_PWR_US_DEFAULT},
-- {7, 2442, WLAN_TX_PWR_US_DEFAULT},
-- {8, 2447, WLAN_TX_PWR_US_DEFAULT},
-- {9, 2452, WLAN_TX_PWR_US_DEFAULT},
-- {10, 2457, WLAN_TX_PWR_US_DEFAULT},
-- {11, 2462, WLAN_TX_PWR_US_DEFAULT}
-+ {1, 2412, LBS_TX_PWR_US_DEFAULT},
-+ {2, 2417, LBS_TX_PWR_US_DEFAULT},
-+ {3, 2422, LBS_TX_PWR_US_DEFAULT},
-+ {4, 2427, LBS_TX_PWR_US_DEFAULT},
-+ {5, 2432, LBS_TX_PWR_US_DEFAULT},
-+ {6, 2437, LBS_TX_PWR_US_DEFAULT},
-+ {7, 2442, LBS_TX_PWR_US_DEFAULT},
-+ {8, 2447, LBS_TX_PWR_US_DEFAULT},
-+ {9, 2452, LBS_TX_PWR_US_DEFAULT},
-+ {10, 2457, LBS_TX_PWR_US_DEFAULT},
-+ {11, 2462, LBS_TX_PWR_US_DEFAULT}
- };
+ rt2x00dev->poll_dev = input_allocate_polled_device();
+ if (!rt2x00dev->poll_dev) {
+ ERROR(rt2x00dev, "Failed to allocate polled device.\n");
+- rfkill_free(rt2x00dev->rfkill);
+- return -ENOMEM;
++ goto exit_free_rfkill;
+ }
- /* band: 'B/G', region: Europe ETSI */
- static struct chan_freq_power channel_freq_power_EU_BG[] = {
-- {1, 2412, WLAN_TX_PWR_EMEA_DEFAULT},
-- {2, 2417, WLAN_TX_PWR_EMEA_DEFAULT},
-- {3, 2422, WLAN_TX_PWR_EMEA_DEFAULT},
-- {4, 2427, WLAN_TX_PWR_EMEA_DEFAULT},
-- {5, 2432, WLAN_TX_PWR_EMEA_DEFAULT},
-- {6, 2437, WLAN_TX_PWR_EMEA_DEFAULT},
-- {7, 2442, WLAN_TX_PWR_EMEA_DEFAULT},
-- {8, 2447, WLAN_TX_PWR_EMEA_DEFAULT},
-- {9, 2452, WLAN_TX_PWR_EMEA_DEFAULT},
-- {10, 2457, WLAN_TX_PWR_EMEA_DEFAULT},
-- {11, 2462, WLAN_TX_PWR_EMEA_DEFAULT},
-- {12, 2467, WLAN_TX_PWR_EMEA_DEFAULT},
-- {13, 2472, WLAN_TX_PWR_EMEA_DEFAULT}
-+ {1, 2412, LBS_TX_PWR_EMEA_DEFAULT},
-+ {2, 2417, LBS_TX_PWR_EMEA_DEFAULT},
-+ {3, 2422, LBS_TX_PWR_EMEA_DEFAULT},
-+ {4, 2427, LBS_TX_PWR_EMEA_DEFAULT},
-+ {5, 2432, LBS_TX_PWR_EMEA_DEFAULT},
-+ {6, 2437, LBS_TX_PWR_EMEA_DEFAULT},
-+ {7, 2442, LBS_TX_PWR_EMEA_DEFAULT},
-+ {8, 2447, LBS_TX_PWR_EMEA_DEFAULT},
-+ {9, 2452, LBS_TX_PWR_EMEA_DEFAULT},
-+ {10, 2457, LBS_TX_PWR_EMEA_DEFAULT},
-+ {11, 2462, LBS_TX_PWR_EMEA_DEFAULT},
-+ {12, 2467, LBS_TX_PWR_EMEA_DEFAULT},
-+ {13, 2472, LBS_TX_PWR_EMEA_DEFAULT}
- };
+ rt2x00dev->poll_dev->private = rt2x00dev;
+ rt2x00dev->poll_dev->poll = rt2x00rfkill_poll;
+ rt2x00dev->poll_dev->poll_interval = RFKILL_POLL_INTERVAL;
- /* band: 'B/G', region: Spain */
- static struct chan_freq_power channel_freq_power_SPN_BG[] = {
-- {10, 2457, WLAN_TX_PWR_DEFAULT},
-- {11, 2462, WLAN_TX_PWR_DEFAULT}
-+ {10, 2457, LBS_TX_PWR_DEFAULT},
-+ {11, 2462, LBS_TX_PWR_DEFAULT}
- };
++ rt2x00dev->poll_dev->input->name = rt2x00dev->ops->name;
++ rt2x00dev->poll_dev->input->phys = wiphy_name(rt2x00dev->hw->wiphy);
++ rt2x00dev->poll_dev->input->id.bustype = BUS_HOST;
++ rt2x00dev->poll_dev->input->id.vendor = 0x1814;
++ rt2x00dev->poll_dev->input->id.product = rt2x00dev->chip.rt;
++ rt2x00dev->poll_dev->input->id.version = rt2x00dev->chip.rev;
++ rt2x00dev->poll_dev->input->dev.parent = device;
++ rt2x00dev->poll_dev->input->evbit[0] = BIT(EV_KEY);
++ set_bit(KEY_WLAN, rt2x00dev->poll_dev->input->keybit);
++
+ return 0;
++
++exit_free_rfkill:
++ rfkill_free(rt2x00dev->rfkill);
++
++exit:
++ return -ENOMEM;
+ }
- /* band: 'B/G', region: France */
- static struct chan_freq_power channel_freq_power_FR_BG[] = {
-- {10, 2457, WLAN_TX_PWR_FR_DEFAULT},
-- {11, 2462, WLAN_TX_PWR_FR_DEFAULT},
-- {12, 2467, WLAN_TX_PWR_FR_DEFAULT},
-- {13, 2472, WLAN_TX_PWR_FR_DEFAULT}
-+ {10, 2457, LBS_TX_PWR_FR_DEFAULT},
-+ {11, 2462, LBS_TX_PWR_FR_DEFAULT},
-+ {12, 2467, LBS_TX_PWR_FR_DEFAULT},
-+ {13, 2472, LBS_TX_PWR_FR_DEFAULT}
- };
+ void rt2x00rfkill_free(struct rt2x00_dev *rt2x00dev)
+diff --git a/drivers/net/wireless/rt2x00/rt2x00ring.h b/drivers/net/wireless/rt2x00/rt2x00ring.h
+index 1a864d3..1caa6d6 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00ring.h
++++ b/drivers/net/wireless/rt2x00/rt2x00ring.h
+@@ -27,19 +27,27 @@
+ #define RT2X00RING_H
- /* band: 'B/G', region: Japan */
- static struct chan_freq_power channel_freq_power_JPN_BG[] = {
-- {1, 2412, WLAN_TX_PWR_JP_DEFAULT},
-- {2, 2417, WLAN_TX_PWR_JP_DEFAULT},
-- {3, 2422, WLAN_TX_PWR_JP_DEFAULT},
-- {4, 2427, WLAN_TX_PWR_JP_DEFAULT},
-- {5, 2432, WLAN_TX_PWR_JP_DEFAULT},
-- {6, 2437, WLAN_TX_PWR_JP_DEFAULT},
-- {7, 2442, WLAN_TX_PWR_JP_DEFAULT},
-- {8, 2447, WLAN_TX_PWR_JP_DEFAULT},
-- {9, 2452, WLAN_TX_PWR_JP_DEFAULT},
-- {10, 2457, WLAN_TX_PWR_JP_DEFAULT},
-- {11, 2462, WLAN_TX_PWR_JP_DEFAULT},
-- {12, 2467, WLAN_TX_PWR_JP_DEFAULT},
-- {13, 2472, WLAN_TX_PWR_JP_DEFAULT},
-- {14, 2484, WLAN_TX_PWR_JP_DEFAULT}
-+ {1, 2412, LBS_TX_PWR_JP_DEFAULT},
-+ {2, 2417, LBS_TX_PWR_JP_DEFAULT},
-+ {3, 2422, LBS_TX_PWR_JP_DEFAULT},
-+ {4, 2427, LBS_TX_PWR_JP_DEFAULT},
-+ {5, 2432, LBS_TX_PWR_JP_DEFAULT},
-+ {6, 2437, LBS_TX_PWR_JP_DEFAULT},
-+ {7, 2442, LBS_TX_PWR_JP_DEFAULT},
-+ {8, 2447, LBS_TX_PWR_JP_DEFAULT},
-+ {9, 2452, LBS_TX_PWR_JP_DEFAULT},
-+ {10, 2457, LBS_TX_PWR_JP_DEFAULT},
-+ {11, 2462, LBS_TX_PWR_JP_DEFAULT},
-+ {12, 2467, LBS_TX_PWR_JP_DEFAULT},
-+ {13, 2472, LBS_TX_PWR_JP_DEFAULT},
-+ {14, 2484, LBS_TX_PWR_JP_DEFAULT}
+ /*
+- * data_desc
+- * Each data entry also contains a descriptor which is used by the
+- * device to determine what should be done with the packet and
+- * what the current status is.
+- * This structure is greatly simplified, but the descriptors
+- * are basically a list of little endian 32 bit values.
+- * Make the array by default 1 word big, this will allow us
+- * to use sizeof() correctly.
++ * skb_desc
++ * Descriptor information for the skb buffer
+ */
+-struct data_desc {
+- __le32 word[1];
++struct skb_desc {
++ unsigned int frame_type;
++
++ unsigned int desc_len;
++ unsigned int data_len;
++
++ void *desc;
++ void *data;
++
++ struct data_ring *ring;
++ struct data_entry *entry;
};
- /**
-@@ -153,13 +153,13 @@ static struct region_cfp_table region_cfp_table[] = {
- /**
- * the table to keep region code
- */
--u16 libertas_region_code_to_index[MRVDRV_MAX_REGION_CODE] =
-+u16 lbs_region_code_to_index[MRVDRV_MAX_REGION_CODE] =
- { 0x10, 0x20, 0x30, 0x31, 0x32, 0x40 };
++static inline struct skb_desc* get_skb_desc(struct sk_buff *skb)
++{
++ return (struct skb_desc*)&skb->cb[0];
++}
++
+ /*
+ * rxdata_entry_desc
+ * Summary of information that has been read from the
+@@ -51,6 +59,7 @@ struct rxdata_entry_desc {
+ int ofdm;
+ int size;
+ int flags;
++ int my_bss;
+ };
- /**
- * 802.11b/g supported bitrates (in 500Kb/s units)
- */
--u8 libertas_bg_rates[MAX_RATES] =
-+u8 lbs_bg_rates[MAX_RATES] =
- { 0x02, 0x04, 0x0b, 0x16, 0x0c, 0x12, 0x18, 0x24, 0x30, 0x48, 0x60, 0x6c,
- 0x00, 0x00 };
+ /*
+@@ -66,6 +75,7 @@ struct txdata_entry_desc {
+ #define ENTRY_TXD_MORE_FRAG 4
+ #define ENTRY_TXD_REQ_TIMESTAMP 5
+ #define ENTRY_TXD_BURST 6
++#define ENTRY_TXD_ACK 7
-@@ -179,7 +179,7 @@ static u8 fw_data_rates[MAX_RATES] =
- * @param idx The index of data rate
- * @return data rate or 0
- */
--u32 libertas_fw_index_to_data_rate(u8 idx)
-+u32 lbs_fw_index_to_data_rate(u8 idx)
- {
- if (idx >= sizeof(fw_data_rates))
- idx = 0;
-@@ -192,7 +192,7 @@ u32 libertas_fw_index_to_data_rate(u8 idx)
- * @param rate data rate
- * @return index or 0
- */
--u8 libertas_data_rate_to_fw_index(u32 rate)
-+u8 lbs_data_rate_to_fw_index(u32 rate)
- {
- u8 i;
+ /*
+ * Queue ID. ID's 0-4 are data TX rings
+@@ -134,6 +144,11 @@ struct data_entry {
+ */
+ void *data_addr;
+ dma_addr_t data_dma;
++
++ /*
++ * Entry identification number (index).
++ */
++ unsigned int entry_idx;
+ };
-@@ -213,16 +213,18 @@ u8 libertas_data_rate_to_fw_index(u32 rate)
- /**
- * @brief Get function for sysfs attribute anycast_mask
- */
--static ssize_t libertas_anycast_get(struct device * dev,
-+static ssize_t lbs_anycast_get(struct device *dev,
- struct device_attribute *attr, char * buf)
- {
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
- struct cmd_ds_mesh_access mesh_access;
-+ int ret;
+ /*
+@@ -172,6 +187,13 @@ struct data_ring {
+ void *data_addr;
- memset(&mesh_access, 0, sizeof(mesh_access));
-- libertas_prepare_and_send_command(to_net_dev(dev)->priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_GET_ANYCAST,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
+ /*
++ * Queue identification number:
++ * RX: 0
++ * TX: IEEE80211_TX_*
++ */
++ unsigned int queue_idx;
+
-+ ret = lbs_mesh_access(priv, CMD_ACT_MESH_GET_ANYCAST, &mesh_access);
-+ if (ret)
-+ return ret;
-
- return snprintf(buf, 12, "0x%X\n", le32_to_cpu(mesh_access.data[0]));
- }
-@@ -230,244 +232,191 @@ static ssize_t libertas_anycast_get(struct device * dev,
- /**
- * @brief Set function for sysfs attribute anycast_mask
++ /*
+ * Index variables.
+ */
+ u16 index;
+@@ -253,16 +275,16 @@ static inline int rt2x00_ring_free(struct data_ring *ring)
+ /*
+ * TX/RX Descriptor access functions.
*/
--static ssize_t libertas_anycast_set(struct device * dev,
-+static ssize_t lbs_anycast_set(struct device *dev,
- struct device_attribute *attr, const char * buf, size_t count)
+-static inline void rt2x00_desc_read(struct data_desc *desc,
++static inline void rt2x00_desc_read(__le32 *desc,
+ const u8 word, u32 *value)
{
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
- struct cmd_ds_mesh_access mesh_access;
- uint32_t datum;
-+ int ret;
-
- memset(&mesh_access, 0, sizeof(mesh_access));
- sscanf(buf, "%x", &datum);
- mesh_access.data[0] = cpu_to_le32(datum);
-
-- libertas_prepare_and_send_command((to_net_dev(dev))->priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_SET_ANYCAST,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
-+ ret = lbs_mesh_access(priv, CMD_ACT_MESH_SET_ANYCAST, &mesh_access);
-+ if (ret)
-+ return ret;
-+
- return strlen(buf);
+- *value = le32_to_cpu(desc->word[word]);
++ *value = le32_to_cpu(desc[word]);
}
--int libertas_add_rtap(wlan_private *priv);
--void libertas_remove_rtap(wlan_private *priv);
-+static int lbs_add_rtap(struct lbs_private *priv);
-+static void lbs_remove_rtap(struct lbs_private *priv);
-+static int lbs_add_mesh(struct lbs_private *priv);
-+static void lbs_remove_mesh(struct lbs_private *priv);
-+
-
- /**
- * Get function for sysfs attribute rtap
- */
--static ssize_t libertas_rtap_get(struct device * dev,
-+static ssize_t lbs_rtap_get(struct device *dev,
- struct device_attribute *attr, char * buf)
+-static inline void rt2x00_desc_write(struct data_desc *desc,
++static inline void rt2x00_desc_write(__le32 *desc,
+ const u8 word, const u32 value)
{
-- wlan_private *priv = (wlan_private *) (to_net_dev(dev))->priv;
-- wlan_adapter *adapter = priv->adapter;
-- return snprintf(buf, 5, "0x%X\n", adapter->monitormode);
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
-+ return snprintf(buf, 5, "0x%X\n", priv->monitormode);
+- desc->word[word] = cpu_to_le32(value);
++ desc[word] = cpu_to_le32(value);
}
- /**
- * Set function for sysfs attribute rtap
+ #endif /* RT2X00RING_H */
+diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.c b/drivers/net/wireless/rt2x00/rt2x00usb.c
+index 568d738..84e9bdb 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/rt2x00/rt2x00usb.c
+@@ -23,14 +23,10 @@
+ Abstract: rt2x00 generic usb device routines.
*/
--static ssize_t libertas_rtap_set(struct device * dev,
-+static ssize_t lbs_rtap_set(struct device *dev,
- struct device_attribute *attr, const char * buf, size_t count)
- {
- int monitor_mode;
-- wlan_private *priv = (wlan_private *) (to_net_dev(dev))->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
- sscanf(buf, "%x", &monitor_mode);
-- if (monitor_mode != WLAN_MONITOR_OFF) {
-- if(adapter->monitormode == monitor_mode)
-+ if (monitor_mode != LBS_MONITOR_OFF) {
-+ if(priv->monitormode == monitor_mode)
- return strlen(buf);
-- if (adapter->monitormode == WLAN_MONITOR_OFF) {
-- if (adapter->mode == IW_MODE_INFRA)
-- libertas_send_deauthentication(priv);
-- else if (adapter->mode == IW_MODE_ADHOC)
-- libertas_stop_adhoc_network(priv);
-- libertas_add_rtap(priv);
-+ if (priv->monitormode == LBS_MONITOR_OFF) {
-+ if (priv->infra_open || priv->mesh_open)
-+ return -EBUSY;
-+ if (priv->mode == IW_MODE_INFRA)
-+ lbs_send_deauthentication(priv);
-+ else if (priv->mode == IW_MODE_ADHOC)
-+ lbs_stop_adhoc_network(priv);
-+ lbs_add_rtap(priv);
- }
-- adapter->monitormode = monitor_mode;
-+ priv->monitormode = monitor_mode;
- }
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt2x00usb"
+-
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/usb.h>
++#include <linux/bug.h>
+
+ #include "rt2x00.h"
+ #include "rt2x00usb.h"
+@@ -38,7 +34,7 @@
+ /*
+ * Interfacing with the HW.
+ */
+-int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
++int rt2x00usb_vendor_request(struct rt2x00_dev *rt2x00dev,
+ const u8 request, const u8 requesttype,
+ const u16 offset, const u16 value,
+ void *buffer, const u16 buffer_length,
+@@ -52,6 +48,7 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
+ (requesttype == USB_VENDOR_REQUEST_IN) ?
+ usb_rcvctrlpipe(usb_dev, 0) : usb_sndctrlpipe(usb_dev, 0);
- else {
-- if(adapter->monitormode == WLAN_MONITOR_OFF)
-+ if (priv->monitormode == LBS_MONITOR_OFF)
- return strlen(buf);
-- adapter->monitormode = WLAN_MONITOR_OFF;
-- libertas_remove_rtap(priv);
-- netif_wake_queue(priv->dev);
-- netif_wake_queue(priv->mesh_dev);
-+ priv->monitormode = LBS_MONITOR_OFF;
-+ lbs_remove_rtap(priv);
+
-+ if (priv->currenttxskb) {
-+ dev_kfree_skb_any(priv->currenttxskb);
-+ priv->currenttxskb = NULL;
-+ }
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+ status = usb_control_msg(usb_dev, pipe, request, requesttype,
+ value, offset, buffer, buffer_length,
+@@ -76,13 +73,15 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
+ }
+ EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request);
+
+-int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
+- const u8 request, const u8 requesttype,
+- const u16 offset, void *buffer,
+- const u16 buffer_length, const int timeout)
++int rt2x00usb_vendor_req_buff_lock(struct rt2x00_dev *rt2x00dev,
++ const u8 request, const u8 requesttype,
++ const u16 offset, void *buffer,
++ const u16 buffer_length, const int timeout)
+ {
+ int status;
+
++ BUG_ON(!mutex_is_locked(&rt2x00dev->usb_cache_mutex));
+
-+ /* Wake queues, command thread, etc. */
-+ lbs_host_to_card_done(priv);
- }
+ /*
+ * Check for Cache availability.
+ */
+@@ -103,6 +102,25 @@ int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
-- libertas_prepare_and_send_command(priv,
-+ lbs_prepare_and_send_command(priv,
- CMD_802_11_MONITOR_MODE, CMD_ACT_SET,
-- CMD_OPTION_WAITFORRSP, 0, &adapter->monitormode);
-+ CMD_OPTION_WAITFORRSP, 0, &priv->monitormode);
- return strlen(buf);
+ return status;
}
++EXPORT_SYMBOL_GPL(rt2x00usb_vendor_req_buff_lock);
++
++int rt2x00usb_vendor_request_buff(struct rt2x00_dev *rt2x00dev,
++ const u8 request, const u8 requesttype,
++ const u16 offset, void *buffer,
++ const u16 buffer_length, const int timeout)
++{
++ int status;
++
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
++
++ status = rt2x00usb_vendor_req_buff_lock(rt2x00dev, request,
++ requesttype, offset, buffer,
++ buffer_length, timeout);
++
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
++
++ return status;
++}
+ EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request_buff);
- /**
-- * libertas_rtap attribute to be exported per mshX interface
-- * through sysfs (/sys/class/net/mshX/libertas-rtap)
-+ * lbs_rtap attribute to be exported per ethX interface
-+ * through sysfs (/sys/class/net/ethX/lbs_rtap)
- */
--static DEVICE_ATTR(libertas_rtap, 0644, libertas_rtap_get,
-- libertas_rtap_set );
-+static DEVICE_ATTR(lbs_rtap, 0644, lbs_rtap_get, lbs_rtap_set );
+ /*
+@@ -113,7 +131,7 @@ static void rt2x00usb_interrupt_txdone(struct urb *urb)
+ struct data_entry *entry = (struct data_entry *)urb->context;
+ struct data_ring *ring = entry->ring;
+ struct rt2x00_dev *rt2x00dev = ring->rt2x00dev;
+- struct data_desc *txd = (struct data_desc *)entry->skb->data;
++ __le32 *txd = (__le32 *)entry->skb->data;
+ u32 word;
+ int tx_status;
- /**
-- * anycast_mask attribute to be exported per mshX interface
-- * through sysfs (/sys/class/net/mshX/anycast_mask)
-+ * Get function for sysfs attribute mesh
- */
--static DEVICE_ATTR(anycast_mask, 0644, libertas_anycast_get, libertas_anycast_set);
--
--static ssize_t libertas_autostart_enabled_get(struct device * dev,
-+static ssize_t lbs_mesh_get(struct device *dev,
- struct device_attribute *attr, char * buf)
- {
-- struct cmd_ds_mesh_access mesh_access;
--
-- memset(&mesh_access, 0, sizeof(mesh_access));
-- libertas_prepare_and_send_command(to_net_dev(dev)->priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_GET_AUTOSTART_ENABLED,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
--
-- return sprintf(buf, "%d\n", le32_to_cpu(mesh_access.data[0]));
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
-+ return snprintf(buf, 5, "0x%X\n", !!priv->mesh_dev);
- }
+@@ -158,20 +176,17 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ struct usb_device *usb_dev =
+ interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
+ struct data_entry *entry = rt2x00_get_data_entry(ring);
+- int pipe = usb_sndbulkpipe(usb_dev, 1);
++ struct skb_desc *desc;
+ u32 length;
--static ssize_t libertas_autostart_enabled_set(struct device * dev,
-+/**
-+ * Set function for sysfs attribute mesh
-+ */
-+static ssize_t lbs_mesh_set(struct device *dev,
- struct device_attribute *attr, const char * buf, size_t count)
- {
-- struct cmd_ds_mesh_access mesh_access;
-- uint32_t datum;
-- wlan_private * priv = (to_net_dev(dev))->priv;
-+ struct lbs_private *priv = to_net_dev(dev)->priv;
-+ int enable;
- int ret;
+- if (rt2x00_ring_full(ring)) {
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
++ if (rt2x00_ring_full(ring))
+ return -EINVAL;
+- }
-- memset(&mesh_access, 0, sizeof(mesh_access));
-- sscanf(buf, "%d", &datum);
-- mesh_access.data[0] = cpu_to_le32(datum);
-+ sscanf(buf, "%x", &enable);
-+ enable = !!enable;
-+ if (enable == !!priv->mesh_dev)
-+ return count;
+ if (test_bit(ENTRY_OWNER_NIC, &entry->flags)) {
+ ERROR(rt2x00dev,
+ "Arrived at non-free entry in the non-full queue %d.\n"
+ "Please file bug report to %s.\n",
+ control->queue, DRV_PROJECT);
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+ return -EINVAL;
+ }
+
+@@ -181,12 +196,18 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ skb_push(skb, ring->desc_size);
+ memset(skb->data, 0, ring->desc_size);
+
+- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
+- (struct ieee80211_hdr *)(skb->data +
+- ring->desc_size),
+- skb->len - ring->desc_size, control);
+- memcpy(&entry->tx_status.control, control, sizeof(*control));
+- entry->skb = skb;
++ /*
++ * Fill in skb descriptor
++ */
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len - ring->desc_size;
++ desc->desc = skb->data;
++ desc->data = skb->data + ring->desc_size;
++ desc->ring = ring;
++ desc->entry = entry;
+
-+ ret = lbs_mesh_config(priv, enable, priv->curbssparams.channel);
-+ if (ret)
-+ return ret;
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
-- if (ret == 0)
-- priv->mesh_autostart_enabled = datum ? 1 : 0;
-+ if (enable)
-+ lbs_add_mesh(priv);
-+ else
-+ lbs_remove_mesh(priv);
+ /*
+ * USB devices cannot blindly pass the skb->len as the
+@@ -199,15 +220,12 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ * Initialize URB and send the frame to the device.
+ */
+ __set_bit(ENTRY_OWNER_NIC, &entry->flags);
+- usb_fill_bulk_urb(entry->priv, usb_dev, pipe,
++ usb_fill_bulk_urb(entry->priv, usb_dev, usb_sndbulkpipe(usb_dev, 1),
+ skb->data, length, rt2x00usb_interrupt_txdone, entry);
+ usb_submit_urb(entry->priv, GFP_ATOMIC);
-- return strlen(buf);
-+ return count;
- }
+ rt2x00_ring_index_inc(ring);
--static DEVICE_ATTR(autostart_enabled, 0644,
-- libertas_autostart_enabled_get, libertas_autostart_enabled_set);
-+/**
-+ * lbs_mesh attribute to be exported per ethX interface
-+ * through sysfs (/sys/class/net/ethX/lbs_mesh)
-+ */
-+static DEVICE_ATTR(lbs_mesh, 0644, lbs_mesh_get, lbs_mesh_set);
+- if (rt2x00_ring_full(ring))
+- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
+-
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(rt2x00usb_write_tx_data);
+@@ -222,6 +240,7 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ struct rt2x00_dev *rt2x00dev = ring->rt2x00dev;
+ struct sk_buff *skb;
+ struct ieee80211_hdr *hdr;
++ struct skb_desc *skbdesc;
+ struct rxdata_entry_desc desc;
+ int header_size;
+ int frame_size;
+@@ -238,7 +257,14 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ if (urb->actual_length < entry->ring->desc_size || urb->status)
+ goto skip_entry;
--static struct attribute *libertas_mesh_sysfs_entries[] = {
-+/**
-+ * anycast_mask attribute to be exported per mshX interface
-+ * through sysfs (/sys/class/net/mshX/anycast_mask)
-+ */
-+static DEVICE_ATTR(anycast_mask, 0644, lbs_anycast_get, lbs_anycast_set);
+- memset(&desc, 0x00, sizeof(desc));
++ /*
++ * Fill in skb descriptor
++ */
++ skbdesc = get_skb_desc(entry->skb);
++ skbdesc->ring = ring;
++ skbdesc->entry = entry;
+
-+static struct attribute *lbs_mesh_sysfs_entries[] = {
- &dev_attr_anycast_mask.attr,
-- &dev_attr_autostart_enabled.attr,
- NULL,
- };
++ memset(&desc, 0, sizeof(desc));
+ rt2x00dev->ops->lib->fill_rxdone(entry, &desc);
--static struct attribute_group libertas_mesh_attr_group = {
-- .attrs = libertas_mesh_sysfs_entries,
-+static struct attribute_group lbs_mesh_attr_group = {
-+ .attrs = lbs_mesh_sysfs_entries,
- };
+ /*
+@@ -264,9 +290,6 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ /*
+ * The data behind the ieee80211 header must be
+ * aligned on a 4 byte boundary.
+- * After that trim the entire buffer down to only
+- * contain the valid frame data excluding the device
+- * descriptor.
+ */
+ hdr = (struct ieee80211_hdr *)entry->skb->data;
+ header_size =
+@@ -276,6 +299,16 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
+ skb_push(entry->skb, 2);
+ memmove(entry->skb->data, entry->skb->data + 2, skb->len - 2);
+ }
++
++ /*
++ * Trim the entire buffer down to only contain the valid frame data
++ * excluding the device descriptor. The position of the descriptor
++ * varies. This means that we should check where the descriptor is
++ * and decide if we need to pull the data pointer to exclude the
++ * device descriptor.
++ */
++ if (skbdesc->data > skbdesc->desc)
++ skb_pull(entry->skb, skbdesc->desc_len);
+ skb_trim(entry->skb, desc.size);
- /**
-- * @brief Check if the device can be open and wait if necessary.
-- *
-- * @param dev A pointer to net_device structure
-- * @return 0
-- *
-- * For USB adapter, on some systems the device open handler will be
-- * called before FW ready. Use the following flag check and wait
-- * function to work around the issue.
-- *
-- */
--static int pre_open_check(struct net_device *dev)
+ /*
+@@ -303,43 +336,6 @@ skip_entry:
+ /*
+ * Radio handlers
+ */
+-void rt2x00usb_enable_radio(struct rt2x00_dev *rt2x00dev)
-{
-- wlan_private *priv = (wlan_private *) dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-- int i = 0;
+- struct usb_device *usb_dev =
+- interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
+- struct data_ring *ring;
+- struct data_entry *entry;
+- unsigned int i;
-
-- while (!adapter->fw_ready && i < 20) {
-- i++;
-- msleep_interruptible(100);
-- }
-- if (!adapter->fw_ready) {
-- lbs_pr_err("firmware not ready\n");
-- return -1;
+- /*
+- * Initialize the TX rings
+- */
+- txringall_for_each(rt2x00dev, ring) {
+- for (i = 0; i < ring->stats.limit; i++)
+- ring->entry[i].flags = 0;
+-
+- rt2x00_ring_index_clear(ring);
- }
-
-- return 0;
+- /*
+- * Initialize and start the RX ring.
+- */
+- rt2x00_ring_index_clear(rt2x00dev->rx);
+-
+- for (i = 0; i < rt2x00dev->rx->stats.limit; i++) {
+- entry = &rt2x00dev->rx->entry[i];
+-
+- usb_fill_bulk_urb(entry->priv, usb_dev,
+- usb_rcvbulkpipe(usb_dev, 1),
+- entry->skb->data, entry->skb->len,
+- rt2x00usb_interrupt_rxdone, entry);
+-
+- __set_bit(ENTRY_OWNER_NIC, &entry->flags);
+- usb_submit_urb(entry->priv, GFP_ATOMIC);
+- }
-}
+-EXPORT_SYMBOL_GPL(rt2x00usb_enable_radio);
-
--/**
-- * @brief This function opens the device
-+ * @brief This function opens the ethX or mshX interface
- *
- * @param dev A pointer to net_device structure
-- * @return 0
-+ * @return 0 or -EBUSY if monitor mode active
+ void rt2x00usb_disable_radio(struct rt2x00_dev *rt2x00dev)
+ {
+ struct data_ring *ring;
+@@ -361,6 +357,29 @@ EXPORT_SYMBOL_GPL(rt2x00usb_disable_radio);
+ /*
+ * Device initialization handlers.
*/
--static int libertas_dev_open(struct net_device *dev)
-+static int lbs_dev_open(struct net_device *dev)
++void rt2x00usb_init_rxentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
++{
++ struct usb_device *usb_dev =
++ interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
++
++ usb_fill_bulk_urb(entry->priv, usb_dev,
++ usb_rcvbulkpipe(usb_dev, 1),
++ entry->skb->data, entry->skb->len,
++ rt2x00usb_interrupt_rxdone, entry);
++
++ __set_bit(ENTRY_OWNER_NIC, &entry->flags);
++ usb_submit_urb(entry->priv, GFP_ATOMIC);
++}
++EXPORT_SYMBOL_GPL(rt2x00usb_init_rxentry);
++
++void rt2x00usb_init_txentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
++{
++ entry->flags = 0;
++}
++EXPORT_SYMBOL_GPL(rt2x00usb_init_txentry);
++
+ static int rt2x00usb_alloc_urb(struct rt2x00_dev *rt2x00dev,
+ struct data_ring *ring)
{
-- wlan_private *priv = (wlan_private *) dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv ;
-+ int ret = 0;
+@@ -400,7 +419,7 @@ int rt2x00usb_initialize(struct rt2x00_dev *rt2x00dev)
+ struct sk_buff *skb;
+ unsigned int entry_size;
+ unsigned int i;
+- int status;
++ int uninitialized_var(status);
- lbs_deb_enter(LBS_DEB_NET);
+ /*
+ * Allocate DMA
+@@ -507,6 +526,7 @@ int rt2x00usb_probe(struct usb_interface *usb_intf,
+ rt2x00dev->dev = usb_intf;
+ rt2x00dev->ops = ops;
+ rt2x00dev->hw = hw;
++ mutex_init(&rt2x00dev->usb_cache_mutex);
-- priv->open = 1;
-+ spin_lock_irq(&priv->driver_lock);
+ rt2x00dev->usb_maxpacket =
+ usb_maxpacket(usb_dev, usb_sndbulkpipe(usb_dev, 1), 1);
+diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.h b/drivers/net/wireless/rt2x00/rt2x00usb.h
+index 2681abe..e40df40 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00usb.h
++++ b/drivers/net/wireless/rt2x00/rt2x00usb.h
+@@ -91,7 +91,7 @@
+ * a buffer allocated by kmalloc. Failure to do so can lead
+ * to unexpected behavior depending on the architecture.
+ */
+-int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
++int rt2x00usb_vendor_request(struct rt2x00_dev *rt2x00dev,
+ const u8 request, const u8 requesttype,
+ const u16 offset, const u16 value,
+ void *buffer, const u16 buffer_length,
+@@ -107,18 +107,25 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
+ * kmalloc. Hence the reason for using a previously allocated cache
+ * which has been allocated properly.
+ */
+-int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
++int rt2x00usb_vendor_request_buff(struct rt2x00_dev *rt2x00dev,
+ const u8 request, const u8 requesttype,
+ const u16 offset, void *buffer,
+ const u16 buffer_length, const int timeout);
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- netif_carrier_on(priv->dev);
-- if (priv->mesh_dev)
-- netif_carrier_on(priv->mesh_dev);
-- } else {
-- netif_carrier_off(priv->dev);
-- if (priv->mesh_dev)
-- netif_carrier_off(priv->mesh_dev);
-+ if (priv->monitormode != LBS_MONITOR_OFF) {
-+ ret = -EBUSY;
-+ goto out;
- }
+ /*
++ * A version of rt2x00usb_vendor_request_buff which must be called
++ * if the usb_cache_mutex is already held. */
++int rt2x00usb_vendor_req_buff_lock(struct rt2x00_dev *rt2x00dev,
++ const u8 request, const u8 requesttype,
++ const u16 offset, void *buffer,
++ const u16 buffer_length, const int timeout);
++
++/*
+ * Simple wrapper around rt2x00usb_vendor_request to write a single
+ * command to the device. Since we don't use the buffer argument we
+ * don't have to worry about kmalloc here.
+ */
+-static inline int rt2x00usb_vendor_request_sw(const struct rt2x00_dev
+- *rt2x00dev,
++static inline int rt2x00usb_vendor_request_sw(struct rt2x00_dev *rt2x00dev,
+ const u8 request,
+ const u16 offset,
+ const u16 value,
+@@ -134,8 +141,8 @@ static inline int rt2x00usb_vendor_request_sw(const struct rt2x00_dev
+ * from the device. Note that the eeprom argument _must_ be allocated using
+ * kmalloc for correct handling inside the kernel USB layer.
+ */
+-static inline int rt2x00usb_eeprom_read(const struct rt2x00_dev *rt2x00dev,
+- __le16 *eeprom, const u16 lenght)
++static inline int rt2x00usb_eeprom_read(struct rt2x00_dev *rt2x00dev,
++ __le16 *eeprom, const u16 lenght)
+ {
+ int timeout = REGISTER_TIMEOUT * (lenght / sizeof(u16));
-- lbs_deb_leave(LBS_DEB_NET);
-- return 0;
--}
--/**
-- * @brief This function opens the mshX interface
-- *
-- * @param dev A pointer to net_device structure
-- * @return 0
-- */
--static int libertas_mesh_open(struct net_device *dev)
--{
-- wlan_private *priv = (wlan_private *) dev->priv ;
--
-- if (pre_open_check(dev) == -1)
-- return -1;
-- priv->mesh_open = 1 ;
-- netif_wake_queue(priv->mesh_dev);
-- if (priv->infra_open == 0)
-- return libertas_dev_open(priv->dev) ;
-- return 0;
--}
--
--/**
-- * @brief This function opens the ethX interface
-- *
-- * @param dev A pointer to net_device structure
-- * @return 0
-- */
--static int libertas_open(struct net_device *dev)
--{
-- wlan_private *priv = (wlan_private *) dev->priv ;
--
-- if(pre_open_check(dev) == -1)
-- return -1;
-- priv->infra_open = 1 ;
-- netif_wake_queue(priv->dev);
-- if (priv->open == 0)
-- return libertas_dev_open(priv->dev) ;
-- return 0;
--}
--
--static int libertas_dev_close(struct net_device *dev)
--{
-- wlan_private *priv = dev->priv;
-+ if (dev == priv->mesh_dev) {
-+ priv->mesh_open = 1;
-+ priv->mesh_connect_status = LBS_CONNECTED;
-+ netif_carrier_on(dev);
-+ } else {
-+ priv->infra_open = 1;
+@@ -147,7 +154,6 @@ static inline int rt2x00usb_eeprom_read(const struct rt2x00_dev *rt2x00dev,
+ /*
+ * Radio handlers
+ */
+-void rt2x00usb_enable_radio(struct rt2x00_dev *rt2x00dev);
+ void rt2x00usb_disable_radio(struct rt2x00_dev *rt2x00dev);
-- lbs_deb_enter(LBS_DEB_NET);
-+ if (priv->connect_status == LBS_CONNECTED)
-+ netif_carrier_on(dev);
-+ else
-+ netif_carrier_off(dev);
-+ }
+ /*
+@@ -160,6 +166,10 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
+ /*
+ * Device initialization handlers.
+ */
++void rt2x00usb_init_rxentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry);
++void rt2x00usb_init_txentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry);
+ int rt2x00usb_initialize(struct rt2x00_dev *rt2x00dev);
+ void rt2x00usb_uninitialize(struct rt2x00_dev *rt2x00dev);
-- netif_carrier_off(priv->dev);
-- priv->open = 0;
-+ if (!priv->tx_pending_len)
-+ netif_wake_queue(dev);
-+ out:
+diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c
+index ecae968..ab52f22 100644
+--- a/drivers/net/wireless/rt2x00/rt61pci.c
++++ b/drivers/net/wireless/rt2x00/rt61pci.c
+@@ -24,11 +24,6 @@
+ Supported chipsets: RT2561, RT2561s, RT2661.
+ */
-- lbs_deb_leave(LBS_DEB_NET);
-- return 0;
-+ spin_unlock_irq(&priv->driver_lock);
-+ lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
-+ return ret;
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt61pci"
+-
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+@@ -52,7 +47,7 @@
+ * the access attempt is considered to have failed,
+ * and we will print an error.
+ */
+-static u32 rt61pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
++static u32 rt61pci_bbp_check(struct rt2x00_dev *rt2x00dev)
+ {
+ u32 reg;
+ unsigned int i;
+@@ -67,7 +62,7 @@ static u32 rt61pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
+ return reg;
}
- /**
-@@ -476,16 +425,23 @@ static int libertas_dev_close(struct net_device *dev)
- * @param dev A pointer to net_device structure
- * @return 0
- */
--static int libertas_mesh_close(struct net_device *dev)
-+static int lbs_mesh_stop(struct net_device *dev)
+-static void rt61pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_bbp_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u8 value)
{
-- wlan_private *priv = (wlan_private *) (dev->priv);
-+ struct lbs_private *priv = (struct lbs_private *) (dev->priv);
-+
-+ lbs_deb_enter(LBS_DEB_MESH);
-+ spin_lock_irq(&priv->driver_lock);
+ u32 reg;
+@@ -93,7 +88,7 @@ static void rt61pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
+ rt2x00pci_register_write(rt2x00dev, PHY_CSR3, reg);
+ }
- priv->mesh_open = 0;
-- netif_stop_queue(priv->mesh_dev);
-- if (priv->infra_open == 0)
-- return libertas_dev_close(dev);
-- else
-- return 0;
-+ priv->mesh_connect_status = LBS_DISCONNECTED;
-+
-+ netif_stop_queue(dev);
-+ netif_carrier_off(dev);
-+
-+ spin_unlock_irq(&priv->driver_lock);
-+
-+ lbs_deb_leave(LBS_DEB_MESH);
-+ return 0;
+-static void rt61pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_bbp_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u8 *value)
+ {
+ u32 reg;
+@@ -130,7 +125,7 @@ static void rt61pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ *value = rt2x00_get_field32(reg, PHY_CSR3_VALUE);
}
- /**
-@@ -494,134 +450,86 @@ static int libertas_mesh_close(struct net_device *dev)
- * @param dev A pointer to net_device structure
- * @return 0
- */
--static int libertas_close(struct net_device *dev)
--{
-- wlan_private *priv = (wlan_private *) dev->priv;
--
-- netif_stop_queue(dev);
-- priv->infra_open = 0;
-- if (priv->mesh_open == 0)
-- return libertas_dev_close(dev);
-- else
-- return 0;
--}
--
--
--static int libertas_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
-+static int lbs_eth_stop(struct net_device *dev)
+-static void rt61pci_rf_write(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u32 value)
{
-- int ret = 0;
-- wlan_private *priv = dev->priv;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+ u32 reg;
+@@ -160,7 +155,7 @@ rf_write:
+ rt2x00_rf_write(rt2x00dev, word, value);
+ }
- lbs_deb_enter(LBS_DEB_NET);
+-static void rt61pci_mcu_request(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_mcu_request(struct rt2x00_dev *rt2x00dev,
+ const u8 command, const u8 token,
+ const u8 arg0, const u8 arg1)
+ {
+@@ -220,13 +215,13 @@ static void rt61pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
-- if (priv->dnld_sent || priv->adapter->TxLockFlag) {
-- priv->stats.tx_dropped++;
-- goto done;
-- }
--
-- netif_stop_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_stop_queue(priv->mesh_dev);
-+ spin_lock_irq(&priv->driver_lock);
-+ priv->infra_open = 0;
-+ netif_stop_queue(dev);
-+ spin_unlock_irq(&priv->driver_lock);
+-static void rt61pci_read_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_read_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
+ }
-- if (libertas_process_tx(priv, skb) == 0)
-- dev->trans_start = jiffies;
--done:
-- lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
-- return ret;
-+ lbs_deb_leave(LBS_DEB_NET);
-+ return 0;
+-static void rt61pci_write_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt61pci_write_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
+ {
+ rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
+@@ -322,7 +317,8 @@ static void rt61pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
+ */
+ rt2x00pci_register_read(rt2x00dev, TXRX_CSR9, &reg);
+ rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 1);
+- rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 1);
++ rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE,
++ (tsf_sync == TSF_SYNC_BEACON));
+ rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+ rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, tsf_sync);
+ rt2x00pci_register_write(rt2x00dev, TXRX_CSR9, reg);
+@@ -411,8 +407,7 @@ static void rt61pci_config_txpower(struct rt2x00_dev *rt2x00dev,
}
--/**
-- * @brief Mark mesh packets and handover them to libertas_hard_start_xmit
-- *
-- */
--static int libertas_mesh_pre_start_xmit(struct sk_buff *skb,
-- struct net_device *dev)
-+static void lbs_tx_timeout(struct net_device *dev)
+ static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx,
+- const int antenna_rx)
++ struct antenna_setup *ant)
{
-- wlan_private *priv = dev->priv;
-- int ret;
+ u8 r3;
+ u8 r4;
+@@ -423,32 +418,39 @@ static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
+ rt61pci_bbp_read(rt2x00dev, 77, &r77);
+
+ rt2x00_set_field8(&r3, BBP_R3_SMART_MODE,
+- !rt2x00_rf(&rt2x00dev->chip, RF5225));
++ rt2x00_rf(&rt2x00dev->chip, RF5325));
+
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ /*
++ * Configure the RX antenna.
++ */
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
+- !!(rt2x00dev->curr_hwmode != HWMODE_A));
++ (rt2x00dev->curr_hwmode != HWMODE_A));
+ break;
+ case ANTENNA_A:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
-
-- lbs_deb_enter(LBS_DEB_MESH);
-- if(priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-- netif_stop_queue(dev);
-- return -EOPNOTSUPP;
-- }
+ if (rt2x00dev->curr_hwmode == HWMODE_A)
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ else
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitely. However we are nog going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
-
-- SET_MESH_FRAME(skb);
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+ if (rt2x00dev->curr_hwmode == HWMODE_A)
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
+ else
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ break;
+ }
-- ret = libertas_hard_start_xmit(skb, priv->dev);
-- lbs_deb_leave_args(LBS_DEB_MESH, "ret %d", ret);
-- return ret;
--}
-+ lbs_deb_enter(LBS_DEB_TX);
+@@ -458,8 +460,7 @@ static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
+ }
--/**
-- * @brief Mark non-mesh packets and handover them to libertas_hard_start_xmit
-- *
-- */
--static int libertas_pre_start_xmit(struct sk_buff *skb, struct net_device *dev)
--{
-- wlan_private *priv = dev->priv;
-- int ret;
-+ lbs_pr_err("tx watch dog timeout\n");
+ static void rt61pci_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx,
+- const int antenna_rx)
++ struct antenna_setup *ant)
+ {
+ u8 r3;
+ u8 r4;
+@@ -470,22 +471,31 @@ static void rt61pci_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
+ rt61pci_bbp_read(rt2x00dev, 77, &r77);
-- lbs_deb_enter(LBS_DEB_NET);
-+ dev->trans_start = jiffies;
+ rt2x00_set_field8(&r3, BBP_R3_SMART_MODE,
+- !rt2x00_rf(&rt2x00dev->chip, RF2527));
++ rt2x00_rf(&rt2x00dev->chip, RF2529));
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
+ !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags));
-- if(priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-- netif_stop_queue(dev);
-- return -EOPNOTSUPP;
-+ if (priv->currenttxskb) {
-+ priv->eventcause = 0x01000000;
-+ lbs_send_tx_feedback(priv);
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ /*
++ * Configure the RX antenna.
++ */
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
+ break;
+ case ANTENNA_A:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitely. However we are nog going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ break;
}
-+ /* XX: Shouldn't we also call into the hw-specific driver
-+ to kick it somehow? */
-+ lbs_host_to_card_done(priv);
-- UNSET_MESH_FRAME(skb);
-+ /* More often than not, this actually happens because the
-+ firmware has crapped itself -- rather than just a very
-+ busy medium. So send a harmless command, and if/when
-+ _that_ times out, we'll kick it in the head. */
-+ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
-+ 0, 0, NULL);
+@@ -501,23 +511,18 @@ static void rt61pci_config_antenna_2529_rx(struct rt2x00_dev *rt2x00dev,
-- ret = libertas_hard_start_xmit(skb, dev);
-- lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
-- return ret;
-+ lbs_deb_leave(LBS_DEB_TX);
+ rt2x00pci_register_read(rt2x00dev, MAC_CSR13, &reg);
+
+- if (p1 != 0xff) {
+- rt2x00_set_field32(&reg, MAC_CSR13_BIT4, !!p1);
+- rt2x00_set_field32(&reg, MAC_CSR13_BIT12, 0);
+- rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
+- }
+- if (p2 != 0xff) {
+- rt2x00_set_field32(&reg, MAC_CSR13_BIT3, !p2);
+- rt2x00_set_field32(&reg, MAC_CSR13_BIT11, 0);
+- rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
+- }
++ rt2x00_set_field32(&reg, MAC_CSR13_BIT4, p1);
++ rt2x00_set_field32(&reg, MAC_CSR13_BIT12, 0);
++
++ rt2x00_set_field32(&reg, MAC_CSR13_BIT3, !p2);
++ rt2x00_set_field32(&reg, MAC_CSR13_BIT11, 0);
++
++ rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
}
--static void libertas_tx_timeout(struct net_device *dev)
-+void lbs_host_to_card_done(struct lbs_private *priv)
+ static void rt61pci_config_antenna_2529(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx,
+- const int antenna_rx)
++ struct antenna_setup *ant)
{
-- wlan_private *priv = (wlan_private *) dev->priv;
-+ unsigned long flags;
-
-- lbs_deb_enter(LBS_DEB_TX);
-+ lbs_deb_enter(LBS_DEB_THREAD);
-
-- lbs_pr_err("tx watch dog timeout\n");
-+ spin_lock_irqsave(&priv->driver_lock, flags);
+- u16 eeprom;
+ u8 r3;
+ u8 r4;
+ u8 r77;
+@@ -525,70 +530,36 @@ static void rt61pci_config_antenna_2529(struct rt2x00_dev *rt2x00dev,
+ rt61pci_bbp_read(rt2x00dev, 3, &r3);
+ rt61pci_bbp_read(rt2x00dev, 4, &r4);
+ rt61pci_bbp_read(rt2x00dev, 77, &r77);
+- rt2x00_eeprom_read(rt2x00dev, EEPROM_NIC, &eeprom);
- priv->dnld_sent = DNLD_RES_RECEIVED;
-- dev->trans_start = jiffies;
+- rt2x00_set_field8(&r3, BBP_R3_SMART_MODE, 0);
+-
+- if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
+- rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
+- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 1);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
+- } else if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY)) {
+- if (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED) >= 2) {
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
+- rt61pci_bbp_write(rt2x00dev, 77, r77);
+- }
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
+- } else if (!rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
+- rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
+- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
+-
+- switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
+- case 0:
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
+- break;
+- case 1:
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 0);
+- break;
+- case 2:
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
+- break;
+- case 3:
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
+- break;
+- }
+- } else if (!rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
+- !rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
++ /* FIXME: Antenna selection for the rf 2529 is very confusing in the
++ * legacy driver. The code below should be ok for non-diversity setups.
++ */
-- if (priv->adapter->currenttxskb) {
-- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-- /* If we are here, we have not received feedback from
-- the previous packet. Assume TX_FAIL and move on. */
-- priv->adapter->eventcause = 0x01000000;
-- libertas_send_tx_feedback(priv);
-- } else
-- wake_up_interruptible(&priv->waitq);
-- } else if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
-- netif_wake_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_wake_queue(priv->mesh_dev);
-- }
-+ /* Wake main thread if commands are pending */
-+ if (!priv->cur_cmd || priv->tx_pending_len > 0)
-+ wake_up_interruptible(&priv->waitq);
+- switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
+- case 0:
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
+- rt61pci_bbp_write(rt2x00dev, 77, r77);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
+- break;
+- case 1:
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
+- rt61pci_bbp_write(rt2x00dev, 77, r77);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 0);
+- break;
+- case 2:
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
+- rt61pci_bbp_write(rt2x00dev, 77, r77);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
+- break;
+- case 3:
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
+- rt61pci_bbp_write(rt2x00dev, 77, r77);
+- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
+- break;
+- }
++ /*
++ * Configure the RX antenna.
++ */
++ switch (ant->rx) {
++ case ANTENNA_A:
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
++ rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
++ break;
++ case ANTENNA_SW_DIVERSITY:
++ case ANTENNA_HW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitely. However we are nog going to bug about this.
++ * Instead, just default to antenna B.
++ */
++ case ANTENNA_B:
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
++ rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
++ break;
+ }
-- lbs_deb_leave(LBS_DEB_TX);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ lbs_deb_leave(LBS_DEB_THREAD);
++ rt61pci_bbp_write(rt2x00dev, 77, r77);
+ rt61pci_bbp_write(rt2x00dev, 3, r3);
+ rt61pci_bbp_write(rt2x00dev, 4, r4);
}
-+EXPORT_SYMBOL_GPL(lbs_host_to_card_done);
+@@ -625,46 +596,44 @@ static const struct antenna_sel antenna_sel_bg[] = {
+ };
- /**
- * @brief This function returns the network statistics
- *
-- * @param dev A pointer to wlan_private structure
-+ * @param dev A pointer to struct lbs_private structure
- * @return A pointer to net_device_stats structure
- */
--static struct net_device_stats *libertas_get_stats(struct net_device *dev)
-+static struct net_device_stats *lbs_get_stats(struct net_device *dev)
+ static void rt61pci_config_antenna(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx, const int antenna_rx)
++ struct antenna_setup *ant)
{
-- wlan_private *priv = (wlan_private *) dev->priv;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
+ const struct antenna_sel *sel;
+ unsigned int lna;
+ unsigned int i;
+ u32 reg;
-+ lbs_deb_enter(LBS_DEB_NET);
- return &priv->stats;
+- rt2x00pci_register_read(rt2x00dev, PHY_CSR0, &reg);
+-
+ if (rt2x00dev->curr_hwmode == HWMODE_A) {
+ sel = antenna_sel_a;
+ lna = test_bit(CONFIG_EXTERNAL_LNA_A, &rt2x00dev->flags);
+-
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 0);
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 1);
+ } else {
+ sel = antenna_sel_bg;
+ lna = test_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
+-
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 1);
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 0);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(antenna_sel_a); i++)
+ rt61pci_bbp_write(rt2x00dev, sel[i].word, sel[i].value[lna]);
+
++ rt2x00pci_register_read(rt2x00dev, PHY_CSR0, &reg);
++
++ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG,
++ (rt2x00dev->curr_hwmode == HWMODE_B ||
++ rt2x00dev->curr_hwmode == HWMODE_G));
++ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A,
++ (rt2x00dev->curr_hwmode == HWMODE_A));
++
+ rt2x00pci_register_write(rt2x00dev, PHY_CSR0, reg);
+
+ if (rt2x00_rf(&rt2x00dev->chip, RF5225) ||
+ rt2x00_rf(&rt2x00dev->chip, RF5325))
+- rt61pci_config_antenna_5x(rt2x00dev, antenna_tx, antenna_rx);
++ rt61pci_config_antenna_5x(rt2x00dev, ant);
+ else if (rt2x00_rf(&rt2x00dev->chip, RF2527))
+- rt61pci_config_antenna_2x(rt2x00dev, antenna_tx, antenna_rx);
++ rt61pci_config_antenna_2x(rt2x00dev, ant);
+ else if (rt2x00_rf(&rt2x00dev->chip, RF2529)) {
+ if (test_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags))
+- rt61pci_config_antenna_2x(rt2x00dev, antenna_tx,
+- antenna_rx);
++ rt61pci_config_antenna_2x(rt2x00dev, ant);
+ else
+- rt61pci_config_antenna_2529(rt2x00dev, antenna_tx,
+- antenna_rx);
++ rt61pci_config_antenna_2529(rt2x00dev, ant);
+ }
}
--static int libertas_set_mac_address(struct net_device *dev, void *addr)
-+static int lbs_set_mac_address(struct net_device *dev, void *addr)
+@@ -709,8 +678,7 @@ static void rt61pci_config(struct rt2x00_dev *rt2x00dev,
+ if ((flags & CONFIG_UPDATE_TXPOWER) && !(flags & CONFIG_UPDATE_CHANNEL))
+ rt61pci_config_txpower(rt2x00dev, libconf->conf->power_level);
+ if (flags & CONFIG_UPDATE_ANTENNA)
+- rt61pci_config_antenna(rt2x00dev, libconf->conf->antenna_sel_tx,
+- libconf->conf->antenna_sel_rx);
++ rt61pci_config_antenna(rt2x00dev, &libconf->ant);
+ if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
+ rt61pci_config_duration(rt2x00dev, libconf);
+ }
+@@ -721,7 +689,6 @@ static void rt61pci_config(struct rt2x00_dev *rt2x00dev,
+ static void rt61pci_enable_led(struct rt2x00_dev *rt2x00dev)
{
- int ret = 0;
-- wlan_private *priv = (wlan_private *) dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = (struct lbs_private *) dev->priv;
- struct sockaddr *phwaddr = addr;
-
- lbs_deb_enter(LBS_DEB_NET);
-@@ -629,15 +537,15 @@ static int libertas_set_mac_address(struct net_device *dev, void *addr)
- /* In case it was called from the mesh device */
- dev = priv->dev ;
+ u32 reg;
+- u16 led_reg;
+ u8 arg0;
+ u8 arg1;
-- memset(adapter->current_addr, 0, ETH_ALEN);
-+ memset(priv->current_addr, 0, ETH_ALEN);
+@@ -730,15 +697,14 @@ static void rt61pci_enable_led(struct rt2x00_dev *rt2x00dev)
+ rt2x00_set_field32(&reg, MAC_CSR14_OFF_PERIOD, 30);
+ rt2x00pci_register_write(rt2x00dev, MAC_CSR14, reg);
- /* dev->dev_addr is 8 bytes */
- lbs_deb_hex(LBS_DEB_NET, "dev->dev_addr", dev->dev_addr, ETH_ALEN);
+- led_reg = rt2x00dev->led_reg;
+- rt2x00_set_field16(&led_reg, MCU_LEDCS_RADIO_STATUS, 1);
+- if (rt2x00dev->rx_status.phymode == MODE_IEEE80211A)
+- rt2x00_set_field16(&led_reg, MCU_LEDCS_LINK_A_STATUS, 1);
+- else
+- rt2x00_set_field16(&led_reg, MCU_LEDCS_LINK_BG_STATUS, 1);
++ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_RADIO_STATUS, 1);
++ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_A_STATUS,
++ (rt2x00dev->rx_status.phymode == MODE_IEEE80211A));
++ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_BG_STATUS,
++ (rt2x00dev->rx_status.phymode != MODE_IEEE80211A));
- lbs_deb_hex(LBS_DEB_NET, "addr", phwaddr->sa_data, ETH_ALEN);
-- memcpy(adapter->current_addr, phwaddr->sa_data, ETH_ALEN);
-+ memcpy(priv->current_addr, phwaddr->sa_data, ETH_ALEN);
+- arg0 = led_reg & 0xff;
+- arg1 = (led_reg >> 8) & 0xff;
++ arg0 = rt2x00dev->led_reg & 0xff;
++ arg1 = (rt2x00dev->led_reg >> 8) & 0xff;
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_MAC_ADDRESS,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_MAC_ADDRESS,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP, 0, NULL);
+ rt61pci_mcu_request(rt2x00dev, MCU_LED, 0xff, arg0, arg1);
+ }
+@@ -792,7 +758,8 @@ static void rt61pci_activity_led(struct rt2x00_dev *rt2x00dev, int rssi)
+ /*
+ * Link tuning
+ */
+-static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev)
++static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual)
+ {
+ u32 reg;
-@@ -647,89 +555,86 @@ static int libertas_set_mac_address(struct net_device *dev, void *addr)
- goto done;
- }
+@@ -800,14 +767,13 @@ static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev)
+ * Update FCS error count from register.
+ */
+ rt2x00pci_register_read(rt2x00dev, STA_CSR0, &reg);
+- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
++ qual->rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
-- lbs_deb_hex(LBS_DEB_NET, "adapter->macaddr", adapter->current_addr, ETH_ALEN);
-- memcpy(dev->dev_addr, adapter->current_addr, ETH_ALEN);
-+ lbs_deb_hex(LBS_DEB_NET, "priv->macaddr", priv->current_addr, ETH_ALEN);
-+ memcpy(dev->dev_addr, priv->current_addr, ETH_ALEN);
- if (priv->mesh_dev)
-- memcpy(priv->mesh_dev->dev_addr, adapter->current_addr, ETH_ALEN);
-+ memcpy(priv->mesh_dev->dev_addr, priv->current_addr, ETH_ALEN);
+ /*
+ * Update False CCA count from register.
+ */
+ rt2x00pci_register_read(rt2x00dev, STA_CSR1, &reg);
+- rt2x00dev->link.false_cca =
+- rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
++ qual->false_cca = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
+ }
- done:
- lbs_deb_leave_args(LBS_DEB_NET, "ret %d", ret);
- return ret;
+ static void rt61pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
+@@ -904,11 +870,11 @@ static void rt61pci_link_tuner(struct rt2x00_dev *rt2x00dev)
+ * r17 does not yet exceed upper limit, continue and base
+ * the r17 tuning on the false CCA count.
+ */
+- if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
++ if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
+ if (++r17 > up_bound)
+ r17 = up_bound;
+ rt61pci_bbp_write(rt2x00dev, 17, r17);
+- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
++ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
+ if (--r17 < low_bound)
+ r17 = low_bound;
+ rt61pci_bbp_write(rt2x00dev, 17, r17);
+@@ -1023,64 +989,46 @@ static int rt61pci_load_firmware(struct rt2x00_dev *rt2x00dev, void *data,
+ return 0;
}
--static int libertas_copy_multicast_address(wlan_adapter * adapter,
-+static int lbs_copy_multicast_address(struct lbs_private *priv,
- struct net_device *dev)
+-static void rt61pci_init_rxring(struct rt2x00_dev *rt2x00dev)
++static void rt61pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
{
- int i = 0;
- struct dev_mc_list *mcptr = dev->mc_list;
+- struct data_ring *ring = rt2x00dev->rx;
+- struct data_desc *rxd;
+- unsigned int i;
++ __le32 *rxd = entry->priv;
+ u32 word;
- for (i = 0; i < dev->mc_count; i++) {
-- memcpy(&adapter->multicastlist[i], mcptr->dmi_addr, ETH_ALEN);
-+ memcpy(&priv->multicastlist[i], mcptr->dmi_addr, ETH_ALEN);
- mcptr = mcptr->next;
- }
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
-
- return i;
+- for (i = 0; i < ring->stats.limit; i++) {
+- rxd = ring->entry[i].priv;
+-
+- rt2x00_desc_read(rxd, 5, &word);
+- rt2x00_set_field32(&word, RXD_W5_BUFFER_PHYSICAL_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(rxd, 5, word);
++ rt2x00_desc_read(rxd, 5, &word);
++ rt2x00_set_field32(&word, RXD_W5_BUFFER_PHYSICAL_ADDRESS,
++ entry->data_dma);
++ rt2x00_desc_write(rxd, 5, word);
+
+- rt2x00_desc_read(rxd, 0, &word);
+- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
+- rt2x00_desc_write(rxd, 0, word);
+- }
-
+- rt2x00_ring_index_clear(rt2x00dev->rx);
++ rt2x00_desc_read(rxd, 0, &word);
++ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
++ rt2x00_desc_write(rxd, 0, word);
}
--static void libertas_set_multicast_list(struct net_device *dev)
-+static void lbs_set_multicast_list(struct net_device *dev)
+-static void rt61pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
++static void rt61pci_init_txentry(struct rt2x00_dev *rt2x00dev,
++ struct data_entry *entry)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int oldpacketfilter;
- DECLARE_MAC_BUF(mac);
-
- lbs_deb_enter(LBS_DEB_NET);
+- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
+- struct data_desc *txd;
+- unsigned int i;
++ __le32 *txd = entry->priv;
+ u32 word;
-- oldpacketfilter = adapter->currentpacketfilter;
-+ oldpacketfilter = priv->currentpacketfilter;
+- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
+-
+- for (i = 0; i < ring->stats.limit; i++) {
+- txd = ring->entry[i].priv;
+-
+- rt2x00_desc_read(txd, 1, &word);
+- rt2x00_set_field32(&word, TXD_W1_BUFFER_COUNT, 1);
+- rt2x00_desc_write(txd, 1, word);
+-
+- rt2x00_desc_read(txd, 5, &word);
+- rt2x00_set_field32(&word, TXD_W5_PID_TYPE, queue);
+- rt2x00_set_field32(&word, TXD_W5_PID_SUBTYPE, i);
+- rt2x00_desc_write(txd, 5, word);
++ rt2x00_desc_read(txd, 1, &word);
++ rt2x00_set_field32(&word, TXD_W1_BUFFER_COUNT, 1);
++ rt2x00_desc_write(txd, 1, word);
- if (dev->flags & IFF_PROMISC) {
- lbs_deb_net("enable promiscuous mode\n");
-- adapter->currentpacketfilter |=
-+ priv->currentpacketfilter |=
- CMD_ACT_MAC_PROMISCUOUS_ENABLE;
-- adapter->currentpacketfilter &=
-+ priv->currentpacketfilter &=
- ~(CMD_ACT_MAC_ALL_MULTICAST_ENABLE |
- CMD_ACT_MAC_MULTICAST_ENABLE);
- } else {
- /* Multicast */
-- adapter->currentpacketfilter &=
-+ priv->currentpacketfilter &=
- ~CMD_ACT_MAC_PROMISCUOUS_ENABLE;
+- rt2x00_desc_read(txd, 6, &word);
+- rt2x00_set_field32(&word, TXD_W6_BUFFER_PHYSICAL_ADDRESS,
+- ring->entry[i].data_dma);
+- rt2x00_desc_write(txd, 6, word);
++ rt2x00_desc_read(txd, 5, &word);
++ rt2x00_set_field32(&word, TXD_W5_PID_TYPE, entry->ring->queue_idx);
++ rt2x00_set_field32(&word, TXD_W5_PID_SUBTYPE, entry->entry_idx);
++ rt2x00_desc_write(txd, 5, word);
- if (dev->flags & IFF_ALLMULTI || dev->mc_count >
- MRVDRV_MAX_MULTICAST_LIST_SIZE) {
- lbs_deb_net( "enabling all multicast\n");
-- adapter->currentpacketfilter |=
-+ priv->currentpacketfilter |=
- CMD_ACT_MAC_ALL_MULTICAST_ENABLE;
-- adapter->currentpacketfilter &=
-+ priv->currentpacketfilter &=
- ~CMD_ACT_MAC_MULTICAST_ENABLE;
- } else {
-- adapter->currentpacketfilter &=
-+ priv->currentpacketfilter &=
- ~CMD_ACT_MAC_ALL_MULTICAST_ENABLE;
+- rt2x00_desc_read(txd, 0, &word);
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
+- rt2x00_desc_write(txd, 0, word);
+- }
++ rt2x00_desc_read(txd, 6, &word);
++ rt2x00_set_field32(&word, TXD_W6_BUFFER_PHYSICAL_ADDRESS,
++ entry->data_dma);
++ rt2x00_desc_write(txd, 6, word);
- if (!dev->mc_count) {
- lbs_deb_net("no multicast addresses, "
- "disabling multicast\n");
-- adapter->currentpacketfilter &=
-+ priv->currentpacketfilter &=
- ~CMD_ACT_MAC_MULTICAST_ENABLE;
- } else {
- int i;
+- rt2x00_ring_index_clear(ring);
++ rt2x00_desc_read(txd, 0, &word);
++ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
++ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
++ rt2x00_desc_write(txd, 0, word);
+ }
-- adapter->currentpacketfilter |=
-+ priv->currentpacketfilter |=
- CMD_ACT_MAC_MULTICAST_ENABLE;
+ static int rt61pci_init_rings(struct rt2x00_dev *rt2x00dev)
+@@ -1088,16 +1036,6 @@ static int rt61pci_init_rings(struct rt2x00_dev *rt2x00dev)
+ u32 reg;
-- adapter->nr_of_multicastmacaddr =
-- libertas_copy_multicast_address(adapter, dev);
-+ priv->nr_of_multicastmacaddr =
-+ lbs_copy_multicast_address(priv, dev);
+ /*
+- * Initialize rings.
+- */
+- rt61pci_init_rxring(rt2x00dev);
+- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
+- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
+- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA2);
+- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA3);
+- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA4);
+-
+- /*
+ * Initialize registers.
+ */
+ rt2x00pci_register_read(rt2x00dev, TX_RING_CSR0, &reg);
+@@ -1565,12 +1503,12 @@ static int rt61pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ * TX descriptor initialization
+ */
+ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
+- struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+- struct ieee80211_tx_control *control)
++ struct sk_buff *skb,
++ struct txdata_entry_desc *desc,
++ struct ieee80211_tx_control *control)
+ {
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ __le32 *txd = skbdesc->desc;
+ u32 word;
- lbs_deb_net("multicast addresses: %d\n",
- dev->mc_count);
+ /*
+@@ -1599,7 +1537,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_desc_write(txd, 5, word);
- for (i = 0; i < dev->mc_count; i++) {
-- lbs_deb_net("Multicast address %d:%s\n",
-+ lbs_deb_net("Multicast address %d: %s\n",
- i, print_mac(mac,
-- adapter->multicastlist[i]));
-+ priv->multicastlist[i]));
- }
- /* send multicast addresses to firmware */
-- libertas_prepare_and_send_command(priv,
-+ lbs_prepare_and_send_command(priv,
- CMD_MAC_MULTICAST_ADR,
- CMD_ACT_SET, 0, 0,
- NULL);
-@@ -737,26 +642,25 @@ static void libertas_set_multicast_list(struct net_device *dev)
- }
- }
+ rt2x00_desc_read(txd, 11, &word);
+- rt2x00_set_field32(&word, TXD_W11_BUFFER_LENGTH0, length);
++ rt2x00_set_field32(&word, TXD_W11_BUFFER_LENGTH0, skbdesc->data_len);
+ rt2x00_desc_write(txd, 11, word);
-- if (adapter->currentpacketfilter != oldpacketfilter) {
-- libertas_set_mac_packet_filter(priv);
-+ if (priv->currentpacketfilter != oldpacketfilter) {
-+ lbs_set_mac_packet_filter(priv);
+ rt2x00_desc_read(txd, 0, &word);
+@@ -1608,7 +1546,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
+ test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_ACK,
+- !(control->flags & IEEE80211_TXCTL_NO_ACK));
++ test_bit(ENTRY_TXD_ACK, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
+ test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_OFDM,
+@@ -1618,7 +1556,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ !!(control->flags &
+ IEEE80211_TXCTL_LONG_RETRY_LIMIT));
+ rt2x00_set_field32(&word, TXD_W0_TKIP_MIC, 0);
+- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
++ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
+ rt2x00_set_field32(&word, TXD_W0_BURST,
+ test_bit(ENTRY_TXD_BURST, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
+@@ -1649,16 +1587,16 @@ static void rt61pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
}
- lbs_deb_leave(LBS_DEB_NET);
+ rt2x00pci_register_read(rt2x00dev, TX_CNTL_CSR, &reg);
+- if (queue == IEEE80211_TX_QUEUE_DATA0)
+- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC0, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA1)
+- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC1, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA2)
+- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC2, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA3)
+- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC3, 1);
+- else if (queue == IEEE80211_TX_QUEUE_DATA4)
+- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_MGMT, 1);
++ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC0,
++ (queue == IEEE80211_TX_QUEUE_DATA0));
++ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC1,
++ (queue == IEEE80211_TX_QUEUE_DATA1));
++ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC2,
++ (queue == IEEE80211_TX_QUEUE_DATA2));
++ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC3,
++ (queue == IEEE80211_TX_QUEUE_DATA3));
++ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_MGMT,
++ (queue == IEEE80211_TX_QUEUE_DATA4));
+ rt2x00pci_register_write(rt2x00dev, TX_CNTL_CSR, reg);
}
- /**
-- * @brief This function handles the major jobs in the WLAN driver.
-+ * @brief This function handles the major jobs in the LBS driver.
- * It handles all events generated by firmware, RX data received
- * from firmware and TX data sent from kernel.
- *
-- * @param data A pointer to wlan_thread structure
-+ * @param data A pointer to lbs_thread structure
- * @return 0
- */
--static int libertas_thread(void *data)
-+static int lbs_thread(void *data)
+@@ -1709,7 +1647,7 @@ static int rt61pci_agc_to_rssi(struct rt2x00_dev *rt2x00dev, int rxd_w1)
+ static void rt61pci_fill_rxdone(struct data_entry *entry,
+ struct rxdata_entry_desc *desc)
{
- struct net_device *dev = data;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- wait_queue_t wait;
- u8 ireg = 0;
+- struct data_desc *rxd = entry->priv;
++ __le32 *rxd = entry->priv;
+ u32 word0;
+ u32 word1;
-@@ -764,215 +668,291 @@ static int libertas_thread(void *data)
+@@ -1727,8 +1665,7 @@ static void rt61pci_fill_rxdone(struct data_entry *entry,
+ desc->rssi = rt61pci_agc_to_rssi(entry->ring->rt2x00dev, word1);
+ desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
+ desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
+-
+- return;
++ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
+ }
- init_waitqueue_entry(&wait, current);
+ /*
+@@ -1739,7 +1676,7 @@ static void rt61pci_txdone(struct rt2x00_dev *rt2x00dev)
+ struct data_ring *ring;
+ struct data_entry *entry;
+ struct data_entry *entry_done;
+- struct data_desc *txd;
++ __le32 *txd;
+ u32 word;
+ u32 reg;
+ u32 old_reg;
+@@ -1809,24 +1746,7 @@ static void rt61pci_txdone(struct rt2x00_dev *rt2x00dev)
+ tx_status = rt2x00_get_field32(reg, STA_CSR4_TX_RESULT);
+ retry = rt2x00_get_field32(reg, STA_CSR4_RETRY_COUNT);
-- set_freezable();
- for (;;) {
-- lbs_deb_thread( "main-thread 111: intcounter=%d "
-- "currenttxskb=%p dnld_sent=%d\n",
-- adapter->intcounter,
-- adapter->currenttxskb, priv->dnld_sent);
-+ int shouldsleep;
-+
-+ lbs_deb_thread( "main-thread 111: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
-+ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
+- rt2x00lib_txdone(entry, tx_status, retry);
+-
+- /*
+- * Make this entry available for reuse.
+- */
+- entry->flags = 0;
+- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
+- rt2x00_desc_write(txd, 0, word);
+- rt2x00_ring_index_done_inc(entry->ring);
+-
+- /*
+- * If the data ring was full before the txdone handler
+- * we must make sure the packet queue in the mac80211 stack
+- * is reenabled when the txdone handler has finished.
+- */
+- if (!rt2x00_ring_full(ring))
+- ieee80211_wake_queue(rt2x00dev->hw,
+- entry->tx_status.control.queue);
++ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
+ }
+ }
- add_wait_queue(&priv->waitq, &wait);
- set_current_state(TASK_INTERRUPTIBLE);
-- spin_lock_irq(&adapter->driver_lock);
-- if ((adapter->psstate == PS_STATE_SLEEP) ||
-- (!adapter->intcounter
-- && (priv->dnld_sent || adapter->cur_cmd ||
-- list_empty(&adapter->cmdpendingq)))) {
-- lbs_deb_thread(
-- "main-thread sleeping... Conn=%d IntC=%d PS_mode=%d PS_State=%d\n",
-- adapter->connect_status, adapter->intcounter,
-- adapter->psmode, adapter->psstate);
-- spin_unlock_irq(&adapter->driver_lock);
-+ spin_lock_irq(&priv->driver_lock);
-+
-+ if (kthread_should_stop())
-+ shouldsleep = 0; /* Bye */
-+ else if (priv->surpriseremoved)
-+ shouldsleep = 1; /* We need to wait until we're _told_ to die */
-+ else if (priv->psstate == PS_STATE_SLEEP)
-+ shouldsleep = 1; /* Sleep mode. Nothing we can do till it wakes */
-+ else if (priv->intcounter)
-+ shouldsleep = 0; /* Interrupt pending. Deal with it now */
-+ else if (priv->cmd_timed_out)
-+ shouldsleep = 0; /* Command timed out. Recover */
-+ else if (!priv->fw_ready)
-+ shouldsleep = 1; /* Firmware not ready. We're waiting for it */
-+ else if (priv->dnld_sent)
-+ shouldsleep = 1; /* Something is en route to the device already */
-+ else if (priv->tx_pending_len > 0)
-+ shouldsleep = 0; /* We've a packet to send */
-+ else if (priv->cur_cmd)
-+ shouldsleep = 1; /* Can't send a command; one already running */
-+ else if (!list_empty(&priv->cmdpendingq))
-+ shouldsleep = 0; /* We have a command to send */
-+ else
-+ shouldsleep = 1; /* No command */
+@@ -1920,8 +1840,10 @@ static int rt61pci_validate_eeprom(struct rt2x00_dev *rt2x00dev)
+ rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
+ if (word == 0xffff) {
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 2);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
++ ANTENNA_B);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
++ ANTENNA_B);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_FRAME_TYPE, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
+@@ -2025,11 +1947,17 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ }
+
+ /*
++ * Determine number of antenna's.
++ */
++ if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_NUM) == 2)
++ __set_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags);
+
-+ if (shouldsleep) {
-+ lbs_deb_thread("main-thread sleeping... Conn=%d IntC=%d PS_mode=%d PS_State=%d\n",
-+ priv->connect_status, priv->intcounter,
-+ priv->psmode, priv->psstate);
-+ spin_unlock_irq(&priv->driver_lock);
- schedule();
- } else
-- spin_unlock_irq(&adapter->driver_lock);
-+ spin_unlock_irq(&priv->driver_lock);
++ /*
+ * Identify default antenna configuration.
+ */
+- rt2x00dev->hw->conf.antenna_sel_tx =
++ rt2x00dev->default_ant.tx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
+- rt2x00dev->hw->conf.antenna_sel_rx =
++ rt2x00dev->default_ant.rx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
-- lbs_deb_thread(
-- "main-thread 222 (waking up): intcounter=%d currenttxskb=%p "
-- "dnld_sent=%d\n", adapter->intcounter,
-- adapter->currenttxskb, priv->dnld_sent);
-+ lbs_deb_thread("main-thread 222 (waking up): intcounter=%d currenttxskb=%p dnld_sent=%d\n",
-+ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
+ /*
+@@ -2039,12 +1967,6 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ __set_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags);
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&priv->waitq, &wait);
-- try_to_freeze();
--
-- lbs_deb_thread("main-thread 333: intcounter=%d currenttxskb=%p "
-- "dnld_sent=%d\n",
-- adapter->intcounter,
-- adapter->currenttxskb, priv->dnld_sent);
+ /*
+- * Determine number of antenna's.
+- */
+- if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_NUM) == 2)
+- __set_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags);
-
-- if (kthread_should_stop()
-- || adapter->surpriseremoved) {
-- lbs_deb_thread(
-- "main-thread: break from main thread: surpriseremoved=0x%x\n",
-- adapter->surpriseremoved);
-+
-+ lbs_deb_thread("main-thread 333: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
-+ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
-+
-+ if (kthread_should_stop()) {
-+ lbs_deb_thread("main-thread: break from main thread\n");
- break;
- }
+- /*
+ * Detect if this device has an hardware controlled radio.
+ */
+ #ifdef CONFIG_RT61PCI_RFKILL
+@@ -2072,6 +1994,38 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ __set_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
-+ if (priv->surpriseremoved) {
-+ lbs_deb_thread("adapter removed; waiting to die...\n");
-+ continue;
+ /*
++ * When working with a RF2529 chip without double antenna
++ * the antenna settings should be gathered from the NIC
++ * eeprom word.
++ */
++ if (rt2x00_rf(&rt2x00dev->chip, RF2529) &&
++ !test_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags)) {
++ switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
++ case 0:
++ rt2x00dev->default_ant.tx = ANTENNA_B;
++ rt2x00dev->default_ant.rx = ANTENNA_A;
++ break;
++ case 1:
++ rt2x00dev->default_ant.tx = ANTENNA_B;
++ rt2x00dev->default_ant.rx = ANTENNA_B;
++ break;
++ case 2:
++ rt2x00dev->default_ant.tx = ANTENNA_A;
++ rt2x00dev->default_ant.rx = ANTENNA_A;
++ break;
++ case 3:
++ rt2x00dev->default_ant.tx = ANTENNA_A;
++ rt2x00dev->default_ant.rx = ANTENNA_B;
++ break;
+ }
+
-+ spin_lock_irq(&priv->driver_lock);
-
-- spin_lock_irq(&adapter->driver_lock);
-- if (adapter->intcounter) {
-+ if (priv->intcounter) {
- u8 int_status;
-- adapter->intcounter = 0;
++ if (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY))
++ rt2x00dev->default_ant.tx = ANTENNA_SW_DIVERSITY;
++ if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY))
++ rt2x00dev->default_ant.rx = ANTENNA_SW_DIVERSITY;
++ }
+
-+ priv->intcounter = 0;
- int_status = priv->hw_get_int_status(priv, &ireg);
-
- if (int_status) {
-- lbs_deb_thread(
-- "main-thread: reading HOST_INT_STATUS_REG failed\n");
-- spin_unlock_irq(&adapter->driver_lock);
-+ lbs_deb_thread("main-thread: reading HOST_INT_STATUS_REG failed\n");
-+ spin_unlock_irq(&priv->driver_lock);
- continue;
- }
-- adapter->hisregcpy |= ireg;
-+ priv->hisregcpy |= ireg;
- }
++ /*
+ * Store led settings, for correct led behaviour.
+ * If the eeprom value is invalid,
+ * switch to default led mode.
+@@ -2325,7 +2279,6 @@ static void rt61pci_configure_filter(struct ieee80211_hw *hw,
+ struct dev_addr_list *mc_list)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct interface *intf = &rt2x00dev->interface;
+ u32 reg;
-- lbs_deb_thread("main-thread 444: intcounter=%d currenttxskb=%p "
-- "dnld_sent=%d\n",
-- adapter->intcounter,
-- adapter->currenttxskb, priv->dnld_sent);
-+ lbs_deb_thread("main-thread 444: intcounter=%d currenttxskb=%p dnld_sent=%d\n",
-+ priv->intcounter, priv->currenttxskb, priv->dnld_sent);
+ /*
+@@ -2344,22 +2297,19 @@ static void rt61pci_configure_filter(struct ieee80211_hw *hw,
+ * Apply some rules to the filters:
+ * - Some filters imply different filters to be set.
+ * - Some things we can't filter out at all.
+- * - Some filters are set based on interface type.
+ */
+ if (mc_count)
+ *total_flags |= FIF_ALLMULTI;
+ if (*total_flags & FIF_OTHER_BSS ||
+ *total_flags & FIF_PROMISC_IN_BSS)
+ *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
+- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
+- *total_flags |= FIF_PROMISC_IN_BSS;
- /* command response? */
-- if (adapter->hisregcpy & MRVDRV_CMD_UPLD_RDY) {
-+ if (priv->hisregcpy & MRVDRV_CMD_UPLD_RDY) {
- lbs_deb_thread("main-thread: cmd response ready\n");
+ /*
+ * Check if there is any work left for us.
+ */
+- if (intf->filter == *total_flags)
++ if (rt2x00dev->packet_filter == *total_flags)
+ return;
+- intf->filter = *total_flags;
++ rt2x00dev->packet_filter = *total_flags;
-- adapter->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
-- spin_unlock_irq(&adapter->driver_lock);
-- libertas_process_rx_command(priv);
-- spin_lock_irq(&adapter->driver_lock);
-+ priv->hisregcpy &= ~MRVDRV_CMD_UPLD_RDY;
-+ spin_unlock_irq(&priv->driver_lock);
-+ lbs_process_rx_command(priv);
-+ spin_lock_irq(&priv->driver_lock);
- }
+ /*
+ * Start configuration steps.
+@@ -2426,6 +2376,9 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
++ struct skb_desc *desc;
++ struct data_ring *ring;
++ struct data_entry *entry;
-+ if (priv->cmd_timed_out && priv->cur_cmd) {
-+ struct cmd_ctrl_node *cmdnode = priv->cur_cmd;
-+
-+ if (++priv->nr_retries > 10) {
-+ lbs_pr_info("Excessive timeouts submitting command %x\n",
-+ le16_to_cpu(cmdnode->cmdbuf->command));
-+ lbs_complete_command(priv, cmdnode, -ETIMEDOUT);
-+ priv->nr_retries = 0;
-+ } else {
-+ priv->cur_cmd = NULL;
-+ lbs_pr_info("requeueing command %x due to timeout (#%d)\n",
-+ le16_to_cpu(cmdnode->cmdbuf->command), priv->nr_retries);
-+
-+ /* Stick it back at the _top_ of the pending queue
-+ for immediate resubmission */
-+ list_add(&cmdnode->list, &priv->cmdpendingq);
-+ }
-+ }
-+ priv->cmd_timed_out = 0;
-+
- /* Any Card Event */
-- if (adapter->hisregcpy & MRVDRV_CARDEVENT) {
-+ if (priv->hisregcpy & MRVDRV_CARDEVENT) {
- lbs_deb_thread("main-thread: Card Event Activity\n");
+ /*
+ * Just in case the ieee80211 doesn't set this,
+@@ -2433,6 +2386,8 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ * initialization.
+ */
+ control->queue = IEEE80211_TX_QUEUE_BEACON;
++ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
++ entry = rt2x00_get_data_entry(ring);
-- adapter->hisregcpy &= ~MRVDRV_CARDEVENT;
-+ priv->hisregcpy &= ~MRVDRV_CARDEVENT;
+ /*
+ * We need to append the descriptor in front of the
+@@ -2446,15 +2401,23 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ }
- if (priv->hw_read_event_cause(priv)) {
-- lbs_pr_alert(
-- "main-thread: hw_read_event_cause failed\n");
-- spin_unlock_irq(&adapter->driver_lock);
-+ lbs_pr_alert("main-thread: hw_read_event_cause failed\n");
-+ spin_unlock_irq(&priv->driver_lock);
- continue;
- }
-- spin_unlock_irq(&adapter->driver_lock);
-- libertas_process_event(priv);
-+ spin_unlock_irq(&priv->driver_lock);
-+ lbs_process_event(priv);
- } else
-- spin_unlock_irq(&adapter->driver_lock);
-+ spin_unlock_irq(&priv->driver_lock);
+ /*
+- * First we create the beacon.
++ * Add the descriptor in front of the skb.
++ */
++ skb_push(skb, ring->desc_size);
++ memset(skb->data, 0, ring->desc_size);
+
-+ if (!priv->fw_ready)
-+ continue;
++ /*
++ * Fill in skb descriptor
+ */
+- skb_push(skb, TXD_DESC_SIZE);
+- memset(skb->data, 0, TXD_DESC_SIZE);
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len - ring->desc_size;
++ desc->desc = skb->data;
++ desc->data = skb->data + ring->desc_size;
++ desc->ring = ring;
++ desc->entry = entry;
- /* Check if we need to confirm Sleep Request received previously */
-- if (adapter->psstate == PS_STATE_PRE_SLEEP) {
-- if (!priv->dnld_sent && !adapter->cur_cmd) {
-- if (adapter->connect_status ==
-- LIBERTAS_CONNECTED) {
-- lbs_deb_thread(
-- "main_thread: PRE_SLEEP--intcounter=%d currenttxskb=%p "
-- "dnld_sent=%d cur_cmd=%p, confirm now\n",
-- adapter->intcounter,
-- adapter->currenttxskb,
-- priv->dnld_sent,
-- adapter->cur_cmd);
--
-- libertas_ps_confirm_sleep(priv,
-- (u16) adapter->psmode);
-- } else {
-- /* workaround for firmware sending
-- * deauth/linkloss event immediately
-- * after sleep request, remove this
-- * after firmware fixes it
-- */
-- adapter->psstate = PS_STATE_AWAKE;
-- lbs_pr_alert(
-- "main-thread: ignore PS_SleepConfirm in non-connected state\n");
-- }
-+ if (priv->psstate == PS_STATE_PRE_SLEEP &&
-+ !priv->dnld_sent && !priv->cur_cmd) {
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ lbs_deb_thread("main_thread: PRE_SLEEP--intcounter=%d currenttxskb=%p dnld_sent=%d cur_cmd=%p, confirm now\n",
-+ priv->intcounter, priv->currenttxskb, priv->dnld_sent, priv->cur_cmd);
-+
-+ lbs_ps_confirm_sleep(priv, (u16) priv->psmode);
-+ } else {
-+ /* workaround for firmware sending
-+ * deauth/linkloss event immediately
-+ * after sleep request; remove this
-+ * after firmware fixes it
-+ */
-+ priv->psstate = PS_STATE_AWAKE;
-+ lbs_pr_alert("main-thread: ignore PS_SleepConfirm in non-connected state\n");
- }
- }
+- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
+- (struct ieee80211_hdr *)(skb->data +
+- TXD_DESC_SIZE),
+- skb->len - TXD_DESC_SIZE, control);
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
- /* The PS state is changed during processing of Sleep Request
- * event above
- */
-- if ((priv->adapter->psstate == PS_STATE_SLEEP) ||
-- (priv->adapter->psstate == PS_STATE_PRE_SLEEP))
-+ if ((priv->psstate == PS_STATE_SLEEP) ||
-+ (priv->psstate == PS_STATE_PRE_SLEEP))
- continue;
+ /*
+ * Write entire beacon with descriptor to register,
+@@ -2478,7 +2441,7 @@ static const struct ieee80211_ops rt61pci_mac80211_ops = {
+ .configure_filter = rt61pci_configure_filter,
+ .get_stats = rt2x00mac_get_stats,
+ .set_retry_limit = rt61pci_set_retry_limit,
+- .erp_ie_changed = rt2x00mac_erp_ie_changed,
++ .bss_info_changed = rt2x00mac_bss_info_changed,
+ .conf_tx = rt2x00mac_conf_tx,
+ .get_tx_stats = rt2x00mac_get_tx_stats,
+ .get_tsf = rt61pci_get_tsf,
+@@ -2493,6 +2456,8 @@ static const struct rt2x00lib_ops rt61pci_rt2x00_ops = {
+ .load_firmware = rt61pci_load_firmware,
+ .initialize = rt2x00pci_initialize,
+ .uninitialize = rt2x00pci_uninitialize,
++ .init_rxentry = rt61pci_init_rxentry,
++ .init_txentry = rt61pci_init_txentry,
+ .set_device_state = rt61pci_set_device_state,
+ .rfkill_poll = rt61pci_rfkill_poll,
+ .link_stats = rt61pci_link_stats,
+@@ -2510,7 +2475,7 @@ static const struct rt2x00lib_ops rt61pci_rt2x00_ops = {
+ };
- /* Execute the next command */
-- if (!priv->dnld_sent && !priv->adapter->cur_cmd)
-- libertas_execute_next_command(priv);
-+ if (!priv->dnld_sent && !priv->cur_cmd)
-+ lbs_execute_next_command(priv);
+ static const struct rt2x00_ops rt61pci_ops = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .rxd_size = RXD_DESC_SIZE,
+ .txd_size = TXD_DESC_SIZE,
+ .eeprom_size = EEPROM_SIZE,
+@@ -2547,7 +2512,7 @@ MODULE_FIRMWARE(FIRMWARE_RT2661);
+ MODULE_LICENSE("GPL");
- /* Wake-up command waiters which can't sleep in
-- * libertas_prepare_and_send_command
-+ * lbs_prepare_and_send_command
- */
-- if (!adapter->nr_cmd_pending)
-- wake_up_all(&adapter->cmd_pending);
--
-- libertas_tx_runqueue(priv);
-+ if (!list_empty(&priv->cmdpendingq))
-+ wake_up_all(&priv->cmd_pending);
+ static struct pci_driver rt61pci_driver = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .id_table = rt61pci_device_table,
+ .probe = rt2x00pci_probe,
+ .remove = __devexit_p(rt2x00pci_remove),
+diff --git a/drivers/net/wireless/rt2x00/rt61pci.h b/drivers/net/wireless/rt2x00/rt61pci.h
+index 6721d7d..4c6524e 100644
+--- a/drivers/net/wireless/rt2x00/rt61pci.h
++++ b/drivers/net/wireless/rt2x00/rt61pci.h
+@@ -1077,13 +1077,19 @@ struct hw_pairwise_ta_entry {
+ * R4: RX antenna control
+ * FRAME_END: 1 - DPDT, 0 - SPDT (Only valid for 802.11G, RF2527 & RF2529)
+ */
+-#define BBP_R4_RX_ANTENNA FIELD8(0x03)
+
-+ spin_lock_irq(&priv->driver_lock);
-+ if (!priv->dnld_sent && priv->tx_pending_len > 0) {
-+ int ret = priv->hw_host_to_card(priv, MVMS_DAT,
-+ priv->tx_pending_buf,
-+ priv->tx_pending_len);
-+ if (ret) {
-+ lbs_deb_tx("host_to_card failed %d\n", ret);
-+ priv->dnld_sent = DNLD_RES_RECEIVED;
-+ }
-+ priv->tx_pending_len = 0;
-+ if (!priv->currenttxskb) {
-+ /* We can wake the queues immediately if we aren't
-+ waiting for TX feedback */
-+ if (priv->connect_status == LBS_CONNECTED)
-+ netif_wake_queue(priv->dev);
-+ if (priv->mesh_dev &&
-+ priv->mesh_connect_status == LBS_CONNECTED)
-+ netif_wake_queue(priv->mesh_dev);
-+ }
-+ }
-+ spin_unlock_irq(&priv->driver_lock);
- }
++/*
++ * ANTENNA_CONTROL semantics (guessed):
++ * 0x1: Software controlled antenna switching (fixed or SW diversity)
++ * 0x2: Hardware diversity.
++ */
++#define BBP_R4_RX_ANTENNA_CONTROL FIELD8(0x03)
+ #define BBP_R4_RX_FRAME_END FIELD8(0x20)
-- del_timer(&adapter->command_timer);
-- adapter->nr_cmd_pending = 0;
-- wake_up_all(&adapter->cmd_pending);
-+ del_timer(&priv->command_timer);
-+ wake_up_all(&priv->cmd_pending);
+ /*
+ * R77
+ */
+-#define BBP_R77_PAIR FIELD8(0x03)
++#define BBP_R77_RX_ANTENNA FIELD8(0x03)
- lbs_deb_leave(LBS_DEB_THREAD);
- return 0;
+ /*
+ * RF registers
+@@ -1240,8 +1246,8 @@ struct hw_pairwise_ta_entry {
+ /*
+ * DMA descriptor defines.
+ */
+-#define TXD_DESC_SIZE ( 16 * sizeof(struct data_desc) )
+-#define RXD_DESC_SIZE ( 16 * sizeof(struct data_desc) )
++#define TXD_DESC_SIZE ( 16 * sizeof(__le32) )
++#define RXD_DESC_SIZE ( 16 * sizeof(__le32) )
+
+ /*
+ * TX descriptor format for TX, PRIO and Beacon Ring.
+diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c
+index c0671c2..4d576ab 100644
+--- a/drivers/net/wireless/rt2x00/rt73usb.c
++++ b/drivers/net/wireless/rt2x00/rt73usb.c
+@@ -24,11 +24,6 @@
+ Supported chipsets: rt2571W & rt2671.
+ */
+
+-/*
+- * Set enviroment defines for rt2x00.h
+- */
+-#define DRV_NAME "rt73usb"
+-
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+@@ -52,8 +47,9 @@
+ * between each attampt. When the busy bit is still set at that time,
+ * the access attempt is considered to have failed,
+ * and we will print an error.
++ * The _lock versions must be used if you already hold the usb_cache_mutex
+ */
+-static inline void rt73usb_register_read(const struct rt2x00_dev *rt2x00dev,
++static inline void rt73usb_register_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset, u32 *value)
+ {
+ __le32 reg;
+@@ -63,8 +59,17 @@ static inline void rt73usb_register_read(const struct rt2x00_dev *rt2x00dev,
+ *value = le32_to_cpu(reg);
}
-+static int lbs_suspend_callback(struct lbs_private *priv, unsigned long dummy,
-+ struct cmd_header *cmd)
-+{
-+ lbs_deb_enter(LBS_DEB_FW);
-+
-+ netif_device_detach(priv->dev);
-+ if (priv->mesh_dev)
-+ netif_device_detach(priv->mesh_dev);
-+
-+ priv->fw_ready = 0;
-+ lbs_deb_leave(LBS_DEB_FW);
-+ return 0;
-+}
-+
-+int lbs_suspend(struct lbs_private *priv)
+-static inline void rt73usb_register_multiread(const struct rt2x00_dev
+- *rt2x00dev,
++static inline void rt73usb_register_read_lock(struct rt2x00_dev *rt2x00dev,
++ const unsigned int offset, u32 *value)
+{
-+ struct cmd_header cmd;
-+ int ret;
-+
-+ lbs_deb_enter(LBS_DEB_FW);
-+
-+ if (priv->wol_criteria == 0xffffffff) {
-+ lbs_pr_info("Suspend attempt without configuring wake params!\n");
-+ return -EINVAL;
-+ }
-+
-+ memset(&cmd, 0, sizeof(cmd));
-+
-+ ret = __lbs_cmd(priv, CMD_802_11_HOST_SLEEP_ACTIVATE, &cmd,
-+ sizeof(cmd), lbs_suspend_callback, 0);
-+ if (ret)
-+ lbs_pr_info("HOST_SLEEP_ACTIVATE failed: %d\n", ret);
-+
-+ lbs_deb_leave_args(LBS_DEB_FW, "ret %d", ret);
-+ return ret;
++ __le32 reg;
++ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_READ,
++ USB_VENDOR_REQUEST_IN, offset,
++ &reg, sizeof(u32), REGISTER_TIMEOUT);
++ *value = le32_to_cpu(reg);
+}
-+EXPORT_SYMBOL_GPL(lbs_suspend);
-+
-+int lbs_resume(struct lbs_private *priv)
-+{
-+ lbs_deb_enter(LBS_DEB_FW);
-+
-+ priv->fw_ready = 1;
-+
-+ /* Firmware doesn't seem to give us RX packets any more
-+ until we send it some command. Might as well update */
-+ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
-+ 0, 0, NULL);
+
-+ netif_device_attach(priv->dev);
-+ if (priv->mesh_dev)
-+ netif_device_attach(priv->mesh_dev);
-+
-+ lbs_deb_leave(LBS_DEB_FW);
-+ return 0;
++static inline void rt73usb_register_multiread(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ void *value, const u32 length)
+ {
+@@ -74,7 +79,7 @@ static inline void rt73usb_register_multiread(const struct rt2x00_dev
+ value, length, timeout);
+ }
+
+-static inline void rt73usb_register_write(const struct rt2x00_dev *rt2x00dev,
++static inline void rt73usb_register_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset, u32 value)
+ {
+ __le32 reg = cpu_to_le32(value);
+@@ -83,8 +88,16 @@ static inline void rt73usb_register_write(const struct rt2x00_dev *rt2x00dev,
+ &reg, sizeof(u32), REGISTER_TIMEOUT);
+ }
+
+-static inline void rt73usb_register_multiwrite(const struct rt2x00_dev
+- *rt2x00dev,
++static inline void rt73usb_register_write_lock(struct rt2x00_dev *rt2x00dev,
++ const unsigned int offset, u32 value)
++{
++ __le32 reg = cpu_to_le32(value);
++ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_WRITE,
++ USB_VENDOR_REQUEST_OUT, offset,
++ &reg, sizeof(u32), REGISTER_TIMEOUT);
+}
-+EXPORT_SYMBOL_GPL(lbs_resume);
+
- /**
- * @brief This function downloads firmware image, gets
- * HW spec from firmware and set basic parameters to
- * firmware.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return 0 or -1
- */
--static int wlan_setup_firmware(wlan_private * priv)
-+static int lbs_setup_firmware(struct lbs_private *priv)
++static inline void rt73usb_register_multiwrite(struct rt2x00_dev *rt2x00dev,
+ const unsigned int offset,
+ void *value, const u32 length)
{
- int ret = -1;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ds_mesh_access mesh_access;
+@@ -94,13 +107,13 @@ static inline void rt73usb_register_multiwrite(const struct rt2x00_dev
+ value, length, timeout);
+ }
- lbs_deb_enter(LBS_DEB_FW);
+-static u32 rt73usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
++static u32 rt73usb_bbp_check(struct rt2x00_dev *rt2x00dev)
+ {
+ u32 reg;
+ unsigned int i;
+
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+- rt73usb_register_read(rt2x00dev, PHY_CSR3, &reg);
++ rt73usb_register_read_lock(rt2x00dev, PHY_CSR3, &reg);
+ if (!rt2x00_get_field32(reg, PHY_CSR3_BUSY))
+ break;
+ udelay(REGISTER_BUSY_DELAY);
+@@ -109,17 +122,20 @@ static u32 rt73usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
+ return reg;
+ }
+
+-static void rt73usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
++static void rt73usb_bbp_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u8 value)
+ {
+ u32 reg;
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
++
/*
- * Read MAC address from HW
+ * Wait until the BBP becomes ready.
*/
-- memset(adapter->current_addr, 0xff, ETH_ALEN);
--
-- ret = libertas_prepare_and_send_command(priv, CMD_GET_HW_SPEC,
-- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
--
-+ memset(priv->current_addr, 0xff, ETH_ALEN);
-+ ret = lbs_update_hw_spec(priv);
- if (ret) {
- ret = -1;
- goto done;
+ reg = rt73usb_bbp_check(rt2x00dev);
+ if (rt2x00_get_field32(reg, PHY_CSR3_BUSY)) {
+ ERROR(rt2x00dev, "PHY_CSR3 register busy. Write failed.\n");
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ return;
}
-- libertas_set_mac_packet_filter(priv);
--
-- /* Get the supported Data rates */
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_DATA_RATE,
-- CMD_ACT_GET_TX_RATE,
-- CMD_OPTION_WAITFORRSP, 0, NULL);
-+ lbs_set_mac_packet_filter(priv);
+@@ -132,20 +148,24 @@ static void rt73usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&reg, PHY_CSR3_BUSY, 1);
+ rt2x00_set_field32(&reg, PHY_CSR3_READ_CONTROL, 0);
-- if (ret) {
-+ ret = lbs_get_data_rate(priv);
-+ if (ret < 0) {
- ret = -1;
- goto done;
- }
+- rt73usb_register_write(rt2x00dev, PHY_CSR3, reg);
++ rt73usb_register_write_lock(rt2x00dev, PHY_CSR3, reg);
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ }
-- /* Disable mesh autostart */
-- if (priv->mesh_dev) {
-- memset(&mesh_access, 0, sizeof(mesh_access));
-- mesh_access.data[0] = cpu_to_le32(0);
-- ret = libertas_prepare_and_send_command(priv,
-- CMD_MESH_ACCESS,
-- CMD_ACT_MESH_SET_AUTOSTART_ENABLED,
-- CMD_OPTION_WAITFORRSP, 0, (void *)&mesh_access);
-- if (ret) {
-- ret = -1;
-- goto done;
-- }
-- priv->mesh_autostart_enabled = 0;
-- }
--
-- /* Set the boot2 version in firmware */
-- ret = libertas_prepare_and_send_command(priv, CMD_SET_BOOT2_VER,
-- 0, CMD_OPTION_WAITFORRSP, 0, NULL);
--
- ret = 0;
- done:
- lbs_deb_leave_args(LBS_DEB_FW, "ret %d", ret);
-@@ -985,164 +965,130 @@ done:
- */
- static void command_timer_fn(unsigned long data)
+-static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
++static void rt73usb_bbp_read(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u8 *value)
{
-- wlan_private *priv = (wlan_private *)data;
-- wlan_adapter *adapter = priv->adapter;
-- struct cmd_ctrl_node *ptempnode;
-- struct cmd_ds_command *cmd;
-+ struct lbs_private *priv = (struct lbs_private *)data;
- unsigned long flags;
-
-- ptempnode = adapter->cur_cmd;
-- if (ptempnode == NULL) {
-- lbs_deb_fw("ptempnode empty\n");
-- return;
-- }
-+ lbs_deb_enter(LBS_DEB_CMD);
-+ spin_lock_irqsave(&priv->driver_lock, flags);
+ u32 reg;
-- cmd = (struct cmd_ds_command *)ptempnode->bufvirtualaddr;
-- if (!cmd) {
-- lbs_deb_fw("cmd is NULL\n");
-- return;
-+ if (!priv->cur_cmd) {
-+ lbs_pr_info("Command timer expired; no pending command\n");
-+ goto out;
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
++
+ /*
+ * Wait until the BBP becomes ready.
+ */
+ reg = rt73usb_bbp_check(rt2x00dev);
+ if (rt2x00_get_field32(reg, PHY_CSR3_BUSY)) {
+ ERROR(rt2x00dev, "PHY_CSR3 register busy. Read failed.\n");
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ return;
}
-- lbs_deb_fw("command_timer_fn fired, cmd %x\n", cmd->command);
--
-- if (!adapter->fw_ready)
-- return;
--
-- spin_lock_irqsave(&adapter->driver_lock, flags);
-- adapter->cur_cmd = NULL;
-- spin_unlock_irqrestore(&adapter->driver_lock, flags);
--
-- lbs_deb_fw("re-sending same command because of timeout\n");
-- libertas_queue_cmd(adapter, ptempnode, 0);
-+ lbs_pr_info("Command %x timed out\n", le16_to_cpu(priv->cur_cmd->cmdbuf->command));
+@@ -157,7 +177,7 @@ static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&reg, PHY_CSR3_BUSY, 1);
+ rt2x00_set_field32(&reg, PHY_CSR3_READ_CONTROL, 1);
-+ priv->cmd_timed_out = 1;
- wake_up_interruptible(&priv->waitq);
--
-- return;
-+out:
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ lbs_deb_leave(LBS_DEB_CMD);
+- rt73usb_register_write(rt2x00dev, PHY_CSR3, reg);
++ rt73usb_register_write_lock(rt2x00dev, PHY_CSR3, reg);
+
+ /*
+ * Wait until the BBP becomes ready.
+@@ -170,9 +190,10 @@ static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
+ }
+
+ *value = rt2x00_get_field32(reg, PHY_CSR3_VALUE);
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
}
--static int libertas_init_adapter(wlan_private * priv)
-+static int lbs_init_adapter(struct lbs_private *priv)
+-static void rt73usb_rf_write(const struct rt2x00_dev *rt2x00dev,
++static void rt73usb_rf_write(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, const u32 value)
{
-- wlan_adapter *adapter = priv->adapter;
- size_t bufsize;
- int i, ret = 0;
+ u32 reg;
+@@ -181,13 +202,16 @@ static void rt73usb_rf_write(const struct rt2x00_dev *rt2x00dev,
+ if (!word)
+ return;
-+ lbs_deb_enter(LBS_DEB_MAIN);
++ mutex_lock(&rt2x00dev->usb_cache_mutex);
+
- /* Allocate buffer to store the BSSID list */
- bufsize = MAX_NETWORK_COUNT * sizeof(struct bss_descriptor);
-- adapter->networks = kzalloc(bufsize, GFP_KERNEL);
-- if (!adapter->networks) {
-+ priv->networks = kzalloc(bufsize, GFP_KERNEL);
-+ if (!priv->networks) {
- lbs_pr_err("Out of memory allocating beacons\n");
- ret = -1;
- goto out;
- }
-
- /* Initialize scan result lists */
-- INIT_LIST_HEAD(&adapter->network_free_list);
-- INIT_LIST_HEAD(&adapter->network_list);
-+ INIT_LIST_HEAD(&priv->network_free_list);
-+ INIT_LIST_HEAD(&priv->network_list);
- for (i = 0; i < MAX_NETWORK_COUNT; i++) {
-- list_add_tail(&adapter->networks[i].list,
-- &adapter->network_free_list);
-+ list_add_tail(&priv->networks[i].list,
-+ &priv->network_free_list);
+ for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
+- rt73usb_register_read(rt2x00dev, PHY_CSR4, &reg);
++ rt73usb_register_read_lock(rt2x00dev, PHY_CSR4, &reg);
+ if (!rt2x00_get_field32(reg, PHY_CSR4_BUSY))
+ goto rf_write;
+ udelay(REGISTER_BUSY_DELAY);
}
-- adapter->libertas_ps_confirm_sleep.seqnum = cpu_to_le16(++adapter->seqnum);
-- adapter->libertas_ps_confirm_sleep.command =
-+ priv->lbs_ps_confirm_sleep.seqnum = cpu_to_le16(++priv->seqnum);
-+ priv->lbs_ps_confirm_sleep.command =
- cpu_to_le16(CMD_802_11_PS_MODE);
-- adapter->libertas_ps_confirm_sleep.size =
-+ priv->lbs_ps_confirm_sleep.size =
- cpu_to_le16(sizeof(struct PS_CMD_ConfirmSleep));
-- adapter->libertas_ps_confirm_sleep.action =
-+ priv->lbs_ps_confirm_sleep.action =
- cpu_to_le16(CMD_SUBCMD_SLEEP_CONFIRMED);
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ ERROR(rt2x00dev, "PHY_CSR4 register busy. Write failed.\n");
+ return;
-- memset(adapter->current_addr, 0xff, ETH_ALEN);
--
-- adapter->connect_status = LIBERTAS_DISCONNECTED;
-- adapter->secinfo.auth_mode = IW_AUTH_ALG_OPEN_SYSTEM;
-- adapter->mode = IW_MODE_INFRA;
-- adapter->curbssparams.channel = DEFAULT_AD_HOC_CHANNEL;
-- adapter->currentpacketfilter = CMD_ACT_MAC_RX_ON | CMD_ACT_MAC_TX_ON;
-- adapter->radioon = RADIO_ON;
-- adapter->auto_rate = 1;
-- adapter->capability = WLAN_CAPABILITY_SHORT_PREAMBLE;
-- adapter->psmode = WLAN802_11POWERMODECAM;
-- adapter->psstate = PS_STATE_FULL_POWER;
-+ memset(priv->current_addr, 0xff, ETH_ALEN);
+@@ -200,25 +224,26 @@ rf_write:
+ * all others contain 20 bits.
+ */
+ rt2x00_set_field32(&reg, PHY_CSR4_NUMBER_OF_BITS,
+- 20 + !!(rt2x00_rf(&rt2x00dev->chip, RF5225) ||
+- rt2x00_rf(&rt2x00dev->chip, RF2527)));
++ 20 + (rt2x00_rf(&rt2x00dev->chip, RF5225) ||
++ rt2x00_rf(&rt2x00dev->chip, RF2527)));
+ rt2x00_set_field32(&reg, PHY_CSR4_IF_SELECT, 0);
+ rt2x00_set_field32(&reg, PHY_CSR4_BUSY, 1);
-- mutex_init(&adapter->lock);
-+ priv->connect_status = LBS_DISCONNECTED;
-+ priv->mesh_connect_status = LBS_DISCONNECTED;
-+ priv->secinfo.auth_mode = IW_AUTH_ALG_OPEN_SYSTEM;
-+ priv->mode = IW_MODE_INFRA;
-+ priv->curbssparams.channel = DEFAULT_AD_HOC_CHANNEL;
-+ priv->currentpacketfilter = CMD_ACT_MAC_RX_ON | CMD_ACT_MAC_TX_ON;
-+ priv->radioon = RADIO_ON;
-+ priv->auto_rate = 1;
-+ priv->capability = WLAN_CAPABILITY_SHORT_PREAMBLE;
-+ priv->psmode = LBS802_11POWERMODECAM;
-+ priv->psstate = PS_STATE_FULL_POWER;
+- rt73usb_register_write(rt2x00dev, PHY_CSR4, reg);
++ rt73usb_register_write_lock(rt2x00dev, PHY_CSR4, reg);
+ rt2x00_rf_write(rt2x00dev, word, value);
++ mutex_unlock(&rt2x00dev->usb_cache_mutex);
+ }
-- memset(&adapter->tx_queue_ps, 0, NR_TX_QUEUE*sizeof(struct sk_buff*));
-- adapter->tx_queue_idx = 0;
-- spin_lock_init(&adapter->txqueue_lock);
-+ mutex_init(&priv->lock);
+ #ifdef CONFIG_RT2X00_LIB_DEBUGFS
+ #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
-- setup_timer(&adapter->command_timer, command_timer_fn,
-- (unsigned long)priv);
-+ setup_timer(&priv->command_timer, command_timer_fn,
-+ (unsigned long)priv);
+-static void rt73usb_read_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt73usb_read_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 *data)
+ {
+ rt73usb_register_read(rt2x00dev, CSR_OFFSET(word), data);
+ }
-- INIT_LIST_HEAD(&adapter->cmdfreeq);
-- INIT_LIST_HEAD(&adapter->cmdpendingq);
-+ INIT_LIST_HEAD(&priv->cmdfreeq);
-+ INIT_LIST_HEAD(&priv->cmdpendingq);
+-static void rt73usb_write_csr(const struct rt2x00_dev *rt2x00dev,
++static void rt73usb_write_csr(struct rt2x00_dev *rt2x00dev,
+ const unsigned int word, u32 data)
+ {
+ rt73usb_register_write(rt2x00dev, CSR_OFFSET(word), data);
+@@ -302,7 +327,8 @@ static void rt73usb_config_type(struct rt2x00_dev *rt2x00dev, const int type,
+ */
+ rt73usb_register_read(rt2x00dev, TXRX_CSR9, &reg);
+ rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 1);
+- rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 1);
++ rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE,
++ (tsf_sync == TSF_SYNC_BEACON));
+ rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+ rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, tsf_sync);
+ rt73usb_register_write(rt2x00dev, TXRX_CSR9, reg);
+@@ -396,12 +422,12 @@ static void rt73usb_config_txpower(struct rt2x00_dev *rt2x00dev,
+ }
-- spin_lock_init(&adapter->driver_lock);
-- init_waitqueue_head(&adapter->cmd_pending);
-- adapter->nr_cmd_pending = 0;
-+ spin_lock_init(&priv->driver_lock);
-+ init_waitqueue_head(&priv->cmd_pending);
+ static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx,
+- const int antenna_rx)
++ struct antenna_setup *ant)
+ {
+ u8 r3;
+ u8 r4;
+ u8 r77;
++ u8 temp;
- /* Allocate the command buffers */
-- if (libertas_allocate_cmd_buffer(priv)) {
-+ if (lbs_allocate_cmd_buffer(priv)) {
- lbs_pr_err("Out of memory allocating command buffers\n");
- ret = -1;
- }
+ rt73usb_bbp_read(rt2x00dev, 3, &r3);
+ rt73usb_bbp_read(rt2x00dev, 4, &r4);
+@@ -409,30 +435,38 @@ static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
- out:
-+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
-+
- return ret;
- }
+ rt2x00_set_field8(&r3, BBP_R3_SMART_MODE, 0);
--static void libertas_free_adapter(wlan_private * priv)
-+static void lbs_free_adapter(struct lbs_private *priv)
- {
-- wlan_adapter *adapter = priv->adapter;
--
-- if (!adapter) {
-- lbs_deb_fw("why double free adapter?\n");
-- return;
-- }
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ /*
++ * Configure the RX antenna.
++ */
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
+- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
+- !!(rt2x00dev->curr_hwmode != HWMODE_A));
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
++ temp = !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags)
++ && (rt2x00dev->curr_hwmode != HWMODE_A);
++ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, temp);
+ break;
+ case ANTENNA_A:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
-
-- lbs_deb_fw("free command buffer\n");
-- libertas_free_cmd_buffer(priv);
+ if (rt2x00dev->curr_hwmode == HWMODE_A)
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ else
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However we are not going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
-
-- lbs_deb_fw("free command_timer\n");
-- del_timer(&adapter->command_timer);
-+ lbs_deb_enter(LBS_DEB_MAIN);
-
-- lbs_deb_fw("free scan results table\n");
-- kfree(adapter->networks);
-- adapter->networks = NULL;
-+ lbs_free_cmd_buffer(priv);
-+ del_timer(&priv->command_timer);
-+ kfree(priv->networks);
-+ priv->networks = NULL;
+ if (rt2x00dev->curr_hwmode == HWMODE_A)
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
+ else
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ break;
+ }
-- /* Free the adapter object itself */
-- lbs_deb_fw("free adapter\n");
-- kfree(adapter);
-- priv->adapter = NULL;
-+ lbs_deb_leave(LBS_DEB_MAIN);
+@@ -442,8 +476,7 @@ static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
}
- /**
- * @brief This function adds the card. it will probe the
-- * card, allocate the wlan_priv and initialize the device.
-+ * card, allocate the lbs_priv and initialize the device.
- *
- * @param card A pointer to card
-- * @return A pointer to wlan_private structure
-+ * @return A pointer to struct lbs_private structure
- */
--wlan_private *libertas_add_card(void *card, struct device *dmdev)
-+struct lbs_private *lbs_add_card(void *card, struct device *dmdev)
+ static void rt73usb_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx,
+- const int antenna_rx)
++ struct antenna_setup *ant)
{
- struct net_device *dev = NULL;
-- wlan_private *priv = NULL;
-+ struct lbs_private *priv = NULL;
-
-- lbs_deb_enter(LBS_DEB_NET);
-+ lbs_deb_enter(LBS_DEB_MAIN);
+ u8 r3;
+ u8 r4;
+@@ -457,18 +490,27 @@ static void rt73usb_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
+ !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags));
- /* Allocate an Ethernet device and register it */
-- if (!(dev = alloc_etherdev(sizeof(wlan_private)))) {
-+ dev = alloc_etherdev(sizeof(struct lbs_private));
-+ if (!dev) {
- lbs_pr_err("init ethX device failed\n");
- goto done;
+- switch (antenna_rx) {
+- case ANTENNA_SW_DIVERSITY:
++ /*
++ * Configure the RX antenna.
++ */
++ switch (ant->rx) {
+ case ANTENNA_HW_DIVERSITY:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
+ break;
+ case ANTENNA_A:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ break;
++ case ANTENNA_SW_DIVERSITY:
++ /*
++ * NOTE: We should never come here because rt2x00lib is
++ * supposed to catch this and send us the correct antenna
++ * explicitly. However we are not going to bug about this.
++ * Instead, just default to antenna B.
++ */
+ case ANTENNA_B:
+- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
+- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
++ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
++ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
+ break;
}
- priv = dev->priv;
-- /* allocate buffer for wlan_adapter */
-- if (!(priv->adapter = kzalloc(sizeof(wlan_adapter), GFP_KERNEL))) {
-- lbs_pr_err("allocate buffer for wlan_adapter failed\n");
-- goto err_kzalloc;
-- }
--
-- if (libertas_init_adapter(priv)) {
-+ if (lbs_init_adapter(priv)) {
- lbs_pr_err("failed to initialize adapter structure.\n");
- goto err_init_adapter;
- }
-@@ -1151,81 +1097,78 @@ wlan_private *libertas_add_card(void *card, struct device *dmdev)
- priv->card = card;
- priv->mesh_open = 0;
- priv->infra_open = 0;
-- priv->hotplug_device = dmdev;
+@@ -509,40 +551,40 @@ static const struct antenna_sel antenna_sel_bg[] = {
+ };
- /* Setup the OS Interface to our functions */
-- dev->open = libertas_open;
-- dev->hard_start_xmit = libertas_pre_start_xmit;
-- dev->stop = libertas_close;
-- dev->set_mac_address = libertas_set_mac_address;
-- dev->tx_timeout = libertas_tx_timeout;
-- dev->get_stats = libertas_get_stats;
-+ dev->open = lbs_dev_open;
-+ dev->hard_start_xmit = lbs_hard_start_xmit;
-+ dev->stop = lbs_eth_stop;
-+ dev->set_mac_address = lbs_set_mac_address;
-+ dev->tx_timeout = lbs_tx_timeout;
-+ dev->get_stats = lbs_get_stats;
- dev->watchdog_timeo = 5 * HZ;
-- dev->ethtool_ops = &libertas_ethtool_ops;
-+ dev->ethtool_ops = &lbs_ethtool_ops;
- #ifdef WIRELESS_EXT
-- dev->wireless_handlers = (struct iw_handler_def *)&libertas_handler_def;
-+ dev->wireless_handlers = (struct iw_handler_def *)&lbs_handler_def;
- #endif
- dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
-- dev->set_multicast_list = libertas_set_multicast_list;
-+ dev->set_multicast_list = lbs_set_multicast_list;
+ static void rt73usb_config_antenna(struct rt2x00_dev *rt2x00dev,
+- const int antenna_tx, const int antenna_rx)
++ struct antenna_setup *ant)
+ {
+ const struct antenna_sel *sel;
+ unsigned int lna;
+ unsigned int i;
+ u32 reg;
- SET_NETDEV_DEV(dev, dmdev);
+- rt73usb_register_read(rt2x00dev, PHY_CSR0, &reg);
+-
+ if (rt2x00dev->curr_hwmode == HWMODE_A) {
+ sel = antenna_sel_a;
+ lna = test_bit(CONFIG_EXTERNAL_LNA_A, &rt2x00dev->flags);
+-
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 0);
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 1);
+ } else {
+ sel = antenna_sel_bg;
+ lna = test_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
+-
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 1);
+- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 0);
+ }
- priv->rtap_net_dev = NULL;
-- if (device_create_file(dmdev, &dev_attr_libertas_rtap))
-- goto err_init_adapter;
+ for (i = 0; i < ARRAY_SIZE(antenna_sel_a); i++)
+ rt73usb_bbp_write(rt2x00dev, sel[i].word, sel[i].value[lna]);
- lbs_deb_thread("Starting main thread...\n");
- init_waitqueue_head(&priv->waitq);
-- priv->main_thread = kthread_run(libertas_thread, dev, "libertas_main");
-+ priv->main_thread = kthread_run(lbs_thread, dev, "lbs_main");
- if (IS_ERR(priv->main_thread)) {
- lbs_deb_thread("Error creating main thread.\n");
-- goto err_kthread_run;
-+ goto err_init_adapter;
- }
++ rt73usb_register_read(rt2x00dev, PHY_CSR0, &reg);
++
++ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG,
++ (rt2x00dev->curr_hwmode == HWMODE_B ||
++ rt2x00dev->curr_hwmode == HWMODE_G));
++ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A,
++ (rt2x00dev->curr_hwmode == HWMODE_A));
++
+ rt73usb_register_write(rt2x00dev, PHY_CSR0, reg);
-- priv->work_thread = create_singlethread_workqueue("libertas_worker");
-- INIT_DELAYED_WORK(&priv->assoc_work, libertas_association_worker);
-- INIT_DELAYED_WORK(&priv->scan_work, libertas_scan_worker);
-- INIT_WORK(&priv->sync_channel, libertas_sync_channel);
-+ priv->work_thread = create_singlethread_workqueue("lbs_worker");
-+ INIT_DELAYED_WORK(&priv->assoc_work, lbs_association_worker);
-+ INIT_DELAYED_WORK(&priv->scan_work, lbs_scan_worker);
-+ INIT_WORK(&priv->sync_channel, lbs_sync_channel);
+ if (rt2x00_rf(&rt2x00dev->chip, RF5226) ||
+ rt2x00_rf(&rt2x00dev->chip, RF5225))
+- rt73usb_config_antenna_5x(rt2x00dev, antenna_tx, antenna_rx);
++ rt73usb_config_antenna_5x(rt2x00dev, ant);
+ else if (rt2x00_rf(&rt2x00dev->chip, RF2528) ||
+ rt2x00_rf(&rt2x00dev->chip, RF2527))
+- rt73usb_config_antenna_2x(rt2x00dev, antenna_tx, antenna_rx);
++ rt73usb_config_antenna_2x(rt2x00dev, ant);
+ }
-- goto done;
-+ sprintf(priv->mesh_ssid, "mesh");
-+ priv->mesh_ssid_len = 4;
+ static void rt73usb_config_duration(struct rt2x00_dev *rt2x00dev,
+@@ -586,8 +628,7 @@ static void rt73usb_config(struct rt2x00_dev *rt2x00dev,
+ if ((flags & CONFIG_UPDATE_TXPOWER) && !(flags & CONFIG_UPDATE_CHANNEL))
+ rt73usb_config_txpower(rt2x00dev, libconf->conf->power_level);
+ if (flags & CONFIG_UPDATE_ANTENNA)
+- rt73usb_config_antenna(rt2x00dev, libconf->conf->antenna_sel_tx,
+- libconf->conf->antenna_sel_rx);
++ rt73usb_config_antenna(rt2x00dev, &libconf->ant);
+ if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
+ rt73usb_config_duration(rt2x00dev, libconf);
+ }
+@@ -605,12 +646,10 @@ static void rt73usb_enable_led(struct rt2x00_dev *rt2x00dev)
+ rt73usb_register_write(rt2x00dev, MAC_CSR14, reg);
--err_kthread_run:
-- device_remove_file(dmdev, &dev_attr_libertas_rtap);
-+ priv->wol_criteria = 0xffffffff;
-+ priv->wol_gpio = 0xff;
+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_RADIO_STATUS, 1);
+- if (rt2x00dev->rx_status.phymode == MODE_IEEE80211A)
+- rt2x00_set_field16(&rt2x00dev->led_reg,
+- MCU_LEDCS_LINK_A_STATUS, 1);
+- else
+- rt2x00_set_field16(&rt2x00dev->led_reg,
+- MCU_LEDCS_LINK_BG_STATUS, 1);
++ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_A_STATUS,
++ (rt2x00dev->rx_status.phymode == MODE_IEEE80211A));
++ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_BG_STATUS,
++ (rt2x00dev->rx_status.phymode != MODE_IEEE80211A));
--err_init_adapter:
-- libertas_free_adapter(priv);
-+ goto done;
+ rt2x00usb_vendor_request_sw(rt2x00dev, USB_LED_CONTROL, 0x0000,
+ rt2x00dev->led_reg, REGISTER_TIMEOUT);
+@@ -659,7 +698,8 @@ static void rt73usb_activity_led(struct rt2x00_dev *rt2x00dev, int rssi)
+ /*
+ * Link tuning
+ */
+-static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev)
++static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev,
++ struct link_qual *qual)
+ {
+ u32 reg;
--err_kzalloc:
-+err_init_adapter:
-+ lbs_free_adapter(priv);
- free_netdev(dev);
- priv = NULL;
+@@ -667,15 +707,13 @@ static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev)
+ * Update FCS error count from register.
+ */
+ rt73usb_register_read(rt2x00dev, STA_CSR0, &reg);
+- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
++ qual->rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
- done:
-- lbs_deb_leave_args(LBS_DEB_NET, "priv %p", priv);
-+ lbs_deb_leave_args(LBS_DEB_MAIN, "priv %p", priv);
- return priv;
+ /*
+ * Update False CCA count from register.
+ */
+ rt73usb_register_read(rt2x00dev, STA_CSR1, &reg);
+- reg = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
+- rt2x00dev->link.false_cca =
+- rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
++ qual->false_cca = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
}
--EXPORT_SYMBOL_GPL(libertas_add_card);
-+EXPORT_SYMBOL_GPL(lbs_add_card);
+ static void rt73usb_reset_tuner(struct rt2x00_dev *rt2x00dev)
+@@ -781,12 +819,12 @@ static void rt73usb_link_tuner(struct rt2x00_dev *rt2x00dev)
+ * r17 does not yet exceed upper limit, continue and base
+ * the r17 tuning on the false CCA count.
+ */
+- if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
++ if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
+ r17 += 4;
+ if (r17 > up_bound)
+ r17 = up_bound;
+ rt73usb_bbp_write(rt2x00dev, 17, r17);
+- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
++ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
+ r17 -= 4;
+ if (r17 < low_bound)
+ r17 = low_bound;
+@@ -1098,8 +1136,6 @@ static int rt73usb_enable_radio(struct rt2x00_dev *rt2x00dev)
+ return -EIO;
+ }
--int libertas_remove_card(wlan_private *priv)
-+int lbs_remove_card(struct lbs_private *priv)
+- rt2x00usb_enable_radio(rt2x00dev);
+-
+ /*
+ * Enable LED
+ */
+@@ -1193,12 +1229,12 @@ static int rt73usb_set_device_state(struct rt2x00_dev *rt2x00dev,
+ * TX descriptor initialization
+ */
+ static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+- struct data_desc *txd,
+- struct txdata_entry_desc *desc,
+- struct ieee80211_hdr *ieee80211hdr,
+- unsigned int length,
+- struct ieee80211_tx_control *control)
++ struct sk_buff *skb,
++ struct txdata_entry_desc *desc,
++ struct ieee80211_tx_control *control)
{
-- wlan_adapter *adapter = priv->adapter;
- struct net_device *dev = priv->dev;
- union iwreq_data wrqu;
++ struct skb_desc *skbdesc = get_skb_desc(skb);
++ __le32 *txd = skbdesc->desc;
+ u32 word;
- lbs_deb_enter(LBS_DEB_MAIN);
+ /*
+@@ -1233,7 +1269,7 @@ static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
+ test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_ACK,
+- !(control->flags & IEEE80211_TXCTL_NO_ACK));
++ test_bit(ENTRY_TXD_ACK, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
+ test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_OFDM,
+@@ -1243,7 +1279,7 @@ static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
+ !!(control->flags &
+ IEEE80211_TXCTL_LONG_RETRY_LIMIT));
+ rt2x00_set_field32(&word, TXD_W0_TKIP_MIC, 0);
+- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
++ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
+ rt2x00_set_field32(&word, TXD_W0_BURST2,
+ test_bit(ENTRY_TXD_BURST, &desc->flags));
+ rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
+@@ -1340,7 +1376,8 @@ static int rt73usb_agc_to_rssi(struct rt2x00_dev *rt2x00dev, int rxd_w1)
+ static void rt73usb_fill_rxdone(struct data_entry *entry,
+ struct rxdata_entry_desc *desc)
+ {
+- struct data_desc *rxd = (struct data_desc *)entry->skb->data;
++ struct skb_desc *skbdesc = get_skb_desc(entry->skb);
++ __le32 *rxd = (__le32 *)entry->skb->data;
+ u32 word0;
+ u32 word1;
-- libertas_remove_rtap(priv);
-+ lbs_remove_mesh(priv);
-+ lbs_remove_rtap(priv);
+@@ -1358,13 +1395,15 @@ static void rt73usb_fill_rxdone(struct data_entry *entry,
+ desc->rssi = rt73usb_agc_to_rssi(entry->ring->rt2x00dev, word1);
+ desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
+ desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
++ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
- dev = priv->dev;
-- device_remove_file(priv->hotplug_device, &dev_attr_libertas_rtap);
+ /*
+- * Pull the skb to clear the descriptor area.
++ * Set descriptor and data pointer.
+ */
+- skb_pull(entry->skb, entry->ring->desc_size);
+-
+- return;
++ skbdesc->desc = entry->skb->data;
++ skbdesc->desc_len = entry->ring->desc_size;
++ skbdesc->data = entry->skb->data + entry->ring->desc_size;
++ skbdesc->data_len = desc->size;
+ }
- cancel_delayed_work(&priv->scan_work);
- cancel_delayed_work(&priv->assoc_work);
- destroy_workqueue(priv->work_thread);
+ /*
+@@ -1392,8 +1431,10 @@ static int rt73usb_validate_eeprom(struct rt2x00_dev *rt2x00dev)
+ rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
+ if (word == 0xffff) {
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 2);
+- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 2);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
++ ANTENNA_B);
++ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
++ ANTENNA_B);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_FRAME_TYPE, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
+ rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
+@@ -1502,9 +1543,9 @@ static int rt73usb_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Identify default antenna configuration.
+ */
+- rt2x00dev->hw->conf.antenna_sel_tx =
++ rt2x00dev->default_ant.tx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
+- rt2x00dev->hw->conf.antenna_sel_rx =
++ rt2x00dev->default_ant.rx =
+ rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
-- if (adapter->psmode == WLAN802_11POWERMODEMAX_PSP) {
-- adapter->psmode = WLAN802_11POWERMODECAM;
-- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
-+ if (priv->psmode == LBS802_11POWERMODEMAX_PSP) {
-+ priv->psmode = LBS802_11POWERMODECAM;
-+ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
- }
+ /*
+@@ -1806,7 +1847,6 @@ static void rt73usb_configure_filter(struct ieee80211_hw *hw,
+ struct dev_addr_list *mc_list)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- struct interface *intf = &rt2x00dev->interface;
+ u32 reg;
- memset(wrqu.ap_addr.sa_data, 0xaa, ETH_ALEN);
-@@ -1233,10 +1176,10 @@ int libertas_remove_card(wlan_private *priv)
- wireless_send_event(priv->dev, SIOCGIWAP, &wrqu, NULL);
+ /*
+@@ -1825,22 +1865,19 @@ static void rt73usb_configure_filter(struct ieee80211_hw *hw,
+ * Apply some rules to the filters:
+ * - Some filters imply different filters to be set.
+ * - Some things we can't filter out at all.
+- * - Some filters are set based on interface type.
+ */
+ if (mc_count)
+ *total_flags |= FIF_ALLMULTI;
+ if (*total_flags & FIF_OTHER_BSS ||
+ *total_flags & FIF_PROMISC_IN_BSS)
+ *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
+- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
+- *total_flags |= FIF_PROMISC_IN_BSS;
- /* Stop the thread servicing the interrupts */
-- adapter->surpriseremoved = 1;
-+ priv->surpriseremoved = 1;
- kthread_stop(priv->main_thread);
+ /*
+ * Check if there is any work left for us.
+ */
+- if (intf->filter == *total_flags)
++ if (rt2x00dev->packet_filter == *total_flags)
+ return;
+- intf->filter = *total_flags;
++ rt2x00dev->packet_filter = *total_flags;
-- libertas_free_adapter(priv);
-+ lbs_free_adapter(priv);
+ /*
+ * When in atomic context, reschedule and let rt2x00lib
+@@ -1926,6 +1963,9 @@ static int rt73usb_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct ieee80211_tx_control *control)
+ {
+ struct rt2x00_dev *rt2x00dev = hw->priv;
++ struct skb_desc *desc;
++ struct data_ring *ring;
++ struct data_entry *entry;
+ int timeout;
- priv->dev = NULL;
- free_netdev(dev);
-@@ -1244,10 +1187,10 @@ int libertas_remove_card(wlan_private *priv)
- lbs_deb_leave(LBS_DEB_MAIN);
- return 0;
- }
--EXPORT_SYMBOL_GPL(libertas_remove_card);
-+EXPORT_SYMBOL_GPL(lbs_remove_card);
+ /*
+@@ -1934,17 +1974,27 @@ static int rt73usb_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
+ * initialization.
+ */
+ control->queue = IEEE80211_TX_QUEUE_BEACON;
++ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
++ entry = rt2x00_get_data_entry(ring);
++
++ /*
++ * Add the descriptor in front of the skb.
++ */
++ skb_push(skb, ring->desc_size);
++ memset(skb->data, 0, ring->desc_size);
+ /*
+- * First we create the beacon.
++ * Fill in skb descriptor
+ */
+- skb_push(skb, TXD_DESC_SIZE);
+- memset(skb->data, 0, TXD_DESC_SIZE);
++ desc = get_skb_desc(skb);
++ desc->desc_len = ring->desc_size;
++ desc->data_len = skb->len - ring->desc_size;
++ desc->desc = skb->data;
++ desc->data = skb->data + ring->desc_size;
++ desc->ring = ring;
++ desc->entry = entry;
--int libertas_start_card(wlan_private *priv)
-+int lbs_start_card(struct lbs_private *priv)
- {
- struct net_device *dev = priv->dev;
- int ret = -1;
-@@ -1255,19 +1198,52 @@ int libertas_start_card(wlan_private *priv)
- lbs_deb_enter(LBS_DEB_MAIN);
+- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
+- (struct ieee80211_hdr *)(skb->data +
+- TXD_DESC_SIZE),
+- skb->len - TXD_DESC_SIZE, control);
++ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
- /* poke the firmware */
-- ret = wlan_setup_firmware(priv);
-+ ret = lbs_setup_firmware(priv);
- if (ret)
- goto done;
+ /*
+ * Write entire beacon with descriptor to register,
+@@ -1971,7 +2021,7 @@ static const struct ieee80211_ops rt73usb_mac80211_ops = {
+ .configure_filter = rt73usb_configure_filter,
+ .get_stats = rt2x00mac_get_stats,
+ .set_retry_limit = rt73usb_set_retry_limit,
+- .erp_ie_changed = rt2x00mac_erp_ie_changed,
++ .bss_info_changed = rt2x00mac_bss_info_changed,
+ .conf_tx = rt2x00mac_conf_tx,
+ .get_tx_stats = rt2x00mac_get_tx_stats,
+ .get_tsf = rt73usb_get_tsf,
+@@ -1985,6 +2035,8 @@ static const struct rt2x00lib_ops rt73usb_rt2x00_ops = {
+ .load_firmware = rt73usb_load_firmware,
+ .initialize = rt2x00usb_initialize,
+ .uninitialize = rt2x00usb_uninitialize,
++ .init_rxentry = rt2x00usb_init_rxentry,
++ .init_txentry = rt2x00usb_init_txentry,
+ .set_device_state = rt73usb_set_device_state,
+ .link_stats = rt73usb_link_stats,
+ .reset_tuner = rt73usb_reset_tuner,
+@@ -2002,7 +2054,7 @@ static const struct rt2x00lib_ops rt73usb_rt2x00_ops = {
+ };
- /* init 802.11d */
-- libertas_init_11d(priv);
-+ lbs_init_11d(priv);
+ static const struct rt2x00_ops rt73usb_ops = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .rxd_size = RXD_DESC_SIZE,
+ .txd_size = TXD_DESC_SIZE,
+ .eeprom_size = EEPROM_SIZE,
+@@ -2089,7 +2141,7 @@ MODULE_FIRMWARE(FIRMWARE_RT2571);
+ MODULE_LICENSE("GPL");
- if (register_netdev(dev)) {
- lbs_pr_err("cannot register ethX device\n");
- goto done;
- }
-+ if (device_create_file(&dev->dev, &dev_attr_lbs_rtap))
-+ lbs_pr_err("cannot register lbs_rtap attribute\n");
+ static struct usb_driver rt73usb_driver = {
+- .name = DRV_NAME,
++ .name = KBUILD_MODNAME,
+ .id_table = rt73usb_device_table,
+ .probe = rt2x00usb_probe,
+ .disconnect = rt2x00usb_disconnect,
+diff --git a/drivers/net/wireless/rt2x00/rt73usb.h b/drivers/net/wireless/rt2x00/rt73usb.h
+index f095151..d49dcaa 100644
+--- a/drivers/net/wireless/rt2x00/rt73usb.h
++++ b/drivers/net/wireless/rt2x00/rt73usb.h
+@@ -713,13 +713,19 @@ struct hw_pairwise_ta_entry {
+ * R4: RX antenna control
+ * FRAME_END: 1 - DPDT, 0 - SPDT (Only valid for 802.11G, RF2527 & RF2529)
+ */
+-#define BBP_R4_RX_ANTENNA FIELD8(0x03)
++
++/*
++ * ANTENNA_CONTROL semantics (guessed):
++ * 0x1: Software controlled antenna switching (fixed or SW diversity)
++ * 0x2: Hardware diversity.
++ */
++#define BBP_R4_RX_ANTENNA_CONTROL FIELD8(0x03)
+ #define BBP_R4_RX_FRAME_END FIELD8(0x20)
+
+ /*
+ * R77
+ */
+-#define BBP_R77_PAIR FIELD8(0x03)
++#define BBP_R77_RX_ANTENNA FIELD8(0x03)
+
+ /*
+ * RF registers
+@@ -860,8 +866,8 @@ struct hw_pairwise_ta_entry {
+ /*
+ * DMA descriptor defines.
+ */
+-#define TXD_DESC_SIZE ( 6 * sizeof(struct data_desc) )
+-#define RXD_DESC_SIZE ( 6 * sizeof(struct data_desc) )
++#define TXD_DESC_SIZE ( 6 * sizeof(__le32) )
++#define RXD_DESC_SIZE ( 6 * sizeof(__le32) )
+
+ /*
+ * TX descriptor format for TX, PRIO and Beacon Ring.
+diff --git a/drivers/net/wireless/rtl8180.h b/drivers/net/wireless/rtl8180.h
+new file mode 100644
+index 0000000..2cbfe3c
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180.h
+@@ -0,0 +1,151 @@
++#ifndef RTL8180_H
++#define RTL8180_H
++
++#include "rtl818x.h"
++
++#define MAX_RX_SIZE IEEE80211_MAX_RTS_THRESHOLD
++
++#define RF_PARAM_ANALOGPHY (1 << 0)
++#define RF_PARAM_ANTBDEFAULT (1 << 1)
++#define RF_PARAM_CARRIERSENSE1 (1 << 2)
++#define RF_PARAM_CARRIERSENSE2 (1 << 3)
++
++#define BB_ANTATTEN_CHAN14 0x0C
++#define BB_ANTENNA_B 0x40
++
++#define BB_HOST_BANG (1 << 30)
++#define BB_HOST_BANG_EN (1 << 2)
++#define BB_HOST_BANG_CLK (1 << 1)
++#define BB_HOST_BANG_DATA 1
++
++#define ANAPARAM_TXDACOFF_SHIFT 27
++#define ANAPARAM_PWR0_SHIFT 28
++#define ANAPARAM_PWR0_MASK (0x07 << ANAPARAM_PWR0_SHIFT)
++#define ANAPARAM_PWR1_SHIFT 20
++#define ANAPARAM_PWR1_MASK (0x7F << ANAPARAM_PWR1_SHIFT)
++
++enum rtl8180_tx_desc_flags {
++ RTL8180_TX_DESC_FLAG_NO_ENC = (1 << 15),
++ RTL8180_TX_DESC_FLAG_TX_OK = (1 << 15),
++ RTL8180_TX_DESC_FLAG_SPLCP = (1 << 16),
++ RTL8180_TX_DESC_FLAG_RX_UNDER = (1 << 16),
++ RTL8180_TX_DESC_FLAG_MOREFRAG = (1 << 17),
++ RTL8180_TX_DESC_FLAG_CTS = (1 << 18),
++ RTL8180_TX_DESC_FLAG_RTS = (1 << 23),
++ RTL8180_TX_DESC_FLAG_LS = (1 << 28),
++ RTL8180_TX_DESC_FLAG_FS = (1 << 29),
++ RTL8180_TX_DESC_FLAG_DMA = (1 << 30),
++ RTL8180_TX_DESC_FLAG_OWN = (1 << 31)
++};
++
++struct rtl8180_tx_desc {
++ __le32 flags;
++ __le16 rts_duration;
++ __le16 plcp_len;
++ __le32 tx_buf;
++ __le32 frame_len;
++ __le32 next_tx_desc;
++ u8 cw;
++ u8 retry_limit;
++ u8 agc;
++ u8 flags2;
++ u32 reserved[2];
++} __attribute__ ((packed));
++
++enum rtl8180_rx_desc_flags {
++ RTL8180_RX_DESC_FLAG_ICV_ERR = (1 << 12),
++ RTL8180_RX_DESC_FLAG_CRC32_ERR = (1 << 13),
++ RTL8180_RX_DESC_FLAG_PM = (1 << 14),
++ RTL8180_RX_DESC_FLAG_RX_ERR = (1 << 15),
++ RTL8180_RX_DESC_FLAG_BCAST = (1 << 16),
++ RTL8180_RX_DESC_FLAG_PAM = (1 << 17),
++ RTL8180_RX_DESC_FLAG_MCAST = (1 << 18),
++ RTL8180_RX_DESC_FLAG_SPLCP = (1 << 25),
++ RTL8180_RX_DESC_FLAG_FOF = (1 << 26),
++ RTL8180_RX_DESC_FLAG_DMA_FAIL = (1 << 27),
++ RTL8180_RX_DESC_FLAG_LS = (1 << 28),
++ RTL8180_RX_DESC_FLAG_FS = (1 << 29),
++ RTL8180_RX_DESC_FLAG_EOR = (1 << 30),
++ RTL8180_RX_DESC_FLAG_OWN = (1 << 31)
++};
++
++struct rtl8180_rx_desc {
++ __le32 flags;
++ __le32 flags2;
++ union {
++ __le32 rx_buf;
++ __le64 tsft;
++ };
++} __attribute__ ((packed));
++
++struct rtl8180_tx_ring {
++ struct rtl8180_tx_desc *desc;
++ dma_addr_t dma;
++ unsigned int idx;
++ unsigned int entries;
++ struct sk_buff_head queue;
++};
++
++struct rtl8180_priv {
++ /* common between rtl818x drivers */
++ struct rtl818x_csr __iomem *map;
++ const struct rtl818x_rf_ops *rf;
++ struct ieee80211_vif *vif;
++ int mode;
++
++ /* rtl8180 driver specific */
++ spinlock_t lock;
++ struct rtl8180_rx_desc *rx_ring;
++ dma_addr_t rx_ring_dma;
++ unsigned int rx_idx;
++ struct sk_buff *rx_buf[32];
++ struct rtl8180_tx_ring tx_ring[4];
++ struct ieee80211_channel channels[14];
++ struct ieee80211_rate rates[12];
++ struct ieee80211_hw_mode modes[2];
++ struct pci_dev *pdev;
++ u32 rx_conf;
++
++ int r8185;
++ u32 anaparam;
++ u16 rfparam;
++ u8 csthreshold;
++};
++
++void rtl8180_write_phy(struct ieee80211_hw *dev, u8 addr, u32 data);
++void rtl8180_set_anaparam(struct rtl8180_priv *priv, u32 anaparam);
++
++static inline u8 rtl818x_ioread8(struct rtl8180_priv *priv, u8 __iomem *addr)
++{
++ return ioread8(addr);
++}
++
++static inline u16 rtl818x_ioread16(struct rtl8180_priv *priv, __le16 __iomem *addr)
++{
++ return ioread16(addr);
++}
++
++static inline u32 rtl818x_ioread32(struct rtl8180_priv *priv, __le32 __iomem *addr)
++{
++ return ioread32(addr);
++}
++
++static inline void rtl818x_iowrite8(struct rtl8180_priv *priv,
++ u8 __iomem *addr, u8 val)
++{
++ iowrite8(val, addr);
++}
++
++static inline void rtl818x_iowrite16(struct rtl8180_priv *priv,
++ __le16 __iomem *addr, u16 val)
++{
++ iowrite16(val, addr);
++}
++
++static inline void rtl818x_iowrite32(struct rtl8180_priv *priv,
++ __le32 __iomem *addr, u32 val)
++{
++ iowrite32(val, addr);
++}
++
++#endif /* RTL8180_H */
+diff --git a/drivers/net/wireless/rtl8180_dev.c b/drivers/net/wireless/rtl8180_dev.c
+new file mode 100644
+index 0000000..07f37b0
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_dev.c
+@@ -0,0 +1,1051 @@
++
++/*
++ * Linux device driver for RTL8180 / RTL8185
++ *
++ * Copyright 2007 Michael Wu <flamingice at sourmilk.net>
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Based on the r8180 driver, which is:
++ * Copyright 2004-2005 Andrea Merello <andreamrl at tiscali.it>, et al.
++ *
++ * Thanks to Realtek for their support!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#include <linux/init.h>
++#include <linux/pci.h>
++#include <linux/delay.h>
++#include <linux/etherdevice.h>
++#include <linux/eeprom_93cx6.h>
++#include <net/mac80211.h>
++
++#include "rtl8180.h"
++#include "rtl8180_rtl8225.h"
++#include "rtl8180_sa2400.h"
++#include "rtl8180_max2820.h"
++#include "rtl8180_grf5101.h"
++
++MODULE_AUTHOR("Michael Wu <flamingice at sourmilk.net>");
++MODULE_AUTHOR("Andrea Merello <andreamrl at tiscali.it>");
++MODULE_DESCRIPTION("RTL8180 / RTL8185 PCI wireless driver");
++MODULE_LICENSE("GPL");
++
++static struct pci_device_id rtl8180_table[] __devinitdata = {
++ /* rtl8185 */
++ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8185) },
++ { PCI_DEVICE(PCI_VENDOR_ID_BELKIN, 0x701f) },
++
++ /* rtl8180 */
++ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8180) },
++ { PCI_DEVICE(0x1799, 0x6001) },
++ { PCI_DEVICE(0x1799, 0x6020) },
++ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x3300) },
++ { }
++};
++
++MODULE_DEVICE_TABLE(pci, rtl8180_table);
++
++void rtl8180_write_phy(struct ieee80211_hw *dev, u8 addr, u32 data)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int i = 10;
++ u32 buf;
++
++ buf = (data << 8) | addr;
++
++ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->PHY[0], buf | 0x80);
++ while (i--) {
++ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->PHY[0], buf);
++ if (rtl818x_ioread8(priv, &priv->map->PHY[2]) == (data & 0xFF))
++ return;
++ }
++}
++
++static void rtl8180_handle_rx(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ unsigned int count = 32;
++
++ while (count--) {
++ struct rtl8180_rx_desc *entry = &priv->rx_ring[priv->rx_idx];
++ struct sk_buff *skb = priv->rx_buf[priv->rx_idx];
++ u32 flags = le32_to_cpu(entry->flags);
++
++ if (flags & RTL8180_RX_DESC_FLAG_OWN)
++ return;
++
++ if (unlikely(flags & (RTL8180_RX_DESC_FLAG_DMA_FAIL |
++ RTL8180_RX_DESC_FLAG_FOF |
++ RTL8180_RX_DESC_FLAG_RX_ERR)))
++ goto done;
++ else {
++ u32 flags2 = le32_to_cpu(entry->flags2);
++ struct ieee80211_rx_status rx_status = {0};
++ struct sk_buff *new_skb = dev_alloc_skb(MAX_RX_SIZE);
++
++ if (unlikely(!new_skb))
++ goto done;
++
++ pci_unmap_single(priv->pdev,
++ *((dma_addr_t *)skb->cb),
++ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
++ skb_put(skb, flags & 0xFFF);
++
++ rx_status.antenna = (flags2 >> 15) & 1;
++ /* TODO: improve signal/rssi reporting */
++ rx_status.signal = flags2 & 0xFF;
++ rx_status.ssi = (flags2 >> 8) & 0x7F;
++ rx_status.rate = (flags >> 20) & 0xF;
++ rx_status.freq = dev->conf.freq;
++ rx_status.channel = dev->conf.channel;
++ rx_status.phymode = dev->conf.phymode;
++ rx_status.mactime = le64_to_cpu(entry->tsft);
++ rx_status.flag |= RX_FLAG_TSFT;
++ if (flags & RTL8180_RX_DESC_FLAG_CRC32_ERR)
++ rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++
++ ieee80211_rx_irqsafe(dev, skb, &rx_status);
++
++ skb = new_skb;
++ priv->rx_buf[priv->rx_idx] = skb;
++ *((dma_addr_t *) skb->cb) =
++ pci_map_single(priv->pdev, skb_tail_pointer(skb),
++ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
++ }
++
++ done:
++ entry->rx_buf = cpu_to_le32(*((dma_addr_t *)skb->cb));
++ entry->flags = cpu_to_le32(RTL8180_RX_DESC_FLAG_OWN |
++ MAX_RX_SIZE);
++ if (priv->rx_idx == 31)
++ entry->flags |= cpu_to_le32(RTL8180_RX_DESC_FLAG_EOR);
++ priv->rx_idx = (priv->rx_idx + 1) % 32;
++ }
++}
++
++static void rtl8180_handle_tx(struct ieee80211_hw *dev, unsigned int prio)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ struct rtl8180_tx_ring *ring = &priv->tx_ring[prio];
++
++ while (skb_queue_len(&ring->queue)) {
++ struct rtl8180_tx_desc *entry = &ring->desc[ring->idx];
++ struct sk_buff *skb;
++ struct ieee80211_tx_status status = { {0} };
++ struct ieee80211_tx_control *control;
++ u32 flags = le32_to_cpu(entry->flags);
++
++ if (flags & RTL8180_TX_DESC_FLAG_OWN)
++ return;
++
++ ring->idx = (ring->idx + 1) % ring->entries;
++ skb = __skb_dequeue(&ring->queue);
++ pci_unmap_single(priv->pdev, le32_to_cpu(entry->tx_buf),
++ skb->len, PCI_DMA_TODEVICE);
++
++ control = *((struct ieee80211_tx_control **)skb->cb);
++ if (control)
++ memcpy(&status.control, control, sizeof(*control));
++ kfree(control);
++
++ if (!(status.control.flags & IEEE80211_TXCTL_NO_ACK)) {
++ if (flags & RTL8180_TX_DESC_FLAG_TX_OK)
++ status.flags = IEEE80211_TX_STATUS_ACK;
++ else
++ status.excessive_retries = 1;
++ }
++ status.retry_count = flags & 0xFF;
++
++ ieee80211_tx_status_irqsafe(dev, skb, &status);
++ if (ring->entries - skb_queue_len(&ring->queue) == 2)
++ ieee80211_wake_queue(dev, prio);
++ }
++}
++
++static irqreturn_t rtl8180_interrupt(int irq, void *dev_id)
++{
++ struct ieee80211_hw *dev = dev_id;
++ struct rtl8180_priv *priv = dev->priv;
++ u16 reg;
++
++ spin_lock(&priv->lock);
++ reg = rtl818x_ioread16(priv, &priv->map->INT_STATUS);
++ if (unlikely(reg == 0xFFFF)) {
++ spin_unlock(&priv->lock);
++ return IRQ_HANDLED;
++ }
++
++ rtl818x_iowrite16(priv, &priv->map->INT_STATUS, reg);
++
++ if (reg & (RTL818X_INT_TXB_OK | RTL818X_INT_TXB_ERR))
++ rtl8180_handle_tx(dev, 3);
++
++ if (reg & (RTL818X_INT_TXH_OK | RTL818X_INT_TXH_ERR))
++ rtl8180_handle_tx(dev, 2);
++
++ if (reg & (RTL818X_INT_TXN_OK | RTL818X_INT_TXN_ERR))
++ rtl8180_handle_tx(dev, 1);
++
++ if (reg & (RTL818X_INT_TXL_OK | RTL818X_INT_TXL_ERR))
++ rtl8180_handle_tx(dev, 0);
++
++ if (reg & (RTL818X_INT_RX_OK | RTL818X_INT_RX_ERR))
++ rtl8180_handle_rx(dev);
++
++ spin_unlock(&priv->lock);
++
++ return IRQ_HANDLED;
++}
++
++static int rtl8180_tx(struct ieee80211_hw *dev, struct sk_buff *skb,
++ struct ieee80211_tx_control *control)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ struct rtl8180_tx_ring *ring;
++ struct rtl8180_tx_desc *entry;
++ unsigned long flags;
++ unsigned int idx, prio;
++ dma_addr_t mapping;
++ u32 tx_flags;
++ u16 plcp_len = 0;
++ __le16 rts_duration = 0;
++
++ prio = control->queue;
++ ring = &priv->tx_ring[prio];
++
++ mapping = pci_map_single(priv->pdev, skb->data,
++ skb->len, PCI_DMA_TODEVICE);
++
++ tx_flags = RTL8180_TX_DESC_FLAG_OWN | RTL8180_TX_DESC_FLAG_FS |
++ RTL8180_TX_DESC_FLAG_LS | (control->tx_rate << 24) |
++ (control->rts_cts_rate << 19) | skb->len;
++
++ if (priv->r8185)
++ tx_flags |= RTL8180_TX_DESC_FLAG_DMA |
++ RTL8180_TX_DESC_FLAG_NO_ENC;
++
++ if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS)
++ tx_flags |= RTL8180_TX_DESC_FLAG_RTS;
++ else if (control->flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
++ tx_flags |= RTL8180_TX_DESC_FLAG_CTS;
++
++ *((struct ieee80211_tx_control **) skb->cb) =
++ kmemdup(control, sizeof(*control), GFP_ATOMIC);
++
++ if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS)
++ rts_duration = ieee80211_rts_duration(dev, priv->vif, skb->len,
++ control);
++
++ if (!priv->r8185) {
++ unsigned int remainder;
++
++ plcp_len = DIV_ROUND_UP(16 * (skb->len + 4),
++ (control->rate->rate * 2) / 10);
++ remainder = (16 * (skb->len + 4)) %
++ ((control->rate->rate * 2) / 10);
++ if (remainder > 0 && remainder <= 6)
++ plcp_len |= 1 << 15;
++ }
++
++ spin_lock_irqsave(&priv->lock, flags);
++ idx = (ring->idx + skb_queue_len(&ring->queue)) % ring->entries;
++ entry = &ring->desc[idx];
++
++ entry->rts_duration = rts_duration;
++ entry->plcp_len = cpu_to_le16(plcp_len);
++ entry->tx_buf = cpu_to_le32(mapping);
++ entry->frame_len = cpu_to_le32(skb->len);
++ entry->flags2 = control->alt_retry_rate != -1 ?
++ control->alt_retry_rate << 4 : 0;
++ entry->retry_limit = control->retry_limit;
++ entry->flags = cpu_to_le32(tx_flags);
++ __skb_queue_tail(&ring->queue, skb);
++ if (ring->entries - skb_queue_len(&ring->queue) < 2)
++ ieee80211_stop_queue(dev, control->queue);
++ spin_unlock_irqrestore(&priv->lock, flags);
++
++ rtl818x_iowrite8(priv, &priv->map->TX_DMA_POLLING, (1 << (prio + 4)));
++
++ return 0;
++}
++
++void rtl8180_set_anaparam(struct rtl8180_priv *priv, u32 anaparam)
++{
++ u8 reg;
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3,
++ reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite32(priv, &priv->map->ANAPARAM, anaparam);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3,
++ reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++}
++
++static int rtl8180_init_hw(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u16 reg;
++
++ rtl818x_iowrite8(priv, &priv->map->CMD, 0);
++ rtl818x_ioread8(priv, &priv->map->CMD);
++ msleep(10);
++
++ /* reset */
++ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0);
++ rtl818x_ioread8(priv, &priv->map->CMD);
++
++ reg = rtl818x_ioread8(priv, &priv->map->CMD);
++ reg &= (1 << 1);
++ reg |= RTL818X_CMD_RESET;
++ rtl818x_iowrite8(priv, &priv->map->CMD, RTL818X_CMD_RESET);
++ rtl818x_ioread8(priv, &priv->map->CMD);
++ msleep(200);
++
++ /* check success of reset */
++ if (rtl818x_ioread8(priv, &priv->map->CMD) & RTL818X_CMD_RESET) {
++ printk(KERN_ERR "%s: reset timeout!\n", wiphy_name(dev->wiphy));
++ return -ETIMEDOUT;
++ }
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_LOAD);
++ rtl818x_ioread8(priv, &priv->map->CMD);
++ msleep(200);
++
++ if (rtl818x_ioread8(priv, &priv->map->CONFIG3) & (1 << 3)) {
++ /* For cardbus */
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
++ reg |= 1 << 1;
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg);
++ reg = rtl818x_ioread16(priv, &priv->map->FEMR);
++ reg |= (1 << 15) | (1 << 14) | (1 << 4);
++ rtl818x_iowrite16(priv, &priv->map->FEMR, reg);
++ }
++
++ rtl818x_iowrite8(priv, &priv->map->MSR, 0);
++
++ if (!priv->r8185)
++ rtl8180_set_anaparam(priv, priv->anaparam);
++
++ rtl818x_iowrite32(priv, &priv->map->RDSAR, priv->rx_ring_dma);
++ rtl818x_iowrite32(priv, &priv->map->TBDA, priv->tx_ring[3].dma);
++ rtl818x_iowrite32(priv, &priv->map->THPDA, priv->tx_ring[2].dma);
++ rtl818x_iowrite32(priv, &priv->map->TNPDA, priv->tx_ring[1].dma);
++ rtl818x_iowrite32(priv, &priv->map->TLPDA, priv->tx_ring[0].dma);
++
++ /* TODO: necessary? specs indicate not */
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG2);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG2, reg & ~(1 << 3));
++ if (priv->r8185) {
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG2);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG2, reg | (1 << 4));
++ }
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ /* TODO: set CONFIG5 for calibrating AGC on rtl8180 + philips radio? */
++
++ /* TODO: turn off hw wep on rtl8180 */
++
++ rtl818x_iowrite32(priv, &priv->map->INT_TIMEOUT, 0);
++
++ if (priv->r8185) {
++ rtl818x_iowrite8(priv, &priv->map->WPA_CONF, 0);
++ rtl818x_iowrite8(priv, &priv->map->RATE_FALLBACK, 0x81);
++ rtl818x_iowrite8(priv, &priv->map->RESP_RATE, (8 << 4) | 0);
++
++ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
++
++ /* TODO: set ClkRun enable? necessary? */
++ reg = rtl818x_ioread8(priv, &priv->map->GP_ENABLE);
++ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, reg & ~(1 << 6));
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | (1 << 2));
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++ } else {
++ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x1);
++ rtl818x_iowrite8(priv, &priv->map->SECURITY, 0);
++
++ rtl818x_iowrite8(priv, &priv->map->PHY_DELAY, 0x6);
++ rtl818x_iowrite8(priv, &priv->map->CARRIER_SENSE_COUNTER, 0x4C);
++ }
++
++ priv->rf->init(dev);
++ if (priv->r8185)
++ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
++ return 0;
++}
++
++static int rtl8180_init_rx_ring(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ struct rtl8180_rx_desc *entry;
++ int i;
++
++ priv->rx_ring = pci_alloc_consistent(priv->pdev,
++ sizeof(*priv->rx_ring) * 32,
++ &priv->rx_ring_dma);
++
++ if (!priv->rx_ring || (unsigned long)priv->rx_ring & 0xFF) {
++ printk(KERN_ERR "%s: Cannot allocate RX ring\n",
++ wiphy_name(dev->wiphy));
++ return -ENOMEM;
++ }
++
++ memset(priv->rx_ring, 0, sizeof(*priv->rx_ring) * 32);
++ priv->rx_idx = 0;
++
++ for (i = 0; i < 32; i++) {
++ struct sk_buff *skb = dev_alloc_skb(MAX_RX_SIZE);
++ dma_addr_t *mapping;
++ entry = &priv->rx_ring[i];
++ if (!skb)
++ return 0;
++
++ priv->rx_buf[i] = skb;
++ mapping = (dma_addr_t *)skb->cb;
++ *mapping = pci_map_single(priv->pdev, skb_tail_pointer(skb),
++ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
++ entry->rx_buf = cpu_to_le32(*mapping);
++ entry->flags = cpu_to_le32(RTL8180_RX_DESC_FLAG_OWN |
++ MAX_RX_SIZE);
++ }
++ entry->flags |= cpu_to_le32(RTL8180_RX_DESC_FLAG_EOR);
++ return 0;
++}
++
++static void rtl8180_free_rx_ring(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int i;
++
++ for (i = 0; i < 32; i++) {
++ struct sk_buff *skb = priv->rx_buf[i];
++ if (!skb)
++ continue;
++
++ pci_unmap_single(priv->pdev,
++ *((dma_addr_t *)skb->cb),
++ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
++ kfree_skb(skb);
++ }
++
++ pci_free_consistent(priv->pdev, sizeof(*priv->rx_ring) * 32,
++ priv->rx_ring, priv->rx_ring_dma);
++ priv->rx_ring = NULL;
++}
++
++static int rtl8180_init_tx_ring(struct ieee80211_hw *dev,
++ unsigned int prio, unsigned int entries)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ struct rtl8180_tx_desc *ring;
++ dma_addr_t dma;
++ int i;
++
++ ring = pci_alloc_consistent(priv->pdev, sizeof(*ring) * entries, &dma);
++ if (!ring || (unsigned long)ring & 0xFF) {
++ printk(KERN_ERR "%s: Cannot allocate TX ring (prio = %d)\n",
++ wiphy_name(dev->wiphy), prio);
++ return -ENOMEM;
++ }
++
++ memset(ring, 0, sizeof(*ring)*entries);
++ priv->tx_ring[prio].desc = ring;
++ priv->tx_ring[prio].dma = dma;
++ priv->tx_ring[prio].idx = 0;
++ priv->tx_ring[prio].entries = entries;
++ skb_queue_head_init(&priv->tx_ring[prio].queue);
++
++ for (i = 0; i < entries; i++)
++ ring[i].next_tx_desc =
++ cpu_to_le32((u32)dma + ((i + 1) % entries) * sizeof(*ring));
++
++ return 0;
++}
++
++static void rtl8180_free_tx_ring(struct ieee80211_hw *dev, unsigned int prio)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ struct rtl8180_tx_ring *ring = &priv->tx_ring[prio];
++
++ while (skb_queue_len(&ring->queue)) {
++ struct rtl8180_tx_desc *entry = &ring->desc[ring->idx];
++ struct sk_buff *skb = __skb_dequeue(&ring->queue);
++
++ pci_unmap_single(priv->pdev, le32_to_cpu(entry->tx_buf),
++ skb->len, PCI_DMA_TODEVICE);
++ kfree(*((struct ieee80211_tx_control **) skb->cb));
++ kfree_skb(skb);
++ ring->idx = (ring->idx + 1) % ring->entries;
++ }
++
++ pci_free_consistent(priv->pdev, sizeof(*ring->desc)*ring->entries,
++ ring->desc, ring->dma);
++ ring->desc = NULL;
++}
++
++static int rtl8180_start(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int ret, i;
++ u32 reg;
++
++ ret = rtl8180_init_rx_ring(dev);
++ if (ret)
++ return ret;
++
++ for (i = 0; i < 4; i++)
++ if ((ret = rtl8180_init_tx_ring(dev, i, 16)))
++ goto err_free_rings;
++
++ ret = rtl8180_init_hw(dev);
++ if (ret)
++ goto err_free_rings;
++
++ rtl818x_iowrite32(priv, &priv->map->RDSAR, priv->rx_ring_dma);
++ rtl818x_iowrite32(priv, &priv->map->TBDA, priv->tx_ring[3].dma);
++ rtl818x_iowrite32(priv, &priv->map->THPDA, priv->tx_ring[2].dma);
++ rtl818x_iowrite32(priv, &priv->map->TNPDA, priv->tx_ring[1].dma);
++ rtl818x_iowrite32(priv, &priv->map->TLPDA, priv->tx_ring[0].dma);
++
++ ret = request_irq(priv->pdev->irq, &rtl8180_interrupt,
++ IRQF_SHARED, KBUILD_MODNAME, dev);
++ if (ret) {
++ printk(KERN_ERR "%s: failed to register IRQ handler\n",
++ wiphy_name(dev->wiphy));
++ goto err_free_rings;
++ }
++
++ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0xFFFF);
++
++ rtl818x_iowrite32(priv, &priv->map->MAR[0], ~0);
++ rtl818x_iowrite32(priv, &priv->map->MAR[1], ~0);
++
++ reg = RTL818X_RX_CONF_ONLYERLPKT |
++ RTL818X_RX_CONF_RX_AUTORESETPHY |
++ RTL818X_RX_CONF_MGMT |
++ RTL818X_RX_CONF_DATA |
++ (7 << 8 /* MAX RX DMA */) |
++ RTL818X_RX_CONF_BROADCAST |
++ RTL818X_RX_CONF_NICMAC;
++
++ if (priv->r8185)
++ reg |= RTL818X_RX_CONF_CSDM1 | RTL818X_RX_CONF_CSDM2;
++ else {
++ reg |= (priv->rfparam & RF_PARAM_CARRIERSENSE1)
++ ? RTL818X_RX_CONF_CSDM1 : 0;
++ reg |= (priv->rfparam & RF_PARAM_CARRIERSENSE2)
++ ? RTL818X_RX_CONF_CSDM2 : 0;
++ }
++
++ priv->rx_conf = reg;
++ rtl818x_iowrite32(priv, &priv->map->RX_CONF, reg);
++
++ if (priv->r8185) {
++ reg = rtl818x_ioread8(priv, &priv->map->CW_CONF);
++ reg &= ~RTL818X_CW_CONF_PERPACKET_CW_SHIFT;
++ reg |= RTL818X_CW_CONF_PERPACKET_RETRY_SHIFT;
++ rtl818x_iowrite8(priv, &priv->map->CW_CONF, reg);
++
++ reg = rtl818x_ioread8(priv, &priv->map->TX_AGC_CTL);
++ reg &= ~RTL818X_TX_AGC_CTL_PERPACKET_GAIN_SHIFT;
++ reg &= ~RTL818X_TX_AGC_CTL_PERPACKET_ANTSEL_SHIFT;
++ reg |= RTL818X_TX_AGC_CTL_FEEDBACK_ANT;
++ rtl818x_iowrite8(priv, &priv->map->TX_AGC_CTL, reg);
++
++ /* disable early TX */
++ rtl818x_iowrite8(priv, (u8 __iomem *)priv->map + 0xec, 0x3f);
++ }
++
++ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
++ reg |= (6 << 21 /* MAX TX DMA */) |
++ RTL818X_TX_CONF_NO_ICV;
++
++ if (priv->r8185)
++ reg &= ~RTL818X_TX_CONF_PROBE_DTS;
++ else
++ reg &= ~RTL818X_TX_CONF_HW_SEQNUM;
++
++ /* different meaning, same value on both rtl8185 and rtl8180 */
++ reg &= ~RTL818X_TX_CONF_SAT_HWPLCP;
++
++ rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
++
++ reg = rtl818x_ioread8(priv, &priv->map->CMD);
++ reg |= RTL818X_CMD_RX_ENABLE;
++ reg |= RTL818X_CMD_TX_ENABLE;
++ rtl818x_iowrite8(priv, &priv->map->CMD, reg);
++
++ priv->mode = IEEE80211_IF_TYPE_MNTR;
++ return 0;
++
++ err_free_rings:
++ rtl8180_free_rx_ring(dev);
++ for (i = 0; i < 4; i++)
++ if (priv->tx_ring[i].desc)
++ rtl8180_free_tx_ring(dev, i);
++
++ return ret;
++}
++
++static void rtl8180_stop(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u8 reg;
++ int i;
++
++ priv->mode = IEEE80211_IF_TYPE_INVALID;
++
++ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0);
++
++ reg = rtl818x_ioread8(priv, &priv->map->CMD);
++ reg &= ~RTL818X_CMD_TX_ENABLE;
++ reg &= ~RTL818X_CMD_RX_ENABLE;
++ rtl818x_iowrite8(priv, &priv->map->CMD, reg);
++
++ priv->rf->stop(dev);
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG4);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ free_irq(priv->pdev->irq, dev);
++
++ rtl8180_free_rx_ring(dev);
++ for (i = 0; i < 4; i++)
++ rtl8180_free_tx_ring(dev, i);
++}
++
++static int rtl8180_add_interface(struct ieee80211_hw *dev,
++ struct ieee80211_if_init_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++
++ if (priv->mode != IEEE80211_IF_TYPE_MNTR)
++ return -EOPNOTSUPP;
++
++ switch (conf->type) {
++ case IEEE80211_IF_TYPE_STA:
++ priv->mode = conf->type;
++ break;
++ default:
++ return -EOPNOTSUPP;
++ }
++
++ priv->vif = conf->vif;
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->MAC[0],
++ cpu_to_le32(*(u32 *)conf->mac_addr));
++ rtl818x_iowrite16(priv, (__le16 __iomem *)&priv->map->MAC[4],
++ cpu_to_le16(*(u16 *)(conf->mac_addr + 4)));
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ return 0;
++}
++
++static void rtl8180_remove_interface(struct ieee80211_hw *dev,
++ struct ieee80211_if_init_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ priv->mode = IEEE80211_IF_TYPE_MNTR;
++ priv->vif = NULL;
++}
++
++static int rtl8180_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++
++ priv->rf->set_chan(dev, conf);
++
++ return 0;
++}
++
++static int rtl8180_config_interface(struct ieee80211_hw *dev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_if_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int i;
++
++ for (i = 0; i < ETH_ALEN; i++)
++ rtl818x_iowrite8(priv, &priv->map->BSSID[i], conf->bssid[i]);
++
++ if (is_valid_ether_addr(conf->bssid))
++ rtl818x_iowrite8(priv, &priv->map->MSR, RTL818X_MSR_INFRA);
++ else
++ rtl818x_iowrite8(priv, &priv->map->MSR, RTL818X_MSR_NO_LINK);
++
++ return 0;
++}
++
++static void rtl8180_configure_filter(struct ieee80211_hw *dev,
++ unsigned int changed_flags,
++ unsigned int *total_flags,
++ int mc_count, struct dev_addr_list *mclist)
++{
++ struct rtl8180_priv *priv = dev->priv;
++
++ if (changed_flags & FIF_FCSFAIL)
++ priv->rx_conf ^= RTL818X_RX_CONF_FCS;
++ if (changed_flags & FIF_CONTROL)
++ priv->rx_conf ^= RTL818X_RX_CONF_CTRL;
++ if (changed_flags & FIF_OTHER_BSS)
++ priv->rx_conf ^= RTL818X_RX_CONF_MONITOR;
++ if (*total_flags & FIF_ALLMULTI || mc_count > 0)
++ priv->rx_conf |= RTL818X_RX_CONF_MULTICAST;
++ else
++ priv->rx_conf &= ~RTL818X_RX_CONF_MULTICAST;
++
++ *total_flags = 0;
++
++ if (priv->rx_conf & RTL818X_RX_CONF_FCS)
++ *total_flags |= FIF_FCSFAIL;
++ if (priv->rx_conf & RTL818X_RX_CONF_CTRL)
++ *total_flags |= FIF_CONTROL;
++ if (priv->rx_conf & RTL818X_RX_CONF_MONITOR)
++ *total_flags |= FIF_OTHER_BSS;
++ if (priv->rx_conf & RTL818X_RX_CONF_MULTICAST)
++ *total_flags |= FIF_ALLMULTI;
++
++ rtl818x_iowrite32(priv, &priv->map->RX_CONF, priv->rx_conf);
++}
++
++static const struct ieee80211_ops rtl8180_ops = {
++ .tx = rtl8180_tx,
++ .start = rtl8180_start,
++ .stop = rtl8180_stop,
++ .add_interface = rtl8180_add_interface,
++ .remove_interface = rtl8180_remove_interface,
++ .config = rtl8180_config,
++ .config_interface = rtl8180_config_interface,
++ .configure_filter = rtl8180_configure_filter,
++};
++
++static void rtl8180_eeprom_register_read(struct eeprom_93cx6 *eeprom)
++{
++ struct ieee80211_hw *dev = eeprom->data;
++ struct rtl8180_priv *priv = dev->priv;
++ u8 reg = rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++
++ eeprom->reg_data_in = reg & RTL818X_EEPROM_CMD_WRITE;
++ eeprom->reg_data_out = reg & RTL818X_EEPROM_CMD_READ;
++ eeprom->reg_data_clock = reg & RTL818X_EEPROM_CMD_CK;
++ eeprom->reg_chip_select = reg & RTL818X_EEPROM_CMD_CS;
++}
++
++static void rtl8180_eeprom_register_write(struct eeprom_93cx6 *eeprom)
++{
++ struct ieee80211_hw *dev = eeprom->data;
++ struct rtl8180_priv *priv = dev->priv;
++ u8 reg = 2 << 6;
++
++ if (eeprom->reg_data_in)
++ reg |= RTL818X_EEPROM_CMD_WRITE;
++ if (eeprom->reg_data_out)
++ reg |= RTL818X_EEPROM_CMD_READ;
++ if (eeprom->reg_data_clock)
++ reg |= RTL818X_EEPROM_CMD_CK;
++ if (eeprom->reg_chip_select)
++ reg |= RTL818X_EEPROM_CMD_CS;
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, reg);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(10);
++}
++
++static int __devinit rtl8180_probe(struct pci_dev *pdev,
++ const struct pci_device_id *id)
++{
++ struct ieee80211_hw *dev;
++ struct rtl8180_priv *priv;
++ unsigned long mem_addr, mem_len;
++ unsigned int io_addr, io_len;
++ int err, i;
++ struct eeprom_93cx6 eeprom;
++ const char *chip_name, *rf_name = NULL;
++ u32 reg;
++ u16 eeprom_val;
++ DECLARE_MAC_BUF(mac);
++
++ err = pci_enable_device(pdev);
++ if (err) {
++ printk(KERN_ERR "%s (rtl8180): Cannot enable new PCI device\n",
++ pci_name(pdev));
++ return err;
++ }
++
++ err = pci_request_regions(pdev, KBUILD_MODNAME);
++ if (err) {
++ printk(KERN_ERR "%s (rtl8180): Cannot obtain PCI resources\n",
++ pci_name(pdev));
++ return err;
++ }
++
++ io_addr = pci_resource_start(pdev, 0);
++ io_len = pci_resource_len(pdev, 0);
++ mem_addr = pci_resource_start(pdev, 1);
++ mem_len = pci_resource_len(pdev, 1);
++
++ if (mem_len < sizeof(struct rtl818x_csr) ||
++ io_len < sizeof(struct rtl818x_csr)) {
++ printk(KERN_ERR "%s (rtl8180): Too short PCI resources\n",
++ pci_name(pdev));
++ err = -ENOMEM;
++ goto err_free_reg;
++ }
++
++ if ((err = pci_set_dma_mask(pdev, 0xFFFFFF00ULL)) ||
++ (err = pci_set_consistent_dma_mask(pdev, 0xFFFFFF00ULL))) {
++ printk(KERN_ERR "%s (rtl8180): No suitable DMA available\n",
++ pci_name(pdev));
++ goto err_free_reg;
++ }
++
++ pci_set_master(pdev);
++
++ dev = ieee80211_alloc_hw(sizeof(*priv), &rtl8180_ops);
++ if (!dev) {
++ printk(KERN_ERR "%s (rtl8180): ieee80211 alloc failed\n",
++ pci_name(pdev));
++ err = -ENOMEM;
++ goto err_free_reg;
++ }
++
++ priv = dev->priv;
++ priv->pdev = pdev;
++
++ SET_IEEE80211_DEV(dev, &pdev->dev);
++ pci_set_drvdata(pdev, dev);
++
++ priv->map = pci_iomap(pdev, 1, mem_len);
++ if (!priv->map)
++ priv->map = pci_iomap(pdev, 0, io_len);
++
++ if (!priv->map) {
++ printk(KERN_ERR "%s (rtl8180): Cannot map device memory\n",
++ pci_name(pdev));
++ goto err_free_dev;
++ }
++
++ memcpy(priv->channels, rtl818x_channels, sizeof(rtl818x_channels));
++ memcpy(priv->rates, rtl818x_rates, sizeof(rtl818x_rates));
++ priv->modes[0].mode = MODE_IEEE80211G;
++ priv->modes[0].num_rates = ARRAY_SIZE(rtl818x_rates);
++ priv->modes[0].rates = priv->rates;
++ priv->modes[0].num_channels = ARRAY_SIZE(rtl818x_channels);
++ priv->modes[0].channels = priv->channels;
++ priv->modes[1].mode = MODE_IEEE80211B;
++ priv->modes[1].num_rates = 4;
++ priv->modes[1].rates = priv->rates;
++ priv->modes[1].num_channels = ARRAY_SIZE(rtl818x_channels);
++ priv->modes[1].channels = priv->channels;
++ priv->mode = IEEE80211_IF_TYPE_INVALID;
++ dev->flags = IEEE80211_HW_HOST_BROADCAST_PS_BUFFERING |
++ IEEE80211_HW_RX_INCLUDES_FCS;
++ dev->queues = 1;
++ dev->max_rssi = 65;
++
++ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
++ reg &= RTL818X_TX_CONF_HWVER_MASK;
++ switch (reg) {
++ case RTL818X_TX_CONF_R8180_ABCD:
++ chip_name = "RTL8180";
++ break;
++ case RTL818X_TX_CONF_R8180_F:
++ chip_name = "RTL8180vF";
++ break;
++ case RTL818X_TX_CONF_R8185_ABC:
++ chip_name = "RTL8185";
++ break;
++ case RTL818X_TX_CONF_R8185_D:
++ chip_name = "RTL8185vD";
++ break;
++ default:
++ printk(KERN_ERR "%s (rtl8180): Unknown chip! (0x%x)\n",
++ pci_name(pdev), reg >> 25);
++ goto err_iounmap;
++ }
++
++ priv->r8185 = reg & RTL818X_TX_CONF_R8185_ABC;
++ if (priv->r8185) {
++ if ((err = ieee80211_register_hwmode(dev, &priv->modes[0])))
++ goto err_iounmap;
++
++ pci_try_set_mwi(pdev);
++ }
++
++ if ((err = ieee80211_register_hwmode(dev, &priv->modes[1])))
++ goto err_iounmap;
++
++ eeprom.data = dev;
++ eeprom.register_read = rtl8180_eeprom_register_read;
++ eeprom.register_write = rtl8180_eeprom_register_write;
++ if (rtl818x_ioread32(priv, &priv->map->RX_CONF) & (1 << 6))
++ eeprom.width = PCI_EEPROM_WIDTH_93C66;
++ else
++ eeprom.width = PCI_EEPROM_WIDTH_93C46;
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_PROGRAM);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(10);
++
++ eeprom_93cx6_read(&eeprom, 0x06, &eeprom_val);
++ eeprom_val &= 0xFF;
++ switch (eeprom_val) {
++ case 1: rf_name = "Intersil";
++ break;
++ case 2: rf_name = "RFMD";
++ break;
++ case 3: priv->rf = &sa2400_rf_ops;
++ break;
++ case 4: priv->rf = &max2820_rf_ops;
++ break;
++ case 5: priv->rf = &grf5101_rf_ops;
++ break;
++ case 9: priv->rf = rtl8180_detect_rf(dev);
++ break;
++ case 10:
++ rf_name = "RTL8255";
++ break;
++ default:
++ printk(KERN_ERR "%s (rtl8180): Unknown RF! (0x%x)\n",
++ pci_name(pdev), eeprom_val);
++ goto err_iounmap;
++ }
+
-+ lbs_update_channel(priv);
++ if (!priv->rf) {
++ printk(KERN_ERR "%s (rtl8180): %s RF frontend not supported!\n",
++ pci_name(pdev), rf_name);
++ goto err_iounmap;
++ }
+
-+ /* 5.0.16p0 is known to NOT support any mesh */
-+ if (priv->fwrelease > 0x05001000) {
-+ /* Enable mesh, if supported, and work out which TLV it uses.
-+ 0x100 + 291 is an unofficial value used in 5.110.20.pXX
-+ 0x100 + 37 is the official value used in 5.110.21.pXX
-+ but we check them in that order because 20.pXX doesn't
-+ give an error -- it just silently fails. */
++ eeprom_93cx6_read(&eeprom, 0x17, &eeprom_val);
++ priv->csthreshold = eeprom_val >> 8;
++ if (!priv->r8185) {
++ __le32 anaparam;
++ eeprom_93cx6_multiread(&eeprom, 0xD, (__le16 *)&anaparam, 2);
++ priv->anaparam = le32_to_cpu(anaparam);
++ eeprom_93cx6_read(&eeprom, 0x19, &priv->rfparam);
++ }
+
-+ /* 5.110.20.pXX firmware will fail the command if the channel
-+ doesn't match the existing channel. But only if the TLV
-+ is correct. If the channel is wrong, _BOTH_ versions will
-+ give an error to 0x100+291, and allow 0x100+37 to succeed.
-+ It's just that 5.110.20.pXX will not have done anything
-+ useful */
++ eeprom_93cx6_multiread(&eeprom, 0x7, (__le16 *)dev->wiphy->perm_addr, 3);
++ if (!is_valid_ether_addr(dev->wiphy->perm_addr)) {
++ printk(KERN_WARNING "%s (rtl8180): Invalid hwaddr! Using"
++ " randomly generated MAC addr\n", pci_name(pdev));
++ random_ether_addr(dev->wiphy->perm_addr);
++ }
+
-+ priv->mesh_tlv = 0x100 + 291;
-+ if (lbs_mesh_config(priv, 1, priv->curbssparams.channel)) {
-+ priv->mesh_tlv = 0x100 + 37;
-+ if (lbs_mesh_config(priv, 1, priv->curbssparams.channel))
-+ priv->mesh_tlv = 0;
-+ }
-+ if (priv->mesh_tlv) {
-+ lbs_add_mesh(priv);
-
-- libertas_debugfs_init_one(priv, dev);
-+ if (device_create_file(&dev->dev, &dev_attr_lbs_mesh))
-+ lbs_pr_err("cannot register lbs_mesh attribute\n");
++ /* CCK TX power */
++ for (i = 0; i < 14; i += 2) {
++ u16 txpwr;
++ eeprom_93cx6_read(&eeprom, 0x10 + (i >> 1), &txpwr);
++ priv->channels[i].val = txpwr & 0xFF;
++ priv->channels[i + 1].val = txpwr >> 8;
++ }
++
++ /* OFDM TX power */
++ if (priv->r8185) {
++ for (i = 0; i < 14; i += 2) {
++ u16 txpwr;
++ eeprom_93cx6_read(&eeprom, 0x20 + (i >> 1), &txpwr);
++ priv->channels[i].val |= (txpwr & 0xFF) << 8;
++ priv->channels[i + 1].val |= txpwr & 0xFF00;
+ }
+ }
+
-+ lbs_debugfs_init_one(priv, dev);
-
- lbs_pr_info("%s: Marvell WLAN 802.11 adapter\n", dev->name);
-
-@@ -1277,10 +1253,10 @@ done:
- lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
- return ret;
- }
--EXPORT_SYMBOL_GPL(libertas_start_card);
-+EXPORT_SYMBOL_GPL(lbs_start_card);
-
-
--int libertas_stop_card(wlan_private *priv)
-+int lbs_stop_card(struct lbs_private *priv)
- {
- struct net_device *dev = priv->dev;
- int ret = -1;
-@@ -1292,31 +1268,35 @@ int libertas_stop_card(wlan_private *priv)
- netif_stop_queue(priv->dev);
- netif_carrier_off(priv->dev);
-
-- libertas_debugfs_remove_one(priv);
-+ lbs_debugfs_remove_one(priv);
-+ device_remove_file(&dev->dev, &dev_attr_lbs_rtap);
-+ if (priv->mesh_tlv)
-+ device_remove_file(&dev->dev, &dev_attr_lbs_mesh);
-
- /* Flush pending command nodes */
-- spin_lock_irqsave(&priv->adapter->driver_lock, flags);
-- list_for_each_entry(cmdnode, &priv->adapter->cmdpendingq, list) {
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ list_for_each_entry(cmdnode, &priv->cmdpendingq, list) {
-+ cmdnode->result = -ENOENT;
- cmdnode->cmdwaitqwoken = 1;
- wake_up_interruptible(&cmdnode->cmdwait_q);
- }
-- spin_unlock_irqrestore(&priv->adapter->driver_lock, flags);
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-
- unregister_netdev(dev);
-
- lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
- return ret;
- }
--EXPORT_SYMBOL_GPL(libertas_stop_card);
-+EXPORT_SYMBOL_GPL(lbs_stop_card);
-
-
- /**
- * @brief This function adds mshX interface
- *
-- * @param priv A pointer to the wlan_private structure
-+ * @param priv A pointer to the struct lbs_private structure
- * @return 0 if successful, -X otherwise
- */
--int libertas_add_mesh(wlan_private *priv, struct device *dev)
-+static int lbs_add_mesh(struct lbs_private *priv)
- {
- struct net_device *mesh_dev = NULL;
- int ret = 0;
-@@ -1332,16 +1312,16 @@ int libertas_add_mesh(wlan_private *priv, struct device *dev)
- mesh_dev->priv = priv;
- priv->mesh_dev = mesh_dev;
-
-- mesh_dev->open = libertas_mesh_open;
-- mesh_dev->hard_start_xmit = libertas_mesh_pre_start_xmit;
-- mesh_dev->stop = libertas_mesh_close;
-- mesh_dev->get_stats = libertas_get_stats;
-- mesh_dev->set_mac_address = libertas_set_mac_address;
-- mesh_dev->ethtool_ops = &libertas_ethtool_ops;
-+ mesh_dev->open = lbs_dev_open;
-+ mesh_dev->hard_start_xmit = lbs_hard_start_xmit;
-+ mesh_dev->stop = lbs_mesh_stop;
-+ mesh_dev->get_stats = lbs_get_stats;
-+ mesh_dev->set_mac_address = lbs_set_mac_address;
-+ mesh_dev->ethtool_ops = &lbs_ethtool_ops;
- memcpy(mesh_dev->dev_addr, priv->dev->dev_addr,
- sizeof(priv->dev->dev_addr));
-
-- SET_NETDEV_DEV(priv->mesh_dev, dev);
-+ SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
-
- #ifdef WIRELESS_EXT
- mesh_dev->wireless_handlers = (struct iw_handler_def *)&mesh_handler_def;
-@@ -1353,7 +1333,7 @@ int libertas_add_mesh(wlan_private *priv, struct device *dev)
- goto err_free;
- }
-
-- ret = sysfs_create_group(&(mesh_dev->dev.kobj), &libertas_mesh_attr_group);
-+ ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
- if (ret)
- goto err_unregister;
-
-@@ -1371,33 +1351,28 @@ done:
- lbs_deb_leave_args(LBS_DEB_MESH, "ret %d", ret);
- return ret;
- }
--EXPORT_SYMBOL_GPL(libertas_add_mesh);
-+EXPORT_SYMBOL_GPL(lbs_add_mesh);
-
-
--void libertas_remove_mesh(wlan_private *priv)
-+static void lbs_remove_mesh(struct lbs_private *priv)
- {
- struct net_device *mesh_dev;
-
-- lbs_deb_enter(LBS_DEB_MAIN);
--
-- if (!priv)
-- goto out;
-
- mesh_dev = priv->mesh_dev;
-+ if (!mesh_dev)
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ spin_lock_init(&priv->lock);
++
++ err = ieee80211_register_hw(dev);
++ if (err) {
++ printk(KERN_ERR "%s (rtl8180): Cannot register device\n",
++ pci_name(pdev));
++ goto err_iounmap;
++ }
++
++ printk(KERN_INFO "%s: hwaddr %s, %s + %s\n",
++ wiphy_name(dev->wiphy), print_mac(mac, dev->wiphy->perm_addr),
++ chip_name, priv->rf->name);
++
++ return 0;
++
++ err_iounmap:
++ iounmap(priv->map);
++
++ err_free_dev:
++ pci_set_drvdata(pdev, NULL);
++ ieee80211_free_hw(dev);
++
++ err_free_reg:
++ pci_release_regions(pdev);
++ pci_disable_device(pdev);
++ return err;
++}
++
++static void __devexit rtl8180_remove(struct pci_dev *pdev)
++{
++ struct ieee80211_hw *dev = pci_get_drvdata(pdev);
++ struct rtl8180_priv *priv;
++
++ if (!dev)
+ return;
-
-+ lbs_deb_enter(LBS_DEB_MESH);
- netif_stop_queue(mesh_dev);
- netif_carrier_off(priv->mesh_dev);
--
-- sysfs_remove_group(&(mesh_dev->dev.kobj), &libertas_mesh_attr_group);
-+ sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
- unregister_netdev(mesh_dev);
--
-- priv->mesh_dev = NULL ;
-+ priv->mesh_dev = NULL;
- free_netdev(mesh_dev);
--
--out:
-- lbs_deb_leave(LBS_DEB_MAIN);
-+ lbs_deb_leave(LBS_DEB_MESH);
- }
--EXPORT_SYMBOL_GPL(libertas_remove_mesh);
-+EXPORT_SYMBOL_GPL(lbs_remove_mesh);
-
- /**
- * @brief This function finds the CFP in
-@@ -1408,7 +1383,7 @@ EXPORT_SYMBOL_GPL(libertas_remove_mesh);
- * @param cfp_no A pointer to CFP number
- * @return A pointer to CFP
- */
--struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band, int *cfp_no)
-+struct chan_freq_power *lbs_get_region_cfp_table(u8 region, u8 band, int *cfp_no)
- {
- int i, end;
-
-@@ -1430,9 +1405,8 @@ struct chan_freq_power *libertas_get_region_cfp_table(u8 region, u8 band, int *c
- return NULL;
- }
-
--int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band)
-+int lbs_set_regiontable(struct lbs_private *priv, u8 region, u8 band)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
- int i = 0;
-
-@@ -1441,24 +1415,22 @@ int libertas_set_regiontable(wlan_private * priv, u8 region, u8 band)
-
- lbs_deb_enter(LBS_DEB_MAIN);
-
-- memset(adapter->region_channel, 0, sizeof(adapter->region_channel));
-+ memset(priv->region_channel, 0, sizeof(priv->region_channel));
-
-- {
-- cfp = libertas_get_region_cfp_table(region, band, &cfp_no);
-- if (cfp != NULL) {
-- adapter->region_channel[i].nrcfp = cfp_no;
-- adapter->region_channel[i].CFP = cfp;
-- } else {
-- lbs_deb_main("wrong region code %#x in band B/G\n",
-- region);
-- ret = -1;
-- goto out;
-- }
-- adapter->region_channel[i].valid = 1;
-- adapter->region_channel[i].region = region;
-- adapter->region_channel[i].band = band;
-- i++;
-+ cfp = lbs_get_region_cfp_table(region, band, &cfp_no);
-+ if (cfp != NULL) {
-+ priv->region_channel[i].nrcfp = cfp_no;
-+ priv->region_channel[i].CFP = cfp;
-+ } else {
-+ lbs_deb_main("wrong region code %#x in band B/G\n",
-+ region);
-+ ret = -1;
-+ goto out;
- }
-+ priv->region_channel[i].valid = 1;
-+ priv->region_channel[i].region = region;
-+ priv->region_channel[i].band = band;
-+ i++;
- out:
- lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
- return ret;
-@@ -1472,58 +1444,46 @@ out:
- * @param dev A pointer to net_device structure
- * @return n/a
- */
--void libertas_interrupt(struct net_device *dev)
-+void lbs_interrupt(struct lbs_private *priv)
- {
-- wlan_private *priv = dev->priv;
--
- lbs_deb_enter(LBS_DEB_THREAD);
-
-- lbs_deb_thread("libertas_interrupt: intcounter=%d\n",
-- priv->adapter->intcounter);
--
-- priv->adapter->intcounter++;
--
-- if (priv->adapter->psstate == PS_STATE_SLEEP) {
-- priv->adapter->psstate = PS_STATE_AWAKE;
-- netif_wake_queue(dev);
-- if (priv->mesh_dev)
-- netif_wake_queue(priv->mesh_dev);
-- }
--
-+ lbs_deb_thread("lbs_interrupt: intcounter=%d\n", priv->intcounter);
-+ priv->intcounter++;
-+ if (priv->psstate == PS_STATE_SLEEP)
-+ priv->psstate = PS_STATE_AWAKE;
- wake_up_interruptible(&priv->waitq);
-
- lbs_deb_leave(LBS_DEB_THREAD);
- }
--EXPORT_SYMBOL_GPL(libertas_interrupt);
-+EXPORT_SYMBOL_GPL(lbs_interrupt);
-
--int libertas_reset_device(wlan_private *priv)
-+int lbs_reset_device(struct lbs_private *priv)
- {
- int ret;
-
- lbs_deb_enter(LBS_DEB_MAIN);
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_RESET,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_RESET,
- CMD_ACT_HALT, 0, 0, NULL);
- msleep_interruptible(10);
-
- lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
- return ret;
- }
--EXPORT_SYMBOL_GPL(libertas_reset_device);
-+EXPORT_SYMBOL_GPL(lbs_reset_device);
-
--static int libertas_init_module(void)
-+static int __init lbs_init_module(void)
- {
- lbs_deb_enter(LBS_DEB_MAIN);
-- libertas_debugfs_init();
-+ lbs_debugfs_init();
- lbs_deb_leave(LBS_DEB_MAIN);
- return 0;
- }
-
--static void libertas_exit_module(void)
-+static void __exit lbs_exit_module(void)
- {
- lbs_deb_enter(LBS_DEB_MAIN);
--
-- libertas_debugfs_remove();
--
-+ lbs_debugfs_remove();
- lbs_deb_leave(LBS_DEB_MAIN);
- }
-
-@@ -1531,79 +1491,89 @@ static void libertas_exit_module(void)
- * rtap interface support fuctions
- */
-
--static int libertas_rtap_open(struct net_device *dev)
-+static int lbs_rtap_open(struct net_device *dev)
- {
-- netif_carrier_off(dev);
-- netif_stop_queue(dev);
-- return 0;
-+ /* Yes, _stop_ the queue. Because we don't support injection */
-+ lbs_deb_enter(LBS_DEB_MAIN);
-+ netif_carrier_off(dev);
-+ netif_stop_queue(dev);
-+ lbs_deb_leave(LBS_DEB_LEAVE);
++
++ ieee80211_unregister_hw(dev);
++
++ priv = dev->priv;
++
++ pci_iounmap(pdev, priv->map);
++ pci_release_regions(pdev);
++ pci_disable_device(pdev);
++ ieee80211_free_hw(dev);
++}
++
++#ifdef CONFIG_PM
++static int rtl8180_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++ pci_save_state(pdev);
++ pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ return 0;
- }
-
--static int libertas_rtap_stop(struct net_device *dev)
-+static int lbs_rtap_stop(struct net_device *dev)
- {
-- return 0;
-+ lbs_deb_enter(LBS_DEB_MAIN);
-+ lbs_deb_leave(LBS_DEB_MAIN);
++}
++
++static int rtl8180_resume(struct pci_dev *pdev)
++{
++ pci_set_power_state(pdev, PCI_D0);
++ pci_restore_state(pdev);
+ return 0;
- }
-
--static int libertas_rtap_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
-+static int lbs_rtap_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
- {
-- netif_stop_queue(dev);
-- return -EOPNOTSUPP;
-+ netif_stop_queue(dev);
-+ return NETDEV_TX_BUSY;
- }
-
--static struct net_device_stats *libertas_rtap_get_stats(struct net_device *dev)
-+static struct net_device_stats *lbs_rtap_get_stats(struct net_device *dev)
- {
-- wlan_private *priv = dev->priv;
-- return &priv->ieee->stats;
-+ struct lbs_private *priv = dev->priv;
-+ lbs_deb_enter(LBS_DEB_NET);
-+ return &priv->stats;
- }
-
-
--void libertas_remove_rtap(wlan_private *priv)
-+static void lbs_remove_rtap(struct lbs_private *priv)
- {
-+ lbs_deb_enter(LBS_DEB_MAIN);
- if (priv->rtap_net_dev == NULL)
- return;
- unregister_netdev(priv->rtap_net_dev);
-- free_ieee80211(priv->rtap_net_dev);
-+ free_netdev(priv->rtap_net_dev);
- priv->rtap_net_dev = NULL;
-+ lbs_deb_leave(LBS_DEB_MAIN);
- }
-
--int libertas_add_rtap(wlan_private *priv)
-+static int lbs_add_rtap(struct lbs_private *priv)
- {
-- int rc = 0;
--
-- if (priv->rtap_net_dev)
-- return -EPERM;
--
-- priv->rtap_net_dev = alloc_ieee80211(0);
-- if (priv->rtap_net_dev == NULL)
-- return -ENOMEM;
--
--
-- priv->ieee = netdev_priv(priv->rtap_net_dev);
-+ int ret = 0;
-+ struct net_device *rtap_dev;
-
-- strcpy(priv->rtap_net_dev->name, "rtap%d");
-+ lbs_deb_enter(LBS_DEB_MAIN);
-+ if (priv->rtap_net_dev) {
-+ ret = -EPERM;
-+ goto out;
-+ }
-
-- priv->rtap_net_dev->type = ARPHRD_IEEE80211_RADIOTAP;
-- priv->rtap_net_dev->open = libertas_rtap_open;
-- priv->rtap_net_dev->stop = libertas_rtap_stop;
-- priv->rtap_net_dev->get_stats = libertas_rtap_get_stats;
-- priv->rtap_net_dev->hard_start_xmit = libertas_rtap_hard_start_xmit;
-- priv->rtap_net_dev->set_multicast_list = libertas_set_multicast_list;
-- priv->rtap_net_dev->priv = priv;
-+ rtap_dev = alloc_netdev(0, "rtap%d", ether_setup);
-+ if (rtap_dev == NULL) {
-+ ret = -ENOMEM;
-+ goto out;
-+ }
-
-- priv->ieee->iw_mode = IW_MODE_MONITOR;
-+ memcpy(rtap_dev->dev_addr, priv->current_addr, ETH_ALEN);
-+ rtap_dev->type = ARPHRD_IEEE80211_RADIOTAP;
-+ rtap_dev->open = lbs_rtap_open;
-+ rtap_dev->stop = lbs_rtap_stop;
-+ rtap_dev->get_stats = lbs_rtap_get_stats;
-+ rtap_dev->hard_start_xmit = lbs_rtap_hard_start_xmit;
-+ rtap_dev->set_multicast_list = lbs_set_multicast_list;
-+ rtap_dev->priv = priv;
-
-- rc = register_netdev(priv->rtap_net_dev);
-- if (rc) {
-- free_ieee80211(priv->rtap_net_dev);
-- priv->rtap_net_dev = NULL;
-- return rc;
-+ ret = register_netdev(rtap_dev);
-+ if (ret) {
-+ free_netdev(rtap_dev);
-+ goto out;
- }
-+ priv->rtap_net_dev = rtap_dev;
-
-- return 0;
-+out:
-+ lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
-+ return ret;
- }
-
-
--module_init(libertas_init_module);
--module_exit(libertas_exit_module);
-+module_init(lbs_init_module);
-+module_exit(lbs_exit_module);
-
- MODULE_DESCRIPTION("Libertas WLAN Driver Library");
- MODULE_AUTHOR("Marvell International Ltd.");
-diff --git a/drivers/net/wireless/libertas/rx.c b/drivers/net/wireless/libertas/rx.c
-index 0420e5b..149557a 100644
---- a/drivers/net/wireless/libertas/rx.c
-+++ b/drivers/net/wireless/libertas/rx.c
-@@ -35,134 +35,114 @@ struct rx80211packethdr {
- void *eth80211_hdr;
- } __attribute__ ((packed));
-
--static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb);
-+static int process_rxed_802_11_packet(struct lbs_private *priv,
-+ struct sk_buff *skb);
-
- /**
- * @brief This function computes the avgSNR .
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return avgSNR
- */
--static u8 wlan_getavgsnr(wlan_private * priv)
-+static u8 lbs_getavgsnr(struct lbs_private *priv)
- {
- u8 i;
- u16 temp = 0;
-- wlan_adapter *adapter = priv->adapter;
-- if (adapter->numSNRNF == 0)
-+ if (priv->numSNRNF == 0)
- return 0;
-- for (i = 0; i < adapter->numSNRNF; i++)
-- temp += adapter->rawSNR[i];
-- return (u8) (temp / adapter->numSNRNF);
-+ for (i = 0; i < priv->numSNRNF; i++)
-+ temp += priv->rawSNR[i];
-+ return (u8) (temp / priv->numSNRNF);
-
- }
-
- /**
- * @brief This function computes the AvgNF
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @return AvgNF
- */
--static u8 wlan_getavgnf(wlan_private * priv)
-+static u8 lbs_getavgnf(struct lbs_private *priv)
- {
- u8 i;
- u16 temp = 0;
-- wlan_adapter *adapter = priv->adapter;
-- if (adapter->numSNRNF == 0)
-+ if (priv->numSNRNF == 0)
- return 0;
-- for (i = 0; i < adapter->numSNRNF; i++)
-- temp += adapter->rawNF[i];
-- return (u8) (temp / adapter->numSNRNF);
-+ for (i = 0; i < priv->numSNRNF; i++)
-+ temp += priv->rawNF[i];
-+ return (u8) (temp / priv->numSNRNF);
-
- }
-
- /**
- * @brief This function save the raw SNR/NF to our internel buffer
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param prxpd A pointer to rxpd structure of received packet
- * @return n/a
- */
--static void wlan_save_rawSNRNF(wlan_private * priv, struct rxpd *p_rx_pd)
-+static void lbs_save_rawSNRNF(struct lbs_private *priv, struct rxpd *p_rx_pd)
- {
-- wlan_adapter *adapter = priv->adapter;
-- if (adapter->numSNRNF < DEFAULT_DATA_AVG_FACTOR)
-- adapter->numSNRNF++;
-- adapter->rawSNR[adapter->nextSNRNF] = p_rx_pd->snr;
-- adapter->rawNF[adapter->nextSNRNF] = p_rx_pd->nf;
-- adapter->nextSNRNF++;
-- if (adapter->nextSNRNF >= DEFAULT_DATA_AVG_FACTOR)
-- adapter->nextSNRNF = 0;
-+ if (priv->numSNRNF < DEFAULT_DATA_AVG_FACTOR)
-+ priv->numSNRNF++;
-+ priv->rawSNR[priv->nextSNRNF] = p_rx_pd->snr;
-+ priv->rawNF[priv->nextSNRNF] = p_rx_pd->nf;
-+ priv->nextSNRNF++;
-+ if (priv->nextSNRNF >= DEFAULT_DATA_AVG_FACTOR)
-+ priv->nextSNRNF = 0;
- return;
- }
-
- /**
- * @brief This function computes the RSSI in received packet.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param prxpd A pointer to rxpd structure of received packet
- * @return n/a
- */
--static void wlan_compute_rssi(wlan_private * priv, struct rxpd *p_rx_pd)
-+static void lbs_compute_rssi(struct lbs_private *priv, struct rxpd *p_rx_pd)
- {
-- wlan_adapter *adapter = priv->adapter;
-
- lbs_deb_enter(LBS_DEB_RX);
-
- lbs_deb_rx("rxpd: SNR %d, NF %d\n", p_rx_pd->snr, p_rx_pd->nf);
- lbs_deb_rx("before computing SNR: SNR-avg = %d, NF-avg = %d\n",
-- adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-+ priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-+ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-
-- adapter->SNR[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->snr;
-- adapter->NF[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->nf;
-- wlan_save_rawSNRNF(priv, p_rx_pd);
-+ priv->SNR[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->snr;
-+ priv->NF[TYPE_RXPD][TYPE_NOAVG] = p_rx_pd->nf;
-+ lbs_save_rawSNRNF(priv, p_rx_pd);
-
-- adapter->SNR[TYPE_RXPD][TYPE_AVG] = wlan_getavgsnr(priv) * AVG_SCALE;
-- adapter->NF[TYPE_RXPD][TYPE_AVG] = wlan_getavgnf(priv) * AVG_SCALE;
-+ priv->SNR[TYPE_RXPD][TYPE_AVG] = lbs_getavgsnr(priv) * AVG_SCALE;
-+ priv->NF[TYPE_RXPD][TYPE_AVG] = lbs_getavgnf(priv) * AVG_SCALE;
- lbs_deb_rx("after computing SNR: SNR-avg = %d, NF-avg = %d\n",
-- adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-+ priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-+ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-
-- adapter->RSSI[TYPE_RXPD][TYPE_NOAVG] =
-- CAL_RSSI(adapter->SNR[TYPE_RXPD][TYPE_NOAVG],
-- adapter->NF[TYPE_RXPD][TYPE_NOAVG]);
-+ priv->RSSI[TYPE_RXPD][TYPE_NOAVG] =
-+ CAL_RSSI(priv->SNR[TYPE_RXPD][TYPE_NOAVG],
-+ priv->NF[TYPE_RXPD][TYPE_NOAVG]);
-
-- adapter->RSSI[TYPE_RXPD][TYPE_AVG] =
-- CAL_RSSI(adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-- adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-+ priv->RSSI[TYPE_RXPD][TYPE_AVG] =
-+ CAL_RSSI(priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE,
-+ priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE);
-
- lbs_deb_leave(LBS_DEB_RX);
- }
-
--void libertas_upload_rx_packet(wlan_private * priv, struct sk_buff *skb)
--{
-- lbs_deb_rx("skb->data %p\n", skb->data);
--
-- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-- skb->protocol = eth_type_trans(skb, priv->rtap_net_dev);
-- } else {
-- if (priv->mesh_dev && IS_MESH_FRAME(skb))
-- skb->protocol = eth_type_trans(skb, priv->mesh_dev);
-- else
-- skb->protocol = eth_type_trans(skb, priv->dev);
-- }
-- skb->ip_summed = CHECKSUM_UNNECESSARY;
-- netif_rx(skb);
--}
--
- /**
- * @brief This function processes received packet and forwards it
- * to kernel/upper layer
- *
-- * @param priv A pointer to wlan_private
-+ * @param priv A pointer to struct lbs_private
- * @param skb A pointer to skb which includes the received packet
- * @return 0 or -1
- */
--int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
-+int lbs_process_rxed_packet(struct lbs_private *priv, struct sk_buff *skb)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
--
-+ struct net_device *dev = priv->dev;
- struct rxpackethdr *p_rx_pkt;
- struct rxpd *p_rx_pd;
-
-@@ -173,15 +153,15 @@ int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
-
- lbs_deb_enter(LBS_DEB_RX);
-
-- if (priv->adapter->monitormode != WLAN_MONITOR_OFF)
-+ skb->ip_summed = CHECKSUM_NONE;
++}
+
-+ if (priv->monitormode != LBS_MONITOR_OFF)
- return process_rxed_802_11_packet(priv, skb);
-
- p_rx_pkt = (struct rxpackethdr *) skb->data;
- p_rx_pd = &p_rx_pkt->rx_pd;
-- if (p_rx_pd->rx_control & RxPD_MESH_FRAME)
-- SET_MESH_FRAME(skb);
-- else
-- UNSET_MESH_FRAME(skb);
-+ if (priv->mesh_dev && (p_rx_pd->rx_control & RxPD_MESH_FRAME))
-+ dev = priv->mesh_dev;
-
- lbs_deb_hex(LBS_DEB_RX, "RX Data: Before chop rxpd", skb->data,
- min_t(unsigned int, skb->len, 100));
-@@ -257,23 +237,27 @@ int libertas_process_rxed_packet(wlan_private * priv, struct sk_buff *skb)
- /* Take the data rate from the rxpd structure
- * only if the rate is auto
- */
-- if (adapter->auto_rate)
-- adapter->cur_rate = libertas_fw_index_to_data_rate(p_rx_pd->rx_rate);
-+ if (priv->auto_rate)
-+ priv->cur_rate = lbs_fw_index_to_data_rate(p_rx_pd->rx_rate);
-
-- wlan_compute_rssi(priv, p_rx_pd);
-+ lbs_compute_rssi(priv, p_rx_pd);
-
- lbs_deb_rx("rx data: size of actual packet %d\n", skb->len);
- priv->stats.rx_bytes += skb->len;
- priv->stats.rx_packets++;
-
-- libertas_upload_rx_packet(priv, skb);
-+ skb->protocol = eth_type_trans(skb, dev);
-+ if (in_interrupt())
-+ netif_rx(skb);
-+ else
-+ netif_rx_ni(skb);
-
- ret = 0;
- done:
- lbs_deb_leave_args(LBS_DEB_RX, "ret %d", ret);
- return ret;
- }
--EXPORT_SYMBOL_GPL(libertas_process_rxed_packet);
-+EXPORT_SYMBOL_GPL(lbs_process_rxed_packet);
-
- /**
- * @brief This function converts Tx/Rx rates from the Marvell WLAN format
-@@ -319,13 +303,13 @@ static u8 convert_mv_rate_to_radiotap(u8 rate)
- * @brief This function processes a received 802.11 packet and forwards it
- * to kernel/upper layer
- *
-- * @param priv A pointer to wlan_private
-+ * @param priv A pointer to struct lbs_private
- * @param skb A pointer to skb which includes the received packet
- * @return 0 or -1
- */
--static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
-+static int process_rxed_802_11_packet(struct lbs_private *priv,
-+ struct sk_buff *skb)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = 0;
-
- struct rx80211packethdr *p_rx_pkt;
-@@ -341,9 +325,10 @@ static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
- // lbs_deb_hex(LBS_DEB_RX, "RX Data: Before chop rxpd", skb->data, min(skb->len, 100));
-
- if (skb->len < (ETH_HLEN + 8 + sizeof(struct rxpd))) {
-- lbs_deb_rx("rx err: frame received wit bad length\n");
-+ lbs_deb_rx("rx err: frame received with bad length\n");
- priv->stats.rx_length_errors++;
-- ret = 0;
-+ ret = -EINVAL;
-+ kfree(skb);
- goto done;
- }
-
-@@ -359,65 +344,56 @@ static int process_rxed_802_11_packet(wlan_private * priv, struct sk_buff *skb)
- skb->len, sizeof(struct rxpd), skb->len - sizeof(struct rxpd));
-
- /* create the exported radio header */
-- if(priv->adapter->monitormode == WLAN_MONITOR_OFF) {
-- /* no radio header */
-- /* chop the rxpd */
-- skb_pull(skb, sizeof(struct rxpd));
-- }
-
-- else {
-- /* radiotap header */
-- radiotap_hdr.hdr.it_version = 0;
-- /* XXX must check this value for pad */
-- radiotap_hdr.hdr.it_pad = 0;
-- radiotap_hdr.hdr.it_len = cpu_to_le16 (sizeof(struct rx_radiotap_hdr));
-- radiotap_hdr.hdr.it_present = cpu_to_le32 (RX_RADIOTAP_PRESENT);
-- /* unknown values */
-- radiotap_hdr.flags = 0;
-- radiotap_hdr.chan_freq = 0;
-- radiotap_hdr.chan_flags = 0;
-- radiotap_hdr.antenna = 0;
-- /* known values */
-- radiotap_hdr.rate = convert_mv_rate_to_radiotap(prxpd->rx_rate);
-- /* XXX must check no carryout */
-- radiotap_hdr.antsignal = prxpd->snr + prxpd->nf;
-- radiotap_hdr.rx_flags = 0;
-- if (!(prxpd->status & cpu_to_le16(MRVDRV_RXPD_STATUS_OK)))
-- radiotap_hdr.rx_flags |= IEEE80211_RADIOTAP_F_RX_BADFCS;
-- //memset(radiotap_hdr.pad, 0x11, IEEE80211_RADIOTAP_HDRLEN - 18);
--
-- /* chop the rxpd */
-- skb_pull(skb, sizeof(struct rxpd));
--
-- /* add space for the new radio header */
-- if ((skb_headroom(skb) < sizeof(struct rx_radiotap_hdr)) &&
-- pskb_expand_head(skb, sizeof(struct rx_radiotap_hdr), 0,
-- GFP_ATOMIC)) {
-- lbs_pr_alert("%s: couldn't pskb_expand_head\n",
-- __func__);
-- }
--
-- pradiotap_hdr =
-- (struct rx_radiotap_hdr *)skb_push(skb,
-- sizeof(struct
-- rx_radiotap_hdr));
-- memcpy(pradiotap_hdr, &radiotap_hdr,
-- sizeof(struct rx_radiotap_hdr));
-+ /* radiotap header */
-+ radiotap_hdr.hdr.it_version = 0;
-+ /* XXX must check this value for pad */
-+ radiotap_hdr.hdr.it_pad = 0;
-+ radiotap_hdr.hdr.it_len = cpu_to_le16 (sizeof(struct rx_radiotap_hdr));
-+ radiotap_hdr.hdr.it_present = cpu_to_le32 (RX_RADIOTAP_PRESENT);
-+ /* unknown values */
-+ radiotap_hdr.flags = 0;
-+ radiotap_hdr.chan_freq = 0;
-+ radiotap_hdr.chan_flags = 0;
-+ radiotap_hdr.antenna = 0;
-+ /* known values */
-+ radiotap_hdr.rate = convert_mv_rate_to_radiotap(prxpd->rx_rate);
-+ /* XXX must check no carryout */
-+ radiotap_hdr.antsignal = prxpd->snr + prxpd->nf;
-+ radiotap_hdr.rx_flags = 0;
-+ if (!(prxpd->status & cpu_to_le16(MRVDRV_RXPD_STATUS_OK)))
-+ radiotap_hdr.rx_flags |= IEEE80211_RADIOTAP_F_RX_BADFCS;
-+ //memset(radiotap_hdr.pad, 0x11, IEEE80211_RADIOTAP_HDRLEN - 18);
++#endif /* CONFIG_PM */
+
-+ /* chop the rxpd */
-+ skb_pull(skb, sizeof(struct rxpd));
++static struct pci_driver rtl8180_driver = {
++ .name = KBUILD_MODNAME,
++ .id_table = rtl8180_table,
++ .probe = rtl8180_probe,
++ .remove = __devexit_p(rtl8180_remove),
++#ifdef CONFIG_PM
++ .suspend = rtl8180_suspend,
++ .resume = rtl8180_resume,
++#endif /* CONFIG_PM */
++};
+
-+ /* add space for the new radio header */
-+ if ((skb_headroom(skb) < sizeof(struct rx_radiotap_hdr)) &&
-+ pskb_expand_head(skb, sizeof(struct rx_radiotap_hdr), 0, GFP_ATOMIC)) {
-+ lbs_pr_alert("%s: couldn't pskb_expand_head\n", __func__);
-+ ret = -ENOMEM;
-+ kfree_skb(skb);
-+ goto done;
- }
-
-+ pradiotap_hdr = (void *)skb_push(skb, sizeof(struct rx_radiotap_hdr));
-+ memcpy(pradiotap_hdr, &radiotap_hdr, sizeof(struct rx_radiotap_hdr));
++static int __init rtl8180_init(void)
++{
++ return pci_register_driver(&rtl8180_driver);
++}
+
- /* Take the data rate from the rxpd structure
- * only if the rate is auto
- */
-- if (adapter->auto_rate)
-- adapter->cur_rate = libertas_fw_index_to_data_rate(prxpd->rx_rate);
-+ if (priv->auto_rate)
-+ priv->cur_rate = lbs_fw_index_to_data_rate(prxpd->rx_rate);
-
-- wlan_compute_rssi(priv, prxpd);
-+ lbs_compute_rssi(priv, prxpd);
-
- lbs_deb_rx("rx data: size of actual packet %d\n", skb->len);
- priv->stats.rx_bytes += skb->len;
- priv->stats.rx_packets++;
-
-- libertas_upload_rx_packet(priv, skb);
-+ skb->protocol = eth_type_trans(skb, priv->rtap_net_dev);
-+ netif_rx(skb);
-
- ret = 0;
-
-diff --git a/drivers/net/wireless/libertas/scan.c b/drivers/net/wireless/libertas/scan.c
-index ad1e67d..9a61188 100644
---- a/drivers/net/wireless/libertas/scan.c
-+++ b/drivers/net/wireless/libertas/scan.c
-@@ -39,9 +39,8 @@
- //! Memory needed to store a max number/size SSID TLV for a firmware scan
- #define SSID_TLV_MAX_SIZE (1 * sizeof(struct mrvlietypes_ssidparamset))
-
--//! Maximum memory needed for a wlan_scan_cmd_config with all TLVs at max
--#define MAX_SCAN_CFG_ALLOC (sizeof(struct wlan_scan_cmd_config) \
-- + sizeof(struct mrvlietypes_numprobes) \
-+//! Maximum memory needed for a lbs_scan_cmd_config with all TLVs at max
-+#define MAX_SCAN_CFG_ALLOC (sizeof(struct lbs_scan_cmd_config) \
- + CHAN_TLV_MAX_SIZE \
- + SSID_TLV_MAX_SIZE)
-
-@@ -80,7 +79,23 @@ static inline void clear_bss_descriptor (struct bss_descriptor * bss)
- memset(bss, 0, offsetof(struct bss_descriptor, list));
- }
-
--static inline int match_bss_no_security(struct wlan_802_11_security * secinfo,
-+/**
-+ * @brief Compare two SSIDs
++static void __exit rtl8180_exit(void)
++{
++ pci_unregister_driver(&rtl8180_driver);
++}
++
++module_init(rtl8180_init);
++module_exit(rtl8180_exit);
+diff --git a/drivers/net/wireless/rtl8180_grf5101.c b/drivers/net/wireless/rtl8180_grf5101.c
+new file mode 100644
+index 0000000..8293e19
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_grf5101.c
+@@ -0,0 +1,179 @@
++
++/*
++ * Radio tuning for GCT GRF5101 on RTL8180
+ *
-+ * @param ssid1 A pointer to ssid to compare
-+ * @param ssid2 A pointer to ssid to compare
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
+ *
-+ * @return 0: ssid is same, otherwise is different
++ * Code from the BSD driver and the rtl8181 project has been
++ * very useful to understand certain things
++ *
++ * I want to thank the authors of such projects and the Ndiswrapper
++ * project authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
+ */
-+int lbs_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
++
++#include <linux/init.h>
++#include <linux/pci.h>
++#include <linux/delay.h>
++#include <net/mac80211.h>
++
++#include "rtl8180.h"
++#include "rtl8180_grf5101.h"
++
++static const int grf5101_encode[] = {
++ 0x0, 0x8, 0x4, 0xC,
++ 0x2, 0xA, 0x6, 0xE,
++ 0x1, 0x9, 0x5, 0xD,
++ 0x3, 0xB, 0x7, 0xF
++};
++
++static void write_grf5101(struct ieee80211_hw *dev, u8 addr, u32 data)
+{
-+ if (ssid1_len != ssid2_len)
-+ return -1;
++ struct rtl8180_priv *priv = dev->priv;
++ u32 phy_config;
+
-+ return memcmp(ssid1, ssid2, ssid1_len);
++ phy_config = grf5101_encode[(data >> 8) & 0xF];
++ phy_config |= grf5101_encode[(data >> 4) & 0xF] << 4;
++ phy_config |= grf5101_encode[data & 0xF] << 8;
++ phy_config |= grf5101_encode[(addr >> 1) & 0xF] << 12;
++ phy_config |= (addr & 1) << 16;
++ phy_config |= grf5101_encode[(data & 0xf000) >> 12] << 24;
++
++ /* MAC will bang bits to the chip */
++ phy_config |= 0x90000000;
++
++ rtl818x_iowrite32(priv,
++ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
++
++ msleep(3);
+}
+
-+static inline int match_bss_no_security(struct lbs_802_11_security *secinfo,
- struct bss_descriptor * match_bss)
- {
- if ( !secinfo->wep_enabled
-@@ -94,7 +109,7 @@ static inline int match_bss_no_security(struct wlan_802_11_security * secinfo,
- return 0;
- }
-
--static inline int match_bss_static_wep(struct wlan_802_11_security * secinfo,
-+static inline int match_bss_static_wep(struct lbs_802_11_security *secinfo,
- struct bss_descriptor * match_bss)
- {
- if ( secinfo->wep_enabled
-@@ -106,7 +121,7 @@ static inline int match_bss_static_wep(struct wlan_802_11_security * secinfo,
- return 0;
- }
-
--static inline int match_bss_wpa(struct wlan_802_11_security * secinfo,
-+static inline int match_bss_wpa(struct lbs_802_11_security *secinfo,
- struct bss_descriptor * match_bss)
- {
- if ( !secinfo->wep_enabled
-@@ -121,7 +136,7 @@ static inline int match_bss_wpa(struct wlan_802_11_security * secinfo,
- return 0;
- }
-
--static inline int match_bss_wpa2(struct wlan_802_11_security * secinfo,
-+static inline int match_bss_wpa2(struct lbs_802_11_security *secinfo,
- struct bss_descriptor * match_bss)
- {
- if ( !secinfo->wep_enabled
-@@ -136,7 +151,7 @@ static inline int match_bss_wpa2(struct wlan_802_11_security * secinfo,
- return 0;
- }
-
--static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
-+static inline int match_bss_dynamic_wep(struct lbs_802_11_security *secinfo,
- struct bss_descriptor * match_bss)
- {
- if ( !secinfo->wep_enabled
-@@ -150,6 +165,18 @@ static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
- return 0;
- }
-
-+static inline int is_same_network(struct bss_descriptor *src,
-+ struct bss_descriptor *dst)
++static void grf5101_write_phy_antenna(struct ieee80211_hw *dev, short chan)
+{
-+ /* A network is only a duplicate if the channel, BSSID, and ESSID
-+ * all match. We treat all <hidden> with the same BSSID and channel
-+ * as one network */
-+ return ((src->ssid_len == dst->ssid_len) &&
-+ (src->channel == dst->channel) &&
-+ !compare_ether_addr(src->bssid, dst->bssid) &&
-+ !memcmp(src->ssid, dst->ssid, src->ssid_len));
++ struct rtl8180_priv *priv = dev->priv;
++ u8 ant = GRF5101_ANTENNA;
++
++ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
++ ant |= BB_ANTENNA_B;
++
++ if (chan == 14)
++ ant |= BB_ANTATTEN_CHAN14;
++
++ rtl8180_write_phy(dev, 0x10, ant);
+}
+
- /**
- * @brief Check if a scanned network compatible with the driver settings
- *
-@@ -163,13 +190,13 @@ static inline int match_bss_dynamic_wep(struct wlan_802_11_security * secinfo,
- * 0 0 0 0 !=NONE 1 0 0 yes Dynamic WEP
- *
- *
-- * @param adapter A pointer to wlan_adapter
-+ * @param priv A pointer to struct lbs_private
- * @param index Index in scantable to check against current driver settings
- * @param mode Network mode: Infrastructure or IBSS
- *
- * @return Index in scantable, or error code if negative
- */
--static int is_network_compatible(wlan_adapter * adapter,
-+static int is_network_compatible(struct lbs_private *priv,
- struct bss_descriptor * bss, u8 mode)
- {
- int matched = 0;
-@@ -179,34 +206,34 @@ static int is_network_compatible(wlan_adapter * adapter,
- if (bss->mode != mode)
- goto done;
-
-- if ((matched = match_bss_no_security(&adapter->secinfo, bss))) {
-+ if ((matched = match_bss_no_security(&priv->secinfo, bss))) {
- goto done;
-- } else if ((matched = match_bss_static_wep(&adapter->secinfo, bss))) {
-+ } else if ((matched = match_bss_static_wep(&priv->secinfo, bss))) {
- goto done;
-- } else if ((matched = match_bss_wpa(&adapter->secinfo, bss))) {
-+ } else if ((matched = match_bss_wpa(&priv->secinfo, bss))) {
- lbs_deb_scan(
-- "is_network_compatible() WPA: wpa_ie=%#x "
-- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s "
-- "privacy=%#x\n", bss->wpa_ie[0], bss->rsn_ie[0],
-- adapter->secinfo.wep_enabled ? "e" : "d",
-- adapter->secinfo.WPAenabled ? "e" : "d",
-- adapter->secinfo.WPA2enabled ? "e" : "d",
-+ "is_network_compatible() WPA: wpa_ie 0x%x "
-+ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s "
-+ "privacy 0x%x\n", bss->wpa_ie[0], bss->rsn_ie[0],
-+ priv->secinfo.wep_enabled ? "e" : "d",
-+ priv->secinfo.WPAenabled ? "e" : "d",
-+ priv->secinfo.WPA2enabled ? "e" : "d",
- (bss->capability & WLAN_CAPABILITY_PRIVACY));
- goto done;
-- } else if ((matched = match_bss_wpa2(&adapter->secinfo, bss))) {
-+ } else if ((matched = match_bss_wpa2(&priv->secinfo, bss))) {
- lbs_deb_scan(
-- "is_network_compatible() WPA2: wpa_ie=%#x "
-- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s "
-- "privacy=%#x\n", bss->wpa_ie[0], bss->rsn_ie[0],
-- adapter->secinfo.wep_enabled ? "e" : "d",
-- adapter->secinfo.WPAenabled ? "e" : "d",
-- adapter->secinfo.WPA2enabled ? "e" : "d",
-+ "is_network_compatible() WPA2: wpa_ie 0x%x "
-+ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s "
-+ "privacy 0x%x\n", bss->wpa_ie[0], bss->rsn_ie[0],
-+ priv->secinfo.wep_enabled ? "e" : "d",
-+ priv->secinfo.WPAenabled ? "e" : "d",
-+ priv->secinfo.WPA2enabled ? "e" : "d",
- (bss->capability & WLAN_CAPABILITY_PRIVACY));
- goto done;
-- } else if ((matched = match_bss_dynamic_wep(&adapter->secinfo, bss))) {
-+ } else if ((matched = match_bss_dynamic_wep(&priv->secinfo, bss))) {
- lbs_deb_scan(
- "is_network_compatible() dynamic WEP: "
-- "wpa_ie=%#x wpa2_ie=%#x privacy=%#x\n",
-+ "wpa_ie 0x%x wpa2_ie 0x%x privacy 0x%x\n",
- bss->wpa_ie[0], bss->rsn_ie[0],
- (bss->capability & WLAN_CAPABILITY_PRIVACY));
- goto done;
-@@ -214,12 +241,12 @@ static int is_network_compatible(wlan_adapter * adapter,
-
- /* bss security settings don't match those configured on card */
- lbs_deb_scan(
-- "is_network_compatible() FAILED: wpa_ie=%#x "
-- "wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s privacy=%#x\n",
-+ "is_network_compatible() FAILED: wpa_ie 0x%x "
-+ "wpa2_ie 0x%x WEP %s WPA %s WPA2 %s privacy 0x%x\n",
- bss->wpa_ie[0], bss->rsn_ie[0],
-- adapter->secinfo.wep_enabled ? "e" : "d",
-- adapter->secinfo.WPAenabled ? "e" : "d",
-- adapter->secinfo.WPA2enabled ? "e" : "d",
-+ priv->secinfo.wep_enabled ? "e" : "d",
-+ priv->secinfo.WPAenabled ? "e" : "d",
-+ priv->secinfo.WPA2enabled ? "e" : "d",
- (bss->capability & WLAN_CAPABILITY_PRIVACY));
-
- done:
-@@ -227,22 +254,6 @@ done:
- return matched;
- }
-
--/**
-- * @brief Compare two SSIDs
-- *
-- * @param ssid1 A pointer to ssid to compare
-- * @param ssid2 A pointer to ssid to compare
-- *
-- * @return 0--ssid is same, otherwise is different
-- */
--int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
--{
-- if (ssid1_len != ssid2_len)
-- return -1;
--
-- return memcmp(ssid1, ssid2, ssid1_len);
--}
--
-
-
-
-@@ -252,17 +263,27 @@ int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
- /* */
- /*********************************************************************/
-
-+void lbs_scan_worker(struct work_struct *work)
++static void grf5101_rf_set_channel(struct ieee80211_hw *dev,
++ struct ieee80211_conf *conf)
+{
-+ struct lbs_private *priv =
-+ container_of(work, struct lbs_private, scan_work.work);
++ struct rtl8180_priv *priv = dev->priv;
++ u32 txpw = priv->channels[conf->channel - 1].val & 0xFF;
++ u32 chan = conf->channel - 1;
+
-+ lbs_deb_enter(LBS_DEB_SCAN);
-+ lbs_scan_networks(priv, NULL, 0);
-+ lbs_deb_leave(LBS_DEB_SCAN);
++ /* set TX power */
++ write_grf5101(dev, 0x15, 0x0);
++ write_grf5101(dev, 0x06, txpw);
++ write_grf5101(dev, 0x15, 0x10);
++ write_grf5101(dev, 0x15, 0x0);
++
++ /* set frequency */
++ write_grf5101(dev, 0x07, 0x0);
++ write_grf5101(dev, 0x0B, chan);
++ write_grf5101(dev, 0x07, 0x1000);
++
++ grf5101_write_phy_antenna(dev, chan);
+}
+
-
- /**
- * @brief Create a channel list for the driver to scan based on region info
- *
-- * Only used from wlan_scan_setup_scan_config()
-+ * Only used from lbs_scan_setup_scan_config()
- *
- * Use the driver region/band information to construct a comprehensive list
- * of channels to scan. This routine is used for any scan that is not
- * provided a specific channel list to scan.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param scanchanlist Output parameter: resulting channel list to scan
- * @param filteredscan Flag indicating whether or not a BSSID or SSID filter
- * is being sent in the command to firmware. Used to
-@@ -272,12 +293,11 @@ int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len)
- *
- * @return void
- */
--static void wlan_scan_create_channel_list(wlan_private * priv,
-+static int lbs_scan_create_channel_list(struct lbs_private *priv,
- struct chanscanparamset * scanchanlist,
- u8 filteredscan)
- {
-
-- wlan_adapter *adapter = priv->adapter;
- struct region_channel *scanregion;
- struct chan_freq_power *cfp;
- int rgnidx;
-@@ -285,8 +305,6 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
- int nextchan;
- u8 scantype;
-
-- lbs_deb_enter_args(LBS_DEB_SCAN, "filteredscan %d", filteredscan);
--
- chanidx = 0;
-
- /* Set the default scan type to the user specified type, will later
-@@ -295,21 +313,22 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
- */
- scantype = CMD_SCAN_TYPE_ACTIVE;
-
-- for (rgnidx = 0; rgnidx < ARRAY_SIZE(adapter->region_channel); rgnidx++) {
-- if (priv->adapter->enable11d &&
-- adapter->connect_status != LIBERTAS_CONNECTED) {
-+ for (rgnidx = 0; rgnidx < ARRAY_SIZE(priv->region_channel); rgnidx++) {
-+ if (priv->enable11d &&
-+ (priv->connect_status != LBS_CONNECTED) &&
-+ (priv->mesh_connect_status != LBS_CONNECTED)) {
- /* Scan all the supported chan for the first scan */
-- if (!adapter->universal_channel[rgnidx].valid)
-+ if (!priv->universal_channel[rgnidx].valid)
- continue;
-- scanregion = &adapter->universal_channel[rgnidx];
-+ scanregion = &priv->universal_channel[rgnidx];
-
- /* clear the parsed_region_chan for the first scan */
-- memset(&adapter->parsed_region_chan, 0x00,
-- sizeof(adapter->parsed_region_chan));
-+ memset(&priv->parsed_region_chan, 0x00,
-+ sizeof(priv->parsed_region_chan));
- } else {
-- if (!adapter->region_channel[rgnidx].valid)
-+ if (!priv->region_channel[rgnidx].valid)
- continue;
-- scanregion = &adapter->region_channel[rgnidx];
-+ scanregion = &priv->region_channel[rgnidx];
- }
-
- for (nextchan = 0;
-@@ -317,10 +336,10 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
-
- cfp = scanregion->CFP + nextchan;
-
-- if (priv->adapter->enable11d) {
-+ if (priv->enable11d) {
- scantype =
-- libertas_get_scan_type_11d(cfp->channel,
-- &adapter->
-+ lbs_get_scan_type_11d(cfp->channel,
-+ &priv->
- parsed_region_chan);
- }
-
-@@ -353,453 +372,151 @@ static void wlan_scan_create_channel_list(wlan_private * priv,
- }
- }
- }
-+ return chanidx;
- }
-
-
--/* Delayed partial scan worker */
--void libertas_scan_worker(struct work_struct *work)
++static void grf5101_rf_stop(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u32 anaparam;
++
++ anaparam = priv->anaparam;
++ anaparam &= 0x000fffff;
++ anaparam |= 0x3f900000;
++ rtl8180_set_anaparam(priv, anaparam);
++
++ write_grf5101(dev, 0x07, 0x0);
++ write_grf5101(dev, 0x1f, 0x45);
++ write_grf5101(dev, 0x1f, 0x5);
++ write_grf5101(dev, 0x00, 0x8e4);
++}
++
++static void grf5101_rf_init(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++
++ rtl8180_set_anaparam(priv, priv->anaparam);
++
++ write_grf5101(dev, 0x1f, 0x0);
++ write_grf5101(dev, 0x1f, 0x0);
++ write_grf5101(dev, 0x1f, 0x40);
++ write_grf5101(dev, 0x1f, 0x60);
++ write_grf5101(dev, 0x1f, 0x61);
++ write_grf5101(dev, 0x1f, 0x61);
++ write_grf5101(dev, 0x00, 0xae4);
++ write_grf5101(dev, 0x1f, 0x1);
++ write_grf5101(dev, 0x1f, 0x41);
++ write_grf5101(dev, 0x1f, 0x61);
++
++ write_grf5101(dev, 0x01, 0x1a23);
++ write_grf5101(dev, 0x02, 0x4971);
++ write_grf5101(dev, 0x03, 0x41de);
++ write_grf5101(dev, 0x04, 0x2d80);
++ write_grf5101(dev, 0x05, 0x68ff); /* 0x61ff original value */
++ write_grf5101(dev, 0x06, 0x0);
++ write_grf5101(dev, 0x07, 0x0);
++ write_grf5101(dev, 0x08, 0x7533);
++ write_grf5101(dev, 0x09, 0xc401);
++ write_grf5101(dev, 0x0a, 0x0);
++ write_grf5101(dev, 0x0c, 0x1c7);
++ write_grf5101(dev, 0x0d, 0x29d3);
++ write_grf5101(dev, 0x0e, 0x2e8);
++ write_grf5101(dev, 0x10, 0x192);
++ write_grf5101(dev, 0x11, 0x248);
++ write_grf5101(dev, 0x12, 0x0);
++ write_grf5101(dev, 0x13, 0x20c4);
++ write_grf5101(dev, 0x14, 0xf4fc);
++ write_grf5101(dev, 0x15, 0x0);
++ write_grf5101(dev, 0x16, 0x1500);
++
++ write_grf5101(dev, 0x07, 0x1000);
++
++ /* baseband configuration */
++ rtl8180_write_phy(dev, 0, 0xa8);
++ rtl8180_write_phy(dev, 3, 0x0);
++ rtl8180_write_phy(dev, 4, 0xc0);
++ rtl8180_write_phy(dev, 5, 0x90);
++ rtl8180_write_phy(dev, 6, 0x1e);
++ rtl8180_write_phy(dev, 7, 0x64);
++
++ grf5101_write_phy_antenna(dev, 1);
++
++ rtl8180_write_phy(dev, 0x11, 0x88);
++
++ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
++ RTL818X_CONFIG2_ANTENNA_DIV)
++ rtl8180_write_phy(dev, 0x12, 0xc0); /* enable ant diversity */
++ else
++ rtl8180_write_phy(dev, 0x12, 0x40); /* disable ant diversity */
++
++ rtl8180_write_phy(dev, 0x13, 0x90 | priv->csthreshold);
++
++ rtl8180_write_phy(dev, 0x19, 0x0);
++ rtl8180_write_phy(dev, 0x1a, 0xa0);
++ rtl8180_write_phy(dev, 0x1b, 0x44);
++}
++
++const struct rtl818x_rf_ops grf5101_rf_ops = {
++ .name = "GCT",
++ .init = grf5101_rf_init,
++ .stop = grf5101_rf_stop,
++ .set_chan = grf5101_rf_set_channel
++};
+diff --git a/drivers/net/wireless/rtl8180_grf5101.h b/drivers/net/wireless/rtl8180_grf5101.h
+new file mode 100644
+index 0000000..7664711
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_grf5101.h
+@@ -0,0 +1,28 @@
++#ifndef RTL8180_GRF5101_H
++#define RTL8180_GRF5101_H
++
+/*
-+ * Add SSID TLV of the form:
++ * Radio tuning for GCT GRF5101 on RTL8180
+ *
-+ * TLV-ID SSID 00 00
-+ * length 06 00
-+ * ssid 4d 4e 54 45 53 54
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Code from the BSD driver and the rtl8181 project have been
++ * very useful to understand certain things
++ *
++ * I want to thanks the Authors of such projects and the Ndiswrapper
++ * project Authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
+ */
-+static int lbs_scan_add_ssid_tlv(u8 *tlv,
-+ const struct lbs_ioctl_user_scan_cfg *user_cfg)
- {
-- wlan_private *priv = container_of(work, wlan_private, scan_work.work);
--
-- wlan_scan_networks(priv, NULL, 0);
-+ struct mrvlietypes_ssidparamset *ssid_tlv =
-+ (struct mrvlietypes_ssidparamset *)tlv;
-+ ssid_tlv->header.type = cpu_to_le16(TLV_TYPE_SSID);
-+ ssid_tlv->header.len = cpu_to_le16(user_cfg->ssid_len);
-+ memcpy(ssid_tlv->ssid, user_cfg->ssid, user_cfg->ssid_len);
-+ return sizeof(ssid_tlv->header) + user_cfg->ssid_len;
- }
-
-
--/**
-- * @brief Construct a wlan_scan_cmd_config structure to use in issue scan cmds
-- *
-- * Application layer or other functions can invoke wlan_scan_networks
-- * with a scan configuration supplied in a wlan_ioctl_user_scan_cfg struct.
-- * This structure is used as the basis of one or many wlan_scan_cmd_config
-- * commands that are sent to the command processing module and sent to
-- * firmware.
-- *
-- * Create a wlan_scan_cmd_config based on the following user supplied
-- * parameters (if present):
-- * - SSID filter
-- * - BSSID filter
-- * - Number of Probes to be sent
-- * - channel list
-- *
-- * If the SSID or BSSID filter is not present, disable/clear the filter.
-- * If the number of probes is not set, use the adapter default setting
-- * Qualify the channel
++
++#define GRF5101_ANTENNA 0xA3
++
++extern const struct rtl818x_rf_ops grf5101_rf_ops;
++
++#endif /* RTL8180_GRF5101_H */
+diff --git a/drivers/net/wireless/rtl8180_max2820.c b/drivers/net/wireless/rtl8180_max2820.c
+new file mode 100644
+index 0000000..98fe9fd
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_max2820.c
+@@ -0,0 +1,150 @@
+/*
-+ * Add CHANLIST TLV of the form
- *
-- * @param priv A pointer to wlan_private structure
-- * @param puserscanin NULL or pointer to scan configuration parameters
-- * @param ppchantlvout Output parameter: Pointer to the start of the
-- * channel TLV portion of the output scan config
-- * @param pscanchanlist Output parameter: Pointer to the resulting channel
-- * list to scan
-- * @param pmaxchanperscan Output parameter: Number of channels to scan for
-- * each issuance of the firmware scan command
-- * @param pfilteredscan Output parameter: Flag indicating whether or not
-- * a BSSID or SSID filter is being sent in the
-- * command to firmware. Used to increase the number
-- * of channels sent in a scan command and to
-- * disable the firmware channel scan filter.
-- * @param pscancurrentonly Output parameter: Flag indicating whether or not
-- * we are only scanning our current active channel
-+ * TLV-ID CHANLIST 01 01
-+ * length 5b 00
-+ * channel 1 00 01 00 00 00 64 00
-+ * radio type 00
-+ * channel 01
-+ * scan type 00
-+ * min scan time 00 00
-+ * max scan time 64 00
-+ * channel 2 00 02 00 00 00 64 00
-+ * channel 3 00 03 00 00 00 64 00
-+ * channel 4 00 04 00 00 00 64 00
-+ * channel 5 00 05 00 00 00 64 00
-+ * channel 6 00 06 00 00 00 64 00
-+ * channel 7 00 07 00 00 00 64 00
-+ * channel 8 00 08 00 00 00 64 00
-+ * channel 9 00 09 00 00 00 64 00
-+ * channel 10 00 0a 00 00 00 64 00
-+ * channel 11 00 0b 00 00 00 64 00
-+ * channel 12 00 0c 00 00 00 64 00
-+ * channel 13 00 0d 00 00 00 64 00
- *
-- * @return resulting scan configuration
- */
--static struct wlan_scan_cmd_config *
--wlan_scan_setup_scan_config(wlan_private * priv,
-- const struct wlan_ioctl_user_scan_cfg * puserscanin,
-- struct mrvlietypes_chanlistparamset ** ppchantlvout,
-- struct chanscanparamset * pscanchanlist,
-- int *pmaxchanperscan,
-- u8 * pfilteredscan,
-- u8 * pscancurrentonly)
-+static int lbs_scan_add_chanlist_tlv(u8 *tlv,
-+ struct chanscanparamset *chan_list,
-+ int chan_count)
- {
-- struct mrvlietypes_numprobes *pnumprobestlv;
-- struct mrvlietypes_ssidparamset *pssidtlv;
-- struct wlan_scan_cmd_config * pscancfgout = NULL;
-- u8 *ptlvpos;
-- u16 numprobes;
-- int chanidx;
-- int scantype;
-- int scandur;
-- int channel;
-- int radiotype;
--
-- lbs_deb_enter(LBS_DEB_SCAN);
--
-- pscancfgout = kzalloc(MAX_SCAN_CFG_ALLOC, GFP_KERNEL);
-- if (pscancfgout == NULL)
-- goto out;
--
-- /* The tlvbufferlen is calculated for each scan command. The TLVs added
-- * in this routine will be preserved since the routine that sends
-- * the command will append channelTLVs at *ppchantlvout. The difference
-- * between the *ppchantlvout and the tlvbuffer start will be used
-- * to calculate the size of anything we add in this routine.
-- */
-- pscancfgout->tlvbufferlen = 0;
--
-- /* Running tlv pointer. Assigned to ppchantlvout at end of function
-- * so later routines know where channels can be added to the command buf
-- */
-- ptlvpos = pscancfgout->tlvbuffer;
--
-- /*
-- * Set the initial scan paramters for progressive scanning. If a specific
-- * BSSID or SSID is used, the number of channels in the scan command
-- * will be increased to the absolute maximum
-- */
-- *pmaxchanperscan = MRVDRV_CHANNELS_PER_SCAN_CMD;
--
-- /* Initialize the scan as un-filtered by firmware, set to TRUE below if
-- * a SSID or BSSID filter is sent in the command
-- */
-- *pfilteredscan = 0;
--
-- /* Initialize the scan as not being only on the current channel. If
-- * the channel list is customized, only contains one channel, and
-- * is the active channel, this is set true and data flow is not halted.
-- */
-- *pscancurrentonly = 0;
--
-- if (puserscanin) {
-- /* Set the bss type scan filter, use adapter setting if unset */
-- pscancfgout->bsstype =
-- puserscanin->bsstype ? puserscanin->bsstype : CMD_BSS_TYPE_ANY;
--
-- /* Set the number of probes to send, use adapter setting if unset */
-- numprobes = puserscanin->numprobes ? puserscanin->numprobes : 0;
--
-- /*
-- * Set the BSSID filter to the incoming configuration,
-- * if non-zero. If not set, it will remain disabled (all zeros).
-- */
-- memcpy(pscancfgout->bssid, puserscanin->bssid,
-- sizeof(pscancfgout->bssid));
--
-- if (puserscanin->ssid_len) {
-- pssidtlv =
-- (struct mrvlietypes_ssidparamset *) pscancfgout->
-- tlvbuffer;
-- pssidtlv->header.type = cpu_to_le16(TLV_TYPE_SSID);
-- pssidtlv->header.len = cpu_to_le16(puserscanin->ssid_len);
-- memcpy(pssidtlv->ssid, puserscanin->ssid,
-- puserscanin->ssid_len);
-- ptlvpos += sizeof(pssidtlv->header) + puserscanin->ssid_len;
-- }
--
-- /*
-- * The default number of channels sent in the command is low to
-- * ensure the response buffer from the firmware does not truncate
-- * scan results. That is not an issue with an SSID or BSSID
-- * filter applied to the scan results in the firmware.
-- */
-- if ( puserscanin->ssid_len
-- || (compare_ether_addr(pscancfgout->bssid, &zeromac[0]) != 0)) {
-- *pmaxchanperscan = MRVDRV_MAX_CHANNELS_PER_SCAN;
-- *pfilteredscan = 1;
-- }
-- } else {
-- pscancfgout->bsstype = CMD_BSS_TYPE_ANY;
-- numprobes = 0;
-- }
--
-- /* If the input config or adapter has the number of Probes set, add tlv */
-- if (numprobes) {
-- pnumprobestlv = (struct mrvlietypes_numprobes *) ptlvpos;
-- pnumprobestlv->header.type = cpu_to_le16(TLV_TYPE_NUMPROBES);
-- pnumprobestlv->header.len = cpu_to_le16(2);
-- pnumprobestlv->numprobes = cpu_to_le16(numprobes);
--
-- ptlvpos += sizeof(*pnumprobestlv);
-- }
--
-- /*
-- * Set the output for the channel TLV to the address in the tlv buffer
-- * past any TLVs that were added in this fuction (SSID, numprobes).
-- * channel TLVs will be added past this for each scan command, preserving
-- * the TLVs that were previously added.
-- */
-- *ppchantlvout = (struct mrvlietypes_chanlistparamset *) ptlvpos;
--
-- if (!puserscanin || !puserscanin->chanlist[0].channumber) {
-- /* Create a default channel scan list */
-- lbs_deb_scan("creating full region channel list\n");
-- wlan_scan_create_channel_list(priv, pscanchanlist,
-- *pfilteredscan);
-- goto out;
-- }
--
-- for (chanidx = 0;
-- chanidx < WLAN_IOCTL_USER_SCAN_CHAN_MAX
-- && puserscanin->chanlist[chanidx].channumber; chanidx++) {
--
-- channel = puserscanin->chanlist[chanidx].channumber;
-- (pscanchanlist + chanidx)->channumber = channel;
--
-- radiotype = puserscanin->chanlist[chanidx].radiotype;
-- (pscanchanlist + chanidx)->radiotype = radiotype;
--
-- scantype = puserscanin->chanlist[chanidx].scantype;
--
-- if (scantype == CMD_SCAN_TYPE_PASSIVE) {
-- (pscanchanlist +
-- chanidx)->chanscanmode.passivescan = 1;
-- } else {
-- (pscanchanlist +
-- chanidx)->chanscanmode.passivescan = 0;
-- }
--
-- if (puserscanin->chanlist[chanidx].scantime) {
-- scandur = puserscanin->chanlist[chanidx].scantime;
-- } else {
-- if (scantype == CMD_SCAN_TYPE_PASSIVE) {
-- scandur = MRVDRV_PASSIVE_SCAN_CHAN_TIME;
-- } else {
-- scandur = MRVDRV_ACTIVE_SCAN_CHAN_TIME;
-- }
-- }
--
-- (pscanchanlist + chanidx)->minscantime =
-- cpu_to_le16(scandur);
-- (pscanchanlist + chanidx)->maxscantime =
-- cpu_to_le16(scandur);
-- }
--
-- /* Check if we are only scanning the current channel */
-- if ((chanidx == 1) &&
-- (puserscanin->chanlist[0].channumber ==
-- priv->adapter->curbssparams.channel)) {
-- *pscancurrentonly = 1;
-- lbs_deb_scan("scanning current channel only");
-- }
--
--out:
-- return pscancfgout;
-+ size_t size = sizeof(struct chanscanparamset) * chan_count;
-+ struct mrvlietypes_chanlistparamset *chan_tlv =
-+ (struct mrvlietypes_chanlistparamset *) tlv;
++ * Radio tuning for Maxim max2820 on RTL8180
++ *
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Code from the BSD driver and the rtl8181 project have been
++ * very useful to understand certain things
++ *
++ * I want to thanks the Authors of such projects and the Ndiswrapper
++ * project Authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
+
-+ chan_tlv->header.type = cpu_to_le16(TLV_TYPE_CHANLIST);
-+ memcpy(chan_tlv->chanscanparam, chan_list, size);
-+ chan_tlv->header.len = cpu_to_le16(size);
-+ return sizeof(chan_tlv->header) + size;
- }
-
--/**
-- * @brief Construct and send multiple scan config commands to the firmware
-- *
-- * Only used from wlan_scan_networks()
-- *
-- * Previous routines have created a wlan_scan_cmd_config with any requested
-- * TLVs. This function splits the channel TLV into maxchanperscan lists
-- * and sends the portion of the channel TLV along with the other TLVs
-- * to the wlan_cmd routines for execution in the firmware.
++#include <linux/init.h>
++#include <linux/pci.h>
++#include <linux/delay.h>
++#include <net/mac80211.h>
++
++#include "rtl8180.h"
++#include "rtl8180_max2820.h"
++
++static const u32 max2820_chan[] = {
++ 12, /* CH 1 */
++ 17,
++ 22,
++ 27,
++ 32,
++ 37,
++ 42,
++ 47,
++ 52,
++ 57,
++ 62,
++ 67,
++ 72,
++ 84, /* CH 14 */
++};
++
++static void write_max2820(struct ieee80211_hw *dev, u8 addr, u32 data)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u32 phy_config;
++
++ phy_config = 0x90 + (data & 0xf);
++ phy_config <<= 16;
++ phy_config += addr;
++ phy_config <<= 8;
++ phy_config += (data >> 4) & 0xff;
++
++ rtl818x_iowrite32(priv,
++ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
++
++ msleep(1);
++}
++
++static void max2820_write_phy_antenna(struct ieee80211_hw *dev, short chan)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u8 ant;
++
++ ant = MAXIM_ANTENNA;
++ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
++ ant |= BB_ANTENNA_B;
++ if (chan == 14)
++ ant |= BB_ANTATTEN_CHAN14;
++
++ rtl8180_write_phy(dev, 0x10, ant);
++}
++
++static void max2820_rf_set_channel(struct ieee80211_hw *dev,
++ struct ieee80211_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ unsigned int chan_idx = conf ? conf->channel - 1 : 0;
++ u32 txpw = priv->channels[chan_idx].val & 0xFF;
++ u32 chan = max2820_chan[chan_idx];
++
++ /* While philips SA2400 drive the PA bias from
++ * sa2400, for MAXIM we do this directly from BB */
++ rtl8180_write_phy(dev, 3, txpw);
++
++ max2820_write_phy_antenna(dev, chan);
++ write_max2820(dev, 3, chan);
++}
++
++static void max2820_rf_stop(struct ieee80211_hw *dev)
++{
++ rtl8180_write_phy(dev, 3, 0x8);
++ write_max2820(dev, 1, 0);
++}
++
++
++static void max2820_rf_init(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++
++ /* MAXIM from netbsd driver */
++ write_max2820(dev, 0, 0x007); /* test mode as indicated in datasheet */
++ write_max2820(dev, 1, 0x01e); /* enable register */
++ write_max2820(dev, 2, 0x001); /* synt register */
++
++ max2820_rf_set_channel(dev, NULL);
++
++ write_max2820(dev, 4, 0x313); /* rx register */
++
++ /* PA is driven directly by the BB, we keep the MAXIM bias
++ * at the highest value in case that setting it to lower
++ * values may introduce some further attenuation somewhere..
++ */
++ write_max2820(dev, 5, 0x00f);
++
++ /* baseband configuration */
++ rtl8180_write_phy(dev, 0, 0x88); /* sys1 */
++ rtl8180_write_phy(dev, 3, 0x08); /* txagc */
++ rtl8180_write_phy(dev, 4, 0xf8); /* lnadet */
++ rtl8180_write_phy(dev, 5, 0x90); /* ifagcinit */
++ rtl8180_write_phy(dev, 6, 0x1a); /* ifagclimit */
++ rtl8180_write_phy(dev, 7, 0x64); /* ifagcdet */
++
++ max2820_write_phy_antenna(dev, 1);
++
++ rtl8180_write_phy(dev, 0x11, 0x88); /* trl */
++
++ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
++ RTL818X_CONFIG2_ANTENNA_DIV)
++ rtl8180_write_phy(dev, 0x12, 0xc7);
++ else
++ rtl8180_write_phy(dev, 0x12, 0x47);
++
++ rtl8180_write_phy(dev, 0x13, 0x9b);
++
++ rtl8180_write_phy(dev, 0x19, 0x0); /* CHESTLIM */
++ rtl8180_write_phy(dev, 0x1a, 0x9f); /* CHSQLIM */
++
++ max2820_rf_set_channel(dev, NULL);
++}
++
++const struct rtl818x_rf_ops max2820_rf_ops = {
++ .name = "Maxim",
++ .init = max2820_rf_init,
++ .stop = max2820_rf_stop,
++ .set_chan = max2820_rf_set_channel
++};
+diff --git a/drivers/net/wireless/rtl8180_max2820.h b/drivers/net/wireless/rtl8180_max2820.h
+new file mode 100644
+index 0000000..61cf6d1
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_max2820.h
+@@ -0,0 +1,28 @@
++#ifndef RTL8180_MAX2820_H
++#define RTL8180_MAX2820_H
+
+/*
-+ * Add RATES TLV of the form
- *
-- * @param priv A pointer to wlan_private structure
-- * @param maxchanperscan Maximum number channels to be included in each
-- * scan command sent to firmware
-- * @param filteredscan Flag indicating whether or not a BSSID or SSID
-- * filter is being used for the firmware command
-- * scan command sent to firmware
-- * @param pscancfgout Scan configuration used for this scan.
-- * @param pchantlvout Pointer in the pscancfgout where the channel TLV
-- * should start. This is past any other TLVs that
-- * must be sent down in each firmware command.
-- * @param pscanchanlist List of channels to scan in maxchanperscan segments
-+ * TLV-ID RATES 01 00
-+ * length 0e 00
-+ * rates 82 84 8b 96 0c 12 18 24 30 48 60 6c
- *
-- * @return 0 or error return otherwise
-+ * The rates are in lbs_bg_rates[], but for the 802.11b
-+ * rates the high bit isn't set.
- */
--static int wlan_scan_channel_list(wlan_private * priv,
-- int maxchanperscan,
-- u8 filteredscan,
-- struct wlan_scan_cmd_config * pscancfgout,
-- struct mrvlietypes_chanlistparamset * pchantlvout,
-- struct chanscanparamset * pscanchanlist,
-- const struct wlan_ioctl_user_scan_cfg * puserscanin,
-- int full_scan)
-+static int lbs_scan_add_rates_tlv(u8 *tlv)
- {
-- struct chanscanparamset *ptmpchan;
-- struct chanscanparamset *pstartchan;
-- u8 scanband;
-- int doneearly;
-- int tlvidx;
-- int ret = 0;
-- int scanned = 0;
-- union iwreq_data wrqu;
--
-- lbs_deb_enter_args(LBS_DEB_SCAN, "maxchanperscan %d, filteredscan %d, "
-- "full_scan %d", maxchanperscan, filteredscan, full_scan);
--
-- if (!pscancfgout || !pchantlvout || !pscanchanlist) {
-- lbs_deb_scan("pscancfgout, pchantlvout or "
-- "pscanchanlist is NULL\n");
-- ret = -1;
-- goto out;
-- }
--
-- pchantlvout->header.type = cpu_to_le16(TLV_TYPE_CHANLIST);
--
-- /* Set the temp channel struct pointer to the start of the desired list */
-- ptmpchan = pscanchanlist;
--
-- if (priv->adapter->last_scanned_channel && !puserscanin)
-- ptmpchan += priv->adapter->last_scanned_channel;
--
-- /* Loop through the desired channel list, sending a new firmware scan
-- * commands for each maxchanperscan channels (or for 1,6,11 individually
-- * if configured accordingly)
-- */
-- while (ptmpchan->channumber) {
--
-- tlvidx = 0;
-- pchantlvout->header.len = 0;
-- scanband = ptmpchan->radiotype;
-- pstartchan = ptmpchan;
-- doneearly = 0;
--
-- /* Construct the channel TLV for the scan command. Continue to
-- * insert channel TLVs until:
-- * - the tlvidx hits the maximum configured per scan command
-- * - the next channel to insert is 0 (end of desired channel list)
-- * - doneearly is set (controlling individual scanning of 1,6,11)
-- */
-- while (tlvidx < maxchanperscan && ptmpchan->channumber
-- && !doneearly && scanned < 2) {
--
-- lbs_deb_scan("channel %d, radio %d, passive %d, "
-- "dischanflt %d, maxscantime %d\n",
-- ptmpchan->channumber,
-- ptmpchan->radiotype,
-- ptmpchan->chanscanmode.passivescan,
-- ptmpchan->chanscanmode.disablechanfilt,
-- ptmpchan->maxscantime);
--
-- /* Copy the current channel TLV to the command being prepared */
-- memcpy(pchantlvout->chanscanparam + tlvidx,
-- ptmpchan, sizeof(pchantlvout->chanscanparam));
--
-- /* Increment the TLV header length by the size appended */
-- /* Ew, it would be _so_ nice if we could just declare the
-- variable little-endian and let GCC handle it for us */
-- pchantlvout->header.len =
-- cpu_to_le16(le16_to_cpu(pchantlvout->header.len) +
-- sizeof(pchantlvout->chanscanparam));
--
-- /*
-- * The tlv buffer length is set to the number of bytes of the
-- * between the channel tlv pointer and the start of the
-- * tlv buffer. This compensates for any TLVs that were appended
-- * before the channel list.
-- */
-- pscancfgout->tlvbufferlen = ((u8 *) pchantlvout
-- - pscancfgout->tlvbuffer);
--
-- /* Add the size of the channel tlv header and the data length */
-- pscancfgout->tlvbufferlen +=
-- (sizeof(pchantlvout->header)
-- + le16_to_cpu(pchantlvout->header.len));
--
-- /* Increment the index to the channel tlv we are constructing */
-- tlvidx++;
--
-- doneearly = 0;
--
-- /* Stop the loop if the *current* channel is in the 1,6,11 set
-- * and we are not filtering on a BSSID or SSID.
-- */
-- if (!filteredscan && (ptmpchan->channumber == 1
-- || ptmpchan->channumber == 6
-- || ptmpchan->channumber == 11)) {
-- doneearly = 1;
-- }
--
-- /* Increment the tmp pointer to the next channel to be scanned */
-- ptmpchan++;
-- scanned++;
--
-- /* Stop the loop if the *next* channel is in the 1,6,11 set.
-- * This will cause it to be the only channel scanned on the next
-- * interation
-- */
-- if (!filteredscan && (ptmpchan->channumber == 1
-- || ptmpchan->channumber == 6
-- || ptmpchan->channumber == 11)) {
-- doneearly = 1;
-- }
-- }
--
-- /* Send the scan command to the firmware with the specified cfg */
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SCAN, 0,
-- 0, 0, pscancfgout);
-- if (scanned >= 2 && !full_scan) {
-- ret = 0;
-- goto done;
-- }
-- scanned = 0;
-- }
--
--done:
-- priv->adapter->last_scanned_channel = ptmpchan->channumber;
--
-- if (priv->adapter->last_scanned_channel) {
-- /* Schedule the next part of the partial scan */
-- if (!full_scan && !priv->adapter->surpriseremoved) {
-- cancel_delayed_work(&priv->scan_work);
-- queue_delayed_work(priv->work_thread, &priv->scan_work,
-- msecs_to_jiffies(300));
-- }
-- } else {
-- /* All done, tell userspace the scan table has been updated */
-- memset(&wrqu, 0, sizeof(union iwreq_data));
-- wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
++ * Radio tuning for Maxim max2820 on RTL8180
++ *
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Code from the BSD driver and the rtl8181 project have been
++ * very useful to understand certain things
++ *
++ * I want to thanks the Authors of such projects and the Ndiswrapper
++ * project Authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#define MAXIM_ANTENNA 0xb3
++
++extern const struct rtl818x_rf_ops max2820_rf_ops;
++
++#endif /* RTL8180_MAX2820_H */
+diff --git a/drivers/net/wireless/rtl8180_rtl8225.c b/drivers/net/wireless/rtl8180_rtl8225.c
+new file mode 100644
+index 0000000..ef3832b
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_rtl8225.c
+@@ -0,0 +1,779 @@
++
++/*
++ * Radio tuning for RTL8225 on RTL8180
++ *
++ * Copyright 2007 Michael Wu <flamingice at sourmilk.net>
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Based on the r8180 driver, which is:
++ * Copyright 2005 Andrea Merello <andreamrl at tiscali.it>, et al.
++ *
++ * Thanks to Realtek for their support!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#include <linux/init.h>
++#include <linux/pci.h>
++#include <linux/delay.h>
++#include <net/mac80211.h>
++
++#include "rtl8180.h"
++#include "rtl8180_rtl8225.h"
++
++static void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u16 reg80, reg84, reg82;
++ u32 bangdata;
+ int i;
-+ struct mrvlietypes_ratesparamset *rate_tlv =
-+ (struct mrvlietypes_ratesparamset *) tlv;
+
-+ rate_tlv->header.type = cpu_to_le16(TLV_TYPE_RATES);
-+ tlv += sizeof(rate_tlv->header);
-+ for (i = 0; i < MAX_RATES; i++) {
-+ *tlv = lbs_bg_rates[i];
-+ if (*tlv == 0)
-+ break;
-+ /* This code makes sure that the 802.11b rates (1 MBit/s, 2
-+ MBit/s, 5.5 MBit/s and 11 MBit/s get's the high bit set.
-+ Note that the values are MBit/s * 2, to mark them as
-+ basic rates so that the firmware likes it better */
-+ if (*tlv == 0x02 || *tlv == 0x04 ||
-+ *tlv == 0x0b || *tlv == 0x16)
-+ *tlv |= 0x80;
-+ tlv++;
- }
--
--out:
-- lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
-- return ret;
-+ rate_tlv->header.len = cpu_to_le16(i);
-+ return sizeof(rate_tlv->header) + i;
- }
-
++ bangdata = (data << 4) | (addr & 0xf);
+
- /*
-- * Only used from wlan_scan_networks()
--*/
--static void clear_selected_scan_list_entries(wlan_adapter *adapter,
-- const struct wlan_ioctl_user_scan_cfg *scan_cfg)
-+ * Generate the CMD_802_11_SCAN command with the proper tlv
-+ * for a bunch of channels.
-+ */
-+static int lbs_do_scan(struct lbs_private *priv,
-+ u8 bsstype,
-+ struct chanscanparamset *chan_list,
-+ int chan_count,
-+ const struct lbs_ioctl_user_scan_cfg *user_cfg)
- {
-- struct bss_descriptor *bss;
-- struct bss_descriptor *safe;
-- u32 clear_ssid_flag = 0, clear_bssid_flag = 0;
-+ int ret = -ENOMEM;
-+ struct lbs_scan_cmd_config *scan_cmd;
-+ u8 *tlv; /* pointer into our current, growing TLV storage area */
-
-- lbs_deb_enter(LBS_DEB_SCAN);
-+ lbs_deb_enter_args(LBS_DEB_SCAN, "bsstype %d, chanlist[].chan %d, "
-+ "chan_count %d",
-+ bsstype, chan_list[0].channumber, chan_count);
-
-- if (!scan_cfg)
-+ /* create the fixed part for scan command */
-+ scan_cmd = kzalloc(MAX_SCAN_CFG_ALLOC, GFP_KERNEL);
-+ if (scan_cmd == NULL)
- goto out;
--
-- if (scan_cfg->clear_ssid && scan_cfg->ssid_len)
-- clear_ssid_flag = 1;
--
-- if (scan_cfg->clear_bssid
-- && (compare_ether_addr(scan_cfg->bssid, &zeromac[0]) != 0)
-- && (compare_ether_addr(scan_cfg->bssid, &bcastmac[0]) != 0)) {
-- clear_bssid_flag = 1;
-- }
--
-- if (!clear_ssid_flag && !clear_bssid_flag)
-- goto out;
--
-- mutex_lock(&adapter->lock);
-- list_for_each_entry_safe (bss, safe, &adapter->network_list, list) {
-- u32 clear = 0;
--
-- /* Check for an SSID match */
-- if ( clear_ssid_flag
-- && (bss->ssid_len == scan_cfg->ssid_len)
-- && !memcmp(bss->ssid, scan_cfg->ssid, bss->ssid_len))
-- clear = 1;
--
-- /* Check for a BSSID match */
-- if ( clear_bssid_flag
-- && !compare_ether_addr(bss->bssid, scan_cfg->bssid))
-- clear = 1;
--
-- if (clear) {
-- list_move_tail (&bss->list, &adapter->network_free_list);
-- clear_bss_descriptor(bss);
-- }
-- }
-- mutex_unlock(&adapter->lock);
-+ tlv = scan_cmd->tlvbuffer;
-+ if (user_cfg)
-+ memcpy(scan_cmd->bssid, user_cfg->bssid, ETH_ALEN);
-+ scan_cmd->bsstype = bsstype;
++ reg80 = rtl818x_ioread16(priv, &priv->map->RFPinsOutput) & 0xfff3;
++ reg82 = rtl818x_ioread16(priv, &priv->map->RFPinsEnable);
+
-+ /* add TLVs */
-+ if (user_cfg && user_cfg->ssid_len)
-+ tlv += lbs_scan_add_ssid_tlv(tlv, user_cfg);
-+ if (chan_list && chan_count)
-+ tlv += lbs_scan_add_chanlist_tlv(tlv, chan_list, chan_count);
-+ tlv += lbs_scan_add_rates_tlv(tlv);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82 | 0x7);
+
-+ /* This is the final data we are about to send */
-+ scan_cmd->tlvbufferlen = tlv - scan_cmd->tlvbuffer;
-+ lbs_deb_hex(LBS_DEB_SCAN, "SCAN_CMD", (void *)scan_cmd, 1+6);
-+ lbs_deb_hex(LBS_DEB_SCAN, "SCAN_TLV", scan_cmd->tlvbuffer,
-+ scan_cmd->tlvbufferlen);
++ reg84 = rtl818x_ioread16(priv, &priv->map->RFPinsSelect);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x7 | 0x400);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(10);
+
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SCAN, 0,
-+ CMD_OPTION_WAITFORRSP, 0, scan_cmd);
- out:
-- lbs_deb_leave(LBS_DEB_SCAN);
-+ kfree(scan_cmd);
-+ lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
-+ return ret;
- }
-
-
-@@ -812,32 +529,32 @@ out:
- * order to send the appropriate scan commands to firmware to populate or
- * update the internal driver scan table
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param puserscanin Pointer to the input configuration for the requested
- * scan.
-- * @param full_scan ???
- *
- * @return 0 or < 0 if error
- */
--int wlan_scan_networks(wlan_private * priv,
-- const struct wlan_ioctl_user_scan_cfg * puserscanin,
-+int lbs_scan_networks(struct lbs_private *priv,
-+ const struct lbs_ioctl_user_scan_cfg *user_cfg,
- int full_scan)
- {
-- wlan_adapter * adapter = priv->adapter;
-- struct mrvlietypes_chanlistparamset *pchantlvout;
-- struct chanscanparamset * scan_chan_list = NULL;
-- struct wlan_scan_cmd_config * scan_cfg = NULL;
-- u8 filteredscan;
-- u8 scancurrentchanonly;
-- int maxchanperscan;
-- int ret;
-+ int ret = -ENOMEM;
-+ struct chanscanparamset *chan_list;
-+ struct chanscanparamset *curr_chans;
-+ int chan_count;
-+ u8 bsstype = CMD_BSS_TYPE_ANY;
-+ int numchannels = MRVDRV_CHANNELS_PER_SCAN_CMD;
-+ int filteredscan = 0;
-+ union iwreq_data wrqu;
- #ifdef CONFIG_LIBERTAS_DEBUG
-- struct bss_descriptor * iter_bss;
-+ struct bss_descriptor *iter;
- int i = 0;
- DECLARE_MAC_BUF(mac);
- #endif
-
-- lbs_deb_enter_args(LBS_DEB_SCAN, "full_scan %d", full_scan);
-+ lbs_deb_enter_args(LBS_DEB_SCAN, "full_scan %d",
-+ full_scan);
-
- /* Cancel any partial outstanding partial scans if this scan
- * is a full scan.
-@@ -845,90 +562,138 @@ int wlan_scan_networks(wlan_private * priv,
- if (full_scan && delayed_work_pending(&priv->scan_work))
- cancel_delayed_work(&priv->scan_work);
-
-- scan_chan_list = kzalloc(sizeof(struct chanscanparamset) *
-- WLAN_IOCTL_USER_SCAN_CHAN_MAX, GFP_KERNEL);
-- if (scan_chan_list == NULL) {
-- ret = -ENOMEM;
-+ /* Determine same scan parameters */
-+ if (user_cfg) {
-+ if (user_cfg->bsstype)
-+ bsstype = user_cfg->bsstype;
-+ if (compare_ether_addr(user_cfg->bssid, &zeromac[0]) != 0) {
-+ numchannels = MRVDRV_MAX_CHANNELS_PER_SCAN;
-+ filteredscan = 1;
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(10);
++
++ for (i = 15; i >= 0; i--) {
++ u16 reg = reg80 | !!(bangdata & (1 << i));
++
++ if (i & 1)
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg | (1 << 1));
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg | (1 << 1));
++
++ if (!(i & 1))
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
++ }
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(10);
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x400);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++}
++
++static u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u16 reg80, reg82, reg84, out;
++ int i;
++
++ reg80 = rtl818x_ioread16(priv, &priv->map->RFPinsOutput);
++ reg82 = rtl818x_ioread16(priv, &priv->map->RFPinsEnable);
++ reg84 = rtl818x_ioread16(priv, &priv->map->RFPinsSelect) | 0x400;
++
++ reg80 &= ~0xF;
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82 | 0x000F);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x000F);
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(4);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(5);
++
++ for (i = 4; i >= 0; i--) {
++ u16 reg = reg80 | ((addr >> i) & 1);
++
++ if (!(i & 1)) {
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(1);
++ }
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++
++ if (i & 1) {
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(1);
+ }
+ }
-+ lbs_deb_scan("numchannels %d, bsstype %d, "
-+ "filteredscan %d\n",
-+ numchannels, bsstype, filteredscan);
+
-+ /* Create list of channels to scan */
-+ chan_list = kzalloc(sizeof(struct chanscanparamset) *
-+ LBS_IOCTL_USER_SCAN_CHAN_MAX, GFP_KERNEL);
-+ if (!chan_list) {
-+ lbs_pr_alert("SCAN: chan_list empty\n");
- goto out;
- }
-
-- scan_cfg = wlan_scan_setup_scan_config(priv,
-- puserscanin,
-- &pchantlvout,
-- scan_chan_list,
-- &maxchanperscan,
-- &filteredscan,
-- &scancurrentchanonly);
-- if (scan_cfg == NULL) {
-- ret = -ENOMEM;
-- goto out;
-+ /* We want to scan all channels */
-+ chan_count = lbs_scan_create_channel_list(priv, chan_list,
-+ filteredscan);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x000E);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x040E);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3) | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
+
-+ netif_stop_queue(priv->dev);
-+ netif_carrier_off(priv->dev);
-+ if (priv->mesh_dev) {
-+ netif_stop_queue(priv->mesh_dev);
-+ netif_carrier_off(priv->mesh_dev);
++ out = 0;
++ for (i = 11; i >= 0; i--) {
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(1);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3) | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3) | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3) | (1 << 1));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
++
++ if (rtl818x_ioread16(priv, &priv->map->RFPinsInput) & (1 << 1))
++ out |= 1 << i;
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
+ }
+
-+ /* Prepare to continue an interrupted scan */
-+ lbs_deb_scan("chan_count %d, last_scanned_channel %d\n",
-+ chan_count, priv->last_scanned_channel);
-+ curr_chans = chan_list;
-+ /* advance channel list by already-scanned-channels */
-+ if (priv->last_scanned_channel > 0) {
-+ curr_chans += priv->last_scanned_channel;
-+ chan_count -= priv->last_scanned_channel;
- }
-
-- clear_selected_scan_list_entries(adapter, puserscanin);
-+ /* Send scan command(s)
-+ * numchannels contains the number of channels we should maximally scan
-+ * chan_count is the total number of channels to scan
-+ */
-
-- /* Keep the data path active if we are only scanning our current channel */
-- if (!scancurrentchanonly) {
-- netif_stop_queue(priv->dev);
-- netif_carrier_off(priv->dev);
-- if (priv->mesh_dev) {
-- netif_stop_queue(priv->mesh_dev);
-- netif_carrier_off(priv->mesh_dev);
-+ while (chan_count) {
-+ int to_scan = min(numchannels, chan_count);
-+ lbs_deb_scan("scanning %d of %d channels\n",
-+ to_scan, chan_count);
-+ ret = lbs_do_scan(priv, bsstype, curr_chans,
-+ to_scan, user_cfg);
-+ if (ret) {
-+ lbs_pr_err("SCAN_CMD failed\n");
-+ goto out2;
-+ }
-+ curr_chans += to_scan;
-+ chan_count -= to_scan;
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
++ reg80 | (1 << 3) | (1 << 2));
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ udelay(2);
+
-+ /* somehow schedule the next part of the scan */
-+ if (chan_count &&
-+ !full_scan &&
-+ !priv->surpriseremoved) {
-+ /* -1 marks just that we're currently scanning */
-+ if (priv->last_scanned_channel < 0)
-+ priv->last_scanned_channel = to_scan;
-+ else
-+ priv->last_scanned_channel += to_scan;
-+ cancel_delayed_work(&priv->scan_work);
-+ queue_delayed_work(priv->work_thread, &priv->scan_work,
-+ msecs_to_jiffies(300));
-+ /* skip over GIWSCAN event */
-+ goto out;
- }
-- }
-
-- ret = wlan_scan_channel_list(priv,
-- maxchanperscan,
-- filteredscan,
-- scan_cfg,
-- pchantlvout,
-- scan_chan_list,
-- puserscanin,
-- full_scan);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x03A0);
++
++ return out;
++}
++
++static const u16 rtl8225bcd_rxgain[] = {
++ 0x0400, 0x0401, 0x0402, 0x0403, 0x0404, 0x0405, 0x0408, 0x0409,
++ 0x040a, 0x040b, 0x0502, 0x0503, 0x0504, 0x0505, 0x0540, 0x0541,
++ 0x0542, 0x0543, 0x0544, 0x0545, 0x0580, 0x0581, 0x0582, 0x0583,
++ 0x0584, 0x0585, 0x0588, 0x0589, 0x058a, 0x058b, 0x0643, 0x0644,
++ 0x0645, 0x0680, 0x0681, 0x0682, 0x0683, 0x0684, 0x0685, 0x0688,
++ 0x0689, 0x068a, 0x068b, 0x068c, 0x0742, 0x0743, 0x0744, 0x0745,
++ 0x0780, 0x0781, 0x0782, 0x0783, 0x0784, 0x0785, 0x0788, 0x0789,
++ 0x078a, 0x078b, 0x078c, 0x078d, 0x0790, 0x0791, 0x0792, 0x0793,
++ 0x0794, 0x0795, 0x0798, 0x0799, 0x079a, 0x079b, 0x079c, 0x079d,
++ 0x07a0, 0x07a1, 0x07a2, 0x07a3, 0x07a4, 0x07a5, 0x07a8, 0x07a9,
++ 0x07aa, 0x07ab, 0x07ac, 0x07ad, 0x07b0, 0x07b1, 0x07b2, 0x07b3,
++ 0x07b4, 0x07b5, 0x07b8, 0x07b9, 0x07ba, 0x07bb, 0x07bb
++};
++
++static const u8 rtl8225_agc[] = {
++ 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e,
++ 0x9d, 0x9c, 0x9b, 0x9a, 0x99, 0x98, 0x97, 0x96,
++ 0x95, 0x94, 0x93, 0x92, 0x91, 0x90, 0x8f, 0x8e,
++ 0x8d, 0x8c, 0x8b, 0x8a, 0x89, 0x88, 0x87, 0x86,
++ 0x85, 0x84, 0x83, 0x82, 0x81, 0x80, 0x3f, 0x3e,
++ 0x3d, 0x3c, 0x3b, 0x3a, 0x39, 0x38, 0x37, 0x36,
++ 0x35, 0x34, 0x33, 0x32, 0x31, 0x30, 0x2f, 0x2e,
++ 0x2d, 0x2c, 0x2b, 0x2a, 0x29, 0x28, 0x27, 0x26,
++ 0x25, 0x24, 0x23, 0x22, 0x21, 0x20, 0x1f, 0x1e,
++ 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18, 0x17, 0x16,
++ 0x15, 0x14, 0x13, 0x12, 0x11, 0x10, 0x0f, 0x0e,
++ 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08, 0x07, 0x06,
++ 0x05, 0x04, 0x03, 0x02, 0x01, 0x01, 0x01, 0x01,
++ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
++ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
++ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01
++};
++
++static const u8 rtl8225_gain[] = {
++ 0x23, 0x88, 0x7c, 0xa5, /* -82dbm */
++ 0x23, 0x88, 0x7c, 0xb5, /* -82dbm */
++ 0x23, 0x88, 0x7c, 0xc5, /* -82dbm */
++ 0x33, 0x80, 0x79, 0xc5, /* -78dbm */
++ 0x43, 0x78, 0x76, 0xc5, /* -74dbm */
++ 0x53, 0x60, 0x73, 0xc5, /* -70dbm */
++ 0x63, 0x58, 0x70, 0xc5, /* -66dbm */
++};
++
++static const u8 rtl8225_threshold[] = {
++ 0x8d, 0x8d, 0x8d, 0x8d, 0x9d, 0xad, 0xbd
++};
++
++static const u8 rtl8225_tx_gain_cck_ofdm[] = {
++ 0x02, 0x06, 0x0e, 0x1e, 0x3e, 0x7e
++};
++
++static const u8 rtl8225_tx_power_cck[] = {
++ 0x18, 0x17, 0x15, 0x11, 0x0c, 0x08, 0x04, 0x02,
++ 0x1b, 0x1a, 0x17, 0x13, 0x0e, 0x09, 0x04, 0x02,
++ 0x1f, 0x1e, 0x1a, 0x15, 0x10, 0x0a, 0x05, 0x02,
++ 0x22, 0x21, 0x1d, 0x18, 0x11, 0x0b, 0x06, 0x02,
++ 0x26, 0x25, 0x21, 0x1b, 0x14, 0x0d, 0x06, 0x03,
++ 0x2b, 0x2a, 0x25, 0x1e, 0x16, 0x0e, 0x07, 0x03
++};
++
++static const u8 rtl8225_tx_power_cck_ch14[] = {
++ 0x18, 0x17, 0x15, 0x0c, 0x00, 0x00, 0x00, 0x00,
++ 0x1b, 0x1a, 0x17, 0x0e, 0x00, 0x00, 0x00, 0x00,
++ 0x1f, 0x1e, 0x1a, 0x0f, 0x00, 0x00, 0x00, 0x00,
++ 0x22, 0x21, 0x1d, 0x11, 0x00, 0x00, 0x00, 0x00,
++ 0x26, 0x25, 0x21, 0x13, 0x00, 0x00, 0x00, 0x00,
++ 0x2b, 0x2a, 0x25, 0x15, 0x00, 0x00, 0x00, 0x00
++};
++
++static const u8 rtl8225_tx_power_ofdm[] = {
++ 0x80, 0x90, 0xa2, 0xb5, 0xcb, 0xe4
++};
++
++static const u32 rtl8225_chan[] = {
++ 0x085c, 0x08dc, 0x095c, 0x09dc, 0x0a5c, 0x0adc, 0x0b5c,
++ 0x0bdc, 0x0c5c, 0x0cdc, 0x0d5c, 0x0ddc, 0x0e5c, 0x0f72
++};
++
++static void rtl8225_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u8 cck_power, ofdm_power;
++ const u8 *tmp;
++ u32 reg;
++ int i;
++
++ cck_power = priv->channels[channel - 1].val & 0xFF;
++ ofdm_power = priv->channels[channel - 1].val >> 8;
++
++ cck_power = min(cck_power, (u8)35);
++ ofdm_power = min(ofdm_power, (u8)35);
++
++ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_CCK,
++ rtl8225_tx_gain_cck_ofdm[cck_power / 6] >> 1);
++
++ if (channel == 14)
++ tmp = &rtl8225_tx_power_cck_ch14[(cck_power % 6) * 8];
++ else
++ tmp = &rtl8225_tx_power_cck[(cck_power % 6) * 8];
++
++ for (i = 0; i < 8; i++)
++ rtl8225_write_phy_cck(dev, 0x44 + i, *tmp++);
++
++ msleep(1); /* FIXME: optional? */
++
++ /* anaparam2 on */
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite32(priv, &priv->map->ANAPARAM2, RTL8225_ANAPARAM2_ON);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_OFDM,
++ rtl8225_tx_gain_cck_ofdm[ofdm_power/6] >> 1);
++
++ tmp = &rtl8225_tx_power_ofdm[ofdm_power % 6];
++
++ rtl8225_write_phy_ofdm(dev, 5, *tmp);
++ rtl8225_write_phy_ofdm(dev, 7, *tmp);
++
++ msleep(1);
++}
++
++static void rtl8225_rf_init(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int i;
++
++ rtl8180_set_anaparam(priv, RTL8225_ANAPARAM_ON);
++
++ /* host_pci_init */
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
++ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ msleep(200); /* FIXME: ehh?? */
++ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0xFF & ~(1 << 6));
++
++ rtl818x_iowrite32(priv, &priv->map->RF_TIMING, 0x000a8008);
++
++ /* TODO: check if we need really to change BRSR to do RF config */
++ rtl818x_ioread16(priv, &priv->map->BRSR);
++ rtl818x_iowrite16(priv, &priv->map->BRSR, 0xFFFF);
++ rtl818x_iowrite32(priv, &priv->map->RF_PARA, 0x00100044);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, 0x44);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ rtl8225_write(dev, 0x0, 0x067);
++ rtl8225_write(dev, 0x1, 0xFE0);
++ rtl8225_write(dev, 0x2, 0x44D);
++ rtl8225_write(dev, 0x3, 0x441);
++ rtl8225_write(dev, 0x4, 0x8BE);
++ rtl8225_write(dev, 0x5, 0xBF0); /* TODO: minipci */
++ rtl8225_write(dev, 0x6, 0xAE6);
++ rtl8225_write(dev, 0x7, rtl8225_chan[0]);
++ rtl8225_write(dev, 0x8, 0x01F);
++ rtl8225_write(dev, 0x9, 0x334);
++ rtl8225_write(dev, 0xA, 0xFD4);
++ rtl8225_write(dev, 0xB, 0x391);
++ rtl8225_write(dev, 0xC, 0x050);
++ rtl8225_write(dev, 0xD, 0x6DB);
++ rtl8225_write(dev, 0xE, 0x029);
++ rtl8225_write(dev, 0xF, 0x914); msleep(1);
++
++ rtl8225_write(dev, 0x2, 0xC4D); msleep(100);
++
++ rtl8225_write(dev, 0x0, 0x127);
++
++ for (i = 0; i < ARRAY_SIZE(rtl8225bcd_rxgain); i++) {
++ rtl8225_write(dev, 0x1, i + 1);
++ rtl8225_write(dev, 0x2, rtl8225bcd_rxgain[i]);
+ }
-+ memset(&wrqu, 0, sizeof(union iwreq_data));
-+ wireless_send_event(priv->dev, SIOCGIWSCAN, &wrqu, NULL);
-
- #ifdef CONFIG_LIBERTAS_DEBUG
- /* Dump the scan table */
-- mutex_lock(&adapter->lock);
-- lbs_deb_scan("The scan table contains:\n");
-- list_for_each_entry (iter_bss, &adapter->network_list, list) {
-- lbs_deb_scan("scan %02d, %s, RSSI, %d, SSID '%s'\n",
-- i++, print_mac(mac, iter_bss->bssid), (s32) iter_bss->rssi,
-- escape_essid(iter_bss->ssid, iter_bss->ssid_len));
-- }
-- mutex_unlock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-+ lbs_deb_scan("scan table:\n");
-+ list_for_each_entry(iter, &priv->network_list, list)
-+ lbs_deb_scan("%02d: BSSID %s, RSSI %d, SSID '%s'\n",
-+ i++, print_mac(mac, iter->bssid), (s32) iter->rssi,
-+ escape_essid(iter->ssid, iter->ssid_len));
-+ mutex_unlock(&priv->lock);
- #endif
-
-- if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
-+out2:
-+ priv->last_scanned_channel = 0;
+
-+out:
-+ if (priv->connect_status == LBS_CONNECTED) {
- netif_carrier_on(priv->dev);
-- netif_wake_queue(priv->dev);
-- if (priv->mesh_dev) {
-- netif_carrier_on(priv->mesh_dev);
-+ if (!priv->tx_pending_len)
-+ netif_wake_queue(priv->dev);
++ rtl8225_write(dev, 0x0, 0x027);
++ rtl8225_write(dev, 0x0, 0x22F);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++
++ for (i = 0; i < ARRAY_SIZE(rtl8225_agc); i++) {
++ rtl8225_write_phy_ofdm(dev, 0xB, rtl8225_agc[i]);
++ msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0xA, 0x80 + i);
++ msleep(1);
+ }
-+ if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED)) {
-+ netif_carrier_on(priv->mesh_dev);
-+ if (!priv->tx_pending_len)
- netif_wake_queue(priv->mesh_dev);
-- }
- }
--
--out:
-- if (scan_cfg)
-- kfree(scan_cfg);
--
-- if (scan_chan_list)
-- kfree(scan_chan_list);
-+ kfree(chan_list);
-
- lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
- return ret;
- }
-
+
++ msleep(1);
+
++ rtl8225_write_phy_ofdm(dev, 0x00, 0x01); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x01, 0x02); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x02, 0x62); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x03, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x04, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x05, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x06, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x07, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x08, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x09, 0xfe); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0a, 0x09); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0b, 0x80); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0c, 0x01); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0e, 0xd3); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0f, 0x38); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x10, 0x84); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x11, 0x03); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x12, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x13, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x14, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x15, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x16, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x17, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x19, 0x19); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1a, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1b, 0x76); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1c, 0x04); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1e, 0x95); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1f, 0x75); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x20, 0x1f); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x21, 0x27); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x22, 0x16); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x24, 0x46); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x25, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x27, 0x88); msleep(1);
+
-+/*********************************************************************/
-+/* */
-+/* Result interpretation */
-+/* */
-+/*********************************************************************/
++ rtl8225_write_phy_cck(dev, 0x00, 0x98); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x03, 0x20); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x04, 0x7e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x05, 0x12); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x06, 0xfc); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x07, 0x78); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x08, 0x2e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x10, 0x93); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x11, 0x88); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x12, 0x47); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x13, 0xd0);
++ rtl8225_write_phy_cck(dev, 0x19, 0x00);
++ rtl8225_write_phy_cck(dev, 0x1a, 0xa0);
++ rtl8225_write_phy_cck(dev, 0x1b, 0x08);
++ rtl8225_write_phy_cck(dev, 0x40, 0x86);
++ rtl8225_write_phy_cck(dev, 0x41, 0x8d); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x42, 0x15); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x43, 0x18); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x44, 0x1f); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x45, 0x1e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x46, 0x1a); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x47, 0x15); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x48, 0x10); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x49, 0x0a); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4a, 0x05); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4b, 0x02); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4c, 0x05); msleep(1);
+
- /**
- * @brief Interpret a BSS scan response returned from the firmware
- *
- * Parse the various fixed fields and IEs passed back for a a BSS probe
-- * response or beacon from the scan command. Record information as needed
-- * in the scan table struct bss_descriptor for that entry.
-+ * response or beacon from the scan command. Record information as needed
-+ * in the scan table struct bss_descriptor for that entry.
- *
- * @param bss Output parameter: Pointer to the BSS Entry
- *
- * @return 0 or -1
- */
--static int libertas_process_bss(struct bss_descriptor * bss,
-+static int lbs_process_bss(struct bss_descriptor *bss,
- u8 ** pbeaconinfo, int *bytesleft)
- {
- struct ieeetypes_fhparamset *pFH;
-@@ -946,7 +711,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-
- if (*bytesleft >= sizeof(beaconsize)) {
- /* Extract & convert beacon size from the command buffer */
-- beaconsize = le16_to_cpu(get_unaligned((u16 *)*pbeaconinfo));
-+ beaconsize = le16_to_cpu(get_unaligned((__le16 *)*pbeaconinfo));
- *bytesleft -= sizeof(beaconsize);
- *pbeaconinfo += sizeof(beaconsize);
- }
-@@ -967,7 +732,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- *bytesleft -= beaconsize;
-
- memcpy(bss->bssid, pos, ETH_ALEN);
-- lbs_deb_scan("process_bss: AP BSSID %s\n", print_mac(mac, bss->bssid));
-+ lbs_deb_scan("process_bss: BSSID %s\n", print_mac(mac, bss->bssid));
- pos += ETH_ALEN;
-
- if ((end - pos) < 12) {
-@@ -983,7 +748,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-
- /* RSSI is 1 byte long */
- bss->rssi = *pos;
-- lbs_deb_scan("process_bss: RSSI=%02X\n", *pos);
-+ lbs_deb_scan("process_bss: RSSI %d\n", *pos);
- pos++;
-
- /* time stamp is 8 bytes long */
-@@ -995,18 +760,18 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-
- /* capability information is 2 bytes long */
- bss->capability = le16_to_cpup((void *) pos);
-- lbs_deb_scan("process_bss: capabilities = 0x%4X\n", bss->capability);
-+ lbs_deb_scan("process_bss: capabilities 0x%04x\n", bss->capability);
- pos += 2;
-
- if (bss->capability & WLAN_CAPABILITY_PRIVACY)
-- lbs_deb_scan("process_bss: AP WEP enabled\n");
-+ lbs_deb_scan("process_bss: WEP enabled\n");
- if (bss->capability & WLAN_CAPABILITY_IBSS)
- bss->mode = IW_MODE_ADHOC;
- else
- bss->mode = IW_MODE_INFRA;
-
- /* rest of the current buffer are IE's */
-- lbs_deb_scan("process_bss: IE length for this AP = %zd\n", end - pos);
-+ lbs_deb_scan("process_bss: IE len %zd\n", end - pos);
- lbs_deb_hex(LBS_DEB_SCAN, "process_bss: IE info", pos, end - pos);
-
- /* process variable IE */
-@@ -1024,7 +789,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- case MFIE_TYPE_SSID:
- bss->ssid_len = elem->len;
- memcpy(bss->ssid, elem->data, elem->len);
-- lbs_deb_scan("ssid '%s', ssid length %u\n",
-+ lbs_deb_scan("got SSID IE: '%s', len %u\n",
- escape_essid(bss->ssid, bss->ssid_len),
- bss->ssid_len);
- break;
-@@ -1033,16 +798,14 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- n_basic_rates = min_t(u8, MAX_RATES, elem->len);
- memcpy(bss->rates, elem->data, n_basic_rates);
- got_basic_rates = 1;
-+ lbs_deb_scan("got RATES IE\n");
- break;
-
- case MFIE_TYPE_FH_SET:
- pFH = (struct ieeetypes_fhparamset *) pos;
- memmove(&bss->phyparamset.fhparamset, pFH,
- sizeof(struct ieeetypes_fhparamset));
--#if 0 /* I think we can store these LE */
-- bss->phyparamset.fhparamset.dwelltime
-- = le16_to_cpu(bss->phyparamset.fhparamset.dwelltime);
--#endif
-+ lbs_deb_scan("got FH IE\n");
- break;
-
- case MFIE_TYPE_DS_SET:
-@@ -1050,31 +813,31 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- bss->channel = pDS->currentchan;
- memcpy(&bss->phyparamset.dsparamset, pDS,
- sizeof(struct ieeetypes_dsparamset));
-+ lbs_deb_scan("got DS IE, channel %d\n", bss->channel);
- break;
-
- case MFIE_TYPE_CF_SET:
- pCF = (struct ieeetypes_cfparamset *) pos;
- memcpy(&bss->ssparamset.cfparamset, pCF,
- sizeof(struct ieeetypes_cfparamset));
-+ lbs_deb_scan("got CF IE\n");
- break;
-
- case MFIE_TYPE_IBSS_SET:
- pibss = (struct ieeetypes_ibssparamset *) pos;
-- bss->atimwindow = le32_to_cpu(pibss->atimwindow);
-+ bss->atimwindow = le16_to_cpu(pibss->atimwindow);
- memmove(&bss->ssparamset.ibssparamset, pibss,
- sizeof(struct ieeetypes_ibssparamset));
--#if 0
-- bss->ssparamset.ibssparamset.atimwindow
-- = le16_to_cpu(bss->ssparamset.ibssparamset.atimwindow);
--#endif
-+ lbs_deb_scan("got IBSS IE\n");
- break;
-
- case MFIE_TYPE_COUNTRY:
- pcountryinfo = (struct ieeetypes_countryinfoset *) pos;
-+ lbs_deb_scan("got COUNTRY IE\n");
- if (pcountryinfo->len < sizeof(pcountryinfo->countrycode)
- || pcountryinfo->len > 254) {
- lbs_deb_scan("process_bss: 11D- Err "
-- "CountryInfo len =%d min=%zd max=254\n",
-+ "CountryInfo len %d, min %zd, max 254\n",
- pcountryinfo->len,
- sizeof(pcountryinfo->countrycode));
- ret = -1;
-@@ -1093,8 +856,11 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- * already found. Data rate IE should come before
- * extended supported rate IE
- */
-- if (!got_basic_rates)
-+ lbs_deb_scan("got RATESEX IE\n");
-+ if (!got_basic_rates) {
-+ lbs_deb_scan("... but ignoring it\n");
- break;
-+ }
-
- n_ex_rates = elem->len;
- if (n_basic_rates + n_ex_rates > MAX_RATES)
-@@ -1113,24 +879,36 @@ static int libertas_process_bss(struct bss_descriptor * bss,
- bss->wpa_ie_len = min(elem->len + 2,
- MAX_WPA_IE_LEN);
- memcpy(bss->wpa_ie, elem, bss->wpa_ie_len);
-- lbs_deb_hex(LBS_DEB_SCAN, "process_bss: WPA IE", bss->wpa_ie,
-+ lbs_deb_scan("got WPA IE\n");
-+ lbs_deb_hex(LBS_DEB_SCAN, "WPA IE", bss->wpa_ie,
- elem->len);
- } else if (elem->len >= MARVELL_MESH_IE_LENGTH &&
- elem->data[0] == 0x00 &&
- elem->data[1] == 0x50 &&
- elem->data[2] == 0x43 &&
- elem->data[3] == 0x04) {
-+ lbs_deb_scan("got mesh IE\n");
- bss->mesh = 1;
-+ } else {
-+ lbs_deb_scan("got generiec IE: "
-+ "%02x:%02x:%02x:%02x, len %d\n",
-+ elem->data[0], elem->data[1],
-+ elem->data[2], elem->data[3],
-+ elem->len);
- }
- break;
-
- case MFIE_TYPE_RSN:
-+ lbs_deb_scan("got RSN IE\n");
- bss->rsn_ie_len = min(elem->len + 2, MAX_WPA_IE_LEN);
- memcpy(bss->rsn_ie, elem, bss->rsn_ie_len);
-- lbs_deb_hex(LBS_DEB_SCAN, "process_bss: RSN_IE", bss->rsn_ie, elem->len);
-+ lbs_deb_hex(LBS_DEB_SCAN, "process_bss: RSN_IE",
-+ bss->rsn_ie, elem->len);
- break;
-
- default:
-+ lbs_deb_scan("got IE 0x%04x, len %d\n",
-+ elem->id, elem->len);
- break;
- }
-
-@@ -1139,7 +917,7 @@ static int libertas_process_bss(struct bss_descriptor * bss,
-
- /* Timestamp */
- bss->last_scanned = jiffies;
-- libertas_unset_basic_rate_flags(bss->rates, sizeof(bss->rates));
-+ lbs_unset_basic_rate_flags(bss->rates, sizeof(bss->rates));
-
- ret = 0;
-
-@@ -1153,13 +931,13 @@ done:
- *
- * Used in association code
- *
-- * @param adapter A pointer to wlan_adapter
-+ * @param priv A pointer to struct lbs_private
- * @param bssid BSSID to find in the scan list
- * @param mode Network mode: Infrastructure or IBSS
- *
- * @return index in BSSID list, or error return code (< 0)
- */
--struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
-+struct bss_descriptor *lbs_find_bssid_in_list(struct lbs_private *priv,
- u8 * bssid, u8 mode)
- {
- struct bss_descriptor * iter_bss;
-@@ -1177,14 +955,14 @@ struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
- * continue past a matched bssid that is not compatible in case there
- * is an AP with multiple SSIDs assigned to the same BSSID
- */
-- mutex_lock(&adapter->lock);
-- list_for_each_entry (iter_bss, &adapter->network_list, list) {
-+ mutex_lock(&priv->lock);
-+ list_for_each_entry (iter_bss, &priv->network_list, list) {
- if (compare_ether_addr(iter_bss->bssid, bssid))
- continue; /* bssid doesn't match */
- switch (mode) {
- case IW_MODE_INFRA:
- case IW_MODE_ADHOC:
-- if (!is_network_compatible(adapter, iter_bss, mode))
-+ if (!is_network_compatible(priv, iter_bss, mode))
- break;
- found_bss = iter_bss;
- break;
-@@ -1193,7 +971,7 @@ struct bss_descriptor *libertas_find_bssid_in_list(wlan_adapter * adapter,
- break;
- }
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
-
- out:
- lbs_deb_leave_args(LBS_DEB_SCAN, "found_bss %p", found_bss);
-@@ -1205,14 +983,14 @@ out:
- *
- * Used in association code
- *
-- * @param adapter A pointer to wlan_adapter
-+ * @param priv A pointer to struct lbs_private
- * @param ssid SSID to find in the list
- * @param bssid BSSID to qualify the SSID selection (if provided)
- * @param mode Network mode: Infrastructure or IBSS
- *
- * @return index in BSSID list
- */
--struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
-+struct bss_descriptor *lbs_find_ssid_in_list(struct lbs_private *priv,
- u8 *ssid, u8 ssid_len, u8 * bssid, u8 mode,
- int channel)
- {
-@@ -1223,14 +1001,14 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-
-- list_for_each_entry (iter_bss, &adapter->network_list, list) {
-+ list_for_each_entry (iter_bss, &priv->network_list, list) {
- if ( !tmp_oldest
- || (iter_bss->last_scanned < tmp_oldest->last_scanned))
- tmp_oldest = iter_bss;
-
-- if (libertas_ssid_cmp(iter_bss->ssid, iter_bss->ssid_len,
-+ if (lbs_ssid_cmp(iter_bss->ssid, iter_bss->ssid_len,
- ssid, ssid_len) != 0)
- continue; /* ssid doesn't match */
- if (bssid && compare_ether_addr(iter_bss->bssid, bssid) != 0)
-@@ -1241,7 +1019,7 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
- switch (mode) {
- case IW_MODE_INFRA:
- case IW_MODE_ADHOC:
-- if (!is_network_compatible(adapter, iter_bss, mode))
-+ if (!is_network_compatible(priv, iter_bss, mode))
- break;
-
- if (bssid) {
-@@ -1266,7 +1044,7 @@ struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
- }
-
- out:
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
- lbs_deb_leave_args(LBS_DEB_SCAN, "found_bss %p", found_bss);
- return found_bss;
- }
-@@ -1277,12 +1055,13 @@ out:
- * Search the scan table for the best SSID that also matches the current
- * adapter network preference (infrastructure or adhoc)
- *
-- * @param adapter A pointer to wlan_adapter
-+ * @param priv A pointer to struct lbs_private
- *
- * @return index in BSSID list
- */
--static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * adapter,
-- u8 mode)
-+static struct bss_descriptor *lbs_find_best_ssid_in_list(
-+ struct lbs_private *priv,
-+ u8 mode)
- {
- u8 bestrssi = 0;
- struct bss_descriptor * iter_bss;
-@@ -1290,13 +1069,13 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-
-- list_for_each_entry (iter_bss, &adapter->network_list, list) {
-+ list_for_each_entry (iter_bss, &priv->network_list, list) {
- switch (mode) {
- case IW_MODE_INFRA:
- case IW_MODE_ADHOC:
-- if (!is_network_compatible(adapter, iter_bss, mode))
-+ if (!is_network_compatible(priv, iter_bss, mode))
- break;
- if (SCAN_RSSI(iter_bss->rssi) <= bestrssi)
- break;
-@@ -1313,7 +1092,7 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
- }
- }
-
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
- lbs_deb_leave_args(LBS_DEB_SCAN, "best_bss %p", best_bss);
- return best_bss;
- }
-@@ -1323,27 +1102,24 @@ static struct bss_descriptor * libertas_find_best_ssid_in_list(wlan_adapter * ad
- *
- * Used from association worker.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param pSSID A pointer to AP's ssid
- *
- * @return 0--success, otherwise--fail
- */
--int libertas_find_best_network_ssid(wlan_private * priv,
-+int lbs_find_best_network_ssid(struct lbs_private *priv,
- u8 *out_ssid, u8 *out_ssid_len, u8 preferred_mode, u8 *out_mode)
- {
-- wlan_adapter *adapter = priv->adapter;
- int ret = -1;
- struct bss_descriptor * found;
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-- wlan_scan_networks(priv, NULL, 1);
-- if (adapter->surpriseremoved)
-+ lbs_scan_networks(priv, NULL, 1);
-+ if (priv->surpriseremoved)
- goto out;
-
-- wait_event_interruptible(adapter->cmd_pending, !adapter->nr_cmd_pending);
--
-- found = libertas_find_best_ssid_in_list(adapter, preferred_mode);
-+ found = lbs_find_best_ssid_in_list(priv, preferred_mode);
- if (found && (found->ssid_len > 0)) {
- memcpy(out_ssid, &found->ssid, IW_ESSID_MAX_SIZE);
- *out_ssid_len = found->ssid_len;
-@@ -1356,57 +1132,24 @@ out:
- return ret;
- }
-
--/**
-- * @brief Scan Network
-- *
-- * @param dev A pointer to net_device structure
-- * @param info A pointer to iw_request_info structure
-- * @param vwrq A pointer to iw_param structure
-- * @param extra A pointer to extra data buf
-- *
-- * @return 0 --success, otherwise fail
-- */
--int libertas_set_scan(struct net_device *dev, struct iw_request_info *info,
-- struct iw_param *vwrq, char *extra)
--{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
--
-- lbs_deb_enter(LBS_DEB_SCAN);
--
-- if (!delayed_work_pending(&priv->scan_work)) {
-- queue_delayed_work(priv->work_thread, &priv->scan_work,
-- msecs_to_jiffies(50));
-- }
--
-- if (adapter->surpriseremoved)
-- return -1;
--
-- lbs_deb_leave(LBS_DEB_SCAN);
-- return 0;
--}
--
-
- /**
- * @brief Send a scan command for all available channels filtered on a spec
- *
- * Used in association code and from debugfs
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param ssid A pointer to the SSID to scan for
- * @param ssid_len Length of the SSID
- * @param clear_ssid Should existing scan results with this SSID
- * be cleared?
-- * @param prequestedssid A pointer to AP's ssid
-- * @param keeppreviousscan Flag used to save/clear scan table before scan
- *
- * @return 0-success, otherwise fail
- */
--int libertas_send_specific_ssid_scan(wlan_private * priv,
-+int lbs_send_specific_ssid_scan(struct lbs_private *priv,
- u8 *ssid, u8 ssid_len, u8 clear_ssid)
- {
-- wlan_adapter *adapter = priv->adapter;
-- struct wlan_ioctl_user_scan_cfg scancfg;
-+ struct lbs_ioctl_user_scan_cfg scancfg;
- int ret = 0;
-
- lbs_deb_enter_args(LBS_DEB_SCAN, "SSID '%s', clear %d",
-@@ -1420,12 +1163,11 @@ int libertas_send_specific_ssid_scan(wlan_private * priv,
- scancfg.ssid_len = ssid_len;
- scancfg.clear_ssid = clear_ssid;
-
-- wlan_scan_networks(priv, &scancfg, 1);
-- if (adapter->surpriseremoved) {
-+ lbs_scan_networks(priv, &scancfg, 1);
-+ if (priv->surpriseremoved) {
- ret = -1;
- goto out;
- }
-- wait_event_interruptible(adapter->cmd_pending, !adapter->nr_cmd_pending);
-
- out:
- lbs_deb_leave_args(LBS_DEB_SCAN, "ret %d", ret);
-@@ -1441,13 +1183,13 @@ out:
- /* */
- /*********************************************************************/
-
++ rtl818x_iowrite8(priv, &priv->map->TESTR, 0x0D); msleep(1);
+
- #define MAX_CUSTOM_LEN 64
-
--static inline char *libertas_translate_scan(wlan_private *priv,
-+static inline char *lbs_translate_scan(struct lbs_private *priv,
- char *start, char *stop,
- struct bss_descriptor *bss)
- {
-- wlan_adapter *adapter = priv->adapter;
- struct chan_freq_power *cfp;
- char *current_val; /* For rates */
- struct iw_event iwe; /* Temporary buffer */
-@@ -1459,14 +1201,14 @@ static inline char *libertas_translate_scan(wlan_private *priv,
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0, bss->channel);
-+ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, bss->channel);
- if (!cfp) {
- lbs_deb_scan("Invalid channel number %d\n", bss->channel);
- start = NULL;
- goto out;
- }
-
-- /* First entry *MUST* be the AP BSSID */
-+ /* First entry *MUST* be the BSSID */
- iwe.cmd = SIOCGIWAP;
- iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
- memcpy(iwe.u.ap_addr.sa_data, &bss->bssid, ETH_ALEN);
-@@ -1502,25 +1244,25 @@ static inline char *libertas_translate_scan(wlan_private *priv,
- if (iwe.u.qual.qual > 100)
- iwe.u.qual.qual = 100;
-
-- if (adapter->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
-+ if (priv->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
- iwe.u.qual.noise = MRVDRV_NF_DEFAULT_SCAN_VALUE;
- } else {
- iwe.u.qual.noise =
-- CAL_NF(adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
-+ CAL_NF(priv->NF[TYPE_BEACON][TYPE_NOAVG]);
- }
-
- /* Locally created ad-hoc BSSs won't have beacons if this is the
- * only station in the adhoc network; so get signal strength
- * from receive statistics.
- */
-- if ((adapter->mode == IW_MODE_ADHOC)
-- && adapter->adhoccreate
-- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len,
-+ if ((priv->mode == IW_MODE_ADHOC)
-+ && priv->adhoccreate
-+ && !lbs_ssid_cmp(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len,
- bss->ssid, bss->ssid_len)) {
- int snr, nf;
-- snr = adapter->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
-- nf = adapter->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
-+ snr = priv->SNR[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
-+ nf = priv->NF[TYPE_RXPD][TYPE_AVG] / AVG_SCALE;
- iwe.u.qual.level = CAL_RSSI(snr, nf);
- }
- start = iwe_stream_add_event(start, stop, &iwe, IW_EV_QUAL_LEN);
-@@ -1549,10 +1291,10 @@ static inline char *libertas_translate_scan(wlan_private *priv,
- stop, &iwe, IW_EV_PARAM_LEN);
- }
- if ((bss->mode == IW_MODE_ADHOC)
-- && !libertas_ssid_cmp(adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len,
-+ && !lbs_ssid_cmp(priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len,
- bss->ssid, bss->ssid_len)
-- && adapter->adhoccreate) {
-+ && priv->adhoccreate) {
- iwe.u.bitrate.value = 22 * 500000;
- current_val = iwe_stream_add_value(start, current_val,
- stop, &iwe, IW_EV_PARAM_LEN);
-@@ -1596,6 +1338,54 @@ out:
- return start;
- }
-
++ rtl8225_rf_set_tx_power(dev, 1);
+
-+/**
-+ * @brief Handle Scan Network ioctl
-+ *
-+ * @param dev A pointer to net_device structure
-+ * @param info A pointer to iw_request_info structure
-+ * @param vwrq A pointer to iw_param structure
-+ * @param extra A pointer to extra data buf
-+ *
-+ * @return 0 --success, otherwise fail
-+ */
-+int lbs_set_scan(struct net_device *dev, struct iw_request_info *info,
-+ struct iw_param *wrqu, char *extra)
++ /* RX antenna default to A */
++ rtl8225_write_phy_cck(dev, 0x10, 0x9b); msleep(1); /* B: 0xDB */
++ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1); /* B: 0x10 */
++
++ rtl818x_iowrite8(priv, &priv->map->TX_ANTENNA, 0x03); /* B: 0x00 */
++ msleep(1);
++ rtl818x_iowrite32(priv, (__le32 __iomem *)((void __iomem *)priv->map + 0x94), 0x15c00002);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++
++ rtl8225_write(dev, 0x0c, 0x50);
++ /* set OFDM initial gain */
++ rtl8225_write_phy_ofdm(dev, 0x0d, rtl8225_gain[4 * 4]);
++ rtl8225_write_phy_ofdm(dev, 0x23, rtl8225_gain[4 * 4 + 1]);
++ rtl8225_write_phy_ofdm(dev, 0x1b, rtl8225_gain[4 * 4 + 2]);
++ rtl8225_write_phy_ofdm(dev, 0x1d, rtl8225_gain[4 * 4 + 3]);
++ /* set CCK threshold */
++ rtl8225_write_phy_cck(dev, 0x41, rtl8225_threshold[0]);
++}
++
++static const u8 rtl8225z2_tx_power_cck_ch14[] = {
++ 0x36, 0x35, 0x2e, 0x1b, 0x00, 0x00, 0x00, 0x00
++};
++
++static const u8 rtl8225z2_tx_power_cck_B[] = {
++ 0x30, 0x2f, 0x29, 0x21, 0x19, 0x10, 0x08, 0x04
++};
++
++static const u8 rtl8225z2_tx_power_cck_A[] = {
++ 0x33, 0x32, 0x2b, 0x23, 0x1a, 0x11, 0x08, 0x04
++};
++
++static const u8 rtl8225z2_tx_power_cck[] = {
++ 0x36, 0x35, 0x2e, 0x25, 0x1c, 0x12, 0x09, 0x04
++};
++
++static void rtl8225z2_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
+{
-+ struct lbs_private *priv = dev->priv;
++ struct rtl8180_priv *priv = dev->priv;
++ u8 cck_power, ofdm_power;
++ const u8 *tmp;
++ int i;
+
-+ lbs_deb_enter(LBS_DEB_SCAN);
++ cck_power = priv->channels[channel - 1].val & 0xFF;
++ ofdm_power = priv->channels[channel - 1].val >> 8;
+
-+ if (!netif_running(dev))
-+ return -ENETDOWN;
++ if (channel == 14)
++ tmp = rtl8225z2_tx_power_cck_ch14;
++ else if (cck_power == 12)
++ tmp = rtl8225z2_tx_power_cck_B;
++ else if (cck_power == 13)
++ tmp = rtl8225z2_tx_power_cck_A;
++ else
++ tmp = rtl8225z2_tx_power_cck;
+
-+ /* mac80211 does this:
-+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
-+ if (sdata->type != IEEE80211_IF_TYPE_xxx)
-+ return -EOPNOTSUPP;
++ for (i = 0; i < 8; i++)
++ rtl8225_write_phy_cck(dev, 0x44 + i, *tmp++);
+
-+ if (wrqu->data.length == sizeof(struct iw_scan_req) &&
-+ wrqu->data.flags & IW_SCAN_THIS_ESSID) {
-+ req = (struct iw_scan_req *)extra;
-+ ssid = req->essid;
-+ ssid_len = req->essid_len;
++ cck_power = min(cck_power, (u8)35);
++ if (cck_power == 13 || cck_power == 14)
++ cck_power = 12;
++ if (cck_power >= 15)
++ cck_power -= 2;
++
++ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_CCK, cck_power);
++ rtl818x_ioread8(priv, &priv->map->TX_GAIN_CCK);
++ msleep(1);
++
++ ofdm_power = min(ofdm_power, (u8)35);
++ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_OFDM, ofdm_power);
++
++ rtl8225_write_phy_ofdm(dev, 2, 0x62);
++ rtl8225_write_phy_ofdm(dev, 5, 0x00);
++ rtl8225_write_phy_ofdm(dev, 6, 0x40);
++ rtl8225_write_phy_ofdm(dev, 7, 0x00);
++ rtl8225_write_phy_ofdm(dev, 8, 0x40);
++
++ msleep(1);
++}
++
++static const u16 rtl8225z2_rxgain[] = {
++ 0x0000, 0x0001, 0x0002, 0x0003, 0x0004, 0x0005, 0x0008, 0x0009,
++ 0x000a, 0x000b, 0x0102, 0x0103, 0x0104, 0x0105, 0x0140, 0x0141,
++ 0x0142, 0x0143, 0x0144, 0x0145, 0x0180, 0x0181, 0x0182, 0x0183,
++ 0x0184, 0x0185, 0x0188, 0x0189, 0x018a, 0x018b, 0x0243, 0x0244,
++ 0x0245, 0x0280, 0x0281, 0x0282, 0x0283, 0x0284, 0x0285, 0x0288,
++ 0x0289, 0x028a, 0x028b, 0x028c, 0x0342, 0x0343, 0x0344, 0x0345,
++ 0x0380, 0x0381, 0x0382, 0x0383, 0x0384, 0x0385, 0x0388, 0x0389,
++ 0x038a, 0x038b, 0x038c, 0x038d, 0x0390, 0x0391, 0x0392, 0x0393,
++ 0x0394, 0x0395, 0x0398, 0x0399, 0x039a, 0x039b, 0x039c, 0x039d,
++ 0x03a0, 0x03a1, 0x03a2, 0x03a3, 0x03a4, 0x03a5, 0x03a8, 0x03a9,
++ 0x03aa, 0x03ab, 0x03ac, 0x03ad, 0x03b0, 0x03b1, 0x03b2, 0x03b3,
++ 0x03b4, 0x03b5, 0x03b8, 0x03b9, 0x03ba, 0x03bb, 0x03bb
++};
++
++static void rtl8225z2_rf_init(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ int i;
++
++ rtl8180_set_anaparam(priv, RTL8225_ANAPARAM_ON);
++
++ /* host_pci_init */
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
++ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ msleep(200); /* FIXME: ehh?? */
++ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0xFF & ~(1 << 6));
++
++ rtl818x_iowrite32(priv, &priv->map->RF_TIMING, 0x00088008);
++
++ /* TODO: check if we need really to change BRSR to do RF config */
++ rtl818x_ioread16(priv, &priv->map->BRSR);
++ rtl818x_iowrite16(priv, &priv->map->BRSR, 0xFFFF);
++ rtl818x_iowrite32(priv, &priv->map->RF_PARA, 0x00100044);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, 0x44);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++
++ rtl8225_write(dev, 0x0, 0x0B7); msleep(1);
++ rtl8225_write(dev, 0x1, 0xEE0); msleep(1);
++ rtl8225_write(dev, 0x2, 0x44D); msleep(1);
++ rtl8225_write(dev, 0x3, 0x441); msleep(1);
++ rtl8225_write(dev, 0x4, 0x8C3); msleep(1);
++ rtl8225_write(dev, 0x5, 0xC72); msleep(1);
++ rtl8225_write(dev, 0x6, 0x0E6); msleep(1);
++ rtl8225_write(dev, 0x7, 0x82A); msleep(1);
++ rtl8225_write(dev, 0x8, 0x03F); msleep(1);
++ rtl8225_write(dev, 0x9, 0x335); msleep(1);
++ rtl8225_write(dev, 0xa, 0x9D4); msleep(1);
++ rtl8225_write(dev, 0xb, 0x7BB); msleep(1);
++ rtl8225_write(dev, 0xc, 0x850); msleep(1);
++ rtl8225_write(dev, 0xd, 0xCDF); msleep(1);
++ rtl8225_write(dev, 0xe, 0x02B); msleep(1);
++ rtl8225_write(dev, 0xf, 0x114); msleep(100);
++
++ if (!(rtl8225_read(dev, 6) & (1 << 7))) {
++ rtl8225_write(dev, 0x02, 0x0C4D);
++ msleep(200);
++ rtl8225_write(dev, 0x02, 0x044D);
++ msleep(100);
++ /* TODO: readd calibration failure message when the calibration
++ check works */
+ }
-+ */
+
-+ if (!delayed_work_pending(&priv->scan_work))
-+ queue_delayed_work(priv->work_thread, &priv->scan_work,
-+ msecs_to_jiffies(50));
-+ /* set marker that currently a scan is taking place */
-+ priv->last_scanned_channel = -1;
++ rtl8225_write(dev, 0x0, 0x1B7);
++ rtl8225_write(dev, 0x3, 0x002);
++ rtl8225_write(dev, 0x5, 0x004);
+
-+ if (priv->surpriseremoved)
-+ return -EIO;
++ for (i = 0; i < ARRAY_SIZE(rtl8225z2_rxgain); i++) {
++ rtl8225_write(dev, 0x1, i + 1);
++ rtl8225_write(dev, 0x2, rtl8225z2_rxgain[i]);
++ }
+
-+ lbs_deb_leave(LBS_DEB_SCAN);
-+ return 0;
++ rtl8225_write(dev, 0x0, 0x0B7); msleep(100);
++ rtl8225_write(dev, 0x2, 0xC4D);
++
++ msleep(200);
++ rtl8225_write(dev, 0x2, 0x44D);
++ msleep(100);
++
++ rtl8225_write(dev, 0x00, 0x2BF);
++ rtl8225_write(dev, 0xFF, 0xFFFF);
++
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++
++ for (i = 0; i < ARRAY_SIZE(rtl8225_agc); i++) {
++ rtl8225_write_phy_ofdm(dev, 0xB, rtl8225_agc[i]);
++ msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0xA, 0x80 + i);
++ msleep(1);
++ }
++
++ msleep(1);
++
++ rtl8225_write_phy_ofdm(dev, 0x00, 0x01); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x01, 0x02); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x02, 0x62); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x03, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x04, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x05, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x06, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x07, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x08, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x09, 0xfe); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0a, 0x09); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0b, 0x80); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0c, 0x01); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0d, 0x43);
++ rtl8225_write_phy_ofdm(dev, 0x0e, 0xd3); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x0f, 0x38); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x10, 0x84); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x11, 0x06); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x12, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x13, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x14, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x15, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x16, 0x00); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x17, 0x40); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x19, 0x19); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1a, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1b, 0x11); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1c, 0x04); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1d, 0xc5); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1e, 0xb3); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x1f, 0x75); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x20, 0x1f); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x21, 0x27); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x22, 0x16); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x23, 0x80); msleep(1); /* FIXME: not needed? */
++ rtl8225_write_phy_ofdm(dev, 0x24, 0x46); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x25, 0x20); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1);
++ rtl8225_write_phy_ofdm(dev, 0x27, 0x88); msleep(1);
++
++ rtl8225_write_phy_cck(dev, 0x00, 0x98); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x03, 0x20); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x04, 0x7e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x05, 0x12); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x06, 0xfc); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x07, 0x78); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x08, 0x2e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x10, 0x93); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x11, 0x88); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x12, 0x47); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x13, 0xd0);
++ rtl8225_write_phy_cck(dev, 0x19, 0x00);
++ rtl8225_write_phy_cck(dev, 0x1a, 0xa0);
++ rtl8225_write_phy_cck(dev, 0x1b, 0x08);
++ rtl8225_write_phy_cck(dev, 0x40, 0x86);
++ rtl8225_write_phy_cck(dev, 0x41, 0x8a); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x42, 0x15); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x43, 0x18); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x44, 0x36); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x45, 0x35); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x46, 0x2e); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x47, 0x25); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x48, 0x1c); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x49, 0x12); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4a, 0x09); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4b, 0x04); msleep(1);
++ rtl8225_write_phy_cck(dev, 0x4c, 0x05); msleep(1);
++
++ rtl818x_iowrite8(priv, (u8 __iomem *)((void __iomem *)priv->map + 0x5B), 0x0D); msleep(1);
++
++ rtl8225z2_rf_set_tx_power(dev, 1);
++
++ /* RX antenna default to A */
++ rtl8225_write_phy_cck(dev, 0x10, 0x9b); msleep(1); /* B: 0xDB */
++ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1); /* B: 0x10 */
++
++ rtl818x_iowrite8(priv, &priv->map->TX_ANTENNA, 0x03); /* B: 0x00 */
++ msleep(1);
++ rtl818x_iowrite32(priv, (__le32 __iomem *)((void __iomem *)priv->map + 0x94), 0x15c00002);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++}
++
++static void rtl8225_rf_stop(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u8 reg;
++
++ rtl8225_write(dev, 0x4, 0x1f); msleep(1);
++
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
++ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite32(priv, &priv->map->ANAPARAM2, RTL8225_ANAPARAM2_OFF);
++ rtl818x_iowrite32(priv, &priv->map->ANAPARAM, RTL8225_ANAPARAM_OFF);
++ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
++ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+}
+
++static void rtl8225_rf_set_channel(struct ieee80211_hw *dev,
++ struct ieee80211_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
+
- /**
- * @brief Handle Retrieve scan table ioctl
- *
-@@ -1606,12 +1396,11 @@ out:
- *
- * @return 0 --success, otherwise fail
- */
--int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
-+int lbs_get_scan(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
- {
- #define SCAN_ITEM_SIZE 128
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int err = 0;
- char *ev = extra;
- char *stop = ev + dwrq->length;
-@@ -1620,14 +1409,18 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-+ /* iwlist should wait until the current scan is finished */
-+ if (priv->last_scanned_channel)
-+ return -EAGAIN;
++ if (priv->rf->init == rtl8225_rf_init)
++ rtl8225_rf_set_tx_power(dev, conf->channel);
++ else
++ rtl8225z2_rf_set_tx_power(dev, conf->channel);
+
- /* Update RSSI if current BSS is a locally created ad-hoc BSS */
-- if ((adapter->mode == IW_MODE_ADHOC) && adapter->adhoccreate) {
-- libertas_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
-+ if ((priv->mode == IW_MODE_ADHOC) && priv->adhoccreate) {
-+ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
- CMD_OPTION_WAITFORRSP, 0, NULL);
- }
-
-- mutex_lock(&adapter->lock);
-- list_for_each_entry_safe (iter_bss, safe, &adapter->network_list, list) {
-+ mutex_lock(&priv->lock);
-+ list_for_each_entry_safe (iter_bss, safe, &priv->network_list, list) {
- char * next_ev;
- unsigned long stale_time;
-
-@@ -1644,18 +1437,18 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
- stale_time = iter_bss->last_scanned + DEFAULT_MAX_SCAN_AGE;
- if (time_after(jiffies, stale_time)) {
- list_move_tail (&iter_bss->list,
-- &adapter->network_free_list);
-+ &priv->network_free_list);
- clear_bss_descriptor(iter_bss);
- continue;
- }
-
- /* Translate to WE format this entry */
-- next_ev = libertas_translate_scan(priv, ev, stop, iter_bss);
-+ next_ev = lbs_translate_scan(priv, ev, stop, iter_bss);
- if (next_ev == NULL)
- continue;
- ev = next_ev;
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
-
- dwrq->length = (ev - extra);
- dwrq->flags = 0;
-@@ -1677,24 +1470,25 @@ int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
- /**
- * @brief Prepare a scan command to be sent to the firmware
- *
-- * Called from libertas_prepare_and_send_command() in cmd.c
-+ * Called via lbs_prepare_and_send_command(priv, CMD_802_11_SCAN, ...)
-+ * from cmd.c
- *
- * Sends a fixed length data part (specifying the BSS type and BSSID filters)
- * as well as a variable number/length of TLVs to the firmware.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param cmd A pointer to cmd_ds_command structure to be sent to
- * firmware with the cmd_DS_801_11_SCAN structure
-- * @param pdata_buf Void pointer cast of a wlan_scan_cmd_config struct used
-+ * @param pdata_buf Void pointer cast of a lbs_scan_cmd_config struct used
- * to set the fields/TLVs for the command sent to firmware
- *
- * @return 0 or -1
- */
--int libertas_cmd_80211_scan(wlan_private * priv,
-- struct cmd_ds_command *cmd, void *pdata_buf)
-+int lbs_cmd_80211_scan(struct lbs_private *priv,
-+ struct cmd_ds_command *cmd, void *pdata_buf)
- {
- struct cmd_ds_802_11_scan *pscan = &cmd->params.scan;
-- struct wlan_scan_cmd_config *pscancfg = pdata_buf;
-+ struct lbs_scan_cmd_config *pscancfg = pdata_buf;
-
- lbs_deb_enter(LBS_DEB_SCAN);
-
-@@ -1703,32 +1497,14 @@ int libertas_cmd_80211_scan(wlan_private * priv,
- memcpy(pscan->bssid, pscancfg->bssid, ETH_ALEN);
- memcpy(pscan->tlvbuffer, pscancfg->tlvbuffer, pscancfg->tlvbufferlen);
-
-- cmd->command = cpu_to_le16(CMD_802_11_SCAN);
--
- /* size is equal to the sizeof(fixed portions) + the TLV len + header */
- cmd->size = cpu_to_le16(sizeof(pscan->bsstype) + ETH_ALEN
- + pscancfg->tlvbufferlen + S_DS_GEN);
-
-- lbs_deb_scan("SCAN_CMD: command 0x%04x, size %d, seqnum %d\n",
-- le16_to_cpu(cmd->command), le16_to_cpu(cmd->size),
-- le16_to_cpu(cmd->seqnum));
--
- lbs_deb_leave(LBS_DEB_SCAN);
- return 0;
- }
-
--static inline int is_same_network(struct bss_descriptor *src,
-- struct bss_descriptor *dst)
--{
-- /* A network is only a duplicate if the channel, BSSID, and ESSID
-- * all match. We treat all <hidden> with the same BSSID and channel
-- * as one network */
-- return ((src->ssid_len == dst->ssid_len) &&
-- (src->channel == dst->channel) &&
-- !compare_ether_addr(src->bssid, dst->bssid) &&
-- !memcmp(src->ssid, dst->ssid, src->ssid_len));
--}
--
- /**
- * @brief This function handles the command response of scan
- *
-@@ -1750,14 +1526,13 @@ static inline int is_same_network(struct bss_descriptor *src,
- * | bufsize and sizeof the fixed fields above) |
- * .-----------------------------------------------------------.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param resp A pointer to cmd_ds_command
- *
- * @return 0 or -1
- */
--int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
-+int lbs_ret_80211_scan(struct lbs_private *priv, struct cmd_ds_command *resp)
- {
-- wlan_adapter *adapter = priv->adapter;
- struct cmd_ds_802_11_scan_rsp *pscan;
- struct bss_descriptor * iter_bss;
- struct bss_descriptor * safe;
-@@ -1771,11 +1546,11 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
- lbs_deb_enter(LBS_DEB_SCAN);
-
- /* Prune old entries from scan table */
-- list_for_each_entry_safe (iter_bss, safe, &adapter->network_list, list) {
-+ list_for_each_entry_safe (iter_bss, safe, &priv->network_list, list) {
- unsigned long stale_time = iter_bss->last_scanned + DEFAULT_MAX_SCAN_AGE;
- if (time_before(jiffies, stale_time))
- continue;
-- list_move_tail (&iter_bss->list, &adapter->network_free_list);
-+ list_move_tail (&iter_bss->list, &priv->network_free_list);
- clear_bss_descriptor(iter_bss);
- }
-
-@@ -1789,12 +1564,11 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
- goto done;
- }
-
-- bytesleft = le16_to_cpu(get_unaligned((u16*)&pscan->bssdescriptsize));
-+ bytesleft = le16_to_cpu(pscan->bssdescriptsize);
- lbs_deb_scan("SCAN_RESP: bssdescriptsize %d\n", bytesleft);
-
-- scanrespsize = le16_to_cpu(get_unaligned((u16*)&resp->size));
-- lbs_deb_scan("SCAN_RESP: returned %d AP before parsing\n",
-- pscan->nr_sets);
-+ scanrespsize = le16_to_cpu(resp->size);
-+ lbs_deb_scan("SCAN_RESP: scan results %d\n", pscan->nr_sets);
-
- pbssinfo = pscan->bssdesc_and_tlvbuffer;
-
-@@ -1821,14 +1595,14 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
-
- /* Process the data fields and IEs returned for this BSS */
- memset(&new, 0, sizeof (struct bss_descriptor));
-- if (libertas_process_bss(&new, &pbssinfo, &bytesleft) != 0) {
-+ if (lbs_process_bss(&new, &pbssinfo, &bytesleft) != 0) {
- /* error parsing the scan response, skipped */
- lbs_deb_scan("SCAN_RESP: process_bss returned ERROR\n");
- continue;
- }
-
- /* Try to find this bss in the scan table */
-- list_for_each_entry (iter_bss, &adapter->network_list, list) {
-+ list_for_each_entry (iter_bss, &priv->network_list, list) {
- if (is_same_network(iter_bss, &new)) {
- found = iter_bss;
- break;
-@@ -1842,21 +1616,21 @@ int libertas_ret_80211_scan(wlan_private * priv, struct cmd_ds_command *resp)
- if (found) {
- /* found, clear it */
- clear_bss_descriptor(found);
-- } else if (!list_empty(&adapter->network_free_list)) {
-+ } else if (!list_empty(&priv->network_free_list)) {
- /* Pull one from the free list */
-- found = list_entry(adapter->network_free_list.next,
-+ found = list_entry(priv->network_free_list.next,
- struct bss_descriptor, list);
-- list_move_tail(&found->list, &adapter->network_list);
-+ list_move_tail(&found->list, &priv->network_list);
- } else if (oldest) {
- /* If there are no more slots, expire the oldest */
- found = oldest;
- clear_bss_descriptor(found);
-- list_move_tail(&found->list, &adapter->network_list);
-+ list_move_tail(&found->list, &priv->network_list);
- } else {
- continue;
- }
-
-- lbs_deb_scan("SCAN_RESP: BSSID = %s\n",
-+ lbs_deb_scan("SCAN_RESP: BSSID %s\n",
- print_mac(mac, new.bssid));
-
- /* Copy the locally created newbssentry to the scan table */
-diff --git a/drivers/net/wireless/libertas/scan.h b/drivers/net/wireless/libertas/scan.h
-index c29c031..319f70d 100644
---- a/drivers/net/wireless/libertas/scan.h
-+++ b/drivers/net/wireless/libertas/scan.h
-@@ -2,10 +2,10 @@
- * Interface for the wlan network scan routines
- *
- * Driver interface functions and type declarations for the scan module
-- * implemented in wlan_scan.c.
-+ * implemented in scan.c.
- */
--#ifndef _WLAN_SCAN_H
--#define _WLAN_SCAN_H
-+#ifndef _LBS_SCAN_H
-+#define _LBS_SCAN_H
-
- #include <net/ieee80211.h>
- #include "hostcmd.h"
-@@ -13,38 +13,38 @@
- /**
- * @brief Maximum number of channels that can be sent in a setuserscan ioctl
- *
-- * @sa wlan_ioctl_user_scan_cfg
-+ * @sa lbs_ioctl_user_scan_cfg
- */
--#define WLAN_IOCTL_USER_SCAN_CHAN_MAX 50
-+#define LBS_IOCTL_USER_SCAN_CHAN_MAX 50
-
--//! Infrastructure BSS scan type in wlan_scan_cmd_config
--#define WLAN_SCAN_BSS_TYPE_BSS 1
-+//! Infrastructure BSS scan type in lbs_scan_cmd_config
-+#define LBS_SCAN_BSS_TYPE_BSS 1
-
--//! Adhoc BSS scan type in wlan_scan_cmd_config
--#define WLAN_SCAN_BSS_TYPE_IBSS 2
-+//! Adhoc BSS scan type in lbs_scan_cmd_config
-+#define LBS_SCAN_BSS_TYPE_IBSS 2
-
--//! Adhoc or Infrastructure BSS scan type in wlan_scan_cmd_config, no filter
--#define WLAN_SCAN_BSS_TYPE_ANY 3
-+//! Adhoc or Infrastructure BSS scan type in lbs_scan_cmd_config, no filter
-+#define LBS_SCAN_BSS_TYPE_ANY 3
-
- /**
- * @brief Structure used internally in the wlan driver to configure a scan.
- *
- * Sent to the command processing module to configure the firmware
-- * scan command prepared by libertas_cmd_80211_scan.
-+ * scan command prepared by lbs_cmd_80211_scan.
- *
-- * @sa wlan_scan_networks
-+ * @sa lbs_scan_networks
- *
- */
--struct wlan_scan_cmd_config {
-+struct lbs_scan_cmd_config {
- /**
- * @brief BSS type to be sent in the firmware command
- *
- * Field can be used to restrict the types of networks returned in the
- * scan. valid settings are:
- *
-- * - WLAN_SCAN_BSS_TYPE_BSS (infrastructure)
-- * - WLAN_SCAN_BSS_TYPE_IBSS (adhoc)
-- * - WLAN_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
-+ * - LBS_SCAN_BSS_TYPE_BSS (infrastructure)
-+ * - LBS_SCAN_BSS_TYPE_IBSS (adhoc)
-+ * - LBS_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
- */
- u8 bsstype;
-
-@@ -68,12 +68,12 @@ struct wlan_scan_cmd_config {
- };
-
- /**
-- * @brief IOCTL channel sub-structure sent in wlan_ioctl_user_scan_cfg
-+ * @brief IOCTL channel sub-structure sent in lbs_ioctl_user_scan_cfg
- *
- * Multiple instances of this structure are included in the IOCTL command
- * to configure a instance of a scan on the specific channel.
- */
--struct wlan_ioctl_user_scan_chan {
-+struct lbs_ioctl_user_scan_chan {
- u8 channumber; //!< channel Number to scan
- u8 radiotype; //!< Radio type: 'B/G' band = 0, 'A' band = 1
- u8 scantype; //!< Scan type: Active = 0, Passive = 1
-@@ -83,31 +83,26 @@ struct wlan_ioctl_user_scan_chan {
- /**
- * @brief IOCTL input structure to configure an immediate scan cmd to firmware
- *
-- * Used in the setuserscan (WLAN_SET_USER_SCAN) private ioctl. Specifies
-+ * Used in the setuserscan (LBS_SET_USER_SCAN) private ioctl. Specifies
- * a number of parameters to be used in general for the scan as well
-- * as a channel list (wlan_ioctl_user_scan_chan) for each scan period
-+ * as a channel list (lbs_ioctl_user_scan_chan) for each scan period
- * desired.
- *
-- * @sa libertas_set_user_scan_ioctl
-+ * @sa lbs_set_user_scan_ioctl
- */
--struct wlan_ioctl_user_scan_cfg {
-+struct lbs_ioctl_user_scan_cfg {
- /**
- * @brief BSS type to be sent in the firmware command
- *
- * Field can be used to restrict the types of networks returned in the
- * scan. valid settings are:
- *
-- * - WLAN_SCAN_BSS_TYPE_BSS (infrastructure)
-- * - WLAN_SCAN_BSS_TYPE_IBSS (adhoc)
-- * - WLAN_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
-+ * - LBS_SCAN_BSS_TYPE_BSS (infrastructure)
-+ * - LBS_SCAN_BSS_TYPE_IBSS (adhoc)
-+ * - LBS_SCAN_BSS_TYPE_ANY (unrestricted, adhoc and infrastructure)
- */
- u8 bsstype;
-
-- /**
-- * @brief Configure the number of probe requests for active chan scans
-- */
-- u8 numprobes;
--
- /**
- * @brief BSSID filter sent in the firmware command to limit the results
- */
-@@ -124,11 +119,6 @@ struct wlan_ioctl_user_scan_cfg {
-
- /* Clear existing scan results matching this SSID */
- u8 clear_ssid;
--
-- /**
-- * @brief Variable number (fixed maximum) of channels to scan up
-- */
-- struct wlan_ioctl_user_scan_chan chanlist[WLAN_IOCTL_USER_SCAN_CHAN_MAX];
- };
-
- /**
-@@ -174,30 +164,30 @@ struct bss_descriptor {
- struct list_head list;
- };
-
--int libertas_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len);
-+int lbs_ssid_cmp(u8 *ssid1, u8 ssid1_len, u8 *ssid2, u8 ssid2_len);
-
--struct bss_descriptor * libertas_find_ssid_in_list(wlan_adapter * adapter,
-- u8 *ssid, u8 ssid_len, u8 * bssid, u8 mode,
-- int channel);
-+struct bss_descriptor *lbs_find_ssid_in_list(struct lbs_private *priv,
-+ u8 *ssid, u8 ssid_len, u8 *bssid, u8 mode,
-+ int channel);
-
--struct bss_descriptor * libertas_find_bssid_in_list(wlan_adapter * adapter,
-- u8 * bssid, u8 mode);
-+struct bss_descriptor *lbs_find_bssid_in_list(struct lbs_private *priv,
-+ u8 *bssid, u8 mode);
-
--int libertas_find_best_network_ssid(wlan_private * priv, u8 *out_ssid,
-+int lbs_find_best_network_ssid(struct lbs_private *priv, u8 *out_ssid,
- u8 *out_ssid_len, u8 preferred_mode, u8 *out_mode);
-
--int libertas_send_specific_ssid_scan(wlan_private * priv, u8 *ssid,
-+int lbs_send_specific_ssid_scan(struct lbs_private *priv, u8 *ssid,
- u8 ssid_len, u8 clear_ssid);
-
--int libertas_cmd_80211_scan(wlan_private * priv,
-+int lbs_cmd_80211_scan(struct lbs_private *priv,
- struct cmd_ds_command *cmd,
- void *pdata_buf);
-
--int libertas_ret_80211_scan(wlan_private * priv,
-+int lbs_ret_80211_scan(struct lbs_private *priv,
- struct cmd_ds_command *resp);
-
--int wlan_scan_networks(wlan_private * priv,
-- const struct wlan_ioctl_user_scan_cfg * puserscanin,
-+int lbs_scan_networks(struct lbs_private *priv,
-+ const struct lbs_ioctl_user_scan_cfg *puserscanin,
- int full_scan);
-
- struct ifreq;
-@@ -205,11 +195,11 @@ struct ifreq;
- struct iw_point;
- struct iw_param;
- struct iw_request_info;
--int libertas_get_scan(struct net_device *dev, struct iw_request_info *info,
-+int lbs_get_scan(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra);
--int libertas_set_scan(struct net_device *dev, struct iw_request_info *info,
-+int lbs_set_scan(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra);
-
--void libertas_scan_worker(struct work_struct *work);
-+void lbs_scan_worker(struct work_struct *work);
-
--#endif /* _WLAN_SCAN_H */
-+#endif
-diff --git a/drivers/net/wireless/libertas/tx.c b/drivers/net/wireless/libertas/tx.c
-index fbec06c..00d95f7 100644
---- a/drivers/net/wireless/libertas/tx.c
-+++ b/drivers/net/wireless/libertas/tx.c
-@@ -2,6 +2,7 @@
- * This file contains the handling of TX in wlan driver.
- */
- #include <linux/netdevice.h>
-+#include <linux/etherdevice.h>
-
- #include "hostcmd.h"
- #include "radiotap.h"
-@@ -49,188 +50,122 @@ static u32 convert_radiotap_rate_to_mv(u8 rate)
- }
-
- /**
-- * @brief This function processes a single packet and sends
-- * to IF layer
-+ * @brief This function checks the conditions and sends packet to IF
-+ * layer if everything is ok.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param skb A pointer to skb which includes TX packet
- * @return 0 or -1
- */
--static int SendSinglePacket(wlan_private * priv, struct sk_buff *skb)
-+int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
- {
-- int ret = 0;
-- struct txpd localtxpd;
-- struct txpd *plocaltxpd = &localtxpd;
-- u8 *p802x_hdr;
-- struct tx_radiotap_hdr *pradiotap_hdr;
-- u32 new_rate;
-- u8 *ptr = priv->adapter->tmptxbuf;
-+ unsigned long flags;
-+ struct lbs_private *priv = dev->priv;
-+ struct txpd *txpd;
-+ char *p802x_hdr;
-+ uint16_t pkt_len;
-+ int ret;
-
- lbs_deb_enter(LBS_DEB_TX);
-
-- if (priv->adapter->surpriseremoved)
-- return -1;
-+ ret = NETDEV_TX_OK;
++ rtl8225_write(dev, 0x7, rtl8225_chan[conf->channel - 1]);
++ msleep(10);
++
++ if (conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME) {
++ rtl818x_iowrite8(priv, &priv->map->SLOT, 0x9);
++ rtl818x_iowrite8(priv, &priv->map->SIFS, 0x22);
++ rtl818x_iowrite8(priv, &priv->map->DIFS, 0x14);
++ rtl818x_iowrite8(priv, &priv->map->EIFS, 81);
++ rtl818x_iowrite8(priv, &priv->map->CW_VAL, 0x73);
++ } else {
++ rtl818x_iowrite8(priv, &priv->map->SLOT, 0x14);
++ rtl818x_iowrite8(priv, &priv->map->SIFS, 0x44);
++ rtl818x_iowrite8(priv, &priv->map->DIFS, 0x24);
++ rtl818x_iowrite8(priv, &priv->map->EIFS, 81);
++ rtl818x_iowrite8(priv, &priv->map->CW_VAL, 0xa5);
++ }
++}
++
++static const struct rtl818x_rf_ops rtl8225_ops = {
++ .name = "rtl8225",
++ .init = rtl8225_rf_init,
++ .stop = rtl8225_rf_stop,
++ .set_chan = rtl8225_rf_set_channel
++};
++
++static const struct rtl818x_rf_ops rtl8225z2_ops = {
++ .name = "rtl8225z2",
++ .init = rtl8225z2_rf_init,
++ .stop = rtl8225_rf_stop,
++ .set_chan = rtl8225_rf_set_channel
++};
+
-+ /* We need to protect against the queues being restarted before
-+ we get round to stopping them */
-+ spin_lock_irqsave(&priv->driver_lock, flags);
++const struct rtl818x_rf_ops * rtl8180_detect_rf(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u16 reg8, reg9;
+
-+ if (priv->surpriseremoved)
-+ goto free;
-
- if (!skb->len || (skb->len > MRVDRV_ETH_TX_PACKET_BUFFER_SIZE)) {
- lbs_deb_tx("tx err: skb length %d 0 or > %zd\n",
- skb->len, MRVDRV_ETH_TX_PACKET_BUFFER_SIZE);
-- ret = -1;
-- goto done;
-+ /* We'll never manage to send this one; drop it and return 'OK' */
++ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
++ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ msleep(100);
+
-+ priv->stats.tx_dropped++;
-+ priv->stats.tx_errors++;
-+ goto free;
- }
-
-- memset(plocaltxpd, 0, sizeof(struct txpd));
-
-- plocaltxpd->tx_packet_length = cpu_to_le16(skb->len);
-+ netif_stop_queue(priv->dev);
-+ if (priv->mesh_dev)
-+ netif_stop_queue(priv->mesh_dev);
-
-- /* offset of actual data */
-- plocaltxpd->tx_packet_location = cpu_to_le32(sizeof(struct txpd));
-+ if (priv->tx_pending_len) {
-+ /* This can happen if packets come in on the mesh and eth
-+ device simultaneously -- there's no mutual exclusion on
-+ hard_start_xmit() calls between devices. */
-+ lbs_deb_tx("Packet on %s while busy\n", dev->name);
-+ ret = NETDEV_TX_BUSY;
-+ goto unlock;
-+ }
++ rtl8225_write(dev, 0, 0x1B7);
+
-+ priv->tx_pending_len = -1;
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
++ reg8 = rtl8225_read(dev, 8);
++ reg9 = rtl8225_read(dev, 9);
+
-+ lbs_deb_hex(LBS_DEB_TX, "TX Data", skb->data, min_t(unsigned int, skb->len, 100));
++ rtl8225_write(dev, 0, 0x0B7);
+
-+ txpd = (void *)priv->tx_pending_buf;
-+ memset(txpd, 0, sizeof(struct txpd));
-
- p802x_hdr = skb->data;
-- if (priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-+ pkt_len = skb->len;
-
-- /* locate radiotap header */
-- pradiotap_hdr = (struct tx_radiotap_hdr *)skb->data;
-+ if (dev == priv->rtap_net_dev) {
-+ struct tx_radiotap_hdr *rtap_hdr = (void *)skb->data;
-
- /* set txpd fields from the radiotap header */
-- new_rate = convert_radiotap_rate_to_mv(pradiotap_hdr->rate);
-- if (new_rate != 0) {
-- /* use new tx_control[4:0] */
-- plocaltxpd->tx_control = cpu_to_le32(new_rate);
-- }
-+ txpd->tx_control = cpu_to_le32(convert_radiotap_rate_to_mv(rtap_hdr->rate));
-
- /* skip the radiotap header */
-- p802x_hdr += sizeof(struct tx_radiotap_hdr);
-- plocaltxpd->tx_packet_length =
-- cpu_to_le16(le16_to_cpu(plocaltxpd->tx_packet_length)
-- - sizeof(struct tx_radiotap_hdr));
-+ p802x_hdr += sizeof(*rtap_hdr);
-+ pkt_len -= sizeof(*rtap_hdr);
-
-+ /* copy destination address from 802.11 header */
-+ memcpy(txpd->tx_dest_addr_high, p802x_hdr + 4, ETH_ALEN);
-+ } else {
-+ /* copy destination address from 802.3 header */
-+ memcpy(txpd->tx_dest_addr_high, p802x_hdr, ETH_ALEN);
- }
-- /* copy destination address from 802.3 or 802.11 header */
-- if (priv->adapter->monitormode != WLAN_MONITOR_OFF)
-- memcpy(plocaltxpd->tx_dest_addr_high, p802x_hdr + 4, ETH_ALEN);
-- else
-- memcpy(plocaltxpd->tx_dest_addr_high, p802x_hdr, ETH_ALEN);
-
-- lbs_deb_hex(LBS_DEB_TX, "txpd", (u8 *) plocaltxpd, sizeof(struct txpd));
-+ txpd->tx_packet_length = cpu_to_le16(pkt_len);
-+ txpd->tx_packet_location = cpu_to_le32(sizeof(struct txpd));
-
-- if (IS_MESH_FRAME(skb)) {
-- plocaltxpd->tx_control |= cpu_to_le32(TxPD_MESH_FRAME);
-- }
-+ if (dev == priv->mesh_dev)
-+ txpd->tx_control |= cpu_to_le32(TxPD_MESH_FRAME);
-
-- memcpy(ptr, plocaltxpd, sizeof(struct txpd));
-+ lbs_deb_hex(LBS_DEB_TX, "txpd", (u8 *) &txpd, sizeof(struct txpd));
-
-- ptr += sizeof(struct txpd);
-+ lbs_deb_hex(LBS_DEB_TX, "Tx Data", (u8 *) p802x_hdr, le16_to_cpu(txpd->tx_packet_length));
-
-- lbs_deb_hex(LBS_DEB_TX, "Tx Data", (u8 *) p802x_hdr, le16_to_cpu(plocaltxpd->tx_packet_length));
-- memcpy(ptr, p802x_hdr, le16_to_cpu(plocaltxpd->tx_packet_length));
-- ret = priv->hw_host_to_card(priv, MVMS_DAT,
-- priv->adapter->tmptxbuf,
-- le16_to_cpu(plocaltxpd->tx_packet_length) +
-- sizeof(struct txpd));
-+ memcpy(&txpd[1], p802x_hdr, le16_to_cpu(txpd->tx_packet_length));
-
-- if (ret) {
-- lbs_deb_tx("tx err: hw_host_to_card returned 0x%X\n", ret);
-- goto done;
-- }
-+ spin_lock_irqsave(&priv->driver_lock, flags);
-+ priv->tx_pending_len = pkt_len + sizeof(struct txpd);
-
-- lbs_deb_tx("SendSinglePacket succeeds\n");
-+ lbs_deb_tx("%s lined up packet\n", __func__);
-
--done:
-- if (!ret) {
-- priv->stats.tx_packets++;
-- priv->stats.tx_bytes += skb->len;
-- } else {
-- priv->stats.tx_dropped++;
-- priv->stats.tx_errors++;
-- }
-+ priv->stats.tx_packets++;
-+ priv->stats.tx_bytes += skb->len;
-
-- if (!ret && priv->adapter->monitormode != WLAN_MONITOR_OFF) {
-+ dev->trans_start = jiffies;
++ if (reg8 != 0x588 || reg9 != 0x700)
++ return &rtl8225_ops;
+
-+ if (priv->monitormode != LBS_MONITOR_OFF) {
- /* Keep the skb to echo it back once Tx feedback is
- received from FW */
- skb_orphan(skb);
-- /* stop processing outgoing pkts */
-- netif_stop_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_stop_queue(priv->mesh_dev);
-- /* freeze any packets already in our queues */
-- priv->adapter->TxLockFlag = 1;
-- } else {
-- dev_kfree_skb_any(skb);
-- priv->adapter->currenttxskb = NULL;
-- }
-
-- lbs_deb_leave_args(LBS_DEB_TX, "ret %d", ret);
-- return ret;
--}
--
--
--void libertas_tx_runqueue(wlan_private *priv)
--{
-- wlan_adapter *adapter = priv->adapter;
-- int i;
--
-- spin_lock(&adapter->txqueue_lock);
-- for (i = 0; i < adapter->tx_queue_idx; i++) {
-- struct sk_buff *skb = adapter->tx_queue_ps[i];
-- spin_unlock(&adapter->txqueue_lock);
-- SendSinglePacket(priv, skb);
-- spin_lock(&adapter->txqueue_lock);
-- }
-- adapter->tx_queue_idx = 0;
-- spin_unlock(&adapter->txqueue_lock);
--}
--
--static void wlan_tx_queue(wlan_private *priv, struct sk_buff *skb)
--{
-- wlan_adapter *adapter = priv->adapter;
--
-- spin_lock(&adapter->txqueue_lock);
--
-- WARN_ON(priv->adapter->tx_queue_idx >= NR_TX_QUEUE);
-- adapter->tx_queue_ps[adapter->tx_queue_idx++] = skb;
-- if (adapter->tx_queue_idx == NR_TX_QUEUE) {
-- netif_stop_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_stop_queue(priv->mesh_dev);
-+ /* Keep the skb around for when we get feedback */
-+ priv->currenttxskb = skb;
- } else {
-- netif_start_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_start_queue(priv->mesh_dev);
-- }
--
-- spin_unlock(&adapter->txqueue_lock);
--}
--
--/**
-- * @brief This function checks the conditions and sends packet to IF
-- * layer if everything is ok.
-- *
-- * @param priv A pointer to wlan_private structure
-- * @return n/a
-- */
--int libertas_process_tx(wlan_private * priv, struct sk_buff *skb)
--{
-- int ret = -1;
--
-- lbs_deb_enter(LBS_DEB_TX);
-- lbs_deb_hex(LBS_DEB_TX, "TX Data", skb->data, min_t(unsigned int, skb->len, 100));
--
-- if (priv->dnld_sent) {
-- lbs_pr_alert( "TX error: dnld_sent = %d, not sending\n",
-- priv->dnld_sent);
-- goto done;
-- }
--
-- if ((priv->adapter->psstate == PS_STATE_SLEEP) ||
-- (priv->adapter->psstate == PS_STATE_PRE_SLEEP)) {
-- wlan_tx_queue(priv, skb);
-- return ret;
-+ free:
-+ dev_kfree_skb_any(skb);
- }
-+ unlock:
-+ spin_unlock_irqrestore(&priv->driver_lock, flags);
-+ wake_up(&priv->waitq);
-
-- priv->adapter->currenttxskb = skb;
--
-- ret = SendSinglePacket(priv, skb);
--done:
- lbs_deb_leave_args(LBS_DEB_TX, "ret %d", ret);
- return ret;
- }
-@@ -239,24 +174,23 @@ done:
- * @brief This function sends to the host the last transmitted packet,
- * filling the radiotap headers with transmission information.
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param status A 32 bit value containing transmission status.
- *
- * @returns void
- */
--void libertas_send_tx_feedback(wlan_private * priv)
-+void lbs_send_tx_feedback(struct lbs_private *priv)
- {
-- wlan_adapter *adapter = priv->adapter;
- struct tx_radiotap_hdr *radiotap_hdr;
-- u32 status = adapter->eventcause;
-+ u32 status = priv->eventcause;
- int txfail;
- int try_count;
-
-- if (adapter->monitormode == WLAN_MONITOR_OFF ||
-- adapter->currenttxskb == NULL)
-+ if (priv->monitormode == LBS_MONITOR_OFF ||
-+ priv->currenttxskb == NULL)
- return;
-
-- radiotap_hdr = (struct tx_radiotap_hdr *)adapter->currenttxskb->data;
-+ radiotap_hdr = (struct tx_radiotap_hdr *)priv->currenttxskb->data;
-
- txfail = (status >> 24);
-
-@@ -269,14 +203,19 @@ void libertas_send_tx_feedback(wlan_private * priv)
- #endif
- try_count = (status >> 16) & 0xff;
- radiotap_hdr->data_retries = (try_count) ?
-- (1 + adapter->txretrycount - try_count) : 0;
-- libertas_upload_rx_packet(priv, adapter->currenttxskb);
-- adapter->currenttxskb = NULL;
-- priv->adapter->TxLockFlag = 0;
-- if (priv->adapter->connect_status == LIBERTAS_CONNECTED) {
-+ (1 + priv->txretrycount - try_count) : 0;
++ return &rtl8225z2_ops;
++}
+diff --git a/drivers/net/wireless/rtl8180_rtl8225.h b/drivers/net/wireless/rtl8180_rtl8225.h
+new file mode 100644
+index 0000000..310013a
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_rtl8225.h
+@@ -0,0 +1,23 @@
++#ifndef RTL8180_RTL8225_H
++#define RTL8180_RTL8225_H
+
++#define RTL8225_ANAPARAM_ON 0xa0000b59
++#define RTL8225_ANAPARAM2_ON 0x860dec11
++#define RTL8225_ANAPARAM_OFF 0xa00beb59
++#define RTL8225_ANAPARAM2_OFF 0x840dec11
+
-+ priv->currenttxskb->protocol = eth_type_trans(priv->currenttxskb,
-+ priv->rtap_net_dev);
-+ netif_rx(priv->currenttxskb);
++const struct rtl818x_rf_ops * rtl8180_detect_rf(struct ieee80211_hw *);
+
-+ priv->currenttxskb = NULL;
++static inline void rtl8225_write_phy_ofdm(struct ieee80211_hw *dev,
++ u8 addr, u8 data)
++{
++ rtl8180_write_phy(dev, addr, data);
++}
+
-+ if (priv->connect_status == LBS_CONNECTED)
- netif_wake_queue(priv->dev);
-- if (priv->mesh_dev)
-- netif_wake_queue(priv->mesh_dev);
-- }
++static inline void rtl8225_write_phy_cck(struct ieee80211_hw *dev,
++ u8 addr, u8 data)
++{
++ rtl8180_write_phy(dev, addr, data | 0x10000);
++}
+
-+ if (priv->mesh_dev && (priv->mesh_connect_status == LBS_CONNECTED))
-+ netif_wake_queue(priv->mesh_dev);
- }
--EXPORT_SYMBOL_GPL(libertas_send_tx_feedback);
-+EXPORT_SYMBOL_GPL(lbs_send_tx_feedback);
-diff --git a/drivers/net/wireless/libertas/types.h b/drivers/net/wireless/libertas/types.h
-index a43a5f6..f0d5795 100644
---- a/drivers/net/wireless/libertas/types.h
-+++ b/drivers/net/wireless/libertas/types.h
-@@ -1,8 +1,8 @@
- /**
- * This header file contains definition for global types
- */
--#ifndef _WLAN_TYPES_
--#define _WLAN_TYPES_
-+#ifndef _LBS_TYPES_H_
-+#define _LBS_TYPES_H_
-
- #include <linux/if_ether.h>
- #include <asm/byteorder.h>
-@@ -201,22 +201,11 @@ struct mrvlietypes_powercapability {
- s8 maxpower;
- } __attribute__ ((packed));
-
--struct mrvlietypes_rssithreshold {
-+/* used in CMD_802_11_SUBSCRIBE_EVENT for SNR, RSSI and Failure */
-+struct mrvlietypes_thresholds {
- struct mrvlietypesheader header;
-- u8 rssivalue;
-- u8 rssifreq;
--} __attribute__ ((packed));
--
--struct mrvlietypes_snrthreshold {
-- struct mrvlietypesheader header;
-- u8 snrvalue;
-- u8 snrfreq;
--} __attribute__ ((packed));
--
--struct mrvlietypes_failurecount {
-- struct mrvlietypesheader header;
-- u8 failvalue;
-- u8 Failfreq;
-+ u8 value;
-+ u8 freq;
- } __attribute__ ((packed));
-
- struct mrvlietypes_beaconsmissed {
-@@ -250,4 +239,4 @@ struct mrvlietypes_ledgpio {
- struct led_pin ledpin[1];
- } __attribute__ ((packed));
-
--#endif /* _WLAN_TYPES_ */
-+#endif
-diff --git a/drivers/net/wireless/libertas/wext.c b/drivers/net/wireless/libertas/wext.c
-index 395b788..e8bfc26 100644
---- a/drivers/net/wireless/libertas/wext.c
-+++ b/drivers/net/wireless/libertas/wext.c
-@@ -19,30 +19,47 @@
- #include "join.h"
- #include "wext.h"
- #include "assoc.h"
-+#include "cmd.h"
++#endif /* RTL8180_RTL8225_H */
+diff --git a/drivers/net/wireless/rtl8180_sa2400.c b/drivers/net/wireless/rtl8180_sa2400.c
+new file mode 100644
+index 0000000..e08ace7
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_sa2400.c
+@@ -0,0 +1,201 @@
+
++/*
++ * Radio tuning for Philips SA2400 on RTL8180
++ *
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Code from the BSD driver and the rtl8181 project have been
++ * very useful to understand certain things
++ *
++ * I want to thanks the Authors of such projects and the Ndiswrapper
++ * project Authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
+
-+static inline void lbs_postpone_association_work(struct lbs_private *priv)
++#include <linux/init.h>
++#include <linux/pci.h>
++#include <linux/delay.h>
++#include <net/mac80211.h>
++
++#include "rtl8180.h"
++#include "rtl8180_sa2400.h"
++
++static const u32 sa2400_chan[] = {
++ 0x00096c, /* ch1 */
++ 0x080970,
++ 0x100974,
++ 0x180978,
++ 0x000980,
++ 0x080984,
++ 0x100988,
++ 0x18098c,
++ 0x000994,
++ 0x080998,
++ 0x10099c,
++ 0x1809a0,
++ 0x0009a8,
++ 0x0009b4, /* ch 14 */
++};
++
++static void write_sa2400(struct ieee80211_hw *dev, u8 addr, u32 data)
+{
-+ if (priv->surpriseremoved)
-+ return;
-+ cancel_delayed_work(&priv->assoc_work);
-+ queue_delayed_work(priv->work_thread, &priv->assoc_work, HZ / 2);
++ struct rtl8180_priv *priv = dev->priv;
++ u32 phy_config;
++
++ /* MAC will bang bits to the sa2400. sw 3-wire is NOT used */
++ phy_config = 0xb0000000;
++
++ phy_config |= ((u32)(addr & 0xf)) << 24;
++ phy_config |= data & 0xffffff;
++
++ rtl818x_iowrite32(priv,
++ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
++
++ msleep(3);
+}
+
-+static inline void lbs_cancel_association_work(struct lbs_private *priv)
++static void sa2400_write_phy_antenna(struct ieee80211_hw *dev, short chan)
+{
-+ cancel_delayed_work(&priv->assoc_work);
-+ kfree(priv->pending_assoc_req);
-+ priv->pending_assoc_req = NULL;
++ struct rtl8180_priv *priv = dev->priv;
++ u8 ant = SA2400_ANTENNA;
++
++ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
++ ant |= BB_ANTENNA_B;
++
++ if (chan == 14)
++ ant |= BB_ANTATTEN_CHAN14;
++
++ rtl8180_write_phy(dev, 0x10, ant);
++
+}
++
++static void sa2400_rf_set_channel(struct ieee80211_hw *dev,
++ struct ieee80211_conf *conf)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u32 txpw = priv->channels[conf->channel - 1].val & 0xFF;
++ u32 chan = sa2400_chan[conf->channel - 1];
++
++ write_sa2400(dev, 7, txpw);
++
++ sa2400_write_phy_antenna(dev, chan);
++
++ write_sa2400(dev, 0, chan);
++ write_sa2400(dev, 1, 0xbb50);
++ write_sa2400(dev, 2, 0x80);
++ write_sa2400(dev, 3, 0);
++}
++
++static void sa2400_rf_stop(struct ieee80211_hw *dev)
++{
++ write_sa2400(dev, 4, 0);
++}
++
++static void sa2400_rf_init(struct ieee80211_hw *dev)
++{
++ struct rtl8180_priv *priv = dev->priv;
++ u32 anaparam, txconf;
++ u8 firdac;
++ int analogphy = priv->rfparam & RF_PARAM_ANALOGPHY;
++
++ anaparam = priv->anaparam;
++ anaparam &= ~(1 << ANAPARAM_TXDACOFF_SHIFT);
++ anaparam &= ~ANAPARAM_PWR1_MASK;
++ anaparam &= ~ANAPARAM_PWR0_MASK;
++
++ if (analogphy) {
++ anaparam |= SA2400_ANA_ANAPARAM_PWR1_ON << ANAPARAM_PWR1_SHIFT;
++ firdac = 0;
++ } else {
++ anaparam |= (SA2400_DIG_ANAPARAM_PWR1_ON << ANAPARAM_PWR1_SHIFT);
++ anaparam |= (SA2400_ANAPARAM_PWR0_ON << ANAPARAM_PWR0_SHIFT);
++ firdac = 1 << SA2400_REG4_FIRDAC_SHIFT;
++ }
++
++ rtl8180_set_anaparam(priv, anaparam);
++
++ write_sa2400(dev, 0, sa2400_chan[0]);
++ write_sa2400(dev, 1, 0xbb50);
++ write_sa2400(dev, 2, 0x80);
++ write_sa2400(dev, 3, 0);
++ write_sa2400(dev, 4, 0x19340 | firdac);
++ write_sa2400(dev, 5, 0x1dfb | (SA2400_MAX_SENS - 54) << 15);
++ write_sa2400(dev, 4, 0x19348 | firdac); /* calibrate VCO */
++
++ if (!analogphy)
++ write_sa2400(dev, 4, 0x1938c); /*???*/
++
++ write_sa2400(dev, 4, 0x19340 | firdac);
++
++ write_sa2400(dev, 0, sa2400_chan[0]);
++ write_sa2400(dev, 1, 0xbb50);
++ write_sa2400(dev, 2, 0x80);
++ write_sa2400(dev, 3, 0);
++ write_sa2400(dev, 4, 0x19344 | firdac); /* calibrate filter */
++
++ /* new from rtl8180 embedded driver (rtl8181 project) */
++ write_sa2400(dev, 6, 0x13ff | (1 << 23)); /* MANRX */
++ write_sa2400(dev, 8, 0); /* VCO */
++
++ if (analogphy) {
++ rtl8180_set_anaparam(priv, anaparam |
++ (1 << ANAPARAM_TXDACOFF_SHIFT));
++
++ txconf = rtl818x_ioread32(priv, &priv->map->TX_CONF);
++ rtl818x_iowrite32(priv, &priv->map->TX_CONF,
++ txconf | RTL818X_TX_CONF_LOOPBACK_CONT);
++
++ write_sa2400(dev, 4, 0x19341); /* calibrates DC */
++
++ /* a 5us sleep is required here,
++ * we rely on the 3ms delay introduced in write_sa2400 */
++ write_sa2400(dev, 4, 0x19345);
++
++ /* a 20us sleep is required here,
++ * we rely on the 3ms delay introduced in write_sa2400 */
++
++ rtl818x_iowrite32(priv, &priv->map->TX_CONF, txconf);
++
++ rtl8180_set_anaparam(priv, anaparam);
++ }
++ /* end new code */
++
++ write_sa2400(dev, 4, 0x19341 | firdac); /* RTX MODE */
++
++ /* baseband configuration */
++ rtl8180_write_phy(dev, 0, 0x98);
++ rtl8180_write_phy(dev, 3, 0x38);
++ rtl8180_write_phy(dev, 4, 0xe0);
++ rtl8180_write_phy(dev, 5, 0x90);
++ rtl8180_write_phy(dev, 6, 0x1a);
++ rtl8180_write_phy(dev, 7, 0x64);
++
++ sa2400_write_phy_antenna(dev, 1);
++
++ rtl8180_write_phy(dev, 0x11, 0x80);
++
++ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
++ RTL818X_CONFIG2_ANTENNA_DIV)
++ rtl8180_write_phy(dev, 0x12, 0xc7); /* enable ant diversity */
++ else
++ rtl8180_write_phy(dev, 0x12, 0x47); /* disable ant diversity */
++
++ rtl8180_write_phy(dev, 0x13, 0x90 | priv->csthreshold);
++
++ rtl8180_write_phy(dev, 0x19, 0x0);
++ rtl8180_write_phy(dev, 0x1a, 0xa0);
++}
++
++const struct rtl818x_rf_ops sa2400_rf_ops = {
++ .name = "Philips",
++ .init = sa2400_rf_init,
++ .stop = sa2400_rf_stop,
++ .set_chan = sa2400_rf_set_channel
++};
+diff --git a/drivers/net/wireless/rtl8180_sa2400.h b/drivers/net/wireless/rtl8180_sa2400.h
+new file mode 100644
+index 0000000..a4aaa0d
+--- /dev/null
++++ b/drivers/net/wireless/rtl8180_sa2400.h
+@@ -0,0 +1,36 @@
++#ifndef RTL8180_SA2400_H
++#define RTL8180_SA2400_H
++
++/*
++ * Radio tuning for Philips SA2400 on RTL8180
++ *
++ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
++ *
++ * Code from the BSD driver and the rtl8181 project have been
++ * very useful to understand certain things
++ *
++ * I want to thanks the Authors of such projects and the Ndiswrapper
++ * project Authors.
++ *
++ * A special Big Thanks also is for all people who donated me cards,
++ * making possible the creation of the original rtl8180 driver
++ * from which this code is derived!
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#define SA2400_ANTENNA 0x91
++#define SA2400_DIG_ANAPARAM_PWR1_ON 0x8
++#define SA2400_ANA_ANAPARAM_PWR1_ON 0x28
++#define SA2400_ANAPARAM_PWR0_ON 0x3
++
++/* RX sensitivity in dbm */
++#define SA2400_MAX_SENS 85
++
++#define SA2400_REG4_FIRDAC_SHIFT 7
++
++extern const struct rtl818x_rf_ops sa2400_rf_ops;
++
++#endif /* RTL8180_SA2400_H */
+diff --git a/drivers/net/wireless/rtl8187.h b/drivers/net/wireless/rtl8187.h
+index 6ad322e..8680a0b 100644
+--- a/drivers/net/wireless/rtl8187.h
++++ b/drivers/net/wireless/rtl8187.h
+@@ -64,9 +64,9 @@ struct rtl8187_tx_hdr {
+ struct rtl8187_priv {
+ /* common between rtl818x drivers */
+ struct rtl818x_csr *map;
+- void (*rf_init)(struct ieee80211_hw *);
++ const struct rtl818x_rf_ops *rf;
++ struct ieee80211_vif *vif;
+ int mode;
+- int if_id;
-
- /**
- * @brief Find the channel frequency power info with specific channel
- *
-- * @param adapter A pointer to wlan_adapter structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param band it can be BAND_A, BAND_G or BAND_B
- * @param channel the channel for looking
- * @return A pointer to struct chan_freq_power structure or NULL if not find.
- */
--struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * adapter,
-- u8 band, u16 channel)
-+struct chan_freq_power *lbs_find_cfp_by_band_and_channel(
-+ struct lbs_private *priv,
-+ u8 band,
-+ u16 channel)
- {
- struct chan_freq_power *cfp = NULL;
- struct region_channel *rc;
-- int count = sizeof(adapter->region_channel) /
-- sizeof(adapter->region_channel[0]);
- int i, j;
-
-- for (j = 0; !cfp && (j < count); j++) {
-- rc = &adapter->region_channel[j];
-+ for (j = 0; !cfp && (j < ARRAY_SIZE(priv->region_channel)); j++) {
-+ rc = &priv->region_channel[j];
-
-- if (adapter->enable11d)
-- rc = &adapter->universal_channel[j];
-+ if (priv->enable11d)
-+ rc = &priv->universal_channel[j];
- if (!rc->valid || !rc->CFP)
- continue;
- if (rc->band != band)
-@@ -56,7 +73,7 @@ struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * ada
+ /* rtl8187 specific */
+ struct ieee80211_channel channels[14];
+diff --git a/drivers/net/wireless/rtl8187_dev.c b/drivers/net/wireless/rtl8187_dev.c
+index bd1ab3b..0d71716 100644
+--- a/drivers/net/wireless/rtl8187_dev.c
++++ b/drivers/net/wireless/rtl8187_dev.c
+@@ -150,7 +150,8 @@ static int rtl8187_tx(struct ieee80211_hw *dev, struct sk_buff *skb,
+ flags |= RTL8187_TX_FLAG_MORE_FRAG;
+ if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS) {
+ flags |= RTL8187_TX_FLAG_RTS;
+- rts_dur = ieee80211_rts_duration(dev, priv->if_id, skb->len, control);
++ rts_dur = ieee80211_rts_duration(dev, priv->vif,
++ skb->len, control);
}
+ if (control->flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
+ flags |= RTL8187_TX_FLAG_CTS;
+@@ -227,6 +228,7 @@ static void rtl8187_rx_cb(struct urb *urb)
+ rx_status.channel = dev->conf.channel;
+ rx_status.phymode = dev->conf.phymode;
+ rx_status.mactime = le64_to_cpu(hdr->mac_time);
++ rx_status.flag |= RX_FLAG_TSFT;
+ if (flags & (1 << 13))
+ rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
+ ieee80211_rx_irqsafe(dev, skb, &rx_status);
+@@ -392,37 +394,19 @@ static int rtl8187_init_hw(struct ieee80211_hw *dev)
+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FF7);
+ msleep(100);
- if (!cfp && channel)
-- lbs_deb_wext("libertas_find_cfp_by_band_and_channel: can't find "
-+ lbs_deb_wext("lbs_find_cfp_by_band_and_channel: can't find "
- "cfp by band %d / channel %d\n", band, channel);
-
- return cfp;
-@@ -65,25 +82,25 @@ struct chan_freq_power *libertas_find_cfp_by_band_and_channel(wlan_adapter * ada
- /**
- * @brief Find the channel frequency power info with specific frequency
- *
-- * @param adapter A pointer to wlan_adapter structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param band it can be BAND_A, BAND_G or BAND_B
- * @param freq the frequency for looking
- * @return A pointer to struct chan_freq_power structure or NULL if not find.
- */
--static struct chan_freq_power *find_cfp_by_band_and_freq(wlan_adapter * adapter,
-- u8 band, u32 freq)
-+static struct chan_freq_power *find_cfp_by_band_and_freq(
-+ struct lbs_private *priv,
-+ u8 band,
-+ u32 freq)
- {
- struct chan_freq_power *cfp = NULL;
- struct region_channel *rc;
-- int count = sizeof(adapter->region_channel) /
-- sizeof(adapter->region_channel[0]);
- int i, j;
-
-- for (j = 0; !cfp && (j < count); j++) {
-- rc = &adapter->region_channel[j];
-+ for (j = 0; !cfp && (j < ARRAY_SIZE(priv->region_channel)); j++) {
-+ rc = &priv->region_channel[j];
-
-- if (adapter->enable11d)
-- rc = &adapter->universal_channel[j];
-+ if (priv->enable11d)
-+ rc = &priv->universal_channel[j];
- if (!rc->valid || !rc->CFP)
- continue;
- if (rc->band != band)
-@@ -107,22 +124,21 @@ static struct chan_freq_power *find_cfp_by_band_and_freq(wlan_adapter * adapter,
- /**
- * @brief Set Radio On/OFF
- *
-- * @param priv A pointer to wlan_private structure
-+ * @param priv A pointer to struct lbs_private structure
- * @option Radio Option
- * @return 0 --success, otherwise fail
- */
--static int wlan_radio_ioctl(wlan_private * priv, u8 option)
-+static int lbs_radio_ioctl(struct lbs_private *priv, u8 option)
- {
- int ret = 0;
-- wlan_adapter *adapter = priv->adapter;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- if (adapter->radioon != option) {
-+ if (priv->radioon != option) {
- lbs_deb_wext("switching radio %s\n", option ? "on" : "off");
-- adapter->radioon = option;
-+ priv->radioon = option;
-
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_RADIO_CONTROL,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP, 0, NULL);
-@@ -135,22 +151,23 @@ static int wlan_radio_ioctl(wlan_private * priv, u8 option)
- /**
- * @brief Copy active data rates based on adapter mode and status
- *
-- * @param adapter A pointer to wlan_adapter structure
-+ * @param priv A pointer to struct lbs_private structure
- * @param rate The buf to return the active rates
- */
--static void copy_active_data_rates(wlan_adapter * adapter, u8 * rates)
-+static void copy_active_data_rates(struct lbs_private *priv, u8 *rates)
- {
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- if (adapter->connect_status != LIBERTAS_CONNECTED)
-- memcpy(rates, libertas_bg_rates, MAX_RATES);
-+ if ((priv->connect_status != LBS_CONNECTED) &&
-+ (priv->mesh_connect_status != LBS_CONNECTED))
-+ memcpy(rates, lbs_bg_rates, MAX_RATES);
- else
-- memcpy(rates, adapter->curbssparams.rates, MAX_RATES);
-+ memcpy(rates, priv->curbssparams.rates, MAX_RATES);
-
- lbs_deb_leave(LBS_DEB_WEXT);
- }
+- priv->rf_init(dev);
++ priv->rf->init(dev);
--static int wlan_get_name(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_name(struct net_device *dev, struct iw_request_info *info,
- char *cwrq, char *extra)
- {
+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
+- reg = rtl818x_ioread16(priv, &priv->map->PGSELECT) & 0xfffe;
+- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg | 0x1);
++ reg = rtl818x_ioread8(priv, &priv->map->PGSELECT) & ~1;
++ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg | 1);
+ rtl818x_iowrite16(priv, (__le16 *)0xFFFE, 0x10);
+ rtl818x_iowrite8(priv, &priv->map->TALLY_SEL, 0x80);
+ rtl818x_iowrite8(priv, (u8 *)0xFFFF, 0x60);
+- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg);
++ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg);
-@@ -163,22 +180,21 @@ static int wlan_get_name(struct net_device *dev, struct iw_request_info *info,
return 0;
}
--static int wlan_get_freq(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_freq(struct net_device *dev, struct iw_request_info *info,
- struct iw_freq *fwrq, char *extra)
+-static void rtl8187_set_channel(struct ieee80211_hw *dev, int channel)
+-{
+- u32 reg;
+- struct rtl8187_priv *priv = dev->priv;
+-
+- reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
+- /* Enable TX loopback on MAC level to avoid TX during channel
+- * changes, as this has be seen to causes problems and the
+- * card will stop work until next reset
+- */
+- rtl818x_iowrite32(priv, &priv->map->TX_CONF,
+- reg | RTL818X_TX_CONF_LOOPBACK_MAC);
+- msleep(10);
+- rtl8225_rf_set_channel(dev, channel);
+- msleep(10);
+- rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
+-}
+-
+ static int rtl8187_start(struct ieee80211_hw *dev)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct chan_freq_power *cfp;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0,
-- adapter->curbssparams.channel);
-+ cfp = lbs_find_cfp_by_band_and_channel(priv, 0,
-+ priv->curbssparams.channel);
-
- if (!cfp) {
-- if (adapter->curbssparams.channel)
-+ if (priv->curbssparams.channel)
- lbs_deb_wext("invalid channel %d\n",
-- adapter->curbssparams.channel);
-+ priv->curbssparams.channel);
- return -EINVAL;
- }
+ struct rtl8187_priv *priv = dev->priv;
+@@ -491,7 +475,7 @@ static void rtl8187_stop(struct ieee80211_hw *dev)
+ reg &= ~RTL818X_CMD_RX_ENABLE;
+ rtl818x_iowrite8(priv, &priv->map->CMD, reg);
-@@ -190,16 +206,15 @@ static int wlan_get_freq(struct net_device *dev, struct iw_request_info *info,
- return 0;
- }
+- rtl8225_rf_stop(dev);
++ priv->rf->stop(dev);
--static int wlan_get_wap(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_wap(struct net_device *dev, struct iw_request_info *info,
- struct sockaddr *awrq, char *extra)
+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG4);
+@@ -542,7 +526,19 @@ static void rtl8187_remove_interface(struct ieee80211_hw *dev,
+ static int rtl8187_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+ struct rtl8187_priv *priv = dev->priv;
+- rtl8187_set_channel(dev, conf->channel);
++ u32 reg;
++
++ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
++ /* Enable TX loopback on MAC level to avoid TX during channel
++ * changes, as this has be seen to causes problems and the
++ * card will stop work until next reset
++ */
++ rtl818x_iowrite32(priv, &priv->map->TX_CONF,
++ reg | RTL818X_TX_CONF_LOOPBACK_MAC);
++ msleep(10);
++ priv->rf->set_chan(dev, conf);
++ msleep(10);
++ rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
- lbs_deb_enter(LBS_DEB_WEXT);
+ rtl818x_iowrite8(priv, &priv->map->SIFS, 0x22);
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- memcpy(awrq->sa_data, adapter->curbssparams.bssid, ETH_ALEN);
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ memcpy(awrq->sa_data, priv->curbssparams.bssid, ETH_ALEN);
- } else {
- memset(awrq->sa_data, 0, ETH_ALEN);
- }
-@@ -209,11 +224,10 @@ static int wlan_get_wap(struct net_device *dev, struct iw_request_info *info,
+@@ -565,14 +561,13 @@ static int rtl8187_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
return 0;
}
--static int wlan_set_nick(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_nick(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
+-static int rtl8187_config_interface(struct ieee80211_hw *dev, int if_id,
++static int rtl8187_config_interface(struct ieee80211_hw *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_if_conf *conf)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-@@ -225,25 +239,24 @@ static int wlan_set_nick(struct net_device *dev, struct iw_request_info *info,
- return -E2BIG;
- }
-
-- mutex_lock(&adapter->lock);
-- memset(adapter->nodename, 0, sizeof(adapter->nodename));
-- memcpy(adapter->nodename, extra, dwrq->length);
-- mutex_unlock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-+ memset(priv->nodename, 0, sizeof(priv->nodename));
-+ memcpy(priv->nodename, extra, dwrq->length);
-+ mutex_unlock(&priv->lock);
-
- lbs_deb_leave(LBS_DEB_WEXT);
- return 0;
- }
+ struct rtl8187_priv *priv = dev->priv;
+ int i;
--static int wlan_get_nick(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_nick(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+- priv->if_id = if_id;
+-
+ for (i = 0; i < ETH_ALEN; i++)
+ rtl818x_iowrite8(priv, &priv->map->BSSID[i], conf->bssid[i]);
- lbs_deb_enter(LBS_DEB_WEXT);
+@@ -752,23 +747,16 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
+ eeprom_93cx6_read(&eeprom, RTL8187_EEPROM_TXPWR_BASE,
+ &priv->txpwr_base);
-- dwrq->length = strlen(adapter->nodename);
-- memcpy(extra, adapter->nodename, dwrq->length);
-+ dwrq->length = strlen(priv->nodename);
-+ memcpy(extra, priv->nodename, dwrq->length);
- extra[dwrq->length] = '\0';
+- reg = rtl818x_ioread16(priv, &priv->map->PGSELECT) & ~1;
+- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg | 1);
++ reg = rtl818x_ioread8(priv, &priv->map->PGSELECT) & ~1;
++ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg | 1);
+ /* 0 means asic B-cut, we should use SW 3 wire
+ * bit-by-bit banging for radio. 1 means we can use
+ * USB specific request to write radio registers */
+ priv->asic_rev = rtl818x_ioread8(priv, (u8 *)0xFFFE) & 0x3;
+- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg);
++ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg);
+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
- dwrq->flags = 1; /* active */
-@@ -255,14 +268,13 @@ static int wlan_get_nick(struct net_device *dev, struct iw_request_info *info,
- static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+- rtl8225_write(dev, 0, 0x1B7);
+-
+- if (rtl8225_read(dev, 8) != 0x588 || rtl8225_read(dev, 9) != 0x700)
+- priv->rf_init = rtl8225_rf_init;
+- else
+- priv->rf_init = rtl8225z2_rf_init;
+-
+- rtl8225_write(dev, 0, 0x0B7);
++ priv->rf = rtl8187_detect_rf(dev);
- lbs_deb_enter(LBS_DEB_WEXT);
+ err = ieee80211_register_hw(dev);
+ if (err) {
+@@ -778,8 +766,7 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
- /* Use nickname to indicate that mesh is on */
+ printk(KERN_INFO "%s: hwaddr %s, rtl8187 V%d + %s\n",
+ wiphy_name(dev->wiphy), print_mac(mac, dev->wiphy->perm_addr),
+- priv->asic_rev, priv->rf_init == rtl8225_rf_init ?
+- "rtl8225" : "rtl8225z2");
++ priv->asic_rev, priv->rf->name);
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-+ if (priv->mesh_connect_status == LBS_CONNECTED) {
- strncpy(extra, "Mesh", 12);
- extra[12] = '\0';
- dwrq->length = strlen(extra);
-@@ -277,25 +289,24 @@ static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
return 0;
- }
-
--static int wlan_set_rts(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_rts(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- u32 rthr = vwrq->value;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
- if (vwrq->disabled) {
-- adapter->rtsthsd = rthr = MRVDRV_RTS_MAX_VALUE;
-+ priv->rtsthsd = rthr = MRVDRV_RTS_MAX_VALUE;
- } else {
- if (rthr < MRVDRV_RTS_MIN_VALUE || rthr > MRVDRV_RTS_MAX_VALUE)
- return -EINVAL;
-- adapter->rtsthsd = rthr;
-+ priv->rtsthsd = rthr;
- }
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
- CMD_ACT_SET, CMD_OPTION_WAITFORRSP,
- OID_802_11_RTS_THRESHOLD, &rthr);
-@@ -303,23 +314,22 @@ static int wlan_set_rts(struct net_device *dev, struct iw_request_info *info,
- return ret;
- }
-
--static int wlan_get_rts(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_rts(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- adapter->rtsthsd = 0;
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
-+ priv->rtsthsd = 0;
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
- CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
- OID_802_11_RTS_THRESHOLD, NULL);
- if (ret)
- goto out;
-
-- vwrq->value = adapter->rtsthsd;
-+ vwrq->value = priv->rtsthsd;
- vwrq->disabled = ((vwrq->value < MRVDRV_RTS_MIN_VALUE)
- || (vwrq->value > MRVDRV_RTS_MAX_VALUE));
- vwrq->fixed = 1;
-@@ -329,26 +339,25 @@ out:
- return ret;
+diff --git a/drivers/net/wireless/rtl8187_rtl8225.c b/drivers/net/wireless/rtl8187_rtl8225.c
+index efc4120..b713de1 100644
+--- a/drivers/net/wireless/rtl8187_rtl8225.c
++++ b/drivers/net/wireless/rtl8187_rtl8225.c
+@@ -101,7 +101,7 @@ static void rtl8225_write_8051(struct ieee80211_hw *dev, u8 addr, __le16 data)
+ msleep(2);
}
--static int wlan_set_frag(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_frag(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
+-void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
++static void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
{
- int ret = 0;
- u32 fthr = vwrq->value;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
- if (vwrq->disabled) {
-- adapter->fragthsd = fthr = MRVDRV_FRAG_MAX_VALUE;
-+ priv->fragthsd = fthr = MRVDRV_FRAG_MAX_VALUE;
- } else {
- if (fthr < MRVDRV_FRAG_MIN_VALUE
- || fthr > MRVDRV_FRAG_MAX_VALUE)
- return -EINVAL;
-- adapter->fragthsd = fthr;
-+ priv->fragthsd = fthr;
- }
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
- CMD_ACT_SET, CMD_OPTION_WAITFORRSP,
- OID_802_11_FRAGMENTATION_THRESHOLD, &fthr);
+ struct rtl8187_priv *priv = dev->priv;
-@@ -356,24 +365,23 @@ static int wlan_set_frag(struct net_device *dev, struct iw_request_info *info,
- return ret;
+@@ -111,7 +111,7 @@ void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
+ rtl8225_write_bitbang(dev, addr, data);
}
--static int wlan_get_frag(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_frag(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
+-u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
++static u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- adapter->fragthsd = 0;
-- ret = libertas_prepare_and_send_command(priv,
-+ priv->fragthsd = 0;
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_SNMP_MIB,
- CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
- OID_802_11_FRAGMENTATION_THRESHOLD, NULL);
- if (ret)
- goto out;
-
-- vwrq->value = adapter->fragthsd;
-+ vwrq->value = priv->fragthsd;
- vwrq->disabled = ((vwrq->value < MRVDRV_FRAG_MIN_VALUE)
- || (vwrq->value > MRVDRV_FRAG_MAX_VALUE));
- vwrq->fixed = 1;
-@@ -383,15 +391,14 @@ out:
- return ret;
+ struct rtl8187_priv *priv = dev->priv;
+ u16 reg80, reg82, reg84, out;
+@@ -325,7 +325,7 @@ static void rtl8225_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
+ msleep(1);
}
--static int wlan_get_mode(struct net_device *dev,
-+static int lbs_get_mode(struct net_device *dev,
- struct iw_request_info *info, u32 * uwrq, char *extra)
+-void rtl8225_rf_init(struct ieee80211_hw *dev)
++static void rtl8225_rf_init(struct ieee80211_hw *dev)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- *uwrq = adapter->mode;
-+ *uwrq = priv->mode;
-
- lbs_deb_leave(LBS_DEB_WEXT);
- return 0;
-@@ -409,17 +416,16 @@ static int mesh_wlan_get_mode(struct net_device *dev,
- return 0;
- }
+ struct rtl8187_priv *priv = dev->priv;
+ int i;
+@@ -567,7 +567,7 @@ static const u8 rtl8225z2_gain_bg[] = {
+ 0x63, 0x15, 0xc5 /* -66dBm */
+ };
--static int wlan_get_txpow(struct net_device *dev,
-+static int lbs_get_txpow(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
+-void rtl8225z2_rf_init(struct ieee80211_hw *dev)
++static void rtl8225z2_rf_init(struct ieee80211_hw *dev)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_RF_TX_POWER,
- CMD_ACT_TX_POWER_OPT_GET,
- CMD_OPTION_WAITFORRSP, 0, NULL);
-@@ -427,10 +433,10 @@ static int wlan_get_txpow(struct net_device *dev,
- if (ret)
- goto out;
-
-- lbs_deb_wext("tx power level %d dbm\n", adapter->txpowerlevel);
-- vwrq->value = adapter->txpowerlevel;
-+ lbs_deb_wext("tx power level %d dbm\n", priv->txpowerlevel);
-+ vwrq->value = priv->txpowerlevel;
- vwrq->fixed = 1;
-- if (adapter->radioon) {
-+ if (priv->radioon) {
- vwrq->disabled = 0;
- vwrq->flags = IW_TXPOW_DBM;
- } else {
-@@ -442,12 +448,11 @@ out:
- return ret;
+ struct rtl8187_priv *priv = dev->priv;
+ int i;
+@@ -715,7 +715,7 @@ void rtl8225z2_rf_init(struct ieee80211_hw *dev)
+ rtl818x_iowrite32(priv, (__le32 *)0xFF94, 0x3dc00002);
}
--static int wlan_set_retry(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_retry(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
+-void rtl8225_rf_stop(struct ieee80211_hw *dev)
++static void rtl8225_rf_stop(struct ieee80211_hw *dev)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-@@ -460,9 +465,9 @@ static int wlan_set_retry(struct net_device *dev, struct iw_request_info *info,
- return -EINVAL;
-
- /* Adding 1 to convert retry count to try count */
-- adapter->txretrycount = vwrq->value + 1;
-+ priv->txretrycount = vwrq->value + 1;
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
-+ ret = lbs_prepare_and_send_command(priv, CMD_802_11_SNMP_MIB,
- CMD_ACT_SET,
- CMD_OPTION_WAITFORRSP,
- OID_802_11_TX_RETRYCOUNT, NULL);
-@@ -478,17 +483,16 @@ out:
- return ret;
+ u8 reg;
+ struct rtl8187_priv *priv = dev->priv;
+@@ -731,15 +731,47 @@ void rtl8225_rf_stop(struct ieee80211_hw *dev)
+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
}
--static int wlan_get_retry(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_retry(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int ret = 0;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- adapter->txretrycount = 0;
-- ret = libertas_prepare_and_send_command(priv,
-+ priv->txretrycount = 0;
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_SNMP_MIB,
- CMD_ACT_GET, CMD_OPTION_WAITFORRSP,
- OID_802_11_TX_RETRYCOUNT, NULL);
-@@ -499,7 +503,7 @@ static int wlan_get_retry(struct net_device *dev, struct iw_request_info *info,
- if (!vwrq->flags) {
- vwrq->flags = IW_RETRY_LIMIT;
- /* Subtract 1 to convert try count to retry count */
-- vwrq->value = adapter->txretrycount - 1;
-+ vwrq->value = priv->txretrycount - 1;
- }
-
- out:
-@@ -546,12 +550,11 @@ static inline void sort_channels(struct iw_freq *freq, int num)
- * @param extra A pointer to extra data buf
- * @return 0 --success, otherwise fail
- */
--static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_range(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
+-void rtl8225_rf_set_channel(struct ieee80211_hw *dev, int channel)
++static void rtl8225_rf_set_channel(struct ieee80211_hw *dev,
++ struct ieee80211_conf *conf)
{
- int i, j;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct iw_range *range = (struct iw_range *)extra;
- struct chan_freq_power *cfp;
- u8 rates[MAX_RATES + 1];
-@@ -567,7 +570,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
- range->max_nwid = 0;
-
- memset(rates, 0, sizeof(rates));
-- copy_active_data_rates(adapter, rates);
-+ copy_active_data_rates(priv, rates);
- range->num_bitrates = strnlen(rates, IW_MAX_BITRATES);
- for (i = 0; i < range->num_bitrates; i++)
- range->bitrate[i] = rates[i] * 500000;
-@@ -576,13 +579,14 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
- range->num_bitrates);
-
- range->num_frequency = 0;
-- if (priv->adapter->enable11d &&
-- adapter->connect_status == LIBERTAS_CONNECTED) {
-+ if (priv->enable11d &&
-+ (priv->connect_status == LBS_CONNECTED ||
-+ priv->mesh_connect_status == LBS_CONNECTED)) {
- u8 chan_no;
- u8 band;
-
- struct parsed_region_chan_11d *parsed_region_chan =
-- &adapter->parsed_region_chan;
-+ &priv->parsed_region_chan;
+ struct rtl8187_priv *priv = dev->priv;
- if (parsed_region_chan == NULL) {
- lbs_deb_wext("11d: parsed_region_chan is NULL\n");
-@@ -598,7 +602,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
- lbs_deb_wext("chan_no %d\n", chan_no);
- range->freq[range->num_frequency].i = (long)chan_no;
- range->freq[range->num_frequency].m =
-- (long)libertas_chan_2_freq(chan_no, band) * 100000;
-+ (long)lbs_chan_2_freq(chan_no, band) * 100000;
- range->freq[range->num_frequency].e = 1;
- range->num_frequency++;
- }
-@@ -606,13 +610,12 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
- }
- if (!flag) {
- for (j = 0; (range->num_frequency < IW_MAX_FREQUENCIES)
-- && (j < sizeof(adapter->region_channel)
-- / sizeof(adapter->region_channel[0])); j++) {
-- cfp = adapter->region_channel[j].CFP;
-+ && (j < ARRAY_SIZE(priv->region_channel)); j++) {
-+ cfp = priv->region_channel[j].CFP;
- for (i = 0; (range->num_frequency < IW_MAX_FREQUENCIES)
-- && adapter->region_channel[j].valid
-+ && priv->region_channel[j].valid
- && cfp
-- && (i < adapter->region_channel[j].nrcfp); i++) {
-+ && (i < priv->region_channel[j].nrcfp); i++) {
- range->freq[range->num_frequency].i =
- (long)cfp->channel;
- range->freq[range->num_frequency].m =
-@@ -712,7 +715,7 @@ static int wlan_get_range(struct net_device *dev, struct iw_request_info *info,
- IW_EVENT_CAPA_MASK(SIOCGIWSCAN));
- range->event_capa[1] = IW_EVENT_CAPA_K_1;
+- if (priv->rf_init == rtl8225_rf_init)
+- rtl8225_rf_set_tx_power(dev, channel);
++ if (priv->rf->init == rtl8225_rf_init)
++ rtl8225_rf_set_tx_power(dev, conf->channel);
+ else
+- rtl8225z2_rf_set_tx_power(dev, channel);
++ rtl8225z2_rf_set_tx_power(dev, conf->channel);
-- if (adapter->fwcapinfo & FW_CAPINFO_WPA) {
-+ if (priv->fwcapinfo & FW_CAPINFO_WPA) {
- range->enc_capa = IW_ENC_CAPA_WPA
- | IW_ENC_CAPA_WPA2
- | IW_ENC_CAPA_CIPHER_TKIP
-@@ -724,22 +727,28 @@ out:
- return 0;
+- rtl8225_write(dev, 0x7, rtl8225_chan[channel - 1]);
++ rtl8225_write(dev, 0x7, rtl8225_chan[conf->channel - 1]);
+ msleep(10);
}
-
--static int wlan_set_power(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_power(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-+ if (!priv->ps_supported) {
-+ if (vwrq->disabled)
-+ return 0;
-+ else
-+ return -EINVAL;
-+ }
+
- /* PS is currently supported only in Infrastructure mode
- * Remove this check if it is to be supported in IBSS mode also
- */
-
- if (vwrq->disabled) {
-- adapter->psmode = WLAN802_11POWERMODECAM;
-- if (adapter->psstate != PS_STATE_FULL_POWER) {
-- libertas_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
-+ priv->psmode = LBS802_11POWERMODECAM;
-+ if (priv->psstate != PS_STATE_FULL_POWER) {
-+ lbs_ps_wakeup(priv, CMD_OPTION_WAITFORRSP);
- }
-
- return 0;
-@@ -754,33 +763,32 @@ static int wlan_set_power(struct net_device *dev, struct iw_request_info *info,
- return -EINVAL;
- }
-
-- if (adapter->psmode != WLAN802_11POWERMODECAM) {
-+ if (priv->psmode != LBS802_11POWERMODECAM) {
- return 0;
- }
-
-- adapter->psmode = WLAN802_11POWERMODEMAX_PSP;
-+ priv->psmode = LBS802_11POWERMODEMAX_PSP;
-
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- libertas_ps_sleep(priv, CMD_OPTION_WAITFORRSP);
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ lbs_ps_sleep(priv, CMD_OPTION_WAITFORRSP);
- }
-
- lbs_deb_leave(LBS_DEB_WEXT);
- return 0;
- }
-
--static int wlan_get_power(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_power(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int mode;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- mode = adapter->psmode;
-+ mode = priv->psmode;
-
-- if ((vwrq->disabled = (mode == WLAN802_11POWERMODECAM))
-- || adapter->connect_status == LIBERTAS_DISCONNECTED)
-+ if ((vwrq->disabled = (mode == LBS802_11POWERMODECAM))
-+ || priv->connect_status == LBS_DISCONNECTED)
- {
- goto out;
- }
-@@ -792,7 +800,7 @@ out:
- return 0;
- }
-
--static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
-+static struct iw_statistics *lbs_get_wireless_stats(struct net_device *dev)
- {
- enum {
- POOR = 30,
-@@ -802,8 +810,7 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
- EXCELLENT = 95,
- PERFECT = 100
- };
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- u32 rssi_qual;
- u32 tx_qual;
- u32 quality = 0;
-@@ -813,22 +820,23 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- priv->wstats.status = adapter->mode;
-+ priv->wstats.status = priv->mode;
-
- /* If we're not associated, all quality values are meaningless */
-- if (adapter->connect_status != LIBERTAS_CONNECTED)
-+ if ((priv->connect_status != LBS_CONNECTED) &&
-+ (priv->mesh_connect_status != LBS_CONNECTED))
- goto out;
-
- /* Quality by RSSI */
- priv->wstats.qual.level =
-- CAL_RSSI(adapter->SNR[TYPE_BEACON][TYPE_NOAVG],
-- adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
-+ CAL_RSSI(priv->SNR[TYPE_BEACON][TYPE_NOAVG],
-+ priv->NF[TYPE_BEACON][TYPE_NOAVG]);
-
-- if (adapter->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
-+ if (priv->NF[TYPE_BEACON][TYPE_NOAVG] == 0) {
- priv->wstats.qual.noise = MRVDRV_NF_DEFAULT_SCAN_VALUE;
- } else {
- priv->wstats.qual.noise =
-- CAL_NF(adapter->NF[TYPE_BEACON][TYPE_NOAVG]);
-+ CAL_NF(priv->NF[TYPE_BEACON][TYPE_NOAVG]);
- }
-
- lbs_deb_wext("signal level %#x\n", priv->wstats.qual.level);
-@@ -852,7 +860,7 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
- /* Quality by TX errors */
- priv->wstats.discard.retries = priv->stats.tx_errors;
-
-- tx_retries = le32_to_cpu(adapter->logmsg.retry);
-+ tx_retries = le32_to_cpu(priv->logmsg.retry);
-
- if (tx_retries > 75)
- tx_qual = (90 - tx_retries) * POOR / 15;
-@@ -868,10 +876,10 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
- (PERFECT - VERY_GOOD) / 50 + VERY_GOOD;
- quality = min(quality, tx_qual);
-
-- priv->wstats.discard.code = le32_to_cpu(adapter->logmsg.wepundecryptable);
-- priv->wstats.discard.fragment = le32_to_cpu(adapter->logmsg.rxfrag);
-+ priv->wstats.discard.code = le32_to_cpu(priv->logmsg.wepundecryptable);
-+ priv->wstats.discard.fragment = le32_to_cpu(priv->logmsg.rxfrag);
- priv->wstats.discard.retries = tx_retries;
-- priv->wstats.discard.misc = le32_to_cpu(adapter->logmsg.ackfailure);
-+ priv->wstats.discard.misc = le32_to_cpu(priv->logmsg.ackfailure);
-
- /* Calculate quality */
- priv->wstats.qual.qual = min_t(u8, quality, 100);
-@@ -879,9 +887,9 @@ static struct iw_statistics *wlan_get_wireless_stats(struct net_device *dev)
- stats_valid = 1;
-
- /* update stats asynchronously for future calls */
-- libertas_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
-+ lbs_prepare_and_send_command(priv, CMD_802_11_RSSI, 0,
- 0, 0, NULL);
-- libertas_prepare_and_send_command(priv, CMD_802_11_GET_LOG, 0,
-+ lbs_prepare_and_send_command(priv, CMD_802_11_GET_LOG, 0,
- 0, 0, NULL);
- out:
- if (!stats_valid) {
-@@ -901,19 +909,18 @@ out:
-
- }
-
--static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_freq(struct net_device *dev, struct iw_request_info *info,
- struct iw_freq *fwrq, char *extra)
- {
- int ret = -EINVAL;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct chan_freq_power *cfp;
- struct assoc_request * assoc_req;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- goto out;
-@@ -923,7 +930,7 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
- if (fwrq->e == 1) {
- long f = fwrq->m / 100000;
-
-- cfp = find_cfp_by_band_and_freq(adapter, 0, f);
-+ cfp = find_cfp_by_band_and_freq(priv, 0, f);
- if (!cfp) {
- lbs_deb_wext("invalid freq %ld\n", f);
- goto out;
-@@ -938,7 +945,7 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
- goto out;
- }
-
-- cfp = libertas_find_cfp_by_band_and_channel(adapter, 0, fwrq->m);
-+ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, fwrq->m);
- if (!cfp) {
- goto out;
- }
-@@ -949,23 +956,71 @@ static int wlan_set_freq(struct net_device *dev, struct iw_request_info *info,
- out:
- if (ret == 0) {
- set_bit(ASSOC_FLAG_CHANNEL, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- } else {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
-+ }
-+ mutex_unlock(&priv->lock);
++static const struct rtl818x_rf_ops rtl8225_ops = {
++ .name = "rtl8225",
++ .init = rtl8225_rf_init,
++ .stop = rtl8225_rf_stop,
++ .set_chan = rtl8225_rf_set_channel
++};
+
-+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
-+ return ret;
-+}
++static const struct rtl818x_rf_ops rtl8225z2_ops = {
++ .name = "rtl8225z2",
++ .init = rtl8225z2_rf_init,
++ .stop = rtl8225_rf_stop,
++ .set_chan = rtl8225_rf_set_channel
++};
+
-+static int lbs_mesh_set_freq(struct net_device *dev,
-+ struct iw_request_info *info,
-+ struct iw_freq *fwrq, char *extra)
++const struct rtl818x_rf_ops * rtl8187_detect_rf(struct ieee80211_hw *dev)
+{
-+ struct lbs_private *priv = dev->priv;
-+ struct chan_freq_power *cfp;
-+ int ret = -EINVAL;
-+
-+ lbs_deb_enter(LBS_DEB_WEXT);
-+
-+ /* If setting by frequency, convert to a channel */
-+ if (fwrq->e == 1) {
-+ long f = fwrq->m / 100000;
++ u16 reg8, reg9;
+
-+ cfp = find_cfp_by_band_and_freq(priv, 0, f);
-+ if (!cfp) {
-+ lbs_deb_wext("invalid freq %ld\n", f);
-+ goto out;
-+ }
++ rtl8225_write(dev, 0, 0x1B7);
+
-+ fwrq->e = 0;
-+ fwrq->m = (int) cfp->channel;
-+ }
++ reg8 = rtl8225_read(dev, 8);
++ reg9 = rtl8225_read(dev, 9);
+
-+ /* Setting by channel number */
-+ if (fwrq->m > 1000 || fwrq->e > 0) {
-+ goto out;
-+ }
++ rtl8225_write(dev, 0, 0x0B7);
+
-+ cfp = lbs_find_cfp_by_band_and_channel(priv, 0, fwrq->m);
-+ if (!cfp) {
-+ goto out;
-+ }
++ if (reg8 != 0x588 || reg9 != 0x700)
++ return &rtl8225_ops;
+
-+ if (fwrq->m != priv->curbssparams.channel) {
-+ lbs_deb_wext("mesh channel change forces eth disconnect\n");
-+ if (priv->mode == IW_MODE_INFRA)
-+ lbs_send_deauthentication(priv);
-+ else if (priv->mode == IW_MODE_ADHOC)
-+ lbs_stop_adhoc_network(priv);
- }
-- mutex_unlock(&adapter->lock);
-+ lbs_mesh_config(priv, 1, fwrq->m);
-+ lbs_update_channel(priv);
-+ ret = 0;
-
-+out:
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
- }
-
--static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_rate(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-- u32 new_rate;
-- u16 action;
-+ struct lbs_private *priv = dev->priv;
-+ u8 new_rate = 0;
- int ret = -EINVAL;
- u8 rates[MAX_RATES + 1];
-
-@@ -974,15 +1029,14 @@ static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
-
- /* Auto rate? */
- if (vwrq->value == -1) {
-- action = CMD_ACT_SET_TX_AUTO;
-- adapter->auto_rate = 1;
-- adapter->cur_rate = 0;
-+ priv->auto_rate = 1;
-+ priv->cur_rate = 0;
- } else {
- if (vwrq->value % 100000)
- goto out;
-
- memset(rates, 0, sizeof(rates));
-- copy_active_data_rates(adapter, rates);
-+ copy_active_data_rates(priv, rates);
- new_rate = vwrq->value / 500000;
- if (!memchr(rates, new_rate, sizeof(rates))) {
- lbs_pr_alert("fixed data rate 0x%X out of range\n",
-@@ -990,31 +1044,28 @@ static int wlan_set_rate(struct net_device *dev, struct iw_request_info *info,
- goto out;
- }
-
-- adapter->cur_rate = new_rate;
-- action = CMD_ACT_SET_TX_FIX_RATE;
-- adapter->auto_rate = 0;
-+ priv->cur_rate = new_rate;
-+ priv->auto_rate = 0;
- }
-
-- ret = libertas_prepare_and_send_command(priv, CMD_802_11_DATA_RATE,
-- action, CMD_OPTION_WAITFORRSP, 0, NULL);
-+ ret = lbs_set_data_rate(priv, new_rate);
-
- out:
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
- }
-
--static int wlan_get_rate(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_rate(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
++ return &rtl8225z2_ops;
++}
+diff --git a/drivers/net/wireless/rtl8187_rtl8225.h b/drivers/net/wireless/rtl8187_rtl8225.h
+index 798ba4a..d39ed02 100644
+--- a/drivers/net/wireless/rtl8187_rtl8225.h
++++ b/drivers/net/wireless/rtl8187_rtl8225.h
+@@ -20,14 +20,7 @@
+ #define RTL8225_ANAPARAM_OFF 0xa00beb59
+ #define RTL8225_ANAPARAM2_OFF 0x840dec11
- lbs_deb_enter(LBS_DEB_WEXT);
+-void rtl8225_write(struct ieee80211_hw *, u8 addr, u16 data);
+-u16 rtl8225_read(struct ieee80211_hw *, u8 addr);
+-
+-void rtl8225_rf_init(struct ieee80211_hw *);
+-void rtl8225z2_rf_init(struct ieee80211_hw *);
+-void rtl8225_rf_stop(struct ieee80211_hw *);
+-void rtl8225_rf_set_channel(struct ieee80211_hw *, int);
+-
++const struct rtl818x_rf_ops * rtl8187_detect_rf(struct ieee80211_hw *);
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- vwrq->value = adapter->cur_rate * 500000;
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ vwrq->value = priv->cur_rate * 500000;
+ static inline void rtl8225_write_phy_ofdm(struct ieee80211_hw *dev,
+ u8 addr, u32 data)
+diff --git a/drivers/net/wireless/rtl818x.h b/drivers/net/wireless/rtl818x.h
+index 880d4be..1e7d6f8 100644
+--- a/drivers/net/wireless/rtl818x.h
++++ b/drivers/net/wireless/rtl818x.h
+@@ -58,13 +58,17 @@ struct rtl818x_csr {
+ #define RTL818X_INT_TX_FO (1 << 15)
+ __le32 TX_CONF;
+ #define RTL818X_TX_CONF_LOOPBACK_MAC (1 << 17)
++#define RTL818X_TX_CONF_LOOPBACK_CONT (3 << 17)
+ #define RTL818X_TX_CONF_NO_ICV (1 << 19)
+ #define RTL818X_TX_CONF_DISCW (1 << 20)
++#define RTL818X_TX_CONF_SAT_HWPLCP (1 << 24)
+ #define RTL818X_TX_CONF_R8180_ABCD (2 << 25)
+ #define RTL818X_TX_CONF_R8180_F (3 << 25)
+ #define RTL818X_TX_CONF_R8185_ABC (4 << 25)
+ #define RTL818X_TX_CONF_R8185_D (5 << 25)
+ #define RTL818X_TX_CONF_HWVER_MASK (7 << 25)
++#define RTL818X_TX_CONF_PROBE_DTS (1 << 29)
++#define RTL818X_TX_CONF_HW_SEQNUM (1 << 30)
+ #define RTL818X_TX_CONF_CW_MIN (1 << 31)
+ __le32 RX_CONF;
+ #define RTL818X_RX_CONF_MONITOR (1 << 0)
+@@ -75,8 +79,12 @@ struct rtl818x_csr {
+ #define RTL818X_RX_CONF_DATA (1 << 18)
+ #define RTL818X_RX_CONF_CTRL (1 << 19)
+ #define RTL818X_RX_CONF_MGMT (1 << 20)
++#define RTL818X_RX_CONF_ADDR3 (1 << 21)
++#define RTL818X_RX_CONF_PM (1 << 22)
+ #define RTL818X_RX_CONF_BSSID (1 << 23)
+ #define RTL818X_RX_CONF_RX_AUTORESETPHY (1 << 28)
++#define RTL818X_RX_CONF_CSDM1 (1 << 29)
++#define RTL818X_RX_CONF_CSDM2 (1 << 30)
+ #define RTL818X_RX_CONF_ONLYERLPKT (1 << 31)
+ __le32 INT_TIMEOUT;
+ __le32 TBDA;
+@@ -92,6 +100,7 @@ struct rtl818x_csr {
+ u8 CONFIG0;
+ u8 CONFIG1;
+ u8 CONFIG2;
++#define RTL818X_CONFIG2_ANTENNA_DIV (1 << 6)
+ __le32 ANAPARAM;
+ u8 MSR;
+ #define RTL818X_MSR_NO_LINK (0 << 2)
+@@ -104,14 +113,17 @@ struct rtl818x_csr {
+ #define RTL818X_CONFIG4_VCOOFF (1 << 7)
+ u8 TESTR;
+ u8 reserved_9[2];
+- __le16 PGSELECT;
++ u8 PGSELECT;
++ u8 SECURITY;
+ __le32 ANAPARAM2;
+ u8 reserved_10[12];
+ __le16 BEACON_INTERVAL;
+ __le16 ATIM_WND;
+ __le16 BEACON_INTERVAL_TIME;
+ __le16 ATIMTR_INTERVAL;
+- u8 reserved_11[4];
++ u8 PHY_DELAY;
++ u8 CARRIER_SENSE_COUNTER;
++ u8 reserved_11[2];
+ u8 PHY[4];
+ __le16 RFPinsOutput;
+ __le16 RFPinsEnable;
+@@ -149,11 +161,20 @@ struct rtl818x_csr {
+ u8 RETRY_CTR;
+ u8 reserved_18[5];
+ __le32 RDSAR;
+- u8 reserved_19[18];
+- u16 TALLY_CNT;
++ u8 reserved_19[12];
++ __le16 FEMR;
++ u8 reserved_20[4];
++ __le16 TALLY_CNT;
+ u8 TALLY_SEL;
+ } __attribute__((packed));
-- if (adapter->auto_rate)
-+ if (priv->auto_rate)
- vwrq->fixed = 0;
- else
- vwrq->fixed = 1;
-@@ -1028,12 +1079,11 @@ static int wlan_get_rate(struct net_device *dev, struct iw_request_info *info,
- return 0;
++struct rtl818x_rf_ops {
++ char *name;
++ void (*init)(struct ieee80211_hw *);
++ void (*stop)(struct ieee80211_hw *);
++ void (*set_chan)(struct ieee80211_hw *, struct ieee80211_conf *);
++};
++
+ static const struct ieee80211_rate rtl818x_rates[] = {
+ { .rate = 10,
+ .val = 0,
+diff --git a/drivers/net/wireless/wavelan.c b/drivers/net/wireless/wavelan.c
+index a1f8a16..03384a4 100644
+--- a/drivers/net/wireless/wavelan.c
++++ b/drivers/net/wireless/wavelan.c
+@@ -49,27 +49,6 @@ static int __init wv_psa_to_irq(u8 irqval)
+ return -1;
}
--static int wlan_set_mode(struct net_device *dev,
-+static int lbs_set_mode(struct net_device *dev,
- struct iw_request_info *info, u32 * uwrq, char *extra)
- {
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct assoc_request * assoc_req;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-@@ -1046,18 +1096,18 @@ static int wlan_set_mode(struct net_device *dev,
- goto out;
- }
-
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- } else {
- assoc_req->mode = *uwrq;
- set_bit(ASSOC_FLAG_MODE, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- lbs_deb_wext("Switching to mode: 0x%x\n", *uwrq);
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
-
- out:
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
-@@ -1074,23 +1124,22 @@ out:
- * @param extra A pointer to extra data buf
- * @return 0 --success, otherwise fail
- */
--static int wlan_get_encode(struct net_device *dev,
-+static int lbs_get_encode(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq, u8 * extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int index = (dwrq->flags & IW_ENCODE_INDEX) - 1;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
- lbs_deb_wext("flags 0x%x, index %d, length %d, wep_tx_keyidx %d\n",
-- dwrq->flags, index, dwrq->length, adapter->wep_tx_keyidx);
-+ dwrq->flags, index, dwrq->length, priv->wep_tx_keyidx);
-
- dwrq->flags = 0;
-
- /* Authentication method */
-- switch (adapter->secinfo.auth_mode) {
-+ switch (priv->secinfo.auth_mode) {
- case IW_AUTH_ALG_OPEN_SYSTEM:
- dwrq->flags = IW_ENCODE_OPEN;
- break;
-@@ -1104,41 +1153,32 @@ static int wlan_get_encode(struct net_device *dev,
- break;
- }
-
-- if ( adapter->secinfo.wep_enabled
-- || adapter->secinfo.WPAenabled
-- || adapter->secinfo.WPA2enabled) {
-- dwrq->flags &= ~IW_ENCODE_DISABLED;
-- } else {
-- dwrq->flags |= IW_ENCODE_DISABLED;
-- }
+-#ifdef STRUCT_CHECK
+-/*------------------------------------------------------------------*/
+-/*
+- * Sanity routine to verify the sizes of the various WaveLAN interface
+- * structures.
+- */
+-static char *wv_struct_check(void)
+-{
+-#define SC(t,s,n) if (sizeof(t) != s) return(n);
-
- memset(extra, 0, 16);
-
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
-
- /* Default to returning current transmit key */
- if (index < 0)
-- index = adapter->wep_tx_keyidx;
-+ index = priv->wep_tx_keyidx;
-
-- if ((adapter->wep_keys[index].len) && adapter->secinfo.wep_enabled) {
-- memcpy(extra, adapter->wep_keys[index].key,
-- adapter->wep_keys[index].len);
-- dwrq->length = adapter->wep_keys[index].len;
-+ if ((priv->wep_keys[index].len) && priv->secinfo.wep_enabled) {
-+ memcpy(extra, priv->wep_keys[index].key,
-+ priv->wep_keys[index].len);
-+ dwrq->length = priv->wep_keys[index].len;
-
- dwrq->flags |= (index + 1);
- /* Return WEP enabled */
- dwrq->flags &= ~IW_ENCODE_DISABLED;
-- } else if ((adapter->secinfo.WPAenabled)
-- || (adapter->secinfo.WPA2enabled)) {
-+ } else if ((priv->secinfo.WPAenabled)
-+ || (priv->secinfo.WPA2enabled)) {
- /* return WPA enabled */
- dwrq->flags &= ~IW_ENCODE_DISABLED;
-+ dwrq->flags |= IW_ENCODE_NOKEY;
- } else {
- dwrq->flags |= IW_ENCODE_DISABLED;
- }
-
-- mutex_unlock(&adapter->lock);
+- SC(psa_t, PSA_SIZE, "psa_t");
+- SC(mmw_t, MMW_SIZE, "mmw_t");
+- SC(mmr_t, MMR_SIZE, "mmr_t");
+- SC(ha_t, HA_SIZE, "ha_t");
-
-- dwrq->flags |= IW_ENCODE_NOKEY;
-+ mutex_unlock(&priv->lock);
+-#undef SC
+-
+- return ((char *) NULL);
+-} /* wv_struct_check */
+-#endif /* STRUCT_CHECK */
+-
+ /********************* HOST ADAPTER SUBROUTINES *********************/
+ /*
+ * Useful subroutines to manage the WaveLAN ISA interface
+@@ -3740,7 +3719,7 @@ static int wv_check_ioaddr(unsigned long ioaddr, u8 * mac)
+ * non-NCR/AT&T/Lucent ISA card. See wavelan.p.h for detail on
+ * how to configure your card.
+ */
+- for (i = 0; i < (sizeof(MAC_ADDRESSES) / sizeof(char) / 3); i++)
++ for (i = 0; i < ARRAY_SIZE(MAC_ADDRESSES); i++)
+ if ((mac[0] == MAC_ADDRESSES[i][0]) &&
+ (mac[1] == MAC_ADDRESSES[i][1]) &&
+ (mac[2] == MAC_ADDRESSES[i][2]))
+@@ -4215,14 +4194,11 @@ struct net_device * __init wavelan_probe(int unit)
+ int i;
+ int r = 0;
- lbs_deb_wext("key: %02x:%02x:%02x:%02x:%02x:%02x, keylen %d\n",
- extra[0], extra[1], extra[2],
-@@ -1160,7 +1200,7 @@ static int wlan_get_encode(struct net_device *dev,
- * @param set_tx_key Force set TX key (1 = yes, 0 = no)
- * @return 0 --success, otherwise fail
+-#ifdef STRUCT_CHECK
+- if (wv_struct_check() != (char *) NULL) {
+- printk(KERN_WARNING
+- "%s: wavelan_probe(): structure/compiler botch: \"%s\"\n",
+- dev->name, wv_struct_check());
+- return -ENODEV;
+- }
+-#endif /* STRUCT_CHECK */
++ /* compile-time check the sizes of structures */
++ BUILD_BUG_ON(sizeof(psa_t) != PSA_SIZE);
++ BUILD_BUG_ON(sizeof(mmw_t) != MMW_SIZE);
++ BUILD_BUG_ON(sizeof(mmr_t) != MMR_SIZE);
++ BUILD_BUG_ON(sizeof(ha_t) != HA_SIZE);
+
+ dev = alloc_etherdev(sizeof(net_local));
+ if (!dev)
+diff --git a/drivers/net/wireless/wavelan.p.h b/drivers/net/wireless/wavelan.p.h
+index fe24281..b33ac47 100644
+--- a/drivers/net/wireless/wavelan.p.h
++++ b/drivers/net/wireless/wavelan.p.h
+@@ -400,7 +400,6 @@
*/
--static int wlan_set_wep_key(struct assoc_request *assoc_req,
-+static int lbs_set_wep_key(struct assoc_request *assoc_req,
- const char *key_material,
- u16 key_length,
- u16 index,
-@@ -1278,20 +1318,19 @@ static void disable_wpa(struct assoc_request *assoc_req)
- * @param extra A pointer to extra data buf
- * @return 0 --success, otherwise fail
+ #undef SET_PSA_CRC /* Calculate and set the CRC on PSA (slower) */
+ #define USE_PSA_CONFIG /* Use info from the PSA. */
+-#undef STRUCT_CHECK /* Verify padding of structures. */
+ #undef EEPROM_IS_PROTECTED /* doesn't seem to be necessary */
+ #define MULTICAST_AVOID /* Avoid extra multicast (I'm sceptical). */
+ #undef SET_MAC_ADDRESS /* Experimental */
+diff --git a/drivers/net/wireless/wavelan_cs.c b/drivers/net/wireless/wavelan_cs.c
+index 577c647..c2037b2 100644
+--- a/drivers/net/wireless/wavelan_cs.c
++++ b/drivers/net/wireless/wavelan_cs.c
+@@ -71,27 +71,6 @@ static void wv_nwid_filter(unsigned char mode, net_local *lp);
+ * (wavelan modem or i82593)
*/
--static int wlan_set_encode(struct net_device *dev,
-+static int lbs_set_encode(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
- {
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct assoc_request * assoc_req;
- u16 is_default = 0, index = 0, set_tx_key = 0;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- goto out;
-@@ -1317,7 +1356,7 @@ static int wlan_set_encode(struct net_device *dev,
- if (!assoc_req->secinfo.wep_enabled || (dwrq->length == 0 && !is_default))
- set_tx_key = 1;
-
-- ret = wlan_set_wep_key(assoc_req, extra, dwrq->length, index, set_tx_key);
-+ ret = lbs_set_wep_key(assoc_req, extra, dwrq->length, index, set_tx_key);
- if (ret)
- goto out;
-@@ -1335,11 +1374,11 @@ static int wlan_set_encode(struct net_device *dev,
- out:
- if (ret == 0) {
- set_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- } else {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+-#ifdef STRUCT_CHECK
+-/*------------------------------------------------------------------*/
+-/*
+- * Sanity routine to verify the sizes of the various WaveLAN interface
+- * structures.
+- */
+-static char *
+-wv_structuct_check(void)
+-{
+-#define SC(t,s,n) if (sizeof(t) != s) return(n);
+-
+- SC(psa_t, PSA_SIZE, "psa_t");
+- SC(mmw_t, MMW_SIZE, "mmw_t");
+- SC(mmr_t, MMR_SIZE, "mmr_t");
+-
+-#undef SC
+-
+- return((char *) NULL);
+-} /* wv_structuct_check */
+-#endif /* STRUCT_CHECK */
+-
+ /******************* MODEM MANAGEMENT SUBROUTINES *******************/
+ /*
+ * Useful subroutines to manage the modem of the wavelan
+@@ -3223,14 +3202,14 @@ wv_mmc_init(struct net_device * dev)
+ * non-NCR/AT&T/Lucent PCMCIA cards, see wavelan_cs.h for detail on
+ * how to configure your card...
+ */
+- for(i = 0; i < (sizeof(MAC_ADDRESSES) / sizeof(char) / 3); i++)
+- if((psa.psa_univ_mac_addr[0] == MAC_ADDRESSES[i][0]) &&
+- (psa.psa_univ_mac_addr[1] == MAC_ADDRESSES[i][1]) &&
+- (psa.psa_univ_mac_addr[2] == MAC_ADDRESSES[i][2]))
++ for (i = 0; i < ARRAY_SIZE(MAC_ADDRESSES); i++)
++ if ((psa.psa_univ_mac_addr[0] == MAC_ADDRESSES[i][0]) &&
++ (psa.psa_univ_mac_addr[1] == MAC_ADDRESSES[i][1]) &&
++ (psa.psa_univ_mac_addr[2] == MAC_ADDRESSES[i][2]))
+ break;
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
-@@ -1354,14 +1393,13 @@ out:
- * @param extra A pointer to extra data buf
- * @return 0 on success, otherwise failure
- */
--static int wlan_get_encodeext(struct net_device *dev,
-+static int lbs_get_encodeext(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq,
- char *extra)
- {
- int ret = -EINVAL;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
- int index, max_key_len;
+ /* If we have not found it... */
+- if(i == (sizeof(MAC_ADDRESSES) / sizeof(char) / 3))
++ if (i == ARRAY_SIZE(MAC_ADDRESSES))
+ {
+ #ifdef DEBUG_CONFIG_ERRORS
+ printk(KERN_WARNING "%s: wv_mmc_init(): Invalid MAC address: %02X:%02X:%02X:...\n",
+@@ -3794,14 +3773,10 @@ wv_hw_config(struct net_device * dev)
+ printk(KERN_DEBUG "%s: ->wv_hw_config()\n", dev->name);
+ #endif
-@@ -1377,46 +1415,46 @@ static int wlan_get_encodeext(struct net_device *dev,
- goto out;
- index--;
- } else {
-- index = adapter->wep_tx_keyidx;
-+ index = priv->wep_tx_keyidx;
- }
+-#ifdef STRUCT_CHECK
+- if(wv_structuct_check() != (char *) NULL)
+- {
+- printk(KERN_WARNING "%s: wv_hw_config: structure/compiler botch: \"%s\"\n",
+- dev->name, wv_structuct_check());
+- return FALSE;
+- }
+-#endif /* STRUCT_CHECK == 1 */
++ /* compile-time check the sizes of structures */
++ BUILD_BUG_ON(sizeof(psa_t) != PSA_SIZE);
++ BUILD_BUG_ON(sizeof(mmw_t) != MMW_SIZE);
++ BUILD_BUG_ON(sizeof(mmr_t) != MMR_SIZE);
-- if (!ext->ext_flags & IW_ENCODE_EXT_GROUP_KEY &&
-+ if (!(ext->ext_flags & IW_ENCODE_EXT_GROUP_KEY) &&
- ext->alg != IW_ENCODE_ALG_WEP) {
-- if (index != 0 || adapter->mode != IW_MODE_INFRA)
-+ if (index != 0 || priv->mode != IW_MODE_INFRA)
- goto out;
- }
+ /* Reset the pcmcia interface */
+ if(wv_pcmcia_reset(dev) == FALSE)
+diff --git a/drivers/net/wireless/wavelan_cs.p.h b/drivers/net/wireless/wavelan_cs.p.h
+index 4b9de00..33dd970 100644
+--- a/drivers/net/wireless/wavelan_cs.p.h
++++ b/drivers/net/wireless/wavelan_cs.p.h
+@@ -459,7 +459,6 @@
+ #undef WAVELAN_ROAMING_EXT /* Enable roaming wireless extensions */
+ #undef SET_PSA_CRC /* Set the CRC in PSA (slower) */
+ #define USE_PSA_CONFIG /* Use info from the PSA */
+-#undef STRUCT_CHECK /* Verify padding of structures */
+ #undef EEPROM_IS_PROTECTED /* Doesn't seem to be necessary */
+ #define MULTICAST_AVOID /* Avoid extra multicast (I'm sceptical) */
+ #undef SET_MAC_ADDRESS /* Experimental */
+@@ -548,7 +547,7 @@ typedef struct wavepoint_beacon
+ spec_id2, /* Unused */
+ pdu_type, /* Unused */
+ seq; /* WavePoint beacon sequence number */
+- unsigned short domain_id, /* WavePoint Domain ID */
++ __be16 domain_id, /* WavePoint Domain ID */
+ nwid; /* WavePoint NWID */
+ } wavepoint_beacon;
- dwrq->flags = index + 1;
- memset(ext, 0, sizeof(*ext));
+diff --git a/drivers/net/wireless/zd1211rw/Kconfig b/drivers/net/wireless/zd1211rw/Kconfig
+index d1ab24a..74b31ea 100644
+--- a/drivers/net/wireless/zd1211rw/Kconfig
++++ b/drivers/net/wireless/zd1211rw/Kconfig
+@@ -1,14 +1,13 @@
+ config ZD1211RW
+ tristate "ZyDAS ZD1211/ZD1211B USB-wireless support"
+- depends on USB && IEEE80211_SOFTMAC && WLAN_80211 && EXPERIMENTAL
+- select WIRELESS_EXT
++ depends on USB && MAC80211 && WLAN_80211 && EXPERIMENTAL
+ select FW_LOADER
+ ---help---
+ This is an experimental driver for the ZyDAS ZD1211/ZD1211B wireless
+ chip, present in many USB-wireless adapters.
-- if ( !adapter->secinfo.wep_enabled
-- && !adapter->secinfo.WPAenabled
-- && !adapter->secinfo.WPA2enabled) {
-+ if ( !priv->secinfo.wep_enabled
-+ && !priv->secinfo.WPAenabled
-+ && !priv->secinfo.WPA2enabled) {
- ext->alg = IW_ENCODE_ALG_NONE;
- ext->key_len = 0;
- dwrq->flags |= IW_ENCODE_DISABLED;
- } else {
- u8 *key = NULL;
+- Device firmware is required alongside this driver. You can download the
+- firmware distribution from http://zd1211.ath.cx/get-firmware
++ Device firmware is required alongside this driver. You can download
++ the firmware distribution from http://zd1211.ath.cx/get-firmware
-- if ( adapter->secinfo.wep_enabled
-- && !adapter->secinfo.WPAenabled
-- && !adapter->secinfo.WPA2enabled) {
-+ if ( priv->secinfo.wep_enabled
-+ && !priv->secinfo.WPAenabled
-+ && !priv->secinfo.WPA2enabled) {
- /* WEP */
- ext->alg = IW_ENCODE_ALG_WEP;
-- ext->key_len = adapter->wep_keys[index].len;
-- key = &adapter->wep_keys[index].key[0];
-- } else if ( !adapter->secinfo.wep_enabled
-- && (adapter->secinfo.WPAenabled ||
-- adapter->secinfo.WPA2enabled)) {
-+ ext->key_len = priv->wep_keys[index].len;
-+ key = &priv->wep_keys[index].key[0];
-+ } else if ( !priv->secinfo.wep_enabled
-+ && (priv->secinfo.WPAenabled ||
-+ priv->secinfo.WPA2enabled)) {
- /* WPA */
- struct enc_key * pkey = NULL;
+ config ZD1211RW_DEBUG
+ bool "ZyDAS ZD1211 debugging"
+diff --git a/drivers/net/wireless/zd1211rw/Makefile b/drivers/net/wireless/zd1211rw/Makefile
+index 7a2f2a9..cc36126 100644
+--- a/drivers/net/wireless/zd1211rw/Makefile
++++ b/drivers/net/wireless/zd1211rw/Makefile
+@@ -1,7 +1,6 @@
+ obj-$(CONFIG_ZD1211RW) += zd1211rw.o
-- if ( adapter->wpa_mcast_key.len
-- && (adapter->wpa_mcast_key.flags & KEY_INFO_WPA_ENABLED))
-- pkey = &adapter->wpa_mcast_key;
-- else if ( adapter->wpa_unicast_key.len
-- && (adapter->wpa_unicast_key.flags & KEY_INFO_WPA_ENABLED))
-- pkey = &adapter->wpa_unicast_key;
-+ if ( priv->wpa_mcast_key.len
-+ && (priv->wpa_mcast_key.flags & KEY_INFO_WPA_ENABLED))
-+ pkey = &priv->wpa_mcast_key;
-+ else if ( priv->wpa_unicast_key.len
-+ && (priv->wpa_unicast_key.flags & KEY_INFO_WPA_ENABLED))
-+ pkey = &priv->wpa_unicast_key;
+-zd1211rw-objs := zd_chip.o zd_ieee80211.o \
+- zd_mac.o zd_netdev.o \
++zd1211rw-objs := zd_chip.o zd_ieee80211.o zd_mac.o \
+ zd_rf_al2230.o zd_rf_rf2959.o \
+ zd_rf_al7230b.o zd_rf_uw2453.o \
+ zd_rf.o zd_usb.o
+diff --git a/drivers/net/wireless/zd1211rw/zd_chip.c b/drivers/net/wireless/zd1211rw/zd_chip.c
+index f831b68..99e5b03 100644
+--- a/drivers/net/wireless/zd1211rw/zd_chip.c
++++ b/drivers/net/wireless/zd1211rw/zd_chip.c
+@@ -1,4 +1,7 @@
+-/* zd_chip.c
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -30,12 +33,12 @@
+ #include "zd_rf.h"
- if (pkey) {
- if (pkey->type == KEY_TYPE_ID_AES) {
-@@ -1461,22 +1499,21 @@ out:
- * @param extra A pointer to extra data buf
- * @return 0 --success, otherwise fail
- */
--static int wlan_set_encodeext(struct net_device *dev,
-+static int lbs_set_encodeext(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq,
- char *extra)
+ void zd_chip_init(struct zd_chip *chip,
+- struct net_device *netdev,
++ struct ieee80211_hw *hw,
+ struct usb_interface *intf)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
- int alg = ext->alg;
- struct assoc_request * assoc_req;
-
- lbs_deb_enter(LBS_DEB_WEXT);
+ memset(chip, 0, sizeof(*chip));
+ mutex_init(&chip->mutex);
+- zd_usb_init(&chip->usb, netdev, intf);
++ zd_usb_init(&chip->usb, hw, intf);
+ zd_rf_init(&chip->rf);
+ }
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- goto out;
-@@ -1503,7 +1540,7 @@ static int wlan_set_encodeext(struct net_device *dev,
- set_tx_key = 1;
+@@ -50,7 +53,7 @@ void zd_chip_clear(struct zd_chip *chip)
- /* Copy key to driver */
-- ret = wlan_set_wep_key (assoc_req, ext->key, ext->key_len, index,
-+ ret = lbs_set_wep_key(assoc_req, ext->key, ext->key_len, index,
- set_tx_key);
- if (ret)
- goto out;
-@@ -1576,31 +1613,30 @@ static int wlan_set_encodeext(struct net_device *dev,
+ static int scnprint_mac_oui(struct zd_chip *chip, char *buffer, size_t size)
+ {
+- u8 *addr = zd_usb_to_netdev(&chip->usb)->dev_addr;
++ u8 *addr = zd_mac_get_perm_addr(zd_chip_to_mac(chip));
+ return scnprintf(buffer, size, "%02x-%02x-%02x",
+ addr[0], addr[1], addr[2]);
+ }
+@@ -378,15 +381,18 @@ int zd_write_mac_addr(struct zd_chip *chip, const u8 *mac_addr)
+ };
+ DECLARE_MAC_BUF(mac);
- out:
- if (ret == 0) {
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- } else {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+- reqs[0].value = (mac_addr[3] << 24)
+- | (mac_addr[2] << 16)
+- | (mac_addr[1] << 8)
+- | mac_addr[0];
+- reqs[1].value = (mac_addr[5] << 8)
+- | mac_addr[4];
+-
+- dev_dbg_f(zd_chip_dev(chip),
+- "mac addr %s\n", print_mac(mac, mac_addr));
++ if (mac_addr) {
++ reqs[0].value = (mac_addr[3] << 24)
++ | (mac_addr[2] << 16)
++ | (mac_addr[1] << 8)
++ | mac_addr[0];
++ reqs[1].value = (mac_addr[5] << 8)
++ | mac_addr[4];
++ dev_dbg_f(zd_chip_dev(chip),
++ "mac addr %s\n", print_mac(mac, mac_addr));
++ } else {
++ dev_dbg_f(zd_chip_dev(chip), "set NULL mac\n");
++ }
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
+ mutex_lock(&chip->mutex);
+ r = zd_iowrite32a_locked(chip, reqs, ARRAY_SIZE(reqs));
+@@ -980,7 +986,7 @@ static int print_fw_version(struct zd_chip *chip)
+ return 0;
}
-
--static int wlan_set_genie(struct net_device *dev,
-+static int lbs_set_genie(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq,
- char *extra)
+-static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
++static int set_mandatory_rates(struct zd_chip *chip, int mode)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int ret = 0;
- struct assoc_request * assoc_req;
+ u32 rates;
+ ZD_ASSERT(mutex_is_locked(&chip->mutex));
+@@ -988,11 +994,11 @@ static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
+ * that the device is supporting. Until further notice we should try
+ * to support 802.11g also for full speed USB.
+ */
+- switch (std) {
+- case IEEE80211B:
++ switch (mode) {
++ case MODE_IEEE80211B:
+ rates = CR_RATE_1M|CR_RATE_2M|CR_RATE_5_5M|CR_RATE_11M;
+ break;
+- case IEEE80211G:
++ case MODE_IEEE80211G:
+ rates = CR_RATE_1M|CR_RATE_2M|CR_RATE_5_5M|CR_RATE_11M|
+ CR_RATE_6M|CR_RATE_12M|CR_RATE_24M;
+ break;
+@@ -1003,24 +1009,17 @@ static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
+ }
- lbs_deb_enter(LBS_DEB_WEXT);
+ int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip,
+- u8 rts_rate, int preamble)
++ int preamble)
+ {
+- int rts_mod = ZD_RX_CCK;
+ u32 value = 0;
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- goto out;
-@@ -1616,46 +1652,45 @@ static int wlan_set_genie(struct net_device *dev,
- memcpy(&assoc_req->wpa_ie[0], extra, dwrq->length);
- assoc_req->wpa_ie_len = dwrq->length;
- } else {
-- memset(&assoc_req->wpa_ie[0], 0, sizeof(adapter->wpa_ie));
-+ memset(&assoc_req->wpa_ie[0], 0, sizeof(priv->wpa_ie));
- assoc_req->wpa_ie_len = 0;
- }
+- /* Modulation bit */
+- if (ZD_MODULATION_TYPE(rts_rate) == ZD_OFDM)
+- rts_mod = ZD_RX_OFDM;
+-
+- dev_dbg_f(zd_chip_dev(chip), "rts_rate=%x preamble=%x\n",
+- rts_rate, preamble);
+-
+- value |= ZD_PURE_RATE(rts_rate) << RTSCTS_SH_RTS_RATE;
+- value |= rts_mod << RTSCTS_SH_RTS_MOD_TYPE;
++ dev_dbg_f(zd_chip_dev(chip), "preamble=%x\n", preamble);
+ value |= preamble << RTSCTS_SH_RTS_PMB_TYPE;
+ value |= preamble << RTSCTS_SH_CTS_PMB_TYPE;
- out:
- if (ret == 0) {
- set_bit(ASSOC_FLAG_WPA_IE, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- } else {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+- /* We always send 11M self-CTS messages, like the vendor driver. */
++ /* We always send 11M RTS/self-CTS messages, like the vendor driver. */
++ value |= ZD_PURE_RATE(ZD_CCK_RATE_11M) << RTSCTS_SH_RTS_RATE;
++ value |= ZD_RX_CCK << RTSCTS_SH_RTS_MOD_TYPE;
+ value |= ZD_PURE_RATE(ZD_CCK_RATE_11M) << RTSCTS_SH_CTS_RATE;
+ value |= ZD_RX_CCK << RTSCTS_SH_CTS_MOD_TYPE;
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
+@@ -1109,7 +1108,7 @@ int zd_chip_init_hw(struct zd_chip *chip)
+ * It might be discussed, whether we should suppport pure b mode for
+ * full speed USB.
+ */
+- r = set_mandatory_rates(chip, IEEE80211G);
++ r = set_mandatory_rates(chip, MODE_IEEE80211G);
+ if (r)
+ goto out;
+ /* Disabling interrupts is certainly a smart thing here.
+@@ -1320,12 +1319,17 @@ out:
+ return r;
}
--static int wlan_get_genie(struct net_device *dev,
-+static int lbs_get_genie(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_point *dwrq,
- char *extra)
+-int zd_chip_set_basic_rates_locked(struct zd_chip *chip, u16 cr_rates)
++int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+- ZD_ASSERT((cr_rates & ~(CR_RATES_80211B | CR_RATES_80211G)) == 0);
+- dev_dbg_f(zd_chip_dev(chip), "%x\n", cr_rates);
++ int r;
++
++ if (cr_rates & ~(CR_RATES_80211B|CR_RATES_80211G))
++ return -EINVAL;
- lbs_deb_enter(LBS_DEB_WEXT);
+- return zd_iowrite32_locked(chip, cr_rates, CR_BASIC_RATE_TBL);
++ mutex_lock(&chip->mutex);
++ r = zd_iowrite32_locked(chip, cr_rates, CR_BASIC_RATE_TBL);
++ mutex_unlock(&chip->mutex);
++ return r;
+ }
-- if (adapter->wpa_ie_len == 0) {
-+ if (priv->wpa_ie_len == 0) {
- dwrq->length = 0;
- goto out;
- }
+ static int ofdm_qual_db(u8 status_quality, u8 zd_rate, unsigned int size)
+@@ -1468,56 +1472,44 @@ u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
+ {
+ return (status->frame_status&ZD_RX_OFDM) ?
+ ofdm_qual_percent(status->signal_quality_ofdm,
+- zd_rate_from_ofdm_plcp_header(rx_frame),
++ zd_rate_from_ofdm_plcp_header(rx_frame),
+ size) :
+ cck_qual_percent(status->signal_quality_cck);
+ }
-- if (dwrq->length < adapter->wpa_ie_len) {
-+ if (dwrq->length < priv->wpa_ie_len) {
- ret = -E2BIG;
- goto out;
+-u8 zd_rx_strength_percent(u8 rssi)
+-{
+- int r = (rssi*100) / 41;
+- if (r > 100)
+- r = 100;
+- return (u8) r;
+-}
+-
+-u16 zd_rx_rate(const void *rx_frame, const struct rx_status *status)
++/**
++ * zd_rx_rate - report zd-rate
++ * @rx_frame - received frame
++ * @rx_status - rx_status as given by the device
++ *
++ * This function converts the rate as encoded in the received packet to the
++ * zd-rate, we are using on other places in the driver.
++ */
++u8 zd_rx_rate(const void *rx_frame, const struct rx_status *status)
+ {
+- static const u16 ofdm_rates[] = {
+- [ZD_OFDM_PLCP_RATE_6M] = 60,
+- [ZD_OFDM_PLCP_RATE_9M] = 90,
+- [ZD_OFDM_PLCP_RATE_12M] = 120,
+- [ZD_OFDM_PLCP_RATE_18M] = 180,
+- [ZD_OFDM_PLCP_RATE_24M] = 240,
+- [ZD_OFDM_PLCP_RATE_36M] = 360,
+- [ZD_OFDM_PLCP_RATE_48M] = 480,
+- [ZD_OFDM_PLCP_RATE_54M] = 540,
+- };
+- u16 rate;
++ u8 zd_rate;
+ if (status->frame_status & ZD_RX_OFDM) {
+- /* Deals with PLCP OFDM rate (not zd_rates) */
+- u8 ofdm_rate = zd_ofdm_plcp_header_rate(rx_frame);
+- rate = ofdm_rates[ofdm_rate & 0xf];
++ zd_rate = zd_rate_from_ofdm_plcp_header(rx_frame);
+ } else {
+ switch (zd_cck_plcp_header_signal(rx_frame)) {
+ case ZD_CCK_PLCP_SIGNAL_1M:
+- rate = 10;
++ zd_rate = ZD_CCK_RATE_1M;
+ break;
+ case ZD_CCK_PLCP_SIGNAL_2M:
+- rate = 20;
++ zd_rate = ZD_CCK_RATE_2M;
+ break;
+ case ZD_CCK_PLCP_SIGNAL_5M5:
+- rate = 55;
++ zd_rate = ZD_CCK_RATE_5_5M;
+ break;
+ case ZD_CCK_PLCP_SIGNAL_11M:
+- rate = 110;
++ zd_rate = ZD_CCK_RATE_11M;
+ break;
+ default:
+- rate = 0;
++ zd_rate = 0;
+ }
}
-- dwrq->length = adapter->wpa_ie_len;
-- memcpy(extra, &adapter->wpa_ie[0], adapter->wpa_ie_len);
-+ dwrq->length = priv->wpa_ie_len;
-+ memcpy(extra, &priv->wpa_ie[0], priv->wpa_ie_len);
-
- out:
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
-@@ -1663,21 +1698,20 @@ out:
+- return rate;
++ return zd_rate;
}
+ int zd_chip_switch_radio_on(struct zd_chip *chip)
+@@ -1557,20 +1549,22 @@ void zd_chip_disable_int(struct zd_chip *chip)
+ mutex_unlock(&chip->mutex);
+ }
--static int wlan_set_auth(struct net_device *dev,
-+static int lbs_set_auth(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_param *dwrq,
- char *extra)
+-int zd_chip_enable_rx(struct zd_chip *chip)
++int zd_chip_enable_rxtx(struct zd_chip *chip)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct assoc_request * assoc_req;
- int ret = 0;
- int updated = 0;
-
- lbs_deb_enter(LBS_DEB_WEXT);
-
-- mutex_lock(&adapter->lock);
-- assoc_req = wlan_get_association_request(adapter);
-+ mutex_lock(&priv->lock);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- goto out;
-@@ -1752,44 +1786,43 @@ out:
- if (ret == 0) {
- if (updated)
- set_bit(ASSOC_FLAG_SECINFO, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- } else if (ret != -EOPNOTSUPP) {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- }
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+ int r;
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
+ mutex_lock(&chip->mutex);
++ zd_usb_enable_tx(&chip->usb);
+ r = zd_usb_enable_rx(&chip->usb);
+ mutex_unlock(&chip->mutex);
+ return r;
}
--static int wlan_get_auth(struct net_device *dev,
-+static int lbs_get_auth(struct net_device *dev,
- struct iw_request_info *info,
- struct iw_param *dwrq,
- char *extra)
+-void zd_chip_disable_rx(struct zd_chip *chip)
++void zd_chip_disable_rxtx(struct zd_chip *chip)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+ mutex_lock(&chip->mutex);
+ zd_usb_disable_rx(&chip->usb);
++ zd_usb_disable_tx(&chip->usb);
+ mutex_unlock(&chip->mutex);
+ }
- lbs_deb_enter(LBS_DEB_WEXT);
+diff --git a/drivers/net/wireless/zd1211rw/zd_chip.h b/drivers/net/wireless/zd1211rw/zd_chip.h
+index 8009b70..009c037 100644
+--- a/drivers/net/wireless/zd1211rw/zd_chip.h
++++ b/drivers/net/wireless/zd1211rw/zd_chip.h
+@@ -1,4 +1,7 @@
+-/* zd_chip.h
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -433,9 +436,10 @@ enum {
+ #define CR_GROUP_HASH_P2 CTL_REG(0x0628)
- switch (dwrq->flags & IW_AUTH_INDEX) {
- case IW_AUTH_WPA_VERSION:
- dwrq->value = 0;
-- if (adapter->secinfo.WPAenabled)
-+ if (priv->secinfo.WPAenabled)
- dwrq->value |= IW_AUTH_WPA_VERSION_WPA;
-- if (adapter->secinfo.WPA2enabled)
-+ if (priv->secinfo.WPA2enabled)
- dwrq->value |= IW_AUTH_WPA_VERSION_WPA2;
- if (!dwrq->value)
- dwrq->value |= IW_AUTH_WPA_VERSION_DISABLED;
- break;
+ #define CR_RX_TIMEOUT CTL_REG(0x062C)
++
+ /* Basic rates supported by the BSS. When producing ACK or CTS messages, the
+ * device will use a rate in this table that is less than or equal to the rate
+- * of the incoming frame which prompted the response */
++ * of the incoming frame which prompted the response. */
+ #define CR_BASIC_RATE_TBL CTL_REG(0x0630)
+ #define CR_RATE_1M (1 << 0) /* 802.11b */
+ #define CR_RATE_2M (1 << 1) /* 802.11b */
+@@ -509,14 +513,37 @@ enum {
+ #define CR_UNDERRUN_CNT CTL_REG(0x0688)
- case IW_AUTH_80211_AUTH_ALG:
-- dwrq->value = adapter->secinfo.auth_mode;
-+ dwrq->value = priv->secinfo.auth_mode;
- break;
+ #define CR_RX_FILTER CTL_REG(0x068c)
++#define RX_FILTER_ASSOC_REQUEST (1 << 0)
+ #define RX_FILTER_ASSOC_RESPONSE (1 << 1)
++#define RX_FILTER_REASSOC_REQUEST (1 << 2)
+ #define RX_FILTER_REASSOC_RESPONSE (1 << 3)
++#define RX_FILTER_PROBE_REQUEST (1 << 4)
+ #define RX_FILTER_PROBE_RESPONSE (1 << 5)
++/* bits 6 and 7 reserved */
+ #define RX_FILTER_BEACON (1 << 8)
++#define RX_FILTER_ATIM (1 << 9)
+ #define RX_FILTER_DISASSOC (1 << 10)
+ #define RX_FILTER_AUTH (1 << 11)
+-#define AP_RX_FILTER 0x0400feff
+-#define STA_RX_FILTER 0x0000ffff
++#define RX_FILTER_DEAUTH (1 << 12)
++#define RX_FILTER_PSPOLL (1 << 26)
++#define RX_FILTER_RTS (1 << 27)
++#define RX_FILTER_CTS (1 << 28)
++#define RX_FILTER_ACK (1 << 29)
++#define RX_FILTER_CFEND (1 << 30)
++#define RX_FILTER_CFACK (1 << 31)
++
++/* Enable bits for all frames you are interested in. */
++#define STA_RX_FILTER (RX_FILTER_ASSOC_REQUEST | RX_FILTER_ASSOC_RESPONSE | \
++ RX_FILTER_REASSOC_REQUEST | RX_FILTER_REASSOC_RESPONSE | \
++ RX_FILTER_PROBE_REQUEST | RX_FILTER_PROBE_RESPONSE | \
++ (0x3 << 6) /* vendor driver sets these reserved bits */ | \
++ RX_FILTER_BEACON | RX_FILTER_ATIM | RX_FILTER_DISASSOC | \
++ RX_FILTER_AUTH | RX_FILTER_DEAUTH | \
++ (0x7 << 13) /* vendor driver sets these reserved bits */ | \
++ RX_FILTER_PSPOLL | RX_FILTER_ACK) /* 0x2400ffff */
++
++#define RX_FILTER_CTRL (RX_FILTER_RTS | RX_FILTER_CTS | \
++ RX_FILTER_CFEND | RX_FILTER_CFACK)
- case IW_AUTH_WPA_ENABLED:
-- if (adapter->secinfo.WPAenabled && adapter->secinfo.WPA2enabled)
-+ if (priv->secinfo.WPAenabled && priv->secinfo.WPA2enabled)
- dwrq->value = 1;
- break;
+ /* Monitor mode sets filter to 0xfffff */
-@@ -1802,25 +1835,24 @@ static int wlan_get_auth(struct net_device *dev,
- }
+@@ -730,7 +757,7 @@ static inline struct zd_chip *zd_rf_to_chip(struct zd_rf *rf)
+ #define zd_chip_dev(chip) (&(chip)->usb.intf->dev)
+ void zd_chip_init(struct zd_chip *chip,
+- struct net_device *netdev,
++ struct ieee80211_hw *hw,
+ struct usb_interface *intf);
+ void zd_chip_clear(struct zd_chip *chip);
+ int zd_chip_read_mac_addr_fw(struct zd_chip *chip, u8 *addr);
+@@ -835,14 +862,12 @@ int zd_chip_switch_radio_on(struct zd_chip *chip);
+ int zd_chip_switch_radio_off(struct zd_chip *chip);
+ int zd_chip_enable_int(struct zd_chip *chip);
+ void zd_chip_disable_int(struct zd_chip *chip);
+-int zd_chip_enable_rx(struct zd_chip *chip);
+-void zd_chip_disable_rx(struct zd_chip *chip);
++int zd_chip_enable_rxtx(struct zd_chip *chip);
++void zd_chip_disable_rxtx(struct zd_chip *chip);
+ int zd_chip_enable_hwint(struct zd_chip *chip);
+ int zd_chip_disable_hwint(struct zd_chip *chip);
+ int zd_chip_generic_patch_6m_band(struct zd_chip *chip, int channel);
+-
+-int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip,
+- u8 rts_rate, int preamble);
++int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip, int preamble);
--static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_txpow(struct net_device *dev, struct iw_request_info *info,
- struct iw_param *vwrq, char *extra)
+ static inline int zd_get_encryption_type(struct zd_chip *chip, u32 *type)
{
- int ret = 0;
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
+@@ -859,17 +884,7 @@ static inline int zd_chip_get_basic_rates(struct zd_chip *chip, u16 *cr_rates)
+ return zd_ioread16(chip, CR_BASIC_RATE_TBL, cr_rates);
+ }
- u16 dbm;
+-int zd_chip_set_basic_rates_locked(struct zd_chip *chip, u16 cr_rates);
+-
+-static inline int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates)
+-{
+- int r;
+-
+- mutex_lock(&chip->mutex);
+- r = zd_chip_set_basic_rates_locked(chip, cr_rates);
+- mutex_unlock(&chip->mutex);
+- return r;
+-}
++int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates);
- lbs_deb_enter(LBS_DEB_WEXT);
+ int zd_chip_lock_phy_regs(struct zd_chip *chip);
+ int zd_chip_unlock_phy_regs(struct zd_chip *chip);
+@@ -893,9 +908,8 @@ struct rx_status;
- if (vwrq->disabled) {
-- wlan_radio_ioctl(priv, RADIO_OFF);
-+ lbs_radio_ioctl(priv, RADIO_OFF);
- return 0;
- }
+ u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
+ const struct rx_status *status);
+-u8 zd_rx_strength_percent(u8 rssi);
-- adapter->preamble = CMD_TYPE_AUTO_PREAMBLE;
-+ priv->preamble = CMD_TYPE_AUTO_PREAMBLE;
+-u16 zd_rx_rate(const void *rx_frame, const struct rx_status *status);
++u8 zd_rx_rate(const void *rx_frame, const struct rx_status *status);
-- wlan_radio_ioctl(priv, RADIO_ON);
-+ lbs_radio_ioctl(priv, RADIO_ON);
+ struct zd_mc_hash {
+ u32 low;
+diff --git a/drivers/net/wireless/zd1211rw/zd_def.h b/drivers/net/wireless/zd1211rw/zd_def.h
+index 505b4d7..5200db4 100644
+--- a/drivers/net/wireless/zd1211rw/zd_def.h
++++ b/drivers/net/wireless/zd1211rw/zd_def.h
+@@ -1,4 +1,7 @@
+-/* zd_def.h
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_ieee80211.c b/drivers/net/wireless/zd1211rw/zd_ieee80211.c
+index 189160e..7c277ec 100644
+--- a/drivers/net/wireless/zd1211rw/zd_ieee80211.c
++++ b/drivers/net/wireless/zd1211rw/zd_ieee80211.c
+@@ -1,4 +1,7 @@
+-/* zd_ieee80211.c
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -16,178 +19,85 @@
+ */
- /* Userspace check in iwrange if it should use dBm or mW,
- * therefore this should never happen... Jean II */
-@@ -1836,7 +1868,7 @@ static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
+ /*
+- * A lot of this code is generic and should be moved into the upper layers
+- * at some point.
++ * In the long term, we'll probably find a better way of handling regulatory
++ * requirements outside of the driver.
+ */
- lbs_deb_wext("txpower set %d dbm\n", dbm);
+-#include <linux/errno.h>
+-#include <linux/wireless.h>
+ #include <linux/kernel.h>
+-#include <net/ieee80211.h>
++#include <net/mac80211.h>
-- ret = libertas_prepare_and_send_command(priv,
-+ ret = lbs_prepare_and_send_command(priv,
- CMD_802_11_RF_TX_POWER,
- CMD_ACT_TX_POWER_OPT_SET_LOW,
- CMD_OPTION_WAITFORRSP, 0, (void *)&dbm);
-@@ -1845,11 +1877,10 @@ static int wlan_set_txpow(struct net_device *dev, struct iw_request_info *info,
- return ret;
- }
+-#include "zd_def.h"
+ #include "zd_ieee80211.h"
+ #include "zd_mac.h"
--static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_get_essid(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
++struct channel_range {
++ u8 regdomain;
++ u8 start;
++ u8 end; /* exclusive (channel must be less than end) */
++};
++
+ static const struct channel_range channel_ranges[] = {
+- [0] = { 0, 0},
+- [ZD_REGDOMAIN_FCC] = { 1, 12},
+- [ZD_REGDOMAIN_IC] = { 1, 12},
+- [ZD_REGDOMAIN_ETSI] = { 1, 14},
+- [ZD_REGDOMAIN_JAPAN] = { 1, 14},
+- [ZD_REGDOMAIN_SPAIN] = { 1, 14},
+- [ZD_REGDOMAIN_FRANCE] = { 1, 14},
++ { ZD_REGDOMAIN_FCC, 1, 12 },
++ { ZD_REGDOMAIN_IC, 1, 12 },
++ { ZD_REGDOMAIN_ETSI, 1, 14 },
++ { ZD_REGDOMAIN_JAPAN, 1, 14 },
++ { ZD_REGDOMAIN_SPAIN, 1, 14 },
++ { ZD_REGDOMAIN_FRANCE, 1, 14 },
- lbs_deb_enter(LBS_DEB_WEXT);
+ /* Japan originally only had channel 14 available (see CHNL_ID 0x40 in
+ * 802.11). However, in 2001 the range was extended to include channels
+ * 1-13. The ZyDAS devices still use the old region code but are
+ * designed to allow the extra channel access in Japan. */
+- [ZD_REGDOMAIN_JAPAN_ADD] = { 1, 15},
++ { ZD_REGDOMAIN_JAPAN_ADD, 1, 15 },
+ };
-@@ -1861,19 +1892,19 @@ static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
- /*
- * Get the current SSID
- */
-- if (adapter->connect_status == LIBERTAS_CONNECTED) {
-- memcpy(extra, adapter->curbssparams.ssid,
-- adapter->curbssparams.ssid_len);
-- extra[adapter->curbssparams.ssid_len] = '\0';
-+ if (priv->connect_status == LBS_CONNECTED) {
-+ memcpy(extra, priv->curbssparams.ssid,
-+ priv->curbssparams.ssid_len);
-+ extra[priv->curbssparams.ssid_len] = '\0';
- } else {
- memset(extra, 0, 32);
-- extra[adapter->curbssparams.ssid_len] = '\0';
-+ extra[priv->curbssparams.ssid_len] = '\0';
+-const struct channel_range *zd_channel_range(u8 regdomain)
+-{
+- if (regdomain >= ARRAY_SIZE(channel_ranges))
+- regdomain = 0;
+- return &channel_ranges[regdomain];
+-}
+-
+-int zd_regdomain_supports_channel(u8 regdomain, u8 channel)
+-{
+- const struct channel_range *range = zd_channel_range(regdomain);
+- return range->start <= channel && channel < range->end;
+-}
+-
+-int zd_regdomain_supported(u8 regdomain)
+-{
+- const struct channel_range *range = zd_channel_range(regdomain);
+- return range->start != 0;
+-}
+-
+-/* Stores channel frequencies in MHz. */
+-static const u16 channel_frequencies[] = {
+- 2412, 2417, 2422, 2427, 2432, 2437, 2442, 2447,
+- 2452, 2457, 2462, 2467, 2472, 2484,
+-};
+-
+-#define NUM_CHANNELS ARRAY_SIZE(channel_frequencies)
+-
+-static int compute_freq(struct iw_freq *freq, u32 mhz, u32 hz)
+-{
+- u32 factor;
+-
+- freq->e = 0;
+- if (mhz >= 1000000000U) {
+- pr_debug("zd1211 mhz %u to large\n", mhz);
+- freq->m = 0;
+- return -EINVAL;
+- }
+-
+- factor = 1000;
+- while (mhz >= factor) {
+-
+- freq->e += 1;
+- factor *= 10;
+- }
+-
+- factor /= 1000U;
+- freq->m = mhz * (1000000U/factor) + hz/factor;
+-
+- return 0;
+-}
+-
+-int zd_channel_to_freq(struct iw_freq *freq, u8 channel)
++static const struct channel_range *zd_channel_range(u8 regdomain)
+ {
+- if (channel > NUM_CHANNELS) {
+- freq->m = 0;
+- freq->e = 0;
+- return -EINVAL;
+- }
+- if (!channel) {
+- freq->m = 0;
+- freq->e = 0;
+- return -EINVAL;
++ int i;
++ for (i = 0; i < ARRAY_SIZE(channel_ranges); i++) {
++ const struct channel_range *range = &channel_ranges[i];
++ if (range->regdomain == regdomain)
++ return range;
}
- /*
- * If none, we may want to get the one that was set
- */
+- return compute_freq(freq, channel_frequencies[channel-1], 0);
++ return NULL;
+ }
-- dwrq->length = adapter->curbssparams.ssid_len;
-+ dwrq->length = priv->curbssparams.ssid_len;
+-static int freq_to_mhz(const struct iw_freq *freq)
+-{
+- u32 factor;
+- int e;
+-
+- /* Such high frequencies are not supported. */
+- if (freq->e > 6)
+- return -EINVAL;
+-
+- factor = 1;
+- for (e = freq->e; e > 0; --e) {
+- factor *= 10;
+- }
+- factor = 1000000U / factor;
+-
+- if (freq->m % factor) {
+- return -EINVAL;
+- }
+-
+- return freq->m / factor;
+-}
++#define CHAN_TO_IDX(chan) ((chan) - 1)
- dwrq->flags = 1; /* active */
+-int zd_find_channel(u8 *channel, const struct iw_freq *freq)
++static void unmask_bg_channels(struct ieee80211_hw *hw,
++ const struct channel_range *range,
++ struct ieee80211_hw_mode *mode)
+ {
+- int i, r;
+- u32 mhz;
+-
+- if (freq->m < 1000) {
+- if (freq->m > NUM_CHANNELS || freq->m == 0)
+- return -EINVAL;
+- *channel = freq->m;
+- return 1;
+- }
+-
+- r = freq_to_mhz(freq);
+- if (r < 0)
+- return r;
+- mhz = r;
++ u8 channel;
-@@ -1881,11 +1912,10 @@ static int wlan_get_essid(struct net_device *dev, struct iw_request_info *info,
- return 0;
+- for (i = 0; i < NUM_CHANNELS; i++) {
+- if (mhz == channel_frequencies[i]) {
+- *channel = i+1;
+- return 1;
+- }
++ for (channel = range->start; channel < range->end; channel++) {
++ struct ieee80211_channel *chan =
++ &mode->channels[CHAN_TO_IDX(channel)];
++ chan->flag |= IEEE80211_CHAN_W_SCAN |
++ IEEE80211_CHAN_W_ACTIVE_SCAN |
++ IEEE80211_CHAN_W_IBSS;
+ }
+-
+- return -EINVAL;
}
--static int wlan_set_essid(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_essid(struct net_device *dev, struct iw_request_info *info,
- struct iw_point *dwrq, char *extra)
+-int zd_geo_init(struct ieee80211_device *ieee, u8 regdomain)
++void zd_geo_init(struct ieee80211_hw *hw, u8 regdomain)
{
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- int ret = 0;
- u8 ssid[IW_ESSID_MAX_SIZE];
- u8 ssid_len = 0;
-@@ -1918,10 +1948,10 @@ static int wlan_set_essid(struct net_device *dev, struct iw_request_info *info,
- }
+- struct ieee80211_geo geo;
++ struct zd_mac *mac = zd_hw_mac(hw);
+ const struct channel_range *range;
+- int i;
+- u8 channel;
- out:
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
- if (ret == 0) {
- /* Get or create the current association request */
-- assoc_req = wlan_get_association_request(adapter);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
- ret = -ENOMEM;
- } else {
-@@ -1929,17 +1959,65 @@ out:
- memcpy(&assoc_req->ssid, &ssid, IW_ESSID_MAX_SIZE);
- assoc_req->ssid_len = ssid_len;
- set_bit(ASSOC_FLAG_SSID, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- }
+- dev_dbg(zd_mac_dev(zd_netdev_mac(ieee->dev)),
+- "regdomain %#04x\n", regdomain);
++ dev_dbg(zd_mac_dev(mac), "regdomain %#02x\n", regdomain);
+
+ range = zd_channel_range(regdomain);
+- if (range->start == 0) {
+- dev_err(zd_mac_dev(zd_netdev_mac(ieee->dev)),
+- "zd1211 regdomain %#04x not supported\n",
+- regdomain);
+- return -EINVAL;
++ if (!range) {
++ /* The vendor driver overrides the regulatory domain and
++ * allowed channel registers and unconditionally restricts
++ * available channels to 1-11 everywhere. Match their
++ * questionable behaviour only for regdomains which we don't
++ * recognise. */
++ dev_warn(zd_mac_dev(mac), "Unrecognised regulatory domain: "
++ "%#02x. Defaulting to FCC.\n", regdomain);
++ range = zd_channel_range(ZD_REGDOMAIN_FCC);
}
- /* Cancel the association request if there was an error */
- if (ret != 0) {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
-+ }
-+
-+ mutex_unlock(&priv->lock);
-+
-+ lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
-+ return ret;
-+}
-+
-+static int lbs_mesh_get_essid(struct net_device *dev,
-+ struct iw_request_info *info,
-+ struct iw_point *dwrq, char *extra)
-+{
-+ struct lbs_private *priv = dev->priv;
-+
-+ lbs_deb_enter(LBS_DEB_WEXT);
-+
-+ memcpy(extra, priv->mesh_ssid, priv->mesh_ssid_len);
-+
-+ dwrq->length = priv->mesh_ssid_len;
-+
-+ dwrq->flags = 1; /* active */
-+
-+ lbs_deb_leave(LBS_DEB_WEXT);
-+ return 0;
-+}
-+
-+static int lbs_mesh_set_essid(struct net_device *dev,
-+ struct iw_request_info *info,
-+ struct iw_point *dwrq, char *extra)
-+{
-+ struct lbs_private *priv = dev->priv;
-+ int ret = 0;
+- memset(&geo, 0, sizeof(geo));
+-
+- for (i = 0, channel = range->start; channel < range->end; channel++) {
+- struct ieee80211_channel *chan = &geo.bg[i++];
+- chan->freq = channel_frequencies[channel - 1];
+- chan->channel = channel;
+- }
+-
+- geo.bg_channels = i;
+- memcpy(geo.name, "XX ", 4);
+- ieee80211_set_geo(ieee, &geo);
+- return 0;
++ unmask_bg_channels(hw, range, &mac->modes[0]);
++ unmask_bg_channels(hw, range, &mac->modes[1]);
+ }
+
-+ lbs_deb_enter(LBS_DEB_WEXT);
+diff --git a/drivers/net/wireless/zd1211rw/zd_ieee80211.h b/drivers/net/wireless/zd1211rw/zd_ieee80211.h
+index fbf6491..26b79f1 100644
+--- a/drivers/net/wireless/zd1211rw/zd_ieee80211.h
++++ b/drivers/net/wireless/zd1211rw/zd_ieee80211.h
+@@ -1,7 +1,27 @@
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ */
+
-+ /* Check the size of the string */
-+ if (dwrq->length > IW_ESSID_MAX_SIZE) {
-+ ret = -E2BIG;
-+ goto out;
- }
+ #ifndef _ZD_IEEE80211_H
+ #define _ZD_IEEE80211_H
-- mutex_unlock(&adapter->lock);
-+ if (!dwrq->flags || !dwrq->length) {
-+ ret = -EINVAL;
-+ goto out;
-+ } else {
-+ /* Specific SSID requested */
-+ memcpy(priv->mesh_ssid, extra, dwrq->length);
-+ priv->mesh_ssid_len = dwrq->length;
-+ }
+-#include <net/ieee80211.h>
++#include <net/mac80211.h>
-+ lbs_mesh_config(priv, 1, priv->curbssparams.channel);
-+ out:
- lbs_deb_leave_args(LBS_DEB_WEXT, "ret %d", ret);
- return ret;
- }
-@@ -1953,11 +2031,10 @@ out:
- * @param extra A pointer to extra data buf
- * @return 0 --success, otherwise fail
+ /* Additional definitions from the standards.
*/
--static int wlan_set_wap(struct net_device *dev, struct iw_request_info *info,
-+static int lbs_set_wap(struct net_device *dev, struct iw_request_info *info,
- struct sockaddr *awrq, char *extra)
- {
-- wlan_private *priv = dev->priv;
-- wlan_adapter *adapter = priv->adapter;
-+ struct lbs_private *priv = dev->priv;
- struct assoc_request * assoc_req;
- int ret = 0;
- DECLARE_MAC_BUF(mac);
-@@ -1969,44 +2046,38 @@ static int wlan_set_wap(struct net_device *dev, struct iw_request_info *info,
+@@ -19,22 +39,7 @@ enum {
+ MAX_CHANNEL24 = 14,
+ };
- lbs_deb_wext("ASSOC: WAP: sa_data %s\n", print_mac(mac, awrq->sa_data));
+-struct channel_range {
+- u8 start;
+- u8 end; /* exclusive (channel must be less than end) */
+-};
+-
+-struct iw_freq;
+-
+-int zd_geo_init(struct ieee80211_device *ieee, u8 regdomain);
+-
+-const struct channel_range *zd_channel_range(u8 regdomain);
+-int zd_regdomain_supports_channel(u8 regdomain, u8 channel);
+-int zd_regdomain_supported(u8 regdomain);
+-
+-/* for 2.4 GHz band */
+-int zd_channel_to_freq(struct iw_freq *freq, u8 channel);
+-int zd_find_channel(u8 *channel, const struct iw_freq *freq);
++void zd_geo_init(struct ieee80211_hw *hw, u8 regdomain);
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
+ #define ZD_PLCP_SERVICE_LENGTH_EXTENSION 0x80
- /* Get or create the current association request */
-- assoc_req = wlan_get_association_request(adapter);
-+ assoc_req = lbs_get_association_request(priv);
- if (!assoc_req) {
-- wlan_cancel_association_work(priv);
-+ lbs_cancel_association_work(priv);
- ret = -ENOMEM;
- } else {
- /* Copy the BSSID to the association request */
- memcpy(&assoc_req->bssid, awrq->sa_data, ETH_ALEN);
- set_bit(ASSOC_FLAG_BSSID, &assoc_req->flags);
-- wlan_postpone_association_work(priv);
-+ lbs_postpone_association_work(priv);
- }
+@@ -54,8 +59,8 @@ static inline u8 zd_ofdm_plcp_header_rate(const struct ofdm_plcp_header *header)
+ *
+ * See the struct zd_ctrlset definition in zd_mac.h.
+ */
+-#define ZD_OFDM_PLCP_RATE_6M 0xb
+-#define ZD_OFDM_PLCP_RATE_9M 0xf
++#define ZD_OFDM_PLCP_RATE_6M 0xb
++#define ZD_OFDM_PLCP_RATE_9M 0xf
+ #define ZD_OFDM_PLCP_RATE_12M 0xa
+ #define ZD_OFDM_PLCP_RATE_18M 0xe
+ #define ZD_OFDM_PLCP_RATE_24M 0x9
+@@ -87,10 +92,4 @@ static inline u8 zd_cck_plcp_header_signal(const struct cck_plcp_header *header)
+ #define ZD_CCK_PLCP_SIGNAL_5M5 0x37
+ #define ZD_CCK_PLCP_SIGNAL_11M 0x6e
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
+-enum ieee80211_std {
+- IEEE80211B = 0x01,
+- IEEE80211A = 0x02,
+- IEEE80211G = 0x04,
+-};
+-
+ #endif /* _ZD_IEEE80211_H */
+diff --git a/drivers/net/wireless/zd1211rw/zd_mac.c b/drivers/net/wireless/zd1211rw/zd_mac.c
+index 5298a8b..49127e4 100644
+--- a/drivers/net/wireless/zd1211rw/zd_mac.c
++++ b/drivers/net/wireless/zd1211rw/zd_mac.c
+@@ -1,4 +1,9 @@
+-/* zd_mac.c
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
++ * Copyright (C) 2006-2007 Michael Wu <flamingice at sourmilk.net>
++ * Copyright (c) 2007 Luis R. Rodriguez <mcgrof at winlab.rutgers.edu>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -17,7 +22,6 @@
- return ret;
- }
+ #include <linux/netdevice.h>
+ #include <linux/etherdevice.h>
+-#include <linux/wireless.h>
+ #include <linux/usb.h>
+ #include <linux/jiffies.h>
+ #include <net/ieee80211_radiotap.h>
+@@ -26,81 +30,105 @@
+ #include "zd_chip.h"
+ #include "zd_mac.h"
+ #include "zd_ieee80211.h"
+-#include "zd_netdev.h"
+ #include "zd_rf.h"
--void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen)
-+void lbs_get_fwversion(struct lbs_private *priv, char *fwversion, int maxlen)
- {
- char fwver[32];
+-static void ieee_init(struct ieee80211_device *ieee);
+-static void softmac_init(struct ieee80211softmac_device *sm);
+-static void set_rts_cts_work(struct work_struct *work);
+-static void set_basic_rates_work(struct work_struct *work);
++/* This table contains the hardware specific values for the modulation rates. */
++static const struct ieee80211_rate zd_rates[] = {
++ { .rate = 10,
++ .val = ZD_CCK_RATE_1M,
++ .flags = IEEE80211_RATE_CCK },
++ { .rate = 20,
++ .val = ZD_CCK_RATE_2M,
++ .val2 = ZD_CCK_RATE_2M | ZD_CCK_PREA_SHORT,
++ .flags = IEEE80211_RATE_CCK_2 },
++ { .rate = 55,
++ .val = ZD_CCK_RATE_5_5M,
++ .val2 = ZD_CCK_RATE_5_5M | ZD_CCK_PREA_SHORT,
++ .flags = IEEE80211_RATE_CCK_2 },
++ { .rate = 110,
++ .val = ZD_CCK_RATE_11M,
++ .val2 = ZD_CCK_RATE_11M | ZD_CCK_PREA_SHORT,
++ .flags = IEEE80211_RATE_CCK_2 },
++ { .rate = 60,
++ .val = ZD_OFDM_RATE_6M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 90,
++ .val = ZD_OFDM_RATE_9M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 120,
++ .val = ZD_OFDM_RATE_12M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 180,
++ .val = ZD_OFDM_RATE_18M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 240,
++ .val = ZD_OFDM_RATE_24M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 360,
++ .val = ZD_OFDM_RATE_36M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 480,
++ .val = ZD_OFDM_RATE_48M,
++ .flags = IEEE80211_RATE_OFDM },
++ { .rate = 540,
++ .val = ZD_OFDM_RATE_54M,
++ .flags = IEEE80211_RATE_OFDM },
++};
++
++static const struct ieee80211_channel zd_channels[] = {
++ { .chan = 1,
++ .freq = 2412},
++ { .chan = 2,
++ .freq = 2417},
++ { .chan = 3,
++ .freq = 2422},
++ { .chan = 4,
++ .freq = 2427},
++ { .chan = 5,
++ .freq = 2432},
++ { .chan = 6,
++ .freq = 2437},
++ { .chan = 7,
++ .freq = 2442},
++ { .chan = 8,
++ .freq = 2447},
++ { .chan = 9,
++ .freq = 2452},
++ { .chan = 10,
++ .freq = 2457},
++ { .chan = 11,
++ .freq = 2462},
++ { .chan = 12,
++ .freq = 2467},
++ { .chan = 13,
++ .freq = 2472},
++ { .chan = 14,
++ .freq = 2484}
++};
-- mutex_lock(&adapter->lock);
-+ mutex_lock(&priv->lock);
+ static void housekeeping_init(struct zd_mac *mac);
+ static void housekeeping_enable(struct zd_mac *mac);
+ static void housekeeping_disable(struct zd_mac *mac);
-- if (adapter->fwreleasenumber[3] == 0)
-- sprintf(fwver, "%u.%u.%u",
-- adapter->fwreleasenumber[2],
-- adapter->fwreleasenumber[1],
-- adapter->fwreleasenumber[0]);
-- else
-- sprintf(fwver, "%u.%u.%u.p%u",
-- adapter->fwreleasenumber[2],
-- adapter->fwreleasenumber[1],
-- adapter->fwreleasenumber[0],
-- adapter->fwreleasenumber[3]);
-+ sprintf(fwver, "%u.%u.%u.p%u",
-+ priv->fwrelease >> 24 & 0xff,
-+ priv->fwrelease >> 16 & 0xff,
-+ priv->fwrelease >> 8 & 0xff,
-+ priv->fwrelease & 0xff);
+-static void set_multicast_hash_handler(struct work_struct *work);
+-
+-static void do_rx(unsigned long mac_ptr);
+-
+-int zd_mac_init(struct zd_mac *mac,
+- struct net_device *netdev,
+- struct usb_interface *intf)
+-{
+- struct ieee80211_device *ieee = zd_netdev_ieee80211(netdev);
+-
+- memset(mac, 0, sizeof(*mac));
+- spin_lock_init(&mac->lock);
+- mac->netdev = netdev;
+- INIT_DELAYED_WORK(&mac->set_rts_cts_work, set_rts_cts_work);
+- INIT_DELAYED_WORK(&mac->set_basic_rates_work, set_basic_rates_work);
+-
+- skb_queue_head_init(&mac->rx_queue);
+- tasklet_init(&mac->rx_tasklet, do_rx, (unsigned long)mac);
+- tasklet_disable(&mac->rx_tasklet);
+-
+- ieee_init(ieee);
+- softmac_init(ieee80211_priv(netdev));
+- zd_chip_init(&mac->chip, netdev, intf);
+- housekeeping_init(mac);
+- INIT_WORK(&mac->set_multicast_hash_work, set_multicast_hash_handler);
+- return 0;
+-}
+-
+-static int reset_channel(struct zd_mac *mac)
+-{
+- int r;
+- unsigned long flags;
+- const struct channel_range *range;
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- range = zd_channel_range(mac->regdomain);
+- if (!range->start) {
+- r = -EINVAL;
+- goto out;
+- }
+- mac->requested_channel = range->start;
+- r = 0;
+-out:
+- spin_unlock_irqrestore(&mac->lock, flags);
+- return r;
+-}
+-
+-int zd_mac_preinit_hw(struct zd_mac *mac)
++int zd_mac_preinit_hw(struct ieee80211_hw *hw)
+ {
+ int r;
+ u8 addr[ETH_ALEN];
++ struct zd_mac *mac = zd_hw_mac(hw);
-- mutex_unlock(&adapter->lock);
-+ mutex_unlock(&priv->lock);
- snprintf(fwversion, maxlen, fwver);
+ r = zd_chip_read_mac_addr_fw(&mac->chip, addr);
+ if (r)
+ return r;
+
+- memcpy(mac->netdev->dev_addr, addr, ETH_ALEN);
++ SET_IEEE80211_PERM_ADDR(hw, addr);
++
+ return 0;
}
-@@ -2014,19 +2085,19 @@ void libertas_get_fwversion(wlan_adapter * adapter, char *fwversion, int maxlen)
- /*
- * iwconfig settable callbacks
- */
--static const iw_handler wlan_handler[] = {
-+static const iw_handler lbs_handler[] = {
- (iw_handler) NULL, /* SIOCSIWCOMMIT */
-- (iw_handler) wlan_get_name, /* SIOCGIWNAME */
-+ (iw_handler) lbs_get_name, /* SIOCGIWNAME */
- (iw_handler) NULL, /* SIOCSIWNWID */
- (iw_handler) NULL, /* SIOCGIWNWID */
-- (iw_handler) wlan_set_freq, /* SIOCSIWFREQ */
-- (iw_handler) wlan_get_freq, /* SIOCGIWFREQ */
-- (iw_handler) wlan_set_mode, /* SIOCSIWMODE */
-- (iw_handler) wlan_get_mode, /* SIOCGIWMODE */
-+ (iw_handler) lbs_set_freq, /* SIOCSIWFREQ */
-+ (iw_handler) lbs_get_freq, /* SIOCGIWFREQ */
-+ (iw_handler) lbs_set_mode, /* SIOCSIWMODE */
-+ (iw_handler) lbs_get_mode, /* SIOCGIWMODE */
- (iw_handler) NULL, /* SIOCSIWSENS */
- (iw_handler) NULL, /* SIOCGIWSENS */
- (iw_handler) NULL, /* SIOCSIWRANGE */
-- (iw_handler) wlan_get_range, /* SIOCGIWRANGE */
-+ (iw_handler) lbs_get_range, /* SIOCGIWRANGE */
- (iw_handler) NULL, /* SIOCSIWPRIV */
- (iw_handler) NULL, /* SIOCGIWPRIV */
- (iw_handler) NULL, /* SIOCSIWSTATS */
-@@ -2035,56 +2106,56 @@ static const iw_handler wlan_handler[] = {
- iw_handler_get_spy, /* SIOCGIWSPY */
- iw_handler_set_thrspy, /* SIOCSIWTHRSPY */
- iw_handler_get_thrspy, /* SIOCGIWTHRSPY */
-- (iw_handler) wlan_set_wap, /* SIOCSIWAP */
-- (iw_handler) wlan_get_wap, /* SIOCGIWAP */
-+ (iw_handler) lbs_set_wap, /* SIOCSIWAP */
-+ (iw_handler) lbs_get_wap, /* SIOCGIWAP */
- (iw_handler) NULL, /* SIOCSIWMLME */
- (iw_handler) NULL, /* SIOCGIWAPLIST - deprecated */
-- (iw_handler) libertas_set_scan, /* SIOCSIWSCAN */
-- (iw_handler) libertas_get_scan, /* SIOCGIWSCAN */
-- (iw_handler) wlan_set_essid, /* SIOCSIWESSID */
-- (iw_handler) wlan_get_essid, /* SIOCGIWESSID */
-- (iw_handler) wlan_set_nick, /* SIOCSIWNICKN */
-- (iw_handler) wlan_get_nick, /* SIOCGIWNICKN */
-+ (iw_handler) lbs_set_scan, /* SIOCSIWSCAN */
-+ (iw_handler) lbs_get_scan, /* SIOCGIWSCAN */
-+ (iw_handler) lbs_set_essid, /* SIOCSIWESSID */
-+ (iw_handler) lbs_get_essid, /* SIOCGIWESSID */
-+ (iw_handler) lbs_set_nick, /* SIOCSIWNICKN */
-+ (iw_handler) lbs_get_nick, /* SIOCGIWNICKN */
- (iw_handler) NULL, /* -- hole -- */
- (iw_handler) NULL, /* -- hole -- */
-- (iw_handler) wlan_set_rate, /* SIOCSIWRATE */
-- (iw_handler) wlan_get_rate, /* SIOCGIWRATE */
-- (iw_handler) wlan_set_rts, /* SIOCSIWRTS */
-- (iw_handler) wlan_get_rts, /* SIOCGIWRTS */
-- (iw_handler) wlan_set_frag, /* SIOCSIWFRAG */
-- (iw_handler) wlan_get_frag, /* SIOCGIWFRAG */
-- (iw_handler) wlan_set_txpow, /* SIOCSIWTXPOW */
-- (iw_handler) wlan_get_txpow, /* SIOCGIWTXPOW */
-- (iw_handler) wlan_set_retry, /* SIOCSIWRETRY */
-- (iw_handler) wlan_get_retry, /* SIOCGIWRETRY */
-- (iw_handler) wlan_set_encode, /* SIOCSIWENCODE */
-- (iw_handler) wlan_get_encode, /* SIOCGIWENCODE */
-- (iw_handler) wlan_set_power, /* SIOCSIWPOWER */
-- (iw_handler) wlan_get_power, /* SIOCGIWPOWER */
-+ (iw_handler) lbs_set_rate, /* SIOCSIWRATE */
-+ (iw_handler) lbs_get_rate, /* SIOCGIWRATE */
-+ (iw_handler) lbs_set_rts, /* SIOCSIWRTS */
-+ (iw_handler) lbs_get_rts, /* SIOCGIWRTS */
-+ (iw_handler) lbs_set_frag, /* SIOCSIWFRAG */
-+ (iw_handler) lbs_get_frag, /* SIOCGIWFRAG */
-+ (iw_handler) lbs_set_txpow, /* SIOCSIWTXPOW */
-+ (iw_handler) lbs_get_txpow, /* SIOCGIWTXPOW */
-+ (iw_handler) lbs_set_retry, /* SIOCSIWRETRY */
-+ (iw_handler) lbs_get_retry, /* SIOCGIWRETRY */
-+ (iw_handler) lbs_set_encode, /* SIOCSIWENCODE */
-+ (iw_handler) lbs_get_encode, /* SIOCGIWENCODE */
-+ (iw_handler) lbs_set_power, /* SIOCSIWPOWER */
-+ (iw_handler) lbs_get_power, /* SIOCGIWPOWER */
- (iw_handler) NULL, /* -- hole -- */
- (iw_handler) NULL, /* -- hole -- */
-- (iw_handler) wlan_set_genie, /* SIOCSIWGENIE */
-- (iw_handler) wlan_get_genie, /* SIOCGIWGENIE */
-- (iw_handler) wlan_set_auth, /* SIOCSIWAUTH */
-- (iw_handler) wlan_get_auth, /* SIOCGIWAUTH */
-- (iw_handler) wlan_set_encodeext,/* SIOCSIWENCODEEXT */
-- (iw_handler) wlan_get_encodeext,/* SIOCGIWENCODEEXT */
-+ (iw_handler) lbs_set_genie, /* SIOCSIWGENIE */
-+ (iw_handler) lbs_get_genie, /* SIOCGIWGENIE */
-+ (iw_handler) lbs_set_auth, /* SIOCSIWAUTH */
-+ (iw_handler) lbs_get_auth, /* SIOCGIWAUTH */
-+ (iw_handler) lbs_set_encodeext,/* SIOCSIWENCODEEXT */
-+ (iw_handler) lbs_get_encodeext,/* SIOCGIWENCODEEXT */
- (iw_handler) NULL, /* SIOCSIWPMKSA */
- };
+-int zd_mac_init_hw(struct zd_mac *mac)
++int zd_mac_init_hw(struct ieee80211_hw *hw)
+ {
+ int r;
++ struct zd_mac *mac = zd_hw_mac(hw);
+ struct zd_chip *chip = &mac->chip;
+ u8 default_regdomain;
- static const iw_handler mesh_wlan_handler[] = {
- (iw_handler) NULL, /* SIOCSIWCOMMIT */
-- (iw_handler) wlan_get_name, /* SIOCGIWNAME */
-+ (iw_handler) lbs_get_name, /* SIOCGIWNAME */
- (iw_handler) NULL, /* SIOCSIWNWID */
- (iw_handler) NULL, /* SIOCGIWNWID */
-- (iw_handler) wlan_set_freq, /* SIOCSIWFREQ */
-- (iw_handler) wlan_get_freq, /* SIOCGIWFREQ */
-+ (iw_handler) lbs_mesh_set_freq, /* SIOCSIWFREQ */
-+ (iw_handler) lbs_get_freq, /* SIOCGIWFREQ */
- (iw_handler) NULL, /* SIOCSIWMODE */
- (iw_handler) mesh_wlan_get_mode, /* SIOCGIWMODE */
- (iw_handler) NULL, /* SIOCSIWSENS */
- (iw_handler) NULL, /* SIOCGIWSENS */
- (iw_handler) NULL, /* SIOCSIWRANGE */
-- (iw_handler) wlan_get_range, /* SIOCGIWRANGE */
-+ (iw_handler) lbs_get_range, /* SIOCGIWRANGE */
- (iw_handler) NULL, /* SIOCSIWPRIV */
- (iw_handler) NULL, /* SIOCGIWPRIV */
- (iw_handler) NULL, /* SIOCSIWSTATS */
-@@ -2097,46 +2168,46 @@ static const iw_handler mesh_wlan_handler[] = {
- (iw_handler) NULL, /* SIOCGIWAP */
- (iw_handler) NULL, /* SIOCSIWMLME */
- (iw_handler) NULL, /* SIOCGIWAPLIST - deprecated */
-- (iw_handler) libertas_set_scan, /* SIOCSIWSCAN */
-- (iw_handler) libertas_get_scan, /* SIOCGIWSCAN */
-- (iw_handler) NULL, /* SIOCSIWESSID */
-- (iw_handler) NULL, /* SIOCGIWESSID */
-+ (iw_handler) lbs_set_scan, /* SIOCSIWSCAN */
-+ (iw_handler) lbs_get_scan, /* SIOCGIWSCAN */
-+ (iw_handler) lbs_mesh_set_essid,/* SIOCSIWESSID */
-+ (iw_handler) lbs_mesh_get_essid,/* SIOCGIWESSID */
- (iw_handler) NULL, /* SIOCSIWNICKN */
- (iw_handler) mesh_get_nick, /* SIOCGIWNICKN */
- (iw_handler) NULL, /* -- hole -- */
- (iw_handler) NULL, /* -- hole -- */
-- (iw_handler) wlan_set_rate, /* SIOCSIWRATE */
-- (iw_handler) wlan_get_rate, /* SIOCGIWRATE */
-- (iw_handler) wlan_set_rts, /* SIOCSIWRTS */
-- (iw_handler) wlan_get_rts, /* SIOCGIWRTS */
-- (iw_handler) wlan_set_frag, /* SIOCSIWFRAG */
-- (iw_handler) wlan_get_frag, /* SIOCGIWFRAG */
-- (iw_handler) wlan_set_txpow, /* SIOCSIWTXPOW */
-- (iw_handler) wlan_get_txpow, /* SIOCGIWTXPOW */
-- (iw_handler) wlan_set_retry, /* SIOCSIWRETRY */
-- (iw_handler) wlan_get_retry, /* SIOCGIWRETRY */
-- (iw_handler) wlan_set_encode, /* SIOCSIWENCODE */
-- (iw_handler) wlan_get_encode, /* SIOCGIWENCODE */
-- (iw_handler) wlan_set_power, /* SIOCSIWPOWER */
-- (iw_handler) wlan_get_power, /* SIOCGIWPOWER */
-+ (iw_handler) lbs_set_rate, /* SIOCSIWRATE */
-+ (iw_handler) lbs_get_rate, /* SIOCGIWRATE */
-+ (iw_handler) lbs_set_rts, /* SIOCSIWRTS */
-+ (iw_handler) lbs_get_rts, /* SIOCGIWRTS */
-+ (iw_handler) lbs_set_frag, /* SIOCSIWFRAG */
-+ (iw_handler) lbs_get_frag, /* SIOCGIWFRAG */
-+ (iw_handler) lbs_set_txpow, /* SIOCSIWTXPOW */
-+ (iw_handler) lbs_get_txpow, /* SIOCGIWTXPOW */
-+ (iw_handler) lbs_set_retry, /* SIOCSIWRETRY */
-+ (iw_handler) lbs_get_retry, /* SIOCGIWRETRY */
-+ (iw_handler) lbs_set_encode, /* SIOCSIWENCODE */
-+ (iw_handler) lbs_get_encode, /* SIOCGIWENCODE */
-+ (iw_handler) lbs_set_power, /* SIOCSIWPOWER */
-+ (iw_handler) lbs_get_power, /* SIOCGIWPOWER */
- (iw_handler) NULL, /* -- hole -- */
- (iw_handler) NULL, /* -- hole -- */
-- (iw_handler) wlan_set_genie, /* SIOCSIWGENIE */
-- (iw_handler) wlan_get_genie, /* SIOCGIWGENIE */
-- (iw_handler) wlan_set_auth, /* SIOCSIWAUTH */
-- (iw_handler) wlan_get_auth, /* SIOCGIWAUTH */
-- (iw_handler) wlan_set_encodeext,/* SIOCSIWENCODEEXT */
-- (iw_handler) wlan_get_encodeext,/* SIOCGIWENCODEEXT */
-+ (iw_handler) lbs_set_genie, /* SIOCSIWGENIE */
-+ (iw_handler) lbs_get_genie, /* SIOCGIWGENIE */
-+ (iw_handler) lbs_set_auth, /* SIOCSIWAUTH */
-+ (iw_handler) lbs_get_auth, /* SIOCGIWAUTH */
-+ (iw_handler) lbs_set_encodeext,/* SIOCSIWENCODEEXT */
-+ (iw_handler) lbs_get_encodeext,/* SIOCGIWENCODEEXT */
- (iw_handler) NULL, /* SIOCSIWPMKSA */
- };
--struct iw_handler_def libertas_handler_def = {
-- .num_standard = ARRAY_SIZE(wlan_handler),
-- .standard = (iw_handler *) wlan_handler,
-- .get_wireless_stats = wlan_get_wireless_stats,
-+struct iw_handler_def lbs_handler_def = {
-+ .num_standard = ARRAY_SIZE(lbs_handler),
-+ .standard = (iw_handler *) lbs_handler,
-+ .get_wireless_stats = lbs_get_wireless_stats,
- };
+@@ -116,22 +144,9 @@ int zd_mac_init_hw(struct zd_mac *mac)
+ r = zd_read_regdomain(chip, &default_regdomain);
+ if (r)
+ goto disable_int;
+- if (!zd_regdomain_supported(default_regdomain)) {
+- /* The vendor driver overrides the regulatory domain and
+- * allowed channel registers and unconditionally restricts
+- * available channels to 1-11 everywhere. Match their
+- * questionable behaviour only for regdomains which we don't
+- * recognise. */
+- dev_warn(zd_mac_dev(mac), "Unrecognised regulatory domain: "
+- "%#04x. Defaulting to FCC.\n", default_regdomain);
+- default_regdomain = ZD_REGDOMAIN_FCC;
+- }
+ spin_lock_irq(&mac->lock);
+ mac->regdomain = mac->default_regdomain = default_regdomain;
+ spin_unlock_irq(&mac->lock);
+- r = reset_channel(mac);
+- if (r)
+- goto disable_int;
- struct iw_handler_def mesh_handler_def = {
- .num_standard = ARRAY_SIZE(mesh_wlan_handler),
- .standard = (iw_handler *) mesh_wlan_handler,
-- .get_wireless_stats = wlan_get_wireless_stats,
-+ .get_wireless_stats = lbs_get_wireless_stats,
- };
-diff --git a/drivers/net/wireless/libertas/wext.h b/drivers/net/wireless/libertas/wext.h
-index 6aa444c..a563d9a 100644
---- a/drivers/net/wireless/libertas/wext.h
-+++ b/drivers/net/wireless/libertas/wext.h
-@@ -1,11 +1,11 @@
- /**
- * This file contains definition for IOCTL call.
- */
--#ifndef _WLAN_WEXT_H_
--#define _WLAN_WEXT_H_
-+#ifndef _LBS_WEXT_H_
-+#define _LBS_WEXT_H_
+ /* We must inform the device that we are doing encryption/decryption in
+ * software at the moment. */
+@@ -139,9 +154,7 @@ int zd_mac_init_hw(struct zd_mac *mac)
+ if (r)
+ goto disable_int;
--/** wlan_ioctl_regrdwr */
--struct wlan_ioctl_regrdwr {
-+/** lbs_ioctl_regrdwr */
-+struct lbs_ioctl_regrdwr {
- /** Which register to access */
- u16 whichreg;
- /** Read or Write */
-@@ -15,9 +15,9 @@ struct wlan_ioctl_regrdwr {
- u32 value;
- };
+- r = zd_geo_init(zd_mac_to_ieee80211(mac), mac->regdomain);
+- if (r)
+- goto disable_int;
++ zd_geo_init(hw, mac->regdomain);
--#define WLAN_MONITOR_OFF 0
-+#define LBS_MONITOR_OFF 0
+ r = 0;
+ disable_int:
+@@ -153,8 +166,6 @@ out:
+ void zd_mac_clear(struct zd_mac *mac)
+ {
+ flush_workqueue(zd_workqueue);
+- skb_queue_purge(&mac->rx_queue);
+- tasklet_kill(&mac->rx_tasklet);
+ zd_chip_clear(&mac->chip);
+ ZD_ASSERT(!spin_is_locked(&mac->lock));
+ ZD_MEMCLEAR(mac, sizeof(struct zd_mac));
+@@ -162,34 +173,27 @@ void zd_mac_clear(struct zd_mac *mac)
--extern struct iw_handler_def libertas_handler_def;
-+extern struct iw_handler_def lbs_handler_def;
- extern struct iw_handler_def mesh_handler_def;
+ static int set_rx_filter(struct zd_mac *mac)
+ {
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- u32 filter = (ieee->iw_mode == IW_MODE_MONITOR) ? ~0 : STA_RX_FILTER;
+- return zd_iowrite32(&mac->chip, CR_RX_FILTER, filter);
+-}
++ unsigned long flags;
++ u32 filter = STA_RX_FILTER;
--#endif /* _WLAN_WEXT_H_ */
-+#endif
-diff --git a/drivers/net/wireless/orinoco.c b/drivers/net/wireless/orinoco.c
-index ca6c2da..6d13a0d 100644
---- a/drivers/net/wireless/orinoco.c
-+++ b/drivers/net/wireless/orinoco.c
-@@ -270,6 +270,37 @@ static inline void set_port_type(struct orinoco_private *priv)
+-static int set_sniffer(struct zd_mac *mac)
+-{
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- return zd_iowrite32(&mac->chip, CR_SNIFFER_ON,
+- ieee->iw_mode == IW_MODE_MONITOR ? 1 : 0);
+- return 0;
++ spin_lock_irqsave(&mac->lock, flags);
++ if (mac->pass_ctrl)
++ filter |= RX_FILTER_CTRL;
++ spin_unlock_irqrestore(&mac->lock, flags);
++
++ return zd_iowrite32(&mac->chip, CR_RX_FILTER, filter);
+ }
+
+ static int set_mc_hash(struct zd_mac *mac)
+ {
+ struct zd_mc_hash hash;
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+-
+ zd_mc_clear(&hash);
+- if (ieee->iw_mode == IW_MODE_MONITOR)
+- zd_mc_add_all(&hash);
+-
+ return zd_chip_set_multicast_hash(&mac->chip, &hash);
+ }
+
+-int zd_mac_open(struct net_device *netdev)
++static int zd_op_start(struct ieee80211_hw *hw)
+ {
+- struct zd_mac *mac = zd_netdev_mac(netdev);
++ struct zd_mac *mac = zd_hw_mac(hw);
+ struct zd_chip *chip = &mac->chip;
+ struct zd_usb *usb = &chip->usb;
+ int r;
+@@ -200,46 +204,33 @@ int zd_mac_open(struct net_device *netdev)
+ goto out;
}
+
+- tasklet_enable(&mac->rx_tasklet);
+-
+ r = zd_chip_enable_int(chip);
+ if (r < 0)
+ goto out;
+
+- r = zd_write_mac_addr(chip, netdev->dev_addr);
+- if (r)
+- goto disable_int;
+-
+ r = zd_chip_set_basic_rates(chip, CR_RATES_80211B | CR_RATES_80211G);
+ if (r < 0)
+ goto disable_int;
+ r = set_rx_filter(mac);
+ if (r)
+ goto disable_int;
+- r = set_sniffer(mac);
+- if (r)
+- goto disable_int;
+ r = set_mc_hash(mac);
+ if (r)
+ goto disable_int;
+ r = zd_chip_switch_radio_on(chip);
+ if (r < 0)
+ goto disable_int;
+- r = zd_chip_set_channel(chip, mac->requested_channel);
+- if (r < 0)
+- goto disable_radio;
+- r = zd_chip_enable_rx(chip);
++ r = zd_chip_enable_rxtx(chip);
+ if (r < 0)
+ goto disable_radio;
+ r = zd_chip_enable_hwint(chip);
+ if (r < 0)
+- goto disable_rx;
++ goto disable_rxtx;
+
+ housekeeping_enable(mac);
+- ieee80211softmac_start(netdev);
+ return 0;
+-disable_rx:
+- zd_chip_disable_rx(chip);
++disable_rxtx:
++ zd_chip_disable_rxtx(chip);
+ disable_radio:
+ zd_chip_switch_radio_off(chip);
+ disable_int:
+@@ -248,494 +239,190 @@ out:
+ return r;
}
-+#define ORINOCO_MAX_BSS_COUNT 64
-+static int orinoco_bss_data_allocate(struct orinoco_private *priv)
-+{
-+ if (priv->bss_data)
-+ return 0;
-+
-+ priv->bss_data =
-+ kzalloc(ORINOCO_MAX_BSS_COUNT * sizeof(bss_element), GFP_KERNEL);
-+ if (!priv->bss_data) {
-+ printk(KERN_WARNING "Out of memory allocating beacons");
-+ return -ENOMEM;
-+ }
-+ return 0;
-+}
-+
-+static void orinoco_bss_data_free(struct orinoco_private *priv)
-+{
-+ kfree(priv->bss_data);
-+ priv->bss_data = NULL;
-+}
-+
-+static void orinoco_bss_data_init(struct orinoco_private *priv)
-+{
-+ int i;
-+
-+ INIT_LIST_HEAD(&priv->bss_free_list);
-+ INIT_LIST_HEAD(&priv->bss_list);
-+ for (i = 0; i < ORINOCO_MAX_BSS_COUNT; i++)
-+ list_add_tail(&priv->bss_data[i].list, &priv->bss_free_list);
+-int zd_mac_stop(struct net_device *netdev)
++/**
++ * clear_tx_skb_control_block - clears the control block of tx skbuffs
++ * @skb: a &struct sk_buff pointer
++ *
++ * This clears the control block of skbuff buffers, which were transmitted to
++ * the device. Note that the function is not thread-safe, so prevent
++ * multiple calls.
++ */
++static void clear_tx_skb_control_block(struct sk_buff *skb)
+ {
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+- struct zd_chip *chip = &mac->chip;
++ struct zd_tx_skb_control_block *cb =
++ (struct zd_tx_skb_control_block *)skb->cb;
+
+- netif_stop_queue(netdev);
++ kfree(cb->control);
++ cb->control = NULL;
+}
-+
- /********************************************************************/
- /* Device methods */
- /********************************************************************/
-@@ -1083,6 +1114,124 @@ static void orinoco_send_wevents(struct work_struct *work)
- orinoco_unlock(priv, &flags);
- }
-+
-+static inline void orinoco_clear_scan_results(struct orinoco_private *priv,
-+ unsigned long scan_age)
+- /*
+- * The order here deliberately is a little different from the open()
++/**
++ * kfree_tx_skb - frees a tx skbuff
++ * @skb: a &struct sk_buff pointer
++ *
++ * Frees the tx skbuff. It also frees the allocated control structure in the
++ * control block if necessary.
++ */
++static void kfree_tx_skb(struct sk_buff *skb)
+{
-+ bss_element *bss;
-+ bss_element *tmp_bss;
-+
-+ /* Blow away current list of scan results */
-+ list_for_each_entry_safe(bss, tmp_bss, &priv->bss_list, list) {
-+ if (!scan_age ||
-+ time_after(jiffies, bss->last_scanned + scan_age)) {
-+ list_move_tail(&bss->list, &priv->bss_free_list);
-+ /* Don't blow away ->list, just BSS data */
-+ memset(bss, 0, sizeof(bss->bss));
-+ bss->last_scanned = 0;
-+ }
-+ }
++ clear_tx_skb_control_block(skb);
++ dev_kfree_skb_any(skb);
+}
+
-+static int orinoco_process_scan_results(struct net_device *dev,
-+ unsigned char *buf,
-+ int len)
++static void zd_op_stop(struct ieee80211_hw *hw)
+{
-+ struct orinoco_private *priv = netdev_priv(dev);
-+ int offset; /* In the scan data */
-+ union hermes_scan_info *atom;
-+ int atom_len;
-+
-+ switch (priv->firmware_type) {
-+ case FIRMWARE_TYPE_AGERE:
-+ atom_len = sizeof(struct agere_scan_apinfo);
-+ offset = 0;
-+ break;
-+ case FIRMWARE_TYPE_SYMBOL:
-+ /* Lack of documentation necessitates this hack.
-+ * Different firmwares have 68 or 76 byte long atoms.
-+ * We try modulo first. If the length divides by both,
-+ * we check what would be the channel in the second
-+ * frame for a 68-byte atom. 76-byte atoms have 0 there.
-+ * Valid channel cannot be 0. */
-+ if (len % 76)
-+ atom_len = 68;
-+ else if (len % 68)
-+ atom_len = 76;
-+ else if (len >= 1292 && buf[68] == 0)
-+ atom_len = 76;
-+ else
-+ atom_len = 68;
-+ offset = 0;
-+ break;
-+ case FIRMWARE_TYPE_INTERSIL:
-+ offset = 4;
-+ if (priv->has_hostscan) {
-+ atom_len = le16_to_cpup((__le16 *)buf);
-+ /* Sanity check for atom_len */
-+ if (atom_len < sizeof(struct prism2_scan_apinfo)) {
-+ printk(KERN_ERR "%s: Invalid atom_len in scan "
-+ "data: %d\n", dev->name, atom_len);
-+ return -EIO;
-+ }
-+ } else
-+ atom_len = offsetof(struct prism2_scan_apinfo, atim);
-+ break;
-+ default:
-+ return -EOPNOTSUPP;
-+ }
-+
-+ /* Check that we got an whole number of atoms */
-+ if ((len - offset) % atom_len) {
-+ printk(KERN_ERR "%s: Unexpected scan data length %d, "
-+ "atom_len %d, offset %d\n", dev->name, len,
-+ atom_len, offset);
-+ return -EIO;
-+ }
-+
-+ orinoco_clear_scan_results(priv, msecs_to_jiffies(15000));
-+
-+ /* Read the entries one by one */
-+ for (; offset + atom_len <= len; offset += atom_len) {
-+ int found = 0;
-+ bss_element *bss = NULL;
-+
-+ /* Get next atom */
-+ atom = (union hermes_scan_info *) (buf + offset);
-+
-+ /* Try to update an existing bss first */
-+ list_for_each_entry(bss, &priv->bss_list, list) {
-+ if (compare_ether_addr(bss->bss.a.bssid, atom->a.bssid))
-+ continue;
-+ if (le16_to_cpu(bss->bss.a.essid_len) !=
-+ le16_to_cpu(atom->a.essid_len))
-+ continue;
-+ if (memcmp(bss->bss.a.essid, atom->a.essid,
-+ le16_to_cpu(atom->a.essid_len)))
-+ continue;
-+ found = 1;
-+ break;
-+ }
-+
-+ /* Grab a bss off the free list */
-+ if (!found && !list_empty(&priv->bss_free_list)) {
-+ bss = list_entry(priv->bss_free_list.next,
-+ bss_element, list);
-+ list_del(priv->bss_free_list.next);
-+
-+ list_add_tail(&bss->list, &priv->bss_list);
-+ }
-+
-+ if (bss) {
-+ /* Always update the BSS to get latest beacon info */
-+ memcpy(&bss->bss, atom, sizeof(bss->bss));
-+ bss->last_scanned = jiffies;
-+ }
-+ }
-+
-+ return 0;
-+}
-+
- static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
- {
- struct orinoco_private *priv = netdev_priv(dev);
-@@ -1208,6 +1357,9 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
- union iwreq_data wrqu;
- unsigned char *buf;
-
-+ /* Scan is no longer in progress */
-+ priv->scan_inprogress = 0;
++ struct zd_mac *mac = zd_hw_mac(hw);
++ struct zd_chip *chip = &mac->chip;
++ struct sk_buff *skb;
++ struct sk_buff_head *ack_wait_queue = &mac->ack_wait_queue;
+
- /* Sanity check */
- if (len > 4096) {
- printk(KERN_WARNING "%s: Scan results too large (%d bytes)\n",
-@@ -1215,15 +1367,6 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
- break;
- }
++ /* The order here deliberately is a little different from the open()
+ * method, since we need to make sure there is no opportunity for RX
+- * frames to be processed by softmac after we have stopped it.
++ * frames to be processed by mac80211 after we have stopped it.
+ */
-- /* We are a strict producer. If the previous scan results
-- * have not been consumed, we just have to drop this
-- * frame. We can't remove the previous results ourselves,
-- * that would be *very* racy... Jean II */
-- if (priv->scan_result != NULL) {
-- printk(KERN_WARNING "%s: Previous scan results not consumed, dropping info frame.\n", dev->name);
-- break;
-- }
+- zd_chip_disable_rx(chip);
+- skb_queue_purge(&mac->rx_queue);
+- tasklet_disable(&mac->rx_tasklet);
++ zd_chip_disable_rxtx(chip);
+ housekeeping_disable(mac);
+- ieee80211softmac_stop(netdev);
-
- /* Allocate buffer for results */
- buf = kmalloc(len, GFP_ATOMIC);
- if (buf == NULL)
-@@ -1248,18 +1391,17 @@ static void __orinoco_ev_info(struct net_device *dev, hermes_t *hw)
- }
- #endif /* ORINOCO_DEBUG */
+- /* Ensure no work items are running or queued from this point */
+- cancel_delayed_work(&mac->set_rts_cts_work);
+- cancel_delayed_work(&mac->set_basic_rates_work);
+ flush_workqueue(zd_workqueue);
+- mac->updating_rts_rate = 0;
+- mac->updating_basic_rates = 0;
-- /* Allow the clients to access the results */
-- priv->scan_len = len;
-- priv->scan_result = buf;
--
-- /* Send an empty event to user space.
-- * We don't send the received data on the event because
-- * it would require us to do complex transcoding, and
-- * we want to minimise the work done in the irq handler
-- * Use a request to extract the data - Jean II */
-- wrqu.data.length = 0;
-- wrqu.data.flags = 0;
-- wireless_send_event(dev, SIOCGIWSCAN, &wrqu, NULL);
-+ if (orinoco_process_scan_results(dev, buf, len) == 0) {
-+ /* Send an empty event to user space.
-+ * We don't send the received data on the event because
-+ * it would require us to do complex transcoding, and
-+ * we want to minimise the work done in the irq handler
-+ * Use a request to extract the data - Jean II */
-+ wrqu.data.length = 0;
-+ wrqu.data.flags = 0;
-+ wireless_send_event(dev, SIOCGIWSCAN, &wrqu, NULL);
-+ }
-+ kfree(buf);
- }
- break;
- case HERMES_INQ_SEC_STAT_AGERE:
-@@ -1896,8 +2038,7 @@ static void orinoco_reset(struct work_struct *work)
- orinoco_unlock(priv, &flags);
+ zd_chip_disable_hwint(chip);
+ zd_chip_switch_radio_off(chip);
+ zd_chip_disable_int(chip);
- /* Scanning support: Cleanup of driver struct */
-- kfree(priv->scan_result);
-- priv->scan_result = NULL;
-+ orinoco_clear_scan_results(priv, 0);
- priv->scan_inprogress = 0;
+- return 0;
+-}
+-
+-int zd_mac_set_mac_address(struct net_device *netdev, void *p)
+-{
+- int r;
+- unsigned long flags;
+- struct sockaddr *addr = p;
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+- struct zd_chip *chip = &mac->chip;
+- DECLARE_MAC_BUF(mac2);
+-
+- if (!is_valid_ether_addr(addr->sa_data))
+- return -EADDRNOTAVAIL;
+-
+- dev_dbg_f(zd_mac_dev(mac),
+- "Setting MAC to %s\n", print_mac(mac2, addr->sa_data));
+-
+- if (netdev->flags & IFF_UP) {
+- r = zd_write_mac_addr(chip, addr->sa_data);
+- if (r)
+- return r;
+- }
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- memcpy(netdev->dev_addr, addr->sa_data, ETH_ALEN);
+- spin_unlock_irqrestore(&mac->lock, flags);
+-
+- return 0;
+-}
+-
+-static void set_multicast_hash_handler(struct work_struct *work)
+-{
+- struct zd_mac *mac = container_of(work, struct zd_mac,
+- set_multicast_hash_work);
+- struct zd_mc_hash hash;
+-
+- spin_lock_irq(&mac->lock);
+- hash = mac->multicast_hash;
+- spin_unlock_irq(&mac->lock);
- if (priv->hard_reset) {
-@@ -2412,6 +2553,10 @@ struct net_device *alloc_orinocodev(int sizeof_card,
- else
- priv->card = NULL;
+- zd_chip_set_multicast_hash(&mac->chip, &hash);
++ while ((skb = skb_dequeue(ack_wait_queue)))
++ kfree_tx_skb(skb);
+ }
-+ if (orinoco_bss_data_allocate(priv))
-+ goto err_out_free;
-+ orinoco_bss_data_init(priv);
+-void zd_mac_set_multicast_list(struct net_device *dev)
+-{
+- struct zd_mac *mac = zd_netdev_mac(dev);
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- struct zd_mc_hash hash;
+- struct dev_mc_list *mc;
+- unsigned long flags;
+- DECLARE_MAC_BUF(mac2);
+-
+- if (dev->flags & (IFF_PROMISC|IFF_ALLMULTI) ||
+- ieee->iw_mode == IW_MODE_MONITOR) {
+- zd_mc_add_all(&hash);
+- } else {
+- zd_mc_clear(&hash);
+- for (mc = dev->mc_list; mc; mc = mc->next) {
+- dev_dbg_f(zd_mac_dev(mac), "mc addr %s\n",
+- print_mac(mac2, mc->dmi_addr));
+- zd_mc_add_addr(&hash, mc->dmi_addr);
+- }
+- }
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- mac->multicast_hash = hash;
+- spin_unlock_irqrestore(&mac->lock, flags);
+- queue_work(zd_workqueue, &mac->set_multicast_hash_work);
+-}
+-
+-int zd_mac_set_regdomain(struct zd_mac *mac, u8 regdomain)
+-{
+- int r;
+- u8 channel;
+-
+- ZD_ASSERT(!irqs_disabled());
+- spin_lock_irq(&mac->lock);
+- if (regdomain == 0) {
+- regdomain = mac->default_regdomain;
+- }
+- if (!zd_regdomain_supported(regdomain)) {
+- spin_unlock_irq(&mac->lock);
+- return -EINVAL;
+- }
+- mac->regdomain = regdomain;
+- channel = mac->requested_channel;
+- spin_unlock_irq(&mac->lock);
+-
+- r = zd_geo_init(zd_mac_to_ieee80211(mac), regdomain);
+- if (r)
+- return r;
+- if (!zd_regdomain_supports_channel(regdomain, channel)) {
+- r = reset_channel(mac);
+- if (r)
+- return r;
+- }
++/**
++ * init_tx_skb_control_block - initializes skb control block
++ * @skb: a &struct sk_buff pointer
++ * @hw: pointer to the mac80211 hardware
++ * @control: mac80211 tx control applying to the frame in @skb
++ *
++ * Initializes the control block of the skbuff to be transmitted.
++ */
++static int init_tx_skb_control_block(struct sk_buff *skb,
++ struct ieee80211_hw *hw,
++ struct ieee80211_tx_control *control)
++{
++ struct zd_tx_skb_control_block *cb =
++ (struct zd_tx_skb_control_block *)skb->cb;
+
- /* Setup / override net_device fields */
- dev->init = orinoco_init;
- dev->hard_start_xmit = orinoco_xmit;
-@@ -2447,13 +2592,16 @@ struct net_device *alloc_orinocodev(int sizeof_card,
-
- return dev;
++ ZD_ASSERT(sizeof(*cb) <= sizeof(skb->cb));
++ memset(cb, 0, sizeof(*cb));
++ cb->hw = hw;
++ cb->control = kmalloc(sizeof(*control), GFP_ATOMIC);
++ if (cb->control == NULL)
++ return -ENOMEM;
++ memcpy(cb->control, control, sizeof(*control));
-+err_out_free:
-+ free_netdev(dev);
-+ return NULL;
+ return 0;
}
- void free_orinocodev(struct net_device *dev)
+-u8 zd_mac_get_regdomain(struct zd_mac *mac)
+-{
+- unsigned long flags;
+- u8 regdomain;
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- regdomain = mac->regdomain;
+- spin_unlock_irqrestore(&mac->lock, flags);
+- return regdomain;
+-}
+-
+-/* Fallback to lowest rate, if rate is unknown. */
+-static u8 rate_to_zd_rate(u8 rate)
+-{
+- switch (rate) {
+- case IEEE80211_CCK_RATE_2MB:
+- return ZD_CCK_RATE_2M;
+- case IEEE80211_CCK_RATE_5MB:
+- return ZD_CCK_RATE_5_5M;
+- case IEEE80211_CCK_RATE_11MB:
+- return ZD_CCK_RATE_11M;
+- case IEEE80211_OFDM_RATE_6MB:
+- return ZD_OFDM_RATE_6M;
+- case IEEE80211_OFDM_RATE_9MB:
+- return ZD_OFDM_RATE_9M;
+- case IEEE80211_OFDM_RATE_12MB:
+- return ZD_OFDM_RATE_12M;
+- case IEEE80211_OFDM_RATE_18MB:
+- return ZD_OFDM_RATE_18M;
+- case IEEE80211_OFDM_RATE_24MB:
+- return ZD_OFDM_RATE_24M;
+- case IEEE80211_OFDM_RATE_36MB:
+- return ZD_OFDM_RATE_36M;
+- case IEEE80211_OFDM_RATE_48MB:
+- return ZD_OFDM_RATE_48M;
+- case IEEE80211_OFDM_RATE_54MB:
+- return ZD_OFDM_RATE_54M;
+- }
+- return ZD_CCK_RATE_1M;
+-}
+-
+-static u16 rate_to_cr_rate(u8 rate)
+-{
+- switch (rate) {
+- case IEEE80211_CCK_RATE_2MB:
+- return CR_RATE_1M;
+- case IEEE80211_CCK_RATE_5MB:
+- return CR_RATE_5_5M;
+- case IEEE80211_CCK_RATE_11MB:
+- return CR_RATE_11M;
+- case IEEE80211_OFDM_RATE_6MB:
+- return CR_RATE_6M;
+- case IEEE80211_OFDM_RATE_9MB:
+- return CR_RATE_9M;
+- case IEEE80211_OFDM_RATE_12MB:
+- return CR_RATE_12M;
+- case IEEE80211_OFDM_RATE_18MB:
+- return CR_RATE_18M;
+- case IEEE80211_OFDM_RATE_24MB:
+- return CR_RATE_24M;
+- case IEEE80211_OFDM_RATE_36MB:
+- return CR_RATE_36M;
+- case IEEE80211_OFDM_RATE_48MB:
+- return CR_RATE_48M;
+- case IEEE80211_OFDM_RATE_54MB:
+- return CR_RATE_54M;
+- }
+- return CR_RATE_1M;
+-}
+-
+-static void try_enable_tx(struct zd_mac *mac)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- if (mac->updating_rts_rate == 0 && mac->updating_basic_rates == 0)
+- netif_wake_queue(mac->netdev);
+- spin_unlock_irqrestore(&mac->lock, flags);
+-}
+-
+-static void set_rts_cts_work(struct work_struct *work)
++/**
++ * tx_status - reports tx status of a packet if required
++ * @hw: a &struct ieee80211_hw pointer
++ * @skb: a &struct sk_buff pointer
++ * @status: the tx status of the packet without control information
++ * @success: true for successful transmission of the frame
++ *
++ * This function calls ieee80211_tx_status_irqsafe() if required by the
++ * control information. It copies the control information into the status
++ * information.
++ *
++ * If no status information has been requested, the skb is freed.
++ */
++static void tx_status(struct ieee80211_hw *hw, struct sk_buff *skb,
++ struct ieee80211_tx_status *status,
++ bool success)
{
- struct orinoco_private *priv = netdev_priv(dev);
+- struct zd_mac *mac =
+- container_of(work, struct zd_mac, set_rts_cts_work.work);
+- unsigned long flags;
+- u8 rts_rate;
+- unsigned int short_preamble;
+-
+- mutex_lock(&mac->chip.mutex);
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- mac->updating_rts_rate = 0;
+- rts_rate = mac->rts_rate;
+- short_preamble = mac->short_preamble;
+- spin_unlock_irqrestore(&mac->lock, flags);
+-
+- zd_chip_set_rts_cts_rate_locked(&mac->chip, rts_rate, short_preamble);
+- mutex_unlock(&mac->chip.mutex);
++ struct zd_tx_skb_control_block *cb = (struct zd_tx_skb_control_block *)
++ skb->cb;
-- kfree(priv->scan_result);
-+ orinoco_bss_data_free(priv);
- free_netdev(dev);
+- try_enable_tx(mac);
++ ZD_ASSERT(cb->control != NULL);
++ memcpy(&status->control, cb->control, sizeof(status->control));
++ if (!success)
++ status->excessive_retries = 1;
++ clear_tx_skb_control_block(skb);
++ ieee80211_tx_status_irqsafe(hw, skb, status);
}
-@@ -3841,23 +3989,10 @@ static int orinoco_ioctl_setscan(struct net_device *dev,
- * we access scan variables in priv is critical.
- * o scan_inprogress : not touched by irq handler
- * o scan_mode : not touched by irq handler
-- * o scan_result : irq is strict producer, non-irq is strict
-- * consumer.
- * o scan_len : synchronised with scan_result
- * Before modifying anything on those variables, please think hard !
- * Jean II */
-
-- /* If there is still some left-over scan results, get rid of it */
-- if (priv->scan_result != NULL) {
-- /* What's likely is that a client did crash or was killed
-- * between triggering the scan request and reading the
-- * results, so we need to reset everything.
-- * Some clients that are too slow may suffer from that...
-- * Jean II */
-- kfree(priv->scan_result);
-- priv->scan_result = NULL;
-- }
+-static void set_basic_rates_work(struct work_struct *work)
++/**
++ * zd_mac_tx_failed - callback for failed frames
++ * @hw: the mac80211 hardware device
++ *
++ * This function is called if a frame couldn't be successfully
++ * transferred. The first frame from the tx queue will be selected and
++ * reported as an error to the upper layers.
++ */
++void zd_mac_tx_failed(struct ieee80211_hw *hw)
+ {
+- struct zd_mac *mac =
+- container_of(work, struct zd_mac, set_basic_rates_work.work);
+- unsigned long flags;
+- u16 basic_rates;
-
- /* Save flags */
- priv->scan_mode = srq->flags;
+- mutex_lock(&mac->chip.mutex);
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- mac->updating_basic_rates = 0;
+- basic_rates = mac->basic_rates;
+- spin_unlock_irqrestore(&mac->lock, flags);
+-
+- zd_chip_set_basic_rates_locked(&mac->chip, basic_rates);
+- mutex_unlock(&mac->chip.mutex);
++ struct sk_buff_head *q = &zd_hw_mac(hw)->ack_wait_queue;
++ struct sk_buff *skb;
++ struct ieee80211_tx_status status = {{0}};
-@@ -3905,169 +4040,125 @@ static int orinoco_ioctl_setscan(struct net_device *dev,
- return err;
+- try_enable_tx(mac);
++ skb = skb_dequeue(q);
++ if (skb == NULL)
++ return;
++ tx_status(hw, skb, &status, 0);
}
-+#define MAX_CUSTOM_LEN 64
-+
- /* Translate scan data returned from the card to a card independant
- * format that the Wireless Tools will understand - Jean II
- * Return message length or -errno for fatal errors */
--static inline int orinoco_translate_scan(struct net_device *dev,
-- char *buffer,
-- char *scan,
-- int scan_len)
-+static inline char *orinoco_translate_scan(struct net_device *dev,
-+ char *current_ev,
-+ char *end_buf,
-+ union hermes_scan_info *bss,
-+ unsigned int last_scanned)
- {
- struct orinoco_private *priv = netdev_priv(dev);
-- int offset; /* In the scan data */
-- union hermes_scan_info *atom;
-- int atom_len;
- u16 capabilities;
- u16 channel;
- struct iw_event iwe; /* Temporary buffer */
-- char * current_ev = buffer;
-- char * end_buf = buffer + IW_SCAN_MAX_DATA;
+-static void bssinfo_change(struct net_device *netdev, u32 changes)
+-{
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+- struct ieee80211softmac_device *softmac = ieee80211_priv(netdev);
+- struct ieee80211softmac_bss_info *bssinfo = &softmac->bssinfo;
+- int need_set_rts_cts = 0;
+- int need_set_rates = 0;
+- u16 basic_rates;
+- unsigned long flags;
-
-- switch (priv->firmware_type) {
-- case FIRMWARE_TYPE_AGERE:
-- atom_len = sizeof(struct agere_scan_apinfo);
-- offset = 0;
-- break;
-- case FIRMWARE_TYPE_SYMBOL:
-- /* Lack of documentation necessitates this hack.
-- * Different firmwares have 68 or 76 byte long atoms.
-- * We try modulo first. If the length divides by both,
-- * we check what would be the channel in the second
-- * frame for a 68-byte atom. 76-byte atoms have 0 there.
-- * Valid channel cannot be 0. */
-- if (scan_len % 76)
-- atom_len = 68;
-- else if (scan_len % 68)
-- atom_len = 76;
-- else if (scan_len >= 1292 && scan[68] == 0)
-- atom_len = 76;
-+ char *p;
-+ char custom[MAX_CUSTOM_LEN];
-+
-+ /* First entry *MUST* be the AP MAC address */
-+ iwe.cmd = SIOCGIWAP;
-+ iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
-+ memcpy(iwe.u.ap_addr.sa_data, bss->a.bssid, ETH_ALEN);
-+ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_ADDR_LEN);
-+
-+ /* Other entries will be displayed in the order we give them */
-+
-+ /* Add the ESSID */
-+ iwe.u.data.length = le16_to_cpu(bss->a.essid_len);
-+ if (iwe.u.data.length > 32)
-+ iwe.u.data.length = 32;
-+ iwe.cmd = SIOCGIWESSID;
-+ iwe.u.data.flags = 1;
-+ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, bss->a.essid);
+- dev_dbg_f(zd_mac_dev(mac), "changes: %x\n", changes);
+-
+- if (changes & IEEE80211SOFTMAC_BSSINFOCHG_SHORT_PREAMBLE) {
+- spin_lock_irqsave(&mac->lock, flags);
+- mac->short_preamble = bssinfo->short_preamble;
+- spin_unlock_irqrestore(&mac->lock, flags);
+- need_set_rts_cts = 1;
+- }
+-
+- if (changes & IEEE80211SOFTMAC_BSSINFOCHG_RATES) {
+- /* Set RTS rate to highest available basic rate */
+- u8 hi_rate = ieee80211softmac_highest_supported_rate(softmac,
+- &bssinfo->supported_rates, 1);
+- hi_rate = rate_to_zd_rate(hi_rate);
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- if (hi_rate != mac->rts_rate) {
+- mac->rts_rate = hi_rate;
+- need_set_rts_cts = 1;
+- }
+- spin_unlock_irqrestore(&mac->lock, flags);
+-
+- /* Set basic rates */
+- need_set_rates = 1;
+- if (bssinfo->supported_rates.count == 0) {
+- /* Allow the device to be flexible */
+- basic_rates = CR_RATES_80211B | CR_RATES_80211G;
++/**
++ * zd_mac_tx_to_dev - callback for USB layer
++ * @skb: a &sk_buff pointer
++ * @error: error value, 0 if transmission successful
++ *
++ * Informs the MAC layer that the frame has been successfully transferred to
++ * device. If an ACK is required and the transfer to the device has been
++ * successful, the packets are put on the @ack_wait_queue with
++ * the control set removed.
++ */
++void zd_mac_tx_to_dev(struct sk_buff *skb, int error)
++{
++ struct zd_tx_skb_control_block *cb =
++ (struct zd_tx_skb_control_block *)skb->cb;
++ struct ieee80211_hw *hw = cb->hw;
+
-+ /* Add mode */
-+ iwe.cmd = SIOCGIWMODE;
-+ capabilities = le16_to_cpu(bss->a.capabilities);
-+ if (capabilities & 0x3) {
-+ if (capabilities & 0x1)
-+ iwe.u.mode = IW_MODE_MASTER;
- else
-- atom_len = 68;
-- offset = 0;
-- break;
-- case FIRMWARE_TYPE_INTERSIL:
-- offset = 4;
-- if (priv->has_hostscan) {
-- atom_len = le16_to_cpup((__le16 *)scan);
-- /* Sanity check for atom_len */
-- if (atom_len < sizeof(struct prism2_scan_apinfo)) {
-- printk(KERN_ERR "%s: Invalid atom_len in scan data: %d\n",
-- dev->name, atom_len);
-- return -EIO;
++ if (likely(cb->control)) {
++ skb_pull(skb, sizeof(struct zd_ctrlset));
++ if (unlikely(error ||
++ (cb->control->flags & IEEE80211_TXCTL_NO_ACK)))
++ {
++ struct ieee80211_tx_status status = {{0}};
++ tx_status(hw, skb, &status, !error);
+ } else {
+- int i = 0;
+- basic_rates = 0;
+-
+- for (i = 0; i < bssinfo->supported_rates.count; i++) {
+- u16 rate = bssinfo->supported_rates.rates[i];
+- if ((rate & IEEE80211_BASIC_RATE_MASK) == 0)
+- continue;
++ struct sk_buff_head *q =
++ &zd_hw_mac(hw)->ack_wait_queue;
+
+- rate &= ~IEEE80211_BASIC_RATE_MASK;
+- basic_rates |= rate_to_cr_rate(rate);
- }
-- } else
-- atom_len = offsetof(struct prism2_scan_apinfo, atim);
++ skb_queue_tail(q, skb);
++ while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS)
++ zd_mac_tx_failed(hw);
+ }
+- spin_lock_irqsave(&mac->lock, flags);
+- mac->basic_rates = basic_rates;
+- spin_unlock_irqrestore(&mac->lock, flags);
+- }
+-
+- /* Schedule any changes we made above */
+-
+- spin_lock_irqsave(&mac->lock, flags);
+- if (need_set_rts_cts && !mac->updating_rts_rate) {
+- mac->updating_rts_rate = 1;
+- netif_stop_queue(mac->netdev);
+- queue_delayed_work(zd_workqueue, &mac->set_rts_cts_work, 0);
+- }
+- if (need_set_rates && !mac->updating_basic_rates) {
+- mac->updating_basic_rates = 1;
+- netif_stop_queue(mac->netdev);
+- queue_delayed_work(zd_workqueue, &mac->set_basic_rates_work,
+- 0);
+- }
+- spin_unlock_irqrestore(&mac->lock, flags);
+-}
+-
+-static void set_channel(struct net_device *netdev, u8 channel)
+-{
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+-
+- dev_dbg_f(zd_mac_dev(mac), "channel %d\n", channel);
+-
+- zd_chip_set_channel(&mac->chip, channel);
+-}
+-
+-int zd_mac_request_channel(struct zd_mac *mac, u8 channel)
+-{
+- unsigned long lock_flags;
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+-
+- if (ieee->iw_mode == IW_MODE_INFRA)
+- return -EPERM;
+-
+- spin_lock_irqsave(&mac->lock, lock_flags);
+- if (!zd_regdomain_supports_channel(mac->regdomain, channel)) {
+- spin_unlock_irqrestore(&mac->lock, lock_flags);
+- return -EINVAL;
+- }
+- mac->requested_channel = channel;
+- spin_unlock_irqrestore(&mac->lock, lock_flags);
+- if (netif_running(mac->netdev))
+- return zd_chip_set_channel(&mac->chip, channel);
+- else
+- return 0;
+-}
+-
+-u8 zd_mac_get_channel(struct zd_mac *mac)
+-{
+- u8 channel = zd_chip_get_channel(&mac->chip);
+-
+- dev_dbg_f(zd_mac_dev(mac), "channel %u\n", channel);
+- return channel;
+-}
+-
+-int zd_mac_set_mode(struct zd_mac *mac, u32 mode)
+-{
+- struct ieee80211_device *ieee;
+-
+- switch (mode) {
+- case IW_MODE_AUTO:
+- case IW_MODE_ADHOC:
+- case IW_MODE_INFRA:
+- mac->netdev->type = ARPHRD_ETHER;
+- break;
+- case IW_MODE_MONITOR:
+- mac->netdev->type = ARPHRD_IEEE80211_RADIOTAP;
- break;
- default:
-- return -EOPNOTSUPP;
+- dev_dbg_f(zd_mac_dev(mac), "wrong mode %u\n", mode);
+- return -EINVAL;
- }
-
-- /* Check that we got an whole number of atoms */
-- if ((scan_len - offset) % atom_len) {
-- printk(KERN_ERR "%s: Unexpected scan data length %d, "
-- "atom_len %d, offset %d\n", dev->name, scan_len,
-- atom_len, offset);
-- return -EIO;
+- ieee = zd_mac_to_ieee80211(mac);
+- ZD_ASSERT(!irqs_disabled());
+- spin_lock_irq(&ieee->lock);
+- ieee->iw_mode = mode;
+- spin_unlock_irq(&ieee->lock);
+-
+- if (netif_running(mac->netdev)) {
+- int r = set_rx_filter(mac);
+- if (r)
+- return r;
+- return set_sniffer(mac);
- }
-
-- /* Read the entries one by one */
-- for (; offset + atom_len <= scan_len; offset += atom_len) {
-- /* Get next atom */
-- atom = (union hermes_scan_info *) (scan + offset);
+- return 0;
+-}
-
-- /* First entry *MUST* be the AP MAC address */
-- iwe.cmd = SIOCGIWAP;
-- iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
-- memcpy(iwe.u.ap_addr.sa_data, atom->a.bssid, ETH_ALEN);
-- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_ADDR_LEN);
+-int zd_mac_get_mode(struct zd_mac *mac, u32 *mode)
+-{
+- unsigned long flags;
+- struct ieee80211_device *ieee;
-
-- /* Other entries will be displayed in the order we give them */
+- ieee = zd_mac_to_ieee80211(mac);
+- spin_lock_irqsave(&ieee->lock, flags);
+- *mode = ieee->iw_mode;
+- spin_unlock_irqrestore(&ieee->lock, flags);
+- return 0;
+-}
-
-- /* Add the ESSID */
-- iwe.u.data.length = le16_to_cpu(atom->a.essid_len);
-- if (iwe.u.data.length > 32)
-- iwe.u.data.length = 32;
-- iwe.cmd = SIOCGIWESSID;
-- iwe.u.data.flags = 1;
-- current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, atom->a.essid);
+-int zd_mac_get_range(struct zd_mac *mac, struct iw_range *range)
+-{
+- int i;
+- const struct channel_range *channel_range;
+- u8 regdomain;
-
-- /* Add mode */
-- iwe.cmd = SIOCGIWMODE;
-- capabilities = le16_to_cpu(atom->a.capabilities);
-- if (capabilities & 0x3) {
-- if (capabilities & 0x1)
-- iwe.u.mode = IW_MODE_MASTER;
-- else
-- iwe.u.mode = IW_MODE_ADHOC;
-- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_UINT_LEN);
-- }
+- memset(range, 0, sizeof(*range));
-
-- channel = atom->s.channel;
-- if ( (channel >= 1) && (channel <= NUM_CHANNELS) ) {
-- /* Add frequency */
-- iwe.cmd = SIOCGIWFREQ;
-- iwe.u.freq.m = channel_frequency[channel-1] * 100000;
-- iwe.u.freq.e = 1;
-- current_ev = iwe_stream_add_event(current_ev, end_buf,
-- &iwe, IW_EV_FREQ_LEN);
-- }
-+ iwe.u.mode = IW_MODE_ADHOC;
-+ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_UINT_LEN);
-+ }
-+
-+ channel = bss->s.channel;
-+ if ((channel >= 1) && (channel <= NUM_CHANNELS)) {
-+ /* Add frequency */
-+ iwe.cmd = SIOCGIWFREQ;
-+ iwe.u.freq.m = channel_frequency[channel-1] * 100000;
-+ iwe.u.freq.e = 1;
-+ current_ev = iwe_stream_add_event(current_ev, end_buf,
-+ &iwe, IW_EV_FREQ_LEN);
-+ }
-+
-+ /* Add quality statistics */
-+ iwe.cmd = IWEVQUAL;
-+ iwe.u.qual.updated = 0x10; /* no link quality */
-+ iwe.u.qual.level = (__u8) le16_to_cpu(bss->a.level) - 0x95;
-+ iwe.u.qual.noise = (__u8) le16_to_cpu(bss->a.noise) - 0x95;
-+ /* Wireless tools prior to 27.pre22 will show link quality
-+ * anyway, so we provide a reasonable value. */
-+ if (iwe.u.qual.level > iwe.u.qual.noise)
-+ iwe.u.qual.qual = iwe.u.qual.level - iwe.u.qual.noise;
-+ else
-+ iwe.u.qual.qual = 0;
-+ current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_QUAL_LEN);
-
-- /* Add quality statistics */
-- iwe.cmd = IWEVQUAL;
-- iwe.u.qual.updated = 0x10; /* no link quality */
-- iwe.u.qual.level = (__u8) le16_to_cpu(atom->a.level) - 0x95;
-- iwe.u.qual.noise = (__u8) le16_to_cpu(atom->a.noise) - 0x95;
-- /* Wireless tools prior to 27.pre22 will show link quality
-- * anyway, so we provide a reasonable value. */
-- if (iwe.u.qual.level > iwe.u.qual.noise)
-- iwe.u.qual.qual = iwe.u.qual.level - iwe.u.qual.noise;
-- else
-- iwe.u.qual.qual = 0;
-- current_ev = iwe_stream_add_event(current_ev, end_buf, &iwe, IW_EV_QUAL_LEN);
-+ /* Add encryption capability */
-+ iwe.cmd = SIOCGIWENCODE;
-+ if (capabilities & 0x10)
-+ iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
-+ else
-+ iwe.u.data.flags = IW_ENCODE_DISABLED;
-+ iwe.u.data.length = 0;
-+ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, bss->a.essid);
-+
-+ /* Add EXTRA: Age to display seconds since last beacon/probe response
-+ * for given network. */
-+ iwe.cmd = IWEVCUSTOM;
-+ p = custom;
-+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
-+ " Last beacon: %dms ago",
-+ jiffies_to_msecs(jiffies - last_scanned));
-+ iwe.u.data.length = p - custom;
-+ if (iwe.u.data.length)
-+ current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, custom);
-+
-+ /* Bit rate is not available in Lucent/Agere firmwares */
-+ if (priv->firmware_type != FIRMWARE_TYPE_AGERE) {
-+ char *current_val = current_ev + IW_EV_LCP_LEN;
-+ int i;
-+ int step;
-
-- /* Add encryption capability */
-- iwe.cmd = SIOCGIWENCODE;
-- if (capabilities & 0x10)
-- iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
-+ if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL)
-+ step = 2;
- else
-- iwe.u.data.flags = IW_ENCODE_DISABLED;
-- iwe.u.data.length = 0;
-- current_ev = iwe_stream_add_point(current_ev, end_buf, &iwe, atom->a.essid);
+- /* FIXME: Not so important and depends on the mode. For 802.11g
+- * usually this value is used. It seems to be that Bit/s number is
+- * given here.
+- */
+- range->throughput = 27 * 1000 * 1000;
-
-- /* Bit rate is not available in Lucent/Agere firmwares */
-- if (priv->firmware_type != FIRMWARE_TYPE_AGERE) {
-- char * current_val = current_ev + IW_EV_LCP_LEN;
-- int i;
-- int step;
+- range->max_qual.qual = 100;
+- range->max_qual.level = 100;
-
-- if (priv->firmware_type == FIRMWARE_TYPE_SYMBOL)
-- step = 2;
-- else
-- step = 1;
+- /* FIXME: Needs still to be tuned. */
+- range->avg_qual.qual = 71;
+- range->avg_qual.level = 80;
-
-- iwe.cmd = SIOCGIWRATE;
-- /* Those two flags are ignored... */
-- iwe.u.bitrate.fixed = iwe.u.bitrate.disabled = 0;
-- /* Max 10 values */
-- for (i = 0; i < 10; i += step) {
-- /* NULL terminated */
-- if (atom->p.rates[i] == 0x0)
-- break;
-- /* Bit rate given in 500 kb/s units (+ 0x80) */
-- iwe.u.bitrate.value = ((atom->p.rates[i] & 0x7f) * 500000);
-- current_val = iwe_stream_add_value(current_ev, current_val,
-- end_buf, &iwe,
-- IW_EV_PARAM_LEN);
-- }
-- /* Check if we added any event */
-- if ((current_val - current_ev) > IW_EV_LCP_LEN)
-- current_ev = current_val;
-+ step = 1;
-+
-+ iwe.cmd = SIOCGIWRATE;
-+ /* Those two flags are ignored... */
-+ iwe.u.bitrate.fixed = iwe.u.bitrate.disabled = 0;
-+ /* Max 10 values */
-+ for (i = 0; i < 10; i += step) {
-+ /* NULL terminated */
-+ if (bss->p.rates[i] == 0x0)
-+ break;
-+ /* Bit rate given in 500 kb/s units (+ 0x80) */
-+ iwe.u.bitrate.value = ((bss->p.rates[i] & 0x7f) * 500000);
-+ current_val = iwe_stream_add_value(current_ev, current_val,
-+ end_buf, &iwe,
-+ IW_EV_PARAM_LEN);
- }
+- /* FIXME: depends on standard? */
+- range->min_rts = 256;
+- range->max_rts = 2346;
-
-- /* The other data in the scan result are not really
-- * interesting, so for now drop it - Jean II */
-+ /* Check if we added any event */
-+ if ((current_val - current_ev) > IW_EV_LCP_LEN)
-+ current_ev = current_val;
+- range->min_frag = MIN_FRAG_THRESHOLD;
+- range->max_frag = MAX_FRAG_THRESHOLD;
+-
+- range->max_encoding_tokens = WEP_KEYS;
+- range->num_encoding_sizes = 2;
+- range->encoding_size[0] = 5;
+- range->encoding_size[1] = WEP_KEY_LEN;
+-
+- range->we_version_compiled = WIRELESS_EXT;
+- range->we_version_source = 20;
+-
+- range->enc_capa = IW_ENC_CAPA_WPA | IW_ENC_CAPA_WPA2 |
+- IW_ENC_CAPA_CIPHER_TKIP | IW_ENC_CAPA_CIPHER_CCMP;
+-
+- ZD_ASSERT(!irqs_disabled());
+- spin_lock_irq(&mac->lock);
+- regdomain = mac->regdomain;
+- spin_unlock_irq(&mac->lock);
+- channel_range = zd_channel_range(regdomain);
+-
+- range->num_channels = channel_range->end - channel_range->start;
+- range->old_num_channels = range->num_channels;
+- range->num_frequency = range->num_channels;
+- range->old_num_frequency = range->num_frequency;
+-
+- for (i = 0; i < range->num_frequency; i++) {
+- struct iw_freq *freq = &range->freq[i];
+- freq->i = channel_range->start + i;
+- zd_channel_to_freq(freq, freq->i);
++ } else {
++ kfree_tx_skb(skb);
}
-- return current_ev - buffer;
-+
-+ return current_ev;
+-
+- return 0;
}
- /* Return results of a scan */
-@@ -4077,68 +4168,45 @@ static int orinoco_ioctl_getscan(struct net_device *dev,
- char *extra)
+ static int zd_calc_tx_length_us(u8 *service, u8 zd_rate, u16 tx_length)
{
- struct orinoco_private *priv = netdev_priv(dev);
-+ bss_element *bss;
- int err = 0;
- unsigned long flags;
-+ char *current_ev = extra;
-
- if (orinoco_lock(priv, &flags) != 0)
- return -EBUSY;
-
-- /* If no results yet, ask to try again later */
-- if (priv->scan_result == NULL) {
-- if (priv->scan_inprogress)
-- /* Important note : we don't want to block the caller
-- * until results are ready for various reasons.
-- * First, managing wait queues is complex and racy.
-- * Second, we grab some rtnetlink lock before comming
-- * here (in dev_ioctl()).
-- * Third, we generate an Wireless Event, so the
-- * caller can wait itself on that - Jean II */
-- err = -EAGAIN;
-- else
-- /* Client error, no scan results...
-- * The caller need to restart the scan. */
-- err = -ENODATA;
-- } else {
-- /* We have some results to push back to user space */
+ /* ZD_PURE_RATE() must be used to remove the modulation type flag of
+- * the zd-rate values. */
++ * the zd-rate values.
++ */
+ static const u8 rate_divisor[] = {
+- [ZD_PURE_RATE(ZD_CCK_RATE_1M)] = 1,
+- [ZD_PURE_RATE(ZD_CCK_RATE_2M)] = 2,
-
-- /* Translate to WE format */
-- int ret = orinoco_translate_scan(dev, extra,
-- priv->scan_result,
-- priv->scan_len);
+- /* bits must be doubled */
+- [ZD_PURE_RATE(ZD_CCK_RATE_5_5M)] = 11,
-
-- if (ret < 0) {
-- err = ret;
-- kfree(priv->scan_result);
-- priv->scan_result = NULL;
-- } else {
-- srq->length = ret;
-+ if (priv->scan_inprogress) {
-+ /* Important note : we don't want to block the caller
-+ * until results are ready for various reasons.
-+ * First, managing wait queues is complex and racy.
-+ * Second, we grab some rtnetlink lock before comming
-+ * here (in dev_ioctl()).
-+ * Third, we generate an Wireless Event, so the
-+ * caller can wait itself on that - Jean II */
-+ err = -EAGAIN;
-+ goto out;
-+ }
+- [ZD_PURE_RATE(ZD_CCK_RATE_11M)] = 11,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_6M)] = 6,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_9M)] = 9,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_12M)] = 12,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_18M)] = 18,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_24M)] = 24,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_36M)] = 36,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_48M)] = 48,
+- [ZD_PURE_RATE(ZD_OFDM_RATE_54M)] = 54,
++ [ZD_PURE_RATE(ZD_CCK_RATE_1M)] = 1,
++ [ZD_PURE_RATE(ZD_CCK_RATE_2M)] = 2,
++ /* Bits must be doubled. */
++ [ZD_PURE_RATE(ZD_CCK_RATE_5_5M)] = 11,
++ [ZD_PURE_RATE(ZD_CCK_RATE_11M)] = 11,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_6M)] = 6,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_9M)] = 9,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_12M)] = 12,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_18M)] = 18,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_24M)] = 24,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_36M)] = 36,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_48M)] = 48,
++ [ZD_PURE_RATE(ZD_OFDM_RATE_54M)] = 54,
+ };
-- /* Return flags */
-- srq->flags = (__u16) priv->scan_mode;
-+ list_for_each_entry(bss, &priv->bss_list, list) {
-+ /* Translate to WE format this entry */
-+ current_ev = orinoco_translate_scan(dev, current_ev,
-+ extra + srq->length,
-+ &bss->bss,
-+ bss->last_scanned);
+ u32 bits = (u32)tx_length * 8;
+@@ -764,34 +451,10 @@ static int zd_calc_tx_length_us(u8 *service, u8 zd_rate, u16 tx_length)
+ return bits/divisor;
+ }
-- /* In any case, Scan results will be cleaned up in the
-- * reset function and when exiting the driver.
-- * The person triggering the scanning may never come to
-- * pick the results, so we need to do it in those places.
-- * Jean II */
+-static void cs_set_modulation(struct zd_mac *mac, struct zd_ctrlset *cs,
+- struct ieee80211_hdr_4addr *hdr)
+-{
+- struct ieee80211softmac_device *softmac = ieee80211_priv(mac->netdev);
+- u16 ftype = WLAN_FC_GET_TYPE(le16_to_cpu(hdr->frame_ctl));
+- u8 rate;
+- int is_mgt = (ftype == IEEE80211_FTYPE_MGMT) != 0;
+- int is_multicast = is_multicast_ether_addr(hdr->addr1);
+- int short_preamble = ieee80211softmac_short_preamble_ok(softmac,
+- is_multicast, is_mgt);
-
--#ifdef SCAN_SINGLE_READ
-- /* If you enable this option, only one client (the first
-- * one) will be able to read the result (and only one
-- * time). If there is multiple concurent clients that
-- * want to read scan results, this behavior is not
-- * advisable - Jean II */
-- kfree(priv->scan_result);
-- priv->scan_result = NULL;
--#endif /* SCAN_SINGLE_READ */
-- /* Here, if too much time has elapsed since last scan,
-- * we may want to clean up scan results... - Jean II */
-+ /* Check if there is space for one more entry */
-+ if ((extra + srq->length - current_ev) <= IW_EV_ADDR_LEN) {
-+ /* Ask user space to try again with a bigger buffer */
-+ err = -E2BIG;
-+ goto out;
- }
+- rate = ieee80211softmac_suggest_txrate(softmac, is_multicast, is_mgt);
+- cs->modulation = rate_to_zd_rate(rate);
-
-- /* Scan is no longer in progress */
-- priv->scan_inprogress = 0;
- }
--
-+
-+ srq->length = (current_ev - extra);
-+ srq->flags = (__u16) priv->scan_mode;
-+
-+out:
- orinoco_unlock(priv, &flags);
- return err;
- }
-diff --git a/drivers/net/wireless/orinoco.h b/drivers/net/wireless/orinoco.h
-index 4720fb2..c6b1858 100644
---- a/drivers/net/wireless/orinoco.h
-+++ b/drivers/net/wireless/orinoco.h
-@@ -36,6 +36,12 @@ typedef enum {
- FIRMWARE_TYPE_SYMBOL
- } fwtype_t;
-
-+typedef struct {
-+ union hermes_scan_info bss;
-+ unsigned long last_scanned;
-+ struct list_head list;
-+} bss_element;
-+
- struct orinoco_private {
- void *card; /* Pointer to card dependent structure */
- int (*hard_reset)(struct orinoco_private *);
-@@ -105,10 +111,12 @@ struct orinoco_private {
- int promiscuous, mc_count;
-
- /* Scanning support */
-+ struct list_head bss_list;
-+ struct list_head bss_free_list;
-+ bss_element *bss_data;
-+
- int scan_inprogress; /* Scan pending... */
- u32 scan_mode; /* Type of scan done */
-- char * scan_result; /* Result of previous scan */
-- int scan_len; /* Lenght of result */
- };
+- /* Set short preamble bit when appropriate */
+- if (short_preamble && ZD_MODULATION_TYPE(cs->modulation) == ZD_CCK
+- && cs->modulation != ZD_CCK_RATE_1M)
+- cs->modulation |= ZD_CCK_PREA_SHORT;
+-}
+-
+ static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
+- struct ieee80211_hdr_4addr *header)
++ struct ieee80211_hdr *header, u32 flags)
+ {
+- struct ieee80211softmac_device *softmac = ieee80211_priv(mac->netdev);
+- unsigned int tx_length = le16_to_cpu(cs->tx_length);
+- u16 fctl = le16_to_cpu(header->frame_ctl);
+- u16 ftype = WLAN_FC_GET_TYPE(fctl);
+- u16 stype = WLAN_FC_GET_STYPE(fctl);
++ u16 fctl = le16_to_cpu(header->frame_control);
- #ifdef ORINOCO_DEBUG
-diff --git a/drivers/net/wireless/p54common.c b/drivers/net/wireless/p54common.c
-index 1437db0..5cda49a 100644
---- a/drivers/net/wireless/p54common.c
-+++ b/drivers/net/wireless/p54common.c
-@@ -54,7 +54,7 @@ void p54_parse_firmware(struct ieee80211_hw *dev, const struct firmware *fw)
- u32 code = le32_to_cpu(bootrec->code);
- switch (code) {
- case BR_CODE_COMPONENT_ID:
-- switch (be32_to_cpu(*bootrec->data)) {
-+ switch (be32_to_cpu(*(__be32 *)bootrec->data)) {
- case FW_FMAC:
- printk(KERN_INFO "p54: FreeMAC firmware\n");
- break;
-@@ -78,14 +78,14 @@ void p54_parse_firmware(struct ieee80211_hw *dev, const struct firmware *fw)
- fw_version = (unsigned char*)bootrec->data;
- break;
- case BR_CODE_DESCR:
-- priv->rx_start = le32_to_cpu(bootrec->data[1]);
-+ priv->rx_start = le32_to_cpu(((__le32 *)bootrec->data)[1]);
- /* FIXME add sanity checking */
-- priv->rx_end = le32_to_cpu(bootrec->data[2]) - 0x3500;
-+ priv->rx_end = le32_to_cpu(((__le32 *)bootrec->data)[2]) - 0x3500;
- break;
- case BR_CODE_EXPOSED_IF:
- exp_if = (struct bootrec_exp_if *) bootrec->data;
- for (i = 0; i < (len * sizeof(*exp_if) / 4); i++)
-- if (exp_if[i].if_id == 0x1a)
-+ if (exp_if[i].if_id == cpu_to_le16(0x1a))
- priv->fw_var = le16_to_cpu(exp_if[i].variant);
- break;
- case BR_CODE_DEPENDENT_IF:
-@@ -314,6 +314,7 @@ static void p54_rx_data(struct ieee80211_hw *dev, struct sk_buff *skb)
- rx_status.phymode = MODE_IEEE80211G;
- rx_status.antenna = hdr->antenna;
- rx_status.mactime = le64_to_cpu(hdr->timestamp);
-+ rx_status.flag |= RX_FLAG_TSFT;
+ /*
+ * CONTROL TODO:
+@@ -802,7 +465,7 @@ static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
+ cs->control = 0;
- skb_pull(skb, sizeof(*hdr));
- skb_trim(skb, le16_to_cpu(hdr->len));
-@@ -374,7 +375,7 @@ static void p54_rx_frame_sent(struct ieee80211_hw *dev, struct sk_buff *skb)
- if ((entry_hdr->magic1 & cpu_to_le16(0x4000)) != 0)
- pad = entry_data->align[0];
+ /* First fragment */
+- if (WLAN_GET_SEQ_FRAG(le16_to_cpu(header->seq_ctl)) == 0)
++ if (flags & IEEE80211_TXCTL_FIRST_FRAGMENT)
+ cs->control |= ZD_CS_NEED_RANDOM_BACKOFF;
-- if (!status.control.flags & IEEE80211_TXCTL_NO_ACK) {
-+ if (!(status.control.flags & IEEE80211_TXCTL_NO_ACK)) {
- if (!(payload->status & 0x01))
- status.flags |= IEEE80211_TX_STATUS_ACK;
- else
-@@ -853,7 +854,8 @@ static int p54_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
- return ret;
- }
+ /* Multicast */
+@@ -810,54 +473,37 @@ static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
+ cs->control |= ZD_CS_MULTICAST;
--static int p54_config_interface(struct ieee80211_hw *dev, int if_id,
-+static int p54_config_interface(struct ieee80211_hw *dev,
-+ struct ieee80211_vif *vif,
- struct ieee80211_if_conf *conf)
- {
- struct p54_common *priv = dev->priv;
-diff --git a/drivers/net/wireless/p54pci.c b/drivers/net/wireless/p54pci.c
-index 410b543..fa52772 100644
---- a/drivers/net/wireless/p54pci.c
-+++ b/drivers/net/wireless/p54pci.c
-@@ -48,10 +48,10 @@ static int p54p_upload_firmware(struct ieee80211_hw *dev)
- const struct firmware *fw_entry = NULL;
- __le32 reg;
- int err;
-- u32 *data;
-+ __le32 *data;
- u32 remains, left, device_addr;
+ /* PS-POLL */
+- if (ftype == IEEE80211_FTYPE_CTL && stype == IEEE80211_STYPE_PSPOLL)
++ if ((fctl & (IEEE80211_FCTL_FTYPE|IEEE80211_FCTL_STYPE)) ==
++ (IEEE80211_FTYPE_CTL|IEEE80211_STYPE_PSPOLL))
+ cs->control |= ZD_CS_PS_POLL_FRAME;
-- P54P_WRITE(int_enable, 0);
-+ P54P_WRITE(int_enable, cpu_to_le32(0));
- P54P_READ(int_enable);
- udelay(10);
+- /* Unicast data frames over the threshold should have RTS */
+- if (!is_multicast_ether_addr(header->addr1) &&
+- ftype != IEEE80211_FTYPE_MGMT &&
+- tx_length > zd_netdev_ieee80211(mac->netdev)->rts)
++ if (flags & IEEE80211_TXCTL_USE_RTS_CTS)
+ cs->control |= ZD_CS_RTS;
-@@ -82,7 +82,7 @@ static int p54p_upload_firmware(struct ieee80211_hw *dev)
+- /* Use CTS-to-self protection if required */
+- if (ZD_MODULATION_TYPE(cs->modulation) == ZD_OFDM &&
+- ieee80211softmac_protection_needed(softmac)) {
+- /* FIXME: avoid sending RTS *and* self-CTS, is that correct? */
+- cs->control &= ~ZD_CS_RTS;
++ if (flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
+ cs->control |= ZD_CS_SELF_CTS;
+- }
- p54_parse_firmware(dev, fw_entry);
+ /* FIXME: Management frame? */
+ }
-- data = (u32 *) fw_entry->data;
-+ data = (__le32 *) fw_entry->data;
- remains = fw_entry->size;
- device_addr = ISL38XX_DEV_FIRMWARE_ADDR;
- while (remains) {
-@@ -141,6 +141,7 @@ static irqreturn_t p54p_simple_interrupt(int irq, void *dev_id)
- static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ static int fill_ctrlset(struct zd_mac *mac,
+- struct ieee80211_txb *txb,
+- int frag_num)
++ struct sk_buff *skb,
++ struct ieee80211_tx_control *control)
{
- struct p54p_priv *priv = dev->priv;
-+ struct p54p_ring_control *ring_control = priv->ring_control;
- int err;
- struct p54_control_hdr *hdr;
- void *eeprom;
-@@ -164,8 +165,8 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
- goto out;
- }
+ int r;
+- struct sk_buff *skb = txb->fragments[frag_num];
+- struct ieee80211_hdr_4addr *hdr =
+- (struct ieee80211_hdr_4addr *) skb->data;
+- unsigned int frag_len = skb->len + IEEE80211_FCS_LEN;
+- unsigned int next_frag_len;
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
++ unsigned int frag_len = skb->len + FCS_LEN;
+ unsigned int packet_length;
+ struct zd_ctrlset *cs = (struct zd_ctrlset *)
+ skb_push(skb, sizeof(struct zd_ctrlset));
-- memset(priv->ring_control, 0, sizeof(*priv->ring_control));
-- P54P_WRITE(ring_control_base, priv->ring_control_dma);
-+ memset(ring_control, 0, sizeof(*ring_control));
-+ P54P_WRITE(ring_control_base, cpu_to_le32(priv->ring_control_dma));
- P54P_READ(ring_control_base);
- udelay(10);
+- if (frag_num+1 < txb->nr_frags) {
+- next_frag_len = txb->fragments[frag_num+1]->len +
+- IEEE80211_FCS_LEN;
+- } else {
+- next_frag_len = 0;
+- }
+ ZD_ASSERT(frag_len <= 0xffff);
+- ZD_ASSERT(next_frag_len <= 0xffff);
-@@ -194,14 +195,14 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
- tx_mapping = pci_map_single(priv->pdev, (void *)hdr,
- EEPROM_READBACK_LEN, PCI_DMA_TODEVICE);
+- cs_set_modulation(mac, cs, hdr);
++ cs->modulation = control->tx_rate;
-- priv->ring_control->rx_mgmt[0].host_addr = cpu_to_le32(rx_mapping);
-- priv->ring_control->rx_mgmt[0].len = cpu_to_le16(0x2010);
-- priv->ring_control->tx_data[0].host_addr = cpu_to_le32(tx_mapping);
-- priv->ring_control->tx_data[0].device_addr = hdr->req_id;
-- priv->ring_control->tx_data[0].len = cpu_to_le16(EEPROM_READBACK_LEN);
-+ ring_control->rx_mgmt[0].host_addr = cpu_to_le32(rx_mapping);
-+ ring_control->rx_mgmt[0].len = cpu_to_le16(0x2010);
-+ ring_control->tx_data[0].host_addr = cpu_to_le32(tx_mapping);
-+ ring_control->tx_data[0].device_addr = hdr->req_id;
-+ ring_control->tx_data[0].len = cpu_to_le16(EEPROM_READBACK_LEN);
+ cs->tx_length = cpu_to_le16(frag_len);
-- priv->ring_control->host_idx[2] = cpu_to_le32(1);
-- priv->ring_control->host_idx[1] = cpu_to_le32(1);
-+ ring_control->host_idx[2] = cpu_to_le32(1);
-+ ring_control->host_idx[1] = cpu_to_le32(1);
+- cs_set_control(mac, cs, hdr);
++ cs_set_control(mac, cs, hdr, control->flags);
- wmb();
- mdelay(100);
-@@ -215,8 +216,8 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
- pci_unmap_single(priv->pdev, rx_mapping,
- 0x2010, PCI_DMA_FROMDEVICE);
+ packet_length = frag_len + sizeof(struct zd_ctrlset) + 10;
+ ZD_ASSERT(packet_length <= 0xffff);
+@@ -886,419 +532,417 @@ static int fill_ctrlset(struct zd_mac *mac,
+ if (r < 0)
+ return r;
+ cs->current_length = cpu_to_le16(r);
+-
+- if (next_frag_len == 0) {
+- cs->next_frame_length = 0;
+- } else {
+- r = zd_calc_tx_length_us(NULL, ZD_RATE(cs->modulation),
+- next_frag_len);
+- if (r < 0)
+- return r;
+- cs->next_frame_length = cpu_to_le16(r);
+- }
++ cs->next_frame_length = 0;
-- alen = le16_to_cpu(priv->ring_control->rx_mgmt[0].len);
-- if (le32_to_cpu(priv->ring_control->device_idx[2]) != 1 ||
-+ alen = le16_to_cpu(ring_control->rx_mgmt[0].len);
-+ if (le32_to_cpu(ring_control->device_idx[2]) != 1 ||
- alen < 0x10) {
- printk(KERN_ERR "%s (prism54pci): Cannot read eeprom!\n",
- pci_name(priv->pdev));
-@@ -228,7 +229,7 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
+ return 0;
+ }
- out:
- kfree(eeprom);
-- P54P_WRITE(int_enable, 0);
-+ P54P_WRITE(int_enable, cpu_to_le32(0));
- P54P_READ(int_enable);
- udelay(10);
- free_irq(priv->pdev->irq, priv);
-@@ -239,16 +240,17 @@ static int p54p_read_eeprom(struct ieee80211_hw *dev)
- static void p54p_refill_rx_ring(struct ieee80211_hw *dev)
+-static int zd_mac_tx(struct zd_mac *mac, struct ieee80211_txb *txb, int pri)
++/**
++ * zd_op_tx - transmits a network frame to the device
++ *
++ * @hw: mac80211 hardware device
++ * @skb: socket buffer
++ * @control: the control structure
++ *
++ * This function transmits an IEEE 802.11 network frame to the device. The
++ * control block of the skbuff will be initialized. If necessary, the incoming
++ * mac80211 queues will be stopped.
++ */
++static int zd_op_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
++ struct ieee80211_tx_control *control)
{
- struct p54p_priv *priv = dev->priv;
-+ struct p54p_ring_control *ring_control = priv->ring_control;
- u32 limit, host_idx, idx;
+- int i, r;
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
++ struct zd_mac *mac = zd_hw_mac(hw);
++ int r;
-- host_idx = le32_to_cpu(priv->ring_control->host_idx[0]);
-+ host_idx = le32_to_cpu(ring_control->host_idx[0]);
- limit = host_idx;
-- limit -= le32_to_cpu(priv->ring_control->device_idx[0]);
-- limit = ARRAY_SIZE(priv->ring_control->rx_data) - limit;
-+ limit -= le32_to_cpu(ring_control->device_idx[0]);
-+ limit = ARRAY_SIZE(ring_control->rx_data) - limit;
+- for (i = 0; i < txb->nr_frags; i++) {
+- struct sk_buff *skb = txb->fragments[i];
++ r = fill_ctrlset(mac, skb, control);
++ if (r)
++ return r;
-- idx = host_idx % ARRAY_SIZE(priv->ring_control->rx_data);
-+ idx = host_idx % ARRAY_SIZE(ring_control->rx_data);
- while (limit-- > 1) {
-- struct p54p_desc *desc = &priv->ring_control->rx_data[idx];
-+ struct p54p_desc *desc = &ring_control->rx_data[idx];
+- r = fill_ctrlset(mac, txb, i);
+- if (r) {
+- ieee->stats.tx_dropped++;
+- return r;
+- }
+- r = zd_usb_tx(&mac->chip.usb, skb->data, skb->len);
+- if (r) {
+- ieee->stats.tx_dropped++;
+- return r;
+- }
++ r = init_tx_skb_control_block(skb, hw, control);
++ if (r)
++ return r;
++ r = zd_usb_tx(&mac->chip.usb, skb);
++ if (r) {
++ clear_tx_skb_control_block(skb);
++ return r;
+ }
+-
+- /* FIXME: shouldn't this be handled by the upper layers? */
+- mac->netdev->trans_start = jiffies;
+-
+- ieee80211_txb_free(txb);
+ return 0;
+ }
- if (!desc->host_addr) {
- struct sk_buff *skb;
-@@ -270,22 +272,23 @@ static void p54p_refill_rx_ring(struct ieee80211_hw *dev)
+-struct zd_rt_hdr {
+- struct ieee80211_radiotap_header rt_hdr;
+- u8 rt_flags;
+- u8 rt_rate;
+- u16 rt_channel;
+- u16 rt_chbitmask;
+-} __attribute__((packed));
+-
+-static void fill_rt_header(void *buffer, struct zd_mac *mac,
+- const struct ieee80211_rx_stats *stats,
+- const struct rx_status *status)
+-{
+- struct zd_rt_hdr *hdr = buffer;
+-
+- hdr->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
+- hdr->rt_hdr.it_pad = 0;
+- hdr->rt_hdr.it_len = cpu_to_le16(sizeof(struct zd_rt_hdr));
+- hdr->rt_hdr.it_present = cpu_to_le32((1 << IEEE80211_RADIOTAP_FLAGS) |
+- (1 << IEEE80211_RADIOTAP_CHANNEL) |
+- (1 << IEEE80211_RADIOTAP_RATE));
+-
+- hdr->rt_flags = 0;
+- if (status->decryption_type & (ZD_RX_WEP64|ZD_RX_WEP128|ZD_RX_WEP256))
+- hdr->rt_flags |= IEEE80211_RADIOTAP_F_WEP;
+-
+- hdr->rt_rate = stats->rate / 5;
+-
+- /* FIXME: 802.11a */
+- hdr->rt_channel = cpu_to_le16(ieee80211chan2mhz(
+- _zd_chip_get_channel(&mac->chip)));
+- hdr->rt_chbitmask = cpu_to_le16(IEEE80211_CHAN_2GHZ |
+- ((status->frame_status & ZD_RX_FRAME_MODULATION_MASK) ==
+- ZD_RX_OFDM ? IEEE80211_CHAN_OFDM : IEEE80211_CHAN_CCK));
+-}
+-
+-/* Returns 1 if the data packet is for us and 0 otherwise. */
+-static int is_data_packet_for_us(struct ieee80211_device *ieee,
+- struct ieee80211_hdr_4addr *hdr)
+-{
+- struct net_device *netdev = ieee->dev;
+- u16 fc = le16_to_cpu(hdr->frame_ctl);
+-
+- ZD_ASSERT(WLAN_FC_GET_TYPE(fc) == IEEE80211_FTYPE_DATA);
+-
+- switch (ieee->iw_mode) {
+- case IW_MODE_ADHOC:
+- if ((fc & (IEEE80211_FCTL_TODS|IEEE80211_FCTL_FROMDS)) != 0 ||
+- compare_ether_addr(hdr->addr3, ieee->bssid) != 0)
+- return 0;
+- break;
+- case IW_MODE_AUTO:
+- case IW_MODE_INFRA:
+- if ((fc & (IEEE80211_FCTL_TODS|IEEE80211_FCTL_FROMDS)) !=
+- IEEE80211_FCTL_FROMDS ||
+- compare_ether_addr(hdr->addr2, ieee->bssid) != 0)
+- return 0;
+- break;
+- default:
+- ZD_ASSERT(ieee->iw_mode != IW_MODE_MONITOR);
+- return 0;
+- }
+-
+- return compare_ether_addr(hdr->addr1, netdev->dev_addr) == 0 ||
+- (is_multicast_ether_addr(hdr->addr1) &&
+- compare_ether_addr(hdr->addr3, netdev->dev_addr) != 0) ||
+- (netdev->flags & IFF_PROMISC);
+-}
+-
+-/* Filters received packets. The function returns 1 if the packet should be
+- * forwarded to ieee80211_rx(). If the packet should be ignored the function
+- * returns 0. If an invalid packet is found the function returns -EINVAL.
++/**
++ * filter_ack - filters incoming packets for acknowledgements
++ * @dev: the mac80211 device
++ * @rx_hdr: received header
++ * @stats: the status for the received packet
+ *
+- * The function calls ieee80211_rx_mgt() directly.
++ * This functions looks for ACK packets and tries to match them with the
++ * frames in the tx queue. If a match is found the frame will be dequeued and
++ * the upper layers is informed about the successful transmission. If
++ * mac80211 queues have been stopped and the number of frames still to be
++ * transmitted is low the queues will be opened again.
+ *
+- * It has been based on ieee80211_rx_any.
++ * Returns 1 if the frame was an ACK, 0 if it was ignored.
+ */
+-static int filter_rx(struct ieee80211_device *ieee,
+- const u8 *buffer, unsigned int length,
+- struct ieee80211_rx_stats *stats)
++static int filter_ack(struct ieee80211_hw *hw, struct ieee80211_hdr *rx_hdr,
++ struct ieee80211_rx_status *stats)
+ {
+- struct ieee80211_hdr_4addr *hdr;
+- u16 fc;
+-
+- if (ieee->iw_mode == IW_MODE_MONITOR)
+- return 1;
+-
+- hdr = (struct ieee80211_hdr_4addr *)buffer;
+- fc = le16_to_cpu(hdr->frame_ctl);
+- if ((fc & IEEE80211_FCTL_VERS) != 0)
+- return -EINVAL;
++ u16 fc = le16_to_cpu(rx_hdr->frame_control);
++ struct sk_buff *skb;
++ struct sk_buff_head *q;
++ unsigned long flags;
- idx++;
- host_idx++;
-- idx %= ARRAY_SIZE(priv->ring_control->rx_data);
-+ idx %= ARRAY_SIZE(ring_control->rx_data);
- }
+- switch (WLAN_FC_GET_TYPE(fc)) {
+- case IEEE80211_FTYPE_MGMT:
+- if (length < sizeof(struct ieee80211_hdr_3addr))
+- return -EINVAL;
+- ieee80211_rx_mgt(ieee, hdr, stats);
+- return 0;
+- case IEEE80211_FTYPE_CTL:
++ if ((fc & (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE)) !=
++ (IEEE80211_FTYPE_CTL | IEEE80211_STYPE_ACK))
+ return 0;
+- case IEEE80211_FTYPE_DATA:
+- /* Ignore invalid short buffers */
+- if (length < sizeof(struct ieee80211_hdr_3addr))
+- return -EINVAL;
+- return is_data_packet_for_us(ieee, hdr);
+- }
- wmb();
-- priv->ring_control->host_idx[0] = cpu_to_le32(host_idx);
-+ ring_control->host_idx[0] = cpu_to_le32(host_idx);
+- return -EINVAL;
++ q = &zd_hw_mac(hw)->ack_wait_queue;
++ spin_lock_irqsave(&q->lock, flags);
++ for (skb = q->next; skb != (struct sk_buff *)q; skb = skb->next) {
++ struct ieee80211_hdr *tx_hdr;
++
++ tx_hdr = (struct ieee80211_hdr *)skb->data;
++ if (likely(!compare_ether_addr(tx_hdr->addr2, rx_hdr->addr1)))
++ {
++ struct ieee80211_tx_status status = {{0}};
++ status.flags = IEEE80211_TX_STATUS_ACK;
++ status.ack_signal = stats->ssi;
++ __skb_unlink(skb, q);
++ tx_status(hw, skb, &status, 1);
++ goto out;
++ }
++ }
++out:
++ spin_unlock_irqrestore(&q->lock, flags);
++ return 1;
}
- static irqreturn_t p54p_interrupt(int irq, void *dev_id)
+-static void update_qual_rssi(struct zd_mac *mac,
+- const u8 *buffer, unsigned int length,
+- u8 qual_percent, u8 rssi_percent)
++int zd_mac_rx(struct ieee80211_hw *hw, const u8 *buffer, unsigned int length)
{
- struct ieee80211_hw *dev = dev_id;
- struct p54p_priv *priv = dev->priv;
-+ struct p54p_ring_control *ring_control = priv->ring_control;
- __le32 reg;
+- unsigned long flags;
+- struct ieee80211_hdr_3addr *hdr;
+- int i;
++ struct zd_mac *mac = zd_hw_mac(hw);
++ struct ieee80211_rx_status stats;
++ const struct rx_status *status;
++ struct sk_buff *skb;
++ int bad_frame = 0;
++ u16 fc;
++ bool is_qos, is_4addr, need_padding;
- spin_lock(&priv->lock);
- reg = P54P_READ(int_ident);
-- if (unlikely(reg == 0xFFFFFFFF)) {
-+ if (unlikely(reg == cpu_to_le32(0xFFFFFFFF))) {
- spin_unlock(&priv->lock);
- return IRQ_HANDLED;
- }
-@@ -298,12 +301,12 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
- struct p54p_desc *desc;
- u32 idx, i;
- i = priv->tx_idx;
-- i %= ARRAY_SIZE(priv->ring_control->tx_data);
-- priv->tx_idx = idx = le32_to_cpu(priv->ring_control->device_idx[1]);
-- idx %= ARRAY_SIZE(priv->ring_control->tx_data);
-+ i %= ARRAY_SIZE(ring_control->tx_data);
-+ priv->tx_idx = idx = le32_to_cpu(ring_control->device_idx[1]);
-+ idx %= ARRAY_SIZE(ring_control->tx_data);
+- hdr = (struct ieee80211_hdr_3addr *)buffer;
+- if (length < offsetof(struct ieee80211_hdr_3addr, addr3))
+- return;
+- if (compare_ether_addr(hdr->addr2, zd_mac_to_ieee80211(mac)->bssid) != 0)
+- return;
++ if (length < ZD_PLCP_HEADER_SIZE + 10 /* IEEE80211_1ADDR_LEN */ +
++ FCS_LEN + sizeof(struct rx_status))
++ return -EINVAL;
- while (i != idx) {
-- desc = &priv->ring_control->tx_data[i];
-+ desc = &ring_control->tx_data[i];
- if (priv->tx_buf[i]) {
- kfree(priv->tx_buf[i]);
- priv->tx_buf[i] = NULL;
-@@ -318,17 +321,17 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
- desc->flags = 0;
+- spin_lock_irqsave(&mac->lock, flags);
+- i = mac->stats_count % ZD_MAC_STATS_BUFFER_SIZE;
+- mac->qual_buffer[i] = qual_percent;
+- mac->rssi_buffer[i] = rssi_percent;
+- mac->stats_count++;
+- spin_unlock_irqrestore(&mac->lock, flags);
+-}
++ memset(&stats, 0, sizeof(stats));
- i++;
-- i %= ARRAY_SIZE(priv->ring_control->tx_data);
-+ i %= ARRAY_SIZE(ring_control->tx_data);
+-static int fill_rx_stats(struct ieee80211_rx_stats *stats,
+- const struct rx_status **pstatus,
+- struct zd_mac *mac,
+- const u8 *buffer, unsigned int length)
+-{
+- const struct rx_status *status;
++ /* Note about pass_failed_fcs and pass_ctrl access below:
++ * mac locking intentionally omitted here, as this is the only unlocked
++ * reader and the only writer is configure_filter. Plus, if there were
++ * any races accessing these variables, it wouldn't really matter.
++ * If mac80211 ever provides a way for us to access filter flags
++ * from outside configure_filter, we could improve on this. Also, this
++ * situation may change once we implement some kind of DMA-into-skb
++ * RX path. */
+
+- *pstatus = status = (struct rx_status *)
++ /* Caller has to ensure that length >= sizeof(struct rx_status). */
++ status = (struct rx_status *)
+ (buffer + (length - sizeof(struct rx_status)));
+ if (status->frame_status & ZD_RX_ERROR) {
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- ieee->stats.rx_errors++;
+- if (status->frame_status & ZD_RX_TIMEOUT_ERROR)
+- ieee->stats.rx_missed_errors++;
+- else if (status->frame_status & ZD_RX_FIFO_OVERRUN_ERROR)
+- ieee->stats.rx_fifo_errors++;
+- else if (status->frame_status & ZD_RX_DECRYPTION_ERROR)
+- ieee->ieee_stats.rx_discards_undecryptable++;
+- else if (status->frame_status & ZD_RX_CRC32_ERROR) {
+- ieee->stats.rx_crc_errors++;
+- ieee->ieee_stats.rx_fcs_errors++;
++ if (mac->pass_failed_fcs &&
++ (status->frame_status & ZD_RX_CRC32_ERROR)) {
++ stats.flag |= RX_FLAG_FAILED_FCS_CRC;
++ bad_frame = 1;
++ } else {
++ return -EINVAL;
}
+- else if (status->frame_status & ZD_RX_CRC16_ERROR)
+- ieee->stats.rx_crc_errors++;
+- return -EINVAL;
+ }
- i = priv->rx_idx;
-- i %= ARRAY_SIZE(priv->ring_control->rx_data);
-- priv->rx_idx = idx = le32_to_cpu(priv->ring_control->device_idx[0]);
-- idx %= ARRAY_SIZE(priv->ring_control->rx_data);
-+ i %= ARRAY_SIZE(ring_control->rx_data);
-+ priv->rx_idx = idx = le32_to_cpu(ring_control->device_idx[0]);
-+ idx %= ARRAY_SIZE(ring_control->rx_data);
- while (i != idx) {
- u16 len;
- struct sk_buff *skb;
-- desc = &priv->ring_control->rx_data[i];
-+ desc = &ring_control->rx_data[i];
- len = le16_to_cpu(desc->len);
- skb = priv->rx_buf[i];
+- memset(stats, 0, sizeof(struct ieee80211_rx_stats));
+- stats->len = length - (ZD_PLCP_HEADER_SIZE + IEEE80211_FCS_LEN +
+- + sizeof(struct rx_status));
+- /* FIXME: 802.11a */
+- stats->freq = IEEE80211_24GHZ_BAND;
+- stats->received_channel = _zd_chip_get_channel(&mac->chip);
+- stats->rssi = zd_rx_strength_percent(status->signal_strength);
+- stats->signal = zd_rx_qual_percent(buffer,
++ stats.channel = _zd_chip_get_channel(&mac->chip);
++ stats.freq = zd_channels[stats.channel - 1].freq;
++ stats.phymode = MODE_IEEE80211G;
++ stats.ssi = status->signal_strength;
++ stats.signal = zd_rx_qual_percent(buffer,
+ length - sizeof(struct rx_status),
+ status);
+- stats->mask = IEEE80211_STATMASK_RSSI | IEEE80211_STATMASK_SIGNAL;
+- stats->rate = zd_rx_rate(buffer, status);
+- if (stats->rate)
+- stats->mask |= IEEE80211_STATMASK_RATE;
++ stats.rate = zd_rx_rate(buffer, status);
++
++ length -= ZD_PLCP_HEADER_SIZE + sizeof(struct rx_status);
++ buffer += ZD_PLCP_HEADER_SIZE;
++
++ /* Except for bad frames, filter each frame to see if it is an ACK, in
++ * which case our internal TX tracking is updated. Normally we then
++ * bail here as there's no need to pass ACKs on up to the stack, but
++ * there is also the case where the stack has requested us to pass
++ * control frames on up (pass_ctrl) which we must consider. */
++ if (!bad_frame &&
++ filter_ack(hw, (struct ieee80211_hdr *)buffer, &stats)
++ && !mac->pass_ctrl)
++ return 0;
-@@ -347,7 +350,7 @@ static irqreturn_t p54p_interrupt(int irq, void *dev_id)
- }
+- return 0;
+-}
++ fc = le16_to_cpu(*((__le16 *) buffer));
- i++;
-- i %= ARRAY_SIZE(priv->ring_control->rx_data);
-+ i %= ARRAY_SIZE(ring_control->rx_data);
- }
+-static void zd_mac_rx(struct zd_mac *mac, struct sk_buff *skb)
+-{
+- int r;
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- struct ieee80211_rx_stats stats;
+- const struct rx_status *status;
++ is_qos = ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_DATA) &&
++ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_QOS_DATA);
++ is_4addr = (fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
++ (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS);
++ need_padding = is_qos ^ is_4addr;
- p54p_refill_rx_ring(dev);
-@@ -366,6 +369,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
- size_t len, int free_on_tx)
- {
- struct p54p_priv *priv = dev->priv;
-+ struct p54p_ring_control *ring_control = priv->ring_control;
- unsigned long flags;
- struct p54p_desc *desc;
- dma_addr_t mapping;
-@@ -373,19 +377,19 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
+- if (skb->len < ZD_PLCP_HEADER_SIZE + IEEE80211_1ADDR_LEN +
+- IEEE80211_FCS_LEN + sizeof(struct rx_status))
+- {
+- ieee->stats.rx_errors++;
+- ieee->stats.rx_length_errors++;
+- goto free_skb;
++ skb = dev_alloc_skb(length + (need_padding ? 2 : 0));
++ if (skb == NULL)
++ return -ENOMEM;
++ if (need_padding) {
++ /* Make sure the the payload data is 4 byte aligned. */
++ skb_reserve(skb, 2);
+ }
- spin_lock_irqsave(&priv->lock, flags);
+- r = fill_rx_stats(&stats, &status, mac, skb->data, skb->len);
+- if (r) {
+- /* Only packets with rx errors are included here.
+- * The error stats have already been set in fill_rx_stats.
+- */
+- goto free_skb;
+- }
++ memcpy(skb_put(skb, length), buffer, length);
-- device_idx = le32_to_cpu(priv->ring_control->device_idx[1]);
-- idx = le32_to_cpu(priv->ring_control->host_idx[1]);
-- i = idx % ARRAY_SIZE(priv->ring_control->tx_data);
-+ device_idx = le32_to_cpu(ring_control->device_idx[1]);
-+ idx = le32_to_cpu(ring_control->host_idx[1]);
-+ i = idx % ARRAY_SIZE(ring_control->tx_data);
+- __skb_pull(skb, ZD_PLCP_HEADER_SIZE);
+- __skb_trim(skb, skb->len -
+- (IEEE80211_FCS_LEN + sizeof(struct rx_status)));
++ ieee80211_rx_irqsafe(hw, skb, &stats);
++ return 0;
++}
- mapping = pci_map_single(priv->pdev, data, len, PCI_DMA_TODEVICE);
-- desc = &priv->ring_control->tx_data[i];
-+ desc = &ring_control->tx_data[i];
- desc->host_addr = cpu_to_le32(mapping);
- desc->device_addr = data->req_id;
- desc->len = cpu_to_le16(len);
- desc->flags = 0;
+- ZD_ASSERT(IS_ALIGNED((unsigned long)skb->data, 4));
++static int zd_op_add_interface(struct ieee80211_hw *hw,
++ struct ieee80211_if_init_conf *conf)
++{
++ struct zd_mac *mac = zd_hw_mac(hw);
- wmb();
-- priv->ring_control->host_idx[1] = cpu_to_le32(idx + 1);
-+ ring_control->host_idx[1] = cpu_to_le32(idx + 1);
+- update_qual_rssi(mac, skb->data, skb->len, stats.signal,
+- status->signal_strength);
++ /* using IEEE80211_IF_TYPE_INVALID to indicate no mode selected */
++ if (mac->type != IEEE80211_IF_TYPE_INVALID)
++ return -EOPNOTSUPP;
- if (free_on_tx)
- priv->tx_buf[i] = data;
-@@ -397,7 +401,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct p54_control_hdr *data,
+- r = filter_rx(ieee, skb->data, skb->len, &stats);
+- if (r <= 0) {
+- if (r < 0) {
+- ieee->stats.rx_errors++;
+- dev_dbg_f(zd_mac_dev(mac), "Error in packet.\n");
+- }
+- goto free_skb;
++ switch (conf->type) {
++ case IEEE80211_IF_TYPE_MNTR:
++ case IEEE80211_IF_TYPE_STA:
++ mac->type = conf->type;
++ break;
++ default:
++ return -EOPNOTSUPP;
+ }
- /* FIXME: unlikely to happen because the device usually runs out of
- memory before we fill the ring up, but we can make it impossible */
-- if (idx - device_idx > ARRAY_SIZE(priv->ring_control->tx_data) - 2)
-+ if (idx - device_idx > ARRAY_SIZE(ring_control->tx_data) - 2)
- printk(KERN_INFO "%s: tx overflow.\n", wiphy_name(dev->wiphy));
+- if (ieee->iw_mode == IW_MODE_MONITOR)
+- fill_rt_header(skb_push(skb, sizeof(struct zd_rt_hdr)), mac,
+- &stats, status);
+-
+- r = ieee80211_rx(ieee, skb, &stats);
+- if (r)
+- return;
+-free_skb:
+- /* We are always in a soft irq. */
+- dev_kfree_skb(skb);
++ return zd_write_mac_addr(&mac->chip, conf->mac_addr);
}
-@@ -421,7 +425,7 @@ static int p54p_open(struct ieee80211_hw *dev)
+-static void do_rx(unsigned long mac_ptr)
++static void zd_op_remove_interface(struct ieee80211_hw *hw,
++ struct ieee80211_if_init_conf *conf)
+ {
+- struct zd_mac *mac = (struct zd_mac *)mac_ptr;
+- struct sk_buff *skb;
++ struct zd_mac *mac = zd_hw_mac(hw);
++ mac->type = IEEE80211_IF_TYPE_INVALID;
++ zd_write_mac_addr(&mac->chip, NULL);
++}
- p54p_upload_firmware(dev);
+- while ((skb = skb_dequeue(&mac->rx_queue)) != NULL)
+- zd_mac_rx(mac, skb);
++static int zd_op_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
++{
++ struct zd_mac *mac = zd_hw_mac(hw);
++ return zd_chip_set_channel(&mac->chip, conf->channel);
+ }
-- P54P_WRITE(ring_control_base, priv->ring_control_dma);
-+ P54P_WRITE(ring_control_base, cpu_to_le32(priv->ring_control_dma));
- P54P_READ(ring_control_base);
- wmb();
- udelay(10);
-@@ -457,10 +461,11 @@ static int p54p_open(struct ieee80211_hw *dev)
- static void p54p_stop(struct ieee80211_hw *dev)
+-int zd_mac_rx_irq(struct zd_mac *mac, const u8 *buffer, unsigned int length)
++static int zd_op_config_interface(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_if_conf *conf)
{
- struct p54p_priv *priv = dev->priv;
-+ struct p54p_ring_control *ring_control = priv->ring_control;
- unsigned int i;
- struct p54p_desc *desc;
-
-- P54P_WRITE(int_enable, 0);
-+ P54P_WRITE(int_enable, cpu_to_le32(0));
- P54P_READ(int_enable);
- udelay(10);
+- struct sk_buff *skb;
+- unsigned int reserved =
+- ALIGN(max_t(unsigned int,
+- sizeof(struct zd_rt_hdr), ZD_PLCP_HEADER_SIZE), 4) -
+- ZD_PLCP_HEADER_SIZE;
+-
+- skb = dev_alloc_skb(reserved + length);
+- if (!skb) {
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- dev_warn(zd_mac_dev(mac), "Could not allocate skb.\n");
+- ieee->stats.rx_dropped++;
+- return -ENOMEM;
+- }
+- skb_reserve(skb, reserved);
+- memcpy(__skb_put(skb, length), buffer, length);
+- skb_queue_tail(&mac->rx_queue, skb);
+- tasklet_schedule(&mac->rx_tasklet);
++ struct zd_mac *mac = zd_hw_mac(hw);
++
++ spin_lock_irq(&mac->lock);
++ mac->associated = is_valid_ether_addr(conf->bssid);
++ spin_unlock_irq(&mac->lock);
++
++ /* TODO: do hardware bssid filtering */
+ return 0;
+ }
-@@ -469,7 +474,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
- P54P_WRITE(dev_int, cpu_to_le32(ISL38XX_DEV_INT_RESET));
+-static int netdev_tx(struct ieee80211_txb *txb, struct net_device *netdev,
+- int pri)
++static void set_multicast_hash_handler(struct work_struct *work)
+ {
+- return zd_mac_tx(zd_netdev_mac(netdev), txb, pri);
++ struct zd_mac *mac =
++ container_of(work, struct zd_mac, set_multicast_hash_work);
++ struct zd_mc_hash hash;
++
++ spin_lock_irq(&mac->lock);
++ hash = mac->multicast_hash;
++ spin_unlock_irq(&mac->lock);
++
++ zd_chip_set_multicast_hash(&mac->chip, &hash);
+ }
- for (i = 0; i < ARRAY_SIZE(priv->rx_buf); i++) {
-- desc = &priv->ring_control->rx_data[i];
-+ desc = &ring_control->rx_data[i];
- if (desc->host_addr)
- pci_unmap_single(priv->pdev, le32_to_cpu(desc->host_addr),
- MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
-@@ -478,7 +483,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
- }
+-static void set_security(struct net_device *netdev,
+- struct ieee80211_security *sec)
++static void set_rx_filter_handler(struct work_struct *work)
+ {
+- struct ieee80211_device *ieee = zd_netdev_ieee80211(netdev);
+- struct ieee80211_security *secinfo = &ieee->sec;
+- int keyidx;
+-
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)), "\n");
+-
+- for (keyidx = 0; keyidx<WEP_KEYS; keyidx++)
+- if (sec->flags & (1<<keyidx)) {
+- secinfo->encode_alg[keyidx] = sec->encode_alg[keyidx];
+- secinfo->key_sizes[keyidx] = sec->key_sizes[keyidx];
+- memcpy(secinfo->keys[keyidx], sec->keys[keyidx],
+- SCM_KEY_LEN);
+- }
++ struct zd_mac *mac =
++ container_of(work, struct zd_mac, set_rx_filter_work);
++ int r;
- for (i = 0; i < ARRAY_SIZE(priv->tx_buf); i++) {
-- desc = &priv->ring_control->tx_data[i];
-+ desc = &ring_control->tx_data[i];
- if (desc->host_addr)
- pci_unmap_single(priv->pdev, le32_to_cpu(desc->host_addr),
- le16_to_cpu(desc->len), PCI_DMA_TODEVICE);
-@@ -487,7 +492,7 @@ static void p54p_stop(struct ieee80211_hw *dev)
- priv->tx_buf[i] = NULL;
+- if (sec->flags & SEC_ACTIVE_KEY) {
+- secinfo->active_key = sec->active_key;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .active_key = %d\n", sec->active_key);
+- }
+- if (sec->flags & SEC_UNICAST_GROUP) {
+- secinfo->unicast_uses_group = sec->unicast_uses_group;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .unicast_uses_group = %d\n",
+- sec->unicast_uses_group);
+- }
+- if (sec->flags & SEC_LEVEL) {
+- secinfo->level = sec->level;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .level = %d\n", sec->level);
+- }
+- if (sec->flags & SEC_ENABLED) {
+- secinfo->enabled = sec->enabled;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .enabled = %d\n", sec->enabled);
+- }
+- if (sec->flags & SEC_ENCRYPT) {
+- secinfo->encrypt = sec->encrypt;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .encrypt = %d\n", sec->encrypt);
+- }
+- if (sec->flags & SEC_AUTH_MODE) {
+- secinfo->auth_mode = sec->auth_mode;
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
+- " .auth_mode = %d\n", sec->auth_mode);
++ dev_dbg_f(zd_mac_dev(mac), "\n");
++ r = set_rx_filter(mac);
++ if (r)
++ dev_err(zd_mac_dev(mac), "set_rx_filter_handler error %d\n", r);
++}
++
++#define SUPPORTED_FIF_FLAGS \
++ (FIF_PROMISC_IN_BSS | FIF_ALLMULTI | FIF_FCSFAIL | FIF_CONTROL | \
++ FIF_OTHER_BSS)
++static void zd_op_configure_filter(struct ieee80211_hw *hw,
++ unsigned int changed_flags,
++ unsigned int *new_flags,
++ int mc_count, struct dev_mc_list *mclist)
++{
++ struct zd_mc_hash hash;
++ struct zd_mac *mac = zd_hw_mac(hw);
++ unsigned long flags;
++ int i;
++
++ /* Only deal with supported flags */
++ changed_flags &= SUPPORTED_FIF_FLAGS;
++ *new_flags &= SUPPORTED_FIF_FLAGS;
++
++ /* changed_flags is always populated but this driver
++ * doesn't support all FIF flags so its possible we don't
++ * need to do anything */
++ if (!changed_flags)
++ return;
++
++ if (*new_flags & (FIF_PROMISC_IN_BSS | FIF_ALLMULTI)) {
++ zd_mc_add_all(&hash);
++ } else {
++ DECLARE_MAC_BUF(macbuf);
++
++ zd_mc_clear(&hash);
++ for (i = 0; i < mc_count; i++) {
++ if (!mclist)
++ break;
++ dev_dbg_f(zd_mac_dev(mac), "mc addr %s\n",
++ print_mac(macbuf, mclist->dmi_addr));
++ zd_mc_add_addr(&hash, mclist->dmi_addr);
++ mclist = mclist->next;
++ }
}
-
-- memset(priv->ring_control, 0, sizeof(*priv->ring_control));
-+ memset(ring_control, 0, sizeof(ring_control));
++
++ spin_lock_irqsave(&mac->lock, flags);
++ mac->pass_failed_fcs = !!(*new_flags & FIF_FCSFAIL);
++ mac->pass_ctrl = !!(*new_flags & FIF_CONTROL);
++ mac->multicast_hash = hash;
++ spin_unlock_irqrestore(&mac->lock, flags);
++ queue_work(zd_workqueue, &mac->set_multicast_hash_work);
++
++ if (changed_flags & FIF_CONTROL)
++ queue_work(zd_workqueue, &mac->set_rx_filter_work);
++
++ /* no handling required for FIF_OTHER_BSS as we don't currently
++ * do BSSID filtering */
++ /* FIXME: in future it would be nice to enable the probe response
++ * filter (so that the driver doesn't see them) until
++ * FIF_BCN_PRBRESP_PROMISC is set. however due to atomicity here, we'd
++ * have to schedule work to enable prbresp reception, which might
++ * happen too late. For now we'll just listen and forward them all the
++ * time. */
}
- static int __devinit p54p_probe(struct pci_dev *pdev,
-diff --git a/drivers/net/wireless/p54pci.h b/drivers/net/wireless/p54pci.h
-index 52feb59..5bedd7a 100644
---- a/drivers/net/wireless/p54pci.h
-+++ b/drivers/net/wireless/p54pci.h
-@@ -85,8 +85,8 @@ struct p54p_ring_control {
- struct p54p_desc tx_mgmt[4];
- } __attribute__ ((packed));
-
--#define P54P_READ(r) __raw_readl(&priv->map->r)
--#define P54P_WRITE(r, val) __raw_writel((__force u32)(val), &priv->map->r)
-+#define P54P_READ(r) (__force __le32)__raw_readl(&priv->map->r)
-+#define P54P_WRITE(r, val) __raw_writel((__force u32)(__le32)(val), &priv->map->r)
-
- struct p54p_priv {
- struct p54_common common;
-diff --git a/drivers/net/wireless/prism54/isl_38xx.h b/drivers/net/wireless/prism54/isl_38xx.h
-index 3fadcb6..19c33d3 100644
---- a/drivers/net/wireless/prism54/isl_38xx.h
-+++ b/drivers/net/wireless/prism54/isl_38xx.h
-@@ -138,14 +138,14 @@ isl38xx_w32_flush(void __iomem *base, u32 val, unsigned long offset)
- #define MAX_FRAGMENT_SIZE_RX 1600
-
- typedef struct {
-- u32 address; /* physical address on host */
-- u16 size; /* packet size */
-- u16 flags; /* set of bit-wise flags */
-+ __le32 address; /* physical address on host */
-+ __le16 size; /* packet size */
-+ __le16 flags; /* set of bit-wise flags */
- } isl38xx_fragment;
-
- struct isl38xx_cb {
-- u32 driver_curr_frag[ISL38XX_CB_QCOUNT];
-- u32 device_curr_frag[ISL38XX_CB_QCOUNT];
-+ __le32 driver_curr_frag[ISL38XX_CB_QCOUNT];
-+ __le32 device_curr_frag[ISL38XX_CB_QCOUNT];
- isl38xx_fragment rx_data_low[ISL38XX_CB_RX_QSIZE];
- isl38xx_fragment tx_data_low[ISL38XX_CB_TX_QSIZE];
- isl38xx_fragment rx_data_high[ISL38XX_CB_RX_QSIZE];
-diff --git a/drivers/net/wireless/prism54/isl_ioctl.c b/drivers/net/wireless/prism54/isl_ioctl.c
-index 6d80ca4..1b595a6 100644
---- a/drivers/net/wireless/prism54/isl_ioctl.c
-+++ b/drivers/net/wireless/prism54/isl_ioctl.c
-@@ -165,8 +165,7 @@ prism54_update_stats(struct work_struct *work)
- struct obj_bss bss, *bss2;
- union oid_res_t r;
-
-- if (down_interruptible(&priv->stats_sem))
-- return;
-+ down(&priv->stats_sem);
+-static void ieee_init(struct ieee80211_device *ieee)
++static void set_rts_cts_work(struct work_struct *work)
+ {
+- ieee->mode = IEEE_B | IEEE_G;
+- ieee->freq_band = IEEE80211_24GHZ_BAND;
+- ieee->modulation = IEEE80211_OFDM_MODULATION | IEEE80211_CCK_MODULATION;
+- ieee->tx_headroom = sizeof(struct zd_ctrlset);
+- ieee->set_security = set_security;
+- ieee->hard_start_xmit = netdev_tx;
+-
+- /* Software encryption/decryption for now */
+- ieee->host_build_iv = 0;
+- ieee->host_encrypt = 1;
+- ieee->host_decrypt = 1;
+-
+- /* FIXME: default to managed mode, until ieee80211 and zd1211rw can
+- * correctly support AUTO */
+- ieee->iw_mode = IW_MODE_INFRA;
++ struct zd_mac *mac =
++ container_of(work, struct zd_mac, set_rts_cts_work);
++ unsigned long flags;
++ unsigned int short_preamble;
++
++ mutex_lock(&mac->chip.mutex);
++
++ spin_lock_irqsave(&mac->lock, flags);
++ mac->updating_rts_rate = 0;
++ short_preamble = mac->short_preamble;
++ spin_unlock_irqrestore(&mac->lock, flags);
++
++ zd_chip_set_rts_cts_rate_locked(&mac->chip, short_preamble);
++ mutex_unlock(&mac->chip.mutex);
+ }
- /* Noise floor.
- * I'm not sure if the unit is dBm.
-@@ -1118,7 +1117,7 @@ prism54_set_encode(struct net_device *ndev, struct iw_request_info *info,
- mgt_set_request(priv, DOT11_OID_DEFKEYID, 0,
- &index);
- } else {
-- if (!dwrq->flags & IW_ENCODE_MODE) {
-+ if (!(dwrq->flags & IW_ENCODE_MODE)) {
- /* we cannot do anything. Complain. */
- return -EINVAL;
- }
-@@ -1793,8 +1792,7 @@ prism54_clear_mac(struct islpci_acl *acl)
- struct list_head *ptr, *next;
- struct mac_entry *entry;
+-static void softmac_init(struct ieee80211softmac_device *sm)
++static void zd_op_bss_info_changed(struct ieee80211_hw *hw,
++ struct ieee80211_vif *vif,
++ struct ieee80211_bss_conf *bss_conf,
++ u32 changes)
+ {
+- sm->set_channel = set_channel;
+- sm->bssinfo_change = bssinfo_change;
++ struct zd_mac *mac = zd_hw_mac(hw);
++ unsigned long flags;
++
++ dev_dbg_f(zd_mac_dev(mac), "changes: %x\n", changes);
++
++ if (changes & BSS_CHANGED_ERP_PREAMBLE) {
++ spin_lock_irqsave(&mac->lock, flags);
++ mac->short_preamble = bss_conf->use_short_preamble;
++ if (!mac->updating_rts_rate) {
++ mac->updating_rts_rate = 1;
++ /* FIXME: should disable TX here, until work has
++ * completed and RTS_CTS reg is updated */
++ queue_work(zd_workqueue, &mac->set_rts_cts_work);
++ }
++ spin_unlock_irqrestore(&mac->lock, flags);
++ }
+ }
-- if (down_interruptible(&acl->sem))
-- return;
-+ down(&acl->sem);
+-struct iw_statistics *zd_mac_get_wireless_stats(struct net_device *ndev)
++static const struct ieee80211_ops zd_ops = {
++ .tx = zd_op_tx,
++ .start = zd_op_start,
++ .stop = zd_op_stop,
++ .add_interface = zd_op_add_interface,
++ .remove_interface = zd_op_remove_interface,
++ .config = zd_op_config,
++ .config_interface = zd_op_config_interface,
++ .configure_filter = zd_op_configure_filter,
++ .bss_info_changed = zd_op_bss_info_changed,
++};
++
++struct ieee80211_hw *zd_mac_alloc_hw(struct usb_interface *intf)
+ {
+- struct zd_mac *mac = zd_netdev_mac(ndev);
+- struct iw_statistics *iw_stats = &mac->iw_stats;
+- unsigned int i, count, qual_total, rssi_total;
++ struct zd_mac *mac;
++ struct ieee80211_hw *hw;
++ int i;
- if (acl->size == 0) {
- up(&acl->sem);
-@@ -2116,8 +2114,7 @@ prism54_wpa_bss_ie_add(islpci_private *priv, u8 *bssid,
- if (wpa_ie_len > MAX_WPA_IE_LEN)
- wpa_ie_len = MAX_WPA_IE_LEN;
+- memset(iw_stats, 0, sizeof(struct iw_statistics));
+- /* We are not setting the status, because ieee->state is not updated
+- * at all and this driver doesn't track authentication state.
+- */
+- spin_lock_irq(&mac->lock);
+- count = mac->stats_count < ZD_MAC_STATS_BUFFER_SIZE ?
+- mac->stats_count : ZD_MAC_STATS_BUFFER_SIZE;
+- qual_total = rssi_total = 0;
+- for (i = 0; i < count; i++) {
+- qual_total += mac->qual_buffer[i];
+- rssi_total += mac->rssi_buffer[i];
++ hw = ieee80211_alloc_hw(sizeof(struct zd_mac), &zd_ops);
++ if (!hw) {
++ dev_dbg_f(&intf->dev, "out of memory\n");
++ return NULL;
+ }
+- spin_unlock_irq(&mac->lock);
+- iw_stats->qual.updated = IW_QUAL_NOISE_INVALID;
+- if (count > 0) {
+- iw_stats->qual.qual = qual_total / count;
+- iw_stats->qual.level = rssi_total / count;
+- iw_stats->qual.updated |=
+- IW_QUAL_QUAL_UPDATED|IW_QUAL_LEVEL_UPDATED;
+- } else {
+- iw_stats->qual.updated |=
+- IW_QUAL_QUAL_INVALID|IW_QUAL_LEVEL_INVALID;
++
++ mac = zd_hw_mac(hw);
++
++ memset(mac, 0, sizeof(*mac));
++ spin_lock_init(&mac->lock);
++ mac->hw = hw;
++
++ mac->type = IEEE80211_IF_TYPE_INVALID;
++
++ memcpy(mac->channels, zd_channels, sizeof(zd_channels));
++ memcpy(mac->rates, zd_rates, sizeof(zd_rates));
++ mac->modes[0].mode = MODE_IEEE80211G;
++ mac->modes[0].num_rates = ARRAY_SIZE(zd_rates);
++ mac->modes[0].rates = mac->rates;
++ mac->modes[0].num_channels = ARRAY_SIZE(zd_channels);
++ mac->modes[0].channels = mac->channels;
++ mac->modes[1].mode = MODE_IEEE80211B;
++ mac->modes[1].num_rates = 4;
++ mac->modes[1].rates = mac->rates;
++ mac->modes[1].num_channels = ARRAY_SIZE(zd_channels);
++ mac->modes[1].channels = mac->channels;
++
++ hw->flags = IEEE80211_HW_RX_INCLUDES_FCS |
++ IEEE80211_HW_DEFAULT_REG_DOMAIN_CONFIGURED;
++ hw->max_rssi = 100;
++ hw->max_signal = 100;
++
++ hw->queues = 1;
++ hw->extra_tx_headroom = sizeof(struct zd_ctrlset);
++
++ skb_queue_head_init(&mac->ack_wait_queue);
++
++ for (i = 0; i < 2; i++) {
++ if (ieee80211_register_hwmode(hw, &mac->modes[i])) {
++ dev_dbg_f(&intf->dev, "cannot register hwmode\n");
++ ieee80211_free_hw(hw);
++ return NULL;
++ }
+ }
+- /* TODO: update counter */
+- return iw_stats;
++
++ zd_chip_init(&mac->chip, hw, intf);
++ housekeeping_init(mac);
++ INIT_WORK(&mac->set_multicast_hash_work, set_multicast_hash_handler);
++ INIT_WORK(&mac->set_rts_cts_work, set_rts_cts_work);
++ INIT_WORK(&mac->set_rx_filter_work, set_rx_filter_handler);
++
++ SET_IEEE80211_DEV(hw, &intf->dev);
++ return hw;
+ }
-- if (down_interruptible(&priv->wpa_sem))
-- return;
-+ down(&priv->wpa_sem);
+ #define LINK_LED_WORK_DELAY HZ
+@@ -1308,18 +952,17 @@ static void link_led_handler(struct work_struct *work)
+ struct zd_mac *mac =
+ container_of(work, struct zd_mac, housekeeping.link_led_work.work);
+ struct zd_chip *chip = &mac->chip;
+- struct ieee80211softmac_device *sm = ieee80211_priv(mac->netdev);
+ int is_associated;
+ int r;
- /* try to use existing entry */
- list_for_each(ptr, &priv->bss_wpa_list) {
-@@ -2178,8 +2175,7 @@ prism54_wpa_bss_ie_get(islpci_private *priv, u8 *bssid, u8 *wpa_ie)
- struct islpci_bss_wpa_ie *bss = NULL;
- size_t len = 0;
+ spin_lock_irq(&mac->lock);
+- is_associated = sm->associnfo.associated != 0;
++ is_associated = mac->associated;
+ spin_unlock_irq(&mac->lock);
-- if (down_interruptible(&priv->wpa_sem))
-- return 0;
-+ down(&priv->wpa_sem);
+ r = zd_chip_control_leds(chip,
+ is_associated ? LED_ASSOCIATED : LED_SCANNING);
+ if (r)
+- dev_err(zd_mac_dev(mac), "zd_chip_control_leds error %d\n", r);
++ dev_dbg_f(zd_mac_dev(mac), "zd_chip_control_leds error %d\n", r);
- list_for_each(ptr, &priv->bss_wpa_list) {
- bss = list_entry(ptr, struct islpci_bss_wpa_ie, list);
-@@ -2610,7 +2606,7 @@ prism2_ioctl_set_encryption(struct net_device *dev,
- mgt_set_request(priv, DOT11_OID_DEFKEYID, 0,
- &index);
- } else {
-- if (!param->u.crypt.flags & IW_ENCODE_MODE) {
-+ if (!(param->u.crypt.flags & IW_ENCODE_MODE)) {
- /* we cannot do anything. Complain. */
- return -EINVAL;
- }
-diff --git a/drivers/net/wireless/prism54/islpci_dev.c b/drivers/net/wireless/prism54/islpci_dev.c
-index 219dd65..dbb538c 100644
---- a/drivers/net/wireless/prism54/islpci_dev.c
-+++ b/drivers/net/wireless/prism54/islpci_dev.c
-@@ -861,7 +861,7 @@ islpci_setup(struct pci_dev *pdev)
- init_waitqueue_head(&priv->reset_done);
+ queue_delayed_work(zd_workqueue, &mac->housekeeping.link_led_work,
+ LINK_LED_WORK_DELAY);
+diff --git a/drivers/net/wireless/zd1211rw/zd_mac.h b/drivers/net/wireless/zd1211rw/zd_mac.h
+index 1b15bde..2dde108 100644
+--- a/drivers/net/wireless/zd1211rw/zd_mac.h
++++ b/drivers/net/wireless/zd1211rw/zd_mac.h
+@@ -1,4 +1,7 @@
+-/* zd_mac.h
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -18,14 +21,11 @@
+ #ifndef _ZD_MAC_H
+ #define _ZD_MAC_H
- /* init the queue read locks, process wait counter */
-- sema_init(&priv->mgmt_sem, 1);
-+ mutex_init(&priv->mgmt_lock);
- priv->mgmt_received = NULL;
- init_waitqueue_head(&priv->mgmt_wqueue);
- sema_init(&priv->stats_sem, 1);
-diff --git a/drivers/net/wireless/prism54/islpci_dev.h b/drivers/net/wireless/prism54/islpci_dev.h
-index 736666d..4e0182c 100644
---- a/drivers/net/wireless/prism54/islpci_dev.h
-+++ b/drivers/net/wireless/prism54/islpci_dev.h
-@@ -26,6 +26,7 @@
- #include <linux/wireless.h>
- #include <net/iw_handler.h>
- #include <linux/list.h>
-+#include <linux/mutex.h>
+-#include <linux/wireless.h>
+ #include <linux/kernel.h>
+-#include <linux/workqueue.h>
+-#include <net/ieee80211.h>
+-#include <net/ieee80211softmac.h>
++#include <net/mac80211.h>
- #include "isl_38xx.h"
- #include "isl_oid.h"
-@@ -164,7 +165,7 @@ typedef struct {
- wait_queue_head_t reset_done;
+ #include "zd_chip.h"
+-#include "zd_netdev.h"
++#include "zd_ieee80211.h"
- /* used by islpci_mgt_transaction */
-- struct semaphore mgmt_sem; /* serialize access to mailbox and wqueue */
-+ struct mutex mgmt_lock; /* serialize access to mailbox and wqueue */
- struct islpci_mgmtframe *mgmt_received; /* mbox for incoming frame */
- wait_queue_head_t mgmt_wqueue; /* waitqueue for mbox */
+ struct zd_ctrlset {
+ u8 modulation;
+@@ -57,7 +57,7 @@ struct zd_ctrlset {
+ /* The two possible modulation types. Notify that 802.11b doesn't use the CCK
+ * codeing for the 1 and 2 MBit/s rate. We stay with the term here to remain
+ * consistent with uses the term at other places.
+- */
++ */
+ #define ZD_CCK 0x00
+ #define ZD_OFDM 0x10
-diff --git a/drivers/net/wireless/prism54/islpci_eth.c b/drivers/net/wireless/prism54/islpci_eth.c
-index f49eb06..762e85b 100644
---- a/drivers/net/wireless/prism54/islpci_eth.c
-+++ b/drivers/net/wireless/prism54/islpci_eth.c
-@@ -471,7 +471,7 @@ islpci_eth_receive(islpci_private *priv)
- wmb();
+@@ -141,58 +141,68 @@ struct rx_status {
+ #define ZD_RX_CRC16_ERROR 0x40
+ #define ZD_RX_ERROR 0x80
- /* increment the driver read pointer */
-- add_le32p((u32 *) &control_block->
-+ add_le32p(&control_block->
- driver_curr_frag[ISL38XX_CB_RX_DATA_LQ], 1);
- }
++enum mac_flags {
++ MAC_FIXED_CHANNEL = 0x01,
++};
++
+ struct housekeeping {
+ struct delayed_work link_led_work;
+ };
-diff --git a/drivers/net/wireless/prism54/islpci_eth.h b/drivers/net/wireless/prism54/islpci_eth.h
-index 5bf820d..61454d3 100644
---- a/drivers/net/wireless/prism54/islpci_eth.h
-+++ b/drivers/net/wireless/prism54/islpci_eth.h
-@@ -23,15 +23,15 @@
- #include "islpci_dev.h"
++/**
++ * struct zd_tx_skb_control_block - control block for tx skbuffs
++ * @control: &struct ieee80211_tx_control pointer
++ * @context: context pointer
++ *
++ * This structure is used to fill the cb field in an &sk_buff to transmit.
++ * The control field is NULL, if there is no requirement from the mac80211
++ * stack to report about the packet ACK. This is the case if the flag
++ * IEEE80211_TXCTL_NO_ACK is not set in &struct ieee80211_tx_control.
++ */
++struct zd_tx_skb_control_block {
++ struct ieee80211_tx_control *control;
++ struct ieee80211_hw *hw;
++ void *context;
++};
++
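The `zd_tx_skb_control_block` added above follows a common mac80211-driver pattern: per-packet state is stashed in the fixed 48-byte `cb` scratch area of an `sk_buff`. A userspace sketch of that idea (the 48-byte size matches `sk_buff.cb` of this kernel era; the struct fields are illustrative):

```c
#include <stddef.h>

#define SKB_CB_SIZE 48  /* size of sk_buff.cb */

struct fake_skb {
	unsigned char cb[SKB_CB_SIZE];  /* per-layer scratch space */
};

/* Illustrative stand-in for zd_tx_skb_control_block. */
struct tx_cb {
	void *control;  /* NULL when no ACK report is wanted */
	void *hw;
	void *context;
};

static struct tx_cb *tx_cb_of(struct fake_skb *skb)
{
	/* The control block must fit in the scratch area; the kernel
	 * typically enforces this with BUILD_BUG_ON(). */
	_Static_assert(sizeof(struct tx_cb) <= SKB_CB_SIZE, "cb too small");
	return (struct tx_cb *)skb->cb;
}
```

The scratch area is owned by whichever layer currently holds the skb, which is why the driver can overwrite it freely between `tx()` and the completion callback.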
+ #define ZD_MAC_STATS_BUFFER_SIZE 16
- struct rfmon_header {
-- u16 unk0; /* = 0x0000 */
-- u16 length; /* = 0x1400 */
-- u32 clock; /* 1MHz clock */
-+ __le16 unk0; /* = 0x0000 */
-+ __le16 length; /* = 0x1400 */
-+ __le32 clock; /* 1MHz clock */
- u8 flags;
- u8 unk1;
- u8 rate;
- u8 unk2;
-- u16 freq;
-- u16 unk3;
-+ __le16 freq;
-+ __le16 unk3;
- u8 rssi;
- u8 padding[3];
- } __attribute__ ((packed));
-@@ -47,20 +47,20 @@ struct rx_annex_header {
- #define P80211CAPTURE_VERSION 0x80211001
++#define ZD_MAC_MAX_ACK_WAITERS 10
++
+ struct zd_mac {
+ struct zd_chip chip;
+ spinlock_t lock;
+- struct net_device *netdev;
+-
+- /* Unlocked reading possible */
+- struct iw_statistics iw_stats;
+-
++ struct ieee80211_hw *hw;
+ struct housekeeping housekeeping;
+ struct work_struct set_multicast_hash_work;
++ struct work_struct set_rts_cts_work;
++ struct work_struct set_rx_filter_work;
+ struct zd_mc_hash multicast_hash;
+- struct delayed_work set_rts_cts_work;
+- struct delayed_work set_basic_rates_work;
+-
+- struct tasklet_struct rx_tasklet;
+- struct sk_buff_head rx_queue;
+-
+- unsigned int stats_count;
+- u8 qual_buffer[ZD_MAC_STATS_BUFFER_SIZE];
+- u8 rssi_buffer[ZD_MAC_STATS_BUFFER_SIZE];
+ u8 regdomain;
+ u8 default_regdomain;
+- u8 requested_channel;
+-
+- /* A bitpattern of cr_rates */
+- u16 basic_rates;
+-
+- /* A zd_rate */
+- u8 rts_rate;
++ int type;
++ int associated;
++ struct sk_buff_head ack_wait_queue;
++ struct ieee80211_channel channels[14];
++ struct ieee80211_rate rates[12];
++ struct ieee80211_hw_mode modes[2];
- struct avs_80211_1_header {
-- uint32_t version;
-- uint32_t length;
-- uint64_t mactime;
-- uint64_t hosttime;
-- uint32_t phytype;
-- uint32_t channel;
-- uint32_t datarate;
-- uint32_t antenna;
-- uint32_t priority;
-- uint32_t ssi_type;
-- int32_t ssi_signal;
-- int32_t ssi_noise;
-- uint32_t preamble;
-- uint32_t encoding;
-+ __be32 version;
-+ __be32 length;
-+ __be64 mactime;
-+ __be64 hosttime;
-+ __be32 phytype;
-+ __be32 channel;
-+ __be32 datarate;
-+ __be32 antenna;
-+ __be32 priority;
-+ __be32 ssi_type;
-+ __be32 ssi_signal;
-+ __be32 ssi_noise;
-+ __be32 preamble;
-+ __be32 encoding;
- };
+ /* Short preamble (used for RTS/CTS) */
+ unsigned int short_preamble:1;
- void islpci_eth_cleanup_transmit(islpci_private *, isl38xx_control_block *);
-diff --git a/drivers/net/wireless/prism54/islpci_mgt.c b/drivers/net/wireless/prism54/islpci_mgt.c
-index 2246f79..f7c677e 100644
---- a/drivers/net/wireless/prism54/islpci_mgt.c
-+++ b/drivers/net/wireless/prism54/islpci_mgt.c
-@@ -460,7 +460,7 @@ islpci_mgt_transaction(struct net_device *ndev,
+ /* flags to indicate update in progress */
+ unsigned int updating_rts_rate:1;
+- unsigned int updating_basic_rates:1;
+-};
- *recvframe = NULL;
+-static inline struct ieee80211_device *zd_mac_to_ieee80211(struct zd_mac *mac)
+-{
+- return zd_netdev_ieee80211(mac->netdev);
+-}
++ /* whether to pass frames with CRC errors to stack */
++ unsigned int pass_failed_fcs:1;
-- if (down_interruptible(&priv->mgmt_sem))
-+ if (mutex_lock_interruptible(&priv->mgmt_lock))
- return -ERESTARTSYS;
+-static inline struct zd_mac *zd_netdev_mac(struct net_device *netdev)
++ /* whether to pass control frames to stack */
++ unsigned int pass_ctrl:1;
++};
++
++static inline struct zd_mac *zd_hw_mac(struct ieee80211_hw *hw)
+ {
+- return ieee80211softmac_priv(netdev);
++ return hw->priv;
+ }
- prepare_to_wait(&priv->mgmt_wqueue, &wait, TASK_UNINTERRUPTIBLE);
-@@ -504,7 +504,7 @@ islpci_mgt_transaction(struct net_device *ndev,
- /* TODO: we should reset the device here */
- out:
- finish_wait(&priv->mgmt_wqueue, &wait);
-- up(&priv->mgmt_sem);
-+ mutex_unlock(&priv->mgmt_lock);
- return err;
+ static inline struct zd_mac *zd_chip_to_mac(struct zd_chip *chip)
+@@ -205,35 +215,22 @@ static inline struct zd_mac *zd_usb_to_mac(struct zd_usb *usb)
+ return zd_chip_to_mac(zd_usb_to_chip(usb));
}
-diff --git a/drivers/net/wireless/prism54/islpci_mgt.h b/drivers/net/wireless/prism54/islpci_mgt.h
-index fc53b58..f91a88f 100644
---- a/drivers/net/wireless/prism54/islpci_mgt.h
-+++ b/drivers/net/wireless/prism54/islpci_mgt.h
-@@ -86,7 +86,7 @@ extern int pc_debug;
- #define PIMFOR_FLAG_LITTLE_ENDIAN 0x02
++static inline u8 *zd_mac_get_perm_addr(struct zd_mac *mac)
++{
++ return mac->hw->wiphy->perm_addr;
++}
++
+ #define zd_mac_dev(mac) (zd_chip_dev(&(mac)->chip))
- static inline void
--add_le32p(u32 * le_number, u32 add)
-+add_le32p(__le32 * le_number, u32 add)
- {
- *le_number = cpu_to_le32(le32_to_cpup(le_number) + add);
- }
-diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
-index f87fe10..f3858ee 100644
---- a/drivers/net/wireless/ray_cs.c
-+++ b/drivers/net/wireless/ray_cs.c
-@@ -44,6 +44,7 @@
- #include <linux/ioport.h>
- #include <linux/skbuff.h>
- #include <linux/ethtool.h>
-+#include <linux/ieee80211.h>
+-int zd_mac_init(struct zd_mac *mac,
+- struct net_device *netdev,
+- struct usb_interface *intf);
++struct ieee80211_hw *zd_mac_alloc_hw(struct usb_interface *intf);
+ void zd_mac_clear(struct zd_mac *mac);
- #include <pcmcia/cs_types.h>
- #include <pcmcia/cs.h>
-@@ -997,13 +998,13 @@ static int ray_hw_xmit(unsigned char* data, int len, struct net_device* dev,
- static int translate_frame(ray_dev_t *local, struct tx_msg __iomem *ptx, unsigned char *data,
- int len)
- {
-- unsigned short int proto = ((struct ethhdr *)data)->h_proto;
-+ __be16 proto = ((struct ethhdr *)data)->h_proto;
- if (ntohs(proto) >= 1536) { /* DIX II ethernet frame */
- DEBUG(3,"ray_cs translate_frame DIX II\n");
- /* Copy LLC header to card buffer */
- memcpy_toio(&ptx->var, eth2_llc, sizeof(eth2_llc));
- memcpy_toio( ((void __iomem *)&ptx->var) + sizeof(eth2_llc), (UCHAR *)&proto, 2);
-- if ((proto == 0xf380) || (proto == 0x3781)) {
-+ if (proto == htons(ETH_P_AARP) || proto == htons(ETH_P_IPX)) {
- /* This is the selective translation table, only 2 entries */
- writeb(0xf8, &((struct snaphdr_t __iomem *)ptx->var)->org[3]);
- }
-@@ -1014,7 +1015,7 @@ static int translate_frame(ray_dev_t *local, struct tx_msg __iomem *ptx, unsigne
- }
- else { /* already 802 type, and proto is length */
- DEBUG(3,"ray_cs translate_frame 802\n");
-- if (proto == 0xffff) { /* evil netware IPX 802.3 without LLC */
-+ if (proto == htons(0xffff)) { /* evil netware IPX 802.3 without LLC */
- DEBUG(3,"ray_cs translate_frame evil IPX\n");
- memcpy_toio(&ptx->var, data + ETH_HLEN, len - ETH_HLEN);
- return 0 - ETH_HLEN;
-@@ -1780,19 +1781,19 @@ static struct net_device_stats *ray_get_stats(struct net_device *dev)
- }
- if (readb(&p->mrx_overflow_for_host))
- {
-- local->stats.rx_over_errors += ntohs(readb(&p->mrx_overflow));
-+ local->stats.rx_over_errors += swab16(readw(&p->mrx_overflow));
- writeb(0,&p->mrx_overflow);
- writeb(0,&p->mrx_overflow_for_host);
- }
- if (readb(&p->mrx_checksum_error_for_host))
- {
-- local->stats.rx_crc_errors += ntohs(readb(&p->mrx_checksum_error));
-+ local->stats.rx_crc_errors += swab16(readw(&p->mrx_checksum_error));
- writeb(0,&p->mrx_checksum_error);
- writeb(0,&p->mrx_checksum_error_for_host);
- }
- if (readb(&p->rx_hec_error_for_host))
- {
-- local->stats.rx_frame_errors += ntohs(readb(&p->rx_hec_error));
-+ local->stats.rx_frame_errors += swab16(readw(&p->rx_hec_error));
- writeb(0,&p->rx_hec_error);
- writeb(0,&p->rx_hec_error_for_host);
- }
-@@ -2316,32 +2317,17 @@ static void rx_data(struct net_device *dev, struct rcs __iomem *prcs, unsigned i
- static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
- {
- snaphdr_t *psnap = (snaphdr_t *)(skb->data + RX_MAC_HEADER_LENGTH);
-- struct mac_header *pmac = (struct mac_header *)skb->data;
-- unsigned short type = *(unsigned short *)psnap->ethertype;
-- unsigned int xsap = *(unsigned int *)psnap & 0x00ffffff;
-- unsigned int org = (*(unsigned int *)psnap->org) & 0x00ffffff;
-+ struct ieee80211_hdr *pmac = (struct ieee80211_hdr *)skb->data;
-+ __be16 type = *(__be16 *)psnap->ethertype;
- int delta;
- struct ethhdr *peth;
- UCHAR srcaddr[ADDRLEN];
- UCHAR destaddr[ADDRLEN];
-+ static UCHAR org_bridge[3] = {0, 0, 0xf8};
-+ static UCHAR org_1042[3] = {0, 0, 0};
+-int zd_mac_preinit_hw(struct zd_mac *mac);
+-int zd_mac_init_hw(struct zd_mac *mac);
+-
+-int zd_mac_open(struct net_device *netdev);
+-int zd_mac_stop(struct net_device *netdev);
+-int zd_mac_set_mac_address(struct net_device *dev, void *p);
+-void zd_mac_set_multicast_list(struct net_device *netdev);
+-
+-int zd_mac_rx_irq(struct zd_mac *mac, const u8 *buffer, unsigned int length);
+-
+-int zd_mac_set_regdomain(struct zd_mac *zd_mac, u8 regdomain);
+-u8 zd_mac_get_regdomain(struct zd_mac *zd_mac);
+-
+-int zd_mac_request_channel(struct zd_mac *mac, u8 channel);
+-u8 zd_mac_get_channel(struct zd_mac *mac);
+-
+-int zd_mac_set_mode(struct zd_mac *mac, u32 mode);
+-int zd_mac_get_mode(struct zd_mac *mac, u32 *mode);
+-
+-int zd_mac_get_range(struct zd_mac *mac, struct iw_range *range);
++int zd_mac_preinit_hw(struct ieee80211_hw *hw);
++int zd_mac_init_hw(struct ieee80211_hw *hw);
-- if (pmac->frame_ctl_2 & FC2_FROM_DS) {
-- if (pmac->frame_ctl_2 & FC2_TO_DS) { /* AP to AP */
-- memcpy(destaddr, pmac->addr_3, ADDRLEN);
-- memcpy(srcaddr, ((unsigned char *)pmac->addr_3) + ADDRLEN, ADDRLEN);
-- } else { /* AP to terminal */
-- memcpy(destaddr, pmac->addr_1, ADDRLEN);
-- memcpy(srcaddr, pmac->addr_3, ADDRLEN);
+-struct iw_statistics *zd_mac_get_wireless_stats(struct net_device *ndev);
++int zd_mac_rx(struct ieee80211_hw *hw, const u8 *buffer, unsigned int length);
++void zd_mac_tx_failed(struct ieee80211_hw *hw);
++void zd_mac_tx_to_dev(struct sk_buff *skb, int error);
+
+ #ifdef DEBUG
+ void zd_dump_rx_status(const struct rx_status *status);
+diff --git a/drivers/net/wireless/zd1211rw/zd_netdev.c b/drivers/net/wireless/zd1211rw/zd_netdev.c
+deleted file mode 100644
+index 047cab3..0000000
+--- a/drivers/net/wireless/zd1211rw/zd_netdev.c
++++ /dev/null
+@@ -1,264 +0,0 @@
+-/* zd_netdev.c
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- */
+-
+-#include <linux/netdevice.h>
+-#include <linux/etherdevice.h>
+-#include <linux/skbuff.h>
+-#include <net/ieee80211.h>
+-#include <net/ieee80211softmac.h>
+-#include <net/ieee80211softmac_wx.h>
+-#include <net/iw_handler.h>
+-
+-#include "zd_def.h"
+-#include "zd_netdev.h"
+-#include "zd_mac.h"
+-#include "zd_ieee80211.h"
+-
+-/* Region 0 means reset regdomain to default. */
+-static int zd_set_regdomain(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- const u8 *regdomain = (u8 *)req;
+- return zd_mac_set_regdomain(zd_netdev_mac(netdev), *regdomain);
+-}
+-
+-static int zd_get_regdomain(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- u8 *regdomain = (u8 *)req;
+- if (!regdomain)
+- return -EINVAL;
+- *regdomain = zd_mac_get_regdomain(zd_netdev_mac(netdev));
+- return 0;
+-}
+-
+-static const struct iw_priv_args zd_priv_args[] = {
+- {
+- .cmd = ZD_PRIV_SET_REGDOMAIN,
+- .set_args = IW_PRIV_TYPE_BYTE | IW_PRIV_SIZE_FIXED | 1,
+- .name = "set_regdomain",
+- },
+- {
+- .cmd = ZD_PRIV_GET_REGDOMAIN,
+- .get_args = IW_PRIV_TYPE_BYTE | IW_PRIV_SIZE_FIXED | 1,
+- .name = "get_regdomain",
+- },
+-};
+-
+-#define PRIV_OFFSET(x) [(x)-SIOCIWFIRSTPRIV]
+-
+-static const iw_handler zd_priv_handler[] = {
+- PRIV_OFFSET(ZD_PRIV_SET_REGDOMAIN) = zd_set_regdomain,
+- PRIV_OFFSET(ZD_PRIV_GET_REGDOMAIN) = zd_get_regdomain,
+-};
+-
+-static int iw_get_name(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- /* FIXME: check whether 802.11a will also supported */
+- strlcpy(req->name, "IEEE 802.11b/g", IFNAMSIZ);
+- return 0;
+-}
+-
+-static int iw_get_nick(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- strcpy(extra, "zd1211");
+- req->data.length = strlen(extra);
+- req->data.flags = 1;
+- return 0;
+-}
+-
+-static int iw_set_freq(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- int r;
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+- struct iw_freq *freq = &req->freq;
+- u8 channel;
+-
+- r = zd_find_channel(&channel, freq);
+- if (r < 0)
+- return r;
+- r = zd_mac_request_channel(mac, channel);
+- return r;
+-}
+-
+-static int iw_get_freq(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- struct zd_mac *mac = zd_netdev_mac(netdev);
+- struct iw_freq *freq = &req->freq;
+-
+- return zd_channel_to_freq(freq, zd_mac_get_channel(mac));
+-}
+-
+-static int iw_set_mode(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- return zd_mac_set_mode(zd_netdev_mac(netdev), req->mode);
+-}
+-
+-static int iw_get_mode(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- return zd_mac_get_mode(zd_netdev_mac(netdev), &req->mode);
+-}
+-
+-static int iw_get_range(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *req, char *extra)
+-{
+- struct iw_range *range = (struct iw_range *)extra;
+-
+- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)), "\n");
+- req->data.length = sizeof(*range);
+- return zd_mac_get_range(zd_netdev_mac(netdev), range);
+-}
+-
+-static int iw_set_encode(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *data,
+- char *extra)
+-{
+- return ieee80211_wx_set_encode(zd_netdev_ieee80211(netdev), info,
+- data, extra);
+-}
+-
+-static int iw_get_encode(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *data,
+- char *extra)
+-{
+- return ieee80211_wx_get_encode(zd_netdev_ieee80211(netdev), info,
+- data, extra);
+-}
+-
+-static int iw_set_encodeext(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *data,
+- char *extra)
+-{
+- return ieee80211_wx_set_encodeext(zd_netdev_ieee80211(netdev), info,
+- data, extra);
+-}
+-
+-static int iw_get_encodeext(struct net_device *netdev,
+- struct iw_request_info *info,
+- union iwreq_data *data,
+- char *extra)
+-{
+- return ieee80211_wx_get_encodeext(zd_netdev_ieee80211(netdev), info,
+- data, extra);
+-}
+-
+-#define WX(x) [(x)-SIOCIWFIRST]
+-
+-static const iw_handler zd_standard_iw_handlers[] = {
+- WX(SIOCGIWNAME) = iw_get_name,
+- WX(SIOCGIWNICKN) = iw_get_nick,
+- WX(SIOCSIWFREQ) = iw_set_freq,
+- WX(SIOCGIWFREQ) = iw_get_freq,
+- WX(SIOCSIWMODE) = iw_set_mode,
+- WX(SIOCGIWMODE) = iw_get_mode,
+- WX(SIOCGIWRANGE) = iw_get_range,
+- WX(SIOCSIWENCODE) = iw_set_encode,
+- WX(SIOCGIWENCODE) = iw_get_encode,
+- WX(SIOCSIWENCODEEXT) = iw_set_encodeext,
+- WX(SIOCGIWENCODEEXT) = iw_get_encodeext,
+- WX(SIOCSIWAUTH) = ieee80211_wx_set_auth,
+- WX(SIOCGIWAUTH) = ieee80211_wx_get_auth,
+- WX(SIOCSIWSCAN) = ieee80211softmac_wx_trigger_scan,
+- WX(SIOCGIWSCAN) = ieee80211softmac_wx_get_scan_results,
+- WX(SIOCSIWESSID) = ieee80211softmac_wx_set_essid,
+- WX(SIOCGIWESSID) = ieee80211softmac_wx_get_essid,
+- WX(SIOCSIWAP) = ieee80211softmac_wx_set_wap,
+- WX(SIOCGIWAP) = ieee80211softmac_wx_get_wap,
+- WX(SIOCSIWRATE) = ieee80211softmac_wx_set_rate,
+- WX(SIOCGIWRATE) = ieee80211softmac_wx_get_rate,
+- WX(SIOCSIWGENIE) = ieee80211softmac_wx_set_genie,
+- WX(SIOCGIWGENIE) = ieee80211softmac_wx_get_genie,
+- WX(SIOCSIWMLME) = ieee80211softmac_wx_set_mlme,
+-};
+-
+-static const struct iw_handler_def iw_handler_def = {
+- .standard = zd_standard_iw_handlers,
+- .num_standard = ARRAY_SIZE(zd_standard_iw_handlers),
+- .private = zd_priv_handler,
+- .num_private = ARRAY_SIZE(zd_priv_handler),
+- .private_args = zd_priv_args,
+- .num_private_args = ARRAY_SIZE(zd_priv_args),
+- .get_wireless_stats = zd_mac_get_wireless_stats,
+-};
+-
+-struct net_device *zd_netdev_alloc(struct usb_interface *intf)
+-{
+- int r;
+- struct net_device *netdev;
+- struct zd_mac *mac;
+-
+- netdev = alloc_ieee80211softmac(sizeof(struct zd_mac));
+- if (!netdev) {
+- dev_dbg_f(&intf->dev, "out of memory\n");
+- return NULL;
- }
-- } else { /* Terminal to AP */
-- if (pmac->frame_ctl_2 & FC2_TO_DS) {
-- memcpy(destaddr, pmac->addr_3, ADDRLEN);
-- memcpy(srcaddr, pmac->addr_2, ADDRLEN);
-- } else { /* Adhoc */
-- memcpy(destaddr, pmac->addr_1, ADDRLEN);
-- memcpy(srcaddr, pmac->addr_2, ADDRLEN);
+-
+- mac = zd_netdev_mac(netdev);
+- r = zd_mac_init(mac, netdev, intf);
+- if (r) {
+- usb_set_intfdata(intf, NULL);
+- free_ieee80211(netdev);
+- return NULL;
- }
-- }
-+ memcpy(destaddr, ieee80211_get_DA(pmac), ADDRLEN);
-+ memcpy(srcaddr, ieee80211_get_SA(pmac), ADDRLEN);
+-
+- SET_NETDEV_DEV(netdev, &intf->dev);
+-
+- dev_dbg_f(&intf->dev, "netdev->flags %#06hx\n", netdev->flags);
+- dev_dbg_f(&intf->dev, "netdev->features %#010lx\n", netdev->features);
+-
+- netdev->open = zd_mac_open;
+- netdev->stop = zd_mac_stop;
+- /* netdev->get_stats = */
+- netdev->set_multicast_list = zd_mac_set_multicast_list;
+- netdev->set_mac_address = zd_mac_set_mac_address;
+- netdev->wireless_handlers = &iw_handler_def;
+- /* netdev->ethtool_ops = */
+-
+- return netdev;
+-}
+-
+-void zd_netdev_free(struct net_device *netdev)
+-{
+- if (!netdev)
+- return;
+-
+- zd_mac_clear(zd_netdev_mac(netdev));
+- free_ieee80211(netdev);
+-}
+-
+-void zd_netdev_disconnect(struct net_device *netdev)
+-{
+- unregister_netdev(netdev);
+-}
+diff --git a/drivers/net/wireless/zd1211rw/zd_netdev.h b/drivers/net/wireless/zd1211rw/zd_netdev.h
+deleted file mode 100644
+index 374a957..0000000
+--- a/drivers/net/wireless/zd1211rw/zd_netdev.h
++++ /dev/null
+@@ -1,45 +0,0 @@
+-/* zd_netdev.h: Header for net device related functions.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- */
+-
+-#ifndef _ZD_NETDEV_H
+-#define _ZD_NETDEV_H
+-
+-#include <linux/usb.h>
+-#include <linux/netdevice.h>
+-#include <net/ieee80211.h>
+-
+-#define ZD_PRIV_SET_REGDOMAIN (SIOCIWFIRSTPRIV)
+-#define ZD_PRIV_GET_REGDOMAIN (SIOCIWFIRSTPRIV+1)
+-
+-static inline struct ieee80211_device *zd_netdev_ieee80211(
+- struct net_device *ndev)
+-{
+- return netdev_priv(ndev);
+-}
+-
+-static inline struct net_device *zd_ieee80211_to_netdev(
+- struct ieee80211_device *ieee)
+-{
+- return ieee->dev;
+-}
+-
+-struct net_device *zd_netdev_alloc(struct usb_interface *intf);
+-void zd_netdev_free(struct net_device *netdev);
+-
+-void zd_netdev_disconnect(struct net_device *netdev);
+-
+-#endif /* _ZD_NETDEV_H */
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf.c b/drivers/net/wireless/zd1211rw/zd_rf.c
+index abe5d38..ec41293 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf.c
++++ b/drivers/net/wireless/zd1211rw/zd_rf.c
+@@ -1,4 +1,7 @@
+-/* zd_rf.c
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf.h b/drivers/net/wireless/zd1211rw/zd_rf.h
+index 30502f2..79dc103 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf.h
++++ b/drivers/net/wireless/zd1211rw/zd_rf.h
+@@ -1,4 +1,7 @@
+-/* zd_rf.h
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf_al2230.c b/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
+index 006774d..74a8f7a 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
++++ b/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
+@@ -1,4 +1,7 @@
+-/* zd_rf_al2230.c: Functions for the AL2230 RF controller
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c b/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
+index 73d0bb2..65095d6 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
++++ b/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
+@@ -1,4 +1,7 @@
+-/* zd_rf_al7230b.c: Functions for the AL7230B RF controller
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c b/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
+index cc70d40..0597d86 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
++++ b/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
+@@ -1,4 +1,7 @@
+-/* zd_rf_rfmd.c: Functions for the RFMD RF controller
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c b/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
+index 857dcf3..439799b 100644
+--- a/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
++++ b/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
+@@ -1,4 +1,7 @@
+-/* zd_rf_uw2453.c: Functions for the UW2453 RF controller
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -403,7 +406,7 @@ static int uw2453_init_hw(struct zd_rf *rf)
+ if (r)
+ return r;
- #ifdef PCMCIA_DEBUG
- if (pc_debug > 3) {
-@@ -2349,33 +2335,34 @@ static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
- printk(KERN_DEBUG "skb->data before untranslate");
- for (i=0;i<64;i++)
- printk("%02x ",skb->data[i]);
-- printk("\n" KERN_DEBUG "type = %08x, xsap = %08x, org = %08x\n",
-- type,xsap,org);
-+ printk("\n" KERN_DEBUG "type = %08x, xsap = %02x%02x%02x, org = %02x02x02x\n",
-+ ntohs(type),
-+ psnap->dsap, psnap->ssap, psnap->ctrl,
-+ psnap->org[0], psnap->org[1], psnap->org[2]);
- printk(KERN_DEBUG "untranslate skb->data = %p\n",skb->data);
- }
- #endif
+- if (!intr_status & 0xf) {
++ if (!(intr_status & 0xf)) {
+ dev_dbg_f(zd_chip_dev(chip),
+ "PLL locked on configuration %d\n", i);
+ found_config = i;
+diff --git a/drivers/net/wireless/zd1211rw/zd_usb.c b/drivers/net/wireless/zd1211rw/zd_usb.c
+index c755b69..7942b15 100644
+--- a/drivers/net/wireless/zd1211rw/zd_usb.c
++++ b/drivers/net/wireless/zd1211rw/zd_usb.c
+@@ -1,4 +1,8 @@
+-/* zd_usb.c
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
++ * Copyright (C) 2006-2007 Michael Wu <flamingice at sourmilk.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -17,18 +21,16 @@
-- if ( xsap != SNAP_ID) {
-+ if (psnap->dsap != 0xaa || psnap->ssap != 0xaa || psnap->ctrl != 3) {
- /* not a snap type so leave it alone */
-- DEBUG(3,"ray_cs untranslate NOT SNAP %x\n", *(unsigned int *)psnap & 0x00ffffff);
-+ DEBUG(3,"ray_cs untranslate NOT SNAP %02x %02x %02x\n",
-+ psnap->dsap, psnap->ssap, psnap->ctrl);
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/module.h>
+ #include <linux/firmware.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+ #include <linux/skbuff.h>
+ #include <linux/usb.h>
+ #include <linux/workqueue.h>
+-#include <net/ieee80211.h>
++#include <net/mac80211.h>
+ #include <asm/unaligned.h>
- delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
- peth = (struct ethhdr *)(skb->data + delta);
- peth->h_proto = htons(len - RX_MAC_HEADER_LENGTH);
- }
- else { /* Its a SNAP */
-- if (org == BRIDGE_ENCAP) { /* EtherII and nuke the LLC */
-+ if (memcmp(psnap->org, org_bridge, 3) == 0) { /* EtherII and nuke the LLC */
- DEBUG(3,"ray_cs untranslate Bridge encap\n");
- delta = RX_MAC_HEADER_LENGTH
- + sizeof(struct snaphdr_t) - ETH_HLEN;
- peth = (struct ethhdr *)(skb->data + delta);
- peth->h_proto = type;
-- }
-- else {
-- if (org == RFC1042_ENCAP) {
-- switch (type) {
-- case RAY_IPX_TYPE:
-- case APPLEARP_TYPE:
-+ } else if (memcmp(psnap->org, org_1042, 3) == 0) {
-+ switch (ntohs(type)) {
-+ case ETH_P_IPX:
-+ case ETH_P_AARP:
- DEBUG(3,"ray_cs untranslate RFC IPX/AARP\n");
- delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
- peth = (struct ethhdr *)(skb->data + delta);
-@@ -2389,14 +2376,12 @@ static void untranslate(ray_dev_t *local, struct sk_buff *skb, int len)
- peth->h_proto = type;
- break;
- }
-- }
-- else {
-+ } else {
- printk("ray_cs untranslate very confused by packet\n");
- delta = RX_MAC_HEADER_LENGTH - ETH_HLEN;
- peth = (struct ethhdr *)(skb->data + delta);
- peth->h_proto = type;
-- }
-- }
-+ }
- }
- /* TBD reserve skb_reserve(skb, delta); */
- skb_pull(skb, delta);
-diff --git a/drivers/net/wireless/rt2x00/rt2400pci.c b/drivers/net/wireless/rt2x00/rt2400pci.c
-index 31c1dd2..d6cba13 100644
---- a/drivers/net/wireless/rt2x00/rt2400pci.c
-+++ b/drivers/net/wireless/rt2x00/rt2400pci.c
-@@ -24,11 +24,6 @@
- Supported chipsets: RT2460.
- */
+ #include "zd_def.h"
+-#include "zd_netdev.h"
+ #include "zd_mac.h"
+ #include "zd_usb.h"
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2400pci"
+@@ -55,6 +57,7 @@ static struct usb_device_id usb_ids[] = {
+ { USB_DEVICE(0x13b1, 0x001e), .driver_info = DEVICE_ZD1211 },
+ { USB_DEVICE(0x0586, 0x3407), .driver_info = DEVICE_ZD1211 },
+ { USB_DEVICE(0x129b, 0x1666), .driver_info = DEVICE_ZD1211 },
++ { USB_DEVICE(0x157e, 0x300a), .driver_info = DEVICE_ZD1211 },
+ /* ZD1211B */
+ { USB_DEVICE(0x0ace, 0x1215), .driver_info = DEVICE_ZD1211B },
+ { USB_DEVICE(0x157e, 0x300d), .driver_info = DEVICE_ZD1211B },
+@@ -353,18 +356,6 @@ out:
+ spin_unlock(&intr->lock);
+ }
+
+-static inline void handle_retry_failed_int(struct urb *urb)
+-{
+- struct zd_usb *usb = urb->context;
+- struct zd_mac *mac = zd_usb_to_mac(usb);
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-
- #include <linux/delay.h>
- #include <linux/etherdevice.h>
- #include <linux/init.h>
-@@ -54,7 +49,7 @@
- * the access attempt is considered to have failed,
- * and we will print an error.
- */
--static u32 rt2400pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
-+static u32 rt2400pci_bbp_check(struct rt2x00_dev *rt2x00dev)
+- ieee->stats.tx_errors++;
+- ieee->ieee_stats.tx_retry_limit_exceeded++;
+- dev_dbg_f(urb_dev(urb), "retry failed interrupt\n");
+-}
+-
+-
+ static void int_urb_complete(struct urb *urb)
{
- u32 reg;
- unsigned int i;
-@@ -69,7 +64,7 @@ static u32 rt2400pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
- return reg;
+ int r;
+@@ -400,7 +391,7 @@ static void int_urb_complete(struct urb *urb)
+ handle_regs_int(urb);
+ break;
+ case USB_INT_ID_RETRY_FAILED:
+- handle_retry_failed_int(urb);
++ zd_mac_tx_failed(zd_usb_to_hw(urb->context));
+ break;
+ default:
+ dev_dbg_f(urb_dev(urb), "error: urb %p unknown id %x\n", urb,
+@@ -530,14 +521,10 @@ static void handle_rx_packet(struct zd_usb *usb, const u8 *buffer,
+ unsigned int length)
+ {
+ int i;
+- struct zd_mac *mac = zd_usb_to_mac(usb);
+ const struct rx_length_info *length_info;
+
+ if (length < sizeof(struct rx_length_info)) {
+ /* It's not a complete packet anyhow. */
+- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
+- ieee->stats.rx_errors++;
+- ieee->stats.rx_length_errors++;
+ return;
+ }
+ length_info = (struct rx_length_info *)
+@@ -561,13 +548,13 @@ static void handle_rx_packet(struct zd_usb *usb, const u8 *buffer,
+ n = l+k;
+ if (n > length)
+ return;
+- zd_mac_rx_irq(mac, buffer+l, k);
++ zd_mac_rx(zd_usb_to_hw(usb), buffer+l, k);
+ if (i >= 2)
+ return;
+ l = (n+3) & ~3;
+ }
+ } else {
+- zd_mac_rx_irq(mac, buffer, length);
++ zd_mac_rx(zd_usb_to_hw(usb), buffer, length);
+ }
}
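[editor's aside: the multi-frame branch of handle_rx_packet() above advances through the transfer buffer with "l = (n+3) & ~3;", rounding each frame's end offset up to a 4-byte boundary before reading the next frame. A minimal userspace sketch of that rounding; the helper name align4 is invented for illustration and is not driver code:]

```c
#include <assert.h>

/* Round an offset up to the next multiple of 4, mirroring the
 * "l = (n + 3) & ~3;" step in handle_rx_packet(). Standalone
 * illustration only. */
unsigned int align4(unsigned int n)
{
	return (n + 3) & ~3u;
}
```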
--static void rt2400pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2400pci_bbp_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u8 value)
- {
- u32 reg;
-@@ -95,7 +90,7 @@ static void rt2400pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
- rt2x00pci_register_write(rt2x00dev, BBPCSR, reg);
+@@ -629,7 +616,7 @@ resubmit:
+ usb_submit_urb(urb, GFP_ATOMIC);
}
--static void rt2400pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
-+static void rt2400pci_bbp_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u8 *value)
+-static struct urb *alloc_urb(struct zd_usb *usb)
++static struct urb *alloc_rx_urb(struct zd_usb *usb)
{
- u32 reg;
-@@ -132,7 +127,7 @@ static void rt2400pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
- *value = rt2x00_get_field32(reg, BBPCSR_VALUE);
+ struct usb_device *udev = zd_usb_to_usbdev(usb);
+ struct urb *urb;
+@@ -653,7 +640,7 @@ static struct urb *alloc_urb(struct zd_usb *usb)
+ return urb;
}
--static void rt2400pci_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2400pci_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u32 value)
+-static void free_urb(struct urb *urb)
++static void free_rx_urb(struct urb *urb)
{
- u32 reg;
-@@ -195,13 +190,13 @@ static void rt2400pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
+ if (!urb)
+ return;
+@@ -671,11 +658,11 @@ int zd_usb_enable_rx(struct zd_usb *usb)
+ dev_dbg_f(zd_usb_dev(usb), "\n");
--static void rt2400pci_read_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2400pci_read_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
- {
- rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
- }
+ r = -ENOMEM;
+- urbs = kcalloc(URBS_COUNT, sizeof(struct urb *), GFP_KERNEL);
++ urbs = kcalloc(RX_URBS_COUNT, sizeof(struct urb *), GFP_KERNEL);
+ if (!urbs)
+ goto error;
+- for (i = 0; i < URBS_COUNT; i++) {
+- urbs[i] = alloc_urb(usb);
++ for (i = 0; i < RX_URBS_COUNT; i++) {
++ urbs[i] = alloc_rx_urb(usb);
+ if (!urbs[i])
+ goto error;
+ }
+@@ -688,10 +675,10 @@ int zd_usb_enable_rx(struct zd_usb *usb)
+ goto error;
+ }
+ rx->urbs = urbs;
+- rx->urbs_count = URBS_COUNT;
++ rx->urbs_count = RX_URBS_COUNT;
+ spin_unlock_irq(&rx->lock);
--static void rt2400pci_write_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2400pci_write_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
- {
- rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
-@@ -285,7 +280,7 @@ static void rt2400pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
- */
- rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
- rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 1);
-- rt2x00_set_field32(&reg, CSR14_TBCN, 1);
-+ rt2x00_set_field32(&reg, CSR14_TBCN, (tsf_sync == TSF_SYNC_BEACON));
- rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
- rt2x00_set_field32(&reg, CSR14_TSF_SYNC, tsf_sync);
- rt2x00pci_register_write(rt2x00dev, CSR14, reg);
-@@ -397,7 +392,7 @@ static void rt2400pci_config_txpower(struct rt2x00_dev *rt2x00dev, int txpower)
- }
+- for (i = 0; i < URBS_COUNT; i++) {
++ for (i = 0; i < RX_URBS_COUNT; i++) {
+ r = usb_submit_urb(urbs[i], GFP_KERNEL);
+ if (r)
+ goto error_submit;
+@@ -699,7 +686,7 @@ int zd_usb_enable_rx(struct zd_usb *usb)
- static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
-- int antenna_tx, int antenna_rx)
-+ struct antenna_setup *ant)
- {
- u8 r1;
- u8 r4;
-@@ -408,14 +403,20 @@ static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
- /*
- * Configure the TX antenna.
- */
-- switch (antenna_tx) {
-- case ANTENNA_SW_DIVERSITY:
-+ switch (ant->tx) {
- case ANTENNA_HW_DIVERSITY:
- rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 1);
- break;
- case ANTENNA_A:
- rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 0);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitly. However we are not going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r1, BBP_R1_TX_ANTENNA, 2);
- break;
-@@ -424,14 +425,20 @@ static void rt2400pci_config_antenna(struct rt2x00_dev *rt2x00dev,
- /*
- * Configure the RX antenna.
- */
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
- break;
- case ANTENNA_A:
- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 0);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitly. However we are not going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
- break;
-@@ -485,9 +492,7 @@ static void rt2400pci_config(struct rt2x00_dev *rt2x00dev,
- rt2400pci_config_txpower(rt2x00dev,
- libconf->conf->power_level);
- if (flags & CONFIG_UPDATE_ANTENNA)
-- rt2400pci_config_antenna(rt2x00dev,
-- libconf->conf->antenna_sel_tx,
-- libconf->conf->antenna_sel_rx);
-+ rt2400pci_config_antenna(rt2x00dev, &libconf->ant);
- if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
- rt2400pci_config_duration(rt2x00dev, libconf);
+ return 0;
+ error_submit:
+- for (i = 0; i < URBS_COUNT; i++) {
++ for (i = 0; i < RX_URBS_COUNT; i++) {
+ usb_kill_urb(urbs[i]);
+ }
+ spin_lock_irq(&rx->lock);
+@@ -708,8 +695,8 @@ error_submit:
+ spin_unlock_irq(&rx->lock);
+ error:
+ if (urbs) {
+- for (i = 0; i < URBS_COUNT; i++)
+- free_urb(urbs[i]);
++ for (i = 0; i < RX_URBS_COUNT; i++)
++ free_rx_urb(urbs[i]);
+ }
+ return r;
}
-@@ -514,18 +519,10 @@ static void rt2400pci_enable_led(struct rt2x00_dev *rt2x00dev)
+@@ -731,7 +718,7 @@ void zd_usb_disable_rx(struct zd_usb *usb)
- rt2x00_set_field32(&reg, LEDCSR_ON_PERIOD, 70);
- rt2x00_set_field32(&reg, LEDCSR_OFF_PERIOD, 30);
--
-- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 0);
-- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 0);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
-- } else {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
-- }
--
-+ rt2x00_set_field32(&reg, LEDCSR_LINK,
-+ (rt2x00dev->led_mode != LED_MODE_ASUS));
-+ rt2x00_set_field32(&reg, LEDCSR_ACTIVITY,
-+ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
- rt2x00pci_register_write(rt2x00dev, LEDCSR, reg);
+ for (i = 0; i < count; i++) {
+ usb_kill_urb(urbs[i]);
+- free_urb(urbs[i]);
++ free_rx_urb(urbs[i]);
+ }
+ kfree(urbs);
+
+@@ -741,9 +728,142 @@ void zd_usb_disable_rx(struct zd_usb *usb)
+ spin_unlock_irqrestore(&rx->lock, flags);
}
-@@ -542,7 +539,8 @@ static void rt2400pci_disable_led(struct rt2x00_dev *rt2x00dev)
- /*
- * Link tuning
- */
--static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev)
-+static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual)
++/**
++ * zd_usb_disable_tx - disable transmission
++ * @usb: the zd1211rw-private USB structure
++ *
++ * Frees all URBs in the free list and marks the transmission as disabled.
++ */
++void zd_usb_disable_tx(struct zd_usb *usb)
++{
++ struct zd_usb_tx *tx = &usb->tx;
++ unsigned long flags;
++ struct list_head *pos, *n;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ list_for_each_safe(pos, n, &tx->free_urb_list) {
++ list_del(pos);
++ usb_free_urb(list_entry(pos, struct urb, urb_list));
++ }
++ tx->enabled = 0;
++ tx->submitted_urbs = 0;
++ /* The stopped state is ignored, relying on ieee80211_wake_queues()
++ * in a potentially following zd_usb_enable_tx().
++ */
++ spin_unlock_irqrestore(&tx->lock, flags);
++}
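[editor's aside: zd_usb_disable_tx() above relies on list_for_each_safe(), which keeps a lookahead pointer so the entry being visited can be unlinked and freed without breaking the walk. The same idea sketched in plain userspace C with a hypothetical singly linked list, not the kernel's struct list_head API:]

```c
#include <stdlib.h>

/* Userspace sketch of the list_for_each_safe() pattern: keep a
 * lookahead pointer so the current node can be freed mid-walk.
 * Hypothetical node type, not the kernel list API. */
struct node {
	struct node *next;
};

int free_all(struct node *head)
{
	struct node *pos, *n;
	int freed = 0;

	for (pos = head; pos; pos = n) {
		n = pos->next;	/* remember the successor first */
		free(pos);
		freed++;
	}
	return freed;
}

/* Build a list of count nodes, then tear it down. */
int build_and_free(int count)
{
	struct node *head = NULL;
	int i;

	for (i = 0; i < count; i++) {
		struct node *n = malloc(sizeof(*n));
		if (!n)
			break;
		n->next = head;
		head = n;
	}
	return free_all(head);
}
```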
++
++/**
++ * zd_usb_enable_tx - enables transmission
++ * @usb: a &struct zd_usb pointer
++ *
++ * This function enables transmission and prepares the &zd_usb_tx data
++ * structure.
++ */
++void zd_usb_enable_tx(struct zd_usb *usb)
++{
++ unsigned long flags;
++ struct zd_usb_tx *tx = &usb->tx;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ tx->enabled = 1;
++ tx->submitted_urbs = 0;
++ ieee80211_wake_queues(zd_usb_to_hw(usb));
++ tx->stopped = 0;
++ spin_unlock_irqrestore(&tx->lock, flags);
++}
++
++/**
++ * alloc_tx_urb - provides a tx URB
++ * @usb: a &struct zd_usb pointer
++ *
++ * If possible, takes a URB from the free list in usb->tx; otherwise
++ * allocates a new one.
++ */
++static struct urb *alloc_tx_urb(struct zd_usb *usb)
++{
++ struct zd_usb_tx *tx = &usb->tx;
++ unsigned long flags;
++ struct list_head *entry;
++ struct urb *urb;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ if (list_empty(&tx->free_urb_list)) {
++ urb = usb_alloc_urb(0, GFP_ATOMIC);
++ goto out;
++ }
++ entry = tx->free_urb_list.next;
++ list_del(entry);
++ urb = list_entry(entry, struct urb, urb_list);
++out:
++ spin_unlock_irqrestore(&tx->lock, flags);
++ return urb;
++}
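[editor's aside: alloc_tx_urb() above reuses completed URBs from a per-device free list and only falls back to usb_alloc_urb() when the list is empty. A rough userspace sketch of that caching idea; the obj/cache_* names are invented and locking is deliberately omitted:]

```c
#include <stdlib.h>

/* Free-list object cache in the spirit of alloc_tx_urb()/free_tx_urb():
 * released objects are parked on a list and handed out again before the
 * allocator is consulted. Single-threaded sketch, no locking. */
struct obj {
	struct obj *next_free;
};

static struct obj *free_list;

struct obj *cache_get(void)
{
	struct obj *o = free_list;

	if (o) {
		free_list = o->next_free;	/* reuse a parked object */
		return o;
	}
	return malloc(sizeof(*o));		/* free list empty: allocate */
}

void cache_put(struct obj *o)
{
	o->next_free = free_list;		/* park for later reuse */
	free_list = o;
}

/* Returns 1 if an object put back into the cache is handed out again. */
int reuse_works(void)
{
	struct obj *a = cache_get();
	struct obj *b;

	if (!a)
		return 0;
	cache_put(a);
	b = cache_get();
	return b == a;
}
```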
++
++/**
++ * free_tx_urb - frees a used tx URB
++ * @usb: a &struct zd_usb pointer
++ * @urb: URB to be freed
++ *
++ * Frees the transmission URB, which means putting it back on the
++ * free URB list.
++ */
++static void free_tx_urb(struct zd_usb *usb, struct urb *urb)
++{
++ struct zd_usb_tx *tx = &usb->tx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ if (!tx->enabled) {
++ usb_free_urb(urb);
++ goto out;
++ }
++ list_add(&urb->urb_list, &tx->free_urb_list);
++out:
++ spin_unlock_irqrestore(&tx->lock, flags);
++}
++
++static void tx_dec_submitted_urbs(struct zd_usb *usb)
++{
++ struct zd_usb_tx *tx = &usb->tx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ --tx->submitted_urbs;
++ if (tx->stopped && tx->submitted_urbs <= ZD_USB_TX_LOW) {
++ ieee80211_wake_queues(zd_usb_to_hw(usb));
++ tx->stopped = 0;
++ }
++ spin_unlock_irqrestore(&tx->lock, flags);
++}
++
++static void tx_inc_submitted_urbs(struct zd_usb *usb)
++{
++ struct zd_usb_tx *tx = &usb->tx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&tx->lock, flags);
++ ++tx->submitted_urbs;
++ if (!tx->stopped && tx->submitted_urbs > ZD_USB_TX_HIGH) {
++ ieee80211_stop_queues(zd_usb_to_hw(usb));
++ tx->stopped = 1;
++ }
++ spin_unlock_irqrestore(&tx->lock, flags);
++}
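[editor's aside: the tx_inc/tx_dec_submitted_urbs() pair above implements hysteresis: the queues stop once more than ZD_USB_TX_HIGH (5) URBs are in flight and wake only after the count drains to ZD_USB_TX_LOW (2), so the driver does not flap at the boundary. A self-contained sketch of that watermark logic; struct and function names are invented, and a flag stands in for the mac80211 stop/wake calls:]

```c
/* High/low watermark throttle mirroring ZD_USB_TX_HIGH/ZD_USB_TX_LOW.
 * "stopped" stands in for ieee80211_stop_queues()/wake_queues(). */
#define TX_HIGH 5
#define TX_LOW  2

struct throttle {
	int in_flight;
	int stopped;
};

void submit_one(struct throttle *t)
{
	++t->in_flight;
	if (!t->stopped && t->in_flight > TX_HIGH)
		t->stopped = 1;		/* queue would be stopped here */
}

void complete_one(struct throttle *t)
{
	--t->in_flight;
	if (t->stopped && t->in_flight <= TX_LOW)
		t->stopped = 0;		/* queue would be woken here */
}

/* Drive some submissions then completions; return the final state. */
int drive(int submits, int completions)
{
	struct throttle t = { 0, 0 };
	int i;

	for (i = 0; i < submits; i++)
		submit_one(&t);
	for (i = 0; i < completions; i++)
		complete_one(&t);
	return t.stopped;
}
```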
++
++/**
++ * tx_urb_complete - completes the execution of a URB
++ * @urb: a URB
++ *
++ * This function is called if the URB has been transferred to a device or an
++ * error has happened.
++ */
+ static void tx_urb_complete(struct urb *urb)
{
- u32 reg;
- u8 bbp;
-@@ -551,13 +549,13 @@ static void rt2400pci_link_stats(struct rt2x00_dev *rt2x00dev)
- * Update FCS error count from register.
- */
- rt2x00pci_register_read(rt2x00dev, CNT0, &reg);
-- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
-+ qual->rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
+ int r;
++ struct sk_buff *skb;
++ struct zd_tx_skb_control_block *cb;
++ struct zd_usb *usb;
- /*
- * Update False CCA count from register.
- */
- rt2400pci_bbp_read(rt2x00dev, 39, &bbp);
-- rt2x00dev->link.false_cca = bbp;
-+ qual->false_cca = bbp;
+ switch (urb->status) {
+ case 0:
+@@ -761,9 +881,12 @@ static void tx_urb_complete(struct urb *urb)
+ goto resubmit;
+ }
+ free_urb:
+- usb_buffer_free(urb->dev, urb->transfer_buffer_length,
+- urb->transfer_buffer, urb->transfer_dma);
+- usb_free_urb(urb);
++ skb = (struct sk_buff *)urb->context;
++ zd_mac_tx_to_dev(skb, urb->status);
++ cb = (struct zd_tx_skb_control_block *)skb->cb;
++ usb = &zd_hw_mac(cb->hw)->chip.usb;
++ free_tx_urb(usb, urb);
++ tx_dec_submitted_urbs(usb);
+ return;
+ resubmit:
+ r = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -773,43 +896,40 @@ resubmit:
+ }
}
- static void rt2400pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
-@@ -582,10 +580,10 @@ static void rt2400pci_link_tuner(struct rt2x00_dev *rt2x00dev)
- */
- rt2400pci_bbp_read(rt2x00dev, 13, &reg);
-
-- if (rt2x00dev->link.false_cca > 512 && reg < 0x20) {
-+ if (rt2x00dev->link.qual.false_cca > 512 && reg < 0x20) {
- rt2400pci_bbp_write(rt2x00dev, 13, ++reg);
- rt2x00dev->link.vgc_level = reg;
-- } else if (rt2x00dev->link.false_cca < 100 && reg > 0x08) {
-+ } else if (rt2x00dev->link.qual.false_cca < 100 && reg > 0x08) {
- rt2400pci_bbp_write(rt2x00dev, 13, --reg);
- rt2x00dev->link.vgc_level = reg;
- }
-@@ -594,65 +592,43 @@ static void rt2400pci_link_tuner(struct rt2x00_dev *rt2x00dev)
- /*
- * Initialization functions.
+-/* Puts the frame on the USB endpoint. It doesn't wait for
+- * completion. The frame must contain the control set.
++/**
++ * zd_usb_tx: initiates transfer of a frame to the device
++ *
++ * @usb: the zd1211rw-private USB structure
++ * @skb: a &struct sk_buff pointer
++ *
++ * This function transmits a frame to the device. It doesn't wait for
++ * completion. The frame must contain the control set and have all the
++ * control set information available.
++ *
++ * The function returns 0 if the transfer has been successfully initiated.
*/
--static void rt2400pci_init_rxring(struct rt2x00_dev *rt2x00dev)
-+static void rt2400pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+-int zd_usb_tx(struct zd_usb *usb, const u8 *frame, unsigned int length)
++int zd_usb_tx(struct zd_usb *usb, struct sk_buff *skb)
{
-- struct data_ring *ring = rt2x00dev->rx;
-- struct data_desc *rxd;
-- unsigned int i;
-+ __le32 *rxd = entry->priv;
- u32 word;
+ int r;
+ struct usb_device *udev = zd_usb_to_usbdev(usb);
+ struct urb *urb;
+- void *buffer;
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- rxd = ring->entry[i].priv;
--
-- rt2x00_desc_read(rxd, 2, &word);
-- rt2x00_set_field32(&word, RXD_W2_BUFFER_LENGTH,
-- ring->data_size);
-- rt2x00_desc_write(rxd, 2, word);
--
-- rt2x00_desc_read(rxd, 1, &word);
-- rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(rxd, 1, word);
-+ rt2x00_desc_read(rxd, 2, &word);
-+ rt2x00_set_field32(&word, RXD_W2_BUFFER_LENGTH, entry->ring->data_size);
-+ rt2x00_desc_write(rxd, 2, word);
+- urb = usb_alloc_urb(0, GFP_ATOMIC);
++ urb = alloc_tx_urb(usb);
+ if (!urb) {
+ r = -ENOMEM;
+ goto out;
+ }
-- rt2x00_desc_read(rxd, 0, &word);
-- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-- rt2x00_desc_write(rxd, 0, word);
+- buffer = usb_buffer_alloc(zd_usb_to_usbdev(usb), length, GFP_ATOMIC,
+- &urb->transfer_dma);
+- if (!buffer) {
+- r = -ENOMEM;
+- goto error_free_urb;
- }
-+ rt2x00_desc_read(rxd, 1, &word);
-+ rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS, entry->data_dma);
-+ rt2x00_desc_write(rxd, 1, word);
+- memcpy(buffer, frame, length);
+-
+ usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, EP_DATA_OUT),
+- buffer, length, tx_urb_complete, NULL);
+- urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++ skb->data, skb->len, tx_urb_complete, skb);
-- rt2x00_ring_index_clear(rt2x00dev->rx);
-+ rt2x00_desc_read(rxd, 0, &word);
-+ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-+ rt2x00_desc_write(rxd, 0, word);
+ r = usb_submit_urb(urb, GFP_ATOMIC);
+ if (r)
+ goto error;
++ tx_inc_submitted_urbs(usb);
+ return 0;
+ error:
+- usb_buffer_free(zd_usb_to_usbdev(usb), length, buffer,
+- urb->transfer_dma);
+-error_free_urb:
+- usb_free_urb(urb);
++ free_tx_urb(usb, urb);
+ out:
+ return r;
}
+@@ -838,16 +958,20 @@ static inline void init_usb_rx(struct zd_usb *usb)
--static void rt2400pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
-+static void rt2400pci_init_txentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+ static inline void init_usb_tx(struct zd_usb *usb)
{
-- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
-- struct data_desc *txd;
-- unsigned int i;
-+ __le32 *txd = entry->priv;
- u32 word;
+- /* FIXME: at this point we will allocate a fixed number of urb's for
+- * use in a cyclic scheme */
++ struct zd_usb_tx *tx = &usb->tx;
++ spin_lock_init(&tx->lock);
++ tx->enabled = 0;
++ tx->stopped = 0;
++ INIT_LIST_HEAD(&tx->free_urb_list);
++ tx->submitted_urbs = 0;
+ }
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- txd = ring->entry[i].priv;
--
-- rt2x00_desc_read(txd, 1, &word);
-- rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(txd, 1, word);
--
-- rt2x00_desc_read(txd, 2, &word);
-- rt2x00_set_field32(&word, TXD_W2_BUFFER_LENGTH,
-- ring->data_size);
-- rt2x00_desc_write(txd, 2, word);
-+ rt2x00_desc_read(txd, 1, &word);
-+ rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS, entry->data_dma);
-+ rt2x00_desc_write(txd, 1, word);
+-void zd_usb_init(struct zd_usb *usb, struct net_device *netdev,
++void zd_usb_init(struct zd_usb *usb, struct ieee80211_hw *hw,
+ struct usb_interface *intf)
+ {
+ memset(usb, 0, sizeof(*usb));
+ usb->intf = usb_get_intf(intf);
+- usb_set_intfdata(usb->intf, netdev);
++ usb_set_intfdata(usb->intf, hw);
+ init_usb_interrupt(usb);
+ init_usb_tx(usb);
+ init_usb_rx(usb);
+@@ -973,7 +1097,7 @@ int zd_usb_init_hw(struct zd_usb *usb)
+ return r;
+ }
-- rt2x00_desc_read(txd, 0, &word);
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-- rt2x00_desc_write(txd, 0, word);
-- }
-+ rt2x00_desc_read(txd, 2, &word);
-+ rt2x00_set_field32(&word, TXD_W2_BUFFER_LENGTH, entry->ring->data_size);
-+ rt2x00_desc_write(txd, 2, word);
+- r = zd_mac_init_hw(mac);
++ r = zd_mac_init_hw(mac->hw);
+ if (r) {
+ dev_dbg_f(zd_usb_dev(usb),
+ "couldn't initialize mac. Error number %d\n", r);
+@@ -987,9 +1111,9 @@ int zd_usb_init_hw(struct zd_usb *usb)
+ static int probe(struct usb_interface *intf, const struct usb_device_id *id)
+ {
+ int r;
+- struct zd_usb *usb;
+ struct usb_device *udev = interface_to_usbdev(intf);
+- struct net_device *netdev = NULL;
++ struct zd_usb *usb;
++ struct ieee80211_hw *hw = NULL;
-- rt2x00_ring_index_clear(ring);
-+ rt2x00_desc_read(txd, 0, &word);
-+ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-+ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-+ rt2x00_desc_write(txd, 0, word);
- }
+ print_id(udev);
- static int rt2400pci_init_rings(struct rt2x00_dev *rt2x00dev)
-@@ -660,15 +636,6 @@ static int rt2400pci_init_rings(struct rt2x00_dev *rt2x00dev)
- u32 reg;
+@@ -1007,57 +1131,65 @@ static int probe(struct usb_interface *intf, const struct usb_device_id *id)
+ goto error;
+ }
- /*
-- * Initialize rings.
-- */
-- rt2400pci_init_rxring(rt2x00dev);
-- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
-- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
-- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_AFTER_BEACON);
-- rt2400pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
--
-- /*
- * Initialize registers.
- */
- rt2x00pci_register_read(rt2x00dev, TXCSR2, ®);
-@@ -1014,53 +981,37 @@ static int rt2400pci_set_device_state(struct rt2x00_dev *rt2x00dev,
- * TX descriptor initialization
- */
- static void rt2400pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-+ struct sk_buff *skb,
- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
- struct ieee80211_tx_control *control)
- {
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ __le32 *txd = skbdesc->desc;
- u32 word;
-- u32 signal = 0;
-- u32 service = 0;
-- u32 length_high = 0;
-- u32 length_low = 0;
--
-- /*
-- * The PLCP values should be treated as if they
-- * were BBP values.
-- */
-- rt2x00_set_field32(&signal, BBPCSR_VALUE, desc->signal);
-- rt2x00_set_field32(&signal, BBPCSR_REGNUM, 5);
-- rt2x00_set_field32(&signal, BBPCSR_BUSY, 1);
--
-- rt2x00_set_field32(&service, BBPCSR_VALUE, desc->service);
-- rt2x00_set_field32(&service, BBPCSR_REGNUM, 6);
-- rt2x00_set_field32(&service, BBPCSR_BUSY, 1);
--
-- rt2x00_set_field32(&length_high, BBPCSR_VALUE, desc->length_high);
-- rt2x00_set_field32(&length_high, BBPCSR_REGNUM, 7);
-- rt2x00_set_field32(&length_high, BBPCSR_BUSY, 1);
--
-- rt2x00_set_field32(&length_low, BBPCSR_VALUE, desc->length_low);
-- rt2x00_set_field32(&length_low, BBPCSR_REGNUM, 8);
-- rt2x00_set_field32(&length_low, BBPCSR_BUSY, 1);
+- usb_reset_device(interface_to_usbdev(intf));
++ r = usb_reset_device(udev);
++ if (r) {
++ dev_err(&intf->dev,
++ "couldn't reset usb device. Error number %d\n", r);
++ goto error;
++ }
- /*
- * Start writing the descriptor words.
- */
- rt2x00_desc_read(txd, 2, &word);
-- rt2x00_set_field32(&word, TXD_W2_DATABYTE_COUNT, length);
-+ rt2x00_set_field32(&word, TXD_W2_DATABYTE_COUNT, skbdesc->data_len);
- rt2x00_desc_write(txd, 2, word);
+- netdev = zd_netdev_alloc(intf);
+- if (netdev == NULL) {
++ hw = zd_mac_alloc_hw(intf);
++ if (hw == NULL) {
+ r = -ENOMEM;
+ goto error;
+ }
- rt2x00_desc_read(txd, 3, &word);
-- rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL, signal);
-- rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE, service);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL, desc->signal);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL_REGNUM, 5);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SIGNAL_BUSY, 1);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE, desc->service);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE_REGNUM, 6);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_SERVICE_BUSY, 1);
- rt2x00_desc_write(txd, 3, word);
+- usb = &zd_netdev_mac(netdev)->chip.usb;
++ usb = &zd_hw_mac(hw)->chip.usb;
+ usb->is_zd1211b = (id->driver_info == DEVICE_ZD1211B) != 0;
- rt2x00_desc_read(txd, 4, &word);
-- rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_LOW, length_low);
-- rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_HIGH, length_high);
-+ rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_LOW, desc->length_low);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_LOW_REGNUM, 8);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_LOW_BUSY, 1);
-+ rt2x00_set_field32(&word, TXD_W4_PLCP_LENGTH_HIGH, desc->length_high);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_HIGH_REGNUM, 7);
-+ rt2x00_set_field32(&word, TXD_W3_PLCP_LENGTH_HIGH_BUSY, 1);
- rt2x00_desc_write(txd, 4, word);
+- r = zd_mac_preinit_hw(zd_netdev_mac(netdev));
++ r = zd_mac_preinit_hw(hw);
+ if (r) {
+ dev_dbg_f(&intf->dev,
+ "couldn't initialize mac. Error number %d\n", r);
+ goto error;
+ }
- rt2x00_desc_read(txd, 0, &word);
-@@ -1069,7 +1020,7 @@ static void rt2400pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
- test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_ACK,
-- !(control->flags & IEEE80211_TXCTL_NO_ACK));
-+ test_bit(ENTRY_TXD_ACK, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
- test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_RTS,
-@@ -1099,12 +1050,12 @@ static void rt2400pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
+- r = register_netdev(netdev);
++ r = ieee80211_register_hw(hw);
+ if (r) {
+ dev_dbg_f(&intf->dev,
+- "couldn't register netdev. Error number %d\n", r);
++ "couldn't register device. Error number %d\n", r);
+ goto error;
}
- rt2x00pci_register_read(rt2x00dev, TXCSR0, &reg);
-- if (queue == IEEE80211_TX_QUEUE_DATA0)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA1)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_TX, 1);
-- else if (queue == IEEE80211_TX_QUEUE_AFTER_BEACON)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM, 1);
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO,
-+ (queue == IEEE80211_TX_QUEUE_DATA0));
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_TX,
-+ (queue == IEEE80211_TX_QUEUE_DATA1));
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM,
-+ (queue == IEEE80211_TX_QUEUE_AFTER_BEACON));
- rt2x00pci_register_write(rt2x00dev, TXCSR0, reg);
+ dev_dbg_f(&intf->dev, "successful\n");
+- dev_info(&intf->dev,"%s\n", netdev->name);
++ dev_info(&intf->dev, "%s\n", wiphy_name(hw->wiphy));
+ return 0;
+ error:
+ usb_reset_device(interface_to_usbdev(intf));
+- zd_netdev_free(netdev);
++ if (hw) {
++ zd_mac_clear(zd_hw_mac(hw));
++ ieee80211_free_hw(hw);
++ }
+ return r;
}
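[editor's aside: the probe() error handling above follows the usual kernel goto-unwind style: every failure path jumps to cleanup code that releases exactly what has been acquired so far (here resetting the device and freeing the hw structure). A generic userspace sketch of that pattern; the resource types and function names are invented:]

```c
#include <stdlib.h>

/* Goto-based unwind in the style of probe(): each failure path releases
 * only the resources acquired before it. fail_b forces the second
 * acquisition to fail so the unwind path can be exercised. */
static int acquire_a(void **a)
{
	*a = malloc(1);
	return *a ? 0 : -1;
}

static int acquire_b(void **b, int fail)
{
	if (fail)
		return -1;
	*b = malloc(1);
	return *b ? 0 : -1;
}

int setup(int fail_b)
{
	void *a = NULL, *b = NULL;
	int r;

	r = acquire_a(&a);
	if (r)
		goto out;		/* nothing to undo yet */
	r = acquire_b(&b, fail_b);
	if (r)
		goto err_free_a;	/* undo only the first step */

	free(b);			/* success path: tidy up the demo */
	free(a);
	return 0;

err_free_a:
	free(a);
out:
	return r;
}
```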
-@@ -1114,7 +1065,7 @@ static void rt2400pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
- static void rt2400pci_fill_rxdone(struct data_entry *entry,
- struct rxdata_entry_desc *desc)
+ static void disconnect(struct usb_interface *intf)
{
-- struct data_desc *rxd = entry->priv;
-+ __le32 *rxd = entry->priv;
- u32 word0;
- u32 word2;
+- struct net_device *netdev = zd_intf_to_netdev(intf);
++ struct ieee80211_hw *hw = zd_intf_to_hw(intf);
+ struct zd_mac *mac;
+ struct zd_usb *usb;
-@@ -1135,6 +1086,7 @@ static void rt2400pci_fill_rxdone(struct data_entry *entry,
- entry->ring->rt2x00dev->rssi_offset;
- desc->ofdm = 0;
- desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
-+ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
- }
+ /* Either something really bad happened, or we're just dealing with
+ * a DEVICE_INSTALLER. */
+- if (netdev == NULL)
++ if (hw == NULL)
+ return;
- /*
-@@ -1144,7 +1096,7 @@ static void rt2400pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
- {
- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
- struct data_entry *entry;
-- struct data_desc *txd;
-+ __le32 *txd;
- u32 word;
- int tx_status;
- int retry;
-@@ -1164,26 +1116,8 @@ static void rt2400pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
- tx_status = rt2x00_get_field32(word, TXD_W0_RESULT);
- retry = rt2x00_get_field32(word, TXD_W0_RETRY_COUNT);
+- mac = zd_netdev_mac(netdev);
++ mac = zd_hw_mac(hw);
+ usb = &mac->chip.usb;
-- rt2x00lib_txdone(entry, tx_status, retry);
--
-- /*
-- * Make this entry available for reuse.
-- */
-- entry->flags = 0;
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_desc_write(txd, 0, word);
-- rt2x00_ring_index_done_inc(ring);
-+ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
- }
--
-- /*
-- * If the data ring was full before the txdone handler
-- * we must make sure the packet queue in the mac80211 stack
-- * is reenabled when the txdone handler has finished.
-- */
-- entry = ring->entry;
-- if (!rt2x00_ring_full(ring))
-- ieee80211_wake_queue(rt2x00dev->hw,
-- entry->tx_status.control.queue);
- }
+ dev_dbg_f(zd_usb_dev(usb), "\n");
- static irqreturn_t rt2400pci_interrupt(int irq, void *dev_instance)
-@@ -1315,12 +1249,23 @@ static int rt2400pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
- /*
- * Identify default antenna configuration.
+- zd_netdev_disconnect(netdev);
++ ieee80211_unregister_hw(hw);
+
+ /* Just in case something has gone wrong! */
+ zd_usb_disable_rx(usb);
+@@ -1070,12 +1202,13 @@ static void disconnect(struct usb_interface *intf)
*/
-- rt2x00dev->hw->conf.antenna_sel_tx =
-+ rt2x00dev->default_ant.tx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
-- rt2x00dev->hw->conf.antenna_sel_rx =
-+ rt2x00dev->default_ant.rx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
+ usb_reset_device(interface_to_usbdev(intf));
- /*
-+ * When the eeprom indicates SW_DIVERSITY use HW_DIVERSITY instead.
-+ * I am not 100% sure about this, but the legacy drivers do not
-+ * indicate antenna swapping in software is required when
-+ * diversity is enabled.
-+ */
-+ if (rt2x00dev->default_ant.tx == ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->default_ant.tx = ANTENNA_HW_DIVERSITY;
-+ if (rt2x00dev->default_ant.rx == ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->default_ant.rx = ANTENNA_HW_DIVERSITY;
+- zd_netdev_free(netdev);
++ zd_mac_clear(mac);
++ ieee80211_free_hw(hw);
+ dev_dbg(&intf->dev, "disconnected\n");
+ }
+
+ static struct usb_driver driver = {
+- .name = "zd1211rw",
++ .name = KBUILD_MODNAME,
+ .id_table = usb_ids,
+ .probe = probe,
+ .disconnect = disconnect,
+diff --git a/drivers/net/wireless/zd1211rw/zd_usb.h b/drivers/net/wireless/zd1211rw/zd_usb.h
+index 961a7a1..049f8b9 100644
+--- a/drivers/net/wireless/zd1211rw/zd_usb.h
++++ b/drivers/net/wireless/zd1211rw/zd_usb.h
+@@ -1,4 +1,7 @@
+-/* zd_usb.h: Header for USB interface implemented by ZD1211 chip
++/* ZD1211 USB-WLAN driver for Linux
++ *
++ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
++ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -26,6 +29,9 @@
+
+ #include "zd_def.h"
+
++#define ZD_USB_TX_HIGH 5
++#define ZD_USB_TX_LOW 2
+
-+ /*
- * Store led mode, for correct led behaviour.
- */
- rt2x00dev->led_mode =
-@@ -1447,7 +1392,6 @@ static void rt2400pci_configure_filter(struct ieee80211_hw *hw,
- struct dev_addr_list *mc_list)
- {
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct interface *intf = &rt2x00dev->interface;
- u32 reg;
+ enum devicetype {
+ DEVICE_ZD1211 = 0,
+ DEVICE_ZD1211B = 1,
+@@ -165,7 +171,7 @@ static inline struct usb_int_regs *get_read_regs(struct zd_usb_interrupt *intr)
+ return (struct usb_int_regs *)intr->read_regs.buffer;
+ }
- /*
-@@ -1466,21 +1410,18 @@ static void rt2400pci_configure_filter(struct ieee80211_hw *hw,
- * Apply some rules to the filters:
- * - Some filters imply different filters to be set.
- * - Some things we can't filter out at all.
-- * - Some filters are set based on interface type.
- */
- *total_flags |= FIF_ALLMULTI;
- if (*total_flags & FIF_OTHER_BSS ||
- *total_flags & FIF_PROMISC_IN_BSS)
- *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
-- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
-- *total_flags |= FIF_PROMISC_IN_BSS;
+-#define URBS_COUNT 5
++#define RX_URBS_COUNT 5
- /*
- * Check if there is any work left for us.
- */
-- if (intf->filter == *total_flags)
-+ if (rt2x00dev->packet_filter == *total_flags)
- return;
-- intf->filter = *total_flags;
-+ rt2x00dev->packet_filter = *total_flags;
+ struct zd_usb_rx {
+ spinlock_t lock;
+@@ -176,8 +182,21 @@ struct zd_usb_rx {
+ int urbs_count;
+ };
- /*
- * Start configuration steps.
-@@ -1583,7 +1524,7 @@ static const struct ieee80211_ops rt2400pci_mac80211_ops = {
- .configure_filter = rt2400pci_configure_filter,
- .get_stats = rt2x00mac_get_stats,
- .set_retry_limit = rt2400pci_set_retry_limit,
-- .erp_ie_changed = rt2x00mac_erp_ie_changed,
-+ .bss_info_changed = rt2x00mac_bss_info_changed,
- .conf_tx = rt2400pci_conf_tx,
- .get_tx_stats = rt2x00mac_get_tx_stats,
- .get_tsf = rt2400pci_get_tsf,
-@@ -1597,6 +1538,8 @@ static const struct rt2x00lib_ops rt2400pci_rt2x00_ops = {
- .probe_hw = rt2400pci_probe_hw,
- .initialize = rt2x00pci_initialize,
- .uninitialize = rt2x00pci_uninitialize,
-+ .init_rxentry = rt2400pci_init_rxentry,
-+ .init_txentry = rt2400pci_init_txentry,
- .set_device_state = rt2400pci_set_device_state,
- .rfkill_poll = rt2400pci_rfkill_poll,
- .link_stats = rt2400pci_link_stats,
-@@ -1614,7 +1557,7 @@ static const struct rt2x00lib_ops rt2400pci_rt2x00_ops = {
++/**
++ * struct zd_usb_tx - structure used for transmitting frames
++ * @lock: lock for transmission
++ * @free_urb_list: list of free URBs, contains all the URBs, which can be used
++ * @submitted_urbs: atomic integer that counts the URBs having sent to the
++ * device, which haven't been completed
++ * @enabled: enabled flag, indicates whether tx is enabled
++ * @stopped: indicates whether higher level tx queues are stopped
++ */
+ struct zd_usb_tx {
+ spinlock_t lock;
++ struct list_head free_urb_list;
++ int submitted_urbs;
++ int enabled;
++ int stopped;
};
- static const struct rt2x00_ops rt2400pci_ops = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .rxd_size = RXD_DESC_SIZE,
- .txd_size = TXD_DESC_SIZE,
- .eeprom_size = EEPROM_SIZE,
-@@ -1642,7 +1585,7 @@ MODULE_DEVICE_TABLE(pci, rt2400pci_device_table);
- MODULE_LICENSE("GPL");
+ /* Contains the usb parts. The structure doesn't require a lock because intf
+@@ -198,17 +217,17 @@ static inline struct usb_device *zd_usb_to_usbdev(struct zd_usb *usb)
+ return interface_to_usbdev(usb->intf);
+ }
- static struct pci_driver rt2400pci_driver = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .id_table = rt2400pci_device_table,
- .probe = rt2x00pci_probe,
- .remove = __devexit_p(rt2x00pci_remove),
-diff --git a/drivers/net/wireless/rt2x00/rt2400pci.h b/drivers/net/wireless/rt2x00/rt2400pci.h
-index ae22501..369aac6 100644
---- a/drivers/net/wireless/rt2x00/rt2400pci.h
-+++ b/drivers/net/wireless/rt2x00/rt2400pci.h
-@@ -803,8 +803,8 @@
- /*
- * DMA descriptor defines.
- */
--#define TXD_DESC_SIZE ( 8 * sizeof(struct data_desc) )
--#define RXD_DESC_SIZE ( 8 * sizeof(struct data_desc) )
-+#define TXD_DESC_SIZE ( 8 * sizeof(__le32) )
-+#define RXD_DESC_SIZE ( 8 * sizeof(__le32) )
+-static inline struct net_device *zd_intf_to_netdev(struct usb_interface *intf)
++static inline struct ieee80211_hw *zd_intf_to_hw(struct usb_interface *intf)
+ {
+ return usb_get_intfdata(intf);
+ }
- /*
- * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
-@@ -839,11 +839,21 @@
+-static inline struct net_device *zd_usb_to_netdev(struct zd_usb *usb)
++static inline struct ieee80211_hw *zd_usb_to_hw(struct zd_usb *usb)
+ {
+- return zd_intf_to_netdev(usb->intf);
++ return zd_intf_to_hw(usb->intf);
+ }
- /*
- * Word3 & 4: PLCP information
-- */
--#define TXD_W3_PLCP_SIGNAL FIELD32(0x0000ffff)
--#define TXD_W3_PLCP_SERVICE FIELD32(0xffff0000)
--#define TXD_W4_PLCP_LENGTH_LOW FIELD32(0x0000ffff)
--#define TXD_W4_PLCP_LENGTH_HIGH FIELD32(0xffff0000)
-+ * The PLCP values should be treated as if they were BBP values.
-+ */
-+#define TXD_W3_PLCP_SIGNAL FIELD32(0x000000ff)
-+#define TXD_W3_PLCP_SIGNAL_REGNUM FIELD32(0x00007f00)
-+#define TXD_W3_PLCP_SIGNAL_BUSY FIELD32(0x00008000)
-+#define TXD_W3_PLCP_SERVICE FIELD32(0x00ff0000)
-+#define TXD_W3_PLCP_SERVICE_REGNUM FIELD32(0x7f000000)
-+#define TXD_W3_PLCP_SERVICE_BUSY FIELD32(0x80000000)
+-void zd_usb_init(struct zd_usb *usb, struct net_device *netdev,
++void zd_usb_init(struct zd_usb *usb, struct ieee80211_hw *hw,
+ struct usb_interface *intf);
+ int zd_usb_init_hw(struct zd_usb *usb);
+ void zd_usb_clear(struct zd_usb *usb);
+@@ -221,7 +240,10 @@ void zd_usb_disable_int(struct zd_usb *usb);
+ int zd_usb_enable_rx(struct zd_usb *usb);
+ void zd_usb_disable_rx(struct zd_usb *usb);
+
+-int zd_usb_tx(struct zd_usb *usb, const u8 *frame, unsigned int length);
++void zd_usb_enable_tx(struct zd_usb *usb);
++void zd_usb_disable_tx(struct zd_usb *usb);
+
-+#define TXD_W4_PLCP_LENGTH_LOW FIELD32(0x000000ff)
-+#define TXD_W3_PLCP_LENGTH_LOW_REGNUM FIELD32(0x00007f00)
-+#define TXD_W3_PLCP_LENGTH_LOW_BUSY FIELD32(0x00008000)
-+#define TXD_W4_PLCP_LENGTH_HIGH FIELD32(0x00ff0000)
-+#define TXD_W3_PLCP_LENGTH_HIGH_REGNUM FIELD32(0x7f000000)
-+#define TXD_W3_PLCP_LENGTH_HIGH_BUSY FIELD32(0x80000000)
++int zd_usb_tx(struct zd_usb *usb, struct sk_buff *skb);
- /*
- * Word5
-diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c
-index 702321c..e874fdc 100644
---- a/drivers/net/wireless/rt2x00/rt2500pci.c
-+++ b/drivers/net/wireless/rt2x00/rt2500pci.c
-@@ -24,11 +24,6 @@
- Supported chipsets: RT2560.
- */
+ int zd_usb_ioread16v(struct zd_usb *usb, u16 *values,
+ const zd_addr_t *addresses, unsigned int count);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index bca37bf..7483d45 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1073,7 +1073,7 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
+ if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+ /* Do all the remapping work and M2P updates. */
+ MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
+- 0, DOMID_SELF);
++ NULL, DOMID_SELF);
+ mcl++;
+ HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
+ }
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index a6d6b24..703b85e 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -364,7 +364,7 @@ static __inline__ int led_get_net_activity(void)
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+ if (!in_dev || !in_dev->ifa_list)
+ continue;
+- if (LOOPBACK(in_dev->ifa_list->ifa_local))
++ if (ipv4_is_loopback(in_dev->ifa_list->ifa_local))
+ continue;
+ stats = dev->get_stats(dev);
+ rx_total += stats->rx_packets;
+diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
+index ebb09e9..de34aa9 100644
+--- a/drivers/parisc/pdc_stable.c
++++ b/drivers/parisc/pdc_stable.c
+@@ -120,7 +120,7 @@ struct pdcspath_entry pdcspath_entry_##_name = { \
+ };
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2500pci"
--
- #include <linux/delay.h>
- #include <linux/etherdevice.h>
- #include <linux/init.h>
-@@ -54,7 +49,7 @@
- * the access attempt is considered to have failed,
- * and we will print an error.
+ #define PDCS_ATTR(_name, _mode, _show, _store) \
+-struct subsys_attribute pdcs_attr_##_name = { \
++struct kobj_attribute pdcs_attr_##_name = { \
+ .attr = {.name = __stringify(_name), .mode = _mode}, \
+ .show = _show, \
+ .store = _store, \
+@@ -523,15 +523,15 @@ static struct pdcspath_entry *pdcspath_entries[] = {
+
+ /**
+ * pdcs_size_read - Stable Storage size output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
*/
--static u32 rt2500pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
-+static u32 rt2500pci_bbp_check(struct rt2x00_dev *rt2x00dev)
+-static ssize_t
+-pdcs_size_read(struct kset *kset, char *buf)
++static ssize_t pdcs_size_read(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ char *buf)
{
- u32 reg;
- unsigned int i;
-@@ -69,7 +64,7 @@ static u32 rt2500pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
- return reg;
- }
+ char *out = buf;
--static void rt2500pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500pci_bbp_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u8 value)
- {
- u32 reg;
-@@ -95,7 +90,7 @@ static void rt2500pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
- rt2x00pci_register_write(rt2x00dev, BBPCSR, reg);
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
--static void rt2500pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500pci_bbp_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u8 *value)
- {
- u32 reg;
-@@ -132,7 +127,7 @@ static void rt2500pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
- *value = rt2x00_get_field32(reg, BBPCSR_VALUE);
- }
+ /* show the size of the stable storage */
+@@ -542,17 +542,17 @@ pdcs_size_read(struct kset *kset, char *buf)
--static void rt2500pci_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500pci_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u32 value)
+ /**
+ * pdcs_auto_read - Stable Storage autoboot/search flag output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ * @knob: The PF_AUTOBOOT or PF_AUTOSEARCH flag
+ */
+-static ssize_t
+-pdcs_auto_read(struct kset *kset, char *buf, int knob)
++static ssize_t pdcs_auto_read(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ char *buf, int knob)
{
- u32 reg;
-@@ -195,13 +190,13 @@ static void rt2500pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
+ char *out = buf;
+ struct pdcspath_entry *pathentry;
--static void rt2500pci_read_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500pci_read_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
- {
- rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
--static void rt2500pci_write_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500pci_write_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
- {
- rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
-@@ -289,7 +284,7 @@ static void rt2500pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
- */
- rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
- rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 1);
-- rt2x00_set_field32(&reg, CSR14_TBCN, 1);
-+ rt2x00_set_field32(&reg, CSR14_TBCN, (tsf_sync == TSF_SYNC_BEACON));
- rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
- rt2x00_set_field32(&reg, CSR14_TSF_SYNC, tsf_sync);
- rt2x00pci_register_write(rt2x00dev, CSR14, reg);
-@@ -424,7 +419,7 @@ static void rt2500pci_config_txpower(struct rt2x00_dev *rt2x00dev,
- }
+ /* Current flags are stored in primary boot path entry */
+@@ -568,40 +568,37 @@ pdcs_auto_read(struct kset *kset, char *buf, int knob)
- static void rt2500pci_config_antenna(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx, const int antenna_rx)
-+ struct antenna_setup *ant)
+ /**
+ * pdcs_autoboot_read - Stable Storage autoboot flag output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ */
+-static inline ssize_t
+-pdcs_autoboot_read(struct kset *kset, char *buf)
++static ssize_t pdcs_autoboot_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
- u32 reg;
- u8 r14;
-@@ -437,18 +432,20 @@ static void rt2500pci_config_antenna(struct rt2x00_dev *rt2x00dev,
- /*
- * Configure the TX antenna.
- */
-- switch (antenna_tx) {
-- case ANTENNA_SW_DIVERSITY:
-- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
-- rt2x00_set_field32(&reg, BBPCSR1_CCK, 2);
-- rt2x00_set_field32(&reg, BBPCSR1_OFDM, 2);
-- break;
-+ switch (ant->tx) {
- case ANTENNA_A:
- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 0);
- rt2x00_set_field32(&reg, BBPCSR1_CCK, 0);
- rt2x00_set_field32(&reg, BBPCSR1_OFDM, 0);
- break;
-+ case ANTENNA_HW_DIVERSITY:
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitly. However we are not going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
- rt2x00_set_field32(&reg, BBPCSR1_CCK, 2);
- rt2x00_set_field32(&reg, BBPCSR1_OFDM, 2);
- /*
- * Configure the RX antenna.
- */
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
-- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
-- break;
-+ switch (ant->rx) {
- case ANTENNA_A:
- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 0);
- break;
-+ case ANTENNA_HW_DIVERSITY:
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitly. However we are not going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
- break;
-@@ -541,9 +542,7 @@ static void rt2500pci_config(struct rt2x00_dev *rt2x00dev,
- rt2500pci_config_txpower(rt2x00dev,
- libconf->conf->power_level);
- if (flags & CONFIG_UPDATE_ANTENNA)
-- rt2500pci_config_antenna(rt2x00dev,
-- libconf->conf->antenna_sel_tx,
-- libconf->conf->antenna_sel_rx);
-+ rt2500pci_config_antenna(rt2x00dev, &libconf->ant);
- if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
- rt2500pci_config_duration(rt2x00dev, libconf);
- }
-@@ -559,18 +558,10 @@ static void rt2500pci_enable_led(struct rt2x00_dev *rt2x00dev)
-
- rt2x00_set_field32(&reg, LEDCSR_ON_PERIOD, 70);
- rt2x00_set_field32(&reg, LEDCSR_OFF_PERIOD, 30);
--
-- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 0);
-- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 0);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
-- } else {
-- rt2x00_set_field32(&reg, LEDCSR_LINK, 1);
-- rt2x00_set_field32(&reg, LEDCSR_ACTIVITY, 1);
-- }
--
-+ rt2x00_set_field32(&reg, LEDCSR_LINK,
-+ (rt2x00dev->led_mode != LED_MODE_ASUS));
-+ rt2x00_set_field32(&reg, LEDCSR_ACTIVITY,
-+ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
- rt2x00pci_register_write(rt2x00dev, LEDCSR, reg);
+- return pdcs_auto_read(kset, buf, PF_AUTOBOOT);
++ return pdcs_auto_read(kobj, attr, buf, PF_AUTOBOOT);
}
-@@ -587,7 +578,8 @@ static void rt2500pci_disable_led(struct rt2x00_dev *rt2x00dev)
- /*
- * Link tuning
+ /**
+ * pdcs_autosearch_read - Stable Storage autoboot flag output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
*/
--static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev)
-+static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual)
+-static inline ssize_t
+-pdcs_autosearch_read(struct kset *kset, char *buf)
++static ssize_t pdcs_autosearch_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
- u32 reg;
-
-@@ -595,13 +587,13 @@ static void rt2500pci_link_stats(struct rt2x00_dev *rt2x00dev)
- * Update FCS error count from register.
- */
- rt2x00pci_register_read(rt2x00dev, CNT0, &reg);
-- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
-+ qual->rx_failed = rt2x00_get_field32(reg, CNT0_FCS_ERROR);
-
- /*
- * Update False CCA count from register.
- */
- rt2x00pci_register_read(rt2x00dev, CNT3, &reg);
-- rt2x00dev->link.false_cca = rt2x00_get_field32(reg, CNT3_FALSE_CCA);
-+ qual->false_cca = rt2x00_get_field32(reg, CNT3_FALSE_CCA);
+- return pdcs_auto_read(kset, buf, PF_AUTOSEARCH);
++ return pdcs_auto_read(kobj, attr, buf, PF_AUTOSEARCH);
}
- static void rt2500pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
-@@ -679,10 +671,10 @@ dynamic_cca_tune:
- * R17 is inside the dynamic tuning range,
- * start tuning the link based on the false cca counter.
- */
-- if (rt2x00dev->link.false_cca > 512 && r17 < 0x40) {
-+ if (rt2x00dev->link.qual.false_cca > 512 && r17 < 0x40) {
- rt2500pci_bbp_write(rt2x00dev, 17, ++r17);
- rt2x00dev->link.vgc_level = r17;
-- } else if (rt2x00dev->link.false_cca < 100 && r17 > 0x32) {
-+ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > 0x32) {
- rt2500pci_bbp_write(rt2x00dev, 17, --r17);
- rt2x00dev->link.vgc_level = r17;
- }
-@@ -691,55 +683,35 @@ dynamic_cca_tune:
- /*
- * Initialization functions.
+ /**
+ * pdcs_timer_read - Stable Storage timer count output (in seconds).
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ *
+ * The value of the timer field corresponds to a number of seconds in powers of 2.
*/
--static void rt2500pci_init_rxring(struct rt2x00_dev *rt2x00dev)
-+static void rt2500pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+-static ssize_t
+-pdcs_timer_read(struct kset *kset, char *buf)
++static ssize_t pdcs_timer_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
-- struct data_ring *ring = rt2x00dev->rx;
-- struct data_desc *rxd;
-- unsigned int i;
-+ __le32 *rxd = entry->priv;
- u32 word;
+ char *out = buf;
+ struct pdcspath_entry *pathentry;
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- rxd = ring->entry[i].priv;
--
-- rt2x00_desc_read(rxd, 1, &word);
-- rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(rxd, 1, word);
--
-- rt2x00_desc_read(rxd, 0, &word);
-- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-- rt2x00_desc_write(rxd, 0, word);
-- }
-+ rt2x00_desc_read(rxd, 1, &word);
-+ rt2x00_set_field32(&word, RXD_W1_BUFFER_ADDRESS, entry->data_dma);
-+ rt2x00_desc_write(rxd, 1, word);
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
-- rt2x00_ring_index_clear(rt2x00dev->rx);
-+ rt2x00_desc_read(rxd, 0, &word);
-+ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-+ rt2x00_desc_write(rxd, 0, word);
- }
+ /* Current flags are stored in primary boot path entry */
+@@ -618,15 +615,14 @@ pdcs_timer_read(struct kset *kset, char *buf)
--static void rt2500pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
-+static void rt2500pci_init_txentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+ /**
+ * pdcs_osid_read - Stable Storage OS ID register output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ */
+-static ssize_t
+-pdcs_osid_read(struct kset *kset, char *buf)
++static ssize_t pdcs_osid_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
-- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
-- struct data_desc *txd;
-- unsigned int i;
-+ __le32 *txd = entry->priv;
- u32 word;
-
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- txd = ring->entry[i].priv;
--
-- rt2x00_desc_read(txd, 1, &word);
-- rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(txd, 1, word);
--
-- rt2x00_desc_read(txd, 0, &word);
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-- rt2x00_desc_write(txd, 0, word);
-- }
-+ rt2x00_desc_read(txd, 1, &word);
-+ rt2x00_set_field32(&word, TXD_W1_BUFFER_ADDRESS, entry->data_dma);
-+ rt2x00_desc_write(txd, 1, word);
+ char *out = buf;
-- rt2x00_ring_index_clear(ring);
-+ rt2x00_desc_read(txd, 0, &word);
-+ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-+ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-+ rt2x00_desc_write(txd, 0, word);
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
- static int rt2500pci_init_rings(struct rt2x00_dev *rt2x00dev)
-@@ -747,15 +719,6 @@ static int rt2500pci_init_rings(struct rt2x00_dev *rt2x00dev)
- u32 reg;
+ out += sprintf(out, "%s dependent data (0x%.4x)\n",
+@@ -637,18 +633,17 @@ pdcs_osid_read(struct kset *kset, char *buf)
- /*
-- * Initialize rings.
-- */
-- rt2500pci_init_rxring(rt2x00dev);
-- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
-- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
-- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_AFTER_BEACON);
-- rt2500pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
--
-- /*
- * Initialize registers.
- */
- rt2x00pci_register_read(rt2x00dev, TXCSR2, &reg);
-@@ -1170,12 +1133,12 @@ static int rt2500pci_set_device_state(struct rt2x00_dev *rt2x00dev,
- * TX descriptor initialization
+ /**
+ * pdcs_osdep1_read - Stable Storage OS-Dependent data area 1 output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ *
+ * This can hold 16 bytes of OS-Dependent data.
*/
- static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-+ struct sk_buff *skb,
- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
- struct ieee80211_tx_control *control)
+-static ssize_t
+-pdcs_osdep1_read(struct kset *kset, char *buf)
++static ssize_t pdcs_osdep1_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ __le32 *txd = skbdesc->desc;
- u32 word;
+ char *out = buf;
+ u32 result[4];
- /*
-@@ -1206,7 +1169,7 @@ static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
- test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_ACK,
-- !(control->flags & IEEE80211_TXCTL_NO_ACK));
-+ test_bit(ENTRY_TXD_ACK, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
- test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_OFDM,
-@@ -1216,7 +1179,7 @@ static void rt2500pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_RETRY_MODE,
- !!(control->flags &
- IEEE80211_TXCTL_LONG_RETRY_LIMIT));
-- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
-+ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
- rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
- rt2x00_desc_write(txd, 0, word);
- }
-@@ -1239,12 +1202,12 @@ static void rt2500pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
- rt2x00pci_register_read(rt2x00dev, TXCSR0, &reg);
-- if (queue == IEEE80211_TX_QUEUE_DATA0)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA1)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_TX, 1);
-- else if (queue == IEEE80211_TX_QUEUE_AFTER_BEACON)
-- rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM, 1);
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_PRIO,
-+ (queue == IEEE80211_TX_QUEUE_DATA0));
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_TX,
-+ (queue == IEEE80211_TX_QUEUE_DATA1));
-+ rt2x00_set_field32(&reg, TXCSR0_KICK_ATIM,
-+ (queue == IEEE80211_TX_QUEUE_AFTER_BEACON));
- rt2x00pci_register_write(rt2x00dev, TXCSR0, reg);
- }
+ if (pdc_stable_read(PDCS_ADDR_OSD1, &result, sizeof(result)) != PDC_OK)
+@@ -664,18 +659,17 @@ pdcs_osdep1_read(struct kset *kset, char *buf)
-@@ -1254,7 +1217,7 @@ static void rt2500pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
- static void rt2500pci_fill_rxdone(struct data_entry *entry,
- struct rxdata_entry_desc *desc)
+ /**
+ * pdcs_diagnostic_read - Stable Storage Diagnostic register output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ *
+ * I have NFC how to interpret the content of that register ;-).
+ */
+-static ssize_t
+-pdcs_diagnostic_read(struct kset *kset, char *buf)
++static ssize_t pdcs_diagnostic_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
-- struct data_desc *rxd = entry->priv;
-+ __le32 *rxd = entry->priv;
- u32 word0;
- u32 word2;
+ char *out = buf;
+ u32 result;
-@@ -1272,6 +1235,7 @@ static void rt2500pci_fill_rxdone(struct data_entry *entry,
- entry->ring->rt2x00dev->rssi_offset;
- desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
- desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
-+ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
- /*
-@@ -1281,7 +1245,7 @@ static void rt2500pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
+ /* get diagnostic */
+@@ -689,18 +683,17 @@ pdcs_diagnostic_read(struct kset *kset, char *buf)
+
+ /**
+ * pdcs_fastsize_read - Stable Storage FastSize register output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ *
+ * This register holds the amount of system RAM to be tested during boot sequence.
+ */
+-static ssize_t
+-pdcs_fastsize_read(struct kset *kset, char *buf)
++static ssize_t pdcs_fastsize_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
- struct data_entry *entry;
-- struct data_desc *txd;
-+ __le32 *txd;
- u32 word;
- int tx_status;
- int retry;
-@@ -1301,26 +1265,8 @@ static void rt2500pci_txdone(struct rt2x00_dev *rt2x00dev, const int queue)
- tx_status = rt2x00_get_field32(word, TXD_W0_RESULT);
- retry = rt2x00_get_field32(word, TXD_W0_RETRY_COUNT);
+ char *out = buf;
+ u32 result;
-- rt2x00lib_txdone(entry, tx_status, retry);
--
-- /*
-- * Make this entry available for reuse.
-- */
-- entry->flags = 0;
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_desc_write(txd, 0, word);
-- rt2x00_ring_index_done_inc(ring);
-+ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
- }
--
-- /*
-- * If the data ring was full before the txdone handler
-- * we must make sure the packet queue in the mac80211 stack
-- * is reenabled when the txdone handler has finished.
-- */
-- entry = ring->entry;
-- if (!rt2x00_ring_full(ring))
-- ieee80211_wake_queue(rt2x00dev->hw,
-- entry->tx_status.control.queue);
- }
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
- static irqreturn_t rt2500pci_interrupt(int irq, void *dev_instance)
-@@ -1420,9 +1366,12 @@ static int rt2500pci_validate_eeprom(struct rt2x00_dev *rt2x00dev)
- rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
- if (word == 0xffff) {
- rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 0);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 0);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE, 0);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
-+ ANTENNA_SW_DIVERSITY);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
-+ ANTENNA_SW_DIVERSITY);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE,
-+ LED_MODE_DEFAULT);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_RF_TYPE, RF2522);
-@@ -1481,9 +1430,9 @@ static int rt2500pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
- /*
- * Identify default antenna configuration.
- */
-- rt2x00dev->hw->conf.antenna_sel_tx =
-+ rt2x00dev->default_ant.tx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
-- rt2x00dev->hw->conf.antenna_sel_rx =
-+ rt2x00dev->default_ant.rx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
+ /* get fast-size */
+@@ -718,13 +711,12 @@ pdcs_fastsize_read(struct kset *kset, char *buf)
- /*
-@@ -1774,7 +1723,6 @@ static void rt2500pci_configure_filter(struct ieee80211_hw *hw,
- struct dev_addr_list *mc_list)
+ /**
+ * pdcs_osdep2_read - Stable Storage OS-Dependent data area 2 output.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The output buffer to write to.
+ *
+ * This can hold pdcs_size - 224 bytes of OS-Dependent data, when available.
+ */
+-static ssize_t
+-pdcs_osdep2_read(struct kset *kset, char *buf)
++static ssize_t pdcs_osdep2_read(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct interface *intf = &rt2x00dev->interface;
- u32 reg;
-
- /*
-@@ -1793,22 +1741,19 @@ static void rt2500pci_configure_filter(struct ieee80211_hw *hw,
- * Apply some rules to the filters:
- * - Some filters imply different filters to be set.
- * - Some things we can't filter out at all.
-- * - Some filters are set based on interface type.
- */
- if (mc_count)
- *total_flags |= FIF_ALLMULTI;
- if (*total_flags & FIF_OTHER_BSS ||
- *total_flags & FIF_PROMISC_IN_BSS)
- *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
-- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
-- *total_flags |= FIF_PROMISC_IN_BSS;
+ char *out = buf;
+ unsigned long size;
+@@ -736,7 +728,7 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
- /*
- * Check if there is any work left for us.
- */
-- if (intf->filter == *total_flags)
-+ if (rt2x00dev->packet_filter == *total_flags)
- return;
-- intf->filter = *total_flags;
-+ rt2x00dev->packet_filter = *total_flags;
+ size = pdcs_size - 224;
- /*
- * Start configuration steps.
-@@ -1890,7 +1835,7 @@ static const struct ieee80211_ops rt2500pci_mac80211_ops = {
- .configure_filter = rt2500pci_configure_filter,
- .get_stats = rt2x00mac_get_stats,
- .set_retry_limit = rt2500pci_set_retry_limit,
-- .erp_ie_changed = rt2x00mac_erp_ie_changed,
-+ .bss_info_changed = rt2x00mac_bss_info_changed,
- .conf_tx = rt2x00mac_conf_tx,
- .get_tx_stats = rt2x00mac_get_tx_stats,
- .get_tsf = rt2500pci_get_tsf,
-@@ -1904,6 +1849,8 @@ static const struct rt2x00lib_ops rt2500pci_rt2x00_ops = {
- .probe_hw = rt2500pci_probe_hw,
- .initialize = rt2x00pci_initialize,
- .uninitialize = rt2x00pci_uninitialize,
-+ .init_rxentry = rt2500pci_init_rxentry,
-+ .init_txentry = rt2500pci_init_txentry,
- .set_device_state = rt2500pci_set_device_state,
- .rfkill_poll = rt2500pci_rfkill_poll,
- .link_stats = rt2500pci_link_stats,
-@@ -1921,7 +1868,7 @@ static const struct rt2x00lib_ops rt2500pci_rt2x00_ops = {
- };
+- if (!kset || !buf)
++ if (!buf)
+ return -EINVAL;
- static const struct rt2x00_ops rt2500pci_ops = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .rxd_size = RXD_DESC_SIZE,
- .txd_size = TXD_DESC_SIZE,
- .eeprom_size = EEPROM_SIZE,
-@@ -1949,7 +1896,7 @@ MODULE_DEVICE_TABLE(pci, rt2500pci_device_table);
- MODULE_LICENSE("GPL");
+ for (i=0; i<size; i+=4) {
+@@ -751,7 +743,6 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
- static struct pci_driver rt2500pci_driver = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .id_table = rt2500pci_device_table,
- .probe = rt2x00pci_probe,
- .remove = __devexit_p(rt2x00pci_remove),
-diff --git a/drivers/net/wireless/rt2x00/rt2500pci.h b/drivers/net/wireless/rt2x00/rt2500pci.h
-index d92aa56..92ba090 100644
---- a/drivers/net/wireless/rt2x00/rt2500pci.h
-+++ b/drivers/net/wireless/rt2x00/rt2500pci.h
-@@ -1082,8 +1082,8 @@
- /*
- * DMA descriptor defines.
+ /**
+ * pdcs_auto_write - This function handles autoboot/search flag modifying.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The input buffer to read from.
+ * @count: The number of bytes to be read.
+ * @knob: The PF_AUTOBOOT or PF_AUTOSEARCH flag
+@@ -760,8 +751,9 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
+ * We expect a precise syntax:
+ * \"n\" (n == 0 or 1) to toggle AutoBoot Off or On
*/
--#define TXD_DESC_SIZE ( 11 * sizeof(struct data_desc) )
--#define RXD_DESC_SIZE ( 11 * sizeof(struct data_desc) )
-+#define TXD_DESC_SIZE ( 11 * sizeof(__le32) )
-+#define RXD_DESC_SIZE ( 11 * sizeof(__le32) )
+-static ssize_t
+-pdcs_auto_write(struct kset *kset, const char *buf, size_t count, int knob)
++static ssize_t pdcs_auto_write(struct kobject *kobj,
++ struct kobj_attribute *attr, const char *buf,
++ size_t count, int knob)
+ {
+ struct pdcspath_entry *pathentry;
+ unsigned char flags;
+@@ -771,7 +763,7 @@ pdcs_auto_write(struct kset *kset, const char *buf, size_t count, int knob)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
- /*
- * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
-diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c
-index 18b1f91..86ded40 100644
---- a/drivers/net/wireless/rt2x00/rt2500usb.c
-+++ b/drivers/net/wireless/rt2x00/rt2500usb.c
-@@ -24,11 +24,6 @@
- Supported chipsets: RT2570.
- */
+- if (!kset || !buf || !count)
++ if (!buf || !count)
+ return -EINVAL;
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2500usb"
--
- #include <linux/delay.h>
- #include <linux/etherdevice.h>
- #include <linux/init.h>
-@@ -52,8 +47,10 @@
- * between each attampt. When the busy bit is still set at that time,
- * the access attempt is considered to have failed,
- * and we will print an error.
-+ * If the usb_cache_mutex is already held then the _lock variants must
-+ * be used instead.
+ /* We'll use a local copy of buf */
+@@ -826,7 +818,6 @@ parse_error:
+
+ /**
+ * pdcs_autoboot_write - This function handles autoboot flag modifying.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The input buffer to read from.
+ * @count: The number of bytes to be read.
+ *
+@@ -834,15 +825,15 @@ parse_error:
+ * We expect a precise syntax:
+ * \"n\" (n == 0 or 1) to toggle AutoSearch Off or On
*/
--static inline void rt2500usb_register_read(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2500usb_register_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- u16 *value)
+-static inline ssize_t
+-pdcs_autoboot_write(struct kset *kset, const char *buf, size_t count)
++static ssize_t pdcs_autoboot_write(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t count)
{
-@@ -64,8 +61,18 @@ static inline void rt2500usb_register_read(const struct rt2x00_dev *rt2x00dev,
- *value = le16_to_cpu(reg);
+- return pdcs_auto_write(kset, buf, count, PF_AUTOBOOT);
++ return pdcs_auto_write(kset, attr, buf, count, PF_AUTOBOOT);
}
--static inline void rt2500usb_register_multiread(const struct rt2x00_dev
-- *rt2x00dev,
-+static inline void rt2500usb_register_read_lock(struct rt2x00_dev *rt2x00dev,
-+ const unsigned int offset,
-+ u16 *value)
-+{
-+ __le16 reg;
-+ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_READ,
-+ USB_VENDOR_REQUEST_IN, offset,
-+ ®, sizeof(u16), REGISTER_TIMEOUT);
-+ *value = le16_to_cpu(reg);
-+}
-+
-+static inline void rt2500usb_register_multiread(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- void *value, const u16 length)
+ /**
+ * pdcs_autosearch_write - This function handles autosearch flag modifying.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The input buffer to read from.
+ * @count: The number of bytes to be read.
+ *
+@@ -850,15 +841,15 @@ pdcs_autoboot_write(struct kset *kset, const char *buf, size_t count)
+ * We expect a precise syntax:
+ * \"n\" (n == 0 or 1) to toggle AutoSearch Off or On
+ */
+-static inline ssize_t
+-pdcs_autosearch_write(struct kset *kset, const char *buf, size_t count)
++static ssize_t pdcs_autosearch_write(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t count)
{
-@@ -75,7 +82,7 @@ static inline void rt2500usb_register_multiread(const struct rt2x00_dev
- value, length, timeout);
+- return pdcs_auto_write(kset, buf, count, PF_AUTOSEARCH);
++ return pdcs_auto_write(kset, attr, buf, count, PF_AUTOSEARCH);
}
--static inline void rt2500usb_register_write(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2500usb_register_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- u16 value)
+ /**
+ * pdcs_osdep1_write - Stable Storage OS-Dependent data area 1 input.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The input buffer to read from.
+ * @count: The number of bytes to be read.
+ *
+@@ -866,15 +857,16 @@ pdcs_autosearch_write(struct kset *kset, const char *buf, size_t count)
+ * write approach. It's up to userspace to deal with it when constructing
+ * its input buffer.
+ */
+-static ssize_t
+-pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
++static ssize_t pdcs_osdep1_write(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t count)
{
-@@ -85,8 +92,17 @@ static inline void rt2500usb_register_write(const struct rt2x00_dev *rt2x00dev,
- ®, sizeof(u16), REGISTER_TIMEOUT);
- }
+ u8 in[16];
--static inline void rt2500usb_register_multiwrite(const struct rt2x00_dev
-- *rt2x00dev,
-+static inline void rt2500usb_register_write_lock(struct rt2x00_dev *rt2x00dev,
-+ const unsigned int offset,
-+ u16 value)
-+{
-+ __le16 reg = cpu_to_le16(value);
-+ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_WRITE,
-+ USB_VENDOR_REQUEST_OUT, offset,
-+ ®, sizeof(u16), REGISTER_TIMEOUT);
-+}
-+
-+static inline void rt2500usb_register_multiwrite(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- void *value, const u16 length)
- {
-@@ -96,13 +112,13 @@ static inline void rt2500usb_register_multiwrite(const struct rt2x00_dev
- value, length, timeout);
- }
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
--static u16 rt2500usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
-+static u16 rt2500usb_bbp_check(struct rt2x00_dev *rt2x00dev)
- {
- u16 reg;
- unsigned int i;
+- if (!kset || !buf || !count)
++ if (!buf || !count)
+ return -EINVAL;
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
-- rt2500usb_register_read(rt2x00dev, PHY_CSR8, ®);
-+ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR8, ®);
- if (!rt2x00_get_field16(reg, PHY_CSR8_BUSY))
- break;
- udelay(REGISTER_BUSY_DELAY);
-@@ -111,17 +127,20 @@ static u16 rt2500usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
- return reg;
- }
+ if (unlikely(pdcs_osid != OS_ID_LINUX))
+@@ -895,7 +887,6 @@ pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
--static void rt2500usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500usb_bbp_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u8 value)
+ /**
+ * pdcs_osdep2_write - Stable Storage OS-Dependent data area 2 input.
+- * @kset: An allocated and populated struct kset. We don't use it tho.
+ * @buf: The input buffer to read from.
+ * @count: The number of bytes to be read.
+ *
+@@ -903,8 +894,9 @@ pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
+ * byte-by-byte write approach. It's up to userspace to deal with it when
+ * constructing its input buffer.
+ */
+-static ssize_t
+-pdcs_osdep2_write(struct kset *kset, const char *buf, size_t count)
++static ssize_t pdcs_osdep2_write(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t count)
{
- u16 reg;
+ unsigned long size;
+ unsigned short i;
+@@ -913,7 +905,7 @@ pdcs_osdep2_write(struct kset *kset, const char *buf, size_t count)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
-+
- /*
- * Wait until the BBP becomes ready.
- */
- reg = rt2500usb_bbp_check(rt2x00dev);
- if (rt2x00_get_field16(reg, PHY_CSR8_BUSY)) {
- ERROR(rt2x00dev, "PHY_CSR8 register busy. Write failed.\n");
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- return;
- }
+- if (!kset || !buf || !count)
++ if (!buf || !count)
+ return -EINVAL;
-@@ -133,14 +152,18 @@ static void rt2500usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field16(®, PHY_CSR7_REG_ID, word);
- rt2x00_set_field16(®, PHY_CSR7_READ_CONTROL, 0);
+ if (unlikely(pdcs_size <= 224))
+@@ -951,21 +943,25 @@ static PDCS_ATTR(diagnostic, 0400, pdcs_diagnostic_read, NULL);
+ static PDCS_ATTR(fastsize, 0400, pdcs_fastsize_read, NULL);
+ static PDCS_ATTR(osdep2, 0600, pdcs_osdep2_read, pdcs_osdep2_write);
-- rt2500usb_register_write(rt2x00dev, PHY_CSR7, reg);
-+ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR7, reg);
+-static struct subsys_attribute *pdcs_subsys_attrs[] = {
+- &pdcs_attr_size,
+- &pdcs_attr_autoboot,
+- &pdcs_attr_autosearch,
+- &pdcs_attr_timer,
+- &pdcs_attr_osid,
+- &pdcs_attr_osdep1,
+- &pdcs_attr_diagnostic,
+- &pdcs_attr_fastsize,
+- &pdcs_attr_osdep2,
++static struct attribute *pdcs_subsys_attrs[] = {
++ &pdcs_attr_size.attr,
++ &pdcs_attr_autoboot.attr,
++ &pdcs_attr_autosearch.attr,
++ &pdcs_attr_timer.attr,
++ &pdcs_attr_osid.attr,
++ &pdcs_attr_osdep1.attr,
++ &pdcs_attr_diagnostic.attr,
++ &pdcs_attr_fastsize.attr,
++ &pdcs_attr_osdep2.attr,
+ NULL,
+ };
+
+-static decl_subsys(paths, &ktype_pdcspath, NULL);
+-static decl_subsys(stable, NULL, NULL);
++static struct attribute_group pdcs_attr_group = {
++ .attrs = pdcs_subsys_attrs,
++};
+
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- }
++static struct kobject *stable_kobj;
++static struct kset *paths_kset;
--static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500usb_bbp_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u8 *value)
- {
- u16 reg;
+ /**
+ * pdcs_register_pathentries - Prepares path entries kobjects for sysfs usage.
+@@ -995,12 +991,12 @@ pdcs_register_pathentries(void)
+ if (err < 0)
+ continue;
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
+- if ((err = kobject_set_name(&entry->kobj, "%s", entry->name)))
+- return err;
+- kobj_set_kset_s(entry, paths_subsys);
+- if ((err = kobject_register(&entry->kobj)))
++ entry->kobj.kset = paths_kset;
++ err = kobject_init_and_add(&entry->kobj, &ktype_pdcspath, NULL,
++ "%s", entry->name);
++ if (err)
+ return err;
+-
+
- /*
- * Wait until the BBP becomes ready.
- */
-@@ -157,7 +180,7 @@ static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field16(®, PHY_CSR7_REG_ID, word);
- rt2x00_set_field16(®, PHY_CSR7_READ_CONTROL, 1);
-
-- rt2500usb_register_write(rt2x00dev, PHY_CSR7, reg);
-+ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR7, reg);
+ /* kobject is now registered */
+ write_lock(&entry->rw_lock);
+ entry->ready = 2;
+@@ -1012,6 +1008,7 @@ pdcs_register_pathentries(void)
+ }
- /*
- * Wait until the BBP becomes ready.
-@@ -166,14 +189,17 @@ static void rt2500usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
- if (rt2x00_get_field16(reg, PHY_CSR8_BUSY)) {
- ERROR(rt2x00dev, "PHY_CSR8 register busy. Read failed.\n");
- *value = 0xff;
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- return;
+ write_unlock(&entry->rw_lock);
++ kobject_uevent(&entry->kobj, KOBJ_ADD);
+ }
+
+ return 0;
+@@ -1029,7 +1026,7 @@ pdcs_unregister_pathentries(void)
+ for (i = 0; (entry = pdcspath_entries[i]); i++) {
+ read_lock(&entry->rw_lock);
+ if (entry->ready >= 2)
+- kobject_unregister(&entry->kobj);
++ kobject_put(&entry->kobj);
+ read_unlock(&entry->rw_lock);
}
-
-- rt2500usb_register_read(rt2x00dev, PHY_CSR7, ®);
-+ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR7, ®);
- *value = rt2x00_get_field16(reg, PHY_CSR7_DATA);
-+
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
}
-
--static void rt2500usb_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500usb_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u32 value)
+@@ -1041,8 +1038,7 @@ pdcs_unregister_pathentries(void)
+ static int __init
+ pdc_stable_init(void)
{
- u16 reg;
-@@ -182,20 +208,23 @@ static void rt2500usb_rf_write(const struct rt2x00_dev *rt2x00dev,
- if (!word)
- return;
-
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
-+
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
-- rt2500usb_register_read(rt2x00dev, PHY_CSR10, ®);
-+ rt2500usb_register_read_lock(rt2x00dev, PHY_CSR10, ®);
- if (!rt2x00_get_field16(reg, PHY_CSR10_RF_BUSY))
- goto rf_write;
- udelay(REGISTER_BUSY_DELAY);
- }
+- struct subsys_attribute *attr;
+- int i, rc = 0, error = 0;
++ int rc = 0, error = 0;
+ u32 result;
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- ERROR(rt2x00dev, "PHY_CSR10 register busy. Write failed.\n");
- return;
+ /* find the size of the stable storage */
+@@ -1062,21 +1058,24 @@ pdc_stable_init(void)
+ /* the actual result is 16 bits away */
+ pdcs_osid = (u16)(result >> 16);
- rf_write:
- reg = 0;
- rt2x00_set_field16(®, PHY_CSR9_RF_VALUE, value);
-- rt2500usb_register_write(rt2x00dev, PHY_CSR9, reg);
-+ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR9, reg);
+- /* For now we'll register the stable subsys within this driver */
+- if ((rc = firmware_register(&stable_subsys)))
++ /* For now we'll register the directory at /sys/firmware/stable */
++ stable_kobj = kobject_create_and_add("stable", firmware_kobj);
++ if (!stable_kobj) {
++ rc = -ENOMEM;
+ goto fail_firmreg;
++ }
- reg = 0;
- rt2x00_set_field16(®, PHY_CSR10_RF_VALUE, value >> 16);
-@@ -203,20 +232,22 @@ rf_write:
- rt2x00_set_field16(®, PHY_CSR10_RF_IF_SELECT, 0);
- rt2x00_set_field16(®, PHY_CSR10_RF_BUSY, 1);
+ /* Don't forget the root entries */
+- for (i = 0; (attr = pdcs_subsys_attrs[i]) && !error; i++)
+- if (attr->show)
+- error = subsys_create_file(&stable_subsys, attr);
+-
+- /* register the paths subsys as a subsystem of stable subsys */
+- kobj_set_kset_s(&paths_subsys, stable_subsys);
+- if ((rc = subsystem_register(&paths_subsys)))
+- goto fail_subsysreg;
++ error = sysfs_create_group(stable_kobj, pdcs_attr_group);
-- rt2500usb_register_write(rt2x00dev, PHY_CSR10, reg);
-+ rt2500usb_register_write_lock(rt2x00dev, PHY_CSR10, reg);
- rt2x00_rf_write(rt2x00dev, word, value);
+- /* now we create all "files" for the paths subsys */
++ /* register the paths kset as a child of the stable kset */
++ paths_kset = kset_create_and_add("paths", NULL, stable_kobj);
++ if (!paths_kset) {
++ rc = -ENOMEM;
++ goto fail_ksetreg;
++ }
+
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- }
-
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u16)) )
++ /* now we create all "files" for the paths kset */
+ if ((rc = pdcs_register_pathentries()))
+ goto fail_pdcsreg;
--static void rt2500usb_read_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500usb_read_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
+@@ -1084,10 +1083,10 @@ pdc_stable_init(void)
+
+ fail_pdcsreg:
+ pdcs_unregister_pathentries();
+- subsystem_unregister(&paths_subsys);
++ kset_unregister(paths_kset);
+
+-fail_subsysreg:
+- firmware_unregister(&stable_subsys);
++fail_ksetreg:
++ kobject_put(stable_kobj);
+
+ fail_firmreg:
+ printk(KERN_INFO PDCS_PREFIX " bailing out\n");
+@@ -1098,9 +1097,8 @@ static void __exit
+ pdc_stable_exit(void)
{
- rt2500usb_register_read(rt2x00dev, CSR_OFFSET(word), (u16 *) data);
+ pdcs_unregister_pathentries();
+- subsystem_unregister(&paths_subsys);
+-
+- firmware_unregister(&stable_subsys);
++ kset_unregister(paths_kset);
++ kobject_put(stable_kobj);
}
--static void rt2500usb_write_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt2500usb_write_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
- {
- rt2500usb_register_write(rt2x00dev, CSR_OFFSET(word), data);
-@@ -296,7 +327,8 @@ static void rt2500usb_config_type(struct rt2x00_dev *rt2x00dev, const int type,
- rt2500usb_register_read(rt2x00dev, TXRX_CSR19, ®);
- rt2x00_set_field16(®, TXRX_CSR19_TSF_COUNT, 1);
-- rt2x00_set_field16(®, TXRX_CSR19_TBCN, 1);
-+ rt2x00_set_field16(®, TXRX_CSR19_TBCN,
-+ (tsf_sync == TSF_SYNC_BEACON));
- rt2x00_set_field16(®, TXRX_CSR19_BEACON_GEN, 0);
- rt2x00_set_field16(®, TXRX_CSR19_TSF_SYNC, tsf_sync);
- rt2500usb_register_write(rt2x00dev, TXRX_CSR19, reg);
-@@ -385,7 +417,7 @@ static void rt2500usb_config_txpower(struct rt2x00_dev *rt2x00dev,
- }
+diff --git a/drivers/pci/hotplug/acpiphp_ibm.c b/drivers/pci/hotplug/acpiphp_ibm.c
+index 47d26b6..750ebd7 100644
+--- a/drivers/pci/hotplug/acpiphp_ibm.c
++++ b/drivers/pci/hotplug/acpiphp_ibm.c
+@@ -429,7 +429,7 @@ static int __init ibm_acpiphp_init(void)
+ int retval = 0;
+ acpi_status status;
+ struct acpi_device *device;
+- struct kobject *sysdir = &pci_hotplug_slots_subsys.kobj;
++ struct kobject *sysdir = &pci_hotplug_slots_kset->kobj;
- static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx, const int antenna_rx)
-+ struct antenna_setup *ant)
+ dbg("%s\n", __FUNCTION__);
+
+@@ -476,7 +476,7 @@ init_return:
+ static void __exit ibm_acpiphp_exit(void)
{
- u8 r2;
- u8 r14;
-@@ -400,8 +432,7 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
- /*
- * Configure the TX antenna.
- */
-- switch (antenna_tx) {
-- case ANTENNA_SW_DIVERSITY:
-+ switch (ant->tx) {
- case ANTENNA_HW_DIVERSITY:
- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 1);
- rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 1);
-@@ -412,6 +443,13 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 0);
- rt2x00_set_field16(&csr6, PHY_CSR6_OFDM, 0);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r2, BBP_R2_TX_ANTENNA, 2);
- rt2x00_set_field16(&csr5, PHY_CSR5_CCK, 2);
-@@ -422,14 +460,20 @@ static void rt2500usb_config_antenna(struct rt2x00_dev *rt2x00dev,
- /*
- * Configure the RX antenna.
- */
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 1);
- break;
- case ANTENNA_A:
- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 0);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
- rt2x00_set_field8(&r14, BBP_R14_RX_ANTENNA, 2);
- break;
-@@ -487,9 +531,7 @@ static void rt2500usb_config(struct rt2x00_dev *rt2x00dev,
- rt2500usb_config_txpower(rt2x00dev,
- libconf->conf->power_level);
- if (flags & CONFIG_UPDATE_ANTENNA)
-- rt2500usb_config_antenna(rt2x00dev,
-- libconf->conf->antenna_sel_tx,
-- libconf->conf->antenna_sel_rx);
-+ rt2500usb_config_antenna(rt2x00dev, &libconf->ant);
- if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
- rt2500usb_config_duration(rt2x00dev, libconf);
- }
-@@ -507,18 +549,10 @@ static void rt2500usb_enable_led(struct rt2x00_dev *rt2x00dev)
- rt2500usb_register_write(rt2x00dev, MAC_CSR21, reg);
+ acpi_status status;
+- struct kobject *sysdir = &pci_hotplug_slots_subsys.kobj;
++ struct kobject *sysdir = &pci_hotplug_slots_kset->kobj;
- rt2500usb_register_read(rt2x00dev, MAC_CSR20, ®);
+ dbg("%s\n", __FUNCTION__);
+
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index 01c351c..47bb0e1 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -61,7 +61,7 @@ static int debug;
+
+ static LIST_HEAD(pci_hotplug_slot_list);
+
+-struct kset pci_hotplug_slots_subsys;
++struct kset *pci_hotplug_slots_kset;
+
+ static ssize_t hotplug_slot_attr_show(struct kobject *kobj,
+ struct attribute *attr, char *buf)
+@@ -96,8 +96,6 @@ static struct kobj_type hotplug_slot_ktype = {
+ .release = &hotplug_slot_release,
+ };
+
+-decl_subsys_name(pci_hotplug_slots, slots, &hotplug_slot_ktype, NULL);
-
-- if (rt2x00dev->led_mode == LED_MODE_TXRX_ACTIVITY) {
-- rt2x00_set_field16(®, MAC_CSR20_LINK, 1);
-- rt2x00_set_field16(®, MAC_CSR20_ACTIVITY, 0);
-- } else if (rt2x00dev->led_mode == LED_MODE_ASUS) {
-- rt2x00_set_field16(®, MAC_CSR20_LINK, 0);
-- rt2x00_set_field16(®, MAC_CSR20_ACTIVITY, 1);
-- } else {
-- rt2x00_set_field16(®, MAC_CSR20_LINK, 1);
-- rt2x00_set_field16(®, MAC_CSR20_ACTIVITY, 1);
-- }
+ /* these strings match up with the values in pci_bus_speed */
+ static char *pci_bus_speed_strings[] = {
+ "33 MHz PCI", /* 0x00 */
+@@ -632,18 +630,19 @@ int pci_hp_register (struct hotplug_slot *slot)
+ return -EINVAL;
+ }
+
+- kobject_set_name(&slot->kobj, "%s", slot->name);
+- kobj_set_kset_s(slot, pci_hotplug_slots_subsys);
-
-+ rt2x00_set_field16(®, MAC_CSR20_LINK,
-+ (rt2x00dev->led_mode != LED_MODE_ASUS));
-+ rt2x00_set_field16(®, MAC_CSR20_ACTIVITY,
-+ (rt2x00dev->led_mode != LED_MODE_TXRX_ACTIVITY));
- rt2500usb_register_write(rt2x00dev, MAC_CSR20, reg);
+ /* this can fail if we have already registered a slot with the same name */
+- if (kobject_register(&slot->kobj)) {
+- err("Unable to register kobject");
++ slot->kobj.kset = pci_hotplug_slots_kset;
++ result = kobject_init_and_add(&slot->kobj, &hotplug_slot_ktype, NULL,
++ "%s", slot->name);
++ if (result) {
++ err("Unable to register kobject '%s'", slot->name);
+ return -EINVAL;
+ }
+-
++
+ list_add (&slot->slot_list, &pci_hotplug_slot_list);
+
+ result = fs_add_slot (slot);
++ kobject_uevent(&slot->kobj, KOBJ_ADD);
+ dbg ("Added slot %s to the list\n", slot->name);
+ return result;
}
+@@ -672,7 +671,7 @@ int pci_hp_deregister (struct hotplug_slot *slot)
-@@ -535,7 +569,8 @@ static void rt2500usb_disable_led(struct rt2x00_dev *rt2x00dev)
- /*
- * Link tuning
- */
--static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev)
-+static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual)
+ fs_remove_slot (slot);
+ dbg ("Removed slot %s from the list\n", slot->name);
+- kobject_unregister(&slot->kobj);
++ kobject_put(&slot->kobj);
+ return 0;
+ }
+
+@@ -700,11 +699,15 @@ int __must_check pci_hp_change_slot_info(struct hotplug_slot *slot,
+ static int __init pci_hotplug_init (void)
{
- u16 reg;
+ int result;
++ struct kset *pci_bus_kset;
-@@ -543,14 +578,13 @@ static void rt2500usb_link_stats(struct rt2x00_dev *rt2x00dev)
- * Update FCS error count from register.
- */
- rt2500usb_register_read(rt2x00dev, STA_CSR0, ®);
-- rt2x00dev->link.rx_failed = rt2x00_get_field16(reg, STA_CSR0_FCS_ERROR);
-+ qual->rx_failed = rt2x00_get_field16(reg, STA_CSR0_FCS_ERROR);
+- kobj_set_kset_s(&pci_hotplug_slots_subsys, pci_bus_type.subsys);
+- result = subsystem_register(&pci_hotplug_slots_subsys);
+- if (result) {
+- err("Register subsys with error %d\n", result);
++ pci_bus_kset = bus_get_kset(&pci_bus_type);
++
++ pci_hotplug_slots_kset = kset_create_and_add("slots", NULL,
++ &pci_bus_kset->kobj);
++ if (!pci_hotplug_slots_kset) {
++ result = -ENOMEM;
++ err("Register subsys error\n");
+ goto exit;
+ }
+ result = cpci_hotplug_init(debug);
+@@ -715,9 +718,9 @@ static int __init pci_hotplug_init (void)
- /*
- * Update False CCA count from register.
- */
- rt2500usb_register_read(rt2x00dev, STA_CSR3, ®);
-- rt2x00dev->link.false_cca =
-- rt2x00_get_field16(reg, STA_CSR3_FALSE_CCA_ERROR);
-+ qual->false_cca = rt2x00_get_field16(reg, STA_CSR3_FALSE_CCA_ERROR);
+ info (DRIVER_DESC " version: " DRIVER_VERSION "\n");
+ goto exit;
+-
++
+ err_subsys:
+- subsystem_unregister(&pci_hotplug_slots_subsys);
++ kset_unregister(pci_hotplug_slots_kset);
+ exit:
+ return result;
+ }
+@@ -725,7 +728,7 @@ exit:
+ static void __exit pci_hotplug_exit (void)
+ {
+ cpci_hotplug_exit();
+- subsystem_unregister(&pci_hotplug_slots_subsys);
++ kset_unregister(pci_hotplug_slots_kset);
}
- static void rt2500usb_reset_tuner(struct rt2x00_dev *rt2x00dev)
-@@ -673,10 +707,10 @@ static void rt2500usb_link_tuner(struct rt2x00_dev *rt2x00dev)
- if (r17 > up_bound) {
- rt2500usb_bbp_write(rt2x00dev, 17, up_bound);
- rt2x00dev->link.vgc_level = up_bound;
-- } else if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
-+ } else if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
- rt2500usb_bbp_write(rt2x00dev, 17, ++r17);
- rt2x00dev->link.vgc_level = r17;
-- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
-+ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
- rt2500usb_bbp_write(rt2x00dev, 17, --r17);
- rt2x00dev->link.vgc_level = r17;
- }
-@@ -755,9 +789,11 @@ static int rt2500usb_init_registers(struct rt2x00_dev *rt2x00dev)
+ module_init(pci_hotplug_init);
+@@ -737,7 +740,7 @@ MODULE_LICENSE("GPL");
+ module_param(debug, bool, 0644);
+ MODULE_PARM_DESC(debug, "Debugging mode enabled or not");
- if (rt2x00_rev(&rt2x00dev->chip) >= RT2570_VERSION_C) {
- rt2500usb_register_read(rt2x00dev, PHY_CSR2, ®);
-- reg &= ~0x0002;
-+ rt2x00_set_field16(®, PHY_CSR2_LNA, 0);
- } else {
-- reg = 0x3002;
-+ reg = 0;
-+ rt2x00_set_field16(®, PHY_CSR2_LNA, 1);
-+ rt2x00_set_field16(®, PHY_CSR2_LNA_MODE, 3);
- }
- rt2500usb_register_write(rt2x00dev, PHY_CSR2, reg);
+-EXPORT_SYMBOL_GPL(pci_hotplug_slots_subsys);
++EXPORT_SYMBOL_GPL(pci_hotplug_slots_kset);
+ EXPORT_SYMBOL_GPL(pci_hp_register);
+ EXPORT_SYMBOL_GPL(pci_hp_deregister);
+ EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);
+diff --git a/drivers/pci/hotplug/rpadlpar_sysfs.c b/drivers/pci/hotplug/rpadlpar_sysfs.c
+index a080fed..e32148a 100644
+--- a/drivers/pci/hotplug/rpadlpar_sysfs.c
++++ b/drivers/pci/hotplug/rpadlpar_sysfs.c
+@@ -23,44 +23,13 @@
-@@ -884,8 +920,6 @@ static int rt2500usb_enable_radio(struct rt2x00_dev *rt2x00dev)
- return -EIO;
- }
+ #define MAX_DRC_NAME_LEN 64
-- rt2x00usb_enable_radio(rt2x00dev);
+-/* Store return code of dlpar operation in attribute struct */
+-struct dlpar_io_attr {
+- int rc;
+- struct attribute attr;
+- ssize_t (*store)(struct dlpar_io_attr *dlpar_attr, const char *buf,
+- size_t nbytes);
+-};
+
+-/* Common show callback for all attrs, display the return code
+- * of the dlpar op */
+-static ssize_t
+-dlpar_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
+-{
+- struct dlpar_io_attr *dlpar_attr = container_of(attr,
+- struct dlpar_io_attr, attr);
+- return sprintf(buf, "%d\n", dlpar_attr->rc);
+-}
-
- /*
- * Enable LED
- */
-@@ -988,12 +1022,12 @@ static int rt2500usb_set_device_state(struct rt2x00_dev *rt2x00dev,
- * TX descriptor initialization
- */
- static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-+ struct sk_buff *skb,
- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
- struct ieee80211_tx_control *control)
+-static ssize_t
+-dlpar_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char *buf, size_t nbytes)
+-{
+- struct dlpar_io_attr *dlpar_attr = container_of(attr,
+- struct dlpar_io_attr, attr);
+- return dlpar_attr->store ?
+- dlpar_attr->store(dlpar_attr, buf, nbytes) : -EIO;
+-}
+-
+-static struct sysfs_ops dlpar_attr_sysfs_ops = {
+- .show = dlpar_attr_show,
+- .store = dlpar_attr_store,
+-};
+-
+-static ssize_t add_slot_store(struct dlpar_io_attr *dlpar_attr,
+- const char *buf, size_t nbytes)
++static ssize_t add_slot_store(struct kobject *kobj, struct kobj_attribute *attr,
++ const char *buf, size_t nbytes)
{
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ __le32 *txd = skbdesc->desc;
- u32 word;
+ char drc_name[MAX_DRC_NAME_LEN];
+ char *end;
++ int rc;
- /*
-@@ -1018,7 +1052,7 @@ static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
- test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_ACK,
-- !(control->flags & IEEE80211_TXCTL_NO_ACK));
-+ test_bit(ENTRY_TXD_ACK, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
- test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_OFDM,
-@@ -1026,7 +1060,7 @@ static void rt2500usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_NEW_SEQ,
- !!(control->flags & IEEE80211_TXCTL_FIRST_FRAGMENT));
- rt2x00_set_field32(&word, TXD_W0_IFS, desc->ifs);
-- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
-+ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
- rt2x00_set_field32(&word, TXD_W0_CIPHER, CIPHER_NONE);
- rt2x00_desc_write(txd, 0, word);
- }
-@@ -1079,10 +1113,10 @@ static void rt2500usb_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
- static void rt2500usb_fill_rxdone(struct data_entry *entry,
- struct rxdata_entry_desc *desc)
- {
-+ struct skb_desc *skbdesc = get_skb_desc(entry->skb);
- struct urb *urb = entry->priv;
-- struct data_desc *rxd = (struct data_desc *)(entry->skb->data +
-- (urb->actual_length -
-- entry->ring->desc_size));
-+ __le32 *rxd = (__le32 *)(entry->skb->data +
-+ (urb->actual_length - entry->ring->desc_size));
- u32 word0;
- u32 word1;
+ if (nbytes >= MAX_DRC_NAME_LEN)
+ return 0;
+@@ -72,15 +41,25 @@ static ssize_t add_slot_store(struct dlpar_io_attr *dlpar_attr,
+ end = &drc_name[nbytes];
+ *end = '\0';
-@@ -1103,8 +1137,15 @@ static void rt2500usb_fill_rxdone(struct data_entry *entry,
- entry->ring->rt2x00dev->rssi_offset;
- desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
- desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
-+ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
+- dlpar_attr->rc = dlpar_add_slot(drc_name);
++ rc = dlpar_add_slot(drc_name);
++ if (rc)
++ return rc;
-- return;
-+ /*
-+ * Set descriptor and data pointer.
-+ */
-+ skbdesc->desc = entry->skb->data + desc->size;
-+ skbdesc->desc_len = entry->ring->desc_size;
-+ skbdesc->data = entry->skb->data;
-+ skbdesc->data_len = desc->size;
+ return nbytes;
}
- /*
-@@ -1163,9 +1204,12 @@ static int rt2500usb_validate_eeprom(struct rt2x00_dev *rt2x00dev)
- rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
- if (word == 0xffff) {
- rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 0);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 0);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE, 0);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
-+ ANTENNA_SW_DIVERSITY);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
-+ ANTENNA_SW_DIVERSITY);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_LED_MODE,
-+ LED_MODE_DEFAULT);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_RF_TYPE, RF2522);
-@@ -1275,12 +1319,23 @@ static int rt2500usb_init_eeprom(struct rt2x00_dev *rt2x00dev)
- /*
- * Identify default antenna configuration.
- */
-- rt2x00dev->hw->conf.antenna_sel_tx =
-+ rt2x00dev->default_ant.tx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
-- rt2x00dev->hw->conf.antenna_sel_rx =
-+ rt2x00dev->default_ant.rx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
-
- /*
-+ * When the eeprom indicates SW_DIVERSITY use HW_DIVERSITY instead.
-+ * I am not 100% sure about this, but the legacy drivers do not
-+ * indicate antenna swapping in software is required when
-+ * diversity is enabled.
-+ */
-+ if (rt2x00dev->default_ant.tx == ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->default_ant.tx = ANTENNA_HW_DIVERSITY;
-+ if (rt2x00dev->default_ant.rx == ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->default_ant.rx = ANTENNA_HW_DIVERSITY;
+-static ssize_t remove_slot_store(struct dlpar_io_attr *dlpar_attr,
+- const char *buf, size_t nbytes)
++static ssize_t add_slot_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
++{
++ return sprintf(buf, "0\n");
++}
+
-+ /*
- * Store led mode, for correct led behaviour.
- */
- rt2x00dev->led_mode =
-@@ -1562,7 +1617,6 @@ static void rt2500usb_configure_filter(struct ieee80211_hw *hw,
- struct dev_addr_list *mc_list)
++static ssize_t remove_slot_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t nbytes)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct interface *intf = &rt2x00dev->interface;
- u16 reg;
-
- /*
-@@ -1581,22 +1635,19 @@ static void rt2500usb_configure_filter(struct ieee80211_hw *hw,
- * Apply some rules to the filters:
- * - Some filters imply different filters to be set.
- * - Some things we can't filter out at all.
-- * - Some filters are set based on interface type.
- */
- if (mc_count)
- *total_flags |= FIF_ALLMULTI;
- if (*total_flags & FIF_OTHER_BSS ||
- *total_flags & FIF_PROMISC_IN_BSS)
- *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
-- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
-- *total_flags |= FIF_PROMISC_IN_BSS;
+ char drc_name[MAX_DRC_NAME_LEN];
++ int rc;
+ char *end;
- /*
- * Check if there is any work left for us.
- */
-- if (intf->filter == *total_flags)
-+ if (rt2x00dev->packet_filter == *total_flags)
- return;
-- intf->filter = *total_flags;
-+ rt2x00dev->packet_filter = *total_flags;
+ if (nbytes >= MAX_DRC_NAME_LEN)
+@@ -93,22 +72,24 @@ static ssize_t remove_slot_store(struct dlpar_io_attr *dlpar_attr,
+ end = &drc_name[nbytes];
+ *end = '\0';
- /*
- * When in atomic context, reschedule and let rt2x00lib
-@@ -1638,8 +1689,8 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
- struct rt2x00_dev *rt2x00dev = hw->priv;
- struct usb_device *usb_dev =
- interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
-- struct data_ring *ring =
-- rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
-+ struct skb_desc *desc;
-+ struct data_ring *ring;
- struct data_entry *beacon;
- struct data_entry *guardian;
- int pipe = usb_sndbulkpipe(usb_dev, 1);
-@@ -1651,6 +1702,7 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
- * initialization.
- */
- control->queue = IEEE80211_TX_QUEUE_BEACON;
-+ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
+- dlpar_attr->rc = dlpar_remove_slot(drc_name);
++ rc = dlpar_remove_slot(drc_name);
++ if (rc)
++ return rc;
- /*
- * Obtain 2 entries, one for the guardian byte,
-@@ -1661,23 +1713,34 @@ static int rt2500usb_beacon_update(struct ieee80211_hw *hw,
- beacon = rt2x00_get_data_entry(ring);
+ return nbytes;
+ }
- /*
-- * First we create the beacon.
-+ * Add the descriptor in front of the skb.
- */
- skb_push(skb, ring->desc_size);
- memset(skb->data, 0, ring->desc_size);
+-static struct dlpar_io_attr add_slot_attr = {
+- .rc = 0,
+- .attr = { .name = ADD_SLOT_ATTR_NAME, .mode = 0644, },
+- .store = add_slot_store,
+-};
++static ssize_t remove_slot_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
++{
++ return sprintf(buf, "0\n");
++}
-- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
-- (struct ieee80211_hdr *)(skb->data +
-- ring->desc_size),
-- skb->len - ring->desc_size, control);
-+ /*
-+ * Fill in skb descriptor
-+ */
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len - ring->desc_size;
-+ desc->desc = skb->data;
-+ desc->data = skb->data + ring->desc_size;
-+ desc->ring = ring;
-+ desc->entry = beacon;
+-static struct dlpar_io_attr remove_slot_attr = {
+- .rc = 0,
+- .attr = { .name = REMOVE_SLOT_ATTR_NAME, .mode = 0644},
+- .store = remove_slot_store,
+-};
++static struct kobj_attribute add_slot_attr =
++ __ATTR(ADD_SLOT_ATTR_NAME, 0644, add_slot_show, add_slot_store);
+
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
-
-+ /*
-+ * USB devices cannot blindly pass the skb->len as the
-+ * length of the data to usb_fill_bulk_urb. Pass the skb
-+ * to the driver to determine what the length should be.
-+ */
- length = rt2500usb_get_tx_data_len(rt2x00dev, skb);
++static struct kobj_attribute remove_slot_attr =
++ __ATTR(REMOVE_SLOT_ATTR_NAME, 0644, remove_slot_show, remove_slot_store);
- usb_fill_bulk_urb(beacon->priv, usb_dev, pipe,
- skb->data, length, rt2500usb_beacondone, beacon);
+ static struct attribute *default_attrs[] = {
+ &add_slot_attr.attr,
+@@ -116,37 +97,29 @@ static struct attribute *default_attrs[] = {
+ NULL,
+ };
-- beacon->skb = skb;
+-static void dlpar_io_release(struct kobject *kobj)
+-{
+- /* noop */
+- return;
+-}
-
- /*
- * Second we need to create the guardian byte.
- * We only need a single byte, so lets recycle
-@@ -1710,7 +1773,7 @@ static const struct ieee80211_ops rt2500usb_mac80211_ops = {
- .config_interface = rt2x00mac_config_interface,
- .configure_filter = rt2500usb_configure_filter,
- .get_stats = rt2x00mac_get_stats,
-- .erp_ie_changed = rt2x00mac_erp_ie_changed,
-+ .bss_info_changed = rt2x00mac_bss_info_changed,
- .conf_tx = rt2x00mac_conf_tx,
- .get_tx_stats = rt2x00mac_get_tx_stats,
- .beacon_update = rt2500usb_beacon_update,
-@@ -1720,6 +1783,8 @@ static const struct rt2x00lib_ops rt2500usb_rt2x00_ops = {
- .probe_hw = rt2500usb_probe_hw,
- .initialize = rt2x00usb_initialize,
- .uninitialize = rt2x00usb_uninitialize,
-+ .init_rxentry = rt2x00usb_init_rxentry,
-+ .init_txentry = rt2x00usb_init_txentry,
- .set_device_state = rt2500usb_set_device_state,
- .link_stats = rt2500usb_link_stats,
- .reset_tuner = rt2500usb_reset_tuner,
-@@ -1737,7 +1802,7 @@ static const struct rt2x00lib_ops rt2500usb_rt2x00_ops = {
+-struct kobj_type ktype_dlpar_io = {
+- .release = dlpar_io_release,
+- .sysfs_ops = &dlpar_attr_sysfs_ops,
+- .default_attrs = default_attrs,
++static struct attribute_group dlpar_attr_group = {
++ .attrs = default_attrs,
};
- static const struct rt2x00_ops rt2500usb_ops = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .rxd_size = RXD_DESC_SIZE,
- .txd_size = TXD_DESC_SIZE,
- .eeprom_size = EEPROM_SIZE,
-@@ -1809,7 +1874,7 @@ MODULE_DEVICE_TABLE(usb, rt2500usb_device_table);
- MODULE_LICENSE("GPL");
-
- static struct usb_driver rt2500usb_driver = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .id_table = rt2500usb_device_table,
- .probe = rt2x00usb_probe,
- .disconnect = rt2x00usb_disconnect,
-diff --git a/drivers/net/wireless/rt2x00/rt2500usb.h b/drivers/net/wireless/rt2x00/rt2500usb.h
-index b18d56e..9e04337 100644
---- a/drivers/net/wireless/rt2x00/rt2500usb.h
-+++ b/drivers/net/wireless/rt2x00/rt2500usb.h
-@@ -430,10 +430,21 @@
+-struct kset dlpar_io_kset = {
+- .kobj = {.ktype = &ktype_dlpar_io,
+- .parent = &pci_hotplug_slots_subsys.kobj},
+- .ktype = &ktype_dlpar_io,
+-};
++static struct kobject *dlpar_kobj;
- /*
- * MAC configuration registers.
-+ */
-+
-+/*
- * PHY_CSR2: TX MAC configuration.
-- * PHY_CSR3: RX MAC configuration.
-+ * NOTE: Both register fields are complete dummy,
-+ * documentation and legacy drivers are unclear un
-+ * what this register means or what fields exists.
- */
- #define PHY_CSR2 0x04c4
-+#define PHY_CSR2_LNA FIELD16(0x0002)
-+#define PHY_CSR2_LNA_MODE FIELD16(0x3000)
+ int dlpar_sysfs_init(void)
+ {
+- kobject_set_name(&dlpar_io_kset.kobj, DLPAR_KOBJ_NAME);
+- if (kset_register(&dlpar_io_kset)) {
+- printk(KERN_ERR "rpadlpar_io: cannot register kset for %s\n",
+- kobject_name(&dlpar_io_kset.kobj));
++ int error;
+
-+/*
-+ * PHY_CSR3: RX MAC configuration.
-+ */
- #define PHY_CSR3 0x04c6
++ dlpar_kobj = kobject_create_and_add(DLPAR_KOBJ_NAME,
++ &pci_hotplug_slots_kset->kobj);
++ if (!dlpar_kobj)
+ return -EINVAL;
+- }
- /*
-@@ -692,8 +703,8 @@
- /*
- * DMA descriptor defines.
- */
--#define TXD_DESC_SIZE ( 5 * sizeof(struct data_desc) )
--#define RXD_DESC_SIZE ( 4 * sizeof(struct data_desc) )
-+#define TXD_DESC_SIZE ( 5 * sizeof(__le32) )
-+#define RXD_DESC_SIZE ( 4 * sizeof(__le32) )
+- return 0;
++ error = sysfs_create_group(dlpar_kobj, &dlpar_attr_group);
++ if (error)
++ kobject_put(dlpar_kobj);
++ return error;
+ }
+ void dlpar_sysfs_exit(void)
+ {
+- kset_unregister(&dlpar_io_kset);
++ sysfs_remove_group(dlpar_kobj, &dlpar_attr_group);
++ kobject_put(dlpar_kobj);
+ }
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index 6d1a216..c4fa35d 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -1,6 +1,11 @@
/*
- * TX descriptor format for TX, PRIO, ATIM and Beacon Ring.
-diff --git a/drivers/net/wireless/rt2x00/rt2x00.h b/drivers/net/wireless/rt2x00/rt2x00.h
-index c8f16f1..05927b9 100644
---- a/drivers/net/wireless/rt2x00/rt2x00.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00.h
-@@ -31,6 +31,8 @@
- #include <linux/skbuff.h>
- #include <linux/workqueue.h>
- #include <linux/firmware.h>
-+#include <linux/mutex.h>
-+#include <linux/etherdevice.h>
-
- #include <net/mac80211.h>
+ * drivers/pci/pci-driver.c
+ *
++ * (C) Copyright 2002-2004, 2007 Greg Kroah-Hartman <greg at kroah.com>
++ * (C) Copyright 2007 Novell Inc.
++ *
++ * Released under the GPL v2 only.
++ *
+ */
-@@ -40,9 +42,8 @@
+ #include <linux/pci.h>
+@@ -96,17 +101,21 @@ pci_create_newid_file(struct pci_driver *drv)
+ {
+ int error = 0;
+ if (drv->probe != NULL)
+- error = sysfs_create_file(&drv->driver.kobj,
+- &driver_attr_new_id.attr);
++ error = driver_create_file(&drv->driver, &driver_attr_new_id);
+ return error;
+ }
- /*
- * Module information.
-- * DRV_NAME should be set within the individual module source files.
- */
--#define DRV_VERSION "2.0.10"
-+#define DRV_VERSION "2.0.14"
- #define DRV_PROJECT "http://rt2x00.serialmonkey.com"
++static void pci_remove_newid_file(struct pci_driver *drv)
++{
++ driver_remove_file(&drv->driver, &driver_attr_new_id);
++}
+ #else /* !CONFIG_HOTPLUG */
+ static inline void pci_free_dynids(struct pci_driver *drv) {}
+ static inline int pci_create_newid_file(struct pci_driver *drv)
+ {
+ return 0;
+ }
++static inline void pci_remove_newid_file(struct pci_driver *drv) {}
+ #endif
- /*
-@@ -55,7 +56,7 @@
+ /**
+@@ -352,50 +361,6 @@ static void pci_device_shutdown(struct device *dev)
+ drv->shutdown(pci_dev);
+ }
- #define DEBUG_PRINTK_PROBE(__kernlvl, __lvl, __msg, __args...) \
- printk(__kernlvl "%s -> %s: %s - " __msg, \
-- DRV_NAME, __FUNCTION__, __lvl, ##__args)
-+ KBUILD_MODNAME, __FUNCTION__, __lvl, ##__args)
+-#define kobj_to_pci_driver(obj) container_of(obj, struct device_driver, kobj)
+-#define attr_to_driver_attribute(obj) container_of(obj, struct driver_attribute, attr)
+-
+-static ssize_t
+-pci_driver_attr_show(struct kobject * kobj, struct attribute *attr, char *buf)
+-{
+- struct device_driver *driver = kobj_to_pci_driver(kobj);
+- struct driver_attribute *dattr = attr_to_driver_attribute(attr);
+- ssize_t ret;
+-
+- if (!get_driver(driver))
+- return -ENODEV;
+-
+- ret = dattr->show ? dattr->show(driver, buf) : -EIO;
+-
+- put_driver(driver);
+- return ret;
+-}
+-
+-static ssize_t
+-pci_driver_attr_store(struct kobject * kobj, struct attribute *attr,
+- const char *buf, size_t count)
+-{
+- struct device_driver *driver = kobj_to_pci_driver(kobj);
+- struct driver_attribute *dattr = attr_to_driver_attribute(attr);
+- ssize_t ret;
+-
+- if (!get_driver(driver))
+- return -ENODEV;
+-
+- ret = dattr->store ? dattr->store(driver, buf, count) : -EIO;
+-
+- put_driver(driver);
+- return ret;
+-}
+-
+-static struct sysfs_ops pci_driver_sysfs_ops = {
+- .show = pci_driver_attr_show,
+- .store = pci_driver_attr_store,
+-};
+-static struct kobj_type pci_driver_kobj_type = {
+- .sysfs_ops = &pci_driver_sysfs_ops,
+-};
+-
+ /**
+ * __pci_register_driver - register a new pci driver
+ * @drv: the driver structure to register
+@@ -417,7 +382,6 @@ int __pci_register_driver(struct pci_driver *drv, struct module *owner,
+ drv->driver.bus = &pci_bus_type;
+ drv->driver.owner = owner;
+ drv->driver.mod_name = mod_name;
+- drv->driver.kobj.ktype = &pci_driver_kobj_type;
- #ifdef CONFIG_RT2X00_DEBUG
- #define DEBUG_PRINTK(__dev, __kernlvl, __lvl, __msg, __args...) \
-@@ -133,20 +134,26 @@
- */
- static inline int is_rts_frame(u16 fc)
+ spin_lock_init(&drv->dynids.lock);
+ INIT_LIST_HEAD(&drv->dynids.list);
+@@ -447,6 +411,7 @@ int __pci_register_driver(struct pci_driver *drv, struct module *owner,
+ void
+ pci_unregister_driver(struct pci_driver *drv)
{
-- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
-- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_RTS));
-+ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
-+ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_RTS));
++ pci_remove_newid_file(drv);
+ driver_unregister(&drv->driver);
+ pci_free_dynids(drv);
}
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index c5ca313..5fd5852 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1210,16 +1210,19 @@ static void __init pci_sort_breadthfirst_klist(void)
+ struct klist_node *n;
+ struct device *dev;
+ struct pci_dev *pdev;
++ struct klist *device_klist;
- static inline int is_cts_frame(u16 fc)
- {
-- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
-- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_CTS));
-+ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_CTL) &&
-+ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_CTS));
+- spin_lock(&pci_bus_type.klist_devices.k_lock);
+- list_for_each_safe(pos, tmp, &pci_bus_type.klist_devices.k_list) {
++ device_klist = bus_get_device_klist(&pci_bus_type);
++
++ spin_lock(&device_klist->k_lock);
++ list_for_each_safe(pos, tmp, &device_klist->k_list) {
+ n = container_of(pos, struct klist_node, n_node);
+ dev = container_of(n, struct device, knode_bus);
+ pdev = to_pci_dev(dev);
+ pci_insertion_sort_klist(pdev, &sorted_devices);
+ }
+- list_splice(&sorted_devices, &pci_bus_type.klist_devices.k_list);
+- spin_unlock(&pci_bus_type.klist_devices.k_lock);
++ list_splice(&sorted_devices, &device_klist->k_list);
++ spin_unlock(&device_klist->k_lock);
}
- static inline int is_probe_resp(u16 fc)
+ static void __init pci_insertion_sort_devices(struct pci_dev *a, struct list_head *list)
+diff --git a/drivers/pcmcia/ds.c b/drivers/pcmcia/ds.c
+index 5cf89a9..15c18f5 100644
+--- a/drivers/pcmcia/ds.c
++++ b/drivers/pcmcia/ds.c
+@@ -312,8 +312,7 @@ pcmcia_create_newid_file(struct pcmcia_driver *drv)
{
-- return !!(((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
-- ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_PROBE_RESP));
-+ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
-+ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_PROBE_RESP));
-+}
-+
-+static inline int is_beacon(u16 fc)
-+{
-+ return (((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_MGMT) &&
-+ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_BEACON));
+ int error = 0;
+ if (drv->probe != NULL)
+- error = sysfs_create_file(&drv->drv.kobj,
+- &driver_attr_new_id.attr);
++ error = driver_create_file(&drv->drv, &driver_attr_new_id);
+ return error;
}
- /*
-@@ -180,18 +187,17 @@ struct rf_channel {
- };
+diff --git a/drivers/pcmcia/pxa2xx_base.c b/drivers/pcmcia/pxa2xx_base.c
+index 874923f..e439044 100644
+--- a/drivers/pcmcia/pxa2xx_base.c
++++ b/drivers/pcmcia/pxa2xx_base.c
+@@ -29,6 +29,7 @@
+ #include <asm/irq.h>
+ #include <asm/system.h>
+ #include <asm/arch/pxa-regs.h>
++#include <asm/arch/pxa2xx-regs.h>
- /*
-- * To optimize the quality of the link we need to store
-- * the quality of received frames and periodically
-- * optimize the link.
-+ * Antenna setup values.
- */
--struct link {
-- /*
-- * Link tuner counter
-- * The number of times the link has been tuned
-- * since the radio has been switched on.
-- */
-- u32 count;
-+struct antenna_setup {
-+ enum antenna rx;
-+ enum antenna tx;
-+};
+ #include <pcmcia/cs_types.h>
+ #include <pcmcia/ss.h>
+diff --git a/drivers/pnp/pnpbios/bioscalls.c b/drivers/pnp/pnpbios/bioscalls.c
+index 5dba68f..a8364d8 100644
+--- a/drivers/pnp/pnpbios/bioscalls.c
++++ b/drivers/pnp/pnpbios/bioscalls.c
+@@ -61,7 +61,7 @@ set_base(gdt[(selname) >> 3], (u32)(address)); \
+ set_limit(gdt[(selname) >> 3], size); \
+ } while(0)
-+/*
-+ * Quality statistics about the currently active link.
-+ */
-+struct link_qual {
- /*
- * Statistics required for Link tuning.
- * For the average RSSI value we use the "Walking average" approach.
-@@ -211,7 +217,6 @@ struct link {
- * the new values correctly allowing a effective link tuning.
- */
- int avg_rssi;
-- int vgc_level;
- int false_cca;
+-static struct desc_struct bad_bios_desc = { 0, 0x00409200 };
++static struct desc_struct bad_bios_desc;
- /*
-@@ -240,6 +245,72 @@ struct link {
- #define WEIGHT_RSSI 20
- #define WEIGHT_RX 40
- #define WEIGHT_TX 40
-+};
-+
-+/*
-+ * Antenna settings about the currently active link.
-+ */
-+struct link_ant {
-+ /*
-+ * Antenna flags
-+ */
-+ unsigned int flags;
-+#define ANTENNA_RX_DIVERSITY 0x00000001
-+#define ANTENNA_TX_DIVERSITY 0x00000002
-+#define ANTENNA_MODE_SAMPLE 0x00000004
-+
-+ /*
-+ * Currently active TX/RX antenna setup.
-+ * When software diversity is used, this will indicate
-+ * which antenna is actually used at this time.
-+ */
-+ struct antenna_setup active;
-+
-+ /*
-+ * RSSI information for the different antenna's.
-+ * These statistics are used to determine when
-+ * to switch antenna when using software diversity.
-+ *
-+ * rssi[0] -> Antenna A RSSI
-+ * rssi[1] -> Antenna B RSSI
-+ */
-+ int rssi_history[2];
-+
-+ /*
-+ * Current RSSI average of the currently active antenna.
-+ * Similar to the avg_rssi in the link_qual structure
-+ * this value is updated by using the walking average.
-+ */
-+ int rssi_ant;
-+};
-+
-+/*
-+ * To optimize the quality of the link we need to store
-+ * the quality of received frames and periodically
-+ * optimize the link.
-+ */
-+struct link {
-+ /*
-+ * Link tuner counter
-+ * The number of times the link has been tuned
-+ * since the radio has been switched on.
-+ */
-+ u32 count;
-+
-+ /*
-+ * Quality measurement values.
-+ */
-+ struct link_qual qual;
-+
-+ /*
-+ * TX/RX antenna setup.
-+ */
-+ struct link_ant ant;
+ /*
+ * At some point we want to use this stack frame pointer to unwind
+@@ -477,6 +477,9 @@ void pnpbios_calls_init(union pnp_bios_install_struct *header)
+ pnp_bios_callpoint.offset = header->fields.pm16offset;
+ pnp_bios_callpoint.segment = PNP_CS16;
+
++ bad_bios_desc.a = 0;
++ bad_bios_desc.b = 0x00409200;
+
-+ /*
-+ * Active VGC level
-+ */
-+ int vgc_level;
+ set_base(bad_bios_desc, __va((unsigned long)0x40 << 4));
+ _set_limit((char *)&bad_bios_desc, 4095 - (0x40 << 4));
+ for (i = 0; i < NR_CPUS; i++) {
+diff --git a/drivers/power/apm_power.c b/drivers/power/apm_power.c
+index bbf3ee1..7e29b90 100644
+--- a/drivers/power/apm_power.c
++++ b/drivers/power/apm_power.c
+@@ -13,6 +13,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/apm-emulation.h>
+
++static DEFINE_MUTEX(apm_mutex);
+ #define PSY_PROP(psy, prop, val) psy->get_property(psy, \
+ POWER_SUPPLY_PROP_##prop, val)
+
+@@ -23,67 +24,86 @@
+
+ static struct power_supply *main_battery;
+
+-static void find_main_battery(void)
+-{
+- struct device *dev;
+- struct power_supply *bat = NULL;
+- struct power_supply *max_charge_bat = NULL;
+- struct power_supply *max_energy_bat = NULL;
++struct find_bat_param {
++ struct power_supply *main;
++ struct power_supply *bat;
++ struct power_supply *max_charge_bat;
++ struct power_supply *max_energy_bat;
+ union power_supply_propval full;
+- int max_charge = 0;
+- int max_energy = 0;
++ int max_charge;
++ int max_energy;
++};
- /*
- * Work structure for scheduling periodic link tuning.
-@@ -248,36 +319,47 @@ struct link {
- };
+- main_battery = NULL;
++static int __find_main_battery(struct device *dev, void *data)
++{
++ struct find_bat_param *bp = (struct find_bat_param *)data;
- /*
-- * Clear all counters inside the link structure.
-- * This can be easiest achieved by memsetting everything
-- * except for the work structure at the end.
-+ * Small helper macro to work with moving/walking averages.
- */
--static inline void rt2x00_clear_link(struct link *link)
--{
-- memset(link, 0x00, sizeof(*link) - sizeof(link->work));
-- link->rx_percentage = 50;
-- link->tx_percentage = 50;
--}
-+#define MOVING_AVERAGE(__avg, __val, __samples) \
-+ ( (((__avg) * ((__samples) - 1)) + (__val)) / (__samples) )
+- list_for_each_entry(dev, &power_supply_class->devices, node) {
+- bat = dev_get_drvdata(dev);
++ bp->bat = dev_get_drvdata(dev);
- /*
-- * Update the rssi using the walking average approach.
-+ * When we lack RSSI information return something less then -80 to
-+ * tell the driver to tune the device to maximum sensitivity.
- */
--static inline void rt2x00_update_link_rssi(struct link *link, int rssi)
--{
-- if (!link->avg_rssi)
-- link->avg_rssi = rssi;
-- else
-- link->avg_rssi = ((link->avg_rssi * 7) + rssi) / 8;
--}
-+#define DEFAULT_RSSI ( -128 )
+- if (bat->use_for_apm) {
+- /* nice, we explicitly asked to report this battery. */
+- main_battery = bat;
+- return;
+- }
++ if (bp->bat->use_for_apm) {
++ /* nice, we explicitly asked to report this battery. */
++ bp->main = bp->bat;
++ return 1;
++ }
- /*
-- * When the avg_rssi is unset or no frames have been received),
-- * we need to return the default value which needs to be less
-- * than -80 so the device will select the maximum sensitivity.
-+ * Link quality access functions.
- */
- static inline int rt2x00_get_link_rssi(struct link *link)
- {
-- return (link->avg_rssi && link->rx_success) ? link->avg_rssi : -128;
-+ if (link->qual.avg_rssi && link->qual.rx_success)
-+ return link->qual.avg_rssi;
-+ return DEFAULT_RSSI;
+- if (!PSY_PROP(bat, CHARGE_FULL_DESIGN, &full) ||
+- !PSY_PROP(bat, CHARGE_FULL, &full)) {
+- if (full.intval > max_charge) {
+- max_charge_bat = bat;
+- max_charge = full.intval;
+- }
+- } else if (!PSY_PROP(bat, ENERGY_FULL_DESIGN, &full) ||
+- !PSY_PROP(bat, ENERGY_FULL, &full)) {
+- if (full.intval > max_energy) {
+- max_energy_bat = bat;
+- max_energy = full.intval;
+- }
++ if (!PSY_PROP(bp->bat, CHARGE_FULL_DESIGN, &bp->full) ||
++ !PSY_PROP(bp->bat, CHARGE_FULL, &bp->full)) {
++ if (bp->full.intval > bp->max_charge) {
++ bp->max_charge_bat = bp->bat;
++ bp->max_charge = bp->full.intval;
++ }
++ } else if (!PSY_PROP(bp->bat, ENERGY_FULL_DESIGN, &bp->full) ||
++ !PSY_PROP(bp->bat, ENERGY_FULL, &bp->full)) {
++ if (bp->full.intval > bp->max_energy) {
++ bp->max_energy_bat = bp->bat;
++ bp->max_energy = bp->full.intval;
+ }
+ }
++ return 0;
+}
+
-+static inline int rt2x00_get_link_ant_rssi(struct link *link)
++static void find_main_battery(void)
+{
-+ if (link->ant.rssi_ant && link->qual.rx_success)
-+ return link->ant.rssi_ant;
-+ return DEFAULT_RSSI;
-+}
++ struct find_bat_param bp;
++ int error;
+
-+static inline int rt2x00_get_link_ant_rssi_history(struct link *link,
-+ enum antenna ant)
-+{
-+ if (link->ant.rssi_history[ant - ANTENNA_A])
-+ return link->ant.rssi_history[ant - ANTENNA_A];
-+ return DEFAULT_RSSI;
-+}
++ memset(&bp, 0, sizeof(struct find_bat_param));
++ main_battery = NULL;
++ bp.main = main_battery;
+
-+static inline int rt2x00_update_ant_rssi(struct link *link, int rssi)
-+{
-+ int old_rssi = link->ant.rssi_history[link->ant.active.rx - ANTENNA_A];
-+ link->ant.rssi_history[link->ant.active.rx - ANTENNA_A] = rssi;
-+ return old_rssi;
++ error = class_for_each_device(power_supply_class, &bp,
++ __find_main_battery);
++ if (error) {
++ main_battery = bp.main;
++ return;
++ }
+
+- if ((max_energy_bat && max_charge_bat) &&
+- (max_energy_bat != max_charge_bat)) {
++ if ((bp.max_energy_bat && bp.max_charge_bat) &&
++ (bp.max_energy_bat != bp.max_charge_bat)) {
+ /* try guess battery with more capacity */
+- if (!PSY_PROP(max_charge_bat, VOLTAGE_MAX_DESIGN, &full)) {
+- if (max_energy > max_charge * full.intval)
+- main_battery = max_energy_bat;
++ if (!PSY_PROP(bp.max_charge_bat, VOLTAGE_MAX_DESIGN,
++ &bp.full)) {
++ if (bp.max_energy > bp.max_charge * bp.full.intval)
++ main_battery = bp.max_energy_bat;
+ else
+- main_battery = max_charge_bat;
+- } else if (!PSY_PROP(max_energy_bat, VOLTAGE_MAX_DESIGN,
+- &full)) {
+- if (max_charge > max_energy / full.intval)
+- main_battery = max_charge_bat;
++ main_battery = bp.max_charge_bat;
++ } else if (!PSY_PROP(bp.max_energy_bat, VOLTAGE_MAX_DESIGN,
++ &bp.full)) {
++ if (bp.max_charge > bp.max_energy / bp.full.intval)
++ main_battery = bp.max_charge_bat;
+ else
+- main_battery = max_energy_bat;
++ main_battery = bp.max_energy_bat;
+ } else {
+ /* give up, choice any */
+- main_battery = max_energy_bat;
++ main_battery = bp.max_energy_bat;
+ }
+- } else if (max_charge_bat) {
+- main_battery = max_charge_bat;
+- } else if (max_energy_bat) {
+- main_battery = max_energy_bat;
++ } else if (bp.max_charge_bat) {
++ main_battery = bp.max_charge_bat;
++ } else if (bp.max_energy_bat) {
++ main_battery = bp.max_energy_bat;
+ } else {
+ /* give up, try the last if any */
+- main_battery = bat;
++ main_battery = bp.bat;
+ }
}
- /*
-@@ -290,14 +372,12 @@ struct interface {
- * to us by the 80211 stack, and is used to request
- * new beacons.
- */
-- int id;
-+ struct ieee80211_vif *id;
+@@ -207,10 +227,10 @@ static void apm_battery_apm_get_power_status(struct apm_power_info *info)
+ union power_supply_propval status;
+ union power_supply_propval capacity, time_to_full, time_to_empty;
- /*
- * Current working type (IEEE80211_IF_TYPE_*).
-- * When set to INVALID_INTERFACE, no interface is configured.
- */
- int type;
--#define INVALID_INTERFACE IEEE80211_IF_TYPE_INVALID
+- down(&power_supply_class->sem);
++ mutex_lock(&apm_mutex);
+ find_main_battery();
+ if (!main_battery) {
+- up(&power_supply_class->sem);
++ mutex_unlock(&apm_mutex);
+ return;
+ }
- /*
- * MAC of the device.
-@@ -308,11 +388,6 @@ struct interface {
- * BBSID of the AP to associate with.
- */
- u8 bssid[ETH_ALEN];
--
-- /*
-- * Store the packet filter mode for the current interface.
-- */
-- unsigned int filter;
- };
+@@ -278,7 +298,7 @@ static void apm_battery_apm_get_power_status(struct apm_power_info *info)
+ }
+ }
- static inline int is_interface_present(struct interface *intf)
-@@ -362,6 +437,8 @@ struct rt2x00lib_conf {
- struct ieee80211_conf *conf;
- struct rf_channel rf;
+- up(&power_supply_class->sem);
++ mutex_unlock(&apm_mutex);
+ }
-+ struct antenna_setup ant;
-+
- int phymode;
+ static int __init apm_battery_init(void)
+diff --git a/drivers/power/power_supply_core.c b/drivers/power/power_supply_core.c
+index a63b75c..03d6a38 100644
+--- a/drivers/power/power_supply_core.c
++++ b/drivers/power/power_supply_core.c
+@@ -20,28 +20,29 @@
- int basic_rates;
-@@ -397,12 +474,21 @@ struct rt2x00lib_ops {
- void (*uninitialize) (struct rt2x00_dev *rt2x00dev);
+ struct class *power_supply_class;
- /*
-+ * Ring initialization handlers
-+ */
-+ void (*init_rxentry) (struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry);
-+ void (*init_txentry) (struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry);
++static int __power_supply_changed_work(struct device *dev, void *data)
++{
++ struct power_supply *psy = (struct power_supply *)data;
++ struct power_supply *pst = dev_get_drvdata(dev);
++ int i;
+
-+ /*
- * Radio control handlers.
- */
- int (*set_device_state) (struct rt2x00_dev *rt2x00dev,
- enum dev_state state);
- int (*rfkill_poll) (struct rt2x00_dev *rt2x00dev);
-- void (*link_stats) (struct rt2x00_dev *rt2x00dev);
-+ void (*link_stats) (struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual);
- void (*reset_tuner) (struct rt2x00_dev *rt2x00dev);
- void (*link_tuner) (struct rt2x00_dev *rt2x00dev);
++ for (i = 0; i < psy->num_supplicants; i++)
++ if (!strcmp(psy->supplied_to[i], pst->name)) {
++ if (pst->external_power_changed)
++ pst->external_power_changed(pst);
++ }
++ return 0;
++}
++
+ static void power_supply_changed_work(struct work_struct *work)
+ {
+ struct power_supply *psy = container_of(work, struct power_supply,
+ changed_work);
+- int i;
-@@ -410,10 +496,8 @@ struct rt2x00lib_ops {
- * TX control handlers
- */
- void (*write_tx_desc) (struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-+ struct sk_buff *skb,
- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
- struct ieee80211_tx_control *control);
- int (*write_tx_data) (struct rt2x00_dev *rt2x00dev,
- struct data_ring *ring, struct sk_buff *skb,
-@@ -545,7 +629,7 @@ struct rt2x00_dev {
- * required for deregistration of debugfs.
- */
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
-- const struct rt2x00debug_intf *debugfs_intf;
-+ struct rt2x00debug_intf *debugfs_intf;
- #endif /* CONFIG_RT2X00_LIB_DEBUGFS */
+ dev_dbg(psy->dev, "%s\n", __FUNCTION__);
- /*
-@@ -566,6 +650,13 @@ struct rt2x00_dev {
- struct hw_mode_spec spec;
+- for (i = 0; i < psy->num_supplicants; i++) {
+- struct device *dev;
+-
+- down(&power_supply_class->sem);
+- list_for_each_entry(dev, &power_supply_class->devices, node) {
+- struct power_supply *pst = dev_get_drvdata(dev);
+-
+- if (!strcmp(psy->supplied_to[i], pst->name)) {
+- if (pst->external_power_changed)
+- pst->external_power_changed(pst);
+- }
+- }
+- up(&power_supply_class->sem);
+- }
++ class_for_each_device(power_supply_class, psy,
++ __power_supply_changed_work);
- /*
-+ * This is the default TX/RX antenna setup as indicated
-+ * by the device's EEPROM. When mac80211 sets its
-+ * antenna value to 0 we should be using these values.
-+ */
-+ struct antenna_setup default_ant;
-+
-+ /*
- * Register pointers
- * csr_addr: Base register address. (PCI)
- * csr_cache: CSR cache for usb_control_msg. (USB)
-@@ -574,6 +665,25 @@ struct rt2x00_dev {
- void *csr_cache;
+ power_supply_update_leds(psy);
- /*
-+ * Mutex to protect register accesses on USB devices.
-+ * There are 2 reasons this is needed, one is to ensure
-+ * use of the csr_cache (for USB devices) by one thread
-+ * isn't corrupted by another thread trying to access it.
-+ * The other is that access to BBP and RF registers
-+ * require multiple BUS transactions and if another thread
-+ * attempted to access one of those registers at the same
-+ * time one of the writes could silently fail.
-+ */
-+ struct mutex usb_cache_mutex;
-+
-+ /*
-+ * Current packet filter configuration for the device.
-+ * This contains all currently active FIF_* flags send
-+ * to us by mac80211 during configure_filter().
-+ */
-+ unsigned int packet_filter;
-+
-+ /*
- * Interface configuration.
- */
- struct interface interface;
-@@ -697,13 +807,13 @@ struct rt2x00_dev {
- * Generic RF access.
- * The RF is being accessed by word index.
- */
--static inline void rt2x00_rf_read(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00_rf_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
- {
- *data = rt2x00dev->rf[word];
+@@ -55,32 +56,35 @@ void power_supply_changed(struct power_supply *psy)
+ schedule_work(&psy->changed_work);
}
--static inline void rt2x00_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
- {
- rt2x00dev->rf[word] = data;
-@@ -713,19 +823,19 @@ static inline void rt2x00_rf_write(const struct rt2x00_dev *rt2x00dev,
- * Generic EEPROM access.
- * The EEPROM is being accessed by word index.
- */
--static inline void *rt2x00_eeprom_addr(const struct rt2x00_dev *rt2x00dev,
-+static inline void *rt2x00_eeprom_addr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word)
+-int power_supply_am_i_supplied(struct power_supply *psy)
++static int __power_supply_am_i_supplied(struct device *dev, void *data)
{
- return (void *)&rt2x00dev->eeprom[word];
- }
+ union power_supply_propval ret = {0,};
+- struct device *dev;
+-
+- down(&power_supply_class->sem);
+- list_for_each_entry(dev, &power_supply_class->devices, node) {
+- struct power_supply *epsy = dev_get_drvdata(dev);
+- int i;
+-
+- for (i = 0; i < epsy->num_supplicants; i++) {
+- if (!strcmp(epsy->supplied_to[i], psy->name)) {
+- if (epsy->get_property(epsy,
+- POWER_SUPPLY_PROP_ONLINE, &ret))
+- continue;
+- if (ret.intval)
+- goto out;
+- }
++ struct power_supply *psy = (struct power_supply *)data;
++ struct power_supply *epsy = dev_get_drvdata(dev);
++ int i;
++
++ for (i = 0; i < epsy->num_supplicants; i++) {
++ if (!strcmp(epsy->supplied_to[i], psy->name)) {
++ if (epsy->get_property(epsy,
++ POWER_SUPPLY_PROP_ONLINE, &ret))
++ continue;
++ if (ret.intval)
++ return ret.intval;
+ }
+ }
+-out:
+- up(&power_supply_class->sem);
++ return 0;
++}
++
++int power_supply_am_i_supplied(struct power_supply *psy)
++{
++ int error;
++
++ error = class_for_each_device(power_supply_class, psy,
++ __power_supply_am_i_supplied);
--static inline void rt2x00_eeprom_read(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00_eeprom_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u16 *data)
- {
- *data = le16_to_cpu(rt2x00dev->eeprom[word]);
- }
+- dev_dbg(psy->dev, "%s %d\n", __FUNCTION__, ret.intval);
++ dev_dbg(psy->dev, "%s %d\n", __FUNCTION__, error);
--static inline void rt2x00_eeprom_write(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00_eeprom_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u16 data)
- {
- rt2x00dev->eeprom[word] = cpu_to_le16(data);
-@@ -804,9 +914,7 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
- * TX descriptor initializer
- */
- void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
-+ struct sk_buff *skb,
- struct ieee80211_tx_control *control);
+- return ret.intval;
++ return error;
+ }
- /*
-@@ -821,14 +929,17 @@ int rt2x00mac_add_interface(struct ieee80211_hw *hw,
- void rt2x00mac_remove_interface(struct ieee80211_hw *hw,
- struct ieee80211_if_init_conf *conf);
- int rt2x00mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf);
--int rt2x00mac_config_interface(struct ieee80211_hw *hw, int if_id,
-+int rt2x00mac_config_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
- struct ieee80211_if_conf *conf);
- int rt2x00mac_get_stats(struct ieee80211_hw *hw,
- struct ieee80211_low_level_stats *stats);
- int rt2x00mac_get_tx_stats(struct ieee80211_hw *hw,
- struct ieee80211_tx_queue_stats *stats);
--void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
-- int cts_protection, int preamble);
-+void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_bss_conf *bss_conf,
-+ u32 changes);
- int rt2x00mac_conf_tx(struct ieee80211_hw *hw, int queue,
- const struct ieee80211_tx_queue_params *params);
+ int power_supply_register(struct device *parent, struct power_supply *psy)
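The power_supply hunk above converts an open-coded walk over `power_supply_class`'s device list (with manual `down()`/`up()` on the class semaphore) into a call to `class_for_each_device()` with the loop body factored into the `__power_supply_am_i_supplied()` callback. The same control inversion can be sketched outside the kernel; everything here (`mock_device`, `for_each_device`) is an illustrative stand-in, not a kernel API:

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a struct device on a class's device list. */
struct mock_device {
	const char *supplies;   /* name of the consumer this supply feeds */
	int online;
};

/*
 * Iterator in the style of class_for_each_device(): it owns the
 * traversal (and, in the kernel, the locking), calls fn(dev, data)
 * for each device, stops at the first non-zero return and propagates
 * that value.
 */
static int for_each_device(struct mock_device *devs, size_t n, void *data,
			   int (*fn)(struct mock_device *, void *))
{
	size_t i;
	int ret;

	for (i = 0; i < n; i++) {
		ret = fn(&devs[i], data);
		if (ret)
			return ret;
	}
	return 0;
}

/*
 * Callback mirroring __power_supply_am_i_supplied(): non-zero means
 * "an online supplier for the named consumer was found", which also
 * terminates the iteration early.
 */
static int is_supplier_online(struct mock_device *dev, void *data)
{
	const char *consumer = data;

	if (strcmp(dev->supplies, consumer) == 0 && dev->online)
		return 1;
	return 0;
}

int am_i_supplied(struct mock_device *devs, size_t n, const char *name)
{
	return for_each_device(devs, n, (void *)name, is_supplier_online);
}
```

The payoff in the real patch is that the list locking lives in one place (the driver core) instead of being duplicated in every class user.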
+diff --git a/drivers/rapidio/rio.h b/drivers/rapidio/rio.h
+index b242cee..80e3f03 100644
+--- a/drivers/rapidio/rio.h
++++ b/drivers/rapidio/rio.h
+@@ -31,8 +31,8 @@ extern struct rio_route_ops __end_rio_route_ops[];
-diff --git a/drivers/net/wireless/rt2x00/rt2x00config.c b/drivers/net/wireless/rt2x00/rt2x00config.c
-index 12914cf..72cfe00 100644
---- a/drivers/net/wireless/rt2x00/rt2x00config.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00config.c
-@@ -23,11 +23,6 @@
- Abstract: rt2x00 generic configuration routines.
- */
+ /* Helpers internal to the RIO core code */
+ #define DECLARE_RIO_ROUTE_SECTION(section, vid, did, add_hook, get_hook) \
+- static struct rio_route_ops __rio_route_ops __attribute_used__ \
+- __attribute__((__section__(#section))) = { vid, did, add_hook, get_hook };
++ static struct rio_route_ops __rio_route_ops __used \
++ __section(section)= { vid, did, add_hook, get_hook };
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00lib"
--
- #include <linux/kernel.h>
- #include <linux/module.h>
+ /**
+ * DECLARE_RIO_ROUTE_OPS - Registers switch routing operations
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 1e6715e..45e4b96 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -404,7 +404,7 @@ config RTC_DRV_SA1100
-@@ -94,12 +89,44 @@ void rt2x00lib_config_type(struct rt2x00_dev *rt2x00dev, const int type)
- rt2x00dev->ops->lib->config_type(rt2x00dev, type, tsf_sync);
+ config RTC_DRV_SH
+ tristate "SuperH On-Chip RTC"
+- depends on RTC_CLASS && (CPU_SH3 || CPU_SH4)
++ depends on RTC_CLASS && SUPERH
+ help
+ Say Y here to enable support for the on-chip RTC found in
+ most SuperH processors.
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index f1e00ff..7e3ad4f 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -251,20 +251,23 @@ void rtc_update_irq(struct rtc_device *rtc,
}
+ EXPORT_SYMBOL_GPL(rtc_update_irq);
-+void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
-+ enum antenna rx, enum antenna tx)
++static int __rtc_match(struct device *dev, void *data)
+{
-+ struct rt2x00lib_conf libconf;
-+
-+ libconf.ant.rx = rx;
-+ libconf.ant.tx = tx;
-+
-+ /*
-+ * Antenna setup changes require the RX to be disabled,
-+ * else the changes will be ignored by the device.
-+ */
-+ if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
-+ rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
-+
-+ /*
-+ * Write new antenna setup to device and reset the link tuner.
-+ * The latter is required since we need to recalibrate the
-+ * noise-sensitivity ratio for the new setup.
-+ */
-+ rt2x00dev->ops->lib->config(rt2x00dev, CONFIG_UPDATE_ANTENNA, &libconf);
-+ rt2x00lib_reset_link_tuner(rt2x00dev);
-+
-+ rt2x00dev->link.ant.active.rx = libconf.ant.rx;
-+ rt2x00dev->link.ant.active.tx = libconf.ant.tx;
++ char *name = (char *)data;
+
-+ if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
-+ rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++ if (strncmp(dev->bus_id, name, BUS_ID_SIZE) == 0)
++ return 1;
++ return 0;
+}
+
- void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
- struct ieee80211_conf *conf, const int force_config)
+ struct rtc_device *rtc_class_open(char *name)
{
- struct rt2x00lib_conf libconf;
- struct ieee80211_hw_mode *mode;
- struct ieee80211_rate *rate;
-+ struct antenna_setup *default_ant = &rt2x00dev->default_ant;
-+ struct antenna_setup *active_ant = &rt2x00dev->link.ant.active;
- int flags = 0;
- int short_slot_time;
+ struct device *dev;
+ struct rtc_device *rtc = NULL;
-@@ -122,7 +149,39 @@ void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
- flags |= CONFIG_UPDATE_CHANNEL;
- if (rt2x00dev->tx_power != conf->power_level)
- flags |= CONFIG_UPDATE_TXPOWER;
-- if (rt2x00dev->rx_status.antenna == conf->antenna_sel_rx)
-+
-+ /*
-+ * Determining changes in the antenna setups request several checks:
-+ * antenna_sel_{r,t}x = 0
-+ * -> Does active_{r,t}x match default_{r,t}x
-+ * -> Is default_{r,t}x SW_DIVERSITY
-+ * antenna_sel_{r,t}x = 1/2
-+ * -> Does active_{r,t}x match antenna_sel_{r,t}x
-+ * The reason for not updating the antenna while SW diversity
-+ * should be used is simple: Software diversity means that
-+ * we should switch between the antenna's based on the
-+ * quality. This means that the current antenna is good enough
-+ * to work with untill the link tuner decides that an antenna
-+ * switch should be performed.
-+ */
-+ if (!conf->antenna_sel_rx &&
-+ default_ant->rx != ANTENNA_SW_DIVERSITY &&
-+ default_ant->rx != active_ant->rx)
-+ flags |= CONFIG_UPDATE_ANTENNA;
-+ else if (conf->antenna_sel_rx &&
-+ conf->antenna_sel_rx != active_ant->rx)
-+ flags |= CONFIG_UPDATE_ANTENNA;
-+ else if (active_ant->rx == ANTENNA_SW_DIVERSITY)
-+ flags |= CONFIG_UPDATE_ANTENNA;
-+
-+ if (!conf->antenna_sel_tx &&
-+ default_ant->tx != ANTENNA_SW_DIVERSITY &&
-+ default_ant->tx != active_ant->tx)
-+ flags |= CONFIG_UPDATE_ANTENNA;
-+ else if (conf->antenna_sel_tx &&
-+ conf->antenna_sel_tx != active_ant->tx)
-+ flags |= CONFIG_UPDATE_ANTENNA;
-+ else if (active_ant->tx == ANTENNA_SW_DIVERSITY)
- flags |= CONFIG_UPDATE_ANTENNA;
+- down(&rtc_class->sem);
+- list_for_each_entry(dev, &rtc_class->devices, node) {
+- if (strncmp(dev->bus_id, name, BUS_ID_SIZE) == 0) {
+- dev = get_device(dev);
+- if (dev)
+- rtc = to_rtc_device(dev);
+- break;
+- }
+- }
++ dev = class_find_device(rtc_class, name, __rtc_match);
++ if (dev)
++ rtc = to_rtc_device(dev);
- /*
-@@ -171,6 +230,22 @@ config:
- sizeof(libconf.rf));
+ if (rtc) {
+ if (!try_module_get(rtc->owner)) {
+@@ -272,7 +275,6 @@ struct rtc_device *rtc_class_open(char *name)
+ rtc = NULL;
+ }
}
+- up(&rtc_class->sem);
-+ if (flags & CONFIG_UPDATE_ANTENNA) {
-+ if (conf->antenna_sel_rx)
-+ libconf.ant.rx = conf->antenna_sel_rx;
-+ else if (default_ant->rx != ANTENNA_SW_DIVERSITY)
-+ libconf.ant.rx = default_ant->rx;
-+ else if (active_ant->rx == ANTENNA_SW_DIVERSITY)
-+ libconf.ant.rx = ANTENNA_B;
-+
-+ if (conf->antenna_sel_tx)
-+ libconf.ant.tx = conf->antenna_sel_tx;
-+ else if (default_ant->tx != ANTENNA_SW_DIVERSITY)
-+ libconf.ant.tx = default_ant->tx;
-+ else if (active_ant->tx == ANTENNA_SW_DIVERSITY)
-+ libconf.ant.tx = ANTENNA_B;
-+ }
-+
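The CONFIG_UPDATE_ANTENNA resolution above picks the new antenna in priority order: an explicit mac80211 selection wins, then the EEPROM default unless that default asks for software diversity, then a fixed fallback so the diversity algorithm has a concrete antenna to start from. A standalone sketch of that priority chain follows; the enum values and `pick_antenna` are illustrative, and the final "keep the active antenna" branch is an assumption for cases the hunk leaves unassigned:

```c
/*
 * Mirrors the rt2x00 choice order: explicit selection wins, then the
 * fixed hardware default, then ANTENNA_B as a seed when software
 * diversity has not yet settled on a physical antenna.
 */
enum ant { ANT_SW_DIVERSITY = 0, ANT_A = 1, ANT_B = 2 };

enum ant pick_antenna(int selected, enum ant def, enum ant active)
{
	if (selected)			/* 1 or 2 from mac80211 */
		return (enum ant)selected;
	if (def != ANT_SW_DIVERSITY)	/* fixed EEPROM default */
		return def;
	if (active == ANT_SW_DIVERSITY)	/* diversity needs a seed */
		return ANT_B;
	return active;			/* assumed: keep diversity's pick */
}
```

The matching detection logic earlier in the hunk deliberately does not flag an update while software diversity is active and the current antenna still performs well enough; the link tuner, not `config()`, decides when to switch.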
- if (flags & CONFIG_UPDATE_SLOT_TIME) {
- short_slot_time = conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME;
-
-@@ -196,10 +271,17 @@ config:
- if (flags & (CONFIG_UPDATE_CHANNEL | CONFIG_UPDATE_ANTENNA))
- rt2x00lib_reset_link_tuner(rt2x00dev);
-
-- rt2x00dev->curr_hwmode = libconf.phymode;
-- rt2x00dev->rx_status.phymode = conf->phymode;
-+ if (flags & CONFIG_UPDATE_PHYMODE) {
-+ rt2x00dev->curr_hwmode = libconf.phymode;
-+ rt2x00dev->rx_status.phymode = conf->phymode;
-+ }
-+
- rt2x00dev->rx_status.freq = conf->freq;
- rt2x00dev->rx_status.channel = conf->channel;
- rt2x00dev->tx_power = conf->power_level;
-- rt2x00dev->rx_status.antenna = conf->antenna_sel_rx;
-+
-+ if (flags & CONFIG_UPDATE_ANTENNA) {
-+ rt2x00dev->link.ant.active.rx = libconf.ant.rx;
-+ rt2x00dev->link.ant.active.tx = libconf.ant.tx;
-+ }
+ return rtc;
}
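The rtc_class_open() hunk above is the find-one variant of the same conversion: the manual list walk plus `get_device()` becomes `class_find_device()` with the comparison factored into `__rtc_match()`. A userspace sketch of that shape (`find_device` and `match_by_name` are illustrative names, not driver-core calls):

```c
#include <stddef.h>
#include <string.h>

struct mock_device {
	const char *bus_id;
	int refcount;
};

/*
 * In the style of class_find_device(): return the first device for
 * which match() is non-zero, with a reference taken (the get_device()
 * step the old open-coded loop did by hand), or NULL if none matches.
 */
static struct mock_device *find_device(struct mock_device *devs, size_t n,
				       void *data,
				       int (*match)(struct mock_device *, void *))
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (match(&devs[i], data)) {
			devs[i].refcount++;	/* get_device() stand-in */
			return &devs[i];
		}
	}
	return NULL;
}

/* Predicate in the style of __rtc_match(): compare by bus id. */
static int match_by_name(struct mock_device *dev, void *data)
{
	return strcmp(dev->bus_id, (const char *)data) == 0;
}
```

As in the power_supply case, the class semaphore handling (`down(&rtc_class->sem)` / `up(...)`) disappears from the driver because the iterator owns it.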
-diff --git a/drivers/net/wireless/rt2x00/rt2x00debug.c b/drivers/net/wireless/rt2x00/rt2x00debug.c
-index 9275d6f..b44a9f4 100644
---- a/drivers/net/wireless/rt2x00/rt2x00debug.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00debug.c
-@@ -23,18 +23,15 @@
- Abstract: rt2x00 debugfs specific routines.
- */
+diff --git a/drivers/rtc/rtc-ds1672.c b/drivers/rtc/rtc-ds1672.c
+index dfef163..e0900ca 100644
+--- a/drivers/rtc/rtc-ds1672.c
++++ b/drivers/rtc/rtc-ds1672.c
+@@ -16,7 +16,7 @@
+ #define DRV_VERSION "0.3"
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00lib"
--
- #include <linux/debugfs.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-+#include <linux/poll.h>
- #include <linux/uaccess.h>
+ /* Addresses to scan: none. This chip cannot be detected. */
+-static unsigned short normal_i2c[] = { I2C_CLIENT_END };
++static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
- #include "rt2x00.h"
- #include "rt2x00lib.h"
-+#include "rt2x00dump.h"
+ /* Insmod parameters */
+ I2C_CLIENT_INSMOD;
+diff --git a/drivers/rtc/rtc-isl1208.c b/drivers/rtc/rtc-isl1208.c
+index 1c74364..725b0c7 100644
+--- a/drivers/rtc/rtc-isl1208.c
++++ b/drivers/rtc/rtc-isl1208.c
+@@ -61,7 +61,7 @@
+ /* i2c configuration */
+ #define ISL1208_I2C_ADDR 0xde
- #define PRINT_LINE_LEN_MAX 32
+-static unsigned short normal_i2c[] = {
++static const unsigned short normal_i2c[] = {
+ ISL1208_I2C_ADDR>>1, I2C_CLIENT_END
+ };
+ I2C_CLIENT_INSMOD; /* defines addr_data */
+diff --git a/drivers/rtc/rtc-max6900.c b/drivers/rtc/rtc-max6900.c
+index a1cd448..7683412 100644
+--- a/drivers/rtc/rtc-max6900.c
++++ b/drivers/rtc/rtc-max6900.c
+@@ -54,7 +54,7 @@
-@@ -55,18 +52,22 @@ struct rt2x00debug_intf {
- /*
- * Debugfs entries for:
- * - driver folder
-- * - driver file
-- * - chipset file
-- * - device flags file
-- * - register offset/value files
-- * - eeprom offset/value files
-- * - bbp offset/value files
-- * - rf offset/value files
-+ * - driver file
-+ * - chipset file
-+ * - device flags file
-+ * - register folder
-+ * - csr offset/value files
-+ * - eeprom offset/value files
-+ * - bbp offset/value files
-+ * - rf offset/value files
-+ * - frame dump folder
-+ * - frame dump file
- */
- struct dentry *driver_folder;
- struct dentry *driver_entry;
- struct dentry *chipset_entry;
- struct dentry *dev_flags;
-+ struct dentry *register_folder;
- struct dentry *csr_off_entry;
- struct dentry *csr_val_entry;
- struct dentry *eeprom_off_entry;
-@@ -75,6 +76,24 @@ struct rt2x00debug_intf {
- struct dentry *bbp_val_entry;
- struct dentry *rf_off_entry;
- struct dentry *rf_val_entry;
-+ struct dentry *frame_folder;
-+ struct dentry *frame_dump_entry;
-+
-+ /*
-+ * The frame dump file only allows a single reader,
-+ * so we need to store the current state here.
-+ */
-+ unsigned long frame_dump_flags;
-+#define FRAME_DUMP_FILE_OPEN 1
-+
-+ /*
-+ * We queue each frame before dumping it to the user,
-+ * per read command we will pass a single skb structure
-+ * so we should be prepared to queue multiple sk buffers
-+ * before sending it to userspace.
-+ */
-+ struct sk_buff_head frame_dump_skbqueue;
-+ wait_queue_head_t frame_dump_waitqueue;
+ #define MAX6900_I2C_ADDR 0xa0
- /*
- * Driver and chipset files will use a data buffer
-@@ -93,6 +112,59 @@ struct rt2x00debug_intf {
- unsigned int offset_rf;
+-static unsigned short normal_i2c[] = {
++static const unsigned short normal_i2c[] = {
+ MAX6900_I2C_ADDR >> 1,
+ I2C_CLIENT_END
};
+diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
+index 0242d80..b3317fc 100644
+--- a/drivers/rtc/rtc-pcf8563.c
++++ b/drivers/rtc/rtc-pcf8563.c
+@@ -25,7 +25,7 @@
+ * located at 0x51 will pass the validation routine due to
+ * the way the registers are implemented.
+ */
+-static unsigned short normal_i2c[] = { I2C_CLIENT_END };
++static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
-+void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev,
-+ struct sk_buff *skb)
-+{
-+ struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
-+ struct skb_desc *desc = get_skb_desc(skb);
-+ struct sk_buff *skbcopy;
-+ struct rt2x00dump_hdr *dump_hdr;
-+ struct timeval timestamp;
-+
-+	do_gettimeofday(&timestamp);
-+
-+ if (!test_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags))
-+ return;
-+
-+ if (skb_queue_len(&intf->frame_dump_skbqueue) > 20) {
-+ DEBUG(rt2x00dev, "txrx dump queue length exceeded.\n");
-+ return;
-+ }
-+
-+ skbcopy = alloc_skb(sizeof(*dump_hdr) + desc->desc_len + desc->data_len,
-+ GFP_ATOMIC);
-+ if (!skbcopy) {
-+ DEBUG(rt2x00dev, "Failed to copy skb for dump.\n");
-+ return;
-+ }
-+
-+ dump_hdr = (struct rt2x00dump_hdr *)skb_put(skbcopy, sizeof(*dump_hdr));
-+ dump_hdr->version = cpu_to_le32(DUMP_HEADER_VERSION);
-+ dump_hdr->header_length = cpu_to_le32(sizeof(*dump_hdr));
-+ dump_hdr->desc_length = cpu_to_le32(desc->desc_len);
-+ dump_hdr->data_length = cpu_to_le32(desc->data_len);
-+ dump_hdr->chip_rt = cpu_to_le16(rt2x00dev->chip.rt);
-+ dump_hdr->chip_rf = cpu_to_le16(rt2x00dev->chip.rf);
-+ dump_hdr->chip_rev = cpu_to_le32(rt2x00dev->chip.rev);
-+ dump_hdr->type = cpu_to_le16(desc->frame_type);
-+ dump_hdr->ring_index = desc->ring->queue_idx;
-+ dump_hdr->entry_index = desc->entry->entry_idx;
-+ dump_hdr->timestamp_sec = cpu_to_le32(timestamp.tv_sec);
-+ dump_hdr->timestamp_usec = cpu_to_le32(timestamp.tv_usec);
-+
-+ memcpy(skb_put(skbcopy, desc->desc_len), desc->desc, desc->desc_len);
-+ memcpy(skb_put(skbcopy, desc->data_len), desc->data, desc->data_len);
-+
-+ skb_queue_tail(&intf->frame_dump_skbqueue, skbcopy);
-+ wake_up_interruptible(&intf->frame_dump_waitqueue);
-+
-+ /*
-+ * Verify that the file has not been closed while we were working.
-+ */
-+ if (!test_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags))
-+ skb_queue_purge(&intf->frame_dump_skbqueue);
-+}
+ /* Module parameters */
+ I2C_CLIENT_INSMOD;
+diff --git a/drivers/rtc/rtc-pcf8583.c b/drivers/rtc/rtc-pcf8583.c
+index 556d0e7..c973ba9 100644
+--- a/drivers/rtc/rtc-pcf8583.c
++++ b/drivers/rtc/rtc-pcf8583.c
+@@ -40,7 +40,7 @@ struct pcf8583 {
+ #define CTRL_ALARM 0x02
+ #define CTRL_TIMER 0x01
+
+-static unsigned short normal_i2c[] = { 0x50, I2C_CLIENT_END };
++static const unsigned short normal_i2c[] = { 0x50, I2C_CLIENT_END };
+
+ /* Module parameters */
+ I2C_CLIENT_INSMOD;
+diff --git a/drivers/rtc/rtc-sa1100.c b/drivers/rtc/rtc-sa1100.c
+index 6f1e9a9..2eb3852 100644
+--- a/drivers/rtc/rtc-sa1100.c
++++ b/drivers/rtc/rtc-sa1100.c
+@@ -337,6 +337,8 @@ static int sa1100_rtc_probe(struct platform_device *pdev)
+ if (IS_ERR(rtc))
+ return PTR_ERR(rtc);
+
++ device_init_wakeup(&pdev->dev, 1);
+
- static int rt2x00debug_file_open(struct inode *inode, struct file *file)
- {
- struct rt2x00debug_intf *intf = inode->i_private;
-@@ -114,13 +186,96 @@ static int rt2x00debug_file_release(struct inode *inode, struct file *file)
+ platform_set_drvdata(pdev, rtc);
+
+ return 0;
+@@ -352,9 +354,38 @@ static int sa1100_rtc_remove(struct platform_device *pdev)
return 0;
}
-+static int rt2x00debug_open_ring_dump(struct inode *inode, struct file *file)
++#ifdef CONFIG_PM
++static int sa1100_rtc_suspend(struct platform_device *pdev, pm_message_t state)
+{
-+ struct rt2x00debug_intf *intf = inode->i_private;
-+ int retval;
-+
-+ retval = rt2x00debug_file_open(inode, file);
-+ if (retval)
-+ return retval;
++ if (pdev->dev.power.power_state.event != state.event) {
++ if (state.event == PM_EVENT_SUSPEND &&
++ device_may_wakeup(&pdev->dev))
++ enable_irq_wake(IRQ_RTCAlrm);
+
-+ if (test_and_set_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags)) {
-+ rt2x00debug_file_release(inode, file);
-+ return -EBUSY;
++ pdev->dev.power.power_state = state;
+ }
-+
+ return 0;
+}
+
-+static int rt2x00debug_release_ring_dump(struct inode *inode, struct file *file)
-+{
-+ struct rt2x00debug_intf *intf = inode->i_private;
-+
-+ skb_queue_purge(&intf->frame_dump_skbqueue);
-+
-+ clear_bit(FRAME_DUMP_FILE_OPEN, &intf->frame_dump_flags);
-+
-+ return rt2x00debug_file_release(inode, file);
-+}
-+
-+static ssize_t rt2x00debug_read_ring_dump(struct file *file,
-+ char __user *buf,
-+ size_t length,
-+ loff_t *offset)
++static int sa1100_rtc_resume(struct platform_device *pdev)
+{
-+ struct rt2x00debug_intf *intf = file->private_data;
-+ struct sk_buff *skb;
-+ size_t status;
-+ int retval;
-+
-+ if (file->f_flags & O_NONBLOCK)
-+ return -EAGAIN;
-+
-+ retval =
-+ wait_event_interruptible(intf->frame_dump_waitqueue,
-+ (skb =
-+ skb_dequeue(&intf->frame_dump_skbqueue)));
-+ if (retval)
-+ return retval;
-+
-+ status = min((size_t)skb->len, length);
-+ if (copy_to_user(buf, skb->data, status)) {
-+ status = -EFAULT;
-+ goto exit;
++ if (pdev->dev.power.power_state.event != PM_EVENT_ON) {
++ if (device_may_wakeup(&pdev->dev))
++ disable_irq_wake(IRQ_RTCAlrm);
++ pdev->dev.power.power_state = PMSG_ON;
+ }
-+
-+ *offset += status;
-+
-+exit:
-+ kfree_skb(skb);
-+
-+ return status;
-+}
-+
-+static unsigned int rt2x00debug_poll_ring_dump(struct file *file,
-+ poll_table *wait)
-+{
-+ struct rt2x00debug_intf *intf = file->private_data;
-+
-+ poll_wait(file, &intf->frame_dump_waitqueue, wait);
-+
-+ if (!skb_queue_empty(&intf->frame_dump_skbqueue))
-+ return POLLOUT | POLLWRNORM;
-+
+ return 0;
+}
++#else
++#define sa1100_rtc_suspend NULL
++#define sa1100_rtc_resume NULL
++#endif
+
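The new sa1100 suspend/resume callbacks above only arm the RTC alarm IRQ as a wake source when the transition is genuine (the stored power state actually changes) and userspace has permitted wakeup via `device_may_wakeup()`. A reduced model of that gating, with illustrative types rather than the real `platform_device`/`pm_message_t`:

```c
enum pm_event { PM_ON = 0, PM_SUSPEND = 1 };

struct pm_dev {
	enum pm_event state;	/* dev.power.power_state stand-in */
	int may_wakeup;		/* device_may_wakeup() result */
	int irq_wake;		/* whether enable_irq_wake() is in effect */
};

/*
 * Mirror sa1100_rtc_suspend(): arm the wake IRQ only on a genuine
 * transition into suspend, and only if wakeup is permitted.
 */
void pm_suspend(struct pm_dev *dev, enum pm_event target)
{
	if (dev->state != target) {
		if (target == PM_SUSPEND && dev->may_wakeup)
			dev->irq_wake = 1;
		dev->state = target;
	}
}

/* Mirror sa1100_rtc_resume(): disarm on the way back to full power. */
void pm_resume(struct pm_dev *dev)
{
	if (dev->state != PM_ON) {
		if (dev->may_wakeup)
			dev->irq_wake = 0;
		dev->state = PM_ON;
	}
}
```

Pairing `enable_irq_wake()` with a matching `disable_irq_wake()` on resume matters because the wake-enable count on an IRQ is reference-counted; the state comparison keeps the pair balanced across repeated callbacks.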
-+static const struct file_operations rt2x00debug_fop_ring_dump = {
-+ .owner = THIS_MODULE,
-+ .read = rt2x00debug_read_ring_dump,
-+ .poll = rt2x00debug_poll_ring_dump,
-+ .open = rt2x00debug_open_ring_dump,
-+ .release = rt2x00debug_release_ring_dump,
-+};
-+
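The debugfs frame-dump path above is a classic bounded producer/consumer: `rt2x00debug_dump_frame()` copies each frame into a queue but drops the new frame once roughly 20 are pending, while the single permitted reader dequeues one frame per read(). A single-threaded sketch of that drop-newest policy (`frame_queue` is illustrative, and where this sketch returns "empty" the real reader instead blocks in `wait_event_interruptible()`):

```c
#define DUMP_QUEUE_MAX 20

struct frame_queue {
	int len;	/* frames currently queued */
	int dropped;	/* frames rejected because the queue was full */
};

/*
 * Producer side: mirror rt2x00debug_dump_frame()'s policy of dropping
 * the incoming frame (never an already-queued one) when the queue has
 * grown past its limit. The check runs before the copy, so a slow
 * reader costs dropped dumps, not unbounded memory.
 */
int fq_enqueue(struct frame_queue *q)
{
	if (q->len > DUMP_QUEUE_MAX) {
		q->dropped++;
		return -1;
	}
	q->len++;
	return 0;
}

/* Consumer side: one frame per read, like the debugfs read handler. */
int fq_dequeue(struct frame_queue *q)
{
	if (q->len == 0)
		return -1;	/* real code sleeps here instead */
	q->len--;
	return 0;
}
```

The FRAME_DUMP_FILE_OPEN bit in the real code serves the same purpose as a queue lifetime: the producer re-checks it after queueing and purges if the reader closed the file mid-dump.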
- #define RT2X00DEBUGFS_OPS_READ(__name, __format, __type) \
- static ssize_t rt2x00debug_read_##__name(struct file *file, \
- char __user *buf, \
- size_t length, \
- loff_t *offset) \
- { \
-- struct rt2x00debug_intf *intf = file->private_data; \
-+ struct rt2x00debug_intf *intf = file->private_data; \
- const struct rt2x00debug *debug = intf->debug; \
- char line[16]; \
- size_t size; \
-@@ -150,7 +305,7 @@ static ssize_t rt2x00debug_write_##__name(struct file *file, \
- size_t length, \
- loff_t *offset) \
- { \
-- struct rt2x00debug_intf *intf = file->private_data; \
-+ struct rt2x00debug_intf *intf = file->private_data; \
- const struct rt2x00debug *debug = intf->debug; \
- char line[16]; \
- size_t size; \
-@@ -254,11 +409,15 @@ static struct dentry *rt2x00debug_create_file_chipset(const char *name,
- const struct rt2x00debug *debug = intf->debug;
- char *data;
-
-- data = kzalloc(4 * PRINT_LINE_LEN_MAX, GFP_KERNEL);
-+ data = kzalloc(8 * PRINT_LINE_LEN_MAX, GFP_KERNEL);
- if (!data)
- return NULL;
+ static struct platform_driver sa1100_rtc_driver = {
+ .probe = sa1100_rtc_probe,
+ .remove = sa1100_rtc_remove,
++ .suspend = sa1100_rtc_suspend,
++ .resume = sa1100_rtc_resume,
+ .driver = {
+ .name = "sa1100-rtc",
+ },
+diff --git a/drivers/rtc/rtc-sh.c b/drivers/rtc/rtc-sh.c
+index 8e8c8b8..c1d6a18 100644
+--- a/drivers/rtc/rtc-sh.c
++++ b/drivers/rtc/rtc-sh.c
+@@ -26,17 +26,7 @@
+ #include <asm/rtc.h>
- blob->data = data;
-+ data += sprintf(data, "rt chip: %04x\n", intf->rt2x00dev->chip.rt);
-+ data += sprintf(data, "rf chip: %04x\n", intf->rt2x00dev->chip.rf);
-+ data += sprintf(data, "revision:%08x\n", intf->rt2x00dev->chip.rev);
-+ data += sprintf(data, "\n");
- data += sprintf(data, "csr length: %d\n", debug->csr.word_count);
- data += sprintf(data, "eeprom length: %d\n", debug->eeprom.word_count);
- data += sprintf(data, "bbp length: %d\n", debug->bbp.word_count);
-@@ -306,12 +465,17 @@ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
- if (IS_ERR(intf->dev_flags))
- goto exit;
+ #define DRV_NAME "sh-rtc"
+-#define DRV_VERSION "0.1.3"
+-
+-#ifdef CONFIG_CPU_SH3
+-#define rtc_reg_size sizeof(u16)
+-#define RTC_BIT_INVERTED 0 /* No bug on SH7708, SH7709A */
+-#define RTC_DEF_CAPABILITIES 0UL
+-#elif defined(CONFIG_CPU_SH4)
+-#define rtc_reg_size sizeof(u32)
+-#define RTC_BIT_INVERTED 0x40 /* bug on SH7750, SH7750S */
+-#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
+-#endif
++#define DRV_VERSION "0.1.6"
--#define RT2X00DEBUGFS_CREATE_ENTRY(__intf, __name) \
-+ intf->register_folder =
-+ debugfs_create_dir("register", intf->driver_folder);
-+ if (IS_ERR(intf->register_folder))
-+ goto exit;
-+
-+#define RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(__intf, __name) \
- ({ \
- (__intf)->__name##_off_entry = \
- debugfs_create_u32(__stringify(__name) "_offset", \
- S_IRUGO | S_IWUSR, \
-- (__intf)->driver_folder, \
-+ (__intf)->register_folder, \
- &(__intf)->offset_##__name); \
- if (IS_ERR((__intf)->__name##_off_entry)) \
- goto exit; \
-@@ -319,18 +483,32 @@ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
- (__intf)->__name##_val_entry = \
- debugfs_create_file(__stringify(__name) "_value", \
- S_IRUGO | S_IWUSR, \
-- (__intf)->driver_folder, \
-+ (__intf)->register_folder, \
- (__intf), &rt2x00debug_fop_##__name);\
- if (IS_ERR((__intf)->__name##_val_entry)) \
- goto exit; \
- })
+ #define RTC_REG(r) ((r) * rtc_reg_size)
-- RT2X00DEBUGFS_CREATE_ENTRY(intf, csr);
-- RT2X00DEBUGFS_CREATE_ENTRY(intf, eeprom);
-- RT2X00DEBUGFS_CREATE_ENTRY(intf, bbp);
-- RT2X00DEBUGFS_CREATE_ENTRY(intf, rf);
-+ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, csr);
-+ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, eeprom);
-+ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, bbp);
-+ RT2X00DEBUGFS_CREATE_REGISTER_ENTRY(intf, rf);
+@@ -58,6 +48,18 @@
+ #define RCR1 RTC_REG(14) /* Control */
+ #define RCR2 RTC_REG(15) /* Control */
--#undef RT2X00DEBUGFS_CREATE_ENTRY
-+#undef RT2X00DEBUGFS_CREATE_REGISTER_ENTRY
-+
-+ intf->frame_folder =
-+ debugfs_create_dir("frame", intf->driver_folder);
-+ if (IS_ERR(intf->frame_folder))
-+ goto exit;
-+
-+ intf->frame_dump_entry =
-+ debugfs_create_file("dump", S_IRUGO, intf->frame_folder,
-+ intf, &rt2x00debug_fop_ring_dump);
-+ if (IS_ERR(intf->frame_dump_entry))
-+ goto exit;
++/*
++ * Note on RYRAR and RCR3: Up until this point most of the register
++ * definitions are consistent across all of the available parts. However,
++ * the placement of the optional RYRAR and RCR3 (the RYRAR control
++ * register used to control RYRCNT/RYRAR compare) varies considerably
++ * across various parts, occasionally being mapped in to a completely
++ * unrelated address space. For proper RYRAR support a separate resource
++ * would have to be handed off, but as this is purely optional in
++ * practice, we simply opt not to support it, thereby keeping the code
++ * quite a bit more simplified.
++ */
+
-+ skb_queue_head_init(&intf->frame_dump_skbqueue);
-+ init_waitqueue_head(&intf->frame_dump_waitqueue);
-
- return;
-
-@@ -343,11 +521,15 @@ exit:
+ /* ALARM Bits - or with BCD encoded value */
+ #define AR_ENB 0x80 /* Enable for alarm cmp */
- void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
- {
-- const struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
-+ struct rt2x00debug_intf *intf = rt2x00dev->debugfs_intf;
+diff --git a/drivers/rtc/rtc-x1205.c b/drivers/rtc/rtc-x1205.c
+index b3fae35..b90fb18 100644
+--- a/drivers/rtc/rtc-x1205.c
++++ b/drivers/rtc/rtc-x1205.c
+@@ -32,7 +32,7 @@
+ * unknown chips, the user must explicitly set the probe parameter.
+ */
- if (unlikely(!intf))
- return;
+-static unsigned short normal_i2c[] = { I2C_CLIENT_END };
++static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
-+ skb_queue_purge(&intf->frame_dump_skbqueue);
-+
-+ debugfs_remove(intf->frame_dump_entry);
-+ debugfs_remove(intf->frame_folder);
- debugfs_remove(intf->rf_val_entry);
- debugfs_remove(intf->rf_off_entry);
- debugfs_remove(intf->bbp_val_entry);
-@@ -356,6 +538,7 @@ void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
- debugfs_remove(intf->eeprom_off_entry);
- debugfs_remove(intf->csr_val_entry);
- debugfs_remove(intf->csr_off_entry);
-+ debugfs_remove(intf->register_folder);
- debugfs_remove(intf->dev_flags);
- debugfs_remove(intf->chipset_entry);
- debugfs_remove(intf->driver_entry);
-diff --git a/drivers/net/wireless/rt2x00/rt2x00debug.h b/drivers/net/wireless/rt2x00/rt2x00debug.h
-index 860e8fa..d37efbd 100644
---- a/drivers/net/wireless/rt2x00/rt2x00debug.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00debug.h
-@@ -30,9 +30,9 @@ struct rt2x00_dev;
+ /* Insmod parameters */
+ I2C_CLIENT_INSMOD;
+diff --git a/drivers/s390/block/Makefile b/drivers/s390/block/Makefile
+index be9f22d..0a89e08 100644
+--- a/drivers/s390/block/Makefile
++++ b/drivers/s390/block/Makefile
+@@ -2,8 +2,8 @@
+ # S/390 block devices
+ #
- #define RT2X00DEBUGFS_REGISTER_ENTRY(__name, __type) \
- struct reg##__name { \
-- void (*read)(const struct rt2x00_dev *rt2x00dev, \
-+ void (*read)(struct rt2x00_dev *rt2x00dev, \
- const unsigned int word, __type *data); \
-- void (*write)(const struct rt2x00_dev *rt2x00dev, \
-+ void (*write)(struct rt2x00_dev *rt2x00dev, \
- const unsigned int word, __type data); \
- \
- unsigned int word_size; \
-diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c
-index ff399f8..c4be2ac 100644
---- a/drivers/net/wireless/rt2x00/rt2x00dev.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00dev.c
-@@ -23,16 +23,12 @@
- Abstract: rt2x00 generic device routines.
+-dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_9343_erp.o
+-dasd_fba_mod-objs := dasd_fba.o dasd_3370_erp.o dasd_9336_erp.o
++dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_alias.o
++dasd_fba_mod-objs := dasd_fba.o
+ dasd_diag_mod-objs := dasd_diag.o
+ dasd_mod-objs := dasd.o dasd_ioctl.o dasd_proc.o dasd_devmap.o \
+ dasd_genhd.o dasd_erp.o
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index e6bfce6..d640427 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -48,13 +48,15 @@ MODULE_LICENSE("GPL");
+ /*
+ * SECTION: prototypes for static functions of dasd.c
*/
+-static int dasd_alloc_queue(struct dasd_device * device);
+-static void dasd_setup_queue(struct dasd_device * device);
+-static void dasd_free_queue(struct dasd_device * device);
+-static void dasd_flush_request_queue(struct dasd_device *);
+-static int dasd_flush_ccw_queue(struct dasd_device *, int);
+-static void dasd_tasklet(struct dasd_device *);
++static int dasd_alloc_queue(struct dasd_block *);
++static void dasd_setup_queue(struct dasd_block *);
++static void dasd_free_queue(struct dasd_block *);
++static void dasd_flush_request_queue(struct dasd_block *);
++static int dasd_flush_block_queue(struct dasd_block *);
++static void dasd_device_tasklet(struct dasd_device *);
++static void dasd_block_tasklet(struct dasd_block *);
+ static void do_kick_device(struct work_struct *);
++static void dasd_return_cqr_cb(struct dasd_ccw_req *, void *);
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00lib"
--
- #include <linux/kernel.h>
- #include <linux/module.h>
+ /*
+ * SECTION: Operations on the device structure.
+@@ -65,26 +67,23 @@ static wait_queue_head_t dasd_flush_wq;
+ /*
+ * Allocate memory for a new device structure.
+ */
+-struct dasd_device *
+-dasd_alloc_device(void)
++struct dasd_device *dasd_alloc_device(void)
+ {
+ struct dasd_device *device;
- #include "rt2x00.h"
- #include "rt2x00lib.h"
-+#include "rt2x00dump.h"
+- device = kzalloc(sizeof (struct dasd_device), GFP_ATOMIC);
+- if (device == NULL)
++ device = kzalloc(sizeof(struct dasd_device), GFP_ATOMIC);
++ if (!device)
+ return ERR_PTR(-ENOMEM);
+- /* open_count = 0 means device online but not in use */
+- atomic_set(&device->open_count, -1);
+ /* Get two pages for normal block device operations. */
+ device->ccw_mem = (void *) __get_free_pages(GFP_ATOMIC | GFP_DMA, 1);
+- if (device->ccw_mem == NULL) {
++ if (!device->ccw_mem) {
+ kfree(device);
+ return ERR_PTR(-ENOMEM);
+ }
+ /* Get one page for error recovery. */
+ device->erp_mem = (void *) get_zeroed_page(GFP_ATOMIC | GFP_DMA);
+- if (device->erp_mem == NULL) {
++ if (!device->erp_mem) {
+ free_pages((unsigned long) device->ccw_mem, 1);
+ kfree(device);
+ return ERR_PTR(-ENOMEM);
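dasd_alloc_device() above follows the kernel's ERR_PTR convention: each allocation failure unwinds exactly what was already acquired and encodes the errno in the returned pointer, so callers get either a valid pointer or a typed error from one return value. A freestanding sketch of that idiom; the ERR_PTR/IS_ERR helpers are re-implemented here for userspace rather than taken from `<linux/err.h>`, and `dev_alloc`'s failure injection is purely for illustration:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal userspace stand-ins for the kernel's ERR_PTR helpers:
 * errno values live in the top MAX_ERRNO bytes of the address space,
 * which no real allocation can occupy. */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

struct dev {
	void *ccw_mem;
	void *erp_mem;
};

/*
 * Two-stage allocation in the dasd_alloc_device() style: every failure
 * path frees exactly the resources acquired before it, then reports
 * -ENOMEM through the pointer itself.
 */
struct dev *dev_alloc(int fail_second)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (!d)
		return ERR_PTR(-ENOMEM);
	d->ccw_mem = malloc(64);
	if (!d->ccw_mem) {
		free(d);
		return ERR_PTR(-ENOMEM);
	}
	d->erp_mem = fail_second ? NULL : malloc(64);
	if (!d->erp_mem) {
		free(d->ccw_mem);
		free(d);
		return ERR_PTR(-ENOMEM);
	}
	return d;
}
```

This is also why the hunk checks `IS_ERR(rtc)` elsewhere in the series rather than comparing against NULL: the error pointer is non-NULL by construction.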
+@@ -93,10 +92,9 @@ dasd_alloc_device(void)
+ dasd_init_chunklist(&device->ccw_chunks, device->ccw_mem, PAGE_SIZE*2);
+ dasd_init_chunklist(&device->erp_chunks, device->erp_mem, PAGE_SIZE);
+ spin_lock_init(&device->mem_lock);
+- spin_lock_init(&device->request_queue_lock);
+- atomic_set (&device->tasklet_scheduled, 0);
++ atomic_set(&device->tasklet_scheduled, 0);
+ tasklet_init(&device->tasklet,
+- (void (*)(unsigned long)) dasd_tasklet,
++ (void (*)(unsigned long)) dasd_device_tasklet,
+ (unsigned long) device);
+ INIT_LIST_HEAD(&device->ccw_queue);
+ init_timer(&device->timer);
+@@ -110,8 +108,7 @@ dasd_alloc_device(void)
/*
- * Ring handler.
-@@ -67,7 +63,21 @@ EXPORT_SYMBOL_GPL(rt2x00lib_get_ring);
+ * Free memory of a device structure.
*/
- static void rt2x00lib_start_link_tuner(struct rt2x00_dev *rt2x00dev)
+-void
+-dasd_free_device(struct dasd_device *device)
++void dasd_free_device(struct dasd_device *device)
{
-- rt2x00_clear_link(&rt2x00dev->link);
-+ rt2x00dev->link.count = 0;
-+ rt2x00dev->link.vgc_level = 0;
-+
-+ memset(&rt2x00dev->link.qual, 0, sizeof(rt2x00dev->link.qual));
-+
-+ /*
-+ * The RX and TX percentage should start at 50%
-+ * this will assure we will get at least get some
-+ * decent value when the link tuner starts.
-+ * The value will be dropped and overwritten with
-+ * the correct (measured )value anyway during the
-+ * first run of the link tuner.
-+ */
-+ rt2x00dev->link.qual.rx_percentage = 50;
-+ rt2x00dev->link.qual.tx_percentage = 50;
-
- /*
- * Reset the link tuner.
-@@ -93,6 +103,46 @@ void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev)
+ kfree(device->private);
+ free_page((unsigned long) device->erp_mem);
+@@ -120,10 +117,42 @@ dasd_free_device(struct dasd_device *device)
}
/*
-+ * Ring initialization
++ * Allocate memory for a new device structure.
+ */
-+static void rt2x00lib_init_rxrings(struct rt2x00_dev *rt2x00dev)
++struct dasd_block *dasd_alloc_block(void)
+{
-+ struct data_ring *ring = rt2x00dev->rx;
-+ unsigned int i;
-+
-+ if (!rt2x00dev->ops->lib->init_rxentry)
-+ return;
++ struct dasd_block *block;
+
-+ if (ring->data_addr)
-+ memset(ring->data_addr, 0, rt2x00_get_ring_size(ring));
++ block = kzalloc(sizeof(*block), GFP_ATOMIC);
++ if (!block)
++ return ERR_PTR(-ENOMEM);
++ /* open_count = 0 means device online but not in use */
++ atomic_set(&block->open_count, -1);
+
-+ for (i = 0; i < ring->stats.limit; i++)
-+ rt2x00dev->ops->lib->init_rxentry(rt2x00dev, &ring->entry[i]);
++ spin_lock_init(&block->request_queue_lock);
++ atomic_set(&block->tasklet_scheduled, 0);
++ tasklet_init(&block->tasklet,
++ (void (*)(unsigned long)) dasd_block_tasklet,
++ (unsigned long) block);
++ INIT_LIST_HEAD(&block->ccw_queue);
++ spin_lock_init(&block->queue_lock);
++ init_timer(&block->timer);
+
-+ rt2x00_ring_index_clear(ring);
++ return block;
+}
+
-+static void rt2x00lib_init_txrings(struct rt2x00_dev *rt2x00dev)
++/*
++ * Free memory of a device structure.
++ */
++void dasd_free_block(struct dasd_block *block)
+{
-+ struct data_ring *ring;
-+ unsigned int i;
-+
-+ if (!rt2x00dev->ops->lib->init_txentry)
-+ return;
-+
-+ txringall_for_each(rt2x00dev, ring) {
-+ if (ring->data_addr)
-+ memset(ring->data_addr, 0, rt2x00_get_ring_size(ring));
-+
-+ for (i = 0; i < ring->stats.limit; i++)
-+ rt2x00dev->ops->lib->init_txentry(rt2x00dev,
-+ &ring->entry[i]);
-+
-+ rt2x00_ring_index_clear(ring);
-+ }
++ kfree(block);
+}
+
+/*
- * Radio control handlers.
+ * Make a new device known to the system.
*/
- int rt2x00lib_enable_radio(struct rt2x00_dev *rt2x00dev)
-@@ -108,6 +158,12 @@ int rt2x00lib_enable_radio(struct rt2x00_dev *rt2x00dev)
- return 0;
+-static int
+-dasd_state_new_to_known(struct dasd_device *device)
++static int dasd_state_new_to_known(struct dasd_device *device)
+ {
+ int rc;
- /*
-+ * Initialize all data rings.
-+ */
-+ rt2x00lib_init_rxrings(rt2x00dev);
-+ rt2x00lib_init_txrings(rt2x00dev);
-+
-+ /*
- * Enable radio.
+@@ -133,12 +162,13 @@ dasd_state_new_to_known(struct dasd_device *device)
*/
- status = rt2x00dev->ops->lib->set_device_state(rt2x00dev,
-@@ -179,26 +235,153 @@ void rt2x00lib_toggle_rx(struct rt2x00_dev *rt2x00dev, enum dev_state state)
- rt2x00lib_start_link_tuner(rt2x00dev);
- }
+ dasd_get_device(device);
--static void rt2x00lib_precalculate_link_signal(struct link *link)
-+static void rt2x00lib_evaluate_antenna_sample(struct rt2x00_dev *rt2x00dev)
+- rc = dasd_alloc_queue(device);
+- if (rc) {
+- dasd_put_device(device);
+- return rc;
++ if (device->block) {
++ rc = dasd_alloc_queue(device->block);
++ if (rc) {
++ dasd_put_device(device);
++ return rc;
++ }
+ }
+-
+ device->state = DASD_STATE_KNOWN;
+ return 0;
+ }
+@@ -146,21 +176,24 @@ dasd_state_new_to_known(struct dasd_device *device)
+ /*
+ * Let the system forget about a device.
+ */
+-static int
+-dasd_state_known_to_new(struct dasd_device * device)
++static int dasd_state_known_to_new(struct dasd_device *device)
{
-- if (link->rx_failed || link->rx_success)
-- link->rx_percentage =
-- (link->rx_success * 100) /
-- (link->rx_failed + link->rx_success);
-+ enum antenna rx = rt2x00dev->link.ant.active.rx;
-+ enum antenna tx = rt2x00dev->link.ant.active.tx;
-+ int sample_a =
-+ rt2x00_get_link_ant_rssi_history(&rt2x00dev->link, ANTENNA_A);
-+ int sample_b =
-+ rt2x00_get_link_ant_rssi_history(&rt2x00dev->link, ANTENNA_B);
-+
-+ /*
-+ * We are done sampling. Now we should evaluate the results.
-+ */
-+ rt2x00dev->link.ant.flags &= ~ANTENNA_MODE_SAMPLE;
-+
-+ /*
-+ * During the last period we have sampled the RSSI
-+ * from both antenna's. It now is time to determine
-+ * which antenna demonstrated the best performance.
-+ * When we are already on the antenna with the best
-+ * performance, then there really is nothing for us
-+ * left to do.
-+ */
-+ if (sample_a == sample_b)
-+ return;
-+
-+ if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) {
-+ if (sample_a > sample_b && rx == ANTENNA_B)
-+ rx = ANTENNA_A;
-+ else if (rx == ANTENNA_A)
-+ rx = ANTENNA_B;
+ /* Disable extended error reporting for this device. */
+ dasd_eer_disable(device);
+ /* Forget the discipline information. */
+- if (device->discipline)
++ if (device->discipline) {
++ if (device->discipline->uncheck_device)
++ device->discipline->uncheck_device(device);
+ module_put(device->discipline->owner);
+ }
-+
-+ if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY) {
-+ if (sample_a > sample_b && tx == ANTENNA_B)
-+ tx = ANTENNA_A;
-+ else if (tx == ANTENNA_A)
-+ tx = ANTENNA_B;
+ device->discipline = NULL;
+ if (device->base_discipline)
+ module_put(device->base_discipline->owner);
+ device->base_discipline = NULL;
+ device->state = DASD_STATE_NEW;
+
+- dasd_free_queue(device);
++ if (device->block)
++ dasd_free_queue(device->block);
+
+ /* Give up reference we took in dasd_state_new_to_known. */
+ dasd_put_device(device);
+@@ -170,19 +203,19 @@ dasd_state_known_to_new(struct dasd_device * device)
+ /*
+ * Request the irq line for the device.
+ */
+-static int
+-dasd_state_known_to_basic(struct dasd_device * device)
++static int dasd_state_known_to_basic(struct dasd_device *device)
+ {
+ int rc;
+
+ /* Allocate and register gendisk structure. */
+- rc = dasd_gendisk_alloc(device);
+- if (rc)
+- return rc;
+-
++ if (device->block) {
++ rc = dasd_gendisk_alloc(device->block);
++ if (rc)
++ return rc;
+ }
-+
-+ rt2x00lib_config_antenna(rt2x00dev, rx, tx);
-+}
-+
-+static void rt2x00lib_evaluate_antenna_eval(struct rt2x00_dev *rt2x00dev)
-+{
-+ enum antenna rx = rt2x00dev->link.ant.active.rx;
-+ enum antenna tx = rt2x00dev->link.ant.active.tx;
-+ int rssi_curr = rt2x00_get_link_ant_rssi(&rt2x00dev->link);
-+ int rssi_old = rt2x00_update_ant_rssi(&rt2x00dev->link, rssi_curr);
-+
-+ /*
-+ * Legacy driver indicates that we should swap antenna's
-+ * when the difference in RSSI is greater that 5. This
-+ * also should be done when the RSSI was actually better
-+ * then the previous sample.
-+ * When the difference exceeds the threshold we should
-+ * sample the rssi from the other antenna to make a valid
-+ * comparison between the 2 antennas.
-+ */
-+ if ((rssi_curr - rssi_old) > -5 || (rssi_curr - rssi_old) < 5)
-+ return;
-+
-+ rt2x00dev->link.ant.flags |= ANTENNA_MODE_SAMPLE;
-+
-+ if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY)
-+ rx = (rx == ANTENNA_A) ? ANTENNA_B : ANTENNA_A;
-+
-+ if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)
-+ tx = (tx == ANTENNA_A) ? ANTENNA_B : ANTENNA_A;
-+
-+ rt2x00lib_config_antenna(rt2x00dev, rx, tx);
-+}
-+
-+static void rt2x00lib_evaluate_antenna(struct rt2x00_dev *rt2x00dev)
-+{
-+ /*
-+ * Determine if software diversity is enabled for
-+ * either the TX or RX antenna (or both).
-+ * Always perform this check since within the link
-+ * tuner interval the configuration might have changed.
-+ */
-+ rt2x00dev->link.ant.flags &= ~ANTENNA_RX_DIVERSITY;
-+ rt2x00dev->link.ant.flags &= ~ANTENNA_TX_DIVERSITY;
-+
-+ if (rt2x00dev->hw->conf.antenna_sel_rx == 0 &&
-+ rt2x00dev->default_ant.rx != ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->link.ant.flags |= ANTENNA_RX_DIVERSITY;
-+ if (rt2x00dev->hw->conf.antenna_sel_tx == 0 &&
-+ rt2x00dev->default_ant.tx != ANTENNA_SW_DIVERSITY)
-+ rt2x00dev->link.ant.flags |= ANTENNA_TX_DIVERSITY;
-+
-+ if (!(rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) &&
-+ !(rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)) {
-+ rt2x00dev->link.ant.flags &= ~ANTENNA_MODE_SAMPLE;
-+ return;
+ /* register 'device' debug area, used for all DBF_DEV_XXX calls */
+- device->debug_area = debug_register(device->cdev->dev.bus_id, 1, 2,
+- 8 * sizeof (long));
++ device->debug_area = debug_register(device->cdev->dev.bus_id, 1, 1,
++ 8 * sizeof(long));
+ debug_register_view(device->debug_area, &debug_sprintf_view);
+ debug_set_level(device->debug_area, DBF_WARNING);
+ DBF_DEV_EVENT(DBF_EMERG, device, "%s", "debug area created");
+@@ -194,16 +227,17 @@ dasd_state_known_to_basic(struct dasd_device * device)
+ /*
+ * Release the irq line for the device. Terminate any running i/o.
+ */
+-static int
+-dasd_state_basic_to_known(struct dasd_device * device)
++static int dasd_state_basic_to_known(struct dasd_device *device)
+ {
+ int rc;
+-
+- dasd_gendisk_free(device);
+- rc = dasd_flush_ccw_queue(device, 1);
++ if (device->block) {
++ dasd_gendisk_free(device->block);
++ dasd_block_clear_timer(device->block);
+ }
-+
-+ /*
-+ * If we have only sampled the data over the last period
-+ * we should now harvest the data. Otherwise just evaluate
-+ * the data. The latter should only be performed once
-+ * every 2 seconds.
-+ */
-+ if (rt2x00dev->link.ant.flags & ANTENNA_MODE_SAMPLE)
-+ rt2x00lib_evaluate_antenna_sample(rt2x00dev);
-+ else if (rt2x00dev->link.count & 1)
-+ rt2x00lib_evaluate_antenna_eval(rt2x00dev);
-+}
-+
-+static void rt2x00lib_update_link_stats(struct link *link, int rssi)
-+{
-+ int avg_rssi = rssi;
-+
-+ /*
-+ * Update global RSSI
-+ */
-+ if (link->qual.avg_rssi)
-+ avg_rssi = MOVING_AVERAGE(link->qual.avg_rssi, rssi, 8);
-+ link->qual.avg_rssi = avg_rssi;
-+
-+ /*
-+ * Update antenna RSSI
-+ */
-+ if (link->ant.rssi_ant)
-+ rssi = MOVING_AVERAGE(link->ant.rssi_ant, rssi, 8);
-+ link->ant.rssi_ant = rssi;
-+}
-+
-+static void rt2x00lib_precalculate_link_signal(struct link_qual *qual)
-+{
-+ if (qual->rx_failed || qual->rx_success)
-+ qual->rx_percentage =
-+ (qual->rx_success * 100) /
-+ (qual->rx_failed + qual->rx_success);
- else
-- link->rx_percentage = 50;
-+ qual->rx_percentage = 50;
++ rc = dasd_flush_device_queue(device);
+ if (rc)
+ return rc;
+- dasd_clear_timer(device);
++ dasd_device_clear_timer(device);
-- if (link->tx_failed || link->tx_success)
-- link->tx_percentage =
-- (link->tx_success * 100) /
-- (link->tx_failed + link->tx_success);
-+ if (qual->tx_failed || qual->tx_success)
-+ qual->tx_percentage =
-+ (qual->tx_success * 100) /
-+ (qual->tx_failed + qual->tx_success);
- else
-- link->tx_percentage = 50;
-+ qual->tx_percentage = 50;
+ DBF_DEV_EVENT(DBF_EMERG, device, "%p debug area deleted", device);
+ if (device->debug_area != NULL) {
+@@ -228,26 +262,32 @@ dasd_state_basic_to_known(struct dasd_device * device)
+ * In case the analysis returns an error, the device setup is stopped
+ * (a fake disk was already added to allow formatting).
+ */
+-static int
+-dasd_state_basic_to_ready(struct dasd_device * device)
++static int dasd_state_basic_to_ready(struct dasd_device *device)
+ {
+ int rc;
++ struct dasd_block *block;
-- link->rx_success = 0;
-- link->rx_failed = 0;
-- link->tx_success = 0;
-- link->tx_failed = 0;
-+ qual->rx_success = 0;
-+ qual->rx_failed = 0;
-+ qual->tx_success = 0;
-+ qual->tx_failed = 0;
+ rc = 0;
+- if (device->discipline->do_analysis != NULL)
+- rc = device->discipline->do_analysis(device);
+- if (rc) {
+- if (rc != -EAGAIN)
+- device->state = DASD_STATE_UNFMT;
+- return rc;
+- }
++ block = device->block;
+ /* make disk known with correct capacity */
+- dasd_setup_queue(device);
+- set_capacity(device->gdp, device->blocks << device->s2b_shift);
+- device->state = DASD_STATE_READY;
+- rc = dasd_scan_partitions(device);
+- if (rc)
+- device->state = DASD_STATE_BASIC;
++ if (block) {
++ if (block->base->discipline->do_analysis != NULL)
++ rc = block->base->discipline->do_analysis(block);
++ if (rc) {
++ if (rc != -EAGAIN)
++ device->state = DASD_STATE_UNFMT;
++ return rc;
++ }
++ dasd_setup_queue(block);
++ set_capacity(block->gdp,
++ block->blocks << block->s2b_shift);
++ device->state = DASD_STATE_READY;
++ rc = dasd_scan_partitions(block);
++ if (rc)
++ device->state = DASD_STATE_BASIC;
++ } else {
++ device->state = DASD_STATE_READY;
++ }
+ return rc;
}
- static int rt2x00lib_calculate_link_signal(struct rt2x00_dev *rt2x00dev,
-@@ -225,8 +408,8 @@ static int rt2x00lib_calculate_link_signal(struct rt2x00_dev *rt2x00dev,
- * defines to calculate the current link signal.
- */
- signal = ((WEIGHT_RSSI * rssi_percentage) +
-- (WEIGHT_TX * rt2x00dev->link.tx_percentage) +
-- (WEIGHT_RX * rt2x00dev->link.rx_percentage)) / 100;
-+ (WEIGHT_TX * rt2x00dev->link.qual.tx_percentage) +
-+ (WEIGHT_RX * rt2x00dev->link.qual.rx_percentage)) / 100;
+@@ -256,28 +296,31 @@ dasd_state_basic_to_ready(struct dasd_device * device)
+ * Forget format information. Check if the target level is basic
+ * and if it is create fake disk for formatting.
+ */
+-static int
+-dasd_state_ready_to_basic(struct dasd_device * device)
++static int dasd_state_ready_to_basic(struct dasd_device *device)
+ {
+ int rc;
- return (signal > 100) ? 100 : signal;
+- rc = dasd_flush_ccw_queue(device, 0);
+- if (rc)
+- return rc;
+- dasd_destroy_partitions(device);
+- dasd_flush_request_queue(device);
+- device->blocks = 0;
+- device->bp_block = 0;
+- device->s2b_shift = 0;
+ device->state = DASD_STATE_BASIC;
++ if (device->block) {
++ struct dasd_block *block = device->block;
++ rc = dasd_flush_block_queue(block);
++ if (rc) {
++ device->state = DASD_STATE_READY;
++ return rc;
++ }
++ dasd_destroy_partitions(block);
++ dasd_flush_request_queue(block);
++ block->blocks = 0;
++ block->bp_block = 0;
++ block->s2b_shift = 0;
++ }
+ return 0;
}
-@@ -246,10 +429,9 @@ static void rt2x00lib_link_tuner(struct work_struct *work)
- /*
- * Update statistics.
- */
-- rt2x00dev->ops->lib->link_stats(rt2x00dev);
--
-+ rt2x00dev->ops->lib->link_stats(rt2x00dev, &rt2x00dev->link.qual);
- rt2x00dev->low_level_stats.dot11FCSErrorCount +=
-- rt2x00dev->link.rx_failed;
-+ rt2x00dev->link.qual.rx_failed;
- /*
- * Only perform the link tuning when Link tuning
-@@ -259,10 +441,15 @@ static void rt2x00lib_link_tuner(struct work_struct *work)
- rt2x00dev->ops->lib->link_tuner(rt2x00dev);
+ /*
+ * Back to basic.
+ */
+-static int
+-dasd_state_unfmt_to_basic(struct dasd_device * device)
++static int dasd_state_unfmt_to_basic(struct dasd_device *device)
+ {
+ device->state = DASD_STATE_BASIC;
+ return 0;
+@@ -291,17 +334,31 @@ dasd_state_unfmt_to_basic(struct dasd_device * device)
+ static int
+ dasd_state_ready_to_online(struct dasd_device * device)
+ {
++ int rc;
++
++ if (device->discipline->ready_to_online) {
++ rc = device->discipline->ready_to_online(device);
++ if (rc)
++ return rc;
++ }
+ device->state = DASD_STATE_ONLINE;
+- dasd_schedule_bh(device);
++ if (device->block)
++ dasd_schedule_block_bh(device->block);
+ return 0;
+ }
- /*
-+ * Evaluate antenna setup.
-+ */
-+ rt2x00lib_evaluate_antenna(rt2x00dev);
+ /*
+ * Stop the requeueing of requests again.
+ */
+-static int
+-dasd_state_online_to_ready(struct dasd_device * device)
++static int dasd_state_online_to_ready(struct dasd_device *device)
+ {
++ int rc;
+
-+ /*
- * Precalculate a portion of the link signal which is
- * in based on the tx/rx success/failure counters.
- */
-- rt2x00lib_precalculate_link_signal(&rt2x00dev->link);
-+ rt2x00lib_precalculate_link_signal(&rt2x00dev->link.qual);
++ if (device->discipline->online_to_ready) {
++ rc = device->discipline->online_to_ready(device);
++ if (rc)
++ return rc;
++ }
+ device->state = DASD_STATE_READY;
+ return 0;
+ }
+@@ -309,8 +366,7 @@ dasd_state_online_to_ready(struct dasd_device * device)
+ /*
+ * Device startup state changes.
+ */
+-static int
+-dasd_increase_state(struct dasd_device *device)
++static int dasd_increase_state(struct dasd_device *device)
+ {
+ int rc;
- /*
- * Increase tuner counter, and reschedule the next link tuner run.
-@@ -276,7 +463,7 @@ static void rt2x00lib_packetfilter_scheduled(struct work_struct *work)
+@@ -345,8 +401,7 @@ dasd_increase_state(struct dasd_device *device)
+ /*
+ * Device shutdown state changes.
+ */
+-static int
+-dasd_decrease_state(struct dasd_device *device)
++static int dasd_decrease_state(struct dasd_device *device)
{
- struct rt2x00_dev *rt2x00dev =
- container_of(work, struct rt2x00_dev, filter_work);
-- unsigned int filter = rt2x00dev->interface.filter;
-+ unsigned int filter = rt2x00dev->packet_filter;
+ int rc;
- /*
- * Since we had stored the filter inside interface.filter,
-@@ -284,7 +471,7 @@ static void rt2x00lib_packetfilter_scheduled(struct work_struct *work)
- * assume nothing has changed (*total_flags will be compared
- * to interface.filter to determine if any action is required).
- */
-- rt2x00dev->interface.filter = 0;
-+ rt2x00dev->packet_filter = 0;
+@@ -381,8 +436,7 @@ dasd_decrease_state(struct dasd_device *device)
+ /*
+ * This is the main startup/shutdown routine.
+ */
+-static void
+-dasd_change_state(struct dasd_device *device)
++static void dasd_change_state(struct dasd_device *device)
+ {
+ int rc;
- rt2x00dev->ops->hw->configure_filter(rt2x00dev->hw,
- filter, &filter, 0, NULL);
-@@ -294,10 +481,17 @@ static void rt2x00lib_configuration_scheduled(struct work_struct *work)
+@@ -409,17 +463,15 @@ dasd_change_state(struct dasd_device *device)
+ * dasd_kick_device will schedule a call do do_kick_device to the kernel
+ * event daemon.
+ */
+-static void
+-do_kick_device(struct work_struct *work)
++static void do_kick_device(struct work_struct *work)
{
- struct rt2x00_dev *rt2x00dev =
- container_of(work, struct rt2x00_dev, config_work);
-- int preamble = !test_bit(CONFIG_SHORT_PREAMBLE, &rt2x00dev->flags);
-+ struct ieee80211_bss_conf bss_conf;
+ struct dasd_device *device = container_of(work, struct dasd_device, kick_work);
+ dasd_change_state(device);
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ dasd_put_device(device);
+ }
-- rt2x00mac_erp_ie_changed(rt2x00dev->hw,
-- IEEE80211_ERP_CHANGE_PREAMBLE, 0, preamble);
-+ bss_conf.use_short_preamble =
-+ test_bit(CONFIG_SHORT_PREAMBLE, &rt2x00dev->flags);
-+
-+ /*
-+ * FIXME: shouldn't invoke it this way because all other contents
-+ * of bss_conf is invalid.
-+ */
-+ rt2x00mac_bss_info_changed(rt2x00dev->hw, rt2x00dev->interface.id,
-+ &bss_conf, BSS_CHANGED_ERP_PREAMBLE);
+-void
+-dasd_kick_device(struct dasd_device *device)
++void dasd_kick_device(struct dasd_device *device)
+ {
+ dasd_get_device(device);
+ /* queue call to dasd_kick_device to the kernel event daemon. */
+@@ -429,8 +481,7 @@ dasd_kick_device(struct dasd_device *device)
+ /*
+ * Set the target state for a device and starts the state change.
+ */
+-void
+-dasd_set_target_state(struct dasd_device *device, int target)
++void dasd_set_target_state(struct dasd_device *device, int target)
+ {
+ /* If we are in probeonly mode stop at DASD_STATE_READY. */
+ if (dasd_probeonly && target > DASD_STATE_READY)
+@@ -447,14 +498,12 @@ dasd_set_target_state(struct dasd_device *device, int target)
+ /*
+ * Enable devices with device numbers in [from..to].
+ */
+-static inline int
+-_wait_for_device(struct dasd_device *device)
++static inline int _wait_for_device(struct dasd_device *device)
+ {
+ return (device->state == device->target);
}
+-void
+-dasd_enable_device(struct dasd_device *device)
++void dasd_enable_device(struct dasd_device *device)
+ {
+ dasd_set_target_state(device, DASD_STATE_ONLINE);
+ if (device->state <= DASD_STATE_KNOWN)
+@@ -475,20 +524,20 @@ unsigned int dasd_profile_level = DASD_PROFILE_OFF;
/*
-@@ -350,8 +544,8 @@ void rt2x00lib_txdone(struct data_entry *entry,
- tx_status->ack_signal = 0;
- tx_status->excessive_retries = (status == TX_FAIL_RETRY);
- tx_status->retry_count = retry;
-- rt2x00dev->link.tx_success += success;
-- rt2x00dev->link.tx_failed += retry + fail;
-+ rt2x00dev->link.qual.tx_success += success;
-+ rt2x00dev->link.qual.tx_failed += retry + fail;
+ * Increments counter in global and local profiling structures.
+ */
+-#define dasd_profile_counter(value, counter, device) \
++#define dasd_profile_counter(value, counter, block) \
+ { \
+ int index; \
+ for (index = 0; index < 31 && value >> (2+index); index++); \
+ dasd_global_profile.counter[index]++; \
+- device->profile.counter[index]++; \
++ block->profile.counter[index]++; \
+ }
- if (!(tx_status->control.flags & IEEE80211_TXCTL_NO_ACK)) {
- if (success)
-@@ -371,9 +565,11 @@ void rt2x00lib_txdone(struct data_entry *entry,
- }
+ /*
+ * Add profiling information for cqr before execution.
+ */
+-static void
+-dasd_profile_start(struct dasd_device *device, struct dasd_ccw_req * cqr,
+- struct request *req)
++static void dasd_profile_start(struct dasd_block *block,
++ struct dasd_ccw_req *cqr,
++ struct request *req)
+ {
+ struct list_head *l;
+ unsigned int counter;
+@@ -498,19 +547,19 @@ dasd_profile_start(struct dasd_device *device, struct dasd_ccw_req * cqr,
- /*
-- * Send the tx_status to mac80211,
-- * that method also cleans up the skb structure.
-+ * Send the tx_status to mac80211 & debugfs.
-+ * mac80211 will clean up the skb structure.
- */
-+ get_skb_desc(entry->skb)->frame_type = DUMP_FRAME_TXDONE;
-+ rt2x00debug_dump_frame(rt2x00dev, entry->skb);
- ieee80211_tx_status_irqsafe(rt2x00dev->hw, entry->skb, tx_status);
- entry->skb = NULL;
+ /* count the length of the chanq for statistics */
+ counter = 0;
+- list_for_each(l, &device->ccw_queue)
++ list_for_each(l, &block->ccw_queue)
+ if (++counter >= 31)
+ break;
+ dasd_global_profile.dasd_io_nr_req[counter]++;
+- device->profile.dasd_io_nr_req[counter]++;
++ block->profile.dasd_io_nr_req[counter]++;
}
-@@ -386,8 +582,10 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
- struct ieee80211_rx_status *rx_status = &rt2x00dev->rx_status;
- struct ieee80211_hw_mode *mode;
- struct ieee80211_rate *rate;
-+ struct ieee80211_hdr *hdr;
- unsigned int i;
- int val = 0;
-+ u16 fc;
- /*
- * Update RX statistics.
-@@ -412,17 +610,28 @@ void rt2x00lib_rxdone(struct data_entry *entry, struct sk_buff *skb,
- }
- }
+ /*
+ * Add profiling information for cqr after execution.
+ */
+-static void
+-dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
+- struct request *req)
++static void dasd_profile_end(struct dasd_block *block,
++ struct dasd_ccw_req *cqr,
++ struct request *req)
+ {
+ long strtime, irqtime, endtime, tottime; /* in microseconds */
+ long tottimeps, sectors;
+@@ -532,27 +581,27 @@ dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
-- rt2x00_update_link_rssi(&rt2x00dev->link, desc->rssi);
-- rt2x00dev->link.rx_success++;
-+ /*
-+ * Only update link status if this is a beacon frame carrying our bssid.
-+ */
-+ hdr = (struct ieee80211_hdr*)skb->data;
-+ fc = le16_to_cpu(hdr->frame_control);
-+ if (is_beacon(fc) && desc->my_bss)
-+ rt2x00lib_update_link_stats(&rt2x00dev->link, desc->rssi);
-+
-+ rt2x00dev->link.qual.rx_success++;
-+
- rx_status->rate = val;
- rx_status->signal =
- rt2x00lib_calculate_link_signal(rt2x00dev, desc->rssi);
- rx_status->ssi = desc->rssi;
- rx_status->flag = desc->flags;
-+ rx_status->antenna = rt2x00dev->link.ant.active.rx;
+ if (!dasd_global_profile.dasd_io_reqs)
+ memset(&dasd_global_profile, 0,
+- sizeof (struct dasd_profile_info_t));
++ sizeof(struct dasd_profile_info_t));
+ dasd_global_profile.dasd_io_reqs++;
+ dasd_global_profile.dasd_io_sects += sectors;
- /*
-- * Send frame to mac80211
-+ * Send frame to mac80211 & debugfs
- */
-+ get_skb_desc(skb)->frame_type = DUMP_FRAME_RXDONE;
-+ rt2x00debug_dump_frame(rt2x00dev, skb);
- ieee80211_rx_irqsafe(rt2x00dev->hw, skb, rx_status);
+- if (!device->profile.dasd_io_reqs)
+- memset(&device->profile, 0,
+- sizeof (struct dasd_profile_info_t));
+- device->profile.dasd_io_reqs++;
+- device->profile.dasd_io_sects += sectors;
++ if (!block->profile.dasd_io_reqs)
++ memset(&block->profile, 0,
++ sizeof(struct dasd_profile_info_t));
++ block->profile.dasd_io_reqs++;
++ block->profile.dasd_io_sects += sectors;
+
+- dasd_profile_counter(sectors, dasd_io_secs, device);
+- dasd_profile_counter(tottime, dasd_io_times, device);
+- dasd_profile_counter(tottimeps, dasd_io_timps, device);
+- dasd_profile_counter(strtime, dasd_io_time1, device);
+- dasd_profile_counter(irqtime, dasd_io_time2, device);
+- dasd_profile_counter(irqtime / sectors, dasd_io_time2ps, device);
+- dasd_profile_counter(endtime, dasd_io_time3, device);
++ dasd_profile_counter(sectors, dasd_io_secs, block);
++ dasd_profile_counter(tottime, dasd_io_times, block);
++ dasd_profile_counter(tottimeps, dasd_io_timps, block);
++ dasd_profile_counter(strtime, dasd_io_time1, block);
++ dasd_profile_counter(irqtime, dasd_io_time2, block);
++ dasd_profile_counter(irqtime / sectors, dasd_io_time2ps, block);
++ dasd_profile_counter(endtime, dasd_io_time3, block);
}
- EXPORT_SYMBOL_GPL(rt2x00lib_rxdone);
-@@ -431,36 +640,25 @@ EXPORT_SYMBOL_GPL(rt2x00lib_rxdone);
- * TX descriptor initializer
+ #else
+-#define dasd_profile_start(device, cqr, req) do {} while (0)
+-#define dasd_profile_end(device, cqr, req) do {} while (0)
++#define dasd_profile_start(block, cqr, req) do {} while (0)
++#define dasd_profile_end(block, cqr, req) do {} while (0)
+ #endif /* CONFIG_DASD_PROFILE */
+
+ /*
+@@ -562,9 +611,9 @@ dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
+ * memory and 2) dasd_smalloc_request uses the static ccw memory
+ * that gets allocated for each device.
*/
- void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
-+ struct sk_buff *skb,
- struct ieee80211_tx_control *control)
+-struct dasd_ccw_req *
+-dasd_kmalloc_request(char *magic, int cplength, int datasize,
+- struct dasd_device * device)
++struct dasd_ccw_req *dasd_kmalloc_request(char *magic, int cplength,
++ int datasize,
++ struct dasd_device *device)
{
- struct txdata_entry_desc desc;
-- struct data_ring *ring;
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ struct ieee80211_hdr *ieee80211hdr = skbdesc->data;
- int tx_rate;
- int bitrate;
-+ int length;
- int duration;
- int residual;
- u16 frame_control;
- u16 seq_ctrl;
-
-- /*
-- * Make sure the descriptor is properly cleared.
-- */
-- memset(&desc, 0x00, sizeof(desc));
-+ memset(&desc, 0, sizeof(desc));
-
-- /*
-- * Get ring pointer, if we fail to obtain the
-- * correct ring, then use the first TX ring.
-- */
-- ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
-- if (!ring)
-- ring = rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
--
-- desc.cw_min = ring->tx_params.cw_min;
-- desc.cw_max = ring->tx_params.cw_max;
-- desc.aifs = ring->tx_params.aifs;
-+ desc.cw_min = skbdesc->ring->tx_params.cw_min;
-+ desc.cw_max = skbdesc->ring->tx_params.cw_max;
-+ desc.aifs = skbdesc->ring->tx_params.aifs;
+ struct dasd_ccw_req *cqr;
- /*
- * Identify queue
-@@ -482,12 +680,21 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- tx_rate = control->tx_rate;
+@@ -600,9 +649,9 @@ dasd_kmalloc_request(char *magic, int cplength, int datasize,
+ return cqr;
+ }
- /*
-+ * Check whether this frame is to be acked
-+ */
-+ if (!(control->flags & IEEE80211_TXCTL_NO_ACK))
-+ __set_bit(ENTRY_TXD_ACK, &desc.flags);
-+
-+ /*
- * Check if this is a RTS/CTS frame
- */
- if (is_rts_frame(frame_control) || is_cts_frame(frame_control)) {
- __set_bit(ENTRY_TXD_BURST, &desc.flags);
-- if (is_rts_frame(frame_control))
-+ if (is_rts_frame(frame_control)) {
- __set_bit(ENTRY_TXD_RTS_FRAME, &desc.flags);
-+ __set_bit(ENTRY_TXD_ACK, &desc.flags);
-+ } else
-+ __clear_bit(ENTRY_TXD_ACK, &desc.flags);
- if (control->rts_cts_rate)
- tx_rate = control->rts_cts_rate;
- }
-@@ -532,17 +739,18 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- desc.signal = DEVICE_GET_RATE_FIELD(tx_rate, PLCP);
- desc.service = 0x04;
+-struct dasd_ccw_req *
+-dasd_smalloc_request(char *magic, int cplength, int datasize,
+- struct dasd_device * device)
++struct dasd_ccw_req *dasd_smalloc_request(char *magic, int cplength,
++ int datasize,
++ struct dasd_device *device)
+ {
+ unsigned long flags;
+ struct dasd_ccw_req *cqr;
+@@ -649,8 +698,7 @@ dasd_smalloc_request(char *magic, int cplength, int datasize,
+ * idal lists that might have been created by dasd_set_cda and the
+ * struct dasd_ccw_req itself.
+ */
+-void
+-dasd_kfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
++void dasd_kfree_request(struct dasd_ccw_req *cqr, struct dasd_device *device)
+ {
+ #ifdef CONFIG_64BIT
+ struct ccw1 *ccw;
+@@ -667,8 +715,7 @@ dasd_kfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
+ dasd_put_device(device);
+ }
-+ length = skbdesc->data_len + FCS_LEN;
- if (test_bit(ENTRY_TXD_OFDM_RATE, &desc.flags)) {
-- desc.length_high = ((length + FCS_LEN) >> 6) & 0x3f;
-- desc.length_low = ((length + FCS_LEN) & 0x3f);
-+ desc.length_high = (length >> 6) & 0x3f;
-+ desc.length_low = length & 0x3f;
- } else {
- bitrate = DEVICE_GET_RATE_FIELD(tx_rate, RATE);
+-void
+-dasd_sfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
++void dasd_sfree_request(struct dasd_ccw_req *cqr, struct dasd_device *device)
+ {
+ unsigned long flags;
- /*
- * Convert length to microseconds.
- */
-- residual = get_duration_res(length + FCS_LEN, bitrate);
-- duration = get_duration(length + FCS_LEN, bitrate);
-+ residual = get_duration_res(length, bitrate);
-+ duration = get_duration(length, bitrate);
+@@ -681,14 +728,13 @@ dasd_sfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
+ /*
+ * Check discipline magic in cqr.
+ */
+-static inline int
+-dasd_check_cqr(struct dasd_ccw_req *cqr)
++static inline int dasd_check_cqr(struct dasd_ccw_req *cqr)
+ {
+ struct dasd_device *device;
- if (residual != 0) {
- duration++;
-@@ -565,8 +773,22 @@ void rt2x00lib_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- desc.signal |= 0x08;
+ if (cqr == NULL)
+ return -EINVAL;
+- device = cqr->device;
++ device = cqr->startdev;
+ if (strncmp((char *) &cqr->magic, device->discipline->ebcname, 4)) {
+ DEV_MESSAGE(KERN_WARNING, device,
+ " dasd_ccw_req 0x%08x magic doesn't match"
+@@ -706,8 +752,7 @@ dasd_check_cqr(struct dasd_ccw_req *cqr)
+ * ccw_device_clear can fail if the i/o subsystem
+ * is in a bad mood.
+ */
+-int
+-dasd_term_IO(struct dasd_ccw_req * cqr)
++int dasd_term_IO(struct dasd_ccw_req *cqr)
+ {
+ struct dasd_device *device;
+ int retries, rc;
+@@ -717,13 +762,13 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
+ if (rc)
+ return rc;
+ retries = 0;
+- device = (struct dasd_device *) cqr->device;
++ device = (struct dasd_device *) cqr->startdev;
+ while ((retries < 5) && (cqr->status == DASD_CQR_IN_IO)) {
+ rc = ccw_device_clear(device->cdev, (long) cqr);
+ switch (rc) {
+ case 0: /* termination successful */
+ cqr->retries--;
+- cqr->status = DASD_CQR_CLEAR;
++ cqr->status = DASD_CQR_CLEAR_PENDING;
+ cqr->stopclk = get_clock();
+ cqr->starttime = 0;
+ DBF_DEV_EVENT(DBF_DEBUG, device,
+@@ -753,7 +798,7 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
+ }
+ retries++;
}
-
-- rt2x00dev->ops->lib->write_tx_desc(rt2x00dev, txd, &desc,
-- ieee80211hdr, length, control);
-+ rt2x00dev->ops->lib->write_tx_desc(rt2x00dev, skb, &desc, control);
-+
-+ /*
-+ * Update ring entry.
-+ */
-+ skbdesc->entry->skb = skb;
-+ memcpy(&skbdesc->entry->tx_status.control, control, sizeof(*control));
-+
-+ /*
-+ * The frame has been completely initialized and ready
-+ * for sending to the device. The caller will push the
-+ * frame to the device, but we are going to push the
-+ * frame to debugfs here.
-+ */
-+ skbdesc->frame_type = DUMP_FRAME_TX;
-+ rt2x00debug_dump_frame(rt2x00dev, skb);
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ return rc;
}
- EXPORT_SYMBOL_GPL(rt2x00lib_write_tx_desc);
-@@ -809,6 +1031,7 @@ static int rt2x00lib_alloc_entries(struct data_ring *ring,
- entry[i].flags = 0;
- entry[i].ring = ring;
- entry[i].skb = NULL;
-+ entry[i].entry_idx = i;
+@@ -761,8 +806,7 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
+ * Start the i/o. This start_IO can fail if the channel is really busy.
+ * In that case set up a timer to start the request later.
+ */
+-int
+-dasd_start_IO(struct dasd_ccw_req * cqr)
++int dasd_start_IO(struct dasd_ccw_req *cqr)
+ {
+ struct dasd_device *device;
+ int rc;
+@@ -771,12 +815,12 @@ dasd_start_IO(struct dasd_ccw_req * cqr)
+ rc = dasd_check_cqr(cqr);
+ if (rc)
+ return rc;
+- device = (struct dasd_device *) cqr->device;
++ device = (struct dasd_device *) cqr->startdev;
+ if (cqr->retries < 0) {
+ DEV_MESSAGE(KERN_DEBUG, device,
+ "start_IO: request %p (%02x/%i) - no retry left.",
+ cqr, cqr->status, cqr->retries);
+- cqr->status = DASD_CQR_FAILED;
++ cqr->status = DASD_CQR_ERROR;
+ return -EIO;
}
+ cqr->startclk = get_clock();
+@@ -833,8 +877,7 @@ dasd_start_IO(struct dasd_ccw_req * cqr)
+ * The head of the ccw queue will have status DASD_CQR_IN_IO for 1),
+ * DASD_CQR_QUEUED for 2) and 3).
+ */
+-static void
+-dasd_timeout_device(unsigned long ptr)
++static void dasd_device_timeout(unsigned long ptr)
+ {
+ unsigned long flags;
+ struct dasd_device *device;
+@@ -844,14 +887,13 @@ dasd_timeout_device(unsigned long ptr)
+ /* re-activate request queue */
+ device->stopped &= ~DASD_STOPPED_PENDING;
+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ }
- ring->entry = entry;
-@@ -866,7 +1089,7 @@ static void rt2x00lib_free_ring_entries(struct rt2x00_dev *rt2x00dev)
+ /*
+ * Setup timeout for a device in jiffies.
+ */
+-void
+-dasd_set_timer(struct dasd_device *device, int expires)
++void dasd_device_set_timer(struct dasd_device *device, int expires)
+ {
+ if (expires == 0) {
+ if (timer_pending(&device->timer))
+@@ -862,7 +904,7 @@ dasd_set_timer(struct dasd_device *device, int expires)
+ if (mod_timer(&device->timer, jiffies + expires))
+ return;
}
+- device->timer.function = dasd_timeout_device;
++ device->timer.function = dasd_device_timeout;
+ device->timer.data = (unsigned long) device;
+ device->timer.expires = jiffies + expires;
+ add_timer(&device->timer);
+@@ -871,15 +913,14 @@ dasd_set_timer(struct dasd_device *device, int expires)
+ /*
+ * Clear timeout for a device.
+ */
+-void
+-dasd_clear_timer(struct dasd_device *device)
++void dasd_device_clear_timer(struct dasd_device *device)
+ {
+ if (timer_pending(&device->timer))
+ del_timer(&device->timer);
}
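The set/clear timer pair above follows a common kernel idiom: `expires == 0` cancels a pending timer, a successful `mod_timer()` on an already-pending timer returns early, and otherwise the timer is armed from scratch. A userspace sketch of the same decision logic (mock types and names, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal mock of the timer state used by dasd_device_set_timer(). */
struct mock_timer {
	bool pending;
	unsigned long expires;
};

static unsigned long jiffies = 1000;

/* Mimics mod_timer(): rearms and reports whether it was already pending. */
static bool mock_mod_timer(struct mock_timer *t, unsigned long expires)
{
	bool was_pending = t->pending;
	t->pending = true;
	t->expires = expires;
	return was_pending;
}

/* Same control flow as dasd_device_set_timer() in the hunk above. */
static void mock_set_timer(struct mock_timer *t, int expires)
{
	if (expires == 0) {
		t->pending = false;		/* del_timer() */
		return;
	}
	if (t->pending && mock_mod_timer(t, jiffies + expires))
		return;				/* already pending: just rearmed */
	t->expires = jiffies + expires;
	t->pending = true;			/* add_timer() */
}
```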
--void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
-+static void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
+-static void
+-dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
++static void dasd_handle_killed_request(struct ccw_device *cdev,
++ unsigned long intparm)
{
- if (!__test_and_clear_bit(DEVICE_INITIALIZED, &rt2x00dev->flags))
+ struct dasd_ccw_req *cqr;
+ struct dasd_device *device;
+@@ -893,7 +934,7 @@ dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
return;
-@@ -887,7 +1110,7 @@ void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev)
- rt2x00lib_free_ring_entries(rt2x00dev);
+ }
+
+- device = (struct dasd_device *) cqr->device;
++ device = (struct dasd_device *) cqr->startdev;
+ if (device == NULL ||
+ device != dasd_device_from_cdev_locked(cdev) ||
+ strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
+@@ -905,46 +946,32 @@ dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
+ /* Schedule request to be retried. */
+ cqr->status = DASD_CQR_QUEUED;
+
+- dasd_clear_timer(device);
+- dasd_schedule_bh(device);
++ dasd_device_clear_timer(device);
++ dasd_schedule_device_bh(device);
+ dasd_put_device(device);
}
--int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev)
-+static int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev)
+-static void
+-dasd_handle_state_change_pending(struct dasd_device *device)
++void dasd_generic_handle_state_change(struct dasd_device *device)
{
- int status;
+- struct dasd_ccw_req *cqr;
+- struct list_head *l, *n;
+-
+ /* First of all start sense subsystem status request. */
+ dasd_eer_snss(device);
-@@ -930,12 +1153,65 @@ exit:
- return status;
+ device->stopped &= ~DASD_STOPPED_PENDING;
+-
+- /* restart all 'running' IO on queue */
+- list_for_each_safe(l, n, &device->ccw_queue) {
+- cqr = list_entry(l, struct dasd_ccw_req, list);
+- if (cqr->status == DASD_CQR_IN_IO) {
+- cqr->status = DASD_CQR_QUEUED;
+- }
+- }
+- dasd_clear_timer(device);
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
++ if (device->block)
++ dasd_schedule_block_bh(device->block);
}
-+int rt2x00lib_start(struct rt2x00_dev *rt2x00dev)
-+{
-+ int retval;
-+
-+ if (test_bit(DEVICE_STARTED, &rt2x00dev->flags))
-+ return 0;
-+
-+ /*
-+ * If this is the first interface which is added,
-+ * we should load the firmware now.
-+ */
-+ if (test_bit(DRIVER_REQUIRE_FIRMWARE, &rt2x00dev->flags)) {
-+ retval = rt2x00lib_load_firmware(rt2x00dev);
-+ if (retval)
-+ return retval;
-+ }
-+
-+ /*
-+ * Initialize the device.
-+ */
-+ retval = rt2x00lib_initialize(rt2x00dev);
-+ if (retval)
-+ return retval;
-+
-+ /*
-+ * Enable radio.
-+ */
-+ retval = rt2x00lib_enable_radio(rt2x00dev);
-+ if (retval) {
-+ rt2x00lib_uninitialize(rt2x00dev);
-+ return retval;
-+ }
-+
-+ __set_bit(DEVICE_STARTED, &rt2x00dev->flags);
-+
-+ return 0;
-+}
-+
-+void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev)
-+{
-+ if (!test_bit(DEVICE_STARTED, &rt2x00dev->flags))
-+ return;
-+
-+ /*
-+ * Perhaps we can add something smarter here,
-+ * but for now just disabling the radio should do.
-+ */
-+ rt2x00lib_disable_radio(rt2x00dev);
-+
-+ __clear_bit(DEVICE_STARTED, &rt2x00dev->flags);
-+}
-+
/*
- * driver allocation handlers.
+ * Interrupt handler for "normal" ssch-io based dasd devices.
*/
- static int rt2x00lib_alloc_rings(struct rt2x00_dev *rt2x00dev)
+-void
+-dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+- struct irb *irb)
++void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
++ struct irb *irb)
{
- struct data_ring *ring;
-+ unsigned int index;
-
- /*
- * We need the following rings:
-@@ -963,11 +1239,18 @@ static int rt2x00lib_alloc_rings(struct rt2x00_dev *rt2x00dev)
-
- /*
- * Initialize ring parameters.
-- * cw_min: 2^5 = 32.
-- * cw_max: 2^10 = 1024.
-+ * RX: queue_idx = 0
-+ * TX: queue_idx = IEEE80211_TX_QUEUE_DATA0 + index
-+ * TX: cw_min: 2^5 = 32.
-+ * TX: cw_max: 2^10 = 1024.
- */
-- ring_for_each(rt2x00dev, ring) {
-+ rt2x00dev->rx->rt2x00dev = rt2x00dev;
-+ rt2x00dev->rx->queue_idx = 0;
-+
-+ index = IEEE80211_TX_QUEUE_DATA0;
-+ txring_for_each(rt2x00dev, ring) {
- ring->rt2x00dev = rt2x00dev;
-+ ring->queue_idx = index++;
- ring->tx_params.aifs = 2;
- ring->tx_params.cw_min = 5;
- ring->tx_params.cw_max = 10;
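As the rewritten comment notes, the `cw_min`/`cw_max` fields hold exponents rather than slot counts: the driver expands them as powers of two, so 5 and 10 correspond to contention windows of 32 and 1024 slots. A one-line check of that encoding:

```c
#include <assert.h>

/* cw fields hold exponents; the effective contention window is 2^n slots. */
static unsigned int cw_slots(unsigned int exponent)
{
	return 1u << exponent;
}
```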
-@@ -1008,7 +1291,7 @@ int rt2x00lib_probe_dev(struct rt2x00_dev *rt2x00dev)
- /*
- * Reset current working type.
- */
-- rt2x00dev->interface.type = INVALID_INTERFACE;
-+ rt2x00dev->interface.type = IEEE80211_IF_TYPE_INVALID;
-
- /*
- * Allocate ring array.
-@@ -1112,7 +1395,7 @@ int rt2x00lib_suspend(struct rt2x00_dev *rt2x00dev, pm_message_t state)
- * Disable radio and unitialize all items
- * that must be recreated on resume.
- */
-- rt2x00mac_stop(rt2x00dev->hw);
-+ rt2x00lib_stop(rt2x00dev);
- rt2x00lib_uninitialize(rt2x00dev);
- rt2x00debug_deregister(rt2x00dev);
-
-@@ -1134,7 +1417,6 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
- int retval;
-
- NOTICE(rt2x00dev, "Waking up.\n");
-- __set_bit(DEVICE_PRESENT, &rt2x00dev->flags);
-
- /*
- * Open the debugfs entry.
-@@ -1150,7 +1432,7 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
- /*
- * Reinitialize device and all active interfaces.
- */
-- retval = rt2x00mac_start(rt2x00dev->hw);
-+ retval = rt2x00lib_start(rt2x00dev);
- if (retval)
- goto exit;
+ struct dasd_ccw_req *cqr, *next;
+ struct dasd_device *device;
+ unsigned long long now;
+ int expires;
+- dasd_era_t era;
+- char mask;
-@@ -1166,6 +1448,11 @@ int rt2x00lib_resume(struct rt2x00_dev *rt2x00dev)
- rt2x00lib_config_type(rt2x00dev, intf->type);
+ if (IS_ERR(irb)) {
+ switch (PTR_ERR(irb)) {
+@@ -969,29 +996,25 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ cdev->dev.bus_id, ((irb->scsw.cstat<<8)|irb->scsw.dstat),
+ (unsigned int) intparm);
- /*
-+ * We are ready again to receive requests from mac80211.
-+ */
-+ __set_bit(DEVICE_PRESENT, &rt2x00dev->flags);
-+
-+ /*
- * It is possible that during that mac80211 has attempted
- * to send frames while we were suspending or resuming.
- * In that case we have disabled the TX queue and should
-diff --git a/drivers/net/wireless/rt2x00/rt2x00dump.h b/drivers/net/wireless/rt2x00/rt2x00dump.h
-new file mode 100644
-index 0000000..99f3f36
---- /dev/null
-+++ b/drivers/net/wireless/rt2x00/rt2x00dump.h
-@@ -0,0 +1,121 @@
-+/*
-+ Copyright (C) 2004 - 2007 rt2x00 SourceForge Project
-+ <http://rt2x00.serialmonkey.com>
-+
-+ This program is free software; you can redistribute it and/or modify
-+ it under the terms of the GNU General Public License as published by
-+ the Free Software Foundation; either version 2 of the License, or
-+ (at your option) any later version.
-+
-+ This program is distributed in the hope that it will be useful,
-+ but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ GNU General Public License for more details.
-+
-+ You should have received a copy of the GNU General Public License
-+ along with this program; if not, write to the
-+ Free Software Foundation, Inc.,
-+ 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-+ */
-+
-+/*
-+ Module: rt2x00dump
-+ Abstract: Data structures for the rt2x00debug & userspace.
-+ */
-+
-+#ifndef RT2X00DUMP_H
-+#define RT2X00DUMP_H
-+
-+/**
-+ * DOC: Introduction
-+ *
-+ * This header is intended to be exported to userspace,
-+ * to make the structures and enumerations available to userspace
-+ * applications. This means that all data types should be exportable.
-+ *
-+ * When rt2x00 is compiled with debugfs support enabled,
-+ * it is possible to capture all data coming in and out of the device
-+ * by reading the frame dump file. This file can have only a single reader.
-+ * The following frames will be reported:
-+ * - All incoming frames (rx)
-+ * - All outgoing frames (tx, including beacon and atim)
-+ * - All completed frames (txdone including atim)
-+ *
-+ * The data is send to the file using the following format:
-+ *
-+ * [rt2x00dump header][hardware descriptor][ieee802.11 frame]
-+ *
-+ * rt2x00dump header: The description of the dumped frame, as well as
-+ * additional information usefull for debugging. See &rt2x00dump_hdr.
-+ * hardware descriptor: Descriptor that was used to receive or transmit
-+ * the frame.
-+ * ieee802.11 frame: The actual frame that was received or transmitted.
-+ */
-+
-+/**
-+ * enum rt2x00_dump_type - Frame type
-+ *
-+ * These values are used for the @type member of &rt2x00dump_hdr.
-+ * @DUMP_FRAME_RXDONE: This frame has been received by the hardware.
-+ * @DUMP_FRAME_TX: This frame is queued for transmission to the hardware.
-+ * @DUMP_FRAME_TXDONE: This frame indicates the device has handled
-+ * the tx event which has either succeeded or failed. A frame
-+ * with this type should also have been reported with as a
-+ * %DUMP_FRAME_TX frame.
-+ */
-+enum rt2x00_dump_type {
-+ DUMP_FRAME_RXDONE = 1,
-+ DUMP_FRAME_TX = 2,
-+ DUMP_FRAME_TXDONE = 3,
-+};
-+
-+/**
-+ * struct rt2x00dump_hdr - Dump frame header
-+ *
-+ * Each frame dumped to the debugfs file starts with this header
-+ * attached. This header contains the description of the actual
-+ * frame which was dumped.
-+ *
-+ * New fields inside the structure must be appended to the end of
-+ * the structure. This way userspace tools compiled for earlier
-+ * header versions can still correctly handle the frame dump
-+ * (although they will not handle all data passed to them in the dump).
-+ *
-+ * @version: Header version should always be set to %DUMP_HEADER_VERSION.
-+ * This field must be checked by userspace to determine if it can
-+ * handle this frame.
-+ * @header_length: The length of the &rt2x00dump_hdr structure. This is
-+ * used for compatibility reasons so userspace can easily determine
-+ * the location of the next field in the dump.
-+ * @desc_length: The length of the device descriptor.
-+ * @data_length: The length of the frame data (including the ieee802.11 header.
-+ * @chip_rt: RT chipset
-+ * @chip_rf: RF chipset
-+ * @chip_rev: Chipset revision
-+ * @type: The frame type (&rt2x00_dump_type)
-+ * @ring_index: The index number of the data ring.
-+ * @entry_index: The index number of the entry inside the data ring.
-+ * @timestamp_sec: Timestamp - seconds
-+ * @timestamp_usec: Timestamp - microseconds
-+ */
-+struct rt2x00dump_hdr {
-+ __le32 version;
-+#define DUMP_HEADER_VERSION 2
-+
-+ __le32 header_length;
-+ __le32 desc_length;
-+ __le32 data_length;
-+
-+ __le16 chip_rt;
-+ __le16 chip_rf;
-+ __le32 chip_rev;
-+
-+ __le16 type;
-+ __u8 ring_index;
-+ __u8 entry_index;
-+
-+ __le32 timestamp_sec;
-+ __le32 timestamp_usec;
-+};
-+
-+#endif /* RT2X00DUMP_H */
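Because the header comment requires new fields to be appended only, a userspace reader of the dump file should locate the descriptor and frame data via `header_length` and `desc_length` rather than `sizeof` of its own struct. A hedged sketch of such a parser — the struct mirrors `rt2x00dump_hdr` above, but little-endian conversion is simplified to host order for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirror of struct rt2x00dump_hdr with fixed-width host-order fields
 * (a real reader must convert each field from little-endian). */
struct dump_hdr {
	uint32_t version;
	uint32_t header_length;
	uint32_t desc_length;
	uint32_t data_length;
	uint16_t chip_rt;
	uint16_t chip_rf;
	uint32_t chip_rev;
	uint16_t type;
	uint8_t  ring_index;
	uint8_t  entry_index;
	uint32_t timestamp_sec;
	uint32_t timestamp_usec;
};

#define DUMP_HEADER_VERSION 2

/* Returns a pointer to the ieee802.11 frame inside one dump record,
 * or NULL if the header version is unknown to this reader. */
static const uint8_t *dump_frame_data(const uint8_t *buf,
				      const struct dump_hdr **hdr_out)
{
	const struct dump_hdr *hdr = (const struct dump_hdr *)buf;

	if (hdr->version != DUMP_HEADER_VERSION)
		return NULL;
	*hdr_out = hdr;
	/* header_length skips any appended fields this reader predates */
	return buf + hdr->header_length + hdr->desc_length;
}
```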
-diff --git a/drivers/net/wireless/rt2x00/rt2x00firmware.c b/drivers/net/wireless/rt2x00/rt2x00firmware.c
-index 236025f..0a475e4 100644
---- a/drivers/net/wireless/rt2x00/rt2x00firmware.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00firmware.c
-@@ -23,11 +23,6 @@
- Abstract: rt2x00 firmware loading routines.
- */
+- /* first of all check for state change pending interrupt */
+- mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
+- if ((irb->scsw.dstat & mask) == mask) {
++ /* check for unsolicited interrupts */
++ cqr = (struct dasd_ccw_req *) intparm;
++ if (!cqr || ((irb->scsw.cc == 1) &&
++ (irb->scsw.fctl & SCSW_FCTL_START_FUNC) &&
++ (irb->scsw.stctl & SCSW_STCTL_STATUS_PEND)) ) {
++ if (cqr && cqr->status == DASD_CQR_IN_IO)
++ cqr->status = DASD_CQR_QUEUED;
+ device = dasd_device_from_cdev_locked(cdev);
+ if (!IS_ERR(device)) {
+- dasd_handle_state_change_pending(device);
++ dasd_device_clear_timer(device);
++ device->discipline->handle_unsolicited_interrupt(device,
++ irb);
+ dasd_put_device(device);
+ }
+ return;
+ }
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00lib"
+- cqr = (struct dasd_ccw_req *) intparm;
-
- #include <linux/crc-itu-t.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-diff --git a/drivers/net/wireless/rt2x00/rt2x00lib.h b/drivers/net/wireless/rt2x00/rt2x00lib.h
-index 06d9bc0..1adbd28 100644
---- a/drivers/net/wireless/rt2x00/rt2x00lib.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00lib.h
-@@ -44,8 +44,8 @@ void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev);
- /*
- * Initialization handlers.
- */
--int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev);
--void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev);
-+int rt2x00lib_start(struct rt2x00_dev *rt2x00dev);
-+void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev);
+- /* check for unsolicited interrupts */
+- if (cqr == NULL) {
+- MESSAGE(KERN_DEBUG,
+- "unsolicited interrupt received: bus_id %s",
+- cdev->dev.bus_id);
+- return;
+- }
+-
+- device = (struct dasd_device *) cqr->device;
+- if (device == NULL ||
++ device = (struct dasd_device *) cqr->startdev;
++ if (!device ||
+ strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
+ MESSAGE(KERN_DEBUG, "invalid device in request: bus_id %s",
+ cdev->dev.bus_id);
+@@ -999,12 +1022,12 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ }
- /*
- * Configuration handlers.
-@@ -53,6 +53,8 @@ void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev);
- void rt2x00lib_config_mac_addr(struct rt2x00_dev *rt2x00dev, u8 *mac);
- void rt2x00lib_config_bssid(struct rt2x00_dev *rt2x00dev, u8 *bssid);
- void rt2x00lib_config_type(struct rt2x00_dev *rt2x00dev, const int type);
-+void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
-+ enum antenna rx, enum antenna tx);
- void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
- struct ieee80211_conf *conf, const int force_config);
+ /* Check for clear pending */
+- if (cqr->status == DASD_CQR_CLEAR &&
++ if (cqr->status == DASD_CQR_CLEAR_PENDING &&
+ irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC) {
+- cqr->status = DASD_CQR_QUEUED;
+- dasd_clear_timer(device);
++ cqr->status = DASD_CQR_CLEARED;
++ dasd_device_clear_timer(device);
+ wake_up(&dasd_flush_wq);
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ return;
+ }
-@@ -78,6 +80,7 @@ static inline void rt2x00lib_free_firmware(struct rt2x00_dev *rt2x00dev)
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- void rt2x00debug_register(struct rt2x00_dev *rt2x00dev);
- void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev);
-+void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev, struct sk_buff *skb);
- #else
- static inline void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
- {
-@@ -86,6 +89,11 @@ static inline void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
- static inline void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
- {
+@@ -1017,277 +1040,170 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ }
+ DBF_DEV_EVENT(DBF_DEBUG, device, "Int: CS/DS 0x%04x for cqr %p",
+ ((irb->scsw.cstat << 8) | irb->scsw.dstat), cqr);
+-
+- /* Find out the appropriate era_action. */
+- if (irb->scsw.fctl & SCSW_FCTL_HALT_FUNC)
+- era = dasd_era_fatal;
+- else if (irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END) &&
+- irb->scsw.cstat == 0 &&
+- !irb->esw.esw0.erw.cons)
+- era = dasd_era_none;
+- else if (irb->esw.esw0.erw.cons)
+- era = device->discipline->examine_error(cqr, irb);
+- else
+- era = dasd_era_recover;
+-
+- DBF_DEV_EVENT(DBF_DEBUG, device, "era_code %d", era);
++ next = NULL;
+ expires = 0;
+- if (era == dasd_era_none) {
+- cqr->status = DASD_CQR_DONE;
++ if (irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END) &&
++ irb->scsw.cstat == 0 && !irb->esw.esw0.erw.cons) {
++ /* request was completed successfully */
++ cqr->status = DASD_CQR_SUCCESS;
+ cqr->stopclk = now;
+ /* Start first request on queue if possible -> fast_io. */
+- if (cqr->list.next != &device->ccw_queue) {
+- next = list_entry(cqr->list.next,
+- struct dasd_ccw_req, list);
+- if ((next->status == DASD_CQR_QUEUED) &&
+- (!device->stopped)) {
+- if (device->discipline->start_IO(next) == 0)
+- expires = next->expires;
+- else
+- DEV_MESSAGE(KERN_DEBUG, device, "%s",
+- "Interrupt fastpath "
+- "failed!");
+- }
++ if (cqr->devlist.next != &device->ccw_queue) {
++ next = list_entry(cqr->devlist.next,
++ struct dasd_ccw_req, devlist);
+ }
+- } else { /* error */
+- memcpy(&cqr->irb, irb, sizeof (struct irb));
++ } else { /* error */
++ memcpy(&cqr->irb, irb, sizeof(struct irb));
+ if (device->features & DASD_FEATURE_ERPLOG) {
+- /* dump sense data */
+ dasd_log_sense(cqr, irb);
+ }
+- switch (era) {
+- case dasd_era_fatal:
+- cqr->status = DASD_CQR_FAILED;
+- cqr->stopclk = now;
+- break;
+- case dasd_era_recover:
++ /* If we have no sense data, or we just don't want complex ERP
++ * for this request, but if we have retries left, then just
++ * reset this request and retry it in the fastpath
++ */
++ if (!(cqr->irb.esw.esw0.erw.cons &&
++ test_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags)) &&
++ cqr->retries > 0) {
++ DEV_MESSAGE(KERN_DEBUG, device,
++ "default ERP in fastpath (%i retries left)",
++ cqr->retries);
++ cqr->lpm = LPM_ANYPATH;
++ cqr->status = DASD_CQR_QUEUED;
++ next = cqr;
++ } else
+ cqr->status = DASD_CQR_ERROR;
+- break;
+- default:
+- BUG();
+- }
++ }
++ if (next && (next->status == DASD_CQR_QUEUED) &&
++ (!device->stopped)) {
++ if (device->discipline->start_IO(next) == 0)
++ expires = next->expires;
++ else
++ DEV_MESSAGE(KERN_DEBUG, device, "%s",
++ "Interrupt fastpath "
++ "failed!");
+ }
+ if (expires != 0)
+- dasd_set_timer(device, expires);
++ dasd_device_set_timer(device, expires);
+ else
+- dasd_clear_timer(device);
+- dasd_schedule_bh(device);
++ dasd_device_clear_timer(device);
++ dasd_schedule_device_bh(device);
}
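The reworked error path above avoids full ERP when possible: if there is no sense data demanding complex recovery (or the request did not ask for it via `DASD_CQR_FLAGS_USE_ERP`) and retries remain, the request is simply requeued and retried on the fastpath; otherwise it goes to the ERP layer as `DASD_CQR_ERROR`. That decision can be expressed as a pure predicate (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

enum cqr_status { CQR_QUEUED, CQR_ERROR };

/* Mirrors the fastpath-retry condition in dasd_int_handler() above:
 * retry unless (sense data present AND complex ERP requested),
 * and only while retries remain. */
static enum cqr_status classify_failed_cqr(bool has_sense, bool wants_erp,
					   int retries)
{
	if (!(has_sense && wants_erp) && retries > 0)
		return CQR_QUEUED;	/* default ERP in fastpath */
	return CQR_ERROR;		/* hand over to the ERP layer */
}
```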
-+
-+static inline void rt2x00debug_dump_frame(struct rt2x00_dev *rt2x00dev,
-+ struct sk_buff *skb)
-+{
-+}
- #endif /* CONFIG_RT2X00_LIB_DEBUGFS */
/*
-diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c
-index 85ea8a8..e3f15e5 100644
---- a/drivers/net/wireless/rt2x00/rt2x00mac.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00mac.c
-@@ -23,11 +23,6 @@
- Abstract: rt2x00 generic mac80211 routines.
+- * posts the buffer_cache about a finalized request
++ * If we have an error on a dasd_block layer request then we cancel
++ * and return all further requests from the same dasd_block as well.
*/
+-static inline void
+-dasd_end_request(struct request *req, int uptodate)
++static void __dasd_device_recovery(struct dasd_device *device,
++ struct dasd_ccw_req *ref_cqr)
+ {
+- if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
+- BUG();
+- add_disk_randomness(req->rq_disk);
+- end_that_request_last(req, uptodate);
+-}
++ struct list_head *l, *n;
++ struct dasd_ccw_req *cqr;
-/*
-- * Set enviroment defines for rt2x00.h
+- * Process finished error recovery ccw.
- */
--#define DRV_NAME "rt2x00lib"
--
- #include <linux/kernel.h>
- #include <linux/module.h>
-
-@@ -89,7 +84,7 @@ int rt2x00mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
- */
- if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags)) {
- ieee80211_stop_queues(hw);
-- return 0;
-+ return NETDEV_TX_OK;
- }
-
- /*
-@@ -115,15 +110,24 @@ int rt2x00mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
- if (!is_rts_frame(frame_control) && !is_cts_frame(frame_control) &&
- (control->flags & (IEEE80211_TXCTL_USE_RTS_CTS |
- IEEE80211_TXCTL_USE_CTS_PROTECT))) {
-- if (rt2x00_ring_free(ring) <= 1)
-+ if (rt2x00_ring_free(ring) <= 1) {
-+ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
- return NETDEV_TX_BUSY;
-+ }
+-static inline void
+-__dasd_process_erp(struct dasd_device *device, struct dasd_ccw_req *cqr)
+-{
+- dasd_erp_fn_t erp_fn;
++ /*
++ * only requeue request that came from the dasd_block layer
++ */
++ if (!ref_cqr->block)
++ return;
-- if (rt2x00mac_tx_rts_cts(rt2x00dev, ring, skb, control))
-+ if (rt2x00mac_tx_rts_cts(rt2x00dev, ring, skb, control)) {
-+ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
- return NETDEV_TX_BUSY;
+- if (cqr->status == DASD_CQR_DONE)
+- DBF_DEV_EVENT(DBF_NOTICE, device, "%s", "ERP successful");
+- else
+- DEV_MESSAGE(KERN_ERR, device, "%s", "ERP unsuccessful");
+- erp_fn = device->discipline->erp_postaction(cqr);
+- erp_fn(cqr);
+-}
++ list_for_each_safe(l, n, &device->ccw_queue) {
++ cqr = list_entry(l, struct dasd_ccw_req, devlist);
++ if (cqr->status == DASD_CQR_QUEUED &&
++ ref_cqr->block == cqr->block) {
++ cqr->status = DASD_CQR_CLEARED;
+ }
- }
-
-- if (rt2x00dev->ops->lib->write_tx_data(rt2x00dev, ring, skb, control))
-+ if (rt2x00dev->ops->lib->write_tx_data(rt2x00dev, ring, skb, control)) {
-+ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
- return NETDEV_TX_BUSY;
+ }
-+
-+ if (rt2x00_ring_full(ring))
-+ ieee80211_stop_queue(rt2x00dev->hw, control->queue);
++};
- if (rt2x00dev->ops->lib->kick_tx_queue)
- rt2x00dev->ops->lib->kick_tx_queue(rt2x00dev, control->queue);
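The queue handling dropped here followed one rule: whenever a frame cannot be written, or the write leaves the ring full, stop the corresponding mac80211 TX queue before returning `NETDEV_TX_BUSY`, so mac80211 stops feeding frames until the driver wakes the queue from its TX-done path. The backpressure logic can be modeled with a mock ring (the types and the two-slot RTS check are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define TX_OK   0
#define TX_BUSY 1

struct mock_ring { int size; int used; };

static int ring_free(const struct mock_ring *r) { return r->size - r->used; }
static bool ring_full(const struct mock_ring *r) { return r->used >= r->size; }

/* Mirrors the reworked rt2x00mac_tx() flow: stop the queue whenever we
 * return BUSY, and also when this write fills the ring. */
static int mock_tx(struct mock_ring *r, bool needs_rts, bool *queue_stopped)
{
	if (needs_rts) {
		if (ring_free(r) <= 1) {	/* need room for RTS + data */
			*queue_stopped = true;
			return TX_BUSY;
		}
		r->used++;			/* queue the RTS/CTS frame */
	}
	r->used++;				/* queue the data frame */
	if (ring_full(r))
		*queue_stopped = true;
	return TX_OK;
}
```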
-@@ -135,41 +139,11 @@ EXPORT_SYMBOL_GPL(rt2x00mac_tx);
- int rt2x00mac_start(struct ieee80211_hw *hw)
+ /*
+- * Process ccw request queue.
++ * Remove those ccw requests from the queue that need to be returned
++ * to the upper layer.
+ */
+-static void
+-__dasd_process_ccw_queue(struct dasd_device * device,
+- struct list_head *final_queue)
++static void __dasd_device_process_ccw_queue(struct dasd_device *device,
++ struct list_head *final_queue)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- int status;
-
-- if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags) ||
-- test_bit(DEVICE_STARTED, &rt2x00dev->flags))
-+ if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags))
- return 0;
+ struct list_head *l, *n;
+ struct dasd_ccw_req *cqr;
+- dasd_erp_fn_t erp_fn;
-- /*
-- * If this is the first interface which is added,
-- * we should load the firmware now.
-- */
-- if (test_bit(DRIVER_REQUIRE_FIRMWARE, &rt2x00dev->flags)) {
-- status = rt2x00lib_load_firmware(rt2x00dev);
-- if (status)
-- return status;
-- }
+-restart:
+ /* Process request with final status. */
+ list_for_each_safe(l, n, &device->ccw_queue) {
+- cqr = list_entry(l, struct dasd_ccw_req, list);
++ cqr = list_entry(l, struct dasd_ccw_req, devlist);
++
+ /* Stop list processing at the first non-final request. */
+- if (cqr->status != DASD_CQR_DONE &&
+- cqr->status != DASD_CQR_FAILED &&
+- cqr->status != DASD_CQR_ERROR)
++ if (cqr->status == DASD_CQR_QUEUED ||
++ cqr->status == DASD_CQR_IN_IO ||
++ cqr->status == DASD_CQR_CLEAR_PENDING)
+ break;
+- /* Process requests with DASD_CQR_ERROR */
+ if (cqr->status == DASD_CQR_ERROR) {
+- if (cqr->irb.scsw.fctl & SCSW_FCTL_HALT_FUNC) {
+- cqr->status = DASD_CQR_FAILED;
+- cqr->stopclk = get_clock();
+- } else {
+- if (cqr->irb.esw.esw0.erw.cons &&
+- test_bit(DASD_CQR_FLAGS_USE_ERP,
+- &cqr->flags)) {
+- erp_fn = device->discipline->
+- erp_action(cqr);
+- erp_fn(cqr);
+- } else
+- dasd_default_erp_action(cqr);
+- }
+- goto restart;
+- }
-
-- /*
-- * Initialize the device.
-- */
-- status = rt2x00lib_initialize(rt2x00dev);
-- if (status)
-- return status;
+- /* First of all call extended error reporting. */
+- if (dasd_eer_enabled(device) &&
+- cqr->status == DASD_CQR_FAILED) {
+- dasd_eer_write(device, cqr, DASD_EER_FATALERROR);
-
-- /*
-- * Enable radio.
-- */
-- status = rt2x00lib_enable_radio(rt2x00dev);
-- if (status) {
-- rt2x00lib_uninitialize(rt2x00dev);
-- return status;
-- }
+- /* restart request */
+- cqr->status = DASD_CQR_QUEUED;
+- cqr->retries = 255;
+- device->stopped |= DASD_STOPPED_QUIESCE;
+- goto restart;
++ __dasd_device_recovery(device, cqr);
+ }
-
-- __set_bit(DEVICE_STARTED, &rt2x00dev->flags);
+- /* Process finished ERP request. */
+- if (cqr->refers) {
+- __dasd_process_erp(device, cqr);
+- goto restart;
+- }
-
-- return 0;
-+ return rt2x00lib_start(rt2x00dev);
+ /* Rechain finished requests to final queue */
+- cqr->endclk = get_clock();
+- list_move_tail(&cqr->list, final_queue);
++ list_move_tail(&cqr->devlist, final_queue);
+ }
}
- EXPORT_SYMBOL_GPL(rt2x00mac_start);
-
-@@ -180,13 +154,7 @@ void rt2x00mac_stop(struct ieee80211_hw *hw)
- if (!test_bit(DEVICE_PRESENT, &rt2x00dev->flags))
- return;
+-static void
+-dasd_end_request_cb(struct dasd_ccw_req * cqr, void *data)
+-{
+- struct request *req;
+- struct dasd_device *device;
+- int status;
+-
+- req = (struct request *) data;
+- device = cqr->device;
+- dasd_profile_end(device, cqr, req);
+- status = cqr->device->discipline->free_cp(cqr,req);
+- spin_lock_irq(&device->request_queue_lock);
+- dasd_end_request(req, status);
+- spin_unlock_irq(&device->request_queue_lock);
+-}
+-
+-
+ /*
+- * Fetch requests from the block device queue.
++ * the cqrs from the final queue are returned to the upper layer
++ * by setting a dasd_block state and calling the callback function
+ */
+-static void
+-__dasd_process_blk_queue(struct dasd_device * device)
++static void __dasd_device_process_final_queue(struct dasd_device *device,
++ struct list_head *final_queue)
+ {
+- struct request_queue *queue;
+- struct request *req;
++ struct list_head *l, *n;
+ struct dasd_ccw_req *cqr;
+- int nr_queued;
+-
+- queue = device->request_queue;
+- /* No queue ? Then there is nothing to do. */
+- if (queue == NULL)
+- return;
+-
- /*
-- * Perhaps we can add something smarter here,
-- * but for now just disabling the radio should do.
+- * We requeue request from the block device queue to the ccw
+- * queue only in two states. In state DASD_STATE_READY the
+- * partition detection is done and we need to requeue requests
+- * for that. State DASD_STATE_ONLINE is normal block device
+- * operation.
- */
-- rt2x00lib_disable_radio(rt2x00dev);
--
-- __clear_bit(DEVICE_STARTED, &rt2x00dev->flags);
-+ rt2x00lib_stop(rt2x00dev);
- }
- EXPORT_SYMBOL_GPL(rt2x00mac_stop);
-
-@@ -213,7 +181,7 @@ int rt2x00mac_add_interface(struct ieee80211_hw *hw,
- is_interface_present(intf))
- return -ENOBUFS;
-
-- intf->id = conf->if_id;
-+ intf->id = conf->vif;
- intf->type = conf->type;
- if (conf->type == IEEE80211_IF_TYPE_AP)
- memcpy(&intf->bssid, conf->mac_addr, ETH_ALEN);
-@@ -247,7 +215,7 @@ void rt2x00mac_remove_interface(struct ieee80211_hw *hw,
- return;
-
- intf->id = 0;
-- intf->type = INVALID_INTERFACE;
-+ intf->type = IEEE80211_IF_TYPE_INVALID;
- memset(&intf->bssid, 0x00, ETH_ALEN);
- memset(&intf->mac, 0x00, ETH_ALEN);
+- if (device->state != DASD_STATE_READY &&
+- device->state != DASD_STATE_ONLINE)
+- return;
+- nr_queued = 0;
+- /* Now we try to fetch requests from the request queue */
+- list_for_each_entry(cqr, &device->ccw_queue, list)
+- if (cqr->status == DASD_CQR_QUEUED)
+- nr_queued++;
+- while (!blk_queue_plugged(queue) &&
+- elv_next_request(queue) &&
+- nr_queued < DASD_CHANQ_MAX_SIZE) {
+- req = elv_next_request(queue);
-@@ -297,7 +265,8 @@ int rt2x00mac_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
+- if (device->features & DASD_FEATURE_READONLY &&
+- rq_data_dir(req) == WRITE) {
+- DBF_DEV_EVENT(DBF_ERR, device,
+- "Rejecting write request %p",
+- req);
+- blkdev_dequeue_request(req);
+- dasd_end_request(req, 0);
+- continue;
+- }
+- if (device->stopped & DASD_STOPPED_DC_EIO) {
+- blkdev_dequeue_request(req);
+- dasd_end_request(req, 0);
+- continue;
+- }
+- cqr = device->discipline->build_cp(device, req);
+- if (IS_ERR(cqr)) {
+- if (PTR_ERR(cqr) == -ENOMEM)
+- break; /* terminate request queue loop */
+- if (PTR_ERR(cqr) == -EAGAIN) {
+- /*
+- * The current request cannot be build right
+- * now, we have to try later. If this request
+- * is the head-of-queue we stop the device
+- * for 1/2 second.
+- */
+- if (!list_empty(&device->ccw_queue))
+- break;
+- device->stopped |= DASD_STOPPED_PENDING;
+- dasd_set_timer(device, HZ/2);
+- break;
+- }
+- DBF_DEV_EVENT(DBF_ERR, device,
+- "CCW creation failed (rc=%ld) "
+- "on request %p",
+- PTR_ERR(cqr), req);
+- blkdev_dequeue_request(req);
+- dasd_end_request(req, 0);
+- continue;
++ list_for_each_safe(l, n, final_queue) {
++ cqr = list_entry(l, struct dasd_ccw_req, devlist);
++ list_del_init(&cqr->devlist);
++ if (cqr->block)
++ spin_lock_bh(&cqr->block->queue_lock);
++ switch (cqr->status) {
++ case DASD_CQR_SUCCESS:
++ cqr->status = DASD_CQR_DONE;
++ break;
++ case DASD_CQR_ERROR:
++ cqr->status = DASD_CQR_NEED_ERP;
++ break;
++ case DASD_CQR_CLEARED:
++ cqr->status = DASD_CQR_TERMINATED;
++ break;
++ default:
++ DEV_MESSAGE(KERN_ERR, device,
++ "wrong cqr status in __dasd_process_final_queue "
++ "for cqr %p, status %x",
++ cqr, cqr->status);
++ BUG();
+ }
+- cqr->callback = dasd_end_request_cb;
+- cqr->callback_data = (void *) req;
+- cqr->status = DASD_CQR_QUEUED;
+- blkdev_dequeue_request(req);
+- list_add_tail(&cqr->list, &device->ccw_queue);
+- dasd_profile_start(device, cqr, req);
+- nr_queued++;
++ if (cqr->block)
++ spin_unlock_bh(&cqr->block->queue_lock);
++ if (cqr->callback != NULL)
++ (cqr->callback)(cqr, cqr->callback_data);
+ }
}
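In the new device/block split, the device-level tasklet translates raw completion states into the states the dasd_block layer understands before invoking the callback: SUCCESS becomes DONE, ERROR becomes NEED_ERP, CLEARED becomes TERMINATED, and anything else is a bug. A minimal sketch of that mapping (enum values are illustrative stand-ins for the `DASD_CQR_*` constants):

```c
#include <assert.h>

/* Illustrative stand-ins for the DASD_CQR_* states in the hunk above. */
enum cqr_state {
	CQR_SUCCESS, CQR_ERROR, CQR_CLEARED,
	CQR_DONE, CQR_NEED_ERP, CQR_TERMINATED, CQR_BUG
};

/* Same mapping as the switch in __dasd_device_process_final_queue(). */
static enum cqr_state finalize_status(enum cqr_state s)
{
	switch (s) {
	case CQR_SUCCESS: return CQR_DONE;
	case CQR_ERROR:   return CQR_NEED_ERP;
	case CQR_CLEARED: return CQR_TERMINATED;
	default:          return CQR_BUG;	/* the kernel would BUG() */
	}
}
```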
- EXPORT_SYMBOL_GPL(rt2x00mac_config);
--int rt2x00mac_config_interface(struct ieee80211_hw *hw, int if_id,
-+int rt2x00mac_config_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
- struct ieee80211_if_conf *conf)
++
++
+ /*
+ * Take a look at the first request on the ccw queue and check
+ * if it reached its expire time. If so, terminate the IO.
+ */
+-static void
+-__dasd_check_expire(struct dasd_device * device)
++static void __dasd_device_check_expire(struct dasd_device *device)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-@@ -373,23 +342,27 @@ int rt2x00mac_get_tx_stats(struct ieee80211_hw *hw,
- }
- EXPORT_SYMBOL_GPL(rt2x00mac_get_tx_stats);
+ struct dasd_ccw_req *cqr;
--void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
-- int cts_protection, int preamble)
-+void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_bss_conf *bss_conf,
-+ u32 changes)
+ if (list_empty(&device->ccw_queue))
+ return;
+- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
++ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
+ if ((cqr->status == DASD_CQR_IN_IO && cqr->expires != 0) &&
+ (time_after_eq(jiffies, cqr->expires + cqr->starttime))) {
+ if (device->discipline->term_IO(cqr) != 0) {
+ /* Hmpf, try again in 5 sec */
+- dasd_set_timer(device, 5*HZ);
+ DEV_MESSAGE(KERN_ERR, device,
+ "internal error - timeout (%is) expired "
+ "for cqr %p, termination failed, "
+ "retrying in 5s",
+ (cqr->expires/HZ), cqr);
++ cqr->expires += 5*HZ;
++ dasd_device_set_timer(device, 5*HZ);
+ } else {
+ DEV_MESSAGE(KERN_ERR, device,
+ "internal error - timeout (%is) expired "
+@@ -1301,77 +1217,53 @@ __dasd_check_expire(struct dasd_device * device)
+ * Take a look at the first request on the ccw queue and check
+ * if it needs to be started.
+ */
+-static void
+-__dasd_start_head(struct dasd_device * device)
++static void __dasd_device_start_head(struct dasd_device *device)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
- int short_preamble;
- int ack_timeout;
- int ack_consume_time;
- int difs;
-+ int preamble;
+ struct dasd_ccw_req *cqr;
+ int rc;
- /*
- * We only support changing preamble mode.
- */
-- if (!(changes & IEEE80211_ERP_CHANGE_PREAMBLE))
-+ if (!(changes & BSS_CHANGED_ERP_PREAMBLE))
+ if (list_empty(&device->ccw_queue))
return;
+- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
++ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
+ if (cqr->status != DASD_CQR_QUEUED)
+ return;
+- /* Non-temporary stop condition will trigger fail fast */
+- if (device->stopped & ~DASD_STOPPED_PENDING &&
+- test_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags) &&
+- (!dasd_eer_enabled(device))) {
+- cqr->status = DASD_CQR_FAILED;
+- dasd_schedule_bh(device);
++ /* when device is stopped, return request to previous layer */
++ if (device->stopped) {
++ cqr->status = DASD_CQR_CLEARED;
++ dasd_schedule_device_bh(device);
+ return;
+ }
+- /* Don't try to start requests if device is stopped */
+- if (device->stopped)
+- return;
-- short_preamble = !preamble;
-- preamble = !!(preamble) ? PREAMBLE : SHORT_PREAMBLE;
-+ short_preamble = bss_conf->use_short_preamble;
-+ preamble = bss_conf->use_short_preamble ?
-+ SHORT_PREAMBLE : PREAMBLE;
-
- difs = (hw->conf.flags & IEEE80211_CONF_SHORT_SLOT_TIME) ?
- SHORT_DIFS : DIFS;
-@@ -405,7 +378,7 @@ void rt2x00mac_erp_ie_changed(struct ieee80211_hw *hw, u8 changes,
- rt2x00dev->ops->lib->config_preamble(rt2x00dev, short_preamble,
- ack_timeout, ack_consume_time);
+ rc = device->discipline->start_IO(cqr);
+ if (rc == 0)
+- dasd_set_timer(device, cqr->expires);
++ dasd_device_set_timer(device, cqr->expires);
+ else if (rc == -EACCES) {
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ } else
+ /* Hmpf, try again in 1/2 sec */
+- dasd_set_timer(device, 50);
+-}
+-
+-static inline int
+-_wait_for_clear(struct dasd_ccw_req *cqr)
+-{
+- return (cqr->status == DASD_CQR_QUEUED);
++ dasd_device_set_timer(device, 50);
}
--EXPORT_SYMBOL_GPL(rt2x00mac_erp_ie_changed);
-+EXPORT_SYMBOL_GPL(rt2x00mac_bss_info_changed);
- int rt2x00mac_conf_tx(struct ieee80211_hw *hw, int queue,
- const struct ieee80211_tx_queue_params *params)
-diff --git a/drivers/net/wireless/rt2x00/rt2x00pci.c b/drivers/net/wireless/rt2x00/rt2x00pci.c
-index 04663eb..804a998 100644
---- a/drivers/net/wireless/rt2x00/rt2x00pci.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00pci.c
-@@ -23,11 +23,6 @@
- Abstract: rt2x00 generic pci device routines.
+ /*
+- * Remove all requests from the ccw queue (all = '1') or only block device
+- * requests in case all = '0'.
+- * Take care of the erp-chain (chained via cqr->refers) and remove either
+- * the whole erp-chain or none of the erp-requests.
+- * If a request is currently running, term_IO is called and the request
+- * is re-queued. Prior to removing the terminated request we need to wait
+- * for the clear-interrupt.
+- * In case termination is not possible we stop processing and just finishing
+- * the already moved requests.
++ * Go through all requests on the dasd_device request queue,
++ * terminate them on the cdev if necessary, and return them to the
++ * submitting layer via callback.
++ * Note:
++ * Make sure that all 'submitting layers' still exist when
++ * this function is called! In other words, when 'device' is a base
++ * device then all block layer requests must have been removed before
++ * via dasd_flush_block_queue.
*/
-
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00pci"
--
- #include <linux/dma-mapping.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-@@ -43,9 +38,9 @@ int rt2x00pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- struct ieee80211_tx_control *control)
- {
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct data_ring *ring =
-- rt2x00lib_get_ring(rt2x00dev, IEEE80211_TX_QUEUE_BEACON);
-- struct data_entry *entry = rt2x00_get_data_entry(ring);
-+ struct skb_desc *desc;
-+ struct data_ring *ring;
-+ struct data_entry *entry;
-
- /*
- * Just in case mac80211 doesn't set this correctly,
-@@ -53,14 +48,22 @@ int rt2x00pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- * initialization.
- */
- control->queue = IEEE80211_TX_QUEUE_BEACON;
-+ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
-+ entry = rt2x00_get_data_entry(ring);
-
- /*
-- * Update the beacon entry.
-+ * Fill in skb descriptor
- */
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len;
-+ desc->desc = entry->priv;
-+ desc->data = skb->data;
-+ desc->ring = ring;
-+ desc->entry = entry;
-+
- memcpy(entry->data_addr, skb->data, skb->len);
-- rt2x00lib_write_tx_desc(rt2x00dev, entry->priv,
-- (struct ieee80211_hdr *)skb->data,
-- skb->len, control);
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
-
- /*
- * Enable beacon generation.
-@@ -78,15 +81,13 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
- struct data_ring *ring, struct sk_buff *skb,
- struct ieee80211_tx_control *control)
+-static int
+-dasd_flush_ccw_queue(struct dasd_device * device, int all)
++int dasd_flush_device_queue(struct dasd_device *device)
{
-- struct ieee80211_hdr *ieee80211hdr = (struct ieee80211_hdr *)skb->data;
- struct data_entry *entry = rt2x00_get_data_entry(ring);
-- struct data_desc *txd = entry->priv;
-+ __le32 *txd = entry->priv;
-+ struct skb_desc *desc;
- u32 word;
-
-- if (rt2x00_ring_full(ring)) {
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
-+ if (rt2x00_ring_full(ring))
- return -EINVAL;
-- }
-
- rt2x00_desc_read(txd, 0, &word);
+- struct dasd_ccw_req *cqr, *orig, *n;
+- int rc, i;
+-
++ struct dasd_ccw_req *cqr, *n;
++ int rc;
+ struct list_head flush_queue;
-@@ -96,37 +97,42 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
- "Arrived at non-free entry in the non-full queue %d.\n"
- "Please file bug report to %s.\n",
- control->queue, DRV_PROJECT);
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
- return -EINVAL;
+ INIT_LIST_HEAD(&flush_queue);
+ spin_lock_irq(get_ccwdev_lock(device->cdev));
+ rc = 0;
+-restart:
+- list_for_each_entry_safe(cqr, n, &device->ccw_queue, list) {
+- /* get original request of erp request-chain */
+- for (orig = cqr; orig->refers != NULL; orig = orig->refers);
+-
+- /* Flush all request or only block device requests? */
+- if (all == 0 && cqr->callback != dasd_end_request_cb &&
+- orig->callback != dasd_end_request_cb) {
+- continue;
+- }
++ list_for_each_entry_safe(cqr, n, &device->ccw_queue, devlist) {
+ /* Check status and move request to flush_queue */
+ switch (cqr->status) {
+ case DASD_CQR_IN_IO:
+@@ -1387,90 +1279,60 @@ restart:
+ }
+ break;
+ case DASD_CQR_QUEUED:
+- case DASD_CQR_ERROR:
+- /* set request to FAILED */
+ cqr->stopclk = get_clock();
+- cqr->status = DASD_CQR_FAILED;
++ cqr->status = DASD_CQR_CLEARED;
+ break;
+- default: /* do not touch the others */
++ default: /* no need to modify the others */
+ break;
+ }
+- /* Rechain request (including erp chain) */
+- for (i = 0; cqr != NULL; cqr = cqr->refers, i++) {
+- cqr->endclk = get_clock();
+- list_move_tail(&cqr->list, &flush_queue);
+- }
+- if (i > 1)
+- /* moved more than one request - need to restart */
+- goto restart;
++ list_move_tail(&cqr->devlist, &flush_queue);
}
-
-- entry->skb = skb;
-- memcpy(&entry->tx_status.control, control, sizeof(*control));
+-
+ finished:
+ spin_unlock_irq(get_ccwdev_lock(device->cdev));
+- /* Now call the callback function of flushed requests */
+-restart_cb:
+- list_for_each_entry_safe(cqr, n, &flush_queue, list) {
+- if (cqr->status == DASD_CQR_CLEAR) {
+- /* wait for clear interrupt! */
+- wait_event(dasd_flush_wq, _wait_for_clear(cqr));
+- cqr->status = DASD_CQR_FAILED;
+- }
+- /* Process finished ERP request. */
+- if (cqr->refers) {
+- __dasd_process_erp(device, cqr);
+- /* restart list_for_xx loop since dasd_process_erp
+- * might remove multiple elements */
+- goto restart_cb;
+- }
+- /* call the callback function */
+- cqr->endclk = get_clock();
+- if (cqr->callback != NULL)
+- (cqr->callback)(cqr, cqr->callback_data);
+- }
+ /*
-+ * Fill in skb descriptor
++ * After this point all requests must be in state CLEAR_PENDING,
++ * CLEARED, SUCCESS or ERROR. Now wait for CLEAR_PENDING to become
++ * one of the others.
+ */
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len;
-+ desc->desc = entry->priv;
-+ desc->data = skb->data;
-+ desc->ring = ring;
-+ desc->entry = entry;
-+
- memcpy(entry->data_addr, skb->data, skb->len);
-- rt2x00lib_write_tx_desc(rt2x00dev, txd, ieee80211hdr,
-- skb->len, control);
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
-
- rt2x00_ring_index_inc(ring);
-
-- if (rt2x00_ring_full(ring))
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
--
- return 0;
++ list_for_each_entry_safe(cqr, n, &flush_queue, devlist)
++ wait_event(dasd_flush_wq,
++ (cqr->status != DASD_CQR_CLEAR_PENDING));
++ /*
++ * Now set each request back to TERMINATED, DONE or NEED_ERP
++ * and call the callback function of flushed requests
++ */
++ __dasd_device_process_final_queue(device, &flush_queue);
+ return rc;
}
- EXPORT_SYMBOL_GPL(rt2x00pci_write_tx_data);
/*
-- * RX data handlers.
-+ * TX/RX data handlers.
+ * Acquire the device lock and process queues for the device.
*/
- void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+-static void
+-dasd_tasklet(struct dasd_device * device)
++static void dasd_device_tasklet(struct dasd_device *device)
{
- struct data_ring *ring = rt2x00dev->rx;
- struct data_entry *entry;
-- struct data_desc *rxd;
- struct sk_buff *skb;
- struct ieee80211_hdr *hdr;
-+ struct skb_desc *skbdesc;
- struct rxdata_entry_desc desc;
- int header_size;
-+ __le32 *rxd;
- int align;
- u32 word;
-
-@@ -138,7 +144,7 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
- if (rt2x00_get_field32(word, RXD_ENTRY_OWNER_NIC))
- break;
-
-- memset(&desc, 0x00, sizeof(desc));
-+ memset(&desc, 0, sizeof(desc));
- rt2x00dev->ops->lib->fill_rxdone(entry, &desc);
-
- hdr = (struct ieee80211_hdr *)entry->data_addr;
-@@ -163,6 +169,17 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
- memcpy(skb_put(skb, desc.size), entry->data_addr, desc.size);
+ struct list_head final_queue;
+- struct list_head *l, *n;
+- struct dasd_ccw_req *cqr;
- /*
-+ * Fill in skb descriptor
-+ */
-+ skbdesc = get_skb_desc(skb);
-+ skbdesc->desc_len = entry->ring->desc_size;
-+ skbdesc->data_len = skb->len;
-+ skbdesc->desc = entry->priv;
-+ skbdesc->data = skb->data;
-+ skbdesc->ring = ring;
-+ skbdesc->entry = entry;
-+
-+ /*
- * Send the frame to rt2x00lib for further processing.
- */
- rt2x00lib_rxdone(entry, skb, &desc);
-@@ -177,6 +194,37 @@ void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev)
+ atomic_set (&device->tasklet_scheduled, 0);
+ INIT_LIST_HEAD(&final_queue);
+ spin_lock_irq(get_ccwdev_lock(device->cdev));
+ /* Check expire time of first request on the ccw queue. */
+- __dasd_check_expire(device);
+- /* Finish off requests on ccw queue */
+- __dasd_process_ccw_queue(device, &final_queue);
++ __dasd_device_check_expire(device);
++ /* find final requests on ccw queue */
++ __dasd_device_process_ccw_queue(device, &final_queue);
+ spin_unlock_irq(get_ccwdev_lock(device->cdev));
+ /* Now call the callback function of requests with final status */
+- list_for_each_safe(l, n, &final_queue) {
+- cqr = list_entry(l, struct dasd_ccw_req, list);
+- list_del_init(&cqr->list);
+- if (cqr->callback != NULL)
+- (cqr->callback)(cqr, cqr->callback_data);
+- }
+- spin_lock_irq(&device->request_queue_lock);
+- spin_lock(get_ccwdev_lock(device->cdev));
+- /* Get new request from the block device request queue */
+- __dasd_process_blk_queue(device);
++ __dasd_device_process_final_queue(device, &final_queue);
++ spin_lock_irq(get_ccwdev_lock(device->cdev));
+ /* Now check if the head of the ccw queue needs to be started. */
+- __dasd_start_head(device);
+- spin_unlock(get_ccwdev_lock(device->cdev));
+- spin_unlock_irq(&device->request_queue_lock);
++ __dasd_device_start_head(device);
++ spin_unlock_irq(get_ccwdev_lock(device->cdev));
+ dasd_put_device(device);
}
- EXPORT_SYMBOL_GPL(rt2x00pci_rxdone);
-+void rt2x00pci_txdone(struct rt2x00_dev *rt2x00dev, struct data_entry *entry,
-+ const int tx_status, const int retry)
-+{
-+ u32 word;
-+
-+ rt2x00lib_txdone(entry, tx_status, retry);
-+
-+ /*
-+ * Make this entry available for reuse.
-+ */
-+ entry->flags = 0;
-+
-+ rt2x00_desc_read(entry->priv, 0, &word);
-+ rt2x00_set_field32(&word, TXD_ENTRY_OWNER_NIC, 0);
-+ rt2x00_set_field32(&word, TXD_ENTRY_VALID, 0);
-+ rt2x00_desc_write(entry->priv, 0, word);
-+
-+ rt2x00_ring_index_done_inc(entry->ring);
-+
-+ /*
-+ * If the data ring was full before the txdone handler
-+ * we must make sure the packet queue in the mac80211 stack
-+ * is reenabled when the txdone handler has finished.
-+ */
-+ if (!rt2x00_ring_full(entry->ring))
-+ ieee80211_wake_queue(rt2x00dev->hw,
-+ entry->tx_status.control.queue);
-+
-+}
-+EXPORT_SYMBOL_GPL(rt2x00pci_txdone);
-+
/*
- * Device initialization handlers.
+ * Schedules a call to dasd_tasklet over the device tasklet.
*/
-diff --git a/drivers/net/wireless/rt2x00/rt2x00pci.h b/drivers/net/wireless/rt2x00/rt2x00pci.h
-index 82adeac..2d1eb81 100644
---- a/drivers/net/wireless/rt2x00/rt2x00pci.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00pci.h
-@@ -57,7 +57,7 @@
+-void
+-dasd_schedule_bh(struct dasd_device * device)
++void dasd_schedule_device_bh(struct dasd_device *device)
+ {
+ /* Protect against rescheduling. */
+ if (atomic_cmpxchg (&device->tasklet_scheduled, 0, 1) != 0)
+@@ -1480,160 +1342,109 @@ dasd_schedule_bh(struct dasd_device * device)
+ }
+
/*
- * Register access.
+- * Queue a request to the head of the ccw_queue. Start the I/O if
+- * possible.
++ * Queue a request to the head of the device ccw_queue.
++ * Start the I/O if possible.
*/
--static inline void rt2x00pci_register_read(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00pci_register_read(struct rt2x00_dev *rt2x00dev,
- const unsigned long offset,
- u32 *value)
+-void
+-dasd_add_request_head(struct dasd_ccw_req *req)
++void dasd_add_request_head(struct dasd_ccw_req *cqr)
{
-@@ -65,14 +65,14 @@ static inline void rt2x00pci_register_read(const struct rt2x00_dev *rt2x00dev,
+ struct dasd_device *device;
+ unsigned long flags;
+
+- device = req->device;
++ device = cqr->startdev;
+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+- req->status = DASD_CQR_QUEUED;
+- req->device = device;
+- list_add(&req->list, &device->ccw_queue);
++ cqr->status = DASD_CQR_QUEUED;
++ list_add(&cqr->devlist, &device->ccw_queue);
+ /* let the bh start the request to keep them in order */
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
}
- static inline void
--rt2x00pci_register_multiread(const struct rt2x00_dev *rt2x00dev,
-+rt2x00pci_register_multiread(struct rt2x00_dev *rt2x00dev,
- const unsigned long offset,
- void *value, const u16 length)
+ /*
+- * Queue a request to the tail of the ccw_queue. Start the I/O if
+- * possible.
++ * Queue a request to the tail of the device ccw_queue.
++ * Start the I/O if possible.
+ */
+-void
+-dasd_add_request_tail(struct dasd_ccw_req *req)
++void dasd_add_request_tail(struct dasd_ccw_req *cqr)
{
- memcpy_fromio(value, rt2x00dev->csr_addr + offset, length);
+ struct dasd_device *device;
+ unsigned long flags;
+
+- device = req->device;
++ device = cqr->startdev;
+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+- req->status = DASD_CQR_QUEUED;
+- req->device = device;
+- list_add_tail(&req->list, &device->ccw_queue);
++ cqr->status = DASD_CQR_QUEUED;
++ list_add_tail(&cqr->devlist, &device->ccw_queue);
+ /* let the bh start the request to keep them in order */
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
}
--static inline void rt2x00pci_register_write(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt2x00pci_register_write(struct rt2x00_dev *rt2x00dev,
- const unsigned long offset,
- u32 value)
+ /*
+- * Wakeup callback.
++ * Wakeup helper for the 'sleep_on' functions.
+ */
+-static void
+-dasd_wakeup_cb(struct dasd_ccw_req *cqr, void *data)
++static void dasd_wakeup_cb(struct dasd_ccw_req *cqr, void *data)
{
-@@ -80,7 +80,7 @@ static inline void rt2x00pci_register_write(const struct rt2x00_dev *rt2x00dev,
+ wake_up((wait_queue_head_t *) data);
}
- static inline void
--rt2x00pci_register_multiwrite(const struct rt2x00_dev *rt2x00dev,
-+rt2x00pci_register_multiwrite(struct rt2x00_dev *rt2x00dev,
- const unsigned long offset,
- void *value, const u16 length)
+-static inline int
+-_wait_for_wakeup(struct dasd_ccw_req *cqr)
++static inline int _wait_for_wakeup(struct dasd_ccw_req *cqr)
{
-@@ -101,9 +101,11 @@ int rt2x00pci_write_tx_data(struct rt2x00_dev *rt2x00dev,
- struct ieee80211_tx_control *control);
+ struct dasd_device *device;
+ int rc;
- /*
-- * RX data handlers.
-+ * RX/TX data handlers.
- */
- void rt2x00pci_rxdone(struct rt2x00_dev *rt2x00dev);
-+void rt2x00pci_txdone(struct rt2x00_dev *rt2x00dev, struct data_entry *entry,
-+ const int tx_status, const int retry);
+- device = cqr->device;
++ device = cqr->startdev;
+ spin_lock_irq(get_ccwdev_lock(device->cdev));
+ rc = ((cqr->status == DASD_CQR_DONE ||
+- cqr->status == DASD_CQR_FAILED) &&
+- list_empty(&cqr->list));
++ cqr->status == DASD_CQR_NEED_ERP ||
++ cqr->status == DASD_CQR_TERMINATED) &&
++ list_empty(&cqr->devlist));
+ spin_unlock_irq(get_ccwdev_lock(device->cdev));
+ return rc;
+ }
/*
- * Device initialization handlers.
-diff --git a/drivers/net/wireless/rt2x00/rt2x00rfkill.c b/drivers/net/wireless/rt2x00/rt2x00rfkill.c
-index a0f8b8e..34a96d4 100644
---- a/drivers/net/wireless/rt2x00/rt2x00rfkill.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00rfkill.c
-@@ -23,11 +23,6 @@
- Abstract: rt2x00 rfkill routines.
+- * Attempts to start a special ccw queue and waits for its completion.
++ * Queue a request to the tail of the device ccw_queue and wait for
++ * its completion.
*/
+-int
+-dasd_sleep_on(struct dasd_ccw_req * cqr)
++int dasd_sleep_on(struct dasd_ccw_req *cqr)
+ {
+ wait_queue_head_t wait_q;
+ struct dasd_device *device;
+ int rc;
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00lib"
+- device = cqr->device;
+- spin_lock_irq(get_ccwdev_lock(device->cdev));
++ device = cqr->startdev;
+
+ init_waitqueue_head (&wait_q);
+ cqr->callback = dasd_wakeup_cb;
+ cqr->callback_data = (void *) &wait_q;
+- cqr->status = DASD_CQR_QUEUED;
+- list_add_tail(&cqr->list, &device->ccw_queue);
-
- #include <linux/input-polldev.h>
- #include <linux/kernel.h>
- #include <linux/module.h>
-@@ -68,8 +63,10 @@ static void rt2x00rfkill_poll(struct input_polled_dev *poll_dev)
- struct rt2x00_dev *rt2x00dev = poll_dev->private;
- int state = rt2x00dev->ops->lib->rfkill_poll(rt2x00dev);
+- /* let the bh start the request to keep them in order */
+- dasd_schedule_bh(device);
+-
+- spin_unlock_irq(get_ccwdev_lock(device->cdev));
+-
++ dasd_add_request_tail(cqr);
+ wait_event(wait_q, _wait_for_wakeup(cqr));
-- if (rt2x00dev->rfkill->state != state)
-+ if (rt2x00dev->rfkill->state != state) {
- input_report_key(poll_dev->input, KEY_WLAN, 1);
-+ input_report_key(poll_dev->input, KEY_WLAN, 0);
-+ }
+ /* Request status is either done or failed. */
+- rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
++ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+ return rc;
}
- int rt2x00rfkill_register(struct rt2x00_dev *rt2x00dev)
-@@ -92,6 +89,13 @@ int rt2x00rfkill_register(struct rt2x00_dev *rt2x00dev)
- return retval;
+ /*
+- * Attempts to start a special ccw queue and wait interruptible
+- * for its completion.
++ * Queue a request to the tail of the device ccw_queue and wait
++ * interruptible for its completion.
+ */
+-int
+-dasd_sleep_on_interruptible(struct dasd_ccw_req * cqr)
++int dasd_sleep_on_interruptible(struct dasd_ccw_req *cqr)
+ {
+ wait_queue_head_t wait_q;
+ struct dasd_device *device;
+- int rc, finished;
+-
+- device = cqr->device;
+- spin_lock_irq(get_ccwdev_lock(device->cdev));
++ int rc;
+
++ device = cqr->startdev;
+ init_waitqueue_head (&wait_q);
+ cqr->callback = dasd_wakeup_cb;
+ cqr->callback_data = (void *) &wait_q;
+- cqr->status = DASD_CQR_QUEUED;
+- list_add_tail(&cqr->list, &device->ccw_queue);
+-
+- /* let the bh start the request to keep them in order */
+- dasd_schedule_bh(device);
+- spin_unlock_irq(get_ccwdev_lock(device->cdev));
+-
+- finished = 0;
+- while (!finished) {
+- rc = wait_event_interruptible(wait_q, _wait_for_wakeup(cqr));
+- if (rc != -ERESTARTSYS) {
+- /* Request is final (done or failed) */
+- rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+- break;
+- }
+- spin_lock_irq(get_ccwdev_lock(device->cdev));
+- switch (cqr->status) {
+- case DASD_CQR_IN_IO:
+- /* terminate runnig cqr */
+- if (device->discipline->term_IO) {
+- cqr->retries = -1;
+- device->discipline->term_IO(cqr);
+- /* wait (non-interruptible) for final status
+- * because signal ist still pending */
+- spin_unlock_irq(get_ccwdev_lock(device->cdev));
+- wait_event(wait_q, _wait_for_wakeup(cqr));
+- spin_lock_irq(get_ccwdev_lock(device->cdev));
+- rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+- finished = 1;
+- }
+- break;
+- case DASD_CQR_QUEUED:
+- /* request */
+- list_del_init(&cqr->list);
+- rc = -EIO;
+- finished = 1;
+- break;
+- default:
+- /* cqr with 'non-interruptable' status - just wait */
+- break;
+- }
+- spin_unlock_irq(get_ccwdev_lock(device->cdev));
++ dasd_add_request_tail(cqr);
++ rc = wait_event_interruptible(wait_q, _wait_for_wakeup(cqr));
++ if (rc == -ERESTARTSYS) {
++ dasd_cancel_req(cqr);
++ /* wait (non-interruptible) for final status */
++ wait_event(wait_q, _wait_for_wakeup(cqr));
}
++ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+ return rc;
+ }
-+ /*
-+ * Force initial poll which will detect the initial device state,
-+ * and correctly sends the signal to the rfkill layer about this
-+ * state.
-+ */
-+ rt2x00rfkill_poll(rt2x00dev->poll_dev);
-+
- return 0;
+@@ -1643,25 +1454,23 @@ dasd_sleep_on_interruptible(struct dasd_ccw_req * cqr)
+ * and be put back to status queued, before the special request is added
+ * to the head of the queue. Then the special request is waited on normally.
+ */
+-static inline int
+-_dasd_term_running_cqr(struct dasd_device *device)
++static inline int _dasd_term_running_cqr(struct dasd_device *device)
+ {
+ struct dasd_ccw_req *cqr;
+
+ if (list_empty(&device->ccw_queue))
+ return 0;
+- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
++ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
+ return device->discipline->term_IO(cqr);
}
-@@ -114,26 +118,41 @@ int rt2x00rfkill_allocate(struct rt2x00_dev *rt2x00dev)
- rt2x00dev->rfkill = rfkill_allocate(device, RFKILL_TYPE_WLAN);
- if (!rt2x00dev->rfkill) {
- ERROR(rt2x00dev, "Failed to allocate rfkill handler.\n");
-- return -ENOMEM;
-+ goto exit;
- }
+-int
+-dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
++int dasd_sleep_on_immediatly(struct dasd_ccw_req *cqr)
+ {
+ wait_queue_head_t wait_q;
+ struct dasd_device *device;
+ int rc;
- rt2x00dev->rfkill->name = rt2x00dev->ops->name;
- rt2x00dev->rfkill->data = rt2x00dev;
-- rt2x00dev->rfkill->state = rt2x00dev->ops->lib->rfkill_poll(rt2x00dev);
-+ rt2x00dev->rfkill->state = -1;
- rt2x00dev->rfkill->toggle_radio = rt2x00rfkill_toggle_radio;
+- device = cqr->device;
++ device = cqr->startdev;
+ spin_lock_irq(get_ccwdev_lock(device->cdev));
+ rc = _dasd_term_running_cqr(device);
+ if (rc) {
+@@ -1673,17 +1482,17 @@ dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
+ cqr->callback = dasd_wakeup_cb;
+ cqr->callback_data = (void *) &wait_q;
+ cqr->status = DASD_CQR_QUEUED;
+- list_add(&cqr->list, &device->ccw_queue);
++ list_add(&cqr->devlist, &device->ccw_queue);
- rt2x00dev->poll_dev = input_allocate_polled_device();
- if (!rt2x00dev->poll_dev) {
- ERROR(rt2x00dev, "Failed to allocate polled device.\n");
-- rfkill_free(rt2x00dev->rfkill);
-- return -ENOMEM;
-+ goto exit_free_rfkill;
- }
+ /* let the bh start the request to keep them in order */
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
- rt2x00dev->poll_dev->private = rt2x00dev;
- rt2x00dev->poll_dev->poll = rt2x00rfkill_poll;
- rt2x00dev->poll_dev->poll_interval = RFKILL_POLL_INTERVAL;
+ spin_unlock_irq(get_ccwdev_lock(device->cdev));
-+ rt2x00dev->poll_dev->input->name = rt2x00dev->ops->name;
-+ rt2x00dev->poll_dev->input->phys = wiphy_name(rt2x00dev->hw->wiphy);
-+ rt2x00dev->poll_dev->input->id.bustype = BUS_HOST;
-+ rt2x00dev->poll_dev->input->id.vendor = 0x1814;
-+ rt2x00dev->poll_dev->input->id.product = rt2x00dev->chip.rt;
-+ rt2x00dev->poll_dev->input->id.version = rt2x00dev->chip.rev;
-+ rt2x00dev->poll_dev->input->dev.parent = device;
-+ rt2x00dev->poll_dev->input->evbit[0] = BIT(EV_KEY);
-+ set_bit(KEY_WLAN, rt2x00dev->poll_dev->input->keybit);
+ wait_event(wait_q, _wait_for_wakeup(cqr));
+
+ /* Request status is either done or failed. */
+- rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
++ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+ return rc;
+ }
+
+@@ -1692,11 +1501,14 @@ dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
+ * This is useful to timeout requests. The request will be
+ * terminated if it is currently in i/o.
+ * Returns 1 if the request has been terminated.
++ * 0 if there was no need to terminate the request (not started yet)
++ * negative error code if termination failed
++ * Cancellation of a request is an asynchronous operation! The calling
++ * function has to wait until the request is properly returned via callback.
+ */
+-int
+-dasd_cancel_req(struct dasd_ccw_req *cqr)
++int dasd_cancel_req(struct dasd_ccw_req *cqr)
+ {
+- struct dasd_device *device = cqr->device;
++ struct dasd_device *device = cqr->startdev;
+ unsigned long flags;
+ int rc;
+
+@@ -1704,74 +1516,454 @@ dasd_cancel_req(struct dasd_ccw_req *cqr)
+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+ switch (cqr->status) {
+ case DASD_CQR_QUEUED:
+- /* request was not started - just set to failed */
+- cqr->status = DASD_CQR_FAILED;
++ /* request was not started - just set to cleared */
++ cqr->status = DASD_CQR_CLEARED;
+ break;
+ case DASD_CQR_IN_IO:
+ /* request in IO - terminate IO and release again */
+- if (device->discipline->term_IO(cqr) != 0)
+- /* what to do if unable to terminate ??????
+- e.g. not _IN_IO */
+- cqr->status = DASD_CQR_FAILED;
+- cqr->stopclk = get_clock();
+- rc = 1;
++ rc = device->discipline->term_IO(cqr);
++ if (rc) {
++ DEV_MESSAGE(KERN_ERR, device,
++ "dasd_cancel_req is unable "
++ "to terminate request %p, rc = %d",
++ cqr, rc);
++ } else {
++ cqr->stopclk = get_clock();
++ rc = 1;
++ }
+ break;
+- case DASD_CQR_DONE:
+- case DASD_CQR_FAILED:
+- /* already finished - do nothing */
++ default: /* already finished or clear pending - do nothing */
+ break;
+- default:
+- DEV_MESSAGE(KERN_ALERT, device,
+- "invalid status %02x in request",
+- cqr->status);
++ }
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ dasd_schedule_device_bh(device);
++ return rc;
++}
++
++
++/*
++ * SECTION: Operations of the dasd_block layer.
++ */
++
++/*
++ * Timeout function for dasd_block. This is used when the block layer
++ * is waiting for something that may not come reliably, (e.g. a state
++ * change interrupt)
++ */
++static void dasd_block_timeout(unsigned long ptr)
++{
++ unsigned long flags;
++ struct dasd_block *block;
++
++ block = (struct dasd_block *) ptr;
++ spin_lock_irqsave(get_ccwdev_lock(block->base->cdev), flags);
++ /* re-activate request queue */
++ block->base->stopped &= ~DASD_STOPPED_PENDING;
++ spin_unlock_irqrestore(get_ccwdev_lock(block->base->cdev), flags);
++ dasd_schedule_block_bh(block);
++}
++
++/*
++ * Setup timeout for a dasd_block in jiffies.
++ */
++void dasd_block_set_timer(struct dasd_block *block, int expires)
++{
++ if (expires == 0) {
++ if (timer_pending(&block->timer))
++ del_timer(&block->timer);
++ return;
++ }
++ if (timer_pending(&block->timer)) {
++ if (mod_timer(&block->timer, jiffies + expires))
++ return;
++ }
++ block->timer.function = dasd_block_timeout;
++ block->timer.data = (unsigned long) block;
++ block->timer.expires = jiffies + expires;
++ add_timer(&block->timer);
++}
++
++/*
++ * Clear timeout for a dasd_block.
++ */
++void dasd_block_clear_timer(struct dasd_block *block)
++{
++ if (timer_pending(&block->timer))
++ del_timer(&block->timer);
++}
++
++/*
++ * posts the buffer_cache about a finalized request
++ */
++static inline void dasd_end_request(struct request *req, int error)
++{
++ if (__blk_end_request(req, error, blk_rq_bytes(req)))
+ BUG();
++}
++
++/*
++ * Process finished error recovery ccw.
++ */
++static inline void __dasd_block_process_erp(struct dasd_block *block,
++ struct dasd_ccw_req *cqr)
++{
++ dasd_erp_fn_t erp_fn;
++ struct dasd_device *device = block->base;
++
++ if (cqr->status == DASD_CQR_DONE)
++ DBF_DEV_EVENT(DBF_NOTICE, device, "%s", "ERP successful");
++ else
++ DEV_MESSAGE(KERN_ERR, device, "%s", "ERP unsuccessful");
++ erp_fn = device->discipline->erp_postaction(cqr);
++ erp_fn(cqr);
++}
+
++/*
++ * Fetch requests from the block device queue.
++ */
++static void __dasd_process_request_queue(struct dasd_block *block)
++{
++ struct request_queue *queue;
++ struct request *req;
++ struct dasd_ccw_req *cqr;
++ struct dasd_device *basedev;
++ unsigned long flags;
++ queue = block->request_queue;
++ basedev = block->base;
++ /* No queue ? Then there is nothing to do. */
++ if (queue == NULL)
++ return;
++
++ /*
++ * We requeue request from the block device queue to the ccw
++ * queue only in two states. In state DASD_STATE_READY the
++ * partition detection is done and we need to requeue requests
++ * for that. State DASD_STATE_ONLINE is normal block device
++ * operation.
++ */
++ if (basedev->state < DASD_STATE_READY)
++ return;
++ /* Now we try to fetch requests from the request queue */
++ while (!blk_queue_plugged(queue) &&
++ elv_next_request(queue)) {
++
++ req = elv_next_request(queue);
++
++ if (basedev->features & DASD_FEATURE_READONLY &&
++ rq_data_dir(req) == WRITE) {
++ DBF_DEV_EVENT(DBF_ERR, basedev,
++ "Rejecting write request %p",
++ req);
++ blkdev_dequeue_request(req);
++ dasd_end_request(req, -EIO);
++ continue;
++ }
++ cqr = basedev->discipline->build_cp(basedev, block, req);
++ if (IS_ERR(cqr)) {
++ if (PTR_ERR(cqr) == -EBUSY)
++ break; /* normal end condition */
++ if (PTR_ERR(cqr) == -ENOMEM)
++ break; /* terminate request queue loop */
++ if (PTR_ERR(cqr) == -EAGAIN) {
++ /*
++ * The current request cannot be built right
++ * now, we have to try later. If this request
++ * is the head-of-queue we stop the device
++ * for 1/2 second.
++ */
++ if (!list_empty(&block->ccw_queue))
++ break;
++ spin_lock_irqsave(get_ccwdev_lock(basedev->cdev), flags);
++ basedev->stopped |= DASD_STOPPED_PENDING;
++ spin_unlock_irqrestore(get_ccwdev_lock(basedev->cdev), flags);
++ dasd_block_set_timer(block, HZ/2);
++ break;
++ }
++ DBF_DEV_EVENT(DBF_ERR, basedev,
++ "CCW creation failed (rc=%ld) "
++ "on request %p",
++ PTR_ERR(cqr), req);
++ blkdev_dequeue_request(req);
++ dasd_end_request(req, -EIO);
++ continue;
++ }
++ /*
++ * Note: callback is set to dasd_return_cqr_cb in
++ * __dasd_block_start_head to cover erp requests as well
++ */
++ cqr->callback_data = (void *) req;
++ cqr->status = DASD_CQR_FILLED;
++ blkdev_dequeue_request(req);
++ list_add_tail(&cqr->blocklist, &block->ccw_queue);
++ dasd_profile_start(block, cqr, req);
++ }
++}
++
++static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
++{
++ struct request *req;
++ int status;
++ int error = 0;
++
++ req = (struct request *) cqr->callback_data;
++ dasd_profile_end(cqr->block, cqr, req);
++ status = cqr->memdev->discipline->free_cp(cqr, req);
++ if (status <= 0)
++ error = status ? status : -EIO;
++ dasd_end_request(req, error);
++}
+
- return 0;
++/*
++ * Process ccw request queue.
++ */
++static void __dasd_process_block_ccw_queue(struct dasd_block *block,
++ struct list_head *final_queue)
++{
++ struct list_head *l, *n;
++ struct dasd_ccw_req *cqr;
++ dasd_erp_fn_t erp_fn;
++ unsigned long flags;
++ struct dasd_device *base = block->base;
+
-+exit_free_rfkill:
-+ rfkill_free(rt2x00dev->rfkill);
++restart:
++	/* Process requests with final status. */
++ list_for_each_safe(l, n, &block->ccw_queue) {
++ cqr = list_entry(l, struct dasd_ccw_req, blocklist);
++ if (cqr->status != DASD_CQR_DONE &&
++ cqr->status != DASD_CQR_FAILED &&
++ cqr->status != DASD_CQR_NEED_ERP &&
++ cqr->status != DASD_CQR_TERMINATED)
++ continue;
+
-+exit:
-+ return -ENOMEM;
- }
-
- void rt2x00rfkill_free(struct rt2x00_dev *rt2x00dev)
-diff --git a/drivers/net/wireless/rt2x00/rt2x00ring.h b/drivers/net/wireless/rt2x00/rt2x00ring.h
-index 1a864d3..1caa6d6 100644
---- a/drivers/net/wireless/rt2x00/rt2x00ring.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00ring.h
-@@ -27,19 +27,27 @@
- #define RT2X00RING_H
-
- /*
-- * data_desc
-- * Each data entry also contains a descriptor which is used by the
-- * device to determine what should be done with the packet and
-- * what the current status is.
-- * This structure is greatly simplified, but the descriptors
-- * are basically a list of little endian 32 bit values.
-- * Make the array by default 1 word big, this will allow us
-- * to use sizeof() correctly.
-+ * skb_desc
-+ * Descriptor information for the skb buffer
- */
--struct data_desc {
-- __le32 word[1];
-+struct skb_desc {
-+ unsigned int frame_type;
++ if (cqr->status == DASD_CQR_TERMINATED) {
++ base->discipline->handle_terminated_request(cqr);
++ goto restart;
++ }
+
-+ unsigned int desc_len;
-+ unsigned int data_len;
++ /* Process requests that may be recovered */
++ if (cqr->status == DASD_CQR_NEED_ERP) {
++ if (cqr->irb.esw.esw0.erw.cons &&
++ test_bit(DASD_CQR_FLAGS_USE_ERP,
++ &cqr->flags)) {
++ erp_fn = base->discipline->erp_action(cqr);
++ erp_fn(cqr);
++ }
++ goto restart;
++ }
+
-+ void *desc;
-+ void *data;
++		/* First of all, call extended error reporting. */
++ if (dasd_eer_enabled(base) &&
++ cqr->status == DASD_CQR_FAILED) {
++ dasd_eer_write(base, cqr, DASD_EER_FATALERROR);
+
-+ struct data_ring *ring;
-+ struct data_entry *entry;
- };
-
-+static inline struct skb_desc* get_skb_desc(struct sk_buff *skb)
++ /* restart request */
++ cqr->status = DASD_CQR_FILLED;
++ cqr->retries = 255;
++ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
++ base->stopped |= DASD_STOPPED_QUIESCE;
++ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev),
++ flags);
++ goto restart;
++ }
++
++ /* Process finished ERP request. */
++ if (cqr->refers) {
++ __dasd_block_process_erp(block, cqr);
++ goto restart;
++ }
++
++ /* Rechain finished requests to final queue */
++ cqr->endclk = get_clock();
++ list_move_tail(&cqr->blocklist, final_queue);
++ }
++}
++
++static void dasd_return_cqr_cb(struct dasd_ccw_req *cqr, void *data)
+{
-+ return (struct skb_desc*)&skb->cb[0];
++ dasd_schedule_block_bh(cqr->block);
+}
+
- /*
- * rxdata_entry_desc
- * Summary of information that has been read from the
-@@ -51,6 +59,7 @@ struct rxdata_entry_desc {
- int ofdm;
- int size;
- int flags;
-+ int my_bss;
- };
-
- /*
-@@ -66,6 +75,7 @@ struct txdata_entry_desc {
- #define ENTRY_TXD_MORE_FRAG 4
- #define ENTRY_TXD_REQ_TIMESTAMP 5
- #define ENTRY_TXD_BURST 6
-+#define ENTRY_TXD_ACK 7
-
- /*
- * Queue ID. ID's 0-4 are data TX rings
-@@ -134,6 +144,11 @@ struct data_entry {
- */
- void *data_addr;
- dma_addr_t data_dma;
++static void __dasd_block_start_head(struct dasd_block *block)
++{
++ struct dasd_ccw_req *cqr;
+
-+ /*
-+ * Entry identification number (index).
-+ */
-+ unsigned int entry_idx;
- };
-
- /*
-@@ -172,6 +187,13 @@ struct data_ring {
- void *data_addr;
-
- /*
-+ * Queue identification number:
-+ * RX: 0
-+ * TX: IEEE80211_TX_*
++ if (list_empty(&block->ccw_queue))
++ return;
++	/* We always begin with the first requests on the queue, as some
++	 * of the previously started requests have to be enqueued on a
++ * dasd_device again for error recovery.
+ */
-+ unsigned int queue_idx;
++ list_for_each_entry(cqr, &block->ccw_queue, blocklist) {
++ if (cqr->status != DASD_CQR_FILLED)
++ continue;
++ /* Non-temporary stop condition will trigger fail fast */
++ if (block->base->stopped & ~DASD_STOPPED_PENDING &&
++ test_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags) &&
++ (!dasd_eer_enabled(block->base))) {
++ cqr->status = DASD_CQR_FAILED;
++ dasd_schedule_block_bh(block);
++ continue;
++ }
++ /* Don't try to start requests if device is stopped */
++ if (block->base->stopped)
++ return;
+
-+ /*
- * Index variables.
- */
- u16 index;
-@@ -253,16 +275,16 @@ static inline int rt2x00_ring_free(struct data_ring *ring)
- /*
- * TX/RX Descriptor access functions.
- */
--static inline void rt2x00_desc_read(struct data_desc *desc,
-+static inline void rt2x00_desc_read(__le32 *desc,
- const u8 word, u32 *value)
- {
-- *value = le32_to_cpu(desc->word[word]);
-+ *value = le32_to_cpu(desc[word]);
- }
-
--static inline void rt2x00_desc_write(struct data_desc *desc,
-+static inline void rt2x00_desc_write(__le32 *desc,
- const u8 word, const u32 value)
- {
-- desc->word[word] = cpu_to_le32(value);
-+ desc[word] = cpu_to_le32(value);
- }
-
- #endif /* RT2X00RING_H */
-diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.c b/drivers/net/wireless/rt2x00/rt2x00usb.c
-index 568d738..84e9bdb 100644
---- a/drivers/net/wireless/rt2x00/rt2x00usb.c
-+++ b/drivers/net/wireless/rt2x00/rt2x00usb.c
-@@ -23,14 +23,10 @@
- Abstract: rt2x00 generic usb device routines.
- */
-
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt2x00usb"
--
- #include <linux/kernel.h>
- #include <linux/module.h>
- #include <linux/usb.h>
-+#include <linux/bug.h>
-
- #include "rt2x00.h"
- #include "rt2x00usb.h"
-@@ -38,7 +34,7 @@
- /*
- * Interfacing with the HW.
- */
--int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
-+int rt2x00usb_vendor_request(struct rt2x00_dev *rt2x00dev,
- const u8 request, const u8 requesttype,
- const u16 offset, const u16 value,
- void *buffer, const u16 buffer_length,
-@@ -52,6 +48,7 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
- (requesttype == USB_VENDOR_REQUEST_IN) ?
- usb_rcvctrlpipe(usb_dev, 0) : usb_sndctrlpipe(usb_dev, 0);
-
++		/* just a fail-safe check; should not happen */
++ if (!cqr->startdev)
++ cqr->startdev = block->base;
+
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
- status = usb_control_msg(usb_dev, pipe, request, requesttype,
- value, offset, buffer, buffer_length,
-@@ -76,13 +73,15 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
- }
- EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request);
-
--int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
-- const u8 request, const u8 requesttype,
-- const u16 offset, void *buffer,
-- const u16 buffer_length, const int timeout)
-+int rt2x00usb_vendor_req_buff_lock(struct rt2x00_dev *rt2x00dev,
-+ const u8 request, const u8 requesttype,
-+ const u16 offset, void *buffer,
-+ const u16 buffer_length, const int timeout)
- {
- int status;
-
-+ BUG_ON(!mutex_is_locked(&rt2x00dev->usb_cache_mutex));
++ /* make sure that the requests we submit find their way back */
++ cqr->callback = dasd_return_cqr_cb;
+
- /*
- * Check for Cache availability.
- */
-@@ -103,6 +102,25 @@ int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
-
- return status;
- }
-+EXPORT_SYMBOL_GPL(rt2x00usb_vendor_req_buff_lock);
++ dasd_add_request_tail(cqr);
++ }
++}
+
-+int rt2x00usb_vendor_request_buff(struct rt2x00_dev *rt2x00dev,
-+ const u8 request, const u8 requesttype,
-+ const u16 offset, void *buffer,
-+ const u16 buffer_length, const int timeout)
++/*
++ * Central dasd_block layer routine. Takes requests from the generic
++ * block layer request queue, creates ccw requests, enqueues them on
++ * a dasd_device and processes ccw requests that have been returned.
++ */
++static void dasd_block_tasklet(struct dasd_block *block)
+{
-+ int status;
++ struct list_head final_queue;
++ struct list_head *l, *n;
++ struct dasd_ccw_req *cqr;
+
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
++ atomic_set(&block->tasklet_scheduled, 0);
++ INIT_LIST_HEAD(&final_queue);
++ spin_lock(&block->queue_lock);
++ /* Finish off requests on ccw queue */
++ __dasd_process_block_ccw_queue(block, &final_queue);
++ spin_unlock(&block->queue_lock);
++ /* Now call the callback function of requests with final status */
++ spin_lock_irq(&block->request_queue_lock);
++ list_for_each_safe(l, n, &final_queue) {
++ cqr = list_entry(l, struct dasd_ccw_req, blocklist);
++ list_del_init(&cqr->blocklist);
++ __dasd_cleanup_cqr(cqr);
++ }
++ spin_lock(&block->queue_lock);
++ /* Get new request from the block device request queue */
++ __dasd_process_request_queue(block);
++ /* Now check if the head of the ccw queue needs to be started. */
++ __dasd_block_start_head(block);
++ spin_unlock(&block->queue_lock);
++ spin_unlock_irq(&block->request_queue_lock);
++ dasd_put_device(block->base);
++}
+
-+ status = rt2x00usb_vendor_req_buff_lock(rt2x00dev, request,
-+ requesttype, offset, buffer,
-+ buffer_length, timeout);
++static void _dasd_wake_block_flush_cb(struct dasd_ccw_req *cqr, void *data)
++{
++ wake_up(&dasd_flush_wq);
++}
+
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
++/*
++ * Go through all requests on the dasd_block request queue, cancel them
++ * on the respective dasd_device, and return them to the generic
++ * block layer.
++ */
++static int dasd_flush_block_queue(struct dasd_block *block)
++{
++ struct dasd_ccw_req *cqr, *n;
++ int rc, i;
++ struct list_head flush_queue;
+
-+ return status;
-+}
- EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request_buff);
-
- /*
-@@ -113,7 +131,7 @@ static void rt2x00usb_interrupt_txdone(struct urb *urb)
- struct data_entry *entry = (struct data_entry *)urb->context;
- struct data_ring *ring = entry->ring;
- struct rt2x00_dev *rt2x00dev = ring->rt2x00dev;
-- struct data_desc *txd = (struct data_desc *)entry->skb->data;
-+ __le32 *txd = (__le32 *)entry->skb->data;
- u32 word;
- int tx_status;
-
-@@ -158,20 +176,17 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
- struct usb_device *usb_dev =
- interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
- struct data_entry *entry = rt2x00_get_data_entry(ring);
-- int pipe = usb_sndbulkpipe(usb_dev, 1);
-+ struct skb_desc *desc;
- u32 length;
-
-- if (rt2x00_ring_full(ring)) {
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
-+ if (rt2x00_ring_full(ring))
- return -EINVAL;
-- }
-
- if (test_bit(ENTRY_OWNER_NIC, &entry->flags)) {
- ERROR(rt2x00dev,
- "Arrived at non-free entry in the non-full queue %d.\n"
- "Please file bug report to %s.\n",
- control->queue, DRV_PROJECT);
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
- return -EINVAL;
++ INIT_LIST_HEAD(&flush_queue);
++ spin_lock_bh(&block->queue_lock);
++ rc = 0;
++restart:
++ list_for_each_entry_safe(cqr, n, &block->ccw_queue, blocklist) {
++		/* if this request is currently owned by a dasd_device, cancel it */
++ if (cqr->status >= DASD_CQR_QUEUED)
++ rc = dasd_cancel_req(cqr);
++ if (rc < 0)
++ break;
++ /* Rechain request (including erp chain) so it won't be
++ * touched by the dasd_block_tasklet anymore.
++ * Replace the callback so we notice when the request
++ * is returned from the dasd_device layer.
++ */
++ cqr->callback = _dasd_wake_block_flush_cb;
++ for (i = 0; cqr != NULL; cqr = cqr->refers, i++)
++ list_move_tail(&cqr->blocklist, &flush_queue);
++ if (i > 1)
++ /* moved more than one request - need to restart */
++ goto restart;
++ }
++ spin_unlock_bh(&block->queue_lock);
++ /* Now call the callback function of flushed requests */
++restart_cb:
++ list_for_each_entry_safe(cqr, n, &flush_queue, blocklist) {
++ wait_event(dasd_flush_wq, (cqr->status < DASD_CQR_QUEUED));
++ /* Process finished ERP request. */
++ if (cqr->refers) {
++ __dasd_block_process_erp(block, cqr);
++ /* restart list_for_xx loop since dasd_process_erp
++ * might remove multiple elements */
++ goto restart_cb;
++ }
++ /* call the callback function */
++ cqr->endclk = get_clock();
++ list_del_init(&cqr->blocklist);
++ __dasd_cleanup_cqr(cqr);
}
-
-@@ -181,12 +196,18 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
- skb_push(skb, ring->desc_size);
- memset(skb->data, 0, ring->desc_size);
-
-- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
-- (struct ieee80211_hdr *)(skb->data +
-- ring->desc_size),
-- skb->len - ring->desc_size, control);
-- memcpy(&entry->tx_status.control, control, sizeof(*control));
-- entry->skb = skb;
-+ /*
-+ * Fill in skb descriptor
-+ */
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len - ring->desc_size;
-+ desc->desc = skb->data;
-+ desc->data = skb->data + ring->desc_size;
-+ desc->ring = ring;
-+ desc->entry = entry;
-+
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
-
- /*
- * USB devices cannot blindly pass the skb->len as the
-@@ -199,15 +220,12 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
- * Initialize URB and send the frame to the device.
- */
- __set_bit(ENTRY_OWNER_NIC, &entry->flags);
-- usb_fill_bulk_urb(entry->priv, usb_dev, pipe,
-+ usb_fill_bulk_urb(entry->priv, usb_dev, usb_sndbulkpipe(usb_dev, 1),
- skb->data, length, rt2x00usb_interrupt_txdone, entry);
- usb_submit_urb(entry->priv, GFP_ATOMIC);
-
- rt2x00_ring_index_inc(ring);
-
-- if (rt2x00_ring_full(ring))
-- ieee80211_stop_queue(rt2x00dev->hw, control->queue);
--
- return 0;
+- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+- dasd_schedule_bh(device);
+ return rc;
}
- EXPORT_SYMBOL_GPL(rt2x00usb_write_tx_data);
-@@ -222,6 +240,7 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
- struct rt2x00_dev *rt2x00dev = ring->rt2x00dev;
- struct sk_buff *skb;
- struct ieee80211_hdr *hdr;
-+ struct skb_desc *skbdesc;
- struct rxdata_entry_desc desc;
- int header_size;
- int frame_size;
-@@ -238,7 +257,14 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
- if (urb->actual_length < entry->ring->desc_size || urb->status)
- goto skip_entry;
-- memset(&desc, 0x00, sizeof(desc));
-+ /*
-+ * Fill in skb descriptor
-+ */
-+ skbdesc = get_skb_desc(entry->skb);
-+ skbdesc->ring = ring;
-+ skbdesc->entry = entry;
+ /*
+- * SECTION: Block device operations (request queue, partitions, open, release).
++ * Schedules a call to dasd_tasklet over the device tasklet.
++ */
++void dasd_schedule_block_bh(struct dasd_block *block)
++{
++ /* Protect against rescheduling. */
++ if (atomic_cmpxchg(&block->tasklet_scheduled, 0, 1) != 0)
++ return;
++	/* life cycle of block is bound to its base device */
++ dasd_get_device(block->base);
++ tasklet_hi_schedule(&block->tasklet);
++}
+
-+ memset(&desc, 0, sizeof(desc));
- rt2x00dev->ops->lib->fill_rxdone(entry, &desc);
-
- /*
-@@ -264,9 +290,6 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
- /*
- * The data behind the ieee80211 header must be
- * aligned on a 4 byte boundary.
-- * After that trim the entire buffer down to only
-- * contain the valid frame data excluding the device
-- * descriptor.
- */
- hdr = (struct ieee80211_hdr *)entry->skb->data;
- header_size =
-@@ -276,6 +299,16 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb)
- skb_push(entry->skb, 2);
- memmove(entry->skb->data, entry->skb->data + 2, skb->len - 2);
- }
+
-+ /*
-+ * Trim the entire buffer down to only contain the valid frame data
-+ * excluding the device descriptor. The position of the descriptor
-+ * varies. This means that we should check where the descriptor is
-+ * and decide if we need to pull the data pointer to exclude the
-+ * device descriptor.
-+ */
-+ if (skbdesc->data > skbdesc->desc)
-+ skb_pull(entry->skb, skbdesc->desc_len);
- skb_trim(entry->skb, desc.size);
++/*
++ * SECTION: external block device operations
++ * (request queue handling, open, release, etc.)
+ */
- /*
-@@ -303,43 +336,6 @@ skip_entry:
/*
- * Radio handlers
+ * Dasd request queue function. Called from ll_rw_blk.c
*/
--void rt2x00usb_enable_radio(struct rt2x00_dev *rt2x00dev)
--{
-- struct usb_device *usb_dev =
-- interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
-- struct data_ring *ring;
-- struct data_entry *entry;
-- unsigned int i;
--
-- /*
-- * Initialize the TX rings
-- */
-- txringall_for_each(rt2x00dev, ring) {
-- for (i = 0; i < ring->stats.limit; i++)
-- ring->entry[i].flags = 0;
--
-- rt2x00_ring_index_clear(ring);
-- }
--
-- /*
-- * Initialize and start the RX ring.
-- */
-- rt2x00_ring_index_clear(rt2x00dev->rx);
--
-- for (i = 0; i < rt2x00dev->rx->stats.limit; i++) {
-- entry = &rt2x00dev->rx->entry[i];
--
-- usb_fill_bulk_urb(entry->priv, usb_dev,
-- usb_rcvbulkpipe(usb_dev, 1),
-- entry->skb->data, entry->skb->len,
-- rt2x00usb_interrupt_rxdone, entry);
--
-- __set_bit(ENTRY_OWNER_NIC, &entry->flags);
-- usb_submit_urb(entry->priv, GFP_ATOMIC);
-- }
--}
--EXPORT_SYMBOL_GPL(rt2x00usb_enable_radio);
--
- void rt2x00usb_disable_radio(struct rt2x00_dev *rt2x00dev)
+-static void
+-do_dasd_request(struct request_queue * queue)
++static void do_dasd_request(struct request_queue *queue)
{
- struct data_ring *ring;
-@@ -361,6 +357,29 @@ EXPORT_SYMBOL_GPL(rt2x00usb_disable_radio);
+- struct dasd_device *device;
++ struct dasd_block *block;
+
+- device = (struct dasd_device *) queue->queuedata;
+- spin_lock(get_ccwdev_lock(device->cdev));
++ block = queue->queuedata;
++ spin_lock(&block->queue_lock);
+ /* Get new request from the block device request queue */
+- __dasd_process_blk_queue(device);
++ __dasd_process_request_queue(block);
+ /* Now check if the head of the ccw queue needs to be started. */
+- __dasd_start_head(device);
+- spin_unlock(get_ccwdev_lock(device->cdev));
++ __dasd_block_start_head(block);
++ spin_unlock(&block->queue_lock);
+ }
+
/*
- * Device initialization handlers.
+ * Allocate and initialize request queue and default I/O scheduler.
*/
-+void rt2x00usb_init_rxentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
-+{
-+ struct usb_device *usb_dev =
-+ interface_to_usbdev(rt2x00dev_usb(rt2x00dev));
-+
-+ usb_fill_bulk_urb(entry->priv, usb_dev,
-+ usb_rcvbulkpipe(usb_dev, 1),
-+ entry->skb->data, entry->skb->len,
-+ rt2x00usb_interrupt_rxdone, entry);
-+
-+ __set_bit(ENTRY_OWNER_NIC, &entry->flags);
-+ usb_submit_urb(entry->priv, GFP_ATOMIC);
-+}
-+EXPORT_SYMBOL_GPL(rt2x00usb_init_rxentry);
-+
-+void rt2x00usb_init_txentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
-+{
-+ entry->flags = 0;
-+}
-+EXPORT_SYMBOL_GPL(rt2x00usb_init_txentry);
-+
- static int rt2x00usb_alloc_urb(struct rt2x00_dev *rt2x00dev,
- struct data_ring *ring)
+-static int
+-dasd_alloc_queue(struct dasd_device * device)
++static int dasd_alloc_queue(struct dasd_block *block)
{
-@@ -400,7 +419,7 @@ int rt2x00usb_initialize(struct rt2x00_dev *rt2x00dev)
- struct sk_buff *skb;
- unsigned int entry_size;
- unsigned int i;
-- int status;
-+ int uninitialized_var(status);
+ int rc;
- /*
- * Allocate DMA
-@@ -507,6 +526,7 @@ int rt2x00usb_probe(struct usb_interface *usb_intf,
- rt2x00dev->dev = usb_intf;
- rt2x00dev->ops = ops;
- rt2x00dev->hw = hw;
-+ mutex_init(&rt2x00dev->usb_cache_mutex);
+- device->request_queue = blk_init_queue(do_dasd_request,
+- &device->request_queue_lock);
+- if (device->request_queue == NULL)
++ block->request_queue = blk_init_queue(do_dasd_request,
++ &block->request_queue_lock);
++ if (block->request_queue == NULL)
+ return -ENOMEM;
- rt2x00dev->usb_maxpacket =
- usb_maxpacket(usb_dev, usb_sndbulkpipe(usb_dev, 1), 1);
-diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.h b/drivers/net/wireless/rt2x00/rt2x00usb.h
-index 2681abe..e40df40 100644
---- a/drivers/net/wireless/rt2x00/rt2x00usb.h
-+++ b/drivers/net/wireless/rt2x00/rt2x00usb.h
-@@ -91,7 +91,7 @@
- * a buffer allocated by kmalloc. Failure to do so can lead
- * to unexpected behavior depending on the architecture.
- */
--int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
-+int rt2x00usb_vendor_request(struct rt2x00_dev *rt2x00dev,
- const u8 request, const u8 requesttype,
- const u16 offset, const u16 value,
- void *buffer, const u16 buffer_length,
-@@ -107,18 +107,25 @@ int rt2x00usb_vendor_request(const struct rt2x00_dev *rt2x00dev,
- * kmalloc. Hence the reason for using a previously allocated cache
- * which has been allocated properly.
- */
--int rt2x00usb_vendor_request_buff(const struct rt2x00_dev *rt2x00dev,
-+int rt2x00usb_vendor_request_buff(struct rt2x00_dev *rt2x00dev,
- const u8 request, const u8 requesttype,
- const u16 offset, void *buffer,
- const u16 buffer_length, const int timeout);
+- device->request_queue->queuedata = device;
++ block->request_queue->queuedata = block;
+- elevator_exit(device->request_queue->elevator);
+- rc = elevator_init(device->request_queue, "deadline");
++ elevator_exit(block->request_queue->elevator);
++ rc = elevator_init(block->request_queue, "deadline");
+ if (rc) {
+- blk_cleanup_queue(device->request_queue);
++ blk_cleanup_queue(block->request_queue);
+ return rc;
+ }
+ return 0;
+@@ -1780,79 +1972,76 @@ dasd_alloc_queue(struct dasd_device * device)
/*
-+ * A version of rt2x00usb_vendor_request_buff which must be called
-+ * if the usb_cache_mutex is already held. */
-+int rt2x00usb_vendor_req_buff_lock(struct rt2x00_dev *rt2x00dev,
-+ const u8 request, const u8 requesttype,
-+ const u16 offset, void *buffer,
-+ const u16 buffer_length, const int timeout);
-+
-+/*
- * Simple wrapper around rt2x00usb_vendor_request to write a single
- * command to the device. Since we don't use the buffer argument we
- * don't have to worry about kmalloc here.
- */
--static inline int rt2x00usb_vendor_request_sw(const struct rt2x00_dev
-- *rt2x00dev,
-+static inline int rt2x00usb_vendor_request_sw(struct rt2x00_dev *rt2x00dev,
- const u8 request,
- const u16 offset,
- const u16 value,
-@@ -134,8 +141,8 @@ static inline int rt2x00usb_vendor_request_sw(const struct rt2x00_dev
- * from the device. Note that the eeprom argument _must_ be allocated using
- * kmalloc for correct handling inside the kernel USB layer.
+ * Allocate and initialize request queue.
*/
--static inline int rt2x00usb_eeprom_read(const struct rt2x00_dev *rt2x00dev,
-- __le16 *eeprom, const u16 lenght)
-+static inline int rt2x00usb_eeprom_read(struct rt2x00_dev *rt2x00dev,
-+ __le16 *eeprom, const u16 lenght)
+-static void
+-dasd_setup_queue(struct dasd_device * device)
++static void dasd_setup_queue(struct dasd_block *block)
{
- int timeout = REGISTER_TIMEOUT * (lenght / sizeof(u16));
+ int max;
+
+- blk_queue_hardsect_size(device->request_queue, device->bp_block);
+- max = device->discipline->max_blocks << device->s2b_shift;
+- blk_queue_max_sectors(device->request_queue, max);
+- blk_queue_max_phys_segments(device->request_queue, -1L);
+- blk_queue_max_hw_segments(device->request_queue, -1L);
+- blk_queue_max_segment_size(device->request_queue, -1L);
+- blk_queue_segment_boundary(device->request_queue, -1L);
+- blk_queue_ordered(device->request_queue, QUEUE_ORDERED_TAG, NULL);
++ blk_queue_hardsect_size(block->request_queue, block->bp_block);
++ max = block->base->discipline->max_blocks << block->s2b_shift;
++ blk_queue_max_sectors(block->request_queue, max);
++ blk_queue_max_phys_segments(block->request_queue, -1L);
++ blk_queue_max_hw_segments(block->request_queue, -1L);
++ blk_queue_max_segment_size(block->request_queue, -1L);
++ blk_queue_segment_boundary(block->request_queue, -1L);
++ blk_queue_ordered(block->request_queue, QUEUE_ORDERED_DRAIN, NULL);
+ }
-@@ -147,7 +154,6 @@ static inline int rt2x00usb_eeprom_read(const struct rt2x00_dev *rt2x00dev,
/*
- * Radio handlers
+ * Deactivate and free request queue.
*/
--void rt2x00usb_enable_radio(struct rt2x00_dev *rt2x00dev);
- void rt2x00usb_disable_radio(struct rt2x00_dev *rt2x00dev);
+-static void
+-dasd_free_queue(struct dasd_device * device)
++static void dasd_free_queue(struct dasd_block *block)
+ {
+- if (device->request_queue) {
+- blk_cleanup_queue(device->request_queue);
+- device->request_queue = NULL;
++ if (block->request_queue) {
++ blk_cleanup_queue(block->request_queue);
++ block->request_queue = NULL;
+ }
+ }
/*
-@@ -160,6 +166,10 @@ int rt2x00usb_write_tx_data(struct rt2x00_dev *rt2x00dev,
- /*
- * Device initialization handlers.
+ * Flush request on the request queue.
*/
-+void rt2x00usb_init_rxentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry);
-+void rt2x00usb_init_txentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry);
- int rt2x00usb_initialize(struct rt2x00_dev *rt2x00dev);
- void rt2x00usb_uninitialize(struct rt2x00_dev *rt2x00dev);
+-static void
+-dasd_flush_request_queue(struct dasd_device * device)
++static void dasd_flush_request_queue(struct dasd_block *block)
+ {
+ struct request *req;
-diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c
-index ecae968..ab52f22 100644
---- a/drivers/net/wireless/rt2x00/rt61pci.c
-+++ b/drivers/net/wireless/rt2x00/rt61pci.c
-@@ -24,11 +24,6 @@
- Supported chipsets: RT2561, RT2561s, RT2661.
- */
+- if (!device->request_queue)
++ if (!block->request_queue)
+ return;
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt61pci"
--
- #include <linux/delay.h>
- #include <linux/etherdevice.h>
- #include <linux/init.h>
-@@ -52,7 +47,7 @@
- * the access attempt is considered to have failed,
- * and we will print an error.
- */
--static u32 rt61pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
-+static u32 rt61pci_bbp_check(struct rt2x00_dev *rt2x00dev)
- {
- u32 reg;
- unsigned int i;
-@@ -67,7 +62,7 @@ static u32 rt61pci_bbp_check(const struct rt2x00_dev *rt2x00dev)
- return reg;
+- spin_lock_irq(&device->request_queue_lock);
+- while ((req = elv_next_request(device->request_queue))) {
++ spin_lock_irq(&block->request_queue_lock);
++ while ((req = elv_next_request(block->request_queue))) {
+ blkdev_dequeue_request(req);
+- dasd_end_request(req, 0);
++ dasd_end_request(req, -EIO);
+ }
+- spin_unlock_irq(&device->request_queue_lock);
++ spin_unlock_irq(&block->request_queue_lock);
}
--static void rt61pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_bbp_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u8 value)
+-static int
+-dasd_open(struct inode *inp, struct file *filp)
++static int dasd_open(struct inode *inp, struct file *filp)
{
- u32 reg;
-@@ -93,7 +88,7 @@ static void rt61pci_bbp_write(const struct rt2x00_dev *rt2x00dev,
- rt2x00pci_register_write(rt2x00dev, PHY_CSR3, reg);
- }
+ struct gendisk *disk = inp->i_bdev->bd_disk;
+- struct dasd_device *device = disk->private_data;
++ struct dasd_block *block = disk->private_data;
++ struct dasd_device *base = block->base;
+ int rc;
--static void rt61pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_bbp_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u8 *value)
- {
- u32 reg;
-@@ -130,7 +125,7 @@ static void rt61pci_bbp_read(const struct rt2x00_dev *rt2x00dev,
- *value = rt2x00_get_field32(reg, PHY_CSR3_VALUE);
+- atomic_inc(&device->open_count);
+- if (test_bit(DASD_FLAG_OFFLINE, &device->flags)) {
++ atomic_inc(&block->open_count);
++ if (test_bit(DASD_FLAG_OFFLINE, &base->flags)) {
+ rc = -ENODEV;
+ goto unlock;
+ }
+
+- if (!try_module_get(device->discipline->owner)) {
++ if (!try_module_get(base->discipline->owner)) {
+ rc = -EINVAL;
+ goto unlock;
+ }
+
+ if (dasd_probeonly) {
+- DEV_MESSAGE(KERN_INFO, device, "%s",
++ DEV_MESSAGE(KERN_INFO, base, "%s",
+ "No access to device due to probeonly mode");
+ rc = -EPERM;
+ goto out;
+ }
+
+- if (device->state <= DASD_STATE_BASIC) {
+- DBF_DEV_EVENT(DBF_ERR, device, " %s",
++ if (base->state <= DASD_STATE_BASIC) {
++ DBF_DEV_EVENT(DBF_ERR, base, " %s",
+ " Cannot open unrecognized device");
+ rc = -ENODEV;
+ goto out;
+@@ -1861,41 +2050,41 @@ dasd_open(struct inode *inp, struct file *filp)
+ return 0;
+
+ out:
+- module_put(device->discipline->owner);
++ module_put(base->discipline->owner);
+ unlock:
+- atomic_dec(&device->open_count);
++ atomic_dec(&block->open_count);
+ return rc;
}
--static void rt61pci_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u32 value)
+-static int
+-dasd_release(struct inode *inp, struct file *filp)
++static int dasd_release(struct inode *inp, struct file *filp)
{
- u32 reg;
-@@ -160,7 +155,7 @@ rf_write:
- rt2x00_rf_write(rt2x00dev, word, value);
+ struct gendisk *disk = inp->i_bdev->bd_disk;
+- struct dasd_device *device = disk->private_data;
++ struct dasd_block *block = disk->private_data;
+
+- atomic_dec(&device->open_count);
+- module_put(device->discipline->owner);
++ atomic_dec(&block->open_count);
++ module_put(block->base->discipline->owner);
+ return 0;
}
--static void rt61pci_mcu_request(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_mcu_request(struct rt2x00_dev *rt2x00dev,
- const u8 command, const u8 token,
- const u8 arg0, const u8 arg1)
+ /*
+ * Return disk geometry.
+ */
+-static int
+-dasd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
++static int dasd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
{
-@@ -220,13 +215,13 @@ static void rt61pci_eepromregister_write(struct eeprom_93cx6 *eeprom)
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
+- struct dasd_device *device;
++ struct dasd_block *block;
++ struct dasd_device *base;
--static void rt61pci_read_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_read_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
- {
- rt2x00pci_register_read(rt2x00dev, CSR_OFFSET(word), data);
+- device = bdev->bd_disk->private_data;
+- if (!device)
++ block = bdev->bd_disk->private_data;
++ base = block->base;
++ if (!block)
+ return -ENODEV;
+
+- if (!device->discipline ||
+- !device->discipline->fill_geometry)
++ if (!base->discipline ||
++ !base->discipline->fill_geometry)
+ return -EINVAL;
+
+- device->discipline->fill_geometry(device, geo);
+- geo->start = get_start_sect(bdev) >> device->s2b_shift;
++ base->discipline->fill_geometry(block, geo);
++ geo->start = get_start_sect(bdev) >> block->s2b_shift;
+ return 0;
}
--static void rt61pci_write_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt61pci_write_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
+@@ -1909,6 +2098,9 @@ dasd_device_operations = {
+ .getgeo = dasd_getgeo,
+ };
+
++/*******************************************************************************
++ * end of block device operations
++ */
+
+ static void
+ dasd_exit(void)
+@@ -1937,9 +2129,8 @@ dasd_exit(void)
+ * Initial attempt at a probe function. this can be simplified once
+ * the other detection code is gone.
+ */
+-int
+-dasd_generic_probe (struct ccw_device *cdev,
+- struct dasd_discipline *discipline)
++int dasd_generic_probe(struct ccw_device *cdev,
++ struct dasd_discipline *discipline)
{
- rt2x00pci_register_write(rt2x00dev, CSR_OFFSET(word), data);
-@@ -322,7 +317,8 @@ static void rt61pci_config_type(struct rt2x00_dev *rt2x00dev, const int type,
- */
- rt2x00pci_register_read(rt2x00dev, TXRX_CSR9, ®);
- rt2x00_set_field32(®, TXRX_CSR9_TSF_TICKING, 1);
-- rt2x00_set_field32(®, TXRX_CSR9_TBTT_ENABLE, 1);
-+ rt2x00_set_field32(®, TXRX_CSR9_TBTT_ENABLE,
-+ (tsf_sync == TSF_SYNC_BEACON));
- rt2x00_set_field32(®, TXRX_CSR9_BEACON_GEN, 0);
- rt2x00_set_field32(®, TXRX_CSR9_TSF_SYNC, tsf_sync);
- rt2x00pci_register_write(rt2x00dev, TXRX_CSR9, reg);
-@@ -411,8 +407,7 @@ static void rt61pci_config_txpower(struct rt2x00_dev *rt2x00dev,
+ int ret;
+
+@@ -1969,19 +2160,20 @@ dasd_generic_probe (struct ccw_device *cdev,
+ ret = ccw_device_set_online(cdev);
+ if (ret)
+ printk(KERN_WARNING
+- "dasd_generic_probe: could not initially online "
+- "ccw-device %s\n", cdev->dev.bus_id);
+- return ret;
++ "dasd_generic_probe: could not initially "
++ "online ccw-device %s; return code: %d\n",
++ cdev->dev.bus_id, ret);
++ return 0;
}
- static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx,
-- const int antenna_rx)
-+ struct antenna_setup *ant)
+ /*
+ * This will one day be called from a global not_oper handler.
+ * It is also used by driver_unregister during module unload.
+ */
+-void
+-dasd_generic_remove (struct ccw_device *cdev)
++void dasd_generic_remove(struct ccw_device *cdev)
{
- u8 r3;
- u8 r4;
-@@ -423,32 +418,39 @@ static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
- rt61pci_bbp_read(rt2x00dev, 77, &r77);
+ struct dasd_device *device;
++ struct dasd_block *block;
- rt2x00_set_field8(&r3, BBP_R3_SMART_MODE,
-- !rt2x00_rf(&rt2x00dev->chip, RF5225));
-+ rt2x00_rf(&rt2x00dev->chip, RF5325));
+ cdev->handler = NULL;
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
+@@ -2001,7 +2193,15 @@ dasd_generic_remove (struct ccw_device *cdev)
+ */
+ dasd_set_target_state(device, DASD_STATE_NEW);
+ /* dasd_delete_device destroys the device reference. */
++ block = device->block;
++ device->block = NULL;
+ dasd_delete_device(device);
+ /*
-+ * Configure the RX antenna.
++ * life cycle of block is bound to device, so delete it after
++ * device was safely removed
+ */
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
-- !!(rt2x00dev->curr_hwmode != HWMODE_A));
-+ (rt2x00dev->curr_hwmode != HWMODE_A));
- break;
- case ANTENNA_A:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
--
- if (rt2x00dev->curr_hwmode == HWMODE_A)
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
- else
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
++ if (block)
++ dasd_free_block(block);
+ }
+
+ /*
+@@ -2009,10 +2209,8 @@ dasd_generic_remove (struct ccw_device *cdev)
+ * the device is detected for the first time and is supposed to be used
+ * or the user has started activation through sysfs.
+ */
+-int
+-dasd_generic_set_online (struct ccw_device *cdev,
+- struct dasd_discipline *base_discipline)
-
- if (rt2x00dev->curr_hwmode == HWMODE_A)
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
- else
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
- break;
- }
++int dasd_generic_set_online(struct ccw_device *cdev,
++ struct dasd_discipline *base_discipline)
+ {
+ struct dasd_discipline *discipline;
+ struct dasd_device *device;
+@@ -2048,6 +2246,7 @@ dasd_generic_set_online (struct ccw_device *cdev,
+ device->base_discipline = base_discipline;
+ device->discipline = discipline;
-@@ -458,8 +460,7 @@ static void rt61pci_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
++ /* check_device will allocate block device if necessary */
+ rc = discipline->check_device(device);
+ if (rc) {
+ printk (KERN_WARNING
+@@ -2067,6 +2266,8 @@ dasd_generic_set_online (struct ccw_device *cdev,
+ cdev->dev.bus_id);
+ rc = -ENODEV;
+ dasd_set_target_state(device, DASD_STATE_NEW);
++ if (device->block)
++ dasd_free_block(device->block);
+ dasd_delete_device(device);
+ } else
+ pr_debug("dasd_generic device %s found\n",
+@@ -2081,10 +2282,10 @@ dasd_generic_set_online (struct ccw_device *cdev,
+ return rc;
}
- static void rt61pci_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx,
-- const int antenna_rx)
-+ struct antenna_setup *ant)
+-int
+-dasd_generic_set_offline (struct ccw_device *cdev)
++int dasd_generic_set_offline(struct ccw_device *cdev)
{
- u8 r3;
- u8 r4;
-@@ -470,22 +471,31 @@ static void rt61pci_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
- rt61pci_bbp_read(rt2x00dev, 77, &r77);
-
- rt2x00_set_field8(&r3, BBP_R3_SMART_MODE,
-- !rt2x00_rf(&rt2x00dev->chip, RF2527));
-+ rt2x00_rf(&rt2x00dev->chip, RF2529));
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
- !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags));
+ struct dasd_device *device;
++ struct dasd_block *block;
+ int max_count, open_count;
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
+ device = dasd_device_from_cdev(cdev);
+@@ -2101,30 +2302,39 @@ dasd_generic_set_offline (struct ccw_device *cdev)
+ * the blkdev_get in dasd_scan_partitions. We are only interested
+ * in the other openers.
+ */
+- max_count = device->bdev ? 0 : -1;
+- open_count = (int) atomic_read(&device->open_count);
+- if (open_count > max_count) {
+- if (open_count > 0)
+- printk (KERN_WARNING "Can't offline dasd device with "
+- "open count = %i.\n",
+- open_count);
+- else
+- printk (KERN_WARNING "%s",
+- "Can't offline dasd device due to internal "
+- "use\n");
+- clear_bit(DASD_FLAG_OFFLINE, &device->flags);
+- dasd_put_device(device);
+- return -EBUSY;
++ if (device->block) {
++ struct dasd_block *block = device->block;
++ max_count = block->bdev ? 0 : -1;
++ open_count = (int) atomic_read(&block->open_count);
++ if (open_count > max_count) {
++ if (open_count > 0)
++ printk(KERN_WARNING "Can't offline dasd "
++ "device with open count = %i.\n",
++ open_count);
++ else
++ printk(KERN_WARNING "%s",
++ "Can't offline dasd device due "
++ "to internal use\n");
++ clear_bit(DASD_FLAG_OFFLINE, &device->flags);
++ dasd_put_device(device);
++ return -EBUSY;
++ }
+ }
+ dasd_set_target_state(device, DASD_STATE_NEW);
+ /* dasd_delete_device destroys the device reference. */
++ block = device->block;
++ device->block = NULL;
+ dasd_delete_device(device);
+-
+ /*
-+ * Configure the RX antenna.
++ * life cycle of block is bound to device, so delete it after
++ * device was safely removed
+ */
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
- break;
- case ANTENNA_A:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
++ if (block)
++ dasd_free_block(block);
+ return 0;
+ }
+
+-int
+-dasd_generic_notify(struct ccw_device *cdev, int event)
++int dasd_generic_notify(struct ccw_device *cdev, int event)
+ {
+ struct dasd_device *device;
+ struct dasd_ccw_req *cqr;
+@@ -2145,27 +2355,22 @@ dasd_generic_notify(struct ccw_device *cdev, int event)
+ if (device->state < DASD_STATE_BASIC)
+ break;
+ /* Device is active. We want to keep it. */
+- if (test_bit(DASD_FLAG_DSC_ERROR, &device->flags)) {
+- list_for_each_entry(cqr, &device->ccw_queue, list)
+- if (cqr->status == DASD_CQR_IN_IO)
+- cqr->status = DASD_CQR_FAILED;
+- device->stopped |= DASD_STOPPED_DC_EIO;
+- } else {
+- list_for_each_entry(cqr, &device->ccw_queue, list)
+- if (cqr->status == DASD_CQR_IN_IO) {
+- cqr->status = DASD_CQR_QUEUED;
+- cqr->retries++;
+- }
+- device->stopped |= DASD_STOPPED_DC_WAIT;
+- dasd_set_timer(device, 0);
+- }
+- dasd_schedule_bh(device);
++ list_for_each_entry(cqr, &device->ccw_queue, devlist)
++ if (cqr->status == DASD_CQR_IN_IO) {
++ cqr->status = DASD_CQR_QUEUED;
++ cqr->retries++;
++ }
++ device->stopped |= DASD_STOPPED_DC_WAIT;
++ dasd_device_clear_timer(device);
++ dasd_schedule_device_bh(device);
+ ret = 1;
break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
+ case CIO_OPER:
+ /* FIXME: add a sanity check. */
+- device->stopped &= ~(DASD_STOPPED_DC_WAIT|DASD_STOPPED_DC_EIO);
+- dasd_schedule_bh(device);
++ device->stopped &= ~DASD_STOPPED_DC_WAIT;
++ dasd_schedule_device_bh(device);
++ if (device->block)
++ dasd_schedule_block_bh(device->block);
+ ret = 1;
break;
}
+@@ -2195,7 +2400,8 @@ static struct dasd_ccw_req *dasd_generic_build_rdc(struct dasd_device *device,
+ ccw->cda = (__u32)(addr_t)rdc_buffer;
+ ccw->count = rdc_buffer_size;
-@@ -501,23 +511,18 @@ static void rt61pci_config_antenna_2529_rx(struct rt2x00_dev *rt2x00dev,
-
- rt2x00pci_register_read(rt2x00dev, MAC_CSR13, &reg);
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ cqr->expires = 10*HZ;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+ cqr->retries = 2;
+@@ -2217,13 +2423,12 @@ int dasd_generic_read_dev_chars(struct dasd_device *device, char *magic,
+ return PTR_ERR(cqr);
-- if (p1 != 0xff) {
-- rt2x00_set_field32(&reg, MAC_CSR13_BIT4, !!p1);
-- rt2x00_set_field32(&reg, MAC_CSR13_BIT12, 0);
-- rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
-- }
-- if (p2 != 0xff) {
-- rt2x00_set_field32(&reg, MAC_CSR13_BIT3, !p2);
-- rt2x00_set_field32(&reg, MAC_CSR13_BIT11, 0);
-- rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
-- }
-+ rt2x00_set_field32(&reg, MAC_CSR13_BIT4, p1);
-+ rt2x00_set_field32(&reg, MAC_CSR13_BIT12, 0);
-+
-+ rt2x00_set_field32(&reg, MAC_CSR13_BIT3, !p2);
-+ rt2x00_set_field32(&reg, MAC_CSR13_BIT11, 0);
-+
-+ rt2x00pci_register_write(rt2x00dev, MAC_CSR13, reg);
+ ret = dasd_sleep_on(cqr);
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return ret;
}
+ EXPORT_SYMBOL_GPL(dasd_generic_read_dev_chars);
- static void rt61pci_config_antenna_2529(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx,
-- const int antenna_rx)
-+ struct antenna_setup *ant)
+-static int __init
+-dasd_init(void)
++static int __init dasd_init(void)
{
-- u16 eeprom;
- u8 r3;
- u8 r4;
- u8 r77;
-@@ -525,70 +530,36 @@ static void rt61pci_config_antenna_2529(struct rt2x00_dev *rt2x00dev,
- rt61pci_bbp_read(rt2x00dev, 3, &r3);
- rt61pci_bbp_read(rt2x00dev, 4, &r4);
- rt61pci_bbp_read(rt2x00dev, 77, &r77);
-- rt2x00_eeprom_read(rt2x00dev, EEPROM_NIC, &eeprom);
+ int rc;
-- rt2x00_set_field8(&r3, BBP_R3_SMART_MODE, 0);
+@@ -2231,7 +2436,7 @@ dasd_init(void)
+ init_waitqueue_head(&dasd_flush_wq);
+
+ /* register 'common' DASD debug area, used for all DBF_XXX calls */
+- dasd_debug_area = debug_register("dasd", 1, 2, 8 * sizeof (long));
++ dasd_debug_area = debug_register("dasd", 1, 1, 8 * sizeof(long));
+ if (dasd_debug_area == NULL) {
+ rc = -ENOMEM;
+ goto failed;
+@@ -2277,15 +2482,18 @@ EXPORT_SYMBOL(dasd_diag_discipline_pointer);
+ EXPORT_SYMBOL(dasd_add_request_head);
+ EXPORT_SYMBOL(dasd_add_request_tail);
+ EXPORT_SYMBOL(dasd_cancel_req);
+-EXPORT_SYMBOL(dasd_clear_timer);
++EXPORT_SYMBOL(dasd_device_clear_timer);
++EXPORT_SYMBOL(dasd_block_clear_timer);
+ EXPORT_SYMBOL(dasd_enable_device);
+ EXPORT_SYMBOL(dasd_int_handler);
+ EXPORT_SYMBOL(dasd_kfree_request);
+ EXPORT_SYMBOL(dasd_kick_device);
+ EXPORT_SYMBOL(dasd_kmalloc_request);
+-EXPORT_SYMBOL(dasd_schedule_bh);
++EXPORT_SYMBOL(dasd_schedule_device_bh);
++EXPORT_SYMBOL(dasd_schedule_block_bh);
+ EXPORT_SYMBOL(dasd_set_target_state);
+-EXPORT_SYMBOL(dasd_set_timer);
++EXPORT_SYMBOL(dasd_device_set_timer);
++EXPORT_SYMBOL(dasd_block_set_timer);
+ EXPORT_SYMBOL(dasd_sfree_request);
+ EXPORT_SYMBOL(dasd_sleep_on);
+ EXPORT_SYMBOL(dasd_sleep_on_immediatly);
+@@ -2299,4 +2507,7 @@ EXPORT_SYMBOL_GPL(dasd_generic_remove);
+ EXPORT_SYMBOL_GPL(dasd_generic_notify);
+ EXPORT_SYMBOL_GPL(dasd_generic_set_online);
+ EXPORT_SYMBOL_GPL(dasd_generic_set_offline);
-
-- if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
-- rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 1);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
-- } else if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY)) {
-- if (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED) >= 2) {
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-- rt61pci_bbp_write(rt2x00dev, 77, r77);
-- }
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
-- } else if (!rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
-- rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
++EXPORT_SYMBOL_GPL(dasd_generic_handle_state_change);
++EXPORT_SYMBOL_GPL(dasd_flush_device_queue);
++EXPORT_SYMBOL_GPL(dasd_alloc_block);
++EXPORT_SYMBOL_GPL(dasd_free_block);
+diff --git a/drivers/s390/block/dasd_3370_erp.c b/drivers/s390/block/dasd_3370_erp.c
+deleted file mode 100644
+index 1ddab89..0000000
+--- a/drivers/s390/block/dasd_3370_erp.c
++++ /dev/null
+@@ -1,84 +0,0 @@
+-/*
+- * File...........: linux/drivers/s390/block/dasd_3370_erp.c
+- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
+- * Bugreports.to..: <Linux390 at de.ibm.com>
+- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+- *
+- */
-
-- switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
-- case 0:
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
-- break;
-- case 1:
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 0);
-- break;
-- case 2:
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
-- break;
-- case 3:
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
-- break;
+-#define PRINTK_HEADER "dasd_erp(3370)"
+-
+-#include "dasd_int.h"
+-
+-
+-/*
+- * DASD_3370_ERP_EXAMINE
+- *
+- * DESCRIPTION
+- * Checks only for fatal/no/recover error.
+- * A detailed examination of the sense data is done later outside
+- * the interrupt handler.
+- *
+- * The logic is based on the 'IBM 3880 Storage Control Reference' manual
+- * 'Chapter 7. 3370 Sense Data'.
+- *
+- * RETURN VALUES
+- * dasd_era_none no error
+- * dasd_era_fatal for all fatal (unrecoverable errors)
+- * dasd_era_recover for all others.
+- */
+-dasd_era_t
+-dasd_3370_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
+-{
+- char *sense = irb->ecw;
+-
+- /* check for successful execution first */
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+- if (sense[0] & 0x80) { /* CMD reject */
+- return dasd_era_fatal;
+- }
+- if (sense[0] & 0x40) { /* Drive offline */
+- return dasd_era_recover;
+- }
+- if (sense[0] & 0x20) { /* Bus out parity */
+- return dasd_era_recover;
+- }
+- if (sense[0] & 0x10) { /* equipment check */
+- if (sense[1] & 0x80) {
+- return dasd_era_fatal;
- }
-- } else if (!rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY) &&
-- !rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY)) {
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
-+ /* FIXME: Antenna selection for the rf 2529 is very confusing in the
-+ * legacy driver. The code below should be ok for non-diversity setups.
-+ */
-
-- switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
-- case 0:
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-- rt61pci_bbp_write(rt2x00dev, 77, r77);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 1);
-- break;
-- case 1:
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-- rt61pci_bbp_write(rt2x00dev, 77, r77);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 0);
-- break;
-- case 2:
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-- rt61pci_bbp_write(rt2x00dev, 77, r77);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
-- break;
-- case 3:
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-- rt61pci_bbp_write(rt2x00dev, 77, r77);
-- rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
-- break;
+- return dasd_era_recover;
+- }
+- if (sense[0] & 0x08) { /* data check */
+- if (sense[1] & 0x80) {
+- return dasd_era_fatal;
- }
-+ /*
-+ * Configure the RX antenna.
-+ */
-+ switch (ant->rx) {
-+ case ANTENNA_A:
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
-+ rt61pci_config_antenna_2529_rx(rt2x00dev, 0, 0);
-+ break;
-+ case ANTENNA_SW_DIVERSITY:
-+ case ANTENNA_HW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
-+ case ANTENNA_B:
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
-+ rt61pci_config_antenna_2529_rx(rt2x00dev, 1, 1);
-+ break;
- }
-
-+ rt61pci_bbp_write(rt2x00dev, 77, r77);
- rt61pci_bbp_write(rt2x00dev, 3, r3);
- rt61pci_bbp_write(rt2x00dev, 4, r4);
- }
-@@ -625,46 +596,44 @@ static const struct antenna_sel antenna_sel_bg[] = {
- };
-
- static void rt61pci_config_antenna(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx, const int antenna_rx)
-+ struct antenna_setup *ant)
- {
- const struct antenna_sel *sel;
- unsigned int lna;
- unsigned int i;
- u32 reg;
+- return dasd_era_recover;
+- }
+- if (sense[0] & 0x04) { /* overrun */
+- if (sense[1] & 0x80) {
+- return dasd_era_fatal;
+- }
+- return dasd_era_recover;
+- }
+- if (sense[1] & 0x40) { /* invalid blocksize */
+- return dasd_era_fatal;
+- }
+- if (sense[1] & 0x04) { /* file protected */
+- return dasd_era_recover;
+- }
+- if (sense[1] & 0x01) { /* operation incomplete */
+- return dasd_era_recover;
+- }
+- if (sense[2] & 0x80) { /* check data erroor */
+- return dasd_era_recover;
+- }
+- if (sense[2] & 0x10) { /* Env. data present */
+- return dasd_era_recover;
+- }
+- /* examine the 24 byte sense data */
+- return dasd_era_recover;
+-
+-} /* END dasd_3370_erp_examine */
+diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c
+index 5b7385e..c361ab6 100644
+--- a/drivers/s390/block/dasd_3990_erp.c
++++ b/drivers/s390/block/dasd_3990_erp.c
+@@ -26,158 +26,6 @@ struct DCTL_data {
-- rt2x00pci_register_read(rt2x00dev, PHY_CSR0, &reg);
+ /*
+ *****************************************************************************
+- * SECTION ERP EXAMINATION
+- *****************************************************************************
+- */
-
- if (rt2x00dev->curr_hwmode == HWMODE_A) {
- sel = antenna_sel_a;
- lna = test_bit(CONFIG_EXTERNAL_LNA_A, &rt2x00dev->flags);
+-/*
+- * DASD_3990_ERP_EXAMINE_24
+- *
+- * DESCRIPTION
+- * Checks only for fatal (unrecoverable) error.
+- * A detailed examination of the sense data is done later outside
+- * the interrupt handler.
+- *
+- * Each bit configuration leading to an action code 2 (Exit with
+- * programming error or unusual condition indication)
+- * are handled as fatal errors.
+- *
+- * All other configurations are handled as recoverable errors.
+- *
+- * RETURN VALUES
+- * dasd_era_fatal for all fatal (unrecoverable errors)
+- * dasd_era_recover for all others.
+- */
+-static dasd_era_t
+-dasd_3990_erp_examine_24(struct dasd_ccw_req * cqr, char *sense)
+-{
-
-- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 0);
-- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 1);
- } else {
- sel = antenna_sel_bg;
- lna = test_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
+- struct dasd_device *device = cqr->device;
-
-- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 1);
-- rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 0);
- }
-
- for (i = 0; i < ARRAY_SIZE(antenna_sel_a); i++)
- rt61pci_bbp_write(rt2x00dev, sel[i].word, sel[i].value[lna]);
-
-+ rt2x00pci_register_read(rt2x00dev, PHY_CSR0, &reg);
-+
-+ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG,
-+ (rt2x00dev->curr_hwmode == HWMODE_B ||
-+ rt2x00dev->curr_hwmode == HWMODE_G));
-+ rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A,
-+ (rt2x00dev->curr_hwmode == HWMODE_A));
-+
- rt2x00pci_register_write(rt2x00dev, PHY_CSR0, reg);
+- /* check for 'Command Reject' */
+- if ((sense[0] & SNS0_CMD_REJECT) &&
+- (!(sense[2] & SNS2_ENV_DATA_PRESENT))) {
+-
+- DEV_MESSAGE(KERN_ERR, device, "%s",
+- "EXAMINE 24: Command Reject detected - "
+- "fatal error");
+-
+- return dasd_era_fatal;
+- }
+-
+- /* check for 'Invalid Track Format' */
+- if ((sense[1] & SNS1_INV_TRACK_FORMAT) &&
+- (!(sense[2] & SNS2_ENV_DATA_PRESENT))) {
+-
+- DEV_MESSAGE(KERN_ERR, device, "%s",
+- "EXAMINE 24: Invalid Track Format detected "
+- "- fatal error");
+-
+- return dasd_era_fatal;
+- }
+-
+- /* check for 'No Record Found' */
+- if (sense[1] & SNS1_NO_REC_FOUND) {
+-
+- /* FIXME: fatal error ?!? */
+- DEV_MESSAGE(KERN_ERR, device,
+- "EXAMINE 24: No Record Found detected %s",
+- device->state <= DASD_STATE_BASIC ?
+- " " : "- fatal error");
+-
+- return dasd_era_fatal;
+- }
+-
+- /* return recoverable for all others */
+- return dasd_era_recover;
+-} /* END dasd_3990_erp_examine_24 */
+-
+-/*
+- * DASD_3990_ERP_EXAMINE_32
+- *
+- * DESCRIPTION
+- * Checks only for fatal/no/recoverable error.
+- * A detailed examination of the sense data is done later outside
+- * the interrupt handler.
+- *
+- * RETURN VALUES
+- * dasd_era_none no error
+- * dasd_era_fatal for all fatal (unrecoverable errors)
+- * dasd_era_recover for recoverable others.
+- */
+-static dasd_era_t
+-dasd_3990_erp_examine_32(struct dasd_ccw_req * cqr, char *sense)
+-{
+-
+- struct dasd_device *device = cqr->device;
+-
+- switch (sense[25]) {
+- case 0x00:
+- return dasd_era_none;
+-
+- case 0x01:
+- DEV_MESSAGE(KERN_ERR, device, "%s", "EXAMINE 32: fatal error");
+-
+- return dasd_era_fatal;
+-
+- default:
+-
+- return dasd_era_recover;
+- }
+-
+-} /* end dasd_3990_erp_examine_32 */
+-
+-/*
+- * DASD_3990_ERP_EXAMINE
+- *
+- * DESCRIPTION
+- * Checks only for fatal/no/recover error.
+- * A detailed examination of the sense data is done later outside
+- * the interrupt handler.
+- *
+- * The logic is based on the 'IBM 3990 Storage Control Reference' manual
+- * 'Chapter 7. Error Recovery Procedures'.
+- *
+- * RETURN VALUES
+- * dasd_era_none no error
+- * dasd_era_fatal for all fatal (unrecoverable errors)
+- * dasd_era_recover for all others.
+- */
+-dasd_era_t
+-dasd_3990_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
+-{
+-
+- char *sense = irb->ecw;
+- dasd_era_t era = dasd_era_recover;
+- struct dasd_device *device = cqr->device;
+-
+- /* check for successful execution first */
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+-
+- /* distinguish between 24 and 32 byte sense data */
+- if (sense[27] & DASD_SENSE_BIT_0) {
+-
+- era = dasd_3990_erp_examine_24(cqr, sense);
+-
+- } else {
+-
+- era = dasd_3990_erp_examine_32(cqr, sense);
+-
+- }
+-
+- /* log the erp chain if fatal error occurred */
+- if ((era == dasd_era_fatal) && (device->state >= DASD_STATE_READY)) {
+- dasd_log_sense(cqr, irb);
+- }
+-
+- return era;
+-
+-} /* END dasd_3990_erp_examine */
+-
+-/*
+- *****************************************************************************
+ * SECTION ERP HANDLING
+ *****************************************************************************
+ */
+@@ -206,7 +54,7 @@ dasd_3990_erp_cleanup(struct dasd_ccw_req * erp, char final_status)
+ {
+ struct dasd_ccw_req *cqr = erp->refers;
- if (rt2x00_rf(&rt2x00dev->chip, RF5225) ||
- rt2x00_rf(&rt2x00dev->chip, RF5325))
-- rt61pci_config_antenna_5x(rt2x00dev, antenna_tx, antenna_rx);
-+ rt61pci_config_antenna_5x(rt2x00dev, ant);
- else if (rt2x00_rf(&rt2x00dev->chip, RF2527))
-- rt61pci_config_antenna_2x(rt2x00dev, antenna_tx, antenna_rx);
-+ rt61pci_config_antenna_2x(rt2x00dev, ant);
- else if (rt2x00_rf(&rt2x00dev->chip, RF2529)) {
- if (test_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags))
-- rt61pci_config_antenna_2x(rt2x00dev, antenna_tx,
-- antenna_rx);
-+ rt61pci_config_antenna_2x(rt2x00dev, ant);
- else
-- rt61pci_config_antenna_2529(rt2x00dev, antenna_tx,
-- antenna_rx);
-+ rt61pci_config_antenna_2529(rt2x00dev, ant);
- }
- }
+- dasd_free_erp_request(erp, erp->device);
++ dasd_free_erp_request(erp, erp->memdev);
+ cqr->status = final_status;
+ return cqr;
-@@ -709,8 +678,7 @@ static void rt61pci_config(struct rt2x00_dev *rt2x00dev,
- if ((flags & CONFIG_UPDATE_TXPOWER) && !(flags & CONFIG_UPDATE_CHANNEL))
- rt61pci_config_txpower(rt2x00dev, libconf->conf->power_level);
- if (flags & CONFIG_UPDATE_ANTENNA)
-- rt61pci_config_antenna(rt2x00dev, libconf->conf->antenna_sel_tx,
-- libconf->conf->antenna_sel_rx);
-+ rt61pci_config_antenna(rt2x00dev, &libconf->ant);
- if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
- rt61pci_config_duration(rt2x00dev, libconf);
- }
-@@ -721,7 +689,6 @@ static void rt61pci_config(struct rt2x00_dev *rt2x00dev,
- static void rt61pci_enable_led(struct rt2x00_dev *rt2x00dev)
+@@ -224,15 +72,17 @@ static void
+ dasd_3990_erp_block_queue(struct dasd_ccw_req * erp, int expires)
{
- u32 reg;
-- u16 led_reg;
- u8 arg0;
- u8 arg1;
-
-@@ -730,15 +697,14 @@ static void rt61pci_enable_led(struct rt2x00_dev *rt2x00dev)
- rt2x00_set_field32(&reg, MAC_CSR14_OFF_PERIOD, 30);
- rt2x00pci_register_write(rt2x00dev, MAC_CSR14, reg);
-- led_reg = rt2x00dev->led_reg;
-- rt2x00_set_field16(&led_reg, MCU_LEDCS_RADIO_STATUS, 1);
-- if (rt2x00dev->rx_status.phymode == MODE_IEEE80211A)
-- rt2x00_set_field16(&led_reg, MCU_LEDCS_LINK_A_STATUS, 1);
-- else
-- rt2x00_set_field16(&led_reg, MCU_LEDCS_LINK_BG_STATUS, 1);
-+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_RADIO_STATUS, 1);
-+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_A_STATUS,
-+ (rt2x00dev->rx_status.phymode == MODE_IEEE80211A));
-+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_BG_STATUS,
-+ (rt2x00dev->rx_status.phymode != MODE_IEEE80211A));
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
++ unsigned long flags;
-- arg0 = led_reg & 0xff;
-- arg1 = (led_reg >> 8) & 0xff;
-+ arg0 = rt2x00dev->led_reg & 0xff;
-+ arg1 = (rt2x00dev->led_reg >> 8) & 0xff;
+ DEV_MESSAGE(KERN_INFO, device,
+ "blocking request queue for %is", expires/HZ);
- rt61pci_mcu_request(rt2x00dev, MCU_LED, 0xff, arg0, arg1);
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+ device->stopped |= DASD_STOPPED_PENDING;
+- erp->status = DASD_CQR_QUEUED;
+-
+- dasd_set_timer(device, expires);
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ erp->status = DASD_CQR_FILLED;
++ dasd_block_set_timer(device->block, expires);
}
-@@ -792,7 +758,8 @@ static void rt61pci_activity_led(struct rt2x00_dev *rt2x00dev, int rssi)
+
/*
- * Link tuning
- */
--static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev)
-+static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual)
+@@ -251,7 +101,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_int_req(struct dasd_ccw_req * erp)
{
- u32 reg;
-
-@@ -800,14 +767,13 @@ static void rt61pci_link_stats(struct rt2x00_dev *rt2x00dev)
- * Update FCS error count from register.
- */
- rt2x00pci_register_read(rt2x00dev, STA_CSR0, &reg);
-- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
-+ qual->rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
-
- /*
- * Update False CCA count from register.
- */
- rt2x00pci_register_read(rt2x00dev, STA_CSR1, &reg);
-- rt2x00dev->link.false_cca =
-- rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
-+ qual->false_cca = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
- }
- static void rt61pci_reset_tuner(struct rt2x00_dev *rt2x00dev)
-@@ -904,11 +870,11 @@ static void rt61pci_link_tuner(struct rt2x00_dev *rt2x00dev)
- * r17 does not yet exceed upper limit, continue and base
- * the r17 tuning on the false CCA count.
- */
-- if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
-+ if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
- if (++r17 > up_bound)
- r17 = up_bound;
- rt61pci_bbp_write(rt2x00dev, 17, r17);
-- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
-+ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
- if (--r17 < low_bound)
- r17 = low_bound;
- rt61pci_bbp_write(rt2x00dev, 17, r17);
-@@ -1023,64 +989,46 @@ static int rt61pci_load_firmware(struct rt2x00_dev *rt2x00dev, void *data,
- return 0;
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
--static void rt61pci_init_rxring(struct rt2x00_dev *rt2x00dev)
-+static void rt61pci_init_rxentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+ /* first time set initial retry counter and erp_function */
+ /* and retry once without blocking queue */
+@@ -292,11 +142,14 @@ dasd_3990_erp_int_req(struct dasd_ccw_req * erp)
+ static void
+ dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
{
-- struct data_ring *ring = rt2x00dev->rx;
-- struct data_desc *rxd;
-- unsigned int i;
-+ __le32 *rxd = entry->priv;
- u32 word;
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+ __u8 opm;
++ unsigned long flags;
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- rxd = ring->entry[i].priv;
--
-- rt2x00_desc_read(rxd, 5, &word);
-- rt2x00_set_field32(&word, RXD_W5_BUFFER_PHYSICAL_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(rxd, 5, word);
-+ rt2x00_desc_read(rxd, 5, &word);
-+ rt2x00_set_field32(&word, RXD_W5_BUFFER_PHYSICAL_ADDRESS,
-+ entry->data_dma);
-+ rt2x00_desc_write(rxd, 5, word);
+ /* try alternate valid path */
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+ opm = ccw_device_get_path_mask(device->cdev);
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+ //FIXME: start with get_opm ?
+ if (erp->lpm == 0)
+ erp->lpm = LPM_ANYPATH & ~(erp->irb.esw.esw0.sublog.lpum);
+@@ -309,9 +162,8 @@ dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
+ "try alternate lpm=%x (lpum=%x / opm=%x)",
+ erp->lpm, erp->irb.esw.esw0.sublog.lpum, opm);
-- rt2x00_desc_read(rxd, 0, &word);
-- rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-- rt2x00_desc_write(rxd, 0, word);
-- }
--
-- rt2x00_ring_index_clear(rt2x00dev->rx);
-+ rt2x00_desc_read(rxd, 0, &word);
-+ rt2x00_set_field32(&word, RXD_W0_OWNER_NIC, 1);
-+ rt2x00_desc_write(rxd, 0, word);
- }
+- /* reset status to queued to handle the request again... */
+- if (erp->status > DASD_CQR_QUEUED)
+- erp->status = DASD_CQR_QUEUED;
++ /* reset status to submit the request again... */
++ erp->status = DASD_CQR_FILLED;
+ erp->retries = 1;
+ } else {
+ DEV_MESSAGE(KERN_ERR, device,
+@@ -320,8 +172,7 @@ dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
+ erp->irb.esw.esw0.sublog.lpum, opm);
--static void rt61pci_init_txring(struct rt2x00_dev *rt2x00dev, const int queue)
-+static void rt61pci_init_txentry(struct rt2x00_dev *rt2x00dev,
-+ struct data_entry *entry)
+ /* post request with permanent error */
+- if (erp->status > DASD_CQR_QUEUED)
+- erp->status = DASD_CQR_FAILED;
++ erp->status = DASD_CQR_FAILED;
+ }
+ } /* end dasd_3990_erp_alternate_path */
+
+@@ -344,14 +195,14 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_DCTL(struct dasd_ccw_req * erp, char modifier)
{
-- struct data_ring *ring = rt2x00lib_get_ring(rt2x00dev, queue);
-- struct data_desc *txd;
-- unsigned int i;
-+ __le32 *txd = entry->priv;
- u32 word;
-- memset(ring->data_addr, 0x00, rt2x00_get_ring_size(ring));
--
-- for (i = 0; i < ring->stats.limit; i++) {
-- txd = ring->entry[i].priv;
--
-- rt2x00_desc_read(txd, 1, &word);
-- rt2x00_set_field32(&word, TXD_W1_BUFFER_COUNT, 1);
-- rt2x00_desc_write(txd, 1, word);
--
-- rt2x00_desc_read(txd, 5, &word);
-- rt2x00_set_field32(&word, TXD_W5_PID_TYPE, queue);
-- rt2x00_set_field32(&word, TXD_W5_PID_SUBTYPE, i);
-- rt2x00_desc_write(txd, 5, word);
-+ rt2x00_desc_read(txd, 1, &word);
-+ rt2x00_set_field32(&word, TXD_W1_BUFFER_COUNT, 1);
-+ rt2x00_desc_write(txd, 1, word);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+ struct DCTL_data *DCTL_data;
+ struct ccw1 *ccw;
+ struct dasd_ccw_req *dctl_cqr;
-- rt2x00_desc_read(txd, 6, &word);
-- rt2x00_set_field32(&word, TXD_W6_BUFFER_PHYSICAL_ADDRESS,
-- ring->entry[i].data_dma);
-- rt2x00_desc_write(txd, 6, word);
-+ rt2x00_desc_read(txd, 5, &word);
-+ rt2x00_set_field32(&word, TXD_W5_PID_TYPE, entry->ring->queue_idx);
-+ rt2x00_set_field32(&word, TXD_W5_PID_SUBTYPE, entry->entry_idx);
-+ rt2x00_desc_write(txd, 5, word);
+ dctl_cqr = dasd_alloc_erp_request((char *) &erp->magic, 1,
+- sizeof (struct DCTL_data),
+- erp->device);
++ sizeof(struct DCTL_data),
++ device);
+ if (IS_ERR(dctl_cqr)) {
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+ "Unable to allocate DCTL-CQR");
+@@ -365,13 +216,14 @@ dasd_3990_erp_DCTL(struct dasd_ccw_req * erp, char modifier)
+ DCTL_data->modifier = modifier;
-- rt2x00_desc_read(txd, 0, &word);
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-- rt2x00_desc_write(txd, 0, word);
-- }
-+ rt2x00_desc_read(txd, 6, &word);
-+ rt2x00_set_field32(&word, TXD_W6_BUFFER_PHYSICAL_ADDRESS,
-+ entry->data_dma);
-+ rt2x00_desc_write(txd, 6, word);
+ ccw = dctl_cqr->cpaddr;
+- memset(ccw, 0, sizeof (struct ccw1));
++ memset(ccw, 0, sizeof(struct ccw1));
+ ccw->cmd_code = CCW_CMD_DCTL;
+ ccw->count = 4;
+ ccw->cda = (__u32)(addr_t) DCTL_data;
+ dctl_cqr->function = dasd_3990_erp_DCTL;
+ dctl_cqr->refers = erp;
+- dctl_cqr->device = erp->device;
++ dctl_cqr->startdev = device;
++ dctl_cqr->memdev = device;
+ dctl_cqr->magic = erp->magic;
+ dctl_cqr->expires = 5 * 60 * HZ;
+ dctl_cqr->retries = 2;
+@@ -435,7 +287,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_action_4(struct dasd_ccw_req * erp, char *sense)
+ {
-- rt2x00_ring_index_clear(ring);
-+ rt2x00_desc_read(txd, 0, &word);
-+ rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-+ rt2x00_set_field32(&word, TXD_W0_OWNER_NIC, 0);
-+ rt2x00_desc_write(txd, 0, word);
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- static int rt61pci_init_rings(struct rt2x00_dev *rt2x00dev)
-@@ -1088,16 +1036,6 @@ static int rt61pci_init_rings(struct rt2x00_dev *rt2x00dev)
- u32 reg;
+ /* first time set initial retry counter and erp_function */
+ /* and retry once without waiting for state change pending */
+@@ -472,7 +324,7 @@ dasd_3990_erp_action_4(struct dasd_ccw_req * erp, char *sense)
+ "redriving request immediately, "
+ "%d retries left",
+ erp->retries);
+- erp->status = DASD_CQR_QUEUED;
++ erp->status = DASD_CQR_FILLED;
+ }
+ }
- /*
-- * Initialize rings.
-- */
-- rt61pci_init_rxring(rt2x00dev);
-- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA0);
-- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA1);
-- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA2);
-- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA3);
-- rt61pci_init_txring(rt2x00dev, IEEE80211_TX_QUEUE_DATA4);
--
-- /*
- * Initialize registers.
- */
- rt2x00pci_register_read(rt2x00dev, TX_RING_CSR0, &reg);
-@@ -1565,12 +1503,12 @@ static int rt61pci_set_device_state(struct rt2x00_dev *rt2x00dev,
- * TX descriptor initialization
- */
- static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
-- struct ieee80211_tx_control *control)
-+ struct sk_buff *skb,
-+ struct txdata_entry_desc *desc,
-+ struct ieee80211_tx_control *control)
+@@ -530,7 +382,7 @@ static void
+ dasd_3990_handle_env_data(struct dasd_ccw_req * erp, char *sense)
{
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ __le32 *txd = skbdesc->desc;
- u32 word;
- /*
-@@ -1599,7 +1537,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_desc_write(txd, 5, word);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+ char msg_format = (sense[7] & 0xF0);
+ char msg_no = (sense[7] & 0x0F);
- rt2x00_desc_read(txd, 11, &word);
-- rt2x00_set_field32(&word, TXD_W11_BUFFER_LENGTH0, length);
-+ rt2x00_set_field32(&word, TXD_W11_BUFFER_LENGTH0, skbdesc->data_len);
- rt2x00_desc_write(txd, 11, word);
+@@ -1157,7 +1009,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_com_rej(struct dasd_ccw_req * erp, char *sense)
+ {
- rt2x00_desc_read(txd, 0, &word);
-@@ -1608,7 +1546,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
- test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_ACK,
-- !(control->flags & IEEE80211_TXCTL_NO_ACK));
-+ test_bit(ENTRY_TXD_ACK, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
- test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_OFDM,
-@@ -1618,7 +1556,7 @@ static void rt61pci_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- !!(control->flags &
- IEEE80211_TXCTL_LONG_RETRY_LIMIT));
- rt2x00_set_field32(&word, TXD_W0_TKIP_MIC, 0);
-- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
-+ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
- rt2x00_set_field32(&word, TXD_W0_BURST,
- test_bit(ENTRY_TXD_BURST, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
-@@ -1649,16 +1587,16 @@ static void rt61pci_kick_tx_queue(struct rt2x00_dev *rt2x00dev,
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- rt2x00pci_register_read(rt2x00dev, TX_CNTL_CSR, &reg);
-- if (queue == IEEE80211_TX_QUEUE_DATA0)
-- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC0, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA1)
-- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC1, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA2)
-- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC2, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA3)
-- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC3, 1);
-- else if (queue == IEEE80211_TX_QUEUE_DATA4)
-- rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_MGMT, 1);
-+ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC0,
-+ (queue == IEEE80211_TX_QUEUE_DATA0));
-+ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC1,
-+ (queue == IEEE80211_TX_QUEUE_DATA1));
-+ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC2,
-+ (queue == IEEE80211_TX_QUEUE_DATA2));
-+ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_AC3,
-+ (queue == IEEE80211_TX_QUEUE_DATA3));
-+ rt2x00_set_field32(&reg, TX_CNTL_CSR_KICK_TX_MGMT,
-+ (queue == IEEE80211_TX_QUEUE_DATA4));
- rt2x00pci_register_write(rt2x00dev, TX_CNTL_CSR, reg);
- }
+ erp->function = dasd_3990_erp_com_rej;
-@@ -1709,7 +1647,7 @@ static int rt61pci_agc_to_rssi(struct rt2x00_dev *rt2x00dev, int rxd_w1)
- static void rt61pci_fill_rxdone(struct data_entry *entry,
- struct rxdata_entry_desc *desc)
+@@ -1198,7 +1050,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_bus_out(struct dasd_ccw_req * erp)
{
-- struct data_desc *rxd = entry->priv;
-+ __le32 *rxd = entry->priv;
- u32 word0;
- u32 word1;
-@@ -1727,8 +1665,7 @@ static void rt61pci_fill_rxdone(struct data_entry *entry,
- desc->rssi = rt61pci_agc_to_rssi(entry->ring->rt2x00dev, word1);
- desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
- desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
--
-- return;
-+ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- /*
-@@ -1739,7 +1676,7 @@ static void rt61pci_txdone(struct rt2x00_dev *rt2x00dev)
- struct data_ring *ring;
- struct data_entry *entry;
- struct data_entry *entry_done;
-- struct data_desc *txd;
-+ __le32 *txd;
- u32 word;
- u32 reg;
- u32 old_reg;
-@@ -1809,24 +1746,7 @@ static void rt61pci_txdone(struct rt2x00_dev *rt2x00dev)
- tx_status = rt2x00_get_field32(reg, STA_CSR4_TX_RESULT);
- retry = rt2x00_get_field32(reg, STA_CSR4_RETRY_COUNT);
+ /* first time set initial retry counter and erp_function */
+ /* and retry once without blocking queue */
+@@ -1237,7 +1089,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_equip_check(struct dasd_ccw_req * erp, char *sense)
+ {
-- rt2x00lib_txdone(entry, tx_status, retry);
--
-- /*
-- * Make this entry available for reuse.
-- */
-- entry->flags = 0;
-- rt2x00_set_field32(&word, TXD_W0_VALID, 0);
-- rt2x00_desc_write(txd, 0, word);
-- rt2x00_ring_index_done_inc(entry->ring);
--
-- /*
-- * If the data ring was full before the txdone handler
-- * we must make sure the packet queue in the mac80211 stack
-- * is reenabled when the txdone handler has finished.
-- */
-- if (!rt2x00_ring_full(ring))
-- ieee80211_wake_queue(rt2x00dev->hw,
-- entry->tx_status.control.queue);
-+ rt2x00pci_txdone(rt2x00dev, entry, tx_status, retry);
- }
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
-@@ -1920,8 +1840,10 @@ static int rt61pci_validate_eeprom(struct rt2x00_dev *rt2x00dev)
- rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
- if (word == 0xffff) {
- rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 2);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
-+ ANTENNA_B);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
-+ ANTENNA_B);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_FRAME_TYPE, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
-@@ -2025,11 +1947,17 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ erp->function = dasd_3990_erp_equip_check;
+
+@@ -1279,7 +1131,6 @@ dasd_3990_erp_equip_check(struct dasd_ccw_req * erp, char *sense)
+
+ erp = dasd_3990_erp_action_5(erp);
}
+-
+ return erp;
- /*
-+ * Determine number of antenna's.
-+ */
-+ if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_NUM) == 2)
-+ __set_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags);
-+
-+ /*
- * Identify default antenna configuration.
- */
-- rt2x00dev->hw->conf.antenna_sel_tx =
-+ rt2x00dev->default_ant.tx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
-- rt2x00dev->hw->conf.antenna_sel_rx =
-+ rt2x00dev->default_ant.rx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
+ } /* end dasd_3990_erp_equip_check */
+@@ -1299,7 +1150,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_data_check(struct dasd_ccw_req * erp, char *sense)
+ {
- /*
-@@ -2039,12 +1967,6 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
- __set_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- /*
-- * Determine number of antenna's.
-- */
-- if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_NUM) == 2)
-- __set_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags);
--
-- /*
- * Detect if this device has an hardware controlled radio.
- */
- #ifdef CONFIG_RT61PCI_RFKILL
-@@ -2072,6 +1994,38 @@ static int rt61pci_init_eeprom(struct rt2x00_dev *rt2x00dev)
- __set_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
+ erp->function = dasd_3990_erp_data_check;
- /*
-+ * When working with a RF2529 chip without double antenna
-+ * the antenna settings should be gathered from the NIC
-+ * eeprom word.
-+ */
-+ if (rt2x00_rf(&rt2x00dev->chip, RF2529) &&
-+ !test_bit(CONFIG_DOUBLE_ANTENNA, &rt2x00dev->flags)) {
-+ switch (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_RX_FIXED)) {
-+ case 0:
-+ rt2x00dev->default_ant.tx = ANTENNA_B;
-+ rt2x00dev->default_ant.rx = ANTENNA_A;
-+ break;
-+ case 1:
-+ rt2x00dev->default_ant.tx = ANTENNA_B;
-+ rt2x00dev->default_ant.rx = ANTENNA_B;
-+ break;
-+ case 2:
-+ rt2x00dev->default_ant.tx = ANTENNA_A;
-+ rt2x00dev->default_ant.rx = ANTENNA_A;
-+ break;
-+ case 3:
-+ rt2x00dev->default_ant.tx = ANTENNA_A;
-+ rt2x00dev->default_ant.rx = ANTENNA_B;
-+ break;
-+ }
-+
-+ if (rt2x00_get_field16(eeprom, EEPROM_NIC_TX_DIVERSITY))
-+ rt2x00dev->default_ant.tx = ANTENNA_SW_DIVERSITY;
-+ if (rt2x00_get_field16(eeprom, EEPROM_NIC_ENABLE_DIVERSITY))
-+ rt2x00dev->default_ant.rx = ANTENNA_SW_DIVERSITY;
-+ }
-+
-+ /*
- * Store led settings, for correct led behaviour.
- * If the eeprom value is invalid,
- * switch to default led mode.
-@@ -2325,7 +2279,6 @@ static void rt61pci_configure_filter(struct ieee80211_hw *hw,
- struct dev_addr_list *mc_list)
+@@ -1358,7 +1209,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_overrun(struct dasd_ccw_req * erp, char *sense)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct interface *intf = &rt2x00dev->interface;
- u32 reg;
- /*
-@@ -2344,22 +2297,19 @@ static void rt61pci_configure_filter(struct ieee80211_hw *hw,
- * Apply some rules to the filters:
- * - Some filters imply different filters to be set.
- * - Some things we can't filter out at all.
-- * - Some filters are set based on interface type.
- */
- if (mc_count)
- *total_flags |= FIF_ALLMULTI;
- if (*total_flags & FIF_OTHER_BSS ||
- *total_flags & FIF_PROMISC_IN_BSS)
- *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
-- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
-- *total_flags |= FIF_PROMISC_IN_BSS;
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- /*
- * Check if there is any work left for us.
- */
-- if (intf->filter == *total_flags)
-+ if (rt2x00dev->packet_filter == *total_flags)
- return;
-- intf->filter = *total_flags;
-+ rt2x00dev->packet_filter = *total_flags;
+ erp->function = dasd_3990_erp_overrun;
- /*
- * Start configuration steps.
-@@ -2426,6 +2376,9 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- struct ieee80211_tx_control *control)
+@@ -1387,7 +1238,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_inv_format(struct dasd_ccw_req * erp, char *sense)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-+ struct skb_desc *desc;
-+ struct data_ring *ring;
-+ struct data_entry *entry;
- /*
- * Just in case the ieee80211 doesn't set this,
-@@ -2433,6 +2386,8 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- * initialization.
- */
- control->queue = IEEE80211_TX_QUEUE_BEACON;
-+ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
-+ entry = rt2x00_get_data_entry(ring);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- /*
- * We need to append the descriptor in front of the
-@@ -2446,15 +2401,23 @@ static int rt61pci_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- }
+ erp->function = dasd_3990_erp_inv_format;
- /*
-- * First we create the beacon.
-+ * Add the descriptor in front of the skb.
-+ */
-+ skb_push(skb, ring->desc_size);
-+ memset(skb->data, 0, ring->desc_size);
-+
-+ /*
-+ * Fill in skb descriptor
- */
-- skb_push(skb, TXD_DESC_SIZE);
-- memset(skb->data, 0, TXD_DESC_SIZE);
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len - ring->desc_size;
-+ desc->desc = skb->data;
-+ desc->data = skb->data + ring->desc_size;
-+ desc->ring = ring;
-+ desc->entry = entry;
+@@ -1403,8 +1254,7 @@ dasd_3990_erp_inv_format(struct dasd_ccw_req * erp, char *sense)
-- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
-- (struct ieee80211_hdr *)(skb->data +
-- TXD_DESC_SIZE),
-- skb->len - TXD_DESC_SIZE, control);
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
+ } else {
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+- "Invalid Track Format - Fatal error should have "
+- "been handled within the interrupt handler");
++ "Invalid Track Format - Fatal error");
- /*
- * Write entire beacon with descriptor to register,
-@@ -2478,7 +2441,7 @@ static const struct ieee80211_ops rt61pci_mac80211_ops = {
- .configure_filter = rt61pci_configure_filter,
- .get_stats = rt2x00mac_get_stats,
- .set_retry_limit = rt61pci_set_retry_limit,
-- .erp_ie_changed = rt2x00mac_erp_ie_changed,
-+ .bss_info_changed = rt2x00mac_bss_info_changed,
- .conf_tx = rt2x00mac_conf_tx,
- .get_tx_stats = rt2x00mac_get_tx_stats,
- .get_tsf = rt61pci_get_tsf,
-@@ -2493,6 +2456,8 @@ static const struct rt2x00lib_ops rt61pci_rt2x00_ops = {
- .load_firmware = rt61pci_load_firmware,
- .initialize = rt2x00pci_initialize,
- .uninitialize = rt2x00pci_uninitialize,
-+ .init_rxentry = rt61pci_init_rxentry,
-+ .init_txentry = rt61pci_init_txentry,
- .set_device_state = rt61pci_set_device_state,
- .rfkill_poll = rt61pci_rfkill_poll,
- .link_stats = rt61pci_link_stats,
-@@ -2510,7 +2475,7 @@ static const struct rt2x00lib_ops rt61pci_rt2x00_ops = {
- };
+ erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
+ }
+@@ -1428,7 +1278,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_EOC(struct dasd_ccw_req * default_erp, char *sense)
+ {
- static const struct rt2x00_ops rt61pci_ops = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .rxd_size = RXD_DESC_SIZE,
- .txd_size = TXD_DESC_SIZE,
- .eeprom_size = EEPROM_SIZE,
-@@ -2547,7 +2512,7 @@ MODULE_FIRMWARE(FIRMWARE_RT2661);
- MODULE_LICENSE("GPL");
+- struct dasd_device *device = default_erp->device;
++ struct dasd_device *device = default_erp->startdev;
- static struct pci_driver rt61pci_driver = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .id_table = rt61pci_device_table,
- .probe = rt2x00pci_probe,
- .remove = __devexit_p(rt2x00pci_remove),
-diff --git a/drivers/net/wireless/rt2x00/rt61pci.h b/drivers/net/wireless/rt2x00/rt61pci.h
-index 6721d7d..4c6524e 100644
---- a/drivers/net/wireless/rt2x00/rt61pci.h
-+++ b/drivers/net/wireless/rt2x00/rt61pci.h
-@@ -1077,13 +1077,19 @@ struct hw_pairwise_ta_entry {
- * R4: RX antenna control
- * FRAME_END: 1 - DPDT, 0 - SPDT (Only valid for 802.11G, RF2527 & RF2529)
- */
--#define BBP_R4_RX_ANTENNA FIELD8(0x03)
-+
-+/*
-+ * ANTENNA_CONTROL semantics (guessed):
-+ * 0x1: Software controlled antenna switching (fixed or SW diversity)
-+ * 0x2: Hardware diversity.
-+ */
-+#define BBP_R4_RX_ANTENNA_CONTROL FIELD8(0x03)
- #define BBP_R4_RX_FRAME_END FIELD8(0x20)
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+ "End-of-Cylinder - must never happen");
+@@ -1453,7 +1303,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_env_data(struct dasd_ccw_req * erp, char *sense)
+ {
- /*
- * R77
- */
--#define BBP_R77_PAIR FIELD8(0x03)
-+#define BBP_R77_RX_ANTENNA FIELD8(0x03)
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- /*
- * RF registers
-@@ -1240,8 +1246,8 @@ struct hw_pairwise_ta_entry {
- /*
- * DMA descriptor defines.
- */
--#define TXD_DESC_SIZE ( 16 * sizeof(struct data_desc) )
--#define RXD_DESC_SIZE ( 16 * sizeof(struct data_desc) )
-+#define TXD_DESC_SIZE ( 16 * sizeof(__le32) )
-+#define RXD_DESC_SIZE ( 16 * sizeof(__le32) )
+ erp->function = dasd_3990_erp_env_data;
- /*
- * TX descriptor format for TX, PRIO and Beacon Ring.
-diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c
-index c0671c2..4d576ab 100644
---- a/drivers/net/wireless/rt2x00/rt73usb.c
-+++ b/drivers/net/wireless/rt2x00/rt73usb.c
-@@ -24,11 +24,6 @@
- Supported chipsets: rt2571W & rt2671.
- */
+@@ -1463,11 +1313,9 @@ dasd_3990_erp_env_data(struct dasd_ccw_req * erp, char *sense)
--/*
-- * Set enviroment defines for rt2x00.h
-- */
--#define DRV_NAME "rt73usb"
+ /* don't retry on disabled interface */
+ if (sense[7] != 0x0F) {
-
- #include <linux/delay.h>
- #include <linux/etherdevice.h>
- #include <linux/init.h>
-@@ -52,8 +47,9 @@
- * between each attampt. When the busy bit is still set at that time,
- * the access attempt is considered to have failed,
- * and we will print an error.
-+ * The _lock versions must be used if you already hold the usb_cache_mutex
- */
--static inline void rt73usb_register_read(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt73usb_register_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset, u32 *value)
- {
- __le32 reg;
-@@ -63,8 +59,17 @@ static inline void rt73usb_register_read(const struct rt2x00_dev *rt2x00dev,
- *value = le32_to_cpu(reg);
- }
+ erp = dasd_3990_erp_action_4(erp, sense);
+ } else {
+-
+- erp = dasd_3990_erp_cleanup(erp, DASD_CQR_IN_IO);
++ erp->status = DASD_CQR_FILLED;
+ }
--static inline void rt73usb_register_multiread(const struct rt2x00_dev
-- *rt2x00dev,
-+static inline void rt73usb_register_read_lock(struct rt2x00_dev *rt2x00dev,
-+ const unsigned int offset, u32 *value)
-+{
-+ __le32 reg;
-+ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_READ,
-+ USB_VENDOR_REQUEST_IN, offset,
++ &reg, sizeof(u32), REGISTER_TIMEOUT);
-+ *value = le32_to_cpu(reg);
-+}
-+
-+static inline void rt73usb_register_multiread(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- void *value, const u32 length)
+ return erp;
+@@ -1490,11 +1338,10 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_no_rec(struct dasd_ccw_req * default_erp, char *sense)
{
-@@ -74,7 +79,7 @@ static inline void rt73usb_register_multiread(const struct rt2x00_dev
- value, length, timeout);
- }
--static inline void rt73usb_register_write(const struct rt2x00_dev *rt2x00dev,
-+static inline void rt73usb_register_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset, u32 value)
+- struct dasd_device *device = default_erp->device;
++ struct dasd_device *device = default_erp->startdev;
+
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+- "No Record Found - Fatal error should "
+- "have been handled within the interrupt handler");
++ "No Record Found - Fatal error ");
+
+ return dasd_3990_erp_cleanup(default_erp, DASD_CQR_FAILED);
+
+@@ -1517,7 +1364,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
{
- __le32 reg = cpu_to_le32(value);
-@@ -83,8 +88,16 @@ static inline void rt73usb_register_write(const struct rt2x00_dev *rt2x00dev,
- &reg, sizeof(u32), REGISTER_TIMEOUT);
- }
--static inline void rt73usb_register_multiwrite(const struct rt2x00_dev
-- *rt2x00dev,
-+static inline void rt73usb_register_write_lock(struct rt2x00_dev *rt2x00dev,
-+ const unsigned int offset, u32 value)
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+
+ DEV_MESSAGE(KERN_ERR, device, "%s", "File Protected");
+
+@@ -1526,6 +1373,43 @@ dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
+ } /* end dasd_3990_erp_file_prot */
+
+ /*
++ * DASD_3990_ERP_INSPECT_ALIAS
++ *
++ * DESCRIPTION
++ * Checks if the original request was started on an alias device.
++ * If yes, it modifies the original and the erp request so that
++ * the erp request can be started on a base device.
++ *
++ * PARAMETER
++ * erp pointer to the currently created default ERP
++ *
++ * RETURN VALUES
++ * erp pointer to the modified ERP, or NULL
++ */
++
++static struct dasd_ccw_req *dasd_3990_erp_inspect_alias(
++ struct dasd_ccw_req *erp)
+{
-+ __le32 reg = cpu_to_le32(value);
-+ rt2x00usb_vendor_req_buff_lock(rt2x00dev, USB_MULTI_WRITE,
-+ USB_VENDOR_REQUEST_OUT, offset,
++ &reg, sizeof(u32), REGISTER_TIMEOUT);
++ struct dasd_ccw_req *cqr = erp->refers;
++
++ if (cqr->block &&
++ (cqr->block->base != cqr->startdev)) {
++ if (cqr->startdev->features & DASD_FEATURE_ERPLOG) {
++ DEV_MESSAGE(KERN_ERR, cqr->startdev,
++ "ERP on alias device for request %p,"
++ " recover on base device %s", cqr,
++ cqr->block->base->cdev->dev.bus_id);
++ }
++ dasd_eckd_reset_ccw_to_base_io(cqr);
++ erp->startdev = cqr->block->base;
++ erp->function = dasd_3990_erp_inspect_alias;
++ return erp;
++ } else
++ return NULL;
+}
+
-+static inline void rt73usb_register_multiwrite(struct rt2x00_dev *rt2x00dev,
- const unsigned int offset,
- void *value, const u32 length)
- {
-@@ -94,13 +107,13 @@ static inline void rt73usb_register_multiwrite(const struct rt2x00_dev
- value, length, timeout);
- }
-
--static u32 rt73usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
-+static u32 rt73usb_bbp_check(struct rt2x00_dev *rt2x00dev)
++
++/*
+ * DASD_3990_ERP_INSPECT_24
+ *
+ * DESCRIPTION
+@@ -1623,7 +1507,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_action_10_32(struct dasd_ccw_req * erp, char *sense)
{
- u32 reg;
- unsigned int i;
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
-- rt73usb_register_read(rt2x00dev, PHY_CSR3, &reg);
-+ rt73usb_register_read_lock(rt2x00dev, PHY_CSR3, &reg);
- if (!rt2x00_get_field32(reg, PHY_CSR3_BUSY))
- break;
- udelay(REGISTER_BUSY_DELAY);
-@@ -109,17 +122,20 @@ static u32 rt73usb_bbp_check(const struct rt2x00_dev *rt2x00dev)
- return reg;
- }
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
--static void rt73usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt73usb_bbp_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u8 value)
+ erp->retries = 256;
+ erp->function = dasd_3990_erp_action_10_32;
+@@ -1657,13 +1541,14 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
{
- u32 reg;
-
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
-+
- /*
- * Wait until the BBP becomes ready.
- */
- reg = rt73usb_bbp_check(rt2x00dev);
- if (rt2x00_get_field32(reg, PHY_CSR3_BUSY)) {
- ERROR(rt2x00dev, "PHY_CSR3 register busy. Write failed.\n");
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- return;
- }
-@@ -132,20 +148,24 @@ static void rt73usb_bbp_write(const struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&reg, PHY_CSR3_BUSY, 1);
- rt2x00_set_field32(&reg, PHY_CSR3_READ_CONTROL, 0);
+- struct dasd_device *device = default_erp->device;
++ struct dasd_device *device = default_erp->startdev;
+ __u32 cpa = 0;
+ struct dasd_ccw_req *cqr;
+ struct dasd_ccw_req *erp;
+ struct DE_eckd_data *DE_data;
++ struct PFX_eckd_data *PFX_data;
+ char *LO_data; /* LO_eckd_data_t */
+- struct ccw1 *ccw;
++ struct ccw1 *ccw, *oldccw;
-- rt73usb_register_write(rt2x00dev, PHY_CSR3, reg);
-+ rt73usb_register_write_lock(rt2x00dev, PHY_CSR3, reg);
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- }
+ DEV_MESSAGE(KERN_DEBUG, device, "%s",
+ "Write not finished because of unexpected condition");
+@@ -1702,8 +1587,8 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
+ /* Build new ERP request including DE/LO */
+ erp = dasd_alloc_erp_request((char *) &cqr->magic,
+ 2 + 1,/* DE/LO + TIC */
+- sizeof (struct DE_eckd_data) +
+- sizeof (struct LO_eckd_data), device);
++ sizeof(struct DE_eckd_data) +
++ sizeof(struct LO_eckd_data), device);
--static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
-+static void rt73usb_bbp_read(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u8 *value)
- {
- u32 reg;
+ if (IS_ERR(erp)) {
+ DEV_MESSAGE(KERN_ERR, device, "%s", "Unable to allocate ERP");
+@@ -1712,10 +1597,16 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
-+
- /*
- * Wait until the BBP becomes ready.
- */
- reg = rt73usb_bbp_check(rt2x00dev);
- if (rt2x00_get_field32(reg, PHY_CSR3_BUSY)) {
- ERROR(rt2x00dev, "PHY_CSR3 register busy. Read failed.\n");
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- return;
- }
+ /* use original DE */
+ DE_data = erp->data;
+- memcpy(DE_data, cqr->data, sizeof (struct DE_eckd_data));
++ oldccw = cqr->cpaddr;
++ if (oldccw->cmd_code == DASD_ECKD_CCW_PFX) {
++ PFX_data = cqr->data;
++ memcpy(DE_data, &PFX_data->define_extend,
++ sizeof(struct DE_eckd_data));
++ } else
++ memcpy(DE_data, cqr->data, sizeof(struct DE_eckd_data));
-@@ -157,7 +177,7 @@ static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&reg, PHY_CSR3_BUSY, 1);
- rt2x00_set_field32(&reg, PHY_CSR3_READ_CONTROL, 1);
+ /* create LO */
+- LO_data = erp->data + sizeof (struct DE_eckd_data);
++ LO_data = erp->data + sizeof(struct DE_eckd_data);
-- rt73usb_register_write(rt2x00dev, PHY_CSR3, reg);
-+ rt73usb_register_write_lock(rt2x00dev, PHY_CSR3, reg);
+ if ((sense[3] == 0x01) && (LO_data[1] & 0x01)) {
- /*
- * Wait until the BBP becomes ready.
-@@ -170,9 +190,10 @@ static void rt73usb_bbp_read(const struct rt2x00_dev *rt2x00dev,
- }
+@@ -1748,7 +1639,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
- *value = rt2x00_get_field32(reg, PHY_CSR3_VALUE);
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- }
+ /* create DE ccw */
+ ccw = erp->cpaddr;
+- memset(ccw, 0, sizeof (struct ccw1));
++ memset(ccw, 0, sizeof(struct ccw1));
+ ccw->cmd_code = DASD_ECKD_CCW_DEFINE_EXTENT;
+ ccw->flags = CCW_FLAG_CC;
+ ccw->count = 16;
+@@ -1756,7 +1647,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
--static void rt73usb_rf_write(const struct rt2x00_dev *rt2x00dev,
-+static void rt73usb_rf_write(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, const u32 value)
+ /* create LO ccw */
+ ccw++;
+- memset(ccw, 0, sizeof (struct ccw1));
++ memset(ccw, 0, sizeof(struct ccw1));
+ ccw->cmd_code = DASD_ECKD_CCW_LOCATE_RECORD;
+ ccw->flags = CCW_FLAG_CC;
+ ccw->count = 16;
+@@ -1770,7 +1661,8 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
+ /* fill erp related fields */
+ erp->function = dasd_3990_erp_action_1B_32;
+ erp->refers = default_erp->refers;
+- erp->device = device;
++ erp->startdev = device;
++ erp->memdev = device;
+ erp->magic = default_erp->magic;
+ erp->expires = 0;
+ erp->retries = 256;
+@@ -1803,7 +1695,7 @@ static struct dasd_ccw_req *
+ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
{
- u32 reg;
-@@ -181,13 +202,16 @@ static void rt73usb_rf_write(const struct rt2x00_dev *rt2x00dev,
- if (!word)
- return;
-+ mutex_lock(&rt2x00dev->usb_cache_mutex);
-+
- for (i = 0; i < REGISTER_BUSY_COUNT; i++) {
-- rt73usb_register_read(rt2x00dev, PHY_CSR4, &reg);
-+ rt73usb_register_read_lock(rt2x00dev, PHY_CSR4, &reg);
- if (!rt2x00_get_field32(reg, PHY_CSR4_BUSY))
- goto rf_write;
- udelay(REGISTER_BUSY_DELAY);
- }
-
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- ERROR(rt2x00dev, "PHY_CSR4 register busy. Write failed.\n");
- return;
+- struct dasd_device *device = previous_erp->device;
++ struct dasd_device *device = previous_erp->startdev;
+ __u32 cpa = 0;
+ struct dasd_ccw_req *cqr;
+ struct dasd_ccw_req *erp;
+@@ -1827,7 +1719,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
+ DEV_MESSAGE(KERN_DEBUG, device, "%s",
+ "Imprecise ending is set - just retry");
-@@ -200,25 +224,26 @@ rf_write:
- * all others contain 20 bits.
- */
- rt2x00_set_field32(&reg, PHY_CSR4_NUMBER_OF_BITS,
-- 20 + !!(rt2x00_rf(&rt2x00dev->chip, RF5225) ||
-- rt2x00_rf(&rt2x00dev->chip, RF2527)));
-+ 20 + (rt2x00_rf(&rt2x00dev->chip, RF5225) ||
-+ rt2x00_rf(&rt2x00dev->chip, RF2527)));
- rt2x00_set_field32(&reg, PHY_CSR4_IF_SELECT, 0);
- rt2x00_set_field32(&reg, PHY_CSR4_BUSY, 1);
+- previous_erp->status = DASD_CQR_QUEUED;
++ previous_erp->status = DASD_CQR_FILLED;
-- rt73usb_register_write(rt2x00dev, PHY_CSR4, reg);
-+ rt73usb_register_write_lock(rt2x00dev, PHY_CSR4, reg);
- rt2x00_rf_write(rt2x00dev, word, value);
-+ mutex_unlock(&rt2x00dev->usb_cache_mutex);
- }
+ return previous_erp;
+ }
+@@ -1850,7 +1742,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
+ erp = previous_erp;
- #ifdef CONFIG_RT2X00_LIB_DEBUGFS
- #define CSR_OFFSET(__word) ( CSR_REG_BASE + ((__word) * sizeof(u32)) )
+ /* update the LO with the new returned sense data */
+- LO_data = erp->data + sizeof (struct DE_eckd_data);
++ LO_data = erp->data + sizeof(struct DE_eckd_data);
--static void rt73usb_read_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt73usb_read_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 *data)
- {
- rt73usb_register_read(rt2x00dev, CSR_OFFSET(word), data);
- }
+ if ((sense[3] == 0x01) && (LO_data[1] & 0x01)) {
--static void rt73usb_write_csr(const struct rt2x00_dev *rt2x00dev,
-+static void rt73usb_write_csr(struct rt2x00_dev *rt2x00dev,
- const unsigned int word, u32 data)
- {
- rt73usb_register_write(rt2x00dev, CSR_OFFSET(word), data);
-@@ -302,7 +327,8 @@ static void rt73usb_config_type(struct rt2x00_dev *rt2x00dev, const int type,
- */
- 	rt73usb_register_read(rt2x00dev, TXRX_CSR9, &reg);
- 	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 1);
--	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 1);
-+	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE,
-+			   (tsf_sync == TSF_SYNC_BEACON));
- 	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
- 	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, tsf_sync);
- rt73usb_register_write(rt2x00dev, TXRX_CSR9, reg);
-@@ -396,12 +422,12 @@ static void rt73usb_config_txpower(struct rt2x00_dev *rt2x00dev,
- }
+@@ -1889,7 +1781,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
+ ccw++; /* addr of TIC ccw */
+ ccw->cda = cpa;
- static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx,
-- const int antenna_rx)
-+ struct antenna_setup *ant)
- {
- u8 r3;
- u8 r4;
- u8 r77;
-+ u8 temp;
+- erp->status = DASD_CQR_QUEUED;
++ erp->status = DASD_CQR_FILLED;
- rt73usb_bbp_read(rt2x00dev, 3, &r3);
- rt73usb_bbp_read(rt2x00dev, 4, &r4);
-@@ -409,30 +435,38 @@ static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
+ return erp;
- rt2x00_set_field8(&r3, BBP_R3_SMART_MODE, 0);
+@@ -1968,9 +1860,7 @@ dasd_3990_erp_compound_path(struct dasd_ccw_req * erp, char *sense)
+ * try further actions. */
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * Configure the RX antenna.
-+ */
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
-- !!(rt2x00dev->curr_hwmode != HWMODE_A));
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
-+ temp = !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags)
-+ && (rt2x00dev->curr_hwmode != HWMODE_A);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, temp);
- break;
- case ANTENNA_A:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
+ erp->lpm = 0;
-
- if (rt2x00dev->curr_hwmode == HWMODE_A)
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
- else
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END, 0);
+- erp->status = DASD_CQR_ERROR;
-
- if (rt2x00dev->curr_hwmode == HWMODE_A)
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
- else
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
- break;
++ erp->status = DASD_CQR_NEED_ERP;
+ }
}
-@@ -442,8 +476,7 @@ static void rt73usb_config_antenna_5x(struct rt2x00_dev *rt2x00dev,
- }
+@@ -2047,7 +1937,7 @@ dasd_3990_erp_compound_config(struct dasd_ccw_req * erp, char *sense)
+ if ((sense[25] & DASD_SENSE_BIT_1) && (sense[26] & DASD_SENSE_BIT_2)) {
- static void rt73usb_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx,
-- const int antenna_rx)
-+ struct antenna_setup *ant)
+ /* set to suspended duplex state then restart */
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+ "Set device to suspended duplex state should be "
+@@ -2081,28 +1971,26 @@ dasd_3990_erp_compound(struct dasd_ccw_req * erp, char *sense)
{
- u8 r3;
- u8 r4;
-@@ -457,18 +490,27 @@ static void rt73usb_config_antenna_2x(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field8(&r4, BBP_R4_RX_FRAME_END,
- !test_bit(CONFIG_FRAME_TYPE, &rt2x00dev->flags));
-- switch (antenna_rx) {
-- case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * Configure the RX antenna.
-+ */
-+ switch (ant->rx) {
- case ANTENNA_HW_DIVERSITY:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 2);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 2);
- break;
- case ANTENNA_A:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 3);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 3);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- break;
-+ case ANTENNA_SW_DIVERSITY:
-+ /*
-+ * NOTE: We should never come here because rt2x00lib is
-+ * supposed to catch this and send us the correct antenna
-+ * explicitely. However we are nog going to bug about this.
-+ * Instead, just default to antenna B.
-+ */
- case ANTENNA_B:
-- rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA, 1);
-- rt2x00_set_field8(&r77, BBP_R77_PAIR, 0);
-+ rt2x00_set_field8(&r77, BBP_R77_RX_ANTENNA, 0);
-+ rt2x00_set_field8(&r4, BBP_R4_RX_ANTENNA_CONTROL, 1);
- break;
+ if ((erp->function == dasd_3990_erp_compound_retry) &&
+- (erp->status == DASD_CQR_ERROR)) {
++ (erp->status == DASD_CQR_NEED_ERP)) {
+
+ dasd_3990_erp_compound_path(erp, sense);
}
-@@ -509,40 +551,40 @@ static const struct antenna_sel antenna_sel_bg[] = {
- };
+ if ((erp->function == dasd_3990_erp_compound_path) &&
+- (erp->status == DASD_CQR_ERROR)) {
++ (erp->status == DASD_CQR_NEED_ERP)) {
- static void rt73usb_config_antenna(struct rt2x00_dev *rt2x00dev,
-- const int antenna_tx, const int antenna_rx)
-+ struct antenna_setup *ant)
- {
- const struct antenna_sel *sel;
- unsigned int lna;
- unsigned int i;
- u32 reg;
+ erp = dasd_3990_erp_compound_code(erp, sense);
+ }
--	rt73usb_register_read(rt2x00dev, PHY_CSR0, &reg);
--
- if (rt2x00dev->curr_hwmode == HWMODE_A) {
- sel = antenna_sel_a;
- lna = test_bit(CONFIG_EXTERNAL_LNA_A, &rt2x00dev->flags);
--
--	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 0);
--	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 1);
- } else {
- sel = antenna_sel_bg;
- lna = test_bit(CONFIG_EXTERNAL_LNA_BG, &rt2x00dev->flags);
--
--	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG, 1);
--	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A, 0);
+ if ((erp->function == dasd_3990_erp_compound_code) &&
+- (erp->status == DASD_CQR_ERROR)) {
++ (erp->status == DASD_CQR_NEED_ERP)) {
+
+ dasd_3990_erp_compound_config(erp, sense);
}
- for (i = 0; i < ARRAY_SIZE(antenna_sel_a); i++)
- rt73usb_bbp_write(rt2x00dev, sel[i].word, sel[i].value[lna]);
+ /* if no compound action ERP specified, the request failed */
+- if (erp->status == DASD_CQR_ERROR) {
+-
++ if (erp->status == DASD_CQR_NEED_ERP)
+ erp->status = DASD_CQR_FAILED;
+- }
-+	rt73usb_register_read(rt2x00dev, PHY_CSR0, &reg);
-+
-+	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_BG,
-+			   (rt2x00dev->curr_hwmode == HWMODE_B ||
-+			    rt2x00dev->curr_hwmode == HWMODE_G));
-+	rt2x00_set_field32(&reg, PHY_CSR0_PA_PE_A,
-+ (rt2x00dev->curr_hwmode == HWMODE_A));
-+
- rt73usb_register_write(rt2x00dev, PHY_CSR0, reg);
+ return erp;
- if (rt2x00_rf(&rt2x00dev->chip, RF5226) ||
- rt2x00_rf(&rt2x00dev->chip, RF5225))
-- rt73usb_config_antenna_5x(rt2x00dev, antenna_tx, antenna_rx);
-+ rt73usb_config_antenna_5x(rt2x00dev, ant);
- else if (rt2x00_rf(&rt2x00dev->chip, RF2528) ||
- rt2x00_rf(&rt2x00dev->chip, RF2527))
-- rt73usb_config_antenna_2x(rt2x00dev, antenna_tx, antenna_rx);
-+ rt73usb_config_antenna_2x(rt2x00dev, ant);
- }
+@@ -2127,7 +2015,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_inspect_32(struct dasd_ccw_req * erp, char *sense)
+ {
- static void rt73usb_config_duration(struct rt2x00_dev *rt2x00dev,
-@@ -586,8 +628,7 @@ static void rt73usb_config(struct rt2x00_dev *rt2x00dev,
- if ((flags & CONFIG_UPDATE_TXPOWER) && !(flags & CONFIG_UPDATE_CHANNEL))
- rt73usb_config_txpower(rt2x00dev, libconf->conf->power_level);
- if (flags & CONFIG_UPDATE_ANTENNA)
-- rt73usb_config_antenna(rt2x00dev, libconf->conf->antenna_sel_tx,
-- libconf->conf->antenna_sel_rx);
-+ rt73usb_config_antenna(rt2x00dev, &libconf->ant);
- if (flags & (CONFIG_UPDATE_SLOT_TIME | CONFIG_UPDATE_BEACON_INT))
- rt73usb_config_duration(rt2x00dev, libconf);
- }
-@@ -605,12 +646,10 @@ static void rt73usb_enable_led(struct rt2x00_dev *rt2x00dev)
- rt73usb_register_write(rt2x00dev, MAC_CSR14, reg);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
- rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_RADIO_STATUS, 1);
-- if (rt2x00dev->rx_status.phymode == MODE_IEEE80211A)
-- rt2x00_set_field16(&rt2x00dev->led_reg,
-- MCU_LEDCS_LINK_A_STATUS, 1);
-- else
-- rt2x00_set_field16(&rt2x00dev->led_reg,
-- MCU_LEDCS_LINK_BG_STATUS, 1);
-+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_A_STATUS,
-+ (rt2x00dev->rx_status.phymode == MODE_IEEE80211A));
-+ rt2x00_set_field16(&rt2x00dev->led_reg, MCU_LEDCS_LINK_BG_STATUS,
-+ (rt2x00dev->rx_status.phymode != MODE_IEEE80211A));
+ erp->function = dasd_3990_erp_inspect_32;
- rt2x00usb_vendor_request_sw(rt2x00dev, USB_LED_CONTROL, 0x0000,
- rt2x00dev->led_reg, REGISTER_TIMEOUT);
-@@ -659,7 +698,8 @@ static void rt73usb_activity_led(struct rt2x00_dev *rt2x00dev, int rssi)
- /*
- * Link tuning
- */
--static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev)
-+static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev,
-+ struct link_qual *qual)
- {
- u32 reg;
+@@ -2149,8 +2037,7 @@ dasd_3990_erp_inspect_32(struct dasd_ccw_req * erp, char *sense)
-@@ -667,15 +707,13 @@ static void rt73usb_link_stats(struct rt2x00_dev *rt2x00dev)
- * Update FCS error count from register.
- */
- 	rt73usb_register_read(rt2x00dev, STA_CSR0, &reg);
-- rt2x00dev->link.rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
-+ qual->rx_failed = rt2x00_get_field32(reg, STA_CSR0_FCS_ERROR);
+ case 0x01: /* fatal error */
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+- "Fatal error should have been "
+- "handled within the interrupt handler");
++ "Retry not recommended - Fatal error");
- /*
- * Update False CCA count from register.
- */
- 	rt73usb_register_read(rt2x00dev, STA_CSR1, &reg);
-- reg = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
-- rt2x00dev->link.false_cca =
-- rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
-+ qual->false_cca = rt2x00_get_field32(reg, STA_CSR1_FALSE_CCA_ERROR);
- }
+ erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
+ break;
+@@ -2253,6 +2140,11 @@ dasd_3990_erp_inspect(struct dasd_ccw_req * erp)
+ /* already set up new ERP ! */
+ char *sense = erp->refers->irb.ecw;
- static void rt73usb_reset_tuner(struct rt2x00_dev *rt2x00dev)
-@@ -781,12 +819,12 @@ static void rt73usb_link_tuner(struct rt2x00_dev *rt2x00dev)
- * r17 does not yet exceed upper limit, continue and base
- * the r17 tuning on the false CCA count.
- */
-- if (rt2x00dev->link.false_cca > 512 && r17 < up_bound) {
-+ if (rt2x00dev->link.qual.false_cca > 512 && r17 < up_bound) {
- r17 += 4;
- if (r17 > up_bound)
- r17 = up_bound;
- rt73usb_bbp_write(rt2x00dev, 17, r17);
-- } else if (rt2x00dev->link.false_cca < 100 && r17 > low_bound) {
-+ } else if (rt2x00dev->link.qual.false_cca < 100 && r17 > low_bound) {
- r17 -= 4;
- if (r17 < low_bound)
- r17 = low_bound;
-@@ -1098,8 +1136,6 @@ static int rt73usb_enable_radio(struct rt2x00_dev *rt2x00dev)
- return -EIO;
- }
++ /* if this problem occured on an alias retry on base */
++ erp_new = dasd_3990_erp_inspect_alias(erp);
++ if (erp_new)
++ return erp_new;
++
+ /* distinguish between 24 and 32 byte sense data */
+ if (sense[27] & DASD_SENSE_BIT_0) {
-- rt2x00usb_enable_radio(rt2x00dev);
--
- /*
- * Enable LED
- */
-@@ -1193,12 +1229,12 @@ static int rt73usb_set_device_state(struct rt2x00_dev *rt2x00dev,
- * TX descriptor initialization
- */
- static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
-- struct data_desc *txd,
-- struct txdata_entry_desc *desc,
-- struct ieee80211_hdr *ieee80211hdr,
-- unsigned int length,
-- struct ieee80211_tx_control *control)
-+ struct sk_buff *skb,
-+ struct txdata_entry_desc *desc,
-+ struct ieee80211_tx_control *control)
+@@ -2287,13 +2179,13 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
{
-+ struct skb_desc *skbdesc = get_skb_desc(skb);
-+ __le32 *txd = skbdesc->desc;
- u32 word;
- /*
-@@ -1233,7 +1269,7 @@ static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- rt2x00_set_field32(&word, TXD_W0_MORE_FRAG,
- test_bit(ENTRY_TXD_MORE_FRAG, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_ACK,
-- !(control->flags & IEEE80211_TXCTL_NO_ACK));
-+ test_bit(ENTRY_TXD_ACK, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_TIMESTAMP,
- test_bit(ENTRY_TXD_REQ_TIMESTAMP, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_OFDM,
-@@ -1243,7 +1279,7 @@ static void rt73usb_write_tx_desc(struct rt2x00_dev *rt2x00dev,
- !!(control->flags &
- IEEE80211_TXCTL_LONG_RETRY_LIMIT));
- rt2x00_set_field32(&word, TXD_W0_TKIP_MIC, 0);
-- rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, length);
-+ rt2x00_set_field32(&word, TXD_W0_DATABYTE_COUNT, skbdesc->data_len);
- rt2x00_set_field32(&word, TXD_W0_BURST2,
- test_bit(ENTRY_TXD_BURST, &desc->flags));
- rt2x00_set_field32(&word, TXD_W0_CIPHER_ALG, CIPHER_NONE);
-@@ -1340,7 +1376,8 @@ static int rt73usb_agc_to_rssi(struct rt2x00_dev *rt2x00dev, int rxd_w1)
- static void rt73usb_fill_rxdone(struct data_entry *entry,
- struct rxdata_entry_desc *desc)
- {
-- struct data_desc *rxd = (struct data_desc *)entry->skb->data;
-+ struct skb_desc *skbdesc = get_skb_desc(entry->skb);
-+ __le32 *rxd = (__le32 *)entry->skb->data;
- u32 word0;
- u32 word1;
+- struct dasd_device *device = cqr->device;
++ struct dasd_device *device = cqr->startdev;
+ struct ccw1 *ccw;
-@@ -1358,13 +1395,15 @@ static void rt73usb_fill_rxdone(struct data_entry *entry,
- desc->rssi = rt73usb_agc_to_rssi(entry->ring->rt2x00dev, word1);
- desc->ofdm = rt2x00_get_field32(word0, RXD_W0_OFDM);
- desc->size = rt2x00_get_field32(word0, RXD_W0_DATABYTE_COUNT);
-+ desc->my_bss = !!rt2x00_get_field32(word0, RXD_W0_MY_BSS);
+ /* allocate additional request block */
+ struct dasd_ccw_req *erp;
- /*
-- * Pull the skb to clear the descriptor area.
-+ * Set descriptor and data pointer.
- */
-- skb_pull(entry->skb, entry->ring->desc_size);
--
-- return;
-+ skbdesc->desc = entry->skb->data;
-+ skbdesc->desc_len = entry->ring->desc_size;
-+ skbdesc->data = entry->skb->data + entry->ring->desc_size;
-+ skbdesc->data_len = desc->size;
- }
+- erp = dasd_alloc_erp_request((char *) &cqr->magic, 2, 0, cqr->device);
++ erp = dasd_alloc_erp_request((char *) &cqr->magic, 2, 0, device);
+ if (IS_ERR(erp)) {
+ if (cqr->retries <= 0) {
+ DEV_MESSAGE(KERN_ERR, device, "%s",
+@@ -2305,7 +2197,7 @@ dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
+ "Unable to allocate ERP request "
+ "(%i retries left)",
+ cqr->retries);
+- dasd_set_timer(device, (HZ << 3));
++ dasd_block_set_timer(device->block, (HZ << 3));
+ }
+ return cqr;
+ }
+@@ -2319,7 +2211,9 @@ dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
+ ccw->cda = (long)(cqr->cpaddr);
+ erp->function = dasd_3990_erp_add_erp;
+ erp->refers = cqr;
+- erp->device = cqr->device;
++ erp->startdev = device;
++ erp->memdev = device;
++ erp->block = cqr->block;
+ erp->magic = cqr->magic;
+ erp->expires = 0;
+ erp->retries = 256;
+@@ -2466,7 +2360,7 @@ static struct dasd_ccw_req *
+ dasd_3990_erp_further_erp(struct dasd_ccw_req *erp)
+ {
- /*
-@@ -1392,8 +1431,10 @@ static int rt73usb_validate_eeprom(struct rt2x00_dev *rt2x00dev)
- rt2x00_eeprom_read(rt2x00dev, EEPROM_ANTENNA, &word);
- if (word == 0xffff) {
- rt2x00_set_field16(&word, EEPROM_ANTENNA_NUM, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT, 2);
-- rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT, 2);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_TX_DEFAULT,
-+ ANTENNA_B);
-+ rt2x00_set_field16(&word, EEPROM_ANTENNA_RX_DEFAULT,
-+ ANTENNA_B);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_FRAME_TYPE, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_DYN_TXAGC, 0);
- rt2x00_set_field16(&word, EEPROM_ANTENNA_HARDWARE_RADIO, 0);
-@@ -1502,9 +1543,9 @@ static int rt73usb_init_eeprom(struct rt2x00_dev *rt2x00dev)
- /*
- * Identify default antenna configuration.
- */
-- rt2x00dev->hw->conf.antenna_sel_tx =
-+ rt2x00dev->default_ant.tx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_TX_DEFAULT);
-- rt2x00dev->hw->conf.antenna_sel_rx =
-+ rt2x00dev->default_ant.rx =
- rt2x00_get_field16(eeprom, EEPROM_ANTENNA_RX_DEFAULT);
+- struct dasd_device *device = erp->device;
++ struct dasd_device *device = erp->startdev;
+ char *sense = erp->irb.ecw;
- /*
-@@ -1806,7 +1847,6 @@ static void rt73usb_configure_filter(struct ieee80211_hw *hw,
- struct dev_addr_list *mc_list)
+ /* check for 24 byte sense ERP */
+@@ -2557,7 +2451,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
+ struct dasd_ccw_req *erp)
{
- struct rt2x00_dev *rt2x00dev = hw->priv;
-- struct interface *intf = &rt2x00dev->interface;
- u32 reg;
- /*
-@@ -1825,22 +1865,19 @@ static void rt73usb_configure_filter(struct ieee80211_hw *hw,
- * Apply some rules to the filters:
- * - Some filters imply different filters to be set.
- * - Some things we can't filter out at all.
-- * - Some filters are set based on interface type.
- */
- if (mc_count)
- *total_flags |= FIF_ALLMULTI;
- if (*total_flags & FIF_OTHER_BSS ||
- *total_flags & FIF_PROMISC_IN_BSS)
- *total_flags |= FIF_PROMISC_IN_BSS | FIF_OTHER_BSS;
-- if (is_interface_type(intf, IEEE80211_IF_TYPE_AP))
-- *total_flags |= FIF_PROMISC_IN_BSS;
+- struct dasd_device *device = erp_head->device;
++ struct dasd_device *device = erp_head->startdev;
+ struct dasd_ccw_req *erp_done = erp_head; /* finished req */
+ struct dasd_ccw_req *erp_free = NULL; /* req to be freed */
- /*
- * Check if there is any work left for us.
- */
-- if (intf->filter == *total_flags)
-+ if (rt2x00dev->packet_filter == *total_flags)
- return;
-- intf->filter = *total_flags;
-+ rt2x00dev->packet_filter = *total_flags;
+@@ -2569,13 +2463,13 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
+ "original request was lost\n");
- /*
- * When in atomic context, reschedule and let rt2x00lib
-@@ -1926,6 +1963,9 @@ static int rt73usb_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- struct ieee80211_tx_control *control)
- {
- struct rt2x00_dev *rt2x00dev = hw->priv;
-+ struct skb_desc *desc;
-+ struct data_ring *ring;
-+ struct data_entry *entry;
- int timeout;
+ /* remove the request from the device queue */
+- list_del(&erp_done->list);
++ list_del(&erp_done->blocklist);
- /*
-@@ -1934,17 +1974,27 @@ static int rt73usb_beacon_update(struct ieee80211_hw *hw, struct sk_buff *skb,
- * initialization.
- */
- control->queue = IEEE80211_TX_QUEUE_BEACON;
-+ ring = rt2x00lib_get_ring(rt2x00dev, control->queue);
-+ entry = rt2x00_get_data_entry(ring);
-+
-+ /*
-+ * Add the descriptor in front of the skb.
-+ */
-+ skb_push(skb, ring->desc_size);
-+ memset(skb->data, 0, ring->desc_size);
+ erp_free = erp_done;
+ erp_done = erp_done->refers;
- /*
-- * First we create the beacon.
-+ * Fill in skb descriptor
- */
-- skb_push(skb, TXD_DESC_SIZE);
-- memset(skb->data, 0, TXD_DESC_SIZE);
-+ desc = get_skb_desc(skb);
-+ desc->desc_len = ring->desc_size;
-+ desc->data_len = skb->len - ring->desc_size;
-+ desc->desc = skb->data;
-+ desc->data = skb->data + ring->desc_size;
-+ desc->ring = ring;
-+ desc->entry = entry;
+ /* free the finished erp request */
+- dasd_free_erp_request(erp_free, erp_free->device);
++ dasd_free_erp_request(erp_free, erp_free->memdev);
-- rt2x00lib_write_tx_desc(rt2x00dev, (struct data_desc *)skb->data,
-- (struct ieee80211_hdr *)(skb->data +
-- TXD_DESC_SIZE),
-- skb->len - TXD_DESC_SIZE, control);
-+ rt2x00lib_write_tx_desc(rt2x00dev, skb, control);
+ } /* end while */
- /*
- * Write entire beacon with descriptor to register,
-@@ -1971,7 +2021,7 @@ static const struct ieee80211_ops rt73usb_mac80211_ops = {
- .configure_filter = rt73usb_configure_filter,
- .get_stats = rt2x00mac_get_stats,
- .set_retry_limit = rt73usb_set_retry_limit,
-- .erp_ie_changed = rt2x00mac_erp_ie_changed,
-+ .bss_info_changed = rt2x00mac_bss_info_changed,
- .conf_tx = rt2x00mac_conf_tx,
- .get_tx_stats = rt2x00mac_get_tx_stats,
- .get_tsf = rt73usb_get_tsf,
-@@ -1985,6 +2035,8 @@ static const struct rt2x00lib_ops rt73usb_rt2x00_ops = {
- .load_firmware = rt73usb_load_firmware,
- .initialize = rt2x00usb_initialize,
- .uninitialize = rt2x00usb_uninitialize,
-+ .init_rxentry = rt2x00usb_init_rxentry,
-+ .init_txentry = rt2x00usb_init_txentry,
- .set_device_state = rt73usb_set_device_state,
- .link_stats = rt73usb_link_stats,
- .reset_tuner = rt73usb_reset_tuner,
-@@ -2002,7 +2054,7 @@ static const struct rt2x00lib_ops rt73usb_rt2x00_ops = {
- };
+@@ -2603,7 +2497,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
+ erp->retries, erp);
- static const struct rt2x00_ops rt73usb_ops = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .rxd_size = RXD_DESC_SIZE,
- .txd_size = TXD_DESC_SIZE,
- .eeprom_size = EEPROM_SIZE,
-@@ -2089,7 +2141,7 @@ MODULE_FIRMWARE(FIRMWARE_RT2571);
- MODULE_LICENSE("GPL");
+ /* handle the request again... */
+- erp->status = DASD_CQR_QUEUED;
++ erp->status = DASD_CQR_FILLED;
+ }
- static struct usb_driver rt73usb_driver = {
-- .name = DRV_NAME,
-+ .name = KBUILD_MODNAME,
- .id_table = rt73usb_device_table,
- .probe = rt2x00usb_probe,
- .disconnect = rt2x00usb_disconnect,
-diff --git a/drivers/net/wireless/rt2x00/rt73usb.h b/drivers/net/wireless/rt2x00/rt73usb.h
-index f095151..d49dcaa 100644
---- a/drivers/net/wireless/rt2x00/rt73usb.h
-+++ b/drivers/net/wireless/rt2x00/rt73usb.h
-@@ -713,13 +713,19 @@ struct hw_pairwise_ta_entry {
- * R4: RX antenna control
- * FRAME_END: 1 - DPDT, 0 - SPDT (Only valid for 802.11G, RF2527 & RF2529)
- */
--#define BBP_R4_RX_ANTENNA FIELD8(0x03)
-+
-+/*
-+ * ANTENNA_CONTROL semantics (guessed):
-+ * 0x1: Software controlled antenna switching (fixed or SW diversity)
-+ * 0x2: Hardware diversity.
-+ */
-+#define BBP_R4_RX_ANTENNA_CONTROL FIELD8(0x03)
- #define BBP_R4_RX_FRAME_END FIELD8(0x20)
+ } else {
+@@ -2620,7 +2514,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
+ * DASD_3990_ERP_ACTION
+ *
+ * DESCRIPTION
+- * controll routine for 3990 erp actions.
++ * control routine for 3990 erp actions.
+ * Has to be called with the queue lock (namely the s390_irq_lock) acquired.
+ *
+ * PARAMETER
+@@ -2636,9 +2530,8 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
+ struct dasd_ccw_req *
+ dasd_3990_erp_action(struct dasd_ccw_req * cqr)
+ {
+-
+ struct dasd_ccw_req *erp = NULL;
+- struct dasd_device *device = cqr->device;
++ struct dasd_device *device = cqr->startdev;
+ struct dasd_ccw_req *temp_erp = NULL;
- /*
- * R77
- */
--#define BBP_R77_PAIR FIELD8(0x03)
-+#define BBP_R77_RX_ANTENNA FIELD8(0x03)
+ if (device->features & DASD_FEATURE_ERPLOG) {
+@@ -2704,10 +2597,11 @@ dasd_3990_erp_action(struct dasd_ccw_req * cqr)
+ }
+ }
- /*
- * RF registers
-@@ -860,8 +866,8 @@ struct hw_pairwise_ta_entry {
- /*
- * DMA descriptor defines.
- */
--#define TXD_DESC_SIZE ( 6 * sizeof(struct data_desc) )
--#define RXD_DESC_SIZE ( 6 * sizeof(struct data_desc) )
-+#define TXD_DESC_SIZE ( 6 * sizeof(__le32) )
-+#define RXD_DESC_SIZE ( 6 * sizeof(__le32) )
+- /* enqueue added ERP request */
+- if (erp->status == DASD_CQR_FILLED) {
+- erp->status = DASD_CQR_QUEUED;
+- list_add(&erp->list, &device->ccw_queue);
++ /* enqueue ERP request if it's a new one */
++ if (list_empty(&erp->blocklist)) {
++ cqr->status = DASD_CQR_IN_ERP;
++ /* add erp request before the cqr */
++ list_add_tail(&erp->blocklist, &cqr->blocklist);
+ }
- /*
- * TX descriptor format for TX, PRIO and Beacon Ring.
-diff --git a/drivers/net/wireless/rtl8180.h b/drivers/net/wireless/rtl8180.h
-new file mode 100644
-index 0000000..2cbfe3c
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180.h
-@@ -0,0 +1,151 @@
-+#ifndef RTL8180_H
-+#define RTL8180_H
-+
-+#include "rtl818x.h"
-+
-+#define MAX_RX_SIZE IEEE80211_MAX_RTS_THRESHOLD
-+
-+#define RF_PARAM_ANALOGPHY (1 << 0)
-+#define RF_PARAM_ANTBDEFAULT (1 << 1)
-+#define RF_PARAM_CARRIERSENSE1 (1 << 2)
-+#define RF_PARAM_CARRIERSENSE2 (1 << 3)
-+
-+#define BB_ANTATTEN_CHAN14 0x0C
-+#define BB_ANTENNA_B 0x40
-+
-+#define BB_HOST_BANG (1 << 30)
-+#define BB_HOST_BANG_EN (1 << 2)
-+#define BB_HOST_BANG_CLK (1 << 1)
-+#define BB_HOST_BANG_DATA 1
-+
-+#define ANAPARAM_TXDACOFF_SHIFT 27
-+#define ANAPARAM_PWR0_SHIFT 28
-+#define ANAPARAM_PWR0_MASK (0x07 << ANAPARAM_PWR0_SHIFT)
-+#define ANAPARAM_PWR1_SHIFT 20
-+#define ANAPARAM_PWR1_MASK (0x7F << ANAPARAM_PWR1_SHIFT)
-+
-+enum rtl8180_tx_desc_flags {
-+ RTL8180_TX_DESC_FLAG_NO_ENC = (1 << 15),
-+ RTL8180_TX_DESC_FLAG_TX_OK = (1 << 15),
-+ RTL8180_TX_DESC_FLAG_SPLCP = (1 << 16),
-+ RTL8180_TX_DESC_FLAG_RX_UNDER = (1 << 16),
-+ RTL8180_TX_DESC_FLAG_MOREFRAG = (1 << 17),
-+ RTL8180_TX_DESC_FLAG_CTS = (1 << 18),
-+ RTL8180_TX_DESC_FLAG_RTS = (1 << 23),
-+ RTL8180_TX_DESC_FLAG_LS = (1 << 28),
-+ RTL8180_TX_DESC_FLAG_FS = (1 << 29),
-+ RTL8180_TX_DESC_FLAG_DMA = (1 << 30),
-+ RTL8180_TX_DESC_FLAG_OWN = (1 << 31)
-+};
-+
-+struct rtl8180_tx_desc {
-+ __le32 flags;
-+ __le16 rts_duration;
-+ __le16 plcp_len;
-+ __le32 tx_buf;
-+ __le32 frame_len;
-+ __le32 next_tx_desc;
-+ u8 cw;
-+ u8 retry_limit;
-+ u8 agc;
-+ u8 flags2;
-+ u32 reserved[2];
-+} __attribute__ ((packed));
-+
-+enum rtl8180_rx_desc_flags {
-+ RTL8180_RX_DESC_FLAG_ICV_ERR = (1 << 12),
-+ RTL8180_RX_DESC_FLAG_CRC32_ERR = (1 << 13),
-+ RTL8180_RX_DESC_FLAG_PM = (1 << 14),
-+ RTL8180_RX_DESC_FLAG_RX_ERR = (1 << 15),
-+ RTL8180_RX_DESC_FLAG_BCAST = (1 << 16),
-+ RTL8180_RX_DESC_FLAG_PAM = (1 << 17),
-+ RTL8180_RX_DESC_FLAG_MCAST = (1 << 18),
-+ RTL8180_RX_DESC_FLAG_SPLCP = (1 << 25),
-+ RTL8180_RX_DESC_FLAG_FOF = (1 << 26),
-+ RTL8180_RX_DESC_FLAG_DMA_FAIL = (1 << 27),
-+ RTL8180_RX_DESC_FLAG_LS = (1 << 28),
-+ RTL8180_RX_DESC_FLAG_FS = (1 << 29),
-+ RTL8180_RX_DESC_FLAG_EOR = (1 << 30),
-+ RTL8180_RX_DESC_FLAG_OWN = (1 << 31)
-+};
-+
-+struct rtl8180_rx_desc {
-+ __le32 flags;
-+ __le32 flags2;
-+ union {
-+ __le32 rx_buf;
-+ __le64 tsft;
-+ };
-+} __attribute__ ((packed));
-+
-+struct rtl8180_tx_ring {
-+ struct rtl8180_tx_desc *desc;
-+ dma_addr_t dma;
-+ unsigned int idx;
-+ unsigned int entries;
-+ struct sk_buff_head queue;
-+};
-+
-+struct rtl8180_priv {
-+ /* common between rtl818x drivers */
-+ struct rtl818x_csr __iomem *map;
-+ const struct rtl818x_rf_ops *rf;
-+ struct ieee80211_vif *vif;
-+ int mode;
-+
-+ /* rtl8180 driver specific */
-+ spinlock_t lock;
-+ struct rtl8180_rx_desc *rx_ring;
-+ dma_addr_t rx_ring_dma;
-+ unsigned int rx_idx;
-+ struct sk_buff *rx_buf[32];
-+ struct rtl8180_tx_ring tx_ring[4];
-+ struct ieee80211_channel channels[14];
-+ struct ieee80211_rate rates[12];
-+ struct ieee80211_hw_mode modes[2];
-+ struct pci_dev *pdev;
-+ u32 rx_conf;
-+
-+ int r8185;
-+ u32 anaparam;
-+ u16 rfparam;
-+ u8 csthreshold;
-+};
-+
-+void rtl8180_write_phy(struct ieee80211_hw *dev, u8 addr, u32 data);
-+void rtl8180_set_anaparam(struct rtl8180_priv *priv, u32 anaparam);
-+
-+static inline u8 rtl818x_ioread8(struct rtl8180_priv *priv, u8 __iomem *addr)
-+{
-+ return ioread8(addr);
-+}
-+
-+static inline u16 rtl818x_ioread16(struct rtl8180_priv *priv, __le16 __iomem *addr)
-+{
-+ return ioread16(addr);
-+}
-+
-+static inline u32 rtl818x_ioread32(struct rtl8180_priv *priv, __le32 __iomem *addr)
-+{
-+ return ioread32(addr);
-+}
-+
-+static inline void rtl818x_iowrite8(struct rtl8180_priv *priv,
-+ u8 __iomem *addr, u8 val)
-+{
-+ iowrite8(val, addr);
-+}
-+
-+static inline void rtl818x_iowrite16(struct rtl8180_priv *priv,
-+ __le16 __iomem *addr, u16 val)
-+{
-+ iowrite16(val, addr);
-+}
-+
-+static inline void rtl818x_iowrite32(struct rtl8180_priv *priv,
-+ __le32 __iomem *addr, u32 val)
-+{
-+ iowrite32(val, addr);
-+}
-+
-+#endif /* RTL8180_H */
-diff --git a/drivers/net/wireless/rtl8180_dev.c b/drivers/net/wireless/rtl8180_dev.c
-new file mode 100644
-index 0000000..07f37b0
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_dev.c
-@@ -0,0 +1,1051 @@
-+
-+/*
-+ * Linux device driver for RTL8180 / RTL8185
-+ *
-+ * Copyright 2007 Michael Wu <flamingice at sourmilk.net>
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Based on the r8180 driver, which is:
-+ * Copyright 2004-2005 Andrea Merello <andreamrl at tiscali.it>, et al.
-+ *
-+ * Thanks to Realtek for their support!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ */
-+
-+#include <linux/init.h>
-+#include <linux/pci.h>
-+#include <linux/delay.h>
-+#include <linux/etherdevice.h>
-+#include <linux/eeprom_93cx6.h>
-+#include <net/mac80211.h>
-+
-+#include "rtl8180.h"
-+#include "rtl8180_rtl8225.h"
-+#include "rtl8180_sa2400.h"
-+#include "rtl8180_max2820.h"
-+#include "rtl8180_grf5101.h"
-+
-+MODULE_AUTHOR("Michael Wu <flamingice at sourmilk.net>");
-+MODULE_AUTHOR("Andrea Merello <andreamrl at tiscali.it>");
-+MODULE_DESCRIPTION("RTL8180 / RTL8185 PCI wireless driver");
-+MODULE_LICENSE("GPL");
-+
-+static struct pci_device_id rtl8180_table[] __devinitdata = {
-+ /* rtl8185 */
-+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8185) },
-+ { PCI_DEVICE(PCI_VENDOR_ID_BELKIN, 0x701f) },
-+
-+ /* rtl8180 */
-+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8180) },
-+ { PCI_DEVICE(0x1799, 0x6001) },
-+ { PCI_DEVICE(0x1799, 0x6020) },
-+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x3300) },
-+ { }
-+};
-+
-+MODULE_DEVICE_TABLE(pci, rtl8180_table);
-+
-+void rtl8180_write_phy(struct ieee80211_hw *dev, u8 addr, u32 data)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int i = 10;
-+ u32 buf;
-+
-+ buf = (data << 8) | addr;
-+
-+ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->PHY[0], buf | 0x80);
-+ while (i--) {
-+ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->PHY[0], buf);
-+ if (rtl818x_ioread8(priv, &priv->map->PHY[2]) == (data & 0xFF))
-+ return;
-+ }
-+}
-+
-+static void rtl8180_handle_rx(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ unsigned int count = 32;
-+
-+ while (count--) {
-+ struct rtl8180_rx_desc *entry = &priv->rx_ring[priv->rx_idx];
-+ struct sk_buff *skb = priv->rx_buf[priv->rx_idx];
-+ u32 flags = le32_to_cpu(entry->flags);
-+
-+ if (flags & RTL8180_RX_DESC_FLAG_OWN)
-+ return;
-+
-+ if (unlikely(flags & (RTL8180_RX_DESC_FLAG_DMA_FAIL |
-+ RTL8180_RX_DESC_FLAG_FOF |
-+ RTL8180_RX_DESC_FLAG_RX_ERR)))
-+ goto done;
-+ else {
-+ u32 flags2 = le32_to_cpu(entry->flags2);
-+ struct ieee80211_rx_status rx_status = {0};
-+ struct sk_buff *new_skb = dev_alloc_skb(MAX_RX_SIZE);
-+
-+ if (unlikely(!new_skb))
-+ goto done;
-+
-+ pci_unmap_single(priv->pdev,
-+ *((dma_addr_t *)skb->cb),
-+ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
-+ skb_put(skb, flags & 0xFFF);
-+
-+ rx_status.antenna = (flags2 >> 15) & 1;
-+ /* TODO: improve signal/rssi reporting */
-+ rx_status.signal = flags2 & 0xFF;
-+ rx_status.ssi = (flags2 >> 8) & 0x7F;
-+ rx_status.rate = (flags >> 20) & 0xF;
-+ rx_status.freq = dev->conf.freq;
-+ rx_status.channel = dev->conf.channel;
-+ rx_status.phymode = dev->conf.phymode;
-+ rx_status.mactime = le64_to_cpu(entry->tsft);
-+ rx_status.flag |= RX_FLAG_TSFT;
-+ if (flags & RTL8180_RX_DESC_FLAG_CRC32_ERR)
-+ rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
-+
-+ ieee80211_rx_irqsafe(dev, skb, &rx_status);
-+
-+ skb = new_skb;
-+ priv->rx_buf[priv->rx_idx] = skb;
-+ *((dma_addr_t *) skb->cb) =
-+ pci_map_single(priv->pdev, skb_tail_pointer(skb),
-+ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
-+ }
-+
-+ done:
-+ entry->rx_buf = cpu_to_le32(*((dma_addr_t *)skb->cb));
-+ entry->flags = cpu_to_le32(RTL8180_RX_DESC_FLAG_OWN |
-+ MAX_RX_SIZE);
-+ if (priv->rx_idx == 31)
-+ entry->flags |= cpu_to_le32(RTL8180_RX_DESC_FLAG_EOR);
-+ priv->rx_idx = (priv->rx_idx + 1) % 32;
-+ }
-+}
-+
-+static void rtl8180_handle_tx(struct ieee80211_hw *dev, unsigned int prio)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ struct rtl8180_tx_ring *ring = &priv->tx_ring[prio];
-+
-+ while (skb_queue_len(&ring->queue)) {
-+ struct rtl8180_tx_desc *entry = &ring->desc[ring->idx];
-+ struct sk_buff *skb;
-+ struct ieee80211_tx_status status = { {0} };
-+ struct ieee80211_tx_control *control;
-+ u32 flags = le32_to_cpu(entry->flags);
-+
-+ if (flags & RTL8180_TX_DESC_FLAG_OWN)
-+ return;
-+
-+ ring->idx = (ring->idx + 1) % ring->entries;
-+ skb = __skb_dequeue(&ring->queue);
-+ pci_unmap_single(priv->pdev, le32_to_cpu(entry->tx_buf),
-+ skb->len, PCI_DMA_TODEVICE);
-+
-+ control = *((struct ieee80211_tx_control **)skb->cb);
-+ if (control)
-+ memcpy(&status.control, control, sizeof(*control));
-+ kfree(control);
-+
-+ if (!(status.control.flags & IEEE80211_TXCTL_NO_ACK)) {
-+ if (flags & RTL8180_TX_DESC_FLAG_TX_OK)
-+ status.flags = IEEE80211_TX_STATUS_ACK;
-+ else
-+ status.excessive_retries = 1;
-+ }
-+ status.retry_count = flags & 0xFF;
-+
-+ ieee80211_tx_status_irqsafe(dev, skb, &status);
-+ if (ring->entries - skb_queue_len(&ring->queue) == 2)
-+ ieee80211_wake_queue(dev, prio);
-+ }
-+}
-+
-+static irqreturn_t rtl8180_interrupt(int irq, void *dev_id)
-+{
-+ struct ieee80211_hw *dev = dev_id;
-+ struct rtl8180_priv *priv = dev->priv;
-+ u16 reg;
-+
-+ spin_lock(&priv->lock);
-+ reg = rtl818x_ioread16(priv, &priv->map->INT_STATUS);
-+ if (unlikely(reg == 0xFFFF)) {
-+ spin_unlock(&priv->lock);
-+ return IRQ_HANDLED;
-+ }
-+
-+ rtl818x_iowrite16(priv, &priv->map->INT_STATUS, reg);
-+
-+ if (reg & (RTL818X_INT_TXB_OK | RTL818X_INT_TXB_ERR))
-+ rtl8180_handle_tx(dev, 3);
-+
-+ if (reg & (RTL818X_INT_TXH_OK | RTL818X_INT_TXH_ERR))
-+ rtl8180_handle_tx(dev, 2);
-+
-+ if (reg & (RTL818X_INT_TXN_OK | RTL818X_INT_TXN_ERR))
-+ rtl8180_handle_tx(dev, 1);
-+
-+ if (reg & (RTL818X_INT_TXL_OK | RTL818X_INT_TXL_ERR))
-+ rtl8180_handle_tx(dev, 0);
-+
-+ if (reg & (RTL818X_INT_RX_OK | RTL818X_INT_RX_ERR))
-+ rtl8180_handle_rx(dev);
-+
-+ spin_unlock(&priv->lock);
-+
-+ return IRQ_HANDLED;
-+}
-+
-+static int rtl8180_tx(struct ieee80211_hw *dev, struct sk_buff *skb,
-+ struct ieee80211_tx_control *control)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ struct rtl8180_tx_ring *ring;
-+ struct rtl8180_tx_desc *entry;
-+ unsigned long flags;
-+ unsigned int idx, prio;
-+ dma_addr_t mapping;
-+ u32 tx_flags;
-+ u16 plcp_len = 0;
-+ __le16 rts_duration = 0;
-+
-+ prio = control->queue;
-+ ring = &priv->tx_ring[prio];
-+
-+ mapping = pci_map_single(priv->pdev, skb->data,
-+ skb->len, PCI_DMA_TODEVICE);
-+
-+ tx_flags = RTL8180_TX_DESC_FLAG_OWN | RTL8180_TX_DESC_FLAG_FS |
-+ RTL8180_TX_DESC_FLAG_LS | (control->tx_rate << 24) |
-+ (control->rts_cts_rate << 19) | skb->len;
-+
-+ if (priv->r8185)
-+ tx_flags |= RTL8180_TX_DESC_FLAG_DMA |
-+ RTL8180_TX_DESC_FLAG_NO_ENC;
-+
-+ if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS)
-+ tx_flags |= RTL8180_TX_DESC_FLAG_RTS;
-+ else if (control->flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
-+ tx_flags |= RTL8180_TX_DESC_FLAG_CTS;
-+
-+ *((struct ieee80211_tx_control **) skb->cb) =
-+ kmemdup(control, sizeof(*control), GFP_ATOMIC);
-+
-+ if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS)
-+ rts_duration = ieee80211_rts_duration(dev, priv->vif, skb->len,
-+ control);
-+
-+ if (!priv->r8185) {
-+ unsigned int remainder;
-+
-+ plcp_len = DIV_ROUND_UP(16 * (skb->len + 4),
-+ (control->rate->rate * 2) / 10);
-+ remainder = (16 * (skb->len + 4)) %
-+ ((control->rate->rate * 2) / 10);
-+ if (remainder > 0 && remainder <= 6)
-+ plcp_len |= 1 << 15;
-+ }
-+
-+ spin_lock_irqsave(&priv->lock, flags);
-+ idx = (ring->idx + skb_queue_len(&ring->queue)) % ring->entries;
-+ entry = &ring->desc[idx];
-+
-+ entry->rts_duration = rts_duration;
-+ entry->plcp_len = cpu_to_le16(plcp_len);
-+ entry->tx_buf = cpu_to_le32(mapping);
-+ entry->frame_len = cpu_to_le32(skb->len);
-+ entry->flags2 = control->alt_retry_rate != -1 ?
-+ control->alt_retry_rate << 4 : 0;
-+ entry->retry_limit = control->retry_limit;
-+ entry->flags = cpu_to_le32(tx_flags);
-+ __skb_queue_tail(&ring->queue, skb);
-+ if (ring->entries - skb_queue_len(&ring->queue) < 2)
-+ ieee80211_stop_queue(dev, control->queue);
-+ spin_unlock_irqrestore(&priv->lock, flags);
-+
-+ rtl818x_iowrite8(priv, &priv->map->TX_DMA_POLLING, (1 << (prio + 4)));
-+
-+ return 0;
-+}
-+
-+void rtl8180_set_anaparam(struct rtl8180_priv *priv, u32 anaparam)
-+{
-+ u8 reg;
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3,
-+ reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite32(priv, &priv->map->ANAPARAM, anaparam);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3,
-+ reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+}
-+
-+static int rtl8180_init_hw(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u16 reg;
-+
-+ rtl818x_iowrite8(priv, &priv->map->CMD, 0);
-+ rtl818x_ioread8(priv, &priv->map->CMD);
-+ msleep(10);
-+
-+ /* reset */
-+ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0);
-+ rtl818x_ioread8(priv, &priv->map->CMD);
-+
-+ reg = rtl818x_ioread8(priv, &priv->map->CMD);
-+ reg &= (1 << 1);
-+ reg |= RTL818X_CMD_RESET;
-+ rtl818x_iowrite8(priv, &priv->map->CMD, RTL818X_CMD_RESET);
-+ rtl818x_ioread8(priv, &priv->map->CMD);
-+ msleep(200);
-+
-+ /* check success of reset */
-+ if (rtl818x_ioread8(priv, &priv->map->CMD) & RTL818X_CMD_RESET) {
-+ printk(KERN_ERR "%s: reset timeout!\n", wiphy_name(dev->wiphy));
-+ return -ETIMEDOUT;
-+ }
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_LOAD);
-+ rtl818x_ioread8(priv, &priv->map->CMD);
-+ msleep(200);
-+
-+ if (rtl818x_ioread8(priv, &priv->map->CONFIG3) & (1 << 3)) {
-+ /* For cardbus */
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
-+ reg |= 1 << 1;
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg);
-+ reg = rtl818x_ioread16(priv, &priv->map->FEMR);
-+ reg |= (1 << 15) | (1 << 14) | (1 << 4);
-+ rtl818x_iowrite16(priv, &priv->map->FEMR, reg);
-+ }
-+
-+ rtl818x_iowrite8(priv, &priv->map->MSR, 0);
-+
-+ if (!priv->r8185)
-+ rtl8180_set_anaparam(priv, priv->anaparam);
-+
-+ rtl818x_iowrite32(priv, &priv->map->RDSAR, priv->rx_ring_dma);
-+ rtl818x_iowrite32(priv, &priv->map->TBDA, priv->tx_ring[3].dma);
-+ rtl818x_iowrite32(priv, &priv->map->THPDA, priv->tx_ring[2].dma);
-+ rtl818x_iowrite32(priv, &priv->map->TNPDA, priv->tx_ring[1].dma);
-+ rtl818x_iowrite32(priv, &priv->map->TLPDA, priv->tx_ring[0].dma);
-+
-+ /* TODO: necessary? specs indicate not */
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG2);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG2, reg & ~(1 << 3));
-+ if (priv->r8185) {
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG2);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG2, reg | (1 << 4));
-+ }
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+
-+ /* TODO: set CONFIG5 for calibrating AGC on rtl8180 + philips radio? */
-+
-+ /* TODO: turn off hw wep on rtl8180 */
-+
-+ rtl818x_iowrite32(priv, &priv->map->INT_TIMEOUT, 0);
-+
-+ if (priv->r8185) {
-+ rtl818x_iowrite8(priv, &priv->map->WPA_CONF, 0);
-+ rtl818x_iowrite8(priv, &priv->map->RATE_FALLBACK, 0x81);
-+ rtl818x_iowrite8(priv, &priv->map->RESP_RATE, (8 << 4) | 0);
-+
-+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
-+
-+ /* TODO: set ClkRun enable? necessary? */
-+ reg = rtl818x_ioread8(priv, &priv->map->GP_ENABLE);
-+ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, reg & ~(1 << 6));
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | (1 << 2));
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+ } else {
-+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x1);
-+ rtl818x_iowrite8(priv, &priv->map->SECURITY, 0);
-+
-+ rtl818x_iowrite8(priv, &priv->map->PHY_DELAY, 0x6);
-+ rtl818x_iowrite8(priv, &priv->map->CARRIER_SENSE_COUNTER, 0x4C);
-+ }
-+
-+ priv->rf->init(dev);
-+ if (priv->r8185)
-+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
-+ return 0;
-+}
-+
-+static int rtl8180_init_rx_ring(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ struct rtl8180_rx_desc *entry;
-+ int i;
-+
-+ priv->rx_ring = pci_alloc_consistent(priv->pdev,
-+ sizeof(*priv->rx_ring) * 32,
-+ &priv->rx_ring_dma);
-+
-+ if (!priv->rx_ring || (unsigned long)priv->rx_ring & 0xFF) {
-+ printk(KERN_ERR "%s: Cannot allocate RX ring\n",
-+ wiphy_name(dev->wiphy));
-+ return -ENOMEM;
-+ }
-+
-+ memset(priv->rx_ring, 0, sizeof(*priv->rx_ring) * 32);
-+ priv->rx_idx = 0;
-+
-+ for (i = 0; i < 32; i++) {
-+ struct sk_buff *skb = dev_alloc_skb(MAX_RX_SIZE);
-+ dma_addr_t *mapping;
-+ entry = &priv->rx_ring[i];
-+ if (!skb)
-+ return 0;
-+
-+ priv->rx_buf[i] = skb;
-+ mapping = (dma_addr_t *)skb->cb;
-+ *mapping = pci_map_single(priv->pdev, skb_tail_pointer(skb),
-+ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
-+ entry->rx_buf = cpu_to_le32(*mapping);
-+ entry->flags = cpu_to_le32(RTL8180_RX_DESC_FLAG_OWN |
-+ MAX_RX_SIZE);
-+ }
-+ entry->flags |= cpu_to_le32(RTL8180_RX_DESC_FLAG_EOR);
-+ return 0;
-+}
-+
-+static void rtl8180_free_rx_ring(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int i;
-+
-+ for (i = 0; i < 32; i++) {
-+ struct sk_buff *skb = priv->rx_buf[i];
-+ if (!skb)
-+ continue;
-+
-+ pci_unmap_single(priv->pdev,
-+ *((dma_addr_t *)skb->cb),
-+ MAX_RX_SIZE, PCI_DMA_FROMDEVICE);
-+ kfree_skb(skb);
-+ }
-+
-+ pci_free_consistent(priv->pdev, sizeof(*priv->rx_ring) * 32,
-+ priv->rx_ring, priv->rx_ring_dma);
-+ priv->rx_ring = NULL;
-+}
-+
-+static int rtl8180_init_tx_ring(struct ieee80211_hw *dev,
-+ unsigned int prio, unsigned int entries)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ struct rtl8180_tx_desc *ring;
-+ dma_addr_t dma;
-+ int i;
-+
-+ ring = pci_alloc_consistent(priv->pdev, sizeof(*ring) * entries, &dma);
-+ if (!ring || (unsigned long)ring & 0xFF) {
-+ printk(KERN_ERR "%s: Cannot allocate TX ring (prio = %d)\n",
-+ wiphy_name(dev->wiphy), prio);
-+ return -ENOMEM;
-+ }
-+
-+ memset(ring, 0, sizeof(*ring)*entries);
-+ priv->tx_ring[prio].desc = ring;
-+ priv->tx_ring[prio].dma = dma;
-+ priv->tx_ring[prio].idx = 0;
-+ priv->tx_ring[prio].entries = entries;
-+ skb_queue_head_init(&priv->tx_ring[prio].queue);
-+
-+ for (i = 0; i < entries; i++)
-+ ring[i].next_tx_desc =
-+ cpu_to_le32((u32)dma + ((i + 1) % entries) * sizeof(*ring));
-+
-+ return 0;
-+}
-+
-+static void rtl8180_free_tx_ring(struct ieee80211_hw *dev, unsigned int prio)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ struct rtl8180_tx_ring *ring = &priv->tx_ring[prio];
-+
-+ while (skb_queue_len(&ring->queue)) {
-+ struct rtl8180_tx_desc *entry = &ring->desc[ring->idx];
-+ struct sk_buff *skb = __skb_dequeue(&ring->queue);
-+
-+ pci_unmap_single(priv->pdev, le32_to_cpu(entry->tx_buf),
-+ skb->len, PCI_DMA_TODEVICE);
-+ kfree(*((struct ieee80211_tx_control **) skb->cb));
-+ kfree_skb(skb);
-+ ring->idx = (ring->idx + 1) % ring->entries;
-+ }
-+
-+ pci_free_consistent(priv->pdev, sizeof(*ring->desc)*ring->entries,
-+ ring->desc, ring->dma);
-+ ring->desc = NULL;
-+}
-+
-+static int rtl8180_start(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int ret, i;
-+ u32 reg;
-+
-+ ret = rtl8180_init_rx_ring(dev);
-+ if (ret)
-+ return ret;
-+
-+ for (i = 0; i < 4; i++)
-+ if ((ret = rtl8180_init_tx_ring(dev, i, 16)))
-+ goto err_free_rings;
-+
-+ ret = rtl8180_init_hw(dev);
-+ if (ret)
-+ goto err_free_rings;
-+
-+ rtl818x_iowrite32(priv, &priv->map->RDSAR, priv->rx_ring_dma);
-+ rtl818x_iowrite32(priv, &priv->map->TBDA, priv->tx_ring[3].dma);
-+ rtl818x_iowrite32(priv, &priv->map->THPDA, priv->tx_ring[2].dma);
-+ rtl818x_iowrite32(priv, &priv->map->TNPDA, priv->tx_ring[1].dma);
-+ rtl818x_iowrite32(priv, &priv->map->TLPDA, priv->tx_ring[0].dma);
-+
-+ ret = request_irq(priv->pdev->irq, &rtl8180_interrupt,
-+ IRQF_SHARED, KBUILD_MODNAME, dev);
-+ if (ret) {
-+ printk(KERN_ERR "%s: failed to register IRQ handler\n",
-+ wiphy_name(dev->wiphy));
-+ goto err_free_rings;
-+ }
-+
-+ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0xFFFF);
-+
-+ rtl818x_iowrite32(priv, &priv->map->MAR[0], ~0);
-+ rtl818x_iowrite32(priv, &priv->map->MAR[1], ~0);
-+
-+ reg = RTL818X_RX_CONF_ONLYERLPKT |
-+ RTL818X_RX_CONF_RX_AUTORESETPHY |
-+ RTL818X_RX_CONF_MGMT |
-+ RTL818X_RX_CONF_DATA |
-+ (7 << 8 /* MAX RX DMA */) |
-+ RTL818X_RX_CONF_BROADCAST |
-+ RTL818X_RX_CONF_NICMAC;
-+
-+ if (priv->r8185)
-+ reg |= RTL818X_RX_CONF_CSDM1 | RTL818X_RX_CONF_CSDM2;
-+ else {
-+ reg |= (priv->rfparam & RF_PARAM_CARRIERSENSE1)
-+ ? RTL818X_RX_CONF_CSDM1 : 0;
-+ reg |= (priv->rfparam & RF_PARAM_CARRIERSENSE2)
-+ ? RTL818X_RX_CONF_CSDM2 : 0;
-+ }
-+
-+ priv->rx_conf = reg;
-+ rtl818x_iowrite32(priv, &priv->map->RX_CONF, reg);
-+
-+ if (priv->r8185) {
-+ reg = rtl818x_ioread8(priv, &priv->map->CW_CONF);
-+ reg &= ~RTL818X_CW_CONF_PERPACKET_CW_SHIFT;
-+ reg |= RTL818X_CW_CONF_PERPACKET_RETRY_SHIFT;
-+ rtl818x_iowrite8(priv, &priv->map->CW_CONF, reg);
-+
-+ reg = rtl818x_ioread8(priv, &priv->map->TX_AGC_CTL);
-+ reg &= ~RTL818X_TX_AGC_CTL_PERPACKET_GAIN_SHIFT;
-+ reg &= ~RTL818X_TX_AGC_CTL_PERPACKET_ANTSEL_SHIFT;
-+ reg |= RTL818X_TX_AGC_CTL_FEEDBACK_ANT;
-+ rtl818x_iowrite8(priv, &priv->map->TX_AGC_CTL, reg);
-+
-+ /* disable early TX */
-+ rtl818x_iowrite8(priv, (u8 __iomem *)priv->map + 0xec, 0x3f);
-+ }
+ return erp;
+diff --git a/drivers/s390/block/dasd_9336_erp.c b/drivers/s390/block/dasd_9336_erp.c
+deleted file mode 100644
+index 6e08268..0000000
+--- a/drivers/s390/block/dasd_9336_erp.c
++++ /dev/null
+@@ -1,41 +0,0 @@
+-/*
+- * File...........: linux/drivers/s390/block/dasd_9336_erp.c
+- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
+- * Bugreports.to..: <Linux390 at de.ibm.com>
+- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+- *
+- */
+-
+-#define PRINTK_HEADER "dasd_erp(9336)"
+-
+-#include "dasd_int.h"
+-
+-
+-/*
+- * DASD_9336_ERP_EXAMINE
+- *
+- * DESCRIPTION
+- * Checks only for fatal/no/recover error.
+- * A detailed examination of the sense data is done later outside
+- * the interrupt handler.
+- *
+- * The logic is based on the 'IBM 3880 Storage Control Reference' manual
+- * 'Chapter 7. 9336 Sense Data'.
+- *
+- * RETURN VALUES
+- * dasd_era_none no error
+- * dasd_era_fatal for all fatal (unrecoverable errors)
+- * dasd_era_recover for all others.
+- */
+-dasd_era_t
+-dasd_9336_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
+-{
+- /* check for successful execution first */
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+-
+- /* examine the 24 byte sense data */
+- return dasd_era_recover;
+-
+-} /* END dasd_9336_erp_examine */
+diff --git a/drivers/s390/block/dasd_9343_erp.c b/drivers/s390/block/dasd_9343_erp.c
+deleted file mode 100644
+index ddecb98..0000000
+--- a/drivers/s390/block/dasd_9343_erp.c
++++ /dev/null
+@@ -1,21 +0,0 @@
+-/*
+- * File...........: linux/drivers/s390/block/dasd_9345_erp.c
+- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
+- * Bugreports.to..: <Linux390 at de.ibm.com>
+- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+- *
+- */
+-
+-#define PRINTK_HEADER "dasd_erp(9343)"
+-
+-#include "dasd_int.h"
+-
+-dasd_era_t
+-dasd_9343_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
+-{
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+-
+- return dasd_era_recover;
+-}
+diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
+new file mode 100644
+index 0000000..3a40bee
+--- /dev/null
++++ b/drivers/s390/block/dasd_alias.c
+@@ -0,0 +1,903 @@
++/*
++ * PAV alias management for the DASD ECKD discipline
++ *
++ * Copyright IBM Corporation, 2007
++ * Author(s): Stefan Weinhuber <wein at de.ibm.com>
++ */
+
-+ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
-+ reg |= (6 << 21 /* MAX TX DMA */) |
-+ RTL818X_TX_CONF_NO_ICV;
++#include <linux/list.h>
++#include <asm/ebcdic.h>
++#include "dasd_int.h"
++#include "dasd_eckd.h"
+
-+ if (priv->r8185)
-+ reg &= ~RTL818X_TX_CONF_PROBE_DTS;
-+ else
-+ reg &= ~RTL818X_TX_CONF_HW_SEQNUM;
++#ifdef PRINTK_HEADER
++#undef PRINTK_HEADER
++#endif /* PRINTK_HEADER */
++#define PRINTK_HEADER "dasd(eckd):"
+
-+ /* different meaning, same value on both rtl8185 and rtl8180 */
-+ reg &= ~RTL818X_TX_CONF_SAT_HWPLCP;
+
-+ rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
++/*
++ * General concept of alias management:
++ * - PAV and DASD alias management is specific to the eckd discipline.
++ * - A device is connected to an lcu as long as the device exists.
++ * dasd_alias_make_device_known_to_lcu will be called when the
++ * device is checked by the eckd discipline and
++ * dasd_alias_disconnect_device_from_lcu will be called
++ * before the device is deleted.
++ * - The dasd_alias_add_device / dasd_alias_remove_device
++ * functions mark the point when a device is 'ready for service'.
++ * - A summary unit check is a rare occasion, but it is mandatory to
++ * support it. It requires some complex recovery actions before the
++ * devices can be used again (see dasd_alias_handle_summary_unit_check).
++ * - dasd_alias_get_start_dev will find an alias device that can be used
++ * instead of the base device and does some (very simple) load balancing.
++ * This is the function that gets called for each I/O, so when improving
++ * something, this function should get faster or better, the rest has just
++ * to be correct.
++ */
+
-+ reg = rtl818x_ioread8(priv, &priv->map->CMD);
-+ reg |= RTL818X_CMD_RX_ENABLE;
-+ reg |= RTL818X_CMD_TX_ENABLE;
-+ rtl818x_iowrite8(priv, &priv->map->CMD, reg);
+
-+ priv->mode = IEEE80211_IF_TYPE_MNTR;
-+ return 0;
++static void summary_unit_check_handling_work(struct work_struct *);
++static void lcu_update_work(struct work_struct *);
++static int _schedule_lcu_update(struct alias_lcu *, struct dasd_device *);
+
-+ err_free_rings:
-+ rtl8180_free_rx_ring(dev);
-+ for (i = 0; i < 4; i++)
-+ if (priv->tx_ring[i].desc)
-+ rtl8180_free_tx_ring(dev, i);
++static struct alias_root aliastree = {
++ .serverlist = LIST_HEAD_INIT(aliastree.serverlist),
++ .lock = __SPIN_LOCK_UNLOCKED(aliastree.lock),
++};
+
-+ return ret;
++static struct alias_server *_find_server(struct dasd_uid *uid)
++{
++ struct alias_server *pos;
++ list_for_each_entry(pos, &aliastree.serverlist, server) {
++ if (!strncmp(pos->uid.vendor, uid->vendor,
++ sizeof(uid->vendor))
++ && !strncmp(pos->uid.serial, uid->serial,
++ sizeof(uid->serial)))
++ return pos;
++ };
++ return NULL;
+}
+
-+static void rtl8180_stop(struct ieee80211_hw *dev)
++static struct alias_lcu *_find_lcu(struct alias_server *server,
++ struct dasd_uid *uid)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 reg;
-+ int i;
-+
-+ priv->mode = IEEE80211_IF_TYPE_INVALID;
-+
-+ rtl818x_iowrite16(priv, &priv->map->INT_MASK, 0);
-+
-+ reg = rtl818x_ioread8(priv, &priv->map->CMD);
-+ reg &= ~RTL818X_CMD_TX_ENABLE;
-+ reg &= ~RTL818X_CMD_RX_ENABLE;
-+ rtl818x_iowrite8(priv, &priv->map->CMD, reg);
-+
-+ priv->rf->stop(dev);
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG4);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+
-+ free_irq(priv->pdev->irq, dev);
-+
-+ rtl8180_free_rx_ring(dev);
-+ for (i = 0; i < 4; i++)
-+ rtl8180_free_tx_ring(dev, i);
++ struct alias_lcu *pos;
++ list_for_each_entry(pos, &server->lculist, lcu) {
++ if (pos->uid.ssid == uid->ssid)
++ return pos;
++ };
++ return NULL;
+}
+
-+static int rtl8180_add_interface(struct ieee80211_hw *dev,
-+ struct ieee80211_if_init_conf *conf)
++static struct alias_pav_group *_find_group(struct alias_lcu *lcu,
++ struct dasd_uid *uid)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ if (priv->mode != IEEE80211_IF_TYPE_MNTR)
-+ return -EOPNOTSUPP;
++ struct alias_pav_group *pos;
++ __u8 search_unit_addr;
+
-+ switch (conf->type) {
-+ case IEEE80211_IF_TYPE_STA:
-+ priv->mode = conf->type;
-+ break;
-+ default:
-+ return -EOPNOTSUPP;
++ /* for hyper pav there is only one group */
++ if (lcu->pav == HYPER_PAV) {
++ if (list_empty(&lcu->grouplist))
++ return NULL;
++ else
++ return list_first_entry(&lcu->grouplist,
++ struct alias_pav_group, group);
+ }
+
-+ priv->vif = conf->vif;
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ rtl818x_iowrite32(priv, (__le32 __iomem *)&priv->map->MAC[0],
-+ cpu_to_le32(*(u32 *)conf->mac_addr));
-+ rtl818x_iowrite16(priv, (__le16 __iomem *)&priv->map->MAC[4],
-+ cpu_to_le16(*(u16 *)(conf->mac_addr + 4)));
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+
-+ return 0;
-+}
-+
-+static void rtl8180_remove_interface(struct ieee80211_hw *dev,
-+ struct ieee80211_if_init_conf *conf)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ priv->mode = IEEE80211_IF_TYPE_MNTR;
-+ priv->vif = NULL;
++ /* for base pav we have to find the group that matches the base */
++ if (uid->type == UA_BASE_DEVICE)
++ search_unit_addr = uid->real_unit_addr;
++ else
++ search_unit_addr = uid->base_unit_addr;
++ list_for_each_entry(pos, &lcu->grouplist, group) {
++ if (pos->uid.base_unit_addr == search_unit_addr)
++ return pos;
++ };
++ return NULL;
+}
+
-+static int rtl8180_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
++static struct alias_server *_allocate_server(struct dasd_uid *uid)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ priv->rf->set_chan(dev, conf);
++ struct alias_server *server;
+
-+ return 0;
++ server = kzalloc(sizeof(*server), GFP_KERNEL);
++ if (!server)
++ return ERR_PTR(-ENOMEM);
++ memcpy(server->uid.vendor, uid->vendor, sizeof(uid->vendor));
++ memcpy(server->uid.serial, uid->serial, sizeof(uid->serial));
++ INIT_LIST_HEAD(&server->server);
++ INIT_LIST_HEAD(&server->lculist);
++ return server;
+}
+
-+static int rtl8180_config_interface(struct ieee80211_hw *dev,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_if_conf *conf)
++static void _free_server(struct alias_server *server)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int i;
-+
-+ for (i = 0; i < ETH_ALEN; i++)
-+ rtl818x_iowrite8(priv, &priv->map->BSSID[i], conf->bssid[i]);
-+
-+ if (is_valid_ether_addr(conf->bssid))
-+ rtl818x_iowrite8(priv, &priv->map->MSR, RTL818X_MSR_INFRA);
-+ else
-+ rtl818x_iowrite8(priv, &priv->map->MSR, RTL818X_MSR_NO_LINK);
-+
-+ return 0;
++ kfree(server);
+}
+
-+static void rtl8180_configure_filter(struct ieee80211_hw *dev,
-+ unsigned int changed_flags,
-+ unsigned int *total_flags,
-+ int mc_count, struct dev_addr_list *mclist)
++static struct alias_lcu *_allocate_lcu(struct dasd_uid *uid)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ if (changed_flags & FIF_FCSFAIL)
-+ priv->rx_conf ^= RTL818X_RX_CONF_FCS;
-+ if (changed_flags & FIF_CONTROL)
-+ priv->rx_conf ^= RTL818X_RX_CONF_CTRL;
-+ if (changed_flags & FIF_OTHER_BSS)
-+ priv->rx_conf ^= RTL818X_RX_CONF_MONITOR;
-+ if (*total_flags & FIF_ALLMULTI || mc_count > 0)
-+ priv->rx_conf |= RTL818X_RX_CONF_MULTICAST;
-+ else
-+ priv->rx_conf &= ~RTL818X_RX_CONF_MULTICAST;
-+
-+ *total_flags = 0;
-+
-+ if (priv->rx_conf & RTL818X_RX_CONF_FCS)
-+ *total_flags |= FIF_FCSFAIL;
-+ if (priv->rx_conf & RTL818X_RX_CONF_CTRL)
-+ *total_flags |= FIF_CONTROL;
-+ if (priv->rx_conf & RTL818X_RX_CONF_MONITOR)
-+ *total_flags |= FIF_OTHER_BSS;
-+ if (priv->rx_conf & RTL818X_RX_CONF_MULTICAST)
-+ *total_flags |= FIF_ALLMULTI;
-+
-+ rtl818x_iowrite32(priv, &priv->map->RX_CONF, priv->rx_conf);
-+}
++ struct alias_lcu *lcu;
+
-+static const struct ieee80211_ops rtl8180_ops = {
-+ .tx = rtl8180_tx,
-+ .start = rtl8180_start,
-+ .stop = rtl8180_stop,
-+ .add_interface = rtl8180_add_interface,
-+ .remove_interface = rtl8180_remove_interface,
-+ .config = rtl8180_config,
-+ .config_interface = rtl8180_config_interface,
-+ .configure_filter = rtl8180_configure_filter,
-+};
++ lcu = kzalloc(sizeof(*lcu), GFP_KERNEL);
++ if (!lcu)
++ return ERR_PTR(-ENOMEM);
++ lcu->uac = kzalloc(sizeof(*(lcu->uac)), GFP_KERNEL | GFP_DMA);
++ if (!lcu->uac)
++ goto out_err1;
++ lcu->rsu_cqr = kzalloc(sizeof(*lcu->rsu_cqr), GFP_KERNEL | GFP_DMA);
++ if (!lcu->rsu_cqr)
++ goto out_err2;
++ lcu->rsu_cqr->cpaddr = kzalloc(sizeof(struct ccw1),
++ GFP_KERNEL | GFP_DMA);
++ if (!lcu->rsu_cqr->cpaddr)
++ goto out_err3;
++ lcu->rsu_cqr->data = kzalloc(16, GFP_KERNEL | GFP_DMA);
++ if (!lcu->rsu_cqr->data)
++ goto out_err4;
+
-+static void rtl8180_eeprom_register_read(struct eeprom_93cx6 *eeprom)
-+{
-+ struct ieee80211_hw *dev = eeprom->data;
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 reg = rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
++ memcpy(lcu->uid.vendor, uid->vendor, sizeof(uid->vendor));
++ memcpy(lcu->uid.serial, uid->serial, sizeof(uid->serial));
++ lcu->uid.ssid = uid->ssid;
++ lcu->pav = NO_PAV;
++ lcu->flags = NEED_UAC_UPDATE | UPDATE_PENDING;
++ INIT_LIST_HEAD(&lcu->lcu);
++ INIT_LIST_HEAD(&lcu->inactive_devices);
++ INIT_LIST_HEAD(&lcu->active_devices);
++ INIT_LIST_HEAD(&lcu->grouplist);
++ INIT_WORK(&lcu->suc_data.worker, summary_unit_check_handling_work);
++ INIT_DELAYED_WORK(&lcu->ruac_data.dwork, lcu_update_work);
++ spin_lock_init(&lcu->lock);
++ return lcu;
+
-+ eeprom->reg_data_in = reg & RTL818X_EEPROM_CMD_WRITE;
-+ eeprom->reg_data_out = reg & RTL818X_EEPROM_CMD_READ;
-+ eeprom->reg_data_clock = reg & RTL818X_EEPROM_CMD_CK;
-+ eeprom->reg_chip_select = reg & RTL818X_EEPROM_CMD_CS;
++out_err4:
++ kfree(lcu->rsu_cqr->cpaddr);
++out_err3:
++ kfree(lcu->rsu_cqr);
++out_err2:
++ kfree(lcu->uac);
++out_err1:
++ kfree(lcu);
++ return ERR_PTR(-ENOMEM);
+}
+
-+static void rtl8180_eeprom_register_write(struct eeprom_93cx6 *eeprom)
++static void _free_lcu(struct alias_lcu *lcu)
+{
-+ struct ieee80211_hw *dev = eeprom->data;
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 reg = 2 << 6;
-+
-+ if (eeprom->reg_data_in)
-+ reg |= RTL818X_EEPROM_CMD_WRITE;
-+ if (eeprom->reg_data_out)
-+ reg |= RTL818X_EEPROM_CMD_READ;
-+ if (eeprom->reg_data_clock)
-+ reg |= RTL818X_EEPROM_CMD_CK;
-+ if (eeprom->reg_chip_select)
-+ reg |= RTL818X_EEPROM_CMD_CS;
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, reg);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(10);
++ kfree(lcu->rsu_cqr->data);
++ kfree(lcu->rsu_cqr->cpaddr);
++ kfree(lcu->rsu_cqr);
++ kfree(lcu->uac);
++ kfree(lcu);
+}
+
-+static int __devinit rtl8180_probe(struct pci_dev *pdev,
-+ const struct pci_device_id *id)
++/*
++ * This is the function that will allocate all the server and lcu data,
++ * so this function must be called first for a new device.
++ * If the return value is 1, the lcu was already known before, if it
++ * is 0, this is a new lcu.
++ * Negative return code indicates that something went wrong (e.g. -ENOMEM)
++ */
++int dasd_alias_make_device_known_to_lcu(struct dasd_device *device)
+{
-+ struct ieee80211_hw *dev;
-+ struct rtl8180_priv *priv;
-+ unsigned long mem_addr, mem_len;
-+ unsigned int io_addr, io_len;
-+ int err, i;
-+ struct eeprom_93cx6 eeprom;
-+ const char *chip_name, *rf_name = NULL;
-+ u32 reg;
-+ u16 eeprom_val;
-+ DECLARE_MAC_BUF(mac);
-+
-+ err = pci_enable_device(pdev);
-+ if (err) {
-+ printk(KERN_ERR "%s (rtl8180): Cannot enable new PCI device\n",
-+ pci_name(pdev));
-+ return err;
-+ }
++ struct dasd_eckd_private *private;
++ unsigned long flags;
++ struct alias_server *server, *newserver;
++ struct alias_lcu *lcu, *newlcu;
++ int is_lcu_known;
++ struct dasd_uid *uid;
+
-+ err = pci_request_regions(pdev, KBUILD_MODNAME);
-+ if (err) {
-+ printk(KERN_ERR "%s (rtl8180): Cannot obtain PCI resources\n",
-+ pci_name(pdev));
-+ return err;
++ private = (struct dasd_eckd_private *) device->private;
++ uid = &private->uid;
++ spin_lock_irqsave(&aliastree.lock, flags);
++ is_lcu_known = 1;
++ server = _find_server(uid);
++ if (!server) {
++ spin_unlock_irqrestore(&aliastree.lock, flags);
++ newserver = _allocate_server(uid);
++ if (IS_ERR(newserver))
++ return PTR_ERR(newserver);
++ spin_lock_irqsave(&aliastree.lock, flags);
++ server = _find_server(uid);
++ if (!server) {
++ list_add(&newserver->server, &aliastree.serverlist);
++ server = newserver;
++ is_lcu_known = 0;
++ } else {
++ /* someone was faster */
++ _free_server(newserver);
++ }
+ }
+
-+ io_addr = pci_resource_start(pdev, 0);
-+ io_len = pci_resource_len(pdev, 0);
-+ mem_addr = pci_resource_start(pdev, 1);
-+ mem_len = pci_resource_len(pdev, 1);
-+
-+ if (mem_len < sizeof(struct rtl818x_csr) ||
-+ io_len < sizeof(struct rtl818x_csr)) {
-+ printk(KERN_ERR "%s (rtl8180): Too short PCI resources\n",
-+ pci_name(pdev));
-+ err = -ENOMEM;
-+ goto err_free_reg;
++ lcu = _find_lcu(server, uid);
++ if (!lcu) {
++ spin_unlock_irqrestore(&aliastree.lock, flags);
++ newlcu = _allocate_lcu(uid);
++ if (IS_ERR(newlcu))
++ return PTR_ERR(newlcu);
++ spin_lock_irqsave(&aliastree.lock, flags);
++ lcu = _find_lcu(server, uid);
++ if (!lcu) {
++ list_add(&newlcu->lcu, &server->lculist);
++ lcu = newlcu;
++ is_lcu_known = 0;
++ } else {
++ /* someone was faster */
++ _free_lcu(newlcu);
++ }
++ is_lcu_known = 0;
+ }
++ spin_lock(&lcu->lock);
++ list_add(&device->alias_list, &lcu->inactive_devices);
++ private->lcu = lcu;
++ spin_unlock(&lcu->lock);
++ spin_unlock_irqrestore(&aliastree.lock, flags);
+
-+ if ((err = pci_set_dma_mask(pdev, 0xFFFFFF00ULL)) ||
-+ (err = pci_set_consistent_dma_mask(pdev, 0xFFFFFF00ULL))) {
-+ printk(KERN_ERR "%s (rtl8180): No suitable DMA available\n",
-+ pci_name(pdev));
-+ goto err_free_reg;
-+ }
++ return is_lcu_known;
++}
+
-+ pci_set_master(pdev);
++/*
++ * This function removes a device from the scope of alias management.
++ * The complicated part is to make sure that it is not in use by
++ * any of the workers. If necessary cancel the work.
++ */
++void dasd_alias_disconnect_device_from_lcu(struct dasd_device *device)
++{
++ struct dasd_eckd_private *private;
++ unsigned long flags;
++ struct alias_lcu *lcu;
++ struct alias_server *server;
++ int was_pending;
+
-+ dev = ieee80211_alloc_hw(sizeof(*priv), &rtl8180_ops);
-+ if (!dev) {
-+ printk(KERN_ERR "%s (rtl8180): ieee80211 alloc failed\n",
-+ pci_name(pdev));
-+ err = -ENOMEM;
-+ goto err_free_reg;
++ private = (struct dasd_eckd_private *) device->private;
++ lcu = private->lcu;
++ spin_lock_irqsave(&lcu->lock, flags);
++ list_del_init(&device->alias_list);
++ /* make sure that the workers don't use this device */
++ if (device == lcu->suc_data.device) {
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ cancel_work_sync(&lcu->suc_data.worker);
++ spin_lock_irqsave(&lcu->lock, flags);
++ if (device == lcu->suc_data.device)
++ lcu->suc_data.device = NULL;
+ }
-+
-+ priv = dev->priv;
-+ priv->pdev = pdev;
-+
-+ SET_IEEE80211_DEV(dev, &pdev->dev);
-+ pci_set_drvdata(pdev, dev);
-+
-+ priv->map = pci_iomap(pdev, 1, mem_len);
-+ if (!priv->map)
-+ priv->map = pci_iomap(pdev, 0, io_len);
-+
-+ if (!priv->map) {
-+ printk(KERN_ERR "%s (rtl8180): Cannot map device memory\n",
-+ pci_name(pdev));
-+ goto err_free_dev;
++ was_pending = 0;
++ if (device == lcu->ruac_data.device) {
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ was_pending = 1;
++ cancel_delayed_work_sync(&lcu->ruac_data.dwork);
++ spin_lock_irqsave(&lcu->lock, flags);
++ if (device == lcu->ruac_data.device)
++ lcu->ruac_data.device = NULL;
+ }
++ private->lcu = NULL;
++ spin_unlock_irqrestore(&lcu->lock, flags);
+
-+ memcpy(priv->channels, rtl818x_channels, sizeof(rtl818x_channels));
-+ memcpy(priv->rates, rtl818x_rates, sizeof(rtl818x_rates));
-+ priv->modes[0].mode = MODE_IEEE80211G;
-+ priv->modes[0].num_rates = ARRAY_SIZE(rtl818x_rates);
-+ priv->modes[0].rates = priv->rates;
-+ priv->modes[0].num_channels = ARRAY_SIZE(rtl818x_channels);
-+ priv->modes[0].channels = priv->channels;
-+ priv->modes[1].mode = MODE_IEEE80211B;
-+ priv->modes[1].num_rates = 4;
-+ priv->modes[1].rates = priv->rates;
-+ priv->modes[1].num_channels = ARRAY_SIZE(rtl818x_channels);
-+ priv->modes[1].channels = priv->channels;
-+ priv->mode = IEEE80211_IF_TYPE_INVALID;
-+ dev->flags = IEEE80211_HW_HOST_BROADCAST_PS_BUFFERING |
-+ IEEE80211_HW_RX_INCLUDES_FCS;
-+ dev->queues = 1;
-+ dev->max_rssi = 65;
-+
-+ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
-+ reg &= RTL818X_TX_CONF_HWVER_MASK;
-+ switch (reg) {
-+ case RTL818X_TX_CONF_R8180_ABCD:
-+ chip_name = "RTL8180";
-+ break;
-+ case RTL818X_TX_CONF_R8180_F:
-+ chip_name = "RTL8180vF";
-+ break;
-+ case RTL818X_TX_CONF_R8185_ABC:
-+ chip_name = "RTL8185";
-+ break;
-+ case RTL818X_TX_CONF_R8185_D:
-+ chip_name = "RTL8185vD";
-+ break;
-+ default:
-+ printk(KERN_ERR "%s (rtl8180): Unknown chip! (0x%x)\n",
-+ pci_name(pdev), reg >> 25);
-+ goto err_iounmap;
++ spin_lock_irqsave(&aliastree.lock, flags);
++ spin_lock(&lcu->lock);
++ if (list_empty(&lcu->grouplist) &&
++ list_empty(&lcu->active_devices) &&
++ list_empty(&lcu->inactive_devices)) {
++ list_del(&lcu->lcu);
++ spin_unlock(&lcu->lock);
++ _free_lcu(lcu);
++ lcu = NULL;
++ } else {
++ if (was_pending)
++ _schedule_lcu_update(lcu, NULL);
++ spin_unlock(&lcu->lock);
+ }
-+
-+ priv->r8185 = reg & RTL818X_TX_CONF_R8185_ABC;
-+ if (priv->r8185) {
-+ if ((err = ieee80211_register_hwmode(dev, &priv->modes[0])))
-+ goto err_iounmap;
-+
-+ pci_try_set_mwi(pdev);
++ server = _find_server(&private->uid);
++ if (server && list_empty(&server->lculist)) {
++ list_del(&server->server);
++ _free_server(server);
+ }
++ spin_unlock_irqrestore(&aliastree.lock, flags);
++}
+
-+ if ((err = ieee80211_register_hwmode(dev, &priv->modes[1])))
-+ goto err_iounmap;
-+
-+ eeprom.data = dev;
-+ eeprom.register_read = rtl8180_eeprom_register_read;
-+ eeprom.register_write = rtl8180_eeprom_register_write;
-+ if (rtl818x_ioread32(priv, &priv->map->RX_CONF) & (1 << 6))
-+ eeprom.width = PCI_EEPROM_WIDTH_93C66;
-+ else
-+ eeprom.width = PCI_EEPROM_WIDTH_93C46;
++/*
++ * This function assumes that the unit address configuration stored
++ * in the lcu is up to date and will update the device uid before
++ * adding it to a pav group.
++ */
++static int _add_device_to_lcu(struct alias_lcu *lcu,
++ struct dasd_device *device)
++{
+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_PROGRAM);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(10);
++ struct dasd_eckd_private *private;
++ struct alias_pav_group *group;
++ struct dasd_uid *uid;
+
-+ eeprom_93cx6_read(&eeprom, 0x06, &eeprom_val);
-+ eeprom_val &= 0xFF;
-+ switch (eeprom_val) {
-+ case 1: rf_name = "Intersil";
-+ break;
-+ case 2: rf_name = "RFMD";
-+ break;
-+ case 3: priv->rf = &sa2400_rf_ops;
-+ break;
-+ case 4: priv->rf = &max2820_rf_ops;
-+ break;
-+ case 5: priv->rf = &grf5101_rf_ops;
-+ break;
-+ case 9: priv->rf = rtl8180_detect_rf(dev);
-+ break;
-+ case 10:
-+ rf_name = "RTL8255";
-+ break;
-+ default:
-+ printk(KERN_ERR "%s (rtl8180): Unknown RF! (0x%x)\n",
-+ pci_name(pdev), eeprom_val);
-+ goto err_iounmap;
-+ }
++ private = (struct dasd_eckd_private *) device->private;
++ uid = &private->uid;
++ uid->type = lcu->uac->unit[uid->real_unit_addr].ua_type;
++ uid->base_unit_addr = lcu->uac->unit[uid->real_unit_addr].base_ua;
++ dasd_set_uid(device->cdev, &private->uid);
+
-+ if (!priv->rf) {
-+ printk(KERN_ERR "%s (rtl8180): %s RF frontend not supported!\n",
-+ pci_name(pdev), rf_name);
-+ goto err_iounmap;
++ /* if we have no PAV anyway, we don't need to bother with PAV groups */
++ if (lcu->pav == NO_PAV) {
++ list_move(&device->alias_list, &lcu->active_devices);
++ return 0;
+ }
+
-+ eeprom_93cx6_read(&eeprom, 0x17, &eeprom_val);
-+ priv->csthreshold = eeprom_val >> 8;
-+ if (!priv->r8185) {
-+ __le32 anaparam;
-+ eeprom_93cx6_multiread(&eeprom, 0xD, (__le16 *)&anaparam, 2);
-+ priv->anaparam = le32_to_cpu(anaparam);
-+ eeprom_93cx6_read(&eeprom, 0x19, &priv->rfparam);
++ group = _find_group(lcu, uid);
++ if (!group) {
++ group = kzalloc(sizeof(*group), GFP_ATOMIC);
++ if (!group)
++ return -ENOMEM;
++ memcpy(group->uid.vendor, uid->vendor, sizeof(uid->vendor));
++ memcpy(group->uid.serial, uid->serial, sizeof(uid->serial));
++ group->uid.ssid = uid->ssid;
++ if (uid->type == UA_BASE_DEVICE)
++ group->uid.base_unit_addr = uid->real_unit_addr;
++ else
++ group->uid.base_unit_addr = uid->base_unit_addr;
++ INIT_LIST_HEAD(&group->group);
++ INIT_LIST_HEAD(&group->baselist);
++ INIT_LIST_HEAD(&group->aliaslist);
++ list_add(&group->group, &lcu->grouplist);
+ }
++ if (uid->type == UA_BASE_DEVICE)
++ list_move(&device->alias_list, &group->baselist);
++ else
++ list_move(&device->alias_list, &group->aliaslist);
++ private->pavgroup = group;
++ return 0;
++}
+
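The grouping rule that _add_device_to_lcu() applies above (a base device is filed under its own unit address, an alias under the address of the base it serves) can be sketched as a standalone userspace helper. The enum and struct below are illustrative stand-ins for the kernel types, not the real definitions:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's uid types (assumption). */
enum ua_type { UA_BASE_DEVICE, UA_BASE_PAV_ALIAS };

struct uid {
	enum ua_type type;
	unsigned char real_unit_addr;
	unsigned char base_unit_addr;
};

/* A PAV group is keyed by a base unit address: a base device
 * contributes its own real address, an alias the address of the
 * base it points at, so both land in the same group. */
static unsigned char group_key(const struct uid *uid)
{
	return (uid->type == UA_BASE_DEVICE)
		? uid->real_unit_addr
		: uid->base_unit_addr;
}
```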
-+ eeprom_93cx6_multiread(&eeprom, 0x7, (__le16 *)dev->wiphy->perm_addr, 3);
-+ if (!is_valid_ether_addr(dev->wiphy->perm_addr)) {
-+ printk(KERN_WARNING "%s (rtl8180): Invalid hwaddr! Using"
-+ " randomly generated MAC addr\n", pci_name(pdev));
-+ random_ether_addr(dev->wiphy->perm_addr);
-+ }
++static void _remove_device_from_lcu(struct alias_lcu *lcu,
++ struct dasd_device *device)
++{
++ struct dasd_eckd_private *private;
++ struct alias_pav_group *group;
+
-+ /* CCK TX power */
-+ for (i = 0; i < 14; i += 2) {
-+ u16 txpwr;
-+ eeprom_93cx6_read(&eeprom, 0x10 + (i >> 1), &txpwr);
-+ priv->channels[i].val = txpwr & 0xFF;
-+ priv->channels[i + 1].val = txpwr >> 8;
++ private = (struct dasd_eckd_private *) device->private;
++ list_move(&device->alias_list, &lcu->inactive_devices);
++ group = private->pavgroup;
++ if (!group)
++ return;
++ private->pavgroup = NULL;
++ if (list_empty(&group->baselist) && list_empty(&group->aliaslist)) {
++ list_del(&group->group);
++ kfree(group);
++ return;
+ }
++ if (group->next == device)
++ group->next = NULL;
++}
+
-+ /* OFDM TX power */
-+ if (priv->r8185) {
-+ for (i = 0; i < 14; i += 2) {
-+ u16 txpwr;
-+ eeprom_93cx6_read(&eeprom, 0x20 + (i >> 1), &txpwr);
-+ priv->channels[i].val |= (txpwr & 0xFF) << 8;
-+ priv->channels[i + 1].val |= txpwr & 0xFF00;
-+ }
-+ }
++static int read_unit_address_configuration(struct dasd_device *device,
++ struct alias_lcu *lcu)
++{
++ struct dasd_psf_prssd_data *prssdp;
++ struct dasd_ccw_req *cqr;
++ struct ccw1 *ccw;
++ int rc;
++ unsigned long flags;
+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++ cqr = dasd_kmalloc_request("ECKD",
++ 1 /* PSF */ + 1 /* RSSD */,
++ (sizeof(struct dasd_psf_prssd_data)),
++ device);
++ if (IS_ERR(cqr))
++ return PTR_ERR(cqr);
++ cqr->startdev = device;
++ cqr->memdev = device;
++ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
++ cqr->retries = 10;
++ cqr->expires = 20 * HZ;
+
-+ spin_lock_init(&priv->lock);
++ /* Prepare for Read Subsystem Data */
++ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
++ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
++ prssdp->order = PSF_ORDER_PRSSD;
++ prssdp->suborder = 0x0e; /* Read unit address configuration */
++ /* all other bytes of prssdp must be zero */
+
-+ err = ieee80211_register_hw(dev);
-+ if (err) {
-+ printk(KERN_ERR "%s (rtl8180): Cannot register device\n",
-+ pci_name(pdev));
-+ goto err_iounmap;
-+ }
++ ccw = cqr->cpaddr;
++ ccw->cmd_code = DASD_ECKD_CCW_PSF;
++ ccw->count = sizeof(struct dasd_psf_prssd_data);
++ ccw->flags |= CCW_FLAG_CC;
++ ccw->cda = (__u32)(addr_t) prssdp;
+
-+ printk(KERN_INFO "%s: hwaddr %s, %s + %s\n",
-+ wiphy_name(dev->wiphy), print_mac(mac, dev->wiphy->perm_addr),
-+ chip_name, priv->rf->name);
++ /* Read Subsystem Data - feature codes */
++ memset(lcu->uac, 0, sizeof(*(lcu->uac)));
+
-+ return 0;
++ ccw++;
++ ccw->cmd_code = DASD_ECKD_CCW_RSSD;
++ ccw->count = sizeof(*(lcu->uac));
++ ccw->cda = (__u32)(addr_t) lcu->uac;
+
-+ err_iounmap:
-+ iounmap(priv->map);
++ cqr->buildclk = get_clock();
++ cqr->status = DASD_CQR_FILLED;
+
-+ err_free_dev:
-+ pci_set_drvdata(pdev, NULL);
-+ ieee80211_free_hw(dev);
++ /* need to unset flag here to detect race with summary unit check */
++ spin_lock_irqsave(&lcu->lock, flags);
++ lcu->flags &= ~NEED_UAC_UPDATE;
++ spin_unlock_irqrestore(&lcu->lock, flags);
+
-+ err_free_reg:
-+ pci_release_regions(pdev);
-+ pci_disable_device(pdev);
-+ return err;
++ do {
++ rc = dasd_sleep_on(cqr);
++ } while (rc && (cqr->retries > 0));
++ if (rc) {
++ spin_lock_irqsave(&lcu->lock, flags);
++ lcu->flags |= NEED_UAC_UPDATE;
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ }
++ dasd_kfree_request(cqr, cqr->memdev);
++ return rc;
+}
+
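The do/while loop in read_unit_address_configuration() above is a bounded-retry pattern: resubmit the blocking request while it fails and attempts remain. A minimal userspace sketch of that shape, where submit() is a hypothetical stand-in for dasd_sleep_on():

```c
#include <assert.h>
#include <stddef.h>

/* Bounded retry around a blocking submit: keep resubmitting while
 * the call fails and retries remain, return the last status. */
static int submit_with_retries(int (*submit)(void *), void *req, int retries)
{
	int rc;

	do {
		rc = submit(req);
	} while (rc && retries-- > 0);
	return rc;
}
```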
-+static void __devexit rtl8180_remove(struct pci_dev *pdev)
++static int _lcu_update(struct dasd_device *refdev, struct alias_lcu *lcu)
+{
-+ struct ieee80211_hw *dev = pci_get_drvdata(pdev);
-+ struct rtl8180_priv *priv;
-+
-+ if (!dev)
-+ return;
-+
-+ ieee80211_unregister_hw(dev);
++ unsigned long flags;
++ struct alias_pav_group *pavgroup, *tempgroup;
++ struct dasd_device *device, *tempdev;
++ int i, rc;
++ struct dasd_eckd_private *private;
+
-+ priv = dev->priv;
++ spin_lock_irqsave(&lcu->lock, flags);
++ list_for_each_entry_safe(pavgroup, tempgroup, &lcu->grouplist, group) {
++ list_for_each_entry_safe(device, tempdev, &pavgroup->baselist,
++ alias_list) {
++ list_move(&device->alias_list, &lcu->active_devices);
++ private = (struct dasd_eckd_private *) device->private;
++ private->pavgroup = NULL;
++ }
++ list_for_each_entry_safe(device, tempdev, &pavgroup->aliaslist,
++ alias_list) {
++ list_move(&device->alias_list, &lcu->active_devices);
++ private = (struct dasd_eckd_private *) device->private;
++ private->pavgroup = NULL;
++ }
++ list_del(&pavgroup->group);
++ kfree(pavgroup);
++ }
++ spin_unlock_irqrestore(&lcu->lock, flags);
+
-+ pci_iounmap(pdev, priv->map);
-+ pci_release_regions(pdev);
-+ pci_disable_device(pdev);
-+ ieee80211_free_hw(dev);
-+}
++ rc = read_unit_address_configuration(refdev, lcu);
++ if (rc)
++ return rc;
+
-+#ifdef CONFIG_PM
-+static int rtl8180_suspend(struct pci_dev *pdev, pm_message_t state)
-+{
-+ pci_save_state(pdev);
-+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
-+ return 0;
-+}
++ spin_lock_irqsave(&lcu->lock, flags);
++ lcu->pav = NO_PAV;
++ for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) {
++ switch (lcu->uac->unit[i].ua_type) {
++ case UA_BASE_PAV_ALIAS:
++ lcu->pav = BASE_PAV;
++ break;
++ case UA_HYPER_PAV_ALIAS:
++ lcu->pav = HYPER_PAV;
++ break;
++ }
++ if (lcu->pav != NO_PAV)
++ break;
++ }
+
-+static int rtl8180_resume(struct pci_dev *pdev)
-+{
-+ pci_set_power_state(pdev, PCI_D0);
-+ pci_restore_state(pdev);
++ list_for_each_entry_safe(device, tempdev, &lcu->active_devices,
++ alias_list) {
++ _add_device_to_lcu(lcu, device);
++ }
++ spin_unlock_irqrestore(&lcu->lock, flags);
+ return 0;
+}
+
-+#endif /* CONFIG_PM */
-+
-+static struct pci_driver rtl8180_driver = {
-+ .name = KBUILD_MODNAME,
-+ .id_table = rtl8180_table,
-+ .probe = rtl8180_probe,
-+ .remove = __devexit_p(rtl8180_remove),
-+#ifdef CONFIG_PM
-+ .suspend = rtl8180_suspend,
-+ .resume = rtl8180_resume,
-+#endif /* CONFIG_PM */
-+};
-+
-+static int __init rtl8180_init(void)
-+{
-+ return pci_register_driver(&rtl8180_driver);
-+}
-+
-+static void __exit rtl8180_exit(void)
-+{
-+ pci_unregister_driver(&rtl8180_driver);
-+}
-+
-+module_init(rtl8180_init);
-+module_exit(rtl8180_exit);
-diff --git a/drivers/net/wireless/rtl8180_grf5101.c b/drivers/net/wireless/rtl8180_grf5101.c
-new file mode 100644
-index 0000000..8293e19
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_grf5101.c
-@@ -0,0 +1,179 @@
-+
-+/*
-+ * Radio tuning for GCT GRF5101 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project have been
-+ * very useful to understand certain things
-+ *
-+ * I want to thanks the Authors of such projects and the Ndiswrapper
-+ * project Authors.
-+ *
-+ * A special Big Thanks also is for all people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ */
-+
-+#include <linux/init.h>
-+#include <linux/pci.h>
-+#include <linux/delay.h>
-+#include <net/mac80211.h>
-+
-+#include "rtl8180.h"
-+#include "rtl8180_grf5101.h"
-+
-+static const int grf5101_encode[] = {
-+ 0x0, 0x8, 0x4, 0xC,
-+ 0x2, 0xA, 0x6, 0xE,
-+ 0x1, 0x9, 0x5, 0xD,
-+ 0x3, 0xB, 0x7, 0xF
-+};
-+
-+static void write_grf5101(struct ieee80211_hw *dev, u8 addr, u32 data)
++static void lcu_update_work(struct work_struct *work)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 phy_config;
-+
-+ phy_config = grf5101_encode[(data >> 8) & 0xF];
-+ phy_config |= grf5101_encode[(data >> 4) & 0xF] << 4;
-+ phy_config |= grf5101_encode[data & 0xF] << 8;
-+ phy_config |= grf5101_encode[(addr >> 1) & 0xF] << 12;
-+ phy_config |= (addr & 1) << 16;
-+ phy_config |= grf5101_encode[(data & 0xf000) >> 12] << 24;
-+
-+ /* MAC will bang bits to the chip */
-+ phy_config |= 0x90000000;
-+
-+ rtl818x_iowrite32(priv,
-+ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
++ struct alias_lcu *lcu;
++ struct read_uac_work_data *ruac_data;
++ struct dasd_device *device;
++ unsigned long flags;
++ int rc;
+
-+ msleep(3);
++ ruac_data = container_of(work, struct read_uac_work_data, dwork.work);
++ lcu = container_of(ruac_data, struct alias_lcu, ruac_data);
++ device = ruac_data->device;
++ rc = _lcu_update(device, lcu);
++ /*
++ * Need to check flags again, as there could have been another
++ * prepare_update or a new device while we were still
++ * processing the data
++ */
++ spin_lock_irqsave(&lcu->lock, flags);
++ if (rc || (lcu->flags & NEED_UAC_UPDATE)) {
++ DEV_MESSAGE(KERN_WARNING, device, "could not update"
++ " alias data in lcu (rc = %d), retry later", rc);
++ schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ);
++ } else {
++ lcu->ruac_data.device = NULL;
++ lcu->flags &= ~UPDATE_PENDING;
++ }
++ spin_unlock_irqrestore(&lcu->lock, flags);
+}
+
-+static void grf5101_write_phy_antenna(struct ieee80211_hw *dev, short chan)
++static int _schedule_lcu_update(struct alias_lcu *lcu,
++ struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 ant = GRF5101_ANTENNA;
-+
-+ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
-+ ant |= BB_ANTENNA_B;
++ struct dasd_device *usedev = NULL;
++ struct alias_pav_group *group;
+
-+ if (chan == 14)
-+ ant |= BB_ANTATTEN_CHAN14;
++ lcu->flags |= NEED_UAC_UPDATE;
++ if (lcu->ruac_data.device) {
++ /* already scheduled or running */
++ return 0;
++ }
++ if (device && !list_empty(&device->alias_list))
++ usedev = device;
+
-+ rtl8180_write_phy(dev, 0x10, ant);
++ if (!usedev && !list_empty(&lcu->grouplist)) {
++ group = list_first_entry(&lcu->grouplist,
++ struct alias_pav_group, group);
++ if (!list_empty(&group->baselist))
++ usedev = list_first_entry(&group->baselist,
++ struct dasd_device,
++ alias_list);
++ else if (!list_empty(&group->aliaslist))
++ usedev = list_first_entry(&group->aliaslist,
++ struct dasd_device,
++ alias_list);
++ }
++ if (!usedev && !list_empty(&lcu->active_devices)) {
++ usedev = list_first_entry(&lcu->active_devices,
++ struct dasd_device, alias_list);
++ }
++ /*
++ * if we haven't found a proper device yet, give up for now, the next
++ * device that will be set active will trigger an lcu update
++ */
++ if (!usedev)
++ return -EINVAL;
++ lcu->ruac_data.device = usedev;
++ schedule_delayed_work(&lcu->ruac_data.dwork, 0);
++ return 0;
+}
+
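_schedule_lcu_update() above picks the device that will carry the update request by a fixed fallback order. That order can be captured in a small sketch; the field names are illustrative, and the kernel walks list_heads rather than checking pointers:

```c
#include <assert.h>
#include <stddef.h>

/* Candidate devices an LCU update could be driven from; NULL
 * means the corresponding list is empty (illustrative layout). */
struct candidates {
	void *trigger_dev;  /* device that triggered the update, if usable */
	void *group_base;   /* first base device of the first PAV group */
	void *group_alias;  /* first alias device of the first PAV group */
	void *active_dev;   /* first entry of the active_devices list */
};

/* Fallback order used above: the triggering device first, then a
 * grouped base, a grouped alias, and finally any active device.
 * NULL means "give up for now"; the next device set active will
 * trigger the update instead. */
static void *pick_update_device(const struct candidates *c)
{
	if (c->trigger_dev)
		return c->trigger_dev;
	if (c->group_base)
		return c->group_base;
	if (c->group_alias)
		return c->group_alias;
	return c->active_dev;
}
```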
-+static void grf5101_rf_set_channel(struct ieee80211_hw *dev,
-+ struct ieee80211_conf *conf)
++int dasd_alias_add_device(struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 txpw = priv->channels[conf->channel - 1].val & 0xFF;
-+ u32 chan = conf->channel - 1;
-+
-+ /* set TX power */
-+ write_grf5101(dev, 0x15, 0x0);
-+ write_grf5101(dev, 0x06, txpw);
-+ write_grf5101(dev, 0x15, 0x10);
-+ write_grf5101(dev, 0x15, 0x0);
-+
-+ /* set frequency */
-+ write_grf5101(dev, 0x07, 0x0);
-+ write_grf5101(dev, 0x0B, chan);
-+ write_grf5101(dev, 0x07, 0x1000);
++ struct dasd_eckd_private *private;
++ struct alias_lcu *lcu;
++ unsigned long flags;
++ int rc;
+
-+ grf5101_write_phy_antenna(dev, chan);
++ private = (struct dasd_eckd_private *) device->private;
++ lcu = private->lcu;
++ rc = 0;
++ spin_lock_irqsave(&lcu->lock, flags);
++ if (!(lcu->flags & UPDATE_PENDING)) {
++ rc = _add_device_to_lcu(lcu, device);
++ if (rc)
++ lcu->flags |= UPDATE_PENDING;
++ }
++ if (lcu->flags & UPDATE_PENDING) {
++ list_move(&device->alias_list, &lcu->active_devices);
++ _schedule_lcu_update(lcu, device);
++ }
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ return rc;
+}
+
-+static void grf5101_rf_stop(struct ieee80211_hw *dev)
++int dasd_alias_remove_device(struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 anaparam;
-+
-+ anaparam = priv->anaparam;
-+ anaparam &= 0x000fffff;
-+ anaparam |= 0x3f900000;
-+ rtl8180_set_anaparam(priv, anaparam);
++ struct dasd_eckd_private *private;
++ struct alias_lcu *lcu;
++ unsigned long flags;
+
-+ write_grf5101(dev, 0x07, 0x0);
-+ write_grf5101(dev, 0x1f, 0x45);
-+ write_grf5101(dev, 0x1f, 0x5);
-+ write_grf5101(dev, 0x00, 0x8e4);
++ private = (struct dasd_eckd_private *) device->private;
++ lcu = private->lcu;
++ spin_lock_irqsave(&lcu->lock, flags);
++ _remove_device_from_lcu(lcu, device);
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ return 0;
+}
+
-+static void grf5101_rf_init(struct ieee80211_hw *dev)
++struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ rtl8180_set_anaparam(priv, priv->anaparam);
-+
-+ write_grf5101(dev, 0x1f, 0x0);
-+ write_grf5101(dev, 0x1f, 0x0);
-+ write_grf5101(dev, 0x1f, 0x40);
-+ write_grf5101(dev, 0x1f, 0x60);
-+ write_grf5101(dev, 0x1f, 0x61);
-+ write_grf5101(dev, 0x1f, 0x61);
-+ write_grf5101(dev, 0x00, 0xae4);
-+ write_grf5101(dev, 0x1f, 0x1);
-+ write_grf5101(dev, 0x1f, 0x41);
-+ write_grf5101(dev, 0x1f, 0x61);
-+
-+ write_grf5101(dev, 0x01, 0x1a23);
-+ write_grf5101(dev, 0x02, 0x4971);
-+ write_grf5101(dev, 0x03, 0x41de);
-+ write_grf5101(dev, 0x04, 0x2d80);
-+ write_grf5101(dev, 0x05, 0x68ff); /* 0x61ff original value */
-+ write_grf5101(dev, 0x06, 0x0);
-+ write_grf5101(dev, 0x07, 0x0);
-+ write_grf5101(dev, 0x08, 0x7533);
-+ write_grf5101(dev, 0x09, 0xc401);
-+ write_grf5101(dev, 0x0a, 0x0);
-+ write_grf5101(dev, 0x0c, 0x1c7);
-+ write_grf5101(dev, 0x0d, 0x29d3);
-+ write_grf5101(dev, 0x0e, 0x2e8);
-+ write_grf5101(dev, 0x10, 0x192);
-+ write_grf5101(dev, 0x11, 0x248);
-+ write_grf5101(dev, 0x12, 0x0);
-+ write_grf5101(dev, 0x13, 0x20c4);
-+ write_grf5101(dev, 0x14, 0xf4fc);
-+ write_grf5101(dev, 0x15, 0x0);
-+ write_grf5101(dev, 0x16, 0x1500);
-+
-+ write_grf5101(dev, 0x07, 0x1000);
-+
-+ /* baseband configuration */
-+ rtl8180_write_phy(dev, 0, 0xa8);
-+ rtl8180_write_phy(dev, 3, 0x0);
-+ rtl8180_write_phy(dev, 4, 0xc0);
-+ rtl8180_write_phy(dev, 5, 0x90);
-+ rtl8180_write_phy(dev, 6, 0x1e);
-+ rtl8180_write_phy(dev, 7, 0x64);
+
-+ grf5101_write_phy_antenna(dev, 1);
++ struct dasd_device *alias_device;
++ struct alias_pav_group *group;
++ struct alias_lcu *lcu;
++ struct dasd_eckd_private *private, *alias_priv;
++ unsigned long flags;
+
-+ rtl8180_write_phy(dev, 0x11, 0x88);
++ private = (struct dasd_eckd_private *) base_device->private;
++ group = private->pavgroup;
++ lcu = private->lcu;
++ if (!group || !lcu)
++ return NULL;
++ if (lcu->pav == NO_PAV ||
++ lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING))
++ return NULL;
+
-+ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
-+ RTL818X_CONFIG2_ANTENNA_DIV)
-+ rtl8180_write_phy(dev, 0x12, 0xc0); /* enable ant diversity */
++ spin_lock_irqsave(&lcu->lock, flags);
++ alias_device = group->next;
++ if (!alias_device) {
++ if (list_empty(&group->aliaslist)) {
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ return NULL;
++ } else {
++ alias_device = list_first_entry(&group->aliaslist,
++ struct dasd_device,
++ alias_list);
++ }
++ }
++ if (list_is_last(&alias_device->alias_list, &group->aliaslist))
++ group->next = list_first_entry(&group->aliaslist,
++ struct dasd_device, alias_list);
+ else
-+ rtl8180_write_phy(dev, 0x12, 0x40); /* disable ant diversity */
-+
-+ rtl8180_write_phy(dev, 0x13, 0x90 | priv->csthreshold);
-+
-+ rtl8180_write_phy(dev, 0x19, 0x0);
-+ rtl8180_write_phy(dev, 0x1a, 0xa0);
-+ rtl8180_write_phy(dev, 0x1b, 0x44);
++ group->next = list_first_entry(&alias_device->alias_list,
++ struct dasd_device, alias_list);
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ alias_priv = (struct dasd_eckd_private *) alias_device->private;
++ if ((alias_priv->count < private->count) && !alias_device->stopped)
++ return alias_device;
++ else
++ return NULL;
+}
+
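After round-robining through the alias list, dasd_alias_get_start_dev() above only hands out the alias when it is both less loaded than the base and not stopped. That final usability check, isolated as a sketch (struct dev is a minimal stand-in for the relevant struct dasd_device fields):

```c
#include <assert.h>

/* Minimal stand-in for the fields the check needs (assumption). */
struct dev {
	int count;    /* requests currently queued on the device */
	int stopped;  /* non-zero if the device must not be used */
};

/* An alias is only worth dispatching to when it carries fewer
 * requests than the base device and is not stopped. */
static int alias_usable(const struct dev *alias, const struct dev *base)
{
	return alias->count < base->count && !alias->stopped;
}
```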
-+const struct rtl818x_rf_ops grf5101_rf_ops = {
-+ .name = "GCT",
-+ .init = grf5101_rf_init,
-+ .stop = grf5101_rf_stop,
-+ .set_chan = grf5101_rf_set_channel
-+};
-diff --git a/drivers/net/wireless/rtl8180_grf5101.h b/drivers/net/wireless/rtl8180_grf5101.h
-new file mode 100644
-index 0000000..7664711
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_grf5101.h
-@@ -0,0 +1,28 @@
-+#ifndef RTL8180_GRF5101_H
-+#define RTL8180_GRF5101_H
-+
-+/*
-+ * Radio tuning for GCT GRF5101 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project have been
-+ * very useful to understand certain things
-+ *
-+ * I want to thanks the Authors of such projects and the Ndiswrapper
-+ * project Authors.
-+ *
-+ * A special Big Thanks also is for all people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ */
-+
-+#define GRF5101_ANTENNA 0xA3
-+
-+extern const struct rtl818x_rf_ops grf5101_rf_ops;
-+
-+#endif /* RTL8180_GRF5101_H */
-diff --git a/drivers/net/wireless/rtl8180_max2820.c b/drivers/net/wireless/rtl8180_max2820.c
-new file mode 100644
-index 0000000..98fe9fd
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_max2820.c
-@@ -0,0 +1,150 @@
+/*
-+ * Radio tuning for Maxim max2820 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project have been
-+ * very useful to understand certain things
-+ *
-+ * I want to thanks the Authors of such projects and the Ndiswrapper
-+ * project Authors.
-+ *
-+ * A special Big Thanks also is for all people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * Summary unit check handling depends on the way alias devices
++ * are handled so it is done here rather than in dasd_eckd.c
+ */
-+
-+#include <linux/init.h>
-+#include <linux/pci.h>
-+#include <linux/delay.h>
-+#include <net/mac80211.h>
-+
-+#include "rtl8180.h"
-+#include "rtl8180_max2820.h"
-+
-+static const u32 max2820_chan[] = {
-+ 12, /* CH 1 */
-+ 17,
-+ 22,
-+ 27,
-+ 32,
-+ 37,
-+ 42,
-+ 47,
-+ 52,
-+ 57,
-+ 62,
-+ 67,
-+ 72,
-+ 84, /* CH 14 */
-+};
-+
-+static void write_max2820(struct ieee80211_hw *dev, u8 addr, u32 data)
++static int reset_summary_unit_check(struct alias_lcu *lcu,
++ struct dasd_device *device,
++ char reason)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 phy_config;
-+
-+ phy_config = 0x90 + (data & 0xf);
-+ phy_config <<= 16;
-+ phy_config += addr;
-+ phy_config <<= 8;
-+ phy_config += (data >> 4) & 0xff;
-+
-+ rtl818x_iowrite32(priv,
-+ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
-+
-+ msleep(1);
-+}
++ struct dasd_ccw_req *cqr;
++ int rc = 0;
+
-+static void max2820_write_phy_antenna(struct ieee80211_hw *dev, short chan)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 ant;
++ cqr = lcu->rsu_cqr;
++ strncpy((char *) &cqr->magic, "ECKD", 4);
++ ASCEBC((char *) &cqr->magic, 4);
++ cqr->cpaddr->cmd_code = DASD_ECKD_CCW_RSCK;
++ cqr->cpaddr->flags = 0;
++ cqr->cpaddr->count = 16;
++ cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
++ ((char *)cqr->data)[0] = reason;
+
-+ ant = MAXIM_ANTENNA;
-+ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
-+ ant |= BB_ANTENNA_B;
-+ if (chan == 14)
-+ ant |= BB_ANTATTEN_CHAN14;
++ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
++ cqr->retries = 255; /* set retry counter to enable basic ERP */
++ cqr->startdev = device;
++ cqr->memdev = device;
++ cqr->block = NULL;
++ cqr->expires = 5 * HZ;
++ cqr->buildclk = get_clock();
++ cqr->status = DASD_CQR_FILLED;
+
-+ rtl8180_write_phy(dev, 0x10, ant);
++ rc = dasd_sleep_on_immediatly(cqr);
++ return rc;
+}
+
-+static void max2820_rf_set_channel(struct ieee80211_hw *dev,
-+ struct ieee80211_conf *conf)
++static void _restart_all_base_devices_on_lcu(struct alias_lcu *lcu)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ unsigned int chan_idx = conf ? conf->channel - 1 : 0;
-+ u32 txpw = priv->channels[chan_idx].val & 0xFF;
-+ u32 chan = max2820_chan[chan_idx];
-+
-+ /* While philips SA2400 drive the PA bias from
-+ * sa2400, for MAXIM we do this directly from BB */
-+ rtl8180_write_phy(dev, 3, txpw);
-+
-+ max2820_write_phy_antenna(dev, chan);
-+ write_max2820(dev, 3, chan);
-+}
++ struct alias_pav_group *pavgroup;
++ struct dasd_device *device;
++ struct dasd_eckd_private *private;
+
-+static void max2820_rf_stop(struct ieee80211_hw *dev)
-+{
-+ rtl8180_write_phy(dev, 3, 0x8);
-+ write_max2820(dev, 1, 0);
++ /* active and inactive list can contain alias as well as base devices */
++ list_for_each_entry(device, &lcu->active_devices, alias_list) {
++ private = (struct dasd_eckd_private *) device->private;
++ if (private->uid.type != UA_BASE_DEVICE)
++ continue;
++ dasd_schedule_block_bh(device->block);
++ dasd_schedule_device_bh(device);
++ }
++ list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
++ private = (struct dasd_eckd_private *) device->private;
++ if (private->uid.type != UA_BASE_DEVICE)
++ continue;
++ dasd_schedule_block_bh(device->block);
++ dasd_schedule_device_bh(device);
++ }
++ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
++ list_for_each_entry(device, &pavgroup->baselist, alias_list) {
++ dasd_schedule_block_bh(device->block);
++ dasd_schedule_device_bh(device);
++ }
++ }
+}
+
-+
-+static void max2820_rf_init(struct ieee80211_hw *dev)
++static void flush_all_alias_devices_on_lcu(struct alias_lcu *lcu)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ /* MAXIM from netbsd driver */
-+ write_max2820(dev, 0, 0x007); /* test mode as indicated in datasheet */
-+ write_max2820(dev, 1, 0x01e); /* enable register */
-+ write_max2820(dev, 2, 0x001); /* synt register */
-+
-+ max2820_rf_set_channel(dev, NULL);
-+
-+ write_max2820(dev, 4, 0x313); /* rx register */
++ struct alias_pav_group *pavgroup;
++ struct dasd_device *device, *temp;
++ struct dasd_eckd_private *private;
++ int rc;
++ unsigned long flags;
++ LIST_HEAD(active);
+
-+ /* PA is driven directly by the BB, we keep the MAXIM bias
-+ * at the highest value in case that setting it to lower
-+ * values may introduce some further attenuation somewhere..
++ /*
++ * Problem here is that dasd_flush_device_queue may wait
++ * for termination of a request to complete. We can't keep
++ * the lcu lock during that time, so we must assume that
++ * the lists may have changed.
++ * Idea: first gather all active alias devices in a separate list,
++ * then flush the first element of this list unlocked, and afterwards
++ * check if it is still on the list before moving it to the
++ * active_devices list.
+ */
-+ write_max2820(dev, 5, 0x00f);
-+
-+ /* baseband configuration */
-+ rtl8180_write_phy(dev, 0, 0x88); /* sys1 */
-+ rtl8180_write_phy(dev, 3, 0x08); /* txagc */
-+ rtl8180_write_phy(dev, 4, 0xf8); /* lnadet */
-+ rtl8180_write_phy(dev, 5, 0x90); /* ifagcinit */
-+ rtl8180_write_phy(dev, 6, 0x1a); /* ifagclimit */
-+ rtl8180_write_phy(dev, 7, 0x64); /* ifagcdet */
-+
-+ max2820_write_phy_antenna(dev, 1);
-+
-+ rtl8180_write_phy(dev, 0x11, 0x88); /* trl */
-+
-+ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
-+ RTL818X_CONFIG2_ANTENNA_DIV)
-+ rtl8180_write_phy(dev, 0x12, 0xc7);
-+ else
-+ rtl8180_write_phy(dev, 0x12, 0x47);
-+
-+ rtl8180_write_phy(dev, 0x13, 0x9b);
+
-+ rtl8180_write_phy(dev, 0x19, 0x0); /* CHESTLIM */
-+ rtl8180_write_phy(dev, 0x1a, 0x9f); /* CHSQLIM */
++ spin_lock_irqsave(&lcu->lock, flags);
++ list_for_each_entry_safe(device, temp, &lcu->active_devices,
++ alias_list) {
++ private = (struct dasd_eckd_private *) device->private;
++ if (private->uid.type == UA_BASE_DEVICE)
++ continue;
++ list_move(&device->alias_list, &active);
++ }
+
-+ max2820_rf_set_channel(dev, NULL);
++ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
++ list_splice_init(&pavgroup->aliaslist, &active);
++ }
++ while (!list_empty(&active)) {
++ device = list_first_entry(&active, struct dasd_device,
++ alias_list);
++ spin_unlock_irqrestore(&lcu->lock, flags);
++ rc = dasd_flush_device_queue(device);
++ spin_lock_irqsave(&lcu->lock, flags);
++ /*
++ * only move device around if it wasn't moved away while we
++ * were waiting for the flush
++ */
++ if (device == list_first_entry(&active,
++ struct dasd_device, alias_list))
++ list_move(&device->alias_list, &lcu->active_devices);
++ }
++ spin_unlock_irqrestore(&lcu->lock, flags);
+}
+
-+const struct rtl818x_rf_ops max2820_rf_ops = {
-+ .name = "Maxim",
-+ .init = max2820_rf_init,
-+ .stop = max2820_rf_stop,
-+ .set_chan = max2820_rf_set_channel
-+};
-diff --git a/drivers/net/wireless/rtl8180_max2820.h b/drivers/net/wireless/rtl8180_max2820.h
-new file mode 100644
-index 0000000..61cf6d1
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_max2820.h
-@@ -0,0 +1,28 @@
-+#ifndef RTL8180_MAX2820_H
-+#define RTL8180_MAX2820_H
-+
-+/*
-+ * Radio tuning for Maxim max2820 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project have been
-+ * very useful to understand certain things
-+ *
-+ * I want to thanks the Authors of such projects and the Ndiswrapper
-+ * project Authors.
-+ *
-+ * A special Big Thanks also is for all people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ */
-+
-+#define MAXIM_ANTENNA 0xb3
-+
-+extern const struct rtl818x_rf_ops max2820_rf_ops;
-+
-+#endif /* RTL8180_MAX2820_H */
-diff --git a/drivers/net/wireless/rtl8180_rtl8225.c b/drivers/net/wireless/rtl8180_rtl8225.c
-new file mode 100644
-index 0000000..ef3832b
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_rtl8225.c
-@@ -0,0 +1,779 @@
-+
+/*
-+ * Radio tuning for RTL8225 on RTL8180
-+ *
-+ * Copyright 2007 Michael Wu <flamingice at sourmilk.net>
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Based on the r8180 driver, which is:
-+ * Copyright 2005 Andrea Merello <andreamrl at tiscali.it>, et al.
-+ *
-+ * Thanks to Realtek for their support!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * This function is called in interrupt context, so the
++ * cdev lock for device is already locked!
+ */
-+
-+#include <linux/init.h>
-+#include <linux/pci.h>
-+#include <linux/delay.h>
-+#include <net/mac80211.h>
-+
-+#include "rtl8180.h"
-+#include "rtl8180_rtl8225.h"
-+
-+static void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
++static void _stop_all_devices_on_lcu(struct alias_lcu *lcu,
++ struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u16 reg80, reg84, reg82;
-+ u32 bangdata;
-+ int i;
-+
-+ bangdata = (data << 4) | (addr & 0xf);
-+
-+ reg80 = rtl818x_ioread16(priv, &priv->map->RFPinsOutput) & 0xfff3;
-+ reg82 = rtl818x_ioread16(priv, &priv->map->RFPinsEnable);
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82 | 0x7);
-+
-+ reg84 = rtl818x_ioread16(priv, &priv->map->RFPinsSelect);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x7 | 0x400);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(10);
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(10);
-+
-+ for (i = 15; i >= 0; i--) {
-+ u16 reg = reg80 | !!(bangdata & (1 << i));
-+
-+ if (i & 1)
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg | (1 << 1));
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg | (1 << 1));
++ struct alias_pav_group *pavgroup;
++ struct dasd_device *pos;
+
-+ if (!(i & 1))
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
++ list_for_each_entry(pos, &lcu->active_devices, alias_list) {
++ if (pos != device)
++ spin_lock(get_ccwdev_lock(pos->cdev));
++ pos->stopped |= DASD_STOPPED_SU;
++ if (pos != device)
++ spin_unlock(get_ccwdev_lock(pos->cdev));
++ }
++ list_for_each_entry(pos, &lcu->inactive_devices, alias_list) {
++ if (pos != device)
++ spin_lock(get_ccwdev_lock(pos->cdev));
++ pos->stopped |= DASD_STOPPED_SU;
++ if (pos != device)
++ spin_unlock(get_ccwdev_lock(pos->cdev));
++ }
++ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
++ list_for_each_entry(pos, &pavgroup->baselist, alias_list) {
++ if (pos != device)
++ spin_lock(get_ccwdev_lock(pos->cdev));
++ pos->stopped |= DASD_STOPPED_SU;
++ if (pos != device)
++ spin_unlock(get_ccwdev_lock(pos->cdev));
++ }
++ list_for_each_entry(pos, &pavgroup->aliaslist, alias_list) {
++ if (pos != device)
++ spin_lock(get_ccwdev_lock(pos->cdev));
++ pos->stopped |= DASD_STOPPED_SU;
++ if (pos != device)
++ spin_unlock(get_ccwdev_lock(pos->cdev));
++ }
+ }
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(10);
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x400);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
+}
+
-+static u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
++static void _unstop_all_devices_on_lcu(struct alias_lcu *lcu)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u16 reg80, reg82, reg84, out;
-+ int i;
-+
-+ reg80 = rtl818x_ioread16(priv, &priv->map->RFPinsOutput);
-+ reg82 = rtl818x_ioread16(priv, &priv->map->RFPinsEnable);
-+ reg84 = rtl818x_ioread16(priv, &priv->map->RFPinsSelect) | 0x400;
-+
-+ reg80 &= ~0xF;
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82 | 0x000F);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84 | 0x000F);
++ struct alias_pav_group *pavgroup;
++ struct dasd_device *device;
++ unsigned long flags;
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80 | (1 << 2));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(4);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg80);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(5);
++ list_for_each_entry(device, &lcu->active_devices, alias_list) {
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
++ device->stopped &= ~DASD_STOPPED_SU;
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ }
+
-+ for (i = 4; i >= 0; i--) {
-+ u16 reg = reg80 | ((addr >> i) & 1);
++ list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
++ device->stopped &= ~DASD_STOPPED_SU;
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ }
+
-+ if (!(i & 1)) {
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(1);
++ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
++ list_for_each_entry(device, &pavgroup->baselist, alias_list) {
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
++ device->stopped &= ~DASD_STOPPED_SU;
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
++ flags);
+ }
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+
-+ if (i & 1) {
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, reg);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(1);
++ list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
++ device->stopped &= ~DASD_STOPPED_SU;
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
++ flags);
+ }
+ }
++}
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x000E);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x040E);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3) | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+
-+ out = 0;
-+ for (i = 11; i >= 0; i--) {
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(1);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3) | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3) | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3) | (1 << 1));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+
-+ if (rtl818x_ioread16(priv, &priv->map->RFPinsInput) & (1 << 1))
-+ out |= 1 << i;
++static void summary_unit_check_handling_work(struct work_struct *work)
++{
++ struct alias_lcu *lcu;
++ struct summary_unit_check_work_data *suc_data;
++ unsigned long flags;
++ struct dasd_device *device;
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
-+ }
++ suc_data = container_of(work, struct summary_unit_check_work_data,
++ worker);
++ lcu = container_of(suc_data, struct alias_lcu, suc_data);
++ device = suc_data->device;
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput,
-+ reg80 | (1 << 3) | (1 << 2));
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ udelay(2);
++ /* 1. flush alias devices */
++ flush_all_alias_devices_on_lcu(lcu);
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, reg82);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, reg84);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x03A0);
++ /* 2. reset summary unit check */
++ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
++ device->stopped &= ~(DASD_STOPPED_SU | DASD_STOPPED_PENDING);
++ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ reset_summary_unit_check(lcu, device, suc_data->reason);
+
-+ return out;
++ spin_lock_irqsave(&lcu->lock, flags);
++ _unstop_all_devices_on_lcu(lcu);
++ _restart_all_base_devices_on_lcu(lcu);
++ /* 3. read new alias configuration */
++ _schedule_lcu_update(lcu, device);
++ lcu->suc_data.device = NULL;
++ spin_unlock_irqrestore(&lcu->lock, flags);
+}
+
-+static const u16 rtl8225bcd_rxgain[] = {
-+ 0x0400, 0x0401, 0x0402, 0x0403, 0x0404, 0x0405, 0x0408, 0x0409,
-+ 0x040a, 0x040b, 0x0502, 0x0503, 0x0504, 0x0505, 0x0540, 0x0541,
-+ 0x0542, 0x0543, 0x0544, 0x0545, 0x0580, 0x0581, 0x0582, 0x0583,
-+ 0x0584, 0x0585, 0x0588, 0x0589, 0x058a, 0x058b, 0x0643, 0x0644,
-+ 0x0645, 0x0680, 0x0681, 0x0682, 0x0683, 0x0684, 0x0685, 0x0688,
-+ 0x0689, 0x068a, 0x068b, 0x068c, 0x0742, 0x0743, 0x0744, 0x0745,
-+ 0x0780, 0x0781, 0x0782, 0x0783, 0x0784, 0x0785, 0x0788, 0x0789,
-+ 0x078a, 0x078b, 0x078c, 0x078d, 0x0790, 0x0791, 0x0792, 0x0793,
-+ 0x0794, 0x0795, 0x0798, 0x0799, 0x079a, 0x079b, 0x079c, 0x079d,
-+ 0x07a0, 0x07a1, 0x07a2, 0x07a3, 0x07a4, 0x07a5, 0x07a8, 0x07a9,
-+ 0x07aa, 0x07ab, 0x07ac, 0x07ad, 0x07b0, 0x07b1, 0x07b2, 0x07b3,
-+ 0x07b4, 0x07b5, 0x07b8, 0x07b9, 0x07ba, 0x07bb, 0x07bb
-+};
-+
-+static const u8 rtl8225_agc[] = {
-+ 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e, 0x9e,
-+ 0x9d, 0x9c, 0x9b, 0x9a, 0x99, 0x98, 0x97, 0x96,
-+ 0x95, 0x94, 0x93, 0x92, 0x91, 0x90, 0x8f, 0x8e,
-+ 0x8d, 0x8c, 0x8b, 0x8a, 0x89, 0x88, 0x87, 0x86,
-+ 0x85, 0x84, 0x83, 0x82, 0x81, 0x80, 0x3f, 0x3e,
-+ 0x3d, 0x3c, 0x3b, 0x3a, 0x39, 0x38, 0x37, 0x36,
-+ 0x35, 0x34, 0x33, 0x32, 0x31, 0x30, 0x2f, 0x2e,
-+ 0x2d, 0x2c, 0x2b, 0x2a, 0x29, 0x28, 0x27, 0x26,
-+ 0x25, 0x24, 0x23, 0x22, 0x21, 0x20, 0x1f, 0x1e,
-+ 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18, 0x17, 0x16,
-+ 0x15, 0x14, 0x13, 0x12, 0x11, 0x10, 0x0f, 0x0e,
-+ 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08, 0x07, 0x06,
-+ 0x05, 0x04, 0x03, 0x02, 0x01, 0x01, 0x01, 0x01,
-+ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
-+ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
-+ 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01
-+};
-+
-+static const u8 rtl8225_gain[] = {
-+ 0x23, 0x88, 0x7c, 0xa5, /* -82dbm */
-+ 0x23, 0x88, 0x7c, 0xb5, /* -82dbm */
-+ 0x23, 0x88, 0x7c, 0xc5, /* -82dbm */
-+ 0x33, 0x80, 0x79, 0xc5, /* -78dbm */
-+ 0x43, 0x78, 0x76, 0xc5, /* -74dbm */
-+ 0x53, 0x60, 0x73, 0xc5, /* -70dbm */
-+ 0x63, 0x58, 0x70, 0xc5, /* -66dbm */
-+};
-+
-+static const u8 rtl8225_threshold[] = {
-+ 0x8d, 0x8d, 0x8d, 0x8d, 0x9d, 0xad, 0xbd
-+};
-+
-+static const u8 rtl8225_tx_gain_cck_ofdm[] = {
-+ 0x02, 0x06, 0x0e, 0x1e, 0x3e, 0x7e
-+};
++/*
++ * note: this will be called from int handler context (cdev locked)
++ */
++void dasd_alias_handle_summary_unit_check(struct dasd_device *device,
++ struct irb *irb)
++{
++ struct alias_lcu *lcu;
++ char reason;
++ struct dasd_eckd_private *private;
+
-+static const u8 rtl8225_tx_power_cck[] = {
-+ 0x18, 0x17, 0x15, 0x11, 0x0c, 0x08, 0x04, 0x02,
-+ 0x1b, 0x1a, 0x17, 0x13, 0x0e, 0x09, 0x04, 0x02,
-+ 0x1f, 0x1e, 0x1a, 0x15, 0x10, 0x0a, 0x05, 0x02,
-+ 0x22, 0x21, 0x1d, 0x18, 0x11, 0x0b, 0x06, 0x02,
-+ 0x26, 0x25, 0x21, 0x1b, 0x14, 0x0d, 0x06, 0x03,
-+ 0x2b, 0x2a, 0x25, 0x1e, 0x16, 0x0e, 0x07, 0x03
-+};
++ private = (struct dasd_eckd_private *) device->private;
+
-+static const u8 rtl8225_tx_power_cck_ch14[] = {
-+ 0x18, 0x17, 0x15, 0x0c, 0x00, 0x00, 0x00, 0x00,
-+ 0x1b, 0x1a, 0x17, 0x0e, 0x00, 0x00, 0x00, 0x00,
-+ 0x1f, 0x1e, 0x1a, 0x0f, 0x00, 0x00, 0x00, 0x00,
-+ 0x22, 0x21, 0x1d, 0x11, 0x00, 0x00, 0x00, 0x00,
-+ 0x26, 0x25, 0x21, 0x13, 0x00, 0x00, 0x00, 0x00,
-+ 0x2b, 0x2a, 0x25, 0x15, 0x00, 0x00, 0x00, 0x00
-+};
++ reason = irb->ecw[8];
++ DEV_MESSAGE(KERN_WARNING, device, "%s %x",
++ "eckd handle summary unit check: reason", reason);
+
-+static const u8 rtl8225_tx_power_ofdm[] = {
-+ 0x80, 0x90, 0xa2, 0xb5, 0xcb, 0xe4
++ lcu = private->lcu;
++ if (!lcu) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "device not ready to handle summary"
++ " unit check (no lcu structure)");
++ return;
++ }
++ spin_lock(&lcu->lock);
++ _stop_all_devices_on_lcu(lcu, device);
++ /* prepare for lcu_update */
++ private->lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING;
++ /* If this device is about to be removed just return and wait for
++ * the next interrupt on a different device
++ */
++ if (list_empty(&device->alias_list)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "device is in offline processing,"
++ " don't do summary unit check handling");
++ spin_unlock(&lcu->lock);
++ return;
++ }
++ if (lcu->suc_data.device) {
++ /* already scheduled or running */
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "previous instance of summary unit check worker"
++ " still pending");
++ spin_unlock(&lcu->lock);
-+		return;
++ }
++ lcu->suc_data.reason = reason;
++ lcu->suc_data.device = device;
++ spin_unlock(&lcu->lock);
++ schedule_work(&lcu->suc_data.worker);
+};
+diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c
+index 0c67258..f4fb402 100644
+--- a/drivers/s390/block/dasd_devmap.c
++++ b/drivers/s390/block/dasd_devmap.c
+@@ -49,22 +49,6 @@ struct dasd_devmap {
+ };
+
+ /*
+- * dasd_server_ssid_map contains a globally unique storage server subsystem ID.
+- * dasd_server_ssid_list contains the list of all subsystem IDs accessed by
+- * the DASD device driver.
+- */
+-struct dasd_server_ssid_map {
+- struct list_head list;
+- struct system_id {
+- char vendor[4];
+- char serial[15];
+- __u16 ssid;
+- } sid;
+-};
+-
+-static struct list_head dasd_server_ssid_list;
+-
+-/*
+ * Parameter parsing functions for dasd= parameter. The syntax is:
+ * <devno> : (0x)?[0-9a-fA-F]+
+ * <busid> : [0-0a-f]\.[0-9a-f]\.(0x)?[0-9a-fA-F]+
+@@ -721,8 +705,9 @@ dasd_ro_store(struct device *dev, struct device_attribute *attr,
+ devmap->features &= ~DASD_FEATURE_READONLY;
+ if (devmap->device)
+ devmap->device->features = devmap->features;
+- if (devmap->device && devmap->device->gdp)
+- set_disk_ro(devmap->device->gdp, val);
++ if (devmap->device && devmap->device->block
++ && devmap->device->block->gdp)
++ set_disk_ro(devmap->device->block->gdp, val);
+ spin_unlock(&dasd_devmap_lock);
+ return count;
+ }
+@@ -893,12 +878,16 @@ dasd_alias_show(struct device *dev, struct device_attribute *attr, char *buf)
+
+ devmap = dasd_find_busid(dev->bus_id);
+ spin_lock(&dasd_devmap_lock);
+- if (!IS_ERR(devmap))
+- alias = devmap->uid.alias;
++ if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
++ spin_unlock(&dasd_devmap_lock);
++ return sprintf(buf, "0\n");
++ }
++ if (devmap->uid.type == UA_BASE_PAV_ALIAS ||
++ devmap->uid.type == UA_HYPER_PAV_ALIAS)
++ alias = 1;
+ else
+ alias = 0;
+ spin_unlock(&dasd_devmap_lock);
+-
+ return sprintf(buf, alias ? "1\n" : "0\n");
+ }
+
+@@ -930,19 +919,36 @@ static ssize_t
+ dasd_uid_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ struct dasd_devmap *devmap;
+- char uid[UID_STRLEN];
++ char uid_string[UID_STRLEN];
++ char ua_string[3];
++ struct dasd_uid *uid;
+
+ devmap = dasd_find_busid(dev->bus_id);
+ spin_lock(&dasd_devmap_lock);
+- if (!IS_ERR(devmap) && strlen(devmap->uid.vendor) > 0)
+- snprintf(uid, sizeof(uid), "%s.%s.%04x.%02x",
+- devmap->uid.vendor, devmap->uid.serial,
+- devmap->uid.ssid, devmap->uid.unit_addr);
+- else
+- uid[0] = 0;
++ if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
++ spin_unlock(&dasd_devmap_lock);
++ return sprintf(buf, "\n");
++ }
++ uid = &devmap->uid;
++ switch (uid->type) {
++ case UA_BASE_DEVICE:
++ sprintf(ua_string, "%02x", uid->real_unit_addr);
++ break;
++ case UA_BASE_PAV_ALIAS:
++ sprintf(ua_string, "%02x", uid->base_unit_addr);
++ break;
++ case UA_HYPER_PAV_ALIAS:
++ sprintf(ua_string, "xx");
++ break;
++ default:
++ /* should not happen, treat like base device */
++ sprintf(ua_string, "%02x", uid->real_unit_addr);
++ break;
++ }
++ snprintf(uid_string, sizeof(uid_string), "%s.%s.%04x.%s",
++ uid->vendor, uid->serial, uid->ssid, ua_string);
+ spin_unlock(&dasd_devmap_lock);
+-
+- return snprintf(buf, PAGE_SIZE, "%s\n", uid);
++ return snprintf(buf, PAGE_SIZE, "%s\n", uid_string);
+ }
+
+ static DEVICE_ATTR(uid, 0444, dasd_uid_show, NULL);
+@@ -1040,39 +1046,16 @@ int
+ dasd_set_uid(struct ccw_device *cdev, struct dasd_uid *uid)
+ {
+ struct dasd_devmap *devmap;
+- struct dasd_server_ssid_map *srv, *tmp;
+
+ devmap = dasd_find_busid(cdev->dev.bus_id);
+ if (IS_ERR(devmap))
+ return PTR_ERR(devmap);
+
+- /* generate entry for server_ssid_map */
+- srv = (struct dasd_server_ssid_map *)
+- kzalloc(sizeof(struct dasd_server_ssid_map), GFP_KERNEL);
+- if (!srv)
+- return -ENOMEM;
+- strncpy(srv->sid.vendor, uid->vendor, sizeof(srv->sid.vendor) - 1);
+- strncpy(srv->sid.serial, uid->serial, sizeof(srv->sid.serial) - 1);
+- srv->sid.ssid = uid->ssid;
+-
+- /* server is already contained ? */
+ spin_lock(&dasd_devmap_lock);
+ devmap->uid = *uid;
+- list_for_each_entry(tmp, &dasd_server_ssid_list, list) {
+- if (!memcmp(&srv->sid, &tmp->sid,
+- sizeof(struct system_id))) {
+- kfree(srv);
+- srv = NULL;
+- break;
+- }
+- }
+-
+- /* add servermap to serverlist */
+- if (srv)
+- list_add(&srv->list, &dasd_server_ssid_list);
+ spin_unlock(&dasd_devmap_lock);
+
+- return (srv ? 1 : 0);
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(dasd_set_uid);
+
+@@ -1138,9 +1121,6 @@ dasd_devmap_init(void)
+ dasd_max_devindex = 0;
+ for (i = 0; i < 256; i++)
+ INIT_LIST_HEAD(&dasd_hashlists[i]);
+-
+- /* Initialize servermap structure. */
+- INIT_LIST_HEAD(&dasd_server_ssid_list);
+ return 0;
+ }
+
+diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
+index 571320a..d91df38 100644
+--- a/drivers/s390/block/dasd_diag.c
++++ b/drivers/s390/block/dasd_diag.c
+@@ -142,7 +142,7 @@ dasd_diag_erp(struct dasd_device *device)
+ int rc;
+
+ mdsk_term_io(device);
+- rc = mdsk_init_io(device, device->bp_block, 0, NULL);
++ rc = mdsk_init_io(device, device->block->bp_block, 0, NULL);
+ if (rc)
+ DEV_MESSAGE(KERN_WARNING, device, "DIAG ERP unsuccessful, "
+ "rc=%d", rc);
+@@ -158,11 +158,11 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
+ struct dasd_diag_req *dreq;
+ int rc;
+
+- device = cqr->device;
++ device = cqr->startdev;
+ if (cqr->retries < 0) {
+ DEV_MESSAGE(KERN_WARNING, device, "DIAG start_IO: request %p "
+ "- no retry left)", cqr);
+- cqr->status = DASD_CQR_FAILED;
++ cqr->status = DASD_CQR_ERROR;
+ return -EIO;
+ }
+ private = (struct dasd_diag_private *) device->private;
+@@ -184,7 +184,7 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
+ switch (rc) {
+ case 0: /* Synchronous I/O finished successfully */
+ cqr->stopclk = get_clock();
+- cqr->status = DASD_CQR_DONE;
++ cqr->status = DASD_CQR_SUCCESS;
+ /* Indicate to calling function that only a dasd_schedule_bh()
+ and no timer is needed */
+ rc = -EACCES;
+@@ -209,12 +209,12 @@ dasd_diag_term_IO(struct dasd_ccw_req * cqr)
+ {
+ struct dasd_device *device;
+
+- device = cqr->device;
++ device = cqr->startdev;
+ mdsk_term_io(device);
+- mdsk_init_io(device, device->bp_block, 0, NULL);
+- cqr->status = DASD_CQR_CLEAR;
++ mdsk_init_io(device, device->block->bp_block, 0, NULL);
++ cqr->status = DASD_CQR_CLEAR_PENDING;
+ cqr->stopclk = get_clock();
+- dasd_schedule_bh(device);
++ dasd_schedule_device_bh(device);
+ return 0;
+ }
+
+@@ -247,7 +247,7 @@ dasd_ext_handler(__u16 code)
+ return;
+ }
+ cqr = (struct dasd_ccw_req *) ip;
+- device = (struct dasd_device *) cqr->device;
++ device = (struct dasd_device *) cqr->startdev;
+ if (strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
+ DEV_MESSAGE(KERN_WARNING, device,
+ " magic number of dasd_ccw_req 0x%08X doesn't"
+@@ -260,10 +260,10 @@ dasd_ext_handler(__u16 code)
+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+
+ /* Check for a pending clear operation */
+- if (cqr->status == DASD_CQR_CLEAR) {
+- cqr->status = DASD_CQR_QUEUED;
+- dasd_clear_timer(device);
+- dasd_schedule_bh(device);
++ if (cqr->status == DASD_CQR_CLEAR_PENDING) {
++ cqr->status = DASD_CQR_CLEARED;
++ dasd_device_clear_timer(device);
++ dasd_schedule_device_bh(device);
+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+ return;
+ }
+@@ -272,11 +272,11 @@ dasd_ext_handler(__u16 code)
+
+ expires = 0;
+ if (status == 0) {
+- cqr->status = DASD_CQR_DONE;
++ cqr->status = DASD_CQR_SUCCESS;
+ /* Start first request on queue if possible -> fast_io. */
+ if (!list_empty(&device->ccw_queue)) {
+ next = list_entry(device->ccw_queue.next,
+- struct dasd_ccw_req, list);
++ struct dasd_ccw_req, devlist);
+ if (next->status == DASD_CQR_QUEUED) {
+ rc = dasd_start_diag(next);
+ if (rc == 0)
+@@ -296,10 +296,10 @@ dasd_ext_handler(__u16 code)
+ }
+
+ if (expires != 0)
+- dasd_set_timer(device, expires);
++ dasd_device_set_timer(device, expires);
+ else
+- dasd_clear_timer(device);
+- dasd_schedule_bh(device);
++ dasd_device_clear_timer(device);
++ dasd_schedule_device_bh(device);
+
+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
+ }
+@@ -309,6 +309,7 @@ dasd_ext_handler(__u16 code)
+ static int
+ dasd_diag_check_device(struct dasd_device *device)
+ {
++ struct dasd_block *block;
+ struct dasd_diag_private *private;
+ struct dasd_diag_characteristics *rdc_data;
+ struct dasd_diag_bio bio;
+@@ -328,6 +329,16 @@ dasd_diag_check_device(struct dasd_device *device)
+ ccw_device_get_id(device->cdev, &private->dev_id);
+ device->private = (void *) private;
+ }
++ block = dasd_alloc_block();
++ if (IS_ERR(block)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "could not allocate dasd block structure");
++ kfree(device->private);
++ return PTR_ERR(block);
++ }
++ device->block = block;
++ block->base = device;
+
-+static const u32 rtl8225_chan[] = {
-+ 0x085c, 0x08dc, 0x095c, 0x09dc, 0x0a5c, 0x0adc, 0x0b5c,
-+ 0x0bdc, 0x0c5c, 0x0cdc, 0x0d5c, 0x0ddc, 0x0e5c, 0x0f72
+ /* Read Device Characteristics */
+ rdc_data = (void *) &(private->rdc_data);
+ rdc_data->dev_nr = private->dev_id.devno;
+@@ -409,14 +420,14 @@ dasd_diag_check_device(struct dasd_device *device)
+ sizeof(DASD_DIAG_CMS1)) == 0) {
+ /* get formatted blocksize from label block */
+ bsize = (unsigned int) label->block_size;
+- device->blocks = (unsigned long) label->block_count;
++ block->blocks = (unsigned long) label->block_count;
+ } else
+- device->blocks = end_block;
+- device->bp_block = bsize;
+- device->s2b_shift = 0; /* bits to shift 512 to get a block */
++ block->blocks = end_block;
++ block->bp_block = bsize;
++ block->s2b_shift = 0; /* bits to shift 512 to get a block */
+ for (sb = 512; sb < bsize; sb = sb << 1)
+- device->s2b_shift++;
+- rc = mdsk_init_io(device, device->bp_block, 0, NULL);
++ block->s2b_shift++;
++ rc = mdsk_init_io(device, block->bp_block, 0, NULL);
+ if (rc) {
+ DEV_MESSAGE(KERN_WARNING, device, "DIAG initialization "
+ "failed (rc=%d)", rc);
+@@ -424,9 +435,9 @@ dasd_diag_check_device(struct dasd_device *device)
+ } else {
+ DEV_MESSAGE(KERN_INFO, device,
+ "(%ld B/blk): %ldkB",
+- (unsigned long) device->bp_block,
+- (unsigned long) (device->blocks <<
+- device->s2b_shift) >> 1);
++ (unsigned long) block->bp_block,
++ (unsigned long) (block->blocks <<
++ block->s2b_shift) >> 1);
+ }
+ out:
+ free_page((long) label);
+@@ -436,22 +447,16 @@ out:
+ /* Fill in virtual disk geometry for device. Return zero on success, non-zero
+ * otherwise. */
+ static int
+-dasd_diag_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
++dasd_diag_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
+ {
+- if (dasd_check_blocksize(device->bp_block) != 0)
++ if (dasd_check_blocksize(block->bp_block) != 0)
+ return -EINVAL;
+- geo->cylinders = (device->blocks << device->s2b_shift) >> 10;
++ geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
+ geo->heads = 16;
+- geo->sectors = 128 >> device->s2b_shift;
++ geo->sectors = 128 >> block->s2b_shift;
+ return 0;
+ }
+
+-static dasd_era_t
+-dasd_diag_examine_error(struct dasd_ccw_req * cqr, struct irb * stat)
+-{
+- return dasd_era_fatal;
+-}
+-
+ static dasd_erp_fn_t
+ dasd_diag_erp_action(struct dasd_ccw_req * cqr)
+ {
+@@ -466,8 +471,9 @@ dasd_diag_erp_postaction(struct dasd_ccw_req * cqr)
+
+ /* Create DASD request from block device request. Return pointer to new
+ * request on success, ERR_PTR otherwise. */
+-static struct dasd_ccw_req *
+-dasd_diag_build_cp(struct dasd_device * device, struct request *req)
++static struct dasd_ccw_req *dasd_diag_build_cp(struct dasd_device *memdev,
++ struct dasd_block *block,
++ struct request *req)
+ {
+ struct dasd_ccw_req *cqr;
+ struct dasd_diag_req *dreq;
+@@ -486,17 +492,17 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
+ rw_cmd = MDSK_WRITE_REQ;
+ else
+ return ERR_PTR(-EINVAL);
+- blksize = device->bp_block;
++ blksize = block->bp_block;
+ /* Calculate record id of first and last block. */
+- first_rec = req->sector >> device->s2b_shift;
+- last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
++ first_rec = req->sector >> block->s2b_shift;
++ last_rec = (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
+ /* Check struct bio and count the number of blocks for the request. */
+ count = 0;
+ rq_for_each_segment(bv, req, iter) {
+ if (bv->bv_len & (blksize - 1))
+ /* Fba can only do full blocks. */
+ return ERR_PTR(-EINVAL);
+- count += bv->bv_len >> (device->s2b_shift + 9);
++ count += bv->bv_len >> (block->s2b_shift + 9);
+ }
+ /* Paranoia. */
+ if (count != last_rec - first_rec + 1)
+@@ -505,7 +511,7 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
+ datasize = sizeof(struct dasd_diag_req) +
+ count*sizeof(struct dasd_diag_bio);
+ cqr = dasd_smalloc_request(dasd_diag_discipline.name, 0,
+- datasize, device);
++ datasize, memdev);
+ if (IS_ERR(cqr))
+ return cqr;
+
+@@ -529,7 +535,9 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
+ cqr->buildclk = get_clock();
+ if (req->cmd_flags & REQ_FAILFAST)
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+- cqr->device = device;
++ cqr->startdev = memdev;
++ cqr->memdev = memdev;
++ cqr->block = block;
+ cqr->expires = DIAG_TIMEOUT;
+ cqr->status = DASD_CQR_FILLED;
+ return cqr;
+@@ -543,10 +551,15 @@ dasd_diag_free_cp(struct dasd_ccw_req *cqr, struct request *req)
+ int status;
+
+ status = cqr->status == DASD_CQR_DONE;
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return status;
+ }
+
++static void dasd_diag_handle_terminated_request(struct dasd_ccw_req *cqr)
++{
++ cqr->status = DASD_CQR_FILLED;
+};
+
-+static void rtl8225_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
+ /* Fill in IOCTL data for device. */
+ static int
+ dasd_diag_fill_info(struct dasd_device * device,
+@@ -583,7 +596,7 @@ static struct dasd_discipline dasd_diag_discipline = {
+ .fill_geometry = dasd_diag_fill_geometry,
+ .start_IO = dasd_start_diag,
+ .term_IO = dasd_diag_term_IO,
+- .examine_error = dasd_diag_examine_error,
++ .handle_terminated_request = dasd_diag_handle_terminated_request,
+ .erp_action = dasd_diag_erp_action,
+ .erp_postaction = dasd_diag_erp_postaction,
+ .build_cp = dasd_diag_build_cp,
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 44adf84..61f1693 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -52,16 +52,6 @@ MODULE_LICENSE("GPL");
+
+ static struct dasd_discipline dasd_eckd_discipline;
+
+-struct dasd_eckd_private {
+- struct dasd_eckd_characteristics rdc_data;
+- struct dasd_eckd_confdata conf_data;
+- struct dasd_eckd_path path_data;
+- struct eckd_count count_area[5];
+- int init_cqr_status;
+- int uses_cdl;
+- struct attrib_data_t attrib; /* e.g. cache operations */
+-};
+-
+ /* The ccw bus type uses this table to find devices that it sends to
+ * dasd_eckd_probe */
+ static struct ccw_device_id dasd_eckd_ids[] = {
+@@ -188,7 +178,7 @@ check_XRC (struct ccw1 *de_ccw,
+ if (rc == -ENOSYS || rc == -EACCES)
+ rc = 0;
+
+- de_ccw->count = sizeof (struct DE_eckd_data);
++ de_ccw->count = sizeof(struct DE_eckd_data);
+ de_ccw->flags |= CCW_FLAG_SLI;
+ return rc;
+ }
+@@ -208,7 +198,7 @@ define_extent(struct ccw1 * ccw, struct DE_eckd_data * data, int trk,
+ ccw->count = 16;
+ ccw->cda = (__u32) __pa(data);
+
+- memset(data, 0, sizeof (struct DE_eckd_data));
++ memset(data, 0, sizeof(struct DE_eckd_data));
+ switch (cmd) {
+ case DASD_ECKD_CCW_READ_HOME_ADDRESS:
+ case DASD_ECKD_CCW_READ_RECORD_ZERO:
+@@ -280,6 +270,132 @@ define_extent(struct ccw1 * ccw, struct DE_eckd_data * data, int trk,
+ return rc;
+ }
+
++static int check_XRC_on_prefix(struct PFX_eckd_data *pfxdata,
++ struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 cck_power, ofdm_power;
-+ const u8 *tmp;
-+ u32 reg;
-+ int i;
-+
-+ cck_power = priv->channels[channel - 1].val & 0xFF;
-+ ofdm_power = priv->channels[channel - 1].val >> 8;
-+
-+ cck_power = min(cck_power, (u8)35);
-+ ofdm_power = min(ofdm_power, (u8)35);
-+
-+ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_CCK,
-+ rtl8225_tx_gain_cck_ofdm[cck_power / 6] >> 1);
-+
-+ if (channel == 14)
-+ tmp = &rtl8225_tx_power_cck_ch14[(cck_power % 6) * 8];
-+ else
-+ tmp = &rtl8225_tx_power_cck[(cck_power % 6) * 8];
-+
-+ for (i = 0; i < 8; i++)
-+ rtl8225_write_phy_cck(dev, 0x44 + i, *tmp++);
-+
-+ msleep(1); /* FIXME: optional? */
-+
-+ /* anaparam2 on */
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite32(priv, &priv->map->ANAPARAM2, RTL8225_ANAPARAM2_ON);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
-+
-+ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_OFDM,
-+ rtl8225_tx_gain_cck_ofdm[ofdm_power/6] >> 1);
++ struct dasd_eckd_private *private;
++ int rc;
+
-+ tmp = &rtl8225_tx_power_ofdm[ofdm_power % 6];
++ private = (struct dasd_eckd_private *) device->private;
++ if (!private->rdc_data.facilities.XRC_supported)
++ return 0;
+
-+ rtl8225_write_phy_ofdm(dev, 5, *tmp);
-+ rtl8225_write_phy_ofdm(dev, 7, *tmp);
++ /* switch on System Time Stamp - needed for XRC Support */
++ pfxdata->define_extend.ga_extended |= 0x08; /* 'Time Stamp Valid' */
++ pfxdata->define_extend.ga_extended |= 0x02; /* 'Extended Parameter' */
++ pfxdata->validity.time_stamp = 1; /* 'Time Stamp Valid' */
+
-+ msleep(1);
++ rc = get_sync_clock(&pfxdata->define_extend.ep_sys_time);
++ /* Ignore return code if sync clock is switched off. */
++ if (rc == -ENOSYS || rc == -EACCES)
++ rc = 0;
++ return rc;
+}
+
-+static void rtl8225_rf_init(struct ieee80211_hw *dev)
++static int prefix(struct ccw1 *ccw, struct PFX_eckd_data *pfxdata, int trk,
++ int totrk, int cmd, struct dasd_device *basedev,
++ struct dasd_device *startdev)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int i;
-+
-+ rtl8180_set_anaparam(priv, RTL8225_ANAPARAM_ON);
-+
-+ /* host_pci_init */
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
-+ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ msleep(200); /* FIXME: ehh?? */
-+ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0xFF & ~(1 << 6));
-+
-+ rtl818x_iowrite32(priv, &priv->map->RF_TIMING, 0x000a8008);
-+
-+ /* TODO: check if we need really to change BRSR to do RF config */
-+ rtl818x_ioread16(priv, &priv->map->BRSR);
-+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0xFFFF);
-+ rtl818x_iowrite32(priv, &priv->map->RF_PARA, 0x00100044);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, 0x44);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++ struct dasd_eckd_private *basepriv, *startpriv;
++ struct DE_eckd_data *data;
++ struct ch_t geo, beg, end;
++ int rc = 0;
+
-+ rtl8225_write(dev, 0x0, 0x067);
-+ rtl8225_write(dev, 0x1, 0xFE0);
-+ rtl8225_write(dev, 0x2, 0x44D);
-+ rtl8225_write(dev, 0x3, 0x441);
-+ rtl8225_write(dev, 0x4, 0x8BE);
-+ rtl8225_write(dev, 0x5, 0xBF0); /* TODO: minipci */
-+ rtl8225_write(dev, 0x6, 0xAE6);
-+ rtl8225_write(dev, 0x7, rtl8225_chan[0]);
-+ rtl8225_write(dev, 0x8, 0x01F);
-+ rtl8225_write(dev, 0x9, 0x334);
-+ rtl8225_write(dev, 0xA, 0xFD4);
-+ rtl8225_write(dev, 0xB, 0x391);
-+ rtl8225_write(dev, 0xC, 0x050);
-+ rtl8225_write(dev, 0xD, 0x6DB);
-+ rtl8225_write(dev, 0xE, 0x029);
-+ rtl8225_write(dev, 0xF, 0x914); msleep(1);
++ basepriv = (struct dasd_eckd_private *) basedev->private;
++ startpriv = (struct dasd_eckd_private *) startdev->private;
++ data = &pfxdata->define_extend;
+
-+ rtl8225_write(dev, 0x2, 0xC4D); msleep(100);
++ ccw->cmd_code = DASD_ECKD_CCW_PFX;
++ ccw->flags = 0;
++ ccw->count = sizeof(*pfxdata);
++ ccw->cda = (__u32) __pa(pfxdata);
+
-+ rtl8225_write(dev, 0x0, 0x127);
++ memset(pfxdata, 0, sizeof(*pfxdata));
++ /* prefix data */
++ pfxdata->format = 0;
++ pfxdata->base_address = basepriv->conf_data.ned1.unit_addr;
++ pfxdata->base_lss = basepriv->conf_data.ned1.ID;
++ pfxdata->validity.define_extend = 1;
+
-+ for (i = 0; i < ARRAY_SIZE(rtl8225bcd_rxgain); i++) {
-+ rtl8225_write(dev, 0x1, i + 1);
-+ rtl8225_write(dev, 0x2, rtl8225bcd_rxgain[i]);
++ /* private uid is kept up to date, conf_data may be outdated */
++ if (startpriv->uid.type != UA_BASE_DEVICE) {
++ pfxdata->validity.verify_base = 1;
++ if (startpriv->uid.type == UA_HYPER_PAV_ALIAS)
++ pfxdata->validity.hyper_pav = 1;
+ }
+
-+ rtl8225_write(dev, 0x0, 0x027);
-+ rtl8225_write(dev, 0x0, 0x22F);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+
-+ for (i = 0; i < ARRAY_SIZE(rtl8225_agc); i++) {
-+ rtl8225_write_phy_ofdm(dev, 0xB, rtl8225_agc[i]);
-+ msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0xA, 0x80 + i);
-+ msleep(1);
++ /* define extend data (mostly) */
++ switch (cmd) {
++ case DASD_ECKD_CCW_READ_HOME_ADDRESS:
++ case DASD_ECKD_CCW_READ_RECORD_ZERO:
++ case DASD_ECKD_CCW_READ:
++ case DASD_ECKD_CCW_READ_MT:
++ case DASD_ECKD_CCW_READ_CKD:
++ case DASD_ECKD_CCW_READ_CKD_MT:
++ case DASD_ECKD_CCW_READ_KD:
++ case DASD_ECKD_CCW_READ_KD_MT:
++ case DASD_ECKD_CCW_READ_COUNT:
++ data->mask.perm = 0x1;
++ data->attributes.operation = basepriv->attrib.operation;
++ break;
++ case DASD_ECKD_CCW_WRITE:
++ case DASD_ECKD_CCW_WRITE_MT:
++ case DASD_ECKD_CCW_WRITE_KD:
++ case DASD_ECKD_CCW_WRITE_KD_MT:
++ data->mask.perm = 0x02;
++ data->attributes.operation = basepriv->attrib.operation;
++ rc = check_XRC_on_prefix(pfxdata, basedev);
++ break;
++ case DASD_ECKD_CCW_WRITE_CKD:
++ case DASD_ECKD_CCW_WRITE_CKD_MT:
++ data->attributes.operation = DASD_BYPASS_CACHE;
++ rc = check_XRC_on_prefix(pfxdata, basedev);
++ break;
++ case DASD_ECKD_CCW_ERASE:
++ case DASD_ECKD_CCW_WRITE_HOME_ADDRESS:
++ case DASD_ECKD_CCW_WRITE_RECORD_ZERO:
++ data->mask.perm = 0x3;
++ data->mask.auth = 0x1;
++ data->attributes.operation = DASD_BYPASS_CACHE;
++ rc = check_XRC_on_prefix(pfxdata, basedev);
++ break;
++ default:
++ DEV_MESSAGE(KERN_ERR, basedev, "unknown opcode 0x%x", cmd);
++ break;
+ }
+
-+ msleep(1);
-+
-+ rtl8225_write_phy_ofdm(dev, 0x00, 0x01); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x01, 0x02); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x02, 0x62); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x03, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x04, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x05, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x06, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x07, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x08, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x09, 0xfe); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0a, 0x09); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0b, 0x80); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0c, 0x01); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0e, 0xd3); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0f, 0x38); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x10, 0x84); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x11, 0x03); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x12, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x13, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x14, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x15, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x16, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x17, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x19, 0x19); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1a, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1b, 0x76); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1c, 0x04); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1e, 0x95); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1f, 0x75); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x20, 0x1f); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x21, 0x27); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x22, 0x16); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x24, 0x46); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x25, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x27, 0x88); msleep(1);
-+
-+ rtl8225_write_phy_cck(dev, 0x00, 0x98); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x03, 0x20); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x04, 0x7e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x05, 0x12); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x06, 0xfc); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x07, 0x78); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x08, 0x2e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x10, 0x93); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x11, 0x88); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x12, 0x47); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x13, 0xd0);
-+ rtl8225_write_phy_cck(dev, 0x19, 0x00);
-+ rtl8225_write_phy_cck(dev, 0x1a, 0xa0);
-+ rtl8225_write_phy_cck(dev, 0x1b, 0x08);
-+ rtl8225_write_phy_cck(dev, 0x40, 0x86);
-+ rtl8225_write_phy_cck(dev, 0x41, 0x8d); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x42, 0x15); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x43, 0x18); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x44, 0x1f); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x45, 0x1e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x46, 0x1a); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x47, 0x15); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x48, 0x10); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x49, 0x0a); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4a, 0x05); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4b, 0x02); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4c, 0x05); msleep(1);
++ data->attributes.mode = 0x3; /* ECKD */
+
-+ rtl818x_iowrite8(priv, &priv->map->TESTR, 0x0D); msleep(1);
++ if ((basepriv->rdc_data.cu_type == 0x2105 ||
++ basepriv->rdc_data.cu_type == 0x2107 ||
++ basepriv->rdc_data.cu_type == 0x1750)
++ && !(basepriv->uses_cdl && trk < 2))
++ data->ga_extended |= 0x40; /* Regular Data Format Mode */
+
-+ rtl8225_rf_set_tx_power(dev, 1);
++ geo.cyl = basepriv->rdc_data.no_cyl;
++ geo.head = basepriv->rdc_data.trk_per_cyl;
++ beg.cyl = trk / geo.head;
++ beg.head = trk % geo.head;
++ end.cyl = totrk / geo.head;
++ end.head = totrk % geo.head;
+
-+ /* RX antenna default to A */
-+ rtl8225_write_phy_cck(dev, 0x10, 0x9b); msleep(1); /* B: 0xDB */
-+ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1); /* B: 0x10 */
++ /* check for sequential prestage - enhance cylinder range */
++ if (data->attributes.operation == DASD_SEQ_PRESTAGE ||
++ data->attributes.operation == DASD_SEQ_ACCESS) {
+
-+ rtl818x_iowrite8(priv, &priv->map->TX_ANTENNA, 0x03); /* B: 0x00 */
-+ msleep(1);
-+ rtl818x_iowrite32(priv, (__le32 __iomem *)((void __iomem *)priv->map + 0x94), 0x15c00002);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++ if (end.cyl + basepriv->attrib.nr_cyl < geo.cyl)
++ end.cyl += basepriv->attrib.nr_cyl;
++ else
++ end.cyl = (geo.cyl - 1);
++ }
+
-+ rtl8225_write(dev, 0x0c, 0x50);
-+ /* set OFDM initial gain */
-+ rtl8225_write_phy_ofdm(dev, 0x0d, rtl8225_gain[4 * 4]);
-+ rtl8225_write_phy_ofdm(dev, 0x23, rtl8225_gain[4 * 4 + 1]);
-+ rtl8225_write_phy_ofdm(dev, 0x1b, rtl8225_gain[4 * 4 + 2]);
-+ rtl8225_write_phy_ofdm(dev, 0x1d, rtl8225_gain[4 * 4 + 3]);
-+ /* set CCK threshold */
-+ rtl8225_write_phy_cck(dev, 0x41, rtl8225_threshold[0]);
++ data->beg_ext.cyl = beg.cyl;
++ data->beg_ext.head = beg.head;
++ data->end_ext.cyl = end.cyl;
++ data->end_ext.head = end.head;
++ return rc;
+}
+
-+static const u8 rtl8225z2_tx_power_cck_ch14[] = {
-+ 0x36, 0x35, 0x2e, 0x1b, 0x00, 0x00, 0x00, 0x00
-+};
-+
-+static const u8 rtl8225z2_tx_power_cck_B[] = {
-+ 0x30, 0x2f, 0x29, 0x21, 0x19, 0x10, 0x08, 0x04
-+};
-+
-+static const u8 rtl8225z2_tx_power_cck_A[] = {
-+ 0x33, 0x32, 0x2b, 0x23, 0x1a, 0x11, 0x08, 0x04
-+};
-+
-+static const u8 rtl8225z2_tx_power_cck[] = {
-+ 0x36, 0x35, 0x2e, 0x25, 0x1c, 0x12, 0x09, 0x04
-+};
-+
-+static void rtl8225z2_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
+ static void
+ locate_record(struct ccw1 *ccw, struct LO_eckd_data *data, int trk,
+ int rec_on_trk, int no_rec, int cmd,
+@@ -300,7 +416,7 @@ locate_record(struct ccw1 *ccw, struct LO_eckd_data *data, int trk,
+ ccw->count = 16;
+ ccw->cda = (__u32) __pa(data);
+
+- memset(data, 0, sizeof (struct LO_eckd_data));
++ memset(data, 0, sizeof(struct LO_eckd_data));
+ sector = 0;
+ if (rec_on_trk) {
+ switch (private->rdc_data.dev_type) {
+@@ -441,12 +557,15 @@ dasd_eckd_generate_uid(struct dasd_device *device, struct dasd_uid *uid)
+ sizeof(uid->serial) - 1);
+ EBCASC(uid->serial, sizeof(uid->serial) - 1);
+ uid->ssid = confdata->neq.subsystemID;
+- if (confdata->ned2.sneq.flags == 0x40) {
+- uid->alias = 1;
+- uid->unit_addr = confdata->ned2.sneq.base_unit_addr;
+- } else
+- uid->unit_addr = confdata->ned1.unit_addr;
+-
++ uid->real_unit_addr = confdata->ned1.unit_addr;
++ if (confdata->ned2.sneq.flags == 0x40 &&
++ confdata->ned2.sneq.format == 0x0001) {
++ uid->type = confdata->ned2.sneq.sua_flags;
++ if (uid->type == UA_BASE_PAV_ALIAS)
++ uid->base_unit_addr = confdata->ned2.sneq.base_unit_addr;
++ } else {
++ uid->type = UA_BASE_DEVICE;
++ }
+ return 0;
+ }
+
+@@ -470,7 +589,9 @@ static struct dasd_ccw_req *dasd_eckd_build_rcd_lpm(struct dasd_device *device,
+ ccw->cda = (__u32)(addr_t)rcd_buffer;
+ ccw->count = ciw->count;
+
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
++ cqr->block = NULL;
+ cqr->expires = 10*HZ;
+ cqr->lpm = lpm;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+@@ -511,7 +632,7 @@ static int dasd_eckd_read_conf_lpm(struct dasd_device *device,
+ /*
+ * on success we update the user input parms
+ */
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ if (ret)
+ goto out_error;
+
+@@ -557,19 +678,19 @@ dasd_eckd_read_conf(struct dasd_device *device)
+ "data retrieved");
+ continue; /* no error */
+ }
+- if (conf_len != sizeof (struct dasd_eckd_confdata)) {
++ if (conf_len != sizeof(struct dasd_eckd_confdata)) {
+ MESSAGE(KERN_WARNING,
+ "sizes of configuration data mismatch"
+ "%d (read) vs %ld (expected)",
+ conf_len,
+- sizeof (struct dasd_eckd_confdata));
++ sizeof(struct dasd_eckd_confdata));
+ kfree(conf_data);
+ continue; /* no error */
+ }
+ /* save first valid configuration data */
+ if (!conf_data_saved){
+ memcpy(&private->conf_data, conf_data,
+- sizeof (struct dasd_eckd_confdata));
++ sizeof(struct dasd_eckd_confdata));
+ conf_data_saved++;
+ }
+ switch (((char *)conf_data)[242] & 0x07){
+@@ -586,39 +707,104 @@ dasd_eckd_read_conf(struct dasd_device *device)
+ return 0;
+ }
+
++static int dasd_eckd_read_features(struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 cck_power, ofdm_power;
-+ const u8 *tmp;
-+ int i;
-+
-+ cck_power = priv->channels[channel - 1].val & 0xFF;
-+ ofdm_power = priv->channels[channel - 1].val >> 8;
-+
-+ if (channel == 14)
-+ tmp = rtl8225z2_tx_power_cck_ch14;
-+ else if (cck_power == 12)
-+ tmp = rtl8225z2_tx_power_cck_B;
-+ else if (cck_power == 13)
-+ tmp = rtl8225z2_tx_power_cck_A;
-+ else
-+ tmp = rtl8225z2_tx_power_cck;
++ struct dasd_psf_prssd_data *prssdp;
++ struct dasd_rssd_features *features;
++ struct dasd_ccw_req *cqr;
++ struct ccw1 *ccw;
++ int rc;
++ struct dasd_eckd_private *private;
+
-+ for (i = 0; i < 8; i++)
-+ rtl8225_write_phy_cck(dev, 0x44 + i, *tmp++);
++ private = (struct dasd_eckd_private *) device->private;
++ cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
++ 1 /* PSF */ + 1 /* RSSD */ ,
++ (sizeof(struct dasd_psf_prssd_data) +
++ sizeof(struct dasd_rssd_features)),
++ device);
++ if (IS_ERR(cqr)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "Could not allocate initialization request");
++ return PTR_ERR(cqr);
++ }
++ cqr->startdev = device;
++ cqr->memdev = device;
++ cqr->block = NULL;
++ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
++ cqr->retries = 5;
++ cqr->expires = 10 * HZ;
+
-+ cck_power = min(cck_power, (u8)35);
-+ if (cck_power == 13 || cck_power == 14)
-+ cck_power = 12;
-+ if (cck_power >= 15)
-+ cck_power -= 2;
++ /* Prepare for Read Subsystem Data */
++ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
++ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
++ prssdp->order = PSF_ORDER_PRSSD;
++ prssdp->suborder = 0x41; /* Read Feature Codes */
++ /* all other bytes of prssdp must be zero */
+
-+ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_CCK, cck_power);
-+ rtl818x_ioread8(priv, &priv->map->TX_GAIN_CCK);
-+ msleep(1);
++ ccw = cqr->cpaddr;
++ ccw->cmd_code = DASD_ECKD_CCW_PSF;
++ ccw->count = sizeof(struct dasd_psf_prssd_data);
++ ccw->flags |= CCW_FLAG_CC;
++ ccw->cda = (__u32)(addr_t) prssdp;
+
-+ ofdm_power = min(ofdm_power, (u8)35);
-+ rtl818x_iowrite8(priv, &priv->map->TX_GAIN_OFDM, ofdm_power);
++ /* Read Subsystem Data - feature codes */
++ features = (struct dasd_rssd_features *) (prssdp + 1);
++ memset(features, 0, sizeof(struct dasd_rssd_features));
+
-+ rtl8225_write_phy_ofdm(dev, 2, 0x62);
-+ rtl8225_write_phy_ofdm(dev, 5, 0x00);
-+ rtl8225_write_phy_ofdm(dev, 6, 0x40);
-+ rtl8225_write_phy_ofdm(dev, 7, 0x00);
-+ rtl8225_write_phy_ofdm(dev, 8, 0x40);
++ ccw++;
++ ccw->cmd_code = DASD_ECKD_CCW_RSSD;
++ ccw->count = sizeof(struct dasd_rssd_features);
++ ccw->cda = (__u32)(addr_t) features;
+
-+ msleep(1);
++ cqr->buildclk = get_clock();
++ cqr->status = DASD_CQR_FILLED;
++ rc = dasd_sleep_on(cqr);
++ if (rc == 0) {
++ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
++ features = (struct dasd_rssd_features *) (prssdp + 1);
++ memcpy(&private->features, features,
++ sizeof(struct dasd_rssd_features));
++ }
++ dasd_sfree_request(cqr, cqr->memdev);
++ return rc;
+}
+
-+static const u16 rtl8225z2_rxgain[] = {
-+ 0x0000, 0x0001, 0x0002, 0x0003, 0x0004, 0x0005, 0x0008, 0x0009,
-+ 0x000a, 0x000b, 0x0102, 0x0103, 0x0104, 0x0105, 0x0140, 0x0141,
-+ 0x0142, 0x0143, 0x0144, 0x0145, 0x0180, 0x0181, 0x0182, 0x0183,
-+ 0x0184, 0x0185, 0x0188, 0x0189, 0x018a, 0x018b, 0x0243, 0x0244,
-+ 0x0245, 0x0280, 0x0281, 0x0282, 0x0283, 0x0284, 0x0285, 0x0288,
-+ 0x0289, 0x028a, 0x028b, 0x028c, 0x0342, 0x0343, 0x0344, 0x0345,
-+ 0x0380, 0x0381, 0x0382, 0x0383, 0x0384, 0x0385, 0x0388, 0x0389,
-+ 0x038a, 0x038b, 0x038c, 0x038d, 0x0390, 0x0391, 0x0392, 0x0393,
-+ 0x0394, 0x0395, 0x0398, 0x0399, 0x039a, 0x039b, 0x039c, 0x039d,
-+ 0x03a0, 0x03a1, 0x03a2, 0x03a3, 0x03a4, 0x03a5, 0x03a8, 0x03a9,
-+ 0x03aa, 0x03ab, 0x03ac, 0x03ad, 0x03b0, 0x03b1, 0x03b2, 0x03b3,
-+ 0x03b4, 0x03b5, 0x03b8, 0x03b9, 0x03ba, 0x03bb, 0x03bb
-+};
-+
-+static void rtl8225z2_rf_init(struct ieee80211_hw *dev)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ int i;
-+
-+ rtl8180_set_anaparam(priv, RTL8225_ANAPARAM_ON);
+
-+ /* host_pci_init */
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
-+ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ msleep(200); /* FIXME: ehh?? */
-+ rtl818x_iowrite8(priv, &priv->map->GP_ENABLE, 0xFF & ~(1 << 6));
+ /*
+ * Build CP for Perform Subsystem Function - SSC.
+ */
+-static struct dasd_ccw_req *
+-dasd_eckd_build_psf_ssc(struct dasd_device *device)
++static struct dasd_ccw_req *dasd_eckd_build_psf_ssc(struct dasd_device *device)
+ {
+- struct dasd_ccw_req *cqr;
+- struct dasd_psf_ssc_data *psf_ssc_data;
+- struct ccw1 *ccw;
++ struct dasd_ccw_req *cqr;
++ struct dasd_psf_ssc_data *psf_ssc_data;
++ struct ccw1 *ccw;
+
+- cqr = dasd_smalloc_request("ECKD", 1 /* PSF */ ,
++ cqr = dasd_smalloc_request("ECKD", 1 /* PSF */ ,
+ sizeof(struct dasd_psf_ssc_data),
+ device);
+
+- if (IS_ERR(cqr)) {
+- DEV_MESSAGE(KERN_WARNING, device, "%s",
++ if (IS_ERR(cqr)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
+ "Could not allocate PSF-SSC request");
+- return cqr;
+- }
+- psf_ssc_data = (struct dasd_psf_ssc_data *)cqr->data;
+- psf_ssc_data->order = PSF_ORDER_SSC;
+- psf_ssc_data->suborder = 0x08;
+-
+- ccw = cqr->cpaddr;
+- ccw->cmd_code = DASD_ECKD_CCW_PSF;
+- ccw->cda = (__u32)(addr_t)psf_ssc_data;
+- ccw->count = 66;
+-
+- cqr->device = device;
+- cqr->expires = 10*HZ;
+- cqr->buildclk = get_clock();
+- cqr->status = DASD_CQR_FILLED;
+- return cqr;
++ return cqr;
++ }
++ psf_ssc_data = (struct dasd_psf_ssc_data *)cqr->data;
++ psf_ssc_data->order = PSF_ORDER_SSC;
++ psf_ssc_data->suborder = 0x88;
++ psf_ssc_data->reserved[0] = 0x88;
+
-+ rtl818x_iowrite32(priv, &priv->map->RF_TIMING, 0x00088008);
++ ccw = cqr->cpaddr;
++ ccw->cmd_code = DASD_ECKD_CCW_PSF;
++ ccw->cda = (__u32)(addr_t)psf_ssc_data;
++ ccw->count = 66;
+
-+ /* TODO: check if we need really to change BRSR to do RF config */
-+ rtl818x_ioread16(priv, &priv->map->BRSR);
-+ rtl818x_iowrite16(priv, &priv->map->BRSR, 0xFFFF);
-+ rtl818x_iowrite32(priv, &priv->map->RF_PARA, 0x00100044);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, 0x44);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++ cqr->startdev = device;
++ cqr->memdev = device;
++ cqr->block = NULL;
++ cqr->expires = 10*HZ;
++ cqr->buildclk = get_clock();
++ cqr->status = DASD_CQR_FILLED;
++ return cqr;
+ }
+
+ /*
+@@ -629,28 +815,28 @@ dasd_eckd_build_psf_ssc(struct dasd_device *device)
+ static int
+ dasd_eckd_psf_ssc(struct dasd_device *device)
+ {
+- struct dasd_ccw_req *cqr;
+- int rc;
+-
+- cqr = dasd_eckd_build_psf_ssc(device);
+- if (IS_ERR(cqr))
+- return PTR_ERR(cqr);
+-
+- rc = dasd_sleep_on(cqr);
+- if (!rc)
+- /* trigger CIO to reprobe devices */
+- css_schedule_reprobe();
+- dasd_sfree_request(cqr, cqr->device);
+- return rc;
++ struct dasd_ccw_req *cqr;
++ int rc;
+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
++ cqr = dasd_eckd_build_psf_ssc(device);
++ if (IS_ERR(cqr))
++ return PTR_ERR(cqr);
+
-+ rtl8225_write(dev, 0x0, 0x0B7); msleep(1);
-+ rtl8225_write(dev, 0x1, 0xEE0); msleep(1);
-+ rtl8225_write(dev, 0x2, 0x44D); msleep(1);
-+ rtl8225_write(dev, 0x3, 0x441); msleep(1);
-+ rtl8225_write(dev, 0x4, 0x8C3); msleep(1);
-+ rtl8225_write(dev, 0x5, 0xC72); msleep(1);
-+ rtl8225_write(dev, 0x6, 0x0E6); msleep(1);
-+ rtl8225_write(dev, 0x7, 0x82A); msleep(1);
-+ rtl8225_write(dev, 0x8, 0x03F); msleep(1);
-+ rtl8225_write(dev, 0x9, 0x335); msleep(1);
-+ rtl8225_write(dev, 0xa, 0x9D4); msleep(1);
-+ rtl8225_write(dev, 0xb, 0x7BB); msleep(1);
-+ rtl8225_write(dev, 0xc, 0x850); msleep(1);
-+ rtl8225_write(dev, 0xd, 0xCDF); msleep(1);
-+ rtl8225_write(dev, 0xe, 0x02B); msleep(1);
-+ rtl8225_write(dev, 0xf, 0x114); msleep(100);
++ rc = dasd_sleep_on(cqr);
++ if (!rc)
++ /* trigger CIO to reprobe devices */
++ css_schedule_reprobe();
++ dasd_sfree_request(cqr, cqr->memdev);
++ return rc;
+ }
+
+ /*
+ * Validate storage server of current device.
+ */
+-static int
+-dasd_eckd_validate_server(struct dasd_device *device, struct dasd_uid *uid)
++static int dasd_eckd_validate_server(struct dasd_device *device)
+ {
+ int rc;
++ struct dasd_eckd_private *private;
+
+ /* Currently PAV is the only reason to 'validate' server on LPAR */
+ if (dasd_nopav || MACHINE_IS_VM)
+@@ -659,9 +845,11 @@ dasd_eckd_validate_server(struct dasd_device *device, struct dasd_uid *uid)
+ rc = dasd_eckd_psf_ssc(device);
+ /* may be requested feature is not available on server,
+ * therefore just report error and go ahead */
++ private = (struct dasd_eckd_private *) device->private;
+ DEV_MESSAGE(KERN_INFO, device,
+ "PSF-SSC on storage subsystem %s.%s.%04x returned rc=%d",
+- uid->vendor, uid->serial, uid->ssid, rc);
++ private->uid.vendor, private->uid.serial,
++ private->uid.ssid, rc);
+ /* RE-Read Configuration Data */
+ return dasd_eckd_read_conf(device);
+ }
+@@ -674,9 +862,9 @@ static int
+ dasd_eckd_check_characteristics(struct dasd_device *device)
+ {
+ struct dasd_eckd_private *private;
+- struct dasd_uid uid;
++ struct dasd_block *block;
+ void *rdc_data;
+- int rc;
++ int is_known, rc;
+
+ private = (struct dasd_eckd_private *) device->private;
+ if (private == NULL) {
+@@ -699,27 +887,54 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
+ /* Read Configuration Data */
+ rc = dasd_eckd_read_conf(device);
+ if (rc)
+- return rc;
++ goto out_err1;
+
+ /* Generate device unique id and register in devmap */
+- rc = dasd_eckd_generate_uid(device, &uid);
++ rc = dasd_eckd_generate_uid(device, &private->uid);
+ if (rc)
+- return rc;
+- rc = dasd_set_uid(device->cdev, &uid);
+- if (rc == 1) /* new server found */
+- rc = dasd_eckd_validate_server(device, &uid);
++ goto out_err1;
++ dasd_set_uid(device->cdev, &private->uid);
+
-+ if (!(rtl8225_read(dev, 6) & (1 << 7))) {
-+ rtl8225_write(dev, 0x02, 0x0C4D);
-+ msleep(200);
-+ rtl8225_write(dev, 0x02, 0x044D);
-+ msleep(100);
-+ /* TODO: readd calibration failure message when the calibration
-+ check works */
++ if (private->uid.type == UA_BASE_DEVICE) {
++ block = dasd_alloc_block();
++ if (IS_ERR(block)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "could not allocate dasd block structure");
++ rc = PTR_ERR(block);
++ goto out_err1;
++ }
++ device->block = block;
++ block->base = device;
+ }
+
-+ rtl8225_write(dev, 0x0, 0x1B7);
-+ rtl8225_write(dev, 0x3, 0x002);
-+ rtl8225_write(dev, 0x5, 0x004);
-+
-+ for (i = 0; i < ARRAY_SIZE(rtl8225z2_rxgain); i++) {
-+ rtl8225_write(dev, 0x1, i + 1);
-+ rtl8225_write(dev, 0x2, rtl8225z2_rxgain[i]);
++ /* register lcu with alias handling, enable PAV if this is a new lcu */
++ is_known = dasd_alias_make_device_known_to_lcu(device);
++ if (is_known < 0) {
++ rc = is_known;
++ goto out_err2;
+ }
-+
-+ rtl8225_write(dev, 0x0, 0x0B7); msleep(100);
-+ rtl8225_write(dev, 0x2, 0xC4D);
-+
-+ msleep(200);
-+ rtl8225_write(dev, 0x2, 0x44D);
-+ msleep(100);
-+
-+ rtl8225_write(dev, 0x00, 0x2BF);
-+ rtl8225_write(dev, 0xFF, 0xFFFF);
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+
-+ for (i = 0; i < ARRAY_SIZE(rtl8225_agc); i++) {
-+ rtl8225_write_phy_ofdm(dev, 0xB, rtl8225_agc[i]);
-+ msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0xA, 0x80 + i);
-+ msleep(1);
++ if (!is_known) {
++ /* new lcu found */
++ rc = dasd_eckd_validate_server(device); /* will switch pav on */
++ if (rc)
++ goto out_err3;
+ }
+
-+ msleep(1);
-+
-+ rtl8225_write_phy_ofdm(dev, 0x00, 0x01); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x01, 0x02); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x02, 0x62); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x03, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x04, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x05, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x06, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x07, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x08, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x09, 0xfe); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0a, 0x09); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0b, 0x80); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0c, 0x01); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0d, 0x43);
-+ rtl8225_write_phy_ofdm(dev, 0x0e, 0xd3); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x0f, 0x38); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x10, 0x84); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x11, 0x06); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x12, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x13, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x14, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x15, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x16, 0x00); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x17, 0x40); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x18, 0xef); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x19, 0x19); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1a, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1b, 0x11); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1c, 0x04); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1d, 0xc5); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1e, 0xb3); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x1f, 0x75); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x20, 0x1f); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x21, 0x27); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x22, 0x16); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x23, 0x80); msleep(1); /* FIXME: not needed? */
-+ rtl8225_write_phy_ofdm(dev, 0x24, 0x46); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x25, 0x20); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1);
-+ rtl8225_write_phy_ofdm(dev, 0x27, 0x88); msleep(1);
-+
-+ rtl8225_write_phy_cck(dev, 0x00, 0x98); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x03, 0x20); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x04, 0x7e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x05, 0x12); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x06, 0xfc); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x07, 0x78); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x08, 0x2e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x10, 0x93); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x11, 0x88); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x12, 0x47); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x13, 0xd0);
-+ rtl8225_write_phy_cck(dev, 0x19, 0x00);
-+ rtl8225_write_phy_cck(dev, 0x1a, 0xa0);
-+ rtl8225_write_phy_cck(dev, 0x1b, 0x08);
-+ rtl8225_write_phy_cck(dev, 0x40, 0x86);
-+ rtl8225_write_phy_cck(dev, 0x41, 0x8a); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x42, 0x15); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x43, 0x18); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x44, 0x36); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x45, 0x35); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x46, 0x2e); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x47, 0x25); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x48, 0x1c); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x49, 0x12); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4a, 0x09); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4b, 0x04); msleep(1);
-+ rtl8225_write_phy_cck(dev, 0x4c, 0x05); msleep(1);
-+
-+ rtl818x_iowrite8(priv, (u8 __iomem *)((void __iomem *)priv->map + 0x5B), 0x0D); msleep(1);
-+
-+ rtl8225z2_rf_set_tx_power(dev, 1);
-+
-+ /* RX antenna default to A */
-+ rtl8225_write_phy_cck(dev, 0x10, 0x9b); msleep(1); /* B: 0xDB */
-+ rtl8225_write_phy_ofdm(dev, 0x26, 0x90); msleep(1); /* B: 0x10 */
-+
-+ rtl818x_iowrite8(priv, &priv->map->TX_ANTENNA, 0x03); /* B: 0x00 */
-+ msleep(1);
-+ rtl818x_iowrite32(priv, (__le32 __iomem *)((void __iomem *)priv->map + 0x94), 0x15c00002);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+}
++ /* Read Feature Codes */
++ rc = dasd_eckd_read_features(device);
+ if (rc)
+- return rc;
++ goto out_err3;
+
+ /* Read Device Characteristics */
+ rdc_data = (void *) &(private->rdc_data);
+ memset(rdc_data, 0, sizeof(rdc_data));
+ rc = dasd_generic_read_dev_chars(device, "ECKD", &rdc_data, 64);
+- if (rc)
++ if (rc) {
+ DEV_MESSAGE(KERN_WARNING, device,
+ "Read device characteristics returned "
+ "rc=%d", rc);
+-
++ goto out_err3;
++ }
+ DEV_MESSAGE(KERN_INFO, device,
+ "%04X/%02X(CU:%04X/%02X) Cyl:%d Head:%d Sec:%d",
+ private->rdc_data.dev_type,
+@@ -729,9 +944,24 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
+ private->rdc_data.no_cyl,
+ private->rdc_data.trk_per_cyl,
+ private->rdc_data.sec_per_trk);
++ return 0;
+
-+static void rtl8225_rf_stop(struct ieee80211_hw *dev)
++out_err3:
++ dasd_alias_disconnect_device_from_lcu(device);
++out_err2:
++ dasd_free_block(device->block);
++ device->block = NULL;
++out_err1:
++ kfree(device->private);
++ device->private = NULL;
+ return rc;
+ }
+
++static void dasd_eckd_uncheck_device(struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 reg;
-+
-+ rtl8225_write(dev, 0x4, 0x1f); msleep(1);
-+
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
-+ reg = rtl818x_ioread8(priv, &priv->map->CONFIG3);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg | RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite32(priv, &priv->map->ANAPARAM2, RTL8225_ANAPARAM2_OFF);
-+ rtl818x_iowrite32(priv, &priv->map->ANAPARAM, RTL8225_ANAPARAM_OFF);
-+ rtl818x_iowrite8(priv, &priv->map->CONFIG3, reg & ~RTL818X_CONFIG3_ANAPARAM_WRITE);
-+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
++ dasd_alias_disconnect_device_from_lcu(device);
+}
+
-+static void rtl8225_rf_set_channel(struct ieee80211_hw *dev,
-+ struct ieee80211_conf *conf)
+ static struct dasd_ccw_req *
+ dasd_eckd_analysis_ccw(struct dasd_device *device)
+ {
+@@ -755,7 +985,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
+ /* Define extent for the first 3 tracks. */
+ define_extent(ccw++, cqr->data, 0, 2,
+ DASD_ECKD_CCW_READ_COUNT, device);
+- LO_data = cqr->data + sizeof (struct DE_eckd_data);
++ LO_data = cqr->data + sizeof(struct DE_eckd_data);
+ /* Locate record for the first 4 records on track 0. */
+ ccw[-1].flags |= CCW_FLAG_CC;
+ locate_record(ccw++, LO_data++, 0, 0, 4,
+@@ -783,7 +1013,9 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
+ ccw->count = 8;
+ ccw->cda = (__u32)(addr_t) count_data;
+
+- cqr->device = device;
++ cqr->block = NULL;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ cqr->retries = 0;
+ cqr->buildclk = get_clock();
+ cqr->status = DASD_CQR_FILLED;
+@@ -803,7 +1035,7 @@ dasd_eckd_analysis_callback(struct dasd_ccw_req *init_cqr, void *data)
+ struct dasd_eckd_private *private;
+ struct dasd_device *device;
+
+- device = init_cqr->device;
++ device = init_cqr->startdev;
+ private = (struct dasd_eckd_private *) device->private;
+ private->init_cqr_status = init_cqr->status;
+ dasd_sfree_request(init_cqr, device);
+@@ -811,13 +1043,13 @@ dasd_eckd_analysis_callback(struct dasd_ccw_req *init_cqr, void *data)
+ }
+
+ static int
+-dasd_eckd_start_analysis(struct dasd_device *device)
++dasd_eckd_start_analysis(struct dasd_block *block)
+ {
+ struct dasd_eckd_private *private;
+ struct dasd_ccw_req *init_cqr;
+
+- private = (struct dasd_eckd_private *) device->private;
+- init_cqr = dasd_eckd_analysis_ccw(device);
++ private = (struct dasd_eckd_private *) block->base->private;
++ init_cqr = dasd_eckd_analysis_ccw(block->base);
+ if (IS_ERR(init_cqr))
+ return PTR_ERR(init_cqr);
+ init_cqr->callback = dasd_eckd_analysis_callback;
+@@ -828,13 +1060,15 @@ dasd_eckd_start_analysis(struct dasd_device *device)
+ }
+
+ static int
+-dasd_eckd_end_analysis(struct dasd_device *device)
++dasd_eckd_end_analysis(struct dasd_block *block)
+ {
++ struct dasd_device *device;
+ struct dasd_eckd_private *private;
+ struct eckd_count *count_area;
+ unsigned int sb, blk_per_trk;
+ int status, i;
+
++ device = block->base;
+ private = (struct dasd_eckd_private *) device->private;
+ status = private->init_cqr_status;
+ private->init_cqr_status = -1;
+@@ -846,7 +1080,7 @@ dasd_eckd_end_analysis(struct dasd_device *device)
+
+ private->uses_cdl = 1;
+ /* Calculate number of blocks/records per track. */
+- blk_per_trk = recs_per_track(&private->rdc_data, 0, device->bp_block);
++ blk_per_trk = recs_per_track(&private->rdc_data, 0, block->bp_block);
+ /* Check Track 0 for Compatible Disk Layout */
+ count_area = NULL;
+ for (i = 0; i < 3; i++) {
+@@ -876,56 +1110,65 @@ dasd_eckd_end_analysis(struct dasd_device *device)
+ if (count_area != NULL && count_area->kl == 0) {
+ /* we found notthing violating our disk layout */
+ if (dasd_check_blocksize(count_area->dl) == 0)
+- device->bp_block = count_area->dl;
++ block->bp_block = count_area->dl;
+ }
+- if (device->bp_block == 0) {
++ if (block->bp_block == 0) {
+ DEV_MESSAGE(KERN_WARNING, device, "%s",
+ "Volume has incompatible disk layout");
+ return -EMEDIUMTYPE;
+ }
+- device->s2b_shift = 0; /* bits to shift 512 to get a block */
+- for (sb = 512; sb < device->bp_block; sb = sb << 1)
+- device->s2b_shift++;
++ block->s2b_shift = 0; /* bits to shift 512 to get a block */
++ for (sb = 512; sb < block->bp_block; sb = sb << 1)
++ block->s2b_shift++;
+
+- blk_per_trk = recs_per_track(&private->rdc_data, 0, device->bp_block);
+- device->blocks = (private->rdc_data.no_cyl *
++ blk_per_trk = recs_per_track(&private->rdc_data, 0, block->bp_block);
++ block->blocks = (private->rdc_data.no_cyl *
+ private->rdc_data.trk_per_cyl *
+ blk_per_trk);
+
+ DEV_MESSAGE(KERN_INFO, device,
+ "(%dkB blks): %dkB at %dkB/trk %s",
+- (device->bp_block >> 10),
++ (block->bp_block >> 10),
+ ((private->rdc_data.no_cyl *
+ private->rdc_data.trk_per_cyl *
+- blk_per_trk * (device->bp_block >> 9)) >> 1),
+- ((blk_per_trk * device->bp_block) >> 10),
++ blk_per_trk * (block->bp_block >> 9)) >> 1),
++ ((blk_per_trk * block->bp_block) >> 10),
+ private->uses_cdl ?
+ "compatible disk layout" : "linux disk layout");
+
+ return 0;
+ }
+
+-static int
+-dasd_eckd_do_analysis(struct dasd_device *device)
++static int dasd_eckd_do_analysis(struct dasd_block *block)
+ {
+ struct dasd_eckd_private *private;
+
+- private = (struct dasd_eckd_private *) device->private;
++ private = (struct dasd_eckd_private *) block->base->private;
+ if (private->init_cqr_status < 0)
+- return dasd_eckd_start_analysis(device);
++ return dasd_eckd_start_analysis(block);
+ else
+- return dasd_eckd_end_analysis(device);
++ return dasd_eckd_end_analysis(block);
+ }
+
++static int dasd_eckd_ready_to_online(struct dasd_device *device)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+
-+ if (priv->rf->init == rtl8225_rf_init)
-+ rtl8225_rf_set_tx_power(dev, conf->channel);
-+ else
-+ rtl8225z2_rf_set_tx_power(dev, conf->channel);
-+
-+ rtl8225_write(dev, 0x7, rtl8225_chan[conf->channel - 1]);
-+ msleep(10);
-+
-+ if (conf->flags & IEEE80211_CONF_SHORT_SLOT_TIME) {
-+ rtl818x_iowrite8(priv, &priv->map->SLOT, 0x9);
-+ rtl818x_iowrite8(priv, &priv->map->SIFS, 0x22);
-+ rtl818x_iowrite8(priv, &priv->map->DIFS, 0x14);
-+ rtl818x_iowrite8(priv, &priv->map->EIFS, 81);
-+ rtl818x_iowrite8(priv, &priv->map->CW_VAL, 0x73);
-+ } else {
-+ rtl818x_iowrite8(priv, &priv->map->SLOT, 0x14);
-+ rtl818x_iowrite8(priv, &priv->map->SIFS, 0x44);
-+ rtl818x_iowrite8(priv, &priv->map->DIFS, 0x24);
-+ rtl818x_iowrite8(priv, &priv->map->EIFS, 81);
-+ rtl818x_iowrite8(priv, &priv->map->CW_VAL, 0xa5);
-+ }
-+}
++ return dasd_alias_add_device(device);
++};
+
-+static const struct rtl818x_rf_ops rtl8225_ops = {
-+ .name = "rtl8225",
-+ .init = rtl8225_rf_init,
-+ .stop = rtl8225_rf_stop,
-+ .set_chan = rtl8225_rf_set_channel
++static int dasd_eckd_online_to_ready(struct dasd_device *device)
++{
++ return dasd_alias_remove_device(device);
+};
+
-+static const struct rtl818x_rf_ops rtl8225z2_ops = {
-+ .name = "rtl8225z2",
-+ .init = rtl8225z2_rf_init,
-+ .stop = rtl8225_rf_stop,
-+ .set_chan = rtl8225_rf_set_channel
+ static int
+-dasd_eckd_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
++dasd_eckd_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
+ {
+ struct dasd_eckd_private *private;
+
+- private = (struct dasd_eckd_private *) device->private;
+- if (dasd_check_blocksize(device->bp_block) == 0) {
++ private = (struct dasd_eckd_private *) block->base->private;
++ if (dasd_check_blocksize(block->bp_block) == 0) {
+ geo->sectors = recs_per_track(&private->rdc_data,
+- 0, device->bp_block);
++ 0, block->bp_block);
+ }
+ geo->cylinders = private->rdc_data.no_cyl;
+ geo->heads = private->rdc_data.trk_per_cyl;
+@@ -1037,7 +1280,7 @@ dasd_eckd_format_device(struct dasd_device * device,
+ locate_record(ccw++, (struct LO_eckd_data *) data,
+ fdata->start_unit, 0, rpt + 1,
+ DASD_ECKD_CCW_WRITE_RECORD_ZERO, device,
+- device->bp_block);
++ device->block->bp_block);
+ data += sizeof(struct LO_eckd_data);
+ break;
+ case 0x04: /* Invalidate track. */
+@@ -1110,43 +1353,28 @@ dasd_eckd_format_device(struct dasd_device * device,
+ ccw++;
+ }
+ }
+- fcp->device = device;
+- fcp->retries = 2; /* set retry counter to enable ERP */
++ fcp->startdev = device;
++ fcp->memdev = device;
++ clear_bit(DASD_CQR_FLAGS_USE_ERP, &fcp->flags);
++ fcp->retries = 5; /* set retry counter to enable default ERP */
+ fcp->buildclk = get_clock();
+ fcp->status = DASD_CQR_FILLED;
+ return fcp;
+ }
+
+-static dasd_era_t
+-dasd_eckd_examine_error(struct dasd_ccw_req * cqr, struct irb * irb)
++static void dasd_eckd_handle_terminated_request(struct dasd_ccw_req *cqr)
+ {
+- struct dasd_device *device = (struct dasd_device *) cqr->device;
+- struct ccw_device *cdev = device->cdev;
+-
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+-
+- switch (cdev->id.cu_type) {
+- case 0x3990:
+- case 0x2105:
+- case 0x2107:
+- case 0x1750:
+- return dasd_3990_erp_examine(cqr, irb);
+- case 0x9343:
+- return dasd_9343_erp_examine(cqr, irb);
+- case 0x3880:
+- default:
+- DEV_MESSAGE(KERN_WARNING, device, "%s",
+- "default (unknown CU type) - RECOVERABLE return");
+- return dasd_era_recover;
++ cqr->status = DASD_CQR_FILLED;
++ if (cqr->block && (cqr->startdev != cqr->block->base)) {
++ dasd_eckd_reset_ccw_to_base_io(cqr);
++ cqr->startdev = cqr->block->base;
+ }
+-}
+};
+
+ static dasd_erp_fn_t
+ dasd_eckd_erp_action(struct dasd_ccw_req * cqr)
+ {
+- struct dasd_device *device = (struct dasd_device *) cqr->device;
++ struct dasd_device *device = (struct dasd_device *) cqr->startdev;
+ struct ccw_device *cdev = device->cdev;
+
+ switch (cdev->id.cu_type) {
+@@ -1168,8 +1396,37 @@ dasd_eckd_erp_postaction(struct dasd_ccw_req * cqr)
+ return dasd_default_erp_postaction;
+ }
+
+-static struct dasd_ccw_req *
+-dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+
-+const struct rtl818x_rf_ops * rtl8180_detect_rf(struct ieee80211_hw *dev)
++static void dasd_eckd_handle_unsolicited_interrupt(struct dasd_device *device,
++ struct irb *irb)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u16 reg8, reg9;
-+
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsOutput, 0x0480);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsSelect, 0x0488);
-+ rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FFF);
-+ rtl818x_ioread8(priv, &priv->map->EEPROM_CMD);
-+ msleep(100);
-+
-+ rtl8225_write(dev, 0, 0x1B7);
-+
-+ reg8 = rtl8225_read(dev, 8);
-+ reg9 = rtl8225_read(dev, 9);
-+
-+ rtl8225_write(dev, 0, 0x0B7);
-+
-+ if (reg8 != 0x588 || reg9 != 0x700)
-+ return &rtl8225_ops;
-+
-+ return &rtl8225z2_ops;
-+}
-diff --git a/drivers/net/wireless/rtl8180_rtl8225.h b/drivers/net/wireless/rtl8180_rtl8225.h
-new file mode 100644
-index 0000000..310013a
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_rtl8225.h
-@@ -0,0 +1,23 @@
-+#ifndef RTL8180_RTL8225_H
-+#define RTL8180_RTL8225_H
++ char mask;
+
-+#define RTL8225_ANAPARAM_ON 0xa0000b59
-+#define RTL8225_ANAPARAM2_ON 0x860dec11
-+#define RTL8225_ANAPARAM_OFF 0xa00beb59
-+#define RTL8225_ANAPARAM2_OFF 0x840dec11
++ /* first of all check for state change pending interrupt */
++ mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
++ if ((irb->scsw.dstat & mask) == mask) {
++ dasd_generic_handle_state_change(device);
++ return;
++ }
+
-+const struct rtl818x_rf_ops * rtl8180_detect_rf(struct ieee80211_hw *);
++ /* summary unit check */
++ if ((irb->scsw.dstat & DEV_STAT_UNIT_CHECK) && irb->ecw[7] == 0x0D) {
++ dasd_alias_handle_summary_unit_check(device, irb);
++ return;
++ }
+
-+static inline void rtl8225_write_phy_ofdm(struct ieee80211_hw *dev,
-+ u8 addr, u8 data)
-+{
-+ rtl8180_write_phy(dev, addr, data);
-+}
++ /* just report other unsolicited interrupts */
++ DEV_MESSAGE(KERN_DEBUG, device, "%s",
++ "unsolicited interrupt received");
++ device->discipline->dump_sense(device, NULL, irb);
++ dasd_schedule_device_bh(device);
+
-+static inline void rtl8225_write_phy_cck(struct ieee80211_hw *dev,
-+ u8 addr, u8 data)
-+{
-+ rtl8180_write_phy(dev, addr, data | 0x10000);
-+}
++ return;
++};
+
-+#endif /* RTL8180_RTL8225_H */
-diff --git a/drivers/net/wireless/rtl8180_sa2400.c b/drivers/net/wireless/rtl8180_sa2400.c
-new file mode 100644
-index 0000000..e08ace7
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_sa2400.c
-@@ -0,0 +1,201 @@
++static struct dasd_ccw_req *dasd_eckd_build_cp(struct dasd_device *startdev,
++ struct dasd_block *block,
++ struct request *req)
+ {
+ struct dasd_eckd_private *private;
+ unsigned long *idaws;
+@@ -1185,8 +1442,11 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ sector_t first_trk, last_trk;
+ unsigned int first_offs, last_offs;
+ unsigned char cmd, rcmd;
++ int use_prefix;
++ struct dasd_device *basedev;
+
+- private = (struct dasd_eckd_private *) device->private;
++ basedev = block->base;
++ private = (struct dasd_eckd_private *) basedev->private;
+ if (rq_data_dir(req) == READ)
+ cmd = DASD_ECKD_CCW_READ_MT;
+ else if (rq_data_dir(req) == WRITE)
+@@ -1194,13 +1454,13 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ else
+ return ERR_PTR(-EINVAL);
+ /* Calculate number of blocks/records per track. */
+- blksize = device->bp_block;
++ blksize = block->bp_block;
+ blk_per_trk = recs_per_track(&private->rdc_data, 0, blksize);
+ /* Calculate record id of first and last block. */
+- first_rec = first_trk = req->sector >> device->s2b_shift;
++ first_rec = first_trk = req->sector >> block->s2b_shift;
+ first_offs = sector_div(first_trk, blk_per_trk);
+ last_rec = last_trk =
+- (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
++ (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
+ last_offs = sector_div(last_trk, blk_per_trk);
+ /* Check struct bio and count the number of blocks for the request. */
+ count = 0;
+@@ -1209,20 +1469,33 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ if (bv->bv_len & (blksize - 1))
+ /* Eckd can only do full blocks. */
+ return ERR_PTR(-EINVAL);
+- count += bv->bv_len >> (device->s2b_shift + 9);
++ count += bv->bv_len >> (block->s2b_shift + 9);
+ #if defined(CONFIG_64BIT)
+ if (idal_is_needed (page_address(bv->bv_page), bv->bv_len))
+- cidaw += bv->bv_len >> (device->s2b_shift + 9);
++ cidaw += bv->bv_len >> (block->s2b_shift + 9);
+ #endif
+ }
+ /* Paranoia. */
+ if (count != last_rec - first_rec + 1)
+ return ERR_PTR(-EINVAL);
+- /* 1x define extent + 1x locate record + number of blocks */
+- cplength = 2 + count;
+- /* 1x define extent + 1x locate record + cidaws*sizeof(long) */
+- datasize = sizeof(struct DE_eckd_data) + sizeof(struct LO_eckd_data) +
+- cidaw * sizeof(unsigned long);
+
++ /* use the prefix command if available */
++ use_prefix = private->features.feature[8] & 0x01;
++ if (use_prefix) {
++ /* 1x prefix + number of blocks */
++ cplength = 2 + count;
++ /* 1x prefix + cidaws*sizeof(long) */
++ datasize = sizeof(struct PFX_eckd_data) +
++ sizeof(struct LO_eckd_data) +
++ cidaw * sizeof(unsigned long);
++ } else {
++ /* 1x define extent + 1x locate record + number of blocks */
++ cplength = 2 + count;
++ /* 1x define extent + 1x locate record + cidaws*sizeof(long) */
++ datasize = sizeof(struct DE_eckd_data) +
++ sizeof(struct LO_eckd_data) +
++ cidaw * sizeof(unsigned long);
++ }
+ /* Find out the number of additional locate record ccws for cdl. */
+ if (private->uses_cdl && first_rec < 2*blk_per_trk) {
+ if (last_rec >= 2*blk_per_trk)
+@@ -1232,26 +1505,42 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ }
+ /* Allocate the ccw request. */
+ cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
+- cplength, datasize, device);
++ cplength, datasize, startdev);
+ if (IS_ERR(cqr))
+ return cqr;
+ ccw = cqr->cpaddr;
+- /* First ccw is define extent. */
+- if (define_extent(ccw++, cqr->data, first_trk,
+- last_trk, cmd, device) == -EAGAIN) {
+- /* Clock not in sync and XRC is enabled. Try again later. */
+- dasd_sfree_request(cqr, device);
+- return ERR_PTR(-EAGAIN);
++ /* First ccw is define extent or prefix. */
++ if (use_prefix) {
++ if (prefix(ccw++, cqr->data, first_trk,
++ last_trk, cmd, basedev, startdev) == -EAGAIN) {
++ /* Clock not in sync and XRC is enabled.
++ * Try again later.
++ */
++ dasd_sfree_request(cqr, startdev);
++ return ERR_PTR(-EAGAIN);
++ }
++ idaws = (unsigned long *) (cqr->data +
++ sizeof(struct PFX_eckd_data));
++ } else {
++ if (define_extent(ccw++, cqr->data, first_trk,
++ last_trk, cmd, startdev) == -EAGAIN) {
++ /* Clock not in sync and XRC is enabled.
++ * Try again later.
++ */
++ dasd_sfree_request(cqr, startdev);
++ return ERR_PTR(-EAGAIN);
++ }
++ idaws = (unsigned long *) (cqr->data +
++ sizeof(struct DE_eckd_data));
+ }
+ /* Build locate_record+read/write/ccws. */
+- idaws = (unsigned long *) (cqr->data + sizeof(struct DE_eckd_data));
+ LO_data = (struct LO_eckd_data *) (idaws + cidaw);
+ recid = first_rec;
+ if (private->uses_cdl == 0 || recid > 2*blk_per_trk) {
+ /* Only standard blocks so there is just one locate record. */
+ ccw[-1].flags |= CCW_FLAG_CC;
+ locate_record(ccw++, LO_data++, first_trk, first_offs + 1,
+- last_rec - recid + 1, cmd, device, blksize);
++ last_rec - recid + 1, cmd, basedev, blksize);
+ }
+ rq_for_each_segment(bv, req, iter) {
+ dst = page_address(bv->bv_page) + bv->bv_offset;
+@@ -1281,7 +1570,7 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ ccw[-1].flags |= CCW_FLAG_CC;
+ locate_record(ccw++, LO_data++,
+ trkid, recoffs + 1,
+- 1, rcmd, device, count);
++ 1, rcmd, basedev, count);
+ }
+ /* Locate record for standard blocks ? */
+ if (private->uses_cdl && recid == 2*blk_per_trk) {
+@@ -1289,7 +1578,7 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ locate_record(ccw++, LO_data++,
+ trkid, recoffs + 1,
+ last_rec - recid + 1,
+- cmd, device, count);
++ cmd, basedev, count);
+ }
+ /* Read/write ccw. */
+ ccw[-1].flags |= CCW_FLAG_CC;
+@@ -1310,7 +1599,9 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
+ }
+ if (req->cmd_flags & REQ_FAILFAST)
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+- cqr->device = device;
++ cqr->startdev = startdev;
++ cqr->memdev = startdev;
++ cqr->block = block;
+ cqr->expires = 5 * 60 * HZ; /* 5 minutes */
+ cqr->lpm = private->path_data.ppm;
+ cqr->retries = 256;
+@@ -1333,10 +1624,10 @@ dasd_eckd_free_cp(struct dasd_ccw_req *cqr, struct request *req)
+
+ if (!dasd_page_cache)
+ goto out;
+- private = (struct dasd_eckd_private *) cqr->device->private;
+- blksize = cqr->device->bp_block;
++ private = (struct dasd_eckd_private *) cqr->block->base->private;
++ blksize = cqr->block->bp_block;
+ blk_per_trk = recs_per_track(&private->rdc_data, 0, blksize);
+- recid = req->sector >> cqr->device->s2b_shift;
++ recid = req->sector >> cqr->block->s2b_shift;
+ ccw = cqr->cpaddr;
+ /* Skip over define extent & locate record. */
+ ccw++;
+@@ -1367,10 +1658,71 @@ dasd_eckd_free_cp(struct dasd_ccw_req *cqr, struct request *req)
+ }
+ out:
+ status = cqr->status == DASD_CQR_DONE;
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return status;
+ }
+
+/*
-+ * Radio tuning for Philips SA2400 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project have been
-+ * very useful to understand certain things
-+ *
-+ * I want to thanks the Authors of such projects and the Ndiswrapper
-+ * project Authors.
-+ *
-+ * A special Big Thanks also is for all people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
++ * Modify ccw chain in cqr so it can be started on a base device.
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * Note that this is not enough to restart the cqr!
++ * Either reset cqr->startdev as well (summary unit check handling)
++ * or restart via separate cqr (as in ERP handling).
+ */
-+
-+#include <linux/init.h>
-+#include <linux/pci.h>
-+#include <linux/delay.h>
-+#include <net/mac80211.h>
-+
-+#include "rtl8180.h"
-+#include "rtl8180_sa2400.h"
-+
-+static const u32 sa2400_chan[] = {
-+ 0x00096c, /* ch1 */
-+ 0x080970,
-+ 0x100974,
-+ 0x180978,
-+ 0x000980,
-+ 0x080984,
-+ 0x100988,
-+ 0x18098c,
-+ 0x000994,
-+ 0x080998,
-+ 0x10099c,
-+ 0x1809a0,
-+ 0x0009a8,
-+ 0x0009b4, /* ch 14 */
-+};
-+
-+static void write_sa2400(struct ieee80211_hw *dev, u8 addr, u32 data)
++void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *cqr)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 phy_config;
-+
-+ /* MAC will bang bits to the sa2400. sw 3-wire is NOT used */
-+ phy_config = 0xb0000000;
-+
-+ phy_config |= ((u32)(addr & 0xf)) << 24;
-+ phy_config |= data & 0xffffff;
++ struct ccw1 *ccw;
++ struct PFX_eckd_data *pfxdata;
+
-+ rtl818x_iowrite32(priv,
-+ (__le32 __iomem *) &priv->map->RFPinsOutput, phy_config);
++ ccw = cqr->cpaddr;
++ pfxdata = cqr->data;
+
-+ msleep(3);
++ if (ccw->cmd_code == DASD_ECKD_CCW_PFX) {
++ pfxdata->validity.verify_base = 0;
++ pfxdata->validity.hyper_pav = 0;
++ }
+}
+
-+static void sa2400_write_phy_antenna(struct ieee80211_hw *dev, short chan)
-+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u8 ant = SA2400_ANTENNA;
-+
-+ if (priv->rfparam & RF_PARAM_ANTBDEFAULT)
-+ ant |= BB_ANTENNA_B;
-+
-+ if (chan == 14)
-+ ant |= BB_ANTATTEN_CHAN14;
-+
-+ rtl8180_write_phy(dev, 0x10, ant);
-+
-+}
++#define DASD_ECKD_CHANQ_MAX_SIZE 4
+
-+static void sa2400_rf_set_channel(struct ieee80211_hw *dev,
-+ struct ieee80211_conf *conf)
++static struct dasd_ccw_req *dasd_eckd_build_alias_cp(struct dasd_device *base,
++ struct dasd_block *block,
++ struct request *req)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 txpw = priv->channels[conf->channel - 1].val & 0xFF;
-+ u32 chan = sa2400_chan[conf->channel - 1];
-+
-+ write_sa2400(dev, 7, txpw);
-+
-+ sa2400_write_phy_antenna(dev, chan);
++ struct dasd_eckd_private *private;
++ struct dasd_device *startdev;
++ unsigned long flags;
++ struct dasd_ccw_req *cqr;
+
-+ write_sa2400(dev, 0, chan);
-+ write_sa2400(dev, 1, 0xbb50);
-+ write_sa2400(dev, 2, 0x80);
-+ write_sa2400(dev, 3, 0);
-+}
++ startdev = dasd_alias_get_start_dev(base);
++ if (!startdev)
++ startdev = base;
++ private = (struct dasd_eckd_private *) startdev->private;
++ if (private->count >= DASD_ECKD_CHANQ_MAX_SIZE)
++ return ERR_PTR(-EBUSY);
+
-+static void sa2400_rf_stop(struct ieee80211_hw *dev)
-+{
-+ write_sa2400(dev, 4, 0);
++ spin_lock_irqsave(get_ccwdev_lock(startdev->cdev), flags);
++ private->count++;
++ cqr = dasd_eckd_build_cp(startdev, block, req);
++ if (IS_ERR(cqr))
++ private->count--;
++ spin_unlock_irqrestore(get_ccwdev_lock(startdev->cdev), flags);
++ return cqr;
+}
+
-+static void sa2400_rf_init(struct ieee80211_hw *dev)
++static int dasd_eckd_free_alias_cp(struct dasd_ccw_req *cqr,
++ struct request *req)
+{
-+ struct rtl8180_priv *priv = dev->priv;
-+ u32 anaparam, txconf;
-+ u8 firdac;
-+ int analogphy = priv->rfparam & RF_PARAM_ANALOGPHY;
-+
-+ anaparam = priv->anaparam;
-+ anaparam &= ~(1 << ANAPARAM_TXDACOFF_SHIFT);
-+ anaparam &= ~ANAPARAM_PWR1_MASK;
-+ anaparam &= ~ANAPARAM_PWR0_MASK;
-+
-+ if (analogphy) {
-+ anaparam |= SA2400_ANA_ANAPARAM_PWR1_ON << ANAPARAM_PWR1_SHIFT;
-+ firdac = 0;
-+ } else {
-+ anaparam |= (SA2400_DIG_ANAPARAM_PWR1_ON << ANAPARAM_PWR1_SHIFT);
-+ anaparam |= (SA2400_ANAPARAM_PWR0_ON << ANAPARAM_PWR0_SHIFT);
-+ firdac = 1 << SA2400_REG4_FIRDAC_SHIFT;
-+ }
-+
-+ rtl8180_set_anaparam(priv, anaparam);
-+
-+ write_sa2400(dev, 0, sa2400_chan[0]);
-+ write_sa2400(dev, 1, 0xbb50);
-+ write_sa2400(dev, 2, 0x80);
-+ write_sa2400(dev, 3, 0);
-+ write_sa2400(dev, 4, 0x19340 | firdac);
-+ write_sa2400(dev, 5, 0x1dfb | (SA2400_MAX_SENS - 54) << 15);
-+ write_sa2400(dev, 4, 0x19348 | firdac); /* calibrate VCO */
++ struct dasd_eckd_private *private;
++ unsigned long flags;
+
-+ if (!analogphy)
-+ write_sa2400(dev, 4, 0x1938c); /*???*/
++ spin_lock_irqsave(get_ccwdev_lock(cqr->memdev->cdev), flags);
++ private = (struct dasd_eckd_private *) cqr->memdev->private;
++ private->count--;
++ spin_unlock_irqrestore(get_ccwdev_lock(cqr->memdev->cdev), flags);
++ return dasd_eckd_free_cp(cqr, req);
++}
+
-+ write_sa2400(dev, 4, 0x19340 | firdac);
+ static int
+ dasd_eckd_fill_info(struct dasd_device * device,
+ struct dasd_information2_t * info)
+@@ -1384,9 +1736,9 @@ dasd_eckd_fill_info(struct dasd_device * device,
+ info->characteristics_size = sizeof(struct dasd_eckd_characteristics);
+ memcpy(info->characteristics, &private->rdc_data,
+ sizeof(struct dasd_eckd_characteristics));
+- info->confdata_size = sizeof (struct dasd_eckd_confdata);
++ info->confdata_size = sizeof(struct dasd_eckd_confdata);
+ memcpy(info->configuration_data, &private->conf_data,
+- sizeof (struct dasd_eckd_confdata));
++ sizeof(struct dasd_eckd_confdata));
+ return 0;
+ }
+
+@@ -1419,7 +1771,8 @@ dasd_eckd_release(struct dasd_device *device)
+ cqr->cpaddr->flags |= CCW_FLAG_SLI;
+ cqr->cpaddr->count = 32;
+ cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+ cqr->retries = 2; /* set retry counter to enable basic ERP */
+@@ -1429,7 +1782,7 @@ dasd_eckd_release(struct dasd_device *device)
+
+ rc = dasd_sleep_on_immediatly(cqr);
+
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return rc;
+ }
+
+@@ -1459,7 +1812,8 @@ dasd_eckd_reserve(struct dasd_device *device)
+ cqr->cpaddr->flags |= CCW_FLAG_SLI;
+ cqr->cpaddr->count = 32;
+ cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+ cqr->retries = 2; /* set retry counter to enable basic ERP */
+@@ -1469,7 +1823,7 @@ dasd_eckd_reserve(struct dasd_device *device)
+
+ rc = dasd_sleep_on_immediatly(cqr);
+
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return rc;
+ }
+
+@@ -1498,7 +1852,8 @@ dasd_eckd_steal_lock(struct dasd_device *device)
+ cqr->cpaddr->flags |= CCW_FLAG_SLI;
+ cqr->cpaddr->count = 32;
+ cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+ cqr->retries = 2; /* set retry counter to enable basic ERP */
+@@ -1508,7 +1863,7 @@ dasd_eckd_steal_lock(struct dasd_device *device)
+
+ rc = dasd_sleep_on_immediatly(cqr);
+
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return rc;
+ }
+
+@@ -1526,52 +1881,52 @@ dasd_eckd_performance(struct dasd_device *device, void __user *argp)
+
+ cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
+ 1 /* PSF */ + 1 /* RSSD */ ,
+- (sizeof (struct dasd_psf_prssd_data) +
+- sizeof (struct dasd_rssd_perf_stats_t)),
++ (sizeof(struct dasd_psf_prssd_data) +
++ sizeof(struct dasd_rssd_perf_stats_t)),
+ device);
+ if (IS_ERR(cqr)) {
+ DEV_MESSAGE(KERN_WARNING, device, "%s",
+ "Could not allocate initialization request");
+ return PTR_ERR(cqr);
+ }
+- cqr->device = device;
++ cqr->startdev = device;
++ cqr->memdev = device;
+ cqr->retries = 0;
+ cqr->expires = 10 * HZ;
+
+ /* Prepare for Read Subsystem Data */
+ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
+- memset(prssdp, 0, sizeof (struct dasd_psf_prssd_data));
++ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
+ prssdp->order = PSF_ORDER_PRSSD;
+- prssdp->suborder = 0x01; /* Perfomance Statistics */
++ prssdp->suborder = 0x01; /* Performance Statistics */
+ prssdp->varies[1] = 0x01; /* Perf Statistics for the Subsystem */
+
+ ccw = cqr->cpaddr;
+ ccw->cmd_code = DASD_ECKD_CCW_PSF;
+- ccw->count = sizeof (struct dasd_psf_prssd_data);
++ ccw->count = sizeof(struct dasd_psf_prssd_data);
+ ccw->flags |= CCW_FLAG_CC;
+ ccw->cda = (__u32)(addr_t) prssdp;
+
+ /* Read Subsystem Data - Performance Statistics */
+ stats = (struct dasd_rssd_perf_stats_t *) (prssdp + 1);
+- memset(stats, 0, sizeof (struct dasd_rssd_perf_stats_t));
++ memset(stats, 0, sizeof(struct dasd_rssd_perf_stats_t));
+
+ ccw++;
+ ccw->cmd_code = DASD_ECKD_CCW_RSSD;
+- ccw->count = sizeof (struct dasd_rssd_perf_stats_t);
++ ccw->count = sizeof(struct dasd_rssd_perf_stats_t);
+ ccw->cda = (__u32)(addr_t) stats;
+
+ cqr->buildclk = get_clock();
+ cqr->status = DASD_CQR_FILLED;
+ rc = dasd_sleep_on(cqr);
+ if (rc == 0) {
+- /* Prepare for Read Subsystem Data */
+ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
+ stats = (struct dasd_rssd_perf_stats_t *) (prssdp + 1);
+ if (copy_to_user(argp, stats,
+ sizeof(struct dasd_rssd_perf_stats_t)))
+ rc = -EFAULT;
+ }
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return rc;
+ }
+
+@@ -1594,7 +1949,7 @@ dasd_eckd_get_attrib(struct dasd_device *device, void __user *argp)
+
+ rc = 0;
+ if (copy_to_user(argp, (long *) &attrib,
+- sizeof (struct attrib_data_t)))
++ sizeof(struct attrib_data_t)))
+ rc = -EFAULT;
+
+ return rc;
+@@ -1627,8 +1982,10 @@ dasd_eckd_set_attrib(struct dasd_device *device, void __user *argp)
+ }
+
+ static int
+-dasd_eckd_ioctl(struct dasd_device *device, unsigned int cmd, void __user *argp)
++dasd_eckd_ioctl(struct dasd_block *block, unsigned int cmd, void __user *argp)
+ {
++ struct dasd_device *device = block->base;
+
-+ write_sa2400(dev, 0, sa2400_chan[0]);
-+ write_sa2400(dev, 1, 0xbb50);
-+ write_sa2400(dev, 2, 0x80);
-+ write_sa2400(dev, 3, 0);
-+ write_sa2400(dev, 4, 0x19344 | firdac); /* calibrate filter */
+ switch (cmd) {
+ case BIODASDGATTR:
+ return dasd_eckd_get_attrib(device, argp);
+@@ -1685,9 +2042,8 @@ dasd_eckd_dump_ccw_range(struct ccw1 *from, struct ccw1 *to, char *page)
+ * Print sense data and related channel program.
+ * Parts are printed because printk buffer is only 1024 bytes.
+ */
+-static void
+-dasd_eckd_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
+- struct irb *irb)
++static void dasd_eckd_dump_sense(struct dasd_device *device,
++ struct dasd_ccw_req *req, struct irb *irb)
+ {
+ char *page;
+ struct ccw1 *first, *last, *fail, *from, *to;
+@@ -1743,37 +2099,40 @@ dasd_eckd_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
+ }
+ printk("%s", page);
+
+- /* dump the Channel Program (max 140 Bytes per line) */
+- /* Count CCW and print first CCWs (maximum 1024 % 140 = 7) */
+- first = req->cpaddr;
+- for (last = first; last->flags & (CCW_FLAG_CC | CCW_FLAG_DC); last++);
+- to = min(first + 6, last);
+- len = sprintf(page, KERN_ERR PRINTK_HEADER
+- " Related CP in req: %p\n", req);
+- dasd_eckd_dump_ccw_range(first, to, page + len);
+- printk("%s", page);
++ if (req) {
++ /* req == NULL for unsolicited interrupts */
++ /* dump the Channel Program (max 140 Bytes per line) */
++ /* Count CCW and print first CCWs (maximum 1024 % 140 = 7) */
++ first = req->cpaddr;
++ for (last = first; last->flags & (CCW_FLAG_CC | CCW_FLAG_DC); last++);
++ to = min(first + 6, last);
++ len = sprintf(page, KERN_ERR PRINTK_HEADER
++ " Related CP in req: %p\n", req);
++ dasd_eckd_dump_ccw_range(first, to, page + len);
++ printk("%s", page);
+
+- /* print failing CCW area (maximum 4) */
+- /* scsw->cda is either valid or zero */
+- len = 0;
+- from = ++to;
+- fail = (struct ccw1 *)(addr_t) irb->scsw.cpa; /* failing CCW */
+- if (from < fail - 2) {
+- from = fail - 2; /* there is a gap - print header */
+- len += sprintf(page, KERN_ERR PRINTK_HEADER "......\n");
+- }
+- to = min(fail + 1, last);
+- len += dasd_eckd_dump_ccw_range(from, to, page + len);
+-
+- /* print last CCWs (maximum 2) */
+- from = max(from, ++to);
+- if (from < last - 1) {
+- from = last - 1; /* there is a gap - print header */
+- len += sprintf(page + len, KERN_ERR PRINTK_HEADER "......\n");
++ /* print failing CCW area (maximum 4) */
++ /* scsw->cda is either valid or zero */
++ len = 0;
++ from = ++to;
++ fail = (struct ccw1 *)(addr_t) irb->scsw.cpa; /* failing CCW */
++ if (from < fail - 2) {
++ from = fail - 2; /* there is a gap - print header */
++ len += sprintf(page, KERN_ERR PRINTK_HEADER "......\n");
++ }
++ to = min(fail + 1, last);
++ len += dasd_eckd_dump_ccw_range(from, to, page + len);
+
-+ /* new from rtl8180 embedded driver (rtl8181 project) */
-+ write_sa2400(dev, 6, 0x13ff | (1 << 23)); /* MANRX */
-+ write_sa2400(dev, 8, 0); /* VCO */
++ /* print last CCWs (maximum 2) */
++ from = max(from, ++to);
++ if (from < last - 1) {
++ from = last - 1; /* there is a gap - print header */
++ len += sprintf(page + len, KERN_ERR PRINTK_HEADER "......\n");
++ }
++ len += dasd_eckd_dump_ccw_range(from, last, page + len);
++ if (len > 0)
++ printk("%s", page);
+ }
+- len += dasd_eckd_dump_ccw_range(from, last, page + len);
+- if (len > 0)
+- printk("%s", page);
+ free_page((unsigned long) page);
+ }
+
+@@ -1796,16 +2155,20 @@ static struct dasd_discipline dasd_eckd_discipline = {
+ .ebcname = "ECKD",
+ .max_blocks = 240,
+ .check_device = dasd_eckd_check_characteristics,
++ .uncheck_device = dasd_eckd_uncheck_device,
+ .do_analysis = dasd_eckd_do_analysis,
++ .ready_to_online = dasd_eckd_ready_to_online,
++ .online_to_ready = dasd_eckd_online_to_ready,
+ .fill_geometry = dasd_eckd_fill_geometry,
+ .start_IO = dasd_start_IO,
+ .term_IO = dasd_term_IO,
++ .handle_terminated_request = dasd_eckd_handle_terminated_request,
+ .format_device = dasd_eckd_format_device,
+- .examine_error = dasd_eckd_examine_error,
+ .erp_action = dasd_eckd_erp_action,
+ .erp_postaction = dasd_eckd_erp_postaction,
+- .build_cp = dasd_eckd_build_cp,
+- .free_cp = dasd_eckd_free_cp,
++ .handle_unsolicited_interrupt = dasd_eckd_handle_unsolicited_interrupt,
++ .build_cp = dasd_eckd_build_alias_cp,
++ .free_cp = dasd_eckd_free_alias_cp,
+ .dump_sense = dasd_eckd_dump_sense,
+ .fill_info = dasd_eckd_fill_info,
+ .ioctl = dasd_eckd_ioctl,
+diff --git a/drivers/s390/block/dasd_eckd.h b/drivers/s390/block/dasd_eckd.h
+index 712ff16..fc2509c 100644
+--- a/drivers/s390/block/dasd_eckd.h
++++ b/drivers/s390/block/dasd_eckd.h
+@@ -39,6 +39,8 @@
+ #define DASD_ECKD_CCW_READ_CKD_MT 0x9e
+ #define DASD_ECKD_CCW_WRITE_CKD_MT 0x9d
+ #define DASD_ECKD_CCW_RESERVE 0xB4
++#define DASD_ECKD_CCW_PFX 0xE7
++#define DASD_ECKD_CCW_RSCK 0xF9
+
+ /*
+ * Perform Subsystem Function / Sub-Orders
+@@ -137,6 +139,25 @@ struct LO_eckd_data {
+ __u16 length;
+ } __attribute__ ((packed));
+
++/* Prefix data for format 0x00 and 0x01 */
++struct PFX_eckd_data {
++ unsigned char format;
++ struct {
++ unsigned char define_extend:1;
++ unsigned char time_stamp:1;
++ unsigned char verify_base:1;
++ unsigned char hyper_pav:1;
++ unsigned char reserved:4;
++ } __attribute__ ((packed)) validity;
++ __u8 base_address;
++ __u8 aux;
++ __u8 base_lss;
++ __u8 reserved[7];
++ struct DE_eckd_data define_extend;
++ struct LO_eckd_data locate_record;
++ __u8 LO_extended_data[4];
++} __attribute__ ((packed));
+
-+ if (analogphy) {
-+ rtl8180_set_anaparam(priv, anaparam |
-+ (1 << ANAPARAM_TXDACOFF_SHIFT));
+ struct dasd_eckd_characteristics {
+ __u16 cu_type;
+ struct {
+@@ -254,7 +275,9 @@ struct dasd_eckd_confdata {
+ } __attribute__ ((packed)) ned;
+ struct {
+ unsigned char flags; /* byte 0 */
+- unsigned char res2[7]; /* byte 1- 7 */
++ unsigned char res1; /* byte 1 */
++ __u16 format; /* byte 2-3 */
++ unsigned char res2[4]; /* byte 4-7 */
+ unsigned char sua_flags; /* byte 8 */
+ __u8 base_unit_addr; /* byte 9 */
+ unsigned char res3[22]; /* byte 10-31 */
+@@ -343,6 +366,11 @@ struct dasd_eckd_path {
+ __u8 npm;
+ };
+
++struct dasd_rssd_features {
++ char feature[256];
++} __attribute__((packed));
+
-+ txconf = rtl818x_ioread32(priv, &priv->map->TX_CONF);
-+ rtl818x_iowrite32(priv, &priv->map->TX_CONF,
-+ txconf | RTL818X_TX_CONF_LOOPBACK_CONT);
+
-+ write_sa2400(dev, 4, 0x19341); /* calibrates DC */
+ /*
+ * Perform Subsystem Function - Prepare for Read Subsystem Data
+ */
+@@ -365,4 +393,99 @@ struct dasd_psf_ssc_data {
+ unsigned char reserved[59];
+ } __attribute__((packed));
+
+
-+ /* a 5us sleep is required here,
-+ * we rely on the 3ms delay introduced in write_sa2400 */
-+ write_sa2400(dev, 4, 0x19345);
++/*
++ * some structures and definitions for alias handling
++ */
++struct dasd_unit_address_configuration {
++ struct {
++ char ua_type;
++ char base_ua;
++ } unit[256];
++} __attribute__((packed));
+
-+ /* a 20us sleep is required here,
-+ * we rely on the 3ms delay introduced in write_sa2400 */
+
-+ rtl818x_iowrite32(priv, &priv->map->TX_CONF, txconf);
++#define MAX_DEVICES_PER_LCU 256
+
-+ rtl8180_set_anaparam(priv, anaparam);
-+ }
-+ /* end new code */
++/* flags on the LCU */
++#define NEED_UAC_UPDATE 0x01
++#define UPDATE_PENDING 0x02
+
-+ write_sa2400(dev, 4, 0x19341 | firdac); /* RTX MODE */
++enum pavtype {NO_PAV, BASE_PAV, HYPER_PAV};
+
-+ /* baseband configuration */
-+ rtl8180_write_phy(dev, 0, 0x98);
-+ rtl8180_write_phy(dev, 3, 0x38);
-+ rtl8180_write_phy(dev, 4, 0xe0);
-+ rtl8180_write_phy(dev, 5, 0x90);
-+ rtl8180_write_phy(dev, 6, 0x1a);
-+ rtl8180_write_phy(dev, 7, 0x64);
+
-+ sa2400_write_phy_antenna(dev, 1);
++struct alias_root {
++ struct list_head serverlist;
++ spinlock_t lock;
++};
+
-+ rtl8180_write_phy(dev, 0x11, 0x80);
++struct alias_server {
++ struct list_head server;
++ struct dasd_uid uid;
++ struct list_head lculist;
++};
+
-+ if (rtl818x_ioread8(priv, &priv->map->CONFIG2) &
-+ RTL818X_CONFIG2_ANTENNA_DIV)
-+ rtl8180_write_phy(dev, 0x12, 0xc7); /* enable ant diversity */
-+ else
-+ rtl8180_write_phy(dev, 0x12, 0x47); /* disable ant diversity */
++struct summary_unit_check_work_data {
++ char reason;
++ struct dasd_device *device;
++ struct work_struct worker;
++};
+
-+ rtl8180_write_phy(dev, 0x13, 0x90 | priv->csthreshold);
++struct read_uac_work_data {
++ struct dasd_device *device;
++ struct delayed_work dwork;
++};
+
-+ rtl8180_write_phy(dev, 0x19, 0x0);
-+ rtl8180_write_phy(dev, 0x1a, 0xa0);
-+}
++struct alias_lcu {
++ struct list_head lcu;
++ struct dasd_uid uid;
++ enum pavtype pav;
++ char flags;
++ spinlock_t lock;
++ struct list_head grouplist;
++ struct list_head active_devices;
++ struct list_head inactive_devices;
++ struct dasd_unit_address_configuration *uac;
++ struct summary_unit_check_work_data suc_data;
++ struct read_uac_work_data ruac_data;
++ struct dasd_ccw_req *rsu_cqr;
++};
+
-+const struct rtl818x_rf_ops sa2400_rf_ops = {
-+ .name = "Philips",
-+ .init = sa2400_rf_init,
-+ .stop = sa2400_rf_stop,
-+ .set_chan = sa2400_rf_set_channel
++struct alias_pav_group {
++ struct list_head group;
++ struct dasd_uid uid;
++ struct alias_lcu *lcu;
++ struct list_head baselist;
++ struct list_head aliaslist;
++ struct dasd_device *next;
+};
-diff --git a/drivers/net/wireless/rtl8180_sa2400.h b/drivers/net/wireless/rtl8180_sa2400.h
-new file mode 100644
-index 0000000..a4aaa0d
---- /dev/null
-+++ b/drivers/net/wireless/rtl8180_sa2400.h
-@@ -0,0 +1,36 @@
-+#ifndef RTL8180_SA2400_H
-+#define RTL8180_SA2400_H
+
-+/*
-+ * Radio tuning for Philips SA2400 on RTL8180
-+ *
-+ * Copyright 2007 Andrea Merello <andreamrl at tiscali.it>
-+ *
-+ * Code from the BSD driver and the rtl8181 project has been
-+ * very useful for understanding certain things
-+ *
-+ * I want to thank the authors of such projects and the Ndiswrapper
-+ * project authors.
-+ *
-+ * A special big thanks also goes to all the people who donated me cards,
-+ * making possible the creation of the original rtl8180 driver
-+ * from which this code is derived!
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ */
+
-+#define SA2400_ANTENNA 0x91
-+#define SA2400_DIG_ANAPARAM_PWR1_ON 0x8
-+#define SA2400_ANA_ANAPARAM_PWR1_ON 0x28
-+#define SA2400_ANAPARAM_PWR0_ON 0x3
++struct dasd_eckd_private {
++ struct dasd_eckd_characteristics rdc_data;
++ struct dasd_eckd_confdata conf_data;
++ struct dasd_eckd_path path_data;
++ struct eckd_count count_area[5];
++ int init_cqr_status;
++ int uses_cdl;
++ struct attrib_data_t attrib; /* e.g. cache operations */
++ struct dasd_rssd_features features;
+
-+/* RX sensitivity in dBm */
-+#define SA2400_MAX_SENS 85
++ /* alias management */
++ struct dasd_uid uid;
++ struct alias_pav_group *pavgroup;
++ struct alias_lcu *lcu;
++ int count;
++};
+
-+#define SA2400_REG4_FIRDAC_SHIFT 7
+
-+extern const struct rtl818x_rf_ops sa2400_rf_ops;
+
-+#endif /* RTL8180_SA2400_H */
-diff --git a/drivers/net/wireless/rtl8187.h b/drivers/net/wireless/rtl8187.h
-index 6ad322e..8680a0b 100644
---- a/drivers/net/wireless/rtl8187.h
-+++ b/drivers/net/wireless/rtl8187.h
-@@ -64,9 +64,9 @@ struct rtl8187_tx_hdr {
- struct rtl8187_priv {
- /* common between rtl818x drivers */
- struct rtl818x_csr *map;
-- void (*rf_init)(struct ieee80211_hw *);
-+ const struct rtl818x_rf_ops *rf;
-+ struct ieee80211_vif *vif;
- int mode;
-- int if_id;
++int dasd_alias_make_device_known_to_lcu(struct dasd_device *);
++void dasd_alias_disconnect_device_from_lcu(struct dasd_device *);
++int dasd_alias_add_device(struct dasd_device *);
++int dasd_alias_remove_device(struct dasd_device *);
++struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *);
++void dasd_alias_handle_summary_unit_check(struct dasd_device *, struct irb *);
++void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *);
++
+ #endif /* DASD_ECKD_H */
+diff --git a/drivers/s390/block/dasd_eer.c b/drivers/s390/block/dasd_eer.c
+index 0c081a6..6e53ab6 100644
+--- a/drivers/s390/block/dasd_eer.c
++++ b/drivers/s390/block/dasd_eer.c
+@@ -336,7 +336,7 @@ static void dasd_eer_write_snss_trigger(struct dasd_device *device,
+ unsigned long flags;
+ struct eerbuffer *eerb;
- /* rtl8187 specific */
- struct ieee80211_channel channels[14];
-diff --git a/drivers/net/wireless/rtl8187_dev.c b/drivers/net/wireless/rtl8187_dev.c
-index bd1ab3b..0d71716 100644
---- a/drivers/net/wireless/rtl8187_dev.c
-+++ b/drivers/net/wireless/rtl8187_dev.c
-@@ -150,7 +150,8 @@ static int rtl8187_tx(struct ieee80211_hw *dev, struct sk_buff *skb,
- flags |= RTL8187_TX_FLAG_MORE_FRAG;
- if (control->flags & IEEE80211_TXCTL_USE_RTS_CTS) {
- flags |= RTL8187_TX_FLAG_RTS;
-- rts_dur = ieee80211_rts_duration(dev, priv->if_id, skb->len, control);
-+ rts_dur = ieee80211_rts_duration(dev, priv->vif,
-+ skb->len, control);
+- snss_rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
++ snss_rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
+ if (snss_rc)
+ data_size = 0;
+ else
+@@ -404,10 +404,11 @@ void dasd_eer_snss(struct dasd_device *device)
+ set_bit(DASD_FLAG_EER_SNSS, &device->flags);
+ return;
}
- if (control->flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
- flags |= RTL8187_TX_FLAG_CTS;
-@@ -227,6 +228,7 @@ static void rtl8187_rx_cb(struct urb *urb)
- rx_status.channel = dev->conf.channel;
- rx_status.phymode = dev->conf.phymode;
- rx_status.mactime = le64_to_cpu(hdr->mac_time);
-+ rx_status.flag |= RX_FLAG_TSFT;
- if (flags & (1 << 13))
- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
- ieee80211_rx_irqsafe(dev, skb, &rx_status);
-@@ -392,37 +394,19 @@ static int rtl8187_init_hw(struct ieee80211_hw *dev)
- rtl818x_iowrite16(priv, &priv->map->RFPinsEnable, 0x1FF7);
- msleep(100);
-
-- priv->rf_init(dev);
-+ priv->rf->init(dev);
-
- rtl818x_iowrite16(priv, &priv->map->BRSR, 0x01F3);
-- reg = rtl818x_ioread16(priv, &priv->map->PGSELECT) & 0xfffe;
-- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg | 0x1);
-+ reg = rtl818x_ioread8(priv, &priv->map->PGSELECT) & ~1;
-+ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg | 1);
- rtl818x_iowrite16(priv, (__le16 *)0xFFFE, 0x10);
- rtl818x_iowrite8(priv, &priv->map->TALLY_SEL, 0x80);
- rtl818x_iowrite8(priv, (u8 *)0xFFFF, 0x60);
-- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg);
-+ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg);
-
- return 0;
++ /* cdev is already locked, can't use dasd_add_request_head */
+ clear_bit(DASD_FLAG_EER_SNSS, &device->flags);
+ cqr->status = DASD_CQR_QUEUED;
+- list_add(&cqr->list, &device->ccw_queue);
+- dasd_schedule_bh(device);
++ list_add(&cqr->devlist, &device->ccw_queue);
++ dasd_schedule_device_bh(device);
}
--static void rtl8187_set_channel(struct ieee80211_hw *dev, int channel)
--{
-- u32 reg;
-- struct rtl8187_priv *priv = dev->priv;
--
-- reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
-- /* Enable TX loopback on MAC level to avoid TX during channel
-- * changes, as this has been seen to cause problems and the
-- * card will stop working until the next reset
-- */
-- rtl818x_iowrite32(priv, &priv->map->TX_CONF,
-- reg | RTL818X_TX_CONF_LOOPBACK_MAC);
-- msleep(10);
-- rtl8225_rf_set_channel(dev, channel);
-- msleep(10);
-- rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
--}
--
- static int rtl8187_start(struct ieee80211_hw *dev)
- {
- struct rtl8187_priv *priv = dev->priv;
-@@ -491,7 +475,7 @@ static void rtl8187_stop(struct ieee80211_hw *dev)
- reg &= ~RTL818X_CMD_RX_ENABLE;
- rtl818x_iowrite8(priv, &priv->map->CMD, reg);
-
-- rtl8225_rf_stop(dev);
-+ priv->rf->stop(dev);
-
- rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_CONFIG);
- reg = rtl818x_ioread8(priv, &priv->map->CONFIG4);
-@@ -542,7 +526,19 @@ static void rtl8187_remove_interface(struct ieee80211_hw *dev,
- static int rtl8187_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
+ /*
+@@ -415,7 +416,7 @@ void dasd_eer_snss(struct dasd_device *device)
+ */
+ static void dasd_eer_snss_cb(struct dasd_ccw_req *cqr, void *data)
{
- struct rtl8187_priv *priv = dev->priv;
-- rtl8187_set_channel(dev, conf->channel);
-+ u32 reg;
-+
-+ reg = rtl818x_ioread32(priv, &priv->map->TX_CONF);
-+ /* Enable TX loopback on MAC level to avoid TX during channel
-+ * changes, as this has been seen to cause problems and the
-+ * card will stop working until the next reset
-+ */
-+ rtl818x_iowrite32(priv, &priv->map->TX_CONF,
-+ reg | RTL818X_TX_CONF_LOOPBACK_MAC);
-+ msleep(10);
-+ priv->rf->set_chan(dev, conf);
-+ msleep(10);
-+ rtl818x_iowrite32(priv, &priv->map->TX_CONF, reg);
+- struct dasd_device *device = cqr->device;
++ struct dasd_device *device = cqr->startdev;
+ unsigned long flags;
- rtl818x_iowrite8(priv, &priv->map->SIFS, 0x22);
+ dasd_eer_write(device, cqr, DASD_EER_STATECHANGE);
+@@ -458,7 +459,7 @@ int dasd_eer_enable(struct dasd_device *device)
+ if (!cqr)
+ return -ENOMEM;
-@@ -565,14 +561,13 @@ static int rtl8187_config(struct ieee80211_hw *dev, struct ieee80211_conf *conf)
- return 0;
+- cqr->device = device;
++ cqr->startdev = device;
+ cqr->retries = 255;
+ cqr->expires = 10 * HZ;
+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
+diff --git a/drivers/s390/block/dasd_erp.c b/drivers/s390/block/dasd_erp.c
+index caa5d91..8f10000 100644
+--- a/drivers/s390/block/dasd_erp.c
++++ b/drivers/s390/block/dasd_erp.c
+@@ -46,6 +46,8 @@ dasd_alloc_erp_request(char *magic, int cplength, int datasize,
+ if (cqr == NULL)
+ return ERR_PTR(-ENOMEM);
+ memset(cqr, 0, sizeof(struct dasd_ccw_req));
++ INIT_LIST_HEAD(&cqr->devlist);
++ INIT_LIST_HEAD(&cqr->blocklist);
+ data = (char *) cqr + ((sizeof(struct dasd_ccw_req) + 7L) & -8L);
+ cqr->cpaddr = NULL;
+ if (cplength > 0) {
+@@ -66,7 +68,7 @@ dasd_alloc_erp_request(char *magic, int cplength, int datasize,
}
--static int rtl8187_config_interface(struct ieee80211_hw *dev, int if_id,
-+static int rtl8187_config_interface(struct ieee80211_hw *dev,
-+ struct ieee80211_vif *vif,
- struct ieee80211_if_conf *conf)
+ void
+-dasd_free_erp_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
++dasd_free_erp_request(struct dasd_ccw_req *cqr, struct dasd_device * device)
{
- struct rtl8187_priv *priv = dev->priv;
- int i;
-
-- priv->if_id = if_id;
--
- for (i = 0; i < ETH_ALEN; i++)
- rtl818x_iowrite8(priv, &priv->map->BSSID[i], conf->bssid[i]);
+ unsigned long flags;
-@@ -752,23 +747,16 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
- eeprom_93cx6_read(&eeprom, RTL8187_EEPROM_TXPWR_BASE,
- &priv->txpwr_base);
+@@ -81,11 +83,11 @@ dasd_free_erp_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
+ * dasd_default_erp_action just retries the current cqr
+ */
+ struct dasd_ccw_req *
+-dasd_default_erp_action(struct dasd_ccw_req * cqr)
++dasd_default_erp_action(struct dasd_ccw_req *cqr)
+ {
+ struct dasd_device *device;
-- reg = rtl818x_ioread16(priv, &priv->map->PGSELECT) & ~1;
-- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg | 1);
-+ reg = rtl818x_ioread8(priv, &priv->map->PGSELECT) & ~1;
-+ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg | 1);
- /* 0 means asic B-cut, we should use SW 3 wire
- * bit-by-bit banging for radio. 1 means we can use
- * USB specific request to write radio registers */
- priv->asic_rev = rtl818x_ioread8(priv, (u8 *)0xFFFE) & 0x3;
-- rtl818x_iowrite16(priv, &priv->map->PGSELECT, reg);
-+ rtl818x_iowrite8(priv, &priv->map->PGSELECT, reg);
- rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+- device = cqr->device;
++ device = cqr->startdev;
-- rtl8225_write(dev, 0, 0x1B7);
--
-- if (rtl8225_read(dev, 8) != 0x588 || rtl8225_read(dev, 9) != 0x700)
-- priv->rf_init = rtl8225_rf_init;
-- else
-- priv->rf_init = rtl8225z2_rf_init;
--
-- rtl8225_write(dev, 0, 0x0B7);
-+ priv->rf = rtl8187_detect_rf(dev);
+ /* just retry - there is nothing to save ... I got no sense data.... */
+ if (cqr->retries > 0) {
+@@ -93,12 +95,12 @@ dasd_default_erp_action(struct dasd_ccw_req * cqr)
+ "default ERP called (%i retries left)",
+ cqr->retries);
+ cqr->lpm = LPM_ANYPATH;
+- cqr->status = DASD_CQR_QUEUED;
++ cqr->status = DASD_CQR_FILLED;
+ } else {
+ DEV_MESSAGE (KERN_WARNING, device, "%s",
+ "default ERP called (NO retry left)");
+ cqr->status = DASD_CQR_FAILED;
+- cqr->stopclk = get_clock ();
++ cqr->stopclk = get_clock();
+ }
+ return cqr;
+ } /* end dasd_default_erp_action */
+@@ -117,15 +119,12 @@ dasd_default_erp_action(struct dasd_ccw_req * cqr)
+ * RETURN VALUES
+ * cqr pointer to the original CQR
+ */
+-struct dasd_ccw_req *
+-dasd_default_erp_postaction(struct dasd_ccw_req * cqr)
++struct dasd_ccw_req *dasd_default_erp_postaction(struct dasd_ccw_req *cqr)
+ {
+- struct dasd_device *device;
+ int success;
- err = ieee80211_register_hw(dev);
- if (err) {
-@@ -778,8 +766,7 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
+ BUG_ON(cqr->refers == NULL || cqr->function == NULL);
- printk(KERN_INFO "%s: hwaddr %s, rtl8187 V%d + %s\n",
- wiphy_name(dev->wiphy), print_mac(mac, dev->wiphy->perm_addr),
-- priv->asic_rev, priv->rf_init == rtl8225_rf_init ?
-- "rtl8225" : "rtl8225z2");
-+ priv->asic_rev, priv->rf->name);
+- device = cqr->device;
+ success = cqr->status == DASD_CQR_DONE;
- return 0;
+ /* free all ERPs - but NOT the original cqr */
+@@ -133,10 +132,10 @@ dasd_default_erp_postaction(struct dasd_ccw_req * cqr)
+ struct dasd_ccw_req *refers;
-diff --git a/drivers/net/wireless/rtl8187_rtl8225.c b/drivers/net/wireless/rtl8187_rtl8225.c
-index efc4120..b713de1 100644
---- a/drivers/net/wireless/rtl8187_rtl8225.c
-+++ b/drivers/net/wireless/rtl8187_rtl8225.c
-@@ -101,7 +101,7 @@ static void rtl8225_write_8051(struct ieee80211_hw *dev, u8 addr, __le16 data)
- msleep(2);
- }
+ refers = cqr->refers;
+- /* remove the request from the device queue */
+- list_del(&cqr->list);
++ /* remove the request from the block queue */
++ list_del(&cqr->blocklist);
+ /* free the finished erp request */
+- dasd_free_erp_request(cqr, device);
++ dasd_free_erp_request(cqr, cqr->memdev);
+ cqr = refers;
+ }
--void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
-+static void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
+@@ -157,7 +156,7 @@ dasd_log_sense(struct dasd_ccw_req *cqr, struct irb *irb)
{
- struct rtl8187_priv *priv = dev->priv;
-
-@@ -111,7 +111,7 @@ void rtl8225_write(struct ieee80211_hw *dev, u8 addr, u16 data)
- rtl8225_write_bitbang(dev, addr, data);
- }
+ struct dasd_device *device;
--u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
-+static u16 rtl8225_read(struct ieee80211_hw *dev, u8 addr)
+- device = cqr->device;
++ device = cqr->startdev;
+ /* dump sense data */
+ if (device->discipline && device->discipline->dump_sense)
+ device->discipline->dump_sense(device, cqr, irb);
+diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c
+index 1d95822..d13ea05 100644
+--- a/drivers/s390/block/dasd_fba.c
++++ b/drivers/s390/block/dasd_fba.c
+@@ -117,6 +117,7 @@ locate_record(struct ccw1 * ccw, struct LO_fba_data *data, int rw,
+ static int
+ dasd_fba_check_characteristics(struct dasd_device *device)
{
- struct rtl8187_priv *priv = dev->priv;
- u16 reg80, reg82, reg84, out;
-@@ -325,7 +325,7 @@ static void rtl8225_rf_set_tx_power(struct ieee80211_hw *dev, int channel)
- msleep(1);
++ struct dasd_block *block;
+ struct dasd_fba_private *private;
+ struct ccw_device *cdev = device->cdev;
+ void *rdc_data;
+@@ -133,6 +134,16 @@ dasd_fba_check_characteristics(struct dasd_device *device)
+ }
+ device->private = (void *) private;
+ }
++ block = dasd_alloc_block();
++ if (IS_ERR(block)) {
++ DEV_MESSAGE(KERN_WARNING, device, "%s",
++ "could not allocate dasd block structure");
++ kfree(device->private);
++ return PTR_ERR(block);
++ }
++ device->block = block;
++ block->base = device;
++
+ /* Read Device Characteristics */
+ rdc_data = (void *) &(private->rdc_data);
+ rc = dasd_generic_read_dev_chars(device, "FBA ", &rdc_data, 32);
+@@ -155,60 +166,37 @@ dasd_fba_check_characteristics(struct dasd_device *device)
+ return 0;
}
--void rtl8225_rf_init(struct ieee80211_hw *dev)
-+static void rtl8225_rf_init(struct ieee80211_hw *dev)
+-static int
+-dasd_fba_do_analysis(struct dasd_device *device)
++static int dasd_fba_do_analysis(struct dasd_block *block)
{
- struct rtl8187_priv *priv = dev->priv;
- int i;
-@@ -567,7 +567,7 @@ static const u8 rtl8225z2_gain_bg[] = {
- 0x63, 0x15, 0xc5 /* -66dBm */
- };
+ struct dasd_fba_private *private;
+ int sb, rc;
--void rtl8225z2_rf_init(struct ieee80211_hw *dev)
-+static void rtl8225z2_rf_init(struct ieee80211_hw *dev)
- {
- struct rtl8187_priv *priv = dev->priv;
- int i;
-@@ -715,7 +715,7 @@ void rtl8225z2_rf_init(struct ieee80211_hw *dev)
- rtl818x_iowrite32(priv, (__le32 *)0xFF94, 0x3dc00002);
+- private = (struct dasd_fba_private *) device->private;
++ private = (struct dasd_fba_private *) block->base->private;
+ rc = dasd_check_blocksize(private->rdc_data.blk_size);
+ if (rc) {
+- DEV_MESSAGE(KERN_INFO, device, "unknown blocksize %d",
++ DEV_MESSAGE(KERN_INFO, block->base, "unknown blocksize %d",
+ private->rdc_data.blk_size);
+ return rc;
+ }
+- device->blocks = private->rdc_data.blk_bdsa;
+- device->bp_block = private->rdc_data.blk_size;
+- device->s2b_shift = 0; /* bits to shift 512 to get a block */
++ block->blocks = private->rdc_data.blk_bdsa;
++ block->bp_block = private->rdc_data.blk_size;
++ block->s2b_shift = 0; /* bits to shift 512 to get a block */
+ for (sb = 512; sb < private->rdc_data.blk_size; sb = sb << 1)
+- device->s2b_shift++;
++ block->s2b_shift++;
+ return 0;
}
--void rtl8225_rf_stop(struct ieee80211_hw *dev)
-+static void rtl8225_rf_stop(struct ieee80211_hw *dev)
+-static int
+-dasd_fba_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
++static int dasd_fba_fill_geometry(struct dasd_block *block,
++ struct hd_geometry *geo)
{
- u8 reg;
- struct rtl8187_priv *priv = dev->priv;
-@@ -731,15 +731,47 @@ void rtl8225_rf_stop(struct ieee80211_hw *dev)
- rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+- if (dasd_check_blocksize(device->bp_block) != 0)
++ if (dasd_check_blocksize(block->bp_block) != 0)
+ return -EINVAL;
+- geo->cylinders = (device->blocks << device->s2b_shift) >> 10;
++ geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
+ geo->heads = 16;
+- geo->sectors = 128 >> device->s2b_shift;
++ geo->sectors = 128 >> block->s2b_shift;
+ return 0;
}
--void rtl8225_rf_set_channel(struct ieee80211_hw *dev, int channel)
-+static void rtl8225_rf_set_channel(struct ieee80211_hw *dev,
-+ struct ieee80211_conf *conf)
+-static dasd_era_t
+-dasd_fba_examine_error(struct dasd_ccw_req * cqr, struct irb * irb)
+-{
+- struct dasd_device *device;
+- struct ccw_device *cdev;
+-
+- device = (struct dasd_device *) cqr->device;
+- if (irb->scsw.cstat == 0x00 &&
+- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+- return dasd_era_none;
+-
+- cdev = device->cdev;
+- switch (cdev->id.dev_type) {
+- case 0x3370:
+- return dasd_3370_erp_examine(cqr, irb);
+- case 0x9336:
+- return dasd_9336_erp_examine(cqr, irb);
+- default:
+- return dasd_era_recover;
+- }
+-}
+-
+ static dasd_erp_fn_t
+ dasd_fba_erp_action(struct dasd_ccw_req * cqr)
{
- struct rtl8187_priv *priv = dev->priv;
-
-- if (priv->rf_init == rtl8225_rf_init)
-- rtl8225_rf_set_tx_power(dev, channel);
-+ if (priv->rf->init == rtl8225_rf_init)
-+ rtl8225_rf_set_tx_power(dev, conf->channel);
- else
-- rtl8225z2_rf_set_tx_power(dev, channel);
-+ rtl8225z2_rf_set_tx_power(dev, conf->channel);
+@@ -221,13 +209,34 @@ dasd_fba_erp_postaction(struct dasd_ccw_req * cqr)
+ if (cqr->function == dasd_default_erp_action)
+ return dasd_default_erp_postaction;
-- rtl8225_write(dev, 0x7, rtl8225_chan[channel - 1]);
-+ rtl8225_write(dev, 0x7, rtl8225_chan[conf->channel - 1]);
- msleep(10);
+- DEV_MESSAGE(KERN_WARNING, cqr->device, "unknown ERP action %p",
++ DEV_MESSAGE(KERN_WARNING, cqr->startdev, "unknown ERP action %p",
+ cqr->function);
+ return NULL;
}
-+
-+static const struct rtl818x_rf_ops rtl8225_ops = {
-+ .name = "rtl8225",
-+ .init = rtl8225_rf_init,
-+ .stop = rtl8225_rf_stop,
-+ .set_chan = rtl8225_rf_set_channel
-+};
-+
-+static const struct rtl818x_rf_ops rtl8225z2_ops = {
-+ .name = "rtl8225z2",
-+ .init = rtl8225z2_rf_init,
-+ .stop = rtl8225_rf_stop,
-+ .set_chan = rtl8225_rf_set_channel
-+};
-+
-+const struct rtl818x_rf_ops * rtl8187_detect_rf(struct ieee80211_hw *dev)
+
+-static struct dasd_ccw_req *
+-dasd_fba_build_cp(struct dasd_device * device, struct request *req)
++static void dasd_fba_handle_unsolicited_interrupt(struct dasd_device *device,
++ struct irb *irb)
+{
-+ u16 reg8, reg9;
-+
-+ rtl8225_write(dev, 0, 0x1B7);
-+
-+ reg8 = rtl8225_read(dev, 8);
-+ reg9 = rtl8225_read(dev, 9);
++ char mask;
+
-+ rtl8225_write(dev, 0, 0x0B7);
++ /* first of all check for state change pending interrupt */
++ mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
++ if ((irb->scsw.dstat & mask) == mask) {
++ dasd_generic_handle_state_change(device);
++ return;
++ }
+
-+ if (reg8 != 0x588 || reg9 != 0x700)
-+ return &rtl8225_ops;
++ /* check for unsolicited interrupts */
++ DEV_MESSAGE(KERN_DEBUG, device, "%s",
++ "unsolicited interrupt received");
++ device->discipline->dump_sense(device, NULL, irb);
++ dasd_schedule_device_bh(device);
++ return;
++};
+
-+ return &rtl8225z2_ops;
-+}
-diff --git a/drivers/net/wireless/rtl8187_rtl8225.h b/drivers/net/wireless/rtl8187_rtl8225.h
-index 798ba4a..d39ed02 100644
---- a/drivers/net/wireless/rtl8187_rtl8225.h
-+++ b/drivers/net/wireless/rtl8187_rtl8225.h
-@@ -20,14 +20,7 @@
- #define RTL8225_ANAPARAM_OFF 0xa00beb59
- #define RTL8225_ANAPARAM2_OFF 0x840dec11
++static struct dasd_ccw_req *dasd_fba_build_cp(struct dasd_device * memdev,
++ struct dasd_block *block,
++ struct request *req)
+ {
+ struct dasd_fba_private *private;
+ unsigned long *idaws;
+@@ -242,17 +251,17 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
+ unsigned int blksize, off;
+ unsigned char cmd;
--void rtl8225_write(struct ieee80211_hw *, u8 addr, u16 data);
--u16 rtl8225_read(struct ieee80211_hw *, u8 addr);
--
--void rtl8225_rf_init(struct ieee80211_hw *);
--void rtl8225z2_rf_init(struct ieee80211_hw *);
--void rtl8225_rf_stop(struct ieee80211_hw *);
--void rtl8225_rf_set_channel(struct ieee80211_hw *, int);
--
-+const struct rtl818x_rf_ops * rtl8187_detect_rf(struct ieee80211_hw *);
+- private = (struct dasd_fba_private *) device->private;
++ private = (struct dasd_fba_private *) block->base->private;
+ if (rq_data_dir(req) == READ) {
+ cmd = DASD_FBA_CCW_READ;
+ } else if (rq_data_dir(req) == WRITE) {
+ cmd = DASD_FBA_CCW_WRITE;
+ } else
+ return ERR_PTR(-EINVAL);
+- blksize = device->bp_block;
++ blksize = block->bp_block;
+ /* Calculate record id of first and last block. */
+- first_rec = req->sector >> device->s2b_shift;
+- last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
++ first_rec = req->sector >> block->s2b_shift;
++ last_rec = (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
+ /* Check struct bio and count the number of blocks for the request. */
+ count = 0;
+ cidaw = 0;
+@@ -260,7 +269,7 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
+ if (bv->bv_len & (blksize - 1))
+ /* Fba can only do full blocks. */
+ return ERR_PTR(-EINVAL);
+- count += bv->bv_len >> (device->s2b_shift + 9);
++ count += bv->bv_len >> (block->s2b_shift + 9);
+ #if defined(CONFIG_64BIT)
+ if (idal_is_needed (page_address(bv->bv_page), bv->bv_len))
+ cidaw += bv->bv_len / blksize;
+@@ -284,13 +293,13 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
+ }
+ /* Allocate the ccw request. */
+ cqr = dasd_smalloc_request(dasd_fba_discipline.name,
+- cplength, datasize, device);
++ cplength, datasize, memdev);
+ if (IS_ERR(cqr))
+ return cqr;
+ ccw = cqr->cpaddr;
+ /* First ccw is define extent. */
+ define_extent(ccw++, cqr->data, rq_data_dir(req),
+- device->bp_block, req->sector, req->nr_sectors);
++ block->bp_block, req->sector, req->nr_sectors);
+ /* Build locate_record + read/write ccws. */
+ idaws = (unsigned long *) (cqr->data + sizeof(struct DE_fba_data));
+ LO_data = (struct LO_fba_data *) (idaws + cidaw);
+@@ -326,7 +335,7 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
+ ccw[-1].flags |= CCW_FLAG_CC;
+ }
+ ccw->cmd_code = cmd;
+- ccw->count = device->bp_block;
++ ccw->count = block->bp_block;
+ if (idal_is_needed(dst, blksize)) {
+ ccw->cda = (__u32)(addr_t) idaws;
+ ccw->flags = CCW_FLAG_IDA;
+@@ -342,7 +351,9 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
+ }
+ if (req->cmd_flags & REQ_FAILFAST)
+ set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
+- cqr->device = device;
++ cqr->startdev = memdev;
++ cqr->memdev = memdev;
++ cqr->block = block;
+ cqr->expires = 5 * 60 * HZ; /* 5 minutes */
+ cqr->retries = 32;
+ cqr->buildclk = get_clock();
+@@ -363,8 +374,8 @@ dasd_fba_free_cp(struct dasd_ccw_req *cqr, struct request *req)
- static inline void rtl8225_write_phy_ofdm(struct ieee80211_hw *dev,
- u8 addr, u32 data)
-diff --git a/drivers/net/wireless/rtl818x.h b/drivers/net/wireless/rtl818x.h
-index 880d4be..1e7d6f8 100644
---- a/drivers/net/wireless/rtl818x.h
-+++ b/drivers/net/wireless/rtl818x.h
-@@ -58,13 +58,17 @@ struct rtl818x_csr {
- #define RTL818X_INT_TX_FO (1 << 15)
- __le32 TX_CONF;
- #define RTL818X_TX_CONF_LOOPBACK_MAC (1 << 17)
-+#define RTL818X_TX_CONF_LOOPBACK_CONT (3 << 17)
- #define RTL818X_TX_CONF_NO_ICV (1 << 19)
- #define RTL818X_TX_CONF_DISCW (1 << 20)
-+#define RTL818X_TX_CONF_SAT_HWPLCP (1 << 24)
- #define RTL818X_TX_CONF_R8180_ABCD (2 << 25)
- #define RTL818X_TX_CONF_R8180_F (3 << 25)
- #define RTL818X_TX_CONF_R8185_ABC (4 << 25)
- #define RTL818X_TX_CONF_R8185_D (5 << 25)
- #define RTL818X_TX_CONF_HWVER_MASK (7 << 25)
-+#define RTL818X_TX_CONF_PROBE_DTS (1 << 29)
-+#define RTL818X_TX_CONF_HW_SEQNUM (1 << 30)
- #define RTL818X_TX_CONF_CW_MIN (1 << 31)
- __le32 RX_CONF;
- #define RTL818X_RX_CONF_MONITOR (1 << 0)
-@@ -75,8 +79,12 @@ struct rtl818x_csr {
- #define RTL818X_RX_CONF_DATA (1 << 18)
- #define RTL818X_RX_CONF_CTRL (1 << 19)
- #define RTL818X_RX_CONF_MGMT (1 << 20)
-+#define RTL818X_RX_CONF_ADDR3 (1 << 21)
-+#define RTL818X_RX_CONF_PM (1 << 22)
- #define RTL818X_RX_CONF_BSSID (1 << 23)
- #define RTL818X_RX_CONF_RX_AUTORESETPHY (1 << 28)
-+#define RTL818X_RX_CONF_CSDM1 (1 << 29)
-+#define RTL818X_RX_CONF_CSDM2 (1 << 30)
- #define RTL818X_RX_CONF_ONLYERLPKT (1 << 31)
- __le32 INT_TIMEOUT;
- __le32 TBDA;
-@@ -92,6 +100,7 @@ struct rtl818x_csr {
- u8 CONFIG0;
- u8 CONFIG1;
- u8 CONFIG2;
-+#define RTL818X_CONFIG2_ANTENNA_DIV (1 << 6)
- __le32 ANAPARAM;
- u8 MSR;
- #define RTL818X_MSR_NO_LINK (0 << 2)
-@@ -104,14 +113,17 @@ struct rtl818x_csr {
- #define RTL818X_CONFIG4_VCOOFF (1 << 7)
- u8 TESTR;
- u8 reserved_9[2];
-- __le16 PGSELECT;
-+ u8 PGSELECT;
-+ u8 SECURITY;
- __le32 ANAPARAM2;
- u8 reserved_10[12];
- __le16 BEACON_INTERVAL;
- __le16 ATIM_WND;
- __le16 BEACON_INTERVAL_TIME;
- __le16 ATIMTR_INTERVAL;
-- u8 reserved_11[4];
-+ u8 PHY_DELAY;
-+ u8 CARRIER_SENSE_COUNTER;
-+ u8 reserved_11[2];
- u8 PHY[4];
- __le16 RFPinsOutput;
- __le16 RFPinsEnable;
-@@ -149,11 +161,20 @@ struct rtl818x_csr {
- u8 RETRY_CTR;
- u8 reserved_18[5];
- __le32 RDSAR;
-- u8 reserved_19[18];
-- u16 TALLY_CNT;
-+ u8 reserved_19[12];
-+ __le16 FEMR;
-+ u8 reserved_20[4];
-+ __le16 TALLY_CNT;
- u8 TALLY_SEL;
- } __attribute__((packed));
+ if (!dasd_page_cache)
+ goto out;
+- private = (struct dasd_fba_private *) cqr->device->private;
+- blksize = cqr->device->bp_block;
++ private = (struct dasd_fba_private *) cqr->block->base->private;
++ blksize = cqr->block->bp_block;
+ ccw = cqr->cpaddr;
+ /* Skip over define extent & locate record. */
+ ccw++;
+@@ -394,10 +405,15 @@ dasd_fba_free_cp(struct dasd_ccw_req *cqr, struct request *req)
+ }
+ out:
+ status = cqr->status == DASD_CQR_DONE;
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ return status;
+ }
-+struct rtl818x_rf_ops {
-+ char *name;
-+ void (*init)(struct ieee80211_hw *);
-+ void (*stop)(struct ieee80211_hw *);
-+ void (*set_chan)(struct ieee80211_hw *, struct ieee80211_conf *);
++static void dasd_fba_handle_terminated_request(struct dasd_ccw_req *cqr)
++{
++ cqr->status = DASD_CQR_FILLED;
+};
+
- static const struct ieee80211_rate rtl818x_rates[] = {
- { .rate = 10,
- .val = 0,
-diff --git a/drivers/net/wireless/wavelan.c b/drivers/net/wireless/wavelan.c
-index a1f8a16..03384a4 100644
---- a/drivers/net/wireless/wavelan.c
-+++ b/drivers/net/wireless/wavelan.c
-@@ -49,27 +49,6 @@ static int __init wv_psa_to_irq(u8 irqval)
- return -1;
- }
-
--#ifdef STRUCT_CHECK
--/*------------------------------------------------------------------*/
--/*
-- * Sanity routine to verify the sizes of the various WaveLAN interface
-- * structures.
-- */
--static char *wv_struct_check(void)
--{
--#define SC(t,s,n) if (sizeof(t) != s) return(n);
--
-- SC(psa_t, PSA_SIZE, "psa_t");
-- SC(mmw_t, MMW_SIZE, "mmw_t");
-- SC(mmr_t, MMR_SIZE, "mmr_t");
-- SC(ha_t, HA_SIZE, "ha_t");
--
--#undef SC
--
-- return ((char *) NULL);
--} /* wv_struct_check */
--#endif /* STRUCT_CHECK */
--
- /********************* HOST ADAPTER SUBROUTINES *********************/
+ static int
+ dasd_fba_fill_info(struct dasd_device * device,
+ struct dasd_information2_t * info)
+@@ -546,9 +562,10 @@ static struct dasd_discipline dasd_fba_discipline = {
+ .fill_geometry = dasd_fba_fill_geometry,
+ .start_IO = dasd_start_IO,
+ .term_IO = dasd_term_IO,
+- .examine_error = dasd_fba_examine_error,
++ .handle_terminated_request = dasd_fba_handle_terminated_request,
+ .erp_action = dasd_fba_erp_action,
+ .erp_postaction = dasd_fba_erp_postaction,
++ .handle_unsolicited_interrupt = dasd_fba_handle_unsolicited_interrupt,
+ .build_cp = dasd_fba_build_cp,
+ .free_cp = dasd_fba_free_cp,
+ .dump_sense = dasd_fba_dump_sense,
+diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
+index 47ba446..aee6565 100644
+--- a/drivers/s390/block/dasd_genhd.c
++++ b/drivers/s390/block/dasd_genhd.c
+@@ -25,14 +25,15 @@
/*
- * Useful subroutines to manage the WaveLAN ISA interface
-@@ -3740,7 +3719,7 @@ static int wv_check_ioaddr(unsigned long ioaddr, u8 * mac)
- * non-NCR/AT&T/Lucent ISA card. See wavelan.p.h for detail on
- * how to configure your card.
- */
-- for (i = 0; i < (sizeof(MAC_ADDRESSES) / sizeof(char) / 3); i++)
-+ for (i = 0; i < ARRAY_SIZE(MAC_ADDRESSES); i++)
- if ((mac[0] == MAC_ADDRESSES[i][0]) &&
- (mac[1] == MAC_ADDRESSES[i][1]) &&
- (mac[2] == MAC_ADDRESSES[i][2]))
-@@ -4215,14 +4194,11 @@ struct net_device * __init wavelan_probe(int unit)
- int i;
- int r = 0;
-
--#ifdef STRUCT_CHECK
-- if (wv_struct_check() != (char *) NULL) {
-- printk(KERN_WARNING
-- "%s: wavelan_probe(): structure/compiler botch: \"%s\"\n",
-- dev->name, wv_struct_check());
-- return -ENODEV;
-- }
--#endif /* STRUCT_CHECK */
-+ /* compile-time check the sizes of structures */
-+ BUILD_BUG_ON(sizeof(psa_t) != PSA_SIZE);
-+ BUILD_BUG_ON(sizeof(mmw_t) != MMW_SIZE);
-+ BUILD_BUG_ON(sizeof(mmr_t) != MMR_SIZE);
-+ BUILD_BUG_ON(sizeof(ha_t) != HA_SIZE);
-
- dev = alloc_etherdev(sizeof(net_local));
- if (!dev)
-diff --git a/drivers/net/wireless/wavelan.p.h b/drivers/net/wireless/wavelan.p.h
-index fe24281..b33ac47 100644
---- a/drivers/net/wireless/wavelan.p.h
-+++ b/drivers/net/wireless/wavelan.p.h
-@@ -400,7 +400,6 @@
- */
- #undef SET_PSA_CRC /* Calculate and set the CRC on PSA (slower) */
- #define USE_PSA_CONFIG /* Use info from the PSA. */
--#undef STRUCT_CHECK /* Verify padding of structures. */
- #undef EEPROM_IS_PROTECTED /* doesn't seem to be necessary */
- #define MULTICAST_AVOID /* Avoid extra multicast (I'm sceptical). */
- #undef SET_MAC_ADDRESS /* Experimental */
-diff --git a/drivers/net/wireless/wavelan_cs.c b/drivers/net/wireless/wavelan_cs.c
-index 577c647..c2037b2 100644
---- a/drivers/net/wireless/wavelan_cs.c
-+++ b/drivers/net/wireless/wavelan_cs.c
-@@ -71,27 +71,6 @@ static void wv_nwid_filter(unsigned char mode, net_local *lp);
- * (wavelan modem or i82593)
+ * Allocate and register gendisk structure for device.
*/
-
--#ifdef STRUCT_CHECK
--/*------------------------------------------------------------------*/
--/*
-- * Sanity routine to verify the sizes of the various WaveLAN interface
-- * structures.
-- */
--static char *
--wv_structuct_check(void)
--{
--#define SC(t,s,n) if (sizeof(t) != s) return(n);
--
-- SC(psa_t, PSA_SIZE, "psa_t");
-- SC(mmw_t, MMW_SIZE, "mmw_t");
-- SC(mmr_t, MMR_SIZE, "mmr_t");
--
--#undef SC
--
-- return((char *) NULL);
--} /* wv_structuct_check */
--#endif /* STRUCT_CHECK */
--
- /******************* MODEM MANAGEMENT SUBROUTINES *******************/
- /*
- * Useful subroutines to manage the modem of the wavelan
-@@ -3223,14 +3202,14 @@ wv_mmc_init(struct net_device * dev)
- * non-NCR/AT&T/Lucent PCMCIA cards, see wavelan_cs.h for detail on
- * how to configure your card...
- */
-- for(i = 0; i < (sizeof(MAC_ADDRESSES) / sizeof(char) / 3); i++)
-- if((psa.psa_univ_mac_addr[0] == MAC_ADDRESSES[i][0]) &&
-- (psa.psa_univ_mac_addr[1] == MAC_ADDRESSES[i][1]) &&
-- (psa.psa_univ_mac_addr[2] == MAC_ADDRESSES[i][2]))
-+ for (i = 0; i < ARRAY_SIZE(MAC_ADDRESSES); i++)
-+ if ((psa.psa_univ_mac_addr[0] == MAC_ADDRESSES[i][0]) &&
-+ (psa.psa_univ_mac_addr[1] == MAC_ADDRESSES[i][1]) &&
-+ (psa.psa_univ_mac_addr[2] == MAC_ADDRESSES[i][2]))
- break;
-
- /* If we have not found it... */
-- if(i == (sizeof(MAC_ADDRESSES) / sizeof(char) / 3))
-+ if (i == ARRAY_SIZE(MAC_ADDRESSES))
- {
- #ifdef DEBUG_CONFIG_ERRORS
- printk(KERN_WARNING "%s: wv_mmc_init(): Invalid MAC address: %02X:%02X:%02X:...\n",
-@@ -3794,14 +3773,10 @@ wv_hw_config(struct net_device * dev)
- printk(KERN_DEBUG "%s: ->wv_hw_config()\n", dev->name);
- #endif
-
--#ifdef STRUCT_CHECK
-- if(wv_structuct_check() != (char *) NULL)
-- {
-- printk(KERN_WARNING "%s: wv_hw_config: structure/compiler botch: \"%s\"\n",
-- dev->name, wv_structuct_check());
-- return FALSE;
-- }
--#endif /* STRUCT_CHECK == 1 */
-+ /* compile-time check the sizes of structures */
-+ BUILD_BUG_ON(sizeof(psa_t) != PSA_SIZE);
-+ BUILD_BUG_ON(sizeof(mmw_t) != MMW_SIZE);
-+ BUILD_BUG_ON(sizeof(mmr_t) != MMR_SIZE);
-
- /* Reset the pcmcia interface */
- if(wv_pcmcia_reset(dev) == FALSE)
-diff --git a/drivers/net/wireless/wavelan_cs.p.h b/drivers/net/wireless/wavelan_cs.p.h
-index 4b9de00..33dd970 100644
---- a/drivers/net/wireless/wavelan_cs.p.h
-+++ b/drivers/net/wireless/wavelan_cs.p.h
-@@ -459,7 +459,6 @@
- #undef WAVELAN_ROAMING_EXT /* Enable roaming wireless extensions */
- #undef SET_PSA_CRC /* Set the CRC in PSA (slower) */
- #define USE_PSA_CONFIG /* Use info from the PSA */
--#undef STRUCT_CHECK /* Verify padding of structures */
- #undef EEPROM_IS_PROTECTED /* Doesn't seem to be necessary */
- #define MULTICAST_AVOID /* Avoid extra multicast (I'm sceptical) */
- #undef SET_MAC_ADDRESS /* Experimental */
-@@ -548,7 +547,7 @@ typedef struct wavepoint_beacon
- spec_id2, /* Unused */
- pdu_type, /* Unused */
- seq; /* WavePoint beacon sequence number */
-- unsigned short domain_id, /* WavePoint Domain ID */
-+ __be16 domain_id, /* WavePoint Domain ID */
- nwid; /* WavePoint NWID */
- } wavepoint_beacon;
-
-diff --git a/drivers/net/wireless/zd1211rw/Kconfig b/drivers/net/wireless/zd1211rw/Kconfig
-index d1ab24a..74b31ea 100644
---- a/drivers/net/wireless/zd1211rw/Kconfig
-+++ b/drivers/net/wireless/zd1211rw/Kconfig
-@@ -1,14 +1,13 @@
- config ZD1211RW
- tristate "ZyDAS ZD1211/ZD1211B USB-wireless support"
-- depends on USB && IEEE80211_SOFTMAC && WLAN_80211 && EXPERIMENTAL
-- select WIRELESS_EXT
-+ depends on USB && MAC80211 && WLAN_80211 && EXPERIMENTAL
- select FW_LOADER
- ---help---
- This is an experimental driver for the ZyDAS ZD1211/ZD1211B wireless
- chip, present in many USB-wireless adapters.
-
-- Device firmware is required alongside this driver. You can download the
-- firmware distribution from http://zd1211.ath.cx/get-firmware
-+ Device firmware is required alongside this driver. You can download
-+ the firmware distribution from http://zd1211.ath.cx/get-firmware
-
- config ZD1211RW_DEBUG
- bool "ZyDAS ZD1211 debugging"
-diff --git a/drivers/net/wireless/zd1211rw/Makefile b/drivers/net/wireless/zd1211rw/Makefile
-index 7a2f2a9..cc36126 100644
---- a/drivers/net/wireless/zd1211rw/Makefile
-+++ b/drivers/net/wireless/zd1211rw/Makefile
-@@ -1,7 +1,6 @@
- obj-$(CONFIG_ZD1211RW) += zd1211rw.o
-
--zd1211rw-objs := zd_chip.o zd_ieee80211.o \
-- zd_mac.o zd_netdev.o \
-+zd1211rw-objs := zd_chip.o zd_ieee80211.o zd_mac.o \
- zd_rf_al2230.o zd_rf_rf2959.o \
- zd_rf_al7230b.o zd_rf_uw2453.o \
- zd_rf.o zd_usb.o
-diff --git a/drivers/net/wireless/zd1211rw/zd_chip.c b/drivers/net/wireless/zd1211rw/zd_chip.c
-index f831b68..99e5b03 100644
---- a/drivers/net/wireless/zd1211rw/zd_chip.c
-+++ b/drivers/net/wireless/zd1211rw/zd_chip.c
-@@ -1,4 +1,7 @@
--/* zd_chip.c
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -30,12 +33,12 @@
- #include "zd_rf.h"
-
- void zd_chip_init(struct zd_chip *chip,
-- struct net_device *netdev,
-+ struct ieee80211_hw *hw,
- struct usb_interface *intf)
+-int
+-dasd_gendisk_alloc(struct dasd_device *device)
++int dasd_gendisk_alloc(struct dasd_block *block)
{
- memset(chip, 0, sizeof(*chip));
- mutex_init(&chip->mutex);
-- zd_usb_init(&chip->usb, netdev, intf);
-+ zd_usb_init(&chip->usb, hw, intf);
- zd_rf_init(&chip->rf);
- }
+ struct gendisk *gdp;
++ struct dasd_device *base;
+ int len;
-@@ -50,7 +53,7 @@ void zd_chip_clear(struct zd_chip *chip)
+ /* Make sure the minor for this device exists. */
+- if (device->devindex >= DASD_PER_MAJOR)
++ base = block->base;
++ if (base->devindex >= DASD_PER_MAJOR)
+ return -EBUSY;
- static int scnprint_mac_oui(struct zd_chip *chip, char *buffer, size_t size)
- {
-- u8 *addr = zd_usb_to_netdev(&chip->usb)->dev_addr;
-+ u8 *addr = zd_mac_get_perm_addr(zd_chip_to_mac(chip));
- return scnprintf(buffer, size, "%02x-%02x-%02x",
- addr[0], addr[1], addr[2]);
- }
-@@ -378,15 +381,18 @@ int zd_write_mac_addr(struct zd_chip *chip, const u8 *mac_addr)
- };
- DECLARE_MAC_BUF(mac);
+ gdp = alloc_disk(1 << DASD_PARTN_BITS);
+@@ -41,9 +42,9 @@ dasd_gendisk_alloc(struct dasd_device *device)
-- reqs[0].value = (mac_addr[3] << 24)
-- | (mac_addr[2] << 16)
-- | (mac_addr[1] << 8)
-- | mac_addr[0];
-- reqs[1].value = (mac_addr[5] << 8)
-- | mac_addr[4];
--
-- dev_dbg_f(zd_chip_dev(chip),
-- "mac addr %s\n", print_mac(mac, mac_addr));
-+ if (mac_addr) {
-+ reqs[0].value = (mac_addr[3] << 24)
-+ | (mac_addr[2] << 16)
-+ | (mac_addr[1] << 8)
-+ | mac_addr[0];
-+ reqs[1].value = (mac_addr[5] << 8)
-+ | mac_addr[4];
-+ dev_dbg_f(zd_chip_dev(chip),
-+ "mac addr %s\n", print_mac(mac, mac_addr));
-+ } else {
-+ dev_dbg_f(zd_chip_dev(chip), "set NULL mac\n");
-+ }
+ /* Initialize gendisk structure. */
+ gdp->major = DASD_MAJOR;
+- gdp->first_minor = device->devindex << DASD_PARTN_BITS;
++ gdp->first_minor = base->devindex << DASD_PARTN_BITS;
+ gdp->fops = &dasd_device_operations;
+- gdp->driverfs_dev = &device->cdev->dev;
++ gdp->driverfs_dev = &base->cdev->dev;
- mutex_lock(&chip->mutex);
- r = zd_iowrite32a_locked(chip, reqs, ARRAY_SIZE(reqs));
-@@ -980,7 +986,7 @@ static int print_fw_version(struct zd_chip *chip)
+ /*
+ * Set device name.
+@@ -53,53 +54,51 @@ dasd_gendisk_alloc(struct dasd_device *device)
+ * dasdaaaa - dasdzzzz : 456976 devices, added up = 475252
+ */
+ len = sprintf(gdp->disk_name, "dasd");
+- if (device->devindex > 25) {
+- if (device->devindex > 701) {
+- if (device->devindex > 18277)
++ if (base->devindex > 25) {
++ if (base->devindex > 701) {
++ if (base->devindex > 18277)
+ len += sprintf(gdp->disk_name + len, "%c",
+- 'a'+(((device->devindex-18278)
++ 'a'+(((base->devindex-18278)
+ /17576)%26));
+ len += sprintf(gdp->disk_name + len, "%c",
+- 'a'+(((device->devindex-702)/676)%26));
++ 'a'+(((base->devindex-702)/676)%26));
+ }
+ len += sprintf(gdp->disk_name + len, "%c",
+- 'a'+(((device->devindex-26)/26)%26));
++ 'a'+(((base->devindex-26)/26)%26));
+ }
+- len += sprintf(gdp->disk_name + len, "%c", 'a'+(device->devindex%26));
++ len += sprintf(gdp->disk_name + len, "%c", 'a'+(base->devindex%26));
+
+- if (device->features & DASD_FEATURE_READONLY)
++ if (block->base->features & DASD_FEATURE_READONLY)
+ set_disk_ro(gdp, 1);
+- gdp->private_data = device;
+- gdp->queue = device->request_queue;
+- device->gdp = gdp;
+- set_capacity(device->gdp, 0);
+- add_disk(device->gdp);
++ gdp->private_data = block;
++ gdp->queue = block->request_queue;
++ block->gdp = gdp;
++ set_capacity(block->gdp, 0);
++ add_disk(block->gdp);
return 0;
}
--static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
-+static int set_mandatory_rates(struct zd_chip *chip, int mode)
+ /*
+ * Unregister and free gendisk structure for device.
+ */
+-void
+-dasd_gendisk_free(struct dasd_device *device)
++void dasd_gendisk_free(struct dasd_block *block)
{
- u32 rates;
- ZD_ASSERT(mutex_is_locked(&chip->mutex));
-@@ -988,11 +994,11 @@ static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
- * that the device is supporting. Until further notice we should try
- * to support 802.11g also for full speed USB.
- */
-- switch (std) {
-- case IEEE80211B:
-+ switch (mode) {
-+ case MODE_IEEE80211B:
- rates = CR_RATE_1M|CR_RATE_2M|CR_RATE_5_5M|CR_RATE_11M;
- break;
-- case IEEE80211G:
-+ case MODE_IEEE80211G:
- rates = CR_RATE_1M|CR_RATE_2M|CR_RATE_5_5M|CR_RATE_11M|
- CR_RATE_6M|CR_RATE_12M|CR_RATE_24M;
- break;
-@@ -1003,24 +1009,17 @@ static int set_mandatory_rates(struct zd_chip *chip, enum ieee80211_std std)
+- if (device->gdp) {
+- del_gendisk(device->gdp);
+- device->gdp->queue = NULL;
+- put_disk(device->gdp);
+- device->gdp = NULL;
++ if (block->gdp) {
++ del_gendisk(block->gdp);
++ block->gdp->queue = NULL;
++ put_disk(block->gdp);
++ block->gdp = NULL;
+ }
}
- int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip,
-- u8 rts_rate, int preamble)
-+ int preamble)
+ /*
+ * Trigger a partition detection.
+ */
+-int
+-dasd_scan_partitions(struct dasd_device * device)
++int dasd_scan_partitions(struct dasd_block *block)
{
-- int rts_mod = ZD_RX_CCK;
- u32 value = 0;
-
-- /* Modulation bit */
-- if (ZD_MODULATION_TYPE(rts_rate) == ZD_OFDM)
-- rts_mod = ZD_RX_OFDM;
--
-- dev_dbg_f(zd_chip_dev(chip), "rts_rate=%x preamble=%x\n",
-- rts_rate, preamble);
--
-- value |= ZD_PURE_RATE(rts_rate) << RTSCTS_SH_RTS_RATE;
-- value |= rts_mod << RTSCTS_SH_RTS_MOD_TYPE;
-+ dev_dbg_f(zd_chip_dev(chip), "preamble=%x\n", preamble);
- value |= preamble << RTSCTS_SH_RTS_PMB_TYPE;
- value |= preamble << RTSCTS_SH_CTS_PMB_TYPE;
-
-- /* We always send 11M self-CTS messages, like the vendor driver. */
-+ /* We always send 11M RTS/self-CTS messages, like the vendor driver. */
-+ value |= ZD_PURE_RATE(ZD_CCK_RATE_11M) << RTSCTS_SH_RTS_RATE;
-+ value |= ZD_RX_CCK << RTSCTS_SH_RTS_MOD_TYPE;
- value |= ZD_PURE_RATE(ZD_CCK_RATE_11M) << RTSCTS_SH_CTS_RATE;
- value |= ZD_RX_CCK << RTSCTS_SH_CTS_MOD_TYPE;
+ struct block_device *bdev;
-@@ -1109,7 +1108,7 @@ int zd_chip_init_hw(struct zd_chip *chip)
- * It might be discussed, whether we should suppport pure b mode for
- * full speed USB.
+- bdev = bdget_disk(device->gdp, 0);
++ bdev = bdget_disk(block->gdp, 0);
+ if (!bdev || blkdev_get(bdev, FMODE_READ, 1) < 0)
+ return -ENODEV;
+ /*
+@@ -117,7 +116,7 @@ dasd_scan_partitions(struct dasd_device * device)
+ * is why the assignment to device->bdev is done AFTER
+ * the BLKRRPART ioctl.
*/
-- r = set_mandatory_rates(chip, IEEE80211G);
-+ r = set_mandatory_rates(chip, MODE_IEEE80211G);
- if (r)
- goto out;
- /* Disabling interrupts is certainly a smart thing here.
-@@ -1320,12 +1319,17 @@ out:
- return r;
- }
-
--int zd_chip_set_basic_rates_locked(struct zd_chip *chip, u16 cr_rates)
-+int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates)
- {
-- ZD_ASSERT((cr_rates & ~(CR_RATES_80211B | CR_RATES_80211G)) == 0);
-- dev_dbg_f(zd_chip_dev(chip), "%x\n", cr_rates);
-+ int r;
-+
-+ if (cr_rates & ~(CR_RATES_80211B|CR_RATES_80211G))
-+ return -EINVAL;
-
-- return zd_iowrite32_locked(chip, cr_rates, CR_BASIC_RATE_TBL);
-+ mutex_lock(&chip->mutex);
-+ r = zd_iowrite32_locked(chip, cr_rates, CR_BASIC_RATE_TBL);
-+ mutex_unlock(&chip->mutex);
-+ return r;
- }
-
- static int ofdm_qual_db(u8 status_quality, u8 zd_rate, unsigned int size)
-@@ -1468,56 +1472,44 @@ u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
- {
- return (status->frame_status&ZD_RX_OFDM) ?
- ofdm_qual_percent(status->signal_quality_ofdm,
-- zd_rate_from_ofdm_plcp_header(rx_frame),
-+ zd_rate_from_ofdm_plcp_header(rx_frame),
- size) :
- cck_qual_percent(status->signal_quality_cck);
+- device->bdev = bdev;
++ block->bdev = bdev;
+ return 0;
}
--u8 zd_rx_strength_percent(u8 rssi)
--{
-- int r = (rssi*100) / 41;
-- if (r > 100)
-- r = 100;
-- return (u8) r;
--}
--
--u16 zd_rx_rate(const void *rx_frame, const struct rx_status *status)
-+/**
-+ * zd_rx_rate - report zd-rate
-+ * @rx_frame - received frame
-+ * @rx_status - rx_status as given by the device
-+ *
-+ * This function converts the rate as encoded in the received packet to the
-+ * zd-rate, we are using on other places in the driver.
-+ */
-+u8 zd_rx_rate(const void *rx_frame, const struct rx_status *status)
+@@ -125,8 +124,7 @@ dasd_scan_partitions(struct dasd_device * device)
+ * Remove all inodes in the system for a device, delete the
+ * partitions and make device unusable by setting its size to zero.
+ */
+-void
+-dasd_destroy_partitions(struct dasd_device * device)
++void dasd_destroy_partitions(struct dasd_block *block)
{
-- static const u16 ofdm_rates[] = {
-- [ZD_OFDM_PLCP_RATE_6M] = 60,
-- [ZD_OFDM_PLCP_RATE_9M] = 90,
-- [ZD_OFDM_PLCP_RATE_12M] = 120,
-- [ZD_OFDM_PLCP_RATE_18M] = 180,
-- [ZD_OFDM_PLCP_RATE_24M] = 240,
-- [ZD_OFDM_PLCP_RATE_36M] = 360,
-- [ZD_OFDM_PLCP_RATE_48M] = 480,
-- [ZD_OFDM_PLCP_RATE_54M] = 540,
-- };
-- u16 rate;
-+ u8 zd_rate;
- if (status->frame_status & ZD_RX_OFDM) {
-- /* Deals with PLCP OFDM rate (not zd_rates) */
-- u8 ofdm_rate = zd_ofdm_plcp_header_rate(rx_frame);
-- rate = ofdm_rates[ofdm_rate & 0xf];
-+ zd_rate = zd_rate_from_ofdm_plcp_header(rx_frame);
- } else {
- switch (zd_cck_plcp_header_signal(rx_frame)) {
- case ZD_CCK_PLCP_SIGNAL_1M:
-- rate = 10;
-+ zd_rate = ZD_CCK_RATE_1M;
- break;
- case ZD_CCK_PLCP_SIGNAL_2M:
-- rate = 20;
-+ zd_rate = ZD_CCK_RATE_2M;
- break;
- case ZD_CCK_PLCP_SIGNAL_5M5:
-- rate = 55;
-+ zd_rate = ZD_CCK_RATE_5_5M;
- break;
- case ZD_CCK_PLCP_SIGNAL_11M:
-- rate = 110;
-+ zd_rate = ZD_CCK_RATE_11M;
- break;
- default:
-- rate = 0;
-+ zd_rate = 0;
- }
- }
+ /* The two structs have 168/176 byte on 31/64 bit. */
+ struct blkpg_partition bpart;
+@@ -137,8 +135,8 @@ dasd_destroy_partitions(struct dasd_device * device)
+ * Get the bdev pointer from the device structure and clear
+ * device->bdev to lower the offline open_count limit again.
+ */
+- bdev = device->bdev;
+- device->bdev = NULL;
++ bdev = block->bdev;
++ block->bdev = NULL;
-- return rate;
-+ return zd_rate;
- }
+ /*
+ * See fs/partition/check.c:delete_partition
+@@ -149,17 +147,16 @@ dasd_destroy_partitions(struct dasd_device * device)
+ memset(&barg, 0, sizeof(struct blkpg_ioctl_arg));
+ barg.data = (void __force __user *) &bpart;
+ barg.op = BLKPG_DEL_PARTITION;
+- for (bpart.pno = device->gdp->minors - 1; bpart.pno > 0; bpart.pno--)
++ for (bpart.pno = block->gdp->minors - 1; bpart.pno > 0; bpart.pno--)
+ ioctl_by_bdev(bdev, BLKPG, (unsigned long) &barg);
- int zd_chip_switch_radio_on(struct zd_chip *chip)
-@@ -1557,20 +1549,22 @@ void zd_chip_disable_int(struct zd_chip *chip)
- mutex_unlock(&chip->mutex);
+- invalidate_partition(device->gdp, 0);
++ invalidate_partition(block->gdp, 0);
+ /* Matching blkdev_put to the blkdev_get in dasd_scan_partitions. */
+ blkdev_put(bdev);
+- set_capacity(device->gdp, 0);
++ set_capacity(block->gdp, 0);
}
--int zd_chip_enable_rx(struct zd_chip *chip)
-+int zd_chip_enable_rxtx(struct zd_chip *chip)
+-int
+-dasd_gendisk_init(void)
++int dasd_gendisk_init(void)
{
- int r;
+ int rc;
- mutex_lock(&chip->mutex);
-+ zd_usb_enable_tx(&chip->usb);
- r = zd_usb_enable_rx(&chip->usb);
- mutex_unlock(&chip->mutex);
- return r;
+@@ -174,8 +171,7 @@ dasd_gendisk_init(void)
+ return 0;
}
--void zd_chip_disable_rx(struct zd_chip *chip)
-+void zd_chip_disable_rxtx(struct zd_chip *chip)
+-void
+-dasd_gendisk_exit(void)
++void dasd_gendisk_exit(void)
{
- mutex_lock(&chip->mutex);
- zd_usb_disable_rx(&chip->usb);
-+ zd_usb_disable_tx(&chip->usb);
- mutex_unlock(&chip->mutex);
+ unregister_blkdev(DASD_MAJOR, "dasd");
}
-
-diff --git a/drivers/net/wireless/zd1211rw/zd_chip.h b/drivers/net/wireless/zd1211rw/zd_chip.h
-index 8009b70..009c037 100644
---- a/drivers/net/wireless/zd1211rw/zd_chip.h
-+++ b/drivers/net/wireless/zd1211rw/zd_chip.h
-@@ -1,4 +1,7 @@
--/* zd_chip.h
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -433,9 +436,10 @@ enum {
- #define CR_GROUP_HASH_P2 CTL_REG(0x0628)
-
- #define CR_RX_TIMEOUT CTL_REG(0x062C)
-+
- /* Basic rates supported by the BSS. When producing ACK or CTS messages, the
- * device will use a rate in this table that is less than or equal to the rate
-- * of the incoming frame which prompted the response */
-+ * of the incoming frame which prompted the response. */
- #define CR_BASIC_RATE_TBL CTL_REG(0x0630)
- #define CR_RATE_1M (1 << 0) /* 802.11b */
- #define CR_RATE_2M (1 << 1) /* 802.11b */
-@@ -509,14 +513,37 @@ enum {
- #define CR_UNDERRUN_CNT CTL_REG(0x0688)
-
- #define CR_RX_FILTER CTL_REG(0x068c)
-+#define RX_FILTER_ASSOC_REQUEST (1 << 0)
- #define RX_FILTER_ASSOC_RESPONSE (1 << 1)
-+#define RX_FILTER_REASSOC_REQUEST (1 << 2)
- #define RX_FILTER_REASSOC_RESPONSE (1 << 3)
-+#define RX_FILTER_PROBE_REQUEST (1 << 4)
- #define RX_FILTER_PROBE_RESPONSE (1 << 5)
-+/* bits 6 and 7 reserved */
- #define RX_FILTER_BEACON (1 << 8)
-+#define RX_FILTER_ATIM (1 << 9)
- #define RX_FILTER_DISASSOC (1 << 10)
- #define RX_FILTER_AUTH (1 << 11)
--#define AP_RX_FILTER 0x0400feff
--#define STA_RX_FILTER 0x0000ffff
-+#define RX_FILTER_DEAUTH (1 << 12)
-+#define RX_FILTER_PSPOLL (1 << 26)
-+#define RX_FILTER_RTS (1 << 27)
-+#define RX_FILTER_CTS (1 << 28)
-+#define RX_FILTER_ACK (1 << 29)
-+#define RX_FILTER_CFEND (1 << 30)
-+#define RX_FILTER_CFACK (1 << 31)
-+
-+/* Enable bits for all frames you are interested in. */
-+#define STA_RX_FILTER (RX_FILTER_ASSOC_REQUEST | RX_FILTER_ASSOC_RESPONSE | \
-+ RX_FILTER_REASSOC_REQUEST | RX_FILTER_REASSOC_RESPONSE | \
-+ RX_FILTER_PROBE_REQUEST | RX_FILTER_PROBE_RESPONSE | \
-+ (0x3 << 6) /* vendor driver sets these reserved bits */ | \
-+ RX_FILTER_BEACON | RX_FILTER_ATIM | RX_FILTER_DISASSOC | \
-+ RX_FILTER_AUTH | RX_FILTER_DEAUTH | \
-+ (0x7 << 13) /* vendor driver sets these reserved bits */ | \
-+ RX_FILTER_PSPOLL | RX_FILTER_ACK) /* 0x2400ffff */
-+
-+#define RX_FILTER_CTRL (RX_FILTER_RTS | RX_FILTER_CTS | \
-+ RX_FILTER_CFEND | RX_FILTER_CFACK)
-
- /* Monitor mode sets filter to 0xfffff */
-
-@@ -730,7 +757,7 @@ static inline struct zd_chip *zd_rf_to_chip(struct zd_rf *rf)
- #define zd_chip_dev(chip) (&(chip)->usb.intf->dev)
-
- void zd_chip_init(struct zd_chip *chip,
-- struct net_device *netdev,
-+ struct ieee80211_hw *hw,
- struct usb_interface *intf);
- void zd_chip_clear(struct zd_chip *chip);
- int zd_chip_read_mac_addr_fw(struct zd_chip *chip, u8 *addr);
-@@ -835,14 +862,12 @@ int zd_chip_switch_radio_on(struct zd_chip *chip);
- int zd_chip_switch_radio_off(struct zd_chip *chip);
- int zd_chip_enable_int(struct zd_chip *chip);
- void zd_chip_disable_int(struct zd_chip *chip);
--int zd_chip_enable_rx(struct zd_chip *chip);
--void zd_chip_disable_rx(struct zd_chip *chip);
-+int zd_chip_enable_rxtx(struct zd_chip *chip);
-+void zd_chip_disable_rxtx(struct zd_chip *chip);
- int zd_chip_enable_hwint(struct zd_chip *chip);
- int zd_chip_disable_hwint(struct zd_chip *chip);
- int zd_chip_generic_patch_6m_band(struct zd_chip *chip, int channel);
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index d427dae..44b2984 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -64,13 +64,7 @@
+ * SECTION: Type definitions
+ */
+ struct dasd_device;
-
--int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip,
-- u8 rts_rate, int preamble);
-+int zd_chip_set_rts_cts_rate_locked(struct zd_chip *chip, int preamble);
+-typedef enum {
+- dasd_era_fatal = -1, /* no chance to recover */
+- dasd_era_none = 0, /* don't recover, everything alright */
+- dasd_era_msg = 1, /* don't recover, just report... */
+- dasd_era_recover = 2 /* recovery action recommended */
+-} dasd_era_t;
++struct dasd_block;
- static inline int zd_get_encryption_type(struct zd_chip *chip, u32 *type)
- {
-@@ -859,17 +884,7 @@ static inline int zd_chip_get_basic_rates(struct zd_chip *chip, u16 *cr_rates)
- return zd_ioread16(chip, CR_BASIC_RATE_TBL, cr_rates);
- }
+ /* BIT DEFINITIONS FOR SENSE DATA */
+ #define DASD_SENSE_BIT_0 0x80
+@@ -151,19 +145,22 @@ do { \
--int zd_chip_set_basic_rates_locked(struct zd_chip *chip, u16 cr_rates);
--
--static inline int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates)
--{
-- int r;
--
-- mutex_lock(&chip->mutex);
-- r = zd_chip_set_basic_rates_locked(chip, cr_rates);
-- mutex_unlock(&chip->mutex);
-- return r;
--}
-+int zd_chip_set_basic_rates(struct zd_chip *chip, u16 cr_rates);
+ struct dasd_ccw_req {
+ unsigned int magic; /* Eye catcher */
+- struct list_head list; /* list_head for request queueing. */
++ struct list_head devlist; /* for dasd_device request queue */
++ struct list_head blocklist; /* for dasd_block request queue */
- int zd_chip_lock_phy_regs(struct zd_chip *chip);
- int zd_chip_unlock_phy_regs(struct zd_chip *chip);
-@@ -893,9 +908,8 @@ struct rx_status;
+ /* Where to execute what... */
+- struct dasd_device *device; /* device the request is for */
++ struct dasd_block *block; /* the originating block device */
++ struct dasd_device *memdev; /* the device used to allocate this */
++ struct dasd_device *startdev; /* device the request is started on */
+ struct ccw1 *cpaddr; /* address of channel program */
+- char status; /* status of this request */
++ char status; /* status of this request */
+ short retries; /* A retry counter */
+ unsigned long flags; /* flags of this request */
- u8 zd_rx_qual_percent(const void *rx_frame, unsigned int size,
- const struct rx_status *status);
--u8 zd_rx_strength_percent(u8 rssi);
+ /* ... and how */
+ unsigned long starttime; /* jiffies time of request start */
+ int expires; /* expiration period in jiffies */
+- char lpm; /* logical path mask */
++ char lpm; /* logical path mask */
+ void *data; /* pointer to data area */
--u16 zd_rx_rate(const void *rx_frame, const struct rx_status *status);
-+u8 zd_rx_rate(const void *rx_frame, const struct rx_status *status);
+ /* these are important for recovering erroneous requests */
+@@ -178,20 +175,27 @@ struct dasd_ccw_req {
+ unsigned long long endclk; /* TOD-clock of request termination */
- struct zd_mc_hash {
- u32 low;
-diff --git a/drivers/net/wireless/zd1211rw/zd_def.h b/drivers/net/wireless/zd1211rw/zd_def.h
-index 505b4d7..5200db4 100644
---- a/drivers/net/wireless/zd1211rw/zd_def.h
-+++ b/drivers/net/wireless/zd1211rw/zd_def.h
-@@ -1,4 +1,7 @@
--/* zd_def.h
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_ieee80211.c b/drivers/net/wireless/zd1211rw/zd_ieee80211.c
-index 189160e..7c277ec 100644
---- a/drivers/net/wireless/zd1211rw/zd_ieee80211.c
-+++ b/drivers/net/wireless/zd1211rw/zd_ieee80211.c
-@@ -1,4 +1,7 @@
--/* zd_ieee80211.c
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -16,178 +19,85 @@
- */
+ /* Callback that is called after reaching final status. */
+- void (*callback)(struct dasd_ccw_req *, void *data);
+- void *callback_data;
++ void (*callback)(struct dasd_ccw_req *, void *data);
++ void *callback_data;
+ };
/*
-- * A lot of this code is generic and should be moved into the upper layers
-- * at some point.
-+ * In the long term, we'll probably find a better way of handling regulatory
-+ * requirements outside of the driver.
+ * dasd_ccw_req -> status can be:
*/
-
--#include <linux/errno.h>
--#include <linux/wireless.h>
- #include <linux/kernel.h>
--#include <net/ieee80211.h>
-+#include <net/mac80211.h>
-
--#include "zd_def.h"
- #include "zd_ieee80211.h"
- #include "zd_mac.h"
-
-+struct channel_range {
-+ u8 regdomain;
-+ u8 start;
-+ u8 end; /* exclusive (channel must be less than end) */
-+};
+-#define DASD_CQR_FILLED 0x00 /* request is ready to be processed */
+-#define DASD_CQR_QUEUED 0x01 /* request is queued to be processed */
+-#define DASD_CQR_IN_IO 0x02 /* request is currently in IO */
+-#define DASD_CQR_DONE 0x03 /* request is completed successfully */
+-#define DASD_CQR_ERROR 0x04 /* request is completed with error */
+-#define DASD_CQR_FAILED 0x05 /* request is finally failed */
+-#define DASD_CQR_CLEAR 0x06 /* request is clear pending */
++#define DASD_CQR_FILLED 0x00 /* request is ready to be processed */
++#define DASD_CQR_DONE 0x01 /* request is completed successfully */
++#define DASD_CQR_NEED_ERP 0x02 /* request needs recovery action */
++#define DASD_CQR_IN_ERP 0x03 /* request is in recovery */
++#define DASD_CQR_FAILED 0x04 /* request is finally failed */
++#define DASD_CQR_TERMINATED 0x05 /* request was stopped by driver */
++
++#define DASD_CQR_QUEUED 0x80 /* request is queued to be processed */
++#define DASD_CQR_IN_IO 0x81 /* request is currently in IO */
++#define DASD_CQR_ERROR 0x82 /* request is completed with error */
++#define DASD_CQR_CLEAR_PENDING 0x83 /* request is clear pending */
++#define DASD_CQR_CLEARED 0x84 /* request was cleared */
++#define DASD_CQR_SUCCESS 0x85 /* request was successfull */
+
- static const struct channel_range channel_ranges[] = {
-- [0] = { 0, 0},
-- [ZD_REGDOMAIN_FCC] = { 1, 12},
-- [ZD_REGDOMAIN_IC] = { 1, 12},
-- [ZD_REGDOMAIN_ETSI] = { 1, 14},
-- [ZD_REGDOMAIN_JAPAN] = { 1, 14},
-- [ZD_REGDOMAIN_SPAIN] = { 1, 14},
-- [ZD_REGDOMAIN_FRANCE] = { 1, 14},
-+ { ZD_REGDOMAIN_FCC, 1, 12 },
-+ { ZD_REGDOMAIN_IC, 1, 12 },
-+ { ZD_REGDOMAIN_ETSI, 1, 14 },
-+ { ZD_REGDOMAIN_JAPAN, 1, 14 },
-+ { ZD_REGDOMAIN_SPAIN, 1, 14 },
-+ { ZD_REGDOMAIN_FRANCE, 1, 14 },
-
- /* Japan originally only had channel 14 available (see CHNL_ID 0x40 in
- * 802.11). However, in 2001 the range was extended to include channels
- * 1-13. The ZyDAS devices still use the old region code but are
- * designed to allow the extra channel access in Japan. */
-- [ZD_REGDOMAIN_JAPAN_ADD] = { 1, 15},
-+ { ZD_REGDOMAIN_JAPAN_ADD, 1, 15 },
- };
-
--const struct channel_range *zd_channel_range(u8 regdomain)
--{
-- if (regdomain >= ARRAY_SIZE(channel_ranges))
-- regdomain = 0;
-- return &channel_ranges[regdomain];
--}
--
--int zd_regdomain_supports_channel(u8 regdomain, u8 channel)
--{
-- const struct channel_range *range = zd_channel_range(regdomain);
-- return range->start <= channel && channel < range->end;
--}
--
--int zd_regdomain_supported(u8 regdomain)
--{
-- const struct channel_range *range = zd_channel_range(regdomain);
-- return range->start != 0;
--}
--
--/* Stores channel frequencies in MHz. */
--static const u16 channel_frequencies[] = {
-- 2412, 2417, 2422, 2427, 2432, 2437, 2442, 2447,
-- 2452, 2457, 2462, 2467, 2472, 2484,
--};
--
--#define NUM_CHANNELS ARRAY_SIZE(channel_frequencies)
--
--static int compute_freq(struct iw_freq *freq, u32 mhz, u32 hz)
--{
-- u32 factor;
--
-- freq->e = 0;
-- if (mhz >= 1000000000U) {
-- pr_debug("zd1211 mhz %u to large\n", mhz);
-- freq->m = 0;
-- return -EINVAL;
-- }
--
-- factor = 1000;
-- while (mhz >= factor) {
--
-- freq->e += 1;
-- factor *= 10;
-- }
--
-- factor /= 1000U;
-- freq->m = mhz * (1000000U/factor) + hz/factor;
--
-- return 0;
--}
--
--int zd_channel_to_freq(struct iw_freq *freq, u8 channel)
-+static const struct channel_range *zd_channel_range(u8 regdomain)
- {
-- if (channel > NUM_CHANNELS) {
-- freq->m = 0;
-- freq->e = 0;
-- return -EINVAL;
-- }
-- if (!channel) {
-- freq->m = 0;
-- freq->e = 0;
-- return -EINVAL;
-+ int i;
-+ for (i = 0; i < ARRAY_SIZE(channel_ranges); i++) {
-+ const struct channel_range *range = &channel_ranges[i];
-+ if (range->regdomain == regdomain)
-+ return range;
- }
-- return compute_freq(freq, channel_frequencies[channel-1], 0);
-+ return NULL;
- }
-
--static int freq_to_mhz(const struct iw_freq *freq)
--{
-- u32 factor;
-- int e;
--
-- /* Such high frequencies are not supported. */
-- if (freq->e > 6)
-- return -EINVAL;
--
-- factor = 1;
-- for (e = freq->e; e > 0; --e) {
-- factor *= 10;
-- }
-- factor = 1000000U / factor;
--
-- if (freq->m % factor) {
-- return -EINVAL;
-- }
--
-- return freq->m / factor;
--}
-+#define CHAN_TO_IDX(chan) ((chan) - 1)
-
--int zd_find_channel(u8 *channel, const struct iw_freq *freq)
-+static void unmask_bg_channels(struct ieee80211_hw *hw,
-+ const struct channel_range *range,
-+ struct ieee80211_hw_mode *mode)
- {
-- int i, r;
-- u32 mhz;
--
-- if (freq->m < 1000) {
-- if (freq->m > NUM_CHANNELS || freq->m == 0)
-- return -EINVAL;
-- *channel = freq->m;
-- return 1;
-- }
--
-- r = freq_to_mhz(freq);
-- if (r < 0)
-- return r;
-- mhz = r;
-+ u8 channel;
-
-- for (i = 0; i < NUM_CHANNELS; i++) {
-- if (mhz == channel_frequencies[i]) {
-- *channel = i+1;
-- return 1;
-- }
-+ for (channel = range->start; channel < range->end; channel++) {
-+ struct ieee80211_channel *chan =
-+ &mode->channels[CHAN_TO_IDX(channel)];
-+ chan->flag |= IEEE80211_CHAN_W_SCAN |
-+ IEEE80211_CHAN_W_ACTIVE_SCAN |
-+ IEEE80211_CHAN_W_IBSS;
- }
--
-- return -EINVAL;
- }
-
--int zd_geo_init(struct ieee80211_device *ieee, u8 regdomain)
-+void zd_geo_init(struct ieee80211_hw *hw, u8 regdomain)
- {
-- struct ieee80211_geo geo;
-+ struct zd_mac *mac = zd_hw_mac(hw);
- const struct channel_range *range;
-- int i;
-- u8 channel;
-- dev_dbg(zd_mac_dev(zd_netdev_mac(ieee->dev)),
-- "regdomain %#04x\n", regdomain);
-+ dev_dbg(zd_mac_dev(mac), "regdomain %#02x\n", regdomain);
+ /* per dasd_ccw_req flags */
+ #define DASD_CQR_FLAGS_USE_ERP 0 /* use ERP for this request */
+@@ -214,52 +218,71 @@ struct dasd_discipline {
- range = zd_channel_range(regdomain);
-- if (range->start == 0) {
-- dev_err(zd_mac_dev(zd_netdev_mac(ieee->dev)),
-- "zd1211 regdomain %#04x not supported\n",
-- regdomain);
-- return -EINVAL;
-+ if (!range) {
-+ /* The vendor driver overrides the regulatory domain and
-+ * allowed channel registers and unconditionally restricts
-+ * available channels to 1-11 everywhere. Match their
-+ * questionable behaviour only for regdomains which we don't
-+ * recognise. */
-+ dev_warn(zd_mac_dev(mac), "Unrecognised regulatory domain: "
-+ "%#02x. Defaulting to FCC.\n", regdomain);
-+ range = zd_channel_range(ZD_REGDOMAIN_FCC);
- }
+ struct list_head list; /* used for list of disciplines */
-- memset(&geo, 0, sizeof(geo));
--
-- for (i = 0, channel = range->start; channel < range->end; channel++) {
-- struct ieee80211_channel *chan = &geo.bg[i++];
-- chan->freq = channel_frequencies[channel - 1];
-- chan->channel = channel;
-- }
+- /*
+- * Device recognition functions. check_device is used to verify
+- * the sense data and the information returned by read device
+- * characteristics. It returns 0 if the discipline can be used
+- * for the device in question.
+- * do_analysis is used in the step from device state "basic" to
+- * state "accept". It returns 0 if the device can be made ready,
+- * it returns -EMEDIUMTYPE if the device can't be made ready or
+- * -EAGAIN if do_analysis started a ccw that needs to complete
+- * before the analysis may be repeated.
+- */
+- int (*check_device)(struct dasd_device *);
+- int (*do_analysis) (struct dasd_device *);
-
-- geo.bg_channels = i;
-- memcpy(geo.name, "XX ", 4);
-- ieee80211_set_geo(ieee, &geo);
-- return 0;
-+ unmask_bg_channels(hw, range, &mac->modes[0]);
-+ unmask_bg_channels(hw, range, &mac->modes[1]);
- }
+- /*
+- * Device operation functions. build_cp creates a ccw chain for
+- * a block device request, start_io starts the request and
+- * term_IO cancels it (e.g. in case of a timeout). format_device
+- * returns a ccw chain to be used to format the device.
+- */
++ /*
++ * Device recognition functions. check_device is used to verify
++ * the sense data and the information returned by read device
++ * characteristics. It returns 0 if the discipline can be used
++ * for the device in question. uncheck_device is called during
++ * device shutdown to deregister a device from its discipline.
++ */
++ int (*check_device) (struct dasd_device *);
++ void (*uncheck_device) (struct dasd_device *);
+
-diff --git a/drivers/net/wireless/zd1211rw/zd_ieee80211.h b/drivers/net/wireless/zd1211rw/zd_ieee80211.h
-index fbf6491..26b79f1 100644
---- a/drivers/net/wireless/zd1211rw/zd_ieee80211.h
-+++ b/drivers/net/wireless/zd1211rw/zd_ieee80211.h
-@@ -1,7 +1,27 @@
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-+ */
++ /*
++ * do_analysis is used in the step from device state "basic" to
++ * state "accept". It returns 0 if the device can be made ready,
++ * it returns -EMEDIUMTYPE if the device can't be made ready or
++ * -EAGAIN if do_analysis started a ccw that needs to complete
++ * before the analysis may be repeated.
++ */
++ int (*do_analysis) (struct dasd_block *);
+
- #ifndef _ZD_IEEE80211_H
- #define _ZD_IEEE80211_H
++ /*
++ * Last things to do when a device is set online, and first things
++ * when it is set offline.
++ */
++ int (*ready_to_online) (struct dasd_device *);
++ int (*online_to_ready) (struct dasd_device *);
++
++ /*
++ * Device operation functions. build_cp creates a ccw chain for
++ * a block device request, start_io starts the request and
++ * term_IO cancels it (e.g. in case of a timeout). format_device
++ * returns a ccw chain to be used to format the device.
++ * handle_terminated_request allows to examine a cqr and prepare
++ * it for retry.
++ */
+ struct dasd_ccw_req *(*build_cp) (struct dasd_device *,
++ struct dasd_block *,
+ struct request *);
+ int (*start_IO) (struct dasd_ccw_req *);
+ int (*term_IO) (struct dasd_ccw_req *);
++ void (*handle_terminated_request) (struct dasd_ccw_req *);
+ struct dasd_ccw_req *(*format_device) (struct dasd_device *,
+ struct format_data_t *);
+ int (*free_cp) (struct dasd_ccw_req *, struct request *);
+- /*
+- * Error recovery functions. examine_error() returns a value that
+- * indicates what to do for an error condition. If examine_error()
++
++ /*
++ * Error recovery functions. examine_error() returns a value that
++ * indicates what to do for an error condition. If examine_error()
+ * returns 'dasd_era_recover' erp_action() is called to create a
+- * special error recovery ccw. erp_postaction() is called after
+- * an error recovery ccw has finished its execution. dump_sense
+- * is called for every error condition to print the sense data
+- * to the console.
+- */
+- dasd_era_t(*examine_error) (struct dasd_ccw_req *, struct irb *);
++ * special error recovery ccw. erp_postaction() is called after
++ * an error recovery ccw has finished its execution. dump_sense
++ * is called for every error condition to print the sense data
++ * to the console.
++ */
+ dasd_erp_fn_t(*erp_action) (struct dasd_ccw_req *);
+ dasd_erp_fn_t(*erp_postaction) (struct dasd_ccw_req *);
+ void (*dump_sense) (struct dasd_device *, struct dasd_ccw_req *,
+ struct irb *);
--#include <net/ieee80211.h>
-+#include <net/mac80211.h>
++ void (*handle_unsolicited_interrupt) (struct dasd_device *,
++ struct irb *);
++
+ /* i/o control functions. */
+- int (*fill_geometry) (struct dasd_device *, struct hd_geometry *);
++ int (*fill_geometry) (struct dasd_block *, struct hd_geometry *);
+ int (*fill_info) (struct dasd_device *, struct dasd_information2_t *);
+- int (*ioctl) (struct dasd_device *, unsigned int, void __user *);
++ int (*ioctl) (struct dasd_block *, unsigned int, void __user *);
+ };
- /* Additional definitions from the standards.
+ extern struct dasd_discipline *dasd_diag_discipline_pointer;
+@@ -267,12 +290,18 @@ extern struct dasd_discipline *dasd_diag_discipline_pointer;
+ /*
+ * Unique identifier for dasd device.
*/
-@@ -19,22 +39,7 @@ enum {
- MAX_CHANNEL24 = 14,
++#define UA_NOT_CONFIGURED 0x00
++#define UA_BASE_DEVICE 0x01
++#define UA_BASE_PAV_ALIAS 0x02
++#define UA_HYPER_PAV_ALIAS 0x03
++
+ struct dasd_uid {
+- __u8 alias;
++ __u8 type;
+ char vendor[4];
+ char serial[15];
+ __u16 ssid;
+- __u8 unit_addr;
++ __u8 real_unit_addr;
++ __u8 base_unit_addr;
};
--struct channel_range {
-- u8 start;
-- u8 end; /* exclusive (channel must be less than end) */
--};
--
--struct iw_freq;
--
--int zd_geo_init(struct ieee80211_device *ieee, u8 regdomain);
--
--const struct channel_range *zd_channel_range(u8 regdomain);
--int zd_regdomain_supports_channel(u8 regdomain, u8 channel);
--int zd_regdomain_supported(u8 regdomain);
--
--/* for 2.4 GHz band */
--int zd_channel_to_freq(struct iw_freq *freq, u8 channel);
--int zd_find_channel(u8 *channel, const struct iw_freq *freq);
-+void zd_geo_init(struct ieee80211_hw *hw, u8 regdomain);
+ /*
+@@ -293,14 +322,9 @@ struct dasd_uid {
- #define ZD_PLCP_SERVICE_LENGTH_EXTENSION 0x80
+ struct dasd_device {
+ /* Block device stuff. */
+- struct gendisk *gdp;
+- struct request_queue *request_queue;
+- spinlock_t request_queue_lock;
+- struct block_device *bdev;
++ struct dasd_block *block;
++
+ unsigned int devindex;
+- unsigned long blocks; /* size of volume in blocks */
+- unsigned int bp_block; /* bytes per block */
+- unsigned int s2b_shift; /* log2 (bp_block/512) */
+ unsigned long flags; /* per device flags */
+ unsigned short features; /* copy of devmap-features (read-only!) */
-@@ -54,8 +59,8 @@ static inline u8 zd_ofdm_plcp_header_rate(const struct ofdm_plcp_header *header)
- *
- * See the struct zd_ctrlset definition in zd_mac.h.
- */
--#define ZD_OFDM_PLCP_RATE_6M 0xb
--#define ZD_OFDM_PLCP_RATE_9M 0xf
-+#define ZD_OFDM_PLCP_RATE_6M 0xb
-+#define ZD_OFDM_PLCP_RATE_9M 0xf
- #define ZD_OFDM_PLCP_RATE_12M 0xa
- #define ZD_OFDM_PLCP_RATE_18M 0xe
- #define ZD_OFDM_PLCP_RATE_24M 0x9
-@@ -87,10 +92,4 @@ static inline u8 zd_cck_plcp_header_signal(const struct cck_plcp_header *header)
- #define ZD_CCK_PLCP_SIGNAL_5M5 0x37
- #define ZD_CCK_PLCP_SIGNAL_11M 0x6e
+@@ -316,9 +340,8 @@ struct dasd_device {
+ int state, target;
+ int stopped; /* device (ccw_device_start) was stopped */
--enum ieee80211_std {
-- IEEE80211B = 0x01,
-- IEEE80211A = 0x02,
-- IEEE80211G = 0x04,
--};
--
- #endif /* _ZD_IEEE80211_H */
-diff --git a/drivers/net/wireless/zd1211rw/zd_mac.c b/drivers/net/wireless/zd1211rw/zd_mac.c
-index 5298a8b..49127e4 100644
---- a/drivers/net/wireless/zd1211rw/zd_mac.c
-+++ b/drivers/net/wireless/zd1211rw/zd_mac.c
-@@ -1,4 +1,9 @@
--/* zd_mac.c
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
-+ * Copyright (C) 2006-2007 Michael Wu <flamingice at sourmilk.net>
-+ * Copyright (c) 2007 Luis R. Rodriguez <mcgrof at winlab.rutgers.edu>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -17,7 +22,6 @@
+- /* Open and reference count. */
++ /* reference count. */
+ atomic_t ref_count;
+- atomic_t open_count;
+
+ /* ccw queue and memory for static ccw/erp buffers. */
+ struct list_head ccw_queue;
+@@ -337,20 +360,45 @@ struct dasd_device {
- #include <linux/netdevice.h>
- #include <linux/etherdevice.h>
--#include <linux/wireless.h>
- #include <linux/usb.h>
- #include <linux/jiffies.h>
- #include <net/ieee80211_radiotap.h>
-@@ -26,81 +30,105 @@
- #include "zd_chip.h"
- #include "zd_mac.h"
- #include "zd_ieee80211.h"
--#include "zd_netdev.h"
- #include "zd_rf.h"
+ struct ccw_device *cdev;
--static void ieee_init(struct ieee80211_device *ieee);
--static void softmac_init(struct ieee80211softmac_device *sm);
--static void set_rts_cts_work(struct work_struct *work);
--static void set_basic_rates_work(struct work_struct *work);
-+/* This table contains the hardware specific values for the modulation rates. */
-+static const struct ieee80211_rate zd_rates[] = {
-+ { .rate = 10,
-+ .val = ZD_CCK_RATE_1M,
-+ .flags = IEEE80211_RATE_CCK },
-+ { .rate = 20,
-+ .val = ZD_CCK_RATE_2M,
-+ .val2 = ZD_CCK_RATE_2M | ZD_CCK_PREA_SHORT,
-+ .flags = IEEE80211_RATE_CCK_2 },
-+ { .rate = 55,
-+ .val = ZD_CCK_RATE_5_5M,
-+ .val2 = ZD_CCK_RATE_5_5M | ZD_CCK_PREA_SHORT,
-+ .flags = IEEE80211_RATE_CCK_2 },
-+ { .rate = 110,
-+ .val = ZD_CCK_RATE_11M,
-+ .val2 = ZD_CCK_RATE_11M | ZD_CCK_PREA_SHORT,
-+ .flags = IEEE80211_RATE_CCK_2 },
-+ { .rate = 60,
-+ .val = ZD_OFDM_RATE_6M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 90,
-+ .val = ZD_OFDM_RATE_9M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 120,
-+ .val = ZD_OFDM_RATE_12M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 180,
-+ .val = ZD_OFDM_RATE_18M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 240,
-+ .val = ZD_OFDM_RATE_24M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 360,
-+ .val = ZD_OFDM_RATE_36M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 480,
-+ .val = ZD_OFDM_RATE_48M,
-+ .flags = IEEE80211_RATE_OFDM },
-+ { .rate = 540,
-+ .val = ZD_OFDM_RATE_54M,
-+ .flags = IEEE80211_RATE_OFDM },
++ /* hook for alias management */
++ struct list_head alias_list;
+};
+
-+static const struct ieee80211_channel zd_channels[] = {
-+ { .chan = 1,
-+ .freq = 2412},
-+ { .chan = 2,
-+ .freq = 2417},
-+ { .chan = 3,
-+ .freq = 2422},
-+ { .chan = 4,
-+ .freq = 2427},
-+ { .chan = 5,
-+ .freq = 2432},
-+ { .chan = 6,
-+ .freq = 2437},
-+ { .chan = 7,
-+ .freq = 2442},
-+ { .chan = 8,
-+ .freq = 2447},
-+ { .chan = 9,
-+ .freq = 2452},
-+ { .chan = 10,
-+ .freq = 2457},
-+ { .chan = 11,
-+ .freq = 2462},
-+ { .chan = 12,
-+ .freq = 2467},
-+ { .chan = 13,
-+ .freq = 2472},
-+ { .chan = 14,
-+ .freq = 2484}
-+};
-
- static void housekeeping_init(struct zd_mac *mac);
- static void housekeeping_enable(struct zd_mac *mac);
- static void housekeeping_disable(struct zd_mac *mac);
++struct dasd_block {
++ /* Block device stuff. */
++ struct gendisk *gdp;
++ struct request_queue *request_queue;
++ spinlock_t request_queue_lock;
++ struct block_device *bdev;
++ atomic_t open_count;
++
++ unsigned long blocks; /* size of volume in blocks */
++ unsigned int bp_block; /* bytes per block */
++ unsigned int s2b_shift; /* log2 (bp_block/512) */
++
++ struct dasd_device *base;
++ struct list_head ccw_queue;
++ spinlock_t queue_lock;
++
++ atomic_t tasklet_scheduled;
++ struct tasklet_struct tasklet;
++ struct timer_list timer;
++
+ #ifdef CONFIG_DASD_PROFILE
+ struct dasd_profile_info_t profile;
+ #endif
+ };
--static void set_multicast_hash_handler(struct work_struct *work);
--
--static void do_rx(unsigned long mac_ptr);
--
--int zd_mac_init(struct zd_mac *mac,
-- struct net_device *netdev,
-- struct usb_interface *intf)
--{
-- struct ieee80211_device *ieee = zd_netdev_ieee80211(netdev);
--
-- memset(mac, 0, sizeof(*mac));
-- spin_lock_init(&mac->lock);
-- mac->netdev = netdev;
-- INIT_DELAYED_WORK(&mac->set_rts_cts_work, set_rts_cts_work);
-- INIT_DELAYED_WORK(&mac->set_basic_rates_work, set_basic_rates_work);
--
-- skb_queue_head_init(&mac->rx_queue);
-- tasklet_init(&mac->rx_tasklet, do_rx, (unsigned long)mac);
-- tasklet_disable(&mac->rx_tasklet);
--
-- ieee_init(ieee);
-- softmac_init(ieee80211_priv(netdev));
-- zd_chip_init(&mac->chip, netdev, intf);
-- housekeeping_init(mac);
-- INIT_WORK(&mac->set_multicast_hash_work, set_multicast_hash_handler);
-- return 0;
--}
--
--static int reset_channel(struct zd_mac *mac)
--{
-- int r;
-- unsigned long flags;
-- const struct channel_range *range;
--
-- spin_lock_irqsave(&mac->lock, flags);
-- range = zd_channel_range(mac->regdomain);
-- if (!range->start) {
-- r = -EINVAL;
-- goto out;
-- }
-- mac->requested_channel = range->start;
-- r = 0;
--out:
-- spin_unlock_irqrestore(&mac->lock, flags);
-- return r;
--}
--
--int zd_mac_preinit_hw(struct zd_mac *mac)
-+int zd_mac_preinit_hw(struct ieee80211_hw *hw)
- {
- int r;
- u8 addr[ETH_ALEN];
-+ struct zd_mac *mac = zd_hw_mac(hw);
++
++
+ /* reasons why device (ccw_device_start) was stopped */
+ #define DASD_STOPPED_NOT_ACC 1 /* not accessible */
+ #define DASD_STOPPED_QUIESCE 2 /* Quiesced */
+ #define DASD_STOPPED_PENDING 4 /* long busy */
+ #define DASD_STOPPED_DC_WAIT 8 /* disconnected, wait */
+-#define DASD_STOPPED_DC_EIO 16 /* disconnected, return -EIO */
++#define DASD_STOPPED_SU 16 /* summary unit check handling */
- r = zd_chip_read_mac_addr_fw(&mac->chip, addr);
- if (r)
- return r;
+ /* per device flags */
+-#define DASD_FLAG_DSC_ERROR 2 /* return -EIO when disconnected */
+ #define DASD_FLAG_OFFLINE 3 /* device is in offline processing */
+ #define DASD_FLAG_EER_SNSS 4 /* A SNSS is required */
+ #define DASD_FLAG_EER_IN_USE 5 /* A SNSS request is running */
+@@ -489,6 +537,9 @@ dasd_kmalloc_set_cda(struct ccw1 *ccw, void *cda, struct dasd_device *device)
+ struct dasd_device *dasd_alloc_device(void);
+ void dasd_free_device(struct dasd_device *);
-- memcpy(mac->netdev->dev_addr, addr, ETH_ALEN);
-+ SET_IEEE80211_PERM_ADDR(hw, addr);
++struct dasd_block *dasd_alloc_block(void);
++void dasd_free_block(struct dasd_block *);
+
- return 0;
- }
+ void dasd_enable_device(struct dasd_device *);
+ void dasd_set_target_state(struct dasd_device *, int);
+ void dasd_kick_device(struct dasd_device *);
+@@ -497,18 +548,23 @@ void dasd_add_request_head(struct dasd_ccw_req *);
+ void dasd_add_request_tail(struct dasd_ccw_req *);
+ int dasd_start_IO(struct dasd_ccw_req *);
+ int dasd_term_IO(struct dasd_ccw_req *);
+-void dasd_schedule_bh(struct dasd_device *);
++void dasd_schedule_device_bh(struct dasd_device *);
++void dasd_schedule_block_bh(struct dasd_block *);
+ int dasd_sleep_on(struct dasd_ccw_req *);
+ int dasd_sleep_on_immediatly(struct dasd_ccw_req *);
+ int dasd_sleep_on_interruptible(struct dasd_ccw_req *);
+-void dasd_set_timer(struct dasd_device *, int);
+-void dasd_clear_timer(struct dasd_device *);
++void dasd_device_set_timer(struct dasd_device *, int);
++void dasd_device_clear_timer(struct dasd_device *);
++void dasd_block_set_timer(struct dasd_block *, int);
++void dasd_block_clear_timer(struct dasd_block *);
+ int dasd_cancel_req(struct dasd_ccw_req *);
++int dasd_flush_device_queue(struct dasd_device *);
+ int dasd_generic_probe (struct ccw_device *, struct dasd_discipline *);
+ void dasd_generic_remove (struct ccw_device *cdev);
+ int dasd_generic_set_online(struct ccw_device *, struct dasd_discipline *);
+ int dasd_generic_set_offline (struct ccw_device *cdev);
+ int dasd_generic_notify(struct ccw_device *, int);
++void dasd_generic_handle_state_change(struct dasd_device *);
--int zd_mac_init_hw(struct zd_mac *mac)
-+int zd_mac_init_hw(struct ieee80211_hw *hw)
- {
- int r;
-+ struct zd_mac *mac = zd_hw_mac(hw);
- struct zd_chip *chip = &mac->chip;
- u8 default_regdomain;
+ int dasd_generic_read_dev_chars(struct dasd_device *, char *, void **, int);
-@@ -116,22 +144,9 @@ int zd_mac_init_hw(struct zd_mac *mac)
- r = zd_read_regdomain(chip, &default_regdomain);
- if (r)
- goto disable_int;
-- if (!zd_regdomain_supported(default_regdomain)) {
-- /* The vendor driver overrides the regulatory domain and
-- * allowed channel registers and unconditionally restricts
-- * available channels to 1-11 everywhere. Match their
-- * questionable behaviour only for regdomains which we don't
-- * recognise. */
-- dev_warn(zd_mac_dev(mac), "Unrecognised regulatory domain: "
-- "%#04x. Defaulting to FCC.\n", default_regdomain);
-- default_regdomain = ZD_REGDOMAIN_FCC;
-- }
- spin_lock_irq(&mac->lock);
- mac->regdomain = mac->default_regdomain = default_regdomain;
- spin_unlock_irq(&mac->lock);
-- r = reset_channel(mac);
-- if (r)
-- goto disable_int;
+@@ -542,10 +598,10 @@ int dasd_busid_known(char *);
+ /* externals in dasd_gendisk.c */
+ int dasd_gendisk_init(void);
+ void dasd_gendisk_exit(void);
+-int dasd_gendisk_alloc(struct dasd_device *);
+-void dasd_gendisk_free(struct dasd_device *);
+-int dasd_scan_partitions(struct dasd_device *);
+-void dasd_destroy_partitions(struct dasd_device *);
++int dasd_gendisk_alloc(struct dasd_block *);
++void dasd_gendisk_free(struct dasd_block *);
++int dasd_scan_partitions(struct dasd_block *);
++void dasd_destroy_partitions(struct dasd_block *);
- /* We must inform the device that we are doing encryption/decryption in
- * software at the moment. */
-@@ -139,9 +154,7 @@ int zd_mac_init_hw(struct zd_mac *mac)
- if (r)
- goto disable_int;
+ /* externals in dasd_ioctl.c */
+ int dasd_ioctl(struct inode *, struct file *, unsigned int, unsigned long);
+@@ -563,20 +619,9 @@ struct dasd_ccw_req *dasd_alloc_erp_request(char *, int, int,
+ void dasd_free_erp_request(struct dasd_ccw_req *, struct dasd_device *);
+ void dasd_log_sense(struct dasd_ccw_req *, struct irb *);
-- r = zd_geo_init(zd_mac_to_ieee80211(mac), mac->regdomain);
-- if (r)
-- goto disable_int;
-+ zd_geo_init(hw, mac->regdomain);
+-/* externals in dasd_3370_erp.c */
+-dasd_era_t dasd_3370_erp_examine(struct dasd_ccw_req *, struct irb *);
+-
+ /* externals in dasd_3990_erp.c */
+-dasd_era_t dasd_3990_erp_examine(struct dasd_ccw_req *, struct irb *);
+ struct dasd_ccw_req *dasd_3990_erp_action(struct dasd_ccw_req *);
- r = 0;
- disable_int:
-@@ -153,8 +166,6 @@ out:
- void zd_mac_clear(struct zd_mac *mac)
+-/* externals in dasd_9336_erp.c */
+-dasd_era_t dasd_9336_erp_examine(struct dasd_ccw_req *, struct irb *);
+-
+-/* externals in dasd_9336_erp.c */
+-dasd_era_t dasd_9343_erp_examine(struct dasd_ccw_req *, struct irb *);
+-struct dasd_ccw_req *dasd_9343_erp_action(struct dasd_ccw_req *);
+-
+ /* externals in dasd_eer.c */
+ #ifdef CONFIG_DASD_EER
+ int dasd_eer_init(void);
+diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
+index 672eb0a..91a6463 100644
+--- a/drivers/s390/block/dasd_ioctl.c
++++ b/drivers/s390/block/dasd_ioctl.c
+@@ -38,15 +38,15 @@ dasd_ioctl_api_version(void __user *argp)
+ static int
+ dasd_ioctl_enable(struct block_device *bdev)
{
- flush_workqueue(zd_workqueue);
-- skb_queue_purge(&mac->rx_queue);
-- tasklet_kill(&mac->rx_tasklet);
- zd_chip_clear(&mac->chip);
- ZD_ASSERT(!spin_is_locked(&mac->lock));
- ZD_MEMCLEAR(mac, sizeof(struct zd_mac));
-@@ -162,34 +173,27 @@ void zd_mac_clear(struct zd_mac *mac)
+- struct dasd_device *device = bdev->bd_disk->private_data;
++ struct dasd_block *block = bdev->bd_disk->private_data;
- static int set_rx_filter(struct zd_mac *mac)
- {
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- u32 filter = (ieee->iw_mode == IW_MODE_MONITOR) ? ~0 : STA_RX_FILTER;
-- return zd_iowrite32(&mac->chip, CR_RX_FILTER, filter);
--}
-+ unsigned long flags;
-+ u32 filter = STA_RX_FILTER;
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
--static int set_sniffer(struct zd_mac *mac)
--{
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- return zd_iowrite32(&mac->chip, CR_SNIFFER_ON,
-- ieee->iw_mode == IW_MODE_MONITOR ? 1 : 0);
-- return 0;
-+ spin_lock_irqsave(&mac->lock, flags);
-+ if (mac->pass_ctrl)
-+ filter |= RX_FILTER_CTRL;
-+ spin_unlock_irqrestore(&mac->lock, flags);
-+
-+ return zd_iowrite32(&mac->chip, CR_RX_FILTER, filter);
+- dasd_enable_device(device);
++ dasd_enable_device(block->base);
+ /* Formatting the dasd device can change the capacity. */
+ mutex_lock(&bdev->bd_mutex);
+- i_size_write(bdev->bd_inode, (loff_t)get_capacity(device->gdp) << 9);
++ i_size_write(bdev->bd_inode, (loff_t)get_capacity(block->gdp) << 9);
+ mutex_unlock(&bdev->bd_mutex);
+ return 0;
}
+@@ -58,7 +58,7 @@ dasd_ioctl_enable(struct block_device *bdev)
+ static int
+ dasd_ioctl_disable(struct block_device *bdev)
+ {
+- struct dasd_device *device = bdev->bd_disk->private_data;
++ struct dasd_block *block = bdev->bd_disk->private_data;
- static int set_mc_hash(struct zd_mac *mac)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+@@ -71,7 +71,7 @@ dasd_ioctl_disable(struct block_device *bdev)
+ * using the BIODASDFMT ioctl. Therefore the correct state for the
+ * device is DASD_STATE_BASIC that allows to do basic i/o.
+ */
+- dasd_set_target_state(device, DASD_STATE_BASIC);
++ dasd_set_target_state(block->base, DASD_STATE_BASIC);
+ /*
+ * Set i_size to zero, since read, write, etc. check against this
+ * value.
+@@ -85,19 +85,19 @@ dasd_ioctl_disable(struct block_device *bdev)
+ /*
+ * Quiesce device.
+ */
+-static int
+-dasd_ioctl_quiesce(struct dasd_device *device)
++static int dasd_ioctl_quiesce(struct dasd_block *block)
{
- struct zd_mc_hash hash;
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
--
- zd_mc_clear(&hash);
-- if (ieee->iw_mode == IW_MODE_MONITOR)
-- zd_mc_add_all(&hash);
--
- return zd_chip_set_multicast_hash(&mac->chip, &hash);
+ unsigned long flags;
++ struct dasd_device *base;
+
++ base = block->base;
+ if (!capable (CAP_SYS_ADMIN))
+ return -EACCES;
+
+- DEV_MESSAGE (KERN_DEBUG, device, "%s",
+- "Quiesce IO on device");
+- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+- device->stopped |= DASD_STOPPED_QUIESCE;
+- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ DEV_MESSAGE(KERN_DEBUG, base, "%s", "Quiesce IO on device");
++ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
++ base->stopped |= DASD_STOPPED_QUIESCE;
++ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
+ return 0;
}
--int zd_mac_open(struct net_device *netdev)
-+static int zd_op_start(struct ieee80211_hw *hw)
+@@ -105,22 +105,21 @@ dasd_ioctl_quiesce(struct dasd_device *device)
+ /*
+ * Quiesce device.
+ */
+-static int
+-dasd_ioctl_resume(struct dasd_device *device)
++static int dasd_ioctl_resume(struct dasd_block *block)
{
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-+ struct zd_mac *mac = zd_hw_mac(hw);
- struct zd_chip *chip = &mac->chip;
- struct zd_usb *usb = &chip->usb;
- int r;
-@@ -200,46 +204,33 @@ int zd_mac_open(struct net_device *netdev)
- goto out;
- }
+ unsigned long flags;
++ struct dasd_device *base;
-- tasklet_enable(&mac->rx_tasklet);
--
- r = zd_chip_enable_int(chip);
- if (r < 0)
- goto out;
++ base = block->base;
+ if (!capable (CAP_SYS_ADMIN))
+ return -EACCES;
-- r = zd_write_mac_addr(chip, netdev->dev_addr);
-- if (r)
-- goto disable_int;
+- DEV_MESSAGE (KERN_DEBUG, device, "%s",
+- "resume IO on device");
-
- r = zd_chip_set_basic_rates(chip, CR_RATES_80211B | CR_RATES_80211G);
- if (r < 0)
- goto disable_int;
- r = set_rx_filter(mac);
- if (r)
- goto disable_int;
-- r = set_sniffer(mac);
-- if (r)
-- goto disable_int;
- r = set_mc_hash(mac);
- if (r)
- goto disable_int;
- r = zd_chip_switch_radio_on(chip);
- if (r < 0)
- goto disable_int;
-- r = zd_chip_set_channel(chip, mac->requested_channel);
-- if (r < 0)
-- goto disable_radio;
-- r = zd_chip_enable_rx(chip);
-+ r = zd_chip_enable_rxtx(chip);
- if (r < 0)
- goto disable_radio;
- r = zd_chip_enable_hwint(chip);
- if (r < 0)
-- goto disable_rx;
-+ goto disable_rxtx;
+- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+- device->stopped &= ~DASD_STOPPED_QUIESCE;
+- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
++ DEV_MESSAGE(KERN_DEBUG, base, "%s", "resume IO on device");
++ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
++ base->stopped &= ~DASD_STOPPED_QUIESCE;
++ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
- housekeeping_enable(mac);
-- ieee80211softmac_start(netdev);
+- dasd_schedule_bh (device);
++ dasd_schedule_block_bh(block);
return 0;
--disable_rx:
-- zd_chip_disable_rx(chip);
-+disable_rxtx:
-+ zd_chip_disable_rxtx(chip);
- disable_radio:
- zd_chip_switch_radio_off(chip);
- disable_int:
-@@ -248,494 +239,190 @@ out:
- return r;
}
--int zd_mac_stop(struct net_device *netdev)
-+/**
-+ * clear_tx_skb_control_block - clears the control block of tx skbuffs
-+ * @skb: a &struct sk_buff pointer
-+ *
-+ * This clears the control block of skbuff buffers, which were transmitted to
-+ * the device. Notify that the function is not thread-safe, so prevent
-+ * multiple calls.
-+ */
-+static void clear_tx_skb_control_block(struct sk_buff *skb)
+@@ -130,22 +129,23 @@ dasd_ioctl_resume(struct dasd_device *device)
+ * commands to format a single unit of the device. In terms of the ECKD
+ * devices this means CCWs are generated to format a single track.
+ */
+-static int
+-dasd_format(struct dasd_device * device, struct format_data_t * fdata)
++static int dasd_format(struct dasd_block *block, struct format_data_t *fdata)
{
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-- struct zd_chip *chip = &mac->chip;
-+ struct zd_tx_skb_control_block *cb =
-+ (struct zd_tx_skb_control_block *)skb->cb;
+ struct dasd_ccw_req *cqr;
++ struct dasd_device *base;
+ int rc;
-- netif_stop_queue(netdev);
-+ kfree(cb->control);
-+ cb->control = NULL;
-+}
+- if (device->discipline->format_device == NULL)
++ base = block->base;
++ if (base->discipline->format_device == NULL)
+ return -EPERM;
-- /*
-- * The order here deliberately is a little different from the open()
-+/**
-+ * kfree_tx_skb - frees a tx skbuff
-+ * @skb: a &struct sk_buff pointer
-+ *
-+ * Frees the tx skbuff. Frees also the allocated control structure in the
-+ * control block if necessary.
-+ */
-+static void kfree_tx_skb(struct sk_buff *skb)
-+{
-+ clear_tx_skb_control_block(skb);
-+ dev_kfree_skb_any(skb);
-+}
-+
-+static void zd_op_stop(struct ieee80211_hw *hw)
-+{
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ struct zd_chip *chip = &mac->chip;
-+ struct sk_buff *skb;
-+ struct sk_buff_head *ack_wait_queue = &mac->ack_wait_queue;
-+
-+ /* The order here deliberately is a little different from the open()
- * method, since we need to make sure there is no opportunity for RX
-- * frames to be processed by softmac after we have stopped it.
-+ * frames to be processed by mac80211 after we have stopped it.
- */
+- if (device->state != DASD_STATE_BASIC) {
+- DEV_MESSAGE(KERN_WARNING, device, "%s",
++ if (base->state != DASD_STATE_BASIC) {
++ DEV_MESSAGE(KERN_WARNING, base, "%s",
+ "dasd_format: device is not disabled! ");
+ return -EBUSY;
+ }
-- zd_chip_disable_rx(chip);
-- skb_queue_purge(&mac->rx_queue);
-- tasklet_disable(&mac->rx_tasklet);
-+ zd_chip_disable_rxtx(chip);
- housekeeping_disable(mac);
-- ieee80211softmac_stop(netdev);
--
-- /* Ensure no work items are running or queued from this point */
-- cancel_delayed_work(&mac->set_rts_cts_work);
-- cancel_delayed_work(&mac->set_basic_rates_work);
- flush_workqueue(zd_workqueue);
-- mac->updating_rts_rate = 0;
-- mac->updating_basic_rates = 0;
+- DBF_DEV_EVENT(DBF_NOTICE, device,
++ DBF_DEV_EVENT(DBF_NOTICE, base,
+ "formatting units %d to %d (%d B blocks) flags %d",
+ fdata->start_unit,
+ fdata->stop_unit, fdata->blksize, fdata->intensity);
+@@ -156,20 +156,20 @@ dasd_format(struct dasd_device * device, struct format_data_t * fdata)
+ * enabling the device later.
+ */
+ if (fdata->start_unit == 0) {
+- struct block_device *bdev = bdget_disk(device->gdp, 0);
++ struct block_device *bdev = bdget_disk(block->gdp, 0);
+ bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
+ bdput(bdev);
+ }
- zd_chip_disable_hwint(chip);
- zd_chip_switch_radio_off(chip);
- zd_chip_disable_int(chip);
+ while (fdata->start_unit <= fdata->stop_unit) {
+- cqr = device->discipline->format_device(device, fdata);
++ cqr = base->discipline->format_device(base, fdata);
+ if (IS_ERR(cqr))
+ return PTR_ERR(cqr);
+ rc = dasd_sleep_on_interruptible(cqr);
+- dasd_sfree_request(cqr, cqr->device);
++ dasd_sfree_request(cqr, cqr->memdev);
+ if (rc) {
+ if (rc != -ERESTARTSYS)
+- DEV_MESSAGE(KERN_ERR, device,
++ DEV_MESSAGE(KERN_ERR, base,
+ " Formatting of unit %d failed "
+ "with rc = %d",
+ fdata->start_unit, rc);
+@@ -186,7 +186,7 @@ dasd_format(struct dasd_device * device, struct format_data_t * fdata)
+ static int
+ dasd_ioctl_format(struct block_device *bdev, void __user *argp)
+ {
+- struct dasd_device *device = bdev->bd_disk->private_data;
++ struct dasd_block *block = bdev->bd_disk->private_data;
+ struct format_data_t fdata;
-- return 0;
--}
--
--int zd_mac_set_mac_address(struct net_device *netdev, void *p)
--{
-- int r;
-- unsigned long flags;
-- struct sockaddr *addr = p;
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-- struct zd_chip *chip = &mac->chip;
-- DECLARE_MAC_BUF(mac2);
--
-- if (!is_valid_ether_addr(addr->sa_data))
-- return -EADDRNOTAVAIL;
--
-- dev_dbg_f(zd_mac_dev(mac),
-- "Setting MAC to %s\n", print_mac(mac2, addr->sa_data));
--
-- if (netdev->flags & IFF_UP) {
-- r = zd_write_mac_addr(chip, addr->sa_data);
-- if (r)
-- return r;
-- }
--
-- spin_lock_irqsave(&mac->lock, flags);
-- memcpy(netdev->dev_addr, addr->sa_data, ETH_ALEN);
-- spin_unlock_irqrestore(&mac->lock, flags);
--
-- return 0;
--}
--
--static void set_multicast_hash_handler(struct work_struct *work)
--{
-- struct zd_mac *mac = container_of(work, struct zd_mac,
-- set_multicast_hash_work);
-- struct zd_mc_hash hash;
--
-- spin_lock_irq(&mac->lock);
-- hash = mac->multicast_hash;
-- spin_unlock_irq(&mac->lock);
+ if (!capable(CAP_SYS_ADMIN))
+@@ -194,51 +194,47 @@ dasd_ioctl_format(struct block_device *bdev, void __user *argp)
+ if (!argp)
+ return -EINVAL;
-- zd_chip_set_multicast_hash(&mac->chip, &hash);
-+ while ((skb = skb_dequeue(ack_wait_queue)))
-+ kfree_tx_skb(skb);
+- if (device->features & DASD_FEATURE_READONLY)
++ if (block->base->features & DASD_FEATURE_READONLY)
+ return -EROFS;
+ if (copy_from_user(&fdata, argp, sizeof(struct format_data_t)))
+ return -EFAULT;
+ if (bdev != bdev->bd_contains) {
+- DEV_MESSAGE(KERN_WARNING, device, "%s",
++ DEV_MESSAGE(KERN_WARNING, block->base, "%s",
+ "Cannot low-level format a partition");
+ return -EINVAL;
+ }
+- return dasd_format(device, &fdata);
++ return dasd_format(block, &fdata);
}
--void zd_mac_set_multicast_list(struct net_device *dev)
--{
-- struct zd_mac *mac = zd_netdev_mac(dev);
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- struct zd_mc_hash hash;
-- struct dev_mc_list *mc;
-- unsigned long flags;
-- DECLARE_MAC_BUF(mac2);
--
-- if (dev->flags & (IFF_PROMISC|IFF_ALLMULTI) ||
-- ieee->iw_mode == IW_MODE_MONITOR) {
-- zd_mc_add_all(&hash);
-- } else {
-- zd_mc_clear(&hash);
-- for (mc = dev->mc_list; mc; mc = mc->next) {
-- dev_dbg_f(zd_mac_dev(mac), "mc addr %s\n",
-- print_mac(mac2, mc->dmi_addr));
-- zd_mc_add_addr(&hash, mc->dmi_addr);
-- }
-- }
--
-- spin_lock_irqsave(&mac->lock, flags);
-- mac->multicast_hash = hash;
-- spin_unlock_irqrestore(&mac->lock, flags);
-- queue_work(zd_workqueue, &mac->set_multicast_hash_work);
--}
--
--int zd_mac_set_regdomain(struct zd_mac *mac, u8 regdomain)
--{
-- int r;
-- u8 channel;
--
-- ZD_ASSERT(!irqs_disabled());
-- spin_lock_irq(&mac->lock);
-- if (regdomain == 0) {
-- regdomain = mac->default_regdomain;
-- }
-- if (!zd_regdomain_supported(regdomain)) {
-- spin_unlock_irq(&mac->lock);
-- return -EINVAL;
-- }
-- mac->regdomain = regdomain;
-- channel = mac->requested_channel;
-- spin_unlock_irq(&mac->lock);
--
-- r = zd_geo_init(zd_mac_to_ieee80211(mac), regdomain);
-- if (r)
-- return r;
-- if (!zd_regdomain_supports_channel(regdomain, channel)) {
-- r = reset_channel(mac);
-- if (r)
-- return r;
-- }
-+/**
-+ * init_tx_skb_control_block - initializes skb control block
-+ * @skb: a &sk_buff pointer
-+ * @hw: pointer to the mac80211 device
-+ * @control: mac80211 tx control applying for the frame in @skb
-+ *
-+ * Initializes the control block of the skbuff to be transmitted.
-+ */
-+static int init_tx_skb_control_block(struct sk_buff *skb,
-+ struct ieee80211_hw *hw,
-+ struct ieee80211_tx_control *control)
-+{
-+ struct zd_tx_skb_control_block *cb =
-+ (struct zd_tx_skb_control_block *)skb->cb;
-+
-+ ZD_ASSERT(sizeof(*cb) <= sizeof(skb->cb));
-+ memset(cb, 0, sizeof(*cb));
-+	cb->hw = hw;
-+ cb->control = kmalloc(sizeof(*control), GFP_ATOMIC);
-+ if (cb->control == NULL)
-+ return -ENOMEM;
-+ memcpy(cb->control, control, sizeof(*control));
-
+ #ifdef CONFIG_DASD_PROFILE
+ /*
+ * Reset device profile information
+ */
+-static int
+-dasd_ioctl_reset_profile(struct dasd_device *device)
++static int dasd_ioctl_reset_profile(struct dasd_block *block)
+ {
+- memset(&device->profile, 0, sizeof (struct dasd_profile_info_t));
++ memset(&block->profile, 0, sizeof(struct dasd_profile_info_t));
return 0;
}
--u8 zd_mac_get_regdomain(struct zd_mac *mac)
--{
-- unsigned long flags;
-- u8 regdomain;
--
-- spin_lock_irqsave(&mac->lock, flags);
-- regdomain = mac->regdomain;
-- spin_unlock_irqrestore(&mac->lock, flags);
-- return regdomain;
--}
--
--/* Fallback to lowest rate, if rate is unknown. */
--static u8 rate_to_zd_rate(u8 rate)
--{
-- switch (rate) {
-- case IEEE80211_CCK_RATE_2MB:
-- return ZD_CCK_RATE_2M;
-- case IEEE80211_CCK_RATE_5MB:
-- return ZD_CCK_RATE_5_5M;
-- case IEEE80211_CCK_RATE_11MB:
-- return ZD_CCK_RATE_11M;
-- case IEEE80211_OFDM_RATE_6MB:
-- return ZD_OFDM_RATE_6M;
-- case IEEE80211_OFDM_RATE_9MB:
-- return ZD_OFDM_RATE_9M;
-- case IEEE80211_OFDM_RATE_12MB:
-- return ZD_OFDM_RATE_12M;
-- case IEEE80211_OFDM_RATE_18MB:
-- return ZD_OFDM_RATE_18M;
-- case IEEE80211_OFDM_RATE_24MB:
-- return ZD_OFDM_RATE_24M;
-- case IEEE80211_OFDM_RATE_36MB:
-- return ZD_OFDM_RATE_36M;
-- case IEEE80211_OFDM_RATE_48MB:
-- return ZD_OFDM_RATE_48M;
-- case IEEE80211_OFDM_RATE_54MB:
-- return ZD_OFDM_RATE_54M;
-- }
-- return ZD_CCK_RATE_1M;
--}
--
--static u16 rate_to_cr_rate(u8 rate)
--{
-- switch (rate) {
-- case IEEE80211_CCK_RATE_2MB:
-- return CR_RATE_1M;
-- case IEEE80211_CCK_RATE_5MB:
-- return CR_RATE_5_5M;
-- case IEEE80211_CCK_RATE_11MB:
-- return CR_RATE_11M;
-- case IEEE80211_OFDM_RATE_6MB:
-- return CR_RATE_6M;
-- case IEEE80211_OFDM_RATE_9MB:
-- return CR_RATE_9M;
-- case IEEE80211_OFDM_RATE_12MB:
-- return CR_RATE_12M;
-- case IEEE80211_OFDM_RATE_18MB:
-- return CR_RATE_18M;
-- case IEEE80211_OFDM_RATE_24MB:
-- return CR_RATE_24M;
-- case IEEE80211_OFDM_RATE_36MB:
-- return CR_RATE_36M;
-- case IEEE80211_OFDM_RATE_48MB:
-- return CR_RATE_48M;
-- case IEEE80211_OFDM_RATE_54MB:
-- return CR_RATE_54M;
-- }
-- return CR_RATE_1M;
--}
--
--static void try_enable_tx(struct zd_mac *mac)
--{
-- unsigned long flags;
--
-- spin_lock_irqsave(&mac->lock, flags);
-- if (mac->updating_rts_rate == 0 && mac->updating_basic_rates == 0)
-- netif_wake_queue(mac->netdev);
-- spin_unlock_irqrestore(&mac->lock, flags);
--}
--
--static void set_rts_cts_work(struct work_struct *work)
-+/**
-+ * tx_status - reports tx status of a packet if required
-+ * @hw: a &struct ieee80211_hw pointer
-+ * @skb: a sk_buff pointer
-+ * @status: the tx status of the packet without control information
-+ * @success: true for successful transmission of the frame
-+ *
-+ * This function calls ieee80211_tx_status_irqsafe() if required by the
-+ * control information. It copies the control information into the status
-+ * information.
-+ *
-+ * If no status information has been requested, the skb is freed.
-+ */
-+static void tx_status(struct ieee80211_hw *hw, struct sk_buff *skb,
-+ struct ieee80211_tx_status *status,
-+ bool success)
+ /*
+ * Return device profile information
+ */
+-static int
+-dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
++static int dasd_ioctl_read_profile(struct dasd_block *block, void __user *argp)
{
-- struct zd_mac *mac =
-- container_of(work, struct zd_mac, set_rts_cts_work.work);
-- unsigned long flags;
-- u8 rts_rate;
-- unsigned int short_preamble;
--
-- mutex_lock(&mac->chip.mutex);
--
-- spin_lock_irqsave(&mac->lock, flags);
-- mac->updating_rts_rate = 0;
-- rts_rate = mac->rts_rate;
-- short_preamble = mac->short_preamble;
-- spin_unlock_irqrestore(&mac->lock, flags);
--
-- zd_chip_set_rts_cts_rate_locked(&mac->chip, rts_rate, short_preamble);
-- mutex_unlock(&mac->chip.mutex);
-+ struct zd_tx_skb_control_block *cb = (struct zd_tx_skb_control_block *)
-+ skb->cb;
-
-- try_enable_tx(mac);
-+ ZD_ASSERT(cb->control != NULL);
-+ memcpy(&status->control, cb->control, sizeof(status->control));
-+ if (!success)
-+ status->excessive_retries = 1;
-+ clear_tx_skb_control_block(skb);
-+ ieee80211_tx_status_irqsafe(hw, skb, status);
+ if (dasd_profile_level == DASD_PROFILE_OFF)
+ return -EIO;
+- if (copy_to_user(argp, &device->profile,
+- sizeof (struct dasd_profile_info_t)))
++ if (copy_to_user(argp, &block->profile,
++ sizeof(struct dasd_profile_info_t)))
+ return -EFAULT;
+ return 0;
}
-
--static void set_basic_rates_work(struct work_struct *work)
-+/**
-+ * zd_mac_tx_failed - callback for failed frames
-+ * @hw: the mac80211 hardware device
-+ *
-+ * This function is called if a frame couldn't be successfully
-+ * transferred. The first frame from the tx queue will be selected and
-+ * reported as an error to the upper layers.
-+ */
-+void zd_mac_tx_failed(struct ieee80211_hw *hw)
+ #else
+-static int
+-dasd_ioctl_reset_profile(struct dasd_device *device)
++static int dasd_ioctl_reset_profile(struct dasd_block *block)
{
-- struct zd_mac *mac =
-- container_of(work, struct zd_mac, set_basic_rates_work.work);
-- unsigned long flags;
-- u16 basic_rates;
--
-- mutex_lock(&mac->chip.mutex);
--
-- spin_lock_irqsave(&mac->lock, flags);
-- mac->updating_basic_rates = 0;
-- basic_rates = mac->basic_rates;
-- spin_unlock_irqrestore(&mac->lock, flags);
--
-- zd_chip_set_basic_rates_locked(&mac->chip, basic_rates);
-- mutex_unlock(&mac->chip.mutex);
-+ struct sk_buff_head *q = &zd_hw_mac(hw)->ack_wait_queue;
-+ struct sk_buff *skb;
-+ struct ieee80211_tx_status status = {{0}};
+ return -ENOSYS;
+ }
-- try_enable_tx(mac);
-+ skb = skb_dequeue(q);
-+ if (skb == NULL)
-+ return;
-+ tx_status(hw, skb, &status, 0);
+-static int
+-dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
++static int dasd_ioctl_read_profile(struct dasd_block *block, void __user *argp)
+ {
+ return -ENOSYS;
}
+@@ -247,87 +243,88 @@ dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
+ /*
+ * Return dasd information. Used for BIODASDINFO and BIODASDINFO2.
+ */
+-static int
+-dasd_ioctl_information(struct dasd_device *device,
+- unsigned int cmd, void __user *argp)
++static int dasd_ioctl_information(struct dasd_block *block,
++ unsigned int cmd, void __user *argp)
+ {
+ struct dasd_information2_t *dasd_info;
+ unsigned long flags;
+ int rc;
++ struct dasd_device *base;
+ struct ccw_device *cdev;
+ struct ccw_dev_id dev_id;
--static void bssinfo_change(struct net_device *netdev, u32 changes)
--{
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-- struct ieee80211softmac_device *softmac = ieee80211_priv(netdev);
-- struct ieee80211softmac_bss_info *bssinfo = &softmac->bssinfo;
-- int need_set_rts_cts = 0;
-- int need_set_rates = 0;
-- u16 basic_rates;
-- unsigned long flags;
--
-- dev_dbg_f(zd_mac_dev(mac), "changes: %x\n", changes);
--
-- if (changes & IEEE80211SOFTMAC_BSSINFOCHG_SHORT_PREAMBLE) {
-- spin_lock_irqsave(&mac->lock, flags);
-- mac->short_preamble = bssinfo->short_preamble;
-- spin_unlock_irqrestore(&mac->lock, flags);
-- need_set_rts_cts = 1;
-- }
--
-- if (changes & IEEE80211SOFTMAC_BSSINFOCHG_RATES) {
-- /* Set RTS rate to highest available basic rate */
-- u8 hi_rate = ieee80211softmac_highest_supported_rate(softmac,
-- &bssinfo->supported_rates, 1);
-- hi_rate = rate_to_zd_rate(hi_rate);
--
-- spin_lock_irqsave(&mac->lock, flags);
-- if (hi_rate != mac->rts_rate) {
-- mac->rts_rate = hi_rate;
-- need_set_rts_cts = 1;
-- }
-- spin_unlock_irqrestore(&mac->lock, flags);
--
-- /* Set basic rates */
-- need_set_rates = 1;
-- if (bssinfo->supported_rates.count == 0) {
-- /* Allow the device to be flexible */
-- basic_rates = CR_RATES_80211B | CR_RATES_80211G;
-+/**
-+ * zd_mac_tx_to_dev - callback for USB layer
-+ * @skb: a &sk_buff pointer
-+ * @error: error value, 0 if transmission successful
-+ *
-+ * Informs the MAC layer that the frame has been successfully transferred to the
-+ * device. If an ACK is required and the transfer to the device has been
-+ * successful, the packets are put on the @ack_wait_queue with
-+ * the control set removed.
-+ */
-+void zd_mac_tx_to_dev(struct sk_buff *skb, int error)
-+{
-+ struct zd_tx_skb_control_block *cb =
-+ (struct zd_tx_skb_control_block *)skb->cb;
-+ struct ieee80211_hw *hw = cb->hw;
-+
-+ if (likely(cb->control)) {
-+ skb_pull(skb, sizeof(struct zd_ctrlset));
-+ if (unlikely(error ||
-+ (cb->control->flags & IEEE80211_TXCTL_NO_ACK)))
-+ {
-+ struct ieee80211_tx_status status = {{0}};
-+ tx_status(hw, skb, &status, !error);
- } else {
-- int i = 0;
-- basic_rates = 0;
--
-- for (i = 0; i < bssinfo->supported_rates.count; i++) {
-- u16 rate = bssinfo->supported_rates.rates[i];
-- if ((rate & IEEE80211_BASIC_RATE_MASK) == 0)
-- continue;
-+ struct sk_buff_head *q =
-+ &zd_hw_mac(hw)->ack_wait_queue;
+- if (!device->discipline->fill_info)
++ base = block->base;
++ if (!base->discipline->fill_info)
+ return -EINVAL;
-- rate &= ~IEEE80211_BASIC_RATE_MASK;
-- basic_rates |= rate_to_cr_rate(rate);
-- }
-+ skb_queue_tail(q, skb);
-+ while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS)
-+ zd_mac_tx_failed(hw);
+ dasd_info = kzalloc(sizeof(struct dasd_information2_t), GFP_KERNEL);
+ if (dasd_info == NULL)
+ return -ENOMEM;
+
+- rc = device->discipline->fill_info(device, dasd_info);
++ rc = base->discipline->fill_info(base, dasd_info);
+ if (rc) {
+ kfree(dasd_info);
+ return rc;
+ }
+
+- cdev = device->cdev;
++ cdev = base->cdev;
+ ccw_device_get_id(cdev, &dev_id);
+
+ dasd_info->devno = dev_id.devno;
+- dasd_info->schid = _ccw_device_get_subchannel_number(device->cdev);
++ dasd_info->schid = _ccw_device_get_subchannel_number(base->cdev);
+ dasd_info->cu_type = cdev->id.cu_type;
+ dasd_info->cu_model = cdev->id.cu_model;
+ dasd_info->dev_type = cdev->id.dev_type;
+ dasd_info->dev_model = cdev->id.dev_model;
+- dasd_info->status = device->state;
++ dasd_info->status = base->state;
+ /*
+ * The open_count is increased for every opener, that includes
+ * the blkdev_get in dasd_scan_partitions.
+ * This must be hidden from user-space.
+ */
+- dasd_info->open_count = atomic_read(&device->open_count);
+- if (!device->bdev)
++ dasd_info->open_count = atomic_read(&block->open_count);
++ if (!block->bdev)
+ dasd_info->open_count++;
+
+ /*
+ * check if device is really formatted
+ * LDL / CDL was returned by 'fill_info'
+ */
+- if ((device->state < DASD_STATE_READY) ||
+- (dasd_check_blocksize(device->bp_block)))
++ if ((base->state < DASD_STATE_READY) ||
++ (dasd_check_blocksize(block->bp_block)))
+ dasd_info->format = DASD_FORMAT_NONE;
+
+ dasd_info->features |=
+- ((device->features & DASD_FEATURE_READONLY) != 0);
++ ((base->features & DASD_FEATURE_READONLY) != 0);
+
+- if (device->discipline)
+- memcpy(dasd_info->type, device->discipline->name, 4);
++ if (base->discipline)
++ memcpy(dasd_info->type, base->discipline->name, 4);
+ else
+ memcpy(dasd_info->type, "none", 4);
+
+- if (device->request_queue->request_fn) {
++ if (block->request_queue->request_fn) {
+ struct list_head *l;
+ #ifdef DASD_EXTENDED_PROFILING
+ {
+ struct list_head *l;
+- spin_lock_irqsave(&device->lock, flags);
+- list_for_each(l, &device->request_queue->queue_head)
++ spin_lock_irqsave(&block->lock, flags);
++ list_for_each(l, &block->request_queue->queue_head)
+ dasd_info->req_queue_len++;
+- spin_unlock_irqrestore(&device->lock, flags);
++ spin_unlock_irqrestore(&block->lock, flags);
}
-- spin_lock_irqsave(&mac->lock, flags);
-- mac->basic_rates = basic_rates;
-- spin_unlock_irqrestore(&mac->lock, flags);
-- }
--
-- /* Schedule any changes we made above */
--
-- spin_lock_irqsave(&mac->lock, flags);
-- if (need_set_rts_cts && !mac->updating_rts_rate) {
-- mac->updating_rts_rate = 1;
-- netif_stop_queue(mac->netdev);
-- queue_delayed_work(zd_workqueue, &mac->set_rts_cts_work, 0);
-- }
-- if (need_set_rates && !mac->updating_basic_rates) {
-- mac->updating_basic_rates = 1;
-- netif_stop_queue(mac->netdev);
-- queue_delayed_work(zd_workqueue, &mac->set_basic_rates_work,
-- 0);
-- }
-- spin_unlock_irqrestore(&mac->lock, flags);
--}
--
--static void set_channel(struct net_device *netdev, u8 channel)
--{
-- struct zd_mac *mac = zd_netdev_mac(netdev);
--
-- dev_dbg_f(zd_mac_dev(mac), "channel %d\n", channel);
--
-- zd_chip_set_channel(&mac->chip, channel);
--}
--
--int zd_mac_request_channel(struct zd_mac *mac, u8 channel)
--{
-- unsigned long lock_flags;
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
--
-- if (ieee->iw_mode == IW_MODE_INFRA)
-- return -EPERM;
--
-- spin_lock_irqsave(&mac->lock, lock_flags);
-- if (!zd_regdomain_supports_channel(mac->regdomain, channel)) {
-- spin_unlock_irqrestore(&mac->lock, lock_flags);
-- return -EINVAL;
-- }
-- mac->requested_channel = channel;
-- spin_unlock_irqrestore(&mac->lock, lock_flags);
-- if (netif_running(mac->netdev))
-- return zd_chip_set_channel(&mac->chip, channel);
-- else
-- return 0;
--}
--
--u8 zd_mac_get_channel(struct zd_mac *mac)
--{
-- u8 channel = zd_chip_get_channel(&mac->chip);
--
-- dev_dbg_f(zd_mac_dev(mac), "channel %u\n", channel);
-- return channel;
--}
--
--int zd_mac_set_mode(struct zd_mac *mac, u32 mode)
--{
-- struct ieee80211_device *ieee;
--
-- switch (mode) {
-- case IW_MODE_AUTO:
-- case IW_MODE_ADHOC:
-- case IW_MODE_INFRA:
-- mac->netdev->type = ARPHRD_ETHER;
-- break;
-- case IW_MODE_MONITOR:
-- mac->netdev->type = ARPHRD_IEEE80211_RADIOTAP;
-- break;
-- default:
-- dev_dbg_f(zd_mac_dev(mac), "wrong mode %u\n", mode);
-- return -EINVAL;
-- }
--
-- ieee = zd_mac_to_ieee80211(mac);
-- ZD_ASSERT(!irqs_disabled());
-- spin_lock_irq(&ieee->lock);
-- ieee->iw_mode = mode;
-- spin_unlock_irq(&ieee->lock);
--
-- if (netif_running(mac->netdev)) {
-- int r = set_rx_filter(mac);
-- if (r)
-- return r;
-- return set_sniffer(mac);
-- }
--
-- return 0;
--}
--
--int zd_mac_get_mode(struct zd_mac *mac, u32 *mode)
--{
-- unsigned long flags;
-- struct ieee80211_device *ieee;
--
-- ieee = zd_mac_to_ieee80211(mac);
-- spin_lock_irqsave(&ieee->lock, flags);
-- *mode = ieee->iw_mode;
-- spin_unlock_irqrestore(&ieee->lock, flags);
-- return 0;
--}
--
--int zd_mac_get_range(struct zd_mac *mac, struct iw_range *range)
--{
-- int i;
-- const struct channel_range *channel_range;
-- u8 regdomain;
--
-- memset(range, 0, sizeof(*range));
--
-- /* FIXME: Not so important and depends on the mode. For 802.11g
-- * usually this value is used. It seems to be that Bit/s number is
-- * given here.
-- */
-- range->throughput = 27 * 1000 * 1000;
--
-- range->max_qual.qual = 100;
-- range->max_qual.level = 100;
--
-- /* FIXME: Needs still to be tuned. */
-- range->avg_qual.qual = 71;
-- range->avg_qual.level = 80;
--
-- /* FIXME: depends on standard? */
-- range->min_rts = 256;
-- range->max_rts = 2346;
--
-- range->min_frag = MIN_FRAG_THRESHOLD;
-- range->max_frag = MAX_FRAG_THRESHOLD;
--
-- range->max_encoding_tokens = WEP_KEYS;
-- range->num_encoding_sizes = 2;
-- range->encoding_size[0] = 5;
-- range->encoding_size[1] = WEP_KEY_LEN;
--
-- range->we_version_compiled = WIRELESS_EXT;
-- range->we_version_source = 20;
--
-- range->enc_capa = IW_ENC_CAPA_WPA | IW_ENC_CAPA_WPA2 |
-- IW_ENC_CAPA_CIPHER_TKIP | IW_ENC_CAPA_CIPHER_CCMP;
--
-- ZD_ASSERT(!irqs_disabled());
-- spin_lock_irq(&mac->lock);
-- regdomain = mac->regdomain;
-- spin_unlock_irq(&mac->lock);
-- channel_range = zd_channel_range(regdomain);
--
-- range->num_channels = channel_range->end - channel_range->start;
-- range->old_num_channels = range->num_channels;
-- range->num_frequency = range->num_channels;
-- range->old_num_frequency = range->num_frequency;
--
-- for (i = 0; i < range->num_frequency; i++) {
-- struct iw_freq *freq = &range->freq[i];
-- freq->i = channel_range->start + i;
-- zd_channel_to_freq(freq, freq->i);
-+ } else {
-+ kfree_tx_skb(skb);
+ #endif /* DASD_EXTENDED_PROFILING */
+- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+- list_for_each(l, &device->ccw_queue)
++ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
++ list_for_each(l, &base->ccw_queue)
+ dasd_info->chanq_len++;
+- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
++ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev),
+ flags);
}
--
-- return 0;
- }
- static int zd_calc_tx_length_us(u8 *service, u8 zd_rate, u16 tx_length)
+ rc = 0;
+ if (copy_to_user(argp, dasd_info,
+ ((cmd == (unsigned int) BIODASDINFO2) ?
+- sizeof (struct dasd_information2_t) :
+- sizeof (struct dasd_information_t))))
++ sizeof(struct dasd_information2_t) :
++ sizeof(struct dasd_information_t))))
+ rc = -EFAULT;
+ kfree(dasd_info);
+ return rc;
+@@ -339,7 +336,7 @@ dasd_ioctl_information(struct dasd_device *device,
+ static int
+ dasd_ioctl_set_ro(struct block_device *bdev, void __user *argp)
{
- /* ZD_PURE_RATE() must be used to remove the modulation type flag of
-- * the zd-rate values. */
-+ * the zd-rate values.
-+ */
- static const u8 rate_divisor[] = {
-- [ZD_PURE_RATE(ZD_CCK_RATE_1M)] = 1,
-- [ZD_PURE_RATE(ZD_CCK_RATE_2M)] = 2,
--
-- /* bits must be doubled */
-- [ZD_PURE_RATE(ZD_CCK_RATE_5_5M)] = 11,
--
-- [ZD_PURE_RATE(ZD_CCK_RATE_11M)] = 11,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_6M)] = 6,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_9M)] = 9,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_12M)] = 12,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_18M)] = 18,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_24M)] = 24,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_36M)] = 36,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_48M)] = 48,
-- [ZD_PURE_RATE(ZD_OFDM_RATE_54M)] = 54,
-+ [ZD_PURE_RATE(ZD_CCK_RATE_1M)] = 1,
-+ [ZD_PURE_RATE(ZD_CCK_RATE_2M)] = 2,
-+ /* Bits must be doubled. */
-+ [ZD_PURE_RATE(ZD_CCK_RATE_5_5M)] = 11,
-+ [ZD_PURE_RATE(ZD_CCK_RATE_11M)] = 11,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_6M)] = 6,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_9M)] = 9,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_12M)] = 12,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_18M)] = 18,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_24M)] = 24,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_36M)] = 36,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_48M)] = 48,
-+ [ZD_PURE_RATE(ZD_OFDM_RATE_54M)] = 54,
- };
+- struct dasd_device *device = bdev->bd_disk->private_data;
++ struct dasd_block *block = bdev->bd_disk->private_data;
+ int intval;
- u32 bits = (u32)tx_length * 8;
-@@ -764,34 +451,10 @@ static int zd_calc_tx_length_us(u8 *service, u8 zd_rate, u16 tx_length)
- return bits/divisor;
+ if (!capable(CAP_SYS_ADMIN))
+@@ -351,11 +348,10 @@ dasd_ioctl_set_ro(struct block_device *bdev, void __user *argp)
+ return -EFAULT;
+
+ set_disk_ro(bdev->bd_disk, intval);
+- return dasd_set_feature(device->cdev, DASD_FEATURE_READONLY, intval);
++ return dasd_set_feature(block->base->cdev, DASD_FEATURE_READONLY, intval);
}
--static void cs_set_modulation(struct zd_mac *mac, struct zd_ctrlset *cs,
-- struct ieee80211_hdr_4addr *hdr)
--{
-- struct ieee80211softmac_device *softmac = ieee80211_priv(mac->netdev);
-- u16 ftype = WLAN_FC_GET_TYPE(le16_to_cpu(hdr->frame_ctl));
-- u8 rate;
-- int is_mgt = (ftype == IEEE80211_FTYPE_MGMT) != 0;
-- int is_multicast = is_multicast_ether_addr(hdr->addr1);
-- int short_preamble = ieee80211softmac_short_preamble_ok(softmac,
-- is_multicast, is_mgt);
--
-- rate = ieee80211softmac_suggest_txrate(softmac, is_multicast, is_mgt);
-- cs->modulation = rate_to_zd_rate(rate);
--
-- /* Set short preamble bit when appropriate */
-- if (short_preamble && ZD_MODULATION_TYPE(cs->modulation) == ZD_CCK
-- && cs->modulation != ZD_CCK_RATE_1M)
-- cs->modulation |= ZD_CCK_PREA_SHORT;
--}
--
- static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
-- struct ieee80211_hdr_4addr *header)
-+ struct ieee80211_hdr *header, u32 flags)
+-static int
+-dasd_ioctl_readall_cmb(struct dasd_device *device, unsigned int cmd,
++static int dasd_ioctl_readall_cmb(struct dasd_block *block, unsigned int cmd,
+ unsigned long arg)
{
-- struct ieee80211softmac_device *softmac = ieee80211_priv(mac->netdev);
-- unsigned int tx_length = le16_to_cpu(cs->tx_length);
-- u16 fctl = le16_to_cpu(header->frame_ctl);
-- u16 ftype = WLAN_FC_GET_TYPE(fctl);
-- u16 stype = WLAN_FC_GET_STYPE(fctl);
-+ u16 fctl = le16_to_cpu(header->frame_control);
+ struct cmbdata __user *argp = (void __user *) arg;
+@@ -363,7 +359,7 @@ dasd_ioctl_readall_cmb(struct dasd_device *device, unsigned int cmd,
+ struct cmbdata data;
+ int ret;
- /*
- * CONTROL TODO:
-@@ -802,7 +465,7 @@ static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
- cs->control = 0;
+- ret = cmf_readall(device->cdev, &data);
++ ret = cmf_readall(block->base->cdev, &data);
+ if (!ret && copy_to_user(argp, &data, min(size, sizeof(*argp))))
+ return -EFAULT;
+ return ret;
+@@ -374,10 +370,10 @@ dasd_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+ {
+ struct block_device *bdev = inode->i_bdev;
+- struct dasd_device *device = bdev->bd_disk->private_data;
++ struct dasd_block *block = bdev->bd_disk->private_data;
+ void __user *argp = (void __user *)arg;
- /* First fragment */
-- if (WLAN_GET_SEQ_FRAG(le16_to_cpu(header->seq_ctl)) == 0)
-+ if (flags & IEEE80211_TXCTL_FIRST_FRAGMENT)
- cs->control |= ZD_CS_NEED_RANDOM_BACKOFF;
+- if (!device)
++ if (!block)
+ return -ENODEV;
- /* Multicast */
-@@ -810,54 +473,37 @@ static void cs_set_control(struct zd_mac *mac, struct zd_ctrlset *cs,
- cs->control |= ZD_CS_MULTICAST;
+ if ((_IOC_DIR(cmd) != _IOC_NONE) && !arg) {
+@@ -391,33 +387,33 @@ dasd_ioctl(struct inode *inode, struct file *file,
+ case BIODASDENABLE:
+ return dasd_ioctl_enable(bdev);
+ case BIODASDQUIESCE:
+- return dasd_ioctl_quiesce(device);
++ return dasd_ioctl_quiesce(block);
+ case BIODASDRESUME:
+- return dasd_ioctl_resume(device);
++ return dasd_ioctl_resume(block);
+ case BIODASDFMT:
+ return dasd_ioctl_format(bdev, argp);
+ case BIODASDINFO:
+- return dasd_ioctl_information(device, cmd, argp);
++ return dasd_ioctl_information(block, cmd, argp);
+ case BIODASDINFO2:
+- return dasd_ioctl_information(device, cmd, argp);
++ return dasd_ioctl_information(block, cmd, argp);
+ case BIODASDPRRD:
+- return dasd_ioctl_read_profile(device, argp);
++ return dasd_ioctl_read_profile(block, argp);
+ case BIODASDPRRST:
+- return dasd_ioctl_reset_profile(device);
++ return dasd_ioctl_reset_profile(block);
+ case BLKROSET:
+ return dasd_ioctl_set_ro(bdev, argp);
+ case DASDAPIVER:
+ return dasd_ioctl_api_version(argp);
+ case BIODASDCMFENABLE:
+- return enable_cmf(device->cdev);
++ return enable_cmf(block->base->cdev);
+ case BIODASDCMFDISABLE:
+- return disable_cmf(device->cdev);
++ return disable_cmf(block->base->cdev);
+ case BIODASDREADALLCMB:
+- return dasd_ioctl_readall_cmb(device, cmd, arg);
++ return dasd_ioctl_readall_cmb(block, cmd, arg);
+ default:
+ /* if the discipline has an ioctl method try it. */
+- if (device->discipline->ioctl) {
+- int rval = device->discipline->ioctl(device, cmd, argp);
++ if (block->base->discipline->ioctl) {
++ int rval = block->base->discipline->ioctl(block, cmd, argp);
+ if (rval != -ENOIOCTLCMD)
+ return rval;
+ }
+diff --git a/drivers/s390/block/dasd_proc.c b/drivers/s390/block/dasd_proc.c
+index ac7e8ef..28a86f0 100644
+--- a/drivers/s390/block/dasd_proc.c
++++ b/drivers/s390/block/dasd_proc.c
+@@ -54,11 +54,16 @@ static int
+ dasd_devices_show(struct seq_file *m, void *v)
+ {
+ struct dasd_device *device;
++ struct dasd_block *block;
+ char *substr;
- /* PS-POLL */
-- if (ftype == IEEE80211_FTYPE_CTL && stype == IEEE80211_STYPE_PSPOLL)
-+ if ((fctl & (IEEE80211_FCTL_FTYPE|IEEE80211_FCTL_STYPE)) ==
-+ (IEEE80211_FTYPE_CTL|IEEE80211_STYPE_PSPOLL))
- cs->control |= ZD_CS_PS_POLL_FRAME;
+ device = dasd_device_from_devindex((unsigned long) v - 1);
+ if (IS_ERR(device))
+ return 0;
++ if (device->block)
++ block = device->block;
++ else
++ return 0;
+ /* Print device number. */
+ seq_printf(m, "%s", device->cdev->dev.bus_id);
+ /* Print discipline string. */
+@@ -67,14 +72,14 @@ dasd_devices_show(struct seq_file *m, void *v)
+ else
+ seq_printf(m, "(none)");
+ /* Print kdev. */
+- if (device->gdp)
++ if (block->gdp)
+ seq_printf(m, " at (%3d:%6d)",
+- device->gdp->major, device->gdp->first_minor);
++ block->gdp->major, block->gdp->first_minor);
+ else
+ seq_printf(m, " at (???:??????)");
+ /* Print device name. */
+- if (device->gdp)
+- seq_printf(m, " is %-8s", device->gdp->disk_name);
++ if (block->gdp)
++ seq_printf(m, " is %-8s", block->gdp->disk_name);
+ else
+ seq_printf(m, " is ????????");
+ /* Print devices features. */
+@@ -100,14 +105,14 @@ dasd_devices_show(struct seq_file *m, void *v)
+ case DASD_STATE_READY:
+ case DASD_STATE_ONLINE:
+ seq_printf(m, "active ");
+- if (dasd_check_blocksize(device->bp_block))
++ if (dasd_check_blocksize(block->bp_block))
+ seq_printf(m, "n/f ");
+ else
+ seq_printf(m,
+ "at blocksize: %d, %ld blocks, %ld MB",
+- device->bp_block, device->blocks,
+- ((device->bp_block >> 9) *
+- device->blocks) >> 11);
++ block->bp_block, block->blocks,
++ ((block->bp_block >> 9) *
++ block->blocks) >> 11);
+ break;
+ default:
+ seq_printf(m, "no stat");
+@@ -137,7 +142,7 @@ static void dasd_devices_stop(struct seq_file *m, void *v)
+ {
+ }
-- /* Unicast data frames over the threshold should have RTS */
-- if (!is_multicast_ether_addr(header->addr1) &&
-- ftype != IEEE80211_FTYPE_MGMT &&
-- tx_length > zd_netdev_ieee80211(mac->netdev)->rts)
-+ if (flags & IEEE80211_TXCTL_USE_RTS_CTS)
- cs->control |= ZD_CS_RTS;
+-static struct seq_operations dasd_devices_seq_ops = {
++static const struct seq_operations dasd_devices_seq_ops = {
+ .start = dasd_devices_start,
+ .next = dasd_devices_next,
+ .stop = dasd_devices_stop,
+diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
+index 15a5789..7779bfc 100644
+--- a/drivers/s390/block/dcssblk.c
++++ b/drivers/s390/block/dcssblk.c
+@@ -82,7 +82,7 @@ struct dcssblk_dev_info {
+ struct request_queue *dcssblk_queue;
+ };
-- /* Use CTS-to-self protection if required */
-- if (ZD_MODULATION_TYPE(cs->modulation) == ZD_OFDM &&
-- ieee80211softmac_protection_needed(softmac)) {
-- /* FIXME: avoid sending RTS *and* self-CTS, is that correct? */
-- cs->control &= ~ZD_CS_RTS;
-+ if (flags & IEEE80211_TXCTL_USE_CTS_PROTECT)
- cs->control |= ZD_CS_SELF_CTS;
-- }
+-static struct list_head dcssblk_devices = LIST_HEAD_INIT(dcssblk_devices);
++static LIST_HEAD(dcssblk_devices);
+ static struct rw_semaphore dcssblk_devices_sem;
- /* FIXME: Management frame? */
- }
+ /*
+diff --git a/drivers/s390/char/Makefile b/drivers/s390/char/Makefile
+index 130de19..7e73e39 100644
+--- a/drivers/s390/char/Makefile
++++ b/drivers/s390/char/Makefile
+@@ -3,7 +3,7 @@
+ #
- static int fill_ctrlset(struct zd_mac *mac,
-- struct ieee80211_txb *txb,
-- int frag_num)
-+ struct sk_buff *skb,
-+ struct ieee80211_tx_control *control)
- {
- int r;
-- struct sk_buff *skb = txb->fragments[frag_num];
-- struct ieee80211_hdr_4addr *hdr =
-- (struct ieee80211_hdr_4addr *) skb->data;
-- unsigned int frag_len = skb->len + IEEE80211_FCS_LEN;
-- unsigned int next_frag_len;
-+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
-+ unsigned int frag_len = skb->len + FCS_LEN;
- unsigned int packet_length;
- struct zd_ctrlset *cs = (struct zd_ctrlset *)
- skb_push(skb, sizeof(struct zd_ctrlset));
+ obj-y += ctrlchar.o keyboard.o defkeymap.o sclp.o sclp_rw.o sclp_quiesce.o \
+- sclp_info.o sclp_config.o sclp_chp.o
++ sclp_cmd.o sclp_config.o sclp_cpi_sys.o
-- if (frag_num+1 < txb->nr_frags) {
-- next_frag_len = txb->fragments[frag_num+1]->len +
-- IEEE80211_FCS_LEN;
-- } else {
-- next_frag_len = 0;
-- }
- ZD_ASSERT(frag_len <= 0xffff);
-- ZD_ASSERT(next_frag_len <= 0xffff);
+ obj-$(CONFIG_TN3270) += raw3270.o
+ obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
+diff --git a/drivers/s390/char/monwriter.c b/drivers/s390/char/monwriter.c
+index 20442fb..a86c053 100644
+--- a/drivers/s390/char/monwriter.c
++++ b/drivers/s390/char/monwriter.c
+@@ -295,7 +295,7 @@ module_init(mon_init);
+ module_exit(mon_exit);
-- cs_set_modulation(mac, cs, hdr);
-+ cs->modulation = control->tx_rate;
+ module_param_named(max_bufs, mon_max_bufs, int, 0644);
+-MODULE_PARM_DESC(max_bufs, "Maximum number of sample monitor data buffers"
++MODULE_PARM_DESC(max_bufs, "Maximum number of sample monitor data buffers "
+ "that can be active at one time");
- cs->tx_length = cpu_to_le16(frag_len);
+ MODULE_AUTHOR("Melissa Howland <Melissa.Howland at us.ibm.com>");
+diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c
+index 8d1c64a..0d98f1f 100644
+--- a/drivers/s390/char/raw3270.c
++++ b/drivers/s390/char/raw3270.c
+@@ -66,7 +66,7 @@ struct raw3270 {
+ static DEFINE_MUTEX(raw3270_mutex);
-- cs_set_control(mac, cs, hdr);
-+ cs_set_control(mac, cs, hdr, control->flags);
+ /* List of 3270 devices. */
+-static struct list_head raw3270_devices = LIST_HEAD_INIT(raw3270_devices);
++static LIST_HEAD(raw3270_devices);
- packet_length = frag_len + sizeof(struct zd_ctrlset) + 10;
- ZD_ASSERT(packet_length <= 0xffff);
-@@ -886,419 +532,417 @@ static int fill_ctrlset(struct zd_mac *mac,
- if (r < 0)
- return r;
- cs->current_length = cpu_to_le16(r);
--
-- if (next_frag_len == 0) {
-- cs->next_frame_length = 0;
-- } else {
-- r = zd_calc_tx_length_us(NULL, ZD_RATE(cs->modulation),
-- next_frag_len);
-- if (r < 0)
-- return r;
-- cs->next_frame_length = cpu_to_le16(r);
-- }
-+ cs->next_frame_length = 0;
+ /*
+ * Flag to indicate if the driver has been registered. Some operations
+@@ -1210,7 +1210,7 @@ struct raw3270_notifier {
+ void (*notifier)(int, int);
+ };
- return 0;
- }
+-static struct list_head raw3270_notifier = LIST_HEAD_INIT(raw3270_notifier);
++static LIST_HEAD(raw3270_notifier);
--static int zd_mac_tx(struct zd_mac *mac, struct ieee80211_txb *txb, int pri)
-+/**
-+ * zd_op_tx - transmits a network frame to the device
-+ *
-+ * @dev: mac80211 hardware device
-+ * @skb: socket buffer
-+ * @control: the control structure
-+ *
-+ * This function transmit an IEEE 802.11 network frame to the device. The
-+ * control block of the skbuff will be initialized. If necessary the incoming
-+ * mac80211 queues will be stopped.
-+ */
-+static int zd_op_tx(struct ieee80211_hw *hw, struct sk_buff *skb,
-+ struct ieee80211_tx_control *control)
+ int raw3270_register_notifier(void (*notifier)(int, int))
{
-- int i, r;
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ int r;
+diff --git a/drivers/s390/char/sclp.h b/drivers/s390/char/sclp.h
+index c7318a1..aa8186d 100644
+--- a/drivers/s390/char/sclp.h
++++ b/drivers/s390/char/sclp.h
+@@ -56,8 +56,6 @@ typedef unsigned int sclp_cmdw_t;
+ #define SCLP_CMDW_READ_EVENT_DATA 0x00770005
+ #define SCLP_CMDW_WRITE_EVENT_DATA 0x00760005
+ #define SCLP_CMDW_WRITE_EVENT_MASK 0x00780005
+-#define SCLP_CMDW_READ_SCP_INFO 0x00020001
+-#define SCLP_CMDW_READ_SCP_INFO_FORCED 0x00120001
-- for (i = 0; i < txb->nr_frags; i++) {
-- struct sk_buff *skb = txb->fragments[i];
-+ r = fill_ctrlset(mac, skb, control);
-+ if (r)
-+ return r;
+ #define GDS_ID_MDSMU 0x1310
+ #define GDS_ID_MDSROUTEINFO 0x1311
+@@ -83,6 +81,8 @@ extern u64 sclp_facilities;
-- r = fill_ctrlset(mac, txb, i);
-- if (r) {
-- ieee->stats.tx_dropped++;
-- return r;
-- }
-- r = zd_usb_tx(&mac->chip.usb, skb->data, skb->len);
-- if (r) {
-- ieee->stats.tx_dropped++;
-- return r;
-- }
-+ r = init_tx_skb_control_block(skb, hw, control);
-+ if (r)
-+ return r;
-+ r = zd_usb_tx(&mac->chip.usb, skb);
-+ if (r) {
-+ clear_tx_skb_control_block(skb);
-+ return r;
- }
--
-- /* FIXME: shouldn't this be handled by the upper layers? */
-- mac->netdev->trans_start = jiffies;
--
-- ieee80211_txb_free(txb);
- return 0;
- }
+ #define SCLP_HAS_CHP_INFO (sclp_facilities & 0x8000000000000000ULL)
+ #define SCLP_HAS_CHP_RECONFIG (sclp_facilities & 0x2000000000000000ULL)
++#define SCLP_HAS_CPU_INFO (sclp_facilities & 0x0800000000000000ULL)
++#define SCLP_HAS_CPU_RECONFIG (sclp_facilities & 0x0400000000000000ULL)
--struct zd_rt_hdr {
-- struct ieee80211_radiotap_header rt_hdr;
-- u8 rt_flags;
-- u8 rt_rate;
-- u16 rt_channel;
-- u16 rt_chbitmask;
--} __attribute__((packed));
+ struct gds_subvector {
+ u8 length;
+diff --git a/drivers/s390/char/sclp_chp.c b/drivers/s390/char/sclp_chp.c
+deleted file mode 100644
+index c68f5e7..0000000
+--- a/drivers/s390/char/sclp_chp.c
++++ /dev/null
+@@ -1,200 +0,0 @@
+-/*
+- * drivers/s390/char/sclp_chp.c
+- *
+- * Copyright IBM Corp. 2007
+- * Author(s): Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
+- */
-
--static void fill_rt_header(void *buffer, struct zd_mac *mac,
-- const struct ieee80211_rx_stats *stats,
-- const struct rx_status *status)
--{
-- struct zd_rt_hdr *hdr = buffer;
+-#include <linux/types.h>
+-#include <linux/gfp.h>
+-#include <linux/errno.h>
+-#include <linux/completion.h>
+-#include <asm/sclp.h>
+-#include <asm/chpid.h>
-
-- hdr->rt_hdr.it_version = PKTHDR_RADIOTAP_VERSION;
-- hdr->rt_hdr.it_pad = 0;
-- hdr->rt_hdr.it_len = cpu_to_le16(sizeof(struct zd_rt_hdr));
-- hdr->rt_hdr.it_present = cpu_to_le32((1 << IEEE80211_RADIOTAP_FLAGS) |
-- (1 << IEEE80211_RADIOTAP_CHANNEL) |
-- (1 << IEEE80211_RADIOTAP_RATE));
+-#include "sclp.h"
-
-- hdr->rt_flags = 0;
-- if (status->decryption_type & (ZD_RX_WEP64|ZD_RX_WEP128|ZD_RX_WEP256))
-- hdr->rt_flags |= IEEE80211_RADIOTAP_F_WEP;
+-#define TAG "sclp_chp: "
-
-- hdr->rt_rate = stats->rate / 5;
+-#define SCLP_CMDW_CONFIGURE_CHANNEL_PATH 0x000f0001
+-#define SCLP_CMDW_DECONFIGURE_CHANNEL_PATH 0x000e0001
+-#define SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION 0x00030001
-
-- /* FIXME: 802.11a */
-- hdr->rt_channel = cpu_to_le16(ieee80211chan2mhz(
-- _zd_chip_get_channel(&mac->chip)));
-- hdr->rt_chbitmask = cpu_to_le16(IEEE80211_CHAN_2GHZ |
-- ((status->frame_status & ZD_RX_FRAME_MODULATION_MASK) ==
-- ZD_RX_OFDM ? IEEE80211_CHAN_OFDM : IEEE80211_CHAN_CCK));
+-static inline sclp_cmdw_t get_configure_cmdw(struct chp_id chpid)
+-{
+- return SCLP_CMDW_CONFIGURE_CHANNEL_PATH | chpid.id << 8;
-}
-
--/* Returns 1 if the data packet is for us and 0 otherwise. */
--static int is_data_packet_for_us(struct ieee80211_device *ieee,
-- struct ieee80211_hdr_4addr *hdr)
+-static inline sclp_cmdw_t get_deconfigure_cmdw(struct chp_id chpid)
-{
-- struct net_device *netdev = ieee->dev;
-- u16 fc = le16_to_cpu(hdr->frame_ctl);
--
-- ZD_ASSERT(WLAN_FC_GET_TYPE(fc) == IEEE80211_FTYPE_DATA);
+- return SCLP_CMDW_DECONFIGURE_CHANNEL_PATH | chpid.id << 8;
+-}
-
-- switch (ieee->iw_mode) {
-- case IW_MODE_ADHOC:
-- if ((fc & (IEEE80211_FCTL_TODS|IEEE80211_FCTL_FROMDS)) != 0 ||
-- compare_ether_addr(hdr->addr3, ieee->bssid) != 0)
-- return 0;
-- break;
-- case IW_MODE_AUTO:
-- case IW_MODE_INFRA:
-- if ((fc & (IEEE80211_FCTL_TODS|IEEE80211_FCTL_FROMDS)) !=
-- IEEE80211_FCTL_FROMDS ||
-- compare_ether_addr(hdr->addr2, ieee->bssid) != 0)
-- return 0;
-- break;
-- default:
-- ZD_ASSERT(ieee->iw_mode != IW_MODE_MONITOR);
-- return 0;
-- }
+-static void chp_callback(struct sclp_req *req, void *data)
+-{
+- struct completion *completion = data;
-
-- return compare_ether_addr(hdr->addr1, netdev->dev_addr) == 0 ||
-- (is_multicast_ether_addr(hdr->addr1) &&
-- compare_ether_addr(hdr->addr3, netdev->dev_addr) != 0) ||
-- (netdev->flags & IFF_PROMISC);
+- complete(completion);
-}
-
--/* Filters received packets. The function returns 1 if the packet should be
-- * forwarded to ieee80211_rx(). If the packet should be ignored the function
-- * returns 0. If an invalid packet is found the function returns -EINVAL.
-+/**
-+ * filter_ack - filters incoming packets for acknowledgements
-+ * @dev: the mac80211 device
-+ * @rx_hdr: received header
-+ * @stats: the status for the received packet
- *
-- * The function calls ieee80211_rx_mgt() directly.
-+ * This functions looks for ACK packets and tries to match them with the
-+ * frames in the tx queue. If a match is found the frame will be dequeued and
-+ * the upper layers is informed about the successful transmission. If
-+ * mac80211 queues have been stopped and the number of frames still to be
-+ * transmitted is low the queues will be opened again.
- *
-- * It has been based on ieee80211_rx_any.
-+ * Returns 1 if the frame was an ACK, 0 if it was ignored.
- */
--static int filter_rx(struct ieee80211_device *ieee,
-- const u8 *buffer, unsigned int length,
-- struct ieee80211_rx_stats *stats)
-+static int filter_ack(struct ieee80211_hw *hw, struct ieee80211_hdr *rx_hdr,
-+ struct ieee80211_rx_status *stats)
- {
-- struct ieee80211_hdr_4addr *hdr;
-- u16 fc;
+-struct chp_cfg_sccb {
+- struct sccb_header header;
+- u8 ccm;
+- u8 reserved[6];
+- u8 cssid;
+-} __attribute__((packed));
-
-- if (ieee->iw_mode == IW_MODE_MONITOR)
-- return 1;
+-struct chp_cfg_data {
+- struct chp_cfg_sccb sccb;
+- struct sclp_req req;
+- struct completion completion;
+-} __attribute__((packed));
-
-- hdr = (struct ieee80211_hdr_4addr *)buffer;
-- fc = le16_to_cpu(hdr->frame_ctl);
-- if ((fc & IEEE80211_FCTL_VERS) != 0)
-- return -EINVAL;
-+ u16 fc = le16_to_cpu(rx_hdr->frame_control);
-+ struct sk_buff *skb;
-+ struct sk_buff_head *q;
-+ unsigned long flags;
-
-- switch (WLAN_FC_GET_TYPE(fc)) {
-- case IEEE80211_FTYPE_MGMT:
-- if (length < sizeof(struct ieee80211_hdr_3addr))
-- return -EINVAL;
-- ieee80211_rx_mgt(ieee, hdr, stats);
-- return 0;
-- case IEEE80211_FTYPE_CTL:
-+ if ((fc & (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE)) !=
-+ (IEEE80211_FTYPE_CTL | IEEE80211_STYPE_ACK))
- return 0;
-- case IEEE80211_FTYPE_DATA:
-- /* Ignore invalid short buffers */
-- if (length < sizeof(struct ieee80211_hdr_3addr))
-- return -EINVAL;
-- return is_data_packet_for_us(ieee, hdr);
-- }
-
-- return -EINVAL;
-+ q = &zd_hw_mac(hw)->ack_wait_queue;
-+ spin_lock_irqsave(&q->lock, flags);
-+ for (skb = q->next; skb != (struct sk_buff *)q; skb = skb->next) {
-+ struct ieee80211_hdr *tx_hdr;
-+
-+ tx_hdr = (struct ieee80211_hdr *)skb->data;
-+ if (likely(!compare_ether_addr(tx_hdr->addr2, rx_hdr->addr1)))
-+ {
-+ struct ieee80211_tx_status status = {{0}};
-+ status.flags = IEEE80211_TX_STATUS_ACK;
-+ status.ack_signal = stats->ssi;
-+ __skb_unlink(skb, q);
-+ tx_status(hw, skb, &status, 1);
-+ goto out;
-+ }
-+ }
-+out:
-+ spin_unlock_irqrestore(&q->lock, flags);
-+ return 1;
- }
-
--static void update_qual_rssi(struct zd_mac *mac,
-- const u8 *buffer, unsigned int length,
-- u8 qual_percent, u8 rssi_percent)
-+int zd_mac_rx(struct ieee80211_hw *hw, const u8 *buffer, unsigned int length)
- {
-- unsigned long flags;
-- struct ieee80211_hdr_3addr *hdr;
-- int i;
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ struct ieee80211_rx_status stats;
-+ const struct rx_status *status;
-+ struct sk_buff *skb;
-+ int bad_frame = 0;
-+ u16 fc;
-+ bool is_qos, is_4addr, need_padding;
-
-- hdr = (struct ieee80211_hdr_3addr *)buffer;
-- if (length < offsetof(struct ieee80211_hdr_3addr, addr3))
-- return;
-- if (compare_ether_addr(hdr->addr2, zd_mac_to_ieee80211(mac)->bssid) != 0)
-- return;
-+ if (length < ZD_PLCP_HEADER_SIZE + 10 /* IEEE80211_1ADDR_LEN */ +
-+ FCS_LEN + sizeof(struct rx_status))
-+ return -EINVAL;
-
-- spin_lock_irqsave(&mac->lock, flags);
-- i = mac->stats_count % ZD_MAC_STATS_BUFFER_SIZE;
-- mac->qual_buffer[i] = qual_percent;
-- mac->rssi_buffer[i] = rssi_percent;
-- mac->stats_count++;
-- spin_unlock_irqrestore(&mac->lock, flags);
--}
-+ memset(&stats, 0, sizeof(stats));
-
--static int fill_rx_stats(struct ieee80211_rx_stats *stats,
-- const struct rx_status **pstatus,
-- struct zd_mac *mac,
-- const u8 *buffer, unsigned int length)
--{
-- const struct rx_status *status;
-+ /* Note about pass_failed_fcs and pass_ctrl access below:
-+ * mac locking intentionally omitted here, as this is the only unlocked
-+ * reader and the only writer is configure_filter. Plus, if there were
-+ * any races accessing these variables, it wouldn't really matter.
-+ * If mac80211 ever provides a way for us to access filter flags
-+ * from outside configure_filter, we could improve on this. Also, this
-+ * situation may change once we implement some kind of DMA-into-skb
-+ * RX path. */
-
-- *pstatus = status = (struct rx_status *)
-+ /* Caller has to ensure that length >= sizeof(struct rx_status). */
-+ status = (struct rx_status *)
- (buffer + (length - sizeof(struct rx_status)));
- if (status->frame_status & ZD_RX_ERROR) {
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- ieee->stats.rx_errors++;
-- if (status->frame_status & ZD_RX_TIMEOUT_ERROR)
-- ieee->stats.rx_missed_errors++;
-- else if (status->frame_status & ZD_RX_FIFO_OVERRUN_ERROR)
-- ieee->stats.rx_fifo_errors++;
-- else if (status->frame_status & ZD_RX_DECRYPTION_ERROR)
-- ieee->ieee_stats.rx_discards_undecryptable++;
-- else if (status->frame_status & ZD_RX_CRC32_ERROR) {
-- ieee->stats.rx_crc_errors++;
-- ieee->ieee_stats.rx_fcs_errors++;
-+ if (mac->pass_failed_fcs &&
-+ (status->frame_status & ZD_RX_CRC32_ERROR)) {
-+ stats.flag |= RX_FLAG_FAILED_FCS_CRC;
-+ bad_frame = 1;
-+ } else {
-+ return -EINVAL;
- }
-- else if (status->frame_status & ZD_RX_CRC16_ERROR)
-- ieee->stats.rx_crc_errors++;
-- return -EINVAL;
- }
-
-- memset(stats, 0, sizeof(struct ieee80211_rx_stats));
-- stats->len = length - (ZD_PLCP_HEADER_SIZE + IEEE80211_FCS_LEN +
-- + sizeof(struct rx_status));
-- /* FIXME: 802.11a */
-- stats->freq = IEEE80211_24GHZ_BAND;
-- stats->received_channel = _zd_chip_get_channel(&mac->chip);
-- stats->rssi = zd_rx_strength_percent(status->signal_strength);
-- stats->signal = zd_rx_qual_percent(buffer,
-+ stats.channel = _zd_chip_get_channel(&mac->chip);
-+ stats.freq = zd_channels[stats.channel - 1].freq;
-+ stats.phymode = MODE_IEEE80211G;
-+ stats.ssi = status->signal_strength;
-+ stats.signal = zd_rx_qual_percent(buffer,
- length - sizeof(struct rx_status),
- status);
-- stats->mask = IEEE80211_STATMASK_RSSI | IEEE80211_STATMASK_SIGNAL;
-- stats->rate = zd_rx_rate(buffer, status);
-- if (stats->rate)
-- stats->mask |= IEEE80211_STATMASK_RATE;
-+ stats.rate = zd_rx_rate(buffer, status);
-+
-+ length -= ZD_PLCP_HEADER_SIZE + sizeof(struct rx_status);
-+ buffer += ZD_PLCP_HEADER_SIZE;
-+
-+ /* Except for bad frames, filter each frame to see if it is an ACK, in
-+ * which case our internal TX tracking is updated. Normally we then
-+ * bail here as there's no need to pass ACKs on up to the stack, but
-+ * there is also the case where the stack has requested us to pass
-+ * control frames on up (pass_ctrl) which we must consider. */
-+ if (!bad_frame &&
-+ filter_ack(hw, (struct ieee80211_hdr *)buffer, &stats)
-+ && !mac->pass_ctrl)
-+ return 0;
-
-- return 0;
--}
-+ fc = le16_to_cpu(*((__le16 *) buffer));
-
--static void zd_mac_rx(struct zd_mac *mac, struct sk_buff *skb)
+-static int do_configure(sclp_cmdw_t cmd)
-{
-- int r;
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- struct ieee80211_rx_stats stats;
-- const struct rx_status *status;
-+ is_qos = ((fc & IEEE80211_FCTL_FTYPE) == IEEE80211_FTYPE_DATA) &&
-+ ((fc & IEEE80211_FCTL_STYPE) == IEEE80211_STYPE_QOS_DATA);
-+ is_4addr = (fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
-+ (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS);
-+ need_padding = is_qos ^ is_4addr;
-
-- if (skb->len < ZD_PLCP_HEADER_SIZE + IEEE80211_1ADDR_LEN +
-- IEEE80211_FCS_LEN + sizeof(struct rx_status))
-- {
-- ieee->stats.rx_errors++;
-- ieee->stats.rx_length_errors++;
-- goto free_skb;
-+ skb = dev_alloc_skb(length + (need_padding ? 2 : 0));
-+ if (skb == NULL)
-+ return -ENOMEM;
-+ if (need_padding) {
-+ /* Make sure the the payload data is 4 byte aligned. */
-+ skb_reserve(skb, 2);
- }
-
-- r = fill_rx_stats(&stats, &status, mac, skb->data, skb->len);
-- if (r) {
-- /* Only packets with rx errors are included here.
-- * The error stats have already been set in fill_rx_stats.
-- */
-- goto free_skb;
+- struct chp_cfg_data *data;
+- int rc;
+-
+- if (!SCLP_HAS_CHP_RECONFIG)
+- return -EOPNOTSUPP;
+- /* Prepare sccb. */
+- data = (struct chp_cfg_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+- if (!data)
+- return -ENOMEM;
+- data->sccb.header.length = sizeof(struct chp_cfg_sccb);
+- data->req.command = cmd;
+- data->req.sccb = &(data->sccb);
+- data->req.status = SCLP_REQ_FILLED;
+- data->req.callback = chp_callback;
+- data->req.callback_data = &(data->completion);
+- init_completion(&data->completion);
+-
+- /* Perform sclp request. */
+- rc = sclp_add_request(&(data->req));
+- if (rc)
+- goto out;
+- wait_for_completion(&data->completion);
+-
+- /* Check response .*/
+- if (data->req.status != SCLP_REQ_DONE) {
+- printk(KERN_WARNING TAG "configure channel-path request failed "
+- "(status=0x%02x)\n", data->req.status);
+- rc = -EIO;
+- goto out;
+- }
+- switch (data->sccb.header.response_code) {
+- case 0x0020:
+- case 0x0120:
+- case 0x0440:
+- case 0x0450:
+- break;
+- default:
+- printk(KERN_WARNING TAG "configure channel-path failed "
+- "(cmd=0x%08x, response=0x%04x)\n", cmd,
+- data->sccb.header.response_code);
+- rc = -EIO;
+- break;
- }
-+ memcpy(skb_put(skb, length), buffer, length);
-
-- __skb_pull(skb, ZD_PLCP_HEADER_SIZE);
-- __skb_trim(skb, skb->len -
-- (IEEE80211_FCS_LEN + sizeof(struct rx_status)));
-+ ieee80211_rx_irqsafe(hw, skb, &stats);
-+ return 0;
-+}
-
-- ZD_ASSERT(IS_ALIGNED((unsigned long)skb->data, 4));
-+static int zd_op_add_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_if_init_conf *conf)
-+{
-+ struct zd_mac *mac = zd_hw_mac(hw);
-
-- update_qual_rssi(mac, skb->data, skb->len, stats.signal,
-- status->signal_strength);
-+ /* using IEEE80211_IF_TYPE_INVALID to indicate no mode selected */
-+ if (mac->type != IEEE80211_IF_TYPE_INVALID)
-+ return -EOPNOTSUPP;
-
-- r = filter_rx(ieee, skb->data, skb->len, &stats);
-- if (r <= 0) {
-- if (r < 0) {
-- ieee->stats.rx_errors++;
-- dev_dbg_f(zd_mac_dev(mac), "Error in packet.\n");
-- }
-- goto free_skb;
-+ switch (conf->type) {
-+ case IEEE80211_IF_TYPE_MNTR:
-+ case IEEE80211_IF_TYPE_STA:
-+ mac->type = conf->type;
-+ break;
-+ default:
-+ return -EOPNOTSUPP;
- }
-
-- if (ieee->iw_mode == IW_MODE_MONITOR)
-- fill_rt_header(skb_push(skb, sizeof(struct zd_rt_hdr)), mac,
-- &stats, status);
+-out:
+- free_page((unsigned long) data);
-
-- r = ieee80211_rx(ieee, skb, &stats);
-- if (r)
-- return;
--free_skb:
-- /* We are always in a soft irq. */
-- dev_kfree_skb(skb);
-+ return zd_write_mac_addr(&mac->chip, conf->mac_addr);
- }
-
--static void do_rx(unsigned long mac_ptr)
-+static void zd_op_remove_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_if_init_conf *conf)
- {
-- struct zd_mac *mac = (struct zd_mac *)mac_ptr;
-- struct sk_buff *skb;
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ mac->type = IEEE80211_IF_TYPE_INVALID;
-+ zd_write_mac_addr(&mac->chip, NULL);
-+}
-
-- while ((skb = skb_dequeue(&mac->rx_queue)) != NULL)
-- zd_mac_rx(mac, skb);
-+static int zd_op_config(struct ieee80211_hw *hw, struct ieee80211_conf *conf)
-+{
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ return zd_chip_set_channel(&mac->chip, conf->channel);
- }
-
--int zd_mac_rx_irq(struct zd_mac *mac, const u8 *buffer, unsigned int length)
-+static int zd_op_config_interface(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_if_conf *conf)
- {
-- struct sk_buff *skb;
-- unsigned int reserved =
-- ALIGN(max_t(unsigned int,
-- sizeof(struct zd_rt_hdr), ZD_PLCP_HEADER_SIZE), 4) -
-- ZD_PLCP_HEADER_SIZE;
+- return rc;
+-}
-
-- skb = dev_alloc_skb(reserved + length);
-- if (!skb) {
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- dev_warn(zd_mac_dev(mac), "Could not allocate skb.\n");
-- ieee->stats.rx_dropped++;
+-/**
+- * sclp_chp_configure - perform configure channel-path sclp command
+- * @chpid: channel-path ID
+- *
+- * Perform configure channel-path command sclp command for specified chpid.
+- * Return 0 after command successfully finished, non-zero otherwise.
+- */
+-int sclp_chp_configure(struct chp_id chpid)
+-{
+- return do_configure(get_configure_cmdw(chpid));
+-}
+-
+-/**
+- * sclp_chp_deconfigure - perform deconfigure channel-path sclp command
+- * @chpid: channel-path ID
+- *
+- * Perform deconfigure channel-path command sclp command for specified chpid
+- * and wait for completion. On success return 0. Return non-zero otherwise.
+- */
+-int sclp_chp_deconfigure(struct chp_id chpid)
+-{
+- return do_configure(get_deconfigure_cmdw(chpid));
+-}
+-
+-struct chp_info_sccb {
+- struct sccb_header header;
+- u8 recognized[SCLP_CHP_INFO_MASK_SIZE];
+- u8 standby[SCLP_CHP_INFO_MASK_SIZE];
+- u8 configured[SCLP_CHP_INFO_MASK_SIZE];
+- u8 ccm;
+- u8 reserved[6];
+- u8 cssid;
+-} __attribute__((packed));
+-
+-struct chp_info_data {
+- struct chp_info_sccb sccb;
+- struct sclp_req req;
+- struct completion completion;
+-} __attribute__((packed));
+-
+-/**
+- * sclp_chp_read_info - perform read channel-path information sclp command
+- * @info: resulting channel-path information data
+- *
+- * Perform read channel-path information sclp command and wait for completion.
+- * On success, store channel-path information in @info and return 0. Return
+- * non-zero otherwise.
+- */
+-int sclp_chp_read_info(struct sclp_chp_info *info)
+-{
+- struct chp_info_data *data;
+- int rc;
+-
+- if (!SCLP_HAS_CHP_INFO)
+- return -EOPNOTSUPP;
+- /* Prepare sccb. */
+- data = (struct chp_info_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+- if (!data)
- return -ENOMEM;
+- data->sccb.header.length = sizeof(struct chp_info_sccb);
+- data->req.command = SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION;
+- data->req.sccb = &(data->sccb);
+- data->req.status = SCLP_REQ_FILLED;
+- data->req.callback = chp_callback;
+- data->req.callback_data = &(data->completion);
+- init_completion(&data->completion);
+-
+- /* Perform sclp request. */
+- rc = sclp_add_request(&(data->req));
+- if (rc)
+- goto out;
+- wait_for_completion(&data->completion);
+-
+- /* Check response .*/
+- if (data->req.status != SCLP_REQ_DONE) {
+- printk(KERN_WARNING TAG "read channel-path info request failed "
+- "(status=0x%02x)\n", data->req.status);
+- rc = -EIO;
+- goto out;
- }
-- skb_reserve(skb, reserved);
-- memcpy(__skb_put(skb, length), buffer, length);
-- skb_queue_tail(&mac->rx_queue, skb);
-- tasklet_schedule(&mac->rx_tasklet);
-+ struct zd_mac *mac = zd_hw_mac(hw);
+- if (data->sccb.header.response_code != 0x0010) {
+- printk(KERN_WARNING TAG "read channel-path info failed "
+- "(response=0x%04x)\n", data->sccb.header.response_code);
+- rc = -EIO;
+- goto out;
+- }
+- memcpy(info->recognized, data->sccb.recognized,
+- SCLP_CHP_INFO_MASK_SIZE);
+- memcpy(info->standby, data->sccb.standby,
+- SCLP_CHP_INFO_MASK_SIZE);
+- memcpy(info->configured, data->sccb.configured,
+- SCLP_CHP_INFO_MASK_SIZE);
+-out:
+- free_page((unsigned long) data);
+-
+- return rc;
+-}
+diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
+new file mode 100644
+index 0000000..b5c2339
+--- /dev/null
++++ b/drivers/s390/char/sclp_cmd.c
+@@ -0,0 +1,398 @@
++/*
++ * drivers/s390/char/sclp_cmd.c
++ *
++ * Copyright IBM Corp. 2007
++ * Author(s): Heiko Carstens <heiko.carstens at de.ibm.com>,
++ * Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
++ */
+
-+ spin_lock_irq(&mac->lock);
-+ mac->associated = is_valid_ether_addr(conf->bssid);
-+ spin_unlock_irq(&mac->lock);
++#include <linux/completion.h>
++#include <linux/init.h>
++#include <linux/errno.h>
++#include <linux/slab.h>
++#include <linux/string.h>
++#include <asm/chpid.h>
++#include <asm/sclp.h>
++#include "sclp.h"
+
-+ /* TODO: do hardware bssid filtering */
- return 0;
- }
-
--static int netdev_tx(struct ieee80211_txb *txb, struct net_device *netdev,
-- int pri)
-+static void set_multicast_hash_handler(struct work_struct *work)
- {
-- return zd_mac_tx(zd_netdev_mac(netdev), txb, pri);
-+ struct zd_mac *mac =
-+ container_of(work, struct zd_mac, set_multicast_hash_work);
-+ struct zd_mc_hash hash;
++#define TAG "sclp_cmd: "
+
-+ spin_lock_irq(&mac->lock);
-+ hash = mac->multicast_hash;
-+ spin_unlock_irq(&mac->lock);
++#define SCLP_CMDW_READ_SCP_INFO 0x00020001
++#define SCLP_CMDW_READ_SCP_INFO_FORCED 0x00120001
+
-+ zd_chip_set_multicast_hash(&mac->chip, &hash);
- }
-
--static void set_security(struct net_device *netdev,
-- struct ieee80211_security *sec)
-+static void set_rx_filter_handler(struct work_struct *work)
- {
-- struct ieee80211_device *ieee = zd_netdev_ieee80211(netdev);
-- struct ieee80211_security *secinfo = &ieee->sec;
-- int keyidx;
--
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)), "\n");
--
-- for (keyidx = 0; keyidx<WEP_KEYS; keyidx++)
-- if (sec->flags & (1<<keyidx)) {
-- secinfo->encode_alg[keyidx] = sec->encode_alg[keyidx];
-- secinfo->key_sizes[keyidx] = sec->key_sizes[keyidx];
-- memcpy(secinfo->keys[keyidx], sec->keys[keyidx],
-- SCM_KEY_LEN);
-- }
-+ struct zd_mac *mac =
-+ container_of(work, struct zd_mac, set_rx_filter_work);
-+ int r;
-
-- if (sec->flags & SEC_ACTIVE_KEY) {
-- secinfo->active_key = sec->active_key;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .active_key = %d\n", sec->active_key);
-- }
-- if (sec->flags & SEC_UNICAST_GROUP) {
-- secinfo->unicast_uses_group = sec->unicast_uses_group;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .unicast_uses_group = %d\n",
-- sec->unicast_uses_group);
-- }
-- if (sec->flags & SEC_LEVEL) {
-- secinfo->level = sec->level;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .level = %d\n", sec->level);
-- }
-- if (sec->flags & SEC_ENABLED) {
-- secinfo->enabled = sec->enabled;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .enabled = %d\n", sec->enabled);
-- }
-- if (sec->flags & SEC_ENCRYPT) {
-- secinfo->encrypt = sec->encrypt;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .encrypt = %d\n", sec->encrypt);
-- }
-- if (sec->flags & SEC_AUTH_MODE) {
-- secinfo->auth_mode = sec->auth_mode;
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)),
-- " .auth_mode = %d\n", sec->auth_mode);
-+ dev_dbg_f(zd_mac_dev(mac), "\n");
-+ r = set_rx_filter(mac);
-+ if (r)
-+ dev_err(zd_mac_dev(mac), "set_rx_filter_handler error %d\n", r);
++struct read_info_sccb {
++ struct sccb_header header; /* 0-7 */
++ u16 rnmax; /* 8-9 */
++ u8 rnsize; /* 10 */
++ u8 _reserved0[24 - 11]; /* 11-15 */
++ u8 loadparm[8]; /* 24-31 */
++ u8 _reserved1[48 - 32]; /* 32-47 */
++ u64 facilities; /* 48-55 */
++ u8 _reserved2[84 - 56]; /* 56-83 */
++ u8 fac84; /* 84 */
++ u8 _reserved3[91 - 85]; /* 85-90 */
++ u8 flags; /* 91 */
++ u8 _reserved4[100 - 92]; /* 92-99 */
++ u32 rnsize2; /* 100-103 */
++ u64 rnmax2; /* 104-111 */
++ u8 _reserved5[4096 - 112]; /* 112-4095 */
++} __attribute__((packed, aligned(PAGE_SIZE)));
++
++static struct read_info_sccb __initdata early_read_info_sccb;
++static int __initdata early_read_info_sccb_valid;
++
++u64 sclp_facilities;
++static u8 sclp_fac84;
++
++static int __init sclp_cmd_sync_early(sclp_cmdw_t cmd, void *sccb)
++{
++ int rc;
++
++ __ctl_set_bit(0, 9);
++ rc = sclp_service_call(cmd, sccb);
++ if (rc)
++ goto out;
++ __load_psw_mask(PSW_BASE_BITS | PSW_MASK_EXT |
++ PSW_MASK_WAIT | PSW_DEFAULT_KEY);
++ local_irq_disable();
++out:
++ /* Contents of the sccb might have changed. */
++ barrier();
++ __ctl_clear_bit(0, 9);
++ return rc;
+}
+
-+#define SUPPORTED_FIF_FLAGS \
-+ (FIF_PROMISC_IN_BSS | FIF_ALLMULTI | FIF_FCSFAIL | FIF_CONTROL | \
-+ FIF_OTHER_BSS)
-+static void zd_op_configure_filter(struct ieee80211_hw *hw,
-+ unsigned int changed_flags,
-+ unsigned int *new_flags,
-+ int mc_count, struct dev_mc_list *mclist)
++void __init sclp_read_info_early(void)
+{
-+ struct zd_mc_hash hash;
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ unsigned long flags;
++ int rc;
+ int i;
++ struct read_info_sccb *sccb;
++ sclp_cmdw_t commands[] = {SCLP_CMDW_READ_SCP_INFO_FORCED,
++ SCLP_CMDW_READ_SCP_INFO};
+
-+ /* Only deal with supported flags */
-+ changed_flags &= SUPPORTED_FIF_FLAGS;
-+ *new_flags &= SUPPORTED_FIF_FLAGS;
++ sccb = &early_read_info_sccb;
++ for (i = 0; i < ARRAY_SIZE(commands); i++) {
++ do {
++ memset(sccb, 0, sizeof(*sccb));
++ sccb->header.length = sizeof(*sccb);
++ sccb->header.control_mask[2] = 0x80;
++ rc = sclp_cmd_sync_early(commands[i], sccb);
++ } while (rc == -EBUSY);
+
-+ /* changed_flags is always populated but this driver
-+ * doesn't support all FIF flags so its possible we don't
-+ * need to do anything */
-+ if (!changed_flags)
++ if (rc)
++ break;
++ if (sccb->header.response_code == 0x10) {
++ early_read_info_sccb_valid = 1;
++ break;
++ }
++ if (sccb->header.response_code != 0x1f0)
++ break;
++ }
++}
++
++void __init sclp_facilities_detect(void)
++{
++ if (!early_read_info_sccb_valid)
+ return;
++ sclp_facilities = early_read_info_sccb.facilities;
++ sclp_fac84 = early_read_info_sccb.fac84;
++}
+
-+ if (*new_flags & (FIF_PROMISC_IN_BSS | FIF_ALLMULTI)) {
-+ zd_mc_add_all(&hash);
-+ } else {
-+ DECLARE_MAC_BUF(macbuf);
++unsigned long long __init sclp_memory_detect(void)
++{
++ unsigned long long memsize;
++ struct read_info_sccb *sccb;
+
-+ zd_mc_clear(&hash);
-+ for (i = 0; i < mc_count; i++) {
-+ if (!mclist)
-+ break;
-+ dev_dbg_f(zd_mac_dev(mac), "mc addr %s\n",
-+ print_mac(macbuf, mclist->dmi_addr));
-+ zd_mc_add_addr(&hash, mclist->dmi_addr);
-+ mclist = mclist->next;
-+ }
- }
++ if (!early_read_info_sccb_valid)
++ return 0;
++ sccb = &early_read_info_sccb;
++ if (sccb->rnsize)
++ memsize = sccb->rnsize << 20;
++ else
++ memsize = sccb->rnsize2 << 20;
++ if (sccb->rnmax)
++ memsize *= sccb->rnmax;
++ else
++ memsize *= sccb->rnmax2;
++ return memsize;
++}
+
-+ spin_lock_irqsave(&mac->lock, flags);
-+ mac->pass_failed_fcs = !!(*new_flags & FIF_FCSFAIL);
-+ mac->pass_ctrl = !!(*new_flags & FIF_CONTROL);
-+ mac->multicast_hash = hash;
-+ spin_unlock_irqrestore(&mac->lock, flags);
-+ queue_work(zd_workqueue, &mac->set_multicast_hash_work);
++/*
++ * This function will be called after sclp_memory_detect(), which gets called
++ * early from early.c code. Therefore the sccb should have valid contents.
++ */
++void __init sclp_get_ipl_info(struct sclp_ipl_info *info)
++{
++ struct read_info_sccb *sccb;
+
-+ if (changed_flags & FIF_CONTROL)
-+ queue_work(zd_workqueue, &mac->set_rx_filter_work);
++ if (!early_read_info_sccb_valid)
++ return;
++ sccb = &early_read_info_sccb;
++ info->is_valid = 1;
++ if (sccb->flags & 0x2)
++ info->has_dump = 1;
++ memcpy(&info->loadparm, &sccb->loadparm, LOADPARM_LEN);
++}
+
-+ /* no handling required for FIF_OTHER_BSS as we don't currently
-+ * do BSSID filtering */
-+ /* FIXME: in future it would be nice to enable the probe response
-+ * filter (so that the driver doesn't see them) until
-+ * FIF_BCN_PRBRESP_PROMISC is set. however due to atomicity here, we'd
-+ * have to schedule work to enable prbresp reception, which might
-+ * happen too late. For now we'll just listen and forward them all the
-+ * time. */
- }
-
--static void ieee_init(struct ieee80211_device *ieee)
-+static void set_rts_cts_work(struct work_struct *work)
- {
-- ieee->mode = IEEE_B | IEEE_G;
-- ieee->freq_band = IEEE80211_24GHZ_BAND;
-- ieee->modulation = IEEE80211_OFDM_MODULATION | IEEE80211_CCK_MODULATION;
-- ieee->tx_headroom = sizeof(struct zd_ctrlset);
-- ieee->set_security = set_security;
-- ieee->hard_start_xmit = netdev_tx;
--
-- /* Software encryption/decryption for now */
-- ieee->host_build_iv = 0;
-- ieee->host_encrypt = 1;
-- ieee->host_decrypt = 1;
--
-- /* FIXME: default to managed mode, until ieee80211 and zd1211rw can
-- * correctly support AUTO */
-- ieee->iw_mode = IW_MODE_INFRA;
-+ struct zd_mac *mac =
-+ container_of(work, struct zd_mac, set_rts_cts_work);
-+ unsigned long flags;
-+ unsigned int short_preamble;
++static void sclp_sync_callback(struct sclp_req *req, void *data)
++{
++ struct completion *completion = data;
+
-+ mutex_lock(&mac->chip.mutex);
++ complete(completion);
++}
+
-+ spin_lock_irqsave(&mac->lock, flags);
-+ mac->updating_rts_rate = 0;
-+ short_preamble = mac->short_preamble;
-+ spin_unlock_irqrestore(&mac->lock, flags);
++static int do_sync_request(sclp_cmdw_t cmd, void *sccb)
++{
++ struct completion completion;
++ struct sclp_req *request;
++ int rc;
+
-+ zd_chip_set_rts_cts_rate_locked(&mac->chip, short_preamble);
-+ mutex_unlock(&mac->chip.mutex);
- }
-
--static void softmac_init(struct ieee80211softmac_device *sm)
-+static void zd_op_bss_info_changed(struct ieee80211_hw *hw,
-+ struct ieee80211_vif *vif,
-+ struct ieee80211_bss_conf *bss_conf,
-+ u32 changes)
- {
-- sm->set_channel = set_channel;
-- sm->bssinfo_change = bssinfo_change;
-+ struct zd_mac *mac = zd_hw_mac(hw);
-+ unsigned long flags;
++ request = kzalloc(sizeof(*request), GFP_KERNEL);
++ if (!request)
++ return -ENOMEM;
++ request->command = cmd;
++ request->sccb = sccb;
++ request->status = SCLP_REQ_FILLED;
++ request->callback = sclp_sync_callback;
++ request->callback_data = &completion;
++ init_completion(&completion);
+
-+ dev_dbg_f(zd_mac_dev(mac), "changes: %x\n", changes);
++ /* Perform sclp request. */
++ rc = sclp_add_request(request);
++ if (rc)
++ goto out;
++ wait_for_completion(&completion);
+
-+ if (changes & BSS_CHANGED_ERP_PREAMBLE) {
-+ spin_lock_irqsave(&mac->lock, flags);
-+ mac->short_preamble = bss_conf->use_short_preamble;
-+ if (!mac->updating_rts_rate) {
-+ mac->updating_rts_rate = 1;
-+ /* FIXME: should disable TX here, until work has
-+ * completed and RTS_CTS reg is updated */
-+ queue_work(zd_workqueue, &mac->set_rts_cts_work);
-+ }
-+ spin_unlock_irqrestore(&mac->lock, flags);
++ /* Check response. */
++ if (request->status != SCLP_REQ_DONE) {
++ printk(KERN_WARNING TAG "sync request failed "
++ "(cmd=0x%08x, status=0x%02x)\n", cmd, request->status);
++ rc = -EIO;
+ }
- }
-
--struct iw_statistics *zd_mac_get_wireless_stats(struct net_device *ndev)
-+static const struct ieee80211_ops zd_ops = {
-+ .tx = zd_op_tx,
-+ .start = zd_op_start,
-+ .stop = zd_op_stop,
-+ .add_interface = zd_op_add_interface,
-+ .remove_interface = zd_op_remove_interface,
-+ .config = zd_op_config,
-+ .config_interface = zd_op_config_interface,
-+ .configure_filter = zd_op_configure_filter,
-+ .bss_info_changed = zd_op_bss_info_changed,
-+};
++out:
++ kfree(request);
++ return rc;
++}
+
-+struct ieee80211_hw *zd_mac_alloc_hw(struct usb_interface *intf)
- {
-- struct zd_mac *mac = zd_netdev_mac(ndev);
-- struct iw_statistics *iw_stats = &mac->iw_stats;
-- unsigned int i, count, qual_total, rssi_total;
-+ struct zd_mac *mac;
-+ struct ieee80211_hw *hw;
-+ int i;
-
-- memset(iw_stats, 0, sizeof(struct iw_statistics));
-- /* We are not setting the status, because ieee->state is not updated
-- * at all and this driver doesn't track authentication state.
-- */
-- spin_lock_irq(&mac->lock);
-- count = mac->stats_count < ZD_MAC_STATS_BUFFER_SIZE ?
-- mac->stats_count : ZD_MAC_STATS_BUFFER_SIZE;
-- qual_total = rssi_total = 0;
-- for (i = 0; i < count; i++) {
-- qual_total += mac->qual_buffer[i];
-- rssi_total += mac->rssi_buffer[i];
-+ hw = ieee80211_alloc_hw(sizeof(struct zd_mac), &zd_ops);
-+ if (!hw) {
-+ dev_dbg_f(&intf->dev, "out of memory\n");
-+ return NULL;
- }
-- spin_unlock_irq(&mac->lock);
-- iw_stats->qual.updated = IW_QUAL_NOISE_INVALID;
-- if (count > 0) {
-- iw_stats->qual.qual = qual_total / count;
-- iw_stats->qual.level = rssi_total / count;
-- iw_stats->qual.updated |=
-- IW_QUAL_QUAL_UPDATED|IW_QUAL_LEVEL_UPDATED;
-- } else {
-- iw_stats->qual.updated |=
-- IW_QUAL_QUAL_INVALID|IW_QUAL_LEVEL_INVALID;
++/*
++ * CPU configuration related functions.
++ */
+
-+ mac = zd_hw_mac(hw);
++#define SCLP_CMDW_READ_CPU_INFO 0x00010001
++#define SCLP_CMDW_CONFIGURE_CPU 0x00110001
++#define SCLP_CMDW_DECONFIGURE_CPU 0x00100001
+
-+ memset(mac, 0, sizeof(*mac));
-+ spin_lock_init(&mac->lock);
-+ mac->hw = hw;
++struct read_cpu_info_sccb {
++ struct sccb_header header;
++ u16 nr_configured;
++ u16 offset_configured;
++ u16 nr_standby;
++ u16 offset_standby;
++ u8 reserved[4096 - 16];
++} __attribute__((packed, aligned(PAGE_SIZE)));
+
-+ mac->type = IEEE80211_IF_TYPE_INVALID;
++static void sclp_fill_cpu_info(struct sclp_cpu_info *info,
++ struct read_cpu_info_sccb *sccb)
++{
++ char *page = (char *) sccb;
+
-+ memcpy(mac->channels, zd_channels, sizeof(zd_channels));
-+ memcpy(mac->rates, zd_rates, sizeof(zd_rates));
-+ mac->modes[0].mode = MODE_IEEE80211G;
-+ mac->modes[0].num_rates = ARRAY_SIZE(zd_rates);
-+ mac->modes[0].rates = mac->rates;
-+ mac->modes[0].num_channels = ARRAY_SIZE(zd_channels);
-+ mac->modes[0].channels = mac->channels;
-+ mac->modes[1].mode = MODE_IEEE80211B;
-+ mac->modes[1].num_rates = 4;
-+ mac->modes[1].rates = mac->rates;
-+ mac->modes[1].num_channels = ARRAY_SIZE(zd_channels);
-+ mac->modes[1].channels = mac->channels;
++ memset(info, 0, sizeof(*info));
++ info->configured = sccb->nr_configured;
++ info->standby = sccb->nr_standby;
++ info->combined = sccb->nr_configured + sccb->nr_standby;
++ info->has_cpu_type = sclp_fac84 & 0x1;
++ memcpy(&info->cpu, page + sccb->offset_configured,
++ info->combined * sizeof(struct sclp_cpu_entry));
++}
+
-+ hw->flags = IEEE80211_HW_RX_INCLUDES_FCS |
-+ IEEE80211_HW_DEFAULT_REG_DOMAIN_CONFIGURED;
-+ hw->max_rssi = 100;
-+ hw->max_signal = 100;
++int sclp_get_cpu_info(struct sclp_cpu_info *info)
++{
++ int rc;
++ struct read_cpu_info_sccb *sccb;
+
-+ hw->queues = 1;
-+ hw->extra_tx_headroom = sizeof(struct zd_ctrlset);
++ if (!SCLP_HAS_CPU_INFO)
++ return -EOPNOTSUPP;
++ sccb = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!sccb)
++ return -ENOMEM;
++ sccb->header.length = sizeof(*sccb);
++ rc = do_sync_request(SCLP_CMDW_READ_CPU_INFO, sccb);
++ if (rc)
++ goto out;
++ if (sccb->header.response_code != 0x0010) {
++ printk(KERN_WARNING TAG "readcpuinfo failed "
++ "(response=0x%04x)\n", sccb->header.response_code);
++ rc = -EIO;
++ goto out;
++ }
++ sclp_fill_cpu_info(info, sccb);
++out:
++ free_page((unsigned long) sccb);
++ return rc;
++}
+
-+ skb_queue_head_init(&mac->ack_wait_queue);
++struct cpu_configure_sccb {
++ struct sccb_header header;
++} __attribute__((packed, aligned(8)));
+
-+ for (i = 0; i < 2; i++) {
-+ if (ieee80211_register_hwmode(hw, &mac->modes[i])) {
-+ dev_dbg_f(&intf->dev, "cannot register hwmode\n");
-+ ieee80211_free_hw(hw);
-+ return NULL;
-+ }
- }
-- /* TODO: update counter */
-- return iw_stats;
++static int do_cpu_configure(sclp_cmdw_t cmd)
++{
++ struct cpu_configure_sccb *sccb;
++ int rc;
+
-+ zd_chip_init(&mac->chip, hw, intf);
-+ housekeeping_init(mac);
-+ INIT_WORK(&mac->set_multicast_hash_work, set_multicast_hash_handler);
-+ INIT_WORK(&mac->set_rts_cts_work, set_rts_cts_work);
-+ INIT_WORK(&mac->set_rx_filter_work, set_rx_filter_handler);
++ if (!SCLP_HAS_CPU_RECONFIG)
++ return -EOPNOTSUPP;
++ /*
++ * This is not going to cross a page boundary since we force
++ * kmalloc to have a minimum alignment of 8 bytes on s390.
++ */
++ sccb = kzalloc(sizeof(*sccb), GFP_KERNEL | GFP_DMA);
++ if (!sccb)
++ return -ENOMEM;
++ sccb->header.length = sizeof(*sccb);
++ rc = do_sync_request(cmd, sccb);
++ if (rc)
++ goto out;
++ switch (sccb->header.response_code) {
++ case 0x0020:
++ case 0x0120:
++ break;
++ default:
++ printk(KERN_WARNING TAG "configure cpu failed (cmd=0x%08x, "
++ "response=0x%04x)\n", cmd, sccb->header.response_code);
++ rc = -EIO;
++ break;
++ }
++out:
++ kfree(sccb);
++ return rc;
++}
+
-+ SET_IEEE80211_DEV(hw, &intf->dev);
-+ return hw;
- }
-
- #define LINK_LED_WORK_DELAY HZ
-@@ -1308,18 +952,17 @@ static void link_led_handler(struct work_struct *work)
- struct zd_mac *mac =
- container_of(work, struct zd_mac, housekeeping.link_led_work.work);
- struct zd_chip *chip = &mac->chip;
-- struct ieee80211softmac_device *sm = ieee80211_priv(mac->netdev);
- int is_associated;
- int r;
-
- spin_lock_irq(&mac->lock);
-- is_associated = sm->associnfo.associated != 0;
-+ is_associated = mac->associated;
- spin_unlock_irq(&mac->lock);
-
- r = zd_chip_control_leds(chip,
- is_associated ? LED_ASSOCIATED : LED_SCANNING);
- if (r)
-- dev_err(zd_mac_dev(mac), "zd_chip_control_leds error %d\n", r);
-+ dev_dbg_f(zd_mac_dev(mac), "zd_chip_control_leds error %d\n", r);
-
- queue_delayed_work(zd_workqueue, &mac->housekeeping.link_led_work,
- LINK_LED_WORK_DELAY);
-diff --git a/drivers/net/wireless/zd1211rw/zd_mac.h b/drivers/net/wireless/zd1211rw/zd_mac.h
-index 1b15bde..2dde108 100644
---- a/drivers/net/wireless/zd1211rw/zd_mac.h
-+++ b/drivers/net/wireless/zd1211rw/zd_mac.h
-@@ -1,4 +1,7 @@
--/* zd_mac.h
-+/* ZD1211 USB-WLAN driver for Linux
++int sclp_cpu_configure(u8 cpu)
++{
++ return do_cpu_configure(SCLP_CMDW_CONFIGURE_CPU | cpu << 8);
++}
++
++int sclp_cpu_deconfigure(u8 cpu)
++{
++ return do_cpu_configure(SCLP_CMDW_DECONFIGURE_CPU | cpu << 8);
++}
++
++/*
++ * Channel path configuration related functions.
++ */
++
++#define SCLP_CMDW_CONFIGURE_CHPATH 0x000f0001
++#define SCLP_CMDW_DECONFIGURE_CHPATH 0x000e0001
++#define SCLP_CMDW_READ_CHPATH_INFORMATION 0x00030001
++
++struct chp_cfg_sccb {
++ struct sccb_header header;
++ u8 ccm;
++ u8 reserved[6];
++ u8 cssid;
++} __attribute__((packed));
++
++static int do_chp_configure(sclp_cmdw_t cmd)
++{
++ struct chp_cfg_sccb *sccb;
++ int rc;
++
++ if (!SCLP_HAS_CHP_RECONFIG)
++ return -EOPNOTSUPP;
++ /* Prepare sccb. */
++ sccb = (struct chp_cfg_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!sccb)
++ return -ENOMEM;
++ sccb->header.length = sizeof(*sccb);
++ rc = do_sync_request(cmd, sccb);
++ if (rc)
++ goto out;
++ switch (sccb->header.response_code) {
++ case 0x0020:
++ case 0x0120:
++ case 0x0440:
++ case 0x0450:
++ break;
++ default:
++ printk(KERN_WARNING TAG "configure channel-path failed "
++ "(cmd=0x%08x, response=0x%04x)\n", cmd,
++ sccb->header.response_code);
++ rc = -EIO;
++ break;
++ }
++out:
++ free_page((unsigned long) sccb);
++ return rc;
++}
++
++/**
++ * sclp_chp_configure - perform configure channel-path sclp command
++ * @chpid: channel-path ID
+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -18,14 +21,11 @@
- #ifndef _ZD_MAC_H
- #define _ZD_MAC_H
-
--#include <linux/wireless.h>
- #include <linux/kernel.h>
--#include <linux/workqueue.h>
--#include <net/ieee80211.h>
--#include <net/ieee80211softmac.h>
-+#include <net/mac80211.h>
-
- #include "zd_chip.h"
--#include "zd_netdev.h"
-+#include "zd_ieee80211.h"
-
- struct zd_ctrlset {
- u8 modulation;
-@@ -57,7 +57,7 @@ struct zd_ctrlset {
- /* The two possible modulation types. Notify that 802.11b doesn't use the CCK
- * codeing for the 1 and 2 MBit/s rate. We stay with the term here to remain
- * consistent with uses the term at other places.
-- */
++ * Perform the configure channel-path sclp command for the specified chpid
++ * and wait for completion. Return 0 on success, non-zero otherwise.
+ */
- #define ZD_CCK 0x00
- #define ZD_OFDM 0x10
-
-@@ -141,58 +141,68 @@ struct rx_status {
- #define ZD_RX_CRC16_ERROR 0x40
- #define ZD_RX_ERROR 0x80
-
-+enum mac_flags {
-+ MAC_FIXED_CHANNEL = 0x01,
-+};
++int sclp_chp_configure(struct chp_id chpid)
++{
++ return do_chp_configure(SCLP_CMDW_CONFIGURE_CHPATH | chpid.id << 8);
++}
+
- struct housekeeping {
- struct delayed_work link_led_work;
- };
-
+/**
-+ * struct zd_tx_skb_control_block - control block for tx skbuffs
-+ * @control: &struct ieee80211_tx_control pointer
-+ * @context: context pointer
++ * sclp_chp_deconfigure - perform deconfigure channel-path sclp command
++ * @chpid: channel-path ID
+ *
-+ * This structure is used to fill the cb field in an &sk_buff to transmit.
-+ * The control field is NULL, if there is no requirement from the mac80211
-+ * stack to report about the packet ACK. This is the case if the flag
-+ * IEEE80211_TXCTL_NO_ACK is not set in &struct ieee80211_tx_control.
++ * Perform the deconfigure channel-path sclp command for the specified chpid
++ * and wait for completion. Return 0 on success, non-zero otherwise.
+ */
-+struct zd_tx_skb_control_block {
-+ struct ieee80211_tx_control *control;
-+ struct ieee80211_hw *hw;
-+ void *context;
-+};
++int sclp_chp_deconfigure(struct chp_id chpid)
++{
++ return do_chp_configure(SCLP_CMDW_DECONFIGURE_CHPATH | chpid.id << 8);
++}
+
- #define ZD_MAC_STATS_BUFFER_SIZE 16
-
-+#define ZD_MAC_MAX_ACK_WAITERS 10
++struct chp_info_sccb {
++ struct sccb_header header;
++ u8 recognized[SCLP_CHP_INFO_MASK_SIZE];
++ u8 standby[SCLP_CHP_INFO_MASK_SIZE];
++ u8 configured[SCLP_CHP_INFO_MASK_SIZE];
++ u8 ccm;
++ u8 reserved[6];
++ u8 cssid;
++} __attribute__((packed));
+
- struct zd_mac {
- struct zd_chip chip;
- spinlock_t lock;
-- struct net_device *netdev;
--
-- /* Unlocked reading possible */
-- struct iw_statistics iw_stats;
--
-+ struct ieee80211_hw *hw;
- struct housekeeping housekeeping;
- struct work_struct set_multicast_hash_work;
-+ struct work_struct set_rts_cts_work;
-+ struct work_struct set_rx_filter_work;
- struct zd_mc_hash multicast_hash;
-- struct delayed_work set_rts_cts_work;
-- struct delayed_work set_basic_rates_work;
--
-- struct tasklet_struct rx_tasklet;
-- struct sk_buff_head rx_queue;
--
-- unsigned int stats_count;
-- u8 qual_buffer[ZD_MAC_STATS_BUFFER_SIZE];
-- u8 rssi_buffer[ZD_MAC_STATS_BUFFER_SIZE];
- u8 regdomain;
- u8 default_regdomain;
-- u8 requested_channel;
--
-- /* A bitpattern of cr_rates */
-- u16 basic_rates;
--
-- /* A zd_rate */
-- u8 rts_rate;
-+ int type;
-+ int associated;
-+ struct sk_buff_head ack_wait_queue;
-+ struct ieee80211_channel channels[14];
-+ struct ieee80211_rate rates[12];
-+ struct ieee80211_hw_mode modes[2];
-
- /* Short preamble (used for RTS/CTS) */
- unsigned int short_preamble:1;
-
- /* flags to indicate update in progress */
- unsigned int updating_rts_rate:1;
-- unsigned int updating_basic_rates:1;
--};
-
--static inline struct ieee80211_device *zd_mac_to_ieee80211(struct zd_mac *mac)
--{
-- return zd_netdev_ieee80211(mac->netdev);
--}
-+ /* whether to pass frames with CRC errors to stack */
-+ unsigned int pass_failed_fcs:1;
-
--static inline struct zd_mac *zd_netdev_mac(struct net_device *netdev)
-+ /* whether to pass control frames to stack */
-+ unsigned int pass_ctrl:1;
-+};
++/**
++ * sclp_chp_read_info - perform read channel-path information sclp command
++ * @info: resulting channel-path information data
++ *
++ * Perform read channel-path information sclp command and wait for completion.
++ * On success, store channel-path information in @info and return 0. Return
++ * non-zero otherwise.
++ */
++int sclp_chp_read_info(struct sclp_chp_info *info)
++{
++ struct chp_info_sccb *sccb;
++ int rc;
+
-+static inline struct zd_mac *zd_hw_mac(struct ieee80211_hw *hw)
- {
-- return ieee80211softmac_priv(netdev);
-+ return hw->priv;
- }
++ if (!SCLP_HAS_CHP_INFO)
++ return -EOPNOTSUPP;
++ /* Prepare sccb. */
++ sccb = (struct chp_info_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!sccb)
++ return -ENOMEM;
++ sccb->header.length = sizeof(*sccb);
++ rc = do_sync_request(SCLP_CMDW_READ_CHPATH_INFORMATION, sccb);
++ if (rc)
++ goto out;
++ if (sccb->header.response_code != 0x0010) {
++ printk(KERN_WARNING TAG "read channel-path info failed "
++ "(response=0x%04x)\n", sccb->header.response_code);
++ rc = -EIO;
++ goto out;
++ }
++ memcpy(info->recognized, sccb->recognized, SCLP_CHP_INFO_MASK_SIZE);
++ memcpy(info->standby, sccb->standby, SCLP_CHP_INFO_MASK_SIZE);
++ memcpy(info->configured, sccb->configured, SCLP_CHP_INFO_MASK_SIZE);
++out:
++ free_page((unsigned long) sccb);
++ return rc;
++}
+diff --git a/drivers/s390/char/sclp_config.c b/drivers/s390/char/sclp_config.c
+index 5322e5e..9dc77f1 100644
+--- a/drivers/s390/char/sclp_config.c
++++ b/drivers/s390/char/sclp_config.c
+@@ -29,12 +29,12 @@ static void sclp_cpu_capability_notify(struct work_struct *work)
+ struct sys_device *sysdev;
- static inline struct zd_mac *zd_chip_to_mac(struct zd_chip *chip)
-@@ -205,35 +215,22 @@ static inline struct zd_mac *zd_usb_to_mac(struct zd_usb *usb)
- return zd_chip_to_mac(zd_usb_to_chip(usb));
+ printk(KERN_WARNING TAG "cpu capability changed.\n");
+- lock_cpu_hotplug();
++ get_online_cpus();
+ for_each_online_cpu(cpu) {
+ sysdev = get_cpu_sysdev(cpu);
+ kobject_uevent(&sysdev->kobj, KOBJ_CHANGE);
+ }
+- unlock_cpu_hotplug();
++ put_online_cpus();
}
-+static inline u8 *zd_mac_get_perm_addr(struct zd_mac *mac)
-+{
-+ return mac->hw->wiphy->perm_addr;
-+}
-+
- #define zd_mac_dev(mac) (zd_chip_dev(&(mac)->chip))
-
--int zd_mac_init(struct zd_mac *mac,
-- struct net_device *netdev,
-- struct usb_interface *intf);
-+struct ieee80211_hw *zd_mac_alloc_hw(struct usb_interface *intf);
- void zd_mac_clear(struct zd_mac *mac);
+ static void sclp_conf_receiver_fn(struct evbuf_header *evbuf)
+diff --git a/drivers/s390/char/sclp_cpi.c b/drivers/s390/char/sclp_cpi.c
+index 82a13d9..5716487 100644
+--- a/drivers/s390/char/sclp_cpi.c
++++ b/drivers/s390/char/sclp_cpi.c
+@@ -1,255 +1,41 @@
+ /*
+- * Author: Martin Peschke <mpeschke at de.ibm.com>
+- * Copyright (C) 2001 IBM Entwicklung GmbH, IBM Corporation
++ * drivers/s390/char/sclp_cpi.c
++ * SCLP control program identification
+ *
+- * SCLP Control-Program Identification.
++ * Copyright IBM Corp. 2001, 2007
++ * Author(s): Martin Peschke <mpeschke at de.ibm.com>
++ * Michael Ernst <mernst at de.ibm.com>
+ */
--int zd_mac_preinit_hw(struct zd_mac *mac);
--int zd_mac_init_hw(struct zd_mac *mac);
--
--int zd_mac_open(struct net_device *netdev);
--int zd_mac_stop(struct net_device *netdev);
--int zd_mac_set_mac_address(struct net_device *dev, void *p);
--void zd_mac_set_multicast_list(struct net_device *netdev);
+-#include <linux/version.h>
+ #include <linux/kmod.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+-#include <linux/init.h>
+-#include <linux/timer.h>
+-#include <linux/string.h>
+-#include <linux/err.h>
+-#include <linux/slab.h>
+-#include <asm/ebcdic.h>
+-#include <asm/semaphore.h>
-
--int zd_mac_rx_irq(struct zd_mac *mac, const u8 *buffer, unsigned int length);
+-#include "sclp.h"
+-#include "sclp_rw.h"
-
--int zd_mac_set_regdomain(struct zd_mac *zd_mac, u8 regdomain);
--u8 zd_mac_get_regdomain(struct zd_mac *zd_mac);
+-#define CPI_LENGTH_SYSTEM_TYPE 8
+-#define CPI_LENGTH_SYSTEM_NAME 8
+-#define CPI_LENGTH_SYSPLEX_NAME 8
-
--int zd_mac_request_channel(struct zd_mac *mac, u8 channel);
--u8 zd_mac_get_channel(struct zd_mac *mac);
+-struct cpi_evbuf {
+- struct evbuf_header header;
+- u8 id_format;
+- u8 reserved0;
+- u8 system_type[CPI_LENGTH_SYSTEM_TYPE];
+- u64 reserved1;
+- u8 system_name[CPI_LENGTH_SYSTEM_NAME];
+- u64 reserved2;
+- u64 system_level;
+- u64 reserved3;
+- u8 sysplex_name[CPI_LENGTH_SYSPLEX_NAME];
+- u8 reserved4[16];
+-} __attribute__((packed));
-
--int zd_mac_set_mode(struct zd_mac *mac, u32 mode);
--int zd_mac_get_mode(struct zd_mac *mac, u32 *mode);
+-struct cpi_sccb {
+- struct sccb_header header;
+- struct cpi_evbuf cpi_evbuf;
+-} __attribute__((packed));
-
--int zd_mac_get_range(struct zd_mac *mac, struct iw_range *range);
-+int zd_mac_preinit_hw(struct ieee80211_hw *hw);
-+int zd_mac_init_hw(struct ieee80211_hw *hw);
+-/* Event type structure for write message and write priority message */
+-static struct sclp_register sclp_cpi_event =
+-{
+- .send_mask = EVTYP_CTLPROGIDENT_MASK
+-};
++#include <linux/version.h>
++#include "sclp_cpi_sys.h"
--struct iw_statistics *zd_mac_get_wireless_stats(struct net_device *ndev);
-+int zd_mac_rx(struct ieee80211_hw *hw, const u8 *buffer, unsigned int length);
-+void zd_mac_tx_failed(struct ieee80211_hw *hw);
-+void zd_mac_tx_to_dev(struct sk_buff *skb, int error);
+ MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Identify this operating system instance "
++ "to the System z hardware");
++MODULE_AUTHOR("Martin Peschke <mpeschke at de.ibm.com>, "
++ "Michael Ernst <mernst at de.ibm.com>");
- #ifdef DEBUG
- void zd_dump_rx_status(const struct rx_status *status);
-diff --git a/drivers/net/wireless/zd1211rw/zd_netdev.c b/drivers/net/wireless/zd1211rw/zd_netdev.c
-deleted file mode 100644
-index 047cab3..0000000
---- a/drivers/net/wireless/zd1211rw/zd_netdev.c
-+++ /dev/null
-@@ -1,264 +0,0 @@
--/* zd_netdev.c
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2 of the License, or
-- * (at your option) any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- * GNU General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; if not, write to the Free Software
-- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-- */
--
--#include <linux/netdevice.h>
--#include <linux/etherdevice.h>
--#include <linux/skbuff.h>
--#include <net/ieee80211.h>
--#include <net/ieee80211softmac.h>
--#include <net/ieee80211softmac_wx.h>
--#include <net/iw_handler.h>
+-MODULE_AUTHOR(
+- "Martin Peschke, IBM Deutschland Entwicklung GmbH "
+- "<mpeschke at de.ibm.com>");
-
--#include "zd_def.h"
--#include "zd_netdev.h"
--#include "zd_mac.h"
--#include "zd_ieee80211.h"
+-MODULE_DESCRIPTION(
+- "identify this operating system instance to the S/390 "
+- "or zSeries hardware");
++static char *system_name = "";
++static char *sysplex_name = "";
+
+-static char *system_name = NULL;
+ module_param(system_name, charp, 0);
+ MODULE_PARM_DESC(system_name, "e.g. hostname - max. 8 characters");
-
--/* Region 0 means reset regdomain to default. */
--static int zd_set_regdomain(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- const u8 *regdomain = (u8 *)req;
-- return zd_mac_set_regdomain(zd_netdev_mac(netdev), *regdomain);
--}
+-static char *sysplex_name = NULL;
+-#ifdef ALLOW_SYSPLEX_NAME
+ module_param(sysplex_name, charp, 0);
+ MODULE_PARM_DESC(sysplex_name, "if applicable - max. 8 characters");
+-#endif
-
--static int zd_get_regdomain(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- u8 *regdomain = (u8 *)req;
-- if (!regdomain)
+-/* use default value for this field (as well as for system level) */
+-static char *system_type = "LINUX";
+
+-static int
+-cpi_check_parms(void)
++static int __init cpi_module_init(void)
+ {
+- /* reject if no system type specified */
+- if (!system_type) {
+- printk("cpi: bug: no system type specified\n");
- return -EINVAL;
-- *regdomain = zd_mac_get_regdomain(zd_netdev_mac(netdev));
-- return 0;
--}
--
--static const struct iw_priv_args zd_priv_args[] = {
-- {
-- .cmd = ZD_PRIV_SET_REGDOMAIN,
-- .set_args = IW_PRIV_TYPE_BYTE | IW_PRIV_SIZE_FIXED | 1,
-- .name = "set_regdomain",
-- },
-- {
-- .cmd = ZD_PRIV_GET_REGDOMAIN,
-- .get_args = IW_PRIV_TYPE_BYTE | IW_PRIV_SIZE_FIXED | 1,
-- .name = "get_regdomain",
-- },
--};
+- }
-
--#define PRIV_OFFSET(x) [(x)-SIOCIWFIRSTPRIV]
+- /* reject if system type larger than 8 characters */
+- if (strlen(system_type) > CPI_LENGTH_SYSTEM_NAME) {
+- printk("cpi: bug: system type has length of %li characters - "
+- "only %i characters supported\n",
+- strlen(system_type), CPI_LENGTH_SYSTEM_TYPE);
+- return -EINVAL;
+- }
-
--static const iw_handler zd_priv_handler[] = {
-- PRIV_OFFSET(ZD_PRIV_SET_REGDOMAIN) = zd_set_regdomain,
-- PRIV_OFFSET(ZD_PRIV_GET_REGDOMAIN) = zd_get_regdomain,
--};
+- /* reject if no system name specified */
+- if (!system_name) {
+- printk("cpi: no system name specified\n");
+- return -EINVAL;
+- }
-
--static int iw_get_name(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- /* FIXME: check whether 802.11a will also supported */
-- strlcpy(req->name, "IEEE 802.11b/g", IFNAMSIZ);
-- return 0;
--}
+- /* reject if system name larger than 8 characters */
+- if (strlen(system_name) > CPI_LENGTH_SYSTEM_NAME) {
+- printk("cpi: system name has length of %li characters - "
+- "only %i characters supported\n",
+- strlen(system_name), CPI_LENGTH_SYSTEM_NAME);
+- return -EINVAL;
+- }
-
--static int iw_get_nick(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- strcpy(extra, "zd1211");
-- req->data.length = strlen(extra);
-- req->data.flags = 1;
+- /* reject if specified sysplex name larger than 8 characters */
+- if (sysplex_name && strlen(sysplex_name) > CPI_LENGTH_SYSPLEX_NAME) {
+- printk("cpi: sysplex name has length of %li characters"
+- " - only %i characters supported\n",
+- strlen(sysplex_name), CPI_LENGTH_SYSPLEX_NAME);
+- return -EINVAL;
+- }
- return 0;
--}
--
--static int iw_set_freq(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
++ return sclp_cpi_set_data(system_name, sysplex_name, "LINUX",
++ LINUX_VERSION_CODE);
+ }
+
+-static void
+-cpi_callback(struct sclp_req *req, void *data)
-{
-- int r;
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-- struct iw_freq *freq = &req->freq;
-- u8 channel;
+- struct semaphore *sem;
-
-- r = zd_find_channel(&channel, freq);
-- if (r < 0)
-- return r;
-- r = zd_mac_request_channel(mac, channel);
-- return r;
+- sem = (struct semaphore *) data;
+- up(sem);
-}
-
--static int iw_get_freq(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
+-static struct sclp_req *
+-cpi_prepare_req(void)
-{
-- struct zd_mac *mac = zd_netdev_mac(netdev);
-- struct iw_freq *freq = &req->freq;
+- struct sclp_req *req;
+- struct cpi_sccb *sccb;
+- struct cpi_evbuf *evb;
-
-- return zd_channel_to_freq(freq, zd_mac_get_channel(mac));
--}
+- req = kmalloc(sizeof(struct sclp_req), GFP_KERNEL);
+- if (req == NULL)
+- return ERR_PTR(-ENOMEM);
+- sccb = (struct cpi_sccb *) __get_free_page(GFP_KERNEL | GFP_DMA);
+- if (sccb == NULL) {
+- kfree(req);
+- return ERR_PTR(-ENOMEM);
+- }
+- memset(sccb, 0, sizeof(struct cpi_sccb));
-
--static int iw_set_mode(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- return zd_mac_set_mode(zd_netdev_mac(netdev), req->mode);
--}
+- /* setup SCCB for Control-Program Identification */
+- sccb->header.length = sizeof(struct cpi_sccb);
+- sccb->cpi_evbuf.header.length = sizeof(struct cpi_evbuf);
+- sccb->cpi_evbuf.header.type = 0x0B;
+- evb = &sccb->cpi_evbuf;
-
--static int iw_get_mode(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- return zd_mac_get_mode(zd_netdev_mac(netdev), &req->mode);
--}
+- /* set system type */
+- memset(evb->system_type, ' ', CPI_LENGTH_SYSTEM_TYPE);
+- memcpy(evb->system_type, system_type, strlen(system_type));
+- sclp_ascebc_str(evb->system_type, CPI_LENGTH_SYSTEM_TYPE);
+- EBC_TOUPPER(evb->system_type, CPI_LENGTH_SYSTEM_TYPE);
-
--static int iw_get_range(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *req, char *extra)
--{
-- struct iw_range *range = (struct iw_range *)extra;
+- /* set system name */
+- memset(evb->system_name, ' ', CPI_LENGTH_SYSTEM_NAME);
+- memcpy(evb->system_name, system_name, strlen(system_name));
+- sclp_ascebc_str(evb->system_name, CPI_LENGTH_SYSTEM_NAME);
+- EBC_TOUPPER(evb->system_name, CPI_LENGTH_SYSTEM_NAME);
-
-- dev_dbg_f(zd_mac_dev(zd_netdev_mac(netdev)), "\n");
-- req->data.length = sizeof(*range);
-- return zd_mac_get_range(zd_netdev_mac(netdev), range);
--}
+- /* set system level */
+- evb->system_level = LINUX_VERSION_CODE;
-
--static int iw_set_encode(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *data,
-- char *extra)
--{
-- return ieee80211_wx_set_encode(zd_netdev_ieee80211(netdev), info,
-- data, extra);
--}
+- /* set sysplex name */
+- if (sysplex_name) {
+- memset(evb->sysplex_name, ' ', CPI_LENGTH_SYSPLEX_NAME);
+- memcpy(evb->sysplex_name, sysplex_name, strlen(sysplex_name));
+- sclp_ascebc_str(evb->sysplex_name, CPI_LENGTH_SYSPLEX_NAME);
+- EBC_TOUPPER(evb->sysplex_name, CPI_LENGTH_SYSPLEX_NAME);
+- }
-
--static int iw_get_encode(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *data,
-- char *extra)
--{
-- return ieee80211_wx_get_encode(zd_netdev_ieee80211(netdev), info,
-- data, extra);
+- /* prepare request data structure presented to SCLP driver */
+- req->command = SCLP_CMDW_WRITE_EVENT_DATA;
+- req->sccb = sccb;
+- req->status = SCLP_REQ_FILLED;
+- req->callback = cpi_callback;
+- return req;
-}
-
--static int iw_set_encodeext(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *data,
-- char *extra)
+-static void
+-cpi_free_req(struct sclp_req *req)
-{
-- return ieee80211_wx_set_encodeext(zd_netdev_ieee80211(netdev), info,
-- data, extra);
+- free_page((unsigned long) req->sccb);
+- kfree(req);
-}
-
--static int iw_get_encodeext(struct net_device *netdev,
-- struct iw_request_info *info,
-- union iwreq_data *data,
-- char *extra)
+-static int __init
+-cpi_module_init(void)
-{
-- return ieee80211_wx_get_encodeext(zd_netdev_ieee80211(netdev), info,
-- data, extra);
--}
--
--#define WX(x) [(x)-SIOCIWFIRST]
--
--static const iw_handler zd_standard_iw_handlers[] = {
-- WX(SIOCGIWNAME) = iw_get_name,
-- WX(SIOCGIWNICKN) = iw_get_nick,
-- WX(SIOCSIWFREQ) = iw_set_freq,
-- WX(SIOCGIWFREQ) = iw_get_freq,
-- WX(SIOCSIWMODE) = iw_set_mode,
-- WX(SIOCGIWMODE) = iw_get_mode,
-- WX(SIOCGIWRANGE) = iw_get_range,
-- WX(SIOCSIWENCODE) = iw_set_encode,
-- WX(SIOCGIWENCODE) = iw_get_encode,
-- WX(SIOCSIWENCODEEXT) = iw_set_encodeext,
-- WX(SIOCGIWENCODEEXT) = iw_get_encodeext,
-- WX(SIOCSIWAUTH) = ieee80211_wx_set_auth,
-- WX(SIOCGIWAUTH) = ieee80211_wx_get_auth,
-- WX(SIOCSIWSCAN) = ieee80211softmac_wx_trigger_scan,
-- WX(SIOCGIWSCAN) = ieee80211softmac_wx_get_scan_results,
-- WX(SIOCSIWESSID) = ieee80211softmac_wx_set_essid,
-- WX(SIOCGIWESSID) = ieee80211softmac_wx_get_essid,
-- WX(SIOCSIWAP) = ieee80211softmac_wx_set_wap,
-- WX(SIOCGIWAP) = ieee80211softmac_wx_get_wap,
-- WX(SIOCSIWRATE) = ieee80211softmac_wx_set_rate,
-- WX(SIOCGIWRATE) = ieee80211softmac_wx_get_rate,
-- WX(SIOCSIWGENIE) = ieee80211softmac_wx_set_genie,
-- WX(SIOCGIWGENIE) = ieee80211softmac_wx_get_genie,
-- WX(SIOCSIWMLME) = ieee80211softmac_wx_set_mlme,
--};
--
--static const struct iw_handler_def iw_handler_def = {
-- .standard = zd_standard_iw_handlers,
-- .num_standard = ARRAY_SIZE(zd_standard_iw_handlers),
-- .private = zd_priv_handler,
-- .num_private = ARRAY_SIZE(zd_priv_handler),
-- .private_args = zd_priv_args,
-- .num_private_args = ARRAY_SIZE(zd_priv_args),
-- .get_wireless_stats = zd_mac_get_wireless_stats,
--};
+- struct semaphore sem;
+- struct sclp_req *req;
+- int rc;
-
--struct net_device *zd_netdev_alloc(struct usb_interface *intf)
--{
-- int r;
-- struct net_device *netdev;
-- struct zd_mac *mac;
+- rc = cpi_check_parms();
+- if (rc)
+- return rc;
-
-- netdev = alloc_ieee80211softmac(sizeof(struct zd_mac));
-- if (!netdev) {
-- dev_dbg_f(&intf->dev, "out of memory\n");
-- return NULL;
+- rc = sclp_register(&sclp_cpi_event);
+- if (rc) {
+- /* could not register sclp event. Die. */
+- printk(KERN_WARNING "cpi: could not register to hardware "
+- "console.\n");
+- return -EINVAL;
- }
--
-- mac = zd_netdev_mac(netdev);
-- r = zd_mac_init(mac, netdev, intf);
-- if (r) {
-- usb_set_intfdata(intf, NULL);
-- free_ieee80211(netdev);
-- return NULL;
+- if (!(sclp_cpi_event.sclp_send_mask & EVTYP_CTLPROGIDENT_MASK)) {
+- printk(KERN_WARNING "cpi: no control program identification "
+- "support\n");
+- sclp_unregister(&sclp_cpi_event);
+- return -EOPNOTSUPP;
- }
-
-- SET_NETDEV_DEV(netdev, &intf->dev);
--
-- dev_dbg_f(&intf->dev, "netdev->flags %#06hx\n", netdev->flags);
-- dev_dbg_f(&intf->dev, "netdev->features %#010lx\n", netdev->features);
--
-- netdev->open = zd_mac_open;
-- netdev->stop = zd_mac_stop;
-- /* netdev->get_stats = */
-- netdev->set_multicast_list = zd_mac_set_multicast_list;
-- netdev->set_mac_address = zd_mac_set_mac_address;
-- netdev->wireless_handlers = &iw_handler_def;
-- /* netdev->ethtool_ops = */
--
-- return netdev;
--}
--
--void zd_netdev_free(struct net_device *netdev)
--{
-- if (!netdev)
-- return;
--
-- zd_mac_clear(zd_netdev_mac(netdev));
-- free_ieee80211(netdev);
--}
--
--void zd_netdev_disconnect(struct net_device *netdev)
--{
-- unregister_netdev(netdev);
--}
-diff --git a/drivers/net/wireless/zd1211rw/zd_netdev.h b/drivers/net/wireless/zd1211rw/zd_netdev.h
-deleted file mode 100644
-index 374a957..0000000
---- a/drivers/net/wireless/zd1211rw/zd_netdev.h
-+++ /dev/null
-@@ -1,45 +0,0 @@
--/* zd_netdev.h: Header for net device related functions.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2 of the License, or
-- * (at your option) any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- * GNU General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; if not, write to the Free Software
-- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-- */
--
--#ifndef _ZD_NETDEV_H
--#define _ZD_NETDEV_H
+- req = cpi_prepare_req();
+- if (IS_ERR(req)) {
+- printk(KERN_WARNING "cpi: couldn't allocate request\n");
+- sclp_unregister(&sclp_cpi_event);
+- return PTR_ERR(req);
+- }
-
--#include <linux/usb.h>
--#include <linux/netdevice.h>
--#include <net/ieee80211.h>
+- /* Prepare semaphore */
+- sema_init(&sem, 0);
+- req->callback_data = &sem;
+- /* Add request to sclp queue */
+- rc = sclp_add_request(req);
+- if (rc) {
+- printk(KERN_WARNING "cpi: could not start request\n");
+- cpi_free_req(req);
+- sclp_unregister(&sclp_cpi_event);
+- return rc;
+- }
+- /* make "insmod" sleep until callback arrives */
+- down(&sem);
-
--#define ZD_PRIV_SET_REGDOMAIN (SIOCIWFIRSTPRIV)
--#define ZD_PRIV_GET_REGDOMAIN (SIOCIWFIRSTPRIV+1)
+- rc = ((struct cpi_sccb *) req->sccb)->header.response_code;
+- if (rc != 0x0020) {
+- printk(KERN_WARNING "cpi: failed with response code 0x%x\n",
+- rc);
+- rc = -ECOMM;
+- } else
+- rc = 0;
-
--static inline struct ieee80211_device *zd_netdev_ieee80211(
-- struct net_device *ndev)
--{
-- return netdev_priv(ndev);
--}
+- cpi_free_req(req);
+- sclp_unregister(&sclp_cpi_event);
-
--static inline struct net_device *zd_ieee80211_to_netdev(
-- struct ieee80211_device *ieee)
--{
-- return ieee->dev;
+- return rc;
-}
-
--struct net_device *zd_netdev_alloc(struct usb_interface *intf);
--void zd_netdev_free(struct net_device *netdev);
--
--void zd_netdev_disconnect(struct net_device *netdev);
-
--#endif /* _ZD_NETDEV_H */
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf.c b/drivers/net/wireless/zd1211rw/zd_rf.c
-index abe5d38..ec41293 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf.c
-+++ b/drivers/net/wireless/zd1211rw/zd_rf.c
-@@ -1,4 +1,7 @@
--/* zd_rf.c
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf.h b/drivers/net/wireless/zd1211rw/zd_rf.h
-index 30502f2..79dc103 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf.h
-+++ b/drivers/net/wireless/zd1211rw/zd_rf.h
-@@ -1,4 +1,7 @@
--/* zd_rf.h
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf_al2230.c b/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
-index 006774d..74a8f7a 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
-+++ b/drivers/net/wireless/zd1211rw/zd_rf_al2230.c
-@@ -1,4 +1,7 @@
--/* zd_rf_al2230.c: Functions for the AL2230 RF controller
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c b/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
-index 73d0bb2..65095d6 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
-+++ b/drivers/net/wireless/zd1211rw/zd_rf_al7230b.c
-@@ -1,4 +1,7 @@
--/* zd_rf_al7230b.c: Functions for the AL7230B RF controller
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c b/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
-index cc70d40..0597d86 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
-+++ b/drivers/net/wireless/zd1211rw/zd_rf_rf2959.c
-@@ -1,4 +1,7 @@
--/* zd_rf_rfmd.c: Functions for the RFMD RF controller
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-diff --git a/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c b/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
-index 857dcf3..439799b 100644
---- a/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
-+++ b/drivers/net/wireless/zd1211rw/zd_rf_uw2453.c
-@@ -1,4 +1,7 @@
--/* zd_rf_uw2453.c: Functions for the UW2453 RF controller
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -403,7 +406,7 @@ static int uw2453_init_hw(struct zd_rf *rf)
- if (r)
- return r;
-
-- if (!intr_status & 0xf) {
-+ if (!(intr_status & 0xf)) {
- dev_dbg_f(zd_chip_dev(chip),
- "PLL locked on configuration %d\n", i);
- found_config = i;
-diff --git a/drivers/net/wireless/zd1211rw/zd_usb.c b/drivers/net/wireless/zd1211rw/zd_usb.c
-index c755b69..7942b15 100644
---- a/drivers/net/wireless/zd1211rw/zd_usb.c
-+++ b/drivers/net/wireless/zd1211rw/zd_usb.c
-@@ -1,4 +1,8 @@
--/* zd_usb.c
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
-+ * Copyright (C) 2006-2007 Michael Wu <flamingice at sourmilk.net>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -17,18 +21,16 @@
-
- #include <linux/kernel.h>
- #include <linux/init.h>
--#include <linux/module.h>
- #include <linux/firmware.h>
- #include <linux/device.h>
- #include <linux/errno.h>
- #include <linux/skbuff.h>
- #include <linux/usb.h>
- #include <linux/workqueue.h>
--#include <net/ieee80211.h>
-+#include <net/mac80211.h>
- #include <asm/unaligned.h>
-
- #include "zd_def.h"
--#include "zd_netdev.h"
- #include "zd_mac.h"
- #include "zd_usb.h"
-
-@@ -55,6 +57,7 @@ static struct usb_device_id usb_ids[] = {
- { USB_DEVICE(0x13b1, 0x001e), .driver_info = DEVICE_ZD1211 },
- { USB_DEVICE(0x0586, 0x3407), .driver_info = DEVICE_ZD1211 },
- { USB_DEVICE(0x129b, 0x1666), .driver_info = DEVICE_ZD1211 },
-+ { USB_DEVICE(0x157e, 0x300a), .driver_info = DEVICE_ZD1211 },
- /* ZD1211B */
- { USB_DEVICE(0x0ace, 0x1215), .driver_info = DEVICE_ZD1211B },
- { USB_DEVICE(0x157e, 0x300d), .driver_info = DEVICE_ZD1211B },
-@@ -353,18 +356,6 @@ out:
- spin_unlock(&intr->lock);
+ static void __exit cpi_module_exit(void)
+ {
}
--static inline void handle_retry_failed_int(struct urb *urb)
--{
-- struct zd_usb *usb = urb->context;
-- struct zd_mac *mac = zd_usb_to_mac(usb);
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
--
-- ieee->stats.tx_errors++;
-- ieee->ieee_stats.tx_retry_limit_exceeded++;
-- dev_dbg_f(urb_dev(urb), "retry failed interrupt\n");
--}
-
+-/* declare driver module init/cleanup functions */
+ module_init(cpi_module_init);
+ module_exit(cpi_module_exit);
-
- static void int_urb_complete(struct urb *urb)
- {
- int r;
-@@ -400,7 +391,7 @@ static void int_urb_complete(struct urb *urb)
- handle_regs_int(urb);
- break;
- case USB_INT_ID_RETRY_FAILED:
-- handle_retry_failed_int(urb);
-+ zd_mac_tx_failed(zd_usb_to_hw(urb->context));
- break;
- default:
- dev_dbg_f(urb_dev(urb), "error: urb %p unknown id %x\n", urb,
-@@ -530,14 +521,10 @@ static void handle_rx_packet(struct zd_usb *usb, const u8 *buffer,
- unsigned int length)
- {
- int i;
-- struct zd_mac *mac = zd_usb_to_mac(usb);
- const struct rx_length_info *length_info;
-
- if (length < sizeof(struct rx_length_info)) {
- /* It's not a complete packet anyhow. */
-- struct ieee80211_device *ieee = zd_mac_to_ieee80211(mac);
-- ieee->stats.rx_errors++;
-- ieee->stats.rx_length_errors++;
- return;
- }
- length_info = (struct rx_length_info *)
-@@ -561,13 +548,13 @@ static void handle_rx_packet(struct zd_usb *usb, const u8 *buffer,
- n = l+k;
- if (n > length)
- return;
-- zd_mac_rx_irq(mac, buffer+l, k);
-+ zd_mac_rx(zd_usb_to_hw(usb), buffer+l, k);
- if (i >= 2)
- return;
- l = (n+3) & ~3;
- }
- } else {
-- zd_mac_rx_irq(mac, buffer, length);
-+ zd_mac_rx(zd_usb_to_hw(usb), buffer, length);
- }
- }
-
-@@ -629,7 +616,7 @@ resubmit:
- usb_submit_urb(urb, GFP_ATOMIC);
- }
-
--static struct urb *alloc_urb(struct zd_usb *usb)
-+static struct urb *alloc_rx_urb(struct zd_usb *usb)
- {
- struct usb_device *udev = zd_usb_to_usbdev(usb);
- struct urb *urb;
-@@ -653,7 +640,7 @@ static struct urb *alloc_urb(struct zd_usb *usb)
- return urb;
- }
-
--static void free_urb(struct urb *urb)
-+static void free_rx_urb(struct urb *urb)
- {
- if (!urb)
- return;
-@@ -671,11 +658,11 @@ int zd_usb_enable_rx(struct zd_usb *usb)
- dev_dbg_f(zd_usb_dev(usb), "\n");
-
- r = -ENOMEM;
-- urbs = kcalloc(URBS_COUNT, sizeof(struct urb *), GFP_KERNEL);
-+ urbs = kcalloc(RX_URBS_COUNT, sizeof(struct urb *), GFP_KERNEL);
- if (!urbs)
- goto error;
-- for (i = 0; i < URBS_COUNT; i++) {
-- urbs[i] = alloc_urb(usb);
-+ for (i = 0; i < RX_URBS_COUNT; i++) {
-+ urbs[i] = alloc_rx_urb(usb);
- if (!urbs[i])
- goto error;
- }
-@@ -688,10 +675,10 @@ int zd_usb_enable_rx(struct zd_usb *usb)
- goto error;
- }
- rx->urbs = urbs;
-- rx->urbs_count = URBS_COUNT;
-+ rx->urbs_count = RX_URBS_COUNT;
- spin_unlock_irq(&rx->lock);
-
-- for (i = 0; i < URBS_COUNT; i++) {
-+ for (i = 0; i < RX_URBS_COUNT; i++) {
- r = usb_submit_urb(urbs[i], GFP_KERNEL);
- if (r)
- goto error_submit;
-@@ -699,7 +686,7 @@ int zd_usb_enable_rx(struct zd_usb *usb)
-
- return 0;
- error_submit:
-- for (i = 0; i < URBS_COUNT; i++) {
-+ for (i = 0; i < RX_URBS_COUNT; i++) {
- usb_kill_urb(urbs[i]);
- }
- spin_lock_irq(&rx->lock);
-@@ -708,8 +695,8 @@ error_submit:
- spin_unlock_irq(&rx->lock);
- error:
- if (urbs) {
-- for (i = 0; i < URBS_COUNT; i++)
-- free_urb(urbs[i]);
-+ for (i = 0; i < RX_URBS_COUNT; i++)
-+ free_rx_urb(urbs[i]);
- }
- return r;
- }
-@@ -731,7 +718,7 @@ void zd_usb_disable_rx(struct zd_usb *usb)
-
- for (i = 0; i < count; i++) {
- usb_kill_urb(urbs[i]);
-- free_urb(urbs[i]);
-+ free_rx_urb(urbs[i]);
- }
- kfree(urbs);
-
-@@ -741,9 +728,142 @@ void zd_usb_disable_rx(struct zd_usb *usb)
- spin_unlock_irqrestore(&rx->lock, flags);
- }
-
-+/**
-+ * zd_usb_disable_tx - disable transmission
-+ * @usb: the zd1211rw-private USB structure
+diff --git a/drivers/s390/char/sclp_cpi_sys.c b/drivers/s390/char/sclp_cpi_sys.c
+new file mode 100644
+index 0000000..4161703
+--- /dev/null
++++ b/drivers/s390/char/sclp_cpi_sys.c
+@@ -0,0 +1,400 @@
++/*
++ * drivers/s390/char/sclp_cpi_sys.c
++ * SCLP control program identification sysfs interface
+ *
-+ * Frees all URBs in the free list and marks the transmission as disabled.
++ * Copyright IBM Corp. 2001, 2007
++ * Author(s): Martin Peschke <mpeschke at de.ibm.com>
++ * Michael Ernst <mernst at de.ibm.com>
+ */
-+void zd_usb_disable_tx(struct zd_usb *usb)
++
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/stat.h>
++#include <linux/device.h>
++#include <linux/string.h>
++#include <linux/ctype.h>
++#include <linux/kmod.h>
++#include <linux/timer.h>
++#include <linux/err.h>
++#include <linux/slab.h>
++#include <linux/completion.h>
++#include <asm/ebcdic.h>
++#include <asm/sclp.h>
++#include "sclp.h"
++#include "sclp_rw.h"
++#include "sclp_cpi_sys.h"
++
++#define CPI_LENGTH_NAME 8
++#define CPI_LENGTH_LEVEL 16
++
++struct cpi_evbuf {
++ struct evbuf_header header;
++ u8 id_format;
++ u8 reserved0;
++ u8 system_type[CPI_LENGTH_NAME];
++ u64 reserved1;
++ u8 system_name[CPI_LENGTH_NAME];
++ u64 reserved2;
++ u64 system_level;
++ u64 reserved3;
++ u8 sysplex_name[CPI_LENGTH_NAME];
++ u8 reserved4[16];
++} __attribute__((packed));
++
++struct cpi_sccb {
++ struct sccb_header header;
++ struct cpi_evbuf cpi_evbuf;
++} __attribute__((packed));
++
++static struct sclp_register sclp_cpi_event = {
++ .send_mask = EVTYP_CTLPROGIDENT_MASK,
++};
++
++static char system_name[CPI_LENGTH_NAME + 1];
++static char sysplex_name[CPI_LENGTH_NAME + 1];
++static char system_type[CPI_LENGTH_NAME + 1];
++static u64 system_level;
++
++static void set_data(char *field, char *data)
++{
++ memset(field, ' ', CPI_LENGTH_NAME);
++ memcpy(field, data, strlen(data));
++ sclp_ascebc_str(field, CPI_LENGTH_NAME);
++}
++
++static void cpi_callback(struct sclp_req *req, void *data)
++{
++ struct completion *completion = data;
++
++ complete(completion);
++}
++
++static struct sclp_req *cpi_prepare_req(void)
++{
++ struct sclp_req *req;
++ struct cpi_sccb *sccb;
++ struct cpi_evbuf *evb;
++
++ req = kzalloc(sizeof(struct sclp_req), GFP_KERNEL);
++ if (!req)
++ return ERR_PTR(-ENOMEM);
++ sccb = (struct cpi_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!sccb) {
++ kfree(req);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ /* setup SCCB for Control-Program Identification */
++ sccb->header.length = sizeof(struct cpi_sccb);
++ sccb->cpi_evbuf.header.length = sizeof(struct cpi_evbuf);
++ sccb->cpi_evbuf.header.type = 0x0b;
++ evb = &sccb->cpi_evbuf;
++
++ /* set system type */
++ set_data(evb->system_type, system_type);
++
++ /* set system name */
++ set_data(evb->system_name, system_name);
++
++	/* set system level */
++ evb->system_level = system_level;
++
++ /* set sysplex name */
++ set_data(evb->sysplex_name, sysplex_name);
++
++ /* prepare request data structure presented to SCLP driver */
++ req->command = SCLP_CMDW_WRITE_EVENT_DATA;
++ req->sccb = sccb;
++ req->status = SCLP_REQ_FILLED;
++ req->callback = cpi_callback;
++ return req;
++}
++
++static void cpi_free_req(struct sclp_req *req)
++{
++ free_page((unsigned long) req->sccb);
++ kfree(req);
++}
++
++static int cpi_req(void)
++{
++ struct completion completion;
++ struct sclp_req *req;
++ int rc;
++ int response;
++
++ rc = sclp_register(&sclp_cpi_event);
++ if (rc) {
++ printk(KERN_WARNING "cpi: could not register "
++ "to hardware console.\n");
++ goto out;
++ }
++ if (!(sclp_cpi_event.sclp_send_mask & EVTYP_CTLPROGIDENT_MASK)) {
++ printk(KERN_WARNING "cpi: no control program "
++ "identification support\n");
++ rc = -EOPNOTSUPP;
++ goto out_unregister;
++ }
++
++ req = cpi_prepare_req();
++ if (IS_ERR(req)) {
++ printk(KERN_WARNING "cpi: could not allocate request\n");
++ rc = PTR_ERR(req);
++ goto out_unregister;
++ }
++
++ init_completion(&completion);
++ req->callback_data = &completion;
++
++ /* Add request to sclp queue */
++ rc = sclp_add_request(req);
++ if (rc) {
++ printk(KERN_WARNING "cpi: could not start request\n");
++ goto out_free_req;
++ }
++
++ wait_for_completion(&completion);
++
++ if (req->status != SCLP_REQ_DONE) {
++ printk(KERN_WARNING "cpi: request failed (status=0x%02x)\n",
++ req->status);
++ rc = -EIO;
++ goto out_free_req;
++ }
++
++ response = ((struct cpi_sccb *) req->sccb)->header.response_code;
++ if (response != 0x0020) {
++ printk(KERN_WARNING "cpi: failed with "
++ "response code 0x%x\n", response);
++ rc = -EIO;
++ }
++
++out_free_req:
++ cpi_free_req(req);
++
++out_unregister:
++ sclp_unregister(&sclp_cpi_event);
++
++out:
++ return rc;
++}
++
++static int check_string(const char *attr, const char *str)
++{
++ size_t len;
++ size_t i;
++
++ len = strlen(str);
++
++ if ((len > 0) && (str[len - 1] == '\n'))
++ len--;
++
++ if (len > CPI_LENGTH_NAME)
++ return -EINVAL;
++
++ for (i = 0; i < len ; i++) {
++ if (isalpha(str[i]) || isdigit(str[i]) ||
++ strchr("$@# ", str[i]))
++ continue;
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static void set_string(char *attr, const char *value)
++{
++ size_t len;
++ size_t i;
++
++ len = strlen(value);
++
++ if ((len > 0) && (value[len - 1] == '\n'))
++ len--;
++
++ for (i = 0; i < CPI_LENGTH_NAME; i++) {
++ if (i < len)
++ attr[i] = toupper(value[i]);
++ else
++ attr[i] = ' ';
++ }
++}
++
++static ssize_t system_name_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *page)
++{
++ return snprintf(page, PAGE_SIZE, "%s\n", system_name);
++}
++
++static ssize_t system_name_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf,
++ size_t len)
++{
++ int rc;
++
++ rc = check_string("system_name", buf);
++ if (rc)
++ return rc;
++
++ set_string(system_name, buf);
++
++ return len;
++}
++
++static struct kobj_attribute system_name_attr =
++ __ATTR(system_name, 0644, system_name_show, system_name_store);
++
++static ssize_t sysplex_name_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *page)
++{
++ return snprintf(page, PAGE_SIZE, "%s\n", sysplex_name);
++}
++
++static ssize_t sysplex_name_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf,
++ size_t len)
+{
-+ struct zd_usb_tx *tx = &usb->tx;
-+ unsigned long flags;
-+ struct list_head *pos, *n;
++ int rc;
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ list_for_each_safe(pos, n, &tx->free_urb_list) {
-+ list_del(pos);
-+ usb_free_urb(list_entry(pos, struct urb, urb_list));
-+ }
-+ tx->enabled = 0;
-+ tx->submitted_urbs = 0;
-+ /* The stopped state is ignored, relying on ieee80211_wake_queues()
-+ * in a potentionally following zd_usb_enable_tx().
-+ */
-+ spin_unlock_irqrestore(&tx->lock, flags);
++ rc = check_string("sysplex_name", buf);
++ if (rc)
++ return rc;
++
++ set_string(sysplex_name, buf);
++
++ return len;
+}
+
-+/**
-+ * zd_usb_enable_tx - enables transmission
-+ * @usb: a &struct zd_usb pointer
-+ *
-+ * This function enables transmission and prepares the &zd_usb_tx data
-+ * structure.
-+ */
-+void zd_usb_enable_tx(struct zd_usb *usb)
++static struct kobj_attribute sysplex_name_attr =
++ __ATTR(sysplex_name, 0644, sysplex_name_show, sysplex_name_store);
++
++static ssize_t system_type_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *page)
+{
-+ unsigned long flags;
-+ struct zd_usb_tx *tx = &usb->tx;
++ return snprintf(page, PAGE_SIZE, "%s\n", system_type);
++}
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ tx->enabled = 1;
-+ tx->submitted_urbs = 0;
-+ ieee80211_wake_queues(zd_usb_to_hw(usb));
-+ tx->stopped = 0;
-+ spin_unlock_irqrestore(&tx->lock, flags);
++static ssize_t system_type_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf,
++ size_t len)
++{
++ int rc;
++
++ rc = check_string("system_type", buf);
++ if (rc)
++ return rc;
++
++ set_string(system_type, buf);
++
++ return len;
+}
+
-+/**
-+ * alloc_tx_urb - provides a tx URB
-+ * @usb: a &struct zd_usb pointer
-+ *
-+ * Allocates a new URB. If possible takes the urb from the free list in
-+ * usb->tx.
-+ */
-+static struct urb *alloc_tx_urb(struct zd_usb *usb)
++static struct kobj_attribute system_type_attr =
++ __ATTR(system_type, 0644, system_type_show, system_type_store);
++
++static ssize_t system_level_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *page)
+{
-+ struct zd_usb_tx *tx = &usb->tx;
-+ unsigned long flags;
-+ struct list_head *entry;
-+ struct urb *urb;
++ unsigned long long level = system_level;
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ if (list_empty(&tx->free_urb_list)) {
-+ urb = usb_alloc_urb(0, GFP_ATOMIC);
-+ goto out;
-+ }
-+ entry = tx->free_urb_list.next;
-+ list_del(entry);
-+ urb = list_entry(entry, struct urb, urb_list);
-+out:
-+ spin_unlock_irqrestore(&tx->lock, flags);
-+ return urb;
++ return snprintf(page, PAGE_SIZE, "%#018llx\n", level);
+}
+
-+/**
-+ * free_tx_urb - frees a used tx URB
-+ * @usb: a &struct zd_usb pointer
-+ * @urb: URB to be freed
-+ *
-+ * Frees the transmission URB, which means to put it on the free URB
-+ * list.
-+ */
-+static void free_tx_urb(struct zd_usb *usb, struct urb *urb)
++static ssize_t system_level_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf,
++ size_t len)
+{
-+ struct zd_usb_tx *tx = &usb->tx;
-+ unsigned long flags;
++ unsigned long long level;
++ char *endp;
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ if (!tx->enabled) {
-+ usb_free_urb(urb);
-+ goto out;
-+ }
-+ list_add(&urb->urb_list, &tx->free_urb_list);
-+out:
-+ spin_unlock_irqrestore(&tx->lock, flags);
++ level = simple_strtoull(buf, &endp, 16);
++
++ if (endp == buf)
++ return -EINVAL;
++ if (*endp == '\n')
++ endp++;
++ if (*endp)
++ return -EINVAL;
++
++ system_level = level;
++
++ return len;
+}
+
-+static void tx_dec_submitted_urbs(struct zd_usb *usb)
++static struct kobj_attribute system_level_attr =
++ __ATTR(system_level, 0644, system_level_show, system_level_store);
++
++static ssize_t set_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
++ const char *buf, size_t len)
+{
-+ struct zd_usb_tx *tx = &usb->tx;
-+ unsigned long flags;
++ int rc;
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ --tx->submitted_urbs;
-+ if (tx->stopped && tx->submitted_urbs <= ZD_USB_TX_LOW) {
-+ ieee80211_wake_queues(zd_usb_to_hw(usb));
-+ tx->stopped = 0;
-+ }
-+ spin_unlock_irqrestore(&tx->lock, flags);
++ rc = cpi_req();
++ if (rc)
++ return rc;
++
++ return len;
+}
+
-+static void tx_inc_submitted_urbs(struct zd_usb *usb)
++static struct kobj_attribute set_attr = __ATTR(set, 0200, NULL, set_store);
++
++static struct attribute *cpi_attrs[] = {
++ &system_name_attr.attr,
++ &sysplex_name_attr.attr,
++ &system_type_attr.attr,
++ &system_level_attr.attr,
++ &set_attr.attr,
++ NULL,
++};
++
++static struct attribute_group cpi_attr_group = {
++ .attrs = cpi_attrs,
++};
++
++static struct kset *cpi_kset;
++
++int sclp_cpi_set_data(const char *system, const char *sysplex, const char *type,
++ const u64 level)
+{
-+ struct zd_usb_tx *tx = &usb->tx;
-+ unsigned long flags;
++ int rc;
+
-+ spin_lock_irqsave(&tx->lock, flags);
-+ ++tx->submitted_urbs;
-+ if (!tx->stopped && tx->submitted_urbs > ZD_USB_TX_HIGH) {
-+ ieee80211_stop_queues(zd_usb_to_hw(usb));
-+ tx->stopped = 1;
-+ }
-+ spin_unlock_irqrestore(&tx->lock, flags);
++ rc = check_string("system_name", system);
++ if (rc)
++ return rc;
++ rc = check_string("sysplex_name", sysplex);
++ if (rc)
++ return rc;
++ rc = check_string("system_type", type);
++ if (rc)
++ return rc;
++
++ set_string(system_name, system);
++ set_string(sysplex_name, sysplex);
++ set_string(system_type, type);
++ system_level = level;
++
++ return cpi_req();
+}
++EXPORT_SYMBOL(sclp_cpi_set_data);
+
-+/**
-+ * tx_urb_complete - completes the execution of an URB
-+ * @urb: a URB
++static int __init cpi_init(void)
++{
++ int rc;
++
++ cpi_kset = kset_create_and_add("cpi", NULL, firmware_kobj);
++ if (!cpi_kset)
++ return -ENOMEM;
++
++ rc = sysfs_create_group(&cpi_kset->kobj, &cpi_attr_group);
++ if (rc)
++ kset_unregister(cpi_kset);
++
++ return rc;
++}
++
++__initcall(cpi_init);
+diff --git a/drivers/s390/char/sclp_cpi_sys.h b/drivers/s390/char/sclp_cpi_sys.h
+new file mode 100644
+index 0000000..deef3e6
+--- /dev/null
++++ b/drivers/s390/char/sclp_cpi_sys.h
+@@ -0,0 +1,15 @@
++/*
++ * drivers/s390/char/sclp_cpi_sys.h
++ * SCLP control program identification sysfs interface
+ *
-+ * This function is called if the URB has been transferred to a device or an
-+ * error has happened.
++ * Copyright IBM Corp. 2007
++ * Author(s): Michael Ernst <mernst at de.ibm.com>
+ */
- static void tx_urb_complete(struct urb *urb)
- {
- int r;
-+ struct sk_buff *skb;
-+ struct zd_tx_skb_control_block *cb;
-+ struct zd_usb *usb;
-
- switch (urb->status) {
- case 0:
-@@ -761,9 +881,12 @@ static void tx_urb_complete(struct urb *urb)
- goto resubmit;
- }
- free_urb:
-- usb_buffer_free(urb->dev, urb->transfer_buffer_length,
-- urb->transfer_buffer, urb->transfer_dma);
-- usb_free_urb(urb);
-+ skb = (struct sk_buff *)urb->context;
-+ zd_mac_tx_to_dev(skb, urb->status);
-+ cb = (struct zd_tx_skb_control_block *)skb->cb;
-+ usb = &zd_hw_mac(cb->hw)->chip.usb;
-+ free_tx_urb(usb, urb);
-+ tx_dec_submitted_urbs(usb);
- return;
- resubmit:
- r = usb_submit_urb(urb, GFP_ATOMIC);
-@@ -773,43 +896,40 @@ resubmit:
- }
- }
-
--/* Puts the frame on the USB endpoint. It doesn't wait for
-- * completion. The frame must contain the control set.
-+/**
-+ * zd_usb_tx: initiates transfer of a frame to the device
-+ *
-+ * @usb: the zd1211rw-private USB structure
-+ * @skb: a &struct sk_buff pointer
-+ *
-+ * This function transmits a frame to the device. It doesn't wait for
-+ * completion. The frame must contain the control set and have all the
-+ * control set information available.
-+ *
-+ * The function returns 0 if the transfer has been successfully initiated.
- */
--int zd_usb_tx(struct zd_usb *usb, const u8 *frame, unsigned int length)
-+int zd_usb_tx(struct zd_usb *usb, struct sk_buff *skb)
- {
- int r;
- struct usb_device *udev = zd_usb_to_usbdev(usb);
- struct urb *urb;
-- void *buffer;
-
-- urb = usb_alloc_urb(0, GFP_ATOMIC);
-+ urb = alloc_tx_urb(usb);
- if (!urb) {
- r = -ENOMEM;
- goto out;
- }
-
-- buffer = usb_buffer_alloc(zd_usb_to_usbdev(usb), length, GFP_ATOMIC,
-- &urb->transfer_dma);
-- if (!buffer) {
-- r = -ENOMEM;
-- goto error_free_urb;
++
++#ifndef __SCLP_CPI_SYS_H__
++#define __SCLP_CPI_SYS_H__
++
++int sclp_cpi_set_data(const char *system, const char *sysplex,
++ const char *type, u64 level);
++
++#endif /* __SCLP_CPI_SYS_H__ */
+diff --git a/drivers/s390/char/sclp_info.c b/drivers/s390/char/sclp_info.c
+deleted file mode 100644
+index a1136e0..0000000
+--- a/drivers/s390/char/sclp_info.c
++++ /dev/null
+@@ -1,116 +0,0 @@
+-/*
+- * drivers/s390/char/sclp_info.c
+- *
+- * Copyright IBM Corp. 2007
+- * Author(s): Heiko Carstens <heiko.carstens at de.ibm.com>
+- */
+-
+-#include <linux/init.h>
+-#include <linux/errno.h>
+-#include <linux/string.h>
+-#include <asm/sclp.h>
+-#include "sclp.h"
+-
+-struct sclp_readinfo_sccb {
+- struct sccb_header header; /* 0-7 */
+- u16 rnmax; /* 8-9 */
+- u8 rnsize; /* 10 */
+- u8 _reserved0[24 - 11]; /* 11-23 */
+- u8 loadparm[8]; /* 24-31 */
+- u8 _reserved1[48 - 32]; /* 32-47 */
+- u64 facilities; /* 48-55 */
+- u8 _reserved2[91 - 56]; /* 56-90 */
+- u8 flags; /* 91 */
+- u8 _reserved3[100 - 92]; /* 92-99 */
+- u32 rnsize2; /* 100-103 */
+- u64 rnmax2; /* 104-111 */
+- u8 _reserved4[4096 - 112]; /* 112-4095 */
+-} __attribute__((packed, aligned(4096)));
+-
+-static struct sclp_readinfo_sccb __initdata early_readinfo_sccb;
+-static int __initdata early_readinfo_sccb_valid;
+-
+-u64 sclp_facilities;
+-
+-void __init sclp_readinfo_early(void)
+-{
+- int ret;
+- int i;
+- struct sclp_readinfo_sccb *sccb;
+- sclp_cmdw_t commands[] = {SCLP_CMDW_READ_SCP_INFO_FORCED,
+- SCLP_CMDW_READ_SCP_INFO};
+-
+- /* Enable service signal subclass mask. */
+- __ctl_set_bit(0, 9);
+- sccb = &early_readinfo_sccb;
+- for (i = 0; i < ARRAY_SIZE(commands); i++) {
+- do {
+- memset(sccb, 0, sizeof(*sccb));
+- sccb->header.length = sizeof(*sccb);
+- sccb->header.control_mask[2] = 0x80;
+- ret = sclp_service_call(commands[i], sccb);
+- } while (ret == -EBUSY);
+-
+- if (ret)
+- break;
+- __load_psw_mask(PSW_BASE_BITS | PSW_MASK_EXT |
+- PSW_MASK_WAIT | PSW_DEFAULT_KEY);
+- local_irq_disable();
+- /*
+- * Contents of the sccb might have changed
+- * therefore a barrier is needed.
+- */
+- barrier();
+- if (sccb->header.response_code == 0x10) {
+- early_readinfo_sccb_valid = 1;
+- break;
+- }
+- if (sccb->header.response_code != 0x1f0)
+- break;
- }
-- memcpy(buffer, frame, length);
+- /* Disable service signal subclass mask again. */
+- __ctl_clear_bit(0, 9);
+-}
-
- usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, EP_DATA_OUT),
-- buffer, length, tx_urb_complete, NULL);
-- urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
-+ skb->data, skb->len, tx_urb_complete, skb);
-
- r = usb_submit_urb(urb, GFP_ATOMIC);
- if (r)
- goto error;
-+ tx_inc_submitted_urbs(usb);
- return 0;
- error:
-- usb_buffer_free(zd_usb_to_usbdev(usb), length, buffer,
-- urb->transfer_dma);
--error_free_urb:
-- usb_free_urb(urb);
-+ free_tx_urb(usb, urb);
- out:
- return r;
+-void __init sclp_facilities_detect(void)
+-{
+- if (!early_readinfo_sccb_valid)
+- return;
+- sclp_facilities = early_readinfo_sccb.facilities;
+-}
+-
+-unsigned long long __init sclp_memory_detect(void)
+-{
+- unsigned long long memsize;
+- struct sclp_readinfo_sccb *sccb;
+-
+- if (!early_readinfo_sccb_valid)
+- return 0;
+- sccb = &early_readinfo_sccb;
+- if (sccb->rnsize)
+- memsize = sccb->rnsize << 20;
+- else
+- memsize = sccb->rnsize2 << 20;
+- if (sccb->rnmax)
+- memsize *= sccb->rnmax;
+- else
+- memsize *= sccb->rnmax2;
+- return memsize;
+-}
+-
+-/*
+- * This function will be called after sclp_memory_detect(), which gets called
+- * early from early.c code. Therefore the sccb should have valid contents.
+- */
+-void __init sclp_get_ipl_info(struct sclp_ipl_info *info)
+-{
+- struct sclp_readinfo_sccb *sccb;
+-
+- if (!early_readinfo_sccb_valid)
+- return;
+- sccb = &early_readinfo_sccb;
+- info->is_valid = 1;
+- if (sccb->flags & 0x2)
+- info->has_dump = 1;
+- memcpy(&info->loadparm, &sccb->loadparm, LOADPARM_LEN);
+-}
+diff --git a/drivers/s390/char/sclp_rw.c b/drivers/s390/char/sclp_rw.c
+index d6b06ab..ad7195d 100644
+--- a/drivers/s390/char/sclp_rw.c
++++ b/drivers/s390/char/sclp_rw.c
+@@ -76,7 +76,7 @@ sclp_make_buffer(void *page, unsigned short columns, unsigned short htab)
}
-@@ -838,16 +958,20 @@ static inline void init_usb_rx(struct zd_usb *usb)
- static inline void init_usb_tx(struct zd_usb *usb)
+ /*
+- * Return a pointer to the orignal page that has been used to create
++ * Return a pointer to the original page that has been used to create
+ * the buffer.
+ */
+ void *
+diff --git a/drivers/s390/char/tape_3590.c b/drivers/s390/char/tape_3590.c
+index da25f8e..8246ef3 100644
+--- a/drivers/s390/char/tape_3590.c
++++ b/drivers/s390/char/tape_3590.c
+@@ -1495,7 +1495,7 @@ tape_3590_unit_check(struct tape_device *device, struct tape_request *request,
+ device->cdev->dev.bus_id);
+ return tape_3590_erp_basic(device, request, irb, -EPERM);
+ case 0x8013:
+- PRINT_WARN("(%s): Another host has priviliged access to the "
++ PRINT_WARN("(%s): Another host has privileged access to the "
+ "tape device\n", device->cdev->dev.bus_id);
+ PRINT_WARN("(%s): To solve the problem unload the current "
+ "cartridge!\n", device->cdev->dev.bus_id);
+diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
+index eeb92e2..ddc4a11 100644
+--- a/drivers/s390/char/tape_block.c
++++ b/drivers/s390/char/tape_block.c
+@@ -74,11 +74,10 @@ tapeblock_trigger_requeue(struct tape_device *device)
+ * Post finished request.
+ */
+ static void
+-tapeblock_end_request(struct request *req, int uptodate)
++tapeblock_end_request(struct request *req, int error)
{
-- /* FIXME: at this point we will allocate a fixed number of urb's for
-- * use in a cyclic scheme */
-+ struct zd_usb_tx *tx = &usb->tx;
-+ spin_lock_init(&tx->lock);
-+ tx->enabled = 0;
-+ tx->stopped = 0;
-+ INIT_LIST_HEAD(&tx->free_urb_list);
-+ tx->submitted_urbs = 0;
+- if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
++ if (__blk_end_request(req, error, blk_rq_bytes(req)))
+ BUG();
+- end_that_request_last(req, uptodate);
}
--void zd_usb_init(struct zd_usb *usb, struct net_device *netdev,
-+void zd_usb_init(struct zd_usb *usb, struct ieee80211_hw *hw,
- struct usb_interface *intf)
- {
- memset(usb, 0, sizeof(*usb));
- usb->intf = usb_get_intf(intf);
-- usb_set_intfdata(usb->intf, netdev);
-+ usb_set_intfdata(usb->intf, hw);
- init_usb_interrupt(usb);
- init_usb_tx(usb);
- init_usb_rx(usb);
-@@ -973,7 +1097,7 @@ int zd_usb_init_hw(struct zd_usb *usb)
- return r;
- }
-
-- r = zd_mac_init_hw(mac);
-+ r = zd_mac_init_hw(mac->hw);
- if (r) {
- dev_dbg_f(zd_usb_dev(usb),
- "couldn't initialize mac. Error number %d\n", r);
-@@ -987,9 +1111,9 @@ int zd_usb_init_hw(struct zd_usb *usb)
- static int probe(struct usb_interface *intf, const struct usb_device_id *id)
- {
- int r;
-- struct zd_usb *usb;
- struct usb_device *udev = interface_to_usbdev(intf);
-- struct net_device *netdev = NULL;
-+ struct zd_usb *usb;
-+ struct ieee80211_hw *hw = NULL;
-
- print_id(udev);
+ static void
+@@ -91,7 +90,7 @@ __tapeblock_end_request(struct tape_request *ccw_req, void *data)
-@@ -1007,57 +1131,65 @@ static int probe(struct usb_interface *intf, const struct usb_device_id *id)
- goto error;
+ device = ccw_req->device;
+ req = (struct request *) data;
+- tapeblock_end_request(req, ccw_req->rc == 0);
++ tapeblock_end_request(req, (ccw_req->rc == 0) ? 0 : -EIO);
+ if (ccw_req->rc == 0)
+ /* Update position. */
+ device->blk_data.block_position =
+@@ -119,7 +118,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
+ ccw_req = device->discipline->bread(device, req);
+ if (IS_ERR(ccw_req)) {
+ DBF_EVENT(1, "TBLOCK: bread failed\n");
+- tapeblock_end_request(req, 0);
++ tapeblock_end_request(req, -EIO);
+ return PTR_ERR(ccw_req);
}
-
-- usb_reset_device(interface_to_usbdev(intf));
-+ r = usb_reset_device(udev);
-+ if (r) {
-+ dev_err(&intf->dev,
-+ "couldn't reset usb device. Error number %d\n", r);
-+ goto error;
-+ }
-
-- netdev = zd_netdev_alloc(intf);
-- if (netdev == NULL) {
-+ hw = zd_mac_alloc_hw(intf);
-+ if (hw == NULL) {
- r = -ENOMEM;
- goto error;
+ ccw_req->callback = __tapeblock_end_request;
+@@ -132,7 +131,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
+ * Start/enqueueing failed. No retries in
+ * this case.
+ */
+- tapeblock_end_request(req, 0);
++ tapeblock_end_request(req, -EIO);
+ device->discipline->free_bread(ccw_req);
}
-- usb = &zd_netdev_mac(netdev)->chip.usb;
-+ usb = &zd_hw_mac(hw)->chip.usb;
- usb->is_zd1211b = (id->driver_info == DEVICE_ZD1211B) != 0;
+@@ -177,7 +176,7 @@ tapeblock_requeue(struct work_struct *work) {
+ if (rq_data_dir(req) == WRITE) {
+ DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
+ blkdev_dequeue_request(req);
+- tapeblock_end_request(req, 0);
++ tapeblock_end_request(req, -EIO);
+ continue;
+ }
+ spin_unlock_irq(&device->blk_data.request_queue_lock);
+diff --git a/drivers/s390/char/tape_core.c b/drivers/s390/char/tape_core.c
+index 2fae633..7ad8cf1 100644
+--- a/drivers/s390/char/tape_core.c
++++ b/drivers/s390/char/tape_core.c
+@@ -37,7 +37,7 @@ static void tape_long_busy_timeout(unsigned long data);
+ * we can assign the devices to minor numbers of the same major
+ * The list is protected by the rwlock
+ */
+-static struct list_head tape_device_list = LIST_HEAD_INIT(tape_device_list);
++static LIST_HEAD(tape_device_list);
+ static DEFINE_RWLOCK(tape_device_lock);
-- r = zd_mac_preinit_hw(zd_netdev_mac(netdev));
-+ r = zd_mac_preinit_hw(hw);
- if (r) {
- dev_dbg_f(&intf->dev,
- "couldn't initialize mac. Error number %d\n", r);
- goto error;
- }
+ /*
+diff --git a/drivers/s390/char/tape_proc.c b/drivers/s390/char/tape_proc.c
+index cea49f0..c9b96d5 100644
+--- a/drivers/s390/char/tape_proc.c
++++ b/drivers/s390/char/tape_proc.c
+@@ -97,7 +97,7 @@ static void tape_proc_stop(struct seq_file *m, void *v)
+ {
+ }
-- r = register_netdev(netdev);
-+ r = ieee80211_register_hw(hw);
- if (r) {
- dev_dbg_f(&intf->dev,
-- "couldn't register netdev. Error number %d\n", r);
-+ "couldn't register device. Error number %d\n", r);
- goto error;
+-static struct seq_operations tape_proc_seq = {
++static const struct seq_operations tape_proc_seq = {
+ .start = tape_proc_start,
+ .next = tape_proc_next,
+ .stop = tape_proc_stop,
+diff --git a/drivers/s390/char/vmlogrdr.c b/drivers/s390/char/vmlogrdr.c
+index e0c4c50..d364e0b 100644
+--- a/drivers/s390/char/vmlogrdr.c
++++ b/drivers/s390/char/vmlogrdr.c
+@@ -683,7 +683,7 @@ static int vmlogrdr_register_driver(void)
+ /* Register with iucv driver */
+ ret = iucv_register(&vmlogrdr_iucv_handler, 1);
+ if (ret) {
+- printk (KERN_ERR "vmlogrdr: failed to register with"
++ printk (KERN_ERR "vmlogrdr: failed to register with "
+ "iucv driver\n");
+ goto out;
}
-
- dev_dbg_f(&intf->dev, "successful\n");
-- dev_info(&intf->dev,"%s\n", netdev->name);
-+ dev_info(&intf->dev, "%s\n", wiphy_name(hw->wiphy));
- return 0;
- error:
- usb_reset_device(interface_to_usbdev(intf));
-- zd_netdev_free(netdev);
-+ if (hw) {
-+ zd_mac_clear(zd_hw_mac(hw));
-+ ieee80211_free_hw(hw);
-+ }
- return r;
+diff --git a/drivers/s390/char/vmur.c b/drivers/s390/char/vmur.c
+index d70a6e6..7689b50 100644
+--- a/drivers/s390/char/vmur.c
++++ b/drivers/s390/char/vmur.c
+@@ -759,7 +759,7 @@ static loff_t ur_llseek(struct file *file, loff_t offset, int whence)
+ return newpos;
}
- static void disconnect(struct usb_interface *intf)
- {
-- struct net_device *netdev = zd_intf_to_netdev(intf);
-+ struct ieee80211_hw *hw = zd_intf_to_hw(intf);
- struct zd_mac *mac;
- struct zd_usb *usb;
+-static struct file_operations ur_fops = {
++static const struct file_operations ur_fops = {
+ .owner = THIS_MODULE,
+ .open = ur_open,
+ .release = ur_release,
+diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c
+index 7073daf..f523501 100644
+--- a/drivers/s390/char/zcore.c
++++ b/drivers/s390/char/zcore.c
+@@ -470,7 +470,7 @@ static loff_t zcore_lseek(struct file *file, loff_t offset, int orig)
+ return rc;
+ }
- /* Either something really bad happened, or we're just dealing with
- * a DEVICE_INSTALLER. */
-- if (netdev == NULL)
-+ if (hw == NULL)
- return;
+-static struct file_operations zcore_fops = {
++static const struct file_operations zcore_fops = {
+ .owner = THIS_MODULE,
+ .llseek = zcore_lseek,
+ .read = zcore_read,
+diff --git a/drivers/s390/cio/airq.c b/drivers/s390/cio/airq.c
+index 5287631..b7a07a8 100644
+--- a/drivers/s390/cio/airq.c
++++ b/drivers/s390/cio/airq.c
+@@ -1,12 +1,12 @@
+ /*
+ * drivers/s390/cio/airq.c
+- * S/390 common I/O routines -- support for adapter interruptions
++ * Support for adapter interruptions
+ *
+- * Copyright (C) 1999-2002 IBM Deutschland Entwicklung GmbH,
+- * IBM Corporation
+- * Author(s): Ingo Adlung (adlung at de.ibm.com)
+- * Cornelia Huck (cornelia.huck at de.ibm.com)
+- * Arnd Bergmann (arndb at de.ibm.com)
++ * Copyright IBM Corp. 1999,2007
++ * Author(s): Ingo Adlung <adlung at de.ibm.com>
++ * Cornelia Huck <cornelia.huck at de.ibm.com>
++ * Arnd Bergmann <arndb at de.ibm.com>
++ * Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
+ */
-- mac = zd_netdev_mac(netdev);
-+ mac = zd_hw_mac(hw);
- usb = &mac->chip.usb;
+ #include <linux/init.h>
+@@ -14,72 +14,131 @@
+ #include <linux/slab.h>
+ #include <linux/rcupdate.h>
- dev_dbg_f(zd_usb_dev(usb), "\n");
++#include <asm/airq.h>
++
++#include "cio.h"
+ #include "cio_debug.h"
+-#include "airq.h"
-- zd_netdev_disconnect(netdev);
-+ ieee80211_unregister_hw(hw);
+-static adapter_int_handler_t adapter_handler;
++#define NR_AIRQS 32
++#define NR_AIRQS_PER_WORD sizeof(unsigned long)
++#define NR_AIRQ_WORDS (NR_AIRQS / NR_AIRQS_PER_WORD)
- /* Just in case something has gone wrong! */
- zd_usb_disable_rx(usb);
-@@ -1070,12 +1202,13 @@ static void disconnect(struct usb_interface *intf)
- */
- usb_reset_device(interface_to_usbdev(intf));
+-/*
+- * register for adapter interrupts
+- *
+- * With HiperSockets the zSeries architecture provides for
+- * means of adapter interrups, pseudo I/O interrupts that are
+- * not tied to an I/O subchannel, but to an adapter. However,
+- * it doesn't disclose the info how to enable/disable them, but
+- * to recognize them only. Perhaps we should consider them
+- * being shared interrupts, and thus build a linked list
+- * of adapter handlers ... to be evaluated ...
+- */
+-int
+-s390_register_adapter_interrupt (adapter_int_handler_t handler)
+-{
+- int ret;
+- char dbf_txt[15];
++union indicator_t {
++ unsigned long word[NR_AIRQ_WORDS];
++ unsigned char byte[NR_AIRQS];
++} __attribute__((packed));
-- zd_netdev_free(netdev);
-+ zd_mac_clear(mac);
-+ ieee80211_free_hw(hw);
- dev_dbg(&intf->dev, "disconnected\n");
- }
+- CIO_TRACE_EVENT (4, "rgaint");
++struct airq_t {
++ adapter_int_handler_t handler;
++ void *drv_data;
++};
- static struct usb_driver driver = {
-- .name = "zd1211rw",
-+ .name = KBUILD_MODNAME,
- .id_table = usb_ids,
- .probe = probe,
- .disconnect = disconnect,
-diff --git a/drivers/net/wireless/zd1211rw/zd_usb.h b/drivers/net/wireless/zd1211rw/zd_usb.h
-index 961a7a1..049f8b9 100644
---- a/drivers/net/wireless/zd1211rw/zd_usb.h
-+++ b/drivers/net/wireless/zd1211rw/zd_usb.h
-@@ -1,4 +1,7 @@
--/* zd_usb.h: Header for USB interface implemented by ZD1211 chip
-+/* ZD1211 USB-WLAN driver for Linux
-+ *
-+ * Copyright (C) 2005-2007 Ulrich Kunitz <kune at deine-taler.de>
-+ * Copyright (C) 2006-2007 Daniel Drake <dsd at gentoo.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
-@@ -26,6 +29,9 @@
+- if (handler == NULL)
+- ret = -EINVAL;
+- else
+- ret = (cmpxchg(&adapter_handler, NULL, handler) ? -EBUSY : 0);
+- if (!ret)
+- synchronize_sched(); /* Allow interrupts to complete. */
++static union indicator_t indicators;
++static struct airq_t *airqs[NR_AIRQS];
- #include "zd_def.h"
+- sprintf (dbf_txt, "ret:%d", ret);
+- CIO_TRACE_EVENT (4, dbf_txt);
++static int register_airq(struct airq_t *airq)
++{
++ int i;
-+#define ZD_USB_TX_HIGH 5
-+#define ZD_USB_TX_LOW 2
-+
- enum devicetype {
- DEVICE_ZD1211 = 0,
- DEVICE_ZD1211B = 1,
-@@ -165,7 +171,7 @@ static inline struct usb_int_regs *get_read_regs(struct zd_usb_interrupt *intr)
- return (struct usb_int_regs *)intr->read_regs.buffer;
+- return ret;
++ for (i = 0; i < NR_AIRQS; i++)
++ if (!cmpxchg(&airqs[i], NULL, airq))
++ return i;
++ return -ENOMEM;
}
--#define URBS_COUNT 5
-+#define RX_URBS_COUNT 5
-
- struct zd_usb_rx {
- spinlock_t lock;
-@@ -176,8 +182,21 @@ struct zd_usb_rx {
- int urbs_count;
- };
-
+-int
+-s390_unregister_adapter_interrupt (adapter_int_handler_t handler)
+/**
-+ * struct zd_usb_tx - structure used for transmitting frames
-+ * @lock: lock for transmission
-+ * @free_urb_list: list of free URBs, contains all the URBs, which can be used
-+ * @submitted_urbs: atomic integer that counts the URBs having sent to the
-+ * device, which haven't been completed
-+ * @enabled: enabled flag, indicates whether tx is enabled
-+ * @stopped: indicates whether higher level tx queues are stopped
++ * s390_register_adapter_interrupt() - register adapter interrupt handler
++ * @handler: adapter handler to be registered
++ * @drv_data: driver data passed with each call to the handler
++ *
++ * Returns:
++ * Pointer to the indicator to be used on success
++ * ERR_PTR() if registration failed
+ */
- struct zd_usb_tx {
- spinlock_t lock;
-+ struct list_head free_urb_list;
-+ int submitted_urbs;
-+ int enabled;
-+ int stopped;
- };
++void *s390_register_adapter_interrupt(adapter_int_handler_t handler,
++ void *drv_data)
+ {
++ struct airq_t *airq;
++ char dbf_txt[16];
+ int ret;
+- char dbf_txt[15];
- /* Contains the usb parts. The structure doesn't require a lock because intf
-@@ -198,17 +217,17 @@ static inline struct usb_device *zd_usb_to_usbdev(struct zd_usb *usb)
- return interface_to_usbdev(usb->intf);
+- CIO_TRACE_EVENT (4, "urgaint");
+-
+- if (handler == NULL)
+- ret = -EINVAL;
+- else {
+- adapter_handler = NULL;
+- synchronize_sched(); /* Allow interrupts to complete. */
+- ret = 0;
++ airq = kmalloc(sizeof(struct airq_t), GFP_KERNEL);
++ if (!airq) {
++ ret = -ENOMEM;
++ goto out;
+ }
+- sprintf (dbf_txt, "ret:%d", ret);
+- CIO_TRACE_EVENT (4, dbf_txt);
+-
+- return ret;
++ airq->handler = handler;
++ airq->drv_data = drv_data;
++ ret = register_airq(airq);
++ if (ret < 0)
++ kfree(airq);
++out:
++ snprintf(dbf_txt, sizeof(dbf_txt), "rairq:%d", ret);
++ CIO_TRACE_EVENT(4, dbf_txt);
++ if (ret < 0)
++ return ERR_PTR(ret);
++ else
++ return &indicators.byte[ret];
}
++EXPORT_SYMBOL(s390_register_adapter_interrupt);
--static inline struct net_device *zd_intf_to_netdev(struct usb_interface *intf)
-+static inline struct ieee80211_hw *zd_intf_to_hw(struct usb_interface *intf)
+-void
+-do_adapter_IO (void)
++/**
++ * s390_unregister_adapter_interrupt - unregister adapter interrupt handler
++ * @ind: indicator for which the handler is to be unregistered
++ */
++void s390_unregister_adapter_interrupt(void *ind)
{
- return usb_get_intfdata(intf);
+- CIO_TRACE_EVENT (6, "doaio");
++ struct airq_t *airq;
++ char dbf_txt[16];
++ int i;
+
+- if (adapter_handler)
+- (*adapter_handler) ();
++ i = (int) ((addr_t) ind) - ((addr_t) &indicators.byte[0]);
++ snprintf(dbf_txt, sizeof(dbf_txt), "urairq:%d", i);
++ CIO_TRACE_EVENT(4, dbf_txt);
++ indicators.byte[i] = 0;
++ airq = xchg(&airqs[i], NULL);
++ /*
++ * Allow interrupts to complete. This will ensure that the airq handle
++ * is no longer referenced by any interrupt handler.
++ */
++ synchronize_sched();
++ kfree(airq);
}
++EXPORT_SYMBOL(s390_unregister_adapter_interrupt);
++
++#define INDICATOR_MASK (0xffUL << ((NR_AIRQS_PER_WORD - 1) * 8))
--static inline struct net_device *zd_usb_to_netdev(struct zd_usb *usb)
-+static inline struct ieee80211_hw *zd_usb_to_hw(struct zd_usb *usb)
- {
-- return zd_intf_to_netdev(usb->intf);
-+ return zd_intf_to_hw(usb->intf);
+-EXPORT_SYMBOL (s390_register_adapter_interrupt);
+-EXPORT_SYMBOL (s390_unregister_adapter_interrupt);
++void do_adapter_IO(void)
++{
++ int w;
++ int i;
++ unsigned long word;
++ struct airq_t *airq;
++
++ /*
++ * Access indicator array in word-sized chunks to minimize storage
++ * fetch operations.
++ */
++ for (w = 0; w < NR_AIRQ_WORDS; w++) {
++ word = indicators.word[w];
++ i = w * NR_AIRQS_PER_WORD;
++ /*
++ * Check bytes within word for active indicators.
++ */
++ while (word) {
++ if (word & INDICATOR_MASK) {
++ airq = airqs[i];
++ if (likely(airq))
++ airq->handler(&indicators.byte[i],
++ airq->drv_data);
++ else
++ /*
++ * Reset ill-behaved indicator.
++ */
++ indicators.byte[i] = 0;
++ }
++ word <<= 8;
++ i++;
++ }
++ }
++}
+diff --git a/drivers/s390/cio/airq.h b/drivers/s390/cio/airq.h
+deleted file mode 100644
+index 7d6be3f..0000000
+--- a/drivers/s390/cio/airq.h
++++ /dev/null
+@@ -1,10 +0,0 @@
+-#ifndef S390_AINTERRUPT_H
+-#define S390_AINTERRUPT_H
+-
+-typedef int (*adapter_int_handler_t)(void);
+-
+-extern int s390_register_adapter_interrupt(adapter_int_handler_t handler);
+-extern int s390_unregister_adapter_interrupt(adapter_int_handler_t handler);
+-extern void do_adapter_IO (void);
+-
+-#endif
+diff --git a/drivers/s390/cio/blacklist.c b/drivers/s390/cio/blacklist.c
+index bd5f16f..e8597ec 100644
+--- a/drivers/s390/cio/blacklist.c
++++ b/drivers/s390/cio/blacklist.c
+@@ -348,7 +348,7 @@ cio_ignore_write(struct file *file, const char __user *user_buf,
+ return user_len;
}
--void zd_usb_init(struct zd_usb *usb, struct net_device *netdev,
-+void zd_usb_init(struct zd_usb *usb, struct ieee80211_hw *hw,
- struct usb_interface *intf);
- int zd_usb_init_hw(struct zd_usb *usb);
- void zd_usb_clear(struct zd_usb *usb);
-@@ -221,7 +240,10 @@ void zd_usb_disable_int(struct zd_usb *usb);
- int zd_usb_enable_rx(struct zd_usb *usb);
- void zd_usb_disable_rx(struct zd_usb *usb);
+-static struct seq_operations cio_ignore_proc_seq_ops = {
++static const struct seq_operations cio_ignore_proc_seq_ops = {
+ .start = cio_ignore_proc_seq_start,
+ .stop = cio_ignore_proc_seq_stop,
+ .next = cio_ignore_proc_seq_next,
+diff --git a/drivers/s390/cio/ccwgroup.c b/drivers/s390/cio/ccwgroup.c
+index 5baa517..3964056 100644
+--- a/drivers/s390/cio/ccwgroup.c
++++ b/drivers/s390/cio/ccwgroup.c
+@@ -35,8 +35,8 @@ ccwgroup_bus_match (struct device * dev, struct device_driver * drv)
+ struct ccwgroup_device *gdev;
+ struct ccwgroup_driver *gdrv;
--int zd_usb_tx(struct zd_usb *usb, const u8 *frame, unsigned int length);
-+void zd_usb_enable_tx(struct zd_usb *usb);
-+void zd_usb_disable_tx(struct zd_usb *usb);
-+
-+int zd_usb_tx(struct zd_usb *usb, struct sk_buff *skb);
+- gdev = container_of(dev, struct ccwgroup_device, dev);
+- gdrv = container_of(drv, struct ccwgroup_driver, driver);
++ gdev = to_ccwgroupdev(dev);
++ gdrv = to_ccwgroupdrv(drv);
- int zd_usb_ioread16v(struct zd_usb *usb, u16 *values,
- const zd_addr_t *addresses, unsigned int count);
-diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
-index bca37bf..7483d45 100644
---- a/drivers/net/xen-netfront.c
-+++ b/drivers/net/xen-netfront.c
-@@ -1073,7 +1073,7 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
- if (!xen_feature(XENFEAT_auto_translated_physmap)) {
- /* Do all the remapping work and M2P updates. */
- MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-- 0, DOMID_SELF);
-+ NULL, DOMID_SELF);
- mcl++;
- HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
+ if (gdev->creator_id == gdrv->driver_id)
+ return 1;
+@@ -75,8 +75,10 @@ static void ccwgroup_ungroup_callback(struct device *dev)
+ struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
+
+ mutex_lock(&gdev->reg_mutex);
+- __ccwgroup_remove_symlinks(gdev);
+- device_unregister(dev);
++ if (device_is_registered(&gdev->dev)) {
++ __ccwgroup_remove_symlinks(gdev);
++ device_unregister(dev);
++ }
+ mutex_unlock(&gdev->reg_mutex);
+ }
+
+@@ -111,7 +113,7 @@ ccwgroup_release (struct device *dev)
+ gdev = to_ccwgroupdev(dev);
+
+ for (i = 0; i < gdev->count; i++) {
+- gdev->cdev[i]->dev.driver_data = NULL;
++ dev_set_drvdata(&gdev->cdev[i]->dev, NULL);
+ put_device(&gdev->cdev[i]->dev);
+ }
+ kfree(gdev);
+@@ -196,11 +198,11 @@ int ccwgroup_create(struct device *root, unsigned int creator_id,
+ goto error;
}
-diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
-index a6d6b24..703b85e 100644
---- a/drivers/parisc/led.c
-+++ b/drivers/parisc/led.c
-@@ -364,7 +364,7 @@ static __inline__ int led_get_net_activity(void)
- struct in_device *in_dev = __in_dev_get_rcu(dev);
- if (!in_dev || !in_dev->ifa_list)
- continue;
-- if (LOOPBACK(in_dev->ifa_list->ifa_local))
-+ if (ipv4_is_loopback(in_dev->ifa_list->ifa_local))
- continue;
- stats = dev->get_stats(dev);
- rx_total += stats->rx_packets;
-diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
-index ebb09e9..de34aa9 100644
---- a/drivers/parisc/pdc_stable.c
-+++ b/drivers/parisc/pdc_stable.c
-@@ -120,7 +120,7 @@ struct pdcspath_entry pdcspath_entry_##_name = { \
- };
+ /* Don't allow a device to belong to more than one group. */
+- if (gdev->cdev[i]->dev.driver_data) {
++ if (dev_get_drvdata(&gdev->cdev[i]->dev)) {
+ rc = -EINVAL;
+ goto error;
+ }
+- gdev->cdev[i]->dev.driver_data = gdev;
++ dev_set_drvdata(&gdev->cdev[i]->dev, gdev);
+ }
- #define PDCS_ATTR(_name, _mode, _show, _store) \
--struct subsys_attribute pdcs_attr_##_name = { \
-+struct kobj_attribute pdcs_attr_##_name = { \
- .attr = {.name = __stringify(_name), .mode = _mode}, \
- .show = _show, \
- .store = _store, \
-@@ -523,15 +523,15 @@ static struct pdcspath_entry *pdcspath_entries[] = {
+ gdev->creator_id = creator_id;
+@@ -234,8 +236,8 @@ int ccwgroup_create(struct device *root, unsigned int creator_id,
+ error:
+ for (i = 0; i < argc; i++)
+ if (gdev->cdev[i]) {
+- if (gdev->cdev[i]->dev.driver_data == gdev)
+- gdev->cdev[i]->dev.driver_data = NULL;
++ if (dev_get_drvdata(&gdev->cdev[i]->dev) == gdev)
++ dev_set_drvdata(&gdev->cdev[i]->dev, NULL);
+ put_device(&gdev->cdev[i]->dev);
+ }
+ mutex_unlock(&gdev->reg_mutex);
+@@ -408,6 +410,7 @@ int ccwgroup_driver_register(struct ccwgroup_driver *cdriver)
+ /* register our new driver with the core */
+ cdriver->driver.bus = &ccwgroup_bus_type;
+ cdriver->driver.name = cdriver->name;
++ cdriver->driver.owner = cdriver->owner;
- /**
- * pdcs_size_read - Stable Storage size output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- */
--static ssize_t
--pdcs_size_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_size_read(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ char *buf)
+ return driver_register(&cdriver->driver);
+ }
+@@ -463,8 +466,8 @@ __ccwgroup_get_gdev_by_cdev(struct ccw_device *cdev)
{
- char *out = buf;
-
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+ struct ccwgroup_device *gdev;
- /* show the size of the stable storage */
-@@ -542,17 +542,17 @@ pdcs_size_read(struct kset *kset, char *buf)
+- if (cdev->dev.driver_data) {
+- gdev = (struct ccwgroup_device *)cdev->dev.driver_data;
++ gdev = dev_get_drvdata(&cdev->dev);
++ if (gdev) {
+ if (get_device(&gdev->dev)) {
+ mutex_lock(&gdev->reg_mutex);
+ if (device_is_registered(&gdev->dev))
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index 597c0c7..e7ba16a 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -89,7 +89,8 @@ int chsc_get_ssd_info(struct subchannel_id schid, struct chsc_ssd_info *ssd)
+ /* Copy data */
+ ret = 0;
+ memset(ssd, 0, sizeof(struct chsc_ssd_info));
+- if ((ssd_area->st != 0) && (ssd_area->st != 2))
++ if ((ssd_area->st != SUBCHANNEL_TYPE_IO) &&
++ (ssd_area->st != SUBCHANNEL_TYPE_MSG))
+ goto out_free;
+ ssd->path_mask = ssd_area->path_mask;
+ ssd->fla_valid_mask = ssd_area->fla_valid_mask;
+@@ -132,20 +133,16 @@ static void terminate_internal_io(struct subchannel *sch)
+ device_set_intretry(sch);
+ /* Call handler. */
+ if (sch->driver && sch->driver->termination)
+- sch->driver->termination(&sch->dev);
++ sch->driver->termination(sch);
+ }
- /**
- * pdcs_auto_read - Stable Storage autoboot/search flag output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- * @knob: The PF_AUTOBOOT or PF_AUTOSEARCH flag
- */
--static ssize_t
--pdcs_auto_read(struct kset *kset, char *buf, int knob)
-+static ssize_t pdcs_auto_read(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ char *buf, int knob)
+-static int
+-s390_subchannel_remove_chpid(struct device *dev, void *data)
++static int s390_subchannel_remove_chpid(struct subchannel *sch, void *data)
{
- char *out = buf;
- struct pdcspath_entry *pathentry;
+ int j;
+ int mask;
+- struct subchannel *sch;
+- struct chp_id *chpid;
++ struct chp_id *chpid = data;
+ struct schib schib;
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+- sch = to_subchannel(dev);
+- chpid = data;
+ for (j = 0; j < 8; j++) {
+ mask = 0x80 >> j;
+ if ((sch->schib.pmcw.pim & mask) &&
+@@ -158,7 +155,7 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
+ spin_lock_irq(sch->lock);
- /* Current flags are stored in primary boot path entry */
-@@ -568,40 +568,37 @@ pdcs_auto_read(struct kset *kset, char *buf, int knob)
+ stsch(sch->schid, &schib);
+- if (!schib.pmcw.dnv)
++ if (!css_sch_is_valid(&schib))
+ goto out_unreg;
+ memcpy(&sch->schib, &schib, sizeof(struct schib));
+ /* Check for single path devices. */
+@@ -172,12 +169,12 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
+ terminate_internal_io(sch);
+ /* Re-start path verification. */
+ if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ }
+ } else {
+ /* trigger path verification. */
+ if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ else if (sch->lpm == mask)
+ goto out_unreg;
+ }
+@@ -201,12 +198,10 @@ void chsc_chp_offline(struct chp_id chpid)
- /**
- * pdcs_autoboot_read - Stable Storage autoboot flag output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- */
--static inline ssize_t
--pdcs_autoboot_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_autoboot_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
- {
-- return pdcs_auto_read(kset, buf, PF_AUTOBOOT);
-+ return pdcs_auto_read(kobj, attr, buf, PF_AUTOBOOT);
+ if (chp_get_status(chpid) <= 0)
+ return;
+- bus_for_each_dev(&css_bus_type, NULL, &chpid,
+- s390_subchannel_remove_chpid);
++ for_each_subchannel_staged(s390_subchannel_remove_chpid, NULL, &chpid);
}
- /**
- * pdcs_autosearch_read - Stable Storage autoboot flag output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- */
--static inline ssize_t
--pdcs_autosearch_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_autosearch_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static int
+-s390_process_res_acc_new_sch(struct subchannel_id schid)
++static int s390_process_res_acc_new_sch(struct subchannel_id schid, void *data)
{
-- return pdcs_auto_read(kset, buf, PF_AUTOSEARCH);
-+ return pdcs_auto_read(kobj, attr, buf, PF_AUTOSEARCH);
+ struct schib schib;
+ /*
+@@ -252,18 +247,10 @@ static int get_res_chpid_mask(struct chsc_ssd_info *ssd,
+ return 0;
}
- /**
- * pdcs_timer_read - Stable Storage timer count output (in seconds).
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- *
- * The value of the timer field correponds to a number of seconds in powers of 2.
- */
--static ssize_t
--pdcs_timer_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_timer_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static int
+-__s390_process_res_acc(struct subchannel_id schid, void *data)
++static int __s390_process_res_acc(struct subchannel *sch, void *data)
{
- char *out = buf;
- struct pdcspath_entry *pathentry;
+ int chp_mask, old_lpm;
+- struct res_acc_data *res_data;
+- struct subchannel *sch;
+-
+- res_data = data;
+- sch = get_subchannel_by_schid(schid);
+- if (!sch)
+- /* Check if a subchannel is newly available. */
+- return s390_process_res_acc_new_sch(schid);
++ struct res_acc_data *res_data = data;
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+ spin_lock_irq(sch->lock);
+ chp_mask = get_res_chpid_mask(&sch->ssd_info, res_data);
+@@ -279,10 +266,10 @@ __s390_process_res_acc(struct subchannel_id schid, void *data)
+ if (!old_lpm && sch->lpm)
+ device_trigger_reprobe(sch);
+ else if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ out:
+ spin_unlock_irq(sch->lock);
+- put_device(&sch->dev);
++
+ return 0;
+ }
- /* Current flags are stored in primary boot path entry */
-@@ -618,15 +615,14 @@ pdcs_timer_read(struct kset *kset, char *buf)
+@@ -305,7 +292,8 @@ static void s390_process_res_acc (struct res_acc_data *res_data)
+ * The more information we have (info), the less scanning
+ * will we have to do.
+ */
+- for_each_subchannel(__s390_process_res_acc, res_data);
++ for_each_subchannel_staged(__s390_process_res_acc,
++ s390_process_res_acc_new_sch, res_data);
+ }
- /**
- * pdcs_osid_read - Stable Storage OS ID register output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- */
--static ssize_t
--pdcs_osid_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_osid_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+ static int
+@@ -499,8 +487,7 @@ void chsc_process_crw(void)
+ } while (sei_area->flags & 0x80);
+ }
+
+-static int
+-__chp_add_new_sch(struct subchannel_id schid)
++static int __chp_add_new_sch(struct subchannel_id schid, void *data)
{
- char *out = buf;
+ struct schib schib;
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+@@ -514,45 +501,37 @@ __chp_add_new_sch(struct subchannel_id schid)
+ }
- out += sprintf(out, "%s dependent data (0x%.4x)\n",
-@@ -637,18 +633,17 @@ pdcs_osid_read(struct kset *kset, char *buf)
- /**
- * pdcs_osdep1_read - Stable Storage OS-Dependent data area 1 output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- *
- * This can hold 16 bytes of OS-Dependent data.
- */
--static ssize_t
--pdcs_osdep1_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_osdep1_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static int
+-__chp_add(struct subchannel_id schid, void *data)
++static int __chp_add(struct subchannel *sch, void *data)
{
- char *out = buf;
- u32 result[4];
+ int i, mask;
+- struct chp_id *chpid;
+- struct subchannel *sch;
+-
+- chpid = data;
+- sch = get_subchannel_by_schid(schid);
+- if (!sch)
+- /* Check if the subchannel is now available. */
+- return __chp_add_new_sch(schid);
++ struct chp_id *chpid = data;
++
+ spin_lock_irq(sch->lock);
+ for (i=0; i<8; i++) {
+ mask = 0x80 >> i;
+ if ((sch->schib.pmcw.pim & mask) &&
+- (sch->schib.pmcw.chpid[i] == chpid->id)) {
+- if (stsch(sch->schid, &sch->schib) != 0) {
+- /* Endgame. */
+- spin_unlock_irq(sch->lock);
+- return -ENXIO;
+- }
++ (sch->schib.pmcw.chpid[i] == chpid->id))
+ break;
+- }
+ }
+ if (i==8) {
+ spin_unlock_irq(sch->lock);
+ return 0;
+ }
++ if (stsch(sch->schid, &sch->schib)) {
++ spin_unlock_irq(sch->lock);
++ css_schedule_eval(sch->schid);
++ return 0;
++ }
+ sch->lpm = ((sch->schib.pmcw.pim &
+ sch->schib.pmcw.pam &
+ sch->schib.pmcw.pom)
+ | mask) & sch->opm;
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+ if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
- if (pdc_stable_read(PDCS_ADDR_OSD1, &result, sizeof(result)) != PDC_OK)
-@@ -664,18 +659,17 @@ pdcs_osdep1_read(struct kset *kset, char *buf)
+ spin_unlock_irq(sch->lock);
+- put_device(&sch->dev);
++
+ return 0;
+ }
- /**
- * pdcs_diagnostic_read - Stable Storage Diagnostic register output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- *
- * I have NFC how to interpret the content of that register ;-).
- */
--static ssize_t
--pdcs_diagnostic_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_diagnostic_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
- {
- char *out = buf;
- u32 result;
+@@ -564,7 +543,8 @@ void chsc_chp_online(struct chp_id chpid)
+ CIO_TRACE_EVENT(2, dbf_txt);
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+ if (chp_get_status(chpid) != 0)
+- for_each_subchannel(__chp_add, &chpid);
++ for_each_subchannel_staged(__chp_add, __chp_add_new_sch,
++ &chpid);
+ }
- /* get diagnostic */
-@@ -689,18 +683,17 @@ pdcs_diagnostic_read(struct kset *kset, char *buf)
+ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
+@@ -589,7 +569,7 @@ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
+ if (!old_lpm)
+ device_trigger_reprobe(sch);
+ else if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ break;
+ }
+ sch->opm &= ~mask;
+@@ -603,37 +583,29 @@ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
+ terminate_internal_io(sch);
+ /* Re-start path verification. */
+ if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ }
+ } else if (!sch->lpm) {
+ if (device_trigger_verify(sch) != 0)
+ css_schedule_eval(sch->schid);
+ } else if (sch->driver && sch->driver->verify)
+- sch->driver->verify(&sch->dev);
++ sch->driver->verify(sch);
+ break;
+ }
+ spin_unlock_irqrestore(sch->lock, flags);
+ }
- /**
- * pdcs_fastsize_read - Stable Storage FastSize register output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- *
- * This register holds the amount of system RAM to be tested during boot sequence.
- */
--static ssize_t
--pdcs_fastsize_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_fastsize_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static int s390_subchannel_vary_chpid_off(struct device *dev, void *data)
++static int s390_subchannel_vary_chpid_off(struct subchannel *sch, void *data)
{
- char *out = buf;
- u32 result;
-
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+- struct subchannel *sch;
+- struct chp_id *chpid;
+-
+- sch = to_subchannel(dev);
+- chpid = data;
++ struct chp_id *chpid = data;
- /* get fast-size */
-@@ -718,13 +711,12 @@ pdcs_fastsize_read(struct kset *kset, char *buf)
+ __s390_subchannel_vary_chpid(sch, *chpid, 0);
+ return 0;
+ }
- /**
- * pdcs_osdep2_read - Stable Storage OS-Dependent data area 2 output.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The output buffer to write to.
- *
- * This can hold pdcs_size - 224 bytes of OS-Dependent data, when available.
- */
--static ssize_t
--pdcs_osdep2_read(struct kset *kset, char *buf)
-+static ssize_t pdcs_osdep2_read(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static int s390_subchannel_vary_chpid_on(struct device *dev, void *data)
++static int s390_subchannel_vary_chpid_on(struct subchannel *sch, void *data)
{
- char *out = buf;
- unsigned long size;
-@@ -736,7 +728,7 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
-
- size = pdcs_size - 224;
+- struct subchannel *sch;
+- struct chp_id *chpid;
+-
+- sch = to_subchannel(dev);
+- chpid = data;
++ struct chp_id *chpid = data;
-- if (!kset || !buf)
-+ if (!buf)
- return -EINVAL;
+ __s390_subchannel_vary_chpid(sch, *chpid, 1);
+ return 0;
+@@ -643,13 +615,7 @@ static int
+ __s390_vary_chpid_on(struct subchannel_id schid, void *data)
+ {
+ struct schib schib;
+- struct subchannel *sch;
- for (i=0; i<size; i+=4) {
-@@ -751,7 +743,6 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
+- sch = get_subchannel_by_schid(schid);
+- if (sch) {
+- put_device(&sch->dev);
+- return 0;
+- }
+ if (stsch_err(schid, &schib))
+ /* We're through */
+ return -ENXIO;
+@@ -669,12 +635,13 @@ int chsc_chp_vary(struct chp_id chpid, int on)
+ * Redo PathVerification on the devices the chpid connects to
+ */
- /**
- * pdcs_auto_write - This function handles autoboot/search flag modifying.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The input buffer to read from.
- * @count: The number of bytes to be read.
- * @knob: The PF_AUTOBOOT or PF_AUTOSEARCH flag
-@@ -760,8 +751,9 @@ pdcs_osdep2_read(struct kset *kset, char *buf)
- * We expect a precise syntax:
- * \"n\" (n == 0 or 1) to toggle AutoBoot Off or On
- */
--static ssize_t
--pdcs_auto_write(struct kset *kset, const char *buf, size_t count, int knob)
-+static ssize_t pdcs_auto_write(struct kobject *kobj,
-+ struct kobj_attribute *attr, const char *buf,
-+ size_t count, int knob)
- {
- struct pdcspath_entry *pathentry;
- unsigned char flags;
-@@ -771,7 +763,7 @@ pdcs_auto_write(struct kset *kset, const char *buf, size_t count, int knob)
- if (!capable(CAP_SYS_ADMIN))
- return -EACCES;
+- bus_for_each_dev(&css_bus_type, NULL, &chpid, on ?
+- s390_subchannel_vary_chpid_on :
+- s390_subchannel_vary_chpid_off);
+ if (on)
+- /* Scan for new devices on varied on path. */
+- for_each_subchannel(__s390_vary_chpid_on, NULL);
++ for_each_subchannel_staged(s390_subchannel_vary_chpid_on,
++ __s390_vary_chpid_on, &chpid);
++ else
++ for_each_subchannel_staged(s390_subchannel_vary_chpid_off,
++ NULL, &chpid);
++
+ return 0;
+ }
-- if (!kset || !buf || !count)
-+ if (!buf || !count)
- return -EINVAL;
+@@ -1075,7 +1042,7 @@ chsc_determine_css_characteristics(void)
- /* We'll use a local copy of buf */
-@@ -826,7 +818,6 @@ parse_error:
+ scsc_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ if (!scsc_area) {
+- CIO_MSG_EVENT(0, "Was not able to determine available"
++ CIO_MSG_EVENT(0, "Was not able to determine available "
+ "CHSCs due to no memory.\n");
+ return -ENOMEM;
+ }
+diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
+index 4690534..60590a1 100644
+--- a/drivers/s390/cio/cio.c
++++ b/drivers/s390/cio/cio.c
+@@ -23,11 +23,12 @@
+ #include <asm/reset.h>
+ #include <asm/ipl.h>
+ #include <asm/chpid.h>
+-#include "airq.h"
++#include <asm/airq.h>
+ #include "cio.h"
+ #include "css.h"
+ #include "chsc.h"
+ #include "ioasm.h"
++#include "io_sch.h"
+ #include "blacklist.h"
+ #include "cio_debug.h"
+ #include "chp.h"
+@@ -56,39 +57,37 @@ __setup ("cio_msg=", cio_setup);
- /**
- * pdcs_autoboot_write - This function handles autoboot flag modifying.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The input buffer to read from.
- * @count: The number of bytes to be read.
- *
-@@ -834,15 +825,15 @@ parse_error:
- * We expect a precise syntax:
- * \"n\" (n == 0 or 1) to toggle AutoSearch Off or On
+ /*
+ * Function: cio_debug_init
+- * Initializes three debug logs (under /proc/s390dbf) for common I/O:
+- * - cio_msg logs the messages which are printk'ed when CONFIG_DEBUG_IO is on
++ * Initializes three debug logs for common I/O:
++ * - cio_msg logs generic cio messages
+ * - cio_trace logs the calling of different functions
+- * - cio_crw logs the messages which are printk'ed when CONFIG_DEBUG_CRW is on
+- * debug levels depend on CONFIG_DEBUG_IO resp. CONFIG_DEBUG_CRW
++ * - cio_crw logs machine check related cio messages
*/
--static inline ssize_t
--pdcs_autoboot_write(struct kset *kset, const char *buf, size_t count)
-+static ssize_t pdcs_autoboot_write(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t count)
+-static int __init
+-cio_debug_init (void)
++static int __init cio_debug_init(void)
{
-- return pdcs_auto_write(kset, buf, count, PF_AUTOBOOT);
-+ return pdcs_auto_write(kset, attr, buf, count, PF_AUTOBOOT);
- }
+- cio_debug_msg_id = debug_register ("cio_msg", 16, 4, 16*sizeof (long));
++ cio_debug_msg_id = debug_register("cio_msg", 16, 1, 16 * sizeof(long));
+ if (!cio_debug_msg_id)
+ goto out_unregister;
+- debug_register_view (cio_debug_msg_id, &debug_sprintf_view);
+- debug_set_level (cio_debug_msg_id, 2);
+- cio_debug_trace_id = debug_register ("cio_trace", 16, 4, 16);
++ debug_register_view(cio_debug_msg_id, &debug_sprintf_view);
++ debug_set_level(cio_debug_msg_id, 2);
++ cio_debug_trace_id = debug_register("cio_trace", 16, 1, 16);
+ if (!cio_debug_trace_id)
+ goto out_unregister;
+- debug_register_view (cio_debug_trace_id, &debug_hex_ascii_view);
+- debug_set_level (cio_debug_trace_id, 2);
+- cio_debug_crw_id = debug_register ("cio_crw", 4, 4, 16*sizeof (long));
++ debug_register_view(cio_debug_trace_id, &debug_hex_ascii_view);
++ debug_set_level(cio_debug_trace_id, 2);
++ cio_debug_crw_id = debug_register("cio_crw", 16, 1, 16 * sizeof(long));
+ if (!cio_debug_crw_id)
+ goto out_unregister;
+- debug_register_view (cio_debug_crw_id, &debug_sprintf_view);
+- debug_set_level (cio_debug_crw_id, 2);
++ debug_register_view(cio_debug_crw_id, &debug_sprintf_view);
++ debug_set_level(cio_debug_crw_id, 4);
+ return 0;
- /**
- * pdcs_autosearch_write - This function handles autosearch flag modifying.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The input buffer to read from.
- * @count: The number of bytes to be read.
- *
-@@ -850,15 +841,15 @@ pdcs_autoboot_write(struct kset *kset, const char *buf, size_t count)
- * We expect a precise syntax:
- * \"n\" (n == 0 or 1) to toggle AutoSearch Off or On
- */
--static inline ssize_t
--pdcs_autosearch_write(struct kset *kset, const char *buf, size_t count)
-+static ssize_t pdcs_autosearch_write(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t count)
- {
-- return pdcs_auto_write(kset, buf, count, PF_AUTOSEARCH);
-+ return pdcs_auto_write(kset, attr, buf, count, PF_AUTOSEARCH);
+ out_unregister:
+ if (cio_debug_msg_id)
+- debug_unregister (cio_debug_msg_id);
++ debug_unregister(cio_debug_msg_id);
+ if (cio_debug_trace_id)
+- debug_unregister (cio_debug_trace_id);
++ debug_unregister(cio_debug_trace_id);
+ if (cio_debug_crw_id)
+- debug_unregister (cio_debug_crw_id);
++ debug_unregister(cio_debug_crw_id);
+ printk(KERN_WARNING"cio: could not initialize debugging\n");
+ return -1;
}
-
- /**
- * pdcs_osdep1_write - Stable Storage OS-Dependent data area 1 input.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The input buffer to read from.
- * @count: The number of bytes to be read.
- *
-@@ -866,15 +857,16 @@ pdcs_autosearch_write(struct kset *kset, const char *buf, size_t count)
- * write approach. It's up to userspace to deal with it when constructing
- * its input buffer.
- */
--static ssize_t
--pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
-+static ssize_t pdcs_osdep1_write(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t count)
+@@ -147,7 +146,7 @@ cio_tpi(void)
+ spin_lock(sch->lock);
+ memcpy (&sch->schib.scsw, &irb->scsw, sizeof (struct scsw));
+ if (sch->driver && sch->driver->irq)
+- sch->driver->irq(&sch->dev);
++ sch->driver->irq(sch);
+ spin_unlock(sch->lock);
+ irq_exit ();
+ _local_bh_enable();
+@@ -184,33 +183,35 @@ cio_start_key (struct subchannel *sch, /* subchannel structure */
{
- u8 in[16];
+ char dbf_txt[15];
+ int ccode;
++ struct orb *orb;
- if (!capable(CAP_SYS_ADMIN))
- return -EACCES;
+- CIO_TRACE_EVENT (4, "stIO");
+- CIO_TRACE_EVENT (4, sch->dev.bus_id);
++ CIO_TRACE_EVENT(4, "stIO");
++ CIO_TRACE_EVENT(4, sch->dev.bus_id);
-- if (!kset || !buf || !count)
-+ if (!buf || !count)
- return -EINVAL;
++ orb = &to_io_private(sch)->orb;
+ /* sch is always under 2G. */
+- sch->orb.intparm = (__u32)(unsigned long)sch;
+- sch->orb.fmt = 1;
++ orb->intparm = (u32)(addr_t)sch;
++ orb->fmt = 1;
- if (unlikely(pdcs_osid != OS_ID_LINUX))
-@@ -895,7 +887,6 @@ pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
+- sch->orb.pfch = sch->options.prefetch == 0;
+- sch->orb.spnd = sch->options.suspend;
+- sch->orb.ssic = sch->options.suspend && sch->options.inter;
+- sch->orb.lpm = (lpm != 0) ? lpm : sch->lpm;
++ orb->pfch = sch->options.prefetch == 0;
++ orb->spnd = sch->options.suspend;
++ orb->ssic = sch->options.suspend && sch->options.inter;
++ orb->lpm = (lpm != 0) ? lpm : sch->lpm;
+ #ifdef CONFIG_64BIT
+ /*
+ * for 64 bit we always support 64 bit IDAWs with 4k page size only
+ */
+- sch->orb.c64 = 1;
+- sch->orb.i2k = 0;
++ orb->c64 = 1;
++ orb->i2k = 0;
+ #endif
+- sch->orb.key = key >> 4;
++ orb->key = key >> 4;
+ /* issue "Start Subchannel" */
+- sch->orb.cpa = (__u32) __pa (cpa);
+- ccode = ssch (sch->schid, &sch->orb);
++ orb->cpa = (__u32) __pa(cpa);
++ ccode = ssch(sch->schid, orb);
- /**
- * pdcs_osdep2_write - Stable Storage OS-Dependent data area 2 input.
-- * @kset: An allocated and populated struct kset. We don't use it tho.
- * @buf: The input buffer to read from.
- * @count: The number of bytes to be read.
- *
-@@ -903,8 +894,9 @@ pdcs_osdep1_write(struct kset *kset, const char *buf, size_t count)
- * byte-by-byte write approach. It's up to userspace to deal with it when
- * constructing its input buffer.
+ /* process condition code */
+- sprintf (dbf_txt, "ccode:%d", ccode);
+- CIO_TRACE_EVENT (4, dbf_txt);
++ sprintf(dbf_txt, "ccode:%d", ccode);
++ CIO_TRACE_EVENT(4, dbf_txt);
+
+ switch (ccode) {
+ case 0:
+@@ -405,8 +406,8 @@ cio_modify (struct subchannel *sch)
+ /*
+ * Enable subchannel.
*/
--static ssize_t
--pdcs_osdep2_write(struct kset *kset, const char *buf, size_t count)
-+static ssize_t pdcs_osdep2_write(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t count)
+-int
+-cio_enable_subchannel (struct subchannel *sch, unsigned int isc)
++int cio_enable_subchannel(struct subchannel *sch, unsigned int isc,
++ u32 intparm)
{
- unsigned long size;
- unsigned short i;
-@@ -913,7 +905,7 @@ pdcs_osdep2_write(struct kset *kset, const char *buf, size_t count)
- if (!capable(CAP_SYS_ADMIN))
- return -EACCES;
-
-- if (!kset || !buf || !count)
-+ if (!buf || !count)
- return -EINVAL;
-
- if (unlikely(pdcs_size <= 224))
-@@ -951,21 +943,25 @@ static PDCS_ATTR(diagnostic, 0400, pdcs_diagnostic_read, NULL);
- static PDCS_ATTR(fastsize, 0400, pdcs_fastsize_read, NULL);
- static PDCS_ATTR(osdep2, 0600, pdcs_osdep2_read, pdcs_osdep2_write);
-
--static struct subsys_attribute *pdcs_subsys_attrs[] = {
-- &pdcs_attr_size,
-- &pdcs_attr_autoboot,
-- &pdcs_attr_autosearch,
-- &pdcs_attr_timer,
-- &pdcs_attr_osid,
-- &pdcs_attr_osdep1,
-- &pdcs_attr_diagnostic,
-- &pdcs_attr_fastsize,
-- &pdcs_attr_osdep2,
-+static struct attribute *pdcs_subsys_attrs[] = {
-+ &pdcs_attr_size.attr,
-+ &pdcs_attr_autoboot.attr,
-+ &pdcs_attr_autosearch.attr,
-+ &pdcs_attr_timer.attr,
-+ &pdcs_attr_osid.attr,
-+ &pdcs_attr_osdep1.attr,
-+ &pdcs_attr_diagnostic.attr,
-+ &pdcs_attr_fastsize.attr,
-+ &pdcs_attr_osdep2.attr,
- NULL,
- };
+ char dbf_txt[15];
+ int ccode;
+@@ -425,7 +426,7 @@ cio_enable_subchannel (struct subchannel *sch, unsigned int isc)
+ for (retry = 5, ret = 0; retry > 0; retry--) {
+ sch->schib.pmcw.ena = 1;
+ sch->schib.pmcw.isc = isc;
+- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
++ sch->schib.pmcw.intparm = intparm;
+ ret = cio_modify(sch);
+ if (ret == -ENODEV)
+ break;
+@@ -567,7 +568,7 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
+ */
+ if (sch->st != 0) {
+ CIO_DEBUG(KERN_INFO, 0,
+- "cio: Subchannel 0.%x.%04x reports "
++ "Subchannel 0.%x.%04x reports "
+ "non-I/O subchannel type %04X\n",
+ sch->schid.ssid, sch->schid.sch_no, sch->st);
+ /* We stop here for non-io subchannels. */
+@@ -576,11 +577,11 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
+ }
--static decl_subsys(paths, &ktype_pdcspath, NULL);
--static decl_subsys(stable, NULL, NULL);
-+static struct attribute_group pdcs_attr_group = {
-+ .attrs = pdcs_subsys_attrs,
-+};
+ /* Initialization for io subchannels. */
+- if (!sch->schib.pmcw.dnv) {
+- /* io subchannel but device number is invalid. */
++ if (!css_sch_is_valid(&sch->schib)) {
+ err = -ENODEV;
+ goto out;
+ }
+
-+static struct kobject *stable_kobj;
-+static struct kset *paths_kset;
-
- /**
- * pdcs_register_pathentries - Prepares path entries kobjects for sysfs usage.
-@@ -995,12 +991,12 @@ pdcs_register_pathentries(void)
- if (err < 0)
- continue;
+ /* Devno is valid. */
+ if (is_blacklisted (sch->schid.ssid, sch->schib.pmcw.dev)) {
+ /*
+@@ -600,7 +601,7 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
+ sch->lpm = sch->schib.pmcw.pam & sch->opm;
-- if ((err = kobject_set_name(&entry->kobj, "%s", entry->name)))
-- return err;
-- kobj_set_kset_s(entry, paths_subsys);
-- if ((err = kobject_register(&entry->kobj)))
-+ entry->kobj.kset = paths_kset;
-+ err = kobject_init_and_add(&entry->kobj, &ktype_pdcspath, NULL,
-+ "%s", entry->name);
-+ if (err)
- return err;
--
-+
- /* kobject is now registered */
- write_lock(&entry->rw_lock);
- entry->ready = 2;
-@@ -1012,6 +1008,7 @@ pdcs_register_pathentries(void)
+ CIO_DEBUG(KERN_INFO, 0,
+- "cio: Detected device %04x on subchannel 0.%x.%04X"
++ "Detected device %04x on subchannel 0.%x.%04X"
+ " - PIM = %02X, PAM = %02X, POM = %02X\n",
+ sch->schib.pmcw.dev, sch->schid.ssid,
+ sch->schid.sch_no, sch->schib.pmcw.pim,
+@@ -680,7 +681,7 @@ do_IRQ (struct pt_regs *regs)
+ sizeof (irb->scsw));
+ /* Call interrupt handler if there is one. */
+ if (sch->driver && sch->driver->irq)
+- sch->driver->irq(&sch->dev);
++ sch->driver->irq(sch);
}
+ if (sch)
+ spin_unlock(sch->lock);
+@@ -698,8 +699,14 @@ do_IRQ (struct pt_regs *regs)
- write_unlock(&entry->rw_lock);
-+ kobject_uevent(&entry->kobj, KOBJ_ADD);
- }
-
- return 0;
-@@ -1029,7 +1026,7 @@ pdcs_unregister_pathentries(void)
- for (i = 0; (entry = pdcspath_entries[i]); i++) {
- read_lock(&entry->rw_lock);
- if (entry->ready >= 2)
-- kobject_unregister(&entry->kobj);
-+ kobject_put(&entry->kobj);
- read_unlock(&entry->rw_lock);
- }
- }
-@@ -1041,8 +1038,7 @@ pdcs_unregister_pathentries(void)
- static int __init
- pdc_stable_init(void)
- {
-- struct subsys_attribute *attr;
-- int i, rc = 0, error = 0;
-+ int rc = 0, error = 0;
- u32 result;
-
- /* find the size of the stable storage */
-@@ -1062,21 +1058,24 @@ pdc_stable_init(void)
- /* the actual result is 16 bits away */
- pdcs_osid = (u16)(result >> 16);
-
-- /* For now we'll register the stable subsys within this driver */
-- if ((rc = firmware_register(&stable_subsys)))
-+ /* For now we'll register the directory at /sys/firmware/stable */
-+ stable_kobj = kobject_create_and_add("stable", firmware_kobj);
-+ if (!stable_kobj) {
-+ rc = -ENOMEM;
- goto fail_firmreg;
-+ }
-
- /* Don't forget the root entries */
-- for (i = 0; (attr = pdcs_subsys_attrs[i]) && !error; i++)
-- if (attr->show)
-- error = subsys_create_file(&stable_subsys, attr);
--
-- /* register the paths subsys as a subsystem of stable subsys */
-- kobj_set_kset_s(&paths_subsys, stable_subsys);
-- if ((rc = subsystem_register(&paths_subsys)))
-- goto fail_subsysreg;
-+ error = sysfs_create_group(stable_kobj, pdcs_attr_group);
+ #ifdef CONFIG_CCW_CONSOLE
+ static struct subchannel console_subchannel;
++static struct io_subchannel_private console_priv;
+ static int console_subchannel_in_use;
-- /* now we create all "files" for the paths subsys */
-+ /* register the paths kset as a child of the stable kset */
-+ paths_kset = kset_create_and_add("paths", NULL, stable_kobj);
-+ if (!paths_kset) {
-+ rc = -ENOMEM;
-+ goto fail_ksetreg;
-+ }
++void *cio_get_console_priv(void)
++{
++ return &console_priv;
++}
+
-+ /* now we create all "files" for the paths kset */
- if ((rc = pdcs_register_pathentries()))
- goto fail_pdcsreg;
-
-@@ -1084,10 +1083,10 @@ pdc_stable_init(void)
-
- fail_pdcsreg:
- pdcs_unregister_pathentries();
-- subsystem_unregister(&paths_subsys);
-+ kset_unregister(paths_kset);
-
--fail_subsysreg:
-- firmware_unregister(&stable_subsys);
-+fail_ksetreg:
-+ kobject_put(stable_kobj);
-
- fail_firmreg:
- printk(KERN_INFO PDCS_PREFIX " bailing out\n");
-@@ -1098,9 +1097,8 @@ static void __exit
- pdc_stable_exit(void)
+ /*
+ * busy wait for the next interrupt on the console
+ */
+@@ -738,9 +745,9 @@ cio_test_for_console(struct subchannel_id schid, void *data)
{
- pdcs_unregister_pathentries();
-- subsystem_unregister(&paths_subsys);
--
-- firmware_unregister(&stable_subsys);
-+ kset_unregister(paths_kset);
-+ kobject_put(stable_kobj);
- }
+ if (stsch_err(schid, &console_subchannel.schib) != 0)
+ return -ENXIO;
+- if (console_subchannel.schib.pmcw.dnv &&
+- console_subchannel.schib.pmcw.dev ==
+- console_devno) {
++ if ((console_subchannel.schib.pmcw.st == SUBCHANNEL_TYPE_IO) &&
++ console_subchannel.schib.pmcw.dnv &&
++ (console_subchannel.schib.pmcw.dev == console_devno)) {
+ console_irq = schid.sch_no;
+ return 1; /* found */
+ }
+@@ -758,6 +765,7 @@ cio_get_console_sch_no(void)
+ /* VM provided us with the irq number of the console. */
+ schid.sch_no = console_irq;
+ if (stsch(schid, &console_subchannel.schib) != 0 ||
++ (console_subchannel.schib.pmcw.st != SUBCHANNEL_TYPE_IO) ||
+ !console_subchannel.schib.pmcw.dnv)
+ return -1;
+ console_devno = console_subchannel.schib.pmcw.dev;
+@@ -804,7 +812,7 @@ cio_probe_console(void)
+ ctl_set_bit(6, 24);
+ console_subchannel.schib.pmcw.isc = 7;
+ console_subchannel.schib.pmcw.intparm =
+- (__u32)(unsigned long)&console_subchannel;
++ (u32)(addr_t)&console_subchannel;
+ ret = cio_modify(&console_subchannel);
+ if (ret) {
+ console_subchannel_in_use = 0;
+@@ -1022,7 +1030,7 @@ static int __reipl_subchannel_match(struct subchannel_id schid, void *data)
+
+ if (stsch_reset(schid, &schib))
+ return -ENXIO;
+- if (schib.pmcw.dnv &&
++ if ((schib.pmcw.st == SUBCHANNEL_TYPE_IO) && schib.pmcw.dnv &&
+ (schib.pmcw.dev == match_id->devid.devno) &&
+ (schid.ssid == match_id->devid.ssid)) {
+ match_id->schid = schid;
+@@ -1068,6 +1076,8 @@ int __init cio_get_iplinfo(struct cio_iplinfo *iplinfo)
+ return -ENODEV;
+ if (stsch(schid, &schib))
+ return -ENODEV;
++ if (schib.pmcw.st != SUBCHANNEL_TYPE_IO)
++ return -ENODEV;
+ if (!schib.pmcw.dnv)
+ return -ENODEV;
+ iplinfo->devno = schib.pmcw.dev;
+diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
+index 7446c39..52afa4c 100644
+--- a/drivers/s390/cio/cio.h
++++ b/drivers/s390/cio/cio.h
+@@ -11,32 +11,32 @@
+ * path management control word
+ */
+ struct pmcw {
+- __u32 intparm; /* interruption parameter */
+- __u32 qf : 1; /* qdio facility */
+- __u32 res0 : 1; /* reserved zeros */
+- __u32 isc : 3; /* interruption sublass */
+- __u32 res5 : 3; /* reserved zeros */
+- __u32 ena : 1; /* enabled */
+- __u32 lm : 2; /* limit mode */
+- __u32 mme : 2; /* measurement-mode enable */
+- __u32 mp : 1; /* multipath mode */
+- __u32 tf : 1; /* timing facility */
+- __u32 dnv : 1; /* device number valid */
+- __u32 dev : 16; /* device number */
+- __u8 lpm; /* logical path mask */
+- __u8 pnom; /* path not operational mask */
+- __u8 lpum; /* last path used mask */
+- __u8 pim; /* path installed mask */
+- __u16 mbi; /* measurement-block index */
+- __u8 pom; /* path operational mask */
+- __u8 pam; /* path available mask */
+- __u8 chpid[8]; /* CHPID 0-7 (if available) */
+- __u32 unused1 : 8; /* reserved zeros */
+- __u32 st : 3; /* subchannel type */
+- __u32 unused2 : 18; /* reserved zeros */
+- __u32 mbfc : 1; /* measurement block format control */
+- __u32 xmwme : 1; /* extended measurement word mode enable */
+- __u32 csense : 1; /* concurrent sense; can be enabled ...*/
++ u32 intparm; /* interruption parameter */
++ u32 qf : 1; /* qdio facility */
++ u32 res0 : 1; /* reserved zeros */
++ u32 isc : 3; /* interruption sublass */
++ u32 res5 : 3; /* reserved zeros */
++ u32 ena : 1; /* enabled */
++ u32 lm : 2; /* limit mode */
++ u32 mme : 2; /* measurement-mode enable */
++ u32 mp : 1; /* multipath mode */
++ u32 tf : 1; /* timing facility */
++ u32 dnv : 1; /* device number valid */
++ u32 dev : 16; /* device number */
++ u8 lpm; /* logical path mask */
++ u8 pnom; /* path not operational mask */
++ u8 lpum; /* last path used mask */
++ u8 pim; /* path installed mask */
++ u16 mbi; /* measurement-block index */
++ u8 pom; /* path operational mask */
++ u8 pam; /* path available mask */
++ u8 chpid[8]; /* CHPID 0-7 (if available) */
++ u32 unused1 : 8; /* reserved zeros */
++ u32 st : 3; /* subchannel type */
++ u32 unused2 : 18; /* reserved zeros */
++ u32 mbfc : 1; /* measurement block format control */
++ u32 xmwme : 1; /* extended measurement word mode enable */
++ u32 csense : 1; /* concurrent sense; can be enabled ...*/
+ /* ... per MSCH, however, if facility */
+ /* ... is not installed, this results */
+ /* ... in an operand exception. */
+@@ -52,31 +52,6 @@ struct schib {
+ __u8 mda[4]; /* model dependent area */
+ } __attribute__ ((packed,aligned(4)));
+-/*
+- * operation request block
+- */
+-struct orb {
+- __u32 intparm; /* interruption parameter */
+- __u32 key : 4; /* flags, like key, suspend control, etc. */
+- __u32 spnd : 1; /* suspend control */
+- __u32 res1 : 1; /* reserved */
+- __u32 mod : 1; /* modification control */
+- __u32 sync : 1; /* synchronize control */
+- __u32 fmt : 1; /* format control */
+- __u32 pfch : 1; /* prefetch control */
+- __u32 isic : 1; /* initial-status-interruption control */
+- __u32 alcc : 1; /* address-limit-checking control */
+- __u32 ssic : 1; /* suppress-suspended-interr. control */
+- __u32 res2 : 1; /* reserved */
+- __u32 c64 : 1; /* IDAW/QDIO 64 bit control */
+- __u32 i2k : 1; /* IDAW 2/4kB block size control */
+- __u32 lpm : 8; /* logical path mask */
+- __u32 ils : 1; /* incorrect length */
+- __u32 zero : 6; /* reserved zeros */
+- __u32 orbx : 1; /* ORB extension control */
+- __u32 cpa; /* channel program address */
+-} __attribute__ ((packed,aligned(4)));
+-
+ /* subchannel data structure used by I/O subroutines */
+ struct subchannel {
+ struct subchannel_id schid;
+@@ -85,7 +60,7 @@ struct subchannel {
+ enum {
+ SUBCHANNEL_TYPE_IO = 0,
+ SUBCHANNEL_TYPE_CHSC = 1,
+- SUBCHANNEL_TYPE_MESSAGE = 2,
++ SUBCHANNEL_TYPE_MSG = 2,
+ SUBCHANNEL_TYPE_ADM = 3,
+ } st; /* subchannel type */
-diff --git a/drivers/pci/hotplug/acpiphp_ibm.c b/drivers/pci/hotplug/acpiphp_ibm.c
-index 47d26b6..750ebd7 100644
---- a/drivers/pci/hotplug/acpiphp_ibm.c
-+++ b/drivers/pci/hotplug/acpiphp_ibm.c
-@@ -429,7 +429,7 @@ static int __init ibm_acpiphp_init(void)
- int retval = 0;
- acpi_status status;
- struct acpi_device *device;
-- struct kobject *sysdir = &pci_hotplug_slots_subsys.kobj;
-+ struct kobject *sysdir = &pci_hotplug_slots_kset->kobj;
+@@ -99,11 +74,10 @@ struct subchannel {
+ __u8 lpm; /* logical path mask */
+ __u8 opm; /* operational path mask */
+ struct schib schib; /* subchannel information block */
+- struct orb orb; /* operation request block */
+- struct ccw1 sense_ccw; /* static ccw for sense command */
+ struct chsc_ssd_info ssd_info; /* subchannel description */
+ struct device dev; /* entry in device tree */
+ struct css_driver *driver;
++ void *private; /* private per subchannel type data */
+ } __attribute__ ((aligned(8)));
- dbg("%s\n", __FUNCTION__);
+ #define IO_INTERRUPT_TYPE 0 /* I/O interrupt type */
+@@ -111,7 +85,7 @@ struct subchannel {
+ #define to_subchannel(n) container_of(n, struct subchannel, dev)
-@@ -476,7 +476,7 @@ init_return:
- static void __exit ibm_acpiphp_exit(void)
- {
- acpi_status status;
-- struct kobject *sysdir = &pci_hotplug_slots_subsys.kobj;
-+ struct kobject *sysdir = &pci_hotplug_slots_kset->kobj;
+ extern int cio_validate_subchannel (struct subchannel *, struct subchannel_id);
+-extern int cio_enable_subchannel (struct subchannel *, unsigned int);
++extern int cio_enable_subchannel(struct subchannel *, unsigned int, u32);
+ extern int cio_disable_subchannel (struct subchannel *);
+ extern int cio_cancel (struct subchannel *);
+ extern int cio_clear (struct subchannel *);
+@@ -125,6 +99,7 @@ extern int cio_get_options (struct subchannel *);
+ extern int cio_modify (struct subchannel *);
- dbg("%s\n", __FUNCTION__);
+ int cio_create_sch_lock(struct subchannel *);
++void do_adapter_IO(void);
-diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
-index 01c351c..47bb0e1 100644
---- a/drivers/pci/hotplug/pci_hotplug_core.c
-+++ b/drivers/pci/hotplug/pci_hotplug_core.c
-@@ -61,7 +61,7 @@ static int debug;
+ /* Use with care. */
+ #ifdef CONFIG_CCW_CONSOLE
+@@ -133,10 +108,12 @@ extern void cio_release_console(void);
+ extern int cio_is_console(struct subchannel_id);
+ extern struct subchannel *cio_get_console_subchannel(void);
+ extern spinlock_t * cio_get_console_lock(void);
++extern void *cio_get_console_priv(void);
+ #else
+ #define cio_is_console(schid) 0
+ #define cio_get_console_subchannel() NULL
+-#define cio_get_console_lock() NULL;
++#define cio_get_console_lock() NULL
++#define cio_get_console_priv() NULL
+ #endif
- static LIST_HEAD(pci_hotplug_slot_list);
+ extern int cio_show_msg;
+diff --git a/drivers/s390/cio/cio_debug.h b/drivers/s390/cio/cio_debug.h
+index c9bf898..d7429ef 100644
+--- a/drivers/s390/cio/cio_debug.h
++++ b/drivers/s390/cio/cio_debug.h
+@@ -8,20 +8,19 @@ extern debug_info_t *cio_debug_msg_id;
+ extern debug_info_t *cio_debug_trace_id;
+ extern debug_info_t *cio_debug_crw_id;
--struct kset pci_hotplug_slots_subsys;
-+struct kset *pci_hotplug_slots_kset;
+-#define CIO_TRACE_EVENT(imp, txt) do { \
++#define CIO_TRACE_EVENT(imp, txt) do { \
+ debug_text_event(cio_debug_trace_id, imp, txt); \
+ } while (0)
- static ssize_t hotplug_slot_attr_show(struct kobject *kobj,
- struct attribute *attr, char *buf)
-@@ -96,8 +96,6 @@ static struct kobj_type hotplug_slot_ktype = {
- .release = &hotplug_slot_release,
- };
+-#define CIO_MSG_EVENT(imp, args...) do { \
+- debug_sprintf_event(cio_debug_msg_id, imp , ##args); \
++#define CIO_MSG_EVENT(imp, args...) do { \
++ debug_sprintf_event(cio_debug_msg_id, imp , ##args); \
+ } while (0)
--decl_subsys_name(pci_hotplug_slots, slots, &hotplug_slot_ktype, NULL);
--
- /* these strings match up with the values in pci_bus_speed */
- static char *pci_bus_speed_strings[] = {
- "33 MHz PCI", /* 0x00 */
-@@ -632,18 +630,19 @@ int pci_hp_register (struct hotplug_slot *slot)
- return -EINVAL;
- }
+-#define CIO_CRW_EVENT(imp, args...) do { \
+- debug_sprintf_event(cio_debug_crw_id, imp , ##args); \
++#define CIO_CRW_EVENT(imp, args...) do { \
++ debug_sprintf_event(cio_debug_crw_id, imp , ##args); \
+ } while (0)
-- kobject_set_name(&slot->kobj, "%s", slot->name);
-- kobj_set_kset_s(slot, pci_hotplug_slots_subsys);
--
- /* this can fail if we have already registered a slot with the same name */
-- if (kobject_register(&slot->kobj)) {
-- err("Unable to register kobject");
-+ slot->kobj.kset = pci_hotplug_slots_kset;
-+ result = kobject_init_and_add(&slot->kobj, &hotplug_slot_ktype, NULL,
-+ "%s", slot->name);
-+ if (result) {
-+ err("Unable to register kobject '%s'", slot->name);
- return -EINVAL;
+-static inline void
+-CIO_HEX_EVENT(int level, void *data, int length)
++static inline void CIO_HEX_EVENT(int level, void *data, int length)
+ {
+ if (unlikely(!cio_debug_trace_id))
+ return;
+@@ -32,9 +31,10 @@ CIO_HEX_EVENT(int level, void *data, int length)
}
--
-+
- list_add (&slot->slot_list, &pci_hotplug_slot_list);
-
- result = fs_add_slot (slot);
-+ kobject_uevent(&slot->kobj, KOBJ_ADD);
- dbg ("Added slot %s to the list\n", slot->name);
- return result;
}
-@@ -672,7 +671,7 @@ int pci_hp_deregister (struct hotplug_slot *slot)
- fs_remove_slot (slot);
- dbg ("Removed slot %s from the list\n", slot->name);
-- kobject_unregister(&slot->kobj);
-+ kobject_put(&slot->kobj);
- return 0;
- }
+-#define CIO_DEBUG(printk_level,event_level,msg...) ({ \
+- if (cio_show_msg) printk(printk_level msg); \
+- CIO_MSG_EVENT (event_level, msg); \
+-})
++#define CIO_DEBUG(printk_level, event_level, msg...) do { \
++ if (cio_show_msg) \
++ printk(printk_level "cio: " msg); \
++ CIO_MSG_EVENT(event_level, msg); \
++ } while (0)
-@@ -700,11 +699,15 @@ int __must_check pci_hp_change_slot_info(struct hotplug_slot *slot,
- static int __init pci_hotplug_init (void)
- {
- int result;
-+ struct kset *pci_bus_kset;
+ #endif
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index c3df2cd..3b45bbe 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -51,6 +51,62 @@ for_each_subchannel(int(*fn)(struct subchannel_id, void *), void *data)
+ return ret;
+ }
-- kobj_set_kset_s(&pci_hotplug_slots_subsys, pci_bus_type.subsys);
-- result = subsystem_register(&pci_hotplug_slots_subsys);
-- if (result) {
-- err("Register subsys with error %d\n", result);
-+ pci_bus_kset = bus_get_kset(&pci_bus_type);
++struct cb_data {
++ void *data;
++ struct idset *set;
++ int (*fn_known_sch)(struct subchannel *, void *);
++ int (*fn_unknown_sch)(struct subchannel_id, void *);
++};
+
-+ pci_hotplug_slots_kset = kset_create_and_add("slots", NULL,
-+ &pci_bus_kset->kobj);
-+ if (!pci_hotplug_slots_kset) {
-+ result = -ENOMEM;
-+ err("Register subsys error\n");
- goto exit;
- }
- result = cpci_hotplug_init(debug);
-@@ -715,9 +718,9 @@ static int __init pci_hotplug_init (void)
-
- info (DRIVER_DESC " version: " DRIVER_VERSION "\n");
- goto exit;
--
++static int call_fn_known_sch(struct device *dev, void *data)
++{
++ struct subchannel *sch = to_subchannel(dev);
++ struct cb_data *cb = data;
++ int rc = 0;
+
- err_subsys:
-- subsystem_unregister(&pci_hotplug_slots_subsys);
-+ kset_unregister(pci_hotplug_slots_kset);
- exit:
- return result;
- }
-@@ -725,7 +728,7 @@ exit:
- static void __exit pci_hotplug_exit (void)
- {
- cpci_hotplug_exit();
-- subsystem_unregister(&pci_hotplug_slots_subsys);
-+ kset_unregister(pci_hotplug_slots_kset);
- }
-
- module_init(pci_hotplug_init);
-@@ -737,7 +740,7 @@ MODULE_LICENSE("GPL");
- module_param(debug, bool, 0644);
- MODULE_PARM_DESC(debug, "Debugging mode enabled or not");
-
--EXPORT_SYMBOL_GPL(pci_hotplug_slots_subsys);
-+EXPORT_SYMBOL_GPL(pci_hotplug_slots_kset);
- EXPORT_SYMBOL_GPL(pci_hp_register);
- EXPORT_SYMBOL_GPL(pci_hp_deregister);
- EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);
-diff --git a/drivers/pci/hotplug/rpadlpar_sysfs.c b/drivers/pci/hotplug/rpadlpar_sysfs.c
-index a080fed..e32148a 100644
---- a/drivers/pci/hotplug/rpadlpar_sysfs.c
-+++ b/drivers/pci/hotplug/rpadlpar_sysfs.c
-@@ -23,44 +23,13 @@
-
- #define MAX_DRC_NAME_LEN 64
-
--/* Store return code of dlpar operation in attribute struct */
--struct dlpar_io_attr {
-- int rc;
-- struct attribute attr;
-- ssize_t (*store)(struct dlpar_io_attr *dlpar_attr, const char *buf,
-- size_t nbytes);
--};
-
--/* Common show callback for all attrs, display the return code
-- * of the dlpar op */
--static ssize_t
--dlpar_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
--{
-- struct dlpar_io_attr *dlpar_attr = container_of(attr,
-- struct dlpar_io_attr, attr);
-- return sprintf(buf, "%d\n", dlpar_attr->rc);
--}
--
--static ssize_t
--dlpar_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char *buf, size_t nbytes)
--{
-- struct dlpar_io_attr *dlpar_attr = container_of(attr,
-- struct dlpar_io_attr, attr);
-- return dlpar_attr->store ?
-- dlpar_attr->store(dlpar_attr, buf, nbytes) : -EIO;
--}
--
--static struct sysfs_ops dlpar_attr_sysfs_ops = {
-- .show = dlpar_attr_show,
-- .store = dlpar_attr_store,
--};
--
--static ssize_t add_slot_store(struct dlpar_io_attr *dlpar_attr,
-- const char *buf, size_t nbytes)
-+static ssize_t add_slot_store(struct kobject *kobj, struct kobj_attribute *attr,
-+ const char *buf, size_t nbytes)
- {
- char drc_name[MAX_DRC_NAME_LEN];
- char *end;
++ idset_sch_del(cb->set, sch->schid);
++ if (cb->fn_known_sch)
++ rc = cb->fn_known_sch(sch, cb->data);
++ return rc;
++}
++
++static int call_fn_unknown_sch(struct subchannel_id schid, void *data)
++{
++ struct cb_data *cb = data;
++ int rc = 0;
++
++ if (idset_sch_contains(cb->set, schid))
++ rc = cb->fn_unknown_sch(schid, cb->data);
++ return rc;
++}
++
++int for_each_subchannel_staged(int (*fn_known)(struct subchannel *, void *),
++ int (*fn_unknown)(struct subchannel_id,
++ void *), void *data)
++{
++ struct cb_data cb;
+ int rc;
-
- if (nbytes >= MAX_DRC_NAME_LEN)
- return 0;
-@@ -72,15 +41,25 @@ static ssize_t add_slot_store(struct dlpar_io_attr *dlpar_attr,
- end = &drc_name[nbytes];
- *end = '\0';
-
-- dlpar_attr->rc = dlpar_add_slot(drc_name);
-+ rc = dlpar_add_slot(drc_name);
++
++ cb.set = idset_sch_new();
++ if (!cb.set)
++ return -ENOMEM;
++ idset_fill(cb.set);
++ cb.data = data;
++ cb.fn_known_sch = fn_known;
++ cb.fn_unknown_sch = fn_unknown;
++ /* Process registered subchannels. */
++ rc = bus_for_each_dev(&css_bus_type, NULL, &cb, call_fn_known_sch);
+ if (rc)
-+ return rc;
-
- return nbytes;
++ goto out;
++ /* Process unregistered subchannels. */
++ if (fn_unknown)
++ rc = for_each_subchannel(call_fn_unknown_sch, &cb);
++out:
++ idset_free(cb.set);
++
++ return rc;
++}
++
+ static struct subchannel *
+ css_alloc_subchannel(struct subchannel_id schid)
+ {
+@@ -77,7 +133,7 @@ css_alloc_subchannel(struct subchannel_id schid)
+ * This is fine even on 64bit since the subchannel is always located
+ * under 2G.
+ */
+- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
++ sch->schib.pmcw.intparm = (u32)(addr_t)sch;
+ ret = cio_modify(sch);
+ if (ret) {
+ kfree(sch->lock);
+@@ -237,11 +293,25 @@ get_subchannel_by_schid(struct subchannel_id schid)
+ return dev ? to_subchannel(dev) : NULL;
}
--static ssize_t remove_slot_store(struct dlpar_io_attr *dlpar_attr,
-- const char *buf, size_t nbytes)
-+static ssize_t add_slot_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
++/**
++ * css_sch_is_valid() - check if a subchannel is valid
++ * @schib: subchannel information block for the subchannel
++ */
++int css_sch_is_valid(struct schib *schib)
+{
-+ return sprintf(buf, "0\n");
++ if ((schib->pmcw.st == SUBCHANNEL_TYPE_IO) && !schib->pmcw.dnv)
++ return 0;
++ return 1;
+}
++EXPORT_SYMBOL_GPL(css_sch_is_valid);
+
-+static ssize_t remove_slot_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t nbytes)
+ static int css_get_subchannel_status(struct subchannel *sch)
{
- char drc_name[MAX_DRC_NAME_LEN];
-+ int rc;
- char *end;
-
- if (nbytes >= MAX_DRC_NAME_LEN)
-@@ -93,22 +72,24 @@ static ssize_t remove_slot_store(struct dlpar_io_attr *dlpar_attr,
- end = &drc_name[nbytes];
- *end = '\0';
-
-- dlpar_attr->rc = dlpar_remove_slot(drc_name);
-+ rc = dlpar_remove_slot(drc_name);
-+ if (rc)
-+ return rc;
+ struct schib schib;
- return nbytes;
+- if (stsch(sch->schid, &schib) || !schib.pmcw.dnv)
++ if (stsch(sch->schid, &schib))
++ return CIO_GONE;
++ if (!css_sch_is_valid(&schib))
+ return CIO_GONE;
+ if (sch->schib.pmcw.dnv && (schib.pmcw.dev != sch->schib.pmcw.dev))
+ return CIO_REVALIDATE;
+@@ -293,7 +363,7 @@ static int css_evaluate_known_subchannel(struct subchannel *sch, int slow)
+ action = UNREGISTER;
+ if (sch->driver && sch->driver->notify) {
+ spin_unlock_irqrestore(sch->lock, flags);
+- ret = sch->driver->notify(&sch->dev, event);
++ ret = sch->driver->notify(sch, event);
+ spin_lock_irqsave(sch->lock, flags);
+ if (ret)
+ action = NONE;
+@@ -349,7 +419,7 @@ static int css_evaluate_new_subchannel(struct subchannel_id schid, int slow)
+ /* Will be done on the slow path. */
+ return -EAGAIN;
+ }
+- if (stsch_err(schid, &schib) || !schib.pmcw.dnv) {
++ if (stsch_err(schid, &schib) || !css_sch_is_valid(&schib)) {
+ /* Unusable - ignore. */
+ return 0;
+ }
+@@ -388,20 +458,56 @@ static int __init slow_subchannel_init(void)
+ return 0;
}
--static struct dlpar_io_attr add_slot_attr = {
-- .rc = 0,
-- .attr = { .name = ADD_SLOT_ATTR_NAME, .mode = 0644, },
-- .store = add_slot_store,
--};
-+static ssize_t remove_slot_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
+-static void css_slow_path_func(struct work_struct *unused)
++static int slow_eval_known_fn(struct subchannel *sch, void *data)
+ {
+- struct subchannel_id schid;
++ int eval;
++ int rc;
+
+- CIO_TRACE_EVENT(4, "slowpath");
+ spin_lock_irq(&slow_subchannel_lock);
+- init_subchannel_id(&schid);
+- while (idset_sch_get_first(slow_subchannel_set, &schid)) {
+- idset_sch_del(slow_subchannel_set, schid);
+- spin_unlock_irq(&slow_subchannel_lock);
+- css_evaluate_subchannel(schid, 1);
+- spin_lock_irq(&slow_subchannel_lock);
++ eval = idset_sch_contains(slow_subchannel_set, sch->schid);
++ idset_sch_del(slow_subchannel_set, sch->schid);
++ spin_unlock_irq(&slow_subchannel_lock);
++ if (eval) {
++ rc = css_evaluate_known_subchannel(sch, 1);
++ if (rc == -EAGAIN)
++ css_schedule_eval(sch->schid);
+ }
++ return 0;
++}
++
++static int slow_eval_unknown_fn(struct subchannel_id schid, void *data)
+{
-+ return sprintf(buf, "0\n");
++ int eval;
++ int rc = 0;
++
++ spin_lock_irq(&slow_subchannel_lock);
++ eval = idset_sch_contains(slow_subchannel_set, schid);
++ idset_sch_del(slow_subchannel_set, schid);
+ spin_unlock_irq(&slow_subchannel_lock);
++ if (eval) {
++ rc = css_evaluate_new_subchannel(schid, 1);
++ switch (rc) {
++ case -EAGAIN:
++ css_schedule_eval(schid);
++ rc = 0;
++ break;
++ case -ENXIO:
++ case -ENOMEM:
++ case -EIO:
++ /* These should abort looping */
++ break;
++ default:
++ rc = 0;
++ }
++ }
++ return rc;
+}
-
--static struct dlpar_io_attr remove_slot_attr = {
-- .rc = 0,
-- .attr = { .name = REMOVE_SLOT_ATTR_NAME, .mode = 0644},
-- .store = remove_slot_store,
--};
-+static struct kobj_attribute add_slot_attr =
-+ __ATTR(ADD_SLOT_ATTR_NAME, 0644, add_slot_show, add_slot_store);
+
-+static struct kobj_attribute remove_slot_attr =
-+ __ATTR(REMOVE_SLOT_ATTR_NAME, 0644, remove_slot_show, remove_slot_store);
++static void css_slow_path_func(struct work_struct *unused)
++{
++ CIO_TRACE_EVENT(4, "slowpath");
++ for_each_subchannel_staged(slow_eval_known_fn, slow_eval_unknown_fn,
++ NULL);
+ }
- static struct attribute *default_attrs[] = {
- &add_slot_attr.attr,
-@@ -116,37 +97,29 @@ static struct attribute *default_attrs[] = {
- NULL,
- };
+ static DECLARE_WORK(slow_path_work, css_slow_path_func);
+@@ -430,7 +536,6 @@ void css_schedule_eval_all(void)
+ /* Reprobe subchannel if unregistered. */
+ static int reprobe_subchannel(struct subchannel_id schid, void *data)
+ {
+- struct subchannel *sch;
+ int ret;
--static void dlpar_io_release(struct kobject *kobj)
--{
-- /* noop */
-- return;
--}
+ CIO_MSG_EVENT(6, "cio: reprobe 0.%x.%04x\n",
+@@ -438,13 +543,6 @@ static int reprobe_subchannel(struct subchannel_id schid, void *data)
+ if (need_reprobe)
+ return -EAGAIN;
+
+- sch = get_subchannel_by_schid(schid);
+- if (sch) {
+- /* Already known. */
+- put_device(&sch->dev);
+- return 0;
+- }
-
--struct kobj_type ktype_dlpar_io = {
-- .release = dlpar_io_release,
-- .sysfs_ops = &dlpar_attr_sysfs_ops,
-- .default_attrs = default_attrs,
-+static struct attribute_group dlpar_attr_group = {
-+ .attrs = default_attrs,
- };
+ ret = css_probe_device(schid);
+ switch (ret) {
+ case 0:
+@@ -472,7 +570,7 @@ static void reprobe_all(struct work_struct *unused)
+ /* Make sure initial subchannel scan is done. */
+ wait_event(ccw_device_init_wq,
+ atomic_read(&ccw_device_init_count) == 0);
+- ret = for_each_subchannel(reprobe_subchannel, NULL);
++ ret = for_each_subchannel_staged(NULL, reprobe_subchannel, NULL);
--struct kset dlpar_io_kset = {
-- .kobj = {.ktype = &ktype_dlpar_io,
-- .parent = &pci_hotplug_slots_subsys.kobj},
-- .ktype = &ktype_dlpar_io,
--};
-+static struct kobject *dlpar_kobj;
+ CIO_MSG_EVENT(2, "reprobe done (rc=%d, need_reprobe=%d)\n", ret,
+ need_reprobe);
+@@ -787,8 +885,8 @@ int sch_is_pseudo_sch(struct subchannel *sch)
+ static int
+ css_bus_match (struct device *dev, struct device_driver *drv)
+ {
+- struct subchannel *sch = container_of (dev, struct subchannel, dev);
+- struct css_driver *driver = container_of (drv, struct css_driver, drv);
++ struct subchannel *sch = to_subchannel(dev);
++ struct css_driver *driver = to_cssdriver(drv);
- int dlpar_sysfs_init(void)
+ if (sch->st == driver->subchannel_type)
+ return 1;
+@@ -796,32 +894,36 @@ css_bus_match (struct device *dev, struct device_driver *drv)
+ return 0;
+ }
+
+-static int
+-css_probe (struct device *dev)
++static int css_probe(struct device *dev)
{
-- kobject_set_name(&dlpar_io_kset.kobj, DLPAR_KOBJ_NAME);
-- if (kset_register(&dlpar_io_kset)) {
-- printk(KERN_ERR "rpadlpar_io: cannot register kset for %s\n",
-- kobject_name(&dlpar_io_kset.kobj));
-+ int error;
-+
-+ dlpar_kobj = kobject_create_and_add(DLPAR_KOBJ_NAME,
-+ &pci_hotplug_slots_kset->kobj);
-+ if (!dlpar_kobj)
- return -EINVAL;
-- }
+ struct subchannel *sch;
++ int ret;
-- return 0;
-+ error = sysfs_create_group(dlpar_kobj, &dlpar_attr_group);
-+ if (error)
-+ kobject_put(dlpar_kobj);
-+ return error;
+ sch = to_subchannel(dev);
+- sch->driver = container_of (dev->driver, struct css_driver, drv);
+- return (sch->driver->probe ? sch->driver->probe(sch) : 0);
++ sch->driver = to_cssdriver(dev->driver);
++ ret = sch->driver->probe ? sch->driver->probe(sch) : 0;
++ if (ret)
++ sch->driver = NULL;
++ return ret;
}
- void dlpar_sysfs_exit(void)
+-static int
+-css_remove (struct device *dev)
++static int css_remove(struct device *dev)
{
-- kset_unregister(&dlpar_io_kset);
-+ sysfs_remove_group(dlpar_kobj, &dlpar_attr_group);
-+ kobject_put(dlpar_kobj);
+ struct subchannel *sch;
++ int ret;
+
+ sch = to_subchannel(dev);
+- return (sch->driver->remove ? sch->driver->remove(sch) : 0);
++ ret = sch->driver->remove ? sch->driver->remove(sch) : 0;
++ sch->driver = NULL;
++ return ret;
}
-diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
-index 6d1a216..c4fa35d 100644
---- a/drivers/pci/pci-driver.c
-+++ b/drivers/pci/pci-driver.c
-@@ -1,6 +1,11 @@
- /*
- * drivers/pci/pci-driver.c
- *
-+ * (C) Copyright 2002-2004, 2007 Greg Kroah-Hartman <greg at kroah.com>
-+ * (C) Copyright 2007 Novell Inc.
-+ *
-+ * Released under the GPL v2 only.
-+ *
- */
- #include <linux/pci.h>
-@@ -96,17 +101,21 @@ pci_create_newid_file(struct pci_driver *drv)
+-static void
+-css_shutdown (struct device *dev)
++static void css_shutdown(struct device *dev)
{
- int error = 0;
- if (drv->probe != NULL)
-- error = sysfs_create_file(&drv->driver.kobj,
-- &driver_attr_new_id.attr);
-+ error = driver_create_file(&drv->driver, &driver_attr_new_id);
- return error;
+ struct subchannel *sch;
+
+ sch = to_subchannel(dev);
+- if (sch->driver->shutdown)
++ if (sch->driver && sch->driver->shutdown)
+ sch->driver->shutdown(sch);
}
-+static void pci_remove_newid_file(struct pci_driver *drv)
+@@ -833,6 +935,34 @@ struct bus_type css_bus_type = {
+ .shutdown = css_shutdown,
+ };
+
++/**
++ * css_driver_register - register a css driver
++ * @cdrv: css driver to register
++ *
++ * This is mainly a wrapper around driver_register that sets name
++ * and bus_type in the embedded struct device_driver correctly.
++ */
++int css_driver_register(struct css_driver *cdrv)
+{
-+ driver_remove_file(&drv->driver, &driver_attr_new_id);
++ cdrv->drv.name = cdrv->name;
++ cdrv->drv.bus = &css_bus_type;
++ cdrv->drv.owner = cdrv->owner;
++ return driver_register(&cdrv->drv);
+}
- #else /* !CONFIG_HOTPLUG */
- static inline void pci_free_dynids(struct pci_driver *drv) {}
- static inline int pci_create_newid_file(struct pci_driver *drv)
- {
- return 0;
- }
-+static inline void pci_remove_newid_file(struct pci_driver *drv) {}
- #endif
++EXPORT_SYMBOL_GPL(css_driver_register);
++
++/**
++ * css_driver_unregister - unregister a css driver
++ * @cdrv: css driver to unregister
++ *
++ * This is a wrapper around driver_unregister.
++ */
++void css_driver_unregister(struct css_driver *cdrv)
++{
++ driver_unregister(&cdrv->drv);
++}
++EXPORT_SYMBOL_GPL(css_driver_unregister);
++
+ subsys_initcall(init_channel_subsystem);
- /**
-@@ -352,50 +361,6 @@ static void pci_device_shutdown(struct device *dev)
- drv->shutdown(pci_dev);
- }
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/s390/cio/css.h b/drivers/s390/cio/css.h
+index 81215ef..b705545 100644
+--- a/drivers/s390/cio/css.h
++++ b/drivers/s390/cio/css.h
+@@ -58,64 +58,6 @@ struct pgid {
+ __u32 tod_high; /* high word TOD clock */
+ } __attribute__ ((packed));
--#define kobj_to_pci_driver(obj) container_of(obj, struct device_driver, kobj)
--#define attr_to_driver_attribute(obj) container_of(obj, struct driver_attribute, attr)
--
--static ssize_t
--pci_driver_attr_show(struct kobject * kobj, struct attribute *attr, char *buf)
--{
-- struct device_driver *driver = kobj_to_pci_driver(kobj);
-- struct driver_attribute *dattr = attr_to_driver_attribute(attr);
-- ssize_t ret;
--
-- if (!get_driver(driver))
-- return -ENODEV;
--
-- ret = dattr->show ? dattr->show(driver, buf) : -EIO;
--
-- put_driver(driver);
-- return ret;
--}
--
--static ssize_t
--pci_driver_attr_store(struct kobject * kobj, struct attribute *attr,
-- const char *buf, size_t count)
--{
-- struct device_driver *driver = kobj_to_pci_driver(kobj);
-- struct driver_attribute *dattr = attr_to_driver_attribute(attr);
-- ssize_t ret;
--
-- if (!get_driver(driver))
-- return -ENODEV;
--
-- ret = dattr->store ? dattr->store(driver, buf, count) : -EIO;
+-#define MAX_CIWS 8
-
-- put_driver(driver);
-- return ret;
--}
+-/*
+- * sense-id response buffer layout
+- */
+-struct senseid {
+- /* common part */
+- __u8 reserved; /* always 0x'FF' */
+- __u16 cu_type; /* control unit type */
+- __u8 cu_model; /* control unit model */
+- __u16 dev_type; /* device type */
+- __u8 dev_model; /* device model */
+- __u8 unused; /* padding byte */
+- /* extended part */
+- struct ciw ciw[MAX_CIWS]; /* variable # of CIWs */
+-} __attribute__ ((packed,aligned(4)));
-
--static struct sysfs_ops pci_driver_sysfs_ops = {
-- .show = pci_driver_attr_show,
-- .store = pci_driver_attr_store,
--};
--static struct kobj_type pci_driver_kobj_type = {
-- .sysfs_ops = &pci_driver_sysfs_ops,
+-struct ccw_device_private {
+- struct ccw_device *cdev;
+- struct subchannel *sch;
+- int state; /* device state */
+- atomic_t onoff;
+- unsigned long registered;
+- struct ccw_dev_id dev_id; /* device id */
+- struct subchannel_id schid; /* subchannel number */
+- __u8 imask; /* lpm mask for SNID/SID/SPGID */
+- int iretry; /* retry counter SNID/SID/SPGID */
+- struct {
+- unsigned int fast:1; /* post with "channel end" */
+- unsigned int repall:1; /* report every interrupt status */
+- unsigned int pgroup:1; /* do path grouping */
+- unsigned int force:1; /* allow forced online */
+- } __attribute__ ((packed)) options;
+- struct {
+- unsigned int pgid_single:1; /* use single path for Set PGID */
+- unsigned int esid:1; /* Ext. SenseID supported by HW */
+- unsigned int dosense:1; /* delayed SENSE required */
+- unsigned int doverify:1; /* delayed path verification */
+- unsigned int donotify:1; /* call notify function */
+- unsigned int recog_done:1; /* dev. recog. complete */
+- unsigned int fake_irb:1; /* deliver faked irb */
+- unsigned int intretry:1; /* retry internal operation */
+- } __attribute__((packed)) flags;
+- unsigned long intparm; /* user interruption parameter */
+- struct qdio_irq *qdio_data;
+- struct irb irb; /* device status */
+- struct senseid senseid; /* SenseID info */
+- struct pgid pgid[8]; /* path group IDs per chpid*/
+- struct ccw1 iccws[2]; /* ccws for SNID/SID/SPGID commands */
+- struct work_struct kick_work;
+- wait_queue_head_t wait_q;
+- struct timer_list timer;
+- void *cmb; /* measurement information */
+- struct list_head cmb_list; /* list of measured devices */
+- u64 cmb_start_time; /* clock value of cmb reset */
+- void *cmb_wait; /* deferred cmb enable/disable */
-};
-
- /**
- * __pci_register_driver - register a new pci driver
- * @drv: the driver structure to register
-@@ -417,7 +382,6 @@ int __pci_register_driver(struct pci_driver *drv, struct module *owner,
- drv->driver.bus = &pci_bus_type;
- drv->driver.owner = owner;
- drv->driver.mod_name = mod_name;
-- drv->driver.kobj.ktype = &pci_driver_kobj_type;
-
- spin_lock_init(&drv->dynids.lock);
- INIT_LIST_HEAD(&drv->dynids.list);
-@@ -447,6 +411,7 @@ int __pci_register_driver(struct pci_driver *drv, struct module *owner,
- void
- pci_unregister_driver(struct pci_driver *drv)
- {
-+ pci_remove_newid_file(drv);
- driver_unregister(&drv->driver);
- pci_free_dynids(drv);
- }
-diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
-index c5ca313..5fd5852 100644
---- a/drivers/pci/probe.c
-+++ b/drivers/pci/probe.c
-@@ -1210,16 +1210,19 @@ static void __init pci_sort_breadthfirst_klist(void)
- struct klist_node *n;
- struct device *dev;
- struct pci_dev *pdev;
-+ struct klist *device_klist;
+ /*
+ * A css driver handles all subchannels of one type.
+ * Currently, we only care about I/O subchannels (type 0), these
+@@ -123,25 +65,35 @@ struct ccw_device_private {
+ */
+ struct subchannel;
+ struct css_driver {
++ struct module *owner;
+ unsigned int subchannel_type;
+ struct device_driver drv;
+- void (*irq)(struct device *);
+- int (*notify)(struct device *, int);
+- void (*verify)(struct device *);
+- void (*termination)(struct device *);
++ void (*irq)(struct subchannel *);
++ int (*notify)(struct subchannel *, int);
++ void (*verify)(struct subchannel *);
++ void (*termination)(struct subchannel *);
+ int (*probe)(struct subchannel *);
+ int (*remove)(struct subchannel *);
+ void (*shutdown)(struct subchannel *);
++ const char *name;
+ };
-- spin_lock(&pci_bus_type.klist_devices.k_lock);
-- list_for_each_safe(pos, tmp, &pci_bus_type.klist_devices.k_list) {
-+ device_klist = bus_get_device_klist(&pci_bus_type);
++#define to_cssdriver(n) container_of(n, struct css_driver, drv)
+
-+ spin_lock(&device_klist->k_lock);
-+ list_for_each_safe(pos, tmp, &device_klist->k_list) {
- n = container_of(pos, struct klist_node, n_node);
- dev = container_of(n, struct device, knode_bus);
- pdev = to_pci_dev(dev);
- pci_insertion_sort_klist(pdev, &sorted_devices);
- }
-- list_splice(&sorted_devices, &pci_bus_type.klist_devices.k_list);
-- spin_unlock(&pci_bus_type.klist_devices.k_lock);
-+ list_splice(&sorted_devices, &device_klist->k_list);
-+ spin_unlock(&device_klist->k_lock);
- }
+ /*
+ * all css_drivers have the css_bus_type
+ */
+ extern struct bus_type css_bus_type;
- static void __init pci_insertion_sort_devices(struct pci_dev *a, struct list_head *list)
-diff --git a/drivers/pcmcia/ds.c b/drivers/pcmcia/ds.c
-index 5cf89a9..15c18f5 100644
---- a/drivers/pcmcia/ds.c
-+++ b/drivers/pcmcia/ds.c
-@@ -312,8 +312,7 @@ pcmcia_create_newid_file(struct pcmcia_driver *drv)
- {
- int error = 0;
- if (drv->probe != NULL)
-- error = sysfs_create_file(&drv->drv.kobj,
-- &driver_attr_new_id.attr);
-+ error = driver_create_file(&drv->drv, &driver_attr_new_id);
- return error;
- }
++extern int css_driver_register(struct css_driver *);
++extern void css_driver_unregister(struct css_driver *);
++
+ extern void css_sch_device_unregister(struct subchannel *);
+ extern struct subchannel * get_subchannel_by_schid(struct subchannel_id);
+ extern int css_init_done;
++int for_each_subchannel_staged(int (*fn_known)(struct subchannel *, void *),
++ int (*fn_unknown)(struct subchannel_id,
++ void *), void *data);
+ extern int for_each_subchannel(int(*fn)(struct subchannel_id, void *), void *);
+ extern void css_process_crw(int, int);
+ extern void css_reiterate_subchannels(void);
+@@ -188,6 +140,8 @@ void css_schedule_eval(struct subchannel_id schid);
+ void css_schedule_eval_all(void);
-diff --git a/drivers/pcmcia/pxa2xx_base.c b/drivers/pcmcia/pxa2xx_base.c
-index 874923f..e439044 100644
---- a/drivers/pcmcia/pxa2xx_base.c
-+++ b/drivers/pcmcia/pxa2xx_base.c
-@@ -29,6 +29,7 @@
- #include <asm/irq.h>
- #include <asm/system.h>
- #include <asm/arch/pxa-regs.h>
-+#include <asm/arch/pxa2xx-regs.h>
+ int sch_is_pseudo_sch(struct subchannel *);
++struct schib;
++int css_sch_is_valid(struct schib *);
- #include <pcmcia/cs_types.h>
- #include <pcmcia/ss.h>
-diff --git a/drivers/power/apm_power.c b/drivers/power/apm_power.c
-index bbf3ee1..7e29b90 100644
---- a/drivers/power/apm_power.c
-+++ b/drivers/power/apm_power.c
-@@ -13,6 +13,7 @@
- #include <linux/power_supply.h>
- #include <linux/apm-emulation.h>
+ extern struct workqueue_struct *slow_path_wq;
-+static DEFINE_MUTEX(apm_mutex);
- #define PSY_PROP(psy, prop, val) psy->get_property(psy, \
- POWER_SUPPLY_PROP_##prop, val)
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 74f6b53..d35dc3f 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -17,6 +17,7 @@
+ #include <linux/list.h>
+ #include <linux/device.h>
+ #include <linux/workqueue.h>
++#include <linux/timer.h>
-@@ -23,67 +24,86 @@
+ #include <asm/ccwdev.h>
+ #include <asm/cio.h>
+@@ -28,6 +29,12 @@
+ #include "css.h"
+ #include "device.h"
+ #include "ioasm.h"
++#include "io_sch.h"
++
++static struct timer_list recovery_timer;
++static spinlock_t recovery_lock;
++static int recovery_phase;
++static const unsigned long recovery_delay[] = { 3, 30, 300 };
- static struct power_supply *main_battery;
+ /******************* bus type handling ***********************/
--static void find_main_battery(void)
--{
-- struct device *dev;
-- struct power_supply *bat = NULL;
-- struct power_supply *max_charge_bat = NULL;
-- struct power_supply *max_energy_bat = NULL;
-+struct find_bat_param {
-+ struct power_supply *main;
-+ struct power_supply *bat;
-+ struct power_supply *max_charge_bat;
-+ struct power_supply *max_energy_bat;
- union power_supply_propval full;
-- int max_charge = 0;
-- int max_energy = 0;
-+ int max_charge;
-+ int max_energy;
-+};
+@@ -115,19 +122,18 @@ static int ccw_uevent(struct device *dev, struct kobj_uevent_env *env)
-- main_battery = NULL;
-+static int __find_main_battery(struct device *dev, void *data)
-+{
-+ struct find_bat_param *bp = (struct find_bat_param *)data;
+ struct bus_type ccw_bus_type;
-- list_for_each_entry(dev, &power_supply_class->devices, node) {
-- bat = dev_get_drvdata(dev);
-+ bp->bat = dev_get_drvdata(dev);
+-static int io_subchannel_probe (struct subchannel *);
+-static int io_subchannel_remove (struct subchannel *);
+-static int io_subchannel_notify(struct device *, int);
+-static void io_subchannel_verify(struct device *);
+-static void io_subchannel_ioterm(struct device *);
++static void io_subchannel_irq(struct subchannel *);
++static int io_subchannel_probe(struct subchannel *);
++static int io_subchannel_remove(struct subchannel *);
++static int io_subchannel_notify(struct subchannel *, int);
++static void io_subchannel_verify(struct subchannel *);
++static void io_subchannel_ioterm(struct subchannel *);
+ static void io_subchannel_shutdown(struct subchannel *);
-- if (bat->use_for_apm) {
-- /* nice, we explicitly asked to report this battery. */
-- main_battery = bat;
-- return;
-- }
-+ if (bp->bat->use_for_apm) {
-+ /* nice, we explicitly asked to report this battery. */
-+ bp->main = bp->bat;
-+ return 1;
-+ }
+ static struct css_driver io_subchannel_driver = {
++ .owner = THIS_MODULE,
+ .subchannel_type = SUBCHANNEL_TYPE_IO,
+- .drv = {
+- .name = "io_subchannel",
+- .bus = &css_bus_type,
+- },
++ .name = "io_subchannel",
+ .irq = io_subchannel_irq,
+ .notify = io_subchannel_notify,
+ .verify = io_subchannel_verify,
+@@ -142,6 +148,8 @@ struct workqueue_struct *ccw_device_notify_work;
+ wait_queue_head_t ccw_device_init_wq;
+ atomic_t ccw_device_init_count;
-- if (!PSY_PROP(bat, CHARGE_FULL_DESIGN, &full) ||
-- !PSY_PROP(bat, CHARGE_FULL, &full)) {
-- if (full.intval > max_charge) {
-- max_charge_bat = bat;
-- max_charge = full.intval;
-- }
-- } else if (!PSY_PROP(bat, ENERGY_FULL_DESIGN, &full) ||
-- !PSY_PROP(bat, ENERGY_FULL, &full)) {
-- if (full.intval > max_energy) {
-- max_energy_bat = bat;
-- max_energy = full.intval;
-- }
-+ if (!PSY_PROP(bp->bat, CHARGE_FULL_DESIGN, &bp->full) ||
-+ !PSY_PROP(bp->bat, CHARGE_FULL, &bp->full)) {
-+ if (bp->full.intval > bp->max_charge) {
-+ bp->max_charge_bat = bp->bat;
-+ bp->max_charge = bp->full.intval;
-+ }
-+ } else if (!PSY_PROP(bp->bat, ENERGY_FULL_DESIGN, &bp->full) ||
-+ !PSY_PROP(bp->bat, ENERGY_FULL, &bp->full)) {
-+ if (bp->full.intval > bp->max_energy) {
-+ bp->max_energy_bat = bp->bat;
-+ bp->max_energy = bp->full.intval;
- }
- }
-+ return 0;
-+}
-+
-+static void find_main_battery(void)
-+{
-+ struct find_bat_param bp;
-+ int error;
-+
-+ memset(&bp, 0, sizeof(struct find_bat_param));
-+ main_battery = NULL;
-+ bp.main = main_battery;
++static void recovery_func(unsigned long data);
+
-+ error = class_for_each_device(power_supply_class, &bp,
-+ __find_main_battery);
-+ if (error) {
-+ main_battery = bp.main;
-+ return;
-+ }
+ static int __init
+ init_ccw_bus_type (void)
+ {
+@@ -149,6 +157,7 @@ init_ccw_bus_type (void)
-- if ((max_energy_bat && max_charge_bat) &&
-- (max_energy_bat != max_charge_bat)) {
-+ if ((bp.max_energy_bat && bp.max_charge_bat) &&
-+ (bp.max_energy_bat != bp.max_charge_bat)) {
- /* try guess battery with more capacity */
-- if (!PSY_PROP(max_charge_bat, VOLTAGE_MAX_DESIGN, &full)) {
-- if (max_energy > max_charge * full.intval)
-- main_battery = max_energy_bat;
-+ if (!PSY_PROP(bp.max_charge_bat, VOLTAGE_MAX_DESIGN,
-+ &bp.full)) {
-+ if (bp.max_energy > bp.max_charge * bp.full.intval)
-+ main_battery = bp.max_energy_bat;
- else
-- main_battery = max_charge_bat;
-- } else if (!PSY_PROP(max_energy_bat, VOLTAGE_MAX_DESIGN,
-- &full)) {
-- if (max_charge > max_energy / full.intval)
-- main_battery = max_charge_bat;
-+ main_battery = bp.max_charge_bat;
-+ } else if (!PSY_PROP(bp.max_energy_bat, VOLTAGE_MAX_DESIGN,
-+ &bp.full)) {
-+ if (bp.max_charge > bp.max_energy / bp.full.intval)
-+ main_battery = bp.max_charge_bat;
- else
-- main_battery = max_energy_bat;
-+ main_battery = bp.max_energy_bat;
- } else {
- /* give up, choice any */
-- main_battery = max_energy_bat;
-+ main_battery = bp.max_energy_bat;
- }
-- } else if (max_charge_bat) {
-- main_battery = max_charge_bat;
-- } else if (max_energy_bat) {
-- main_battery = max_energy_bat;
-+ } else if (bp.max_charge_bat) {
-+ main_battery = bp.max_charge_bat;
-+ } else if (bp.max_energy_bat) {
-+ main_battery = bp.max_energy_bat;
- } else {
- /* give up, try the last if any */
-- main_battery = bat;
-+ main_battery = bp.bat;
- }
- }
+ init_waitqueue_head(&ccw_device_init_wq);
+ atomic_set(&ccw_device_init_count, 0);
++ setup_timer(&recovery_timer, recovery_func, 0);
-@@ -207,10 +227,10 @@ static void apm_battery_apm_get_power_status(struct apm_power_info *info)
- union power_supply_propval status;
- union power_supply_propval capacity, time_to_full, time_to_empty;
+ ccw_device_work = create_singlethread_workqueue("cio");
+ if (!ccw_device_work)
+@@ -166,7 +175,8 @@ init_ccw_bus_type (void)
+ if ((ret = bus_register (&ccw_bus_type)))
+ goto out_err;
-- down(&power_supply_class->sem);
-+ mutex_lock(&apm_mutex);
- find_main_battery();
- if (!main_battery) {
-- up(&power_supply_class->sem);
-+ mutex_unlock(&apm_mutex);
+- if ((ret = driver_register(&io_subchannel_driver.drv)))
++ ret = css_driver_register(&io_subchannel_driver);
++ if (ret)
+ goto out_err;
+
+ wait_event(ccw_device_init_wq,
+@@ -186,7 +196,7 @@ out_err:
+ static void __exit
+ cleanup_ccw_bus_type (void)
+ {
+- driver_unregister(&io_subchannel_driver.drv);
++ css_driver_unregister(&io_subchannel_driver);
+ bus_unregister(&ccw_bus_type);
+ destroy_workqueue(ccw_device_notify_work);
+ destroy_workqueue(ccw_device_work);
+@@ -773,7 +783,7 @@ static void sch_attach_device(struct subchannel *sch,
+ {
+ css_update_ssd_info(sch);
+ spin_lock_irq(sch->lock);
+- sch->dev.driver_data = cdev;
++ sch_set_cdev(sch, cdev);
+ cdev->private->schid = sch->schid;
+ cdev->ccwlock = sch->lock;
+ device_trigger_reprobe(sch);
+@@ -795,7 +805,7 @@ static void sch_attach_disconnected_device(struct subchannel *sch,
+ put_device(&other_sch->dev);
return;
}
-
-@@ -278,7 +298,7 @@ static void apm_battery_apm_get_power_status(struct apm_power_info *info)
- }
+- other_sch->dev.driver_data = NULL;
++ sch_set_cdev(other_sch, NULL);
+ /* No need to keep a subchannel without ccw device around. */
+ css_sch_device_unregister(other_sch);
+ put_device(&other_sch->dev);
+@@ -831,12 +841,12 @@ static void sch_create_and_recog_new_device(struct subchannel *sch)
+ return;
}
+ spin_lock_irq(sch->lock);
+- sch->dev.driver_data = cdev;
++ sch_set_cdev(sch, cdev);
+ spin_unlock_irq(sch->lock);
+ /* Start recognition for the new ccw device. */
+ if (io_subchannel_recog(cdev, sch)) {
+ spin_lock_irq(sch->lock);
+- sch->dev.driver_data = NULL;
++ sch_set_cdev(sch, NULL);
+ spin_unlock_irq(sch->lock);
+ if (cdev->dev.release)
+ cdev->dev.release(&cdev->dev);
+@@ -940,7 +950,7 @@ io_subchannel_register(struct work_struct *work)
+ cdev->private->dev_id.devno, ret);
+ put_device(&cdev->dev);
+ spin_lock_irqsave(sch->lock, flags);
+- sch->dev.driver_data = NULL;
++ sch_set_cdev(sch, NULL);
+ spin_unlock_irqrestore(sch->lock, flags);
+ kfree (cdev->private);
+ kfree (cdev);
+@@ -1022,7 +1032,7 @@ io_subchannel_recog(struct ccw_device *cdev, struct subchannel *sch)
+ int rc;
+ struct ccw_device_private *priv;
-- up(&power_supply_class->sem);
-+ mutex_unlock(&apm_mutex);
- }
-
- static int __init apm_battery_init(void)
-diff --git a/drivers/power/power_supply_core.c b/drivers/power/power_supply_core.c
-index a63b75c..03d6a38 100644
---- a/drivers/power/power_supply_core.c
-+++ b/drivers/power/power_supply_core.c
-@@ -20,28 +20,29 @@
+- sch->dev.driver_data = cdev;
++ sch_set_cdev(sch, cdev);
+ sch->driver = &io_subchannel_driver;
+ cdev->ccwlock = sch->lock;
- struct class *power_supply_class;
+@@ -1082,7 +1092,7 @@ static void ccw_device_move_to_sch(struct work_struct *work)
+ }
+ if (former_parent) {
+ spin_lock_irq(former_parent->lock);
+- former_parent->dev.driver_data = NULL;
++ sch_set_cdev(former_parent, NULL);
+ spin_unlock_irq(former_parent->lock);
+ css_sch_device_unregister(former_parent);
+ /* Reset intparm to zeroes. */
+@@ -1096,6 +1106,18 @@ out:
+ put_device(&cdev->dev);
+ }
-+static int __power_supply_changed_work(struct device *dev, void *data)
++static void io_subchannel_irq(struct subchannel *sch)
+{
-+ struct power_supply *psy = (struct power_supply *)data;
-+ struct power_supply *pst = dev_get_drvdata(dev);
-+ int i;
++ struct ccw_device *cdev;
+
-+ for (i = 0; i < psy->num_supplicants; i++)
-+ if (!strcmp(psy->supplied_to[i], pst->name)) {
-+ if (pst->external_power_changed)
-+ pst->external_power_changed(pst);
-+ }
-+ return 0;
++ cdev = sch_get_cdev(sch);
++
++ CIO_TRACE_EVENT(3, "IRQ");
++ CIO_TRACE_EVENT(3, sch->dev.bus_id);
++ if (cdev)
++ dev_fsm_event(cdev, DEV_EVENT_INTERRUPT);
+}
+
- static void power_supply_changed_work(struct work_struct *work)
+ static int
+ io_subchannel_probe (struct subchannel *sch)
{
- struct power_supply *psy = container_of(work, struct power_supply,
- changed_work);
-- int i;
-
- dev_dbg(psy->dev, "%s\n", __FUNCTION__);
+@@ -1104,13 +1126,13 @@ io_subchannel_probe (struct subchannel *sch)
+ unsigned long flags;
+ struct ccw_dev_id dev_id;
-- for (i = 0; i < psy->num_supplicants; i++) {
-- struct device *dev;
--
-- down(&power_supply_class->sem);
-- list_for_each_entry(dev, &power_supply_class->devices, node) {
-- struct power_supply *pst = dev_get_drvdata(dev);
+- if (sch->dev.driver_data) {
++ cdev = sch_get_cdev(sch);
++ if (cdev) {
+ /*
+ * This subchannel already has an associated ccw_device.
+ * Register it and exit. This happens for all early
+ * device, e.g. the console.
+ */
+- cdev = sch->dev.driver_data;
+ cdev->dev.groups = ccwdev_attr_groups;
+ device_initialize(&cdev->dev);
+ ccw_device_register(cdev);
+@@ -1132,6 +1154,11 @@ io_subchannel_probe (struct subchannel *sch)
+ */
+ dev_id.devno = sch->schib.pmcw.dev;
+ dev_id.ssid = sch->schid.ssid;
++ /* Allocate I/O subchannel private data. */
++ sch->private = kzalloc(sizeof(struct io_subchannel_private),
++ GFP_KERNEL | GFP_DMA);
++ if (!sch->private)
++ return -ENOMEM;
+ cdev = get_disc_ccwdev_by_dev_id(&dev_id, NULL);
+ if (!cdev)
+ cdev = get_orphaned_ccwdev_by_dev_id(to_css(sch->dev.parent),
+@@ -1149,16 +1176,18 @@ io_subchannel_probe (struct subchannel *sch)
+ return 0;
+ }
+ cdev = io_subchannel_create_ccwdev(sch);
+- if (IS_ERR(cdev))
++ if (IS_ERR(cdev)) {
++ kfree(sch->private);
+ return PTR_ERR(cdev);
-
-- if (!strcmp(psy->supplied_to[i], pst->name)) {
-- if (pst->external_power_changed)
-- pst->external_power_changed(pst);
-- }
-- }
-- up(&power_supply_class->sem);
-- }
-+ class_for_each_device(power_supply_class, psy,
-+ __power_supply_changed_work);
++ }
+ rc = io_subchannel_recog(cdev, sch);
+ if (rc) {
+ spin_lock_irqsave(sch->lock, flags);
+- sch->dev.driver_data = NULL;
++ sch_set_cdev(sch, NULL);
+ spin_unlock_irqrestore(sch->lock, flags);
+ if (cdev->dev.release)
+ cdev->dev.release(&cdev->dev);
++ kfree(sch->private);
+ }
- power_supply_update_leds(psy);
+ return rc;
+@@ -1170,25 +1199,25 @@ io_subchannel_remove (struct subchannel *sch)
+ struct ccw_device *cdev;
+ unsigned long flags;
-@@ -55,32 +56,35 @@ void power_supply_changed(struct power_supply *psy)
- schedule_work(&psy->changed_work);
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return 0;
+- cdev = sch->dev.driver_data;
+ /* Set ccw device to not operational and drop reference. */
+ spin_lock_irqsave(cdev->ccwlock, flags);
+- sch->dev.driver_data = NULL;
++ sch_set_cdev(sch, NULL);
+ cdev->private->state = DEV_STATE_NOT_OPER;
+ spin_unlock_irqrestore(cdev->ccwlock, flags);
+ ccw_device_unregister(cdev);
+ put_device(&cdev->dev);
++ kfree(sch->private);
+ return 0;
}
--int power_supply_am_i_supplied(struct power_supply *psy)
-+static int __power_supply_am_i_supplied(struct device *dev, void *data)
+-static int
+-io_subchannel_notify(struct device *dev, int event)
++static int io_subchannel_notify(struct subchannel *sch, int event)
{
- union power_supply_propval ret = {0,};
-- struct device *dev;
--
-- down(&power_supply_class->sem);
-- list_for_each_entry(dev, &power_supply_class->devices, node) {
-- struct power_supply *epsy = dev_get_drvdata(dev);
-- int i;
--
-- for (i = 0; i < epsy->num_supplicants; i++) {
-- if (!strcmp(epsy->supplied_to[i], psy->name)) {
-- if (epsy->get_property(epsy,
-- POWER_SUPPLY_PROP_ONLINE, &ret))
-- continue;
-- if (ret.intval)
-- goto out;
-- }
-+ struct power_supply *psy = (struct power_supply *)data;
-+ struct power_supply *epsy = dev_get_drvdata(dev);
-+ int i;
-+
-+ for (i = 0; i < epsy->num_supplicants; i++) {
-+ if (!strcmp(epsy->supplied_to[i], psy->name)) {
-+ if (epsy->get_property(epsy,
-+ POWER_SUPPLY_PROP_ONLINE, &ret))
-+ continue;
-+ if (ret.intval)
-+ return ret.intval;
- }
- }
--out:
-- up(&power_supply_class->sem);
-+ return 0;
-+}
-+
-+int power_supply_am_i_supplied(struct power_supply *psy)
-+{
-+ int error;
-+
-+ error = class_for_each_device(power_supply_class, psy,
-+ __power_supply_am_i_supplied);
+ struct ccw_device *cdev;
-- dev_dbg(psy->dev, "%s %d\n", __FUNCTION__, ret.intval);
-+ dev_dbg(psy->dev, "%s %d\n", __FUNCTION__, error);
+- cdev = dev->driver_data;
++ cdev = sch_get_cdev(sch);
+ if (!cdev)
+ return 0;
+ if (!cdev->drv)
+@@ -1198,22 +1227,20 @@ io_subchannel_notify(struct device *dev, int event)
+ return cdev->drv->notify ? cdev->drv->notify(cdev, event) : 0;
+ }
-- return ret.intval;
-+ return error;
+-static void
+-io_subchannel_verify(struct device *dev)
++static void io_subchannel_verify(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
+
+- cdev = dev->driver_data;
++ cdev = sch_get_cdev(sch);
+ if (cdev)
+ dev_fsm_event(cdev, DEV_EVENT_VERIFY);
}
- int power_supply_register(struct device *parent, struct power_supply *psy)
-diff --git a/drivers/rapidio/rio.h b/drivers/rapidio/rio.h
-index b242cee..80e3f03 100644
---- a/drivers/rapidio/rio.h
-+++ b/drivers/rapidio/rio.h
-@@ -31,8 +31,8 @@ extern struct rio_route_ops __end_rio_route_ops[];
+-static void
+-io_subchannel_ioterm(struct device *dev)
++static void io_subchannel_ioterm(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
- /* Helpers internal to the RIO core code */
- #define DECLARE_RIO_ROUTE_SECTION(section, vid, did, add_hook, get_hook) \
-- static struct rio_route_ops __rio_route_ops __attribute_used__ \
-- __attribute__((__section__(#section))) = { vid, did, add_hook, get_hook };
-+ static struct rio_route_ops __rio_route_ops __used \
-+ __section(section)= { vid, did, add_hook, get_hook };
+- cdev = dev->driver_data;
++ cdev = sch_get_cdev(sch);
+ if (!cdev)
+ return;
+ /* Internal I/O will be retried by the interrupt handler. */
+@@ -1231,7 +1258,7 @@ io_subchannel_shutdown(struct subchannel *sch)
+ struct ccw_device *cdev;
+ int ret;
- /**
- * DECLARE_RIO_ROUTE_OPS - Registers switch routing operations
-diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
-index 1e6715e..45e4b96 100644
---- a/drivers/rtc/Kconfig
-+++ b/drivers/rtc/Kconfig
-@@ -404,7 +404,7 @@ config RTC_DRV_SA1100
+- cdev = sch->dev.driver_data;
++ cdev = sch_get_cdev(sch);
- config RTC_DRV_SH
- tristate "SuperH On-Chip RTC"
-- depends on RTC_CLASS && (CPU_SH3 || CPU_SH4)
-+ depends on RTC_CLASS && SUPERH
- help
- Say Y here to enable support for the on-chip RTC found in
- most SuperH processors.
-diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
-index f1e00ff..7e3ad4f 100644
---- a/drivers/rtc/interface.c
-+++ b/drivers/rtc/interface.c
-@@ -251,20 +251,23 @@ void rtc_update_irq(struct rtc_device *rtc,
+ if (cio_is_console(sch->schid))
+ return;
+@@ -1271,6 +1298,9 @@ ccw_device_console_enable (struct ccw_device *cdev, struct subchannel *sch)
+ {
+ int rc;
+
++ /* Attach subchannel private data. */
++ sch->private = cio_get_console_priv();
++ memset(sch->private, 0, sizeof(struct io_subchannel_private));
+ /* Initialize the ccw_device structure. */
+ cdev->dev.parent= &sch->dev;
+ rc = io_subchannel_recog(cdev, sch);
+@@ -1456,6 +1486,7 @@ int ccw_driver_register(struct ccw_driver *cdriver)
+
+ drv->bus = &ccw_bus_type;
+ drv->name = cdriver->name;
++ drv->owner = cdriver->owner;
+
+ return driver_register(drv);
+ }
+@@ -1481,6 +1512,60 @@ ccw_device_get_subchannel_id(struct ccw_device *cdev)
+ return sch->schid;
}
- EXPORT_SYMBOL_GPL(rtc_update_irq);
-+static int __rtc_match(struct device *dev, void *data)
++static int recovery_check(struct device *dev, void *data)
+{
-+ char *name = (char *)data;
++ struct ccw_device *cdev = to_ccwdev(dev);
++ int *redo = data;
++
++ spin_lock_irq(cdev->ccwlock);
++ switch (cdev->private->state) {
++ case DEV_STATE_DISCONNECTED:
++ CIO_MSG_EVENT(3, "recovery: trigger 0.%x.%04x\n",
++ cdev->private->dev_id.ssid,
++ cdev->private->dev_id.devno);
++ dev_fsm_event(cdev, DEV_EVENT_VERIFY);
++ *redo = 1;
++ break;
++ case DEV_STATE_DISCONNECTED_SENSE_ID:
++ *redo = 1;
++ break;
++ }
++ spin_unlock_irq(cdev->ccwlock);
+
-+ if (strncmp(dev->bus_id, name, BUS_ID_SIZE) == 0)
-+ return 1;
+ return 0;
+}
+
- struct rtc_device *rtc_class_open(char *name)
- {
- struct device *dev;
- struct rtc_device *rtc = NULL;
-
-- down(&rtc_class->sem);
-- list_for_each_entry(dev, &rtc_class->devices, node) {
-- if (strncmp(dev->bus_id, name, BUS_ID_SIZE) == 0) {
-- dev = get_device(dev);
-- if (dev)
-- rtc = to_rtc_device(dev);
-- break;
-- }
-- }
-+ dev = class_find_device(rtc_class, name, __rtc_match);
-+ if (dev)
-+ rtc = to_rtc_device(dev);
++static void recovery_func(unsigned long data)
++{
++ int redo = 0;
++
++ bus_for_each_dev(&ccw_bus_type, NULL, &redo, recovery_check);
++ if (redo) {
++ spin_lock_irq(&recovery_lock);
++ if (!timer_pending(&recovery_timer)) {
++ if (recovery_phase < ARRAY_SIZE(recovery_delay) - 1)
++ recovery_phase++;
++ mod_timer(&recovery_timer, jiffies +
++ recovery_delay[recovery_phase] * HZ);
++ }
++ spin_unlock_irq(&recovery_lock);
++ } else
++ CIO_MSG_EVENT(2, "recovery: end\n");
++}
++
++void ccw_device_schedule_recovery(void)
++{
++ unsigned long flags;
++
++ CIO_MSG_EVENT(2, "recovery: schedule\n");
++ spin_lock_irqsave(&recovery_lock, flags);
++ if (!timer_pending(&recovery_timer) || (recovery_phase != 0)) {
++ recovery_phase = 0;
++ mod_timer(&recovery_timer, jiffies + recovery_delay[0] * HZ);
++ }
++ spin_unlock_irqrestore(&recovery_lock, flags);
++}
++
+ MODULE_LICENSE("GPL");
+ EXPORT_SYMBOL(ccw_device_set_online);
+ EXPORT_SYMBOL(ccw_device_set_offline);
+diff --git a/drivers/s390/cio/device.h b/drivers/s390/cio/device.h
+index 0d40896..d40a2ff 100644
+--- a/drivers/s390/cio/device.h
++++ b/drivers/s390/cio/device.h
+@@ -5,6 +5,8 @@
+ #include <asm/atomic.h>
+ #include <linux/wait.h>
- if (rtc) {
- if (!try_module_get(rtc->owner)) {
-@@ -272,7 +275,6 @@ struct rtc_device *rtc_class_open(char *name)
- rtc = NULL;
- }
- }
-- up(&rtc_class->sem);
++#include "io_sch.h"
++
+ /*
+ * states of the device statemachine
+ */
+@@ -74,7 +76,6 @@ extern struct workqueue_struct *ccw_device_notify_work;
+ extern wait_queue_head_t ccw_device_init_wq;
+ extern atomic_t ccw_device_init_count;
- return rtc;
- }
-diff --git a/drivers/rtc/rtc-ds1672.c b/drivers/rtc/rtc-ds1672.c
-index dfef163..e0900ca 100644
---- a/drivers/rtc/rtc-ds1672.c
-+++ b/drivers/rtc/rtc-ds1672.c
-@@ -16,7 +16,7 @@
- #define DRV_VERSION "0.3"
+-void io_subchannel_irq (struct device *pdev);
+ void io_subchannel_recog_done(struct ccw_device *cdev);
- /* Addresses to scan: none. This chip cannot be detected. */
--static unsigned short normal_i2c[] = { I2C_CLIENT_END };
-+static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
+ int ccw_device_cancel_halt_clear(struct ccw_device *);
+@@ -87,6 +88,8 @@ int ccw_device_recognition(struct ccw_device *);
+ int ccw_device_online(struct ccw_device *);
+ int ccw_device_offline(struct ccw_device *);
- /* Insmod parameters */
- I2C_CLIENT_INSMOD;
-diff --git a/drivers/rtc/rtc-isl1208.c b/drivers/rtc/rtc-isl1208.c
-index 1c74364..725b0c7 100644
---- a/drivers/rtc/rtc-isl1208.c
-+++ b/drivers/rtc/rtc-isl1208.c
-@@ -61,7 +61,7 @@
- /* i2c configuration */
- #define ISL1208_I2C_ADDR 0xde
++void ccw_device_schedule_recovery(void);
++
+ /* Function prototypes for device status and basic sense stuff. */
+ void ccw_device_accumulate_irb(struct ccw_device *, struct irb *);
+ void ccw_device_accumulate_basic_sense(struct ccw_device *, struct irb *);
+diff --git a/drivers/s390/cio/device_fsm.c b/drivers/s390/cio/device_fsm.c
+index bfad421..4b92c84 100644
+--- a/drivers/s390/cio/device_fsm.c
++++ b/drivers/s390/cio/device_fsm.c
+@@ -25,14 +25,16 @@
+ #include "ioasm.h"
+ #include "chp.h"
--static unsigned short normal_i2c[] = {
-+static const unsigned short normal_i2c[] = {
- ISL1208_I2C_ADDR>>1, I2C_CLIENT_END
- };
- I2C_CLIENT_INSMOD; /* defines addr_data */
-diff --git a/drivers/rtc/rtc-max6900.c b/drivers/rtc/rtc-max6900.c
-index a1cd448..7683412 100644
---- a/drivers/rtc/rtc-max6900.c
-+++ b/drivers/rtc/rtc-max6900.c
-@@ -54,7 +54,7 @@
++static int timeout_log_enabled;
++
+ int
+ device_is_online(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
- #define MAX6900_I2C_ADDR 0xa0
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return 0;
+- cdev = sch->dev.driver_data;
+ return (cdev->private->state == DEV_STATE_ONLINE);
+ }
--static unsigned short normal_i2c[] = {
-+static const unsigned short normal_i2c[] = {
- MAX6900_I2C_ADDR >> 1,
- I2C_CLIENT_END
- };
-diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
-index 0242d80..b3317fc 100644
---- a/drivers/rtc/rtc-pcf8563.c
-+++ b/drivers/rtc/rtc-pcf8563.c
-@@ -25,7 +25,7 @@
- * located at 0x51 will pass the validation routine due to
- * the way the registers are implemented.
- */
--static unsigned short normal_i2c[] = { I2C_CLIENT_END };
-+static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
+@@ -41,9 +43,9 @@ device_is_disconnected(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
- /* Module parameters */
- I2C_CLIENT_INSMOD;
-diff --git a/drivers/rtc/rtc-pcf8583.c b/drivers/rtc/rtc-pcf8583.c
-index 556d0e7..c973ba9 100644
---- a/drivers/rtc/rtc-pcf8583.c
-+++ b/drivers/rtc/rtc-pcf8583.c
-@@ -40,7 +40,7 @@ struct pcf8583 {
- #define CTRL_ALARM 0x02
- #define CTRL_TIMER 0x01
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return 0;
+- cdev = sch->dev.driver_data;
+ return (cdev->private->state == DEV_STATE_DISCONNECTED ||
+ cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID);
+ }
+@@ -53,19 +55,21 @@ device_set_disconnected(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
--static unsigned short normal_i2c[] = { 0x50, I2C_CLIENT_END };
-+static const unsigned short normal_i2c[] = { 0x50, I2C_CLIENT_END };
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return;
+- cdev = sch->dev.driver_data;
+ ccw_device_set_timeout(cdev, 0);
+ cdev->private->flags.fake_irb = 0;
+ cdev->private->state = DEV_STATE_DISCONNECTED;
++ if (cdev->online)
++ ccw_device_schedule_recovery();
+ }
- /* Module parameters */
- I2C_CLIENT_INSMOD;
-diff --git a/drivers/rtc/rtc-sa1100.c b/drivers/rtc/rtc-sa1100.c
-index 6f1e9a9..2eb3852 100644
---- a/drivers/rtc/rtc-sa1100.c
-+++ b/drivers/rtc/rtc-sa1100.c
-@@ -337,6 +337,8 @@ static int sa1100_rtc_probe(struct platform_device *pdev)
- if (IS_ERR(rtc))
- return PTR_ERR(rtc);
+ void device_set_intretry(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
-+ device_init_wakeup(&pdev->dev, 1);
-+
- platform_set_drvdata(pdev, rtc);
+- cdev = sch->dev.driver_data;
++ cdev = sch_get_cdev(sch);
+ if (!cdev)
+ return;
+ cdev->private->flags.intretry = 1;
+@@ -75,13 +79,62 @@ int device_trigger_verify(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
- return 0;
-@@ -352,9 +354,38 @@ static int sa1100_rtc_remove(struct platform_device *pdev)
+- cdev = sch->dev.driver_data;
++ cdev = sch_get_cdev(sch);
+ if (!cdev || !cdev->online)
+ return -EINVAL;
+ dev_fsm_event(cdev, DEV_EVENT_VERIFY);
return 0;
}
-+#ifdef CONFIG_PM
-+static int sa1100_rtc_suspend(struct platform_device *pdev, pm_message_t state)
++static int __init ccw_timeout_log_setup(char *unused)
+{
-+ if (pdev->dev.power.power_state.event != state.event) {
-+ if (state.event == PM_EVENT_SUSPEND &&
-+ device_may_wakeup(&pdev->dev))
-+ enable_irq_wake(IRQ_RTCAlrm);
-+
-+ pdev->dev.power.power_state = state;
-+ }
-+ return 0;
++ timeout_log_enabled = 1;
++ return 1;
+}
+
-+static int sa1100_rtc_resume(struct platform_device *pdev)
++__setup("ccw_timeout_log", ccw_timeout_log_setup);
++
++static void ccw_timeout_log(struct ccw_device *cdev)
+{
-+ if (pdev->dev.power.power_state.event != PM_EVENT_ON) {
-+ if (device_may_wakeup(&pdev->dev))
-+ disable_irq_wake(IRQ_RTCAlrm);
-+ pdev->dev.power.power_state = PMSG_ON;
-+ }
-+ return 0;
++ struct schib schib;
++ struct subchannel *sch;
++ struct io_subchannel_private *private;
++ int cc;
++
++ sch = to_subchannel(cdev->dev.parent);
++ private = to_io_private(sch);
++ cc = stsch(sch->schid, &schib);
++
++ printk(KERN_WARNING "cio: ccw device timeout occurred at %llx, "
++ "device information:\n", get_clock());
++ printk(KERN_WARNING "cio: orb:\n");
++ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
++ &private->orb, sizeof(private->orb), 0);
++ printk(KERN_WARNING "cio: ccw device bus id: %s\n", cdev->dev.bus_id);
++ printk(KERN_WARNING "cio: subchannel bus id: %s\n", sch->dev.bus_id);
++ printk(KERN_WARNING "cio: subchannel lpm: %02x, opm: %02x, "
++ "vpm: %02x\n", sch->lpm, sch->opm, sch->vpm);
++
++ if ((void *)(addr_t)private->orb.cpa == &private->sense_ccw ||
++ (void *)(addr_t)private->orb.cpa == cdev->private->iccws)
++ printk(KERN_WARNING "cio: last channel program (intern):\n");
++ else
++ printk(KERN_WARNING "cio: last channel program:\n");
++
++ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
++ (void *)(addr_t)private->orb.cpa,
++ sizeof(struct ccw1), 0);
++ printk(KERN_WARNING "cio: ccw device state: %d\n",
++ cdev->private->state);
++ printk(KERN_WARNING "cio: store subchannel returned: cc=%d\n", cc);
++ printk(KERN_WARNING "cio: schib:\n");
++ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
++ &schib, sizeof(schib), 0);
++ printk(KERN_WARNING "cio: ccw device flags:\n");
++ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
++ &cdev->private->flags, sizeof(cdev->private->flags), 0);
+}
-+#else
-+#define sa1100_rtc_suspend NULL
-+#define sa1100_rtc_resume NULL
-+#endif
+
- static struct platform_driver sa1100_rtc_driver = {
- .probe = sa1100_rtc_probe,
- .remove = sa1100_rtc_remove,
-+ .suspend = sa1100_rtc_suspend,
-+ .resume = sa1100_rtc_resume,
- .driver = {
- .name = "sa1100-rtc",
- },
-diff --git a/drivers/rtc/rtc-sh.c b/drivers/rtc/rtc-sh.c
-index 8e8c8b8..c1d6a18 100644
---- a/drivers/rtc/rtc-sh.c
-+++ b/drivers/rtc/rtc-sh.c
-@@ -26,17 +26,7 @@
- #include <asm/rtc.h>
+ /*
+ * Timeout function. It just triggers a DEV_EVENT_TIMEOUT.
+ */
+@@ -92,6 +145,8 @@ ccw_device_timeout(unsigned long data)
- #define DRV_NAME "sh-rtc"
--#define DRV_VERSION "0.1.3"
--
--#ifdef CONFIG_CPU_SH3
--#define rtc_reg_size sizeof(u16)
--#define RTC_BIT_INVERTED 0 /* No bug on SH7708, SH7709A */
--#define RTC_DEF_CAPABILITIES 0UL
--#elif defined(CONFIG_CPU_SH4)
--#define rtc_reg_size sizeof(u32)
--#define RTC_BIT_INVERTED 0x40 /* bug on SH7750, SH7750S */
--#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
--#endif
-+#define DRV_VERSION "0.1.6"
+ cdev = (struct ccw_device *) data;
+ spin_lock_irq(cdev->ccwlock);
++ if (timeout_log_enabled)
++ ccw_timeout_log(cdev);
+ dev_fsm_event(cdev, DEV_EVENT_TIMEOUT);
+ spin_unlock_irq(cdev->ccwlock);
+ }
+@@ -122,9 +177,9 @@ device_kill_pending_timer(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
- #define RTC_REG(r) ((r) * rtc_reg_size)
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return;
+- cdev = sch->dev.driver_data;
+ ccw_device_set_timeout(cdev, 0);
+ }
-@@ -58,6 +48,18 @@
- #define RCR1 RTC_REG(14) /* Control */
- #define RCR2 RTC_REG(15) /* Control */
+@@ -268,7 +323,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
+ switch (state) {
+ case DEV_STATE_NOT_OPER:
+ CIO_DEBUG(KERN_WARNING, 2,
+- "cio: SenseID : unknown device %04x on subchannel "
++ "SenseID : unknown device %04x on subchannel "
+ "0.%x.%04x\n", cdev->private->dev_id.devno,
+ sch->schid.ssid, sch->schid.sch_no);
+ break;
+@@ -294,7 +349,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
+ }
+ /* Issue device info message. */
+ CIO_DEBUG(KERN_INFO, 2,
+- "cio: SenseID : device 0.%x.%04x reports: "
++ "SenseID : device 0.%x.%04x reports: "
+ "CU Type/Mod = %04X/%02X, Dev Type/Mod = "
+ "%04X/%02X\n",
+ cdev->private->dev_id.ssid,
+@@ -304,7 +359,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
+ break;
+ case DEV_STATE_BOXED:
+ CIO_DEBUG(KERN_WARNING, 2,
+- "cio: SenseID : boxed device %04x on subchannel "
++ "SenseID : boxed device %04x on subchannel "
+ "0.%x.%04x\n", cdev->private->dev_id.devno,
+ sch->schid.ssid, sch->schid.sch_no);
+ break;
+@@ -349,7 +404,7 @@ ccw_device_oper_notify(struct work_struct *work)
+ sch = to_subchannel(cdev->dev.parent);
+ if (sch->driver && sch->driver->notify) {
+ spin_unlock_irqrestore(cdev->ccwlock, flags);
+- ret = sch->driver->notify(&sch->dev, CIO_OPER);
++ ret = sch->driver->notify(sch, CIO_OPER);
+ spin_lock_irqsave(cdev->ccwlock, flags);
+ } else
+ ret = 0;
+@@ -389,7 +444,7 @@ ccw_device_done(struct ccw_device *cdev, int state)
-+/*
-+ * Note on RYRAR and RCR3: Up until this point most of the register
-+ * definitions are consistent across all of the available parts. However,
-+ * the placement of the optional RYRAR and RCR3 (the RYRAR control
-+ * register used to control RYRCNT/RYRAR compare) varies considerably
-+ * across various parts, occasionally being mapped in to a completely
-+ * unrelated address space. For proper RYRAR support a separate resource
-+ * would have to be handed off, but as this is purely optional in
-+ * practice, we simply opt not to support it, thereby keeping the code
-+ * quite a bit more simplified.
-+ */
-+
- /* ALARM Bits - or with BCD encoded value */
- #define AR_ENB 0x80 /* Enable for alarm cmp */
+ if (state == DEV_STATE_BOXED)
+ CIO_DEBUG(KERN_WARNING, 2,
+- "cio: Boxed device %04x on subchannel %04x\n",
++ "Boxed device %04x on subchannel %04x\n",
+ cdev->private->dev_id.devno, sch->schid.sch_no);
-diff --git a/drivers/rtc/rtc-x1205.c b/drivers/rtc/rtc-x1205.c
-index b3fae35..b90fb18 100644
---- a/drivers/rtc/rtc-x1205.c
-+++ b/drivers/rtc/rtc-x1205.c
-@@ -32,7 +32,7 @@
- * unknown chips, the user must explicitly set the probe parameter.
- */
+ if (cdev->private->flags.donotify) {
+@@ -500,7 +555,8 @@ ccw_device_recognition(struct ccw_device *cdev)
+ (cdev->private->state != DEV_STATE_BOXED))
+ return -EINVAL;
+ sch = to_subchannel(cdev->dev.parent);
+- ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc);
++ ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc,
++ (u32)(addr_t)sch);
+ if (ret != 0)
+ /* Couldn't enable the subchannel for i/o. Sick device. */
+ return ret;
+@@ -587,9 +643,10 @@ ccw_device_verify_done(struct ccw_device *cdev, int err)
+ default:
+ /* Reset oper notify indication after verify error. */
+ cdev->private->flags.donotify = 0;
+- if (cdev->online)
++ if (cdev->online) {
++ ccw_device_set_timeout(cdev, 0);
+ dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
+- else
++ } else
+ ccw_device_done(cdev, DEV_STATE_NOT_OPER);
+ break;
+ }
+@@ -610,7 +667,8 @@ ccw_device_online(struct ccw_device *cdev)
+ sch = to_subchannel(cdev->dev.parent);
+ if (css_init_done && !get_device(&cdev->dev))
+ return -ENODEV;
+- ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc);
++ ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc,
++ (u32)(addr_t)sch);
+ if (ret != 0) {
+ /* Couldn't enable the subchannel for i/o. Sick device. */
+ if (ret == -ENODEV)
+@@ -937,7 +995,7 @@ void device_kill_io(struct subchannel *sch)
+ int ret;
+ struct ccw_device *cdev;
--static unsigned short normal_i2c[] = { I2C_CLIENT_END };
-+static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
+- cdev = sch->dev.driver_data;
++ cdev = sch_get_cdev(sch);
+ ret = ccw_device_cancel_halt_clear(cdev);
+ if (ret == -EBUSY) {
+ ccw_device_set_timeout(cdev, 3*HZ);
+@@ -990,7 +1048,8 @@ ccw_device_start_id(struct ccw_device *cdev, enum dev_event dev_event)
+ struct subchannel *sch;
- /* Insmod parameters */
- I2C_CLIENT_INSMOD;
-diff --git a/drivers/s390/block/Makefile b/drivers/s390/block/Makefile
-index be9f22d..0a89e08 100644
---- a/drivers/s390/block/Makefile
-+++ b/drivers/s390/block/Makefile
-@@ -2,8 +2,8 @@
- # S/390 block devices
- #
+ sch = to_subchannel(cdev->dev.parent);
+- if (cio_enable_subchannel(sch, sch->schib.pmcw.isc) != 0)
++ if (cio_enable_subchannel(sch, sch->schib.pmcw.isc,
++ (u32)(addr_t)sch) != 0)
+ /* Couldn't enable the subchannel for i/o. Sick device. */
+ return;
+
+@@ -1006,9 +1065,9 @@ device_trigger_reprobe(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
+
+- if (!sch->dev.driver_data)
++ cdev = sch_get_cdev(sch);
++ if (!cdev)
+ return;
+- cdev = sch->dev.driver_data;
+ if (cdev->private->state != DEV_STATE_DISCONNECTED)
+ return;
+
+@@ -1028,7 +1087,7 @@ device_trigger_reprobe(struct subchannel *sch)
+ sch->schib.pmcw.ena = 0;
+ if ((sch->lpm & (sch->lpm - 1)) != 0)
+ sch->schib.pmcw.mp = 1;
+- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
++ sch->schib.pmcw.intparm = (u32)(addr_t)sch;
+ /* We should also udate ssd info, but this has to wait. */
+ /* Check if this is another device which appeared on the same sch. */
+ if (sch->schib.pmcw.dev != cdev->private->dev_id.devno) {
+@@ -1223,21 +1282,4 @@ fsm_func_t *dev_jumptable[NR_DEV_STATES][NR_DEV_EVENTS] = {
+ },
+ };
+
+-/*
+- * io_subchannel_irq is called for "real" interrupts or for status
+- * pending conditions on msch.
+- */
+-void
+-io_subchannel_irq (struct device *pdev)
+-{
+- struct ccw_device *cdev;
+-
+- cdev = to_subchannel(pdev)->dev.driver_data;
+-
+- CIO_TRACE_EVENT (3, "IRQ");
+- CIO_TRACE_EVENT (3, pdev->bus_id);
+- if (cdev)
+- dev_fsm_event(cdev, DEV_EVENT_INTERRUPT);
+-}
+-
+ EXPORT_SYMBOL_GPL(ccw_device_set_timeout);
+diff --git a/drivers/s390/cio/device_id.c b/drivers/s390/cio/device_id.c
+index 156f3f9..918b8b8 100644
+--- a/drivers/s390/cio/device_id.c
++++ b/drivers/s390/cio/device_id.c
+@@ -24,6 +24,7 @@
+ #include "css.h"
+ #include "device.h"
+ #include "ioasm.h"
++#include "io_sch.h"
--dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_9343_erp.o
--dasd_fba_mod-objs := dasd_fba.o dasd_3370_erp.o dasd_9336_erp.o
-+dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_alias.o
-+dasd_fba_mod-objs := dasd_fba.o
- dasd_diag_mod-objs := dasd_diag.o
- dasd_mod-objs := dasd.o dasd_ioctl.o dasd_proc.o dasd_devmap.o \
- dasd_genhd.o dasd_erp.o
-diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
-index e6bfce6..d640427 100644
---- a/drivers/s390/block/dasd.c
-+++ b/drivers/s390/block/dasd.c
-@@ -48,13 +48,15 @@ MODULE_LICENSE("GPL");
/*
- * SECTION: prototypes for static functions of dasd.c
- */
--static int dasd_alloc_queue(struct dasd_device * device);
--static void dasd_setup_queue(struct dasd_device * device);
--static void dasd_free_queue(struct dasd_device * device);
--static void dasd_flush_request_queue(struct dasd_device *);
--static int dasd_flush_ccw_queue(struct dasd_device *, int);
--static void dasd_tasklet(struct dasd_device *);
-+static int dasd_alloc_queue(struct dasd_block *);
-+static void dasd_setup_queue(struct dasd_block *);
-+static void dasd_free_queue(struct dasd_block *);
-+static void dasd_flush_request_queue(struct dasd_block *);
-+static int dasd_flush_block_queue(struct dasd_block *);
-+static void dasd_device_tasklet(struct dasd_device *);
-+static void dasd_block_tasklet(struct dasd_block *);
- static void do_kick_device(struct work_struct *);
-+static void dasd_return_cqr_cb(struct dasd_ccw_req *, void *);
+ * Input :
+@@ -219,11 +220,13 @@ ccw_device_check_sense_id(struct ccw_device *cdev)
+ return -EAGAIN;
+ }
+ if (irb->scsw.cc == 3) {
+- if ((sch->orb.lpm &
+- sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
++ u8 lpm;
++
++ lpm = to_io_private(sch)->orb.lpm;
++ if ((lpm & sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
+ CIO_MSG_EVENT(2, "SenseID : path %02X for device %04x "
+ "on subchannel 0.%x.%04x is "
+- "'not operational'\n", sch->orb.lpm,
++ "'not operational'\n", lpm,
+ cdev->private->dev_id.devno,
+ sch->schid.ssid, sch->schid.sch_no);
+ return -EACCES;
+diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
+index 7fd2dad..49b58eb 100644
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -501,7 +501,7 @@ ccw_device_stlck(struct ccw_device *cdev)
+ return -ENOMEM;
+ }
+ spin_lock_irqsave(sch->lock, flags);
+- ret = cio_enable_subchannel(sch, 3);
++ ret = cio_enable_subchannel(sch, 3, (u32)(addr_t)sch);
+ if (ret)
+ goto out_unlock;
+ /*
+diff --git a/drivers/s390/cio/device_pgid.c b/drivers/s390/cio/device_pgid.c
+index cb1879a..c52449a 100644
+--- a/drivers/s390/cio/device_pgid.c
++++ b/drivers/s390/cio/device_pgid.c
+@@ -22,6 +22,7 @@
+ #include "css.h"
+ #include "device.h"
+ #include "ioasm.h"
++#include "io_sch.h"
/*
- * SECTION: Operations on the device structure.
-@@ -65,26 +67,23 @@ static wait_queue_head_t dasd_flush_wq;
+ * Helper function called from interrupt context to decide whether an
+@@ -155,10 +156,13 @@ __ccw_device_check_sense_pgid(struct ccw_device *cdev)
+ return -EAGAIN;
+ }
+ if (irb->scsw.cc == 3) {
++ u8 lpm;
++
++ lpm = to_io_private(sch)->orb.lpm;
+ CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel 0.%x.%04x,"
+ " lpm %02X, became 'not operational'\n",
+ cdev->private->dev_id.devno, sch->schid.ssid,
+- sch->schid.sch_no, sch->orb.lpm);
++ sch->schid.sch_no, lpm);
+ return -EACCES;
+ }
+ i = 8 - ffs(cdev->private->imask);
+diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c
+index aa96e67..ebe0848 100644
+--- a/drivers/s390/cio/device_status.c
++++ b/drivers/s390/cio/device_status.c
+@@ -20,6 +20,7 @@
+ #include "css.h"
+ #include "device.h"
+ #include "ioasm.h"
++#include "io_sch.h"
+
/*
- * Allocate memory for a new device structure.
- */
--struct dasd_device *
--dasd_alloc_device(void)
-+struct dasd_device *dasd_alloc_device(void)
+ * Check for any kind of channel or interface control check but don't
+@@ -310,6 +311,7 @@ int
+ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
{
- struct dasd_device *device;
+ struct subchannel *sch;
++ struct ccw1 *sense_ccw;
-- device = kzalloc(sizeof (struct dasd_device), GFP_ATOMIC);
-- if (device == NULL)
-+ device = kzalloc(sizeof(struct dasd_device), GFP_ATOMIC);
-+ if (!device)
- return ERR_PTR(-ENOMEM);
-- /* open_count = 0 means device online but not in use */
-- atomic_set(&device->open_count, -1);
+ sch = to_subchannel(cdev->dev.parent);
- /* Get two pages for normal block device operations. */
- device->ccw_mem = (void *) __get_free_pages(GFP_ATOMIC | GFP_DMA, 1);
-- if (device->ccw_mem == NULL) {
-+ if (!device->ccw_mem) {
- kfree(device);
- return ERR_PTR(-ENOMEM);
- }
- /* Get one page for error recovery. */
- device->erp_mem = (void *) get_zeroed_page(GFP_ATOMIC | GFP_DMA);
-- if (device->erp_mem == NULL) {
-+ if (!device->erp_mem) {
- free_pages((unsigned long) device->ccw_mem, 1);
- kfree(device);
- return ERR_PTR(-ENOMEM);
-@@ -93,10 +92,9 @@ dasd_alloc_device(void)
- dasd_init_chunklist(&device->ccw_chunks, device->ccw_mem, PAGE_SIZE*2);
- dasd_init_chunklist(&device->erp_chunks, device->erp_mem, PAGE_SIZE);
- spin_lock_init(&device->mem_lock);
-- spin_lock_init(&device->request_queue_lock);
-- atomic_set (&device->tasklet_scheduled, 0);
-+ atomic_set(&device->tasklet_scheduled, 0);
- tasklet_init(&device->tasklet,
-- (void (*)(unsigned long)) dasd_tasklet,
-+ (void (*)(unsigned long)) dasd_device_tasklet,
- (unsigned long) device);
- INIT_LIST_HEAD(&device->ccw_queue);
- init_timer(&device->timer);
-@@ -110,8 +108,7 @@ dasd_alloc_device(void)
- /*
- * Free memory of a device structure.
- */
--void
--dasd_free_device(struct dasd_device *device)
-+void dasd_free_device(struct dasd_device *device)
- {
- kfree(device->private);
- free_page((unsigned long) device->erp_mem);
-@@ -120,10 +117,42 @@ dasd_free_device(struct dasd_device *device)
+@@ -326,15 +328,16 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
+ /*
+ * We have ending status but no sense information. Do a basic sense.
+ */
+- sch->sense_ccw.cmd_code = CCW_CMD_BASIC_SENSE;
+- sch->sense_ccw.cda = (__u32) __pa(cdev->private->irb.ecw);
+- sch->sense_ccw.count = SENSE_MAX_COUNT;
+- sch->sense_ccw.flags = CCW_FLAG_SLI;
++ sense_ccw = &to_io_private(sch)->sense_ccw;
++ sense_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
++ sense_ccw->cda = (__u32) __pa(cdev->private->irb.ecw);
++ sense_ccw->count = SENSE_MAX_COUNT;
++ sense_ccw->flags = CCW_FLAG_SLI;
+
+ /* Reset internal retry indication. */
+ cdev->private->flags.intretry = 0;
+
+- return cio_start (sch, &sch->sense_ccw, 0xff);
++ return cio_start(sch, sense_ccw, 0xff);
}
/*
-+ * Allocate memory for a new device structure.
+diff --git a/drivers/s390/cio/io_sch.h b/drivers/s390/cio/io_sch.h
+new file mode 100644
+index 0000000..8c61316
+--- /dev/null
++++ b/drivers/s390/cio/io_sch.h
+@@ -0,0 +1,163 @@
++#ifndef S390_IO_SCH_H
++#define S390_IO_SCH_H
++
++#include "schid.h"
++
++/*
++ * operation request block
+ */
-+struct dasd_block *dasd_alloc_block(void)
-+{
-+ struct dasd_block *block;
++struct orb {
++ u32 intparm; /* interruption parameter */
++ u32 key : 4; /* flags, like key, suspend control, etc. */
++ u32 spnd : 1; /* suspend control */
++ u32 res1 : 1; /* reserved */
++ u32 mod : 1; /* modification control */
++ u32 sync : 1; /* synchronize control */
++ u32 fmt : 1; /* format control */
++ u32 pfch : 1; /* prefetch control */
++ u32 isic : 1; /* initial-status-interruption control */
++ u32 alcc : 1; /* address-limit-checking control */
++ u32 ssic : 1; /* suppress-suspended-interr. control */
++ u32 res2 : 1; /* reserved */
++ u32 c64 : 1; /* IDAW/QDIO 64 bit control */
++ u32 i2k : 1; /* IDAW 2/4kB block size control */
++ u32 lpm : 8; /* logical path mask */
++ u32 ils : 1; /* incorrect length */
++ u32 zero : 6; /* reserved zeros */
++ u32 orbx : 1; /* ORB extension control */
++ u32 cpa; /* channel program address */
++} __attribute__ ((packed, aligned(4)));
+
-+ block = kzalloc(sizeof(*block), GFP_ATOMIC);
-+ if (!block)
-+ return ERR_PTR(-ENOMEM);
-+ /* open_count = 0 means device online but not in use */
-+ atomic_set(&block->open_count, -1);
++struct io_subchannel_private {
++ struct orb orb; /* operation request block */
++ struct ccw1 sense_ccw; /* static ccw for sense command */
++} __attribute__ ((aligned(8)));
+
-+ spin_lock_init(&block->request_queue_lock);
-+ atomic_set(&block->tasklet_scheduled, 0);
-+ tasklet_init(&block->tasklet,
-+ (void (*)(unsigned long)) dasd_block_tasklet,
-+ (unsigned long) block);
-+ INIT_LIST_HEAD(&block->ccw_queue);
-+ spin_lock_init(&block->queue_lock);
-+ init_timer(&block->timer);
++#define to_io_private(n) ((struct io_subchannel_private *)n->private)
++#define sch_get_cdev(n) (dev_get_drvdata(&n->dev))
++#define sch_set_cdev(n, c) (dev_set_drvdata(&n->dev, c))
+
-+ return block;
-+}
++#define MAX_CIWS 8
+
+/*
-+ * Free memory of a device structure.
++ * sense-id response buffer layout
+ */
-+void dasd_free_block(struct dasd_block *block)
++struct senseid {
++ /* common part */
++ u8 reserved; /* always 0x'FF' */
++ u16 cu_type; /* control unit type */
++ u8 cu_model; /* control unit model */
++ u16 dev_type; /* device type */
++ u8 dev_model; /* device model */
++ u8 unused; /* padding byte */
++ /* extended part */
++ struct ciw ciw[MAX_CIWS]; /* variable # of CIWs */
++} __attribute__ ((packed, aligned(4)));
++
++struct ccw_device_private {
++ struct ccw_device *cdev;
++ struct subchannel *sch;
++ int state; /* device state */
++ atomic_t onoff;
++ unsigned long registered;
++ struct ccw_dev_id dev_id; /* device id */
++ struct subchannel_id schid; /* subchannel number */
++ u8 imask; /* lpm mask for SNID/SID/SPGID */
++ int iretry; /* retry counter SNID/SID/SPGID */
++ struct {
++ unsigned int fast:1; /* post with "channel end" */
++ unsigned int repall:1; /* report every interrupt status */
++ unsigned int pgroup:1; /* do path grouping */
++ unsigned int force:1; /* allow forced online */
++ } __attribute__ ((packed)) options;
++ struct {
++ unsigned int pgid_single:1; /* use single path for Set PGID */
++ unsigned int esid:1; /* Ext. SenseID supported by HW */
++ unsigned int dosense:1; /* delayed SENSE required */
++ unsigned int doverify:1; /* delayed path verification */
++ unsigned int donotify:1; /* call notify function */
++ unsigned int recog_done:1; /* dev. recog. complete */
++ unsigned int fake_irb:1; /* deliver faked irb */
++ unsigned int intretry:1; /* retry internal operation */
++ } __attribute__((packed)) flags;
++ unsigned long intparm; /* user interruption parameter */
++ struct qdio_irq *qdio_data;
++ struct irb irb; /* device status */
++ struct senseid senseid; /* SenseID info */
++ struct pgid pgid[8]; /* path group IDs per chpid*/
++ struct ccw1 iccws[2]; /* ccws for SNID/SID/SPGID commands */
++ struct work_struct kick_work;
++ wait_queue_head_t wait_q;
++ struct timer_list timer;
++ void *cmb; /* measurement information */
++ struct list_head cmb_list; /* list of measured devices */
++ u64 cmb_start_time; /* clock value of cmb reset */
++ void *cmb_wait; /* deferred cmb enable/disable */
++};
++
++static inline int ssch(struct subchannel_id schid, volatile struct orb *addr)
+{
-+ kfree(block);
++ register struct subchannel_id reg1 asm("1") = schid;
++ int ccode;
++
++ asm volatile(
++ " ssch 0(%2)\n"
++ " ipm %0\n"
++ " srl %0,28"
++ : "=d" (ccode) : "d" (reg1), "a" (addr), "m" (*addr) : "cc");
++ return ccode;
+}
+
-+/*
- * Make a new device known to the system.
- */
--static int
--dasd_state_new_to_known(struct dasd_device *device)
-+static int dasd_state_new_to_known(struct dasd_device *device)
++static inline int rsch(struct subchannel_id schid)
++{
++ register struct subchannel_id reg1 asm("1") = schid;
++ int ccode;
++
++ asm volatile(
++ " rsch\n"
++ " ipm %0\n"
++ " srl %0,28"
++ : "=d" (ccode) : "d" (reg1) : "cc");
++ return ccode;
++}
++
++static inline int csch(struct subchannel_id schid)
++{
++ register struct subchannel_id reg1 asm("1") = schid;
++ int ccode;
++
++ asm volatile(
++ " csch\n"
++ " ipm %0\n"
++ " srl %0,28"
++ : "=d" (ccode) : "d" (reg1) : "cc");
++ return ccode;
++}
++
++static inline int hsch(struct subchannel_id schid)
++{
++ register struct subchannel_id reg1 asm("1") = schid;
++ int ccode;
++
++ asm volatile(
++ " hsch\n"
++ " ipm %0\n"
++ " srl %0,28"
++ : "=d" (ccode) : "d" (reg1) : "cc");
++ return ccode;
++}
++
++static inline int xsch(struct subchannel_id schid)
++{
++ register struct subchannel_id reg1 asm("1") = schid;
++ int ccode;
++
++ asm volatile(
++ " .insn rre,0xb2760000,%1,0\n"
++ " ipm %0\n"
++ " srl %0,28"
++ : "=d" (ccode) : "d" (reg1) : "cc");
++ return ccode;
++}
++
++#endif
+diff --git a/drivers/s390/cio/ioasm.h b/drivers/s390/cio/ioasm.h
+index 7153dd9..652ea36 100644
+--- a/drivers/s390/cio/ioasm.h
++++ b/drivers/s390/cio/ioasm.h
+@@ -109,72 +109,6 @@ static inline int tpi( volatile struct tpi_info *addr)
+ return ccode;
+ }
+
+-static inline int ssch(struct subchannel_id schid,
+- volatile struct orb *addr)
+-{
+- register struct subchannel_id reg1 asm ("1") = schid;
+- int ccode;
+-
+- asm volatile(
+- " ssch 0(%2)\n"
+- " ipm %0\n"
+- " srl %0,28"
+- : "=d" (ccode) : "d" (reg1), "a" (addr), "m" (*addr) : "cc");
+- return ccode;
+-}
+-
+-static inline int rsch(struct subchannel_id schid)
+-{
+- register struct subchannel_id reg1 asm ("1") = schid;
+- int ccode;
+-
+- asm volatile(
+- " rsch\n"
+- " ipm %0\n"
+- " srl %0,28"
+- : "=d" (ccode) : "d" (reg1) : "cc");
+- return ccode;
+-}
+-
+-static inline int csch(struct subchannel_id schid)
+-{
+- register struct subchannel_id reg1 asm ("1") = schid;
+- int ccode;
+-
+- asm volatile(
+- " csch\n"
+- " ipm %0\n"
+- " srl %0,28"
+- : "=d" (ccode) : "d" (reg1) : "cc");
+- return ccode;
+-}
+-
+-static inline int hsch(struct subchannel_id schid)
+-{
+- register struct subchannel_id reg1 asm ("1") = schid;
+- int ccode;
+-
+- asm volatile(
+- " hsch\n"
+- " ipm %0\n"
+- " srl %0,28"
+- : "=d" (ccode) : "d" (reg1) : "cc");
+- return ccode;
+-}
+-
+-static inline int xsch(struct subchannel_id schid)
+-{
+- register struct subchannel_id reg1 asm ("1") = schid;
+- int ccode;
+-
+- asm volatile(
+- " .insn rre,0xb2760000,%1,0\n"
+- " ipm %0\n"
+- " srl %0,28"
+- : "=d" (ccode) : "d" (reg1) : "cc");
+- return ccode;
+-}
+-
+ static inline int chsc(void *chsc_area)
{
- int rc;
+ typedef struct { char _[4096]; } addr_type;
+diff --git a/drivers/s390/cio/qdio.c b/drivers/s390/cio/qdio.c
+index 40a3208..e2a781b 100644
+--- a/drivers/s390/cio/qdio.c
++++ b/drivers/s390/cio/qdio.c
+@@ -48,11 +48,11 @@
+ #include <asm/debug.h>
+ #include <asm/s390_rdev.h>
+ #include <asm/qdio.h>
++#include <asm/airq.h>
-@@ -133,12 +162,13 @@ dasd_state_new_to_known(struct dasd_device *device)
- */
- dasd_get_device(device);
+ #include "cio.h"
+ #include "css.h"
+ #include "device.h"
+-#include "airq.h"
+ #include "qdio.h"
+ #include "ioasm.h"
+ #include "chsc.h"
+@@ -96,7 +96,7 @@ static debug_info_t *qdio_dbf_slsb_in;
+ static volatile struct qdio_q *tiq_list=NULL; /* volatile as it could change
+ during a while loop */
+ static DEFINE_SPINLOCK(ttiq_list_lock);
+-static int register_thinint_result;
++static void *tiqdio_ind;
+ static void tiqdio_tl(unsigned long);
+ static DECLARE_TASKLET(tiqdio_tasklet,tiqdio_tl,0);
-- rc = dasd_alloc_queue(device);
-- if (rc) {
-- dasd_put_device(device);
-- return rc;
-+ if (device->block) {
-+ rc = dasd_alloc_queue(device->block);
-+ if (rc) {
-+ dasd_put_device(device);
-+ return rc;
-+ }
+@@ -399,7 +399,7 @@ qdio_get_indicator(void)
+ {
+ int i;
+
+- for (i=1;i<INDICATORS_PER_CACHELINE;i++)
++ for (i = 0; i < INDICATORS_PER_CACHELINE; i++)
+ if (!indicator_used[i]) {
+ indicator_used[i]=1;
+ return indicators+i;
+@@ -1408,8 +1408,7 @@ __tiqdio_inbound_processing(struct qdio_q *q, int spare_ind_was_set)
+ if (q->hydra_gives_outbound_pcis) {
+ if (!q->siga_sync_done_on_thinints) {
+ SYNC_MEMORY_ALL;
+- } else if ((!q->siga_sync_done_on_outb_tis)&&
+- (q->hydra_gives_outbound_pcis)) {
++ } else if (!q->siga_sync_done_on_outb_tis) {
+ SYNC_MEMORY_ALL_OUTB;
+ }
+ } else {
+@@ -1911,8 +1910,7 @@ qdio_fill_thresholds(struct qdio_irq *irq_ptr,
}
--
- device->state = DASD_STATE_KNOWN;
- return 0;
}
-@@ -146,21 +176,24 @@ dasd_state_new_to_known(struct dasd_device *device)
- /*
- * Let the system forget about a device.
- */
+
-static int
--dasd_state_known_to_new(struct dasd_device * device)
-+static int dasd_state_known_to_new(struct dasd_device *device)
+-tiqdio_thinint_handler(void)
++static void tiqdio_thinint_handler(void *ind, void *drv_data)
{
- /* Disable extended error reporting for this device. */
- dasd_eer_disable(device);
- /* Forget the discipline information. */
-- if (device->discipline)
-+ if (device->discipline) {
-+ if (device->discipline->uncheck_device)
-+ device->discipline->uncheck_device(device);
- module_put(device->discipline->owner);
-+ }
- device->discipline = NULL;
- if (device->base_discipline)
- module_put(device->base_discipline->owner);
- device->base_discipline = NULL;
- device->state = DASD_STATE_NEW;
+ QDIO_DBF_TEXT4(0,trace,"thin_int");
-- dasd_free_queue(device);
-+ if (device->block)
-+ dasd_free_queue(device->block);
+@@ -1925,7 +1923,6 @@ tiqdio_thinint_handler(void)
+ tiqdio_clear_global_summary();
- /* Give up reference we took in dasd_state_new_to_known. */
- dasd_put_device(device);
-@@ -170,19 +203,19 @@ dasd_state_known_to_new(struct dasd_device * device)
- /*
- * Request the irq line for the device.
- */
--static int
--dasd_state_known_to_basic(struct dasd_device * device)
-+static int dasd_state_known_to_basic(struct dasd_device *device)
+ tiqdio_inbound_checks();
+- return 0;
+ }
+
+ static void
+@@ -2445,7 +2442,7 @@ tiqdio_set_subchannel_ind(struct qdio_irq *irq_ptr, int reset_to_zero)
+ real_addr_dev_st_chg_ind=0;
+ } else {
+ real_addr_local_summary_bit=
+- virt_to_phys((volatile void *)indicators);
++ virt_to_phys((volatile void *)tiqdio_ind);
+ real_addr_dev_st_chg_ind=
+ virt_to_phys((volatile void *)irq_ptr->dev_st_chg_ind);
+ }
+@@ -3740,23 +3737,25 @@ static void
+ tiqdio_register_thinints(void)
{
- int rc;
+ char dbf_text[20];
+- register_thinint_result=
+- s390_register_adapter_interrupt(&tiqdio_thinint_handler);
+- if (register_thinint_result) {
+- sprintf(dbf_text,"regthn%x",(register_thinint_result&0xff));
++
++ tiqdio_ind =
++ s390_register_adapter_interrupt(&tiqdio_thinint_handler, NULL);
++ if (IS_ERR(tiqdio_ind)) {
++ sprintf(dbf_text, "regthn%lx", PTR_ERR(tiqdio_ind));
+ QDIO_DBF_TEXT0(0,setup,dbf_text);
+ QDIO_PRINT_ERR("failed to register adapter handler " \
+- "(rc=%i).\nAdapter interrupts might " \
++ "(rc=%li).\nAdapter interrupts might " \
+ "not work. Continuing.\n",
+- register_thinint_result);
++ PTR_ERR(tiqdio_ind));
++ tiqdio_ind = NULL;
+ }
+ }
- /* Allocate and register gendisk structure. */
-- rc = dasd_gendisk_alloc(device);
-- if (rc)
-- return rc;
--
-+ if (device->block) {
-+ rc = dasd_gendisk_alloc(device->block);
-+ if (rc)
-+ return rc;
-+ }
- /* register 'device' debug area, used for all DBF_DEV_XXX calls */
-- device->debug_area = debug_register(device->cdev->dev.bus_id, 1, 2,
-- 8 * sizeof (long));
-+ device->debug_area = debug_register(device->cdev->dev.bus_id, 1, 1,
-+ 8 * sizeof(long));
- debug_register_view(device->debug_area, &debug_sprintf_view);
- debug_set_level(device->debug_area, DBF_WARNING);
- DBF_DEV_EVENT(DBF_EMERG, device, "%s", "debug area created");
-@@ -194,16 +227,17 @@ dasd_state_known_to_basic(struct dasd_device * device)
- /*
- * Release the irq line for the device. Terminate any running i/o.
- */
--static int
--dasd_state_basic_to_known(struct dasd_device * device)
-+static int dasd_state_basic_to_known(struct dasd_device *device)
+ static void
+ tiqdio_unregister_thinints(void)
{
- int rc;
+- if (!register_thinint_result)
+- s390_unregister_adapter_interrupt(&tiqdio_thinint_handler);
++ if (tiqdio_ind)
++ s390_unregister_adapter_interrupt(tiqdio_ind);
+ }
+
+ static int
+@@ -3768,8 +3767,8 @@ qdio_get_qdio_memory(void)
+ for (i=1;i<INDICATORS_PER_CACHELINE;i++)
+ indicator_used[i]=0;
+ indicators = kzalloc(sizeof(__u32)*(INDICATORS_PER_CACHELINE),
+- GFP_KERNEL);
+- if (!indicators)
++ GFP_KERNEL);
++ if (!indicators)
+ return -ENOMEM;
+ return 0;
+ }
+@@ -3780,7 +3779,6 @@ qdio_release_qdio_memory(void)
+ kfree(indicators);
+ }
+
-
-- dasd_gendisk_free(device);
-- rc = dasd_flush_ccw_queue(device, 1);
-+ if (device->block) {
-+ dasd_gendisk_free(device->block);
-+ dasd_block_clear_timer(device->block);
-+ }
-+ rc = dasd_flush_device_queue(device);
- if (rc)
- return rc;
-- dasd_clear_timer(device);
-+ dasd_device_clear_timer(device);
+ static void
+ qdio_unregister_dbf_views(void)
+ {
+diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
+index 6d7aad1..37870e4 100644
+--- a/drivers/s390/cio/qdio.h
++++ b/drivers/s390/cio/qdio.h
+@@ -57,7 +57,7 @@
+ of the queue to 0 */
- DBF_DEV_EVENT(DBF_EMERG, device, "%p debug area deleted", device);
- if (device->debug_area != NULL) {
-@@ -228,26 +262,32 @@ dasd_state_basic_to_known(struct dasd_device * device)
- * In case the analysis returns an error, the device setup is stopped
- * (a fake disk was already added to allow formatting).
+ #define QDIO_ESTABLISH_TIMEOUT (1*HZ)
+-#define QDIO_ACTIVATE_TIMEOUT ((5*HZ)>>10)
++#define QDIO_ACTIVATE_TIMEOUT (5*HZ)
+ #define QDIO_CLEANUP_CLEAR_TIMEOUT (20*HZ)
+ #define QDIO_CLEANUP_HALT_TIMEOUT (10*HZ)
+ #define QDIO_FORCE_CHECK_TIMEOUT (10*HZ)
+diff --git a/drivers/s390/net/claw.c b/drivers/s390/net/claw.c
+index 3561982..c307621 100644
+--- a/drivers/s390/net/claw.c
++++ b/drivers/s390/net/claw.c
+@@ -2416,7 +2416,7 @@ init_ccw_bk(struct net_device *dev)
+ privptr->p_buff_pages_perwrite);
+ #endif
+ if (p_buff==NULL) {
+- printk(KERN_INFO "%s:%s __get_free_pages"
++ printk(KERN_INFO "%s:%s __get_free_pages "
+ "for writes buf failed : get is for %d pages\n",
+ dev->name,
+ __FUNCTION__,
+diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
+index 0fd663b..7bfe8d7 100644
+--- a/drivers/s390/net/lcs.c
++++ b/drivers/s390/net/lcs.c
+@@ -1115,7 +1115,7 @@ list_modified:
+ rc = lcs_send_setipm(card, ipm);
+ spin_lock_irqsave(&card->ipm_lock, flags);
+ if (rc) {
+- PRINT_INFO("Adding multicast address failed."
++ PRINT_INFO("Adding multicast address failed. "
+ "Table possibly full!\n");
+ /* store ipm in failed list -> will be added
+ * to ipm_list again, so a retry will be done
+diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
+index c7ea938..f3d893c 100644
+--- a/drivers/s390/net/netiucv.c
++++ b/drivers/s390/net/netiucv.c
+@@ -198,8 +198,7 @@ struct iucv_connection {
+ /**
+ * Linked list of all connection structs.
*/
--static int
--dasd_state_basic_to_ready(struct dasd_device * device)
-+static int dasd_state_basic_to_ready(struct dasd_device *device)
+-static struct list_head iucv_connection_list =
+- LIST_HEAD_INIT(iucv_connection_list);
++static LIST_HEAD(iucv_connection_list);
+ static DEFINE_RWLOCK(iucv_connection_rwlock);
+
+ /**
+@@ -2089,6 +2088,11 @@ static struct attribute_group netiucv_drv_attr_group = {
+ .attrs = netiucv_drv_attrs,
+ };
+
++static struct attribute_group *netiucv_drv_attr_groups[] = {
++ &netiucv_drv_attr_group,
++ NULL,
++};
++
+ static void netiucv_banner(void)
{
- int rc;
-+ struct dasd_block *block;
+ PRINT_INFO("NETIUCV driver initialized\n");
+@@ -2113,7 +2117,6 @@ static void __exit netiucv_exit(void)
+ netiucv_unregister_device(dev);
+ }
- rc = 0;
-- if (device->discipline->do_analysis != NULL)
-- rc = device->discipline->do_analysis(device);
+- sysfs_remove_group(&netiucv_driver.kobj, &netiucv_drv_attr_group);
+ driver_unregister(&netiucv_driver);
+ iucv_unregister(&netiucv_handler, 1);
+ iucv_unregister_dbf_views();
+@@ -2133,6 +2136,7 @@ static int __init netiucv_init(void)
+ if (rc)
+ goto out_dbf;
+ IUCV_DBF_TEXT(trace, 3, __FUNCTION__);
++ netiucv_driver.groups = netiucv_drv_attr_groups;
+ rc = driver_register(&netiucv_driver);
+ if (rc) {
+ PRINT_ERR("NETIUCV: failed to register driver.\n");
+@@ -2140,18 +2144,9 @@ static int __init netiucv_init(void)
+ goto out_iucv;
+ }
+
+- rc = sysfs_create_group(&netiucv_driver.kobj, &netiucv_drv_attr_group);
- if (rc) {
-- if (rc != -EAGAIN)
-- device->state = DASD_STATE_UNFMT;
-- return rc;
+- PRINT_ERR("NETIUCV: failed to add driver attributes.\n");
+- IUCV_DBF_TEXT_(setup, 2,
+- "ret %d - netiucv_drv_attr_group\n", rc);
+- goto out_driver;
- }
-+ block = device->block;
- /* make disk known with correct capacity */
-- dasd_setup_queue(device);
-- set_capacity(device->gdp, device->blocks << device->s2b_shift);
-- device->state = DASD_STATE_READY;
-- rc = dasd_scan_partitions(device);
-- if (rc)
-- device->state = DASD_STATE_BASIC;
-+ if (block) {
-+ if (block->base->discipline->do_analysis != NULL)
-+ rc = block->base->discipline->do_analysis(block);
-+ if (rc) {
-+ if (rc != -EAGAIN)
-+ device->state = DASD_STATE_UNFMT;
-+ return rc;
-+ }
-+ dasd_setup_queue(block);
-+ set_capacity(block->gdp,
-+ block->blocks << block->s2b_shift);
-+ device->state = DASD_STATE_READY;
-+ rc = dasd_scan_partitions(block);
-+ if (rc)
-+ device->state = DASD_STATE_BASIC;
-+ } else {
-+ device->state = DASD_STATE_READY;
-+ }
+ netiucv_banner();
return rc;
- }
-@@ -256,28 +296,31 @@ dasd_state_basic_to_ready(struct dasd_device * device)
- * Forget format information. Check if the target level is basic
- * and if it is create fake disk for formatting.
- */
--static int
--dasd_state_ready_to_basic(struct dasd_device * device)
-+static int dasd_state_ready_to_basic(struct dasd_device *device)
- {
- int rc;
+-out_driver:
+- driver_unregister(&netiucv_driver);
+ out_iucv:
+ iucv_unregister(&netiucv_handler, 1);
+ out_dbf:
+diff --git a/drivers/s390/net/qeth_main.c b/drivers/s390/net/qeth_main.c
+index ff999ff..62606ce 100644
+--- a/drivers/s390/net/qeth_main.c
++++ b/drivers/s390/net/qeth_main.c
+@@ -3890,7 +3890,7 @@ qeth_verify_vlan_dev(struct net_device *dev, struct qeth_card *card)
+ break;
+ }
+ }
+- if (rc && !(VLAN_DEV_INFO(dev)->real_dev->priv == (void *)card))
++ if (rc && !(vlan_dev_info(dev)->real_dev->priv == (void *)card))
+ return 0;
-- rc = dasd_flush_ccw_queue(device, 0);
-- if (rc)
-- return rc;
-- dasd_destroy_partitions(device);
-- dasd_flush_request_queue(device);
-- device->blocks = 0;
-- device->bp_block = 0;
-- device->s2b_shift = 0;
- device->state = DASD_STATE_BASIC;
-+ if (device->block) {
-+ struct dasd_block *block = device->block;
-+ rc = dasd_flush_block_queue(block);
-+ if (rc) {
-+ device->state = DASD_STATE_READY;
-+ return rc;
-+ }
-+ dasd_destroy_partitions(block);
-+ dasd_flush_request_queue(block);
-+ block->blocks = 0;
-+ block->bp_block = 0;
-+ block->s2b_shift = 0;
-+ }
- return 0;
- }
+ #endif
+@@ -3930,7 +3930,7 @@ qeth_get_card_from_dev(struct net_device *dev)
+ card = (struct qeth_card *)dev->priv;
+ else if (rc == QETH_VLAN_CARD)
+ card = (struct qeth_card *)
+- VLAN_DEV_INFO(dev)->real_dev->priv;
++ vlan_dev_info(dev)->real_dev->priv;
- /*
- * Back to basic.
- */
--static int
--dasd_state_unfmt_to_basic(struct dasd_device * device)
-+static int dasd_state_unfmt_to_basic(struct dasd_device *device)
- {
- device->state = DASD_STATE_BASIC;
- return 0;
-@@ -291,17 +334,31 @@ dasd_state_unfmt_to_basic(struct dasd_device * device)
- static int
- dasd_state_ready_to_online(struct dasd_device * device)
- {
-+ int rc;
-+
-+ if (device->discipline->ready_to_online) {
-+ rc = device->discipline->ready_to_online(device);
-+ if (rc)
-+ return rc;
-+ }
- device->state = DASD_STATE_ONLINE;
-- dasd_schedule_bh(device);
-+ if (device->block)
-+ dasd_schedule_block_bh(device->block);
+ QETH_DBF_TEXT_(trace, 4, "%d", rc);
+ return card ;
+@@ -8340,7 +8340,7 @@ qeth_arp_constructor(struct neighbour *neigh)
+ neigh->parms = neigh_parms_clone(parms);
+ rcu_read_unlock();
+
+- neigh->type = inet_addr_type(*(__be32 *) neigh->primary_key);
++ neigh->type = inet_addr_type(&init_net, *(__be32 *) neigh->primary_key);
+ neigh->nud_state = NUD_NOARP;
+ neigh->ops = arp_direct_ops;
+ neigh->output = neigh->ops->queue_xmit;
+diff --git a/drivers/s390/net/qeth_proc.c b/drivers/s390/net/qeth_proc.c
+index f1ff165..46ecd03 100644
+--- a/drivers/s390/net/qeth_proc.c
++++ b/drivers/s390/net/qeth_proc.c
+@@ -146,7 +146,7 @@ qeth_procfile_seq_show(struct seq_file *s, void *it)
return 0;
}
- /*
- * Stop the requeueing of requests again.
- */
--static int
--dasd_state_online_to_ready(struct dasd_device * device)
-+static int dasd_state_online_to_ready(struct dasd_device *device)
- {
-+ int rc;
-+
-+ if (device->discipline->online_to_ready) {
-+ rc = device->discipline->online_to_ready(device);
-+ if (rc)
-+ return rc;
-+ }
- device->state = DASD_STATE_READY;
+-static struct seq_operations qeth_procfile_seq_ops = {
++static const struct seq_operations qeth_procfile_seq_ops = {
+ .start = qeth_procfile_seq_start,
+ .stop = qeth_procfile_seq_stop,
+ .next = qeth_procfile_seq_next,
+@@ -264,7 +264,7 @@ qeth_perf_procfile_seq_show(struct seq_file *s, void *it)
return 0;
}
-@@ -309,8 +366,7 @@ dasd_state_online_to_ready(struct dasd_device * device)
- /*
- * Device startup state changes.
- */
--static int
--dasd_increase_state(struct dasd_device *device)
-+static int dasd_increase_state(struct dasd_device *device)
- {
- int rc;
-@@ -345,8 +401,7 @@ dasd_increase_state(struct dasd_device *device)
- /*
- * Device shutdown state changes.
- */
--static int
--dasd_decrease_state(struct dasd_device *device)
-+static int dasd_decrease_state(struct dasd_device *device)
- {
- int rc;
+-static struct seq_operations qeth_perf_procfile_seq_ops = {
++static const struct seq_operations qeth_perf_procfile_seq_ops = {
+ .start = qeth_procfile_seq_start,
+ .stop = qeth_procfile_seq_stop,
+ .next = qeth_procfile_seq_next,
+diff --git a/drivers/s390/net/smsgiucv.c b/drivers/s390/net/smsgiucv.c
+index 47bb47b..8735a41 100644
+--- a/drivers/s390/net/smsgiucv.c
++++ b/drivers/s390/net/smsgiucv.c
+@@ -42,7 +42,7 @@ MODULE_DESCRIPTION ("Linux for S/390 IUCV special message driver");
+ static struct iucv_path *smsg_path;
-@@ -381,8 +436,7 @@ dasd_decrease_state(struct dasd_device *device)
- /*
- * This is the main startup/shutdown routine.
- */
--static void
--dasd_change_state(struct dasd_device *device)
-+static void dasd_change_state(struct dasd_device *device)
- {
- int rc;
+ static DEFINE_SPINLOCK(smsg_list_lock);
+-static struct list_head smsg_list = LIST_HEAD_INIT(smsg_list);
++static LIST_HEAD(smsg_list);
-@@ -409,17 +463,15 @@ dasd_change_state(struct dasd_device *device)
- * dasd_kick_device will schedule a call do do_kick_device to the kernel
- * event daemon.
- */
--static void
--do_kick_device(struct work_struct *work)
-+static void do_kick_device(struct work_struct *work)
- {
- struct dasd_device *device = container_of(work, struct dasd_device, kick_work);
- dasd_change_state(device);
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- dasd_put_device(device);
- }
+ static int smsg_path_pending(struct iucv_path *, u8 ipvmid[8], u8 ipuser[16]);
+ static void smsg_message_pending(struct iucv_path *, struct iucv_message *);
+diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
+index 0011849..874b55e 100644
+--- a/drivers/s390/scsi/zfcp_aux.c
++++ b/drivers/s390/scsi/zfcp_aux.c
+@@ -844,8 +844,6 @@ zfcp_unit_enqueue(struct zfcp_port *port, fcp_lun_t fcp_lun)
+ unit->sysfs_device.release = zfcp_sysfs_unit_release;
+ dev_set_drvdata(&unit->sysfs_device, unit);
--void
--dasd_kick_device(struct dasd_device *device)
-+void dasd_kick_device(struct dasd_device *device)
- {
- dasd_get_device(device);
- /* queue call to dasd_kick_device to the kernel event daemon. */
-@@ -429,8 +481,7 @@ dasd_kick_device(struct dasd_device *device)
- /*
- * Set the target state for a device and starts the state change.
- */
--void
--dasd_set_target_state(struct dasd_device *device, int target)
-+void dasd_set_target_state(struct dasd_device *device, int target)
- {
- /* If we are in probeonly mode stop at DASD_STATE_READY. */
- if (dasd_probeonly && target > DASD_STATE_READY)
-@@ -447,14 +498,12 @@ dasd_set_target_state(struct dasd_device *device, int target)
- /*
- * Enable devices with device numbers in [from..to].
- */
--static inline int
--_wait_for_device(struct dasd_device *device)
-+static inline int _wait_for_device(struct dasd_device *device)
- {
- return (device->state == device->target);
- }
+- init_waitqueue_head(&unit->scsi_scan_wq);
+-
+ /* mark unit unusable as long as sysfs registration is not complete */
+ atomic_set_mask(ZFCP_STATUS_COMMON_REMOVE, &unit->status);
--void
--dasd_enable_device(struct dasd_device *device)
-+void dasd_enable_device(struct dasd_device *device)
+diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c
+index e01cbf1..edc5015 100644
+--- a/drivers/s390/scsi/zfcp_ccw.c
++++ b/drivers/s390/scsi/zfcp_ccw.c
+@@ -52,6 +52,9 @@ static struct ccw_driver zfcp_ccw_driver = {
+ .set_offline = zfcp_ccw_set_offline,
+ .notify = zfcp_ccw_notify,
+ .shutdown = zfcp_ccw_shutdown,
++ .driver = {
++ .groups = zfcp_driver_attr_groups,
++ },
+ };
+
+ MODULE_DEVICE_TABLE(ccw, zfcp_ccw_device_id);
+@@ -120,6 +123,9 @@ zfcp_ccw_remove(struct ccw_device *ccw_device)
+
+ list_for_each_entry_safe(port, p, &adapter->port_remove_lh, list) {
+ list_for_each_entry_safe(unit, u, &port->unit_remove_lh, list) {
++ if (atomic_test_mask(ZFCP_STATUS_UNIT_REGISTERED,
++ &unit->status))
++ scsi_remove_device(unit->device);
+ zfcp_unit_dequeue(unit);
+ }
+ zfcp_port_dequeue(port);
+@@ -251,16 +257,7 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event)
+ int __init
+ zfcp_ccw_register(void)
{
- dasd_set_target_state(device, DASD_STATE_ONLINE);
- if (device->state <= DASD_STATE_KNOWN)
-@@ -475,20 +524,20 @@ unsigned int dasd_profile_level = DASD_PROFILE_OFF;
- /*
- * Increments counter in global and local profiling structures.
- */
--#define dasd_profile_counter(value, counter, device) \
-+#define dasd_profile_counter(value, counter, block) \
- { \
- int index; \
- for (index = 0; index < 31 && value >> (2+index); index++); \
- dasd_global_profile.counter[index]++; \
-- device->profile.counter[index]++; \
-+ block->profile.counter[index]++; \
+- int retval;
+-
+- retval = ccw_driver_register(&zfcp_ccw_driver);
+- if (retval)
+- goto out;
+- retval = zfcp_sysfs_driver_create_files(&zfcp_ccw_driver.driver);
+- if (retval)
+- ccw_driver_unregister(&zfcp_ccw_driver);
+- out:
+- return retval;
++ return ccw_driver_register(&zfcp_ccw_driver);
}
- /*
- * Add profiling information for cqr before execution.
- */
--static void
--dasd_profile_start(struct dasd_device *device, struct dasd_ccw_req * cqr,
-- struct request *req)
-+static void dasd_profile_start(struct dasd_block *block,
-+ struct dasd_ccw_req *cqr,
-+ struct request *req)
- {
- struct list_head *l;
- unsigned int counter;
-@@ -498,19 +547,19 @@ dasd_profile_start(struct dasd_device *device, struct dasd_ccw_req * cqr,
+ /**
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index ffa3bf7..701046c 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -161,12 +161,6 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req)
+ (fsf_req->fsf_command == FSF_QTCB_OPEN_LUN)) {
+ strncpy(rec->tag2, "open", ZFCP_DBF_TAG_SIZE);
+ level = 4;
+- } else if ((prot_status_qual->doubleword[0] != 0) ||
+- (prot_status_qual->doubleword[1] != 0) ||
+- (fsf_status_qual->doubleword[0] != 0) ||
+- (fsf_status_qual->doubleword[1] != 0)) {
+- strncpy(rec->tag2, "qual", ZFCP_DBF_TAG_SIZE);
+- level = 3;
+ } else {
+ strncpy(rec->tag2, "norm", ZFCP_DBF_TAG_SIZE);
+ level = 6;
+diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
+index e268f79..9e9f6c1 100644
+--- a/drivers/s390/scsi/zfcp_def.h
++++ b/drivers/s390/scsi/zfcp_def.h
+@@ -118,7 +118,7 @@ zfcp_address_to_sg(void *address, struct scatterlist *list, unsigned int size)
- /* count the length of the chanq for statistics */
- counter = 0;
-- list_for_each(l, &device->ccw_queue)
-+ list_for_each(l, &block->ccw_queue)
- if (++counter >= 31)
- break;
- dasd_global_profile.dasd_io_nr_req[counter]++;
-- device->profile.dasd_io_nr_req[counter]++;
-+ block->profile.dasd_io_nr_req[counter]++;
- }
+ #define ZFCP_SBAL_TIMEOUT (5*HZ)
- /*
- * Add profiling information for cqr after execution.
- */
--static void
--dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
-- struct request *req)
-+static void dasd_profile_end(struct dasd_block *block,
-+ struct dasd_ccw_req *cqr,
-+ struct request *req)
- {
- long strtime, irqtime, endtime, tottime; /* in microseconds */
- long tottimeps, sectors;
-@@ -532,27 +581,27 @@ dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
+-#define ZFCP_TYPE2_RECOVERY_TIME (8*HZ)
++#define ZFCP_TYPE2_RECOVERY_TIME 8 /* seconds */
- if (!dasd_global_profile.dasd_io_reqs)
- memset(&dasd_global_profile, 0,
-- sizeof (struct dasd_profile_info_t));
-+ sizeof(struct dasd_profile_info_t));
- dasd_global_profile.dasd_io_reqs++;
- dasd_global_profile.dasd_io_sects += sectors;
+ /* queue polling (values in microseconds) */
+ #define ZFCP_MAX_INPUT_THRESHOLD 5000 /* FIXME: tune */
+@@ -139,7 +139,7 @@ zfcp_address_to_sg(void *address, struct scatterlist *list, unsigned int size)
+ #define ZFCP_STATUS_READS_RECOM FSF_STATUS_READS_RECOM
-- if (!device->profile.dasd_io_reqs)
-- memset(&device->profile, 0,
-- sizeof (struct dasd_profile_info_t));
-- device->profile.dasd_io_reqs++;
-- device->profile.dasd_io_sects += sectors;
-+ if (!block->profile.dasd_io_reqs)
-+ memset(&block->profile, 0,
-+ sizeof(struct dasd_profile_info_t));
-+ block->profile.dasd_io_reqs++;
-+ block->profile.dasd_io_sects += sectors;
+ /* Do 1st retry in 1 second, then double the timeout for each following retry */
+-#define ZFCP_EXCHANGE_CONFIG_DATA_FIRST_SLEEP 100
++#define ZFCP_EXCHANGE_CONFIG_DATA_FIRST_SLEEP 1
+ #define ZFCP_EXCHANGE_CONFIG_DATA_RETRIES 7
-- dasd_profile_counter(sectors, dasd_io_secs, device);
-- dasd_profile_counter(tottime, dasd_io_times, device);
-- dasd_profile_counter(tottimeps, dasd_io_timps, device);
-- dasd_profile_counter(strtime, dasd_io_time1, device);
-- dasd_profile_counter(irqtime, dasd_io_time2, device);
-- dasd_profile_counter(irqtime / sectors, dasd_io_time2ps, device);
-- dasd_profile_counter(endtime, dasd_io_time3, device);
-+ dasd_profile_counter(sectors, dasd_io_secs, block);
-+ dasd_profile_counter(tottime, dasd_io_times, block);
-+ dasd_profile_counter(tottimeps, dasd_io_timps, block);
-+ dasd_profile_counter(strtime, dasd_io_time1, block);
-+ dasd_profile_counter(irqtime, dasd_io_time2, block);
-+ dasd_profile_counter(irqtime / sectors, dasd_io_time2ps, block);
-+ dasd_profile_counter(endtime, dasd_io_time3, block);
+ /* timeout value for "default timer" for fsf requests */
+@@ -983,10 +983,6 @@ struct zfcp_unit {
+ struct scsi_device *device; /* scsi device struct pointer */
+ struct zfcp_erp_action erp_action; /* pending error recovery */
+ atomic_t erp_counter;
+- wait_queue_head_t scsi_scan_wq; /* can be used to wait until
+- all scsi_scan_target
+- requests have been
+- completed. */
+ };
+
+ /* FSF request */
+@@ -1127,6 +1123,20 @@ zfcp_reqlist_find(struct zfcp_adapter *adapter, unsigned long req_id)
+ return NULL;
}
- #else
--#define dasd_profile_start(device, cqr, req) do {} while (0)
--#define dasd_profile_end(device, cqr, req) do {} while (0)
-+#define dasd_profile_start(block, cqr, req) do {} while (0)
-+#define dasd_profile_end(block, cqr, req) do {} while (0)
- #endif /* CONFIG_DASD_PROFILE */
++static inline struct zfcp_fsf_req *
++zfcp_reqlist_find_safe(struct zfcp_adapter *adapter, struct zfcp_fsf_req *req)
++{
++ struct zfcp_fsf_req *request;
++ unsigned int idx;
++
++ for (idx = 0; idx < REQUEST_LIST_SIZE; idx++) {
++ list_for_each_entry(request, &adapter->req_list[idx], list)
++ if (request == req)
++ return request;
++ }
++ return NULL;
++}
++
/*
-@@ -562,9 +611,9 @@ dasd_profile_end(struct dasd_device *device, struct dasd_ccw_req * cqr,
- * memory and 2) dasd_smalloc_request uses the static ccw memory
- * that gets allocated for each device.
+ * functions needed for reference/usage counting
*/
--struct dasd_ccw_req *
--dasd_kmalloc_request(char *magic, int cplength, int datasize,
-- struct dasd_device * device)
-+struct dasd_ccw_req *dasd_kmalloc_request(char *magic, int cplength,
-+ int datasize,
-+ struct dasd_device *device)
- {
- struct dasd_ccw_req *cqr;
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 07fa824..2dc8110 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -131,7 +131,7 @@ static void zfcp_close_qdio(struct zfcp_adapter *adapter)
+ debug_text_event(adapter->erp_dbf, 3, "qdio_down2a");
+ while (qdio_shutdown(adapter->ccw_device,
+ QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS)
+- msleep(1000);
++ ssleep(1);
+ debug_text_event(adapter->erp_dbf, 3, "qdio_down2b");
-@@ -600,9 +649,9 @@ dasd_kmalloc_request(char *magic, int cplength, int datasize,
- return cqr;
- }
+ /* cleanup used outbound sbals */
+@@ -456,7 +456,7 @@ zfcp_test_link(struct zfcp_port *port)
--struct dasd_ccw_req *
--dasd_smalloc_request(char *magic, int cplength, int datasize,
-- struct dasd_device * device)
-+struct dasd_ccw_req *dasd_smalloc_request(char *magic, int cplength,
-+ int datasize,
-+ struct dasd_device *device)
- {
- unsigned long flags;
- struct dasd_ccw_req *cqr;
-@@ -649,8 +698,7 @@ dasd_smalloc_request(char *magic, int cplength, int datasize,
- * idal lists that might have been created by dasd_set_cda and the
- * struct dasd_ccw_req itself.
- */
--void
--dasd_kfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
-+void dasd_kfree_request(struct dasd_ccw_req *cqr, struct dasd_device *device)
- {
- #ifdef CONFIG_64BIT
- struct ccw1 *ccw;
-@@ -667,8 +715,7 @@ dasd_kfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
- dasd_put_device(device);
+ zfcp_port_get(port);
+ retval = zfcp_erp_adisc(port);
+- if (retval != 0) {
++ if (retval != 0 && retval != -EBUSY) {
+ zfcp_port_put(port);
+ ZFCP_LOG_NORMAL("reopen needed for port 0x%016Lx "
+ "on adapter %s\n ", port->wwpn,
+@@ -846,7 +846,8 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action)
+ if (erp_action->fsf_req) {
+ /* take lock to ensure that request is not deleted meanwhile */
+ spin_lock(&adapter->req_list_lock);
+- if (zfcp_reqlist_find(adapter, erp_action->fsf_req->req_id)) {
++ if (zfcp_reqlist_find_safe(adapter, erp_action->fsf_req) &&
++ erp_action->fsf_req->erp_action == erp_action) {
+ /* fsf_req still exists */
+ debug_text_event(adapter->erp_dbf, 3, "a_ca_req");
+ debug_event(adapter->erp_dbf, 3, &erp_action->fsf_req,
+@@ -1285,7 +1286,7 @@ zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action)
+ * note: no lock in subsequent strategy routines
+ * (this allows these routine to call schedule, e.g.
+ * kmalloc with such flags or qdio_initialize & friends)
+- * Note: in case of timeout, the seperate strategies will fail
++ * Note: in case of timeout, the separate strategies will fail
+ * anyhow. No need for a special action. Even worse, a nameserver
+ * failure would not wake up waiting ports without the call.
+ */
+@@ -1609,7 +1610,6 @@ static void zfcp_erp_scsi_scan(struct work_struct *work)
+ scsi_scan_target(&rport->dev, 0, rport->scsi_target_id,
+ unit->scsi_lun, 0);
+ atomic_clear_mask(ZFCP_STATUS_UNIT_SCSI_WORK_PENDING, &unit->status);
+- wake_up(&unit->scsi_scan_wq);
+ zfcp_unit_put(unit);
+ kfree(p);
}
+@@ -1900,7 +1900,7 @@ zfcp_erp_adapter_strategy(struct zfcp_erp_action *erp_action)
+ ZFCP_LOG_INFO("Waiting to allow the adapter %s "
+ "to recover itself\n",
+ zfcp_get_busid_by_adapter(adapter));
+- msleep(jiffies_to_msecs(ZFCP_TYPE2_RECOVERY_TIME));
++ ssleep(ZFCP_TYPE2_RECOVERY_TIME);
+ }
--void
--dasd_sfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
-+void dasd_sfree_request(struct dasd_ccw_req *cqr, struct dasd_device *device)
- {
- unsigned long flags;
-
-@@ -681,14 +728,13 @@ dasd_sfree_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
- /*
- * Check discipline magic in cqr.
- */
--static inline int
--dasd_check_cqr(struct dasd_ccw_req *cqr)
-+static inline int dasd_check_cqr(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
+ return retval;
+@@ -2080,7 +2080,7 @@ zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *erp_action)
+ debug_text_event(adapter->erp_dbf, 3, "qdio_down1a");
+ while (qdio_shutdown(adapter->ccw_device,
+ QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS)
+- msleep(1000);
++ ssleep(1);
+ debug_text_event(adapter->erp_dbf, 3, "qdio_down1b");
- if (cqr == NULL)
- return -EINVAL;
-- device = cqr->device;
-+ device = cqr->startdev;
- if (strncmp((char *) &cqr->magic, device->discipline->ebcname, 4)) {
- DEV_MESSAGE(KERN_WARNING, device,
- " dasd_ccw_req 0x%08x magic doesn't match"
-@@ -706,8 +752,7 @@ dasd_check_cqr(struct dasd_ccw_req *cqr)
- * ccw_device_clear can fail if the i/o subsystem
- * is in a bad mood.
- */
--int
--dasd_term_IO(struct dasd_ccw_req * cqr)
-+int dasd_term_IO(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
- int retries, rc;
-@@ -717,13 +762,13 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
- if (rc)
- return rc;
- retries = 0;
-- device = (struct dasd_device *) cqr->device;
-+ device = (struct dasd_device *) cqr->startdev;
- while ((retries < 5) && (cqr->status == DASD_CQR_IN_IO)) {
- rc = ccw_device_clear(device->cdev, (long) cqr);
- switch (rc) {
- case 0: /* termination successful */
- cqr->retries--;
-- cqr->status = DASD_CQR_CLEAR;
-+ cqr->status = DASD_CQR_CLEAR_PENDING;
- cqr->stopclk = get_clock();
- cqr->starttime = 0;
- DBF_DEV_EVENT(DBF_DEBUG, device,
-@@ -753,7 +798,7 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
- }
- retries++;
+ failed_qdio_establish:
+@@ -2165,7 +2165,7 @@ zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *erp_action)
+ ZFCP_LOG_DEBUG("host connection still initialising... "
+ "waiting and retrying...\n");
+ /* sleep a little bit before retry */
+- msleep(jiffies_to_msecs(sleep));
++ ssleep(sleep);
+ sleep *= 2;
}
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- return rc;
- }
-@@ -761,8 +806,7 @@ dasd_term_IO(struct dasd_ccw_req * cqr)
- * Start the i/o. This start_IO can fail if the channel is really busy.
- * In that case set up a timer to start the request later.
- */
--int
--dasd_start_IO(struct dasd_ccw_req * cqr)
-+int dasd_start_IO(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
- int rc;
-@@ -771,12 +815,12 @@ dasd_start_IO(struct dasd_ccw_req * cqr)
- rc = dasd_check_cqr(cqr);
- if (rc)
- return rc;
-- device = (struct dasd_device *) cqr->device;
-+ device = (struct dasd_device *) cqr->startdev;
- if (cqr->retries < 0) {
- DEV_MESSAGE(KERN_DEBUG, device,
- "start_IO: request %p (%02x/%i) - no retry left.",
- cqr, cqr->status, cqr->retries);
-- cqr->status = DASD_CQR_FAILED;
-+ cqr->status = DASD_CQR_ERROR;
- return -EIO;
- }
- cqr->startclk = get_clock();
-@@ -833,8 +877,7 @@ dasd_start_IO(struct dasd_ccw_req * cqr)
- * The head of the ccw queue will have status DASD_CQR_IN_IO for 1),
- * DASD_CQR_QUEUED for 2) and 3).
- */
--static void
--dasd_timeout_device(unsigned long ptr)
-+static void dasd_device_timeout(unsigned long ptr)
- {
- unsigned long flags;
- struct dasd_device *device;
-@@ -844,14 +887,13 @@ dasd_timeout_device(unsigned long ptr)
- /* re-activate request queue */
- device->stopped &= ~DASD_STOPPED_PENDING;
- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- }
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index 8534cf0..06b1079 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -27,8 +27,7 @@
+ extern struct zfcp_data zfcp_data;
- /*
- * Setup timeout for a device in jiffies.
- */
--void
--dasd_set_timer(struct dasd_device *device, int expires)
-+void dasd_device_set_timer(struct dasd_device *device, int expires)
- {
- if (expires == 0) {
- if (timer_pending(&device->timer))
-@@ -862,7 +904,7 @@ dasd_set_timer(struct dasd_device *device, int expires)
- if (mod_timer(&device->timer, jiffies + expires))
- return;
- }
-- device->timer.function = dasd_timeout_device;
-+ device->timer.function = dasd_device_timeout;
- device->timer.data = (unsigned long) device;
- device->timer.expires = jiffies + expires;
- add_timer(&device->timer);
-@@ -871,15 +913,14 @@ dasd_set_timer(struct dasd_device *device, int expires)
- /*
- * Clear timeout for a device.
- */
--void
--dasd_clear_timer(struct dasd_device *device)
-+void dasd_device_clear_timer(struct dasd_device *device)
- {
- if (timer_pending(&device->timer))
- del_timer(&device->timer);
- }
+ /******************************** SYSFS *************************************/
+-extern int zfcp_sysfs_driver_create_files(struct device_driver *);
+-extern void zfcp_sysfs_driver_remove_files(struct device_driver *);
++extern struct attribute_group *zfcp_driver_attr_groups[];
+ extern int zfcp_sysfs_adapter_create_files(struct device *);
+ extern void zfcp_sysfs_adapter_remove_files(struct device *);
+ extern int zfcp_sysfs_port_create_files(struct device *, u32);
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index ff866eb..e45f85f 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -502,7 +502,7 @@ zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *fsf_req)
+ fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR;
+ break;
+ case FSF_SQ_NO_RECOM:
+- ZFCP_LOG_NORMAL("bug: No recommendation could be given for a"
++ ZFCP_LOG_NORMAL("bug: No recommendation could be given for a "
+ "problem on the adapter %s "
+ "Stopping all operations on this adapter. ",
+ zfcp_get_busid_by_adapter(fsf_req->adapter));
+@@ -813,7 +813,7 @@ zfcp_fsf_status_read_port_closed(struct zfcp_fsf_req *fsf_req)
+ read_unlock_irqrestore(&zfcp_data.config_lock, flags);
--static void
--dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
-+static void dasd_handle_killed_request(struct ccw_device *cdev,
-+ unsigned long intparm)
- {
- struct dasd_ccw_req *cqr;
- struct dasd_device *device;
-@@ -893,7 +934,7 @@ dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
- return;
+ if (!port || (port->d_id != (status_buffer->d_id & ZFCP_DID_MASK))) {
+- ZFCP_LOG_NORMAL("bug: Reopen port indication received for"
++ ZFCP_LOG_NORMAL("bug: Reopen port indication received for "
+ "nonexisting port with d_id 0x%06x on "
+ "adapter %s. Ignored.\n",
+ status_buffer->d_id & ZFCP_DID_MASK,
+@@ -1116,6 +1116,10 @@ zfcp_fsf_abort_fcp_command(unsigned long old_req_id,
+ goto out;
}
-- device = (struct dasd_device *) cqr->device;
-+ device = (struct dasd_device *) cqr->startdev;
- if (device == NULL ||
- device != dasd_device_from_cdev_locked(cdev) ||
- strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
-@@ -905,46 +946,32 @@ dasd_handle_killed_request(struct ccw_device *cdev, unsigned long intparm)
- /* Schedule request to be retried. */
- cqr->status = DASD_CQR_QUEUED;
++ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
++ &unit->status)))
++ goto unit_blocked;
++
+ sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0);
+ sbale[0].flags |= SBAL_FLAGS0_TYPE_READ;
+ sbale[1].flags |= SBAL_FLAGS_LAST_ENTRY;
+@@ -1131,22 +1135,13 @@ zfcp_fsf_abort_fcp_command(unsigned long old_req_id,
-- dasd_clear_timer(device);
-- dasd_schedule_bh(device);
-+ dasd_device_clear_timer(device);
-+ dasd_schedule_device_bh(device);
- dasd_put_device(device);
- }
+ zfcp_fsf_start_timer(fsf_req, ZFCP_SCSI_ER_TIMEOUT);
+ retval = zfcp_fsf_req_send(fsf_req);
+- if (retval) {
+- ZFCP_LOG_INFO("error: Failed to send abort command request "
+- "on adapter %s, port 0x%016Lx, unit 0x%016Lx\n",
+- zfcp_get_busid_by_adapter(adapter),
+- unit->port->wwpn, unit->fcp_lun);
++ if (!retval)
++ goto out;
++
++ unit_blocked:
+ zfcp_fsf_req_free(fsf_req);
+ fsf_req = NULL;
+- goto out;
+- }
--static void
--dasd_handle_state_change_pending(struct dasd_device *device)
-+void dasd_generic_handle_state_change(struct dasd_device *device)
+- ZFCP_LOG_DEBUG("Abort FCP Command request initiated "
+- "(adapter%s, port d_id=0x%06x, "
+- "unit x%016Lx, old_req_id=0x%lx)\n",
+- zfcp_get_busid_by_adapter(adapter),
+- unit->port->d_id,
+- unit->fcp_lun, old_req_id);
+ out:
+ write_unlock_irqrestore(&adapter->request_queue.queue_lock, lock_flags);
+ return fsf_req;
+@@ -1164,8 +1159,8 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
{
-- struct dasd_ccw_req *cqr;
-- struct list_head *l, *n;
--
- /* First of all start sense subsystem status request. */
- dasd_eer_snss(device);
+ int retval = -EINVAL;
+ struct zfcp_unit *unit;
+- unsigned char status_qual =
+- new_fsf_req->qtcb->header.fsf_status_qual.word[0];
++ union fsf_status_qual *fsf_stat_qual =
++ &new_fsf_req->qtcb->header.fsf_status_qual;
- device->stopped &= ~DASD_STOPPED_PENDING;
--
-- /* restart all 'running' IO on queue */
-- list_for_each_safe(l, n, &device->ccw_queue) {
-- cqr = list_entry(l, struct dasd_ccw_req, list);
-- if (cqr->status == DASD_CQR_IN_IO) {
-- cqr->status = DASD_CQR_QUEUED;
-- }
-- }
-- dasd_clear_timer(device);
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
-+ if (device->block)
-+ dasd_schedule_block_bh(device->block);
- }
+ if (new_fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) {
+ /* do not set ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED */
+@@ -1178,7 +1173,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
+ switch (new_fsf_req->qtcb->header.fsf_status) {
- /*
- * Interrupt handler for "normal" ssch-io based dasd devices.
- */
--void
--dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
-- struct irb *irb)
-+void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
-+ struct irb *irb)
- {
- struct dasd_ccw_req *cqr, *next;
- struct dasd_device *device;
- unsigned long long now;
- int expires;
-- dasd_era_t era;
-- char mask;
+ case FSF_PORT_HANDLE_NOT_VALID:
+- if (status_qual >> 4 != status_qual % 0xf) {
++ if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) {
+ debug_text_event(new_fsf_req->adapter->erp_dbf, 3,
+ "fsf_s_phand_nv0");
+ /*
+@@ -1207,8 +1202,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
+ break;
- if (IS_ERR(irb)) {
- switch (PTR_ERR(irb)) {
-@@ -969,29 +996,25 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
- cdev->dev.bus_id, ((irb->scsw.cstat<<8)|irb->scsw.dstat),
- (unsigned int) intparm);
+ case FSF_LUN_HANDLE_NOT_VALID:
+- if (status_qual >> 4 != status_qual % 0xf) {
+- /* 2 */
++ if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) {
+ debug_text_event(new_fsf_req->adapter->erp_dbf, 3,
+ "fsf_s_lhand_nv0");
+ /*
+@@ -1674,6 +1668,12 @@ zfcp_fsf_send_els(struct zfcp_send_els *els)
+ goto failed_req;
+ }
-- /* first of all check for state change pending interrupt */
-- mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
-- if ((irb->scsw.dstat & mask) == mask) {
-+ /* check for unsolicited interrupts */
-+ cqr = (struct dasd_ccw_req *) intparm;
-+ if (!cqr || ((irb->scsw.cc == 1) &&
-+ (irb->scsw.fctl & SCSW_FCTL_START_FUNC) &&
-+ (irb->scsw.stctl & SCSW_STCTL_STATUS_PEND)) ) {
-+ if (cqr && cqr->status == DASD_CQR_IN_IO)
-+ cqr->status = DASD_CQR_QUEUED;
- device = dasd_device_from_cdev_locked(cdev);
- if (!IS_ERR(device)) {
-- dasd_handle_state_change_pending(device);
-+ dasd_device_clear_timer(device);
-+ device->discipline->handle_unsolicited_interrupt(device,
-+ irb);
- dasd_put_device(device);
- }
- return;
++ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
++ &els->port->status))) {
++ ret = -EBUSY;
++ goto port_blocked;
++ }
++
+ sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0);
+ if (zfcp_use_one_sbal(els->req, els->req_count,
+ els->resp, els->resp_count)){
+@@ -1755,6 +1755,7 @@ zfcp_fsf_send_els(struct zfcp_send_els *els)
+ "0x%06x)\n", zfcp_get_busid_by_adapter(adapter), d_id);
+ goto out;
+
++ port_blocked:
+ failed_send:
+ zfcp_fsf_req_free(fsf_req);
+
+@@ -2280,7 +2281,7 @@ zfcp_fsf_exchange_port_data(struct zfcp_erp_action *erp_action)
+ &lock_flags, &fsf_req);
+ if (retval) {
+ ZFCP_LOG_INFO("error: Out of resources. Could not create an "
+- "exchange port data request for"
++ "exchange port data request for "
+ "the adapter %s.\n",
+ zfcp_get_busid_by_adapter(adapter));
+ write_unlock_irqrestore(&adapter->request_queue.queue_lock,
+@@ -2339,7 +2340,7 @@ zfcp_fsf_exchange_port_data_sync(struct zfcp_adapter *adapter,
+ 0, NULL, &lock_flags, &fsf_req);
+ if (retval) {
+ ZFCP_LOG_INFO("error: Out of resources. Could not create an "
+- "exchange port data request for"
++ "exchange port data request for "
+ "the adapter %s.\n",
+ zfcp_get_busid_by_adapter(adapter));
+ write_unlock_irqrestore(&adapter->request_queue.queue_lock,
+@@ -3592,6 +3593,12 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter,
+ goto failed_req_create;
}
-- cqr = (struct dasd_ccw_req *) intparm;
--
-- /* check for unsolicited interrupts */
-- if (cqr == NULL) {
-- MESSAGE(KERN_DEBUG,
-- "unsolicited interrupt received: bus_id %s",
-- cdev->dev.bus_id);
-- return;
-- }
--
-- device = (struct dasd_device *) cqr->device;
-- if (device == NULL ||
-+ device = (struct dasd_device *) cqr->startdev;
-+ if (!device ||
- strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
- MESSAGE(KERN_DEBUG, "invalid device in request: bus_id %s",
- cdev->dev.bus_id);
-@@ -999,12 +1022,12 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
++ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
++ &unit->status))) {
++ retval = -EBUSY;
++ goto unit_blocked;
++ }
++
+ zfcp_unit_get(unit);
+ fsf_req->unit = unit;
+
+@@ -3732,6 +3739,7 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter,
+ send_failed:
+ no_fit:
+ failed_scsi_cmnd:
++ unit_blocked:
+ zfcp_unit_put(unit);
+ zfcp_fsf_req_free(fsf_req);
+ fsf_req = NULL;
+@@ -3766,6 +3774,10 @@ zfcp_fsf_send_fcp_command_task_management(struct zfcp_adapter *adapter,
+ goto out;
}
- /* Check for clear pending */
-- if (cqr->status == DASD_CQR_CLEAR &&
-+ if (cqr->status == DASD_CQR_CLEAR_PENDING &&
- irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC) {
-- cqr->status = DASD_CQR_QUEUED;
-- dasd_clear_timer(device);
-+ cqr->status = DASD_CQR_CLEARED;
-+ dasd_device_clear_timer(device);
- wake_up(&dasd_flush_wq);
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- return;
++ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
++ &unit->status)))
++ goto unit_blocked;
++
+ /*
+ * Used to decide on proper handler in the return path,
+ * could be either zfcp_fsf_send_fcp_command_task_handler or
+@@ -3799,25 +3811,13 @@ zfcp_fsf_send_fcp_command_task_management(struct zfcp_adapter *adapter,
+
+ zfcp_fsf_start_timer(fsf_req, ZFCP_SCSI_ER_TIMEOUT);
+ retval = zfcp_fsf_req_send(fsf_req);
+- if (retval) {
+- ZFCP_LOG_INFO("error: Could not send an FCP-command (task "
+- "management) on adapter %s, port 0x%016Lx for "
+- "unit LUN 0x%016Lx\n",
+- zfcp_get_busid_by_adapter(adapter),
+- unit->port->wwpn,
+- unit->fcp_lun);
+- zfcp_fsf_req_free(fsf_req);
+- fsf_req = NULL;
++ if (!retval)
+ goto out;
+- }
+
+- ZFCP_LOG_TRACE("Send FCP Command (task management function) initiated "
+- "(adapter %s, port 0x%016Lx, unit 0x%016Lx, "
+- "tm_flags=0x%x)\n",
+- zfcp_get_busid_by_adapter(adapter),
+- unit->port->wwpn,
+- unit->fcp_lun,
+- tm_flags);
++ unit_blocked:
++ zfcp_fsf_req_free(fsf_req);
++ fsf_req = NULL;
++
+ out:
+ write_unlock_irqrestore(&adapter->request_queue.queue_lock, lock_flags);
+ return fsf_req;
+@@ -4725,7 +4725,7 @@ zfcp_fsf_req_create(struct zfcp_adapter *adapter, u32 fsf_cmd, int req_flags,
+ /* allocate new FSF request */
+ fsf_req = zfcp_fsf_req_alloc(pool, req_flags);
+ if (unlikely(NULL == fsf_req)) {
+- ZFCP_LOG_DEBUG("error: Could not put an FSF request into"
++ ZFCP_LOG_DEBUG("error: Could not put an FSF request into "
+ "the outbound (send) queue.\n");
+ ret = -ENOMEM;
+ goto failed_fsf_req;
+diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
+index 51d92b1..22fdc17 100644
+--- a/drivers/s390/scsi/zfcp_qdio.c
++++ b/drivers/s390/scsi/zfcp_qdio.c
+@@ -529,7 +529,7 @@ zfcp_qdio_sbals_wipe(struct zfcp_fsf_req *fsf_req)
+
+
+ /**
+- * zfcp_qdio_sbale_fill - set address and lenght in current SBALE
++ * zfcp_qdio_sbale_fill - set address and length in current SBALE
+ * on request_queue
+ */
+ static void
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index abae202..b9daf5c 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -51,7 +51,6 @@ struct zfcp_data zfcp_data = {
+ .queuecommand = zfcp_scsi_queuecommand,
+ .eh_abort_handler = zfcp_scsi_eh_abort_handler,
+ .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler,
+- .eh_bus_reset_handler = zfcp_scsi_eh_host_reset_handler,
+ .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler,
+ .can_queue = 4096,
+ .this_id = -1,
+@@ -181,9 +180,6 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt)
+
+ if (unit) {
+ zfcp_erp_wait(unit->port->adapter);
+- wait_event(unit->scsi_scan_wq,
+- atomic_test_mask(ZFCP_STATUS_UNIT_SCSI_WORK_PENDING,
+- &unit->status) == 0);
+ atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status);
+ sdpnt->hostdata = NULL;
+ unit->device = NULL;
+@@ -262,8 +258,9 @@ zfcp_scsi_command_async(struct zfcp_adapter *adapter, struct zfcp_unit *unit,
+ goto out;
}
-@@ -1017,277 +1040,170 @@ dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+- if (unlikely(
+- !atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status))) {
++ tmp = zfcp_fsf_send_fcp_command_task(adapter, unit, scpnt, use_timer,
++ ZFCP_REQ_AUTO_CLEANUP);
++ if (unlikely(tmp == -EBUSY)) {
+ ZFCP_LOG_DEBUG("adapter %s not ready or unit 0x%016Lx "
+ "on port 0x%016Lx in recovery\n",
+ zfcp_get_busid_by_unit(unit),
+@@ -272,9 +269,6 @@ zfcp_scsi_command_async(struct zfcp_adapter *adapter, struct zfcp_unit *unit,
+ goto out;
}
- DBF_DEV_EVENT(DBF_DEBUG, device, "Int: CS/DS 0x%04x for cqr %p",
- ((irb->scsw.cstat << 8) | irb->scsw.dstat), cqr);
--
-- /* Find out the appropriate era_action. */
-- if (irb->scsw.fctl & SCSW_FCTL_HALT_FUNC)
-- era = dasd_era_fatal;
-- else if (irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END) &&
-- irb->scsw.cstat == 0 &&
-- !irb->esw.esw0.erw.cons)
-- era = dasd_era_none;
-- else if (irb->esw.esw0.erw.cons)
-- era = device->discipline->examine_error(cqr, irb);
-- else
-- era = dasd_era_recover;
+
+- tmp = zfcp_fsf_send_fcp_command_task(adapter, unit, scpnt, use_timer,
+- ZFCP_REQ_AUTO_CLEANUP);
-
-- DBF_DEV_EVENT(DBF_DEBUG, device, "era_code %d", era);
-+ next = NULL;
- expires = 0;
-- if (era == dasd_era_none) {
-- cqr->status = DASD_CQR_DONE;
-+ if (irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END) &&
-+ irb->scsw.cstat == 0 && !irb->esw.esw0.erw.cons) {
-+ /* request was completed successfully */
-+ cqr->status = DASD_CQR_SUCCESS;
- cqr->stopclk = now;
- /* Start first request on queue if possible -> fast_io. */
-- if (cqr->list.next != &device->ccw_queue) {
-- next = list_entry(cqr->list.next,
-- struct dasd_ccw_req, list);
-- if ((next->status == DASD_CQR_QUEUED) &&
-- (!device->stopped)) {
-- if (device->discipline->start_IO(next) == 0)
-- expires = next->expires;
-- else
-- DEV_MESSAGE(KERN_DEBUG, device, "%s",
-- "Interrupt fastpath "
-- "failed!");
-- }
-+ if (cqr->devlist.next != &device->ccw_queue) {
-+ next = list_entry(cqr->devlist.next,
-+ struct dasd_ccw_req, devlist);
- }
-- } else { /* error */
-- memcpy(&cqr->irb, irb, sizeof (struct irb));
-+ } else { /* error */
-+ memcpy(&cqr->irb, irb, sizeof(struct irb));
- if (device->features & DASD_FEATURE_ERPLOG) {
-- /* dump sense data */
- dasd_log_sense(cqr, irb);
- }
-- switch (era) {
-- case dasd_era_fatal:
-- cqr->status = DASD_CQR_FAILED;
-- cqr->stopclk = now;
-- break;
-- case dasd_era_recover:
-+ /* If we have no sense data, or we just don't want complex ERP
-+ * for this request, but if we have retries left, then just
-+ * reset this request and retry it in the fastpath
-+ */
-+ if (!(cqr->irb.esw.esw0.erw.cons &&
-+ test_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags)) &&
-+ cqr->retries > 0) {
-+ DEV_MESSAGE(KERN_DEBUG, device,
-+ "default ERP in fastpath (%i retries left)",
-+ cqr->retries);
-+ cqr->lpm = LPM_ANYPATH;
-+ cqr->status = DASD_CQR_QUEUED;
-+ next = cqr;
-+ } else
- cqr->status = DASD_CQR_ERROR;
-- break;
-- default:
-- BUG();
-- }
-+ }
-+ if (next && (next->status == DASD_CQR_QUEUED) &&
-+ (!device->stopped)) {
-+ if (device->discipline->start_IO(next) == 0)
-+ expires = next->expires;
-+ else
-+ DEV_MESSAGE(KERN_DEBUG, device, "%s",
-+ "Interrupt fastpath "
-+ "failed!");
+ if (unlikely(tmp < 0)) {
+ ZFCP_LOG_DEBUG("error: initiation of Send FCP Cmnd failed\n");
+ retval = SCSI_MLQUEUE_HOST_BUSY;
+@@ -459,7 +453,9 @@ zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *scpnt)
+ retval = SUCCESS;
+ goto out;
}
- if (expires != 0)
-- dasd_set_timer(device, expires);
-+ dasd_device_set_timer(device, expires);
- else
-- dasd_clear_timer(device);
-- dasd_schedule_bh(device);
-+ dasd_device_clear_timer(device);
-+ dasd_schedule_device_bh(device);
+- ZFCP_LOG_NORMAL("resetting unit 0x%016Lx\n", unit->fcp_lun);
++ ZFCP_LOG_NORMAL("resetting unit 0x%016Lx on port 0x%016Lx, adapter %s\n",
++ unit->fcp_lun, unit->port->wwpn,
++ zfcp_get_busid_by_adapter(unit->port->adapter));
+
+ /*
+ * If we do not know whether the unit supports 'logical unit reset'
+@@ -542,7 +538,7 @@ zfcp_task_management_function(struct zfcp_unit *unit, u8 tm_flags,
}
- /*
-- * posts the buffer_cache about a finalized request
-+ * If we have an error on a dasd_block layer request then we cancel
-+ * and return all further requests from the same dasd_block as well.
+ /**
+- * zfcp_scsi_eh_host_reset_handler - handler for host and bus reset
++ * zfcp_scsi_eh_host_reset_handler - handler for host reset
*/
--static inline void
--dasd_end_request(struct request *req, int uptodate)
-+static void __dasd_device_recovery(struct dasd_device *device,
-+ struct dasd_ccw_req *ref_cqr)
+ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
{
-- if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
-- BUG();
-- add_disk_randomness(req->rq_disk);
-- end_that_request_last(req, uptodate);
--}
-+ struct list_head *l, *n;
-+ struct dasd_ccw_req *cqr;
+@@ -552,8 +548,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+ unit = (struct zfcp_unit*) scpnt->device->hostdata;
+ adapter = unit->port->adapter;
--/*
-- * Process finished error recovery ccw.
+- ZFCP_LOG_NORMAL("host/bus reset because of problems with "
+- "unit 0x%016Lx\n", unit->fcp_lun);
++ ZFCP_LOG_NORMAL("host reset because of problems with "
++ "unit 0x%016Lx on port 0x%016Lx, adapter %s\n",
++ unit->fcp_lun, unit->port->wwpn,
++ zfcp_get_busid_by_adapter(unit->port->adapter));
+
+ zfcp_erp_adapter_reopen(adapter, 0);
+ zfcp_erp_wait(adapter);
+diff --git a/drivers/s390/scsi/zfcp_sysfs_driver.c b/drivers/s390/scsi/zfcp_sysfs_driver.c
+index 005e62f..651edd5 100644
+--- a/drivers/s390/scsi/zfcp_sysfs_driver.c
++++ b/drivers/s390/scsi/zfcp_sysfs_driver.c
+@@ -98,28 +98,9 @@ static struct attribute_group zfcp_driver_attr_group = {
+ .attrs = zfcp_driver_attrs,
+ };
+
+-/**
+- * zfcp_sysfs_create_driver_files - create sysfs driver files
+- * @dev: pointer to belonging device
+- *
+- * Create all sysfs attributes of the zfcp device driver
- */
--static inline void
--__dasd_process_erp(struct dasd_device *device, struct dasd_ccw_req *cqr)
+-int
+-zfcp_sysfs_driver_create_files(struct device_driver *drv)
-{
-- dasd_erp_fn_t erp_fn;
-+ /*
-+ * only requeue request that came from the dasd_block layer
-+ */
-+ if (!ref_cqr->block)
-+ return;
-
-- if (cqr->status == DASD_CQR_DONE)
-- DBF_DEV_EVENT(DBF_NOTICE, device, "%s", "ERP successful");
-- else
-- DEV_MESSAGE(KERN_ERR, device, "%s", "ERP unsuccessful");
-- erp_fn = device->discipline->erp_postaction(cqr);
-- erp_fn(cqr);
+- return sysfs_create_group(&drv->kobj, &zfcp_driver_attr_group);
-}
-+ list_for_each_safe(l, n, &device->ccw_queue) {
-+ cqr = list_entry(l, struct dasd_ccw_req, devlist);
-+ if (cqr->status == DASD_CQR_QUEUED &&
-+ ref_cqr->block == cqr->block) {
-+ cqr->status = DASD_CQR_CLEARED;
-+ }
-+ }
+-
+-/**
+- * zfcp_sysfs_remove_driver_files - remove sysfs driver files
+- * @dev: pointer to belonging device
+- *
+- * Remove all sysfs attributes of the zfcp device driver
+- */
+-void
+-zfcp_sysfs_driver_remove_files(struct device_driver *drv)
+-{
+- sysfs_remove_group(&drv->kobj, &zfcp_driver_attr_group);
+-}
++struct attribute_group *zfcp_driver_attr_groups[] = {
++ &zfcp_driver_attr_group,
++ NULL,
+};
- /*
-- * Process ccw request queue.
-+ * Remove those ccw requests from the queue that need to be returned
-+ * to the upper layer.
- */
--static void
--__dasd_process_ccw_queue(struct dasd_device * device,
-- struct list_head *final_queue)
-+static void __dasd_device_process_ccw_queue(struct dasd_device *device,
-+ struct list_head *final_queue)
- {
- struct list_head *l, *n;
- struct dasd_ccw_req *cqr;
-- dasd_erp_fn_t erp_fn;
+ #undef ZFCP_LOG_AREA
+diff --git a/drivers/scsi/.gitignore b/drivers/scsi/.gitignore
+index b385af3..c89ae9a 100644
+--- a/drivers/scsi/.gitignore
++++ b/drivers/scsi/.gitignore
+@@ -1,3 +1 @@
+ 53c700_d.h
+-53c7xx_d.h
+-53c7xx_u.h
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index afb262b..1c24483 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2010,6 +2010,7 @@ static int __devinit twa_probe(struct pci_dev *pdev, const struct pci_device_id
+ }
--restart:
- /* Process request with final status. */
- list_for_each_safe(l, n, &device->ccw_queue) {
-- cqr = list_entry(l, struct dasd_ccw_req, list);
-+ cqr = list_entry(l, struct dasd_ccw_req, devlist);
-+
- /* Stop list processing at the first non-final request. */
-- if (cqr->status != DASD_CQR_DONE &&
-- cqr->status != DASD_CQR_FAILED &&
-- cqr->status != DASD_CQR_ERROR)
-+ if (cqr->status == DASD_CQR_QUEUED ||
-+ cqr->status == DASD_CQR_IN_IO ||
-+ cqr->status == DASD_CQR_CLEAR_PENDING)
- break;
-- /* Process requests with DASD_CQR_ERROR */
- if (cqr->status == DASD_CQR_ERROR) {
-- if (cqr->irb.scsw.fctl & SCSW_FCTL_HALT_FUNC) {
-- cqr->status = DASD_CQR_FAILED;
-- cqr->stopclk = get_clock();
-- } else {
-- if (cqr->irb.esw.esw0.erw.cons &&
-- test_bit(DASD_CQR_FLAGS_USE_ERP,
-- &cqr->flags)) {
-- erp_fn = device->discipline->
-- erp_action(cqr);
-- erp_fn(cqr);
-- } else
-- dasd_default_erp_action(cqr);
-- }
-- goto restart;
-- }
--
-- /* First of all call extended error reporting. */
-- if (dasd_eer_enabled(device) &&
-- cqr->status == DASD_CQR_FAILED) {
-- dasd_eer_write(device, cqr, DASD_EER_FATALERROR);
--
-- /* restart request */
-- cqr->status = DASD_CQR_QUEUED;
-- cqr->retries = 255;
-- device->stopped |= DASD_STOPPED_QUIESCE;
-- goto restart;
-+ __dasd_device_recovery(device, cqr);
+ pci_set_master(pdev);
++ pci_try_set_mwi(pdev);
+
+ if (pci_set_dma_mask(pdev, DMA_64BIT_MASK)
+ || pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
+diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c
+index 71ff3fb..f4c4fe9 100644
+--- a/drivers/scsi/53c700.c
++++ b/drivers/scsi/53c700.c
+@@ -608,7 +608,8 @@ NCR_700_scsi_done(struct NCR_700_Host_Parameters *hostdata,
+ scsi_print_sense("53c700", SCp);
+
+ #endif
+- dma_unmap_single(hostdata->dev, slot->dma_handle, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
++ dma_unmap_single(hostdata->dev, slot->dma_handle,
++ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+ /* restore the old result if the request sense was
+ * successful */
+ if (result == 0)
+@@ -1010,7 +1011,7 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
+ cmnd[1] = (SCp->device->lun & 0x7) << 5;
+ cmnd[2] = 0;
+ cmnd[3] = 0;
+- cmnd[4] = sizeof(SCp->sense_buffer);
++ cmnd[4] = SCSI_SENSE_BUFFERSIZE;
+ cmnd[5] = 0;
+ /* Here's a quiet hack: the
+ * REQUEST_SENSE command is six bytes,
+@@ -1024,14 +1025,14 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
+ SCp->cmd_len = 6; /* command length for
+ * REQUEST_SENSE */
+ slot->pCmd = dma_map_single(hostdata->dev, cmnd, MAX_COMMAND_SIZE, DMA_TO_DEVICE);
+- slot->dma_handle = dma_map_single(hostdata->dev, SCp->sense_buffer, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
+- slot->SG[0].ins = bS_to_host(SCRIPT_MOVE_DATA_IN | sizeof(SCp->sense_buffer));
++ slot->dma_handle = dma_map_single(hostdata->dev, SCp->sense_buffer, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
++ slot->SG[0].ins = bS_to_host(SCRIPT_MOVE_DATA_IN | SCSI_SENSE_BUFFERSIZE);
+ slot->SG[0].pAddr = bS_to_host(slot->dma_handle);
+ slot->SG[1].ins = bS_to_host(SCRIPT_RETURN);
+ slot->SG[1].pAddr = 0;
+ slot->resume_offset = hostdata->pScript;
+ dma_cache_sync(hostdata->dev, slot->SG, sizeof(slot->SG[0])*2, DMA_TO_DEVICE);
+- dma_cache_sync(hostdata->dev, SCp->sense_buffer, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
++ dma_cache_sync(hostdata->dev, SCp->sense_buffer, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+
+ /* queue the command for reissue */
+ slot->state = NCR_700_SLOT_QUEUED;
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
+index 49e1ffa..ead47c1 100644
+--- a/drivers/scsi/BusLogic.c
++++ b/drivers/scsi/BusLogic.c
+@@ -2947,7 +2947,7 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou
}
--
-- /* Process finished ERP request. */
-- if (cqr->refers) {
-- __dasd_process_erp(device, cqr);
-- goto restart;
-- }
--
- /* Rechain finished requests to final queue */
-- cqr->endclk = get_clock();
-- list_move_tail(&cqr->list, final_queue);
-+ list_move_tail(&cqr->devlist, final_queue);
}
- }
+ memcpy(CCB->CDB, CDB, CDB_Length);
+- CCB->SenseDataLength = sizeof(Command->sense_buffer);
++ CCB->SenseDataLength = SCSI_SENSE_BUFFERSIZE;
+ CCB->SenseDataPointer = pci_map_single(HostAdapter->PCI_Device, Command->sense_buffer, CCB->SenseDataLength, PCI_DMA_FROMDEVICE);
+ CCB->Command = Command;
+ Command->scsi_done = CompletionRoutine;
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index 184c7ae..3e161cd 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -341,7 +341,7 @@ config ISCSI_TCP
+ The userspace component needed to initialize the driver, documentation,
+ and sample configuration files can be found here:
+
+- http://linux-iscsi.sf.net
++ http://open-iscsi.org
+
+ config SGIWD93_SCSI
+ tristate "SGI WD93C93 SCSI Driver"
+@@ -573,10 +573,10 @@ config SCSI_ARCMSR_AER
+ source "drivers/scsi/megaraid/Kconfig.megaraid"
--static void
--dasd_end_request_cb(struct dasd_ccw_req * cqr, void *data)
--{
-- struct request *req;
-- struct dasd_device *device;
-- int status;
--
-- req = (struct request *) data;
-- device = cqr->device;
-- dasd_profile_end(device, cqr, req);
-- status = cqr->device->discipline->free_cp(cqr,req);
-- spin_lock_irq(&device->request_queue_lock);
-- dasd_end_request(req, status);
-- spin_unlock_irq(&device->request_queue_lock);
--}
+ config SCSI_HPTIOP
+- tristate "HighPoint RocketRAID 3xxx Controller support"
++ tristate "HighPoint RocketRAID 3xxx/4xxx Controller support"
+ depends on SCSI && PCI
+ help
+- This option enables support for HighPoint RocketRAID 3xxx
++ This option enables support for HighPoint RocketRAID 3xxx/4xxx
+ controllers.
+
+ To compile this driver as a module, choose M here; the module
+@@ -1288,17 +1288,6 @@ config SCSI_PAS16
+ To compile this driver as a module, choose M here: the
+ module will be called pas16.
+
+-config SCSI_PSI240I
+- tristate "PSI240i support"
+- depends on ISA && SCSI
+- help
+- This is support for the PSI240i EIDE interface card which acts as a
+- SCSI host adapter. Please read the SCSI-HOWTO, available from
+- <http://www.tldp.org/docs.html#howto>.
-
+- To compile this driver as a module, choose M here: the
+- module will be called psi240i.
-
- /*
-- * Fetch requests from the block device queue.
-+ * the cqrs from the final queue are returned to the upper layer
-+ * by setting a dasd_block state and calling the callback function
- */
--static void
--__dasd_process_blk_queue(struct dasd_device * device)
-+static void __dasd_device_process_final_queue(struct dasd_device *device,
-+ struct list_head *final_queue)
- {
-- struct request_queue *queue;
-- struct request *req;
-+ struct list_head *l, *n;
- struct dasd_ccw_req *cqr;
-- int nr_queued;
+ config SCSI_QLOGIC_FAS
+ tristate "Qlogic FAS SCSI support"
+ depends on ISA && SCSI
+@@ -1359,21 +1348,6 @@ config SCSI_LPFC
+ This lpfc driver supports the Emulex LightPulse
+ Family of Fibre Channel PCI host adapters.
+
+-config SCSI_SEAGATE
+- tristate "Seagate ST-02 and Future Domain TMC-8xx SCSI support"
+- depends on X86 && ISA && SCSI
+- select CHECK_SIGNATURE
+- ---help---
+- These are 8-bit SCSI controllers; the ST-01 is also supported by
+- this driver. It is explained in section 3.9 of the SCSI-HOWTO,
+- available from <http://www.tldp.org/docs.html#howto>. If it
+- doesn't work out of the box, you may have to change some macros at
+- compiletime, which are described in <file:drivers/scsi/seagate.c>.
-
-- queue = device->request_queue;
-- /* No queue ? Then there is nothing to do. */
-- if (queue == NULL)
-- return;
+- To compile this driver as a module, choose M here: the
+- module will be called seagate.
-
-- /*
-- * We requeue request from the block device queue to the ccw
-- * queue only in two states. In state DASD_STATE_READY the
-- * partition detection is done and we need to requeue requests
-- * for that. State DASD_STATE_ONLINE is normal block device
-- * operation.
-- */
-- if (device->state != DASD_STATE_READY &&
-- device->state != DASD_STATE_ONLINE)
-- return;
-- nr_queued = 0;
-- /* Now we try to fetch requests from the request queue */
-- list_for_each_entry(cqr, &device->ccw_queue, list)
-- if (cqr->status == DASD_CQR_QUEUED)
-- nr_queued++;
-- while (!blk_queue_plugged(queue) &&
-- elv_next_request(queue) &&
-- nr_queued < DASD_CHANQ_MAX_SIZE) {
-- req = elv_next_request(queue);
+-# definitely looks not 64bit safe:
+ config SCSI_SIM710
+ tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
+ depends on (EISA || MCA) && SCSI
+diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
+index 2e6129f..93e1428 100644
+--- a/drivers/scsi/Makefile
++++ b/drivers/scsi/Makefile
+@@ -16,9 +16,8 @@
-- if (device->features & DASD_FEATURE_READONLY &&
-- rq_data_dir(req) == WRITE) {
-- DBF_DEV_EVENT(DBF_ERR, device,
-- "Rejecting write request %p",
-- req);
-- blkdev_dequeue_request(req);
-- dasd_end_request(req, 0);
-- continue;
-- }
-- if (device->stopped & DASD_STOPPED_DC_EIO) {
-- blkdev_dequeue_request(req);
-- dasd_end_request(req, 0);
-- continue;
-- }
-- cqr = device->discipline->build_cp(device, req);
-- if (IS_ERR(cqr)) {
-- if (PTR_ERR(cqr) == -ENOMEM)
-- break; /* terminate request queue loop */
-- if (PTR_ERR(cqr) == -EAGAIN) {
-- /*
-- * The current request cannot be build right
-- * now, we have to try later. If this request
-- * is the head-of-queue we stop the device
-- * for 1/2 second.
-- */
-- if (!list_empty(&device->ccw_queue))
-- break;
-- device->stopped |= DASD_STOPPED_PENDING;
-- dasd_set_timer(device, HZ/2);
-- break;
-- }
-- DBF_DEV_EVENT(DBF_ERR, device,
-- "CCW creation failed (rc=%ld) "
-- "on request %p",
-- PTR_ERR(cqr), req);
-- blkdev_dequeue_request(req);
-- dasd_end_request(req, 0);
-- continue;
-+ list_for_each_safe(l, n, final_queue) {
-+ cqr = list_entry(l, struct dasd_ccw_req, devlist);
-+ list_del_init(&cqr->devlist);
-+ if (cqr->block)
-+ spin_lock_bh(&cqr->block->queue_lock);
-+ switch (cqr->status) {
-+ case DASD_CQR_SUCCESS:
-+ cqr->status = DASD_CQR_DONE;
-+ break;
-+ case DASD_CQR_ERROR:
-+ cqr->status = DASD_CQR_NEED_ERP;
-+ break;
-+ case DASD_CQR_CLEARED:
-+ cqr->status = DASD_CQR_TERMINATED;
-+ break;
-+ default:
-+ DEV_MESSAGE(KERN_ERR, device,
-+ "wrong cqr status in __dasd_process_final_queue "
-+ "for cqr %p, status %x",
-+ cqr, cqr->status);
-+ BUG();
- }
-- cqr->callback = dasd_end_request_cb;
-- cqr->callback_data = (void *) req;
-- cqr->status = DASD_CQR_QUEUED;
-- blkdev_dequeue_request(req);
-- list_add_tail(&cqr->list, &device->ccw_queue);
-- dasd_profile_start(device, cqr, req);
-- nr_queued++;
-+ if (cqr->block)
-+ spin_unlock_bh(&cqr->block->queue_lock);
-+ if (cqr->callback != NULL)
-+ (cqr->callback)(cqr, cqr->callback_data);
+ CFLAGS_aha152x.o = -DAHA152X_STAT -DAUTOCONF
+ CFLAGS_gdth.o = # -DDEBUG_GDTH=2 -D__SERIAL__ -D__COM2__ -DGDTH_STATISTICS
+-CFLAGS_seagate.o = -DARBITRATE -DPARITY -DSEAGATE_USE_ASM
+
+-subdir-$(CONFIG_PCMCIA) += pcmcia
++obj-$(CONFIG_PCMCIA) += pcmcia/
+
+ obj-$(CONFIG_SCSI) += scsi_mod.o
+ obj-$(CONFIG_SCSI_TGT) += scsi_tgt.o
+@@ -59,7 +58,6 @@ obj-$(CONFIG_MVME16x_SCSI) += 53c700.o mvme16x_scsi.o
+ obj-$(CONFIG_BVME6000_SCSI) += 53c700.o bvme6000_scsi.o
+ obj-$(CONFIG_SCSI_SIM710) += 53c700.o sim710.o
+ obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o
+-obj-$(CONFIG_SCSI_PSI240I) += psi240i.o
+ obj-$(CONFIG_SCSI_BUSLOGIC) += BusLogic.o
+ obj-$(CONFIG_SCSI_DPT_I2O) += dpt_i2o.o
+ obj-$(CONFIG_SCSI_U14_34F) += u14-34f.o
+@@ -90,7 +88,6 @@ obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
+ obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/
+ obj-$(CONFIG_SCSI_LPFC) += lpfc/
+ obj-$(CONFIG_SCSI_PAS16) += pas16.o
+-obj-$(CONFIG_SCSI_SEAGATE) += seagate.o
+ obj-$(CONFIG_SCSI_T128) += t128.o
+ obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o
+ obj-$(CONFIG_SCSI_DTC3280) += dtc.o
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index 2597209..eeddbd1 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -295,16 +295,16 @@ static __inline__ void initialize_SCp(Scsi_Cmnd * cmd)
+ * various queues are valid.
+ */
+
+- if (cmd->use_sg) {
+- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ } else {
+ cmd->SCp.buffer = NULL;
+ cmd->SCp.buffers_residual = 0;
+- cmd->SCp.ptr = (char *) cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
}
}
-+
-+
- /*
- * Take a look at the first request on the ccw queue and check
- * if it reached its expire time. If so, terminate the IO.
+@@ -932,7 +932,7 @@ static int __devinit NCR5380_init(struct Scsi_Host *instance, int flags)
+ * @instance: adapter to remove
*/
--static void
--__dasd_check_expire(struct dasd_device * device)
-+static void __dasd_device_check_expire(struct dasd_device *device)
+
+-static void __devexit NCR5380_exit(struct Scsi_Host *instance)
++static void NCR5380_exit(struct Scsi_Host *instance)
{
- struct dasd_ccw_req *cqr;
+ struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata;
- if (list_empty(&device->ccw_queue))
- return;
-- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
-+ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
- if ((cqr->status == DASD_CQR_IN_IO && cqr->expires != 0) &&
- (time_after_eq(jiffies, cqr->expires + cqr->starttime))) {
- if (device->discipline->term_IO(cqr) != 0) {
- /* Hmpf, try again in 5 sec */
-- dasd_set_timer(device, 5*HZ);
- DEV_MESSAGE(KERN_ERR, device,
- "internal error - timeout (%is) expired "
- "for cqr %p, termination failed, "
- "retrying in 5s",
- (cqr->expires/HZ), cqr);
-+ cqr->expires += 5*HZ;
-+ dasd_device_set_timer(device, 5*HZ);
- } else {
- DEV_MESSAGE(KERN_ERR, device,
- "internal error - timeout (%is) expired "
-@@ -1301,77 +1217,53 @@ __dasd_check_expire(struct dasd_device * device)
- * Take a look at the first request on the ccw queue and check
- * if it needs to be started.
+@@ -975,14 +975,14 @@ static int NCR5380_queue_command(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
+ case WRITE_6:
+ case WRITE_10:
+ hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingw++;
+ break;
+ case READ:
+ case READ_6:
+ case READ_10:
+ hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingr++;
+ break;
+ }
+@@ -1157,16 +1157,17 @@ static void NCR5380_main(struct work_struct *work)
+ * Locks: takes the needed instance locks
*/
--static void
--__dasd_start_head(struct dasd_device * device)
-+static void __dasd_device_start_head(struct dasd_device *device)
+
+-static irqreturn_t NCR5380_intr(int irq, void *dev_id)
++static irqreturn_t NCR5380_intr(int dummy, void *dev_id)
{
- struct dasd_ccw_req *cqr;
- int rc;
+ NCR5380_local_declare();
+- struct Scsi_Host *instance = (struct Scsi_Host *)dev_id;
++ struct Scsi_Host *instance = dev_id;
+ struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata;
+ int done;
+ unsigned char basr;
+ unsigned long flags;
- if (list_empty(&device->ccw_queue))
- return;
-- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
-+ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
- if (cqr->status != DASD_CQR_QUEUED)
- return;
-- /* Non-temporary stop condition will trigger fail fast */
-- if (device->stopped & ~DASD_STOPPED_PENDING &&
-- test_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags) &&
-- (!dasd_eer_enabled(device))) {
-- cqr->status = DASD_CQR_FAILED;
-- dasd_schedule_bh(device);
-+ /* when device is stopped, return request to previous layer */
-+ if (device->stopped) {
-+ cqr->status = DASD_CQR_CLEARED;
-+ dasd_schedule_device_bh(device);
- return;
+- dprintk(NDEBUG_INTR, ("scsi : NCR5380 irq %d triggered\n", irq));
++ dprintk(NDEBUG_INTR, ("scsi : NCR5380 irq %d triggered\n",
++ instance->irq));
+
+ do {
+ done = 1;
+diff --git a/drivers/scsi/a2091.c b/drivers/scsi/a2091.c
+index b7c5385..23f27c9 100644
+--- a/drivers/scsi/a2091.c
++++ b/drivers/scsi/a2091.c
+@@ -73,18 +73,9 @@ static int dma_setup(struct scsi_cmnd *cmd, int dir_in)
}
-- /* Don't try to start requests if device is stopped */
-- if (device->stopped)
-- return;
- rc = device->discipline->start_IO(cqr);
- if (rc == 0)
-- dasd_set_timer(device, cqr->expires);
-+ dasd_device_set_timer(device, cqr->expires);
- else if (rc == -EACCES) {
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- } else
- /* Hmpf, try again in 1/2 sec */
-- dasd_set_timer(device, 50);
--}
+ if (!dir_in) {
+- /* copy to bounce buffer for a write */
+- if (cmd->use_sg)
+-#if 0
+- panic ("scsi%ddma: incomplete s/g support",
+- instance->host_no);
+-#else
++ /* copy to bounce buffer for a write */
+ memcpy (HDATA(instance)->dma_bounce_buffer,
+ cmd->SCp.ptr, cmd->SCp.this_residual);
+-#endif
+- else
+- memcpy (HDATA(instance)->dma_bounce_buffer,
+- cmd->request_buffer, cmd->request_bufflen);
+ }
+ }
+
+@@ -144,30 +135,13 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
+
+ /* copy from a bounce buffer, if necessary */
+ if (status && HDATA(instance)->dma_bounce_buffer) {
+- if (SCpnt && SCpnt->use_sg) {
+-#if 0
+- panic ("scsi%d: incomplete s/g support",
+- instance->host_no);
+-#else
+- if( HDATA(instance)->dma_dir )
++ if( HDATA(instance)->dma_dir )
+ memcpy (SCpnt->SCp.ptr,
+ HDATA(instance)->dma_bounce_buffer,
+ SCpnt->SCp.this_residual);
+- kfree (HDATA(instance)->dma_bounce_buffer);
+- HDATA(instance)->dma_bounce_buffer = NULL;
+- HDATA(instance)->dma_bounce_len = 0;
+-
+-#endif
+- } else {
+- if (HDATA(instance)->dma_dir && SCpnt)
+- memcpy (SCpnt->request_buffer,
+- HDATA(instance)->dma_bounce_buffer,
+- SCpnt->request_bufflen);
-
--static inline int
--_wait_for_clear(struct dasd_ccw_req *cqr)
--{
-- return (cqr->status == DASD_CQR_QUEUED);
-+ dasd_device_set_timer(device, 50);
+- kfree (HDATA(instance)->dma_bounce_buffer);
+- HDATA(instance)->dma_bounce_buffer = NULL;
+- HDATA(instance)->dma_bounce_len = 0;
+- }
++ kfree (HDATA(instance)->dma_bounce_buffer);
++ HDATA(instance)->dma_bounce_buffer = NULL;
++ HDATA(instance)->dma_bounce_len = 0;
+ }
}
- /*
-- * Remove all requests from the ccw queue (all = '1') or only block device
-- * requests in case all = '0'.
-- * Take care of the erp-chain (chained via cqr->refers) and remove either
-- * the whole erp-chain or none of the erp-requests.
-- * If a request is currently running, term_IO is called and the request
-- * is re-queued. Prior to removing the terminated request we need to wait
-- * for the clear-interrupt.
-- * In case termination is not possible we stop processing and just finishing
-- * the already moved requests.
-+ * Go through all request on the dasd_device request queue,
-+ * terminate them on the cdev if necessary, and return them to the
-+ * submitting layer via callback.
-+ * Note:
-+ * Make sure that all 'submitting layers' still exist when
-+ * this function is called!. In other words, when 'device' is a base
-+ * device then all block layer requests must have been removed before
-+ * via dasd_flush_block_queue.
- */
--static int
--dasd_flush_ccw_queue(struct dasd_device * device, int all)
-+int dasd_flush_device_queue(struct dasd_device *device)
- {
-- struct dasd_ccw_req *cqr, *orig, *n;
-- int rc, i;
--
-+ struct dasd_ccw_req *cqr, *n;
-+ int rc;
- struct list_head flush_queue;
+diff --git a/drivers/scsi/a3000.c b/drivers/scsi/a3000.c
+index 796f1c4..d7255c8 100644
+--- a/drivers/scsi/a3000.c
++++ b/drivers/scsi/a3000.c
+@@ -70,12 +70,8 @@ static int dma_setup(struct scsi_cmnd *cmd, int dir_in)
- INIT_LIST_HEAD(&flush_queue);
- spin_lock_irq(get_ccwdev_lock(device->cdev));
- rc = 0;
--restart:
-- list_for_each_entry_safe(cqr, n, &device->ccw_queue, list) {
-- /* get original request of erp request-chain */
-- for (orig = cqr; orig->refers != NULL; orig = orig->refers);
--
-- /* Flush all request or only block device requests? */
-- if (all == 0 && cqr->callback != dasd_end_request_cb &&
-- orig->callback != dasd_end_request_cb) {
-- continue;
-- }
-+ list_for_each_entry_safe(cqr, n, &device->ccw_queue, devlist) {
- /* Check status and move request to flush_queue */
- switch (cqr->status) {
- case DASD_CQR_IN_IO:
-@@ -1387,90 +1279,60 @@ restart:
- }
- break;
- case DASD_CQR_QUEUED:
-- case DASD_CQR_ERROR:
-- /* set request to FAILED */
- cqr->stopclk = get_clock();
-- cqr->status = DASD_CQR_FAILED;
-+ cqr->status = DASD_CQR_CLEARED;
- break;
-- default: /* do not touch the others */
-+ default: /* no need to modify the others */
- break;
- }
-- /* Rechain request (including erp chain) */
-- for (i = 0; cqr != NULL; cqr = cqr->refers, i++) {
-- cqr->endclk = get_clock();
-- list_move_tail(&cqr->list, &flush_queue);
-- }
-- if (i > 1)
-- /* moved more than one request - need to restart */
-- goto restart;
-+ list_move_tail(&cqr->devlist, &flush_queue);
+ if (!dir_in) {
+ /* copy to bounce buffer for a write */
+- if (cmd->use_sg) {
+- memcpy (HDATA(a3000_host)->dma_bounce_buffer,
+- cmd->SCp.ptr, cmd->SCp.this_residual);
+- } else
+- memcpy (HDATA(a3000_host)->dma_bounce_buffer,
+- cmd->request_buffer, cmd->request_bufflen);
++ memcpy (HDATA(a3000_host)->dma_bounce_buffer,
++ cmd->SCp.ptr, cmd->SCp.this_residual);
}
+
+ addr = virt_to_bus(HDATA(a3000_host)->dma_bounce_buffer);
+@@ -146,7 +142,7 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
+
+ /* copy from a bounce buffer, if necessary */
+ if (status && HDATA(instance)->dma_bounce_buffer) {
+- if (SCpnt && SCpnt->use_sg) {
++ if (SCpnt) {
+ if (HDATA(instance)->dma_dir && SCpnt)
+ memcpy (SCpnt->SCp.ptr,
+ HDATA(instance)->dma_bounce_buffer,
+@@ -155,11 +151,6 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
+ HDATA(instance)->dma_bounce_buffer = NULL;
+ HDATA(instance)->dma_bounce_len = 0;
+ } else {
+- if (HDATA(instance)->dma_dir && SCpnt)
+- memcpy (SCpnt->request_buffer,
+- HDATA(instance)->dma_bounce_buffer,
+- SCpnt->request_bufflen);
-
- finished:
- spin_unlock_irq(get_ccwdev_lock(device->cdev));
-- /* Now call the callback function of flushed requests */
--restart_cb:
-- list_for_each_entry_safe(cqr, n, &flush_queue, list) {
-- if (cqr->status == DASD_CQR_CLEAR) {
-- /* wait for clear interrupt! */
-- wait_event(dasd_flush_wq, _wait_for_clear(cqr));
-- cqr->status = DASD_CQR_FAILED;
-- }
-- /* Process finished ERP request. */
-- if (cqr->refers) {
-- __dasd_process_erp(device, cqr);
-- /* restart list_for_xx loop since dasd_process_erp
-- * might remove multiple elements */
-- goto restart_cb;
-- }
-- /* call the callback function */
-- cqr->endclk = get_clock();
-- if (cqr->callback != NULL)
-- (cqr->callback)(cqr, cqr->callback_data);
-- }
-+ /*
-+ * After this point all requests must be in state CLEAR_PENDING,
-+ * CLEARED, SUCCESS or ERROR. Now wait for CLEAR_PENDING to become
-+ * one of the others.
-+ */
-+ list_for_each_entry_safe(cqr, n, &flush_queue, devlist)
-+ wait_event(dasd_flush_wq,
-+ (cqr->status != DASD_CQR_CLEAR_PENDING));
-+ /*
-+ * Now set each request back to TERMINATED, DONE or NEED_ERP
-+ * and call the callback function of flushed requests
-+ */
-+ __dasd_device_process_final_queue(device, &flush_queue);
- return rc;
- }
+ kfree (HDATA(instance)->dma_bounce_buffer);
+ HDATA(instance)->dma_bounce_buffer = NULL;
+ HDATA(instance)->dma_bounce_len = 0;
+diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
+index a77ab8d..d7235f4 100644
+--- a/drivers/scsi/aacraid/aachba.c
++++ b/drivers/scsi/aacraid/aachba.c
+@@ -31,9 +31,9 @@
+ #include <linux/slab.h>
+ #include <linux/completion.h>
+ #include <linux/blkdev.h>
+-#include <linux/dma-mapping.h>
+ #include <asm/semaphore.h>
+ #include <asm/uaccess.h>
++#include <linux/highmem.h> /* For flush_kernel_dcache_page */
+ #include <scsi/scsi.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -56,54 +56,54 @@
/*
- * Acquire the device lock and process queues for the device.
+ * Sense codes
*/
--static void
--dasd_tasklet(struct dasd_device * device)
-+static void dasd_device_tasklet(struct dasd_device *device)
- {
- struct list_head final_queue;
-- struct list_head *l, *n;
-- struct dasd_ccw_req *cqr;
-
- atomic_set (&device->tasklet_scheduled, 0);
- INIT_LIST_HEAD(&final_queue);
- spin_lock_irq(get_ccwdev_lock(device->cdev));
- /* Check expire time of first request on the ccw queue. */
-- __dasd_check_expire(device);
-- /* Finish off requests on ccw queue */
-- __dasd_process_ccw_queue(device, &final_queue);
-+ __dasd_device_check_expire(device);
-+ /* find final requests on ccw queue */
-+ __dasd_device_process_ccw_queue(device, &final_queue);
- spin_unlock_irq(get_ccwdev_lock(device->cdev));
- /* Now call the callback function of requests with final status */
-- list_for_each_safe(l, n, &final_queue) {
-- cqr = list_entry(l, struct dasd_ccw_req, list);
-- list_del_init(&cqr->list);
-- if (cqr->callback != NULL)
-- (cqr->callback)(cqr, cqr->callback_data);
-- }
-- spin_lock_irq(&device->request_queue_lock);
-- spin_lock(get_ccwdev_lock(device->cdev));
-- /* Get new request from the block device request queue */
-- __dasd_process_blk_queue(device);
-+ __dasd_device_process_final_queue(device, &final_queue);
-+ spin_lock_irq(get_ccwdev_lock(device->cdev));
- /* Now check if the head of the ccw queue needs to be started. */
-- __dasd_start_head(device);
-- spin_unlock(get_ccwdev_lock(device->cdev));
-- spin_unlock_irq(&device->request_queue_lock);
-+ __dasd_device_start_head(device);
-+ spin_unlock_irq(get_ccwdev_lock(device->cdev));
- dasd_put_device(device);
- }
+-
+-#define SENCODE_NO_SENSE 0x00
+-#define SENCODE_END_OF_DATA 0x00
+-#define SENCODE_BECOMING_READY 0x04
+-#define SENCODE_INIT_CMD_REQUIRED 0x04
+-#define SENCODE_PARAM_LIST_LENGTH_ERROR 0x1A
+-#define SENCODE_INVALID_COMMAND 0x20
+-#define SENCODE_LBA_OUT_OF_RANGE 0x21
+-#define SENCODE_INVALID_CDB_FIELD 0x24
+-#define SENCODE_LUN_NOT_SUPPORTED 0x25
+-#define SENCODE_INVALID_PARAM_FIELD 0x26
+-#define SENCODE_PARAM_NOT_SUPPORTED 0x26
+-#define SENCODE_PARAM_VALUE_INVALID 0x26
+-#define SENCODE_RESET_OCCURRED 0x29
+-#define SENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x3E
+-#define SENCODE_INQUIRY_DATA_CHANGED 0x3F
+-#define SENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x39
+-#define SENCODE_DIAGNOSTIC_FAILURE 0x40
+-#define SENCODE_INTERNAL_TARGET_FAILURE 0x44
+-#define SENCODE_INVALID_MESSAGE_ERROR 0x49
+-#define SENCODE_LUN_FAILED_SELF_CONFIG 0x4c
+-#define SENCODE_OVERLAPPED_COMMAND 0x4E
++
++#define SENCODE_NO_SENSE 0x00
++#define SENCODE_END_OF_DATA 0x00
++#define SENCODE_BECOMING_READY 0x04
++#define SENCODE_INIT_CMD_REQUIRED 0x04
++#define SENCODE_PARAM_LIST_LENGTH_ERROR 0x1A
++#define SENCODE_INVALID_COMMAND 0x20
++#define SENCODE_LBA_OUT_OF_RANGE 0x21
++#define SENCODE_INVALID_CDB_FIELD 0x24
++#define SENCODE_LUN_NOT_SUPPORTED 0x25
++#define SENCODE_INVALID_PARAM_FIELD 0x26
++#define SENCODE_PARAM_NOT_SUPPORTED 0x26
++#define SENCODE_PARAM_VALUE_INVALID 0x26
++#define SENCODE_RESET_OCCURRED 0x29
++#define SENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x3E
++#define SENCODE_INQUIRY_DATA_CHANGED 0x3F
++#define SENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x39
++#define SENCODE_DIAGNOSTIC_FAILURE 0x40
++#define SENCODE_INTERNAL_TARGET_FAILURE 0x44
++#define SENCODE_INVALID_MESSAGE_ERROR 0x49
++#define SENCODE_LUN_FAILED_SELF_CONFIG 0x4c
++#define SENCODE_OVERLAPPED_COMMAND 0x4E
/*
- * Schedules a call to dasd_tasklet over the device tasklet.
+ * Additional sense codes
*/
--void
--dasd_schedule_bh(struct dasd_device * device)
-+void dasd_schedule_device_bh(struct dasd_device *device)
- {
- /* Protect against rescheduling. */
- if (atomic_cmpxchg (&device->tasklet_scheduled, 0, 1) != 0)
-@@ -1480,160 +1342,109 @@ dasd_schedule_bh(struct dasd_device * device)
- }
+-
+-#define ASENCODE_NO_SENSE 0x00
+-#define ASENCODE_END_OF_DATA 0x05
+-#define ASENCODE_BECOMING_READY 0x01
+-#define ASENCODE_INIT_CMD_REQUIRED 0x02
+-#define ASENCODE_PARAM_LIST_LENGTH_ERROR 0x00
+-#define ASENCODE_INVALID_COMMAND 0x00
+-#define ASENCODE_LBA_OUT_OF_RANGE 0x00
+-#define ASENCODE_INVALID_CDB_FIELD 0x00
+-#define ASENCODE_LUN_NOT_SUPPORTED 0x00
+-#define ASENCODE_INVALID_PARAM_FIELD 0x00
+-#define ASENCODE_PARAM_NOT_SUPPORTED 0x01
+-#define ASENCODE_PARAM_VALUE_INVALID 0x02
+-#define ASENCODE_RESET_OCCURRED 0x00
+-#define ASENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x00
+-#define ASENCODE_INQUIRY_DATA_CHANGED 0x03
+-#define ASENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x00
+-#define ASENCODE_DIAGNOSTIC_FAILURE 0x80
+-#define ASENCODE_INTERNAL_TARGET_FAILURE 0x00
+-#define ASENCODE_INVALID_MESSAGE_ERROR 0x00
+-#define ASENCODE_LUN_FAILED_SELF_CONFIG 0x00
+-#define ASENCODE_OVERLAPPED_COMMAND 0x00
++
++#define ASENCODE_NO_SENSE 0x00
++#define ASENCODE_END_OF_DATA 0x05
++#define ASENCODE_BECOMING_READY 0x01
++#define ASENCODE_INIT_CMD_REQUIRED 0x02
++#define ASENCODE_PARAM_LIST_LENGTH_ERROR 0x00
++#define ASENCODE_INVALID_COMMAND 0x00
++#define ASENCODE_LBA_OUT_OF_RANGE 0x00
++#define ASENCODE_INVALID_CDB_FIELD 0x00
++#define ASENCODE_LUN_NOT_SUPPORTED 0x00
++#define ASENCODE_INVALID_PARAM_FIELD 0x00
++#define ASENCODE_PARAM_NOT_SUPPORTED 0x01
++#define ASENCODE_PARAM_VALUE_INVALID 0x02
++#define ASENCODE_RESET_OCCURRED 0x00
++#define ASENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x00
++#define ASENCODE_INQUIRY_DATA_CHANGED 0x03
++#define ASENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x00
++#define ASENCODE_DIAGNOSTIC_FAILURE 0x80
++#define ASENCODE_INTERNAL_TARGET_FAILURE 0x00
++#define ASENCODE_INVALID_MESSAGE_ERROR 0x00
++#define ASENCODE_LUN_FAILED_SELF_CONFIG 0x00
++#define ASENCODE_OVERLAPPED_COMMAND 0x00
+ #define BYTE0(x) (unsigned char)(x)
+ #define BYTE1(x) (unsigned char)((x) >> 8)
+@@ -115,8 +115,8 @@
+ *----------------------------------------------------------------------------*/
+ /* SCSI inquiry data */
+ struct inquiry_data {
+- u8 inqd_pdt; /* Peripheral qualifier | Peripheral Device Type */
+- u8 inqd_dtq; /* RMB | Device Type Qualifier */
++ u8 inqd_pdt; /* Peripheral qualifier | Peripheral Device Type */
++ u8 inqd_dtq; /* RMB | Device Type Qualifier */
+ u8 inqd_ver; /* ISO version | ECMA version | ANSI-approved version */
+ u8 inqd_rdf; /* AENC | TrmIOP | Response data format */
+ u8 inqd_len; /* Additional length (n-4) */
+@@ -130,7 +130,7 @@ struct inquiry_data {
/*
-- * Queue a request to the head of the ccw_queue. Start the I/O if
-- * possible.
-+ * Queue a request to the head of the device ccw_queue.
-+ * Start the I/O if possible.
+ * M O D U L E G L O B A L S
*/
--void
--dasd_add_request_head(struct dasd_ccw_req *req)
-+void dasd_add_request_head(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
- unsigned long flags;
-
-- device = req->device;
-+ device = cqr->startdev;
- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-- req->status = DASD_CQR_QUEUED;
-- req->device = device;
-- list_add(&req->list, &device->ccw_queue);
-+ cqr->status = DASD_CQR_QUEUED;
-+ list_add(&cqr->devlist, &device->ccw_queue);
- /* let the bh start the request to keep them in order */
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
- }
+-
++
+ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* sgmap);
+ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* psg);
+ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg);
+@@ -141,9 +141,10 @@ static char *aac_get_status_string(u32 status);
/*
-- * Queue a request to the tail of the ccw_queue. Start the I/O if
-- * possible.
-+ * Queue a request to the tail of the device ccw_queue.
-+ * Start the I/O if possible.
- */
--void
--dasd_add_request_tail(struct dasd_ccw_req *req)
-+void dasd_add_request_tail(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
- unsigned long flags;
+ * Non dasd selection is handled entirely in aachba now
+- */
+-
++ */
++
+ static int nondasd = -1;
++static int aac_cache = 0;
+ static int dacmode = -1;
-- device = req->device;
-+ device = cqr->startdev;
- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-- req->status = DASD_CQR_QUEUED;
-- req->device = device;
-- list_add_tail(&req->list, &device->ccw_queue);
-+ cqr->status = DASD_CQR_QUEUED;
-+ list_add_tail(&cqr->devlist, &device->ccw_queue);
- /* let the bh start the request to keep them in order */
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
- }
+ int aac_commit = -1;
+@@ -152,6 +153,8 @@ int aif_timeout = 120;
- /*
-- * Wakeup callback.
-+ * Wakeup helper for the 'sleep_on' functions.
- */
--static void
--dasd_wakeup_cb(struct dasd_ccw_req *cqr, void *data)
-+static void dasd_wakeup_cb(struct dasd_ccw_req *cqr, void *data)
- {
- wake_up((wait_queue_head_t *) data);
- }
+ module_param(nondasd, int, S_IRUGO|S_IWUSR);
+ MODULE_PARM_DESC(nondasd, "Control scanning of hba for nondasd devices. 0=off, 1=on");
++module_param_named(cache, aac_cache, int, S_IRUGO|S_IWUSR);
++MODULE_PARM_DESC(cache, "Disable Queue Flush commands:\n\tbit 0 - Disable FUA in WRITE SCSI commands\n\tbit 1 - Disable SYNCHRONIZE_CACHE SCSI command\n\tbit 2 - Disable only if Battery not protecting Cache");
+ module_param(dacmode, int, S_IRUGO|S_IWUSR);
+ MODULE_PARM_DESC(dacmode, "Control whether dma addressing is using 64 bit DAC. 0=off, 1=on");
+ module_param_named(commit, aac_commit, int, S_IRUGO|S_IWUSR);
+@@ -179,7 +182,7 @@ MODULE_PARM_DESC(check_interval, "Interval in seconds between adapter health che
--static inline int
--_wait_for_wakeup(struct dasd_ccw_req *cqr)
-+static inline int _wait_for_wakeup(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
- int rc;
+ int aac_check_reset = 1;
+ module_param_named(check_reset, aac_check_reset, int, S_IRUGO|S_IWUSR);
+-MODULE_PARM_DESC(aac_check_reset, "If adapter fails health check, reset the adapter.");
++MODULE_PARM_DESC(aac_check_reset, "If adapter fails health check, reset the adapter. a value of -1 forces the reset to adapters programmed to ignore it.");
-- device = cqr->device;
-+ device = cqr->startdev;
- spin_lock_irq(get_ccwdev_lock(device->cdev));
- rc = ((cqr->status == DASD_CQR_DONE ||
-- cqr->status == DASD_CQR_FAILED) &&
-- list_empty(&cqr->list));
-+ cqr->status == DASD_CQR_NEED_ERP ||
-+ cqr->status == DASD_CQR_TERMINATED) &&
-+ list_empty(&cqr->devlist));
- spin_unlock_irq(get_ccwdev_lock(device->cdev));
- return rc;
- }
+ int expose_physicals = -1;
+ module_param(expose_physicals, int, S_IRUGO|S_IWUSR);
+@@ -193,12 +196,12 @@ static inline int aac_valid_context(struct scsi_cmnd *scsicmd,
+ struct fib *fibptr) {
+ struct scsi_device *device;
- /*
-- * Attempts to start a special ccw queue and waits for its completion.
-+ * Queue a request to the tail of the device ccw_queue and wait for
-+ * it's completion.
- */
--int
--dasd_sleep_on(struct dasd_ccw_req * cqr)
-+int dasd_sleep_on(struct dasd_ccw_req *cqr)
+- if (unlikely(!scsicmd || !scsicmd->scsi_done )) {
++ if (unlikely(!scsicmd || !scsicmd->scsi_done)) {
+ dprintk((KERN_WARNING "aac_valid_context: scsi command corrupt\n"));
+- aac_fib_complete(fibptr);
+- aac_fib_free(fibptr);
+- return 0;
+- }
++ aac_fib_complete(fibptr);
++ aac_fib_free(fibptr);
++ return 0;
++ }
+ scsicmd->SCp.phase = AAC_OWNER_MIDLEVEL;
+ device = scsicmd->device;
+ if (unlikely(!device || !scsi_device_online(device))) {
+@@ -240,7 +243,7 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
+ FsaNormal,
+ 1, 1,
+ NULL, NULL);
+- if (status < 0 ) {
++ if (status < 0) {
+ printk(KERN_WARNING "aac_get_config_status: SendFIB failed.\n");
+ } else {
+ struct aac_get_config_status_resp *reply
+@@ -264,10 +267,10 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
+ struct aac_commit_config * dinfo;
+ aac_fib_init(fibptr);
+ dinfo = (struct aac_commit_config *) fib_data(fibptr);
+-
++
+ dinfo->command = cpu_to_le32(VM_ContainerConfig);
+ dinfo->type = cpu_to_le32(CT_COMMIT_CONFIG);
+-
++
+ status = aac_fib_send(ContainerCommand,
+ fibptr,
+ sizeof (struct aac_commit_config),
+@@ -293,7 +296,7 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
+ int aac_get_containers(struct aac_dev *dev)
{
- wait_queue_head_t wait_q;
- struct dasd_device *device;
- int rc;
-
-- device = cqr->device;
-- spin_lock_irq(get_ccwdev_lock(device->cdev));
-+ device = cqr->startdev;
+ struct fsa_dev_info *fsa_dev_ptr;
+- u32 index;
++ u32 index;
+ int status = 0;
+ struct fib * fibptr;
+ struct aac_get_container_count *dinfo;
+@@ -363,6 +366,7 @@ static void aac_internal_transfer(struct scsi_cmnd *scsicmd, void *data, unsigne
+ if (buf && transfer_len > 0)
+ memcpy(buf + offset, data, transfer_len);
- init_waitqueue_head (&wait_q);
- cqr->callback = dasd_wakeup_cb;
- cqr->callback_data = (void *) &wait_q;
-- cqr->status = DASD_CQR_QUEUED;
-- list_add_tail(&cqr->list, &device->ccw_queue);
--
-- /* let the bh start the request to keep them in order */
-- dasd_schedule_bh(device);
--
-- spin_unlock_irq(get_ccwdev_lock(device->cdev));
--
-+ dasd_add_request_tail(cqr);
- wait_event(wait_q, _wait_for_wakeup(cqr));
++ flush_kernel_dcache_page(kmap_atomic_to_page(buf - sg->offset));
+ kunmap_atomic(buf - sg->offset, KM_IRQ0);
- /* Request status is either done or failed. */
-- rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
-+ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
- return rc;
}
+@@ -395,7 +399,7 @@ static void get_container_name_callback(void *context, struct fib * fibptr)
+ do {
+ *dp++ = (*sp) ? *sp++ : ' ';
+ } while (--count > 0);
+- aac_internal_transfer(scsicmd, d,
++ aac_internal_transfer(scsicmd, d,
+ offsetof(struct inquiry_data, inqd_pid), sizeof(d));
+ }
+ }
+@@ -431,13 +435,13 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
+ dinfo->count = cpu_to_le32(sizeof(((struct aac_get_name_resp *)NULL)->data));
- /*
-- * Attempts to start a special ccw queue and wait interruptible
-- * for its completion.
-+ * Queue a request to the tail of the device ccw_queue and wait
-+ * interruptible for it's completion.
+ status = aac_fib_send(ContainerCommand,
+- cmd_fibcontext,
++ cmd_fibcontext,
+ sizeof (struct aac_get_name),
+- FsaNormal,
+- 0, 1,
+- (fib_callback) get_container_name_callback,
++ FsaNormal,
++ 0, 1,
++ (fib_callback)get_container_name_callback,
+ (void *) scsicmd);
+-
++
+ /*
+ * Check that the command queued to the controller
+ */
+@@ -445,7 +449,7 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
+ scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
+ return 0;
+ }
+-
++
+ printk(KERN_WARNING "aac_get_container_name: aac_fib_send failed with status: %d.\n", status);
+ aac_fib_complete(cmd_fibcontext);
+ aac_fib_free(cmd_fibcontext);
+@@ -652,42 +656,47 @@ struct scsi_inq {
+ * @a: string to copy from
+ * @b: string to copy to
+ *
+- * Copy a String from one location to another
++ * Copy a String from one location to another
+ * without copying \0
*/
--int
--dasd_sleep_on_interruptible(struct dasd_ccw_req * cqr)
-+int dasd_sleep_on_interruptible(struct dasd_ccw_req *cqr)
+
+ static void inqstrcpy(char *a, char *b)
{
- wait_queue_head_t wait_q;
- struct dasd_device *device;
-- int rc, finished;
--
-- device = cqr->device;
-- spin_lock_irq(get_ccwdev_lock(device->cdev));
-+ int rc;
-+ device = cqr->startdev;
- init_waitqueue_head (&wait_q);
- cqr->callback = dasd_wakeup_cb;
- cqr->callback_data = (void *) &wait_q;
-- cqr->status = DASD_CQR_QUEUED;
-- list_add_tail(&cqr->list, &device->ccw_queue);
--
-- /* let the bh start the request to keep them in order */
-- dasd_schedule_bh(device);
-- spin_unlock_irq(get_ccwdev_lock(device->cdev));
+- while(*a != (char)0)
++ while (*a != (char)0)
+ *b++ = *a++;
+ }
+
+ static char *container_types[] = {
+- "None",
+- "Volume",
+- "Mirror",
+- "Stripe",
+- "RAID5",
+- "SSRW",
+- "SSRO",
+- "Morph",
+- "Legacy",
+- "RAID4",
+- "RAID10",
+- "RAID00",
+- "V-MIRRORS",
+- "PSEUDO R4",
++ "None",
++ "Volume",
++ "Mirror",
++ "Stripe",
++ "RAID5",
++ "SSRW",
++ "SSRO",
++ "Morph",
++ "Legacy",
++ "RAID4",
++ "RAID10",
++ "RAID00",
++ "V-MIRRORS",
++ "PSEUDO R4",
+ "RAID50",
+ "RAID5D",
+ "RAID5D0",
+ "RAID1E",
+ "RAID6",
+ "RAID60",
+- "Unknown"
++ "Unknown"
+ };
+
-
-- finished = 0;
-- while (!finished) {
-- rc = wait_event_interruptible(wait_q, _wait_for_wakeup(cqr));
-- if (rc != -ERESTARTSYS) {
-- /* Request is final (done or failed) */
-- rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
-- break;
-- }
-- spin_lock_irq(get_ccwdev_lock(device->cdev));
-- switch (cqr->status) {
-- case DASD_CQR_IN_IO:
-- /* terminate runnig cqr */
-- if (device->discipline->term_IO) {
-- cqr->retries = -1;
-- device->discipline->term_IO(cqr);
-- /* wait (non-interruptible) for final status
-- * because signal ist still pending */
-- spin_unlock_irq(get_ccwdev_lock(device->cdev));
-- wait_event(wait_q, _wait_for_wakeup(cqr));
-- spin_lock_irq(get_ccwdev_lock(device->cdev));
-- rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
-- finished = 1;
-- }
-- break;
-- case DASD_CQR_QUEUED:
-- /* request */
-- list_del_init(&cqr->list);
-- rc = -EIO;
-- finished = 1;
-- break;
-- default:
-- /* cqr with 'non-interruptable' status - just wait */
-- break;
-- }
-- spin_unlock_irq(get_ccwdev_lock(device->cdev));
-+ dasd_add_request_tail(cqr);
-+ rc = wait_event_interruptible(wait_q, _wait_for_wakeup(cqr));
-+ if (rc == -ERESTARTSYS) {
-+ dasd_cancel_req(cqr);
-+ /* wait (non-interruptible) for final status */
-+ wait_event(wait_q, _wait_for_wakeup(cqr));
++char * get_container_type(unsigned tindex)
++{
++ if (tindex >= ARRAY_SIZE(container_types))
++ tindex = ARRAY_SIZE(container_types) - 1;
++ return container_types[tindex];
++}
+
+ /* Function: setinqstr
+ *
+@@ -707,16 +716,21 @@ static void setinqstr(struct aac_dev *dev, void *data, int tindex)
+
+ if (dev->supplement_adapter_info.AdapterTypeText[0]) {
+ char * cp = dev->supplement_adapter_info.AdapterTypeText;
+- int c = sizeof(str->vid);
+- while (*cp && *cp != ' ' && --c)
+- ++cp;
+- c = *cp;
+- *cp = '\0';
+- inqstrcpy (dev->supplement_adapter_info.AdapterTypeText,
+- str->vid);
+- *cp = c;
+- while (*cp && *cp != ' ')
+- ++cp;
++ int c;
++ if ((cp[0] == 'A') && (cp[1] == 'O') && (cp[2] == 'C'))
++ inqstrcpy("SMC", str->vid);
++ else {
++ c = sizeof(str->vid);
++ while (*cp && *cp != ' ' && --c)
++ ++cp;
++ c = *cp;
++ *cp = '\0';
++ inqstrcpy (dev->supplement_adapter_info.AdapterTypeText,
++ str->vid);
++ *cp = c;
++ while (*cp && *cp != ' ')
++ ++cp;
++ }
+ while (*cp == ' ')
+ ++cp;
+ /* last six chars reserved for vol type */
+@@ -898,9 +912,8 @@ static int aac_bounds_32(struct aac_dev * dev, struct scsi_cmnd * cmd, u64 lba)
+ ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
+ 0, 0);
+ memcpy(cmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
+- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(cmd->sense_buffer))
+- ? sizeof(cmd->sense_buffer)
+- : sizeof(dev->fsa_dev[cid].sense_data));
++ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ cmd->scsi_done(cmd);
+ return 1;
}
-+ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
- return rc;
+@@ -981,7 +994,7 @@ static int aac_read_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32
+ aac_fib_init(fib);
+ readcmd = (struct aac_read *) fib_data(fib);
+ readcmd->command = cpu_to_le32(VM_CtBlockRead);
+- readcmd->cid = cpu_to_le16(scmd_id(cmd));
++ readcmd->cid = cpu_to_le32(scmd_id(cmd));
+ readcmd->block = cpu_to_le32((u32)(lba&0xffffffff));
+ readcmd->count = cpu_to_le32(count * 512);
+
+@@ -1013,7 +1026,8 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
+ writecmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
+ writecmd->count = cpu_to_le32(count<<9);
+ writecmd->cid = cpu_to_le16(scmd_id(cmd));
+- writecmd->flags = fua ?
++ writecmd->flags = (fua && ((aac_cache & 5) != 1) &&
++ (((aac_cache & 5) != 5) || !fib->dev->cache_protected)) ?
+ cpu_to_le16(IO_TYPE_WRITE|IO_SUREWRITE) :
+ cpu_to_le16(IO_TYPE_WRITE);
+ writecmd->bpTotal = 0;
+@@ -1072,7 +1086,7 @@ static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
+ aac_fib_init(fib);
+ writecmd = (struct aac_write *) fib_data(fib);
+ writecmd->command = cpu_to_le32(VM_CtBlockWrite);
+- writecmd->cid = cpu_to_le16(scmd_id(cmd));
++ writecmd->cid = cpu_to_le32(scmd_id(cmd));
+ writecmd->block = cpu_to_le32((u32)(lba&0xffffffff));
+ writecmd->count = cpu_to_le32(count * 512);
+ writecmd->sg.count = cpu_to_le32(1);
+@@ -1190,6 +1204,15 @@ static int aac_scsi_32(struct fib * fib, struct scsi_cmnd * cmd)
+ (fib_callback) aac_srb_callback, (void *) cmd);
}
-@@ -1643,25 +1454,23 @@ dasd_sleep_on_interruptible(struct dasd_ccw_req * cqr)
- * and be put back to status queued, before the special request is added
- * to the head of the queue. Then the special request is waited on normally.
- */
--static inline int
--_dasd_term_running_cqr(struct dasd_device *device)
-+static inline int _dasd_term_running_cqr(struct dasd_device *device)
++static int aac_scsi_32_64(struct fib * fib, struct scsi_cmnd * cmd)
++{
++ if ((sizeof(dma_addr_t) > 4) &&
++ (num_physpages > (0xFFFFFFFFULL >> PAGE_SHIFT)) &&
++ (fib->dev->adapter_info.options & AAC_OPT_SGMAP_HOST64))
++ return FAILED;
++ return aac_scsi_32(fib, cmd);
++}
++
+ int aac_get_adapter_info(struct aac_dev* dev)
{
- struct dasd_ccw_req *cqr;
+ struct fib* fibptr;
+@@ -1207,11 +1230,11 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ memset(info,0,sizeof(*info));
- if (list_empty(&device->ccw_queue))
- return 0;
-- cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, list);
-+ cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
- return device->discipline->term_IO(cqr);
- }
+ rcode = aac_fib_send(RequestAdapterInfo,
+- fibptr,
++ fibptr,
+ sizeof(*info),
+- FsaNormal,
++ FsaNormal,
+ -1, 1, /* First `interrupt' command uses special wait */
+- NULL,
++ NULL,
+ NULL);
--int
--dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
-+int dasd_sleep_on_immediatly(struct dasd_ccw_req *cqr)
- {
- wait_queue_head_t wait_q;
- struct dasd_device *device;
- int rc;
+ if (rcode < 0) {
+@@ -1222,29 +1245,29 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ memcpy(&dev->adapter_info, info, sizeof(*info));
-- device = cqr->device;
-+ device = cqr->startdev;
- spin_lock_irq(get_ccwdev_lock(device->cdev));
- rc = _dasd_term_running_cqr(device);
- if (rc) {
-@@ -1673,17 +1482,17 @@ dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
- cqr->callback = dasd_wakeup_cb;
- cqr->callback_data = (void *) &wait_q;
- cqr->status = DASD_CQR_QUEUED;
-- list_add(&cqr->list, &device->ccw_queue);
-+ list_add(&cqr->devlist, &device->ccw_queue);
+ if (dev->adapter_info.options & AAC_OPT_SUPPLEMENT_ADAPTER_INFO) {
+- struct aac_supplement_adapter_info * info;
++ struct aac_supplement_adapter_info * sinfo;
- /* let the bh start the request to keep them in order */
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
+ aac_fib_init(fibptr);
- spin_unlock_irq(get_ccwdev_lock(device->cdev));
+- info = (struct aac_supplement_adapter_info *) fib_data(fibptr);
++ sinfo = (struct aac_supplement_adapter_info *) fib_data(fibptr);
- wait_event(wait_q, _wait_for_wakeup(cqr));
+- memset(info,0,sizeof(*info));
++ memset(sinfo,0,sizeof(*sinfo));
- /* Request status is either done or failed. */
-- rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
-+ rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
- return rc;
- }
+ rcode = aac_fib_send(RequestSupplementAdapterInfo,
+ fibptr,
+- sizeof(*info),
++ sizeof(*sinfo),
+ FsaNormal,
+ 1, 1,
+ NULL,
+ NULL);
-@@ -1692,11 +1501,14 @@ dasd_sleep_on_immediatly(struct dasd_ccw_req * cqr)
- * This is useful to timeout requests. The request will be
- * terminated if it is currently in i/o.
- * Returns 1 if the request has been terminated.
-+ * 0 if there was no need to terminate the request (not started yet)
-+ * negative error code if termination failed
-+ * Cancellation of a request is an asynchronous operation! The calling
-+ * function has to wait until the request is properly returned via callback.
- */
--int
--dasd_cancel_req(struct dasd_ccw_req *cqr)
-+int dasd_cancel_req(struct dasd_ccw_req *cqr)
- {
-- struct dasd_device *device = cqr->device;
-+ struct dasd_device *device = cqr->startdev;
- unsigned long flags;
- int rc;
+ if (rcode >= 0)
+- memcpy(&dev->supplement_adapter_info, info, sizeof(*info));
++ memcpy(&dev->supplement_adapter_info, sinfo, sizeof(*sinfo));
+ }
-@@ -1704,74 +1516,454 @@ dasd_cancel_req(struct dasd_ccw_req *cqr)
- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
- switch (cqr->status) {
- case DASD_CQR_QUEUED:
-- /* request was not started - just set to failed */
-- cqr->status = DASD_CQR_FAILED;
-+ /* request was not started - just set to cleared */
-+ cqr->status = DASD_CQR_CLEARED;
- break;
- case DASD_CQR_IN_IO:
- /* request in IO - terminate IO and release again */
-- if (device->discipline->term_IO(cqr) != 0)
-- /* what to do if unable to terminate ??????
-- e.g. not _IN_IO */
-- cqr->status = DASD_CQR_FAILED;
-- cqr->stopclk = get_clock();
-- rc = 1;
-+ rc = device->discipline->term_IO(cqr);
-+ if (rc) {
-+ DEV_MESSAGE(KERN_ERR, device,
-+ "dasd_cancel_req is unable "
-+ " to terminate request %p, rc = %d",
-+ cqr, rc);
-+ } else {
-+ cqr->stopclk = get_clock();
-+ rc = 1;
-+ }
- break;
-- case DASD_CQR_DONE:
-- case DASD_CQR_FAILED:
-- /* already finished - do nothing */
-+ default: /* already finished or clear pending - do nothing */
- break;
-- default:
-- DEV_MESSAGE(KERN_ALERT, device,
-- "invalid status %02x in request",
-- cqr->status);
-+ }
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ dasd_schedule_device_bh(device);
-+ return rc;
-+}
-+
-+
-+/*
-+ * SECTION: Operations of the dasd_block layer.
-+ */
-+
-+/*
-+ * Timeout function for dasd_block. This is used when the block layer
-+ * is waiting for something that may not come reliably, (e.g. a state
-+ * change interrupt)
-+ */
-+static void dasd_block_timeout(unsigned long ptr)
-+{
-+ unsigned long flags;
-+ struct dasd_block *block;
-+
-+ block = (struct dasd_block *) ptr;
-+ spin_lock_irqsave(get_ccwdev_lock(block->base->cdev), flags);
-+ /* re-activate request queue */
-+ block->base->stopped &= ~DASD_STOPPED_PENDING;
-+ spin_unlock_irqrestore(get_ccwdev_lock(block->base->cdev), flags);
-+ dasd_schedule_block_bh(block);
-+}
-+
-+/*
-+ * Setup timeout for a dasd_block in jiffies.
-+ */
-+void dasd_block_set_timer(struct dasd_block *block, int expires)
-+{
-+ if (expires == 0) {
-+ if (timer_pending(&block->timer))
-+ del_timer(&block->timer);
-+ return;
-+ }
-+ if (timer_pending(&block->timer)) {
-+ if (mod_timer(&block->timer, jiffies + expires))
-+ return;
-+ }
-+ block->timer.function = dasd_block_timeout;
-+ block->timer.data = (unsigned long) block;
-+ block->timer.expires = jiffies + expires;
-+ add_timer(&block->timer);
-+}
-+
-+/*
-+ * Clear timeout for a dasd_block.
-+ */
-+void dasd_block_clear_timer(struct dasd_block *block)
-+{
-+ if (timer_pending(&block->timer))
-+ del_timer(&block->timer);
-+}
-+
-+/*
-+ * posts the buffer_cache about a finalized request
-+ */
-+static inline void dasd_end_request(struct request *req, int error)
-+{
-+ if (__blk_end_request(req, error, blk_rq_bytes(req)))
- BUG();
-+}
-+
-+/*
-+ * Process finished error recovery ccw.
-+ */
-+static inline void __dasd_block_process_erp(struct dasd_block *block,
-+ struct dasd_ccw_req *cqr)
-+{
-+ dasd_erp_fn_t erp_fn;
-+ struct dasd_device *device = block->base;
-+
-+ if (cqr->status == DASD_CQR_DONE)
-+ DBF_DEV_EVENT(DBF_NOTICE, device, "%s", "ERP successful");
-+ else
-+ DEV_MESSAGE(KERN_ERR, device, "%s", "ERP unsuccessful");
-+ erp_fn = device->discipline->erp_postaction(cqr);
-+ erp_fn(cqr);
-+}
-+/*
-+ * Fetch requests from the block device queue.
-+ */
-+static void __dasd_process_request_queue(struct dasd_block *block)
-+{
-+ struct request_queue *queue;
-+ struct request *req;
-+ struct dasd_ccw_req *cqr;
-+ struct dasd_device *basedev;
-+ unsigned long flags;
-+ queue = block->request_queue;
-+ basedev = block->base;
-+ /* No queue ? Then there is nothing to do. */
-+ if (queue == NULL)
-+ return;
-+
+- /*
+- * GetBusInfo
+ /*
-+ * We requeue request from the block device queue to the ccw
-+ * queue only in two states. In state DASD_STATE_READY the
-+ * partition detection is done and we need to requeue requests
-+ * for that. State DASD_STATE_ONLINE is normal block device
-+ * operation.
-+ */
-+ if (basedev->state < DASD_STATE_READY)
-+ return;
-+ /* Now we try to fetch requests from the request queue */
-+ while (!blk_queue_plugged(queue) &&
-+ elv_next_request(queue)) {
-+
-+ req = elv_next_request(queue);
-+
-+ if (basedev->features & DASD_FEATURE_READONLY &&
-+ rq_data_dir(req) == WRITE) {
-+ DBF_DEV_EVENT(DBF_ERR, basedev,
-+ "Rejecting write request %p",
-+ req);
-+ blkdev_dequeue_request(req);
-+ dasd_end_request(req, -EIO);
-+ continue;
-+ }
-+ cqr = basedev->discipline->build_cp(basedev, block, req);
-+ if (IS_ERR(cqr)) {
-+ if (PTR_ERR(cqr) == -EBUSY)
-+ break; /* normal end condition */
-+ if (PTR_ERR(cqr) == -ENOMEM)
-+ break; /* terminate request queue loop */
-+ if (PTR_ERR(cqr) == -EAGAIN) {
-+ /*
-+ * The current request cannot be build right
-+ * now, we have to try later. If this request
-+ * is the head-of-queue we stop the device
-+ * for 1/2 second.
-+ */
-+ if (!list_empty(&block->ccw_queue))
-+ break;
-+ spin_lock_irqsave(get_ccwdev_lock(basedev->cdev), flags);
-+ basedev->stopped |= DASD_STOPPED_PENDING;
-+ spin_unlock_irqrestore(get_ccwdev_lock(basedev->cdev), flags);
-+ dasd_block_set_timer(block, HZ/2);
-+ break;
-+ }
-+ DBF_DEV_EVENT(DBF_ERR, basedev,
-+ "CCW creation failed (rc=%ld) "
-+ "on request %p",
-+ PTR_ERR(cqr), req);
-+ blkdev_dequeue_request(req);
-+ dasd_end_request(req, -EIO);
-+ continue;
-+ }
-+ /*
-+ * Note: callback is set to dasd_return_cqr_cb in
-+ * __dasd_block_start_head to cover erp requests as well
-+ */
-+ cqr->callback_data = (void *) req;
-+ cqr->status = DASD_CQR_FILLED;
-+ blkdev_dequeue_request(req);
-+ list_add_tail(&cqr->blocklist, &block->ccw_queue);
-+ dasd_profile_start(block, cqr, req);
-+ }
-+}
-+
-+static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
-+{
-+ struct request *req;
-+ int status;
-+ int error = 0;
-+
-+ req = (struct request *) cqr->callback_data;
-+ dasd_profile_end(cqr->block, cqr, req);
-+ status = cqr->memdev->discipline->free_cp(cqr, req);
-+ if (status <= 0)
-+ error = status ? status : -EIO;
-+ dasd_end_request(req, error);
-+}
-+
-+/*
-+ * Process ccw request queue.
-+ */
-+static void __dasd_process_block_ccw_queue(struct dasd_block *block,
-+ struct list_head *final_queue)
-+{
-+ struct list_head *l, *n;
-+ struct dasd_ccw_req *cqr;
-+ dasd_erp_fn_t erp_fn;
-+ unsigned long flags;
-+ struct dasd_device *base = block->base;
-+
-+restart:
-+ /* Process request with final status. */
-+ list_for_each_safe(l, n, &block->ccw_queue) {
-+ cqr = list_entry(l, struct dasd_ccw_req, blocklist);
-+ if (cqr->status != DASD_CQR_DONE &&
-+ cqr->status != DASD_CQR_FAILED &&
-+ cqr->status != DASD_CQR_NEED_ERP &&
-+ cqr->status != DASD_CQR_TERMINATED)
-+ continue;
-+
-+ if (cqr->status == DASD_CQR_TERMINATED) {
-+ base->discipline->handle_terminated_request(cqr);
-+ goto restart;
-+ }
-+
-+ /* Process requests that may be recovered */
-+ if (cqr->status == DASD_CQR_NEED_ERP) {
-+ if (cqr->irb.esw.esw0.erw.cons &&
-+ test_bit(DASD_CQR_FLAGS_USE_ERP,
-+ &cqr->flags)) {
-+ erp_fn = base->discipline->erp_action(cqr);
-+ erp_fn(cqr);
-+ }
-+ goto restart;
-+ }
-+
-+ /* First of all call extended error reporting. */
-+ if (dasd_eer_enabled(base) &&
-+ cqr->status == DASD_CQR_FAILED) {
-+ dasd_eer_write(base, cqr, DASD_EER_FATALERROR);
-+
-+ /* restart request */
-+ cqr->status = DASD_CQR_FILLED;
-+ cqr->retries = 255;
-+ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
-+ base->stopped |= DASD_STOPPED_QUIESCE;
-+ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev),
-+ flags);
-+ goto restart;
-+ }
-+
-+ /* Process finished ERP request. */
-+ if (cqr->refers) {
-+ __dasd_block_process_erp(block, cqr);
-+ goto restart;
-+ }
-+
-+ /* Rechain finished requests to final queue */
-+ cqr->endclk = get_clock();
-+ list_move_tail(&cqr->blocklist, final_queue);
-+ }
-+}
-+
-+static void dasd_return_cqr_cb(struct dasd_ccw_req *cqr, void *data)
-+{
-+ dasd_schedule_block_bh(cqr->block);
-+}
-+
-+static void __dasd_block_start_head(struct dasd_block *block)
-+{
-+ struct dasd_ccw_req *cqr;
-+
-+ if (list_empty(&block->ccw_queue))
-+ return;
-+ /* We allways begin with the first requests on the queue, as some
-+ * of previously started requests have to be enqueued on a
-+ * dasd_device again for error recovery.
-+ */
-+ list_for_each_entry(cqr, &block->ccw_queue, blocklist) {
-+ if (cqr->status != DASD_CQR_FILLED)
-+ continue;
-+ /* Non-temporary stop condition will trigger fail fast */
-+ if (block->base->stopped & ~DASD_STOPPED_PENDING &&
-+ test_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags) &&
-+ (!dasd_eer_enabled(block->base))) {
-+ cqr->status = DASD_CQR_FAILED;
-+ dasd_schedule_block_bh(block);
-+ continue;
-+ }
-+ /* Don't try to start requests if device is stopped */
-+ if (block->base->stopped)
-+ return;
-+
-+ /* just a fail safe check, should not happen */
-+ if (!cqr->startdev)
-+ cqr->startdev = block->base;
-+
-+ /* make sure that the requests we submit find their way back */
-+ cqr->callback = dasd_return_cqr_cb;
-+
-+ dasd_add_request_tail(cqr);
-+ }
-+}
-+
-+/*
-+ * Central dasd_block layer routine. Takes requests from the generic
-+ * block layer request queue, creates ccw requests, enqueues them on
-+ * a dasd_device and processes ccw requests that have been returned.
-+ */
-+static void dasd_block_tasklet(struct dasd_block *block)
-+{
-+ struct list_head final_queue;
-+ struct list_head *l, *n;
-+ struct dasd_ccw_req *cqr;
-+
-+ atomic_set(&block->tasklet_scheduled, 0);
-+ INIT_LIST_HEAD(&final_queue);
-+ spin_lock(&block->queue_lock);
-+ /* Finish off requests on ccw queue */
-+ __dasd_process_block_ccw_queue(block, &final_queue);
-+ spin_unlock(&block->queue_lock);
-+ /* Now call the callback function of requests with final status */
-+ spin_lock_irq(&block->request_queue_lock);
-+ list_for_each_safe(l, n, &final_queue) {
-+ cqr = list_entry(l, struct dasd_ccw_req, blocklist);
-+ list_del_init(&cqr->blocklist);
-+ __dasd_cleanup_cqr(cqr);
-+ }
-+ spin_lock(&block->queue_lock);
-+ /* Get new request from the block device request queue */
-+ __dasd_process_request_queue(block);
-+ /* Now check if the head of the ccw queue needs to be started. */
-+ __dasd_block_start_head(block);
-+ spin_unlock(&block->queue_lock);
-+ spin_unlock_irq(&block->request_queue_lock);
-+ dasd_put_device(block->base);
-+}
-+
-+static void _dasd_wake_block_flush_cb(struct dasd_ccw_req *cqr, void *data)
-+{
-+ wake_up(&dasd_flush_wq);
-+}
-+
-+/*
-+ * Go through all request on the dasd_block request queue, cancel them
-+ * on the respective dasd_device, and return them to the generic
-+ * block layer.
-+ */
-+static int dasd_flush_block_queue(struct dasd_block *block)
-+{
-+ struct dasd_ccw_req *cqr, *n;
-+ int rc, i;
-+ struct list_head flush_queue;
-+
-+ INIT_LIST_HEAD(&flush_queue);
-+ spin_lock_bh(&block->queue_lock);
-+ rc = 0;
-+restart:
-+ list_for_each_entry_safe(cqr, n, &block->ccw_queue, blocklist) {
-+ /* if this request currently owned by a dasd_device cancel it */
-+ if (cqr->status >= DASD_CQR_QUEUED)
-+ rc = dasd_cancel_req(cqr);
-+ if (rc < 0)
-+ break;
-+ /* Rechain request (including erp chain) so it won't be
-+ * touched by the dasd_block_tasklet anymore.
-+ * Replace the callback so we notice when the request
-+ * is returned from the dasd_device layer.
-+ */
-+ cqr->callback = _dasd_wake_block_flush_cb;
-+ for (i = 0; cqr != NULL; cqr = cqr->refers, i++)
-+ list_move_tail(&cqr->blocklist, &flush_queue);
-+ if (i > 1)
-+ /* moved more than one request - need to restart */
-+ goto restart;
-+ }
-+ spin_unlock_bh(&block->queue_lock);
-+ /* Now call the callback function of flushed requests */
-+restart_cb:
-+ list_for_each_entry_safe(cqr, n, &flush_queue, blocklist) {
-+ wait_event(dasd_flush_wq, (cqr->status < DASD_CQR_QUEUED));
-+ /* Process finished ERP request. */
-+ if (cqr->refers) {
-+ __dasd_block_process_erp(block, cqr);
-+ /* restart list_for_xx loop since dasd_process_erp
-+ * might remove multiple elements */
-+ goto restart_cb;
-+ }
-+ /* call the callback function */
-+ cqr->endclk = get_clock();
-+ list_del_init(&cqr->blocklist);
-+ __dasd_cleanup_cqr(cqr);
++ * GetBusInfo
+ */
+
+ aac_fib_init(fibptr);
+@@ -1267,6 +1290,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ 1, 1,
+ NULL, NULL);
+
++ /* reasoned default */
++ dev->maximum_num_physicals = 16;
+ if (rcode >= 0 && le32_to_cpu(bus_info->Status) == ST_OK) {
+ dev->maximum_num_physicals = le32_to_cpu(bus_info->TargetsPerBus);
+ dev->maximum_num_channels = le32_to_cpu(bus_info->BusCount);
+@@ -1276,7 +1301,7 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ char buffer[16];
+ tmp = le32_to_cpu(dev->adapter_info.kernelrev);
+ printk(KERN_INFO "%s%d: kernel %d.%d-%d[%d] %.*s\n",
+- dev->name,
++ dev->name,
+ dev->id,
+ tmp>>24,
+ (tmp>>16)&0xff,
+@@ -1305,19 +1330,21 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ (int)sizeof(dev->supplement_adapter_info.VpdInfo.Tsid),
+ dev->supplement_adapter_info.VpdInfo.Tsid);
+ }
+- if (!aac_check_reset ||
++ if (!aac_check_reset || ((aac_check_reset != 1) &&
+ (dev->supplement_adapter_info.SupportedOptions2 &
+- le32_to_cpu(AAC_OPTION_IGNORE_RESET))) {
++ AAC_OPTION_IGNORE_RESET))) {
+ printk(KERN_INFO "%s%d: Reset Adapter Ignored\n",
+ dev->name, dev->id);
+ }
}
-- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-- dasd_schedule_bh(device);
- return rc;
- }
- /*
-- * SECTION: Block device operations (request queue, partitions, open, release).
-+ * Schedules a call to dasd_tasklet over the device tasklet.
-+ */
-+void dasd_schedule_block_bh(struct dasd_block *block)
-+{
-+ /* Protect against rescheduling. */
-+ if (atomic_cmpxchg(&block->tasklet_scheduled, 0, 1) != 0)
-+ return;
-+ /* life cycle of block is bound to it's base device */
-+ dasd_get_device(block->base);
-+ tasklet_hi_schedule(&block->tasklet);
-+}
-+
++ dev->cache_protected = 0;
++ dev->jbod = ((dev->supplement_adapter_info.FeatureBits &
++ AAC_FEATURE_JBOD) != 0);
+ dev->nondasd_support = 0;
+ dev->raid_scsi_mode = 0;
+- if(dev->adapter_info.options & AAC_OPT_NONDASD){
++ if(dev->adapter_info.options & AAC_OPT_NONDASD)
+ dev->nondasd_support = 1;
+- }
+
+ /*
+ * If the firmware supports ROMB RAID/SCSI mode and we are currently
+@@ -1338,11 +1365,10 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ if (dev->raid_scsi_mode != 0)
+ printk(KERN_INFO "%s%d: ROMB RAID/SCSI mode enabled\n",
+ dev->name, dev->id);
+-
+- if(nondasd != -1) {
+
-+/*
-+ * SECTION: external block device operations
-+ * (request queue handling, open, release, etc.)
- */
++ if (nondasd != -1)
+ dev->nondasd_support = (nondasd!=0);
+- }
+- if(dev->nondasd_support != 0){
++ if(dev->nondasd_support != 0) {
+ printk(KERN_INFO "%s%d: Non-DASD support enabled.\n",dev->name, dev->id);
+ }
- /*
- * Dasd request queue function. Called from ll_rw_blk.c
- */
--static void
--do_dasd_request(struct request_queue * queue)
-+static void do_dasd_request(struct request_queue *queue)
- {
-- struct dasd_device *device;
-+ struct dasd_block *block;
+@@ -1371,12 +1397,14 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ rcode = -ENOMEM;
+ }
+ }
+- /*
++ /*
+ * Deal with configuring for the individualized limits of each packet
+ * interface.
+ */
+ dev->a_ops.adapter_scsi = (dev->dac_support)
+- ? aac_scsi_64
++ ? ((aac_get_driver_ident(dev->cardtype)->quirks & AAC_QUIRK_SCSI_32)
++ ? aac_scsi_32_64
++ : aac_scsi_64)
+ : aac_scsi_32;
+ if (dev->raw_io_interface) {
+ dev->a_ops.adapter_bounds = (dev->raw_io_64)
+@@ -1393,8 +1421,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
+ if (dev->dac_support) {
+ dev->a_ops.adapter_read = aac_read_block64;
+ dev->a_ops.adapter_write = aac_write_block64;
+- /*
+- * 38 scatter gather elements
++ /*
++ * 38 scatter gather elements
+ */
+ dev->scsi_host_ptr->sg_tablesize =
+ (dev->max_fib_size -
+@@ -1498,9 +1526,8 @@ static void io_callback(void *context, struct fib * fibptr)
+ ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
+ 0, 0);
+ memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
+- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
+- ? sizeof(scsicmd->sense_buffer)
+- : sizeof(dev->fsa_dev[cid].sense_data));
++ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ }
+ aac_fib_complete(fibptr);
+ aac_fib_free(fibptr);
+@@ -1524,7 +1551,7 @@ static int aac_read(struct scsi_cmnd * scsicmd)
+ case READ_6:
+ dprintk((KERN_DEBUG "aachba: received a read(6) command on id %d.\n", scmd_id(scsicmd)));
-- device = (struct dasd_device *) queue->queuedata;
-- spin_lock(get_ccwdev_lock(device->cdev));
-+ block = queue->queuedata;
-+ spin_lock(&block->queue_lock);
- /* Get new request from the block device request queue */
-- __dasd_process_blk_queue(device);
-+ __dasd_process_request_queue(block);
- /* Now check if the head of the ccw queue needs to be started. */
-- __dasd_start_head(device);
-- spin_unlock(get_ccwdev_lock(device->cdev));
-+ __dasd_block_start_head(block);
-+ spin_unlock(&block->queue_lock);
- }
+- lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
++ lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
+ (scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3];
+ count = scsicmd->cmnd[4];
- /*
- * Allocate and initialize request queue and default I/O scheduler.
- */
--static int
--dasd_alloc_queue(struct dasd_device * device)
-+static int dasd_alloc_queue(struct dasd_block *block)
- {
- int rc;
+@@ -1534,32 +1561,32 @@ static int aac_read(struct scsi_cmnd * scsicmd)
+ case READ_16:
+ dprintk((KERN_DEBUG "aachba: received a read(16) command on id %d.\n", scmd_id(scsicmd)));
-- device->request_queue = blk_init_queue(do_dasd_request,
-- &device->request_queue_lock);
-- if (device->request_queue == NULL)
-+ block->request_queue = blk_init_queue(do_dasd_request,
-+ &block->request_queue_lock);
-+ if (block->request_queue == NULL)
- return -ENOMEM;
+- lba = ((u64)scsicmd->cmnd[2] << 56) |
+- ((u64)scsicmd->cmnd[3] << 48) |
++ lba = ((u64)scsicmd->cmnd[2] << 56) |
++ ((u64)scsicmd->cmnd[3] << 48) |
+ ((u64)scsicmd->cmnd[4] << 40) |
+ ((u64)scsicmd->cmnd[5] << 32) |
+- ((u64)scsicmd->cmnd[6] << 24) |
++ ((u64)scsicmd->cmnd[6] << 24) |
+ (scsicmd->cmnd[7] << 16) |
+ (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
+- count = (scsicmd->cmnd[10] << 24) |
++ count = (scsicmd->cmnd[10] << 24) |
+ (scsicmd->cmnd[11] << 16) |
+ (scsicmd->cmnd[12] << 8) | scsicmd->cmnd[13];
+ break;
+ case READ_12:
+ dprintk((KERN_DEBUG "aachba: received a read(12) command on id %d.\n", scmd_id(scsicmd)));
-- device->request_queue->queuedata = device;
-+ block->request_queue->queuedata = block;
+- lba = ((u64)scsicmd->cmnd[2] << 24) |
++ lba = ((u64)scsicmd->cmnd[2] << 24) |
+ (scsicmd->cmnd[3] << 16) |
+- (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
+- count = (scsicmd->cmnd[6] << 24) |
++ (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
++ count = (scsicmd->cmnd[6] << 24) |
+ (scsicmd->cmnd[7] << 16) |
+- (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
++ (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
+ break;
+ default:
+ dprintk((KERN_DEBUG "aachba: received a read(10) command on id %d.\n", scmd_id(scsicmd)));
-- elevator_exit(device->request_queue->elevator);
-- rc = elevator_init(device->request_queue, "deadline");
-+ elevator_exit(block->request_queue->elevator);
-+ rc = elevator_init(block->request_queue, "deadline");
- if (rc) {
-- blk_cleanup_queue(device->request_queue);
-+ blk_cleanup_queue(block->request_queue);
- return rc;
+- lba = ((u64)scsicmd->cmnd[2] << 24) |
+- (scsicmd->cmnd[3] << 16) |
++ lba = ((u64)scsicmd->cmnd[2] << 24) |
++ (scsicmd->cmnd[3] << 16) |
+ (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
+ count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
+ break;
+@@ -1584,7 +1611,7 @@ static int aac_read(struct scsi_cmnd * scsicmd)
+ scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
+ return 0;
}
- return 0;
-@@ -1780,79 +1972,76 @@ dasd_alloc_queue(struct dasd_device * device)
- /*
- * Allocate and initialize request queue.
- */
--static void
--dasd_setup_queue(struct dasd_device * device)
-+static void dasd_setup_queue(struct dasd_block *block)
- {
- int max;
+-
++
+ printk(KERN_WARNING "aac_read: aac_fib_send failed with status: %d.\n", status);
+ /*
+ * For some reason, the Fib didn't queue, return QUEUE_FULL
+@@ -1619,11 +1646,11 @@ static int aac_write(struct scsi_cmnd * scsicmd)
+ } else if (scsicmd->cmnd[0] == WRITE_16) { /* 16 byte command */
+ dprintk((KERN_DEBUG "aachba: received a write(16) command on id %d.\n", scmd_id(scsicmd)));
-- blk_queue_hardsect_size(device->request_queue, device->bp_block);
-- max = device->discipline->max_blocks << device->s2b_shift;
-- blk_queue_max_sectors(device->request_queue, max);
-- blk_queue_max_phys_segments(device->request_queue, -1L);
-- blk_queue_max_hw_segments(device->request_queue, -1L);
-- blk_queue_max_segment_size(device->request_queue, -1L);
-- blk_queue_segment_boundary(device->request_queue, -1L);
-- blk_queue_ordered(device->request_queue, QUEUE_ORDERED_TAG, NULL);
-+ blk_queue_hardsect_size(block->request_queue, block->bp_block);
-+ max = block->base->discipline->max_blocks << block->s2b_shift;
-+ blk_queue_max_sectors(block->request_queue, max);
-+ blk_queue_max_phys_segments(block->request_queue, -1L);
-+ blk_queue_max_hw_segments(block->request_queue, -1L);
-+ blk_queue_max_segment_size(block->request_queue, -1L);
-+ blk_queue_segment_boundary(block->request_queue, -1L);
-+ blk_queue_ordered(block->request_queue, QUEUE_ORDERED_DRAIN, NULL);
- }
+- lba = ((u64)scsicmd->cmnd[2] << 56) |
++ lba = ((u64)scsicmd->cmnd[2] << 56) |
+ ((u64)scsicmd->cmnd[3] << 48) |
+ ((u64)scsicmd->cmnd[4] << 40) |
+ ((u64)scsicmd->cmnd[5] << 32) |
+- ((u64)scsicmd->cmnd[6] << 24) |
++ ((u64)scsicmd->cmnd[6] << 24) |
+ (scsicmd->cmnd[7] << 16) |
+ (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
+ count = (scsicmd->cmnd[10] << 24) | (scsicmd->cmnd[11] << 16) |
+@@ -1712,8 +1739,8 @@ static void synchronize_callback(void *context, struct fib *fibptr)
+ ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
+ 0, 0);
+ memcpy(cmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
+- min(sizeof(dev->fsa_dev[cid].sense_data),
+- sizeof(cmd->sense_buffer)));
++ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ }
- /*
- * Deactivate and free request queue.
+ aac_fib_complete(fibptr);
+@@ -1798,7 +1825,7 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
+ if (active)
+ return SCSI_MLQUEUE_DEVICE_BUSY;
+
+- aac = (struct aac_dev *)scsicmd->device->host->hostdata;
++ aac = (struct aac_dev *)sdev->host->hostdata;
+ if (aac->in_reset)
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+@@ -1850,14 +1877,14 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
+ * Emulate a SCSI command and queue the required request for the
+ * aacraid firmware.
*/
--static void
--dasd_free_queue(struct dasd_device * device)
-+static void dasd_free_queue(struct dasd_block *block)
+-
++
+ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
{
-- if (device->request_queue) {
-- blk_cleanup_queue(device->request_queue);
-- device->request_queue = NULL;
-+ if (block->request_queue) {
-+ blk_cleanup_queue(block->request_queue);
-+ block->request_queue = NULL;
+ u32 cid;
+ struct Scsi_Host *host = scsicmd->device->host;
+ struct aac_dev *dev = (struct aac_dev *)host->hostdata;
+ struct fsa_dev_info *fsa_dev_ptr = dev->fsa_dev;
+-
++
+ if (fsa_dev_ptr == NULL)
+ return -1;
+ /*
+@@ -1898,7 +1925,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ }
+ }
+ } else { /* check for physical non-dasd devices */
+- if ((dev->nondasd_support == 1) || expose_physicals) {
++ if (dev->nondasd_support || expose_physicals ||
++ dev->jbod) {
+ if (dev->in_reset)
+ return -1;
+ return aac_send_srb_fib(scsicmd);
+@@ -1913,7 +1941,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ * else Command for the controller itself
+ */
+ else if ((scsicmd->cmnd[0] != INQUIRY) && /* only INQUIRY & TUR cmnd supported for controller */
+- (scsicmd->cmnd[0] != TEST_UNIT_READY))
++ (scsicmd->cmnd[0] != TEST_UNIT_READY))
+ {
+ dprintk((KERN_WARNING "Only INQUIRY & TUR command supported for controller, rcvd = 0x%x.\n", scsicmd->cmnd[0]));
+ scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
+@@ -1922,9 +1950,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ SENCODE_INVALID_COMMAND,
+ ASENCODE_INVALID_COMMAND, 0, 0, 0, 0);
+ memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
+- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
+- ? sizeof(scsicmd->sense_buffer)
+- : sizeof(dev->fsa_dev[cid].sense_data));
++ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ scsicmd->scsi_done(scsicmd);
+ return 0;
+ }
+@@ -1939,7 +1966,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ dprintk((KERN_DEBUG "INQUIRY command, ID: %d.\n", cid));
+ memset(&inq_data, 0, sizeof (struct inquiry_data));
+
+- if (scsicmd->cmnd[1] & 0x1 ) {
++ if (scsicmd->cmnd[1] & 0x1) {
+ char *arr = (char *)&inq_data;
+
+ /* EVPD bit set */
+@@ -1974,10 +2001,9 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ ASENCODE_NO_SENSE, 0, 7, 2, 0);
+ memcpy(scsicmd->sense_buffer,
+ &dev->fsa_dev[cid].sense_data,
+- (sizeof(dev->fsa_dev[cid].sense_data) >
+- sizeof(scsicmd->sense_buffer))
+- ? sizeof(scsicmd->sense_buffer)
+- : sizeof(dev->fsa_dev[cid].sense_data));
++ min_t(size_t,
++ sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ }
+ scsicmd->scsi_done(scsicmd);
+ return 0;
+@@ -2092,7 +2118,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ mode_buf[2] = 0; /* Device-specific param,
+ bit 8: 0/1 = write enabled/protected
+ bit 4: 0/1 = FUA enabled */
+- if (dev->raw_io_interface)
++ if (dev->raw_io_interface && ((aac_cache & 5) != 1))
+ mode_buf[2] = 0x10;
+ mode_buf[3] = 0; /* Block descriptor length */
+ if (((scsicmd->cmnd[2] & 0x3f) == 8) ||
+@@ -2100,7 +2126,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ mode_buf[0] = 6;
+ mode_buf[4] = 8;
+ mode_buf[5] = 1;
+- mode_buf[6] = 0x04; /* WCE */
++ mode_buf[6] = ((aac_cache & 6) == 2)
++ ? 0 : 0x04; /* WCE */
+ mode_buf_length = 7;
+ if (mode_buf_length > scsicmd->cmnd[4])
+ mode_buf_length = scsicmd->cmnd[4];
+@@ -2123,7 +2150,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ mode_buf[3] = 0; /* Device-specific param,
+ bit 8: 0/1 = write enabled/protected
+ bit 4: 0/1 = FUA enabled */
+- if (dev->raw_io_interface)
++ if (dev->raw_io_interface && ((aac_cache & 5) != 1))
+ mode_buf[3] = 0x10;
+ mode_buf[4] = 0; /* reserved */
+ mode_buf[5] = 0; /* reserved */
+@@ -2134,7 +2161,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ mode_buf[1] = 9;
+ mode_buf[8] = 8;
+ mode_buf[9] = 1;
+- mode_buf[10] = 0x04; /* WCE */
++ mode_buf[10] = ((aac_cache & 6) == 2)
++ ? 0 : 0x04; /* WCE */
+ mode_buf_length = 11;
+ if (mode_buf_length > scsicmd->cmnd[8])
+ mode_buf_length = scsicmd->cmnd[8];
+@@ -2179,7 +2207,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ return 0;
}
- }
- /*
- * Flush request on the request queue.
- */
--static void
--dasd_flush_request_queue(struct dasd_device * device)
-+static void dasd_flush_request_queue(struct dasd_block *block)
- {
- struct request *req;
+- switch (scsicmd->cmnd[0])
++ switch (scsicmd->cmnd[0])
+ {
+ case READ_6:
+ case READ_10:
+@@ -2192,11 +2220,11 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ * corresponds to a container. Needed to convert
+ * containers to /dev/sd device names
+ */
+-
++
+ if (scsicmd->request->rq_disk)
+ strlcpy(fsa_dev_ptr[cid].devname,
+ scsicmd->request->rq_disk->disk_name,
+- min(sizeof(fsa_dev_ptr[cid].devname),
++ min(sizeof(fsa_dev_ptr[cid].devname),
+ sizeof(scsicmd->request->rq_disk->disk_name) + 1));
-- if (!device->request_queue)
-+ if (!block->request_queue)
- return;
+ return aac_read(scsicmd);
+@@ -2210,9 +2238,16 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ return aac_write(scsicmd);
-- spin_lock_irq(&device->request_queue_lock);
-- while ((req = elv_next_request(device->request_queue))) {
-+ spin_lock_irq(&block->request_queue_lock);
-+ while ((req = elv_next_request(block->request_queue))) {
- blkdev_dequeue_request(req);
-- dasd_end_request(req, 0);
-+ dasd_end_request(req, -EIO);
+ case SYNCHRONIZE_CACHE:
++ if (((aac_cache & 6) == 6) && dev->cache_protected) {
++ scsicmd->result = DID_OK << 16 |
++ COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
++ scsicmd->scsi_done(scsicmd);
++ return 0;
++ }
+ /* Issue FIB to tell Firmware to flush it's cache */
+- return aac_synchronize(scsicmd);
+-
++ if ((aac_cache & 6) != 2)
++ return aac_synchronize(scsicmd);
++ /* FALLTHRU */
+ default:
+ /*
+ * Unhandled commands
+@@ -2223,9 +2258,9 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ ILLEGAL_REQUEST, SENCODE_INVALID_COMMAND,
+ ASENCODE_INVALID_COMMAND, 0, 0, 0, 0);
+ memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
+- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
+- ? sizeof(scsicmd->sense_buffer)
+- : sizeof(dev->fsa_dev[cid].sense_data));
++ min_t(size_t,
++ sizeof(dev->fsa_dev[cid].sense_data),
++ SCSI_SENSE_BUFFERSIZE));
+ scsicmd->scsi_done(scsicmd);
+ return 0;
}
-- spin_unlock_irq(&device->request_queue_lock);
-+ spin_unlock_irq(&block->request_queue_lock);
- }
+@@ -2243,7 +2278,7 @@ static int query_disk(struct aac_dev *dev, void __user *arg)
+ return -EFAULT;
+ if (qd.cnum == -1)
+ qd.cnum = qd.id;
+- else if ((qd.bus == -1) && (qd.id == -1) && (qd.lun == -1))
++ else if ((qd.bus == -1) && (qd.id == -1) && (qd.lun == -1))
+ {
+ if (qd.cnum < 0 || qd.cnum >= dev->maximum_num_containers)
+ return -EINVAL;
+@@ -2370,7 +2405,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
--static int
--dasd_open(struct inode *inp, struct file *filp)
-+static int dasd_open(struct inode *inp, struct file *filp)
- {
- struct gendisk *disk = inp->i_bdev->bd_disk;
-- struct dasd_device *device = disk->private_data;
-+ struct dasd_block *block = disk->private_data;
-+ struct dasd_device *base = block->base;
- int rc;
+ scsicmd->sense_buffer[0] = '\0'; /* Initialize sense valid flag to false */
+ /*
+- * Calculate resid for sg
++ * Calculate resid for sg
+ */
-- atomic_inc(&device->open_count);
-- if (test_bit(DASD_FLAG_OFFLINE, &device->flags)) {
-+ atomic_inc(&block->open_count);
-+ if (test_bit(DASD_FLAG_OFFLINE, &base->flags)) {
- rc = -ENODEV;
- goto unlock;
+ scsi_set_resid(scsicmd, scsi_bufflen(scsicmd)
+@@ -2385,10 +2420,8 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
+ if (le32_to_cpu(srbreply->status) != ST_OK){
+ int len;
+ printk(KERN_WARNING "aac_srb_callback: srb failed, status = %d\n", le32_to_cpu(srbreply->status));
+- len = (le32_to_cpu(srbreply->sense_data_size) >
+- sizeof(scsicmd->sense_buffer)) ?
+- sizeof(scsicmd->sense_buffer) :
+- le32_to_cpu(srbreply->sense_data_size);
++ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
++ SCSI_SENSE_BUFFERSIZE);
+ scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
+ memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
}
-
-- if (!try_module_get(device->discipline->owner)) {
-+ if (!try_module_get(base->discipline->owner)) {
- rc = -EINVAL;
- goto unlock;
+@@ -2412,7 +2445,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
+ case WRITE_12:
+ case READ_16:
+ case WRITE_16:
+- if(le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow ) {
++ if (le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow) {
+ printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
+ } else {
+ printk(KERN_WARNING"aacraid: SCSI CMD Data Overrun\n");
+@@ -2481,26 +2514,23 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
+ printk("aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
+ le32_to_cpu(srbreply->srb_status) & 0x3F,
+ aac_get_status_string(
+- le32_to_cpu(srbreply->srb_status) & 0x3F),
+- scsicmd->cmnd[0],
++ le32_to_cpu(srbreply->srb_status) & 0x3F),
++ scsicmd->cmnd[0],
+ le32_to_cpu(srbreply->scsi_status));
+ #endif
+ scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8;
+ break;
}
-
- if (dasd_probeonly) {
-- DEV_MESSAGE(KERN_INFO, device, "%s",
-+ DEV_MESSAGE(KERN_INFO, base, "%s",
- "No access to device due to probeonly mode");
- rc = -EPERM;
- goto out;
+- if (le32_to_cpu(srbreply->scsi_status) == 0x02 ){ // Check Condition
++ if (le32_to_cpu(srbreply->scsi_status) == SAM_STAT_CHECK_CONDITION) {
+ int len;
+ scsicmd->result |= SAM_STAT_CHECK_CONDITION;
+- len = (le32_to_cpu(srbreply->sense_data_size) >
+- sizeof(scsicmd->sense_buffer)) ?
+- sizeof(scsicmd->sense_buffer) :
+- le32_to_cpu(srbreply->sense_data_size);
++ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
++ SCSI_SENSE_BUFFERSIZE);
+ #ifdef AAC_DETAILED_STATUS_INFO
+ printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
+ le32_to_cpu(srbreply->status), len);
+ #endif
+ memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
+-
}
+ /*
+ * OR in the scsi status (already shifted up a bit)
+@@ -2517,7 +2547,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
+ * aac_send_scb_fib
+ * @scsicmd: the scsi command block
+ *
+- * This routine will form a FIB and fill in the aac_srb from the
++ * This routine will form a FIB and fill in the aac_srb from the
+ * scsicmd passed in.
+ */
-- if (device->state <= DASD_STATE_BASIC) {
-- DBF_DEV_EVENT(DBF_ERR, device, " %s",
-+ if (base->state <= DASD_STATE_BASIC) {
-+ DBF_DEV_EVENT(DBF_ERR, base, " %s",
- " Cannot open unrecognized device");
- rc = -ENODEV;
- goto out;
-@@ -1861,41 +2050,41 @@ dasd_open(struct inode *inp, struct file *filp)
- return 0;
-
- out:
-- module_put(device->discipline->owner);
-+ module_put(base->discipline->owner);
- unlock:
-- atomic_dec(&device->open_count);
-+ atomic_dec(&block->open_count);
- return rc;
- }
-
--static int
--dasd_release(struct inode *inp, struct file *filp)
-+static int dasd_release(struct inode *inp, struct file *filp)
- {
- struct gendisk *disk = inp->i_bdev->bd_disk;
-- struct dasd_device *device = disk->private_data;
-+ struct dasd_block *block = disk->private_data;
-
-- atomic_dec(&device->open_count);
-- module_put(device->discipline->owner);
-+ atomic_dec(&block->open_count);
-+ module_put(block->base->discipline->owner);
- return 0;
- }
+@@ -2731,7 +2761,7 @@ static struct aac_srb_status_info srb_status_info[] = {
+ { SRB_STATUS_ERROR_RECOVERY, "Error Recovery"},
+ { SRB_STATUS_NOT_STARTED, "Not Started"},
+ { SRB_STATUS_NOT_IN_USE, "Not In Use"},
+- { SRB_STATUS_FORCE_ABORT, "Force Abort"},
++ { SRB_STATUS_FORCE_ABORT, "Force Abort"},
+ { SRB_STATUS_DOMAIN_VALIDATION_FAIL,"Domain Validation Failure"},
+ { 0xff, "Unknown Error"}
+ };
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 9abba8b..3195d29 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -1,4 +1,4 @@
+-#if (!defined(dprintk))
++#ifndef dprintk
+ # define dprintk(x)
+ #endif
+ /* eg: if (nblank(dprintk(x))) */
+@@ -12,7 +12,7 @@
+ *----------------------------------------------------------------------------*/
+ #ifndef AAC_DRIVER_BUILD
+-# define AAC_DRIVER_BUILD 2449
++# define AAC_DRIVER_BUILD 2455
+ # define AAC_DRIVER_BRANCH "-ms"
+ #endif
+ #define MAXIMUM_NUM_CONTAINERS 32
+@@ -50,9 +50,9 @@ struct diskparm
/*
- * Return disk geometry.
+ * Firmware constants
*/
--static int
--dasd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
-+static int dasd_getgeo(struct block_device *bdev, struct hd_geometry *geo)
- {
-- struct dasd_device *device;
-+ struct dasd_block *block;
-+ struct dasd_device *base;
-
-- device = bdev->bd_disk->private_data;
-- if (!device)
-+ block = bdev->bd_disk->private_data;
-+ base = block->base;
-+ if (!block)
- return -ENODEV;
+-
++
+ #define CT_NONE 0
+-#define CT_OK 218
++#define CT_OK 218
+ #define FT_FILESYS 8 /* ADAPTEC's "FSA"(tm) filesystem */
+ #define FT_DRIVE 9 /* physical disk - addressable in scsi by bus/id/lun */
-- if (!device->discipline ||
-- !device->discipline->fill_geometry)
-+ if (!base->discipline ||
-+ !base->discipline->fill_geometry)
- return -EINVAL;
+@@ -107,12 +107,12 @@ struct user_sgentryraw {
-- device->discipline->fill_geometry(device, geo);
-- geo->start = get_start_sect(bdev) >> device->s2b_shift;
-+ base->discipline->fill_geometry(block, geo);
-+ geo->start = get_start_sect(bdev) >> block->s2b_shift;
- return 0;
- }
+ struct sgmap {
+ __le32 count;
+- struct sgentry sg[1];
++ struct sgentry sg[1];
+ };
-@@ -1909,6 +2098,9 @@ dasd_device_operations = {
- .getgeo = dasd_getgeo,
+ struct user_sgmap {
+ u32 count;
+- struct user_sgentry sg[1];
++ struct user_sgentry sg[1];
};
-+/*******************************************************************************
-+ * end of block device operations
-+ */
+ struct sgmap64 {
+@@ -137,18 +137,18 @@ struct user_sgmapraw {
- static void
- dasd_exit(void)
-@@ -1937,9 +2129,8 @@ dasd_exit(void)
- * Initial attempt at a probe function. this can be simplified once
- * the other detection code is gone.
- */
--int
--dasd_generic_probe (struct ccw_device *cdev,
-- struct dasd_discipline *discipline)
-+int dasd_generic_probe(struct ccw_device *cdev,
-+ struct dasd_discipline *discipline)
+ struct creation_info
{
- int ret;
+- u8 buildnum; /* e.g., 588 */
+- u8 usec; /* e.g., 588 */
+- u8 via; /* e.g., 1 = FSU,
+- * 2 = API
++ u8 buildnum; /* e.g., 588 */
++ u8 usec; /* e.g., 588 */
++ u8 via; /* e.g., 1 = FSU,
++ * 2 = API
+ */
+- u8 year; /* e.g., 1997 = 97 */
++ u8 year; /* e.g., 1997 = 97 */
+ __le32 date; /*
+- * unsigned Month :4; // 1 - 12
+- * unsigned Day :6; // 1 - 32
+- * unsigned Hour :6; // 0 - 23
+- * unsigned Minute :6; // 0 - 60
+- * unsigned Second :6; // 0 - 60
++ * unsigned Month :4; // 1 - 12
++ * unsigned Day :6; // 1 - 32
++ * unsigned Hour :6; // 0 - 23
++ * unsigned Minute :6; // 0 - 60
++ * unsigned Second :6; // 0 - 60
+ */
+ __le32 serial[2]; /* e.g., 0x1DEADB0BFAFAF001 */
+ };
+@@ -184,7 +184,7 @@ struct creation_info
+ /*
+ * Set the queues on a 16 byte alignment
+ */
+-
++
+ #define QUEUE_ALIGNMENT 16
-@@ -1969,19 +2160,20 @@ dasd_generic_probe (struct ccw_device *cdev,
- ret = ccw_device_set_online(cdev);
- if (ret)
- printk(KERN_WARNING
-- "dasd_generic_probe: could not initially online "
-- "ccw-device %s\n", cdev->dev.bus_id);
-- return ret;
-+ "dasd_generic_probe: could not initially "
-+ "online ccw-device %s; return code: %d\n",
-+ cdev->dev.bus_id, ret);
-+ return 0;
- }
+ /*
+@@ -203,9 +203,9 @@ struct aac_entry {
+ * The adapter assumes the ProducerIndex and ConsumerIndex are grouped
+ * adjacently and in that order.
+ */
+-
++
+ struct aac_qhdr {
+- __le64 header_addr;/* Address to hand the adapter to access
++ __le64 header_addr;/* Address to hand the adapter to access
+ to this queue head */
+ __le32 *producer; /* The producer index for this queue (host address) */
+ __le32 *consumer; /* The consumer index for this queue (host address) */
+@@ -215,7 +215,7 @@ struct aac_qhdr {
+ * Define all the events which the adapter would like to notify
+ * the host of.
+ */
+-
++
+ #define HostNormCmdQue 1 /* Change in host normal priority command queue */
+ #define HostHighCmdQue 2 /* Change in host high priority command queue */
+ #define HostNormRespQue 3 /* Change in host normal priority response queue */
+@@ -286,17 +286,17 @@ struct aac_fibhdr {
+ u8 StructType; /* Type FIB */
+ u8 Flags; /* Flags for FIB */
+ __le16 Size; /* Size of this FIB in bytes */
+- __le16 SenderSize; /* Size of the FIB in the sender
++ __le16 SenderSize; /* Size of the FIB in the sender
+ (for response sizing) */
+ __le32 SenderFibAddress; /* Host defined data in the FIB */
+- __le32 ReceiverFibAddress;/* Logical address of this FIB for
++ __le32 ReceiverFibAddress;/* Logical address of this FIB for
+ the adapter */
+ u32 SenderData; /* Place holder for the sender to store data */
+ union {
+ struct {
+- __le32 _ReceiverTimeStart; /* Timestamp for
++ __le32 _ReceiverTimeStart; /* Timestamp for
+ receipt of fib */
+- __le32 _ReceiverTimeDone; /* Timestamp for
++ __le32 _ReceiverTimeDone; /* Timestamp for
+ completion of fib */
+ } _s;
+ } _u;
+@@ -311,7 +311,7 @@ struct hw_fib {
+ * FIB commands
+ */
+-#define TestCommandResponse 1
++#define TestCommandResponse 1
+ #define TestAdapterCommand 2
/*
- * This will one day be called from a global not_oper handler.
- * It is also used by driver_unregister during module unload.
+ * Lowlevel and comm commands
+@@ -350,10 +350,6 @@ struct hw_fib {
+ #define ContainerCommand64 501
+ #define ContainerRawIo 502
+ /*
+- * Cluster Commands
+- */
+-#define ClusterCommand 550
+-/*
+ * Scsi Port commands (scsi passthrough)
+ */
+ #define ScsiPortCommand 600
+@@ -375,19 +371,19 @@ struct hw_fib {
*/
--void
--dasd_generic_remove (struct ccw_device *cdev)
-+void dasd_generic_remove(struct ccw_device *cdev)
- {
- struct dasd_device *device;
-+ struct dasd_block *block;
- cdev->handler = NULL;
+ enum fib_xfer_state {
+- HostOwned = (1<<0),
+- AdapterOwned = (1<<1),
+- FibInitialized = (1<<2),
+- FibEmpty = (1<<3),
+- AllocatedFromPool = (1<<4),
+- SentFromHost = (1<<5),
+- SentFromAdapter = (1<<6),
+- ResponseExpected = (1<<7),
+- NoResponseExpected = (1<<8),
+- AdapterProcessed = (1<<9),
+- HostProcessed = (1<<10),
+- HighPriority = (1<<11),
+- NormalPriority = (1<<12),
++ HostOwned = (1<<0),
++ AdapterOwned = (1<<1),
++ FibInitialized = (1<<2),
++ FibEmpty = (1<<3),
++ AllocatedFromPool = (1<<4),
++ SentFromHost = (1<<5),
++ SentFromAdapter = (1<<6),
++ ResponseExpected = (1<<7),
++ NoResponseExpected = (1<<8),
++ AdapterProcessed = (1<<9),
++ HostProcessed = (1<<10),
++ HighPriority = (1<<11),
++ NormalPriority = (1<<12),
+ Async = (1<<13),
+ AsyncIo = (1<<13), // rpbfix: remove with new regime
+ PageFileIo = (1<<14), // rpbfix: remove with new regime
+@@ -420,7 +416,7 @@ struct aac_init
+ __le32 AdapterFibAlign;
+ __le32 printfbuf;
+ __le32 printfbufsiz;
+- __le32 HostPhysMemPages; /* number of 4k pages of host
++ __le32 HostPhysMemPages; /* number of 4k pages of host
+ physical memory */
+ __le32 HostElapsedSeconds; /* number of seconds since 1970. */
+ /*
+@@ -481,7 +477,7 @@ struct adapter_ops
-@@ -2001,7 +2193,15 @@ dasd_generic_remove (struct ccw_device *cdev)
- */
- dasd_set_target_state(device, DASD_STATE_NEW);
- /* dasd_delete_device destroys the device reference. */
-+ block = device->block;
-+ device->block = NULL;
- dasd_delete_device(device);
-+ /*
-+ * life cycle of block is bound to device, so delete it after
-+ * device was safely removed
-+ */
-+ if (block)
-+ dasd_free_block(block);
- }
+ struct aac_driver_ident
+ {
+- int (*init)(struct aac_dev *dev);
++ int (*init)(struct aac_dev *dev);
+ char * name;
+ char * vname;
+ char * model;
+@@ -489,7 +485,7 @@ struct aac_driver_ident
+ int quirks;
+ };
+ /*
+- * Some adapter firmware needs communication memory
++ * Some adapter firmware needs communication memory
+ * below 2gig. This tells the init function to set the
+ * dma mask such that fib memory will be allocated where the
+ * adapter firmware can get to it.
+@@ -521,33 +517,39 @@ struct aac_driver_ident
+ #define AAC_QUIRK_17SG 0x0010
/*
-@@ -2009,10 +2209,8 @@ dasd_generic_remove (struct ccw_device *cdev)
- * the device is detected for the first time and is supposed to be used
- * or the user has started activation through sysfs.
++ * Some adapter firmware does not support 64 bit scsi passthrough
++ * commands.
++ */
++#define AAC_QUIRK_SCSI_32 0x0020
++
++/*
+ * The adapter interface specs all queues to be located in the same
+ * physically contigous block. The host structure that defines the
+ * commuication queues will assume they are each a separate physically
+ * contigous memory region that will support them all being one big
+- * contigous block.
++ * contigous block.
+ * There is a command and response queue for each level and direction of
+ * commuication. These regions are accessed by both the host and adapter.
*/
--int
--dasd_generic_set_online (struct ccw_device *cdev,
-- struct dasd_discipline *base_discipline)
--
-+int dasd_generic_set_online(struct ccw_device *cdev,
-+ struct dasd_discipline *base_discipline)
- {
- struct dasd_discipline *discipline;
- struct dasd_device *device;
-@@ -2048,6 +2246,7 @@ dasd_generic_set_online (struct ccw_device *cdev,
- device->base_discipline = base_discipline;
- device->discipline = discipline;
+-
++
+ struct aac_queue {
+- u64 logical; /*address we give the adapter */
++ u64 logical; /*address we give the adapter */
+ struct aac_entry *base; /*system virtual address */
+- struct aac_qhdr headers; /*producer,consumer q headers*/
+- u32 entries; /*Number of queue entries */
++ struct aac_qhdr headers; /*producer,consumer q headers*/
++ u32 entries; /*Number of queue entries */
+ wait_queue_head_t qfull; /*Event to wait on if q full */
+ wait_queue_head_t cmdready; /*Cmd ready from the adapter */
+- /* This is only valid for adapter to host command queues. */
+- spinlock_t *lock; /* Spinlock for this queue must take this lock before accessing the lock */
++ /* This is only valid for adapter to host command queues. */
++ spinlock_t *lock; /* Spinlock for this queue must take this lock before accessing the lock */
+ spinlock_t lockdata; /* Actual lock (used only on one side of the lock) */
+- struct list_head cmdq; /* A queue of FIBs which need to be prcessed by the FS thread. This is */
+- /* only valid for command queues which receive entries from the adapter. */
++ struct list_head cmdq; /* A queue of FIBs which need to be prcessed by the FS thread. This is */
++ /* only valid for command queues which receive entries from the adapter. */
+ u32 numpending; /* Number of entries on outstanding queue. */
+ struct aac_dev * dev; /* Back pointer to adapter structure */
+ };
-+ /* check_device will allocate block device if necessary */
- rc = discipline->check_device(device);
- if (rc) {
- printk (KERN_WARNING
-@@ -2067,6 +2266,8 @@ dasd_generic_set_online (struct ccw_device *cdev,
- cdev->dev.bus_id);
- rc = -ENODEV;
- dasd_set_target_state(device, DASD_STATE_NEW);
-+ if (device->block)
-+ dasd_free_block(device->block);
- dasd_delete_device(device);
- } else
- pr_debug("dasd_generic device %s found\n",
-@@ -2081,10 +2282,10 @@ dasd_generic_set_online (struct ccw_device *cdev,
- return rc;
- }
+ /*
+- * Message queues. The order here is important, see also the
++ * Message queues. The order here is important, see also the
+ * queue type ordering
+ */
--int
--dasd_generic_set_offline (struct ccw_device *cdev)
-+int dasd_generic_set_offline(struct ccw_device *cdev)
- {
- struct dasd_device *device;
-+ struct dasd_block *block;
- int max_count, open_count;
+@@ -559,12 +561,12 @@ struct aac_queue_block
+ /*
+ * SaP1 Message Unit Registers
+ */
+-
++
+ struct sa_drawbridge_CSR {
+- /* Offset | Name */
++ /* Offset | Name */
+ __le32 reserved[10]; /* 00h-27h | Reserved */
+ u8 LUT_Offset; /* 28h | Lookup Table Offset */
+- u8 reserved1[3]; /* 29h-2bh | Reserved */
++ u8 reserved1[3]; /* 29h-2bh | Reserved */
+ __le32 LUT_Data; /* 2ch | Looup Table Data */
+ __le32 reserved2[26]; /* 30h-97h | Reserved */
+ __le16 PRICLEARIRQ; /* 98h | Primary Clear Irq */
+@@ -583,8 +585,8 @@ struct sa_drawbridge_CSR {
+ __le32 MAILBOX5; /* bch | Scratchpad 5 */
+ __le32 MAILBOX6; /* c0h | Scratchpad 6 */
+ __le32 MAILBOX7; /* c4h | Scratchpad 7 */
+- __le32 ROM_Setup_Data; /* c8h | Rom Setup and Data */
+- __le32 ROM_Control_Addr;/* cch | Rom Control and Address */
++ __le32 ROM_Setup_Data; /* c8h | Rom Setup and Data */
++ __le32 ROM_Control_Addr;/* cch | Rom Control and Address */
+ __le32 reserved3[12]; /* d0h-ffh | reserved */
+ __le32 LUT[64]; /* 100h-1ffh | Lookup Table Entries */
+ };
+@@ -597,7 +599,7 @@ struct sa_drawbridge_CSR {
+ #define Mailbox5 SaDbCSR.MAILBOX5
+ #define Mailbox6 SaDbCSR.MAILBOX6
+ #define Mailbox7 SaDbCSR.MAILBOX7
+-
++
+ #define DoorbellReg_p SaDbCSR.PRISETIRQ
+ #define DoorbellReg_s SaDbCSR.SECSETIRQ
+ #define DoorbellClrReg_p SaDbCSR.PRICLEARIRQ
+@@ -611,19 +613,19 @@ struct sa_drawbridge_CSR {
+ #define DOORBELL_5 0x0020
+ #define DOORBELL_6 0x0040
- device = dasd_device_from_cdev(cdev);
-@@ -2101,30 +2302,39 @@ dasd_generic_set_offline (struct ccw_device *cdev)
- * the blkdev_get in dasd_scan_partitions. We are only interested
- * in the other openers.
- */
-- max_count = device->bdev ? 0 : -1;
-- open_count = (int) atomic_read(&device->open_count);
-- if (open_count > max_count) {
-- if (open_count > 0)
-- printk (KERN_WARNING "Can't offline dasd device with "
-- "open count = %i.\n",
-- open_count);
-- else
-- printk (KERN_WARNING "%s",
-- "Can't offline dasd device due to internal "
-- "use\n");
-- clear_bit(DASD_FLAG_OFFLINE, &device->flags);
-- dasd_put_device(device);
-- return -EBUSY;
-+ if (device->block) {
-+ struct dasd_block *block = device->block;
-+ max_count = block->bdev ? 0 : -1;
-+ open_count = (int) atomic_read(&block->open_count);
-+ if (open_count > max_count) {
-+ if (open_count > 0)
-+ printk(KERN_WARNING "Can't offline dasd "
-+ "device with open count = %i.\n",
-+ open_count);
-+ else
-+ printk(KERN_WARNING "%s",
-+ "Can't offline dasd device due "
-+ "to internal use\n");
-+ clear_bit(DASD_FLAG_OFFLINE, &device->flags);
-+ dasd_put_device(device);
-+ return -EBUSY;
-+ }
- }
- dasd_set_target_state(device, DASD_STATE_NEW);
- /* dasd_delete_device destroys the device reference. */
-+ block = device->block;
-+ device->block = NULL;
- dasd_delete_device(device);
--
-+ /*
-+ * life cycle of block is bound to device, so delete it after
-+ * device was safely removed
-+ */
-+ if (block)
-+ dasd_free_block(block);
- return 0;
- }
+-
++
+ #define PrintfReady DOORBELL_5
+ #define PrintfDone DOORBELL_5
+-
++
+ struct sa_registers {
+ struct sa_drawbridge_CSR SaDbCSR; /* 98h - c4h */
+ };
+-
++
--int
--dasd_generic_notify(struct ccw_device *cdev, int event)
-+int dasd_generic_notify(struct ccw_device *cdev, int event)
- {
- struct dasd_device *device;
- struct dasd_ccw_req *cqr;
-@@ -2145,27 +2355,22 @@ dasd_generic_notify(struct ccw_device *cdev, int event)
- if (device->state < DASD_STATE_BASIC)
- break;
- /* Device is active. We want to keep it. */
-- if (test_bit(DASD_FLAG_DSC_ERROR, &device->flags)) {
-- list_for_each_entry(cqr, &device->ccw_queue, list)
-- if (cqr->status == DASD_CQR_IN_IO)
-- cqr->status = DASD_CQR_FAILED;
-- device->stopped |= DASD_STOPPED_DC_EIO;
-- } else {
-- list_for_each_entry(cqr, &device->ccw_queue, list)
-- if (cqr->status == DASD_CQR_IN_IO) {
-- cqr->status = DASD_CQR_QUEUED;
-- cqr->retries++;
-- }
-- device->stopped |= DASD_STOPPED_DC_WAIT;
-- dasd_set_timer(device, 0);
-- }
-- dasd_schedule_bh(device);
-+ list_for_each_entry(cqr, &device->ccw_queue, devlist)
-+ if (cqr->status == DASD_CQR_IN_IO) {
-+ cqr->status = DASD_CQR_QUEUED;
-+ cqr->retries++;
-+ }
-+ device->stopped |= DASD_STOPPED_DC_WAIT;
-+ dasd_device_clear_timer(device);
-+ dasd_schedule_device_bh(device);
- ret = 1;
- break;
- case CIO_OPER:
- /* FIXME: add a sanity check. */
-- device->stopped &= ~(DASD_STOPPED_DC_WAIT|DASD_STOPPED_DC_EIO);
-- dasd_schedule_bh(device);
-+ device->stopped &= ~DASD_STOPPED_DC_WAIT;
-+ dasd_schedule_device_bh(device);
-+ if (device->block)
-+ dasd_schedule_block_bh(device->block);
- ret = 1;
- break;
- }
-@@ -2195,7 +2400,8 @@ static struct dasd_ccw_req *dasd_generic_build_rdc(struct dasd_device *device,
- ccw->cda = (__u32)(addr_t)rdc_buffer;
- ccw->count = rdc_buffer_size;
+ #define Sa_MINIPORT_REVISION 1
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- cqr->expires = 10*HZ;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
- cqr->retries = 2;
-@@ -2217,13 +2423,12 @@ int dasd_generic_read_dev_chars(struct dasd_device *device, char *magic,
- return PTR_ERR(cqr);
+ #define sa_readw(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
+-#define sa_readl(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
++#define sa_readl(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
+ #define sa_writew(AEP, CSR, value) writew(value, &((AEP)->regs.sa->CSR))
+ #define sa_writel(AEP, CSR, value) writel(value, &((AEP)->regs.sa->CSR))
- ret = dasd_sleep_on(cqr);
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return ret;
- }
- EXPORT_SYMBOL_GPL(dasd_generic_read_dev_chars);
+@@ -640,21 +642,21 @@ struct rx_mu_registers {
+ __le32 IMRx[2]; /* 1310h | 10h | Inbound Message Registers */
+ __le32 OMRx[2]; /* 1318h | 18h | Outbound Message Registers */
+ __le32 IDR; /* 1320h | 20h | Inbound Doorbell Register */
+- __le32 IISR; /* 1324h | 24h | Inbound Interrupt
++ __le32 IISR; /* 1324h | 24h | Inbound Interrupt
+ Status Register */
+- __le32 IIMR; /* 1328h | 28h | Inbound Interrupt
+- Mask Register */
++ __le32 IIMR; /* 1328h | 28h | Inbound Interrupt
++ Mask Register */
+ __le32 ODR; /* 132Ch | 2Ch | Outbound Doorbell Register */
+- __le32 OISR; /* 1330h | 30h | Outbound Interrupt
++ __le32 OISR; /* 1330h | 30h | Outbound Interrupt
+ Status Register */
+- __le32 OIMR; /* 1334h | 34h | Outbound Interrupt
++ __le32 OIMR; /* 1334h | 34h | Outbound Interrupt
+ Mask Register */
+ __le32 reserved2; /* 1338h | 38h | Reserved */
+ __le32 reserved3; /* 133Ch | 3Ch | Reserved */
+ __le32 InboundQueue;/* 1340h | 40h | Inbound Queue Port relative to firmware */
+ __le32 OutboundQueue;/*1344h | 44h | Outbound Queue Port relative to firmware */
+- /* * Must access through ATU Inbound
+- Translation Window */
++ /* * Must access through ATU Inbound
++ Translation Window */
+ };
--static int __init
--dasd_init(void)
-+static int __init dasd_init(void)
- {
- int rc;
+ struct rx_inbound {
+@@ -710,12 +712,12 @@ struct rkt_registers {
+ typedef void (*fib_callback)(void *ctxt, struct fib *fibctx);
-@@ -2231,7 +2436,7 @@ dasd_init(void)
- init_waitqueue_head(&dasd_flush_wq);
+ struct aac_fib_context {
+- s16 type; // used for verification of structure
+- s16 size;
++ s16 type; // used for verification of structure
++ s16 size;
+ u32 unique; // unique value representing this context
+ ulong jiffies; // used for cleanup - dmb changed to ulong
+ struct list_head next; // used to link context's into a linked list
+- struct semaphore wait_sem; // this is used to wait for the next fib to arrive.
++ struct semaphore wait_sem; // this is used to wait for the next fib to arrive.
+ int wait; // Set to true when thread is in WaitForSingleObject
+ unsigned long count; // total number of FIBs on FibList
+ struct list_head fib_list; // this holds fibs and their attachd hw_fibs
+@@ -734,9 +736,9 @@ struct sense_data {
+ u8 EOM:1; /* End Of Medium - reserved for random access devices */
+ u8 filemark:1; /* Filemark - reserved for random access devices */
- /* register 'common' DASD debug area, used for all DBF_XXX calls */
-- dasd_debug_area = debug_register("dasd", 1, 2, 8 * sizeof (long));
-+ dasd_debug_area = debug_register("dasd", 1, 1, 8 * sizeof(long));
- if (dasd_debug_area == NULL) {
- rc = -ENOMEM;
- goto failed;
-@@ -2277,15 +2482,18 @@ EXPORT_SYMBOL(dasd_diag_discipline_pointer);
- EXPORT_SYMBOL(dasd_add_request_head);
- EXPORT_SYMBOL(dasd_add_request_tail);
- EXPORT_SYMBOL(dasd_cancel_req);
--EXPORT_SYMBOL(dasd_clear_timer);
-+EXPORT_SYMBOL(dasd_device_clear_timer);
-+EXPORT_SYMBOL(dasd_block_clear_timer);
- EXPORT_SYMBOL(dasd_enable_device);
- EXPORT_SYMBOL(dasd_int_handler);
- EXPORT_SYMBOL(dasd_kfree_request);
- EXPORT_SYMBOL(dasd_kick_device);
- EXPORT_SYMBOL(dasd_kmalloc_request);
--EXPORT_SYMBOL(dasd_schedule_bh);
-+EXPORT_SYMBOL(dasd_schedule_device_bh);
-+EXPORT_SYMBOL(dasd_schedule_block_bh);
- EXPORT_SYMBOL(dasd_set_target_state);
--EXPORT_SYMBOL(dasd_set_timer);
-+EXPORT_SYMBOL(dasd_device_set_timer);
-+EXPORT_SYMBOL(dasd_block_set_timer);
- EXPORT_SYMBOL(dasd_sfree_request);
- EXPORT_SYMBOL(dasd_sleep_on);
- EXPORT_SYMBOL(dasd_sleep_on_immediatly);
-@@ -2299,4 +2507,7 @@ EXPORT_SYMBOL_GPL(dasd_generic_remove);
- EXPORT_SYMBOL_GPL(dasd_generic_notify);
- EXPORT_SYMBOL_GPL(dasd_generic_set_online);
- EXPORT_SYMBOL_GPL(dasd_generic_set_offline);
--
-+EXPORT_SYMBOL_GPL(dasd_generic_handle_state_change);
-+EXPORT_SYMBOL_GPL(dasd_flush_device_queue);
-+EXPORT_SYMBOL_GPL(dasd_alloc_block);
-+EXPORT_SYMBOL_GPL(dasd_free_block);
-diff --git a/drivers/s390/block/dasd_3370_erp.c b/drivers/s390/block/dasd_3370_erp.c
-deleted file mode 100644
-index 1ddab89..0000000
---- a/drivers/s390/block/dasd_3370_erp.c
-+++ /dev/null
-@@ -1,84 +0,0 @@
--/*
-- * File...........: linux/drivers/s390/block/dasd_3370_erp.c
-- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
-- * Bugreports.to..: <Linux390 at de.ibm.com>
-- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
-- *
-- */
--
--#define PRINTK_HEADER "dasd_erp(3370)"
--
--#include "dasd_int.h"
--
--
--/*
-- * DASD_3370_ERP_EXAMINE
-- *
-- * DESCRIPTION
-- * Checks only for fatal/no/recover error.
-- * A detailed examination of the sense data is done later outside
-- * the interrupt handler.
-- *
-- * The logic is based on the 'IBM 3880 Storage Control Reference' manual
-- * 'Chapter 7. 3370 Sense Data'.
-- *
-- * RETURN VALUES
-- * dasd_era_none no error
-- * dasd_era_fatal for all fatal (unrecoverable errors)
-- * dasd_era_recover for all others.
-- */
--dasd_era_t
--dasd_3370_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
--{
-- char *sense = irb->ecw;
--
-- /* check for successful execution first */
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
-- if (sense[0] & 0x80) { /* CMD reject */
-- return dasd_era_fatal;
-- }
-- if (sense[0] & 0x40) { /* Drive offline */
-- return dasd_era_recover;
-- }
-- if (sense[0] & 0x20) { /* Bus out parity */
-- return dasd_era_recover;
-- }
-- if (sense[0] & 0x10) { /* equipment check */
-- if (sense[1] & 0x80) {
-- return dasd_era_fatal;
-- }
-- return dasd_era_recover;
-- }
-- if (sense[0] & 0x08) { /* data check */
-- if (sense[1] & 0x80) {
-- return dasd_era_fatal;
-- }
-- return dasd_era_recover;
-- }
-- if (sense[0] & 0x04) { /* overrun */
-- if (sense[1] & 0x80) {
-- return dasd_era_fatal;
-- }
-- return dasd_era_recover;
-- }
-- if (sense[1] & 0x40) { /* invalid blocksize */
-- return dasd_era_fatal;
-- }
-- if (sense[1] & 0x04) { /* file protected */
-- return dasd_era_recover;
-- }
-- if (sense[1] & 0x01) { /* operation incomplete */
-- return dasd_era_recover;
-- }
-- if (sense[2] & 0x80) { /* check data erroor */
-- return dasd_era_recover;
-- }
-- if (sense[2] & 0x10) { /* Env. data present */
-- return dasd_era_recover;
-- }
-- /* examine the 24 byte sense data */
-- return dasd_era_recover;
--
--} /* END dasd_3370_erp_examine */
-diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c
-index 5b7385e..c361ab6 100644
---- a/drivers/s390/block/dasd_3990_erp.c
-+++ b/drivers/s390/block/dasd_3990_erp.c
-@@ -26,158 +26,6 @@ struct DCTL_data {
+- u8 information[4]; /* for direct-access devices, contains the unsigned
+- * logical block address or residue associated with
+- * the sense key
++ u8 information[4]; /* for direct-access devices, contains the unsigned
++ * logical block address or residue associated with
++ * the sense key
+ */
+ u8 add_sense_len; /* number of additional sense bytes to follow this field */
+ u8 cmnd_info[4]; /* not used */
+@@ -746,7 +748,7 @@ struct sense_data {
+ u8 bit_ptr:3; /* indicates which byte of the CDB or parameter data
+ * was in error
+ */
+- u8 BPV:1; /* bit pointer valid (BPV): 1- indicates that
++ u8 BPV:1; /* bit pointer valid (BPV): 1- indicates that
+ * the bit_ptr field has valid value
+ */
+ u8 reserved2:2;
+@@ -780,24 +782,24 @@ struct fib {
+ /*
+ * The Adapter that this I/O is destined for.
+ */
+- struct aac_dev *dev;
++ struct aac_dev *dev;
+ /*
+ * This is the event the sendfib routine will wait on if the
+ * caller did not pass one and this is synch io.
+ */
+- struct semaphore event_wait;
++ struct semaphore event_wait;
+ spinlock_t event_lock;
- /*
- *****************************************************************************
-- * SECTION ERP EXAMINATION
-- *****************************************************************************
-- */
--
--/*
-- * DASD_3990_ERP_EXAMINE_24
-- *
-- * DESCRIPTION
-- * Checks only for fatal (unrecoverable) error.
-- * A detailed examination of the sense data is done later outside
-- * the interrupt handler.
-- *
-- * Each bit configuration leading to an action code 2 (Exit with
-- * programming error or unusual condition indication)
-- * are handled as fatal errors.
-- *
-- * All other configurations are handled as recoverable errors.
-- *
-- * RETURN VALUES
-- * dasd_era_fatal for all fatal (unrecoverable errors)
-- * dasd_era_recover for all others.
-- */
--static dasd_era_t
--dasd_3990_erp_examine_24(struct dasd_ccw_req * cqr, char *sense)
--{
--
-- struct dasd_device *device = cqr->device;
--
-- /* check for 'Command Reject' */
-- if ((sense[0] & SNS0_CMD_REJECT) &&
-- (!(sense[2] & SNS2_ENV_DATA_PRESENT))) {
--
-- DEV_MESSAGE(KERN_ERR, device, "%s",
-- "EXAMINE 24: Command Reject detected - "
-- "fatal error");
--
-- return dasd_era_fatal;
-- }
--
-- /* check for 'Invalid Track Format' */
-- if ((sense[1] & SNS1_INV_TRACK_FORMAT) &&
-- (!(sense[2] & SNS2_ENV_DATA_PRESENT))) {
--
-- DEV_MESSAGE(KERN_ERR, device, "%s",
-- "EXAMINE 24: Invalid Track Format detected "
-- "- fatal error");
--
-- return dasd_era_fatal;
-- }
--
-- /* check for 'No Record Found' */
-- if (sense[1] & SNS1_NO_REC_FOUND) {
--
-- /* FIXME: fatal error ?!? */
-- DEV_MESSAGE(KERN_ERR, device,
-- "EXAMINE 24: No Record Found detected %s",
-- device->state <= DASD_STATE_BASIC ?
-- " " : "- fatal error");
--
-- return dasd_era_fatal;
-- }
--
-- /* return recoverable for all others */
-- return dasd_era_recover;
--} /* END dasd_3990_erp_examine_24 */
--
--/*
-- * DASD_3990_ERP_EXAMINE_32
-- *
-- * DESCRIPTION
-- * Checks only for fatal/no/recoverable error.
-- * A detailed examination of the sense data is done later outside
-- * the interrupt handler.
-- *
-- * RETURN VALUES
-- * dasd_era_none no error
-- * dasd_era_fatal for all fatal (unrecoverable errors)
-- * dasd_era_recover for recoverable others.
-- */
--static dasd_era_t
--dasd_3990_erp_examine_32(struct dasd_ccw_req * cqr, char *sense)
--{
--
-- struct dasd_device *device = cqr->device;
--
-- switch (sense[25]) {
-- case 0x00:
-- return dasd_era_none;
--
-- case 0x01:
-- DEV_MESSAGE(KERN_ERR, device, "%s", "EXAMINE 32: fatal error");
--
-- return dasd_era_fatal;
--
-- default:
--
-- return dasd_era_recover;
-- }
--
--} /* end dasd_3990_erp_examine_32 */
--
--/*
-- * DASD_3990_ERP_EXAMINE
-- *
-- * DESCRIPTION
-- * Checks only for fatal/no/recover error.
-- * A detailed examination of the sense data is done later outside
-- * the interrupt handler.
-- *
-- * The logic is based on the 'IBM 3990 Storage Control Reference' manual
-- * 'Chapter 7. Error Recovery Procedures'.
-- *
-- * RETURN VALUES
-- * dasd_era_none no error
-- * dasd_era_fatal for all fatal (unrecoverable errors)
-- * dasd_era_recover for all others.
-- */
--dasd_era_t
--dasd_3990_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
--{
--
-- char *sense = irb->ecw;
-- dasd_era_t era = dasd_era_recover;
-- struct dasd_device *device = cqr->device;
--
-- /* check for successful execution first */
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
--
-- /* distinguish between 24 and 32 byte sense data */
-- if (sense[27] & DASD_SENSE_BIT_0) {
--
-- era = dasd_3990_erp_examine_24(cqr, sense);
--
-- } else {
--
-- era = dasd_3990_erp_examine_32(cqr, sense);
--
-- }
--
-- /* log the erp chain if fatal error occurred */
-- if ((era == dasd_era_fatal) && (device->state >= DASD_STATE_READY)) {
-- dasd_log_sense(cqr, irb);
-- }
--
-- return era;
--
--} /* END dasd_3990_erp_examine */
--
--/*
-- *****************************************************************************
- * SECTION ERP HANDLING
- *****************************************************************************
+ u32 done; /* gets set to 1 when fib is complete */
+- fib_callback callback;
+- void *callback_data;
++ fib_callback callback;
++ void *callback_data;
+ u32 flags; // u32 dmb was ulong
+ /*
+ * And for the internal issue/reply queues (we may be able
+ * to merge these two)
+ */
+ struct list_head fiblink;
+- void *data;
++ void *data;
+ struct hw_fib *hw_fib_va; /* Actual shared object */
+ dma_addr_t hw_fib_pa; /* physical address of hw_fib*/
+ };
+@@ -807,7 +809,7 @@ struct fib {
+ *
+ * This is returned by the RequestAdapterInfo block
*/
-@@ -206,7 +54,7 @@ dasd_3990_erp_cleanup(struct dasd_ccw_req * erp, char final_status)
+-
++
+ struct aac_adapter_info
{
- struct dasd_ccw_req *cqr = erp->refers;
+ __le32 platform;
+@@ -826,7 +828,7 @@ struct aac_adapter_info
+ __le32 biosrev;
+ __le32 biosbuild;
+ __le32 cluster;
+- __le32 clusterchannelmask;
++ __le32 clusterchannelmask;
+ __le32 serial[2];
+ __le32 battery;
+ __le32 options;
+@@ -863,9 +865,10 @@ struct aac_supplement_adapter_info
+ __le32 SupportedOptions2;
+ __le32 ReservedGrowth[1];
+ };
+-#define AAC_FEATURE_FALCON 0x00000010
+-#define AAC_OPTION_MU_RESET 0x00000001
+-#define AAC_OPTION_IGNORE_RESET 0x00000002
++#define AAC_FEATURE_FALCON cpu_to_le32(0x00000010)
++#define AAC_FEATURE_JBOD cpu_to_le32(0x08000000)
++#define AAC_OPTION_MU_RESET cpu_to_le32(0x00000001)
++#define AAC_OPTION_IGNORE_RESET cpu_to_le32(0x00000002)
+ #define AAC_SIS_VERSION_V3 3
+ #define AAC_SIS_SLOT_UNKNOWN 0xFF
-- dasd_free_erp_request(erp, erp->device);
-+ dasd_free_erp_request(erp, erp->memdev);
- cqr->status = final_status;
- return cqr;
+@@ -916,13 +919,13 @@ struct aac_bus_info_response {
+ #define AAC_OPT_HOST_TIME_FIB cpu_to_le32(1<<4)
+ #define AAC_OPT_RAID50 cpu_to_le32(1<<5)
+ #define AAC_OPT_4GB_WINDOW cpu_to_le32(1<<6)
+-#define AAC_OPT_SCSI_UPGRADEABLE cpu_to_le32(1<<7)
++#define AAC_OPT_SCSI_UPGRADEABLE cpu_to_le32(1<<7)
+ #define AAC_OPT_SOFT_ERR_REPORT cpu_to_le32(1<<8)
+-#define AAC_OPT_SUPPORTED_RECONDITION cpu_to_le32(1<<9)
++#define AAC_OPT_SUPPORTED_RECONDITION cpu_to_le32(1<<9)
+ #define AAC_OPT_SGMAP_HOST64 cpu_to_le32(1<<10)
+ #define AAC_OPT_ALARM cpu_to_le32(1<<11)
+ #define AAC_OPT_NONDASD cpu_to_le32(1<<12)
+-#define AAC_OPT_SCSI_MANAGED cpu_to_le32(1<<13)
++#define AAC_OPT_SCSI_MANAGED cpu_to_le32(1<<13)
+ #define AAC_OPT_RAID_SCSI_MODE cpu_to_le32(1<<14)
+ #define AAC_OPT_SUPPLEMENT_ADAPTER_INFO cpu_to_le32(1<<16)
+ #define AAC_OPT_NEW_COMM cpu_to_le32(1<<17)
+@@ -942,7 +945,7 @@ struct aac_dev
-@@ -224,15 +72,17 @@ static void
- dasd_3990_erp_block_queue(struct dasd_ccw_req * erp, int expires)
- {
+ /*
+ * Map for 128 fib objects (64k)
+- */
++ */
+ dma_addr_t hw_fib_pa;
+ struct hw_fib *hw_fib_va;
+ struct hw_fib *aif_base_va;
+@@ -953,24 +956,24 @@ struct aac_dev
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
-+ unsigned long flags;
+ struct fib *free_fib;
+ spinlock_t fib_lock;
+-
++
+ struct aac_queue_block *queues;
+ /*
+ * The user API will use an IOCTL to register itself to receive
+ * FIBs from the adapter. The following list is used to keep
+ * track of all the threads that have requested these FIBs. The
+- * mutex is used to synchronize access to all data associated
++ * mutex is used to synchronize access to all data associated
+ * with the adapter fibs.
+ */
+ struct list_head fib_list;
- DEV_MESSAGE(KERN_INFO, device,
- "blocking request queue for %is", expires/HZ);
+ struct adapter_ops a_ops;
+ unsigned long fsrev; /* Main driver's revision number */
+-
++
+ unsigned base_size; /* Size of mapped in region */
+ struct aac_init *init; /* Holds initialization info to communicate with adapter */
+- dma_addr_t init_pa; /* Holds physical address of the init struct */
+-
++ dma_addr_t init_pa; /* Holds physical address of the init struct */
++
+ struct pci_dev *pdev; /* Our PCI interface */
+ void * printfbuf; /* pointer to buffer used for printf's from the adapter */
+ void * comm_addr; /* Base address of Comm area */
+@@ -984,11 +987,11 @@ struct aac_dev
+ struct fsa_dev_info *fsa_dev;
+ struct task_struct *thread;
+ int cardtype;
+-
++
+ /*
+ * The following is the device specific extension.
+ */
+-#if (!defined(AAC_MIN_FOOTPRINT_SIZE))
++#ifndef AAC_MIN_FOOTPRINT_SIZE
+ # define AAC_MIN_FOOTPRINT_SIZE 8192
+ #endif
+ union
+@@ -1009,7 +1012,9 @@ struct aac_dev
+ /* These are in adapter info but they are in the io flow so
+ * lets break them out so we don't have to do an AND to check them
+ */
+- u8 nondasd_support;
++ u8 nondasd_support;
++ u8 jbod;
++ u8 cache_protected;
+ u8 dac_support;
+ u8 raid_scsi_mode;
+ u8 comm_interface;
+@@ -1066,18 +1071,19 @@ struct aac_dev
+ (dev)->a_ops.adapter_comm(dev, comm)
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
- device->stopped |= DASD_STOPPED_PENDING;
-- erp->status = DASD_CQR_QUEUED;
--
-- dasd_set_timer(device, expires);
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ erp->status = DASD_CQR_FILLED;
-+ dasd_block_set_timer(device->block, expires);
- }
+ #define FIB_CONTEXT_FLAG_TIMED_OUT (0x00000001)
++#define FIB_CONTEXT_FLAG (0x00000002)
/*
-@@ -251,7 +101,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_int_req(struct dasd_ccw_req * erp)
- {
-
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+ * Define the command values
+ */
+-
++
+ #define Null 0
+-#define GetAttributes 1
+-#define SetAttributes 2
+-#define Lookup 3
+-#define ReadLink 4
+-#define Read 5
+-#define Write 6
++#define GetAttributes 1
++#define SetAttributes 2
++#define Lookup 3
++#define ReadLink 4
++#define Read 5
++#define Write 6
+ #define Create 7
+ #define MakeDirectory 8
+ #define SymbolicLink 9
+@@ -1173,19 +1179,19 @@ struct aac_dev
- /* first time set initial retry counter and erp_function */
- /* and retry once without blocking queue */
-@@ -292,11 +142,14 @@ dasd_3990_erp_int_req(struct dasd_ccw_req * erp)
- static void
- dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
+ struct aac_read
{
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
- __u8 opm;
-+ unsigned long flags;
-
- /* try alternate valid path */
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
- opm = ccw_device_get_path_mask(device->cdev);
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
- //FIXME: start with get_opm ?
- if (erp->lpm == 0)
- erp->lpm = LPM_ANYPATH & ~(erp->irb.esw.esw0.sublog.lpum);
-@@ -309,9 +162,8 @@ dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
- "try alternate lpm=%x (lpum=%x / opm=%x)",
- erp->lpm, erp->irb.esw.esw0.sublog.lpum, opm);
-
-- /* reset status to queued to handle the request again... */
-- if (erp->status > DASD_CQR_QUEUED)
-- erp->status = DASD_CQR_QUEUED;
-+ /* reset status to submit the request again... */
-+ erp->status = DASD_CQR_FILLED;
- erp->retries = 1;
- } else {
- DEV_MESSAGE(KERN_ERR, device,
-@@ -320,8 +172,7 @@ dasd_3990_erp_alternate_path(struct dasd_ccw_req * erp)
- erp->irb.esw.esw0.sublog.lpum, opm);
-
- /* post request with permanent error */
-- if (erp->status > DASD_CQR_QUEUED)
-- erp->status = DASD_CQR_FAILED;
-+ erp->status = DASD_CQR_FAILED;
- }
- } /* end dasd_3990_erp_alternate_path */
+- __le32 command;
+- __le32 cid;
+- __le32 block;
+- __le32 count;
++ __le32 command;
++ __le32 cid;
++ __le32 block;
++ __le32 count;
+ struct sgmap sg; // Must be last in struct because it is variable
+ };
-@@ -344,14 +195,14 @@ static struct dasd_ccw_req *
- dasd_3990_erp_DCTL(struct dasd_ccw_req * erp, char modifier)
+ struct aac_read64
{
+- __le32 command;
+- __le16 cid;
+- __le16 sector_count;
+- __le32 block;
++ __le32 command;
++ __le16 cid;
++ __le16 sector_count;
++ __le32 block;
+ __le16 pad;
+ __le16 flags;
+ struct sgmap64 sg; // Must be last in struct because it is variable
+@@ -1193,26 +1199,26 @@ struct aac_read64
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
- struct DCTL_data *DCTL_data;
- struct ccw1 *ccw;
- struct dasd_ccw_req *dctl_cqr;
+ struct aac_read_reply
+ {
+- __le32 status;
+- __le32 count;
++ __le32 status;
++ __le32 count;
+ };
- dctl_cqr = dasd_alloc_erp_request((char *) &erp->magic, 1,
-- sizeof (struct DCTL_data),
-- erp->device);
-+ sizeof(struct DCTL_data),
-+ device);
- if (IS_ERR(dctl_cqr)) {
- DEV_MESSAGE(KERN_ERR, device, "%s",
- "Unable to allocate DCTL-CQR");
-@@ -365,13 +216,14 @@ dasd_3990_erp_DCTL(struct dasd_ccw_req * erp, char modifier)
- DCTL_data->modifier = modifier;
+ struct aac_write
+ {
+ __le32 command;
+- __le32 cid;
+- __le32 block;
+- __le32 count;
+- __le32 stable; // Not used
++ __le32 cid;
++ __le32 block;
++ __le32 count;
++ __le32 stable; // Not used
+ struct sgmap sg; // Must be last in struct because it is variable
+ };
- ccw = dctl_cqr->cpaddr;
-- memset(ccw, 0, sizeof (struct ccw1));
-+ memset(ccw, 0, sizeof(struct ccw1));
- ccw->cmd_code = CCW_CMD_DCTL;
- ccw->count = 4;
- ccw->cda = (__u32)(addr_t) DCTL_data;
- dctl_cqr->function = dasd_3990_erp_DCTL;
- dctl_cqr->refers = erp;
-- dctl_cqr->device = erp->device;
-+ dctl_cqr->startdev = device;
-+ dctl_cqr->memdev = device;
- dctl_cqr->magic = erp->magic;
- dctl_cqr->expires = 5 * 60 * HZ;
- dctl_cqr->retries = 2;
-@@ -435,7 +287,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_action_4(struct dasd_ccw_req * erp, char *sense)
+ struct aac_write64
+ {
+- __le32 command;
+- __le16 cid;
+- __le16 sector_count;
+- __le32 block;
++ __le32 command;
++ __le16 cid;
++ __le16 sector_count;
++ __le32 block;
+ __le16 pad;
+ __le16 flags;
+ #define IO_TYPE_WRITE 0x00000000
+@@ -1223,7 +1229,7 @@ struct aac_write64
+ struct aac_write_reply
{
+ __le32 status;
+- __le32 count;
++ __le32 count;
+ __le32 committed;
+ };
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+@@ -1326,10 +1332,10 @@ struct aac_srb_reply
+ #define SRB_NoDataXfer 0x0000
+ #define SRB_DisableDisconnect 0x0004
+ #define SRB_DisableSynchTransfer 0x0008
+-#define SRB_BypassFrozenQueue 0x0010
++#define SRB_BypassFrozenQueue 0x0010
+ #define SRB_DisableAutosense 0x0020
+ #define SRB_DataIn 0x0040
+-#define SRB_DataOut 0x0080
++#define SRB_DataOut 0x0080
- /* first time set initial retry counter and erp_function */
- /* and retry once without waiting for state change pending */
-@@ -472,7 +324,7 @@ dasd_3990_erp_action_4(struct dasd_ccw_req * erp, char *sense)
- "redriving request immediately, "
- "%d retries left",
- erp->retries);
-- erp->status = DASD_CQR_QUEUED;
-+ erp->status = DASD_CQR_FILLED;
- }
- }
+ /*
+ * SRB Functions - set in aac_srb->function
+@@ -1352,7 +1358,7 @@ struct aac_srb_reply
+ #define SRBF_RemoveDevice 0x0016
+ #define SRBF_DomainValidation 0x0017
-@@ -530,7 +382,7 @@ static void
- dasd_3990_handle_env_data(struct dasd_ccw_req * erp, char *sense)
- {
+-/*
++/*
+ * SRB SCSI Status - set in aac_srb->scsi_status
+ */
+ #define SRB_STATUS_PENDING 0x00
+@@ -1511,17 +1517,17 @@ struct aac_get_container_count_resp {
+ */
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
- char msg_format = (sense[7] & 0xF0);
- char msg_no = (sense[7] & 0x0F);
+ struct aac_mntent {
+- __le32 oid;
++ __le32 oid;
+ u8 name[16]; /* if applicable */
+ struct creation_info create_info; /* if applicable */
+ __le32 capacity;
+- __le32 vol; /* substrate structure */
+- __le32 obj; /* FT_FILESYS, etc. */
+- __le32 state; /* unready for mounting,
++ __le32 vol; /* substrate structure */
++ __le32 obj; /* FT_FILESYS, etc. */
++ __le32 state; /* unready for mounting,
+ readonly, etc. */
+- union aac_contentinfo fileinfo; /* Info specific to content
++ union aac_contentinfo fileinfo; /* Info specific to content
+ manager (eg, filesystem) */
+- __le32 altoid; /* != oid <==> snapshot or
++ __le32 altoid; /* != oid <==> snapshot or
+ broken mirror exists */
+ __le32 capacityhigh;
+ };
+@@ -1538,7 +1544,7 @@ struct aac_query_mount {
-@@ -1157,7 +1009,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_com_rej(struct dasd_ccw_req * erp, char *sense)
+ struct aac_mount {
+ __le32 status;
+- __le32 type; /* should be same as that requested */
++ __le32 type; /* should be same as that requested */
+ __le32 count;
+ struct aac_mntent mnt[1];
+ };
+@@ -1608,7 +1614,7 @@ struct aac_delete_disk {
+ u32 disknum;
+ u32 cnum;
+ };
+-
++
+ struct fib_ioctl
{
+ u32 fibctx;
+@@ -1622,10 +1628,10 @@ struct revision
+ __le32 version;
+ __le32 build;
+ };
+-
++
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+ /*
+- * Ugly - non Linux like ioctl coding for back compat.
++ * Ugly - non Linux like ioctl coding for back compat.
+ */
- erp->function = dasd_3990_erp_com_rej;
+ #define CTL_CODE(function, method) ( \
+@@ -1633,7 +1639,7 @@ struct revision
+ )
-@@ -1198,7 +1050,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_bus_out(struct dasd_ccw_req * erp)
- {
+ /*
+- * Define the method codes for how buffers are passed for I/O and FS
++ * Define the method codes for how buffers are passed for I/O and FS
+ * controls
+ */
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+@@ -1644,15 +1650,15 @@ struct revision
+ * Filesystem ioctls
+ */
- /* first time set initial retry counter and erp_function */
- /* and retry once without blocking queue */
-@@ -1237,7 +1089,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_equip_check(struct dasd_ccw_req * erp, char *sense)
+-#define FSACTL_SENDFIB CTL_CODE(2050, METHOD_BUFFERED)
+-#define FSACTL_SEND_RAW_SRB CTL_CODE(2067, METHOD_BUFFERED)
++#define FSACTL_SENDFIB CTL_CODE(2050, METHOD_BUFFERED)
++#define FSACTL_SEND_RAW_SRB CTL_CODE(2067, METHOD_BUFFERED)
+ #define FSACTL_DELETE_DISK 0x163
+ #define FSACTL_QUERY_DISK 0x173
+ #define FSACTL_OPEN_GET_ADAPTER_FIB CTL_CODE(2100, METHOD_BUFFERED)
+ #define FSACTL_GET_NEXT_ADAPTER_FIB CTL_CODE(2101, METHOD_BUFFERED)
+ #define FSACTL_CLOSE_GET_ADAPTER_FIB CTL_CODE(2102, METHOD_BUFFERED)
+ #define FSACTL_MINIPORT_REV_CHECK CTL_CODE(2107, METHOD_BUFFERED)
+-#define FSACTL_GET_PCI_INFO CTL_CODE(2119, METHOD_BUFFERED)
++#define FSACTL_GET_PCI_INFO CTL_CODE(2119, METHOD_BUFFERED)
+ #define FSACTL_FORCE_DELETE_DISK CTL_CODE(2120, METHOD_NEITHER)
+ #define FSACTL_GET_CONTAINERS 2131
+ #define FSACTL_SEND_LARGE_FIB CTL_CODE(2138, METHOD_BUFFERED)
+@@ -1661,7 +1667,7 @@ struct revision
+ struct aac_common
{
+ /*
+- * If this value is set to 1 then interrupt moderation will occur
++ * If this value is set to 1 then interrupt moderation will occur
+ * in the base commuication support.
+ */
+ u32 irq_mod;
+@@ -1690,11 +1696,11 @@ extern struct aac_common aac_config;
+ * The following macro is used when sending and receiving FIBs. It is
+ * only used for debugging.
+ */
+-
++
+ #ifdef DBG
+ #define FIB_COUNTER_INCREMENT(counter) (counter)++
+ #else
+-#define FIB_COUNTER_INCREMENT(counter)
++#define FIB_COUNTER_INCREMENT(counter)
+ #endif
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+ /*
+@@ -1726,17 +1732,17 @@ extern struct aac_common aac_config;
+ *
+ * The adapter reports is present state through the phase. Only
+ * a single phase should be ever be set. Each phase can have multiple
+- * phase status bits to provide more detailed information about the
+- * state of the board. Care should be taken to ensure that any phase
++ * phase status bits to provide more detailed information about the
++ * state of the board. Care should be taken to ensure that any phase
+ * status bits that are set when changing the phase are also valid
+ * for the new phase or be cleared out. Adapter software (monitor,
+- * iflash, kernel) is responsible for properly maintining the phase
++ * iflash, kernel) is responsible for properly maintining the phase
+ * status mailbox when it is running.
+- *
+- * MONKER_API Phases
+ *
+- * Phases are bit oriented. It is NOT valid to have multiple bits set
+- */
++ * MONKER_API Phases
++ *
++ * Phases are bit oriented. It is NOT valid to have multiple bits set
++ */
- erp->function = dasd_3990_erp_equip_check;
+ #define SELF_TEST_FAILED 0x00000004
+ #define MONITOR_PANIC 0x00000020
+@@ -1759,16 +1765,22 @@ extern struct aac_common aac_config;
+ * For FIB communication, we need all of the following things
+ * to send back to the user.
+ */
+-
+-#define AifCmdEventNotify 1 /* Notify of event */
++
++#define AifCmdEventNotify 1 /* Notify of event */
+ #define AifEnConfigChange 3 /* Adapter configuration change */
+ #define AifEnContainerChange 4 /* Container configuration change */
+ #define AifEnDeviceFailure 5 /* SCSI device failed */
++#define AifEnEnclosureManagement 13 /* EM_DRIVE_* */
++#define EM_DRIVE_INSERTION 31
++#define EM_DRIVE_REMOVAL 32
++#define AifEnBatteryEvent 14 /* Change in Battery State */
+ #define AifEnAddContainer 15 /* A new array was created */
+ #define AifEnDeleteContainer 16 /* A container was deleted */
+ #define AifEnExpEvent 23 /* Firmware Event Log */
+ #define AifExeFirmwarePanic 3 /* Firmware Event Panic */
+ #define AifHighPriority 3 /* Highest Priority Event */
++#define AifEnAddJBOD 30 /* JBOD created */
++#define AifEnDeleteJBOD 31 /* JBOD deleted */
-@@ -1279,7 +1131,6 @@ dasd_3990_erp_equip_check(struct dasd_ccw_req * erp, char *sense)
+ #define AifCmdJobProgress 2 /* Progress report */
+ #define AifJobCtrZero 101 /* Array Zero progress */
+@@ -1780,11 +1792,11 @@ extern struct aac_common aac_config;
+ #define AifDenVolumeExtendComplete 201 /* A volume extend completed */
+ #define AifReqJobList 100 /* Gets back complete job list */
+ #define AifReqJobsForCtr 101 /* Gets back jobs for specific container */
+-#define AifReqJobsForScsi 102 /* Gets back jobs for specific SCSI device */
+-#define AifReqJobReport 103 /* Gets back a specific job report or list of them */
++#define AifReqJobsForScsi 102 /* Gets back jobs for specific SCSI device */
++#define AifReqJobReport 103 /* Gets back a specific job report or list of them */
+ #define AifReqTerminateJob 104 /* Terminates job */
+ #define AifReqSuspendJob 105 /* Suspends a job */
+-#define AifReqResumeJob 106 /* Resumes a job */
++#define AifReqResumeJob 106 /* Resumes a job */
+ #define AifReqSendAPIReport 107 /* API generic report requests */
+ #define AifReqAPIJobStart 108 /* Start a job from the API */
+ #define AifReqAPIJobUpdate 109 /* Update a job report from the API */
+@@ -1803,8 +1815,8 @@ struct aac_aifcmd {
+ };
- erp = dasd_3990_erp_action_5(erp);
+ /**
+- * Convert capacity to cylinders
+- * accounting for the fact capacity could be a 64 bit value
++ * Convert capacity to cylinders
++ * accounting for the fact capacity could be a 64 bit value
+ *
+ */
+ static inline unsigned int cap_to_cyls(sector_t capacity, unsigned divisor)
+@@ -1861,6 +1873,7 @@ int aac_probe_container(struct aac_dev *dev, int cid);
+ int _aac_rx_init(struct aac_dev *dev);
+ int aac_rx_select_comm(struct aac_dev *dev, int comm);
+ int aac_rx_deliver_producer(struct fib * fib);
++char * get_container_type(unsigned type);
+ extern int numacb;
+ extern int acbsize;
+ extern char aac_driver_version[];
+diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c
+index 1e6d7a9..851a7e5 100644
+--- a/drivers/scsi/aacraid/commctrl.c
++++ b/drivers/scsi/aacraid/commctrl.c
+@@ -48,13 +48,13 @@
+ * ioctl_send_fib - send a FIB from userspace
+ * @dev: adapter is being processed
+ * @arg: arguments to the ioctl call
+- *
++ *
+ * This routine sends a fib to the adapter on behalf of a user level
+ * program.
+ */
+ # define AAC_DEBUG_PREAMBLE KERN_INFO
+ # define AAC_DEBUG_POSTAMBLE
+-
++
+ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
+ {
+ struct hw_fib * kfib;
+@@ -71,7 +71,7 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
+ if(fibptr == NULL) {
+ return -ENOMEM;
}
--
- return erp;
+-
++
+ kfib = fibptr->hw_fib_va;
+ /*
+ * First copy in the header so that we can check the size field.
+@@ -109,7 +109,7 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
+ if (kfib->header.Command == cpu_to_le16(TakeABreakPt)) {
+ aac_adapter_interrupt(dev);
+ /*
+- * Since we didn't really send a fib, zero out the state to allow
++ * Since we didn't really send a fib, zero out the state to allow
+ * cleanup code not to assert.
+ */
+ kfib->header.XferState = 0;
+@@ -169,7 +169,7 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
- } /* end dasd_3990_erp_equip_check */
-@@ -1299,7 +1150,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_data_check(struct dasd_ccw_req * erp, char *sense)
- {
+ fibctx->type = FSAFS_NTC_GET_ADAPTER_FIB_CONTEXT;
+ fibctx->size = sizeof(struct aac_fib_context);
+- /*
++ /*
+ * Yes yes, I know this could be an index, but we have a
+ * better guarantee of uniqueness for the locked loop below.
+ * Without the aid of a persistent history, this also helps
+@@ -189,7 +189,7 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ INIT_LIST_HEAD(&fibctx->fib_list);
+ fibctx->jiffies = jiffies/HZ;
+ /*
+- * Now add this context onto the adapter's
++ * Now add this context onto the adapter's
+ * AdapterFibContext list.
+ */
+ spin_lock_irqsave(&dev->fib_lock, flags);
+@@ -207,12 +207,12 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ }
+ list_add_tail(&fibctx->next, &dev->fib_list);
+ spin_unlock_irqrestore(&dev->fib_lock, flags);
+- if (copy_to_user(arg, &fibctx->unique,
++ if (copy_to_user(arg, &fibctx->unique,
+ sizeof(fibctx->unique))) {
+ status = -EFAULT;
+ } else {
+ status = 0;
+- }
++ }
+ }
+ return status;
+ }
+@@ -221,8 +221,8 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ * next_getadapter_fib - get the next fib
+ * @dev: adapter to use
+ * @arg: ioctl argument
+- *
+- * This routine will get the next Fib, if available, from the AdapterFibContext
++ *
++ * This routine will get the next Fib, if available, from the AdapterFibContext
+ * passed in from the user.
+ */
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+@@ -234,7 +234,7 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ int status;
+ struct list_head * entry;
+ unsigned long flags;
+-
++
+ if(copy_from_user((void *)&f, arg, sizeof(struct fib_ioctl)))
+ return -EFAULT;
+ /*
+@@ -243,6 +243,7 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ * Search the list of AdapterFibContext addresses on the adapter
+ * to be sure this is a valid address
+ */
++ spin_lock_irqsave(&dev->fib_lock, flags);
+ entry = dev->fib_list.next;
+ fibctx = NULL;
- erp->function = dasd_3990_erp_data_check;
+@@ -251,37 +252,37 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ /*
+ * Extract the AdapterFibContext from the Input parameters.
+ */
+- if (fibctx->unique == f.fibctx) { /* We found a winner */
++ if (fibctx->unique == f.fibctx) { /* We found a winner */
+ break;
+ }
+ entry = entry->next;
+ fibctx = NULL;
+ }
+ if (!fibctx) {
++ spin_unlock_irqrestore(&dev->fib_lock, flags);
+ dprintk ((KERN_INFO "Fib Context not found\n"));
+ return -EINVAL;
+ }
-@@ -1358,7 +1209,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_overrun(struct dasd_ccw_req * erp, char *sense)
+ if((fibctx->type != FSAFS_NTC_GET_ADAPTER_FIB_CONTEXT) ||
+ (fibctx->size != sizeof(struct aac_fib_context))) {
++ spin_unlock_irqrestore(&dev->fib_lock, flags);
+ dprintk ((KERN_INFO "Fib Context corrupt?\n"));
+ return -EINVAL;
+ }
+ status = 0;
+- spin_lock_irqsave(&dev->fib_lock, flags);
+ /*
+ * If there are no fibs to send back, then either wait or return
+ * -EAGAIN
+ */
+ return_fib:
+ if (!list_empty(&fibctx->fib_list)) {
+- struct list_head * entry;
+ /*
+ * Pull the next fib from the fibs
+ */
+ entry = fibctx->fib_list.next;
+ list_del(entry);
+-
++
+ fib = list_entry(entry, struct fib, fiblink);
+ fibctx->count--;
+ spin_unlock_irqrestore(&dev->fib_lock, flags);
+@@ -289,7 +290,7 @@ return_fib:
+ kfree(fib->hw_fib_va);
+ kfree(fib);
+ return -EFAULT;
+- }
++ }
+ /*
+ * Free the space occupied by this copy of the fib.
+ */
+@@ -318,7 +319,7 @@ return_fib:
+ }
+ } else {
+ status = -EAGAIN;
+- }
++ }
+ }
+ fibctx->jiffies = jiffies/HZ;
+ return status;
+@@ -327,7 +328,9 @@ return_fib:
+ int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
{
+ struct fib *fib;
++ unsigned long flags;
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
-
- erp->function = dasd_3990_erp_overrun;
-
-@@ -1387,7 +1238,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_inv_format(struct dasd_ccw_req * erp, char *sense)
++ spin_lock_irqsave(&dev->fib_lock, flags);
+ /*
+ * First free any FIBs that have not been consumed.
+ */
+@@ -350,6 +353,7 @@ int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
+ * Remove the Context from the AdapterFibContext List
+ */
+ list_del(&fibctx->next);
++ spin_unlock_irqrestore(&dev->fib_lock, flags);
+ /*
+ * Invalidate context
+ */
+@@ -368,7 +372,7 @@ int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
+ *
+ * This routine will close down the fibctx passed in from the user.
+ */
+-
++
+ static int close_getadapter_fib(struct aac_dev * dev, void __user *arg)
{
+ struct aac_fib_context *fibctx;
+@@ -415,8 +419,8 @@ static int close_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ * @arg: ioctl arguments
+ *
+ * This routine returns the driver version.
+- * Under Linux, there have been no version incompatibilities, so this is
+- * simple!
++ * Under Linux, there have been no version incompatibilities, so this is
++ * simple!
+ */
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
-
- erp->function = dasd_3990_erp_inv_format;
-
-@@ -1403,8 +1254,7 @@ dasd_3990_erp_inv_format(struct dasd_ccw_req * erp, char *sense)
+ static int check_revision(struct aac_dev *dev, void __user *arg)
+@@ -426,12 +430,12 @@ static int check_revision(struct aac_dev *dev, void __user *arg)
+ u32 version;
- } else {
- DEV_MESSAGE(KERN_ERR, device, "%s",
-- "Invalid Track Format - Fatal error should have "
-- "been handled within the interrupt handler");
-+ "Invalid Track Format - Fatal error");
+ response.compat = 1;
+- version = (simple_strtol(driver_version,
++ version = (simple_strtol(driver_version,
+ &driver_version, 10) << 24) | 0x00000400;
+ version += simple_strtol(driver_version + 1, &driver_version, 10) << 16;
+ version += simple_strtol(driver_version + 1, NULL, 10);
+ response.version = cpu_to_le32(version);
+-# if (defined(AAC_DRIVER_BUILD))
++# ifdef AAC_DRIVER_BUILD
+ response.build = cpu_to_le32(AAC_DRIVER_BUILD);
+ # else
+ response.build = cpu_to_le32(9999);
+@@ -464,7 +468,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ u32 data_dir;
+ void __user *sg_user[32];
+ void *sg_list[32];
+- u32 sg_indx = 0;
++ u32 sg_indx = 0;
+ u32 byte_count = 0;
+ u32 actual_fibsize64, actual_fibsize = 0;
+ int i;
+@@ -475,7 +479,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ return -EBUSY;
+ }
+ if (!capable(CAP_SYS_ADMIN)){
+- dprintk((KERN_DEBUG"aacraid: No permission to send raw srb\n"));
++ dprintk((KERN_DEBUG"aacraid: No permission to send raw srb\n"));
+ return -EPERM;
+ }
+ /*
+@@ -490,7 +494,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
+ memset(sg_list, 0, sizeof(sg_list)); /* cleanup may take issue */
+ if(copy_from_user(&fibsize, &user_srb->count,sizeof(u32))){
+- dprintk((KERN_DEBUG"aacraid: Could not copy data size from user\n"));
++ dprintk((KERN_DEBUG"aacraid: Could not copy data size from user\n"));
+ rcode = -EFAULT;
+ goto cleanup;
}
-@@ -1428,7 +1278,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_EOC(struct dasd_ccw_req * default_erp, char *sense)
- {
+@@ -507,7 +511,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ goto cleanup;
+ }
+ if(copy_from_user(user_srbcmd, user_srb,fibsize)){
+- dprintk((KERN_DEBUG"aacraid: Could not copy srb from user\n"));
++ dprintk((KERN_DEBUG"aacraid: Could not copy srb from user\n"));
+ rcode = -EFAULT;
+ goto cleanup;
+ }
+@@ -518,15 +522,15 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ // Fix up srb for endian and force some values
-- struct dasd_device *device = default_erp->device;
-+ struct dasd_device *device = default_erp->startdev;
+ srbcmd->function = cpu_to_le32(SRBF_ExecuteScsi); // Force this
+- srbcmd->channel = cpu_to_le32(user_srbcmd->channel);
++ srbcmd->channel = cpu_to_le32(user_srbcmd->channel);
+ srbcmd->id = cpu_to_le32(user_srbcmd->id);
+- srbcmd->lun = cpu_to_le32(user_srbcmd->lun);
+- srbcmd->timeout = cpu_to_le32(user_srbcmd->timeout);
+- srbcmd->flags = cpu_to_le32(flags);
++ srbcmd->lun = cpu_to_le32(user_srbcmd->lun);
++ srbcmd->timeout = cpu_to_le32(user_srbcmd->timeout);
++ srbcmd->flags = cpu_to_le32(flags);
+ srbcmd->retry_limit = 0; // Obsolete parameter
+ srbcmd->cdb_size = cpu_to_le32(user_srbcmd->cdb_size);
+ memcpy(srbcmd->cdb, user_srbcmd->cdb, sizeof(srbcmd->cdb));
+-
++
+ switch (flags & (SRB_DataIn | SRB_DataOut)) {
+ case SRB_DataOut:
+ data_dir = DMA_TO_DEVICE;
+@@ -582,7 +586,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ void* p;
+ /* Does this really need to be GFP_DMA? */
+ p = kmalloc(upsg->sg[i].count,GFP_KERNEL|__GFP_DMA);
+- if(p == 0) {
++ if(!p) {
+ dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+ upsg->sg[i].count,i,upsg->count));
+ rcode = -ENOMEM;
+@@ -594,7 +598,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ sg_list[i] = p; // save so we can clean up later
+ sg_indx = i;
- DEV_MESSAGE(KERN_ERR, device, "%s",
- "End-of-Cylinder - must never happen");
-@@ -1453,7 +1303,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_env_data(struct dasd_ccw_req * erp, char *sense)
- {
+- if( flags & SRB_DataOut ){
++ if (flags & SRB_DataOut) {
+ if(copy_from_user(p,sg_user[i],upsg->sg[i].count)){
+ dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
+ rcode = -EFAULT;
+@@ -626,7 +630,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ void* p;
+ /* Does this really need to be GFP_DMA? */
+ p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
+- if(p == 0) {
++ if(!p) {
+ kfree (usg);
+ dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+ usg->sg[i].count,i,usg->count));
+@@ -637,7 +641,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ sg_list[i] = p; // save so we can clean up later
+ sg_indx = i;
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+- if( flags & SRB_DataOut ){
++ if (flags & SRB_DataOut) {
+ if(copy_from_user(p,sg_user[i],upsg->sg[i].count)){
+ kfree (usg);
+ dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
+@@ -668,7 +672,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ void* p;
+ /* Does this really need to be GFP_DMA? */
+ p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
+- if(p == 0) {
++ if(!p) {
+ dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+ usg->sg[i].count,i,usg->count));
+ rcode = -ENOMEM;
+@@ -680,7 +684,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ sg_list[i] = p; // save so we can clean up later
+ sg_indx = i;
- erp->function = dasd_3990_erp_env_data;
+- if( flags & SRB_DataOut ){
++ if (flags & SRB_DataOut) {
+ if(copy_from_user(p,sg_user[i],usg->sg[i].count)){
+ dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
+ rcode = -EFAULT;
+@@ -698,7 +702,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ dma_addr_t addr;
+ void* p;
+ p = kmalloc(upsg->sg[i].count, GFP_KERNEL);
+- if(p == 0) {
++ if (!p) {
+ dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+ upsg->sg[i].count, i, upsg->count));
+ rcode = -ENOMEM;
+@@ -708,7 +712,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ sg_list[i] = p; // save so we can clean up later
+ sg_indx = i;
-@@ -1463,11 +1313,9 @@ dasd_3990_erp_env_data(struct dasd_ccw_req * erp, char *sense)
+- if( flags & SRB_DataOut ){
++ if (flags & SRB_DataOut) {
+ if(copy_from_user(p, sg_user[i],
+ upsg->sg[i].count)) {
+ dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
+@@ -734,19 +738,19 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ }
- /* don't retry on disabled interface */
- if (sense[7] != 0x0F) {
--
- erp = dasd_3990_erp_action_4(erp, sense);
- } else {
--
-- erp = dasd_3990_erp_cleanup(erp, DASD_CQR_IN_IO);
-+ erp->status = DASD_CQR_FILLED;
+ if (status != 0){
+- dprintk((KERN_DEBUG"aacraid: Could not send raw srb fib to hba\n"));
++ dprintk((KERN_DEBUG"aacraid: Could not send raw srb fib to hba\n"));
+ rcode = -ENXIO;
+ goto cleanup;
}
- return erp;
-@@ -1490,11 +1338,10 @@ static struct dasd_ccw_req *
- dasd_3990_erp_no_rec(struct dasd_ccw_req * default_erp, char *sense)
- {
+- if( flags & SRB_DataIn ) {
++ if (flags & SRB_DataIn) {
+ for(i = 0 ; i <= sg_indx; i++){
+ byte_count = le32_to_cpu(
+ (dev->adapter_info.options & AAC_OPT_SGMAP_HOST64)
+ ? ((struct sgmap64*)&srbcmd->sg)->sg[i].count
+ : srbcmd->sg.sg[i].count);
+ if(copy_to_user(sg_user[i], sg_list[i], byte_count)){
+- dprintk((KERN_DEBUG"aacraid: Could not copy sg data to user\n"));
++ dprintk((KERN_DEBUG"aacraid: Could not copy sg data to user\n"));
+ rcode = -EFAULT;
+ goto cleanup;
+
+@@ -756,7 +760,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
-- struct dasd_device *device = default_erp->device;
-+ struct dasd_device *device = default_erp->startdev;
+ reply = (struct aac_srb_reply *) fib_data(srbfib);
+ if(copy_to_user(user_reply,reply,sizeof(struct aac_srb_reply))){
+- dprintk((KERN_DEBUG"aacraid: Could not copy reply to user\n"));
++ dprintk((KERN_DEBUG"aacraid: Could not copy reply to user\n"));
+ rcode = -EFAULT;
+ goto cleanup;
+ }
+@@ -775,34 +779,34 @@ cleanup:
+ }
- DEV_MESSAGE(KERN_ERR, device, "%s",
-- "No Record Found - Fatal error should "
-- "have been handled within the interrupt handler");
-+ "No Record Found - Fatal error ");
+ struct aac_pci_info {
+- u32 bus;
+- u32 slot;
++ u32 bus;
++ u32 slot;
+ };
- return dasd_3990_erp_cleanup(default_erp, DASD_CQR_FAILED);
-@@ -1517,7 +1364,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
+ static int aac_get_pci_info(struct aac_dev* dev, void __user *arg)
{
+- struct aac_pci_info pci_info;
++ struct aac_pci_info pci_info;
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
-
- DEV_MESSAGE(KERN_ERR, device, "%s", "File Protected");
+ pci_info.bus = dev->pdev->bus->number;
+ pci_info.slot = PCI_SLOT(dev->pdev->devfn);
-@@ -1526,6 +1373,43 @@ dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
- } /* end dasd_3990_erp_file_prot */
+- if (copy_to_user(arg, &pci_info, sizeof(struct aac_pci_info))) {
+- dprintk((KERN_DEBUG "aacraid: Could not copy pci info\n"));
+- return -EFAULT;
++ if (copy_to_user(arg, &pci_info, sizeof(struct aac_pci_info))) {
++ dprintk((KERN_DEBUG "aacraid: Could not copy pci info\n"));
++ return -EFAULT;
+ }
+- return 0;
++ return 0;
+ }
+-
++
- /*
-+ * DASD_3990_ERP_INSPECT_ALIAS
-+ *
-+ * DESCRIPTION
-+ * Checks if the original request was started on an alias device.
-+ * If yes, it modifies the original and the erp request so that
-+ * the erp request can be started on a base device.
-+ *
-+ * PARAMETER
-+ * erp pointer to the currently created default ERP
-+ *
-+ * RETURN VALUES
-+ * erp pointer to the modified ERP, or NULL
-+ */
+ int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg)
+ {
+ int status;
+-
+
-+static struct dasd_ccw_req *dasd_3990_erp_inspect_alias(
-+ struct dasd_ccw_req *erp)
-+{
-+ struct dasd_ccw_req *cqr = erp->refers;
+ /*
+ * HBA gets first crack
+ */
+-
+
-+ if (cqr->block &&
-+ (cqr->block->base != cqr->startdev)) {
-+ if (cqr->startdev->features & DASD_FEATURE_ERPLOG) {
-+ DEV_MESSAGE(KERN_ERR, cqr->startdev,
-+ "ERP on alias device for request %p,"
-+ " recover on base device %s", cqr,
-+ cqr->block->base->cdev->dev.bus_id);
-+ }
-+ dasd_eckd_reset_ccw_to_base_io(cqr);
-+ erp->startdev = cqr->block->base;
-+ erp->function = dasd_3990_erp_inspect_alias;
-+ return erp;
-+ } else
-+ return NULL;
-+}
+ status = aac_dev_ioctl(dev, cmd, arg);
+ if(status != -ENOTTY)
+ return status;
+@@ -832,7 +836,7 @@ int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg)
+ break;
+ default:
+ status = -ENOTTY;
+- break;
++ break;
+ }
+ return status;
+ }
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index 8736813..89cc8b7 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -301,10 +301,10 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
+ if ((!aac_adapter_sync_cmd(dev, GET_ADAPTER_PROPERTIES,
+ 0, 0, 0, 0, 0, 0, status+0, status+1, status+2, NULL, NULL)) &&
+ (status[0] == 0x00000001)) {
+- if (status[1] & AAC_OPT_NEW_COMM_64)
++ if (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_64))
+ dev->raw_io_64 = 1;
+ if (dev->a_ops.adapter_comm &&
+- (status[1] & AAC_OPT_NEW_COMM))
++ (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM)))
+ dev->comm_interface = AAC_COMM_MESSAGE;
+ if ((dev->comm_interface == AAC_COMM_MESSAGE) &&
+ (status[2] > dev->base_size)) {
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index abce48c..81b3692 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -56,7 +56,7 @@
+ * Allocate and map the shared PCI space for the FIB blocks used to
+ * talk to the Adaptec firmware.
+ */
+-
+
+ static int fib_map_alloc(struct aac_dev *dev)
+ {
+ dprintk((KERN_INFO
+@@ -109,14 +109,16 @@ int aac_fib_setup(struct aac_dev * dev)
+ }
+ if (i<0)
+ return -ENOMEM;
+-
+
-+/*
- * DASD_3990_ERP_INSPECT_24
- *
- * DESCRIPTION
-@@ -1623,7 +1507,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_action_10_32(struct dasd_ccw_req * erp, char *sense)
+ hw_fib = dev->hw_fib_va;
+ hw_fib_pa = dev->hw_fib_pa;
+ memset(hw_fib, 0, dev->max_fib_size * (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB));
+ /*
+ * Initialise the fibs
+ */
+- for (i = 0, fibptr = &dev->fibs[i]; i < (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB); i++, fibptr++)
++ for (i = 0, fibptr = &dev->fibs[i];
++ i < (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB);
++ i++, fibptr++)
+ {
+ fibptr->dev = dev;
+ fibptr->hw_fib_va = hw_fib;
+@@ -148,13 +150,13 @@ int aac_fib_setup(struct aac_dev * dev)
+ * Allocate a fib from the adapter fib pool. If the pool is empty we
+ * return NULL.
+ */
+-
++
+ struct fib *aac_fib_alloc(struct aac_dev *dev)
{
+ struct fib * fibptr;
+ unsigned long flags;
+ spin_lock_irqsave(&dev->fib_lock, flags);
+- fibptr = dev->free_fib;
++ fibptr = dev->free_fib;
+ if(!fibptr){
+ spin_unlock_irqrestore(&dev->fib_lock, flags);
+ return fibptr;
+@@ -171,6 +173,7 @@ struct fib *aac_fib_alloc(struct aac_dev *dev)
+ * each I/O
+ */
+ fibptr->hw_fib_va->header.XferState = 0;
++ fibptr->flags = 0;
+ fibptr->callback = NULL;
+ fibptr->callback_data = NULL;
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
-
- erp->retries = 256;
- erp->function = dasd_3990_erp_action_10_32;
-@@ -1657,13 +1541,14 @@ static struct dasd_ccw_req *
- dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
+@@ -183,7 +186,7 @@ struct fib *aac_fib_alloc(struct aac_dev *dev)
+ *
+ * Frees up a fib and places it on the appropriate queue
+ */
+-
++
+ void aac_fib_free(struct fib *fibptr)
{
-
-- struct dasd_device *device = default_erp->device;
-+ struct dasd_device *device = default_erp->startdev;
- __u32 cpa = 0;
- struct dasd_ccw_req *cqr;
- struct dasd_ccw_req *erp;
- struct DE_eckd_data *DE_data;
-+ struct PFX_eckd_data *PFX_data;
- char *LO_data; /* LO_eckd_data_t */
-- struct ccw1 *ccw;
-+ struct ccw1 *ccw, *oldccw;
-
- DEV_MESSAGE(KERN_DEBUG, device, "%s",
- "Write not finished because of unexpected condition");
-@@ -1702,8 +1587,8 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
- /* Build new ERP request including DE/LO */
- erp = dasd_alloc_erp_request((char *) &cqr->magic,
- 2 + 1,/* DE/LO + TIC */
-- sizeof (struct DE_eckd_data) +
-- sizeof (struct LO_eckd_data), device);
-+ sizeof(struct DE_eckd_data) +
-+ sizeof(struct LO_eckd_data), device);
-
- if (IS_ERR(erp)) {
- DEV_MESSAGE(KERN_ERR, device, "%s", "Unable to allocate ERP");
-@@ -1712,10 +1597,16 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
-
- /* use original DE */
- DE_data = erp->data;
-- memcpy(DE_data, cqr->data, sizeof (struct DE_eckd_data));
-+ oldccw = cqr->cpaddr;
-+ if (oldccw->cmd_code == DASD_ECKD_CCW_PFX) {
-+ PFX_data = cqr->data;
-+ memcpy(DE_data, &PFX_data->define_extend,
-+ sizeof(struct DE_eckd_data));
-+ } else
-+ memcpy(DE_data, cqr->data, sizeof(struct DE_eckd_data));
-
- /* create LO */
-- LO_data = erp->data + sizeof (struct DE_eckd_data);
-+ LO_data = erp->data + sizeof(struct DE_eckd_data);
-
- if ((sense[3] == 0x01) && (LO_data[1] & 0x01)) {
-
-@@ -1748,7 +1639,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
-
- /* create DE ccw */
- ccw = erp->cpaddr;
-- memset(ccw, 0, sizeof (struct ccw1));
-+ memset(ccw, 0, sizeof(struct ccw1));
- ccw->cmd_code = DASD_ECKD_CCW_DEFINE_EXTENT;
- ccw->flags = CCW_FLAG_CC;
- ccw->count = 16;
-@@ -1756,7 +1647,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
-
- /* create LO ccw */
- ccw++;
-- memset(ccw, 0, sizeof (struct ccw1));
-+ memset(ccw, 0, sizeof(struct ccw1));
- ccw->cmd_code = DASD_ECKD_CCW_LOCATE_RECORD;
- ccw->flags = CCW_FLAG_CC;
- ccw->count = 16;
-@@ -1770,7 +1661,8 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
- /* fill erp related fields */
- erp->function = dasd_3990_erp_action_1B_32;
- erp->refers = default_erp->refers;
-- erp->device = device;
-+ erp->startdev = device;
-+ erp->memdev = device;
- erp->magic = default_erp->magic;
- erp->expires = 0;
- erp->retries = 256;
-@@ -1803,7 +1695,7 @@ static struct dasd_ccw_req *
- dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
+ unsigned long flags;
+@@ -204,10 +207,10 @@ void aac_fib_free(struct fib *fibptr)
+ /**
+ * aac_fib_init - initialise a fib
+ * @fibptr: The fib to initialize
+- *
++ *
+ * Set up the generic fib fields ready for use
+ */
+-
++
+ void aac_fib_init(struct fib *fibptr)
{
+ struct hw_fib *hw_fib = fibptr->hw_fib_va;
+@@ -227,12 +230,12 @@ void aac_fib_init(struct fib *fibptr)
+ * Will deallocate and return to the free pool the FIB pointed to by the
+ * caller.
+ */
+-
++
+ static void fib_dealloc(struct fib * fibptr)
+ {
+ struct hw_fib *hw_fib = fibptr->hw_fib_va;
+ BUG_ON(hw_fib->header.StructType != FIB_MAGIC);
+- hw_fib->header.XferState = 0;
++ hw_fib->header.XferState = 0;
+ }
-- struct dasd_device *device = previous_erp->device;
-+ struct dasd_device *device = previous_erp->startdev;
- __u32 cpa = 0;
- struct dasd_ccw_req *cqr;
- struct dasd_ccw_req *erp;
-@@ -1827,7 +1719,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
- DEV_MESSAGE(KERN_DEBUG, device, "%s",
- "Imprecise ending is set - just retry");
-
-- previous_erp->status = DASD_CQR_QUEUED;
-+ previous_erp->status = DASD_CQR_FILLED;
-
- return previous_erp;
- }
-@@ -1850,7 +1742,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
- erp = previous_erp;
-
- /* update the LO with the new returned sense data */
-- LO_data = erp->data + sizeof (struct DE_eckd_data);
-+ LO_data = erp->data + sizeof(struct DE_eckd_data);
-
- if ((sense[3] == 0x01) && (LO_data[1] & 0x01)) {
-
-@@ -1889,7 +1781,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
- ccw++; /* addr of TIC ccw */
- ccw->cda = cpa;
-
-- erp->status = DASD_CQR_QUEUED;
-+ erp->status = DASD_CQR_FILLED;
-
- return erp;
-
-@@ -1968,9 +1860,7 @@ dasd_3990_erp_compound_path(struct dasd_ccw_req * erp, char *sense)
- * try further actions. */
-
- erp->lpm = 0;
--
-- erp->status = DASD_CQR_ERROR;
--
-+ erp->status = DASD_CQR_NEED_ERP;
+ /*
+@@ -241,7 +244,7 @@ static void fib_dealloc(struct fib * fibptr)
+ * these routines and are the only routines which have a knowledge of the
+ * how these queues are implemented.
+ */
+-
++
+ /**
+ * aac_get_entry - get a queue entry
+ * @dev: Adapter
+@@ -254,7 +257,7 @@ static void fib_dealloc(struct fib * fibptr)
+ * is full(no free entries) than no entry is returned and the function returns 0 otherwise 1 is
+ * returned.
+ */
+-
++
+ static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entry, u32 * index, unsigned long *nonotify)
+ {
+ struct aac_queue * q;
+@@ -279,26 +282,27 @@ static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entr
+ idx = ADAP_NORM_RESP_ENTRIES;
}
+ if (idx != le32_to_cpu(*(q->headers.consumer)))
+- *nonotify = 1;
++ *nonotify = 1;
}
-@@ -2047,7 +1937,7 @@ dasd_3990_erp_compound_config(struct dasd_ccw_req * erp, char *sense)
- if ((sense[25] & DASD_SENSE_BIT_1) && (sense[26] & DASD_SENSE_BIT_2)) {
+ if (qid == AdapNormCmdQueue) {
+- if (*index >= ADAP_NORM_CMD_ENTRIES)
++ if (*index >= ADAP_NORM_CMD_ENTRIES)
+ *index = 0; /* Wrap to front of the Producer Queue. */
+ } else {
+- if (*index >= ADAP_NORM_RESP_ENTRIES)
++ if (*index >= ADAP_NORM_RESP_ENTRIES)
+ *index = 0; /* Wrap to front of the Producer Queue. */
+ }
- /* set to suspended duplex state then restart */
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+- if ((*index + 1) == le32_to_cpu(*(q->headers.consumer))) { /* Queue is full */
++ /* Queue is full */
++ if ((*index + 1) == le32_to_cpu(*(q->headers.consumer))) {
+ printk(KERN_WARNING "Queue %d full, %u outstanding.\n",
+ qid, q->numpending);
+ return 0;
+ } else {
+- *entry = q->base + *index;
++ *entry = q->base + *index;
+ return 1;
+ }
+-}
++}
- DEV_MESSAGE(KERN_ERR, device, "%s",
- "Set device to suspended duplex state should be "
-@@ -2081,28 +1971,26 @@ dasd_3990_erp_compound(struct dasd_ccw_req * erp, char *sense)
+ /**
+ * aac_queue_get - get the next free QE
+@@ -320,31 +324,29 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
{
-
- if ((erp->function == dasd_3990_erp_compound_retry) &&
-- (erp->status == DASD_CQR_ERROR)) {
-+ (erp->status == DASD_CQR_NEED_ERP)) {
-
- dasd_3990_erp_compound_path(erp, sense);
+ struct aac_entry * entry = NULL;
+ int map = 0;
+-
++
+ if (qid == AdapNormCmdQueue) {
+ /* if no entries wait for some if caller wants to */
+- while (!aac_get_entry(dev, qid, &entry, index, nonotify))
+- {
++ while (!aac_get_entry(dev, qid, &entry, index, nonotify)) {
+ printk(KERN_ERR "GetEntries failed\n");
+ }
+- /*
+- * Setup queue entry with a command, status and fib mapped
+- */
+- entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
+- map = 1;
++ /*
++ * Setup queue entry with a command, status and fib mapped
++ */
++ entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
++ map = 1;
+ } else {
+- while(!aac_get_entry(dev, qid, &entry, index, nonotify))
+- {
++ while (!aac_get_entry(dev, qid, &entry, index, nonotify)) {
+ /* if no entries wait for some if caller wants to */
+ }
+- /*
+- * Setup queue entry with command, status and fib mapped
+- */
+- entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
+- entry->addr = hw_fib->header.SenderFibAddress;
+- /* Restore adapters pointer to the FIB */
++ /*
++ * Setup queue entry with command, status and fib mapped
++ */
++ entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
++ entry->addr = hw_fib->header.SenderFibAddress;
++ /* Restore adapters pointer to the FIB */
+ hw_fib->header.ReceiverFibAddress = hw_fib->header.SenderFibAddress; /* Let the adapter now where to find its data */
+- map = 0;
++ map = 0;
}
+ /*
+ * If MapFib is true than we need to map the Fib and put pointers
+@@ -356,8 +358,8 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
+ }
- if ((erp->function == dasd_3990_erp_compound_path) &&
-- (erp->status == DASD_CQR_ERROR)) {
-+ (erp->status == DASD_CQR_NEED_ERP)) {
-
- erp = dasd_3990_erp_compound_code(erp, sense);
+ /*
+- * Define the highest level of host to adapter communication routines.
+- * These routines will support host to adapter FS commuication. These
++ * Define the highest level of host to adapter communication routines.
++ * These routines will support host to adapter FS commuication. These
+ * routines have no knowledge of the commuication method used. This level
+ * sends and receives FIBs. This level has no knowledge of how these FIBs
+ * get passed back and forth.
+@@ -379,7 +381,7 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
+ * an event to wait on must be supplied. This event will be set when a
+ * response FIB is received from the adapter.
+ */
+-
++
+ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ int priority, int wait, int reply, fib_callback callback,
+ void *callback_data)
+@@ -392,16 +394,17 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ if (!(hw_fib->header.XferState & cpu_to_le32(HostOwned)))
+ return -EBUSY;
+ /*
+- * There are 5 cases with the wait and reponse requested flags.
++ * There are 5 cases with the wait and reponse requested flags.
+ * The only invalid cases are if the caller requests to wait and
+ * does not request a response and if the caller does not want a
+ * response and the Fib is not allocated from pool. If a response
+ * is not requesed the Fib will just be deallocaed by the DPC
+ * routine when the response comes back from the adapter. No
+- * further processing will be done besides deleting the Fib. We
++ * further processing will be done besides deleting the Fib. We
+ * will have a debug mode where the adapter can notify the host
+ * it had a problem and the host can log that fact.
+ */
++ fibptr->flags = 0;
+ if (wait && !reply) {
+ return -EINVAL;
+ } else if (!wait && reply) {
+@@ -413,7 +416,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ } else if (wait && reply) {
+ hw_fib->header.XferState |= cpu_to_le32(ResponseExpected);
+ FIB_COUNTER_INCREMENT(aac_config.NormalSent);
+- }
++ }
+ /*
+ * Map the fib into 32bits by using the fib number
+ */
+@@ -436,7 +439,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ hw_fib->header.Size = cpu_to_le16(sizeof(struct aac_fibhdr) + size);
+ if (le16_to_cpu(hw_fib->header.Size) > le16_to_cpu(hw_fib->header.SenderSize)) {
+ return -EMSGSIZE;
+- }
++ }
+ /*
+ * Get a queue entry connect the FIB to it and send an notify
+ * the adapter a command is ready.
+@@ -450,10 +453,10 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ if (!wait) {
+ fibptr->callback = callback;
+ fibptr->callback_data = callback_data;
++ fibptr->flags = FIB_CONTEXT_FLAG;
}
- if ((erp->function == dasd_3990_erp_compound_code) &&
-- (erp->status == DASD_CQR_ERROR)) {
-+ (erp->status == DASD_CQR_NEED_ERP)) {
-
- dasd_3990_erp_compound_config(erp, sense);
- }
+ fibptr->done = 0;
+- fibptr->flags = 0;
- /* if no compound action ERP specified, the request failed */
-- if (erp->status == DASD_CQR_ERROR) {
--
-+ if (erp->status == DASD_CQR_NEED_ERP)
- erp->status = DASD_CQR_FAILED;
-- }
+ FIB_COUNTER_INCREMENT(aac_config.FibsSent);
- return erp;
+@@ -473,9 +476,9 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ aac_adapter_deliver(fibptr);
-@@ -2127,7 +2015,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_inspect_32(struct dasd_ccw_req * erp, char *sense)
- {
+ /*
+- * If the caller wanted us to wait for response wait now.
++ * If the caller wanted us to wait for response wait now.
+ */
+-
++
+ if (wait) {
+ spin_unlock_irqrestore(&fibptr->event_lock, flags);
+ /* Only set for first known interruptable command */
+@@ -522,7 +525,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ }
+ spin_unlock_irqrestore(&fibptr->event_lock, flags);
+ BUG_ON(fibptr->done == 0);
+-
++
+ if(unlikely(fibptr->flags & FIB_CONTEXT_FLAG_TIMED_OUT))
+ return -ETIMEDOUT;
+ return 0;
+@@ -537,15 +540,15 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ return 0;
+ }
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
+-/**
++/**
+ * aac_consumer_get - get the top of the queue
+ * @dev: Adapter
+ * @q: Queue
+ * @entry: Return entry
+ *
+ * Will return a pointer to the entry on the top of the queue requested that
+- * we are a consumer of, and return the address of the queue entry. It does
+- * not change the state of the queue.
++ * we are a consumer of, and return the address of the queue entry. It does
++ * not change the state of the queue.
+ */
- erp->function = dasd_3990_erp_inspect_32;
+ int aac_consumer_get(struct aac_dev * dev, struct aac_queue * q, struct aac_entry **entry)
+@@ -560,10 +563,10 @@ int aac_consumer_get(struct aac_dev * dev, struct aac_queue * q, struct aac_entr
+ * the end of the queue, else we just use the entry
+ * pointed to by the header index
+ */
+- if (le32_to_cpu(*q->headers.consumer) >= q->entries)
+- index = 0;
++ if (le32_to_cpu(*q->headers.consumer) >= q->entries)
++ index = 0;
+ else
+- index = le32_to_cpu(*q->headers.consumer);
++ index = le32_to_cpu(*q->headers.consumer);
+ *entry = q->base + index;
+ status = 1;
+ }
+@@ -587,12 +590,12 @@ void aac_consumer_free(struct aac_dev * dev, struct aac_queue *q, u32 qid)
-@@ -2149,8 +2037,7 @@ dasd_3990_erp_inspect_32(struct dasd_ccw_req * erp, char *sense)
+ if ((le32_to_cpu(*q->headers.producer)+1) == le32_to_cpu(*q->headers.consumer))
+ wasfull = 1;
+-
++
+ if (le32_to_cpu(*q->headers.consumer) >= q->entries)
+ *q->headers.consumer = cpu_to_le32(1);
+ else
+ *q->headers.consumer = cpu_to_le32(le32_to_cpu(*q->headers.consumer)+1);
+-
++
+ if (wasfull) {
+ switch (qid) {
- case 0x01: /* fatal error */
- DEV_MESSAGE(KERN_ERR, device, "%s",
-- "Fatal error should have been "
-- "handled within the interrupt handler");
-+ "Retry not recommended - Fatal error");
+@@ -608,7 +611,7 @@ void aac_consumer_free(struct aac_dev * dev, struct aac_queue *q, u32 qid)
+ }
+ aac_adapter_notify(dev, notify);
+ }
+-}
++}
- erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
- break;
-@@ -2253,6 +2140,11 @@ dasd_3990_erp_inspect(struct dasd_ccw_req * erp)
- /* already set up new ERP ! */
- char *sense = erp->refers->irb.ecw;
+ /**
+ * aac_fib_adapter_complete - complete adapter issued fib
+@@ -630,32 +633,32 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
+ if (hw_fib->header.XferState == 0) {
+ if (dev->comm_interface == AAC_COMM_MESSAGE)
+ kfree (hw_fib);
+- return 0;
++ return 0;
+ }
+ /*
+ * If we plan to do anything check the structure type first.
+- */
+- if ( hw_fib->header.StructType != FIB_MAGIC ) {
++ */
++ if (hw_fib->header.StructType != FIB_MAGIC) {
+ if (dev->comm_interface == AAC_COMM_MESSAGE)
+ kfree (hw_fib);
+- return -EINVAL;
++ return -EINVAL;
+ }
+ /*
+ * This block handles the case where the adapter had sent us a
+ * command and we have finished processing the command. We
+- * call completeFib when we are done processing the command
+- * and want to send a response back to the adapter. This will
++ * call completeFib when we are done processing the command
++ * and want to send a response back to the adapter. This will
+ * send the completed cdb to the adapter.
+ */
+ if (hw_fib->header.XferState & cpu_to_le32(SentFromAdapter)) {
+ if (dev->comm_interface == AAC_COMM_MESSAGE) {
+ kfree (hw_fib);
+ } else {
+- u32 index;
+- hw_fib->header.XferState |= cpu_to_le32(HostProcessed);
++ u32 index;
++ hw_fib->header.XferState |= cpu_to_le32(HostProcessed);
+ if (size) {
+ size += sizeof(struct aac_fibhdr);
+- if (size > le16_to_cpu(hw_fib->header.SenderSize))
++ if (size > le16_to_cpu(hw_fib->header.SenderSize))
+ return -EMSGSIZE;
+ hw_fib->header.Size = cpu_to_le16(size);
+ }
+@@ -667,12 +670,11 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
+ if (!(nointr & (int)aac_config.irq_mod))
+ aac_adapter_notify(dev, AdapNormRespQueue);
+ }
++ } else {
++ printk(KERN_WARNING "aac_fib_adapter_complete: "
++ "Unknown xferstate detected.\n");
++ BUG();
+ }
+- else
+- {
+- printk(KERN_WARNING "aac_fib_adapter_complete: Unknown xferstate detected.\n");
+- BUG();
+- }
+ return 0;
+ }
-+ /* if this problem occured on an alias retry on base */
-+ erp_new = dasd_3990_erp_inspect_alias(erp);
-+ if (erp_new)
-+ return erp_new;
+@@ -682,7 +684,7 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
+ *
+ * Will do all necessary work to complete a FIB.
+ */
+-
+
- /* distinguish between 24 and 32 byte sense data */
- if (sense[27] & DASD_SENSE_BIT_0) {
-
-@@ -2287,13 +2179,13 @@ static struct dasd_ccw_req *
- dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
+ int aac_fib_complete(struct fib *fibptr)
{
+ struct hw_fib * hw_fib = fibptr->hw_fib_va;
+@@ -692,15 +694,15 @@ int aac_fib_complete(struct fib *fibptr)
+ */
-- struct dasd_device *device = cqr->device;
-+ struct dasd_device *device = cqr->startdev;
- struct ccw1 *ccw;
+ if (hw_fib->header.XferState == 0)
+- return 0;
++ return 0;
+ /*
+ * If we plan to do anything check the structure type first.
+- */
++ */
- /* allocate additional request block */
- struct dasd_ccw_req *erp;
+ if (hw_fib->header.StructType != FIB_MAGIC)
+- return -EINVAL;
++ return -EINVAL;
+ /*
+- * This block completes a cdb which orginated on the host and we
++ * This block completes a cdb which orginated on the host and we
+ * just need to deallocate the cdb or reinit it. At this point the
+ * command is complete that we had sent to the adapter and this
+ * cdb could be reused.
+@@ -721,7 +723,7 @@ int aac_fib_complete(struct fib *fibptr)
+ fib_dealloc(fibptr);
+ } else {
+ BUG();
+- }
++ }
+ return 0;
+ }
-- erp = dasd_alloc_erp_request((char *) &cqr->magic, 2, 0, cqr->device);
-+ erp = dasd_alloc_erp_request((char *) &cqr->magic, 2, 0, device);
- if (IS_ERR(erp)) {
- if (cqr->retries <= 0) {
- DEV_MESSAGE(KERN_ERR, device, "%s",
-@@ -2305,7 +2197,7 @@ dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
- "Unable to allocate ERP request "
- "(%i retries left)",
- cqr->retries);
-- dasd_set_timer(device, (HZ << 3));
-+ dasd_block_set_timer(device->block, (HZ << 3));
- }
- return cqr;
+@@ -741,7 +743,7 @@ void aac_printf(struct aac_dev *dev, u32 val)
+ {
+ int length = val & 0xffff;
+ int level = (val >> 16) & 0xffff;
+-
++
+ /*
+ * The size of the printfbuf is set in port.c
+ * There is no variable or define for it
+@@ -755,7 +757,7 @@ void aac_printf(struct aac_dev *dev, u32 val)
+ else
+ printk(KERN_INFO "%s:%s", dev->name, cp);
}
-@@ -2319,7 +2211,9 @@ dasd_3990_erp_add_erp(struct dasd_ccw_req * cqr)
- ccw->cda = (long)(cqr->cpaddr);
- erp->function = dasd_3990_erp_add_erp;
- erp->refers = cqr;
-- erp->device = cqr->device;
-+ erp->startdev = device;
-+ erp->memdev = device;
-+ erp->block = cqr->block;
- erp->magic = cqr->magic;
- erp->expires = 0;
- erp->retries = 256;
-@@ -2466,7 +2360,7 @@ static struct dasd_ccw_req *
- dasd_3990_erp_further_erp(struct dasd_ccw_req *erp)
- {
+- memset(cp, 0, 256);
++ memset(cp, 0, 256);
+ }
-- struct dasd_device *device = erp->device;
-+ struct dasd_device *device = erp->startdev;
- char *sense = erp->irb.ecw;
- /* check for 24 byte sense ERP */
-@@ -2557,7 +2451,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
- struct dasd_ccw_req *erp)
+@@ -773,20 +775,20 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
{
+ struct hw_fib * hw_fib = fibptr->hw_fib_va;
+ struct aac_aifcmd * aifcmd = (struct aac_aifcmd *)hw_fib->data;
+- u32 container;
++ u32 channel, id, lun, container;
+ struct scsi_device *device;
+ enum {
+ NOTHING,
+ DELETE,
+ ADD,
+ CHANGE
+- } device_config_needed;
++ } device_config_needed = NOTHING;
-- struct dasd_device *device = erp_head->device;
-+ struct dasd_device *device = erp_head->startdev;
- struct dasd_ccw_req *erp_done = erp_head; /* finished req */
- struct dasd_ccw_req *erp_free = NULL; /* req to be freed */
-
-@@ -2569,13 +2463,13 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
- "original request was lost\n");
+ /* Sniff for container changes */
- /* remove the request from the device queue */
-- list_del(&erp_done->list);
-+ list_del(&erp_done->blocklist);
+ if (!dev || !dev->fsa_dev)
+ return;
+- container = (u32)-1;
++ container = channel = id = lun = (u32)-1;
- erp_free = erp_done;
- erp_done = erp_done->refers;
+ /*
+ * We have set this up to try and minimize the number of
+@@ -796,13 +798,13 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ */
+ switch (le32_to_cpu(aifcmd->command)) {
+ case AifCmdDriverNotify:
+- switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
++ switch (le32_to_cpu(((__le32 *)aifcmd->data)[0])) {
+ /*
+ * Morph or Expand complete
+ */
+ case AifDenMorphComplete:
+ case AifDenVolumeExtendComplete:
+- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
++ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
+ if (container >= dev->maximum_num_containers)
+ break;
- /* free the finished erp request */
-- dasd_free_erp_request(erp_free, erp_free->device);
-+ dasd_free_erp_request(erp_free, erp_free->memdev);
+@@ -814,9 +816,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ */
- } /* end while */
+ if ((dev != NULL) && (dev->scsi_host_ptr != NULL)) {
+- device = scsi_device_lookup(dev->scsi_host_ptr,
+- CONTAINER_TO_CHANNEL(container),
+- CONTAINER_TO_ID(container),
++ device = scsi_device_lookup(dev->scsi_host_ptr,
++ CONTAINER_TO_CHANNEL(container),
++ CONTAINER_TO_ID(container),
+ CONTAINER_TO_LUN(container));
+ if (device) {
+ dev->fsa_dev[container].config_needed = CHANGE;
+@@ -835,25 +837,29 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ if (container >= dev->maximum_num_containers)
+ break;
+ if ((dev->fsa_dev[container].config_waiting_on ==
+- le32_to_cpu(*(u32 *)aifcmd->data)) &&
++ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
+ time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
+ dev->fsa_dev[container].config_waiting_on = 0;
+ } else for (container = 0;
+ container < dev->maximum_num_containers; ++container) {
+ if ((dev->fsa_dev[container].config_waiting_on ==
+- le32_to_cpu(*(u32 *)aifcmd->data)) &&
++ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
+ time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
+ dev->fsa_dev[container].config_waiting_on = 0;
+ }
+ break;
-@@ -2603,7 +2497,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
- erp->retries, erp);
+ case AifCmdEventNotify:
+- switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
++ switch (le32_to_cpu(((__le32 *)aifcmd->data)[0])) {
++ case AifEnBatteryEvent:
++ dev->cache_protected =
++ (((__le32 *)aifcmd->data)[1] == cpu_to_le32(3));
++ break;
+ /*
+ * Add an Array.
+ */
+ case AifEnAddContainer:
+- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
++ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
+ if (container >= dev->maximum_num_containers)
+ break;
+ dev->fsa_dev[container].config_needed = ADD;
+@@ -866,7 +872,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ * Delete an Array.
+ */
+ case AifEnDeleteContainer:
+- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
++ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
+ if (container >= dev->maximum_num_containers)
+ break;
+ dev->fsa_dev[container].config_needed = DELETE;
+@@ -880,7 +886,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ * waiting on something else, setup to wait on a Config Change.
+ */
+ case AifEnContainerChange:
+- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
++ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
+ if (container >= dev->maximum_num_containers)
+ break;
+ if (dev->fsa_dev[container].config_waiting_on &&
+@@ -895,6 +901,60 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ case AifEnConfigChange:
+ break;
- /* handle the request again... */
-- erp->status = DASD_CQR_QUEUED;
-+ erp->status = DASD_CQR_FILLED;
++ case AifEnAddJBOD:
++ case AifEnDeleteJBOD:
++ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
++ if ((container >> 28))
++ break;
++ channel = (container >> 24) & 0xF;
++ if (channel >= dev->maximum_num_channels)
++ break;
++ id = container & 0xFFFF;
++ if (id >= dev->maximum_num_physicals)
++ break;
++ lun = (container >> 16) & 0xFF;
++ channel = aac_phys_to_logical(channel);
++ device_config_needed =
++ (((__le32 *)aifcmd->data)[0] ==
++ cpu_to_le32(AifEnAddJBOD)) ? ADD : DELETE;
++ break;
++
++ case AifEnEnclosureManagement:
++ /*
++ * If in JBOD mode, automatic exposure of new
++ * physical target to be suppressed until configured.
++ */
++ if (dev->jbod)
++ break;
++ switch (le32_to_cpu(((__le32 *)aifcmd->data)[3])) {
++ case EM_DRIVE_INSERTION:
++ case EM_DRIVE_REMOVAL:
++ container = le32_to_cpu(
++ ((__le32 *)aifcmd->data)[2]);
++ if ((container >> 28))
++ break;
++ channel = (container >> 24) & 0xF;
++ if (channel >= dev->maximum_num_channels)
++ break;
++ id = container & 0xFFFF;
++ lun = (container >> 16) & 0xFF;
++ if (id >= dev->maximum_num_physicals) {
++ /* legacy dev_t ? */
++ if ((0x2000 <= id) || lun || channel ||
++ ((channel = (id >> 7) & 0x3F) >=
++ dev->maximum_num_channels))
++ break;
++ lun = (id >> 4) & 7;
++ id &= 0xF;
++ }
++ channel = aac_phys_to_logical(channel);
++ device_config_needed =
++ (((__le32 *)aifcmd->data)[3]
++ == cpu_to_le32(EM_DRIVE_INSERTION)) ?
++ ADD : DELETE;
++ break;
++ }
++ break;
}
- } else {
-@@ -2620,7 +2514,7 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
- * DASD_3990_ERP_ACTION
- *
- * DESCRIPTION
-- * controll routine for 3990 erp actions.
-+ * control routine for 3990 erp actions.
- * Has to be called with the queue lock (namely the s390_irq_lock) acquired.
- *
- * PARAMETER
-@@ -2636,9 +2530,8 @@ dasd_3990_erp_handle_match_erp(struct dasd_ccw_req *erp_head,
- struct dasd_ccw_req *
- dasd_3990_erp_action(struct dasd_ccw_req * cqr)
- {
--
- struct dasd_ccw_req *erp = NULL;
-- struct dasd_device *device = cqr->device;
-+ struct dasd_device *device = cqr->startdev;
- struct dasd_ccw_req *temp_erp = NULL;
+ /*
+@@ -905,13 +965,13 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ if (container >= dev->maximum_num_containers)
+ break;
+ if ((dev->fsa_dev[container].config_waiting_on ==
+- le32_to_cpu(*(u32 *)aifcmd->data)) &&
++ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
+ time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
+ dev->fsa_dev[container].config_waiting_on = 0;
+ } else for (container = 0;
+ container < dev->maximum_num_containers; ++container) {
+ if ((dev->fsa_dev[container].config_waiting_on ==
+- le32_to_cpu(*(u32 *)aifcmd->data)) &&
++ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
+ time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
+ dev->fsa_dev[container].config_waiting_on = 0;
+ }
+@@ -926,9 +986,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ * wait for a container change.
+ */
- if (device->features & DASD_FEATURE_ERPLOG) {
-@@ -2704,10 +2597,11 @@ dasd_3990_erp_action(struct dasd_ccw_req * cqr)
+- if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
+- && ((((u32 *)aifcmd->data)[6] == ((u32 *)aifcmd->data)[5])
+- || (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsSuccess)))) {
++ if (((__le32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero) &&
++ (((__le32 *)aifcmd->data)[6] == ((__le32 *)aifcmd->data)[5] ||
++ ((__le32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsSuccess))) {
+ for (container = 0;
+ container < dev->maximum_num_containers;
+ ++container) {
+@@ -943,9 +1003,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ jiffies;
+ }
}
+- if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
+- && (((u32 *)aifcmd->data)[6] == 0)
+- && (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsRunning))) {
++ if (((__le32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero) &&
++ ((__le32 *)aifcmd->data)[6] == 0 &&
++ ((__le32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsRunning)) {
+ for (container = 0;
+ container < dev->maximum_num_containers;
+ ++container) {
+@@ -963,7 +1023,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ break;
}
-- /* enqueue added ERP request */
-- if (erp->status == DASD_CQR_FILLED) {
-- erp->status = DASD_CQR_QUEUED;
-- list_add(&erp->list, &device->ccw_queue);
-+ /* enqueue ERP request if it's a new one */
-+ if (list_empty(&erp->blocklist)) {
-+ cqr->status = DASD_CQR_IN_ERP;
-+ /* add erp request before the cqr */
-+ list_add_tail(&erp->blocklist, &cqr->blocklist);
+- device_config_needed = NOTHING;
++ if (device_config_needed == NOTHING)
+ for (container = 0; container < dev->maximum_num_containers;
+ ++container) {
+ if ((dev->fsa_dev[container].config_waiting_on == 0) &&
+@@ -972,6 +1032,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ device_config_needed =
+ dev->fsa_dev[container].config_needed;
+ dev->fsa_dev[container].config_needed = NOTHING;
++ channel = CONTAINER_TO_CHANNEL(container);
++ id = CONTAINER_TO_ID(container);
++ lun = CONTAINER_TO_LUN(container);
+ break;
+ }
}
-
- return erp;
-diff --git a/drivers/s390/block/dasd_9336_erp.c b/drivers/s390/block/dasd_9336_erp.c
-deleted file mode 100644
-index 6e08268..0000000
---- a/drivers/s390/block/dasd_9336_erp.c
-+++ /dev/null
-@@ -1,41 +0,0 @@
--/*
-- * File...........: linux/drivers/s390/block/dasd_9336_erp.c
-- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
-- * Bugreports.to..: <Linux390 at de.ibm.com>
-- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
-- *
-- */
--
--#define PRINTK_HEADER "dasd_erp(9336)"
--
--#include "dasd_int.h"
--
--
--/*
-- * DASD_9336_ERP_EXAMINE
-- *
-- * DESCRIPTION
-- * Checks only for fatal/no/recover error.
-- * A detailed examination of the sense data is done later outside
-- * the interrupt handler.
-- *
-- * The logic is based on the 'IBM 3880 Storage Control Reference' manual
-- * 'Chapter 7. 9336 Sense Data'.
-- *
-- * RETURN VALUES
-- * dasd_era_none no error
-- * dasd_era_fatal for all fatal (unrecoverable errors)
-- * dasd_era_recover for all others.
-- */
--dasd_era_t
--dasd_9336_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
--{
-- /* check for successful execution first */
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
--
-- /* examine the 24 byte sense data */
-- return dasd_era_recover;
--
--} /* END dasd_9336_erp_examine */
-diff --git a/drivers/s390/block/dasd_9343_erp.c b/drivers/s390/block/dasd_9343_erp.c
-deleted file mode 100644
-index ddecb98..0000000
---- a/drivers/s390/block/dasd_9343_erp.c
-+++ /dev/null
-@@ -1,21 +0,0 @@
--/*
-- * File...........: linux/drivers/s390/block/dasd_9345_erp.c
-- * Author(s)......: Holger Smolinski <Holger.Smolinski at de.ibm.com>
-- * Bugreports.to..: <Linux390 at de.ibm.com>
-- * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
-- *
-- */
--
--#define PRINTK_HEADER "dasd_erp(9343)"
--
--#include "dasd_int.h"
--
--dasd_era_t
--dasd_9343_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
--{
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
--
-- return dasd_era_recover;
--}
-diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
-new file mode 100644
-index 0000000..3a40bee
---- /dev/null
-+++ b/drivers/s390/block/dasd_alias.c
-@@ -0,0 +1,903 @@
-+/*
-+ * PAV alias management for the DASD ECKD discipline
-+ *
-+ * Copyright IBM Corporation, 2007
-+ * Author(s): Stefan Weinhuber <wein at de.ibm.com>
-+ */
-+
-+#include <linux/list.h>
-+#include <asm/ebcdic.h>
-+#include "dasd_int.h"
-+#include "dasd_eckd.h"
-+
-+#ifdef PRINTK_HEADER
-+#undef PRINTK_HEADER
-+#endif /* PRINTK_HEADER */
-+#define PRINTK_HEADER "dasd(eckd):"
-+
-+
-+/*
-+ * General concept of alias management:
-+ * - PAV and DASD alias management is specific to the eckd discipline.
-+ * - A device is connected to an lcu as long as the device exists.
-+ * dasd_alias_make_device_known_to_lcu will be called wenn the
-+ * device is checked by the eckd discipline and
-+ * dasd_alias_disconnect_device_from_lcu will be called
-+ * before the device is deleted.
-+ * - The dasd_alias_add_device / dasd_alias_remove_device
-+ * functions mark the point when a device is 'ready for service'.
-+ * - A summary unit check is a rare occasion, but it is mandatory to
-+ * support it. It requires some complex recovery actions before the
-+ * devices can be used again (see dasd_alias_handle_summary_unit_check).
-+ * - dasd_alias_get_start_dev will find an alias device that can be used
-+ * instead of the base device and does some (very simple) load balancing.
-+ * This is the function that gets called for each I/O, so when improving
-+ * something, this function should get faster or better, the rest has just
-+ * to be correct.
-+ */
-+
-+
-+static void summary_unit_check_handling_work(struct work_struct *);
-+static void lcu_update_work(struct work_struct *);
-+static int _schedule_lcu_update(struct alias_lcu *, struct dasd_device *);
-+
-+static struct alias_root aliastree = {
-+ .serverlist = LIST_HEAD_INIT(aliastree.serverlist),
-+ .lock = __SPIN_LOCK_UNLOCKED(aliastree.lock),
-+};
-+
-+static struct alias_server *_find_server(struct dasd_uid *uid)
-+{
-+ struct alias_server *pos;
-+ list_for_each_entry(pos, &aliastree.serverlist, server) {
-+ if (!strncmp(pos->uid.vendor, uid->vendor,
-+ sizeof(uid->vendor))
-+ && !strncmp(pos->uid.serial, uid->serial,
-+ sizeof(uid->serial)))
-+ return pos;
-+ };
-+ return NULL;
-+}
-+
-+static struct alias_lcu *_find_lcu(struct alias_server *server,
-+ struct dasd_uid *uid)
-+{
-+ struct alias_lcu *pos;
-+ list_for_each_entry(pos, &server->lculist, lcu) {
-+ if (pos->uid.ssid == uid->ssid)
-+ return pos;
-+ };
-+ return NULL;
-+}
-+
-+static struct alias_pav_group *_find_group(struct alias_lcu *lcu,
-+ struct dasd_uid *uid)
-+{
-+ struct alias_pav_group *pos;
-+ __u8 search_unit_addr;
-+
-+ /* for hyper pav there is only one group */
-+ if (lcu->pav == HYPER_PAV) {
-+ if (list_empty(&lcu->grouplist))
-+ return NULL;
-+ else
-+ return list_first_entry(&lcu->grouplist,
-+ struct alias_pav_group, group);
-+ }
-+
-+ /* for base pav we have to find the group that matches the base */
-+ if (uid->type == UA_BASE_DEVICE)
-+ search_unit_addr = uid->real_unit_addr;
-+ else
-+ search_unit_addr = uid->base_unit_addr;
-+ list_for_each_entry(pos, &lcu->grouplist, group) {
-+ if (pos->uid.base_unit_addr == search_unit_addr)
-+ return pos;
-+ };
-+ return NULL;
-+}
-+
-+static struct alias_server *_allocate_server(struct dasd_uid *uid)
-+{
-+ struct alias_server *server;
-+
-+ server = kzalloc(sizeof(*server), GFP_KERNEL);
-+ if (!server)
-+ return ERR_PTR(-ENOMEM);
-+ memcpy(server->uid.vendor, uid->vendor, sizeof(uid->vendor));
-+ memcpy(server->uid.serial, uid->serial, sizeof(uid->serial));
-+ INIT_LIST_HEAD(&server->server);
-+ INIT_LIST_HEAD(&server->lculist);
-+ return server;
-+}
-+
-+static void _free_server(struct alias_server *server)
-+{
-+ kfree(server);
-+}
-+
-+static struct alias_lcu *_allocate_lcu(struct dasd_uid *uid)
-+{
-+ struct alias_lcu *lcu;
-+
-+ lcu = kzalloc(sizeof(*lcu), GFP_KERNEL);
-+ if (!lcu)
-+ return ERR_PTR(-ENOMEM);
-+ lcu->uac = kzalloc(sizeof(*(lcu->uac)), GFP_KERNEL | GFP_DMA);
-+ if (!lcu->uac)
-+ goto out_err1;
-+ lcu->rsu_cqr = kzalloc(sizeof(*lcu->rsu_cqr), GFP_KERNEL | GFP_DMA);
-+ if (!lcu->rsu_cqr)
-+ goto out_err2;
-+ lcu->rsu_cqr->cpaddr = kzalloc(sizeof(struct ccw1),
-+ GFP_KERNEL | GFP_DMA);
-+ if (!lcu->rsu_cqr->cpaddr)
-+ goto out_err3;
-+ lcu->rsu_cqr->data = kzalloc(16, GFP_KERNEL | GFP_DMA);
-+ if (!lcu->rsu_cqr->data)
-+ goto out_err4;
-+
-+ memcpy(lcu->uid.vendor, uid->vendor, sizeof(uid->vendor));
-+ memcpy(lcu->uid.serial, uid->serial, sizeof(uid->serial));
-+ lcu->uid.ssid = uid->ssid;
-+ lcu->pav = NO_PAV;
-+ lcu->flags = NEED_UAC_UPDATE | UPDATE_PENDING;
-+ INIT_LIST_HEAD(&lcu->lcu);
-+ INIT_LIST_HEAD(&lcu->inactive_devices);
-+ INIT_LIST_HEAD(&lcu->active_devices);
-+ INIT_LIST_HEAD(&lcu->grouplist);
-+ INIT_WORK(&lcu->suc_data.worker, summary_unit_check_handling_work);
-+ INIT_DELAYED_WORK(&lcu->ruac_data.dwork, lcu_update_work);
-+ spin_lock_init(&lcu->lock);
-+ return lcu;
-+
-+out_err4:
-+ kfree(lcu->rsu_cqr->cpaddr);
-+out_err3:
-+ kfree(lcu->rsu_cqr);
-+out_err2:
-+ kfree(lcu->uac);
-+out_err1:
-+ kfree(lcu);
-+ return ERR_PTR(-ENOMEM);
-+}
-+
-+static void _free_lcu(struct alias_lcu *lcu)
-+{
-+ kfree(lcu->rsu_cqr->data);
-+ kfree(lcu->rsu_cqr->cpaddr);
-+ kfree(lcu->rsu_cqr);
-+ kfree(lcu->uac);
-+ kfree(lcu);
-+}
-+
-+/*
-+ * This is the function that will allocate all the server and lcu data,
-+ * so this function must be called first for a new device.
-+ * If the return value is 1, the lcu was already known before, if it
-+ * is 0, this is a new lcu.
-+ * Negative return code indicates that something went wrong (e.g. -ENOMEM)
-+ */
-+int dasd_alias_make_device_known_to_lcu(struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ unsigned long flags;
-+ struct alias_server *server, *newserver;
-+ struct alias_lcu *lcu, *newlcu;
-+ int is_lcu_known;
-+ struct dasd_uid *uid;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ uid = &private->uid;
-+ spin_lock_irqsave(&aliastree.lock, flags);
-+ is_lcu_known = 1;
-+ server = _find_server(uid);
-+ if (!server) {
-+ spin_unlock_irqrestore(&aliastree.lock, flags);
-+ newserver = _allocate_server(uid);
-+ if (IS_ERR(newserver))
-+ return PTR_ERR(newserver);
-+ spin_lock_irqsave(&aliastree.lock, flags);
-+ server = _find_server(uid);
-+ if (!server) {
-+ list_add(&newserver->server, &aliastree.serverlist);
-+ server = newserver;
-+ is_lcu_known = 0;
-+ } else {
-+ /* someone was faster */
-+ _free_server(newserver);
-+ }
-+ }
-+
-+ lcu = _find_lcu(server, uid);
-+ if (!lcu) {
-+ spin_unlock_irqrestore(&aliastree.lock, flags);
-+ newlcu = _allocate_lcu(uid);
-+ if (IS_ERR(newlcu))
-+ return PTR_ERR(newlcu);
-+ spin_lock_irqsave(&aliastree.lock, flags);
-+ lcu = _find_lcu(server, uid);
-+ if (!lcu) {
-+ list_add(&newlcu->lcu, &server->lculist);
-+ lcu = newlcu;
-+ is_lcu_known = 0;
-+ } else {
-+ /* someone was faster */
-+ _free_lcu(newlcu);
-+ }
-+ }
-+ spin_lock(&lcu->lock);
-+ list_add(&device->alias_list, &lcu->inactive_devices);
-+ private->lcu = lcu;
-+ spin_unlock(&lcu->lock);
-+ spin_unlock_irqrestore(&aliastree.lock, flags);
-+
-+ return is_lcu_known;
-+}
-+
-+/*
-+ * This function removes a device from the scope of alias management.
-+ * The complicated part is to make sure that it is not in use by
-+ * any of the workers. If necessary, cancel the work.
-+ */
-+void dasd_alias_disconnect_device_from_lcu(struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ unsigned long flags;
-+ struct alias_lcu *lcu;
-+ struct alias_server *server;
-+ int was_pending;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ lcu = private->lcu;
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ list_del_init(&device->alias_list);
-+ /* make sure that the workers don't use this device */
-+ if (device == lcu->suc_data.device) {
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ cancel_work_sync(&lcu->suc_data.worker);
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ if (device == lcu->suc_data.device)
-+ lcu->suc_data.device = NULL;
-+ }
-+ was_pending = 0;
-+ if (device == lcu->ruac_data.device) {
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ was_pending = 1;
-+ cancel_delayed_work_sync(&lcu->ruac_data.dwork);
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ if (device == lcu->ruac_data.device)
-+ lcu->ruac_data.device = NULL;
-+ }
-+ private->lcu = NULL;
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+
-+ spin_lock_irqsave(&aliastree.lock, flags);
-+ spin_lock(&lcu->lock);
-+ if (list_empty(&lcu->grouplist) &&
-+ list_empty(&lcu->active_devices) &&
-+ list_empty(&lcu->inactive_devices)) {
-+ list_del(&lcu->lcu);
-+ spin_unlock(&lcu->lock);
-+ _free_lcu(lcu);
-+ lcu = NULL;
-+ } else {
-+ if (was_pending)
-+ _schedule_lcu_update(lcu, NULL);
-+ spin_unlock(&lcu->lock);
-+ }
-+ server = _find_server(&private->uid);
-+ if (server && list_empty(&server->lculist)) {
-+ list_del(&server->server);
-+ _free_server(server);
-+ }
-+ spin_unlock_irqrestore(&aliastree.lock, flags);
-+}
-+
-+/*
-+ * This function assumes that the unit address configuration stored
-+ * in the lcu is up to date and will update the device uid before
-+ * adding it to a pav group.
-+ */
-+static int _add_device_to_lcu(struct alias_lcu *lcu,
-+ struct dasd_device *device)
-+{
-+
-+ struct dasd_eckd_private *private;
-+ struct alias_pav_group *group;
-+ struct dasd_uid *uid;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ uid = &private->uid;
-+ uid->type = lcu->uac->unit[uid->real_unit_addr].ua_type;
-+ uid->base_unit_addr = lcu->uac->unit[uid->real_unit_addr].base_ua;
-+ dasd_set_uid(device->cdev, &private->uid);
-+
-+ /* if we have no PAV anyway, we don't need to bother with PAV groups */
-+ if (lcu->pav == NO_PAV) {
-+ list_move(&device->alias_list, &lcu->active_devices);
-+ return 0;
-+ }
-+
-+ group = _find_group(lcu, uid);
-+ if (!group) {
-+ group = kzalloc(sizeof(*group), GFP_ATOMIC);
-+ if (!group)
-+ return -ENOMEM;
-+ memcpy(group->uid.vendor, uid->vendor, sizeof(uid->vendor));
-+ memcpy(group->uid.serial, uid->serial, sizeof(uid->serial));
-+ group->uid.ssid = uid->ssid;
-+ if (uid->type == UA_BASE_DEVICE)
-+ group->uid.base_unit_addr = uid->real_unit_addr;
-+ else
-+ group->uid.base_unit_addr = uid->base_unit_addr;
-+ INIT_LIST_HEAD(&group->group);
-+ INIT_LIST_HEAD(&group->baselist);
-+ INIT_LIST_HEAD(&group->aliaslist);
-+ list_add(&group->group, &lcu->grouplist);
-+ }
-+ if (uid->type == UA_BASE_DEVICE)
-+ list_move(&device->alias_list, &group->baselist);
-+ else
-+ list_move(&device->alias_list, &group->aliaslist);
-+ private->pavgroup = group;
-+ return 0;
-+}
-+
-+static void _remove_device_from_lcu(struct alias_lcu *lcu,
-+ struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ struct alias_pav_group *group;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ list_move(&device->alias_list, &lcu->inactive_devices);
-+ group = private->pavgroup;
-+ if (!group)
-+ return;
-+ private->pavgroup = NULL;
-+ if (list_empty(&group->baselist) && list_empty(&group->aliaslist)) {
-+ list_del(&group->group);
-+ kfree(group);
-+ return;
-+ }
-+ if (group->next == device)
-+ group->next = NULL;
-+}
-+
-+static int read_unit_address_configuration(struct dasd_device *device,
-+ struct alias_lcu *lcu)
-+{
-+ struct dasd_psf_prssd_data *prssdp;
-+ struct dasd_ccw_req *cqr;
-+ struct ccw1 *ccw;
-+ int rc;
-+ unsigned long flags;
-+
-+ cqr = dasd_kmalloc_request("ECKD",
-+ 1 /* PSF */ + 1 /* RSSD */ ,
-+ (sizeof(struct dasd_psf_prssd_data)),
-+ device);
-+ if (IS_ERR(cqr))
-+ return PTR_ERR(cqr);
-+ cqr->startdev = device;
-+ cqr->memdev = device;
-+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
-+ cqr->retries = 10;
-+ cqr->expires = 20 * HZ;
-+
-+ /* Prepare for Read Subsystem Data */
-+ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
-+ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
-+ prssdp->order = PSF_ORDER_PRSSD;
-+ prssdp->suborder = 0x0e; /* Read unit address configuration */
-+ /* all other bytes of prssdp must be zero */
-+
-+ ccw = cqr->cpaddr;
-+ ccw->cmd_code = DASD_ECKD_CCW_PSF;
-+ ccw->count = sizeof(struct dasd_psf_prssd_data);
-+ ccw->flags |= CCW_FLAG_CC;
-+ ccw->cda = (__u32)(addr_t) prssdp;
-+
-+ /* Read Subsystem Data - unit address configuration */
-+ memset(lcu->uac, 0, sizeof(*(lcu->uac)));
-+
-+ ccw++;
-+ ccw->cmd_code = DASD_ECKD_CCW_RSSD;
-+ ccw->count = sizeof(*(lcu->uac));
-+ ccw->cda = (__u32)(addr_t) lcu->uac;
-+
-+ cqr->buildclk = get_clock();
-+ cqr->status = DASD_CQR_FILLED;
-+
-+ /* need to unset flag here to detect race with summary unit check */
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ lcu->flags &= ~NEED_UAC_UPDATE;
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+
-+ do {
-+ rc = dasd_sleep_on(cqr);
-+ } while (rc && (cqr->retries > 0));
-+ if (rc) {
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ lcu->flags |= NEED_UAC_UPDATE;
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ }
-+ dasd_kfree_request(cqr, cqr->memdev);
-+ return rc;
-+}
-+
-+static int _lcu_update(struct dasd_device *refdev, struct alias_lcu *lcu)
-+{
-+ unsigned long flags;
-+ struct alias_pav_group *pavgroup, *tempgroup;
-+ struct dasd_device *device, *tempdev;
-+ int i, rc;
-+ struct dasd_eckd_private *private;
-+
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ list_for_each_entry_safe(pavgroup, tempgroup, &lcu->grouplist, group) {
-+ list_for_each_entry_safe(device, tempdev, &pavgroup->baselist,
-+ alias_list) {
-+ list_move(&device->alias_list, &lcu->active_devices);
-+ private = (struct dasd_eckd_private *) device->private;
-+ private->pavgroup = NULL;
-+ }
-+ list_for_each_entry_safe(device, tempdev, &pavgroup->aliaslist,
-+ alias_list) {
-+ list_move(&device->alias_list, &lcu->active_devices);
-+ private = (struct dasd_eckd_private *) device->private;
-+ private->pavgroup = NULL;
-+ }
-+ list_del(&pavgroup->group);
-+ kfree(pavgroup);
-+ }
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+
-+ rc = read_unit_address_configuration(refdev, lcu);
-+ if (rc)
-+ return rc;
-+
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ lcu->pav = NO_PAV;
-+ for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) {
-+ switch (lcu->uac->unit[i].ua_type) {
-+ case UA_BASE_PAV_ALIAS:
-+ lcu->pav = BASE_PAV;
-+ break;
-+ case UA_HYPER_PAV_ALIAS:
-+ lcu->pav = HYPER_PAV;
-+ break;
-+ }
-+ if (lcu->pav != NO_PAV)
-+ break;
-+ }
-+
-+ list_for_each_entry_safe(device, tempdev, &lcu->active_devices,
-+ alias_list) {
-+ _add_device_to_lcu(lcu, device);
-+ }
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ return 0;
-+}
-+
-+static void lcu_update_work(struct work_struct *work)
-+{
-+ struct alias_lcu *lcu;
-+ struct read_uac_work_data *ruac_data;
-+ struct dasd_device *device;
-+ unsigned long flags;
-+ int rc;
-+
-+ ruac_data = container_of(work, struct read_uac_work_data, dwork.work);
-+ lcu = container_of(ruac_data, struct alias_lcu, ruac_data);
-+ device = ruac_data->device;
-+ rc = _lcu_update(device, lcu);
-+ /*
-+ * Need to check flags again, as there could have been another
-+ * prepare_update or a new device while we were still
-+ * processing the data
-+ */
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ if (rc || (lcu->flags & NEED_UAC_UPDATE)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "could not update"
-+ " alias data in lcu (rc = %d), retry later", rc);
-+ schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ);
-+ } else {
-+ lcu->ruac_data.device = NULL;
-+ lcu->flags &= ~UPDATE_PENDING;
-+ }
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+}
-+
-+static int _schedule_lcu_update(struct alias_lcu *lcu,
-+ struct dasd_device *device)
-+{
-+ struct dasd_device *usedev = NULL;
-+ struct alias_pav_group *group;
-+
-+ lcu->flags |= NEED_UAC_UPDATE;
-+ if (lcu->ruac_data.device) {
-+ /* already scheduled or running */
-+ return 0;
-+ }
-+ if (device && !list_empty(&device->alias_list))
-+ usedev = device;
-+
-+ if (!usedev && !list_empty(&lcu->grouplist)) {
-+ group = list_first_entry(&lcu->grouplist,
-+ struct alias_pav_group, group);
-+ if (!list_empty(&group->baselist))
-+ usedev = list_first_entry(&group->baselist,
-+ struct dasd_device,
-+ alias_list);
-+ else if (!list_empty(&group->aliaslist))
-+ usedev = list_first_entry(&group->aliaslist,
-+ struct dasd_device,
-+ alias_list);
-+ }
-+ if (!usedev && !list_empty(&lcu->active_devices)) {
-+ usedev = list_first_entry(&lcu->active_devices,
-+ struct dasd_device, alias_list);
-+ }
-+ /*
-+ * if we haven't found a proper device yet, give up for now; the next
-+ * device that is set active will trigger an lcu update
-+ */
-+ if (!usedev)
-+ return -EINVAL;
-+ lcu->ruac_data.device = usedev;
-+ schedule_delayed_work(&lcu->ruac_data.dwork, 0);
-+ return 0;
-+}
-+
-+int dasd_alias_add_device(struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ struct alias_lcu *lcu;
-+ unsigned long flags;
-+ int rc;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ lcu = private->lcu;
-+ rc = 0;
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ if (!(lcu->flags & UPDATE_PENDING)) {
-+ rc = _add_device_to_lcu(lcu, device);
-+ if (rc)
-+ lcu->flags |= UPDATE_PENDING;
-+ }
-+ if (lcu->flags & UPDATE_PENDING) {
-+ list_move(&device->alias_list, &lcu->active_devices);
-+ _schedule_lcu_update(lcu, device);
-+ }
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ return rc;
-+}
-+
-+int dasd_alias_remove_device(struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ struct alias_lcu *lcu;
-+ unsigned long flags;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ lcu = private->lcu;
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ _remove_device_from_lcu(lcu, device);
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ return 0;
-+}
-+
-+struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
-+{
-+
-+ struct dasd_device *alias_device;
-+ struct alias_pav_group *group;
-+ struct alias_lcu *lcu;
-+ struct dasd_eckd_private *private, *alias_priv;
-+ unsigned long flags;
-+
-+ private = (struct dasd_eckd_private *) base_device->private;
-+ group = private->pavgroup;
-+ lcu = private->lcu;
-+ if (!group || !lcu)
-+ return NULL;
-+ if (lcu->pav == NO_PAV ||
-+ lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING))
-+ return NULL;
-+
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ alias_device = group->next;
-+ if (!alias_device) {
-+ if (list_empty(&group->aliaslist)) {
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ return NULL;
-+ } else {
-+ alias_device = list_first_entry(&group->aliaslist,
-+ struct dasd_device,
-+ alias_list);
-+ }
-+ }
-+ if (list_is_last(&alias_device->alias_list, &group->aliaslist))
-+ group->next = list_first_entry(&group->aliaslist,
-+ struct dasd_device, alias_list);
-+ else
-+ group->next = list_first_entry(&alias_device->alias_list,
-+ struct dasd_device, alias_list);
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ alias_priv = (struct dasd_eckd_private *) alias_device->private;
-+ if ((alias_priv->count < private->count) && !alias_device->stopped)
-+ return alias_device;
-+ else
-+ return NULL;
-+}
-+
-+/*
-+ * Summary unit check handling depends on the way alias devices
-+ * are handled, so it is done here rather than in dasd_eckd.c
-+ */
-+static int reset_summary_unit_check(struct alias_lcu *lcu,
-+ struct dasd_device *device,
-+ char reason)
-+{
-+ struct dasd_ccw_req *cqr;
-+ int rc = 0;
-+
-+ cqr = lcu->rsu_cqr;
-+ strncpy((char *) &cqr->magic, "ECKD", 4);
-+ ASCEBC((char *) &cqr->magic, 4);
-+ cqr->cpaddr->cmd_code = DASD_ECKD_CCW_RSCK;
-+ cqr->cpaddr->flags = 0;
-+ cqr->cpaddr->count = 16;
-+ cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
-+ ((char *)cqr->data)[0] = reason;
-+
-+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
-+ cqr->retries = 255; /* set retry counter to enable basic ERP */
-+ cqr->startdev = device;
-+ cqr->memdev = device;
-+ cqr->block = NULL;
-+ cqr->expires = 5 * HZ;
-+ cqr->buildclk = get_clock();
-+ cqr->status = DASD_CQR_FILLED;
-+
-+ rc = dasd_sleep_on_immediatly(cqr);
-+ return rc;
-+}
-+
-+static void _restart_all_base_devices_on_lcu(struct alias_lcu *lcu)
-+{
-+ struct alias_pav_group *pavgroup;
-+ struct dasd_device *device;
-+ struct dasd_eckd_private *private;
-+
-+ /* active and inactive list can contain alias as well as base devices */
-+ list_for_each_entry(device, &lcu->active_devices, alias_list) {
-+ private = (struct dasd_eckd_private *) device->private;
-+ if (private->uid.type != UA_BASE_DEVICE)
-+ continue;
-+ dasd_schedule_block_bh(device->block);
-+ dasd_schedule_device_bh(device);
-+ }
-+ list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
-+ private = (struct dasd_eckd_private *) device->private;
-+ if (private->uid.type != UA_BASE_DEVICE)
-+ continue;
-+ dasd_schedule_block_bh(device->block);
-+ dasd_schedule_device_bh(device);
-+ }
-+ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
-+ list_for_each_entry(device, &pavgroup->baselist, alias_list) {
-+ dasd_schedule_block_bh(device->block);
-+ dasd_schedule_device_bh(device);
-+ }
-+ }
-+}
-+
-+static void flush_all_alias_devices_on_lcu(struct alias_lcu *lcu)
-+{
-+ struct alias_pav_group *pavgroup;
-+ struct dasd_device *device, *temp;
-+ struct dasd_eckd_private *private;
-+ int rc;
-+ unsigned long flags;
-+ LIST_HEAD(active);
-+
-+ /*
-+ * Problem here is that dasd_flush_device_queue may wait
-+ * for termination of a request to complete. We can't keep
-+ * the lcu lock during that time, so we must assume that
-+ * the lists may have changed.
-+ * Idea: first gather all active alias devices in a separate list,
-+ * then flush the first element of this list unlocked, and afterwards
-+ * check if it is still on the list before moving it to the
-+ * active_devices list.
-+ */
-+
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ list_for_each_entry_safe(device, temp, &lcu->active_devices,
-+ alias_list) {
-+ private = (struct dasd_eckd_private *) device->private;
-+ if (private->uid.type == UA_BASE_DEVICE)
-+ continue;
-+ list_move(&device->alias_list, &active);
-+ }
-+
-+ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
-+ list_splice_init(&pavgroup->aliaslist, &active);
-+ }
-+ while (!list_empty(&active)) {
-+ device = list_first_entry(&active, struct dasd_device,
-+ alias_list);
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+ rc = dasd_flush_device_queue(device);
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ /*
-+ * only move device around if it wasn't moved away while we
-+ * were waiting for the flush
-+ */
-+ if (device == list_first_entry(&active,
-+ struct dasd_device, alias_list))
-+ list_move(&device->alias_list, &lcu->active_devices);
-+ }
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+}
-+
-+/*
-+ * This function is called in interrupt context, so the
-+ * cdev lock for device is already locked!
-+ */
-+static void _stop_all_devices_on_lcu(struct alias_lcu *lcu,
-+ struct dasd_device *device)
-+{
-+ struct alias_pav_group *pavgroup;
-+ struct dasd_device *pos;
-+
-+ list_for_each_entry(pos, &lcu->active_devices, alias_list) {
-+ if (pos != device)
-+ spin_lock(get_ccwdev_lock(pos->cdev));
-+ pos->stopped |= DASD_STOPPED_SU;
-+ if (pos != device)
-+ spin_unlock(get_ccwdev_lock(pos->cdev));
-+ }
-+ list_for_each_entry(pos, &lcu->inactive_devices, alias_list) {
-+ if (pos != device)
-+ spin_lock(get_ccwdev_lock(pos->cdev));
-+ pos->stopped |= DASD_STOPPED_SU;
-+ if (pos != device)
-+ spin_unlock(get_ccwdev_lock(pos->cdev));
-+ }
-+ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
-+ list_for_each_entry(pos, &pavgroup->baselist, alias_list) {
-+ if (pos != device)
-+ spin_lock(get_ccwdev_lock(pos->cdev));
-+ pos->stopped |= DASD_STOPPED_SU;
-+ if (pos != device)
-+ spin_unlock(get_ccwdev_lock(pos->cdev));
-+ }
-+ list_for_each_entry(pos, &pavgroup->aliaslist, alias_list) {
-+ if (pos != device)
-+ spin_lock(get_ccwdev_lock(pos->cdev));
-+ pos->stopped |= DASD_STOPPED_SU;
-+ if (pos != device)
-+ spin_unlock(get_ccwdev_lock(pos->cdev));
-+ }
-+ }
-+}
-+
-+static void _unstop_all_devices_on_lcu(struct alias_lcu *lcu)
-+{
-+ struct alias_pav_group *pavgroup;
-+ struct dasd_device *device;
-+ unsigned long flags;
-+
-+ list_for_each_entry(device, &lcu->active_devices, alias_list) {
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-+ device->stopped &= ~DASD_STOPPED_SU;
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ }
-+
-+ list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-+ device->stopped &= ~DASD_STOPPED_SU;
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ }
-+
-+ list_for_each_entry(pavgroup, &lcu->grouplist, group) {
-+ list_for_each_entry(device, &pavgroup->baselist, alias_list) {
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-+ device->stopped &= ~DASD_STOPPED_SU;
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
-+ flags);
-+ }
-+ list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-+ device->stopped &= ~DASD_STOPPED_SU;
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
-+ flags);
-+ }
-+ }
-+}
-+
-+static void summary_unit_check_handling_work(struct work_struct *work)
-+{
-+ struct alias_lcu *lcu;
-+ struct summary_unit_check_work_data *suc_data;
-+ unsigned long flags;
-+ struct dasd_device *device;
-+
-+ suc_data = container_of(work, struct summary_unit_check_work_data,
-+ worker);
-+ lcu = container_of(suc_data, struct alias_lcu, suc_data);
-+ device = suc_data->device;
-+
-+ /* 1. flush alias devices */
-+ flush_all_alias_devices_on_lcu(lcu);
-+
-+ /* 2. reset summary unit check */
-+ spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-+ device->stopped &= ~(DASD_STOPPED_SU | DASD_STOPPED_PENDING);
-+ spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ reset_summary_unit_check(lcu, device, suc_data->reason);
-+
-+ spin_lock_irqsave(&lcu->lock, flags);
-+ _unstop_all_devices_on_lcu(lcu);
-+ _restart_all_base_devices_on_lcu(lcu);
-+ /* 3. read new alias configuration */
-+ _schedule_lcu_update(lcu, device);
-+ lcu->suc_data.device = NULL;
-+ spin_unlock_irqrestore(&lcu->lock, flags);
-+}
-+
-+/*
-+ * note: this will be called from int handler context (cdev locked)
-+ */
-+void dasd_alias_handle_summary_unit_check(struct dasd_device *device,
-+ struct irb *irb)
-+{
-+ struct alias_lcu *lcu;
-+ char reason;
-+ struct dasd_eckd_private *private;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+
-+ reason = irb->ecw[8];
-+ DEV_MESSAGE(KERN_WARNING, device, "%s %x",
-+ "eckd handle summary unit check: reason", reason);
-+
-+ lcu = private->lcu;
-+ if (!lcu) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "device not ready to handle summary"
-+ " unit check (no lcu structure)");
-+ return;
-+ }
-+ spin_lock(&lcu->lock);
-+ _stop_all_devices_on_lcu(lcu, device);
-+ /* prepare for lcu_update */
-+ private->lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING;
-+ /* If this device is about to be removed, just return and wait for
-+ * the next interrupt on a different device
-+ */
-+ if (list_empty(&device->alias_list)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "device is in offline processing,"
-+ " don't do summary unit check handling");
-+ spin_unlock(&lcu->lock);
-+ return;
-+ }
-+ if (lcu->suc_data.device) {
-+ /* already scheduled or running */
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "previous instance of summary unit check worker"
-+ " still pending");
-+ spin_unlock(&lcu->lock);
-+ return;
+@@ -995,34 +1058,56 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ /*
+ * force reload of disk info via aac_probe_container
+ */
+- if ((device_config_needed == CHANGE)
+- && (dev->fsa_dev[container].valid == 1))
+- dev->fsa_dev[container].valid = 2;
+- if ((device_config_needed == CHANGE) ||
+- (device_config_needed == ADD))
++ if ((channel == CONTAINER_CHANNEL) &&
++ (device_config_needed != NOTHING)) {
++ if (dev->fsa_dev[container].valid == 1)
++ dev->fsa_dev[container].valid = 2;
+ aac_probe_container(dev, container);
+- device = scsi_device_lookup(dev->scsi_host_ptr,
+- CONTAINER_TO_CHANNEL(container),
+- CONTAINER_TO_ID(container),
+- CONTAINER_TO_LUN(container));
+ }
-+ lcu->suc_data.reason = reason;
-+ lcu->suc_data.device = device;
-+ spin_unlock(&lcu->lock);
-+ schedule_work(&lcu->suc_data.worker);
-+}
-diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c
-index 0c67258..f4fb402 100644
---- a/drivers/s390/block/dasd_devmap.c
-+++ b/drivers/s390/block/dasd_devmap.c
-@@ -49,22 +49,6 @@ struct dasd_devmap {
- };
++ device = scsi_device_lookup(dev->scsi_host_ptr, channel, id, lun);
+ if (device) {
+ switch (device_config_needed) {
+ case DELETE:
++ if (scsi_device_online(device)) {
++ scsi_device_set_state(device, SDEV_OFFLINE);
++ sdev_printk(KERN_INFO, device,
++ "Device offlined - %s\n",
++ (channel == CONTAINER_CHANNEL) ?
++ "array deleted" :
++ "enclosure services event");
++ }
++ break;
++ case ADD:
++ if (!scsi_device_online(device)) {
++ sdev_printk(KERN_INFO, device,
++ "Device online - %s\n",
++ (channel == CONTAINER_CHANNEL) ?
++ "array created" :
++ "enclosure services event");
++ scsi_device_set_state(device, SDEV_RUNNING);
++ }
++ /* FALLTHRU */
+ case CHANGE:
++ if ((channel == CONTAINER_CHANNEL)
++ && (!dev->fsa_dev[container].valid)) {
++ if (!scsi_device_online(device))
++ break;
++ scsi_device_set_state(device, SDEV_OFFLINE);
++ sdev_printk(KERN_INFO, device,
++ "Device offlined - %s\n",
++ "array failed");
++ break;
++ }
+ scsi_rescan_device(&device->sdev_gendev);
- /*
-- * dasd_server_ssid_map contains a globally unique storage server subsystem ID.
-- * dasd_server_ssid_list contains the list of all subsystem IDs accessed by
-- * the DASD device driver.
-- */
--struct dasd_server_ssid_map {
-- struct list_head list;
-- struct system_id {
-- char vendor[4];
-- char serial[15];
-- __u16 ssid;
-- } sid;
--};
--
--static struct list_head dasd_server_ssid_list;
+ default:
+ break;
+ }
+ scsi_device_put(device);
++ device_config_needed = NOTHING;
+ }
+- if (device_config_needed == ADD) {
+- scsi_add_device(dev->scsi_host_ptr,
+- CONTAINER_TO_CHANNEL(container),
+- CONTAINER_TO_ID(container),
+- CONTAINER_TO_LUN(container));
+- }
-
--/*
- * Parameter parsing functions for dasd= parameter. The syntax is:
- * <devno> : (0x)?[0-9a-fA-F]+
- * <busid> : [0-0a-f]\.[0-9a-f]\.(0x)?[0-9a-fA-F]+
-@@ -721,8 +705,9 @@ dasd_ro_store(struct device *dev, struct device_attribute *attr,
- devmap->features &= ~DASD_FEATURE_READONLY;
- if (devmap->device)
- devmap->device->features = devmap->features;
-- if (devmap->device && devmap->device->gdp)
-- set_disk_ro(devmap->device->gdp, val);
-+ if (devmap->device && devmap->device->block
-+ && devmap->device->block->gdp)
-+ set_disk_ro(devmap->device->block->gdp, val);
- spin_unlock(&dasd_devmap_lock);
- return count;
++ if (device_config_needed == ADD)
++ scsi_add_device(dev->scsi_host_ptr, channel, id, lun);
}
-@@ -893,12 +878,16 @@ dasd_alias_show(struct device *dev, struct device_attribute *attr, char *buf)
- devmap = dasd_find_busid(dev->bus_id);
- spin_lock(&dasd_devmap_lock);
-- if (!IS_ERR(devmap))
-- alias = devmap->uid.alias;
-+ if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
-+ spin_unlock(&dasd_devmap_lock);
-+ return sprintf(buf, "0\n");
+ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
+@@ -1099,7 +1184,8 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
+ free_irq(aac->pdev->irq, aac);
+ kfree(aac->fsa_dev);
+ aac->fsa_dev = NULL;
+- if (aac_get_driver_ident(index)->quirks & AAC_QUIRK_31BIT) {
++ quirks = aac_get_driver_ident(index)->quirks;
++ if (quirks & AAC_QUIRK_31BIT) {
+ if (((retval = pci_set_dma_mask(aac->pdev, DMA_31BIT_MASK))) ||
+ ((retval = pci_set_consistent_dma_mask(aac->pdev, DMA_31BIT_MASK))))
+ goto out;
+@@ -1110,7 +1196,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
+ }
+ if ((retval = (*(aac_get_driver_ident(index)->init))(aac)))
+ goto out;
+- if (aac_get_driver_ident(index)->quirks & AAC_QUIRK_31BIT)
++ if (quirks & AAC_QUIRK_31BIT)
+ if ((retval = pci_set_dma_mask(aac->pdev, DMA_32BIT_MASK)))
+ goto out;
+ if (jafo) {
+@@ -1121,15 +1207,14 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
+ }
+ }
+ (void)aac_get_adapter_info(aac);
+- quirks = aac_get_driver_ident(index)->quirks;
+ if ((quirks & AAC_QUIRK_34SG) && (host->sg_tablesize > 34)) {
+- host->sg_tablesize = 34;
+- host->max_sectors = (host->sg_tablesize * 8) + 112;
+- }
+- if ((quirks & AAC_QUIRK_17SG) && (host->sg_tablesize > 17)) {
+- host->sg_tablesize = 17;
+- host->max_sectors = (host->sg_tablesize * 8) + 112;
+- }
++ host->sg_tablesize = 34;
++ host->max_sectors = (host->sg_tablesize * 8) + 112;
+ }
-+ if (devmap->uid.type == UA_BASE_PAV_ALIAS ||
-+ devmap->uid.type == UA_HYPER_PAV_ALIAS)
-+ alias = 1;
- else
- alias = 0;
- spin_unlock(&dasd_devmap_lock);
--
- return sprintf(buf, alias ? "1\n" : "0\n");
- }
++ if ((quirks & AAC_QUIRK_17SG) && (host->sg_tablesize > 17)) {
++ host->sg_tablesize = 17;
++ host->max_sectors = (host->sg_tablesize * 8) + 112;
++ }
+ aac_get_config_status(aac, 1);
+ aac_get_containers(aac);
+ /*
+@@ -1217,12 +1302,13 @@ int aac_reset_adapter(struct aac_dev * aac, int forced)
+ }
-@@ -930,19 +919,36 @@ static ssize_t
- dasd_uid_show(struct device *dev, struct device_attribute *attr, char *buf)
- {
- struct dasd_devmap *devmap;
-- char uid[UID_STRLEN];
-+ char uid_string[UID_STRLEN];
-+ char ua_string[3];
-+ struct dasd_uid *uid;
+ /* Quiesce build, flush cache, write through mode */
+- aac_send_shutdown(aac);
++ if (forced < 2)
++ aac_send_shutdown(aac);
+ spin_lock_irqsave(host->host_lock, flagv);
+- retval = _aac_reset_adapter(aac, forced);
++ retval = _aac_reset_adapter(aac, forced ? forced : ((aac_check_reset != 0) && (aac_check_reset != 1)));
+ spin_unlock_irqrestore(host->host_lock, flagv);
- devmap = dasd_find_busid(dev->bus_id);
- spin_lock(&dasd_devmap_lock);
-- if (!IS_ERR(devmap) && strlen(devmap->uid.vendor) > 0)
-- snprintf(uid, sizeof(uid), "%s.%s.%04x.%02x",
-- devmap->uid.vendor, devmap->uid.serial,
-- devmap->uid.ssid, devmap->uid.unit_addr);
-- else
-- uid[0] = 0;
-+ if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
-+ spin_unlock(&dasd_devmap_lock);
-+ return sprintf(buf, "\n");
-+ }
-+ uid = &devmap->uid;
-+ switch (uid->type) {
-+ case UA_BASE_DEVICE:
-+ sprintf(ua_string, "%02x", uid->real_unit_addr);
-+ break;
-+ case UA_BASE_PAV_ALIAS:
-+ sprintf(ua_string, "%02x", uid->base_unit_addr);
-+ break;
-+ case UA_HYPER_PAV_ALIAS:
-+ sprintf(ua_string, "xx");
-+ break;
-+ default:
-+ /* should not happen, treat like base device */
-+ sprintf(ua_string, "%02x", uid->real_unit_addr);
-+ break;
-+ }
-+ snprintf(uid_string, sizeof(uid_string), "%s.%s.%04x.%s",
-+ uid->vendor, uid->serial, uid->ssid, ua_string);
- spin_unlock(&dasd_devmap_lock);
--
-- return snprintf(buf, PAGE_SIZE, "%s\n", uid);
-+ return snprintf(buf, PAGE_SIZE, "%s\n", uid_string);
- }
+- if (retval == -ENODEV) {
++ if ((forced < 2) && (retval == -ENODEV)) {
+ /* Unwind aac_send_shutdown() IOP_RESET unsupported/disabled */
+ struct fib * fibctx = aac_fib_alloc(aac);
+ if (fibctx) {
+@@ -1338,11 +1424,11 @@ int aac_check_health(struct aac_dev * aac)
+ fib->data = hw_fib->data;
+ aif = (struct aac_aifcmd *)hw_fib->data;
+ aif->command = cpu_to_le32(AifCmdEventNotify);
+- aif->seqnum = cpu_to_le32(0xFFFFFFFF);
+- aif->data[0] = AifEnExpEvent;
+- aif->data[1] = AifExeFirmwarePanic;
+- aif->data[2] = AifHighPriority;
+- aif->data[3] = BlinkLED;
++ aif->seqnum = cpu_to_le32(0xFFFFFFFF);
++ ((__le32 *)aif->data)[0] = cpu_to_le32(AifEnExpEvent);
++ ((__le32 *)aif->data)[1] = cpu_to_le32(AifExeFirmwarePanic);
++ ((__le32 *)aif->data)[2] = cpu_to_le32(AifHighPriority);
++ ((__le32 *)aif->data)[3] = cpu_to_le32(BlinkLED);
- static DEVICE_ATTR(uid, 0444, dasd_uid_show, NULL);
-@@ -1040,39 +1046,16 @@ int
- dasd_set_uid(struct ccw_device *cdev, struct dasd_uid *uid)
- {
- struct dasd_devmap *devmap;
-- struct dasd_server_ssid_map *srv, *tmp;
+ /*
+ * Put the FIB onto the
+@@ -1372,14 +1458,14 @@ int aac_check_health(struct aac_dev * aac)
- devmap = dasd_find_busid(cdev->dev.bus_id);
- if (IS_ERR(devmap))
- return PTR_ERR(devmap);
+ printk(KERN_ERR "%s: Host adapter BLINK LED 0x%x\n", aac->name, BlinkLED);
-- /* generate entry for server_ssid_map */
-- srv = (struct dasd_server_ssid_map *)
-- kzalloc(sizeof(struct dasd_server_ssid_map), GFP_KERNEL);
-- if (!srv)
-- return -ENOMEM;
-- strncpy(srv->sid.vendor, uid->vendor, sizeof(srv->sid.vendor) - 1);
-- strncpy(srv->sid.serial, uid->serial, sizeof(srv->sid.serial) - 1);
-- srv->sid.ssid = uid->ssid;
--
-- /* server is already contained ? */
- spin_lock(&dasd_devmap_lock);
- devmap->uid = *uid;
-- list_for_each_entry(tmp, &dasd_server_ssid_list, list) {
-- if (!memcmp(&srv->sid, &tmp->sid,
-- sizeof(struct system_id))) {
-- kfree(srv);
-- srv = NULL;
-- break;
-- }
-- }
--
-- /* add servermap to serverlist */
-- if (srv)
-- list_add(&srv->list, &dasd_server_ssid_list);
- spin_unlock(&dasd_devmap_lock);
+- if (!aac_check_reset ||
++ if (!aac_check_reset || ((aac_check_reset != 1) &&
+ (aac->supplement_adapter_info.SupportedOptions2 &
+- le32_to_cpu(AAC_OPTION_IGNORE_RESET)))
++ AAC_OPTION_IGNORE_RESET)))
+ goto out;
+ host = aac->scsi_host_ptr;
+ if (aac->thread->pid != current->pid)
+ spin_lock_irqsave(host->host_lock, flagv);
+- BlinkLED = _aac_reset_adapter(aac, 0);
++ BlinkLED = _aac_reset_adapter(aac, aac_check_reset != 1);
+ if (aac->thread->pid != current->pid)
+ spin_unlock_irqrestore(host->host_lock, flagv);
+ return BlinkLED;
+@@ -1399,7 +1485,7 @@ out:
+ * until the queue is empty. When the queue is empty it will wait for
+ * more FIBs.
+ */
+-
++
+ int aac_command_thread(void *data)
+ {
+ struct aac_dev *dev = data;
+@@ -1425,30 +1511,29 @@ int aac_command_thread(void *data)
+ add_wait_queue(&dev->queues->queue[HostNormCmdQueue].cmdready, &wait);
+ set_current_state(TASK_INTERRUPTIBLE);
+ dprintk ((KERN_INFO "aac_command_thread start\n"));
+- while(1)
+- {
++ while (1) {
+ spin_lock_irqsave(dev->queues->queue[HostNormCmdQueue].lock, flags);
+ while(!list_empty(&(dev->queues->queue[HostNormCmdQueue].cmdq))) {
+ struct list_head *entry;
+ struct aac_aifcmd * aifcmd;
-- return (srv ? 1 : 0);
-+ return 0;
- }
- EXPORT_SYMBOL_GPL(dasd_set_uid);
+ set_current_state(TASK_RUNNING);
+-
++
+ entry = dev->queues->queue[HostNormCmdQueue].cmdq.next;
+ list_del(entry);
+-
++
+ spin_unlock_irqrestore(dev->queues->queue[HostNormCmdQueue].lock, flags);
+ fib = list_entry(entry, struct fib, fiblink);
+ /*
+- * We will process the FIB here or pass it to a
+- * worker thread that is TBD. We Really can't
++ * We will process the FIB here or pass it to a
++ * worker thread that is TBD. We Really can't
+ * do anything at this point since we don't have
+ * anything defined for this thread to do.
+ */
+ hw_fib = fib->hw_fib_va;
+ memset(fib, 0, sizeof(struct fib));
+ fib->type = FSAFS_NTC_FIB_CONTEXT;
+- fib->size = sizeof( struct fib );
++ fib->size = sizeof(struct fib);
+ fib->hw_fib_va = hw_fib;
+ fib->data = hw_fib->data;
+ fib->dev = dev;
+@@ -1462,20 +1547,19 @@ int aac_command_thread(void *data)
+ *(__le32 *)hw_fib->data = cpu_to_le32(ST_OK);
+ aac_fib_adapter_complete(fib, (u16)sizeof(u32));
+ } else {
+- struct list_head *entry;
+ /* The u32 here is important and intended. We are using
+ 32bit wrapping time to fit the adapter field */
+-
++
+ u32 time_now, time_last;
+ unsigned long flagv;
+ unsigned num;
+ struct hw_fib ** hw_fib_pool, ** hw_fib_p;
+ struct fib ** fib_pool, ** fib_p;
+-
++
+ /* Sniff events */
+- if ((aifcmd->command ==
++ if ((aifcmd->command ==
+ cpu_to_le32(AifCmdEventNotify)) ||
+- (aifcmd->command ==
++ (aifcmd->command ==
+ cpu_to_le32(AifCmdJobProgress))) {
+ aac_handle_aif(dev, fib);
+ }
+@@ -1527,7 +1611,7 @@ int aac_command_thread(void *data)
+ spin_lock_irqsave(&dev->fib_lock, flagv);
+ entry = dev->fib_list.next;
+ /*
+- * For each Context that is on the
++ * For each Context that is on the
+ * fibctxList, make a copy of the
+ * fib, and then set the event to wake up the
+ * thread that is waiting for it.
+@@ -1552,7 +1636,7 @@ int aac_command_thread(void *data)
+ */
+ time_last = fibctx->jiffies;
+ /*
+- * Has it been > 2 minutes
++ * Has it been > 2 minutes
+ * since the last read off
+ * the queue?
+ */
+@@ -1583,7 +1667,7 @@ int aac_command_thread(void *data)
+ */
+ list_add_tail(&newfib->fiblink, &fibctx->fib_list);
+ fibctx->count++;
+- /*
++ /*
+ * Set the event to wake up the
+ * thread that is waiting.
+ */
+@@ -1655,11 +1739,11 @@ int aac_command_thread(void *data)
+ struct fib *fibptr;
-@@ -1138,9 +1121,6 @@ dasd_devmap_init(void)
- dasd_max_devindex = 0;
- for (i = 0; i < 256; i++)
- INIT_LIST_HEAD(&dasd_hashlists[i]);
--
-- /* Initialize servermap structure. */
-- INIT_LIST_HEAD(&dasd_server_ssid_list);
- return 0;
- }
+ if ((fibptr = aac_fib_alloc(dev))) {
+- u32 * info;
++ __le32 *info;
-diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
-index 571320a..d91df38 100644
---- a/drivers/s390/block/dasd_diag.c
-+++ b/drivers/s390/block/dasd_diag.c
-@@ -142,7 +142,7 @@ dasd_diag_erp(struct dasd_device *device)
- int rc;
+ aac_fib_init(fibptr);
- mdsk_term_io(device);
-- rc = mdsk_init_io(device, device->bp_block, 0, NULL);
-+ rc = mdsk_init_io(device, device->block->bp_block, 0, NULL);
- if (rc)
- DEV_MESSAGE(KERN_WARNING, device, "DIAG ERP unsuccessful, "
- "rc=%d", rc);
-@@ -158,11 +158,11 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
- struct dasd_diag_req *dreq;
- int rc;
+- info = (u32 *) fib_data(fibptr);
++ info = (__le32 *) fib_data(fibptr);
+ if (now.tv_usec > 500000)
+ ++now.tv_sec;
-- device = cqr->device;
-+ device = cqr->startdev;
- if (cqr->retries < 0) {
- DEV_MESSAGE(KERN_WARNING, device, "DIAG start_IO: request %p "
- "- no retry left)", cqr);
-- cqr->status = DASD_CQR_FAILED;
-+ cqr->status = DASD_CQR_ERROR;
- return -EIO;
- }
- private = (struct dasd_diag_private *) device->private;
-@@ -184,7 +184,7 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
- switch (rc) {
- case 0: /* Synchronous I/O finished successfully */
- cqr->stopclk = get_clock();
-- cqr->status = DASD_CQR_DONE;
-+ cqr->status = DASD_CQR_SUCCESS;
- /* Indicate to calling function that only a dasd_schedule_bh()
- and no timer is needed */
- rc = -EACCES;
-@@ -209,12 +209,12 @@ dasd_diag_term_IO(struct dasd_ccw_req * cqr)
- {
- struct dasd_device *device;
+diff --git a/drivers/scsi/aacraid/dpcsup.c b/drivers/scsi/aacraid/dpcsup.c
+index e6032ff..d1163de 100644
+--- a/drivers/scsi/aacraid/dpcsup.c
++++ b/drivers/scsi/aacraid/dpcsup.c
+@@ -120,6 +120,7 @@ unsigned int aac_response_normal(struct aac_queue * q)
+ * NOTE: we cannot touch the fib after this
+ * call, because it may have been deallocated.
+ */
++ fib->flags = 0;
+ fib->callback(fib->callback_data, fib);
+ } else {
+ unsigned long flagv;
+@@ -229,11 +230,9 @@ unsigned int aac_command_normal(struct aac_queue *q)
+ * all QE there are and wake up all the waiters before exiting.
+ */
-- device = cqr->device;
-+ device = cqr->startdev;
- mdsk_term_io(device);
-- mdsk_init_io(device, device->bp_block, 0, NULL);
-- cqr->status = DASD_CQR_CLEAR;
-+ mdsk_init_io(device, device->block->bp_block, 0, NULL);
-+ cqr->status = DASD_CQR_CLEAR_PENDING;
- cqr->stopclk = get_clock();
-- dasd_schedule_bh(device);
-+ dasd_schedule_device_bh(device);
- return 0;
- }
+-unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
++unsigned int aac_intr_normal(struct aac_dev * dev, u32 index)
+ {
+- u32 index = le32_to_cpu(Index);
+-
+- dprintk((KERN_INFO "aac_intr_normal(%p,%x)\n", dev, Index));
++ dprintk((KERN_INFO "aac_intr_normal(%p,%x)\n", dev, index));
+ if ((index & 0x00000002L)) {
+ struct hw_fib * hw_fib;
+ struct fib * fib;
+@@ -301,7 +300,7 @@ unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
-@@ -247,7 +247,7 @@ dasd_ext_handler(__u16 code)
- return;
- }
- cqr = (struct dasd_ccw_req *) ip;
-- device = (struct dasd_device *) cqr->device;
-+ device = (struct dasd_device *) cqr->startdev;
- if (strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
- DEV_MESSAGE(KERN_WARNING, device,
- " magic number of dasd_ccw_req 0x%08X doesn't"
-@@ -260,10 +260,10 @@ dasd_ext_handler(__u16 code)
- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
+ if (hwfib->header.Command == cpu_to_le16(NuFileSystem))
+ {
+- u32 *pstatus = (u32 *)hwfib->data;
++ __le32 *pstatus = (__le32 *)hwfib->data;
+ if (*pstatus & cpu_to_le32(0xffff0000))
+ *pstatus = cpu_to_le32(ST_OK);
+ }
+@@ -315,6 +314,7 @@ unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
+ * NOTE: we cannot touch the fib after this
+ * call, because it may have been deallocated.
+ */
++ fib->flags = 0;
+ fib->callback(fib->callback_data, fib);
+ } else {
+ unsigned long flagv;
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 9dd331b..61be227 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -159,27 +159,27 @@ static struct pci_device_id aac_pci_tbl[] = {
+ MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
- /* Check for a pending clear operation */
-- if (cqr->status == DASD_CQR_CLEAR) {
-- cqr->status = DASD_CQR_QUEUED;
-- dasd_clear_timer(device);
-- dasd_schedule_bh(device);
-+ if (cqr->status == DASD_CQR_CLEAR_PENDING) {
-+ cqr->status = DASD_CQR_CLEARED;
-+ dasd_device_clear_timer(device);
-+ dasd_schedule_device_bh(device);
- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
- return;
- }
-@@ -272,11 +272,11 @@ dasd_ext_handler(__u16 code)
+ /*
+- * dmb - For now we add the number of channels to this structure.
++ * dmb - For now we add the number of channels to this structure.
+ * In the future we should add a fib that reports the number of channels
+ * for the card. At that time we can remove the channels from here
+ */
+ static struct aac_driver_ident aac_drivers[] = {
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 2/Si (Iguana/PERC2Si) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Opal/PERC3Di) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Si (SlimFast/PERC3Si */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Iguana FlipChip/PERC3DiF */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Viper/PERC3DiV) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Lexus/PERC3DiL) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Jaguar/PERC3DiJ) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Dagger/PERC3DiD) */
+- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Boxster/PERC3DiB) */
+- { aac_rx_init, "aacraid", "ADAPTEC ", "catapult ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* catapult */
+- { aac_rx_init, "aacraid", "ADAPTEC ", "tomcat ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* tomcat */
+- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2120S ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2120S (Crusader) */
+- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2200S (Vulcan) */
+- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2200S (Vulcan-2m) */
+- { aac_rx_init, "aacraid", "Legend ", "Legend S220 ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend S220 (Legend Crusader) */
+- { aac_rx_init, "aacraid", "Legend ", "Legend S230 ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend S230 (Legend Vulcan) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 2/Si (Iguana/PERC2Si) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Opal/PERC3Di) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Si (SlimFast/PERC3Si */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Iguana FlipChip/PERC3DiF */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Viper/PERC3DiV) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Lexus/PERC3DiL) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Jaguar/PERC3DiJ) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Dagger/PERC3DiD) */
++ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Boxster/PERC3DiB) */
++ { aac_rx_init, "aacraid", "ADAPTEC ", "catapult ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* catapult */
++ { aac_rx_init, "aacraid", "ADAPTEC ", "tomcat ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* tomcat */
++ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2120S ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2120S (Crusader) */
++ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2200S (Vulcan) */
++ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2200S (Vulcan-2m) */
++ { aac_rx_init, "aacraid", "Legend ", "Legend S220 ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend S220 (Legend Crusader) */
++ { aac_rx_init, "aacraid", "Legend ", "Legend S230 ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend S230 (Legend Vulcan) */
- expires = 0;
- if (status == 0) {
-- cqr->status = DASD_CQR_DONE;
-+ cqr->status = DASD_CQR_SUCCESS;
- /* Start first request on queue if possible -> fast_io. */
- if (!list_empty(&device->ccw_queue)) {
- next = list_entry(device->ccw_queue.next,
-- struct dasd_ccw_req, list);
-+ struct dasd_ccw_req, devlist);
- if (next->status == DASD_CQR_QUEUED) {
- rc = dasd_start_diag(next);
- if (rc == 0)
-@@ -296,10 +296,10 @@ dasd_ext_handler(__u16 code)
- }
+ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 3230S ", 2 }, /* Adaptec 3230S (Harrier) */
+ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 3240S ", 2 }, /* Adaptec 3240S (Tornado) */
+@@ -224,8 +224,8 @@ static struct aac_driver_ident aac_drivers[] = {
+ { aac_sa_init, "percraid", "DELL ", "PERCRAID ", 4, AAC_QUIRK_34SG }, /* Dell PERC2/QC */
+ { aac_sa_init, "hpnraid", "HP ", "NetRAID ", 4, AAC_QUIRK_34SG }, /* HP NetRAID-4M */
- if (expires != 0)
-- dasd_set_timer(device, expires);
-+ dasd_device_set_timer(device, expires);
- else
-- dasd_clear_timer(device);
-- dasd_schedule_bh(device);
-+ dasd_device_clear_timer(device);
-+ dasd_schedule_device_bh(device);
+- { aac_rx_init, "aacraid", "DELL ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Dell Catchall */
+- { aac_rx_init, "aacraid", "Legend ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend Catchall */
++ { aac_rx_init, "aacraid", "DELL ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Dell Catchall */
++ { aac_rx_init, "aacraid", "Legend ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend Catchall */
+ { aac_rx_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Catch All */
+ { aac_rkt_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Rocket Catch All */
+ { aac_nark_init, "aacraid", "ADAPTEC ", "RAID ", 2 } /* Adaptec NEMER/ARK Catch All */
+@@ -239,7 +239,7 @@ static struct aac_driver_ident aac_drivers[] = {
+ * Queues a command for execution by the associated Host Adapter.
+ *
+ * TODO: unify with aac_scsi_cmd().
+- */
++ */
- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
- }
-@@ -309,6 +309,7 @@ dasd_ext_handler(__u16 code)
- static int
- dasd_diag_check_device(struct dasd_device *device)
+ static int aac_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
{
-+ struct dasd_block *block;
- struct dasd_diag_private *private;
- struct dasd_diag_characteristics *rdc_data;
- struct dasd_diag_bio bio;
-@@ -328,6 +329,16 @@ dasd_diag_check_device(struct dasd_device *device)
- ccw_device_get_id(device->cdev, &private->dev_id);
- device->private = (void *) private;
+@@ -258,7 +258,7 @@ static int aac_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd
}
-+ block = dasd_alloc_block();
-+ if (IS_ERR(block)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "could not allocate dasd block structure");
-+ kfree(device->private);
-+ return PTR_ERR(block);
-+ }
-+ device->block = block;
-+ block->base = device;
+ cmd->SCp.phase = AAC_OWNER_LOWLEVEL;
+ return (aac_scsi_cmd(cmd) ? FAILED : 0);
+-}
++}
+
+ /**
+ * aac_info - Returns the host adapter name
+@@ -292,21 +292,21 @@ struct aac_driver_ident* aac_get_driver_ident(int devtype)
+ * @capacity: the sector capacity of the disk
+ * @geom: geometry block to fill in
+ *
+- * Return the Heads/Sectors/Cylinders BIOS Disk Parameters for Disk.
+- * The default disk geometry is 64 heads, 32 sectors, and the appropriate
+- * number of cylinders so as not to exceed drive capacity. In order for
++ * Return the Heads/Sectors/Cylinders BIOS Disk Parameters for Disk.
++ * The default disk geometry is 64 heads, 32 sectors, and the appropriate
++ * number of cylinders so as not to exceed drive capacity. In order for
+ * disks equal to or larger than 1 GB to be addressable by the BIOS
+- * without exceeding the BIOS limitation of 1024 cylinders, Extended
+- * Translation should be enabled. With Extended Translation enabled,
+- * drives between 1 GB inclusive and 2 GB exclusive are given a disk
+- * geometry of 128 heads and 32 sectors, and drives above 2 GB inclusive
+- * are given a disk geometry of 255 heads and 63 sectors. However, if
+- * the BIOS detects that the Extended Translation setting does not match
+- * the geometry in the partition table, then the translation inferred
+- * from the partition table will be used by the BIOS, and a warning may
++ * without exceeding the BIOS limitation of 1024 cylinders, Extended
++ * Translation should be enabled. With Extended Translation enabled,
++ * drives between 1 GB inclusive and 2 GB exclusive are given a disk
++ * geometry of 128 heads and 32 sectors, and drives above 2 GB inclusive
++ * are given a disk geometry of 255 heads and 63 sectors. However, if
++ * the BIOS detects that the Extended Translation setting does not match
++ * the geometry in the partition table, then the translation inferred
++ * from the partition table will be used by the BIOS, and a warning may
+ * be displayed.
+ */
+-
+
- /* Read Device Characteristics */
- rdc_data = (void *) &(private->rdc_data);
- rdc_data->dev_nr = private->dev_id.devno;
-@@ -409,14 +420,14 @@ dasd_diag_check_device(struct dasd_device *device)
- sizeof(DASD_DIAG_CMS1)) == 0) {
- /* get formatted blocksize from label block */
- bsize = (unsigned int) label->block_size;
-- device->blocks = (unsigned long) label->block_count;
-+ block->blocks = (unsigned long) label->block_count;
- } else
-- device->blocks = end_block;
-- device->bp_block = bsize;
-- device->s2b_shift = 0; /* bits to shift 512 to get a block */
-+ block->blocks = end_block;
-+ block->bp_block = bsize;
-+ block->s2b_shift = 0; /* bits to shift 512 to get a block */
- for (sb = 512; sb < bsize; sb = sb << 1)
-- device->s2b_shift++;
-- rc = mdsk_init_io(device, device->bp_block, 0, NULL);
-+ block->s2b_shift++;
-+ rc = mdsk_init_io(device, block->bp_block, 0, NULL);
- if (rc) {
- DEV_MESSAGE(KERN_WARNING, device, "DIAG initialization "
- "failed (rc=%d)", rc);
-@@ -424,9 +435,9 @@ dasd_diag_check_device(struct dasd_device *device)
- } else {
- DEV_MESSAGE(KERN_INFO, device,
- "(%ld B/blk): %ldkB",
-- (unsigned long) device->bp_block,
-- (unsigned long) (device->blocks <<
-- device->s2b_shift) >> 1);
-+ (unsigned long) block->bp_block,
-+ (unsigned long) (block->blocks <<
-+ block->s2b_shift) >> 1);
- }
- out:
- free_page((long) label);
-@@ -436,22 +447,16 @@ out:
- /* Fill in virtual disk geometry for device. Return zero on success, non-zero
- * otherwise. */
- static int
--dasd_diag_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
-+dasd_diag_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
+ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
+ sector_t capacity, int *geom)
{
-- if (dasd_check_blocksize(device->bp_block) != 0)
-+ if (dasd_check_blocksize(block->bp_block) != 0)
- return -EINVAL;
-- geo->cylinders = (device->blocks << device->s2b_shift) >> 10;
-+ geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
- geo->heads = 16;
-- geo->sectors = 128 >> device->s2b_shift;
-+ geo->sectors = 128 >> block->s2b_shift;
- return 0;
- }
+@@ -333,10 +333,10 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
--static dasd_era_t
--dasd_diag_examine_error(struct dasd_ccw_req * cqr, struct irb * stat)
--{
-- return dasd_era_fatal;
--}
--
- static dasd_erp_fn_t
- dasd_diag_erp_action(struct dasd_ccw_req * cqr)
- {
-@@ -466,8 +471,9 @@ dasd_diag_erp_postaction(struct dasd_ccw_req * cqr)
+ param->cylinders = cap_to_cyls(capacity, param->heads * param->sectors);
- /* Create DASD request from block device request. Return pointer to new
- * request on success, ERR_PTR otherwise. */
--static struct dasd_ccw_req *
--dasd_diag_build_cp(struct dasd_device * device, struct request *req)
-+static struct dasd_ccw_req *dasd_diag_build_cp(struct dasd_device *memdev,
-+ struct dasd_block *block,
-+ struct request *req)
+- /*
++ /*
+ * Read the first 1024 bytes from the disk device, if the boot
+ * sector partition table is valid, search for a partition table
+- * entry whose end_head matches one of the standard geometry
++ * entry whose end_head matches one of the standard geometry
+ * translations ( 64/32, 128/32, 255/63 ).
+ */
+ buf = scsi_bios_ptable(bdev);
+@@ -401,30 +401,44 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
+
+ static int aac_slave_configure(struct scsi_device *sdev)
{
- struct dasd_ccw_req *cqr;
- struct dasd_diag_req *dreq;
-@@ -486,17 +492,17 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
- rw_cmd = MDSK_WRITE_REQ;
- else
- return ERR_PTR(-EINVAL);
-- blksize = device->bp_block;
-+ blksize = block->bp_block;
- /* Calculate record id of first and last block. */
-- first_rec = req->sector >> device->s2b_shift;
-- last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
-+ first_rec = req->sector >> block->s2b_shift;
-+ last_rec = (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
- /* Check struct bio and count the number of blocks for the request. */
- count = 0;
- rq_for_each_segment(bv, req, iter) {
- if (bv->bv_len & (blksize - 1))
- /* Fba can only do full blocks. */
- return ERR_PTR(-EINVAL);
-- count += bv->bv_len >> (device->s2b_shift + 9);
-+ count += bv->bv_len >> (block->s2b_shift + 9);
++ struct aac_dev *aac = (struct aac_dev *)sdev->host->hostdata;
+ if ((sdev->type == TYPE_DISK) &&
+- (sdev_channel(sdev) != CONTAINER_CHANNEL)) {
++ (sdev_channel(sdev) != CONTAINER_CHANNEL) &&
++ (!aac->jbod || sdev->inq_periph_qual) &&
++ (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2))) {
+ if (expose_physicals == 0)
+ return -ENXIO;
+- if (expose_physicals < 0) {
+- struct aac_dev *aac =
+- (struct aac_dev *)sdev->host->hostdata;
+- if (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2))
+- sdev->no_uld_attach = 1;
+- }
++ if (expose_physicals < 0)
++ sdev->no_uld_attach = 1;
}
- /* Paranoia. */
- if (count != last_rec - first_rec + 1)
-@@ -505,7 +511,7 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
- datasize = sizeof(struct dasd_diag_req) +
- count*sizeof(struct dasd_diag_bio);
- cqr = dasd_smalloc_request(dasd_diag_discipline.name, 0,
-- datasize, device);
-+ datasize, memdev);
- if (IS_ERR(cqr))
- return cqr;
-
-@@ -529,7 +535,9 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
- cqr->buildclk = get_clock();
- if (req->cmd_flags & REQ_FAILFAST)
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
-- cqr->device = device;
-+ cqr->startdev = memdev;
-+ cqr->memdev = memdev;
-+ cqr->block = block;
- cqr->expires = DIAG_TIMEOUT;
- cqr->status = DASD_CQR_FILLED;
- return cqr;
-@@ -543,10 +551,15 @@ dasd_diag_free_cp(struct dasd_ccw_req *cqr, struct request *req)
- int status;
+ if (sdev->tagged_supported && (sdev->type == TYPE_DISK) &&
+- (sdev_channel(sdev) == CONTAINER_CHANNEL)) {
++ (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2)) &&
++ !sdev->no_uld_attach) {
+ struct scsi_device * dev;
+ struct Scsi_Host *host = sdev->host;
+ unsigned num_lsu = 0;
+ unsigned num_one = 0;
+ unsigned depth;
++ unsigned cid;
- status = cqr->status == DASD_CQR_DONE;
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return status;
++ /*
++ * Firmware has an individual device recovery time typically
++ * of 35 seconds, give us a margin.
++ */
++ if (sdev->timeout < (45 * HZ))
++ sdev->timeout = 45 * HZ;
++ for (cid = 0; cid < aac->maximum_num_containers; ++cid)
++ if (aac->fsa_dev[cid].valid)
++ ++num_lsu;
+ __shost_for_each_device(dev, host) {
+ if (dev->tagged_supported && (dev->type == TYPE_DISK) &&
+- (sdev_channel(dev) == CONTAINER_CHANNEL))
+- ++num_lsu;
+- else
++ (!aac->raid_scsi_mode ||
++ (sdev_channel(sdev) != 2)) &&
++ !dev->no_uld_attach) {
++ if ((sdev_channel(dev) != CONTAINER_CHANNEL)
++ || !aac->fsa_dev[sdev_id(dev)].valid)
++ ++num_lsu;
++ } else
+ ++num_one;
+ }
+ if (num_lsu == 0)
+@@ -481,9 +495,35 @@ static int aac_change_queue_depth(struct scsi_device *sdev, int depth)
+ return sdev->queue_depth;
}
-+static void dasd_diag_handle_terminated_request(struct dasd_ccw_req *cqr)
++static ssize_t aac_show_raid_level(struct device *dev, struct device_attribute *attr, char *buf)
+{
-+ cqr->status = DASD_CQR_FILLED;
++ struct scsi_device * sdev = to_scsi_device(dev);
++ if (sdev_channel(sdev) != CONTAINER_CHANNEL)
++ return snprintf(buf, PAGE_SIZE, sdev->no_uld_attach
++ ? "Hidden\n" : "JBOD");
++ return snprintf(buf, PAGE_SIZE, "%s\n",
++ get_container_type(((struct aac_dev *)(sdev->host->hostdata))
++ ->fsa_dev[sdev_id(sdev)].type));
++}
++
++static struct device_attribute aac_raid_level_attr = {
++ .attr = {
++ .name = "level",
++ .mode = S_IRUGO,
++ },
++ .show = aac_show_raid_level
+};
+
- /* Fill in IOCTL data for device. */
- static int
- dasd_diag_fill_info(struct dasd_device * device,
-@@ -583,7 +596,7 @@ static struct dasd_discipline dasd_diag_discipline = {
- .fill_geometry = dasd_diag_fill_geometry,
- .start_IO = dasd_start_diag,
- .term_IO = dasd_diag_term_IO,
-- .examine_error = dasd_diag_examine_error,
-+ .handle_terminated_request = dasd_diag_handle_terminated_request,
- .erp_action = dasd_diag_erp_action,
- .erp_postaction = dasd_diag_erp_postaction,
- .build_cp = dasd_diag_build_cp,
-diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
-index 44adf84..61f1693 100644
---- a/drivers/s390/block/dasd_eckd.c
-+++ b/drivers/s390/block/dasd_eckd.c
-@@ -52,16 +52,6 @@ MODULE_LICENSE("GPL");
-
- static struct dasd_discipline dasd_eckd_discipline;
-
--struct dasd_eckd_private {
-- struct dasd_eckd_characteristics rdc_data;
-- struct dasd_eckd_confdata conf_data;
-- struct dasd_eckd_path path_data;
-- struct eckd_count count_area[5];
-- int init_cqr_status;
-- int uses_cdl;
-- struct attrib_data_t attrib; /* e.g. cache operations */
--};
--
- /* The ccw bus type uses this table to find devices that it sends to
- * dasd_eckd_probe */
- static struct ccw_device_id dasd_eckd_ids[] = {
-@@ -188,7 +178,7 @@ check_XRC (struct ccw1 *de_ccw,
- if (rc == -ENOSYS || rc == -EACCES)
- rc = 0;
-
-- de_ccw->count = sizeof (struct DE_eckd_data);
-+ de_ccw->count = sizeof(struct DE_eckd_data);
- de_ccw->flags |= CCW_FLAG_SLI;
- return rc;
++static struct device_attribute *aac_dev_attrs[] = {
++ &aac_raid_level_attr,
++ NULL,
++};
++
+ static int aac_ioctl(struct scsi_device *sdev, int cmd, void __user * arg)
+ {
+ struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
++ if (!capable(CAP_SYS_RAWIO))
++ return -EPERM;
+ return aac_do_ioctl(dev, cmd, arg);
}
-@@ -208,7 +198,7 @@ define_extent(struct ccw1 * ccw, struct DE_eckd_data * data, int trk,
- ccw->count = 16;
- ccw->cda = (__u32) __pa(data);
-- memset(data, 0, sizeof (struct DE_eckd_data));
-+ memset(data, 0, sizeof(struct DE_eckd_data));
- switch (cmd) {
- case DASD_ECKD_CCW_READ_HOME_ADDRESS:
- case DASD_ECKD_CCW_READ_RECORD_ZERO:
-@@ -280,6 +270,132 @@ define_extent(struct ccw1 * ccw, struct DE_eckd_data * data, int trk,
- return rc;
+@@ -506,17 +546,33 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
+ break;
+ case INQUIRY:
+ case READ_CAPACITY:
+- case TEST_UNIT_READY:
+ /* Mark associated FIB to not complete, eh handler does this */
+ for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
+ struct fib * fib = &aac->fibs[count];
+ if (fib->hw_fib_va->header.XferState &&
++ (fib->flags & FIB_CONTEXT_FLAG) &&
+ (fib->callback_data == cmd)) {
+ fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
+ cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+ ret = SUCCESS;
+ }
+ }
++ break;
++ case TEST_UNIT_READY:
++ /* Mark associated FIB to not complete, eh handler does this */
++ for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
++ struct scsi_cmnd * command;
++ struct fib * fib = &aac->fibs[count];
++ if ((fib->hw_fib_va->header.XferState & cpu_to_le32(Async | NoResponseExpected)) &&
++ (fib->flags & FIB_CONTEXT_FLAG) &&
++ ((command = fib->callback_data)) &&
++ (command->device == cmd->device)) {
++ fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
++ command->SCp.phase = AAC_OWNER_ERROR_HANDLER;
++ if (command == cmd)
++ ret = SUCCESS;
++ }
++ }
+ }
+ return ret;
}
+@@ -539,12 +595,13 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
+ for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
+ struct fib * fib = &aac->fibs[count];
+ if (fib->hw_fib_va->header.XferState &&
++ (fib->flags & FIB_CONTEXT_FLAG) &&
+ (fib->callback_data == cmd)) {
+ fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
+ cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+ }
+ }
+- printk(KERN_ERR "%s: Host adapter reset request. SCSI hang ?\n",
++ printk(KERN_ERR "%s: Host adapter reset request. SCSI hang ?\n",
+ AAC_DRIVERNAME);
-+static int check_XRC_on_prefix(struct PFX_eckd_data *pfxdata,
-+ struct dasd_device *device)
-+{
-+ struct dasd_eckd_private *private;
-+ int rc;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ if (!private->rdc_data.facilities.XRC_supported)
-+ return 0;
-+
-+ /* switch on System Time Stamp - needed for XRC Support */
-+ pfxdata->define_extend.ga_extended |= 0x08; /* 'Time Stamp Valid' */
-+ pfxdata->define_extend.ga_extended |= 0x02; /* 'Extended Parameter' */
-+ pfxdata->validity.time_stamp = 1; /* 'Time Stamp Valid' */
-+
-+ rc = get_sync_clock(&pfxdata->define_extend.ep_sys_time);
-+ /* Ignore return code if sync clock is switched off. */
-+ if (rc == -ENOSYS || rc == -EACCES)
-+ rc = 0;
-+ return rc;
-+}
-+
-+static int prefix(struct ccw1 *ccw, struct PFX_eckd_data *pfxdata, int trk,
-+ int totrk, int cmd, struct dasd_device *basedev,
-+ struct dasd_device *startdev)
-+{
-+ struct dasd_eckd_private *basepriv, *startpriv;
-+ struct DE_eckd_data *data;
-+ struct ch_t geo, beg, end;
-+ int rc = 0;
-+
-+ basepriv = (struct dasd_eckd_private *) basedev->private;
-+ startpriv = (struct dasd_eckd_private *) startdev->private;
-+ data = &pfxdata->define_extend;
-+
-+ ccw->cmd_code = DASD_ECKD_CCW_PFX;
-+ ccw->flags = 0;
-+ ccw->count = sizeof(*pfxdata);
-+ ccw->cda = (__u32) __pa(pfxdata);
-+
-+ memset(pfxdata, 0, sizeof(*pfxdata));
-+ /* prefix data */
-+ pfxdata->format = 0;
-+ pfxdata->base_address = basepriv->conf_data.ned1.unit_addr;
-+ pfxdata->base_lss = basepriv->conf_data.ned1.ID;
-+ pfxdata->validity.define_extend = 1;
-+
-+ /* private uid is kept up to date, conf_data may be outdated */
-+ if (startpriv->uid.type != UA_BASE_DEVICE) {
-+ pfxdata->validity.verify_base = 1;
-+ if (startpriv->uid.type == UA_HYPER_PAV_ALIAS)
-+ pfxdata->validity.hyper_pav = 1;
-+ }
+ if ((count = aac_check_health(aac)))
+@@ -584,8 +641,11 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
+ * support a register, instead of a commanded, reset.
+ */
+ if ((aac->supplement_adapter_info.SupportedOptions2 &
+- le32_to_cpu(AAC_OPTION_MU_RESET|AAC_OPTION_IGNORE_RESET)) ==
+- le32_to_cpu(AAC_OPTION_MU_RESET))
++ AAC_OPTION_MU_RESET) &&
++ aac_check_reset &&
++ ((aac_check_reset != 1) ||
++ (aac->supplement_adapter_info.SupportedOptions2 &
++ AAC_OPTION_IGNORE_RESET)))
+ aac_reset_adapter(aac, 2); /* Bypass wait for command quiesce */
+ return SUCCESS; /* Cause an immediate retry of the command with a ten second delay after successful tur */
+ }
+@@ -632,8 +692,8 @@ static int aac_cfg_open(struct inode *inode, struct file *file)
+ * Bugs: Needs locking against parallel ioctls lower down
+ * Bugs: Needs to handle hot plugging
+ */
+-
+-static int aac_cfg_ioctl(struct inode *inode, struct file *file,
+
-+ /* define extend data (mostly)*/
++static int aac_cfg_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+ {
+ if (!capable(CAP_SYS_RAWIO))
+@@ -646,7 +706,7 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
+ {
+ long ret;
+ lock_kernel();
+- switch (cmd) {
+ switch (cmd) {
-+ case DASD_ECKD_CCW_READ_HOME_ADDRESS:
-+ case DASD_ECKD_CCW_READ_RECORD_ZERO:
-+ case DASD_ECKD_CCW_READ:
-+ case DASD_ECKD_CCW_READ_MT:
-+ case DASD_ECKD_CCW_READ_CKD:
-+ case DASD_ECKD_CCW_READ_CKD_MT:
-+ case DASD_ECKD_CCW_READ_KD:
-+ case DASD_ECKD_CCW_READ_KD_MT:
-+ case DASD_ECKD_CCW_READ_COUNT:
-+ data->mask.perm = 0x1;
-+ data->attributes.operation = basepriv->attrib.operation;
-+ break;
-+ case DASD_ECKD_CCW_WRITE:
-+ case DASD_ECKD_CCW_WRITE_MT:
-+ case DASD_ECKD_CCW_WRITE_KD:
-+ case DASD_ECKD_CCW_WRITE_KD_MT:
-+ data->mask.perm = 0x02;
-+ data->attributes.operation = basepriv->attrib.operation;
-+ rc = check_XRC_on_prefix(pfxdata, basedev);
-+ break;
-+ case DASD_ECKD_CCW_WRITE_CKD:
-+ case DASD_ECKD_CCW_WRITE_CKD_MT:
-+ data->attributes.operation = DASD_BYPASS_CACHE;
-+ rc = check_XRC_on_prefix(pfxdata, basedev);
-+ break;
-+ case DASD_ECKD_CCW_ERASE:
-+ case DASD_ECKD_CCW_WRITE_HOME_ADDRESS:
-+ case DASD_ECKD_CCW_WRITE_RECORD_ZERO:
-+ data->mask.perm = 0x3;
-+ data->mask.auth = 0x1;
-+ data->attributes.operation = DASD_BYPASS_CACHE;
-+ rc = check_XRC_on_prefix(pfxdata, basedev);
-+ break;
-+ default:
-+ DEV_MESSAGE(KERN_ERR, basedev, "unknown opcode 0x%x", cmd);
-+ break;
-+ }
-+
-+ data->attributes.mode = 0x3; /* ECKD */
-+
-+ if ((basepriv->rdc_data.cu_type == 0x2105 ||
-+ basepriv->rdc_data.cu_type == 0x2107 ||
-+ basepriv->rdc_data.cu_type == 0x1750)
-+ && !(basepriv->uses_cdl && trk < 2))
-+ data->ga_extended |= 0x40; /* Regular Data Format Mode */
-+
-+ geo.cyl = basepriv->rdc_data.no_cyl;
-+ geo.head = basepriv->rdc_data.trk_per_cyl;
-+ beg.cyl = trk / geo.head;
-+ beg.head = trk % geo.head;
-+ end.cyl = totrk / geo.head;
-+ end.head = totrk % geo.head;
-+
-+ /* check for sequential prestage - enhance cylinder range */
-+ if (data->attributes.operation == DASD_SEQ_PRESTAGE ||
-+ data->attributes.operation == DASD_SEQ_ACCESS) {
+ case FSACTL_MINIPORT_REV_CHECK:
+ case FSACTL_SENDFIB:
+ case FSACTL_OPEN_GET_ADAPTER_FIB:
+@@ -656,14 +716,14 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
+ case FSACTL_QUERY_DISK:
+ case FSACTL_DELETE_DISK:
+ case FSACTL_FORCE_DELETE_DISK:
+- case FSACTL_GET_CONTAINERS:
++ case FSACTL_GET_CONTAINERS:
+ case FSACTL_SEND_LARGE_FIB:
+ ret = aac_do_ioctl(dev, cmd, (void __user *)arg);
+ break;
+
+ case FSACTL_GET_NEXT_ADAPTER_FIB: {
+ struct fib_ioctl __user *f;
+-
+
-+ if (end.cyl + basepriv->attrib.nr_cyl < geo.cyl)
-+ end.cyl += basepriv->attrib.nr_cyl;
-+ else
-+ end.cyl = (geo.cyl - 1);
+ f = compat_alloc_user_space(sizeof(*f));
+ ret = 0;
+ if (clear_user(f, sizeof(*f)))
+@@ -676,9 +736,9 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
+ }
+
+ default:
+- ret = -ENOIOCTLCMD;
++ ret = -ENOIOCTLCMD;
+ break;
+- }
+ }
+ unlock_kernel();
+ return ret;
+ }
+@@ -735,6 +795,25 @@ static ssize_t aac_show_vendor(struct class_device *class_dev,
+ return len;
+ }
+
++static ssize_t aac_show_flags(struct class_device *class_dev, char *buf)
++{
++ int len = 0;
++ struct aac_dev *dev = (struct aac_dev*)class_to_shost(class_dev)->hostdata;
+
-+ data->beg_ext.cyl = beg.cyl;
-+ data->beg_ext.head = beg.head;
-+ data->end_ext.cyl = end.cyl;
-+ data->end_ext.head = end.head;
-+ return rc;
++ if (nblank(dprintk(x)))
++ len = snprintf(buf, PAGE_SIZE, "dprintk\n");
++#ifdef AAC_DETAILED_STATUS_INFO
++ len += snprintf(buf + len, PAGE_SIZE - len,
++ "AAC_DETAILED_STATUS_INFO\n");
++#endif
++ if (dev->raw_io_interface && dev->raw_io_64)
++ len += snprintf(buf + len, PAGE_SIZE - len,
++ "SAI_READ_CAPACITY_16\n");
++ if (dev->jbod)
++ len += snprintf(buf + len, PAGE_SIZE - len, "SUPPORTED_JBOD\n");
++ return len;
+}
+
- static void
- locate_record(struct ccw1 *ccw, struct LO_eckd_data *data, int trk,
- int rec_on_trk, int no_rec, int cmd,
-@@ -300,7 +416,7 @@ locate_record(struct ccw1 *ccw, struct LO_eckd_data *data, int trk,
- ccw->count = 16;
- ccw->cda = (__u32) __pa(data);
+ static ssize_t aac_show_kernel_version(struct class_device *class_dev,
+ char *buf)
+ {
+@@ -742,7 +821,7 @@ static ssize_t aac_show_kernel_version(struct class_device *class_dev,
+ int len, tmp;
-- memset(data, 0, sizeof (struct LO_eckd_data));
-+ memset(data, 0, sizeof(struct LO_eckd_data));
- sector = 0;
- if (rec_on_trk) {
- switch (private->rdc_data.dev_type) {
-@@ -441,12 +557,15 @@ dasd_eckd_generate_uid(struct dasd_device *device, struct dasd_uid *uid)
- sizeof(uid->serial) - 1);
- EBCASC(uid->serial, sizeof(uid->serial) - 1);
- uid->ssid = confdata->neq.subsystemID;
-- if (confdata->ned2.sneq.flags == 0x40) {
-- uid->alias = 1;
-- uid->unit_addr = confdata->ned2.sneq.base_unit_addr;
-- } else
-- uid->unit_addr = confdata->ned1.unit_addr;
--
-+ uid->real_unit_addr = confdata->ned1.unit_addr;
-+ if (confdata->ned2.sneq.flags == 0x40 &&
-+ confdata->ned2.sneq.format == 0x0001) {
-+ uid->type = confdata->ned2.sneq.sua_flags;
-+ if (uid->type == UA_BASE_PAV_ALIAS)
-+ uid->base_unit_addr = confdata->ned2.sneq.base_unit_addr;
-+ } else {
-+ uid->type = UA_BASE_DEVICE;
-+ }
- return 0;
- }
+ tmp = le32_to_cpu(dev->adapter_info.kernelrev);
+- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
++ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
+ tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
+ le32_to_cpu(dev->adapter_info.kernelbuild));
+ return len;
+@@ -755,7 +834,7 @@ static ssize_t aac_show_monitor_version(struct class_device *class_dev,
+ int len, tmp;
-@@ -470,7 +589,9 @@ static struct dasd_ccw_req *dasd_eckd_build_rcd_lpm(struct dasd_device *device,
- ccw->cda = (__u32)(addr_t)rcd_buffer;
- ccw->count = ciw->count;
+ tmp = le32_to_cpu(dev->adapter_info.monitorrev);
+- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
++ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
+ tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
+ le32_to_cpu(dev->adapter_info.monitorbuild));
+ return len;
+@@ -768,7 +847,7 @@ static ssize_t aac_show_bios_version(struct class_device *class_dev,
+ int len, tmp;
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
-+ cqr->block = NULL;
- cqr->expires = 10*HZ;
- cqr->lpm = lpm;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
-@@ -511,7 +632,7 @@ static int dasd_eckd_read_conf_lpm(struct dasd_device *device,
+ tmp = le32_to_cpu(dev->adapter_info.biosrev);
+- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
++ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
+ tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
+ le32_to_cpu(dev->adapter_info.biosbuild));
+ return len;
+@@ -844,6 +923,13 @@ static struct class_device_attribute aac_vendor = {
+ },
+ .show = aac_show_vendor,
+ };
++static struct class_device_attribute aac_flags = {
++ .attr = {
++ .name = "flags",
++ .mode = S_IRUGO,
++ },
++ .show = aac_show_flags,
++};
+ static struct class_device_attribute aac_kernel_version = {
+ .attr = {
+ .name = "hba_kernel_version",
+@@ -898,6 +984,7 @@ static struct class_device_attribute aac_reset = {
+ static struct class_device_attribute *aac_attrs[] = {
+ &aac_model,
+ &aac_vendor,
++ &aac_flags,
+ &aac_kernel_version,
+ &aac_monitor_version,
+ &aac_bios_version,
+@@ -928,21 +1015,22 @@ static struct scsi_host_template aac_driver_template = {
+ .compat_ioctl = aac_compat_ioctl,
+ #endif
+ .queuecommand = aac_queuecommand,
+- .bios_param = aac_biosparm,
++ .bios_param = aac_biosparm,
+ .shost_attrs = aac_attrs,
+ .slave_configure = aac_slave_configure,
+ .change_queue_depth = aac_change_queue_depth,
++ .sdev_attrs = aac_dev_attrs,
+ .eh_abort_handler = aac_eh_abort,
+ .eh_host_reset_handler = aac_eh_reset,
+- .can_queue = AAC_NUM_IO_FIB,
++ .can_queue = AAC_NUM_IO_FIB,
+ .this_id = MAXIMUM_NUM_CONTAINERS,
+ .sg_tablesize = 16,
+ .max_sectors = 128,
+ #if (AAC_NUM_IO_FIB > 256)
+ .cmd_per_lun = 256,
+-#else
+- .cmd_per_lun = AAC_NUM_IO_FIB,
+-#endif
++#else
++ .cmd_per_lun = AAC_NUM_IO_FIB,
++#endif
+ .use_clustering = ENABLE_CLUSTERING,
+ .use_sg_chaining = ENABLE_SG_CHAINING,
+ .emulated = 1,
+@@ -979,18 +1067,18 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
+ goto out;
+ error = -ENODEV;
+
+- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) ||
++ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) ||
+ pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
+ goto out_disable_pdev;
/*
- * on success we update the user input parms
+ * If the quirk31 bit is set, the adapter needs adapter
+ * to driver communication memory to be allocated below 2gig
*/
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- if (ret)
- goto out_error;
+- if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
++ if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
+ if (pci_set_dma_mask(pdev, DMA_31BIT_MASK) ||
+ pci_set_consistent_dma_mask(pdev, DMA_31BIT_MASK))
+ goto out_disable_pdev;
+-
++
+ pci_set_master(pdev);
-@@ -557,19 +678,19 @@ dasd_eckd_read_conf(struct dasd_device *device)
- "data retrieved");
- continue; /* no error */
- }
-- if (conf_len != sizeof (struct dasd_eckd_confdata)) {
-+ if (conf_len != sizeof(struct dasd_eckd_confdata)) {
- MESSAGE(KERN_WARNING,
- "sizes of configuration data mismatch"
- "%d (read) vs %ld (expected)",
- conf_len,
-- sizeof (struct dasd_eckd_confdata));
-+ sizeof(struct dasd_eckd_confdata));
- kfree(conf_data);
- continue; /* no error */
- }
- /* save first valid configuration data */
- if (!conf_data_saved){
- memcpy(&private->conf_data, conf_data,
-- sizeof (struct dasd_eckd_confdata));
-+ sizeof(struct dasd_eckd_confdata));
- conf_data_saved++;
- }
- switch (((char *)conf_data)[242] & 0x07){
-@@ -586,39 +707,104 @@ dasd_eckd_read_conf(struct dasd_device *device)
- return 0;
- }
+ shost = scsi_host_alloc(&aac_driver_template, sizeof(struct aac_dev));
+@@ -1003,7 +1091,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
+ shost->max_cmd_len = 16;
-+static int dasd_eckd_read_features(struct dasd_device *device)
-+{
-+ struct dasd_psf_prssd_data *prssdp;
-+ struct dasd_rssd_features *features;
-+ struct dasd_ccw_req *cqr;
-+ struct ccw1 *ccw;
-+ int rc;
-+ struct dasd_eckd_private *private;
-+
-+ private = (struct dasd_eckd_private *) device->private;
-+ cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
-+ 1 /* PSF */ + 1 /* RSSD */ ,
-+ (sizeof(struct dasd_psf_prssd_data) +
-+ sizeof(struct dasd_rssd_features)),
-+ device);
-+ if (IS_ERR(cqr)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "Could not allocate initialization request");
-+ return PTR_ERR(cqr);
-+ }
-+ cqr->startdev = device;
-+ cqr->memdev = device;
-+ cqr->block = NULL;
-+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
-+ cqr->retries = 5;
-+ cqr->expires = 10 * HZ;
-+
-+ /* Prepare for Read Subsystem Data */
-+ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
-+ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
-+ prssdp->order = PSF_ORDER_PRSSD;
-+ prssdp->suborder = 0x41; /* Read Feature Codes */
-+ /* all other bytes of prssdp must be zero */
-+
-+ ccw = cqr->cpaddr;
-+ ccw->cmd_code = DASD_ECKD_CCW_PSF;
-+ ccw->count = sizeof(struct dasd_psf_prssd_data);
-+ ccw->flags |= CCW_FLAG_CC;
-+ ccw->cda = (__u32)(addr_t) prssdp;
-+
-+ /* Read Subsystem Data - feature codes */
-+ features = (struct dasd_rssd_features *) (prssdp + 1);
-+ memset(features, 0, sizeof(struct dasd_rssd_features));
+ aac = (struct aac_dev *)shost->hostdata;
+- aac->scsi_host_ptr = shost;
++ aac->scsi_host_ptr = shost;
+ aac->pdev = pdev;
+ aac->name = aac_driver_template.name;
+ aac->id = shost->unique_id;
+@@ -1040,7 +1128,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
+ if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
+ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK))
+ goto out_deinit;
+-
+
-+ ccw++;
-+ ccw->cmd_code = DASD_ECKD_CCW_RSSD;
-+ ccw->count = sizeof(struct dasd_rssd_features);
-+ ccw->cda = (__u32)(addr_t) features;
+ aac->maximum_num_channels = aac_drivers[index].channels;
+ error = aac_get_adapter_info(aac);
+ if (error < 0)
+@@ -1049,7 +1137,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
+ /*
+ * Lets override negotiations and drop the maximum SG limit to 34
+ */
+- if ((aac_drivers[index].quirks & AAC_QUIRK_34SG) &&
++ if ((aac_drivers[index].quirks & AAC_QUIRK_34SG) &&
+ (aac->scsi_host_ptr->sg_tablesize > 34)) {
+ aac->scsi_host_ptr->sg_tablesize = 34;
+ aac->scsi_host_ptr->max_sectors
+@@ -1066,17 +1154,17 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
+ /*
+ * Firware printf works only with older firmware.
+ */
+- if (aac_drivers[index].quirks & AAC_QUIRK_34SG)
++ if (aac_drivers[index].quirks & AAC_QUIRK_34SG)
+ aac->printf_enabled = 1;
+ else
+ aac->printf_enabled = 0;
+-
+
-+ cqr->buildclk = get_clock();
-+ cqr->status = DASD_CQR_FILLED;
-+ rc = dasd_sleep_on(cqr);
-+ if (rc == 0) {
-+ prssdp = (struct dasd_psf_prssd_data *) cqr->data;
-+ features = (struct dasd_rssd_features *) (prssdp + 1);
-+ memcpy(&private->features, features,
-+ sizeof(struct dasd_rssd_features));
-+ }
-+ dasd_sfree_request(cqr, cqr->memdev);
-+ return rc;
-+}
+ /*
+ * max channel will be the physical channels plus 1 virtual channel
+ * all containers are on the virtual channel 0 (CONTAINER_CHANNEL)
+ * physical channels are address by their actual physical number+1
+ */
+- if ((aac->nondasd_support == 1) || expose_physicals)
++ if (aac->nondasd_support || expose_physicals || aac->jbod)
+ shost->max_channel = aac->maximum_num_channels;
+ else
+ shost->max_channel = 0;
+@@ -1148,10 +1236,10 @@ static void __devexit aac_remove_one(struct pci_dev *pdev)
+ kfree(aac->queues);
+
+ aac_adapter_ioremap(aac, 0);
+-
+
+ kfree(aac->fibs);
+ kfree(aac->fsa_dev);
+-
+
- /*
- * Build CP for Perform Subsystem Function - SSC.
- */
--static struct dasd_ccw_req *
--dasd_eckd_build_psf_ssc(struct dasd_device *device)
-+static struct dasd_ccw_req *dasd_eckd_build_psf_ssc(struct dasd_device *device)
+ list_del(&aac->entry);
+ scsi_host_put(shost);
+ pci_disable_device(pdev);
+@@ -1172,7 +1260,7 @@ static struct pci_driver aac_pci_driver = {
+ static int __init aac_init(void)
{
-- struct dasd_ccw_req *cqr;
-- struct dasd_psf_ssc_data *psf_ssc_data;
-- struct ccw1 *ccw;
-+ struct dasd_ccw_req *cqr;
-+ struct dasd_psf_ssc_data *psf_ssc_data;
-+ struct ccw1 *ccw;
+ int error;
+-
++
+ printk(KERN_INFO "Adaptec %s driver %s\n",
+ AAC_DRIVERNAME, aac_driver_version);
-- cqr = dasd_smalloc_request("ECKD", 1 /* PSF */ ,
-+ cqr = dasd_smalloc_request("ECKD", 1 /* PSF */ ,
- sizeof(struct dasd_psf_ssc_data),
- device);
+diff --git a/drivers/scsi/aacraid/rx.c b/drivers/scsi/aacraid/rx.c
+index 73eef3d..a08bbf1 100644
+--- a/drivers/scsi/aacraid/rx.c
++++ b/drivers/scsi/aacraid/rx.c
+@@ -465,7 +465,7 @@ static int aac_rx_restart_adapter(struct aac_dev *dev, int bled)
+ u32 var;
-- if (IS_ERR(cqr)) {
-- DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ if (IS_ERR(cqr)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
- "Could not allocate PSF-SSC request");
-- return cqr;
-- }
-- psf_ssc_data = (struct dasd_psf_ssc_data *)cqr->data;
-- psf_ssc_data->order = PSF_ORDER_SSC;
-- psf_ssc_data->suborder = 0x08;
--
-- ccw = cqr->cpaddr;
-- ccw->cmd_code = DASD_ECKD_CCW_PSF;
-- ccw->cda = (__u32)(addr_t)psf_ssc_data;
-- ccw->count = 66;
--
-- cqr->device = device;
-- cqr->expires = 10*HZ;
-- cqr->buildclk = get_clock();
-- cqr->status = DASD_CQR_FILLED;
-- return cqr;
-+ return cqr;
-+ }
-+ psf_ssc_data = (struct dasd_psf_ssc_data *)cqr->data;
-+ psf_ssc_data->order = PSF_ORDER_SSC;
-+ psf_ssc_data->suborder = 0x88;
-+ psf_ssc_data->reserved[0] = 0x88;
-+
-+ ccw = cqr->cpaddr;
-+ ccw->cmd_code = DASD_ECKD_CCW_PSF;
-+ ccw->cda = (__u32)(addr_t)psf_ssc_data;
-+ ccw->count = 66;
-+
-+ cqr->startdev = device;
-+ cqr->memdev = device;
-+ cqr->block = NULL;
-+ cqr->expires = 10*HZ;
-+ cqr->buildclk = get_clock();
-+ cqr->status = DASD_CQR_FILLED;
-+ return cqr;
- }
+ if (!(dev->supplement_adapter_info.SupportedOptions2 &
+- le32_to_cpu(AAC_OPTION_MU_RESET)) || (bled >= 0) || (bled == -2)) {
++ AAC_OPTION_MU_RESET) || (bled >= 0) || (bled == -2)) {
+ if (bled)
+ printk(KERN_ERR "%s%d: adapter kernel panic'd %x.\n",
+ dev->name, dev->id, bled);
+@@ -549,7 +549,9 @@ int _aac_rx_init(struct aac_dev *dev)
+ dev->OIMR = status = rx_readb (dev, MUnit.OIMR);
+ if ((((status & 0x0c) != 0x0c) || aac_reset_devices || reset_devices) &&
+ !aac_rx_restart_adapter(dev, 0))
+- ++restart;
++ /* Make sure the Hardware FIFO is empty */
++ while ((++restart < 512) &&
++ (rx_readl(dev, MUnit.OutboundQueue) != 0xFFFFFFFFL));
+ /*
+ * Check to see if the board panic'd while booting.
+ */
+diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c
+index 38a1ee2..374ed02 100644
+--- a/drivers/scsi/advansys.c
++++ b/drivers/scsi/advansys.c
+@@ -8233,7 +8233,7 @@ static void adv_isr_callback(ADV_DVC_VAR *adv_dvc_varp, ADV_SCSI_REQ_Q *scsiqp)
+ if (scsiqp->scsi_status == SAM_STAT_CHECK_CONDITION) {
+ ASC_DBG(2, "SAM_STAT_CHECK_CONDITION\n");
+ ASC_DBG_PRT_SENSE(2, scp->sense_buffer,
+- sizeof(scp->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ /*
+ * Note: The 'status_byte()' macro used by
+ * target drivers defined in scsi.h shifts the
+@@ -9136,7 +9136,7 @@ static void asc_isr_callback(ASC_DVC_VAR *asc_dvc_varp, ASC_QDONE_INFO *qdonep)
+ BUG_ON(asc_dvc_varp != &boardp->dvc_var.asc_dvc_var);
- /*
-@@ -629,28 +815,28 @@ dasd_eckd_build_psf_ssc(struct dasd_device *device)
- static int
- dasd_eckd_psf_ssc(struct dasd_device *device)
+ dma_unmap_single(boardp->dev, scp->SCp.dma_handle,
+- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
++ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+ /*
+ * 'qdonep' contains the command's ending status.
+ */
+@@ -9166,7 +9166,7 @@ static void asc_isr_callback(ASC_DVC_VAR *asc_dvc_varp, ASC_QDONE_INFO *qdonep)
+ if (qdonep->d3.scsi_stat == SAM_STAT_CHECK_CONDITION) {
+ ASC_DBG(2, "SAM_STAT_CHECK_CONDITION\n");
+ ASC_DBG_PRT_SENSE(2, scp->sense_buffer,
+- sizeof(scp->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ /*
+ * Note: The 'status_byte()' macro used by
+ * target drivers defined in scsi.h shifts the
+@@ -9881,9 +9881,9 @@ static __le32 advansys_get_sense_buffer_dma(struct scsi_cmnd *scp)
{
-- struct dasd_ccw_req *cqr;
-- int rc;
+ struct asc_board *board = shost_priv(scp->device->host);
+ scp->SCp.dma_handle = dma_map_single(board->dev, scp->sense_buffer,
+- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
++ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+ dma_cache_sync(board->dev, scp->sense_buffer,
+- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
++ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+ return cpu_to_le32(scp->SCp.dma_handle);
+ }
+
+@@ -9914,7 +9914,7 @@ static int asc_build_req(struct asc_board *boardp, struct scsi_cmnd *scp,
+ asc_scsi_q->q2.target_ix =
+ ASC_TIDLUN_TO_IX(scp->device->id, scp->device->lun);
+ asc_scsi_q->q1.sense_addr = advansys_get_sense_buffer_dma(scp);
+- asc_scsi_q->q1.sense_len = sizeof(scp->sense_buffer);
++ asc_scsi_q->q1.sense_len = SCSI_SENSE_BUFFERSIZE;
+
+ /*
+ * If there are any outstanding requests for the current target,
+@@ -10173,7 +10173,7 @@ adv_build_req(struct asc_board *boardp, struct scsi_cmnd *scp,
+ scsiqp->target_lun = scp->device->lun;
+
+ scsiqp->sense_addr = cpu_to_le32(virt_to_bus(&scp->sense_buffer[0]));
+- scsiqp->sense_len = sizeof(scp->sense_buffer);
++ scsiqp->sense_len = SCSI_SENSE_BUFFERSIZE;
+
+ /* Build ADV_SCSI_REQ_Q */
+
+diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c
+index ea8c699..6ccdc96 100644
+--- a/drivers/scsi/aha152x.c
++++ b/drivers/scsi/aha152x.c
+@@ -260,6 +260,7 @@
+ #include <scsi/scsi_dbg.h>
+ #include <scsi/scsi_host.h>
+ #include <scsi/scsi_transport_spi.h>
++#include <scsi/scsi_eh.h>
+ #include "aha152x.h"
+
+ static LIST_HEAD(aha152x_host_list);
+@@ -558,9 +559,7 @@ struct aha152x_hostdata {
+ struct aha152x_scdata {
+ Scsi_Cmnd *next; /* next sc in queue */
+ struct completion *done;/* semaphore to block on */
+- unsigned char aha_orig_cmd_len;
+- unsigned char aha_orig_cmnd[MAX_COMMAND_SIZE];
+- int aha_orig_resid;
++ struct scsi_eh_save ses;
+ };
+
+ /* access macros for hostdata */
+@@ -1017,16 +1016,10 @@ static int aha152x_internal_queue(Scsi_Cmnd *SCpnt, struct completion *complete,
+ SCp.buffers_residual : left buffers in list
+ SCp.phase : current state of the command */
+
+- if ((phase & (check_condition|resetting)) || !scsi_sglist(SCpnt)) {
+- if (phase & check_condition) {
+- SCpnt->SCp.ptr = SCpnt->sense_buffer;
+- SCpnt->SCp.this_residual = sizeof(SCpnt->sense_buffer);
+- scsi_set_resid(SCpnt, sizeof(SCpnt->sense_buffer));
+- } else {
+- SCpnt->SCp.ptr = NULL;
+- SCpnt->SCp.this_residual = 0;
+- scsi_set_resid(SCpnt, 0);
+- }
++ if ((phase & resetting) || !scsi_sglist(SCpnt)) {
++ SCpnt->SCp.ptr = NULL;
++ SCpnt->SCp.this_residual = 0;
++ scsi_set_resid(SCpnt, 0);
+ SCpnt->SCp.buffer = NULL;
+ SCpnt->SCp.buffers_residual = 0;
+ } else {
+@@ -1561,10 +1554,7 @@ static void busfree_run(struct Scsi_Host *shpnt)
+ }
+ #endif
+
+- /* restore old command */
+- memcpy(cmd->cmnd, sc->aha_orig_cmnd, sizeof(cmd->cmnd));
+- cmd->cmd_len = sc->aha_orig_cmd_len;
+- scsi_set_resid(cmd, sc->aha_orig_resid);
++ scsi_eh_restore_cmnd(cmd, &sc->ses);
+
+ cmd->SCp.Status = SAM_STAT_CHECK_CONDITION;
+
+@@ -1587,22 +1577,10 @@ static void busfree_run(struct Scsi_Host *shpnt)
+ DPRINTK(debug_eh, ERR_LEAD "requesting sense\n", CMDINFO(ptr));
+ #endif
+
+- /* save old command */
+ sc = SCDATA(ptr);
+ /* It was allocated in aha152x_internal_queue? */
+ BUG_ON(!sc);
+- memcpy(sc->aha_orig_cmnd, ptr->cmnd,
+- sizeof(ptr->cmnd));
+- sc->aha_orig_cmd_len = ptr->cmd_len;
+- sc->aha_orig_resid = scsi_get_resid(ptr);
-
-- cqr = dasd_eckd_build_psf_ssc(device);
-- if (IS_ERR(cqr))
-- return PTR_ERR(cqr);
+- ptr->cmnd[0] = REQUEST_SENSE;
+- ptr->cmnd[1] = 0;
+- ptr->cmnd[2] = 0;
+- ptr->cmnd[3] = 0;
+- ptr->cmnd[4] = sizeof(ptr->sense_buffer);
+- ptr->cmnd[5] = 0;
+- ptr->cmd_len = 6;
++ scsi_eh_prep_cmnd(ptr, &sc->ses, NULL, 0, ~0);
+
+ DO_UNLOCK(flags);
+ aha152x_internal_queue(ptr, NULL, check_condition, ptr->scsi_done);
+diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c
+index bbcc2c5..190568e 100644
+--- a/drivers/scsi/aha1542.c
++++ b/drivers/scsi/aha1542.c
+@@ -51,15 +51,6 @@
+ #define SCSI_BUF_PA(address) isa_virt_to_bus(address)
+ #define SCSI_SG_PA(sgent) (isa_page_to_bus(sg_page((sgent))) + (sgent)->offset)
+
+-static void BAD_DMA(void *address, unsigned int length)
+-{
+- printk(KERN_CRIT "buf vaddress %p paddress 0x%lx length %d\n",
+- address,
+- SCSI_BUF_PA(address),
+- length);
+- panic("Buffer at physical address > 16Mb used for aha1542");
+-}
-
-- rc = dasd_sleep_on(cqr);
-- if (!rc)
-- /* trigger CIO to reprobe devices */
-- css_schedule_reprobe();
-- dasd_sfree_request(cqr, cqr->device);
-- return rc;
-+ struct dasd_ccw_req *cqr;
-+ int rc;
-+
-+ cqr = dasd_eckd_build_psf_ssc(device);
-+ if (IS_ERR(cqr))
-+ return PTR_ERR(cqr);
-+
-+ rc = dasd_sleep_on(cqr);
-+ if (!rc)
-+ /* trigger CIO to reprobe devices */
-+ css_schedule_reprobe();
-+ dasd_sfree_request(cqr, cqr->memdev);
-+ return rc;
- }
+ static void BAD_SG_DMA(Scsi_Cmnd * SCpnt,
+ struct scatterlist *sgp,
+ int nseg,
+@@ -545,7 +536,7 @@ static void aha1542_intr_handle(struct Scsi_Host *shost, void *dev_id)
+ we will still have it in the cdb when we come back */
+ if (ccb[mbo].tarstat == 2)
+ memcpy(SCtmp->sense_buffer, &ccb[mbo].cdb[ccb[mbo].cdblen],
+- sizeof(SCtmp->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
- /*
- * Valide storage server of current device.
- */
--static int
--dasd_eckd_validate_server(struct dasd_device *device, struct dasd_uid *uid)
-+static int dasd_eckd_validate_server(struct dasd_device *device)
- {
- int rc;
-+ struct dasd_eckd_private *private;
- /* Currently PAV is the only reason to 'validate' server on LPAR */
- if (dasd_nopav || MACHINE_IS_VM)
-@@ -659,9 +845,11 @@ dasd_eckd_validate_server(struct dasd_device *device, struct dasd_uid *uid)
- rc = dasd_eckd_psf_ssc(device);
- /* may be requested feature is not available on server,
- * therefore just report error and go ahead */
-+ private = (struct dasd_eckd_private *) device->private;
- DEV_MESSAGE(KERN_INFO, device,
- "PSF-SSC on storage subsystem %s.%s.%04x returned rc=%d",
-- uid->vendor, uid->serial, uid->ssid, rc);
-+ private->uid.vendor, private->uid.serial,
-+ private->uid.ssid, rc);
- /* RE-Read Configuration Data */
- return dasd_eckd_read_conf(device);
- }
-@@ -674,9 +862,9 @@ static int
- dasd_eckd_check_characteristics(struct dasd_device *device)
- {
- struct dasd_eckd_private *private;
-- struct dasd_uid uid;
-+ struct dasd_block *block;
- void *rdc_data;
-- int rc;
-+ int is_known, rc;
+ /* is there mail :-) */
+@@ -597,8 +588,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ unchar target = SCpnt->device->id;
+ unchar lun = SCpnt->device->lun;
+ unsigned long flags;
+- void *buff = SCpnt->request_buffer;
+- int bufflen = SCpnt->request_bufflen;
++ int bufflen = scsi_bufflen(SCpnt);
+ int mbo;
+ struct mailbox *mb;
+ struct ccb *ccb;
+@@ -619,7 +609,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ #if 0
+ /* scsi_request_sense() provides a buffer of size 256,
+ so there is no reason to expect equality */
+- if (bufflen != sizeof(SCpnt->sense_buffer))
++ if (bufflen != SCSI_SENSE_BUFFERSIZE)
+ printk(KERN_CRIT "aha1542: Wrong buffer length supplied "
+ "for request sense (%d)\n", bufflen);
+ #endif
+@@ -689,42 +679,29 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
- private = (struct dasd_eckd_private *) device->private;
- if (private == NULL) {
-@@ -699,27 +887,54 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
- /* Read Configuration Data */
- rc = dasd_eckd_read_conf(device);
- if (rc)
-- return rc;
-+ goto out_err1;
+ memcpy(ccb[mbo].cdb, cmd, ccb[mbo].cdblen);
- /* Generate device unique id and register in devmap */
-- rc = dasd_eckd_generate_uid(device, &uid);
-+ rc = dasd_eckd_generate_uid(device, &private->uid);
- if (rc)
-- return rc;
-- rc = dasd_set_uid(device->cdev, &uid);
-- if (rc == 1) /* new server found */
-- rc = dasd_eckd_validate_server(device, &uid);
-+ goto out_err1;
-+ dasd_set_uid(device->cdev, &private->uid);
-+
-+ if (private->uid.type == UA_BASE_DEVICE) {
-+ block = dasd_alloc_block();
-+ if (IS_ERR(block)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "could not allocate dasd block structure");
-+ rc = PTR_ERR(block);
-+ goto out_err1;
-+ }
-+ device->block = block;
-+ block->base = device;
-+ }
-+
-+ /* register lcu with alias handling, enable PAV if this is a new lcu */
-+ is_known = dasd_alias_make_device_known_to_lcu(device);
-+ if (is_known < 0) {
-+ rc = is_known;
-+ goto out_err2;
-+ }
-+ if (!is_known) {
-+ /* new lcu found */
-+ rc = dasd_eckd_validate_server(device); /* will switch pav on */
-+ if (rc)
-+ goto out_err3;
-+ }
+- if (SCpnt->use_sg) {
++ if (bufflen) {
+ struct scatterlist *sg;
+ struct chain *cptr;
+ #ifdef DEBUG
+ unsigned char *ptr;
+ #endif
+- int i;
++ int i, sg_count = scsi_sg_count(SCpnt);
+ ccb[mbo].op = 2; /* SCSI Initiator Command w/scatter-gather */
+- SCpnt->host_scribble = kmalloc(512, GFP_KERNEL | GFP_DMA);
++ SCpnt->host_scribble = kmalloc(sizeof(*cptr)*sg_count,
++ GFP_KERNEL | GFP_DMA);
+ cptr = (struct chain *) SCpnt->host_scribble;
+ if (cptr == NULL) {
+ /* free the claimed mailbox slot */
+ HOSTDATA(SCpnt->device->host)->SCint[mbo] = NULL;
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+- scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
+- if (sg->length == 0 || SCpnt->use_sg > 16 ||
+- (((int) sg->offset) & 1) || (sg->length & 1)) {
+- unsigned char *ptr;
+- printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i);
+- scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
+- printk(KERN_CRIT "%d: %p %d\n", i,
+- sg_virt(sg), sg->length);
+- };
+- printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr);
+- ptr = (unsigned char *) &cptr[i];
+- for (i = 0; i < 18; i++)
+- printk("%02x ", ptr[i]);
+- panic("Foooooooood fight!");
+- };
++ scsi_for_each_sg(SCpnt, sg, sg_count, i) {
+ any2scsi(cptr[i].dataptr, SCSI_SG_PA(sg));
+ if (SCSI_SG_PA(sg) + sg->length - 1 > ISA_DMA_THRESHOLD)
+- BAD_SG_DMA(SCpnt, sg, SCpnt->use_sg, i);
++ BAD_SG_DMA(SCpnt, scsi_sglist(SCpnt), sg_count, i);
+ any2scsi(cptr[i].datalen, sg->length);
+ };
+- any2scsi(ccb[mbo].datalen, SCpnt->use_sg * sizeof(struct chain));
++ any2scsi(ccb[mbo].datalen, sg_count * sizeof(struct chain));
+ any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(cptr));
+ #ifdef DEBUG
+ printk("cptr %x: ", cptr);
+@@ -735,10 +712,8 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ } else {
+ ccb[mbo].op = 0; /* SCSI Initiator Command */
+ SCpnt->host_scribble = NULL;
+- any2scsi(ccb[mbo].datalen, bufflen);
+- if (buff && SCSI_BUF_PA(buff + bufflen - 1) > ISA_DMA_THRESHOLD)
+- BAD_DMA(buff, bufflen);
+- any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(buff));
++ any2scsi(ccb[mbo].datalen, 0);
++ any2scsi(ccb[mbo].dataptr, 0);
+ };
+ ccb[mbo].idlun = (target & 7) << 5 | direction | (lun & 7); /*SCSI Target Id */
+ ccb[mbo].rsalen = 16;
+diff --git a/drivers/scsi/aha1740.c b/drivers/scsi/aha1740.c
+index f6722fd..be58a0b 100644
+--- a/drivers/scsi/aha1740.c
++++ b/drivers/scsi/aha1740.c
+@@ -286,7 +286,7 @@ static irqreturn_t aha1740_intr_handle(int irq, void *dev_id)
+ cdb when we come back */
+ if ( (adapstat & G2INTST_MASK) == G2INTST_CCBERROR ) {
+ memcpy(SCtmp->sense_buffer, ecbptr->sense,
+- sizeof(SCtmp->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ errstatus = aha1740_makecode(ecbptr->sense,ecbptr->status);
+ } else
+ errstatus = 0;
+diff --git a/drivers/scsi/aic7xxx/Makefile b/drivers/scsi/aic7xxx/Makefile
+index 9a6ce19..e4f70c5 100644
+--- a/drivers/scsi/aic7xxx/Makefile
++++ b/drivers/scsi/aic7xxx/Makefile
+@@ -33,11 +33,10 @@ aic79xx-y += aic79xx_osm.o \
+ aic79xx_proc.o \
+ aic79xx_osm_pci.o
+
+-EXTRA_CFLAGS += -Idrivers/scsi
++ccflags-y += -Idrivers/scsi
+ ifdef WARNINGS_BECOME_ERRORS
+-EXTRA_CFLAGS += -Werror
++ccflags-y += -Werror
+ endif
+-#EXTRA_CFLAGS += -g
+
+ # Files generated that shall be removed upon make clean
+ clean-files := aic7xxx_seq.h aic7xxx_reg.h aic7xxx_reg_print.c
+@@ -46,53 +45,45 @@ clean-files += aic79xx_seq.h aic79xx_reg.h aic79xx_reg_print.c
+ # Dependencies for generated files need to be listed explicitly
+
+ $(obj)/aic7xxx_core.o: $(obj)/aic7xxx_seq.h
++$(obj)/aic7xxx_core.o: $(obj)/aic7xxx_reg.h
+ $(obj)/aic79xx_core.o: $(obj)/aic79xx_seq.h
+-$(obj)/aic79xx_reg_print.c: $(src)/aic79xx_reg_print.c_shipped
+-$(obj)/aic7xxx_reg_print.c: $(src)/aic7xxx_reg_print.c_shipped
++$(obj)/aic79xx_core.o: $(obj)/aic79xx_reg.h
+
+-$(addprefix $(obj)/,$(aic7xxx-y)): $(obj)/aic7xxx_reg.h
+-$(addprefix $(obj)/,$(aic79xx-y)): $(obj)/aic79xx_reg.h
++$(addprefix $(obj)/,$(aic7xxx-y)): $(obj)/aic7xxx_seq.h
++$(addprefix $(obj)/,$(aic79xx-y)): $(obj)/aic79xx_seq.h
+
+-aic7xxx-gen-$(CONFIG_AIC7XXX_BUILD_FIRMWARE) := $(obj)/aic7xxx_seq.h \
+- $(obj)/aic7xxx_reg.h
++aic7xxx-gen-$(CONFIG_AIC7XXX_BUILD_FIRMWARE) := $(obj)/aic7xxx_reg.h
+ aic7xxx-gen-$(CONFIG_AIC7XXX_REG_PRETTY_PRINT) += $(obj)/aic7xxx_reg_print.c
+
+ aicasm-7xxx-opts-$(CONFIG_AIC7XXX_REG_PRETTY_PRINT) := \
+ -p $(obj)/aic7xxx_reg_print.c -i aic7xxx_osm.h
+
+ ifeq ($(CONFIG_AIC7XXX_BUILD_FIRMWARE),y)
+-# Create a dependency chain in generated files
+-# to avoid concurrent invocations of the single
+-# rule that builds them all.
+-aic7xxx_seq.h: aic7xxx_reg.h
+-ifeq ($(CONFIG_AIC7XXX_REG_PRETTY_PRINT),y)
+-aic7xxx_reg.h: aic7xxx_reg_print.c
+-endif
+-$(aic7xxx-gen-y): $(src)/aic7xxx.seq $(src)/aic7xxx.reg $(obj)/aicasm/aicasm
++$(obj)/aic7xxx_seq.h: $(src)/aic7xxx.seq $(src)/aic7xxx.reg $(obj)/aicasm/aicasm
+ $(obj)/aicasm/aicasm -I$(src) -r $(obj)/aic7xxx_reg.h \
+ $(aicasm-7xxx-opts-y) -o $(obj)/aic7xxx_seq.h \
+ $(src)/aic7xxx.seq
+
-+ /* Read Feature Codes */
-+ rc = dasd_eckd_read_features(device);
- if (rc)
-- return rc;
-+ goto out_err3;
++$(aic7xxx-gen-y): $(obj)/aic7xxx_seq.h
++else
++$(obj)/aic7xxx_reg_print.c: $(src)/aic7xxx_reg_print.c_shipped
+ endif
- /* Read Device Characteristics */
- rdc_data = (void *) &(private->rdc_data);
- memset(rdc_data, 0, sizeof(rdc_data));
- rc = dasd_generic_read_dev_chars(device, "ECKD", &rdc_data, 64);
-- if (rc)
-+ if (rc) {
- DEV_MESSAGE(KERN_WARNING, device,
- "Read device characteristics returned "
- "rc=%d", rc);
--
-+ goto out_err3;
-+ }
- DEV_MESSAGE(KERN_INFO, device,
- "%04X/%02X(CU:%04X/%02X) Cyl:%d Head:%d Sec:%d",
- private->rdc_data.dev_type,
-@@ -729,9 +944,24 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
- private->rdc_data.no_cyl,
- private->rdc_data.trk_per_cyl,
- private->rdc_data.sec_per_trk);
-+ return 0;
+-aic79xx-gen-$(CONFIG_AIC79XX_BUILD_FIRMWARE) := $(obj)/aic79xx_seq.h \
+- $(obj)/aic79xx_reg.h
++aic79xx-gen-$(CONFIG_AIC79XX_BUILD_FIRMWARE) := $(obj)/aic79xx_reg.h
+ aic79xx-gen-$(CONFIG_AIC79XX_REG_PRETTY_PRINT) += $(obj)/aic79xx_reg_print.c
+
+ aicasm-79xx-opts-$(CONFIG_AIC79XX_REG_PRETTY_PRINT) := \
+ -p $(obj)/aic79xx_reg_print.c -i aic79xx_osm.h
+
+ ifeq ($(CONFIG_AIC79XX_BUILD_FIRMWARE),y)
+-# Create a dependency chain in generated files
+-# to avoid concurrent invocations of the single
+-# rule that builds them all.
+-aic79xx_seq.h: aic79xx_reg.h
+-ifeq ($(CONFIG_AIC79XX_REG_PRETTY_PRINT),y)
+-aic79xx_reg.h: aic79xx_reg_print.c
+-endif
+-$(aic79xx-gen-y): $(src)/aic79xx.seq $(src)/aic79xx.reg $(obj)/aicasm/aicasm
++$(obj)/aic79xx_seq.h: $(src)/aic79xx.seq $(src)/aic79xx.reg $(obj)/aicasm/aicasm
+ $(obj)/aicasm/aicasm -I$(src) -r $(obj)/aic79xx_reg.h \
+ $(aicasm-79xx-opts-y) -o $(obj)/aic79xx_seq.h \
+ $(src)/aic79xx.seq
+
-+out_err3:
-+ dasd_alias_disconnect_device_from_lcu(device);
-+out_err2:
-+ dasd_free_block(device->block);
-+ device->block = NULL;
-+out_err1:
-+ kfree(device->private);
-+ device->private = NULL;
- return rc;
- }
++$(aic79xx-gen-y): $(obj)/aic79xx_seq.h
++else
++$(obj)/aic79xx_reg_print.c: $(src)/aic79xx_reg_print.c_shipped
+ endif
+
+ $(obj)/aicasm/aicasm: $(src)/aicasm/*.[chyl]
+diff --git a/drivers/scsi/aic7xxx/aic79xx_osm.c b/drivers/scsi/aic7xxx/aic79xx_osm.c
+index 2d02040..0e4708f 100644
+--- a/drivers/scsi/aic7xxx/aic79xx_osm.c
++++ b/drivers/scsi/aic7xxx/aic79xx_osm.c
+@@ -1784,7 +1784,7 @@ ahd_linux_handle_scsi_status(struct ahd_softc *ahd,
+ if (scb->flags & SCB_SENSE) {
+ sense_size = min(sizeof(struct scsi_sense_data)
+ - ahd_get_sense_residual(scb),
+- (u_long)sizeof(cmd->sense_buffer));
++ (u_long)SCSI_SENSE_BUFFERSIZE);
+ sense_offset = 0;
+ } else {
+ /*
+@@ -1795,11 +1795,11 @@ ahd_linux_handle_scsi_status(struct ahd_softc *ahd,
+ scb->sense_data;
+ sense_size = min_t(size_t,
+ scsi_4btoul(siu->sense_length),
+- sizeof(cmd->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ sense_offset = SIU_SENSE_OFFSET(siu);
+ }
+
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(cmd->sense_buffer,
+ ahd_get_sense_buf(ahd, scb)
+ + sense_offset, sense_size);
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+index 390b0fc..e310e41 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_osm.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+@@ -1801,12 +1801,12 @@ ahc_linux_handle_scsi_status(struct ahc_softc *ahc,
+
+ sense_size = min(sizeof(struct scsi_sense_data)
+ - ahc_get_sense_residual(scb),
+- (u_long)sizeof(cmd->sense_buffer));
++ (u_long)SCSI_SENSE_BUFFERSIZE);
+ memcpy(cmd->sense_buffer,
+ ahc_get_sense_buf(ahc, scb), sense_size);
+- if (sense_size < sizeof(cmd->sense_buffer))
++ if (sense_size < SCSI_SENSE_BUFFERSIZE)
+ memset(&cmd->sense_buffer[sense_size], 0,
+- sizeof(cmd->sense_buffer) - sense_size);
++ SCSI_SENSE_BUFFERSIZE - sense_size);
+ cmd->result |= (DRIVER_SENSE << 24);
+ #ifdef AHC_DEBUG
+ if (ahc_debug & AHC_SHOW_SENSE) {
+diff --git a/drivers/scsi/aic7xxx_old.c b/drivers/scsi/aic7xxx_old.c
+index 8f8db5f..bcb0b87 100644
+--- a/drivers/scsi/aic7xxx_old.c
++++ b/drivers/scsi/aic7xxx_old.c
+@@ -2696,7 +2696,7 @@ aic7xxx_done(struct aic7xxx_host *p, struct aic7xxx_scb *scb)
+ {
+ pci_unmap_single(p->pdev,
+ le32_to_cpu(scb->sg_list[0].address),
+- sizeof(cmd->sense_buffer),
++ SCSI_SENSE_BUFFERSIZE,
+ PCI_DMA_FROMDEVICE);
+ }
+ if (scb->flags & SCB_RECOVERY_SCB)
+@@ -4267,13 +4267,13 @@ aic7xxx_handle_seqint(struct aic7xxx_host *p, unsigned char intstat)
+ sizeof(generic_sense));
-+static void dasd_eckd_uncheck_device(struct dasd_device *device)
-+{
-+ dasd_alias_disconnect_device_from_lcu(device);
-+}
-+
- static struct dasd_ccw_req *
- dasd_eckd_analysis_ccw(struct dasd_device *device)
- {
-@@ -755,7 +985,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
- /* Define extent for the first 3 tracks. */
- define_extent(ccw++, cqr->data, 0, 2,
- DASD_ECKD_CCW_READ_COUNT, device);
-- LO_data = cqr->data + sizeof (struct DE_eckd_data);
-+ LO_data = cqr->data + sizeof(struct DE_eckd_data);
- /* Locate record for the first 4 records on track 0. */
- ccw[-1].flags |= CCW_FLAG_CC;
- locate_record(ccw++, LO_data++, 0, 0, 4,
-@@ -783,7 +1013,9 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
- ccw->count = 8;
- ccw->cda = (__u32)(addr_t) count_data;
+ scb->sense_cmd[1] = (cmd->device->lun << 5);
+- scb->sense_cmd[4] = sizeof(cmd->sense_buffer);
++ scb->sense_cmd[4] = SCSI_SENSE_BUFFERSIZE;
-- cqr->device = device;
-+ cqr->block = NULL;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- cqr->retries = 0;
- cqr->buildclk = get_clock();
- cqr->status = DASD_CQR_FILLED;
-@@ -803,7 +1035,7 @@ dasd_eckd_analysis_callback(struct dasd_ccw_req *init_cqr, void *data)
- struct dasd_eckd_private *private;
- struct dasd_device *device;
+ scb->sg_list[0].length =
+- cpu_to_le32(sizeof(cmd->sense_buffer));
++ cpu_to_le32(SCSI_SENSE_BUFFERSIZE);
+ scb->sg_list[0].address =
+ cpu_to_le32(pci_map_single(p->pdev, cmd->sense_buffer,
+- sizeof(cmd->sense_buffer),
++ SCSI_SENSE_BUFFERSIZE,
+ PCI_DMA_FROMDEVICE));
-- device = init_cqr->device;
-+ device = init_cqr->startdev;
- private = (struct dasd_eckd_private *) device->private;
- private->init_cqr_status = init_cqr->status;
- dasd_sfree_request(init_cqr, device);
-@@ -811,13 +1043,13 @@ dasd_eckd_analysis_callback(struct dasd_ccw_req *init_cqr, void *data)
- }
+ /*
+@@ -4296,7 +4296,7 @@ aic7xxx_handle_seqint(struct aic7xxx_host *p, unsigned char intstat)
+ hscb->residual_data_count[2] = 0;
- static int
--dasd_eckd_start_analysis(struct dasd_device *device)
-+dasd_eckd_start_analysis(struct dasd_block *block)
- {
- struct dasd_eckd_private *private;
- struct dasd_ccw_req *init_cqr;
+ scb->sg_count = hscb->SG_segment_count = 1;
+- scb->sg_length = sizeof(cmd->sense_buffer);
++ scb->sg_length = SCSI_SENSE_BUFFERSIZE;
+ scb->tag_action = 0;
+ scb->flags |= SCB_SENSE;
+ /*
+@@ -10293,7 +10293,6 @@ static int aic7xxx_queue(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *))
+ aic7xxx_position(cmd) = scb->hscb->tag;
+ cmd->scsi_done = fn;
+ cmd->result = DID_OK;
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
+ aic7xxx_error(cmd) = DID_OK;
+ aic7xxx_status(cmd) = 0;
+ cmd->host_scribble = NULL;
+diff --git a/drivers/scsi/aic94xx/aic94xx_dev.c b/drivers/scsi/aic94xx/aic94xx_dev.c
+index 3dce618..72042ca 100644
+--- a/drivers/scsi/aic94xx/aic94xx_dev.c
++++ b/drivers/scsi/aic94xx/aic94xx_dev.c
+@@ -165,7 +165,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
+ if (dev->port->oob_mode != SATA_OOB_MODE) {
+ flags |= OPEN_REQUIRED;
+ if ((dev->dev_type == SATA_DEV) ||
+- (dev->tproto & SAS_PROTO_STP)) {
++ (dev->tproto & SAS_PROTOCOL_STP)) {
+ struct smp_resp *rps_resp = &dev->sata_dev.rps_resp;
+ if (rps_resp->frame_type == SMP_RESPONSE &&
+ rps_resp->function == SMP_REPORT_PHY_SATA &&
+@@ -193,7 +193,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
+ asd_ddbsite_write_byte(asd_ha, ddb, DDB_TARG_FLAGS, flags);
-- private = (struct dasd_eckd_private *) device->private;
-- init_cqr = dasd_eckd_analysis_ccw(device);
-+ private = (struct dasd_eckd_private *) block->base->private;
-+ init_cqr = dasd_eckd_analysis_ccw(block->base);
- if (IS_ERR(init_cqr))
- return PTR_ERR(init_cqr);
- init_cqr->callback = dasd_eckd_analysis_callback;
-@@ -828,13 +1060,15 @@ dasd_eckd_start_analysis(struct dasd_device *device)
- }
+ flags = 0;
+- if (dev->tproto & SAS_PROTO_STP)
++ if (dev->tproto & SAS_PROTOCOL_STP)
+ flags |= STP_CL_POL_NO_TX;
+ asd_ddbsite_write_byte(asd_ha, ddb, DDB_TARG_FLAGS2, flags);
- static int
--dasd_eckd_end_analysis(struct dasd_device *device)
-+dasd_eckd_end_analysis(struct dasd_block *block)
- {
-+ struct dasd_device *device;
- struct dasd_eckd_private *private;
- struct eckd_count *count_area;
- unsigned int sb, blk_per_trk;
- int status, i;
+@@ -201,7 +201,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
+ asd_ddbsite_write_word(asd_ha, ddb, SEND_QUEUE_TAIL, 0xFFFF);
+ asd_ddbsite_write_word(asd_ha, ddb, SISTER_DDB, 0xFFFF);
-+ device = block->base;
- private = (struct dasd_eckd_private *) device->private;
- status = private->init_cqr_status;
- private->init_cqr_status = -1;
-@@ -846,7 +1080,7 @@ dasd_eckd_end_analysis(struct dasd_device *device)
+- if (dev->dev_type == SATA_DEV || (dev->tproto & SAS_PROTO_STP)) {
++ if (dev->dev_type == SATA_DEV || (dev->tproto & SAS_PROTOCOL_STP)) {
+ i = asd_init_sata(dev);
+ if (i < 0) {
+ asd_free_ddb(asd_ha, ddb);
+diff --git a/drivers/scsi/aic94xx/aic94xx_dump.c b/drivers/scsi/aic94xx/aic94xx_dump.c
+index 6bd8e30..3d8c4ff 100644
+--- a/drivers/scsi/aic94xx/aic94xx_dump.c
++++ b/drivers/scsi/aic94xx/aic94xx_dump.c
+@@ -903,11 +903,11 @@ void asd_dump_frame_rcvd(struct asd_phy *phy,
+ int i;
- private->uses_cdl = 1;
- /* Calculate number of blocks/records per track. */
-- blk_per_trk = recs_per_track(&private->rdc_data, 0, device->bp_block);
-+ blk_per_trk = recs_per_track(&private->rdc_data, 0, block->bp_block);
- /* Check Track 0 for Compatible Disk Layout */
- count_area = NULL;
- for (i = 0; i < 3; i++) {
-@@ -876,56 +1110,65 @@ dasd_eckd_end_analysis(struct dasd_device *device)
- if (count_area != NULL && count_area->kl == 0) {
- /* we found nothing violating our disk layout */
- if (dasd_check_blocksize(count_area->dl) == 0)
-- device->bp_block = count_area->dl;
-+ block->bp_block = count_area->dl;
- }
-- if (device->bp_block == 0) {
-+ if (block->bp_block == 0) {
- DEV_MESSAGE(KERN_WARNING, device, "%s",
- "Volume has incompatible disk layout");
- return -EMEDIUMTYPE;
+ switch ((dl->status_block[1] & 0x70) >> 3) {
+- case SAS_PROTO_STP:
++ case SAS_PROTOCOL_STP:
+ ASD_DPRINTK("STP proto device-to-host FIS:\n");
+ break;
+ default:
+- case SAS_PROTO_SSP:
++ case SAS_PROTOCOL_SSP:
+ ASD_DPRINTK("SAS proto IDENTIFY:\n");
+ break;
}
-- device->s2b_shift = 0; /* bits to shift 512 to get a block */
-- for (sb = 512; sb < device->bp_block; sb = sb << 1)
-- device->s2b_shift++;
-+ block->s2b_shift = 0; /* bits to shift 512 to get a block */
-+ for (sb = 512; sb < block->bp_block; sb = sb << 1)
-+ block->s2b_shift++;
+diff --git a/drivers/scsi/aic94xx/aic94xx_hwi.c b/drivers/scsi/aic94xx/aic94xx_hwi.c
+index 0cd7eed..098b5f3 100644
+--- a/drivers/scsi/aic94xx/aic94xx_hwi.c
++++ b/drivers/scsi/aic94xx/aic94xx_hwi.c
+@@ -91,7 +91,7 @@ static int asd_init_phy(struct asd_phy *phy)
-- blk_per_trk = recs_per_track(&private->rdc_data, 0, device->bp_block);
-- device->blocks = (private->rdc_data.no_cyl *
-+ blk_per_trk = recs_per_track(&private->rdc_data, 0, block->bp_block);
-+ block->blocks = (private->rdc_data.no_cyl *
- private->rdc_data.trk_per_cyl *
- blk_per_trk);
+ sas_phy->enabled = 1;
+ sas_phy->class = SAS;
+- sas_phy->iproto = SAS_PROTO_ALL;
++ sas_phy->iproto = SAS_PROTOCOL_ALL;
+ sas_phy->tproto = 0;
+ sas_phy->type = PHY_TYPE_PHYSICAL;
+ sas_phy->role = PHY_ROLE_INITIATOR;
+diff --git a/drivers/scsi/aic94xx/aic94xx_hwi.h b/drivers/scsi/aic94xx/aic94xx_hwi.h
+index 491e5d8..150f670 100644
+--- a/drivers/scsi/aic94xx/aic94xx_hwi.h
++++ b/drivers/scsi/aic94xx/aic94xx_hwi.h
+@@ -72,6 +72,7 @@ struct flash_struct {
+ u8 manuf;
+ u8 dev_id;
+ u8 sec_prot;
++ u8 method;
- DEV_MESSAGE(KERN_INFO, device,
- "(%dkB blks): %dkB at %dkB/trk %s",
-- (device->bp_block >> 10),
-+ (block->bp_block >> 10),
- ((private->rdc_data.no_cyl *
- private->rdc_data.trk_per_cyl *
-- blk_per_trk * (device->bp_block >> 9)) >> 1),
-- ((blk_per_trk * device->bp_block) >> 10),
-+ blk_per_trk * (block->bp_block >> 9)) >> 1),
-+ ((blk_per_trk * block->bp_block) >> 10),
- private->uses_cdl ?
- "compatible disk layout" : "linux disk layout");
+ u32 dir_offs;
+ };
+@@ -216,6 +217,8 @@ struct asd_ha_struct {
+ struct dma_pool *scb_pool;
- return 0;
- }
+ struct asd_seq_data seq; /* sequencer related */
++ u32 bios_status;
++ const struct firmware *bios_image;
+ };
--static int
--dasd_eckd_do_analysis(struct dasd_device *device)
-+static int dasd_eckd_do_analysis(struct dasd_block *block)
- {
- struct dasd_eckd_private *private;
+ /* ---------- Common macros ---------- */
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index b70d6e7..5d761eb 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -29,6 +29,7 @@
+ #include <linux/kernel.h>
+ #include <linux/pci.h>
+ #include <linux/delay.h>
++#include <linux/firmware.h>
-- private = (struct dasd_eckd_private *) device->private;
-+ private = (struct dasd_eckd_private *) block->base->private;
- if (private->init_cqr_status < 0)
-- return dasd_eckd_start_analysis(device);
-+ return dasd_eckd_start_analysis(block);
- else
-- return dasd_eckd_end_analysis(device);
-+ return dasd_eckd_end_analysis(block);
+ #include <scsi/scsi_host.h>
+
+@@ -36,6 +37,7 @@
+ #include "aic94xx_reg.h"
+ #include "aic94xx_hwi.h"
+ #include "aic94xx_seq.h"
++#include "aic94xx_sds.h"
+
+ /* The format is "version.release.patchlevel" */
+ #define ASD_DRIVER_VERSION "1.0.3"
+@@ -134,7 +136,7 @@ Err:
+ return err;
}
-+static int dasd_eckd_ready_to_online(struct dasd_device *device)
-+{
-+ return dasd_alias_add_device(device);
-+};
-+
-+static int dasd_eckd_online_to_ready(struct dasd_device *device)
-+{
-+ return dasd_alias_remove_device(device);
-+};
-+
- static int
--dasd_eckd_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
-+dasd_eckd_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
+-static void __devexit asd_unmap_memio(struct asd_ha_struct *asd_ha)
++static void asd_unmap_memio(struct asd_ha_struct *asd_ha)
{
- struct dasd_eckd_private *private;
+ struct asd_ha_addrspace *io_handle;
-- private = (struct dasd_eckd_private *) device->private;
-- if (dasd_check_blocksize(device->bp_block) == 0) {
-+ private = (struct dasd_eckd_private *) block->base->private;
-+ if (dasd_check_blocksize(block->bp_block) == 0) {
- geo->sectors = recs_per_track(&private->rdc_data,
-- 0, device->bp_block);
-+ 0, block->bp_block);
- }
- geo->cylinders = private->rdc_data.no_cyl;
- geo->heads = private->rdc_data.trk_per_cyl;
-@@ -1037,7 +1280,7 @@ dasd_eckd_format_device(struct dasd_device * device,
- locate_record(ccw++, (struct LO_eckd_data *) data,
- fdata->start_unit, 0, rpt + 1,
- DASD_ECKD_CCW_WRITE_RECORD_ZERO, device,
-- device->bp_block);
-+ device->block->bp_block);
- data += sizeof(struct LO_eckd_data);
- break;
- case 0x04: /* Invalidate track. */
-@@ -1110,43 +1353,28 @@ dasd_eckd_format_device(struct dasd_device * device,
- ccw++;
- }
- }
-- fcp->device = device;
-- fcp->retries = 2; /* set retry counter to enable ERP */
-+ fcp->startdev = device;
-+ fcp->memdev = device;
-+ clear_bit(DASD_CQR_FLAGS_USE_ERP, &fcp->flags);
-+ fcp->retries = 5; /* set retry counter to enable default ERP */
- fcp->buildclk = get_clock();
- fcp->status = DASD_CQR_FILLED;
- return fcp;
+@@ -171,7 +173,7 @@ static int __devinit asd_map_ioport(struct asd_ha_struct *asd_ha)
+ return err;
}
--static dasd_era_t
--dasd_eckd_examine_error(struct dasd_ccw_req * cqr, struct irb * irb)
-+static void dasd_eckd_handle_terminated_request(struct dasd_ccw_req *cqr)
+-static void __devexit asd_unmap_ioport(struct asd_ha_struct *asd_ha)
++static void asd_unmap_ioport(struct asd_ha_struct *asd_ha)
{
-- struct dasd_device *device = (struct dasd_device *) cqr->device;
-- struct ccw_device *cdev = device->cdev;
--
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
--
-- switch (cdev->id.cu_type) {
-- case 0x3990:
-- case 0x2105:
-- case 0x2107:
-- case 0x1750:
-- return dasd_3990_erp_examine(cqr, irb);
-- case 0x9343:
-- return dasd_9343_erp_examine(cqr, irb);
-- case 0x3880:
-- default:
-- DEV_MESSAGE(KERN_WARNING, device, "%s",
-- "default (unknown CU type) - RECOVERABLE return");
-- return dasd_era_recover;
-+ cqr->status = DASD_CQR_FILLED;
-+ if (cqr->block && (cqr->startdev != cqr->block->base)) {
-+ dasd_eckd_reset_ccw_to_base_io(cqr);
-+ cqr->startdev = cqr->block->base;
- }
--}
-+};
+ pci_release_region(asd_ha->pcidev, PCI_IOBAR_OFFSET);
+ }
+@@ -208,7 +210,7 @@ Err:
+ return err;
+ }
- static dasd_erp_fn_t
- dasd_eckd_erp_action(struct dasd_ccw_req * cqr)
+-static void __devexit asd_unmap_ha(struct asd_ha_struct *asd_ha)
++static void asd_unmap_ha(struct asd_ha_struct *asd_ha)
{
-- struct dasd_device *device = (struct dasd_device *) cqr->device;
-+ struct dasd_device *device = (struct dasd_device *) cqr->startdev;
- struct ccw_device *cdev = device->cdev;
-
- switch (cdev->id.cu_type) {
-@@ -1168,8 +1396,37 @@ dasd_eckd_erp_postaction(struct dasd_ccw_req * cqr)
- return dasd_default_erp_postaction;
+ if (asd_ha->iospace)
+ asd_unmap_ioport(asd_ha);
+@@ -313,6 +315,181 @@ static ssize_t asd_show_dev_pcba_sn(struct device *dev,
}
+ static DEVICE_ATTR(pcba_sn, S_IRUGO, asd_show_dev_pcba_sn, NULL);
--static struct dasd_ccw_req *
--dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
++#define FLASH_CMD_NONE 0x00
++#define FLASH_CMD_UPDATE 0x01
++#define FLASH_CMD_VERIFY 0x02
+
-+static void dasd_eckd_handle_unsolicited_interrupt(struct dasd_device *device,
-+ struct irb *irb)
++struct flash_command {
++ u8 command[8];
++ int code;
++};
++
++static struct flash_command flash_command_table[] =
+{
-+ char mask;
++ {"verify", FLASH_CMD_VERIFY},
++ {"update", FLASH_CMD_UPDATE},
++ {"", FLASH_CMD_NONE} /* Last entry should be NULL. */
++};
+
-+ /* first of all check for state change pending interrupt */
-+ mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
-+ if ((irb->scsw.dstat & mask) == mask) {
-+ dasd_generic_handle_state_change(device);
-+ return;
-+ }
++struct error_bios {
++ char *reason;
++ int err_code;
++};
+
-+ /* summary unit check */
-+ if ((irb->scsw.dstat & DEV_STAT_UNIT_CHECK) && irb->ecw[7] == 0x0D) {
-+ dasd_alias_handle_summary_unit_check(device, irb);
-+ return;
-+ }
++static struct error_bios flash_error_table[] =
++{
++ {"Failed to open bios image file", FAIL_OPEN_BIOS_FILE},
++ {"PCI ID mismatch", FAIL_CHECK_PCI_ID},
++ {"Checksum mismatch", FAIL_CHECK_SUM},
++ {"Unknown Error", FAIL_UNKNOWN},
++ {"Failed to verify.", FAIL_VERIFY},
++ {"Failed to reset flash chip.", FAIL_RESET_FLASH},
++ {"Failed to find flash chip type.", FAIL_FIND_FLASH_ID},
++ {"Failed to erase flash chip.", FAIL_ERASE_FLASH},
++ {"Failed to program flash chip.", FAIL_WRITE_FLASH},
++ {"Flash in progress", FLASH_IN_PROGRESS},
++ {"Image file size Error", FAIL_FILE_SIZE},
++ {"Input parameter error", FAIL_PARAMETERS},
++ {"Out of memory", FAIL_OUT_MEMORY},
++ {"OK", 0} /* Last entry err_code = 0. */
++};
+
-+ /* just report other unsolicited interrupts */
-+ DEV_MESSAGE(KERN_DEBUG, device, "%s",
-+ "unsolicited interrupt received");
-+ device->discipline->dump_sense(device, NULL, irb);
-+ dasd_schedule_device_bh(device);
++static ssize_t asd_store_update_bios(struct device *dev,
++ struct device_attribute *attr,
++ const char *buf, size_t count)
++{
++ struct asd_ha_struct *asd_ha = dev_to_asd_ha(dev);
++ char *cmd_ptr, *filename_ptr;
++ struct bios_file_header header, *hdr_ptr;
++ int res, i;
++ u32 csum = 0;
++ int flash_command = FLASH_CMD_NONE;
++ int err = 0;
+
-+ return;
-+};
++ cmd_ptr = kzalloc(count*2, GFP_KERNEL);
+
-+static struct dasd_ccw_req *dasd_eckd_build_cp(struct dasd_device *startdev,
-+ struct dasd_block *block,
-+ struct request *req)
- {
- struct dasd_eckd_private *private;
- unsigned long *idaws;
-@@ -1185,8 +1442,11 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- sector_t first_trk, last_trk;
- unsigned int first_offs, last_offs;
- unsigned char cmd, rcmd;
-+ int use_prefix;
-+ struct dasd_device *basedev;
-
-- private = (struct dasd_eckd_private *) device->private;
-+ basedev = block->base;
-+ private = (struct dasd_eckd_private *) basedev->private;
- if (rq_data_dir(req) == READ)
- cmd = DASD_ECKD_CCW_READ_MT;
- else if (rq_data_dir(req) == WRITE)
-@@ -1194,13 +1454,13 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- else
- return ERR_PTR(-EINVAL);
- /* Calculate number of blocks/records per track. */
-- blksize = device->bp_block;
-+ blksize = block->bp_block;
- blk_per_trk = recs_per_track(&private->rdc_data, 0, blksize);
- /* Calculate record id of first and last block. */
-- first_rec = first_trk = req->sector >> device->s2b_shift;
-+ first_rec = first_trk = req->sector >> block->s2b_shift;
- first_offs = sector_div(first_trk, blk_per_trk);
- last_rec = last_trk =
-- (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
-+ (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
- last_offs = sector_div(last_trk, blk_per_trk);
- /* Check struct bio and count the number of blocks for the request. */
- count = 0;
-@@ -1209,20 +1469,33 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- if (bv->bv_len & (blksize - 1))
- /* Eckd can only do full blocks. */
- return ERR_PTR(-EINVAL);
-- count += bv->bv_len >> (device->s2b_shift + 9);
-+ count += bv->bv_len >> (block->s2b_shift + 9);
- #if defined(CONFIG_64BIT)
- if (idal_is_needed (page_address(bv->bv_page), bv->bv_len))
-- cidaw += bv->bv_len >> (device->s2b_shift + 9);
-+ cidaw += bv->bv_len >> (block->s2b_shift + 9);
- #endif
- }
- /* Paranoia. */
- if (count != last_rec - first_rec + 1)
- return ERR_PTR(-EINVAL);
-- /* 1x define extent + 1x locate record + number of blocks */
-- cplength = 2 + count;
-- /* 1x define extent + 1x locate record + cidaws*sizeof(long) */
-- datasize = sizeof(struct DE_eckd_data) + sizeof(struct LO_eckd_data) +
-- cidaw * sizeof(unsigned long);
++ if (!cmd_ptr) {
++ err = FAIL_OUT_MEMORY;
++ goto out;
++ }
+
-+ /* use the prefix command if available */
-+ use_prefix = private->features.feature[8] & 0x01;
-+ if (use_prefix) {
-+ /* 1x prefix + number of blocks */
-+ cplength = 2 + count;
-+ /* 1x prefix + cidaws*sizeof(long) */
-+ datasize = sizeof(struct PFX_eckd_data) +
-+ sizeof(struct LO_eckd_data) +
-+ cidaw * sizeof(unsigned long);
-+ } else {
-+ /* 1x define extent + 1x locate record + number of blocks */
-+ cplength = 2 + count;
-+ /* 1x define extent + 1x locate record + cidaws*sizeof(long) */
-+ datasize = sizeof(struct DE_eckd_data) +
-+ sizeof(struct LO_eckd_data) +
-+ cidaw * sizeof(unsigned long);
++ filename_ptr = cmd_ptr + count;
++ res = sscanf(buf, "%s %s", cmd_ptr, filename_ptr);
++ if (res != 2) {
++ err = FAIL_PARAMETERS;
++ goto out1;
+ }
- /* Find out the number of additional locate record ccws for cdl. */
- if (private->uses_cdl && first_rec < 2*blk_per_trk) {
- if (last_rec >= 2*blk_per_trk)
-@@ -1232,26 +1505,42 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- }
- /* Allocate the ccw request. */
- cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
-- cplength, datasize, device);
-+ cplength, datasize, startdev);
- if (IS_ERR(cqr))
- return cqr;
- ccw = cqr->cpaddr;
-- /* First ccw is define extent. */
-- if (define_extent(ccw++, cqr->data, first_trk,
-- last_trk, cmd, device) == -EAGAIN) {
-- /* Clock not in sync and XRC is enabled. Try again later. */
-- dasd_sfree_request(cqr, device);
-- return ERR_PTR(-EAGAIN);
-+ /* First ccw is define extent or prefix. */
-+ if (use_prefix) {
-+ if (prefix(ccw++, cqr->data, first_trk,
-+ last_trk, cmd, basedev, startdev) == -EAGAIN) {
-+ /* Clock not in sync and XRC is enabled.
-+ * Try again later.
-+ */
-+ dasd_sfree_request(cqr, startdev);
-+ return ERR_PTR(-EAGAIN);
-+ }
-+ idaws = (unsigned long *) (cqr->data +
-+ sizeof(struct PFX_eckd_data));
-+ } else {
-+ if (define_extent(ccw++, cqr->data, first_trk,
-+ last_trk, cmd, startdev) == -EAGAIN) {
-+ /* Clock not in sync and XRC is enabled.
-+ * Try again later.
-+ */
-+ dasd_sfree_request(cqr, startdev);
-+ return ERR_PTR(-EAGAIN);
++
++ for (i = 0; flash_command_table[i].code != FLASH_CMD_NONE; i++) {
++ if (!memcmp(flash_command_table[i].command,
++ cmd_ptr, strlen(cmd_ptr))) {
++ flash_command = flash_command_table[i].code;
++ break;
+ }
-+ idaws = (unsigned long *) (cqr->data +
-+ sizeof(struct DE_eckd_data));
- }
- /* Build locate_record+read/write/ccws. */
-- idaws = (unsigned long *) (cqr->data + sizeof(struct DE_eckd_data));
- LO_data = (struct LO_eckd_data *) (idaws + cidaw);
- recid = first_rec;
- if (private->uses_cdl == 0 || recid > 2*blk_per_trk) {
- /* Only standard blocks so there is just one locate record. */
- ccw[-1].flags |= CCW_FLAG_CC;
- locate_record(ccw++, LO_data++, first_trk, first_offs + 1,
-- last_rec - recid + 1, cmd, device, blksize);
-+ last_rec - recid + 1, cmd, basedev, blksize);
- }
- rq_for_each_segment(bv, req, iter) {
- dst = page_address(bv->bv_page) + bv->bv_offset;
-@@ -1281,7 +1570,7 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- ccw[-1].flags |= CCW_FLAG_CC;
- locate_record(ccw++, LO_data++,
- trkid, recoffs + 1,
-- 1, rcmd, device, count);
-+ 1, rcmd, basedev, count);
- }
- /* Locate record for standard blocks ? */
- if (private->uses_cdl && recid == 2*blk_per_trk) {
-@@ -1289,7 +1578,7 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- locate_record(ccw++, LO_data++,
- trkid, recoffs + 1,
- last_rec - recid + 1,
-- cmd, device, count);
-+ cmd, basedev, count);
- }
- /* Read/write ccw. */
- ccw[-1].flags |= CCW_FLAG_CC;
-@@ -1310,7 +1599,9 @@ dasd_eckd_build_cp(struct dasd_device * device, struct request *req)
- }
- if (req->cmd_flags & REQ_FAILFAST)
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
-- cqr->device = device;
-+ cqr->startdev = startdev;
-+ cqr->memdev = startdev;
-+ cqr->block = block;
- cqr->expires = 5 * 60 * HZ; /* 5 minutes */
- cqr->lpm = private->path_data.ppm;
- cqr->retries = 256;
-@@ -1333,10 +1624,10 @@ dasd_eckd_free_cp(struct dasd_ccw_req *cqr, struct request *req)
-
- if (!dasd_page_cache)
- goto out;
-- private = (struct dasd_eckd_private *) cqr->device->private;
-- blksize = cqr->device->bp_block;
-+ private = (struct dasd_eckd_private *) cqr->block->base->private;
-+ blksize = cqr->block->bp_block;
- blk_per_trk = recs_per_track(&private->rdc_data, 0, blksize);
-- recid = req->sector >> cqr->device->s2b_shift;
-+ recid = req->sector >> cqr->block->s2b_shift;
- ccw = cqr->cpaddr;
- /* Skip over define extent & locate record. */
- ccw++;
-@@ -1367,10 +1658,71 @@ dasd_eckd_free_cp(struct dasd_ccw_req *cqr, struct request *req)
- }
- out:
- status = cqr->status == DASD_CQR_DONE;
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return status;
- }
-
-+/*
-+ * Modify ccw chain in cqr so it can be started on a base device.
-+ *
-+ * Note that this is not enough to restart the cqr!
-+ * Either reset cqr->startdev as well (summary unit check handling)
-+ * or restart via separate cqr (as in ERP handling).
-+ */
-+void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *cqr)
-+{
-+ struct ccw1 *ccw;
-+ struct PFX_eckd_data *pfxdata;
++ }
++ if (flash_command == FLASH_CMD_NONE) {
++ err = FAIL_PARAMETERS;
++ goto out1;
++ }
+
-+ ccw = cqr->cpaddr;
-+ pfxdata = cqr->data;
++ if (asd_ha->bios_status == FLASH_IN_PROGRESS) {
++ err = FLASH_IN_PROGRESS;
++ goto out1;
++ }
++ err = request_firmware(&asd_ha->bios_image,
++ filename_ptr,
++ &asd_ha->pcidev->dev);
++ if (err) {
++ asd_printk("Failed to load bios image file %s, error %d\n",
++ filename_ptr, err);
++ err = FAIL_OPEN_BIOS_FILE;
++ goto out1;
++ }
+
-+ if (ccw->cmd_code == DASD_ECKD_CCW_PFX) {
-+ pfxdata->validity.verify_base = 0;
-+ pfxdata->validity.hyper_pav = 0;
++ hdr_ptr = (struct bios_file_header *)asd_ha->bios_image->data;
++
++ if ((hdr_ptr->contrl_id.vendor != asd_ha->pcidev->vendor ||
++ hdr_ptr->contrl_id.device != asd_ha->pcidev->device) &&
++ (hdr_ptr->contrl_id.sub_vendor != asd_ha->pcidev->vendor ||
++ hdr_ptr->contrl_id.sub_device != asd_ha->pcidev->device)) {
++
++ ASD_DPRINTK("The PCI vendor or device id does not match\n");
++ ASD_DPRINTK("vendor=%x dev=%x sub_vendor=%x sub_dev=%x"
++ " pci vendor=%x pci dev=%x\n",
++ hdr_ptr->contrl_id.vendor,
++ hdr_ptr->contrl_id.device,
++ hdr_ptr->contrl_id.sub_vendor,
++ hdr_ptr->contrl_id.sub_device,
++ asd_ha->pcidev->vendor,
++ asd_ha->pcidev->device);
++ err = FAIL_CHECK_PCI_ID;
++ goto out2;
+ }
-+}
+
-+#define DASD_ECKD_CHANQ_MAX_SIZE 4
++ if (hdr_ptr->filelen != asd_ha->bios_image->size) {
++ err = FAIL_FILE_SIZE;
++ goto out2;
++ }
+
-+static struct dasd_ccw_req *dasd_eckd_build_alias_cp(struct dasd_device *base,
-+ struct dasd_block *block,
-+ struct request *req)
-+{
-+ struct dasd_eckd_private *private;
-+ struct dasd_device *startdev;
-+ unsigned long flags;
-+ struct dasd_ccw_req *cqr;
++ /* calculate checksum */
++ for (i = 0; i < hdr_ptr->filelen; i++)
++ csum += asd_ha->bios_image->data[i];
+
-+ startdev = dasd_alias_get_start_dev(base);
-+ if (!startdev)
-+ startdev = base;
-+ private = (struct dasd_eckd_private *) startdev->private;
-+ if (private->count >= DASD_ECKD_CHANQ_MAX_SIZE)
-+ return ERR_PTR(-EBUSY);
++ if ((csum & 0x0000ffff) != hdr_ptr->checksum) {
++ ASD_DPRINTK("BIOS file checksum mismatch\n");
++ err = FAIL_CHECK_SUM;
++ goto out2;
++ }
++ if (flash_command == FLASH_CMD_UPDATE) {
++ asd_ha->bios_status = FLASH_IN_PROGRESS;
++ err = asd_write_flash_seg(asd_ha,
++ &asd_ha->bios_image->data[sizeof(*hdr_ptr)],
++ 0, hdr_ptr->filelen-sizeof(*hdr_ptr));
++ if (!err)
++ err = asd_verify_flash_seg(asd_ha,
++ &asd_ha->bios_image->data[sizeof(*hdr_ptr)],
++ 0, hdr_ptr->filelen-sizeof(*hdr_ptr));
++ } else {
++ asd_ha->bios_status = FLASH_IN_PROGRESS;
++ err = asd_verify_flash_seg(asd_ha,
++ &asd_ha->bios_image->data[sizeof(header)],
++ 0, hdr_ptr->filelen-sizeof(header));
++ }
+
-+ spin_lock_irqsave(get_ccwdev_lock(startdev->cdev), flags);
-+ private->count++;
-+ cqr = dasd_eckd_build_cp(startdev, block, req);
-+ if (IS_ERR(cqr))
-+ private->count--;
-+ spin_unlock_irqrestore(get_ccwdev_lock(startdev->cdev), flags);
-+ return cqr;
++out2:
++ release_firmware(asd_ha->bios_image);
++out1:
++ kfree(cmd_ptr);
++out:
++ asd_ha->bios_status = err;
++
++ if (!err)
++ return count;
++ else
++ return -err;
+}
+
-+static int dasd_eckd_free_alias_cp(struct dasd_ccw_req *cqr,
-+ struct request *req)
++static ssize_t asd_show_update_bios(struct device *dev,
++ struct device_attribute *attr, char *buf)
+{
-+ struct dasd_eckd_private *private;
-+ unsigned long flags;
++ int i;
++ struct asd_ha_struct *asd_ha = dev_to_asd_ha(dev);
+
-+ spin_lock_irqsave(get_ccwdev_lock(cqr->memdev->cdev), flags);
-+ private = (struct dasd_eckd_private *) cqr->memdev->private;
-+ private->count--;
-+ spin_unlock_irqrestore(get_ccwdev_lock(cqr->memdev->cdev), flags);
-+ return dasd_eckd_free_cp(cqr, req);
++ for (i = 0; flash_error_table[i].err_code != 0; i++) {
++ if (flash_error_table[i].err_code == asd_ha->bios_status)
++ break;
++ }
++ if (asd_ha->bios_status != FLASH_IN_PROGRESS)
++ asd_ha->bios_status = FLASH_OK;
++
++ return snprintf(buf, PAGE_SIZE, "status=%x %s\n",
++ flash_error_table[i].err_code,
++ flash_error_table[i].reason);
+}
+
- static int
- dasd_eckd_fill_info(struct dasd_device * device,
- struct dasd_information2_t * info)
-@@ -1384,9 +1736,9 @@ dasd_eckd_fill_info(struct dasd_device * device,
- info->characteristics_size = sizeof(struct dasd_eckd_characteristics);
- memcpy(info->characteristics, &private->rdc_data,
- sizeof(struct dasd_eckd_characteristics));
-- info->confdata_size = sizeof (struct dasd_eckd_confdata);
-+ info->confdata_size = sizeof(struct dasd_eckd_confdata);
- memcpy(info->configuration_data, &private->conf_data,
-- sizeof (struct dasd_eckd_confdata));
-+ sizeof(struct dasd_eckd_confdata));
- return 0;
- }
-
-@@ -1419,7 +1771,8 @@ dasd_eckd_release(struct dasd_device *device)
- cqr->cpaddr->flags |= CCW_FLAG_SLI;
- cqr->cpaddr->count = 32;
- cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
- cqr->retries = 2; /* set retry counter to enable basic ERP */
-@@ -1429,7 +1782,7 @@ dasd_eckd_release(struct dasd_device *device)
-
- rc = dasd_sleep_on_immediatly(cqr);
-
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return rc;
- }
-
-@@ -1459,7 +1812,8 @@ dasd_eckd_reserve(struct dasd_device *device)
- cqr->cpaddr->flags |= CCW_FLAG_SLI;
- cqr->cpaddr->count = 32;
- cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
- cqr->retries = 2; /* set retry counter to enable basic ERP */
-@@ -1469,7 +1823,7 @@ dasd_eckd_reserve(struct dasd_device *device)
-
- rc = dasd_sleep_on_immediatly(cqr);
-
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return rc;
- }
-
-@@ -1498,7 +1852,8 @@ dasd_eckd_steal_lock(struct dasd_device *device)
- cqr->cpaddr->flags |= CCW_FLAG_SLI;
- cqr->cpaddr->count = 32;
- cqr->cpaddr->cda = (__u32)(addr_t) cqr->data;
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
- cqr->retries = 2; /* set retry counter to enable basic ERP */
-@@ -1508,7 +1863,7 @@ dasd_eckd_steal_lock(struct dasd_device *device)
++static DEVICE_ATTR(update_bios, S_IRUGO|S_IWUGO,
++ asd_show_update_bios, asd_store_update_bios);
++
+ static int asd_create_dev_attrs(struct asd_ha_struct *asd_ha)
+ {
+ int err;
+@@ -328,9 +505,14 @@ static int asd_create_dev_attrs(struct asd_ha_struct *asd_ha)
+ err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
+ if (err)
+ goto err_biosb;
++ err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_update_bios);
++ if (err)
++ goto err_update_bios;
- rc = dasd_sleep_on_immediatly(cqr);
+ return 0;
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return rc;
++err_update_bios:
++ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
+ err_biosb:
+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build);
+ err_rev:
+@@ -343,6 +525,7 @@ static void asd_remove_dev_attrs(struct asd_ha_struct *asd_ha)
+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision);
+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build);
+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
++ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_update_bios);
}
-@@ -1526,52 +1881,52 @@ dasd_eckd_performance(struct dasd_device *device, void __user *argp)
-
- cqr = dasd_smalloc_request(dasd_eckd_discipline.name,
- 1 /* PSF */ + 1 /* RSSD */ ,
-- (sizeof (struct dasd_psf_prssd_data) +
-- sizeof (struct dasd_rssd_perf_stats_t)),
-+ (sizeof(struct dasd_psf_prssd_data) +
-+ sizeof(struct dasd_rssd_perf_stats_t)),
- device);
- if (IS_ERR(cqr)) {
- DEV_MESSAGE(KERN_WARNING, device, "%s",
- "Could not allocate initialization request");
- return PTR_ERR(cqr);
- }
-- cqr->device = device;
-+ cqr->startdev = device;
-+ cqr->memdev = device;
- cqr->retries = 0;
- cqr->expires = 10 * HZ;
-
- /* Prepare for Read Subsystem Data */
- prssdp = (struct dasd_psf_prssd_data *) cqr->data;
-- memset(prssdp, 0, sizeof (struct dasd_psf_prssd_data));
-+ memset(prssdp, 0, sizeof(struct dasd_psf_prssd_data));
- prssdp->order = PSF_ORDER_PRSSD;
-- prssdp->suborder = 0x01; /* Perfomance Statistics */
-+ prssdp->suborder = 0x01; /* Performance Statistics */
- prssdp->varies[1] = 0x01; /* Perf Statistics for the Subsystem */
+ /* The first entry, 0, is used for dynamic ids, the rest for devices
+@@ -589,6 +772,7 @@ static int __devinit asd_pci_probe(struct pci_dev *dev,
+ asd_ha->sas_ha.dev = &asd_ha->pcidev->dev;
+ asd_ha->sas_ha.lldd_ha = asd_ha;
- ccw = cqr->cpaddr;
- ccw->cmd_code = DASD_ECKD_CCW_PSF;
-- ccw->count = sizeof (struct dasd_psf_prssd_data);
-+ ccw->count = sizeof(struct dasd_psf_prssd_data);
- ccw->flags |= CCW_FLAG_CC;
- ccw->cda = (__u32)(addr_t) prssdp;
++ asd_ha->bios_status = FLASH_OK;
+ asd_ha->name = asd_dev->name;
+ asd_printk("found %s, device %s\n", asd_ha->name, pci_name(dev));
- /* Read Subsystem Data - Performance Statistics */
- stats = (struct dasd_rssd_perf_stats_t *) (prssdp + 1);
-- memset(stats, 0, sizeof (struct dasd_rssd_perf_stats_t));
-+ memset(stats, 0, sizeof(struct dasd_rssd_perf_stats_t));
+diff --git a/drivers/scsi/aic94xx/aic94xx_scb.c b/drivers/scsi/aic94xx/aic94xx_scb.c
+index db6ab1a..0febad4 100644
+--- a/drivers/scsi/aic94xx/aic94xx_scb.c
++++ b/drivers/scsi/aic94xx/aic94xx_scb.c
+@@ -788,12 +788,12 @@ void asd_build_control_phy(struct asd_ascb *ascb, int phy_id, u8 subfunc)
- ccw++;
- ccw->cmd_code = DASD_ECKD_CCW_RSSD;
-- ccw->count = sizeof (struct dasd_rssd_perf_stats_t);
-+ ccw->count = sizeof(struct dasd_rssd_perf_stats_t);
- ccw->cda = (__u32)(addr_t) stats;
+ /* initiator port settings are in the hi nibble */
+ if (phy->sas_phy.role == PHY_ROLE_INITIATOR)
+- control_phy->port_type = SAS_PROTO_ALL << 4;
++ control_phy->port_type = SAS_PROTOCOL_ALL << 4;
+ else if (phy->sas_phy.role == PHY_ROLE_TARGET)
+- control_phy->port_type = SAS_PROTO_ALL;
++ control_phy->port_type = SAS_PROTOCOL_ALL;
+ else
+ control_phy->port_type =
+- (SAS_PROTO_ALL << 4) | SAS_PROTO_ALL;
++ (SAS_PROTOCOL_ALL << 4) | SAS_PROTOCOL_ALL;
- cqr->buildclk = get_clock();
- cqr->status = DASD_CQR_FILLED;
- rc = dasd_sleep_on(cqr);
- if (rc == 0) {
-- /* Prepare for Read Subsystem Data */
- prssdp = (struct dasd_psf_prssd_data *) cqr->data;
- stats = (struct dasd_rssd_perf_stats_t *) (prssdp + 1);
- if (copy_to_user(argp, stats,
- sizeof(struct dasd_rssd_perf_stats_t)))
- rc = -EFAULT;
- }
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return rc;
- }
+ /* link reset retries, this should be nominal */
+ control_phy->link_reset_retries = 10;
+diff --git a/drivers/scsi/aic94xx/aic94xx_sds.c b/drivers/scsi/aic94xx/aic94xx_sds.c
+index 06509bf..2a4c933 100644
+--- a/drivers/scsi/aic94xx/aic94xx_sds.c
++++ b/drivers/scsi/aic94xx/aic94xx_sds.c
+@@ -30,6 +30,7 @@
-@@ -1594,7 +1949,7 @@ dasd_eckd_get_attrib(struct dasd_device *device, void __user *argp)
+ #include "aic94xx.h"
+ #include "aic94xx_reg.h"
++#include "aic94xx_sds.h"
- rc = 0;
- if (copy_to_user(argp, (long *) &attrib,
-- sizeof (struct attrib_data_t)))
-+ sizeof(struct attrib_data_t)))
- rc = -EFAULT;
+ /* ---------- OCM stuff ---------- */
- return rc;
-@@ -1627,8 +1982,10 @@ dasd_eckd_set_attrib(struct dasd_device *device, void __user *argp)
+@@ -1083,3 +1084,391 @@ out:
+ kfree(flash_dir);
+ return err;
}
-
- static int
--dasd_eckd_ioctl(struct dasd_device *device, unsigned int cmd, void __user *argp)
-+dasd_eckd_ioctl(struct dasd_block *block, unsigned int cmd, void __user *argp)
- {
-+ struct dasd_device *device = block->base;
+
- switch (cmd) {
- case BIODASDGATTR:
- return dasd_eckd_get_attrib(device, argp);
-@@ -1685,9 +2042,8 @@ dasd_eckd_dump_ccw_range(struct ccw1 *from, struct ccw1 *to, char *page)
- * Print sense data and related channel program.
- * Parts are printed because printk buffer is only 1024 bytes.
- */
--static void
--dasd_eckd_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
-- struct irb *irb)
-+static void dasd_eckd_dump_sense(struct dasd_device *device,
-+ struct dasd_ccw_req *req, struct irb *irb)
- {
- char *page;
- struct ccw1 *first, *last, *fail, *from, *to;
-@@ -1743,37 +2099,40 @@ dasd_eckd_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
- }
- printk("%s", page);
-
-- /* dump the Channel Program (max 140 Bytes per line) */
-- /* Count CCW and print first CCWs (maximum 1024 % 140 = 7) */
-- first = req->cpaddr;
-- for (last = first; last->flags & (CCW_FLAG_CC | CCW_FLAG_DC); last++);
-- to = min(first + 6, last);
-- len = sprintf(page, KERN_ERR PRINTK_HEADER
-- " Related CP in req: %p\n", req);
-- dasd_eckd_dump_ccw_range(first, to, page + len);
-- printk("%s", page);
-+ if (req) {
-+ /* req == NULL for unsolicited interrupts */
-+ /* dump the Channel Program (max 140 Bytes per line) */
-+ /* Count CCW and print first CCWs (maximum 1024 % 140 = 7) */
-+ first = req->cpaddr;
-+ for (last = first; last->flags & (CCW_FLAG_CC | CCW_FLAG_DC); last++);
-+ to = min(first + 6, last);
-+ len = sprintf(page, KERN_ERR PRINTK_HEADER
-+ " Related CP in req: %p\n", req);
-+ dasd_eckd_dump_ccw_range(first, to, page + len);
-+ printk("%s", page);
-
-- /* print failing CCW area (maximum 4) */
-- /* scsw->cda is either valid or zero */
-- len = 0;
-- from = ++to;
-- fail = (struct ccw1 *)(addr_t) irb->scsw.cpa; /* failing CCW */
-- if (from < fail - 2) {
-- from = fail - 2; /* there is a gap - print header */
-- len += sprintf(page, KERN_ERR PRINTK_HEADER "......\n");
-- }
-- to = min(fail + 1, last);
-- len += dasd_eckd_dump_ccw_range(from, to, page + len);
--
-- /* print last CCWs (maximum 2) */
-- from = max(from, ++to);
-- if (from < last - 1) {
-- from = last - 1; /* there is a gap - print header */
-- len += sprintf(page + len, KERN_ERR PRINTK_HEADER "......\n");
-+ /* print failing CCW area (maximum 4) */
-+ /* scsw->cda is either valid or zero */
-+ len = 0;
-+ from = ++to;
-+ fail = (struct ccw1 *)(addr_t) irb->scsw.cpa; /* failing CCW */
-+ if (from < fail - 2) {
-+ from = fail - 2; /* there is a gap - print header */
-+ len += sprintf(page, KERN_ERR PRINTK_HEADER "......\n");
++/**
++ * asd_verify_flash_seg - verify data with flash memory
++ * @asd_ha: pointer to the host adapter structure
++ * @src: pointer to the source data to be verified
++ * @dest_offset: offset from flash memory
++ * @bytes_to_verify: total bytes to verify
++ */
++int asd_verify_flash_seg(struct asd_ha_struct *asd_ha,
++ void *src, u32 dest_offset, u32 bytes_to_verify)
++{
++ u8 *src_buf;
++ u8 flash_char;
++ int err;
++ u32 nv_offset, reg, i;
++
++ reg = asd_ha->hw_prof.flash.bar;
++ src_buf = NULL;
++
++ err = FLASH_OK;
++ nv_offset = dest_offset;
++ src_buf = (u8 *)src;
++ for (i = 0; i < bytes_to_verify; i++) {
++ flash_char = asd_read_reg_byte(asd_ha, reg + nv_offset + i);
++ if (flash_char != src_buf[i]) {
++ err = FAIL_VERIFY;
++ break;
+ }
-+ to = min(fail + 1, last);
-+ len += dasd_eckd_dump_ccw_range(from, to, page + len);
++ }
++ return err;
++}
+
-+ /* print last CCWs (maximum 2) */
-+ from = max(from, ++to);
-+ if (from < last - 1) {
-+ from = last - 1; /* there is a gap - print header */
-+ len += sprintf(page + len, KERN_ERR PRINTK_HEADER "......\n");
++/**
++ * asd_write_flash_seg - write data into flash memory
++ * @asd_ha: pointer to the host adapter structure
++ * @src: pointer to the source data to be written
++ * @dest_offset: offset from flash memory
++ * @bytes_to_write: total bytes to write
++ */
++int asd_write_flash_seg(struct asd_ha_struct *asd_ha,
++ void *src, u32 dest_offset, u32 bytes_to_write)
++{
++ u8 *src_buf;
++ u32 nv_offset, reg, i;
++ int err;
++
++ reg = asd_ha->hw_prof.flash.bar;
++ src_buf = NULL;
++
++ err = asd_check_flash_type(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't find the type of flash. err=%d\n", err);
++ return err;
++ }
++
++ nv_offset = dest_offset;
++ err = asd_erase_nv_sector(asd_ha, nv_offset, bytes_to_write);
++ if (err) {
++ ASD_DPRINTK("Erase failed at offset:0x%x\n",
++ nv_offset);
++ return err;
++ }
++
++ err = asd_reset_flash(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
++
++ src_buf = (u8 *)src;
++ for (i = 0; i < bytes_to_write; i++) {
++ /* Setup program command sequence */
++ switch (asd_ha->hw_prof.flash.method) {
++ case FLASH_METHOD_A:
++ {
++ asd_write_reg_byte(asd_ha,
++ (reg + 0xAAA), 0xAA);
++ asd_write_reg_byte(asd_ha,
++ (reg + 0x555), 0x55);
++ asd_write_reg_byte(asd_ha,
++ (reg + 0xAAA), 0xA0);
++ asd_write_reg_byte(asd_ha,
++ (reg + nv_offset + i),
++ (*(src_buf + i)));
++ break;
+ }
-+ len += dasd_eckd_dump_ccw_range(from, last, page + len);
-+ if (len > 0)
-+ printk("%s", page);
- }
-- len += dasd_eckd_dump_ccw_range(from, last, page + len);
-- if (len > 0)
-- printk("%s", page);
- free_page((unsigned long) page);
- }
-
-@@ -1796,16 +2155,20 @@ static struct dasd_discipline dasd_eckd_discipline = {
- .ebcname = "ECKD",
- .max_blocks = 240,
- .check_device = dasd_eckd_check_characteristics,
-+ .uncheck_device = dasd_eckd_uncheck_device,
- .do_analysis = dasd_eckd_do_analysis,
-+ .ready_to_online = dasd_eckd_ready_to_online,
-+ .online_to_ready = dasd_eckd_online_to_ready,
- .fill_geometry = dasd_eckd_fill_geometry,
- .start_IO = dasd_start_IO,
- .term_IO = dasd_term_IO,
-+ .handle_terminated_request = dasd_eckd_handle_terminated_request,
- .format_device = dasd_eckd_format_device,
-- .examine_error = dasd_eckd_examine_error,
- .erp_action = dasd_eckd_erp_action,
- .erp_postaction = dasd_eckd_erp_postaction,
-- .build_cp = dasd_eckd_build_cp,
-- .free_cp = dasd_eckd_free_cp,
-+ .handle_unsolicited_interrupt = dasd_eckd_handle_unsolicited_interrupt,
-+ .build_cp = dasd_eckd_build_alias_cp,
-+ .free_cp = dasd_eckd_free_alias_cp,
- .dump_sense = dasd_eckd_dump_sense,
- .fill_info = dasd_eckd_fill_info,
- .ioctl = dasd_eckd_ioctl,
-diff --git a/drivers/s390/block/dasd_eckd.h b/drivers/s390/block/dasd_eckd.h
-index 712ff16..fc2509c 100644
---- a/drivers/s390/block/dasd_eckd.h
-+++ b/drivers/s390/block/dasd_eckd.h
-@@ -39,6 +39,8 @@
- #define DASD_ECKD_CCW_READ_CKD_MT 0x9e
- #define DASD_ECKD_CCW_WRITE_CKD_MT 0x9d
- #define DASD_ECKD_CCW_RESERVE 0xB4
-+#define DASD_ECKD_CCW_PFX 0xE7
-+#define DASD_ECKD_CCW_RSCK 0xF9
-
- /*
- * Perform Subsystem Function / Sub-Orders
-@@ -137,6 +139,25 @@ struct LO_eckd_data {
- __u16 length;
- } __attribute__ ((packed));
-
-+/* Prefix data for format 0x00 and 0x01 */
-+struct PFX_eckd_data {
-+ unsigned char format;
-+ struct {
-+ unsigned char define_extend:1;
-+ unsigned char time_stamp:1;
-+ unsigned char verify_base:1;
-+ unsigned char hyper_pav:1;
-+ unsigned char reserved:4;
-+ } __attribute__ ((packed)) validity;
-+ __u8 base_address;
-+ __u8 aux;
-+ __u8 base_lss;
-+ __u8 reserved[7];
-+ struct DE_eckd_data define_extend;
-+ struct LO_eckd_data locate_record;
-+ __u8 LO_extended_data[4];
-+} __attribute__ ((packed));
++ case FLASH_METHOD_B:
++ {
++ asd_write_reg_byte(asd_ha,
++ (reg + 0x555), 0xAA);
++ asd_write_reg_byte(asd_ha,
++ (reg + 0x2AA), 0x55);
++ asd_write_reg_byte(asd_ha,
++ (reg + 0x555), 0xA0);
++ asd_write_reg_byte(asd_ha,
++ (reg + nv_offset + i),
++ (*(src_buf + i)));
++ break;
++ }
++ default:
++ break;
++ }
++ if (asd_chk_write_status(asd_ha,
++ (nv_offset + i), 0) != 0) {
++ ASD_DPRINTK("aicx: Write failed at offset:0x%x\n",
++ reg + nv_offset + i);
++ return FAIL_WRITE_FLASH;
++ }
++ }
+
- struct dasd_eckd_characteristics {
- __u16 cu_type;
- struct {
-@@ -254,7 +275,9 @@ struct dasd_eckd_confdata {
- } __attribute__ ((packed)) ned;
- struct {
- unsigned char flags; /* byte 0 */
-- unsigned char res2[7]; /* byte 1- 7 */
-+ unsigned char res1; /* byte 1 */
-+ __u16 format; /* byte 2-3 */
-+ unsigned char res2[4]; /* byte 4-7 */
- unsigned char sua_flags; /* byte 8 */
- __u8 base_unit_addr; /* byte 9 */
- unsigned char res3[22]; /* byte 10-31 */
-@@ -343,6 +366,11 @@ struct dasd_eckd_path {
- __u8 npm;
- };
-
-+struct dasd_rssd_features {
-+ char feature[256];
-+} __attribute__((packed));
++ err = asd_reset_flash(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
++ return 0;
++}
+
++int asd_chk_write_status(struct asd_ha_struct *asd_ha,
++ u32 sector_addr, u8 erase_flag)
++{
++ u32 reg;
++ u32 loop_cnt;
++ u8 nv_data1, nv_data2;
++ u8 toggle_bit1;
+
- /*
- * Perform Subsystem Function - Prepare for Read Subsystem Data
- */
-@@ -365,4 +393,99 @@ struct dasd_psf_ssc_data {
- unsigned char reserved[59];
- } __attribute__((packed));
-
++ /*
++ * Read from DQ2 requires sector address
++ * while it is a don't-care for DQ6
++ */
++ reg = asd_ha->hw_prof.flash.bar;
+
-+/*
-+ * some structures and definitions for alias handling
++ for (loop_cnt = 0; loop_cnt < 50000; loop_cnt++) {
++ nv_data1 = asd_read_reg_byte(asd_ha, reg);
++ nv_data2 = asd_read_reg_byte(asd_ha, reg);
++
++ toggle_bit1 = ((nv_data1 & FLASH_STATUS_BIT_MASK_DQ6)
++ ^ (nv_data2 & FLASH_STATUS_BIT_MASK_DQ6));
++
++ if (toggle_bit1 == 0) {
++ return 0;
++ } else {
++ if (nv_data2 & FLASH_STATUS_BIT_MASK_DQ5) {
++ nv_data1 = asd_read_reg_byte(asd_ha,
++ reg);
++ nv_data2 = asd_read_reg_byte(asd_ha,
++ reg);
++ toggle_bit1 =
++ ((nv_data1 & FLASH_STATUS_BIT_MASK_DQ6)
++ ^ (nv_data2 & FLASH_STATUS_BIT_MASK_DQ6));
++
++ if (toggle_bit1 == 0)
++ return 0;
++ }
++ }
++
++ /*
++ * ERASE is a sector-by-sector operation and requires
++ * more time to finish, while WRITE is a byte-by-byte
++ * operation and takes less time to finish.
++ *
++ * For some strange reason a reduced ERASE delay gives different
++ * behaviour across different spirit boards. Hence we set
++ * an optimum balance of 50us for ERASE, which works well
++ * across all boards.
++ */
++ if (erase_flag) {
++ udelay(FLASH_STATUS_ERASE_DELAY_COUNT);
++ } else {
++ udelay(FLASH_STATUS_WRITE_DELAY_COUNT);
++ }
++ }
++ return -1;
++}
++
++/**
++ * asd_hwi_erase_nv_sector - Erase the flash memory sectors.
++ * @asd_ha: pointer to the host adapter structure
++ * @flash_addr: pointer to offset from flash memory
++ * @size: total bytes to erase.
+ */
-+struct dasd_unit_address_configuration {
-+ struct {
-+ char ua_type;
-+ char base_ua;
-+ } unit[256];
-+} __attribute__((packed));
++int asd_erase_nv_sector(struct asd_ha_struct *asd_ha, u32 flash_addr, u32 size)
++{
++ u32 reg;
++ u32 sector_addr;
+
++ reg = asd_ha->hw_prof.flash.bar;
+
-+#define MAX_DEVICES_PER_LCU 256
++ /* sector starting address */
++ sector_addr = flash_addr & FLASH_SECTOR_SIZE_MASK;
+
-+/* flags on the LCU */
-+#define NEED_UAC_UPDATE 0x01
-+#define UPDATE_PENDING 0x02
++ /*
++ * Erasing a flash sector needs to be done in six consecutive
++ * write cycles.
++ */
++ while (sector_addr < flash_addr+size) {
++ switch (asd_ha->hw_prof.flash.method) {
++ case FLASH_METHOD_A:
++ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0xAA);
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x55);
++ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0x80);
++ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0xAA);
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x55);
++ asd_write_reg_byte(asd_ha, (reg + sector_addr), 0x30);
++ break;
++ case FLASH_METHOD_B:
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
++ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x80);
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
++ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
++ asd_write_reg_byte(asd_ha, (reg + sector_addr), 0x30);
++ break;
++ default:
++ break;
++ }
+
-+enum pavtype {NO_PAV, BASE_PAV, HYPER_PAV};
++ if (asd_chk_write_status(asd_ha, sector_addr, 1) != 0)
++ return FAIL_ERASE_FLASH;
+
++ sector_addr += FLASH_SECTOR_SIZE;
++ }
+
-+struct alias_root {
-+ struct list_head serverlist;
-+ spinlock_t lock;
-+};
++ return 0;
++}
+
-+struct alias_server {
-+ struct list_head server;
-+ struct dasd_uid uid;
-+ struct list_head lculist;
-+};
++int asd_check_flash_type(struct asd_ha_struct *asd_ha)
++{
++ u8 manuf_id;
++ u8 dev_id;
++ u8 sec_prot;
++ u32 inc;
++ u32 reg;
++ int err;
+
-+struct summary_unit_check_work_data {
-+ char reason;
-+ struct dasd_device *device;
-+ struct work_struct worker;
-+};
++ /* get Flash memory base address */
++ reg = asd_ha->hw_prof.flash.bar;
+
-+struct read_uac_work_data {
-+ struct dasd_device *device;
-+ struct delayed_work dwork;
-+};
++ /* Determine flash info */
++ err = asd_reset_flash(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
+
-+struct alias_lcu {
-+ struct list_head lcu;
-+ struct dasd_uid uid;
-+ enum pavtype pav;
-+ char flags;
-+ spinlock_t lock;
-+ struct list_head grouplist;
-+ struct list_head active_devices;
-+ struct list_head inactive_devices;
-+ struct dasd_unit_address_configuration *uac;
-+ struct summary_unit_check_work_data suc_data;
-+ struct read_uac_work_data ruac_data;
-+ struct dasd_ccw_req *rsu_cqr;
-+};
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_UNKNOWN;
++ asd_ha->hw_prof.flash.manuf = FLASH_MANUF_ID_UNKNOWN;
++ asd_ha->hw_prof.flash.dev_id = FLASH_DEV_ID_UNKNOWN;
+
-+struct alias_pav_group {
-+ struct list_head group;
-+ struct dasd_uid uid;
-+ struct alias_lcu *lcu;
-+ struct list_head baselist;
-+ struct list_head aliaslist;
-+ struct dasd_device *next;
++ /* Get flash info. This would most likely be AMD Am29LV family flash.
++ * First try the sequence for word mode. It is the same as for
++ * 008B (byte mode only), 160B (word mode) and 800D (word mode).
++ */
++ inc = asd_ha->hw_prof.flash.wide ? 2 : 1;
++ asd_write_reg_byte(asd_ha, reg + 0xAAA, 0xAA);
++ asd_write_reg_byte(asd_ha, reg + 0x555, 0x55);
++ asd_write_reg_byte(asd_ha, reg + 0xAAA, 0x90);
++ manuf_id = asd_read_reg_byte(asd_ha, reg);
++ dev_id = asd_read_reg_byte(asd_ha, reg + inc);
++ sec_prot = asd_read_reg_byte(asd_ha, reg + inc + inc);
++ /* Get out of autoselect mode. */
++ err = asd_reset_flash(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
++ ASD_DPRINTK("Flash MethodA manuf_id(0x%x) dev_id(0x%x) "
++ "sec_prot(0x%x)\n", manuf_id, dev_id, sec_prot);
++ err = asd_reset_flash(asd_ha);
++ if (err != 0)
++ return err;
++
++ switch (manuf_id) {
++ case FLASH_MANUF_ID_AMD:
++ switch (sec_prot) {
++ case FLASH_DEV_ID_AM29LV800DT:
++ case FLASH_DEV_ID_AM29LV640MT:
++ case FLASH_DEV_ID_AM29F800B:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
++ break;
++ default:
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_ST:
++ switch (sec_prot) {
++ case FLASH_DEV_ID_STM29W800DT:
++ case FLASH_DEV_ID_STM29LV640:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
++ break;
++ default:
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_FUJITSU:
++ switch (sec_prot) {
++ case FLASH_DEV_ID_MBM29LV800TE:
++ case FLASH_DEV_ID_MBM29DL800TA:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_MACRONIX:
++ switch (sec_prot) {
++ case FLASH_DEV_ID_MX29LV800BT:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
++ break;
++ }
++ break;
++ }
++
++ if (asd_ha->hw_prof.flash.method == FLASH_METHOD_UNKNOWN) {
++ err = asd_reset_flash(asd_ha);
++ if (err) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
++
++ /* Issue Unlock sequence for AM29LV008BT */
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
++ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
++ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x90);
++ manuf_id = asd_read_reg_byte(asd_ha, reg);
++ dev_id = asd_read_reg_byte(asd_ha, reg + inc);
++ sec_prot = asd_read_reg_byte(asd_ha, reg + inc + inc);
++
++ ASD_DPRINTK("Flash MethodB manuf_id(0x%x) dev_id(0x%x) sec_prot"
++ "(0x%x)\n", manuf_id, dev_id, sec_prot);
++
++ err = asd_reset_flash(asd_ha);
++ if (err != 0) {
++ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
++ return err;
++ }
++
++ switch (manuf_id) {
++ case FLASH_MANUF_ID_AMD:
++ switch (dev_id) {
++ case FLASH_DEV_ID_AM29LV008BT:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
++ break;
++ default:
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_ST:
++ switch (dev_id) {
++ case FLASH_DEV_ID_STM29008:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
++ break;
++ default:
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_FUJITSU:
++ switch (dev_id) {
++ case FLASH_DEV_ID_MBM29LV008TA:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_INTEL:
++ switch (dev_id) {
++ case FLASH_DEV_ID_I28LV00TAT:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
++ break;
++ }
++ break;
++ case FLASH_MANUF_ID_MACRONIX:
++ switch (dev_id) {
++ case FLASH_DEV_ID_I28LV00TAT:
++ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
++ break;
++ }
++ break;
++ default:
++ return FAIL_FIND_FLASH_ID;
++ }
++ }
++
++ if (asd_ha->hw_prof.flash.method == FLASH_METHOD_UNKNOWN)
++ return FAIL_FIND_FLASH_ID;
++
++ asd_ha->hw_prof.flash.manuf = manuf_id;
++ asd_ha->hw_prof.flash.dev_id = dev_id;
++ asd_ha->hw_prof.flash.sec_prot = sec_prot;
++ return 0;
++}
+diff --git a/drivers/scsi/aic94xx/aic94xx_sds.h b/drivers/scsi/aic94xx/aic94xx_sds.h
+new file mode 100644
+index 0000000..bb9795a
+--- /dev/null
++++ b/drivers/scsi/aic94xx/aic94xx_sds.h
+@@ -0,0 +1,121 @@
++/*
++ * Aic94xx SAS/SATA driver hardware interface header file.
++ *
++ * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
++ * Copyright (C) 2005 Gilbert Wu <gilbert_wu at adaptec.com>
++ *
++ * This file is licensed under GPLv2.
++ *
++ * This file is part of the aic94xx driver.
++ *
++ * The aic94xx driver is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License as
++ * published by the Free Software Foundation; version 2 of the
++ * License.
++ *
++ * The aic94xx driver is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with the aic94xx driver; if not, write to the Free Software
++ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ *
++ */
++#ifndef _AIC94XX_SDS_H_
++#define _AIC94XX_SDS_H_
++
++enum {
++ FLASH_METHOD_UNKNOWN,
++ FLASH_METHOD_A,
++ FLASH_METHOD_B
+};
+
++#define FLASH_MANUF_ID_AMD 0x01
++#define FLASH_MANUF_ID_ST 0x20
++#define FLASH_MANUF_ID_FUJITSU 0x04
++#define FLASH_MANUF_ID_MACRONIX 0xC2
++#define FLASH_MANUF_ID_INTEL 0x89
++#define FLASH_MANUF_ID_UNKNOWN 0xFF
+
-+struct dasd_eckd_private {
-+ struct dasd_eckd_characteristics rdc_data;
-+ struct dasd_eckd_confdata conf_data;
-+ struct dasd_eckd_path path_data;
-+ struct eckd_count count_area[5];
-+ int init_cqr_status;
-+ int uses_cdl;
-+ struct attrib_data_t attrib; /* e.g. cache operations */
-+ struct dasd_rssd_features features;
++#define FLASH_DEV_ID_AM29LV008BT 0x3E
++#define FLASH_DEV_ID_AM29LV800DT 0xDA
++#define FLASH_DEV_ID_STM29W800DT 0xD7
++#define FLASH_DEV_ID_STM29LV640 0xDE
++#define FLASH_DEV_ID_STM29008 0xEA
++#define FLASH_DEV_ID_MBM29LV800TE 0xDA
++#define FLASH_DEV_ID_MBM29DL800TA 0x4A
++#define FLASH_DEV_ID_MBM29LV008TA 0x3E
++#define FLASH_DEV_ID_AM29LV640MT 0x7E
++#define FLASH_DEV_ID_AM29F800B 0xD6
++#define FLASH_DEV_ID_MX29LV800BT 0xDA
++#define FLASH_DEV_ID_MX29LV008CT 0xDA
++#define FLASH_DEV_ID_I28LV00TAT 0x3E
++#define FLASH_DEV_ID_UNKNOWN 0xFF
+
-+ /* alias managemnet */
-+ struct dasd_uid uid;
-+ struct alias_pav_group *pavgroup;
-+ struct alias_lcu *lcu;
-+ int count;
++/* status bit mask values */
++#define FLASH_STATUS_BIT_MASK_DQ6 0x40
++#define FLASH_STATUS_BIT_MASK_DQ5 0x20
++#define FLASH_STATUS_BIT_MASK_DQ2 0x04
++
++/* minimum value in micro seconds needed for checking status */
++#define FLASH_STATUS_ERASE_DELAY_COUNT 50
++#define FLASH_STATUS_WRITE_DELAY_COUNT 25
++
++#define FLASH_SECTOR_SIZE 0x010000
++#define FLASH_SECTOR_SIZE_MASK 0xffff0000
++
++#define FLASH_OK 0x000000
++#define FAIL_OPEN_BIOS_FILE 0x000100
++#define FAIL_CHECK_PCI_ID 0x000200
++#define FAIL_CHECK_SUM 0x000300
++#define FAIL_UNKNOWN 0x000400
++#define FAIL_VERIFY 0x000500
++#define FAIL_RESET_FLASH 0x000600
++#define FAIL_FIND_FLASH_ID 0x000700
++#define FAIL_ERASE_FLASH 0x000800
++#define FAIL_WRITE_FLASH 0x000900
++#define FAIL_FILE_SIZE 0x000a00
++#define FAIL_PARAMETERS 0x000b00
++#define FAIL_OUT_MEMORY 0x000c00
++#define FLASH_IN_PROGRESS 0x001000
++
++struct controller_id {
++ u32 vendor; /* PCI Vendor ID */
++ u32 device; /* PCI Device ID */
++ u32 sub_vendor; /* PCI Subvendor ID */
++ u32 sub_device; /* PCI Subdevice ID */
+};
+
++struct image_info {
++ u32 ImageId; /* Identifies the image */
++ u32 ImageOffset; /* Offset from the beginning of the file */
++ u32 ImageLength; /* length of the image */
++ u32 ImageChecksum; /* Image checksum */
++ u32 ImageVersion; /* Version of the image, could be build number */
++};
+
++struct bios_file_header {
++ u8 signature[32]; /* Signature/Cookie to identify the file */
++ u32 checksum; /* Entire file checksum with this field zero */
++ u32 antidote; /* Entire file checksum with this field 0xFFFFFFFF */
++ struct controller_id contrl_id; /* PCI id to identify the controller */
++ u32 filelen; /* Length of the entire file */
++ u32 chunk_num; /* The chunk/part number for multiple Image files */
++ u32 total_chunks; /* Total number of chunks/parts in the image file */
++ u32 num_images; /* Number of images in the file */
++ u32 build_num; /* Build number of this image */
++ struct image_info image_header;
++};
+
-+int dasd_alias_make_device_known_to_lcu(struct dasd_device *);
-+void dasd_alias_disconnect_device_from_lcu(struct dasd_device *);
-+int dasd_alias_add_device(struct dasd_device *);
-+int dasd_alias_remove_device(struct dasd_device *);
-+struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *);
-+void dasd_alias_handle_summary_unit_check(struct dasd_device *, struct irb *);
-+void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *);
++int asd_verify_flash_seg(struct asd_ha_struct *asd_ha,
++ void *src, u32 dest_offset, u32 bytes_to_verify);
++int asd_write_flash_seg(struct asd_ha_struct *asd_ha,
++ void *src, u32 dest_offset, u32 bytes_to_write);
++int asd_chk_write_status(struct asd_ha_struct *asd_ha,
++ u32 sector_addr, u8 erase_flag);
++int asd_check_flash_type(struct asd_ha_struct *asd_ha);
++int asd_erase_nv_sector(struct asd_ha_struct *asd_ha,
++ u32 flash_addr, u32 size);
++#endif
+diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
+index ee0a98b..965d4bb 100644
+--- a/drivers/scsi/aic94xx/aic94xx_task.c
++++ b/drivers/scsi/aic94xx/aic94xx_task.c
+@@ -187,29 +187,13 @@ static void asd_get_response_tasklet(struct asd_ascb *ascb,
+ ts->buf_valid_size = 0;
+ edb = asd_ha->seq.edb_arr[edb_id + escb->edb_index];
+ r = edb->vaddr;
+- if (task->task_proto == SAS_PROTO_SSP) {
++ if (task->task_proto == SAS_PROTOCOL_SSP) {
+ struct ssp_response_iu *iu =
+ r + 16 + sizeof(struct ssp_frame_hdr);
+
+ ts->residual = le32_to_cpu(*(__le32 *)r);
+- ts->resp = SAS_TASK_COMPLETE;
+- if (iu->datapres == 0)
+- ts->stat = iu->status;
+- else if (iu->datapres == 1)
+- ts->stat = iu->resp_data[3];
+- else if (iu->datapres == 2) {
+- ts->stat = SAM_CHECK_COND;
+- ts->buf_valid_size = min((u32) SAS_STATUS_BUF_SIZE,
+- be32_to_cpu(iu->sense_data_len));
+- memcpy(ts->buf, iu->sense_data, ts->buf_valid_size);
+- if (iu->status != SAM_CHECK_COND) {
+- ASD_DPRINTK("device %llx sent sense data, but "
+- "stat(0x%x) is not CHECK_CONDITION"
+- "\n",
+- SAS_ADDR(task->dev->sas_addr),
+- iu->status);
+- }
+- }
+
- #endif /* DASD_ECKD_H */
-diff --git a/drivers/s390/block/dasd_eer.c b/drivers/s390/block/dasd_eer.c
-index 0c081a6..6e53ab6 100644
---- a/drivers/s390/block/dasd_eer.c
-+++ b/drivers/s390/block/dasd_eer.c
-@@ -336,7 +336,7 @@ static void dasd_eer_write_snss_trigger(struct dasd_device *device,
- unsigned long flags;
- struct eerbuffer *eerb;
++ sas_ssp_task_response(&asd_ha->pcidev->dev, task, iu);
+ } else {
+ struct ata_task_resp *resp = (void *) &ts->buf[0];
-- snss_rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
-+ snss_rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
- if (snss_rc)
- data_size = 0;
- else
-@@ -404,10 +404,11 @@ void dasd_eer_snss(struct dasd_device *device)
- set_bit(DASD_FLAG_EER_SNSS, &device->flags);
- return;
+@@ -341,14 +325,14 @@ Again:
+ }
+
+ switch (task->task_proto) {
+- case SATA_PROTO:
+- case SAS_PROTO_STP:
++ case SAS_PROTOCOL_SATA:
++ case SAS_PROTOCOL_STP:
+ asd_unbuild_ata_ascb(ascb);
+ break;
+- case SAS_PROTO_SMP:
++ case SAS_PROTOCOL_SMP:
+ asd_unbuild_smp_ascb(ascb);
+ break;
+- case SAS_PROTO_SSP:
++ case SAS_PROTOCOL_SSP:
+ asd_unbuild_ssp_ascb(ascb);
+ default:
+ break;
+@@ -586,17 +570,17 @@ int asd_execute_task(struct sas_task *task, const int num,
+ list_for_each_entry(a, &alist, list) {
+ t = a->uldd_task;
+ a->uldd_timer = 1;
+- if (t->task_proto & SAS_PROTO_STP)
+- t->task_proto = SAS_PROTO_STP;
++ if (t->task_proto & SAS_PROTOCOL_STP)
++ t->task_proto = SAS_PROTOCOL_STP;
+ switch (t->task_proto) {
+- case SATA_PROTO:
+- case SAS_PROTO_STP:
++ case SAS_PROTOCOL_SATA:
++ case SAS_PROTOCOL_STP:
+ res = asd_build_ata_ascb(a, t, gfp_flags);
+ break;
+- case SAS_PROTO_SMP:
++ case SAS_PROTOCOL_SMP:
+ res = asd_build_smp_ascb(a, t, gfp_flags);
+ break;
+- case SAS_PROTO_SSP:
++ case SAS_PROTOCOL_SSP:
+ res = asd_build_ssp_ascb(a, t, gfp_flags);
+ break;
+ default:
+@@ -633,14 +617,14 @@ out_err_unmap:
+ t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+ spin_unlock_irqrestore(&t->task_state_lock, flags);
+ switch (t->task_proto) {
+- case SATA_PROTO:
+- case SAS_PROTO_STP:
++ case SAS_PROTOCOL_SATA:
++ case SAS_PROTOCOL_STP:
+ asd_unbuild_ata_ascb(a);
+ break;
+- case SAS_PROTO_SMP:
++ case SAS_PROTOCOL_SMP:
+ asd_unbuild_smp_ascb(a);
+ break;
+- case SAS_PROTO_SSP:
++ case SAS_PROTOCOL_SSP:
+ asd_unbuild_ssp_ascb(a);
+ default:
+ break;
+diff --git a/drivers/scsi/aic94xx/aic94xx_tmf.c b/drivers/scsi/aic94xx/aic94xx_tmf.c
+index c0d0b7d..87b2f6e 100644
+--- a/drivers/scsi/aic94xx/aic94xx_tmf.c
++++ b/drivers/scsi/aic94xx/aic94xx_tmf.c
+@@ -372,21 +372,21 @@ int asd_abort_task(struct sas_task *task)
+ scb->header.opcode = ABORT_TASK;
+
+ switch (task->task_proto) {
+- case SATA_PROTO:
+- case SAS_PROTO_STP:
++ case SAS_PROTOCOL_SATA:
++ case SAS_PROTOCOL_STP:
+ scb->abort_task.proto_conn_rate = (1 << 5); /* STP */
+ break;
+- case SAS_PROTO_SSP:
++ case SAS_PROTOCOL_SSP:
+ scb->abort_task.proto_conn_rate = (1 << 4); /* SSP */
+ scb->abort_task.proto_conn_rate |= task->dev->linkrate;
+ break;
+- case SAS_PROTO_SMP:
++ case SAS_PROTOCOL_SMP:
+ break;
+ default:
+ break;
+ }
+
+- if (task->task_proto == SAS_PROTO_SSP) {
++ if (task->task_proto == SAS_PROTOCOL_SSP) {
+ scb->abort_task.ssp_frame.frame_type = SSP_TASK;
+ memcpy(scb->abort_task.ssp_frame.hashed_dest_addr,
+ task->dev->hashed_sas_addr, HASHED_SAS_ADDR_SIZE);
+@@ -512,7 +512,7 @@ static int asd_initiate_ssp_tmf(struct domain_device *dev, u8 *lun,
+ int res = 1;
+ struct scb *scb;
+
+- if (!(dev->tproto & SAS_PROTO_SSP))
++ if (!(dev->tproto & SAS_PROTOCOL_SSP))
+ return TMF_RESP_FUNC_ESUPP;
+
+ ascb = asd_ascb_alloc_list(asd_ha, &res, GFP_KERNEL);
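The aic94xx hunks above are a mechanical rename from the old `SAS_PROTO_*`/`SATA_PROTO` constants to the consolidated `SAS_PROTOCOL_*` enum. The dispatch pattern they preserve can be sketched as follows; the enum values mirror `include/scsi/sas.h` but are reproduced here only for illustration and may differ from the tree you build against.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mirror of the consolidated protocol enum. */
enum sas_protocol {
	SAS_PROTOCOL_SATA = 0x01,
	SAS_PROTOCOL_SMP  = 0x02,
	SAS_PROTOCOL_STP  = 0x04,
	SAS_PROTOCOL_SSP  = 0x08,
};

/* Dispatch in the style of asd_execute_task(): a task with the STP
 * bit set is normalized to plain STP before the switch, and SATA and
 * STP share the ATA build/unbuild path. */
static const char *proto_path(int proto)
{
	if (proto & SAS_PROTOCOL_STP)
		proto = SAS_PROTOCOL_STP;
	switch (proto) {
	case SAS_PROTOCOL_SATA:
	case SAS_PROTOCOL_STP:
		return "ata";
	case SAS_PROTOCOL_SMP:
		return "smp";
	case SAS_PROTOCOL_SSP:
		return "ssp";
	default:
		return "unknown";
	}
}
```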
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index d466a2d..d80dba9 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -634,9 +634,9 @@ static void arcmsr_report_sense_info(struct CommandControlBlock *ccb)
+ pcmd->result = DID_OK << 16;
+ if (sensebuffer) {
+ int sense_data_length =
+- sizeof(struct SENSE_DATA) < sizeof(pcmd->sense_buffer)
+- ? sizeof(struct SENSE_DATA) : sizeof(pcmd->sense_buffer);
+- memset(sensebuffer, 0, sizeof(pcmd->sense_buffer));
++ sizeof(struct SENSE_DATA) < SCSI_SENSE_BUFFERSIZE
++ ? sizeof(struct SENSE_DATA) : SCSI_SENSE_BUFFERSIZE;
++ memset(sensebuffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(sensebuffer, ccb->arcmsr_cdb.SenseData, sense_data_length);
+ sensebuffer->ErrorCode = SCSI_SENSE_CURRENT_ERRORS;
+ sensebuffer->Valid = 1;
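The arcmsr change above follows from the sense buffer becoming dynamically allocated in this merge window: `sizeof(pcmd->sense_buffer)` no longer compiles, so the copy length is clamped against the midlayer constant instead. A minimal sketch of the clamp, assuming the 96-byte `SCSI_SENSE_BUFFERSIZE` of this era:

```c
#include <assert.h>
#include <stddef.h>

#define SCSI_SENSE_BUFFERSIZE 96  /* midlayer value around 2.6.24 */

/* Clamp in the style of arcmsr_report_sense_info(): copy at most
 * SCSI_SENSE_BUFFERSIZE bytes of controller sense data, whichever of
 * the two sizes is smaller. */
static int sense_copy_len(size_t controller_sense_size)
{
	return controller_sense_size < SCSI_SENSE_BUFFERSIZE
		? (int)controller_sense_size : SCSI_SENSE_BUFFERSIZE;
}
```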
+diff --git a/drivers/scsi/atari_NCR5380.c b/drivers/scsi/atari_NCR5380.c
+index a9680b5..93b61f1 100644
+--- a/drivers/scsi/atari_NCR5380.c
++++ b/drivers/scsi/atari_NCR5380.c
+@@ -511,9 +511,9 @@ static inline void initialize_SCp(Scsi_Cmnd *cmd)
+ * various queues are valid.
+ */
+
+- if (cmd->use_sg) {
+- cmd->SCp.buffer = (struct scatterlist *)cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ /* ++roman: Try to merge some scatter-buffers if they are at
+@@ -523,8 +523,8 @@ static inline void initialize_SCp(Scsi_Cmnd *cmd)
+ } else {
+ cmd->SCp.buffer = NULL;
+ cmd->SCp.buffers_residual = 0;
+- cmd->SCp.ptr = (char *)cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
}
-+ /* cdev is already locked, can't use dasd_add_request_head */
- clear_bit(DASD_FLAG_EER_SNSS, &device->flags);
- cqr->status = DASD_CQR_QUEUED;
-- list_add(&cqr->list, &device->ccw_queue);
-- dasd_schedule_bh(device);
-+ list_add(&cqr->devlist, &device->ccw_queue);
-+ dasd_schedule_device_bh(device);
}
- /*
-@@ -415,7 +416,7 @@ void dasd_eer_snss(struct dasd_device *device)
- */
- static void dasd_eer_snss_cb(struct dasd_ccw_req *cqr, void *data)
+@@ -936,21 +936,21 @@ static int NCR5380_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *))
+ }
+ # endif
+ # ifdef NCR5380_STAT_LIMIT
+- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
++ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
+ # endif
+ switch (cmd->cmnd[0]) {
+ case WRITE:
+ case WRITE_6:
+ case WRITE_10:
+ hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingw++;
+ break;
+ case READ:
+ case READ_6:
+ case READ_10:
+ hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingr++;
+ break;
+ }
+@@ -1352,21 +1352,21 @@ static irqreturn_t NCR5380_intr(int irq, void *dev_id)
+ static void collect_stats(struct NCR5380_hostdata* hostdata, Scsi_Cmnd *cmd)
{
-- struct dasd_device *device = cqr->device;
-+ struct dasd_device *device = cqr->startdev;
- unsigned long flags;
+ # ifdef NCR5380_STAT_LIMIT
+- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
++ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
+ # endif
+ switch (cmd->cmnd[0]) {
+ case WRITE:
+ case WRITE_6:
+ case WRITE_10:
+ hostdata->time_write[cmd->device->id] += (jiffies - hostdata->timebase);
+- /*hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;*/
++ /*hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);*/
+ hostdata->pendingw--;
+ break;
+ case READ:
+ case READ_6:
+ case READ_10:
+ hostdata->time_read[cmd->device->id] += (jiffies - hostdata->timebase);
+- /*hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;*/
++ /*hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);*/
+ hostdata->pendingr--;
+ break;
+ }
+@@ -1868,7 +1868,7 @@ static int do_abort(struct Scsi_Host *host)
+ * the target sees, so we just handshake.
+ */
- dasd_eer_write(device, cqr, DASD_EER_STATECHANGE);
-@@ -458,7 +459,7 @@ int dasd_eer_enable(struct dasd_device *device)
- if (!cqr)
- return -ENOMEM;
+- while (!(tmp = NCR5380_read(STATUS_REG)) & SR_REQ)
++ while (!((tmp = NCR5380_read(STATUS_REG)) & SR_REQ))
+ ;
-- cqr->device = device;
-+ cqr->startdev = device;
- cqr->retries = 255;
- cqr->expires = 10 * HZ;
- clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
-diff --git a/drivers/s390/block/dasd_erp.c b/drivers/s390/block/dasd_erp.c
-index caa5d91..8f10000 100644
---- a/drivers/s390/block/dasd_erp.c
-+++ b/drivers/s390/block/dasd_erp.c
-@@ -46,6 +46,8 @@ dasd_alloc_erp_request(char *magic, int cplength, int datasize,
- if (cqr == NULL)
- return ERR_PTR(-ENOMEM);
- memset(cqr, 0, sizeof(struct dasd_ccw_req));
-+ INIT_LIST_HEAD(&cqr->devlist);
-+ INIT_LIST_HEAD(&cqr->blocklist);
- data = (char *) cqr + ((sizeof(struct dasd_ccw_req) + 7L) & -8L);
- cqr->cpaddr = NULL;
- if (cplength > 0) {
-@@ -66,7 +68,7 @@ dasd_alloc_erp_request(char *magic, int cplength, int datasize,
- }
+ NCR5380_write(TARGET_COMMAND_REG, PHASE_SR_TO_TCR(tmp));
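The atari_NCR5380 hunks above convert direct reads of `cmd->use_sg`, `cmd->request_buffer`, and `cmd->request_bufflen` to the `scsi_sg_count()`/`scsi_sglist()`/`scsi_bufflen()` accessors, since those fields were removed when the midlayer moved to an always-scatterlist data path. A userspace mock of the pattern; the struct layout here is illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

struct mock_sg { void *addr; unsigned int length; };

struct mock_cmnd {
	struct mock_sg *table;    /* always a table after the conversion */
	unsigned int sg_count;
	unsigned int bufflen;
};

static unsigned int scsi_sg_count(const struct mock_cmnd *c) { return c->sg_count; }
static unsigned int scsi_bufflen(const struct mock_cmnd *c)  { return c->bufflen; }

/* initialize_SCp()-style setup: a nonzero transfer length means the
 * first segment is current and (count - 1) segments remain; no data
 * means there is simply no buffer to point at. */
static int residual_buffers(const struct mock_cmnd *c)
{
	return scsi_bufflen(c) ? (int)scsi_sg_count(c) - 1 : 0;
}
```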
+diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c
+index fec58cc..db6de5e 100644
+--- a/drivers/scsi/atp870u.c
++++ b/drivers/scsi/atp870u.c
+@@ -471,18 +471,8 @@ go_42:
+ /*
+ * Complete the command
+ */
+- if (workreq->use_sg) {
+- pci_unmap_sg(dev->pdev,
+- (struct scatterlist *)workreq->request_buffer,
+- workreq->use_sg,
+- workreq->sc_data_direction);
+- } else if (workreq->request_bufflen &&
+- workreq->sc_data_direction != DMA_NONE) {
+- pci_unmap_single(dev->pdev,
+- workreq->SCp.dma_handle,
+- workreq->request_bufflen,
+- workreq->sc_data_direction);
+- }
++ scsi_dma_unmap(workreq);
++
+ spin_lock_irqsave(dev->host->host_lock, flags);
+ (*workreq->scsi_done) (workreq);
+ #ifdef ED_DBGP
+@@ -624,7 +614,7 @@ static int atp870u_queuecommand(struct scsi_cmnd * req_p,
+
+ c = scmd_channel(req_p);
+ req_p->sense_buffer[0]=0;
+- req_p->resid = 0;
++ scsi_set_resid(req_p, 0);
+ if (scmd_channel(req_p) > 1) {
+ req_p->result = 0x00040000;
+ done(req_p);
+@@ -722,7 +712,6 @@ static void send_s870(struct atp_unit *dev,unsigned char c)
+ unsigned short int tmpcip, w;
+ unsigned long l, bttl = 0;
+ unsigned int workport;
+- struct scatterlist *sgpnt;
+ unsigned long sg_count;
+
+ if (dev->in_snd[c] != 0) {
+@@ -793,6 +782,8 @@ oktosend:
+ }
+ printk("\n");
+ #endif
++ l = scsi_bufflen(workreq);
++
+ if (dev->dev_id == ATP885_DEVID) {
+ j = inb(dev->baseport + 0x29) & 0xfe;
+ outb(j, dev->baseport + 0x29);
+@@ -800,12 +791,11 @@ oktosend:
+ }
+
+ if (workreq->cmnd[0] == READ_CAPACITY) {
+- if (workreq->request_bufflen > 8) {
+- workreq->request_bufflen = 0x08;
+- }
++ if (l > 8)
++ l = 8;
+ }
+ if (workreq->cmnd[0] == 0x00) {
+- workreq->request_bufflen = 0;
++ l = 0;
+ }
- void
--dasd_free_erp_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
-+dasd_free_erp_request(struct dasd_ccw_req *cqr, struct dasd_device * device)
- {
- unsigned long flags;
+ tmport = workport + 0x1b;
+@@ -852,40 +842,8 @@ oktosend:
+ #ifdef ED_DBGP
+ printk("dev->id[%d][%d].devsp = %2x\n",c,target_id,dev->id[c][target_id].devsp);
+ #endif
+- /*
+- * Figure out the transfer size
+- */
+- if (workreq->use_sg) {
+-#ifdef ED_DBGP
+- printk("Using SGL\n");
+-#endif
+- l = 0;
+-
+- sgpnt = (struct scatterlist *) workreq->request_buffer;
+- sg_count = pci_map_sg(dev->pdev, sgpnt, workreq->use_sg,
+- workreq->sc_data_direction);
+-
+- for (i = 0; i < workreq->use_sg; i++) {
+- if (sgpnt[i].length == 0 || workreq->use_sg > ATP870U_SCATTER) {
+- panic("Foooooooood fight!");
+- }
+- l += sgpnt[i].length;
+- }
+-#ifdef ED_DBGP
+- printk( "send_s870: workreq->use_sg %d, sg_count %d l %8ld\n", workreq->use_sg, sg_count, l);
+-#endif
+- } else if(workreq->request_bufflen && workreq->sc_data_direction != PCI_DMA_NONE) {
+-#ifdef ED_DBGP
+- printk("Not using SGL\n");
+-#endif
+- workreq->SCp.dma_handle = pci_map_single(dev->pdev, workreq->request_buffer,
+- workreq->request_bufflen,
+- workreq->sc_data_direction);
+- l = workreq->request_bufflen;
+-#ifdef ED_DBGP
+- printk( "send_s870: workreq->use_sg %d, l %8ld\n", workreq->use_sg, l);
+-#endif
+- } else l = 0;
++
++ sg_count = scsi_dma_map(workreq);
+ /*
+ * Write transfer size
+ */
+@@ -938,16 +896,16 @@ oktosend:
+ * a linear chain.
+ */
-@@ -81,11 +83,11 @@ dasd_free_erp_request(struct dasd_ccw_req * cqr, struct dasd_device * device)
- * dasd_default_erp_action just retries the current cqr
- */
- struct dasd_ccw_req *
--dasd_default_erp_action(struct dasd_ccw_req * cqr)
-+dasd_default_erp_action(struct dasd_ccw_req *cqr)
- {
- struct dasd_device *device;
+- if (workreq->use_sg) {
+- sgpnt = (struct scatterlist *) workreq->request_buffer;
++ if (l) {
++ struct scatterlist *sgpnt;
+ i = 0;
+- for (j = 0; j < workreq->use_sg; j++) {
+- bttl = sg_dma_address(&sgpnt[j]);
+- l=sg_dma_len(&sgpnt[j]);
++ scsi_for_each_sg(workreq, sgpnt, sg_count, j) {
++ bttl = sg_dma_address(sgpnt);
++ l=sg_dma_len(sgpnt);
+ #ifdef ED_DBGP
+- printk("1. bttl %x, l %x\n",bttl, l);
++ printk("1. bttl %x, l %x\n",bttl, l);
+ #endif
+- while (l > 0x10000) {
++ while (l > 0x10000) {
+ (((u16 *) (prd))[i + 3]) = 0x0000;
+ (((u16 *) (prd))[i + 2]) = 0x0000;
+ (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
+@@ -965,32 +923,6 @@ oktosend:
+ printk("prd %4x %4x %4x %4x\n",(((unsigned short int *)prd)[0]),(((unsigned short int *)prd)[1]),(((unsigned short int *)prd)[2]),(((unsigned short int *)prd)[3]));
+ printk("2. bttl %x, l %x\n",bttl, l);
+ #endif
+- } else {
+- /*
+- * For a linear request write a chain of blocks
+- */
+- bttl = workreq->SCp.dma_handle;
+- l = workreq->request_bufflen;
+- i = 0;
+-#ifdef ED_DBGP
+- printk("3. bttl %x, l %x\n",bttl, l);
+-#endif
+- while (l > 0x10000) {
+- (((u16 *) (prd))[i + 3]) = 0x0000;
+- (((u16 *) (prd))[i + 2]) = 0x0000;
+- (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
+- l -= 0x10000;
+- bttl += 0x10000;
+- i += 0x04;
+- }
+- (((u16 *) (prd))[i + 3]) = cpu_to_le16(0x8000);
+- (((u16 *) (prd))[i + 2]) = cpu_to_le16(l);
+- (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
+-#ifdef ED_DBGP
+- printk("prd %4x %4x %4x %4x\n",(((unsigned short int *)prd)[0]),(((unsigned short int *)prd)[1]),(((unsigned short int *)prd)[2]),(((unsigned short int *)prd)[3]));
+- printk("4. bttl %x, l %x\n",bttl, l);
+-#endif
+-
+ }
+ tmpcip += 4;
+ #ifdef ED_DBGP
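The atp870u hunks above delete the driver's duplicated sg/linear mapping branches in favor of a single `scsi_dma_map()` call, after which only the scatterlist walk that builds the physical region descriptor (PRD) table survives. The 64 KiB split rule in that walk can be sketched as below; the struct layout and the "length 0 encodes 64 KiB" convention are assumptions about this hardware's table format, kept only to match the loop in the diff.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative PRD entry: address, length, end-of-table flag word. */
struct prd { uint32_t addr; uint16_t len; uint16_t flags; };

/* Split one DMA segment into PRD entries of at most 64 KiB each, in
 * the style of the while (l > 0x10000) loop in send_s870(). The
 * uint16_t cast truncates a full 0x10000 to 0, which the hardware is
 * assumed to read as 64 KiB. Returns the number of entries written. */
static int build_prd(struct prd *tbl, uint32_t addr, uint32_t len)
{
	int n = 0;

	while (len > 0x10000) {
		tbl[n].addr = addr;
		tbl[n].len = 0;          /* 0 encodes a full 64 KiB */
		tbl[n].flags = 0;
		addr += 0x10000;
		len -= 0x10000;
		n++;
	}
	tbl[n].addr = addr;
	tbl[n].len = (uint16_t)len;
	tbl[n].flags = 0x8000;           /* end-of-table marker */
	return n + 1;
}
```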
+diff --git a/drivers/scsi/ch.c b/drivers/scsi/ch.c
+index 2311019..7aad154 100644
+--- a/drivers/scsi/ch.c
++++ b/drivers/scsi/ch.c
+@@ -21,6 +21,7 @@
+ #include <linux/compat.h>
+ #include <linux/chio.h> /* here are all the ioctls */
+ #include <linux/mutex.h>
++#include <linux/idr.h>
-- device = cqr->device;
-+ device = cqr->startdev;
+ #include <scsi/scsi.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -33,6 +34,7 @@
- /* just retry - there is nothing to save ... I got no sense data.... */
- if (cqr->retries > 0) {
-@@ -93,12 +95,12 @@ dasd_default_erp_action(struct dasd_ccw_req * cqr)
- "default ERP called (%i retries left)",
- cqr->retries);
- cqr->lpm = LPM_ANYPATH;
-- cqr->status = DASD_CQR_QUEUED;
-+ cqr->status = DASD_CQR_FILLED;
- } else {
- DEV_MESSAGE (KERN_WARNING, device, "%s",
- "default ERP called (NO retry left)");
- cqr->status = DASD_CQR_FAILED;
-- cqr->stopclk = get_clock ();
-+ cqr->stopclk = get_clock();
- }
- return cqr;
- } /* end dasd_default_erp_action */
-@@ -117,15 +119,12 @@ dasd_default_erp_action(struct dasd_ccw_req * cqr)
- * RETURN VALUES
- * cqr pointer to the original CQR
- */
--struct dasd_ccw_req *
--dasd_default_erp_postaction(struct dasd_ccw_req * cqr)
-+struct dasd_ccw_req *dasd_default_erp_postaction(struct dasd_ccw_req *cqr)
- {
-- struct dasd_device *device;
- int success;
+ #define CH_DT_MAX 16
+ #define CH_TYPES 8
++#define CH_MAX_DEVS 128
- BUG_ON(cqr->refers == NULL || cqr->function == NULL);
+ MODULE_DESCRIPTION("device driver for scsi media changer devices");
+ MODULE_AUTHOR("Gerd Knorr <kraxel at bytesex.org>");
+@@ -88,17 +90,6 @@ static const char * vendor_labels[CH_TYPES-4] = {
-- device = cqr->device;
- success = cqr->status == DASD_CQR_DONE;
+ #define MAX_RETRIES 1
- /* free all ERPs - but NOT the original cqr */
-@@ -133,10 +132,10 @@ dasd_default_erp_postaction(struct dasd_ccw_req * cqr)
- struct dasd_ccw_req *refers;
+-static int ch_probe(struct device *);
+-static int ch_remove(struct device *);
+-static int ch_open(struct inode * inode, struct file * filp);
+-static int ch_release(struct inode * inode, struct file * filp);
+-static int ch_ioctl(struct inode * inode, struct file * filp,
+- unsigned int cmd, unsigned long arg);
+-#ifdef CONFIG_COMPAT
+-static long ch_ioctl_compat(struct file * filp,
+- unsigned int cmd, unsigned long arg);
+-#endif
+-
+ static struct class * ch_sysfs_class;
- refers = cqr->refers;
-- /* remove the request from the device queue */
-- list_del(&cqr->list);
-+ /* remove the request from the block queue */
-+ list_del(&cqr->blocklist);
- /* free the finished erp request */
-- dasd_free_erp_request(cqr, device);
-+ dasd_free_erp_request(cqr, cqr->memdev);
- cqr = refers;
- }
+ typedef struct {
+@@ -114,30 +105,8 @@ typedef struct {
+ struct mutex lock;
+ } scsi_changer;
-@@ -157,7 +156,7 @@ dasd_log_sense(struct dasd_ccw_req *cqr, struct irb *irb)
+-static LIST_HEAD(ch_devlist);
+-static DEFINE_SPINLOCK(ch_devlist_lock);
+-static int ch_devcount;
+-
+-static struct scsi_driver ch_template =
+-{
+- .owner = THIS_MODULE,
+- .gendrv = {
+- .name = "ch",
+- .probe = ch_probe,
+- .remove = ch_remove,
+- },
+-};
+-
+-static const struct file_operations changer_fops =
+-{
+- .owner = THIS_MODULE,
+- .open = ch_open,
+- .release = ch_release,
+- .ioctl = ch_ioctl,
+-#ifdef CONFIG_COMPAT
+- .compat_ioctl = ch_ioctl_compat,
+-#endif
+-};
++static DEFINE_IDR(ch_index_idr);
++static DEFINE_SPINLOCK(ch_index_lock);
+
+ static const struct {
+ unsigned char sense;
+@@ -207,7 +176,7 @@ ch_do_scsi(scsi_changer *ch, unsigned char *cmd,
{
- struct dasd_device *device;
+ int errno, retries = 0, timeout, result;
+ struct scsi_sense_hdr sshdr;
+-
++
+ timeout = (cmd[0] == INITIALIZE_ELEMENT_STATUS)
+ ? timeout_init : timeout_move;
-- device = cqr->device;
-+ device = cqr->startdev;
- /* dump sense data */
- if (device->discipline && device->discipline->dump_sense)
- device->discipline->dump_sense(device, cqr, irb);
-diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c
-index 1d95822..d13ea05 100644
---- a/drivers/s390/block/dasd_fba.c
-+++ b/drivers/s390/block/dasd_fba.c
-@@ -117,6 +117,7 @@ locate_record(struct ccw1 * ccw, struct LO_fba_data *data, int rw,
- static int
- dasd_fba_check_characteristics(struct dasd_device *device)
+@@ -245,7 +214,7 @@ static int
+ ch_elem_to_typecode(scsi_changer *ch, u_int elem)
{
-+ struct dasd_block *block;
- struct dasd_fba_private *private;
- struct ccw_device *cdev = device->cdev;
- void *rdc_data;
-@@ -133,6 +134,16 @@ dasd_fba_check_characteristics(struct dasd_device *device)
- }
- device->private = (void *) private;
- }
-+ block = dasd_alloc_block();
-+ if (IS_ERR(block)) {
-+ DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ "could not allocate dasd block structure");
-+ kfree(device->private);
-+ return PTR_ERR(block);
-+ }
-+ device->block = block;
-+ block->base = device;
+ int i;
+-
+
- /* Read Device Characteristics */
- rdc_data = (void *) &(private->rdc_data);
- rc = dasd_generic_read_dev_chars(device, "FBA ", &rdc_data, 32);
-@@ -155,60 +166,37 @@ dasd_fba_check_characteristics(struct dasd_device *device)
- return 0;
+ for (i = 0; i < CH_TYPES; i++) {
+ if (elem >= ch->firsts[i] &&
+ elem < ch->firsts[i] +
+@@ -261,15 +230,15 @@ ch_read_element_status(scsi_changer *ch, u_int elem, char *data)
+ u_char cmd[12];
+ u_char *buffer;
+ int result;
+-
++
+ buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
+ if(!buffer)
+ return -ENOMEM;
+-
++
+ retry:
+ memset(cmd,0,sizeof(cmd));
+ cmd[0] = READ_ELEMENT_STATUS;
+- cmd[1] = (ch->device->lun << 5) |
++ cmd[1] = (ch->device->lun << 5) |
+ (ch->voltags ? 0x10 : 0) |
+ ch_elem_to_typecode(ch,elem);
+ cmd[2] = (elem >> 8) & 0xff;
+@@ -296,7 +265,7 @@ ch_read_element_status(scsi_changer *ch, u_int elem, char *data)
+ return result;
}
--static int
--dasd_fba_do_analysis(struct dasd_device *device)
-+static int dasd_fba_do_analysis(struct dasd_block *block)
+-static int
++static int
+ ch_init_elem(scsi_changer *ch)
{
- struct dasd_fba_private *private;
- int sb, rc;
-
-- private = (struct dasd_fba_private *) device->private;
-+ private = (struct dasd_fba_private *) block->base->private;
- rc = dasd_check_blocksize(private->rdc_data.blk_size);
- if (rc) {
-- DEV_MESSAGE(KERN_INFO, device, "unknown blocksize %d",
-+ DEV_MESSAGE(KERN_INFO, block->base, "unknown blocksize %d",
- private->rdc_data.blk_size);
- return rc;
+ int err;
+@@ -322,7 +291,7 @@ ch_readconfig(scsi_changer *ch)
+ buffer = kzalloc(512, GFP_KERNEL | GFP_DMA);
+ if (!buffer)
+ return -ENOMEM;
+-
++
+ memset(cmd,0,sizeof(cmd));
+ cmd[0] = MODE_SENSE;
+ cmd[1] = ch->device->lun << 5;
+@@ -365,7 +334,7 @@ ch_readconfig(scsi_changer *ch)
+ } else {
+ vprintk("reading element address assigment page failed!\n");
}
-- device->blocks = private->rdc_data.blk_bdsa;
-- device->bp_block = private->rdc_data.blk_size;
-- device->s2b_shift = 0; /* bits to shift 512 to get a block */
-+ block->blocks = private->rdc_data.blk_bdsa;
-+ block->bp_block = private->rdc_data.blk_size;
-+ block->s2b_shift = 0; /* bits to shift 512 to get a block */
- for (sb = 512; sb < private->rdc_data.blk_size; sb = sb << 1)
-- device->s2b_shift++;
-+ block->s2b_shift++;
- return 0;
- }
-
--static int
--dasd_fba_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
-+static int dasd_fba_fill_geometry(struct dasd_block *block,
-+ struct hd_geometry *geo)
+-
++
+ /* vendor specific element types */
+ for (i = 0; i < 4; i++) {
+ if (0 == vendor_counts[i])
+@@ -443,7 +412,7 @@ static int
+ ch_position(scsi_changer *ch, u_int trans, u_int elem, int rotate)
{
-- if (dasd_check_blocksize(device->bp_block) != 0)
-+ if (dasd_check_blocksize(block->bp_block) != 0)
- return -EINVAL;
-- geo->cylinders = (device->blocks << device->s2b_shift) >> 10;
-+ geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
- geo->heads = 16;
-- geo->sectors = 128 >> device->s2b_shift;
-+ geo->sectors = 128 >> block->s2b_shift;
- return 0;
- }
-
--static dasd_era_t
--dasd_fba_examine_error(struct dasd_ccw_req * cqr, struct irb * irb)
--{
-- struct dasd_device *device;
-- struct ccw_device *cdev;
--
-- device = (struct dasd_device *) cqr->device;
-- if (irb->scsw.cstat == 0x00 &&
-- irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
-- return dasd_era_none;
--
-- cdev = device->cdev;
-- switch (cdev->id.dev_type) {
-- case 0x3370:
-- return dasd_3370_erp_examine(cqr, irb);
-- case 0x9336:
-- return dasd_9336_erp_examine(cqr, irb);
-- default:
-- return dasd_era_recover;
-- }
--}
--
- static dasd_erp_fn_t
- dasd_fba_erp_action(struct dasd_ccw_req * cqr)
+ u_char cmd[10];
+-
++
+ dprintk("position: 0x%x\n",elem);
+ if (0 == trans)
+ trans = ch->firsts[CHET_MT];
+@@ -462,7 +431,7 @@ static int
+ ch_move(scsi_changer *ch, u_int trans, u_int src, u_int dest, int rotate)
{
-@@ -221,13 +209,34 @@ dasd_fba_erp_postaction(struct dasd_ccw_req * cqr)
- if (cqr->function == dasd_default_erp_action)
- return dasd_default_erp_postaction;
-
-- DEV_MESSAGE(KERN_WARNING, cqr->device, "unknown ERP action %p",
-+ DEV_MESSAGE(KERN_WARNING, cqr->startdev, "unknown ERP action %p",
- cqr->function);
- return NULL;
- }
-
--static struct dasd_ccw_req *
--dasd_fba_build_cp(struct dasd_device * device, struct request *req)
-+static void dasd_fba_handle_unsolicited_interrupt(struct dasd_device *device,
-+ struct irb *irb)
-+{
-+ char mask;
+ u_char cmd[12];
+-
+
-+ /* first of all check for state change pending interrupt */
-+ mask = DEV_STAT_ATTENTION | DEV_STAT_DEV_END | DEV_STAT_UNIT_EXCEP;
-+ if ((irb->scsw.dstat & mask) == mask) {
-+ dasd_generic_handle_state_change(device);
-+ return;
-+ }
+ dprintk("move: 0x%x => 0x%x\n",src,dest);
+ if (0 == trans)
+ trans = ch->firsts[CHET_MT];
+@@ -484,7 +453,7 @@ ch_exchange(scsi_changer *ch, u_int trans, u_int src,
+ u_int dest1, u_int dest2, int rotate1, int rotate2)
+ {
+ u_char cmd[12];
+-
+
-+ /* check for unsolicited interrupts */
-+ DEV_MESSAGE(KERN_DEBUG, device, "%s",
-+ "unsolicited interrupt received");
-+ device->discipline->dump_sense(device, NULL, irb);
-+ dasd_schedule_device_bh(device);
-+ return;
-+};
+ dprintk("exchange: 0x%x => 0x%x => 0x%x\n",
+ src,dest1,dest2);
+ if (0 == trans)
+@@ -501,7 +470,7 @@ ch_exchange(scsi_changer *ch, u_int trans, u_int src,
+ cmd[8] = (dest2 >> 8) & 0xff;
+ cmd[9] = dest2 & 0xff;
+ cmd[10] = (rotate1 ? 1 : 0) | (rotate2 ? 2 : 0);
+-
+
-+static struct dasd_ccw_req *dasd_fba_build_cp(struct dasd_device * memdev,
-+ struct dasd_block *block,
-+ struct request *req)
- {
- struct dasd_fba_private *private;
- unsigned long *idaws;
-@@ -242,17 +251,17 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
- unsigned int blksize, off;
- unsigned char cmd;
-
-- private = (struct dasd_fba_private *) device->private;
-+ private = (struct dasd_fba_private *) block->base->private;
- if (rq_data_dir(req) == READ) {
- cmd = DASD_FBA_CCW_READ;
- } else if (rq_data_dir(req) == WRITE) {
- cmd = DASD_FBA_CCW_WRITE;
- } else
- return ERR_PTR(-EINVAL);
-- blksize = device->bp_block;
-+ blksize = block->bp_block;
- /* Calculate record id of first and last block. */
-- first_rec = req->sector >> device->s2b_shift;
-- last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
-+ first_rec = req->sector >> block->s2b_shift;
-+ last_rec = (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
- /* Check struct bio and count the number of blocks for the request. */
- count = 0;
- cidaw = 0;
-@@ -260,7 +269,7 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
- if (bv->bv_len & (blksize - 1))
- /* Fba can only do full blocks. */
- return ERR_PTR(-EINVAL);
-- count += bv->bv_len >> (device->s2b_shift + 9);
-+ count += bv->bv_len >> (block->s2b_shift + 9);
- #if defined(CONFIG_64BIT)
- if (idal_is_needed (page_address(bv->bv_page), bv->bv_len))
- cidaw += bv->bv_len / blksize;
-@@ -284,13 +293,13 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
- }
- /* Allocate the ccw request. */
- cqr = dasd_smalloc_request(dasd_fba_discipline.name,
-- cplength, datasize, device);
-+ cplength, datasize, memdev);
- if (IS_ERR(cqr))
- return cqr;
- ccw = cqr->cpaddr;
- /* First ccw is define extent. */
- define_extent(ccw++, cqr->data, rq_data_dir(req),
-- device->bp_block, req->sector, req->nr_sectors);
-+ block->bp_block, req->sector, req->nr_sectors);
- /* Build locate_record + read/write ccws. */
- idaws = (unsigned long *) (cqr->data + sizeof(struct DE_fba_data));
- LO_data = (struct LO_fba_data *) (idaws + cidaw);
-@@ -326,7 +335,7 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
- ccw[-1].flags |= CCW_FLAG_CC;
- }
- ccw->cmd_code = cmd;
-- ccw->count = device->bp_block;
-+ ccw->count = block->bp_block;
- if (idal_is_needed(dst, blksize)) {
- ccw->cda = (__u32)(addr_t) idaws;
- ccw->flags = CCW_FLAG_IDA;
-@@ -342,7 +351,9 @@ dasd_fba_build_cp(struct dasd_device * device, struct request *req)
- }
- if (req->cmd_flags & REQ_FAILFAST)
- set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
-- cqr->device = device;
-+ cqr->startdev = memdev;
-+ cqr->memdev = memdev;
-+ cqr->block = block;
- cqr->expires = 5 * 60 * HZ; /* 5 minutes */
- cqr->retries = 32;
- cqr->buildclk = get_clock();
-@@ -363,8 +374,8 @@ dasd_fba_free_cp(struct dasd_ccw_req *cqr, struct request *req)
-
- if (!dasd_page_cache)
- goto out;
-- private = (struct dasd_fba_private *) cqr->device->private;
-- blksize = cqr->device->bp_block;
-+ private = (struct dasd_fba_private *) cqr->block->base->private;
-+ blksize = cqr->block->bp_block;
- ccw = cqr->cpaddr;
- /* Skip over define extent & locate record. */
- ccw++;
-@@ -394,10 +405,15 @@ dasd_fba_free_cp(struct dasd_ccw_req *cqr, struct request *req)
- }
- out:
- status = cqr->status == DASD_CQR_DONE;
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- return status;
+ return ch_do_scsi(ch, cmd, NULL,0, DMA_NONE);
}
-+static void dasd_fba_handle_terminated_request(struct dasd_ccw_req *cqr)
-+{
-+ cqr->status = DASD_CQR_FILLED;
-+};
+@@ -539,14 +508,14 @@ ch_set_voltag(scsi_changer *ch, u_int elem,
+ elem, tag);
+ memset(cmd,0,sizeof(cmd));
+ cmd[0] = SEND_VOLUME_TAG;
+- cmd[1] = (ch->device->lun << 5) |
++ cmd[1] = (ch->device->lun << 5) |
+ ch_elem_to_typecode(ch,elem);
+ cmd[2] = (elem >> 8) & 0xff;
+ cmd[3] = elem & 0xff;
+ cmd[5] = clear
+ ? (alternate ? 0x0d : 0x0c)
+ : (alternate ? 0x0b : 0x0a);
+-
++
+ cmd[9] = 255;
+
+ memcpy(buffer,tag,32);
+@@ -562,7 +531,7 @@ static int ch_gstatus(scsi_changer *ch, int type, unsigned char __user *dest)
+ int retval = 0;
+ u_char data[16];
+ unsigned int i;
+-
+
+ mutex_lock(&ch->lock);
+ for (i = 0; i < ch->counts[type]; i++) {
+ if (0 != ch_read_element_status
+@@ -599,20 +568,17 @@ ch_release(struct inode *inode, struct file *file)
static int
- dasd_fba_fill_info(struct dasd_device * device,
- struct dasd_information2_t * info)
-@@ -546,9 +562,10 @@ static struct dasd_discipline dasd_fba_discipline = {
- .fill_geometry = dasd_fba_fill_geometry,
- .start_IO = dasd_start_IO,
- .term_IO = dasd_term_IO,
-- .examine_error = dasd_fba_examine_error,
-+ .handle_terminated_request = dasd_fba_handle_terminated_request,
- .erp_action = dasd_fba_erp_action,
- .erp_postaction = dasd_fba_erp_postaction,
-+ .handle_unsolicited_interrupt = dasd_fba_handle_unsolicited_interrupt,
- .build_cp = dasd_fba_build_cp,
- .free_cp = dasd_fba_free_cp,
- .dump_sense = dasd_fba_dump_sense,
-diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
-index 47ba446..aee6565 100644
---- a/drivers/s390/block/dasd_genhd.c
-+++ b/drivers/s390/block/dasd_genhd.c
-@@ -25,14 +25,15 @@
- /*
- * Allocate and register gendisk structure for device.
- */
--int
--dasd_gendisk_alloc(struct dasd_device *device)
-+int dasd_gendisk_alloc(struct dasd_block *block)
+ ch_open(struct inode *inode, struct file *file)
{
- struct gendisk *gdp;
-+ struct dasd_device *base;
- int len;
-
- /* Make sure the minor for this device exists. */
-- if (device->devindex >= DASD_PER_MAJOR)
-+ base = block->base;
-+ if (base->devindex >= DASD_PER_MAJOR)
- return -EBUSY;
-
- gdp = alloc_disk(1 << DASD_PARTN_BITS);
-@@ -41,9 +42,9 @@ dasd_gendisk_alloc(struct dasd_device *device)
-
- /* Initialize gendisk structure. */
- gdp->major = DASD_MAJOR;
-- gdp->first_minor = device->devindex << DASD_PARTN_BITS;
-+ gdp->first_minor = base->devindex << DASD_PARTN_BITS;
- gdp->fops = &dasd_device_operations;
-- gdp->driverfs_dev = &device->cdev->dev;
-+ gdp->driverfs_dev = &base->cdev->dev;
-
- /*
- * Set device name.
-@@ -53,53 +54,51 @@ dasd_gendisk_alloc(struct dasd_device *device)
- * dasdaaaa - dasdzzzz : 456976 devices, added up = 475252
- */
- len = sprintf(gdp->disk_name, "dasd");
-- if (device->devindex > 25) {
-- if (device->devindex > 701) {
-- if (device->devindex > 18277)
-+ if (base->devindex > 25) {
-+ if (base->devindex > 701) {
-+ if (base->devindex > 18277)
- len += sprintf(gdp->disk_name + len, "%c",
-- 'a'+(((device->devindex-18278)
-+ 'a'+(((base->devindex-18278)
- /17576)%26));
- len += sprintf(gdp->disk_name + len, "%c",
-- 'a'+(((device->devindex-702)/676)%26));
-+ 'a'+(((base->devindex-702)/676)%26));
- }
- len += sprintf(gdp->disk_name + len, "%c",
-- 'a'+(((device->devindex-26)/26)%26));
-+ 'a'+(((base->devindex-26)/26)%26));
- }
-- len += sprintf(gdp->disk_name + len, "%c", 'a'+(device->devindex%26));
-+ len += sprintf(gdp->disk_name + len, "%c", 'a'+(base->devindex%26));
-
-- if (device->features & DASD_FEATURE_READONLY)
-+ if (block->base->features & DASD_FEATURE_READONLY)
- set_disk_ro(gdp, 1);
-- gdp->private_data = device;
-- gdp->queue = device->request_queue;
-- device->gdp = gdp;
-- set_capacity(device->gdp, 0);
-- add_disk(device->gdp);
-+ gdp->private_data = block;
-+ gdp->queue = block->request_queue;
-+ block->gdp = gdp;
-+ set_capacity(block->gdp, 0);
-+ add_disk(block->gdp);
- return 0;
- }
+- scsi_changer *tmp, *ch;
++ scsi_changer *ch;
+ int minor = iminor(inode);
- /*
- * Unregister and free gendisk structure for device.
- */
--void
--dasd_gendisk_free(struct dasd_device *device)
-+void dasd_gendisk_free(struct dasd_block *block)
- {
-- if (device->gdp) {
-- del_gendisk(device->gdp);
-- device->gdp->queue = NULL;
-- put_disk(device->gdp);
-- device->gdp = NULL;
-+ if (block->gdp) {
-+ del_gendisk(block->gdp);
-+ block->gdp->queue = NULL;
-+ put_disk(block->gdp);
-+ block->gdp = NULL;
+- spin_lock(&ch_devlist_lock);
+- ch = NULL;
+- list_for_each_entry(tmp,&ch_devlist,list) {
+- if (tmp->minor == minor)
+- ch = tmp;
+- }
++ spin_lock(&ch_index_lock);
++ ch = idr_find(&ch_index_idr, minor);
++
+ if (NULL == ch || scsi_device_get(ch->device)) {
+- spin_unlock(&ch_devlist_lock);
++ spin_unlock(&ch_index_lock);
+ return -ENXIO;
}
- }
-
- /*
- * Trigger a partition detection.
- */
--int
--dasd_scan_partitions(struct dasd_device * device)
-+int dasd_scan_partitions(struct dasd_block *block)
- {
- struct block_device *bdev;
+- spin_unlock(&ch_devlist_lock);
++ spin_unlock(&ch_index_lock);
-- bdev = bdget_disk(device->gdp, 0);
-+ bdev = bdget_disk(block->gdp, 0);
- if (!bdev || blkdev_get(bdev, FMODE_READ, 1) < 0)
- return -ENODEV;
- /*
-@@ -117,7 +116,7 @@ dasd_scan_partitions(struct dasd_device * device)
- * is why the assignment to device->bdev is done AFTER
- * the BLKRRPART ioctl.
- */
-- device->bdev = bdev;
-+ block->bdev = bdev;
+ file->private_data = ch;
return 0;
- }
-
-@@ -125,8 +124,7 @@ dasd_scan_partitions(struct dasd_device * device)
- * Remove all inodes in the system for a device, delete the
- * partitions and make device unusable by setting its size to zero.
- */
--void
--dasd_destroy_partitions(struct dasd_device * device)
-+void dasd_destroy_partitions(struct dasd_block *block)
- {
- /* The two structs have 168/176 byte on 31/64 bit. */
- struct blkpg_partition bpart;
-@@ -137,8 +135,8 @@ dasd_destroy_partitions(struct dasd_device * device)
- * Get the bdev pointer from the device structure and clear
- * device->bdev to lower the offline open_count limit again.
- */
-- bdev = device->bdev;
-- device->bdev = NULL;
-+ bdev = block->bdev;
-+ block->bdev = NULL;
-
- /*
- * See fs/partition/check.c:delete_partition
-@@ -149,17 +147,16 @@ dasd_destroy_partitions(struct dasd_device * device)
- memset(&barg, 0, sizeof(struct blkpg_ioctl_arg));
- barg.data = (void __force __user *) &bpart;
- barg.op = BLKPG_DEL_PARTITION;
-- for (bpart.pno = device->gdp->minors - 1; bpart.pno > 0; bpart.pno--)
-+ for (bpart.pno = block->gdp->minors - 1; bpart.pno > 0; bpart.pno--)
- ioctl_by_bdev(bdev, BLKPG, (unsigned long) &barg);
-
-- invalidate_partition(device->gdp, 0);
-+ invalidate_partition(block->gdp, 0);
- /* Matching blkdev_put to the blkdev_get in dasd_scan_partitions. */
- blkdev_put(bdev);
-- set_capacity(device->gdp, 0);
-+ set_capacity(block->gdp, 0);
- }
-
--int
--dasd_gendisk_init(void)
-+int dasd_gendisk_init(void)
- {
- int rc;
-
-@@ -174,8 +171,7 @@ dasd_gendisk_init(void)
+@@ -626,24 +592,24 @@ ch_checkrange(scsi_changer *ch, unsigned int type, unsigned int unit)
return 0;
}
--void
--dasd_gendisk_exit(void)
-+void dasd_gendisk_exit(void)
+-static int ch_ioctl(struct inode * inode, struct file * file,
++static long ch_ioctl(struct file *file,
+ unsigned int cmd, unsigned long arg)
{
- unregister_blkdev(DASD_MAJOR, "dasd");
- }
-diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
-index d427dae..44b2984 100644
---- a/drivers/s390/block/dasd_int.h
-+++ b/drivers/s390/block/dasd_int.h
-@@ -64,13 +64,7 @@
- * SECTION: Type definitions
- */
- struct dasd_device;
--
--typedef enum {
-- dasd_era_fatal = -1, /* no chance to recover */
-- dasd_era_none = 0, /* don't recover, everything alright */
-- dasd_era_msg = 1, /* don't recover, just report... */
-- dasd_era_recover = 2 /* recovery action recommended */
--} dasd_era_t;
-+struct dasd_block;
-
- /* BIT DEFINITIONS FOR SENSE DATA */
- #define DASD_SENSE_BIT_0 0x80
-@@ -151,19 +145,22 @@ do { \
-
- struct dasd_ccw_req {
- unsigned int magic; /* Eye catcher */
-- struct list_head list; /* list_head for request queueing. */
-+ struct list_head devlist; /* for dasd_device request queue */
-+ struct list_head blocklist; /* for dasd_block request queue */
-
- /* Where to execute what... */
-- struct dasd_device *device; /* device the request is for */
-+ struct dasd_block *block; /* the originating block device */
-+ struct dasd_device *memdev; /* the device used to allocate this */
-+ struct dasd_device *startdev; /* device the request is started on */
- struct ccw1 *cpaddr; /* address of channel program */
-- char status; /* status of this request */
-+ char status; /* status of this request */
- short retries; /* A retry counter */
- unsigned long flags; /* flags of this request */
-
- /* ... and how */
- unsigned long starttime; /* jiffies time of request start */
- int expires; /* expiration period in jiffies */
-- char lpm; /* logical path mask */
-+ char lpm; /* logical path mask */
- void *data; /* pointer to data area */
-
- /* these are important for recovering erroneous requests */
-@@ -178,20 +175,27 @@ struct dasd_ccw_req {
- unsigned long long endclk; /* TOD-clock of request termination */
-
- /* Callback that is called after reaching final status. */
-- void (*callback)(struct dasd_ccw_req *, void *data);
-- void *callback_data;
-+ void (*callback)(struct dasd_ccw_req *, void *data);
-+ void *callback_data;
- };
-
- /*
- * dasd_ccw_req -> status can be:
- */
--#define DASD_CQR_FILLED 0x00 /* request is ready to be processed */
--#define DASD_CQR_QUEUED 0x01 /* request is queued to be processed */
--#define DASD_CQR_IN_IO 0x02 /* request is currently in IO */
--#define DASD_CQR_DONE 0x03 /* request is completed successfully */
--#define DASD_CQR_ERROR 0x04 /* request is completed with error */
--#define DASD_CQR_FAILED 0x05 /* request is finally failed */
--#define DASD_CQR_CLEAR 0x06 /* request is clear pending */
-+#define DASD_CQR_FILLED 0x00 /* request is ready to be processed */
-+#define DASD_CQR_DONE 0x01 /* request is completed successfully */
-+#define DASD_CQR_NEED_ERP 0x02 /* request needs recovery action */
-+#define DASD_CQR_IN_ERP 0x03 /* request is in recovery */
-+#define DASD_CQR_FAILED 0x04 /* request is finally failed */
-+#define DASD_CQR_TERMINATED 0x05 /* request was stopped by driver */
-+
-+#define DASD_CQR_QUEUED 0x80 /* request is queued to be processed */
-+#define DASD_CQR_IN_IO 0x81 /* request is currently in IO */
-+#define DASD_CQR_ERROR 0x82 /* request is completed with error */
-+#define DASD_CQR_CLEAR_PENDING 0x83 /* request is clear pending */
-+#define DASD_CQR_CLEARED 0x84 /* request was cleared */
-+#define DASD_CQR_SUCCESS 0x85 /* request was successfull */
+ scsi_changer *ch = file->private_data;
+ int retval;
+ void __user *argp = (void __user *)arg;
+-
+
-
- /* per dasd_ccw_req flags */
- #define DASD_CQR_FLAGS_USE_ERP 0 /* use ERP for this request */
-@@ -214,52 +218,71 @@ struct dasd_discipline {
-
- struct list_head list; /* used for list of disciplines */
-
-- /*
-- * Device recognition functions. check_device is used to verify
-- * the sense data and the information returned by read device
-- * characteristics. It returns 0 if the discipline can be used
-- * for the device in question.
-- * do_analysis is used in the step from device state "basic" to
-- * state "accept". It returns 0 if the device can be made ready,
-- * it returns -EMEDIUMTYPE if the device can't be made ready or
-- * -EAGAIN if do_analysis started a ccw that needs to complete
-- * before the analysis may be repeated.
-- */
-- int (*check_device)(struct dasd_device *);
-- int (*do_analysis) (struct dasd_device *);
--
-- /*
-- * Device operation functions. build_cp creates a ccw chain for
-- * a block device request, start_io starts the request and
-- * term_IO cancels it (e.g. in case of a timeout). format_device
-- * returns a ccw chain to be used to format the device.
-- */
-+ /*
-+ * Device recognition functions. check_device is used to verify
-+ * the sense data and the information returned by read device
-+ * characteristics. It returns 0 if the discipline can be used
-+ * for the device in question. uncheck_device is called during
-+ * device shutdown to deregister a device from its discipline.
-+ */
-+ int (*check_device) (struct dasd_device *);
-+ void (*uncheck_device) (struct dasd_device *);
+ switch (cmd) {
+ case CHIOGPARAMS:
+ {
+ struct changer_params params;
+-
+
-+ /*
-+ * do_analysis is used in the step from device state "basic" to
-+ * state "accept". It returns 0 if the device can be made ready,
-+ * it returns -EMEDIUMTYPE if the device can't be made ready or
-+ * -EAGAIN if do_analysis started a ccw that needs to complete
-+ * before the analysis may be repeated.
-+ */
-+ int (*do_analysis) (struct dasd_block *);
+ params.cp_curpicker = 0;
+ params.cp_npickers = ch->counts[CHET_MT];
+ params.cp_nslots = ch->counts[CHET_ST];
+ params.cp_nportals = ch->counts[CHET_IE];
+ params.cp_ndrives = ch->counts[CHET_DT];
+-
+
-+ /*
-+ * Last things to do when a device is set online, and first things
-+ * when it is set offline.
-+ */
-+ int (*ready_to_online) (struct dasd_device *);
-+ int (*online_to_ready) (struct dasd_device *);
+ if (copy_to_user(argp, &params, sizeof(params)))
+ return -EFAULT;
+ return 0;
+@@ -673,11 +639,11 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ return -EFAULT;
+ return 0;
+ }
+-
+
-+ /*
-+ * Device operation functions. build_cp creates a ccw chain for
-+ * a block device request, start_io starts the request and
-+ * term_IO cancels it (e.g. in case of a timeout). format_device
-+ * returns a ccw chain to be used to format the device.
-+ * handle_terminated_request allows to examine a cqr and prepare
-+ * it for retry.
-+ */
- struct dasd_ccw_req *(*build_cp) (struct dasd_device *,
-+ struct dasd_block *,
- struct request *);
- int (*start_IO) (struct dasd_ccw_req *);
- int (*term_IO) (struct dasd_ccw_req *);
-+ void (*handle_terminated_request) (struct dasd_ccw_req *);
- struct dasd_ccw_req *(*format_device) (struct dasd_device *,
- struct format_data_t *);
- int (*free_cp) (struct dasd_ccw_req *, struct request *);
-- /*
-- * Error recovery functions. examine_error() returns a value that
-- * indicates what to do for an error condition. If examine_error()
+ case CHIOPOSITION:
+ {
+ struct changer_position pos;
+-
+
-+ /*
-+ * Error recovery functions. examine_error() returns a value that
-+ * indicates what to do for an error condition. If examine_error()
- * returns 'dasd_era_recover' erp_action() is called to create a
-- * special error recovery ccw. erp_postaction() is called after
-- * an error recovery ccw has finished its execution. dump_sense
-- * is called for every error condition to print the sense data
-- * to the console.
-- */
-- dasd_era_t(*examine_error) (struct dasd_ccw_req *, struct irb *);
-+ * special error recovery ccw. erp_postaction() is called after
-+ * an error recovery ccw has finished its execution. dump_sense
-+ * is called for every error condition to print the sense data
-+ * to the console.
-+ */
- dasd_erp_fn_t(*erp_action) (struct dasd_ccw_req *);
- dasd_erp_fn_t(*erp_postaction) (struct dasd_ccw_req *);
- void (*dump_sense) (struct dasd_device *, struct dasd_ccw_req *,
- struct irb *);
+ if (copy_from_user(&pos, argp, sizeof (pos)))
+ return -EFAULT;
-+ void (*handle_unsolicited_interrupt) (struct dasd_device *,
-+ struct irb *);
+@@ -692,7 +658,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ mutex_unlock(&ch->lock);
+ return retval;
+ }
+-
+
- /* i/o control functions. */
-- int (*fill_geometry) (struct dasd_device *, struct hd_geometry *);
-+ int (*fill_geometry) (struct dasd_block *, struct hd_geometry *);
- int (*fill_info) (struct dasd_device *, struct dasd_information2_t *);
-- int (*ioctl) (struct dasd_device *, unsigned int, void __user *);
-+ int (*ioctl) (struct dasd_block *, unsigned int, void __user *);
- };
-
- extern struct dasd_discipline *dasd_diag_discipline_pointer;
-@@ -267,12 +290,18 @@ extern struct dasd_discipline *dasd_diag_discipline_pointer;
- /*
- * Unique identifier for dasd device.
- */
-+#define UA_NOT_CONFIGURED 0x00
-+#define UA_BASE_DEVICE 0x01
-+#define UA_BASE_PAV_ALIAS 0x02
-+#define UA_HYPER_PAV_ALIAS 0x03
+ case CHIOMOVE:
+ {
+ struct changer_move mv;
+@@ -705,7 +671,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ dprintk("CHIOMOVE: invalid parameter\n");
+ return -EBADSLT;
+ }
+-
+
- struct dasd_uid {
-- __u8 alias;
-+ __u8 type;
- char vendor[4];
- char serial[15];
- __u16 ssid;
-- __u8 unit_addr;
-+ __u8 real_unit_addr;
-+ __u8 base_unit_addr;
- };
-
- /*
-@@ -293,14 +322,9 @@ struct dasd_uid {
-
- struct dasd_device {
- /* Block device stuff. */
-- struct gendisk *gdp;
-- struct request_queue *request_queue;
-- spinlock_t request_queue_lock;
-- struct block_device *bdev;
-+ struct dasd_block *block;
+ mutex_lock(&ch->lock);
+ retval = ch_move(ch,0,
+ ch->firsts[mv.cm_fromtype] + mv.cm_fromunit,
+@@ -718,7 +684,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ case CHIOEXCHANGE:
+ {
+ struct changer_exchange mv;
+-
+
- unsigned int devindex;
-- unsigned long blocks; /* size of volume in blocks */
-- unsigned int bp_block; /* bytes per block */
-- unsigned int s2b_shift; /* log2 (bp_block/512) */
- unsigned long flags; /* per device flags */
- unsigned short features; /* copy of devmap-features (read-only!) */
-
-@@ -316,9 +340,8 @@ struct dasd_device {
- int state, target;
- int stopped; /* device (ccw_device_start) was stopped */
-
-- /* Open and reference count. */
-+ /* reference count. */
- atomic_t ref_count;
-- atomic_t open_count;
-
- /* ccw queue and memory for static ccw/erp buffers. */
- struct list_head ccw_queue;
-@@ -337,20 +360,45 @@ struct dasd_device {
+ if (copy_from_user(&mv, argp, sizeof (mv)))
+ return -EFAULT;
- struct ccw_device *cdev;
+@@ -728,7 +694,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ dprintk("CHIOEXCHANGE: invalid parameter\n");
+ return -EBADSLT;
+ }
+-
++
+ mutex_lock(&ch->lock);
+ retval = ch_exchange
+ (ch,0,
+@@ -743,7 +709,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ case CHIOGSTATUS:
+ {
+ struct changer_element_status ces;
+-
++
+ if (copy_from_user(&ces, argp, sizeof (ces)))
+ return -EFAULT;
+ if (ces.ces_type < 0 || ces.ces_type >= CH_TYPES)
+@@ -759,19 +725,19 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ u_char *buffer;
+ unsigned int elem;
+ int result,i;
+-
++
+ if (copy_from_user(&cge, argp, sizeof (cge)))
+ return -EFAULT;
-+ /* hook for alias management */
-+ struct list_head alias_list;
-+};
+ if (0 != ch_checkrange(ch, cge.cge_type, cge.cge_unit))
+ return -EINVAL;
+ elem = ch->firsts[cge.cge_type] + cge.cge_unit;
+-
+
-+struct dasd_block {
-+ /* Block device stuff. */
-+ struct gendisk *gdp;
-+ struct request_queue *request_queue;
-+ spinlock_t request_queue_lock;
-+ struct block_device *bdev;
-+ atomic_t open_count;
+ buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
+ if (!buffer)
+ return -ENOMEM;
+ mutex_lock(&ch->lock);
+-
+
-+ unsigned long blocks; /* size of volume in blocks */
-+ unsigned int bp_block; /* bytes per block */
-+ unsigned int s2b_shift; /* log2 (bp_block/512) */
+ voltag_retry:
+ memset(cmd,0,sizeof(cmd));
+ cmd[0] = READ_ELEMENT_STATUS;
+@@ -782,7 +748,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ cmd[3] = elem & 0xff;
+ cmd[5] = 1;
+ cmd[9] = 255;
+-
+
-+ struct dasd_device *base;
-+ struct list_head ccw_queue;
-+ spinlock_t queue_lock;
+ if (0 == (result = ch_do_scsi(ch, cmd, buffer, 256, DMA_FROM_DEVICE))) {
+ cge.cge_status = buffer[18];
+ cge.cge_flags = 0;
+@@ -822,7 +788,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ }
+ kfree(buffer);
+ mutex_unlock(&ch->lock);
+-
+
-+ atomic_t tasklet_scheduled;
-+ struct tasklet_struct tasklet;
-+ struct timer_list timer;
+ if (copy_to_user(argp, &cge, sizeof (cge)))
+ return -EFAULT;
+ return result;
+@@ -835,7 +801,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
+ mutex_unlock(&ch->lock);
+ return retval;
+ }
+-
+
- #ifdef CONFIG_DASD_PROFILE
- struct dasd_profile_info_t profile;
- #endif
- };
-
+ case CHIOSVOLTAG:
+ {
+ struct changer_set_voltag csv;
+@@ -876,7 +842,7 @@ static long ch_ioctl_compat(struct file * file,
+ unsigned int cmd, unsigned long arg)
+ {
+ scsi_changer *ch = file->private_data;
+-
+
+ switch (cmd) {
+ case CHIOGPARAMS:
+ case CHIOGVPARAMS:
+@@ -887,13 +853,12 @@ static long ch_ioctl_compat(struct file * file,
+ case CHIOINITELEM:
+ case CHIOSVOLTAG:
+ /* compatible */
+- return ch_ioctl(NULL /* inode, unused */,
+- file, cmd, arg);
++ return ch_ioctl(file, cmd, arg);
+ case CHIOGSTATUS32:
+ {
+ struct changer_element_status32 ces32;
+ unsigned char __user *data;
+-
+
- /* reasons why device (ccw_device_start) was stopped */
- #define DASD_STOPPED_NOT_ACC 1 /* not accessible */
- #define DASD_STOPPED_QUIESCE 2 /* Quiesced */
- #define DASD_STOPPED_PENDING 4 /* long busy */
- #define DASD_STOPPED_DC_WAIT 8 /* disconnected, wait */
--#define DASD_STOPPED_DC_EIO 16 /* disconnected, return -EIO */
-+#define DASD_STOPPED_SU 16 /* summary unit check handling */
-
- /* per device flags */
--#define DASD_FLAG_DSC_ERROR 2 /* return -EIO when disconnected */
- #define DASD_FLAG_OFFLINE 3 /* device is in offline processing */
- #define DASD_FLAG_EER_SNSS 4 /* A SNSS is required */
- #define DASD_FLAG_EER_IN_USE 5 /* A SNSS request is running */
-@@ -489,6 +537,9 @@ dasd_kmalloc_set_cda(struct ccw1 *ccw, void *cda, struct dasd_device *device)
- struct dasd_device *dasd_alloc_device(void);
- void dasd_free_device(struct dasd_device *);
-
-+struct dasd_block *dasd_alloc_block(void);
-+void dasd_free_block(struct dasd_block *);
+ if (copy_from_user(&ces32, (void __user *)arg, sizeof (ces32)))
+ return -EFAULT;
+ if (ces32.ces_type < 0 || ces32.ces_type >= CH_TYPES)
+@@ -915,63 +880,100 @@ static long ch_ioctl_compat(struct file * file,
+ static int ch_probe(struct device *dev)
+ {
+ struct scsi_device *sd = to_scsi_device(dev);
++ struct class_device *class_dev;
++ int minor, ret = -ENOMEM;
+ scsi_changer *ch;
+-
+
- void dasd_enable_device(struct dasd_device *);
- void dasd_set_target_state(struct dasd_device *, int);
- void dasd_kick_device(struct dasd_device *);
-@@ -497,18 +548,23 @@ void dasd_add_request_head(struct dasd_ccw_req *);
- void dasd_add_request_tail(struct dasd_ccw_req *);
- int dasd_start_IO(struct dasd_ccw_req *);
- int dasd_term_IO(struct dasd_ccw_req *);
--void dasd_schedule_bh(struct dasd_device *);
-+void dasd_schedule_device_bh(struct dasd_device *);
-+void dasd_schedule_block_bh(struct dasd_block *);
- int dasd_sleep_on(struct dasd_ccw_req *);
- int dasd_sleep_on_immediatly(struct dasd_ccw_req *);
- int dasd_sleep_on_interruptible(struct dasd_ccw_req *);
--void dasd_set_timer(struct dasd_device *, int);
--void dasd_clear_timer(struct dasd_device *);
-+void dasd_device_set_timer(struct dasd_device *, int);
-+void dasd_device_clear_timer(struct dasd_device *);
-+void dasd_block_set_timer(struct dasd_block *, int);
-+void dasd_block_clear_timer(struct dasd_block *);
- int dasd_cancel_req(struct dasd_ccw_req *);
-+int dasd_flush_device_queue(struct dasd_device *);
- int dasd_generic_probe (struct ccw_device *, struct dasd_discipline *);
- void dasd_generic_remove (struct ccw_device *cdev);
- int dasd_generic_set_online(struct ccw_device *, struct dasd_discipline *);
- int dasd_generic_set_offline (struct ccw_device *cdev);
- int dasd_generic_notify(struct ccw_device *, int);
-+void dasd_generic_handle_state_change(struct dasd_device *);
-
- int dasd_generic_read_dev_chars(struct dasd_device *, char *, void **, int);
-
-@@ -542,10 +598,10 @@ int dasd_busid_known(char *);
- /* externals in dasd_gendisk.c */
- int dasd_gendisk_init(void);
- void dasd_gendisk_exit(void);
--int dasd_gendisk_alloc(struct dasd_device *);
--void dasd_gendisk_free(struct dasd_device *);
--int dasd_scan_partitions(struct dasd_device *);
--void dasd_destroy_partitions(struct dasd_device *);
-+int dasd_gendisk_alloc(struct dasd_block *);
-+void dasd_gendisk_free(struct dasd_block *);
-+int dasd_scan_partitions(struct dasd_block *);
-+void dasd_destroy_partitions(struct dasd_block *);
-
- /* externals in dasd_ioctl.c */
- int dasd_ioctl(struct inode *, struct file *, unsigned int, unsigned long);
-@@ -563,20 +619,9 @@ struct dasd_ccw_req *dasd_alloc_erp_request(char *, int, int,
- void dasd_free_erp_request(struct dasd_ccw_req *, struct dasd_device *);
- void dasd_log_sense(struct dasd_ccw_req *, struct irb *);
+ if (sd->type != TYPE_MEDIUM_CHANGER)
+ return -ENODEV;
+-
++
+ ch = kzalloc(sizeof(*ch), GFP_KERNEL);
+ if (NULL == ch)
+ return -ENOMEM;
--/* externals in dasd_3370_erp.c */
--dasd_era_t dasd_3370_erp_examine(struct dasd_ccw_req *, struct irb *);
--
- /* externals in dasd_3990_erp.c */
--dasd_era_t dasd_3990_erp_examine(struct dasd_ccw_req *, struct irb *);
- struct dasd_ccw_req *dasd_3990_erp_action(struct dasd_ccw_req *);
+- ch->minor = ch_devcount;
++ if (!idr_pre_get(&ch_index_idr, GFP_KERNEL))
++ goto free_ch;
++
++ spin_lock(&ch_index_lock);
++ ret = idr_get_new(&ch_index_idr, ch, &minor);
++ spin_unlock(&ch_index_lock);
++
++ if (ret)
++ goto free_ch;
++
++ if (minor > CH_MAX_DEVS) {
++ ret = -ENODEV;
++ goto remove_idr;
++ }
++
++ ch->minor = minor;
+ sprintf(ch->name,"ch%d",ch->minor);
++
++ class_dev = class_device_create(ch_sysfs_class, NULL,
++ MKDEV(SCSI_CHANGER_MAJOR, ch->minor),
++ dev, "s%s", ch->name);
++ if (IS_ERR(class_dev)) {
++ printk(KERN_WARNING "ch%d: class_device_create failed\n",
++ ch->minor);
++ ret = PTR_ERR(class_dev);
++ goto remove_idr;
++ }
++
+ mutex_init(&ch->lock);
+ ch->device = sd;
+ ch_readconfig(ch);
+ if (init)
+ ch_init_elem(ch);
--/* externals in dasd_9336_erp.c */
--dasd_era_t dasd_9336_erp_examine(struct dasd_ccw_req *, struct irb *);
--
--/* externals in dasd_9336_erp.c */
--dasd_era_t dasd_9343_erp_examine(struct dasd_ccw_req *, struct irb *);
--struct dasd_ccw_req *dasd_9343_erp_action(struct dasd_ccw_req *);
+- class_device_create(ch_sysfs_class, NULL,
+- MKDEV(SCSI_CHANGER_MAJOR,ch->minor),
+- dev, "s%s", ch->name);
-
- /* externals in dasd_eer.c */
- #ifdef CONFIG_DASD_EER
- int dasd_eer_init(void);
-diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
-index 672eb0a..91a6463 100644
---- a/drivers/s390/block/dasd_ioctl.c
-+++ b/drivers/s390/block/dasd_ioctl.c
-@@ -38,15 +38,15 @@ dasd_ioctl_api_version(void __user *argp)
- static int
- dasd_ioctl_enable(struct block_device *bdev)
- {
-- struct dasd_device *device = bdev->bd_disk->private_data;
-+ struct dasd_block *block = bdev->bd_disk->private_data;
-
- if (!capable(CAP_SYS_ADMIN))
- return -EACCES;
-
-- dasd_enable_device(device);
-+ dasd_enable_device(block->base);
- /* Formatting the dasd device can change the capacity. */
- mutex_lock(&bdev->bd_mutex);
-- i_size_write(bdev->bd_inode, (loff_t)get_capacity(device->gdp) << 9);
-+ i_size_write(bdev->bd_inode, (loff_t)get_capacity(block->gdp) << 9);
- mutex_unlock(&bdev->bd_mutex);
++ dev_set_drvdata(dev, ch);
+ sdev_printk(KERN_INFO, sd, "Attached scsi changer %s\n", ch->name);
+-
+- spin_lock(&ch_devlist_lock);
+- list_add_tail(&ch->list,&ch_devlist);
+- ch_devcount++;
+- spin_unlock(&ch_devlist_lock);
++
return 0;
++remove_idr:
++ idr_remove(&ch_index_idr, minor);
++free_ch:
++ kfree(ch);
++ return ret;
}
-@@ -58,7 +58,7 @@ dasd_ioctl_enable(struct block_device *bdev)
- static int
- dasd_ioctl_disable(struct block_device *bdev)
- {
-- struct dasd_device *device = bdev->bd_disk->private_data;
-+ struct dasd_block *block = bdev->bd_disk->private_data;
- if (!capable(CAP_SYS_ADMIN))
- return -EACCES;
-@@ -71,7 +71,7 @@ dasd_ioctl_disable(struct block_device *bdev)
- * using the BIODASDFMT ioctl. Therefore the correct state for the
- * device is DASD_STATE_BASIC that allows to do basic i/o.
- */
-- dasd_set_target_state(device, DASD_STATE_BASIC);
-+ dasd_set_target_state(block->base, DASD_STATE_BASIC);
- /*
- * Set i_size to zero, since read, write, etc. check against this
- * value.
-@@ -85,19 +85,19 @@ dasd_ioctl_disable(struct block_device *bdev)
- /*
- * Quiesce device.
- */
--static int
--dasd_ioctl_quiesce(struct dasd_device *device)
-+static int dasd_ioctl_quiesce(struct dasd_block *block)
+ static int ch_remove(struct device *dev)
{
- unsigned long flags;
-+ struct dasd_device *base;
+- struct scsi_device *sd = to_scsi_device(dev);
+- scsi_changer *tmp, *ch;
++ scsi_changer *ch = dev_get_drvdata(dev);
-+ base = block->base;
- if (!capable (CAP_SYS_ADMIN))
- return -EACCES;
+- spin_lock(&ch_devlist_lock);
+- ch = NULL;
+- list_for_each_entry(tmp,&ch_devlist,list) {
+- if (tmp->device == sd)
+- ch = tmp;
+- }
+- BUG_ON(NULL == ch);
+- list_del(&ch->list);
+- spin_unlock(&ch_devlist_lock);
++ spin_lock(&ch_index_lock);
++ idr_remove(&ch_index_idr, ch->minor);
++ spin_unlock(&ch_index_lock);
-- DEV_MESSAGE (KERN_DEBUG, device, "%s",
-- "Quiesce IO on device");
-- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-- device->stopped |= DASD_STOPPED_QUIESCE;
-- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ DEV_MESSAGE(KERN_DEBUG, base, "%s", "Quiesce IO on device");
-+ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
-+ base->stopped |= DASD_STOPPED_QUIESCE;
-+ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
+ class_device_destroy(ch_sysfs_class,
+ MKDEV(SCSI_CHANGER_MAJOR,ch->minor));
+ kfree(ch->dt);
+ kfree(ch);
+- ch_devcount--;
return 0;
}
-@@ -105,22 +105,21 @@ dasd_ioctl_quiesce(struct dasd_device *device)
- /*
- * Quiesce device.
- */
--static int
--dasd_ioctl_resume(struct dasd_device *device)
-+static int dasd_ioctl_resume(struct dasd_block *block)
++static struct scsi_driver ch_template = {
++ .owner = THIS_MODULE,
++ .gendrv = {
++ .name = "ch",
++ .probe = ch_probe,
++ .remove = ch_remove,
++ },
++};
++
++static const struct file_operations changer_fops = {
++ .owner = THIS_MODULE,
++ .open = ch_open,
++ .release = ch_release,
++ .unlocked_ioctl = ch_ioctl,
++#ifdef CONFIG_COMPAT
++ .compat_ioctl = ch_ioctl_compat,
++#endif
++};
++
+ static int __init init_ch_module(void)
{
- unsigned long flags;
-+ struct dasd_device *base;
-
-+ base = block->base;
- if (!capable (CAP_SYS_ADMIN))
- return -EACCES;
-
-- DEV_MESSAGE (KERN_DEBUG, device, "%s",
-- "resume IO on device");
--
-- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-- device->stopped &= ~DASD_STOPPED_QUIESCE;
-- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
-+ DEV_MESSAGE(KERN_DEBUG, base, "%s", "resume IO on device");
-+ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
-+ base->stopped &= ~DASD_STOPPED_QUIESCE;
-+ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
-
-- dasd_schedule_bh (device);
-+ dasd_schedule_block_bh(block);
- return 0;
+ int rc;
+-
++
+ printk(KERN_INFO "SCSI Media Changer driver v" VERSION " \n");
+ ch_sysfs_class = class_create(THIS_MODULE, "scsi_changer");
+ if (IS_ERR(ch_sysfs_class)) {
+@@ -996,11 +998,12 @@ static int __init init_ch_module(void)
+ return rc;
}
-@@ -130,22 +129,23 @@ dasd_ioctl_resume(struct dasd_device *device)
- * commands to format a single unit of the device. In terms of the ECKD
- * devices this means CCWs are generated to format a single track.
- */
--static int
--dasd_format(struct dasd_device * device, struct format_data_t * fdata)
-+static int dasd_format(struct dasd_block *block, struct format_data_t *fdata)
+-static void __exit exit_ch_module(void)
++static void __exit exit_ch_module(void)
{
- struct dasd_ccw_req *cqr;
-+ struct dasd_device *base;
- int rc;
+ scsi_unregister_driver(&ch_template.gendrv);
+ unregister_chrdev(SCSI_CHANGER_MAJOR, "ch");
+ class_destroy(ch_sysfs_class);
++ idr_destroy(&ch_index_idr);
+ }
-- if (device->discipline->format_device == NULL)
-+ base = block->base;
-+ if (base->discipline->format_device == NULL)
- return -EPERM;
+ module_init(init_ch_module);
+diff --git a/drivers/scsi/constants.c b/drivers/scsi/constants.c
+index 024553f..403a7f2 100644
+--- a/drivers/scsi/constants.c
++++ b/drivers/scsi/constants.c
+@@ -362,7 +362,6 @@ void scsi_print_command(struct scsi_cmnd *cmd)
+ EXPORT_SYMBOL(scsi_print_command);
-- if (device->state != DASD_STATE_BASIC) {
-- DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ if (base->state != DASD_STATE_BASIC) {
-+ DEV_MESSAGE(KERN_WARNING, base, "%s",
- "dasd_format: device is not disabled! ");
- return -EBUSY;
- }
+ /**
+- *
+ * scsi_print_status - print scsi status description
+ * @scsi_status: scsi status value
+ *
+@@ -1369,7 +1368,7 @@ EXPORT_SYMBOL(scsi_print_sense);
+ static const char * const hostbyte_table[]={
+ "DID_OK", "DID_NO_CONNECT", "DID_BUS_BUSY", "DID_TIME_OUT", "DID_BAD_TARGET",
+ "DID_ABORT", "DID_PARITY", "DID_ERROR", "DID_RESET", "DID_BAD_INTR",
+-"DID_PASSTHROUGH", "DID_SOFT_ERROR", "DID_IMM_RETRY"};
++"DID_PASSTHROUGH", "DID_SOFT_ERROR", "DID_IMM_RETRY", "DID_REQUEUE"};
+ #define NUM_HOSTBYTE_STRS ARRAY_SIZE(hostbyte_table)
-- DBF_DEV_EVENT(DBF_NOTICE, device,
-+ DBF_DEV_EVENT(DBF_NOTICE, base,
- "formatting units %d to %d (%d B blocks) flags %d",
- fdata->start_unit,
- fdata->stop_unit, fdata->blksize, fdata->intensity);
-@@ -156,20 +156,20 @@ dasd_format(struct dasd_device * device, struct format_data_t * fdata)
- * enabling the device later.
- */
- if (fdata->start_unit == 0) {
-- struct block_device *bdev = bdget_disk(device->gdp, 0);
-+ struct block_device *bdev = bdget_disk(block->gdp, 0);
- bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
- bdput(bdev);
+ static const char * const driverbyte_table[]={
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index a9def6e..f93c73c 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -1629,8 +1629,7 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, (dcb->target_lun << 5));
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+- DC395x_write8(acb, TRM_S1040_SCSI_FIFO,
+- sizeof(srb->cmd->sense_buffer));
++ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, SCSI_SENSE_BUFFERSIZE);
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+ } else {
+ ptr = (u8 *)srb->cmd->cmnd;
+@@ -1915,8 +1914,7 @@ static void command_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, (dcb->target_lun << 5));
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+- DC395x_write8(acb, TRM_S1040_SCSI_FIFO,
+- sizeof(srb->cmd->sense_buffer));
++ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, SCSI_SENSE_BUFFERSIZE);
+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
}
+ srb->state |= SRB_COMMAND;
+@@ -3685,7 +3683,7 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
+ srb->target_status = 0;
- while (fdata->start_unit <= fdata->stop_unit) {
-- cqr = device->discipline->format_device(device, fdata);
-+ cqr = base->discipline->format_device(base, fdata);
- if (IS_ERR(cqr))
- return PTR_ERR(cqr);
- rc = dasd_sleep_on_interruptible(cqr);
-- dasd_sfree_request(cqr, cqr->device);
-+ dasd_sfree_request(cqr, cqr->memdev);
- if (rc) {
- if (rc != -ERESTARTSYS)
-- DEV_MESSAGE(KERN_ERR, device,
-+ DEV_MESSAGE(KERN_ERR, base,
- " Formatting of unit %d failed "
- "with rc = %d",
- fdata->start_unit, rc);
-@@ -186,7 +186,7 @@ dasd_format(struct dasd_device * device, struct format_data_t * fdata)
- static int
- dasd_ioctl_format(struct block_device *bdev, void __user *argp)
- {
-- struct dasd_device *device = bdev->bd_disk->private_data;
-+ struct dasd_block *block = bdev->bd_disk->private_data;
- struct format_data_t fdata;
+ /* KG: Can this prevent crap sense data ? */
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- if (!capable(CAP_SYS_ADMIN))
-@@ -194,51 +194,47 @@ dasd_ioctl_format(struct block_device *bdev, void __user *argp)
- if (!argp)
- return -EINVAL;
+ /* Save some data */
+ srb->segment_x[DC395x_MAX_SG_LISTENTRY - 1].address =
+@@ -3694,15 +3692,15 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
+ srb->segment_x[0].length;
+ srb->xferred = srb->total_xfer_length;
+ /* srb->segment_x : a one entry of S/G list table */
+- srb->total_xfer_length = sizeof(cmd->sense_buffer);
+- srb->segment_x[0].length = sizeof(cmd->sense_buffer);
++ srb->total_xfer_length = SCSI_SENSE_BUFFERSIZE;
++ srb->segment_x[0].length = SCSI_SENSE_BUFFERSIZE;
+ /* Map sense buffer */
+ srb->segment_x[0].address =
+ pci_map_single(acb->dev, cmd->sense_buffer,
+- sizeof(cmd->sense_buffer), PCI_DMA_FROMDEVICE);
++ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE);
+ dprintkdbg(DBG_SG, "request_sense: map buffer %p->%08x(%05x)\n",
+ cmd->sense_buffer, srb->segment_x[0].address,
+- sizeof(cmd->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ srb->sg_count = 1;
+ srb->sg_index = 0;
-- if (device->features & DASD_FEATURE_READONLY)
-+ if (block->base->features & DASD_FEATURE_READONLY)
- return -EROFS;
- if (copy_from_user(&fdata, argp, sizeof(struct format_data_t)))
- return -EFAULT;
- if (bdev != bdev->bd_contains) {
-- DEV_MESSAGE(KERN_WARNING, device, "%s",
-+ DEV_MESSAGE(KERN_WARNING, block->base, "%s",
- "Cannot low-level format a partition");
- return -EINVAL;
- }
-- return dasd_format(device, &fdata);
-+ return dasd_format(block, &fdata);
- }
+diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c
+index b31d1c9..19cce12 100644
+--- a/drivers/scsi/dpt_i2o.c
++++ b/drivers/scsi/dpt_i2o.c
+@@ -2296,9 +2296,8 @@ static s32 adpt_i2o_to_scsi(void __iomem *reply, struct scsi_cmnd* cmd)
- #ifdef CONFIG_DASD_PROFILE
- /*
- * Reset device profile information
- */
--static int
--dasd_ioctl_reset_profile(struct dasd_device *device)
-+static int dasd_ioctl_reset_profile(struct dasd_block *block)
- {
-- memset(&device->profile, 0, sizeof (struct dasd_profile_info_t));
-+ memset(&block->profile, 0, sizeof(struct dasd_profile_info_t));
- return 0;
- }
+ // copy over the request sense data if it was a check
+ // condition status
+- if(dev_status == 0x02 /*CHECK_CONDITION*/) {
+- u32 len = sizeof(cmd->sense_buffer);
+- len = (len > 40) ? 40 : len;
++ if (dev_status == SAM_STAT_CHECK_CONDITION) {
++ u32 len = min(SCSI_SENSE_BUFFERSIZE, 40);
+ // Copy over the sense data
+ memcpy_fromio(cmd->sense_buffer, (reply+28) , len);
+ if(cmd->sense_buffer[0] == 0x70 /* class 7 */ &&
+diff --git a/drivers/scsi/eata.c b/drivers/scsi/eata.c
+index 7ead521..05163ce 100644
+--- a/drivers/scsi/eata.c
++++ b/drivers/scsi/eata.c
+@@ -1623,9 +1623,9 @@ static void map_dma(unsigned int i, struct hostdata *ha)
+ if (SCpnt->sense_buffer)
+ cpp->sense_addr =
+ H2DEV(pci_map_single(ha->pdev, SCpnt->sense_buffer,
+- sizeof SCpnt->sense_buffer, PCI_DMA_FROMDEVICE));
++ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE));
- /*
- * Return device profile information
- */
--static int
--dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
-+static int dasd_ioctl_read_profile(struct dasd_block *block, void __user *argp)
- {
- if (dasd_profile_level == DASD_PROFILE_OFF)
- return -EIO;
-- if (copy_to_user(argp, &device->profile,
-- sizeof (struct dasd_profile_info_t)))
-+ if (copy_to_user(argp, &block->profile,
-+ sizeof(struct dasd_profile_info_t)))
- return -EFAULT;
- return 0;
- }
- #else
--static int
--dasd_ioctl_reset_profile(struct dasd_device *device)
-+static int dasd_ioctl_reset_profile(struct dasd_block *block)
- {
- return -ENOSYS;
- }
+- cpp->sense_len = sizeof SCpnt->sense_buffer;
++ cpp->sense_len = SCSI_SENSE_BUFFERSIZE;
--static int
--dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
-+static int dasd_ioctl_read_profile(struct dasd_block *block, void __user *argp)
- {
- return -ENOSYS;
- }
-@@ -247,87 +243,88 @@ dasd_ioctl_read_profile(struct dasd_device *device, void __user *argp)
- /*
- * Return dasd information. Used for BIODASDINFO and BIODASDINFO2.
- */
--static int
--dasd_ioctl_information(struct dasd_device *device,
-- unsigned int cmd, void __user *argp)
-+static int dasd_ioctl_information(struct dasd_block *block,
-+ unsigned int cmd, void __user *argp)
- {
- struct dasd_information2_t *dasd_info;
- unsigned long flags;
- int rc;
-+ struct dasd_device *base;
- struct ccw_device *cdev;
- struct ccw_dev_id dev_id;
+ count = scsi_dma_map(SCpnt);
+ BUG_ON(count < 0);
+diff --git a/drivers/scsi/eata_pio.c b/drivers/scsi/eata_pio.c
+index 982c509..b5a6092 100644
+--- a/drivers/scsi/eata_pio.c
++++ b/drivers/scsi/eata_pio.c
+@@ -369,7 +369,6 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
+ cp = &hd->ccb[y];
-- if (!device->discipline->fill_info)
-+ base = block->base;
-+ if (!base->discipline->fill_info)
- return -EINVAL;
+ memset(cp, 0, sizeof(struct eata_ccb));
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
- dasd_info = kzalloc(sizeof(struct dasd_information2_t), GFP_KERNEL);
- if (dasd_info == NULL)
- return -ENOMEM;
+ cp->status = USED; /* claim free slot */
-- rc = device->discipline->fill_info(device, dasd_info);
-+ rc = base->discipline->fill_info(base, dasd_info);
- if (rc) {
- kfree(dasd_info);
- return rc;
- }
+@@ -385,7 +384,7 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
+ cp->DataIn = 0; /* Input mode */
-- cdev = device->cdev;
-+ cdev = base->cdev;
- ccw_device_get_id(cdev, &dev_id);
+ cp->Interpret = (cmd->device->id == hd->hostid);
+- cp->cp_datalen = cpu_to_be32(cmd->request_bufflen);
++ cp->cp_datalen = cpu_to_be32(scsi_bufflen(cmd));
+ cp->Auto_Req_Sen = 0;
+ cp->cp_reqDMA = 0;
+ cp->reqlen = 0;
+@@ -402,14 +401,14 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
+ cp->cmd = cmd;
+ cmd->host_scribble = (char *) &hd->ccb[y];
- dasd_info->devno = dev_id.devno;
-- dasd_info->schid = _ccw_device_get_subchannel_number(device->cdev);
-+ dasd_info->schid = _ccw_device_get_subchannel_number(base->cdev);
- dasd_info->cu_type = cdev->id.cu_type;
- dasd_info->cu_model = cdev->id.cu_model;
- dasd_info->dev_type = cdev->id.dev_type;
- dasd_info->dev_model = cdev->id.dev_model;
-- dasd_info->status = device->state;
-+ dasd_info->status = base->state;
- /*
- * The open_count is increased for every opener, that includes
- * the blkdev_get in dasd_scan_partitions.
- * This must be hidden from user-space.
- */
-- dasd_info->open_count = atomic_read(&device->open_count);
-- if (!device->bdev)
-+ dasd_info->open_count = atomic_read(&block->open_count);
-+ if (!block->bdev)
- dasd_info->open_count++;
+- if (cmd->use_sg == 0) {
++ if (!scsi_bufflen(cmd)) {
+ cmd->SCp.buffers_residual = 1;
+- cmd->SCp.ptr = cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
+ cmd->SCp.buffer = NULL;
+ } else {
+- cmd->SCp.buffer = cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg;
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd);
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ }
+diff --git a/drivers/scsi/fd_mcs.c b/drivers/scsi/fd_mcs.c
+index 8335b60..85bd54c 100644
+--- a/drivers/scsi/fd_mcs.c
++++ b/drivers/scsi/fd_mcs.c
+@@ -1017,24 +1017,6 @@ static irqreturn_t fd_mcs_intr(int irq, void *dev_id)
+ printk(" ** IN DONE %d ** ", current_SC->SCp.have_data_in);
+ #endif
- /*
- * check if device is really formatted
- * LDL / CDL was returned by 'fill_info'
- */
-- if ((device->state < DASD_STATE_READY) ||
-- (dasd_check_blocksize(device->bp_block)))
-+ if ((base->state < DASD_STATE_READY) ||
-+ (dasd_check_blocksize(block->bp_block)))
- dasd_info->format = DASD_FORMAT_NONE;
+-#if ERRORS_ONLY
+- if (current_SC->cmnd[0] == REQUEST_SENSE && !current_SC->SCp.Status) {
+- if ((unsigned char) (*((char *) current_SC->request_buffer + 2)) & 0x0f) {
+- unsigned char key;
+- unsigned char code;
+- unsigned char qualifier;
+-
+- key = (unsigned char) (*((char *) current_SC->request_buffer + 2)) & 0x0f;
+- code = (unsigned char) (*((char *) current_SC->request_buffer + 12));
+- qualifier = (unsigned char) (*((char *) current_SC->request_buffer + 13));
+-
+- if (key != UNIT_ATTENTION && !(key == NOT_READY && code == 0x04 && (!qualifier || qualifier == 0x02 || qualifier == 0x01))
+- && !(key == ILLEGAL_REQUEST && (code == 0x25 || code == 0x24 || !code)))
+-
+- printk("fd_mcs: REQUEST SENSE " "Key = %x, Code = %x, Qualifier = %x\n", key, code, qualifier);
+- }
+- }
+-#endif
+ #if EVERY_ACCESS
+ printk("BEFORE MY_DONE. . .");
+ #endif
+@@ -1097,7 +1079,9 @@ static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ panic("fd_mcs: fd_mcs_queue() NOT REENTRANT!\n");
+ }
+ #if EVERY_ACCESS
+- printk("queue: target = %d cmnd = 0x%02x pieces = %d size = %u\n", SCpnt->target, *(unsigned char *) SCpnt->cmnd, SCpnt->use_sg, SCpnt->request_bufflen);
++ printk("queue: target = %d cmnd = 0x%02x pieces = %d size = %u\n",
++ SCpnt->target, *(unsigned char *) SCpnt->cmnd,
++ scsi_sg_count(SCpnt), scsi_bufflen(SCpnt));
+ #endif
- dasd_info->features |=
-- ((device->features & DASD_FEATURE_READONLY) != 0);
-+ ((base->features & DASD_FEATURE_READONLY) != 0);
+ fd_mcs_make_bus_idle(shpnt);
+@@ -1107,14 +1091,14 @@ static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
-- if (device->discipline)
-- memcpy(dasd_info->type, device->discipline->name, 4);
-+ if (base->discipline)
-+ memcpy(dasd_info->type, base->discipline->name, 4);
- else
- memcpy(dasd_info->type, "none", 4);
+ /* Initialize static data */
-- if (device->request_queue->request_fn) {
-+ if (block->request_queue->request_fn) {
- struct list_head *l;
- #ifdef DASD_EXTENDED_PROFILING
- {
- struct list_head *l;
-- spin_lock_irqsave(&device->lock, flags);
-- list_for_each(l, &device->request_queue->queue_head)
-+ spin_lock_irqsave(&block->lock, flags);
-+ list_for_each(l, &block->request_queue->queue_head)
- dasd_info->req_queue_len++;
-- spin_unlock_irqrestore(&device->lock, flags);
-+ spin_unlock_irqrestore(&block->lock, flags);
- }
- #endif /* DASD_EXTENDED_PROFILING */
-- spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
-- list_for_each(l, &device->ccw_queue)
-+ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags);
-+ list_for_each(l, &base->ccw_queue)
- dasd_info->chanq_len++;
-- spin_unlock_irqrestore(get_ccwdev_lock(device->cdev),
-+ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev),
- flags);
+- if (current_SC->use_sg) {
+- current_SC->SCp.buffer = (struct scatterlist *) current_SC->request_buffer;
++ if (scsi_bufflen(current_SC)) {
++ current_SC->SCp.buffer = scsi_sglist(current_SC);
+ current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer);
+ current_SC->SCp.this_residual = current_SC->SCp.buffer->length;
+- current_SC->SCp.buffers_residual = current_SC->use_sg - 1;
++ current_SC->SCp.buffers_residual = scsi_sg_count(current_SC) - 1;
+ } else {
+- current_SC->SCp.ptr = (char *) current_SC->request_buffer;
+- current_SC->SCp.this_residual = current_SC->request_bufflen;
++ current_SC->SCp.ptr = NULL;
++ current_SC->SCp.this_residual = 0;
+ current_SC->SCp.buffer = NULL;
+ current_SC->SCp.buffers_residual = 0;
+ }
+@@ -1166,7 +1150,9 @@ static void fd_mcs_print_info(Scsi_Cmnd * SCpnt)
+ break;
}
- rc = 0;
- if (copy_to_user(argp, dasd_info,
- ((cmd == (unsigned int) BIODASDINFO2) ?
-- sizeof (struct dasd_information2_t) :
-- sizeof (struct dasd_information_t))))
-+ sizeof(struct dasd_information2_t) :
-+ sizeof(struct dasd_information_t))))
- rc = -EFAULT;
- kfree(dasd_info);
- return rc;
-@@ -339,7 +336,7 @@ dasd_ioctl_information(struct dasd_device *device,
- static int
- dasd_ioctl_set_ro(struct block_device *bdev, void __user *argp)
- {
-- struct dasd_device *device = bdev->bd_disk->private_data;
-+ struct dasd_block *block = bdev->bd_disk->private_data;
- int intval;
-
- if (!capable(CAP_SYS_ADMIN))
-@@ -351,11 +348,10 @@ dasd_ioctl_set_ro(struct block_device *bdev, void __user *argp)
- return -EFAULT;
+- printk("(%d), target = %d cmnd = 0x%02x pieces = %d size = %u\n", SCpnt->SCp.phase, SCpnt->device->id, *(unsigned char *) SCpnt->cmnd, SCpnt->use_sg, SCpnt->request_bufflen);
++ printk("(%d), target = %d cmnd = 0x%02x pieces = %d size = %u\n",
++ SCpnt->SCp.phase, SCpnt->device->id, *(unsigned char *) SCpnt->cmnd,
++ scsi_sg_count(SCpnt), scsi_bufflen(SCpnt));
+ printk("sent_command = %d, have_data_in = %d, timeout = %d\n", SCpnt->SCp.sent_command, SCpnt->SCp.have_data_in, SCpnt->timeout);
+ #if DEBUG_RACE
+ printk("in_interrupt_flag = %d\n", in_interrupt_flag);
+diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
+index b253b8c..c825239 100644
+--- a/drivers/scsi/gdth.c
++++ b/drivers/scsi/gdth.c
+@@ -141,7 +141,7 @@
+ static void gdth_delay(int milliseconds);
+ static void gdth_eval_mapping(ulong32 size, ulong32 *cyls, int *heads, int *secs);
+ static irqreturn_t gdth_interrupt(int irq, void *dev_id);
+-static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
++static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
+ int gdth_from_wait, int* pIndex);
+ static int gdth_sync_event(gdth_ha_str *ha, int service, unchar index,
+ Scsi_Cmnd *scp);
+@@ -165,7 +165,6 @@ static int gdth_internal_cache_cmd(gdth_ha_str *ha, Scsi_Cmnd *scp);
+ static int gdth_fill_cache_cmd(gdth_ha_str *ha, Scsi_Cmnd *scp, ushort hdrive);
- set_disk_ro(bdev->bd_disk, intval);
-- return dasd_set_feature(device->cdev, DASD_FEATURE_READONLY, intval);
-+ return dasd_set_feature(block->base->cdev, DASD_FEATURE_READONLY, intval);
+ static void gdth_enable_int(gdth_ha_str *ha);
+-static unchar gdth_get_status(gdth_ha_str *ha, int irq);
+ static int gdth_test_busy(gdth_ha_str *ha);
+ static int gdth_get_cmd_index(gdth_ha_str *ha);
+ static void gdth_release_event(gdth_ha_str *ha);
+@@ -1334,14 +1333,12 @@ static void __init gdth_enable_int(gdth_ha_str *ha)
}
--static int
--dasd_ioctl_readall_cmb(struct dasd_device *device, unsigned int cmd,
-+static int dasd_ioctl_readall_cmb(struct dasd_block *block, unsigned int cmd,
- unsigned long arg)
- {
- struct cmbdata __user *argp = (void __user *) arg;
-@@ -363,7 +359,7 @@ dasd_ioctl_readall_cmb(struct dasd_device *device, unsigned int cmd,
- struct cmbdata data;
- int ret;
-
-- ret = cmf_readall(device->cdev, &data);
-+ ret = cmf_readall(block->base->cdev, &data);
- if (!ret && copy_to_user(argp, &data, min(size, sizeof(*argp))))
- return -EFAULT;
- return ret;
-@@ -374,10 +370,10 @@ dasd_ioctl(struct inode *inode, struct file *file,
- unsigned int cmd, unsigned long arg)
+ /* return IStatus if interrupt was from this card else 0 */
+-static unchar gdth_get_status(gdth_ha_str *ha, int irq)
++static unchar gdth_get_status(gdth_ha_str *ha)
{
- struct block_device *bdev = inode->i_bdev;
-- struct dasd_device *device = bdev->bd_disk->private_data;
-+ struct dasd_block *block = bdev->bd_disk->private_data;
- void __user *argp = (void __user *)arg;
-
-- if (!device)
-+ if (!block)
- return -ENODEV;
+ unchar IStatus = 0;
- if ((_IOC_DIR(cmd) != _IOC_NONE) && !arg) {
-@@ -391,33 +387,33 @@ dasd_ioctl(struct inode *inode, struct file *file,
- case BIODASDENABLE:
- return dasd_ioctl_enable(bdev);
- case BIODASDQUIESCE:
-- return dasd_ioctl_quiesce(device);
-+ return dasd_ioctl_quiesce(block);
- case BIODASDRESUME:
-- return dasd_ioctl_resume(device);
-+ return dasd_ioctl_resume(block);
- case BIODASDFMT:
- return dasd_ioctl_format(bdev, argp);
- case BIODASDINFO:
-- return dasd_ioctl_information(device, cmd, argp);
-+ return dasd_ioctl_information(block, cmd, argp);
- case BIODASDINFO2:
-- return dasd_ioctl_information(device, cmd, argp);
-+ return dasd_ioctl_information(block, cmd, argp);
- case BIODASDPRRD:
-- return dasd_ioctl_read_profile(device, argp);
-+ return dasd_ioctl_read_profile(block, argp);
- case BIODASDPRRST:
-- return dasd_ioctl_reset_profile(device);
-+ return dasd_ioctl_reset_profile(block);
- case BLKROSET:
- return dasd_ioctl_set_ro(bdev, argp);
- case DASDAPIVER:
- return dasd_ioctl_api_version(argp);
- case BIODASDCMFENABLE:
-- return enable_cmf(device->cdev);
-+ return enable_cmf(block->base->cdev);
- case BIODASDCMFDISABLE:
-- return disable_cmf(device->cdev);
-+ return disable_cmf(block->base->cdev);
- case BIODASDREADALLCMB:
-- return dasd_ioctl_readall_cmb(device, cmd, arg);
-+ return dasd_ioctl_readall_cmb(block, cmd, arg);
- default:
- /* if the discipline has an ioctl method try it. */
-- if (device->discipline->ioctl) {
-- int rval = device->discipline->ioctl(device, cmd, argp);
-+ if (block->base->discipline->ioctl) {
-+ int rval = block->base->discipline->ioctl(block, cmd, argp);
- if (rval != -ENOIOCTLCMD)
- return rval;
- }
-diff --git a/drivers/s390/block/dasd_proc.c b/drivers/s390/block/dasd_proc.c
-index ac7e8ef..28a86f0 100644
---- a/drivers/s390/block/dasd_proc.c
-+++ b/drivers/s390/block/dasd_proc.c
-@@ -54,11 +54,16 @@ static int
- dasd_devices_show(struct seq_file *m, void *v)
- {
- struct dasd_device *device;
-+ struct dasd_block *block;
- char *substr;
+- TRACE(("gdth_get_status() irq %d ctr_count %d\n", irq, gdth_ctr_count));
++ TRACE(("gdth_get_status() irq %d ctr_count %d\n", ha->irq, gdth_ctr_count));
- device = dasd_device_from_devindex((unsigned long) v - 1);
- if (IS_ERR(device))
- return 0;
-+ if (device->block)
-+ block = device->block;
-+ else
-+ return 0;
- /* Print device number. */
- seq_printf(m, "%s", device->cdev->dev.bus_id);
- /* Print discipline string. */
-@@ -67,14 +72,14 @@ dasd_devices_show(struct seq_file *m, void *v)
- else
- seq_printf(m, "(none)");
- /* Print kdev. */
-- if (device->gdp)
-+ if (block->gdp)
- seq_printf(m, " at (%3d:%6d)",
-- device->gdp->major, device->gdp->first_minor);
-+ block->gdp->major, block->gdp->first_minor);
- else
- seq_printf(m, " at (???:??????)");
- /* Print device name. */
-- if (device->gdp)
-- seq_printf(m, " is %-8s", device->gdp->disk_name);
-+ if (block->gdp)
-+ seq_printf(m, " is %-8s", block->gdp->disk_name);
- else
- seq_printf(m, " is ????????");
- /* Print devices features. */
-@@ -100,14 +105,14 @@ dasd_devices_show(struct seq_file *m, void *v)
- case DASD_STATE_READY:
- case DASD_STATE_ONLINE:
- seq_printf(m, "active ");
-- if (dasd_check_blocksize(device->bp_block))
-+ if (dasd_check_blocksize(block->bp_block))
- seq_printf(m, "n/f ");
- else
- seq_printf(m,
- "at blocksize: %d, %ld blocks, %ld MB",
-- device->bp_block, device->blocks,
-- ((device->bp_block >> 9) *
-- device->blocks) >> 11);
-+ block->bp_block, block->blocks,
-+ ((block->bp_block >> 9) *
-+ block->blocks) >> 11);
- break;
- default:
- seq_printf(m, "no stat");
-@@ -137,7 +142,7 @@ static void dasd_devices_stop(struct seq_file *m, void *v)
- {
- }
+- if (ha->irq != (unchar)irq) /* check IRQ */
+- return false;
+ if (ha->type == GDT_EISA)
+ IStatus = inb((ushort)ha->bmic + EDOORREG);
+ else if (ha->type == GDT_ISA)
+@@ -1523,7 +1520,7 @@ static int gdth_wait(gdth_ha_str *ha, int index, ulong32 time)
+ return 1; /* no wait required */
--static struct seq_operations dasd_devices_seq_ops = {
-+static const struct seq_operations dasd_devices_seq_ops = {
- .start = dasd_devices_start,
- .next = dasd_devices_next,
- .stop = dasd_devices_stop,
-diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
-index 15a5789..7779bfc 100644
---- a/drivers/s390/block/dcssblk.c
-+++ b/drivers/s390/block/dcssblk.c
-@@ -82,7 +82,7 @@ struct dcssblk_dev_info {
- struct request_queue *dcssblk_queue;
- };
+ do {
+- __gdth_interrupt(ha, (int)ha->irq, true, &wait_index);
++ __gdth_interrupt(ha, true, &wait_index);
+ if (wait_index == index) {
+ answer_found = TRUE;
+ break;
+@@ -3036,7 +3033,7 @@ static void gdth_clear_events(void)
--static struct list_head dcssblk_devices = LIST_HEAD_INIT(dcssblk_devices);
-+static LIST_HEAD(dcssblk_devices);
- static struct rw_semaphore dcssblk_devices_sem;
+ /* SCSI interface functions */
- /*
-diff --git a/drivers/s390/char/Makefile b/drivers/s390/char/Makefile
-index 130de19..7e73e39 100644
---- a/drivers/s390/char/Makefile
-+++ b/drivers/s390/char/Makefile
-@@ -3,7 +3,7 @@
- #
+-static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
++static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
+ int gdth_from_wait, int* pIndex)
+ {
+ gdt6m_dpram_str __iomem *dp6m_ptr = NULL;
+@@ -3054,7 +3051,7 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
+ int act_int_coal = 0;
+ #endif
- obj-y += ctrlchar.o keyboard.o defkeymap.o sclp.o sclp_rw.o sclp_quiesce.o \
-- sclp_info.o sclp_config.o sclp_chp.o
-+ sclp_cmd.o sclp_config.o sclp_cpi_sys.o
+- TRACE(("gdth_interrupt() IRQ %d\n",irq));
++ TRACE(("gdth_interrupt() IRQ %d\n", ha->irq));
- obj-$(CONFIG_TN3270) += raw3270.o
- obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
-diff --git a/drivers/s390/char/monwriter.c b/drivers/s390/char/monwriter.c
-index 20442fb..a86c053 100644
---- a/drivers/s390/char/monwriter.c
-+++ b/drivers/s390/char/monwriter.c
-@@ -295,7 +295,7 @@ module_init(mon_init);
- module_exit(mon_exit);
+ /* if polling and not from gdth_wait() -> return */
+ if (gdth_polling) {
+@@ -3067,7 +3064,8 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
+ spin_lock_irqsave(&ha->smp_lock, flags);
- module_param_named(max_bufs, mon_max_bufs, int, 0644);
--MODULE_PARM_DESC(max_bufs, "Maximum number of sample monitor data buffers"
-+MODULE_PARM_DESC(max_bufs, "Maximum number of sample monitor data buffers "
- "that can be active at one time");
+ /* search controller */
+- if (0 == (IStatus = gdth_get_status(ha, irq))) {
++ IStatus = gdth_get_status(ha);
++ if (IStatus == 0) {
+ /* spurious interrupt */
+ if (!gdth_polling)
+ spin_unlock_irqrestore(&ha->smp_lock, flags);
+@@ -3294,9 +3292,9 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
- MODULE_AUTHOR("Melissa Howland <Melissa.Howland at us.ibm.com>");
-diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c
-index 8d1c64a..0d98f1f 100644
---- a/drivers/s390/char/raw3270.c
-+++ b/drivers/s390/char/raw3270.c
-@@ -66,7 +66,7 @@ struct raw3270 {
- static DEFINE_MUTEX(raw3270_mutex);
+ static irqreturn_t gdth_interrupt(int irq, void *dev_id)
+ {
+- gdth_ha_str *ha = (gdth_ha_str *)dev_id;
++ gdth_ha_str *ha = dev_id;
- /* List of 3270 devices. */
--static struct list_head raw3270_devices = LIST_HEAD_INIT(raw3270_devices);
-+static LIST_HEAD(raw3270_devices);
+- return __gdth_interrupt(ha, irq, false, NULL);
++ return __gdth_interrupt(ha, false, NULL);
+ }
- /*
- * Flag to indicate if the driver has been registered. Some operations
-@@ -1210,7 +1210,7 @@ struct raw3270_notifier {
- void (*notifier)(int, int);
+ static int gdth_sync_event(gdth_ha_str *ha, int service, unchar index,
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 24271a8..5ea1f98 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -54,8 +54,7 @@ static struct class shost_class = {
};
--static struct list_head raw3270_notifier = LIST_HEAD_INIT(raw3270_notifier);
-+static LIST_HEAD(raw3270_notifier);
+ /**
+- * scsi_host_set_state - Take the given host through the host
+- * state model.
++ * scsi_host_set_state - Take the given host through the host state model.
+ * @shost: scsi host to change the state of.
+ * @state: state to change to.
+ *
+@@ -429,9 +428,17 @@ void scsi_unregister(struct Scsi_Host *shost)
+ }
+ EXPORT_SYMBOL(scsi_unregister);
- int raw3270_register_notifier(void (*notifier)(int, int))
++static int __scsi_host_match(struct class_device *cdev, void *data)
++{
++ struct Scsi_Host *p;
++ unsigned short *hostnum = (unsigned short *)data;
++
++ p = class_to_shost(cdev);
++ return p->host_no == *hostnum;
++}
++
+ /**
+ * scsi_host_lookup - get a reference to a Scsi_Host by host no
+- *
+ * @hostnum: host number to locate
+ *
+ * Return value:
+@@ -439,19 +446,12 @@ EXPORT_SYMBOL(scsi_unregister);
+ **/
+ struct Scsi_Host *scsi_host_lookup(unsigned short hostnum)
{
-diff --git a/drivers/s390/char/sclp.h b/drivers/s390/char/sclp.h
-index c7318a1..aa8186d 100644
---- a/drivers/s390/char/sclp.h
-+++ b/drivers/s390/char/sclp.h
-@@ -56,8 +56,6 @@ typedef unsigned int sclp_cmdw_t;
- #define SCLP_CMDW_READ_EVENT_DATA 0x00770005
- #define SCLP_CMDW_WRITE_EVENT_DATA 0x00760005
- #define SCLP_CMDW_WRITE_EVENT_MASK 0x00780005
--#define SCLP_CMDW_READ_SCP_INFO 0x00020001
--#define SCLP_CMDW_READ_SCP_INFO_FORCED 0x00120001
+- struct class *class = &shost_class;
+ struct class_device *cdev;
+- struct Scsi_Host *shost = ERR_PTR(-ENXIO), *p;
++ struct Scsi_Host *shost = ERR_PTR(-ENXIO);
- #define GDS_ID_MDSMU 0x1310
- #define GDS_ID_MDSROUTEINFO 0x1311
-@@ -83,6 +81,8 @@ extern u64 sclp_facilities;
+- down(&class->sem);
+- list_for_each_entry(cdev, &class->children, node) {
+- p = class_to_shost(cdev);
+- if (p->host_no == hostnum) {
+- shost = scsi_host_get(p);
+- break;
+- }
+- }
+- up(&class->sem);
++ cdev = class_find_child(&shost_class, &hostnum, __scsi_host_match);
++ if (cdev)
++ shost = scsi_host_get(class_to_shost(cdev));
- #define SCLP_HAS_CHP_INFO (sclp_facilities & 0x8000000000000000ULL)
- #define SCLP_HAS_CHP_RECONFIG (sclp_facilities & 0x2000000000000000ULL)
-+#define SCLP_HAS_CPU_INFO (sclp_facilities & 0x0800000000000000ULL)
-+#define SCLP_HAS_CPU_RECONFIG (sclp_facilities & 0x0400000000000000ULL)
+ return shost;
+ }
+diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
+index 0844331..e7b2f35 100644
+--- a/drivers/scsi/hptiop.c
++++ b/drivers/scsi/hptiop.c
+@@ -1,5 +1,5 @@
+ /*
+- * HighPoint RR3xxx controller driver for Linux
++ * HighPoint RR3xxx/4xxx controller driver for Linux
+ * Copyright (C) 2006-2007 HighPoint Technologies, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+@@ -38,80 +38,84 @@
+ #include "hptiop.h"
- struct gds_subvector {
- u8 length;
-diff --git a/drivers/s390/char/sclp_chp.c b/drivers/s390/char/sclp_chp.c
-deleted file mode 100644
-index c68f5e7..0000000
---- a/drivers/s390/char/sclp_chp.c
-+++ /dev/null
-@@ -1,200 +0,0 @@
--/*
-- * drivers/s390/char/sclp_chp.c
-- *
-- * Copyright IBM Corp. 2007
-- * Author(s): Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
-- */
--
--#include <linux/types.h>
--#include <linux/gfp.h>
--#include <linux/errno.h>
--#include <linux/completion.h>
--#include <asm/sclp.h>
--#include <asm/chpid.h>
--
--#include "sclp.h"
--
--#define TAG "sclp_chp: "
--
--#define SCLP_CMDW_CONFIGURE_CHANNEL_PATH 0x000f0001
--#define SCLP_CMDW_DECONFIGURE_CHANNEL_PATH 0x000e0001
--#define SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION 0x00030001
--
--static inline sclp_cmdw_t get_configure_cmdw(struct chp_id chpid)
--{
-- return SCLP_CMDW_CONFIGURE_CHANNEL_PATH | chpid.id << 8;
--}
--
--static inline sclp_cmdw_t get_deconfigure_cmdw(struct chp_id chpid)
--{
-- return SCLP_CMDW_DECONFIGURE_CHANNEL_PATH | chpid.id << 8;
--}
--
--static void chp_callback(struct sclp_req *req, void *data)
--{
-- struct completion *completion = data;
--
-- complete(completion);
--}
--
--struct chp_cfg_sccb {
-- struct sccb_header header;
-- u8 ccm;
-- u8 reserved[6];
-- u8 cssid;
--} __attribute__((packed));
--
--struct chp_cfg_data {
-- struct chp_cfg_sccb sccb;
-- struct sclp_req req;
-- struct completion completion;
--} __attribute__((packed));
--
--static int do_configure(sclp_cmdw_t cmd)
--{
-- struct chp_cfg_data *data;
-- int rc;
--
-- if (!SCLP_HAS_CHP_RECONFIG)
-- return -EOPNOTSUPP;
-- /* Prepare sccb. */
-- data = (struct chp_cfg_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-- if (!data)
-- return -ENOMEM;
-- data->sccb.header.length = sizeof(struct chp_cfg_sccb);
-- data->req.command = cmd;
-- data->req.sccb = &(data->sccb);
-- data->req.status = SCLP_REQ_FILLED;
-- data->req.callback = chp_callback;
-- data->req.callback_data = &(data->completion);
-- init_completion(&data->completion);
--
-- /* Perform sclp request. */
-- rc = sclp_add_request(&(data->req));
-- if (rc)
-- goto out;
-- wait_for_completion(&data->completion);
--
-- /* Check response .*/
-- if (data->req.status != SCLP_REQ_DONE) {
-- printk(KERN_WARNING TAG "configure channel-path request failed "
-- "(status=0x%02x)\n", data->req.status);
-- rc = -EIO;
-- goto out;
-- }
-- switch (data->sccb.header.response_code) {
-- case 0x0020:
-- case 0x0120:
-- case 0x0440:
-- case 0x0450:
-- break;
-- default:
-- printk(KERN_WARNING TAG "configure channel-path failed "
-- "(cmd=0x%08x, response=0x%04x)\n", cmd,
-- data->sccb.header.response_code);
-- rc = -EIO;
-- break;
-- }
--out:
-- free_page((unsigned long) data);
--
-- return rc;
--}
--
--/**
-- * sclp_chp_configure - perform configure channel-path sclp command
-- * @chpid: channel-path ID
-- *
-- * Perform configure channel-path command sclp command for specified chpid.
-- * Return 0 after command successfully finished, non-zero otherwise.
-- */
--int sclp_chp_configure(struct chp_id chpid)
--{
-- return do_configure(get_configure_cmdw(chpid));
--}
+ MODULE_AUTHOR("HighPoint Technologies, Inc.");
+-MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx SATA Controller Driver");
++MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx/4xxx Controller Driver");
+
+ static char driver_name[] = "hptiop";
+-static const char driver_name_long[] = "RocketRAID 3xxx SATA Controller driver";
+-static const char driver_ver[] = "v1.2 (070830)";
-
--/**
-- * sclp_chp_deconfigure - perform deconfigure channel-path sclp command
-- * @chpid: channel-path ID
-- *
-- * Perform deconfigure channel-path command sclp command for specified chpid
-- * and wait for completion. On success return 0. Return non-zero otherwise.
-- */
--int sclp_chp_deconfigure(struct chp_id chpid)
+-static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 tag);
+-static void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag);
++static const char driver_name_long[] = "RocketRAID 3xxx/4xxx Controller driver";
++static const char driver_ver[] = "v1.3 (071203)";
++
++static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec);
++static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
++ struct hpt_iop_request_scsi_command *req);
++static void hptiop_host_request_callback_itl(struct hptiop_hba *hba, u32 tag);
++static void hptiop_iop_request_callback_itl(struct hptiop_hba *hba, u32 tag);
+ static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg);
+
+-static inline void hptiop_pci_posting_flush(struct hpt_iopmu __iomem *iop)
-{
-- return do_configure(get_deconfigure_cmdw(chpid));
+- readl(&iop->outbound_intstatus);
-}
-
--struct chp_info_sccb {
-- struct sccb_header header;
-- u8 recognized[SCLP_CHP_INFO_MASK_SIZE];
-- u8 standby[SCLP_CHP_INFO_MASK_SIZE];
-- u8 configured[SCLP_CHP_INFO_MASK_SIZE];
-- u8 ccm;
-- u8 reserved[6];
-- u8 cssid;
--} __attribute__((packed));
--
--struct chp_info_data {
-- struct chp_info_sccb sccb;
-- struct sclp_req req;
-- struct completion completion;
--} __attribute__((packed));
--
--/**
-- * sclp_chp_read_info - perform read channel-path information sclp command
-- * @info: resulting channel-path information data
-- *
-- * Perform read channel-path information sclp command and wait for completion.
-- * On success, store channel-path information in @info and return 0. Return
-- * non-zero otherwise.
-- */
--int sclp_chp_read_info(struct sclp_chp_info *info)
--{
-- struct chp_info_data *data;
-- int rc;
--
-- if (!SCLP_HAS_CHP_INFO)
-- return -EOPNOTSUPP;
-- /* Prepare sccb. */
-- data = (struct chp_info_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-- if (!data)
-- return -ENOMEM;
-- data->sccb.header.length = sizeof(struct chp_info_sccb);
-- data->req.command = SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION;
-- data->req.sccb = &(data->sccb);
-- data->req.status = SCLP_REQ_FILLED;
-- data->req.callback = chp_callback;
-- data->req.callback_data = &(data->completion);
-- init_completion(&data->completion);
--
-- /* Perform sclp request. */
-- rc = sclp_add_request(&(data->req));
-- if (rc)
-- goto out;
-- wait_for_completion(&data->completion);
--
-- /* Check response .*/
-- if (data->req.status != SCLP_REQ_DONE) {
-- printk(KERN_WARNING TAG "read channel-path info request failed "
-- "(status=0x%02x)\n", data->req.status);
-- rc = -EIO;
-- goto out;
-- }
-- if (data->sccb.header.response_code != 0x0010) {
-- printk(KERN_WARNING TAG "read channel-path info failed "
-- "(response=0x%04x)\n", data->sccb.header.response_code);
-- rc = -EIO;
-- goto out;
-- }
-- memcpy(info->recognized, data->sccb.recognized,
-- SCLP_CHP_INFO_MASK_SIZE);
-- memcpy(info->standby, data->sccb.standby,
-- SCLP_CHP_INFO_MASK_SIZE);
-- memcpy(info->configured, data->sccb.configured,
-- SCLP_CHP_INFO_MASK_SIZE);
--out:
-- free_page((unsigned long) data);
--
-- return rc;
--}
-diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
-new file mode 100644
-index 0000000..b5c2339
---- /dev/null
-+++ b/drivers/s390/char/sclp_cmd.c
-@@ -0,0 +1,398 @@
-+/*
-+ * drivers/s390/char/sclp_cmd.c
-+ *
-+ * Copyright IBM Corp. 2007
-+ * Author(s): Heiko Carstens <heiko.carstens at de.ibm.com>,
-+ * Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
-+ */
-+
-+#include <linux/completion.h>
-+#include <linux/init.h>
-+#include <linux/errno.h>
-+#include <linux/slab.h>
-+#include <linux/string.h>
-+#include <asm/chpid.h>
-+#include <asm/sclp.h>
-+#include "sclp.h"
-+
-+#define TAG "sclp_cmd: "
-+
-+#define SCLP_CMDW_READ_SCP_INFO 0x00020001
-+#define SCLP_CMDW_READ_SCP_INFO_FORCED 0x00120001
-+
-+struct read_info_sccb {
-+ struct sccb_header header; /* 0-7 */
-+ u16 rnmax; /* 8-9 */
-+ u8 rnsize; /* 10 */
-+ u8 _reserved0[24 - 11]; /* 11-15 */
-+ u8 loadparm[8]; /* 24-31 */
-+ u8 _reserved1[48 - 32]; /* 32-47 */
-+ u64 facilities; /* 48-55 */
-+ u8 _reserved2[84 - 56]; /* 56-83 */
-+ u8 fac84; /* 84 */
-+ u8 _reserved3[91 - 85]; /* 85-90 */
-+ u8 flags; /* 91 */
-+ u8 _reserved4[100 - 92]; /* 92-99 */
-+ u32 rnsize2; /* 100-103 */
-+ u64 rnmax2; /* 104-111 */
-+ u8 _reserved5[4096 - 112]; /* 112-4095 */
-+} __attribute__((packed, aligned(PAGE_SIZE)));
-+
-+static struct read_info_sccb __initdata early_read_info_sccb;
-+static int __initdata early_read_info_sccb_valid;
-+
-+u64 sclp_facilities;
-+static u8 sclp_fac84;
-+
-+static int __init sclp_cmd_sync_early(sclp_cmdw_t cmd, void *sccb)
+-static int iop_wait_ready(struct hpt_iopmu __iomem *iop, u32 millisec)
++static int iop_wait_ready_itl(struct hptiop_hba *hba, u32 millisec)
+ {
+ u32 req = 0;
+ int i;
+
+ for (i = 0; i < millisec; i++) {
+- req = readl(&iop->inbound_queue);
++ req = readl(&hba->u.itl.iop->inbound_queue);
+ if (req != IOPMU_QUEUE_EMPTY)
+ break;
+ msleep(1);
+ }
+
+ if (req != IOPMU_QUEUE_EMPTY) {
+- writel(req, &iop->outbound_queue);
+- hptiop_pci_posting_flush(iop);
++ writel(req, &hba->u.itl.iop->outbound_queue);
++ readl(&hba->u.itl.iop->outbound_intstatus);
+ return 0;
+ }
+
+ return -1;
+ }
+
+-static void hptiop_request_callback(struct hptiop_hba *hba, u32 tag)
++static int iop_wait_ready_mv(struct hptiop_hba *hba, u32 millisec)
+{
-+ int rc;
-+
-+ __ctl_set_bit(0, 9);
-+ rc = sclp_service_call(cmd, sccb);
-+ if (rc)
-+ goto out;
-+ __load_psw_mask(PSW_BASE_BITS | PSW_MASK_EXT |
-+ PSW_MASK_WAIT | PSW_DEFAULT_KEY);
-+ local_irq_disable();
-+out:
-+ /* Contents of the sccb might have changed. */
-+ barrier();
-+ __ctl_clear_bit(0, 9);
-+ return rc;
++ return iop_send_sync_msg(hba, IOPMU_INBOUND_MSG0_NOP, millisec);
+}
+
-+void __init sclp_read_info_early(void)
-+{
-+ int rc;
-+ int i;
-+ struct read_info_sccb *sccb;
-+ sclp_cmdw_t commands[] = {SCLP_CMDW_READ_SCP_INFO_FORCED,
-+ SCLP_CMDW_READ_SCP_INFO};
-+
-+ sccb = &early_read_info_sccb;
-+ for (i = 0; i < ARRAY_SIZE(commands); i++) {
-+ do {
-+ memset(sccb, 0, sizeof(*sccb));
-+ sccb->header.length = sizeof(*sccb);
-+ sccb->header.control_mask[2] = 0x80;
-+ rc = sclp_cmd_sync_early(commands[i], sccb);
-+ } while (rc == -EBUSY);
++static void hptiop_request_callback_itl(struct hptiop_hba *hba, u32 tag)
+ {
+ if (tag & IOPMU_QUEUE_ADDR_HOST_BIT)
+- return hptiop_host_request_callback(hba,
++ hptiop_host_request_callback_itl(hba,
+ tag & ~IOPMU_QUEUE_ADDR_HOST_BIT);
+ else
+- return hptiop_iop_request_callback(hba, tag);
++ hptiop_iop_request_callback_itl(hba, tag);
+ }
+
+-static inline void hptiop_drain_outbound_queue(struct hptiop_hba *hba)
++static void hptiop_drain_outbound_queue_itl(struct hptiop_hba *hba)
+ {
+ u32 req;
+
+- while ((req = readl(&hba->iop->outbound_queue)) != IOPMU_QUEUE_EMPTY) {
++ while ((req = readl(&hba->u.itl.iop->outbound_queue)) !=
++ IOPMU_QUEUE_EMPTY) {
+
+ if (req & IOPMU_QUEUE_MASK_HOST_BITS)
+- hptiop_request_callback(hba, req);
++ hptiop_request_callback_itl(hba, req);
+ else {
+ struct hpt_iop_request_header __iomem * p;
+
+ p = (struct hpt_iop_request_header __iomem *)
+- ((char __iomem *)hba->iop + req);
++ ((char __iomem *)hba->u.itl.iop + req);
+
+ if (readl(&p->flags) & IOP_REQUEST_FLAG_SYNC_REQUEST) {
+ if (readl(&p->context))
+- hptiop_request_callback(hba, req);
++ hptiop_request_callback_itl(hba, req);
+ else
+ writel(1, &p->context);
+ }
+ else
+- hptiop_request_callback(hba, req);
++ hptiop_request_callback_itl(hba, req);
+ }
+ }
+ }
+
+-static int __iop_intr(struct hptiop_hba *hba)
++static int iop_intr_itl(struct hptiop_hba *hba)
+ {
+- struct hpt_iopmu __iomem *iop = hba->iop;
++ struct hpt_iopmu_itl __iomem *iop = hba->u.itl.iop;
+ u32 status;
+ int ret = 0;
+
+@@ -119,6 +123,7 @@ static int __iop_intr(struct hptiop_hba *hba)
+
+ if (status & IOPMU_OUTBOUND_INT_MSG0) {
+ u32 msg = readl(&iop->outbound_msgaddr0);
+
-+ if (rc)
-+ break;
-+ if (sccb->header.response_code == 0x10) {
-+ early_read_info_sccb_valid = 1;
-+ break;
-+ }
-+ if (sccb->header.response_code != 0x1f0)
-+ break;
+ dprintk("received outbound msg %x\n", msg);
+ writel(IOPMU_OUTBOUND_INT_MSG0, &iop->outbound_intstatus);
+ hptiop_message_callback(hba, msg);
+@@ -126,31 +131,115 @@ static int __iop_intr(struct hptiop_hba *hba)
+ }
+
+ if (status & IOPMU_OUTBOUND_INT_POSTQUEUE) {
+- hptiop_drain_outbound_queue(hba);
++ hptiop_drain_outbound_queue_itl(hba);
++ ret = 1;
+ }
-+}
-+
-+void __init sclp_facilities_detect(void)
-+{
-+ if (!early_read_info_sccb_valid)
-+ return;
-+ sclp_facilities = early_read_info_sccb.facilities;
-+ sclp_fac84 = early_read_info_sccb.fac84;
-+}
-+
-+unsigned long long __init sclp_memory_detect(void)
-+{
-+ unsigned long long memsize;
-+ struct read_info_sccb *sccb;
+
-+ if (!early_read_info_sccb_valid)
-+ return 0;
-+ sccb = &early_read_info_sccb;
-+ if (sccb->rnsize)
-+ memsize = sccb->rnsize << 20;
-+ else
-+ memsize = sccb->rnsize2 << 20;
-+ if (sccb->rnmax)
-+ memsize *= sccb->rnmax;
-+ else
-+ memsize *= sccb->rnmax2;
-+ return memsize;
++ return ret;
+}
+
-+/*
-+ * This function will be called after sclp_memory_detect(), which gets called
-+ * early from early.c code. Therefore the sccb should have valid contents.
-+ */
-+void __init sclp_get_ipl_info(struct sclp_ipl_info *info)
++static u64 mv_outbound_read(struct hpt_iopmu_mv __iomem *mu)
+{
-+ struct read_info_sccb *sccb;
++ u32 outbound_tail = readl(&mu->outbound_tail);
++ u32 outbound_head = readl(&mu->outbound_head);
+
-+ if (!early_read_info_sccb_valid)
-+ return;
-+ sccb = &early_read_info_sccb;
-+ info->is_valid = 1;
-+ if (sccb->flags & 0x2)
-+ info->has_dump = 1;
-+ memcpy(&info->loadparm, &sccb->loadparm, LOADPARM_LEN);
-+}
++ if (outbound_tail != outbound_head) {
++ u64 p;
+
-+static void sclp_sync_callback(struct sclp_req *req, void *data)
-+{
-+ struct completion *completion = data;
++ memcpy_fromio(&p, &mu->outbound_q[mu->outbound_tail], 8);
++ outbound_tail++;
+
-+ complete(completion);
++ if (outbound_tail == MVIOP_QUEUE_LEN)
++ outbound_tail = 0;
++ writel(outbound_tail, &mu->outbound_tail);
++ return p;
++ } else
++ return 0;
+}
+
-+static int do_sync_request(sclp_cmdw_t cmd, void *sccb)
++static void mv_inbound_write(u64 p, struct hptiop_hba *hba)
+{
-+ struct completion completion;
-+ struct sclp_req *request;
-+ int rc;
-+
-+ request = kzalloc(sizeof(*request), GFP_KERNEL);
-+ if (!request)
-+ return -ENOMEM;
-+ request->command = cmd;
-+ request->sccb = sccb;
-+ request->status = SCLP_REQ_FILLED;
-+ request->callback = sclp_sync_callback;
-+ request->callback_data = &completion;
-+ init_completion(&completion);
-+
-+ /* Perform sclp request. */
-+ rc = sclp_add_request(request);
-+ if (rc)
-+ goto out;
-+ wait_for_completion(&completion);
-+
-+ /* Check response. */
-+ if (request->status != SCLP_REQ_DONE) {
-+ printk(KERN_WARNING TAG "sync request failed "
-+ "(cmd=0x%08x, status=0x%02x)\n", cmd, request->status);
-+ rc = -EIO;
-+ }
-+out:
-+ kfree(request);
-+ return rc;
-+}
-+
-+/*
-+ * CPU configuration related functions.
-+ */
-+
-+#define SCLP_CMDW_READ_CPU_INFO 0x00010001
-+#define SCLP_CMDW_CONFIGURE_CPU 0x00110001
-+#define SCLP_CMDW_DECONFIGURE_CPU 0x00100001
-+
-+struct read_cpu_info_sccb {
-+ struct sccb_header header;
-+ u16 nr_configured;
-+ u16 offset_configured;
-+ u16 nr_standby;
-+ u16 offset_standby;
-+ u8 reserved[4096 - 16];
-+} __attribute__((packed, aligned(PAGE_SIZE)));
++ u32 inbound_head = readl(&hba->u.mv.mu->inbound_head);
++ u32 head = inbound_head + 1;
+
-+static void sclp_fill_cpu_info(struct sclp_cpu_info *info,
-+ struct read_cpu_info_sccb *sccb)
-+{
-+ char *page = (char *) sccb;
++ if (head == MVIOP_QUEUE_LEN)
++ head = 0;
+
-+ memset(info, 0, sizeof(*info));
-+ info->configured = sccb->nr_configured;
-+ info->standby = sccb->nr_standby;
-+ info->combined = sccb->nr_configured + sccb->nr_standby;
-+ info->has_cpu_type = sclp_fac84 & 0x1;
-+ memcpy(&info->cpu, page + sccb->offset_configured,
-+ info->combined * sizeof(struct sclp_cpu_entry));
++ memcpy_toio(&hba->u.mv.mu->inbound_q[inbound_head], &p, 8);
++ writel(head, &hba->u.mv.mu->inbound_head);
++ writel(MVIOP_MU_INBOUND_INT_POSTQUEUE,
++ &hba->u.mv.regs->inbound_doorbell);
+}
+
-+int sclp_get_cpu_info(struct sclp_cpu_info *info)
++static void hptiop_request_callback_mv(struct hptiop_hba *hba, u64 tag)
+{
-+ int rc;
-+ struct read_cpu_info_sccb *sccb;
-+
-+ if (!SCLP_HAS_CPU_INFO)
-+ return -EOPNOTSUPP;
-+ sccb = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-+ if (!sccb)
-+ return -ENOMEM;
-+ sccb->header.length = sizeof(*sccb);
-+ rc = do_sync_request(SCLP_CMDW_READ_CPU_INFO, sccb);
-+ if (rc)
-+ goto out;
-+ if (sccb->header.response_code != 0x0010) {
-+ printk(KERN_WARNING TAG "readcpuinfo failed "
-+ "(response=0x%04x)\n", sccb->header.response_code);
-+ rc = -EIO;
-+ goto out;
-+ }
-+ sclp_fill_cpu_info(info, sccb);
-+out:
-+ free_page((unsigned long) sccb);
-+ return rc;
-+}
++ u32 req_type = (tag >> 5) & 0x7;
++ struct hpt_iop_request_scsi_command *req;
+
-+struct cpu_configure_sccb {
-+ struct sccb_header header;
-+} __attribute__((packed, aligned(8)));
++ dprintk("hptiop_request_callback_mv: tag=%llx\n", tag);
+
-+static int do_cpu_configure(sclp_cmdw_t cmd)
-+{
-+ struct cpu_configure_sccb *sccb;
-+ int rc;
++ BUG_ON((tag & MVIOP_MU_QUEUE_REQUEST_RETURN_CONTEXT) == 0);
+
-+ if (!SCLP_HAS_CPU_RECONFIG)
-+ return -EOPNOTSUPP;
-+ /*
-+ * This is not going to cross a page boundary since we force
-+ * kmalloc to have a minimum alignment of 8 bytes on s390.
-+ */
-+ sccb = kzalloc(sizeof(*sccb), GFP_KERNEL | GFP_DMA);
-+ if (!sccb)
-+ return -ENOMEM;
-+ sccb->header.length = sizeof(*sccb);
-+ rc = do_sync_request(cmd, sccb);
-+ if (rc)
-+ goto out;
-+ switch (sccb->header.response_code) {
-+ case 0x0020:
-+ case 0x0120:
-+ break;
-+ default:
-+ printk(KERN_WARNING TAG "configure cpu failed (cmd=0x%08x, "
-+ "response=0x%04x)\n", cmd, sccb->header.response_code);
-+ rc = -EIO;
++ switch (req_type) {
++ case IOP_REQUEST_TYPE_GET_CONFIG:
++ case IOP_REQUEST_TYPE_SET_CONFIG:
++ hba->msg_done = 1;
+ break;
-+ }
-+out:
-+ kfree(sccb);
-+ return rc;
-+}
-+
-+int sclp_cpu_configure(u8 cpu)
-+{
-+ return do_cpu_configure(SCLP_CMDW_CONFIGURE_CPU | cpu << 8);
-+}
-+
-+int sclp_cpu_deconfigure(u8 cpu)
-+{
-+ return do_cpu_configure(SCLP_CMDW_DECONFIGURE_CPU | cpu << 8);
-+}
-+
-+/*
-+ * Channel path configuration related functions.
-+ */
-+
-+#define SCLP_CMDW_CONFIGURE_CHPATH 0x000f0001
-+#define SCLP_CMDW_DECONFIGURE_CHPATH 0x000e0001
-+#define SCLP_CMDW_READ_CHPATH_INFORMATION 0x00030001
-+
-+struct chp_cfg_sccb {
-+ struct sccb_header header;
-+ u8 ccm;
-+ u8 reserved[6];
-+ u8 cssid;
-+} __attribute__((packed));
+
-+static int do_chp_configure(sclp_cmdw_t cmd)
-+{
-+ struct chp_cfg_sccb *sccb;
-+ int rc;
++ case IOP_REQUEST_TYPE_SCSI_COMMAND:
++ req = hba->reqs[tag >> 8].req_virt;
++ if (likely(tag & MVIOP_MU_QUEUE_REQUEST_RESULT_BIT))
++ req->header.result = cpu_to_le32(IOP_RESULT_SUCCESS);
+
-+ if (!SCLP_HAS_CHP_RECONFIG)
-+ return -EOPNOTSUPP;
-+ /* Prepare sccb. */
-+ sccb = (struct chp_cfg_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-+ if (!sccb)
-+ return -ENOMEM;
-+ sccb->header.length = sizeof(*sccb);
-+ rc = do_sync_request(cmd, sccb);
-+ if (rc)
-+ goto out;
-+ switch (sccb->header.response_code) {
-+ case 0x0020:
-+ case 0x0120:
-+ case 0x0440:
-+ case 0x0450:
++ hptiop_finish_scsi_req(hba, tag>>8, req);
+ break;
++
+ default:
-+ printk(KERN_WARNING TAG "configure channel-path failed "
-+ "(cmd=0x%08x, response=0x%04x)\n", cmd,
-+ sccb->header.response_code);
-+ rc = -EIO;
+ break;
+ }
-+out:
-+ free_page((unsigned long) sccb);
-+ return rc;
+}
+
-+/**
-+ * sclp_chp_configure - perform configure channel-path sclp command
-+ * @chpid: channel-path ID
-+ *
-+ * Perform configure channel-path command sclp command for specified chpid.
-+ * Return 0 after command successfully finished, non-zero otherwise.
-+ */
-+int sclp_chp_configure(struct chp_id chpid)
++static int iop_intr_mv(struct hptiop_hba *hba)
+{
-+ return do_chp_configure(SCLP_CMDW_CONFIGURE_CHPATH | chpid.id << 8);
-+}
++ u32 status;
++ int ret = 0;
+
-+/**
-+ * sclp_chp_deconfigure - perform deconfigure channel-path sclp command
-+ * @chpid: channel-path ID
-+ *
-+ * Perform deconfigure channel-path command sclp command for specified chpid
-+ * and wait for completion. On success return 0. Return non-zero otherwise.
-+ */
-+int sclp_chp_deconfigure(struct chp_id chpid)
-+{
-+ return do_chp_configure(SCLP_CMDW_DECONFIGURE_CHPATH | chpid.id << 8);
-+}
++ status = readl(&hba->u.mv.regs->outbound_doorbell);
++ writel(~status, &hba->u.mv.regs->outbound_doorbell);
+
-+struct chp_info_sccb {
-+ struct sccb_header header;
-+ u8 recognized[SCLP_CHP_INFO_MASK_SIZE];
-+ u8 standby[SCLP_CHP_INFO_MASK_SIZE];
-+ u8 configured[SCLP_CHP_INFO_MASK_SIZE];
-+ u8 ccm;
-+ u8 reserved[6];
-+ u8 cssid;
-+} __attribute__((packed));
++ if (status & MVIOP_MU_OUTBOUND_INT_MSG) {
++ u32 msg;
++ msg = readl(&hba->u.mv.mu->outbound_msg);
++ dprintk("received outbound msg %x\n", msg);
++ hptiop_message_callback(hba, msg);
++ ret = 1;
++ }
+
-+/**
-+ * sclp_chp_read_info - perform read channel-path information sclp command
-+ * @info: resulting channel-path information data
-+ *
-+ * Perform read channel-path information sclp command and wait for completion.
-+ * On success, store channel-path information in @info and return 0. Return
-+ * non-zero otherwise.
-+ */
-+int sclp_chp_read_info(struct sclp_chp_info *info)
-+{
-+ struct chp_info_sccb *sccb;
-+ int rc;
++ if (status & MVIOP_MU_OUTBOUND_INT_POSTQUEUE) {
++ u64 tag;
+
-+ if (!SCLP_HAS_CHP_INFO)
-+ return -EOPNOTSUPP;
-+ /* Prepare sccb. */
-+ sccb = (struct chp_info_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-+ if (!sccb)
-+ return -ENOMEM;
-+ sccb->header.length = sizeof(*sccb);
-+ rc = do_sync_request(SCLP_CMDW_READ_CHPATH_INFORMATION, sccb);
-+ if (rc)
-+ goto out;
-+ if (sccb->header.response_code != 0x0010) {
-+ printk(KERN_WARNING TAG "read channel-path info failed "
-+ "(response=0x%04x)\n", sccb->header.response_code);
-+ rc = -EIO;
-+ goto out;
-+ }
-+ memcpy(info->recognized, sccb->recognized, SCLP_CHP_INFO_MASK_SIZE);
-+ memcpy(info->standby, sccb->standby, SCLP_CHP_INFO_MASK_SIZE);
-+ memcpy(info->configured, sccb->configured, SCLP_CHP_INFO_MASK_SIZE);
-+out:
-+ free_page((unsigned long) sccb);
-+ return rc;
-+}
-diff --git a/drivers/s390/char/sclp_config.c b/drivers/s390/char/sclp_config.c
-index 5322e5e..9dc77f1 100644
---- a/drivers/s390/char/sclp_config.c
-+++ b/drivers/s390/char/sclp_config.c
-@@ -29,12 +29,12 @@ static void sclp_cpu_capability_notify(struct work_struct *work)
- struct sys_device *sysdev;
-
- printk(KERN_WARNING TAG "cpu capability changed.\n");
-- lock_cpu_hotplug();
-+ get_online_cpus();
- for_each_online_cpu(cpu) {
- sysdev = get_cpu_sysdev(cpu);
- kobject_uevent(&sysdev->kobj, KOBJ_CHANGE);
++ while ((tag = mv_outbound_read(hba->u.mv.mu)))
++ hptiop_request_callback_mv(hba, tag);
+ ret = 1;
}
-- unlock_cpu_hotplug();
-+ put_online_cpus();
- }
-
- static void sclp_conf_receiver_fn(struct evbuf_header *evbuf)
-diff --git a/drivers/s390/char/sclp_cpi.c b/drivers/s390/char/sclp_cpi.c
-index 82a13d9..5716487 100644
---- a/drivers/s390/char/sclp_cpi.c
-+++ b/drivers/s390/char/sclp_cpi.c
-@@ -1,255 +1,41 @@
- /*
-- * Author: Martin Peschke <mpeschke at de.ibm.com>
-- * Copyright (C) 2001 IBM Entwicklung GmbH, IBM Corporation
-+ * drivers/s390/char/sclp_cpi.c
-+ * SCLP control programm identification
- *
-- * SCLP Control-Program Identification.
-+ * Copyright IBM Corp. 2001, 2007
-+ * Author(s): Martin Peschke <mpeschke at de.ibm.com>
-+ * Michael Ernst <mernst at de.ibm.com>
- */
--#include <linux/version.h>
- #include <linux/kmod.h>
- #include <linux/module.h>
- #include <linux/moduleparam.h>
--#include <linux/init.h>
--#include <linux/timer.h>
--#include <linux/string.h>
--#include <linux/err.h>
--#include <linux/slab.h>
--#include <asm/ebcdic.h>
--#include <asm/semaphore.h>
--
--#include "sclp.h"
--#include "sclp_rw.h"
--
--#define CPI_LENGTH_SYSTEM_TYPE 8
--#define CPI_LENGTH_SYSTEM_NAME 8
--#define CPI_LENGTH_SYSPLEX_NAME 8
--
--struct cpi_evbuf {
-- struct evbuf_header header;
-- u8 id_format;
-- u8 reserved0;
-- u8 system_type[CPI_LENGTH_SYSTEM_TYPE];
-- u64 reserved1;
-- u8 system_name[CPI_LENGTH_SYSTEM_NAME];
-- u64 reserved2;
-- u64 system_level;
-- u64 reserved3;
-- u8 sysplex_name[CPI_LENGTH_SYSPLEX_NAME];
-- u8 reserved4[16];
--} __attribute__((packed));
--
--struct cpi_sccb {
-- struct sccb_header header;
-- struct cpi_evbuf cpi_evbuf;
--} __attribute__((packed));
--
--/* Event type structure for write message and write priority message */
--static struct sclp_register sclp_cpi_event =
--{
-- .send_mask = EVTYP_CTLPROGIDENT_MASK
--};
-+#include <linux/version.h>
-+#include "sclp_cpi_sys.h"
+ return ret;
+ }
- MODULE_LICENSE("GPL");
-+MODULE_DESCRIPTION("Identify this operating system instance "
-+ "to the System z hardware");
-+MODULE_AUTHOR("Martin Peschke <mpeschke at de.ibm.com>, "
-+ "Michael Ernst <mernst at de.ibm.com>");
+-static int iop_send_sync_request(struct hptiop_hba *hba,
++static int iop_send_sync_request_itl(struct hptiop_hba *hba,
+ void __iomem *_req, u32 millisec)
+ {
+ struct hpt_iop_request_header __iomem *req = _req;
+ u32 i;
--MODULE_AUTHOR(
-- "Martin Peschke, IBM Deutschland Entwicklung GmbH "
-- "<mpeschke at de.ibm.com>");
+- writel(readl(&req->flags) | IOP_REQUEST_FLAG_SYNC_REQUEST,
+- &req->flags);
-
--MODULE_DESCRIPTION(
-- "identify this operating system instance to the S/390 "
-- "or zSeries hardware");
-+static char *system_name = "";
-+static char *sysplex_name = "";
-
--static char *system_name = NULL;
- module_param(system_name, charp, 0);
- MODULE_PARM_DESC(system_name, "e.g. hostname - max. 8 characters");
++ writel(readl(&req->flags) | IOP_REQUEST_FLAG_SYNC_REQUEST, &req->flags);
+ writel(0, &req->context);
-
--static char *sysplex_name = NULL;
--#ifdef ALLOW_SYSPLEX_NAME
- module_param(sysplex_name, charp, 0);
- MODULE_PARM_DESC(sysplex_name, "if applicable - max. 8 characters");
--#endif
+- writel((unsigned long)req - (unsigned long)hba->iop,
+- &hba->iop->inbound_queue);
-
--/* use default value for this field (as well as for system level) */
--static char *system_type = "LINUX";
+- hptiop_pci_posting_flush(hba->iop);
++ writel((unsigned long)req - (unsigned long)hba->u.itl.iop,
++ &hba->u.itl.iop->inbound_queue);
++ readl(&hba->u.itl.iop->outbound_intstatus);
--static int
--cpi_check_parms(void)
-+static int __init cpi_module_init(void)
- {
-- /* reject if no system type specified */
-- if (!system_type) {
-- printk("cpi: bug: no system type specified\n");
-- return -EINVAL;
-- }
--
-- /* reject if system type larger than 8 characters */
-- if (strlen(system_type) > CPI_LENGTH_SYSTEM_NAME) {
-- printk("cpi: bug: system type has length of %li characters - "
-- "only %i characters supported\n",
-- strlen(system_type), CPI_LENGTH_SYSTEM_TYPE);
-- return -EINVAL;
-- }
--
-- /* reject if no system name specified */
-- if (!system_name) {
-- printk("cpi: no system name specified\n");
-- return -EINVAL;
-- }
--
-- /* reject if system name larger than 8 characters */
-- if (strlen(system_name) > CPI_LENGTH_SYSTEM_NAME) {
-- printk("cpi: system name has length of %li characters - "
-- "only %i characters supported\n",
-- strlen(system_name), CPI_LENGTH_SYSTEM_NAME);
-- return -EINVAL;
-- }
--
-- /* reject if specified sysplex name larger than 8 characters */
-- if (sysplex_name && strlen(sysplex_name) > CPI_LENGTH_SYSPLEX_NAME) {
-- printk("cpi: sysplex name has length of %li characters"
-- " - only %i characters supported\n",
-- strlen(sysplex_name), CPI_LENGTH_SYSPLEX_NAME);
-- return -EINVAL;
-- }
-- return 0;
-+ return sclp_cpi_set_data(system_name, sysplex_name, "LINUX",
-+ LINUX_VERSION_CODE);
+ for (i = 0; i < millisec; i++) {
+- __iop_intr(hba);
++ iop_intr_itl(hba);
+ if (readl(&req->context))
+ return 0;
+ msleep(1);
+@@ -159,19 +248,49 @@ static int iop_send_sync_request(struct hptiop_hba *hba,
+ return -1;
}
--static void
--cpi_callback(struct sclp_req *req, void *data)
--{
-- struct semaphore *sem;
--
-- sem = (struct semaphore *) data;
-- up(sem);
--}
--
--static struct sclp_req *
--cpi_prepare_req(void)
--{
-- struct sclp_req *req;
-- struct cpi_sccb *sccb;
-- struct cpi_evbuf *evb;
--
-- req = kmalloc(sizeof(struct sclp_req), GFP_KERNEL);
-- if (req == NULL)
-- return ERR_PTR(-ENOMEM);
-- sccb = (struct cpi_sccb *) __get_free_page(GFP_KERNEL | GFP_DMA);
-- if (sccb == NULL) {
-- kfree(req);
-- return ERR_PTR(-ENOMEM);
-- }
-- memset(sccb, 0, sizeof(struct cpi_sccb));
--
-- /* setup SCCB for Control-Program Identification */
-- sccb->header.length = sizeof(struct cpi_sccb);
-- sccb->cpi_evbuf.header.length = sizeof(struct cpi_evbuf);
-- sccb->cpi_evbuf.header.type = 0x0B;
-- evb = &sccb->cpi_evbuf;
--
-- /* set system type */
-- memset(evb->system_type, ' ', CPI_LENGTH_SYSTEM_TYPE);
-- memcpy(evb->system_type, system_type, strlen(system_type));
-- sclp_ascebc_str(evb->system_type, CPI_LENGTH_SYSTEM_TYPE);
-- EBC_TOUPPER(evb->system_type, CPI_LENGTH_SYSTEM_TYPE);
--
-- /* set system name */
-- memset(evb->system_name, ' ', CPI_LENGTH_SYSTEM_NAME);
-- memcpy(evb->system_name, system_name, strlen(system_name));
-- sclp_ascebc_str(evb->system_name, CPI_LENGTH_SYSTEM_NAME);
-- EBC_TOUPPER(evb->system_name, CPI_LENGTH_SYSTEM_NAME);
--
-- /* set system level */
-- evb->system_level = LINUX_VERSION_CODE;
--
-- /* set sysplex name */
-- if (sysplex_name) {
-- memset(evb->sysplex_name, ' ', CPI_LENGTH_SYSPLEX_NAME);
-- memcpy(evb->sysplex_name, sysplex_name, strlen(sysplex_name));
-- sclp_ascebc_str(evb->sysplex_name, CPI_LENGTH_SYSPLEX_NAME);
-- EBC_TOUPPER(evb->sysplex_name, CPI_LENGTH_SYSPLEX_NAME);
-- }
--
-- /* prepare request data structure presented to SCLP driver */
-- req->command = SCLP_CMDW_WRITE_EVENT_DATA;
-- req->sccb = sccb;
-- req->status = SCLP_REQ_FILLED;
-- req->callback = cpi_callback;
-- return req;
--}
--
--static void
--cpi_free_req(struct sclp_req *req)
--{
-- free_page((unsigned long) req->sccb);
-- kfree(req);
--}
--
--static int __init
--cpi_module_init(void)
--{
-- struct semaphore sem;
-- struct sclp_req *req;
-- int rc;
--
-- rc = cpi_check_parms();
-- if (rc)
-- return rc;
--
-- rc = sclp_register(&sclp_cpi_event);
-- if (rc) {
-- /* could not register sclp event. Die. */
-- printk(KERN_WARNING "cpi: could not register to hardware "
-- "console.\n");
-- return -EINVAL;
-- }
-- if (!(sclp_cpi_event.sclp_send_mask & EVTYP_CTLPROGIDENT_MASK)) {
-- printk(KERN_WARNING "cpi: no control program identification "
-- "support\n");
-- sclp_unregister(&sclp_cpi_event);
-- return -EOPNOTSUPP;
-- }
--
-- req = cpi_prepare_req();
-- if (IS_ERR(req)) {
-- printk(KERN_WARNING "cpi: couldn't allocate request\n");
-- sclp_unregister(&sclp_cpi_event);
-- return PTR_ERR(req);
-- }
--
-- /* Prepare semaphore */
-- sema_init(&sem, 0);
-- req->callback_data = &sem;
-- /* Add request to sclp queue */
-- rc = sclp_add_request(req);
-- if (rc) {
-- printk(KERN_WARNING "cpi: could not start request\n");
-- cpi_free_req(req);
-- sclp_unregister(&sclp_cpi_event);
-- return rc;
-- }
-- /* make "insmod" sleep until callback arrives */
-- down(&sem);
--
-- rc = ((struct cpi_sccb *) req->sccb)->header.response_code;
-- if (rc != 0x0020) {
-- printk(KERN_WARNING "cpi: failed with response code 0x%x\n",
-- rc);
-- rc = -ECOMM;
-- } else
-- rc = 0;
--
-- cpi_free_req(req);
-- sclp_unregister(&sclp_cpi_event);
--
-- return rc;
--}
--
--
- static void __exit cpi_module_exit(void)
+-static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
++static int iop_send_sync_request_mv(struct hptiop_hba *hba,
++ u32 size_bits, u32 millisec)
{
- }
++ struct hpt_iop_request_header *reqhdr = hba->u.mv.internal_req;
+ u32 i;
--
--/* declare driver module init/cleanup functions */
- module_init(cpi_module_init);
- module_exit(cpi_module_exit);
--
-diff --git a/drivers/s390/char/sclp_cpi_sys.c b/drivers/s390/char/sclp_cpi_sys.c
-new file mode 100644
-index 0000000..4161703
---- /dev/null
-+++ b/drivers/s390/char/sclp_cpi_sys.c
-@@ -0,0 +1,400 @@
-+/*
-+ * drivers/s390/char/sclp_cpi_sys.c
-+ * SCLP control program identification sysfs interface
-+ *
-+ * Copyright IBM Corp. 2001, 2007
-+ * Author(s): Martin Peschke <mpeschke at de.ibm.com>
-+ * Michael Ernst <mernst at de.ibm.com>
-+ */
-+
-+#include <linux/kernel.h>
-+#include <linux/init.h>
-+#include <linux/stat.h>
-+#include <linux/device.h>
-+#include <linux/string.h>
-+#include <linux/ctype.h>
-+#include <linux/kmod.h>
-+#include <linux/timer.h>
-+#include <linux/err.h>
-+#include <linux/slab.h>
-+#include <linux/completion.h>
-+#include <asm/ebcdic.h>
-+#include <asm/sclp.h>
-+#include "sclp.h"
-+#include "sclp_rw.h"
-+#include "sclp_cpi_sys.h"
-+
-+#define CPI_LENGTH_NAME 8
-+#define CPI_LENGTH_LEVEL 16
-+
-+struct cpi_evbuf {
-+ struct evbuf_header header;
-+ u8 id_format;
-+ u8 reserved0;
-+ u8 system_type[CPI_LENGTH_NAME];
-+ u64 reserved1;
-+ u8 system_name[CPI_LENGTH_NAME];
-+ u64 reserved2;
-+ u64 system_level;
-+ u64 reserved3;
-+ u8 sysplex_name[CPI_LENGTH_NAME];
-+ u8 reserved4[16];
-+} __attribute__((packed));
-+
-+struct cpi_sccb {
-+ struct sccb_header header;
-+ struct cpi_evbuf cpi_evbuf;
-+} __attribute__((packed));
-+
-+static struct sclp_register sclp_cpi_event = {
-+ .send_mask = EVTYP_CTLPROGIDENT_MASK,
-+};
-+
-+static char system_name[CPI_LENGTH_NAME + 1];
-+static char sysplex_name[CPI_LENGTH_NAME + 1];
-+static char system_type[CPI_LENGTH_NAME + 1];
-+static u64 system_level;
+ hba->msg_done = 0;
++ reqhdr->flags |= cpu_to_le32(IOP_REQUEST_FLAG_SYNC_REQUEST);
++ mv_inbound_write(hba->u.mv.internal_req_phy |
++ MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bits, hba);
+
-+static void set_data(char *field, char *data)
-+{
-+ memset(field, ' ', CPI_LENGTH_NAME);
-+ memcpy(field, data, strlen(data));
-+ sclp_ascebc_str(field, CPI_LENGTH_NAME);
++ for (i = 0; i < millisec; i++) {
++ iop_intr_mv(hba);
++ if (hba->msg_done)
++ return 0;
++ msleep(1);
++ }
++ return -1;
+}
+
-+static void cpi_callback(struct sclp_req *req, void *data)
++static void hptiop_post_msg_itl(struct hptiop_hba *hba, u32 msg)
+{
-+ struct completion *completion = data;
-+
-+ complete(completion);
++ writel(msg, &hba->u.itl.iop->inbound_msgaddr0);
++ readl(&hba->u.itl.iop->outbound_intstatus);
+}
+
-+static struct sclp_req *cpi_prepare_req(void)
++static void hptiop_post_msg_mv(struct hptiop_hba *hba, u32 msg)
+{
-+ struct sclp_req *req;
-+ struct cpi_sccb *sccb;
-+ struct cpi_evbuf *evb;
-+
-+ req = kzalloc(sizeof(struct sclp_req), GFP_KERNEL);
-+ if (!req)
-+ return ERR_PTR(-ENOMEM);
-+ sccb = (struct cpi_sccb *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
-+ if (!sccb) {
-+ kfree(req);
-+ return ERR_PTR(-ENOMEM);
-+ }
-+
-+ /* setup SCCB for Control-Program Identification */
-+ sccb->header.length = sizeof(struct cpi_sccb);
-+ sccb->cpi_evbuf.header.length = sizeof(struct cpi_evbuf);
-+ sccb->cpi_evbuf.header.type = 0x0b;
-+ evb = &sccb->cpi_evbuf;
-+
-+ /* set system type */
-+ set_data(evb->system_type, system_type);
-+
-+ /* set system name */
-+ set_data(evb->system_name, system_name);
-+
-+ /* set sytem level */
-+ evb->system_level = system_level;
-+
-+ /* set sysplex name */
-+ set_data(evb->sysplex_name, sysplex_name);
-+
-+ /* prepare request data structure presented to SCLP driver */
-+ req->command = SCLP_CMDW_WRITE_EVENT_DATA;
-+ req->sccb = sccb;
-+ req->status = SCLP_REQ_FILLED;
-+ req->callback = cpi_callback;
-+ return req;
++ writel(msg, &hba->u.mv.mu->inbound_msg);
++ writel(MVIOP_MU_INBOUND_INT_MSG, &hba->u.mv.regs->inbound_doorbell);
++ readl(&hba->u.mv.regs->inbound_doorbell);
+}
-+
-+static void cpi_free_req(struct sclp_req *req)
+
+- writel(msg, &hba->iop->inbound_msgaddr0);
++static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
+{
-+ free_page((unsigned long) req->sccb);
-+ kfree(req);
++ u32 i;
+
+- hptiop_pci_posting_flush(hba->iop);
++ hba->msg_done = 0;
++ hba->ops->post_msg(hba, msg);
+
+ for (i = 0; i < millisec; i++) {
+ spin_lock_irq(hba->host->host_lock);
+- __iop_intr(hba);
++ hba->ops->iop_intr(hba);
+ spin_unlock_irq(hba->host->host_lock);
+ if (hba->msg_done)
+ break;
+@@ -181,46 +300,67 @@ static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
+ return hba->msg_done? 0 : -1;
+ }
+
+-static int iop_get_config(struct hptiop_hba *hba,
++static int iop_get_config_itl(struct hptiop_hba *hba,
+ struct hpt_iop_request_get_config *config)
+ {
+ u32 req32;
+ struct hpt_iop_request_get_config __iomem *req;
+
+- req32 = readl(&hba->iop->inbound_queue);
++ req32 = readl(&hba->u.itl.iop->inbound_queue);
+ if (req32 == IOPMU_QUEUE_EMPTY)
+ return -1;
+
+ req = (struct hpt_iop_request_get_config __iomem *)
+- ((unsigned long)hba->iop + req32);
++ ((unsigned long)hba->u.itl.iop + req32);
+
+ writel(0, &req->header.flags);
+ writel(IOP_REQUEST_TYPE_GET_CONFIG, &req->header.type);
+ writel(sizeof(struct hpt_iop_request_get_config), &req->header.size);
+ writel(IOP_RESULT_PENDING, &req->header.result);
+
+- if (iop_send_sync_request(hba, req, 20000)) {
++ if (iop_send_sync_request_itl(hba, req, 20000)) {
+ dprintk("Get config send cmd failed\n");
+ return -1;
+ }
+
+ memcpy_fromio(config, req, sizeof(*config));
+- writel(req32, &hba->iop->outbound_queue);
++ writel(req32, &hba->u.itl.iop->outbound_queue);
++ return 0;
+}
+
-+static int cpi_req(void)
++static int iop_get_config_mv(struct hptiop_hba *hba,
++ struct hpt_iop_request_get_config *config)
+{
-+ struct completion completion;
-+ struct sclp_req *req;
-+ int rc;
-+ int response;
-+
-+ rc = sclp_register(&sclp_cpi_event);
-+ if (rc) {
-+ printk(KERN_WARNING "cpi: could not register "
-+ "to hardware console.\n");
-+ goto out;
-+ }
-+ if (!(sclp_cpi_event.sclp_send_mask & EVTYP_CTLPROGIDENT_MASK)) {
-+ printk(KERN_WARNING "cpi: no control program "
-+ "identification support\n");
-+ rc = -EOPNOTSUPP;
-+ goto out_unregister;
-+ }
-+
-+ req = cpi_prepare_req();
-+ if (IS_ERR(req)) {
-+ printk(KERN_WARNING "cpi: could not allocate request\n");
-+ rc = PTR_ERR(req);
-+ goto out_unregister;
-+ }
++ struct hpt_iop_request_get_config *req = hba->u.mv.internal_req;
+
-+ init_completion(&completion);
-+ req->callback_data = &completion;
++ req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
++ req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_GET_CONFIG);
++ req->header.size =
++ cpu_to_le32(sizeof(struct hpt_iop_request_get_config));
++ req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
++ req->header.context = cpu_to_le64(IOP_REQUEST_TYPE_GET_CONFIG<<5);
+
-+ /* Add request to sclp queue */
-+ rc = sclp_add_request(req);
-+ if (rc) {
-+ printk(KERN_WARNING "cpi: could not start request\n");
-+ goto out_free_req;
++ if (iop_send_sync_request_mv(hba, 0, 20000)) {
++ dprintk("Get config send cmd failed\n");
++ return -1;
+ }
+
-+ wait_for_completion(&completion);
-+
-+ if (req->status != SCLP_REQ_DONE) {
-+ printk(KERN_WARNING "cpi: request failed (status=0x%02x)\n",
-+ req->status);
-+ rc = -EIO;
-+ goto out_free_req;
-+ }
++ memcpy(config, req, sizeof(struct hpt_iop_request_get_config));
+ return 0;
+ }
+
+-static int iop_set_config(struct hptiop_hba *hba,
++static int iop_set_config_itl(struct hptiop_hba *hba,
+ struct hpt_iop_request_set_config *config)
+ {
+ u32 req32;
+ struct hpt_iop_request_set_config __iomem *req;
+
+- req32 = readl(&hba->iop->inbound_queue);
++ req32 = readl(&hba->u.itl.iop->inbound_queue);
+ if (req32 == IOPMU_QUEUE_EMPTY)
+ return -1;
+
+ req = (struct hpt_iop_request_set_config __iomem *)
+- ((unsigned long)hba->iop + req32);
++ ((unsigned long)hba->u.itl.iop + req32);
+
+ memcpy_toio((u8 __iomem *)req + sizeof(struct hpt_iop_request_header),
+ (u8 *)config + sizeof(struct hpt_iop_request_header),
+@@ -232,22 +372,52 @@ static int iop_set_config(struct hptiop_hba *hba,
+ writel(sizeof(struct hpt_iop_request_set_config), &req->header.size);
+ writel(IOP_RESULT_PENDING, &req->header.result);
+
+- if (iop_send_sync_request(hba, req, 20000)) {
++ if (iop_send_sync_request_itl(hba, req, 20000)) {
+ dprintk("Set config send cmd failed\n");
+ return -1;
+ }
+
+- writel(req32, &hba->iop->outbound_queue);
++ writel(req32, &hba->u.itl.iop->outbound_queue);
+ return 0;
+ }
+
+-static int hptiop_initialize_iop(struct hptiop_hba *hba)
++static int iop_set_config_mv(struct hptiop_hba *hba,
++ struct hpt_iop_request_set_config *config)
+ {
+- struct hpt_iopmu __iomem *iop = hba->iop;
++ struct hpt_iop_request_set_config *req = hba->u.mv.internal_req;
+
+- /* enable interrupts */
++ memcpy(req, config, sizeof(struct hpt_iop_request_set_config));
++ req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
++ req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_SET_CONFIG);
++ req->header.size =
++ cpu_to_le32(sizeof(struct hpt_iop_request_set_config));
++ req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
++ req->header.context = cpu_to_le64(IOP_REQUEST_TYPE_SET_CONFIG<<5);
+
-+ response = ((struct cpi_sccb *) req->sccb)->header.response_code;
-+ if (response != 0x0020) {
-+ printk(KERN_WARNING "cpi: failed with "
-+ "response code 0x%x\n", response);
-+ rc = -EIO;
++ if (iop_send_sync_request_mv(hba, 0, 20000)) {
++ dprintk("Set config send cmd failed\n");
++ return -1;
+ }
+
-+out_free_req:
-+ cpi_free_req(req);
-+
-+out_unregister:
-+ sclp_unregister(&sclp_cpi_event);
-+
-+out:
-+ return rc;
++ return 0;
+}
+
-+static int check_string(const char *attr, const char *str)
++static void hptiop_enable_intr_itl(struct hptiop_hba *hba)
+{
-+ size_t len;
-+ size_t i;
-+
-+ len = strlen(str);
-+
-+ if ((len > 0) && (str[len - 1] == '\n'))
-+ len--;
-+
-+ if (len > CPI_LENGTH_NAME)
-+ return -EINVAL;
-+
-+ for (i = 0; i < len ; i++) {
-+ if (isalpha(str[i]) || isdigit(str[i]) ||
-+ strchr("$@# ", str[i]))
-+ continue;
-+ return -EINVAL;
-+ }
-+
-+ return 0;
+ writel(~(IOPMU_OUTBOUND_INT_POSTQUEUE | IOPMU_OUTBOUND_INT_MSG0),
+- &iop->outbound_intmask);
++ &hba->u.itl.iop->outbound_intmask);
+}
+
-+static void set_string(char *attr, const char *value)
++static void hptiop_enable_intr_mv(struct hptiop_hba *hba)
+{
-+ size_t len;
-+ size_t i;
++ writel(MVIOP_MU_OUTBOUND_INT_POSTQUEUE | MVIOP_MU_OUTBOUND_INT_MSG,
++ &hba->u.mv.regs->outbound_intmask);
++}
+
-+ len = strlen(value);
++static int hptiop_initialize_iop(struct hptiop_hba *hba)
++{
++ /* enable interrupts */
++ hba->ops->enable_intr(hba);
+
+ hba->initialized = 1;
+
+@@ -261,37 +431,74 @@ static int hptiop_initialize_iop(struct hptiop_hba *hba)
+ return 0;
+ }
+
+-static int hptiop_map_pci_bar(struct hptiop_hba *hba)
++static void __iomem *hptiop_map_pci_bar(struct hptiop_hba *hba, int index)
+ {
+ u32 mem_base_phy, length;
+ void __iomem *mem_base_virt;
+
-+ if ((len > 0) && (value[len - 1] == '\n'))
-+ len--;
+ struct pci_dev *pcidev = hba->pcidev;
+
+- if (!(pci_resource_flags(pcidev, 0) & IORESOURCE_MEM)) {
+
-+ for (i = 0; i < CPI_LENGTH_NAME; i++) {
-+ if (i < len)
-+ attr[i] = toupper(value[i]);
-+ else
-+ attr[i] = ' ';
++ if (!(pci_resource_flags(pcidev, index) & IORESOURCE_MEM)) {
+ printk(KERN_ERR "scsi%d: pci resource invalid\n",
+ hba->host->host_no);
+- return -1;
++ return 0;
+ }
+
+- mem_base_phy = pci_resource_start(pcidev, 0);
+- length = pci_resource_len(pcidev, 0);
++ mem_base_phy = pci_resource_start(pcidev, index);
++ length = pci_resource_len(pcidev, index);
+ mem_base_virt = ioremap(mem_base_phy, length);
+
+ if (!mem_base_virt) {
+ printk(KERN_ERR "scsi%d: Fail to ioremap memory space\n",
+ hba->host->host_no);
++ return 0;
+ }
++ return mem_base_virt;
+}
+
-+static ssize_t system_name_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *page)
++static int hptiop_map_pci_bar_itl(struct hptiop_hba *hba)
+{
-+ return snprintf(page, PAGE_SIZE, "%s\n", system_name);
++ hba->u.itl.iop = hptiop_map_pci_bar(hba, 0);
++ if (hba->u.itl.iop)
++ return 0;
++ else
++ return -1;
+}
+
-+static ssize_t system_name_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf,
-+ size_t len)
++static void hptiop_unmap_pci_bar_itl(struct hptiop_hba *hba)
+{
-+ int rc;
-+
-+ rc = check_string("system_name", buf);
-+ if (rc)
-+ return rc;
-+
-+ set_string(system_name, buf);
-+
-+ return len;
++ iounmap(hba->u.itl.iop);
+}
+
-+static struct kobj_attribute system_name_attr =
-+ __ATTR(system_name, 0644, system_name_show, system_name_store);
++static int hptiop_map_pci_bar_mv(struct hptiop_hba *hba)
++{
++ hba->u.mv.regs = hptiop_map_pci_bar(hba, 0);
++ if (hba->u.mv.regs == 0)
++ return -1;
+
-+static ssize_t sysplex_name_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *page)
++ hba->u.mv.mu = hptiop_map_pci_bar(hba, 2);
++ if (hba->u.mv.mu == 0) {
++ iounmap(hba->u.mv.regs);
+ return -1;
+ }
+
+- hba->iop = mem_base_virt;
+- dprintk("hptiop_map_pci_bar: iop=%p\n", hba->iop);
+ return 0;
+ }
+
++static void hptiop_unmap_pci_bar_mv(struct hptiop_hba *hba)
+{
-+ return snprintf(page, PAGE_SIZE, "%s\n", sysplex_name);
++ iounmap(hba->u.mv.regs);
++ iounmap(hba->u.mv.mu);
+}
+
-+static ssize_t sysplex_name_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf,
-+ size_t len)
-+{
-+ int rc;
+ static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg)
+ {
+ dprintk("iop message 0x%x\n", msg);
+
++ if (msg == IOPMU_INBOUND_MSG0_NOP)
++ hba->msg_done = 1;
+
-+ rc = check_string("sysplex_name", buf);
-+ if (rc)
-+ return rc;
+ if (!hba->initialized)
+ return;
+
+@@ -303,7 +510,7 @@ static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg)
+ hba->msg_done = 1;
+ }
+
+-static inline struct hptiop_request *get_req(struct hptiop_hba *hba)
++static struct hptiop_request *get_req(struct hptiop_hba *hba)
+ {
+ struct hptiop_request *ret;
+
+@@ -316,30 +523,19 @@ static inline struct hptiop_request *get_req(struct hptiop_hba *hba)
+ return ret;
+ }
+
+-static inline void free_req(struct hptiop_hba *hba, struct hptiop_request *req)
++static void free_req(struct hptiop_hba *hba, struct hptiop_request *req)
+ {
+ dprintk("free_req(%d, %p)\n", req->index, req);
+ req->next = hba->req_list;
+ hba->req_list = req;
+ }
+
+-static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
++static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
++ struct hpt_iop_request_scsi_command *req)
+ {
+- struct hpt_iop_request_scsi_command *req;
+ struct scsi_cmnd *scp;
+- u32 tag;
+-
+- if (hba->iopintf_v2) {
+- tag = _tag & ~ IOPMU_QUEUE_REQUEST_RESULT_BIT;
+- req = hba->reqs[tag].req_virt;
+- if (likely(_tag & IOPMU_QUEUE_REQUEST_RESULT_BIT))
+- req->header.result = IOP_RESULT_SUCCESS;
+- } else {
+- tag = _tag;
+- req = hba->reqs[tag].req_virt;
+- }
+
+- dprintk("hptiop_host_request_callback: req=%p, type=%d, "
++ dprintk("hptiop_finish_scsi_req: req=%p, type=%d, "
+ "result=%d, context=0x%x tag=%d\n",
+ req, req->header.type, req->header.result,
+ req->header.context, tag);
+@@ -354,6 +550,8 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
+
+ switch (le32_to_cpu(req->header.result)) {
+ case IOP_RESULT_SUCCESS:
++ scsi_set_resid(scp,
++ scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
+ scp->result = (DID_OK<<16);
+ break;
+ case IOP_RESULT_BAD_TARGET:
+@@ -371,12 +569,12 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
+ case IOP_RESULT_INVALID_REQUEST:
+ scp->result = (DID_ABORT<<16);
+ break;
+- case IOP_RESULT_MODE_SENSE_CHECK_CONDITION:
++ case IOP_RESULT_CHECK_CONDITION:
++ scsi_set_resid(scp,
++ scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
+ scp->result = SAM_STAT_CHECK_CONDITION;
+- memset(&scp->sense_buffer,
+- 0, sizeof(scp->sense_buffer));
+ memcpy(&scp->sense_buffer, &req->sg_list,
+- min(sizeof(scp->sense_buffer),
++ min_t(size_t, SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(req->dataxfer_length)));
+ break;
+
+@@ -391,15 +589,33 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
+ free_req(hba, &hba->reqs[tag]);
+ }
+
+-void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag)
++static void hptiop_host_request_callback_itl(struct hptiop_hba *hba, u32 _tag)
++{
++ struct hpt_iop_request_scsi_command *req;
++ u32 tag;
+
-+ set_string(sysplex_name, buf);
++ if (hba->iopintf_v2) {
++ tag = _tag & ~IOPMU_QUEUE_REQUEST_RESULT_BIT;
++ req = hba->reqs[tag].req_virt;
++ if (likely(_tag & IOPMU_QUEUE_REQUEST_RESULT_BIT))
++ req->header.result = cpu_to_le32(IOP_RESULT_SUCCESS);
++ } else {
++ tag = _tag;
++ req = hba->reqs[tag].req_virt;
++ }
+
-+ return len;
++ hptiop_finish_scsi_req(hba, tag, req);
+}
+
-+static struct kobj_attribute sysplex_name_attr =
-+ __ATTR(sysplex_name, 0644, sysplex_name_show, sysplex_name_store);
-+
-+static ssize_t system_type_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *page)
++void hptiop_iop_request_callback_itl(struct hptiop_hba *hba, u32 tag)
+ {
+ struct hpt_iop_request_header __iomem *req;
+ struct hpt_iop_request_ioctl_command __iomem *p;
+ struct hpt_ioctl_k *arg;
+
+ req = (struct hpt_iop_request_header __iomem *)
+- ((unsigned long)hba->iop + tag);
+- dprintk("hptiop_iop_request_callback: req=%p, type=%d, "
++ ((unsigned long)hba->u.itl.iop + tag);
++ dprintk("hptiop_iop_request_callback_itl: req=%p, type=%d, "
+ "result=%d, context=0x%x tag=%d\n",
+ req, readl(&req->type), readl(&req->result),
+ readl(&req->context), tag);
+@@ -427,7 +643,7 @@ void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag)
+ arg->result = HPT_IOCTL_RESULT_FAILED;
+
+ arg->done(arg);
+- writel(tag, &hba->iop->outbound_queue);
++ writel(tag, &hba->u.itl.iop->outbound_queue);
+ }
+
+ static irqreturn_t hptiop_intr(int irq, void *dev_id)
+@@ -437,7 +653,7 @@ static irqreturn_t hptiop_intr(int irq, void *dev_id)
+ unsigned long flags;
+
+ spin_lock_irqsave(hba->host->host_lock, flags);
+- handled = __iop_intr(hba);
++ handled = hba->ops->iop_intr(hba);
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+
+ return handled;
+@@ -469,6 +685,57 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg)
+ return HPT_SCP(scp)->sgcnt;
+ }
+
++static void hptiop_post_req_itl(struct hptiop_hba *hba,
++ struct hptiop_request *_req)
+{
-+ return snprintf(page, PAGE_SIZE, "%s\n", system_type);
++ struct hpt_iop_request_header *reqhdr = _req->req_virt;
++
++ reqhdr->context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
++ (u32)_req->index);
++ reqhdr->context_hi32 = 0;
++
++ if (hba->iopintf_v2) {
++ u32 size, size_bits;
++
++ size = le32_to_cpu(reqhdr->size);
++ if (size < 256)
++ size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT;
++ else if (size < 512)
++ size_bits = IOPMU_QUEUE_ADDR_HOST_BIT;
++ else
++ size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT |
++ IOPMU_QUEUE_ADDR_HOST_BIT;
++ writel(_req->req_shifted_phy | size_bits,
++ &hba->u.itl.iop->inbound_queue);
++ } else
++ writel(_req->req_shifted_phy | IOPMU_QUEUE_ADDR_HOST_BIT,
++ &hba->u.itl.iop->inbound_queue);
+}
+
-+static ssize_t system_type_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf,
-+ size_t len)
++static void hptiop_post_req_mv(struct hptiop_hba *hba,
++ struct hptiop_request *_req)
+{
-+ int rc;
++ struct hpt_iop_request_header *reqhdr = _req->req_virt;
++ u32 size, size_bit;
+
-+ rc = check_string("system_type", buf);
-+ if (rc)
-+ return rc;
++ reqhdr->context = cpu_to_le32(_req->index<<8 |
++ IOP_REQUEST_TYPE_SCSI_COMMAND<<5);
++ reqhdr->context_hi32 = 0;
++ size = le32_to_cpu(reqhdr->size);
+
-+ set_string(system_type, buf);
++ if (size <= 256)
++ size_bit = 0;
++ else if (size <= 256*2)
++ size_bit = 1;
++ else if (size <= 256*3)
++ size_bit = 2;
++ else
++ size_bit = 3;
+
-+ return len;
++ mv_inbound_write((_req->req_shifted_phy << 5) |
++ MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bit, hba);
+}
+
-+static struct kobj_attribute system_type_attr =
-+ __ATTR(system_type, 0644, system_type_show, system_type_store);
+ static int hptiop_queuecommand(struct scsi_cmnd *scp,
+ void (*done)(struct scsi_cmnd *))
+ {
+@@ -518,9 +785,6 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
+ req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
+ req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_SCSI_COMMAND);
+ req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
+- req->header.context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
+- (u32)_req->index);
+- req->header.context_hi32 = 0;
+ req->dataxfer_length = cpu_to_le32(scsi_bufflen(scp));
+ req->channel = scp->device->channel;
+ req->target = scp->device->id;
+@@ -531,21 +795,7 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
+ + sg_count * sizeof(struct hpt_iopsg));
+
+ memcpy(req->cdb, scp->cmnd, sizeof(req->cdb));
+-
+- if (hba->iopintf_v2) {
+- u32 size_bits;
+- if (req->header.size < 256)
+- size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT;
+- else if (req->header.size < 512)
+- size_bits = IOPMU_QUEUE_ADDR_HOST_BIT;
+- else
+- size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT |
+- IOPMU_QUEUE_ADDR_HOST_BIT;
+- writel(_req->req_shifted_phy | size_bits, &hba->iop->inbound_queue);
+- } else
+- writel(_req->req_shifted_phy | IOPMU_QUEUE_ADDR_HOST_BIT,
+- &hba->iop->inbound_queue);
+-
++ hba->ops->post_req(hba, _req);
+ return 0;
+
+ cmd_done:
+@@ -563,9 +813,7 @@ static int hptiop_reset_hba(struct hptiop_hba *hba)
+ {
+ if (atomic_xchg(&hba->resetting, 1) == 0) {
+ atomic_inc(&hba->reset_count);
+- writel(IOPMU_INBOUND_MSG0_RESET,
+- &hba->iop->inbound_msgaddr0);
+- hptiop_pci_posting_flush(hba->iop);
++ hba->ops->post_msg(hba, IOPMU_INBOUND_MSG0_RESET);
+ }
+
+ wait_event_timeout(hba->reset_wq,
+@@ -601,8 +849,10 @@ static int hptiop_reset(struct scsi_cmnd *scp)
+ static int hptiop_adjust_disk_queue_depth(struct scsi_device *sdev,
+ int queue_depth)
+ {
+- if(queue_depth > 256)
+- queue_depth = 256;
++ struct hptiop_hba *hba = (struct hptiop_hba *)sdev->host->hostdata;
+
-+static ssize_t system_level_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *page)
++ if (queue_depth > hba->max_requests)
++ queue_depth = hba->max_requests;
+ scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, queue_depth);
+ return queue_depth;
+ }
+@@ -663,6 +913,26 @@ static struct scsi_host_template driver_template = {
+ .change_queue_depth = hptiop_adjust_disk_queue_depth,
+ };
+
++static int hptiop_internal_memalloc_mv(struct hptiop_hba *hba)
+{
-+ unsigned long long level = system_level;
-+
-+ return snprintf(page, PAGE_SIZE, "%#018llx\n", level);
++ hba->u.mv.internal_req = dma_alloc_coherent(&hba->pcidev->dev,
++ 0x800, &hba->u.mv.internal_req_phy, GFP_KERNEL);
++ if (hba->u.mv.internal_req)
++ return 0;
++ else
++ return -1;
+}
+
-+static ssize_t system_level_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf,
-+ size_t len)
++static int hptiop_internal_memfree_mv(struct hptiop_hba *hba)
+{
-+ unsigned long long level;
-+ char *endp;
-+
-+ level = simple_strtoull(buf, &endp, 16);
++ if (hba->u.mv.internal_req) {
++ dma_free_coherent(&hba->pcidev->dev, 0x800,
++ hba->u.mv.internal_req, hba->u.mv.internal_req_phy);
++ return 0;
++ } else
++ return -1;
++}
+
-+ if (endp == buf)
-+ return -EINVAL;
-+ if (*endp == '\n')
-+ endp++;
-+ if (*endp)
-+ return -EINVAL;
+ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+ const struct pci_device_id *id)
+ {
+@@ -708,6 +978,7 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+
+ hba = (struct hptiop_hba *)host->hostdata;
+
++ hba->ops = (struct hptiop_adapter_ops *)id->driver_data;
+ hba->pcidev = pcidev;
+ hba->host = host;
+ hba->initialized = 0;
+@@ -725,16 +996,24 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+ host->n_io_port = 0;
+ host->irq = pcidev->irq;
+
+- if (hptiop_map_pci_bar(hba))
++ if (hba->ops->map_pci_bar(hba))
+ goto free_scsi_host;
+
+- if (iop_wait_ready(hba->iop, 20000)) {
++ if (hba->ops->iop_wait_ready(hba, 20000)) {
+ printk(KERN_ERR "scsi%d: firmware not ready\n",
+ hba->host->host_no);
+ goto unmap_pci_bar;
+ }
+
+- if (iop_get_config(hba, &iop_config)) {
++ if (hba->ops->internal_memalloc) {
++ if (hba->ops->internal_memalloc(hba)) {
++ printk(KERN_ERR "scsi%d: internal_memalloc failed\n",
++ hba->host->host_no);
++ goto unmap_pci_bar;
++ }
++ }
+
-+ system_level = level;
++ if (hba->ops->get_config(hba, &iop_config)) {
+ printk(KERN_ERR "scsi%d: get config failed\n",
+ hba->host->host_no);
+ goto unmap_pci_bar;
+@@ -770,7 +1049,7 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+ set_config.vbus_id = cpu_to_le16(host->host_no);
+ set_config.max_host_request_size = cpu_to_le16(req_size);
+
+- if (iop_set_config(hba, &set_config)) {
++ if (hba->ops->set_config(hba, &set_config)) {
+ printk(KERN_ERR "scsi%d: set config failed\n",
+ hba->host->host_no);
+ goto unmap_pci_bar;
+@@ -839,21 +1118,24 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+
+ free_request_mem:
+ dma_free_coherent(&hba->pcidev->dev,
+- hba->req_size*hba->max_requests + 0x20,
++ hba->req_size * hba->max_requests + 0x20,
+ hba->dma_coherent, hba->dma_coherent_handle);
+
+ free_request_irq:
+ free_irq(hba->pcidev->irq, hba);
+
+ unmap_pci_bar:
+- iounmap(hba->iop);
++ if (hba->ops->internal_memfree)
++ hba->ops->internal_memfree(hba);
+
+-free_pci_regions:
+- pci_release_regions(pcidev) ;
++ hba->ops->unmap_pci_bar(hba);
+
+ free_scsi_host:
+ scsi_host_put(host);
+
++free_pci_regions:
++ pci_release_regions(pcidev);
+
-+ return len;
+ disable_pci_device:
+ pci_disable_device(pcidev);
+
+@@ -865,8 +1147,6 @@ static void hptiop_shutdown(struct pci_dev *pcidev)
+ {
+ struct Scsi_Host *host = pci_get_drvdata(pcidev);
+ struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata;
+- struct hpt_iopmu __iomem *iop = hba->iop;
+- u32 int_mask;
+
+ dprintk("hptiop_shutdown(%p)\n", hba);
+
+@@ -876,11 +1156,24 @@ static void hptiop_shutdown(struct pci_dev *pcidev)
+ hba->host->host_no);
+
+ /* disable all outbound interrupts */
+- int_mask = readl(&iop->outbound_intmask);
++ hba->ops->disable_intr(hba);
+}
+
-+static struct kobj_attribute system_level_attr =
-+ __ATTR(system_level, 0644, system_level_show, system_level_store);
-+
-+static ssize_t set_store(struct kobject *kobj,
-+ struct kobj_attribute *attr,
-+ const char *buf, size_t len)
-+{
-+ int rc;
-+
-+ rc = cpi_req();
-+ if (rc)
-+ return rc;
++static void hptiop_disable_intr_itl(struct hptiop_hba *hba)
++{
++ u32 int_mask;
+
-+ return len;
++ int_mask = readl(&hba->u.itl.iop->outbound_intmask);
+ writel(int_mask |
+ IOPMU_OUTBOUND_INT_MSG0 | IOPMU_OUTBOUND_INT_POSTQUEUE,
+- &iop->outbound_intmask);
+- hptiop_pci_posting_flush(iop);
++ &hba->u.itl.iop->outbound_intmask);
++ readl(&hba->u.itl.iop->outbound_intmask);
+}
+
-+static struct kobj_attribute set_attr = __ATTR(set, 0200, NULL, set_store);
++static void hptiop_disable_intr_mv(struct hptiop_hba *hba)
++{
++ writel(0, &hba->u.mv.regs->outbound_intmask);
++ readl(&hba->u.mv.regs->outbound_intmask);
+ }
+
+ static void hptiop_remove(struct pci_dev *pcidev)
+@@ -901,7 +1194,10 @@ static void hptiop_remove(struct pci_dev *pcidev)
+ hba->dma_coherent,
+ hba->dma_coherent_handle);
+
+- iounmap(hba->iop);
++ if (hba->ops->internal_memfree)
++ hba->ops->internal_memfree(hba);
+
-+static struct attribute *cpi_attrs[] = {
-+ &system_name_attr.attr,
-+ &sysplex_name_attr.attr,
-+ &system_type_attr.attr,
-+ &system_level_attr.attr,
-+ &set_attr.attr,
-+ NULL,
++ hba->ops->unmap_pci_bar(hba);
+
+ pci_release_regions(hba->pcidev);
+ pci_set_drvdata(hba->pcidev, NULL);
+@@ -910,11 +1206,50 @@ static void hptiop_remove(struct pci_dev *pcidev)
+ scsi_host_put(host);
+ }
+
++static struct hptiop_adapter_ops hptiop_itl_ops = {
++ .iop_wait_ready = iop_wait_ready_itl,
++ .internal_memalloc = 0,
++ .internal_memfree = 0,
++ .map_pci_bar = hptiop_map_pci_bar_itl,
++ .unmap_pci_bar = hptiop_unmap_pci_bar_itl,
++ .enable_intr = hptiop_enable_intr_itl,
++ .disable_intr = hptiop_disable_intr_itl,
++ .get_config = iop_get_config_itl,
++ .set_config = iop_set_config_itl,
++ .iop_intr = iop_intr_itl,
++ .post_msg = hptiop_post_msg_itl,
++ .post_req = hptiop_post_req_itl,
+};
+
-+static struct attribute_group cpi_attr_group = {
-+ .attrs = cpi_attrs,
++static struct hptiop_adapter_ops hptiop_mv_ops = {
++ .iop_wait_ready = iop_wait_ready_mv,
++ .internal_memalloc = hptiop_internal_memalloc_mv,
++ .internal_memfree = hptiop_internal_memfree_mv,
++ .map_pci_bar = hptiop_map_pci_bar_mv,
++ .unmap_pci_bar = hptiop_unmap_pci_bar_mv,
++ .enable_intr = hptiop_enable_intr_mv,
++ .disable_intr = hptiop_disable_intr_mv,
++ .get_config = iop_get_config_mv,
++ .set_config = iop_set_config_mv,
++ .iop_intr = iop_intr_mv,
++ .post_msg = hptiop_post_msg_mv,
++ .post_req = hptiop_post_req_mv,
+};
+
-+static struct kset *cpi_kset;
-+
-+int sclp_cpi_set_data(const char *system, const char *sysplex, const char *type,
-+ const u64 level)
-+{
-+ int rc;
-+
-+ rc = check_string("system_name", system);
-+ if (rc)
-+ return rc;
-+ rc = check_string("sysplex_name", sysplex);
-+ if (rc)
-+ return rc;
-+ rc = check_string("system_type", type);
-+ if (rc)
-+ return rc;
-+
-+ set_string(system_name, system);
-+ set_string(sysplex_name, sysplex);
-+ set_string(system_type, type);
-+ system_level = level;
-+
-+ return cpi_req();
-+}
-+EXPORT_SYMBOL(sclp_cpi_set_data);
-+
-+static int __init cpi_init(void)
-+{
-+ int rc;
-+
-+ cpi_kset = kset_create_and_add("cpi", NULL, firmware_kobj);
-+ if (!cpi_kset)
-+ return -ENOMEM;
+ static struct pci_device_id hptiop_id_table[] = {
+- { PCI_VDEVICE(TTI, 0x3220) },
+- { PCI_VDEVICE(TTI, 0x3320) },
+- { PCI_VDEVICE(TTI, 0x3520) },
+- { PCI_VDEVICE(TTI, 0x4320) },
++ { PCI_VDEVICE(TTI, 0x3220), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3320), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3520), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x4320), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3510), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3511), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3521), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3522), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3410), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3540), (kernel_ulong_t)&hptiop_itl_ops },
++ { PCI_VDEVICE(TTI, 0x3120), (kernel_ulong_t)&hptiop_mv_ops },
++ { PCI_VDEVICE(TTI, 0x3122), (kernel_ulong_t)&hptiop_mv_ops },
++ { PCI_VDEVICE(TTI, 0x3020), (kernel_ulong_t)&hptiop_mv_ops },
+ {},
+ };
+
+diff --git a/drivers/scsi/hptiop.h b/drivers/scsi/hptiop.h
+index 2a5e46e..a0289f2 100644
+--- a/drivers/scsi/hptiop.h
++++ b/drivers/scsi/hptiop.h
+@@ -1,5 +1,5 @@
+ /*
+- * HighPoint RR3xxx controller driver for Linux
++ * HighPoint RR3xxx/4xxx controller driver for Linux
+ * Copyright (C) 2006-2007 HighPoint Technologies, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+@@ -18,8 +18,7 @@
+ #ifndef _HPTIOP_H_
+ #define _HPTIOP_H_
+
+-struct hpt_iopmu
+-{
++struct hpt_iopmu_itl {
+ __le32 resrved0[4];
+ __le32 inbound_msgaddr0;
+ __le32 inbound_msgaddr1;
+@@ -54,6 +53,40 @@ struct hpt_iopmu
+ #define IOPMU_INBOUND_INT_ERROR 8
+ #define IOPMU_INBOUND_INT_POSTQUEUE 0x10
+
++#define MVIOP_QUEUE_LEN 512
+
-+ rc = sysfs_create_group(&cpi_kset->kobj, &cpi_attr_group);
-+ if (rc)
-+ kset_unregister(cpi_kset);
++struct hpt_iopmu_mv {
++ __le32 inbound_head;
++ __le32 inbound_tail;
++ __le32 outbound_head;
++ __le32 outbound_tail;
++ __le32 inbound_msg;
++ __le32 outbound_msg;
++ __le32 reserve[10];
++ __le64 inbound_q[MVIOP_QUEUE_LEN];
++ __le64 outbound_q[MVIOP_QUEUE_LEN];
++};
+
-+ return rc;
-+}
++struct hpt_iopmv_regs {
++ __le32 reserved[0x20400 / 4];
++ __le32 inbound_doorbell;
++ __le32 inbound_intmask;
++ __le32 outbound_doorbell;
++ __le32 outbound_intmask;
++};
+
-+__initcall(cpi_init);
-diff --git a/drivers/s390/char/sclp_cpi_sys.h b/drivers/s390/char/sclp_cpi_sys.h
-new file mode 100644
-index 0000000..deef3e6
---- /dev/null
-+++ b/drivers/s390/char/sclp_cpi_sys.h
-@@ -0,0 +1,15 @@
-+/*
-+ * drivers/s390/char/sclp_cpi_sys.h
-+ * SCLP control program identification sysfs interface
-+ *
-+ * Copyright IBM Corp. 2007
-+ * Author(s): Michael Ernst <mernst at de.ibm.com>
-+ */
++#define MVIOP_MU_QUEUE_ADDR_HOST_MASK (~(0x1full))
++#define MVIOP_MU_QUEUE_ADDR_HOST_BIT 4
+
-+#ifndef __SCLP_CPI_SYS_H__
-+#define __SCLP_CPI_SYS_H__
++#define MVIOP_MU_QUEUE_ADDR_IOP_HIGH32 0xffffffff
++#define MVIOP_MU_QUEUE_REQUEST_RESULT_BIT 1
++#define MVIOP_MU_QUEUE_REQUEST_RETURN_CONTEXT 2
+
-+int sclp_cpi_set_data(const char *system, const char *sysplex,
-+ const char *type, u64 level);
++#define MVIOP_MU_INBOUND_INT_MSG 1
++#define MVIOP_MU_INBOUND_INT_POSTQUEUE 2
++#define MVIOP_MU_OUTBOUND_INT_MSG 1
++#define MVIOP_MU_OUTBOUND_INT_POSTQUEUE 2
+
-+#endif /* __SCLP_CPI_SYS_H__ */
-diff --git a/drivers/s390/char/sclp_info.c b/drivers/s390/char/sclp_info.c
-deleted file mode 100644
-index a1136e0..0000000
---- a/drivers/s390/char/sclp_info.c
-+++ /dev/null
-@@ -1,116 +0,0 @@
--/*
-- * drivers/s390/char/sclp_info.c
-- *
-- * Copyright IBM Corp. 2007
-- * Author(s): Heiko Carstens <heiko.carstens at de.ibm.com>
-- */
--
--#include <linux/init.h>
--#include <linux/errno.h>
--#include <linux/string.h>
--#include <asm/sclp.h>
--#include "sclp.h"
--
--struct sclp_readinfo_sccb {
-- struct sccb_header header; /* 0-7 */
-- u16 rnmax; /* 8-9 */
-- u8 rnsize; /* 10 */
-- u8 _reserved0[24 - 11]; /* 11-23 */
-- u8 loadparm[8]; /* 24-31 */
-- u8 _reserved1[48 - 32]; /* 32-47 */
-- u64 facilities; /* 48-55 */
-- u8 _reserved2[91 - 56]; /* 56-90 */
-- u8 flags; /* 91 */
-- u8 _reserved3[100 - 92]; /* 92-99 */
-- u32 rnsize2; /* 100-103 */
-- u64 rnmax2; /* 104-111 */
-- u8 _reserved4[4096 - 112]; /* 112-4095 */
--} __attribute__((packed, aligned(4096)));
--
--static struct sclp_readinfo_sccb __initdata early_readinfo_sccb;
--static int __initdata early_readinfo_sccb_valid;
--
--u64 sclp_facilities;
--
--void __init sclp_readinfo_early(void)
+ enum hpt_iopmu_message {
+ /* host-to-iop messages */
+ IOPMU_INBOUND_MSG0_NOP = 0,
+@@ -72,8 +105,7 @@ enum hpt_iopmu_message {
+ IOPMU_OUTBOUND_MSG0_REVALIDATE_DEVICE_MAX = 0x3ff,
+ };
+
+-struct hpt_iop_request_header
-{
-- int ret;
-- int i;
-- struct sclp_readinfo_sccb *sccb;
-- sclp_cmdw_t commands[] = {SCLP_CMDW_READ_SCP_INFO_FORCED,
-- SCLP_CMDW_READ_SCP_INFO};
--
-- /* Enable service signal subclass mask. */
-- __ctl_set_bit(0, 9);
-- sccb = &early_readinfo_sccb;
-- for (i = 0; i < ARRAY_SIZE(commands); i++) {
-- do {
-- memset(sccb, 0, sizeof(*sccb));
-- sccb->header.length = sizeof(*sccb);
-- sccb->header.control_mask[2] = 0x80;
-- ret = sclp_service_call(commands[i], sccb);
-- } while (ret == -EBUSY);
--
-- if (ret)
-- break;
-- __load_psw_mask(PSW_BASE_BITS | PSW_MASK_EXT |
-- PSW_MASK_WAIT | PSW_DEFAULT_KEY);
-- local_irq_disable();
-- /*
-- * Contents of the sccb might have changed
-- * therefore a barrier is needed.
-- */
-- barrier();
-- if (sccb->header.response_code == 0x10) {
-- early_readinfo_sccb_valid = 1;
-- break;
-- }
-- if (sccb->header.response_code != 0x1f0)
-- break;
-- }
-- /* Disable service signal subclass mask again. */
-- __ctl_clear_bit(0, 9);
--}
--
--void __init sclp_facilities_detect(void)
++struct hpt_iop_request_header {
+ __le32 size;
+ __le32 type;
+ __le32 flags;
+@@ -104,11 +136,10 @@ enum hpt_iop_result_type {
+ IOP_RESULT_RESET,
+ IOP_RESULT_INVALID_REQUEST,
+ IOP_RESULT_BAD_TARGET,
+- IOP_RESULT_MODE_SENSE_CHECK_CONDITION,
++ IOP_RESULT_CHECK_CONDITION,
+ };
+
+-struct hpt_iop_request_get_config
-{
-- if (!early_readinfo_sccb_valid)
-- return;
-- sclp_facilities = early_readinfo_sccb.facilities;
--}
--
--unsigned long long __init sclp_memory_detect(void)
++struct hpt_iop_request_get_config {
+ struct hpt_iop_request_header header;
+ __le32 interface_version;
+ __le32 firmware_version;
+@@ -121,8 +152,7 @@ struct hpt_iop_request_get_config
+ __le32 sdram_size;
+ };
+
+-struct hpt_iop_request_set_config
-{
-- unsigned long long memsize;
-- struct sclp_readinfo_sccb *sccb;
--
-- if (!early_readinfo_sccb_valid)
-- return 0;
-- sccb = &early_readinfo_sccb;
-- if (sccb->rnsize)
-- memsize = sccb->rnsize << 20;
-- else
-- memsize = sccb->rnsize2 << 20;
-- if (sccb->rnmax)
-- memsize *= sccb->rnmax;
-- else
-- memsize *= sccb->rnmax2;
-- return memsize;
--}
--
--/*
-- * This function will be called after sclp_memory_detect(), which gets called
-- * early from early.c code. Therefore the sccb should have valid contents.
-- */
--void __init sclp_get_ipl_info(struct sclp_ipl_info *info)
++struct hpt_iop_request_set_config {
+ struct hpt_iop_request_header header;
+ __le32 iop_id;
+ __le16 vbus_id;
+@@ -130,15 +160,13 @@ struct hpt_iop_request_set_config
+ __le32 reserve[6];
+ };
+
+-struct hpt_iopsg
-{
-- struct sclp_readinfo_sccb *sccb;
--
-- if (!early_readinfo_sccb_valid)
-- return;
-- sccb = &early_readinfo_sccb;
-- info->is_valid = 1;
-- if (sccb->flags & 0x2)
-- info->has_dump = 1;
-- memcpy(&info->loadparm, &sccb->loadparm, LOADPARM_LEN);
--}
-diff --git a/drivers/s390/char/sclp_rw.c b/drivers/s390/char/sclp_rw.c
-index d6b06ab..ad7195d 100644
---- a/drivers/s390/char/sclp_rw.c
-+++ b/drivers/s390/char/sclp_rw.c
-@@ -76,7 +76,7 @@ sclp_make_buffer(void *page, unsigned short columns, unsigned short htab)
- }
++struct hpt_iopsg {
+ __le32 size;
+ __le32 eot; /* non-zero: end of table */
+ __le64 pci_address;
+ };
- /*
-- * Return a pointer to the orignal page that has been used to create
-+ * Return a pointer to the original page that has been used to create
- * the buffer.
- */
- void *
-diff --git a/drivers/s390/char/tape_3590.c b/drivers/s390/char/tape_3590.c
-index da25f8e..8246ef3 100644
---- a/drivers/s390/char/tape_3590.c
-+++ b/drivers/s390/char/tape_3590.c
-@@ -1495,7 +1495,7 @@ tape_3590_unit_check(struct tape_device *device, struct tape_request *request,
- device->cdev->dev.bus_id);
- return tape_3590_erp_basic(device, request, irb, -EPERM);
- case 0x8013:
-- PRINT_WARN("(%s): Another host has priviliged access to the "
-+ PRINT_WARN("(%s): Another host has privileged access to the "
- "tape device\n", device->cdev->dev.bus_id);
- PRINT_WARN("(%s): To solve the problem unload the current "
- "cartridge!\n", device->cdev->dev.bus_id);
-diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
-index eeb92e2..ddc4a11 100644
---- a/drivers/s390/char/tape_block.c
-+++ b/drivers/s390/char/tape_block.c
-@@ -74,11 +74,10 @@ tapeblock_trigger_requeue(struct tape_device *device)
- * Post finished request.
- */
- static void
--tapeblock_end_request(struct request *req, int uptodate)
-+tapeblock_end_request(struct request *req, int error)
- {
-- if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
-+ if (__blk_end_request(req, error, blk_rq_bytes(req)))
- BUG();
-- end_that_request_last(req, uptodate);
- }
+-struct hpt_iop_request_block_command
+-{
++struct hpt_iop_request_block_command {
+ struct hpt_iop_request_header header;
+ u8 channel;
+ u8 target;
+@@ -156,8 +184,7 @@ struct hpt_iop_request_block_command
+ #define IOP_BLOCK_COMMAND_FLUSH 4
+ #define IOP_BLOCK_COMMAND_SHUTDOWN 5
- static void
-@@ -91,7 +90,7 @@ __tapeblock_end_request(struct tape_request *ccw_req, void *data)
+-struct hpt_iop_request_scsi_command
+-{
++struct hpt_iop_request_scsi_command {
+ struct hpt_iop_request_header header;
+ u8 channel;
+ u8 target;
+@@ -168,8 +195,7 @@ struct hpt_iop_request_scsi_command
+ struct hpt_iopsg sg_list[1];
+ };
- device = ccw_req->device;
- req = (struct request *) data;
-- tapeblock_end_request(req, ccw_req->rc == 0);
-+ tapeblock_end_request(req, (ccw_req->rc == 0) ? 0 : -EIO);
- if (ccw_req->rc == 0)
- /* Update position. */
- device->blk_data.block_position =
-@@ -119,7 +118,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
- ccw_req = device->discipline->bread(device, req);
- if (IS_ERR(ccw_req)) {
- DBF_EVENT(1, "TBLOCK: bread failed\n");
-- tapeblock_end_request(req, 0);
-+ tapeblock_end_request(req, -EIO);
- return PTR_ERR(ccw_req);
- }
- ccw_req->callback = __tapeblock_end_request;
-@@ -132,7 +131,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
- * Start/enqueueing failed. No retries in
- * this case.
- */
-- tapeblock_end_request(req, 0);
-+ tapeblock_end_request(req, -EIO);
- device->discipline->free_bread(ccw_req);
- }
+-struct hpt_iop_request_ioctl_command
+-{
++struct hpt_iop_request_ioctl_command {
+ struct hpt_iop_request_header header;
+ __le32 ioctl_code;
+ __le32 inbuf_size;
+@@ -182,11 +208,11 @@ struct hpt_iop_request_ioctl_command
+ #define HPTIOP_MAX_REQUESTS 256u
-@@ -177,7 +176,7 @@ tapeblock_requeue(struct work_struct *work) {
- if (rq_data_dir(req) == WRITE) {
- DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
- blkdev_dequeue_request(req);
-- tapeblock_end_request(req, 0);
-+ tapeblock_end_request(req, -EIO);
- continue;
- }
- spin_unlock_irq(&device->blk_data.request_queue_lock);
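The tape_block.c hunks above convert request completion from the two-step `end_that_request_first()`/`end_that_request_last()` pair to a single `__blk_end_request()` call, and change the second argument from an "uptodate" flag to a negative errno (`-EIO` on failure). A minimal user-space model of the new call's semantics, with hypothetical names (this is not the kernel code, only a sketch of the return-value contract the `BUG()` check relies on):

```c
#include <assert.h>

/* Hypothetical model of __blk_end_request(): takes a negative errno and
 * a byte count, returns nonzero when the request is NOT fully completed.
 * Field and function names are illustrative only. */
struct request {
    unsigned int bytes_left; /* bytes not yet completed */
    int error;               /* 0 on success, negative errno on failure */
};

static int model_blk_end_request(struct request *req, int error,
                                 unsigned int nr_bytes)
{
    req->error = error;
    if (nr_bytes >= req->bytes_left) {
        req->bytes_left = 0;
        return 0; /* request fully completed */
    }
    req->bytes_left -= nr_bytes;
    return 1; /* partial completion: caller must finish the rest */
}
```

In the patch, `tapeblock_end_request()` always completes the whole request at once (`blk_rq_bytes(req)`), so a nonzero return would indicate a logic error, hence the `BUG()`.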
-diff --git a/drivers/s390/char/tape_core.c b/drivers/s390/char/tape_core.c
-index 2fae633..7ad8cf1 100644
---- a/drivers/s390/char/tape_core.c
-+++ b/drivers/s390/char/tape_core.c
-@@ -37,7 +37,7 @@ static void tape_long_busy_timeout(unsigned long data);
- * we can assign the devices to minor numbers of the same major
- * The list is protected by the rwlock
- */
--static struct list_head tape_device_list = LIST_HEAD_INIT(tape_device_list);
-+static LIST_HEAD(tape_device_list);
- static DEFINE_RWLOCK(tape_device_lock);
+ struct hptiop_request {
+- struct hptiop_request * next;
+- void * req_virt;
+- u32 req_shifted_phy;
+- struct scsi_cmnd * scp;
+- int index;
++ struct hptiop_request *next;
++ void *req_virt;
++ u32 req_shifted_phy;
++ struct scsi_cmnd *scp;
++ int index;
+ };
- /*
-diff --git a/drivers/s390/char/tape_proc.c b/drivers/s390/char/tape_proc.c
-index cea49f0..c9b96d5 100644
---- a/drivers/s390/char/tape_proc.c
-+++ b/drivers/s390/char/tape_proc.c
-@@ -97,7 +97,7 @@ static void tape_proc_stop(struct seq_file *m, void *v)
- {
- }
+ struct hpt_scsi_pointer {
+@@ -198,9 +224,21 @@ struct hpt_scsi_pointer {
+ #define HPT_SCP(scp) ((struct hpt_scsi_pointer *)&(scp)->SCp)
--static struct seq_operations tape_proc_seq = {
-+static const struct seq_operations tape_proc_seq = {
- .start = tape_proc_start,
- .next = tape_proc_next,
- .stop = tape_proc_stop,
-diff --git a/drivers/s390/char/vmlogrdr.c b/drivers/s390/char/vmlogrdr.c
-index e0c4c50..d364e0b 100644
---- a/drivers/s390/char/vmlogrdr.c
-+++ b/drivers/s390/char/vmlogrdr.c
-@@ -683,7 +683,7 @@ static int vmlogrdr_register_driver(void)
- /* Register with iucv driver */
- ret = iucv_register(&vmlogrdr_iucv_handler, 1);
- if (ret) {
-- printk (KERN_ERR "vmlogrdr: failed to register with"
-+ printk (KERN_ERR "vmlogrdr: failed to register with "
- "iucv driver\n");
- goto out;
- }
-diff --git a/drivers/s390/char/vmur.c b/drivers/s390/char/vmur.c
-index d70a6e6..7689b50 100644
---- a/drivers/s390/char/vmur.c
-+++ b/drivers/s390/char/vmur.c
-@@ -759,7 +759,7 @@ static loff_t ur_llseek(struct file *file, loff_t offset, int whence)
- return newpos;
- }
+ struct hptiop_hba {
+- struct hpt_iopmu __iomem * iop;
+- struct Scsi_Host * host;
+- struct pci_dev * pcidev;
++ struct hptiop_adapter_ops *ops;
++ union {
++ struct {
++ struct hpt_iopmu_itl __iomem *iop;
++ } itl;
++ struct {
++ struct hpt_iopmv_regs *regs;
++ struct hpt_iopmu_mv __iomem *mu;
++ void *internal_req;
++ dma_addr_t internal_req_phy;
++ } mv;
++ } u;
++
++ struct Scsi_Host *host;
++ struct pci_dev *pcidev;
--static struct file_operations ur_fops = {
-+static const struct file_operations ur_fops = {
- .owner = THIS_MODULE,
- .open = ur_open,
- .release = ur_release,
-diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c
-index 7073daf..f523501 100644
---- a/drivers/s390/char/zcore.c
-+++ b/drivers/s390/char/zcore.c
-@@ -470,7 +470,7 @@ static loff_t zcore_lseek(struct file *file, loff_t offset, int orig)
- return rc;
- }
+ /* IOP config info */
+ u32 interface_version;
+@@ -213,15 +251,15 @@ struct hptiop_hba {
--static struct file_operations zcore_fops = {
-+static const struct file_operations zcore_fops = {
- .owner = THIS_MODULE,
- .llseek = zcore_lseek,
- .read = zcore_read,
-diff --git a/drivers/s390/cio/airq.c b/drivers/s390/cio/airq.c
-index 5287631..b7a07a8 100644
---- a/drivers/s390/cio/airq.c
-+++ b/drivers/s390/cio/airq.c
-@@ -1,12 +1,12 @@
- /*
- * drivers/s390/cio/airq.c
-- * S/390 common I/O routines -- support for adapter interruptions
-+ * Support for adapter interruptions
- *
-- * Copyright (C) 1999-2002 IBM Deutschland Entwicklung GmbH,
-- * IBM Corporation
-- * Author(s): Ingo Adlung (adlung at de.ibm.com)
-- * Cornelia Huck (cornelia.huck at de.ibm.com)
-- * Arnd Bergmann (arndb at de.ibm.com)
-+ * Copyright IBM Corp. 1999,2007
-+ * Author(s): Ingo Adlung <adlung at de.ibm.com>
-+ * Cornelia Huck <cornelia.huck at de.ibm.com>
-+ * Arnd Bergmann <arndb at de.ibm.com>
-+ * Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
- */
+ u32 req_size; /* host-allocated request buffer size */
- #include <linux/init.h>
-@@ -14,72 +14,131 @@
- #include <linux/slab.h>
- #include <linux/rcupdate.h>
+- int iopintf_v2: 1;
+- int initialized: 1;
+- int msg_done: 1;
++ u32 iopintf_v2: 1;
++ u32 initialized: 1;
++ u32 msg_done: 1;
-+#include <asm/airq.h>
-+
-+#include "cio.h"
- #include "cio_debug.h"
--#include "airq.h"
+ struct hptiop_request * req_list;
+ struct hptiop_request reqs[HPTIOP_MAX_REQUESTS];
--static adapter_int_handler_t adapter_handler;
-+#define NR_AIRQS 32
-+#define NR_AIRQS_PER_WORD sizeof(unsigned long)
-+#define NR_AIRQ_WORDS (NR_AIRQS / NR_AIRQS_PER_WORD)
+ /* used to free allocated dma area */
+- void * dma_coherent;
++ void *dma_coherent;
+ dma_addr_t dma_coherent_handle;
--/*
-- * register for adapter interrupts
-- *
-- * With HiperSockets the zSeries architecture provides for
-- * means of adapter interrups, pseudo I/O interrupts that are
-- * not tied to an I/O subchannel, but to an adapter. However,
-- * it doesn't disclose the info how to enable/disable them, but
-- * to recognize them only. Perhaps we should consider them
-- * being shared interrupts, and thus build a linked list
-- * of adapter handlers ... to be evaluated ...
-- */
--int
--s390_register_adapter_interrupt (adapter_int_handler_t handler)
+ atomic_t reset_count;
+@@ -231,19 +269,35 @@ struct hptiop_hba {
+ wait_queue_head_t ioctl_wq;
+ };
+
+-struct hpt_ioctl_k
-{
-- int ret;
-- char dbf_txt[15];
-+union indicator_t {
-+ unsigned long word[NR_AIRQ_WORDS];
-+ unsigned char byte[NR_AIRQS];
-+} __attribute__((packed));
++struct hpt_ioctl_k {
+ struct hptiop_hba * hba;
+ u32 ioctl_code;
+ u32 inbuf_size;
+ u32 outbuf_size;
+- void * inbuf;
+- void * outbuf;
+- u32 * bytes_returned;
++ void *inbuf;
++ void *outbuf;
++ u32 *bytes_returned;
+ void (*done)(struct hpt_ioctl_k *);
+ int result; /* HPT_IOCTL_RESULT_ */
+ };
-- CIO_TRACE_EVENT (4, "rgaint");
-+struct airq_t {
-+ adapter_int_handler_t handler;
-+ void *drv_data;
++struct hptiop_adapter_ops {
++ int (*iop_wait_ready)(struct hptiop_hba *hba, u32 millisec);
++ int (*internal_memalloc)(struct hptiop_hba *hba);
++ int (*internal_memfree)(struct hptiop_hba *hba);
++ int (*map_pci_bar)(struct hptiop_hba *hba);
++ void (*unmap_pci_bar)(struct hptiop_hba *hba);
++ void (*enable_intr)(struct hptiop_hba *hba);
++ void (*disable_intr)(struct hptiop_hba *hba);
++ int (*get_config)(struct hptiop_hba *hba,
++ struct hpt_iop_request_get_config *config);
++ int (*set_config)(struct hptiop_hba *hba,
++ struct hpt_iop_request_set_config *config);
++ int (*iop_intr)(struct hptiop_hba *hba);
++ void (*post_msg)(struct hptiop_hba *hba, u32 msg);
++ void (*post_req)(struct hptiop_hba *hba, struct hptiop_request *_req);
+};
++
+ #define HPT_IOCTL_RESULT_OK 0
+ #define HPT_IOCTL_RESULT_FAILED (-1)
-- if (handler == NULL)
-- ret = -EINVAL;
-- else
-- ret = (cmpxchg(&adapter_handler, NULL, handler) ? -EBUSY : 0);
-- if (!ret)
-- synchronize_sched(); /* Allow interrupts to complete. */
-+static union indicator_t indicators;
-+static struct airq_t *airqs[NR_AIRQS];
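The hptiop.h hunks above introduce a `struct hptiop_adapter_ops` function-pointer table so the ITL and MV controller families can share one driver core, with per-family state held in a union inside `struct hptiop_hba`. A compact sketch of this ops-table pattern, using hypothetical names rather than the driver's real symbols:

```c
#include <assert.h>

/* Sketch of the per-family ops table: the generic core dispatches
 * through function pointers instead of branching on controller type.
 * All names here are illustrative. */
struct hba;

struct adapter_ops {
    int (*get_config)(struct hba *hba);
};

struct hba {
    const struct adapter_ops *ops;
    int config; /* stands in for the real get_config result */
};

static int itl_get_config(struct hba *hba) { hba->config = 1; return 0; }
static int mv_get_config(struct hba *hba)  { hba->config = 2; return 0; }

static const struct adapter_ops itl_ops = { .get_config = itl_get_config };
static const struct adapter_ops mv_ops  = { .get_config = mv_get_config };

/* Generic core path: identical for every family. */
static int hba_probe(struct hba *hba)
{
    return hba->ops->get_config(hba);
}
```

The probe path picks the right `adapter_ops` once (e.g. from PCI ID data) and the rest of the driver never needs to know which family it is talking to.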
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 5f2396c..3081901 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -629,6 +629,16 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
+ list_del(&evt_struct->list);
+ del_timer(&evt_struct->timer);
-- sprintf (dbf_txt, "ret:%d", ret);
-- CIO_TRACE_EVENT (4, dbf_txt);
-+static int register_airq(struct airq_t *airq)
-+{
-+ int i;
++ /* If send_crq returns H_CLOSED, return SCSI_MLQUEUE_HOST_BUSY.
++ * Firmware will send a CRQ with a transport event (0xFF) to
++ * tell this client what has happened to the transport. This
++ * will be handled in ibmvscsi_handle_crq()
++ */
++ if (rc == H_CLOSED) {
++ dev_warn(hostdata->dev, "send warning. "
++ "Receive queue closed, will retry.\n");
++ goto send_busy;
++ }
+ dev_err(hostdata->dev, "send error %d\n", rc);
+ atomic_inc(&hostdata->request_limit);
+ goto send_error;
+@@ -976,58 +986,74 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
+ int rsp_rc;
+ unsigned long flags;
+ u16 lun = lun_from_dev(cmd->device);
++ unsigned long wait_switch = 0;
-- return ret;
-+ for (i = 0; i < NR_AIRQS; i++)
-+ if (!cmpxchg(&airqs[i], NULL, airq))
-+ return i;
-+ return -ENOMEM;
- }
+ /* First, find this command in our sent list so we can figure
+ * out the correct tag
+ */
+ spin_lock_irqsave(hostdata->host->host_lock, flags);
+- found_evt = NULL;
+- list_for_each_entry(tmp_evt, &hostdata->sent, list) {
+- if (tmp_evt->cmnd == cmd) {
+- found_evt = tmp_evt;
+- break;
++ wait_switch = jiffies + (init_timeout * HZ);
++ do {
++ found_evt = NULL;
++ list_for_each_entry(tmp_evt, &hostdata->sent, list) {
++ if (tmp_evt->cmnd == cmd) {
++ found_evt = tmp_evt;
++ break;
++ }
+ }
+- }
--int
--s390_unregister_adapter_interrupt (adapter_int_handler_t handler)
-+/**
-+ * s390_register_adapter_interrupt() - register adapter interrupt handler
-+ * @handler: adapter handler to be registered
-+ * @drv_data: driver data passed with each call to the handler
-+ *
-+ * Returns:
-+ * Pointer to the indicator to be used on success
-+ * ERR_PTR() if registration failed
-+ */
-+void *s390_register_adapter_interrupt(adapter_int_handler_t handler,
-+ void *drv_data)
- {
-+ struct airq_t *airq;
-+ char dbf_txt[16];
- int ret;
-- char dbf_txt[15];
+- if (!found_evt) {
+- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+- return SUCCESS;
+- }
++ if (!found_evt) {
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++ return SUCCESS;
++ }
-- CIO_TRACE_EVENT (4, "urgaint");
+- evt = get_event_struct(&hostdata->pool);
+- if (evt == NULL) {
+- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+- sdev_printk(KERN_ERR, cmd->device, "failed to allocate abort event\n");
+- return FAILED;
+- }
++ evt = get_event_struct(&hostdata->pool);
++ if (evt == NULL) {
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++ sdev_printk(KERN_ERR, cmd->device,
++ "failed to allocate abort event\n");
++ return FAILED;
++ }
+
+- init_event_struct(evt,
+- sync_completion,
+- VIOSRP_SRP_FORMAT,
+- init_timeout);
++ init_event_struct(evt,
++ sync_completion,
++ VIOSRP_SRP_FORMAT,
++ init_timeout);
+
+- tsk_mgmt = &evt->iu.srp.tsk_mgmt;
++ tsk_mgmt = &evt->iu.srp.tsk_mgmt;
+
+- /* Set up an abort SRP command */
+- memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
+- tsk_mgmt->opcode = SRP_TSK_MGMT;
+- tsk_mgmt->lun = ((u64) lun) << 48;
+- tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
+- tsk_mgmt->task_tag = (u64) found_evt;
-
-- if (handler == NULL)
-- ret = -EINVAL;
-- else {
-- adapter_handler = NULL;
-- synchronize_sched(); /* Allow interrupts to complete. */
-- ret = 0;
-+ airq = kmalloc(sizeof(struct airq_t), GFP_KERNEL);
-+ if (!airq) {
-+ ret = -ENOMEM;
-+ goto out;
- }
-- sprintf (dbf_txt, "ret:%d", ret);
-- CIO_TRACE_EVENT (4, dbf_txt);
+- sdev_printk(KERN_INFO, cmd->device, "aborting command. lun 0x%lx, tag 0x%lx\n",
+- tsk_mgmt->lun, tsk_mgmt->task_tag);
-
-- return ret;
-+ airq->handler = handler;
-+ airq->drv_data = drv_data;
-+ ret = register_airq(airq);
-+ if (ret < 0)
-+ kfree(airq);
-+out:
-+ snprintf(dbf_txt, sizeof(dbf_txt), "rairq:%d", ret);
-+ CIO_TRACE_EVENT(4, dbf_txt);
-+ if (ret < 0)
-+ return ERR_PTR(ret);
-+ else
-+ return &indicators.byte[ret];
- }
-+EXPORT_SYMBOL(s390_register_adapter_interrupt);
-
--void
--do_adapter_IO (void)
-+/**
-+ * s390_unregister_adapter_interrupt - unregister adapter interrupt handler
-+ * @ind: indicator for which the handler is to be unregistered
-+ */
-+void s390_unregister_adapter_interrupt(void *ind)
- {
-- CIO_TRACE_EVENT (6, "doaio");
-+ struct airq_t *airq;
-+ char dbf_txt[16];
-+ int i;
-
-- if (adapter_handler)
-- (*adapter_handler) ();
-+ i = (int) ((addr_t) ind) - ((addr_t) &indicators.byte[0]);
-+ snprintf(dbf_txt, sizeof(dbf_txt), "urairq:%d", i);
-+ CIO_TRACE_EVENT(4, dbf_txt);
-+ indicators.byte[i] = 0;
-+ airq = xchg(&airqs[i], NULL);
-+ /*
-+ * Allow interrupts to complete. This will ensure that the airq handle
-+ * is no longer referenced by any interrupt handler.
-+ */
-+ synchronize_sched();
-+ kfree(airq);
- }
-+EXPORT_SYMBOL(s390_unregister_adapter_interrupt);
+- evt->sync_srp = &srp_rsp;
+- init_completion(&evt->comp);
+- rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
++ /* Set up an abort SRP command */
++ memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
++ tsk_mgmt->opcode = SRP_TSK_MGMT;
++ tsk_mgmt->lun = ((u64) lun) << 48;
++ tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
++ tsk_mgmt->task_tag = (u64) found_evt;
+
-+#define INDICATOR_MASK (0xffUL << ((NR_AIRQS_PER_WORD - 1) * 8))
++ evt->sync_srp = &srp_rsp;
++
++ init_completion(&evt->comp);
++ rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
++
++ if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
++ break;
++
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++ msleep(10);
++ spin_lock_irqsave(hostdata->host->host_lock, flags);
++ } while (time_before(jiffies, wait_switch));
++
+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++
+ if (rsp_rc != 0) {
+ sdev_printk(KERN_ERR, cmd->device,
+ "failed to send abort() event. rc=%d\n", rsp_rc);
+ return FAILED;
+ }
--EXPORT_SYMBOL (s390_register_adapter_interrupt);
--EXPORT_SYMBOL (s390_unregister_adapter_interrupt);
-+void do_adapter_IO(void)
-+{
-+ int w;
-+ int i;
-+ unsigned long word;
-+ struct airq_t *airq;
++ sdev_printk(KERN_INFO, cmd->device,
++ "aborting command. lun 0x%lx, tag 0x%lx\n",
++ (((u64) lun) << 48), (u64) found_evt);
+
-+ /*
-+ * Access indicator array in word-sized chunks to minimize storage
-+ * fetch operations.
-+ */
-+ for (w = 0; w < NR_AIRQ_WORDS; w++) {
-+ word = indicators.word[w];
-+ i = w * NR_AIRQS_PER_WORD;
-+ /*
-+ * Check bytes within word for active indicators.
-+ */
-+ while (word) {
-+ if (word & INDICATOR_MASK) {
-+ airq = airqs[i];
-+ if (likely(airq))
-+ airq->handler(&indicators.byte[i],
-+ airq->drv_data);
-+ else
-+ /*
-+ * Reset ill-behaved indicator.
-+ */
-+ indicators.byte[i] = 0;
-+ }
-+ word <<= 8;
-+ i++;
-+ }
-+ }
-+}
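The `do_adapter_IO()` body above scans the indicator array a machine word at a time, testing the top byte with `INDICATOR_MASK` and shifting left, to minimize storage fetches. A user-space model of that scan loop (sizes and helper name are illustrative; the real code calls the registered handler instead of recording an index):

```c
#include <assert.h>

/* Model of the word-wise indicator scan: check the top byte of each
 * word, shift left by 8, repeat while any bits remain set. */
#define NR_AIRQS_PER_WORD sizeof(unsigned long)
#define INDICATOR_MASK (0xffUL << ((NR_AIRQS_PER_WORD - 1) * 8))

/* Records the byte index of every nonzero indicator byte; returns the
 * number of hits found. */
static int scan_indicators(const unsigned long *words, int nwords,
                           int *hits, int max_hits)
{
    int n = 0;
    for (int w = 0; w < nwords; w++) {
        unsigned long word = words[w];
        int i = w * (int)NR_AIRQS_PER_WORD;
        while (word) {
            if ((word & INDICATOR_MASK) && n < max_hits)
                hits[n++] = i;
            word <<= 8;
            i++;
        }
    }
    return n;
}
```

The `while (word)` condition lets the loop exit early once all remaining bytes in the word are zero, which is the point of fetching whole words rather than individual bytes.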
-diff --git a/drivers/s390/cio/airq.h b/drivers/s390/cio/airq.h
-deleted file mode 100644
-index 7d6be3f..0000000
---- a/drivers/s390/cio/airq.h
-+++ /dev/null
-@@ -1,10 +0,0 @@
--#ifndef S390_AINTERRUPT_H
--#define S390_AINTERRUPT_H
--
--typedef int (*adapter_int_handler_t)(void);
--
--extern int s390_register_adapter_interrupt(adapter_int_handler_t handler);
--extern int s390_unregister_adapter_interrupt(adapter_int_handler_t handler);
--extern void do_adapter_IO (void);
--
--#endif
-diff --git a/drivers/s390/cio/blacklist.c b/drivers/s390/cio/blacklist.c
-index bd5f16f..e8597ec 100644
---- a/drivers/s390/cio/blacklist.c
-+++ b/drivers/s390/cio/blacklist.c
-@@ -348,7 +348,7 @@ cio_ignore_write(struct file *file, const char __user *user_buf,
- return user_len;
- }
+ wait_for_completion(&evt->comp);
--static struct seq_operations cio_ignore_proc_seq_ops = {
-+static const struct seq_operations cio_ignore_proc_seq_ops = {
- .start = cio_ignore_proc_seq_start,
- .stop = cio_ignore_proc_seq_stop,
- .next = cio_ignore_proc_seq_next,
-diff --git a/drivers/s390/cio/ccwgroup.c b/drivers/s390/cio/ccwgroup.c
-index 5baa517..3964056 100644
---- a/drivers/s390/cio/ccwgroup.c
-+++ b/drivers/s390/cio/ccwgroup.c
-@@ -35,8 +35,8 @@ ccwgroup_bus_match (struct device * dev, struct device_driver * drv)
- struct ccwgroup_device *gdev;
- struct ccwgroup_driver *gdrv;
+ /* make sure we got a good response */
+@@ -1099,41 +1125,56 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
+ int rsp_rc;
+ unsigned long flags;
+ u16 lun = lun_from_dev(cmd->device);
++ unsigned long wait_switch = 0;
-- gdev = container_of(dev, struct ccwgroup_device, dev);
-- gdrv = container_of(drv, struct ccwgroup_driver, driver);
-+ gdev = to_ccwgroupdev(dev);
-+ gdrv = to_ccwgroupdrv(drv);
+ spin_lock_irqsave(hostdata->host->host_lock, flags);
+- evt = get_event_struct(&hostdata->pool);
+- if (evt == NULL) {
+- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+- sdev_printk(KERN_ERR, cmd->device, "failed to allocate reset event\n");
+- return FAILED;
+- }
++ wait_switch = jiffies + (init_timeout * HZ);
++ do {
++ evt = get_event_struct(&hostdata->pool);
++ if (evt == NULL) {
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++ sdev_printk(KERN_ERR, cmd->device,
++ "failed to allocate reset event\n");
++ return FAILED;
++ }
+
+- init_event_struct(evt,
+- sync_completion,
+- VIOSRP_SRP_FORMAT,
+- init_timeout);
++ init_event_struct(evt,
++ sync_completion,
++ VIOSRP_SRP_FORMAT,
++ init_timeout);
- if (gdev->creator_id == gdrv->driver_id)
- return 1;
-@@ -75,8 +75,10 @@ static void ccwgroup_ungroup_callback(struct device *dev)
- struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
+- tsk_mgmt = &evt->iu.srp.tsk_mgmt;
++ tsk_mgmt = &evt->iu.srp.tsk_mgmt;
- mutex_lock(&gdev->reg_mutex);
-- __ccwgroup_remove_symlinks(gdev);
-- device_unregister(dev);
-+ if (device_is_registered(&gdev->dev)) {
-+ __ccwgroup_remove_symlinks(gdev);
-+ device_unregister(dev);
-+ }
- mutex_unlock(&gdev->reg_mutex);
- }
+- /* Set up a lun reset SRP command */
+- memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
+- tsk_mgmt->opcode = SRP_TSK_MGMT;
+- tsk_mgmt->lun = ((u64) lun) << 48;
+- tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
++ /* Set up a lun reset SRP command */
++ memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
++ tsk_mgmt->opcode = SRP_TSK_MGMT;
++ tsk_mgmt->lun = ((u64) lun) << 48;
++ tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
-@@ -111,7 +113,7 @@ ccwgroup_release (struct device *dev)
- gdev = to_ccwgroupdev(dev);
+- sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
+- tsk_mgmt->lun);
++ evt->sync_srp = &srp_rsp;
++
++ init_completion(&evt->comp);
++ rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
++
++ if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
++ break;
++
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++ msleep(10);
++ spin_lock_irqsave(hostdata->host->host_lock, flags);
++ } while (time_before(jiffies, wait_switch));
- for (i = 0; i < gdev->count; i++) {
-- gdev->cdev[i]->dev.driver_data = NULL;
-+ dev_set_drvdata(&gdev->cdev[i]->dev, NULL);
- put_device(&gdev->cdev[i]->dev);
- }
- kfree(gdev);
-@@ -196,11 +198,11 @@ int ccwgroup_create(struct device *root, unsigned int creator_id,
- goto error;
- }
- /* Don't allow a device to belong to more than one group. */
-- if (gdev->cdev[i]->dev.driver_data) {
-+ if (dev_get_drvdata(&gdev->cdev[i]->dev)) {
- rc = -EINVAL;
- goto error;
- }
-- gdev->cdev[i]->dev.driver_data = gdev;
-+ dev_set_drvdata(&gdev->cdev[i]->dev, gdev);
+- evt->sync_srp = &srp_rsp;
+- init_completion(&evt->comp);
+- rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++
+ if (rsp_rc != 0) {
+ sdev_printk(KERN_ERR, cmd->device,
+ "failed to send reset event. rc=%d\n", rsp_rc);
+ return FAILED;
}
- gdev->creator_id = creator_id;
-@@ -234,8 +236,8 @@ int ccwgroup_create(struct device *root, unsigned int creator_id,
- error:
- for (i = 0; i < argc; i++)
- if (gdev->cdev[i]) {
-- if (gdev->cdev[i]->dev.driver_data == gdev)
-- gdev->cdev[i]->dev.driver_data = NULL;
-+ if (dev_get_drvdata(&gdev->cdev[i]->dev) == gdev)
-+ dev_set_drvdata(&gdev->cdev[i]->dev, NULL);
- put_device(&gdev->cdev[i]->dev);
- }
- mutex_unlock(&gdev->reg_mutex);
-@@ -408,6 +410,7 @@ int ccwgroup_driver_register(struct ccwgroup_driver *cdriver)
- /* register our new driver with the core */
- cdriver->driver.bus = &ccwgroup_bus_type;
- cdriver->driver.name = cdriver->name;
-+ cdriver->driver.owner = cdriver->owner;
++ sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
++ (((u64) lun) << 48));
++
+ wait_for_completion(&evt->comp);
- return driver_register(&cdriver->driver);
- }
-@@ -463,8 +466,8 @@ __ccwgroup_get_gdev_by_cdev(struct ccw_device *cdev)
- {
- struct ccwgroup_device *gdev;
+ /* make sure we got a good response */
+@@ -1386,8 +1427,10 @@ static int ibmvscsi_slave_configure(struct scsi_device *sdev)
+ unsigned long lock_flags = 0;
-- if (cdev->dev.driver_data) {
-- gdev = (struct ccwgroup_device *)cdev->dev.driver_data;
-+ gdev = dev_get_drvdata(&cdev->dev);
-+ if (gdev) {
- if (get_device(&gdev->dev)) {
- mutex_lock(&gdev->reg_mutex);
- if (device_is_registered(&gdev->dev))
-diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
-index 597c0c7..e7ba16a 100644
---- a/drivers/s390/cio/chsc.c
-+++ b/drivers/s390/cio/chsc.c
-@@ -89,7 +89,8 @@ int chsc_get_ssd_info(struct subchannel_id schid, struct chsc_ssd_info *ssd)
- /* Copy data */
- ret = 0;
- memset(ssd, 0, sizeof(struct chsc_ssd_info));
-- if ((ssd_area->st != 0) && (ssd_area->st != 2))
-+ if ((ssd_area->st != SUBCHANNEL_TYPE_IO) &&
-+ (ssd_area->st != SUBCHANNEL_TYPE_MSG))
- goto out_free;
- ssd->path_mask = ssd_area->path_mask;
- ssd->fla_valid_mask = ssd_area->fla_valid_mask;
-@@ -132,20 +133,16 @@ static void terminate_internal_io(struct subchannel *sch)
- device_set_intretry(sch);
- /* Call handler. */
- if (sch->driver && sch->driver->termination)
-- sch->driver->termination(&sch->dev);
-+ sch->driver->termination(sch);
- }
+ spin_lock_irqsave(shost->host_lock, lock_flags);
+- if (sdev->type == TYPE_DISK)
++ if (sdev->type == TYPE_DISK) {
+ sdev->allow_restart = 1;
++ sdev->timeout = 60 * HZ;
++ }
+ scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
+ spin_unlock_irqrestore(shost->host_lock, lock_flags);
+ return 0;
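The ibmvscsi hunks above wrap the abort/reset event setup in a `do { } while (time_before(jiffies, wait_switch))` loop that retries while the send returns `SCSI_MLQUEUE_HOST_BUSY`, sleeping briefly between attempts until a deadline of `init_timeout * HZ` jiffies. A sketch of that retry-until-deadline pattern with a simulated tick counter (names and the `-1` busy code are illustrative, not the driver's constants):

```c
#include <assert.h>

/* Simulated jiffies counter; time_before mirrors the kernel macro's
 * wrap-safe signed comparison. */
static unsigned long jiffies_sim;

#define time_before(a, b) ((long)((a) - (b)) < 0)

/* Send stand-in: returns -1 (models SCSI_MLQUEUE_HOST_BUSY) for the
 * first busy_ticks calls, then 0.  Each retry advances simulated time,
 * standing in for the msleep(10) between attempts. */
static int try_send_with_retry(int busy_ticks, unsigned long timeout_ticks)
{
    unsigned long deadline = jiffies_sim + timeout_ticks;
    int rc;
    do {
        rc = (busy_ticks-- > 0) ? -1 : 0;
        if (rc != -1)
            break;          /* sent (or a hard error): stop retrying */
        jiffies_sim++;      /* sleep, then try again */
    } while (time_before(jiffies_sim, deadline));
    return rc;
}
```

As in the patch, only the transient busy status is retried; any other result breaks out of the loop immediately, and the deadline bounds the total wait.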
+diff --git a/drivers/scsi/ibmvscsi/ibmvstgt.c b/drivers/scsi/ibmvscsi/ibmvstgt.c
+index 82bcab6..d63f11e 100644
+--- a/drivers/scsi/ibmvscsi/ibmvstgt.c
++++ b/drivers/scsi/ibmvscsi/ibmvstgt.c
+@@ -292,7 +292,7 @@ static int ibmvstgt_cmd_done(struct scsi_cmnd *sc,
+ dprintk("%p %p %x %u\n", iue, target, vio_iu(iue)->srp.cmd.cdb[0],
+ cmd->usg_sg);
--static int
--s390_subchannel_remove_chpid(struct device *dev, void *data)
-+static int s390_subchannel_remove_chpid(struct subchannel *sch, void *data)
+- if (sc->use_sg)
++ if (scsi_sg_count(sc))
+ err = srp_transfer_data(sc, &vio_iu(iue)->srp.cmd, ibmvstgt_rdma, 1, 1);
+
+ spin_lock_irqsave(&target->lock, flags);
+diff --git a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c
+index 9706de9..db8bc20 100644
+--- a/drivers/scsi/ide-scsi.c
++++ b/drivers/scsi/ide-scsi.c
+@@ -395,14 +395,12 @@ static int idescsi_expiry(ide_drive_t *drive)
+ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
{
- int j;
- int mask;
-- struct subchannel *sch;
-- struct chp_id *chpid;
-+ struct chp_id *chpid = data;
- struct schib schib;
+ idescsi_scsi_t *scsi = drive_to_idescsi(drive);
+- idescsi_pc_t *pc=scsi->pc;
++ ide_hwif_t *hwif = drive->hwif;
++ idescsi_pc_t *pc = scsi->pc;
+ struct request *rq = pc->rq;
+- atapi_bcount_t bcount;
+- atapi_status_t status;
+- atapi_ireason_t ireason;
+- atapi_feature_t feature;
+-
+ unsigned int temp;
++ u16 bcount;
++ u8 stat, ireason;
-- sch = to_subchannel(dev);
-- chpid = data;
- for (j = 0; j < 8; j++) {
- mask = 0x80 >> j;
- if ((sch->schib.pmcw.pim & mask) &&
-@@ -158,7 +155,7 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
- spin_lock_irq(sch->lock);
+ #if IDESCSI_DEBUG_LOG
+ printk (KERN_INFO "ide-scsi: Reached idescsi_pc_intr interrupt handler\n");
+@@ -425,30 +423,29 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
+ (void) HWIF(drive)->ide_dma_end(drive);
+ }
- stsch(sch->schid, &schib);
-- if (!schib.pmcw.dnv)
-+ if (!css_sch_is_valid(&schib))
- goto out_unreg;
- memcpy(&sch->schib, &schib, sizeof(struct schib));
- /* Check for single path devices. */
-@@ -172,12 +169,12 @@ s390_subchannel_remove_chpid(struct device *dev, void *data)
- terminate_internal_io(sch);
- /* Re-start path verification. */
- if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
+- feature.all = 0;
+ /* Clear the interrupt */
+- status.all = HWIF(drive)->INB(IDE_STATUS_REG);
++ stat = drive->hwif->INB(IDE_STATUS_REG);
+
+- if (!status.b.drq) {
++ if ((stat & DRQ_STAT) == 0) {
+ /* No more interrupts */
+ if (test_bit(IDESCSI_LOG_CMD, &scsi->log))
+ printk (KERN_INFO "Packet command completed, %d bytes transferred\n", pc->actually_transferred);
+ local_irq_enable_in_hardirq();
+- if (status.b.check)
++ if (stat & ERR_STAT)
+ rq->errors++;
+ idescsi_end_request (drive, 1, 0);
+ return ide_stopped;
+ }
+- bcount.b.low = HWIF(drive)->INB(IDE_BCOUNTL_REG);
+- bcount.b.high = HWIF(drive)->INB(IDE_BCOUNTH_REG);
+- ireason.all = HWIF(drive)->INB(IDE_IREASON_REG);
++ bcount = (hwif->INB(IDE_BCOUNTH_REG) << 8) |
++ hwif->INB(IDE_BCOUNTL_REG);
++ ireason = hwif->INB(IDE_IREASON_REG);
+
+- if (ireason.b.cod) {
++ if (ireason & CD) {
+ printk(KERN_ERR "ide-scsi: CoD != 0 in idescsi_pc_intr\n");
+ return ide_do_reset (drive);
+ }
+- if (ireason.b.io) {
+- temp = pc->actually_transferred + bcount.all;
++ if (ireason & IO) {
++ temp = pc->actually_transferred + bcount;
+ if (temp > pc->request_transfer) {
+ if (temp > pc->buffer_size) {
+ printk(KERN_ERR "ide-scsi: The scsi wants to "
+@@ -461,11 +458,13 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
+ idescsi_input_buffers(drive, pc, temp);
+ else
+ drive->hwif->atapi_input_bytes(drive, pc->current_position, temp);
+- printk(KERN_ERR "ide-scsi: transferred %d of %d bytes\n", temp, bcount.all);
++ printk(KERN_ERR "ide-scsi: transferred"
++ " %d of %d bytes\n",
++ temp, bcount);
+ }
+ pc->actually_transferred += temp;
+ pc->current_position += temp;
+- idescsi_discard_data(drive, bcount.all - temp);
++ idescsi_discard_data(drive, bcount - temp);
+ ide_set_handler(drive, &idescsi_pc_intr, get_timeout(pc), idescsi_expiry);
+ return ide_started;
+ }
+@@ -474,22 +473,24 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
+ #endif /* IDESCSI_DEBUG_LOG */
}
+ }
+- if (ireason.b.io) {
++ if (ireason & IO) {
+ clear_bit(PC_WRITING, &pc->flags);
+ if (pc->sg)
+- idescsi_input_buffers(drive, pc, bcount.all);
++ idescsi_input_buffers(drive, pc, bcount);
+ else
+- HWIF(drive)->atapi_input_bytes(drive, pc->current_position, bcount.all);
++ hwif->atapi_input_bytes(drive, pc->current_position,
++ bcount);
} else {
- /* trigger path verification. */
- if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
- else if (sch->lpm == mask)
- goto out_unreg;
+ set_bit(PC_WRITING, &pc->flags);
+ if (pc->sg)
+- idescsi_output_buffers (drive, pc, bcount.all);
++ idescsi_output_buffers(drive, pc, bcount);
+ else
+- HWIF(drive)->atapi_output_bytes(drive, pc->current_position, bcount.all);
++ hwif->atapi_output_bytes(drive, pc->current_position,
++ bcount);
}
-@@ -201,12 +198,10 @@ void chsc_chp_offline(struct chp_id chpid)
+ /* Update the current position */
+- pc->actually_transferred += bcount.all;
+- pc->current_position += bcount.all;
++ pc->actually_transferred += bcount;
++ pc->current_position += bcount;
- if (chp_get_status(chpid) <= 0)
- return;
-- bus_for_each_dev(&css_bus_type, NULL, &chpid,
-- s390_subchannel_remove_chpid);
-+ for_each_subchannel_staged(s390_subchannel_remove_chpid, NULL, &chpid);
- }
+ /* And set the interrupt handler again */
+ ide_set_handler(drive, &idescsi_pc_intr, get_timeout(pc), idescsi_expiry);
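The ide-scsi hunks above replace the `atapi_bcount_t`/`atapi_ireason_t` bitfield unions with plain integers: the 16-bit byte count is rebuilt from the two 8-bit task-file registers, and the interrupt-reason bits are tested with the `CD`/`IO` masks. A small sketch of that arithmetic (the mask values match the ATAPI interrupt-reason bit positions, CoD = bit 0 and IO = bit 1; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define CD 0x01  /* command/data (CoD) */
#define IO 0x02  /* transfer direction: device to host */

/* Rebuild the 16-bit ATAPI byte count from the high and low
 * byte-count registers, as the converted code does inline. */
static uint16_t assemble_bcount(uint8_t high, uint8_t low)
{
    return (uint16_t)((high << 8) | low);
}
```

Mask tests like `if (ireason & CD)` then replace the old `ireason.b.cod` bitfield accesses with the same meaning and no union plumbing.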
+@@ -501,16 +502,16 @@ static ide_startstop_t idescsi_transfer_pc(ide_drive_t *drive)
+ ide_hwif_t *hwif = drive->hwif;
+ idescsi_scsi_t *scsi = drive_to_idescsi(drive);
+ idescsi_pc_t *pc = scsi->pc;
+- atapi_ireason_t ireason;
+ ide_startstop_t startstop;
++ u8 ireason;
--static int
--s390_process_res_acc_new_sch(struct subchannel_id schid)
-+static int s390_process_res_acc_new_sch(struct subchannel_id schid, void *data)
+ if (ide_wait_stat(&startstop,drive,DRQ_STAT,BUSY_STAT,WAIT_READY)) {
+ printk(KERN_ERR "ide-scsi: Strange, packet command "
+ "initiated yet DRQ isn't asserted\n");
+ return startstop;
+ }
+- ireason.all = HWIF(drive)->INB(IDE_IREASON_REG);
+- if (!ireason.b.cod || ireason.b.io) {
++ ireason = hwif->INB(IDE_IREASON_REG);
++ if ((ireason & CD) == 0 || (ireason & IO)) {
+ printk(KERN_ERR "ide-scsi: (IO,CoD) != (0,1) while "
+ "issuing a packet command\n");
+ return ide_do_reset (drive);
+@@ -573,30 +574,26 @@ static ide_startstop_t idescsi_issue_pc (ide_drive_t *drive, idescsi_pc_t *pc)
{
- struct schib schib;
- /*
-@@ -252,18 +247,10 @@ static int get_res_chpid_mask(struct chsc_ssd_info *ssd,
- return 0;
- }
+ idescsi_scsi_t *scsi = drive_to_idescsi(drive);
+ ide_hwif_t *hwif = drive->hwif;
+- atapi_feature_t feature;
+- atapi_bcount_t bcount;
++ u16 bcount;
++ u8 dma = 0;
--static int
--__s390_process_res_acc(struct subchannel_id schid, void *data)
-+static int __s390_process_res_acc(struct subchannel *sch, void *data)
- {
- int chp_mask, old_lpm;
-- struct res_acc_data *res_data;
-- struct subchannel *sch;
--
-- res_data = data;
-- sch = get_subchannel_by_schid(schid);
-- if (!sch)
-- /* Check if a subchannel is newly available. */
-- return s390_process_res_acc_new_sch(schid);
-+ struct res_acc_data *res_data = data;
+ scsi->pc=pc; /* Set the current packet command */
+ pc->actually_transferred=0; /* We haven't transferred any data yet */
+ pc->current_position=pc->buffer;
+- bcount.all = min(pc->request_transfer, 63 * 1024); /* Request to transfer the entire buffer at once */
++ /* Request to transfer the entire buffer at once */
++ bcount = min(pc->request_transfer, 63 * 1024);
- spin_lock_irq(sch->lock);
- chp_mask = get_res_chpid_mask(&sch->ssd_info, res_data);
-@@ -279,10 +266,10 @@ __s390_process_res_acc(struct subchannel_id schid, void *data)
- if (!old_lpm && sch->lpm)
- device_trigger_reprobe(sch);
- else if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
- out:
- spin_unlock_irq(sch->lock);
-- put_device(&sch->dev);
-+
- return 0;
- }
+- feature.all = 0;
+ if (drive->using_dma && !idescsi_map_sg(drive, pc)) {
+ hwif->sg_mapped = 1;
+- feature.b.dma = !hwif->dma_setup(drive);
++ dma = !hwif->dma_setup(drive);
+ hwif->sg_mapped = 0;
+ }
-@@ -305,7 +292,8 @@ static void s390_process_res_acc (struct res_acc_data *res_data)
- * The more information we have (info), the less scanning
- * will we have to do.
- */
-- for_each_subchannel(__s390_process_res_acc, res_data);
-+ for_each_subchannel_staged(__s390_process_res_acc,
-+ s390_process_res_acc_new_sch, res_data);
- }
+ SELECT_DRIVE(drive);
+- if (IDE_CONTROL_REG)
+- HWIF(drive)->OUTB(drive->ctl, IDE_CONTROL_REG);
- static int
-@@ -499,8 +487,7 @@ void chsc_process_crw(void)
- } while (sei_area->flags & 0x80);
- }
+- HWIF(drive)->OUTB(feature.all, IDE_FEATURE_REG);
+- HWIF(drive)->OUTB(bcount.b.high, IDE_BCOUNTH_REG);
+- HWIF(drive)->OUTB(bcount.b.low, IDE_BCOUNTL_REG);
++ ide_pktcmd_tf_load(drive, IDE_TFLAG_NO_SELECT_MASK, bcount, dma);
--static int
--__chp_add_new_sch(struct subchannel_id schid)
-+static int __chp_add_new_sch(struct subchannel_id schid, void *data)
- {
- struct schib schib;
+- if (feature.b.dma)
++ if (dma)
+ set_bit(PC_DMA_OK, &pc->flags);
-@@ -514,45 +501,37 @@ __chp_add_new_sch(struct subchannel_id schid)
- }
+ if (test_bit(IDESCSI_DRQ_INTERRUPT, &scsi->flags)) {
+@@ -922,8 +919,8 @@ static int idescsi_eh_reset (struct scsi_cmnd *cmd)
+ }
+ /* kill current request */
+- blkdev_dequeue_request(req);
+- end_that_request_last(req, 0);
++ if (__blk_end_request(req, -EIO, 0))
++ BUG();
+ if (blk_sense_request(req))
+ kfree(scsi->pc->buffer);
+ kfree(scsi->pc);
+@@ -932,8 +929,8 @@ static int idescsi_eh_reset (struct scsi_cmnd *cmd)
--static int
--__chp_add(struct subchannel_id schid, void *data)
-+static int __chp_add(struct subchannel *sch, void *data)
- {
- int i, mask;
-- struct chp_id *chpid;
-- struct subchannel *sch;
--
-- chpid = data;
-- sch = get_subchannel_by_schid(schid);
-- if (!sch)
-- /* Check if the subchannel is now available. */
-- return __chp_add_new_sch(schid);
-+ struct chp_id *chpid = data;
-+
- spin_lock_irq(sch->lock);
- for (i=0; i<8; i++) {
- mask = 0x80 >> i;
- if ((sch->schib.pmcw.pim & mask) &&
-- (sch->schib.pmcw.chpid[i] == chpid->id)) {
-- if (stsch(sch->schid, &sch->schib) != 0) {
-- /* Endgame. */
-- spin_unlock_irq(sch->lock);
-- return -ENXIO;
-- }
-+ (sch->schib.pmcw.chpid[i] == chpid->id))
- break;
-- }
- }
- if (i==8) {
- spin_unlock_irq(sch->lock);
- return 0;
+ /* now nuke the drive queue */
+ while ((req = elv_next_request(drive->queue))) {
+- blkdev_dequeue_request(req);
+- end_that_request_last(req, 0);
++ if (__blk_end_request(req, -EIO, 0))
++ BUG();
}
-+ if (stsch(sch->schid, &sch->schib)) {
-+ spin_unlock_irq(sch->lock);
-+ css_schedule_eval(sch->schid);
-+ return 0;
-+ }
- sch->lpm = ((sch->schib.pmcw.pim &
- sch->schib.pmcw.pam &
- sch->schib.pmcw.pom)
- | mask) & sch->opm;
- if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
+ HWGROUP(drive)->rq = NULL;
+diff --git a/drivers/scsi/imm.c b/drivers/scsi/imm.c
+index a3d0c6b..f97d172 100644
+--- a/drivers/scsi/imm.c
++++ b/drivers/scsi/imm.c
+@@ -837,19 +837,16 @@ static int imm_engine(imm_struct *dev, struct scsi_cmnd *cmd)
- spin_unlock_irq(sch->lock);
-- put_device(&sch->dev);
-+
- return 0;
- }
+ /* Phase 4 - Setup scatter/gather buffers */
+ case 4:
+- if (cmd->use_sg) {
+- /* if many buffers are available, start filling the first */
+- cmd->SCp.buffer =
+- (struct scatterlist *) cmd->request_buffer;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ } else {
+- /* else fill the only available buffer */
+ cmd->SCp.buffer = NULL;
+- cmd->SCp.this_residual = cmd->request_bufflen;
+- cmd->SCp.ptr = cmd->request_buffer;
++ cmd->SCp.this_residual = 0;
++ cmd->SCp.ptr = NULL;
+ }
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.phase++;
+ if (cmd->SCp.this_residual & 0x01)
+ cmd->SCp.this_residual++;
+diff --git a/drivers/scsi/in2000.c b/drivers/scsi/in2000.c
+index c8b452f..8053b1e 100644
+--- a/drivers/scsi/in2000.c
++++ b/drivers/scsi/in2000.c
+@@ -369,16 +369,16 @@ static int in2000_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
+ * - SCp.phase records this command's SRCID_ER bit setting
+ */
-@@ -564,7 +543,8 @@ void chsc_chp_online(struct chp_id chpid)
- CIO_TRACE_EVENT(2, dbf_txt);
+- if (cmd->use_sg) {
+- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ } else {
+ cmd->SCp.buffer = NULL;
+ cmd->SCp.buffers_residual = 0;
+- cmd->SCp.ptr = (char *) cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
+ }
+ cmd->SCp.have_data_in = 0;
- if (chp_get_status(chpid) != 0)
-- for_each_subchannel(__chp_add, &chpid);
-+ for_each_subchannel_staged(__chp_add, __chp_add_new_sch,
-+ &chpid);
- }
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 0841df0..73270ff 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -84,7 +84,7 @@
+ /*
+ * Global Data
+ */
+-static struct list_head ipr_ioa_head = LIST_HEAD_INIT(ipr_ioa_head);
++static LIST_HEAD(ipr_ioa_head);
+ static unsigned int ipr_log_level = IPR_DEFAULT_LOG_LEVEL;
+ static unsigned int ipr_max_speed = 1;
+ static int ipr_testmode = 0;
+@@ -5142,6 +5142,7 @@ static void ipr_build_ata_ioadl(struct ipr_cmnd *ipr_cmd,
+ struct ipr_ioadl_desc *last_ioadl = NULL;
+ int len = qc->nbytes + qc->pad_len;
+ struct scatterlist *sg;
++ unsigned int si;
- static void __s390_subchannel_vary_chpid(struct subchannel *sch,
-@@ -589,7 +569,7 @@ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
- if (!old_lpm)
- device_trigger_reprobe(sch);
- else if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
- break;
- }
- sch->opm &= ~mask;
-@@ -603,37 +583,29 @@ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
- terminate_internal_io(sch);
- /* Re-start path verification. */
- if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
- }
- } else if (!sch->lpm) {
- if (device_trigger_verify(sch) != 0)
- css_schedule_eval(sch->schid);
- } else if (sch->driver && sch->driver->verify)
-- sch->driver->verify(&sch->dev);
-+ sch->driver->verify(sch);
- break;
+ if (len == 0)
+ return;
+@@ -5159,7 +5160,7 @@ static void ipr_build_ata_ioadl(struct ipr_cmnd *ipr_cmd,
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
}
- spin_unlock_irqrestore(sch->lock, flags);
- }
-
--static int s390_subchannel_vary_chpid_off(struct device *dev, void *data)
-+static int s390_subchannel_vary_chpid_off(struct subchannel *sch, void *data)
- {
-- struct subchannel *sch;
-- struct chp_id *chpid;
--
-- sch = to_subchannel(dev);
-- chpid = data;
-+ struct chp_id *chpid = data;
- __s390_subchannel_vary_chpid(sch, *chpid, 0);
- return 0;
- }
+- ata_for_each_sg(sg, qc) {
++ for_each_sg(qc->sg, sg, qc->n_elem, si) {
+ ioadl->flags_and_data_len = cpu_to_be32(ioadl_flags | sg_dma_len(sg));
+ ioadl->address = cpu_to_be32(sg_dma_address(sg));
--static int s390_subchannel_vary_chpid_on(struct device *dev, void *data)
-+static int s390_subchannel_vary_chpid_on(struct subchannel *sch, void *data)
- {
-- struct subchannel *sch;
-- struct chp_id *chpid;
--
-- sch = to_subchannel(dev);
-- chpid = data;
-+ struct chp_id *chpid = data;
+@@ -5222,12 +5223,12 @@ static unsigned int ipr_qc_issue(struct ata_queued_cmd *qc)
+ regs->flags |= IPR_ATA_FLAG_XFER_TYPE_DMA;
+ break;
- __s390_subchannel_vary_chpid(sch, *chpid, 1);
- return 0;
-@@ -643,13 +615,7 @@ static int
- __s390_vary_chpid_on(struct subchannel_id schid, void *data)
- {
- struct schib schib;
-- struct subchannel *sch;
+- case ATA_PROT_ATAPI:
+- case ATA_PROT_ATAPI_NODATA:
++ case ATAPI_PROT_PIO:
++ case ATAPI_PROT_NODATA:
+ regs->flags |= IPR_ATA_FLAG_PACKET_CMD;
+ break;
-- sch = get_subchannel_by_schid(schid);
-- if (sch) {
-- put_device(&sch->dev);
-- return 0;
-- }
- if (stsch_err(schid, &schib))
- /* We're through */
- return -ENXIO;
-@@ -669,12 +635,13 @@ int chsc_chp_vary(struct chp_id chpid, int on)
- * Redo PathVerification on the devices the chpid connects to
- */
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ regs->flags |= IPR_ATA_FLAG_PACKET_CMD;
+ regs->flags |= IPR_ATA_FLAG_XFER_TYPE_DMA;
+ break;
+diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
+index 5c5a9b2..7505cca 100644
+--- a/drivers/scsi/ips.c
++++ b/drivers/scsi/ips.c
+@@ -389,17 +389,17 @@ static struct pci_device_id ips_pci_table[] = {
+ MODULE_DEVICE_TABLE( pci, ips_pci_table );
-- bus_for_each_dev(&css_bus_type, NULL, &chpid, on ?
-- s390_subchannel_vary_chpid_on :
-- s390_subchannel_vary_chpid_off);
- if (on)
-- /* Scan for new devices on varied on path. */
-- for_each_subchannel(__s390_vary_chpid_on, NULL);
-+ for_each_subchannel_staged(s390_subchannel_vary_chpid_on,
-+ __s390_vary_chpid_on, &chpid);
-+ else
-+ for_each_subchannel_staged(s390_subchannel_vary_chpid_off,
-+ NULL, &chpid);
+ static char ips_hot_plug_name[] = "ips";
+-
++
+ static int __devinit ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent);
+ static void __devexit ips_remove_device(struct pci_dev *pci_dev);
+-
++
+ static struct pci_driver ips_pci_driver = {
+ .name = ips_hot_plug_name,
+ .id_table = ips_pci_table,
+ .probe = ips_insert_device,
+ .remove = __devexit_p(ips_remove_device),
+ };
+-
+
- return 0;
- }
-
-@@ -1075,7 +1042,7 @@ chsc_determine_css_characteristics(void)
-
- scsc_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
- if (!scsc_area) {
-- CIO_MSG_EVENT(0, "Was not able to determine available"
-+ CIO_MSG_EVENT(0, "Was not able to determine available "
- "CHSCs due to no memory.\n");
- return -ENOMEM;
- }
-diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
-index 4690534..60590a1 100644
---- a/drivers/s390/cio/cio.c
-+++ b/drivers/s390/cio/cio.c
-@@ -23,11 +23,12 @@
- #include <asm/reset.h>
- #include <asm/ipl.h>
- #include <asm/chpid.h>
--#include "airq.h"
-+#include <asm/airq.h>
- #include "cio.h"
- #include "css.h"
- #include "chsc.h"
- #include "ioasm.h"
-+#include "io_sch.h"
- #include "blacklist.h"
- #include "cio_debug.h"
- #include "chp.h"
-@@ -56,39 +57,37 @@ __setup ("cio_msg=", cio_setup);
/*
- * Function: cio_debug_init
-- * Initializes three debug logs (under /proc/s390dbf) for common I/O:
-- * - cio_msg logs the messages which are printk'ed when CONFIG_DEBUG_IO is on
-+ * Initializes three debug logs for common I/O:
-+ * - cio_msg logs generic cio messages
- * - cio_trace logs the calling of different functions
-- * - cio_crw logs the messages which are printk'ed when CONFIG_DEBUG_CRW is on
-- * debug levels depend on CONFIG_DEBUG_IO resp. CONFIG_DEBUG_CRW
-+ * - cio_crw logs machine check related cio messages
- */
--static int __init
--cio_debug_init (void)
-+static int __init cio_debug_init(void)
- {
-- cio_debug_msg_id = debug_register ("cio_msg", 16, 4, 16*sizeof (long));
-+ cio_debug_msg_id = debug_register("cio_msg", 16, 1, 16 * sizeof(long));
- if (!cio_debug_msg_id)
- goto out_unregister;
-- debug_register_view (cio_debug_msg_id, &debug_sprintf_view);
-- debug_set_level (cio_debug_msg_id, 2);
-- cio_debug_trace_id = debug_register ("cio_trace", 16, 4, 16);
-+ debug_register_view(cio_debug_msg_id, &debug_sprintf_view);
-+ debug_set_level(cio_debug_msg_id, 2);
-+ cio_debug_trace_id = debug_register("cio_trace", 16, 1, 16);
- if (!cio_debug_trace_id)
- goto out_unregister;
-- debug_register_view (cio_debug_trace_id, &debug_hex_ascii_view);
-- debug_set_level (cio_debug_trace_id, 2);
-- cio_debug_crw_id = debug_register ("cio_crw", 4, 4, 16*sizeof (long));
-+ debug_register_view(cio_debug_trace_id, &debug_hex_ascii_view);
-+ debug_set_level(cio_debug_trace_id, 2);
-+ cio_debug_crw_id = debug_register("cio_crw", 16, 1, 16 * sizeof(long));
- if (!cio_debug_crw_id)
- goto out_unregister;
-- debug_register_view (cio_debug_crw_id, &debug_sprintf_view);
-- debug_set_level (cio_debug_crw_id, 2);
-+ debug_register_view(cio_debug_crw_id, &debug_sprintf_view);
-+ debug_set_level(cio_debug_crw_id, 4);
- return 0;
-
- out_unregister:
- if (cio_debug_msg_id)
-- debug_unregister (cio_debug_msg_id);
-+ debug_unregister(cio_debug_msg_id);
- if (cio_debug_trace_id)
-- debug_unregister (cio_debug_trace_id);
-+ debug_unregister(cio_debug_trace_id);
- if (cio_debug_crw_id)
-- debug_unregister (cio_debug_crw_id);
-+ debug_unregister(cio_debug_crw_id);
- printk(KERN_WARNING"cio: could not initialize debugging\n");
- return -1;
- }
-@@ -147,7 +146,7 @@ cio_tpi(void)
- spin_lock(sch->lock);
- memcpy (&sch->schib.scsw, &irb->scsw, sizeof (struct scsw));
- if (sch->driver && sch->driver->irq)
-- sch->driver->irq(&sch->dev);
-+ sch->driver->irq(sch);
- spin_unlock(sch->lock);
- irq_exit ();
- _local_bh_enable();
-@@ -184,33 +183,35 @@ cio_start_key (struct subchannel *sch, /* subchannel structure */
+ * Necessary forward function protoypes
+@@ -587,7 +587,7 @@ static void
+ ips_setup_funclist(ips_ha_t * ha)
{
- char dbf_txt[15];
- int ccode;
-+ struct orb *orb;
-- CIO_TRACE_EVENT (4, "stIO");
-- CIO_TRACE_EVENT (4, sch->dev.bus_id);
-+ CIO_TRACE_EVENT(4, "stIO");
-+ CIO_TRACE_EVENT(4, sch->dev.bus_id);
+- /*
++ /*
+ * Setup Functions
+ */
+ if (IPS_IS_MORPHEUS(ha) || IPS_IS_MARCO(ha)) {
+@@ -702,12 +702,8 @@ ips_release(struct Scsi_Host *sh)
+ /* free extra memory */
+ ips_free(ha);
-+ orb = &to_io_private(sch)->orb;
- /* sch is always under 2G. */
-- sch->orb.intparm = (__u32)(unsigned long)sch;
-- sch->orb.fmt = 1;
-+ orb->intparm = (u32)(addr_t)sch;
-+ orb->fmt = 1;
+- /* Free I/O Region */
+- if (ha->io_addr)
+- release_region(ha->io_addr, ha->io_len);
+-
+ /* free IRQ */
+- free_irq(ha->irq, ha);
++ free_irq(ha->pcidev->irq, ha);
-- sch->orb.pfch = sch->options.prefetch == 0;
-- sch->orb.spnd = sch->options.suspend;
-- sch->orb.ssic = sch->options.suspend && sch->options.inter;
-- sch->orb.lpm = (lpm != 0) ? lpm : sch->lpm;
-+ orb->pfch = sch->options.prefetch == 0;
-+ orb->spnd = sch->options.suspend;
-+ orb->ssic = sch->options.suspend && sch->options.inter;
-+ orb->lpm = (lpm != 0) ? lpm : sch->lpm;
- #ifdef CONFIG_64BIT
- /*
- * for 64 bit we always support 64 bit IDAWs with 4k page size only
- */
-- sch->orb.c64 = 1;
-- sch->orb.i2k = 0;
-+ orb->c64 = 1;
-+ orb->i2k = 0;
- #endif
-- sch->orb.key = key >> 4;
-+ orb->key = key >> 4;
- /* issue "Start Subchannel" */
-- sch->orb.cpa = (__u32) __pa (cpa);
-- ccode = ssch (sch->schid, &sch->orb);
-+ orb->cpa = (__u32) __pa(cpa);
-+ ccode = ssch(sch->schid, orb);
+ scsi_host_put(sh);
- /* process condition code */
-- sprintf (dbf_txt, "ccode:%d", ccode);
-- CIO_TRACE_EVENT (4, dbf_txt);
-+ sprintf(dbf_txt, "ccode:%d", ccode);
-+ CIO_TRACE_EVENT(4, dbf_txt);
+@@ -1637,7 +1633,7 @@ ips_make_passthru(ips_ha_t *ha, struct scsi_cmnd *SC, ips_scb_t *scb, int intr)
+ return (IPS_FAILURE);
+ }
- switch (ccode) {
- case 0:
-@@ -405,8 +406,8 @@ cio_modify (struct subchannel *sch)
- /*
- * Enable subchannel.
- */
--int
--cio_enable_subchannel (struct subchannel *sch, unsigned int isc)
-+int cio_enable_subchannel(struct subchannel *sch, unsigned int isc,
-+ u32 intparm)
- {
- char dbf_txt[15];
- int ccode;
-@@ -425,7 +426,7 @@ cio_enable_subchannel (struct subchannel *sch, unsigned int isc)
- for (retry = 5, ret = 0; retry > 0; retry--) {
- sch->schib.pmcw.ena = 1;
- sch->schib.pmcw.isc = isc;
-- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
-+ sch->schib.pmcw.intparm = intparm;
- ret = cio_modify(sch);
- if (ret == -ENODEV)
- break;
-@@ -567,7 +568,7 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
- */
- if (sch->st != 0) {
- CIO_DEBUG(KERN_INFO, 0,
-- "cio: Subchannel 0.%x.%04x reports "
-+ "Subchannel 0.%x.%04x reports "
- "non-I/O subchannel type %04X\n",
- sch->schid.ssid, sch->schid.sch_no, sch->st);
- /* We stop here for non-io subchannels. */
-@@ -576,11 +577,11 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
- }
+- if (ha->device_id == IPS_DEVICEID_COPPERHEAD &&
++ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD &&
+ pt->CoppCP.cmd.flashfw.op_code ==
+ IPS_CMD_RW_BIOSFW) {
+ ret = ips_flash_copperhead(ha, pt, scb);
+@@ -2021,7 +2017,7 @@ ips_cleanup_passthru(ips_ha_t * ha, ips_scb_t * scb)
+ pt->ExtendedStatus = scb->extended_status;
+ pt->AdapterType = ha->ad_type;
- /* Initialization for io subchannels. */
-- if (!sch->schib.pmcw.dnv) {
-- /* io subchannel but device number is invalid. */
-+ if (!css_sch_is_valid(&sch->schib)) {
- err = -ENODEV;
- goto out;
+- if (ha->device_id == IPS_DEVICEID_COPPERHEAD &&
++ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD &&
+ (scb->cmd.flashfw.op_code == IPS_CMD_DOWNLOAD ||
+ scb->cmd.flashfw.op_code == IPS_CMD_RW_BIOSFW))
+ ips_free_flash_copperhead(ha);
+@@ -2075,13 +2071,13 @@ ips_host_info(ips_ha_t * ha, char *ptr, off_t offset, int len)
+ ha->mem_ptr);
}
-+
- /* Devno is valid. */
- if (is_blacklisted (sch->schid.ssid, sch->schib.pmcw.dev)) {
- /*
-@@ -600,7 +601,7 @@ cio_validate_subchannel (struct subchannel *sch, struct subchannel_id schid)
- sch->lpm = sch->schib.pmcw.pam & sch->opm;
- CIO_DEBUG(KERN_INFO, 0,
-- "cio: Detected device %04x on subchannel 0.%x.%04X"
-+ "Detected device %04x on subchannel 0.%x.%04X"
- " - PIM = %02X, PAM = %02X, POM = %02X\n",
- sch->schib.pmcw.dev, sch->schid.ssid,
- sch->schid.sch_no, sch->schib.pmcw.pim,
-@@ -680,7 +681,7 @@ do_IRQ (struct pt_regs *regs)
- sizeof (irb->scsw));
- /* Call interrupt handler if there is one. */
- if (sch->driver && sch->driver->irq)
-- sch->driver->irq(&sch->dev);
-+ sch->driver->irq(sch);
- }
- if (sch)
- spin_unlock(sch->lock);
-@@ -698,8 +699,14 @@ do_IRQ (struct pt_regs *regs)
+- copy_info(&info, "\tIRQ number : %d\n", ha->irq);
++ copy_info(&info, "\tIRQ number : %d\n", ha->pcidev->irq);
- #ifdef CONFIG_CCW_CONSOLE
- static struct subchannel console_subchannel;
-+static struct io_subchannel_private console_priv;
- static int console_subchannel_in_use;
+ /* For the Next 3 lines Check for Binary 0 at the end and don't include it if it's there. */
+ /* That keeps everything happy for "text" operations on the proc file. */
-+void *cio_get_console_priv(void)
-+{
-+ return &console_priv;
-+}
-+
- /*
- * busy wait for the next interrupt on the console
- */
-@@ -738,9 +745,9 @@ cio_test_for_console(struct subchannel_id schid, void *data)
+ if (le32_to_cpu(ha->nvram->signature) == IPS_NVRAM_P5_SIG) {
+- if (ha->nvram->bios_low[3] == 0) {
++ if (ha->nvram->bios_low[3] == 0) {
+ copy_info(&info,
+ "\tBIOS Version : %c%c%c%c%c%c%c\n",
+ ha->nvram->bios_high[0], ha->nvram->bios_high[1],
+@@ -2232,31 +2228,31 @@ ips_identify_controller(ips_ha_t * ha)
{
- if (stsch_err(schid, &console_subchannel.schib) != 0)
- return -ENXIO;
-- if (console_subchannel.schib.pmcw.dnv &&
-- console_subchannel.schib.pmcw.dev ==
-- console_devno) {
-+ if ((console_subchannel.schib.pmcw.st == SUBCHANNEL_TYPE_IO) &&
-+ console_subchannel.schib.pmcw.dnv &&
-+ (console_subchannel.schib.pmcw.dev == console_devno)) {
- console_irq = schid.sch_no;
- return 1; /* found */
- }
-@@ -758,6 +765,7 @@ cio_get_console_sch_no(void)
- /* VM provided us with the irq number of the console. */
- schid.sch_no = console_irq;
- if (stsch(schid, &console_subchannel.schib) != 0 ||
-+ (console_subchannel.schib.pmcw.st != SUBCHANNEL_TYPE_IO) ||
- !console_subchannel.schib.pmcw.dnv)
- return -1;
- console_devno = console_subchannel.schib.pmcw.dev;
-@@ -804,7 +812,7 @@ cio_probe_console(void)
- ctl_set_bit(6, 24);
- console_subchannel.schib.pmcw.isc = 7;
- console_subchannel.schib.pmcw.intparm =
-- (__u32)(unsigned long)&console_subchannel;
-+ (u32)(addr_t)&console_subchannel;
- ret = cio_modify(&console_subchannel);
- if (ret) {
- console_subchannel_in_use = 0;
-@@ -1022,7 +1030,7 @@ static int __reipl_subchannel_match(struct subchannel_id schid, void *data)
+ METHOD_TRACE("ips_identify_controller", 1);
- if (stsch_reset(schid, &schib))
- return -ENXIO;
-- if (schib.pmcw.dnv &&
-+ if ((schib.pmcw.st == SUBCHANNEL_TYPE_IO) && schib.pmcw.dnv &&
- (schib.pmcw.dev == match_id->devid.devno) &&
- (schid.ssid == match_id->devid.ssid)) {
- match_id->schid = schid;
-@@ -1068,6 +1076,8 @@ int __init cio_get_iplinfo(struct cio_iplinfo *iplinfo)
- return -ENODEV;
- if (stsch(schid, &schib))
- return -ENODEV;
-+ if (schib.pmcw.st != SUBCHANNEL_TYPE_IO)
-+ return -ENODEV;
- if (!schib.pmcw.dnv)
- return -ENODEV;
- iplinfo->devno = schib.pmcw.dev;
-diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
-index 7446c39..52afa4c 100644
---- a/drivers/s390/cio/cio.h
-+++ b/drivers/s390/cio/cio.h
-@@ -11,32 +11,32 @@
- * path management control word
- */
- struct pmcw {
-- __u32 intparm; /* interruption parameter */
-- __u32 qf : 1; /* qdio facility */
-- __u32 res0 : 1; /* reserved zeros */
-- __u32 isc : 3; /* interruption sublass */
-- __u32 res5 : 3; /* reserved zeros */
-- __u32 ena : 1; /* enabled */
-- __u32 lm : 2; /* limit mode */
-- __u32 mme : 2; /* measurement-mode enable */
-- __u32 mp : 1; /* multipath mode */
-- __u32 tf : 1; /* timing facility */
-- __u32 dnv : 1; /* device number valid */
-- __u32 dev : 16; /* device number */
-- __u8 lpm; /* logical path mask */
-- __u8 pnom; /* path not operational mask */
-- __u8 lpum; /* last path used mask */
-- __u8 pim; /* path installed mask */
-- __u16 mbi; /* measurement-block index */
-- __u8 pom; /* path operational mask */
-- __u8 pam; /* path available mask */
-- __u8 chpid[8]; /* CHPID 0-7 (if available) */
-- __u32 unused1 : 8; /* reserved zeros */
-- __u32 st : 3; /* subchannel type */
-- __u32 unused2 : 18; /* reserved zeros */
-- __u32 mbfc : 1; /* measurement block format control */
-- __u32 xmwme : 1; /* extended measurement word mode enable */
-- __u32 csense : 1; /* concurrent sense; can be enabled ...*/
-+ u32 intparm; /* interruption parameter */
-+ u32 qf : 1; /* qdio facility */
-+ u32 res0 : 1; /* reserved zeros */
-+ u32 isc : 3; /* interruption sublass */
-+ u32 res5 : 3; /* reserved zeros */
-+ u32 ena : 1; /* enabled */
-+ u32 lm : 2; /* limit mode */
-+ u32 mme : 2; /* measurement-mode enable */
-+ u32 mp : 1; /* multipath mode */
-+ u32 tf : 1; /* timing facility */
-+ u32 dnv : 1; /* device number valid */
-+ u32 dev : 16; /* device number */
-+ u8 lpm; /* logical path mask */
-+ u8 pnom; /* path not operational mask */
-+ u8 lpum; /* last path used mask */
-+ u8 pim; /* path installed mask */
-+ u16 mbi; /* measurement-block index */
-+ u8 pom; /* path operational mask */
-+ u8 pam; /* path available mask */
-+ u8 chpid[8]; /* CHPID 0-7 (if available) */
-+ u32 unused1 : 8; /* reserved zeros */
-+ u32 st : 3; /* subchannel type */
-+ u32 unused2 : 18; /* reserved zeros */
-+ u32 mbfc : 1; /* measurement block format control */
-+ u32 xmwme : 1; /* extended measurement word mode enable */
-+ u32 csense : 1; /* concurrent sense; can be enabled ...*/
- /* ... per MSCH, however, if facility */
- /* ... is not installed, this results */
- /* ... in an operand exception. */
-@@ -52,31 +52,6 @@ struct schib {
- __u8 mda[4]; /* model dependent area */
- } __attribute__ ((packed,aligned(4)));
+- switch (ha->device_id) {
++ switch (ha->pcidev->device) {
+ case IPS_DEVICEID_COPPERHEAD:
+- if (ha->revision_id <= IPS_REVID_SERVERAID) {
++ if (ha->pcidev->revision <= IPS_REVID_SERVERAID) {
+ ha->ad_type = IPS_ADTYPE_SERVERAID;
+- } else if (ha->revision_id == IPS_REVID_SERVERAID2) {
++ } else if (ha->pcidev->revision == IPS_REVID_SERVERAID2) {
+ ha->ad_type = IPS_ADTYPE_SERVERAID2;
+- } else if (ha->revision_id == IPS_REVID_NAVAJO) {
++ } else if (ha->pcidev->revision == IPS_REVID_NAVAJO) {
+ ha->ad_type = IPS_ADTYPE_NAVAJO;
+- } else if ((ha->revision_id == IPS_REVID_SERVERAID2)
++ } else if ((ha->pcidev->revision == IPS_REVID_SERVERAID2)
+ && (ha->slot_num == 0)) {
+ ha->ad_type = IPS_ADTYPE_KIOWA;
+- } else if ((ha->revision_id >= IPS_REVID_CLARINETP1) &&
+- (ha->revision_id <= IPS_REVID_CLARINETP3)) {
++ } else if ((ha->pcidev->revision >= IPS_REVID_CLARINETP1) &&
++ (ha->pcidev->revision <= IPS_REVID_CLARINETP3)) {
+ if (ha->enq->ucMaxPhysicalDevices == 15)
+ ha->ad_type = IPS_ADTYPE_SERVERAID3L;
+ else
+ ha->ad_type = IPS_ADTYPE_SERVERAID3;
+- } else if ((ha->revision_id >= IPS_REVID_TROMBONE32) &&
+- (ha->revision_id <= IPS_REVID_TROMBONE64)) {
++ } else if ((ha->pcidev->revision >= IPS_REVID_TROMBONE32) &&
++ (ha->pcidev->revision <= IPS_REVID_TROMBONE64)) {
+ ha->ad_type = IPS_ADTYPE_SERVERAID4H;
+ }
+ break;
--/*
-- * operation request block
-- */
--struct orb {
-- __u32 intparm; /* interruption parameter */
-- __u32 key : 4; /* flags, like key, suspend control, etc. */
-- __u32 spnd : 1; /* suspend control */
-- __u32 res1 : 1; /* reserved */
-- __u32 mod : 1; /* modification control */
-- __u32 sync : 1; /* synchronize control */
-- __u32 fmt : 1; /* format control */
-- __u32 pfch : 1; /* prefetch control */
-- __u32 isic : 1; /* initial-status-interruption control */
-- __u32 alcc : 1; /* address-limit-checking control */
-- __u32 ssic : 1; /* suppress-suspended-interr. control */
-- __u32 res2 : 1; /* reserved */
-- __u32 c64 : 1; /* IDAW/QDIO 64 bit control */
-- __u32 i2k : 1; /* IDAW 2/4kB block size control */
-- __u32 lpm : 8; /* logical path mask */
-- __u32 ils : 1; /* incorrect length */
-- __u32 zero : 6; /* reserved zeros */
-- __u32 orbx : 1; /* ORB extension control */
-- __u32 cpa; /* channel program address */
--} __attribute__ ((packed,aligned(4)));
--
- /* subchannel data structure used by I/O subroutines */
- struct subchannel {
- struct subchannel_id schid;
-@@ -85,7 +60,7 @@ struct subchannel {
- enum {
- SUBCHANNEL_TYPE_IO = 0,
- SUBCHANNEL_TYPE_CHSC = 1,
-- SUBCHANNEL_TYPE_MESSAGE = 2,
-+ SUBCHANNEL_TYPE_MSG = 2,
- SUBCHANNEL_TYPE_ADM = 3,
- } st; /* subchannel type */
+ case IPS_DEVICEID_MORPHEUS:
+- switch (ha->subdevice_id) {
++ switch (ha->pcidev->subsystem_device) {
+ case IPS_SUBDEVICEID_4L:
+ ha->ad_type = IPS_ADTYPE_SERVERAID4L;
+ break;
+@@ -2285,7 +2281,7 @@ ips_identify_controller(ips_ha_t * ha)
+ break;
-@@ -99,11 +74,10 @@ struct subchannel {
- __u8 lpm; /* logical path mask */
- __u8 opm; /* operational path mask */
- struct schib schib; /* subchannel information block */
-- struct orb orb; /* operation request block */
-- struct ccw1 sense_ccw; /* static ccw for sense command */
- struct chsc_ssd_info ssd_info; /* subchannel description */
- struct device dev; /* entry in device tree */
- struct css_driver *driver;
-+ void *private; /* private per subchannel type data */
- } __attribute__ ((aligned(8)));
+ case IPS_DEVICEID_MARCO:
+- switch (ha->subdevice_id) {
++ switch (ha->pcidev->subsystem_device) {
+ case IPS_SUBDEVICEID_6M:
+ ha->ad_type = IPS_ADTYPE_SERVERAID6M;
+ break;
+@@ -2332,20 +2328,20 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
- #define IO_INTERRUPT_TYPE 0 /* I/O interrupt type */
-@@ -111,7 +85,7 @@ struct subchannel {
- #define to_subchannel(n) container_of(n, struct subchannel, dev)
+ strncpy(ha->bios_version, " ?", 8);
- extern int cio_validate_subchannel (struct subchannel *, struct subchannel_id);
--extern int cio_enable_subchannel (struct subchannel *, unsigned int);
-+extern int cio_enable_subchannel(struct subchannel *, unsigned int, u32);
- extern int cio_disable_subchannel (struct subchannel *);
- extern int cio_cancel (struct subchannel *);
- extern int cio_clear (struct subchannel *);
-@@ -125,6 +99,7 @@ extern int cio_get_options (struct subchannel *);
- extern int cio_modify (struct subchannel *);
+- if (ha->device_id == IPS_DEVICEID_COPPERHEAD) {
++ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) {
+ if (IPS_USE_MEMIO(ha)) {
+ /* Memory Mapped I/O */
- int cio_create_sch_lock(struct subchannel *);
-+void do_adapter_IO(void);
+ /* test 1st byte */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- /* Use with care. */
- #ifdef CONFIG_CCW_CONSOLE
-@@ -133,10 +108,12 @@ extern void cio_release_console(void);
- extern int cio_is_console(struct subchannel_id);
- extern struct subchannel *cio_get_console_subchannel(void);
- extern spinlock_t * cio_get_console_lock(void);
-+extern void *cio_get_console_priv(void);
- #else
- #define cio_is_console(schid) 0
- #define cio_get_console_subchannel() NULL
--#define cio_get_console_lock() NULL;
-+#define cio_get_console_lock() NULL
-+#define cio_get_console_priv() NULL
- #endif
+ if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0x55)
+ return;
- extern int cio_show_msg;
-diff --git a/drivers/s390/cio/cio_debug.h b/drivers/s390/cio/cio_debug.h
-index c9bf898..d7429ef 100644
---- a/drivers/s390/cio/cio_debug.h
-+++ b/drivers/s390/cio/cio_debug.h
-@@ -8,20 +8,19 @@ extern debug_info_t *cio_debug_msg_id;
- extern debug_info_t *cio_debug_trace_id;
- extern debug_info_t *cio_debug_crw_id;
+ writel(1, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--#define CIO_TRACE_EVENT(imp, txt) do { \
-+#define CIO_TRACE_EVENT(imp, txt) do { \
- debug_text_event(cio_debug_trace_id, imp, txt); \
- } while (0)
+ if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0xAA)
+@@ -2353,20 +2349,20 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
--#define CIO_MSG_EVENT(imp, args...) do { \
-- debug_sprintf_event(cio_debug_msg_id, imp , ##args); \
-+#define CIO_MSG_EVENT(imp, args...) do { \
-+ debug_sprintf_event(cio_debug_msg_id, imp , ##args); \
- } while (0)
+ /* Get Major version */
+ writel(0x1FF, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--#define CIO_CRW_EVENT(imp, args...) do { \
-- debug_sprintf_event(cio_debug_crw_id, imp , ##args); \
-+#define CIO_CRW_EVENT(imp, args...) do { \
-+ debug_sprintf_event(cio_debug_crw_id, imp , ##args); \
- } while (0)
+ major = readb(ha->mem_ptr + IPS_REG_FLDP);
--static inline void
--CIO_HEX_EVENT(int level, void *data, int length)
-+static inline void CIO_HEX_EVENT(int level, void *data, int length)
- {
- if (unlikely(!cio_debug_trace_id))
- return;
-@@ -32,9 +31,10 @@ CIO_HEX_EVENT(int level, void *data, int length)
- }
- }
+ /* Get Minor version */
+ writel(0x1FE, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+ minor = readb(ha->mem_ptr + IPS_REG_FLDP);
--#define CIO_DEBUG(printk_level,event_level,msg...) ({ \
-- if (cio_show_msg) printk(printk_level msg); \
-- CIO_MSG_EVENT (event_level, msg); \
--})
-+#define CIO_DEBUG(printk_level, event_level, msg...) do { \
-+ if (cio_show_msg) \
-+ printk(printk_level "cio: " msg); \
-+ CIO_MSG_EVENT(event_level, msg); \
-+ } while (0)
+ /* Get SubMinor version */
+ writel(0x1FD, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+ subminor = readb(ha->mem_ptr + IPS_REG_FLDP);
- #endif
-diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
-index c3df2cd..3b45bbe 100644
---- a/drivers/s390/cio/css.c
-+++ b/drivers/s390/cio/css.c
-@@ -51,6 +51,62 @@ for_each_subchannel(int(*fn)(struct subchannel_id, void *), void *data)
- return ret;
- }
+@@ -2375,14 +2371,14 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
-+struct cb_data {
-+ void *data;
-+ struct idset *set;
-+ int (*fn_known_sch)(struct subchannel *, void *);
-+ int (*fn_unknown_sch)(struct subchannel_id, void *);
-+};
-+
-+static int call_fn_known_sch(struct device *dev, void *data)
-+{
-+ struct subchannel *sch = to_subchannel(dev);
-+ struct cb_data *cb = data;
-+ int rc = 0;
-+
-+ idset_sch_del(cb->set, sch->schid);
-+ if (cb->fn_known_sch)
-+ rc = cb->fn_known_sch(sch, cb->data);
-+ return rc;
-+}
-+
-+static int call_fn_unknown_sch(struct subchannel_id schid, void *data)
-+{
-+ struct cb_data *cb = data;
-+ int rc = 0;
-+
-+ if (idset_sch_contains(cb->set, schid))
-+ rc = cb->fn_unknown_sch(schid, cb->data);
-+ return rc;
-+}
-+
-+int for_each_subchannel_staged(int (*fn_known)(struct subchannel *, void *),
-+ int (*fn_unknown)(struct subchannel_id,
-+ void *), void *data)
-+{
-+ struct cb_data cb;
-+ int rc;
-+
-+ cb.set = idset_sch_new();
-+ if (!cb.set)
-+ return -ENOMEM;
-+ idset_fill(cb.set);
-+ cb.data = data;
-+ cb.fn_known_sch = fn_known;
-+ cb.fn_unknown_sch = fn_unknown;
-+ /* Process registered subchannels. */
-+ rc = bus_for_each_dev(&css_bus_type, NULL, &cb, call_fn_known_sch);
-+ if (rc)
-+ goto out;
-+ /* Process unregistered subchannels. */
-+ if (fn_unknown)
-+ rc = for_each_subchannel(call_fn_unknown_sch, &cb);
-+out:
-+ idset_free(cb.set);
+ /* test 1st byte */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0x55)
+ return;
+
+ outl(cpu_to_le32(1), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0xAA)
+@@ -2390,21 +2386,21 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
+
+ /* Get Major version */
+ outl(cpu_to_le32(0x1FF), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+
+ major = inb(ha->io_addr + IPS_REG_FLDP);
+
+ /* Get Minor version */
+ outl(cpu_to_le32(0x1FE), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+
+ minor = inb(ha->io_addr + IPS_REG_FLDP);
+
+ /* Get SubMinor version */
+ outl(cpu_to_le32(0x1FD), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+
+ subminor = inb(ha->io_addr + IPS_REG_FLDP);
+@@ -2740,8 +2736,6 @@ ips_next(ips_ha_t * ha, int intr)
+ SC->result = DID_OK;
+ SC->host_scribble = NULL;
+
+- memset(SC->sense_buffer, 0, sizeof (SC->sense_buffer));
+-
+ scb->target_id = SC->device->id;
+ scb->lun = SC->device->lun;
+ scb->bus = SC->device->channel;
+@@ -2780,10 +2774,11 @@ ips_next(ips_ha_t * ha, int intr)
+ scb->dcdb.cmd_attribute =
+ ips_command_direction[scb->scsi_cmd->cmnd[0]];
+
+- /* Allow a WRITE BUFFER Command to Have no Data */
+- /* This is Used by Tape Flash Utilites */
+- if ((scb->scsi_cmd->cmnd[0] == WRITE_BUFFER) && (scb->data_len == 0))
+- scb->dcdb.cmd_attribute = 0;
++ /* Allow a WRITE BUFFER Command to Have no Data */
++ /* This is Used by Tape Flash Utilites */
++ if ((scb->scsi_cmd->cmnd[0] == WRITE_BUFFER) &&
++ (scb->data_len == 0))
++ scb->dcdb.cmd_attribute = 0;
+
+ if (!(scb->dcdb.cmd_attribute & 0x3))
+ scb->dcdb.transfer_length = 0;
+@@ -3404,7 +3399,7 @@ ips_map_status(ips_ha_t * ha, ips_scb_t * scb, ips_stat_t * sp)
+
+ /* Restrict access to physical DASD */
+ if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
+- ips_scmd_buf_read(scb->scsi_cmd,
++ ips_scmd_buf_read(scb->scsi_cmd,
+ &inquiryData, sizeof (inquiryData));
+ if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK) {
+ errcode = DID_TIME_OUT;
+@@ -3438,13 +3433,11 @@ ips_map_status(ips_ha_t * ha, ips_scb_t * scb, ips_stat_t * sp)
+ (IPS_DCDB_TABLE_TAPE *) & scb->dcdb;
+ memcpy(scb->scsi_cmd->sense_buffer,
+ tapeDCDB->sense_info,
+- sizeof (scb->scsi_cmd->
+- sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ } else {
+ memcpy(scb->scsi_cmd->sense_buffer,
+ scb->dcdb.sense_info,
+- sizeof (scb->scsi_cmd->
+- sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+ }
+ device_error = 2; /* check condition */
+ }
+@@ -3824,7 +3817,6 @@ ips_send_cmd(ips_ha_t * ha, ips_scb_t * scb)
+ /* attempted, a Check Condition occurred, and Sense */
+ /* Data indicating an Invalid CDB OpCode is returned. */
+ sp = (char *) scb->scsi_cmd->sense_buffer;
+- memset(sp, 0, sizeof (scb->scsi_cmd->sense_buffer));
+
+ sp[0] = 0x70; /* Error Code */
+ sp[2] = ILLEGAL_REQUEST; /* Sense Key 5 Illegal Req. */
+@@ -4090,10 +4082,10 @@ ips_chkstatus(ips_ha_t * ha, IPS_STATUS * pstatus)
+ scb->scsi_cmd->result = errcode << 16;
+ } else { /* bus == 0 */
+ /* restrict access to physical drives */
+- if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
+- ips_scmd_buf_read(scb->scsi_cmd,
++ if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
++ ips_scmd_buf_read(scb->scsi_cmd,
+ &inquiryData, sizeof (inquiryData));
+- if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK)
++ if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK)
+ scb->scsi_cmd->result = DID_TIME_OUT << 16;
+ }
+ } /* else */
+@@ -4393,8 +4385,6 @@ ips_free(ips_ha_t * ha)
+ ha->mem_ptr = NULL;
+ }
+
+- if (ha->mem_addr)
+- release_mem_region(ha->mem_addr, ha->mem_len);
+ ha->mem_addr = 0;
+
+ }
+@@ -4661,8 +4651,8 @@ ips_isinit_morpheus(ips_ha_t * ha)
+ uint32_t bits;
+
+ METHOD_TRACE("ips_is_init_morpheus", 1);
+-
+- if (ips_isintr_morpheus(ha))
+
-+ return rc;
-+}
++ if (ips_isintr_morpheus(ha))
+ ips_flush_and_reset(ha);
+
+ post = readl(ha->mem_ptr + IPS_REG_I960_MSG0);
+@@ -4686,7 +4676,7 @@ ips_isinit_morpheus(ips_ha_t * ha)
+ /* state ( was trying to INIT and an interrupt was already pending ) ... */
+ /* */
+ /****************************************************************************/
+-static void
++static void
+ ips_flush_and_reset(ips_ha_t *ha)
+ {
+ ips_scb_t *scb;
+@@ -4718,9 +4708,9 @@ ips_flush_and_reset(ips_ha_t *ha)
+ if (ret == IPS_SUCCESS) {
+ time = 60 * IPS_ONE_SEC; /* Max Wait time is 60 seconds */
+ done = 0;
+-
+
- static struct subchannel *
- css_alloc_subchannel(struct subchannel_id schid)
+ while ((time > 0) && (!done)) {
+- done = ips_poll_for_flush_complete(ha);
++ done = ips_poll_for_flush_complete(ha);
+ /* This may look evil, but it's only done during extremely rare start-up conditions ! */
+ udelay(1000);
+ time--;
+@@ -4749,17 +4739,17 @@ static int
+ ips_poll_for_flush_complete(ips_ha_t * ha)
{
-@@ -77,7 +133,7 @@ css_alloc_subchannel(struct subchannel_id schid)
- * This is fine even on 64bit since the subchannel is always located
- * under 2G.
- */
-- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
-+ sch->schib.pmcw.intparm = (u32)(addr_t)sch;
- ret = cio_modify(sch);
- if (ret) {
- kfree(sch->lock);
-@@ -237,11 +293,25 @@ get_subchannel_by_schid(struct subchannel_id schid)
- return dev ? to_subchannel(dev) : NULL;
- }
+ IPS_STATUS cstatus;
+-
++
+ while (TRUE) {
+ cstatus.value = (*ha->func.statupd) (ha);
-+/**
-+ * css_sch_is_valid() - check if a subchannel is valid
-+ * @schib: subchannel information block for the subchannel
-+ */
-+int css_sch_is_valid(struct schib *schib)
-+{
-+ if ((schib->pmcw.st == SUBCHANNEL_TYPE_IO) && !schib->pmcw.dnv)
-+ return 0;
-+ return 1;
-+}
-+EXPORT_SYMBOL_GPL(css_sch_is_valid);
+ if (cstatus.value == 0xffffffff) /* If No Interrupt to process */
+ break;
+-
+
- static int css_get_subchannel_status(struct subchannel *sch)
- {
- struct schib schib;
+ /* Success is when we see the Flush Command ID */
+- if (cstatus.fields.command_id == IPS_MAX_CMDS )
++ if (cstatus.fields.command_id == IPS_MAX_CMDS)
+ return 1;
+- }
++ }
-- if (stsch(sch->schid, &schib) || !schib.pmcw.dnv)
-+ if (stsch(sch->schid, &schib))
-+ return CIO_GONE;
-+ if (!css_sch_is_valid(&schib))
- return CIO_GONE;
- if (sch->schib.pmcw.dnv && (schib.pmcw.dev != sch->schib.pmcw.dev))
- return CIO_REVALIDATE;
-@@ -293,7 +363,7 @@ static int css_evaluate_known_subchannel(struct subchannel *sch, int slow)
- action = UNREGISTER;
- if (sch->driver && sch->driver->notify) {
- spin_unlock_irqrestore(sch->lock, flags);
-- ret = sch->driver->notify(&sch->dev, event);
-+ ret = sch->driver->notify(sch, event);
- spin_lock_irqsave(sch->lock, flags);
- if (ret)
- action = NONE;
-@@ -349,7 +419,7 @@ static int css_evaluate_new_subchannel(struct subchannel_id schid, int slow)
- /* Will be done on the slow path. */
- return -EAGAIN;
- }
-- if (stsch_err(schid, &schib) || !schib.pmcw.dnv) {
-+ if (stsch_err(schid, &schib) || !css_sch_is_valid(&schib)) {
- /* Unusable - ignore. */
- return 0;
- }
-@@ -388,20 +458,56 @@ static int __init slow_subchannel_init(void)
return 0;
}
+@@ -4903,7 +4893,7 @@ ips_init_copperhead(ips_ha_t * ha)
+ /* Enable busmastering */
+ outb(IPS_BIT_EBM, ha->io_addr + IPS_REG_SCPR);
--static void css_slow_path_func(struct work_struct *unused)
-+static int slow_eval_known_fn(struct subchannel *sch, void *data)
- {
-- struct subchannel_id schid;
-+ int eval;
-+ int rc;
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ /* fix for anaconda64 */
+ outl(0, ha->io_addr + IPS_REG_NDAE);
-- CIO_TRACE_EVENT(4, "slowpath");
- spin_lock_irq(&slow_subchannel_lock);
-- init_subchannel_id(&schid);
-- while (idset_sch_get_first(slow_subchannel_set, &schid)) {
-- idset_sch_del(slow_subchannel_set, schid);
-- spin_unlock_irq(&slow_subchannel_lock);
-- css_evaluate_subchannel(schid, 1);
-- spin_lock_irq(&slow_subchannel_lock);
-+ eval = idset_sch_contains(slow_subchannel_set, sch->schid);
-+ idset_sch_del(slow_subchannel_set, sch->schid);
-+ spin_unlock_irq(&slow_subchannel_lock);
-+ if (eval) {
-+ rc = css_evaluate_known_subchannel(sch, 1);
-+ if (rc == -EAGAIN)
-+ css_schedule_eval(sch->schid);
+@@ -4997,7 +4987,7 @@ ips_init_copperhead_memio(ips_ha_t * ha)
+ /* Enable busmastering */
+ writeb(IPS_BIT_EBM, ha->mem_ptr + IPS_REG_SCPR);
+
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ /* fix for anaconda64 */
+ writel(0, ha->mem_ptr + IPS_REG_NDAE);
+
+@@ -5142,7 +5132,7 @@ ips_reset_copperhead(ips_ha_t * ha)
+ METHOD_TRACE("ips_reset_copperhead", 1);
+
+ DEBUG_VAR(1, "(%s%d) ips_reset_copperhead: io addr: %x, irq: %d",
+- ips_name, ha->host_num, ha->io_addr, ha->irq);
++ ips_name, ha->host_num, ha->io_addr, ha->pcidev->irq);
+
+ reset_counter = 0;
+
+@@ -5187,7 +5177,7 @@ ips_reset_copperhead_memio(ips_ha_t * ha)
+ METHOD_TRACE("ips_reset_copperhead_memio", 1);
+
+ DEBUG_VAR(1, "(%s%d) ips_reset_copperhead_memio: mem addr: %x, irq: %d",
+- ips_name, ha->host_num, ha->mem_addr, ha->irq);
++ ips_name, ha->host_num, ha->mem_addr, ha->pcidev->irq);
+
+ reset_counter = 0;
+
+@@ -5233,7 +5223,7 @@ ips_reset_morpheus(ips_ha_t * ha)
+ METHOD_TRACE("ips_reset_morpheus", 1);
+
+ DEBUG_VAR(1, "(%s%d) ips_reset_morpheus: mem addr: %x, irq: %d",
+- ips_name, ha->host_num, ha->mem_addr, ha->irq);
++ ips_name, ha->host_num, ha->mem_addr, ha->pcidev->irq);
+
+ reset_counter = 0;
+
+@@ -5920,7 +5910,7 @@ ips_read_config(ips_ha_t * ha, int intr)
+
+ return (0);
}
-+ return 0;
-+}
-+
-+static int slow_eval_unknown_fn(struct subchannel_id schid, void *data)
-+{
-+ int eval;
-+ int rc = 0;
-+
-+ spin_lock_irq(&slow_subchannel_lock);
-+ eval = idset_sch_contains(slow_subchannel_set, schid);
-+ idset_sch_del(slow_subchannel_set, schid);
- spin_unlock_irq(&slow_subchannel_lock);
-+ if (eval) {
-+ rc = css_evaluate_new_subchannel(schid, 1);
-+ switch (rc) {
-+ case -EAGAIN:
-+ css_schedule_eval(schid);
-+ rc = 0;
-+ break;
-+ case -ENXIO:
-+ case -ENOMEM:
-+ case -EIO:
-+ /* These should abort looping */
-+ break;
-+ default:
-+ rc = 0;
-+ }
-+ }
-+ return rc;
-+}
+-
+
-+static void css_slow_path_func(struct work_struct *unused)
-+{
-+ CIO_TRACE_EVENT(4, "slowpath");
-+ for_each_subchannel_staged(slow_eval_known_fn, slow_eval_unknown_fn,
-+ NULL);
+ memcpy(ha->conf, ha->ioctl_data, sizeof(*ha->conf));
+ return (1);
}
+@@ -5959,7 +5949,7 @@ ips_readwrite_page5(ips_ha_t * ha, int write, int intr)
+ scb->cmd.nvram.buffer_addr = ha->ioctl_busaddr;
+ if (write)
+ memcpy(ha->ioctl_data, ha->nvram, sizeof(*ha->nvram));
+-
++
+ /* issue the command */
+ if (((ret =
+ ips_send_wait(ha, scb, ips_cmd_timeout, intr)) == IPS_FAILURE)
+@@ -6196,32 +6186,32 @@ ips_erase_bios(ips_ha_t * ha)
- static DECLARE_WORK(slow_path_work, css_slow_path_func);
-@@ -430,7 +536,6 @@ void css_schedule_eval_all(void)
- /* Reprobe subchannel if unregistered. */
- static int reprobe_subchannel(struct subchannel_id schid, void *data)
- {
-- struct subchannel *sch;
- int ret;
+ /* Clear the status register */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- CIO_MSG_EVENT(6, "cio: reprobe 0.%x.%04x\n",
-@@ -438,13 +543,6 @@ static int reprobe_subchannel(struct subchannel_id schid, void *data)
- if (need_reprobe)
- return -EAGAIN;
+ outb(0x50, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- sch = get_subchannel_by_schid(schid);
-- if (sch) {
-- /* Already known. */
-- put_device(&sch->dev);
-- return 0;
-- }
--
- ret = css_probe_device(schid);
- switch (ret) {
- case 0:
-@@ -472,7 +570,7 @@ static void reprobe_all(struct work_struct *unused)
- /* Make sure initial subchannel scan is done. */
- wait_event(ccw_device_init_wq,
- atomic_read(&ccw_device_init_count) == 0);
-- ret = for_each_subchannel(reprobe_subchannel, NULL);
-+ ret = for_each_subchannel_staged(NULL, reprobe_subchannel, NULL);
+ /* Erase Setup */
+ outb(0x20, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- CIO_MSG_EVENT(2, "reprobe done (rc=%d, need_reprobe=%d)\n", ret,
- need_reprobe);
-@@ -787,8 +885,8 @@ int sch_is_pseudo_sch(struct subchannel *sch)
- static int
- css_bus_match (struct device *dev, struct device_driver *drv)
- {
-- struct subchannel *sch = container_of (dev, struct subchannel, dev);
-- struct css_driver *driver = container_of (drv, struct css_driver, drv);
-+ struct subchannel *sch = to_subchannel(dev);
-+ struct css_driver *driver = to_cssdriver(drv);
+ /* Erase Confirm */
+ outb(0xD0, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- if (sch->st == driver->subchannel_type)
- return 1;
-@@ -796,32 +894,36 @@ css_bus_match (struct device *dev, struct device_driver *drv)
- return 0;
- }
+ /* Erase Status */
+ outb(0x70, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--static int
--css_probe (struct device *dev)
-+static int css_probe(struct device *dev)
- {
- struct subchannel *sch;
-+ int ret;
+ timeout = 80000; /* 80 seconds */
- sch = to_subchannel(dev);
-- sch->driver = container_of (dev->driver, struct css_driver, drv);
-- return (sch->driver->probe ? sch->driver->probe(sch) : 0);
-+ sch->driver = to_cssdriver(dev->driver);
-+ ret = sch->driver->probe ? sch->driver->probe(sch) : 0;
-+ if (ret)
-+ sch->driver = NULL;
-+ return ret;
- }
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6241,13 +6231,13 @@ ips_erase_bios(ips_ha_t * ha)
--static int
--css_remove (struct device *dev)
-+static int css_remove(struct device *dev)
- {
- struct subchannel *sch;
-+ int ret;
+ /* try to suspend the erase */
+ outb(0xB0, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- sch = to_subchannel(dev);
-- return (sch->driver->remove ? sch->driver->remove(sch) : 0);
-+ ret = sch->driver->remove ? sch->driver->remove(sch) : 0;
-+ sch->driver = NULL;
-+ return ret;
- }
+ /* wait for 10 seconds */
+ timeout = 10000;
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6277,12 +6267,12 @@ ips_erase_bios(ips_ha_t * ha)
+ /* Otherwise, we were successful */
+ /* clear status */
+ outb(0x50, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--static void
--css_shutdown (struct device *dev)
-+static void css_shutdown(struct device *dev)
- {
- struct subchannel *sch;
+ /* enable reads */
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- sch = to_subchannel(dev);
-- if (sch->driver->shutdown)
-+ if (sch->driver && sch->driver->shutdown)
- sch->driver->shutdown(sch);
- }
+ return (0);
+@@ -6308,32 +6298,32 @@ ips_erase_bios_memio(ips_ha_t * ha)
-@@ -833,6 +935,34 @@ struct bus_type css_bus_type = {
- .shutdown = css_shutdown,
- };
+ /* Clear the status register */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+/**
-+ * css_driver_register - register a css driver
-+ * @cdrv: css driver to register
-+ *
-+ * This is mainly a wrapper around driver_register that sets name
-+ * and bus_type in the embedded struct device_driver correctly.
-+ */
-+int css_driver_register(struct css_driver *cdrv)
-+{
-+ cdrv->drv.name = cdrv->name;
-+ cdrv->drv.bus = &css_bus_type;
-+ cdrv->drv.owner = cdrv->owner;
-+ return driver_register(&cdrv->drv);
-+}
-+EXPORT_SYMBOL_GPL(css_driver_register);
-+
-+/**
-+ * css_driver_unregister - unregister a css driver
-+ * @cdrv: css driver to unregister
-+ *
-+ * This is a wrapper around driver_unregister.
-+ */
-+void css_driver_unregister(struct css_driver *cdrv)
-+{
-+ driver_unregister(&cdrv->drv);
-+}
-+EXPORT_SYMBOL_GPL(css_driver_unregister);
-+
- subsys_initcall(init_channel_subsystem);
+ writeb(0x50, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- MODULE_LICENSE("GPL");
-diff --git a/drivers/s390/cio/css.h b/drivers/s390/cio/css.h
-index 81215ef..b705545 100644
---- a/drivers/s390/cio/css.h
-+++ b/drivers/s390/cio/css.h
-@@ -58,64 +58,6 @@ struct pgid {
- __u32 tod_high; /* high word TOD clock */
- } __attribute__ ((packed));
+ /* Erase Setup */
+ writeb(0x20, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--#define MAX_CIWS 8
--
--/*
-- * sense-id response buffer layout
-- */
--struct senseid {
-- /* common part */
-- __u8 reserved; /* always 0x'FF' */
-- __u16 cu_type; /* control unit type */
-- __u8 cu_model; /* control unit model */
-- __u16 dev_type; /* device type */
-- __u8 dev_model; /* device model */
-- __u8 unused; /* padding byte */
-- /* extended part */
-- struct ciw ciw[MAX_CIWS]; /* variable # of CIWs */
--} __attribute__ ((packed,aligned(4)));
--
--struct ccw_device_private {
-- struct ccw_device *cdev;
-- struct subchannel *sch;
-- int state; /* device state */
-- atomic_t onoff;
-- unsigned long registered;
-- struct ccw_dev_id dev_id; /* device id */
-- struct subchannel_id schid; /* subchannel number */
-- __u8 imask; /* lpm mask for SNID/SID/SPGID */
-- int iretry; /* retry counter SNID/SID/SPGID */
-- struct {
-- unsigned int fast:1; /* post with "channel end" */
-- unsigned int repall:1; /* report every interrupt status */
-- unsigned int pgroup:1; /* do path grouping */
-- unsigned int force:1; /* allow forced online */
-- } __attribute__ ((packed)) options;
-- struct {
-- unsigned int pgid_single:1; /* use single path for Set PGID */
-- unsigned int esid:1; /* Ext. SenseID supported by HW */
-- unsigned int dosense:1; /* delayed SENSE required */
-- unsigned int doverify:1; /* delayed path verification */
-- unsigned int donotify:1; /* call notify function */
-- unsigned int recog_done:1; /* dev. recog. complete */
-- unsigned int fake_irb:1; /* deliver faked irb */
-- unsigned int intretry:1; /* retry internal operation */
-- } __attribute__((packed)) flags;
-- unsigned long intparm; /* user interruption parameter */
-- struct qdio_irq *qdio_data;
-- struct irb irb; /* device status */
-- struct senseid senseid; /* SenseID info */
-- struct pgid pgid[8]; /* path group IDs per chpid*/
-- struct ccw1 iccws[2]; /* ccws for SNID/SID/SPGID commands */
-- struct work_struct kick_work;
-- wait_queue_head_t wait_q;
-- struct timer_list timer;
-- void *cmb; /* measurement information */
-- struct list_head cmb_list; /* list of measured devices */
-- u64 cmb_start_time; /* clock value of cmb reset */
-- void *cmb_wait; /* deferred cmb enable/disable */
--};
--
- /*
- * A css driver handles all subchannels of one type.
- * Currently, we only care about I/O subchannels (type 0), these
-@@ -123,25 +65,35 @@ struct ccw_device_private {
- */
- struct subchannel;
- struct css_driver {
-+ struct module *owner;
- unsigned int subchannel_type;
- struct device_driver drv;
-- void (*irq)(struct device *);
-- int (*notify)(struct device *, int);
-- void (*verify)(struct device *);
-- void (*termination)(struct device *);
-+ void (*irq)(struct subchannel *);
-+ int (*notify)(struct subchannel *, int);
-+ void (*verify)(struct subchannel *);
-+ void (*termination)(struct subchannel *);
- int (*probe)(struct subchannel *);
- int (*remove)(struct subchannel *);
- void (*shutdown)(struct subchannel *);
-+ const char *name;
- };
+ /* Erase Confirm */
+ writeb(0xD0, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+#define to_cssdriver(n) container_of(n, struct css_driver, drv)
-+
- /*
- * all css_drivers have the css_bus_type
- */
- extern struct bus_type css_bus_type;
+ /* Erase Status */
+ writeb(0x70, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+extern int css_driver_register(struct css_driver *);
-+extern void css_driver_unregister(struct css_driver *);
-+
- extern void css_sch_device_unregister(struct subchannel *);
- extern struct subchannel * get_subchannel_by_schid(struct subchannel_id);
- extern int css_init_done;
-+int for_each_subchannel_staged(int (*fn_known)(struct subchannel *, void *),
-+ int (*fn_unknown)(struct subchannel_id,
-+ void *), void *data);
- extern int for_each_subchannel(int(*fn)(struct subchannel_id, void *), void *);
- extern void css_process_crw(int, int);
- extern void css_reiterate_subchannels(void);
-@@ -188,6 +140,8 @@ void css_schedule_eval(struct subchannel_id schid);
- void css_schedule_eval_all(void);
+ timeout = 80000; /* 80 seconds */
- int sch_is_pseudo_sch(struct subchannel *);
-+struct schib;
-+int css_sch_is_valid(struct schib *);
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6353,13 +6343,13 @@ ips_erase_bios_memio(ips_ha_t * ha)
- extern struct workqueue_struct *slow_path_wq;
+ /* try to suspend the erase */
+ writeb(0xB0, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
-index 74f6b53..d35dc3f 100644
---- a/drivers/s390/cio/device.c
-+++ b/drivers/s390/cio/device.c
-@@ -17,6 +17,7 @@
- #include <linux/list.h>
- #include <linux/device.h>
- #include <linux/workqueue.h>
-+#include <linux/timer.h>
+ /* wait for 10 seconds */
+ timeout = 10000;
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6389,12 +6379,12 @@ ips_erase_bios_memio(ips_ha_t * ha)
+ /* Otherwise, we were successful */
+ /* clear status */
+ writeb(0x50, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- #include <asm/ccwdev.h>
- #include <asm/cio.h>
-@@ -28,6 +29,12 @@
- #include "css.h"
- #include "device.h"
- #include "ioasm.h"
-+#include "io_sch.h"
-+
-+static struct timer_list recovery_timer;
-+static spinlock_t recovery_lock;
-+static int recovery_phase;
-+static const unsigned long recovery_delay[] = { 3, 30, 300 };
+ /* enable reads */
+ writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- /******************* bus type handling ***********************/
+ return (0);
+@@ -6423,21 +6413,21 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ for (i = 0; i < buffersize; i++) {
+ /* write a byte */
+ outl(cpu_to_le32(i + offset), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-@@ -115,19 +122,18 @@ static int ccw_uevent(struct device *dev, struct kobj_uevent_env *env)
+ outb(0x40, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- struct bus_type ccw_bus_type;
+ outb(buffer[i], ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--static int io_subchannel_probe (struct subchannel *);
--static int io_subchannel_remove (struct subchannel *);
--static int io_subchannel_notify(struct device *, int);
--static void io_subchannel_verify(struct device *);
--static void io_subchannel_ioterm(struct device *);
-+static void io_subchannel_irq(struct subchannel *);
-+static int io_subchannel_probe(struct subchannel *);
-+static int io_subchannel_remove(struct subchannel *);
-+static int io_subchannel_notify(struct subchannel *, int);
-+static void io_subchannel_verify(struct subchannel *);
-+static void io_subchannel_ioterm(struct subchannel *);
- static void io_subchannel_shutdown(struct subchannel *);
+ /* wait up to one second */
+ timeout = 1000;
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6454,11 +6444,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ if (timeout == 0) {
+ /* timeout error */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- static struct css_driver io_subchannel_driver = {
-+ .owner = THIS_MODULE,
- .subchannel_type = SUBCHANNEL_TYPE_IO,
-- .drv = {
-- .name = "io_subchannel",
-- .bus = &css_bus_type,
-- },
-+ .name = "io_subchannel",
- .irq = io_subchannel_irq,
- .notify = io_subchannel_notify,
- .verify = io_subchannel_verify,
-@@ -142,6 +148,8 @@ struct workqueue_struct *ccw_device_notify_work;
- wait_queue_head_t ccw_device_init_wq;
- atomic_t ccw_device_init_count;
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+static void recovery_func(unsigned long data);
-+
- static int __init
- init_ccw_bus_type (void)
- {
-@@ -149,6 +157,7 @@ init_ccw_bus_type (void)
+ return (1);
+@@ -6468,11 +6458,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ if (status & 0x18) {
+ /* programming error */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- init_waitqueue_head(&ccw_device_init_wq);
- atomic_set(&ccw_device_init_count, 0);
-+ setup_timer(&recovery_timer, recovery_func, 0);
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- ccw_device_work = create_singlethread_workqueue("cio");
- if (!ccw_device_work)
-@@ -166,7 +175,8 @@ init_ccw_bus_type (void)
- if ((ret = bus_register (&ccw_bus_type)))
- goto out_err;
+ return (1);
+@@ -6481,11 +6471,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
-- if ((ret = driver_register(&io_subchannel_driver.drv)))
-+ ret = css_driver_register(&io_subchannel_driver);
-+ if (ret)
- goto out_err;
+ /* Enable reading */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- wait_event(ccw_device_init_wq,
-@@ -186,7 +196,7 @@ out_err:
- static void __exit
- cleanup_ccw_bus_type (void)
- {
-- driver_unregister(&io_subchannel_driver.drv);
-+ css_driver_unregister(&io_subchannel_driver);
- bus_unregister(&ccw_bus_type);
- destroy_workqueue(ccw_device_notify_work);
- destroy_workqueue(ccw_device_work);
-@@ -773,7 +783,7 @@ static void sch_attach_device(struct subchannel *sch,
- {
- css_update_ssd_info(sch);
- spin_lock_irq(sch->lock);
-- sch->dev.driver_data = cdev;
-+ sch_set_cdev(sch, cdev);
- cdev->private->schid = sch->schid;
- cdev->ccwlock = sch->lock;
- device_trigger_reprobe(sch);
-@@ -795,7 +805,7 @@ static void sch_attach_disconnected_device(struct subchannel *sch,
- put_device(&other_sch->dev);
- return;
- }
-- other_sch->dev.driver_data = NULL;
-+ sch_set_cdev(other_sch, NULL);
- /* No need to keep a subchannel without ccw device around. */
- css_sch_device_unregister(other_sch);
- put_device(&other_sch->dev);
-@@ -831,12 +841,12 @@ static void sch_create_and_recog_new_device(struct subchannel *sch)
- return;
- }
- spin_lock_irq(sch->lock);
-- sch->dev.driver_data = cdev;
-+ sch_set_cdev(sch, cdev);
- spin_unlock_irq(sch->lock);
- /* Start recognition for the new ccw device. */
- if (io_subchannel_recog(cdev, sch)) {
- spin_lock_irq(sch->lock);
-- sch->dev.driver_data = NULL;
-+ sch_set_cdev(sch, NULL);
- spin_unlock_irq(sch->lock);
- if (cdev->dev.release)
- cdev->dev.release(&cdev->dev);
-@@ -940,7 +950,7 @@ io_subchannel_register(struct work_struct *work)
- cdev->private->dev_id.devno, ret);
- put_device(&cdev->dev);
- spin_lock_irqsave(sch->lock, flags);
-- sch->dev.driver_data = NULL;
-+ sch_set_cdev(sch, NULL);
- spin_unlock_irqrestore(sch->lock, flags);
- kfree (cdev->private);
- kfree (cdev);
-@@ -1022,7 +1032,7 @@ io_subchannel_recog(struct ccw_device *cdev, struct subchannel *sch)
- int rc;
- struct ccw_device_private *priv;
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- sch->dev.driver_data = cdev;
-+ sch_set_cdev(sch, cdev);
- sch->driver = &io_subchannel_driver;
- cdev->ccwlock = sch->lock;
+ return (0);
+@@ -6514,21 +6504,21 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ for (i = 0; i < buffersize; i++) {
+ /* write a byte */
+ writel(i + offset, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-@@ -1082,7 +1092,7 @@ static void ccw_device_move_to_sch(struct work_struct *work)
- }
- if (former_parent) {
- spin_lock_irq(former_parent->lock);
-- former_parent->dev.driver_data = NULL;
-+ sch_set_cdev(former_parent, NULL);
- spin_unlock_irq(former_parent->lock);
- css_sch_device_unregister(former_parent);
- /* Reset intparm to zeroes. */
-@@ -1096,6 +1106,18 @@ out:
- put_device(&cdev->dev);
- }
+ writeb(0x40, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+static void io_subchannel_irq(struct subchannel *sch)
-+{
-+ struct ccw_device *cdev;
-+
-+ cdev = sch_get_cdev(sch);
-+
-+ CIO_TRACE_EVENT(3, "IRQ");
-+ CIO_TRACE_EVENT(3, sch->dev.bus_id);
-+ if (cdev)
-+ dev_fsm_event(cdev, DEV_EVENT_INTERRUPT);
-+}
-+
- static int
- io_subchannel_probe (struct subchannel *sch)
- {
-@@ -1104,13 +1126,13 @@ io_subchannel_probe (struct subchannel *sch)
- unsigned long flags;
- struct ccw_dev_id dev_id;
+ writeb(buffer[i], ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- if (sch->dev.driver_data) {
-+ cdev = sch_get_cdev(sch);
-+ if (cdev) {
- /*
- * This subchannel already has an associated ccw_device.
- * Register it and exit. This happens for all early
- * device, e.g. the console.
- */
-- cdev = sch->dev.driver_data;
- cdev->dev.groups = ccwdev_attr_groups;
- device_initialize(&cdev->dev);
- ccw_device_register(cdev);
-@@ -1132,6 +1154,11 @@ io_subchannel_probe (struct subchannel *sch)
- */
- dev_id.devno = sch->schib.pmcw.dev;
- dev_id.ssid = sch->schid.ssid;
-+ /* Allocate I/O subchannel private data. */
-+ sch->private = kzalloc(sizeof(struct io_subchannel_private),
-+ GFP_KERNEL | GFP_DMA);
-+ if (!sch->private)
-+ return -ENOMEM;
- cdev = get_disc_ccwdev_by_dev_id(&dev_id, NULL);
- if (!cdev)
- cdev = get_orphaned_ccwdev_by_dev_id(to_css(sch->dev.parent),
-@@ -1149,16 +1176,18 @@ io_subchannel_probe (struct subchannel *sch)
- return 0;
- }
- cdev = io_subchannel_create_ccwdev(sch);
-- if (IS_ERR(cdev))
-+ if (IS_ERR(cdev)) {
-+ kfree(sch->private);
- return PTR_ERR(cdev);
--
-+ }
- rc = io_subchannel_recog(cdev, sch);
- if (rc) {
- spin_lock_irqsave(sch->lock, flags);
-- sch->dev.driver_data = NULL;
-+ sch_set_cdev(sch, NULL);
- spin_unlock_irqrestore(sch->lock, flags);
- if (cdev->dev.release)
- cdev->dev.release(&cdev->dev);
-+ kfree(sch->private);
- }
+ /* wait up to one second */
+ timeout = 1000;
+ while (timeout > 0) {
+- if (ha->revision_id == IPS_REVID_TROMBONE64) {
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+ udelay(25); /* 25 us */
+ }
+@@ -6545,11 +6535,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ if (timeout == 0) {
+ /* timeout error */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- return rc;
-@@ -1170,25 +1199,25 @@ io_subchannel_remove (struct subchannel *sch)
- struct ccw_device *cdev;
- unsigned long flags;
+ writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return 0;
-- cdev = sch->dev.driver_data;
- /* Set ccw device to not operational and drop reference. */
- spin_lock_irqsave(cdev->ccwlock, flags);
-- sch->dev.driver_data = NULL;
-+ sch_set_cdev(sch, NULL);
- cdev->private->state = DEV_STATE_NOT_OPER;
- spin_unlock_irqrestore(cdev->ccwlock, flags);
- ccw_device_unregister(cdev);
- put_device(&cdev->dev);
-+ kfree(sch->private);
- return 0;
- }
+ return (1);
+@@ -6559,11 +6549,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ if (status & 0x18) {
+ /* programming error */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--static int
--io_subchannel_notify(struct device *dev, int event)
-+static int io_subchannel_notify(struct subchannel *sch, int event)
- {
- struct ccw_device *cdev;
+ writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- cdev = dev->driver_data;
-+ cdev = sch_get_cdev(sch);
- if (!cdev)
- return 0;
- if (!cdev->drv)
-@@ -1198,22 +1227,20 @@ io_subchannel_notify(struct device *dev, int event)
- return cdev->drv->notify ? cdev->drv->notify(cdev, event) : 0;
- }
+ return (1);
+@@ -6572,11 +6562,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
--static void
--io_subchannel_verify(struct device *dev)
-+static void io_subchannel_verify(struct subchannel *sch)
- {
- struct ccw_device *cdev;
+ /* Enable reading */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- cdev = dev->driver_data;
-+ cdev = sch_get_cdev(sch);
- if (cdev)
- dev_fsm_event(cdev, DEV_EVENT_VERIFY);
- }
+ writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
--static void
--io_subchannel_ioterm(struct device *dev)
-+static void io_subchannel_ioterm(struct subchannel *sch)
- {
- struct ccw_device *cdev;
+ return (0);
+@@ -6601,14 +6591,14 @@ ips_verify_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
-- cdev = dev->driver_data;
-+ cdev = sch_get_cdev(sch);
- if (!cdev)
- return;
- /* Internal I/O will be retried by the interrupt handler. */
-@@ -1231,7 +1258,7 @@ io_subchannel_shutdown(struct subchannel *sch)
- struct ccw_device *cdev;
- int ret;
+ /* test 1st byte */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-- cdev = sch->dev.driver_data;
-+ cdev = sch_get_cdev(sch);
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0x55)
+ return (1);
- if (cio_is_console(sch->schid))
- return;
-@@ -1271,6 +1298,9 @@ ccw_device_console_enable (struct ccw_device *cdev, struct subchannel *sch)
- {
- int rc;
+ outl(cpu_to_le32(1), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0xAA)
+ return (1);
+@@ -6617,7 +6607,7 @@ ips_verify_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ for (i = 2; i < buffersize; i++) {
-+ /* Attach subchannel private data. */
-+ sch->private = cio_get_console_priv();
-+ memset(sch->private, 0, sizeof(struct io_subchannel_private));
- /* Initialize the ccw_device structure. */
- cdev->dev.parent= &sch->dev;
- rc = io_subchannel_recog(cdev, sch);
-@@ -1456,6 +1486,7 @@ int ccw_driver_register(struct ccw_driver *cdriver)
+ outl(cpu_to_le32(i + offset), ha->io_addr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- drv->bus = &ccw_bus_type;
- drv->name = cdriver->name;
-+ drv->owner = cdriver->owner;
+ checksum = (uint8_t) checksum + inb(ha->io_addr + IPS_REG_FLDP);
+@@ -6650,14 +6640,14 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- return driver_register(drv);
- }
-@@ -1481,6 +1512,60 @@ ccw_device_get_subchannel_id(struct ccw_device *cdev)
- return sch->schid;
- }
+ /* test 1st byte */
+ writel(0, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
-+static int recovery_check(struct device *dev, void *data)
-+{
-+ struct ccw_device *cdev = to_ccwdev(dev);
-+ int *redo = data;
-+
-+ spin_lock_irq(cdev->ccwlock);
-+ switch (cdev->private->state) {
-+ case DEV_STATE_DISCONNECTED:
-+ CIO_MSG_EVENT(3, "recovery: trigger 0.%x.%04x\n",
-+ cdev->private->dev_id.ssid,
-+ cdev->private->dev_id.devno);
-+ dev_fsm_event(cdev, DEV_EVENT_VERIFY);
-+ *redo = 1;
-+ break;
-+ case DEV_STATE_DISCONNECTED_SENSE_ID:
-+ *redo = 1;
-+ break;
-+ }
-+ spin_unlock_irq(cdev->ccwlock);
-+
-+ return 0;
-+}
-+
-+static void recovery_func(unsigned long data)
-+{
-+ int redo = 0;
-+
-+ bus_for_each_dev(&ccw_bus_type, NULL, &redo, recovery_check);
-+ if (redo) {
-+ spin_lock_irq(&recovery_lock);
-+ if (!timer_pending(&recovery_timer)) {
-+ if (recovery_phase < ARRAY_SIZE(recovery_delay) - 1)
-+ recovery_phase++;
-+ mod_timer(&recovery_timer, jiffies +
-+ recovery_delay[recovery_phase] * HZ);
-+ }
-+ spin_unlock_irq(&recovery_lock);
-+ } else
-+ CIO_MSG_EVENT(2, "recovery: end\n");
-+}
-+
-+void ccw_device_schedule_recovery(void)
-+{
-+ unsigned long flags;
-+
-+ CIO_MSG_EVENT(2, "recovery: schedule\n");
-+ spin_lock_irqsave(&recovery_lock, flags);
-+ if (!timer_pending(&recovery_timer) || (recovery_phase != 0)) {
-+ recovery_phase = 0;
-+ mod_timer(&recovery_timer, jiffies + recovery_delay[0] * HZ);
-+ }
-+ spin_unlock_irqrestore(&recovery_lock, flags);
-+}
-+
- MODULE_LICENSE("GPL");
- EXPORT_SYMBOL(ccw_device_set_online);
- EXPORT_SYMBOL(ccw_device_set_offline);
-diff --git a/drivers/s390/cio/device.h b/drivers/s390/cio/device.h
-index 0d40896..d40a2ff 100644
---- a/drivers/s390/cio/device.h
-+++ b/drivers/s390/cio/device.h
-@@ -5,6 +5,8 @@
- #include <asm/atomic.h>
- #include <linux/wait.h>
+ if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0x55)
+ return (1);
-+#include "io_sch.h"
-+
- /*
- * states of the device statemachine
- */
-@@ -74,7 +76,6 @@ extern struct workqueue_struct *ccw_device_notify_work;
- extern wait_queue_head_t ccw_device_init_wq;
- extern atomic_t ccw_device_init_count;
+ writel(1, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
+ if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0xAA)
+ return (1);
+@@ -6666,7 +6656,7 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ for (i = 2; i < buffersize; i++) {
--void io_subchannel_irq (struct device *pdev);
- void io_subchannel_recog_done(struct ccw_device *cdev);
+ writel(i + offset, ha->mem_ptr + IPS_REG_FLAP);
+- if (ha->revision_id == IPS_REVID_TROMBONE64)
++ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
+ udelay(25); /* 25 us */
- int ccw_device_cancel_halt_clear(struct ccw_device *);
-@@ -87,6 +88,8 @@ int ccw_device_recognition(struct ccw_device *);
- int ccw_device_online(struct ccw_device *);
- int ccw_device_offline(struct ccw_device *);
+ checksum =
+@@ -6837,24 +6827,18 @@ ips_register_scsi(int index)
+ }
+ ha = IPS_HA(sh);
+ memcpy(ha, oldha, sizeof (ips_ha_t));
+- free_irq(oldha->irq, oldha);
++ free_irq(oldha->pcidev->irq, oldha);
+ /* Install the interrupt handler with the new ha */
+- if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
++ if (request_irq(ha->pcidev->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
+ IPS_PRINTK(KERN_WARNING, ha->pcidev,
+ "Unable to install interrupt handler\n");
+- scsi_host_put(sh);
+- return -1;
++ goto err_out_sh;
+ }
-+void ccw_device_schedule_recovery(void);
-+
- /* Function prototypes for device status and basic sense stuff. */
- void ccw_device_accumulate_irb(struct ccw_device *, struct irb *);
- void ccw_device_accumulate_basic_sense(struct ccw_device *, struct irb *);
-diff --git a/drivers/s390/cio/device_fsm.c b/drivers/s390/cio/device_fsm.c
-index bfad421..4b92c84 100644
---- a/drivers/s390/cio/device_fsm.c
-+++ b/drivers/s390/cio/device_fsm.c
-@@ -25,14 +25,16 @@
- #include "ioasm.h"
- #include "chp.h"
+ kfree(oldha);
+- ips_sh[index] = sh;
+- ips_ha[index] = ha;
-+static int timeout_log_enabled;
+ /* Store away needed values for later use */
+- sh->io_port = ha->io_addr;
+- sh->n_io_port = ha->io_addr ? 255 : 0;
+ sh->unique_id = (ha->io_addr) ? ha->io_addr : ha->mem_addr;
+- sh->irq = ha->irq;
+ sh->sg_tablesize = sh->hostt->sg_tablesize;
+ sh->can_queue = sh->hostt->can_queue;
+ sh->cmd_per_lun = sh->hostt->cmd_per_lun;
+@@ -6867,10 +6851,21 @@ ips_register_scsi(int index)
+ sh->max_channel = ha->nbus - 1;
+ sh->can_queue = ha->max_cmds - 1;
+
+- scsi_add_host(sh, NULL);
++ if (scsi_add_host(sh, &ha->pcidev->dev))
++ goto err_out;
+
- int
- device_is_online(struct subchannel *sch)
- {
- struct ccw_device *cdev;
++ ips_sh[index] = sh;
++ ips_ha[index] = ha;
++
+ scsi_scan_host(sh);
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return 0;
-- cdev = sch->dev.driver_data;
- return (cdev->private->state == DEV_STATE_ONLINE);
+ return 0;
++
++err_out:
++ free_irq(ha->pcidev->irq, ha);
++err_out_sh:
++ scsi_host_put(sh);
++ return -1;
}
-@@ -41,9 +43,9 @@ device_is_disconnected(struct subchannel *sch)
- {
- struct ccw_device *cdev;
-
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return 0;
-- cdev = sch->dev.driver_data;
- return (cdev->private->state == DEV_STATE_DISCONNECTED ||
- cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID);
- }
-@@ -53,19 +55,21 @@ device_set_disconnected(struct subchannel *sch)
+ /*---------------------------------------------------------------------------*/
+@@ -6882,20 +6877,14 @@ ips_register_scsi(int index)
+ static void __devexit
+ ips_remove_device(struct pci_dev *pci_dev)
{
- struct ccw_device *cdev;
+- int i;
+- struct Scsi_Host *sh;
+- ips_ha_t *ha;
++ struct Scsi_Host *sh = pci_get_drvdata(pci_dev);
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return;
-- cdev = sch->dev.driver_data;
- ccw_device_set_timeout(cdev, 0);
- cdev->private->flags.fake_irb = 0;
- cdev->private->state = DEV_STATE_DISCONNECTED;
-+ if (cdev->online)
-+ ccw_device_schedule_recovery();
+- for (i = 0; i < IPS_MAX_ADAPTERS; i++) {
+- ha = ips_ha[i];
+- if (ha) {
+- if ((pci_dev->bus->number == ha->pcidev->bus->number) &&
+- (pci_dev->devfn == ha->pcidev->devfn)) {
+- sh = ips_sh[i];
+- ips_release(sh);
+- }
+- }
+- }
++ pci_set_drvdata(pci_dev, NULL);
++
++ ips_release(sh);
++
++ pci_release_regions(pci_dev);
++ pci_disable_device(pci_dev);
}
- void device_set_intretry(struct subchannel *sch)
+ /****************************************************************************/
+@@ -6949,12 +6938,17 @@ module_exit(ips_module_exit);
+ static int __devinit
+ ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent)
{
- struct ccw_device *cdev;
+- int uninitialized_var(index);
++ int index = -1;
+ int rc;
-- cdev = sch->dev.driver_data;
-+ cdev = sch_get_cdev(sch);
- if (!cdev)
- return;
- cdev->private->flags.intretry = 1;
-@@ -75,13 +79,62 @@ int device_trigger_verify(struct subchannel *sch)
- {
- struct ccw_device *cdev;
+ METHOD_TRACE("ips_insert_device", 1);
+- if (pci_enable_device(pci_dev))
+- return -1;
++ rc = pci_enable_device(pci_dev);
++ if (rc)
++ return rc;
++
++ rc = pci_request_regions(pci_dev, "ips");
++ if (rc)
++ goto err_out;
-- cdev = sch->dev.driver_data;
-+ cdev = sch_get_cdev(sch);
- if (!cdev || !cdev->online)
- return -EINVAL;
- dev_fsm_event(cdev, DEV_EVENT_VERIFY);
- return 0;
- }
+ rc = ips_init_phase1(pci_dev, &index);
+ if (rc == SUCCESS)
+@@ -6970,6 +6964,19 @@ ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent)
+ ips_num_controllers++;
-+static int __init ccw_timeout_log_setup(char *unused)
-+{
-+ timeout_log_enabled = 1;
-+ return 1;
-+}
-+
-+__setup("ccw_timeout_log", ccw_timeout_log_setup);
-+
-+static void ccw_timeout_log(struct ccw_device *cdev)
-+{
-+ struct schib schib;
-+ struct subchannel *sch;
-+ struct io_subchannel_private *private;
-+ int cc;
-+
-+ sch = to_subchannel(cdev->dev.parent);
-+ private = to_io_private(sch);
-+ cc = stsch(sch->schid, &schib);
-+
-+ printk(KERN_WARNING "cio: ccw device timeout occurred at %llx, "
-+ "device information:\n", get_clock());
-+ printk(KERN_WARNING "cio: orb:\n");
-+ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
-+ &private->orb, sizeof(private->orb), 0);
-+ printk(KERN_WARNING "cio: ccw device bus id: %s\n", cdev->dev.bus_id);
-+ printk(KERN_WARNING "cio: subchannel bus id: %s\n", sch->dev.bus_id);
-+ printk(KERN_WARNING "cio: subchannel lpm: %02x, opm: %02x, "
-+ "vpm: %02x\n", sch->lpm, sch->opm, sch->vpm);
+ ips_next_controller = ips_num_controllers;
+
-+ if ((void *)(addr_t)private->orb.cpa == &private->sense_ccw ||
-+ (void *)(addr_t)private->orb.cpa == cdev->private->iccws)
-+ printk(KERN_WARNING "cio: last channel program (intern):\n");
-+ else
-+ printk(KERN_WARNING "cio: last channel program:\n");
++ if (rc < 0) {
++ rc = -ENODEV;
++ goto err_out_regions;
++ }
+
-+ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
-+ (void *)(addr_t)private->orb.cpa,
-+ sizeof(struct ccw1), 0);
-+ printk(KERN_WARNING "cio: ccw device state: %d\n",
-+ cdev->private->state);
-+ printk(KERN_WARNING "cio: store subchannel returned: cc=%d\n", cc);
-+ printk(KERN_WARNING "cio: schib:\n");
-+ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
-+ &schib, sizeof(schib), 0);
-+ printk(KERN_WARNING "cio: ccw device flags:\n");
-+ print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
-+ &cdev->private->flags, sizeof(cdev->private->flags), 0);
-+}
++ pci_set_drvdata(pci_dev, ips_sh[index]);
++ return 0;
+
- /*
- * Timeout function. It just triggers a DEV_EVENT_TIMEOUT.
- */
-@@ -92,6 +145,8 @@ ccw_device_timeout(unsigned long data)
-
- cdev = (struct ccw_device *) data;
- spin_lock_irq(cdev->ccwlock);
-+ if (timeout_log_enabled)
-+ ccw_timeout_log(cdev);
- dev_fsm_event(cdev, DEV_EVENT_TIMEOUT);
- spin_unlock_irq(cdev->ccwlock);
- }
-@@ -122,9 +177,9 @@ device_kill_pending_timer(struct subchannel *sch)
- {
- struct ccw_device *cdev;
-
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return;
-- cdev = sch->dev.driver_data;
- ccw_device_set_timeout(cdev, 0);
++err_out_regions:
++ pci_release_regions(pci_dev);
++err_out:
++ pci_disable_device(pci_dev);
+ return rc;
}
-@@ -268,7 +323,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
- switch (state) {
- case DEV_STATE_NOT_OPER:
- CIO_DEBUG(KERN_WARNING, 2,
-- "cio: SenseID : unknown device %04x on subchannel "
-+ "SenseID : unknown device %04x on subchannel "
- "0.%x.%04x\n", cdev->private->dev_id.devno,
- sch->schid.ssid, sch->schid.sch_no);
- break;
-@@ -294,7 +349,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
+@@ -6992,8 +6999,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ uint32_t mem_len;
+ uint8_t bus;
+ uint8_t func;
+- uint8_t irq;
+- uint16_t subdevice_id;
+ int j;
+ int index;
+ dma_addr_t dma_address;
+@@ -7004,7 +7009,7 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ METHOD_TRACE("ips_init_phase1", 1);
+ index = IPS_MAX_ADAPTERS;
+ for (j = 0; j < IPS_MAX_ADAPTERS; j++) {
+- if (ips_ha[j] == 0) {
++ if (ips_ha[j] == NULL) {
+ index = j;
+ break;
}
- /* Issue device info message. */
- CIO_DEBUG(KERN_INFO, 2,
-- "cio: SenseID : device 0.%x.%04x reports: "
-+ "SenseID : device 0.%x.%04x reports: "
- "CU Type/Mod = %04X/%02X, Dev Type/Mod = "
- "%04X/%02X\n",
- cdev->private->dev_id.ssid,
-@@ -304,7 +359,7 @@ ccw_device_recog_done(struct ccw_device *cdev, int state)
- break;
- case DEV_STATE_BOXED:
- CIO_DEBUG(KERN_WARNING, 2,
-- "cio: SenseID : boxed device %04x on subchannel "
-+ "SenseID : boxed device %04x on subchannel "
- "0.%x.%04x\n", cdev->private->dev_id.devno,
- sch->schid.ssid, sch->schid.sch_no);
- break;
-@@ -349,7 +404,7 @@ ccw_device_oper_notify(struct work_struct *work)
- sch = to_subchannel(cdev->dev.parent);
- if (sch->driver && sch->driver->notify) {
- spin_unlock_irqrestore(cdev->ccwlock, flags);
-- ret = sch->driver->notify(&sch->dev, CIO_OPER);
-+ ret = sch->driver->notify(sch, CIO_OPER);
- spin_lock_irqsave(cdev->ccwlock, flags);
- } else
- ret = 0;
-@@ -389,7 +444,7 @@ ccw_device_done(struct ccw_device *cdev, int state)
+@@ -7014,7 +7019,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ return -1;
- if (state == DEV_STATE_BOXED)
- CIO_DEBUG(KERN_WARNING, 2,
-- "cio: Boxed device %04x on subchannel %04x\n",
-+ "Boxed device %04x on subchannel %04x\n",
- cdev->private->dev_id.devno, sch->schid.sch_no);
+ /* stuff that we get in dev */
+- irq = pci_dev->irq;
+ bus = pci_dev->bus->number;
+ func = pci_dev->devfn;
- if (cdev->private->flags.donotify) {
-@@ -500,7 +555,8 @@ ccw_device_recognition(struct ccw_device *cdev)
- (cdev->private->state != DEV_STATE_BOXED))
- return -EINVAL;
- sch = to_subchannel(cdev->dev.parent);
-- ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc);
-+ ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc,
-+ (u32)(addr_t)sch);
- if (ret != 0)
- /* Couldn't enable the subchannel for i/o. Sick device. */
- return ret;
-@@ -587,9 +643,10 @@ ccw_device_verify_done(struct ccw_device *cdev, int err)
- default:
- /* Reset oper notify indication after verify error. */
- cdev->private->flags.donotify = 0;
-- if (cdev->online)
-+ if (cdev->online) {
-+ ccw_device_set_timeout(cdev, 0);
- dev_fsm_event(cdev, DEV_EVENT_NOTOPER);
-- else
-+ } else
- ccw_device_done(cdev, DEV_STATE_NOT_OPER);
- break;
- }
-@@ -610,7 +667,8 @@ ccw_device_online(struct ccw_device *cdev)
- sch = to_subchannel(cdev->dev.parent);
- if (css_init_done && !get_device(&cdev->dev))
- return -ENODEV;
-- ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc);
-+ ret = cio_enable_subchannel(sch, sch->schib.pmcw.isc,
-+ (u32)(addr_t)sch);
- if (ret != 0) {
- /* Couldn't enable the subchannel for i/o. Sick device. */
- if (ret == -ENODEV)
-@@ -937,7 +995,7 @@ void device_kill_io(struct subchannel *sch)
- int ret;
- struct ccw_device *cdev;
+@@ -7042,34 +7046,17 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ uint32_t base;
+ uint32_t offs;
-- cdev = sch->dev.driver_data;
-+ cdev = sch_get_cdev(sch);
- ret = ccw_device_cancel_halt_clear(cdev);
- if (ret == -EBUSY) {
- ccw_device_set_timeout(cdev, 3*HZ);
-@@ -990,7 +1048,8 @@ ccw_device_start_id(struct ccw_device *cdev, enum dev_event dev_event)
- struct subchannel *sch;
+- if (!request_mem_region(mem_addr, mem_len, "ips")) {
+- IPS_PRINTK(KERN_WARNING, pci_dev,
+- "Couldn't allocate IO Memory space %x len %d.\n",
+- mem_addr, mem_len);
+- return -1;
+- }
+-
+ base = mem_addr & PAGE_MASK;
+ offs = mem_addr - base;
+ ioremap_ptr = ioremap(base, PAGE_SIZE);
++ if (!ioremap_ptr)
++ return -1;
+ mem_ptr = ioremap_ptr + offs;
+ } else {
+ ioremap_ptr = NULL;
+ mem_ptr = NULL;
+ }
- sch = to_subchannel(cdev->dev.parent);
-- if (cio_enable_subchannel(sch, sch->schib.pmcw.isc) != 0)
-+ if (cio_enable_subchannel(sch, sch->schib.pmcw.isc,
-+ (u32)(addr_t)sch) != 0)
- /* Couldn't enable the subchannel for i/o. Sick device. */
- return;
+- /* setup I/O mapped area (if applicable) */
+- if (io_addr) {
+- if (!request_region(io_addr, io_len, "ips")) {
+- IPS_PRINTK(KERN_WARNING, pci_dev,
+- "Couldn't allocate IO space %x len %d.\n",
+- io_addr, io_len);
+- return -1;
+- }
+- }
+-
+- subdevice_id = pci_dev->subsystem_device;
+-
+ /* found a controller */
+ ha = kzalloc(sizeof (ips_ha_t), GFP_KERNEL);
+ if (ha == NULL) {
+@@ -7078,13 +7065,11 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ return -1;
+ }
-@@ -1006,9 +1065,9 @@ device_trigger_reprobe(struct subchannel *sch)
- {
- struct ccw_device *cdev;
+-
+ ips_sh[index] = NULL;
+ ips_ha[index] = ha;
+ ha->active = 1;
-- if (!sch->dev.driver_data)
-+ cdev = sch_get_cdev(sch);
-+ if (!cdev)
- return;
-- cdev = sch->dev.driver_data;
- if (cdev->private->state != DEV_STATE_DISCONNECTED)
- return;
+ /* Store info in HA structure */
+- ha->irq = irq;
+ ha->io_addr = io_addr;
+ ha->io_len = io_len;
+ ha->mem_addr = mem_addr;
+@@ -7092,10 +7077,7 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
+ ha->mem_ptr = mem_ptr;
+ ha->ioremap_ptr = ioremap_ptr;
+ ha->host_num = (uint32_t) index;
+- ha->revision_id = pci_dev->revision;
+ ha->slot_num = PCI_SLOT(pci_dev->devfn);
+- ha->device_id = pci_dev->device;
+- ha->subdevice_id = subdevice_id;
+ ha->pcidev = pci_dev;
-@@ -1028,7 +1087,7 @@ device_trigger_reprobe(struct subchannel *sch)
- sch->schib.pmcw.ena = 0;
- if ((sch->lpm & (sch->lpm - 1)) != 0)
- sch->schib.pmcw.mp = 1;
-- sch->schib.pmcw.intparm = (__u32)(unsigned long)sch;
-+ sch->schib.pmcw.intparm = (u32)(addr_t)sch;
- /* We should also udate ssd info, but this has to wait. */
- /* Check if this is another device which appeared on the same sch. */
- if (sch->schib.pmcw.dev != cdev->private->dev_id.devno) {
-@@ -1223,21 +1282,4 @@ fsm_func_t *dev_jumptable[NR_DEV_STATES][NR_DEV_EVENTS] = {
- },
- };
+ /*
+@@ -7240,7 +7222,7 @@ ips_init_phase2(int index)
+ }
--/*
-- * io_subchannel_irq is called for "real" interrupts or for status
-- * pending conditions on msch.
-- */
--void
--io_subchannel_irq (struct device *pdev)
--{
-- struct ccw_device *cdev;
--
-- cdev = to_subchannel(pdev)->dev.driver_data;
--
-- CIO_TRACE_EVENT (3, "IRQ");
-- CIO_TRACE_EVENT (3, pdev->bus_id);
-- if (cdev)
-- dev_fsm_event(cdev, DEV_EVENT_INTERRUPT);
--}
--
- EXPORT_SYMBOL_GPL(ccw_device_set_timeout);
-diff --git a/drivers/s390/cio/device_id.c b/drivers/s390/cio/device_id.c
-index 156f3f9..918b8b8 100644
---- a/drivers/s390/cio/device_id.c
-+++ b/drivers/s390/cio/device_id.c
-@@ -24,6 +24,7 @@
- #include "css.h"
- #include "device.h"
- #include "ioasm.h"
-+#include "io_sch.h"
+ /* Install the interrupt handler */
+- if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
++ if (request_irq(ha->pcidev->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
+ IPS_PRINTK(KERN_WARNING, ha->pcidev,
+ "Unable to install interrupt handler\n");
+ return ips_abort_init(ha, index);
+@@ -7253,14 +7235,14 @@ ips_init_phase2(int index)
+ if (!ips_allocatescbs(ha)) {
+ IPS_PRINTK(KERN_WARNING, ha->pcidev,
+ "Unable to allocate a CCB\n");
+- free_irq(ha->irq, ha);
++ free_irq(ha->pcidev->irq, ha);
+ return ips_abort_init(ha, index);
+ }
- /*
- * Input :
-@@ -219,11 +220,13 @@ ccw_device_check_sense_id(struct ccw_device *cdev)
- return -EAGAIN;
+ if (!ips_hainit(ha)) {
+ IPS_PRINTK(KERN_WARNING, ha->pcidev,
+ "Unable to initialize controller\n");
+- free_irq(ha->irq, ha);
++ free_irq(ha->pcidev->irq, ha);
+ return ips_abort_init(ha, index);
}
- if (irb->scsw.cc == 3) {
-- if ((sch->orb.lpm &
-- sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
-+ u8 lpm;
-+
-+ lpm = to_io_private(sch)->orb.lpm;
-+ if ((lpm & sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
- CIO_MSG_EVENT(2, "SenseID : path %02X for device %04x "
- "on subchannel 0.%x.%04x is "
-- "'not operational'\n", sch->orb.lpm,
-+ "'not operational'\n", lpm,
- cdev->private->dev_id.devno,
- sch->schid.ssid, sch->schid.sch_no);
- return -EACCES;
-diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
-index 7fd2dad..49b58eb 100644
---- a/drivers/s390/cio/device_ops.c
-+++ b/drivers/s390/cio/device_ops.c
-@@ -501,7 +501,7 @@ ccw_device_stlck(struct ccw_device *cdev)
- return -ENOMEM;
+ /* Free the temporary SCB */
+@@ -7270,7 +7252,7 @@ ips_init_phase2(int index)
+ if (!ips_allocatescbs(ha)) {
+ IPS_PRINTK(KERN_WARNING, ha->pcidev,
+ "Unable to allocate CCBs\n");
+- free_irq(ha->irq, ha);
++ free_irq(ha->pcidev->irq, ha);
+ return ips_abort_init(ha, index);
}
- spin_lock_irqsave(sch->lock, flags);
-- ret = cio_enable_subchannel(sch, 3);
-+ ret = cio_enable_subchannel(sch, 3, (u32)(addr_t)sch);
- if (ret)
- goto out_unlock;
- /*
-diff --git a/drivers/s390/cio/device_pgid.c b/drivers/s390/cio/device_pgid.c
-index cb1879a..c52449a 100644
---- a/drivers/s390/cio/device_pgid.c
-+++ b/drivers/s390/cio/device_pgid.c
-@@ -22,6 +22,7 @@
- #include "css.h"
- #include "device.h"
- #include "ioasm.h"
-+#include "io_sch.h"
- /*
- * Helper function called from interrupt context to decide whether an
-@@ -155,10 +156,13 @@ __ccw_device_check_sense_pgid(struct ccw_device *cdev)
- return -EAGAIN;
- }
- if (irb->scsw.cc == 3) {
-+ u8 lpm;
-+
-+ lpm = to_io_private(sch)->orb.lpm;
- CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel 0.%x.%04x,"
- " lpm %02X, became 'not operational'\n",
- cdev->private->dev_id.devno, sch->schid.ssid,
-- sch->schid.sch_no, sch->orb.lpm);
-+ sch->schid.sch_no, lpm);
- return -EACCES;
- }
- i = 8 - ffs(cdev->private->imask);
-diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c
-index aa96e67..ebe0848 100644
---- a/drivers/s390/cio/device_status.c
-+++ b/drivers/s390/cio/device_status.c
-@@ -20,6 +20,7 @@
- #include "css.h"
- #include "device.h"
- #include "ioasm.h"
-+#include "io_sch.h"
+diff --git a/drivers/scsi/ips.h b/drivers/scsi/ips.h
+index 3bcbd9f..e0657b6 100644
+--- a/drivers/scsi/ips.h
++++ b/drivers/scsi/ips.h
+@@ -60,14 +60,14 @@
+ */
+ #define IPS_HA(x) ((ips_ha_t *) x->hostdata)
+ #define IPS_COMMAND_ID(ha, scb) (int) (scb - ha->scbs)
+- #define IPS_IS_TROMBONE(ha) (((ha->device_id == IPS_DEVICEID_COPPERHEAD) && \
+- (ha->revision_id >= IPS_REVID_TROMBONE32) && \
+- (ha->revision_id <= IPS_REVID_TROMBONE64)) ? 1 : 0)
+- #define IPS_IS_CLARINET(ha) (((ha->device_id == IPS_DEVICEID_COPPERHEAD) && \
+- (ha->revision_id >= IPS_REVID_CLARINETP1) && \
+- (ha->revision_id <= IPS_REVID_CLARINETP3)) ? 1 : 0)
+- #define IPS_IS_MORPHEUS(ha) (ha->device_id == IPS_DEVICEID_MORPHEUS)
+- #define IPS_IS_MARCO(ha) (ha->device_id == IPS_DEVICEID_MARCO)
++ #define IPS_IS_TROMBONE(ha) (((ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) && \
++ (ha->pcidev->revision >= IPS_REVID_TROMBONE32) && \
++ (ha->pcidev->revision <= IPS_REVID_TROMBONE64)) ? 1 : 0)
++ #define IPS_IS_CLARINET(ha) (((ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) && \
++ (ha->pcidev->revision >= IPS_REVID_CLARINETP1) && \
++ (ha->pcidev->revision <= IPS_REVID_CLARINETP3)) ? 1 : 0)
++ #define IPS_IS_MORPHEUS(ha) (ha->pcidev->device == IPS_DEVICEID_MORPHEUS)
++ #define IPS_IS_MARCO(ha) (ha->pcidev->device == IPS_DEVICEID_MARCO)
+ #define IPS_USE_I2O_DELIVER(ha) ((IPS_IS_MORPHEUS(ha) || \
+ (IPS_IS_TROMBONE(ha) && \
+ (ips_force_i2o))) ? 1 : 0)
+@@ -92,7 +92,7 @@
+ #ifndef min
+ #define min(x,y) ((x) < (y) ? x : y)
+ #endif
+-
++
+ #ifndef __iomem /* For clean compiles in earlier kernels without __iomem annotations */
+ #define __iomem
+ #endif
+@@ -171,7 +171,7 @@
+ #define IPS_CMD_DOWNLOAD 0x20
+ #define IPS_CMD_RW_BIOSFW 0x22
+ #define IPS_CMD_GET_VERSION_INFO 0xC6
+- #define IPS_CMD_RESET_CHANNEL 0x1A
++ #define IPS_CMD_RESET_CHANNEL 0x1A
+
+ /*
+ * Adapter Equates
+@@ -458,7 +458,7 @@ typedef struct {
+ uint32_t reserved3;
+ uint32_t buffer_addr;
+ uint32_t reserved4;
+-} IPS_IOCTL_CMD, *PIPS_IOCTL_CMD;
++} IPS_IOCTL_CMD, *PIPS_IOCTL_CMD;
+
+ typedef struct {
+ uint8_t op_code;
+@@ -552,7 +552,7 @@ typedef struct {
+ uint32_t cccr;
+ } IPS_NVRAM_CMD, *PIPS_NVRAM_CMD;
+
+-typedef struct
++typedef struct
+ {
+ uint8_t op_code;
+ uint8_t command_id;
+@@ -650,7 +650,7 @@ typedef struct {
+ uint8_t device_address;
+ uint8_t cmd_attribute;
+ uint8_t cdb_length;
+- uint8_t reserved_for_LUN;
++ uint8_t reserved_for_LUN;
+ uint32_t transfer_length;
+ uint32_t buffer_pointer;
+ uint16_t sg_count;
+@@ -790,7 +790,7 @@ typedef struct {
+ /* SubSystem Parameter[4] */
+ #define IPS_GET_VERSION_SUPPORT 0x00018000 /* Mask for Versioning Support */
+
+-typedef struct
++typedef struct
+ {
+ uint32_t revision;
+ uint8_t bootBlkVersion[32];
+@@ -1034,7 +1034,6 @@ typedef struct ips_ha {
+ uint8_t ha_id[IPS_MAX_CHANNELS+1];
+ uint32_t dcdb_active[IPS_MAX_CHANNELS];
+ uint32_t io_addr; /* Base I/O address */
+- uint8_t irq; /* IRQ for adapter */
+ uint8_t ntargets; /* Number of targets */
+ uint8_t nbus; /* Number of buses */
+ uint8_t nlun; /* Number of Luns */
+@@ -1066,10 +1065,7 @@ typedef struct ips_ha {
+ int ioctl_reset; /* IOCTL Requested Reset Flag */
+ uint16_t reset_count; /* number of resets */
+ time_t last_ffdc; /* last time we sent ffdc info*/
+- uint8_t revision_id; /* Revision level */
+- uint16_t device_id; /* PCI device ID */
+ uint8_t slot_num; /* PCI Slot Number */
+- uint16_t subdevice_id; /* Subsystem device ID */
+ int ioctl_len; /* size of ioctl buffer */
+ dma_addr_t ioctl_busaddr; /* dma address of ioctl buffer*/
+ uint8_t bios_version[8]; /* BIOS Revision */
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 57ce225..e5be5fd 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -48,7 +48,7 @@ MODULE_AUTHOR("Dmitry Yusupov <dmitry_yus at yahoo.com>, "
+ "Alex Aizman <itn780 at yahoo.com>");
+ MODULE_DESCRIPTION("iSCSI/TCP data-path");
+ MODULE_LICENSE("GPL");
+-/* #define DEBUG_TCP */
++#undef DEBUG_TCP
+ #define DEBUG_ASSERT
+
+ #ifdef DEBUG_TCP
+@@ -67,115 +67,429 @@ MODULE_LICENSE("GPL");
+ static unsigned int iscsi_max_lun = 512;
+ module_param_named(max_lun, iscsi_max_lun, uint, S_IRUGO);
- /*
- * Check for any kind of channel or interface control check but don't
-@@ -310,6 +311,7 @@ int
- ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
++static int iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment);
++
++/*
++ * Scatterlist handling: inside the iscsi_segment, we
++ * remember an index into the scatterlist, and set data/size
++ * to the current scatterlist entry. For highmem pages, we
++ * kmap as needed.
++ *
++ * Note that the page is unmapped when we return from
++ * TCP's data_ready handler, so we may end up mapping and
++ * unmapping the same page repeatedly. The whole reason
++ * for this is that we shouldn't keep the page mapped
++ * outside the softirq.
++ */
++
++/**
++ * iscsi_tcp_segment_init_sg - init indicated scatterlist entry
++ * @segment: the buffer object
++ * @sg: scatterlist
++ * @offset: byte offset into that sg entry
++ *
++ * This function sets up the segment so that subsequent
++ * data is copied to the indicated sg entry, at the given
++ * offset.
++ */
+ static inline void
+-iscsi_buf_init_iov(struct iscsi_buf *ibuf, char *vbuf, int size)
++iscsi_tcp_segment_init_sg(struct iscsi_segment *segment,
++ struct scatterlist *sg, unsigned int offset)
{
- struct subchannel *sch;
-+ struct ccw1 *sense_ccw;
-
- sch = to_subchannel(cdev->dev.parent);
+- sg_init_one(&ibuf->sg, vbuf, size);
+- ibuf->sent = 0;
+- ibuf->use_sendmsg = 1;
++ segment->sg = sg;
++ segment->sg_offset = offset;
++ segment->size = min(sg->length - offset,
++ segment->total_size - segment->total_copied);
++ segment->data = NULL;
+ }
-@@ -326,15 +328,16 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
++/**
++ * iscsi_tcp_segment_map - map the current S/G page
++ * @segment: iscsi_segment
++ * @recv: 1 if called from recv path
++ *
++ * We only need to possibly kmap data if scatter lists are being used,
++ * because the iscsi passthrough and internal IO paths will never use high
++ * mem pages.
++ */
+ static inline void
+-iscsi_buf_init_sg(struct iscsi_buf *ibuf, struct scatterlist *sg)
++iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
+ {
+- sg_init_table(&ibuf->sg, 1);
+- sg_set_page(&ibuf->sg, sg_page(sg), sg->length, sg->offset);
++ struct scatterlist *sg;
++
++ if (segment->data != NULL || !segment->sg)
++ return;
++
++ sg = segment->sg;
++ BUG_ON(segment->sg_mapped);
++ BUG_ON(sg->length == 0);
++
/*
- * We have ending status but no sense information. Do a basic sense.
+- * Fastpath: sg element fits into single page
++ * If the page count is greater than one it is ok to send
++ * to the network layer's zero copy send path. If not we
++ * have to go the slow sendmsg path. We always map for the
++ * recv path.
*/
-- sch->sense_ccw.cmd_code = CCW_CMD_BASIC_SENSE;
-- sch->sense_ccw.cda = (__u32) __pa(cdev->private->irb.ecw);
-- sch->sense_ccw.count = SENSE_MAX_COUNT;
-- sch->sense_ccw.flags = CCW_FLAG_SLI;
-+ sense_ccw = &to_io_private(sch)->sense_ccw;
-+ sense_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
-+ sense_ccw->cda = (__u32) __pa(cdev->private->irb.ecw);
-+ sense_ccw->count = SENSE_MAX_COUNT;
-+ sense_ccw->flags = CCW_FLAG_SLI;
+- if (sg->length + sg->offset <= PAGE_SIZE && !PageSlab(sg_page(sg)))
+- ibuf->use_sendmsg = 0;
+- else
+- ibuf->use_sendmsg = 1;
+- ibuf->sent = 0;
++ if (page_count(sg_page(sg)) >= 1 && !recv)
++ return;
++
++ debug_tcp("iscsi_tcp_segment_map %s %p\n", recv ? "recv" : "xmit",
++ segment);
++ segment->sg_mapped = kmap_atomic(sg_page(sg), KM_SOFTIRQ0);
++ segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
+ }
- /* Reset internal retry indication. */
- cdev->private->flags.intretry = 0;
+-static inline int
+-iscsi_buf_left(struct iscsi_buf *ibuf)
++static inline void
++iscsi_tcp_segment_unmap(struct iscsi_segment *segment)
+ {
+- int rc;
++ debug_tcp("iscsi_tcp_segment_unmap %p\n", segment);
-- return cio_start (sch, &sch->sense_ccw, 0xff);
-+ return cio_start(sch, sense_ccw, 0xff);
+- rc = ibuf->sg.length - ibuf->sent;
+- BUG_ON(rc < 0);
+- return rc;
++ if (segment->sg_mapped) {
++ debug_tcp("iscsi_tcp_segment_unmap valid\n");
++ kunmap_atomic(segment->sg_mapped, KM_SOFTIRQ0);
++ segment->sg_mapped = NULL;
++ segment->data = NULL;
++ }
}
- /*
-diff --git a/drivers/s390/cio/io_sch.h b/drivers/s390/cio/io_sch.h
-new file mode 100644
-index 0000000..8c61316
---- /dev/null
-+++ b/drivers/s390/cio/io_sch.h
-@@ -0,0 +1,163 @@
-+#ifndef S390_IO_SCH_H
-+#define S390_IO_SCH_H
-+
-+#include "schid.h"
-+
+/*
-+ * operation request block
++ * Splice the digest buffer into the buffer
+ */
-+struct orb {
-+ u32 intparm; /* interruption parameter */
-+ u32 key : 4; /* flags, like key, suspend control, etc. */
-+ u32 spnd : 1; /* suspend control */
-+ u32 res1 : 1; /* reserved */
-+ u32 mod : 1; /* modification control */
-+ u32 sync : 1; /* synchronize control */
-+ u32 fmt : 1; /* format control */
-+ u32 pfch : 1; /* prefetch control */
-+ u32 isic : 1; /* initial-status-interruption control */
-+ u32 alcc : 1; /* address-limit-checking control */
-+ u32 ssic : 1; /* suppress-suspended-interr. control */
-+ u32 res2 : 1; /* reserved */
-+ u32 c64 : 1; /* IDAW/QDIO 64 bit control */
-+ u32 i2k : 1; /* IDAW 2/4kB block size control */
-+ u32 lpm : 8; /* logical path mask */
-+ u32 ils : 1; /* incorrect length */
-+ u32 zero : 6; /* reserved zeros */
-+ u32 orbx : 1; /* ORB extension control */
-+ u32 cpa; /* channel program address */
-+} __attribute__ ((packed, aligned(4)));
+ static inline void
+-iscsi_hdr_digest(struct iscsi_conn *conn, struct iscsi_buf *buf,
+- u8* crc)
++iscsi_tcp_segment_splice_digest(struct iscsi_segment *segment, void *digest)
+ {
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+-
+- crypto_hash_digest(&tcp_conn->tx_hash, &buf->sg, buf->sg.length, crc);
+- buf->sg.length += sizeof(u32);
++ segment->data = digest;
++ segment->digest_len = ISCSI_DIGEST_SIZE;
++ segment->total_size += ISCSI_DIGEST_SIZE;
++ segment->size = ISCSI_DIGEST_SIZE;
++ segment->copied = 0;
++ segment->sg = NULL;
++ segment->hash = NULL;
+ }
+
++/**
++ * iscsi_tcp_segment_done - check whether the segment is complete
++ * @segment: iscsi segment to check
++ * @recv: set to one if this is called from the recv path
++ * @copied: number of bytes copied
++ *
++ * Check if we're done receiving this segment. If the receive
++ * buffer is full but we expect more data, move on to the
++ * next entry in the scatterlist.
++ *
++ * If the amount of data we received isn't a multiple of 4,
++ * we will transparently receive the pad bytes, too.
++ *
++ * This function must be re-entrant.
++ */
+ static inline int
+-iscsi_hdr_extract(struct iscsi_tcp_conn *tcp_conn)
++iscsi_tcp_segment_done(struct iscsi_segment *segment, int recv, unsigned copied)
+ {
+- struct sk_buff *skb = tcp_conn->in.skb;
+-
+- tcp_conn->in.zero_copy_hdr = 0;
++ static unsigned char padbuf[ISCSI_PAD_LEN];
++ struct scatterlist sg;
++ unsigned int pad;
+
+- if (tcp_conn->in.copy >= tcp_conn->hdr_size &&
+- tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER) {
++ debug_tcp("copied %u %u size %u %s\n", segment->copied, copied,
++ segment->size, recv ? "recv" : "xmit");
++ if (segment->hash && copied) {
+ /*
+- * Zero-copy PDU Header: using connection context
+- * to store header pointer.
++ * If a segment is kmap'd we must unmap it before sending
++ * to the crypto layer since that will try to kmap it again.
+ */
+- if (skb_shinfo(skb)->frag_list == NULL &&
+- !skb_shinfo(skb)->nr_frags) {
+- tcp_conn->in.hdr = (struct iscsi_hdr *)
+- ((char*)skb->data + tcp_conn->in.offset);
+- tcp_conn->in.zero_copy_hdr = 1;
++ iscsi_tcp_segment_unmap(segment);
+
-+struct io_subchannel_private {
-+ struct orb orb; /* operation request block */
-+ struct ccw1 sense_ccw; /* static ccw for sense command */
-+} __attribute__ ((aligned(8)));
++ if (!segment->data) {
++ sg_init_table(&sg, 1);
++ sg_set_page(&sg, sg_page(segment->sg), copied,
++ segment->copied + segment->sg_offset +
++ segment->sg->offset);
++ } else
++ sg_init_one(&sg, segment->data + segment->copied,
++ copied);
++ crypto_hash_update(segment->hash, &sg, copied);
++ }
+
-+#define to_io_private(n) ((struct io_subchannel_private *)n->private)
-+#define sch_get_cdev(n) (dev_get_drvdata(&n->dev))
-+#define sch_set_cdev(n, c) (dev_set_drvdata(&n->dev, c))
++ segment->copied += copied;
++ if (segment->copied < segment->size) {
++ iscsi_tcp_segment_map(segment, recv);
++ return 0;
++ }
+
-+#define MAX_CIWS 8
++ segment->total_copied += segment->copied;
++ segment->copied = 0;
++ segment->size = 0;
+
-+/*
-+ * sense-id response buffer layout
-+ */
-+struct senseid {
-+ /* common part */
-+ u8 reserved; /* always 0x'FF' */
-+ u16 cu_type; /* control unit type */
-+ u8 cu_model; /* control unit model */
-+ u16 dev_type; /* device type */
-+ u8 dev_model; /* device model */
-+ u8 unused; /* padding byte */
-+ /* extended part */
-+ struct ciw ciw[MAX_CIWS]; /* variable # of CIWs */
-+} __attribute__ ((packed, aligned(4)));
++ /* Unmap the current scatterlist page, if there is one. */
++ iscsi_tcp_segment_unmap(segment);
+
-+struct ccw_device_private {
-+ struct ccw_device *cdev;
-+ struct subchannel *sch;
-+ int state; /* device state */
-+ atomic_t onoff;
-+ unsigned long registered;
-+ struct ccw_dev_id dev_id; /* device id */
-+ struct subchannel_id schid; /* subchannel number */
-+ u8 imask; /* lpm mask for SNID/SID/SPGID */
-+ int iretry; /* retry counter SNID/SID/SPGID */
-+ struct {
-+ unsigned int fast:1; /* post with "channel end" */
-+ unsigned int repall:1; /* report every interrupt status */
-+ unsigned int pgroup:1; /* do path grouping */
-+ unsigned int force:1; /* allow forced online */
-+ } __attribute__ ((packed)) options;
-+ struct {
-+ unsigned int pgid_single:1; /* use single path for Set PGID */
-+ unsigned int esid:1; /* Ext. SenseID supported by HW */
-+ unsigned int dosense:1; /* delayed SENSE required */
-+ unsigned int doverify:1; /* delayed path verification */
-+ unsigned int donotify:1; /* call notify function */
-+ unsigned int recog_done:1; /* dev. recog. complete */
-+ unsigned int fake_irb:1; /* deliver faked irb */
-+ unsigned int intretry:1; /* retry internal operation */
-+ } __attribute__((packed)) flags;
-+ unsigned long intparm; /* user interruption parameter */
-+ struct qdio_irq *qdio_data;
-+ struct irb irb; /* device status */
-+ struct senseid senseid; /* SenseID info */
-+ struct pgid pgid[8]; /* path group IDs per chpid*/
-+ struct ccw1 iccws[2]; /* ccws for SNID/SID/SPGID commands */
-+ struct work_struct kick_work;
-+ wait_queue_head_t wait_q;
-+ struct timer_list timer;
-+ void *cmb; /* measurement information */
-+ struct list_head cmb_list; /* list of measured devices */
-+ u64 cmb_start_time; /* clock value of cmb reset */
-+ void *cmb_wait; /* deferred cmb enable/disable */
-+};
++ /* Do we have more scatterlist entries? */
++ debug_tcp("total copied %u total size %u\n", segment->total_copied,
++ segment->total_size);
++ if (segment->total_copied < segment->total_size) {
++ /* Proceed to the next entry in the scatterlist. */
++ iscsi_tcp_segment_init_sg(segment, sg_next(segment->sg),
++ 0);
++ iscsi_tcp_segment_map(segment, recv);
++ BUG_ON(segment->size == 0);
++ return 0;
++ }
+
-+static inline int ssch(struct subchannel_id schid, volatile struct orb *addr)
-+{
-+ register struct subchannel_id reg1 asm("1") = schid;
-+ int ccode;
++ /* Do we need to handle padding? */
++ pad = iscsi_padding(segment->total_copied);
++ if (pad != 0) {
++ debug_tcp("consume %d pad bytes\n", pad);
++ segment->total_size += pad;
++ segment->size = pad;
++ segment->data = padbuf;
++ return 0;
++ }
+
-+ asm volatile(
-+ " ssch 0(%2)\n"
-+ " ipm %0\n"
-+ " srl %0,28"
-+ : "=d" (ccode) : "d" (reg1), "a" (addr), "m" (*addr) : "cc");
-+ return ccode;
++ /*
++ * Set us up for transferring the data digest. The hdr digest
++ * is handled completely in the hdr done function.
++ */
++ if (segment->hash) {
++ crypto_hash_final(segment->hash, segment->digest);
++ iscsi_tcp_segment_splice_digest(segment,
++ recv ? segment->recv_digest : segment->digest);
++ return 0;
++ }
++
++ return 1;
+}
+
-+static inline int rsch(struct subchannel_id schid)
++/**
++ * iscsi_tcp_xmit_segment - transmit segment
++ * @tcp_conn: the iSCSI TCP connection
++ * @segment: the buffer to transmit
++ *
++ * This function transmits as much of the buffer as
++ * the network layer will accept, and returns the number of
++ * bytes transmitted.
++ *
++ * If CRC hashing is enabled, the function will compute the
++ * hash as it goes. When the entire segment has been transmitted,
++ * it will retrieve the hash value and send it as well.
++ */
++static int
++iscsi_tcp_xmit_segment(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
+{
-+ register struct subchannel_id reg1 asm("1") = schid;
-+ int ccode;
++ struct socket *sk = tcp_conn->sock;
++ unsigned int copied = 0;
++ int r = 0;
+
-+ asm volatile(
-+ " rsch\n"
-+ " ipm %0\n"
-+ " srl %0,28"
-+ : "=d" (ccode) : "d" (reg1) : "cc");
-+ return ccode;
-+}
++ while (!iscsi_tcp_segment_done(segment, 0, r)) {
++ struct scatterlist *sg;
++ unsigned int offset, copy;
++ int flags = 0;
+
-+static inline int csch(struct subchannel_id schid)
-+{
-+ register struct subchannel_id reg1 asm("1") = schid;
-+ int ccode;
++ r = 0;
++ offset = segment->copied;
++ copy = segment->size - offset;
+
-+ asm volatile(
-+ " csch\n"
-+ " ipm %0\n"
-+ " srl %0,28"
-+ : "=d" (ccode) : "d" (reg1) : "cc");
-+ return ccode;
-+}
++ if (segment->total_copied + segment->size < segment->total_size)
++ flags |= MSG_MORE;
+
-+static inline int hsch(struct subchannel_id schid)
-+{
-+ register struct subchannel_id reg1 asm("1") = schid;
-+ int ccode;
++ /* Use sendpage if we can; else fall back to sendmsg */
++ if (!segment->data) {
++ sg = segment->sg;
++ offset += segment->sg_offset + sg->offset;
++ r = tcp_conn->sendpage(sk, sg_page(sg), offset, copy,
++ flags);
+ } else {
+- /* ignoring return code since we checked
+- * in.copy before */
+- skb_copy_bits(skb, tcp_conn->in.offset,
+- &tcp_conn->hdr, tcp_conn->hdr_size);
+- tcp_conn->in.hdr = &tcp_conn->hdr;
++ struct msghdr msg = { .msg_flags = flags };
++ struct kvec iov = {
++ .iov_base = segment->data + offset,
++ .iov_len = copy
++ };
+
-+ asm volatile(
-+ " hsch\n"
-+ " ipm %0\n"
-+ " srl %0,28"
-+ : "=d" (ccode) : "d" (reg1) : "cc");
-+ return ccode;
++ r = kernel_sendmsg(sk, &msg, &iov, 1, copy);
+ }
+- tcp_conn->in.offset += tcp_conn->hdr_size;
+- tcp_conn->in.copy -= tcp_conn->hdr_size;
+- } else {
+- int hdr_remains;
+- int copylen;
+
+- /*
+- * PDU header scattered across SKB's,
+- * copying it... This'll happen quite rarely.
+- */
++ if (r < 0) {
++ iscsi_tcp_segment_unmap(segment);
++ if (copied || r == -EAGAIN)
++ break;
++ return r;
++ }
++ copied += r;
++ }
++ return copied;
+}
+
-+static inline int xsch(struct subchannel_id schid)
++/**
++ * iscsi_tcp_segment_recv - copy data to segment
++ * @tcp_conn: the iSCSI TCP connection
++ * @segment: the buffer to copy to
++ * @ptr: data pointer
++ * @len: amount of data available
++ *
++ * This function copies up to @len bytes to the
++ * given buffer, and returns the number of bytes
++ * consumed, which can actually be less than @len.
++ *
++ * If hash digest is enabled, the function will update the
++ * hash while copying.
++ * Combining these two operations doesn't buy us a lot (yet),
++ * but in the future we could implement combined copy+crc,
++ * just the way we do for network layer checksums.
++ */
++static int
++iscsi_tcp_segment_recv(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment, const void *ptr,
++ unsigned int len)
+{
-+ register struct subchannel_id reg1 asm("1") = schid;
-+ int ccode;
++ unsigned int copy = 0, copied = 0;
+
-+ asm volatile(
-+ " .insn rre,0xb2760000,%1,0\n"
-+ " ipm %0\n"
-+ " srl %0,28"
-+ : "=d" (ccode) : "d" (reg1) : "cc");
-+ return ccode;
-+}
++ while (!iscsi_tcp_segment_done(segment, 1, copy)) {
++ if (copied == len) {
++ debug_tcp("iscsi_tcp_segment_recv copied %d bytes\n",
++ len);
++ break;
++ }
+
-+#endif
-diff --git a/drivers/s390/cio/ioasm.h b/drivers/s390/cio/ioasm.h
-index 7153dd9..652ea36 100644
---- a/drivers/s390/cio/ioasm.h
-+++ b/drivers/s390/cio/ioasm.h
-@@ -109,72 +109,6 @@ static inline int tpi( volatile struct tpi_info *addr)
- return ccode;
- }
++ copy = min(len - copied, segment->size - segment->copied);
++ debug_tcp("iscsi_tcp_segment_recv copying %d\n", copy);
++ memcpy(segment->data + segment->copied, ptr + copied, copy);
++ copied += copy;
++ }
++ return copied;
++}
--static inline int ssch(struct subchannel_id schid,
-- volatile struct orb *addr)
--{
-- register struct subchannel_id reg1 asm ("1") = schid;
-- int ccode;
--
-- asm volatile(
-- " ssch 0(%2)\n"
-- " ipm %0\n"
-- " srl %0,28"
-- : "=d" (ccode) : "d" (reg1), "a" (addr), "m" (*addr) : "cc");
-- return ccode;
--}
--
--static inline int rsch(struct subchannel_id schid)
--{
-- register struct subchannel_id reg1 asm ("1") = schid;
-- int ccode;
--
-- asm volatile(
-- " rsch\n"
-- " ipm %0\n"
-- " srl %0,28"
-- : "=d" (ccode) : "d" (reg1) : "cc");
-- return ccode;
--}
--
--static inline int csch(struct subchannel_id schid)
--{
-- register struct subchannel_id reg1 asm ("1") = schid;
-- int ccode;
--
-- asm volatile(
-- " csch\n"
-- " ipm %0\n"
-- " srl %0,28"
-- : "=d" (ccode) : "d" (reg1) : "cc");
-- return ccode;
--}
--
--static inline int hsch(struct subchannel_id schid)
--{
-- register struct subchannel_id reg1 asm ("1") = schid;
-- int ccode;
--
-- asm volatile(
-- " hsch\n"
-- " ipm %0\n"
-- " srl %0,28"
-- : "=d" (ccode) : "d" (reg1) : "cc");
-- return ccode;
--}
--
--static inline int xsch(struct subchannel_id schid)
--{
-- register struct subchannel_id reg1 asm ("1") = schid;
-- int ccode;
--
-- asm volatile(
-- " .insn rre,0xb2760000,%1,0\n"
-- " ipm %0\n"
-- " srl %0,28"
-- : "=d" (ccode) : "d" (reg1) : "cc");
-- return ccode;
--}
--
- static inline int chsc(void *chsc_area)
- {
- typedef struct { char _[4096]; } addr_type;
-diff --git a/drivers/s390/cio/qdio.c b/drivers/s390/cio/qdio.c
-index 40a3208..e2a781b 100644
---- a/drivers/s390/cio/qdio.c
-+++ b/drivers/s390/cio/qdio.c
-@@ -48,11 +48,11 @@
- #include <asm/debug.h>
- #include <asm/s390_rdev.h>
- #include <asm/qdio.h>
-+#include <asm/airq.h>
+- if (tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER)
+- tcp_conn->in.hdr_offset = 0;
++static inline void
++iscsi_tcp_dgst_header(struct hash_desc *hash, const void *hdr, size_t hdrlen,
++ unsigned char digest[ISCSI_DIGEST_SIZE])
++{
++ struct scatterlist sg;
- #include "cio.h"
- #include "css.h"
- #include "device.h"
--#include "airq.h"
- #include "qdio.h"
- #include "ioasm.h"
- #include "chsc.h"
-@@ -96,7 +96,7 @@ static debug_info_t *qdio_dbf_slsb_in;
- static volatile struct qdio_q *tiq_list=NULL; /* volatile as it could change
- during a while loop */
- static DEFINE_SPINLOCK(ttiq_list_lock);
--static int register_thinint_result;
-+static void *tiqdio_ind;
- static void tiqdio_tl(unsigned long);
- static DECLARE_TASKLET(tiqdio_tasklet,tiqdio_tl,0);
+- hdr_remains = tcp_conn->hdr_size - tcp_conn->in.hdr_offset;
+- BUG_ON(hdr_remains <= 0);
++ sg_init_one(&sg, hdr, hdrlen);
++ crypto_hash_digest(hash, &sg, hdrlen, digest);
++}
-@@ -399,7 +399,7 @@ qdio_get_indicator(void)
- {
- int i;
+- copylen = min(tcp_conn->in.copy, hdr_remains);
+- skb_copy_bits(skb, tcp_conn->in.offset,
+- (char*)&tcp_conn->hdr + tcp_conn->in.hdr_offset,
+- copylen);
++static inline int
++iscsi_tcp_dgst_verify(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
++{
++ if (!segment->digest_len)
++ return 1;
-- for (i=1;i<INDICATORS_PER_CACHELINE;i++)
-+ for (i = 0; i < INDICATORS_PER_CACHELINE; i++)
- if (!indicator_used[i]) {
- indicator_used[i]=1;
- return indicators+i;
-@@ -1408,8 +1408,7 @@ __tiqdio_inbound_processing(struct qdio_q *q, int spare_ind_was_set)
- if (q->hydra_gives_outbound_pcis) {
- if (!q->siga_sync_done_on_thinints) {
- SYNC_MEMORY_ALL;
-- } else if ((!q->siga_sync_done_on_outb_tis)&&
-- (q->hydra_gives_outbound_pcis)) {
-+ } else if (!q->siga_sync_done_on_outb_tis) {
- SYNC_MEMORY_ALL_OUTB;
+- debug_tcp("PDU gather offset %d bytes %d in.offset %d "
+- "in.copy %d\n", tcp_conn->in.hdr_offset, copylen,
+- tcp_conn->in.offset, tcp_conn->in.copy);
++ if (memcmp(segment->recv_digest, segment->digest,
++ segment->digest_len)) {
++ debug_scsi("digest mismatch\n");
++ return 0;
++ }
+
+- tcp_conn->in.offset += copylen;
+- tcp_conn->in.copy -= copylen;
+- if (copylen < hdr_remains) {
+- tcp_conn->in_progress = IN_PROGRESS_HEADER_GATHER;
+- tcp_conn->in.hdr_offset += copylen;
+- return -EAGAIN;
++ return 1;
++}
++
++/*
++ * Helper function to set up segment buffer
++ */
++static inline void
++__iscsi_segment_init(struct iscsi_segment *segment, size_t size,
++ iscsi_segment_done_fn_t *done, struct hash_desc *hash)
++{
++ memset(segment, 0, sizeof(*segment));
++ segment->total_size = size;
++ segment->done = done;
++
++ if (hash) {
++ segment->hash = hash;
++ crypto_hash_init(hash);
++ }
++}
++
++static inline void
++iscsi_segment_init_linear(struct iscsi_segment *segment, void *data,
++ size_t size, iscsi_segment_done_fn_t *done,
++ struct hash_desc *hash)
++{
++ __iscsi_segment_init(segment, size, done, hash);
++ segment->data = data;
++ segment->size = size;
++}
++
++static inline int
++iscsi_segment_seek_sg(struct iscsi_segment *segment,
++ struct scatterlist *sg_list, unsigned int sg_count,
++ unsigned int offset, size_t size,
++ iscsi_segment_done_fn_t *done, struct hash_desc *hash)
++{
++ struct scatterlist *sg;
++ unsigned int i;
++
++ debug_scsi("iscsi_segment_seek_sg offset %u size %llu\n",
++ offset, size);
++ __iscsi_segment_init(segment, size, done, hash);
++ for_each_sg(sg_list, sg, sg_count, i) {
++ debug_scsi("sg %d, len %u offset %u\n", i, sg->length,
++ sg->offset);
++ if (offset < sg->length) {
++ iscsi_tcp_segment_init_sg(segment, sg, offset);
++ return 0;
}
- } else {
-@@ -1911,8 +1910,7 @@ qdio_fill_thresholds(struct qdio_irq *irq_ptr,
+- tcp_conn->in.hdr = &tcp_conn->hdr;
+- tcp_conn->discontiguous_hdr_cnt++;
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
++ offset -= sg->length;
}
- }
-
--static int
--tiqdio_thinint_handler(void)
-+static void tiqdio_thinint_handler(void *ind, void *drv_data)
- {
- QDIO_DBF_TEXT4(0,trace,"thin_int");
-
-@@ -1925,7 +1923,6 @@ tiqdio_thinint_handler(void)
- tiqdio_clear_global_summary();
- tiqdio_inbound_checks();
-- return 0;
- }
-
- static void
-@@ -2445,7 +2442,7 @@ tiqdio_set_subchannel_ind(struct qdio_irq *irq_ptr, int reset_to_zero)
- real_addr_dev_st_chg_ind=0;
- } else {
- real_addr_local_summary_bit=
-- virt_to_phys((volatile void *)indicators);
-+ virt_to_phys((volatile void *)tiqdio_ind);
- real_addr_dev_st_chg_ind=
- virt_to_phys((volatile void *)irq_ptr->dev_st_chg_ind);
- }
-@@ -3740,23 +3737,25 @@ static void
- tiqdio_register_thinints(void)
- {
- char dbf_text[20];
-- register_thinint_result=
-- s390_register_adapter_interrupt(&tiqdio_thinint_handler);
-- if (register_thinint_result) {
-- sprintf(dbf_text,"regthn%x",(register_thinint_result&0xff));
++ return ISCSI_ERR_DATA_OFFSET;
++}
+
-+ tiqdio_ind =
-+ s390_register_adapter_interrupt(&tiqdio_thinint_handler, NULL);
-+ if (IS_ERR(tiqdio_ind)) {
-+ sprintf(dbf_text, "regthn%lx", PTR_ERR(tiqdio_ind));
- QDIO_DBF_TEXT0(0,setup,dbf_text);
- QDIO_PRINT_ERR("failed to register adapter handler " \
-- "(rc=%i).\nAdapter interrupts might " \
-+ "(rc=%li).\nAdapter interrupts might " \
- "not work. Continuing.\n",
-- register_thinint_result);
-+ PTR_ERR(tiqdio_ind));
-+ tiqdio_ind = NULL;
- }
++/**
++ * iscsi_tcp_hdr_recv_prep - prep segment for hdr reception
++ * @tcp_conn: iscsi connection to prep for
++ *
++ * This function always passes NULL for the hash argument, because when this
++ * function is called we do not yet know the final size of the header and want
++ * to delay the digest processing until we know that.
++ */
++static void
++iscsi_tcp_hdr_recv_prep(struct iscsi_tcp_conn *tcp_conn)
++{
++ debug_tcp("iscsi_tcp_hdr_recv_prep(%p%s)\n", tcp_conn,
++ tcp_conn->iscsi_conn->hdrdgst_en ? ", digest enabled" : "");
++ iscsi_segment_init_linear(&tcp_conn->in.segment,
++ tcp_conn->in.hdr_buf, sizeof(struct iscsi_hdr),
++ iscsi_tcp_hdr_recv_done, NULL);
++}
++
++/*
++ * Handle incoming reply to any other type of command
++ */
++static int
++iscsi_tcp_data_recv_done(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
++{
++ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
++ int rc = 0;
++
++ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
++ return ISCSI_ERR_DATA_DGST;
++
++ rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr,
++ conn->data, tcp_conn->in.datalen);
++ if (rc)
++ return rc;
++
++ iscsi_tcp_hdr_recv_prep(tcp_conn);
+ return 0;
}
- static void
- tiqdio_unregister_thinints(void)
++static void
++iscsi_tcp_data_recv_prep(struct iscsi_tcp_conn *tcp_conn)
++{
++ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
++ struct hash_desc *rx_hash = NULL;
++
++ if (conn->datadgst_en)
++ rx_hash = &tcp_conn->rx_hash;
++
++ iscsi_segment_init_linear(&tcp_conn->in.segment,
++ conn->data, tcp_conn->in.datalen,
++ iscsi_tcp_data_recv_done, rx_hash);
++}
++
+ /*
+ * must be called with session lock
+ */
+@@ -184,7 +498,6 @@ iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
-- if (!register_thinint_result)
-- s390_unregister_adapter_interrupt(&tiqdio_thinint_handler);
-+ if (tiqdio_ind)
-+ s390_unregister_adapter_interrupt(tiqdio_ind);
- }
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+ struct iscsi_r2t_info *r2t;
+- struct scsi_cmnd *sc;
- static int
-@@ -3768,8 +3767,8 @@ qdio_get_qdio_memory(void)
- for (i=1;i<INDICATORS_PER_CACHELINE;i++)
- indicator_used[i]=0;
- indicators = kzalloc(sizeof(__u32)*(INDICATORS_PER_CACHELINE),
-- GFP_KERNEL);
-- if (!indicators)
-+ GFP_KERNEL);
-+ if (!indicators)
- return -ENOMEM;
- return 0;
- }
-@@ -3780,7 +3779,6 @@ qdio_release_qdio_memory(void)
- kfree(indicators);
- }
+ /* flush ctask's r2t queues */
+ while (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*))) {
+@@ -193,12 +506,12 @@ iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ debug_scsi("iscsi_tcp_cleanup_ctask pending r2t dropped\n");
+ }
+- sc = ctask->sc;
+- if (unlikely(!sc))
+- return;
-
- static void
- qdio_unregister_dbf_views(void)
- {
-diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
-index 6d7aad1..37870e4 100644
---- a/drivers/s390/cio/qdio.h
-+++ b/drivers/s390/cio/qdio.h
-@@ -57,7 +57,7 @@
- of the queue to 0 */
-
- #define QDIO_ESTABLISH_TIMEOUT (1*HZ)
--#define QDIO_ACTIVATE_TIMEOUT ((5*HZ)>>10)
-+#define QDIO_ACTIVATE_TIMEOUT (5*HZ)
- #define QDIO_CLEANUP_CLEAR_TIMEOUT (20*HZ)
- #define QDIO_CLEANUP_HALT_TIMEOUT (10*HZ)
- #define QDIO_FORCE_CHECK_TIMEOUT (10*HZ)
-diff --git a/drivers/s390/net/claw.c b/drivers/s390/net/claw.c
-index 3561982..c307621 100644
---- a/drivers/s390/net/claw.c
-+++ b/drivers/s390/net/claw.c
-@@ -2416,7 +2416,7 @@ init_ccw_bk(struct net_device *dev)
- privptr->p_buff_pages_perwrite);
- #endif
- if (p_buff==NULL) {
-- printk(KERN_INFO "%s:%s __get_free_pages"
-+ printk(KERN_INFO "%s:%s __get_free_pages "
- "for writes buf failed : get is for %d pages\n",
- dev->name,
- __FUNCTION__,
-diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
-index 0fd663b..7bfe8d7 100644
---- a/drivers/s390/net/lcs.c
-+++ b/drivers/s390/net/lcs.c
-@@ -1115,7 +1115,7 @@ list_modified:
- rc = lcs_send_setipm(card, ipm);
- spin_lock_irqsave(&card->ipm_lock, flags);
- if (rc) {
-- PRINT_INFO("Adding multicast address failed."
-+ PRINT_INFO("Adding multicast address failed. "
- "Table possibly full!\n");
- /* store ipm in failed list -> will be added
- * to ipm_list again, so a retry will be done
-diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
-index c7ea938..f3d893c 100644
---- a/drivers/s390/net/netiucv.c
-+++ b/drivers/s390/net/netiucv.c
-@@ -198,8 +198,7 @@ struct iucv_connection {
- /**
- * Linked list of all connection structs.
- */
--static struct list_head iucv_connection_list =
-- LIST_HEAD_INIT(iucv_connection_list);
-+static LIST_HEAD(iucv_connection_list);
- static DEFINE_RWLOCK(iucv_connection_rwlock);
+- tcp_ctask->xmstate = XMSTATE_VALUE_IDLE;
+- tcp_ctask->r2t = NULL;
++ r2t = tcp_ctask->r2t;
++ if (r2t != NULL) {
++ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
++ sizeof(void*));
++ tcp_ctask->r2t = NULL;
++ }
+ }
/**
-@@ -2089,6 +2088,11 @@ static struct attribute_group netiucv_drv_attr_group = {
- .attrs = netiucv_drv_attrs,
- };
+@@ -217,11 +530,6 @@ iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ int datasn = be32_to_cpu(rhdr->datasn);
-+static struct attribute_group *netiucv_drv_attr_groups[] = {
-+ &netiucv_drv_attr_group,
-+ NULL,
-+};
-+
- static void netiucv_banner(void)
- {
- PRINT_INFO("NETIUCV driver initialized\n");
-@@ -2113,7 +2117,6 @@ static void __exit netiucv_exit(void)
- netiucv_unregister_device(dev);
- }
+ iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
+- /*
+- * setup Data-In byte counter (gets decremented..)
+- */
+- ctask->data_count = tcp_conn->in.datalen;
+-
+ if (tcp_conn->in.datalen == 0)
+ return 0;
-- sysfs_remove_group(&netiucv_driver.kobj, &netiucv_drv_attr_group);
- driver_unregister(&netiucv_driver);
- iucv_unregister(&netiucv_handler, 1);
- iucv_unregister_dbf_views();
-@@ -2133,6 +2136,7 @@ static int __init netiucv_init(void)
- if (rc)
- goto out_dbf;
- IUCV_DBF_TEXT(trace, 3, __FUNCTION__);
-+ netiucv_driver.groups = netiucv_drv_attr_groups;
- rc = driver_register(&netiucv_driver);
- if (rc) {
- PRINT_ERR("NETIUCV: failed to register driver.\n");
-@@ -2140,18 +2144,9 @@ static int __init netiucv_init(void)
- goto out_iucv;
+@@ -242,22 +550,20 @@ iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
}
-- rc = sysfs_create_group(&netiucv_driver.kobj, &netiucv_drv_attr_group);
-- if (rc) {
-- PRINT_ERR("NETIUCV: failed to add driver attributes.\n");
-- IUCV_DBF_TEXT_(setup, 2,
-- "ret %d - netiucv_drv_attr_group\n", rc);
-- goto out_driver;
-- }
- netiucv_banner();
- return rc;
+ if (rhdr->flags & ISCSI_FLAG_DATA_STATUS) {
++ sc->result = (DID_OK << 16) | rhdr->cmd_status;
+ conn->exp_statsn = be32_to_cpu(rhdr->statsn) + 1;
+- if (rhdr->flags & ISCSI_FLAG_DATA_UNDERFLOW) {
++ if (rhdr->flags & (ISCSI_FLAG_DATA_UNDERFLOW |
++ ISCSI_FLAG_DATA_OVERFLOW)) {
+ int res_count = be32_to_cpu(rhdr->residual_count);
--out_driver:
-- driver_unregister(&netiucv_driver);
- out_iucv:
- iucv_unregister(&netiucv_handler, 1);
- out_dbf:
-diff --git a/drivers/s390/net/qeth_main.c b/drivers/s390/net/qeth_main.c
-index ff999ff..62606ce 100644
---- a/drivers/s390/net/qeth_main.c
-+++ b/drivers/s390/net/qeth_main.c
-@@ -3890,7 +3890,7 @@ qeth_verify_vlan_dev(struct net_device *dev, struct qeth_card *card)
- break;
- }
+ if (res_count > 0 &&
+- res_count <= scsi_bufflen(sc)) {
++ (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW ||
++ res_count <= scsi_bufflen(sc)))
+ scsi_set_resid(sc, res_count);
+- sc->result = (DID_OK << 16) | rhdr->cmd_status;
+- } else
++ else
+ sc->result = (DID_BAD_TARGET << 16) |
+ rhdr->cmd_status;
+- } else if (rhdr->flags & ISCSI_FLAG_DATA_OVERFLOW) {
+- scsi_set_resid(sc, be32_to_cpu(rhdr->residual_count));
+- sc->result = (DID_OK << 16) | rhdr->cmd_status;
+- } else
+- sc->result = (DID_OK << 16) | rhdr->cmd_status;
++ }
}
-- if (rc && !(VLAN_DEV_INFO(dev)->real_dev->priv == (void *)card))
-+ if (rc && !(vlan_dev_info(dev)->real_dev->priv == (void *)card))
- return 0;
-
- #endif
-@@ -3930,7 +3930,7 @@ qeth_get_card_from_dev(struct net_device *dev)
- card = (struct qeth_card *)dev->priv;
- else if (rc == QETH_VLAN_CARD)
- card = (struct qeth_card *)
-- VLAN_DEV_INFO(dev)->real_dev->priv;
-+ vlan_dev_info(dev)->real_dev->priv;
-
- QETH_DBF_TEXT_(trace, 4, "%d", rc);
- return card ;
-@@ -8340,7 +8340,7 @@ qeth_arp_constructor(struct neighbour *neigh)
- neigh->parms = neigh_parms_clone(parms);
- rcu_read_unlock();
-
-- neigh->type = inet_addr_type(*(__be32 *) neigh->primary_key);
-+ neigh->type = inet_addr_type(&init_net, *(__be32 *) neigh->primary_key);
- neigh->nud_state = NUD_NOARP;
- neigh->ops = arp_direct_ops;
- neigh->output = neigh->ops->queue_xmit;
-diff --git a/drivers/s390/net/qeth_proc.c b/drivers/s390/net/qeth_proc.c
-index f1ff165..46ecd03 100644
---- a/drivers/s390/net/qeth_proc.c
-+++ b/drivers/s390/net/qeth_proc.c
-@@ -146,7 +146,7 @@ qeth_procfile_seq_show(struct seq_file *s, void *it)
- return 0;
- }
-
--static struct seq_operations qeth_procfile_seq_ops = {
-+static const struct seq_operations qeth_procfile_seq_ops = {
- .start = qeth_procfile_seq_start,
- .stop = qeth_procfile_seq_stop,
- .next = qeth_procfile_seq_next,
-@@ -264,7 +264,7 @@ qeth_perf_procfile_seq_show(struct seq_file *s, void *it)
- return 0;
- }
-
--static struct seq_operations qeth_perf_procfile_seq_ops = {
-+static const struct seq_operations qeth_perf_procfile_seq_ops = {
- .start = qeth_procfile_seq_start,
- .stop = qeth_procfile_seq_stop,
- .next = qeth_procfile_seq_next,
-diff --git a/drivers/s390/net/smsgiucv.c b/drivers/s390/net/smsgiucv.c
-index 47bb47b..8735a41 100644
---- a/drivers/s390/net/smsgiucv.c
-+++ b/drivers/s390/net/smsgiucv.c
-@@ -42,7 +42,7 @@ MODULE_DESCRIPTION ("Linux for S/390 IUCV special message driver");
- static struct iucv_path *smsg_path;
- static DEFINE_SPINLOCK(smsg_list_lock);
--static struct list_head smsg_list = LIST_HEAD_INIT(smsg_list);
-+static LIST_HEAD(smsg_list);
+ conn->datain_pdus_cnt++;
+@@ -281,9 +587,6 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+ struct iscsi_r2t_info *r2t)
+ {
+ struct iscsi_data *hdr;
+- struct scsi_cmnd *sc = ctask->sc;
+- int i, sg_count = 0;
+- struct scatterlist *sg;
- static int smsg_path_pending(struct iucv_path *, u8 ipvmid[8], u8 ipuser[16]);
- static void smsg_message_pending(struct iucv_path *, struct iucv_message *);
-diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
-index 0011849..874b55e 100644
---- a/drivers/s390/scsi/zfcp_aux.c
-+++ b/drivers/s390/scsi/zfcp_aux.c
-@@ -844,8 +844,6 @@ zfcp_unit_enqueue(struct zfcp_port *port, fcp_lun_t fcp_lun)
- unit->sysfs_device.release = zfcp_sysfs_unit_release;
- dev_set_drvdata(&unit->sysfs_device, unit);
+ hdr = &r2t->dtask.hdr;
+ memset(hdr, 0, sizeof(struct iscsi_data));
+@@ -307,34 +610,6 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+ conn->dataout_pdus_cnt++;
-- init_waitqueue_head(&unit->scsi_scan_wq);
+ r2t->sent = 0;
-
- /* mark unit unusable as long as sysfs registration is not complete */
- atomic_set_mask(ZFCP_STATUS_COMMON_REMOVE, &unit->status);
-
-diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c
-index e01cbf1..edc5015 100644
---- a/drivers/s390/scsi/zfcp_ccw.c
-+++ b/drivers/s390/scsi/zfcp_ccw.c
-@@ -52,6 +52,9 @@ static struct ccw_driver zfcp_ccw_driver = {
- .set_offline = zfcp_ccw_set_offline,
- .notify = zfcp_ccw_notify,
- .shutdown = zfcp_ccw_shutdown,
-+ .driver = {
-+ .groups = zfcp_driver_attr_groups,
-+ },
- };
-
- MODULE_DEVICE_TABLE(ccw, zfcp_ccw_device_id);
-@@ -120,6 +123,9 @@ zfcp_ccw_remove(struct ccw_device *ccw_device)
-
- list_for_each_entry_safe(port, p, &adapter->port_remove_lh, list) {
- list_for_each_entry_safe(unit, u, &port->unit_remove_lh, list) {
-+ if (atomic_test_mask(ZFCP_STATUS_UNIT_REGISTERED,
-+ &unit->status))
-+ scsi_remove_device(unit->device);
- zfcp_unit_dequeue(unit);
- }
- zfcp_port_dequeue(port);
-@@ -251,16 +257,7 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event)
- int __init
- zfcp_ccw_register(void)
- {
-- int retval;
+- iscsi_buf_init_iov(&r2t->headbuf, (char*)hdr,
+- sizeof(struct iscsi_hdr));
-
-- retval = ccw_driver_register(&zfcp_ccw_driver);
-- if (retval)
-- goto out;
-- retval = zfcp_sysfs_driver_create_files(&zfcp_ccw_driver.driver);
-- if (retval)
-- ccw_driver_unregister(&zfcp_ccw_driver);
-- out:
-- return retval;
-+ return ccw_driver_register(&zfcp_ccw_driver);
+- sg = scsi_sglist(sc);
+- r2t->sg = NULL;
+- for (i = 0; i < scsi_sg_count(sc); i++, sg += 1) {
+- /* FIXME: prefetch ? */
+- if (sg_count + sg->length > r2t->data_offset) {
+- int page_offset;
+-
+- /* sg page found! */
+-
+- /* offset within this page */
+- page_offset = r2t->data_offset - sg_count;
+-
+- /* fill in this buffer */
+- iscsi_buf_init_sg(&r2t->sendbuf, sg);
+- r2t->sendbuf.sg.offset += page_offset;
+- r2t->sendbuf.sg.length -= page_offset;
+-
+- /* xmit logic will continue with next one */
+- r2t->sg = sg + 1;
+- break;
+- }
+- sg_count += sg->length;
+- }
+- BUG_ON(r2t->sg == NULL);
}
/**
-diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
-index ffa3bf7..701046c 100644
---- a/drivers/s390/scsi/zfcp_dbf.c
-+++ b/drivers/s390/scsi/zfcp_dbf.c
-@@ -161,12 +161,6 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req)
- (fsf_req->fsf_command == FSF_QTCB_OPEN_LUN)) {
- strncpy(rec->tag2, "open", ZFCP_DBF_TAG_SIZE);
- level = 4;
-- } else if ((prot_status_qual->doubleword[0] != 0) ||
-- (prot_status_qual->doubleword[1] != 0) ||
-- (fsf_status_qual->doubleword[0] != 0) ||
-- (fsf_status_qual->doubleword[1] != 0)) {
-- strncpy(rec->tag2, "qual", ZFCP_DBF_TAG_SIZE);
-- level = 3;
- } else {
- strncpy(rec->tag2, "norm", ZFCP_DBF_TAG_SIZE);
- level = 6;
-diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
-index e268f79..9e9f6c1 100644
---- a/drivers/s390/scsi/zfcp_def.h
-+++ b/drivers/s390/scsi/zfcp_def.h
-@@ -118,7 +118,7 @@ zfcp_address_to_sg(void *address, struct scatterlist *list, unsigned int size)
+@@ -366,14 +641,11 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ }
- #define ZFCP_SBAL_TIMEOUT (5*HZ)
+ /* fill-in new R2T associated with the task */
+- spin_lock(&session->lock);
+ iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
--#define ZFCP_TYPE2_RECOVERY_TIME (8*HZ)
-+#define ZFCP_TYPE2_RECOVERY_TIME 8 /* seconds */
+- if (!ctask->sc || ctask->mtask ||
+- session->state != ISCSI_STATE_LOGGED_IN) {
++ if (!ctask->sc || session->state != ISCSI_STATE_LOGGED_IN) {
+ printk(KERN_INFO "iscsi_tcp: dropping R2T itt %d in "
+ "recovery...\n", ctask->itt);
+- spin_unlock(&session->lock);
+ return 0;
+ }
- /* queue polling (values in microseconds) */
- #define ZFCP_MAX_INPUT_THRESHOLD 5000 /* FIXME: tune */
-@@ -139,7 +139,7 @@ zfcp_address_to_sg(void *address, struct scatterlist *list, unsigned int size)
- #define ZFCP_STATUS_READS_RECOM FSF_STATUS_READS_RECOM
+@@ -384,7 +656,8 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ r2t->data_length = be32_to_cpu(rhdr->data_length);
+ if (r2t->data_length == 0) {
+ printk(KERN_ERR "iscsi_tcp: invalid R2T with zero data len\n");
+- spin_unlock(&session->lock);
++ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
++ sizeof(void*));
+ return ISCSI_ERR_DATALEN;
+ }
- /* Do 1st retry in 1 second, then double the timeout for each following retry */
--#define ZFCP_EXCHANGE_CONFIG_DATA_FIRST_SLEEP 100
-+#define ZFCP_EXCHANGE_CONFIG_DATA_FIRST_SLEEP 1
- #define ZFCP_EXCHANGE_CONFIG_DATA_RETRIES 7
+@@ -395,10 +668,11 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- /* timeout value for "default timer" for fsf requests */
-@@ -983,10 +983,6 @@ struct zfcp_unit {
- struct scsi_device *device; /* scsi device struct pointer */
- struct zfcp_erp_action erp_action; /* pending error recovery */
- atomic_t erp_counter;
-- wait_queue_head_t scsi_scan_wq; /* can be used to wait until
-- all scsi_scan_target
-- requests have been
-- completed. */
- };
+ r2t->data_offset = be32_to_cpu(rhdr->data_offset);
+ if (r2t->data_offset + r2t->data_length > scsi_bufflen(ctask->sc)) {
+- spin_unlock(&session->lock);
+ printk(KERN_ERR "iscsi_tcp: invalid R2T with data len %u at "
+ "offset %u and total length %d\n", r2t->data_length,
+ r2t->data_offset, scsi_bufflen(ctask->sc));
++ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
++ sizeof(void*));
+ return ISCSI_ERR_DATALEN;
+ }
- /* FSF request */
-@@ -1127,6 +1123,20 @@ zfcp_reqlist_find(struct zfcp_adapter *adapter, unsigned long req_id)
- return NULL;
+@@ -409,26 +683,55 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+
+ tcp_ctask->exp_datasn = r2tsn + 1;
+ __kfifo_put(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*));
+- set_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate);
+- list_move_tail(&ctask->running, &conn->xmitqueue);
+-
+- scsi_queue_work(session->host, &conn->xmitwork);
+ conn->r2t_pdus_cnt++;
+- spin_unlock(&session->lock);
+
++ iscsi_requeue_ctask(ctask);
+ return 0;
}
-+static inline struct zfcp_fsf_req *
-+zfcp_reqlist_find_safe(struct zfcp_adapter *adapter, struct zfcp_fsf_req *req)
++/*
++ * Handle incoming reply to DataIn command
++ */
+ static int
+-iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
++iscsi_tcp_process_data_in(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
+{
-+ struct zfcp_fsf_req *request;
-+ unsigned int idx;
++ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
++ struct iscsi_hdr *hdr = tcp_conn->in.hdr;
++ int rc;
+
-+ for (idx = 0; idx < REQUEST_LIST_SIZE; idx++) {
-+ list_for_each_entry(request, &adapter->req_list[idx], list)
-+ if (request == req)
-+ return request;
++ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
++ return ISCSI_ERR_DATA_DGST;
++
++ /* check for non-exceptional status */
++ if (hdr->flags & ISCSI_FLAG_DATA_STATUS) {
++ rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, NULL, 0);
++ if (rc)
++ return rc;
+ }
-+ return NULL;
++
++ iscsi_tcp_hdr_recv_prep(tcp_conn);
++ return 0;
+}
+
- /*
- * functions needed for reference/usage counting
- */
-diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
-index 07fa824..2dc8110 100644
---- a/drivers/s390/scsi/zfcp_erp.c
-+++ b/drivers/s390/scsi/zfcp_erp.c
-@@ -131,7 +131,7 @@ static void zfcp_close_qdio(struct zfcp_adapter *adapter)
- debug_text_event(adapter->erp_dbf, 3, "qdio_down2a");
- while (qdio_shutdown(adapter->ccw_device,
- QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS)
-- msleep(1000);
-+ ssleep(1);
- debug_text_event(adapter->erp_dbf, 3, "qdio_down2b");
-
- /* cleanup used outbound sbals */
-@@ -456,7 +456,7 @@ zfcp_test_link(struct zfcp_port *port)
-
- zfcp_port_get(port);
- retval = zfcp_erp_adisc(port);
-- if (retval != 0) {
-+ if (retval != 0 && retval != -EBUSY) {
- zfcp_port_put(port);
- ZFCP_LOG_NORMAL("reopen needed for port 0x%016Lx "
- "on adapter %s\n ", port->wwpn,
-@@ -846,7 +846,8 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action)
- if (erp_action->fsf_req) {
- /* take lock to ensure that request is not deleted meanwhile */
- spin_lock(&adapter->req_list_lock);
-- if (zfcp_reqlist_find(adapter, erp_action->fsf_req->req_id)) {
-+ if (zfcp_reqlist_find_safe(adapter, erp_action->fsf_req) &&
-+ erp_action->fsf_req->erp_action == erp_action) {
- /* fsf_req still exists */
- debug_text_event(adapter->erp_dbf, 3, "a_ca_req");
- debug_event(adapter->erp_dbf, 3, &erp_action->fsf_req,
-@@ -1285,7 +1286,7 @@ zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action)
- * note: no lock in subsequent strategy routines
- * (this allows these routine to call schedule, e.g.
- * kmalloc with such flags or qdio_initialize & friends)
-- * Note: in case of timeout, the seperate strategies will fail
-+ * Note: in case of timeout, the separate strategies will fail
- * anyhow. No need for a special action. Even worse, a nameserver
- * failure would not wake up waiting ports without the call.
- */
-@@ -1609,7 +1610,6 @@ static void zfcp_erp_scsi_scan(struct work_struct *work)
- scsi_scan_target(&rport->dev, 0, rport->scsi_target_id,
- unit->scsi_lun, 0);
- atomic_clear_mask(ZFCP_STATUS_UNIT_SCSI_WORK_PENDING, &unit->status);
-- wake_up(&unit->scsi_scan_wq);
- zfcp_unit_put(unit);
- kfree(p);
- }
-@@ -1900,7 +1900,7 @@ zfcp_erp_adapter_strategy(struct zfcp_erp_action *erp_action)
- ZFCP_LOG_INFO("Waiting to allow the adapter %s "
- "to recover itself\n",
- zfcp_get_busid_by_adapter(adapter));
-- msleep(jiffies_to_msecs(ZFCP_TYPE2_RECOVERY_TIME));
-+ ssleep(ZFCP_TYPE2_RECOVERY_TIME);
- }
-
- return retval;
-@@ -2080,7 +2080,7 @@ zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *erp_action)
- debug_text_event(adapter->erp_dbf, 3, "qdio_down1a");
- while (qdio_shutdown(adapter->ccw_device,
- QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS)
-- msleep(1000);
-+ ssleep(1);
- debug_text_event(adapter->erp_dbf, 3, "qdio_down1b");
++/**
++ * iscsi_tcp_hdr_dissect - process PDU header
++ * @conn: iSCSI connection
++ * @hdr: PDU header
++ *
++ * This function analyzes the header of the PDU received,
++ * and performs several sanity checks. If the PDU is accompanied
++ * by data, the receive buffer is set up to copy the incoming data
++ * to the correct location.
++ */
++static int
++iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
+ {
+ int rc = 0, opcode, ahslen;
+- struct iscsi_hdr *hdr;
+ struct iscsi_session *session = conn->session;
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- uint32_t cdgst, rdgst = 0, itt;
+-
+- hdr = tcp_conn->in.hdr;
++ struct iscsi_cmd_task *ctask;
++ uint32_t itt;
- failed_qdio_establish:
-@@ -2165,7 +2165,7 @@ zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *erp_action)
- ZFCP_LOG_DEBUG("host connection still initialising... "
- "waiting and retrying...\n");
- /* sleep a little bit before retry */
-- msleep(jiffies_to_msecs(sleep));
-+ ssleep(sleep);
- sleep *= 2;
+ /* verify PDU length */
+ tcp_conn->in.datalen = ntoh24(hdr->dlength);
+@@ -437,78 +740,73 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
+ tcp_conn->in.datalen, conn->max_recv_dlength);
+ return ISCSI_ERR_DATALEN;
}
+- tcp_conn->data_copied = 0;
-diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
-index 8534cf0..06b1079 100644
---- a/drivers/s390/scsi/zfcp_ext.h
-+++ b/drivers/s390/scsi/zfcp_ext.h
-@@ -27,8 +27,7 @@
- extern struct zfcp_data zfcp_data;
+- /* read AHS */
++ /* Additional header segments. So far, we don't
++ * process additional headers.
++ */
+ ahslen = hdr->hlength << 2;
+- tcp_conn->in.offset += ahslen;
+- tcp_conn->in.copy -= ahslen;
+- if (tcp_conn->in.copy < 0) {
+- printk(KERN_ERR "iscsi_tcp: can't handle AHS with length "
+- "%d bytes\n", ahslen);
+- return ISCSI_ERR_AHSLEN;
+- }
+-
+- /* calculate read padding */
+- tcp_conn->in.padding = tcp_conn->in.datalen & (ISCSI_PAD_LEN-1);
+- if (tcp_conn->in.padding) {
+- tcp_conn->in.padding = ISCSI_PAD_LEN - tcp_conn->in.padding;
+- debug_scsi("read padding %d bytes\n", tcp_conn->in.padding);
+- }
+-
+- if (conn->hdrdgst_en) {
+- struct scatterlist sg;
+-
+- sg_init_one(&sg, (u8 *)hdr,
+- sizeof(struct iscsi_hdr) + ahslen);
+- crypto_hash_digest(&tcp_conn->rx_hash, &sg, sg.length,
+- (u8 *)&cdgst);
+- rdgst = *(uint32_t*)((char*)hdr + sizeof(struct iscsi_hdr) +
+- ahslen);
+- if (cdgst != rdgst) {
+- printk(KERN_ERR "iscsi_tcp: hdrdgst error "
+- "recv 0x%x calc 0x%x\n", rdgst, cdgst);
+- return ISCSI_ERR_HDR_DGST;
+- }
+- }
- /******************************** SYSFS *************************************/
--extern int zfcp_sysfs_driver_create_files(struct device_driver *);
--extern void zfcp_sysfs_driver_remove_files(struct device_driver *);
-+extern struct attribute_group *zfcp_driver_attr_groups[];
- extern int zfcp_sysfs_adapter_create_files(struct device *);
- extern void zfcp_sysfs_adapter_remove_files(struct device *);
- extern int zfcp_sysfs_port_create_files(struct device *, u32);
-diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
-index ff866eb..e45f85f 100644
---- a/drivers/s390/scsi/zfcp_fsf.c
-+++ b/drivers/s390/scsi/zfcp_fsf.c
-@@ -502,7 +502,7 @@ zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *fsf_req)
- fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR;
- break;
- case FSF_SQ_NO_RECOM:
-- ZFCP_LOG_NORMAL("bug: No recommendation could be given for a"
-+ ZFCP_LOG_NORMAL("bug: No recommendation could be given for a "
- "problem on the adapter %s "
- "Stopping all operations on this adapter. ",
- zfcp_get_busid_by_adapter(fsf_req->adapter));
-@@ -813,7 +813,7 @@ zfcp_fsf_status_read_port_closed(struct zfcp_fsf_req *fsf_req)
- read_unlock_irqrestore(&zfcp_data.config_lock, flags);
+ opcode = hdr->opcode & ISCSI_OPCODE_MASK;
+ /* verify itt (itt encoding: age+cid+itt) */
+ rc = iscsi_verify_itt(conn, hdr, &itt);
+- if (rc == ISCSI_ERR_NO_SCSI_CMD) {
+- tcp_conn->in.datalen = 0; /* force drop */
+- return 0;
+- } else if (rc)
++ if (rc)
+ return rc;
- if (!port || (port->d_id != (status_buffer->d_id & ZFCP_DID_MASK))) {
-- ZFCP_LOG_NORMAL("bug: Reopen port indication received for"
-+ ZFCP_LOG_NORMAL("bug: Reopen port indication received for "
- "nonexisting port with d_id 0x%06x on "
- "adapter %s. Ignored.\n",
- status_buffer->d_id & ZFCP_DID_MASK,
-@@ -1116,6 +1116,10 @@ zfcp_fsf_abort_fcp_command(unsigned long old_req_id,
- goto out;
- }
+- debug_tcp("opcode 0x%x offset %d copy %d ahslen %d datalen %d\n",
+- opcode, tcp_conn->in.offset, tcp_conn->in.copy,
+- ahslen, tcp_conn->in.datalen);
++ debug_tcp("opcode 0x%x ahslen %d datalen %d\n",
++ opcode, ahslen, tcp_conn->in.datalen);
-+ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
-+ &unit->status)))
-+ goto unit_blocked;
+ switch(opcode) {
+ case ISCSI_OP_SCSI_DATA_IN:
+- tcp_conn->in.ctask = session->cmds[itt];
+- rc = iscsi_data_rsp(conn, tcp_conn->in.ctask);
++ ctask = session->cmds[itt];
++ spin_lock(&conn->session->lock);
++ rc = iscsi_data_rsp(conn, ctask);
++ spin_unlock(&conn->session->lock);
+ if (rc)
+ return rc;
++ if (tcp_conn->in.datalen) {
++ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
++ struct hash_desc *rx_hash = NULL;
+
- sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0);
- sbale[0].flags |= SBAL_FLAGS0_TYPE_READ;
- sbale[1].flags |= SBAL_FLAGS_LAST_ENTRY;
-@@ -1131,22 +1135,13 @@ zfcp_fsf_abort_fcp_command(unsigned long old_req_id,
-
- zfcp_fsf_start_timer(fsf_req, ZFCP_SCSI_ER_TIMEOUT);
- retval = zfcp_fsf_req_send(fsf_req);
-- if (retval) {
-- ZFCP_LOG_INFO("error: Failed to send abort command request "
-- "on adapter %s, port 0x%016Lx, unit 0x%016Lx\n",
-- zfcp_get_busid_by_adapter(adapter),
-- unit->port->wwpn, unit->fcp_lun);
-+ if (!retval)
-+ goto out;
++ /*
++ * Setup copy of Data-In into the Scsi_Cmnd
++ * Scatterlist case:
++ * We set up the iscsi_segment to point to the next
++ * scatterlist entry to copy to. As we go along,
++ * we move on to the next scatterlist entry and
++ * update the digest per-entry.
++ */
++ if (conn->datadgst_en)
++ rx_hash = &tcp_conn->rx_hash;
+
-+ unit_blocked:
- zfcp_fsf_req_free(fsf_req);
- fsf_req = NULL;
-- goto out;
-- }
-
-- ZFCP_LOG_DEBUG("Abort FCP Command request initiated "
-- "(adapter%s, port d_id=0x%06x, "
-- "unit x%016Lx, old_req_id=0x%lx)\n",
-- zfcp_get_busid_by_adapter(adapter),
-- unit->port->d_id,
-- unit->fcp_lun, old_req_id);
- out:
- write_unlock_irqrestore(&adapter->request_queue.queue_lock, lock_flags);
- return fsf_req;
-@@ -1164,8 +1159,8 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
- {
- int retval = -EINVAL;
- struct zfcp_unit *unit;
-- unsigned char status_qual =
-- new_fsf_req->qtcb->header.fsf_status_qual.word[0];
-+ union fsf_status_qual *fsf_stat_qual =
-+ &new_fsf_req->qtcb->header.fsf_status_qual;
-
- if (new_fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) {
- /* do not set ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED */
-@@ -1178,7 +1173,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
- switch (new_fsf_req->qtcb->header.fsf_status) {
-
- case FSF_PORT_HANDLE_NOT_VALID:
-- if (status_qual >> 4 != status_qual % 0xf) {
-+ if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) {
- debug_text_event(new_fsf_req->adapter->erp_dbf, 3,
- "fsf_s_phand_nv0");
- /*
-@@ -1207,8 +1202,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req)
++ debug_tcp("iscsi_tcp_begin_data_in(%p, offset=%d, "
++ "datalen=%d)\n", tcp_conn,
++ tcp_ctask->data_offset,
++ tcp_conn->in.datalen);
++ return iscsi_segment_seek_sg(&tcp_conn->in.segment,
++ scsi_sglist(ctask->sc),
++ scsi_sg_count(ctask->sc),
++ tcp_ctask->data_offset,
++ tcp_conn->in.datalen,
++ iscsi_tcp_process_data_in,
++ rx_hash);
++ }
+ /* fall through */
+ case ISCSI_OP_SCSI_CMD_RSP:
+- tcp_conn->in.ctask = session->cmds[itt];
+- if (tcp_conn->in.datalen)
+- goto copy_hdr;
+-
+- spin_lock(&session->lock);
+- rc = __iscsi_complete_pdu(conn, hdr, NULL, 0);
+- spin_unlock(&session->lock);
++ if (tcp_conn->in.datalen) {
++ iscsi_tcp_data_recv_prep(tcp_conn);
++ return 0;
++ }
++ rc = iscsi_complete_pdu(conn, hdr, NULL, 0);
break;
+ case ISCSI_OP_R2T:
+- tcp_conn->in.ctask = session->cmds[itt];
++ ctask = session->cmds[itt];
+ if (ahslen)
+ rc = ISCSI_ERR_AHSLEN;
+- else if (tcp_conn->in.ctask->sc->sc_data_direction ==
+- DMA_TO_DEVICE)
+- rc = iscsi_r2t_rsp(conn, tcp_conn->in.ctask);
+- else
++ else if (ctask->sc->sc_data_direction == DMA_TO_DEVICE) {
++ spin_lock(&session->lock);
++ rc = iscsi_r2t_rsp(conn, ctask);
++ spin_unlock(&session->lock);
++ } else
+ rc = ISCSI_ERR_PROTO;
+ break;
+ case ISCSI_OP_LOGIN_RSP:
+@@ -520,8 +818,7 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
+ * than 8K, but there are no targets that currently do this.
+ * For now we fail until we find a vendor that needs it
+ */
+- if (ISCSI_DEF_MAX_RECV_SEG_LEN <
+- tcp_conn->in.datalen) {
++ if (ISCSI_DEF_MAX_RECV_SEG_LEN < tcp_conn->in.datalen) {
+ printk(KERN_ERR "iscsi_tcp: received buffer of len %u "
+ "but conn buffer is only %u (opcode %0x)\n",
+ tcp_conn->in.datalen,
+@@ -530,8 +827,13 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
+ break;
+ }
- case FSF_LUN_HANDLE_NOT_VALID:
-- if (status_qual >> 4 != status_qual % 0xf) {
-- /* 2 */
-+ if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) {
- debug_text_event(new_fsf_req->adapter->erp_dbf, 3,
- "fsf_s_lhand_nv0");
- /*
-@@ -1674,6 +1668,12 @@ zfcp_fsf_send_els(struct zfcp_send_els *els)
- goto failed_req;
- }
-
-+ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
-+ &els->port->status))) {
-+ ret = -EBUSY;
-+ goto port_blocked;
-+ }
-+
- sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0);
- if (zfcp_use_one_sbal(els->req, els->req_count,
- els->resp, els->resp_count)){
-@@ -1755,6 +1755,7 @@ zfcp_fsf_send_els(struct zfcp_send_els *els)
- "0x%06x)\n", zfcp_get_busid_by_adapter(adapter), d_id);
- goto out;
-
-+ port_blocked:
- failed_send:
- zfcp_fsf_req_free(fsf_req);
-
-@@ -2280,7 +2281,7 @@ zfcp_fsf_exchange_port_data(struct zfcp_erp_action *erp_action)
- &lock_flags, &fsf_req);
- if (retval) {
- ZFCP_LOG_INFO("error: Out of resources. Could not create an "
-- "exchange port data request for"
-+ "exchange port data request for "
- "the adapter %s.\n",
- zfcp_get_busid_by_adapter(adapter));
- write_unlock_irqrestore(&adapter->request_queue.queue_lock,
-@@ -2339,7 +2340,7 @@ zfcp_fsf_exchange_port_data_sync(struct zfcp_adapter *adapter,
- 0, NULL, &lock_flags, &fsf_req);
- if (retval) {
- ZFCP_LOG_INFO("error: Out of resources. Could not create an "
-- "exchange port data request for"
-+ "exchange port data request for "
- "the adapter %s.\n",
- zfcp_get_busid_by_adapter(adapter));
- write_unlock_irqrestore(&adapter->request_queue.queue_lock,
-@@ -3592,6 +3593,12 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter,
- goto failed_req_create;
+- if (tcp_conn->in.datalen)
+- goto copy_hdr;
++ /* If there's data coming in with the response,
++ * receive it to the connection's buffer.
++ */
++ if (tcp_conn->in.datalen) {
++ iscsi_tcp_data_recv_prep(tcp_conn);
++ return 0;
++ }
+ /* fall through */
+ case ISCSI_OP_LOGOUT_RSP:
+ case ISCSI_OP_NOOP_IN:
+@@ -543,461 +845,161 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
+ break;
}
-+ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
-+ &unit->status))) {
-+ retval = -EBUSY;
-+ goto unit_blocked;
-+ }
-+
- zfcp_unit_get(unit);
- fsf_req->unit = unit;
-
-@@ -3732,6 +3739,7 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter,
- send_failed:
- no_fit:
- failed_scsi_cmnd:
-+ unit_blocked:
- zfcp_unit_put(unit);
- zfcp_fsf_req_free(fsf_req);
- fsf_req = NULL;
-@@ -3766,6 +3774,10 @@ zfcp_fsf_send_fcp_command_task_management(struct zfcp_adapter *adapter,
- goto out;
+- return rc;
+-
+-copy_hdr:
+- /*
+- * if we did zero copy for the header but we will need multiple
+- * skbs to complete the command then we have to copy the header
+- * for later use
+- */
+- if (tcp_conn->in.zero_copy_hdr && tcp_conn->in.copy <=
+- (tcp_conn->in.datalen + tcp_conn->in.padding +
+- (conn->datadgst_en ? 4 : 0))) {
+- debug_tcp("Copying header for later use. in.copy %d in.datalen"
+- " %d\n", tcp_conn->in.copy, tcp_conn->in.datalen);
+- memcpy(&tcp_conn->hdr, tcp_conn->in.hdr,
+- sizeof(struct iscsi_hdr));
+- tcp_conn->in.hdr = &tcp_conn->hdr;
+- tcp_conn->in.zero_copy_hdr = 0;
+- }
+- return 0;
+-}
+-
+-/**
+- * iscsi_ctask_copy - copy skb bits to the destanation cmd task
+- * @conn: iscsi tcp connection
+- * @ctask: scsi command task
+- * @buf: buffer to copy to
+- * @buf_size: size of buffer
+- * @offset: offset within the buffer
+- *
+- * Notes:
+- * The function calls skb_copy_bits() and updates per-connection and
+- * per-cmd byte counters.
+- *
+- * Read counters (in bytes):
+- *
+- * conn->in.offset offset within in progress SKB
+- * conn->in.copy left to copy from in progress SKB
+- * including padding
+- * conn->in.copied copied already from in progress SKB
+- * conn->data_copied copied already from in progress buffer
+- * ctask->sent total bytes sent up to the MidLayer
+- * ctask->data_count left to copy from in progress Data-In
+- * buf_left left to copy from in progress buffer
+- **/
+-static inline int
+-iscsi_ctask_copy(struct iscsi_tcp_conn *tcp_conn, struct iscsi_cmd_task *ctask,
+- void *buf, int buf_size, int offset)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- int buf_left = buf_size - (tcp_conn->data_copied + offset);
+- unsigned size = min(tcp_conn->in.copy, buf_left);
+- int rc;
+-
+- size = min(size, ctask->data_count);
+-
+- debug_tcp("ctask_copy %d bytes at offset %d copied %d\n",
+- size, tcp_conn->in.offset, tcp_conn->in.copied);
+-
+- BUG_ON(size <= 0);
+- BUG_ON(tcp_ctask->sent + size > scsi_bufflen(ctask->sc));
+-
+- rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset,
+- (char*)buf + (offset + tcp_conn->data_copied), size);
+- /* must fit into skb->len */
+- BUG_ON(rc);
+-
+- tcp_conn->in.offset += size;
+- tcp_conn->in.copy -= size;
+- tcp_conn->in.copied += size;
+- tcp_conn->data_copied += size;
+- tcp_ctask->sent += size;
+- ctask->data_count -= size;
+-
+- BUG_ON(tcp_conn->in.copy < 0);
+- BUG_ON(ctask->data_count < 0);
+-
+- if (buf_size != (tcp_conn->data_copied + offset)) {
+- if (!ctask->data_count) {
+- BUG_ON(buf_size - tcp_conn->data_copied < 0);
+- /* done with this PDU */
+- return buf_size - tcp_conn->data_copied;
+- }
+- return -EAGAIN;
++ if (rc == 0) {
++ /* Anything that comes with data should have
++ * been handled above. */
++ if (tcp_conn->in.datalen)
++ return ISCSI_ERR_PROTO;
++ iscsi_tcp_hdr_recv_prep(tcp_conn);
}
-+ if (unlikely(!atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED,
-+ &unit->status)))
-+ goto unit_blocked;
-+
- /*
- * Used to decide on proper handler in the return path,
- * could be either zfcp_fsf_send_fcp_command_task_handler or
-@@ -3799,25 +3811,13 @@ zfcp_fsf_send_fcp_command_task_management(struct zfcp_adapter *adapter,
+- /* done with this buffer or with both - PDU and buffer */
+- tcp_conn->data_copied = 0;
+- return 0;
++ return rc;
+ }
- zfcp_fsf_start_timer(fsf_req, ZFCP_SCSI_ER_TIMEOUT);
- retval = zfcp_fsf_req_send(fsf_req);
-- if (retval) {
-- ZFCP_LOG_INFO("error: Could not send an FCP-command (task "
-- "management) on adapter %s, port 0x%016Lx for "
-- "unit LUN 0x%016Lx\n",
-- zfcp_get_busid_by_adapter(adapter),
-- unit->port->wwpn,
-- unit->fcp_lun);
-- zfcp_fsf_req_free(fsf_req);
-- fsf_req = NULL;
-+ if (!retval)
- goto out;
+ /**
+- * iscsi_tcp_copy - copy skb bits to the destanation buffer
+- * @conn: iscsi tcp connection
++ * iscsi_tcp_hdr_recv_done - process PDU header
+ *
+- * Notes:
+- * The function calls skb_copy_bits() and updates per-connection
+- * byte counters.
+- **/
+-static inline int
+-iscsi_tcp_copy(struct iscsi_conn *conn, int buf_size)
+-{
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- int buf_left = buf_size - tcp_conn->data_copied;
+- int size = min(tcp_conn->in.copy, buf_left);
+- int rc;
+-
+- debug_tcp("tcp_copy %d bytes at offset %d copied %d\n",
+- size, tcp_conn->in.offset, tcp_conn->data_copied);
+- BUG_ON(size <= 0);
+-
+- rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset,
+- (char*)conn->data + tcp_conn->data_copied, size);
+- BUG_ON(rc);
+-
+- tcp_conn->in.offset += size;
+- tcp_conn->in.copy -= size;
+- tcp_conn->in.copied += size;
+- tcp_conn->data_copied += size;
+-
+- if (buf_size != tcp_conn->data_copied)
+- return -EAGAIN;
+-
+- return 0;
+-}
+-
+-static inline void
+-partial_sg_digest_update(struct hash_desc *desc, struct scatterlist *sg,
+- int offset, int length)
+-{
+- struct scatterlist temp;
+-
+- sg_init_table(&temp, 1);
+- sg_set_page(&temp, sg_page(sg), length, offset);
+- crypto_hash_update(desc, &temp, length);
+-}
+-
+-static void
+-iscsi_recv_digest_update(struct iscsi_tcp_conn *tcp_conn, char* buf, int len)
+-{
+- struct scatterlist tmp;
+-
+- sg_init_one(&tmp, buf, len);
+- crypto_hash_update(&tcp_conn->rx_hash, &tmp, len);
+-}
+-
+-static int iscsi_scsi_data_in(struct iscsi_conn *conn)
++ * This is the callback invoked when the PDU header has
++ * been received. If the header is followed by additional
++ * header segments, we go back for more data.
++ */
++static int
++iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
+ {
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- struct iscsi_cmd_task *ctask = tcp_conn->in.ctask;
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- struct scsi_cmnd *sc = ctask->sc;
+- struct scatterlist *sg;
+- int i, offset, rc = 0;
+-
+- BUG_ON((void*)ctask != sc->SCp.ptr);
+-
+- offset = tcp_ctask->data_offset;
+- sg = scsi_sglist(sc);
+-
+- if (tcp_ctask->data_offset)
+- for (i = 0; i < tcp_ctask->sg_count; i++)
+- offset -= sg[i].length;
+- /* we've passed through partial sg*/
+- if (offset < 0)
+- offset = 0;
+-
+- for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++) {
+- char *dest;
+-
+- dest = kmap_atomic(sg_page(&sg[i]), KM_SOFTIRQ0);
+- rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg[i].offset,
+- sg[i].length, offset);
+- kunmap_atomic(dest, KM_SOFTIRQ0);
+- if (rc == -EAGAIN)
+- /* continue with the next SKB/PDU */
+- return rc;
+- if (!rc) {
+- if (conn->datadgst_en) {
+- if (!offset)
+- crypto_hash_update(
+- &tcp_conn->rx_hash,
+- &sg[i], sg[i].length);
+- else
+- partial_sg_digest_update(
+- &tcp_conn->rx_hash,
+- &sg[i],
+- sg[i].offset + offset,
+- sg[i].length - offset);
+- }
+- offset = 0;
+- tcp_ctask->sg_count++;
+- }
+-
+- if (!ctask->data_count) {
+- if (rc && conn->datadgst_en)
+- /*
+- * data-in is complete, but buffer not...
+- */
+- partial_sg_digest_update(&tcp_conn->rx_hash,
+- &sg[i],
+- sg[i].offset,
+- sg[i].length-rc);
+- rc = 0;
+- break;
+- }
+-
+- if (!tcp_conn->in.copy)
+- return -EAGAIN;
- }
+- BUG_ON(ctask->data_count);
++ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
++ struct iscsi_hdr *hdr;
-- ZFCP_LOG_TRACE("Send FCP Command (task management function) initiated "
-- "(adapter %s, port 0x%016Lx, unit 0x%016Lx, "
-- "tm_flags=0x%x)\n",
-- zfcp_get_busid_by_adapter(adapter),
-- unit->port->wwpn,
-- unit->fcp_lun,
-- tm_flags);
-+ unit_blocked:
-+ zfcp_fsf_req_free(fsf_req);
-+ fsf_req = NULL;
+- /* check for non-exceptional status */
+- if (tcp_conn->in.hdr->flags & ISCSI_FLAG_DATA_STATUS) {
+- debug_scsi("done [sc %lx res %d itt 0x%x flags 0x%x]\n",
+- (long)sc, sc->result, ctask->itt,
+- tcp_conn->in.hdr->flags);
+- spin_lock(&conn->session->lock);
+- __iscsi_complete_pdu(conn, tcp_conn->in.hdr, NULL, 0);
+- spin_unlock(&conn->session->lock);
++ /* Check if there are additional header segments
++ * *prior* to computing the digest, because we
++ * may need to go back to the caller for more.
++ */
++ hdr = (struct iscsi_hdr *) tcp_conn->in.hdr_buf;
++ if (segment->copied == sizeof(struct iscsi_hdr) && hdr->hlength) {
++ /* Bump the header length - the caller will
++ * just loop around and get the AHS for us, and
++ * call again. */
++ unsigned int ahslen = hdr->hlength << 2;
+
- out:
- write_unlock_irqrestore(&adapter->request_queue.queue_lock, lock_flags);
- return fsf_req;
-@@ -4725,7 +4725,7 @@ zfcp_fsf_req_create(struct zfcp_adapter *adapter, u32 fsf_cmd, int req_flags,
- /* allocate new FSF request */
- fsf_req = zfcp_fsf_req_alloc(pool, req_flags);
- if (unlikely(NULL == fsf_req)) {
-- ZFCP_LOG_DEBUG("error: Could not put an FSF request into"
-+ ZFCP_LOG_DEBUG("error: Could not put an FSF request into "
- "the outbound (send) queue.\n");
- ret = -ENOMEM;
- goto failed_fsf_req;
-diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
-index 51d92b1..22fdc17 100644
---- a/drivers/s390/scsi/zfcp_qdio.c
-+++ b/drivers/s390/scsi/zfcp_qdio.c
-@@ -529,7 +529,7 @@ zfcp_qdio_sbals_wipe(struct zfcp_fsf_req *fsf_req)
-
-
- /**
-- * zfcp_qdio_sbale_fill - set address and lenght in current SBALE
-+ * zfcp_qdio_sbale_fill - set address and length in current SBALE
- * on request_queue
- */
- static void
-diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
-index abae202..b9daf5c 100644
---- a/drivers/s390/scsi/zfcp_scsi.c
-+++ b/drivers/s390/scsi/zfcp_scsi.c
-@@ -51,7 +51,6 @@ struct zfcp_data zfcp_data = {
- .queuecommand = zfcp_scsi_queuecommand,
- .eh_abort_handler = zfcp_scsi_eh_abort_handler,
- .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler,
-- .eh_bus_reset_handler = zfcp_scsi_eh_host_reset_handler,
- .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler,
- .can_queue = 4096,
- .this_id = -1,
-@@ -181,9 +180,6 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt)
-
- if (unit) {
- zfcp_erp_wait(unit->port->adapter);
-- wait_event(unit->scsi_scan_wq,
-- atomic_test_mask(ZFCP_STATUS_UNIT_SCSI_WORK_PENDING,
-- &unit->status) == 0);
- atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status);
- sdpnt->hostdata = NULL;
- unit->device = NULL;
-@@ -262,8 +258,9 @@ zfcp_scsi_command_async(struct zfcp_adapter *adapter, struct zfcp_unit *unit,
- goto out;
++ /* Make sure we don't overflow */
++ if (sizeof(*hdr) + ahslen > sizeof(tcp_conn->in.hdr_buf))
++ return ISCSI_ERR_AHSLEN;
++
++ segment->total_size += ahslen;
++ segment->size += ahslen;
++ return 0;
}
-- if (unlikely(
-- !atomic_test_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status))) {
-+ tmp = zfcp_fsf_send_fcp_command_task(adapter, unit, scpnt, use_timer,
-+ ZFCP_REQ_AUTO_CLEANUP);
-+ if (unlikely(tmp == -EBUSY)) {
- ZFCP_LOG_DEBUG("adapter %s not ready or unit 0x%016Lx "
- "on port 0x%016Lx in recovery\n",
- zfcp_get_busid_by_unit(unit),
-@@ -272,9 +269,6 @@ zfcp_scsi_command_async(struct zfcp_adapter *adapter, struct zfcp_unit *unit,
- goto out;
- }
+- return rc;
+-}
+-
+-static int
+-iscsi_data_recv(struct iscsi_conn *conn)
+-{
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- int rc = 0, opcode;
+-
+- opcode = tcp_conn->in.hdr->opcode & ISCSI_OPCODE_MASK;
+- switch (opcode) {
+- case ISCSI_OP_SCSI_DATA_IN:
+- rc = iscsi_scsi_data_in(conn);
+- break;
+- case ISCSI_OP_SCSI_CMD_RSP:
+- case ISCSI_OP_TEXT_RSP:
+- case ISCSI_OP_LOGIN_RSP:
+- case ISCSI_OP_ASYNC_EVENT:
+- case ISCSI_OP_REJECT:
+- /*
+- * Collect data segment to the connection's data
+- * placeholder
+- */
+- if (iscsi_tcp_copy(conn, tcp_conn->in.datalen)) {
+- rc = -EAGAIN;
+- goto exit;
++ /* We're done processing the header. See if we're doing
++ * header digests; if so, set up the recv_digest buffer
++ * and go back for more. */
++ if (conn->hdrdgst_en) {
++ if (segment->digest_len == 0) {
++ iscsi_tcp_segment_splice_digest(segment,
++ segment->recv_digest);
++ return 0;
+ }
++ iscsi_tcp_dgst_header(&tcp_conn->rx_hash, hdr,
++ segment->total_copied - ISCSI_DIGEST_SIZE,
++ segment->digest);
-- tmp = zfcp_fsf_send_fcp_command_task(adapter, unit, scpnt, use_timer,
-- ZFCP_REQ_AUTO_CLEANUP);
--
- if (unlikely(tmp < 0)) {
- ZFCP_LOG_DEBUG("error: initiation of Send FCP Cmnd failed\n");
- retval = SCSI_MLQUEUE_HOST_BUSY;
-@@ -459,7 +453,9 @@ zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *scpnt)
- retval = SUCCESS;
- goto out;
+- rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, conn->data,
+- tcp_conn->in.datalen);
+- if (!rc && conn->datadgst_en && opcode != ISCSI_OP_LOGIN_RSP)
+- iscsi_recv_digest_update(tcp_conn, conn->data,
+- tcp_conn->in.datalen);
+- break;
+- default:
+- BUG_ON(1);
++ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
++ return ISCSI_ERR_HDR_DGST;
}
-- ZFCP_LOG_NORMAL("resetting unit 0x%016Lx\n", unit->fcp_lun);
-+ ZFCP_LOG_NORMAL("resetting unit 0x%016Lx on port 0x%016Lx, adapter %s\n",
-+ unit->fcp_lun, unit->port->wwpn,
-+ zfcp_get_busid_by_adapter(unit->port->adapter));
-
- /*
- * If we do not know whether the unit supports 'logical unit reset'
-@@ -542,7 +538,7 @@ zfcp_task_management_function(struct zfcp_unit *unit, u8 tm_flags,
+-exit:
+- return rc;
++
++ tcp_conn->in.hdr = hdr;
++ return iscsi_tcp_hdr_dissect(conn, hdr);
}
/**
-- * zfcp_scsi_eh_host_reset_handler - handler for host and bus reset
-+ * zfcp_scsi_eh_host_reset_handler - handler for host reset
- */
- static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+- * iscsi_tcp_data_recv - TCP receive in sendfile fashion
++ * iscsi_tcp_recv - TCP receive in sendfile fashion
+ * @rd_desc: read descriptor
+ * @skb: socket buffer
+ * @offset: offset in skb
+ * @len: skb->len - offset
+ **/
+ static int
+-iscsi_tcp_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
+- unsigned int offset, size_t len)
++iscsi_tcp_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
++ unsigned int offset, size_t len)
{
-@@ -552,8 +548,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
- unit = (struct zfcp_unit*) scpnt->device->hostdata;
- adapter = unit->port->adapter;
-
-- ZFCP_LOG_NORMAL("host/bus reset because of problems with "
-- "unit 0x%016Lx\n", unit->fcp_lun);
-+ ZFCP_LOG_NORMAL("host reset because of problems with "
-+ "unit 0x%016Lx on port 0x%016Lx, adapter %s\n",
-+ unit->fcp_lun, unit->port->wwpn,
-+ zfcp_get_busid_by_adapter(unit->port->adapter));
-
- zfcp_erp_adapter_reopen(adapter, 0);
- zfcp_erp_wait(adapter);
-diff --git a/drivers/s390/scsi/zfcp_sysfs_driver.c b/drivers/s390/scsi/zfcp_sysfs_driver.c
-index 005e62f..651edd5 100644
---- a/drivers/s390/scsi/zfcp_sysfs_driver.c
-+++ b/drivers/s390/scsi/zfcp_sysfs_driver.c
-@@ -98,28 +98,9 @@ static struct attribute_group zfcp_driver_attr_group = {
- .attrs = zfcp_driver_attrs,
- };
-
--/**
-- * zfcp_sysfs_create_driver_files - create sysfs driver files
-- * @dev: pointer to belonging device
-- *
-- * Create all sysfs attributes of the zfcp device driver
-- */
--int
--zfcp_sysfs_driver_create_files(struct device_driver *drv)
--{
-- return sysfs_create_group(&drv->kobj, &zfcp_driver_attr_group);
--}
+- int rc;
+ struct iscsi_conn *conn = rd_desc->arg.data;
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- int processed;
+- char pad[ISCSI_PAD_LEN];
+- struct scatterlist sg;
-
--/**
-- * zfcp_sysfs_remove_driver_files - remove sysfs driver files
-- * @dev: pointer to belonging device
-- *
-- * Remove all sysfs attributes of the zfcp device driver
-- */
--void
--zfcp_sysfs_driver_remove_files(struct device_driver *drv)
--{
-- sysfs_remove_group(&drv->kobj, &zfcp_driver_attr_group);
--}
-+struct attribute_group *zfcp_driver_attr_groups[] = {
-+ &zfcp_driver_attr_group,
-+ NULL,
-+};
+- /*
+- * Save current SKB and its offset in the corresponding
+- * connection context.
+- */
+- tcp_conn->in.copy = skb->len - offset;
+- tcp_conn->in.offset = offset;
+- tcp_conn->in.skb = skb;
+- tcp_conn->in.len = tcp_conn->in.copy;
+- BUG_ON(tcp_conn->in.copy <= 0);
+- debug_tcp("in %d bytes\n", tcp_conn->in.copy);
++ struct iscsi_segment *segment = &tcp_conn->in.segment;
++ struct skb_seq_state seq;
++ unsigned int consumed = 0;
++ int rc = 0;
- #undef ZFCP_LOG_AREA
-diff --git a/drivers/scsi/.gitignore b/drivers/scsi/.gitignore
-index b385af3..c89ae9a 100644
---- a/drivers/scsi/.gitignore
-+++ b/drivers/scsi/.gitignore
-@@ -1,3 +1 @@
- 53c700_d.h
--53c7xx_d.h
--53c7xx_u.h
-diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
-index afb262b..1c24483 100644
---- a/drivers/scsi/3w-9xxx.c
-+++ b/drivers/scsi/3w-9xxx.c
-@@ -2010,6 +2010,7 @@ static int __devinit twa_probe(struct pci_dev *pdev, const struct pci_device_id
- }
+-more:
+- tcp_conn->in.copied = 0;
+- rc = 0;
++ debug_tcp("in %d bytes\n", skb->len - offset);
- pci_set_master(pdev);
-+ pci_try_set_mwi(pdev);
+ if (unlikely(conn->suspend_rx)) {
+ debug_tcp("conn %d Rx suspended!\n", conn->id);
+ return 0;
+ }
- if (pci_set_dma_mask(pdev, DMA_64BIT_MASK)
- || pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
-diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c
-index 71ff3fb..f4c4fe9 100644
---- a/drivers/scsi/53c700.c
-+++ b/drivers/scsi/53c700.c
-@@ -608,7 +608,8 @@ NCR_700_scsi_done(struct NCR_700_Host_Parameters *hostdata,
- scsi_print_sense("53c700", SCp);
+- if (tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER ||
+- tcp_conn->in_progress == IN_PROGRESS_HEADER_GATHER) {
+- rc = iscsi_hdr_extract(tcp_conn);
+- if (rc) {
+- if (rc == -EAGAIN)
+- goto nomore;
+- else {
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- return 0;
+- }
+- }
++ skb_prepare_seq_read(skb, offset, skb->len, &seq);
++ while (1) {
++ unsigned int avail;
++ const u8 *ptr;
- #endif
-- dma_unmap_single(hostdata->dev, slot->dma_handle, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
-+ dma_unmap_single(hostdata->dev, slot->dma_handle,
-+ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
- /* restore the old result if the request sense was
- * successful */
- if (result == 0)
-@@ -1010,7 +1011,7 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
- cmnd[1] = (SCp->device->lun & 0x7) << 5;
- cmnd[2] = 0;
- cmnd[3] = 0;
-- cmnd[4] = sizeof(SCp->sense_buffer);
-+ cmnd[4] = SCSI_SENSE_BUFFERSIZE;
- cmnd[5] = 0;
- /* Here's a quiet hack: the
- * REQUEST_SENSE command is six bytes,
-@@ -1024,14 +1025,14 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
- SCp->cmd_len = 6; /* command length for
- * REQUEST_SENSE */
- slot->pCmd = dma_map_single(hostdata->dev, cmnd, MAX_COMMAND_SIZE, DMA_TO_DEVICE);
-- slot->dma_handle = dma_map_single(hostdata->dev, SCp->sense_buffer, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
-- slot->SG[0].ins = bS_to_host(SCRIPT_MOVE_DATA_IN | sizeof(SCp->sense_buffer));
-+ slot->dma_handle = dma_map_single(hostdata->dev, SCp->sense_buffer, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
-+ slot->SG[0].ins = bS_to_host(SCRIPT_MOVE_DATA_IN | SCSI_SENSE_BUFFERSIZE);
- slot->SG[0].pAddr = bS_to_host(slot->dma_handle);
- slot->SG[1].ins = bS_to_host(SCRIPT_RETURN);
- slot->SG[1].pAddr = 0;
- slot->resume_offset = hostdata->pScript;
- dma_cache_sync(hostdata->dev, slot->SG, sizeof(slot->SG[0])*2, DMA_TO_DEVICE);
-- dma_cache_sync(hostdata->dev, SCp->sense_buffer, sizeof(SCp->sense_buffer), DMA_FROM_DEVICE);
-+ dma_cache_sync(hostdata->dev, SCp->sense_buffer, SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
+- /*
+- * Verify and process incoming PDU header.
+- */
+- rc = iscsi_tcp_hdr_recv(conn);
+- if (!rc && tcp_conn->in.datalen) {
+- if (conn->datadgst_en)
+- crypto_hash_init(&tcp_conn->rx_hash);
+- tcp_conn->in_progress = IN_PROGRESS_DATA_RECV;
+- } else if (rc) {
+- iscsi_conn_failure(conn, rc);
+- return 0;
++ avail = skb_seq_read(consumed, &ptr, &seq);
++ if (avail == 0) {
++ debug_tcp("no more data avail. Consumed %d\n",
++ consumed);
++ break;
+ }
+- }
+-
+- if (tcp_conn->in_progress == IN_PROGRESS_DDIGEST_RECV &&
+- tcp_conn->in.copy) {
+- uint32_t recv_digest;
+-
+- debug_tcp("extra data_recv offset %d copy %d\n",
+- tcp_conn->in.offset, tcp_conn->in.copy);
+-
+- if (!tcp_conn->data_copied) {
+- if (tcp_conn->in.padding) {
+- debug_tcp("padding -> %d\n",
+- tcp_conn->in.padding);
+- memset(pad, 0, tcp_conn->in.padding);
+- sg_init_one(&sg, pad, tcp_conn->in.padding);
+- crypto_hash_update(&tcp_conn->rx_hash,
+- &sg, sg.length);
++ BUG_ON(segment->copied >= segment->size);
++
++ debug_tcp("skb %p ptr=%p avail=%u\n", skb, ptr, avail);
++ rc = iscsi_tcp_segment_recv(tcp_conn, segment, ptr, avail);
++ BUG_ON(rc == 0);
++ consumed += rc;
++
++ if (segment->total_copied >= segment->total_size) {
++ debug_tcp("segment done\n");
++ rc = segment->done(tcp_conn, segment);
++ if (rc != 0) {
++ skb_abort_seq_read(&seq);
++ goto error;
+ }
+- crypto_hash_final(&tcp_conn->rx_hash,
+- (u8 *) &tcp_conn->in.datadgst);
+- debug_tcp("rx digest 0x%x\n", tcp_conn->in.datadgst);
+- }
- /* queue the command for reissue */
- slot->state = NCR_700_SLOT_QUEUED;
-diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
-index 49e1ffa..ead47c1 100644
---- a/drivers/scsi/BusLogic.c
-+++ b/drivers/scsi/BusLogic.c
-@@ -2947,7 +2947,7 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou
+- rc = iscsi_tcp_copy(conn, sizeof(uint32_t));
+- if (rc) {
+- if (rc == -EAGAIN)
+- goto again;
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- return 0;
+- }
+-
+- memcpy(&recv_digest, conn->data, sizeof(uint32_t));
+- if (recv_digest != tcp_conn->in.datadgst) {
+- debug_tcp("iscsi_tcp: data digest error!"
+- "0x%x != 0x%x\n", recv_digest,
+- tcp_conn->in.datadgst);
+- iscsi_conn_failure(conn, ISCSI_ERR_DATA_DGST);
+- return 0;
+- } else {
+- debug_tcp("iscsi_tcp: data digest match!"
+- "0x%x == 0x%x\n", recv_digest,
+- tcp_conn->in.datadgst);
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
++ /* The done() functions sets up the
++ * next segment. */
}
}
- memcpy(CCB->CDB, CDB, CDB_Length);
-- CCB->SenseDataLength = sizeof(Command->sense_buffer);
-+ CCB->SenseDataLength = SCSI_SENSE_BUFFERSIZE;
- CCB->SenseDataPointer = pci_map_single(HostAdapter->PCI_Device, Command->sense_buffer, CCB->SenseDataLength, PCI_DMA_FROMDEVICE);
- CCB->Command = Command;
- Command->scsi_done = CompletionRoutine;
-diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
-index 184c7ae..3e161cd 100644
---- a/drivers/scsi/Kconfig
-+++ b/drivers/scsi/Kconfig
-@@ -341,7 +341,7 @@ config ISCSI_TCP
- The userspace component needed to initialize the driver, documentation,
- and sample configuration files can be found here:
-
-- http://linux-iscsi.sf.net
-+ http://open-iscsi.org
-
- config SGIWD93_SCSI
- tristate "SGI WD93C93 SCSI Driver"
-@@ -573,10 +573,10 @@ config SCSI_ARCMSR_AER
- source "drivers/scsi/megaraid/Kconfig.megaraid"
-
- config SCSI_HPTIOP
-- tristate "HighPoint RocketRAID 3xxx Controller support"
-+ tristate "HighPoint RocketRAID 3xxx/4xxx Controller support"
- depends on SCSI && PCI
- help
-- This option enables support for HighPoint RocketRAID 3xxx
-+ This option enables support for HighPoint RocketRAID 3xxx/4xxx
- controllers.
-
- To compile this driver as a module, choose M here; the module
-@@ -1288,17 +1288,6 @@ config SCSI_PAS16
- To compile this driver as a module, choose M here: the
- module will be called pas16.
++ skb_abort_seq_read(&seq);
++ conn->rxdata_octets += consumed;
++ return consumed;
--config SCSI_PSI240I
-- tristate "PSI240i support"
-- depends on ISA && SCSI
-- help
-- This is support for the PSI240i EIDE interface card which acts as a
-- SCSI host adapter. Please read the SCSI-HOWTO, available from
-- <http://www.tldp.org/docs.html#howto>.
+- if (tcp_conn->in_progress == IN_PROGRESS_DATA_RECV &&
+- tcp_conn->in.copy) {
+- debug_tcp("data_recv offset %d copy %d\n",
+- tcp_conn->in.offset, tcp_conn->in.copy);
-
-- To compile this driver as a module, choose M here: the
-- module will be called psi240i.
+- rc = iscsi_data_recv(conn);
+- if (rc) {
+- if (rc == -EAGAIN)
+- goto again;
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- return 0;
+- }
-
- config SCSI_QLOGIC_FAS
- tristate "Qlogic FAS SCSI support"
- depends on ISA && SCSI
-@@ -1359,21 +1348,6 @@ config SCSI_LPFC
- This lpfc driver supports the Emulex LightPulse
- Family of Fibre Channel PCI host adapters.
-
--config SCSI_SEAGATE
-- tristate "Seagate ST-02 and Future Domain TMC-8xx SCSI support"
-- depends on X86 && ISA && SCSI
-- select CHECK_SIGNATURE
-- ---help---
-- These are 8-bit SCSI controllers; the ST-01 is also supported by
-- this driver. It is explained in section 3.9 of the SCSI-HOWTO,
-- available from <http://www.tldp.org/docs.html#howto>. If it
-- doesn't work out of the box, you may have to change some macros at
-- compiletime, which are described in <file:drivers/scsi/seagate.c>.
+- if (tcp_conn->in.padding)
+- tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
+- else if (conn->datadgst_en)
+- tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
+- else
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
+- tcp_conn->data_copied = 0;
+- }
-
-- To compile this driver as a module, choose M here: the
-- module will be called seagate.
+- if (tcp_conn->in_progress == IN_PROGRESS_PAD_RECV &&
+- tcp_conn->in.copy) {
+- int copylen = min(tcp_conn->in.padding - tcp_conn->data_copied,
+- tcp_conn->in.copy);
-
--# definitely looks not 64bit safe:
- config SCSI_SIM710
- tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
- depends on (EISA || MCA) && SCSI
-diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
-index 2e6129f..93e1428 100644
---- a/drivers/scsi/Makefile
-+++ b/drivers/scsi/Makefile
-@@ -16,9 +16,8 @@
+- tcp_conn->in.copy -= copylen;
+- tcp_conn->in.offset += copylen;
+- tcp_conn->data_copied += copylen;
+-
+- if (tcp_conn->data_copied != tcp_conn->in.padding)
+- tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
+- else if (conn->datadgst_en)
+- tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
+- else
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
+- tcp_conn->data_copied = 0;
+- }
+-
+- debug_tcp("f, processed %d from out of %d padding %d\n",
+- tcp_conn->in.offset - offset, (int)len, tcp_conn->in.padding);
+- BUG_ON(tcp_conn->in.offset - offset > len);
+-
+- if (tcp_conn->in.offset - offset != len) {
+- debug_tcp("continue to process %d bytes\n",
+- (int)len - (tcp_conn->in.offset - offset));
+- goto more;
+- }
+-
+-nomore:
+- processed = tcp_conn->in.offset - offset;
+- BUG_ON(processed == 0);
+- return processed;
+-
+-again:
+- processed = tcp_conn->in.offset - offset;
+- debug_tcp("c, processed %d from out of %d rd_desc_cnt %d\n",
+- processed, (int)len, (int)rd_desc->count);
+- BUG_ON(processed == 0);
+- BUG_ON(processed > len);
+-
+- conn->rxdata_octets += processed;
+- return processed;
++error:
++ debug_tcp("Error receiving PDU, errno=%d\n", rc);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ return 0;
+ }
- CFLAGS_aha152x.o = -DAHA152X_STAT -DAUTOCONF
- CFLAGS_gdth.o = # -DDEBUG_GDTH=2 -D__SERIAL__ -D__COM2__ -DGDTH_STATISTICS
--CFLAGS_seagate.o = -DARBITRATE -DPARITY -DSEAGATE_USE_ASM
+ static void
+ iscsi_tcp_data_ready(struct sock *sk, int flag)
+ {
+ struct iscsi_conn *conn = sk->sk_user_data;
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ read_descriptor_t rd_desc;
--subdir-$(CONFIG_PCMCIA) += pcmcia
-+obj-$(CONFIG_PCMCIA) += pcmcia/
+ read_lock(&sk->sk_callback_lock);
- obj-$(CONFIG_SCSI) += scsi_mod.o
- obj-$(CONFIG_SCSI_TGT) += scsi_tgt.o
-@@ -59,7 +58,6 @@ obj-$(CONFIG_MVME16x_SCSI) += 53c700.o mvme16x_scsi.o
- obj-$(CONFIG_BVME6000_SCSI) += 53c700.o bvme6000_scsi.o
- obj-$(CONFIG_SCSI_SIM710) += 53c700.o sim710.o
- obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o
--obj-$(CONFIG_SCSI_PSI240I) += psi240i.o
- obj-$(CONFIG_SCSI_BUSLOGIC) += BusLogic.o
- obj-$(CONFIG_SCSI_DPT_I2O) += dpt_i2o.o
- obj-$(CONFIG_SCSI_U14_34F) += u14-34f.o
-@@ -90,7 +88,6 @@ obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
- obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/
- obj-$(CONFIG_SCSI_LPFC) += lpfc/
- obj-$(CONFIG_SCSI_PAS16) += pas16.o
--obj-$(CONFIG_SCSI_SEAGATE) += seagate.o
- obj-$(CONFIG_SCSI_T128) += t128.o
- obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o
- obj-$(CONFIG_SCSI_DTC3280) += dtc.o
-diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
-index 2597209..eeddbd1 100644
---- a/drivers/scsi/NCR5380.c
-+++ b/drivers/scsi/NCR5380.c
-@@ -295,16 +295,16 @@ static __inline__ void initialize_SCp(Scsi_Cmnd * cmd)
- * various queues are valid.
+ /*
+- * Use rd_desc to pass 'conn' to iscsi_tcp_data_recv.
++ * Use rd_desc to pass 'conn' to iscsi_tcp_recv.
+ * We set count to 1 because we want the network layer to
+- * hand us all the skbs that are available. iscsi_tcp_data_recv
++ * hand us all the skbs that are available. iscsi_tcp_recv
+ * handled pdus that cross buffers or pdus that still need data.
*/
+ rd_desc.arg.data = conn;
+ rd_desc.count = 1;
+- tcp_read_sock(sk, &rd_desc, iscsi_tcp_data_recv);
++ tcp_read_sock(sk, &rd_desc, iscsi_tcp_recv);
-- if (cmd->use_sg) {
-- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- } else {
- cmd->SCp.buffer = NULL;
- cmd->SCp.buffers_residual = 0;
-- cmd->SCp.ptr = (char *) cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
- }
+ read_unlock(&sk->sk_callback_lock);
++
++ /* If we had to (atomically) map a highmem page,
++ * unmap it now. */
++ iscsi_tcp_segment_unmap(&tcp_conn->in.segment);
}
-@@ -932,7 +932,7 @@ static int __devinit NCR5380_init(struct Scsi_Host *instance, int flags)
- * @instance: adapter to remove
- */
-
--static void __devexit NCR5380_exit(struct Scsi_Host *instance)
-+static void NCR5380_exit(struct Scsi_Host *instance)
- {
- struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata;
-
-@@ -975,14 +975,14 @@ static int NCR5380_queue_command(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
- case WRITE_6:
- case WRITE_10:
- hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingw++;
- break;
- case READ:
- case READ_6:
- case READ_10:
- hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingr++;
- break;
- }
-@@ -1157,16 +1157,17 @@ static void NCR5380_main(struct work_struct *work)
- * Locks: takes the needed instance locks
- */
+ static void
+@@ -1077,121 +1079,173 @@ iscsi_conn_restore_callbacks(struct iscsi_tcp_conn *tcp_conn)
+ }
--static irqreturn_t NCR5380_intr(int irq, void *dev_id)
-+static irqreturn_t NCR5380_intr(int dummy, void *dev_id)
+ /**
+- * iscsi_send - generic send routine
+- * @sk: kernel's socket
+- * @buf: buffer to write from
+- * @size: actual size to write
+- * @flags: socket's flags
+- */
+-static inline int
+-iscsi_send(struct iscsi_conn *conn, struct iscsi_buf *buf, int size, int flags)
++ * iscsi_xmit - TCP transmit
++ **/
++static int
++iscsi_xmit(struct iscsi_conn *conn)
{
- NCR5380_local_declare();
-- struct Scsi_Host *instance = (struct Scsi_Host *)dev_id;
-+ struct Scsi_Host *instance = dev_id;
- struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata;
- int done;
- unsigned char basr;
- unsigned long flags;
-
-- dprintk(NDEBUG_INTR, ("scsi : NCR5380 irq %d triggered\n", irq));
-+ dprintk(NDEBUG_INTR, ("scsi : NCR5380 irq %d triggered\n",
-+ instance->irq));
-
- do {
- done = 1;
-diff --git a/drivers/scsi/a2091.c b/drivers/scsi/a2091.c
-index b7c5385..23f27c9 100644
---- a/drivers/scsi/a2091.c
-+++ b/drivers/scsi/a2091.c
-@@ -73,18 +73,9 @@ static int dma_setup(struct scsi_cmnd *cmd, int dir_in)
- }
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- struct socket *sk = tcp_conn->sock;
+- int offset = buf->sg.offset + buf->sent, res;
++ struct iscsi_segment *segment = &tcp_conn->out.segment;
++ unsigned int consumed = 0;
++ int rc = 0;
- if (!dir_in) {
-- /* copy to bounce buffer for a write */
-- if (cmd->use_sg)
--#if 0
-- panic ("scsi%ddma: incomplete s/g support",
-- instance->host_no);
--#else
-+ /* copy to bounce buffer for a write */
- memcpy (HDATA(instance)->dma_bounce_buffer,
- cmd->SCp.ptr, cmd->SCp.this_residual);
--#endif
-- else
-- memcpy (HDATA(instance)->dma_bounce_buffer,
-- cmd->request_buffer, cmd->request_bufflen);
+- /*
+- * if we got use_sg=0 or are sending something we kmallocd
+- * then we did not have to do kmap (kmap returns page_address)
+- *
+- * if we got use_sg > 0, but had to drop down, we do not
+- * set clustering so this should only happen for that
+- * slab case.
+- */
+- if (buf->use_sendmsg)
+- res = sock_no_sendpage(sk, sg_page(&buf->sg), offset, size, flags);
+- else
+- res = tcp_conn->sendpage(sk, sg_page(&buf->sg), offset, size, flags);
+-
+- if (res >= 0) {
+- conn->txdata_octets += res;
+- buf->sent += res;
+- return res;
++ while (1) {
++ rc = iscsi_tcp_xmit_segment(tcp_conn, segment);
++ if (rc < 0)
++ goto error;
++ if (rc == 0)
++ break;
++
++ consumed += rc;
++
++ if (segment->total_copied >= segment->total_size) {
++ if (segment->done != NULL) {
++ rc = segment->done(tcp_conn, segment);
++ if (rc < 0)
++ goto error;
++ }
++ }
}
- }
-@@ -144,30 +135,13 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
+- tcp_conn->sendpage_failures_cnt++;
+- if (res == -EAGAIN)
+- res = -ENOBUFS;
+- else
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- return res;
++ debug_tcp("xmit %d bytes\n", consumed);
++
++ conn->txdata_octets += consumed;
++ return consumed;
++
++error:
++ /* Transmit error. We could initiate error recovery
++ * here. */
++ debug_tcp("Error sending PDU, errno=%d\n", rc);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ return rc;
+ }
- /* copy from a bounce buffer, if necessary */
- if (status && HDATA(instance)->dma_bounce_buffer) {
-- if (SCpnt && SCpnt->use_sg) {
--#if 0
-- panic ("scsi%d: incomplete s/g support",
-- instance->host_no);
--#else
-- if( HDATA(instance)->dma_dir )
-+ if( HDATA(instance)->dma_dir )
- memcpy (SCpnt->SCp.ptr,
- HDATA(instance)->dma_bounce_buffer,
- SCpnt->SCp.this_residual);
-- kfree (HDATA(instance)->dma_bounce_buffer);
-- HDATA(instance)->dma_bounce_buffer = NULL;
-- HDATA(instance)->dma_bounce_len = 0;
--
--#endif
-- } else {
-- if (HDATA(instance)->dma_dir && SCpnt)
-- memcpy (SCpnt->request_buffer,
-- HDATA(instance)->dma_bounce_buffer,
-- SCpnt->request_bufflen);
+ /**
+- * iscsi_sendhdr - send PDU Header via tcp_sendpage()
+- * @conn: iscsi connection
+- * @buf: buffer to write from
+- * @datalen: lenght of data to be sent after the header
+- *
+- * Notes:
+- * (Tx, Fast Path)
+- **/
++ * iscsi_tcp_xmit_qlen - return the number of bytes queued for xmit
++ */
+ static inline int
+-iscsi_sendhdr(struct iscsi_conn *conn, struct iscsi_buf *buf, int datalen)
++iscsi_tcp_xmit_qlen(struct iscsi_conn *conn)
+ {
+- int flags = 0; /* MSG_DONTWAIT; */
+- int res, size;
-
-- kfree (HDATA(instance)->dma_bounce_buffer);
-- HDATA(instance)->dma_bounce_buffer = NULL;
-- HDATA(instance)->dma_bounce_len = 0;
+- size = buf->sg.length - buf->sent;
+- BUG_ON(buf->sent + size > buf->sg.length);
+- if (buf->sent + size != buf->sg.length || datalen)
+- flags |= MSG_MORE;
+-
+- res = iscsi_send(conn, buf, size, flags);
+- debug_tcp("sendhdr %d bytes, sent %d res %d\n", size, buf->sent, res);
+- if (res >= 0) {
+- if (size != res)
+- return -EAGAIN;
+- return 0;
- }
-+ kfree (HDATA(instance)->dma_bounce_buffer);
-+ HDATA(instance)->dma_bounce_buffer = NULL;
-+ HDATA(instance)->dma_bounce_len = 0;
- }
- }
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct iscsi_segment *segment = &tcp_conn->out.segment;
-diff --git a/drivers/scsi/a3000.c b/drivers/scsi/a3000.c
-index 796f1c4..d7255c8 100644
---- a/drivers/scsi/a3000.c
-+++ b/drivers/scsi/a3000.c
-@@ -70,12 +70,8 @@ static int dma_setup(struct scsi_cmnd *cmd, int dir_in)
+- return res;
++ return segment->total_copied - segment->total_size;
+ }
- if (!dir_in) {
- /* copy to bounce buffer for a write */
-- if (cmd->use_sg) {
-- memcpy (HDATA(a3000_host)->dma_bounce_buffer,
-- cmd->SCp.ptr, cmd->SCp.this_residual);
-- } else
-- memcpy (HDATA(a3000_host)->dma_bounce_buffer,
-- cmd->request_buffer, cmd->request_bufflen);
-+ memcpy (HDATA(a3000_host)->dma_bounce_buffer,
-+ cmd->SCp.ptr, cmd->SCp.this_residual);
+-/**
+- * iscsi_sendpage - send one page of iSCSI Data-Out.
+- * @conn: iscsi connection
+- * @buf: buffer to write from
+- * @count: remaining data
+- * @sent: number of bytes sent
+- *
+- * Notes:
+- * (Tx, Fast Path)
+- **/
+ static inline int
+-iscsi_sendpage(struct iscsi_conn *conn, struct iscsi_buf *buf,
+- int *count, int *sent)
++iscsi_tcp_flush(struct iscsi_conn *conn)
+ {
+- int flags = 0; /* MSG_DONTWAIT; */
+- int res, size;
+-
+- size = buf->sg.length - buf->sent;
+- BUG_ON(buf->sent + size > buf->sg.length);
+- if (size > *count)
+- size = *count;
+- if (buf->sent + size != buf->sg.length || *count != size)
+- flags |= MSG_MORE;
+-
+- res = iscsi_send(conn, buf, size, flags);
+- debug_tcp("sendpage: %d bytes, sent %d left %d sent %d res %d\n",
+- size, buf->sent, *count, *sent, res);
+- if (res >= 0) {
+- *count -= res;
+- *sent += res;
+- if (size != res)
++ int rc;
++
++ while (iscsi_tcp_xmit_qlen(conn)) {
++ rc = iscsi_xmit(conn);
++ if (rc == 0)
+ return -EAGAIN;
+- return 0;
++ if (rc < 0)
++ return rc;
}
- addr = virt_to_bus(HDATA(a3000_host)->dma_bounce_buffer);
-@@ -146,7 +142,7 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
-
- /* copy from a bounce buffer, if necessary */
- if (status && HDATA(instance)->dma_bounce_buffer) {
-- if (SCpnt && SCpnt->use_sg) {
-+ if (SCpnt) {
- if (HDATA(instance)->dma_dir && SCpnt)
- memcpy (SCpnt->SCp.ptr,
- HDATA(instance)->dma_bounce_buffer,
-@@ -155,11 +151,6 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
- HDATA(instance)->dma_bounce_buffer = NULL;
- HDATA(instance)->dma_bounce_len = 0;
- } else {
-- if (HDATA(instance)->dma_dir && SCpnt)
-- memcpy (SCpnt->request_buffer,
-- HDATA(instance)->dma_bounce_buffer,
-- SCpnt->request_bufflen);
--
- kfree (HDATA(instance)->dma_bounce_buffer);
- HDATA(instance)->dma_bounce_buffer = NULL;
- HDATA(instance)->dma_bounce_len = 0;
-diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
-index a77ab8d..d7235f4 100644
---- a/drivers/scsi/aacraid/aachba.c
-+++ b/drivers/scsi/aacraid/aachba.c
-@@ -31,9 +31,9 @@
- #include <linux/slab.h>
- #include <linux/completion.h>
- #include <linux/blkdev.h>
--#include <linux/dma-mapping.h>
- #include <asm/semaphore.h>
- #include <asm/uaccess.h>
-+#include <linux/highmem.h> /* For flush_kernel_dcache_page */
+- return res;
++ return 0;
+ }
- #include <scsi/scsi.h>
- #include <scsi/scsi_cmnd.h>
-@@ -56,54 +56,54 @@
- /*
- * Sense codes
- */
--
--#define SENCODE_NO_SENSE 0x00
--#define SENCODE_END_OF_DATA 0x00
--#define SENCODE_BECOMING_READY 0x04
--#define SENCODE_INIT_CMD_REQUIRED 0x04
--#define SENCODE_PARAM_LIST_LENGTH_ERROR 0x1A
--#define SENCODE_INVALID_COMMAND 0x20
--#define SENCODE_LBA_OUT_OF_RANGE 0x21
--#define SENCODE_INVALID_CDB_FIELD 0x24
--#define SENCODE_LUN_NOT_SUPPORTED 0x25
--#define SENCODE_INVALID_PARAM_FIELD 0x26
--#define SENCODE_PARAM_NOT_SUPPORTED 0x26
--#define SENCODE_PARAM_VALUE_INVALID 0x26
--#define SENCODE_RESET_OCCURRED 0x29
--#define SENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x3E
--#define SENCODE_INQUIRY_DATA_CHANGED 0x3F
--#define SENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x39
--#define SENCODE_DIAGNOSTIC_FAILURE 0x40
--#define SENCODE_INTERNAL_TARGET_FAILURE 0x44
--#define SENCODE_INVALID_MESSAGE_ERROR 0x49
--#define SENCODE_LUN_FAILED_SELF_CONFIG 0x4c
--#define SENCODE_OVERLAPPED_COMMAND 0x4E
+-static inline void
+-iscsi_data_digest_init(struct iscsi_tcp_conn *tcp_conn,
+- struct iscsi_tcp_cmd_task *tcp_ctask)
++/*
++ * This is called when we're done sending the header.
++ * Simply copy the data_segment to the send segment, and return.
++ */
++static int
++iscsi_tcp_send_hdr_done(struct iscsi_tcp_conn *tcp_conn,
++ struct iscsi_segment *segment)
++{
++ tcp_conn->out.segment = tcp_conn->out.data_segment;
++ debug_tcp("Header done. Next segment size %u total_size %u\n",
++ tcp_conn->out.segment.size, tcp_conn->out.segment.total_size);
++ return 0;
++}
+
-+#define SENCODE_NO_SENSE 0x00
-+#define SENCODE_END_OF_DATA 0x00
-+#define SENCODE_BECOMING_READY 0x04
-+#define SENCODE_INIT_CMD_REQUIRED 0x04
-+#define SENCODE_PARAM_LIST_LENGTH_ERROR 0x1A
-+#define SENCODE_INVALID_COMMAND 0x20
-+#define SENCODE_LBA_OUT_OF_RANGE 0x21
-+#define SENCODE_INVALID_CDB_FIELD 0x24
-+#define SENCODE_LUN_NOT_SUPPORTED 0x25
-+#define SENCODE_INVALID_PARAM_FIELD 0x26
-+#define SENCODE_PARAM_NOT_SUPPORTED 0x26
-+#define SENCODE_PARAM_VALUE_INVALID 0x26
-+#define SENCODE_RESET_OCCURRED 0x29
-+#define SENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x3E
-+#define SENCODE_INQUIRY_DATA_CHANGED 0x3F
-+#define SENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x39
-+#define SENCODE_DIAGNOSTIC_FAILURE 0x40
-+#define SENCODE_INTERNAL_TARGET_FAILURE 0x44
-+#define SENCODE_INVALID_MESSAGE_ERROR 0x49
-+#define SENCODE_LUN_FAILED_SELF_CONFIG 0x4c
-+#define SENCODE_OVERLAPPED_COMMAND 0x4E
-
- /*
- * Additional sense codes
- */
--
--#define ASENCODE_NO_SENSE 0x00
--#define ASENCODE_END_OF_DATA 0x05
--#define ASENCODE_BECOMING_READY 0x01
--#define ASENCODE_INIT_CMD_REQUIRED 0x02
--#define ASENCODE_PARAM_LIST_LENGTH_ERROR 0x00
--#define ASENCODE_INVALID_COMMAND 0x00
--#define ASENCODE_LBA_OUT_OF_RANGE 0x00
--#define ASENCODE_INVALID_CDB_FIELD 0x00
--#define ASENCODE_LUN_NOT_SUPPORTED 0x00
--#define ASENCODE_INVALID_PARAM_FIELD 0x00
--#define ASENCODE_PARAM_NOT_SUPPORTED 0x01
--#define ASENCODE_PARAM_VALUE_INVALID 0x02
--#define ASENCODE_RESET_OCCURRED 0x00
--#define ASENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x00
--#define ASENCODE_INQUIRY_DATA_CHANGED 0x03
--#define ASENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x00
--#define ASENCODE_DIAGNOSTIC_FAILURE 0x80
--#define ASENCODE_INTERNAL_TARGET_FAILURE 0x00
--#define ASENCODE_INVALID_MESSAGE_ERROR 0x00
--#define ASENCODE_LUN_FAILED_SELF_CONFIG 0x00
--#define ASENCODE_OVERLAPPED_COMMAND 0x00
++static void
++iscsi_tcp_send_hdr_prep(struct iscsi_conn *conn, void *hdr, size_t hdrlen)
+ {
+- crypto_hash_init(&tcp_conn->tx_hash);
+- tcp_ctask->digest_count = 4;
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+
-+#define ASENCODE_NO_SENSE 0x00
-+#define ASENCODE_END_OF_DATA 0x05
-+#define ASENCODE_BECOMING_READY 0x01
-+#define ASENCODE_INIT_CMD_REQUIRED 0x02
-+#define ASENCODE_PARAM_LIST_LENGTH_ERROR 0x00
-+#define ASENCODE_INVALID_COMMAND 0x00
-+#define ASENCODE_LBA_OUT_OF_RANGE 0x00
-+#define ASENCODE_INVALID_CDB_FIELD 0x00
-+#define ASENCODE_LUN_NOT_SUPPORTED 0x00
-+#define ASENCODE_INVALID_PARAM_FIELD 0x00
-+#define ASENCODE_PARAM_NOT_SUPPORTED 0x01
-+#define ASENCODE_PARAM_VALUE_INVALID 0x02
-+#define ASENCODE_RESET_OCCURRED 0x00
-+#define ASENCODE_LUN_NOT_SELF_CONFIGURED_YET 0x00
-+#define ASENCODE_INQUIRY_DATA_CHANGED 0x03
-+#define ASENCODE_SAVING_PARAMS_NOT_SUPPORTED 0x00
-+#define ASENCODE_DIAGNOSTIC_FAILURE 0x80
-+#define ASENCODE_INTERNAL_TARGET_FAILURE 0x00
-+#define ASENCODE_INVALID_MESSAGE_ERROR 0x00
-+#define ASENCODE_LUN_FAILED_SELF_CONFIG 0x00
-+#define ASENCODE_OVERLAPPED_COMMAND 0x00
-
- #define BYTE0(x) (unsigned char)(x)
- #define BYTE1(x) (unsigned char)((x) >> 8)
-@@ -115,8 +115,8 @@
- *----------------------------------------------------------------------------*/
- /* SCSI inquiry data */
- struct inquiry_data {
-- u8 inqd_pdt; /* Peripheral qualifier | Peripheral Device Type */
-- u8 inqd_dtq; /* RMB | Device Type Qualifier */
-+ u8 inqd_pdt; /* Peripheral qualifier | Peripheral Device Type */
-+ u8 inqd_dtq; /* RMB | Device Type Qualifier */
- u8 inqd_ver; /* ISO version | ECMA version | ANSI-approved version */
- u8 inqd_rdf; /* AENC | TrmIOP | Response data format */
- u8 inqd_len; /* Additional length (n-4) */
-@@ -130,7 +130,7 @@ struct inquiry_data {
- /*
- * M O D U L E G L O B A L S
- */
--
++ debug_tcp("%s(%p%s)\n", __FUNCTION__, tcp_conn,
++ conn->hdrdgst_en? ", digest enabled" : "");
+
- static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* sgmap);
- static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* psg);
- static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg);
-@@ -141,9 +141,10 @@ static char *aac_get_status_string(u32 status);
-
- /*
- * Non dasd selection is handled entirely in aachba now
-- */
--
-+ */
++ /* Clear the data segment - needs to be filled in by the
++ * caller using iscsi_tcp_send_data_prep() */
++ memset(&tcp_conn->out.data_segment, 0, sizeof(struct iscsi_segment));
+
- static int nondasd = -1;
-+static int aac_cache = 0;
- static int dacmode = -1;
-
- int aac_commit = -1;
-@@ -152,6 +153,8 @@ int aif_timeout = 120;
-
- module_param(nondasd, int, S_IRUGO|S_IWUSR);
- MODULE_PARM_DESC(nondasd, "Control scanning of hba for nondasd devices. 0=off, 1=on");
-+module_param_named(cache, aac_cache, int, S_IRUGO|S_IWUSR);
-+MODULE_PARM_DESC(cache, "Disable Queue Flush commands:\n\tbit 0 - Disable FUA in WRITE SCSI commands\n\tbit 1 - Disable SYNCHRONIZE_CACHE SCSI command\n\tbit 2 - Disable only if Battery not protecting Cache");
- module_param(dacmode, int, S_IRUGO|S_IWUSR);
- MODULE_PARM_DESC(dacmode, "Control whether dma addressing is using 64 bit DAC. 0=off, 1=on");
- module_param_named(commit, aac_commit, int, S_IRUGO|S_IWUSR);
-@@ -179,7 +182,7 @@ MODULE_PARM_DESC(check_interval, "Interval in seconds between adapter health che
-
- int aac_check_reset = 1;
- module_param_named(check_reset, aac_check_reset, int, S_IRUGO|S_IWUSR);
--MODULE_PARM_DESC(aac_check_reset, "If adapter fails health check, reset the adapter.");
-+MODULE_PARM_DESC(aac_check_reset, "If adapter fails health check, reset the adapter. a value of -1 forces the reset to adapters programmed to ignore it.");
-
- int expose_physicals = -1;
- module_param(expose_physicals, int, S_IRUGO|S_IWUSR);
-@@ -193,12 +196,12 @@ static inline int aac_valid_context(struct scsi_cmnd *scsicmd,
- struct fib *fibptr) {
- struct scsi_device *device;
-
-- if (unlikely(!scsicmd || !scsicmd->scsi_done )) {
-+ if (unlikely(!scsicmd || !scsicmd->scsi_done)) {
- dprintk((KERN_WARNING "aac_valid_context: scsi command corrupt\n"));
-- aac_fib_complete(fibptr);
-- aac_fib_free(fibptr);
-- return 0;
-- }
-+ aac_fib_complete(fibptr);
-+ aac_fib_free(fibptr);
-+ return 0;
++ /* If header digest is enabled, compute the CRC and
++ * place the digest into the same buffer. We make
++ * sure that both iscsi_tcp_ctask and mtask have
++ * sufficient room.
++ */
++ if (conn->hdrdgst_en) {
++ iscsi_tcp_dgst_header(&tcp_conn->tx_hash, hdr, hdrlen,
++ hdr + hdrlen);
++ hdrlen += ISCSI_DIGEST_SIZE;
+ }
- scsicmd->SCp.phase = AAC_OWNER_MIDLEVEL;
- device = scsicmd->device;
- if (unlikely(!device || !scsi_device_online(device))) {
-@@ -240,7 +243,7 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
- FsaNormal,
- 1, 1,
- NULL, NULL);
-- if (status < 0 ) {
-+ if (status < 0) {
- printk(KERN_WARNING "aac_get_config_status: SendFIB failed.\n");
- } else {
- struct aac_get_config_status_resp *reply
-@@ -264,10 +267,10 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
- struct aac_commit_config * dinfo;
- aac_fib_init(fibptr);
- dinfo = (struct aac_commit_config *) fib_data(fibptr);
--
+
- dinfo->command = cpu_to_le32(VM_ContainerConfig);
- dinfo->type = cpu_to_le32(CT_COMMIT_CONFIG);
--
-+
- status = aac_fib_send(ContainerCommand,
- fibptr,
- sizeof (struct aac_commit_config),
-@@ -293,7 +296,7 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag)
- int aac_get_containers(struct aac_dev *dev)
- {
- struct fsa_dev_info *fsa_dev_ptr;
-- u32 index;
-+ u32 index;
- int status = 0;
- struct fib * fibptr;
- struct aac_get_container_count *dinfo;
-@@ -363,6 +366,7 @@ static void aac_internal_transfer(struct scsi_cmnd *scsicmd, void *data, unsigne
- if (buf && transfer_len > 0)
- memcpy(buf + offset, data, transfer_len);
-
-+ flush_kernel_dcache_page(kmap_atomic_to_page(buf - sg->offset));
- kunmap_atomic(buf - sg->offset, KM_IRQ0);
-
- }
-@@ -395,7 +399,7 @@ static void get_container_name_callback(void *context, struct fib * fibptr)
- do {
- *dp++ = (*sp) ? *sp++ : ' ';
- } while (--count > 0);
-- aac_internal_transfer(scsicmd, d,
-+ aac_internal_transfer(scsicmd, d,
- offsetof(struct inquiry_data, inqd_pid), sizeof(d));
- }
- }
-@@ -431,13 +435,13 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
- dinfo->count = cpu_to_le32(sizeof(((struct aac_get_name_resp *)NULL)->data));
-
- status = aac_fib_send(ContainerCommand,
-- cmd_fibcontext,
-+ cmd_fibcontext,
- sizeof (struct aac_get_name),
-- FsaNormal,
-- 0, 1,
-- (fib_callback) get_container_name_callback,
-+ FsaNormal,
-+ 0, 1,
-+ (fib_callback)get_container_name_callback,
- (void *) scsicmd);
--
++ /* Remember header pointer for later, when we need
++ * to decide whether there's a payload to go along
++ * with the header. */
++ tcp_conn->out.hdr = hdr;
+
- /*
- * Check that the command queued to the controller
- */
-@@ -445,7 +449,7 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
- scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
- return 0;
- }
--
++ iscsi_segment_init_linear(&tcp_conn->out.segment, hdr, hdrlen,
++ iscsi_tcp_send_hdr_done, NULL);
++}
+
- printk(KERN_WARNING "aac_get_container_name: aac_fib_send failed with status: %d.\n", status);
- aac_fib_complete(cmd_fibcontext);
- aac_fib_free(cmd_fibcontext);
-@@ -652,42 +656,47 @@ struct scsi_inq {
- * @a: string to copy from
- * @b: string to copy to
- *
-- * Copy a String from one location to another
-+ * Copy a String from one location to another
- * without copying \0
- */
-
- static void inqstrcpy(char *a, char *b)
- {
-
-- while(*a != (char)0)
-+ while (*a != (char)0)
- *b++ = *a++;
- }
-
- static char *container_types[] = {
-- "None",
-- "Volume",
-- "Mirror",
-- "Stripe",
-- "RAID5",
-- "SSRW",
-- "SSRO",
-- "Morph",
-- "Legacy",
-- "RAID4",
-- "RAID10",
-- "RAID00",
-- "V-MIRRORS",
-- "PSEUDO R4",
-+ "None",
-+ "Volume",
-+ "Mirror",
-+ "Stripe",
-+ "RAID5",
-+ "SSRW",
-+ "SSRO",
-+ "Morph",
-+ "Legacy",
-+ "RAID4",
-+ "RAID10",
-+ "RAID00",
-+ "V-MIRRORS",
-+ "PSEUDO R4",
- "RAID50",
- "RAID5D",
- "RAID5D0",
- "RAID1E",
- "RAID6",
- "RAID60",
-- "Unknown"
-+ "Unknown"
- };
-
--
-+char * get_container_type(unsigned tindex)
++/*
++ * Prepare the send buffer for the payload data.
++ * Padding and checksumming will all be taken care
++ * of by the iscsi_segment routines.
++ */
++static int
++iscsi_tcp_send_data_prep(struct iscsi_conn *conn, struct scatterlist *sg,
++ unsigned int count, unsigned int offset,
++ unsigned int len)
+{
-+ if (tindex >= ARRAY_SIZE(container_types))
-+ tindex = ARRAY_SIZE(container_types) - 1;
-+ return container_types[tindex];
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct hash_desc *tx_hash = NULL;
++ unsigned int hdr_spec_len;
++
++ debug_tcp("%s(%p, offset=%d, datalen=%d%s)\n", __FUNCTION__,
++ tcp_conn, offset, len,
++ conn->datadgst_en? ", digest enabled" : "");
++
++ /* Make sure the datalen matches what the caller
++ said he would send. */
++ hdr_spec_len = ntoh24(tcp_conn->out.hdr->dlength);
++ WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
++
++ if (conn->datadgst_en)
++ tx_hash = &tcp_conn->tx_hash;
++
++ return iscsi_segment_seek_sg(&tcp_conn->out.data_segment,
++ sg, count, offset, len,
++ NULL, tx_hash);
+}
++
++static void
++iscsi_tcp_send_linear_data_prepare(struct iscsi_conn *conn, void *data,
++ size_t len)
++{
++ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct hash_desc *tx_hash = NULL;
++ unsigned int hdr_spec_len;
++
++ debug_tcp("%s(%p, datalen=%d%s)\n", __FUNCTION__, tcp_conn, len,
++ conn->datadgst_en? ", digest enabled" : "");
++
++ /* Make sure the datalen matches what the caller
++ said he would send. */
++ hdr_spec_len = ntoh24(tcp_conn->out.hdr->dlength);
++ WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
++
++ if (conn->datadgst_en)
++ tx_hash = &tcp_conn->tx_hash;
++
++ iscsi_segment_init_linear(&tcp_conn->out.data_segment,
++ data, len, NULL, tx_hash);
+ }
- /* Function: setinqstr
+ /**
+@@ -1207,12 +1261,17 @@ iscsi_data_digest_init(struct iscsi_tcp_conn *tcp_conn,
*
-@@ -707,16 +716,21 @@ static void setinqstr(struct aac_dev *dev, void *data, int tindex)
+ * Called under connection lock.
+ **/
+-static void
++static int
+ iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+- struct iscsi_r2t_info *r2t, int left)
++ struct iscsi_r2t_info *r2t)
+ {
+ struct iscsi_data *hdr;
+- int new_offset;
++ int new_offset, left;
++
++ BUG_ON(r2t->data_length - r2t->sent < 0);
++ left = r2t->data_length - r2t->sent;
++ if (left == 0)
++ return 0;
- if (dev->supplement_adapter_info.AdapterTypeText[0]) {
- char * cp = dev->supplement_adapter_info.AdapterTypeText;
-- int c = sizeof(str->vid);
-- while (*cp && *cp != ' ' && --c)
-- ++cp;
-- c = *cp;
-- *cp = '\0';
-- inqstrcpy (dev->supplement_adapter_info.AdapterTypeText,
-- str->vid);
-- *cp = c;
-- while (*cp && *cp != ' ')
-- ++cp;
-+ int c;
-+ if ((cp[0] == 'A') && (cp[1] == 'O') && (cp[2] == 'C'))
-+ inqstrcpy("SMC", str->vid);
-+ else {
-+ c = sizeof(str->vid);
-+ while (*cp && *cp != ' ' && --c)
-+ ++cp;
-+ c = *cp;
-+ *cp = '\0';
-+ inqstrcpy (dev->supplement_adapter_info.AdapterTypeText,
-+ str->vid);
-+ *cp = c;
-+ while (*cp && *cp != ' ')
-+ ++cp;
-+ }
- while (*cp == ' ')
- ++cp;
- /* last six chars reserved for vol type */
-@@ -898,9 +912,8 @@ static int aac_bounds_32(struct aac_dev * dev, struct scsi_cmnd * cmd, u64 lba)
- ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
- 0, 0);
- memcpy(cmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
-- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(cmd->sense_buffer))
-- ? sizeof(cmd->sense_buffer)
-- : sizeof(dev->fsa_dev[cid].sense_data));
-+ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- cmd->scsi_done(cmd);
- return 1;
+ hdr = &r2t->dtask.hdr;
+ memset(hdr, 0, sizeof(struct iscsi_data));
+@@ -1233,43 +1292,46 @@ iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+ r2t->data_count = left;
+ hdr->flags = ISCSI_FLAG_CMD_FINAL;
}
-@@ -981,7 +994,7 @@ static int aac_read_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32
- aac_fib_init(fib);
- readcmd = (struct aac_read *) fib_data(fib);
- readcmd->command = cpu_to_le32(VM_CtBlockRead);
-- readcmd->cid = cpu_to_le16(scmd_id(cmd));
-+ readcmd->cid = cpu_to_le32(scmd_id(cmd));
- readcmd->block = cpu_to_le32((u32)(lba&0xffffffff));
- readcmd->count = cpu_to_le32(count * 512);
+- conn->dataout_pdus_cnt++;
+-
+- iscsi_buf_init_iov(&r2t->headbuf, (char*)hdr,
+- sizeof(struct iscsi_hdr));
+-
+- if (iscsi_buf_left(&r2t->sendbuf))
+- return;
+-
+- iscsi_buf_init_sg(&r2t->sendbuf, r2t->sg);
+- r2t->sg += 1;
+-}
-@@ -1013,7 +1026,8 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
- writecmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
- writecmd->count = cpu_to_le32(count<<9);
- writecmd->cid = cpu_to_le16(scmd_id(cmd));
-- writecmd->flags = fua ?
-+ writecmd->flags = (fua && ((aac_cache & 5) != 1) &&
-+ (((aac_cache & 5) != 5) || !fib->dev->cache_protected)) ?
- cpu_to_le16(IO_TYPE_WRITE|IO_SUREWRITE) :
- cpu_to_le16(IO_TYPE_WRITE);
- writecmd->bpTotal = 0;
-@@ -1072,7 +1086,7 @@ static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
- aac_fib_init(fib);
- writecmd = (struct aac_write *) fib_data(fib);
- writecmd->command = cpu_to_le32(VM_CtBlockWrite);
-- writecmd->cid = cpu_to_le16(scmd_id(cmd));
-+ writecmd->cid = cpu_to_le32(scmd_id(cmd));
- writecmd->block = cpu_to_le32((u32)(lba&0xffffffff));
- writecmd->count = cpu_to_le32(count * 512);
- writecmd->sg.count = cpu_to_le32(1);
-@@ -1190,6 +1204,15 @@ static int aac_scsi_32(struct fib * fib, struct scsi_cmnd * cmd)
- (fib_callback) aac_srb_callback, (void *) cmd);
+-static void iscsi_set_padding(struct iscsi_tcp_cmd_task *tcp_ctask,
+- unsigned long len)
+-{
+- tcp_ctask->pad_count = len & (ISCSI_PAD_LEN - 1);
+- if (!tcp_ctask->pad_count)
+- return;
+-
+- tcp_ctask->pad_count = ISCSI_PAD_LEN - tcp_ctask->pad_count;
+- debug_scsi("write padding %d bytes\n", tcp_ctask->pad_count);
+- set_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate);
++ conn->dataout_pdus_cnt++;
++ return 1;
}
-+static int aac_scsi_32_64(struct fib * fib, struct scsi_cmnd * cmd)
-+{
-+ if ((sizeof(dma_addr_t) > 4) &&
-+ (num_physpages > (0xFFFFFFFFULL >> PAGE_SHIFT)) &&
-+ (fib->dev->adapter_info.options & AAC_OPT_SGMAP_HOST64))
-+ return FAILED;
-+ return aac_scsi_32(fib, cmd);
-+}
-+
- int aac_get_adapter_info(struct aac_dev* dev)
+ /**
+- * iscsi_tcp_cmd_init - Initialize iSCSI SCSI_READ or SCSI_WRITE commands
++ * iscsi_tcp_ctask_init - Initialize iSCSI SCSI_READ or SCSI_WRITE commands
+ * @conn: iscsi connection
+ * @ctask: scsi command task
+ * @sc: scsi command
+ **/
+-static void
+-iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
++static int
++iscsi_tcp_ctask_init(struct iscsi_cmd_task *ctask)
{
- struct fib* fibptr;
-@@ -1207,11 +1230,11 @@ int aac_get_adapter_info(struct aac_dev* dev)
- memset(info,0,sizeof(*info));
-
- rcode = aac_fib_send(RequestAdapterInfo,
-- fibptr,
-+ fibptr,
- sizeof(*info),
-- FsaNormal,
-+ FsaNormal,
- -1, 1, /* First `interrupt' command uses special wait */
-- NULL,
-+ NULL,
- NULL);
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
++ struct iscsi_conn *conn = ctask->conn;
++ struct scsi_cmnd *sc = ctask->sc;
++ int err;
- if (rcode < 0) {
-@@ -1222,29 +1245,29 @@ int aac_get_adapter_info(struct aac_dev* dev)
- memcpy(&dev->adapter_info, info, sizeof(*info));
+ BUG_ON(__kfifo_len(tcp_ctask->r2tqueue));
+- tcp_ctask->xmstate = 1 << XMSTATE_BIT_CMD_HDR_INIT;
++ tcp_ctask->sent = 0;
++ tcp_ctask->exp_datasn = 0;
++
++ /* Prepare PDU, optionally w/ immediate data */
++ debug_scsi("ctask deq [cid %d itt 0x%x imm %d unsol %d]\n",
++ conn->id, ctask->itt, ctask->imm_count,
++ ctask->unsol_count);
++ iscsi_tcp_send_hdr_prep(conn, ctask->hdr, ctask->hdr_len);
++
++ if (!ctask->imm_count)
++ return 0;
++
++ /* If we have immediate data, attach a payload */
++ err = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc), scsi_sg_count(sc),
++ 0, ctask->imm_count);
++ if (err)
++ return err;
++ tcp_ctask->sent += ctask->imm_count;
++ ctask->imm_count = 0;
++ return 0;
+ }
- if (dev->adapter_info.options & AAC_OPT_SUPPLEMENT_ADAPTER_INFO) {
-- struct aac_supplement_adapter_info * info;
-+ struct aac_supplement_adapter_info * sinfo;
+ /**
+@@ -1281,484 +1343,130 @@ iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
+ * The function can return -EAGAIN in which case caller must
+ * call it again later, or recover. '0' return code means successful
+ * xmit.
+- *
+- * Management xmit state machine consists of these states:
+- * XMSTATE_BIT_IMM_HDR_INIT - calculate digest of PDU Header
+- * XMSTATE_BIT_IMM_HDR - PDU Header xmit in progress
+- * XMSTATE_BIT_IMM_DATA - PDU Data xmit in progress
+- * XMSTATE_VALUE_IDLE - management PDU is done
+ **/
+ static int
+ iscsi_tcp_mtask_xmit(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
+ {
+- struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
+ int rc;
- aac_fib_init(fibptr);
+- debug_scsi("mtask deq [cid %d state %x itt 0x%x]\n",
+- conn->id, tcp_mtask->xmstate, mtask->itt);
+-
+- if (test_bit(XMSTATE_BIT_IMM_HDR_INIT, &tcp_mtask->xmstate)) {
+- iscsi_buf_init_iov(&tcp_mtask->headbuf, (char*)mtask->hdr,
+- sizeof(struct iscsi_hdr));
+-
+- if (mtask->data_count) {
+- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate);
+- iscsi_buf_init_iov(&tcp_mtask->sendbuf,
+- (char*)mtask->data,
+- mtask->data_count);
+- }
+-
+- if (conn->c_stage != ISCSI_CONN_INITIAL_STAGE &&
+- conn->stop_stage != STOP_CONN_RECOVER &&
+- conn->hdrdgst_en)
+- iscsi_hdr_digest(conn, &tcp_mtask->headbuf,
+- (u8*)tcp_mtask->hdrext);
+-
+- tcp_mtask->sent = 0;
+- clear_bit(XMSTATE_BIT_IMM_HDR_INIT, &tcp_mtask->xmstate);
+- set_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate);
+- }
+-
+- if (test_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate)) {
+- rc = iscsi_sendhdr(conn, &tcp_mtask->headbuf,
+- mtask->data_count);
+- if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate);
+- }
+-
+- if (test_and_clear_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate)) {
+- BUG_ON(!mtask->data_count);
+- /* FIXME: implement.
+- * Virtual buffer could be spreaded across multiple pages...
+- */
+- do {
+- int rc;
+-
+- rc = iscsi_sendpage(conn, &tcp_mtask->sendbuf,
+- &mtask->data_count, &tcp_mtask->sent);
+- if (rc) {
+- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate);
+- return rc;
+- }
+- } while (mtask->data_count);
+- }
++ /* Flush any pending data first. */
++ rc = iscsi_tcp_flush(conn);
++ if (rc < 0)
++ return rc;
-- info = (struct aac_supplement_adapter_info *) fib_data(fibptr);
-+ sinfo = (struct aac_supplement_adapter_info *) fib_data(fibptr);
+- BUG_ON(tcp_mtask->xmstate != XMSTATE_VALUE_IDLE);
+ if (mtask->hdr->itt == RESERVED_ITT) {
+ struct iscsi_session *session = conn->session;
-- memset(info,0,sizeof(*info));
-+ memset(sinfo,0,sizeof(*sinfo));
+ spin_lock_bh(&session->lock);
+- list_del(&conn->mtask->running);
+- __kfifo_put(session->mgmtpool.queue, (void*)&conn->mtask,
+- sizeof(void*));
++ iscsi_free_mgmt_task(conn, mtask);
+ spin_unlock_bh(&session->lock);
+ }
++
+ return 0;
+ }
- rcode = aac_fib_send(RequestSupplementAdapterInfo,
- fibptr,
-- sizeof(*info),
-+ sizeof(*sinfo),
- FsaNormal,
- 1, 1,
- NULL,
- NULL);
++/*
++ * iscsi_tcp_ctask_xmit - xmit normal PDU task
++ * @conn: iscsi connection
++ * @ctask: iscsi command task
++ *
++ * We're expected to return 0 when everything was transmitted successfully,
++ * -EAGAIN if there's still data in the queue, or != 0 for any other kind
++ * of error.
++ */
+ static int
+-iscsi_send_cmd_hdr(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
++iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ {
+- struct scsi_cmnd *sc = ctask->sc;
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
++ struct scsi_cmnd *sc = ctask->sc;
+ int rc = 0;
- if (rcode >= 0)
-- memcpy(&dev->supplement_adapter_info, info, sizeof(*info));
-+ memcpy(&dev->supplement_adapter_info, sinfo, sizeof(*sinfo));
- }
+- if (test_bit(XMSTATE_BIT_CMD_HDR_INIT, &tcp_ctask->xmstate)) {
+- tcp_ctask->sent = 0;
+- tcp_ctask->sg_count = 0;
+- tcp_ctask->exp_datasn = 0;
+-
+- if (sc->sc_data_direction == DMA_TO_DEVICE) {
+- struct scatterlist *sg = scsi_sglist(sc);
+-
+- iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg);
+- tcp_ctask->sg = sg + 1;
+- tcp_ctask->bad_sg = sg + scsi_sg_count(sc);
+-
+- debug_scsi("cmd [itt 0x%x total %d imm_data %d "
+- "unsol count %d, unsol offset %d]\n",
+- ctask->itt, scsi_bufflen(sc),
+- ctask->imm_count, ctask->unsol_count,
+- ctask->unsol_offset);
+- }
+-
+- iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)ctask->hdr,
+- sizeof(struct iscsi_hdr));
+-
+- if (conn->hdrdgst_en)
+- iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
+- (u8*)tcp_ctask->hdrext);
+- clear_bit(XMSTATE_BIT_CMD_HDR_INIT, &tcp_ctask->xmstate);
+- set_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate);
+- }
+-
+- if (test_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate)) {
+- rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->imm_count);
+- if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate);
+-
+- if (sc->sc_data_direction != DMA_TO_DEVICE)
+- return 0;
+-
+- if (ctask->imm_count) {
+- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate);
+- iscsi_set_padding(tcp_ctask, ctask->imm_count);
+-
+- if (ctask->conn->datadgst_en) {
+- iscsi_data_digest_init(ctask->conn->dd_data,
+- tcp_ctask);
+- tcp_ctask->immdigest = 0;
+- }
+- }
+-
+- if (ctask->unsol_count) {
+- set_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate);
+- set_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
+- }
+- }
+- return rc;
+-}
+-
+-static int
+-iscsi_send_padding(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- int sent = 0, rc;
+-
+- if (test_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate)) {
+- iscsi_buf_init_iov(&tcp_ctask->sendbuf, (char*)&tcp_ctask->pad,
+- tcp_ctask->pad_count);
+- if (conn->datadgst_en)
+- crypto_hash_update(&tcp_conn->tx_hash,
+- &tcp_ctask->sendbuf.sg,
+- tcp_ctask->sendbuf.sg.length);
+- } else if (!test_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate))
+- return 0;
+-
+- clear_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate);
+- clear_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate);
+- debug_scsi("sending %d pad bytes for itt 0x%x\n",
+- tcp_ctask->pad_count, ctask->itt);
+- rc = iscsi_sendpage(conn, &tcp_ctask->sendbuf, &tcp_ctask->pad_count,
+- &sent);
+- if (rc) {
+- debug_scsi("padding send failed %d\n", rc);
+- set_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate);
+- }
+- return rc;
+-}
+-
+-static int
+-iscsi_send_digest(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+- struct iscsi_buf *buf, uint32_t *digest)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask;
+- struct iscsi_tcp_conn *tcp_conn;
+- int rc, sent = 0;
+-
+- if (!conn->datadgst_en)
+- return 0;
+-
+- tcp_ctask = ctask->dd_data;
+- tcp_conn = conn->dd_data;
+-
+- if (!test_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate)) {
+- crypto_hash_final(&tcp_conn->tx_hash, (u8*)digest);
+- iscsi_buf_init_iov(buf, (char*)digest, 4);
+- }
+- clear_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate);
+-
+- rc = iscsi_sendpage(conn, buf, &tcp_ctask->digest_count, &sent);
+- if (!rc)
+- debug_scsi("sent digest 0x%x for itt 0x%x\n", *digest,
+- ctask->itt);
+- else {
+- debug_scsi("sending digest 0x%x failed for itt 0x%x!\n",
+- *digest, ctask->itt);
+- set_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate);
+- }
+- return rc;
+-}
+-
+-static int
+-iscsi_send_data(struct iscsi_cmd_task *ctask, struct iscsi_buf *sendbuf,
+- struct scatterlist **sg, int *sent, int *count,
+- struct iscsi_buf *digestbuf, uint32_t *digest)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- struct iscsi_conn *conn = ctask->conn;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- int rc, buf_sent, offset;
+-
+- while (*count) {
+- buf_sent = 0;
+- offset = sendbuf->sent;
+-
+- rc = iscsi_sendpage(conn, sendbuf, count, &buf_sent);
+- *sent = *sent + buf_sent;
+- if (buf_sent && conn->datadgst_en)
+- partial_sg_digest_update(&tcp_conn->tx_hash,
+- &sendbuf->sg, sendbuf->sg.offset + offset,
+- buf_sent);
+- if (!iscsi_buf_left(sendbuf) && *sg != tcp_ctask->bad_sg) {
+- iscsi_buf_init_sg(sendbuf, *sg);
+- *sg = *sg + 1;
+- }
+-
+- if (rc)
+- return rc;
+- }
+-
+- rc = iscsi_send_padding(conn, ctask);
+- if (rc)
++flush:
++ /* Flush any pending data first. */
++ rc = iscsi_tcp_flush(conn);
++ if (rc < 0)
+ return rc;
+- return iscsi_send_digest(conn, ctask, digestbuf, digest);
+-}
+-
+-static int
+-iscsi_send_unsol_hdr(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- struct iscsi_data_task *dtask;
+- int rc;
+-
+- set_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
+- if (test_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate)) {
+- dtask = &tcp_ctask->unsol_dtask;
+-
+- iscsi_prep_unsolicit_data_pdu(ctask, &dtask->hdr);
+- iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)&dtask->hdr,
+- sizeof(struct iscsi_hdr));
+- if (conn->hdrdgst_en)
+- iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
+- (u8*)dtask->hdrext);
+-
+- clear_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
+- iscsi_set_padding(tcp_ctask, ctask->data_count);
+- }
+-
+- rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->data_count);
+- if (rc) {
+- clear_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
+- set_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate);
+- return rc;
+- }
++ /* Are we done already? */
++ if (sc->sc_data_direction != DMA_TO_DEVICE)
++ return 0;
-- /*
-- * GetBusInfo
-+ /*
-+ * GetBusInfo
- */
+- if (conn->datadgst_en) {
+- dtask = &tcp_ctask->unsol_dtask;
+- iscsi_data_digest_init(ctask->conn->dd_data, tcp_ctask);
+- dtask->digest = 0;
+- }
++ if (ctask->unsol_count != 0) {
++ struct iscsi_data *hdr = &tcp_ctask->unsol_dtask.hdr;
- aac_fib_init(fibptr);
-@@ -1267,6 +1290,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
- 1, 1,
- NULL, NULL);
+- debug_scsi("uns dout [itt 0x%x dlen %d sent %d]\n",
+- ctask->itt, ctask->unsol_count, tcp_ctask->sent);
+- return 0;
+-}
++ /* Prepare a header for the unsolicited PDU.
++ * The amount of data we want to send will be
++ * in ctask->data_count.
++ * FIXME: return the data count instead.
++ */
++ iscsi_prep_unsolicit_data_pdu(ctask, hdr);
-+ /* reasoned default */
-+ dev->maximum_num_physicals = 16;
- if (rcode >= 0 && le32_to_cpu(bus_info->Status) == ST_OK) {
- dev->maximum_num_physicals = le32_to_cpu(bus_info->TargetsPerBus);
- dev->maximum_num_channels = le32_to_cpu(bus_info->BusCount);
-@@ -1276,7 +1301,7 @@ int aac_get_adapter_info(struct aac_dev* dev)
- char buffer[16];
- tmp = le32_to_cpu(dev->adapter_info.kernelrev);
- printk(KERN_INFO "%s%d: kernel %d.%d-%d[%d] %.*s\n",
-- dev->name,
-+ dev->name,
- dev->id,
- tmp>>24,
- (tmp>>16)&0xff,
-@@ -1305,19 +1330,21 @@ int aac_get_adapter_info(struct aac_dev* dev)
- (int)sizeof(dev->supplement_adapter_info.VpdInfo.Tsid),
- dev->supplement_adapter_info.VpdInfo.Tsid);
- }
-- if (!aac_check_reset ||
-+ if (!aac_check_reset || ((aac_check_reset != 1) &&
- (dev->supplement_adapter_info.SupportedOptions2 &
-- le32_to_cpu(AAC_OPTION_IGNORE_RESET))) {
-+ AAC_OPTION_IGNORE_RESET))) {
- printk(KERN_INFO "%s%d: Reset Adapter Ignored\n",
- dev->name, dev->id);
- }
- }
+-static int
+-iscsi_send_unsol_pdu(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- int rc;
++ debug_tcp("unsol dout [itt 0x%x doff %d dlen %d]\n",
++ ctask->itt, tcp_ctask->sent, ctask->data_count);
-+ dev->cache_protected = 0;
-+ dev->jbod = ((dev->supplement_adapter_info.FeatureBits &
-+ AAC_FEATURE_JBOD) != 0);
- dev->nondasd_support = 0;
- dev->raid_scsi_mode = 0;
-- if(dev->adapter_info.options & AAC_OPT_NONDASD){
-+ if(dev->adapter_info.options & AAC_OPT_NONDASD)
- dev->nondasd_support = 1;
+- if (test_and_clear_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate)) {
+- BUG_ON(!ctask->unsol_count);
+-send_hdr:
+- rc = iscsi_send_unsol_hdr(conn, ctask);
++ iscsi_tcp_send_hdr_prep(conn, hdr, sizeof(*hdr));
++ rc = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc),
++ scsi_sg_count(sc),
++ tcp_ctask->sent,
++ ctask->data_count);
+ if (rc)
+- return rc;
- }
+-
+- if (test_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate)) {
+- struct iscsi_data_task *dtask = &tcp_ctask->unsol_dtask;
+- int start = tcp_ctask->sent;
++ goto fail;
++ tcp_ctask->sent += ctask->data_count;
++ ctask->unsol_count -= ctask->data_count;
++ goto flush;
++ } else {
++ struct iscsi_session *session = conn->session;
++ struct iscsi_r2t_info *r2t;
- /*
- * If the firmware supports ROMB RAID/SCSI mode and we are currently
-@@ -1338,11 +1365,10 @@ int aac_get_adapter_info(struct aac_dev* dev)
- if (dev->raid_scsi_mode != 0)
- printk(KERN_INFO "%s%d: ROMB RAID/SCSI mode enabled\n",
- dev->name, dev->id);
--
-- if(nondasd != -1) {
+- rc = iscsi_send_data(ctask, &tcp_ctask->sendbuf, &tcp_ctask->sg,
+- &tcp_ctask->sent, &ctask->data_count,
+- &dtask->digestbuf, &dtask->digest);
+- ctask->unsol_count -= tcp_ctask->sent - start;
+- if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
+- /*
+- * Done with the Data-Out. Next, check if we need
+- * to send another unsolicited Data-Out.
++ /* All unsolicited PDUs sent. Check for solicited PDUs.
+ */
+- if (ctask->unsol_count) {
+- debug_scsi("sending more uns\n");
+- set_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
+- goto send_hdr;
++ spin_lock_bh(&session->lock);
++ r2t = tcp_ctask->r2t;
++ if (r2t != NULL) {
++ /* Continue with this R2T? */
++ if (!iscsi_solicit_data_cont(conn, ctask, r2t)) {
++ debug_scsi(" done with r2t %p\n", r2t);
+
-+ if (nondasd != -1)
- dev->nondasd_support = (nondasd!=0);
++ __kfifo_put(tcp_ctask->r2tpool.queue,
++ (void*)&r2t, sizeof(void*));
++ tcp_ctask->r2t = r2t = NULL;
++ }
+ }
- }
-- if(dev->nondasd_support != 0){
-+ if(dev->nondasd_support != 0) {
- printk(KERN_INFO "%s%d: Non-DASD support enabled.\n",dev->name, dev->id);
- }
+- return 0;
+-}
+-
+-static int iscsi_send_sol_pdu(struct iscsi_conn *conn,
+- struct iscsi_cmd_task *ctask)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- struct iscsi_session *session = conn->session;
+- struct iscsi_r2t_info *r2t;
+- struct iscsi_data_task *dtask;
+- int left, rc;
-@@ -1371,12 +1397,14 @@ int aac_get_adapter_info(struct aac_dev* dev)
- rcode = -ENOMEM;
+- if (test_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate)) {
+- if (!tcp_ctask->r2t) {
+- spin_lock_bh(&session->lock);
++ if (r2t == NULL) {
+ __kfifo_get(tcp_ctask->r2tqueue, (void*)&tcp_ctask->r2t,
+ sizeof(void*));
+- spin_unlock_bh(&session->lock);
++ r2t = tcp_ctask->r2t;
}
- }
-- /*
-+ /*
- * Deal with configuring for the individualized limits of each packet
- * interface.
- */
- dev->a_ops.adapter_scsi = (dev->dac_support)
-- ? aac_scsi_64
-+ ? ((aac_get_driver_ident(dev->cardtype)->quirks & AAC_QUIRK_SCSI_32)
-+ ? aac_scsi_32_64
-+ : aac_scsi_64)
- : aac_scsi_32;
- if (dev->raw_io_interface) {
- dev->a_ops.adapter_bounds = (dev->raw_io_64)
-@@ -1393,8 +1421,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
- if (dev->dac_support) {
- dev->a_ops.adapter_read = aac_read_block64;
- dev->a_ops.adapter_write = aac_write_block64;
-- /*
-- * 38 scatter gather elements
-+ /*
-+ * 38 scatter gather elements
- */
- dev->scsi_host_ptr->sg_tablesize =
- (dev->max_fib_size -
-@@ -1498,9 +1526,8 @@ static void io_callback(void *context, struct fib * fibptr)
- ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
- 0, 0);
- memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
-- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
-- ? sizeof(scsicmd->sense_buffer)
-- : sizeof(dev->fsa_dev[cid].sense_data));
-+ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- }
- aac_fib_complete(fibptr);
- aac_fib_free(fibptr);
-@@ -1524,7 +1551,7 @@ static int aac_read(struct scsi_cmnd * scsicmd)
- case READ_6:
- dprintk((KERN_DEBUG "aachba: received a read(6) command on id %d.\n", scmd_id(scsicmd)));
+-send_hdr:
+- r2t = tcp_ctask->r2t;
+- dtask = &r2t->dtask;
+-
+- if (conn->hdrdgst_en)
+- iscsi_hdr_digest(conn, &r2t->headbuf,
+- (u8*)dtask->hdrext);
+- clear_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate);
+- set_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate);
+- }
+-
+- if (test_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate)) {
+- r2t = tcp_ctask->r2t;
+- dtask = &r2t->dtask;
+-
+- rc = iscsi_sendhdr(conn, &r2t->headbuf, r2t->data_count);
+- if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate);
+- set_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate);
++ spin_unlock_bh(&session->lock);
-- lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
-+ lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
- (scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3];
- count = scsicmd->cmnd[4];
+- if (conn->datadgst_en) {
+- iscsi_data_digest_init(conn->dd_data, tcp_ctask);
+- dtask->digest = 0;
++ /* Waiting for more R2Ts to arrive. */
++ if (r2t == NULL) {
++ debug_tcp("no R2Ts yet\n");
++ return 0;
+ }
-@@ -1534,32 +1561,32 @@ static int aac_read(struct scsi_cmnd * scsicmd)
- case READ_16:
- dprintk((KERN_DEBUG "aachba: received a read(16) command on id %d.\n", scmd_id(scsicmd)));
+- iscsi_set_padding(tcp_ctask, r2t->data_count);
+- debug_scsi("sol dout [dsn %d itt 0x%x dlen %d sent %d]\n",
+- r2t->solicit_datasn - 1, ctask->itt, r2t->data_count,
+- r2t->sent);
+- }
++ debug_scsi("sol dout %p [dsn %d itt 0x%x doff %d dlen %d]\n",
++ r2t, r2t->solicit_datasn - 1, ctask->itt,
++ r2t->data_offset + r2t->sent, r2t->data_count);
-- lba = ((u64)scsicmd->cmnd[2] << 56) |
-- ((u64)scsicmd->cmnd[3] << 48) |
-+ lba = ((u64)scsicmd->cmnd[2] << 56) |
-+ ((u64)scsicmd->cmnd[3] << 48) |
- ((u64)scsicmd->cmnd[4] << 40) |
- ((u64)scsicmd->cmnd[5] << 32) |
-- ((u64)scsicmd->cmnd[6] << 24) |
-+ ((u64)scsicmd->cmnd[6] << 24) |
- (scsicmd->cmnd[7] << 16) |
- (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
-- count = (scsicmd->cmnd[10] << 24) |
-+ count = (scsicmd->cmnd[10] << 24) |
- (scsicmd->cmnd[11] << 16) |
- (scsicmd->cmnd[12] << 8) | scsicmd->cmnd[13];
- break;
- case READ_12:
- dprintk((KERN_DEBUG "aachba: received a read(12) command on id %d.\n", scmd_id(scsicmd)));
+- if (test_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate)) {
+- r2t = tcp_ctask->r2t;
+- dtask = &r2t->dtask;
++ iscsi_tcp_send_hdr_prep(conn, &r2t->dtask.hdr,
++ sizeof(struct iscsi_hdr));
-- lba = ((u64)scsicmd->cmnd[2] << 24) |
-+ lba = ((u64)scsicmd->cmnd[2] << 24) |
- (scsicmd->cmnd[3] << 16) |
-- (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
-- count = (scsicmd->cmnd[6] << 24) |
-+ (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
-+ count = (scsicmd->cmnd[6] << 24) |
- (scsicmd->cmnd[7] << 16) |
-- (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
-+ (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
- break;
- default:
- dprintk((KERN_DEBUG "aachba: received a read(10) command on id %d.\n", scmd_id(scsicmd)));
+- rc = iscsi_send_data(ctask, &r2t->sendbuf, &r2t->sg,
+- &r2t->sent, &r2t->data_count,
+- &dtask->digestbuf, &dtask->digest);
++ rc = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc),
++ scsi_sg_count(sc),
++ r2t->data_offset + r2t->sent,
++ r2t->data_count);
+ if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate);
+-
+- /*
+- * Done with this Data-Out. Next, check if we have
+- * to send another Data-Out for this R2T.
+- */
+- BUG_ON(r2t->data_length - r2t->sent < 0);
+- left = r2t->data_length - r2t->sent;
+- if (left) {
+- iscsi_solicit_data_cont(conn, ctask, r2t, left);
+- goto send_hdr;
+- }
+-
+- /*
+- * Done with this R2T. Check if there are more
+- * outstanding R2Ts ready to be processed.
+- */
+- spin_lock_bh(&session->lock);
+- tcp_ctask->r2t = NULL;
+- __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
+- sizeof(void*));
+- if (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t,
+- sizeof(void*))) {
+- tcp_ctask->r2t = r2t;
+- spin_unlock_bh(&session->lock);
+- goto send_hdr;
+- }
+- spin_unlock_bh(&session->lock);
++ goto fail;
++ tcp_ctask->sent += r2t->data_count;
++ r2t->sent += r2t->data_count;
++ goto flush;
+ }
+ return 0;
+-}
+-
+-/**
+- * iscsi_tcp_ctask_xmit - xmit normal PDU task
+- * @conn: iscsi connection
+- * @ctask: iscsi command task
+- *
+- * Notes:
+- * The function can return -EAGAIN in which case caller must
+- * call it again later, or recover. '0' return code means successful
+- * xmit.
+- * The function is devided to logical helpers (above) for the different
+- * xmit stages.
+- *
+- *iscsi_send_cmd_hdr()
+- * XMSTATE_BIT_CMD_HDR_INIT - prepare Header and Data buffers Calculate
+- * Header Digest
+- * XMSTATE_BIT_CMD_HDR_XMIT - Transmit header in progress
+- *
+- *iscsi_send_padding
+- * XMSTATE_BIT_W_PAD - Prepare and send pading
+- * XMSTATE_BIT_W_RESEND_PAD - retry send pading
+- *
+- *iscsi_send_digest
+- * XMSTATE_BIT_W_RESEND_DATA_DIGEST - Finalize and send Data Digest
+- * XMSTATE_BIT_W_RESEND_DATA_DIGEST - retry sending digest
+- *
+- *iscsi_send_unsol_hdr
+- * XMSTATE_BIT_UNS_INIT - prepare un-solicit data header and digest
+- * XMSTATE_BIT_UNS_HDR - send un-solicit header
+- *
+- *iscsi_send_unsol_pdu
+- * XMSTATE_BIT_UNS_DATA - send un-solicit data in progress
+- *
+- *iscsi_send_sol_pdu
+- * XMSTATE_BIT_SOL_HDR_INIT - solicit data header and digest initialize
+- * XMSTATE_BIT_SOL_HDR - send solicit header
+- * XMSTATE_BIT_SOL_DATA - send solicit data
+- *
+- *iscsi_tcp_ctask_xmit
+- * XMSTATE_BIT_IMM_DATA - xmit managment data (??)
+- **/
+-static int
+-iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+-{
+- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+- int rc = 0;
+-
+- debug_scsi("ctask deq [cid %d xmstate %x itt 0x%x]\n",
+- conn->id, tcp_ctask->xmstate, ctask->itt);
+-
+- rc = iscsi_send_cmd_hdr(conn, ctask);
+- if (rc)
+- return rc;
+- if (ctask->sc->sc_data_direction != DMA_TO_DEVICE)
+- return 0;
+-
+- if (test_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate)) {
+- rc = iscsi_send_data(ctask, &tcp_ctask->sendbuf, &tcp_ctask->sg,
+- &tcp_ctask->sent, &ctask->imm_count,
+- &tcp_ctask->immbuf, &tcp_ctask->immdigest);
+- if (rc)
+- return rc;
+- clear_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate);
+- }
+-
+- rc = iscsi_send_unsol_pdu(conn, ctask);
+- if (rc)
+- return rc;
+-
+- rc = iscsi_send_sol_pdu(conn, ctask);
+- if (rc)
+- return rc;
+-
+- return rc;
++fail:
++ iscsi_conn_failure(conn, rc);
++ return -EIO;
+ }
-- lba = ((u64)scsicmd->cmnd[2] << 24) |
-- (scsicmd->cmnd[3] << 16) |
-+ lba = ((u64)scsicmd->cmnd[2] << 24) |
-+ (scsicmd->cmnd[3] << 16) |
- (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
- count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
- break;
-@@ -1584,7 +1611,7 @@ static int aac_read(struct scsi_cmnd * scsicmd)
- scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
- return 0;
- }
--
-+
- printk(KERN_WARNING "aac_read: aac_fib_send failed with status: %d.\n", status);
- /*
- * For some reason, the Fib didn't queue, return QUEUE_FULL
-@@ -1619,11 +1646,11 @@ static int aac_write(struct scsi_cmnd * scsicmd)
- } else if (scsicmd->cmnd[0] == WRITE_16) { /* 16 byte command */
- dprintk((KERN_DEBUG "aachba: received a write(16) command on id %d.\n", scmd_id(scsicmd)));
+ static struct iscsi_cls_conn *
+@@ -1784,9 +1492,6 @@ iscsi_tcp_conn_create(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
-- lba = ((u64)scsicmd->cmnd[2] << 56) |
-+ lba = ((u64)scsicmd->cmnd[2] << 56) |
- ((u64)scsicmd->cmnd[3] << 48) |
- ((u64)scsicmd->cmnd[4] << 40) |
- ((u64)scsicmd->cmnd[5] << 32) |
-- ((u64)scsicmd->cmnd[6] << 24) |
-+ ((u64)scsicmd->cmnd[6] << 24) |
- (scsicmd->cmnd[7] << 16) |
- (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
- count = (scsicmd->cmnd[10] << 24) | (scsicmd->cmnd[11] << 16) |
-@@ -1712,8 +1739,8 @@ static void synchronize_callback(void *context, struct fib *fibptr)
- ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
- 0, 0);
- memcpy(cmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
-- min(sizeof(dev->fsa_dev[cid].sense_data),
-- sizeof(cmd->sense_buffer)));
-+ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- }
+ conn->dd_data = tcp_conn;
+ tcp_conn->iscsi_conn = conn;
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
+- /* initial operational parameters */
+- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
- aac_fib_complete(fibptr);
-@@ -1798,7 +1825,7 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
- if (active)
- return SCSI_MLQUEUE_DEVICE_BUSY;
+ tcp_conn->tx_hash.tfm = crypto_alloc_hash("crc32c", 0,
+ CRYPTO_ALG_ASYNC);
+@@ -1863,11 +1568,9 @@ static void
+ iscsi_tcp_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+ {
+ struct iscsi_conn *conn = cls_conn->dd_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- aac = (struct aac_dev *)scsicmd->device->host->hostdata;
-+ aac = (struct aac_dev *)sdev->host->hostdata;
- if (aac->in_reset)
- return SCSI_MLQUEUE_HOST_BUSY;
+ iscsi_conn_stop(cls_conn, flag);
+ iscsi_tcp_release_conn(conn);
+- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
+ }
-@@ -1850,14 +1877,14 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
- * Emulate a SCSI command and queue the required request for the
- * aacraid firmware.
- */
--
-+
- int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- {
- u32 cid;
- struct Scsi_Host *host = scsicmd->device->host;
- struct aac_dev *dev = (struct aac_dev *)host->hostdata;
- struct fsa_dev_info *fsa_dev_ptr = dev->fsa_dev;
--
-+
- if (fsa_dev_ptr == NULL)
- return -1;
+ static int iscsi_tcp_get_addr(struct iscsi_conn *conn, struct socket *sock,
+@@ -1967,7 +1670,7 @@ iscsi_tcp_conn_bind(struct iscsi_cls_session *cls_session,
/*
-@@ -1898,7 +1925,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- }
- }
- } else { /* check for physical non-dasd devices */
-- if ((dev->nondasd_support == 1) || expose_physicals) {
-+ if (dev->nondasd_support || expose_physicals ||
-+ dev->jbod) {
- if (dev->in_reset)
- return -1;
- return aac_send_srb_fib(scsicmd);
-@@ -1913,7 +1941,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- * else Command for the controller itself
+ * set receive state machine into initial state
*/
- else if ((scsicmd->cmnd[0] != INQUIRY) && /* only INQUIRY & TUR cmnd supported for controller */
-- (scsicmd->cmnd[0] != TEST_UNIT_READY))
-+ (scsicmd->cmnd[0] != TEST_UNIT_READY))
- {
- dprintk((KERN_WARNING "Only INQUIRY & TUR command supported for controller, rcvd = 0x%x.\n", scsicmd->cmnd[0]));
- scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
-@@ -1922,9 +1950,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- SENCODE_INVALID_COMMAND,
- ASENCODE_INVALID_COMMAND, 0, 0, 0, 0);
- memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
-- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
-- ? sizeof(scsicmd->sense_buffer)
-- : sizeof(dev->fsa_dev[cid].sense_data));
-+ min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- scsicmd->scsi_done(scsicmd);
- return 0;
- }
-@@ -1939,7 +1966,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- dprintk((KERN_DEBUG "INQUIRY command, ID: %d.\n", cid));
- memset(&inq_data, 0, sizeof (struct inquiry_data));
-
-- if (scsicmd->cmnd[1] & 0x1 ) {
-+ if (scsicmd->cmnd[1] & 0x1) {
- char *arr = (char *)&inq_data;
+- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
++ iscsi_tcp_hdr_recv_prep(tcp_conn);
+ return 0;
- /* EVPD bit set */
-@@ -1974,10 +2001,9 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- ASENCODE_NO_SENSE, 0, 7, 2, 0);
- memcpy(scsicmd->sense_buffer,
- &dev->fsa_dev[cid].sense_data,
-- (sizeof(dev->fsa_dev[cid].sense_data) >
-- sizeof(scsicmd->sense_buffer))
-- ? sizeof(scsicmd->sense_buffer)
-- : sizeof(dev->fsa_dev[cid].sense_data));
-+ min_t(size_t,
-+ sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- }
- scsicmd->scsi_done(scsicmd);
- return 0;
-@@ -2092,7 +2118,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- mode_buf[2] = 0; /* Device-specific param,
- bit 8: 0/1 = write enabled/protected
- bit 4: 0/1 = FUA enabled */
-- if (dev->raw_io_interface)
-+ if (dev->raw_io_interface && ((aac_cache & 5) != 1))
- mode_buf[2] = 0x10;
- mode_buf[3] = 0; /* Block descriptor length */
- if (((scsicmd->cmnd[2] & 0x3f) == 8) ||
-@@ -2100,7 +2126,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- mode_buf[0] = 6;
- mode_buf[4] = 8;
- mode_buf[5] = 1;
-- mode_buf[6] = 0x04; /* WCE */
-+ mode_buf[6] = ((aac_cache & 6) == 2)
-+ ? 0 : 0x04; /* WCE */
- mode_buf_length = 7;
- if (mode_buf_length > scsicmd->cmnd[4])
- mode_buf_length = scsicmd->cmnd[4];
-@@ -2123,7 +2150,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- mode_buf[3] = 0; /* Device-specific param,
- bit 8: 0/1 = write enabled/protected
- bit 4: 0/1 = FUA enabled */
-- if (dev->raw_io_interface)
-+ if (dev->raw_io_interface && ((aac_cache & 5) != 1))
- mode_buf[3] = 0x10;
- mode_buf[4] = 0; /* reserved */
- mode_buf[5] = 0; /* reserved */
-@@ -2134,7 +2161,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- mode_buf[1] = 9;
- mode_buf[8] = 8;
- mode_buf[9] = 1;
-- mode_buf[10] = 0x04; /* WCE */
-+ mode_buf[10] = ((aac_cache & 6) == 2)
-+ ? 0 : 0x04; /* WCE */
- mode_buf_length = 11;
- if (mode_buf_length > scsicmd->cmnd[8])
- mode_buf_length = scsicmd->cmnd[8];
-@@ -2179,7 +2207,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- return 0;
- }
+ free_socket:
+@@ -1977,10 +1680,17 @@ free_socket:
-- switch (scsicmd->cmnd[0])
-+ switch (scsicmd->cmnd[0])
- {
- case READ_6:
- case READ_10:
-@@ -2192,11 +2220,11 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- * corresponds to a container. Needed to convert
- * containers to /dev/sd device names
- */
--
+ /* called with host lock */
+ static void
+-iscsi_tcp_mgmt_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
++iscsi_tcp_mtask_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
+ {
+- struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
+- tcp_mtask->xmstate = 1 << XMSTATE_BIT_IMM_HDR_INIT;
++ debug_scsi("mtask deq [cid %d itt 0x%x]\n", conn->id, mtask->itt);
+
- if (scsicmd->request->rq_disk)
- strlcpy(fsa_dev_ptr[cid].devname,
- scsicmd->request->rq_disk->disk_name,
-- min(sizeof(fsa_dev_ptr[cid].devname),
-+ min(sizeof(fsa_dev_ptr[cid].devname),
- sizeof(scsicmd->request->rq_disk->disk_name) + 1));
++ /* Prepare PDU, optionally w/ immediate data */
++ iscsi_tcp_send_hdr_prep(conn, mtask->hdr, sizeof(*mtask->hdr));
++
++ /* If we have immediate data, attach a payload */
++ if (mtask->data_count)
++ iscsi_tcp_send_linear_data_prepare(conn, mtask->data,
++ mtask->data_count);
+ }
- return aac_read(scsicmd);
-@@ -2210,9 +2238,16 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- return aac_write(scsicmd);
+ static int
+@@ -2003,8 +1713,7 @@ iscsi_r2tpool_alloc(struct iscsi_session *session)
+ */
- case SYNCHRONIZE_CACHE:
-+ if (((aac_cache & 6) == 6) && dev->cache_protected) {
-+ scsicmd->result = DID_OK << 16 |
-+ COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
-+ scsicmd->scsi_done(scsicmd);
-+ return 0;
-+ }
- /* Issue FIB to tell Firmware to flush it's cache */
-- return aac_synchronize(scsicmd);
--
-+ if ((aac_cache & 6) != 2)
-+ return aac_synchronize(scsicmd);
-+ /* FALLTHRU */
- default:
- /*
- * Unhandled commands
-@@ -2223,9 +2258,9 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
- ILLEGAL_REQUEST, SENCODE_INVALID_COMMAND,
- ASENCODE_INVALID_COMMAND, 0, 0, 0, 0);
- memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
-- (sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
-- ? sizeof(scsicmd->sense_buffer)
-- : sizeof(dev->fsa_dev[cid].sense_data));
-+ min_t(size_t,
-+ sizeof(dev->fsa_dev[cid].sense_data),
-+ SCSI_SENSE_BUFFERSIZE));
- scsicmd->scsi_done(scsicmd);
- return 0;
+ /* R2T pool */
+- if (iscsi_pool_init(&tcp_ctask->r2tpool, session->max_r2t * 4,
+- (void***)&tcp_ctask->r2ts,
++ if (iscsi_pool_init(&tcp_ctask->r2tpool, session->max_r2t * 4, NULL,
+ sizeof(struct iscsi_r2t_info))) {
+ goto r2t_alloc_fail;
+ }
+@@ -2013,8 +1722,7 @@ iscsi_r2tpool_alloc(struct iscsi_session *session)
+ tcp_ctask->r2tqueue = kfifo_alloc(
+ session->max_r2t * 4 * sizeof(void*), GFP_KERNEL, NULL);
+ if (tcp_ctask->r2tqueue == ERR_PTR(-ENOMEM)) {
+- iscsi_pool_free(&tcp_ctask->r2tpool,
+- (void**)tcp_ctask->r2ts);
++ iscsi_pool_free(&tcp_ctask->r2tpool);
+ goto r2t_alloc_fail;
+ }
}
-@@ -2243,7 +2278,7 @@ static int query_disk(struct aac_dev *dev, void __user *arg)
- return -EFAULT;
- if (qd.cnum == -1)
- qd.cnum = qd.id;
-- else if ((qd.bus == -1) && (qd.id == -1) && (qd.lun == -1))
-+ else if ((qd.bus == -1) && (qd.id == -1) && (qd.lun == -1))
- {
- if (qd.cnum < 0 || qd.cnum >= dev->maximum_num_containers)
- return -EINVAL;
-@@ -2370,7 +2405,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
-
- scsicmd->sense_buffer[0] = '\0'; /* Initialize sense valid flag to false */
- /*
-- * Calculate resid for sg
-+ * Calculate resid for sg
- */
+@@ -2027,8 +1735,7 @@ r2t_alloc_fail:
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
- scsi_set_resid(scsicmd, scsi_bufflen(scsicmd)
-@@ -2385,10 +2420,8 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
- if (le32_to_cpu(srbreply->status) != ST_OK){
- int len;
- printk(KERN_WARNING "aac_srb_callback: srb failed, status = %d\n", le32_to_cpu(srbreply->status));
-- len = (le32_to_cpu(srbreply->sense_data_size) >
-- sizeof(scsicmd->sense_buffer)) ?
-- sizeof(scsicmd->sense_buffer) :
-- le32_to_cpu(srbreply->sense_data_size);
-+ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
-+ SCSI_SENSE_BUFFERSIZE);
- scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
- memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
- }
-@@ -2412,7 +2445,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
- case WRITE_12:
- case READ_16:
- case WRITE_16:
-- if(le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow ) {
-+ if (le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow) {
- printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
- } else {
- printk(KERN_WARNING"aacraid: SCSI CMD Data Overrun\n");
-@@ -2481,26 +2514,23 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
- printk("aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
- le32_to_cpu(srbreply->srb_status) & 0x3F,
- aac_get_status_string(
-- le32_to_cpu(srbreply->srb_status) & 0x3F),
-- scsicmd->cmnd[0],
-+ le32_to_cpu(srbreply->srb_status) & 0x3F),
-+ scsicmd->cmnd[0],
- le32_to_cpu(srbreply->scsi_status));
- #endif
- scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8;
- break;
+ kfifo_free(tcp_ctask->r2tqueue);
+- iscsi_pool_free(&tcp_ctask->r2tpool,
+- (void**)tcp_ctask->r2ts);
++ iscsi_pool_free(&tcp_ctask->r2tpool);
}
-- if (le32_to_cpu(srbreply->scsi_status) == 0x02 ){ // Check Condition
-+ if (le32_to_cpu(srbreply->scsi_status) == SAM_STAT_CHECK_CONDITION) {
- int len;
- scsicmd->result |= SAM_STAT_CHECK_CONDITION;
-- len = (le32_to_cpu(srbreply->sense_data_size) >
-- sizeof(scsicmd->sense_buffer)) ?
-- sizeof(scsicmd->sense_buffer) :
-- le32_to_cpu(srbreply->sense_data_size);
-+ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
-+ SCSI_SENSE_BUFFERSIZE);
- #ifdef AAC_DETAILED_STATUS_INFO
- printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
- le32_to_cpu(srbreply->status), len);
- #endif
- memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
--
+ return -ENOMEM;
+ }
+@@ -2043,8 +1750,7 @@ iscsi_r2tpool_free(struct iscsi_session *session)
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+
+ kfifo_free(tcp_ctask->r2tqueue);
+- iscsi_pool_free(&tcp_ctask->r2tpool,
+- (void**)tcp_ctask->r2ts);
++ iscsi_pool_free(&tcp_ctask->r2tpool);
}
- /*
- * OR in the scsi status (already shifted up a bit)
-@@ -2517,7 +2547,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
- * aac_send_scb_fib
- * @scsicmd: the scsi command block
- *
-- * This routine will form a FIB and fill in the aac_srb from the
-+ * This routine will form a FIB and fill in the aac_srb from the
- * scsicmd passed in.
- */
+ }
-@@ -2731,7 +2761,7 @@ static struct aac_srb_status_info srb_status_info[] = {
- { SRB_STATUS_ERROR_RECOVERY, "Error Recovery"},
- { SRB_STATUS_NOT_STARTED, "Not Started"},
- { SRB_STATUS_NOT_IN_USE, "Not In Use"},
-- { SRB_STATUS_FORCE_ABORT, "Force Abort"},
-+ { SRB_STATUS_FORCE_ABORT, "Force Abort"},
- { SRB_STATUS_DOMAIN_VALIDATION_FAIL,"Domain Validation Failure"},
- { 0xff, "Unknown Error"}
- };
-diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
-index 9abba8b..3195d29 100644
---- a/drivers/scsi/aacraid/aacraid.h
-+++ b/drivers/scsi/aacraid/aacraid.h
-@@ -1,4 +1,4 @@
--#if (!defined(dprintk))
-+#ifndef dprintk
- # define dprintk(x)
- #endif
- /* eg: if (nblank(dprintk(x))) */
-@@ -12,7 +12,7 @@
- *----------------------------------------------------------------------------*/
+@@ -2060,9 +1766,6 @@ iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn, enum iscsi_param param,
+ switch(param) {
+ case ISCSI_PARAM_HDRDGST_EN:
+ iscsi_set_param(cls_conn, param, buf, buflen);
+- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
+- if (conn->hdrdgst_en)
+- tcp_conn->hdr_size += sizeof(__u32);
+ break;
+ case ISCSI_PARAM_DATADGST_EN:
+ iscsi_set_param(cls_conn, param, buf, buflen);
+@@ -2071,12 +1774,12 @@ iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn, enum iscsi_param param,
+ break;
+ case ISCSI_PARAM_MAX_R2T:
+ sscanf(buf, "%d", &value);
+- if (session->max_r2t == roundup_pow_of_two(value))
++ if (value <= 0 || !is_power_of_2(value))
++ return -EINVAL;
++ if (session->max_r2t == value)
+ break;
+ iscsi_r2tpool_free(session);
+ iscsi_set_param(cls_conn, param, buf, buflen);
+- if (session->max_r2t & (session->max_r2t - 1))
+- session->max_r2t = roundup_pow_of_two(session->max_r2t);
+ if (iscsi_r2tpool_alloc(session))
+ return -ENOMEM;
+ break;
+@@ -2183,14 +1886,15 @@ iscsi_tcp_session_create(struct iscsi_transport *iscsit,
+ struct iscsi_cmd_task *ctask = session->cmds[cmd_i];
+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
- #ifndef AAC_DRIVER_BUILD
--# define AAC_DRIVER_BUILD 2449
-+# define AAC_DRIVER_BUILD 2455
- # define AAC_DRIVER_BRANCH "-ms"
- #endif
- #define MAXIMUM_NUM_CONTAINERS 32
-@@ -50,9 +50,9 @@ struct diskparm
- /*
- * Firmware constants
- */
--
-+
- #define CT_NONE 0
--#define CT_OK 218
-+#define CT_OK 218
- #define FT_FILESYS 8 /* ADAPTEC's "FSA"(tm) filesystem */
- #define FT_DRIVE 9 /* physical disk - addressable in scsi by bus/id/lun */
+- ctask->hdr = &tcp_ctask->hdr;
++ ctask->hdr = &tcp_ctask->hdr.cmd_hdr;
++ ctask->hdr_max = sizeof(tcp_ctask->hdr) - ISCSI_DIGEST_SIZE;
+ }
-@@ -107,12 +107,12 @@ struct user_sgentryraw {
+ for (cmd_i = 0; cmd_i < session->mgmtpool_max; cmd_i++) {
+ struct iscsi_mgmt_task *mtask = session->mgmt_cmds[cmd_i];
+ struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
- struct sgmap {
- __le32 count;
-- struct sgentry sg[1];
-+ struct sgentry sg[1];
- };
+- mtask->hdr = &tcp_mtask->hdr;
++ mtask->hdr = (struct iscsi_hdr *) &tcp_mtask->hdr;
+ }
- struct user_sgmap {
- u32 count;
-- struct user_sgentry sg[1];
-+ struct user_sgentry sg[1];
- };
+ if (iscsi_r2tpool_alloc(class_to_transport_session(cls_session)))
+@@ -2222,12 +1926,14 @@ static struct scsi_host_template iscsi_sht = {
+ .queuecommand = iscsi_queuecommand,
+ .change_queue_depth = iscsi_change_queue_depth,
+ .can_queue = ISCSI_DEF_XMIT_CMDS_MAX - 1,
+- .sg_tablesize = ISCSI_SG_TABLESIZE,
++ .sg_tablesize = 4096,
+ .max_sectors = 0xFFFF,
+ .cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
+ .eh_abort_handler = iscsi_eh_abort,
++ .eh_device_reset_handler= iscsi_eh_device_reset,
+ .eh_host_reset_handler = iscsi_eh_host_reset,
+ .use_clustering = DISABLE_CLUSTERING,
++ .use_sg_chaining = ENABLE_SG_CHAINING,
+ .slave_configure = iscsi_tcp_slave_configure,
+ .proc_name = "iscsi_tcp",
+ .this_id = -1,
+@@ -2257,14 +1963,17 @@ static struct iscsi_transport iscsi_tcp_transport = {
+ ISCSI_PERSISTENT_ADDRESS |
+ ISCSI_TARGET_NAME | ISCSI_TPGT |
+ ISCSI_USERNAME | ISCSI_PASSWORD |
+- ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN,
++ ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
++ ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
++ ISCSI_LU_RESET_TMO |
++ ISCSI_PING_TMO | ISCSI_RECV_TMO,
+ .host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
+ ISCSI_HOST_INITIATOR_NAME |
+ ISCSI_HOST_NETDEV_NAME,
+ .host_template = &iscsi_sht,
+ .conndata_size = sizeof(struct iscsi_conn),
+ .max_conn = 1,
+- .max_cmd_len = ISCSI_TCP_MAX_CMD_LEN,
++ .max_cmd_len = 16,
+ /* session management */
+ .create_session = iscsi_tcp_session_create,
+ .destroy_session = iscsi_tcp_session_destroy,
+@@ -2283,8 +1992,8 @@ static struct iscsi_transport iscsi_tcp_transport = {
+ /* IO */
+ .send_pdu = iscsi_conn_send_pdu,
+ .get_stats = iscsi_conn_get_stats,
+- .init_cmd_task = iscsi_tcp_cmd_init,
+- .init_mgmt_task = iscsi_tcp_mgmt_init,
++ .init_cmd_task = iscsi_tcp_ctask_init,
++ .init_mgmt_task = iscsi_tcp_mtask_init,
+ .xmit_cmd_task = iscsi_tcp_ctask_xmit,
+ .xmit_mgmt_task = iscsi_tcp_mtask_xmit,
+ .cleanup_cmd_task = iscsi_tcp_cleanup_ctask,
+diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
+index 68c36cc..ed0b991 100644
+--- a/drivers/scsi/iscsi_tcp.h
++++ b/drivers/scsi/iscsi_tcp.h
+@@ -24,71 +24,61 @@
- struct sgmap64 {
-@@ -137,18 +137,18 @@ struct user_sgmapraw {
+ #include <scsi/libiscsi.h>
- struct creation_info
- {
-- u8 buildnum; /* e.g., 588 */
-- u8 usec; /* e.g., 588 */
-- u8 via; /* e.g., 1 = FSU,
-- * 2 = API
-+ u8 buildnum; /* e.g., 588 */
-+ u8 usec; /* e.g., 588 */
-+ u8 via; /* e.g., 1 = FSU,
-+ * 2 = API
- */
-- u8 year; /* e.g., 1997 = 97 */
-+ u8 year; /* e.g., 1997 = 97 */
- __le32 date; /*
-- * unsigned Month :4; // 1 - 12
-- * unsigned Day :6; // 1 - 32
-- * unsigned Hour :6; // 0 - 23
-- * unsigned Minute :6; // 0 - 60
-- * unsigned Second :6; // 0 - 60
-+ * unsigned Month :4; // 1 - 12
-+ * unsigned Day :6; // 1 - 32
-+ * unsigned Hour :6; // 0 - 23
-+ * unsigned Minute :6; // 0 - 60
-+ * unsigned Second :6; // 0 - 60
- */
- __le32 serial[2]; /* e.g., 0x1DEADB0BFAFAF001 */
- };
-@@ -184,7 +184,7 @@ struct creation_info
- /*
- * Set the queues on a 16 byte alignment
- */
--
+-/* Socket's Receive state machine */
+-#define IN_PROGRESS_WAIT_HEADER 0x0
+-#define IN_PROGRESS_HEADER_GATHER 0x1
+-#define IN_PROGRESS_DATA_RECV 0x2
+-#define IN_PROGRESS_DDIGEST_RECV 0x3
+-#define IN_PROGRESS_PAD_RECV 0x4
+-
+-/* xmit state machine */
+-#define XMSTATE_VALUE_IDLE 0
+-#define XMSTATE_BIT_CMD_HDR_INIT 0
+-#define XMSTATE_BIT_CMD_HDR_XMIT 1
+-#define XMSTATE_BIT_IMM_HDR 2
+-#define XMSTATE_BIT_IMM_DATA 3
+-#define XMSTATE_BIT_UNS_INIT 4
+-#define XMSTATE_BIT_UNS_HDR 5
+-#define XMSTATE_BIT_UNS_DATA 6
+-#define XMSTATE_BIT_SOL_HDR 7
+-#define XMSTATE_BIT_SOL_DATA 8
+-#define XMSTATE_BIT_W_PAD 9
+-#define XMSTATE_BIT_W_RESEND_PAD 10
+-#define XMSTATE_BIT_W_RESEND_DATA_DIGEST 11
+-#define XMSTATE_BIT_IMM_HDR_INIT 12
+-#define XMSTATE_BIT_SOL_HDR_INIT 13
+-
+-#define ISCSI_PAD_LEN 4
+-#define ISCSI_SG_TABLESIZE SG_ALL
+-#define ISCSI_TCP_MAX_CMD_LEN 16
+-
+ struct crypto_hash;
+ struct socket;
++struct iscsi_tcp_conn;
++struct iscsi_segment;
+
- #define QUEUE_ALIGNMENT 16
++typedef int iscsi_segment_done_fn_t(struct iscsi_tcp_conn *,
++ struct iscsi_segment *);
++
++struct iscsi_segment {
++ unsigned char *data;
++ unsigned int size;
++ unsigned int copied;
++ unsigned int total_size;
++ unsigned int total_copied;
++
++ struct hash_desc *hash;
++ unsigned char recv_digest[ISCSI_DIGEST_SIZE];
++ unsigned char digest[ISCSI_DIGEST_SIZE];
++ unsigned int digest_len;
++
++ struct scatterlist *sg;
++ void *sg_mapped;
++ unsigned int sg_offset;
++
++ iscsi_segment_done_fn_t *done;
++};
- /*
-@@ -203,9 +203,9 @@ struct aac_entry {
- * The adapter assumes the ProducerIndex and ConsumerIndex are grouped
- * adjacently and in that order.
- */
--
+ /* Socket connection recieve helper */
+ struct iscsi_tcp_recv {
+ struct iscsi_hdr *hdr;
+- struct sk_buff *skb;
+- int offset;
+- int len;
+- int hdr_offset;
+- int copy;
+- int copied;
+- int padding;
+- struct iscsi_cmd_task *ctask; /* current cmd in progress */
++ struct iscsi_segment segment;
+
- struct aac_qhdr {
-- __le64 header_addr;/* Address to hand the adapter to access
-+ __le64 header_addr;/* Address to hand the adapter to access
- to this queue head */
- __le32 *producer; /* The producer index for this queue (host address) */
- __le32 *consumer; /* The consumer index for this queue (host address) */
-@@ -215,7 +215,7 @@ struct aac_qhdr {
- * Define all the events which the adapter would like to notify
- * the host of.
- */
--
++ /* Allocate buffer for BHS + AHS */
++ uint32_t hdr_buf[64];
+
+ /* copied and flipped values */
+ int datalen;
+- int datadgst;
+- char zero_copy_hdr;
++};
+
- #define HostNormCmdQue 1 /* Change in host normal priority command queue */
- #define HostHighCmdQue 2 /* Change in host high priority command queue */
- #define HostNormRespQue 3 /* Change in host normal priority response queue */
-@@ -286,17 +286,17 @@ struct aac_fibhdr {
- u8 StructType; /* Type FIB */
- u8 Flags; /* Flags for FIB */
- __le16 Size; /* Size of this FIB in bytes */
-- __le16 SenderSize; /* Size of the FIB in the sender
-+ __le16 SenderSize; /* Size of the FIB in the sender
- (for response sizing) */
- __le32 SenderFibAddress; /* Host defined data in the FIB */
-- __le32 ReceiverFibAddress;/* Logical address of this FIB for
-+ __le32 ReceiverFibAddress;/* Logical address of this FIB for
- the adapter */
- u32 SenderData; /* Place holder for the sender to store data */
- union {
- struct {
-- __le32 _ReceiverTimeStart; /* Timestamp for
-+ __le32 _ReceiverTimeStart; /* Timestamp for
- receipt of fib */
-- __le32 _ReceiverTimeDone; /* Timestamp for
-+ __le32 _ReceiverTimeDone; /* Timestamp for
- completion of fib */
- } _s;
- } _u;
-@@ -311,7 +311,7 @@ struct hw_fib {
- * FIB commands
- */
++/* Socket connection send helper */
++struct iscsi_tcp_send {
++ struct iscsi_hdr *hdr;
++ struct iscsi_segment segment;
++ struct iscsi_segment data_segment;
+ };
--#define TestCommandResponse 1
-+#define TestCommandResponse 1
- #define TestAdapterCommand 2
- /*
- * Lowlevel and comm commands
-@@ -350,10 +350,6 @@ struct hw_fib {
- #define ContainerCommand64 501
- #define ContainerRawIo 502
- /*
-- * Cluster Commands
-- */
--#define ClusterCommand 550
--/*
- * Scsi Port commands (scsi passthrough)
- */
- #define ScsiPortCommand 600
-@@ -375,19 +371,19 @@ struct hw_fib {
- */
+ struct iscsi_tcp_conn {
+ struct iscsi_conn *iscsi_conn;
+ struct socket *sock;
+- struct iscsi_hdr hdr; /* header placeholder */
+- char hdrext[4*sizeof(__u16) +
+- sizeof(__u32)];
+- int data_copied;
+ int stop_stage; /* conn_stop() flag: *
+ * stop to recover, *
+ * stop to terminate */
+- /* iSCSI connection-wide sequencing */
+- int hdr_size; /* PDU header size */
+-
+ /* control data */
+ struct iscsi_tcp_recv in; /* TCP receive context */
+- int in_progress; /* connection state machine */
++ struct iscsi_tcp_send out; /* TCP send context */
- enum fib_xfer_state {
-- HostOwned = (1<<0),
-- AdapterOwned = (1<<1),
-- FibInitialized = (1<<2),
-- FibEmpty = (1<<3),
-- AllocatedFromPool = (1<<4),
-- SentFromHost = (1<<5),
-- SentFromAdapter = (1<<6),
-- ResponseExpected = (1<<7),
-- NoResponseExpected = (1<<8),
-- AdapterProcessed = (1<<9),
-- HostProcessed = (1<<10),
-- HighPriority = (1<<11),
-- NormalPriority = (1<<12),
-+ HostOwned = (1<<0),
-+ AdapterOwned = (1<<1),
-+ FibInitialized = (1<<2),
-+ FibEmpty = (1<<3),
-+ AllocatedFromPool = (1<<4),
-+ SentFromHost = (1<<5),
-+ SentFromAdapter = (1<<6),
-+ ResponseExpected = (1<<7),
-+ NoResponseExpected = (1<<8),
-+ AdapterProcessed = (1<<9),
-+ HostProcessed = (1<<10),
-+ HighPriority = (1<<11),
-+ NormalPriority = (1<<12),
- Async = (1<<13),
- AsyncIo = (1<<13), // rpbfix: remove with new regime
- PageFileIo = (1<<14), // rpbfix: remove with new regime
-@@ -420,7 +416,7 @@ struct aac_init
- __le32 AdapterFibAlign;
- __le32 printfbuf;
- __le32 printfbufsiz;
-- __le32 HostPhysMemPages; /* number of 4k pages of host
-+ __le32 HostPhysMemPages; /* number of 4k pages of host
- physical memory */
- __le32 HostElapsedSeconds; /* number of seconds since 1970. */
- /*
-@@ -481,7 +477,7 @@ struct adapter_ops
+ /* old values for socket callbacks */
+ void (*old_data_ready)(struct sock *, int);
+@@ -103,29 +93,19 @@ struct iscsi_tcp_conn {
+ uint32_t sendpage_failures_cnt;
+ uint32_t discontiguous_hdr_cnt;
- struct aac_driver_ident
- {
-- int (*init)(struct aac_dev *dev);
-+ int (*init)(struct aac_dev *dev);
- char * name;
- char * vname;
- char * model;
-@@ -489,7 +485,7 @@ struct aac_driver_ident
- int quirks;
+- ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
+-};
++ int error;
+
+-struct iscsi_buf {
+- struct scatterlist sg;
+- unsigned int sent;
+- char use_sendmsg;
++ ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
};
- /*
-- * Some adapter firmware needs communication memory
-+ * Some adapter firmware needs communication memory
- * below 2gig. This tells the init function to set the
- * dma mask such that fib memory will be allocated where the
- * adapter firmware can get to it.
-@@ -521,33 +517,39 @@ struct aac_driver_ident
- #define AAC_QUIRK_17SG 0x0010
- /*
-+ * Some adapter firmware does not support 64 bit scsi passthrough
-+ * commands.
-+ */
-+#define AAC_QUIRK_SCSI_32 0x0020
-+
-+/*
- * The adapter interface specs all queues to be located in the same
- * physically contigous block. The host structure that defines the
- * commuication queues will assume they are each a separate physically
- * contigous memory region that will support them all being one big
-- * contigous block.
-+ * contigous block.
- * There is a command and response queue for each level and direction of
- * commuication. These regions are accessed by both the host and adapter.
- */
--
-+
- struct aac_queue {
-- u64 logical; /*address we give the adapter */
-+ u64 logical; /*address we give the adapter */
- struct aac_entry *base; /*system virtual address */
-- struct aac_qhdr headers; /*producer,consumer q headers*/
-- u32 entries; /*Number of queue entries */
-+ struct aac_qhdr headers; /*producer,consumer q headers*/
-+ u32 entries; /*Number of queue entries */
- wait_queue_head_t qfull; /*Event to wait on if q full */
- wait_queue_head_t cmdready; /*Cmd ready from the adapter */
-- /* This is only valid for adapter to host command queues. */
-- spinlock_t *lock; /* Spinlock for this queue must take this lock before accessing the lock */
-+ /* This is only valid for adapter to host command queues. */
-+ spinlock_t *lock; /* Spinlock for this queue must take this lock before accessing the lock */
- spinlock_t lockdata; /* Actual lock (used only on one side of the lock) */
-- struct list_head cmdq; /* A queue of FIBs which need to be prcessed by the FS thread. This is */
-- /* only valid for command queues which receive entries from the adapter. */
-+ struct list_head cmdq; /* A queue of FIBs which need to be prcessed by the FS thread. This is */
-+ /* only valid for command queues which receive entries from the adapter. */
- u32 numpending; /* Number of entries on outstanding queue. */
- struct aac_dev * dev; /* Back pointer to adapter structure */
+ struct iscsi_data_task {
+ struct iscsi_data hdr; /* PDU */
+- char hdrext[sizeof(__u32)]; /* Header-Digest */
+- struct iscsi_buf digestbuf; /* digest buffer */
+- uint32_t digest; /* data digest */
++ char hdrext[ISCSI_DIGEST_SIZE];/* Header-Digest */
};
- /*
-- * Message queues. The order here is important, see also the
-+ * Message queues. The order here is important, see also the
- * queue type ordering
- */
+ struct iscsi_tcp_mgmt_task {
+ struct iscsi_hdr hdr;
+- char hdrext[sizeof(__u32)]; /* Header-Digest */
+- unsigned long xmstate; /* mgmt xmit progress */
+- struct iscsi_buf headbuf; /* header buffer */
+- struct iscsi_buf sendbuf; /* in progress buffer */
+- int sent;
++ char hdrext[ISCSI_DIGEST_SIZE]; /* Header-Digest */
+ };
-@@ -559,12 +561,12 @@ struct aac_queue_block
- /*
- * SaP1 Message Unit Registers
- */
--
-+
- struct sa_drawbridge_CSR {
-- /* Offset | Name */
-+ /* Offset | Name */
- __le32 reserved[10]; /* 00h-27h | Reserved */
- u8 LUT_Offset; /* 28h | Lookup Table Offset */
-- u8 reserved1[3]; /* 29h-2bh | Reserved */
-+ u8 reserved1[3]; /* 29h-2bh | Reserved */
- __le32 LUT_Data; /* 2ch | Looup Table Data */
- __le32 reserved2[26]; /* 30h-97h | Reserved */
- __le16 PRICLEARIRQ; /* 98h | Primary Clear Irq */
-@@ -583,8 +585,8 @@ struct sa_drawbridge_CSR {
- __le32 MAILBOX5; /* bch | Scratchpad 5 */
- __le32 MAILBOX6; /* c0h | Scratchpad 6 */
- __le32 MAILBOX7; /* c4h | Scratchpad 7 */
-- __le32 ROM_Setup_Data; /* c8h | Rom Setup and Data */
-- __le32 ROM_Control_Addr;/* cch | Rom Control and Address */
-+ __le32 ROM_Setup_Data; /* c8h | Rom Setup and Data */
-+ __le32 ROM_Control_Addr;/* cch | Rom Control and Address */
- __le32 reserved3[12]; /* d0h-ffh | reserved */
- __le32 LUT[64]; /* 100h-1ffh | Lookup Table Entries */
+ struct iscsi_r2t_info {
+@@ -133,38 +113,26 @@ struct iscsi_r2t_info {
+ __be32 exp_statsn; /* copied from R2T */
+ uint32_t data_length; /* copied from R2T */
+ uint32_t data_offset; /* copied from R2T */
+- struct iscsi_buf headbuf; /* Data-Out Header Buffer */
+- struct iscsi_buf sendbuf; /* Data-Out in progress buffer*/
+ int sent; /* R2T sequence progress */
+ int data_count; /* DATA-Out payload progress */
+- struct scatterlist *sg; /* per-R2T SG list */
+ int solicit_datasn;
+- struct iscsi_data_task dtask; /* which data task */
++ struct iscsi_data_task dtask; /* Data-Out header buf */
};
-@@ -597,7 +599,7 @@ struct sa_drawbridge_CSR {
- #define Mailbox5 SaDbCSR.MAILBOX5
- #define Mailbox6 SaDbCSR.MAILBOX6
- #define Mailbox7 SaDbCSR.MAILBOX7
--
+
+ struct iscsi_tcp_cmd_task {
+- struct iscsi_cmd hdr;
+- char hdrext[4*sizeof(__u16)+ /* AHS */
+- sizeof(__u32)]; /* HeaderDigest */
+- char pad[ISCSI_PAD_LEN];
+- int pad_count; /* padded bytes */
+- struct iscsi_buf headbuf; /* header buf (xmit) */
+- struct iscsi_buf sendbuf; /* in progress buffer*/
+- unsigned long xmstate; /* xmit xtate machine */
++ struct iscsi_hdr_buff {
++ struct iscsi_cmd cmd_hdr;
++ char hdrextbuf[ISCSI_MAX_AHS_SIZE +
++ ISCSI_DIGEST_SIZE];
++ } hdr;
+
- #define DoorbellReg_p SaDbCSR.PRISETIRQ
- #define DoorbellReg_s SaDbCSR.SECSETIRQ
- #define DoorbellClrReg_p SaDbCSR.PRICLEARIRQ
-@@ -611,19 +613,19 @@ struct sa_drawbridge_CSR {
- #define DOORBELL_5 0x0020
- #define DOORBELL_6 0x0040
+ int sent;
+- struct scatterlist *sg; /* per-cmd SG list */
+- struct scatterlist *bad_sg; /* assert statement */
+- int sg_count; /* SG's to process */
+- uint32_t exp_datasn; /* expected target's R2TSN/DataSN */
++ uint32_t exp_datasn; /* expected target's R2TSN/DataSN */
+ int data_offset;
+- struct iscsi_r2t_info *r2t; /* in progress R2T */
+- struct iscsi_queue r2tpool;
++ struct iscsi_r2t_info *r2t; /* in progress R2T */
++ struct iscsi_pool r2tpool;
+ struct kfifo *r2tqueue;
+- struct iscsi_r2t_info **r2ts;
+- int digest_count;
+- uint32_t immdigest; /* for imm data */
+- struct iscsi_buf immbuf; /* for imm data digest */
+- struct iscsi_data_task unsol_dtask; /* unsol data task */
++ struct iscsi_data_task unsol_dtask; /* Data-Out header buf */
+ };
--
+ #endif /* ISCSI_H */
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 8b57af5..553168a 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -24,6 +24,7 @@
+ #include <linux/types.h>
+ #include <linux/kfifo.h>
+ #include <linux/delay.h>
++#include <linux/log2.h>
+ #include <asm/unaligned.h>
+ #include <net/tcp.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -86,7 +87,7 @@ iscsi_update_cmdsn(struct iscsi_session *session, struct iscsi_nopin *hdr)
+ * xmit thread
+ */
+ if (!list_empty(&session->leadconn->xmitqueue) ||
+- __kfifo_len(session->leadconn->mgmtqueue))
++ !list_empty(&session->leadconn->mgmtqueue))
+ scsi_queue_work(session->host,
+ &session->leadconn->xmitwork);
+ }
+@@ -122,6 +123,20 @@ void iscsi_prep_unsolicit_data_pdu(struct iscsi_cmd_task *ctask,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_prep_unsolicit_data_pdu);
+
++static int iscsi_add_hdr(struct iscsi_cmd_task *ctask, unsigned len)
++{
++ unsigned exp_len = ctask->hdr_len + len;
+
- #define PrintfReady DOORBELL_5
- #define PrintfDone DOORBELL_5
--
++ if (exp_len > ctask->hdr_max) {
++ WARN_ON(1);
++ return -EINVAL;
++ }
+
- struct sa_registers {
- struct sa_drawbridge_CSR SaDbCSR; /* 98h - c4h */
- };
--
++ WARN_ON(len & (ISCSI_PAD_LEN - 1)); /* caller must pad the AHS */
++ ctask->hdr_len = exp_len;
++ return 0;
++}
+
+ /**
+ * iscsi_prep_scsi_cmd_pdu - prep iscsi scsi cmd pdu
+ * @ctask: iscsi cmd task
+@@ -129,27 +144,32 @@ EXPORT_SYMBOL_GPL(iscsi_prep_unsolicit_data_pdu);
+ * Prep basic iSCSI PDU fields for a scsi cmd pdu. The LLD should set
+ * fields like dlength or final based on how much data it sends
+ */
+-static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
++static int iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
+ {
+ struct iscsi_conn *conn = ctask->conn;
+ struct iscsi_session *session = conn->session;
+ struct iscsi_cmd *hdr = ctask->hdr;
+ struct scsi_cmnd *sc = ctask->sc;
++ unsigned hdrlength;
++ int rc;
- #define Sa_MINIPORT_REVISION 1
+- hdr->opcode = ISCSI_OP_SCSI_CMD;
+- hdr->flags = ISCSI_ATTR_SIMPLE;
+- int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
+- hdr->itt = build_itt(ctask->itt, conn->id, session->age);
+- hdr->data_length = cpu_to_be32(scsi_bufflen(sc));
+- hdr->cmdsn = cpu_to_be32(session->cmdsn);
+- session->cmdsn++;
+- hdr->exp_statsn = cpu_to_be32(conn->exp_statsn);
+- memcpy(hdr->cdb, sc->cmnd, sc->cmd_len);
++ ctask->hdr_len = 0;
++ rc = iscsi_add_hdr(ctask, sizeof(*hdr));
++ if (rc)
++ return rc;
++ hdr->opcode = ISCSI_OP_SCSI_CMD;
++ hdr->flags = ISCSI_ATTR_SIMPLE;
++ int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
++ hdr->itt = build_itt(ctask->itt, conn->id, session->age);
++ hdr->data_length = cpu_to_be32(scsi_bufflen(sc));
++ hdr->cmdsn = cpu_to_be32(session->cmdsn);
++ session->cmdsn++;
++ hdr->exp_statsn = cpu_to_be32(conn->exp_statsn);
++ memcpy(hdr->cdb, sc->cmnd, sc->cmd_len);
+ if (sc->cmd_len < MAX_COMMAND_SIZE)
+ memset(&hdr->cdb[sc->cmd_len], 0,
+ MAX_COMMAND_SIZE - sc->cmd_len);
- #define sa_readw(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
--#define sa_readl(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
-+#define sa_readl(AEP, CSR) readl(&((AEP)->regs.sa->CSR))
- #define sa_writew(AEP, CSR, value) writew(value, &((AEP)->regs.sa->CSR))
- #define sa_writel(AEP, CSR, value) writel(value, &((AEP)->regs.sa->CSR))
+- ctask->data_count = 0;
+ ctask->imm_count = 0;
+ if (sc->sc_data_direction == DMA_TO_DEVICE) {
+ hdr->flags |= ISCSI_FLAG_CMD_WRITE;
+@@ -178,9 +198,9 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
+ else
+ ctask->imm_count = min(scsi_bufflen(sc),
+ conn->max_xmit_dlength);
+- hton24(ctask->hdr->dlength, ctask->imm_count);
++ hton24(hdr->dlength, ctask->imm_count);
+ } else
+- zero_data(ctask->hdr->dlength);
++ zero_data(hdr->dlength);
-@@ -640,21 +642,21 @@ struct rx_mu_registers {
- __le32 IMRx[2]; /* 1310h | 10h | Inbound Message Registers */
- __le32 OMRx[2]; /* 1318h | 18h | Outbound Message Registers */
- __le32 IDR; /* 1320h | 20h | Inbound Doorbell Register */
-- __le32 IISR; /* 1324h | 24h | Inbound Interrupt
-+ __le32 IISR; /* 1324h | 24h | Inbound Interrupt
- Status Register */
-- __le32 IIMR; /* 1328h | 28h | Inbound Interrupt
-- Mask Register */
-+ __le32 IIMR; /* 1328h | 28h | Inbound Interrupt
-+ Mask Register */
- __le32 ODR; /* 132Ch | 2Ch | Outbound Doorbell Register */
-- __le32 OISR; /* 1330h | 30h | Outbound Interrupt
-+ __le32 OISR; /* 1330h | 30h | Outbound Interrupt
- Status Register */
-- __le32 OIMR; /* 1334h | 34h | Outbound Interrupt
-+ __le32 OIMR; /* 1334h | 34h | Outbound Interrupt
- Mask Register */
- __le32 reserved2; /* 1338h | 38h | Reserved */
- __le32 reserved3; /* 133Ch | 3Ch | Reserved */
- __le32 InboundQueue;/* 1340h | 40h | Inbound Queue Port relative to firmware */
- __le32 OutboundQueue;/*1344h | 44h | Outbound Queue Port relative to firmware */
-- /* * Must access through ATU Inbound
-- Translation Window */
-+ /* * Must access through ATU Inbound
-+ Translation Window */
- };
+ if (!session->initial_r2t_en) {
+ ctask->unsol_count = min((session->first_burst),
+@@ -190,7 +210,7 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
- struct rx_inbound {
-@@ -710,12 +712,12 @@ struct rkt_registers {
- typedef void (*fib_callback)(void *ctxt, struct fib *fibctx);
+ if (!ctask->unsol_count)
+ /* No unsolicit Data-Out's */
+- ctask->hdr->flags |= ISCSI_FLAG_CMD_FINAL;
++ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
+ } else {
+ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
+ zero_data(hdr->dlength);
+@@ -199,13 +219,25 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
+ hdr->flags |= ISCSI_FLAG_CMD_READ;
+ }
- struct aac_fib_context {
-- s16 type; // used for verification of structure
-- s16 size;
-+ s16 type; // used for verification of structure
-+ s16 size;
- u32 unique; // unique value representing this context
- ulong jiffies; // used for cleanup - dmb changed to ulong
- struct list_head next; // used to link context's into a linked list
-- struct semaphore wait_sem; // this is used to wait for the next fib to arrive.
-+ struct semaphore wait_sem; // this is used to wait for the next fib to arrive.
- int wait; // Set to true when thread is in WaitForSingleObject
- unsigned long count; // total number of FIBs on FibList
- struct list_head fib_list; // this holds fibs and their attachd hw_fibs
-@@ -734,9 +736,9 @@ struct sense_data {
- u8 EOM:1; /* End Of Medium - reserved for random access devices */
- u8 filemark:1; /* Filemark - reserved for random access devices */
+- conn->scsicmd_pdus_cnt++;
++ /* calculate size of additional header segments (AHSs) */
++ hdrlength = ctask->hdr_len - sizeof(*hdr);
++
++ WARN_ON(hdrlength & (ISCSI_PAD_LEN-1));
++ hdrlength /= ISCSI_PAD_LEN;
++
++ WARN_ON(hdrlength >= 256);
++ hdr->hlength = hdrlength & 0xFF;
++
++ if (conn->session->tt->init_cmd_task(conn->ctask))
++ return EIO;
-- u8 information[4]; /* for direct-access devices, contains the unsigned
-- * logical block address or residue associated with
-- * the sense key
-+ u8 information[4]; /* for direct-access devices, contains the unsigned
-+ * logical block address or residue associated with
-+ * the sense key
- */
- u8 add_sense_len; /* number of additional sense bytes to follow this field */
- u8 cmnd_info[4]; /* not used */
-@@ -746,7 +748,7 @@ struct sense_data {
- u8 bit_ptr:3; /* indicates which byte of the CDB or parameter data
- * was in error
- */
-- u8 BPV:1; /* bit pointer valid (BPV): 1- indicates that
-+ u8 BPV:1; /* bit pointer valid (BPV): 1- indicates that
- * the bit_ptr field has valid value
- */
- u8 reserved2:2;
-@@ -780,24 +782,24 @@ struct fib {
- /*
- * The Adapter that this I/O is destined for.
- */
-- struct aac_dev *dev;
-+ struct aac_dev *dev;
- /*
- * This is the event the sendfib routine will wait on if the
- * caller did not pass one and this is synch io.
- */
-- struct semaphore event_wait;
-+ struct semaphore event_wait;
- spinlock_t event_lock;
+- debug_scsi("iscsi prep [%s cid %d sc %p cdb 0x%x itt 0x%x len %d "
++ conn->scsicmd_pdus_cnt++;
++ debug_scsi("iscsi prep [%s cid %d sc %p cdb 0x%x itt 0x%x len %d "
+ "cmdsn %d win %d]\n",
+- sc->sc_data_direction == DMA_TO_DEVICE ? "write" : "read",
++ sc->sc_data_direction == DMA_TO_DEVICE ? "write" : "read",
+ conn->id, sc, sc->cmnd[0], ctask->itt, scsi_bufflen(sc),
+- session->cmdsn, session->max_cmdsn - session->exp_cmdsn + 1);
++ session->cmdsn, session->max_cmdsn - session->exp_cmdsn + 1);
++ return 0;
+ }
- u32 done; /* gets set to 1 when fib is complete */
-- fib_callback callback;
-- void *callback_data;
-+ fib_callback callback;
-+ void *callback_data;
- u32 flags; // u32 dmb was ulong
- /*
- * And for the internal issue/reply queues (we may be able
- * to merge these two)
- */
- struct list_head fiblink;
-- void *data;
-+ void *data;
- struct hw_fib *hw_fib_va; /* Actual shared object */
- dma_addr_t hw_fib_pa; /* physical address of hw_fib*/
- };
-@@ -807,7 +809,7 @@ struct fib {
- *
- * This is returned by the RequestAdapterInfo block
+ /**
+@@ -218,13 +250,16 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
*/
--
-+
- struct aac_adapter_info
+ static void iscsi_complete_command(struct iscsi_cmd_task *ctask)
{
- __le32 platform;
-@@ -826,7 +828,7 @@ struct aac_adapter_info
- __le32 biosrev;
- __le32 biosbuild;
- __le32 cluster;
-- __le32 clusterchannelmask;
-+ __le32 clusterchannelmask;
- __le32 serial[2];
- __le32 battery;
- __le32 options;
-@@ -863,9 +865,10 @@ struct aac_supplement_adapter_info
- __le32 SupportedOptions2;
- __le32 ReservedGrowth[1];
- };
--#define AAC_FEATURE_FALCON 0x00000010
--#define AAC_OPTION_MU_RESET 0x00000001
--#define AAC_OPTION_IGNORE_RESET 0x00000002
-+#define AAC_FEATURE_FALCON cpu_to_le32(0x00000010)
-+#define AAC_FEATURE_JBOD cpu_to_le32(0x08000000)
-+#define AAC_OPTION_MU_RESET cpu_to_le32(0x00000001)
-+#define AAC_OPTION_IGNORE_RESET cpu_to_le32(0x00000002)
- #define AAC_SIS_VERSION_V3 3
- #define AAC_SIS_SLOT_UNKNOWN 0xFF
-
-@@ -916,13 +919,13 @@ struct aac_bus_info_response {
- #define AAC_OPT_HOST_TIME_FIB cpu_to_le32(1<<4)
- #define AAC_OPT_RAID50 cpu_to_le32(1<<5)
- #define AAC_OPT_4GB_WINDOW cpu_to_le32(1<<6)
--#define AAC_OPT_SCSI_UPGRADEABLE cpu_to_le32(1<<7)
-+#define AAC_OPT_SCSI_UPGRADEABLE cpu_to_le32(1<<7)
- #define AAC_OPT_SOFT_ERR_REPORT cpu_to_le32(1<<8)
--#define AAC_OPT_SUPPORTED_RECONDITION cpu_to_le32(1<<9)
-+#define AAC_OPT_SUPPORTED_RECONDITION cpu_to_le32(1<<9)
- #define AAC_OPT_SGMAP_HOST64 cpu_to_le32(1<<10)
- #define AAC_OPT_ALARM cpu_to_le32(1<<11)
- #define AAC_OPT_NONDASD cpu_to_le32(1<<12)
--#define AAC_OPT_SCSI_MANAGED cpu_to_le32(1<<13)
-+#define AAC_OPT_SCSI_MANAGED cpu_to_le32(1<<13)
- #define AAC_OPT_RAID_SCSI_MODE cpu_to_le32(1<<14)
- #define AAC_OPT_SUPPLEMENT_ADAPTER_INFO cpu_to_le32(1<<16)
- #define AAC_OPT_NEW_COMM cpu_to_le32(1<<17)
-@@ -942,7 +945,7 @@ struct aac_dev
+- struct iscsi_session *session = ctask->conn->session;
++ struct iscsi_conn *conn = ctask->conn;
++ struct iscsi_session *session = conn->session;
+ struct scsi_cmnd *sc = ctask->sc;
- /*
- * Map for 128 fib objects (64k)
-- */
-+ */
- dma_addr_t hw_fib_pa;
- struct hw_fib *hw_fib_va;
- struct hw_fib *aif_base_va;
-@@ -953,24 +956,24 @@ struct aac_dev
+ ctask->state = ISCSI_TASK_COMPLETED;
+ ctask->sc = NULL;
+ /* SCSI eh reuses commands to verify us */
+ sc->SCp.ptr = NULL;
++ if (conn->ctask == ctask)
++ conn->ctask = NULL;
+ list_del_init(&ctask->running);
+ __kfifo_put(session->cmdpool.queue, (void*)&ctask, sizeof(void*));
+ sc->scsi_done(sc);
+@@ -241,6 +276,112 @@ static void __iscsi_put_ctask(struct iscsi_cmd_task *ctask)
+ iscsi_complete_command(ctask);
+ }
- struct fib *free_fib;
- spinlock_t fib_lock;
--
++/*
++ * session lock must be held
++ */
++static void fail_command(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
++ int err)
++{
++ struct scsi_cmnd *sc;
+
- struct aac_queue_block *queues;
- /*
- * The user API will use an IOCTL to register itself to receive
- * FIBs from the adapter. The following list is used to keep
- * track of all the threads that have requested these FIBs. The
-- * mutex is used to synchronize access to all data associated
-+ * mutex is used to synchronize access to all data associated
- * with the adapter fibs.
- */
- struct list_head fib_list;
-
- struct adapter_ops a_ops;
- unsigned long fsrev; /* Main driver's revision number */
--
++ sc = ctask->sc;
++ if (!sc)
++ return;
+
- unsigned base_size; /* Size of mapped in region */
- struct aac_init *init; /* Holds initialization info to communicate with adapter */
-- dma_addr_t init_pa; /* Holds physical address of the init struct */
--
-+ dma_addr_t init_pa; /* Holds physical address of the init struct */
++ if (ctask->state == ISCSI_TASK_PENDING)
++ /*
++ * cmd never made it to the xmit thread, so we should not count
++ * the cmd in the sequencing
++ */
++ conn->session->queued_cmdsn--;
++ else
++ conn->session->tt->cleanup_cmd_task(conn, ctask);
+
- struct pci_dev *pdev; /* Our PCI interface */
- void * printfbuf; /* pointer to buffer used for printf's from the adapter */
- void * comm_addr; /* Base address of Comm area */
-@@ -984,11 +987,11 @@ struct aac_dev
- struct fsa_dev_info *fsa_dev;
- struct task_struct *thread;
- int cardtype;
--
++ sc->result = err;
++ scsi_set_resid(sc, scsi_bufflen(sc));
++ if (conn->ctask == ctask)
++ conn->ctask = NULL;
++ /* release ref from queuecommand */
++ __iscsi_put_ctask(ctask);
++}
+
- /*
- * The following is the device specific extension.
- */
--#if (!defined(AAC_MIN_FOOTPRINT_SIZE))
-+#ifndef AAC_MIN_FOOTPRINT_SIZE
- # define AAC_MIN_FOOTPRINT_SIZE 8192
- #endif
- union
-@@ -1009,7 +1012,9 @@ struct aac_dev
- /* These are in adapter info but they are in the io flow so
- * lets break them out so we don't have to do an AND to check them
- */
-- u8 nondasd_support;
-+ u8 nondasd_support;
-+ u8 jbod;
-+ u8 cache_protected;
- u8 dac_support;
- u8 raid_scsi_mode;
- u8 comm_interface;
-@@ -1066,18 +1071,19 @@ struct aac_dev
- (dev)->a_ops.adapter_comm(dev, comm)
++/**
++ * iscsi_free_mgmt_task - return mgmt task back to pool
++ * @conn: iscsi connection
++ * @mtask: mtask
++ *
++ * Must be called with session lock.
++ */
++void iscsi_free_mgmt_task(struct iscsi_conn *conn,
++ struct iscsi_mgmt_task *mtask)
++{
++ list_del_init(&mtask->running);
++ if (conn->login_mtask == mtask)
++ return;
++
++ if (conn->ping_mtask == mtask)
++ conn->ping_mtask = NULL;
++ __kfifo_put(conn->session->mgmtpool.queue,
++ (void*)&mtask, sizeof(void*));
++}
++EXPORT_SYMBOL_GPL(iscsi_free_mgmt_task);
++
++static struct iscsi_mgmt_task *
++__iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
++ char *data, uint32_t data_size)
++{
++ struct iscsi_session *session = conn->session;
++ struct iscsi_mgmt_task *mtask;
++
++ if (session->state == ISCSI_STATE_TERMINATE)
++ return NULL;
++
++ if (hdr->opcode == (ISCSI_OP_LOGIN | ISCSI_OP_IMMEDIATE) ||
++ hdr->opcode == (ISCSI_OP_TEXT | ISCSI_OP_IMMEDIATE))
++ /*
++ * Login and Text are sent serially, in
++ * request-followed-by-response sequence.
++ * Same mtask can be used. Same ITT must be used.
++ * Note that login_mtask is preallocated at conn_create().
++ */
++ mtask = conn->login_mtask;
++ else {
++ BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE);
++ BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
++
++ if (!__kfifo_get(session->mgmtpool.queue,
++ (void*)&mtask, sizeof(void*)))
++ return NULL;
++ }
++
++ if (data_size) {
++ memcpy(mtask->data, data, data_size);
++ mtask->data_count = data_size;
++ } else
++ mtask->data_count = 0;
++
++ memcpy(mtask->hdr, hdr, sizeof(struct iscsi_hdr));
++ INIT_LIST_HEAD(&mtask->running);
++ list_add_tail(&mtask->running, &conn->mgmtqueue);
++ return mtask;
++}
++
++int iscsi_conn_send_pdu(struct iscsi_cls_conn *cls_conn, struct iscsi_hdr *hdr,
++ char *data, uint32_t data_size)
++{
++ struct iscsi_conn *conn = cls_conn->dd_data;
++ struct iscsi_session *session = conn->session;
++ int err = 0;
++
++ spin_lock_bh(&session->lock);
++ if (!__iscsi_conn_send_pdu(conn, hdr, data, data_size))
++ err = -EPERM;
++ spin_unlock_bh(&session->lock);
++ scsi_queue_work(session->host, &conn->xmitwork);
++ return err;
++}
++EXPORT_SYMBOL_GPL(iscsi_conn_send_pdu);
++
+ /**
+ * iscsi_cmd_rsp - SCSI Command Response processing
+ * @conn: iscsi connection
+@@ -291,17 +432,19 @@ invalid_datalen:
+ min_t(uint16_t, senselen, SCSI_SENSE_BUFFERSIZE));
+ }
- #define FIB_CONTEXT_FLAG_TIMED_OUT (0x00000001)
-+#define FIB_CONTEXT_FLAG (0x00000002)
+- if (rhdr->flags & ISCSI_FLAG_CMD_UNDERFLOW) {
++ if (rhdr->flags & (ISCSI_FLAG_CMD_UNDERFLOW |
++ ISCSI_FLAG_CMD_OVERFLOW)) {
+ int res_count = be32_to_cpu(rhdr->residual_count);
- /*
- * Define the command values
- */
--
-+
- #define Null 0
--#define GetAttributes 1
--#define SetAttributes 2
--#define Lookup 3
--#define ReadLink 4
--#define Read 5
--#define Write 6
-+#define GetAttributes 1
-+#define SetAttributes 2
-+#define Lookup 3
-+#define ReadLink 4
-+#define Read 5
-+#define Write 6
- #define Create 7
- #define MakeDirectory 8
- #define SymbolicLink 9
-@@ -1173,19 +1179,19 @@ struct aac_dev
+- if (res_count > 0 && res_count <= scsi_bufflen(sc))
++ if (res_count > 0 &&
++ (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW ||
++ res_count <= scsi_bufflen(sc)))
+ scsi_set_resid(sc, res_count);
+ else
+ sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
+- } else if (rhdr->flags & ISCSI_FLAG_CMD_BIDI_UNDERFLOW)
++ } else if (rhdr->flags & (ISCSI_FLAG_CMD_BIDI_UNDERFLOW |
++ ISCSI_FLAG_CMD_BIDI_OVERFLOW))
+ sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
+- else if (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW)
+- scsi_set_resid(sc, be32_to_cpu(rhdr->residual_count));
- struct aac_read
- {
-- __le32 command;
-- __le32 cid;
-- __le32 block;
-- __le32 count;
-+ __le32 command;
-+ __le32 cid;
-+ __le32 block;
-+ __le32 count;
- struct sgmap sg; // Must be last in struct because it is variable
- };
+ out:
+ debug_scsi("done [sc %lx res %d itt 0x%x]\n",
+@@ -318,18 +461,51 @@ static void iscsi_tmf_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
+ conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
+ conn->tmfrsp_pdus_cnt++;
- struct aac_read64
- {
-- __le32 command;
-- __le16 cid;
-- __le16 sector_count;
-- __le32 block;
-+ __le32 command;
-+ __le16 cid;
-+ __le16 sector_count;
-+ __le32 block;
- __le16 pad;
- __le16 flags;
- struct sgmap64 sg; // Must be last in struct because it is variable
-@@ -1193,26 +1199,26 @@ struct aac_read64
+- if (conn->tmabort_state != TMABORT_INITIAL)
++ if (conn->tmf_state != TMF_QUEUED)
+ return;
- struct aac_read_reply
- {
-- __le32 status;
-- __le32 count;
-+ __le32 status;
-+ __le32 count;
- };
+ if (tmf->response == ISCSI_TMF_RSP_COMPLETE)
+- conn->tmabort_state = TMABORT_SUCCESS;
++ conn->tmf_state = TMF_SUCCESS;
+ else if (tmf->response == ISCSI_TMF_RSP_NO_TASK)
+- conn->tmabort_state = TMABORT_NOT_FOUND;
++ conn->tmf_state = TMF_NOT_FOUND;
+ else
+- conn->tmabort_state = TMABORT_FAILED;
++ conn->tmf_state = TMF_FAILED;
+ wake_up(&conn->ehwait);
+ }
- struct aac_write
++static void iscsi_send_nopout(struct iscsi_conn *conn, struct iscsi_nopin *rhdr)
++{
++ struct iscsi_nopout hdr;
++ struct iscsi_mgmt_task *mtask;
++
++ if (!rhdr && conn->ping_mtask)
++ return;
++
++ memset(&hdr, 0, sizeof(struct iscsi_nopout));
++ hdr.opcode = ISCSI_OP_NOOP_OUT | ISCSI_OP_IMMEDIATE;
++ hdr.flags = ISCSI_FLAG_CMD_FINAL;
++
++ if (rhdr) {
++ memcpy(hdr.lun, rhdr->lun, 8);
++ hdr.ttt = rhdr->ttt;
++ hdr.itt = RESERVED_ITT;
++ } else
++ hdr.ttt = RESERVED_ITT;
++
++ mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)&hdr, NULL, 0);
++ if (!mtask) {
++ printk(KERN_ERR "Could not send nopout\n");
++ return;
++ }
++
++ /* only track our nops */
++ if (!rhdr) {
++ conn->ping_mtask = mtask;
++ conn->last_ping = jiffies;
++ }
++ scsi_queue_work(conn->session->host, &conn->xmitwork);
++}
++
+ static int iscsi_handle_reject(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ char *data, int datalen)
{
- __le32 command;
-- __le32 cid;
-- __le32 block;
-- __le32 count;
-- __le32 stable; // Not used
-+ __le32 cid;
-+ __le32 block;
-+ __le32 count;
-+ __le32 stable; // Not used
- struct sgmap sg; // Must be last in struct because it is variable
- };
+@@ -374,6 +550,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ struct iscsi_mgmt_task *mtask;
+ uint32_t itt;
- struct aac_write64
- {
-- __le32 command;
-- __le16 cid;
-- __le16 sector_count;
-- __le32 block;
-+ __le32 command;
-+ __le16 cid;
-+ __le16 sector_count;
-+ __le32 block;
- __le16 pad;
- __le16 flags;
- #define IO_TYPE_WRITE 0x00000000
-@@ -1223,7 +1229,7 @@ struct aac_write64
- struct aac_write_reply
- {
- __le32 status;
-- __le32 count;
-+ __le32 count;
- __le32 committed;
- };
++ conn->last_recv = jiffies;
+ if (hdr->itt != RESERVED_ITT)
+ itt = get_itt(hdr->itt);
+ else
+@@ -429,10 +606,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ */
+ if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen))
+ rc = ISCSI_ERR_CONN_FAILED;
+- list_del(&mtask->running);
+- if (conn->login_mtask != mtask)
+- __kfifo_put(session->mgmtpool.queue,
+- (void*)&mtask, sizeof(void*));
++ iscsi_free_mgmt_task(conn, mtask);
+ break;
+ case ISCSI_OP_SCSI_TMFUNC_RSP:
+ if (datalen) {
+@@ -441,20 +615,26 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ }
-@@ -1326,10 +1332,10 @@ struct aac_srb_reply
- #define SRB_NoDataXfer 0x0000
- #define SRB_DisableDisconnect 0x0004
- #define SRB_DisableSynchTransfer 0x0008
--#define SRB_BypassFrozenQueue 0x0010
-+#define SRB_BypassFrozenQueue 0x0010
- #define SRB_DisableAutosense 0x0020
- #define SRB_DataIn 0x0040
--#define SRB_DataOut 0x0080
-+#define SRB_DataOut 0x0080
+ iscsi_tmf_rsp(conn, hdr);
++ iscsi_free_mgmt_task(conn, mtask);
+ break;
+ case ISCSI_OP_NOOP_IN:
+- if (hdr->ttt != cpu_to_be32(ISCSI_RESERVED_TAG) || datalen) {
++ if (hdr->ttt != cpu_to_be32(ISCSI_RESERVED_TAG) ||
++ datalen) {
+ rc = ISCSI_ERR_PROTO;
+ break;
+ }
+ conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
- /*
- * SRB Functions - set in aac_srb->function
-@@ -1352,7 +1358,7 @@ struct aac_srb_reply
- #define SRBF_RemoveDevice 0x0016
- #define SRBF_DomainValidation 0x0017
+- if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen))
+- rc = ISCSI_ERR_CONN_FAILED;
+- list_del(&mtask->running);
+- if (conn->login_mtask != mtask)
+- __kfifo_put(session->mgmtpool.queue,
+- (void*)&mtask, sizeof(void*));
++ if (conn->ping_mtask != mtask) {
++ /*
++ * If this is not in response to one of our
++ * nops then it must be from userspace.
++ */
++ if (iscsi_recv_pdu(conn->cls_conn, hdr, data,
++ datalen))
++ rc = ISCSI_ERR_CONN_FAILED;
++ }
++ iscsi_free_mgmt_task(conn, mtask);
+ break;
+ default:
+ rc = ISCSI_ERR_BAD_OPCODE;
+@@ -473,8 +653,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ if (hdr->ttt == cpu_to_be32(ISCSI_RESERVED_TAG))
+ break;
--/*
-+/*
- * SRB SCSI Status - set in aac_srb->scsi_status
- */
- #define SRB_STATUS_PENDING 0x00
-@@ -1511,17 +1517,17 @@ struct aac_get_container_count_resp {
- */
+- if (iscsi_recv_pdu(conn->cls_conn, hdr, NULL, 0))
+- rc = ISCSI_ERR_CONN_FAILED;
++ iscsi_send_nopout(conn, (struct iscsi_nopin*)hdr);
+ break;
+ case ISCSI_OP_REJECT:
+ rc = iscsi_handle_reject(conn, hdr, data, datalen);
+@@ -609,20 +788,19 @@ static void iscsi_prep_mtask(struct iscsi_conn *conn,
+ session->tt->init_mgmt_task(conn, mtask);
- struct aac_mntent {
-- __le32 oid;
-+ __le32 oid;
- u8 name[16]; /* if applicable */
- struct creation_info create_info; /* if applicable */
- __le32 capacity;
-- __le32 vol; /* substrate structure */
-- __le32 obj; /* FT_FILESYS, etc. */
-- __le32 state; /* unready for mounting,
-+ __le32 vol; /* substrate structure */
-+ __le32 obj; /* FT_FILESYS, etc. */
-+ __le32 state; /* unready for mounting,
- readonly, etc. */
-- union aac_contentinfo fileinfo; /* Info specific to content
-+ union aac_contentinfo fileinfo; /* Info specific to content
- manager (eg, filesystem) */
-- __le32 altoid; /* != oid <==> snapshot or
-+ __le32 altoid; /* != oid <==> snapshot or
- broken mirror exists */
- __le32 capacityhigh;
- };
-@@ -1538,7 +1544,7 @@ struct aac_query_mount {
+ debug_scsi("mgmtpdu [op 0x%x hdr->itt 0x%x datalen %d]\n",
+- hdr->opcode, hdr->itt, mtask->data_count);
++ hdr->opcode & ISCSI_OPCODE_MASK, hdr->itt,
++ mtask->data_count);
+ }
- struct aac_mount {
- __le32 status;
-- __le32 type; /* should be same as that requested */
-+ __le32 type; /* should be same as that requested */
- __le32 count;
- struct aac_mntent mnt[1];
- };
-@@ -1608,7 +1614,7 @@ struct aac_delete_disk {
- u32 disknum;
- u32 cnum;
- };
--
-+
- struct fib_ioctl
+ static int iscsi_xmit_mtask(struct iscsi_conn *conn)
{
- u32 fibctx;
-@@ -1622,10 +1628,10 @@ struct revision
- __le32 version;
- __le32 build;
- };
--
-+
-
- /*
-- * Ugly - non Linux like ioctl coding for back compat.
-+ * Ugly - non Linux like ioctl coding for back compat.
- */
-
- #define CTL_CODE(function, method) ( \
-@@ -1633,7 +1639,7 @@ struct revision
- )
+ struct iscsi_hdr *hdr = conn->mtask->hdr;
+- int rc, was_logout = 0;
++ int rc;
- /*
-- * Define the method codes for how buffers are passed for I/O and FS
-+ * Define the method codes for how buffers are passed for I/O and FS
- * controls
- */
++ if ((hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGOUT)
++ conn->session->state = ISCSI_STATE_LOGGING_OUT;
+ spin_unlock_bh(&conn->session->lock);
+- if ((hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGOUT) {
+- conn->session->state = ISCSI_STATE_IN_RECOVERY;
+- iscsi_block_session(session_to_cls(conn->session));
+- was_logout = 1;
+- }
++
+ rc = conn->session->tt->xmit_mgmt_task(conn, conn->mtask);
+ spin_lock_bh(&conn->session->lock);
+ if (rc)
+@@ -630,11 +808,6 @@ static int iscsi_xmit_mtask(struct iscsi_conn *conn)
-@@ -1644,15 +1650,15 @@ struct revision
- * Filesystem ioctls
- */
+ /* done with this in-progress mtask */
+ conn->mtask = NULL;
+-
+- if (was_logout) {
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+- return -ENODATA;
+- }
+ return 0;
+ }
--#define FSACTL_SENDFIB CTL_CODE(2050, METHOD_BUFFERED)
--#define FSACTL_SEND_RAW_SRB CTL_CODE(2067, METHOD_BUFFERED)
-+#define FSACTL_SENDFIB CTL_CODE(2050, METHOD_BUFFERED)
-+#define FSACTL_SEND_RAW_SRB CTL_CODE(2067, METHOD_BUFFERED)
- #define FSACTL_DELETE_DISK 0x163
- #define FSACTL_QUERY_DISK 0x173
- #define FSACTL_OPEN_GET_ADAPTER_FIB CTL_CODE(2100, METHOD_BUFFERED)
- #define FSACTL_GET_NEXT_ADAPTER_FIB CTL_CODE(2101, METHOD_BUFFERED)
- #define FSACTL_CLOSE_GET_ADAPTER_FIB CTL_CODE(2102, METHOD_BUFFERED)
- #define FSACTL_MINIPORT_REV_CHECK CTL_CODE(2107, METHOD_BUFFERED)
--#define FSACTL_GET_PCI_INFO CTL_CODE(2119, METHOD_BUFFERED)
-+#define FSACTL_GET_PCI_INFO CTL_CODE(2119, METHOD_BUFFERED)
- #define FSACTL_FORCE_DELETE_DISK CTL_CODE(2120, METHOD_NEITHER)
- #define FSACTL_GET_CONTAINERS 2131
- #define FSACTL_SEND_LARGE_FIB CTL_CODE(2138, METHOD_BUFFERED)
-@@ -1661,7 +1667,7 @@ struct revision
- struct aac_common
+@@ -658,21 +831,13 @@ static int iscsi_check_cmdsn_window_closed(struct iscsi_conn *conn)
+ static int iscsi_xmit_ctask(struct iscsi_conn *conn)
{
- /*
-- * If this value is set to 1 then interrupt moderation will occur
-+ * If this value is set to 1 then interrupt moderation will occur
- * in the base communication support.
- */
- u32 irq_mod;
-@@ -1690,11 +1696,11 @@ extern struct aac_common aac_config;
- * The following macro is used when sending and receiving FIBs. It is
- * only used for debugging.
- */
--
-+
- #ifdef DBG
- #define FIB_COUNTER_INCREMENT(counter) (counter)++
- #else
--#define FIB_COUNTER_INCREMENT(counter)
-+#define FIB_COUNTER_INCREMENT(counter)
- #endif
+ struct iscsi_cmd_task *ctask = conn->ctask;
+- int rc = 0;
+-
+- /*
+- * serialize with TMF AbortTask
+- */
+- if (ctask->state == ISCSI_TASK_ABORTING)
+- goto done;
++ int rc;
- /*
-@@ -1726,17 +1732,17 @@ extern struct aac_common aac_config;
- *
- * The adapter reports is present state through the phase. Only
- * a single phase should be ever be set. Each phase can have multiple
-- * phase status bits to provide more detailed information about the
-- * state of the board. Care should be taken to ensure that any phase
-+ * phase status bits to provide more detailed information about the
-+ * state of the board. Care should be taken to ensure that any phase
- * status bits that are set when changing the phase are also valid
- * for the new phase or be cleared out. Adapter software (monitor,
-- * iflash, kernel) is responsible for properly maintaining the phase
-+ * iflash, kernel) is responsible for properly maintaining the phase
- * status mailbox when it is running.
-- *
-- * MONKER_API Phases
- *
-- * Phases are bit oriented. It is NOT valid to have multiple bits set
-- */
-+ * MONKER_API Phases
+ __iscsi_get_ctask(ctask);
+ spin_unlock_bh(&conn->session->lock);
+ rc = conn->session->tt->xmit_cmd_task(conn, ctask);
+ spin_lock_bh(&conn->session->lock);
+ __iscsi_put_ctask(ctask);
+-
+-done:
+ if (!rc)
+ /* done with this ctask */
+ conn->ctask = NULL;
+@@ -680,6 +845,22 @@ done:
+ }
+
+ /**
++ * iscsi_requeue_ctask - requeue ctask to run from session workqueue
++ * @ctask: ctask to requeue
+ *
-+ * Phases are bit oriented. It is NOT valid to have multiple bits set
++ * LLDs that need to run a ctask from the session workqueue should call
++ * this. The session lock must be held.
+ */
-
- #define SELF_TEST_FAILED 0x00000004
- #define MONITOR_PANIC 0x00000020
-@@ -1759,16 +1765,22 @@ extern struct aac_common aac_config;
- * For FIB communication, we need all of the following things
- * to send back to the user.
- */
--
--#define AifCmdEventNotify 1 /* Notify of event */
++void iscsi_requeue_ctask(struct iscsi_cmd_task *ctask)
++{
++ struct iscsi_conn *conn = ctask->conn;
+
-+#define AifCmdEventNotify 1 /* Notify of event */
- #define AifEnConfigChange 3 /* Adapter configuration change */
- #define AifEnContainerChange 4 /* Container configuration change */
- #define AifEnDeviceFailure 5 /* SCSI device failed */
-+#define AifEnEnclosureManagement 13 /* EM_DRIVE_* */
-+#define EM_DRIVE_INSERTION 31
-+#define EM_DRIVE_REMOVAL 32
-+#define AifEnBatteryEvent 14 /* Change in Battery State */
- #define AifEnAddContainer 15 /* A new array was created */
- #define AifEnDeleteContainer 16 /* A container was deleted */
- #define AifEnExpEvent 23 /* Firmware Event Log */
- #define AifExeFirmwarePanic 3 /* Firmware Event Panic */
- #define AifHighPriority 3 /* Highest Priority Event */
-+#define AifEnAddJBOD 30 /* JBOD created */
-+#define AifEnDeleteJBOD 31 /* JBOD deleted */
-
- #define AifCmdJobProgress 2 /* Progress report */
- #define AifJobCtrZero 101 /* Array Zero progress */
-@@ -1780,11 +1792,11 @@ extern struct aac_common aac_config;
- #define AifDenVolumeExtendComplete 201 /* A volume extend completed */
- #define AifReqJobList 100 /* Gets back complete job list */
- #define AifReqJobsForCtr 101 /* Gets back jobs for specific container */
--#define AifReqJobsForScsi 102 /* Gets back jobs for specific SCSI device */
--#define AifReqJobReport 103 /* Gets back a specific job report or list of them */
-+#define AifReqJobsForScsi 102 /* Gets back jobs for specific SCSI device */
-+#define AifReqJobReport 103 /* Gets back a specific job report or list of them */
- #define AifReqTerminateJob 104 /* Terminates job */
- #define AifReqSuspendJob 105 /* Suspends a job */
--#define AifReqResumeJob 106 /* Resumes a job */
-+#define AifReqResumeJob 106 /* Resumes a job */
- #define AifReqSendAPIReport 107 /* API generic report requests */
- #define AifReqAPIJobStart 108 /* Start a job from the API */
- #define AifReqAPIJobUpdate 109 /* Update a job report from the API */
-@@ -1803,8 +1815,8 @@ struct aac_aifcmd {
- };
-
- /**
-- * Convert capacity to cylinders
-- * accounting for the fact capacity could be a 64 bit value
-+ * Convert capacity to cylinders
-+ * accounting for the fact capacity could be a 64 bit value
++ list_move_tail(&ctask->running, &conn->requeue);
++ scsi_queue_work(conn->session->host, &conn->xmitwork);
++}
++EXPORT_SYMBOL_GPL(iscsi_requeue_ctask);
++
++/**
+ * iscsi_data_xmit - xmit any command into the scheduled connection
+ * @conn: iscsi connection
*
- */
- static inline unsigned int cap_to_cyls(sector_t capacity, unsigned divisor)
-@@ -1861,6 +1873,7 @@ int aac_probe_container(struct aac_dev *dev, int cid);
- int _aac_rx_init(struct aac_dev *dev);
- int aac_rx_select_comm(struct aac_dev *dev, int comm);
- int aac_rx_deliver_producer(struct fib * fib);
-+char * get_container_type(unsigned type);
- extern int numacb;
- extern int acbsize;
- extern char aac_driver_version[];
-diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c
-index 1e6d7a9..851a7e5 100644
---- a/drivers/scsi/aacraid/commctrl.c
-+++ b/drivers/scsi/aacraid/commctrl.c
-@@ -48,13 +48,13 @@
- * ioctl_send_fib - send a FIB from userspace
- * @dev: adapter is being processed
- * @arg: arguments to the ioctl call
-- *
-+ *
- * This routine sends a fib to the adapter on behalf of a user level
- * program.
- */
- # define AAC_DEBUG_PREAMBLE KERN_INFO
- # define AAC_DEBUG_POSTAMBLE
--
+@@ -717,36 +898,40 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ * overflow us with nop-ins
+ */
+ check_mgmt:
+- while (__kfifo_get(conn->mgmtqueue, (void*)&conn->mtask,
+- sizeof(void*))) {
++ while (!list_empty(&conn->mgmtqueue)) {
++ conn->mtask = list_entry(conn->mgmtqueue.next,
++ struct iscsi_mgmt_task, running);
++ if (conn->session->state == ISCSI_STATE_LOGGING_OUT) {
++ iscsi_free_mgmt_task(conn, conn->mtask);
++ conn->mtask = NULL;
++ continue;
++ }
+
- static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
- {
- struct hw_fib * kfib;
-@@ -71,7 +71,7 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
- if(fibptr == NULL) {
- return -ENOMEM;
+ iscsi_prep_mtask(conn, conn->mtask);
+- list_add_tail(&conn->mtask->running, &conn->mgmt_run_list);
++ list_move_tail(conn->mgmtqueue.next, &conn->mgmt_run_list);
+ rc = iscsi_xmit_mtask(conn);
+ if (rc)
+ goto again;
}
--
+
+- /* process command queue */
++ /* process pending command queue */
+ while (!list_empty(&conn->xmitqueue)) {
+- /*
+- * iscsi tcp may readd the task to the xmitqueue to send
+- * write data
+- */
++ if (conn->tmf_state == TMF_QUEUED)
++ break;
+
- kfib = fibptr->hw_fib_va;
- /*
- * First copy in the header so that we can check the size field.
-@@ -109,7 +109,7 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
- if (kfib->header.Command == cpu_to_le16(TakeABreakPt)) {
- aac_adapter_interrupt(dev);
- /*
-- * Since we didn't really send a fib, zero out the state to allow
-+ * Since we didn't really send a fib, zero out the state to allow
- * cleanup code not to assert.
- */
- kfib->header.XferState = 0;
-@@ -169,7 +169,7 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
+ conn->ctask = list_entry(conn->xmitqueue.next,
+ struct iscsi_cmd_task, running);
+- switch (conn->ctask->state) {
+- case ISCSI_TASK_ABORTING:
+- break;
+- case ISCSI_TASK_PENDING:
+- iscsi_prep_scsi_cmd_pdu(conn->ctask);
+- conn->session->tt->init_cmd_task(conn->ctask);
+- /* fall through */
+- default:
+- conn->ctask->state = ISCSI_TASK_RUNNING;
+- break;
++ if (conn->session->state == ISCSI_STATE_LOGGING_OUT) {
++ fail_command(conn, conn->ctask, DID_IMM_RETRY << 16);
++ continue;
++ }
++ if (iscsi_prep_scsi_cmd_pdu(conn->ctask)) {
++ fail_command(conn, conn->ctask, DID_ABORT << 16);
++ continue;
+ }
+- list_move_tail(conn->xmitqueue.next, &conn->run_list);
- fibctx->type = FSAFS_NTC_GET_ADAPTER_FIB_CONTEXT;
- fibctx->size = sizeof(struct aac_fib_context);
-- /*
-+ /*
- * Yes yes, I know this could be an index, but we have a
- * better guarantee of uniqueness for the locked loop below.
- * Without the aid of a persistent history, this also helps
-@@ -189,7 +189,7 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
- INIT_LIST_HEAD(&fibctx->fib_list);
- fibctx->jiffies = jiffies/HZ;
- /*
-- * Now add this context onto the adapter's
-+ * Now add this context onto the adapter's
- * AdapterFibContext list.
++ conn->ctask->state = ISCSI_TASK_RUNNING;
++ list_move_tail(conn->xmitqueue.next, &conn->run_list);
+ rc = iscsi_xmit_ctask(conn);
+ if (rc)
+ goto again;
+@@ -755,7 +940,28 @@ check_mgmt:
+ * we need to check the mgmt queue for nops that need to
+ * be sent to avoid starvation
*/
- spin_lock_irqsave(&dev->fib_lock, flags);
-@@ -207,12 +207,12 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
- }
- list_add_tail(&fibctx->next, &dev->fib_list);
- spin_unlock_irqrestore(&dev->fib_lock, flags);
-- if (copy_to_user(arg, &fibctx->unique,
-+ if (copy_to_user(arg, &fibctx->unique,
- sizeof(fibctx->unique))) {
- status = -EFAULT;
- } else {
- status = 0;
-- }
-+ }
+- if (__kfifo_len(conn->mgmtqueue))
++ if (!list_empty(&conn->mgmtqueue))
++ goto check_mgmt;
++ }
++
++ while (!list_empty(&conn->requeue)) {
++ if (conn->session->fast_abort && conn->tmf_state != TMF_INITIAL)
++ break;
++
++ /*
++ * we always do fastlogout - conn stop code will clean up.
++ */
++ if (conn->session->state == ISCSI_STATE_LOGGING_OUT)
++ break;
++
++ conn->ctask = list_entry(conn->requeue.next,
++ struct iscsi_cmd_task, running);
++ conn->ctask->state = ISCSI_TASK_RUNNING;
++ list_move_tail(conn->requeue.next, &conn->run_list);
++ rc = iscsi_xmit_ctask(conn);
++ if (rc)
++ goto again;
++ if (!list_empty(&conn->mgmtqueue))
+ goto check_mgmt;
}
- return status;
- }
-@@ -221,8 +221,8 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
- * next_getadapter_fib - get the next fib
- * @dev: adapter to use
- * @arg: ioctl argument
-- *
-- * This routine will get the next Fib, if available, from the AdapterFibContext
-+ *
-+ * This routine will get the next Fib, if available, from the AdapterFibContext
- * passed in from the user.
- */
+ spin_unlock_bh(&conn->session->lock);
+@@ -790,6 +996,7 @@ enum {
+ FAILURE_SESSION_TERMINATE,
+ FAILURE_SESSION_IN_RECOVERY,
+ FAILURE_SESSION_RECOVERY_TIMEOUT,
++ FAILURE_SESSION_LOGGING_OUT,
+ };
+
+ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+@@ -805,8 +1012,9 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+ sc->SCp.ptr = NULL;
-@@ -234,7 +234,7 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
- int status;
- struct list_head * entry;
- unsigned long flags;
--
-+
- if(copy_from_user((void *)&f, arg, sizeof(struct fib_ioctl)))
- return -EFAULT;
- /*
-@@ -243,6 +243,7 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
- * Search the list of AdapterFibContext addresses on the adapter
- * to be sure this is a valid address
- */
-+ spin_lock_irqsave(&dev->fib_lock, flags);
- entry = dev->fib_list.next;
- fibctx = NULL;
+ host = sc->device->host;
+- session = iscsi_hostdata(host->hostdata);
++ spin_unlock(host->host_lock);
-@@ -251,37 +252,37 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
- /*
- * Extract the AdapterFibContext from the Input parameters.
- */
-- if (fibctx->unique == f.fibctx) { /* We found a winner */
-+ if (fibctx->unique == f.fibctx) { /* We found a winner */
- break;
- }
- entry = entry->next;
- fibctx = NULL;
- }
- if (!fibctx) {
-+ spin_unlock_irqrestore(&dev->fib_lock, flags);
- dprintk ((KERN_INFO "Fib Context not found\n"));
- return -EINVAL;
- }
++ session = iscsi_hostdata(host->hostdata);
+ spin_lock(&session->lock);
- if((fibctx->type != FSAFS_NTC_GET_ADAPTER_FIB_CONTEXT) ||
- (fibctx->size != sizeof(struct aac_fib_context))) {
-+ spin_unlock_irqrestore(&dev->fib_lock, flags);
- dprintk ((KERN_INFO "Fib Context corrupt?\n"));
- return -EINVAL;
- }
- status = 0;
-- spin_lock_irqsave(&dev->fib_lock, flags);
/*
- * If there are no fibs to send back, then either wait or return
- * -EAGAIN
- */
- return_fib:
- if (!list_empty(&fibctx->fib_list)) {
-- struct list_head * entry;
- /*
- * Pull the next fib from the fibs
- */
- entry = fibctx->fib_list.next;
- list_del(entry);
--
-+
- fib = list_entry(entry, struct fib, fiblink);
- fibctx->count--;
- spin_unlock_irqrestore(&dev->fib_lock, flags);
-@@ -289,7 +290,7 @@ return_fib:
- kfree(fib->hw_fib_va);
- kfree(fib);
- return -EFAULT;
-- }
-+ }
- /*
- * Free the space occupied by this copy of the fib.
+@@ -822,17 +1030,22 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+ * be entering our queuecommand while a block is starting
+ * up because the block code is not locked)
*/
-@@ -318,7 +319,7 @@ return_fib:
- }
- } else {
- status = -EAGAIN;
-- }
+- if (session->state == ISCSI_STATE_IN_RECOVERY) {
++ switch (session->state) {
++ case ISCSI_STATE_IN_RECOVERY:
+ reason = FAILURE_SESSION_IN_RECOVERY;
+ goto reject;
+- }
+-
+- if (session->state == ISCSI_STATE_RECOVERY_FAILED)
++ case ISCSI_STATE_LOGGING_OUT:
++ reason = FAILURE_SESSION_LOGGING_OUT;
++ goto reject;
++ case ISCSI_STATE_RECOVERY_FAILED:
+ reason = FAILURE_SESSION_RECOVERY_TIMEOUT;
+- else if (session->state == ISCSI_STATE_TERMINATE)
++ break;
++ case ISCSI_STATE_TERMINATE:
+ reason = FAILURE_SESSION_TERMINATE;
+- else
++ break;
++ default:
+ reason = FAILURE_SESSION_FREED;
+ }
+ goto fault;
}
- fibctx->jiffies = jiffies/HZ;
- return status;
-@@ -327,7 +328,9 @@ return_fib:
- int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
- {
- struct fib *fib;
-+ unsigned long flags;
-+ spin_lock_irqsave(&dev->fib_lock, flags);
- /*
- * First free any FIBs that have not been consumed.
- */
-@@ -350,6 +353,7 @@ int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
- * Remove the Context from the AdapterFibContext List
- */
- list_del(&fibctx->next);
-+ spin_unlock_irqrestore(&dev->fib_lock, flags);
- /*
- * Invalidate context
- */
-@@ -368,7 +372,7 @@ int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context * fibctx)
- *
- * This routine will close down the fibctx passed in from the user.
- */
--
-+
- static int close_getadapter_fib(struct aac_dev * dev, void __user *arg)
- {
- struct aac_fib_context *fibctx;
-@@ -415,8 +419,8 @@ static int close_getadapter_fib(struct aac_dev * dev, void __user *arg)
- * @arg: ioctl arguments
- *
- * This routine returns the driver version.
-- * Under Linux, there have been no version incompatibilities, so this is
-- * simple!
-+ * Under Linux, there have been no version incompatibilities, so this is
-+ * simple!
- */
+@@ -859,7 +1072,6 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
- static int check_revision(struct aac_dev *dev, void __user *arg)
-@@ -426,12 +430,12 @@ static int check_revision(struct aac_dev *dev, void __user *arg)
- u32 version;
+ atomic_set(&ctask->refcount, 1);
+ ctask->state = ISCSI_TASK_PENDING;
+- ctask->mtask = NULL;
+ ctask->conn = conn;
+ ctask->sc = sc;
+ INIT_LIST_HEAD(&ctask->running);
+@@ -868,11 +1080,13 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+ spin_unlock(&session->lock);
- response.compat = 1;
-- version = (simple_strtol(driver_version,
-+ version = (simple_strtol(driver_version,
- &driver_version, 10) << 24) | 0x00000400;
- version += simple_strtol(driver_version + 1, &driver_version, 10) << 16;
- version += simple_strtol(driver_version + 1, NULL, 10);
- response.version = cpu_to_le32(version);
--# if (defined(AAC_DRIVER_BUILD))
-+# ifdef AAC_DRIVER_BUILD
- response.build = cpu_to_le32(AAC_DRIVER_BUILD);
- # else
- response.build = cpu_to_le32(9999);
-@@ -464,7 +468,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- u32 data_dir;
- void __user *sg_user[32];
- void *sg_list[32];
-- u32 sg_indx = 0;
-+ u32 sg_indx = 0;
- u32 byte_count = 0;
- u32 actual_fibsize64, actual_fibsize = 0;
- int i;
-@@ -475,7 +479,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- return -EBUSY;
- }
- if (!capable(CAP_SYS_ADMIN)){
-- dprintk((KERN_DEBUG"aacraid: No permission to send raw srb\n"));
-+ dprintk((KERN_DEBUG"aacraid: No permission to send raw srb\n"));
- return -EPERM;
- }
- /*
-@@ -490,7 +494,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ scsi_queue_work(host, &conn->xmitwork);
++ spin_lock(host->host_lock);
+ return 0;
- memset(sg_list, 0, sizeof(sg_list)); /* cleanup may take issue */
- if(copy_from_user(&fibsize, &user_srb->count,sizeof(u32))){
-- dprintk((KERN_DEBUG"aacraid: Could not copy data size from user\n"));
-+ dprintk((KERN_DEBUG"aacraid: Could not copy data size from user\n"));
- rcode = -EFAULT;
- goto cleanup;
- }
-@@ -507,7 +511,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- goto cleanup;
+ reject:
+ spin_unlock(&session->lock);
+ debug_scsi("cmd 0x%x rejected (%d)\n", sc->cmnd[0], reason);
++ spin_lock(host->host_lock);
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ fault:
+@@ -882,6 +1096,7 @@ fault:
+ sc->result = (DID_NO_CONNECT << 16);
+ scsi_set_resid(sc, scsi_bufflen(sc));
+ sc->scsi_done(sc);
++ spin_lock(host->host_lock);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_queuecommand);
+@@ -895,72 +1110,15 @@ int iscsi_change_queue_depth(struct scsi_device *sdev, int depth)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_change_queue_depth);
+
+-static struct iscsi_mgmt_task *
+-__iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+- char *data, uint32_t data_size)
+-{
+- struct iscsi_session *session = conn->session;
+- struct iscsi_mgmt_task *mtask;
+-
+- if (session->state == ISCSI_STATE_TERMINATE)
+- return NULL;
+-
+- if (hdr->opcode == (ISCSI_OP_LOGIN | ISCSI_OP_IMMEDIATE) ||
+- hdr->opcode == (ISCSI_OP_TEXT | ISCSI_OP_IMMEDIATE))
+- /*
+- * Login and Text are sent serially, in
+- * request-followed-by-response sequence.
+- * Same mtask can be used. Same ITT must be used.
+- * Note that login_mtask is preallocated at conn_create().
+- */
+- mtask = conn->login_mtask;
+- else {
+- BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE);
+- BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
+-
+- if (!__kfifo_get(session->mgmtpool.queue,
+- (void*)&mtask, sizeof(void*)))
+- return NULL;
+- }
+-
+- if (data_size) {
+- memcpy(mtask->data, data, data_size);
+- mtask->data_count = data_size;
+- } else
+- mtask->data_count = 0;
+-
+- INIT_LIST_HEAD(&mtask->running);
+- memcpy(mtask->hdr, hdr, sizeof(struct iscsi_hdr));
+- __kfifo_put(conn->mgmtqueue, (void*)&mtask, sizeof(void*));
+- return mtask;
+-}
+-
+-int iscsi_conn_send_pdu(struct iscsi_cls_conn *cls_conn, struct iscsi_hdr *hdr,
+- char *data, uint32_t data_size)
+-{
+- struct iscsi_conn *conn = cls_conn->dd_data;
+- struct iscsi_session *session = conn->session;
+- int err = 0;
+-
+- spin_lock_bh(&session->lock);
+- if (!__iscsi_conn_send_pdu(conn, hdr, data, data_size))
+- err = -EPERM;
+- spin_unlock_bh(&session->lock);
+- scsi_queue_work(session->host, &conn->xmitwork);
+- return err;
+-}
+-EXPORT_SYMBOL_GPL(iscsi_conn_send_pdu);
+-
+ void iscsi_session_recovery_timedout(struct iscsi_cls_session *cls_session)
+ {
+ struct iscsi_session *session = class_to_transport_session(cls_session);
+- struct iscsi_conn *conn = session->leadconn;
+
+ spin_lock_bh(&session->lock);
+ if (session->state != ISCSI_STATE_LOGGED_IN) {
+ session->state = ISCSI_STATE_RECOVERY_FAILED;
+- if (conn)
+- wake_up(&conn->ehwait);
++ if (session->leadconn)
++ wake_up(&session->leadconn->ehwait);
}
- if(copy_from_user(user_srbcmd, user_srb,fibsize)){
-- dprintk((KERN_DEBUG"aacraid: Could not copy srb from user\n"));
-+ dprintk((KERN_DEBUG"aacraid: Could not copy srb from user\n"));
- rcode = -EFAULT;
- goto cleanup;
+ spin_unlock_bh(&session->lock);
+ }
+@@ -971,30 +1129,25 @@ int iscsi_eh_host_reset(struct scsi_cmnd *sc)
+ struct Scsi_Host *host = sc->device->host;
+ struct iscsi_session *session = iscsi_hostdata(host->hostdata);
+ struct iscsi_conn *conn = session->leadconn;
+- int fail_session = 0;
+
++ mutex_lock(&session->eh_mutex);
+ spin_lock_bh(&session->lock);
+ if (session->state == ISCSI_STATE_TERMINATE) {
+ failed:
+ debug_scsi("failing host reset: session terminated "
+ "[CID %d age %d]\n", conn->id, session->age);
+ spin_unlock_bh(&session->lock);
++ mutex_unlock(&session->eh_mutex);
+ return FAILED;
}
-@@ -518,15 +522,15 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- // Fix up srb for endian and force some values
- srbcmd->function = cpu_to_le32(SRBF_ExecuteScsi); // Force this
-- srbcmd->channel = cpu_to_le32(user_srbcmd->channel);
-+ srbcmd->channel = cpu_to_le32(user_srbcmd->channel);
- srbcmd->id = cpu_to_le32(user_srbcmd->id);
-- srbcmd->lun = cpu_to_le32(user_srbcmd->lun);
-- srbcmd->timeout = cpu_to_le32(user_srbcmd->timeout);
-- srbcmd->flags = cpu_to_le32(flags);
-+ srbcmd->lun = cpu_to_le32(user_srbcmd->lun);
-+ srbcmd->timeout = cpu_to_le32(user_srbcmd->timeout);
-+ srbcmd->flags = cpu_to_le32(flags);
- srbcmd->retry_limit = 0; // Obsolete parameter
- srbcmd->cdb_size = cpu_to_le32(user_srbcmd->cdb_size);
- memcpy(srbcmd->cdb, user_srbcmd->cdb, sizeof(srbcmd->cdb));
--
-+
- switch (flags & (SRB_DataIn | SRB_DataOut)) {
- case SRB_DataOut:
- data_dir = DMA_TO_DEVICE;
-@@ -582,7 +586,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- void* p;
- /* Does this really need to be GFP_DMA? */
- p = kmalloc(upsg->sg[i].count,GFP_KERNEL|__GFP_DMA);
-- if(p == 0) {
-+ if(!p) {
- dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
- upsg->sg[i].count,i,upsg->count));
- rcode = -ENOMEM;
-@@ -594,7 +598,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- sg_list[i] = p; // save so we can clean up later
- sg_indx = i;
+- if (sc->SCp.phase == session->age) {
+- debug_scsi("failing connection CID %d due to SCSI host reset\n",
+- conn->id);
+- fail_session = 1;
+- }
+ spin_unlock_bh(&session->lock);
+-
++ mutex_unlock(&session->eh_mutex);
+ /*
+ * we drop the lock here but the leadconn cannot be destroyed while
+ * we are in the scsi eh
+ */
+- if (fail_session)
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- if( flags & SRB_DataOut ){
-+ if (flags & SRB_DataOut) {
- if(copy_from_user(p,sg_user[i],upsg->sg[i].count)){
- dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
- rcode = -EFAULT;
-@@ -626,7 +630,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- void* p;
- /* Does this really need to be GFP_DMA? */
- p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
-- if(p == 0) {
-+ if(!p) {
- kfree (usg);
- dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
- usg->sg[i].count,i,usg->count));
-@@ -637,7 +641,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- sg_list[i] = p; // save so we can clean up later
- sg_indx = i;
+ debug_scsi("iscsi_eh_host_reset wait for relogin\n");
+ wait_event_interruptible(conn->ehwait,
+@@ -1004,73 +1157,56 @@ failed:
+ if (signal_pending(current))
+ flush_signals(current);
-- if( flags & SRB_DataOut ){
-+ if (flags & SRB_DataOut) {
- if(copy_from_user(p,sg_user[i],upsg->sg[i].count)){
- kfree (usg);
- dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
-@@ -668,7 +672,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- void* p;
- /* Does this really need to be GFP_DMA? */
- p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
-- if(p == 0) {
-+ if(!p) {
- dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
- usg->sg[i].count,i,usg->count));
- rcode = -ENOMEM;
-@@ -680,7 +684,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- sg_list[i] = p; // save so we can clean up later
- sg_indx = i;
++ mutex_lock(&session->eh_mutex);
+ spin_lock_bh(&session->lock);
+ if (session->state == ISCSI_STATE_LOGGED_IN)
+ printk(KERN_INFO "iscsi: host reset succeeded\n");
+ else
+ goto failed;
+ spin_unlock_bh(&session->lock);
+-
++ mutex_unlock(&session->eh_mutex);
+ return SUCCESS;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_eh_host_reset);
-- if( flags & SRB_DataOut ){
-+ if (flags & SRB_DataOut) {
- if(copy_from_user(p,sg_user[i],usg->sg[i].count)){
- dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
- rcode = -EFAULT;
-@@ -698,7 +702,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- dma_addr_t addr;
- void* p;
- p = kmalloc(upsg->sg[i].count, GFP_KERNEL);
-- if(p == 0) {
-+ if (!p) {
- dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
- upsg->sg[i].count, i, upsg->count));
- rcode = -ENOMEM;
-@@ -708,7 +712,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
- sg_list[i] = p; // save so we can clean up later
- sg_indx = i;
+-static void iscsi_tmabort_timedout(unsigned long data)
++static void iscsi_tmf_timedout(unsigned long data)
+ {
+- struct iscsi_cmd_task *ctask = (struct iscsi_cmd_task *)data;
+- struct iscsi_conn *conn = ctask->conn;
++ struct iscsi_conn *conn = (struct iscsi_conn *)data;
+ struct iscsi_session *session = conn->session;
-- if( flags & SRB_DataOut ){
-+ if (flags & SRB_DataOut) {
- if(copy_from_user(p, sg_user[i],
- upsg->sg[i].count)) {
- dprintk((KERN_DEBUG"aacraid: Could not copy sg data from user\n"));
-@@ -734,19 +738,19 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ spin_lock(&session->lock);
+- if (conn->tmabort_state == TMABORT_INITIAL) {
+- conn->tmabort_state = TMABORT_TIMEDOUT;
+- debug_scsi("tmabort timedout [sc %p itt 0x%x]\n",
+- ctask->sc, ctask->itt);
++ if (conn->tmf_state == TMF_QUEUED) {
++ conn->tmf_state = TMF_TIMEDOUT;
++ debug_scsi("tmf timedout\n");
+ /* unblock eh_abort() */
+ wake_up(&conn->ehwait);
}
+ spin_unlock(&session->lock);
+ }
- if (status != 0){
-- dprintk((KERN_DEBUG"aacraid: Could not send raw srb fib to hba\n"));
-+ dprintk((KERN_DEBUG"aacraid: Could not send raw srb fib to hba\n"));
- rcode = -ENXIO;
- goto cleanup;
+-static int iscsi_exec_abort_task(struct scsi_cmnd *sc,
+- struct iscsi_cmd_task *ctask)
++static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
++ struct iscsi_tm *hdr, int age,
++ int timeout)
+ {
+- struct iscsi_conn *conn = ctask->conn;
+ struct iscsi_session *session = conn->session;
+- struct iscsi_tm *hdr = &conn->tmhdr;
+-
+- /*
+- * ctask timed out but session is OK requests must be serialized.
+- */
+- memset(hdr, 0, sizeof(struct iscsi_tm));
+- hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
+- hdr->flags = ISCSI_TM_FUNC_ABORT_TASK;
+- hdr->flags |= ISCSI_FLAG_CMD_FINAL;
+- memcpy(hdr->lun, ctask->hdr->lun, sizeof(hdr->lun));
+- hdr->rtt = ctask->hdr->itt;
+- hdr->refcmdsn = ctask->hdr->cmdsn;
++ struct iscsi_mgmt_task *mtask;
+
+- ctask->mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)hdr,
+- NULL, 0);
+- if (!ctask->mtask) {
++ mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)hdr,
++ NULL, 0);
++ if (!mtask) {
+ spin_unlock_bh(&session->lock);
+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- spin_lock_bh(&session->lock)
+- debug_scsi("abort sent failure [itt 0x%x]\n", ctask->itt);
++ spin_lock_bh(&session->lock);
++ debug_scsi("tmf exec failure\n");
+ return -EPERM;
}
+- ctask->state = ISCSI_TASK_ABORTING;
++ conn->tmfcmd_pdus_cnt++;
++ conn->tmf_timer.expires = timeout * HZ + jiffies;
++ conn->tmf_timer.function = iscsi_tmf_timedout;
++ conn->tmf_timer.data = (unsigned long)conn;
++ add_timer(&conn->tmf_timer);
++ debug_scsi("tmf set timeout\n");
-- if( flags & SRB_DataIn ) {
-+ if (flags & SRB_DataIn) {
- for(i = 0 ; i <= sg_indx; i++){
- byte_count = le32_to_cpu(
- (dev->adapter_info.options & AAC_OPT_SGMAP_HOST64)
- ? ((struct sgmap64*)&srbcmd->sg)->sg[i].count
- : srbcmd->sg.sg[i].count);
- if(copy_to_user(sg_user[i], sg_list[i], byte_count)){
-- dprintk((KERN_DEBUG"aacraid: Could not copy sg data to user\n"));
-+ dprintk((KERN_DEBUG"aacraid: Could not copy sg data to user\n"));
- rcode = -EFAULT;
- goto cleanup;
+- debug_scsi("abort sent [itt 0x%x]\n", ctask->itt);
+-
+- if (conn->tmabort_state == TMABORT_INITIAL) {
+- conn->tmfcmd_pdus_cnt++;
+- conn->tmabort_timer.expires = 20*HZ + jiffies;
+- conn->tmabort_timer.function = iscsi_tmabort_timedout;
+- conn->tmabort_timer.data = (unsigned long)ctask;
+- add_timer(&conn->tmabort_timer);
+- debug_scsi("abort set timeout [itt 0x%x]\n", ctask->itt);
+- }
+ spin_unlock_bh(&session->lock);
+ mutex_unlock(&session->eh_mutex);
+ scsi_queue_work(session->host, &conn->xmitwork);
+@@ -1078,113 +1214,197 @@ static int iscsi_exec_abort_task(struct scsi_cmnd *sc,
+ /*
+ * block eh thread until:
+ *
+- * 1) abort response
+- * 2) abort timeout
++ * 1) tmf response
++ * 2) tmf timeout
+ * 3) session is terminated or restarted or userspace has
+ * given up on recovery
+ */
+- wait_event_interruptible(conn->ehwait,
+- sc->SCp.phase != session->age ||
++ wait_event_interruptible(conn->ehwait, age != session->age ||
+ session->state != ISCSI_STATE_LOGGED_IN ||
+- conn->tmabort_state != TMABORT_INITIAL);
++ conn->tmf_state != TMF_QUEUED);
+ if (signal_pending(current))
+ flush_signals(current);
+- del_timer_sync(&conn->tmabort_timer);
++ del_timer_sync(&conn->tmf_timer);
++
+ mutex_lock(&session->eh_mutex);
+ spin_lock_bh(&session->lock);
++ /* if the session drops it will clean up the mtask */
++ if (age != session->age ||
++ session->state != ISCSI_STATE_LOGGED_IN)
++ return -ENOTCONN;
+ return 0;
+ }
-@@ -756,7 +760,7 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
+ /*
+- * session lock must be held
++ * Fail commands. session lock held and recv side suspended and xmit
++ * thread flushed
+ */
+-static struct iscsi_mgmt_task *
+-iscsi_remove_mgmt_task(struct kfifo *fifo, uint32_t itt)
++static void fail_all_commands(struct iscsi_conn *conn, unsigned lun)
+ {
+- int i, nr_tasks = __kfifo_len(fifo) / sizeof(void*);
+- struct iscsi_mgmt_task *task;
++ struct iscsi_cmd_task *ctask, *tmp;
- reply = (struct aac_srb_reply *) fib_data(srbfib);
- if(copy_to_user(user_reply,reply,sizeof(struct aac_srb_reply))){
-- dprintk((KERN_DEBUG"aacraid: Could not copy reply to user\n"));
-+ dprintk((KERN_DEBUG"aacraid: Could not copy reply to user\n"));
- rcode = -EFAULT;
- goto cleanup;
+- debug_scsi("searching %d tasks\n", nr_tasks);
++ if (conn->ctask && (conn->ctask->sc->device->lun == lun || lun == -1))
++ conn->ctask = NULL;
+
+- for (i = 0; i < nr_tasks; i++) {
+- __kfifo_get(fifo, (void*)&task, sizeof(void*));
+- debug_scsi("check task %u\n", task->itt);
++ /* flush pending */
++ list_for_each_entry_safe(ctask, tmp, &conn->xmitqueue, running) {
++ if (lun == ctask->sc->device->lun || lun == -1) {
++ debug_scsi("failing pending sc %p itt 0x%x\n",
++ ctask->sc, ctask->itt);
++ fail_command(conn, ctask, DID_BUS_BUSY << 16);
++ }
++ }
+
+- if (task->itt == itt) {
+- debug_scsi("matched task\n");
+- return task;
++ list_for_each_entry_safe(ctask, tmp, &conn->requeue, running) {
++ if (lun == ctask->sc->device->lun || lun == -1) {
++ debug_scsi("failing requeued sc %p itt 0x%x\n",
++ ctask->sc, ctask->itt);
++ fail_command(conn, ctask, DID_BUS_BUSY << 16);
+ }
++ }
+
+- __kfifo_put(fifo, (void*)&task, sizeof(void*));
++ /* fail all other running */
++ list_for_each_entry_safe(ctask, tmp, &conn->run_list, running) {
++ if (lun == ctask->sc->device->lun || lun == -1) {
++ debug_scsi("failing in progress sc %p itt 0x%x\n",
++ ctask->sc, ctask->itt);
++ fail_command(conn, ctask, DID_BUS_BUSY << 16);
++ }
}
-@@ -775,34 +779,34 @@ cleanup:
+- return NULL;
}
- struct aac_pci_info {
-- u32 bus;
-- u32 slot;
-+ u32 bus;
-+ u32 slot;
- };
+-static int iscsi_ctask_mtask_cleanup(struct iscsi_cmd_task *ctask)
++static void iscsi_suspend_tx(struct iscsi_conn *conn)
+ {
+- struct iscsi_conn *conn = ctask->conn;
+- struct iscsi_session *session = conn->session;
+-
+- if (!ctask->mtask)
+- return -EINVAL;
++ set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ scsi_flush_work(conn->session->host);
++}
+- if (!iscsi_remove_mgmt_task(conn->mgmtqueue, ctask->mtask->itt))
+- list_del(&ctask->mtask->running);
+- __kfifo_put(session->mgmtpool.queue, (void*)&ctask->mtask,
+- sizeof(void*));
+- ctask->mtask = NULL;
+- return 0;
++static void iscsi_start_tx(struct iscsi_conn *conn)
++{
++ clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ scsi_queue_work(conn->session->host, &conn->xmitwork);
+ }
- static int aac_get_pci_info(struct aac_dev* dev, void __user *arg)
+-/*
+- * session lock must be held
+- */
+-static void fail_command(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
+- int err)
++static enum scsi_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *scmd)
{
-- struct aac_pci_info pci_info;
-+ struct aac_pci_info pci_info;
+- struct scsi_cmnd *sc;
++ struct iscsi_cls_session *cls_session;
++ struct iscsi_session *session;
++ struct iscsi_conn *conn;
++ enum scsi_eh_timer_return rc = EH_NOT_HANDLED;
- pci_info.bus = dev->pdev->bus->number;
- pci_info.slot = PCI_SLOT(dev->pdev->devfn);
+- sc = ctask->sc;
+- if (!sc)
+- return;
++ cls_session = starget_to_session(scsi_target(scmd->device));
++ session = class_to_transport_session(cls_session);
-- if (copy_to_user(arg, &pci_info, sizeof(struct aac_pci_info))) {
-- dprintk((KERN_DEBUG "aacraid: Could not copy pci info\n"));
-- return -EFAULT;
-+ if (copy_to_user(arg, &pci_info, sizeof(struct aac_pci_info))) {
-+ dprintk((KERN_DEBUG "aacraid: Could not copy pci info\n"));
-+ return -EFAULT;
- }
-- return 0;
-+ return 0;
- }
--
+- if (ctask->state == ISCSI_TASK_PENDING)
++ debug_scsi("scsi cmd %p timedout\n", scmd);
+
++ spin_lock(&session->lock);
++ if (session->state != ISCSI_STATE_LOGGED_IN) {
+ /*
+- * cmd never made it to the xmit thread, so we should not count
+- * the cmd in the sequencing
++ * We are probably in the middle of iscsi recovery so let
++ * that complete and handle the error.
+ */
+- conn->session->queued_cmdsn--;
+- else
+- conn->session->tt->cleanup_cmd_task(conn, ctask);
+- iscsi_ctask_mtask_cleanup(ctask);
++ rc = EH_RESET_TIMER;
++ goto done;
++ }
- int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg)
- {
- int status;
--
-+
- /*
- * HBA gets first crack
- */
--
+- sc->result = err;
+- scsi_set_resid(sc, scsi_bufflen(sc));
+- if (conn->ctask == ctask)
+- conn->ctask = NULL;
+- /* release ref from queuecommand */
+- __iscsi_put_ctask(ctask);
++ conn = session->leadconn;
++ if (!conn) {
++ /* In the middle of shuting down */
++ rc = EH_RESET_TIMER;
++ goto done;
++ }
+
- status = aac_dev_ioctl(dev, cmd, arg);
- if(status != -ENOTTY)
- return status;
-@@ -832,7 +836,7 @@ int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg)
- break;
- default:
- status = -ENOTTY;
-- break;
-+ break;
- }
- return status;
++ if (!conn->recv_timeout && !conn->ping_timeout)
++ goto done;
++ /*
++ * if the ping timedout then we are in the middle of cleaning up
++ * and can let the iscsi eh handle it
++ */
++ if (time_before_eq(conn->last_recv + (conn->recv_timeout * HZ) +
++ (conn->ping_timeout * HZ), jiffies))
++ rc = EH_RESET_TIMER;
++ /*
++ * if we are about to check the transport then give the command
++ * more time
++ */
++ if (time_before_eq(conn->last_recv + (conn->recv_timeout * HZ),
++ jiffies))
++ rc = EH_RESET_TIMER;
++ /* if in the middle of checking the transport then give us more time */
++ if (conn->ping_mtask)
++ rc = EH_RESET_TIMER;
++done:
++ spin_unlock(&session->lock);
++ debug_scsi("return %s\n", rc == EH_RESET_TIMER ? "timer reset" : "nh");
++ return rc;
}
-diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
-index 8736813..89cc8b7 100644
---- a/drivers/scsi/aacraid/comminit.c
-+++ b/drivers/scsi/aacraid/comminit.c
-@@ -301,10 +301,10 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
- if ((!aac_adapter_sync_cmd(dev, GET_ADAPTER_PROPERTIES,
- 0, 0, 0, 0, 0, 0, status+0, status+1, status+2, NULL, NULL)) &&
- (status[0] == 0x00000001)) {
-- if (status[1] & AAC_OPT_NEW_COMM_64)
-+ if (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_64))
- dev->raw_io_64 = 1;
- if (dev->a_ops.adapter_comm &&
-- (status[1] & AAC_OPT_NEW_COMM))
-+ (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM)))
- dev->comm_interface = AAC_COMM_MESSAGE;
- if ((dev->comm_interface == AAC_COMM_MESSAGE) &&
- (status[2] > dev->base_size)) {
-diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
-index abce48c..81b3692 100644
---- a/drivers/scsi/aacraid/commsup.c
-+++ b/drivers/scsi/aacraid/commsup.c
-@@ -56,7 +56,7 @@
- * Allocate and map the shared PCI space for the FIB blocks used to
- * talk to the Adaptec firmware.
- */
--
-+
- static int fib_map_alloc(struct aac_dev *dev)
+
+-static void iscsi_suspend_tx(struct iscsi_conn *conn)
++static void iscsi_check_transport_timeouts(unsigned long data)
{
- dprintk((KERN_INFO
-@@ -109,14 +109,16 @@ int aac_fib_setup(struct aac_dev * dev)
- }
- if (i<0)
- return -ENOMEM;
--
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+- scsi_flush_work(conn->session->host);
++ struct iscsi_conn *conn = (struct iscsi_conn *)data;
++ struct iscsi_session *session = conn->session;
++ unsigned long timeout, next_timeout = 0, last_recv;
+
- hw_fib = dev->hw_fib_va;
- hw_fib_pa = dev->hw_fib_pa;
- memset(hw_fib, 0, dev->max_fib_size * (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB));
- /*
- * Initialise the fibs
- */
-- for (i = 0, fibptr = &dev->fibs[i]; i < (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB); i++, fibptr++)
-+ for (i = 0, fibptr = &dev->fibs[i];
-+ i < (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB);
-+ i++, fibptr++)
- {
- fibptr->dev = dev;
- fibptr->hw_fib_va = hw_fib;
-@@ -148,13 +150,13 @@ int aac_fib_setup(struct aac_dev * dev)
- * Allocate a fib from the adapter fib pool. If the pool is empty we
- * return NULL.
- */
--
++ spin_lock(&session->lock);
++ if (session->state != ISCSI_STATE_LOGGED_IN)
++ goto done;
+
- struct fib *aac_fib_alloc(struct aac_dev *dev)
- {
- struct fib * fibptr;
- unsigned long flags;
- spin_lock_irqsave(&dev->fib_lock, flags);
-- fibptr = dev->free_fib;
-+ fibptr = dev->free_fib;
- if(!fibptr){
- spin_unlock_irqrestore(&dev->fib_lock, flags);
- return fibptr;
-@@ -171,6 +173,7 @@ struct fib *aac_fib_alloc(struct aac_dev *dev)
- * each I/O
- */
- fibptr->hw_fib_va->header.XferState = 0;
-+ fibptr->flags = 0;
- fibptr->callback = NULL;
- fibptr->callback_data = NULL;
-
-@@ -183,7 +186,7 @@ struct fib *aac_fib_alloc(struct aac_dev *dev)
- *
- * Frees up a fib and places it on the appropriate queue
- */
--
++ timeout = conn->recv_timeout;
++ if (!timeout)
++ goto done;
+
- void aac_fib_free(struct fib *fibptr)
- {
- unsigned long flags;
-@@ -204,10 +207,10 @@ void aac_fib_free(struct fib *fibptr)
- /**
- * aac_fib_init - initialise a fib
- * @fibptr: The fib to initialize
-- *
-+ *
- * Set up the generic fib fields ready for use
- */
--
++ timeout *= HZ;
++ last_recv = conn->last_recv;
++ if (time_before_eq(last_recv + timeout + (conn->ping_timeout * HZ),
++ jiffies)) {
++ printk(KERN_ERR "ping timeout of %d secs expired, "
++ "last rx %lu, last ping %lu, now %lu\n",
++ conn->ping_timeout, last_recv,
++ conn->last_ping, jiffies);
++ spin_unlock(&session->lock);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ return;
++ }
+
- void aac_fib_init(struct fib *fibptr)
- {
- struct hw_fib *hw_fib = fibptr->hw_fib_va;
-@@ -227,12 +230,12 @@ void aac_fib_init(struct fib *fibptr)
- * Will deallocate and return to the free pool the FIB pointed to by the
- * caller.
- */
--
++ if (time_before_eq(last_recv + timeout, jiffies)) {
++ if (time_before_eq(conn->last_ping, last_recv)) {
++ /* send a ping to try to provoke some traffic */
++ debug_scsi("Sending nopout as ping on conn %p\n", conn);
++ iscsi_send_nopout(conn, NULL);
++ }
++ next_timeout = last_recv + timeout + (conn->ping_timeout * HZ);
++ } else {
++ next_timeout = last_recv + timeout;
++ }
+
- static void fib_dealloc(struct fib * fibptr)
++ if (next_timeout) {
++ debug_scsi("Setting next tmo %lu\n", next_timeout);
++ mod_timer(&conn->transport_timer, next_timeout);
++ }
++done:
++ spin_unlock(&session->lock);
+ }
+
+-static void iscsi_start_tx(struct iscsi_conn *conn)
++static void iscsi_prep_abort_task_pdu(struct iscsi_cmd_task *ctask,
++ struct iscsi_tm *hdr)
{
- struct hw_fib *hw_fib = fibptr->hw_fib_va;
- BUG_ON(hw_fib->header.StructType != FIB_MAGIC);
-- hw_fib->header.XferState = 0;
-+ hw_fib->header.XferState = 0;
+- clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+- scsi_queue_work(conn->session->host, &conn->xmitwork);
++ memset(hdr, 0, sizeof(*hdr));
++ hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
++ hdr->flags = ISCSI_TM_FUNC_ABORT_TASK & ISCSI_FLAG_TM_FUNC_MASK;
++ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
++ memcpy(hdr->lun, ctask->hdr->lun, sizeof(hdr->lun));
++ hdr->rtt = ctask->hdr->itt;
++ hdr->refcmdsn = ctask->hdr->cmdsn;
}
- /*
-@@ -241,7 +244,7 @@ static void fib_dealloc(struct fib * fibptr)
- * these routines and are the only routines which have a knowledge of the
- * how these queues are implemented.
- */
--
-+
- /**
- * aac_get_entry - get a queue entry
- * @dev: Adapter
-@@ -254,7 +257,7 @@ static void fib_dealloc(struct fib * fibptr)
- * is full(no free entries) than no entry is returned and the function returns 0 otherwise 1 is
- * returned.
- */
--
-+
- static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entry, u32 * index, unsigned long *nonotify)
+ int iscsi_eh_abort(struct scsi_cmnd *sc)
{
- struct aac_queue * q;
-@@ -279,26 +282,27 @@ static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entr
- idx = ADAP_NORM_RESP_ENTRIES;
- }
- if (idx != le32_to_cpu(*(q->headers.consumer)))
-- *nonotify = 1;
-+ *nonotify = 1;
+ struct Scsi_Host *host = sc->device->host;
+ struct iscsi_session *session = iscsi_hostdata(host->hostdata);
+- struct iscsi_cmd_task *ctask;
+ struct iscsi_conn *conn;
+- int rc;
++ struct iscsi_cmd_task *ctask;
++ struct iscsi_tm *hdr;
++ int rc, age;
+
+ mutex_lock(&session->eh_mutex);
+ spin_lock_bh(&session->lock);
+@@ -1199,19 +1419,23 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ return SUCCESS;
}
- if (qid == AdapNormCmdQueue) {
-- if (*index >= ADAP_NORM_CMD_ENTRIES)
-+ if (*index >= ADAP_NORM_CMD_ENTRIES)
- *index = 0; /* Wrap to front of the Producer Queue. */
- } else {
-- if (*index >= ADAP_NORM_RESP_ENTRIES)
-+ if (*index >= ADAP_NORM_RESP_ENTRIES)
- *index = 0; /* Wrap to front of the Producer Queue. */
+- ctask = (struct iscsi_cmd_task *)sc->SCp.ptr;
+- conn = ctask->conn;
+-
+- conn->eh_abort_cnt++;
+- debug_scsi("aborting [sc %p itt 0x%x]\n", sc, ctask->itt);
+-
+ /*
+ * If we are not logged in or we have started a new session
+ * then let the host reset code handle this
+ */
+- if (session->state != ISCSI_STATE_LOGGED_IN ||
+- sc->SCp.phase != session->age)
+- goto failed;
++ if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN ||
++ sc->SCp.phase != session->age) {
++ spin_unlock_bh(&session->lock);
++ mutex_unlock(&session->eh_mutex);
++ return FAILED;
++ }
++
++ conn = session->leadconn;
++ conn->eh_abort_cnt++;
++ age = session->age;
++
++ ctask = (struct iscsi_cmd_task *)sc->SCp.ptr;
++ debug_scsi("aborting [sc %p itt 0x%x]\n", sc, ctask->itt);
+
+ /* ctask completed before time out */
+ if (!ctask->sc) {
+@@ -1219,27 +1443,26 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ goto success;
}
-- if ((*index + 1) == le32_to_cpu(*(q->headers.consumer))) { /* Queue is full */
-+ /* Queue is full */
-+ if ((*index + 1) == le32_to_cpu(*(q->headers.consumer))) {
- printk(KERN_WARNING "Queue %d full, %u outstanding.\n",
- qid, q->numpending);
- return 0;
- } else {
-- *entry = q->base + *index;
-+ *entry = q->base + *index;
- return 1;
+- /* what should we do here ? */
+- if (conn->ctask == ctask) {
+- printk(KERN_INFO "iscsi: sc %p itt 0x%x partially sent. "
+- "Failing abort\n", sc, ctask->itt);
+- goto failed;
+- }
+-
+ if (ctask->state == ISCSI_TASK_PENDING) {
+ fail_command(conn, ctask, DID_ABORT << 16);
+ goto success;
}
--}
-+}
- /**
- * aac_queue_get - get the next free QE
-@@ -320,31 +324,29 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
- {
- struct aac_entry * entry = NULL;
- int map = 0;
--
+- conn->tmabort_state = TMABORT_INITIAL;
+- rc = iscsi_exec_abort_task(sc, ctask);
+- if (rc || sc->SCp.phase != session->age ||
+- session->state != ISCSI_STATE_LOGGED_IN)
++ /* only have one tmf outstanding at a time */
++ if (conn->tmf_state != TMF_INITIAL)
++ goto failed;
++ conn->tmf_state = TMF_QUEUED;
+
- if (qid == AdapNormCmdQueue) {
- /* if no entries wait for some if caller wants to */
-- while (!aac_get_entry(dev, qid, &entry, index, nonotify))
-- {
-+ while (!aac_get_entry(dev, qid, &entry, index, nonotify)) {
- printk(KERN_ERR "GetEntries failed\n");
- }
-- /*
-- * Setup queue entry with a command, status and fib mapped
-- */
-- entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
-- map = 1;
-+ /*
-+ * Setup queue entry with a command, status and fib mapped
-+ */
-+ entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
-+ map = 1;
- } else {
-- while(!aac_get_entry(dev, qid, &entry, index, nonotify))
-- {
-+ while (!aac_get_entry(dev, qid, &entry, index, nonotify)) {
- /* if no entries wait for some if caller wants to */
++ hdr = &conn->tmhdr;
++ iscsi_prep_abort_task_pdu(ctask, hdr);
++
++ if (iscsi_exec_task_mgmt_fn(conn, hdr, age, session->abort_timeout)) {
++ rc = FAILED;
+ goto failed;
+- iscsi_ctask_mtask_cleanup(ctask);
++ }
+
+- switch (conn->tmabort_state) {
+- case TMABORT_SUCCESS:
++ switch (conn->tmf_state) {
++ case TMF_SUCCESS:
+ spin_unlock_bh(&session->lock);
+ iscsi_suspend_tx(conn);
+ /*
+@@ -1248,22 +1471,26 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ write_lock_bh(conn->recv_lock);
+ spin_lock(&session->lock);
+ fail_command(conn, ctask, DID_ABORT << 16);
++ conn->tmf_state = TMF_INITIAL;
+ spin_unlock(&session->lock);
+ write_unlock_bh(conn->recv_lock);
+ iscsi_start_tx(conn);
+ goto success_unlocked;
+- case TMABORT_NOT_FOUND:
+- if (!ctask->sc) {
++ case TMF_TIMEDOUT:
++ spin_unlock_bh(&session->lock);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ goto failed_unlocked;
++ case TMF_NOT_FOUND:
++ if (!sc->SCp.ptr) {
++ conn->tmf_state = TMF_INITIAL;
+ /* ctask completed before tmf abort response */
+ debug_scsi("sc completed while abort in progress\n");
+ goto success;
}
-- /*
-- * Setup queue entry with command, status and fib mapped
-- */
-- entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
-- entry->addr = hw_fib->header.SenderFibAddress;
-- /* Restore adapters pointer to the FIB */
-+ /*
-+ * Setup queue entry with command, status and fib mapped
-+ */
-+ entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
-+ entry->addr = hw_fib->header.SenderFibAddress;
-+ /* Restore adapters pointer to the FIB */
- hw_fib->header.ReceiverFibAddress = hw_fib->header.SenderFibAddress; /* Let the adapter now where to find its data */
-- map = 0;
-+ map = 0;
+ /* fall through */
+ default:
+- /* timedout or failed */
+- spin_unlock_bh(&session->lock);
+- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- goto failed_unlocked;
++ conn->tmf_state = TMF_INITIAL;
++ goto failed;
}
- /*
- * If MapFib is true than we need to map the Fib and put pointers
-@@ -356,8 +358,8 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
+
+ success:
+@@ -1276,65 +1503,152 @@ success_unlocked:
+ failed:
+ spin_unlock_bh(&session->lock);
+ failed_unlocked:
+- debug_scsi("abort failed [sc %lx itt 0x%x]\n", (long)sc, ctask->itt);
++ debug_scsi("abort failed [sc %p itt 0x%x]\n", sc,
++ ctask ? ctask->itt : 0);
+ mutex_unlock(&session->eh_mutex);
+ return FAILED;
}
+ EXPORT_SYMBOL_GPL(iscsi_eh_abort);
- /*
-- * Define the highest level of host to adapter communication routines.
-- * These routines will support host to adapter FS commuication. These
-+ * Define the highest level of host to adapter communication routines.
-+ * These routines will support host to adapter FS commuication. These
- * routines have no knowledge of the commuication method used. This level
- * sends and receives FIBs. This level has no knowledge of how these FIBs
- * get passed back and forth.
-@@ -379,7 +381,7 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
- * an event to wait on must be supplied. This event will be set when a
- * response FIB is received from the adapter.
- */
--
++static void iscsi_prep_lun_reset_pdu(struct scsi_cmnd *sc, struct iscsi_tm *hdr)
++{
++ memset(hdr, 0, sizeof(*hdr));
++ hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
++ hdr->flags = ISCSI_TM_FUNC_LOGICAL_UNIT_RESET & ISCSI_FLAG_TM_FUNC_MASK;
++ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
++ int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
++ hdr->rtt = RESERVED_ITT;
++}
+
- int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- int priority, int wait, int reply, fib_callback callback,
- void *callback_data)
-@@ -392,16 +394,17 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- if (!(hw_fib->header.XferState & cpu_to_le32(HostOwned)))
- return -EBUSY;
- /*
-- * There are 5 cases with the wait and reponse requested flags.
-+ * There are 5 cases with the wait and reponse requested flags.
- * The only invalid cases are if the caller requests to wait and
- * does not request a response and if the caller does not want a
- * response and the Fib is not allocated from pool. If a response
- * is not requesed the Fib will just be deallocaed by the DPC
- * routine when the response comes back from the adapter. No
-- * further processing will be done besides deleting the Fib. We
-+ * further processing will be done besides deleting the Fib. We
- * will have a debug mode where the adapter can notify the host
- * it had a problem and the host can log that fact.
- */
-+ fibptr->flags = 0;
- if (wait && !reply) {
- return -EINVAL;
- } else if (!wait && reply) {
-@@ -413,7 +416,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- } else if (wait && reply) {
- hw_fib->header.XferState |= cpu_to_le32(ResponseExpected);
- FIB_COUNTER_INCREMENT(aac_config.NormalSent);
-- }
++int iscsi_eh_device_reset(struct scsi_cmnd *sc)
++{
++ struct Scsi_Host *host = sc->device->host;
++ struct iscsi_session *session = iscsi_hostdata(host->hostdata);
++ struct iscsi_conn *conn;
++ struct iscsi_tm *hdr;
++ int rc = FAILED;
++
++ debug_scsi("LU Reset [sc %p lun %u]\n", sc, sc->device->lun);
++
++ mutex_lock(&session->eh_mutex);
++ spin_lock_bh(&session->lock);
++ /*
++ * Just check if we are not logged in. We cannot check for
++ * the phase because the reset could come from a ioctl.
++ */
++ if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN)
++ goto unlock;
++ conn = session->leadconn;
++
++ /* only have one tmf outstanding at a time */
++ if (conn->tmf_state != TMF_INITIAL)
++ goto unlock;
++ conn->tmf_state = TMF_QUEUED;
++
++ hdr = &conn->tmhdr;
++ iscsi_prep_lun_reset_pdu(sc, hdr);
++
++ if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
++ session->lu_reset_timeout)) {
++ rc = FAILED;
++ goto unlock;
+ }
- /*
- * Map the fib into 32bits by using the fib number
- */
-@@ -436,7 +439,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- hw_fib->header.Size = cpu_to_le16(sizeof(struct aac_fibhdr) + size);
- if (le16_to_cpu(hw_fib->header.Size) > le16_to_cpu(hw_fib->header.SenderSize)) {
- return -EMSGSIZE;
-- }
++
++ switch (conn->tmf_state) {
++ case TMF_SUCCESS:
++ break;
++ case TMF_TIMEDOUT:
++ spin_unlock_bh(&session->lock);
++ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
++ goto done;
++ default:
++ conn->tmf_state = TMF_INITIAL;
++ goto unlock;
+ }
- /*
- * Get a queue entry connect the FIB to it and send an notify
- * the adapter a command is ready.
-@@ -450,10 +453,10 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- if (!wait) {
- fibptr->callback = callback;
- fibptr->callback_data = callback_data;
-+ fibptr->flags = FIB_CONTEXT_FLAG;
- }
++
++ rc = SUCCESS;
++ spin_unlock_bh(&session->lock);
++
++ iscsi_suspend_tx(conn);
++ /* need to grab the recv lock then session lock */
++ write_lock_bh(conn->recv_lock);
++ spin_lock(&session->lock);
++ fail_all_commands(conn, sc->device->lun);
++ conn->tmf_state = TMF_INITIAL;
++ spin_unlock(&session->lock);
++ write_unlock_bh(conn->recv_lock);
++
++ iscsi_start_tx(conn);
++ goto done;
++
++unlock:
++ spin_unlock_bh(&session->lock);
++done:
++ debug_scsi("iscsi_eh_device_reset %s\n",
++ rc == SUCCESS ? "SUCCESS" : "FAILED");
++ mutex_unlock(&session->eh_mutex);
++ return rc;
++}
++EXPORT_SYMBOL_GPL(iscsi_eh_device_reset);
++
++/*
++ * Pre-allocate a pool of @max items of @item_size. By default, the pool
++ * should be accessed via kfifo_{get,put} on q->queue.
++ * Optionally, the caller can obtain the array of object pointers
++ * by passing in a non-NULL @items pointer
++ */
+ int
+-iscsi_pool_init(struct iscsi_queue *q, int max, void ***items, int item_size)
++iscsi_pool_init(struct iscsi_pool *q, int max, void ***items, int item_size)
+ {
+- int i;
++ int i, num_arrays = 1;
- fibptr->done = 0;
-- fibptr->flags = 0;
+- *items = kmalloc(max * sizeof(void*), GFP_KERNEL);
+- if (*items == NULL)
+- return -ENOMEM;
++ memset(q, 0, sizeof(*q));
- FIB_COUNTER_INCREMENT(aac_config.FibsSent);
+ q->max = max;
+- q->pool = kmalloc(max * sizeof(void*), GFP_KERNEL);
+- if (q->pool == NULL) {
+- kfree(*items);
+- return -ENOMEM;
+- }
++
++ /* If the user passed an items pointer, he wants a copy of
++ * the array. */
++ if (items)
++ num_arrays++;
++ q->pool = kzalloc(num_arrays * max * sizeof(void*), GFP_KERNEL);
++ if (q->pool == NULL)
++ goto enomem;
-@@ -473,9 +476,9 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- aac_adapter_deliver(fibptr);
+ q->queue = kfifo_init((void*)q->pool, max * sizeof(void*),
+ GFP_KERNEL, NULL);
+- if (q->queue == ERR_PTR(-ENOMEM)) {
+- kfree(q->pool);
+- kfree(*items);
+- return -ENOMEM;
+- }
++ if (q->queue == ERR_PTR(-ENOMEM))
++ goto enomem;
- /*
-- * If the caller wanted us to wait for response wait now.
-+ * If the caller wanted us to wait for response wait now.
- */
--
-+
- if (wait) {
- spin_unlock_irqrestore(&fibptr->event_lock, flags);
- /* Only set for first known interruptable command */
-@@ -522,7 +525,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ for (i = 0; i < max; i++) {
+- q->pool[i] = kmalloc(item_size, GFP_KERNEL);
++ q->pool[i] = kzalloc(item_size, GFP_KERNEL);
+ if (q->pool[i] == NULL) {
+- int j;
+-
+- for (j = 0; j < i; j++)
+- kfree(q->pool[j]);
+-
+- kfifo_free(q->queue);
+- kfree(q->pool);
+- kfree(*items);
+- return -ENOMEM;
++ q->max = i;
++ goto enomem;
}
- spin_unlock_irqrestore(&fibptr->event_lock, flags);
- BUG_ON(fibptr->done == 0);
--
+- memset(q->pool[i], 0, item_size);
+- (*items)[i] = q->pool[i];
+ __kfifo_put(q->queue, (void*)&q->pool[i], sizeof(void*));
+ }
+
- if(unlikely(fibptr->flags & FIB_CONTEXT_FLAG_TIMED_OUT))
- return -ETIMEDOUT;
- return 0;
-@@ -537,15 +540,15 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
- return 0;
++ if (items) {
++ *items = q->pool + max;
++ memcpy(*items, q->pool, max * sizeof(void *));
++ }
++
+ return 0;
++
++enomem:
++ iscsi_pool_free(q);
++ return -ENOMEM;
}
+ EXPORT_SYMBOL_GPL(iscsi_pool_init);
--/**
-+/**
- * aac_consumer_get - get the top of the queue
- * @dev: Adapter
- * @q: Queue
- * @entry: Return entry
- *
- * Will return a pointer to the entry on the top of the queue requested that
-- * we are a consumer of, and return the address of the queue entry. It does
-- * not change the state of the queue.
-+ * we are a consumer of, and return the address of the queue entry. It does
-+ * not change the state of the queue.
- */
+-void iscsi_pool_free(struct iscsi_queue *q, void **items)
++void iscsi_pool_free(struct iscsi_pool *q)
+ {
+ int i;
- int aac_consumer_get(struct aac_dev * dev, struct aac_queue * q, struct aac_entry **entry)
-@@ -560,10 +563,10 @@ int aac_consumer_get(struct aac_dev * dev, struct aac_queue * q, struct aac_entr
- * the end of the queue, else we just use the entry
- * pointed to by the header index
- */
-- if (le32_to_cpu(*q->headers.consumer) >= q->entries)
-- index = 0;
-+ if (le32_to_cpu(*q->headers.consumer) >= q->entries)
-+ index = 0;
- else
-- index = le32_to_cpu(*q->headers.consumer);
-+ index = le32_to_cpu(*q->headers.consumer);
- *entry = q->base + index;
- status = 1;
+ for (i = 0; i < q->max; i++)
+- kfree(items[i]);
+- kfree(q->pool);
+- kfree(items);
++ kfree(q->pool[i]);
++ if (q->pool)
++ kfree(q->pool);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_pool_free);
+
+@@ -1387,7 +1701,7 @@ iscsi_session_setup(struct iscsi_transport *iscsit,
+ qdepth = ISCSI_DEF_CMD_PER_LUN;
}
-@@ -587,12 +590,12 @@ void aac_consumer_free(struct aac_dev * dev, struct aac_queue *q, u32 qid)
- if ((le32_to_cpu(*q->headers.producer)+1) == le32_to_cpu(*q->headers.consumer))
- wasfull = 1;
--
+- if (cmds_max < 2 || (cmds_max & (cmds_max - 1)) ||
++ if (!is_power_of_2(cmds_max) ||
+ cmds_max >= ISCSI_MGMT_ITT_OFFSET) {
+ if (cmds_max != 0)
+ printk(KERN_ERR "iscsi: invalid can_queue of %d. "
+@@ -1411,12 +1725,16 @@ iscsi_session_setup(struct iscsi_transport *iscsit,
+ shost->max_cmd_len = iscsit->max_cmd_len;
+ shost->transportt = scsit;
+ shost->transportt->create_work_queue = 1;
++ shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out;
+ *hostno = shost->host_no;
+
+ session = iscsi_hostdata(shost->hostdata);
+ memset(session, 0, sizeof(struct iscsi_session));
+ session->host = shost;
+ session->state = ISCSI_STATE_FREE;
++ session->fast_abort = 1;
++ session->lu_reset_timeout = 15;
++ session->abort_timeout = 10;
+ session->mgmtpool_max = ISCSI_MGMT_CMDS_MAX;
+ session->cmds_max = cmds_max;
+ session->queued_cmdsn = session->cmdsn = initial_cmdsn;
+@@ -1479,9 +1797,9 @@ module_put:
+ cls_session_fail:
+ scsi_remove_host(shost);
+ add_host_fail:
+- iscsi_pool_free(&session->mgmtpool, (void**)session->mgmt_cmds);
++ iscsi_pool_free(&session->mgmtpool);
+ mgmtpool_alloc_fail:
+- iscsi_pool_free(&session->cmdpool, (void**)session->cmds);
++ iscsi_pool_free(&session->cmdpool);
+ cmdpool_alloc_fail:
+ scsi_host_put(shost);
+ return NULL;
+@@ -1501,11 +1819,11 @@ void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
+ struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
+ struct module *owner = cls_session->transport->owner;
+
+- iscsi_unblock_session(cls_session);
++ iscsi_remove_session(cls_session);
+ scsi_remove_host(shost);
+
+- iscsi_pool_free(&session->mgmtpool, (void**)session->mgmt_cmds);
+- iscsi_pool_free(&session->cmdpool, (void**)session->cmds);
++ iscsi_pool_free(&session->mgmtpool);
++ iscsi_pool_free(&session->cmdpool);
+
+ kfree(session->password);
+ kfree(session->password_in);
+@@ -1516,7 +1834,7 @@ void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
+ kfree(session->hwaddress);
+ kfree(session->initiatorname);
+
+- iscsi_destroy_session(cls_session);
++ iscsi_free_session(cls_session);
+ scsi_host_put(shost);
+ module_put(owner);
+ }
+@@ -1546,17 +1864,17 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
+ conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+ conn->id = conn_idx;
+ conn->exp_statsn = 0;
+- conn->tmabort_state = TMABORT_INITIAL;
++ conn->tmf_state = TMF_INITIAL;
+
- if (le32_to_cpu(*q->headers.consumer) >= q->entries)
- *q->headers.consumer = cpu_to_le32(1);
- else
- *q->headers.consumer = cpu_to_le32(le32_to_cpu(*q->headers.consumer)+1);
--
++ init_timer(&conn->transport_timer);
++ conn->transport_timer.data = (unsigned long)conn;
++ conn->transport_timer.function = iscsi_check_transport_timeouts;
+
- if (wasfull) {
- switch (qid) {
+ INIT_LIST_HEAD(&conn->run_list);
+ INIT_LIST_HEAD(&conn->mgmt_run_list);
++ INIT_LIST_HEAD(&conn->mgmtqueue);
+ INIT_LIST_HEAD(&conn->xmitqueue);
+-
+- /* initialize general immediate & non-immediate PDU commands queue */
+- conn->mgmtqueue = kfifo_alloc(session->mgmtpool_max * sizeof(void*),
+- GFP_KERNEL, NULL);
+- if (conn->mgmtqueue == ERR_PTR(-ENOMEM))
+- goto mgmtqueue_alloc_fail;
+-
++ INIT_LIST_HEAD(&conn->requeue);
+ INIT_WORK(&conn->xmitwork, iscsi_xmitworker);
-@@ -608,7 +611,7 @@ void aac_consumer_free(struct aac_dev * dev, struct aac_queue *q, u32 qid)
- }
- aac_adapter_notify(dev, notify);
- }
--}
-+}
+ /* allocate login_mtask used for the login/text sequences */
+@@ -1574,7 +1892,7 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
+ goto login_mtask_data_alloc_fail;
+ conn->login_mtask->data = conn->data = data;
- /**
- * aac_fib_adapter_complete - complete adapter issued fib
-@@ -630,32 +633,32 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
- if (hw_fib->header.XferState == 0) {
- if (dev->comm_interface == AAC_COMM_MESSAGE)
- kfree (hw_fib);
-- return 0;
-+ return 0;
- }
- /*
- * If we plan to do anything check the structure type first.
-- */
-- if ( hw_fib->header.StructType != FIB_MAGIC ) {
-+ */
-+ if (hw_fib->header.StructType != FIB_MAGIC) {
- if (dev->comm_interface == AAC_COMM_MESSAGE)
- kfree (hw_fib);
-- return -EINVAL;
-+ return -EINVAL;
- }
- /*
- * This block handles the case where the adapter had sent us a
- * command and we have finished processing the command. We
-- * call completeFib when we are done processing the command
-- * and want to send a response back to the adapter. This will
-+ * call completeFib when we are done processing the command
-+ * and want to send a response back to the adapter. This will
- * send the completed cdb to the adapter.
- */
- if (hw_fib->header.XferState & cpu_to_le32(SentFromAdapter)) {
- if (dev->comm_interface == AAC_COMM_MESSAGE) {
- kfree (hw_fib);
- } else {
-- u32 index;
-- hw_fib->header.XferState |= cpu_to_le32(HostProcessed);
-+ u32 index;
-+ hw_fib->header.XferState |= cpu_to_le32(HostProcessed);
- if (size) {
- size += sizeof(struct aac_fibhdr);
-- if (size > le16_to_cpu(hw_fib->header.SenderSize))
-+ if (size > le16_to_cpu(hw_fib->header.SenderSize))
- return -EMSGSIZE;
- hw_fib->header.Size = cpu_to_le16(size);
- }
-@@ -667,12 +670,11 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
- if (!(nointr & (int)aac_config.irq_mod))
- aac_adapter_notify(dev, AdapNormRespQueue);
- }
-+ } else {
-+ printk(KERN_WARNING "aac_fib_adapter_complete: "
-+ "Unknown xferstate detected.\n");
-+ BUG();
- }
-- else
-- {
-- printk(KERN_WARNING "aac_fib_adapter_complete: Unknown xferstate detected.\n");
-- BUG();
-- }
- return 0;
+- init_timer(&conn->tmabort_timer);
++ init_timer(&conn->tmf_timer);
+ init_waitqueue_head(&conn->ehwait);
+
+ return cls_conn;
+@@ -1583,8 +1901,6 @@ login_mtask_data_alloc_fail:
+ __kfifo_put(session->mgmtpool.queue, (void*)&conn->login_mtask,
+ sizeof(void*));
+ login_mtask_alloc_fail:
+- kfifo_free(conn->mgmtqueue);
+-mgmtqueue_alloc_fail:
+ iscsi_destroy_conn(cls_conn);
+ return NULL;
}
+@@ -1603,8 +1919,9 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ struct iscsi_session *session = conn->session;
+ unsigned long flags;
-@@ -682,7 +684,7 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
- *
- * Will do all necessary work to complete a FIB.
- */
--
++ del_timer_sync(&conn->transport_timer);
+
- int aac_fib_complete(struct fib *fibptr)
- {
- struct hw_fib * hw_fib = fibptr->hw_fib_va;
-@@ -692,15 +694,15 @@ int aac_fib_complete(struct fib *fibptr)
- */
+ spin_lock_bh(&session->lock);
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+ conn->c_stage = ISCSI_CONN_CLEANUP_WAIT;
+ if (session->leadconn == conn) {
+ /*
+@@ -1637,7 +1954,7 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ }
- if (hw_fib->header.XferState == 0)
-- return 0;
-+ return 0;
- /*
- * If we plan to do anything check the structure type first.
-- */
-+ */
+ /* flush queued up work because we free the connection below */
+- scsi_flush_work(session->host);
++ iscsi_suspend_tx(conn);
- if (hw_fib->header.StructType != FIB_MAGIC)
-- return -EINVAL;
-+ return -EINVAL;
- /*
-- * This block completes a cdb which orginated on the host and we
-+ * This block completes a cdb which orginated on the host and we
- * just need to deallocate the cdb or reinit it. At this point the
- * command is complete that we had sent to the adapter and this
- * cdb could be reused.
-@@ -721,7 +723,7 @@ int aac_fib_complete(struct fib *fibptr)
- fib_dealloc(fibptr);
- } else {
- BUG();
-- }
-+ }
- return 0;
+ spin_lock_bh(&session->lock);
+ kfree(conn->data);
+@@ -1648,8 +1965,6 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ session->leadconn = NULL;
+ spin_unlock_bh(&session->lock);
+
+- kfifo_free(conn->mgmtqueue);
+-
+ iscsi_destroy_conn(cls_conn);
}
+ EXPORT_SYMBOL_GPL(iscsi_conn_teardown);
+@@ -1672,11 +1987,29 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ return -EINVAL;
+ }
-@@ -741,7 +743,7 @@ void aac_printf(struct aac_dev *dev, u32 val)
- {
- int length = val & 0xffff;
- int level = (val >> 16) & 0xffff;
--
++ if (conn->ping_timeout && !conn->recv_timeout) {
++ printk(KERN_ERR "iscsi: invalid recv timeout of zero "
++ "Using 5 seconds\n.");
++ conn->recv_timeout = 5;
++ }
++
++ if (conn->recv_timeout && !conn->ping_timeout) {
++ printk(KERN_ERR "iscsi: invalid ping timeout of zero "
++ "Using 5 seconds.\n");
++ conn->ping_timeout = 5;
++ }
++
+ spin_lock_bh(&session->lock);
+ conn->c_stage = ISCSI_CONN_STARTED;
+ session->state = ISCSI_STATE_LOGGED_IN;
+ session->queued_cmdsn = session->cmdsn;
+
++ conn->last_recv = jiffies;
++ conn->last_ping = jiffies;
++ if (conn->recv_timeout && conn->ping_timeout)
++ mod_timer(&conn->transport_timer,
++ jiffies + (conn->recv_timeout * HZ));
+
+ switch(conn->stop_stage) {
+ case STOP_CONN_RECOVER:
/*
- * The size of the printfbuf is set in port.c
- * There is no variable or define for it
-@@ -755,7 +757,7 @@ void aac_printf(struct aac_dev *dev, u32 val)
- else
- printk(KERN_INFO "%s:%s", dev->name, cp);
- }
-- memset(cp, 0, 256);
-+ memset(cp, 0, 256);
- }
+@@ -1684,7 +2017,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ * commands after successful recovery
+ */
+ conn->stop_stage = 0;
+- conn->tmabort_state = TMABORT_INITIAL;
++ conn->tmf_state = TMF_INITIAL;
+ session->age++;
+ spin_unlock_bh(&session->lock);
+@@ -1709,55 +2042,27 @@ flush_control_queues(struct iscsi_session *session, struct iscsi_conn *conn)
+ struct iscsi_mgmt_task *mtask, *tmp;
-@@ -773,20 +775,20 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- {
- struct hw_fib * hw_fib = fibptr->hw_fib_va;
- struct aac_aifcmd * aifcmd = (struct aac_aifcmd *)hw_fib->data;
-- u32 container;
-+ u32 channel, id, lun, container;
- struct scsi_device *device;
- enum {
- NOTHING,
- DELETE,
- ADD,
- CHANGE
-- } device_config_needed;
-+ } device_config_needed = NOTHING;
+ /* handle pending */
+- while (__kfifo_get(conn->mgmtqueue, (void*)&mtask, sizeof(void*))) {
+- if (mtask == conn->login_mtask)
+- continue;
++ list_for_each_entry_safe(mtask, tmp, &conn->mgmtqueue, running) {
+ debug_scsi("flushing pending mgmt task itt 0x%x\n", mtask->itt);
+- __kfifo_put(session->mgmtpool.queue, (void*)&mtask,
+- sizeof(void*));
++ iscsi_free_mgmt_task(conn, mtask);
+ }
- /* Sniff for container changes */
+ /* handle running */
+ list_for_each_entry_safe(mtask, tmp, &conn->mgmt_run_list, running) {
+ debug_scsi("flushing running mgmt task itt 0x%x\n", mtask->itt);
+- list_del(&mtask->running);
+-
+- if (mtask == conn->login_mtask)
+- continue;
+- __kfifo_put(session->mgmtpool.queue, (void*)&mtask,
+- sizeof(void*));
++ iscsi_free_mgmt_task(conn, mtask);
+ }
- if (!dev || !dev->fsa_dev)
- return;
-- container = (u32)-1;
-+ container = channel = id = lun = (u32)-1;
+ conn->mtask = NULL;
+ }
- /*
- * We have set this up to try and minimize the number of
-@@ -796,13 +798,13 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+-/* Fail commands. Mutex and session lock held and recv side suspended */
+-static void fail_all_commands(struct iscsi_conn *conn)
+-{
+- struct iscsi_cmd_task *ctask, *tmp;
+-
+- /* flush pending */
+- list_for_each_entry_safe(ctask, tmp, &conn->xmitqueue, running) {
+- debug_scsi("failing pending sc %p itt 0x%x\n", ctask->sc,
+- ctask->itt);
+- fail_command(conn, ctask, DID_BUS_BUSY << 16);
+- }
+-
+- /* fail all other running */
+- list_for_each_entry_safe(ctask, tmp, &conn->run_list, running) {
+- debug_scsi("failing in progress sc %p itt 0x%x\n",
+- ctask->sc, ctask->itt);
+- fail_command(conn, ctask, DID_BUS_BUSY << 16);
+- }
+-
+- conn->ctask = NULL;
+-}
+-
+ static void iscsi_start_session_recovery(struct iscsi_session *session,
+ struct iscsi_conn *conn, int flag)
+ {
+ int old_stop_stage;
+
++ del_timer_sync(&conn->transport_timer);
++
+ mutex_lock(&session->eh_mutex);
+ spin_lock_bh(&session->lock);
+ if (conn->stop_stage == STOP_CONN_TERM) {
+@@ -1818,7 +2123,7 @@ static void iscsi_start_session_recovery(struct iscsi_session *session,
+ * flush queues.
*/
- switch (le32_to_cpu(aifcmd->command)) {
- case AifCmdDriverNotify:
-- switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
-+ switch (le32_to_cpu(((__le32 *)aifcmd->data)[0])) {
- /*
- * Morph or Expand complete
- */
- case AifDenMorphComplete:
- case AifDenVolumeExtendComplete:
-- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
-+ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
- if (container >= dev->maximum_num_containers)
- break;
+ spin_lock_bh(&session->lock);
+- fail_all_commands(conn);
++ fail_all_commands(conn, -1);
+ flush_control_queues(session, conn);
+ spin_unlock_bh(&session->lock);
+ mutex_unlock(&session->eh_mutex);
+@@ -1869,6 +2174,21 @@ int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
+ uint32_t value;
-@@ -814,9 +816,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- */
+ switch(param) {
++ case ISCSI_PARAM_FAST_ABORT:
++ sscanf(buf, "%d", &session->fast_abort);
++ break;
++ case ISCSI_PARAM_ABORT_TMO:
++ sscanf(buf, "%d", &session->abort_timeout);
++ break;
++ case ISCSI_PARAM_LU_RESET_TMO:
++ sscanf(buf, "%d", &session->lu_reset_timeout);
++ break;
++ case ISCSI_PARAM_PING_TMO:
++ sscanf(buf, "%d", &conn->ping_timeout);
++ break;
++ case ISCSI_PARAM_RECV_TMO:
++ sscanf(buf, "%d", &conn->recv_timeout);
++ break;
+ case ISCSI_PARAM_MAX_RECV_DLENGTH:
+ sscanf(buf, "%d", &conn->max_recv_dlength);
+ break;
+@@ -1983,6 +2303,15 @@ int iscsi_session_get_param(struct iscsi_cls_session *cls_session,
+ int len;
- if ((dev != NULL) && (dev->scsi_host_ptr != NULL)) {
-- device = scsi_device_lookup(dev->scsi_host_ptr,
-- CONTAINER_TO_CHANNEL(container),
-- CONTAINER_TO_ID(container),
-+ device = scsi_device_lookup(dev->scsi_host_ptr,
-+ CONTAINER_TO_CHANNEL(container),
-+ CONTAINER_TO_ID(container),
- CONTAINER_TO_LUN(container));
- if (device) {
- dev->fsa_dev[container].config_needed = CHANGE;
-@@ -835,25 +837,29 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- if (container >= dev->maximum_num_containers)
- break;
- if ((dev->fsa_dev[container].config_waiting_on ==
-- le32_to_cpu(*(u32 *)aifcmd->data)) &&
-+ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
- time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
- dev->fsa_dev[container].config_waiting_on = 0;
- } else for (container = 0;
- container < dev->maximum_num_containers; ++container) {
- if ((dev->fsa_dev[container].config_waiting_on ==
-- le32_to_cpu(*(u32 *)aifcmd->data)) &&
-+ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
- time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
- dev->fsa_dev[container].config_waiting_on = 0;
- }
+ switch(param) {
++ case ISCSI_PARAM_FAST_ABORT:
++ len = sprintf(buf, "%d\n", session->fast_abort);
++ break;
++ case ISCSI_PARAM_ABORT_TMO:
++ len = sprintf(buf, "%d\n", session->abort_timeout);
++ break;
++ case ISCSI_PARAM_LU_RESET_TMO:
++ len = sprintf(buf, "%d\n", session->lu_reset_timeout);
++ break;
+ case ISCSI_PARAM_INITIAL_R2T_EN:
+ len = sprintf(buf, "%d\n", session->initial_r2t_en);
break;
+@@ -2040,6 +2369,12 @@ int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ int len;
- case AifCmdEventNotify:
-- switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
-+ switch (le32_to_cpu(((__le32 *)aifcmd->data)[0])) {
-+ case AifEnBatteryEvent:
-+ dev->cache_protected =
-+ (((__le32 *)aifcmd->data)[1] == cpu_to_le32(3));
-+ break;
- /*
- * Add an Array.
- */
- case AifEnAddContainer:
-- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
-+ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
- if (container >= dev->maximum_num_containers)
- break;
- dev->fsa_dev[container].config_needed = ADD;
-@@ -866,7 +872,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- * Delete an Array.
- */
- case AifEnDeleteContainer:
-- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
-+ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
- if (container >= dev->maximum_num_containers)
- break;
- dev->fsa_dev[container].config_needed = DELETE;
-@@ -880,7 +886,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- * waiting on something else, setup to wait on a Config Change.
- */
- case AifEnContainerChange:
-- container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
-+ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
- if (container >= dev->maximum_num_containers)
- break;
- if (dev->fsa_dev[container].config_waiting_on &&
-@@ -895,6 +901,60 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- case AifEnConfigChange:
- break;
+ switch(param) {
++ case ISCSI_PARAM_PING_TMO:
++ len = sprintf(buf, "%u\n", conn->ping_timeout);
++ break;
++ case ISCSI_PARAM_RECV_TMO:
++ len = sprintf(buf, "%u\n", conn->recv_timeout);
++ break;
+ case ISCSI_PARAM_MAX_RECV_DLENGTH:
+ len = sprintf(buf, "%u\n", conn->max_recv_dlength);
+ break;
+diff --git a/drivers/scsi/libsas/Kconfig b/drivers/scsi/libsas/Kconfig
+index c01a40d..18f33cd 100644
+--- a/drivers/scsi/libsas/Kconfig
++++ b/drivers/scsi/libsas/Kconfig
+@@ -38,6 +38,15 @@ config SCSI_SAS_ATA
+ Builds in ATA support into libsas. Will necessitate
+ the loading of libata along with libsas.
-+ case AifEnAddJBOD:
-+ case AifEnDeleteJBOD:
-+ container = le32_to_cpu(((__le32 *)aifcmd->data)[1]);
-+ if ((container >> 28))
-+ break;
-+ channel = (container >> 24) & 0xF;
-+ if (channel >= dev->maximum_num_channels)
-+ break;
-+ id = container & 0xFFFF;
-+ if (id >= dev->maximum_num_physicals)
-+ break;
-+ lun = (container >> 16) & 0xFF;
-+ channel = aac_phys_to_logical(channel);
-+ device_config_needed =
-+ (((__le32 *)aifcmd->data)[0] ==
-+ cpu_to_le32(AifEnAddJBOD)) ? ADD : DELETE;
-+ break;
++config SCSI_SAS_HOST_SMP
++ bool "Support for SMP interpretation for SAS hosts"
++ default y
++ depends on SCSI_SAS_LIBSAS
++ help
++ Allows sas hosts to receive SMP frames. Selecting this
++ option builds an SMP interpreter into libsas. Say
++ N here if you want to save the few kb this consumes.
+
-+ case AifEnEnclosureManagement:
-+ /*
-+ * If in JBOD mode, automatic exposure of new
-+ * physical target to be suppressed until configured.
-+ */
-+ if (dev->jbod)
-+ break;
-+ switch (le32_to_cpu(((__le32 *)aifcmd->data)[3])) {
-+ case EM_DRIVE_INSERTION:
-+ case EM_DRIVE_REMOVAL:
-+ container = le32_to_cpu(
-+ ((__le32 *)aifcmd->data)[2]);
-+ if ((container >> 28))
-+ break;
-+ channel = (container >> 24) & 0xF;
-+ if (channel >= dev->maximum_num_channels)
-+ break;
-+ id = container & 0xFFFF;
-+ lun = (container >> 16) & 0xFF;
-+ if (id >= dev->maximum_num_physicals) {
-+ /* legacy dev_t ? */
-+ if ((0x2000 <= id) || lun || channel ||
-+ ((channel = (id >> 7) & 0x3F) >=
-+ dev->maximum_num_channels))
-+ break;
-+ lun = (id >> 4) & 7;
-+ id &= 0xF;
-+ }
-+ channel = aac_phys_to_logical(channel);
-+ device_config_needed =
-+ (((__le32 *)aifcmd->data)[3]
-+ == cpu_to_le32(EM_DRIVE_INSERTION)) ?
-+ ADD : DELETE;
-+ break;
-+ }
-+ break;
- }
+ config SCSI_SAS_LIBSAS_DEBUG
+ bool "Compile the SAS Domain Transport Attributes in debug mode"
+ default y
+diff --git a/drivers/scsi/libsas/Makefile b/drivers/scsi/libsas/Makefile
+index fd387b9..1ad1323 100644
+--- a/drivers/scsi/libsas/Makefile
++++ b/drivers/scsi/libsas/Makefile
+@@ -33,5 +33,7 @@ libsas-y += sas_init.o \
+ sas_dump.o \
+ sas_discover.o \
+ sas_expander.o \
+- sas_scsi_host.o
++ sas_scsi_host.o \
++ sas_task.o
+ libsas-$(CONFIG_SCSI_SAS_ATA) += sas_ata.o
++libsas-$(CONFIG_SCSI_SAS_HOST_SMP) += sas_host_smp.o
+\ No newline at end of file
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 0829b55..0996f86 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -158,8 +158,8 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ struct Scsi_Host *host = sas_ha->core.shost;
+ struct sas_internal *i = to_sas_internal(host->transportt);
+ struct scatterlist *sg;
+- unsigned int num = 0;
+ unsigned int xfer = 0;
++ unsigned int si;
- /*
-@@ -905,13 +965,13 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- if (container >= dev->maximum_num_containers)
- break;
- if ((dev->fsa_dev[container].config_waiting_on ==
-- le32_to_cpu(*(u32 *)aifcmd->data)) &&
-+ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
- time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
- dev->fsa_dev[container].config_waiting_on = 0;
- } else for (container = 0;
- container < dev->maximum_num_containers; ++container) {
- if ((dev->fsa_dev[container].config_waiting_on ==
-- le32_to_cpu(*(u32 *)aifcmd->data)) &&
-+ le32_to_cpu(*(__le32 *)aifcmd->data)) &&
- time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT))
- dev->fsa_dev[container].config_waiting_on = 0;
- }
-@@ -926,9 +986,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- * wait for a container change.
- */
+ task = sas_alloc_task(GFP_ATOMIC);
+ if (!task)
+@@ -176,22 +176,20 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
-- if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
-- && ((((u32 *)aifcmd->data)[6] == ((u32 *)aifcmd->data)[5])
-- || (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsSuccess)))) {
-+ if (((__le32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero) &&
-+ (((__le32 *)aifcmd->data)[6] == ((__le32 *)aifcmd->data)[5] ||
-+ ((__le32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsSuccess))) {
- for (container = 0;
- container < dev->maximum_num_containers;
- ++container) {
-@@ -943,9 +1003,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- jiffies;
- }
+ ata_tf_to_fis(&qc->tf, 1, 0, (u8*)&task->ata_task.fis);
+ task->uldd_task = qc;
+- if (is_atapi_taskfile(&qc->tf)) {
++ if (ata_is_atapi(qc->tf.protocol)) {
+ memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
+ task->total_xfer_len = qc->nbytes + qc->pad_len;
+ task->num_scatter = qc->pad_len ? qc->n_elem + 1 : qc->n_elem;
+ } else {
+- ata_for_each_sg(sg, qc) {
+- num++;
++ for_each_sg(qc->sg, sg, qc->n_elem, si)
+ xfer += sg->length;
+- }
+
+ task->total_xfer_len = xfer;
+- task->num_scatter = num;
++ task->num_scatter = si;
+ }
+
+ task->data_dir = qc->dma_dir;
+- task->scatter = qc->__sg;
++ task->scatter = qc->sg;
+ task->ata_task.retry_count = 1;
+ task->task_state_flags = SAS_TASK_STATE_PENDING;
+ qc->lldd_task = task;
+@@ -200,7 +198,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ case ATA_PROT_NCQ:
+ task->ata_task.use_ncq = 1;
+ /* fall through */
+- case ATA_PROT_ATAPI_DMA:
++ case ATAPI_PROT_DMA:
+ case ATA_PROT_DMA:
+ task->ata_task.dma_xfer = 1;
+ break;
+@@ -500,7 +498,7 @@ static int sas_execute_task(struct sas_task *task, void *buffer, int size,
+ goto ex_err;
+ }
+ wait_for_completion(&task->completion);
+- res = -ETASK;
++ res = -ECOMM;
+ if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
+ int res2;
+ SAS_DPRINTK("task aborted, flags:0x%x\n",
+diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
+index 5f3a0d7..31b9af2 100644
+--- a/drivers/scsi/libsas/sas_discover.c
++++ b/drivers/scsi/libsas/sas_discover.c
+@@ -98,7 +98,7 @@ static int sas_get_port_device(struct asd_sas_port *port)
+ dev->dev_type = SATA_PM;
+ else
+ dev->dev_type = SATA_DEV;
+- dev->tproto = SATA_PROTO;
++ dev->tproto = SAS_PROTOCOL_SATA;
+ } else {
+ struct sas_identify_frame *id =
+ (struct sas_identify_frame *) dev->frame_rcvd;
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 8727436..aefd865 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -96,7 +96,7 @@ static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
}
-- if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
-- && (((u32 *)aifcmd->data)[6] == 0)
-- && (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsRunning))) {
-+ if (((__le32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero) &&
-+ ((__le32 *)aifcmd->data)[6] == 0 &&
-+ ((__le32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsRunning)) {
- for (container = 0;
- container < dev->maximum_num_containers;
- ++container) {
-@@ -963,7 +1023,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- break;
- }
-- device_config_needed = NOTHING;
-+ if (device_config_needed == NOTHING)
- for (container = 0; container < dev->maximum_num_containers;
- ++container) {
- if ((dev->fsa_dev[container].config_waiting_on == 0) &&
-@@ -972,6 +1032,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- device_config_needed =
- dev->fsa_dev[container].config_needed;
- dev->fsa_dev[container].config_needed = NOTHING;
-+ channel = CONTAINER_TO_CHANNEL(container);
-+ id = CONTAINER_TO_ID(container);
-+ lun = CONTAINER_TO_LUN(container);
+ wait_for_completion(&task->completion);
+- res = -ETASK;
++ res = -ECOMM;
+ if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+ SAS_DPRINTK("smp task timed out or aborted\n");
+ i->dft->lldd_abort_task(task);
+@@ -109,6 +109,16 @@ static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
+ task->task_status.stat == SAM_GOOD) {
+ res = 0;
break;
- }
- }
-@@ -995,34 +1058,56 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- /*
- * force reload of disk info via aac_probe_container
- */
-- if ((device_config_needed == CHANGE)
-- && (dev->fsa_dev[container].valid == 1))
-- dev->fsa_dev[container].valid = 2;
-- if ((device_config_needed == CHANGE) ||
-- (device_config_needed == ADD))
-+ if ((channel == CONTAINER_CHANNEL) &&
-+ (device_config_needed != NOTHING)) {
-+ if (dev->fsa_dev[container].valid == 1)
-+ dev->fsa_dev[container].valid = 2;
- aac_probe_container(dev, container);
-- device = scsi_device_lookup(dev->scsi_host_ptr,
-- CONTAINER_TO_CHANNEL(container),
-- CONTAINER_TO_ID(container),
-- CONTAINER_TO_LUN(container));
-+ }
-+ device = scsi_device_lookup(dev->scsi_host_ptr, channel, id, lun);
- if (device) {
- switch (device_config_needed) {
- case DELETE:
-+ if (scsi_device_online(device)) {
-+ scsi_device_set_state(device, SDEV_OFFLINE);
-+ sdev_printk(KERN_INFO, device,
-+ "Device offlined - %s\n",
-+ (channel == CONTAINER_CHANNEL) ?
-+ "array deleted" :
-+ "enclosure services event");
-+ }
++ } if (task->task_status.resp == SAS_TASK_COMPLETE &&
++ task->task_status.stat == SAS_DATA_UNDERRUN) {
++ /* no error, but return the number of bytes of
++ * underrun */
++ res = task->task_status.residual;
+ break;
-+ case ADD:
-+ if (!scsi_device_online(device)) {
-+ sdev_printk(KERN_INFO, device,
-+ "Device online - %s\n",
-+ (channel == CONTAINER_CHANNEL) ?
-+ "array created" :
-+ "enclosure services event");
-+ scsi_device_set_state(device, SDEV_RUNNING);
-+ }
-+ /* FALLTHRU */
- case CHANGE:
-+ if ((channel == CONTAINER_CHANNEL)
-+ && (!dev->fsa_dev[container].valid)) {
-+ if (!scsi_device_online(device))
-+ break;
-+ scsi_device_set_state(device, SDEV_OFFLINE);
-+ sdev_printk(KERN_INFO, device,
-+ "Device offlined - %s\n",
-+ "array failed");
-+ break;
-+ }
- scsi_rescan_device(&device->sdev_gendev);
++ } if (task->task_status.resp == SAS_TASK_COMPLETE &&
++ task->task_status.stat == SAS_DATA_OVERRUN) {
++ res = -EMSGSIZE;
++ break;
+ } else {
+ SAS_DPRINTK("%s: task to dev %016llx response: 0x%x "
+ "status 0x%x\n", __FUNCTION__,
+@@ -656,9 +666,9 @@ static struct domain_device *sas_ex_discover_end_dev(
+ sas_ex_get_linkrate(parent, child, phy);
- default:
- break;
+ #ifdef CONFIG_SCSI_SAS_ATA
+- if ((phy->attached_tproto & SAS_PROTO_STP) || phy->attached_sata_dev) {
++ if ((phy->attached_tproto & SAS_PROTOCOL_STP) || phy->attached_sata_dev) {
+ child->dev_type = SATA_DEV;
+- if (phy->attached_tproto & SAS_PROTO_STP)
++ if (phy->attached_tproto & SAS_PROTOCOL_STP)
+ child->tproto = phy->attached_tproto;
+ if (phy->attached_sata_dev)
+ child->tproto |= SATA_DEV;
+@@ -695,7 +705,7 @@ static struct domain_device *sas_ex_discover_end_dev(
}
- scsi_device_put(device);
-+ device_config_needed = NOTHING;
+ } else
+ #endif
+- if (phy->attached_tproto & SAS_PROTO_SSP) {
++ if (phy->attached_tproto & SAS_PROTOCOL_SSP) {
+ child->dev_type = SAS_END_DEV;
+ rphy = sas_end_device_alloc(phy->port);
+ /* FIXME: error handling */
+@@ -1896,11 +1906,9 @@ int sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
}
-- if (device_config_needed == ADD) {
-- scsi_add_device(dev->scsi_host_ptr,
-- CONTAINER_TO_CHANNEL(container),
-- CONTAINER_TO_ID(container),
-- CONTAINER_TO_LUN(container));
+
+ /* no rphy means no smp target support (ie aic94xx host) */
+- if (!rphy) {
+- printk("%s: can we send a smp request to a host?\n",
+- __FUNCTION__);
+- return -EINVAL;
- }
--
-+ if (device_config_needed == ADD)
-+ scsi_add_device(dev->scsi_host_ptr, channel, id, lun);
- }
++ if (!rphy)
++ return sas_smp_host_handler(shost, req, rsp);
++
+ type = rphy->identify.device_type;
- static int _aac_reset_adapter(struct aac_dev *aac, int forced)
-@@ -1099,7 +1184,8 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
- free_irq(aac->pdev->irq, aac);
- kfree(aac->fsa_dev);
- aac->fsa_dev = NULL;
-- if (aac_get_driver_ident(index)->quirks & AAC_QUIRK_31BIT) {
-+ quirks = aac_get_driver_ident(index)->quirks;
-+ if (quirks & AAC_QUIRK_31BIT) {
- if (((retval = pci_set_dma_mask(aac->pdev, DMA_31BIT_MASK))) ||
- ((retval = pci_set_consistent_dma_mask(aac->pdev, DMA_31BIT_MASK))))
- goto out;
-@@ -1110,7 +1196,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
- }
- if ((retval = (*(aac_get_driver_ident(index)->init))(aac)))
- goto out;
-- if (aac_get_driver_ident(index)->quirks & AAC_QUIRK_31BIT)
-+ if (quirks & AAC_QUIRK_31BIT)
- if ((retval = pci_set_dma_mask(aac->pdev, DMA_32BIT_MASK)))
- goto out;
- if (jafo) {
-@@ -1121,15 +1207,14 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
- }
- }
- (void)aac_get_adapter_info(aac);
-- quirks = aac_get_driver_ident(index)->quirks;
- if ((quirks & AAC_QUIRK_34SG) && (host->sg_tablesize > 34)) {
-- host->sg_tablesize = 34;
-- host->max_sectors = (host->sg_tablesize * 8) + 112;
-- }
-- if ((quirks & AAC_QUIRK_17SG) && (host->sg_tablesize > 17)) {
-- host->sg_tablesize = 17;
-- host->max_sectors = (host->sg_tablesize * 8) + 112;
-- }
-+ host->sg_tablesize = 34;
-+ host->max_sectors = (host->sg_tablesize * 8) + 112;
+ if (type != SAS_EDGE_EXPANDER_DEVICE &&
+@@ -1926,6 +1934,15 @@ int sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
+
+ ret = smp_execute_task(dev, bio_data(req->bio), req->data_len,
+ bio_data(rsp->bio), rsp->data_len);
++ if (ret > 0) {
++ /* positive number is the untransferred residual */
++ rsp->data_len = ret;
++ req->data_len = 0;
++ ret = 0;
++ } else if (ret == 0) {
++ rsp->data_len = 0;
++ req->data_len = 0;
+ }
-+ if ((quirks & AAC_QUIRK_17SG) && (host->sg_tablesize > 17)) {
-+ host->sg_tablesize = 17;
-+ host->max_sectors = (host->sg_tablesize * 8) + 112;
+
+ return ret;
+ }
+diff --git a/drivers/scsi/libsas/sas_host_smp.c b/drivers/scsi/libsas/sas_host_smp.c
+new file mode 100644
+index 0000000..16f9312
+--- /dev/null
++++ b/drivers/scsi/libsas/sas_host_smp.c
+@@ -0,0 +1,274 @@
++/*
++ * Serial Attached SCSI (SAS) Expander discovery and configuration
++ *
++ * Copyright (C) 2007 James E.J. Bottomley
++ * <James.Bottomley at HansenPartnership.com>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License as
++ * published by the Free Software Foundation; version 2 only.
++ */
++#include <linux/scatterlist.h>
++#include <linux/blkdev.h>
++
++#include "sas_internal.h"
++
++#include <scsi/scsi_transport.h>
++#include <scsi/scsi_transport_sas.h>
++#include "../scsi_sas_internal.h"
++
++static void sas_host_smp_discover(struct sas_ha_struct *sas_ha, u8 *resp_data,
++ u8 phy_id)
++{
++ struct sas_phy *phy;
++ struct sas_rphy *rphy;
++
++ if (phy_id >= sas_ha->num_phys) {
++ resp_data[2] = SMP_RESP_NO_PHY;
++ return;
+ }
- aac_get_config_status(aac, 1);
- aac_get_containers(aac);
- /*
-@@ -1217,12 +1302,13 @@ int aac_reset_adapter(struct aac_dev * aac, int forced)
- }
++ resp_data[2] = SMP_RESP_FUNC_ACC;
++
++ phy = sas_ha->sas_phy[phy_id]->phy;
++ resp_data[9] = phy_id;
++ resp_data[13] = phy->negotiated_linkrate;
++ memcpy(resp_data + 16, sas_ha->sas_addr, SAS_ADDR_SIZE);
++ memcpy(resp_data + 24, sas_ha->sas_phy[phy_id]->attached_sas_addr,
++ SAS_ADDR_SIZE);
++ resp_data[40] = (phy->minimum_linkrate << 4) |
++ phy->minimum_linkrate_hw;
++ resp_data[41] = (phy->maximum_linkrate << 4) |
++ phy->maximum_linkrate_hw;
++
++ if (!sas_ha->sas_phy[phy_id]->port ||
++ !sas_ha->sas_phy[phy_id]->port->port_dev)
++ return;
++
++ rphy = sas_ha->sas_phy[phy_id]->port->port_dev->rphy;
++ resp_data[12] = rphy->identify.device_type << 4;
++ resp_data[14] = rphy->identify.initiator_port_protocols;
++ resp_data[15] = rphy->identify.target_port_protocols;
++}
++
++static void sas_report_phy_sata(struct sas_ha_struct *sas_ha, u8 *resp_data,
++ u8 phy_id)
++{
++ struct sas_rphy *rphy;
++ struct dev_to_host_fis *fis;
++ int i;
++
++ if (phy_id >= sas_ha->num_phys) {
++ resp_data[2] = SMP_RESP_NO_PHY;
++ return;
++ }
++
++ resp_data[2] = SMP_RESP_PHY_NO_SATA;
++
++ if (!sas_ha->sas_phy[phy_id]->port)
++ return;
++
++ rphy = sas_ha->sas_phy[phy_id]->port->port_dev->rphy;
++ fis = (struct dev_to_host_fis *)
++ sas_ha->sas_phy[phy_id]->port->port_dev->frame_rcvd;
++ if (rphy->identify.target_port_protocols != SAS_PROTOCOL_SATA)
++ return;
++
++ resp_data[2] = SMP_RESP_FUNC_ACC;
++ resp_data[9] = phy_id;
++ memcpy(resp_data + 16, sas_ha->sas_phy[phy_id]->attached_sas_addr,
++ SAS_ADDR_SIZE);
++
++ /* check to see if we have a valid d2h fis */
++ if (fis->fis_type != 0x34)
++ return;
++
++ /* the d2h fis is required by the standard to be in LE format */
++ for (i = 0; i < 20; i += 4) {
++ u8 *dst = resp_data + 24 + i, *src =
++ &sas_ha->sas_phy[phy_id]->port->port_dev->frame_rcvd[i];
++ dst[0] = src[3];
++ dst[1] = src[2];
++ dst[2] = src[1];
++ dst[3] = src[0];
++ }
++}
++
++static void sas_phy_control(struct sas_ha_struct *sas_ha, u8 phy_id,
++ u8 phy_op, enum sas_linkrate min,
++ enum sas_linkrate max, u8 *resp_data)
++{
++ struct sas_internal *i =
++ to_sas_internal(sas_ha->core.shost->transportt);
++ struct sas_phy_linkrates rates;
++
++ if (phy_id >= sas_ha->num_phys) {
++ resp_data[2] = SMP_RESP_NO_PHY;
++ return;
++ }
++ switch (phy_op) {
++ case PHY_FUNC_NOP:
++ case PHY_FUNC_LINK_RESET:
++ case PHY_FUNC_HARD_RESET:
++ case PHY_FUNC_DISABLE:
++ case PHY_FUNC_CLEAR_ERROR_LOG:
++ case PHY_FUNC_CLEAR_AFFIL:
++ case PHY_FUNC_TX_SATA_PS_SIGNAL:
++ break;
++
++ default:
++ resp_data[2] = SMP_RESP_PHY_UNK_OP;
++ return;
++ }
++
++ rates.minimum_linkrate = min;
++ rates.maximum_linkrate = max;
++
++ if (i->dft->lldd_control_phy(sas_ha->sas_phy[phy_id], phy_op, &rates))
++ resp_data[2] = SMP_RESP_FUNC_FAILED;
++ else
++ resp_data[2] = SMP_RESP_FUNC_ACC;
++}
++
++int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
++ struct request *rsp)
++{
++ u8 *req_data = NULL, *resp_data = NULL, *buf;
++ struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
++ int error = -EINVAL, resp_data_len = rsp->data_len;
++
++ /* eight is the minimum size for request and response frames */
++ if (req->data_len < 8 || rsp->data_len < 8)
++ goto out;
++
++ if (bio_offset(req->bio) + req->data_len > PAGE_SIZE ||
++ bio_offset(rsp->bio) + rsp->data_len > PAGE_SIZE) {
++ shost_printk(KERN_ERR, shost,
++ "SMP request/response frame crosses page boundary\n");
++ goto out;
++ }
++
++ req_data = kzalloc(req->data_len, GFP_KERNEL);
++
++ /* make sure frame can always be built ... we copy
++ * back only the requested length */
++ resp_data = kzalloc(max(rsp->data_len, 128U), GFP_KERNEL);
++
++ if (!req_data || !resp_data) {
++ error = -ENOMEM;
++ goto out;
++ }
++
++ local_irq_disable();
++ buf = kmap_atomic(bio_page(req->bio), KM_USER0) + bio_offset(req->bio);
++ memcpy(req_data, buf, req->data_len);
++ kunmap_atomic(buf - bio_offset(req->bio), KM_USER0);
++ local_irq_enable();
++
++ if (req_data[0] != SMP_REQUEST)
++ goto out;
++
++ /* always succeeds ... even if we can't process the request
++ * the result is in the response frame */
++ error = 0;
++
++ /* set up default don't know response */
++ resp_data[0] = SMP_RESPONSE;
++ resp_data[1] = req_data[1];
++ resp_data[2] = SMP_RESP_FUNC_UNK;
++
++ switch (req_data[1]) {
++ case SMP_REPORT_GENERAL:
++ req->data_len -= 8;
++ resp_data_len -= 32;
++ resp_data[2] = SMP_RESP_FUNC_ACC;
++ resp_data[9] = sas_ha->num_phys;
++ break;
++
++ case SMP_REPORT_MANUF_INFO:
++ req->data_len -= 8;
++ resp_data_len -= 64;
++ resp_data[2] = SMP_RESP_FUNC_ACC;
++ memcpy(resp_data + 12, shost->hostt->name,
++ SAS_EXPANDER_VENDOR_ID_LEN);
++ memcpy(resp_data + 20, "libsas virt phy",
++ SAS_EXPANDER_PRODUCT_ID_LEN);
++ break;
++
++ case SMP_READ_GPIO_REG:
++ /* FIXME: need GPIO support in the transport class */
++ break;
++
++ case SMP_DISCOVER:
++ req->data_len -= 16;
++ if (req->data_len < 0) {
++ req->data_len = 0;
++ error = -EINVAL;
++ goto out;
++ }
++ resp_data_len -= 56;
++ sas_host_smp_discover(sas_ha, resp_data, req_data[9]);
++ break;
++
++ case SMP_REPORT_PHY_ERR_LOG:
++ /* FIXME: could implement this with additional
++ * libsas callbacks providing the HW supports it */
++ break;
++
++ case SMP_REPORT_PHY_SATA:
++ req->data_len -= 16;
++ if (req->data_len < 0) {
++ req->data_len = 0;
++ error = -EINVAL;
++ goto out;
++ }
++ resp_data_len -= 60;
++ sas_report_phy_sata(sas_ha, resp_data, req_data[9]);
++ break;
++
++ case SMP_REPORT_ROUTE_INFO:
++ /* Can't implement; hosts have no routes */
++ break;
++
++ case SMP_WRITE_GPIO_REG:
++ /* FIXME: need GPIO support in the transport class */
++ break;
++
++ case SMP_CONF_ROUTE_INFO:
++ /* Can't implement; hosts have no routes */
++ break;
++
++ case SMP_PHY_CONTROL:
++ req->data_len -= 44;
++ if (req->data_len < 0) {
++ req->data_len = 0;
++ error = -EINVAL;
++ goto out;
++ }
++ resp_data_len -= 8;
++ sas_phy_control(sas_ha, req_data[9], req_data[10],
++ req_data[32] >> 4, req_data[33] >> 4,
++ resp_data);
++ break;
++
++ case SMP_PHY_TEST_FUNCTION:
++ /* FIXME: should this be implemented? */
++ break;
++
++ default:
++ /* probably a 2.0 function */
++ break;
++ }
++
++ local_irq_disable();
++ buf = kmap_atomic(bio_page(rsp->bio), KM_USER0) + bio_offset(rsp->bio);
++ memcpy(buf, resp_data, rsp->data_len);
++ flush_kernel_dcache_page(bio_page(rsp->bio));
++ kunmap_atomic(buf - bio_offset(rsp->bio), KM_USER0);
++ local_irq_enable();
++ rsp->data_len = resp_data_len;
++
++ out:
++ kfree(req_data);
++ kfree(resp_data);
++ return error;
++}
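For review context: the 4-byte reversal loop in sas_report_phy_sata() above converts the little-endian D2H FIS words into the big-endian layout the SMP REPORT PHY SATA response expects. A hedged user-space sketch of that transform (copy_fis_be() is an illustrative name, not a kernel function):

```c
#include <stdint.h>
#include <stddef.h>

/* Byte-reverse each aligned 4-byte word while copying, as the loop in
 * sas_report_phy_sata() does for the 20-byte D2H FIS. Trailing bytes
 * that do not fill a full word are left uncopied, matching the loop
 * condition above. */
static void copy_fis_be(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t i;

	for (i = 0; i + 4 <= len; i += 4) {
		dst[i + 0] = src[i + 3];
		dst[i + 1] = src[i + 2];
		dst[i + 2] = src[i + 1];
		dst[i + 3] = src[i + 0];
	}
}
```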
+diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
+index 2b8213b..b4f9368 100644
+--- a/drivers/scsi/libsas/sas_internal.h
++++ b/drivers/scsi/libsas/sas_internal.h
+@@ -45,7 +45,7 @@
+ void sas_scsi_recover_host(struct Scsi_Host *shost);
- /* Quiesce build, flush cache, write through mode */
-- aac_send_shutdown(aac);
-+ if (forced < 2)
-+ aac_send_shutdown(aac);
- spin_lock_irqsave(host->host_lock, flagv);
-- retval = _aac_reset_adapter(aac, forced);
-+ retval = _aac_reset_adapter(aac, forced ? forced : ((aac_check_reset != 0) && (aac_check_reset != 1)));
- spin_unlock_irqrestore(host->host_lock, flagv);
+ int sas_show_class(enum sas_class class, char *buf);
+-int sas_show_proto(enum sas_proto proto, char *buf);
++int sas_show_proto(enum sas_protocol proto, char *buf);
+ int sas_show_linkrate(enum sas_linkrate linkrate, char *buf);
+ int sas_show_oob_mode(enum sas_oob_mode oob_mode, char *buf);
-- if (retval == -ENODEV) {
-+ if ((forced < 2) && (retval == -ENODEV)) {
- /* Unwind aac_send_shutdown() IOP_RESET unsupported/disabled */
- struct fib * fibctx = aac_fib_alloc(aac);
- if (fibctx) {
-@@ -1338,11 +1424,11 @@ int aac_check_health(struct aac_dev * aac)
- fib->data = hw_fib->data;
- aif = (struct aac_aifcmd *)hw_fib->data;
- aif->command = cpu_to_le32(AifCmdEventNotify);
-- aif->seqnum = cpu_to_le32(0xFFFFFFFF);
-- aif->data[0] = AifEnExpEvent;
-- aif->data[1] = AifExeFirmwarePanic;
-- aif->data[2] = AifHighPriority;
-- aif->data[3] = BlinkLED;
-+ aif->seqnum = cpu_to_le32(0xFFFFFFFF);
-+ ((__le32 *)aif->data)[0] = cpu_to_le32(AifEnExpEvent);
-+ ((__le32 *)aif->data)[1] = cpu_to_le32(AifExeFirmwarePanic);
-+ ((__le32 *)aif->data)[2] = cpu_to_le32(AifHighPriority);
-+ ((__le32 *)aif->data)[3] = cpu_to_le32(BlinkLED);
+@@ -80,6 +80,20 @@ struct domain_device *sas_find_dev_by_rphy(struct sas_rphy *rphy);
- /*
- * Put the FIB onto the
-@@ -1372,14 +1458,14 @@ int aac_check_health(struct aac_dev * aac)
+ void sas_hae_reset(struct work_struct *work);
- printk(KERN_ERR "%s: Host adapter BLINK LED 0x%x\n", aac->name, BlinkLED);
++#ifdef CONFIG_SCSI_SAS_HOST_SMP
++extern int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
++ struct request *rsp);
++#else
++static inline int sas_smp_host_handler(struct Scsi_Host *shost,
++ struct request *req,
++ struct request *rsp)
++{
++ shost_printk(KERN_ERR, shost,
++ "Cannot send SMP to a sas host (not enabled in CONFIG)\n");
++ return -EINVAL;
++}
++#endif
++
+ static inline void sas_queue_event(int event, spinlock_t *lock,
+ unsigned long *pending,
+ struct work_struct *work,
+diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c
+index 7663841..f869fba 100644
+--- a/drivers/scsi/libsas/sas_scsi_host.c
++++ b/drivers/scsi/libsas/sas_scsi_host.c
+@@ -108,7 +108,7 @@ static void sas_scsi_task_done(struct sas_task *task)
+ break;
+ case SAM_CHECK_COND:
+ memcpy(sc->sense_buffer, ts->buf,
+- max(SCSI_SENSE_BUFFERSIZE, ts->buf_valid_size));
++ min(SCSI_SENSE_BUFFERSIZE, ts->buf_valid_size));
+ stat = SAM_CHECK_COND;
+ break;
+ default:
+@@ -148,7 +148,6 @@ static struct sas_task *sas_create_task(struct scsi_cmnd *cmd,
+ if (!task)
+ return NULL;
-- if (!aac_check_reset ||
-+ if (!aac_check_reset || ((aac_check_reset != 1) &&
- (aac->supplement_adapter_info.SupportedOptions2 &
-- le32_to_cpu(AAC_OPTION_IGNORE_RESET)))
-+ AAC_OPTION_IGNORE_RESET)))
- goto out;
- host = aac->scsi_host_ptr;
- if (aac->thread->pid != current->pid)
- spin_lock_irqsave(host->host_lock, flagv);
-- BlinkLED = _aac_reset_adapter(aac, 0);
-+ BlinkLED = _aac_reset_adapter(aac, aac_check_reset != 1);
- if (aac->thread->pid != current->pid)
- spin_unlock_irqrestore(host->host_lock, flagv);
- return BlinkLED;
-@@ -1399,7 +1485,7 @@ out:
- * until the queue is empty. When the queue is empty it will wait for
- * more FIBs.
+- *(u32 *)cmd->sense_buffer = 0;
+ task->uldd_task = cmd;
+ ASSIGN_SAS_TASK(cmd, task);
+
+@@ -200,6 +199,10 @@ int sas_queue_up(struct sas_task *task)
*/
--
-+
- int aac_command_thread(void *data)
+ int sas_queuecommand(struct scsi_cmnd *cmd,
+ void (*scsi_done)(struct scsi_cmnd *))
++ __releases(host->host_lock)
++ __acquires(dev->sata_dev.ap->lock)
++ __releases(dev->sata_dev.ap->lock)
++ __acquires(host->host_lock)
{
- struct aac_dev *dev = data;
-@@ -1425,30 +1511,29 @@ int aac_command_thread(void *data)
- add_wait_queue(&dev->queues->queue[HostNormCmdQueue].cmdready, &wait);
- set_current_state(TASK_INTERRUPTIBLE);
- dprintk ((KERN_INFO "aac_command_thread start\n"));
-- while(1)
-- {
-+ while (1) {
- spin_lock_irqsave(dev->queues->queue[HostNormCmdQueue].lock, flags);
- while(!list_empty(&(dev->queues->queue[HostNormCmdQueue].cmdq))) {
- struct list_head *entry;
- struct aac_aifcmd * aifcmd;
+ int res = 0;
+ struct domain_device *dev = cmd_to_domain_dev(cmd);
+@@ -410,7 +413,7 @@ static int sas_recover_I_T(struct domain_device *dev)
+ }
- set_current_state(TASK_RUNNING);
--
+ /* Find the sas_phy that's attached to this device */
+-struct sas_phy *find_local_sas_phy(struct domain_device *dev)
++static struct sas_phy *find_local_sas_phy(struct domain_device *dev)
+ {
+ struct domain_device *pdev = dev->parent;
+ struct ex_phy *exphy = NULL;
+@@ -464,7 +467,7 @@ int sas_eh_bus_reset_handler(struct scsi_cmnd *cmd)
+ res = sas_phy_reset(phy, 1);
+ if (res)
+ SAS_DPRINTK("Bus reset of %s failed 0x%x\n",
+- phy->dev.kobj.k_name,
++ kobject_name(&phy->dev.kobj),
+ res);
+ if (res == TMF_RESP_FUNC_SUCC || res == TMF_RESP_FUNC_COMPLETE)
+ return SUCCESS;
+diff --git a/drivers/scsi/libsas/sas_task.c b/drivers/scsi/libsas/sas_task.c
+new file mode 100644
+index 0000000..594524d
+--- /dev/null
++++ b/drivers/scsi/libsas/sas_task.c
+@@ -0,0 +1,36 @@
++#include <linux/kernel.h>
++#include <scsi/sas.h>
++#include <scsi/libsas.h>
+
- entry = dev->queues->queue[HostNormCmdQueue].cmdq.next;
- list_del(entry);
--
++/* fill task_status_struct based on SSP response frame */
++void sas_ssp_task_response(struct device *dev, struct sas_task *task,
++ struct ssp_response_iu *iu)
++{
++ struct task_status_struct *tstat = &task->task_status;
+
- spin_unlock_irqrestore(dev->queues->queue[HostNormCmdQueue].lock, flags);
- fib = list_entry(entry, struct fib, fiblink);
- /*
-- * We will process the FIB here or pass it to a
-- * worker thread that is TBD. We Really can't
-+ * We will process the FIB here or pass it to a
-+ * worker thread that is TBD. We Really can't
- * do anything at this point since we don't have
- * anything defined for this thread to do.
- */
- hw_fib = fib->hw_fib_va;
- memset(fib, 0, sizeof(struct fib));
- fib->type = FSAFS_NTC_FIB_CONTEXT;
-- fib->size = sizeof( struct fib );
-+ fib->size = sizeof(struct fib);
- fib->hw_fib_va = hw_fib;
- fib->data = hw_fib->data;
- fib->dev = dev;
-@@ -1462,20 +1547,19 @@ int aac_command_thread(void *data)
- *(__le32 *)hw_fib->data = cpu_to_le32(ST_OK);
- aac_fib_adapter_complete(fib, (u16)sizeof(u32));
- } else {
-- struct list_head *entry;
- /* The u32 here is important and intended. We are using
- 32bit wrapping time to fit the adapter field */
--
++ tstat->resp = SAS_TASK_COMPLETE;
+
- u32 time_now, time_last;
- unsigned long flagv;
- unsigned num;
- struct hw_fib ** hw_fib_pool, ** hw_fib_p;
- struct fib ** fib_pool, ** fib_p;
--
++ if (iu->datapres == 0)
++ tstat->stat = iu->status;
++ else if (iu->datapres == 1)
++ tstat->stat = iu->resp_data[3];
++ else if (iu->datapres == 2) {
++ tstat->stat = SAM_CHECK_COND;
++ tstat->buf_valid_size =
++ min_t(int, SAS_STATUS_BUF_SIZE,
++ be32_to_cpu(iu->sense_data_len));
++ memcpy(tstat->buf, iu->sense_data, tstat->buf_valid_size);
+
- /* Sniff events */
-- if ((aifcmd->command ==
-+ if ((aifcmd->command ==
- cpu_to_le32(AifCmdEventNotify)) ||
-- (aifcmd->command ==
-+ (aifcmd->command ==
- cpu_to_le32(AifCmdJobProgress))) {
- aac_handle_aif(dev, fib);
- }
-@@ -1527,7 +1611,7 @@ int aac_command_thread(void *data)
- spin_lock_irqsave(&dev->fib_lock, flagv);
- entry = dev->fib_list.next;
- /*
-- * For each Context that is on the
-+ * For each Context that is on the
- * fibctxList, make a copy of the
- * fib, and then set the event to wake up the
- * thread that is waiting for it.
-@@ -1552,7 +1636,7 @@ int aac_command_thread(void *data)
- */
- time_last = fibctx->jiffies;
- /*
-- * Has it been > 2 minutes
-+ * Has it been > 2 minutes
- * since the last read off
- * the queue?
- */
-@@ -1583,7 +1667,7 @@ int aac_command_thread(void *data)
- */
- list_add_tail(&newfib->fiblink, &fibctx->fib_list);
- fibctx->count++;
-- /*
-+ /*
- * Set the event to wake up the
- * thread that is waiting.
- */
-@@ -1655,11 +1739,11 @@ int aac_command_thread(void *data)
- struct fib *fibptr;
++ if (iu->status != SAM_CHECK_COND)
++ dev_printk(KERN_WARNING, dev,
++ "dev %llx sent sense data, but "
++ "stat(%x) is not CHECK CONDITION\n",
++ SAS_ADDR(task->dev->sas_addr),
++ iu->status);
++ }
++ else
++ /* when datapres contains corrupt/unknown value... */
++ tstat->stat = SAM_CHECK_COND;
++}
++EXPORT_SYMBOL_GPL(sas_ssp_task_response);
++
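The datapres branches in sas_ssp_task_response() above reduce to a small decision table. A stand-alone approximation, with illustrative names (decode_status() and the enum are not the kernel's types; 0x02 stands in for SAM_CHECK_COND):

```c
#include <stdint.h>

/* Simplified model of the datapres decoding above:
 * 0 -> no extra data, the SSP status byte is the SAM status;
 * 1 -> response data present, status comes from response byte 3;
 * 2 -> sense data present, which implies CHECK CONDITION;
 * anything else is treated defensively as CHECK CONDITION. */
enum { DATAPRES_NONE = 0, DATAPRES_RESPONSE = 1, DATAPRES_SENSE = 2 };
#define SKETCH_SAM_CHECK_COND 0x02

static int decode_status(int datapres, int status, const uint8_t *resp_data)
{
	switch (datapres) {
	case DATAPRES_NONE:
		return status;
	case DATAPRES_RESPONSE:
		return resp_data[3];
	case DATAPRES_SENSE:
	default:		/* corrupt/unknown datapres */
		return SKETCH_SAM_CHECK_COND;
	}
}
```

The kernel version additionally copies the sense buffer and warns when sense data arrives without CHECK CONDITION status; this sketch keeps only the status selection.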
+diff --git a/drivers/scsi/libsrp.c b/drivers/scsi/libsrp.c
+index 2ad0a27..5cff020 100644
+--- a/drivers/scsi/libsrp.c
++++ b/drivers/scsi/libsrp.c
+@@ -192,18 +192,18 @@ static int srp_direct_data(struct scsi_cmnd *sc, struct srp_direct_buf *md,
- if ((fibptr = aac_fib_alloc(dev))) {
-- u32 * info;
-+ __le32 *info;
+ if (dma_map) {
+ iue = (struct iu_entry *) sc->SCp.ptr;
+- sg = sc->request_buffer;
++ sg = scsi_sglist(sc);
- aac_fib_init(fibptr);
+- dprintk("%p %u %u %d\n", iue, sc->request_bufflen,
+- md->len, sc->use_sg);
++ dprintk("%p %u %u %d\n", iue, scsi_bufflen(sc),
++ md->len, scsi_sg_count(sc));
-- info = (u32 *) fib_data(fibptr);
-+ info = (__le32 *) fib_data(fibptr);
- if (now.tv_usec > 500000)
- ++now.tv_sec;
+- nsg = dma_map_sg(iue->target->dev, sg, sc->use_sg,
++ nsg = dma_map_sg(iue->target->dev, sg, scsi_sg_count(sc),
+ DMA_BIDIRECTIONAL);
+ if (!nsg) {
+- printk("fail to map %p %d\n", iue, sc->use_sg);
++ printk("fail to map %p %d\n", iue, scsi_sg_count(sc));
+ return 0;
+ }
+- len = min(sc->request_bufflen, md->len);
++ len = min(scsi_bufflen(sc), md->len);
+ } else
+ len = md->len;
-diff --git a/drivers/scsi/aacraid/dpcsup.c b/drivers/scsi/aacraid/dpcsup.c
-index e6032ff..d1163de 100644
---- a/drivers/scsi/aacraid/dpcsup.c
-+++ b/drivers/scsi/aacraid/dpcsup.c
-@@ -120,6 +120,7 @@ unsigned int aac_response_normal(struct aac_queue * q)
- * NOTE: we cannot touch the fib after this
- * call, because it may have been deallocated.
- */
-+ fib->flags = 0;
- fib->callback(fib->callback_data, fib);
- } else {
- unsigned long flagv;
-@@ -229,11 +230,9 @@ unsigned int aac_command_normal(struct aac_queue *q)
- * all QE there are and wake up all the waiters before exiting.
- */
+@@ -229,10 +229,10 @@ static int srp_indirect_data(struct scsi_cmnd *sc, struct srp_cmd *cmd,
--unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
-+unsigned int aac_intr_normal(struct aac_dev * dev, u32 index)
- {
-- u32 index = le32_to_cpu(Index);
--
-- dprintk((KERN_INFO "aac_intr_normal(%p,%x)\n", dev, Index));
-+ dprintk((KERN_INFO "aac_intr_normal(%p,%x)\n", dev, index));
- if ((index & 0x00000002L)) {
- struct hw_fib * hw_fib;
- struct fib * fib;
-@@ -301,7 +300,7 @@ unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
+ if (dma_map || ext_desc) {
+ iue = (struct iu_entry *) sc->SCp.ptr;
+- sg = sc->request_buffer;
++ sg = scsi_sglist(sc);
- if (hwfib->header.Command == cpu_to_le16(NuFileSystem))
- {
-- u32 *pstatus = (u32 *)hwfib->data;
-+ __le32 *pstatus = (__le32 *)hwfib->data;
- if (*pstatus & cpu_to_le32(0xffff0000))
- *pstatus = cpu_to_le32(ST_OK);
- }
-@@ -315,6 +314,7 @@ unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index)
- * NOTE: we cannot touch the fib after this
- * call, because it may have been deallocated.
- */
-+ fib->flags = 0;
- fib->callback(fib->callback_data, fib);
- } else {
- unsigned long flagv;
-diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
-index 9dd331b..61be227 100644
---- a/drivers/scsi/aacraid/linit.c
-+++ b/drivers/scsi/aacraid/linit.c
-@@ -159,27 +159,27 @@ static struct pci_device_id aac_pci_tbl[] = {
- MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
+ dprintk("%p %u %u %d %d\n",
+- iue, sc->request_bufflen, id->len,
++ iue, scsi_bufflen(sc), id->len,
+ cmd->data_in_desc_cnt, cmd->data_out_desc_cnt);
+ }
- /*
-- * dmb - For now we add the number of channels to this structure.
-+ * dmb - For now we add the number of channels to this structure.
- * In the future we should add a fib that reports the number of channels
- * for the card. At that time we can remove the channels from here
- */
- static struct aac_driver_ident aac_drivers[] = {
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 2/Si (Iguana/PERC2Si) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Opal/PERC3Di) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Si (SlimFast/PERC3Si */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Iguana FlipChip/PERC3DiF */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Viper/PERC3DiV) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Lexus/PERC3DiL) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Jaguar/PERC3DiJ) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Dagger/PERC3DiD) */
-- { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* PERC 3/Di (Boxster/PERC3DiB) */
-- { aac_rx_init, "aacraid", "ADAPTEC ", "catapult ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* catapult */
-- { aac_rx_init, "aacraid", "ADAPTEC ", "tomcat ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* tomcat */
-- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2120S ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2120S (Crusader) */
-- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2200S (Vulcan) */
-- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec 2200S (Vulcan-2m) */
-- { aac_rx_init, "aacraid", "Legend ", "Legend S220 ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend S220 (Legend Crusader) */
-- { aac_rx_init, "aacraid", "Legend ", "Legend S230 ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend S230 (Legend Vulcan) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 2/Si (Iguana/PERC2Si) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Opal/PERC3Di) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Si (SlimFast/PERC3Si */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Iguana FlipChip/PERC3DiF */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Viper/PERC3DiV) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Lexus/PERC3DiL) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Jaguar/PERC3DiJ) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Dagger/PERC3DiD) */
-+ { aac_rx_init, "percraid", "DELL ", "PERCRAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* PERC 3/Di (Boxster/PERC3DiB) */
-+ { aac_rx_init, "aacraid", "ADAPTEC ", "catapult ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* catapult */
-+ { aac_rx_init, "aacraid", "ADAPTEC ", "tomcat ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* tomcat */
-+ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2120S ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2120S (Crusader) */
-+ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2200S (Vulcan) */
-+ { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 2200S ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Adaptec 2200S (Vulcan-2m) */
-+ { aac_rx_init, "aacraid", "Legend ", "Legend S220 ", 1, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend S220 (Legend Crusader) */
-+ { aac_rx_init, "aacraid", "Legend ", "Legend S230 ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend S230 (Legend Vulcan) */
+@@ -268,13 +268,14 @@ static int srp_indirect_data(struct scsi_cmnd *sc, struct srp_cmd *cmd,
- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 3230S ", 2 }, /* Adaptec 3230S (Harrier) */
- { aac_rx_init, "aacraid", "ADAPTEC ", "Adaptec 3240S ", 2 }, /* Adaptec 3240S (Tornado) */
-@@ -224,8 +224,8 @@ static struct aac_driver_ident aac_drivers[] = {
- { aac_sa_init, "percraid", "DELL ", "PERCRAID ", 4, AAC_QUIRK_34SG }, /* Dell PERC2/QC */
- { aac_sa_init, "hpnraid", "HP ", "NetRAID ", 4, AAC_QUIRK_34SG }, /* HP NetRAID-4M */
+ rdma:
+ if (dma_map) {
+- nsg = dma_map_sg(iue->target->dev, sg, sc->use_sg, DMA_BIDIRECTIONAL);
++ nsg = dma_map_sg(iue->target->dev, sg, scsi_sg_count(sc),
++ DMA_BIDIRECTIONAL);
+ if (!nsg) {
+- eprintk("fail to map %p %d\n", iue, sc->use_sg);
++ eprintk("fail to map %p %d\n", iue, scsi_sg_count(sc));
+ err = -EIO;
+ goto free_mem;
+ }
+- len = min(sc->request_bufflen, id->len);
++ len = min(scsi_bufflen(sc), id->len);
+ } else
+ len = id->len;
-- { aac_rx_init, "aacraid", "DELL ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Dell Catchall */
-- { aac_rx_init, "aacraid", "Legend ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend Catchall */
-+ { aac_rx_init, "aacraid", "DELL ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Dell Catchall */
-+ { aac_rx_init, "aacraid", "Legend ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG | AAC_QUIRK_SCSI_32 }, /* Legend Catchall */
- { aac_rx_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Catch All */
- { aac_rkt_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Rocket Catch All */
- { aac_nark_init, "aacraid", "ADAPTEC ", "RAID ", 2 } /* Adaptec NEMER/ARK Catch All */
-@@ -239,7 +239,7 @@ static struct aac_driver_ident aac_drivers[] = {
- * Queues a command for execution by the associated Host Adapter.
- *
- * TODO: unify with aac_scsi_cmd().
-- */
-+ */
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index ba3ecab..f26b953 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -29,7 +29,8 @@ struct lpfc_sli2_slim;
+ #define LPFC_MAX_NS_RETRY 3 /* Number of retry attempts to contact
+ the NameServer before giving up. */
+ #define LPFC_CMD_PER_LUN 3 /* max outstanding cmds per lun */
+-#define LPFC_SG_SEG_CNT 64 /* sg element count per scsi cmnd */
++#define LPFC_DEFAULT_SG_SEG_CNT 64 /* sg element count per scsi cmnd */
++#define LPFC_MAX_SG_SEG_CNT 256 /* sg element count per scsi cmnd */
+ #define LPFC_IOCB_LIST_CNT 2250 /* list of IOCBs for fast-path usage. */
+ #define LPFC_Q_RAMP_UP_INTERVAL 120 /* lun q_depth ramp up interval */
- static int aac_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
- {
-@@ -258,7 +258,7 @@ static int aac_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd
- }
- cmd->SCp.phase = AAC_OWNER_LOWLEVEL;
- return (aac_scsi_cmd(cmd) ? FAILED : 0);
--}
-+}
+@@ -68,6 +69,7 @@ struct lpfc_dmabuf {
+ struct list_head list;
+ void *virt; /* virtual address ptr */
+ dma_addr_t phys; /* mapped address */
++ uint32_t buffer_tag; /* used for tagged queue ring */
+ };
- /**
- * aac_info - Returns the host adapter name
-@@ -292,21 +292,21 @@ struct aac_driver_ident* aac_get_driver_ident(int devtype)
- * @capacity: the sector capacity of the disk
- * @geom: geometry block to fill in
- *
-- * Return the Heads/Sectors/Cylinders BIOS Disk Parameters for Disk.
-- * The default disk geometry is 64 heads, 32 sectors, and the appropriate
-- * number of cylinders so as not to exceed drive capacity. In order for
-+ * Return the Heads/Sectors/Cylinders BIOS Disk Parameters for Disk.
-+ * The default disk geometry is 64 heads, 32 sectors, and the appropriate
-+ * number of cylinders so as not to exceed drive capacity. In order for
- * disks equal to or larger than 1 GB to be addressable by the BIOS
-- * without exceeding the BIOS limitation of 1024 cylinders, Extended
-- * Translation should be enabled. With Extended Translation enabled,
-- * drives between 1 GB inclusive and 2 GB exclusive are given a disk
-- * geometry of 128 heads and 32 sectors, and drives above 2 GB inclusive
-- * are given a disk geometry of 255 heads and 63 sectors. However, if
-- * the BIOS detects that the Extended Translation setting does not match
-- * the geometry in the partition table, then the translation inferred
-- * from the partition table will be used by the BIOS, and a warning may
-+ * without exceeding the BIOS limitation of 1024 cylinders, Extended
-+ * Translation should be enabled. With Extended Translation enabled,
-+ * drives between 1 GB inclusive and 2 GB exclusive are given a disk
-+ * geometry of 128 heads and 32 sectors, and drives above 2 GB inclusive
-+ * are given a disk geometry of 255 heads and 63 sectors. However, if
-+ * the BIOS detects that the Extended Translation setting does not match
-+ * the geometry in the partition table, then the translation inferred
-+ * from the partition table will be used by the BIOS, and a warning may
- * be displayed.
- */
--
+ struct lpfc_dma_pool {
+@@ -272,10 +274,16 @@ struct lpfc_vport {
+ #define FC_ABORT_DISCOVERY 0x8000 /* we want to abort discovery */
+ #define FC_NDISC_ACTIVE 0x10000 /* NPort discovery active */
+ #define FC_BYPASSED_MODE 0x20000 /* NPort is in bypassed mode */
+-#define FC_RFF_NOT_SUPPORTED 0x40000 /* RFF_ID was rejected by switch */
+ #define FC_VPORT_NEEDS_REG_VPI 0x80000 /* Needs to have its vpi registered */
+ #define FC_RSCN_DEFERRED 0x100000 /* A deferred RSCN being processed */
+
++ uint32_t ct_flags;
++#define FC_CT_RFF_ID 0x1 /* RFF_ID accepted by switch */
++#define FC_CT_RNN_ID 0x2 /* RNN_ID accepted by switch */
++#define FC_CT_RSNN_NN 0x4 /* RSNN_NN accepted by switch */
++#define FC_CT_RSPN_ID 0x8 /* RSPN_ID accepted by switch */
++#define FC_CT_RFT_ID 0x10 /* RFT_ID accepted by switch */
+
- static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
- sector_t capacity, int *geom)
- {
-@@ -333,10 +333,10 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
+ struct list_head fc_nodes;
- param->cylinders = cap_to_cyls(capacity, param->heads * param->sectors);
+ /* Keep counters for the number of entries in each list. */
+@@ -344,6 +352,7 @@ struct lpfc_vport {
+ uint32_t cfg_discovery_threads;
+ uint32_t cfg_log_verbose;
+ uint32_t cfg_max_luns;
++ uint32_t cfg_enable_da_id;
-- /*
-+ /*
- * Read the first 1024 bytes from the disk device, if the boot
- * sector partition table is valid, search for a partition table
-- * entry whose end_head matches one of the standard geometry
-+ * entry whose end_head matches one of the standard geometry
- * translations ( 64/32, 128/32, 255/63 ).
- */
- buf = scsi_bios_ptable(bdev);
-@@ -401,30 +401,44 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
+ uint32_t dev_loss_tmo_changed;
- static int aac_slave_configure(struct scsi_device *sdev)
- {
-+ struct aac_dev *aac = (struct aac_dev *)sdev->host->hostdata;
- if ((sdev->type == TYPE_DISK) &&
-- (sdev_channel(sdev) != CONTAINER_CHANNEL)) {
-+ (sdev_channel(sdev) != CONTAINER_CHANNEL) &&
-+ (!aac->jbod || sdev->inq_periph_qual) &&
-+ (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2))) {
- if (expose_physicals == 0)
- return -ENXIO;
-- if (expose_physicals < 0) {
-- struct aac_dev *aac =
-- (struct aac_dev *)sdev->host->hostdata;
-- if (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2))
-- sdev->no_uld_attach = 1;
-- }
-+ if (expose_physicals < 0)
-+ sdev->no_uld_attach = 1;
- }
- if (sdev->tagged_supported && (sdev->type == TYPE_DISK) &&
-- (sdev_channel(sdev) == CONTAINER_CHANNEL)) {
-+ (!aac->raid_scsi_mode || (sdev_channel(sdev) != 2)) &&
-+ !sdev->no_uld_attach) {
- struct scsi_device * dev;
- struct Scsi_Host *host = sdev->host;
- unsigned num_lsu = 0;
- unsigned num_one = 0;
- unsigned depth;
-+ unsigned cid;
+@@ -360,6 +369,7 @@ struct lpfc_vport {
-+ /*
-+ * Firmware has an individual device recovery time typically
-+ * of 35 seconds, give us a margin.
-+ */
-+ if (sdev->timeout < (45 * HZ))
-+ sdev->timeout = 45 * HZ;
-+ for (cid = 0; cid < aac->maximum_num_containers; ++cid)
-+ if (aac->fsa_dev[cid].valid)
-+ ++num_lsu;
- __shost_for_each_device(dev, host) {
- if (dev->tagged_supported && (dev->type == TYPE_DISK) &&
-- (sdev_channel(dev) == CONTAINER_CHANNEL))
-- ++num_lsu;
-- else
-+ (!aac->raid_scsi_mode ||
-+ (sdev_channel(sdev) != 2)) &&
-+ !dev->no_uld_attach) {
-+ if ((sdev_channel(dev) != CONTAINER_CHANNEL)
-+ || !aac->fsa_dev[sdev_id(dev)].valid)
-+ ++num_lsu;
-+ } else
- ++num_one;
- }
- if (num_lsu == 0)
-@@ -481,9 +495,35 @@ static int aac_change_queue_depth(struct scsi_device *sdev, int depth)
- return sdev->queue_depth;
- }
+ struct hbq_s {
+ uint16_t entry_count; /* Current number of HBQ slots */
++ uint16_t buffer_count; /* Current number of buffers posted */
+ uint32_t next_hbqPutIdx; /* Index to next HBQ slot to use */
+ uint32_t hbqPutIdx; /* HBQ slot to use */
+ uint32_t local_hbqGetIdx; /* Local copy of Get index from Port */
+@@ -377,6 +387,11 @@ struct hbq_s {
+ #define LPFC_ELS_HBQ 0
+ #define LPFC_EXTRA_HBQ 1
-+static ssize_t aac_show_raid_level(struct device *dev, struct device_attribute *attr, char *buf)
-+{
-+ struct scsi_device * sdev = to_scsi_device(dev);
-+ if (sdev_channel(sdev) != CONTAINER_CHANNEL)
-+ return snprintf(buf, PAGE_SIZE, sdev->no_uld_attach
-+ ? "Hidden\n" : "JBOD");
-+ return snprintf(buf, PAGE_SIZE, "%s\n",
-+ get_container_type(((struct aac_dev *)(sdev->host->hostdata))
-+ ->fsa_dev[sdev_id(sdev)].type));
-+}
-+
-+static struct device_attribute aac_raid_level_attr = {
-+ .attr = {
-+ .name = "level",
-+ .mode = S_IRUGO,
-+ },
-+ .show = aac_show_raid_level
-+};
-+
-+static struct device_attribute *aac_dev_attrs[] = {
-+ &aac_raid_level_attr,
-+ NULL,
++enum hba_temp_state {
++ HBA_NORMAL_TEMP,
++ HBA_OVER_TEMP
+};
+
- static int aac_ioctl(struct scsi_device *sdev, int cmd, void __user * arg)
- {
- struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
-+ if (!capable(CAP_SYS_RAWIO))
-+ return -EPERM;
- return aac_do_ioctl(dev, cmd, arg);
- }
+ struct lpfc_hba {
+ struct lpfc_sli sli;
+ uint32_t sli_rev; /* SLI2 or SLI3 */
+@@ -457,7 +472,8 @@ struct lpfc_hba {
+ uint64_t cfg_soft_wwnn;
+ uint64_t cfg_soft_wwpn;
+ uint32_t cfg_hba_queue_depth;
+-
++ uint32_t cfg_enable_hba_reset;
++ uint32_t cfg_enable_hba_heartbeat;
-@@ -506,17 +546,33 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
- break;
- case INQUIRY:
- case READ_CAPACITY:
-- case TEST_UNIT_READY:
- /* Mark associated FIB to not complete, eh handler does this */
- for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
- struct fib * fib = &aac->fibs[count];
- if (fib->hw_fib_va->header.XferState &&
-+ (fib->flags & FIB_CONTEXT_FLAG) &&
- (fib->callback_data == cmd)) {
- fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
- cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
- ret = SUCCESS;
- }
- }
-+ break;
-+ case TEST_UNIT_READY:
-+ /* Mark associated FIB to not complete, eh handler does this */
-+ for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
-+ struct scsi_cmnd * command;
-+ struct fib * fib = &aac->fibs[count];
-+ if ((fib->hw_fib_va->header.XferState & cpu_to_le32(Async | NoResponseExpected)) &&
-+ (fib->flags & FIB_CONTEXT_FLAG) &&
-+ ((command = fib->callback_data)) &&
-+ (command->device == cmd->device)) {
-+ fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
-+ command->SCp.phase = AAC_OWNER_ERROR_HANDLER;
-+ if (command == cmd)
-+ ret = SUCCESS;
-+ }
-+ }
- }
- return ret;
- }
-@@ -539,12 +595,13 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
- for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
- struct fib * fib = &aac->fibs[count];
- if (fib->hw_fib_va->header.XferState &&
-+ (fib->flags & FIB_CONTEXT_FLAG) &&
- (fib->callback_data == cmd)) {
- fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
- cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
- }
- }
-- printk(KERN_ERR "%s: Host adapter reset request. SCSI hang ?\n",
-+ printk(KERN_ERR "%s: Host adapter reset request. SCSI hang ?\n",
- AAC_DRIVERNAME);
+ lpfc_vpd_t vpd; /* vital product data */
- if ((count = aac_check_health(aac)))
-@@ -584,8 +641,11 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
- * support a register, instead of a commanded, reset.
- */
- if ((aac->supplement_adapter_info.SupportedOptions2 &
-- le32_to_cpu(AAC_OPTION_MU_RESET|AAC_OPTION_IGNORE_RESET)) ==
-- le32_to_cpu(AAC_OPTION_MU_RESET))
-+ AAC_OPTION_MU_RESET) &&
-+ aac_check_reset &&
-+ ((aac_check_reset != 1) ||
-+ (aac->supplement_adapter_info.SupportedOptions2 &
-+ AAC_OPTION_IGNORE_RESET)))
- aac_reset_adapter(aac, 2); /* Bypass wait for command quiesce */
- return SUCCESS; /* Cause an immediate retry of the command with a ten second delay after successful tur */
- }
-@@ -632,8 +692,8 @@ static int aac_cfg_open(struct inode *inode, struct file *file)
- * Bugs: Needs locking against parallel ioctls lower down
- * Bugs: Needs to handle hot plugging
- */
--
--static int aac_cfg_ioctl(struct inode *inode, struct file *file,
-+
-+static int aac_cfg_ioctl(struct inode *inode, struct file *file,
- unsigned int cmd, unsigned long arg)
- {
- if (!capable(CAP_SYS_RAWIO))
-@@ -646,7 +706,7 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
- {
- long ret;
- lock_kernel();
-- switch (cmd) {
-+ switch (cmd) {
- case FSACTL_MINIPORT_REV_CHECK:
- case FSACTL_SENDFIB:
- case FSACTL_OPEN_GET_ADAPTER_FIB:
-@@ -656,14 +716,14 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
- case FSACTL_QUERY_DISK:
- case FSACTL_DELETE_DISK:
- case FSACTL_FORCE_DELETE_DISK:
-- case FSACTL_GET_CONTAINERS:
-+ case FSACTL_GET_CONTAINERS:
- case FSACTL_SEND_LARGE_FIB:
- ret = aac_do_ioctl(dev, cmd, (void __user *)arg);
- break;
+@@ -544,8 +560,7 @@ struct lpfc_hba {
+ struct list_head port_list;
+ struct lpfc_vport *pport; /* physical lpfc_vport pointer */
+ uint16_t max_vpi; /* Maximum virtual nports */
+-#define LPFC_MAX_VPI 100 /* Max number of VPI supported */
+-#define LPFC_MAX_VPORTS (LPFC_MAX_VPI+1)/* Max number of VPorts supported */
++#define LPFC_MAX_VPI 0xFFFF /* Max number of VPI supported */
+ unsigned long *vpi_bmask; /* vpi allocation table */
- case FSACTL_GET_NEXT_ADAPTER_FIB: {
- struct fib_ioctl __user *f;
--
+ /* Data structure used by fabric iocb scheduler */
+@@ -563,16 +578,30 @@ struct lpfc_hba {
+ struct dentry *hba_debugfs_root;
+ atomic_t debugfs_vport_count;
+ struct dentry *debug_hbqinfo;
+- struct dentry *debug_dumpslim;
++ struct dentry *debug_dumpHostSlim;
++ struct dentry *debug_dumpHBASlim;
+ struct dentry *debug_slow_ring_trc;
+ struct lpfc_debugfs_trc *slow_ring_trc;
+ atomic_t slow_ring_trc_cnt;
+ #endif
+
++ /* Used for deferred freeing of ELS data buffers */
++ struct list_head elsbuf;
++ int elsbuf_cnt;
++ int elsbuf_prev_cnt;
+
- f = compat_alloc_user_space(sizeof(*f));
- ret = 0;
- if (clear_user(f, sizeof(*f)))
-@@ -676,9 +736,9 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
- }
++ uint8_t temp_sensor_support;
+ /* Fields used for heart beat. */
+ unsigned long last_completion_time;
+ struct timer_list hb_tmofunc;
+ uint8_t hb_outstanding;
++ /*
++ * Following bit will be set for all buffer tags which are not
++ * associated with any HBQ.
++ */
++#define QUE_BUFTAG_BIT (1<<31)
++ uint32_t buffer_tag_count;
++ enum hba_temp_state over_temp_state;
+ };
- default:
-- ret = -ENOIOCTLCMD;
-+ ret = -ENOIOCTLCMD;
- break;
-- }
-+ }
- unlock_kernel();
- return ret;
+ static inline struct Scsi_Host *
+@@ -598,5 +627,15 @@ lpfc_is_link_up(struct lpfc_hba *phba)
+ phba->link_state == LPFC_HBA_READY;
}
-@@ -735,6 +795,25 @@ static ssize_t aac_show_vendor(struct class_device *class_dev,
- return len;
+
+-#define FC_REG_DUMP_EVENT 0x10 /* Register for Dump events */
++#define FC_REG_DUMP_EVENT 0x10 /* Register for Dump events */
++#define FC_REG_TEMPERATURE_EVENT 0x20 /* Register for temperature
++ event */
+
++struct temp_event {
++ uint32_t event_type;
++ uint32_t event_code;
++ uint32_t data;
++};
++#define LPFC_CRIT_TEMP 0x1
++#define LPFC_THRESHOLD_TEMP 0x2
++#define LPFC_NORMAL_TEMP 0x3
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 80a1121..4bae4a2 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1,7 +1,7 @@
+ /*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for *
+ * Fibre Channel Host Bus Adapters. *
+- * Copyright (C) 2004-2007 Emulex. All rights reserved. *
++ * Copyright (C) 2004-2008 Emulex. All rights reserved. *
+ * EMULEX and SLI are trademarks of Emulex. *
+ * www.emulex.com *
+ * Portions Copyright (C) 2004-2005 Christoph Hellwig *
+@@ -45,6 +45,10 @@
+ #define LPFC_MIN_DEVLOSS_TMO 1
+ #define LPFC_MAX_DEVLOSS_TMO 255
+
++#define LPFC_MAX_LINK_SPEED 8
++#define LPFC_LINK_SPEED_BITMAP 0x00000117
++#define LPFC_LINK_SPEED_STRING "0, 1, 2, 4, 8"
++
+ static void
+ lpfc_jedec_to_ascii(int incr, char hdw[])
+ {
+@@ -86,6 +90,15 @@ lpfc_serialnum_show(struct class_device *cdev, char *buf)
}
-+static ssize_t aac_show_flags(struct class_device *class_dev, char *buf)
+ static ssize_t
++lpfc_temp_sensor_show(struct class_device *cdev, char *buf)
+{
-+ int len = 0;
-+ struct aac_dev *dev = (struct aac_dev*)class_to_shost(class_dev)->hostdata;
-+
-+ if (nblank(dprintk(x)))
-+ len = snprintf(buf, PAGE_SIZE, "dprintk\n");
-+#ifdef AAC_DETAILED_STATUS_INFO
-+ len += snprintf(buf + len, PAGE_SIZE - len,
-+ "AAC_DETAILED_STATUS_INFO\n");
-+#endif
-+ if (dev->raw_io_interface && dev->raw_io_64)
-+ len += snprintf(buf + len, PAGE_SIZE - len,
-+ "SAI_READ_CAPACITY_16\n");
-+ if (dev->jbod)
-+ len += snprintf(buf + len, PAGE_SIZE - len, "SUPPORTED_JBOD\n");
-+ return len;
++ struct Scsi_Host *shost = class_to_shost(cdev);
++ struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
++ struct lpfc_hba *phba = vport->phba;
++ return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
+}
+
- static ssize_t aac_show_kernel_version(struct class_device *class_dev,
- char *buf)
++static ssize_t
+ lpfc_modeldesc_show(struct class_device *cdev, char *buf)
{
-@@ -742,7 +821,7 @@ static ssize_t aac_show_kernel_version(struct class_device *class_dev,
- int len, tmp;
-
- tmp = le32_to_cpu(dev->adapter_info.kernelrev);
-- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
-+ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
- tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
- le32_to_cpu(dev->adapter_info.kernelbuild));
- return len;
-@@ -755,7 +834,7 @@ static ssize_t aac_show_monitor_version(struct class_device *class_dev,
- int len, tmp;
+ struct Scsi_Host *shost = class_to_shost(cdev);
+@@ -178,12 +191,9 @@ lpfc_state_show(struct class_device *cdev, char *buf)
+ case LPFC_LINK_UP:
+ case LPFC_CLEAR_LA:
+ case LPFC_HBA_READY:
+- len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - \n");
++ len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
- tmp = le32_to_cpu(dev->adapter_info.monitorrev);
-- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
-+ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
- tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
- le32_to_cpu(dev->adapter_info.monitorbuild));
- return len;
-@@ -768,7 +847,7 @@ static ssize_t aac_show_bios_version(struct class_device *class_dev,
- int len, tmp;
+ switch (vport->port_state) {
+- len += snprintf(buf + len, PAGE_SIZE-len,
+- "initializing\n");
+- break;
+ case LPFC_LOCAL_CFG_LINK:
+ len += snprintf(buf + len, PAGE_SIZE-len,
+ "Configuring Link\n");
+@@ -252,8 +262,7 @@ lpfc_issue_lip(struct Scsi_Host *shost)
+ int mbxstatus = MBXERR_ERROR;
- tmp = le32_to_cpu(dev->adapter_info.biosrev);
-- len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
-+ len = snprintf(buf, PAGE_SIZE, "%d.%d-%d[%d]\n",
- tmp >> 24, (tmp >> 16) & 0xff, tmp & 0xff,
- le32_to_cpu(dev->adapter_info.biosbuild));
- return len;
-@@ -844,6 +923,13 @@ static struct class_device_attribute aac_vendor = {
- },
- .show = aac_show_vendor,
- };
-+static struct class_device_attribute aac_flags = {
-+ .attr = {
-+ .name = "flags",
-+ .mode = S_IRUGO,
-+ },
-+ .show = aac_show_flags,
-+};
- static struct class_device_attribute aac_kernel_version = {
- .attr = {
- .name = "hba_kernel_version",
-@@ -898,6 +984,7 @@ static struct class_device_attribute aac_reset = {
- static struct class_device_attribute *aac_attrs[] = {
- &aac_model,
- &aac_vendor,
-+ &aac_flags,
- &aac_kernel_version,
- &aac_monitor_version,
- &aac_bios_version,
-@@ -928,21 +1015,22 @@ static struct scsi_host_template aac_driver_template = {
- .compat_ioctl = aac_compat_ioctl,
- #endif
- .queuecommand = aac_queuecommand,
-- .bios_param = aac_biosparm,
-+ .bios_param = aac_biosparm,
- .shost_attrs = aac_attrs,
- .slave_configure = aac_slave_configure,
- .change_queue_depth = aac_change_queue_depth,
-+ .sdev_attrs = aac_dev_attrs,
- .eh_abort_handler = aac_eh_abort,
- .eh_host_reset_handler = aac_eh_reset,
-- .can_queue = AAC_NUM_IO_FIB,
-+ .can_queue = AAC_NUM_IO_FIB,
- .this_id = MAXIMUM_NUM_CONTAINERS,
- .sg_tablesize = 16,
- .max_sectors = 128,
- #if (AAC_NUM_IO_FIB > 256)
- .cmd_per_lun = 256,
--#else
-- .cmd_per_lun = AAC_NUM_IO_FIB,
--#endif
-+#else
-+ .cmd_per_lun = AAC_NUM_IO_FIB,
-+#endif
- .use_clustering = ENABLE_CLUSTERING,
- .use_sg_chaining = ENABLE_SG_CHAINING,
- .emulated = 1,
-@@ -979,18 +1067,18 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
- goto out;
- error = -ENODEV;
+ if ((vport->fc_flag & FC_OFFLINE_MODE) ||
+- (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) ||
+- (vport->port_state != LPFC_VPORT_READY))
++ (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO))
+ return -EPERM;
-- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) ||
-+ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) ||
- pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
- goto out_disable_pdev;
- /*
- * If the quirk31 bit is set, the adapter needs adapter
- * to driver communication memory to be allocated below 2gig
- */
-- if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
-+ if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
- if (pci_set_dma_mask(pdev, DMA_31BIT_MASK) ||
- pci_set_consistent_dma_mask(pdev, DMA_31BIT_MASK))
- goto out_disable_pdev;
--
-+
- pci_set_master(pdev);
+ pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
+@@ -305,12 +314,14 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)
- shost = scsi_host_alloc(&aac_driver_template, sizeof(struct aac_dev));
-@@ -1003,7 +1091,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
- shost->max_cmd_len = 16;
+ psli = &phba->sli;
- aac = (struct aac_dev *)shost->hostdata;
-- aac->scsi_host_ptr = shost;
-+ aac->scsi_host_ptr = shost;
- aac->pdev = pdev;
- aac->name = aac_driver_template.name;
- aac->id = shost->unique_id;
-@@ -1040,7 +1128,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
- if (aac_drivers[index].quirks & AAC_QUIRK_31BIT)
- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK))
- goto out_deinit;
--
-+
- aac->maximum_num_channels = aac_drivers[index].channels;
- error = aac_get_adapter_info(aac);
- if (error < 0)
-@@ -1049,7 +1137,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
- /*
- * Lets override negotiations and drop the maximum SG limit to 34
- */
-- if ((aac_drivers[index].quirks & AAC_QUIRK_34SG) &&
-+ if ((aac_drivers[index].quirks & AAC_QUIRK_34SG) &&
- (aac->scsi_host_ptr->sg_tablesize > 34)) {
- aac->scsi_host_ptr->sg_tablesize = 34;
- aac->scsi_host_ptr->max_sectors
-@@ -1066,17 +1154,17 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
- /*
- * Firware printf works only with older firmware.
- */
-- if (aac_drivers[index].quirks & AAC_QUIRK_34SG)
-+ if (aac_drivers[index].quirks & AAC_QUIRK_34SG)
- aac->printf_enabled = 1;
- else
- aac->printf_enabled = 0;
--
-+
- /*
- * max channel will be the physical channels plus 1 virtual channel
- * all containers are on the virtual channel 0 (CONTAINER_CHANNEL)
- * physical channels are address by their actual physical number+1
- */
-- if ((aac->nondasd_support == 1) || expose_physicals)
-+ if (aac->nondasd_support || expose_physicals || aac->jbod)
- shost->max_channel = aac->maximum_num_channels;
- else
- shost->max_channel = 0;
-@@ -1148,10 +1236,10 @@ static void __devexit aac_remove_one(struct pci_dev *pdev)
- kfree(aac->queues);
++ /* Wait a little for things to settle down, but not
++ * long enough for dev loss timeout to expire.
++ */
+ for (i = 0; i < psli->num_rings; i++) {
+ pring = &psli->ring[i];
+- /* The linkdown event takes 30 seconds to timeout. */
+ while (pring->txcmplq_cnt) {
+ msleep(10);
+- if (cnt++ > 3000) {
++ if (cnt++ > 500) { /* 5 secs */
+ lpfc_printf_log(phba,
+ KERN_WARNING, LOG_INIT,
+ "0466 Outstanding IO when "
+@@ -336,6 +347,9 @@ lpfc_selective_reset(struct lpfc_hba *phba)
+ struct completion online_compl;
+ int status = 0;
- aac_adapter_ioremap(aac, 0);
--
-+
- kfree(aac->fibs);
- kfree(aac->fsa_dev);
--
-+
- list_del(&aac->entry);
- scsi_host_put(shost);
- pci_disable_device(pdev);
-@@ -1172,7 +1260,7 @@ static struct pci_driver aac_pci_driver = {
- static int __init aac_init(void)
- {
- int error;
--
++ if (!phba->cfg_enable_hba_reset)
++ return -EIO;
+
- printk(KERN_INFO "Adaptec %s driver %s\n",
- AAC_DRIVERNAME, aac_driver_version);
-
-diff --git a/drivers/scsi/aacraid/rx.c b/drivers/scsi/aacraid/rx.c
-index 73eef3d..a08bbf1 100644
---- a/drivers/scsi/aacraid/rx.c
-+++ b/drivers/scsi/aacraid/rx.c
-@@ -465,7 +465,7 @@ static int aac_rx_restart_adapter(struct aac_dev *dev, int bled)
- u32 var;
-
- if (!(dev->supplement_adapter_info.SupportedOptions2 &
-- le32_to_cpu(AAC_OPTION_MU_RESET)) || (bled >= 0) || (bled == -2)) {
-+ AAC_OPTION_MU_RESET) || (bled >= 0) || (bled == -2)) {
- if (bled)
- printk(KERN_ERR "%s%d: adapter kernel panic'd %x.\n",
- dev->name, dev->id, bled);
-@@ -549,7 +549,9 @@ int _aac_rx_init(struct aac_dev *dev)
- dev->OIMR = status = rx_readb (dev, MUnit.OIMR);
- if ((((status & 0x0c) != 0x0c) || aac_reset_devices || reset_devices) &&
- !aac_rx_restart_adapter(dev, 0))
-- ++restart;
-+ /* Make sure the Hardware FIFO is empty */
-+ while ((++restart < 512) &&
-+ (rx_readl(dev, MUnit.OutboundQueue) != 0xFFFFFFFFL));
- /*
- * Check to see if the board panic'd while booting.
- */
-diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c
-index 38a1ee2..374ed02 100644
---- a/drivers/scsi/advansys.c
-+++ b/drivers/scsi/advansys.c
-@@ -8233,7 +8233,7 @@ static void adv_isr_callback(ADV_DVC_VAR *adv_dvc_varp, ADV_SCSI_REQ_Q *scsiqp)
- if (scsiqp->scsi_status == SAM_STAT_CHECK_CONDITION) {
- ASC_DBG(2, "SAM_STAT_CHECK_CONDITION\n");
- ASC_DBG_PRT_SENSE(2, scp->sense_buffer,
-- sizeof(scp->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- /*
- * Note: The 'status_byte()' macro used by
- * target drivers defined in scsi.h shifts the
-@@ -9136,7 +9136,7 @@ static void asc_isr_callback(ASC_DVC_VAR *asc_dvc_varp, ASC_QDONE_INFO *qdonep)
- BUG_ON(asc_dvc_varp != &boardp->dvc_var.asc_dvc_var);
+ status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
- dma_unmap_single(boardp->dev, scp->SCp.dma_handle,
-- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
-+ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
- /*
- * 'qdonep' contains the command's ending status.
- */
-@@ -9166,7 +9166,7 @@ static void asc_isr_callback(ASC_DVC_VAR *asc_dvc_varp, ASC_QDONE_INFO *qdonep)
- if (qdonep->d3.scsi_stat == SAM_STAT_CHECK_CONDITION) {
- ASC_DBG(2, "SAM_STAT_CHECK_CONDITION\n");
- ASC_DBG_PRT_SENSE(2, scp->sense_buffer,
-- sizeof(scp->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- /*
- * Note: The 'status_byte()' macro used by
- * target drivers defined in scsi.h shifts the
-@@ -9881,9 +9881,9 @@ static __le32 advansys_get_sense_buffer_dma(struct scsi_cmnd *scp)
- {
- struct asc_board *board = shost_priv(scp->device->host);
- scp->SCp.dma_handle = dma_map_single(board->dev, scp->sense_buffer,
-- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
-+ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
- dma_cache_sync(board->dev, scp->sense_buffer,
-- sizeof(scp->sense_buffer), DMA_FROM_DEVICE);
-+ SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE);
- return cpu_to_le32(scp->SCp.dma_handle);
- }
+ if (status != 0)
+@@ -409,6 +423,8 @@ lpfc_board_mode_store(struct class_device *cdev, const char *buf, size_t count)
+ struct completion online_compl;
+ int status=0;
-@@ -9914,7 +9914,7 @@ static int asc_build_req(struct asc_board *boardp, struct scsi_cmnd *scp,
- asc_scsi_q->q2.target_ix =
- ASC_TIDLUN_TO_IX(scp->device->id, scp->device->lun);
- asc_scsi_q->q1.sense_addr = advansys_get_sense_buffer_dma(scp);
-- asc_scsi_q->q1.sense_len = sizeof(scp->sense_buffer);
-+ asc_scsi_q->q1.sense_len = SCSI_SENSE_BUFFERSIZE;
++ if (!phba->cfg_enable_hba_reset)
++ return -EACCES;
+ init_completion(&online_compl);
- /*
- * If there are any outstanding requests for the current target,
-@@ -10173,7 +10173,7 @@ adv_build_req(struct asc_board *boardp, struct scsi_cmnd *scp,
- scsiqp->target_lun = scp->device->lun;
+ if(strncmp(buf, "online", sizeof("online") - 1) == 0) {
+@@ -908,6 +924,8 @@ static CLASS_DEVICE_ATTR(used_rpi, S_IRUGO, lpfc_used_rpi_show, NULL);
+ static CLASS_DEVICE_ATTR(max_xri, S_IRUGO, lpfc_max_xri_show, NULL);
+ static CLASS_DEVICE_ATTR(used_xri, S_IRUGO, lpfc_used_xri_show, NULL);
+ static CLASS_DEVICE_ATTR(npiv_info, S_IRUGO, lpfc_npiv_info_show, NULL);
++static CLASS_DEVICE_ATTR(lpfc_temp_sensor, S_IRUGO, lpfc_temp_sensor_show,
++ NULL);
- scsiqp->sense_addr = cpu_to_le32(virt_to_bus(&scp->sense_buffer[0]));
-- scsiqp->sense_len = sizeof(scp->sense_buffer);
-+ scsiqp->sense_len = SCSI_SENSE_BUFFERSIZE;
- /* Build ADV_SCSI_REQ_Q */
+ static char *lpfc_soft_wwn_key = "C99G71SL8032A";
+@@ -971,6 +989,14 @@ lpfc_soft_wwpn_store(struct class_device *cdev, const char *buf, size_t count)
+ unsigned int i, j, cnt=count;
+ u8 wwpn[8];
-diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c
-index ea8c699..6ccdc96 100644
---- a/drivers/scsi/aha152x.c
-+++ b/drivers/scsi/aha152x.c
-@@ -260,6 +260,7 @@
- #include <scsi/scsi_dbg.h>
- #include <scsi/scsi_host.h>
- #include <scsi/scsi_transport_spi.h>
-+#include <scsi/scsi_eh.h>
- #include "aha152x.h"
++ if (!phba->cfg_enable_hba_reset)
++ return -EACCES;
++ spin_lock_irq(&phba->hbalock);
++ if (phba->over_temp_state == HBA_OVER_TEMP) {
++ spin_unlock_irq(&phba->hbalock);
++ return -EACCES;
++ }
++ spin_unlock_irq(&phba->hbalock);
+ /* count may include a LF at end of string */
+ if (buf[cnt-1] == '\n')
+ cnt--;
+@@ -1102,7 +1128,13 @@ MODULE_PARM_DESC(lpfc_sli_mode, "SLI mode selector:"
+ " 2 - select SLI-2 even on SLI-3 capable HBAs,"
+ " 3 - select SLI-3");
- static LIST_HEAD(aha152x_host_list);
-@@ -558,9 +559,7 @@ struct aha152x_hostdata {
- struct aha152x_scdata {
- Scsi_Cmnd *next; /* next sc in queue */
- struct completion *done;/* semaphore to block on */
-- unsigned char aha_orig_cmd_len;
-- unsigned char aha_orig_cmnd[MAX_COMMAND_SIZE];
-- int aha_orig_resid;
-+ struct scsi_eh_save ses;
- };
+-LPFC_ATTR_R(enable_npiv, 0, 0, 1, "Enable NPIV functionality");
++int lpfc_enable_npiv = 0;
++module_param(lpfc_enable_npiv, int, 0);
++MODULE_PARM_DESC(lpfc_enable_npiv, "Enable NPIV functionality");
++lpfc_param_show(enable_npiv);
++lpfc_param_init(enable_npiv, 0, 0, 1);
++static CLASS_DEVICE_ATTR(lpfc_enable_npiv, S_IRUGO,
++ lpfc_enable_npiv_show, NULL);
- /* access macros for hostdata */
-@@ -1017,16 +1016,10 @@ static int aha152x_internal_queue(Scsi_Cmnd *SCpnt, struct completion *complete,
- SCp.buffers_residual : left buffers in list
- SCp.phase : current state of the command */
+ /*
+ # lpfc_nodev_tmo: If set, it will hold all I/O errors on devices that disappear
+@@ -1248,6 +1280,13 @@ LPFC_VPORT_ATTR_HEX_RW(log_verbose, 0x0, 0x0, 0xffff,
+ "Verbose logging bit-mask");
-- if ((phase & (check_condition|resetting)) || !scsi_sglist(SCpnt)) {
-- if (phase & check_condition) {
-- SCpnt->SCp.ptr = SCpnt->sense_buffer;
-- SCpnt->SCp.this_residual = sizeof(SCpnt->sense_buffer);
-- scsi_set_resid(SCpnt, sizeof(SCpnt->sense_buffer));
-- } else {
-- SCpnt->SCp.ptr = NULL;
-- SCpnt->SCp.this_residual = 0;
-- scsi_set_resid(SCpnt, 0);
-- }
-+ if ((phase & resetting) || !scsi_sglist(SCpnt)) {
-+ SCpnt->SCp.ptr = NULL;
-+ SCpnt->SCp.this_residual = 0;
-+ scsi_set_resid(SCpnt, 0);
- SCpnt->SCp.buffer = NULL;
- SCpnt->SCp.buffers_residual = 0;
- } else {
-@@ -1561,10 +1554,7 @@ static void busfree_run(struct Scsi_Host *shpnt)
- }
- #endif
+ /*
++# lpfc_enable_da_id: This turns on the DA_ID CT command that deregisters
++# objects that have been registered with the nameserver after login.
++*/
++LPFC_VPORT_ATTR_R(enable_da_id, 0, 0, 1,
++ "Deregister nameserver objects before LOGO");
++
++/*
+ # lun_queue_depth: This parameter is used to limit the number of outstanding
+ # commands per FCP LUN. Value range is [1,128]. Default value is 30.
+ */
+@@ -1369,7 +1408,33 @@ LPFC_VPORT_ATTR_R(scan_down, 1, 0, 1,
+ # Set loop mode if you want to run as an NL_Port. Value range is [0,0x6].
+ # Default value is 0.
+ */
+-LPFC_ATTR_RW(topology, 0, 0, 6, "Select Fibre Channel topology");
++static int
++lpfc_topology_set(struct lpfc_hba *phba, int val)
++{
++ int err;
++ uint32_t prev_val;
++ if (val >= 0 && val <= 6) {
++ prev_val = phba->cfg_topology;
++ phba->cfg_topology = val;
++ err = lpfc_issue_lip(lpfc_shost_from_vport(phba->pport));
++ if (err)
++ phba->cfg_topology = prev_val;
++ return err;
++ }
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "%d:0467 lpfc_topology attribute cannot be set to %d, "
++ "allowed range is [0, 6]\n",
++ phba->brd_no, val);
++ return -EINVAL;
++}
++static int lpfc_topology = 0;
++module_param(lpfc_topology, int, 0);
++MODULE_PARM_DESC(lpfc_topology, "Select Fibre Channel topology");
++lpfc_param_show(topology)
++lpfc_param_init(topology, 0, 0, 6)
++lpfc_param_store(topology)
++static CLASS_DEVICE_ATTR(lpfc_topology, S_IRUGO | S_IWUSR,
++ lpfc_topology_show, lpfc_topology_store);
-- /* restore old command */
-- memcpy(cmd->cmnd, sc->aha_orig_cmnd, sizeof(cmd->cmnd));
-- cmd->cmd_len = sc->aha_orig_cmd_len;
-- scsi_set_resid(cmd, sc->aha_orig_resid);
-+ scsi_eh_restore_cmnd(cmd, &sc->ses);
+ /*
+ # lpfc_link_speed: Link speed selection for initializing the Fibre Channel
+@@ -1381,7 +1446,59 @@ LPFC_ATTR_RW(topology, 0, 0, 6, "Select Fibre Channel topology");
+ # 8 = 8 Gigabaud
+ # Value range is [0,8]. Default value is 0.
+ */
+-LPFC_ATTR_R(link_speed, 0, 0, 8, "Select link speed");
++static int
++lpfc_link_speed_set(struct lpfc_hba *phba, int val)
++{
++ int err;
++ uint32_t prev_val;
++
++ if (((val == LINK_SPEED_1G) && !(phba->lmt & LMT_1Gb)) ||
++ ((val == LINK_SPEED_2G) && !(phba->lmt & LMT_2Gb)) ||
++ ((val == LINK_SPEED_4G) && !(phba->lmt & LMT_4Gb)) ||
++ ((val == LINK_SPEED_8G) && !(phba->lmt & LMT_8Gb)) ||
++ ((val == LINK_SPEED_10G) && !(phba->lmt & LMT_10Gb)))
++ return -EINVAL;
++
++ if ((val >= 0 && val <= LPFC_MAX_LINK_SPEED)
++ && (LPFC_LINK_SPEED_BITMAP & (1 << val))) {
++ prev_val = phba->cfg_link_speed;
++ phba->cfg_link_speed = val;
++ err = lpfc_issue_lip(lpfc_shost_from_vport(phba->pport));
++ if (err)
++ phba->cfg_link_speed = prev_val;
++ return err;
++ }
++
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "%d:0469 lpfc_link_speed attribute cannot be set to %d, "
++ "allowed range is [0, 8]\n",
++ phba->brd_no, val);
++ return -EINVAL;
++}
++
++static int lpfc_link_speed = 0;
++module_param(lpfc_link_speed, int, 0);
++MODULE_PARM_DESC(lpfc_link_speed, "Select link speed");
++lpfc_param_show(link_speed)
++static int
++lpfc_link_speed_init(struct lpfc_hba *phba, int val)
++{
++ if ((val >= 0 && val <= LPFC_MAX_LINK_SPEED)
++ && (LPFC_LINK_SPEED_BITMAP & (1 << val))) {
++ phba->cfg_link_speed = val;
++ return 0;
++ }
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "0454 lpfc_link_speed attribute cannot "
++ "be set to %d, allowed values are "
++ "["LPFC_LINK_SPEED_STRING"]\n", val);
++ phba->cfg_link_speed = 0;
++ return -EINVAL;
++}
++
++lpfc_param_store(link_speed)
++static CLASS_DEVICE_ATTR(lpfc_link_speed, S_IRUGO | S_IWUSR,
++ lpfc_link_speed_show, lpfc_link_speed_store);
- cmd->SCp.Status = SAM_STAT_CHECK_CONDITION;
+ /*
+ # lpfc_fcp_class: Determines FC class to use for the FCP protocol.
+@@ -1479,7 +1596,30 @@ LPFC_ATTR_RW(poll_tmo, 10, 1, 255,
+ */
+ LPFC_ATTR_R(use_msi, 0, 0, 1, "Use Message Signaled Interrupts, if possible");
-@@ -1587,22 +1577,10 @@ static void busfree_run(struct Scsi_Host *shpnt)
- DPRINTK(debug_eh, ERR_LEAD "requesting sense\n", CMDINFO(ptr));
- #endif
++/*
++# lpfc_enable_hba_reset: Allow or prevent HBA resets to the hardware.
++# 0 = HBA resets disabled
++# 1 = HBA resets enabled (default)
++# Value range is [0,1]. Default value is 1.
++*/
++LPFC_ATTR_R(enable_hba_reset, 1, 0, 1, "Enable HBA resets from the driver.");
++
++/*
++# lpfc_enable_hba_heartbeat: Enable HBA heartbeat timer..
++# 0 = HBA Heartbeat disabled
++# 1 = HBA Heartbeat enabled (default)
++# Value range is [0,1]. Default value is 1.
++*/
++LPFC_ATTR_R(enable_hba_heartbeat, 1, 0, 1, "Enable HBA Heartbeat.");
-- /* save old command */
- sc = SCDATA(ptr);
- /* It was allocated in aha152x_internal_queue? */
- BUG_ON(!sc);
-- memcpy(sc->aha_orig_cmnd, ptr->cmnd,
-- sizeof(ptr->cmnd));
-- sc->aha_orig_cmd_len = ptr->cmd_len;
-- sc->aha_orig_resid = scsi_get_resid(ptr);
--
-- ptr->cmnd[0] = REQUEST_SENSE;
-- ptr->cmnd[1] = 0;
-- ptr->cmnd[2] = 0;
-- ptr->cmnd[3] = 0;
-- ptr->cmnd[4] = sizeof(ptr->sense_buffer);
-- ptr->cmnd[5] = 0;
-- ptr->cmd_len = 6;
-+ scsi_eh_prep_cmnd(ptr, &sc->ses, NULL, 0, ~0);
++/*
++ * lpfc_sg_seg_cnt: Initial Maximum DMA Segment Count
++ * This value can be set to values between 64 and 256. The default value is
++ * 64, but may be increased to allow for larger Max I/O sizes. The scsi layer
++ * will be allowed to request I/Os of sizes up to (MAX_SEG_COUNT * SEG_SIZE).
++ */
++LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
++ LPFC_MAX_SG_SEG_CNT, "Max Scatter Gather Segment Count");
- DO_UNLOCK(flags);
- aha152x_internal_queue(ptr, NULL, check_condition, ptr->scsi_done);
-diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c
-index bbcc2c5..190568e 100644
---- a/drivers/scsi/aha1542.c
-+++ b/drivers/scsi/aha1542.c
-@@ -51,15 +51,6 @@
- #define SCSI_BUF_PA(address) isa_virt_to_bus(address)
- #define SCSI_SG_PA(sgent) (isa_page_to_bus(sg_page((sgent))) + (sgent)->offset)
+ struct class_device_attribute *lpfc_hba_attrs[] = {
+ &class_device_attr_info,
+@@ -1494,6 +1634,7 @@ struct class_device_attribute *lpfc_hba_attrs[] = {
+ &class_device_attr_state,
+ &class_device_attr_num_discovered_ports,
+ &class_device_attr_lpfc_drvr_version,
++ &class_device_attr_lpfc_temp_sensor,
+ &class_device_attr_lpfc_log_verbose,
+ &class_device_attr_lpfc_lun_queue_depth,
+ &class_device_attr_lpfc_hba_queue_depth,
+@@ -1530,6 +1671,9 @@ struct class_device_attribute *lpfc_hba_attrs[] = {
+ &class_device_attr_lpfc_soft_wwnn,
+ &class_device_attr_lpfc_soft_wwpn,
+ &class_device_attr_lpfc_soft_wwn_enable,
++ &class_device_attr_lpfc_enable_hba_reset,
++ &class_device_attr_lpfc_enable_hba_heartbeat,
++ &class_device_attr_lpfc_sg_seg_cnt,
+ NULL,
+ };
--static void BAD_DMA(void *address, unsigned int length)
--{
-- printk(KERN_CRIT "buf vaddress %p paddress 0x%lx length %d\n",
-- address,
-- SCSI_BUF_PA(address),
-- length);
-- panic("Buffer at physical address > 16Mb used for aha1542");
--}
--
- static void BAD_SG_DMA(Scsi_Cmnd * SCpnt,
- struct scatterlist *sgp,
- int nseg,
-@@ -545,7 +536,7 @@ static void aha1542_intr_handle(struct Scsi_Host *shost, void *dev_id)
- we will still have it in the cdb when we come back */
- if (ccb[mbo].tarstat == 2)
- memcpy(SCtmp->sense_buffer, &ccb[mbo].cdb[ccb[mbo].cdblen],
-- sizeof(SCtmp->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
+@@ -1552,6 +1696,7 @@ struct class_device_attribute *lpfc_vport_attrs[] = {
+ &class_device_attr_lpfc_max_luns,
+ &class_device_attr_nport_evt_cnt,
+ &class_device_attr_npiv_info,
++ &class_device_attr_lpfc_enable_da_id,
+ NULL,
+ };
+@@ -1727,13 +1872,18 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
- /* is there mail :-) */
-@@ -597,8 +588,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
- unchar target = SCpnt->device->id;
- unchar lun = SCpnt->device->lun;
- unsigned long flags;
-- void *buff = SCpnt->request_buffer;
-- int bufflen = SCpnt->request_bufflen;
-+ int bufflen = scsi_bufflen(SCpnt);
- int mbo;
- struct mailbox *mb;
- struct ccb *ccb;
-@@ -619,7 +609,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
- #if 0
- /* scsi_request_sense() provides a buffer of size 256,
- so there is no reason to expect equality */
-- if (bufflen != sizeof(SCpnt->sense_buffer))
-+ if (bufflen != SCSI_SENSE_BUFFERSIZE)
- printk(KERN_CRIT "aha1542: Wrong buffer length supplied "
- "for request sense (%d)\n", bufflen);
- #endif
-@@ -689,42 +679,29 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ spin_lock_irq(&phba->hbalock);
- memcpy(ccb[mbo].cdb, cmd, ccb[mbo].cdblen);
++ if (phba->over_temp_state == HBA_OVER_TEMP) {
++ sysfs_mbox_idle(phba);
++ spin_unlock_irq(&phba->hbalock);
++ return -EACCES;
++ }
++
+ if (off == 0 &&
+ phba->sysfs_mbox.state == SMBOX_WRITING &&
+ phba->sysfs_mbox.offset >= 2 * sizeof(uint32_t)) {
-- if (SCpnt->use_sg) {
-+ if (bufflen) {
- struct scatterlist *sg;
- struct chain *cptr;
- #ifdef DEBUG
- unsigned char *ptr;
- #endif
-- int i;
-+ int i, sg_count = scsi_sg_count(SCpnt);
- ccb[mbo].op = 2; /* SCSI Initiator Command w/scatter-gather */
-- SCpnt->host_scribble = kmalloc(512, GFP_KERNEL | GFP_DMA);
-+ SCpnt->host_scribble = kmalloc(sizeof(*cptr)*sg_count,
-+ GFP_KERNEL | GFP_DMA);
- cptr = (struct chain *) SCpnt->host_scribble;
- if (cptr == NULL) {
- /* free the claimed mailbox slot */
- HOSTDATA(SCpnt->device->host)->SCint[mbo] = NULL;
- return SCSI_MLQUEUE_HOST_BUSY;
- }
-- scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
-- if (sg->length == 0 || SCpnt->use_sg > 16 ||
-- (((int) sg->offset) & 1) || (sg->length & 1)) {
-- unsigned char *ptr;
-- printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i);
-- scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
-- printk(KERN_CRIT "%d: %p %d\n", i,
-- sg_virt(sg), sg->length);
-- };
-- printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr);
-- ptr = (unsigned char *) &cptr[i];
-- for (i = 0; i < 18; i++)
-- printk("%02x ", ptr[i]);
-- panic("Foooooooood fight!");
-- };
-+ scsi_for_each_sg(SCpnt, sg, sg_count, i) {
- any2scsi(cptr[i].dataptr, SCSI_SG_PA(sg));
- if (SCSI_SG_PA(sg) + sg->length - 1 > ISA_DMA_THRESHOLD)
-- BAD_SG_DMA(SCpnt, sg, SCpnt->use_sg, i);
-+ BAD_SG_DMA(SCpnt, scsi_sglist(SCpnt), sg_count, i);
- any2scsi(cptr[i].datalen, sg->length);
- };
-- any2scsi(ccb[mbo].datalen, SCpnt->use_sg * sizeof(struct chain));
-+ any2scsi(ccb[mbo].datalen, sg_count * sizeof(struct chain));
- any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(cptr));
- #ifdef DEBUG
- printk("cptr %x: ", cptr);
-@@ -735,10 +712,8 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
- } else {
- ccb[mbo].op = 0; /* SCSI Initiator Command */
- SCpnt->host_scribble = NULL;
-- any2scsi(ccb[mbo].datalen, bufflen);
-- if (buff && SCSI_BUF_PA(buff + bufflen - 1) > ISA_DMA_THRESHOLD)
-- BAD_DMA(buff, bufflen);
-- any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(buff));
-+ any2scsi(ccb[mbo].datalen, 0);
-+ any2scsi(ccb[mbo].dataptr, 0);
- };
- ccb[mbo].idlun = (target & 7) << 5 | direction | (lun & 7); /*SCSI Target Id */
- ccb[mbo].rsalen = 16;
-diff --git a/drivers/scsi/aha1740.c b/drivers/scsi/aha1740.c
-index f6722fd..be58a0b 100644
---- a/drivers/scsi/aha1740.c
-+++ b/drivers/scsi/aha1740.c
-@@ -286,7 +286,7 @@ static irqreturn_t aha1740_intr_handle(int irq, void *dev_id)
- cdb when we come back */
- if ( (adapstat & G2INTST_MASK) == G2INTST_CCBERROR ) {
- memcpy(SCtmp->sense_buffer, ecbptr->sense,
-- sizeof(SCtmp->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- errstatus = aha1740_makecode(ecbptr->sense,ecbptr->status);
- } else
- errstatus = 0;
-diff --git a/drivers/scsi/aic7xxx/Makefile b/drivers/scsi/aic7xxx/Makefile
-index 9a6ce19..e4f70c5 100644
---- a/drivers/scsi/aic7xxx/Makefile
-+++ b/drivers/scsi/aic7xxx/Makefile
-@@ -33,11 +33,10 @@ aic79xx-y += aic79xx_osm.o \
- aic79xx_proc.o \
- aic79xx_osm_pci.o
+ switch (phba->sysfs_mbox.mbox->mb.mbxCommand) {
+ /* Offline only */
+- case MBX_WRITE_NV:
+ case MBX_INIT_LINK:
+ case MBX_DOWN_LINK:
+ case MBX_CONFIG_LINK:
+@@ -1744,9 +1894,7 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
+ case MBX_DUMP_CONTEXT:
+ case MBX_RUN_DIAGS:
+ case MBX_RESTART:
+- case MBX_FLASH_WR_ULA:
+ case MBX_SET_MASK:
+- case MBX_SET_SLIM:
+ case MBX_SET_DEBUG:
+ if (!(vport->fc_flag & FC_OFFLINE_MODE)) {
+ printk(KERN_WARNING "mbox_read:Command 0x%x "
+@@ -1756,6 +1904,8 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
+ spin_unlock_irq(&phba->hbalock);
+ return -EPERM;
+ }
++ case MBX_WRITE_NV:
++ case MBX_WRITE_VPARMS:
+ case MBX_LOAD_SM:
+ case MBX_READ_NV:
+ case MBX_READ_CONFIG:
+@@ -1772,6 +1922,8 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
+ case MBX_LOAD_EXP_ROM:
+ case MBX_BEACON:
+ case MBX_DEL_LD_ENTRY:
++ case MBX_SET_VARIABLE:
++ case MBX_WRITE_WWN:
+ break;
+ case MBX_READ_SPARM64:
+ case MBX_READ_LA:
+@@ -1793,6 +1945,17 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
+ return -EPERM;
+ }
--EXTRA_CFLAGS += -Idrivers/scsi
-+ccflags-y += -Idrivers/scsi
- ifdef WARNINGS_BECOME_ERRORS
--EXTRA_CFLAGS += -Werror
-+ccflags-y += -Werror
- endif
--#EXTRA_CFLAGS += -g
++ /* If HBA encountered an error attention, allow only DUMP
++ * mailbox command until the HBA is restarted.
++ */
++ if ((phba->pport->stopped) &&
++ (phba->sysfs_mbox.mbox->mb.mbxCommand
++ != MBX_DUMP_MEMORY)) {
++ sysfs_mbox_idle(phba);
++ spin_unlock_irq(&phba->hbalock);
++ return -EPERM;
++ }
++
+ phba->sysfs_mbox.mbox->vport = vport;
- # Files generated that shall be removed upon make clean
- clean-files := aic7xxx_seq.h aic7xxx_reg.h aic7xxx_reg_print.c
-@@ -46,53 +45,45 @@ clean-files += aic79xx_seq.h aic79xx_reg.h aic79xx_reg_print.c
- # Dependencies for generated files need to be listed explicitly
+ if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) {
+@@ -1993,7 +2156,8 @@ lpfc_get_host_speed(struct Scsi_Host *shost)
+ fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
+ break;
+ }
+- }
++ } else
++ fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
- $(obj)/aic7xxx_core.o: $(obj)/aic7xxx_seq.h
-+$(obj)/aic7xxx_core.o: $(obj)/aic7xxx_reg.h
- $(obj)/aic79xx_core.o: $(obj)/aic79xx_seq.h
--$(obj)/aic79xx_reg_print.c: $(src)/aic79xx_reg_print.c_shipped
--$(obj)/aic7xxx_reg_print.c: $(src)/aic7xxx_reg_print.c_shipped
-+$(obj)/aic79xx_core.o: $(obj)/aic79xx_reg.h
+ spin_unlock_irq(shost->host_lock);
+ }
+@@ -2013,7 +2177,7 @@ lpfc_get_host_fabric_name (struct Scsi_Host *shost)
+ node_name = wwn_to_u64(phba->fc_fabparam.nodeName.u.wwn);
+ else
+ /* fabric is local port if there is no F/FL_Port */
+- node_name = wwn_to_u64(vport->fc_nodename.u.wwn);
++ node_name = 0;
--$(addprefix $(obj)/,$(aic7xxx-y)): $(obj)/aic7xxx_reg.h
--$(addprefix $(obj)/,$(aic79xx-y)): $(obj)/aic79xx_reg.h
-+$(addprefix $(obj)/,$(aic7xxx-y)): $(obj)/aic7xxx_seq.h
-+$(addprefix $(obj)/,$(aic79xx-y)): $(obj)/aic79xx_seq.h
+ spin_unlock_irq(shost->host_lock);
--aic7xxx-gen-$(CONFIG_AIC7XXX_BUILD_FIRMWARE) := $(obj)/aic7xxx_seq.h \
-- $(obj)/aic7xxx_reg.h
-+aic7xxx-gen-$(CONFIG_AIC7XXX_BUILD_FIRMWARE) := $(obj)/aic7xxx_reg.h
- aic7xxx-gen-$(CONFIG_AIC7XXX_REG_PRETTY_PRINT) += $(obj)/aic7xxx_reg_print.c
+@@ -2337,8 +2501,6 @@ struct fc_function_template lpfc_transport_functions = {
+ .dev_loss_tmo_callbk = lpfc_dev_loss_tmo_callbk,
+ .terminate_rport_io = lpfc_terminate_rport_io,
- aicasm-7xxx-opts-$(CONFIG_AIC7XXX_REG_PRETTY_PRINT) := \
- -p $(obj)/aic7xxx_reg_print.c -i aic7xxx_osm.h
+- .vport_create = lpfc_vport_create,
+- .vport_delete = lpfc_vport_delete,
+ .dd_fcvport_size = sizeof(struct lpfc_vport *),
+ };
- ifeq ($(CONFIG_AIC7XXX_BUILD_FIRMWARE),y)
--# Create a dependency chain in generated files
--# to avoid concurrent invocations of the single
--# rule that builds them all.
--aic7xxx_seq.h: aic7xxx_reg.h
--ifeq ($(CONFIG_AIC7XXX_REG_PRETTY_PRINT),y)
--aic7xxx_reg.h: aic7xxx_reg_print.c
--endif
--$(aic7xxx-gen-y): $(src)/aic7xxx.seq $(src)/aic7xxx.reg $(obj)/aicasm/aicasm
-+$(obj)/aic7xxx_seq.h: $(src)/aic7xxx.seq $(src)/aic7xxx.reg $(obj)/aicasm/aicasm
- $(obj)/aicasm/aicasm -I$(src) -r $(obj)/aic7xxx_reg.h \
- $(aicasm-7xxx-opts-y) -o $(obj)/aic7xxx_seq.h \
- $(src)/aic7xxx.seq
+@@ -2414,21 +2576,23 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
+ lpfc_poll_tmo_init(phba, lpfc_poll_tmo);
+ lpfc_enable_npiv_init(phba, lpfc_enable_npiv);
+ lpfc_use_msi_init(phba, lpfc_use_msi);
++ lpfc_enable_hba_reset_init(phba, lpfc_enable_hba_reset);
++ lpfc_enable_hba_heartbeat_init(phba, lpfc_enable_hba_heartbeat);
+ phba->cfg_poll = lpfc_poll;
+ phba->cfg_soft_wwnn = 0L;
+ phba->cfg_soft_wwpn = 0L;
+- /*
+- * The total number of segments is the configuration value plus 2
+- * since the IOCB need a command and response bde.
+- */
+- phba->cfg_sg_seg_cnt = LPFC_SG_SEG_CNT + 2;
++ lpfc_sg_seg_cnt_init(phba, lpfc_sg_seg_cnt);
++ /* Also reinitialize the host templates with new values. */
++ lpfc_vport_template.sg_tablesize = phba->cfg_sg_seg_cnt;
++ lpfc_template.sg_tablesize = phba->cfg_sg_seg_cnt;
+ /*
+ * Since the sg_tablesize is module parameter, the sg_dma_buf_size
+- * used to create the sg_dma_buf_pool must be dynamically calculated
++ * used to create the sg_dma_buf_pool must be dynamically calculated.
++ * 2 segments are added since the IOCB needs a command and response bde.
+ */
+ phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) +
+ sizeof(struct fcp_rsp) +
+- (phba->cfg_sg_seg_cnt * sizeof(struct ulp_bde64));
++ ((phba->cfg_sg_seg_cnt + 2) * sizeof(struct ulp_bde64));
+ lpfc_hba_queue_depth_init(phba, lpfc_hba_queue_depth);
+ return;
+ }
+@@ -2448,5 +2612,6 @@ lpfc_get_vport_cfgparam(struct lpfc_vport *vport)
+ lpfc_discovery_threads_init(vport, lpfc_discovery_threads);
+ lpfc_max_luns_init(vport, lpfc_max_luns);
+ lpfc_scan_down_init(vport, lpfc_scan_down);
++ lpfc_enable_da_id_init(vport, lpfc_enable_da_id);
+ return;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
+index a599e15..50fcb7c 100644
+--- a/drivers/scsi/lpfc/lpfc_crtn.h
++++ b/drivers/scsi/lpfc/lpfc_crtn.h
+@@ -23,6 +23,8 @@ typedef int (*node_filter)(struct lpfc_nodelist *ndlp, void *param);
+ struct fc_rport;
+ void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t);
+ void lpfc_read_nv(struct lpfc_hba *, LPFC_MBOXQ_t *);
++void lpfc_config_async(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
+
-+$(aic7xxx-gen-y): $(obj)/aic7xxx_seq.h
-+else
-+$(obj)/aic7xxx_reg_print.c: $(src)/aic7xxx_reg_print.c_shipped
- endif
+ void lpfc_heart_beat(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ int lpfc_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
+ struct lpfc_dmabuf *mp);
+@@ -43,9 +45,9 @@ void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
+ struct lpfc_vport *lpfc_find_vport_by_did(struct lpfc_hba *, uint32_t);
+ void lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove);
+ int lpfc_linkdown(struct lpfc_hba *);
++void lpfc_port_link_failure(struct lpfc_vport *);
+ void lpfc_mbx_cmpl_read_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
--aic79xx-gen-$(CONFIG_AIC79XX_BUILD_FIRMWARE) := $(obj)/aic79xx_seq.h \
-- $(obj)/aic79xx_reg.h
-+aic79xx-gen-$(CONFIG_AIC79XX_BUILD_FIRMWARE) := $(obj)/aic79xx_reg.h
- aic79xx-gen-$(CONFIG_AIC79XX_REG_PRETTY_PRINT) += $(obj)/aic79xx_reg_print.c
+-void lpfc_mbx_cmpl_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ void lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
+@@ -66,15 +68,15 @@ int lpfc_check_sli_ndlp(struct lpfc_hba *, struct lpfc_sli_ring *,
+ void lpfc_nlp_init(struct lpfc_vport *, struct lpfc_nodelist *, uint32_t);
+ struct lpfc_nodelist *lpfc_nlp_get(struct lpfc_nodelist *);
+ int lpfc_nlp_put(struct lpfc_nodelist *);
++int lpfc_nlp_not_used(struct lpfc_nodelist *ndlp);
+ struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t);
+ void lpfc_disc_list_loopmap(struct lpfc_vport *);
+ void lpfc_disc_start(struct lpfc_vport *);
+-void lpfc_disc_flush_list(struct lpfc_vport *);
+ void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
++void lpfc_cleanup(struct lpfc_vport *);
+ void lpfc_disc_timeout(unsigned long);
- aicasm-79xx-opts-$(CONFIG_AIC79XX_REG_PRETTY_PRINT) := \
- -p $(obj)/aic79xx_reg_print.c -i aic79xx_osm.h
+ struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
+-struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
- ifeq ($(CONFIG_AIC79XX_BUILD_FIRMWARE),y)
--# Create a dependency chain in generated files
--# to avoid concurrent invocations of the single
--# rule that builds them all.
--aic79xx_seq.h: aic79xx_reg.h
--ifeq ($(CONFIG_AIC79XX_REG_PRETTY_PRINT),y)
--aic79xx_reg.h: aic79xx_reg_print.c
--endif
--$(aic79xx-gen-y): $(src)/aic79xx.seq $(src)/aic79xx.reg $(obj)/aicasm/aicasm
-+$(obj)/aic79xx_seq.h: $(src)/aic79xx.seq $(src)/aic79xx.reg $(obj)/aicasm/aicasm
- $(obj)/aicasm/aicasm -I$(src) -r $(obj)/aic79xx_reg.h \
- $(aicasm-79xx-opts-y) -o $(obj)/aic79xx_seq.h \
- $(src)/aic79xx.seq
+ void lpfc_worker_wake_up(struct lpfc_hba *);
+ int lpfc_workq_post_event(struct lpfc_hba *, void *, void *, uint32_t);
+@@ -82,17 +84,17 @@ int lpfc_do_work(void *);
+ int lpfc_disc_state_machine(struct lpfc_vport *, struct lpfc_nodelist *, void *,
+ uint32_t);
+
+-void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
+- struct lpfc_nodelist *);
+ void lpfc_do_scr_ns_plogi(struct lpfc_hba *, struct lpfc_vport *);
+ int lpfc_check_sparm(struct lpfc_vport *, struct lpfc_nodelist *,
+ struct serv_parm *, uint32_t);
+ int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist *);
++void lpfc_more_plogi(struct lpfc_vport *);
++void lpfc_more_adisc(struct lpfc_vport *);
++void lpfc_end_rscn(struct lpfc_vport *);
+ int lpfc_els_chk_latt(struct lpfc_vport *);
+ int lpfc_els_abort_flogi(struct lpfc_hba *);
+ int lpfc_initial_flogi(struct lpfc_vport *);
+ int lpfc_initial_fdisc(struct lpfc_vport *);
+-int lpfc_issue_els_fdisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+ int lpfc_issue_els_plogi(struct lpfc_vport *, uint32_t, uint8_t);
+ int lpfc_issue_els_prli(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+ int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+@@ -112,7 +114,6 @@ int lpfc_els_rsp_prli_acc(struct lpfc_vport *, struct lpfc_iocbq *,
+ void lpfc_cancel_retry_delay_tmo(struct lpfc_vport *, struct lpfc_nodelist *);
+ void lpfc_els_retry_delay(unsigned long);
+ void lpfc_els_retry_delay_handler(struct lpfc_nodelist *);
+-void lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *);
+ void lpfc_els_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
+ struct lpfc_iocbq *);
+ int lpfc_els_handle_rscn(struct lpfc_vport *);
+@@ -124,7 +125,6 @@ int lpfc_els_disc_adisc(struct lpfc_vport *);
+ int lpfc_els_disc_plogi(struct lpfc_vport *);
+ void lpfc_els_timeout(unsigned long);
+ void lpfc_els_timeout_handler(struct lpfc_vport *);
+-void lpfc_hb_timeout(unsigned long);
+ void lpfc_hb_timeout_handler(struct lpfc_hba *);
+
+ void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
+@@ -142,7 +142,6 @@ void lpfc_hba_init(struct lpfc_hba *, uint32_t *);
+ int lpfc_post_buffer(struct lpfc_hba *, struct lpfc_sli_ring *, int, int);
+ void lpfc_decode_firmware_rev(struct lpfc_hba *, char *, int);
+ int lpfc_online(struct lpfc_hba *);
+-void lpfc_block_mgmt_io(struct lpfc_hba *);
+ void lpfc_unblock_mgmt_io(struct lpfc_hba *);
+ void lpfc_offline_prep(struct lpfc_hba *);
+ void lpfc_offline(struct lpfc_hba *);
+@@ -165,7 +164,6 @@ int lpfc_mbox_tmo_val(struct lpfc_hba *, int);
+
+ void lpfc_config_hbq(struct lpfc_hba *, uint32_t, struct lpfc_hbq_init *,
+ uint32_t , LPFC_MBOXQ_t *);
+-struct lpfc_hbq_entry * lpfc_sli_next_hbq_slot(struct lpfc_hba *, uint32_t);
+ struct hbq_dmabuf *lpfc_els_hbq_alloc(struct lpfc_hba *);
+ void lpfc_els_hbq_free(struct lpfc_hba *, struct hbq_dmabuf *);
+
+@@ -178,7 +176,6 @@ void lpfc_poll_start_timer(struct lpfc_hba * phba);
+ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * hba);
+ struct lpfc_iocbq * lpfc_sli_get_iocbq(struct lpfc_hba *);
+ void lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
+-void __lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
+ uint16_t lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
+
+ void lpfc_reset_barrier(struct lpfc_hba * phba);
+@@ -204,11 +201,14 @@ int lpfc_sli_ringpostbuf_put(struct lpfc_hba *, struct lpfc_sli_ring *,
+ struct lpfc_dmabuf *lpfc_sli_ringpostbuf_get(struct lpfc_hba *,
+ struct lpfc_sli_ring *,
+ dma_addr_t);
+
-+$(aic79xx-gen-y): $(obj)/aic79xx_seq.h
-+else
-+$(obj)/aic79xx_reg_print.c: $(src)/aic79xx_reg_print.c_shipped
- endif
++uint32_t lpfc_sli_get_buffer_tag(struct lpfc_hba *);
++struct lpfc_dmabuf * lpfc_sli_ring_taggedbuf_get(struct lpfc_hba *,
++ struct lpfc_sli_ring *, uint32_t );
++
+ int lpfc_sli_hbq_count(void);
+-int lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *, uint32_t);
+ int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
+ void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
+-struct hbq_dmabuf *lpfc_sli_hbqbuf_find(struct lpfc_hba *, uint32_t);
+ int lpfc_sli_hbq_size(void);
+ int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
+ struct lpfc_iocbq *);
+@@ -219,9 +219,6 @@ int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t,
+ void lpfc_mbox_timeout(unsigned long);
+ void lpfc_mbox_timeout_handler(struct lpfc_hba *);
- $(obj)/aicasm/aicasm: $(src)/aicasm/*.[chyl]
-diff --git a/drivers/scsi/aic7xxx/aic79xx_osm.c b/drivers/scsi/aic7xxx/aic79xx_osm.c
-index 2d02040..0e4708f 100644
---- a/drivers/scsi/aic7xxx/aic79xx_osm.c
-+++ b/drivers/scsi/aic7xxx/aic79xx_osm.c
-@@ -1784,7 +1784,7 @@ ahd_linux_handle_scsi_status(struct ahd_softc *ahd,
- if (scb->flags & SCB_SENSE) {
- sense_size = min(sizeof(struct scsi_sense_data)
- - ahd_get_sense_residual(scb),
-- (u_long)sizeof(cmd->sense_buffer));
-+ (u_long)SCSI_SENSE_BUFFERSIZE);
- sense_offset = 0;
- } else {
- /*
-@@ -1795,11 +1795,11 @@ ahd_linux_handle_scsi_status(struct ahd_softc *ahd,
- scb->sense_data;
- sense_size = min_t(size_t,
- scsi_4btoul(siu->sense_length),
-- sizeof(cmd->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- sense_offset = SIU_SENSE_OFFSET(siu);
- }
+-struct lpfc_nodelist *__lpfc_find_node(struct lpfc_vport *, node_filter,
+- void *);
+-struct lpfc_nodelist *lpfc_find_node(struct lpfc_vport *, node_filter, void *);
+ struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
+ struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
+ struct lpfc_name *);
+@@ -260,6 +257,7 @@ extern struct scsi_host_template lpfc_vport_template;
+ extern struct fc_function_template lpfc_transport_functions;
+ extern struct fc_function_template lpfc_vport_transport_functions;
+ extern int lpfc_sli_mode;
++extern int lpfc_enable_npiv;
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- memcpy(cmd->sense_buffer,
- ahd_get_sense_buf(ahd, scb)
- + sense_offset, sense_size);
-diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c
-index 390b0fc..e310e41 100644
---- a/drivers/scsi/aic7xxx/aic7xxx_osm.c
-+++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c
-@@ -1801,12 +1801,12 @@ ahc_linux_handle_scsi_status(struct ahc_softc *ahc,
+ int lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t);
+ void lpfc_terminate_rport_io(struct fc_rport *);
+@@ -281,11 +279,8 @@ extern void lpfc_debugfs_slow_ring_trc(struct lpfc_hba *, char *, uint32_t,
+ extern struct lpfc_hbq_init *lpfc_hbq_defs[];
- sense_size = min(sizeof(struct scsi_sense_data)
- - ahc_get_sense_residual(scb),
-- (u_long)sizeof(cmd->sense_buffer));
-+ (u_long)SCSI_SENSE_BUFFERSIZE);
- memcpy(cmd->sense_buffer,
- ahc_get_sense_buf(ahc, scb), sense_size);
-- if (sense_size < sizeof(cmd->sense_buffer))
-+ if (sense_size < SCSI_SENSE_BUFFERSIZE)
- memset(&cmd->sense_buffer[sense_size], 0,
-- sizeof(cmd->sense_buffer) - sense_size);
-+ SCSI_SENSE_BUFFERSIZE - sense_size);
- cmd->result |= (DRIVER_SENSE << 24);
- #ifdef AHC_DEBUG
- if (ahc_debug & AHC_SHOW_SENSE) {
-diff --git a/drivers/scsi/aic7xxx_old.c b/drivers/scsi/aic7xxx_old.c
-index 8f8db5f..bcb0b87 100644
---- a/drivers/scsi/aic7xxx_old.c
-+++ b/drivers/scsi/aic7xxx_old.c
-@@ -2696,7 +2696,7 @@ aic7xxx_done(struct aic7xxx_host *p, struct aic7xxx_scb *scb)
- {
- pci_unmap_single(p->pdev,
- le32_to_cpu(scb->sg_list[0].address),
-- sizeof(cmd->sense_buffer),
-+ SCSI_SENSE_BUFFERSIZE,
- PCI_DMA_FROMDEVICE);
- }
- if (scb->flags & SCB_RECOVERY_SCB)
-@@ -4267,13 +4267,13 @@ aic7xxx_handle_seqint(struct aic7xxx_host *p, unsigned char intstat)
- sizeof(generic_sense));
+ /* Interface exported by fabric iocb scheduler */
+-int lpfc_issue_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
+-void lpfc_fabric_abort_vport(struct lpfc_vport *);
+ void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
+ void lpfc_fabric_abort_hba(struct lpfc_hba *);
+-void lpfc_fabric_abort_flogi(struct lpfc_hba *);
+ void lpfc_fabric_block_timeout(unsigned long);
+ void lpfc_unblock_fabric_iocbs(struct lpfc_hba *);
+ void lpfc_adjust_queue_depth(struct lpfc_hba *);
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index c701e4d..92441ce 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -19,7 +19,7 @@
+ *******************************************************************/
- scb->sense_cmd[1] = (cmd->device->lun << 5);
-- scb->sense_cmd[4] = sizeof(cmd->sense_buffer);
-+ scb->sense_cmd[4] = SCSI_SENSE_BUFFERSIZE;
+ /*
+- * Fibre Channel SCSI LAN Device Driver CT support
++ * Fibre Channel SCSI LAN Device Driver CT support: FC Generic Services FC-GS
+ */
- scb->sg_list[0].length =
-- cpu_to_le32(sizeof(cmd->sense_buffer));
-+ cpu_to_le32(SCSI_SENSE_BUFFERSIZE);
- scb->sg_list[0].address =
- cpu_to_le32(pci_map_single(p->pdev, cmd->sense_buffer,
-- sizeof(cmd->sense_buffer),
-+ SCSI_SENSE_BUFFERSIZE,
- PCI_DMA_FROMDEVICE));
+ #include <linux/blkdev.h>
+@@ -57,45 +57,27 @@
- /*
-@@ -4296,7 +4296,7 @@ aic7xxx_handle_seqint(struct aic7xxx_host *p, unsigned char intstat)
- hscb->residual_data_count[2] = 0;
+ static char *lpfc_release_version = LPFC_DRIVER_VERSION;
- scb->sg_count = hscb->SG_segment_count = 1;
-- scb->sg_length = sizeof(cmd->sense_buffer);
-+ scb->sg_length = SCSI_SENSE_BUFFERSIZE;
- scb->tag_action = 0;
- scb->flags |= SCB_SENSE;
- /*
-@@ -10293,7 +10293,6 @@ static int aic7xxx_queue(struct scsi_cmnd *cmd, void (*fn)(struct scsi_cmnd *))
- aic7xxx_position(cmd) = scb->hscb->tag;
- cmd->scsi_done = fn;
- cmd->result = DID_OK;
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
- aic7xxx_error(cmd) = DID_OK;
- aic7xxx_status(cmd) = 0;
- cmd->host_scribble = NULL;
-diff --git a/drivers/scsi/aic94xx/aic94xx_dev.c b/drivers/scsi/aic94xx/aic94xx_dev.c
-index 3dce618..72042ca 100644
---- a/drivers/scsi/aic94xx/aic94xx_dev.c
-+++ b/drivers/scsi/aic94xx/aic94xx_dev.c
-@@ -165,7 +165,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
- if (dev->port->oob_mode != SATA_OOB_MODE) {
- flags |= OPEN_REQUIRED;
- if ((dev->dev_type == SATA_DEV) ||
-- (dev->tproto & SAS_PROTO_STP)) {
-+ (dev->tproto & SAS_PROTOCOL_STP)) {
- struct smp_resp *rps_resp = &dev->sata_dev.rps_resp;
- if (rps_resp->frame_type == SMP_RESPONSE &&
- rps_resp->function == SMP_REPORT_PHY_SATA &&
-@@ -193,7 +193,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
- asd_ddbsite_write_byte(asd_ha, ddb, DDB_TARG_FLAGS, flags);
+-/*
+- * lpfc_ct_unsol_event
+- */
+ static void
+-lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
+- struct lpfc_dmabuf *mp, uint32_t size)
++lpfc_ct_ignore_hbq_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
++ struct lpfc_dmabuf *mp, uint32_t size)
+ {
+ if (!mp) {
+- printk(KERN_ERR "%s (%d): Unsolited CT, no buffer, "
+- "piocbq = %p, status = x%x, mp = %p, size = %d\n",
+- __FUNCTION__, __LINE__,
+- piocbq, piocbq->iocb.ulpStatus, mp, size);
++ lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
++ "0146 Ignoring unsolicted CT No HBQ "
++ "status = x%x\n",
++ piocbq->iocb.ulpStatus);
+ }
+-
+- printk(KERN_ERR "%s (%d): Ignoring unsolicted CT piocbq = %p, "
+- "buffer = %p, size = %d, status = x%x\n",
+- __FUNCTION__, __LINE__,
+- piocbq, mp, size,
+- piocbq->iocb.ulpStatus);
+-
++ lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
++ "0145 Ignoring unsolicted CT HBQ Size:%d "
++ "status = x%x\n",
++ size, piocbq->iocb.ulpStatus);
+ }
- flags = 0;
-- if (dev->tproto & SAS_PROTO_STP)
-+ if (dev->tproto & SAS_PROTOCOL_STP)
- flags |= STP_CL_POL_NO_TX;
- asd_ddbsite_write_byte(asd_ha, ddb, DDB_TARG_FLAGS2, flags);
+ static void
+-lpfc_ct_ignore_hbq_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
+- struct lpfc_dmabuf *mp, uint32_t size)
++lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
++ struct lpfc_dmabuf *mp, uint32_t size)
+ {
+- if (!mp) {
+- printk(KERN_ERR "%s (%d): Unsolited CT, no "
+- "HBQ buffer, piocbq = %p, status = x%x\n",
+- __FUNCTION__, __LINE__,
+- piocbq, piocbq->iocb.ulpStatus);
+- } else {
+- lpfc_ct_unsol_buffer(phba, piocbq, mp, size);
+- printk(KERN_ERR "%s (%d): Ignoring unsolicted CT "
+- "piocbq = %p, buffer = %p, size = %d, "
+- "status = x%x\n",
+- __FUNCTION__, __LINE__,
+- piocbq, mp, size, piocbq->iocb.ulpStatus);
+- }
++ lpfc_ct_ignore_hbq_buffer(phba, piocbq, mp, size);
+ }
-@@ -201,7 +201,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
- asd_ddbsite_write_word(asd_ha, ddb, SEND_QUEUE_TAIL, 0xFFFF);
- asd_ddbsite_write_word(asd_ha, ddb, SISTER_DDB, 0xFFFF);
+ void
+@@ -109,11 +91,8 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ struct lpfc_iocbq *iocbq;
+ dma_addr_t paddr;
+ uint32_t size;
+- struct lpfc_dmabuf *bdeBuf1 = piocbq->context2;
+- struct lpfc_dmabuf *bdeBuf2 = piocbq->context3;
+-
+- piocbq->context2 = NULL;
+- piocbq->context3 = NULL;
++ struct list_head head;
++ struct lpfc_dmabuf *bdeBuf;
-- if (dev->dev_type == SATA_DEV || (dev->tproto & SAS_PROTO_STP)) {
-+ if (dev->dev_type == SATA_DEV || (dev->tproto & SAS_PROTOCOL_STP)) {
- i = asd_init_sata(dev);
- if (i < 0) {
- asd_free_ddb(asd_ha, ddb);
-diff --git a/drivers/scsi/aic94xx/aic94xx_dump.c b/drivers/scsi/aic94xx/aic94xx_dump.c
-index 6bd8e30..3d8c4ff 100644
---- a/drivers/scsi/aic94xx/aic94xx_dump.c
-+++ b/drivers/scsi/aic94xx/aic94xx_dump.c
-@@ -903,11 +903,11 @@ void asd_dump_frame_rcvd(struct asd_phy *phy,
- int i;
+ if (unlikely(icmd->ulpStatus == IOSTAT_NEED_BUFFER)) {
+ lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
+@@ -122,7 +101,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ /* Not enough posted buffers; Try posting more buffers */
+ phba->fc_stat.NoRcvBuf++;
+ if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
+- lpfc_post_buffer(phba, pring, 0, 1);
++ lpfc_post_buffer(phba, pring, 2, 1);
+ return;
+ }
- switch ((dl->status_block[1] & 0x70) >> 3) {
-- case SAS_PROTO_STP:
-+ case SAS_PROTOCOL_STP:
- ASD_DPRINTK("STP proto device-to-host FIS:\n");
- break;
- default:
-- case SAS_PROTO_SSP:
-+ case SAS_PROTOCOL_SSP:
- ASD_DPRINTK("SAS proto IDENTIFY:\n");
- break;
+@@ -133,38 +112,34 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ return;
+
+ if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+- list_for_each_entry(iocbq, &piocbq->list, list) {
++ INIT_LIST_HEAD(&head);
++ list_add_tail(&head, &piocbq->list);
++ list_for_each_entry(iocbq, &head, list) {
+ icmd = &iocbq->iocb;
+- if (icmd->ulpBdeCount == 0) {
+- printk(KERN_ERR "%s (%d): Unsolited CT, no "
+- "BDE, iocbq = %p, status = x%x\n",
+- __FUNCTION__, __LINE__,
+- iocbq, iocbq->iocb.ulpStatus);
++ if (icmd->ulpBdeCount == 0)
+ continue;
+- }
+-
++ bdeBuf = iocbq->context2;
++ iocbq->context2 = NULL;
+ size = icmd->un.cont64[0].tus.f.bdeSize;
+- lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf1, size);
+- lpfc_in_buf_free(phba, bdeBuf1);
++ lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf, size);
++ lpfc_in_buf_free(phba, bdeBuf);
+ if (icmd->ulpBdeCount == 2) {
+- lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf2,
+- size);
+- lpfc_in_buf_free(phba, bdeBuf2);
++ bdeBuf = iocbq->context3;
++ iocbq->context3 = NULL;
++ size = icmd->unsli3.rcvsli3.bde2.tus.f.bdeSize;
++ lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf,
++ size);
++ lpfc_in_buf_free(phba, bdeBuf);
+ }
+ }
++ list_del(&head);
+ } else {
+ struct lpfc_iocbq *next;
+
+ list_for_each_entry_safe(iocbq, next, &piocbq->list, list) {
+ icmd = &iocbq->iocb;
+- if (icmd->ulpBdeCount == 0) {
+- printk(KERN_ERR "%s (%d): Unsolited CT, no "
+- "BDE, iocbq = %p, status = x%x\n",
+- __FUNCTION__, __LINE__,
+- iocbq, iocbq->iocb.ulpStatus);
+- continue;
+- }
+-
++ if (icmd->ulpBdeCount == 0)
++ lpfc_ct_unsol_buffer(phba, piocbq, NULL, 0);
+ for (i = 0; i < icmd->ulpBdeCount; i++) {
+ paddr = getPaddr(icmd->un.cont64[i].addrHigh,
+ icmd->un.cont64[i].addrLow);
+@@ -176,6 +151,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ }
+ list_del(&iocbq->list);
+ lpfc_sli_release_iocbq(phba, iocbq);
++ lpfc_post_buffer(phba, pring, i, 1);
+ }
}
-diff --git a/drivers/scsi/aic94xx/aic94xx_hwi.c b/drivers/scsi/aic94xx/aic94xx_hwi.c
-index 0cd7eed..098b5f3 100644
---- a/drivers/scsi/aic94xx/aic94xx_hwi.c
-+++ b/drivers/scsi/aic94xx/aic94xx_hwi.c
-@@ -91,7 +91,7 @@ static int asd_init_phy(struct asd_phy *phy)
+ }
+@@ -203,7 +179,7 @@ lpfc_alloc_ct_rsp(struct lpfc_hba *phba, int cmdcode, struct ulp_bde64 *bpl,
+ struct lpfc_dmabuf *mp;
+ int cnt, i = 0;
- sas_phy->enabled = 1;
- sas_phy->class = SAS;
-- sas_phy->iproto = SAS_PROTO_ALL;
-+ sas_phy->iproto = SAS_PROTOCOL_ALL;
- sas_phy->tproto = 0;
- sas_phy->type = PHY_TYPE_PHYSICAL;
- sas_phy->role = PHY_ROLE_INITIATOR;
-diff --git a/drivers/scsi/aic94xx/aic94xx_hwi.h b/drivers/scsi/aic94xx/aic94xx_hwi.h
-index 491e5d8..150f670 100644
---- a/drivers/scsi/aic94xx/aic94xx_hwi.h
-+++ b/drivers/scsi/aic94xx/aic94xx_hwi.h
-@@ -72,6 +72,7 @@ struct flash_struct {
- u8 manuf;
- u8 dev_id;
- u8 sec_prot;
-+ u8 method;
+- /* We get chucks of FCELSSIZE */
++ /* We get chunks of FCELSSIZE */
+ cnt = size > FCELSSIZE ? FCELSSIZE: size;
- u32 dir_offs;
- };
-@@ -216,6 +217,8 @@ struct asd_ha_struct {
- struct dma_pool *scb_pool;
+ while (size) {
+@@ -426,6 +402,7 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
- struct asd_seq_data seq; /* sequencer related */
-+ u32 bios_status;
-+ const struct firmware *bios_image;
- };
+ lpfc_set_disctmo(vport);
+ vport->num_disc_nodes = 0;
++ vport->fc_ns_retry = 0;
- /* ---------- Common macros ---------- */
-diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
-index b70d6e7..5d761eb 100644
---- a/drivers/scsi/aic94xx/aic94xx_init.c
-+++ b/drivers/scsi/aic94xx/aic94xx_init.c
-@@ -29,6 +29,7 @@
- #include <linux/kernel.h>
- #include <linux/pci.h>
- #include <linux/delay.h>
-+#include <linux/firmware.h>
- #include <scsi/scsi_host.h>
+ list_add_tail(&head, &mp->list);
+@@ -458,7 +435,7 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
+ ((lpfc_find_vport_by_did(phba, Did) == NULL) ||
+ vport->cfg_peer_port_login)) {
+ if ((vport->port_type != LPFC_NPIV_PORT) ||
+- (vport->fc_flag & FC_RFF_NOT_SUPPORTED) ||
++ (!(vport->ct_flags & FC_CT_RFF_ID)) ||
+ (!vport->cfg_restrict_login)) {
+ ndlp = lpfc_setup_disc_node(vport, Did);
+ if (ndlp) {
+@@ -506,7 +483,17 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
+ Did, vport->fc_flag,
+ vport->fc_rscn_id_cnt);
-@@ -36,6 +37,7 @@
- #include "aic94xx_reg.h"
- #include "aic94xx_hwi.h"
- #include "aic94xx_seq.h"
-+#include "aic94xx_sds.h"
+- if (lpfc_ns_cmd(vport,
++ /* This NPortID was previously
++ * a FCP target, * Don't even
++ * bother to send GFF_ID.
++ */
++ ndlp = lpfc_findnode_did(vport,
++ Did);
++ if (ndlp && (ndlp->nlp_type &
++ NLP_FCP_TARGET))
++ lpfc_setup_disc_node
++ (vport, Did);
++ else if (lpfc_ns_cmd(vport,
+ SLI_CTNS_GFF_ID,
+ 0, Did) == 0)
+ vport->num_disc_nodes++;
+@@ -554,7 +541,7 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_dmabuf *outp;
+ struct lpfc_sli_ct_request *CTrsp;
+ struct lpfc_nodelist *ndlp;
+- int rc;
++ int rc, retry;
- /* The format is "version.release.patchlevel" */
- #define ASD_DRIVER_VERSION "1.0.3"
-@@ -134,7 +136,7 @@ Err:
- return err;
- }
+ /* First save ndlp, before we overwrite it */
+ ndlp = cmdiocb->context_un.ndlp;
+@@ -574,7 +561,6 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (vport->load_flag & FC_UNLOADING)
+ goto out;
--static void __devexit asd_unmap_memio(struct asd_ha_struct *asd_ha)
-+static void asd_unmap_memio(struct asd_ha_struct *asd_ha)
- {
- struct asd_ha_addrspace *io_handle;
+-
+ if (lpfc_els_chk_latt(vport) || lpfc_error_lost_link(irsp)) {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+ "0216 Link event during NS query\n");
+@@ -585,14 +571,35 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (irsp->ulpStatus) {
+ /* Check for retry */
+ if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
+- if ((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
+- (irsp->un.ulpWord[4] != IOERR_NO_RESOURCES))
++ retry = 1;
++ if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) {
++ switch (irsp->un.ulpWord[4]) {
++ case IOERR_NO_RESOURCES:
++ /* We don't increment the retry
++ * count for this case.
++ */
++ break;
++ case IOERR_LINK_DOWN:
++ case IOERR_SLI_ABORTED:
++ case IOERR_SLI_DOWN:
++ retry = 0;
++ break;
++ default:
++ vport->fc_ns_retry++;
++ }
++ }
++ else
+ vport->fc_ns_retry++;
+- /* CT command is being retried */
+- rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
++
++ if (retry) {
++ /* CT command is being retried */
++ rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
+ vport->fc_ns_retry, 0);
+- if (rc == 0)
+- goto out;
++ if (rc == 0) {
++ /* success */
++ goto out;
++ }
++ }
+ }
+ lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+@@ -698,7 +705,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_dmabuf *inp = (struct lpfc_dmabuf *) cmdiocb->context1;
+ struct lpfc_dmabuf *outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+ struct lpfc_sli_ct_request *CTrsp;
+- int did;
++ int did, rc, retry;
+ uint8_t fbits;
+ struct lpfc_nodelist *ndlp;
-@@ -171,7 +173,7 @@ static int __devinit asd_map_ioport(struct asd_ha_struct *asd_ha)
- return err;
- }
+@@ -729,6 +736,39 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+ }
+ else {
++ /* Check for retry */
++ if (cmdiocb->retry < LPFC_MAX_NS_RETRY) {
++ retry = 1;
++ if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) {
++ switch (irsp->un.ulpWord[4]) {
++ case IOERR_NO_RESOURCES:
++ /* We don't increment the retry
++ * count for this case.
++ */
++ break;
++ case IOERR_LINK_DOWN:
++ case IOERR_SLI_ABORTED:
++ case IOERR_SLI_DOWN:
++ retry = 0;
++ break;
++ default:
++ cmdiocb->retry++;
++ }
++ }
++ else
++ cmdiocb->retry++;
++
++ if (retry) {
++ /* CT command is being retried */
++ rc = lpfc_ns_cmd(vport, SLI_CTNS_GFF_ID,
++ cmdiocb->retry, did);
++ if (rc == 0) {
++ /* success */
++ lpfc_ct_free_iocb(phba, cmdiocb);
++ return;
++ }
++ }
++ }
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+ "0267 NameServer GFF Rsp "
+ "x%x Error (%d %d) Data: x%x x%x\n",
+@@ -778,8 +818,8 @@ out:
--static void __devexit asd_unmap_ioport(struct asd_ha_struct *asd_ha)
-+static void asd_unmap_ioport(struct asd_ha_struct *asd_ha)
- {
- pci_release_region(asd_ha->pcidev, PCI_IOBAR_OFFSET);
- }
-@@ -208,7 +210,7 @@ Err:
- return err;
- }
--static void __devexit asd_unmap_ha(struct asd_ha_struct *asd_ha)
-+static void asd_unmap_ha(struct asd_ha_struct *asd_ha)
+ static void
+-lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+- struct lpfc_iocbq *rspiocb)
++lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
++ struct lpfc_iocbq *rspiocb)
{
- if (asd_ha->iospace)
- asd_unmap_ioport(asd_ha);
-@@ -313,6 +315,181 @@ static ssize_t asd_show_dev_pcba_sn(struct device *dev,
+ struct lpfc_vport *vport = cmdiocb->vport;
+ struct lpfc_dmabuf *inp;
+@@ -809,7 +849,7 @@ lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+
+ /* RFT request completes status <ulpStatus> CmdRsp <CmdRsp> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+- "0209 RFT request completes, latt %d, "
++ "0209 CT Request completes, latt %d, "
+ "ulpStatus x%x CmdRsp x%x, Context x%x, Tag x%x\n",
+ latt, irsp->ulpStatus,
+ CTrsp->CommandResponse.bits.CmdRsp,
+@@ -848,10 +888,44 @@ out:
}
- static DEVICE_ATTR(pcba_sn, S_IRUGO, asd_show_dev_pcba_sn, NULL);
-+#define FLASH_CMD_NONE 0x00
-+#define FLASH_CMD_UPDATE 0x01
-+#define FLASH_CMD_VERIFY 0x02
-+
-+struct flash_command {
-+ u8 command[8];
-+ int code;
-+};
-+
-+static struct flash_command flash_command_table[] =
-+{
-+ {"verify", FLASH_CMD_VERIFY},
-+ {"update", FLASH_CMD_UPDATE},
-+ {"", FLASH_CMD_NONE} /* Last entry should be NULL. */
-+};
-+
-+struct error_bios {
-+ char *reason;
-+ int err_code;
-+};
-+
-+static struct error_bios flash_error_table[] =
-+{
-+ {"Failed to open bios image file", FAIL_OPEN_BIOS_FILE},
-+ {"PCI ID mismatch", FAIL_CHECK_PCI_ID},
-+ {"Checksum mismatch", FAIL_CHECK_SUM},
-+ {"Unknown Error", FAIL_UNKNOWN},
-+ {"Failed to verify.", FAIL_VERIFY},
-+ {"Failed to reset flash chip.", FAIL_RESET_FLASH},
-+ {"Failed to find flash chip type.", FAIL_FIND_FLASH_ID},
-+ {"Failed to erash flash chip.", FAIL_ERASE_FLASH},
-+ {"Failed to program flash chip.", FAIL_WRITE_FLASH},
-+ {"Flash in progress", FLASH_IN_PROGRESS},
-+ {"Image file size Error", FAIL_FILE_SIZE},
-+ {"Input parameter error", FAIL_PARAMETERS},
-+ {"Out of memory", FAIL_OUT_MEMORY},
-+ {"OK", 0} /* Last entry err_code = 0. */
-+};
-+
-+static ssize_t asd_store_update_bios(struct device *dev,
-+ struct device_attribute *attr,
-+ const char *buf, size_t count)
+ static void
++lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
++ struct lpfc_iocbq *rspiocb)
+{
-+ struct asd_ha_struct *asd_ha = dev_to_asd_ha(dev);
-+ char *cmd_ptr, *filename_ptr;
-+ struct bios_file_header header, *hdr_ptr;
-+ int res, i;
-+ u32 csum = 0;
-+ int flash_command = FLASH_CMD_NONE;
-+ int err = 0;
++ IOCB_t *irsp = &rspiocb->iocb;
++ struct lpfc_vport *vport = cmdiocb->vport;
+
-+ cmd_ptr = kzalloc(count*2, GFP_KERNEL);
++ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
++ struct lpfc_dmabuf *outp;
++ struct lpfc_sli_ct_request *CTrsp;
+
-+ if (!cmd_ptr) {
-+ err = FAIL_OUT_MEMORY;
-+ goto out;
++ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
++ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
++ if (CTrsp->CommandResponse.bits.CmdRsp ==
++ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
++ vport->ct_flags |= FC_CT_RFT_ID;
+ }
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
++ return;
++}
+
-+ filename_ptr = cmd_ptr + count;
-+ res = sscanf(buf, "%s %s", cmd_ptr, filename_ptr);
-+ if (res != 2) {
-+ err = FAIL_PARAMETERS;
-+ goto out1;
-+ }
++static void
+ lpfc_cmpl_ct_cmd_rnn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_iocbq *rspiocb)
+ {
+- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
++ IOCB_t *irsp = &rspiocb->iocb;
++ struct lpfc_vport *vport = cmdiocb->vport;
+
-+ for (i = 0; flash_command_table[i].code != FLASH_CMD_NONE; i++) {
-+ if (!memcmp(flash_command_table[i].command,
-+ cmd_ptr, strlen(cmd_ptr))) {
-+ flash_command = flash_command_table[i].code;
-+ break;
-+ }
-+ }
-+ if (flash_command == FLASH_CMD_NONE) {
-+ err = FAIL_PARAMETERS;
-+ goto out1;
-+ }
++ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
++ struct lpfc_dmabuf *outp;
++ struct lpfc_sli_ct_request *CTrsp;
+
-+ if (asd_ha->bios_status == FLASH_IN_PROGRESS) {
-+ err = FLASH_IN_PROGRESS;
-+ goto out1;
-+ }
-+ err = request_firmware(&asd_ha->bios_image,
-+ filename_ptr,
-+ &asd_ha->pcidev->dev);
-+ if (err) {
-+ asd_printk("Failed to load bios image file %s, error %d\n",
-+ filename_ptr, err);
-+ err = FAIL_OPEN_BIOS_FILE;
-+ goto out1;
++ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
++ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
++ if (CTrsp->CommandResponse.bits.CmdRsp ==
++ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
++ vport->ct_flags |= FC_CT_RNN_ID;
+ }
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+ return;
+ }
+
+@@ -859,7 +933,20 @@ static void
+ lpfc_cmpl_ct_cmd_rspn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_iocbq *rspiocb)
+ {
+- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
++ IOCB_t *irsp = &rspiocb->iocb;
++ struct lpfc_vport *vport = cmdiocb->vport;
+
-+ hdr_ptr = (struct bios_file_header *)asd_ha->bios_image->data;
-+
-+ if ((hdr_ptr->contrl_id.vendor != asd_ha->pcidev->vendor ||
-+ hdr_ptr->contrl_id.device != asd_ha->pcidev->device) &&
-+ (hdr_ptr->contrl_id.sub_vendor != asd_ha->pcidev->vendor ||
-+ hdr_ptr->contrl_id.sub_device != asd_ha->pcidev->device)) {
-+
-+ ASD_DPRINTK("The PCI vendor or device id does not match\n");
-+ ASD_DPRINTK("vendor=%x dev=%x sub_vendor=%x sub_dev=%x"
-+ " pci vendor=%x pci dev=%x\n",
-+ hdr_ptr->contrl_id.vendor,
-+ hdr_ptr->contrl_id.device,
-+ hdr_ptr->contrl_id.sub_vendor,
-+ hdr_ptr->contrl_id.sub_device,
-+ asd_ha->pcidev->vendor,
-+ asd_ha->pcidev->device);
-+ err = FAIL_CHECK_PCI_ID;
-+ goto out2;
-+ }
++ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
++ struct lpfc_dmabuf *outp;
++ struct lpfc_sli_ct_request *CTrsp;
+
-+ if (hdr_ptr->filelen != asd_ha->bios_image->size) {
-+ err = FAIL_FILE_SIZE;
-+ goto out2;
++ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
++ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
++ if (CTrsp->CommandResponse.bits.CmdRsp ==
++ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
++ vport->ct_flags |= FC_CT_RSPN_ID;
+ }
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+ return;
+ }
+
+@@ -867,7 +954,32 @@ static void
+ lpfc_cmpl_ct_cmd_rsnn_nn(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_iocbq *rspiocb)
+ {
+- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
++ IOCB_t *irsp = &rspiocb->iocb;
++ struct lpfc_vport *vport = cmdiocb->vport;
+
-+ /* calculate checksum */
-+ for (i = 0; i < hdr_ptr->filelen; i++)
-+ csum += asd_ha->bios_image->data[i];
++ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
++ struct lpfc_dmabuf *outp;
++ struct lpfc_sli_ct_request *CTrsp;
+
-+ if ((csum & 0x0000ffff) != hdr_ptr->checksum) {
-+ ASD_DPRINTK("BIOS file checksum mismatch\n");
-+ err = FAIL_CHECK_SUM;
-+ goto out2;
-+ }
-+ if (flash_command == FLASH_CMD_UPDATE) {
-+ asd_ha->bios_status = FLASH_IN_PROGRESS;
-+ err = asd_write_flash_seg(asd_ha,
-+ &asd_ha->bios_image->data[sizeof(*hdr_ptr)],
-+ 0, hdr_ptr->filelen-sizeof(*hdr_ptr));
-+ if (!err)
-+ err = asd_verify_flash_seg(asd_ha,
-+ &asd_ha->bios_image->data[sizeof(*hdr_ptr)],
-+ 0, hdr_ptr->filelen-sizeof(*hdr_ptr));
-+ } else {
-+ asd_ha->bios_status = FLASH_IN_PROGRESS;
-+ err = asd_verify_flash_seg(asd_ha,
-+ &asd_ha->bios_image->data[sizeof(header)],
-+ 0, hdr_ptr->filelen-sizeof(header));
++ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
++ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
++ if (CTrsp->CommandResponse.bits.CmdRsp ==
++ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
++ vport->ct_flags |= FC_CT_RSNN_NN;
+ }
-+
-+out2:
-+ release_firmware(asd_ha->bios_image);
-+out1:
-+ kfree(cmd_ptr);
-+out:
-+ asd_ha->bios_status = err;
-+
-+ if (!err)
-+ return count;
-+ else
-+ return -err;
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
++ return;
+}
+
-+static ssize_t asd_show_update_bios(struct device *dev,
-+ struct device_attribute *attr, char *buf)
++static void
++lpfc_cmpl_ct_cmd_da_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
++ struct lpfc_iocbq *rspiocb)
+{
-+ int i;
-+ struct asd_ha_struct *asd_ha = dev_to_asd_ha(dev);
-+
-+ for (i = 0; flash_error_table[i].err_code != 0; i++) {
-+ if (flash_error_table[i].err_code == asd_ha->bios_status)
-+ break;
-+ }
-+ if (asd_ha->bios_status != FLASH_IN_PROGRESS)
-+ asd_ha->bios_status = FLASH_OK;
-+
-+ return snprintf(buf, PAGE_SIZE, "status=%x %s\n",
-+ flash_error_table[i].err_code,
-+ flash_error_table[i].reason);
-+}
-+
-+static DEVICE_ATTR(update_bios, S_IRUGO|S_IWUGO,
-+ asd_show_update_bios, asd_store_update_bios);
++ struct lpfc_vport *vport = cmdiocb->vport;
+
- static int asd_create_dev_attrs(struct asd_ha_struct *asd_ha)
- {
- int err;
-@@ -328,9 +505,14 @@ static int asd_create_dev_attrs(struct asd_ha_struct *asd_ha)
- err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
- if (err)
- goto err_biosb;
-+ err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_update_bios);
-+ if (err)
-+ goto err_update_bios;
++ /* even if it fails we will act as though it succeeded. */
++ vport->ct_flags = 0;
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+ return;
+ }
- return 0;
+@@ -878,10 +990,17 @@ lpfc_cmpl_ct_cmd_rff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ IOCB_t *irsp = &rspiocb->iocb;
+ struct lpfc_vport *vport = cmdiocb->vport;
-+err_update_bios:
-+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
- err_biosb:
- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build);
- err_rev:
-@@ -343,6 +525,7 @@ static void asd_remove_dev_attrs(struct asd_ha_struct *asd_ha)
- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision);
- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build);
- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn);
-+ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_update_bios);
+- if (irsp->ulpStatus != IOSTAT_SUCCESS)
+- vport->fc_flag |= FC_RFF_NOT_SUPPORTED;
++ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
++ struct lpfc_dmabuf *outp;
++ struct lpfc_sli_ct_request *CTrsp;
+
+- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
++ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
++ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
++ if (CTrsp->CommandResponse.bits.CmdRsp ==
++ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
++ vport->ct_flags |= FC_CT_RFF_ID;
++ }
++ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+ return;
}
- /* The first entry, 0, is used for dynamic ids, the rest for devices
-@@ -589,6 +772,7 @@ static int __devinit asd_pci_probe(struct pci_dev *dev,
- asd_ha->sas_ha.dev = &asd_ha->pcidev->dev;
- asd_ha->sas_ha.lldd_ha = asd_ha;
+@@ -1001,6 +1120,8 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
+ bpl->tus.f.bdeSize = RSPN_REQUEST_SZ;
+ else if (cmdcode == SLI_CTNS_RSNN_NN)
+ bpl->tus.f.bdeSize = RSNN_REQUEST_SZ;
++ else if (cmdcode == SLI_CTNS_DA_ID)
++ bpl->tus.f.bdeSize = DA_ID_REQUEST_SZ;
+ else if (cmdcode == SLI_CTNS_RFF_ID)
+ bpl->tus.f.bdeSize = RFF_REQUEST_SZ;
+ else
+@@ -1029,31 +1150,34 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
+ case SLI_CTNS_GFF_ID:
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_GFF_ID);
+- CtReq->un.gff.PortId = be32_to_cpu(context);
++ CtReq->un.gff.PortId = cpu_to_be32(context);
+ cmpl = lpfc_cmpl_ct_cmd_gff_id;
+ break;
-+ asd_ha->bios_status = FLASH_OK;
- asd_ha->name = asd_dev->name;
- asd_printk("found %s, device %s\n", asd_ha->name, pci_name(dev));
+ case SLI_CTNS_RFT_ID:
++ vport->ct_flags &= ~FC_CT_RFT_ID;
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_RFT_ID);
+- CtReq->un.rft.PortId = be32_to_cpu(vport->fc_myDID);
++ CtReq->un.rft.PortId = cpu_to_be32(vport->fc_myDID);
+ CtReq->un.rft.fcpReg = 1;
+ cmpl = lpfc_cmpl_ct_cmd_rft_id;
+ break;
-diff --git a/drivers/scsi/aic94xx/aic94xx_scb.c b/drivers/scsi/aic94xx/aic94xx_scb.c
-index db6ab1a..0febad4 100644
---- a/drivers/scsi/aic94xx/aic94xx_scb.c
-+++ b/drivers/scsi/aic94xx/aic94xx_scb.c
-@@ -788,12 +788,12 @@ void asd_build_control_phy(struct asd_ascb *ascb, int phy_id, u8 subfunc)
+ case SLI_CTNS_RNN_ID:
++ vport->ct_flags &= ~FC_CT_RNN_ID;
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_RNN_ID);
+- CtReq->un.rnn.PortId = be32_to_cpu(vport->fc_myDID);
++ CtReq->un.rnn.PortId = cpu_to_be32(vport->fc_myDID);
+ memcpy(CtReq->un.rnn.wwnn, &vport->fc_nodename,
+ sizeof (struct lpfc_name));
+ cmpl = lpfc_cmpl_ct_cmd_rnn_id;
+ break;
- /* initiator port settings are in the hi nibble */
- if (phy->sas_phy.role == PHY_ROLE_INITIATOR)
-- control_phy->port_type = SAS_PROTO_ALL << 4;
-+ control_phy->port_type = SAS_PROTOCOL_ALL << 4;
- else if (phy->sas_phy.role == PHY_ROLE_TARGET)
-- control_phy->port_type = SAS_PROTO_ALL;
-+ control_phy->port_type = SAS_PROTOCOL_ALL;
- else
- control_phy->port_type =
-- (SAS_PROTO_ALL << 4) | SAS_PROTO_ALL;
-+ (SAS_PROTOCOL_ALL << 4) | SAS_PROTOCOL_ALL;
+ case SLI_CTNS_RSPN_ID:
++ vport->ct_flags &= ~FC_CT_RSPN_ID;
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_RSPN_ID);
+- CtReq->un.rspn.PortId = be32_to_cpu(vport->fc_myDID);
++ CtReq->un.rspn.PortId = cpu_to_be32(vport->fc_myDID);
+ size = sizeof(CtReq->un.rspn.symbname);
+ CtReq->un.rspn.len =
+ lpfc_vport_symbolic_port_name(vport,
+@@ -1061,6 +1185,7 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
+ cmpl = lpfc_cmpl_ct_cmd_rspn_id;
+ break;
+ case SLI_CTNS_RSNN_NN:
++ vport->ct_flags &= ~FC_CT_RSNN_NN;
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_RSNN_NN);
+ memcpy(CtReq->un.rsnn.wwnn, &vport->fc_nodename,
+@@ -1071,11 +1196,18 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
+ CtReq->un.rsnn.symbname, size);
+ cmpl = lpfc_cmpl_ct_cmd_rsnn_nn;
+ break;
++ case SLI_CTNS_DA_ID:
++ /* Implement DA_ID Nameserver request */
++ CtReq->CommandResponse.bits.CmdRsp =
++ be16_to_cpu(SLI_CTNS_DA_ID);
++ CtReq->un.da_id.port_id = cpu_to_be32(vport->fc_myDID);
++ cmpl = lpfc_cmpl_ct_cmd_da_id;
++ break;
+ case SLI_CTNS_RFF_ID:
+- vport->fc_flag &= ~FC_RFF_NOT_SUPPORTED;
++ vport->ct_flags &= ~FC_CT_RFF_ID;
+ CtReq->CommandResponse.bits.CmdRsp =
+ be16_to_cpu(SLI_CTNS_RFF_ID);
+- CtReq->un.rff.PortId = be32_to_cpu(vport->fc_myDID);;
++ CtReq->un.rff.PortId = cpu_to_be32(vport->fc_myDID);;
+ CtReq->un.rff.fbits = FC4_FEATURE_INIT;
+ CtReq->un.rff.type_code = FC_FCP_DATA;
+ cmpl = lpfc_cmpl_ct_cmd_rff_id;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index d6a98bc..783d1ee 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -43,6 +43,7 @@
+ #include "lpfc_crtn.h"
+ #include "lpfc_vport.h"
+ #include "lpfc_version.h"
++#include "lpfc_compat.h"
+ #include "lpfc_debugfs.h"
- /* link reset retries, this should be nominal */
- control_phy->link_reset_retries = 10;
-diff --git a/drivers/scsi/aic94xx/aic94xx_sds.c b/drivers/scsi/aic94xx/aic94xx_sds.c
-index 06509bf..2a4c933 100644
---- a/drivers/scsi/aic94xx/aic94xx_sds.c
-+++ b/drivers/scsi/aic94xx/aic94xx_sds.c
-@@ -30,6 +30,7 @@
+ #ifdef CONFIG_LPFC_DEBUG_FS
+@@ -75,18 +76,18 @@ module_param(lpfc_debugfs_enable, int, 0);
+ MODULE_PARM_DESC(lpfc_debugfs_enable, "Enable debugfs services");
- #include "aic94xx.h"
- #include "aic94xx_reg.h"
-+#include "aic94xx_sds.h"
+ /* This MUST be a power of 2 */
+-static int lpfc_debugfs_max_disc_trc = 0;
++static int lpfc_debugfs_max_disc_trc;
+ module_param(lpfc_debugfs_max_disc_trc, int, 0);
+ MODULE_PARM_DESC(lpfc_debugfs_max_disc_trc,
+ "Set debugfs discovery trace depth");
- /* ---------- OCM stuff ---------- */
+ /* This MUST be a power of 2 */
+-static int lpfc_debugfs_max_slow_ring_trc = 0;
++static int lpfc_debugfs_max_slow_ring_trc;
+ module_param(lpfc_debugfs_max_slow_ring_trc, int, 0);
+ MODULE_PARM_DESC(lpfc_debugfs_max_slow_ring_trc,
+ "Set debugfs slow ring trace depth");
-@@ -1083,3 +1084,391 @@ out:
- kfree(flash_dir);
- return err;
- }
-+
-+/**
-+ * asd_verify_flash_seg - verify data with flash memory
-+ * @asd_ha: pointer to the host adapter structure
-+ * @src: pointer to the source data to be verified
-+ * @dest_offset: offset from flash memory
-+ * @bytes_to_verify: total bytes to verify
-+ */
-+int asd_verify_flash_seg(struct asd_ha_struct *asd_ha,
-+ void *src, u32 dest_offset, u32 bytes_to_verify)
-+{
-+ u8 *src_buf;
-+ u8 flash_char;
-+ int err;
-+ u32 nv_offset, reg, i;
-+
-+ reg = asd_ha->hw_prof.flash.bar;
-+ src_buf = NULL;
-+
-+ err = FLASH_OK;
-+ nv_offset = dest_offset;
-+ src_buf = (u8 *)src;
-+ for (i = 0; i < bytes_to_verify; i++) {
-+ flash_char = asd_read_reg_byte(asd_ha, reg + nv_offset + i);
-+ if (flash_char != src_buf[i]) {
-+ err = FAIL_VERIFY;
-+ break;
-+ }
-+ }
-+ return err;
-+}
-+
-+/**
-+ * asd_write_flash_seg - write data into flash memory
-+ * @asd_ha: pointer to the host adapter structure
-+ * @src: pointer to the source data to be written
-+ * @dest_offset: offset from flash memory
-+ * @bytes_to_write: total bytes to write
-+ */
-+int asd_write_flash_seg(struct asd_ha_struct *asd_ha,
-+ void *src, u32 dest_offset, u32 bytes_to_write)
-+{
-+ u8 *src_buf;
-+ u32 nv_offset, reg, i;
-+ int err;
-+
-+ reg = asd_ha->hw_prof.flash.bar;
-+ src_buf = NULL;
-+
-+ err = asd_check_flash_type(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't find the type of flash. err=%d\n", err);
-+ return err;
-+ }
-+
-+ nv_offset = dest_offset;
-+ err = asd_erase_nv_sector(asd_ha, nv_offset, bytes_to_write);
-+ if (err) {
-+ ASD_DPRINTK("Erase failed at offset:0x%x\n",
-+ nv_offset);
-+ return err;
-+ }
-+
-+ err = asd_reset_flash(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
-+
-+ src_buf = (u8 *)src;
-+ for (i = 0; i < bytes_to_write; i++) {
-+ /* Setup program command sequence */
-+ switch (asd_ha->hw_prof.flash.method) {
-+ case FLASH_METHOD_A:
-+ {
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0xAAA), 0xAA);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0x555), 0x55);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0xAAA), 0xA0);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + nv_offset + i),
-+ (*(src_buf + i)));
-+ break;
-+ }
-+ case FLASH_METHOD_B:
-+ {
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0x555), 0xAA);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0x2AA), 0x55);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + 0x555), 0xA0);
-+ asd_write_reg_byte(asd_ha,
-+ (reg + nv_offset + i),
-+ (*(src_buf + i)));
-+ break;
-+ }
-+ default:
-+ break;
-+ }
-+ if (asd_chk_write_status(asd_ha,
-+ (nv_offset + i), 0) != 0) {
-+ ASD_DPRINTK("aicx: Write failed at offset:0x%x\n",
-+ reg + nv_offset + i);
-+ return FAIL_WRITE_FLASH;
-+ }
-+ }
-+
-+ err = asd_reset_flash(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
-+ return 0;
-+}
-+
-+int asd_chk_write_status(struct asd_ha_struct *asd_ha,
-+ u32 sector_addr, u8 erase_flag)
-+{
-+ u32 reg;
-+ u32 loop_cnt;
-+ u8 nv_data1, nv_data2;
-+ u8 toggle_bit1;
-+
-+ /*
-+ * Read from DQ2 requires sector address
-+ * while it's dont care for DQ6
-+ */
-+ reg = asd_ha->hw_prof.flash.bar;
-+
-+ for (loop_cnt = 0; loop_cnt < 50000; loop_cnt++) {
-+ nv_data1 = asd_read_reg_byte(asd_ha, reg);
-+ nv_data2 = asd_read_reg_byte(asd_ha, reg);
-+
-+ toggle_bit1 = ((nv_data1 & FLASH_STATUS_BIT_MASK_DQ6)
-+ ^ (nv_data2 & FLASH_STATUS_BIT_MASK_DQ6));
-+
-+ if (toggle_bit1 == 0) {
-+ return 0;
-+ } else {
-+ if (nv_data2 & FLASH_STATUS_BIT_MASK_DQ5) {
-+ nv_data1 = asd_read_reg_byte(asd_ha,
-+ reg);
-+ nv_data2 = asd_read_reg_byte(asd_ha,
-+ reg);
-+ toggle_bit1 =
-+ ((nv_data1 & FLASH_STATUS_BIT_MASK_DQ6)
-+ ^ (nv_data2 & FLASH_STATUS_BIT_MASK_DQ6));
-+
-+ if (toggle_bit1 == 0)
-+ return 0;
-+ }
-+ }
+-static int lpfc_debugfs_mask_disc_trc = 0;
++int lpfc_debugfs_mask_disc_trc;
+ module_param(lpfc_debugfs_mask_disc_trc, int, 0);
+ MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc,
+ "Set debugfs discovery trace mask");
+@@ -100,8 +101,11 @@ MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc,
+ #define LPFC_NODELIST_SIZE 8192
+ #define LPFC_NODELIST_ENTRY_SIZE 120
+
+-/* dumpslim output buffer size */
+-#define LPFC_DUMPSLIM_SIZE 4096
++/* dumpHBASlim output buffer size */
++#define LPFC_DUMPHBASLIM_SIZE 4096
+
-+ /*
-+ * ERASE is a sector-by-sector operation and requires
-+ * more time to finish while WRITE is byte-byte-byte
-+ * operation and takes lesser time to finish.
-+ *
-+ * For some strange reason a reduced ERASE delay gives different
-+ * behaviour across different spirit boards. Hence we set
-+ * a optimum balance of 50mus for ERASE which works well
-+ * across all boards.
-+ */
-+ if (erase_flag) {
-+ udelay(FLASH_STATUS_ERASE_DELAY_COUNT);
-+ } else {
-+ udelay(FLASH_STATUS_WRITE_DELAY_COUNT);
-+ }
-+ }
-+ return -1;
-+}
++/* dumpHostSlim output buffer size */
++#define LPFC_DUMPHOSTSLIM_SIZE 4096
+
+ /* hbqinfo output buffer size */
+ #define LPFC_HBQINFO_SIZE 8192
+@@ -243,16 +247,17 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ raw_index = phba->hbq_get[i];
+ getidx = le32_to_cpu(raw_index);
+ len += snprintf(buf+len, size-len,
+- "entrys:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
+- hbqs->entry_count, hbqs->hbqPutIdx, hbqs->next_hbqPutIdx,
+- hbqs->local_hbqGetIdx, getidx);
++ "entrys:%d bufcnt:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
++ hbqs->entry_count, hbqs->buffer_count, hbqs->hbqPutIdx,
++ hbqs->next_hbqPutIdx, hbqs->local_hbqGetIdx, getidx);
+
+ hbqe = (struct lpfc_hbq_entry *) phba->hbqs[i].hbq_virt;
+ for (j=0; j<hbqs->entry_count; j++) {
+ len += snprintf(buf+len, size-len,
+ "%03d: %08x %04x %05x ", j,
+- hbqe->bde.addrLow, hbqe->bde.tus.w, hbqe->buffer_tag);
+-
++ le32_to_cpu(hbqe->bde.addrLow),
++ le32_to_cpu(hbqe->bde.tus.w),
++ le32_to_cpu(hbqe->buffer_tag));
+ i = 0;
+ found = 0;
+
+@@ -276,7 +281,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ list_for_each_entry(d_buf, &hbqs->hbq_buffer_list, list) {
+ hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+ phys = ((uint64_t)hbq_buf->dbuf.phys & 0xffffffff);
+- if (phys == hbqe->bde.addrLow) {
++ if (phys == le32_to_cpu(hbqe->bde.addrLow)) {
+ len += snprintf(buf+len, size-len,
+ "Buf%d: %p %06x\n", i,
+ hbq_buf->dbuf.virt, hbq_buf->tag);
+@@ -297,18 +302,58 @@ skipit:
+ return len;
+ }
+
++static int lpfc_debugfs_last_hba_slim_off;
+
-+/**
-+ * asd_hwi_erase_nv_sector - Erase the flash memory sectors.
-+ * @asd_ha: pointer to the host adapter structure
-+ * @flash_addr: pointer to offset from flash memory
-+ * @size: total bytes to erase.
-+ */
-+int asd_erase_nv_sector(struct asd_ha_struct *asd_ha, u32 flash_addr, u32 size)
++static int
++lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
+{
-+ u32 reg;
-+ u32 sector_addr;
++ int len = 0;
++ int i, off;
++ uint32_t *ptr;
++ char buffer[1024];
+
-+ reg = asd_ha->hw_prof.flash.bar;
++ off = 0;
++ spin_lock_irq(&phba->hbalock);
+
-+ /* sector staring address */
-+ sector_addr = flash_addr & FLASH_SECTOR_SIZE_MASK;
++ len += snprintf(buf+len, size-len, "HBA SLIM\n");
++ lpfc_memcpy_from_slim(buffer,
++ ((uint8_t *)phba->MBslimaddr) + lpfc_debugfs_last_hba_slim_off,
++ 1024);
+
-+ /*
-+ * Erasing an flash sector needs to be done in six consecutive
-+ * write cyles.
-+ */
-+ while (sector_addr < flash_addr+size) {
-+ switch (asd_ha->hw_prof.flash.method) {
-+ case FLASH_METHOD_A:
-+ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0xAA);
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x55);
-+ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0x80);
-+ asd_write_reg_byte(asd_ha, (reg + 0xAAA), 0xAA);
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x55);
-+ asd_write_reg_byte(asd_ha, (reg + sector_addr), 0x30);
-+ break;
-+ case FLASH_METHOD_B:
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
-+ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x80);
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
-+ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
-+ asd_write_reg_byte(asd_ha, (reg + sector_addr), 0x30);
-+ break;
-+ default:
-+ break;
-+ }
++ ptr = (uint32_t *)&buffer[0];
++ off = lpfc_debugfs_last_hba_slim_off;
+
-+ if (asd_chk_write_status(asd_ha, sector_addr, 1) != 0)
-+ return FAIL_ERASE_FLASH;
++ /* Set it up for the next time */
++ lpfc_debugfs_last_hba_slim_off += 1024;
++ if (lpfc_debugfs_last_hba_slim_off >= 4096)
++ lpfc_debugfs_last_hba_slim_off = 0;
+
-+ sector_addr += FLASH_SECTOR_SIZE;
++ i = 1024;
++ while (i > 0) {
++ len += snprintf(buf+len, size-len,
++ "%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
++ off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
++ *(ptr+5), *(ptr+6), *(ptr+7));
++ ptr += 8;
++ i -= (8 * sizeof(uint32_t));
++ off += (8 * sizeof(uint32_t));
+ }
+
-+ return 0;
++ spin_unlock_irq(&phba->hbalock);
++ return len;
+}
+
-+int asd_check_flash_type(struct asd_ha_struct *asd_ha)
+ static int
+-lpfc_debugfs_dumpslim_data(struct lpfc_hba *phba, char *buf, int size)
++lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ {
+ int len = 0;
+- int cnt, i, off;
++ int i, off;
+ uint32_t word0, word1, word2, word3;
+ uint32_t *ptr;
+ struct lpfc_pgp *pgpp;
+ struct lpfc_sli *psli = &phba->sli;
+ struct lpfc_sli_ring *pring;
+
+- cnt = LPFC_DUMPSLIM_SIZE;
+ off = 0;
+ spin_lock_irq(&phba->hbalock);
+
+@@ -620,7 +665,34 @@ out:
+ }
+
+ static int
+-lpfc_debugfs_dumpslim_open(struct inode *inode, struct file *file)
++lpfc_debugfs_dumpHBASlim_open(struct inode *inode, struct file *file)
+{
-+ u8 manuf_id;
-+ u8 dev_id;
-+ u8 sec_prot;
-+ u32 inc;
-+ u32 reg;
-+ int err;
-+
-+ /* get Flash memory base address */
-+ reg = asd_ha->hw_prof.flash.bar;
-+
-+ /* Determine flash info */
-+ err = asd_reset_flash(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
-+
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_UNKNOWN;
-+ asd_ha->hw_prof.flash.manuf = FLASH_MANUF_ID_UNKNOWN;
-+ asd_ha->hw_prof.flash.dev_id = FLASH_DEV_ID_UNKNOWN;
-+
-+ /* Get flash info. This would most likely be AMD Am29LV family flash.
-+ * First try the sequence for word mode. It is the same as for
-+ * 008B (byte mode only), 160B (word mode) and 800D (word mode).
-+ */
-+ inc = asd_ha->hw_prof.flash.wide ? 2 : 1;
-+ asd_write_reg_byte(asd_ha, reg + 0xAAA, 0xAA);
-+ asd_write_reg_byte(asd_ha, reg + 0x555, 0x55);
-+ asd_write_reg_byte(asd_ha, reg + 0xAAA, 0x90);
-+ manuf_id = asd_read_reg_byte(asd_ha, reg);
-+ dev_id = asd_read_reg_byte(asd_ha, reg + inc);
-+ sec_prot = asd_read_reg_byte(asd_ha, reg + inc + inc);
-+ /* Get out of autoselect mode. */
-+ err = asd_reset_flash(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
-+ ASD_DPRINTK("Flash MethodA manuf_id(0x%x) dev_id(0x%x) "
-+ "sec_prot(0x%x)\n", manuf_id, dev_id, sec_prot);
-+ err = asd_reset_flash(asd_ha);
-+ if (err != 0)
-+ return err;
-+
-+ switch (manuf_id) {
-+ case FLASH_MANUF_ID_AMD:
-+ switch (sec_prot) {
-+ case FLASH_DEV_ID_AM29LV800DT:
-+ case FLASH_DEV_ID_AM29LV640MT:
-+ case FLASH_DEV_ID_AM29F800B:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
-+ break;
-+ default:
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_ST:
-+ switch (sec_prot) {
-+ case FLASH_DEV_ID_STM29W800DT:
-+ case FLASH_DEV_ID_STM29LV640:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
-+ break;
-+ default:
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_FUJITSU:
-+ switch (sec_prot) {
-+ case FLASH_DEV_ID_MBM29LV800TE:
-+ case FLASH_DEV_ID_MBM29DL800TA:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_MACRONIX:
-+ switch (sec_prot) {
-+ case FLASH_DEV_ID_MX29LV800BT:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_A;
-+ break;
-+ }
-+ break;
-+ }
-+
-+ if (asd_ha->hw_prof.flash.method == FLASH_METHOD_UNKNOWN) {
-+ err = asd_reset_flash(asd_ha);
-+ if (err) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
-+
-+ /* Issue Unlock sequence for AM29LV008BT */
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0xAA);
-+ asd_write_reg_byte(asd_ha, (reg + 0x2AA), 0x55);
-+ asd_write_reg_byte(asd_ha, (reg + 0x555), 0x90);
-+ manuf_id = asd_read_reg_byte(asd_ha, reg);
-+ dev_id = asd_read_reg_byte(asd_ha, reg + inc);
-+ sec_prot = asd_read_reg_byte(asd_ha, reg + inc + inc);
-+
-+ ASD_DPRINTK("Flash MethodB manuf_id(0x%x) dev_id(0x%x) sec_prot"
-+ "(0x%x)\n", manuf_id, dev_id, sec_prot);
++ struct lpfc_hba *phba = inode->i_private;
++ struct lpfc_debug *debug;
++ int rc = -ENOMEM;
+
-+ err = asd_reset_flash(asd_ha);
-+ if (err != 0) {
-+ ASD_DPRINTK("couldn't reset flash. err=%d\n", err);
-+ return err;
-+ }
++ debug = kmalloc(sizeof(*debug), GFP_KERNEL);
++ if (!debug)
++ goto out;
+
-+ switch (manuf_id) {
-+ case FLASH_MANUF_ID_AMD:
-+ switch (dev_id) {
-+ case FLASH_DEV_ID_AM29LV008BT:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
-+ break;
-+ default:
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_ST:
-+ switch (dev_id) {
-+ case FLASH_DEV_ID_STM29008:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
-+ break;
-+ default:
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_FUJITSU:
-+ switch (dev_id) {
-+ case FLASH_DEV_ID_MBM29LV008TA:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_INTEL:
-+ switch (dev_id) {
-+ case FLASH_DEV_ID_I28LV00TAT:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
-+ break;
-+ }
-+ break;
-+ case FLASH_MANUF_ID_MACRONIX:
-+ switch (dev_id) {
-+ case FLASH_DEV_ID_I28LV00TAT:
-+ asd_ha->hw_prof.flash.method = FLASH_METHOD_B;
-+ break;
-+ }
-+ break;
-+ default:
-+ return FAIL_FIND_FLASH_ID;
-+ }
++ /* Round to page boundry */
++ debug->buffer = kmalloc(LPFC_DUMPHBASLIM_SIZE, GFP_KERNEL);
++ if (!debug->buffer) {
++ kfree(debug);
++ goto out;
+ }
+
-+ if (asd_ha->hw_prof.flash.method == FLASH_METHOD_UNKNOWN)
-+ return FAIL_FIND_FLASH_ID;
++ debug->len = lpfc_debugfs_dumpHBASlim_data(phba, debug->buffer,
++ LPFC_DUMPHBASLIM_SIZE);
++ file->private_data = debug;
+
-+ asd_ha->hw_prof.flash.manuf = manuf_id;
-+ asd_ha->hw_prof.flash.dev_id = dev_id;
-+ asd_ha->hw_prof.flash.sec_prot = sec_prot;
-+ return 0;
++ rc = 0;
++out:
++ return rc;
+}
-diff --git a/drivers/scsi/aic94xx/aic94xx_sds.h b/drivers/scsi/aic94xx/aic94xx_sds.h
-new file mode 100644
-index 0000000..bb9795a
---- /dev/null
-+++ b/drivers/scsi/aic94xx/aic94xx_sds.h
-@@ -0,0 +1,121 @@
-+/*
-+ * Aic94xx SAS/SATA driver hardware interface header file.
-+ *
-+ * Copyright (C) 2005 Adaptec, Inc. All rights reserved.
-+ * Copyright (C) 2005 Gilbert Wu <gilbert_wu at adaptec.com>
-+ *
-+ * This file is licensed under GPLv2.
-+ *
-+ * This file is part of the aic94xx driver.
-+ *
-+ * The aic94xx driver is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public License as
-+ * published by the Free Software Foundation; version 2 of the
-+ * License.
-+ *
-+ * The aic94xx driver is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with the aic94xx driver; if not, write to the Free Software
-+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-+ *
-+ */
-+#ifndef _AIC94XX_SDS_H_
-+#define _AIC94XX_SDS_H_
-+
-+enum {
-+ FLASH_METHOD_UNKNOWN,
-+ FLASH_METHOD_A,
-+ FLASH_METHOD_B
-+};
-+
-+#define FLASH_MANUF_ID_AMD 0x01
-+#define FLASH_MANUF_ID_ST 0x20
-+#define FLASH_MANUF_ID_FUJITSU 0x04
-+#define FLASH_MANUF_ID_MACRONIX 0xC2
-+#define FLASH_MANUF_ID_INTEL 0x89
-+#define FLASH_MANUF_ID_UNKNOWN 0xFF
-+
-+#define FLASH_DEV_ID_AM29LV008BT 0x3E
-+#define FLASH_DEV_ID_AM29LV800DT 0xDA
-+#define FLASH_DEV_ID_STM29W800DT 0xD7
-+#define FLASH_DEV_ID_STM29LV640 0xDE
-+#define FLASH_DEV_ID_STM29008 0xEA
-+#define FLASH_DEV_ID_MBM29LV800TE 0xDA
-+#define FLASH_DEV_ID_MBM29DL800TA 0x4A
-+#define FLASH_DEV_ID_MBM29LV008TA 0x3E
-+#define FLASH_DEV_ID_AM29LV640MT 0x7E
-+#define FLASH_DEV_ID_AM29F800B 0xD6
-+#define FLASH_DEV_ID_MX29LV800BT 0xDA
-+#define FLASH_DEV_ID_MX29LV008CT 0xDA
-+#define FLASH_DEV_ID_I28LV00TAT 0x3E
-+#define FLASH_DEV_ID_UNKNOWN 0xFF
-+
-+/* status bit mask values */
-+#define FLASH_STATUS_BIT_MASK_DQ6 0x40
-+#define FLASH_STATUS_BIT_MASK_DQ5 0x20
-+#define FLASH_STATUS_BIT_MASK_DQ2 0x04
-+
-+/* minimum value in micro seconds needed for checking status */
-+#define FLASH_STATUS_ERASE_DELAY_COUNT 50
-+#define FLASH_STATUS_WRITE_DELAY_COUNT 25
-+
-+#define FLASH_SECTOR_SIZE 0x010000
-+#define FLASH_SECTOR_SIZE_MASK 0xffff0000
-+
-+#define FLASH_OK 0x000000
-+#define FAIL_OPEN_BIOS_FILE 0x000100
-+#define FAIL_CHECK_PCI_ID 0x000200
-+#define FAIL_CHECK_SUM 0x000300
-+#define FAIL_UNKNOWN 0x000400
-+#define FAIL_VERIFY 0x000500
-+#define FAIL_RESET_FLASH 0x000600
-+#define FAIL_FIND_FLASH_ID 0x000700
-+#define FAIL_ERASE_FLASH 0x000800
-+#define FAIL_WRITE_FLASH 0x000900
-+#define FAIL_FILE_SIZE 0x000a00
-+#define FAIL_PARAMETERS 0x000b00
-+#define FAIL_OUT_MEMORY 0x000c00
-+#define FLASH_IN_PROGRESS 0x001000
-+
-+struct controller_id {
-+ u32 vendor; /* PCI Vendor ID */
-+ u32 device; /* PCI Device ID */
-+ u32 sub_vendor; /* PCI Subvendor ID */
-+ u32 sub_device; /* PCI Subdevice ID */
-+};
-+
-+struct image_info {
-+ u32 ImageId; /* Identifies the image */
-+ u32 ImageOffset; /* Offset the beginning of the file */
-+ u32 ImageLength; /* length of the image */
-+ u32 ImageChecksum; /* Image checksum */
-+ u32 ImageVersion; /* Version of the image, could be build number */
-+};
+
-+struct bios_file_header {
-+ u8 signature[32]; /* Signature/Cookie to identify the file */
-+ u32 checksum; /*Entire file checksum with this field zero */
-+ u32 antidote; /* Entire file checksum with this field 0xFFFFFFFF */
-+ struct controller_id contrl_id; /*PCI id to identify the controller */
-+ u32 filelen; /*Length of the entire file*/
-+ u32 chunk_num; /*The chunk/part number for multiple Image files */
-+ u32 total_chunks; /*Total number of chunks/parts in the image file */
-+ u32 num_images; /* Number of images in the file */
-+ u32 build_num; /* Build number of this image */
-+ struct image_info image_header;
++static int
++lpfc_debugfs_dumpHostSlim_open(struct inode *inode, struct file *file)
+ {
+ struct lpfc_hba *phba = inode->i_private;
+ struct lpfc_debug *debug;
+@@ -631,14 +703,14 @@ lpfc_debugfs_dumpslim_open(struct inode *inode, struct file *file)
+ goto out;
+
+ /* Round to page boundry */
+- debug->buffer = kmalloc(LPFC_DUMPSLIM_SIZE, GFP_KERNEL);
++ debug->buffer = kmalloc(LPFC_DUMPHOSTSLIM_SIZE, GFP_KERNEL);
+ if (!debug->buffer) {
+ kfree(debug);
+ goto out;
+ }
+
+- debug->len = lpfc_debugfs_dumpslim_data(phba, debug->buffer,
+- LPFC_DUMPSLIM_SIZE);
++ debug->len = lpfc_debugfs_dumpHostSlim_data(phba, debug->buffer,
++ LPFC_DUMPHOSTSLIM_SIZE);
+ file->private_data = debug;
+
+ rc = 0;
+@@ -741,10 +813,19 @@ static struct file_operations lpfc_debugfs_op_hbqinfo = {
+ .release = lpfc_debugfs_release,
+ };
+
+-#undef lpfc_debugfs_op_dumpslim
+-static struct file_operations lpfc_debugfs_op_dumpslim = {
++#undef lpfc_debugfs_op_dumpHBASlim
++static struct file_operations lpfc_debugfs_op_dumpHBASlim = {
++ .owner = THIS_MODULE,
++ .open = lpfc_debugfs_dumpHBASlim_open,
++ .llseek = lpfc_debugfs_lseek,
++ .read = lpfc_debugfs_read,
++ .release = lpfc_debugfs_release,
+};
+
-+int asd_verify_flash_seg(struct asd_ha_struct *asd_ha,
-+ void *src, u32 dest_offset, u32 bytes_to_verify);
-+int asd_write_flash_seg(struct asd_ha_struct *asd_ha,
-+ void *src, u32 dest_offset, u32 bytes_to_write);
-+int asd_chk_write_status(struct asd_ha_struct *asd_ha,
-+ u32 sector_addr, u8 erase_flag);
-+int asd_check_flash_type(struct asd_ha_struct *asd_ha);
-+int asd_erase_nv_sector(struct asd_ha_struct *asd_ha,
-+ u32 flash_addr, u32 size);
-+#endif
-diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
-index ee0a98b..965d4bb 100644
---- a/drivers/scsi/aic94xx/aic94xx_task.c
-+++ b/drivers/scsi/aic94xx/aic94xx_task.c
-@@ -187,29 +187,13 @@ static void asd_get_response_tasklet(struct asd_ascb *ascb,
- ts->buf_valid_size = 0;
- edb = asd_ha->seq.edb_arr[edb_id + escb->edb_index];
- r = edb->vaddr;
-- if (task->task_proto == SAS_PROTO_SSP) {
-+ if (task->task_proto == SAS_PROTOCOL_SSP) {
- struct ssp_response_iu *iu =
- r + 16 + sizeof(struct ssp_frame_hdr);
++#undef lpfc_debugfs_op_dumpHostSlim
++static struct file_operations lpfc_debugfs_op_dumpHostSlim = {
+ .owner = THIS_MODULE,
+- .open = lpfc_debugfs_dumpslim_open,
++ .open = lpfc_debugfs_dumpHostSlim_open,
+ .llseek = lpfc_debugfs_lseek,
+ .read = lpfc_debugfs_read,
+ .release = lpfc_debugfs_release,
+@@ -812,15 +893,27 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ goto debug_failed;
+ }
- ts->residual = le32_to_cpu(*(__le32 *)r);
-- ts->resp = SAS_TASK_COMPLETE;
-- if (iu->datapres == 0)
-- ts->stat = iu->status;
-- else if (iu->datapres == 1)
-- ts->stat = iu->resp_data[3];
-- else if (iu->datapres == 2) {
-- ts->stat = SAM_CHECK_COND;
-- ts->buf_valid_size = min((u32) SAS_STATUS_BUF_SIZE,
-- be32_to_cpu(iu->sense_data_len));
-- memcpy(ts->buf, iu->sense_data, ts->buf_valid_size);
-- if (iu->status != SAM_CHECK_COND) {
-- ASD_DPRINTK("device %llx sent sense data, but "
-- "stat(0x%x) is not CHECK_CONDITION"
-- "\n",
-- SAS_ADDR(task->dev->sas_addr),
-- iu->status);
-- }
-- }
+- /* Setup dumpslim */
+- snprintf(name, sizeof(name), "dumpslim");
+- phba->debug_dumpslim =
++ /* Setup dumpHBASlim */
++ snprintf(name, sizeof(name), "dumpHBASlim");
++ phba->debug_dumpHBASlim =
++ debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
++ phba->hba_debugfs_root,
++ phba, &lpfc_debugfs_op_dumpHBASlim);
++ if (!phba->debug_dumpHBASlim) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
++ "0409 Cannot create debugfs dumpHBASlim\n");
++ goto debug_failed;
++ }
+
-+ sas_ssp_task_response(&asd_ha->pcidev->dev, task, iu);
- } else {
- struct ata_task_resp *resp = (void *) &ts->buf[0];
++ /* Setup dumpHostSlim */
++ snprintf(name, sizeof(name), "dumpHostSlim");
++ phba->debug_dumpHostSlim =
+ debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+ phba->hba_debugfs_root,
+- phba, &lpfc_debugfs_op_dumpslim);
+- if (!phba->debug_dumpslim) {
++ phba, &lpfc_debugfs_op_dumpHostSlim);
++ if (!phba->debug_dumpHostSlim) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+- "0409 Cannot create debugfs dumpslim\n");
++ "0409 Cannot create debugfs dumpHostSlim\n");
+ goto debug_failed;
+ }
-@@ -341,14 +325,14 @@ Again:
- }
+@@ -970,9 +1063,13 @@ lpfc_debugfs_terminate(struct lpfc_vport *vport)
+ debugfs_remove(phba->debug_hbqinfo); /* hbqinfo */
+ phba->debug_hbqinfo = NULL;
+ }
+- if (phba->debug_dumpslim) {
+- debugfs_remove(phba->debug_dumpslim); /* dumpslim */
+- phba->debug_dumpslim = NULL;
++ if (phba->debug_dumpHBASlim) {
++ debugfs_remove(phba->debug_dumpHBASlim); /* HBASlim */
++ phba->debug_dumpHBASlim = NULL;
++ }
++ if (phba->debug_dumpHostSlim) {
++ debugfs_remove(phba->debug_dumpHostSlim); /* HostSlim */
++ phba->debug_dumpHostSlim = NULL;
+ }
+ if (phba->slow_ring_trc) {
+ kfree(phba->slow_ring_trc);
+diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
+index aacac9a..cfe81c5 100644
+--- a/drivers/scsi/lpfc/lpfc_disc.h
++++ b/drivers/scsi/lpfc/lpfc_disc.h
+@@ -36,7 +36,6 @@ enum lpfc_work_type {
+ LPFC_EVT_WARM_START,
+ LPFC_EVT_KILL,
+ LPFC_EVT_ELS_RETRY,
+- LPFC_EVT_DEV_LOSS_DELAY,
+ LPFC_EVT_DEV_LOSS,
+ };
- switch (task->task_proto) {
-- case SATA_PROTO:
-- case SAS_PROTO_STP:
-+ case SAS_PROTOCOL_SATA:
-+ case SAS_PROTOCOL_STP:
- asd_unbuild_ata_ascb(ascb);
- break;
-- case SAS_PROTO_SMP:
-+ case SAS_PROTOCOL_SMP:
- asd_unbuild_smp_ascb(ascb);
- break;
-- case SAS_PROTO_SSP:
-+ case SAS_PROTOCOL_SSP:
- asd_unbuild_ssp_ascb(ascb);
- default:
- break;
-@@ -586,17 +570,17 @@ int asd_execute_task(struct sas_task *task, const int num,
- list_for_each_entry(a, &alist, list) {
- t = a->uldd_task;
- a->uldd_timer = 1;
-- if (t->task_proto & SAS_PROTO_STP)
-- t->task_proto = SAS_PROTO_STP;
-+ if (t->task_proto & SAS_PROTOCOL_STP)
-+ t->task_proto = SAS_PROTOCOL_STP;
- switch (t->task_proto) {
-- case SATA_PROTO:
-- case SAS_PROTO_STP:
-+ case SAS_PROTOCOL_SATA:
-+ case SAS_PROTOCOL_STP:
- res = asd_build_ata_ascb(a, t, gfp_flags);
- break;
-- case SAS_PROTO_SMP:
-+ case SAS_PROTOCOL_SMP:
- res = asd_build_smp_ascb(a, t, gfp_flags);
- break;
-- case SAS_PROTO_SSP:
-+ case SAS_PROTOCOL_SSP:
- res = asd_build_ssp_ascb(a, t, gfp_flags);
- break;
- default:
-@@ -633,14 +617,14 @@ out_err_unmap:
- t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
- spin_unlock_irqrestore(&t->task_state_lock, flags);
- switch (t->task_proto) {
-- case SATA_PROTO:
-- case SAS_PROTO_STP:
-+ case SAS_PROTOCOL_SATA:
-+ case SAS_PROTOCOL_STP:
- asd_unbuild_ata_ascb(a);
- break;
-- case SAS_PROTO_SMP:
-+ case SAS_PROTOCOL_SMP:
- asd_unbuild_smp_ascb(a);
- break;
-- case SAS_PROTO_SSP:
-+ case SAS_PROTOCOL_SSP:
- asd_unbuild_ssp_ascb(a);
- default:
- break;
-diff --git a/drivers/scsi/aic94xx/aic94xx_tmf.c b/drivers/scsi/aic94xx/aic94xx_tmf.c
-index c0d0b7d..87b2f6e 100644
---- a/drivers/scsi/aic94xx/aic94xx_tmf.c
-+++ b/drivers/scsi/aic94xx/aic94xx_tmf.c
-@@ -372,21 +372,21 @@ int asd_abort_task(struct sas_task *task)
- scb->header.opcode = ABORT_TASK;
+@@ -92,6 +91,7 @@ struct lpfc_nodelist {
+ #define NLP_LOGO_SND 0x100 /* sent LOGO request for this entry */
+ #define NLP_RNID_SND 0x400 /* sent RNID request for this entry */
+ #define NLP_ELS_SND_MASK 0x7e0 /* sent ELS request for this entry */
++#define NLP_DEFER_RM 0x10000 /* Remove this ndlp if no longer used */
+ #define NLP_DELAY_TMO 0x20000 /* delay timeout is running for node */
+ #define NLP_NPR_2B_DISC 0x40000 /* node is included in num_disc_nodes */
+ #define NLP_RCV_PLOGI 0x80000 /* Rcv'ed PLOGI from remote system */
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 8085900..c6b739d 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -18,7 +18,7 @@
+ * more details, a copy of which can be found in the file COPYING *
+ * included with this package. *
+ *******************************************************************/
+-
++/* See Fibre Channel protocol T11 FC-LS for details */
+ #include <linux/blkdev.h>
+ #include <linux/pci.h>
+ #include <linux/interrupt.h>
+@@ -42,6 +42,14 @@ static int lpfc_els_retry(struct lpfc_hba *, struct lpfc_iocbq *,
+ struct lpfc_iocbq *);
+ static void lpfc_cmpl_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *,
+ struct lpfc_iocbq *);
++static void lpfc_fabric_abort_vport(struct lpfc_vport *vport);
++static int lpfc_issue_els_fdisc(struct lpfc_vport *vport,
++ struct lpfc_nodelist *ndlp, uint8_t retry);
++static int lpfc_issue_fabric_iocb(struct lpfc_hba *phba,
++ struct lpfc_iocbq *iocb);
++static void lpfc_register_new_vport(struct lpfc_hba *phba,
++ struct lpfc_vport *vport,
++ struct lpfc_nodelist *ndlp);
- switch (task->task_proto) {
-- case SATA_PROTO:
-- case SAS_PROTO_STP:
-+ case SAS_PROTOCOL_SATA:
-+ case SAS_PROTOCOL_STP:
- scb->abort_task.proto_conn_rate = (1 << 5); /* STP */
- break;
-- case SAS_PROTO_SSP:
-+ case SAS_PROTOCOL_SSP:
- scb->abort_task.proto_conn_rate = (1 << 4); /* SSP */
- scb->abort_task.proto_conn_rate |= task->dev->linkrate;
- break;
-- case SAS_PROTO_SMP:
-+ case SAS_PROTOCOL_SMP:
- break;
- default:
- break;
- }
+ static int lpfc_max_els_tries = 3;
-- if (task->task_proto == SAS_PROTO_SSP) {
-+ if (task->task_proto == SAS_PROTOCOL_SSP) {
- scb->abort_task.ssp_frame.frame_type = SSP_TASK;
- memcpy(scb->abort_task.ssp_frame.hashed_dest_addr,
- task->dev->hashed_sas_addr, HASHED_SAS_ADDR_SIZE);
-@@ -512,7 +512,7 @@ static int asd_initiate_ssp_tmf(struct domain_device *dev, u8 *lun,
- int res = 1;
- struct scb *scb;
+@@ -109,14 +117,11 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
-- if (!(dev->tproto & SAS_PROTO_SSP))
-+ if (!(dev->tproto & SAS_PROTOCOL_SSP))
- return TMF_RESP_FUNC_ESUPP;
+ /* fill in BDEs for command */
+ /* Allocate buffer for command payload */
+- if (((pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
+- ((pcmd->virt = lpfc_mbuf_alloc(phba,
+- MEM_PRI, &(pcmd->phys))) == 0)) {
+- kfree(pcmd);
+-
+- lpfc_sli_release_iocbq(phba, elsiocb);
+- return NULL;
+- }
++ pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
++ if (pcmd)
++ pcmd->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &pcmd->phys);
++ if (!pcmd || !pcmd->virt)
++ goto els_iocb_free_pcmb_exit;
- ascb = asd_ascb_alloc_list(asd_ha, &res, GFP_KERNEL);
-diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
-index d466a2d..d80dba9 100644
---- a/drivers/scsi/arcmsr/arcmsr_hba.c
-+++ b/drivers/scsi/arcmsr/arcmsr_hba.c
-@@ -634,9 +634,9 @@ static void arcmsr_report_sense_info(struct CommandControlBlock *ccb)
- pcmd->result = DID_OK << 16;
- if (sensebuffer) {
- int sense_data_length =
-- sizeof(struct SENSE_DATA) < sizeof(pcmd->sense_buffer)
-- ? sizeof(struct SENSE_DATA) : sizeof(pcmd->sense_buffer);
-- memset(sensebuffer, 0, sizeof(pcmd->sense_buffer));
-+ sizeof(struct SENSE_DATA) < SCSI_SENSE_BUFFERSIZE
-+ ? sizeof(struct SENSE_DATA) : SCSI_SENSE_BUFFERSIZE;
-+ memset(sensebuffer, 0, SCSI_SENSE_BUFFERSIZE);
- memcpy(sensebuffer, ccb->arcmsr_cdb.SenseData, sense_data_length);
- sensebuffer->ErrorCode = SCSI_SENSE_CURRENT_ERRORS;
- sensebuffer->Valid = 1;
-diff --git a/drivers/scsi/atari_NCR5380.c b/drivers/scsi/atari_NCR5380.c
-index a9680b5..93b61f1 100644
---- a/drivers/scsi/atari_NCR5380.c
-+++ b/drivers/scsi/atari_NCR5380.c
-@@ -511,9 +511,9 @@ static inline void initialize_SCp(Scsi_Cmnd *cmd)
- * various queues are valid.
- */
+ INIT_LIST_HEAD(&pcmd->list);
-- if (cmd->use_sg) {
-- cmd->SCp.buffer = (struct scatterlist *)cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- /* ++roman: Try to merge some scatter-buffers if they are at
-@@ -523,8 +523,8 @@ static inline void initialize_SCp(Scsi_Cmnd *cmd)
+@@ -126,13 +131,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
+ if (prsp)
+ prsp->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
+ &prsp->phys);
+- if (prsp == 0 || prsp->virt == 0) {
+- kfree(prsp);
+- lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
+- kfree(pcmd);
+- lpfc_sli_release_iocbq(phba, elsiocb);
+- return NULL;
+- }
++ if (!prsp || !prsp->virt)
++ goto els_iocb_free_prsp_exit;
+ INIT_LIST_HEAD(&prsp->list);
} else {
- cmd->SCp.buffer = NULL;
- cmd->SCp.buffers_residual = 0;
-- cmd->SCp.ptr = (char *)cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
- }
- }
+ prsp = NULL;
+@@ -143,15 +143,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
+ if (pbuflist)
+ pbuflist->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
+ &pbuflist->phys);
+- if (pbuflist == 0 || pbuflist->virt == 0) {
+- lpfc_sli_release_iocbq(phba, elsiocb);
+- lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
+- lpfc_mbuf_free(phba, prsp->virt, prsp->phys);
+- kfree(pcmd);
+- kfree(prsp);
+- kfree(pbuflist);
+- return NULL;
+- }
++ if (!pbuflist || !pbuflist->virt)
++ goto els_iocb_free_pbuf_exit;
-@@ -936,21 +936,21 @@ static int NCR5380_queue_command(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *))
+ INIT_LIST_HEAD(&pbuflist->list);
+
+@@ -196,7 +189,10 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
+ bpl->tus.w = le32_to_cpu(bpl->tus.w);
}
- # endif
- # ifdef NCR5380_STAT_LIMIT
-- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
-+ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
- # endif
- switch (cmd->cmnd[0]) {
- case WRITE:
- case WRITE_6:
- case WRITE_10:
- hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingw++;
- break;
- case READ:
- case READ_6:
- case READ_10:
- hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingr++;
- break;
- }
-@@ -1352,21 +1352,21 @@ static irqreturn_t NCR5380_intr(int irq, void *dev_id)
- static void collect_stats(struct NCR5380_hostdata* hostdata, Scsi_Cmnd *cmd)
- {
- # ifdef NCR5380_STAT_LIMIT
-- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
-+ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
- # endif
- switch (cmd->cmnd[0]) {
- case WRITE:
- case WRITE_6:
- case WRITE_10:
- hostdata->time_write[cmd->device->id] += (jiffies - hostdata->timebase);
-- /*hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;*/
-+ /*hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);*/
- hostdata->pendingw--;
- break;
- case READ:
- case READ_6:
- case READ_10:
- hostdata->time_read[cmd->device->id] += (jiffies - hostdata->timebase);
-- /*hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;*/
-+ /*hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);*/
- hostdata->pendingr--;
- break;
- }
-@@ -1868,7 +1868,7 @@ static int do_abort(struct Scsi_Host *host)
- * the target sees, so we just handshake.
- */
-- while (!(tmp = NCR5380_read(STATUS_REG)) & SR_REQ)
-+ while (!((tmp = NCR5380_read(STATUS_REG)) & SR_REQ))
- ;
++ /* prevent preparing iocb with NULL ndlp reference */
+ elsiocb->context1 = lpfc_nlp_get(ndlp);
++ if (!elsiocb->context1)
++ goto els_iocb_free_pbuf_exit;
+ elsiocb->context2 = pcmd;
+ elsiocb->context3 = pbuflist;
+ elsiocb->retry = retry;
+@@ -222,8 +218,20 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
+ cmdSize);
+ }
+ return elsiocb;
+-}
- NCR5380_write(TARGET_COMMAND_REG, PHASE_SR_TO_TCR(tmp));
-diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c
-index fec58cc..db6de5e 100644
---- a/drivers/scsi/atp870u.c
-+++ b/drivers/scsi/atp870u.c
-@@ -471,18 +471,8 @@ go_42:
- /*
- * Complete the command
- */
-- if (workreq->use_sg) {
-- pci_unmap_sg(dev->pdev,
-- (struct scatterlist *)workreq->request_buffer,
-- workreq->use_sg,
-- workreq->sc_data_direction);
-- } else if (workreq->request_bufflen &&
-- workreq->sc_data_direction != DMA_NONE) {
-- pci_unmap_single(dev->pdev,
-- workreq->SCp.dma_handle,
-- workreq->request_bufflen,
-- workreq->sc_data_direction);
-- }
-+ scsi_dma_unmap(workreq);
++els_iocb_free_pbuf_exit:
++ lpfc_mbuf_free(phba, prsp->virt, prsp->phys);
++ kfree(pbuflist);
+
- spin_lock_irqsave(dev->host->host_lock, flags);
- (*workreq->scsi_done) (workreq);
- #ifdef ED_DBGP
-@@ -624,7 +614,7 @@ static int atp870u_queuecommand(struct scsi_cmnd * req_p,
++els_iocb_free_prsp_exit:
++ lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
++ kfree(prsp);
++
++els_iocb_free_pcmb_exit:
++ kfree(pcmd);
++ lpfc_sli_release_iocbq(phba, elsiocb);
++ return NULL;
++}
- c = scmd_channel(req_p);
- req_p->sense_buffer[0]=0;
-- req_p->resid = 0;
-+ scsi_set_resid(req_p, 0);
- if (scmd_channel(req_p) > 1) {
- req_p->result = 0x00040000;
- done(req_p);
-@@ -722,7 +712,6 @@ static void send_s870(struct atp_unit *dev,unsigned char c)
- unsigned short int tmpcip, w;
- unsigned long l, bttl = 0;
- unsigned int workport;
-- struct scatterlist *sgpnt;
- unsigned long sg_count;
+ static int
+ lpfc_issue_fabric_reglogin(struct lpfc_vport *vport)
+@@ -234,40 +242,53 @@ lpfc_issue_fabric_reglogin(struct lpfc_vport *vport)
+ struct lpfc_nodelist *ndlp;
+ struct serv_parm *sp;
+ int rc;
++ int err = 0;
- if (dev->in_snd[c] != 0) {
-@@ -793,6 +782,8 @@ oktosend:
- }
- printk("\n");
- #endif
-+ l = scsi_bufflen(workreq);
-+
- if (dev->dev_id == ATP885_DEVID) {
- j = inb(dev->baseport + 0x29) & 0xfe;
- outb(j, dev->baseport + 0x29);
-@@ -800,12 +791,11 @@ oktosend:
- }
-
- if (workreq->cmnd[0] == READ_CAPACITY) {
-- if (workreq->request_bufflen > 8) {
-- workreq->request_bufflen = 0x08;
-- }
-+ if (l > 8)
-+ l = 8;
- }
- if (workreq->cmnd[0] == 0x00) {
-- workreq->request_bufflen = 0;
-+ l = 0;
- }
+ sp = &phba->fc_fabparam;
+ ndlp = lpfc_findnode_did(vport, Fabric_DID);
+- if (!ndlp)
++ if (!ndlp) {
++ err = 1;
+ goto fail;
++ }
- tmport = workport + 0x1b;
-@@ -852,40 +842,8 @@ oktosend:
- #ifdef ED_DBGP
- printk("dev->id[%d][%d].devsp = %2x\n",c,target_id,dev->id[c][target_id].devsp);
- #endif
-- /*
-- * Figure out the transfer size
-- */
-- if (workreq->use_sg) {
--#ifdef ED_DBGP
-- printk("Using SGL\n");
--#endif
-- l = 0;
--
-- sgpnt = (struct scatterlist *) workreq->request_buffer;
-- sg_count = pci_map_sg(dev->pdev, sgpnt, workreq->use_sg,
-- workreq->sc_data_direction);
--
-- for (i = 0; i < workreq->use_sg; i++) {
-- if (sgpnt[i].length == 0 || workreq->use_sg > ATP870U_SCATTER) {
-- panic("Foooooooood fight!");
-- }
-- l += sgpnt[i].length;
-- }
--#ifdef ED_DBGP
-- printk( "send_s870: workreq->use_sg %d, sg_count %d l %8ld\n", workreq->use_sg, sg_count, l);
--#endif
-- } else if(workreq->request_bufflen && workreq->sc_data_direction != PCI_DMA_NONE) {
--#ifdef ED_DBGP
-- printk("Not using SGL\n");
--#endif
-- workreq->SCp.dma_handle = pci_map_single(dev->pdev, workreq->request_buffer,
-- workreq->request_bufflen,
-- workreq->sc_data_direction);
-- l = workreq->request_bufflen;
--#ifdef ED_DBGP
-- printk( "send_s870: workreq->use_sg %d, l %8ld\n", workreq->use_sg, l);
--#endif
-- } else l = 0;
-+
-+ sg_count = scsi_dma_map(workreq);
- /*
- * Write transfer size
- */
-@@ -938,16 +896,16 @@ oktosend:
- * a linear chain.
- */
+ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+- if (!mbox)
++ if (!mbox) {
++ err = 2;
+ goto fail;
++ }
-- if (workreq->use_sg) {
-- sgpnt = (struct scatterlist *) workreq->request_buffer;
-+ if (l) {
-+ struct scatterlist *sgpnt;
- i = 0;
-- for (j = 0; j < workreq->use_sg; j++) {
-- bttl = sg_dma_address(&sgpnt[j]);
-- l=sg_dma_len(&sgpnt[j]);
-+ scsi_for_each_sg(workreq, sgpnt, sg_count, j) {
-+ bttl = sg_dma_address(sgpnt);
-+ l=sg_dma_len(sgpnt);
- #ifdef ED_DBGP
-- printk("1. bttl %x, l %x\n",bttl, l);
-+ printk("1. bttl %x, l %x\n",bttl, l);
- #endif
-- while (l > 0x10000) {
-+ while (l > 0x10000) {
- (((u16 *) (prd))[i + 3]) = 0x0000;
- (((u16 *) (prd))[i + 2]) = 0x0000;
- (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
-@@ -965,32 +923,6 @@ oktosend:
- printk("prd %4x %4x %4x %4x\n",(((unsigned short int *)prd)[0]),(((unsigned short int *)prd)[1]),(((unsigned short int *)prd)[2]),(((unsigned short int *)prd)[3]));
- printk("2. bttl %x, l %x\n",bttl, l);
- #endif
-- } else {
-- /*
-- * For a linear request write a chain of blocks
-- */
-- bttl = workreq->SCp.dma_handle;
-- l = workreq->request_bufflen;
-- i = 0;
--#ifdef ED_DBGP
-- printk("3. bttl %x, l %x\n",bttl, l);
--#endif
-- while (l > 0x10000) {
-- (((u16 *) (prd))[i + 3]) = 0x0000;
-- (((u16 *) (prd))[i + 2]) = 0x0000;
-- (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
-- l -= 0x10000;
-- bttl += 0x10000;
-- i += 0x04;
-- }
-- (((u16 *) (prd))[i + 3]) = cpu_to_le16(0x8000);
-- (((u16 *) (prd))[i + 2]) = cpu_to_le16(l);
-- (((u32 *) (prd))[i >> 1]) = cpu_to_le32(bttl);
--#ifdef ED_DBGP
-- printk("prd %4x %4x %4x %4x\n",(((unsigned short int *)prd)[0]),(((unsigned short int *)prd)[1]),(((unsigned short int *)prd)[2]),(((unsigned short int *)prd)[3]));
-- printk("4. bttl %x, l %x\n",bttl, l);
--#endif
--
- }
- tmpcip += 4;
- #ifdef ED_DBGP
-diff --git a/drivers/scsi/ch.c b/drivers/scsi/ch.c
-index 2311019..7aad154 100644
---- a/drivers/scsi/ch.c
-+++ b/drivers/scsi/ch.c
-@@ -21,6 +21,7 @@
- #include <linux/compat.h>
- #include <linux/chio.h> /* here are all the ioctls */
- #include <linux/mutex.h>
-+#include <linux/idr.h>
+ vport->port_state = LPFC_FABRIC_CFG_LINK;
+ lpfc_config_link(phba, mbox);
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+ mbox->vport = vport;
- #include <scsi/scsi.h>
- #include <scsi/scsi_cmnd.h>
-@@ -33,6 +34,7 @@
+- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
+- if (rc == MBX_NOT_FINISHED)
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ err = 3;
+ goto fail_free_mbox;
++ }
- #define CH_DT_MAX 16
- #define CH_TYPES 8
-+#define CH_MAX_DEVS 128
+ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+- if (!mbox)
++ if (!mbox) {
++ err = 4;
+ goto fail;
++ }
+ rc = lpfc_reg_login(phba, vport->vpi, Fabric_DID, (uint8_t *)sp, mbox,
+ 0);
+- if (rc)
++ if (rc) {
++ err = 5;
+ goto fail_free_mbox;
++ }
- MODULE_DESCRIPTION("device driver for scsi media changer devices");
- MODULE_AUTHOR("Gerd Knorr <kraxel at bytesex.org>");
-@@ -88,17 +90,6 @@ static const char * vendor_labels[CH_TYPES-4] = {
+ mbox->mbox_cmpl = lpfc_mbx_cmpl_fabric_reg_login;
+ mbox->vport = vport;
+ mbox->context2 = lpfc_nlp_get(ndlp);
- #define MAX_RETRIES 1
+- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
+- if (rc == MBX_NOT_FINISHED)
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ err = 6;
+ goto fail_issue_reg_login;
++ }
--static int ch_probe(struct device *);
--static int ch_remove(struct device *);
--static int ch_open(struct inode * inode, struct file * filp);
--static int ch_release(struct inode * inode, struct file * filp);
--static int ch_ioctl(struct inode * inode, struct file * filp,
-- unsigned int cmd, unsigned long arg);
--#ifdef CONFIG_COMPAT
--static long ch_ioctl_compat(struct file * filp,
-- unsigned int cmd, unsigned long arg);
--#endif
--
- static struct class * ch_sysfs_class;
+ return 0;
- typedef struct {
-@@ -114,30 +105,8 @@ typedef struct {
- struct mutex lock;
- } scsi_changer;
+@@ -282,7 +303,7 @@ fail_free_mbox:
+ fail:
+ lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+- "0249 Cannot issue Register Fabric login\n");
++ "0249 Cannot issue Register Fabric login: Err %d\n", err);
+ return -ENXIO;
+ }
--static LIST_HEAD(ch_devlist);
--static DEFINE_SPINLOCK(ch_devlist_lock);
--static int ch_devcount;
--
--static struct scsi_driver ch_template =
--{
-- .owner = THIS_MODULE,
-- .gendrv = {
-- .name = "ch",
-- .probe = ch_probe,
-- .remove = ch_remove,
-- },
--};
--
--static const struct file_operations changer_fops =
--{
-- .owner = THIS_MODULE,
-- .open = ch_open,
-- .release = ch_release,
-- .ioctl = ch_ioctl,
--#ifdef CONFIG_COMPAT
-- .compat_ioctl = ch_ioctl_compat,
--#endif
--};
-+static DEFINE_IDR(ch_index_idr);
-+static DEFINE_SPINLOCK(ch_index_lock);
+@@ -370,11 +391,12 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ }
+ if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+ lpfc_mbx_unreg_vpi(vport);
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
++ spin_unlock_irq(shost->host_lock);
+ }
+ }
- static const struct {
- unsigned char sense;
-@@ -207,7 +176,7 @@ ch_do_scsi(scsi_changer *ch, unsigned char *cmd,
- {
- int errno, retries = 0, timeout, result;
- struct scsi_sense_hdr sshdr;
--
-+
- timeout = (cmd[0] == INITIALIZE_ELEMENT_STATUS)
- ? timeout_init : timeout_move;
+- ndlp->nlp_sid = irsp->un.ulpWord[4] & Mask_DID;
+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_REG_LOGIN_ISSUE);
-@@ -245,7 +214,7 @@ static int
- ch_elem_to_typecode(scsi_changer *ch, u_int elem)
- {
- int i;
--
-+
- for (i = 0; i < CH_TYPES; i++) {
- if (elem >= ch->firsts[i] &&
- elem < ch->firsts[i] +
-@@ -261,15 +230,15 @@ ch_read_element_status(scsi_changer *ch, u_int elem, char *data)
- u_char cmd[12];
- u_char *buffer;
- int result;
--
-+
- buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
- if(!buffer)
- return -ENOMEM;
--
+ if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED &&
+@@ -429,8 +451,7 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+ mbox->vport = vport;
+- rc = lpfc_sli_issue_mbox(phba, mbox,
+- MBX_NOWAIT | MBX_STOP_IOCB);
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(mbox, phba->mbox_mem_pool);
+ goto fail;
+@@ -463,6 +484,9 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ lpfc_nlp_put(ndlp);
+ }
+
++ /* If we are pt2pt with another NPort, force NPIV off! */
++ phba->sli3_options &= ~LPFC_SLI3_NPIV_ENABLED;
+
- retry:
- memset(cmd,0,sizeof(cmd));
- cmd[0] = READ_ELEMENT_STATUS;
-- cmd[1] = (ch->device->lun << 5) |
-+ cmd[1] = (ch->device->lun << 5) |
- (ch->voltags ? 0x10 : 0) |
- ch_elem_to_typecode(ch,elem);
- cmd[2] = (elem >> 8) & 0xff;
-@@ -296,7 +265,7 @@ ch_read_element_status(scsi_changer *ch, u_int elem, char *data)
- return result;
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_PT2PT;
+ spin_unlock_irq(shost->host_lock);
+@@ -488,6 +512,9 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+
+ /* Check to see if link went down during discovery */
+ if (lpfc_els_chk_latt(vport)) {
++ /* One additional decrement on node reference count to
++ * trigger the release of the node
++ */
+ lpfc_nlp_put(ndlp);
+ goto out;
+ }
+@@ -562,8 +589,13 @@ flogifail:
+
+ /* Start discovery */
+ lpfc_disc_start(vport);
++ } else if (((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
++ ((irsp->un.ulpWord[4] != IOERR_SLI_ABORTED) &&
++ (irsp->un.ulpWord[4] != IOERR_SLI_DOWN))) &&
++ (phba->link_state != LPFC_CLEAR_LA)) {
++ /* If FLOGI failed enable link interrupt. */
++ lpfc_issue_clear_la(phba, vport);
+ }
+-
+ out:
+ lpfc_els_free_iocb(phba, cmdiocb);
}
+@@ -685,6 +717,9 @@ lpfc_initial_flogi(struct lpfc_vport *vport)
+ struct lpfc_hba *phba = vport->phba;
+ struct lpfc_nodelist *ndlp;
--static int
-+static int
- ch_init_elem(scsi_changer *ch)
- {
- int err;
-@@ -322,7 +291,7 @@ ch_readconfig(scsi_changer *ch)
- buffer = kzalloc(512, GFP_KERNEL | GFP_DMA);
- if (!buffer)
- return -ENOMEM;
--
++ vport->port_state = LPFC_FLOGI;
++ lpfc_set_disctmo(vport);
+
- memset(cmd,0,sizeof(cmd));
- cmd[0] = MODE_SENSE;
- cmd[1] = ch->device->lun << 5;
-@@ -365,7 +334,7 @@ ch_readconfig(scsi_changer *ch)
+ /* First look for the Fabric ndlp */
+ ndlp = lpfc_findnode_did(vport, Fabric_DID);
+ if (!ndlp) {
+@@ -696,7 +731,11 @@ lpfc_initial_flogi(struct lpfc_vport *vport)
} else {
- vprintk("reading element address assigment page failed!\n");
+ lpfc_dequeue_node(vport, ndlp);
}
--
+
- /* vendor specific element types */
- for (i = 0; i < 4; i++) {
- if (0 == vendor_counts[i])
-@@ -443,7 +412,7 @@ static int
- ch_position(scsi_changer *ch, u_int trans, u_int elem, int rotate)
- {
- u_char cmd[10];
--
+ if (lpfc_issue_els_flogi(vport, ndlp, 0)) {
++ /* This decrement of reference count to node shall kick off
++ * the release of the node.
++ */
+ lpfc_nlp_put(ndlp);
+ }
+ return 1;
+@@ -720,11 +759,16 @@ lpfc_initial_fdisc(struct lpfc_vport *vport)
+ lpfc_dequeue_node(vport, ndlp);
+ }
+ if (lpfc_issue_els_fdisc(vport, ndlp, 0)) {
++ /* decrement node reference count to trigger the release of
++ * the node.
++ */
+ lpfc_nlp_put(ndlp);
++ return 0;
+ }
+ return 1;
+ }
+-static void
+
- dprintk("position: 0x%x\n",elem);
- if (0 == trans)
- trans = ch->firsts[CHET_MT];
-@@ -462,7 +431,7 @@ static int
- ch_move(scsi_changer *ch, u_int trans, u_int src, u_int dest, int rotate)
++void
+ lpfc_more_plogi(struct lpfc_vport *vport)
{
- u_char cmd[12];
--
-+
- dprintk("move: 0x%x => 0x%x\n",src,dest);
- if (0 == trans)
- trans = ch->firsts[CHET_MT];
-@@ -484,7 +453,7 @@ ch_exchange(scsi_changer *ch, u_int trans, u_int src,
- u_int dest1, u_int dest2, int rotate1, int rotate2)
+ int sentplogi;
+@@ -752,6 +796,8 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
{
- u_char cmd[12];
--
+ struct lpfc_vport *vport = ndlp->vport;
+ struct lpfc_nodelist *new_ndlp;
++ struct lpfc_rport_data *rdata;
++ struct fc_rport *rport;
+ struct serv_parm *sp;
+ uint8_t name[sizeof(struct lpfc_name)];
+ uint32_t rc;
+@@ -788,11 +834,34 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
+ lpfc_unreg_rpi(vport, new_ndlp);
+ new_ndlp->nlp_DID = ndlp->nlp_DID;
+ new_ndlp->nlp_prev_state = ndlp->nlp_prev_state;
+
- dprintk("exchange: 0x%x => 0x%x => 0x%x\n",
- src,dest1,dest2);
- if (0 == trans)
-@@ -501,7 +470,7 @@ ch_exchange(scsi_changer *ch, u_int trans, u_int src,
- cmd[8] = (dest2 >> 8) & 0xff;
- cmd[9] = dest2 & 0xff;
- cmd[10] = (rotate1 ? 1 : 0) | (rotate2 ? 2 : 0);
--
++ if (ndlp->nlp_flag & NLP_NPR_2B_DISC)
++ new_ndlp->nlp_flag |= NLP_NPR_2B_DISC;
++ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
+
- return ch_do_scsi(ch, cmd, NULL,0, DMA_NONE);
+ lpfc_nlp_set_state(vport, new_ndlp, ndlp->nlp_state);
+
+ /* Move this back to NPR state */
+- if (memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name)) == 0)
++ if (memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name)) == 0) {
++ /* The new_ndlp is replacing ndlp totally, so we need
++ * to put ndlp on UNUSED list and try to free it.
++ */
++
++ /* Fix up the rport accordingly */
++ rport = ndlp->rport;
++ if (rport) {
++ rdata = rport->dd_data;
++ if (rdata->pnode == ndlp) {
++ lpfc_nlp_put(ndlp);
++ ndlp->rport = NULL;
++ rdata->pnode = lpfc_nlp_get(new_ndlp);
++ new_ndlp->rport = rport;
++ }
++ new_ndlp->nlp_type = ndlp->nlp_type;
++ }
++
+ lpfc_drop_node(vport, ndlp);
++ }
+ else {
+ lpfc_unreg_rpi(vport, ndlp);
+ ndlp->nlp_DID = 0; /* Two ndlps cannot have the same did */
+@@ -801,6 +870,27 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
+ return new_ndlp;
}
-@@ -539,14 +508,14 @@ ch_set_voltag(scsi_changer *ch, u_int elem,
- elem, tag);
- memset(cmd,0,sizeof(cmd));
- cmd[0] = SEND_VOLUME_TAG;
-- cmd[1] = (ch->device->lun << 5) |
-+ cmd[1] = (ch->device->lun << 5) |
- ch_elem_to_typecode(ch,elem);
- cmd[2] = (elem >> 8) & 0xff;
- cmd[3] = elem & 0xff;
- cmd[5] = clear
- ? (alternate ? 0x0d : 0x0c)
- : (alternate ? 0x0b : 0x0a);
--
++void
++lpfc_end_rscn(struct lpfc_vport *vport)
++{
++ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
- cmd[9] = 255;
++ if (vport->fc_flag & FC_RSCN_MODE) {
++ /*
++ * Check to see if more RSCNs came in while we were
++ * processing this one.
++ */
++ if (vport->fc_rscn_id_cnt ||
++ (vport->fc_flag & FC_RSCN_DISCOVERY) != 0)
++ lpfc_els_handle_rscn(vport);
++ else {
++ spin_lock_irq(shost->host_lock);
++ vport->fc_flag &= ~FC_RSCN_MODE;
++ spin_unlock_irq(shost->host_lock);
++ }
++ }
++}
++
+ static void
+ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_iocbq *rspiocb)
+@@ -871,13 +961,6 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ goto out;
+ }
+ /* PLOGI failed */
+- if (ndlp->nlp_DID == NameServer_DID) {
+- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+- "0250 Nameserver login error: "
+- "0x%x / 0x%x\n",
+- irsp->ulpStatus, irsp->un.ulpWord[4]);
+- }
+ /* Do not call DSM for lpfc_els_abort'ed ELS cmds */
+ if (lpfc_error_lost_link(irsp)) {
+ rc = NLP_STE_FREED_NODE;
+@@ -905,20 +988,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ spin_unlock_irq(shost->host_lock);
- memcpy(buffer,tag,32);
-@@ -562,7 +531,7 @@ static int ch_gstatus(scsi_changer *ch, int type, unsigned char __user *dest)
- int retval = 0;
- u_char data[16];
- unsigned int i;
--
+ lpfc_can_disctmo(vport);
+- if (vport->fc_flag & FC_RSCN_MODE) {
+- /*
+- * Check to see if more RSCNs came in while
+- * we were processing this one.
+- */
+- if ((vport->fc_rscn_id_cnt == 0) &&
+- (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
+- spin_lock_irq(shost->host_lock);
+- vport->fc_flag &= ~FC_RSCN_MODE;
+- spin_unlock_irq(shost->host_lock);
+- } else {
+- lpfc_els_handle_rscn(vport);
+- }
+- }
++ lpfc_end_rscn(vport);
+ }
+ }
+
+@@ -933,6 +1003,7 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
+ struct lpfc_hba *phba = vport->phba;
+ struct serv_parm *sp;
+ IOCB_t *icmd;
++ struct lpfc_nodelist *ndlp;
+ struct lpfc_iocbq *elsiocb;
+ struct lpfc_sli_ring *pring;
+ struct lpfc_sli *psli;
+@@ -943,8 +1014,11 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
+ psli = &phba->sli;
+ pring = &psli->ring[LPFC_ELS_RING]; /* ELS ring */
+
++ ndlp = lpfc_findnode_did(vport, did);
++	/* If ndlp is not NULL, we will bump the reference count on it */
+
- mutex_lock(&ch->lock);
- for (i = 0; i < ch->counts[type]; i++) {
- if (0 != ch_read_element_status
-@@ -599,20 +568,17 @@ ch_release(struct inode *inode, struct file *file)
- static int
- ch_open(struct inode *inode, struct file *file)
+ cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
+- elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, NULL, did,
++ elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, did,
+ ELS_CMD_PLOGI);
+ if (!elsiocb)
+ return 1;
+@@ -1109,7 +1183,7 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ return 0;
+ }
+
+-static void
++void
+ lpfc_more_adisc(struct lpfc_vport *vport)
{
-- scsi_changer *tmp, *ch;
-+ scsi_changer *ch;
- int minor = iminor(inode);
+ int sentadisc;
+@@ -1134,8 +1208,6 @@ lpfc_more_adisc(struct lpfc_vport *vport)
+ static void
+ lpfc_rscn_disc(struct lpfc_vport *vport)
+ {
+- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+-
+ lpfc_can_disctmo(vport);
-- spin_lock(&ch_devlist_lock);
-- ch = NULL;
-- list_for_each_entry(tmp,&ch_devlist,list) {
-- if (tmp->minor == minor)
-- ch = tmp;
+ /* RSCN discovery */
+@@ -1144,19 +1216,7 @@ lpfc_rscn_disc(struct lpfc_vport *vport)
+ if (lpfc_els_disc_plogi(vport))
+ return;
+
+- if (vport->fc_flag & FC_RSCN_MODE) {
+- /* Check to see if more RSCNs came in while we were
+- * processing this one.
+- */
+- if ((vport->fc_rscn_id_cnt == 0) &&
+- (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
+- spin_lock_irq(shost->host_lock);
+- vport->fc_flag &= ~FC_RSCN_MODE;
+- spin_unlock_irq(shost->host_lock);
+- } else {
+- lpfc_els_handle_rscn(vport);
+- }
- }
-+ spin_lock(&ch_index_lock);
-+ ch = idr_find(&ch_index_idr, minor);
++ lpfc_end_rscn(vport);
+ }
+
+ static void
+@@ -1413,6 +1473,13 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ psli = &phba->sli;
+ pring = &psli->ring[LPFC_ELS_RING];
+
++ spin_lock_irq(shost->host_lock);
++ if (ndlp->nlp_flag & NLP_LOGO_SND) {
++ spin_unlock_irq(shost->host_lock);
++ return 0;
++ }
++ spin_unlock_irq(shost->host_lock);
+
- if (NULL == ch || scsi_device_get(ch->device)) {
-- spin_unlock(&ch_devlist_lock);
-+ spin_unlock(&ch_index_lock);
- return -ENXIO;
- }
-- spin_unlock(&ch_devlist_lock);
-+ spin_unlock(&ch_index_lock);
+ cmdsize = (2 * sizeof(uint32_t)) + sizeof(struct lpfc_name);
+ elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+ ndlp->nlp_DID, ELS_CMD_LOGO);
+@@ -1499,6 +1566,9 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
+ ndlp->nlp_DID, ELS_CMD_SCR);
- file->private_data = ch;
+ if (!elsiocb) {
++ /* This will trigger the release of the node just
++ * allocated
++ */
+ lpfc_nlp_put(ndlp);
+ return 1;
+ }
+@@ -1520,10 +1590,17 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
+ phba->fc_stat.elsXmitSCR++;
+ elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
+ if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
++ /* The additional lpfc_nlp_put will cause the following
++		 * lpfc_els_free_iocb routine to trigger the release of
++ * the node.
++ */
+ lpfc_nlp_put(ndlp);
+ lpfc_els_free_iocb(phba, elsiocb);
+ return 1;
+ }
++ /* This will cause the callback-function lpfc_cmpl_els_cmd to
++	 * trigger the release of the node.
++ */
+ lpfc_nlp_put(ndlp);
return 0;
-@@ -626,24 +592,24 @@ ch_checkrange(scsi_changer *ch, unsigned int type, unsigned int unit)
+ }
+@@ -1555,6 +1632,9 @@ lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
+ elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+ ndlp->nlp_DID, ELS_CMD_RNID);
+ if (!elsiocb) {
++ /* This will trigger the release of the node just
++ * allocated
++ */
+ lpfc_nlp_put(ndlp);
+ return 1;
+ }
+@@ -1591,35 +1671,21 @@ lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
+ phba->fc_stat.elsXmitFARPR++;
+ elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
+ if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
++ /* The additional lpfc_nlp_put will cause the following
++ * lpfc_els_free_iocb routine to trigger the release of
++ * the node.
++ */
+ lpfc_nlp_put(ndlp);
+ lpfc_els_free_iocb(phba, elsiocb);
+ return 1;
+ }
++ /* This will cause the callback-function lpfc_cmpl_els_cmd to
++ * trigger the release of the node.
++ */
+ lpfc_nlp_put(ndlp);
return 0;
}
--static int ch_ioctl(struct inode * inode, struct file * file,
-+static long ch_ioctl(struct file *file,
- unsigned int cmd, unsigned long arg)
+-static void
+-lpfc_end_rscn(struct lpfc_vport *vport)
+-{
+- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+-
+- if (vport->fc_flag & FC_RSCN_MODE) {
+- /*
+- * Check to see if more RSCNs came in while we were
+- * processing this one.
+- */
+- if (vport->fc_rscn_id_cnt ||
+- (vport->fc_flag & FC_RSCN_DISCOVERY) != 0)
+- lpfc_els_handle_rscn(vport);
+- else {
+- spin_lock_irq(shost->host_lock);
+- vport->fc_flag &= ~FC_RSCN_MODE;
+- spin_unlock_irq(shost->host_lock);
+- }
+- }
+-}
+-
+ void
+ lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp)
{
- scsi_changer *ch = file->private_data;
- int retval;
- void __user *argp = (void __user *)arg;
--
-+
- switch (cmd) {
- case CHIOGPARAMS:
- {
- struct changer_params params;
--
-+
- params.cp_curpicker = 0;
- params.cp_npickers = ch->counts[CHET_MT];
- params.cp_nslots = ch->counts[CHET_ST];
- params.cp_nportals = ch->counts[CHET_IE];
- params.cp_ndrives = ch->counts[CHET_DT];
--
-+
- if (copy_to_user(argp, ¶ms, sizeof(params)))
- return -EFAULT;
- return 0;
-@@ -673,11 +639,11 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- return -EFAULT;
- return 0;
+@@ -1675,7 +1741,10 @@ lpfc_els_retry_delay(unsigned long ptr)
+ return;
}
--
-+
- case CHIOPOSITION:
- {
- struct changer_position pos;
--
-+
- if (copy_from_user(&pos, argp, sizeof (pos)))
- return -EFAULT;
-@@ -692,7 +658,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- mutex_unlock(&ch->lock);
- return retval;
- }
--
+- evtp->evt_arg1 = ndlp;
++ /* We need to hold the node by incrementing the reference
++ * count until the queued work is done
++ */
++ evtp->evt_arg1 = lpfc_nlp_get(ndlp);
+ evtp->evt = LPFC_EVT_ELS_RETRY;
+ list_add_tail(&evtp->evt_listp, &phba->work_list);
+ if (phba->work_wait)
+@@ -1759,6 +1828,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ uint32_t *elscmd;
+ struct ls_rjt stat;
+ int retry = 0, maxretry = lpfc_max_els_tries, delay = 0;
++ int logerr = 0;
+ uint32_t cmd = 0;
+ uint32_t did;
+
+@@ -1815,6 +1885,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ break;
+
+ case IOERR_NO_RESOURCES:
++ logerr = 1; /* HBA out of resources */
+ retry = 1;
+ if (cmdiocb->retry > 100)
+ delay = 100;
+@@ -1843,6 +1914,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+
+ case IOSTAT_NPORT_BSY:
+ case IOSTAT_FABRIC_BSY:
++ logerr = 1; /* Fabric / Remote NPort out of resources */
+ retry = 1;
+ break;
+
+@@ -1923,6 +1995,15 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (did == FDMI_DID)
+ retry = 1;
+
++ if ((cmd == ELS_CMD_FLOGI) &&
++ (phba->fc_topology != TOPOLOGY_LOOP)) {
++ /* FLOGI retry policy */
++ retry = 1;
++ maxretry = 48;
++ if (cmdiocb->retry >= 32)
++ delay = 1000;
++ }
+
- case CHIOMOVE:
- {
- struct changer_move mv;
-@@ -705,7 +671,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- dprintk("CHIOMOVE: invalid parameter\n");
- return -EBADSLT;
+ if ((++cmdiocb->retry) >= maxretry) {
+ phba->fc_stat.elsRetryExceeded++;
+ retry = 0;
+@@ -2006,11 +2087,46 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
}
--
+ }
+ /* No retry ELS command <elsCmd> to remote NPORT <did> */
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ if (logerr) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ "0137 No retry ELS command x%x to remote "
++ "NPORT x%x: Out of Resources: Error:x%x/%x\n",
++ cmd, did, irsp->ulpStatus,
++ irsp->un.ulpWord[4]);
++ }
++ else {
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "0108 No retry ELS command x%x to remote "
+ "NPORT x%x Retried:%d Error:x%x/%x\n",
+ cmd, did, cmdiocb->retry, irsp->ulpStatus,
+ irsp->un.ulpWord[4]);
++ }
++ return 0;
++}
+
- mutex_lock(&ch->lock);
- retval = ch_move(ch,0,
- ch->firsts[mv.cm_fromtype] + mv.cm_fromunit,
-@@ -718,7 +684,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- case CHIOEXCHANGE:
- {
- struct changer_exchange mv;
--
++static int
++lpfc_els_free_data(struct lpfc_hba *phba, struct lpfc_dmabuf *buf_ptr1)
++{
++ struct lpfc_dmabuf *buf_ptr;
+
- if (copy_from_user(&mv, argp, sizeof (mv)))
- return -EFAULT;
-
-@@ -728,7 +694,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- dprintk("CHIOEXCHANGE: invalid parameter\n");
- return -EBADSLT;
- }
--
++ /* Free the response before processing the command. */
++ if (!list_empty(&buf_ptr1->list)) {
++ list_remove_head(&buf_ptr1->list, buf_ptr,
++ struct lpfc_dmabuf,
++ list);
++ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
++ kfree(buf_ptr);
++ }
++ lpfc_mbuf_free(phba, buf_ptr1->virt, buf_ptr1->phys);
++ kfree(buf_ptr1);
++ return 0;
++}
+
- mutex_lock(&ch->lock);
- retval = ch_exchange
- (ch,0,
-@@ -743,7 +709,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- case CHIOGSTATUS:
- {
- struct changer_element_status ces;
--
++static int
++lpfc_els_free_bpl(struct lpfc_hba *phba, struct lpfc_dmabuf *buf_ptr)
++{
++ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
++ kfree(buf_ptr);
+ return 0;
+ }
+
+@@ -2018,30 +2134,63 @@ int
+ lpfc_els_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *elsiocb)
+ {
+ struct lpfc_dmabuf *buf_ptr, *buf_ptr1;
++ struct lpfc_nodelist *ndlp;
+
+- if (elsiocb->context1) {
+- lpfc_nlp_put(elsiocb->context1);
++ ndlp = (struct lpfc_nodelist *)elsiocb->context1;
++ if (ndlp) {
++ if (ndlp->nlp_flag & NLP_DEFER_RM) {
++ lpfc_nlp_put(ndlp);
+
- if (copy_from_user(&ces, argp, sizeof (ces)))
- return -EFAULT;
- if (ces.ces_type < 0 || ces.ces_type >= CH_TYPES)
-@@ -759,19 +725,19 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- u_char *buffer;
- unsigned int elem;
- int result,i;
--
++ /* If the ndlp is not being used by another discovery
++ * thread, free it.
++ */
++ if (!lpfc_nlp_not_used(ndlp)) {
++ /* If ndlp is being used by another discovery
++ * thread, just clear NLP_DEFER_RM
++ */
++ ndlp->nlp_flag &= ~NLP_DEFER_RM;
++ }
++ }
++ else
++ lpfc_nlp_put(ndlp);
+ elsiocb->context1 = NULL;
+ }
+ /* context2 = cmd, context2->next = rsp, context3 = bpl */
+ if (elsiocb->context2) {
+- buf_ptr1 = (struct lpfc_dmabuf *) elsiocb->context2;
+- /* Free the response before processing the command. */
+- if (!list_empty(&buf_ptr1->list)) {
+- list_remove_head(&buf_ptr1->list, buf_ptr,
+- struct lpfc_dmabuf,
+- list);
+- lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
+- kfree(buf_ptr);
++ if (elsiocb->iocb_flag & LPFC_DELAY_MEM_FREE) {
++ /* Firmware could still be in progress of DMAing
++ * payload, so don't free data buffer till after
++ * a hbeat.
++ */
++ elsiocb->iocb_flag &= ~LPFC_DELAY_MEM_FREE;
++ buf_ptr = elsiocb->context2;
++ elsiocb->context2 = NULL;
++ if (buf_ptr) {
++ buf_ptr1 = NULL;
++ spin_lock_irq(&phba->hbalock);
++ if (!list_empty(&buf_ptr->list)) {
++ list_remove_head(&buf_ptr->list,
++ buf_ptr1, struct lpfc_dmabuf,
++ list);
++ INIT_LIST_HEAD(&buf_ptr1->list);
++ list_add_tail(&buf_ptr1->list,
++ &phba->elsbuf);
++ phba->elsbuf_cnt++;
++ }
++ INIT_LIST_HEAD(&buf_ptr->list);
++ list_add_tail(&buf_ptr->list, &phba->elsbuf);
++ phba->elsbuf_cnt++;
++ spin_unlock_irq(&phba->hbalock);
++ }
++ } else {
++ buf_ptr1 = (struct lpfc_dmabuf *) elsiocb->context2;
++ lpfc_els_free_data(phba, buf_ptr1);
+ }
+- lpfc_mbuf_free(phba, buf_ptr1->virt, buf_ptr1->phys);
+- kfree(buf_ptr1);
+ }
+
+ if (elsiocb->context3) {
+ buf_ptr = (struct lpfc_dmabuf *) elsiocb->context3;
+- lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
+- kfree(buf_ptr);
++ lpfc_els_free_bpl(phba, buf_ptr);
+ }
+ lpfc_sli_release_iocbq(phba, elsiocb);
+ return 0;
+@@ -2065,15 +2214,20 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ "Data: x%x x%x x%x\n",
+ ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+ ndlp->nlp_rpi);
+- switch (ndlp->nlp_state) {
+- case NLP_STE_UNUSED_NODE: /* node is just allocated */
+- lpfc_drop_node(vport, ndlp);
+- break;
+- case NLP_STE_NPR_NODE: /* NPort Recovery mode */
+- lpfc_unreg_rpi(vport, ndlp);
+- break;
+- default:
+- break;
+
- if (copy_from_user(&cge, argp, sizeof (cge)))
- return -EFAULT;
++ if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
++ /* NPort Recovery mode or node is just allocated */
++ if (!lpfc_nlp_not_used(ndlp)) {
++ /* If the ndlp is being used by another discovery
++ * thread, just unregister the RPI.
++ */
++ lpfc_unreg_rpi(vport, ndlp);
++ } else {
++			/* Indicate the node has already been released, and should
++ * not reference to it from within lpfc_els_free_iocb.
++ */
++ cmdiocb->context1 = NULL;
++ }
+ }
+ lpfc_els_free_iocb(phba, cmdiocb);
+ return;
+@@ -2089,7 +2243,14 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+ mempool_free(pmb, phba->mbox_mem_pool);
+- lpfc_nlp_put(ndlp);
++ if (ndlp) {
++ lpfc_nlp_put(ndlp);
++ /* This is the end of the default RPI cleanup logic for this
++		 * ndlp. If no other discovery threads are using this ndlp,
++ * we should free all resources associated with it.
++ */
++ lpfc_nlp_not_used(ndlp);
++ }
+ return;
+ }
- if (0 != ch_checkrange(ch, cge.cge_type, cge.cge_unit))
- return -EINVAL;
- elem = ch->firsts[cge.cge_type] + cge.cge_unit;
--
+@@ -2100,15 +2261,29 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+ struct lpfc_vport *vport = ndlp ? ndlp->vport : NULL;
+ struct Scsi_Host *shost = vport ? lpfc_shost_from_vport(vport) : NULL;
+- IOCB_t *irsp;
++ IOCB_t *irsp;
++ uint8_t *pcmd;
+ LPFC_MBOXQ_t *mbox = NULL;
+ struct lpfc_dmabuf *mp = NULL;
++ uint32_t ls_rjt = 0;
+
+ irsp = &rspiocb->iocb;
+
+ if (cmdiocb->context_un.mbox)
+ mbox = cmdiocb->context_un.mbox;
+
++ /* First determine if this is a LS_RJT cmpl. Note, this callback
++	 * function can have cmdiocb->context1 (ndlp) field set to NULL.
++ */
++ pcmd = (uint8_t *) (((struct lpfc_dmabuf *) cmdiocb->context2)->virt);
++ if (ndlp && (*((uint32_t *) (pcmd)) == ELS_CMD_LS_RJT)) {
++ /* A LS_RJT associated with Default RPI cleanup has its own
++ * seperate code path.
++ */
++ if (!(ndlp->nlp_flag & NLP_RM_DFLT_RPI))
++ ls_rjt = 1;
++ }
+
- buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
- if (!buffer)
- return -ENOMEM;
- mutex_lock(&ch->lock);
--
+ /* Check to see if link went down during discovery */
+ if (!ndlp || lpfc_els_chk_latt(vport)) {
+ if (mbox) {
+@@ -2119,6 +2294,15 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
+ mempool_free(mbox, phba->mbox_mem_pool);
+ }
++ if (ndlp && (ndlp->nlp_flag & NLP_RM_DFLT_RPI))
++ if (lpfc_nlp_not_used(ndlp)) {
++ ndlp = NULL;
++ /* Indicate the node has already released,
++ * should not reference to it from within
++ * the routine lpfc_els_free_iocb.
++ */
++ cmdiocb->context1 = NULL;
++ }
+ goto out;
+ }
+
+@@ -2150,20 +2334,39 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ lpfc_nlp_set_state(vport, ndlp,
+ NLP_STE_REG_LOGIN_ISSUE);
+ }
+- if (lpfc_sli_issue_mbox(phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB))
++ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
+ != MBX_NOT_FINISHED) {
+ goto out;
+ }
+- lpfc_nlp_put(ndlp);
+- /* NOTE: we should have messages for unsuccessful
+- reglogin */
+
- voltag_retry:
- memset(cmd,0,sizeof(cmd));
- cmd[0] = READ_ELEMENT_STATUS;
-@@ -782,7 +748,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- cmd[3] = elem & 0xff;
- cmd[5] = 1;
- cmd[9] = 255;
--
++ /* ELS rsp: Cannot issue reg_login for <NPortid> */
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ "0138 ELS rsp: Cannot issue reg_login for x%x "
++ "Data: x%x x%x x%x\n",
++ ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
++ ndlp->nlp_rpi);
+
- if (0 == (result = ch_do_scsi(ch, cmd, buffer, 256, DMA_FROM_DEVICE))) {
- cge.cge_status = buffer[18];
- cge.cge_flags = 0;
-@@ -822,7 +788,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
++ if (lpfc_nlp_not_used(ndlp)) {
++ ndlp = NULL;
++ /* Indicate node has already been released,
++ * should not reference to it from within
++ * the routine lpfc_els_free_iocb.
++ */
++ cmdiocb->context1 = NULL;
++ }
+ } else {
+ /* Do not drop node for lpfc_els_abort'ed ELS cmds */
+ if (!lpfc_error_lost_link(irsp) &&
+ ndlp->nlp_flag & NLP_ACC_REGLOGIN) {
+- lpfc_drop_node(vport, ndlp);
+- ndlp = NULL;
++ if (lpfc_nlp_not_used(ndlp)) {
++ ndlp = NULL;
++ /* Indicate node has already been
++ * released, should not reference
++ * to it from within the routine
++ * lpfc_els_free_iocb.
++ */
++ cmdiocb->context1 = NULL;
++ }
+ }
}
- kfree(buffer);
- mutex_unlock(&ch->lock);
--
+ mp = (struct lpfc_dmabuf *) mbox->context1;
+@@ -2178,7 +2381,21 @@ out:
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag &= ~(NLP_ACC_REGLOGIN | NLP_RM_DFLT_RPI);
+ spin_unlock_irq(shost->host_lock);
+
- if (copy_to_user(argp, &cge, sizeof (cge)))
- return -EFAULT;
- return result;
-@@ -835,7 +801,7 @@ static int ch_ioctl(struct inode * inode, struct file * file,
- mutex_unlock(&ch->lock);
- return retval;
++ /* If the node is not being used by another discovery thread,
++ * and we are sending a reject, we are done with it.
++ * Release driver reference count here and free associated
++ * resources.
++ */
++ if (ls_rjt)
++ if (lpfc_nlp_not_used(ndlp))
++ /* Indicate node has already been released,
++ * should not reference to it from within
++ * the routine lpfc_els_free_iocb.
++ */
++ cmdiocb->context1 = NULL;
}
--
-+
- case CHIOSVOLTAG:
- {
- struct changer_set_voltag csv;
-@@ -876,7 +842,7 @@ static long ch_ioctl_compat(struct file * file,
- unsigned int cmd, unsigned long arg)
- {
- scsi_changer *ch = file->private_data;
--
+
- switch (cmd) {
- case CHIOGPARAMS:
- case CHIOGVPARAMS:
-@@ -887,13 +853,12 @@ static long ch_ioctl_compat(struct file * file,
- case CHIOINITELEM:
- case CHIOSVOLTAG:
- /* compatible */
-- return ch_ioctl(NULL /* inode, unused */,
-- file, cmd, arg);
-+ return ch_ioctl(file, cmd, arg);
- case CHIOGSTATUS32:
- {
- struct changer_element_status32 ces32;
- unsigned char __user *data;
--
+ lpfc_els_free_iocb(phba, cmdiocb);
+ return;
+ }
+@@ -2349,14 +2566,6 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
+ elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
+ rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
+
+- /* If the node is in the UNUSED state, and we are sending
+- * a reject, we are done with it. Release driver reference
+- * count here. The outstanding els will release its reference on
+- * completion and the node can be freed then.
+- */
+- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+- lpfc_nlp_put(ndlp);
+-
+ if (rc == IOCB_ERROR) {
+ lpfc_els_free_iocb(phba, elsiocb);
+ return 1;
+@@ -2642,7 +2851,10 @@ lpfc_els_disc_plogi(struct lpfc_vport *vport)
+ }
+ }
+ }
+- if (sentplogi == 0) {
++ if (sentplogi) {
++ lpfc_set_disctmo(vport);
++ }
++ else {
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag &= ~FC_NLP_MORE;
+ spin_unlock_irq(shost->host_lock);
+@@ -2830,10 +3042,10 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ "RCV RSCN defer: did:x%x/ste:x%x flg:x%x",
+ ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag);
+
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_RSCN_DEFERRED;
+ if ((rscn_cnt < FC_MAX_HOLD_RSCN) &&
+ !(vport->fc_flag & FC_RSCN_DISCOVERY)) {
+- spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_RSCN_MODE;
+ spin_unlock_irq(shost->host_lock);
+ if (rscn_cnt) {
+@@ -2862,7 +3074,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ vport->fc_rscn_id_cnt, vport->fc_flag,
+ vport->port_state);
+ } else {
+- spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_RSCN_DISCOVERY;
+ spin_unlock_irq(shost->host_lock);
+ /* ReDiscovery RSCN */
+@@ -2877,7 +3088,9 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+
+ /* send RECOVERY event for ALL nodes that match RSCN payload */
+ lpfc_rscn_recovery_check(vport);
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag &= ~FC_RSCN_DEFERRED;
++ spin_unlock_irq(shost->host_lock);
+ return 0;
+ }
+
+@@ -2929,6 +3142,8 @@ lpfc_els_handle_rscn(struct lpfc_vport *vport)
+
+ /* To process RSCN, first compare RSCN data with NameServer */
+ vport->fc_ns_retry = 0;
++ vport->num_disc_nodes = 0;
+
- if (copy_from_user(&ces32, (void __user *)arg, sizeof (ces32)))
- return -EFAULT;
- if (ces32.ces_type < 0 || ces32.ces_type >= CH_TYPES)
-@@ -915,63 +880,100 @@ static long ch_ioctl_compat(struct file * file,
- static int ch_probe(struct device *dev)
- {
- struct scsi_device *sd = to_scsi_device(dev);
-+ struct class_device *class_dev;
-+ int minor, ret = -ENOMEM;
- scsi_changer *ch;
--
+ ndlp = lpfc_findnode_did(vport, NameServer_DID);
+ if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE) {
+ /* Good ndlp, issue CT Request to NameServer */
+@@ -3022,8 +3237,7 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ mbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+ mbox->vport = vport;
+- rc = lpfc_sli_issue_mbox
+- (phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ lpfc_set_loopback_flag(phba);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(mbox, phba->mbox_mem_pool);
+@@ -3140,7 +3354,10 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ elsiocb = lpfc_prep_els_iocb(phba->pport, 0, cmdsize,
+ lpfc_max_els_tries, ndlp,
+ ndlp->nlp_DID, ELS_CMD_ACC);
+
- if (sd->type != TYPE_MEDIUM_CHANGER)
- return -ENODEV;
--
++ /* Decrement the ndlp reference count from previous mbox command */
+ lpfc_nlp_put(ndlp);
+
- ch = kzalloc(sizeof(*ch), GFP_KERNEL);
- if (NULL == ch)
- return -ENOMEM;
+ if (!elsiocb)
+ return;
-- ch->minor = ch_devcount;
-+ if (!idr_pre_get(&ch_index_idr, GFP_KERNEL))
-+ goto free_ch;
-+
-+ spin_lock(&ch_index_lock);
-+ ret = idr_get_new(&ch_index_idr, ch, &minor);
-+ spin_unlock(&ch_index_lock);
-+
-+ if (ret)
-+ goto free_ch;
+@@ -3160,13 +3377,13 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ status |= 0x4;
+
+ rps_rsp->rsvd1 = 0;
+- rps_rsp->portStatus = be16_to_cpu(status);
+- rps_rsp->linkFailureCnt = be32_to_cpu(mb->un.varRdLnk.linkFailureCnt);
+- rps_rsp->lossSyncCnt = be32_to_cpu(mb->un.varRdLnk.lossSyncCnt);
+- rps_rsp->lossSignalCnt = be32_to_cpu(mb->un.varRdLnk.lossSignalCnt);
+- rps_rsp->primSeqErrCnt = be32_to_cpu(mb->un.varRdLnk.primSeqErrCnt);
+- rps_rsp->invalidXmitWord = be32_to_cpu(mb->un.varRdLnk.invalidXmitWord);
+- rps_rsp->crcCnt = be32_to_cpu(mb->un.varRdLnk.crcCnt);
++ rps_rsp->portStatus = cpu_to_be16(status);
++ rps_rsp->linkFailureCnt = cpu_to_be32(mb->un.varRdLnk.linkFailureCnt);
++ rps_rsp->lossSyncCnt = cpu_to_be32(mb->un.varRdLnk.lossSyncCnt);
++ rps_rsp->lossSignalCnt = cpu_to_be32(mb->un.varRdLnk.lossSignalCnt);
++ rps_rsp->primSeqErrCnt = cpu_to_be32(mb->un.varRdLnk.primSeqErrCnt);
++ rps_rsp->invalidXmitWord = cpu_to_be32(mb->un.varRdLnk.invalidXmitWord);
++ rps_rsp->crcCnt = cpu_to_be32(mb->un.varRdLnk.crcCnt);
+ /* Xmit ELS RPS ACC response tag <ulpIoTag> */
+ lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS,
+ "0118 Xmit ELS RPS ACC response tag x%x xri x%x, "
+@@ -3223,11 +3440,13 @@ lpfc_els_rcv_rps(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ mbox->context2 = lpfc_nlp_get(ndlp);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_els_rsp_rps_acc;
+- if (lpfc_sli_issue_mbox (phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB)) != MBX_NOT_FINISHED)
++ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
++ != MBX_NOT_FINISHED)
+ /* Mbox completion will send ELS Response */
+ return 0;
+-
++ /* Decrement reference count used for the failed mbox
++ * command.
++ */
+ lpfc_nlp_put(ndlp);
+ mempool_free(mbox, phba->mbox_mem_pool);
+ }
+@@ -3461,6 +3680,7 @@ lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ * other NLP_FABRIC logins
+ */
+ lpfc_drop_node(vport, ndlp);
+
-+ if (minor > CH_MAX_DEVS) {
-+ ret = -ENODEV;
-+ goto remove_idr;
+ } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
+ /* Fail outstanding I/O now since this
+ * device is marked for PLOGI
+@@ -3469,8 +3689,6 @@ lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ }
+ }
+
+- vport->port_state = LPFC_FLOGI;
+- lpfc_set_disctmo(vport);
+ lpfc_initial_flogi(vport);
+ return 0;
+ }
+@@ -3711,6 +3929,7 @@ static void
+ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ struct lpfc_vport *vport, struct lpfc_iocbq *elsiocb)
+ {
++ struct Scsi_Host *shost;
+ struct lpfc_nodelist *ndlp;
+ struct ls_rjt stat;
+ uint32_t *payload;
+@@ -3750,11 +3969,19 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ goto dropit;
+
+ lpfc_nlp_init(vport, ndlp, did);
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+ newnode = 1;
+ if ((did & Fabric_DID_MASK) == Fabric_DID_MASK) {
+ ndlp->nlp_type |= NLP_FABRIC;
+ }
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ }
++ else {
++ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
++ /* This is similar to the new node path */
++ lpfc_nlp_get(ndlp);
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
++ newnode = 1;
++ }
+ }
+
+ phba->fc_stat.elsRcvFrame++;
+@@ -3783,6 +4010,12 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ rjt_err = LSRJT_UNABLE_TPC;
+ break;
+ }
+
-+ ch->minor = minor;
- sprintf(ch->name,"ch%d",ch->minor);
++ shost = lpfc_shost_from_vport(vport);
++ spin_lock_irq(shost->host_lock);
++ ndlp->nlp_flag &= ~NLP_TARGET_REMOVE;
++ spin_unlock_irq(shost->host_lock);
+
-+ class_dev = class_device_create(ch_sysfs_class, NULL,
-+ MKDEV(SCSI_CHANGER_MAJOR, ch->minor),
-+ dev, "s%s", ch->name);
-+ if (IS_ERR(class_dev)) {
-+ printk(KERN_WARNING "ch%d: class_device_create failed\n",
-+ ch->minor);
-+ ret = PTR_ERR(class_dev);
-+ goto remove_idr;
-+ }
+ lpfc_disc_state_machine(vport, ndlp, elsiocb,
+ NLP_EVT_RCV_PLOGI);
+
+@@ -3795,7 +4028,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvFLOGI++;
+ lpfc_els_rcv_flogi(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ case ELS_CMD_LOGO:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3825,7 +4058,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvRSCN++;
+ lpfc_els_rcv_rscn(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ case ELS_CMD_ADISC:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3897,7 +4130,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvLIRR++;
+ lpfc_els_rcv_lirr(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ case ELS_CMD_RPS:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3907,7 +4140,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvRPS++;
+ lpfc_els_rcv_rps(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ case ELS_CMD_RPL:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3917,7 +4150,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvRPL++;
+ lpfc_els_rcv_rpl(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ case ELS_CMD_RNID:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3927,7 +4160,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ phba->fc_stat.elsRcvRNID++;
+ lpfc_els_rcv_rnid(vport, elsiocb, ndlp);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ default:
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+@@ -3942,7 +4175,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ "0115 Unknown ELS command x%x "
+ "received from NPORT x%x\n", cmd, did);
+ if (newnode)
+- lpfc_drop_node(vport, ndlp);
++ lpfc_nlp_put(ndlp);
+ break;
+ }
+
+@@ -3958,10 +4191,11 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ return;
+
+ dropit:
+- lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
++ if (vport && !(vport->load_flag & FC_UNLOADING))
++ lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
+ "(%d):0111 Dropping received ELS cmd "
+ "Data: x%x x%x x%x\n",
+- vport ? vport->vpi : 0xffff, icmd->ulpStatus,
++ vport->vpi, icmd->ulpStatus,
+ icmd->un.ulpWord[4], icmd->ulpTimeout);
+ phba->fc_stat.elsRcvDrop++;
+ }
+@@ -4114,8 +4348,9 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
+ MAILBOX_t *mb = &pmb->mb;
+
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+- lpfc_nlp_put(ndlp);
++ spin_unlock_irq(shost->host_lock);
+
+ if (mb->mbxStatus) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+@@ -4135,7 +4370,9 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ default:
+ /* Try to recover from this error */
+ lpfc_mbx_unreg_vpi(vport);
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
++ spin_unlock_irq(shost->host_lock);
+ lpfc_initial_fdisc(vport);
+ break;
+ }
+@@ -4146,14 +4383,21 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ else
+ lpfc_do_scr_ns_plogi(phba, vport);
+ }
+
- mutex_init(&ch->lock);
- ch->device = sd;
- ch_readconfig(ch);
- if (init)
- ch_init_elem(ch);
++ /* Now, we decrement the ndlp reference count held for this
++ * callback function
++ */
++ lpfc_nlp_put(ndlp);
++
+ mempool_free(pmb, phba->mbox_mem_pool);
+ return;
+ }
-- class_device_create(ch_sysfs_class, NULL,
-- MKDEV(SCSI_CHANGER_MAJOR,ch->minor),
-- dev, "s%s", ch->name);
+-void
++static void
+ lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
+ struct lpfc_nodelist *ndlp)
+ {
++ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ LPFC_MBOXQ_t *mbox;
+
+ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+@@ -4162,25 +4406,31 @@ lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
+ mbox->vport = vport;
+ mbox->context2 = lpfc_nlp_get(ndlp);
+ mbox->mbox_cmpl = lpfc_cmpl_reg_new_vport;
+- if (lpfc_sli_issue_mbox(phba, mbox,
+- MBX_NOWAIT | MBX_STOP_IOCB)
++ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
+ == MBX_NOT_FINISHED) {
++ /* mailbox command was not successful; decrement ndlp

++ * reference count for this command
++ */
++ lpfc_nlp_put(ndlp);
+ mempool_free(mbox, phba->mbox_mem_pool);
+- vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+
+- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+ "0253 Register VPI: Can't send mbox\n");
++ goto mbox_err_exit;
+ }
+ } else {
+- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
-
-+ dev_set_drvdata(dev, ch);
- sdev_printk(KERN_INFO, sd, "Attached scsi changer %s\n", ch->name);
--
-- spin_lock(&ch_devlist_lock);
-- list_add_tail(&ch->list,&ch_devlist);
-- ch_devcount++;
-- spin_unlock(&ch_devlist_lock);
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+ "0254 Register VPI: no memory\n");
+-
+- vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+- lpfc_nlp_put(ndlp);
++ goto mbox_err_exit;
+ }
++ return;
+
- return 0;
-+remove_idr:
-+ idr_remove(&ch_index_idr, minor);
-+free_ch:
-+ kfree(ch);
-+ return ret;
++mbox_err_exit:
++ lpfc_vport_set_state(vport, FC_VPORT_FAILED);
++ spin_lock_irq(shost->host_lock);
++ vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
++ spin_unlock_irq(shost->host_lock);
++ return;
}
- static int ch_remove(struct device *dev)
- {
-- struct scsi_device *sd = to_scsi_device(dev);
-- scsi_changer *tmp, *ch;
-+ scsi_changer *ch = dev_get_drvdata(dev);
+ static void
+@@ -4251,7 +4501,9 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ lpfc_unreg_rpi(vport, np);
+ }
+ lpfc_mbx_unreg_vpi(vport);
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
++ spin_unlock_irq(shost->host_lock);
+ }
-- spin_lock(&ch_devlist_lock);
-- ch = NULL;
-- list_for_each_entry(tmp,&ch_devlist,list) {
-- if (tmp->device == sd)
-- ch = tmp;
-- }
-- BUG_ON(NULL == ch);
-- list_del(&ch->list);
-- spin_unlock(&ch_devlist_lock);
-+ spin_lock(&ch_index_lock);
-+ idr_remove(&ch_index_idr, ch->minor);
-+ spin_unlock(&ch_index_lock);
+ if (vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)
+@@ -4259,14 +4511,15 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ else
+ lpfc_do_scr_ns_plogi(phba, vport);
- class_device_destroy(ch_sysfs_class,
- MKDEV(SCSI_CHANGER_MAJOR,ch->minor));
- kfree(ch->dt);
- kfree(ch);
-- ch_devcount--;
- return 0;
+- lpfc_nlp_put(ndlp); /* Free Fabric ndlp for vports */
++ /* Unconditionally kick off releasing fabric node for vports */
++ lpfc_nlp_put(ndlp);
+ }
+
+ out:
+ lpfc_els_free_iocb(phba, cmdiocb);
}
-+static struct scsi_driver ch_template = {
-+ .owner = THIS_MODULE,
-+ .gendrv = {
-+ .name = "ch",
-+ .probe = ch_probe,
-+ .remove = ch_remove,
-+ },
-+};
-+
-+static const struct file_operations changer_fops = {
-+ .owner = THIS_MODULE,
-+ .open = ch_open,
-+ .release = ch_release,
-+ .unlocked_ioctl = ch_ioctl,
-+#ifdef CONFIG_COMPAT
-+ .compat_ioctl = ch_ioctl_compat,
-+#endif
-+};
-+
- static int __init init_ch_module(void)
+-int
++static int
+ lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ uint8_t retry)
{
- int rc;
--
-+
- printk(KERN_INFO "SCSI Media Changer driver v" VERSION " \n");
- ch_sysfs_class = class_create(THIS_MODULE, "scsi_changer");
- if (IS_ERR(ch_sysfs_class)) {
-@@ -996,11 +998,12 @@ static int __init init_ch_module(void)
- return rc;
+@@ -4539,7 +4792,7 @@ lpfc_cmpl_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ }
}
--static void __exit exit_ch_module(void)
-+static void __exit exit_ch_module(void)
+-int
++static int
+ lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
{
- scsi_unregister_driver(&ch_template.gendrv);
- unregister_chrdev(SCSI_CHANGER_MAJOR, "ch");
- class_destroy(ch_sysfs_class);
-+ idr_destroy(&ch_index_idr);
+ unsigned long iflags;
+@@ -4583,7 +4836,7 @@ lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
}
- module_init(init_ch_module);
-diff --git a/drivers/scsi/constants.c b/drivers/scsi/constants.c
-index 024553f..403a7f2 100644
---- a/drivers/scsi/constants.c
-+++ b/drivers/scsi/constants.c
-@@ -362,7 +362,6 @@ void scsi_print_command(struct scsi_cmnd *cmd)
- EXPORT_SYMBOL(scsi_print_command);
- /**
-- *
- * scsi_print_status - print scsi status description
- * @scsi_status: scsi status value
- *
-@@ -1369,7 +1368,7 @@ EXPORT_SYMBOL(scsi_print_sense);
- static const char * const hostbyte_table[]={
- "DID_OK", "DID_NO_CONNECT", "DID_BUS_BUSY", "DID_TIME_OUT", "DID_BAD_TARGET",
- "DID_ABORT", "DID_PARITY", "DID_ERROR", "DID_RESET", "DID_BAD_INTR",
--"DID_PASSTHROUGH", "DID_SOFT_ERROR", "DID_IMM_RETRY"};
-+"DID_PASSTHROUGH", "DID_SOFT_ERROR", "DID_IMM_RETRY", "DID_REQUEUE"};
- #define NUM_HOSTBYTE_STRS ARRAY_SIZE(hostbyte_table)
+-void lpfc_fabric_abort_vport(struct lpfc_vport *vport)
++static void lpfc_fabric_abort_vport(struct lpfc_vport *vport)
+ {
+ LIST_HEAD(completions);
+ struct lpfc_hba *phba = vport->phba;
+@@ -4663,6 +4916,7 @@ void lpfc_fabric_abort_hba(struct lpfc_hba *phba)
+ }
- static const char * const driverbyte_table[]={
-diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
-index a9def6e..f93c73c 100644
---- a/drivers/scsi/dc395x.c
-+++ b/drivers/scsi/dc395x.c
-@@ -1629,8 +1629,7 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, (dcb->target_lun << 5));
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
-- DC395x_write8(acb, TRM_S1040_SCSI_FIFO,
-- sizeof(srb->cmd->sense_buffer));
-+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, SCSI_SENSE_BUFFERSIZE);
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
- } else {
- ptr = (u8 *)srb->cmd->cmnd;
-@@ -1915,8 +1914,7 @@ static void command_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, (dcb->target_lun << 5));
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
-- DC395x_write8(acb, TRM_S1040_SCSI_FIFO,
-- sizeof(srb->cmd->sense_buffer));
-+ DC395x_write8(acb, TRM_S1040_SCSI_FIFO, SCSI_SENSE_BUFFERSIZE);
- DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0);
+
++#if 0
+ void lpfc_fabric_abort_flogi(struct lpfc_hba *phba)
+ {
+ LIST_HEAD(completions);
+@@ -4693,5 +4947,6 @@ void lpfc_fabric_abort_flogi(struct lpfc_hba *phba)
+ (piocb->iocb_cmpl) (phba, piocb, piocb);
}
- srb->state |= SRB_COMMAND;
-@@ -3685,7 +3683,7 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
- srb->target_status = 0;
+ }
++#endif /* 0 */
- /* KG: Can this prevent crap sense data ? */
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- /* Save some data */
- srb->segment_x[DC395x_MAX_SG_LISTENTRY - 1].address =
-@@ -3694,15 +3692,15 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
- srb->segment_x[0].length;
- srb->xferred = srb->total_xfer_length;
- /* srb->segment_x : a one entry of S/G list table */
-- srb->total_xfer_length = sizeof(cmd->sense_buffer);
-- srb->segment_x[0].length = sizeof(cmd->sense_buffer);
-+ srb->total_xfer_length = SCSI_SENSE_BUFFERSIZE;
-+ srb->segment_x[0].length = SCSI_SENSE_BUFFERSIZE;
- /* Map sense buffer */
- srb->segment_x[0].address =
- pci_map_single(acb->dev, cmd->sense_buffer,
-- sizeof(cmd->sense_buffer), PCI_DMA_FROMDEVICE);
-+ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE);
- dprintkdbg(DBG_SG, "request_sense: map buffer %p->%08x(%05x)\n",
- cmd->sense_buffer, srb->segment_x[0].address,
-- sizeof(cmd->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- srb->sg_count = 1;
- srb->sg_index = 0;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index c81c2b3..dc042bd 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -57,6 +57,7 @@ static uint8_t lpfcAlpaArray[] = {
+ };
+
+ static void lpfc_disc_timeout_handler(struct lpfc_vport *);
++static void lpfc_disc_flush_list(struct lpfc_vport *vport);
+
+ void
+ lpfc_terminate_rport_io(struct fc_rport *rport)
+@@ -107,20 +108,14 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ struct lpfc_nodelist * ndlp;
+ struct lpfc_vport *vport;
+ struct lpfc_hba *phba;
+- struct completion devloss_compl;
+ struct lpfc_work_evt *evtp;
++ int put_node;
++ int put_rport;
+
+ rdata = rport->dd_data;
+ ndlp = rdata->pnode;
+-
+- if (!ndlp) {
+- if (rport->scsi_target_id != -1) {
+- printk(KERN_ERR "Cannot find remote node"
+- " for rport in dev_loss_tmo_callbk x%x\n",
+- rport->port_id);
+- }
++ if (!ndlp)
+ return;
+- }
+
+ vport = ndlp->vport;
+ phba = vport->phba;
+@@ -129,15 +124,35 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ "rport devlosscb: sid:x%x did:x%x flg:x%x",
+ ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag);
+
+- init_completion(&devloss_compl);
++ /* Don't defer this if we are in the process of deleting the vport
++ * or unloading the driver. The unload will clean up the node
++ * appropriately; we just need to clean up the ndlp rport info here.
++ */
++ if (vport->load_flag & FC_UNLOADING) {
++ put_node = rdata->pnode != NULL;
++ put_rport = ndlp->rport != NULL;
++ rdata->pnode = NULL;
++ ndlp->rport = NULL;
++ if (put_node)
++ lpfc_nlp_put(ndlp);
++ if (put_rport)
++ put_device(&rport->dev);
++ return;
++ }
++
++ if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
++ return;
++
+ evtp = &ndlp->dev_loss_evt;
-diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c
-index b31d1c9..19cce12 100644
---- a/drivers/scsi/dpt_i2o.c
-+++ b/drivers/scsi/dpt_i2o.c
-@@ -2296,9 +2296,8 @@ static s32 adpt_i2o_to_scsi(void __iomem *reply, struct scsi_cmnd* cmd)
+ if (!list_empty(&evtp->evt_listp))
+ return;
- // copy over the request sense data if it was a check
- // condition status
-- if(dev_status == 0x02 /*CHECK_CONDITION*/) {
-- u32 len = sizeof(cmd->sense_buffer);
-- len = (len > 40) ? 40 : len;
-+ if (dev_status == SAM_STAT_CHECK_CONDITION) {
-+ u32 len = min(SCSI_SENSE_BUFFERSIZE, 40);
- // Copy over the sense data
- memcpy_fromio(cmd->sense_buffer, (reply+28) , len);
- if(cmd->sense_buffer[0] == 0x70 /* class 7 */ &&
-diff --git a/drivers/scsi/eata.c b/drivers/scsi/eata.c
-index 7ead521..05163ce 100644
---- a/drivers/scsi/eata.c
-+++ b/drivers/scsi/eata.c
-@@ -1623,9 +1623,9 @@ static void map_dma(unsigned int i, struct hostdata *ha)
- if (SCpnt->sense_buffer)
- cpp->sense_addr =
- H2DEV(pci_map_single(ha->pdev, SCpnt->sense_buffer,
-- sizeof SCpnt->sense_buffer, PCI_DMA_FROMDEVICE));
-+ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE));
+ spin_lock_irq(&phba->hbalock);
+- evtp->evt_arg1 = ndlp;
+- evtp->evt_arg2 = &devloss_compl;
++ /* We need to hold the node by incrementing the reference
++ * count until this queued work is done
++ */
++ evtp->evt_arg1 = lpfc_nlp_get(ndlp);
+ evtp->evt = LPFC_EVT_DEV_LOSS;
+ list_add_tail(&evtp->evt_listp, &phba->work_list);
+ if (phba->work_wait)
+@@ -145,8 +160,6 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
-- cpp->sense_len = sizeof SCpnt->sense_buffer;
-+ cpp->sense_len = SCSI_SENSE_BUFFERSIZE;
+ spin_unlock_irq(&phba->hbalock);
- count = scsi_dma_map(SCpnt);
- BUG_ON(count < 0);
-diff --git a/drivers/scsi/eata_pio.c b/drivers/scsi/eata_pio.c
-index 982c509..b5a6092 100644
---- a/drivers/scsi/eata_pio.c
-+++ b/drivers/scsi/eata_pio.c
-@@ -369,7 +369,6 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
- cp = &hd->ccb[y];
+- wait_for_completion(&devloss_compl);
+-
+ return;
+ }
- memset(cp, 0, sizeof(struct eata_ccb));
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
+@@ -154,7 +167,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ * This function is called from the worker thread when dev_loss_tmo
+ * expire.
+ */
+-void
++static void
+ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ {
+ struct lpfc_rport_data *rdata;
+@@ -162,6 +175,8 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ struct lpfc_vport *vport;
+ struct lpfc_hba *phba;
+ uint8_t *name;
++ int put_node;
++ int put_rport;
+ int warn_on = 0;
- cp->status = USED; /* claim free slot */
+ rport = ndlp->rport;
+@@ -178,14 +193,32 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ "rport devlosstmo:did:x%x type:x%x id:x%x",
+ ndlp->nlp_DID, ndlp->nlp_type, rport->scsi_target_id);
-@@ -385,7 +384,7 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
- cp->DataIn = 0; /* Input mode */
+- if (!(vport->load_flag & FC_UNLOADING) &&
+- ndlp->nlp_state == NLP_STE_MAPPED_NODE)
++ /* Don't defer this if we are in the process of deleting the vport
++ * or unloading the driver. The unload will clean up the node
++ * appropriately; we just need to clean up the ndlp rport info here.
++ */
++ if (vport->load_flag & FC_UNLOADING) {
++ if (ndlp->nlp_sid != NLP_NO_SID) {
++ /* flush the target */
++ lpfc_sli_abort_iocb(vport,
++ &phba->sli.ring[phba->sli.fcp_ring],
++ ndlp->nlp_sid, 0, LPFC_CTX_TGT);
++ }
++ put_node = rdata->pnode != NULL;
++ put_rport = ndlp->rport != NULL;
++ rdata->pnode = NULL;
++ ndlp->rport = NULL;
++ if (put_node)
++ lpfc_nlp_put(ndlp);
++ if (put_rport)
++ put_device(&rport->dev);
+ return;
++ }
- cp->Interpret = (cmd->device->id == hd->hostid);
-- cp->cp_datalen = cpu_to_be32(cmd->request_bufflen);
-+ cp->cp_datalen = cpu_to_be32(scsi_bufflen(cmd));
- cp->Auto_Req_Sen = 0;
- cp->cp_reqDMA = 0;
- cp->reqlen = 0;
-@@ -402,14 +401,14 @@ static int eata_pio_queue(struct scsi_cmnd *cmd,
- cp->cmd = cmd;
- cmd->host_scribble = (char *) &hd->ccb[y];
+- if (ndlp->nlp_type & NLP_FABRIC) {
+- int put_node;
+- int put_rport;
++ if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
++ return;
-- if (cmd->use_sg == 0) {
-+ if (!scsi_bufflen(cmd)) {
- cmd->SCp.buffers_residual = 1;
-- cmd->SCp.ptr = cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
- cmd->SCp.buffer = NULL;
- } else {
-- cmd->SCp.buffer = cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg;
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd);
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
++ if (ndlp->nlp_type & NLP_FABRIC) {
+ /* We will clean up these Nodes in linkup */
+ put_node = rdata->pnode != NULL;
+ put_rport = ndlp->rport != NULL;
+@@ -227,23 +260,20 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ ndlp->nlp_state, ndlp->nlp_rpi);
}
-diff --git a/drivers/scsi/fd_mcs.c b/drivers/scsi/fd_mcs.c
-index 8335b60..85bd54c 100644
---- a/drivers/scsi/fd_mcs.c
-+++ b/drivers/scsi/fd_mcs.c
-@@ -1017,24 +1017,6 @@ static irqreturn_t fd_mcs_intr(int irq, void *dev_id)
- printk(" ** IN DONE %d ** ", current_SC->SCp.have_data_in);
- #endif
--#if ERRORS_ONLY
-- if (current_SC->cmnd[0] == REQUEST_SENSE && !current_SC->SCp.Status) {
-- if ((unsigned char) (*((char *) current_SC->request_buffer + 2)) & 0x0f) {
-- unsigned char key;
-- unsigned char code;
-- unsigned char qualifier;
++ put_node = rdata->pnode != NULL;
++ put_rport = ndlp->rport != NULL;
++ rdata->pnode = NULL;
++ ndlp->rport = NULL;
++ if (put_node)
++ lpfc_nlp_put(ndlp);
++ if (put_rport)
++ put_device(&rport->dev);
++
+ if (!(vport->load_flag & FC_UNLOADING) &&
+ !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
+ !(ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
+- (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE))
++ (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE)) {
+ lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RM);
+- else {
+- int put_node;
+- int put_rport;
-
-- key = (unsigned char) (*((char *) current_SC->request_buffer + 2)) & 0x0f;
-- code = (unsigned char) (*((char *) current_SC->request_buffer + 12));
-- qualifier = (unsigned char) (*((char *) current_SC->request_buffer + 13));
+- put_node = rdata->pnode != NULL;
+- put_rport = ndlp->rport != NULL;
+- rdata->pnode = NULL;
+- ndlp->rport = NULL;
+- if (put_node)
+- lpfc_nlp_put(ndlp);
+- if (put_rport)
+- put_device(&rport->dev);
+ }
+ }
+
+@@ -260,7 +290,6 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ {
+ struct lpfc_work_evt *evtp = NULL;
+ struct lpfc_nodelist *ndlp;
+- struct lpfc_vport *vport;
+ int free_evt;
+
+ spin_lock_irq(&phba->hbalock);
+@@ -270,35 +299,22 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ spin_unlock_irq(&phba->hbalock);
+ free_evt = 1;
+ switch (evtp->evt) {
+- case LPFC_EVT_DEV_LOSS_DELAY:
+- free_evt = 0; /* evt is part of ndlp */
+- ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
+- vport = ndlp->vport;
+- if (!vport)
+- break;
-
-- if (key != UNIT_ATTENTION && !(key == NOT_READY && code == 0x04 && (!qualifier || qualifier == 0x02 || qualifier == 0x01))
-- && !(key == ILLEGAL_REQUEST && (code == 0x25 || code == 0x24 || !code)))
+- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+- "rport devlossdly:did:x%x flg:x%x",
+- ndlp->nlp_DID, ndlp->nlp_flag, 0);
-
-- printk("fd_mcs: REQUEST SENSE " "Key = %x, Code = %x, Qualifier = %x\n", key, code, qualifier);
+- if (!(vport->load_flag & FC_UNLOADING) &&
+- !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
+- !(ndlp->nlp_flag & NLP_NPR_2B_DISC)) {
+- lpfc_disc_state_machine(vport, ndlp, NULL,
+- NLP_EVT_DEVICE_RM);
- }
-- }
--#endif
- #if EVERY_ACCESS
- printk("BEFORE MY_DONE. . .");
- #endif
-@@ -1097,7 +1079,9 @@ static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
- panic("fd_mcs: fd_mcs_queue() NOT REENTRANT!\n");
- }
- #if EVERY_ACCESS
-- printk("queue: target = %d cmnd = 0x%02x pieces = %d size = %u\n", SCpnt->target, *(unsigned char *) SCpnt->cmnd, SCpnt->use_sg, SCpnt->request_bufflen);
-+ printk("queue: target = %d cmnd = 0x%02x pieces = %d size = %u\n",
-+ SCpnt->target, *(unsigned char *) SCpnt->cmnd,
-+ scsi_sg_count(SCpnt), scsi_bufflen(SCpnt));
- #endif
+- break;
+ case LPFC_EVT_ELS_RETRY:
+ ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
+ lpfc_els_retry_delay_handler(ndlp);
+ free_evt = 0; /* evt is part of ndlp */
++ /* decrement the node reference count held
++ * for this queued work
++ */
++ lpfc_nlp_put(ndlp);
+ break;
+ case LPFC_EVT_DEV_LOSS:
+ ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
+- lpfc_nlp_get(ndlp);
+ lpfc_dev_loss_tmo_handler(ndlp);
+ free_evt = 0;
+- complete((struct completion *)(evtp->evt_arg2));
++ /* decrement the node reference count held for
++ * this queued work
++ */
+ lpfc_nlp_put(ndlp);
+ break;
+ case LPFC_EVT_ONLINE:
+@@ -373,7 +389,7 @@ lpfc_work_done(struct lpfc_hba *phba)
+ lpfc_handle_latt(phba);
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS; i++) {
++ for(i = 0; i <= phba->max_vpi; i++) {
+ /*
+ * We could have no vports in array if unloading, so if
+ * this happens then just use the pport
+@@ -405,14 +421,14 @@ lpfc_work_done(struct lpfc_hba *phba)
+ vport->work_port_events &= ~work_port_events;
+ spin_unlock_irq(&vport->work_port_lock);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
- fd_mcs_make_bus_idle(shpnt);
-@@ -1107,14 +1091,14 @@ static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+ pring = &phba->sli.ring[LPFC_ELS_RING];
+ status = (ha_copy & (HA_RXMASK << (4*LPFC_ELS_RING)));
+ status >>= (4*LPFC_ELS_RING);
+ if ((status & HA_RXMASK)
+ || (pring->flag & LPFC_DEFERRED_RING_EVENT)) {
+- if (pring->flag & LPFC_STOP_IOCB_MASK) {
++ if (pring->flag & LPFC_STOP_IOCB_EVENT) {
+ pring->flag |= LPFC_DEFERRED_RING_EVENT;
+ } else {
+ lpfc_sli_handle_slow_ring_event(phba, pring,
+@@ -544,6 +560,7 @@ lpfc_workq_post_event(struct lpfc_hba *phba, void *arg1, void *arg2,
+ void
+ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
+ {
++ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ struct lpfc_hba *phba = vport->phba;
+ struct lpfc_nodelist *ndlp, *next_ndlp;
+ int rc;
+@@ -552,7 +569,9 @@ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
+ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+ continue;
- /* Initialize static data */
+- if (phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN)
++ if ((phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) ||
++ ((vport->port_type == LPFC_NPIV_PORT) &&
++ (ndlp->nlp_DID == NameServer_DID)))
+ lpfc_unreg_rpi(vport, ndlp);
-- if (current_SC->use_sg) {
-- current_SC->SCp.buffer = (struct scatterlist *) current_SC->request_buffer;
-+ if (scsi_bufflen(current_SC)) {
-+ current_SC->SCp.buffer = scsi_sglist(current_SC);
- current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer);
- current_SC->SCp.this_residual = current_SC->SCp.buffer->length;
-- current_SC->SCp.buffers_residual = current_SC->use_sg - 1;
-+ current_SC->SCp.buffers_residual = scsi_sg_count(current_SC) - 1;
- } else {
-- current_SC->SCp.ptr = (char *) current_SC->request_buffer;
-- current_SC->SCp.this_residual = current_SC->request_bufflen;
-+ current_SC->SCp.ptr = NULL;
-+ current_SC->SCp.this_residual = 0;
- current_SC->SCp.buffer = NULL;
- current_SC->SCp.buffers_residual = 0;
+ /* Leave Fabric nodes alone on link down */
+@@ -565,14 +584,30 @@ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
}
-@@ -1166,7 +1150,9 @@ static void fd_mcs_print_info(Scsi_Cmnd * SCpnt)
- break;
+ if (phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) {
+ lpfc_mbx_unreg_vpi(vport);
++ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
++ spin_unlock_irq(shost->host_lock);
}
-
-- printk("(%d), target = %d cmnd = 0x%02x pieces = %d size = %u\n", SCpnt->SCp.phase, SCpnt->device->id, *(unsigned char *) SCpnt->cmnd, SCpnt->use_sg, SCpnt->request_bufflen);
-+ printk("(%d), target = %d cmnd = 0x%02x pieces = %d size = %u\n",
-+ SCpnt->SCp.phase, SCpnt->device->id, *(unsigned char *) SCpnt->cmnd,
-+ scsi_sg_count(SCpnt), scsi_bufflen(SCpnt));
- printk("sent_command = %d, have_data_in = %d, timeout = %d\n", SCpnt->SCp.sent_command, SCpnt->SCp.have_data_in, SCpnt->timeout);
- #if DEBUG_RACE
- printk("in_interrupt_flag = %d\n", in_interrupt_flag);
-diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
-index b253b8c..c825239 100644
---- a/drivers/scsi/gdth.c
-+++ b/drivers/scsi/gdth.c
-@@ -141,7 +141,7 @@
- static void gdth_delay(int milliseconds);
- static void gdth_eval_mapping(ulong32 size, ulong32 *cyls, int *heads, int *secs);
- static irqreturn_t gdth_interrupt(int irq, void *dev_id);
--static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
-+static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
- int gdth_from_wait, int* pIndex);
- static int gdth_sync_event(gdth_ha_str *ha, int service, unchar index,
- Scsi_Cmnd *scp);
-@@ -165,7 +165,6 @@ static int gdth_internal_cache_cmd(gdth_ha_str *ha, Scsi_Cmnd *scp);
- static int gdth_fill_cache_cmd(gdth_ha_str *ha, Scsi_Cmnd *scp, ushort hdrive);
-
- static void gdth_enable_int(gdth_ha_str *ha);
--static unchar gdth_get_status(gdth_ha_str *ha, int irq);
- static int gdth_test_busy(gdth_ha_str *ha);
- static int gdth_get_cmd_index(gdth_ha_str *ha);
- static void gdth_release_event(gdth_ha_str *ha);
-@@ -1334,14 +1333,12 @@ static void __init gdth_enable_int(gdth_ha_str *ha)
}
- /* return IStatus if interrupt was from this card else 0 */
--static unchar gdth_get_status(gdth_ha_str *ha, int irq)
-+static unchar gdth_get_status(gdth_ha_str *ha)
++void
++lpfc_port_link_failure(struct lpfc_vport *vport)
++{
++ /* Cleanup any outstanding RSCN activity */
++ lpfc_els_flush_rscn(vport);
++
++ /* Cleanup any outstanding ELS commands */
++ lpfc_els_flush_cmd(vport);
++
++ lpfc_cleanup_rpis(vport, 0);
++
++ /* Turn off discovery timer if it's running */
++ lpfc_can_disctmo(vport);
++}
++
+ static void
+ lpfc_linkdown_port(struct lpfc_vport *vport)
{
- unchar IStatus = 0;
-
-- TRACE(("gdth_get_status() irq %d ctr_count %d\n", irq, gdth_ctr_count));
-+ TRACE(("gdth_get_status() irq %d ctr_count %d\n", ha->irq, gdth_ctr_count));
+- struct lpfc_nodelist *ndlp, *next_ndlp;
+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
-- if (ha->irq != (unchar)irq) /* check IRQ */
-- return false;
- if (ha->type == GDT_EISA)
- IStatus = inb((ushort)ha->bmic + EDOORREG);
- else if (ha->type == GDT_ISA)
-@@ -1523,7 +1520,7 @@ static int gdth_wait(gdth_ha_str *ha, int index, ulong32 time)
- return 1; /* no wait required */
+ fc_host_post_event(shost, fc_get_event_number(), FCH_EVT_LINKDOWN, 0);
+@@ -581,21 +616,8 @@ lpfc_linkdown_port(struct lpfc_vport *vport)
+ "Link Down: state:x%x rtry:x%x flg:x%x",
+ vport->port_state, vport->fc_ns_retry, vport->fc_flag);
- do {
-- __gdth_interrupt(ha, (int)ha->irq, true, &wait_index);
-+ __gdth_interrupt(ha, true, &wait_index);
- if (wait_index == index) {
- answer_found = TRUE;
- break;
-@@ -3036,7 +3033,7 @@ static void gdth_clear_events(void)
+- /* Cleanup any outstanding RSCN activity */
+- lpfc_els_flush_rscn(vport);
+-
+- /* Cleanup any outstanding ELS commands */
+- lpfc_els_flush_cmd(vport);
++ lpfc_port_link_failure(vport);
- /* SCSI interface functions */
+- lpfc_cleanup_rpis(vport, 0);
+-
+- /* free any ndlp's on unused list */
+- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
+- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+- lpfc_drop_node(vport, ndlp);
+-
+- /* Turn off discovery timer if its running */
+- lpfc_can_disctmo(vport);
+ }
--static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
-+static irqreturn_t __gdth_interrupt(gdth_ha_str *ha,
- int gdth_from_wait, int* pIndex)
+ int
+@@ -618,18 +640,18 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ spin_unlock_irq(&phba->hbalock);
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ /* Issue a LINK DOWN event to all nodes */
+ lpfc_linkdown_port(vports[i]);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ /* Clean up any firmware default rpi's */
+ mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ if (mb) {
+ lpfc_unreg_did(phba, 0xffff, 0xffffffff, mb);
+ mb->vport = vport;
+ mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+- if (lpfc_sli_issue_mbox(phba, mb, (MBX_NOWAIT | MBX_STOP_IOCB))
++ if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
+ == MBX_NOT_FINISHED) {
+ mempool_free(mb, phba->mbox_mem_pool);
+ }
+@@ -643,8 +665,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ lpfc_config_link(phba, mb);
+ mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+ mb->vport = vport;
+- if (lpfc_sli_issue_mbox(phba, mb,
+- (MBX_NOWAIT | MBX_STOP_IOCB))
++ if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
+ == MBX_NOT_FINISHED) {
+ mempool_free(mb, phba->mbox_mem_pool);
+ }
+@@ -686,7 +707,6 @@ static void
+ lpfc_linkup_port(struct lpfc_vport *vport)
{
- gdt6m_dpram_str __iomem *dp6m_ptr = NULL;
-@@ -3054,7 +3051,7 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
- int act_int_coal = 0;
- #endif
+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+- struct lpfc_nodelist *ndlp, *next_ndlp;
+ struct lpfc_hba *phba = vport->phba;
-- TRACE(("gdth_interrupt() IRQ %d\n",irq));
-+ TRACE(("gdth_interrupt() IRQ %d\n", ha->irq));
+ if ((vport->load_flag & FC_UNLOADING) != 0)
+@@ -713,11 +733,6 @@ lpfc_linkup_port(struct lpfc_vport *vport)
+ if (vport->fc_flag & FC_LBIT)
+ lpfc_linkup_cleanup_nodes(vport);
- /* if polling and not from gdth_wait() -> return */
- if (gdth_polling) {
-@@ -3067,7 +3064,8 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
- spin_lock_irqsave(&ha->smp_lock, flags);
+- /* free any ndlp's in unused state */
+- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
+- nlp_listp)
+- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+- lpfc_drop_node(vport, ndlp);
+ }
- /* search controller */
-- if (0 == (IStatus = gdth_get_status(ha, irq))) {
-+ IStatus = gdth_get_status(ha);
-+ if (IStatus == 0) {
- /* spurious interrupt */
- if (!gdth_polling)
- spin_unlock_irqrestore(&ha->smp_lock, flags);
-@@ -3294,9 +3292,9 @@ static irqreturn_t __gdth_interrupt(gdth_ha_str *ha, int irq,
+ static int
+@@ -734,9 +749,9 @@ lpfc_linkup(struct lpfc_hba *phba)
- static irqreturn_t gdth_interrupt(int irq, void *dev_id)
- {
-- gdth_ha_str *ha = (gdth_ha_str *)dev_id;
-+ gdth_ha_str *ha = dev_id;
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++)
+ lpfc_linkup_port(vports[i]);
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
+ lpfc_issue_clear_la(phba, phba->pport);
-- return __gdth_interrupt(ha, irq, false, NULL);
-+ return __gdth_interrupt(ha, false, NULL);
- }
+@@ -749,7 +764,7 @@ lpfc_linkup(struct lpfc_hba *phba)
+ * as the completion routine when the command is
+ * handed off to the SLI layer.
+ */
+-void
++static void
+ lpfc_mbx_cmpl_clear_la(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ {
+ struct lpfc_vport *vport = pmb->vport;
+@@ -852,8 +867,6 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ * LPFC_FLOGI while waiting for FLOGI cmpl
+ */
+ if (vport->port_state != LPFC_FLOGI) {
+- vport->port_state = LPFC_FLOGI;
+- lpfc_set_disctmo(vport);
+ lpfc_initial_flogi(vport);
+ }
+ return;
+@@ -1022,8 +1035,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
+ lpfc_read_sparam(phba, sparam_mbox, 0);
+ sparam_mbox->vport = vport;
+ sparam_mbox->mbox_cmpl = lpfc_mbx_cmpl_read_sparam;
+- rc = lpfc_sli_issue_mbox(phba, sparam_mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, sparam_mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ mp = (struct lpfc_dmabuf *) sparam_mbox->context1;
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+@@ -1040,8 +1052,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
+ lpfc_config_link(phba, cfglink_mbox);
+ cfglink_mbox->vport = vport;
+ cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
+- rc = lpfc_sli_issue_mbox(phba, cfglink_mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
+ if (rc != MBX_NOT_FINISHED)
+ return;
+ mempool_free(cfglink_mbox, phba->mbox_mem_pool);
+@@ -1174,6 +1185,9 @@ lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+ mempool_free(pmb, phba->mbox_mem_pool);
++ /* decrement the node reference count held for this callback
++ * function.
++ */
+ lpfc_nlp_put(ndlp);
- static int gdth_sync_event(gdth_ha_str *ha, int service, unchar index,
-diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
-index 24271a8..5ea1f98 100644
---- a/drivers/scsi/hosts.c
-+++ b/drivers/scsi/hosts.c
-@@ -54,8 +54,7 @@ static struct class shost_class = {
- };
+ return;
+@@ -1219,7 +1233,7 @@ lpfc_mbx_unreg_vpi(struct lpfc_vport *vport)
+ lpfc_unreg_vpi(phba, vport->vpi, mbox);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_mbx_cmpl_unreg_vpi;
+- rc = lpfc_sli_issue_mbox(phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
+ "1800 Could not issue unreg_vpi\n");
+@@ -1319,7 +1333,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+ for(i = 0;
+- i < LPFC_MAX_VPORTS && vports[i] != NULL;
++ i <= phba->max_vpi && vports[i] != NULL;
+ i++) {
+ if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
+ continue;
+@@ -1335,7 +1349,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ "Fabric support\n");
+ }
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ lpfc_do_scr_ns_plogi(phba, vport);
+ }
- /**
-- * scsi_host_set_state - Take the given host through the host
-- * state model.
-+ * scsi_host_set_state - Take the given host through the host state model.
- * @shost: scsi host to change the state of.
- * @state: state to change to.
- *
-@@ -429,9 +428,17 @@ void scsi_unregister(struct Scsi_Host *shost)
- }
- EXPORT_SYMBOL(scsi_unregister);
+@@ -1361,11 +1375,16 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
-+static int __scsi_host_match(struct class_device *cdev, void *data)
-+{
-+ struct Scsi_Host *p;
-+ unsigned short *hostnum = (unsigned short *)data;
-+
-+ p = class_to_shost(cdev);
-+ return p->host_no == *hostnum;
-+}
+ if (mb->mbxStatus) {
+ out:
++ /* decrement the node reference count held for this
++ * callback function.
++ */
+ lpfc_nlp_put(ndlp);
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+ mempool_free(pmb, phba->mbox_mem_pool);
+- lpfc_drop_node(vport, ndlp);
+
- /**
- * scsi_host_lookup - get a reference to a Scsi_Host by host no
-- *
- * @hostnum: host number to locate
- *
- * Return value:
-@@ -439,19 +446,12 @@ EXPORT_SYMBOL(scsi_unregister);
- **/
- struct Scsi_Host *scsi_host_lookup(unsigned short hostnum)
- {
-- struct class *class = &shost_class;
- struct class_device *cdev;
-- struct Scsi_Host *shost = ERR_PTR(-ENXIO), *p;
-+ struct Scsi_Host *shost = ERR_PTR(-ENXIO);
++ /* If no other thread is using the ndlp, free it */
++ lpfc_nlp_not_used(ndlp);
-- down(&class->sem);
-- list_for_each_entry(cdev, &class->children, node) {
-- p = class_to_shost(cdev);
-- if (p->host_no == hostnum) {
-- shost = scsi_host_get(p);
-- break;
-- }
-- }
-- up(&class->sem);
-+ cdev = class_find_child(&shost_class, &hostnum, __scsi_host_match);
-+ if (cdev)
-+ shost = scsi_host_get(class_to_shost(cdev));
+ if (phba->fc_topology == TOPOLOGY_LOOP) {
+ /*
+@@ -1410,6 +1429,9 @@ out:
+ goto out;
+ }
- return shost;
++ /* decrement the node reference count held for this
++ * callback function.
++ */
+ lpfc_nlp_put(ndlp);
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+@@ -1656,8 +1678,18 @@ lpfc_dequeue_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ void
+ lpfc_drop_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ {
++ /*
++ * Use of lpfc_drop_node and UNUSED list: lpfc_drop_node should
++ * be used if we wish to issue the "last" lpfc_nlp_put() to remove
++ * the ndlp from the vport. The ndlp is marked as UNUSED on the
++ * list until ALL other outstanding threads have completed. We
++ * check that the ndlp is not already in the UNUSED state before
++ * we proceed.
++ */
++ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
++ return;
+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ lpfc_nlp_put(ndlp);
++ return;
}
-diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
-index 0844331..e7b2f35 100644
---- a/drivers/scsi/hptiop.c
-+++ b/drivers/scsi/hptiop.c
-@@ -1,5 +1,5 @@
- /*
-- * HighPoint RR3xxx controller driver for Linux
-+ * HighPoint RR3xxx/4xxx controller driver for Linux
- * Copyright (C) 2006-2007 HighPoint Technologies, Inc. All Rights Reserved.
- *
- * This program is free software; you can redistribute it and/or modify
-@@ -38,80 +38,84 @@
- #include "hptiop.h"
- MODULE_AUTHOR("HighPoint Technologies, Inc.");
--MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx SATA Controller Driver");
-+MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx/4xxx Controller Driver");
+ /*
+@@ -1868,8 +1900,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ lpfc_unreg_login(phba, vport->vpi, ndlp->nlp_rpi, mbox);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+- rc = lpfc_sli_issue_mbox(phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED)
+ mempool_free(mbox, phba->mbox_mem_pool);
+ }
+@@ -1892,8 +1923,8 @@ lpfc_unreg_all_rpis(struct lpfc_vport *vport)
+ lpfc_unreg_login(phba, vport->vpi, 0xffff, mbox);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+- rc = lpfc_sli_issue_mbox(phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ mbox->context1 = NULL;
++ rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(mbox, phba->mbox_mem_pool);
+ }
+@@ -1912,8 +1943,8 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
+ lpfc_unreg_did(phba, vport->vpi, 0xffffffff, mbox);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+- rc = lpfc_sli_issue_mbox(phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ mbox->context1 = NULL;
++ rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO);
+ if (rc == MBX_NOT_FINISHED) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
+ "1815 Could not issue "
+@@ -1981,11 +2012,6 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ if (!list_empty(&ndlp->dev_loss_evt.evt_listp))
+ list_del_init(&ndlp->dev_loss_evt.evt_listp);
- static char driver_name[] = "hptiop";
--static const char driver_name_long[] = "RocketRAID 3xxx SATA Controller driver";
--static const char driver_ver[] = "v1.2 (070830)";
+- if (!list_empty(&ndlp->dev_loss_evt.evt_listp)) {
+- list_del_init(&ndlp->dev_loss_evt.evt_listp);
+- complete((struct completion *)(ndlp->dev_loss_evt.evt_arg2));
+- }
-
--static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 tag);
--static void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag);
-+static const char driver_name_long[] = "RocketRAID 3xxx/4xxx Controller driver";
-+static const char driver_ver[] = "v1.3 (071203)";
-+
-+static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec);
-+static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
-+ struct hpt_iop_request_scsi_command *req);
-+static void hptiop_host_request_callback_itl(struct hptiop_hba *hba, u32 tag);
-+static void hptiop_iop_request_callback_itl(struct hptiop_hba *hba, u32 tag);
- static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg);
+ lpfc_unreg_rpi(vport, ndlp);
--static inline void hptiop_pci_posting_flush(struct hpt_iopmu __iomem *iop)
--{
-- readl(&iop->outbound_intstatus);
--}
--
--static int iop_wait_ready(struct hpt_iopmu __iomem *iop, u32 millisec)
-+static int iop_wait_ready_itl(struct hptiop_hba *hba, u32 millisec)
+ return 0;
+@@ -1999,12 +2025,39 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ static void
+ lpfc_nlp_remove(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
{
- u32 req = 0;
- int i;
-
- for (i = 0; i < millisec; i++) {
-- req = readl(&iop->inbound_queue);
-+ req = readl(&hba->u.itl.iop->inbound_queue);
- if (req != IOPMU_QUEUE_EMPTY)
- break;
- msleep(1);
- }
++ struct lpfc_hba *phba = vport->phba;
+ struct lpfc_rport_data *rdata;
++ LPFC_MBOXQ_t *mbox;
++ int rc;
- if (req != IOPMU_QUEUE_EMPTY) {
-- writel(req, &iop->outbound_queue);
-- hptiop_pci_posting_flush(iop);
-+ writel(req, &hba->u.itl.iop->outbound_queue);
-+ readl(&hba->u.itl.iop->outbound_intstatus);
- return 0;
+ if (ndlp->nlp_flag & NLP_DELAY_TMO) {
+ lpfc_cancel_retry_delay_tmo(vport, ndlp);
}
- return -1;
- }
++ if (ndlp->nlp_flag & NLP_DEFER_RM && !ndlp->nlp_rpi) {
++ /* For this case we need to cleanup the default rpi
++ * allocated by the firmware.
++ */
++ if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))
++ != NULL) {
++ rc = lpfc_reg_login(phba, vport->vpi, ndlp->nlp_DID,
++ (uint8_t *) &vport->fc_sparam, mbox, 0);
++ if (rc) {
++ mempool_free(mbox, phba->mbox_mem_pool);
++ }
++ else {
++ mbox->mbox_flag |= LPFC_MBX_IMED_UNREG;
++ mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
++ mbox->vport = vport;
++ mbox->context2 = NULL;
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ mempool_free(mbox, phba->mbox_mem_pool);
++ }
++ }
++ }
++ }
++
+ lpfc_cleanup_node(vport, ndlp);
--static void hptiop_request_callback(struct hptiop_hba *hba, u32 tag)
-+static int iop_wait_ready_mv(struct hptiop_hba *hba, u32 millisec)
-+{
-+ return iop_send_sync_msg(hba, IOPMU_INBOUND_MSG0_NOP, millisec);
-+}
+ /*
+@@ -2132,6 +2185,12 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
+ }
+ if (vport->fc_flag & FC_RSCN_MODE) {
+ if (lpfc_rscn_payload_check(vport, did)) {
++ /* If we've already received a PLOGI from this NPort
++ * we don't need to try to discover it again.
++ */
++ if (ndlp->nlp_flag & NLP_RCV_PLOGI)
++ return NULL;
+
-+static void hptiop_request_callback_itl(struct hptiop_hba *hba, u32 tag)
- {
- if (tag & IOPMU_QUEUE_ADDR_HOST_BIT)
-- return hptiop_host_request_callback(hba,
-+ hptiop_host_request_callback_itl(hba,
- tag & ~IOPMU_QUEUE_ADDR_HOST_BIT);
- else
-- return hptiop_iop_request_callback(hba, tag);
-+ hptiop_iop_request_callback_itl(hba, tag);
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag |= NLP_NPR_2B_DISC;
+ spin_unlock_irq(shost->host_lock);
+@@ -2144,8 +2203,13 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
+ } else
+ ndlp = NULL;
+ } else {
++ /* If we've already received a PLOGI from this NPort,
++ * or we are already in the process of discovery on it,
++ * we don't need to try to discover it again.
++ */
+ if (ndlp->nlp_state == NLP_STE_ADISC_ISSUE ||
+- ndlp->nlp_state == NLP_STE_PLOGI_ISSUE)
++ ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
++ ndlp->nlp_flag & NLP_RCV_PLOGI)
+ return NULL;
+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+ spin_lock_irq(shost->host_lock);
+@@ -2220,8 +2284,7 @@ lpfc_issue_clear_la(struct lpfc_hba *phba, struct lpfc_vport *vport)
+ lpfc_clear_la(phba, mbox);
+ mbox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
+ mbox->vport = vport;
+- rc = lpfc_sli_issue_mbox(phba, mbox, (MBX_NOWAIT |
+- MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(mbox, phba->mbox_mem_pool);
+ lpfc_disc_flush_list(vport);
+@@ -2244,8 +2307,7 @@ lpfc_issue_reg_vpi(struct lpfc_hba *phba, struct lpfc_vport *vport)
+ lpfc_reg_vpi(phba, vport->vpi, vport->fc_myDID, regvpimbox);
+ regvpimbox->mbox_cmpl = lpfc_mbx_cmpl_reg_vpi;
+ regvpimbox->vport = vport;
+- if (lpfc_sli_issue_mbox(phba, regvpimbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB))
++ if (lpfc_sli_issue_mbox(phba, regvpimbox, MBX_NOWAIT)
+ == MBX_NOT_FINISHED) {
+ mempool_free(regvpimbox, phba->mbox_mem_pool);
+ }
+@@ -2414,7 +2476,7 @@ lpfc_free_tx(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
+ }
}
--static inline void hptiop_drain_outbound_queue(struct hptiop_hba *hba)
-+static void hptiop_drain_outbound_queue_itl(struct hptiop_hba *hba)
+-void
++static void
+ lpfc_disc_flush_list(struct lpfc_vport *vport)
{
- u32 req;
-
-- while ((req = readl(&hba->iop->outbound_queue)) != IOPMU_QUEUE_EMPTY) {
-+ while ((req = readl(&hba->u.itl.iop->outbound_queue)) !=
-+ IOPMU_QUEUE_EMPTY) {
+ struct lpfc_nodelist *ndlp, *next_ndlp;
+@@ -2426,7 +2488,6 @@ lpfc_disc_flush_list(struct lpfc_vport *vport)
+ if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
+ ndlp->nlp_state == NLP_STE_ADISC_ISSUE) {
+ lpfc_free_tx(phba, ndlp);
+- lpfc_nlp_put(ndlp);
+ }
+ }
+ }
+@@ -2516,6 +2577,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ if (ndlp->nlp_type & NLP_FABRIC) {
+ /* Clean up the ndlp on Fabric connections */
+ lpfc_drop_node(vport, ndlp);
++
+ } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
+ /* Fail outstanding IO now since device
+ * is marked for PLOGI.
+@@ -2524,9 +2586,8 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ }
+ }
+ if (vport->port_state != LPFC_FLOGI) {
+- vport->port_state = LPFC_FLOGI;
+- lpfc_set_disctmo(vport);
+ lpfc_initial_flogi(vport);
++ return;
+ }
+ break;
- if (req & IOPMU_QUEUE_MASK_HOST_BITS)
-- hptiop_request_callback(hba, req);
-+ hptiop_request_callback_itl(hba, req);
- else {
- struct hpt_iop_request_header __iomem * p;
+@@ -2536,7 +2597,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ /* Initial FLOGI timeout */
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+ "0222 Initial %s timeout\n",
+- vport->vpi ? "FLOGI" : "FDISC");
++ vport->vpi ? "FDISC" : "FLOGI");
- p = (struct hpt_iop_request_header __iomem *)
-- ((char __iomem *)hba->iop + req);
-+ ((char __iomem *)hba->u.itl.iop + req);
+ /* Assume no Fabric and go on with discovery.
+ * Check for outstanding ELS FLOGI to abort.
+@@ -2558,10 +2619,10 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ /* Next look for NameServer ndlp */
+ ndlp = lpfc_findnode_did(vport, NameServer_DID);
+ if (ndlp)
+- lpfc_nlp_put(ndlp);
+- /* Start discovery */
+- lpfc_disc_start(vport);
+- break;
++ lpfc_els_abort(phba, ndlp);
++
++ /* ReStart discovery */
++ goto restart_disc;
- if (readl(&p->flags) & IOP_REQUEST_FLAG_SYNC_REQUEST) {
- if (readl(&p->context))
-- hptiop_request_callback(hba, req);
-+ hptiop_request_callback_itl(hba, req);
- else
- writel(1, &p->context);
- }
- else
-- hptiop_request_callback(hba, req);
-+ hptiop_request_callback_itl(hba, req);
+ case LPFC_NS_QRY:
+ /* Check for wait for NameServer Rsp timeout */
+@@ -2580,6 +2641,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
}
- }
+ vport->fc_ns_retry = 0;
+
++restart_disc:
+ /*
+ * Discovery is over.
+ * set port_state to PORT_READY if SLI2.
+@@ -2608,8 +2670,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ initlinkmbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
+ initlinkmbox->vport = vport;
+ initlinkmbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+- rc = lpfc_sli_issue_mbox(phba, initlinkmbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, initlinkmbox, MBX_NOWAIT);
+ lpfc_set_loopback_flag(phba);
+ if (rc == MBX_NOT_FINISHED)
+ mempool_free(initlinkmbox, phba->mbox_mem_pool);
+@@ -2664,12 +2725,14 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
+ clrlaerr = 1;
+ break;
+
++ case LPFC_LINK_UP:
++ lpfc_issue_clear_la(phba, vport);
++ /* Drop thru */
+ case LPFC_LINK_UNKNOWN:
+ case LPFC_WARM_START:
+ case LPFC_INIT_START:
+ case LPFC_INIT_MBX_CMDS:
+ case LPFC_LINK_DOWN:
+- case LPFC_LINK_UP:
+ case LPFC_HBA_ERROR:
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+ "0230 Unexpected timeout, hba link "
+@@ -2723,7 +2786,9 @@ lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ else
+ mod_timer(&vport->fc_fdmitmo, jiffies + HZ * 60);
+
+- /* Mailbox took a reference to the node */
++ /* decrement the node reference count held for this callback
++ * function.
++ */
+ lpfc_nlp_put(ndlp);
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+@@ -2747,19 +2812,19 @@ lpfc_filter_by_wwpn(struct lpfc_nodelist *ndlp, void *param)
+ sizeof(ndlp->nlp_portname)) == 0;
}
--static int __iop_intr(struct hptiop_hba *hba)
-+static int iop_intr_itl(struct hptiop_hba *hba)
+-struct lpfc_nodelist *
++static struct lpfc_nodelist *
+ __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
{
-- struct hpt_iopmu __iomem *iop = hba->iop;
-+ struct hpt_iopmu_itl __iomem *iop = hba->u.itl.iop;
- u32 status;
- int ret = 0;
-
-@@ -119,6 +123,7 @@ static int __iop_intr(struct hptiop_hba *hba)
+ struct lpfc_nodelist *ndlp;
- if (status & IOPMU_OUTBOUND_INT_MSG0) {
- u32 msg = readl(&iop->outbound_msgaddr0);
-+
- dprintk("received outbound msg %x\n", msg);
- writel(IOPMU_OUTBOUND_INT_MSG0, &iop->outbound_intstatus);
- hptiop_message_callback(hba, msg);
-@@ -126,31 +131,115 @@ static int __iop_intr(struct hptiop_hba *hba)
+ list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+- if (ndlp->nlp_state != NLP_STE_UNUSED_NODE &&
+- filter(ndlp, param))
++ if (filter(ndlp, param))
+ return ndlp;
}
+ return NULL;
+ }
- if (status & IOPMU_OUTBOUND_INT_POSTQUEUE) {
-- hptiop_drain_outbound_queue(hba);
-+ hptiop_drain_outbound_queue_itl(hba);
-+ ret = 1;
-+ }
-+
-+ return ret;
-+}
-+
-+static u64 mv_outbound_read(struct hpt_iopmu_mv __iomem *mu)
-+{
-+ u32 outbound_tail = readl(&mu->outbound_tail);
-+ u32 outbound_head = readl(&mu->outbound_head);
-+
-+ if (outbound_tail != outbound_head) {
-+ u64 p;
-+
-+ memcpy_fromio(&p, &mu->outbound_q[mu->outbound_tail], 8);
-+ outbound_tail++;
-+
-+ if (outbound_tail == MVIOP_QUEUE_LEN)
-+ outbound_tail = 0;
-+ writel(outbound_tail, &mu->outbound_tail);
-+ return p;
-+ } else
-+ return 0;
-+}
-+
-+static void mv_inbound_write(u64 p, struct hptiop_hba *hba)
-+{
-+ u32 inbound_head = readl(&hba->u.mv.mu->inbound_head);
-+ u32 head = inbound_head + 1;
-+
-+ if (head == MVIOP_QUEUE_LEN)
-+ head = 0;
-+
-+ memcpy_toio(&hba->u.mv.mu->inbound_q[inbound_head], &p, 8);
-+ writel(head, &hba->u.mv.mu->inbound_head);
-+ writel(MVIOP_MU_INBOUND_INT_POSTQUEUE,
-+ &hba->u.mv.regs->inbound_doorbell);
-+}
-+
-+static void hptiop_request_callback_mv(struct hptiop_hba *hba, u64 tag)
-+{
-+ u32 req_type = (tag >> 5) & 0x7;
-+ struct hpt_iop_request_scsi_command *req;
-+
-+ dprintk("hptiop_request_callback_mv: tag=%llx\n", tag);
-+
-+ BUG_ON((tag & MVIOP_MU_QUEUE_REQUEST_RETURN_CONTEXT) == 0);
-+
-+ switch (req_type) {
-+ case IOP_REQUEST_TYPE_GET_CONFIG:
-+ case IOP_REQUEST_TYPE_SET_CONFIG:
-+ hba->msg_done = 1;
-+ break;
-+
-+ case IOP_REQUEST_TYPE_SCSI_COMMAND:
-+ req = hba->reqs[tag >> 8].req_virt;
-+ if (likely(tag & MVIOP_MU_QUEUE_REQUEST_RESULT_BIT))
-+ req->header.result = cpu_to_le32(IOP_RESULT_SUCCESS);
-+
-+ hptiop_finish_scsi_req(hba, tag>>8, req);
-+ break;
-+
-+ default:
-+ break;
-+ }
-+}
-+
-+static int iop_intr_mv(struct hptiop_hba *hba)
-+{
-+ u32 status;
-+ int ret = 0;
-+
-+ status = readl(&hba->u.mv.regs->outbound_doorbell);
-+ writel(~status, &hba->u.mv.regs->outbound_doorbell);
-+
-+ if (status & MVIOP_MU_OUTBOUND_INT_MSG) {
-+ u32 msg;
-+ msg = readl(&hba->u.mv.mu->outbound_msg);
-+ dprintk("received outbound msg %x\n", msg);
-+ hptiop_message_callback(hba, msg);
-+ ret = 1;
-+ }
-+
-+ if (status & MVIOP_MU_OUTBOUND_INT_POSTQUEUE) {
-+ u64 tag;
-+
-+ while ((tag = mv_outbound_read(hba->u.mv.mu)))
-+ hptiop_request_callback_mv(hba, tag);
- ret = 1;
- }
++#if 0
+ /*
+ * Search node lists for a remote port matching filter criteria
+ * Caller needs to hold host_lock before calling this routine.
+@@ -2775,6 +2840,7 @@ lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
+ spin_unlock_irq(shost->host_lock);
+ return ndlp;
+ }
++#endif /* 0 */
- return ret;
+ /*
+ * This routine looks up the ndlp lists for the given RPI. If rpi found it
+@@ -2786,6 +2852,7 @@ __lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
+ return __lpfc_find_node(vport, lpfc_filter_by_rpi, &rpi);
}
--static int iop_send_sync_request(struct hptiop_hba *hba,
-+static int iop_send_sync_request_itl(struct hptiop_hba *hba,
- void __iomem *_req, u32 millisec)
++#if 0
+ struct lpfc_nodelist *
+ lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
{
- struct hpt_iop_request_header __iomem *req = _req;
- u32 i;
+@@ -2797,6 +2864,7 @@ lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
+ spin_unlock_irq(shost->host_lock);
+ return ndlp;
+ }
++#endif /* 0 */
-- writel(readl(&req->flags) | IOP_REQUEST_FLAG_SYNC_REQUEST,
-- &req->flags);
--
-+ writel(readl(&req->flags) | IOP_REQUEST_FLAG_SYNC_REQUEST, &req->flags);
- writel(0, &req->context);
--
-- writel((unsigned long)req - (unsigned long)hba->iop,
-- &hba->iop->inbound_queue);
--
-- hptiop_pci_posting_flush(hba->iop);
-+ writel((unsigned long)req - (unsigned long)hba->u.itl.iop,
-+ &hba->u.itl.iop->inbound_queue);
-+ readl(&hba->u.itl.iop->outbound_intstatus);
+ /*
+ * This routine looks up the ndlp lists for the given WWPN. If WWPN found it
+@@ -2837,6 +2905,9 @@ lpfc_nlp_init(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ return;
+ }
- for (i = 0; i < millisec; i++) {
-- __iop_intr(hba);
-+ iop_intr_itl(hba);
- if (readl(&req->context))
- return 0;
- msleep(1);
-@@ -159,19 +248,49 @@ static int iop_send_sync_request(struct hptiop_hba *hba,
- return -1;
++/* This routine releases all resources associated with a specific NPort's ndlp
++ * and mempool_free's the nodelist.
++ */
+ static void
+ lpfc_nlp_release(struct kref *kref)
+ {
+@@ -2851,16 +2922,57 @@ lpfc_nlp_release(struct kref *kref)
+ mempool_free(ndlp, ndlp->vport->phba->nlp_mem_pool);
}
--static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
-+static int iop_send_sync_request_mv(struct hptiop_hba *hba,
-+ u32 size_bits, u32 millisec)
++/* This routine bumps the reference count for a ndlp structure to ensure
++ * that one discovery thread won't free a ndlp while another discovery thread
++ * is using it.
++ */
+ struct lpfc_nodelist *
+ lpfc_nlp_get(struct lpfc_nodelist *ndlp)
{
-+ struct hpt_iop_request_header *reqhdr = hba->u.mv.internal_req;
- u32 i;
+- if (ndlp)
++ if (ndlp) {
++ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
++ "node get: did:x%x flg:x%x refcnt:x%x",
++ ndlp->nlp_DID, ndlp->nlp_flag,
++ atomic_read(&ndlp->kref.refcount));
+ kref_get(&ndlp->kref);
++ }
+ return ndlp;
+ }
- hba->msg_done = 0;
-+ reqhdr->flags |= cpu_to_le32(IOP_REQUEST_FLAG_SYNC_REQUEST);
-+ mv_inbound_write(hba->u.mv.internal_req_phy |
-+ MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bits, hba);
+
-+ for (i = 0; i < millisec; i++) {
-+ iop_intr_mv(hba);
-+ if (hba->msg_done)
-+ return 0;
-+ msleep(1);
++/* This routine decrements the reference count for a ndlp structure. If the
++ * count goes to 0, this indicates that the associated nodelist should be freed.
++ */
+ int
+ lpfc_nlp_put(struct lpfc_nodelist *ndlp)
+ {
++ if (ndlp) {
++ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
++ "node put: did:x%x flg:x%x refcnt:x%x",
++ ndlp->nlp_DID, ndlp->nlp_flag,
++ atomic_read(&ndlp->kref.refcount));
+ }
-+ return -1;
-+}
+ return ndlp ? kref_put(&ndlp->kref, lpfc_nlp_release) : 0;
+ }
+
-+static void hptiop_post_msg_itl(struct hptiop_hba *hba, u32 msg)
++/* This routine frees the specified nodelist if it is not in use
++ * by any other discovery thread. This routine returns 1 if the ndlp
++ * is not being used by anyone and has been freed. A return value of
++ * 0 indicates it is being used by another discovery thread and the
++ * refcount is left unchanged.
++ */
++int
++lpfc_nlp_not_used(struct lpfc_nodelist *ndlp)
+{
-+ writel(msg, &hba->u.itl.iop->inbound_msgaddr0);
-+ readl(&hba->u.itl.iop->outbound_intstatus);
-+}
++ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
++ "node not used: did:x%x flg:x%x refcnt:x%x",
++ ndlp->nlp_DID, ndlp->nlp_flag,
++ atomic_read(&ndlp->kref.refcount));
+
-+static void hptiop_post_msg_mv(struct hptiop_hba *hba, u32 msg)
-+{
-+ writel(msg, &hba->u.mv.mu->inbound_msg);
-+ writel(MVIOP_MU_INBOUND_INT_MSG, &hba->u.mv.regs->inbound_doorbell);
-+ readl(&hba->u.mv.regs->inbound_doorbell);
++ if (atomic_read(&ndlp->kref.refcount) == 1) {
++ lpfc_nlp_put(ndlp);
++ return 1;
++ }
++ return 0;
+}
++
+diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
+index 451accd..041f83e 100644
+--- a/drivers/scsi/lpfc/lpfc_hw.h
++++ b/drivers/scsi/lpfc/lpfc_hw.h
+@@ -139,6 +139,9 @@ struct lpfc_sli_ct_request {
+ uint8_t len;
+ uint8_t symbname[255];
+ } rsnn;
++ struct da_id { /* For DA_ID requests */
++ uint32_t port_id;
++ } da_id;
+ struct rspn { /* For RSPN_ID requests */
+ uint32_t PortId;
+ uint8_t len;
+@@ -150,11 +153,7 @@ struct lpfc_sli_ct_request {
+ struct gff_acc {
+ uint8_t fbits[128];
+ } gff_acc;
+-#ifdef __BIG_ENDIAN_BITFIELD
+ #define FCP_TYPE_FEATURE_OFFSET 7
+-#else /* __LITTLE_ENDIAN_BITFIELD */
+-#define FCP_TYPE_FEATURE_OFFSET 4
+-#endif
+ struct rff {
+ uint32_t PortId;
+ uint8_t reserved[2];
+@@ -177,6 +176,8 @@ struct lpfc_sli_ct_request {
+ sizeof(struct rnn))
+ #define RSNN_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
+ sizeof(struct rsnn))
++#define DA_ID_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
++ sizeof(struct da_id))
+ #define RSPN_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
+ sizeof(struct rspn))
-- writel(msg, &hba->iop->inbound_msgaddr0);
-+static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
-+{
-+ u32 i;
+@@ -1228,7 +1229,8 @@ typedef struct { /* FireFly BIU registers */
+ #define HS_FFER3 0x20000000 /* Bit 29 */
+ #define HS_FFER2 0x40000000 /* Bit 30 */
+ #define HS_FFER1 0x80000000 /* Bit 31 */
+-#define HS_FFERM 0xFF000000 /* Mask for error bits 31:24 */
++#define HS_CRIT_TEMP 0x00000100 /* Bit 8 */
++#define HS_FFERM 0xFF000100 /* Mask for error bits 31:24 and 8 */
-- hptiop_pci_posting_flush(hba->iop);
-+ hba->msg_done = 0;
-+ hba->ops->post_msg(hba, msg);
+ /* Host Control Register */
- for (i = 0; i < millisec; i++) {
- spin_lock_irq(hba->host->host_lock);
-- __iop_intr(hba);
-+ hba->ops->iop_intr(hba);
- spin_unlock_irq(hba->host->host_lock);
- if (hba->msg_done)
- break;
-@@ -181,46 +300,67 @@ static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
- return hba->msg_done? 0 : -1;
- }
+@@ -1277,12 +1279,14 @@ typedef struct { /* FireFly BIU registers */
+ #define MBX_DEL_LD_ENTRY 0x1D
+ #define MBX_RUN_PROGRAM 0x1E
+ #define MBX_SET_MASK 0x20
+-#define MBX_SET_SLIM 0x21
++#define MBX_SET_VARIABLE 0x21
+ #define MBX_UNREG_D_ID 0x23
+ #define MBX_KILL_BOARD 0x24
+ #define MBX_CONFIG_FARP 0x25
+ #define MBX_BEACON 0x2A
+ #define MBX_HEARTBEAT 0x31
++#define MBX_WRITE_VPARMS 0x32
++#define MBX_ASYNCEVT_ENABLE 0x33
--static int iop_get_config(struct hptiop_hba *hba,
-+static int iop_get_config_itl(struct hptiop_hba *hba,
- struct hpt_iop_request_get_config *config)
- {
- u32 req32;
- struct hpt_iop_request_get_config __iomem *req;
+ #define MBX_CONFIG_HBQ 0x7C
+ #define MBX_LOAD_AREA 0x81
+@@ -1297,7 +1301,7 @@ typedef struct { /* FireFly BIU registers */
+ #define MBX_REG_VNPID 0x96
+ #define MBX_UNREG_VNPID 0x97
-- req32 = readl(&hba->iop->inbound_queue);
-+ req32 = readl(&hba->u.itl.iop->inbound_queue);
- if (req32 == IOPMU_QUEUE_EMPTY)
- return -1;
+-#define MBX_FLASH_WR_ULA 0x98
++#define MBX_WRITE_WWN 0x98
+ #define MBX_SET_DEBUG 0x99
+ #define MBX_LOAD_EXP_ROM 0x9C
- req = (struct hpt_iop_request_get_config __iomem *)
-- ((unsigned long)hba->iop + req32);
-+ ((unsigned long)hba->u.itl.iop + req32);
+@@ -1344,6 +1348,7 @@ typedef struct { /* FireFly BIU registers */
- writel(0, &req->header.flags);
- writel(IOP_REQUEST_TYPE_GET_CONFIG, &req->header.type);
- writel(sizeof(struct hpt_iop_request_get_config), &req->header.size);
- writel(IOP_RESULT_PENDING, &req->header.result);
+ /* SLI_2 IOCB Command Set */
-- if (iop_send_sync_request(hba, req, 20000)) {
-+ if (iop_send_sync_request_itl(hba, req, 20000)) {
- dprintk("Get config send cmd failed\n");
- return -1;
- }
++#define CMD_ASYNC_STATUS 0x7C
+ #define CMD_RCV_SEQUENCE64_CX 0x81
+ #define CMD_XMIT_SEQUENCE64_CR 0x82
+ #define CMD_XMIT_SEQUENCE64_CX 0x83
+@@ -1368,6 +1373,7 @@ typedef struct { /* FireFly BIU registers */
+ #define CMD_FCP_TRECEIVE64_CX 0xA1
+ #define CMD_FCP_TRSP64_CX 0xA3
- memcpy_fromio(config, req, sizeof(*config));
-- writel(req32, &hba->iop->outbound_queue);
-+ writel(req32, &hba->u.itl.iop->outbound_queue);
-+ return 0;
-+}
-+
-+static int iop_get_config_mv(struct hptiop_hba *hba,
-+ struct hpt_iop_request_get_config *config)
-+{
-+ struct hpt_iop_request_get_config *req = hba->u.mv.internal_req;
++#define CMD_QUE_XRI64_CX 0xB3
+ #define CMD_IOCB_RCV_SEQ64_CX 0xB5
+ #define CMD_IOCB_RCV_ELS64_CX 0xB7
+ #define CMD_IOCB_RCV_CONT64_CX 0xBB
+@@ -1406,6 +1412,8 @@ typedef struct { /* FireFly BIU registers */
+ #define MBX_BUSY 0xffffff /* Attempted cmd to busy Mailbox */
+ #define MBX_TIMEOUT 0xfffffe /* time-out expired waiting for */
+
++#define TEMPERATURE_OFFSET 0xB0 /* Slim offset for critical temperature event */
+
-+ req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
-+ req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_GET_CONFIG);
-+ req->header.size =
-+ cpu_to_le32(sizeof(struct hpt_iop_request_get_config));
-+ req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
-+ req->header.context = cpu_to_le64(IOP_REQUEST_TYPE_GET_CONFIG<<5);
+ /*
+ * Begin Structure Definitions for Mailbox Commands
+ */
+@@ -2606,6 +2614,18 @@ typedef struct {
+ uint32_t IPAddress;
+ } CONFIG_FARP_VAR;
+
++/* Structure for MB Command MBX_ASYNCEVT_ENABLE (0x33) */
+
-+ if (iop_send_sync_request_mv(hba, 0, 20000)) {
-+ dprintk("Get config send cmd failed\n");
-+ return -1;
-+ }
++typedef struct {
++#ifdef __BIG_ENDIAN_BITFIELD
++ uint32_t rsvd:30;
++ uint32_t ring:2; /* Ring for ASYNC_EVENT iocb Bits 0-1*/
++#else /* __LITTLE_ENDIAN */
++ uint32_t ring:2; /* Ring for ASYNC_EVENT iocb Bits 0-1*/
++ uint32_t rsvd:30;
++#endif
++} ASYNCEVT_ENABLE_VAR;
+
-+ memcpy(config, req, sizeof(struct hpt_iop_request_get_config));
- return 0;
- }
+ /* Union of all Mailbox Command types */
+ #define MAILBOX_CMD_WSIZE 32
+ #define MAILBOX_CMD_SIZE (MAILBOX_CMD_WSIZE * sizeof(uint32_t))
+@@ -2645,6 +2665,7 @@ typedef union {
+ CONFIG_PORT_VAR varCfgPort; /* cmd = 0x88 (CONFIG_PORT) */
+ REG_VPI_VAR varRegVpi; /* cmd = 0x96 (REG_VPI) */
+ UNREG_VPI_VAR varUnregVpi; /* cmd = 0x97 (UNREG_VPI) */
++ ASYNCEVT_ENABLE_VAR varCfgAsyncEvent; /*cmd = x33 (CONFIG_ASYNC) */
+ } MAILVARIANTS;
--static int iop_set_config(struct hptiop_hba *hba,
-+static int iop_set_config_itl(struct hptiop_hba *hba,
- struct hpt_iop_request_set_config *config)
- {
- u32 req32;
- struct hpt_iop_request_set_config __iomem *req;
+ /*
+@@ -2973,6 +2994,34 @@ typedef struct {
+ #endif
+ } RCV_ELS_REQ64;
-- req32 = readl(&hba->iop->inbound_queue);
-+ req32 = readl(&hba->u.itl.iop->inbound_queue);
- if (req32 == IOPMU_QUEUE_EMPTY)
- return -1;
++/* IOCB Command template for RCV_SEQ64 */
++struct rcv_seq64 {
++ struct ulp_bde64 elsReq;
++ uint32_t hbq_1;
++ uint32_t parmRo;
++#ifdef __BIG_ENDIAN_BITFIELD
++ uint32_t rctl:8;
++ uint32_t type:8;
++ uint32_t dfctl:8;
++ uint32_t ls:1;
++ uint32_t fs:1;
++ uint32_t rsvd2:3;
++ uint32_t si:1;
++ uint32_t bc:1;
++ uint32_t rsvd3:1;
++#else /* __LITTLE_ENDIAN_BITFIELD */
++ uint32_t rsvd3:1;
++ uint32_t bc:1;
++ uint32_t si:1;
++ uint32_t rsvd2:3;
++ uint32_t fs:1;
++ uint32_t ls:1;
++ uint32_t dfctl:8;
++ uint32_t type:8;
++ uint32_t rctl:8;
++#endif
++};
++
+ /* IOCB Command template for all 64 bit FCP Initiator commands */
+ typedef struct {
+ ULP_BDL bdl;
+@@ -2987,6 +3036,21 @@ typedef struct {
+ uint32_t fcpt_Length; /* transfer ready for IWRITE */
+ } FCPT_FIELDS64;
- req = (struct hpt_iop_request_set_config __iomem *)
-- ((unsigned long)hba->iop + req32);
-+ ((unsigned long)hba->u.itl.iop + req32);
++/* IOCB Command template for Async Status iocb commands */
++typedef struct {
++ uint32_t rsvd[4];
++ uint32_t param;
++#ifdef __BIG_ENDIAN_BITFIELD
++ uint16_t evt_code; /* High order bits word 5 */
++ uint16_t sub_ctxt_tag; /* Low order bits word 5 */
++#else /* __LITTLE_ENDIAN_BITFIELD */
++ uint16_t sub_ctxt_tag; /* High order bits word 5 */
++ uint16_t evt_code; /* Low order bits word 5 */
++#endif
++} ASYNCSTAT_FIELDS;
++#define ASYNC_TEMP_WARN 0x100
++#define ASYNC_TEMP_SAFE 0x101
++
+ /* IOCB Command template for CMD_IOCB_RCV_ELS64_CX (0xB7)
+ or CMD_IOCB_RCV_SEQ64_CX (0xB5) */
- memcpy_toio((u8 __iomem *)req + sizeof(struct hpt_iop_request_header),
- (u8 *)config + sizeof(struct hpt_iop_request_header),
-@@ -232,22 +372,52 @@ static int iop_set_config(struct hptiop_hba *hba,
- writel(sizeof(struct hpt_iop_request_set_config), &req->header.size);
- writel(IOP_RESULT_PENDING, &req->header.result);
+@@ -3004,7 +3068,26 @@ struct rcv_sli3 {
+ struct ulp_bde64 bde2;
+ };
-- if (iop_send_sync_request(hba, req, 20000)) {
-+ if (iop_send_sync_request_itl(hba, req, 20000)) {
- dprintk("Set config send cmd failed\n");
- return -1;
- }
++/* Structure used for a single HBQ entry */
++struct lpfc_hbq_entry {
++ struct ulp_bde64 bde;
++ uint32_t buffer_tag;
++};
-- writel(req32, &hba->iop->outbound_queue);
-+ writel(req32, &hba->u.itl.iop->outbound_queue);
- return 0;
- }
++/* IOCB Command template for QUE_XRI64_CX (0xB3) command */
++typedef struct {
++ struct lpfc_hbq_entry buff;
++ uint32_t rsvd;
++ uint32_t rsvd1;
++} QUE_XRI64_CX_FIELDS;
++
++struct que_xri64cx_ext_fields {
++ uint32_t iotag64_low;
++ uint32_t iotag64_high;
++ uint32_t ebde_count;
++ uint32_t rsvd;
++ struct lpfc_hbq_entry buff[5];
++};
--static int hptiop_initialize_iop(struct hptiop_hba *hba)
-+static int iop_set_config_mv(struct hptiop_hba *hba,
-+ struct hpt_iop_request_set_config *config)
- {
-- struct hpt_iopmu __iomem *iop = hba->iop;
-+ struct hpt_iop_request_set_config *req = hba->u.mv.internal_req;
+ typedef struct _IOCB { /* IOCB structure */
+ union {
+@@ -3028,6 +3111,9 @@ typedef struct _IOCB { /* IOCB structure */
+ XMT_SEQ_FIELDS64 xseq64; /* XMIT / BCAST cmd */
+ FCPI_FIELDS64 fcpi64; /* FCP 64 bit Initiator template */
+ FCPT_FIELDS64 fcpt64; /* FCP 64 bit target template */
++ ASYNCSTAT_FIELDS asyncstat; /* async_status iocb */
++ QUE_XRI64_CX_FIELDS quexri64cx; /* que_xri64_cx fields */
++ struct rcv_seq64 rcvseq64; /* RCV_SEQ64 and RCV_CONT64 */
-- /* enable interrupts */
-+ memcpy(req, config, sizeof(struct hpt_iop_request_set_config));
-+ req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
-+ req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_SET_CONFIG);
-+ req->header.size =
-+ cpu_to_le32(sizeof(struct hpt_iop_request_set_config));
-+ req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
-+ req->header.context = cpu_to_le64(IOP_REQUEST_TYPE_SET_CONFIG<<5);
-+
-+ if (iop_send_sync_request_mv(hba, 0, 20000)) {
-+ dprintk("Set config send cmd failed\n");
-+ return -1;
-+ }
-+
-+ return 0;
-+}
-+
-+static void hptiop_enable_intr_itl(struct hptiop_hba *hba)
-+{
- writel(~(IOPMU_OUTBOUND_INT_POSTQUEUE | IOPMU_OUTBOUND_INT_MSG0),
-- &iop->outbound_intmask);
-+ &hba->u.itl.iop->outbound_intmask);
-+}
+ uint32_t ulpWord[IOCB_WORD_SZ - 2]; /* generic 6 'words' */
+ } un;
+@@ -3085,6 +3171,10 @@ typedef struct _IOCB { /* IOCB structure */
+
+ union {
+ struct rcv_sli3 rcvsli3; /* words 8 - 15 */
+
-+static void hptiop_enable_intr_mv(struct hptiop_hba *hba)
-+{
-+ writel(MVIOP_MU_OUTBOUND_INT_POSTQUEUE | MVIOP_MU_OUTBOUND_INT_MSG,
-+ &hba->u.mv.regs->outbound_intmask);
-+}
++ /* words 8-31 used for que_xri_cx iocb */
++ struct que_xri64cx_ext_fields que_xri64cx_ext_words;
+
-+static int hptiop_initialize_iop(struct hptiop_hba *hba)
-+{
-+ /* enable interrupts */
-+ hba->ops->enable_intr(hba);
+ uint32_t sli3Words[24]; /* 96 extra bytes for SLI-3 */
+ } unsli3;
- hba->initialized = 1;
+@@ -3124,12 +3214,6 @@ typedef struct _IOCB { /* IOCB structure */
-@@ -261,37 +431,74 @@ static int hptiop_initialize_iop(struct hptiop_hba *hba)
+ } IOCB_t;
+
+-/* Structure used for a single HBQ entry */
+-struct lpfc_hbq_entry {
+- struct ulp_bde64 bde;
+- uint32_t buffer_tag;
+-};
+-
+
+ #define SLI1_SLIM_SIZE (4 * 1024)
+
+@@ -3172,6 +3256,8 @@ lpfc_is_LC_HBA(unsigned short device)
+ (device == PCI_DEVICE_ID_BSMB) ||
+ (device == PCI_DEVICE_ID_ZMID) ||
+ (device == PCI_DEVICE_ID_ZSMB) ||
++ (device == PCI_DEVICE_ID_SAT_MID) ||
++ (device == PCI_DEVICE_ID_SAT_SMB) ||
+ (device == PCI_DEVICE_ID_RFLY))
+ return 1;
+ else
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index ecebdfa..3205f74 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -212,6 +212,18 @@ out_free_mbox:
return 0;
}
--static int hptiop_map_pci_bar(struct hptiop_hba *hba)
-+static void __iomem *hptiop_map_pci_bar(struct hptiop_hba *hba, int index)
- {
- u32 mem_base_phy, length;
- void __iomem *mem_base_virt;
++/* Completion handler for config async event mailbox command. */
++static void
++lpfc_config_async_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
++{
++ if (pmboxq->mb.mbxStatus == MBX_SUCCESS)
++ phba->temp_sensor_support = 1;
++ else
++ phba->temp_sensor_support = 0;
++ mempool_free(pmboxq, phba->mbox_mem_pool);
++ return;
++}
+
- struct pci_dev *pcidev = hba->pcidev;
+ /************************************************************************/
+ /* */
+ /* lpfc_config_port_post */
+@@ -234,6 +246,15 @@ lpfc_config_port_post(struct lpfc_hba *phba)
+ int i, j;
+ int rc;
-- if (!(pci_resource_flags(pcidev, 0) & IORESOURCE_MEM)) {
++ spin_lock_irq(&phba->hbalock);
++ /*
++ * If the Config port completed correctly the HBA is not
++ * over heated any more.
++ */
++ if (phba->over_temp_state == HBA_OVER_TEMP)
++ phba->over_temp_state = HBA_NORMAL_TEMP;
++ spin_unlock_irq(&phba->hbalock);
+
-+ if (!(pci_resource_flags(pcidev, index) & IORESOURCE_MEM)) {
- printk(KERN_ERR "scsi%d: pci resource invalid\n",
- hba->host->host_no);
-- return -1;
-+ return 0;
- }
+ pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ if (!pmb) {
+ phba->link_state = LPFC_HBA_ERROR;
+@@ -343,7 +364,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
-- mem_base_phy = pci_resource_start(pcidev, 0);
-- length = pci_resource_len(pcidev, 0);
-+ mem_base_phy = pci_resource_start(pcidev, index);
-+ length = pci_resource_len(pcidev, index);
- mem_base_virt = ioremap(mem_base_phy, length);
+ phba->link_state = LPFC_LINK_DOWN;
- if (!mem_base_virt) {
- printk(KERN_ERR "scsi%d: Fail to ioremap memory space\n",
- hba->host->host_no);
-+ return 0;
+- /* Only process IOCBs on ring 0 till hba_state is READY */
++ /* Only process IOCBs on ELS ring till hba_state is READY */
+ if (psli->ring[psli->extra_ring].cmdringaddr)
+ psli->ring[psli->extra_ring].flag |= LPFC_STOP_IOCB_EVENT;
+ if (psli->ring[psli->fcp_ring].cmdringaddr)
+@@ -409,7 +430,21 @@ lpfc_config_port_post(struct lpfc_hba *phba)
+ return -EIO;
+ }
+ /* MBOX buffer will be freed in mbox compl */
++ pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
++ lpfc_config_async(phba, pmb, LPFC_ELS_RING);
++ pmb->mbox_cmpl = lpfc_config_async_cmpl;
++ pmb->vport = phba->pport;
++ rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
+
++ if ((rc != MBX_BUSY) && (rc != MBX_SUCCESS)) {
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_INIT,
++ "0456 Adapter failed to issue "
++ "ASYNCEVT_ENABLE mbox status x%x \n.",
++ rc);
++ mempool_free(pmb, phba->mbox_mem_pool);
+ }
-+ return mem_base_virt;
-+}
+ return (0);
+ }
+
+@@ -449,6 +484,9 @@ lpfc_hba_down_post(struct lpfc_hba *phba)
+ struct lpfc_sli *psli = &phba->sli;
+ struct lpfc_sli_ring *pring;
+ struct lpfc_dmabuf *mp, *next_mp;
++ struct lpfc_iocbq *iocb;
++ IOCB_t *cmd = NULL;
++ LIST_HEAD(completions);
+ int i;
+
+ if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED)
+@@ -464,16 +502,42 @@ lpfc_hba_down_post(struct lpfc_hba *phba)
+ }
+ }
+
++ spin_lock_irq(&phba->hbalock);
+ for (i = 0; i < psli->num_rings; i++) {
+ pring = &psli->ring[i];
+
-+static int hptiop_map_pci_bar_itl(struct hptiop_hba *hba)
-+{
-+ hba->u.itl.iop = hptiop_map_pci_bar(hba, 0);
-+ if (hba->u.itl.iop)
-+ return 0;
-+ else
-+ return -1;
-+}
++ /* At this point in time the HBA is either reset or DOA. Either
++ * way, nothing should be on txcmplq as it will NEVER complete.
++ */
++ list_splice_init(&pring->txcmplq, &completions);
++ pring->txcmplq_cnt = 0;
++ spin_unlock_irq(&phba->hbalock);
+
-+static void hptiop_unmap_pci_bar_itl(struct hptiop_hba *hba)
-+{
-+ iounmap(hba->u.itl.iop);
-+}
++ while (!list_empty(&completions)) {
++ iocb = list_get_first(&completions, struct lpfc_iocbq,
++ list);
++ cmd = &iocb->iocb;
++ list_del_init(&iocb->list);
+
-+static int hptiop_map_pci_bar_mv(struct hptiop_hba *hba)
-+{
-+ hba->u.mv.regs = hptiop_map_pci_bar(hba, 0);
-+ if (hba->u.mv.regs == 0)
-+ return -1;
++ if (!iocb->iocb_cmpl)
++ lpfc_sli_release_iocbq(phba, iocb);
++ else {
++ cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
++ cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
++ (iocb->iocb_cmpl) (phba, iocb, iocb);
++ }
++ }
+
-+ hba->u.mv.mu = hptiop_map_pci_bar(hba, 2);
-+ if (hba->u.mv.mu == 0) {
-+ iounmap(hba->u.mv.regs);
- return -1;
+ lpfc_sli_abort_iocb_ring(phba, pring);
++ spin_lock_irq(&phba->hbalock);
}
++ spin_unlock_irq(&phba->hbalock);
-- hba->iop = mem_base_virt;
-- dprintk("hptiop_map_pci_bar: iop=%p\n", hba->iop);
return 0;
}
-+static void hptiop_unmap_pci_bar_mv(struct hptiop_hba *hba)
-+{
-+ iounmap(hba->u.mv.regs);
-+ iounmap(hba->u.mv.mu);
-+}
-+
- static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg)
+ /* HBA heart beat timeout handler */
+-void
++static void
+ lpfc_hb_timeout(unsigned long ptr)
{
- dprintk("iop message 0x%x\n", msg);
+ struct lpfc_hba *phba;
+@@ -512,8 +576,10 @@ void
+ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
+ {
+ LPFC_MBOXQ_t *pmboxq;
++ struct lpfc_dmabuf *buf_ptr;
+ int retval;
+ struct lpfc_sli *psli = &phba->sli;
++ LIST_HEAD(completions);
-+ if (msg == IOPMU_INBOUND_MSG0_NOP)
-+ hba->msg_done = 1;
+ if ((phba->link_state == LPFC_HBA_ERROR) ||
+ (phba->pport->load_flag & FC_UNLOADING) ||
+@@ -540,49 +606,88 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
+ }
+ spin_unlock_irq(&phba->pport->work_port_lock);
+
+- /* If there is no heart beat outstanding, issue a heartbeat command */
+- if (!phba->hb_outstanding) {
+- pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
+- if (!pmboxq) {
+- mod_timer(&phba->hb_tmofunc,
+- jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+- return;
++ if (phba->elsbuf_cnt &&
++ (phba->elsbuf_cnt == phba->elsbuf_prev_cnt)) {
++ spin_lock_irq(&phba->hbalock);
++ list_splice_init(&phba->elsbuf, &completions);
++ phba->elsbuf_cnt = 0;
++ phba->elsbuf_prev_cnt = 0;
++ spin_unlock_irq(&phba->hbalock);
+
- if (!hba->initialized)
- return;
++ while (!list_empty(&completions)) {
++ list_remove_head(&completions, buf_ptr,
++ struct lpfc_dmabuf, list);
++ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
++ kfree(buf_ptr);
+ }
++ }
++ phba->elsbuf_prev_cnt = phba->elsbuf_cnt;
-@@ -303,7 +510,7 @@ static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg)
- hba->msg_done = 1;
- }
+- lpfc_heart_beat(phba, pmboxq);
+- pmboxq->mbox_cmpl = lpfc_hb_mbox_cmpl;
+- pmboxq->vport = phba->pport;
+- retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
++ /* If there is no heart beat outstanding, issue a heartbeat command */
++ if (phba->cfg_enable_hba_heartbeat) {
++ if (!phba->hb_outstanding) {
++ pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
++ if (!pmboxq) {
++ mod_timer(&phba->hb_tmofunc,
++ jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
++ return;
++ }
--static inline struct hptiop_request *get_req(struct hptiop_hba *hba)
-+static struct hptiop_request *get_req(struct hptiop_hba *hba)
- {
- struct hptiop_request *ret;
+- if (retval != MBX_BUSY && retval != MBX_SUCCESS) {
+- mempool_free(pmboxq, phba->mbox_mem_pool);
++ lpfc_heart_beat(phba, pmboxq);
++ pmboxq->mbox_cmpl = lpfc_hb_mbox_cmpl;
++ pmboxq->vport = phba->pport;
++ retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
++
++ if (retval != MBX_BUSY && retval != MBX_SUCCESS) {
++ mempool_free(pmboxq, phba->mbox_mem_pool);
++ mod_timer(&phba->hb_tmofunc,
++ jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
++ return;
++ }
+ mod_timer(&phba->hb_tmofunc,
+- jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
++ jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
++ phba->hb_outstanding = 1;
+ return;
++ } else {
++ /*
++ * If heart beat timeout called with hb_outstanding set
++ * we need to take the HBA offline.
++ */
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "0459 Adapter heartbeat failure, "
++ "taking this port offline.\n");
++
++ spin_lock_irq(&phba->hbalock);
++ psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
++ spin_unlock_irq(&phba->hbalock);
++
++ lpfc_offline_prep(phba);
++ lpfc_offline(phba);
++ lpfc_unblock_mgmt_io(phba);
++ phba->link_state = LPFC_HBA_ERROR;
++ lpfc_hba_down_post(phba);
+ }
+- mod_timer(&phba->hb_tmofunc,
+- jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
+- phba->hb_outstanding = 1;
+- return;
+- } else {
+- /*
+- * If heart beat timeout called with hb_outstanding set we
+- * need to take the HBA offline.
+- */
+- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+- "0459 Adapter heartbeat failure, taking "
+- "this port offline.\n");
++ }
++}
-@@ -316,30 +523,19 @@ static inline struct hptiop_request *get_req(struct hptiop_hba *hba)
- return ret;
- }
+- spin_lock_irq(&phba->hbalock);
+- psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+- spin_unlock_irq(&phba->hbalock);
++static void
++lpfc_offline_eratt(struct lpfc_hba *phba)
++{
++ struct lpfc_sli *psli = &phba->sli;
--static inline void free_req(struct hptiop_hba *hba, struct hptiop_request *req)
-+static void free_req(struct hptiop_hba *hba, struct hptiop_request *req)
- {
- dprintk("free_req(%d, %p)\n", req->index, req);
- req->next = hba->req_list;
- hba->req_list = req;
+- lpfc_offline_prep(phba);
+- lpfc_offline(phba);
+- lpfc_unblock_mgmt_io(phba);
+- phba->link_state = LPFC_HBA_ERROR;
+- lpfc_hba_down_post(phba);
+- }
++ spin_lock_irq(&phba->hbalock);
++ psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
++ spin_unlock_irq(&phba->hbalock);
++ lpfc_offline_prep(phba);
++
++ lpfc_offline(phba);
++ lpfc_reset_barrier(phba);
++ lpfc_sli_brdreset(phba);
++ lpfc_hba_down_post(phba);
++ lpfc_sli_brdready(phba, HS_MBRDY);
++ lpfc_unblock_mgmt_io(phba);
++ phba->link_state = LPFC_HBA_ERROR;
++ return;
}
--static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
-+static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
-+ struct hpt_iop_request_scsi_command *req)
- {
-- struct hpt_iop_request_scsi_command *req;
- struct scsi_cmnd *scp;
-- u32 tag;
--
-- if (hba->iopintf_v2) {
-- tag = _tag & ~ IOPMU_QUEUE_REQUEST_RESULT_BIT;
-- req = hba->reqs[tag].req_virt;
-- if (likely(_tag & IOPMU_QUEUE_REQUEST_RESULT_BIT))
-- req->header.result = IOP_RESULT_SUCCESS;
-- } else {
-- tag = _tag;
-- req = hba->reqs[tag].req_virt;
-- }
+ /************************************************************************/
+@@ -601,6 +706,8 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
+ struct lpfc_sli_ring *pring;
+ struct lpfc_vport **vports;
+ uint32_t event_data;
++ unsigned long temperature;
++ struct temp_event temp_event_data;
+ struct Scsi_Host *shost;
+ int i;
-- dprintk("hptiop_host_request_callback: req=%p, type=%d, "
-+ dprintk("hptiop_finish_scsi_req: req=%p, type=%d, "
- "result=%d, context=0x%x tag=%d\n",
- req, req->header.type, req->header.result,
- req->header.context, tag);
-@@ -354,6 +550,8 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
+@@ -608,6 +715,9 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
+ * since we cannot communicate with the pci card anyway. */
+ if (pci_channel_offline(phba->pcidev))
+ return;
++ /* If resets are disabled then leave the HBA alone and return */
++ if (!phba->cfg_enable_hba_reset)
++ return;
- switch (le32_to_cpu(req->header.result)) {
- case IOP_RESULT_SUCCESS:
-+ scsi_set_resid(scp,
-+ scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
- scp->result = (DID_OK<<16);
- break;
- case IOP_RESULT_BAD_TARGET:
-@@ -371,12 +569,12 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
- case IOP_RESULT_INVALID_REQUEST:
- scp->result = (DID_ABORT<<16);
- break;
-- case IOP_RESULT_MODE_SENSE_CHECK_CONDITION:
-+ case IOP_RESULT_CHECK_CONDITION:
-+ scsi_set_resid(scp,
-+ scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
- scp->result = SAM_STAT_CHECK_CONDITION;
-- memset(&scp->sense_buffer,
-- 0, sizeof(scp->sense_buffer));
- memcpy(&scp->sense_buffer, &req->sg_list,
-- min(sizeof(scp->sense_buffer),
-+ min_t(size_t, SCSI_SENSE_BUFFERSIZE,
- le32_to_cpu(req->dataxfer_length)));
- break;
+ if (phba->work_hs & HS_FFER6 ||
+ phba->work_hs & HS_FFER5) {
+@@ -620,14 +730,14 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+ for(i = 0;
+- i < LPFC_MAX_VPORTS && vports[i] != NULL;
++ i <= phba->max_vpi && vports[i] != NULL;
+ i++){
+ shost = lpfc_shost_from_vport(vports[i]);
+ spin_lock_irq(shost->host_lock);
+ vports[i]->fc_flag |= FC_ESTABLISH_LINK;
+ spin_unlock_irq(shost->host_lock);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ spin_lock_irq(&phba->hbalock);
+ psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+ spin_unlock_irq(&phba->hbalock);
+@@ -655,6 +765,31 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
+ return;
+ }
+ lpfc_unblock_mgmt_io(phba);
++ } else if (phba->work_hs & HS_CRIT_TEMP) {
++ temperature = readl(phba->MBslimaddr + TEMPERATURE_OFFSET);
++ temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
++ temp_event_data.event_code = LPFC_CRIT_TEMP;
++ temp_event_data.data = (uint32_t)temperature;
++
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "0459 Adapter maximum temperature exceeded "
++ "(%ld), taking this port offline "
++ "Data: x%x x%x x%x\n",
++ temperature, phba->work_hs,
++ phba->work_status[0], phba->work_status[1]);
++
++ shost = lpfc_shost_from_vport(phba->pport);
++ fc_host_post_vendor_event(shost, fc_get_event_number(),
++ sizeof(temp_event_data),
++ (char *) &temp_event_data,
++ SCSI_NL_VID_TYPE_PCI
++ | PCI_VENDOR_ID_EMULEX);
++
++ spin_lock_irq(&phba->hbalock);
++ phba->over_temp_state = HBA_OVER_TEMP;
++ spin_unlock_irq(&phba->hbalock);
++ lpfc_offline_eratt(phba);
++
+ } else {
+ /* The if clause above forces this code path when the status
+ * failure is a value other than FFER6. Do not call the offline
+@@ -672,14 +807,7 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
+ sizeof(event_data), (char *) &event_data,
+ SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
-@@ -391,15 +589,33 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 _tag)
- free_req(hba, &hba->reqs[tag]);
+- spin_lock_irq(&phba->hbalock);
+- psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+- spin_unlock_irq(&phba->hbalock);
+- lpfc_offline_prep(phba);
+- lpfc_offline(phba);
+- lpfc_unblock_mgmt_io(phba);
+- phba->link_state = LPFC_HBA_ERROR;
+- lpfc_hba_down_post(phba);
++ lpfc_offline_eratt(phba);
+ }
}
--void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag)
-+static void hptiop_host_request_callback_itl(struct hptiop_hba *hba, u32 _tag)
-+{
-+ struct hpt_iop_request_scsi_command *req;
-+ u32 tag;
-+
-+ if (hba->iopintf_v2) {
-+ tag = _tag & ~IOPMU_QUEUE_REQUEST_RESULT_BIT;
-+ req = hba->reqs[tag].req_virt;
-+ if (likely(_tag & IOPMU_QUEUE_REQUEST_RESULT_BIT))
-+ req->header.result = cpu_to_le32(IOP_RESULT_SUCCESS);
-+ } else {
-+ tag = _tag;
-+ req = hba->reqs[tag].req_virt;
+@@ -699,21 +827,25 @@ lpfc_handle_latt(struct lpfc_hba *phba)
+ LPFC_MBOXQ_t *pmb;
+ volatile uint32_t control;
+ struct lpfc_dmabuf *mp;
+- int rc = -ENOMEM;
++ int rc = 0;
+
+ pmb = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+- if (!pmb)
++ if (!pmb) {
++ rc = 1;
+ goto lpfc_handle_latt_err_exit;
++ }
+
+ mp = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
+- if (!mp)
++ if (!mp) {
++ rc = 2;
+ goto lpfc_handle_latt_free_pmb;
+ }
-+
-+ hptiop_finish_scsi_req(hba, tag, req);
-+}
-+
-+void hptiop_iop_request_callback_itl(struct hptiop_hba *hba, u32 tag)
- {
- struct hpt_iop_request_header __iomem *req;
- struct hpt_iop_request_ioctl_command __iomem *p;
- struct hpt_ioctl_k *arg;
- req = (struct hpt_iop_request_header __iomem *)
-- ((unsigned long)hba->iop + tag);
-- dprintk("hptiop_iop_request_callback: req=%p, type=%d, "
-+ ((unsigned long)hba->u.itl.iop + tag);
-+ dprintk("hptiop_iop_request_callback_itl: req=%p, type=%d, "
- "result=%d, context=0x%x tag=%d\n",
- req, readl(&req->type), readl(&req->result),
- readl(&req->context), tag);
-@@ -427,7 +643,7 @@ void hptiop_iop_request_callback(struct hptiop_hba *hba, u32 tag)
- arg->result = HPT_IOCTL_RESULT_FAILED;
+ mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
+- if (!mp->virt)
++ if (!mp->virt) {
++ rc = 3;
+ goto lpfc_handle_latt_free_mp;
+-
+- rc = -EIO;
++ }
- arg->done(arg);
-- writel(tag, &hba->iop->outbound_queue);
-+ writel(tag, &hba->u.itl.iop->outbound_queue);
- }
+ /* Cleanup any outstanding ELS commands */
+ lpfc_els_flush_all_cmd(phba);
+@@ -722,9 +854,11 @@ lpfc_handle_latt(struct lpfc_hba *phba)
+ lpfc_read_la(phba, pmb, mp);
+ pmb->mbox_cmpl = lpfc_mbx_cmpl_read_la;
+ pmb->vport = vport;
+- rc = lpfc_sli_issue_mbox (phba, pmb, (MBX_NOWAIT | MBX_STOP_IOCB));
+- if (rc == MBX_NOT_FINISHED)
++ rc = lpfc_sli_issue_mbox (phba, pmb, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ rc = 4;
+ goto lpfc_handle_latt_free_mbuf;
++ }
- static irqreturn_t hptiop_intr(int irq, void *dev_id)
-@@ -437,7 +653,7 @@ static irqreturn_t hptiop_intr(int irq, void *dev_id)
- unsigned long flags;
+ /* Clear Link Attention in HA REG */
+ spin_lock_irq(&phba->hbalock);
+@@ -756,10 +890,8 @@ lpfc_handle_latt_err_exit:
+ lpfc_linkdown(phba);
+ phba->link_state = LPFC_HBA_ERROR;
- spin_lock_irqsave(hba->host->host_lock, flags);
-- handled = __iop_intr(hba);
-+ handled = hba->ops->iop_intr(hba);
- spin_unlock_irqrestore(hba->host->host_lock, flags);
+- /* The other case is an error from issue_mbox */
+- if (rc == -ENOMEM)
+- lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX,
+- "0300 READ_LA: no buffers\n");
++ lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
++ "0300 LATT: Cannot issue READ_LA: Data:%d\n", rc);
- return handled;
-@@ -469,6 +685,57 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg)
- return HPT_SCP(scp)->sgcnt;
+ return;
+ }
+@@ -1088,9 +1220,8 @@ lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt,
+ /* Allocate buffer to post */
+ mp1 = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
+ if (mp1)
+- mp1->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
+- &mp1->phys);
+- if (mp1 == 0 || mp1->virt == 0) {
++ mp1->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &mp1->phys);
++ if (!mp1 || !mp1->virt) {
+ kfree(mp1);
+ lpfc_sli_release_iocbq(phba, iocb);
+ pring->missbufcnt = cnt;
+@@ -1104,7 +1235,7 @@ lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt,
+ if (mp2)
+ mp2->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
+ &mp2->phys);
+- if (mp2 == 0 || mp2->virt == 0) {
++ if (!mp2 || !mp2->virt) {
+ kfree(mp2);
+ lpfc_mbuf_free(phba, mp1->virt, mp1->phys);
+ kfree(mp1);
+@@ -1280,15 +1411,39 @@ lpfc_hba_init(struct lpfc_hba *phba, uint32_t *hbainit)
+ kfree(HashWorking);
}
-+static void hptiop_post_req_itl(struct hptiop_hba *hba,
-+ struct hptiop_request *_req)
-+{
-+ struct hpt_iop_request_header *reqhdr = _req->req_virt;
-+
-+ reqhdr->context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
-+ (u32)_req->index);
-+ reqhdr->context_hi32 = 0;
-+
-+ if (hba->iopintf_v2) {
-+ u32 size, size_bits;
-+
-+ size = le32_to_cpu(reqhdr->size);
-+ if (size < 256)
-+ size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT;
-+ else if (size < 512)
-+ size_bits = IOPMU_QUEUE_ADDR_HOST_BIT;
-+ else
-+ size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT |
-+ IOPMU_QUEUE_ADDR_HOST_BIT;
-+ writel(_req->req_shifted_phy | size_bits,
-+ &hba->u.itl.iop->inbound_queue);
-+ } else
-+ writel(_req->req_shifted_phy | IOPMU_QUEUE_ADDR_HOST_BIT,
-+ &hba->u.itl.iop->inbound_queue);
-+}
-+
-+static void hptiop_post_req_mv(struct hptiop_hba *hba,
-+ struct hptiop_request *_req)
-+{
-+ struct hpt_iop_request_header *reqhdr = _req->req_virt;
-+ u32 size, size_bit;
+-static void
++void
+ lpfc_cleanup(struct lpfc_vport *vport)
+ {
++ struct lpfc_hba *phba = vport->phba;
+ struct lpfc_nodelist *ndlp, *next_ndlp;
++ int i = 0;
+
+- /* clean up phba - lpfc specific */
+- lpfc_can_disctmo(vport);
+- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
+- lpfc_nlp_put(ndlp);
++ if (phba->link_state > LPFC_LINK_DOWN)
++ lpfc_port_link_failure(vport);
+
-+ reqhdr->context = cpu_to_le32(_req->index<<8 |
-+ IOP_REQUEST_TYPE_SCSI_COMMAND<<5);
-+ reqhdr->context_hi32 = 0;
-+ size = le32_to_cpu(reqhdr->size);
++ list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
++ if (ndlp->nlp_type & NLP_FABRIC)
++ lpfc_disc_state_machine(vport, ndlp, NULL,
++ NLP_EVT_DEVICE_RECOVERY);
++ lpfc_disc_state_machine(vport, ndlp, NULL,
++ NLP_EVT_DEVICE_RM);
++ }
+
-+ if (size <= 256)
-+ size_bit = 0;
-+ else if (size <= 256*2)
-+ size_bit = 1;
-+ else if (size <= 256*3)
-+ size_bit = 2;
-+ else
-+ size_bit = 3;
++ /* At this point, ALL ndlp's should be gone
++ * because of the previous NLP_EVT_DEVICE_RM.
++ * Lets wait for this to happen, if needed.
++ */
++ while (!list_empty(&vport->fc_nodes)) {
+
-+ mv_inbound_write((_req->req_shifted_phy << 5) |
-+ MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bit, hba);
-+}
++ if (i++ > 3000) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
++ "0233 Nodelist not empty\n");
++ break;
++ }
+
- static int hptiop_queuecommand(struct scsi_cmnd *scp,
- void (*done)(struct scsi_cmnd *))
- {
-@@ -518,9 +785,6 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
- req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
- req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_SCSI_COMMAND);
- req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
-- req->header.context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
-- (u32)_req->index);
-- req->header.context_hi32 = 0;
- req->dataxfer_length = cpu_to_le32(scsi_bufflen(scp));
- req->channel = scp->device->channel;
- req->target = scp->device->id;
-@@ -531,21 +795,7 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
- + sg_count * sizeof(struct hpt_iopsg));
-
- memcpy(req->cdb, scp->cmnd, sizeof(req->cdb));
--
-- if (hba->iopintf_v2) {
-- u32 size_bits;
-- if (req->header.size < 256)
-- size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT;
-- else if (req->header.size < 512)
-- size_bits = IOPMU_QUEUE_ADDR_HOST_BIT;
-- else
-- size_bits = IOPMU_QUEUE_REQUEST_SIZE_BIT |
-- IOPMU_QUEUE_ADDR_HOST_BIT;
-- writel(_req->req_shifted_phy | size_bits, &hba->iop->inbound_queue);
-- } else
-- writel(_req->req_shifted_phy | IOPMU_QUEUE_ADDR_HOST_BIT,
-- &hba->iop->inbound_queue);
--
-+ hba->ops->post_req(hba, _req);
- return 0;
++ /* Wait for any activity on ndlps to settle */
++ msleep(10);
++ }
+ return;
+ }
- cmd_done:
-@@ -563,9 +813,7 @@ static int hptiop_reset_hba(struct hptiop_hba *hba)
- {
- if (atomic_xchg(&hba->resetting, 1) == 0) {
- atomic_inc(&hba->reset_count);
-- writel(IOPMU_INBOUND_MSG0_RESET,
-- &hba->iop->inbound_msgaddr0);
-- hptiop_pci_posting_flush(hba->iop);
-+ hba->ops->post_msg(hba, IOPMU_INBOUND_MSG0_RESET);
- }
+@@ -1307,14 +1462,14 @@ lpfc_establish_link_tmo(unsigned long ptr)
+ phba->pport->fc_flag, phba->pport->port_state);
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ struct Scsi_Host *shost;
+ shost = lpfc_shost_from_vport(vports[i]);
+ spin_lock_irqsave(shost->host_lock, iflag);
+ vports[i]->fc_flag &= ~FC_ESTABLISH_LINK;
+ spin_unlock_irqrestore(shost->host_lock, iflag);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ }
- wait_event_timeout(hba->reset_wq,
-@@ -601,8 +849,10 @@ static int hptiop_reset(struct scsi_cmnd *scp)
- static int hptiop_adjust_disk_queue_depth(struct scsi_device *sdev,
- int queue_depth)
- {
-- if(queue_depth > 256)
-- queue_depth = 256;
-+ struct hptiop_hba *hba = (struct hptiop_hba *)sdev->host->hostdata;
-+
-+ if (queue_depth > hba->max_requests)
-+ queue_depth = hba->max_requests;
- scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, queue_depth);
- return queue_depth;
+ void
+@@ -1339,6 +1494,16 @@ lpfc_stop_phba_timers(struct lpfc_hba *phba)
+ return;
}
-@@ -663,6 +913,26 @@ static struct scsi_host_template driver_template = {
- .change_queue_depth = hptiop_adjust_disk_queue_depth,
- };
-+static int hptiop_internal_memalloc_mv(struct hptiop_hba *hba)
++static void
++lpfc_block_mgmt_io(struct lpfc_hba * phba)
+{
-+ hba->u.mv.internal_req = dma_alloc_coherent(&hba->pcidev->dev,
-+ 0x800, &hba->u.mv.internal_req_phy, GFP_KERNEL);
-+ if (hba->u.mv.internal_req)
-+ return 0;
-+ else
-+ return -1;
-+}
++ unsigned long iflag;
+
-+static int hptiop_internal_memfree_mv(struct hptiop_hba *hba)
-+{
-+ if (hba->u.mv.internal_req) {
-+ dma_free_coherent(&hba->pcidev->dev, 0x800,
-+ hba->u.mv.internal_req, hba->u.mv.internal_req_phy);
-+ return 0;
-+ } else
-+ return -1;
++ spin_lock_irqsave(&phba->hbalock, iflag);
++ phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
++ spin_unlock_irqrestore(&phba->hbalock, iflag);
+}
+
- static int __devinit hptiop_probe(struct pci_dev *pcidev,
- const struct pci_device_id *id)
+ int
+ lpfc_online(struct lpfc_hba *phba)
{
-@@ -708,6 +978,7 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+@@ -1369,7 +1534,7 @@ lpfc_online(struct lpfc_hba *phba)
- hba = (struct hptiop_hba *)host->hostdata;
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ struct Scsi_Host *shost;
+ shost = lpfc_shost_from_vport(vports[i]);
+ spin_lock_irq(shost->host_lock);
+@@ -1378,23 +1543,13 @@ lpfc_online(struct lpfc_hba *phba)
+ vports[i]->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+ spin_unlock_irq(shost->host_lock);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
-+ hba->ops = (struct hptiop_adapter_ops *)id->driver_data;
- hba->pcidev = pcidev;
- hba->host = host;
- hba->initialized = 0;
-@@ -725,16 +996,24 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
- host->n_io_port = 0;
- host->irq = pcidev->irq;
+ lpfc_unblock_mgmt_io(phba);
+ return 0;
+ }
-- if (hptiop_map_pci_bar(hba))
-+ if (hba->ops->map_pci_bar(hba))
- goto free_scsi_host;
+ void
+-lpfc_block_mgmt_io(struct lpfc_hba * phba)
+-{
+- unsigned long iflag;
+-
+- spin_lock_irqsave(&phba->hbalock, iflag);
+- phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
+- spin_unlock_irqrestore(&phba->hbalock, iflag);
+-}
+-
+-void
+ lpfc_unblock_mgmt_io(struct lpfc_hba * phba)
+ {
+ unsigned long iflag;
+@@ -1409,6 +1564,8 @@ lpfc_offline_prep(struct lpfc_hba * phba)
+ {
+ struct lpfc_vport *vport = phba->pport;
+ struct lpfc_nodelist *ndlp, *next_ndlp;
++ struct lpfc_vport **vports;
++ int i;
-- if (iop_wait_ready(hba->iop, 20000)) {
-+ if (hba->ops->iop_wait_ready(hba, 20000)) {
- printk(KERN_ERR "scsi%d: firmware not ready\n",
- hba->host->host_no);
- goto unmap_pci_bar;
- }
+ if (vport->fc_flag & FC_OFFLINE_MODE)
+ return;
+@@ -1417,10 +1574,34 @@ lpfc_offline_prep(struct lpfc_hba * phba)
-- if (iop_get_config(hba, &iop_config)) {
-+ if (hba->ops->internal_memalloc) {
-+ if (hba->ops->internal_memalloc(hba)) {
-+ printk(KERN_ERR "scsi%d: internal_memalloc failed\n",
-+ hba->host->host_no);
-+ goto unmap_pci_bar;
+ lpfc_linkdown(phba);
+
+- /* Issue an unreg_login to all nodes */
+- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
+- if (ndlp->nlp_state != NLP_STE_UNUSED_NODE)
+- lpfc_unreg_rpi(vport, ndlp);
++ /* Issue an unreg_login to all nodes on all vports */
++ vports = lpfc_create_vport_work_array(phba);
++ if (vports != NULL) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
++ struct Scsi_Host *shost;
++
++ if (vports[i]->load_flag & FC_UNLOADING)
++ continue;
++ shost = lpfc_shost_from_vport(vports[i]);
++ list_for_each_entry_safe(ndlp, next_ndlp,
++ &vports[i]->fc_nodes,
++ nlp_listp) {
++ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
++ continue;
++ if (ndlp->nlp_type & NLP_FABRIC) {
++ lpfc_disc_state_machine(vports[i], ndlp,
++ NULL, NLP_EVT_DEVICE_RECOVERY);
++ lpfc_disc_state_machine(vports[i], ndlp,
++ NULL, NLP_EVT_DEVICE_RM);
++ }
++ spin_lock_irq(shost->host_lock);
++ ndlp->nlp_flag &= ~NLP_NPR_ADISC;
++ spin_unlock_irq(shost->host_lock);
++ lpfc_unreg_rpi(vports[i], ndlp);
++ }
+ }
+ }
-+
-+ if (hba->ops->get_config(hba, &iop_config)) {
- printk(KERN_ERR "scsi%d: get config failed\n",
- hba->host->host_no);
- goto unmap_pci_bar;
-@@ -770,7 +1049,7 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
- set_config.vbus_id = cpu_to_le16(host->host_no);
- set_config.max_host_request_size = cpu_to_le16(req_size);
++ lpfc_destroy_vport_work_array(phba, vports);
-- if (iop_set_config(hba, &set_config)) {
-+ if (hba->ops->set_config(hba, &set_config)) {
- printk(KERN_ERR "scsi%d: set config failed\n",
- hba->host->host_no);
- goto unmap_pci_bar;
-@@ -839,21 +1118,24 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
+ lpfc_sli_flush_mbox_queue(phba);
+ }
+@@ -1439,9 +1620,9 @@ lpfc_offline(struct lpfc_hba *phba)
+ lpfc_stop_phba_timers(phba);
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++)
+ lpfc_stop_vport_timers(vports[i]);
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+ "0460 Bring Adapter offline\n");
+ /* Bring down the SLI Layer and cleanup. The HBA is offline
+@@ -1452,15 +1633,14 @@ lpfc_offline(struct lpfc_hba *phba)
+ spin_unlock_irq(&phba->hbalock);
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ shost = lpfc_shost_from_vport(vports[i]);
+- lpfc_cleanup(vports[i]);
+ spin_lock_irq(shost->host_lock);
+ vports[i]->work_port_events = 0;
+ vports[i]->fc_flag |= FC_OFFLINE_MODE;
+ spin_unlock_irq(shost->host_lock);
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ }
- free_request_mem:
- dma_free_coherent(&hba->pcidev->dev,
-- hba->req_size*hba->max_requests + 0x20,
-+ hba->req_size * hba->max_requests + 0x20,
- hba->dma_coherent, hba->dma_coherent_handle);
+ /******************************************************************************
+@@ -1674,6 +1854,8 @@ void lpfc_host_attrib_init(struct Scsi_Host *shost)
+ fc_host_supported_speeds(shost) = 0;
+ if (phba->lmt & LMT_10Gb)
+ fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT;
++ if (phba->lmt & LMT_8Gb)
++ fc_host_supported_speeds(shost) |= FC_PORTSPEED_8GBIT;
+ if (phba->lmt & LMT_4Gb)
+ fc_host_supported_speeds(shost) |= FC_PORTSPEED_4GBIT;
+ if (phba->lmt & LMT_2Gb)
+@@ -1707,13 +1889,14 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ struct Scsi_Host *shost = NULL;
+ void *ptr;
+ unsigned long bar0map_len, bar2map_len;
+- int error = -ENODEV;
++ int error = -ENODEV, retval;
+ int i, hbq_count;
+ uint16_t iotag;
++ int bars = pci_select_bars(pdev, IORESOURCE_MEM);
- free_request_irq:
- free_irq(hba->pcidev->irq, hba);
+- if (pci_enable_device(pdev))
++ if (pci_enable_device_bars(pdev, bars))
+ goto out;
+- if (pci_request_regions(pdev, LPFC_DRIVER_NAME))
++ if (pci_request_selected_regions(pdev, bars, LPFC_DRIVER_NAME))
+ goto out_disable_device;
- unmap_pci_bar:
-- iounmap(hba->iop);
-+ if (hba->ops->internal_memfree)
-+ hba->ops->internal_memfree(hba);
+ phba = kzalloc(sizeof (struct lpfc_hba), GFP_KERNEL);
+@@ -1823,9 +2006,11 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ lpfc_sli_setup(phba);
+ lpfc_sli_queue_setup(phba);
--free_pci_regions:
-- pci_release_regions(pcidev) ;
-+ hba->ops->unmap_pci_bar(hba);
+- error = lpfc_mem_alloc(phba);
+- if (error)
++ retval = lpfc_mem_alloc(phba);
++ if (retval) {
++ error = retval;
+ goto out_free_hbqslimp;
++ }
- free_scsi_host:
- scsi_host_put(host);
+ /* Initialize and populate the iocb list per host. */
+ INIT_LIST_HEAD(&phba->lpfc_iocb_list);
+@@ -1880,6 +2065,9 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ /* Initialize list of fabric iocbs */
+ INIT_LIST_HEAD(&phba->fabric_iocb_list);
-+free_pci_regions:
-+ pci_release_regions(pcidev);
++ /* Initialize list to save ELS buffers */
++ INIT_LIST_HEAD(&phba->elsbuf);
+
- disable_pci_device:
- pci_disable_device(pcidev);
+ vport = lpfc_create_port(phba, phba->brd_no, &phba->pcidev->dev);
+ if (!vport)
+ goto out_kthread_stop;
+@@ -1891,8 +2079,8 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ pci_set_drvdata(pdev, shost);
-@@ -865,8 +1147,6 @@ static void hptiop_shutdown(struct pci_dev *pcidev)
- {
- struct Scsi_Host *host = pci_get_drvdata(pcidev);
- struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata;
-- struct hpt_iopmu __iomem *iop = hba->iop;
-- u32 int_mask;
+ if (phba->cfg_use_msi) {
+- error = pci_enable_msi(phba->pcidev);
+- if (!error)
++ retval = pci_enable_msi(phba->pcidev);
++ if (!retval)
+ phba->using_msi = 1;
+ else
+ lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+@@ -1900,11 +2088,12 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ "with IRQ\n");
+ }
- dprintk("hptiop_shutdown(%p)\n", hba);
+- error = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
++ retval = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
+ LPFC_DRIVER_NAME, phba);
+- if (error) {
++ if (retval) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "0451 Enable interrupt handler failed\n");
++ error = retval;
+ goto out_disable_msi;
+ }
-@@ -876,11 +1156,24 @@ static void hptiop_shutdown(struct pci_dev *pcidev)
- hba->host->host_no);
+@@ -1914,11 +2103,15 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
+ phba->HSregaddr = phba->ctrl_regs_memmap_p + HS_REG_OFFSET;
+ phba->HCregaddr = phba->ctrl_regs_memmap_p + HC_REG_OFFSET;
- /* disable all outbound interrupts */
-- int_mask = readl(&iop->outbound_intmask);
-+ hba->ops->disable_intr(hba);
-+}
+- if (lpfc_alloc_sysfs_attr(vport))
++ if (lpfc_alloc_sysfs_attr(vport)) {
++ error = -ENOMEM;
+ goto out_free_irq;
++ }
+
+- if (lpfc_sli_hba_setup(phba))
++ if (lpfc_sli_hba_setup(phba)) {
++ error = -ENODEV;
+ goto out_remove_device;
++ }
+
+ /*
+ * hba setup may have changed the hba_queue_depth so we need to adjust
+@@ -1975,7 +2168,7 @@ out_idr_remove:
+ out_free_phba:
+ kfree(phba);
+ out_release_regions:
+- pci_release_regions(pdev);
++ pci_release_selected_regions(pdev, bars);
+ out_disable_device:
+ pci_disable_device(pdev);
+ out:
+@@ -1991,6 +2184,8 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
+ struct Scsi_Host *shost = pci_get_drvdata(pdev);
+ struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ struct lpfc_hba *phba = vport->phba;
++ int bars = pci_select_bars(pdev, IORESOURCE_MEM);
+
-+static void hptiop_disable_intr_itl(struct hptiop_hba *hba)
-+{
-+ u32 int_mask;
+ spin_lock_irq(&phba->hbalock);
+ vport->load_flag |= FC_UNLOADING;
+ spin_unlock_irq(&phba->hbalock);
+@@ -1998,8 +2193,12 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
+ kfree(vport->vname);
+ lpfc_free_sysfs_attr(vport);
+
++ kthread_stop(phba->worker_thread);
+
-+ int_mask = readl(&hba->u.itl.iop->outbound_intmask);
- writel(int_mask |
- IOPMU_OUTBOUND_INT_MSG0 | IOPMU_OUTBOUND_INT_POSTQUEUE,
-- &iop->outbound_intmask);
-- hptiop_pci_posting_flush(iop);
-+ &hba->u.itl.iop->outbound_intmask);
-+ readl(&hba->u.itl.iop->outbound_intmask);
-+}
+ fc_remove_host(shost);
+ scsi_remove_host(shost);
++ lpfc_cleanup(vport);
+
-+static void hptiop_disable_intr_mv(struct hptiop_hba *hba)
-+{
-+ writel(0, &hba->u.mv.regs->outbound_intmask);
-+ readl(&hba->u.mv.regs->outbound_intmask);
+ /*
+ * Bring down the SLI Layer. This step disable all interrupts,
+ * clears the rings, discards all mailbox commands, and resets
+@@ -2014,9 +2213,6 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
+ spin_unlock_irq(&phba->hbalock);
+
+ lpfc_debugfs_terminate(vport);
+- lpfc_cleanup(vport);
+-
+- kthread_stop(phba->worker_thread);
+
+ /* Release the irq reservation */
+ free_irq(phba->pcidev->irq, phba);
+@@ -2048,7 +2244,7 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
+
+ kfree(phba);
+
+- pci_release_regions(pdev);
++ pci_release_selected_regions(pdev, bars);
+ pci_disable_device(pdev);
}
- static void hptiop_remove(struct pci_dev *pcidev)
-@@ -901,7 +1194,10 @@ static void hptiop_remove(struct pci_dev *pcidev)
- hba->dma_coherent,
- hba->dma_coherent_handle);
+@@ -2239,12 +2435,22 @@ lpfc_init(void)
+ printk(LPFC_MODULE_DESC "\n");
+ printk(LPFC_COPYRIGHT "\n");
-- iounmap(hba->iop);
-+ if (hba->ops->internal_memfree)
-+ hba->ops->internal_memfree(hba);
-+
-+ hba->ops->unmap_pci_bar(hba);
++ if (lpfc_enable_npiv) {
++ lpfc_transport_functions.vport_create = lpfc_vport_create;
++ lpfc_transport_functions.vport_delete = lpfc_vport_delete;
++ }
+ lpfc_transport_template =
+ fc_attach_transport(&lpfc_transport_functions);
+- lpfc_vport_transport_template =
+- fc_attach_transport(&lpfc_vport_transport_functions);
+- if (!lpfc_transport_template || !lpfc_vport_transport_template)
++ if (lpfc_transport_template == NULL)
+ return -ENOMEM;
++ if (lpfc_enable_npiv) {
++ lpfc_vport_transport_template =
++ fc_attach_transport(&lpfc_vport_transport_functions);
++ if (lpfc_vport_transport_template == NULL) {
++ fc_release_transport(lpfc_transport_template);
++ return -ENOMEM;
++ }
++ }
+ error = pci_register_driver(&lpfc_driver);
+ if (error) {
+ fc_release_transport(lpfc_transport_template);
+@@ -2259,7 +2465,8 @@ lpfc_exit(void)
+ {
+ pci_unregister_driver(&lpfc_driver);
+ fc_release_transport(lpfc_transport_template);
+- fc_release_transport(lpfc_vport_transport_template);
++ if (lpfc_enable_npiv)
++ fc_release_transport(lpfc_vport_transport_template);
+ }
- pci_release_regions(hba->pcidev);
- pci_set_drvdata(hba->pcidev, NULL);
-@@ -910,11 +1206,50 @@ static void hptiop_remove(struct pci_dev *pcidev)
- scsi_host_put(host);
+ module_init(lpfc_init);
+diff --git a/drivers/scsi/lpfc/lpfc_logmsg.h b/drivers/scsi/lpfc/lpfc_logmsg.h
+index 626e4d8..c5841d7 100644
+--- a/drivers/scsi/lpfc/lpfc_logmsg.h
++++ b/drivers/scsi/lpfc/lpfc_logmsg.h
+@@ -26,6 +26,7 @@
+ #define LOG_IP 0x20 /* IP traffic history */
+ #define LOG_FCP 0x40 /* FCP traffic history */
+ #define LOG_NODE 0x80 /* Node table events */
++#define LOG_TEMP 0x100 /* Temperature sensor events */
+ #define LOG_MISC 0x400 /* Miscellaneous events */
+ #define LOG_SLI 0x800 /* SLI events */
+ #define LOG_FCP_ERROR 0x1000 /* log errors, not underruns */
+diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
+index a592733..dfc63f6 100644
+--- a/drivers/scsi/lpfc/lpfc_mbox.c
++++ b/drivers/scsi/lpfc/lpfc_mbox.c
+@@ -82,6 +82,24 @@ lpfc_read_nv(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
}
-+static struct hptiop_adapter_ops hptiop_itl_ops = {
-+ .iop_wait_ready = iop_wait_ready_itl,
-+ .internal_memalloc = 0,
-+ .internal_memfree = 0,
-+ .map_pci_bar = hptiop_map_pci_bar_itl,
-+ .unmap_pci_bar = hptiop_unmap_pci_bar_itl,
-+ .enable_intr = hptiop_enable_intr_itl,
-+ .disable_intr = hptiop_disable_intr_itl,
-+ .get_config = iop_get_config_itl,
-+ .set_config = iop_set_config_itl,
-+ .iop_intr = iop_intr_itl,
-+ .post_msg = hptiop_post_msg_itl,
-+ .post_req = hptiop_post_req_itl,
-+};
+ /**********************************************/
++/* lpfc_config_async Issue a */
++/* MBX_ASYNC_EVT_ENABLE mailbox command */
++/**********************************************/
++void
++lpfc_config_async(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
++ uint32_t ring)
++{
++ MAILBOX_t *mb;
+
-+static struct hptiop_adapter_ops hptiop_mv_ops = {
-+ .iop_wait_ready = iop_wait_ready_mv,
-+ .internal_memalloc = hptiop_internal_memalloc_mv,
-+ .internal_memfree = hptiop_internal_memfree_mv,
-+ .map_pci_bar = hptiop_map_pci_bar_mv,
-+ .unmap_pci_bar = hptiop_unmap_pci_bar_mv,
-+ .enable_intr = hptiop_enable_intr_mv,
-+ .disable_intr = hptiop_disable_intr_mv,
-+ .get_config = iop_get_config_mv,
-+ .set_config = iop_set_config_mv,
-+ .iop_intr = iop_intr_mv,
-+ .post_msg = hptiop_post_msg_mv,
-+ .post_req = hptiop_post_req_mv,
-+};
++ mb = &pmb->mb;
++ memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
++ mb->mbxCommand = MBX_ASYNCEVT_ENABLE;
++ mb->un.varCfgAsyncEvent.ring = ring;
++ mb->mbxOwner = OWN_HOST;
++ return;
++}
+
- static struct pci_device_id hptiop_id_table[] = {
-- { PCI_VDEVICE(TTI, 0x3220) },
-- { PCI_VDEVICE(TTI, 0x3320) },
-- { PCI_VDEVICE(TTI, 0x3520) },
-- { PCI_VDEVICE(TTI, 0x4320) },
-+ { PCI_VDEVICE(TTI, 0x3220), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3320), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3520), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x4320), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3510), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3511), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3521), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3522), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3410), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3540), (kernel_ulong_t)&hptiop_itl_ops },
-+ { PCI_VDEVICE(TTI, 0x3120), (kernel_ulong_t)&hptiop_mv_ops },
-+ { PCI_VDEVICE(TTI, 0x3122), (kernel_ulong_t)&hptiop_mv_ops },
-+ { PCI_VDEVICE(TTI, 0x3020), (kernel_ulong_t)&hptiop_mv_ops },
- {},
- };
++/**********************************************/
+ /* lpfc_heart_beat Issue a HEART_BEAT */
+ /* mailbox command */
+ /**********************************************/
+@@ -270,8 +288,10 @@ lpfc_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, int vpi)
-diff --git a/drivers/scsi/hptiop.h b/drivers/scsi/hptiop.h
-index 2a5e46e..a0289f2 100644
---- a/drivers/scsi/hptiop.h
-+++ b/drivers/scsi/hptiop.h
-@@ -1,5 +1,5 @@
- /*
-- * HighPoint RR3xxx controller driver for Linux
-+ * HighPoint RR3xxx/4xxx controller driver for Linux
- * Copyright (C) 2006-2007 HighPoint Technologies, Inc. All Rights Reserved.
- *
- * This program is free software; you can redistribute it and/or modify
-@@ -18,8 +18,7 @@
- #ifndef _HPTIOP_H_
- #define _HPTIOP_H_
+ /* Get a buffer to hold the HBAs Service Parameters */
--struct hpt_iopmu
--{
-+struct hpt_iopmu_itl {
- __le32 resrved0[4];
- __le32 inbound_msgaddr0;
- __le32 inbound_msgaddr1;
-@@ -54,6 +53,40 @@ struct hpt_iopmu
- #define IOPMU_INBOUND_INT_ERROR 8
- #define IOPMU_INBOUND_INT_POSTQUEUE 0x10
+- if (((mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
+- ((mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys))) == 0)) {
++ mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
++ if (mp)
++ mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
++ if (!mp || !mp->virt) {
+ kfree(mp);
+ mb->mbxCommand = MBX_READ_SPARM64;
+ /* READ_SPARAM: no buffers */
+@@ -369,8 +389,10 @@ lpfc_reg_login(struct lpfc_hba *phba, uint16_t vpi, uint32_t did,
+ mb->mbxOwner = OWN_HOST;
-+#define MVIOP_QUEUE_LEN 512
+ /* Get a buffer to hold NPorts Service Parameters */
+- if (((mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL)) == NULL) ||
+- ((mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys))) == 0)) {
++ mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
++ if (mp)
++ mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
++ if (!mp || !mp->virt) {
+ kfree(mp);
+ mb->mbxCommand = MBX_REG_LOGIN64;
+ /* REG_LOGIN: no buffers */
+@@ -874,7 +896,7 @@ lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd)
+ case MBX_DOWN_LOAD: /* 0x1C */
+ case MBX_DEL_LD_ENTRY: /* 0x1D */
+ case MBX_LOAD_AREA: /* 0x81 */
+- case MBX_FLASH_WR_ULA: /* 0x98 */
++ case MBX_WRITE_WWN: /* 0x98 */
+ case MBX_LOAD_EXP_ROM: /* 0x9C */
+ return LPFC_MBOX_TMO_FLASH_CMD;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
+index 43c3b8a..6dc5ab8 100644
+--- a/drivers/scsi/lpfc/lpfc_mem.c
++++ b/drivers/scsi/lpfc/lpfc_mem.c
+@@ -98,6 +98,7 @@ lpfc_mem_alloc(struct lpfc_hba * phba)
+
+ fail_free_hbq_pool:
+ lpfc_sli_hbqbuf_free_all(phba);
++ pci_pool_destroy(phba->lpfc_hbq_pool);
+ fail_free_nlp_mem_pool:
+ mempool_destroy(phba->nlp_mem_pool);
+ phba->nlp_mem_pool = NULL;
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index 880af0c..4a0e340 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -287,6 +287,24 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
+ lp = (uint32_t *) pcmd->virt;
+ sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
++ if (wwn_to_u64(sp->portName.u.wwn) == 0) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ "0140 PLOGI Reject: invalid nname\n");
++ stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
++ stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_PNAME;
++ lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
++ NULL);
++ return 0;
++ }
++ if (wwn_to_u64(sp->nodeName.u.wwn) == 0) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ "0141 PLOGI Reject: invalid pname\n");
++ stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
++ stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_NNAME;
++ lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
++ NULL);
++ return 0;
++ }
+ if ((lpfc_check_sparm(vport, ndlp, sp, CLASS3) == 0)) {
+ /* Reject this request because invalid parameters */
+ stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
+@@ -343,8 +361,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ lpfc_config_link(phba, mbox);
+ mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+ mbox->vport = vport;
+- rc = lpfc_sli_issue_mbox
+- (phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
++ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(mbox, phba->mbox_mem_pool);
+ goto out;
+@@ -407,6 +424,61 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ ndlp, mbox);
+ return 1;
+ }
+
-+struct hpt_iopmu_mv {
-+ __le32 inbound_head;
-+ __le32 inbound_tail;
-+ __le32 outbound_head;
-+ __le32 outbound_tail;
-+ __le32 inbound_msg;
-+ __le32 outbound_msg;
-+ __le32 reserve[10];
-+ __le64 inbound_q[MVIOP_QUEUE_LEN];
-+ __le64 outbound_q[MVIOP_QUEUE_LEN];
-+};
++ /* If the remote NPort logs into us, before we can initiate
++ * discovery to them, cleanup the NPort from discovery accordingly.
++ */
++ if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
++ spin_lock_irq(shost->host_lock);
++ ndlp->nlp_flag &= ~NLP_DELAY_TMO;
++ spin_unlock_irq(shost->host_lock);
++ del_timer_sync(&ndlp->nlp_delayfunc);
++ ndlp->nlp_last_elscmd = 0;
+
-+struct hpt_iopmv_regs {
-+ __le32 reserved[0x20400 / 4];
-+ __le32 inbound_doorbell;
-+ __le32 inbound_intmask;
-+ __le32 outbound_doorbell;
-+ __le32 outbound_intmask;
-+};
++ if (!list_empty(&ndlp->els_retry_evt.evt_listp))
++ list_del_init(&ndlp->els_retry_evt.evt_listp);
+
-+#define MVIOP_MU_QUEUE_ADDR_HOST_MASK (~(0x1full))
-+#define MVIOP_MU_QUEUE_ADDR_HOST_BIT 4
++ if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
++ spin_lock_irq(shost->host_lock);
++ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
++ spin_unlock_irq(shost->host_lock);
+
-+#define MVIOP_MU_QUEUE_ADDR_IOP_HIGH32 0xffffffff
-+#define MVIOP_MU_QUEUE_REQUEST_RESULT_BIT 1
-+#define MVIOP_MU_QUEUE_REQUEST_RETURN_CONTEXT 2
++ if ((ndlp->nlp_flag & NLP_ADISC_SND) &&
++ (vport->num_disc_nodes)) {
++ /* Check to see if there are more
++ * ADISCs to be sent
++ */
++ lpfc_more_adisc(vport);
+
-+#define MVIOP_MU_INBOUND_INT_MSG 1
-+#define MVIOP_MU_INBOUND_INT_POSTQUEUE 2
-+#define MVIOP_MU_OUTBOUND_INT_MSG 1
-+#define MVIOP_MU_OUTBOUND_INT_POSTQUEUE 2
++ if ((vport->num_disc_nodes == 0) &&
++ (vport->fc_npr_cnt))
++ lpfc_els_disc_plogi(vport);
+
- enum hpt_iopmu_message {
- /* host-to-iop messages */
- IOPMU_INBOUND_MSG0_NOP = 0,
-@@ -72,8 +105,7 @@ enum hpt_iopmu_message {
- IOPMU_OUTBOUND_MSG0_REVALIDATE_DEVICE_MAX = 0x3ff,
- };
++ if (vport->num_disc_nodes == 0) {
++ spin_lock_irq(shost->host_lock);
++ vport->fc_flag &= ~FC_NDISC_ACTIVE;
++ spin_unlock_irq(shost->host_lock);
++ lpfc_can_disctmo(vport);
++ lpfc_end_rscn(vport);
++ }
++ }
++ else if (vport->num_disc_nodes) {
++ /* Check to see if there are more
++ * PLOGIs to be sent
++ */
++ lpfc_more_plogi(vport);
++
++ if (vport->num_disc_nodes == 0) {
++ spin_lock_irq(shost->host_lock);
++ vport->fc_flag &= ~FC_NDISC_ACTIVE;
++ spin_unlock_irq(shost->host_lock);
++ lpfc_can_disctmo(vport);
++ lpfc_end_rscn(vport);
++ }
++ }
++ }
++ }
++
+ lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox);
+ return 1;
--struct hpt_iop_request_header
--{
-+struct hpt_iop_request_header {
- __le32 size;
- __le32 type;
- __le32 flags;
-@@ -104,11 +136,10 @@ enum hpt_iop_result_type {
- IOP_RESULT_RESET,
- IOP_RESULT_INVALID_REQUEST,
- IOP_RESULT_BAD_TARGET,
-- IOP_RESULT_MODE_SENSE_CHECK_CONDITION,
-+ IOP_RESULT_CHECK_CONDITION,
- };
+@@ -501,12 +573,9 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ spin_unlock_irq(shost->host_lock);
--struct hpt_iop_request_get_config
--{
-+struct hpt_iop_request_get_config {
- struct hpt_iop_request_header header;
- __le32 interface_version;
- __le32 firmware_version;
-@@ -121,8 +152,7 @@ struct hpt_iop_request_get_config
- __le32 sdram_size;
- };
+ ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
+- ndlp->nlp_prev_state = ndlp->nlp_state;
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+- } else {
+- ndlp->nlp_prev_state = ndlp->nlp_state;
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ }
++ ndlp->nlp_prev_state = ndlp->nlp_state;
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
--struct hpt_iop_request_set_config
--{
-+struct hpt_iop_request_set_config {
- struct hpt_iop_request_header header;
- __le32 iop_id;
- __le16 vbus_id;
-@@ -130,15 +160,13 @@ struct hpt_iop_request_set_config
- __le32 reserve[6];
- };
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+@@ -594,6 +663,25 @@ lpfc_disc_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ return ndlp->nlp_state;
+ }
--struct hpt_iopsg
--{
-+struct hpt_iopsg {
- __le32 size;
- __le32 eot; /* non-zero: end of table */
- __le64 pci_address;
- };
++static uint32_t
++lpfc_cmpl_plogi_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
++ void *arg, uint32_t evt)
++{
++ /* This transition is only legal if we previously
++ * rcv'ed a PLOGI. Since we don't want 2 discovery threads
++ * working on the same NPortID, do nothing for this thread
++ * to stop it.
++ */
++ if (!(ndlp->nlp_flag & NLP_RCV_PLOGI)) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
++ "0253 Illegal State Transition: node x%x "
++ "event x%x, state x%x Data: x%x x%x\n",
++ ndlp->nlp_DID, evt, ndlp->nlp_state, ndlp->nlp_rpi,
++ ndlp->nlp_flag);
++ }
++ return ndlp->nlp_state;
++}
++
+ /* Start of Discovery State Machine routines */
--struct hpt_iop_request_block_command
--{
-+struct hpt_iop_request_block_command {
- struct hpt_iop_request_header header;
- u8 channel;
- u8 target;
-@@ -156,8 +184,7 @@ struct hpt_iop_request_block_command
- #define IOP_BLOCK_COMMAND_FLUSH 4
- #define IOP_BLOCK_COMMAND_SHUTDOWN 5
+ static uint32_t
+@@ -605,11 +693,8 @@ lpfc_rcv_plogi_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ cmdiocb = (struct lpfc_iocbq *) arg;
--struct hpt_iop_request_scsi_command
--{
-+struct hpt_iop_request_scsi_command {
- struct hpt_iop_request_header header;
- u8 channel;
- u8 target;
-@@ -168,8 +195,7 @@ struct hpt_iop_request_scsi_command
- struct hpt_iopsg sg_list[1];
- };
+ if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
+- ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ return ndlp->nlp_state;
+ }
+- lpfc_drop_node(vport, ndlp);
+ return NLP_STE_FREED_NODE;
+ }
--struct hpt_iop_request_ioctl_command
--{
-+struct hpt_iop_request_ioctl_command {
- struct hpt_iop_request_header header;
- __le32 ioctl_code;
- __le32 inbuf_size;
-@@ -182,11 +208,11 @@ struct hpt_iop_request_ioctl_command
- #define HPTIOP_MAX_REQUESTS 256u
+@@ -618,7 +703,6 @@ lpfc_rcv_els_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ void *arg, uint32_t evt)
+ {
+ lpfc_issue_els_logo(vport, ndlp, 0);
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ return ndlp->nlp_state;
+ }
- struct hptiop_request {
-- struct hptiop_request * next;
-- void * req_virt;
-- u32 req_shifted_phy;
-- struct scsi_cmnd * scp;
-- int index;
-+ struct hptiop_request *next;
-+ void *req_virt;
-+ u32 req_shifted_phy;
-+ struct scsi_cmnd *scp;
-+ int index;
- };
+@@ -633,7 +717,6 @@ lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ ndlp->nlp_flag |= NLP_LOGO_ACC;
+ spin_unlock_irq(shost->host_lock);
+ lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
- struct hpt_scsi_pointer {
-@@ -198,9 +224,21 @@ struct hpt_scsi_pointer {
- #define HPT_SCP(scp) ((struct hpt_scsi_pointer *)&(scp)->SCp)
+ return ndlp->nlp_state;
+ }
+@@ -642,7 +725,6 @@ static uint32_t
+ lpfc_cmpl_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ void *arg, uint32_t evt)
+ {
+- lpfc_drop_node(vport, ndlp);
+ return NLP_STE_FREED_NODE;
+ }
- struct hptiop_hba {
-- struct hpt_iopmu __iomem * iop;
-- struct Scsi_Host * host;
-- struct pci_dev * pcidev;
-+ struct hptiop_adapter_ops *ops;
-+ union {
-+ struct {
-+ struct hpt_iopmu_itl __iomem *iop;
-+ } itl;
-+ struct {
-+ struct hpt_iopmv_regs *regs;
-+ struct hpt_iopmu_mv __iomem *mu;
-+ void *internal_req;
-+ dma_addr_t internal_req_phy;
-+ } mv;
-+ } u;
-+
-+ struct Scsi_Host *host;
-+ struct pci_dev *pcidev;
+@@ -650,7 +732,6 @@ static uint32_t
+ lpfc_device_rm_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ void *arg, uint32_t evt)
+ {
+- lpfc_drop_node(vport, ndlp);
+ return NLP_STE_FREED_NODE;
+ }
- /* IOP config info */
- u32 interface_version;
-@@ -213,15 +251,15 @@ struct hptiop_hba {
+@@ -752,6 +833,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
+ uint32_t evt)
+ {
+ struct lpfc_hba *phba = vport->phba;
++ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ struct lpfc_iocbq *cmdiocb, *rspiocb;
+ struct lpfc_dmabuf *pcmd, *prsp, *mp;
+ uint32_t *lp;
+@@ -778,6 +860,12 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
- u32 req_size; /* host-allocated request buffer size */
+ lp = (uint32_t *) prsp->virt;
+ sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
++ if (wwn_to_u64(sp->portName.u.wwn) == 0 ||
++ wwn_to_u64(sp->nodeName.u.wwn) == 0) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
++ "0142 PLOGI RSP: Invalid WWN.\n");
++ goto out;
++ }
+ if (!lpfc_check_sparm(vport, ndlp, sp, CLASS3))
+ goto out;
+ /* PLOGI chkparm OK */
+@@ -828,13 +916,15 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
+ }
+ mbox->context2 = lpfc_nlp_get(ndlp);
+ mbox->vport = vport;
+- if (lpfc_sli_issue_mbox(phba, mbox,
+- (MBX_NOWAIT | MBX_STOP_IOCB))
++ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
+ != MBX_NOT_FINISHED) {
+ lpfc_nlp_set_state(vport, ndlp,
+ NLP_STE_REG_LOGIN_ISSUE);
+ return ndlp->nlp_state;
+ }
++ /* decrement node reference count to the failed mbox
++ * command
++ */
+ lpfc_nlp_put(ndlp);
+ mp = (struct lpfc_dmabuf *) mbox->context1;
+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
+@@ -864,13 +954,27 @@ out:
+ "0261 Cannot Register NameServer login\n");
+ }
-- int iopintf_v2: 1;
-- int initialized: 1;
-- int msg_done: 1;
-+ u32 iopintf_v2: 1;
-+ u32 initialized: 1;
-+ u32 msg_done: 1;
+- /* Free this node since the driver cannot login or has the wrong
+- sparm */
+- lpfc_drop_node(vport, ndlp);
++ spin_lock_irq(shost->host_lock);
++ ndlp->nlp_flag |= NLP_DEFER_RM;
++ spin_unlock_irq(shost->host_lock);
+ return NLP_STE_FREED_NODE;
+ }
- struct hptiop_request * req_list;
- struct hptiop_request reqs[HPTIOP_MAX_REQUESTS];
+ static uint32_t
++lpfc_cmpl_logo_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
++ void *arg, uint32_t evt)
++{
++ return ndlp->nlp_state;
++}
++
++static uint32_t
++lpfc_cmpl_reglogin_plogi_issue(struct lpfc_vport *vport,
++ struct lpfc_nodelist *ndlp, void *arg, uint32_t evt)
++{
++ return ndlp->nlp_state;
++}
++
++static uint32_t
+ lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ void *arg, uint32_t evt)
+ {
+@@ -1137,7 +1241,7 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
+ (ndlp == (struct lpfc_nodelist *) mb->context2)) {
+ mp = (struct lpfc_dmabuf *) (mb->context1);
+ if (mp) {
+- lpfc_mbuf_free(phba, mp->virt, mp->phys);
++ __lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ kfree(mp);
+ }
+ lpfc_nlp_put(ndlp);
+@@ -1197,8 +1301,8 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
+ * retry discovery.
+ */
+ if (mb->mbxStatus == MBXERR_RPI_FULL) {
+- ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
++ ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+ return ndlp->nlp_state;
+ }
- /* used to free allocated dma area */
-- void * dma_coherent;
-+ void *dma_coherent;
- dma_addr_t dma_coherent_handle;
+@@ -1378,7 +1482,7 @@ out:
+ lpfc_issue_els_logo(vport, ndlp, 0);
- atomic_t reset_count;
-@@ -231,19 +269,35 @@ struct hptiop_hba {
- wait_queue_head_t ioctl_wq;
- };
+ ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+ return ndlp->nlp_state;
+ }
--struct hpt_ioctl_k
--{
-+struct hpt_ioctl_k {
- struct hptiop_hba * hba;
- u32 ioctl_code;
- u32 inbuf_size;
- u32 outbuf_size;
-- void * inbuf;
-- void * outbuf;
-- u32 * bytes_returned;
-+ void *inbuf;
-+ void *outbuf;
-+ u32 *bytes_returned;
- void (*done)(struct hpt_ioctl_k *);
- int result; /* HPT_IOCTL_RESULT_ */
- };
+@@ -1753,7 +1857,7 @@ lpfc_cmpl_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
-+struct hptiop_adapter_ops {
-+ int (*iop_wait_ready)(struct hptiop_hba *hba, u32 millisec);
-+ int (*internal_memalloc)(struct hptiop_hba *hba);
-+ int (*internal_memfree)(struct hptiop_hba *hba);
-+ int (*map_pci_bar)(struct hptiop_hba *hba);
-+ void (*unmap_pci_bar)(struct hptiop_hba *hba);
-+ void (*enable_intr)(struct hptiop_hba *hba);
-+ void (*disable_intr)(struct hptiop_hba *hba);
-+ int (*get_config)(struct hptiop_hba *hba,
-+ struct hpt_iop_request_get_config *config);
-+ int (*set_config)(struct hptiop_hba *hba,
-+ struct hpt_iop_request_set_config *config);
-+ int (*iop_intr)(struct hptiop_hba *hba);
-+ void (*post_msg)(struct hptiop_hba *hba, u32 msg);
-+ void (*post_req)(struct hptiop_hba *hba, struct hptiop_request *_req);
-+};
-+
- #define HPT_IOCTL_RESULT_OK 0
- #define HPT_IOCTL_RESULT_FAILED (-1)
+ irsp = &rspiocb->iocb;
+ if (irsp->ulpStatus) {
+- lpfc_drop_node(vport, ndlp);
++ ndlp->nlp_flag |= NLP_DEFER_RM;
+ return NLP_STE_FREED_NODE;
+ }
+ return ndlp->nlp_state;
+@@ -1942,9 +2046,9 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
+ lpfc_rcv_els_plogi_issue, /* RCV_PRLO */
+ lpfc_cmpl_plogi_plogi_issue, /* CMPL_PLOGI */
+ lpfc_disc_illegal, /* CMPL_PRLI */
+- lpfc_disc_illegal, /* CMPL_LOGO */
++ lpfc_cmpl_logo_plogi_issue, /* CMPL_LOGO */
+ lpfc_disc_illegal, /* CMPL_ADISC */
+- lpfc_disc_illegal, /* CMPL_REG_LOGIN */
++ lpfc_cmpl_reglogin_plogi_issue,/* CMPL_REG_LOGIN */
+ lpfc_device_rm_plogi_issue, /* DEVICE_RM */
+ lpfc_device_recov_plogi_issue, /* DEVICE_RECOVERY */
-diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
-index 5f2396c..3081901 100644
---- a/drivers/scsi/ibmvscsi/ibmvscsi.c
-+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
-@@ -629,6 +629,16 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
- list_del(&evt_struct->list);
- del_timer(&evt_struct->timer);
+@@ -1968,7 +2072,7 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
+ lpfc_rcv_padisc_reglogin_issue, /* RCV_ADISC */
+ lpfc_rcv_padisc_reglogin_issue, /* RCV_PDISC */
+ lpfc_rcv_prlo_reglogin_issue, /* RCV_PRLO */
+- lpfc_disc_illegal, /* CMPL_PLOGI */
++ lpfc_cmpl_plogi_illegal, /* CMPL_PLOGI */
+ lpfc_disc_illegal, /* CMPL_PRLI */
+ lpfc_disc_illegal, /* CMPL_LOGO */
+ lpfc_disc_illegal, /* CMPL_ADISC */
+@@ -1982,7 +2086,7 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
+ lpfc_rcv_padisc_prli_issue, /* RCV_ADISC */
+ lpfc_rcv_padisc_prli_issue, /* RCV_PDISC */
+ lpfc_rcv_prlo_prli_issue, /* RCV_PRLO */
+- lpfc_disc_illegal, /* CMPL_PLOGI */
++ lpfc_cmpl_plogi_illegal, /* CMPL_PLOGI */
+ lpfc_cmpl_prli_prli_issue, /* CMPL_PRLI */
+ lpfc_disc_illegal, /* CMPL_LOGO */
+ lpfc_disc_illegal, /* CMPL_ADISC */
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 4e46045..6483c62 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -130,7 +130,7 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
-+ /* If send_crq returns H_CLOSED, return SCSI_MLQUEUE_HOST_BUSY.
-+ * Firmware will send a CRQ with a transport event (0xFF) to
-+ * tell this client what has happened to the transport. This
-+ * will be handled in ibmvscsi_handle_crq()
-+ */
-+ if (rc == H_CLOSED) {
-+ dev_warn(hostdata->dev, "send warning. "
-+ "Receive queue closed, will retry.\n");
-+ goto send_busy;
-+ }
- dev_err(hostdata->dev, "send error %d\n", rc);
- atomic_inc(&hostdata->request_limit);
- goto send_error;
-@@ -976,58 +986,74 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
- int rsp_rc;
- unsigned long flags;
- u16 lun = lun_from_dev(cmd->device);
-+ unsigned long wait_switch = 0;
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ shost = lpfc_shost_from_vport(vports[i]);
+ shost_for_each_device(sdev, shost) {
+ new_queue_depth =
+@@ -151,7 +151,7 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
+ new_queue_depth);
+ }
+ }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ atomic_set(&phba->num_rsrc_err, 0);
+ atomic_set(&phba->num_cmd_success, 0);
+ }
+@@ -166,7 +166,7 @@ lpfc_ramp_up_queue_handler(struct lpfc_hba *phba)
- /* First, find this command in our sent list so we can figure
- * out the correct tag
- */
- spin_lock_irqsave(hostdata->host->host_lock, flags);
-- found_evt = NULL;
-- list_for_each_entry(tmp_evt, &hostdata->sent, list) {
-- if (tmp_evt->cmnd == cmd) {
-- found_evt = tmp_evt;
-- break;
-+ wait_switch = jiffies + (init_timeout * HZ);
-+ do {
-+ found_evt = NULL;
-+ list_for_each_entry(tmp_evt, &hostdata->sent, list) {
-+ if (tmp_evt->cmnd == cmd) {
-+ found_evt = tmp_evt;
-+ break;
-+ }
+ vports = lpfc_create_vport_work_array(phba);
+ if (vports != NULL)
+- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
++ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
+ shost = lpfc_shost_from_vport(vports[i]);
+ shost_for_each_device(sdev, shost) {
+ if (sdev->ordered_tags)
+@@ -179,7 +179,7 @@ lpfc_ramp_up_queue_handler(struct lpfc_hba *phba)
+ sdev->queue_depth+1);
+ }
}
-- }
+- lpfc_destroy_vport_work_array(vports);
++ lpfc_destroy_vport_work_array(phba, vports);
+ atomic_set(&phba->num_rsrc_err, 0);
+ atomic_set(&phba->num_cmd_success, 0);
+ }
+@@ -380,7 +380,7 @@ lpfc_scsi_prep_dma_buf(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
+ (num_bde * sizeof (struct ulp_bde64));
+ iocb_cmd->ulpBdeCount = 1;
+ iocb_cmd->ulpLe = 1;
+- fcp_cmnd->fcpDl = be32_to_cpu(scsi_bufflen(scsi_cmnd));
++ fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd));
+ return 0;
+ }
-- if (!found_evt) {
-- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-- return SUCCESS;
-- }
-+ if (!found_evt) {
-+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+ return SUCCESS;
-+ }
+@@ -542,6 +542,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ int result;
+ struct scsi_device *sdev, *tmp_sdev;
+ int depth = 0;
++ unsigned long flags;
-- evt = get_event_struct(&hostdata->pool);
-- if (evt == NULL) {
-- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-- sdev_printk(KERN_ERR, cmd->device, "failed to allocate abort event\n");
-- return FAILED;
-- }
-+ evt = get_event_struct(&hostdata->pool);
-+ if (evt == NULL) {
-+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+ sdev_printk(KERN_ERR, cmd->device,
-+ "failed to allocate abort event\n");
-+ return FAILED;
-+ }
-
-- init_event_struct(evt,
-- sync_completion,
-- VIOSRP_SRP_FORMAT,
-- init_timeout);
-+ init_event_struct(evt,
-+ sync_completion,
-+ VIOSRP_SRP_FORMAT,
-+ init_timeout);
+ lpfc_cmd->result = pIocbOut->iocb.un.ulpWord[4];
+ lpfc_cmd->status = pIocbOut->iocb.ulpStatus;
+@@ -608,6 +609,15 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ cmd->scsi_done(cmd);
-- tsk_mgmt = &evt->iu.srp.tsk_mgmt;
-+ tsk_mgmt = &evt->iu.srp.tsk_mgmt;
-
-- /* Set up an abort SRP command */
-- memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
-- tsk_mgmt->opcode = SRP_TSK_MGMT;
-- tsk_mgmt->lun = ((u64) lun) << 48;
-- tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
-- tsk_mgmt->task_tag = (u64) found_evt;
--
-- sdev_printk(KERN_INFO, cmd->device, "aborting command. lun 0x%lx, tag 0x%lx\n",
-- tsk_mgmt->lun, tsk_mgmt->task_tag);
--
-- evt->sync_srp = &srp_rsp;
-- init_completion(&evt->comp);
-- rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
-+ /* Set up an abort SRP command */
-+ memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
-+ tsk_mgmt->opcode = SRP_TSK_MGMT;
-+ tsk_mgmt->lun = ((u64) lun) << 48;
-+ tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
-+ tsk_mgmt->task_tag = (u64) found_evt;
-+
-+ evt->sync_srp = &srp_rsp;
-+
-+ init_completion(&evt->comp);
-+ rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
-+
-+ if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
-+ break;
-+
-+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+ msleep(10);
-+ spin_lock_irqsave(hostdata->host->host_lock, flags);
-+ } while (time_before(jiffies, wait_switch));
-+
- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+
- if (rsp_rc != 0) {
- sdev_printk(KERN_ERR, cmd->device,
- "failed to send abort() event. rc=%d\n", rsp_rc);
- return FAILED;
+ if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) {
++ /*
++ * If there is a thread waiting for command completion
++ * wake up the thread.
++ */
++ spin_lock_irqsave(sdev->host->host_lock, flags);
++ lpfc_cmd->pCmd = NULL;
++ if (lpfc_cmd->waitq)
++ wake_up(lpfc_cmd->waitq);
++ spin_unlock_irqrestore(sdev->host->host_lock, flags);
+ lpfc_release_scsi_buf(phba, lpfc_cmd);
+ return;
+ }
+@@ -669,6 +679,16 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ }
}
-+ sdev_printk(KERN_INFO, cmd->device,
-+ "aborting command. lun 0x%lx, tag 0x%lx\n",
-+ (((u64) lun) << 48), (u64) found_evt);
++ /*
++ * If there is a thread waiting for command completion
++ * wake up the thread.
++ */
++ spin_lock_irqsave(sdev->host->host_lock, flags);
++ lpfc_cmd->pCmd = NULL;
++ if (lpfc_cmd->waitq)
++ wake_up(lpfc_cmd->waitq);
++ spin_unlock_irqrestore(sdev->host->host_lock, flags);
+
- wait_for_completion(&evt->comp);
-
- /* make sure we got a good response */
-@@ -1099,41 +1125,56 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
- int rsp_rc;
- unsigned long flags;
- u16 lun = lun_from_dev(cmd->device);
-+ unsigned long wait_switch = 0;
+ lpfc_release_scsi_buf(phba, lpfc_cmd);
+ }
- spin_lock_irqsave(hostdata->host->host_lock, flags);
-- evt = get_event_struct(&hostdata->pool);
-- if (evt == NULL) {
-- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-- sdev_printk(KERN_ERR, cmd->device, "failed to allocate reset event\n");
-- return FAILED;
-- }
-+ wait_switch = jiffies + (init_timeout * HZ);
-+ do {
-+ evt = get_event_struct(&hostdata->pool);
-+ if (evt == NULL) {
-+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+ sdev_printk(KERN_ERR, cmd->device,
-+ "failed to allocate reset event\n");
-+ return FAILED;
-+ }
-
-- init_event_struct(evt,
-- sync_completion,
-- VIOSRP_SRP_FORMAT,
-- init_timeout);
-+ init_event_struct(evt,
-+ sync_completion,
-+ VIOSRP_SRP_FORMAT,
-+ init_timeout);
+@@ -743,6 +763,8 @@ lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
+ piocbq->iocb.ulpContext = pnode->nlp_rpi;
+ if (pnode->nlp_fcp_info & NLP_FCP_2_DEVICE)
+ piocbq->iocb.ulpFCP2Rcvy = 1;
++ else
++ piocbq->iocb.ulpFCP2Rcvy = 0;
-- tsk_mgmt = &evt->iu.srp.tsk_mgmt;
-+ tsk_mgmt = &evt->iu.srp.tsk_mgmt;
+ piocbq->iocb.ulpClass = (pnode->nlp_fcp_info & 0x0f);
+ piocbq->context1 = lpfc_cmd;
+@@ -1018,8 +1040,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ struct lpfc_iocbq *abtsiocb;
+ struct lpfc_scsi_buf *lpfc_cmd;
+ IOCB_t *cmd, *icmd;
+- unsigned int loop_count = 0;
+ int ret = SUCCESS;
++ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waitq);
-- /* Set up a lun reset SRP command */
-- memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
-- tsk_mgmt->opcode = SRP_TSK_MGMT;
-- tsk_mgmt->lun = ((u64) lun) << 48;
-- tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
-+ /* Set up a lun reset SRP command */
-+ memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
-+ tsk_mgmt->opcode = SRP_TSK_MGMT;
-+ tsk_mgmt->lun = ((u64) lun) << 48;
-+ tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
+ lpfc_block_error_handler(cmnd);
+ lpfc_cmd = (struct lpfc_scsi_buf *)cmnd->host_scribble;
+@@ -1074,17 +1096,15 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ if (phba->cfg_poll & DISABLE_FCP_RING_INT)
+ lpfc_sli_poll_fcp_ring (phba);
-- sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
-- tsk_mgmt->lun);
-+ evt->sync_srp = &srp_rsp;
-+
-+ init_completion(&evt->comp);
-+ rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
-+
-+ if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
-+ break;
-+
-+ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+ msleep(10);
-+ spin_lock_irqsave(hostdata->host->host_lock, flags);
-+ } while (time_before(jiffies, wait_switch));
++ lpfc_cmd->waitq = &waitq;
+ /* Wait for abort to complete */
+- while (lpfc_cmd->pCmd == cmnd)
+- {
+- if (phba->cfg_poll & DISABLE_FCP_RING_INT)
+- lpfc_sli_poll_fcp_ring (phba);
++ wait_event_timeout(waitq,
++ (lpfc_cmd->pCmd != cmnd),
++ (2*vport->cfg_devloss_tmo*HZ));
-- evt->sync_srp = &srp_rsp;
-- init_completion(&evt->comp);
-- rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
- spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+
- if (rsp_rc != 0) {
- sdev_printk(KERN_ERR, cmd->device,
- "failed to send reset event. rc=%d\n", rsp_rc);
- return FAILED;
- }
+- schedule_timeout_uninterruptible(LPFC_ABORT_WAIT * HZ);
+- if (++loop_count
+- > (2 * vport->cfg_devloss_tmo)/LPFC_ABORT_WAIT)
+- break;
+- }
++ spin_lock_irq(shost->host_lock);
++ lpfc_cmd->waitq = NULL;
++ spin_unlock_irq(shost->host_lock);
-+ sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
-+ (((u64) lun) << 48));
-+
- wait_for_completion(&evt->comp);
+ if (lpfc_cmd->pCmd == cmnd) {
+ ret = FAILED;
+@@ -1438,7 +1458,7 @@ struct scsi_host_template lpfc_template = {
+ .slave_destroy = lpfc_slave_destroy,
+ .scan_finished = lpfc_scan_finished,
+ .this_id = -1,
+- .sg_tablesize = LPFC_SG_SEG_CNT,
++ .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,
+ .use_sg_chaining = ENABLE_SG_CHAINING,
+ .cmd_per_lun = LPFC_CMD_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+@@ -1459,7 +1479,7 @@ struct scsi_host_template lpfc_vport_template = {
+ .slave_destroy = lpfc_slave_destroy,
+ .scan_finished = lpfc_scan_finished,
+ .this_id = -1,
+- .sg_tablesize = LPFC_SG_SEG_CNT,
++ .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,
+ .cmd_per_lun = LPFC_CMD_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+ .use_sg_chaining = ENABLE_SG_CHAINING,
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.h b/drivers/scsi/lpfc/lpfc_scsi.h
+index 31787bb..daba923 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.h
++++ b/drivers/scsi/lpfc/lpfc_scsi.h
+@@ -138,6 +138,7 @@ struct lpfc_scsi_buf {
+ * Iotag is in here
+ */
+ struct lpfc_iocbq cur_iocbq;
++ wait_queue_head_t *waitq;
+ };
- /* make sure we got a good response */
-@@ -1386,8 +1427,10 @@ static int ibmvscsi_slave_configure(struct scsi_device *sdev)
- unsigned long lock_flags = 0;
+ #define LPFC_SCSI_DMA_EXT_SIZE 264
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index ce348c5..fdd01e3 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -106,7 +106,7 @@ lpfc_sli_get_iocbq(struct lpfc_hba *phba)
+ return iocbq;
+ }
- spin_lock_irqsave(shost->host_lock, lock_flags);
-- if (sdev->type == TYPE_DISK)
-+ if (sdev->type == TYPE_DISK) {
- sdev->allow_restart = 1;
-+ sdev->timeout = 60 * HZ;
-+ }
- scsi_adjust_queue_depth(sdev, 0, shost->cmd_per_lun);
- spin_unlock_irqrestore(shost->host_lock, lock_flags);
- return 0;
-diff --git a/drivers/scsi/ibmvscsi/ibmvstgt.c b/drivers/scsi/ibmvscsi/ibmvstgt.c
-index 82bcab6..d63f11e 100644
---- a/drivers/scsi/ibmvscsi/ibmvstgt.c
-+++ b/drivers/scsi/ibmvscsi/ibmvstgt.c
-@@ -292,7 +292,7 @@ static int ibmvstgt_cmd_done(struct scsi_cmnd *sc,
- dprintk("%p %p %x %u\n", iue, target, vio_iu(iue)->srp.cmd.cdb[0],
- cmd->usg_sg);
+-void
++static void
+ __lpfc_sli_release_iocbq(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
+ {
+ size_t start_clean = offsetof(struct lpfc_iocbq, iocb);
+@@ -199,6 +199,7 @@ lpfc_sli_iocb_cmd_type(uint8_t iocb_cmnd)
+ case CMD_RCV_ELS_REQ_CX:
+ case CMD_RCV_SEQUENCE64_CX:
+ case CMD_RCV_ELS_REQ64_CX:
++ case CMD_ASYNC_STATUS:
+ case CMD_IOCB_RCV_SEQ64_CX:
+ case CMD_IOCB_RCV_ELS64_CX:
+ case CMD_IOCB_RCV_CONT64_CX:
+@@ -473,8 +474,7 @@ lpfc_sli_resume_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
+ if (pring->txq_cnt &&
+ lpfc_is_link_up(phba) &&
+ (pring->ringno != phba->sli.fcp_ring ||
+- phba->sli.sli_flag & LPFC_PROCESS_LA) &&
+- !(pring->flag & LPFC_STOP_IOCB_MBX)) {
++ phba->sli.sli_flag & LPFC_PROCESS_LA)) {
-- if (sc->use_sg)
-+ if (scsi_sg_count(sc))
- err = srp_transfer_data(sc, &vio_iu(iue)->srp.cmd, ibmvstgt_rdma, 1, 1);
+ while ((iocb = lpfc_sli_next_iocb_slot(phba, pring)) &&
+ (nextiocb = lpfc_sli_ringtx_get(phba, pring)))
+@@ -489,32 +489,7 @@ lpfc_sli_resume_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
+ return;
+ }
- spin_lock_irqsave(&target->lock, flags);
-diff --git a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c
-index 9706de9..db8bc20 100644
---- a/drivers/scsi/ide-scsi.c
-+++ b/drivers/scsi/ide-scsi.c
-@@ -395,14 +395,12 @@ static int idescsi_expiry(ide_drive_t *drive)
- static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
- {
- idescsi_scsi_t *scsi = drive_to_idescsi(drive);
-- idescsi_pc_t *pc=scsi->pc;
-+ ide_hwif_t *hwif = drive->hwif;
-+ idescsi_pc_t *pc = scsi->pc;
- struct request *rq = pc->rq;
-- atapi_bcount_t bcount;
-- atapi_status_t status;
-- atapi_ireason_t ireason;
-- atapi_feature_t feature;
+-/* lpfc_sli_turn_on_ring is only called by lpfc_sli_handle_mb_event below */
+-static void
+-lpfc_sli_turn_on_ring(struct lpfc_hba *phba, int ringno)
+-{
+- struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
+- &phba->slim2p->mbx.us.s3_pgp.port[ringno] :
+- &phba->slim2p->mbx.us.s2.port[ringno];
+- unsigned long iflags;
-
- unsigned int temp;
-+ u16 bcount;
-+ u8 stat, ireason;
-
- #if IDESCSI_DEBUG_LOG
- printk (KERN_INFO "ide-scsi: Reached idescsi_pc_intr interrupt handler\n");
-@@ -425,30 +423,29 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
- (void) HWIF(drive)->ide_dma_end(drive);
+- /* If the ring is active, flag it */
+- spin_lock_irqsave(&phba->hbalock, iflags);
+- if (phba->sli.ring[ringno].cmdringaddr) {
+- if (phba->sli.ring[ringno].flag & LPFC_STOP_IOCB_MBX) {
+- phba->sli.ring[ringno].flag &= ~LPFC_STOP_IOCB_MBX;
+- /*
+- * Force update of the local copy of cmdGetInx
+- */
+- phba->sli.ring[ringno].local_getidx
+- = le32_to_cpu(pgp->cmdGetInx);
+- lpfc_sli_resume_iocb(phba, &phba->sli.ring[ringno]);
+- }
+- }
+- spin_unlock_irqrestore(&phba->hbalock, iflags);
+-}
+-
+-struct lpfc_hbq_entry *
++static struct lpfc_hbq_entry *
+ lpfc_sli_next_hbq_slot(struct lpfc_hba *phba, uint32_t hbqno)
+ {
+ struct hbq_s *hbqp = &phba->hbqs[hbqno];
+@@ -565,6 +540,7 @@ lpfc_sli_hbqbuf_free_all(struct lpfc_hba *phba)
+ list_del(&hbq_buf->dbuf.list);
+ (phba->hbqs[i].hbq_free_buffer)(phba, hbq_buf);
+ }
++ phba->hbqs[i].buffer_count = 0;
}
+ }
-- feature.all = 0;
- /* Clear the interrupt */
-- status.all = HWIF(drive)->INB(IDE_STATUS_REG);
-+ stat = drive->hwif->INB(IDE_STATUS_REG);
-
-- if (!status.b.drq) {
-+ if ((stat & DRQ_STAT) == 0) {
- /* No more interrupts */
- if (test_bit(IDESCSI_LOG_CMD, &scsi->log))
- printk (KERN_INFO "Packet command completed, %d bytes transferred\n", pc->actually_transferred);
- local_irq_enable_in_hardirq();
-- if (status.b.check)
-+ if (stat & ERR_STAT)
- rq->errors++;
- idescsi_end_request (drive, 1, 0);
- return ide_stopped;
+@@ -633,8 +609,8 @@ lpfc_sli_hbqbuf_fill_hbqs(struct lpfc_hba *phba, uint32_t hbqno, uint32_t count)
+ return 0;
}
-- bcount.b.low = HWIF(drive)->INB(IDE_BCOUNTL_REG);
-- bcount.b.high = HWIF(drive)->INB(IDE_BCOUNTH_REG);
-- ireason.all = HWIF(drive)->INB(IDE_IREASON_REG);
-+ bcount = (hwif->INB(IDE_BCOUNTH_REG) << 8) |
-+ hwif->INB(IDE_BCOUNTL_REG);
-+ ireason = hwif->INB(IDE_IREASON_REG);
-- if (ireason.b.cod) {
-+ if (ireason & CD) {
- printk(KERN_ERR "ide-scsi: CoD != 0 in idescsi_pc_intr\n");
- return ide_do_reset (drive);
- }
-- if (ireason.b.io) {
-- temp = pc->actually_transferred + bcount.all;
-+ if (ireason & IO) {
-+ temp = pc->actually_transferred + bcount;
- if (temp > pc->request_transfer) {
- if (temp > pc->buffer_size) {
- printk(KERN_ERR "ide-scsi: The scsi wants to "
-@@ -461,11 +458,13 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
- idescsi_input_buffers(drive, pc, temp);
- else
- drive->hwif->atapi_input_bytes(drive, pc->current_position, temp);
-- printk(KERN_ERR "ide-scsi: transferred %d of %d bytes\n", temp, bcount.all);
-+ printk(KERN_ERR "ide-scsi: transferred"
-+ " %d of %d bytes\n",
-+ temp, bcount);
- }
- pc->actually_transferred += temp;
- pc->current_position += temp;
-- idescsi_discard_data(drive, bcount.all - temp);
-+ idescsi_discard_data(drive, bcount - temp);
- ide_set_handler(drive, &idescsi_pc_intr, get_timeout(pc), idescsi_expiry);
- return ide_started;
- }
-@@ -474,22 +473,24 @@ static ide_startstop_t idescsi_pc_intr (ide_drive_t *drive)
- #endif /* IDESCSI_DEBUG_LOG */
- }
+- start = lpfc_hbq_defs[hbqno]->buffer_count;
+- end = count + lpfc_hbq_defs[hbqno]->buffer_count;
++ start = phba->hbqs[hbqno].buffer_count;
++ end = count + start;
+ if (end > lpfc_hbq_defs[hbqno]->entry_count) {
+ end = lpfc_hbq_defs[hbqno]->entry_count;
}
-- if (ireason.b.io) {
-+ if (ireason & IO) {
- clear_bit(PC_WRITING, &pc->flags);
- if (pc->sg)
-- idescsi_input_buffers(drive, pc, bcount.all);
-+ idescsi_input_buffers(drive, pc, bcount);
- else
-- HWIF(drive)->atapi_input_bytes(drive, pc->current_position, bcount.all);
-+ hwif->atapi_input_bytes(drive, pc->current_position,
-+ bcount);
- } else {
- set_bit(PC_WRITING, &pc->flags);
- if (pc->sg)
-- idescsi_output_buffers (drive, pc, bcount.all);
-+ idescsi_output_buffers(drive, pc, bcount);
+@@ -646,7 +622,7 @@ lpfc_sli_hbqbuf_fill_hbqs(struct lpfc_hba *phba, uint32_t hbqno, uint32_t count)
+ return 1;
+ hbq_buffer->tag = (i | (hbqno << 16));
+ if (lpfc_sli_hbq_to_firmware(phba, hbqno, hbq_buffer))
+- lpfc_hbq_defs[hbqno]->buffer_count++;
++ phba->hbqs[hbqno].buffer_count++;
else
-- HWIF(drive)->atapi_output_bytes(drive, pc->current_position, bcount.all);
-+ hwif->atapi_output_bytes(drive, pc->current_position,
-+ bcount);
+ (phba->hbqs[hbqno].hbq_free_buffer)(phba, hbq_buffer);
}
- /* Update the current position */
-- pc->actually_transferred += bcount.all;
-- pc->current_position += bcount.all;
-+ pc->actually_transferred += bcount;
-+ pc->current_position += bcount;
+@@ -660,14 +636,14 @@ lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *phba, uint32_t qno)
+ lpfc_hbq_defs[qno]->add_count));
+ }
- /* And set the interrupt handler again */
- ide_set_handler(drive, &idescsi_pc_intr, get_timeout(pc), idescsi_expiry);
-@@ -501,16 +502,16 @@ static ide_startstop_t idescsi_transfer_pc(ide_drive_t *drive)
- ide_hwif_t *hwif = drive->hwif;
- idescsi_scsi_t *scsi = drive_to_idescsi(drive);
- idescsi_pc_t *pc = scsi->pc;
-- atapi_ireason_t ireason;
- ide_startstop_t startstop;
-+ u8 ireason;
+-int
++static int
+ lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *phba, uint32_t qno)
+ {
+ return(lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
+ lpfc_hbq_defs[qno]->init_count));
+ }
- if (ide_wait_stat(&startstop,drive,DRQ_STAT,BUSY_STAT,WAIT_READY)) {
- printk(KERN_ERR "ide-scsi: Strange, packet command "
- "initiated yet DRQ isn't asserted\n");
- return startstop;
- }
-- ireason.all = HWIF(drive)->INB(IDE_IREASON_REG);
-- if (!ireason.b.cod || ireason.b.io) {
-+ ireason = hwif->INB(IDE_IREASON_REG);
-+ if ((ireason & CD) == 0 || (ireason & IO)) {
- printk(KERN_ERR "ide-scsi: (IO,CoD) != (0,1) while "
- "issuing a packet command\n");
- return ide_do_reset (drive);
-@@ -573,30 +574,26 @@ static ide_startstop_t idescsi_issue_pc (ide_drive_t *drive, idescsi_pc_t *pc)
+-struct hbq_dmabuf *
++static struct hbq_dmabuf *
+ lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
{
- idescsi_scsi_t *scsi = drive_to_idescsi(drive);
- ide_hwif_t *hwif = drive->hwif;
-- atapi_feature_t feature;
-- atapi_bcount_t bcount;
-+ u16 bcount;
-+ u8 dma = 0;
+ struct lpfc_dmabuf *d_buf;
+@@ -686,7 +662,7 @@ lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
+ }
+ lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_VPORT,
+ "1803 Bad hbq tag. Data: x%x x%x\n",
+- tag, lpfc_hbq_defs[tag >> 16]->buffer_count);
++ tag, phba->hbqs[tag >> 16].buffer_count);
+ return NULL;
+ }
- scsi->pc=pc; /* Set the current packet command */
- pc->actually_transferred=0; /* We haven't transferred any data yet */
- pc->current_position=pc->buffer;
-- bcount.all = min(pc->request_transfer, 63 * 1024); /* Request to transfer the entire buffer at once */
-+ /* Request to transfer the entire buffer at once */
-+ bcount = min(pc->request_transfer, 63 * 1024);
+@@ -712,6 +688,7 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
+ case MBX_LOAD_SM:
+ case MBX_READ_NV:
+ case MBX_WRITE_NV:
++ case MBX_WRITE_VPARMS:
+ case MBX_RUN_BIU_DIAG:
+ case MBX_INIT_LINK:
+ case MBX_DOWN_LINK:
+@@ -739,7 +716,7 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
+ case MBX_DEL_LD_ENTRY:
+ case MBX_RUN_PROGRAM:
+ case MBX_SET_MASK:
+- case MBX_SET_SLIM:
++ case MBX_SET_VARIABLE:
+ case MBX_UNREG_D_ID:
+ case MBX_KILL_BOARD:
+ case MBX_CONFIG_FARP:
+@@ -751,9 +728,10 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
+ case MBX_READ_RPI64:
+ case MBX_REG_LOGIN64:
+ case MBX_READ_LA64:
+- case MBX_FLASH_WR_ULA:
++ case MBX_WRITE_WWN:
+ case MBX_SET_DEBUG:
+ case MBX_LOAD_EXP_ROM:
++ case MBX_ASYNCEVT_ENABLE:
+ case MBX_REG_VPI:
+ case MBX_UNREG_VPI:
+ case MBX_HEARTBEAT:
+@@ -953,6 +931,17 @@ lpfc_sli_replace_hbqbuff(struct lpfc_hba *phba, uint32_t tag)
+ return &new_hbq_entry->dbuf;
+ }
+
++static struct lpfc_dmabuf *
++lpfc_sli_get_buff(struct lpfc_hba *phba,
++ struct lpfc_sli_ring *pring,
++ uint32_t tag)
++{
++ if (tag & QUE_BUFTAG_BIT)
++ return lpfc_sli_ring_taggedbuf_get(phba, pring, tag);
++ else
++ return lpfc_sli_replace_hbqbuff(phba, tag);
++}
++
+ static int
+ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ struct lpfc_iocbq *saveq)
+@@ -961,19 +950,112 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ WORD5 * w5p;
+ uint32_t Rctl, Type;
+ uint32_t match, i;
++ struct lpfc_iocbq *iocbq;
+
+ match = 0;
+ irsp = &(saveq->iocb);
+- if ((irsp->ulpCommand == CMD_RCV_ELS_REQ64_CX)
+- || (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX)
+- || (irsp->ulpCommand == CMD_IOCB_RCV_ELS64_CX)
+- || (irsp->ulpCommand == CMD_IOCB_RCV_CONT64_CX)) {
++
++ if (irsp->ulpStatus == IOSTAT_NEED_BUFFER)
++ return 1;
++ if (irsp->ulpCommand == CMD_ASYNC_STATUS) {
++ if (pring->lpfc_sli_rcv_async_status)
++ pring->lpfc_sli_rcv_async_status(phba, pring, saveq);
++ else
++ lpfc_printf_log(phba,
++ KERN_WARNING,
++ LOG_SLI,
++ "0316 Ring %d handler: unexpected "
++ "ASYNC_STATUS iocb received evt_code "
++ "0x%x\n",
++ pring->ringno,
++ irsp->un.asyncstat.evt_code);
++ return 1;
++ }
++
++ if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
++ if (irsp->ulpBdeCount != 0) {
++ saveq->context2 = lpfc_sli_get_buff(phba, pring,
++ irsp->un.ulpWord[3]);
++ if (!saveq->context2)
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_SLI,
++ "0341 Ring %d Cannot find buffer for "
++ "an unsolicited iocb. tag 0x%x\n",
++ pring->ringno,
++ irsp->un.ulpWord[3]);
++ }
++ if (irsp->ulpBdeCount == 2) {
++ saveq->context3 = lpfc_sli_get_buff(phba, pring,
++ irsp->unsli3.sli3Words[7]);
++ if (!saveq->context3)
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_SLI,
++ "0342 Ring %d Cannot find buffer for an"
++ " unsolicited iocb. tag 0x%x\n",
++ pring->ringno,
++ irsp->unsli3.sli3Words[7]);
++ }
++ list_for_each_entry(iocbq, &saveq->list, list) {
++ irsp = &(iocbq->iocb);
++ if (irsp->ulpBdeCount != 0) {
++ iocbq->context2 = lpfc_sli_get_buff(phba, pring,
++ irsp->un.ulpWord[3]);
++ if (!iocbq->context2)
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_SLI,
++ "0343 Ring %d Cannot find "
++ "buffer for an unsolicited iocb"
++ ". tag 0x%x\n", pring->ringno,
++ irsp->un.ulpWord[3]);
++ }
++ if (irsp->ulpBdeCount == 2) {
++ iocbq->context3 = lpfc_sli_get_buff(phba, pring,
++ irsp->unsli3.sli3Words[7]);
++ if (!iocbq->context3)
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_SLI,
++ "0344 Ring %d Cannot find "
++ "buffer for an unsolicited "
++ "iocb. tag 0x%x\n",
++ pring->ringno,
++ irsp->unsli3.sli3Words[7]);
++ }
++ }
++ }
++ if (irsp->ulpBdeCount != 0 &&
++ (irsp->ulpCommand == CMD_IOCB_RCV_CONT64_CX ||
++ irsp->ulpStatus == IOSTAT_INTERMED_RSP)) {
++ int found = 0;
++
++ /* search continue save q for same XRI */
++ list_for_each_entry(iocbq, &pring->iocb_continue_saveq, clist) {
++ if (iocbq->iocb.ulpContext == saveq->iocb.ulpContext) {
++ list_add_tail(&saveq->list, &iocbq->list);
++ found = 1;
++ break;
++ }
++ }
++ if (!found)
++ list_add_tail(&saveq->clist,
++ &pring->iocb_continue_saveq);
++ if (saveq->iocb.ulpStatus != IOSTAT_INTERMED_RSP) {
++ list_del_init(&iocbq->clist);
++ saveq = iocbq;
++ irsp = &(saveq->iocb);
++ } else
++ return 0;
++ }
++ if ((irsp->ulpCommand == CMD_RCV_ELS_REQ64_CX) ||
++ (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX) ||
++ (irsp->ulpCommand == CMD_IOCB_RCV_ELS64_CX)) {
+ Rctl = FC_ELS_REQ;
+ Type = FC_ELS_DATA;
+ } else {
+- w5p =
+- (WORD5 *) & (saveq->iocb.un.
+- ulpWord[5]);
++ w5p = (WORD5 *)&(saveq->iocb.un.ulpWord[5]);
+ Rctl = w5p->hcsw.Rctl;
+ Type = w5p->hcsw.Type;
-- feature.all = 0;
- if (drive->using_dma && !idescsi_map_sg(drive, pc)) {
- hwif->sg_mapped = 1;
-- feature.b.dma = !hwif->dma_setup(drive);
-+ dma = !hwif->dma_setup(drive);
- hwif->sg_mapped = 0;
+@@ -988,15 +1070,6 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ }
}
- SELECT_DRIVE(drive);
-- if (IDE_CONTROL_REG)
-- HWIF(drive)->OUTB(drive->ctl, IDE_CONTROL_REG);
+- if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+- if (irsp->ulpBdeCount != 0)
+- saveq->context2 = lpfc_sli_replace_hbqbuff(phba,
+- irsp->un.ulpWord[3]);
+- if (irsp->ulpBdeCount == 2)
+- saveq->context3 = lpfc_sli_replace_hbqbuff(phba,
+- irsp->unsli3.sli3Words[7]);
+- }
+-
+ /* unSolicited Responses */
+ if (pring->prt[0].profile) {
+ if (pring->prt[0].lpfc_sli_rcv_unsol_event)
+@@ -1006,12 +1079,9 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ } else {
+ /* We must search, based on rctl / type
+ for the right routine */
+- for (i = 0; i < pring->num_mask;
+- i++) {
+- if ((pring->prt[i].rctl ==
+- Rctl)
+- && (pring->prt[i].
+- type == Type)) {
++ for (i = 0; i < pring->num_mask; i++) {
++ if ((pring->prt[i].rctl == Rctl)
++ && (pring->prt[i].type == Type)) {
+ if (pring->prt[i].lpfc_sli_rcv_unsol_event)
+ (pring->prt[i].lpfc_sli_rcv_unsol_event)
+ (phba, pring, saveq);
+@@ -1084,6 +1154,12 @@ lpfc_sli_process_sol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ IOSTAT_LOCAL_REJECT;
+ saveq->iocb.un.ulpWord[4] =
+ IOERR_SLI_ABORTED;
++
++ /* Firmware could still be in progress
++ * of DMAing payload, so don't free data
++ * buffer till after a hbeat.
++ */
++ saveq->iocb_flag |= LPFC_DELAY_MEM_FREE;
+ }
+ }
+ (cmdiocbp->iocb_cmpl) (phba, cmdiocbp, saveq);
+@@ -1572,12 +1648,7 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba *phba,
-- HWIF(drive)->OUTB(feature.all, IDE_FEATURE_REG);
-- HWIF(drive)->OUTB(bcount.b.high, IDE_BCOUNTH_REG);
-- HWIF(drive)->OUTB(bcount.b.low, IDE_BCOUNTL_REG);
-+ ide_pktcmd_tf_load(drive, IDE_TFLAG_NO_SELECT_MASK, bcount, dma);
+ writel(pring->rspidx, &phba->host_gp[pring->ringno].rspGetInx);
-- if (feature.b.dma)
-+ if (dma)
- set_bit(PC_DMA_OK, &pc->flags);
+- if (list_empty(&(pring->iocb_continueq))) {
+- list_add(&rspiocbp->list, &(pring->iocb_continueq));
+- } else {
+- list_add_tail(&rspiocbp->list,
+- &(pring->iocb_continueq));
+- }
++ list_add_tail(&rspiocbp->list, &(pring->iocb_continueq));
- if (test_bit(IDESCSI_DRQ_INTERRUPT, &scsi->flags)) {
-@@ -922,8 +919,8 @@ static int idescsi_eh_reset (struct scsi_cmnd *cmd)
- }
+ pring->iocb_continueq_cnt++;
+ if (irsp->ulpLe) {
+@@ -1642,17 +1713,17 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba *phba,
+ iocb_cmd_type = irsp->ulpCommand & CMD_IOCB_MASK;
+ type = lpfc_sli_iocb_cmd_type(iocb_cmd_type);
+ if (type == LPFC_SOL_IOCB) {
+- spin_unlock_irqrestore(&phba->hbalock,
+- iflag);
++ spin_unlock_irqrestore(&phba->hbalock, iflag);
+ rc = lpfc_sli_process_sol_iocb(phba, pring,
+ saveq);
+ spin_lock_irqsave(&phba->hbalock, iflag);
+ } else if (type == LPFC_UNSOL_IOCB) {
+- spin_unlock_irqrestore(&phba->hbalock,
+- iflag);
++ spin_unlock_irqrestore(&phba->hbalock, iflag);
+ rc = lpfc_sli_process_unsol_iocb(phba, pring,
+ saveq);
+ spin_lock_irqsave(&phba->hbalock, iflag);
++ if (!rc)
++ free_saveq = 0;
+ } else if (type == LPFC_ABORT_IOCB) {
+ if ((irsp->ulpCommand != CMD_XRI_ABORTED_CX) &&
+ ((cmdiocbp =
+@@ -1921,8 +1992,8 @@ lpfc_sli_brdkill(struct lpfc_hba *phba)
+ "0329 Kill HBA Data: x%x x%x\n",
+ phba->pport->port_state, psli->sli_flag);
- /* kill current request */
-- blkdev_dequeue_request(req);
-- end_that_request_last(req, 0);
-+ if (__blk_end_request(req, -EIO, 0))
-+ BUG();
- if (blk_sense_request(req))
- kfree(scsi->pc->buffer);
- kfree(scsi->pc);
-@@ -932,8 +929,8 @@ static int idescsi_eh_reset (struct scsi_cmnd *cmd)
+- if ((pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool,
+- GFP_KERNEL)) == 0)
++ pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
++ if (!pmb)
+ return 1;
- /* now nuke the drive queue */
- while ((req = elv_next_request(drive->queue))) {
-- blkdev_dequeue_request(req);
-- end_that_request_last(req, 0);
-+ if (__blk_end_request(req, -EIO, 0))
-+ BUG();
+ /* Disable the error attention */
+@@ -2113,7 +2184,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
+ <status> */
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "0436 Adapter failed to init, "
+- "timeout, status reg x%x\n", status);
++ "timeout, status reg x%x, "
++ "FW Data: A8 x%x AC x%x\n", status,
++ readl(phba->MBslimaddr + 0xa8),
++ readl(phba->MBslimaddr + 0xac));
+ phba->link_state = LPFC_HBA_ERROR;
+ return -ETIMEDOUT;
+ }
+@@ -2125,7 +2199,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
+ <status> */
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "0437 Adapter failed to init, "
+- "chipset, status reg x%x\n", status);
++ "chipset, status reg x%x, "
++ "FW Data: A8 x%x AC x%x\n", status,
++ readl(phba->MBslimaddr + 0xa8),
++ readl(phba->MBslimaddr + 0xac));
+ phba->link_state = LPFC_HBA_ERROR;
+ return -EIO;
+ }
+@@ -2153,7 +2230,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
+ /* Adapter failed to init, chipset, status reg <status> */
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "0438 Adapter failed to init, chipset, "
+- "status reg x%x\n", status);
++ "status reg x%x, "
++ "FW Data: A8 x%x AC x%x\n", status,
++ readl(phba->MBslimaddr + 0xa8),
++ readl(phba->MBslimaddr + 0xac));
+ phba->link_state = LPFC_HBA_ERROR;
+ return -EIO;
}
+@@ -2485,11 +2565,16 @@ lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
+ lpfc_sli_abort_iocb_ring(phba, pring);
- HWGROUP(drive)->rq = NULL;
-diff --git a/drivers/scsi/imm.c b/drivers/scsi/imm.c
-index a3d0c6b..f97d172 100644
---- a/drivers/scsi/imm.c
-+++ b/drivers/scsi/imm.c
-@@ -837,19 +837,16 @@ static int imm_engine(imm_struct *dev, struct scsi_cmnd *cmd)
+ lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
+- "0316 Resetting board due to mailbox timeout\n");
++ "0345 Resetting board due to mailbox timeout\n");
+ /*
+ * lpfc_offline calls lpfc_sli_hba_down which will clean up
+ * on oustanding mailbox commands.
+ */
++ /* If resets are disabled then set error state and return. */
++ if (!phba->cfg_enable_hba_reset) {
++ phba->link_state = LPFC_HBA_ERROR;
++ return;
++ }
+ lpfc_offline_prep(phba);
+ lpfc_offline(phba);
+ lpfc_sli_brdrestart(phba);
+@@ -2507,6 +2592,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+ uint32_t status, evtctr;
+ uint32_t ha_copy;
+ int i;
++ unsigned long timeout;
+ unsigned long drvr_flag = 0;
+ volatile uint32_t word0, ldata;
+ void __iomem *to_slim;
+@@ -2519,7 +2605,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+ "1806 Mbox x%x failed. No vport\n",
+ pmbox->mb.mbxCommand);
+ dump_stack();
+- return MBXERR_ERROR;
++ return MBX_NOT_FINISHED;
+ }
+ }
- /* Phase 4 - Setup scatter/gather buffers */
- case 4:
-- if (cmd->use_sg) {
-- /* if many buffers are available, start filling the first */
-- cmd->SCp.buffer =
-- (struct scatterlist *) cmd->request_buffer;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- } else {
-- /* else fill the only available buffer */
- cmd->SCp.buffer = NULL;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-- cmd->SCp.ptr = cmd->request_buffer;
-+ cmd->SCp.this_residual = 0;
-+ cmd->SCp.ptr = NULL;
+@@ -2571,21 +2657,6 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+ return MBX_NOT_FINISHED;
}
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.phase++;
- if (cmd->SCp.this_residual & 0x01)
- cmd->SCp.this_residual++;
-diff --git a/drivers/scsi/in2000.c b/drivers/scsi/in2000.c
-index c8b452f..8053b1e 100644
---- a/drivers/scsi/in2000.c
-+++ b/drivers/scsi/in2000.c
-@@ -369,16 +369,16 @@ static int in2000_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
- * - SCp.phase records this command's SRCID_ER bit setting
- */
-- if (cmd->use_sg) {
-- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- } else {
- cmd->SCp.buffer = NULL;
- cmd->SCp.buffers_residual = 0;
-- cmd->SCp.ptr = (char *) cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
+- /* Handle STOP IOCB processing flag. This is only meaningful
+- * if we are not polling for mbox completion.
+- */
+- if (flag & MBX_STOP_IOCB) {
+- flag &= ~MBX_STOP_IOCB;
+- /* Now flag each ring */
+- for (i = 0; i < psli->num_rings; i++) {
+- /* If the ring is active, flag it */
+- if (psli->ring[i].cmdringaddr) {
+- psli->ring[i].flag |=
+- LPFC_STOP_IOCB_MBX;
+- }
+- }
+- }
+-
+ /* Another mailbox command is still being processed, queue this
+ * command to be processed later.
+ */
+@@ -2620,23 +2691,6 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+ return MBX_BUSY;
}
- cmd->SCp.have_data_in = 0;
-diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
-index 0841df0..73270ff 100644
---- a/drivers/scsi/ipr.c
-+++ b/drivers/scsi/ipr.c
-@@ -84,7 +84,7 @@
- /*
- * Global Data
- */
--static struct list_head ipr_ioa_head = LIST_HEAD_INIT(ipr_ioa_head);
-+static LIST_HEAD(ipr_ioa_head);
- static unsigned int ipr_log_level = IPR_DEFAULT_LOG_LEVEL;
- static unsigned int ipr_max_speed = 1;
- static int ipr_testmode = 0;
-@@ -5142,6 +5142,7 @@ static void ipr_build_ata_ioadl(struct ipr_cmnd *ipr_cmd,
- struct ipr_ioadl_desc *last_ioadl = NULL;
- int len = qc->nbytes + qc->pad_len;
- struct scatterlist *sg;
-+ unsigned int si;
+- /* Handle STOP IOCB processing flag. This is only meaningful
+- * if we are not polling for mbox completion.
+- */
+- if (flag & MBX_STOP_IOCB) {
+- flag &= ~MBX_STOP_IOCB;
+- if (flag == MBX_NOWAIT) {
+- /* Now flag each ring */
+- for (i = 0; i < psli->num_rings; i++) {
+- /* If the ring is active, flag it */
+- if (psli->ring[i].cmdringaddr) {
+- psli->ring[i].flag |=
+- LPFC_STOP_IOCB_MBX;
+- }
+- }
+- }
+- }
+-
+ psli->sli_flag |= LPFC_SLI_MBOX_ACTIVE;
- if (len == 0)
- return;
-@@ -5159,7 +5160,7 @@ static void ipr_build_ata_ioadl(struct ipr_cmnd *ipr_cmd,
- cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+ /* If we are not polling, we MUST be in SLI2 mode */
+@@ -2714,18 +2768,24 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
}
-- ata_for_each_sg(sg, qc) {
-+ for_each_sg(qc->sg, sg, qc->n_elem, si) {
- ioadl->flags_and_data_len = cpu_to_be32(ioadl_flags | sg_dma_len(sg));
- ioadl->address = cpu_to_be32(sg_dma_address(sg));
+ wmb();
+- /* interrupt board to doit right away */
+- writel(CA_MBATT, phba->CAregaddr);
+- readl(phba->CAregaddr); /* flush */
-@@ -5222,12 +5223,12 @@ static unsigned int ipr_qc_issue(struct ata_queued_cmd *qc)
- regs->flags |= IPR_ATA_FLAG_XFER_TYPE_DMA;
+ switch (flag) {
+ case MBX_NOWAIT:
+- /* Don't wait for it to finish, just return */
++ /* Set up reference to mailbox command */
+ psli->mbox_active = pmbox;
++ /* Interrupt board to do it */
++ writel(CA_MBATT, phba->CAregaddr);
++ readl(phba->CAregaddr); /* flush */
++ /* Don't wait for it to finish, just return */
break;
-- case ATA_PROT_ATAPI:
-- case ATA_PROT_ATAPI_NODATA:
-+ case ATAPI_PROT_PIO:
-+ case ATAPI_PROT_NODATA:
- regs->flags |= IPR_ATA_FLAG_PACKET_CMD;
- break;
+ case MBX_POLL:
++ /* Set up null reference to mailbox command */
+ psli->mbox_active = NULL;
++ /* Interrupt board to do it */
++ writel(CA_MBATT, phba->CAregaddr);
++ readl(phba->CAregaddr); /* flush */
++
+ if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
+ /* First read mbox status word */
+ word0 = *((volatile uint32_t *)&phba->slim2p->mbx);
+@@ -2737,15 +2797,15 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- regs->flags |= IPR_ATA_FLAG_PACKET_CMD;
- regs->flags |= IPR_ATA_FLAG_XFER_TYPE_DMA;
- break;
-diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
-index 5c5a9b2..7505cca 100644
---- a/drivers/scsi/ips.c
-+++ b/drivers/scsi/ips.c
-@@ -389,17 +389,17 @@ static struct pci_device_id ips_pci_table[] = {
- MODULE_DEVICE_TABLE( pci, ips_pci_table );
+ /* Read the HBA Host Attention Register */
+ ha_copy = readl(phba->HAregaddr);
+-
+- i = lpfc_mbox_tmo_val(phba, mb->mbxCommand);
+- i *= 1000; /* Convert to ms */
+-
++ timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
++ mb->mbxCommand) *
++ 1000) + jiffies;
++ i = 0;
+ /* Wait for command to complete */
+ while (((word0 & OWN_CHIP) == OWN_CHIP) ||
+ (!(ha_copy & HA_MBATT) &&
+ (phba->link_state > LPFC_WARM_START))) {
+- if (i-- <= 0) {
++ if (time_after(jiffies, timeout)) {
+ psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
+ spin_unlock_irqrestore(&phba->hbalock,
+ drvr_flag);
+@@ -2758,12 +2818,12 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+ && (evtctr != psli->slistat.mbox_event))
+ break;
- static char ips_hot_plug_name[] = "ips";
--
-+
- static int __devinit ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent);
- static void __devexit ips_remove_device(struct pci_dev *pci_dev);
--
-+
- static struct pci_driver ips_pci_driver = {
- .name = ips_hot_plug_name,
- .id_table = ips_pci_table,
- .probe = ips_insert_device,
- .remove = __devexit_p(ips_remove_device),
- };
--
-+
+- spin_unlock_irqrestore(&phba->hbalock,
+- drvr_flag);
+-
+- msleep(1);
+-
+- spin_lock_irqsave(&phba->hbalock, drvr_flag);
++ if (i++ > 10) {
++ spin_unlock_irqrestore(&phba->hbalock,
++ drvr_flag);
++ msleep(1);
++ spin_lock_irqsave(&phba->hbalock, drvr_flag);
++ }
+ if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
+ /* First copy command data */
+@@ -2848,7 +2908,7 @@ lpfc_sli_next_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
/*
- * Necessary forward function protoypes
-@@ -587,7 +587,7 @@ static void
- ips_setup_funclist(ips_ha_t * ha)
+ * Lockless version of lpfc_sli_issue_iocb.
+ */
+-int
++static int
+ __lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ struct lpfc_iocbq *piocb, uint32_t flag)
{
+@@ -2879,9 +2939,9 @@ __lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-- /*
-+ /*
- * Setup Functions
+ /*
+ * Check to see if we are blocking IOCB processing because of a
+- * outstanding mbox command.
++ * outstanding event.
*/
- if (IPS_IS_MORPHEUS(ha) || IPS_IS_MARCO(ha)) {
-@@ -702,12 +702,8 @@ ips_release(struct Scsi_Host *sh)
- /* free extra memory */
- ips_free(ha);
-
-- /* Free I/O Region */
-- if (ha->io_addr)
-- release_region(ha->io_addr, ha->io_len);
--
- /* free IRQ */
-- free_irq(ha->irq, ha);
-+ free_irq(ha->pcidev->irq, ha);
-
- scsi_host_put(sh);
-
-@@ -1637,7 +1633,7 @@ ips_make_passthru(ips_ha_t *ha, struct scsi_cmnd *SC, ips_scb_t *scb, int intr)
- return (IPS_FAILURE);
- }
-
-- if (ha->device_id == IPS_DEVICEID_COPPERHEAD &&
-+ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD &&
- pt->CoppCP.cmd.flashfw.op_code ==
- IPS_CMD_RW_BIOSFW) {
- ret = ips_flash_copperhead(ha, pt, scb);
-@@ -2021,7 +2017,7 @@ ips_cleanup_passthru(ips_ha_t * ha, ips_scb_t * scb)
- pt->ExtendedStatus = scb->extended_status;
- pt->AdapterType = ha->ad_type;
-
-- if (ha->device_id == IPS_DEVICEID_COPPERHEAD &&
-+ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD &&
- (scb->cmd.flashfw.op_code == IPS_CMD_DOWNLOAD ||
- scb->cmd.flashfw.op_code == IPS_CMD_RW_BIOSFW))
- ips_free_flash_copperhead(ha);
-@@ -2075,13 +2071,13 @@ ips_host_info(ips_ha_t * ha, char *ptr, off_t offset, int len)
- ha->mem_ptr);
- }
-
-- copy_info(&info, "\tIRQ number : %d\n", ha->irq);
-+ copy_info(&info, "\tIRQ number : %d\n", ha->pcidev->irq);
+- if (unlikely(pring->flag & LPFC_STOP_IOCB_MBX))
++ if (unlikely(pring->flag & LPFC_STOP_IOCB_EVENT))
+ goto iocb_busy;
- /* For the Next 3 lines Check for Binary 0 at the end and don't include it if it's there. */
- /* That keeps everything happy for "text" operations on the proc file. */
+ if (unlikely(phba->link_state == LPFC_LINK_DOWN)) {
+@@ -2993,6 +3053,61 @@ lpfc_extra_ring_setup( struct lpfc_hba *phba)
+ return 0;
+ }
- if (le32_to_cpu(ha->nvram->signature) == IPS_NVRAM_P5_SIG) {
-- if (ha->nvram->bios_low[3] == 0) {
-+ if (ha->nvram->bios_low[3] == 0) {
- copy_info(&info,
- "\tBIOS Version : %c%c%c%c%c%c%c\n",
- ha->nvram->bios_high[0], ha->nvram->bios_high[1],
-@@ -2232,31 +2228,31 @@ ips_identify_controller(ips_ha_t * ha)
++static void
++lpfc_sli_async_event_handler(struct lpfc_hba * phba,
++ struct lpfc_sli_ring * pring, struct lpfc_iocbq * iocbq)
++{
++ IOCB_t *icmd;
++ uint16_t evt_code;
++ uint16_t temp;
++ struct temp_event temp_event_data;
++ struct Scsi_Host *shost;
++
++ icmd = &iocbq->iocb;
++ evt_code = icmd->un.asyncstat.evt_code;
++ temp = icmd->ulpContext;
++
++ if ((evt_code != ASYNC_TEMP_WARN) &&
++ (evt_code != ASYNC_TEMP_SAFE)) {
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_SLI,
++ "0346 Ring %d handler: unexpected ASYNC_STATUS"
++ " evt_code 0x%x\n",
++ pring->ringno,
++ icmd->un.asyncstat.evt_code);
++ return;
++ }
++ temp_event_data.data = (uint32_t)temp;
++ temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
++ if (evt_code == ASYNC_TEMP_WARN) {
++ temp_event_data.event_code = LPFC_THRESHOLD_TEMP;
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_TEMP,
++ "0347 Adapter is very hot, please take "
++ "corrective action. temperature : %d Celsius\n",
++ temp);
++ }
++ if (evt_code == ASYNC_TEMP_SAFE) {
++ temp_event_data.event_code = LPFC_NORMAL_TEMP;
++ lpfc_printf_log(phba,
++ KERN_ERR,
++ LOG_TEMP,
++ "0340 Adapter temperature is OK now. "
++ "temperature : %d Celsius\n",
++ temp);
++ }
++
++ /* Send temperature change event to applications */
++ shost = lpfc_shost_from_vport(phba->pport);
++ fc_host_post_vendor_event(shost, fc_get_event_number(),
++ sizeof(temp_event_data), (char *) &temp_event_data,
++ SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
++
++}
++
++
+ int
+ lpfc_sli_setup(struct lpfc_hba *phba)
{
- METHOD_TRACE("ips_identify_controller", 1);
-
-- switch (ha->device_id) {
-+ switch (ha->pcidev->device) {
- case IPS_DEVICEID_COPPERHEAD:
-- if (ha->revision_id <= IPS_REVID_SERVERAID) {
-+ if (ha->pcidev->revision <= IPS_REVID_SERVERAID) {
- ha->ad_type = IPS_ADTYPE_SERVERAID;
-- } else if (ha->revision_id == IPS_REVID_SERVERAID2) {
-+ } else if (ha->pcidev->revision == IPS_REVID_SERVERAID2) {
- ha->ad_type = IPS_ADTYPE_SERVERAID2;
-- } else if (ha->revision_id == IPS_REVID_NAVAJO) {
-+ } else if (ha->pcidev->revision == IPS_REVID_NAVAJO) {
- ha->ad_type = IPS_ADTYPE_NAVAJO;
-- } else if ((ha->revision_id == IPS_REVID_SERVERAID2)
-+ } else if ((ha->pcidev->revision == IPS_REVID_SERVERAID2)
- && (ha->slot_num == 0)) {
- ha->ad_type = IPS_ADTYPE_KIOWA;
-- } else if ((ha->revision_id >= IPS_REVID_CLARINETP1) &&
-- (ha->revision_id <= IPS_REVID_CLARINETP3)) {
-+ } else if ((ha->pcidev->revision >= IPS_REVID_CLARINETP1) &&
-+ (ha->pcidev->revision <= IPS_REVID_CLARINETP3)) {
- if (ha->enq->ucMaxPhysicalDevices == 15)
- ha->ad_type = IPS_ADTYPE_SERVERAID3L;
- else
- ha->ad_type = IPS_ADTYPE_SERVERAID3;
-- } else if ((ha->revision_id >= IPS_REVID_TROMBONE32) &&
-- (ha->revision_id <= IPS_REVID_TROMBONE64)) {
-+ } else if ((ha->pcidev->revision >= IPS_REVID_TROMBONE32) &&
-+ (ha->pcidev->revision <= IPS_REVID_TROMBONE64)) {
- ha->ad_type = IPS_ADTYPE_SERVERAID4H;
+@@ -3059,6 +3174,8 @@ lpfc_sli_setup(struct lpfc_hba *phba)
+ pring->fast_iotag = 0;
+ pring->iotag_ctr = 0;
+ pring->iotag_max = 4096;
++ pring->lpfc_sli_rcv_async_status =
++ lpfc_sli_async_event_handler;
+ pring->num_mask = 4;
+ pring->prt[0].profile = 0; /* Mask 0 */
+ pring->prt[0].rctl = FC_ELS_REQ;
+@@ -3123,6 +3240,7 @@ lpfc_sli_queue_setup(struct lpfc_hba *phba)
+ INIT_LIST_HEAD(&pring->txq);
+ INIT_LIST_HEAD(&pring->txcmplq);
+ INIT_LIST_HEAD(&pring->iocb_continueq);
++ INIT_LIST_HEAD(&pring->iocb_continue_saveq);
+ INIT_LIST_HEAD(&pring->postbufq);
+ }
+ spin_unlock_irq(&phba->hbalock);
+@@ -3193,6 +3311,7 @@ lpfc_sli_hba_down(struct lpfc_hba *phba)
+ LIST_HEAD(completions);
+ struct lpfc_sli *psli = &phba->sli;
+ struct lpfc_sli_ring *pring;
++ struct lpfc_dmabuf *buf_ptr;
+ LPFC_MBOXQ_t *pmb;
+ struct lpfc_iocbq *iocb;
+ IOCB_t *cmd = NULL;
+@@ -3232,6 +3351,19 @@ lpfc_sli_hba_down(struct lpfc_hba *phba)
}
- break;
-
- case IPS_DEVICEID_MORPHEUS:
-- switch (ha->subdevice_id) {
-+ switch (ha->pcidev->subsystem_device) {
- case IPS_SUBDEVICEID_4L:
- ha->ad_type = IPS_ADTYPE_SERVERAID4L;
- break;
-@@ -2285,7 +2281,7 @@ ips_identify_controller(ips_ha_t * ha)
- break;
-
- case IPS_DEVICEID_MARCO:
-- switch (ha->subdevice_id) {
-+ switch (ha->pcidev->subsystem_device) {
- case IPS_SUBDEVICEID_6M:
- ha->ad_type = IPS_ADTYPE_SERVERAID6M;
- break;
-@@ -2332,20 +2328,20 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
-
- strncpy(ha->bios_version, " ?", 8);
-
-- if (ha->device_id == IPS_DEVICEID_COPPERHEAD) {
-+ if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) {
- if (IPS_USE_MEMIO(ha)) {
- /* Memory Mapped I/O */
-
- /* test 1st byte */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0x55)
- return;
-
- writel(1, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0xAA)
-@@ -2353,20 +2349,20 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
-
- /* Get Major version */
- writel(0x1FF, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- major = readb(ha->mem_ptr + IPS_REG_FLDP);
-
- /* Get Minor version */
- writel(0x1FE, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- minor = readb(ha->mem_ptr + IPS_REG_FLDP);
-
- /* Get SubMinor version */
- writel(0x1FD, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- subminor = readb(ha->mem_ptr + IPS_REG_FLDP);
-
-@@ -2375,14 +2371,14 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
-
- /* test 1st byte */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- if (inb(ha->io_addr + IPS_REG_FLDP) != 0x55)
- return;
-
- outl(cpu_to_le32(1), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ }
- if (inb(ha->io_addr + IPS_REG_FLDP) != 0xAA)
-@@ -2390,21 +2386,21 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
++ spin_lock_irqsave(&phba->hbalock, flags);
++ list_splice_init(&phba->elsbuf, &completions);
++ phba->elsbuf_cnt = 0;
++ phba->elsbuf_prev_cnt = 0;
++ spin_unlock_irqrestore(&phba->hbalock, flags);
++
++ while (!list_empty(&completions)) {
++ list_remove_head(&completions, buf_ptr,
++ struct lpfc_dmabuf, list);
++ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
++ kfree(buf_ptr);
++ }
++
+ /* Return any active mbox cmds */
+ del_timer_sync(&psli->mbox_tmo);
+ spin_lock_irqsave(&phba->hbalock, flags);
+@@ -3294,6 +3426,47 @@ lpfc_sli_ringpostbuf_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ return 0;
+ }
- /* Get Major version */
- outl(cpu_to_le32(0x1FF), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
++uint32_t
++lpfc_sli_get_buffer_tag(struct lpfc_hba *phba)
++{
++ spin_lock_irq(&phba->hbalock);
++ phba->buffer_tag_count++;
++ /*
++ * Always set the QUE_BUFTAG_BIT to distiguish between
++ * a tag assigned by HBQ.
++ */
++ phba->buffer_tag_count |= QUE_BUFTAG_BIT;
++ spin_unlock_irq(&phba->hbalock);
++ return phba->buffer_tag_count;
++}
++
++struct lpfc_dmabuf *
++lpfc_sli_ring_taggedbuf_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
++ uint32_t tag)
++{
++ struct lpfc_dmabuf *mp, *next_mp;
++ struct list_head *slp = &pring->postbufq;
++
++ /* Search postbufq, from the begining, looking for a match on tag */
++ spin_lock_irq(&phba->hbalock);
++ list_for_each_entry_safe(mp, next_mp, &pring->postbufq, list) {
++ if (mp->buffer_tag == tag) {
++ list_del_init(&mp->list);
++ pring->postbufq_cnt--;
++ spin_unlock_irq(&phba->hbalock);
++ return mp;
++ }
++ }
++
++ spin_unlock_irq(&phba->hbalock);
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "0410 Cannot find virtual addr for buffer tag on "
++ "ring %d Data x%lx x%p x%p x%x\n",
++ pring->ringno, (unsigned long) tag,
++ slp->next, slp->prev, pring->postbufq_cnt);
++
++ return NULL;
++}
- major = inb(ha->io_addr + IPS_REG_FLDP);
+ struct lpfc_dmabuf *
+ lpfc_sli_ringpostbuf_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+@@ -3361,6 +3534,12 @@ lpfc_sli_abort_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ pring->txcmplq_cnt--;
+ spin_unlock_irq(&phba->hbalock);
- /* Get Minor version */
- outl(cpu_to_le32(0x1FE), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
++ /* Firmware could still be in progress of DMAing
++ * payload, so don't free data buffer till after
++ * a hbeat.
++ */
++ abort_iocb->iocb_flag |= LPFC_DELAY_MEM_FREE;
++
+ abort_iocb->iocb_flag &= ~LPFC_DRIVER_ABORTED;
+ abort_iocb->iocb.ulpStatus = IOSTAT_LOCAL_REJECT;
+ abort_iocb->iocb.un.ulpWord[4] = IOERR_SLI_ABORTED;
+@@ -3699,7 +3878,7 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq,
+ unsigned long flag;
- minor = inb(ha->io_addr + IPS_REG_FLDP);
+ /* The caller must leave context1 empty. */
+- if (pmboxq->context1 != 0)
++ if (pmboxq->context1)
+ return MBX_NOT_FINISHED;
- /* Get SubMinor version */
- outl(cpu_to_le32(0x1FD), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ /* setup wake call as IOCB callback */
+@@ -3771,7 +3950,6 @@ lpfc_intr_handler(int irq, void *dev_id)
+ uint32_t ha_copy;
+ uint32_t work_ha_copy;
+ unsigned long status;
+- int i;
+ uint32_t control;
- subminor = inb(ha->io_addr + IPS_REG_FLDP);
-@@ -2740,8 +2736,6 @@ ips_next(ips_ha_t * ha, int intr)
- SC->result = DID_OK;
- SC->host_scribble = NULL;
+ MAILBOX_t *mbox, *pmbox;
+@@ -3888,7 +4066,6 @@ lpfc_intr_handler(int irq, void *dev_id)
+ }
-- memset(SC->sense_buffer, 0, sizeof (SC->sense_buffer));
+ if (work_ha_copy & HA_ERATT) {
+- phba->link_state = LPFC_HBA_ERROR;
+ /*
+ * There was a link/board error. Read the
+ * status register to retrieve the error event
+@@ -3920,7 +4097,7 @@ lpfc_intr_handler(int irq, void *dev_id)
+ * Stray Mailbox Interrupt, mbxCommand <cmd>
+ * mbxStatus <status>
+ */
+- lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX |
++ lpfc_printf_log(phba, KERN_ERR, LOG_MBOX |
+ LOG_SLI,
+ "(%d):0304 Stray Mailbox "
+ "Interrupt mbxCommand x%x "
+@@ -3928,51 +4105,60 @@ lpfc_intr_handler(int irq, void *dev_id)
+ (vport ? vport->vpi : 0),
+ pmbox->mbxCommand,
+ pmbox->mbxStatus);
+- }
+- phba->last_completion_time = jiffies;
+- del_timer_sync(&phba->sli.mbox_tmo);
-
- scb->target_id = SC->device->id;
- scb->lun = SC->device->lun;
- scb->bus = SC->device->channel;
-@@ -2780,10 +2774,11 @@ ips_next(ips_ha_t * ha, int intr)
- scb->dcdb.cmd_attribute =
- ips_command_direction[scb->scsi_cmd->cmnd[0]];
-
-- /* Allow a WRITE BUFFER Command to Have no Data */
-- /* This is Used by Tape Flash Utilites */
-- if ((scb->scsi_cmd->cmnd[0] == WRITE_BUFFER) && (scb->data_len == 0))
-- scb->dcdb.cmd_attribute = 0;
-+ /* Allow a WRITE BUFFER Command to Have no Data */
-+ /* This is Used by Tape Flash Utilites */
-+ if ((scb->scsi_cmd->cmnd[0] == WRITE_BUFFER) &&
-+ (scb->data_len == 0))
-+ scb->dcdb.cmd_attribute = 0;
-
- if (!(scb->dcdb.cmd_attribute & 0x3))
- scb->dcdb.transfer_length = 0;
-@@ -3404,7 +3399,7 @@ ips_map_status(ips_ha_t * ha, ips_scb_t * scb, ips_stat_t * sp)
+- phba->sli.mbox_active = NULL;
+- if (pmb->mbox_cmpl) {
+- lpfc_sli_pcimem_bcopy(mbox, pmbox,
+- MAILBOX_CMD_SIZE);
+- }
+- if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
+- pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
++ /* clear mailbox attention bit */
++ work_ha_copy &= ~HA_MBATT;
++ } else {
++ phba->last_completion_time = jiffies;
++ del_timer(&phba->sli.mbox_tmo);
- /* Restrict access to physical DASD */
- if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
-- ips_scmd_buf_read(scb->scsi_cmd,
-+ ips_scmd_buf_read(scb->scsi_cmd,
- &inquiryData, sizeof (inquiryData));
- if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK) {
- errcode = DID_TIME_OUT;
-@@ -3438,13 +3433,11 @@ ips_map_status(ips_ha_t * ha, ips_scb_t * scb, ips_stat_t * sp)
- (IPS_DCDB_TABLE_TAPE *) & scb->dcdb;
- memcpy(scb->scsi_cmd->sense_buffer,
- tapeDCDB->sense_info,
-- sizeof (scb->scsi_cmd->
-- sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
- } else {
- memcpy(scb->scsi_cmd->sense_buffer,
- scb->dcdb.sense_info,
-- sizeof (scb->scsi_cmd->
-- sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
+- lpfc_debugfs_disc_trc(vport,
+- LPFC_DISC_TRC_MBOX_VPORT,
+- "MBOX dflt rpi: : status:x%x rpi:x%x",
+- (uint32_t)pmbox->mbxStatus,
+- pmbox->un.varWords[0], 0);
+-
+- if ( !pmbox->mbxStatus) {
+- mp = (struct lpfc_dmabuf *)
+- (pmb->context1);
+- ndlp = (struct lpfc_nodelist *)
+- pmb->context2;
+-
+- /* Reg_LOGIN of dflt RPI was successful.
+- * new lets get rid of the RPI using the
+- * same mbox buffer.
+- */
+- lpfc_unreg_login(phba, vport->vpi,
+- pmbox->un.varWords[0], pmb);
+- pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
+- pmb->context1 = mp;
+- pmb->context2 = ndlp;
+- pmb->vport = vport;
+- spin_lock(&phba->hbalock);
+- phba->sli.sli_flag &=
+- ~LPFC_SLI_MBOX_ACTIVE;
+- spin_unlock(&phba->hbalock);
+- goto send_current_mbox;
++ phba->sli.mbox_active = NULL;
++ if (pmb->mbox_cmpl) {
++ lpfc_sli_pcimem_bcopy(mbox, pmbox,
++ MAILBOX_CMD_SIZE);
++ }
++ if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
++ pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
++
++ lpfc_debugfs_disc_trc(vport,
++ LPFC_DISC_TRC_MBOX_VPORT,
++ "MBOX dflt rpi: : "
++ "status:x%x rpi:x%x",
++ (uint32_t)pmbox->mbxStatus,
++ pmbox->un.varWords[0], 0);
++
++ if (!pmbox->mbxStatus) {
++ mp = (struct lpfc_dmabuf *)
++ (pmb->context1);
++ ndlp = (struct lpfc_nodelist *)
++ pmb->context2;
++
++ /* Reg_LOGIN of dflt RPI was
++ * successful. new lets get
++ * rid of the RPI using the
++ * same mbox buffer.
++ */
++ lpfc_unreg_login(phba,
++ vport->vpi,
++ pmbox->un.varWords[0],
++ pmb);
++ pmb->mbox_cmpl =
++ lpfc_mbx_cmpl_dflt_rpi;
++ pmb->context1 = mp;
++ pmb->context2 = ndlp;
++ pmb->vport = vport;
++ spin_lock(&phba->hbalock);
++ phba->sli.sli_flag &=
++ ~LPFC_SLI_MBOX_ACTIVE;
++ spin_unlock(&phba->hbalock);
++ goto send_current_mbox;
++ }
}
- device_error = 2; /* check condition */
- }
-@@ -3824,7 +3817,6 @@ ips_send_cmd(ips_ha_t * ha, ips_scb_t * scb)
- /* attempted, a Check Condition occurred, and Sense */
- /* Data indicating an Invalid CDB OpCode is returned. */
- sp = (char *) scb->scsi_cmd->sense_buffer;
-- memset(sp, 0, sizeof (scb->scsi_cmd->sense_buffer));
-
- sp[0] = 0x70; /* Error Code */
- sp[2] = ILLEGAL_REQUEST; /* Sense Key 5 Illegal Req. */
-@@ -4090,10 +4082,10 @@ ips_chkstatus(ips_ha_t * ha, IPS_STATUS * pstatus)
- scb->scsi_cmd->result = errcode << 16;
- } else { /* bus == 0 */
- /* restrict access to physical drives */
-- if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
-- ips_scmd_buf_read(scb->scsi_cmd,
-+ if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
-+ ips_scmd_buf_read(scb->scsi_cmd,
- &inquiryData, sizeof (inquiryData));
-- if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK)
-+ if ((inquiryData.DeviceType & 0x1f) == TYPE_DISK)
- scb->scsi_cmd->result = DID_TIME_OUT << 16;
++ spin_lock(&phba->pport->work_port_lock);
++ phba->pport->work_port_events &=
++ ~WORKER_MBOX_TMO;
++ spin_unlock(&phba->pport->work_port_lock);
++ lpfc_mbox_cmpl_put(phba, pmb);
}
- } /* else */
-@@ -4393,8 +4385,6 @@ ips_free(ips_ha_t * ha)
- ha->mem_ptr = NULL;
+- spin_lock(&phba->pport->work_port_lock);
+- phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
+- spin_unlock(&phba->pport->work_port_lock);
+- lpfc_mbox_cmpl_put(phba, pmb);
}
+ if ((work_ha_copy & HA_MBATT) &&
+ (phba->sli.mbox_active == NULL)) {
+@@ -3990,10 +4176,6 @@ send_current_mbox:
+ lpfc_mbox_cmpl_put(phba, pmb);
+ goto send_next_mbox;
+ }
+- } else {
+- /* Turn on IOCB processing */
+- for (i = 0; i < phba->sli.num_rings; i++)
+- lpfc_sli_turn_on_ring(phba, i);
+ }
-- if (ha->mem_addr)
-- release_mem_region(ha->mem_addr, ha->mem_len);
- ha->mem_addr = 0;
-
- }
-@@ -4661,8 +4651,8 @@ ips_isinit_morpheus(ips_ha_t * ha)
- uint32_t bits;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h
+index 51b2b6b..7249fd2 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.h
++++ b/drivers/scsi/lpfc/lpfc_sli.h
+@@ -33,6 +33,7 @@ typedef enum _lpfc_ctx_cmd {
+ struct lpfc_iocbq {
+ /* lpfc_iocbqs are used in double linked lists */
+ struct list_head list;
++ struct list_head clist;
+ uint16_t iotag; /* pre-assigned IO tag */
+ uint16_t rsvd1;
- METHOD_TRACE("ips_is_init_morpheus", 1);
--
-- if (ips_isintr_morpheus(ha))
-+
-+ if (ips_isintr_morpheus(ha))
- ips_flush_and_reset(ha);
+@@ -44,6 +45,7 @@ struct lpfc_iocbq {
+ #define LPFC_IO_FCP 4 /* FCP command -- iocbq in scsi_buf */
+ #define LPFC_DRIVER_ABORTED 8 /* driver aborted this request */
+ #define LPFC_IO_FABRIC 0x10 /* Iocb send using fabric scheduler */
++#define LPFC_DELAY_MEM_FREE 0x20 /* Defer free'ing of FC data */
- post = readl(ha->mem_ptr + IPS_REG_I960_MSG0);
-@@ -4686,7 +4676,7 @@ ips_isinit_morpheus(ips_ha_t * ha)
- /* state ( was trying to INIT and an interrupt was already pending ) ... */
- /* */
- /****************************************************************************/
--static void
-+static void
- ips_flush_and_reset(ips_ha_t *ha)
- {
- ips_scb_t *scb;
-@@ -4718,9 +4708,9 @@ ips_flush_and_reset(ips_ha_t *ha)
- if (ret == IPS_SUCCESS) {
- time = 60 * IPS_ONE_SEC; /* Max Wait time is 60 seconds */
- done = 0;
--
-+
- while ((time > 0) && (!done)) {
-- done = ips_poll_for_flush_complete(ha);
-+ done = ips_poll_for_flush_complete(ha);
- /* This may look evil, but it's only done during extremely rare start-up conditions ! */
- udelay(1000);
- time--;
-@@ -4749,17 +4739,17 @@ static int
- ips_poll_for_flush_complete(ips_ha_t * ha)
- {
- IPS_STATUS cstatus;
--
-+
- while (TRUE) {
- cstatus.value = (*ha->func.statupd) (ha);
+ uint8_t abort_count;
+ uint8_t rsvd2;
+@@ -92,8 +94,6 @@ typedef struct lpfcMboxq {
+ #define MBX_POLL 1 /* poll mailbox till command done, then
+ return */
+ #define MBX_NOWAIT 2 /* issue command then return immediately */
+-#define MBX_STOP_IOCB 4 /* Stop iocb processing till mbox cmds
+- complete */
- if (cstatus.value == 0xffffffff) /* If No Interrupt to process */
- break;
--
-+
- /* Success is when we see the Flush Command ID */
-- if (cstatus.fields.command_id == IPS_MAX_CMDS )
-+ if (cstatus.fields.command_id == IPS_MAX_CMDS)
- return 1;
-- }
-+ }
+ #define LPFC_MAX_RING_MASK 4 /* max num of rctl/type masks allowed per
+ ring */
+@@ -129,9 +129,7 @@ struct lpfc_sli_ring {
+ uint16_t flag; /* ring flags */
+ #define LPFC_DEFERRED_RING_EVENT 0x001 /* Deferred processing a ring event */
+ #define LPFC_CALL_RING_AVAILABLE 0x002 /* indicates cmd was full */
+-#define LPFC_STOP_IOCB_MBX 0x010 /* Stop processing IOCB cmds mbox */
+ #define LPFC_STOP_IOCB_EVENT 0x020 /* Stop processing IOCB cmds event */
+-#define LPFC_STOP_IOCB_MASK 0x030 /* Stop processing IOCB cmds mask */
+ uint16_t abtsiotag; /* tracks next iotag to use for ABTS */
- return 0;
- }
-@@ -4903,7 +4893,7 @@ ips_init_copperhead(ips_ha_t * ha)
- /* Enable busmastering */
- outb(IPS_BIT_EBM, ha->io_addr + IPS_REG_SCPR);
+ uint32_t local_getidx; /* last available cmd index (from cmdGetInx) */
+@@ -163,9 +161,12 @@ struct lpfc_sli_ring {
+ struct list_head iocb_continueq;
+ uint16_t iocb_continueq_cnt; /* current length of queue */
+ uint16_t iocb_continueq_max; /* max length */
++ struct list_head iocb_continue_saveq;
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- /* fix for anaconda64 */
- outl(0, ha->io_addr + IPS_REG_NDAE);
+ struct lpfc_sli_ring_mask prt[LPFC_MAX_RING_MASK];
+ uint32_t num_mask; /* number of mask entries in prt array */
++ void (*lpfc_sli_rcv_async_status) (struct lpfc_hba *,
++ struct lpfc_sli_ring *, struct lpfc_iocbq *);
-@@ -4997,7 +4987,7 @@ ips_init_copperhead_memio(ips_ha_t * ha)
- /* Enable busmastering */
- writeb(IPS_BIT_EBM, ha->mem_ptr + IPS_REG_SCPR);
+ struct lpfc_sli_ring_stat stats; /* SLI statistical info */
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- /* fix for anaconda64 */
- writel(0, ha->mem_ptr + IPS_REG_NDAE);
+@@ -199,9 +200,6 @@ struct lpfc_hbq_init {
+ uint32_t add_count; /* number to allocate when starved */
+ } ;
-@@ -5142,7 +5132,7 @@ ips_reset_copperhead(ips_ha_t * ha)
- METHOD_TRACE("ips_reset_copperhead", 1);
+-#define LPFC_MAX_HBQ 16
+-
+-
+ /* Structure used to hold SLI statistical counters and info */
+ struct lpfc_sli_stat {
+ uint64_t mbox_stat_err; /* Mbox cmds completed status error */
+diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
+index 0081f49..4b633d3 100644
+--- a/drivers/scsi/lpfc/lpfc_version.h
++++ b/drivers/scsi/lpfc/lpfc_version.h
+@@ -1,7 +1,7 @@
+ /*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for *
+ * Fibre Channel Host Bus Adapters. *
+- * Copyright (C) 2004-2007 Emulex. All rights reserved. *
++ * Copyright (C) 2004-2008 Emulex. All rights reserved. *
+ * EMULEX and SLI are trademarks of Emulex. *
+ * www.emulex.com *
+ * *
+@@ -18,10 +18,10 @@
+ * included with this package. *
+ *******************************************************************/
- DEBUG_VAR(1, "(%s%d) ips_reset_copperhead: io addr: %x, irq: %d",
-- ips_name, ha->host_num, ha->io_addr, ha->irq);
-+ ips_name, ha->host_num, ha->io_addr, ha->pcidev->irq);
+-#define LPFC_DRIVER_VERSION "8.2.2"
++#define LPFC_DRIVER_VERSION "8.2.4"
- reset_counter = 0;
+ #define LPFC_DRIVER_NAME "lpfc"
-@@ -5187,7 +5177,7 @@ ips_reset_copperhead_memio(ips_ha_t * ha)
- METHOD_TRACE("ips_reset_copperhead_memio", 1);
+ #define LPFC_MODULE_DESC "Emulex LightPulse Fibre Channel SCSI driver " \
+ LPFC_DRIVER_VERSION
+-#define LPFC_COPYRIGHT "Copyright(c) 2004-2007 Emulex. All rights reserved."
++#define LPFC_COPYRIGHT "Copyright(c) 2004-2008 Emulex. All rights reserved."
+diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
+index dcb415e..9fad766 100644
+--- a/drivers/scsi/lpfc/lpfc_vport.c
++++ b/drivers/scsi/lpfc/lpfc_vport.c
+@@ -125,15 +125,26 @@ lpfc_vport_sparm(struct lpfc_hba *phba, struct lpfc_vport *vport)
+ pmb->vport = vport;
+ rc = lpfc_sli_issue_mbox_wait(phba, pmb, phba->fc_ratov * 2);
+ if (rc != MBX_SUCCESS) {
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
+- "1818 VPort failed init, mbxCmd x%x "
+- "READ_SPARM mbxStatus x%x, rc = x%x\n",
+- mb->mbxCommand, mb->mbxStatus, rc);
+- lpfc_mbuf_free(phba, mp->virt, mp->phys);
+- kfree(mp);
+- if (rc != MBX_TIMEOUT)
+- mempool_free(pmb, phba->mbox_mem_pool);
+- return -EIO;
++ if (signal_pending(current)) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
++ "1830 Signal aborted mbxCmd x%x\n",
++ mb->mbxCommand);
++ lpfc_mbuf_free(phba, mp->virt, mp->phys);
++ kfree(mp);
++ if (rc != MBX_TIMEOUT)
++ mempool_free(pmb, phba->mbox_mem_pool);
++ return -EINTR;
++ } else {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
++ "1818 VPort failed init, mbxCmd x%x "
++ "READ_SPARM mbxStatus x%x, rc = x%x\n",
++ mb->mbxCommand, mb->mbxStatus, rc);
++ lpfc_mbuf_free(phba, mp->virt, mp->phys);
++ kfree(mp);
++ if (rc != MBX_TIMEOUT)
++ mempool_free(pmb, phba->mbox_mem_pool);
++ return -EIO;
++ }
+ }
- DEBUG_VAR(1, "(%s%d) ips_reset_copperhead_memio: mem addr: %x, irq: %d",
-- ips_name, ha->host_num, ha->mem_addr, ha->irq);
-+ ips_name, ha->host_num, ha->mem_addr, ha->pcidev->irq);
+ memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm));
+@@ -204,6 +215,7 @@ lpfc_vport_create(struct fc_vport *fc_vport, bool disable)
+ int instance;
+ int vpi;
+ int rc = VPORT_ERROR;
++ int status;
- reset_counter = 0;
+ if ((phba->sli_rev < 3) ||
+ !(phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)) {
+@@ -248,13 +260,19 @@ lpfc_vport_create(struct fc_vport *fc_vport, bool disable)
+ vport->vpi = vpi;
+ lpfc_debugfs_initialize(vport);
-@@ -5233,7 +5223,7 @@ ips_reset_morpheus(ips_ha_t * ha)
- METHOD_TRACE("ips_reset_morpheus", 1);
+- if (lpfc_vport_sparm(phba, vport)) {
+- lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+- "1813 Create VPORT failed. "
+- "Cannot get sparam\n");
++ if ((status = lpfc_vport_sparm(phba, vport))) {
++ if (status == -EINTR) {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
++ "1831 Create VPORT Interrupted.\n");
++ rc = VPORT_ERROR;
++ } else {
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
++ "1813 Create VPORT failed. "
++ "Cannot get sparam\n");
++ rc = VPORT_NORESOURCES;
++ }
+ lpfc_free_vpi(phba, vpi);
+ destroy_port(vport);
+- rc = VPORT_NORESOURCES;
+ goto error_out;
+ }
- DEBUG_VAR(1, "(%s%d) ips_reset_morpheus: mem addr: %x, irq: %d",
-- ips_name, ha->host_num, ha->mem_addr, ha->irq);
-+ ips_name, ha->host_num, ha->mem_addr, ha->pcidev->irq);
+@@ -427,7 +445,6 @@ int
+ lpfc_vport_delete(struct fc_vport *fc_vport)
+ {
+ struct lpfc_nodelist *ndlp = NULL;
+- struct lpfc_nodelist *next_ndlp;
+ struct Scsi_Host *shost = (struct Scsi_Host *) fc_vport->shost;
+ struct lpfc_vport *vport = *(struct lpfc_vport **)fc_vport->dd_data;
+ struct lpfc_hba *phba = vport->phba;
+@@ -482,8 +499,18 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
- reset_counter = 0;
+ ndlp = lpfc_findnode_did(phba->pport, Fabric_DID);
+ if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
+- phba->link_state >= LPFC_LINK_UP) {
+-
++ phba->link_state >= LPFC_LINK_UP) {
++ if (vport->cfg_enable_da_id) {
++ timeout = msecs_to_jiffies(phba->fc_ratov * 2000);
++ if (!lpfc_ns_cmd(vport, SLI_CTNS_DA_ID, 0, 0))
++ while (vport->ct_flags && timeout)
++ timeout = schedule_timeout(timeout);
++ else
++ lpfc_printf_log(vport->phba, KERN_WARNING,
++ LOG_VPORT,
++ "1829 CT command failed to "
++ "delete objects on fabric. \n");
++ }
+ /* First look for the Fabric ndlp */
+ ndlp = lpfc_findnode_did(vport, Fabric_DID);
+ if (!ndlp) {
+@@ -503,23 +530,20 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
+ }
-@@ -5920,7 +5910,7 @@ ips_read_config(ips_ha_t * ha, int intr)
+ skip_logo:
++ lpfc_cleanup(vport);
+ lpfc_sli_host_down(vport);
- return (0);
- }
--
+- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
+- lpfc_disc_state_machine(vport, ndlp, NULL,
+- NLP_EVT_DEVICE_RECOVERY);
+- lpfc_disc_state_machine(vport, ndlp, NULL,
+- NLP_EVT_DEVICE_RM);
+- }
+-
+ lpfc_stop_vport_timers(vport);
+ lpfc_unreg_all_rpis(vport);
+- lpfc_unreg_default_rpis(vport);
+- /*
+- * Completion of unreg_vpi (lpfc_mbx_cmpl_unreg_vpi) does the
+- * scsi_host_put() to release the vport.
+- */
+- lpfc_mbx_unreg_vpi(vport);
+
- memcpy(ha->conf, ha->ioctl_data, sizeof(*ha->conf));
- return (1);
++ if (!(phba->pport->load_flag & FC_UNLOADING)) {
++ lpfc_unreg_default_rpis(vport);
++ /*
++ * Completion of unreg_vpi (lpfc_mbx_cmpl_unreg_vpi)
++ * does the scsi_host_put() to release the vport.
++ */
++ lpfc_mbx_unreg_vpi(vport);
++ }
+
+ lpfc_free_vpi(phba, vport->vpi);
+ vport->work_port_events = 0;
+@@ -532,16 +556,13 @@ skip_logo:
+ return VPORT_OK;
}
-@@ -5959,7 +5949,7 @@ ips_readwrite_page5(ips_ha_t * ha, int write, int intr)
- scb->cmd.nvram.buffer_addr = ha->ioctl_busaddr;
- if (write)
- memcpy(ha->ioctl_data, ha->nvram, sizeof(*ha->nvram));
--
-+
- /* issue the command */
- if (((ret =
- ips_send_wait(ha, scb, ips_cmd_timeout, intr)) == IPS_FAILURE)
-@@ -6196,32 +6186,32 @@ ips_erase_bios(ips_ha_t * ha)
- /* Clear the status register */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+-EXPORT_SYMBOL(lpfc_vport_create);
+-EXPORT_SYMBOL(lpfc_vport_delete);
+-
+ struct lpfc_vport **
+ lpfc_create_vport_work_array(struct lpfc_hba *phba)
+ {
+ struct lpfc_vport *port_iterator;
+ struct lpfc_vport **vports;
+ int index = 0;
+- vports = kzalloc(LPFC_MAX_VPORTS * sizeof(struct lpfc_vport *),
++ vports = kzalloc((phba->max_vpi + 1) * sizeof(struct lpfc_vport *),
+ GFP_KERNEL);
+ if (vports == NULL)
+ return NULL;
+@@ -560,12 +581,12 @@ lpfc_create_vport_work_array(struct lpfc_hba *phba)
+ }
- outb(0x50, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ void
+-lpfc_destroy_vport_work_array(struct lpfc_vport **vports)
++lpfc_destroy_vport_work_array(struct lpfc_hba *phba, struct lpfc_vport **vports)
+ {
+ int i;
+ if (vports == NULL)
+ return;
+- for (i=0; vports[i] != NULL && i < LPFC_MAX_VPORTS; i++)
++ for (i=0; vports[i] != NULL && i <= phba->max_vpi; i++)
+ scsi_host_put(lpfc_shost_from_vport(vports[i]));
+ kfree(vports);
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_vport.h b/drivers/scsi/lpfc/lpfc_vport.h
+index 91da177..96c4453 100644
+--- a/drivers/scsi/lpfc/lpfc_vport.h
++++ b/drivers/scsi/lpfc/lpfc_vport.h
+@@ -89,7 +89,7 @@ int lpfc_vport_delete(struct fc_vport *);
+ int lpfc_vport_getinfo(struct Scsi_Host *, struct vport_info *);
+ int lpfc_vport_tgt_remove(struct Scsi_Host *, uint, uint);
+ struct lpfc_vport **lpfc_create_vport_work_array(struct lpfc_hba *);
+-void lpfc_destroy_vport_work_array(struct lpfc_vport **);
++void lpfc_destroy_vport_work_array(struct lpfc_hba *, struct lpfc_vport **);
- /* Erase Setup */
- outb(0x20, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ /*
+ * queuecommand VPORT-specific return codes. Specified in the host byte code.
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index 66c6520..765c24d 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4889,7 +4889,7 @@ __megaraid_shutdown(adapter_t *adapter)
+ mdelay(1000);
+ }
- /* Erase Confirm */
- outb(0xD0, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+-static void
++static void __devexit
+ megaraid_remove_one(struct pci_dev *pdev)
+ {
+ struct Scsi_Host *host = pci_get_drvdata(pdev);
+diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c
+index c892310..24e32e4 100644
+--- a/drivers/scsi/megaraid/megaraid_mbox.c
++++ b/drivers/scsi/megaraid/megaraid_mbox.c
+@@ -300,7 +300,7 @@ static struct pci_device_id pci_id_table_g[] = {
+ MODULE_DEVICE_TABLE(pci, pci_id_table_g);
- /* Erase Status */
- outb(0x70, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- timeout = 80000; /* 80 seconds */
+-static struct pci_driver megaraid_pci_driver_g = {
++static struct pci_driver megaraid_pci_driver = {
+ .name = "megaraid",
+ .id_table = pci_id_table_g,
+ .probe = megaraid_probe_one,
+@@ -394,7 +394,7 @@ megaraid_init(void)
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- outl(0, ha->io_addr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
- }
-@@ -6241,13 +6231,13 @@ ips_erase_bios(ips_ha_t * ha)
- /* try to suspend the erase */
- outb(0xB0, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ // register as a PCI hot-plug driver module
+- rval = pci_register_driver(&megaraid_pci_driver_g);
++ rval = pci_register_driver(&megaraid_pci_driver);
+ if (rval < 0) {
+ con_log(CL_ANN, (KERN_WARNING
+ "megaraid: could not register hotplug support.\n"));
+@@ -415,7 +415,7 @@ megaraid_exit(void)
+ con_log(CL_DLEVEL1, (KERN_NOTICE "megaraid: unloading framework\n"));
- /* wait for 10 seconds */
- timeout = 10000;
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- outl(0, ha->io_addr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
- }
-@@ -6277,12 +6267,12 @@ ips_erase_bios(ips_ha_t * ha)
- /* Otherwise, we were successful */
- /* clear status */
- outb(0x50, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ // unregister as PCI hotplug driver
+- pci_unregister_driver(&megaraid_pci_driver_g);
++ pci_unregister_driver(&megaraid_pci_driver);
- /* enable reads */
- outb(0xFF, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ return;
+ }
+diff --git a/drivers/scsi/megaraid/megaraid_sas.c b/drivers/scsi/megaraid/megaraid_sas.c
+index e3c5c52..d7ec921 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.c
++++ b/drivers/scsi/megaraid/megaraid_sas.c
+@@ -2,7 +2,7 @@
+ *
+ * Linux MegaRAID driver for SAS based RAID controllers
+ *
+- * Copyright (c) 2003-2005 LSI Logic Corporation.
++ * Copyright (c) 2003-2005 LSI Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+@@ -10,7 +10,7 @@
+ * 2 of the License, or (at your option) any later version.
+ *
+ * FILE : megaraid_sas.c
+- * Version : v00.00.03.10-rc5
++ * Version : v00.00.03.16-rc1
+ *
+ * Authors:
+ * (email-id : megaraidlinux at lsi.com)
+@@ -31,6 +31,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/module.h>
+ #include <linux/spinlock.h>
++#include <linux/mutex.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+ #include <linux/uio.h>
+@@ -46,10 +47,18 @@
+ #include <scsi/scsi_host.h>
+ #include "megaraid_sas.h"
- return (0);
-@@ -6308,32 +6298,32 @@ ips_erase_bios_memio(ips_ha_t * ha)
++/*
++ * poll_mode_io:1- schedule complete completion from q cmd
++ */
++static unsigned int poll_mode_io;
++module_param_named(poll_mode_io, poll_mode_io, int, 0);
++MODULE_PARM_DESC(poll_mode_io,
++ "Complete cmds from IO path, (default=0)");
++
+ MODULE_LICENSE("GPL");
+ MODULE_VERSION(MEGASAS_VERSION);
+ MODULE_AUTHOR("megaraidlinux at lsi.com");
+-MODULE_DESCRIPTION("LSI Logic MegaRAID SAS Driver");
++MODULE_DESCRIPTION("LSI MegaRAID SAS Driver");
- /* Clear the status register */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ /*
+ * PCI ID table for all supported controllers
+@@ -76,6 +85,10 @@ static DEFINE_MUTEX(megasas_async_queue_mutex);
- writeb(0x50, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ static u32 megasas_dbg_lvl;
- /* Erase Setup */
- writeb(0x20, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
++static void
++megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
++ u8 alt_status);
++
+ /**
+ * megasas_get_cmd - Get a command from the free pool
+ * @instance: Adapter soft state
+@@ -855,6 +868,12 @@ megasas_queue_command(struct scsi_cmnd *scmd, void (*done) (struct scsi_cmnd *))
+ atomic_inc(&instance->fw_outstanding);
- /* Erase Confirm */
- writeb(0xD0, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ instance->instancet->fire_cmd(cmd->frame_phys_addr ,cmd->frame_count-1,instance->reg_set);
++ /*
++ * Check if we have pend cmds to be completed
++ */
++ if (poll_mode_io && atomic_read(&instance->fw_outstanding))
++ tasklet_schedule(&instance->isr_tasklet);
++
- /* Erase Status */
- writeb(0x70, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ return 0;
- timeout = 80000; /* 80 seconds */
+@@ -886,6 +905,64 @@ static int megasas_slave_configure(struct scsi_device *sdev)
+ }
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
+ /**
++ * megasas_complete_cmd_dpc - Returns FW's controller structure
++ * @instance_addr: Address of adapter soft state
++ *
++ * Tasklet to complete cmds
++ */
++static void megasas_complete_cmd_dpc(unsigned long instance_addr)
++{
++ u32 producer;
++ u32 consumer;
++ u32 context;
++ struct megasas_cmd *cmd;
++ struct megasas_instance *instance =
++ (struct megasas_instance *)instance_addr;
++ unsigned long flags;
++
++ /* If we have already declared adapter dead, donot complete cmds */
++ if (instance->hw_crit_error)
++ return;
++
++ spin_lock_irqsave(&instance->completion_lock, flags);
++
++ producer = *instance->producer;
++ consumer = *instance->consumer;
++
++ while (consumer != producer) {
++ context = instance->reply_queue[consumer];
++
++ cmd = instance->cmd_list[context];
++
++ megasas_complete_cmd(instance, cmd, DID_OK);
++
++ consumer++;
++ if (consumer == (instance->max_fw_cmds + 1)) {
++ consumer = 0;
++ }
++ }
++
++ *instance->consumer = producer;
++
++ spin_unlock_irqrestore(&instance->completion_lock, flags);
++
++ /*
++ * Check if we can restore can_queue
++ */
++ if (instance->flag & MEGASAS_FW_BUSY
++ && time_after(jiffies, instance->last_time + 5 * HZ)
++ && atomic_read(&instance->fw_outstanding) < 17) {
++
++ spin_lock_irqsave(instance->host->host_lock, flags);
++ instance->flag &= ~MEGASAS_FW_BUSY;
++ instance->host->can_queue =
++ instance->max_fw_cmds - MEGASAS_INT_CMDS;
++
++ spin_unlock_irqrestore(instance->host->host_lock, flags);
++ }
++}
++
++/**
+ * megasas_wait_for_outstanding - Wait for all outstanding cmds
+ * @instance: Adapter soft state
+ *
+@@ -908,6 +985,11 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance)
+ if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) {
+ printk(KERN_NOTICE "megasas: [%2d]waiting for %d "
+ "commands to complete\n",i,outstanding);
++ /*
++ * Call cmd completion routine. Cmd to be
++ * be completed directly without depending on isr.
++ */
++ megasas_complete_cmd_dpc((unsigned long)instance);
}
-@@ -6353,13 +6343,13 @@ ips_erase_bios_memio(ips_ha_t * ha)
-
- /* try to suspend the erase */
- writeb(0xB0, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- /* wait for 10 seconds */
- timeout = 10000;
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
- }
-@@ -6389,12 +6379,12 @@ ips_erase_bios_memio(ips_ha_t * ha)
- /* Otherwise, we were successful */
- /* clear status */
- writeb(0x50, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- /* enable reads */
- writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- return (0);
-@@ -6423,21 +6413,21 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- for (i = 0; i < buffersize; i++) {
- /* write a byte */
- outl(cpu_to_le32(i + offset), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- outb(0x40, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- outb(buffer[i], ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- /* wait up to one second */
- timeout = 1000;
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- outl(0, ha->io_addr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
- }
-@@ -6454,11 +6444,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- if (timeout == 0) {
- /* timeout error */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- outb(0xFF, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- return (1);
-@@ -6468,11 +6458,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- if (status & 0x18) {
- /* programming error */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- outb(0xFF, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- return (1);
-@@ -6481,11 +6471,11 @@ ips_program_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
-
- /* Enable reading */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- outb(0xFF, ha->io_addr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- return (0);
-@@ -6514,21 +6504,21 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- for (i = 0; i < buffersize; i++) {
- /* write a byte */
- writel(i + offset, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- writeb(0x40, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- writeb(buffer[i], ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- /* wait up to one second */
- timeout = 1000;
- while (timeout > 0) {
-- if (ha->revision_id == IPS_REVID_TROMBONE64) {
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64) {
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
- udelay(25); /* 25 us */
- }
-@@ -6545,11 +6535,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- if (timeout == 0) {
- /* timeout error */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
-
- writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- return (1);
-@@ -6559,11 +6549,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- if (status & 0x18) {
- /* programming error */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ msleep(1000);
+@@ -1100,7 +1182,7 @@ megasas_service_aen(struct megasas_instance *instance, struct megasas_cmd *cmd)
+ static struct scsi_host_template megasas_template = {
- writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ .module = THIS_MODULE,
+- .name = "LSI Logic SAS based MegaRAID driver",
++ .name = "LSI SAS based MegaRAID driver",
+ .proc_name = "megaraid_sas",
+ .slave_configure = megasas_slave_configure,
+ .queuecommand = megasas_queue_command,
+@@ -1749,57 +1831,119 @@ megasas_get_ctrl_info(struct megasas_instance *instance,
+ }
- return (1);
-@@ -6572,11 +6562,11 @@ ips_program_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ /**
+- * megasas_complete_cmd_dpc - Returns FW's controller structure
+- * @instance_addr: Address of adapter soft state
++ * megasas_issue_init_mfi - Initializes the FW
++ * @instance: Adapter soft state
+ *
+- * Tasklet to complete cmds
++ * Issues the INIT MFI cmd
+ */
+-static void megasas_complete_cmd_dpc(unsigned long instance_addr)
++static int
++megasas_issue_init_mfi(struct megasas_instance *instance)
+ {
+- u32 producer;
+- u32 consumer;
+ u32 context;
++
+ struct megasas_cmd *cmd;
+- struct megasas_instance *instance = (struct megasas_instance *)instance_addr;
+- unsigned long flags;
- /* Enable reading */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+- /* If we have already declared adapter dead, donot complete cmds */
+- if (instance->hw_crit_error)
+- return;
++ struct megasas_init_frame *init_frame;
++ struct megasas_init_queue_info *initq_info;
++ dma_addr_t init_frame_h;
++ dma_addr_t initq_info_h;
- writeb(0xFF, ha->mem_ptr + IPS_REG_FLDP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+- producer = *instance->producer;
+- consumer = *instance->consumer;
++ /*
++ * Prepare a init frame. Note the init frame points to queue info
++ * structure. Each frame has SGL allocated after first 64 bytes. For
++ * this frame - since we don't need any SGL - we use SGL's space as
++ * queue info structure
++ *
++ * We will not get a NULL command below. We just created the pool.
++ */
++ cmd = megasas_get_cmd(instance);
- return (0);
-@@ -6601,14 +6591,14 @@ ips_verify_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+- while (consumer != producer) {
+- context = instance->reply_queue[consumer];
++ init_frame = (struct megasas_init_frame *)cmd->frame;
++ initq_info = (struct megasas_init_queue_info *)
++ ((unsigned long)init_frame + 64);
- /* test 1st byte */
- outl(0, ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+- cmd = instance->cmd_list[context];
++ init_frame_h = cmd->frame_phys_addr;
++ initq_info_h = init_frame_h + 64;
- if (inb(ha->io_addr + IPS_REG_FLDP) != 0x55)
- return (1);
+- megasas_complete_cmd(instance, cmd, DID_OK);
++ context = init_frame->context;
++ memset(init_frame, 0, MEGAMFI_FRAME_SIZE);
++ memset(initq_info, 0, sizeof(struct megasas_init_queue_info));
++ init_frame->context = context;
- outl(cpu_to_le32(1), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- if (inb(ha->io_addr + IPS_REG_FLDP) != 0xAA)
- return (1);
-@@ -6617,7 +6607,7 @@ ips_verify_bios(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- for (i = 2; i < buffersize; i++) {
+- consumer++;
+- if (consumer == (instance->max_fw_cmds + 1)) {
+- consumer = 0;
+- }
+- }
++ initq_info->reply_queue_entries = instance->max_fw_cmds + 1;
++ initq_info->reply_queue_start_phys_addr_lo = instance->reply_queue_h;
- outl(cpu_to_le32(i + offset), ha->io_addr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+- *instance->consumer = producer;
++ initq_info->producer_index_phys_addr_lo = instance->producer_h;
++ initq_info->consumer_index_phys_addr_lo = instance->consumer_h;
++
++ init_frame->cmd = MFI_CMD_INIT;
++ init_frame->cmd_status = 0xFF;
++ init_frame->queue_info_new_phys_addr_lo = initq_info_h;
++
++ init_frame->data_xfer_len = sizeof(struct megasas_init_queue_info);
- checksum = (uint8_t) checksum + inb(ha->io_addr + IPS_REG_FLDP);
-@@ -6650,14 +6640,14 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
+ /*
+- * Check if we can restore can_queue
++ * disable the intr before firing the init frame to FW
+ */
+- if (instance->flag & MEGASAS_FW_BUSY
+- && time_after(jiffies, instance->last_time + 5 * HZ)
+- && atomic_read(&instance->fw_outstanding) < 17) {
++ instance->instancet->disable_intr(instance->reg_set);
- /* test 1st byte */
- writel(0, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+- spin_lock_irqsave(instance->host->host_lock, flags);
+- instance->flag &= ~MEGASAS_FW_BUSY;
+- instance->host->can_queue =
+- instance->max_fw_cmds - MEGASAS_INT_CMDS;
++ /*
++ * Issue the init frame in polled mode
++ */
- if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0x55)
- return (1);
+- spin_unlock_irqrestore(instance->host->host_lock, flags);
++ if (megasas_issue_polled(instance, cmd)) {
++ printk(KERN_ERR "megasas: Failed to init firmware\n");
++ megasas_return_cmd(instance, cmd);
++ goto fail_fw_init;
+ }
- writel(1, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
- if (readb(ha->mem_ptr + IPS_REG_FLDP) != 0xAA)
- return (1);
-@@ -6666,7 +6656,7 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
- for (i = 2; i < buffersize; i++) {
++ megasas_return_cmd(instance, cmd);
++
++ return 0;
++
++fail_fw_init:
++ return -EINVAL;
++}
++
++/**
++ * megasas_start_timer - Initializes a timer object
++ * @instance: Adapter soft state
++ * @timer: timer object to be initialized
++ * @fn: timer function
++ * @interval: time interval between timer function call
++ */
++static inline void
++megasas_start_timer(struct megasas_instance *instance,
++ struct timer_list *timer,
++ void *fn, unsigned long interval)
++{
++ init_timer(timer);
++ timer->expires = jiffies + interval;
++ timer->data = (unsigned long)instance;
++ timer->function = fn;
++ add_timer(timer);
++}
++
++/**
++ * megasas_io_completion_timer - Timer fn
++ * @instance_addr: Address of adapter soft state
++ *
++ * Schedules tasklet for cmd completion
++ * if poll_mode_io is set
++ */
++static void
++megasas_io_completion_timer(unsigned long instance_addr)
++{
++ struct megasas_instance *instance =
++ (struct megasas_instance *)instance_addr;
++
++ if (atomic_read(&instance->fw_outstanding))
++ tasklet_schedule(&instance->isr_tasklet);
++
++ /* Restart timer */
++ if (poll_mode_io)
++ mod_timer(&instance->io_completion_timer,
++ jiffies + MEGASAS_COMPLETION_TIMER_INTERVAL);
+ }
- writel(i + offset, ha->mem_ptr + IPS_REG_FLAP);
-- if (ha->revision_id == IPS_REVID_TROMBONE64)
-+ if (ha->pcidev->revision == IPS_REVID_TROMBONE64)
- udelay(25); /* 25 us */
+ /**
+@@ -1814,22 +1958,15 @@ static int megasas_init_mfi(struct megasas_instance *instance)
+ u32 reply_q_sz;
+ u32 max_sectors_1;
+ u32 max_sectors_2;
++ u32 tmp_sectors;
+ struct megasas_register_set __iomem *reg_set;
+-
+- struct megasas_cmd *cmd;
+ struct megasas_ctrl_info *ctrl_info;
+-
+- struct megasas_init_frame *init_frame;
+- struct megasas_init_queue_info *initq_info;
+- dma_addr_t init_frame_h;
+- dma_addr_t initq_info_h;
+-
+ /*
+ * Map the message registers
+ */
+ instance->base_addr = pci_resource_start(instance->pdev, 0);
- checksum =
-@@ -6837,24 +6827,18 @@ ips_register_scsi(int index)
+- if (pci_request_regions(instance->pdev, "megasas: LSI Logic")) {
++ if (pci_request_regions(instance->pdev, "megasas: LSI")) {
+ printk(KERN_DEBUG "megasas: IO memory region busy!\n");
+ return -EBUSY;
}
- ha = IPS_HA(sh);
- memcpy(ha, oldha, sizeof (ips_ha_t));
-- free_irq(oldha->irq, oldha);
-+ free_irq(oldha->pcidev->irq, oldha);
- /* Install the interrupt handler with the new ha */
-- if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
-+ if (request_irq(ha->pcidev->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
- IPS_PRINTK(KERN_WARNING, ha->pcidev,
- "Unable to install interrupt handler\n");
-- scsi_host_put(sh);
-- return -1;
-+ goto err_out_sh;
+@@ -1900,52 +2037,8 @@ static int megasas_init_mfi(struct megasas_instance *instance)
+ goto fail_reply_queue;
}
- kfree(oldha);
-- ips_sh[index] = sh;
-- ips_ha[index] = ha;
-
- /* Store away needed values for later use */
-- sh->io_port = ha->io_addr;
-- sh->n_io_port = ha->io_addr ? 255 : 0;
- sh->unique_id = (ha->io_addr) ? ha->io_addr : ha->mem_addr;
-- sh->irq = ha->irq;
- sh->sg_tablesize = sh->hostt->sg_tablesize;
- sh->can_queue = sh->hostt->can_queue;
- sh->cmd_per_lun = sh->hostt->cmd_per_lun;
-@@ -6867,10 +6851,21 @@ ips_register_scsi(int index)
- sh->max_channel = ha->nbus - 1;
- sh->can_queue = ha->max_cmds - 1;
+- /*
+- * Prepare a init frame. Note the init frame points to queue info
+- * structure. Each frame has SGL allocated after first 64 bytes. For
+- * this frame - since we don't need any SGL - we use SGL's space as
+- * queue info structure
+- *
+- * We will not get a NULL command below. We just created the pool.
+- */
+- cmd = megasas_get_cmd(instance);
+-
+- init_frame = (struct megasas_init_frame *)cmd->frame;
+- initq_info = (struct megasas_init_queue_info *)
+- ((unsigned long)init_frame + 64);
+-
+- init_frame_h = cmd->frame_phys_addr;
+- initq_info_h = init_frame_h + 64;
+-
+- memset(init_frame, 0, MEGAMFI_FRAME_SIZE);
+- memset(initq_info, 0, sizeof(struct megasas_init_queue_info));
+-
+- initq_info->reply_queue_entries = instance->max_fw_cmds + 1;
+- initq_info->reply_queue_start_phys_addr_lo = instance->reply_queue_h;
+-
+- initq_info->producer_index_phys_addr_lo = instance->producer_h;
+- initq_info->consumer_index_phys_addr_lo = instance->consumer_h;
+-
+- init_frame->cmd = MFI_CMD_INIT;
+- init_frame->cmd_status = 0xFF;
+- init_frame->queue_info_new_phys_addr_lo = initq_info_h;
+-
+- init_frame->data_xfer_len = sizeof(struct megasas_init_queue_info);
+-
+- /*
+- * disable the intr before firing the init frame to FW
+- */
+- instance->instancet->disable_intr(instance->reg_set);
+-
+- /*
+- * Issue the init frame in polled mode
+- */
+- if (megasas_issue_polled(instance, cmd)) {
+- printk(KERN_DEBUG "megasas: Failed to init firmware\n");
++ if (megasas_issue_init_mfi(instance))
+ goto fail_fw_init;
+- }
+-
+- megasas_return_cmd(instance, cmd);
-- scsi_add_host(sh, NULL);
-+ if (scsi_add_host(sh, &ha->pcidev->dev))
-+ goto err_out;
-+
-+ ips_sh[index] = sh;
-+ ips_ha[index] = ha;
-+
- scsi_scan_host(sh);
+ ctrl_info = kmalloc(sizeof(struct megasas_ctrl_info), GFP_KERNEL);
- return 0;
-+
-+err_out:
-+ free_irq(ha->pcidev->irq, ha);
-+err_out_sh:
-+ scsi_host_put(sh);
-+ return -1;
- }
+@@ -1958,17 +2051,20 @@ static int megasas_init_mfi(struct megasas_instance *instance)
+ * Note that older firmwares ( < FW ver 30) didn't report information
+ * to calculate max_sectors_1. So the number ended up as zero always.
+ */
++ tmp_sectors = 0;
+ if (ctrl_info && !megasas_get_ctrl_info(instance, ctrl_info)) {
- /*---------------------------------------------------------------------------*/
-@@ -6882,20 +6877,14 @@ ips_register_scsi(int index)
- static void __devexit
- ips_remove_device(struct pci_dev *pci_dev)
- {
-- int i;
-- struct Scsi_Host *sh;
-- ips_ha_t *ha;
-+ struct Scsi_Host *sh = pci_get_drvdata(pci_dev);
+ max_sectors_1 = (1 << ctrl_info->stripe_sz_ops.min) *
+ ctrl_info->max_strips_per_io;
+ max_sectors_2 = ctrl_info->max_request_size;
-- for (i = 0; i < IPS_MAX_ADAPTERS; i++) {
-- ha = ips_ha[i];
-- if (ha) {
-- if ((pci_dev->bus->number == ha->pcidev->bus->number) &&
-- (pci_dev->devfn == ha->pcidev->devfn)) {
-- sh = ips_sh[i];
-- ips_release(sh);
-- }
-- }
-- }
-+ pci_set_drvdata(pci_dev, NULL);
-+
-+ ips_release(sh);
+- instance->max_sectors_per_req = (max_sectors_1 < max_sectors_2)
+- ? max_sectors_1 : max_sectors_2;
+- } else
+- instance->max_sectors_per_req = instance->max_num_sge *
+- PAGE_SIZE / 512;
++ tmp_sectors = min_t(u32, max_sectors_1 , max_sectors_2);
++ }
+
-+ pci_release_regions(pci_dev);
-+ pci_disable_device(pci_dev);
- }
++ instance->max_sectors_per_req = instance->max_num_sge *
++ PAGE_SIZE / 512;
++ if (tmp_sectors && (instance->max_sectors_per_req > tmp_sectors))
++ instance->max_sectors_per_req = tmp_sectors;
- /****************************************************************************/
-@@ -6949,12 +6938,17 @@ module_exit(ips_module_exit);
- static int __devinit
- ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent)
- {
-- int uninitialized_var(index);
-+ int index = -1;
- int rc;
+ kfree(ctrl_info);
- METHOD_TRACE("ips_insert_device", 1);
-- if (pci_enable_device(pci_dev))
-- return -1;
-+ rc = pci_enable_device(pci_dev);
-+ if (rc)
-+ return rc;
+@@ -1976,12 +2072,17 @@ static int megasas_init_mfi(struct megasas_instance *instance)
+ * Setup tasklet for cmd completion
+ */
+
+- tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
+- (unsigned long)instance);
++ tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
++ (unsigned long)instance);
+
-+ rc = pci_request_regions(pci_dev, "ips");
-+ if (rc)
-+ goto err_out;
++ /* Initialize the cmd completion timer */
++ if (poll_mode_io)
++ megasas_start_timer(instance, &instance->io_completion_timer,
++ megasas_io_completion_timer,
++ MEGASAS_COMPLETION_TIMER_INTERVAL);
+ return 0;
- rc = ips_init_phase1(pci_dev, &index);
- if (rc == SUCCESS)
-@@ -6970,6 +6964,19 @@ ips_insert_device(struct pci_dev *pci_dev, const struct pci_device_id *ent)
- ips_num_controllers++;
+ fail_fw_init:
+- megasas_return_cmd(instance, cmd);
- ips_next_controller = ips_num_controllers;
+ pci_free_consistent(instance->pdev, reply_q_sz,
+ instance->reply_queue, instance->reply_queue_h);
+@@ -2263,6 +2364,28 @@ static int megasas_io_attach(struct megasas_instance *instance)
+ return 0;
+ }
+
++static int
++megasas_set_dma_mask(struct pci_dev *pdev)
++{
++ /*
++ * All our contollers are capable of performing 64-bit DMA
++ */
++ if (IS_DMA64) {
++ if (pci_set_dma_mask(pdev, DMA_64BIT_MASK) != 0) {
+
-+ if (rc < 0) {
-+ rc = -ENODEV;
-+ goto err_out_regions;
++ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
++ goto fail_set_dma_mask;
++ }
++ } else {
++ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
++ goto fail_set_dma_mask;
+ }
-+
-+ pci_set_drvdata(pci_dev, ips_sh[index]);
+ return 0;
+
-+err_out_regions:
-+ pci_release_regions(pci_dev);
-+err_out:
-+ pci_disable_device(pci_dev);
- return rc;
- }
-
-@@ -6992,8 +6999,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- uint32_t mem_len;
- uint8_t bus;
- uint8_t func;
-- uint8_t irq;
-- uint16_t subdevice_id;
- int j;
- int index;
- dma_addr_t dma_address;
-@@ -7004,7 +7009,7 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- METHOD_TRACE("ips_init_phase1", 1);
- index = IPS_MAX_ADAPTERS;
- for (j = 0; j < IPS_MAX_ADAPTERS; j++) {
-- if (ips_ha[j] == 0) {
-+ if (ips_ha[j] == NULL) {
- index = j;
- break;
- }
-@@ -7014,7 +7019,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- return -1;
-
- /* stuff that we get in dev */
-- irq = pci_dev->irq;
- bus = pci_dev->bus->number;
- func = pci_dev->devfn;
++fail_set_dma_mask:
++ return 1;
++}
++
+ /**
+ * megasas_probe_one - PCI hotplug entry point
+ * @pdev: PCI device structure
+@@ -2296,19 +2419,8 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
-@@ -7042,34 +7046,17 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- uint32_t base;
- uint32_t offs;
+ pci_set_master(pdev);
-- if (!request_mem_region(mem_addr, mem_len, "ips")) {
-- IPS_PRINTK(KERN_WARNING, pci_dev,
-- "Couldn't allocate IO Memory space %x len %d.\n",
-- mem_addr, mem_len);
-- return -1;
-- }
+- /*
+- * All our contollers are capable of performing 64-bit DMA
+- */
+- if (IS_DMA64) {
+- if (pci_set_dma_mask(pdev, DMA_64BIT_MASK) != 0) {
-
- base = mem_addr & PAGE_MASK;
- offs = mem_addr - base;
- ioremap_ptr = ioremap(base, PAGE_SIZE);
-+ if (!ioremap_ptr)
-+ return -1;
- mem_ptr = ioremap_ptr + offs;
- } else {
- ioremap_ptr = NULL;
- mem_ptr = NULL;
- }
-
-- /* setup I/O mapped area (if applicable) */
-- if (io_addr) {
-- if (!request_region(io_addr, io_len, "ips")) {
-- IPS_PRINTK(KERN_WARNING, pci_dev,
-- "Couldn't allocate IO space %x len %d.\n",
-- io_addr, io_len);
-- return -1;
+- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
+- goto fail_set_dma_mask;
- }
+- } else {
+- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
+- goto fail_set_dma_mask;
- }
--
-- subdevice_id = pci_dev->subsystem_device;
--
- /* found a controller */
- ha = kzalloc(sizeof (ips_ha_t), GFP_KERNEL);
- if (ha == NULL) {
-@@ -7078,13 +7065,11 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- return -1;
- }
-
--
- ips_sh[index] = NULL;
- ips_ha[index] = ha;
- ha->active = 1;
-
- /* Store info in HA structure */
-- ha->irq = irq;
- ha->io_addr = io_addr;
- ha->io_len = io_len;
- ha->mem_addr = mem_addr;
-@@ -7092,10 +7077,7 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
- ha->mem_ptr = mem_ptr;
- ha->ioremap_ptr = ioremap_ptr;
- ha->host_num = (uint32_t) index;
-- ha->revision_id = pci_dev->revision;
- ha->slot_num = PCI_SLOT(pci_dev->devfn);
-- ha->device_id = pci_dev->device;
-- ha->subdevice_id = subdevice_id;
- ha->pcidev = pci_dev;
-
- /*
-@@ -7240,7 +7222,7 @@ ips_init_phase2(int index)
- }
-
- /* Install the interrupt handler */
-- if (request_irq(ha->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
-+ if (request_irq(ha->pcidev->irq, do_ipsintr, IRQF_SHARED, ips_name, ha)) {
- IPS_PRINTK(KERN_WARNING, ha->pcidev,
- "Unable to install interrupt handler\n");
- return ips_abort_init(ha, index);
-@@ -7253,14 +7235,14 @@ ips_init_phase2(int index)
- if (!ips_allocatescbs(ha)) {
- IPS_PRINTK(KERN_WARNING, ha->pcidev,
- "Unable to allocate a CCB\n");
-- free_irq(ha->irq, ha);
-+ free_irq(ha->pcidev->irq, ha);
- return ips_abort_init(ha, index);
- }
-
- if (!ips_hainit(ha)) {
- IPS_PRINTK(KERN_WARNING, ha->pcidev,
- "Unable to initialize controller\n");
-- free_irq(ha->irq, ha);
-+ free_irq(ha->pcidev->irq, ha);
- return ips_abort_init(ha, index);
- }
- /* Free the temporary SCB */
-@@ -7270,7 +7252,7 @@ ips_init_phase2(int index)
- if (!ips_allocatescbs(ha)) {
- IPS_PRINTK(KERN_WARNING, ha->pcidev,
- "Unable to allocate CCBs\n");
-- free_irq(ha->irq, ha);
-+ free_irq(ha->pcidev->irq, ha);
- return ips_abort_init(ha, index);
- }
-
-diff --git a/drivers/scsi/ips.h b/drivers/scsi/ips.h
-index 3bcbd9f..e0657b6 100644
---- a/drivers/scsi/ips.h
-+++ b/drivers/scsi/ips.h
-@@ -60,14 +60,14 @@
- */
- #define IPS_HA(x) ((ips_ha_t *) x->hostdata)
- #define IPS_COMMAND_ID(ha, scb) (int) (scb - ha->scbs)
-- #define IPS_IS_TROMBONE(ha) (((ha->device_id == IPS_DEVICEID_COPPERHEAD) && \
-- (ha->revision_id >= IPS_REVID_TROMBONE32) && \
-- (ha->revision_id <= IPS_REVID_TROMBONE64)) ? 1 : 0)
-- #define IPS_IS_CLARINET(ha) (((ha->device_id == IPS_DEVICEID_COPPERHEAD) && \
-- (ha->revision_id >= IPS_REVID_CLARINETP1) && \
-- (ha->revision_id <= IPS_REVID_CLARINETP3)) ? 1 : 0)
-- #define IPS_IS_MORPHEUS(ha) (ha->device_id == IPS_DEVICEID_MORPHEUS)
-- #define IPS_IS_MARCO(ha) (ha->device_id == IPS_DEVICEID_MARCO)
-+ #define IPS_IS_TROMBONE(ha) (((ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) && \
-+ (ha->pcidev->revision >= IPS_REVID_TROMBONE32) && \
-+ (ha->pcidev->revision <= IPS_REVID_TROMBONE64)) ? 1 : 0)
-+ #define IPS_IS_CLARINET(ha) (((ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) && \
-+ (ha->pcidev->revision >= IPS_REVID_CLARINETP1) && \
-+ (ha->pcidev->revision <= IPS_REVID_CLARINETP3)) ? 1 : 0)
-+ #define IPS_IS_MORPHEUS(ha) (ha->pcidev->device == IPS_DEVICEID_MORPHEUS)
-+ #define IPS_IS_MARCO(ha) (ha->pcidev->device == IPS_DEVICEID_MARCO)
- #define IPS_USE_I2O_DELIVER(ha) ((IPS_IS_MORPHEUS(ha) || \
- (IPS_IS_TROMBONE(ha) && \
- (ips_force_i2o))) ? 1 : 0)
-@@ -92,7 +92,7 @@
- #ifndef min
- #define min(x,y) ((x) < (y) ? x : y)
- #endif
--
-+
- #ifndef __iomem /* For clean compiles in earlier kernels without __iomem annotations */
- #define __iomem
- #endif
-@@ -171,7 +171,7 @@
- #define IPS_CMD_DOWNLOAD 0x20
- #define IPS_CMD_RW_BIOSFW 0x22
- #define IPS_CMD_GET_VERSION_INFO 0xC6
-- #define IPS_CMD_RESET_CHANNEL 0x1A
-+ #define IPS_CMD_RESET_CHANNEL 0x1A
++ if (megasas_set_dma_mask(pdev))
++ goto fail_set_dma_mask;
- /*
- * Adapter Equates
-@@ -458,7 +458,7 @@ typedef struct {
- uint32_t reserved3;
- uint32_t buffer_addr;
- uint32_t reserved4;
--} IPS_IOCTL_CMD, *PIPS_IOCTL_CMD;
-+} IPS_IOCTL_CMD, *PIPS_IOCTL_CMD;
+ host = scsi_host_alloc(&megasas_template,
+ sizeof(struct megasas_instance));
+@@ -2357,8 +2469,9 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ init_waitqueue_head(&instance->abort_cmd_wait_q);
- typedef struct {
- uint8_t op_code;
-@@ -552,7 +552,7 @@ typedef struct {
- uint32_t cccr;
- } IPS_NVRAM_CMD, *PIPS_NVRAM_CMD;
+ spin_lock_init(&instance->cmd_pool_lock);
++ spin_lock_init(&instance->completion_lock);
--typedef struct
-+typedef struct
- {
- uint8_t op_code;
- uint8_t command_id;
-@@ -650,7 +650,7 @@ typedef struct {
- uint8_t device_address;
- uint8_t cmd_attribute;
- uint8_t cdb_length;
-- uint8_t reserved_for_LUN;
-+ uint8_t reserved_for_LUN;
- uint32_t transfer_length;
- uint32_t buffer_pointer;
- uint16_t sg_count;
-@@ -790,7 +790,7 @@ typedef struct {
- /* SubSystem Parameter[4] */
- #define IPS_GET_VERSION_SUPPORT 0x00018000 /* Mask for Versioning Support */
+- sema_init(&instance->aen_mutex, 1);
++ mutex_init(&instance->aen_mutex);
+ sema_init(&instance->ioctl_sem, MEGASAS_INT_CMDS);
--typedef struct
-+typedef struct
+ /*
+@@ -2490,8 +2603,10 @@ static void megasas_flush_cache(struct megasas_instance *instance)
+ /**
+ * megasas_shutdown_controller - Instructs FW to shutdown the controller
+ * @instance: Adapter soft state
++ * @opcode: Shutdown/Hibernate
+ */
+-static void megasas_shutdown_controller(struct megasas_instance *instance)
++static void megasas_shutdown_controller(struct megasas_instance *instance,
++ u32 opcode)
{
- uint32_t revision;
- uint8_t bootBlkVersion[32];
-@@ -1034,7 +1034,6 @@ typedef struct ips_ha {
- uint8_t ha_id[IPS_MAX_CHANNELS+1];
- uint32_t dcdb_active[IPS_MAX_CHANNELS];
- uint32_t io_addr; /* Base I/O address */
-- uint8_t irq; /* IRQ for adapter */
- uint8_t ntargets; /* Number of targets */
- uint8_t nbus; /* Number of buses */
- uint8_t nlun; /* Number of Luns */
-@@ -1066,10 +1065,7 @@ typedef struct ips_ha {
- int ioctl_reset; /* IOCTL Requested Reset Flag */
- uint16_t reset_count; /* number of resets */
- time_t last_ffdc; /* last time we sent ffdc info*/
-- uint8_t revision_id; /* Revision level */
-- uint16_t device_id; /* PCI device ID */
- uint8_t slot_num; /* PCI Slot Number */
-- uint16_t subdevice_id; /* Subsystem device ID */
- int ioctl_len; /* size of ioctl buffer */
- dma_addr_t ioctl_busaddr; /* dma address of ioctl buffer*/
- uint8_t bios_version[8]; /* BIOS Revision */
-diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
-index 57ce225..e5be5fd 100644
---- a/drivers/scsi/iscsi_tcp.c
-+++ b/drivers/scsi/iscsi_tcp.c
-@@ -48,7 +48,7 @@ MODULE_AUTHOR("Dmitry Yusupov <dmitry_yus at yahoo.com>, "
- "Alex Aizman <itn780 at yahoo.com>");
- MODULE_DESCRIPTION("iSCSI/TCP data-path");
- MODULE_LICENSE("GPL");
--/* #define DEBUG_TCP */
-+#undef DEBUG_TCP
- #define DEBUG_ASSERT
+ struct megasas_cmd *cmd;
+ struct megasas_dcmd_frame *dcmd;
+@@ -2514,7 +2629,7 @@ static void megasas_shutdown_controller(struct megasas_instance *instance)
+ dcmd->flags = MFI_FRAME_DIR_NONE;
+ dcmd->timeout = 0;
+ dcmd->data_xfer_len = 0;
+- dcmd->opcode = MR_DCMD_CTRL_SHUTDOWN;
++ dcmd->opcode = opcode;
- #ifdef DEBUG_TCP
-@@ -67,115 +67,429 @@ MODULE_LICENSE("GPL");
- static unsigned int iscsi_max_lun = 512;
- module_param_named(max_lun, iscsi_max_lun, uint, S_IRUGO);
+ megasas_issue_blocked_cmd(instance, cmd);
-+static int iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment);
-+
-+/*
-+ * Scatterlist handling: inside the iscsi_segment, we
-+ * remember an index into the scatterlist, and set data/size
-+ * to the current scatterlist entry. For highmem pages, we
-+ * kmap as needed.
-+ *
-+ * Note that the page is unmapped when we return from
-+ * TCP's data_ready handler, so we may end up mapping and
-+ * unmapping the same page repeatedly. The whole reason
-+ * for this is that we shouldn't keep the page mapped
-+ * outside the softirq.
-+ */
-+
-+/**
-+ * iscsi_tcp_segment_init_sg - init indicated scatterlist entry
-+ * @segment: the buffer object
-+ * @sg: scatterlist
-+ * @offset: byte offset into that sg entry
-+ *
-+ * This function sets up the segment so that subsequent
-+ * data is copied to the indicated sg entry, at the given
-+ * offset.
-+ */
- static inline void
--iscsi_buf_init_iov(struct iscsi_buf *ibuf, char *vbuf, int size)
-+iscsi_tcp_segment_init_sg(struct iscsi_segment *segment,
-+ struct scatterlist *sg, unsigned int offset)
- {
-- sg_init_one(&ibuf->sg, vbuf, size);
-- ibuf->sent = 0;
-- ibuf->use_sendmsg = 1;
-+ segment->sg = sg;
-+ segment->sg_offset = offset;
-+ segment->size = min(sg->length - offset,
-+ segment->total_size - segment->total_copied);
-+ segment->data = NULL;
+@@ -2524,6 +2639,139 @@ static void megasas_shutdown_controller(struct megasas_instance *instance)
}
-+/**
-+ * iscsi_tcp_segment_map - map the current S/G page
-+ * @segment: iscsi_segment
-+ * @recv: 1 if called from recv path
-+ *
-+ * We only need to possibly kmap data if scatter lists are being used,
-+ * because the iscsi passthrough and internal IO paths will never use high
-+ * mem pages.
+ /**
++ * megasas_suspend - driver suspend entry point
++ * @pdev: PCI device structure
++ * @state: PCI power state to suspend routine
+ */
- static inline void
--iscsi_buf_init_sg(struct iscsi_buf *ibuf, struct scatterlist *sg)
-+iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
- {
-- sg_init_table(&ibuf->sg, 1);
-- sg_set_page(&ibuf->sg, sg_page(sg), sg->length, sg->offset);
-+ struct scatterlist *sg;
++static int __devinit
++megasas_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++ struct Scsi_Host *host;
++ struct megasas_instance *instance;
+
-+ if (segment->data != NULL || !segment->sg)
-+ return;
++ instance = pci_get_drvdata(pdev);
++ host = instance->host;
+
-+ sg = segment->sg;
-+ BUG_ON(segment->sg_mapped);
-+ BUG_ON(sg->length == 0);
++ if (poll_mode_io)
++ del_timer_sync(&instance->io_completion_timer);
+
- /*
-- * Fastpath: sg element fits into single page
-+ * If the page count is greater than one it is ok to send
-+ * to the network layer's zero copy send path. If not we
-+ * have to go the slow sendmsg path. We always map for the
-+ * recv path.
- */
-- if (sg->length + sg->offset <= PAGE_SIZE && !PageSlab(sg_page(sg)))
-- ibuf->use_sendmsg = 0;
-- else
-- ibuf->use_sendmsg = 1;
-- ibuf->sent = 0;
-+ if (page_count(sg_page(sg)) >= 1 && !recv)
-+ return;
++ megasas_flush_cache(instance);
++ megasas_shutdown_controller(instance, MR_DCMD_HIBERNATE_SHUTDOWN);
++ tasklet_kill(&instance->isr_tasklet);
++
++ pci_set_drvdata(instance->pdev, instance);
++ instance->instancet->disable_intr(instance->reg_set);
++ free_irq(instance->pdev->irq, instance);
++
++ pci_save_state(pdev);
++ pci_disable_device(pdev);
++
++ pci_set_power_state(pdev, pci_choose_state(pdev, state));
++
++ return 0;
++}
+
-+ debug_tcp("iscsi_tcp_segment_map %s %p\n", recv ? "recv" : "xmit",
-+ segment);
-+ segment->sg_mapped = kmap_atomic(sg_page(sg), KM_SOFTIRQ0);
-+ segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
- }
-
--static inline int
--iscsi_buf_left(struct iscsi_buf *ibuf)
-+static inline void
-+iscsi_tcp_segment_unmap(struct iscsi_segment *segment)
- {
-- int rc;
-+ debug_tcp("iscsi_tcp_segment_unmap %p\n", segment);
-
-- rc = ibuf->sg.length - ibuf->sent;
-- BUG_ON(rc < 0);
-- return rc;
-+ if (segment->sg_mapped) {
-+ debug_tcp("iscsi_tcp_segment_unmap valid\n");
-+ kunmap_atomic(segment->sg_mapped, KM_SOFTIRQ0);
-+ segment->sg_mapped = NULL;
-+ segment->data = NULL;
-+ }
- }
-
-+/*
-+ * Splice the digest buffer into the buffer
-+ */
- static inline void
--iscsi_hdr_digest(struct iscsi_conn *conn, struct iscsi_buf *buf,
-- u8* crc)
-+iscsi_tcp_segment_splice_digest(struct iscsi_segment *segment, void *digest)
- {
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
--
-- crypto_hash_digest(&tcp_conn->tx_hash, &buf->sg, buf->sg.length, crc);
-- buf->sg.length += sizeof(u32);
-+ segment->data = digest;
-+ segment->digest_len = ISCSI_DIGEST_SIZE;
-+ segment->total_size += ISCSI_DIGEST_SIZE;
-+ segment->size = ISCSI_DIGEST_SIZE;
-+ segment->copied = 0;
-+ segment->sg = NULL;
-+ segment->hash = NULL;
- }
-
+/**
-+ * iscsi_tcp_segment_done - check whether the segment is complete
-+ * @segment: iscsi segment to check
-+ * @recv: set to one of this is called from the recv path
-+ * @copied: number of bytes copied
-+ *
-+ * Check if we're done receiving this segment. If the receive
-+ * buffer is full but we expect more data, move on to the
-+ * next entry in the scatterlist.
-+ *
-+ * If the amount of data we received isn't a multiple of 4,
-+ * we will transparently receive the pad bytes, too.
-+ *
-+ * This function must be re-entrant.
++ * megasas_resume- driver resume entry point
++ * @pdev: PCI device structure
+ */
- static inline int
--iscsi_hdr_extract(struct iscsi_tcp_conn *tcp_conn)
-+iscsi_tcp_segment_done(struct iscsi_segment *segment, int recv, unsigned copied)
- {
-- struct sk_buff *skb = tcp_conn->in.skb;
--
-- tcp_conn->in.zero_copy_hdr = 0;
-+ static unsigned char padbuf[ISCSI_PAD_LEN];
-+ struct scatterlist sg;
-+ unsigned int pad;
-
-- if (tcp_conn->in.copy >= tcp_conn->hdr_size &&
-- tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER) {
-+ debug_tcp("copied %u %u size %u %s\n", segment->copied, copied,
-+ segment->size, recv ? "recv" : "xmit");
-+ if (segment->hash && copied) {
- /*
-- * Zero-copy PDU Header: using connection context
-- * to store header pointer.
-+ * If a segment is kmapd we must unmap it before sending
-+ * to the crypto layer since that will try to kmap it again.
- */
-- if (skb_shinfo(skb)->frag_list == NULL &&
-- !skb_shinfo(skb)->nr_frags) {
-- tcp_conn->in.hdr = (struct iscsi_hdr *)
-- ((char*)skb->data + tcp_conn->in.offset);
-- tcp_conn->in.zero_copy_hdr = 1;
-+ iscsi_tcp_segment_unmap(segment);
++static int __devinit
++megasas_resume(struct pci_dev *pdev)
++{
++ int rval;
++ struct Scsi_Host *host;
++ struct megasas_instance *instance;
+
-+ if (!segment->data) {
-+ sg_init_table(&sg, 1);
-+ sg_set_page(&sg, sg_page(segment->sg), copied,
-+ segment->copied + segment->sg_offset +
-+ segment->sg->offset);
-+ } else
-+ sg_init_one(&sg, segment->data + segment->copied,
-+ copied);
-+ crypto_hash_update(segment->hash, &sg, copied);
-+ }
++ instance = pci_get_drvdata(pdev);
++ host = instance->host;
++ pci_set_power_state(pdev, PCI_D0);
++ pci_enable_wake(pdev, PCI_D0, 0);
++ pci_restore_state(pdev);
+
-+ segment->copied += copied;
-+ if (segment->copied < segment->size) {
-+ iscsi_tcp_segment_map(segment, recv);
-+ return 0;
++ /*
++ * PCI prepping: enable device set bus mastering and dma mask
++ */
++ rval = pci_enable_device(pdev);
++
++ if (rval) {
++ printk(KERN_ERR "megasas: Enable device failed\n");
++ return rval;
+ }
+
-+ segment->total_copied += segment->copied;
-+ segment->copied = 0;
-+ segment->size = 0;
++ pci_set_master(pdev);
+
-+ /* Unmap the current scatterlist page, if there is one. */
-+ iscsi_tcp_segment_unmap(segment);
++ if (megasas_set_dma_mask(pdev))
++ goto fail_set_dma_mask;
+
-+ /* Do we have more scatterlist entries? */
-+ debug_tcp("total copied %u total size %u\n", segment->total_copied,
-+ segment->total_size);
-+ if (segment->total_copied < segment->total_size) {
-+ /* Proceed to the next entry in the scatterlist. */
-+ iscsi_tcp_segment_init_sg(segment, sg_next(segment->sg),
-+ 0);
-+ iscsi_tcp_segment_map(segment, recv);
-+ BUG_ON(segment->size == 0);
-+ return 0;
-+ }
++ /*
++ * Initialize MFI Firmware
++ */
+
-+ /* Do we need to handle padding? */
-+ pad = iscsi_padding(segment->total_copied);
-+ if (pad != 0) {
-+ debug_tcp("consume %d pad bytes\n", pad);
-+ segment->total_size += pad;
-+ segment->size = pad;
-+ segment->data = padbuf;
-+ return 0;
-+ }
++ *instance->producer = 0;
++ *instance->consumer = 0;
++
++ atomic_set(&instance->fw_outstanding, 0);
+
+ /*
-+ * Set us up for transferring the data digest. hdr digest
-+ * is completely handled in hdr done function.
++ * We expect the FW state to be READY
+ */
-+ if (segment->hash) {
-+ crypto_hash_final(segment->hash, segment->digest);
-+ iscsi_tcp_segment_splice_digest(segment,
-+ recv ? segment->recv_digest : segment->digest);
-+ return 0;
-+ }
++ if (megasas_transition_to_ready(instance))
++ goto fail_ready_state;
+
-+ return 1;
-+}
++ if (megasas_issue_init_mfi(instance))
++ goto fail_init_mfi;
+
-+/**
-+ * iscsi_tcp_xmit_segment - transmit segment
-+ * @tcp_conn: the iSCSI TCP connection
-+ * @segment: the buffer to transmnit
-+ *
-+ * This function transmits as much of the buffer as
-+ * the network layer will accept, and returns the number of
-+ * bytes transmitted.
-+ *
-+ * If CRC hashing is enabled, the function will compute the
-+ * hash as it goes. When the entire segment has been transmitted,
-+ * it will retrieve the hash value and send it as well.
-+ */
-+static int
-+iscsi_tcp_xmit_segment(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
-+{
-+ struct socket *sk = tcp_conn->sock;
-+ unsigned int copied = 0;
-+ int r = 0;
++ tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
++ (unsigned long)instance);
+
-+ while (!iscsi_tcp_segment_done(segment, 0, r)) {
-+ struct scatterlist *sg;
-+ unsigned int offset, copy;
-+ int flags = 0;
++ /*
++ * Register IRQ
++ */
++ if (request_irq(pdev->irq, megasas_isr, IRQF_SHARED,
++ "megasas", instance)) {
++ printk(KERN_ERR "megasas: Failed to register IRQ\n");
++ goto fail_irq;
++ }
+
-+ r = 0;
-+ offset = segment->copied;
-+ copy = segment->size - offset;
++ instance->instancet->enable_intr(instance->reg_set);
+
-+ if (segment->total_copied + segment->size < segment->total_size)
-+ flags |= MSG_MORE;
++ /*
++ * Initiate AEN (Asynchronous Event Notification)
++ */
++ if (megasas_start_aen(instance))
++ printk(KERN_ERR "megasas: Start AEN failed\n");
+
-+ /* Use sendpage if we can; else fall back to sendmsg */
-+ if (!segment->data) {
-+ sg = segment->sg;
-+ offset += segment->sg_offset + sg->offset;
-+ r = tcp_conn->sendpage(sk, sg_page(sg), offset, copy,
-+ flags);
- } else {
-- /* ignoring return code since we checked
-- * in.copy before */
-- skb_copy_bits(skb, tcp_conn->in.offset,
-- &tcp_conn->hdr, tcp_conn->hdr_size);
-- tcp_conn->in.hdr = &tcp_conn->hdr;
-+ struct msghdr msg = { .msg_flags = flags };
-+ struct kvec iov = {
-+ .iov_base = segment->data + offset,
-+ .iov_len = copy
-+ };
++ /* Initialize the cmd completion timer */
++ if (poll_mode_io)
++ megasas_start_timer(instance, &instance->io_completion_timer,
++ megasas_io_completion_timer,
++ MEGASAS_COMPLETION_TIMER_INTERVAL);
++ return 0;
+
-+ r = kernel_sendmsg(sk, &msg, &iov, 1, copy);
- }
-- tcp_conn->in.offset += tcp_conn->hdr_size;
-- tcp_conn->in.copy -= tcp_conn->hdr_size;
-- } else {
-- int hdr_remains;
-- int copylen;
-
-- /*
-- * PDU header scattered across SKB's,
-- * copying it... This'll happen quite rarely.
-- */
-+ if (r < 0) {
-+ iscsi_tcp_segment_unmap(segment);
-+ if (copied || r == -EAGAIN)
-+ break;
-+ return r;
-+ }
-+ copied += r;
-+ }
-+ return copied;
-+}
++fail_irq:
++fail_init_mfi:
++ if (instance->evt_detail)
++ pci_free_consistent(pdev, sizeof(struct megasas_evt_detail),
++ instance->evt_detail,
++ instance->evt_detail_h);
+
-+/**
-+ * iscsi_tcp_segment_recv - copy data to segment
-+ * @tcp_conn: the iSCSI TCP connection
-+ * @segment: the buffer to copy to
-+ * @ptr: data pointer
-+ * @len: amount of data available
-+ *
-+ * This function copies up to @len bytes to the
-+ * given buffer, and returns the number of bytes
-+ * consumed, which can actually be less than @len.
-+ *
-+ * If hash digest is enabled, the function will update the
-+ * hash while copying.
-+ * Combining these two operations doesn't buy us a lot (yet),
-+ * but in the future we could implement combined copy+crc,
-+ * just the way we do for network layer checksums.
-+ */
-+static int
-+iscsi_tcp_segment_recv(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment, const void *ptr,
-+ unsigned int len)
-+{
-+ unsigned int copy = 0, copied = 0;
++ if (instance->producer)
++ pci_free_consistent(pdev, sizeof(u32), instance->producer,
++ instance->producer_h);
++ if (instance->consumer)
++ pci_free_consistent(pdev, sizeof(u32), instance->consumer,
++ instance->consumer_h);
++ scsi_host_put(host);
+
-+ while (!iscsi_tcp_segment_done(segment, 1, copy)) {
-+ if (copied == len) {
-+ debug_tcp("iscsi_tcp_segment_recv copied %d bytes\n",
-+ len);
-+ break;
-+ }
++fail_set_dma_mask:
++fail_ready_state:
+
-+ copy = min(len - copied, segment->size - segment->copied);
-+ debug_tcp("iscsi_tcp_segment_recv copying %d\n", copy);
-+ memcpy(segment->data + segment->copied, ptr + copied, copy);
-+ copied += copy;
-+ }
-+ return copied;
++ pci_disable_device(pdev);
++
++ return -ENODEV;
+}
++
++/**
+ * megasas_detach_one - PCI hot"un"plug entry point
+ * @pdev: PCI device structure
+ */
+@@ -2536,9 +2784,12 @@ static void megasas_detach_one(struct pci_dev *pdev)
+ instance = pci_get_drvdata(pdev);
+ host = instance->host;
-- if (tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER)
-- tcp_conn->in.hdr_offset = 0;
-+static inline void
-+iscsi_tcp_dgst_header(struct hash_desc *hash, const void *hdr, size_t hdrlen,
-+ unsigned char digest[ISCSI_DIGEST_SIZE])
-+{
-+ struct scatterlist sg;
++ if (poll_mode_io)
++ del_timer_sync(&instance->io_completion_timer);
++
+ scsi_remove_host(instance->host);
+ megasas_flush_cache(instance);
+- megasas_shutdown_controller(instance);
++ megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN);
+ tasklet_kill(&instance->isr_tasklet);
-- hdr_remains = tcp_conn->hdr_size - tcp_conn->in.hdr_offset;
-- BUG_ON(hdr_remains <= 0);
-+ sg_init_one(&sg, hdr, hdrlen);
-+ crypto_hash_digest(hash, &sg, hdrlen, digest);
-+}
+ /*
+@@ -2660,6 +2911,7 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ void *sense = NULL;
+ dma_addr_t sense_handle;
+ u32 *sense_ptr;
++ unsigned long *sense_buff;
-- copylen = min(tcp_conn->in.copy, hdr_remains);
-- skb_copy_bits(skb, tcp_conn->in.offset,
-- (char*)&tcp_conn->hdr + tcp_conn->in.hdr_offset,
-- copylen);
-+static inline int
-+iscsi_tcp_dgst_verify(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
-+{
-+ if (!segment->digest_len)
-+ return 1;
+ memset(kbuff_arr, 0, sizeof(kbuff_arr));
-- debug_tcp("PDU gather offset %d bytes %d in.offset %d "
-- "in.copy %d\n", tcp_conn->in.hdr_offset, copylen,
-- tcp_conn->in.offset, tcp_conn->in.copy);
-+ if (memcmp(segment->recv_digest, segment->digest,
-+ segment->digest_len)) {
-+ debug_scsi("digest mismatch\n");
-+ return 0;
-+ }
+@@ -2764,14 +3016,16 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
+ */
+ if (ioc->sense_len) {
+ /*
+- * sense_ptr points to the location that has the user
++ * sense_buff points to the location that has the user
+ * sense buffer address
+ */
+- sense_ptr = (u32 *) ((unsigned long)ioc->frame.raw +
+- ioc->sense_off);
++ sense_buff = (unsigned long *) ((unsigned long)ioc->frame.raw +
++ ioc->sense_off);
-- tcp_conn->in.offset += copylen;
-- tcp_conn->in.copy -= copylen;
-- if (copylen < hdr_remains) {
-- tcp_conn->in_progress = IN_PROGRESS_HEADER_GATHER;
-- tcp_conn->in.hdr_offset += copylen;
-- return -EAGAIN;
-+ return 1;
-+}
-+
-+/*
-+ * Helper function to set up segment buffer
-+ */
-+static inline void
-+__iscsi_segment_init(struct iscsi_segment *segment, size_t size,
-+ iscsi_segment_done_fn_t *done, struct hash_desc *hash)
-+{
-+ memset(segment, 0, sizeof(*segment));
-+ segment->total_size = size;
-+ segment->done = done;
-+
-+ if (hash) {
-+ segment->hash = hash;
-+ crypto_hash_init(hash);
-+ }
-+}
-+
-+static inline void
-+iscsi_segment_init_linear(struct iscsi_segment *segment, void *data,
-+ size_t size, iscsi_segment_done_fn_t *done,
-+ struct hash_desc *hash)
-+{
-+ __iscsi_segment_init(segment, size, done, hash);
-+ segment->data = data;
-+ segment->size = size;
-+}
-+
-+static inline int
-+iscsi_segment_seek_sg(struct iscsi_segment *segment,
-+ struct scatterlist *sg_list, unsigned int sg_count,
-+ unsigned int offset, size_t size,
-+ iscsi_segment_done_fn_t *done, struct hash_desc *hash)
-+{
-+ struct scatterlist *sg;
-+ unsigned int i;
-+
-+ debug_scsi("iscsi_segment_seek_sg offset %u size %llu\n",
-+ offset, size);
-+ __iscsi_segment_init(segment, size, done, hash);
-+ for_each_sg(sg_list, sg, sg_count, i) {
-+ debug_scsi("sg %d, len %u offset %u\n", i, sg->length,
-+ sg->offset);
-+ if (offset < sg->length) {
-+ iscsi_tcp_segment_init_sg(segment, sg, offset);
-+ return 0;
+- if (copy_to_user((void __user *)((unsigned long)(*sense_ptr)),
+- sense, ioc->sense_len)) {
++ if (copy_to_user((void __user *)(unsigned long)(*sense_buff),
++ sense, ioc->sense_len)) {
++ printk(KERN_ERR "megasas: Failed to copy out to user "
++ "sense data\n");
+ error = -EFAULT;
+ goto out;
}
-- tcp_conn->in.hdr = &tcp_conn->hdr;
-- tcp_conn->discontiguous_hdr_cnt++;
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-+ offset -= sg->length;
- }
+@@ -2874,10 +3128,10 @@ static int megasas_mgmt_ioctl_aen(struct file *file, unsigned long arg)
+ if (!instance)
+ return -ENODEV;
-+ return ISCSI_ERR_DATA_OFFSET;
-+}
+- down(&instance->aen_mutex);
++ mutex_lock(&instance->aen_mutex);
+ error = megasas_register_aen(instance, aen.seq_num,
+ aen.class_locale_word);
+- up(&instance->aen_mutex);
++ mutex_unlock(&instance->aen_mutex);
+ return error;
+ }
+
+@@ -2977,6 +3231,8 @@ static struct pci_driver megasas_pci_driver = {
+ .id_table = megasas_pci_table,
+ .probe = megasas_probe_one,
+ .remove = __devexit_p(megasas_detach_one),
++ .suspend = megasas_suspend,
++ .resume = megasas_resume,
+ .shutdown = megasas_shutdown,
+ };
+
+@@ -3004,7 +3260,7 @@ static DRIVER_ATTR(release_date, S_IRUGO, megasas_sysfs_show_release_date,
+ static ssize_t
+ megasas_sysfs_show_dbg_lvl(struct device_driver *dd, char *buf)
+ {
+- return sprintf(buf,"%u",megasas_dbg_lvl);
++ return sprintf(buf, "%u\n", megasas_dbg_lvl);
+ }
+
+ static ssize_t
+@@ -3019,7 +3275,65 @@ megasas_sysfs_set_dbg_lvl(struct device_driver *dd, const char *buf, size_t coun
+ }
+
+ static DRIVER_ATTR(dbg_lvl, S_IRUGO|S_IWUGO, megasas_sysfs_show_dbg_lvl,
+- megasas_sysfs_set_dbg_lvl);
++ megasas_sysfs_set_dbg_lvl);
+
-+/**
-+ * iscsi_tcp_hdr_recv_prep - prep segment for hdr reception
-+ * @tcp_conn: iscsi connection to prep for
-+ *
-+ * This function always passes NULL for the hash argument, because when this
-+ * function is called we do not yet know the final size of the header and want
-+ * to delay the digest processing until we know that.
-+ */
-+static void
-+iscsi_tcp_hdr_recv_prep(struct iscsi_tcp_conn *tcp_conn)
++static ssize_t
++megasas_sysfs_show_poll_mode_io(struct device_driver *dd, char *buf)
+{
-+ debug_tcp("iscsi_tcp_hdr_recv_prep(%p%s)\n", tcp_conn,
-+ tcp_conn->iscsi_conn->hdrdgst_en ? ", digest enabled" : "");
-+ iscsi_segment_init_linear(&tcp_conn->in.segment,
-+ tcp_conn->in.hdr_buf, sizeof(struct iscsi_hdr),
-+ iscsi_tcp_hdr_recv_done, NULL);
++ return sprintf(buf, "%u\n", poll_mode_io);
+}
+
-+/*
-+ * Handle incoming reply to any other type of command
-+ */
-+static int
-+iscsi_tcp_data_recv_done(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
++static ssize_t
++megasas_sysfs_set_poll_mode_io(struct device_driver *dd,
++ const char *buf, size_t count)
+{
-+ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
-+ int rc = 0;
-+
-+ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
-+ return ISCSI_ERR_DATA_DGST;
++ int retval = count;
++ int tmp = poll_mode_io;
++ int i;
++ struct megasas_instance *instance;
+
-+ rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr,
-+ conn->data, tcp_conn->in.datalen);
-+ if (rc)
-+ return rc;
++ if (sscanf(buf, "%u", &poll_mode_io) < 1) {
++ printk(KERN_ERR "megasas: could not set poll_mode_io\n");
++ retval = -EINVAL;
++ }
+
-+ iscsi_tcp_hdr_recv_prep(tcp_conn);
- return 0;
- }
-
-+static void
-+iscsi_tcp_data_recv_prep(struct iscsi_tcp_conn *tcp_conn)
-+{
-+ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
-+ struct hash_desc *rx_hash = NULL;
++ /*
++ * Check if poll_mode_io is already set or is same as previous value
++ */
++ if ((tmp && poll_mode_io) || (tmp == poll_mode_io))
++ goto out;
+
-+ if (conn->datadgst_en)
-+ rx_hash = &tcp_conn->rx_hash;
++ if (poll_mode_io) {
++ /*
++ * Start timers for all adapters
++ */
++ for (i = 0; i < megasas_mgmt_info.max_index; i++) {
++ instance = megasas_mgmt_info.instance[i];
++ if (instance) {
++ megasas_start_timer(instance,
++ &instance->io_completion_timer,
++ megasas_io_completion_timer,
++ MEGASAS_COMPLETION_TIMER_INTERVAL);
++ }
++ }
++ } else {
++ /*
++ * Delete timers for all adapters
++ */
++ for (i = 0; i < megasas_mgmt_info.max_index; i++) {
++ instance = megasas_mgmt_info.instance[i];
++ if (instance)
++ del_timer_sync(&instance->io_completion_timer);
++ }
++ }
+
-+ iscsi_segment_init_linear(&tcp_conn->in.segment,
-+ conn->data, tcp_conn->in.datalen,
-+ iscsi_tcp_data_recv_done, rx_hash);
++out:
++ return retval;
+}
+
++static DRIVER_ATTR(poll_mode_io, S_IRUGO|S_IWUGO,
++ megasas_sysfs_show_poll_mode_io,
++ megasas_sysfs_set_poll_mode_io);
+
+ /**
+ * megasas_init - Driver load entry point
+@@ -3070,8 +3384,16 @@ static int __init megasas_init(void)
+ &driver_attr_dbg_lvl);
+ if (rval)
+ goto err_dcf_dbg_lvl;
++ rval = driver_create_file(&megasas_pci_driver.driver,
++ &driver_attr_poll_mode_io);
++ if (rval)
++ goto err_dcf_poll_mode_io;
+
+ return rval;
++
++err_dcf_poll_mode_io:
++ driver_remove_file(&megasas_pci_driver.driver,
++ &driver_attr_dbg_lvl);
+ err_dcf_dbg_lvl:
+ driver_remove_file(&megasas_pci_driver.driver,
+ &driver_attr_release_date);
+@@ -3090,6 +3412,8 @@ err_pcidrv:
+ static void __exit megasas_exit(void)
+ {
+ driver_remove_file(&megasas_pci_driver.driver,
++ &driver_attr_poll_mode_io);
++ driver_remove_file(&megasas_pci_driver.driver,
+ &driver_attr_dbg_lvl);
+ driver_remove_file(&megasas_pci_driver.driver,
+ &driver_attr_release_date);
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 4dffc91..6466bdf 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2,7 +2,7 @@
+ *
+ * Linux MegaRAID driver for SAS based RAID controllers
+ *
+- * Copyright (c) 2003-2005 LSI Logic Corporation.
++ * Copyright (c) 2003-2005 LSI Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+@@ -18,9 +18,9 @@
/*
- * must be called with session lock
+ * MegaRAID SAS Driver meta data
*/
-@@ -184,7 +498,6 @@ iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- {
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
- struct iscsi_r2t_info *r2t;
-- struct scsi_cmnd *sc;
+-#define MEGASAS_VERSION "00.00.03.10-rc5"
+-#define MEGASAS_RELDATE "May 17, 2007"
+-#define MEGASAS_EXT_VERSION "Thu May 17 10:09:32 PDT 2007"
++#define MEGASAS_VERSION "00.00.03.16-rc1"
++#define MEGASAS_RELDATE "Nov. 07, 2007"
++#define MEGASAS_EXT_VERSION "Thu. Nov. 07 10:09:32 PDT 2007"
- /* flush ctask's r2t queues */
- while (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*))) {
-@@ -193,12 +506,12 @@ iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- debug_scsi("iscsi_tcp_cleanup_ctask pending r2t dropped\n");
- }
+ /*
+ * Device IDs
+@@ -117,6 +117,7 @@
+ #define MR_FLUSH_DISK_CACHE 0x02
-- sc = ctask->sc;
-- if (unlikely(!sc))
-- return;
--
-- tcp_ctask->xmstate = XMSTATE_VALUE_IDLE;
-- tcp_ctask->r2t = NULL;
-+ r2t = tcp_ctask->r2t;
-+ if (r2t != NULL) {
-+ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
-+ sizeof(void*));
-+ tcp_ctask->r2t = NULL;
-+ }
- }
+ #define MR_DCMD_CTRL_SHUTDOWN 0x01050000
++#define MR_DCMD_HIBERNATE_SHUTDOWN 0x01060000
+ #define MR_ENABLE_DRIVE_SPINDOWN 0x01
- /**
-@@ -217,11 +530,6 @@ iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- int datasn = be32_to_cpu(rhdr->datasn);
+ #define MR_DCMD_CTRL_EVENT_GET_INFO 0x01040100
+@@ -570,7 +571,8 @@ struct megasas_ctrl_info {
+ #define IS_DMA64 (sizeof(dma_addr_t) == 8)
- iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
-- /*
-- * setup Data-In byte counter (gets decremented..)
-- */
-- ctask->data_count = tcp_conn->in.datalen;
--
- if (tcp_conn->in.datalen == 0)
- return 0;
+ #define MFI_OB_INTR_STATUS_MASK 0x00000002
+-#define MFI_POLL_TIMEOUT_SECS 10
++#define MFI_POLL_TIMEOUT_SECS 60
++#define MEGASAS_COMPLETION_TIMER_INTERVAL (HZ/10)
-@@ -242,22 +550,20 @@ iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- }
+ #define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000
- if (rhdr->flags & ISCSI_FLAG_DATA_STATUS) {
-+ sc->result = (DID_OK << 16) | rhdr->cmd_status;
- conn->exp_statsn = be32_to_cpu(rhdr->statsn) + 1;
-- if (rhdr->flags & ISCSI_FLAG_DATA_UNDERFLOW) {
-+ if (rhdr->flags & (ISCSI_FLAG_DATA_UNDERFLOW |
-+ ISCSI_FLAG_DATA_OVERFLOW)) {
- int res_count = be32_to_cpu(rhdr->residual_count);
+@@ -1083,13 +1085,15 @@ struct megasas_instance {
+ struct megasas_cmd **cmd_list;
+ struct list_head cmd_pool;
+ spinlock_t cmd_pool_lock;
++ /* used to synch producer, consumer ptrs in dpc */
++ spinlock_t completion_lock;
+ struct dma_pool *frame_dma_pool;
+ struct dma_pool *sense_dma_pool;
- if (res_count > 0 &&
-- res_count <= scsi_bufflen(sc)) {
-+ (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW ||
-+ res_count <= scsi_bufflen(sc)))
- scsi_set_resid(sc, res_count);
-- sc->result = (DID_OK << 16) | rhdr->cmd_status;
-- } else
-+ else
- sc->result = (DID_BAD_TARGET << 16) |
- rhdr->cmd_status;
-- } else if (rhdr->flags & ISCSI_FLAG_DATA_OVERFLOW) {
-- scsi_set_resid(sc, be32_to_cpu(rhdr->residual_count));
-- sc->result = (DID_OK << 16) | rhdr->cmd_status;
-- } else
-- sc->result = (DID_OK << 16) | rhdr->cmd_status;
-+ }
- }
+ struct megasas_evt_detail *evt_detail;
+ dma_addr_t evt_detail_h;
+ struct megasas_cmd *aen_cmd;
+- struct semaphore aen_mutex;
++ struct mutex aen_mutex;
+ struct semaphore ioctl_sem;
- conn->datain_pdus_cnt++;
-@@ -281,9 +587,6 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
- struct iscsi_r2t_info *r2t)
- {
- struct iscsi_data *hdr;
-- struct scsi_cmnd *sc = ctask->sc;
-- int i, sg_count = 0;
-- struct scatterlist *sg;
+ struct Scsi_Host *host;
+@@ -1108,6 +1112,8 @@ struct megasas_instance {
- hdr = &r2t->dtask.hdr;
- memset(hdr, 0, sizeof(struct iscsi_data));
-@@ -307,34 +610,6 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
- conn->dataout_pdus_cnt++;
+ u8 flag;
+ unsigned long last_time;
++
++ struct timer_list io_completion_timer;
+ };
- r2t->sent = 0;
--
-- iscsi_buf_init_iov(&r2t->headbuf, (char*)hdr,
-- sizeof(struct iscsi_hdr));
--
-- sg = scsi_sglist(sc);
-- r2t->sg = NULL;
-- for (i = 0; i < scsi_sg_count(sc); i++, sg += 1) {
-- /* FIXME: prefetch ? */
-- if (sg_count + sg->length > r2t->data_offset) {
-- int page_offset;
--
-- /* sg page found! */
--
-- /* offset within this page */
-- page_offset = r2t->data_offset - sg_count;
--
-- /* fill in this buffer */
-- iscsi_buf_init_sg(&r2t->sendbuf, sg);
-- r2t->sendbuf.sg.offset += page_offset;
-- r2t->sendbuf.sg.length -= page_offset;
--
-- /* xmit logic will continue with next one */
-- r2t->sg = sg + 1;
-- break;
-- }
-- sg_count += sg->length;
-- }
-- BUG_ON(r2t->sg == NULL);
- }
+ #define MEGASAS_IS_LOGICAL(scp) \
+diff --git a/drivers/scsi/ncr53c8xx.c b/drivers/scsi/ncr53c8xx.c
+index 016c462..c02771a 100644
+--- a/drivers/scsi/ncr53c8xx.c
++++ b/drivers/scsi/ncr53c8xx.c
+@@ -4963,7 +4963,8 @@ void ncr_complete (struct ncb *np, struct ccb *cp)
+ ** Copy back sense data to caller's buffer.
+ */
+ memcpy(cmd->sense_buffer, cp->sense_buf,
+- min(sizeof(cmd->sense_buffer), sizeof(cp->sense_buf)));
++ min_t(size_t, SCSI_SENSE_BUFFERSIZE,
++ sizeof(cp->sense_buf)));
- /**
-@@ -366,14 +641,11 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- }
+ if (DEBUG_FLAGS & (DEBUG_RESULT|DEBUG_TINY)) {
+ u_char * p = (u_char*) & cmd->sense_buffer;
+diff --git a/drivers/scsi/pcmcia/Kconfig b/drivers/scsi/pcmcia/Kconfig
+index fa481b5..53857c6 100644
+--- a/drivers/scsi/pcmcia/Kconfig
++++ b/drivers/scsi/pcmcia/Kconfig
+@@ -6,7 +6,8 @@ menuconfig SCSI_LOWLEVEL_PCMCIA
+ bool "PCMCIA SCSI adapter support"
+ depends on SCSI!=n && PCMCIA!=n
- /* fill-in new R2T associated with the task */
-- spin_lock(&session->lock);
- iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
+-if SCSI_LOWLEVEL_PCMCIA && SCSI && PCMCIA
+# drivers have problems when built in, so require modules
++if SCSI_LOWLEVEL_PCMCIA && SCSI && PCMCIA && m
-- if (!ctask->sc || ctask->mtask ||
-- session->state != ISCSI_STATE_LOGGED_IN) {
-+ if (!ctask->sc || session->state != ISCSI_STATE_LOGGED_IN) {
- printk(KERN_INFO "iscsi_tcp: dropping R2T itt %d in "
- "recovery...\n", ctask->itt);
-- spin_unlock(&session->lock);
- return 0;
- }
+ config PCMCIA_AHA152X
+ tristate "Adaptec AHA152X PCMCIA support"
+diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c
+index a45d89b..5082ca3 100644
+--- a/drivers/scsi/pcmcia/nsp_cs.c
++++ b/drivers/scsi/pcmcia/nsp_cs.c
+@@ -135,6 +135,11 @@ static nsp_hw_data nsp_data_base; /* attach <-> detect glue */
-@@ -384,7 +656,8 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- r2t->data_length = be32_to_cpu(rhdr->data_length);
- if (r2t->data_length == 0) {
- printk(KERN_ERR "iscsi_tcp: invalid R2T with zero data len\n");
-- spin_unlock(&session->lock);
-+ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
-+ sizeof(void*));
- return ISCSI_ERR_DATALEN;
- }
+ #define NSP_DEBUG_BUF_LEN 150
-@@ -395,10 +668,11 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
++static inline void nsp_inc_resid(struct scsi_cmnd *SCpnt, int residInc)
++{
++ scsi_set_resid(SCpnt, scsi_get_resid(SCpnt) + residInc);
++}
++
+ static void nsp_cs_message(const char *func, int line, char *type, char *fmt, ...)
+ {
+ va_list args;
+@@ -192,8 +197,10 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
+ #endif
+ nsp_hw_data *data = (nsp_hw_data *)SCpnt->device->host->hostdata;
- r2t->data_offset = be32_to_cpu(rhdr->data_offset);
- if (r2t->data_offset + r2t->data_length > scsi_bufflen(ctask->sc)) {
-- spin_unlock(&session->lock);
- printk(KERN_ERR "iscsi_tcp: invalid R2T with data len %u at "
- "offset %u and total length %d\n", r2t->data_length,
- r2t->data_offset, scsi_bufflen(ctask->sc));
-+ __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
-+ sizeof(void*));
- return ISCSI_ERR_DATALEN;
+- nsp_dbg(NSP_DEBUG_QUEUECOMMAND, "SCpnt=0x%p target=%d lun=%d buff=0x%p bufflen=%d use_sg=%d",
+- SCpnt, target, SCpnt->device->lun, SCpnt->request_buffer, SCpnt->request_bufflen, SCpnt->use_sg);
++ nsp_dbg(NSP_DEBUG_QUEUECOMMAND,
++ "SCpnt=0x%p target=%d lun=%d sglist=0x%p bufflen=%d sg_count=%d",
++ SCpnt, target, SCpnt->device->lun, scsi_sglist(SCpnt),
++ scsi_bufflen(SCpnt), scsi_sg_count(SCpnt));
+ //nsp_dbg(NSP_DEBUG_QUEUECOMMAND, "before CurrentSC=0x%p", data->CurrentSC);
+
+ SCpnt->scsi_done = done;
+@@ -225,7 +232,7 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
+ SCpnt->SCp.have_data_in = IO_UNKNOWN;
+ SCpnt->SCp.sent_command = 0;
+ SCpnt->SCp.phase = PH_UNDETERMINED;
+- SCpnt->resid = SCpnt->request_bufflen;
++ scsi_set_resid(SCpnt, scsi_bufflen(SCpnt));
+
+ /* setup scratch area
+ SCp.ptr : buffer pointer
+@@ -233,14 +240,14 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
+ SCp.buffer : next buffer
+ SCp.buffers_residual : left buffers in list
+ SCp.phase : current state of the command */
+- if (SCpnt->use_sg) {
+- SCpnt->SCp.buffer = (struct scatterlist *) SCpnt->request_buffer;
++ if (scsi_bufflen(SCpnt)) {
++ SCpnt->SCp.buffer = scsi_sglist(SCpnt);
+ SCpnt->SCp.ptr = BUFFER_ADDR;
+ SCpnt->SCp.this_residual = SCpnt->SCp.buffer->length;
+- SCpnt->SCp.buffers_residual = SCpnt->use_sg - 1;
++ SCpnt->SCp.buffers_residual = scsi_sg_count(SCpnt) - 1;
+ } else {
+- SCpnt->SCp.ptr = (char *) SCpnt->request_buffer;
+- SCpnt->SCp.this_residual = SCpnt->request_bufflen;
++ SCpnt->SCp.ptr = NULL;
++ SCpnt->SCp.this_residual = 0;
+ SCpnt->SCp.buffer = NULL;
+ SCpnt->SCp.buffers_residual = 0;
}
+@@ -721,7 +728,9 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
+ ocount = data->FifoCount;
-@@ -409,26 +683,55 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+ nsp_dbg(NSP_DEBUG_DATA_IO, "in SCpnt=0x%p resid=%d ocount=%d ptr=0x%p this_residual=%d buffers=0x%p nbuf=%d",
+- SCpnt, SCpnt->resid, ocount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual, SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual);
++ SCpnt, scsi_get_resid(SCpnt), ocount, SCpnt->SCp.ptr,
++ SCpnt->SCp.this_residual, SCpnt->SCp.buffer,
++ SCpnt->SCp.buffers_residual);
- tcp_ctask->exp_datasn = r2tsn + 1;
- __kfifo_put(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*));
-- set_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate);
-- list_move_tail(&ctask->running, &conn->xmitqueue);
--
-- scsi_queue_work(session->host, &conn->xmitwork);
- conn->r2t_pdus_cnt++;
-- spin_unlock(&session->lock);
+ time_out = 1000;
-+ iscsi_requeue_ctask(ctask);
- return 0;
- }
+@@ -771,7 +780,7 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
+ return;
+ }
-+/*
-+ * Handle incoming reply to DataIn command
-+ */
- static int
--iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
-+iscsi_tcp_process_data_in(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
-+{
-+ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
-+ struct iscsi_hdr *hdr = tcp_conn->in.hdr;
-+ int rc;
-+
-+ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
-+ return ISCSI_ERR_DATA_DGST;
-+
-+ /* check for non-exceptional status */
-+ if (hdr->flags & ISCSI_FLAG_DATA_STATUS) {
-+ rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, NULL, 0);
-+ if (rc)
-+ return rc;
-+ }
-+
-+ iscsi_tcp_hdr_recv_prep(tcp_conn);
-+ return 0;
-+}
-+
-+/**
-+ * iscsi_tcp_hdr_dissect - process PDU header
-+ * @conn: iSCSI connection
-+ * @hdr: PDU header
-+ *
-+ * This function analyzes the header of the PDU received,
-+ * and performs several sanity checks. If the PDU is accompanied
-+ * by data, the receive buffer is set up to copy the incoming data
-+ * to the correct location.
-+ */
-+static int
-+iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
- {
- int rc = 0, opcode, ahslen;
-- struct iscsi_hdr *hdr;
- struct iscsi_session *session = conn->session;
- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- uint32_t cdgst, rdgst = 0, itt;
--
-- hdr = tcp_conn->in.hdr;
-+ struct iscsi_cmd_task *ctask;
-+ uint32_t itt;
+- SCpnt->resid -= res;
++ nsp_inc_resid(SCpnt, -res);
+ SCpnt->SCp.ptr += res;
+ SCpnt->SCp.this_residual -= res;
+ ocount += res;
+@@ -795,10 +804,12 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
- /* verify PDU length */
- tcp_conn->in.datalen = ntoh24(hdr->dlength);
-@@ -437,78 +740,73 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
- tcp_conn->in.datalen, conn->max_recv_dlength);
- return ISCSI_ERR_DATALEN;
+ if (time_out == 0) {
+ nsp_msg(KERN_DEBUG, "pio read timeout resid=%d this_residual=%d buffers_residual=%d",
+- SCpnt->resid, SCpnt->SCp.this_residual, SCpnt->SCp.buffers_residual);
++ scsi_get_resid(SCpnt), SCpnt->SCp.this_residual,
++ SCpnt->SCp.buffers_residual);
}
-- tcp_conn->data_copied = 0;
+ nsp_dbg(NSP_DEBUG_DATA_IO, "read ocount=0x%x", ocount);
+- nsp_dbg(NSP_DEBUG_DATA_IO, "r cmd=%d resid=0x%x\n", data->CmdId, SCpnt->resid);
++ nsp_dbg(NSP_DEBUG_DATA_IO, "r cmd=%d resid=0x%x\n", data->CmdId,
++ scsi_get_resid(SCpnt));
+ }
-- /* read AHS */
-+ /* Additional header segments. So far, we don't
-+ * process additional headers.
-+ */
- ahslen = hdr->hlength << 2;
-- tcp_conn->in.offset += ahslen;
-- tcp_conn->in.copy -= ahslen;
-- if (tcp_conn->in.copy < 0) {
-- printk(KERN_ERR "iscsi_tcp: can't handle AHS with length "
-- "%d bytes\n", ahslen);
-- return ISCSI_ERR_AHSLEN;
-- }
--
-- /* calculate read padding */
-- tcp_conn->in.padding = tcp_conn->in.datalen & (ISCSI_PAD_LEN-1);
-- if (tcp_conn->in.padding) {
-- tcp_conn->in.padding = ISCSI_PAD_LEN - tcp_conn->in.padding;
-- debug_scsi("read padding %d bytes\n", tcp_conn->in.padding);
-- }
--
-- if (conn->hdrdgst_en) {
-- struct scatterlist sg;
--
-- sg_init_one(&sg, (u8 *)hdr,
-- sizeof(struct iscsi_hdr) + ahslen);
-- crypto_hash_digest(&tcp_conn->rx_hash, &sg, sg.length,
-- (u8 *)&cdgst);
-- rdgst = *(uint32_t*)((char*)hdr + sizeof(struct iscsi_hdr) +
-- ahslen);
-- if (cdgst != rdgst) {
-- printk(KERN_ERR "iscsi_tcp: hdrdgst error "
-- "recv 0x%x calc 0x%x\n", rdgst, cdgst);
-- return ISCSI_ERR_HDR_DGST;
-- }
-- }
+ /*
+@@ -816,7 +827,9 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
+ ocount = data->FifoCount;
- opcode = hdr->opcode & ISCSI_OPCODE_MASK;
- /* verify itt (itt encoding: age+cid+itt) */
- rc = iscsi_verify_itt(conn, hdr, &itt);
-- if (rc == ISCSI_ERR_NO_SCSI_CMD) {
-- tcp_conn->in.datalen = 0; /* force drop */
-- return 0;
-- } else if (rc)
-+ if (rc)
- return rc;
+ nsp_dbg(NSP_DEBUG_DATA_IO, "in fifocount=%d ptr=0x%p this_residual=%d buffers=0x%p nbuf=%d resid=0x%x",
+- data->FifoCount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual, SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual, SCpnt->resid);
++ data->FifoCount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual,
++ SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual,
++ scsi_get_resid(SCpnt));
-- debug_tcp("opcode 0x%x offset %d copy %d ahslen %d datalen %d\n",
-- opcode, tcp_conn->in.offset, tcp_conn->in.copy,
-- ahslen, tcp_conn->in.datalen);
-+ debug_tcp("opcode 0x%x ahslen %d datalen %d\n",
-+ opcode, ahslen, tcp_conn->in.datalen);
+ time_out = 1000;
- switch(opcode) {
- case ISCSI_OP_SCSI_DATA_IN:
-- tcp_conn->in.ctask = session->cmds[itt];
-- rc = iscsi_data_rsp(conn, tcp_conn->in.ctask);
-+ ctask = session->cmds[itt];
-+ spin_lock(&conn->session->lock);
-+ rc = iscsi_data_rsp(conn, ctask);
-+ spin_unlock(&conn->session->lock);
- if (rc)
- return rc;
-+ if (tcp_conn->in.datalen) {
-+ struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-+ struct hash_desc *rx_hash = NULL;
-+
-+ /*
-+ * Setup copy of Data-In into the Scsi_Cmnd
-+ * Scatterlist case:
-+ * We set up the iscsi_segment to point to the next
-+ * scatterlist entry to copy to. As we go along,
-+ * we move on to the next scatterlist entry and
-+ * update the digest per-entry.
-+ */
-+ if (conn->datadgst_en)
-+ rx_hash = &tcp_conn->rx_hash;
-+
-+ debug_tcp("iscsi_tcp_begin_data_in(%p, offset=%d, "
-+ "datalen=%d)\n", tcp_conn,
-+ tcp_ctask->data_offset,
-+ tcp_conn->in.datalen);
-+ return iscsi_segment_seek_sg(&tcp_conn->in.segment,
-+ scsi_sglist(ctask->sc),
-+ scsi_sg_count(ctask->sc),
-+ tcp_ctask->data_offset,
-+ tcp_conn->in.datalen,
-+ iscsi_tcp_process_data_in,
-+ rx_hash);
-+ }
- /* fall through */
- case ISCSI_OP_SCSI_CMD_RSP:
-- tcp_conn->in.ctask = session->cmds[itt];
-- if (tcp_conn->in.datalen)
-- goto copy_hdr;
--
-- spin_lock(&session->lock);
-- rc = __iscsi_complete_pdu(conn, hdr, NULL, 0);
-- spin_unlock(&session->lock);
-+ if (tcp_conn->in.datalen) {
-+ iscsi_tcp_data_recv_prep(tcp_conn);
-+ return 0;
-+ }
-+ rc = iscsi_complete_pdu(conn, hdr, NULL, 0);
- break;
- case ISCSI_OP_R2T:
-- tcp_conn->in.ctask = session->cmds[itt];
-+ ctask = session->cmds[itt];
- if (ahslen)
- rc = ISCSI_ERR_AHSLEN;
-- else if (tcp_conn->in.ctask->sc->sc_data_direction ==
-- DMA_TO_DEVICE)
-- rc = iscsi_r2t_rsp(conn, tcp_conn->in.ctask);
-- else
-+ else if (ctask->sc->sc_data_direction == DMA_TO_DEVICE) {
-+ spin_lock(&session->lock);
-+ rc = iscsi_r2t_rsp(conn, ctask);
-+ spin_unlock(&session->lock);
-+ } else
- rc = ISCSI_ERR_PROTO;
- break;
- case ISCSI_OP_LOGIN_RSP:
-@@ -520,8 +818,7 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
- * than 8K, but there are no targets that currently do this.
- * For now we fail until we find a vendor that needs it
- */
-- if (ISCSI_DEF_MAX_RECV_SEG_LEN <
-- tcp_conn->in.datalen) {
-+ if (ISCSI_DEF_MAX_RECV_SEG_LEN < tcp_conn->in.datalen) {
- printk(KERN_ERR "iscsi_tcp: received buffer of len %u "
- "but conn buffer is only %u (opcode %0x)\n",
- tcp_conn->in.datalen,
-@@ -530,8 +827,13 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
+@@ -830,7 +843,7 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
+
+ nsp_dbg(NSP_DEBUG_DATA_IO, "phase changed stat=0x%x, res=%d\n", stat, res);
+ /* Put back pointer */
+- SCpnt->resid += res;
++ nsp_inc_resid(SCpnt, res);
+ SCpnt->SCp.ptr -= res;
+ SCpnt->SCp.this_residual += res;
+ ocount -= res;
+@@ -866,7 +879,7 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
break;
}
-- if (tcp_conn->in.datalen)
-- goto copy_hdr;
-+ /* If there's data coming in with the response,
-+ * receive it to the connection's buffer.
-+ */
-+ if (tcp_conn->in.datalen) {
-+ iscsi_tcp_data_recv_prep(tcp_conn);
-+ return 0;
-+ }
- /* fall through */
- case ISCSI_OP_LOGOUT_RSP:
- case ISCSI_OP_NOOP_IN:
-@@ -543,461 +845,161 @@ iscsi_tcp_hdr_recv(struct iscsi_conn *conn)
- break;
+- SCpnt->resid -= res;
++ nsp_inc_resid(SCpnt, -res);
+ SCpnt->SCp.ptr += res;
+ SCpnt->SCp.this_residual -= res;
+ ocount += res;
+@@ -886,10 +899,12 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
+ data->FifoCount = ocount;
+
+ if (time_out == 0) {
+- nsp_msg(KERN_DEBUG, "pio write timeout resid=0x%x", SCpnt->resid);
++ nsp_msg(KERN_DEBUG, "pio write timeout resid=0x%x",
++ scsi_get_resid(SCpnt));
}
+ nsp_dbg(NSP_DEBUG_DATA_IO, "write ocount=0x%x", ocount);
+- nsp_dbg(NSP_DEBUG_DATA_IO, "w cmd=%d resid=0x%x\n", data->CmdId, SCpnt->resid);
++ nsp_dbg(NSP_DEBUG_DATA_IO, "w cmd=%d resid=0x%x\n", data->CmdId,
++ scsi_get_resid(SCpnt));
+ }
+ #undef RFIFO_CRIT
+ #undef WFIFO_CRIT
+@@ -911,9 +926,8 @@ static int nsp_nexus(struct scsi_cmnd *SCpnt)
+ nsp_index_write(base, SYNCREG, sync->SyncRegister);
+ nsp_index_write(base, ACKWIDTH, sync->AckWidth);
-- return rc;
+- if (SCpnt->use_sg == 0 ||
+- SCpnt->resid % 4 != 0 ||
+- SCpnt->resid <= PAGE_SIZE ) {
++ if (scsi_get_resid(SCpnt) % 4 != 0 ||
++ scsi_get_resid(SCpnt) <= PAGE_SIZE ) {
+ data->TransferMode = MODE_IO8;
+ } else if (nsp_burst_mode == BURST_MEM32) {
+ data->TransferMode = MODE_MEM32;
+diff --git a/drivers/scsi/ppa.c b/drivers/scsi/ppa.c
+index 67ee51a..f655ae3 100644
+--- a/drivers/scsi/ppa.c
++++ b/drivers/scsi/ppa.c
+@@ -750,18 +750,16 @@ static int ppa_engine(ppa_struct *dev, struct scsi_cmnd *cmd)
+ cmd->SCp.phase++;
+
+ case 4: /* Phase 4 - Setup scatter/gather buffers */
+- if (cmd->use_sg) {
+- /* if many buffers are available, start filling the first */
+- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ } else {
+- /* else fill the only available buffer */
+ cmd->SCp.buffer = NULL;
+- cmd->SCp.this_residual = cmd->request_bufflen;
+- cmd->SCp.ptr = cmd->request_buffer;
++ cmd->SCp.this_residual = 0;
++ cmd->SCp.ptr = NULL;
+ }
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.phase++;
+
+ case 5: /* Phase 5 - Data transfer stage */
+diff --git a/drivers/scsi/psi240i.c b/drivers/scsi/psi240i.c
+deleted file mode 100644
+index 899e89d..0000000
+--- a/drivers/scsi/psi240i.c
++++ /dev/null
+@@ -1,689 +0,0 @@
+-/*+M*************************************************************************
+- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
+- *
+- * Copyright (c) 1997 Perceptive Solutions, Inc.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2, or (at your option)
+- * any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; see the file COPYING. If not, write to
+- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+- *
+- *
+- * File Name: psi240i.c
+- *
+- * Description: SCSI driver for the PSI240I EIDE interface card.
+- *
+- *-M*************************************************************************/
-
--copy_hdr:
-- /*
-- * if we did zero copy for the header but we will need multiple
-- * skbs to complete the command then we have to copy the header
-- * for later use
-- */
-- if (tcp_conn->in.zero_copy_hdr && tcp_conn->in.copy <=
-- (tcp_conn->in.datalen + tcp_conn->in.padding +
-- (conn->datadgst_en ? 4 : 0))) {
-- debug_tcp("Copying header for later use. in.copy %d in.datalen"
-- " %d\n", tcp_conn->in.copy, tcp_conn->in.datalen);
-- memcpy(&tcp_conn->hdr, tcp_conn->in.hdr,
-- sizeof(struct iscsi_hdr));
-- tcp_conn->in.hdr = &tcp_conn->hdr;
-- tcp_conn->in.zero_copy_hdr = 0;
-- }
-- return 0;
--}
+-#include <linux/module.h>
-
--/**
-- * iscsi_ctask_copy - copy skb bits to the destanation cmd task
-- * @conn: iscsi tcp connection
-- * @ctask: scsi command task
-- * @buf: buffer to copy to
-- * @buf_size: size of buffer
-- * @offset: offset within the buffer
+-#include <linux/blkdev.h>
+-#include <linux/kernel.h>
+-#include <linux/types.h>
+-#include <linux/string.h>
+-#include <linux/ioport.h>
+-#include <linux/delay.h>
+-#include <linux/interrupt.h>
+-#include <linux/proc_fs.h>
+-#include <linux/spinlock.h>
+-#include <linux/stat.h>
+-
+-#include <asm/dma.h>
+-#include <asm/system.h>
+-#include <asm/io.h>
+-#include "scsi.h"
+-#include <scsi/scsi_host.h>
+-
+-#include "psi240i.h"
+-#include "psi_chip.h"
+-
+-//#define DEBUG 1
+-
+-#ifdef DEBUG
+-#define DEB(x) x
+-#else
+-#define DEB(x)
+-#endif
+-
+-#define MAXBOARDS 6 /* Increase this and the sizes of the arrays below, if you need more. */
+-
+-#define PORT_DATA 0
+-#define PORT_ERROR 1
+-#define PORT_SECTOR_COUNT 2
+-#define PORT_LBA_0 3
+-#define PORT_LBA_8 4
+-#define PORT_LBA_16 5
+-#define PORT_LBA_24 6
+-#define PORT_STAT_CMD 7
+-#define PORT_SEL_FAIL 8
+-#define PORT_IRQ_STATUS 9
+-#define PORT_ADDRESS 10
+-#define PORT_FAIL 11
+-#define PORT_ALT_STAT 12
+-
+-typedef struct
+- {
+- UCHAR device; // device code
+- UCHAR byte6; // device select register image
+- UCHAR spigot; // spigot number
+- UCHAR expectingIRQ; // flag for expecting and interrupt
+- USHORT sectors; // number of sectors per track
+- USHORT heads; // number of heads
+- USHORT cylinders; // number of cylinders for this device
+- USHORT spareword; // placeholder
+- ULONG blocks; // number of blocks on device
+- } OUR_DEVICE, *POUR_DEVICE;
+-
+-typedef struct
+- {
+- USHORT ports[13];
+- OUR_DEVICE device[8];
+- struct scsi_cmnd *pSCmnd;
+- IDE_STRUCT ide;
+- ULONG startSector;
+- USHORT sectorCount;
+- struct scsi_cmnd *SCpnt;
+- VOID *buffer;
+- USHORT expectingIRQ;
+- } ADAPTER240I, *PADAPTER240I;
+-
+-#define HOSTDATA(host) ((PADAPTER240I)&host->hostdata)
+-
+-static struct Scsi_Host *PsiHost[6] = {NULL,}; /* One for each IRQ level (10-15) */
+-static IDENTIFY_DATA identifyData;
+-static SETUP ChipSetup;
+-
+-static USHORT portAddr[6] = {CHIP_ADRS_0, CHIP_ADRS_1, CHIP_ADRS_2, CHIP_ADRS_3, CHIP_ADRS_4, CHIP_ADRS_5};
+-
+-/****************************************************************
+- * Name: WriteData :LOCAL
- *
-- * Notes:
-- * The function calls skb_copy_bits() and updates per-connection and
-- * per-cmd byte counters.
+- * Description: Write data to device.
- *
-- * Read counters (in bytes):
+- * Parameters: padapter - Pointer adapter data structure.
- *
-- * conn->in.offset offset within in progress SKB
-- * conn->in.copy left to copy from in progress SKB
-- * including padding
-- * conn->in.copied copied already from in progress SKB
-- * conn->data_copied copied already from in progress buffer
-- * ctask->sent total bytes sent up to the MidLayer
-- * ctask->data_count left to copy from in progress Data-In
-- * buf_left left to copy from in progress buffer
-- **/
--static inline int
--iscsi_ctask_copy(struct iscsi_tcp_conn *tcp_conn, struct iscsi_cmd_task *ctask,
-- void *buf, int buf_size, int offset)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- int buf_left = buf_size - (tcp_conn->data_copied + offset);
-- unsigned size = min(tcp_conn->in.copy, buf_left);
-- int rc;
+- * Returns: TRUE if drive does not assert DRQ in time.
+- *
+- ****************************************************************/
+-static int WriteData (PADAPTER240I padapter)
+- {
+- ULONG timer;
+- USHORT *pports = padapter->ports;
-
-- size = min(size, ctask->data_count);
+- timer = jiffies + TIMEOUT_DRQ; // calculate the timeout value
+- do {
+- if ( inb_p (pports[PORT_STAT_CMD]) & IDE_STATUS_DRQ )
+- {
+- outsw (pports[PORT_DATA], padapter->buffer, (USHORT)padapter->ide.ide.ide[2] * 256);
+- return 0;
+- }
+- } while ( time_after(timer, jiffies) ); // test for timeout
-
-- debug_tcp("ctask_copy %d bytes at offset %d copied %d\n",
-- size, tcp_conn->in.offset, tcp_conn->in.copied);
+- padapter->ide.ide.ides.cmd = 0; // null out the command byte
+- return 1;
+- }
+-/****************************************************************
+- * Name: IdeCmd :LOCAL
+- *
+- * Description: Process a queued command from the SCSI manager.
+- *
+- * Parameters: padapter - Pointer adapter data structure.
+- *
+- * Returns: Zero if no error or status register contents on error.
+- *
+- ****************************************************************/
+-static UCHAR IdeCmd (PADAPTER240I padapter)
+- {
+- ULONG timer;
+- USHORT *pports = padapter->ports;
+- UCHAR status;
-
-- BUG_ON(size <= 0);
-- BUG_ON(tcp_ctask->sent + size > scsi_bufflen(ctask->sc));
+- outb_p (padapter->ide.ide.ides.spigot, pports[PORT_SEL_FAIL]); // select the spigot
+- outb_p (padapter->ide.ide.ide[6], pports[PORT_LBA_24]); // select the drive
+- timer = jiffies + TIMEOUT_READY; // calculate the timeout value
+- do {
+- status = inb_p (padapter->ports[PORT_STAT_CMD]);
+- if ( status & IDE_STATUS_DRDY )
+- {
+- outb_p (padapter->ide.ide.ide[2], pports[PORT_SECTOR_COUNT]);
+- outb_p (padapter->ide.ide.ide[3], pports[PORT_LBA_0]);
+- outb_p (padapter->ide.ide.ide[4], pports[PORT_LBA_8]);
+- outb_p (padapter->ide.ide.ide[5], pports[PORT_LBA_16]);
+- padapter->expectingIRQ = 1;
+- outb_p (padapter->ide.ide.ide[7], pports[PORT_STAT_CMD]);
-
-- rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset,
-- (char*)buf + (offset + tcp_conn->data_copied), size);
-- /* must fit into skb->len */
-- BUG_ON(rc);
+- if ( padapter->ide.ide.ides.cmd == IDE_CMD_WRITE_MULTIPLE )
+- return (WriteData (padapter));
-
-- tcp_conn->in.offset += size;
-- tcp_conn->in.copy -= size;
-- tcp_conn->in.copied += size;
-- tcp_conn->data_copied += size;
-- tcp_ctask->sent += size;
-- ctask->data_count -= size;
+- return 0;
+- }
+- } while ( time_after(timer, jiffies) ); // test for timeout
-
-- BUG_ON(tcp_conn->in.copy < 0);
-- BUG_ON(ctask->data_count < 0);
+- padapter->ide.ide.ides.cmd = 0; // null out the command byte
+- return status;
+- }
+-/****************************************************************
+- * Name: SetupTransfer :LOCAL
+- *
+- * Description: Setup a data transfer command.
+- *
+- * Parameters: padapter - Pointer adapter data structure.
+- * drive - Drive/head register upper nibble only.
+- *
+- * Returns: TRUE if no data to transfer.
+- *
+- ****************************************************************/
+-static int SetupTransfer (PADAPTER240I padapter, UCHAR drive)
+- {
+- if ( padapter->sectorCount )
+- {
+- *(ULONG *)padapter->ide.ide.ides.lba = padapter->startSector;
+- padapter->ide.ide.ide[6] |= drive;
+- padapter->ide.ide.ides.sectors = ( padapter->sectorCount > SECTORSXFER ) ? SECTORSXFER : padapter->sectorCount;
+- padapter->sectorCount -= padapter->ide.ide.ides.sectors; // bump the start and count for next xfer
+- padapter->startSector += padapter->ide.ide.ides.sectors;
+- return 0;
+- }
+- else
+- {
+- padapter->ide.ide.ides.cmd = 0; // null out the command byte
+- padapter->SCpnt = NULL;
+- return 1;
+- }
+- }
+-/****************************************************************
+- * Name: DecodeError :LOCAL
+- *
+- * Description: Decode and process device errors.
+- *
+- * Parameters: pshost - Pointer to host data block.
+- * status - Status register code.
+- *
+- * Returns: The driver status code.
+- *
+- ****************************************************************/
+-static ULONG DecodeError (struct Scsi_Host *pshost, UCHAR status)
+- {
+- PADAPTER240I padapter = HOSTDATA(pshost);
+- UCHAR error;
-
-- if (buf_size != (tcp_conn->data_copied + offset)) {
-- if (!ctask->data_count) {
-- BUG_ON(buf_size - tcp_conn->data_copied < 0);
-- /* done with this PDU */
-- return buf_size - tcp_conn->data_copied;
+- padapter->expectingIRQ = 0;
+- padapter->SCpnt = NULL;
+- if ( status & IDE_STATUS_WRITE_FAULT )
+- {
+- return DID_PARITY << 16;
- }
-- return -EAGAIN;
-+ if (rc == 0) {
-+ /* Anything that comes with data should have
-+ * been handled above. */
-+ if (tcp_conn->in.datalen)
-+ return ISCSI_ERR_PROTO;
-+ iscsi_tcp_hdr_recv_prep(tcp_conn);
- }
-
-- /* done with this buffer or with both - PDU and buffer */
-- tcp_conn->data_copied = 0;
-- return 0;
-+ return rc;
- }
-
- /**
-- * iscsi_tcp_copy - copy skb bits to the destanation buffer
-- * @conn: iscsi tcp connection
-+ * iscsi_tcp_hdr_recv_done - process PDU header
- *
-- * Notes:
-- * The function calls skb_copy_bits() and updates per-connection
-- * byte counters.
-- **/
--static inline int
--iscsi_tcp_copy(struct iscsi_conn *conn, int buf_size)
--{
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- int buf_left = buf_size - tcp_conn->data_copied;
-- int size = min(tcp_conn->in.copy, buf_left);
-- int rc;
+- if ( status & IDE_STATUS_BUSY )
+- return DID_BUS_BUSY << 16;
-
-- debug_tcp("tcp_copy %d bytes at offset %d copied %d\n",
-- size, tcp_conn->in.offset, tcp_conn->data_copied);
-- BUG_ON(size <= 0);
+- error = inb_p (padapter->ports[PORT_ERROR]);
+- DEB(printk ("\npsi240i error register: %x", error));
+- switch ( error )
+- {
+- case IDE_ERROR_AMNF:
+- case IDE_ERROR_TKONF:
+- case IDE_ERROR_ABRT:
+- case IDE_ERROR_IDFN:
+- case IDE_ERROR_UNC:
+- case IDE_ERROR_BBK:
+- default:
+- return DID_ERROR << 16;
+- }
+- return DID_ERROR << 16;
+- }
+-/****************************************************************
+- * Name: Irq_Handler :LOCAL
+- *
+- * Description: Interrupt handler.
+- *
+- * Parameters: irq - Hardware IRQ number.
+- * dev_id -
+- *
+- * Returns: TRUE if drive is not ready in time.
+- *
+- ****************************************************************/
+-static void Irq_Handler (int irq, void *dev_id)
+- {
+- struct Scsi_Host *shost; // Pointer to host data block
+- PADAPTER240I padapter; // Pointer to adapter control structure
+- USHORT *pports; // I/O port array
+- struct scsi_cmnd *SCpnt;
+- UCHAR status;
+- int z;
-
-- rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset,
-- (char*)conn->data + tcp_conn->data_copied, size);
-- BUG_ON(rc);
+- DEB(printk ("\npsi240i received interrupt\n"));
-
-- tcp_conn->in.offset += size;
-- tcp_conn->in.copy -= size;
-- tcp_conn->in.copied += size;
-- tcp_conn->data_copied += size;
+- shost = PsiHost[irq - 10];
+- if ( !shost )
+- panic ("Splunge!");
-
-- if (buf_size != tcp_conn->data_copied)
-- return -EAGAIN;
+- padapter = HOSTDATA(shost);
+- pports = padapter->ports;
+- SCpnt = padapter->SCpnt;
-
-- return 0;
--}
+- if ( !padapter->expectingIRQ )
+- {
+- DEB(printk ("\npsi240i Unsolicited interrupt\n"));
+- return;
+- }
+- padapter->expectingIRQ = 0;
-
--static inline void
--partial_sg_digest_update(struct hash_desc *desc, struct scatterlist *sg,
-- int offset, int length)
--{
-- struct scatterlist temp;
+- status = inb_p (padapter->ports[PORT_STAT_CMD]); // read the device status
+- if ( status & (IDE_STATUS_ERROR | IDE_STATUS_WRITE_FAULT) )
+- goto irqerror;
-
-- sg_init_table(&temp, 1);
-- sg_set_page(&temp, sg_page(sg), length, offset);
-- crypto_hash_update(desc, &temp, length);
--}
+- DEB(printk ("\npsi240i processing interrupt"));
+- switch ( padapter->ide.ide.ides.cmd ) // decide how to handle the interrupt
+- {
+- case IDE_CMD_READ_MULTIPLE:
+- if ( status & IDE_STATUS_DRQ )
+- {
+- insw (pports[PORT_DATA], padapter->buffer, (USHORT)padapter->ide.ide.ides.sectors * 256);
+- padapter->buffer += padapter->ide.ide.ides.sectors * 512;
+- if ( SetupTransfer (padapter, padapter->ide.ide.ide[6] & 0xF0) )
+- {
+- SCpnt->result = DID_OK << 16;
+- padapter->SCpnt = NULL;
+- SCpnt->scsi_done (SCpnt);
+- return;
+- }
+- if ( !(status = IdeCmd (padapter)) )
+- return;
+- }
+- break;
-
--static void
--iscsi_recv_digest_update(struct iscsi_tcp_conn *tcp_conn, char* buf, int len)
--{
-- struct scatterlist tmp;
+- case IDE_CMD_WRITE_MULTIPLE:
+- padapter->buffer += padapter->ide.ide.ides.sectors * 512;
+- if ( SetupTransfer (padapter, padapter->ide.ide.ide[6] & 0xF0) )
+- {
+- SCpnt->result = DID_OK << 16;
+- padapter->SCpnt = NULL;
+- SCpnt->scsi_done (SCpnt);
+- return;
+- }
+- if ( !(status = IdeCmd (padapter)) )
+- return;
+- break;
-
-- sg_init_one(&tmp, buf, len);
-- crypto_hash_update(&tcp_conn->rx_hash, &tmp, len);
--}
+- case IDE_COMMAND_IDENTIFY:
+- {
+- PINQUIRYDATA pinquiryData = SCpnt->request_buffer;
-
--static int iscsi_scsi_data_in(struct iscsi_conn *conn)
-+ * This is the callback invoked when the PDU header has
-+ * been received. If the header is followed by additional
-+ * header segments, we go back for more data.
-+ */
-+static int
-+iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
- {
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- struct iscsi_cmd_task *ctask = tcp_conn->in.ctask;
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- struct scsi_cmnd *sc = ctask->sc;
-- struct scatterlist *sg;
-- int i, offset, rc = 0;
+- if ( status & IDE_STATUS_DRQ )
+- {
+- insw (pports[PORT_DATA], &identifyData, sizeof (identifyData) >> 1);
-
-- BUG_ON((void*)ctask != sc->SCp.ptr);
+- memset (pinquiryData, 0, SCpnt->request_bufflen); // Zero INQUIRY data structure.
+- pinquiryData->DeviceType = 0;
+- pinquiryData->Versions = 2;
+- pinquiryData->AdditionalLength = 35 - 4;
-
-- offset = tcp_ctask->data_offset;
-- sg = scsi_sglist(sc);
+- // Fill in vendor identification fields.
+- for ( z = 0; z < 8; z += 2 )
+- {
+- pinquiryData->VendorId[z] = ((UCHAR *)identifyData.ModelNumber)[z + 1];
+- pinquiryData->VendorId[z + 1] = ((UCHAR *)identifyData.ModelNumber)[z];
+- }
-
-- if (tcp_ctask->data_offset)
-- for (i = 0; i < tcp_ctask->sg_count; i++)
-- offset -= sg[i].length;
-- /* we've passed through partial sg*/
-- if (offset < 0)
-- offset = 0;
+- // Initialize unused portion of product id.
+- for ( z = 0; z < 4; z++ )
+- pinquiryData->ProductId[12 + z] = ' ';
-
-- for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++) {
-- char *dest;
+- // Move firmware revision from IDENTIFY data to
+- // product revision in INQUIRY data.
+- for ( z = 0; z < 4; z += 2 )
+- {
+- pinquiryData->ProductRevisionLevel[z] = ((UCHAR *)identifyData.FirmwareRevision)[z + 1];
+- pinquiryData->ProductRevisionLevel[z + 1] = ((UCHAR *)identifyData.FirmwareRevision)[z];
+- }
-
-- dest = kmap_atomic(sg_page(&sg[i]), KM_SOFTIRQ0);
-- rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg[i].offset,
-- sg[i].length, offset);
-- kunmap_atomic(dest, KM_SOFTIRQ0);
-- if (rc == -EAGAIN)
-- /* continue with the next SKB/PDU */
-- return rc;
-- if (!rc) {
-- if (conn->datadgst_en) {
-- if (!offset)
-- crypto_hash_update(
-- &tcp_conn->rx_hash,
-- &sg[i], sg[i].length);
-- else
-- partial_sg_digest_update(
-- &tcp_conn->rx_hash,
-- &sg[i],
-- sg[i].offset + offset,
-- sg[i].length - offset);
+- SCpnt->result = DID_OK << 16;
+- padapter->SCpnt = NULL;
+- SCpnt->scsi_done (SCpnt);
+- return;
+- }
+- break;
- }
-- offset = 0;
-- tcp_ctask->sg_count++;
-- }
-
-- if (!ctask->data_count) {
-- if (rc && conn->datadgst_en)
-- /*
-- * data-in is complete, but buffer not...
-- */
-- partial_sg_digest_update(&tcp_conn->rx_hash,
-- &sg[i],
-- sg[i].offset,
-- sg[i].length-rc);
-- rc = 0;
-- break;
+- default:
+- SCpnt->result = DID_OK << 16;
+- padapter->SCpnt = NULL;
+- SCpnt->scsi_done (SCpnt);
+- return;
- }
-
-- if (!tcp_conn->in.copy)
-- return -EAGAIN;
+-irqerror:;
+- DEB(printk ("\npsi240i error Device Status: %X\n", status));
+- SCpnt->result = DecodeError (shost, status);
+- SCpnt->scsi_done (SCpnt);
- }
-- BUG_ON(ctask->data_count);
-+ struct iscsi_conn *conn = tcp_conn->iscsi_conn;
-+ struct iscsi_hdr *hdr;
-
-- /* check for non-exceptional status */
-- if (tcp_conn->in.hdr->flags & ISCSI_FLAG_DATA_STATUS) {
-- debug_scsi("done [sc %lx res %d itt 0x%x flags 0x%x]\n",
-- (long)sc, sc->result, ctask->itt,
-- tcp_conn->in.hdr->flags);
-- spin_lock(&conn->session->lock);
-- __iscsi_complete_pdu(conn, tcp_conn->in.hdr, NULL, 0);
-- spin_unlock(&conn->session->lock);
-+ /* Check if there are additional header segments
-+ * *prior* to computing the digest, because we
-+ * may need to go back to the caller for more.
-+ */
-+ hdr = (struct iscsi_hdr *) tcp_conn->in.hdr_buf;
-+ if (segment->copied == sizeof(struct iscsi_hdr) && hdr->hlength) {
-+ /* Bump the header length - the caller will
-+ * just loop around and get the AHS for us, and
-+ * call again. */
-+ unsigned int ahslen = hdr->hlength << 2;
-+
-+ /* Make sure we don't overflow */
-+ if (sizeof(*hdr) + ahslen > sizeof(tcp_conn->in.hdr_buf))
-+ return ISCSI_ERR_AHSLEN;
-+
-+ segment->total_size += ahslen;
-+ segment->size += ahslen;
-+ return 0;
- }
-
-- return rc;
--}
-
--static int
--iscsi_data_recv(struct iscsi_conn *conn)
+-static irqreturn_t do_Irq_Handler (int irq, void *dev_id)
-{
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- int rc = 0, opcode;
+- unsigned long flags;
+- struct Scsi_Host *dev = dev_id;
+-
+- spin_lock_irqsave(dev->host_lock, flags);
+- Irq_Handler(irq, dev_id);
+- spin_unlock_irqrestore(dev->host_lock, flags);
+- return IRQ_HANDLED;
+-}
-
-- opcode = tcp_conn->in.hdr->opcode & ISCSI_OPCODE_MASK;
-- switch (opcode) {
-- case ISCSI_OP_SCSI_DATA_IN:
-- rc = iscsi_scsi_data_in(conn);
-- break;
-- case ISCSI_OP_SCSI_CMD_RSP:
-- case ISCSI_OP_TEXT_RSP:
-- case ISCSI_OP_LOGIN_RSP:
-- case ISCSI_OP_ASYNC_EVENT:
-- case ISCSI_OP_REJECT:
-- /*
-- * Collect data segment to the connection's data
-- * placeholder
-- */
-- if (iscsi_tcp_copy(conn, tcp_conn->in.datalen)) {
-- rc = -EAGAIN;
-- goto exit;
-+ /* We're done processing the header. See if we're doing
-+ * header digests; if so, set up the recv_digest buffer
-+ * and go back for more. */
-+ if (conn->hdrdgst_en) {
-+ if (segment->digest_len == 0) {
-+ iscsi_tcp_segment_splice_digest(segment,
-+ segment->recv_digest);
-+ return 0;
- }
-+ iscsi_tcp_dgst_header(&tcp_conn->rx_hash, hdr,
-+ segment->total_copied - ISCSI_DIGEST_SIZE,
-+ segment->digest);
-
-- rc = iscsi_complete_pdu(conn, tcp_conn->in.hdr, conn->data,
-- tcp_conn->in.datalen);
-- if (!rc && conn->datadgst_en && opcode != ISCSI_OP_LOGIN_RSP)
-- iscsi_recv_digest_update(tcp_conn, conn->data,
-- tcp_conn->in.datalen);
-- break;
-- default:
-- BUG_ON(1);
-+ if (!iscsi_tcp_dgst_verify(tcp_conn, segment))
-+ return ISCSI_ERR_HDR_DGST;
- }
--exit:
-- return rc;
-+
-+ tcp_conn->in.hdr = hdr;
-+ return iscsi_tcp_hdr_dissect(conn, hdr);
- }
-
- /**
-- * iscsi_tcp_data_recv - TCP receive in sendfile fashion
-+ * iscsi_tcp_recv - TCP receive in sendfile fashion
- * @rd_desc: read descriptor
- * @skb: socket buffer
- * @offset: offset in skb
- * @len: skb->len - offset
- **/
- static int
--iscsi_tcp_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
-- unsigned int offset, size_t len)
-+iscsi_tcp_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
-+ unsigned int offset, size_t len)
- {
-- int rc;
- struct iscsi_conn *conn = rd_desc->arg.data;
- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- int processed;
-- char pad[ISCSI_PAD_LEN];
-- struct scatterlist sg;
+-/****************************************************************
+- * Name: Psi240i_QueueCommand
+- *
+- * Description: Process a queued command from the SCSI manager.
+- *
+- * Parameters: SCpnt - Pointer to SCSI command structure.
+- * done - Pointer to done function to call.
+- *
+- * Returns: Status code.
+- *
+- ****************************************************************/
+-static int Psi240i_QueueCommand(struct scsi_cmnd *SCpnt,
+- void (*done)(struct scsi_cmnd *))
+- {
+- UCHAR *cdb = (UCHAR *)SCpnt->cmnd;
+- // Pointer to SCSI CDB
+- PADAPTER240I padapter = HOSTDATA (SCpnt->device->host);
+- // Pointer to adapter control structure
+- POUR_DEVICE pdev = &padapter->device [SCpnt->device->id];
+- // Pointer to device information
+- UCHAR rc;
+- // command return code
-
-- /*
-- * Save current SKB and its offset in the corresponding
-- * connection context.
-- */
-- tcp_conn->in.copy = skb->len - offset;
-- tcp_conn->in.offset = offset;
-- tcp_conn->in.skb = skb;
-- tcp_conn->in.len = tcp_conn->in.copy;
-- BUG_ON(tcp_conn->in.copy <= 0);
-- debug_tcp("in %d bytes\n", tcp_conn->in.copy);
-+ struct iscsi_segment *segment = &tcp_conn->in.segment;
-+ struct skb_seq_state seq;
-+ unsigned int consumed = 0;
-+ int rc = 0;
-
--more:
-- tcp_conn->in.copied = 0;
-- rc = 0;
-+ debug_tcp("in %d bytes\n", skb->len - offset);
-
- if (unlikely(conn->suspend_rx)) {
- debug_tcp("conn %d Rx suspended!\n", conn->id);
- return 0;
- }
-
-- if (tcp_conn->in_progress == IN_PROGRESS_WAIT_HEADER ||
-- tcp_conn->in_progress == IN_PROGRESS_HEADER_GATHER) {
-- rc = iscsi_hdr_extract(tcp_conn);
-- if (rc) {
-- if (rc == -EAGAIN)
-- goto nomore;
-- else {
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- return 0;
-- }
-- }
-+ skb_prepare_seq_read(skb, offset, skb->len, &seq);
-+ while (1) {
-+ unsigned int avail;
-+ const u8 *ptr;
-
-- /*
-- * Verify and process incoming PDU header.
-- */
-- rc = iscsi_tcp_hdr_recv(conn);
-- if (!rc && tcp_conn->in.datalen) {
-- if (conn->datadgst_en)
-- crypto_hash_init(&tcp_conn->rx_hash);
-- tcp_conn->in_progress = IN_PROGRESS_DATA_RECV;
-- } else if (rc) {
-- iscsi_conn_failure(conn, rc);
+- SCpnt->scsi_done = done;
+- padapter->ide.ide.ides.spigot = pdev->spigot;
+- padapter->buffer = SCpnt->request_buffer;
+- if (done)
+- {
+- if ( !pdev->device )
+- {
+- SCpnt->result = DID_BAD_TARGET << 16;
+- done (SCpnt);
- return 0;
-+ avail = skb_seq_read(consumed, &ptr, &seq);
-+ if (avail == 0) {
-+ debug_tcp("no more data avail. Consumed %d\n",
-+ consumed);
-+ break;
- }
-- }
--
-- if (tcp_conn->in_progress == IN_PROGRESS_DDIGEST_RECV &&
-- tcp_conn->in.copy) {
-- uint32_t recv_digest;
--
-- debug_tcp("extra data_recv offset %d copy %d\n",
-- tcp_conn->in.offset, tcp_conn->in.copy);
--
-- if (!tcp_conn->data_copied) {
-- if (tcp_conn->in.padding) {
-- debug_tcp("padding -> %d\n",
-- tcp_conn->in.padding);
-- memset(pad, 0, tcp_conn->in.padding);
-- sg_init_one(&sg, pad, tcp_conn->in.padding);
-- crypto_hash_update(&tcp_conn->rx_hash,
-- &sg, sg.length);
-+ BUG_ON(segment->copied >= segment->size);
-+
-+ debug_tcp("skb %p ptr=%p avail=%u\n", skb, ptr, avail);
-+ rc = iscsi_tcp_segment_recv(tcp_conn, segment, ptr, avail);
-+ BUG_ON(rc == 0);
-+ consumed += rc;
-+
-+ if (segment->total_copied >= segment->total_size) {
-+ debug_tcp("segment done\n");
-+ rc = segment->done(tcp_conn, segment);
-+ if (rc != 0) {
-+ skb_abort_seq_read(&seq);
-+ goto error;
- }
-- crypto_hash_final(&tcp_conn->rx_hash,
-- (u8 *) &tcp_conn->in.datadgst);
-- debug_tcp("rx digest 0x%x\n", tcp_conn->in.datadgst);
+- }
- }
-
-- rc = iscsi_tcp_copy(conn, sizeof(uint32_t));
-- if (rc) {
-- if (rc == -EAGAIN)
-- goto again;
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- return 0;
+- else
+- {
+- printk("psi240i_queuecommand: %02X: done can't be NULL\n", *cdb);
+- return 0;
- }
-
-- memcpy(&recv_digest, conn->data, sizeof(uint32_t));
-- if (recv_digest != tcp_conn->in.datadgst) {
-- debug_tcp("iscsi_tcp: data digest error!"
-- "0x%x != 0x%x\n", recv_digest,
-- tcp_conn->in.datadgst);
-- iscsi_conn_failure(conn, ISCSI_ERR_DATA_DGST);
-- return 0;
-- } else {
-- debug_tcp("iscsi_tcp: data digest match!"
-- "0x%x == 0x%x\n", recv_digest,
-- tcp_conn->in.datadgst);
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-+ /* The done() functions sets up the
-+ * next segment. */
- }
- }
-+ skb_abort_seq_read(&seq);
-+ conn->rxdata_octets += consumed;
-+ return consumed;
-
-- if (tcp_conn->in_progress == IN_PROGRESS_DATA_RECV &&
-- tcp_conn->in.copy) {
-- debug_tcp("data_recv offset %d copy %d\n",
-- tcp_conn->in.offset, tcp_conn->in.copy);
+- switch ( *cdb )
+- {
+- case SCSIOP_INQUIRY: // inquiry CDB
+- {
+- padapter->ide.ide.ide[6] = pdev->byte6;
+- padapter->ide.ide.ides.cmd = IDE_COMMAND_IDENTIFY;
+- break;
+- }
-
-- rc = iscsi_data_recv(conn);
-- if (rc) {
-- if (rc == -EAGAIN)
-- goto again;
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+- case SCSIOP_TEST_UNIT_READY: // test unit ready CDB
+- SCpnt->result = DID_OK << 16;
+- done (SCpnt);
- return 0;
-- }
--
-- if (tcp_conn->in.padding)
-- tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
-- else if (conn->datadgst_en)
-- tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
-- else
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-- tcp_conn->data_copied = 0;
-- }
--
-- if (tcp_conn->in_progress == IN_PROGRESS_PAD_RECV &&
-- tcp_conn->in.copy) {
-- int copylen = min(tcp_conn->in.padding - tcp_conn->data_copied,
-- tcp_conn->in.copy);
--
-- tcp_conn->in.copy -= copylen;
-- tcp_conn->in.offset += copylen;
-- tcp_conn->data_copied += copylen;
-
-- if (tcp_conn->data_copied != tcp_conn->in.padding)
-- tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
-- else if (conn->datadgst_en)
-- tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
-- else
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-- tcp_conn->data_copied = 0;
-- }
+- case SCSIOP_READ_CAPACITY: // read capctiy CDB
+- {
+- PREAD_CAPACITY_DATA pdata = (PREAD_CAPACITY_DATA)SCpnt->request_buffer;
-
-- debug_tcp("f, processed %d from out of %d padding %d\n",
-- tcp_conn->in.offset - offset, (int)len, tcp_conn->in.padding);
-- BUG_ON(tcp_conn->in.offset - offset > len);
+- pdata->blksiz = 0x20000;
+- XANY2SCSI ((UCHAR *)&pdata->blks, pdev->blocks);
+- SCpnt->result = DID_OK << 16;
+- done (SCpnt);
+- return 0;
+- }
-
-- if (tcp_conn->in.offset - offset != len) {
-- debug_tcp("continue to process %d bytes\n",
-- (int)len - (tcp_conn->in.offset - offset));
-- goto more;
-- }
+- case SCSIOP_VERIFY: // verify CDB
+- *(ULONG *)padapter->ide.ide.ides.lba = XSCSI2LONG (&cdb[2]);
+- padapter->ide.ide.ide[6] |= pdev->byte6;
+- padapter->ide.ide.ide[2] = (UCHAR)((USHORT)cdb[8] | ((USHORT)cdb[7] << 8));
+- padapter->ide.ide.ides.cmd = IDE_COMMAND_VERIFY;
+- break;
-
--nomore:
-- processed = tcp_conn->in.offset - offset;
-- BUG_ON(processed == 0);
-- return processed;
+- case SCSIOP_READ: // read10 CDB
+- padapter->startSector = XSCSI2LONG (&cdb[2]);
+- padapter->sectorCount = (USHORT)cdb[8] | ((USHORT)cdb[7] << 8);
+- SetupTransfer (padapter, pdev->byte6);
+- padapter->ide.ide.ides.cmd = IDE_CMD_READ_MULTIPLE;
+- break;
-
--again:
-- processed = tcp_conn->in.offset - offset;
-- debug_tcp("c, processed %d from out of %d rd_desc_cnt %d\n",
-- processed, (int)len, (int)rd_desc->count);
-- BUG_ON(processed == 0);
-- BUG_ON(processed > len);
+- case SCSIOP_READ6: // read6 CDB
+- padapter->startSector = SCSI2LONG (&cdb[1]);
+- padapter->sectorCount = cdb[4];
+- SetupTransfer (padapter, pdev->byte6);
+- padapter->ide.ide.ides.cmd = IDE_CMD_READ_MULTIPLE;
+- break;
-
-- conn->rxdata_octets += processed;
-- return processed;
-+error:
-+ debug_tcp("Error receiving PDU, errno=%d\n", rc);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ return 0;
- }
-
- static void
- iscsi_tcp_data_ready(struct sock *sk, int flag)
- {
- struct iscsi_conn *conn = sk->sk_user_data;
-+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
- read_descriptor_t rd_desc;
-
- read_lock(&sk->sk_callback_lock);
-
- /*
-- * Use rd_desc to pass 'conn' to iscsi_tcp_data_recv.
-+ * Use rd_desc to pass 'conn' to iscsi_tcp_recv.
- * We set count to 1 because we want the network layer to
-- * hand us all the skbs that are available. iscsi_tcp_data_recv
-+ * hand us all the skbs that are available. iscsi_tcp_recv
- * handled pdus that cross buffers or pdus that still need data.
- */
- rd_desc.arg.data = conn;
- rd_desc.count = 1;
-- tcp_read_sock(sk, &rd_desc, iscsi_tcp_data_recv);
-+ tcp_read_sock(sk, &rd_desc, iscsi_tcp_recv);
-
- read_unlock(&sk->sk_callback_lock);
-+
-+ /* If we had to (atomically) map a highmem page,
-+ * unmap it now. */
-+ iscsi_tcp_segment_unmap(&tcp_conn->in.segment);
- }
-
- static void
-@@ -1077,121 +1079,173 @@ iscsi_conn_restore_callbacks(struct iscsi_tcp_conn *tcp_conn)
- }
-
- /**
-- * iscsi_send - generic send routine
-- * @sk: kernel's socket
-- * @buf: buffer to write from
-- * @size: actual size to write
-- * @flags: socket's flags
-- */
--static inline int
--iscsi_send(struct iscsi_conn *conn, struct iscsi_buf *buf, int size, int flags)
-+ * iscsi_xmit - TCP transmit
-+ **/
-+static int
-+iscsi_xmit(struct iscsi_conn *conn)
- {
- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- struct socket *sk = tcp_conn->sock;
-- int offset = buf->sg.offset + buf->sent, res;
-+ struct iscsi_segment *segment = &tcp_conn->out.segment;
-+ unsigned int consumed = 0;
-+ int rc = 0;
-
-- /*
-- * if we got use_sg=0 or are sending something we kmallocd
-- * then we did not have to do kmap (kmap returns page_address)
-- *
-- * if we got use_sg > 0, but had to drop down, we do not
-- * set clustering so this should only happen for that
-- * slab case.
-- */
-- if (buf->use_sendmsg)
-- res = sock_no_sendpage(sk, sg_page(&buf->sg), offset, size, flags);
-- else
-- res = tcp_conn->sendpage(sk, sg_page(&buf->sg), offset, size, flags);
+- case SCSIOP_WRITE: // write10 CDB
+- padapter->startSector = XSCSI2LONG (&cdb[2]);
+- padapter->sectorCount = (USHORT)cdb[8] | ((USHORT)cdb[7] << 8);
+- SetupTransfer (padapter, pdev->byte6);
+- padapter->ide.ide.ides.cmd = IDE_CMD_WRITE_MULTIPLE;
+- break;
+- case SCSIOP_WRITE6: // write6 CDB
+- padapter->startSector = SCSI2LONG (&cdb[1]);
+- padapter->sectorCount = cdb[4];
+- SetupTransfer (padapter, pdev->byte6);
+- padapter->ide.ide.ides.cmd = IDE_CMD_WRITE_MULTIPLE;
+- break;
-
-- if (res >= 0) {
-- conn->txdata_octets += res;
-- buf->sent += res;
-- return res;
-+ while (1) {
-+ rc = iscsi_tcp_xmit_segment(tcp_conn, segment);
-+ if (rc < 0)
-+ goto error;
-+ if (rc == 0)
-+ break;
-+
-+ consumed += rc;
-+
-+ if (segment->total_copied >= segment->total_size) {
-+ if (segment->done != NULL) {
-+ rc = segment->done(tcp_conn, segment);
-+ if (rc < 0)
-+ goto error;
-+ }
-+ }
- }
-
-- tcp_conn->sendpage_failures_cnt++;
-- if (res == -EAGAIN)
-- res = -ENOBUFS;
-- else
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- return res;
-+ debug_tcp("xmit %d bytes\n", consumed);
-+
-+ conn->txdata_octets += consumed;
-+ return consumed;
-+
-+error:
-+ /* Transmit error. We could initiate error recovery
-+ * here. */
-+ debug_tcp("Error sending PDU, errno=%d\n", rc);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ return rc;
- }
-
- /**
-- * iscsi_sendhdr - send PDU Header via tcp_sendpage()
-- * @conn: iscsi connection
-- * @buf: buffer to write from
-- * @datalen: length of data to be sent after the header
-- *
-- * Notes:
-- * (Tx, Fast Path)
-- **/
-+ * iscsi_tcp_xmit_qlen - return the number of bytes queued for xmit
-+ */
- static inline int
--iscsi_sendhdr(struct iscsi_conn *conn, struct iscsi_buf *buf, int datalen)
-+iscsi_tcp_xmit_qlen(struct iscsi_conn *conn)
- {
-- int flags = 0; /* MSG_DONTWAIT; */
-- int res, size;
+- default:
+- DEB (printk ("psi240i_queuecommand: Unsupported command %02X\n", *cdb));
+- SCpnt->result = DID_ERROR << 16;
+- done (SCpnt);
+- return 0;
+- }
-
-- size = buf->sg.length - buf->sent;
-- BUG_ON(buf->sent + size > buf->sg.length);
-- if (buf->sent + size != buf->sg.length || datalen)
-- flags |= MSG_MORE;
+- padapter->SCpnt = SCpnt; // Save this command data
-
-- res = iscsi_send(conn, buf, size, flags);
-- debug_tcp("sendhdr %d bytes, sent %d res %d\n", size, buf->sent, res);
-- if (res >= 0) {
-- if (size != res)
-- return -EAGAIN;
+- rc = IdeCmd (padapter);
+- if ( rc )
+- {
+- padapter->expectingIRQ = 0;
+- DEB (printk ("psi240i_queuecommand: %02X, %02X: Device failed to respond for command\n", *cdb, padapter->ide.ide.ides.cmd));
+- SCpnt->result = DID_ERROR << 16;
+- done (SCpnt);
- return 0;
+- }
+- DEB (printk("psi240i_queuecommand: %02X, %02X now waiting for interrupt ", *cdb, padapter->ide.ide.ides.cmd));
+- return 0;
- }
-+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-+ struct iscsi_segment *segment = &tcp_conn->out.segment;
-
-- return res;
-+ return segment->total_copied - segment->total_size;
- }
-
--/**
-- * iscsi_sendpage - send one page of iSCSI Data-Out.
-- * @conn: iscsi connection
-- * @buf: buffer to write from
-- * @count: remaining data
-- * @sent: number of bytes sent
-- *
-- * Notes:
-- * (Tx, Fast Path)
-- **/
- static inline int
--iscsi_sendpage(struct iscsi_conn *conn, struct iscsi_buf *buf,
-- int *count, int *sent)
-+iscsi_tcp_flush(struct iscsi_conn *conn)
- {
-- int flags = 0; /* MSG_DONTWAIT; */
-- int res, size;
--
-- size = buf->sg.length - buf->sent;
-- BUG_ON(buf->sent + size > buf->sg.length);
-- if (size > *count)
-- size = *count;
-- if (buf->sent + size != buf->sg.length || *count != size)
-- flags |= MSG_MORE;
--
-- res = iscsi_send(conn, buf, size, flags);
-- debug_tcp("sendpage: %d bytes, sent %d left %d sent %d res %d\n",
-- size, buf->sent, *count, *sent, res);
-- if (res >= 0) {
-- *count -= res;
-- *sent += res;
-- if (size != res)
-+ int rc;
-+
-+ while (iscsi_tcp_xmit_qlen(conn)) {
-+ rc = iscsi_xmit(conn);
-+ if (rc == 0)
- return -EAGAIN;
-- return 0;
-+ if (rc < 0)
-+ return rc;
- }
-
-- return res;
-+ return 0;
- }
-
--static inline void
--iscsi_data_digest_init(struct iscsi_tcp_conn *tcp_conn,
-- struct iscsi_tcp_cmd_task *tcp_ctask)
-+/*
-+ * This is called when we're done sending the header.
-+ * Simply copy the data_segment to the send segment, and return.
-+ */
-+static int
-+iscsi_tcp_send_hdr_done(struct iscsi_tcp_conn *tcp_conn,
-+ struct iscsi_segment *segment)
-+{
-+ tcp_conn->out.segment = tcp_conn->out.data_segment;
-+ debug_tcp("Header done. Next segment size %u total_size %u\n",
-+ tcp_conn->out.segment.size, tcp_conn->out.segment.total_size);
-+ return 0;
-+}
-+
-+static void
-+iscsi_tcp_send_hdr_prep(struct iscsi_conn *conn, void *hdr, size_t hdrlen)
- {
-- crypto_hash_init(&tcp_conn->tx_hash);
-- tcp_ctask->digest_count = 4;
-+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-+
-+ debug_tcp("%s(%p%s)\n", __FUNCTION__, tcp_conn,
-+ conn->hdrdgst_en? ", digest enabled" : "");
-+
-+ /* Clear the data segment - needs to be filled in by the
-+ * caller using iscsi_tcp_send_data_prep() */
-+ memset(&tcp_conn->out.data_segment, 0, sizeof(struct iscsi_segment));
-+
-+ /* If header digest is enabled, compute the CRC and
-+ * place the digest into the same buffer. We make
-+ * sure that both iscsi_tcp_ctask and mtask have
-+ * sufficient room.
-+ */
-+ if (conn->hdrdgst_en) {
-+ iscsi_tcp_dgst_header(&tcp_conn->tx_hash, hdr, hdrlen,
-+ hdr + hdrlen);
-+ hdrlen += ISCSI_DIGEST_SIZE;
-+ }
-+
-+ /* Remember header pointer for later, when we need
-+ * to decide whether there's a payload to go along
-+ * with the header. */
-+ tcp_conn->out.hdr = hdr;
-+
-+ iscsi_segment_init_linear(&tcp_conn->out.segment, hdr, hdrlen,
-+ iscsi_tcp_send_hdr_done, NULL);
-+}
-+
-+/*
-+ * Prepare the send buffer for the payload data.
-+ * Padding and checksumming will all be taken care
-+ * of by the iscsi_segment routines.
-+ */
-+static int
-+iscsi_tcp_send_data_prep(struct iscsi_conn *conn, struct scatterlist *sg,
-+ unsigned int count, unsigned int offset,
-+ unsigned int len)
-+{
-+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-+ struct hash_desc *tx_hash = NULL;
-+ unsigned int hdr_spec_len;
-+
-+ debug_tcp("%s(%p, offset=%d, datalen=%d%s)\n", __FUNCTION__,
-+ tcp_conn, offset, len,
-+ conn->datadgst_en? ", digest enabled" : "");
-+
-+ /* Make sure the datalen matches what the caller
-+ said he would send. */
-+ hdr_spec_len = ntoh24(tcp_conn->out.hdr->dlength);
-+ WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
-+
-+ if (conn->datadgst_en)
-+ tx_hash = &tcp_conn->tx_hash;
-+
-+ return iscsi_segment_seek_sg(&tcp_conn->out.data_segment,
-+ sg, count, offset, len,
-+ NULL, tx_hash);
-+}
-+
-+static void
-+iscsi_tcp_send_linear_data_prepare(struct iscsi_conn *conn, void *data,
-+ size_t len)
-+{
-+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-+ struct hash_desc *tx_hash = NULL;
-+ unsigned int hdr_spec_len;
-+
-+ debug_tcp("%s(%p, datalen=%d%s)\n", __FUNCTION__, tcp_conn, len,
-+ conn->datadgst_en? ", digest enabled" : "");
-+
-+ /* Make sure the datalen matches what the caller
-+ said he would send. */
-+ hdr_spec_len = ntoh24(tcp_conn->out.hdr->dlength);
-+ WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
-+
-+ if (conn->datadgst_en)
-+ tx_hash = &tcp_conn->tx_hash;
-+
-+ iscsi_segment_init_linear(&tcp_conn->out.data_segment,
-+ data, len, NULL, tx_hash);
- }
-
- /**
-@@ -1207,12 +1261,17 @@ iscsi_data_digest_init(struct iscsi_tcp_conn *tcp_conn,
- *
- * Called under connection lock.
- **/
--static void
-+static int
- iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
-- struct iscsi_r2t_info *r2t, int left)
-+ struct iscsi_r2t_info *r2t)
- {
- struct iscsi_data *hdr;
-- int new_offset;
-+ int new_offset, left;
-+
-+ BUG_ON(r2t->data_length - r2t->sent < 0);
-+ left = r2t->data_length - r2t->sent;
-+ if (left == 0)
-+ return 0;
-
- hdr = &r2t->dtask.hdr;
- memset(hdr, 0, sizeof(struct iscsi_data));
-@@ -1233,43 +1292,46 @@ iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
- r2t->data_count = left;
- hdr->flags = ISCSI_FLAG_CMD_FINAL;
- }
-- conn->dataout_pdus_cnt++;
--
-- iscsi_buf_init_iov(&r2t->headbuf, (char*)hdr,
-- sizeof(struct iscsi_hdr));
--
-- if (iscsi_buf_left(&r2t->sendbuf))
-- return;
--
-- iscsi_buf_init_sg(&r2t->sendbuf, r2t->sg);
-- r2t->sg += 1;
--}
-
--static void iscsi_set_padding(struct iscsi_tcp_cmd_task *tcp_ctask,
-- unsigned long len)
--{
-- tcp_ctask->pad_count = len & (ISCSI_PAD_LEN - 1);
-- if (!tcp_ctask->pad_count)
-- return;
-
-- tcp_ctask->pad_count = ISCSI_PAD_LEN - tcp_ctask->pad_count;
-- debug_scsi("write padding %d bytes\n", tcp_ctask->pad_count);
-- set_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate);
-+ conn->dataout_pdus_cnt++;
-+ return 1;
- }
-
- /**
-- * iscsi_tcp_cmd_init - Initialize iSCSI SCSI_READ or SCSI_WRITE commands
-+ * iscsi_tcp_ctask - Initialize iSCSI SCSI_READ or SCSI_WRITE commands
- * @conn: iscsi connection
- * @ctask: scsi command task
- * @sc: scsi command
- **/
--static void
--iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
-+static int
-+iscsi_tcp_ctask_init(struct iscsi_cmd_task *ctask)
- {
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-+ struct iscsi_conn *conn = ctask->conn;
-+ struct scsi_cmnd *sc = ctask->sc;
-+ int err;
-
- BUG_ON(__kfifo_len(tcp_ctask->r2tqueue));
-- tcp_ctask->xmstate = 1 << XMSTATE_BIT_CMD_HDR_INIT;
-+ tcp_ctask->sent = 0;
-+ tcp_ctask->exp_datasn = 0;
-+
-+ /* Prepare PDU, optionally w/ immediate data */
-+ debug_scsi("ctask deq [cid %d itt 0x%x imm %d unsol %d]\n",
-+ conn->id, ctask->itt, ctask->imm_count,
-+ ctask->unsol_count);
-+ iscsi_tcp_send_hdr_prep(conn, ctask->hdr, ctask->hdr_len);
-+
-+ if (!ctask->imm_count)
-+ return 0;
-+
-+ /* If we have immediate data, attach a payload */
-+ err = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc), scsi_sg_count(sc),
-+ 0, ctask->imm_count);
-+ if (err)
-+ return err;
-+ tcp_ctask->sent += ctask->imm_count;
-+ ctask->imm_count = 0;
-+ return 0;
- }
-
- /**
-@@ -1281,484 +1343,130 @@ iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
- * The function can return -EAGAIN in which case caller must
- * call it again later, or recover. '0' return code means successful
- * xmit.
+-/***************************************************************************
+- * Name: ReadChipMemory
- *
-- * Management xmit state machine consists of these states:
-- * XMSTATE_BIT_IMM_HDR_INIT - calculate digest of PDU Header
-- * XMSTATE_BIT_IMM_HDR - PDU Header xmit in progress
-- * XMSTATE_BIT_IMM_DATA - PDU Data xmit in progress
-- * XMSTATE_VALUE_IDLE - management PDU is done
- **/
- static int
- iscsi_tcp_mtask_xmit(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
- {
-- struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
- int rc;
-
-- debug_scsi("mtask deq [cid %d state %x itt 0x%x]\n",
-- conn->id, tcp_mtask->xmstate, mtask->itt);
--
-- if (test_bit(XMSTATE_BIT_IMM_HDR_INIT, &tcp_mtask->xmstate)) {
-- iscsi_buf_init_iov(&tcp_mtask->headbuf, (char*)mtask->hdr,
-- sizeof(struct iscsi_hdr));
+- * Description: Read information from controller memory.
+- *
+- * Parameters: psetup - Pointer to memory image of setup information.
+- * base - base address of memory.
+- * length - length of data space in bytes.
+- * port - I/O address of data port.
+- *
+- * Returns: Nothing.
+- *
+- **************************************************************************/
+-static void ReadChipMemory (void *pdata, USHORT base, USHORT length, USHORT port)
+- {
+- USHORT z, zz;
+- UCHAR *pd = (UCHAR *)pdata;
+- outb_p (SEL_NONE, port + REG_SEL_FAIL); // setup data port
+- zz = 0;
+- while ( zz < length )
+- {
+- outw_p (base, port + REG_ADDRESS); // setup address
-
-- if (mtask->data_count) {
-- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate);
-- iscsi_buf_init_iov(&tcp_mtask->sendbuf,
-- (char*)mtask->data,
-- mtask->data_count);
+- for ( z = 0; z < 8; z++ )
+- {
+- if ( (zz + z) < length )
+- *pd++ = inb_p (port + z); // read data byte
+- }
+- zz += 8;
+- base += 8;
- }
--
-- if (conn->c_stage != ISCSI_CONN_INITIAL_STAGE &&
-- conn->stop_stage != STOP_CONN_RECOVER &&
-- conn->hdrdgst_en)
-- iscsi_hdr_digest(conn, &tcp_mtask->headbuf,
-- (u8*)tcp_mtask->hdrext);
--
-- tcp_mtask->sent = 0;
-- clear_bit(XMSTATE_BIT_IMM_HDR_INIT, &tcp_mtask->xmstate);
-- set_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate);
-- }
--
-- if (test_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate)) {
-- rc = iscsi_sendhdr(conn, &tcp_mtask->headbuf,
-- mtask->data_count);
-- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_IMM_HDR, &tcp_mtask->xmstate);
- }
+-/****************************************************************
+- * Name: Psi240i_Detect
+- *
+- * Description: Detect and initialize our boards.
+- *
+- * Parameters: tpnt - Pointer to SCSI host template structure.
+- *
+- * Returns: Number of adapters found.
+- *
+- ****************************************************************/
+-static int Psi240i_Detect (struct scsi_host_template *tpnt)
+- {
+- int board;
+- int count = 0;
+- int unit;
+- int z;
+- USHORT port, port_range = 16;
+- CHIP_CONFIG_N chipConfig;
+- CHIP_DEVICE_N chipDevice[8];
+- struct Scsi_Host *pshost;
-
-- if (test_and_clear_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate)) {
-- BUG_ON(!mtask->data_count);
-- /* FIXME: implement.
-- * Virtual buffer could be spread across multiple pages...
-- */
-- do {
-- int rc;
--
-- rc = iscsi_sendpage(conn, &tcp_mtask->sendbuf,
-- &mtask->data_count, &tcp_mtask->sent);
-- if (rc) {
-- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_mtask->xmstate);
-- return rc;
-- }
-- } while (mtask->data_count);
-- }
-+ /* Flush any pending data first. */
-+ rc = iscsi_tcp_flush(conn);
-+ if (rc < 0)
-+ return rc;
-
-- BUG_ON(tcp_mtask->xmstate != XMSTATE_VALUE_IDLE);
- if (mtask->hdr->itt == RESERVED_ITT) {
- struct iscsi_session *session = conn->session;
-
- spin_lock_bh(&session->lock);
-- list_del(&conn->mtask->running);
-- __kfifo_put(session->mgmtpool.queue, (void*)&conn->mtask,
-- sizeof(void*));
-+ iscsi_free_mgmt_task(conn, mtask);
- spin_unlock_bh(&session->lock);
- }
-+
- return 0;
- }
-
-+/*
-+ * iscsi_tcp_ctask_xmit - xmit normal PDU task
-+ * @conn: iscsi connection
-+ * @ctask: iscsi command task
-+ *
-+ * We're expected to return 0 when everything was transmitted successfully,
-+ * -EAGAIN if there's still data in the queue, or != 0 for any other kind
-+ * of error.
-+ */
- static int
--iscsi_send_cmd_hdr(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
-+iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
- {
-- struct scsi_cmnd *sc = ctask->sc;
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-+ struct scsi_cmnd *sc = ctask->sc;
- int rc = 0;
-
-- if (test_bit(XMSTATE_BIT_CMD_HDR_INIT, &tcp_ctask->xmstate)) {
-- tcp_ctask->sent = 0;
-- tcp_ctask->sg_count = 0;
-- tcp_ctask->exp_datasn = 0;
+- for ( board = 0; board < MAXBOARDS; board++ ) // scan for I/O ports
+- {
+- pshost = NULL;
+- port = portAddr[board]; // get base address to test
+- if ( !request_region (port, port_range, "psi240i") )
+- continue;
+- if ( inb_p (port + REG_FAIL) != CHIP_ID ) // do the first test for likelihood that it is us
+- goto host_init_failure;
+- outb_p (SEL_NONE, port + REG_SEL_FAIL); // setup EEPROM/RAM access
+- outw (0, port + REG_ADDRESS); // setup EEPROM address zero
+- if ( inb_p (port) != 0x55 ) // test 1st byte
+- goto host_init_failure; // nope
+- if ( inb_p (port + 1) != 0xAA ) // test 2nd byte
+- goto host_init_failure; // nope
-
-- if (sc->sc_data_direction == DMA_TO_DEVICE) {
-- struct scatterlist *sg = scsi_sglist(sc);
+- // at this point our board is found and can be accessed. Now we need to initialize
+- // our information and register with the kernel.
-
-- iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg);
-- tcp_ctask->sg = sg + 1;
-- tcp_ctask->bad_sg = sg + scsi_sg_count(sc);
-
-- debug_scsi("cmd [itt 0x%x total %d imm_data %d "
-- "unsol count %d, unsol offset %d]\n",
-- ctask->itt, scsi_bufflen(sc),
-- ctask->imm_count, ctask->unsol_count,
-- ctask->unsol_offset);
-- }
+- ReadChipMemory (&chipConfig, CHIP_CONFIG, sizeof (chipConfig), port);
+- ReadChipMemory (&chipDevice, CHIP_DEVICE, sizeof (chipDevice), port);
+- ReadChipMemory (&ChipSetup, CHIP_EEPROM_DATA, sizeof (ChipSetup), port);
-
-- iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)ctask->hdr,
-- sizeof(struct iscsi_hdr));
+- if ( !chipConfig.numDrives ) // if no devices on this board
+- goto host_init_failure;
-
-- if (conn->hdrdgst_en)
-- iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
-- (u8*)tcp_ctask->hdrext);
-- clear_bit(XMSTATE_BIT_CMD_HDR_INIT, &tcp_ctask->xmstate);
-- set_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate);
-- }
+- pshost = scsi_register (tpnt, sizeof(ADAPTER240I));
+- if(pshost == NULL)
+- goto host_init_failure;
-
-- if (test_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate)) {
-- rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->imm_count);
-- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_CMD_HDR_XMIT, &tcp_ctask->xmstate);
+- PsiHost[chipConfig.irq - 10] = pshost;
+- pshost->unique_id = port;
+- pshost->io_port = port;
+- pshost->n_io_port = 16; /* Number of bytes of I/O space used */
+- pshost->irq = chipConfig.irq;
-
-- if (sc->sc_data_direction != DMA_TO_DEVICE)
-- return 0;
+- for ( z = 0; z < 11; z++ ) // build register address array
+- HOSTDATA(pshost)->ports[z] = port + z;
+- HOSTDATA(pshost)->ports[11] = port + REG_FAIL;
+- HOSTDATA(pshost)->ports[12] = port + REG_ALT_STAT;
+- DEB (printk ("\nPorts ="));
+- DEB (for (z=0;z<13;z++) printk(" %#04X",HOSTDATA(pshost)->ports[z]););
-
-- if (ctask->imm_count) {
-- set_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate);
-- iscsi_set_padding(tcp_ctask, ctask->imm_count);
+- for ( z = 0; z < chipConfig.numDrives; ++z )
+- {
+- unit = chipDevice[z].channel & 0x0F;
+- HOSTDATA(pshost)->device[unit].device = ChipSetup.setupDevice[unit].device;
+- HOSTDATA(pshost)->device[unit].byte6 = (UCHAR)(((unit & 1) << 4) | 0xE0);
+- HOSTDATA(pshost)->device[unit].spigot = (UCHAR)(1 << (unit >> 1));
+- HOSTDATA(pshost)->device[unit].sectors = ChipSetup.setupDevice[unit].sectors;
+- HOSTDATA(pshost)->device[unit].heads = ChipSetup.setupDevice[unit].heads;
+- HOSTDATA(pshost)->device[unit].cylinders = ChipSetup.setupDevice[unit].cylinders;
+- HOSTDATA(pshost)->device[unit].blocks = ChipSetup.setupDevice[unit].blocks;
+- DEB (printk ("\nHOSTDATA->device = %X", HOSTDATA(pshost)->device[unit].device));
+- DEB (printk ("\n byte6 = %X", HOSTDATA(pshost)->device[unit].byte6));
+- DEB (printk ("\n spigot = %X", HOSTDATA(pshost)->device[unit].spigot));
+- DEB (printk ("\n sectors = %X", HOSTDATA(pshost)->device[unit].sectors));
+- DEB (printk ("\n heads = %X", HOSTDATA(pshost)->device[unit].heads));
+- DEB (printk ("\n cylinders = %X", HOSTDATA(pshost)->device[unit].cylinders));
+- DEB (printk ("\n blocks = %lX", HOSTDATA(pshost)->device[unit].blocks));
+- }
-
-- if (ctask->conn->datadgst_en) {
-- iscsi_data_digest_init(ctask->conn->dd_data,
-- tcp_ctask);
-- tcp_ctask->immdigest = 0;
+- if ( request_irq (chipConfig.irq, do_Irq_Handler, 0, "psi240i", pshost) == 0 )
+- {
+- printk("\nPSI-240I EIDE CONTROLLER: at I/O = %x IRQ = %d\n", port, chipConfig.irq);
+- printk("(C) 1997 Perceptive Solutions, Inc. All rights reserved\n\n");
+- count++;
+- continue;
- }
-- }
-
-- if (ctask->unsol_count) {
-- set_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate);
-- set_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
+- printk ("Unable to allocate IRQ for PSI-240I controller.\n");
+-
+-host_init_failure:
+-
+- release_region (port, port_range);
+- if (pshost)
+- scsi_unregister (pshost);
+-
- }
+- return count;
- }
-- return rc;
--}
-
--static int
--iscsi_send_padding(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
+-static int Psi240i_Release(struct Scsi_Host *shost)
-{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- int sent = 0, rc;
+- if (shost->irq)
+- free_irq(shost->irq, NULL);
+- if (shost->io_port && shost->n_io_port)
+- release_region(shost->io_port, shost->n_io_port);
+- scsi_unregister(shost);
+- return 0;
+-}
-
-- if (test_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate)) {
-- iscsi_buf_init_iov(&tcp_ctask->sendbuf, (char*)&tcp_ctask->pad,
-- tcp_ctask->pad_count);
-- if (conn->datadgst_en)
-- crypto_hash_update(&tcp_conn->tx_hash,
-- &tcp_ctask->sendbuf.sg,
-- tcp_ctask->sendbuf.sg.length);
-- } else if (!test_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate))
-- return 0;
+-/****************************************************************
+- * Name: Psi240i_BiosParam
+- *
+- * Description: Process the biosparam request from the SCSI manager to
+- * return C/H/S data.
+- *
+- * Parameters: disk - Pointer to SCSI disk structure.
+- * dev - Major/minor number from kernel.
+- * geom - Pointer to integer array to place geometry data.
+- *
+- * Returns: zero.
+- *
+- ****************************************************************/
+-static int Psi240i_BiosParam (struct scsi_device *sdev, struct block_device *dev,
+- sector_t capacity, int geom[])
+- {
+- POUR_DEVICE pdev;
-
-- clear_bit(XMSTATE_BIT_W_PAD, &tcp_ctask->xmstate);
-- clear_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate);
-- debug_scsi("sending %d pad bytes for itt 0x%x\n",
-- tcp_ctask->pad_count, ctask->itt);
-- rc = iscsi_sendpage(conn, &tcp_ctask->sendbuf, &tcp_ctask->pad_count,
-- &sent);
-- if (rc) {
-- debug_scsi("padding send failed %d\n", rc);
-- set_bit(XMSTATE_BIT_W_RESEND_PAD, &tcp_ctask->xmstate);
-- }
-- return rc;
--}
+- pdev = &(HOSTDATA(sdev->host)->device[sdev_id(sdev)]);
-
--static int
--iscsi_send_digest(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
-- struct iscsi_buf *buf, uint32_t *digest)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask;
-- struct iscsi_tcp_conn *tcp_conn;
-- int rc, sent = 0;
+- geom[0] = pdev->heads;
+- geom[1] = pdev->sectors;
+- geom[2] = pdev->cylinders;
+- return 0;
+- }
-
-- if (!conn->datadgst_en)
-- return 0;
+-MODULE_LICENSE("GPL");
-
-- tcp_ctask = ctask->dd_data;
-- tcp_conn = conn->dd_data;
+-static struct scsi_host_template driver_template = {
+- .proc_name = "psi240i",
+- .name = "PSI-240I EIDE Disk Controller",
+- .detect = Psi240i_Detect,
+- .release = Psi240i_Release,
+- .queuecommand = Psi240i_QueueCommand,
+- .bios_param = Psi240i_BiosParam,
+- .can_queue = 1,
+- .this_id = -1,
+- .sg_tablesize = SG_NONE,
+- .cmd_per_lun = 1,
+- .use_clustering = DISABLE_CLUSTERING,
+-};
+-#include "scsi_module.c"
+diff --git a/drivers/scsi/psi240i.h b/drivers/scsi/psi240i.h
+deleted file mode 100644
+index 21ebb92..0000000
+--- a/drivers/scsi/psi240i.h
++++ /dev/null
+@@ -1,315 +0,0 @@
+-/*+M*************************************************************************
+- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
+- *
+- * Copyright (c) 1997 Perceptive Solutions, Inc.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2, or (at your option)
+- * any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; see the file COPYING. If not, write to
+- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+- *
+- *
+- * File Name: psi240i.h
+- *
+- * Description: Header file for the SCSI driver for the PSI240I
+- * EIDE interface card.
+- *
+- *-M*************************************************************************/
+-#ifndef _PSI240I_H
+-#define _PSI240I_H
-
-- if (!test_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate)) {
-- crypto_hash_final(&tcp_conn->tx_hash, (u8*)digest);
-- iscsi_buf_init_iov(buf, (char*)digest, 4);
-- }
-- clear_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate);
+-#include <linux/types.h>
-
-- rc = iscsi_sendpage(conn, buf, &tcp_ctask->digest_count, &sent);
-- if (!rc)
-- debug_scsi("sent digest 0x%x for itt 0x%x\n", *digest,
-- ctask->itt);
-- else {
-- debug_scsi("sending digest 0x%x failed for itt 0x%x!\n",
-- *digest, ctask->itt);
-- set_bit(XMSTATE_BIT_W_RESEND_DATA_DIGEST, &tcp_ctask->xmstate);
-- }
-- return rc;
--}
+-#ifndef PSI_EIDE_SCSIOP
+-#define PSI_EIDE_SCSIOP 1
-
--static int
--iscsi_send_data(struct iscsi_cmd_task *ctask, struct iscsi_buf *sendbuf,
-- struct scatterlist **sg, int *sent, int *count,
-- struct iscsi_buf *digestbuf, uint32_t *digest)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- struct iscsi_conn *conn = ctask->conn;
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
-- int rc, buf_sent, offset;
+-/************************************************/
+-/* Some defines that we like */
+-/************************************************/
+-#define CHAR char
+-#define UCHAR unsigned char
+-#define SHORT short
+-#define USHORT unsigned short
+-#define BOOL unsigned short
+-#define LONG long
+-#define ULONG unsigned long
+-#define VOID void
-
-- while (*count) {
-- buf_sent = 0;
-- offset = sendbuf->sent;
+-/************************************************/
+-/* Timeout konstants */
+-/************************************************/
+-#define TIMEOUT_READY 10 // 100 mSec
+-#define TIMEOUT_DRQ 40 // 400 mSec
-
-- rc = iscsi_sendpage(conn, sendbuf, count, &buf_sent);
-- *sent = *sent + buf_sent;
-- if (buf_sent && conn->datadgst_en)
-- partial_sg_digest_update(&tcp_conn->tx_hash,
-- &sendbuf->sg, sendbuf->sg.offset + offset,
-- buf_sent);
-- if (!iscsi_buf_left(sendbuf) && *sg != tcp_ctask->bad_sg) {
-- iscsi_buf_init_sg(sendbuf, *sg);
-- *sg = *sg + 1;
-- }
+-/************************************************/
+-/* Misc. macros */
+-/************************************************/
+-#define ANY2SCSI(up, p) \
+-((UCHAR *)up)[0] = (((ULONG)(p)) >> 8); \
+-((UCHAR *)up)[1] = ((ULONG)(p));
-
-- if (rc)
-- return rc;
-- }
+-#define SCSI2LONG(up) \
+-( (((long)*(((UCHAR *)up))) << 16) \
+-+ (((long)(((UCHAR *)up)[1])) << 8) \
+-+ ((long)(((UCHAR *)up)[2])) )
-
-- rc = iscsi_send_padding(conn, ctask);
-- if (rc)
-+flush:
-+ /* Flush any pending data first. */
-+ rc = iscsi_tcp_flush(conn);
-+ if (rc < 0)
- return rc;
-
-- return iscsi_send_digest(conn, ctask, digestbuf, digest);
--}
+-#define XANY2SCSI(up, p) \
+-((UCHAR *)up)[0] = ((long)(p)) >> 24; \
+-((UCHAR *)up)[1] = ((long)(p)) >> 16; \
+-((UCHAR *)up)[2] = ((long)(p)) >> 8; \
+-((UCHAR *)up)[3] = ((long)(p));
-
--static int
--iscsi_send_unsol_hdr(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- struct iscsi_data_task *dtask;
-- int rc;
+-#define XSCSI2LONG(up) \
+-( (((long)(((UCHAR *)up)[0])) << 24) \
+-+ (((long)(((UCHAR *)up)[1])) << 16) \
+-+ (((long)(((UCHAR *)up)[2])) << 8) \
+-+ ((long)(((UCHAR *)up)[3])) )
-
-- set_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
-- if (test_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate)) {
-- dtask = &tcp_ctask->unsol_dtask;
+-/************************************************/
+-/* SCSI CDB operation codes */
+-/************************************************/
+-#define SCSIOP_TEST_UNIT_READY 0x00
+-#define SCSIOP_REZERO_UNIT 0x01
+-#define SCSIOP_REWIND 0x01
+-#define SCSIOP_REQUEST_BLOCK_ADDR 0x02
+-#define SCSIOP_REQUEST_SENSE 0x03
+-#define SCSIOP_FORMAT_UNIT 0x04
+-#define SCSIOP_READ_BLOCK_LIMITS 0x05
+-#define SCSIOP_REASSIGN_BLOCKS 0x07
+-#define SCSIOP_READ6 0x08
+-#define SCSIOP_RECEIVE 0x08
+-#define SCSIOP_WRITE6 0x0A
+-#define SCSIOP_PRINT 0x0A
+-#define SCSIOP_SEND 0x0A
+-#define SCSIOP_SEEK6 0x0B
+-#define SCSIOP_TRACK_SELECT 0x0B
+-#define SCSIOP_SLEW_PRINT 0x0B
+-#define SCSIOP_SEEK_BLOCK 0x0C
+-#define SCSIOP_PARTITION 0x0D
+-#define SCSIOP_READ_REVERSE 0x0F
+-#define SCSIOP_WRITE_FILEMARKS 0x10
+-#define SCSIOP_FLUSH_BUFFER 0x10
+-#define SCSIOP_SPACE 0x11
+-#define SCSIOP_INQUIRY 0x12
+-#define SCSIOP_VERIFY6 0x13
+-#define SCSIOP_RECOVER_BUF_DATA 0x14
+-#define SCSIOP_MODE_SELECT 0x15
+-#define SCSIOP_RESERVE_UNIT 0x16
+-#define SCSIOP_RELEASE_UNIT 0x17
+-#define SCSIOP_COPY 0x18
+-#define SCSIOP_ERASE 0x19
+-#define SCSIOP_MODE_SENSE 0x1A
+-#define SCSIOP_START_STOP_UNIT 0x1B
+-#define SCSIOP_STOP_PRINT 0x1B
+-#define SCSIOP_LOAD_UNLOAD 0x1B
+-#define SCSIOP_RECEIVE_DIAGNOSTIC 0x1C
+-#define SCSIOP_SEND_DIAGNOSTIC 0x1D
+-#define SCSIOP_MEDIUM_REMOVAL 0x1E
+-#define SCSIOP_READ_CAPACITY 0x25
+-#define SCSIOP_READ 0x28
+-#define SCSIOP_WRITE 0x2A
+-#define SCSIOP_SEEK 0x2B
+-#define SCSIOP_LOCATE 0x2B
+-#define SCSIOP_WRITE_VERIFY 0x2E
+-#define SCSIOP_VERIFY 0x2F
+-#define SCSIOP_SEARCH_DATA_HIGH 0x30
+-#define SCSIOP_SEARCH_DATA_EQUAL 0x31
+-#define SCSIOP_SEARCH_DATA_LOW 0x32
+-#define SCSIOP_SET_LIMITS 0x33
+-#define SCSIOP_READ_POSITION 0x34
+-#define SCSIOP_SYNCHRONIZE_CACHE 0x35
+-#define SCSIOP_COMPARE 0x39
+-#define SCSIOP_COPY_COMPARE 0x3A
+-#define SCSIOP_WRITE_DATA_BUFF 0x3B
+-#define SCSIOP_READ_DATA_BUFF 0x3C
+-#define SCSIOP_CHANGE_DEFINITION 0x40
+-#define SCSIOP_READ_SUB_CHANNEL 0x42
+-#define SCSIOP_READ_TOC 0x43
+-#define SCSIOP_READ_HEADER 0x44
+-#define SCSIOP_PLAY_AUDIO 0x45
+-#define SCSIOP_PLAY_AUDIO_MSF 0x47
+-#define SCSIOP_PLAY_TRACK_INDEX 0x48
+-#define SCSIOP_PLAY_TRACK_RELATIVE 0x49
+-#define SCSIOP_PAUSE_RESUME 0x4B
+-#define SCSIOP_LOG_SELECT 0x4C
+-#define SCSIOP_LOG_SENSE 0x4D
+-#define SCSIOP_MODE_SELECT10 0x55
+-#define SCSIOP_MODE_SENSE10 0x5A
+-#define SCSIOP_LOAD_UNLOAD_SLOT 0xA6
+-#define SCSIOP_MECHANISM_STATUS 0xBD
+-#define SCSIOP_READ_CD 0xBE
-
-- iscsi_prep_unsolicit_data_pdu(ctask, &dtask->hdr);
-- iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)&dtask->hdr,
-- sizeof(struct iscsi_hdr));
-- if (conn->hdrdgst_en)
-- iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
-- (u8*)dtask->hdrext);
+-// IDE command definitions
+-#define IDE_COMMAND_ATAPI_RESET 0x08
+-#define IDE_COMMAND_READ 0x20
+-#define IDE_COMMAND_WRITE 0x30
+-#define IDE_COMMAND_RECALIBRATE 0x10
+-#define IDE_COMMAND_SEEK 0x70
+-#define IDE_COMMAND_SET_PARAMETERS 0x91
+-#define IDE_COMMAND_VERIFY 0x40
+-#define IDE_COMMAND_ATAPI_PACKET 0xA0
+-#define IDE_COMMAND_ATAPI_IDENTIFY 0xA1
+-#define IDE_CMD_READ_MULTIPLE 0xC4
+-#define IDE_CMD_WRITE_MULTIPLE 0xC5
+-#define IDE_CMD_SET_MULTIPLE 0xC6
+-#define IDE_COMMAND_WRITE_DMA 0xCA
+-#define IDE_COMMAND_READ_DMA 0xC8
+-#define IDE_COMMAND_IDENTIFY 0xEC
-
-- clear_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
-- iscsi_set_padding(tcp_ctask, ctask->data_count);
-- }
+-// IDE status definitions
+-#define IDE_STATUS_ERROR 0x01
+-#define IDE_STATUS_INDEX 0x02
+-#define IDE_STATUS_CORRECTED_ERROR 0x04
+-#define IDE_STATUS_DRQ 0x08
+-#define IDE_STATUS_DSC 0x10
+-#define IDE_STATUS_WRITE_FAULT 0x20
+-#define IDE_STATUS_DRDY 0x40
+-#define IDE_STATUS_BUSY 0x80
-
-- rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->data_count);
-- if (rc) {
-- clear_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
-- set_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate);
-- return rc;
-- }
-+ /* Are we done already? */
-+ if (sc->sc_data_direction != DMA_TO_DEVICE)
-+ return 0;
-
-- if (conn->datadgst_en) {
-- dtask = &tcp_ctask->unsol_dtask;
-- iscsi_data_digest_init(ctask->conn->dd_data, tcp_ctask);
-- dtask->digest = 0;
-- }
-+ if (ctask->unsol_count != 0) {
-+ struct iscsi_data *hdr = &tcp_ctask->unsol_dtask.hdr;
-
-- debug_scsi("uns dout [itt 0x%x dlen %d sent %d]\n",
-- ctask->itt, ctask->unsol_count, tcp_ctask->sent);
-- return 0;
--}
-+ /* Prepare a header for the unsolicited PDU.
-+ * The amount of data we want to send will be
-+ * in ctask->data_count.
-+ * FIXME: return the data count instead.
-+ */
-+ iscsi_prep_unsolicit_data_pdu(ctask, hdr);
-
--static int
--iscsi_send_unsol_pdu(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- int rc;
-+ debug_tcp("unsol dout [itt 0x%x doff %d dlen %d]\n",
-+ ctask->itt, tcp_ctask->sent, ctask->data_count);
-
-- if (test_and_clear_bit(XMSTATE_BIT_UNS_HDR, &tcp_ctask->xmstate)) {
-- BUG_ON(!ctask->unsol_count);
--send_hdr:
-- rc = iscsi_send_unsol_hdr(conn, ctask);
-+ iscsi_tcp_send_hdr_prep(conn, hdr, sizeof(*hdr));
-+ rc = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc),
-+ scsi_sg_count(sc),
-+ tcp_ctask->sent,
-+ ctask->data_count);
- if (rc)
-- return rc;
-- }
+-// IDE error definitions
+-#define IDE_ERROR_AMNF 0x01
+-#define IDE_ERROR_TKONF 0x02
+-#define IDE_ERROR_ABRT 0x04
+-#define IDE_ERROR_MCR 0x08
+-#define IDE_ERROR_IDFN 0x10
+-#define IDE_ERROR_MC 0x20
+-#define IDE_ERROR_UNC 0x40
+-#define IDE_ERROR_BBK 0x80
-
-- if (test_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate)) {
-- struct iscsi_data_task *dtask = &tcp_ctask->unsol_dtask;
-- int start = tcp_ctask->sent;
-+ goto fail;
-+ tcp_ctask->sent += ctask->data_count;
-+ ctask->unsol_count -= ctask->data_count;
-+ goto flush;
-+ } else {
-+ struct iscsi_session *session = conn->session;
-+ struct iscsi_r2t_info *r2t;
-
-- rc = iscsi_send_data(ctask, &tcp_ctask->sendbuf, &tcp_ctask->sg,
-- &tcp_ctask->sent, &ctask->data_count,
-- &dtask->digestbuf, &dtask->digest);
-- ctask->unsol_count -= tcp_ctask->sent - start;
-- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_UNS_DATA, &tcp_ctask->xmstate);
-- /*
-- * Done with the Data-Out. Next, check if we need
-- * to send another unsolicited Data-Out.
-+ /* All unsolicited PDUs sent. Check for solicited PDUs.
- */
-- if (ctask->unsol_count) {
-- debug_scsi("sending more uns\n");
-- set_bit(XMSTATE_BIT_UNS_INIT, &tcp_ctask->xmstate);
-- goto send_hdr;
-+ spin_lock_bh(&session->lock);
-+ r2t = tcp_ctask->r2t;
-+ if (r2t != NULL) {
-+ /* Continue with this R2T? */
-+ if (!iscsi_solicit_data_cont(conn, ctask, r2t)) {
-+ debug_scsi(" done with r2t %p\n", r2t);
-+
-+ __kfifo_put(tcp_ctask->r2tpool.queue,
-+ (void*)&r2t, sizeof(void*));
-+ tcp_ctask->r2t = r2t = NULL;
-+ }
- }
-- }
-- return 0;
--}
+-// IDE interface structure
+-typedef struct _IDE_STRUCT
+- {
+- union
+- {
+- UCHAR ide[9];
+- struct
+- {
+- USHORT data;
+- UCHAR sectors;
+- UCHAR lba[4];
+- UCHAR cmd;
+- UCHAR spigot;
+- } ides;
+- } ide;
+- } IDE_STRUCT;
-
--static int iscsi_send_sol_pdu(struct iscsi_conn *conn,
-- struct iscsi_cmd_task *ctask)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- struct iscsi_session *session = conn->session;
-- struct iscsi_r2t_info *r2t;
-- struct iscsi_data_task *dtask;
-- int left, rc;
-
-- if (test_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate)) {
-- if (!tcp_ctask->r2t) {
-- spin_lock_bh(&session->lock);
-+ if (r2t == NULL) {
- __kfifo_get(tcp_ctask->r2tqueue, (void*)&tcp_ctask->r2t,
- sizeof(void*));
-- spin_unlock_bh(&session->lock);
-+ r2t = tcp_ctask->r2t;
- }
--send_hdr:
-- r2t = tcp_ctask->r2t;
-- dtask = &r2t->dtask;
+-// SCSI read capacity structure
+-typedef struct _READ_CAPACITY_DATA
+- {
+- ULONG blks; /* total blocks (converted to little endian) */
+- ULONG blksiz; /* size of each (converted to little endian) */
+- } READ_CAPACITY_DATA, *PREAD_CAPACITY_DATA;
-
-- if (conn->hdrdgst_en)
-- iscsi_hdr_digest(conn, &r2t->headbuf,
-- (u8*)dtask->hdrext);
-- clear_bit(XMSTATE_BIT_SOL_HDR_INIT, &tcp_ctask->xmstate);
-- set_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate);
-- }
+-// SCSI inquiry data
+-#ifndef HOSTS_C
-
-- if (test_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate)) {
-- r2t = tcp_ctask->r2t;
-- dtask = &r2t->dtask;
+-typedef struct _INQUIRYDATA
+- {
+- UCHAR DeviceType :5;
+- UCHAR DeviceTypeQualifier :3;
+- UCHAR DeviceTypeModifier :7;
+- UCHAR RemovableMedia :1;
+- UCHAR Versions;
+- UCHAR ResponseDataFormat;
+- UCHAR AdditionalLength;
+- UCHAR Reserved[2];
+- UCHAR SoftReset :1;
+- UCHAR CommandQueue :1;
+- UCHAR Reserved2 :1;
+- UCHAR LinkedCommands :1;
+- UCHAR Synchronous :1;
+- UCHAR Wide16Bit :1;
+- UCHAR Wide32Bit :1;
+- UCHAR RelativeAddressing :1;
+- UCHAR VendorId[8];
+- UCHAR ProductId[16];
+- UCHAR ProductRevisionLevel[4];
+- UCHAR VendorSpecific[20];
+- UCHAR Reserved3[40];
+- } INQUIRYDATA, *PINQUIRYDATA;
+-#endif
-
-- rc = iscsi_sendhdr(conn, &r2t->headbuf, r2t->data_count);
-- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_SOL_HDR, &tcp_ctask->xmstate);
-- set_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate);
-+ spin_unlock_bh(&session->lock);
-
-- if (conn->datadgst_en) {
-- iscsi_data_digest_init(conn->dd_data, tcp_ctask);
-- dtask->digest = 0;
-+ /* Waiting for more R2Ts to arrive. */
-+ if (r2t == NULL) {
-+ debug_tcp("no R2Ts yet\n");
-+ return 0;
- }
-
-- iscsi_set_padding(tcp_ctask, r2t->data_count);
-- debug_scsi("sol dout [dsn %d itt 0x%x dlen %d sent %d]\n",
-- r2t->solicit_datasn - 1, ctask->itt, r2t->data_count,
-- r2t->sent);
-- }
-+ debug_scsi("sol dout %p [dsn %d itt 0x%x doff %d dlen %d]\n",
-+ r2t, r2t->solicit_datasn - 1, ctask->itt,
-+ r2t->data_offset + r2t->sent, r2t->data_count);
-
-- if (test_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate)) {
-- r2t = tcp_ctask->r2t;
-- dtask = &r2t->dtask;
-+ iscsi_tcp_send_hdr_prep(conn, &r2t->dtask.hdr,
-+ sizeof(struct iscsi_hdr));
-
-- rc = iscsi_send_data(ctask, &r2t->sendbuf, &r2t->sg,
-- &r2t->sent, &r2t->data_count,
-- &dtask->digestbuf, &dtask->digest);
-+ rc = iscsi_tcp_send_data_prep(conn, scsi_sglist(sc),
-+ scsi_sg_count(sc),
-+ r2t->data_offset + r2t->sent,
-+ r2t->data_count);
- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_SOL_DATA, &tcp_ctask->xmstate);
+-// IDE IDENTIFY data
+-typedef struct _IDENTIFY_DATA
+- {
+- USHORT GeneralConfiguration; // 00
+- USHORT NumberOfCylinders; // 02
+- USHORT Reserved1; // 04
+- USHORT NumberOfHeads; // 06
+- USHORT UnformattedBytesPerTrack; // 08
+- USHORT UnformattedBytesPerSector; // 0A
+- USHORT SectorsPerTrack; // 0C
+- USHORT VendorUnique1[3]; // 0E
+- USHORT SerialNumber[10]; // 14
+- USHORT BufferType; // 28
+- USHORT BufferSectorSize; // 2A
+- USHORT NumberOfEccBytes; // 2C
+- USHORT FirmwareRevision[4]; // 2E
+- USHORT ModelNumber[20]; // 36
+- UCHAR MaximumBlockTransfer; // 5E
+- UCHAR VendorUnique2; // 5F
+- USHORT DoubleWordIo; // 60
+- USHORT Capabilities; // 62
+- USHORT Reserved2; // 64
+- UCHAR VendorUnique3; // 66
+- UCHAR PioCycleTimingMode; // 67
+- UCHAR VendorUnique4; // 68
+- UCHAR DmaCycleTimingMode; // 69
+- USHORT TranslationFieldsValid:1; // 6A
+- USHORT Reserved3:15;
+- USHORT NumberOfCurrentCylinders; // 6C
+- USHORT NumberOfCurrentHeads; // 6E
+- USHORT CurrentSectorsPerTrack; // 70
+- ULONG CurrentSectorCapacity; // 72
+- USHORT Reserved4[197]; // 76
+- } IDENTIFY_DATA, *PIDENTIFY_DATA;
-
-- /*
-- * Done with this Data-Out. Next, check if we have
-- * to send another Data-Out for this R2T.
-- */
-- BUG_ON(r2t->data_length - r2t->sent < 0);
-- left = r2t->data_length - r2t->sent;
-- if (left) {
-- iscsi_solicit_data_cont(conn, ctask, r2t, left);
-- goto send_hdr;
-- }
+-// Identify data without the Reserved4.
+-typedef struct _IDENTIFY_DATA2 {
+- USHORT GeneralConfiguration; // 00
+- USHORT NumberOfCylinders; // 02
+- USHORT Reserved1; // 04
+- USHORT NumberOfHeads; // 06
+- USHORT UnformattedBytesPerTrack; // 08
+- USHORT UnformattedBytesPerSector; // 0A
+- USHORT SectorsPerTrack; // 0C
+- USHORT VendorUnique1[3]; // 0E
+- USHORT SerialNumber[10]; // 14
+- USHORT BufferType; // 28
+- USHORT BufferSectorSize; // 2A
+- USHORT NumberOfEccBytes; // 2C
+- USHORT FirmwareRevision[4]; // 2E
+- USHORT ModelNumber[20]; // 36
+- UCHAR MaximumBlockTransfer; // 5E
+- UCHAR VendorUnique2; // 5F
+- USHORT DoubleWordIo; // 60
+- USHORT Capabilities; // 62
+- USHORT Reserved2; // 64
+- UCHAR VendorUnique3; // 66
+- UCHAR PioCycleTimingMode; // 67
+- UCHAR VendorUnique4; // 68
+- UCHAR DmaCycleTimingMode; // 69
+- USHORT TranslationFieldsValid:1; // 6A
+- USHORT Reserved3:15;
+- USHORT NumberOfCurrentCylinders; // 6C
+- USHORT NumberOfCurrentHeads; // 6E
+- USHORT CurrentSectorsPerTrack; // 70
+- ULONG CurrentSectorCapacity; // 72
+- } IDENTIFY_DATA2, *PIDENTIFY_DATA2;
-
-- /*
-- * Done with this R2T. Check if there are more
-- * outstanding R2Ts ready to be processed.
-- */
-- spin_lock_bh(&session->lock);
-- tcp_ctask->r2t = NULL;
-- __kfifo_put(tcp_ctask->r2tpool.queue, (void*)&r2t,
-- sizeof(void*));
-- if (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t,
-- sizeof(void*))) {
-- tcp_ctask->r2t = r2t;
-- spin_unlock_bh(&session->lock);
-- goto send_hdr;
-- }
-- spin_unlock_bh(&session->lock);
-+ goto fail;
-+ tcp_ctask->sent += r2t->data_count;
-+ r2t->sent += r2t->data_count;
-+ goto flush;
- }
- return 0;
--}
+-#endif // PSI_EIDE_SCSIOP
-
--/**
-- * iscsi_tcp_ctask_xmit - xmit normal PDU task
-- * @conn: iscsi connection
-- * @ctask: iscsi command task
+-// function prototypes
+-int Psi240i_Command(struct scsi_cmnd *SCpnt);
+-int Psi240i_Abort(struct scsi_cmnd *SCpnt);
+-int Psi240i_Reset(struct scsi_cmnd *SCpnt, unsigned int flags);
+-#endif
+diff --git a/drivers/scsi/psi_chip.h b/drivers/scsi/psi_chip.h
+deleted file mode 100644
+index 224cf8f..0000000
+--- a/drivers/scsi/psi_chip.h
++++ /dev/null
+@@ -1,195 +0,0 @@
+-/*+M*************************************************************************
+- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
- *
-- * Notes:
-- * The function can return -EAGAIN in which case caller must
-- * call it again later, or recover. '0' return code means successful
-- * xmit.
-- * The function is devided to logical helpers (above) for the different
-- * xmit stages.
+- * Copyright (c) 1997 Perceptive Solutions, Inc.
- *
-- *iscsi_send_cmd_hdr()
-- * XMSTATE_BIT_CMD_HDR_INIT - prepare Header and Data buffers Calculate
-- * Header Digest
-- * XMSTATE_BIT_CMD_HDR_XMIT - Transmit header in progress
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2, or (at your option)
+- * any later version.
- *
-- *iscsi_send_padding
-- * XMSTATE_BIT_W_PAD - Prepare and send pading
-- * XMSTATE_BIT_W_RESEND_PAD - retry send pading
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
- *
-- *iscsi_send_digest
-- * XMSTATE_BIT_W_RESEND_DATA_DIGEST - Finalize and send Data Digest
-- * XMSTATE_BIT_W_RESEND_DATA_DIGEST - retry sending digest
+- * You should have received a copy of the GNU General Public License
+- * along with this program; see the file COPYING. If not, write to
+- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
- *
-- *iscsi_send_unsol_hdr
-- * XMSTATE_BIT_UNS_INIT - prepare un-solicit data header and digest
-- * XMSTATE_BIT_UNS_HDR - send un-solicit header
- *
-- *iscsi_send_unsol_pdu
-- * XMSTATE_BIT_UNS_DATA - send un-solicit data in progress
+- * File Name: psi_chip.h
- *
-- *iscsi_send_sol_pdu
-- * XMSTATE_BIT_SOL_HDR_INIT - solicit data header and digest initialize
-- * XMSTATE_BIT_SOL_HDR - send solicit header
-- * XMSTATE_BIT_SOL_DATA - send solicit data
+- * Description: This file contains the interface defines and
+- * error codes.
- *
-- *iscsi_tcp_ctask_xmit
-- * XMSTATE_BIT_IMM_DATA - xmit managment data (??)
-- **/
--static int
--iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
--{
-- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
-- int rc = 0;
+- *-M*************************************************************************/
+-#ifndef PSI_CHIP
+-#define PSI_CHIP
+-
+-/************************************************/
+-/* Misc konstants */
+-/************************************************/
+-#define CHIP_MAXDRIVES 8
+-
+-/************************************************/
+-/* Chip I/O addresses */
+-/************************************************/
+-#define CHIP_ADRS_0 0x0130
+-#define CHIP_ADRS_1 0x0150
+-#define CHIP_ADRS_2 0x0190
+-#define CHIP_ADRS_3 0x0210
+-#define CHIP_ADRS_4 0x0230
+-#define CHIP_ADRS_5 0x0250
+-
+-/************************************************/
+-/* EEPROM locations */
+-/************************************************/
+-#define CHIP_EEPROM_BIOS 0x0000 // BIOS base address
+-#define CHIP_EEPROM_DATA 0x2000 // SETUP data base address
+-#define CHIP_EEPROM_FACTORY 0x2400 // FACTORY data base address
+-#define CHIP_EEPROM_SETUP 0x3000 // SETUP PROGRAM base address
+-
+-#define CHIP_EEPROM_SIZE 32768U // size of the entire EEPROM
+-#define CHIP_EEPROM_BIOS_SIZE 8192 // size of the BIOS in bytes
+-#define CHIP_EEPROM_DATA_SIZE 4096 // size of factory, setup, log data block in bytes
+-#define CHIP_EEPROM_SETUP_SIZE 20480U // size of the setup program in bytes
+-
+-/************************************************/
+-/* Chip Interrupts */
+-/************************************************/
+-#define CHIP_IRQ_10 0x72
+-#define CHIP_IRQ_11 0x73
+-#define CHIP_IRQ_12 0x74
+-
+-/************************************************/
+-/* Chip Setup addresses */
+-/************************************************/
+-#define CHIP_SETUP_BASE 0x0000C000L
+-
+-/************************************************/
+-/* Chip Register address offsets */
+-/************************************************/
+-#define REG_DATA 0x00
+-#define REG_ERROR 0x01
+-#define REG_SECTOR_COUNT 0x02
+-#define REG_LBA_0 0x03
+-#define REG_LBA_8 0x04
+-#define REG_LBA_16 0x05
+-#define REG_LBA_24 0x06
+-#define REG_STAT_CMD 0x07
+-#define REG_SEL_FAIL 0x08
+-#define REG_IRQ_STATUS 0x09
+-#define REG_ADDRESS 0x0A
+-#define REG_FAIL 0x0C
+-#define REG_ALT_STAT 0x0E
+-#define REG_DRIVE_ADRS 0x0F
+-
+-/************************************************/
+-/* Chip RAM locations */
+-/************************************************/
+-#define CHIP_DEVICE 0x8000
+-#define CHIP_DEVICE_0 0x8000
+-#define CHIP_DEVICE_1 0x8008
+-#define CHIP_DEVICE_2 0x8010
+-#define CHIP_DEVICE_3 0x8018
+-#define CHIP_DEVICE_4 0x8020
+-#define CHIP_DEVICE_5 0x8028
+-#define CHIP_DEVICE_6 0x8030
+-#define CHIP_DEVICE_7 0x8038
+-typedef struct
+- {
+- UCHAR channel; // channel of this device (0-8).
+- UCHAR spt; // Sectors Per Track.
+- ULONG spc; // Sectors Per Cylinder.
+- } CHIP_DEVICE_N;
+-
+-#define CHIP_CONFIG 0x8100 // address of boards configuration.
+-typedef struct
+- {
+- UCHAR irq; // interrupt request channel number
+- UCHAR numDrives; // Number of accessible drives
+- UCHAR fastFormat; // Boolean for fast format enable
+- } CHIP_CONFIG_N;
+-
+-#define CHIP_MAP 0x8108 // eight byte device type map.
+-
+-
+-#define CHIP_RAID 0x8120 // array of RAID signature structures and LBA
+-#define CHIP_RAID_1 0x8120
+-#define CHIP_RAID_2 0x8130
+-#define CHIP_RAID_3 0x8140
+-#define CHIP_RAID_4 0x8150
+-
+-/************************************************/
+-/* Chip Register Masks */
+-/************************************************/
+-#define CHIP_ID 0x7B
+-#define SEL_RAM 0x8000
+-#define MASK_FAIL 0x80
+-
+-/************************************************/
+-/* Chip cable select bits */
+-/************************************************/
+-#define SECTORSXFER 8
-
-- debug_scsi("ctask deq [cid %d xmstate %x itt 0x%x]\n",
-- conn->id, tcp_ctask->xmstate, ctask->itt);
+-/************************************************/
+-/* Chip cable select bits */
+-/************************************************/
+-#define SEL_NONE 0x00
+-#define SEL_1 0x01
+-#define SEL_2 0x02
+-#define SEL_3 0x04
+-#define SEL_4 0x08
-
-- rc = iscsi_send_cmd_hdr(conn, ctask);
-- if (rc)
-- return rc;
-- if (ctask->sc->sc_data_direction != DMA_TO_DEVICE)
-- return 0;
+-/************************************************/
+-/* Programmable Interrupt Controller*/
+-/************************************************/
+-#define PIC1 0x20 // first 8259 base port address
+-#define PIC2 0xA0 // second 8259 base port address
+-#define INT_OCW1 1 // Operation Control Word 1: IRQ mask
+-#define EOI 0x20 // non-specific end-of-interrupt
-
-- if (test_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate)) {
-- rc = iscsi_send_data(ctask, &tcp_ctask->sendbuf, &tcp_ctask->sg,
-- &tcp_ctask->sent, &ctask->imm_count,
-- &tcp_ctask->immbuf, &tcp_ctask->immdigest);
-- if (rc)
-- return rc;
-- clear_bit(XMSTATE_BIT_IMM_DATA, &tcp_ctask->xmstate);
-- }
+-/************************************************/
+-/* Device/Geometry controls */
+-/************************************************/
+-#define GEOMETRY_NONE 0x0 // No device
+-#define GEOMETRY_AUTO 0x1 // Geometry set automatically
+-#define GEOMETRY_USER 0x2 // User supplied geometry
-
-- rc = iscsi_send_unsol_pdu(conn, ctask);
-- if (rc)
-- return rc;
+-#define DEVICE_NONE 0x0 // No device present
+-#define DEVICE_INACTIVE 0x1 // device present but not registered active
+-#define DEVICE_ATAPI 0x2 // ATAPI device (CD_ROM, Tape, Etc...)
+-#define DEVICE_DASD_NONLBA 0x3 // Non LBA incompatible device
+-#define DEVICE_DASD_LBA 0x4 // LBA compatible device
-
-- rc = iscsi_send_sol_pdu(conn, ctask);
-- if (rc)
-- return rc;
+-/************************************************/
+-/* Setup Structure Definitions */
+-/************************************************/
+-typedef struct // device setup parameters
+- {
+- UCHAR geometryControl; // geometry control flags
+- UCHAR device; // device code
+- USHORT sectors; // number of sectors per track
+- USHORT heads; // number of heads
+- USHORT cylinders; // number of cylinders for this device
+- ULONG blocks; // number of blocks on device
+- USHORT spare1;
+- USHORT spare2;
+- } SETUP_DEVICE, *PSETUP_DEVICE;
-
-- return rc;
-+fail:
-+ iscsi_conn_failure(conn, rc);
-+ return -EIO;
- }
+-typedef struct // master setup structure
+- {
+- USHORT startupDelay;
+- USHORT promptBIOS;
+- USHORT fastFormat;
+- USHORT spare2;
+- USHORT spare3;
+- USHORT spare4;
+- USHORT spare5;
+- USHORT spare6;
+- SETUP_DEVICE setupDevice[8];
+- } SETUP, *PSETUP;
+-
+-#endif
+-
+diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
+index 2886407..c94906a 100644
+--- a/drivers/scsi/qla1280.c
++++ b/drivers/scsi/qla1280.c
+@@ -528,7 +528,7 @@ __setup("qla1280=", qla1280_setup);
+ #define CMD_CDBLEN(Cmnd) Cmnd->cmd_len
+ #define CMD_CDBP(Cmnd) Cmnd->cmnd
+ #define CMD_SNSP(Cmnd) Cmnd->sense_buffer
+-#define CMD_SNSLEN(Cmnd) sizeof(Cmnd->sense_buffer)
++#define CMD_SNSLEN(Cmnd) SCSI_SENSE_BUFFERSIZE
+ #define CMD_RESULT(Cmnd) Cmnd->result
+ #define CMD_HANDLE(Cmnd) Cmnd->host_scribble
+ #define CMD_REQUEST(Cmnd) Cmnd->request->cmd
+@@ -3715,7 +3715,7 @@ qla1280_status_entry(struct scsi_qla_host *ha, struct response *pkt,
+ } else
+ sense_sz = 0;
+ memset(cmd->sense_buffer + sense_sz, 0,
+- sizeof(cmd->sense_buffer) - sense_sz);
++ SCSI_SENSE_BUFFERSIZE - sense_sz);
- static struct iscsi_cls_conn *
-@@ -1784,9 +1492,6 @@ iscsi_tcp_conn_create(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
+ dprintk(2, "qla1280_status_entry: Check "
+ "condition Sense data, b %i, t %i, "
+diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
+index 71ddb5d..c51fd1f 100644
+--- a/drivers/scsi/qla2xxx/Makefile
++++ b/drivers/scsi/qla2xxx/Makefile
+@@ -1,4 +1,4 @@
+ qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
+- qla_dbg.o qla_sup.o qla_attr.o qla_mid.o
++ qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o
- conn->dd_data = tcp_conn;
- tcp_conn->iscsi_conn = conn;
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-- /* initial operational parameters */
-- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
+ obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index fb388b8..adf9732 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -9,7 +9,7 @@
+ #include <linux/kthread.h>
+ #include <linux/vmalloc.h>
- tcp_conn->tx_hash.tfm = crypto_alloc_hash("crc32c", 0,
- CRYPTO_ALG_ASYNC);
-@@ -1863,11 +1568,9 @@ static void
- iscsi_tcp_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
- {
- struct iscsi_conn *conn = cls_conn->dd_data;
-- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+-int qla24xx_vport_disable(struct fc_vport *, bool);
++static int qla24xx_vport_disable(struct fc_vport *, bool);
- iscsi_conn_stop(cls_conn, flag);
- iscsi_tcp_release_conn(conn);
-- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
- }
+ /* SYSFS attributes --------------------------------------------------------- */
- static int iscsi_tcp_get_addr(struct iscsi_conn *conn, struct socket *sock,
-@@ -1967,7 +1670,7 @@ iscsi_tcp_conn_bind(struct iscsi_cls_session *cls_session,
- /*
- * set receive state machine into initial state
- */
-- tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
-+ iscsi_tcp_hdr_recv_prep(tcp_conn);
- return 0;
+@@ -958,7 +958,7 @@ qla2x00_issue_lip(struct Scsi_Host *shost)
+ {
+ scsi_qla_host_t *ha = shost_priv(shost);
- free_socket:
-@@ -1977,10 +1680,17 @@ free_socket:
+- set_bit(LOOP_RESET_NEEDED, &ha->dpc_flags);
++ qla2x00_loop_reset(ha);
+ return 0;
+ }
- /* called with host lock */
- static void
--iscsi_tcp_mgmt_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
-+iscsi_tcp_mtask_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
+@@ -967,35 +967,51 @@ qla2x00_get_fc_host_stats(struct Scsi_Host *shost)
{
-- struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
-- tcp_mtask->xmstate = 1 << XMSTATE_BIT_IMM_HDR_INIT;
-+ debug_scsi("mtask deq [cid %d itt 0x%x]\n", conn->id, mtask->itt);
-+
-+ /* Prepare PDU, optionally w/ immediate data */
-+ iscsi_tcp_send_hdr_prep(conn, mtask->hdr, sizeof(*mtask->hdr));
-+
-+ /* If we have immediate data, attach a payload */
-+ if (mtask->data_count)
-+ iscsi_tcp_send_linear_data_prepare(conn, mtask->data,
-+ mtask->data_count);
- }
+ scsi_qla_host_t *ha = shost_priv(shost);
+ int rval;
+- uint16_t mb_stat[1];
+- link_stat_t stat_buf;
++ struct link_statistics *stats;
++ dma_addr_t stats_dma;
+ struct fc_host_statistics *pfc_host_stat;
- static int
-@@ -2003,8 +1713,7 @@ iscsi_r2tpool_alloc(struct iscsi_session *session)
- */
+- rval = QLA_FUNCTION_FAILED;
+ pfc_host_stat = &ha->fc_host_stat;
+ memset(pfc_host_stat, -1, sizeof(struct fc_host_statistics));
- /* R2T pool */
-- if (iscsi_pool_init(&tcp_ctask->r2tpool, session->max_r2t * 4,
-- (void***)&tcp_ctask->r2ts,
-+ if (iscsi_pool_init(&tcp_ctask->r2tpool, session->max_r2t * 4, NULL,
- sizeof(struct iscsi_r2t_info))) {
- goto r2t_alloc_fail;
- }
-@@ -2013,8 +1722,7 @@ iscsi_r2tpool_alloc(struct iscsi_session *session)
- tcp_ctask->r2tqueue = kfifo_alloc(
- session->max_r2t * 4 * sizeof(void*), GFP_KERNEL, NULL);
- if (tcp_ctask->r2tqueue == ERR_PTR(-ENOMEM)) {
-- iscsi_pool_free(&tcp_ctask->r2tpool,
-- (void**)tcp_ctask->r2ts);
-+ iscsi_pool_free(&tcp_ctask->r2tpool);
- goto r2t_alloc_fail;
- }
++ stats = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &stats_dma);
++ if (stats == NULL) {
++ DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
++ __func__, ha->host_no));
++ goto done;
++ }
++ memset(stats, 0, DMA_POOL_SIZE);
++
++ rval = QLA_FUNCTION_FAILED;
+ if (IS_FWI2_CAPABLE(ha)) {
+- rval = qla24xx_get_isp_stats(ha, (uint32_t *)&stat_buf,
+- sizeof(stat_buf) / 4, mb_stat);
++ rval = qla24xx_get_isp_stats(ha, stats, stats_dma);
+ } else if (atomic_read(&ha->loop_state) == LOOP_READY &&
+ !test_bit(ABORT_ISP_ACTIVE, &ha->dpc_flags) &&
+ !test_bit(ISP_ABORT_NEEDED, &ha->dpc_flags) &&
+ !ha->dpc_active) {
+ /* Must be in a 'READY' state for statistics retrieval. */
+- rval = qla2x00_get_link_status(ha, ha->loop_id, &stat_buf,
+- mb_stat);
++ rval = qla2x00_get_link_status(ha, ha->loop_id, stats,
++ stats_dma);
}
-@@ -2027,8 +1735,7 @@ r2t_alloc_fail:
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
- kfifo_free(tcp_ctask->r2tqueue);
-- iscsi_pool_free(&tcp_ctask->r2tpool,
-- (void**)tcp_ctask->r2ts);
-+ iscsi_pool_free(&tcp_ctask->r2tpool);
- }
- return -ENOMEM;
- }
-@@ -2043,8 +1750,7 @@ iscsi_r2tpool_free(struct iscsi_session *session)
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+ if (rval != QLA_SUCCESS)
+- goto done;
++ goto done_free;
++
++ pfc_host_stat->link_failure_count = stats->link_fail_cnt;
++ pfc_host_stat->loss_of_sync_count = stats->loss_sync_cnt;
++ pfc_host_stat->loss_of_signal_count = stats->loss_sig_cnt;
++ pfc_host_stat->prim_seq_protocol_err_count = stats->prim_seq_err_cnt;
++ pfc_host_stat->invalid_tx_word_count = stats->inval_xmit_word_cnt;
++ pfc_host_stat->invalid_crc_count = stats->inval_crc_cnt;
++ if (IS_FWI2_CAPABLE(ha)) {
++ pfc_host_stat->tx_frames = stats->tx_frames;
++ pfc_host_stat->rx_frames = stats->rx_frames;
++ pfc_host_stat->dumped_frames = stats->dumped_frames;
++ pfc_host_stat->nos_count = stats->nos_rcvd;
++ }
- kfifo_free(tcp_ctask->r2tqueue);
-- iscsi_pool_free(&tcp_ctask->r2tpool,
-- (void**)tcp_ctask->r2ts);
-+ iscsi_pool_free(&tcp_ctask->r2tpool);
- }
+- pfc_host_stat->link_failure_count = stat_buf.link_fail_cnt;
+- pfc_host_stat->loss_of_sync_count = stat_buf.loss_sync_cnt;
+- pfc_host_stat->loss_of_signal_count = stat_buf.loss_sig_cnt;
+- pfc_host_stat->prim_seq_protocol_err_count = stat_buf.prim_seq_err_cnt;
+- pfc_host_stat->invalid_tx_word_count = stat_buf.inval_xmit_word_cnt;
+- pfc_host_stat->invalid_crc_count = stat_buf.inval_crc_cnt;
++done_free:
++ dma_pool_free(ha->s_dma_pool, stats, stats_dma);
+ done:
+ return pfc_host_stat;
+ }
+@@ -1113,7 +1129,7 @@ vport_create_failed_2:
+ return FC_VPORT_FAILED;
}
-@@ -2060,9 +1766,6 @@ iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn, enum iscsi_param param,
- switch(param) {
- case ISCSI_PARAM_HDRDGST_EN:
- iscsi_set_param(cls_conn, param, buf, buflen);
-- tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
-- if (conn->hdrdgst_en)
-- tcp_conn->hdr_size += sizeof(__u32);
- break;
- case ISCSI_PARAM_DATADGST_EN:
- iscsi_set_param(cls_conn, param, buf, buflen);
-@@ -2071,12 +1774,12 @@ iscsi_conn_set_param(struct iscsi_cls_conn *cls_conn, enum iscsi_param param,
- break;
- case ISCSI_PARAM_MAX_R2T:
- sscanf(buf, "%d", &value);
-- if (session->max_r2t == roundup_pow_of_two(value))
-+ if (value <= 0 || !is_power_of_2(value))
-+ return -EINVAL;
-+ if (session->max_r2t == value)
- break;
- iscsi_r2tpool_free(session);
- iscsi_set_param(cls_conn, param, buf, buflen);
-- if (session->max_r2t & (session->max_r2t - 1))
-- session->max_r2t = roundup_pow_of_two(session->max_r2t);
- if (iscsi_r2tpool_alloc(session))
- return -ENOMEM;
- break;
-@@ -2183,14 +1886,15 @@ iscsi_tcp_session_create(struct iscsi_transport *iscsit,
- struct iscsi_cmd_task *ctask = session->cmds[cmd_i];
- struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
+-int
++static int
+ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ {
+ scsi_qla_host_t *ha = shost_priv(fc_vport->shost);
+@@ -1124,7 +1140,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
-- ctask->hdr = &tcp_ctask->hdr;
-+ ctask->hdr = &tcp_ctask->hdr.cmd_hdr;
-+ ctask->hdr_max = sizeof(tcp_ctask->hdr) - ISCSI_DIGEST_SIZE;
- }
+ down(&ha->vport_sem);
+ ha->cur_vport_count--;
+- clear_bit(vha->vp_idx, (unsigned long *)ha->vp_idx_map);
++ clear_bit(vha->vp_idx, ha->vp_idx_map);
+ up(&ha->vport_sem);
- for (cmd_i = 0; cmd_i < session->mgmtpool_max; cmd_i++) {
- struct iscsi_mgmt_task *mtask = session->mgmt_cmds[cmd_i];
- struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
+ kfree(vha->node_name);
+@@ -1146,7 +1162,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ return 0;
+ }
-- mtask->hdr = &tcp_mtask->hdr;
-+ mtask->hdr = (struct iscsi_hdr *) &tcp_mtask->hdr;
- }
+-int
++static int
+ qla24xx_vport_disable(struct fc_vport *fc_vport, bool disable)
+ {
+ scsi_qla_host_t *vha = fc_vport->dd_data;
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
+index eaa04da..d88e98c 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.c
++++ b/drivers/scsi/qla2xxx/qla_dbg.c
+@@ -1051,6 +1051,7 @@ qla25xx_fw_dump(scsi_qla_host_t *ha, int hardware_locked)
+ struct qla25xx_fw_dump *fw;
+ uint32_t ext_mem_cnt;
+ void *nxt;
++ struct qla2xxx_fce_chain *fcec;
- if (iscsi_r2tpool_alloc(class_to_transport_session(cls_session)))
-@@ -2222,12 +1926,14 @@ static struct scsi_host_template iscsi_sht = {
- .queuecommand = iscsi_queuecommand,
- .change_queue_depth = iscsi_change_queue_depth,
- .can_queue = ISCSI_DEF_XMIT_CMDS_MAX - 1,
-- .sg_tablesize = ISCSI_SG_TABLESIZE,
-+ .sg_tablesize = 4096,
- .max_sectors = 0xFFFF,
- .cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
- .eh_abort_handler = iscsi_eh_abort,
-+ .eh_device_reset_handler= iscsi_eh_device_reset,
- .eh_host_reset_handler = iscsi_eh_host_reset,
- .use_clustering = DISABLE_CLUSTERING,
-+ .use_sg_chaining = ENABLE_SG_CHAINING,
- .slave_configure = iscsi_tcp_slave_configure,
- .proc_name = "iscsi_tcp",
- .this_id = -1,
-@@ -2257,14 +1963,17 @@ static struct iscsi_transport iscsi_tcp_transport = {
- ISCSI_PERSISTENT_ADDRESS |
- ISCSI_TARGET_NAME | ISCSI_TPGT |
- ISCSI_USERNAME | ISCSI_PASSWORD |
-- ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN,
-+ ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
-+ ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
-+ ISCSI_LU_RESET_TMO |
-+ ISCSI_PING_TMO | ISCSI_RECV_TMO,
- .host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
- ISCSI_HOST_INITIATOR_NAME |
- ISCSI_HOST_NETDEV_NAME,
- .host_template = &iscsi_sht,
- .conndata_size = sizeof(struct iscsi_conn),
- .max_conn = 1,
-- .max_cmd_len = ISCSI_TCP_MAX_CMD_LEN,
-+ .max_cmd_len = 16,
- /* session management */
- .create_session = iscsi_tcp_session_create,
- .destroy_session = iscsi_tcp_session_destroy,
-@@ -2283,8 +1992,8 @@ static struct iscsi_transport iscsi_tcp_transport = {
- /* IO */
- .send_pdu = iscsi_conn_send_pdu,
- .get_stats = iscsi_conn_get_stats,
-- .init_cmd_task = iscsi_tcp_cmd_init,
-- .init_mgmt_task = iscsi_tcp_mgmt_init,
-+ .init_cmd_task = iscsi_tcp_ctask_init,
-+ .init_mgmt_task = iscsi_tcp_mtask_init,
- .xmit_cmd_task = iscsi_tcp_ctask_xmit,
- .xmit_mgmt_task = iscsi_tcp_mtask_xmit,
- .cleanup_cmd_task = iscsi_tcp_cleanup_ctask,
-diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
-index 68c36cc..ed0b991 100644
---- a/drivers/scsi/iscsi_tcp.h
-+++ b/drivers/scsi/iscsi_tcp.h
-@@ -24,71 +24,61 @@
+ risc_address = ext_mem_cnt = 0;
+ flags = 0;
+@@ -1321,10 +1322,31 @@ qla25xx_fw_dump(scsi_qla_host_t *ha, int hardware_locked)
+ if (rval != QLA_SUCCESS)
+ goto qla25xx_fw_dump_failed_0;
- #include <scsi/libiscsi.h>
++ /* Fibre Channel Trace Buffer. */
+ nxt = qla2xxx_copy_queues(ha, nxt);
+ if (ha->eft)
+ memcpy(nxt, ha->eft, ntohl(ha->fw_dump->eft_size));
--/* Socket's Receive state machine */
--#define IN_PROGRESS_WAIT_HEADER 0x0
--#define IN_PROGRESS_HEADER_GATHER 0x1
--#define IN_PROGRESS_DATA_RECV 0x2
--#define IN_PROGRESS_DDIGEST_RECV 0x3
--#define IN_PROGRESS_PAD_RECV 0x4
--
--/* xmit state machine */
--#define XMSTATE_VALUE_IDLE 0
--#define XMSTATE_BIT_CMD_HDR_INIT 0
--#define XMSTATE_BIT_CMD_HDR_XMIT 1
--#define XMSTATE_BIT_IMM_HDR 2
--#define XMSTATE_BIT_IMM_DATA 3
--#define XMSTATE_BIT_UNS_INIT 4
--#define XMSTATE_BIT_UNS_HDR 5
--#define XMSTATE_BIT_UNS_DATA 6
--#define XMSTATE_BIT_SOL_HDR 7
--#define XMSTATE_BIT_SOL_DATA 8
--#define XMSTATE_BIT_W_PAD 9
--#define XMSTATE_BIT_W_RESEND_PAD 10
--#define XMSTATE_BIT_W_RESEND_DATA_DIGEST 11
--#define XMSTATE_BIT_IMM_HDR_INIT 12
--#define XMSTATE_BIT_SOL_HDR_INIT 13
--
--#define ISCSI_PAD_LEN 4
--#define ISCSI_SG_TABLESIZE SG_ALL
--#define ISCSI_TCP_MAX_CMD_LEN 16
--
- struct crypto_hash;
- struct socket;
-+struct iscsi_tcp_conn;
-+struct iscsi_segment;
++ /* Fibre Channel Event Buffer. */
++ if (!ha->fce)
++ goto qla25xx_fw_dump_failed_0;
+
-+typedef int iscsi_segment_done_fn_t(struct iscsi_tcp_conn *,
-+ struct iscsi_segment *);
++ ha->fw_dump->version |= __constant_htonl(DUMP_CHAIN_VARIANT);
+
-+struct iscsi_segment {
-+ unsigned char *data;
-+ unsigned int size;
-+ unsigned int copied;
-+ unsigned int total_size;
-+ unsigned int total_copied;
++ fcec = nxt + ntohl(ha->fw_dump->eft_size);
++ fcec->type = __constant_htonl(DUMP_CHAIN_FCE | DUMP_CHAIN_LAST);
++ fcec->chain_size = htonl(sizeof(struct qla2xxx_fce_chain) +
++ fce_calc_size(ha->fce_bufs));
++ fcec->size = htonl(fce_calc_size(ha->fce_bufs));
++ fcec->addr_l = htonl(LSD(ha->fce_dma));
++ fcec->addr_h = htonl(MSD(ha->fce_dma));
+
-+ struct hash_desc *hash;
-+ unsigned char recv_digest[ISCSI_DIGEST_SIZE];
-+ unsigned char digest[ISCSI_DIGEST_SIZE];
-+ unsigned int digest_len;
++ iter_reg = fcec->eregs;
++ for (cnt = 0; cnt < 8; cnt++)
++ *iter_reg++ = htonl(ha->fce_mb[cnt]);
+
-+ struct scatterlist *sg;
-+ void *sg_mapped;
-+ unsigned int sg_offset;
++ memcpy(iter_reg, ha->fce, ntohl(fcec->size));
+
-+ iscsi_segment_done_fn_t *done;
-+};
+ qla25xx_fw_dump_failed_0:
+ if (rval != QLA_SUCCESS) {
+ qla_printk(KERN_WARNING, ha,
+@@ -1428,21 +1450,6 @@ qla2x00_print_scsi_cmd(struct scsi_cmnd * cmd)
+ printk(" sp flags=0x%x\n", sp->flags);
+ }
- /* Socket connection recieve helper */
- struct iscsi_tcp_recv {
- struct iscsi_hdr *hdr;
-- struct sk_buff *skb;
-- int offset;
-- int len;
-- int hdr_offset;
-- int copy;
-- int copied;
-- int padding;
-- struct iscsi_cmd_task *ctask; /* current cmd in progress */
-+ struct iscsi_segment segment;
-+
-+ /* Allocate buffer for BHS + AHS */
-+ uint32_t hdr_buf[64];
+-void
+-qla2x00_dump_pkt(void *pkt)
+-{
+- uint32_t i;
+- uint8_t *data = (uint8_t *) pkt;
+-
+- for (i = 0; i < 64; i++) {
+- if (!(i % 4))
+- printk("\n%02x: ", i);
+-
+- printk("%02x ", data[i]);
+- }
+- printk("\n");
+-}
+-
+ #if defined(QL_DEBUG_ROUTINES)
+ /*
+ * qla2x00_formatted_dump_buffer
+diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h
+index a50ecf0..524598a 100644
+--- a/drivers/scsi/qla2xxx/qla_dbg.h
++++ b/drivers/scsi/qla2xxx/qla_dbg.h
+@@ -256,6 +256,25 @@ struct qla25xx_fw_dump {
+ #define EFT_BYTES_PER_BUFFER 0x4000
+ #define EFT_SIZE ((EFT_BYTES_PER_BUFFER) * (EFT_NUM_BUFFERS))
- /* copied and flipped values */
- int datalen;
-- int datadgst;
-- char zero_copy_hdr;
++#define FCE_NUM_BUFFERS 64
++#define FCE_BYTES_PER_BUFFER 0x400
++#define FCE_SIZE ((FCE_BYTES_PER_BUFFER) * (FCE_NUM_BUFFERS))
++#define fce_calc_size(b) ((FCE_BYTES_PER_BUFFER) * (b))
++
++struct qla2xxx_fce_chain {
++ uint32_t type;
++ uint32_t chain_size;
++
++ uint32_t size;
++ uint32_t addr_l;
++ uint32_t addr_h;
++ uint32_t eregs[8];
+};
+
-+/* Socket connection send helper */
-+struct iscsi_tcp_send {
-+ struct iscsi_hdr *hdr;
-+ struct iscsi_segment segment;
-+ struct iscsi_segment data_segment;
- };
++#define DUMP_CHAIN_VARIANT 0x80000000
++#define DUMP_CHAIN_FCE 0x7FFFFAF0
++#define DUMP_CHAIN_LAST 0x80000000
++
+ struct qla2xxx_fw_dump {
+ uint8_t signature[4];
+ uint32_t version;
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 04e8cbc..6f129da 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -623,9 +623,6 @@ typedef struct {
+ #define MBC_GET_LINK_PRIV_STATS 0x6d /* Get link & private data. */
+ #define MBC_SET_VENDOR_ID 0x76 /* Set Vendor ID. */
- struct iscsi_tcp_conn {
- struct iscsi_conn *iscsi_conn;
- struct socket *sock;
-- struct iscsi_hdr hdr; /* header placeholder */
-- char hdrext[4*sizeof(__u16) +
-- sizeof(__u32)];
-- int data_copied;
- int stop_stage; /* conn_stop() flag: *
- * stop to recover, *
- * stop to terminate */
-- /* iSCSI connection-wide sequencing */
-- int hdr_size; /* PDU header size */
+-#define TC_ENABLE 4
+-#define TC_DISABLE 5
-
- /* control data */
- struct iscsi_tcp_recv in; /* TCP receive context */
-- int in_progress; /* connection state machine */
-+ struct iscsi_tcp_send out; /* TCP send context */
-
- /* old values for socket callbacks */
- void (*old_data_ready)(struct sock *, int);
-@@ -103,29 +93,19 @@ struct iscsi_tcp_conn {
- uint32_t sendpage_failures_cnt;
- uint32_t discontiguous_hdr_cnt;
-
-- ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
--};
-+ int error;
-
--struct iscsi_buf {
-- struct scatterlist sg;
-- unsigned int sent;
-- char use_sendmsg;
-+ ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
- };
-
- struct iscsi_data_task {
- struct iscsi_data hdr; /* PDU */
-- char hdrext[sizeof(__u32)]; /* Header-Digest */
-- struct iscsi_buf digestbuf; /* digest buffer */
-- uint32_t digest; /* data digest */
-+ char hdrext[ISCSI_DIGEST_SIZE];/* Header-Digest */
- };
+ /* Firmware return data sizes */
+ #define FCAL_MAP_SIZE 128
- struct iscsi_tcp_mgmt_task {
- struct iscsi_hdr hdr;
-- char hdrext[sizeof(__u32)]; /* Header-Digest */
-- unsigned long xmstate; /* mgmt xmit progress */
-- struct iscsi_buf headbuf; /* header buffer */
-- struct iscsi_buf sendbuf; /* in progress buffer */
-- int sent;
-+ char hdrext[ISCSI_DIGEST_SIZE]; /* Header-Digest */
- };
+@@ -862,14 +859,20 @@ typedef struct {
+ #define GLSO_SEND_RPS BIT_0
+ #define GLSO_USE_DID BIT_3
- struct iscsi_r2t_info {
-@@ -133,38 +113,26 @@ struct iscsi_r2t_info {
- __be32 exp_statsn; /* copied from R2T */
- uint32_t data_length; /* copied from R2T */
- uint32_t data_offset; /* copied from R2T */
-- struct iscsi_buf headbuf; /* Data-Out Header Buffer */
-- struct iscsi_buf sendbuf; /* Data-Out in progress buffer*/
- int sent; /* R2T sequence progress */
- int data_count; /* DATA-Out payload progress */
-- struct scatterlist *sg; /* per-R2T SG list */
- int solicit_datasn;
-- struct iscsi_data_task dtask; /* which data task */
-+ struct iscsi_data_task dtask; /* Data-Out header buf */
- };
+-typedef struct {
+- uint32_t link_fail_cnt;
+- uint32_t loss_sync_cnt;
+- uint32_t loss_sig_cnt;
+- uint32_t prim_seq_err_cnt;
+- uint32_t inval_xmit_word_cnt;
+- uint32_t inval_crc_cnt;
+-} link_stat_t;
++struct link_statistics {
++ uint32_t link_fail_cnt;
++ uint32_t loss_sync_cnt;
++ uint32_t loss_sig_cnt;
++ uint32_t prim_seq_err_cnt;
++ uint32_t inval_xmit_word_cnt;
++ uint32_t inval_crc_cnt;
++ uint32_t unused1[0x1b];
++ uint32_t tx_frames;
++ uint32_t rx_frames;
++ uint32_t dumped_frames;
++ uint32_t unused2[2];
++ uint32_t nos_rcvd;
++};
- struct iscsi_tcp_cmd_task {
-- struct iscsi_cmd hdr;
-- char hdrext[4*sizeof(__u16)+ /* AHS */
-- sizeof(__u32)]; /* HeaderDigest */
-- char pad[ISCSI_PAD_LEN];
-- int pad_count; /* padded bytes */
-- struct iscsi_buf headbuf; /* header buf (xmit) */
-- struct iscsi_buf sendbuf; /* in progress buffer*/
-- unsigned long xmstate; /* xmit xtate machine */
-+ struct iscsi_hdr_buff {
-+ struct iscsi_cmd cmd_hdr;
-+ char hdrextbuf[ISCSI_MAX_AHS_SIZE +
-+ ISCSI_DIGEST_SIZE];
-+ } hdr;
-+
- int sent;
-- struct scatterlist *sg; /* per-cmd SG list */
-- struct scatterlist *bad_sg; /* assert statement */
-- int sg_count; /* SG's to process */
-- uint32_t exp_datasn; /* expected target's R2TSN/DataSN */
-+ uint32_t exp_datasn; /* expected target's R2TSN/DataSN */
- int data_offset;
-- struct iscsi_r2t_info *r2t; /* in progress R2T */
-- struct iscsi_queue r2tpool;
-+ struct iscsi_r2t_info *r2t; /* in progress R2T */
-+ struct iscsi_pool r2tpool;
- struct kfifo *r2tqueue;
-- struct iscsi_r2t_info **r2ts;
-- int digest_count;
-- uint32_t immdigest; /* for imm data */
-- struct iscsi_buf immbuf; /* for imm data digest */
-- struct iscsi_data_task unsol_dtask; /* unsol data task */
-+ struct iscsi_data_task unsol_dtask; /* Data-Out header buf */
- };
+ /*
+ * NVRAM Command values.
+@@ -2116,14 +2119,6 @@ struct qla_msix_entry {
- #endif /* ISCSI_H */
-diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
-index 8b57af5..553168a 100644
---- a/drivers/scsi/libiscsi.c
-+++ b/drivers/scsi/libiscsi.c
-@@ -24,6 +24,7 @@
- #include <linux/types.h>
- #include <linux/kfifo.h>
- #include <linux/delay.h>
-+#include <linux/log2.h>
- #include <asm/unaligned.h>
- #include <net/tcp.h>
- #include <scsi/scsi_cmnd.h>
-@@ -86,7 +87,7 @@ iscsi_update_cmdsn(struct iscsi_session *session, struct iscsi_nopin *hdr)
- * xmit thread
- */
- if (!list_empty(&session->leadconn->xmitqueue) ||
-- __kfifo_len(session->leadconn->mgmtqueue))
-+ !list_empty(&session->leadconn->mgmtqueue))
- scsi_queue_work(session->host,
- &session->leadconn->xmitwork);
- }
-@@ -122,6 +123,20 @@ void iscsi_prep_unsolicit_data_pdu(struct iscsi_cmd_task *ctask,
- }
- EXPORT_SYMBOL_GPL(iscsi_prep_unsolicit_data_pdu);
+ #define WATCH_INTERVAL 1 /* number of seconds */
-+static int iscsi_add_hdr(struct iscsi_cmd_task *ctask, unsigned len)
-+{
-+ unsigned exp_len = ctask->hdr_len + len;
-+
-+ if (exp_len > ctask->hdr_max) {
-+ WARN_ON(1);
-+ return -EINVAL;
-+ }
-+
-+ WARN_ON(len & (ISCSI_PAD_LEN - 1)); /* caller must pad the AHS */
-+ ctask->hdr_len = exp_len;
-+ return 0;
-+}
-+
- /**
- * iscsi_prep_scsi_cmd_pdu - prep iscsi scsi cmd pdu
- * @ctask: iscsi cmd task
-@@ -129,27 +144,32 @@ EXPORT_SYMBOL_GPL(iscsi_prep_unsolicit_data_pdu);
- * Prep basic iSCSI PDU fields for a scsi cmd pdu. The LLD should set
- * fields like dlength or final based on how much data it sends
+-/* NPIV */
+-#define MAX_MULTI_ID_LOOP 126
+-#define MAX_MULTI_ID_FABRIC 64
+-#define MAX_NUM_VPORT_LOOP (MAX_MULTI_ID_LOOP - 1)
+-#define MAX_NUM_VPORT_FABRIC (MAX_MULTI_ID_FABRIC - 1)
+-#define MAX_NUM_VHBA_LOOP (MAX_MULTI_ID_LOOP - 1)
+-#define MAX_NUM_VHBA_FABRIC (MAX_MULTI_ID_FABRIC - 1)
+-
+ /*
+ * Linux Host Adapter structure
*/
--static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
-+static int iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
- {
- struct iscsi_conn *conn = ctask->conn;
- struct iscsi_session *session = conn->session;
- struct iscsi_cmd *hdr = ctask->hdr;
- struct scsi_cmnd *sc = ctask->sc;
-+ unsigned hdrlength;
-+ int rc;
-
-- hdr->opcode = ISCSI_OP_SCSI_CMD;
-- hdr->flags = ISCSI_ATTR_SIMPLE;
-- int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
-- hdr->itt = build_itt(ctask->itt, conn->id, session->age);
-- hdr->data_length = cpu_to_be32(scsi_bufflen(sc));
-- hdr->cmdsn = cpu_to_be32(session->cmdsn);
-- session->cmdsn++;
-- hdr->exp_statsn = cpu_to_be32(conn->exp_statsn);
-- memcpy(hdr->cdb, sc->cmnd, sc->cmd_len);
-+ ctask->hdr_len = 0;
-+ rc = iscsi_add_hdr(ctask, sizeof(*hdr));
-+ if (rc)
-+ return rc;
-+ hdr->opcode = ISCSI_OP_SCSI_CMD;
-+ hdr->flags = ISCSI_ATTR_SIMPLE;
-+ int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
-+ hdr->itt = build_itt(ctask->itt, conn->id, session->age);
-+ hdr->data_length = cpu_to_be32(scsi_bufflen(sc));
-+ hdr->cmdsn = cpu_to_be32(session->cmdsn);
-+ session->cmdsn++;
-+ hdr->exp_statsn = cpu_to_be32(conn->exp_statsn);
-+ memcpy(hdr->cdb, sc->cmnd, sc->cmd_len);
- if (sc->cmd_len < MAX_COMMAND_SIZE)
- memset(&hdr->cdb[sc->cmd_len], 0,
- MAX_COMMAND_SIZE - sc->cmd_len);
-
-- ctask->data_count = 0;
- ctask->imm_count = 0;
- if (sc->sc_data_direction == DMA_TO_DEVICE) {
- hdr->flags |= ISCSI_FLAG_CMD_WRITE;
-@@ -178,9 +198,9 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
- else
- ctask->imm_count = min(scsi_bufflen(sc),
- conn->max_xmit_dlength);
-- hton24(ctask->hdr->dlength, ctask->imm_count);
-+ hton24(hdr->dlength, ctask->imm_count);
- } else
-- zero_data(ctask->hdr->dlength);
-+ zero_data(hdr->dlength);
+@@ -2161,6 +2156,7 @@ typedef struct scsi_qla_host {
+ uint32_t gpsc_supported :1;
+ uint32_t vsan_enabled :1;
+ uint32_t npiv_supported :1;
++ uint32_t fce_enabled :1;
+ } flags;
- if (!session->initial_r2t_en) {
- ctask->unsol_count = min((session->first_burst),
-@@ -190,7 +210,7 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
+ atomic_t loop_state;
+@@ -2273,8 +2269,7 @@ typedef struct scsi_qla_host {
- if (!ctask->unsol_count)
- /* No unsolicit Data-Out's */
-- ctask->hdr->flags |= ISCSI_FLAG_CMD_FINAL;
-+ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
- } else {
- hdr->flags |= ISCSI_FLAG_CMD_FINAL;
- zero_data(hdr->dlength);
-@@ -199,13 +219,25 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
- hdr->flags |= ISCSI_FLAG_CMD_READ;
- }
+ int bars;
+ device_reg_t __iomem *iobase; /* Base I/O address */
+- unsigned long pio_address;
+- unsigned long pio_length;
++ resource_size_t pio_address;
+ #define MIN_IOBASE_LEN 0x100
-- conn->scsicmd_pdus_cnt++;
-+ /* calculate size of additional header segments (AHSs) */
-+ hdrlength = ctask->hdr_len - sizeof(*hdr);
-+
-+ WARN_ON(hdrlength & (ISCSI_PAD_LEN-1));
-+ hdrlength /= ISCSI_PAD_LEN;
-+
-+ WARN_ON(hdrlength >= 256);
-+ hdr->hlength = hdrlength & 0xFF;
-+
-+ if (conn->session->tt->init_cmd_task(conn->ctask))
-+ return EIO;
+ /* ISP ring lock, rings, and indexes */
+@@ -2416,9 +2411,9 @@ typedef struct scsi_qla_host {
+ #define MBX_INTR_WAIT 2
+ #define MBX_UPDATE_FLASH_ACTIVE 3
-- debug_scsi("iscsi prep [%s cid %d sc %p cdb 0x%x itt 0x%x len %d "
-+ conn->scsicmd_pdus_cnt++;
-+ debug_scsi("iscsi prep [%s cid %d sc %p cdb 0x%x itt 0x%x len %d "
- "cmdsn %d win %d]\n",
-- sc->sc_data_direction == DMA_TO_DEVICE ? "write" : "read",
-+ sc->sc_data_direction == DMA_TO_DEVICE ? "write" : "read",
- conn->id, sc, sc->cmnd[0], ctask->itt, scsi_bufflen(sc),
-- session->cmdsn, session->max_cmdsn - session->exp_cmdsn + 1);
-+ session->cmdsn, session->max_cmdsn - session->exp_cmdsn + 1);
-+ return 0;
- }
+- struct semaphore mbx_cmd_sem; /* Serialialize mbx access */
+ struct semaphore vport_sem; /* Virtual port synchronization */
+- struct semaphore mbx_intr_sem; /* Used for completion notification */
++ struct completion mbx_cmd_comp; /* Serialize mbx access */
++ struct completion mbx_intr_comp; /* Used for completion notification */
- /**
-@@ -218,13 +250,16 @@ static void iscsi_prep_scsi_cmd_pdu(struct iscsi_cmd_task *ctask)
- */
- static void iscsi_complete_command(struct iscsi_cmd_task *ctask)
- {
-- struct iscsi_session *session = ctask->conn->session;
-+ struct iscsi_conn *conn = ctask->conn;
-+ struct iscsi_session *session = conn->session;
- struct scsi_cmnd *sc = ctask->sc;
+ uint32_t mbx_flags;
+ #define MBX_IN_PROGRESS BIT_0
+@@ -2455,6 +2450,15 @@ typedef struct scsi_qla_host {
+ dma_addr_t eft_dma;
+ void *eft;
- ctask->state = ISCSI_TASK_COMPLETED;
- ctask->sc = NULL;
- /* SCSI eh reuses commands to verify us */
- sc->SCp.ptr = NULL;
-+ if (conn->ctask == ctask)
-+ conn->ctask = NULL;
- list_del_init(&ctask->running);
- __kfifo_put(session->cmdpool.queue, (void*)&ctask, sizeof(void*));
- sc->scsi_done(sc);
-@@ -241,6 +276,112 @@ static void __iscsi_put_ctask(struct iscsi_cmd_task *ctask)
- iscsi_complete_command(ctask);
- }
++ struct dentry *dfs_dir;
++ struct dentry *dfs_fce;
++ dma_addr_t fce_dma;
++ void *fce;
++ uint32_t fce_bufs;
++ uint16_t fce_mb[8];
++ uint64_t fce_wr, fce_rd;
++ struct mutex fce_mutex;
++
+ uint8_t host_str[16];
+ uint32_t pci_attr;
+ uint16_t chip_revision;
+@@ -2507,7 +2511,7 @@ typedef struct scsi_qla_host {
+ struct list_head vp_list; /* list of VP */
+ struct fc_vport *fc_vport; /* holds fc_vport * for each vport */
+- uint8_t vp_idx_map[16];
++ unsigned long vp_idx_map[(MAX_MULTI_ID_FABRIC / 8) / sizeof(unsigned long)];
+ uint16_t num_vhosts; /* number of vports created */
+ uint16_t num_vsans; /* number of vsan created */
+ uint16_t vp_idx; /* vport ID */
+diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
+new file mode 100644
+index 0000000..1479c60
+--- /dev/null
++++ b/drivers/scsi/qla2xxx/qla_dfs.c
+@@ -0,0 +1,175 @@
+/*
-+ * session lock must be held
++ * QLogic Fibre Channel HBA Driver
++ * Copyright (c) 2003-2005 QLogic Corporation
++ *
++ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
-+static void fail_command(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
-+ int err)
++#include "qla_def.h"
++
++#include <linux/debugfs.h>
++#include <linux/seq_file.h>
++
++static struct dentry *qla2x00_dfs_root;
++static atomic_t qla2x00_dfs_root_count;
++
++static int
++qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
+{
-+ struct scsi_cmnd *sc;
++ scsi_qla_host_t *ha = s->private;
++ uint32_t cnt;
++ uint32_t *fce;
++ uint64_t fce_start;
+
-+ sc = ctask->sc;
-+ if (!sc)
-+ return;
++ mutex_lock(&ha->fce_mutex);
+
-+ if (ctask->state == ISCSI_TASK_PENDING)
-+ /*
-+ * cmd never made it to the xmit thread, so we should not count
-+ * the cmd in the sequencing
-+ */
-+ conn->session->queued_cmdsn--;
-+ else
-+ conn->session->tt->cleanup_cmd_task(conn, ctask);
++ seq_printf(s, "FCE Trace Buffer\n");
++ seq_printf(s, "In Pointer = %llx\n\n", ha->fce_wr);
++ seq_printf(s, "Base = %llx\n\n", (unsigned long long) ha->fce_dma);
++ seq_printf(s, "FCE Enable Registers\n");
++ seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
++ ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
++ ha->fce_mb[5], ha->fce_mb[6]);
+
-+ sc->result = err;
-+ scsi_set_resid(sc, scsi_bufflen(sc));
-+ if (conn->ctask == ctask)
-+ conn->ctask = NULL;
-+ /* release ref from queuecommand */
-+ __iscsi_put_ctask(ctask);
-+}
++ fce = (uint32_t *) ha->fce;
++ fce_start = (unsigned long long) ha->fce_dma;
++ for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
++ if (cnt % 8 == 0)
++ seq_printf(s, "\n%llx: ",
++ (unsigned long long)((cnt * 4) + fce_start));
++ else
++ seq_printf(s, " ");
++ seq_printf(s, "%08x", *fce++);
++ }
+
-+/**
-+ * iscsi_free_mgmt_task - return mgmt task back to pool
-+ * @conn: iscsi connection
-+ * @mtask: mtask
-+ *
-+ * Must be called with session lock.
-+ */
-+void iscsi_free_mgmt_task(struct iscsi_conn *conn,
-+ struct iscsi_mgmt_task *mtask)
-+{
-+ list_del_init(&mtask->running);
-+ if (conn->login_mtask == mtask)
-+ return;
++ seq_printf(s, "\nEnd\n");
+
-+ if (conn->ping_mtask == mtask)
-+ conn->ping_mtask = NULL;
-+ __kfifo_put(conn->session->mgmtpool.queue,
-+ (void*)&mtask, sizeof(void*));
++ mutex_unlock(&ha->fce_mutex);
++
++ return 0;
+}
-+EXPORT_SYMBOL_GPL(iscsi_free_mgmt_task);
+
-+static struct iscsi_mgmt_task *
-+__iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
-+ char *data, uint32_t data_size)
++static int
++qla2x00_dfs_fce_open(struct inode *inode, struct file *file)
+{
-+ struct iscsi_session *session = conn->session;
-+ struct iscsi_mgmt_task *mtask;
++ scsi_qla_host_t *ha = inode->i_private;
++ int rval;
+
-+ if (session->state == ISCSI_STATE_TERMINATE)
-+ return NULL;
++ if (!ha->flags.fce_enabled)
++ goto out;
+
-+ if (hdr->opcode == (ISCSI_OP_LOGIN | ISCSI_OP_IMMEDIATE) ||
-+ hdr->opcode == (ISCSI_OP_TEXT | ISCSI_OP_IMMEDIATE))
-+ /*
-+ * Login and Text are sent serially, in
-+ * request-followed-by-response sequence.
-+ * Same mtask can be used. Same ITT must be used.
-+ * Note that login_mtask is preallocated at conn_create().
-+ */
-+ mtask = conn->login_mtask;
-+ else {
-+ BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE);
-+ BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
++ mutex_lock(&ha->fce_mutex);
+
-+ if (!__kfifo_get(session->mgmtpool.queue,
-+ (void*)&mtask, sizeof(void*)))
-+ return NULL;
-+ }
++ /* Pause tracing to flush FCE buffers. */
++ rval = qla2x00_disable_fce_trace(ha, &ha->fce_wr, &ha->fce_rd);
++ if (rval)
++ qla_printk(KERN_WARNING, ha,
++ "DebugFS: Unable to disable FCE (%d).\n", rval);
+
-+ if (data_size) {
-+ memcpy(mtask->data, data, data_size);
-+ mtask->data_count = data_size;
-+ } else
-+ mtask->data_count = 0;
++ ha->flags.fce_enabled = 0;
+
-+ memcpy(mtask->hdr, hdr, sizeof(struct iscsi_hdr));
-+ INIT_LIST_HEAD(&mtask->running);
-+ list_add_tail(&mtask->running, &conn->mgmtqueue);
-+ return mtask;
++ mutex_unlock(&ha->fce_mutex);
++out:
++ return single_open(file, qla2x00_dfs_fce_show, ha);
+}
+
-+int iscsi_conn_send_pdu(struct iscsi_cls_conn *cls_conn, struct iscsi_hdr *hdr,
-+ char *data, uint32_t data_size)
++static int
++qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
+{
-+ struct iscsi_conn *conn = cls_conn->dd_data;
-+ struct iscsi_session *session = conn->session;
-+ int err = 0;
++ scsi_qla_host_t *ha = inode->i_private;
++ int rval;
+
-+ spin_lock_bh(&session->lock);
-+ if (!__iscsi_conn_send_pdu(conn, hdr, data, data_size))
-+ err = -EPERM;
-+ spin_unlock_bh(&session->lock);
-+ scsi_queue_work(session->host, &conn->xmitwork);
-+ return err;
++ if (ha->flags.fce_enabled)
++ goto out;
++
++ mutex_lock(&ha->fce_mutex);
++
++ /* Re-enable FCE tracing. */
++ ha->flags.fce_enabled = 1;
++ memset(ha->fce, 0, fce_calc_size(ha->fce_bufs));
++ rval = qla2x00_enable_fce_trace(ha, ha->fce_dma, ha->fce_bufs,
++ ha->fce_mb, &ha->fce_bufs);
++ if (rval) {
++ qla_printk(KERN_WARNING, ha,
++ "DebugFS: Unable to reinitialize FCE (%d).\n", rval);
++ ha->flags.fce_enabled = 0;
++ }
++
++ mutex_unlock(&ha->fce_mutex);
++out:
++ return single_release(inode, file);
+}
-+EXPORT_SYMBOL_GPL(iscsi_conn_send_pdu);
+
- /**
- * iscsi_cmd_rsp - SCSI Command Response processing
- * @conn: iscsi connection
-@@ -291,17 +432,19 @@ invalid_datalen:
- min_t(uint16_t, senselen, SCSI_SENSE_BUFFERSIZE));
- }
-
-- if (rhdr->flags & ISCSI_FLAG_CMD_UNDERFLOW) {
-+ if (rhdr->flags & (ISCSI_FLAG_CMD_UNDERFLOW |
-+ ISCSI_FLAG_CMD_OVERFLOW)) {
- int res_count = be32_to_cpu(rhdr->residual_count);
-
-- if (res_count > 0 && res_count <= scsi_bufflen(sc))
-+ if (res_count > 0 &&
-+ (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW ||
-+ res_count <= scsi_bufflen(sc)))
- scsi_set_resid(sc, res_count);
- else
- sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
-- } else if (rhdr->flags & ISCSI_FLAG_CMD_BIDI_UNDERFLOW)
-+ } else if (rhdr->flags & (ISCSI_FLAG_CMD_BIDI_UNDERFLOW |
-+ ISCSI_FLAG_CMD_BIDI_OVERFLOW))
- sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
-- else if (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW)
-- scsi_set_resid(sc, be32_to_cpu(rhdr->residual_count));
-
- out:
- debug_scsi("done [sc %lx res %d itt 0x%x]\n",
-@@ -318,18 +461,51 @@ static void iscsi_tmf_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
- conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
- conn->tmfrsp_pdus_cnt++;
-
-- if (conn->tmabort_state != TMABORT_INITIAL)
-+ if (conn->tmf_state != TMF_QUEUED)
- return;
-
- if (tmf->response == ISCSI_TMF_RSP_COMPLETE)
-- conn->tmabort_state = TMABORT_SUCCESS;
-+ conn->tmf_state = TMF_SUCCESS;
- else if (tmf->response == ISCSI_TMF_RSP_NO_TASK)
-- conn->tmabort_state = TMABORT_NOT_FOUND;
-+ conn->tmf_state = TMF_NOT_FOUND;
- else
-- conn->tmabort_state = TMABORT_FAILED;
-+ conn->tmf_state = TMF_FAILED;
- wake_up(&conn->ehwait);
- }
-
-+static void iscsi_send_nopout(struct iscsi_conn *conn, struct iscsi_nopin *rhdr)
++static const struct file_operations dfs_fce_ops = {
++ .open = qla2x00_dfs_fce_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = qla2x00_dfs_fce_release,
++};
++
++int
++qla2x00_dfs_setup(scsi_qla_host_t *ha)
+{
-+ struct iscsi_nopout hdr;
-+ struct iscsi_mgmt_task *mtask;
++ if (!IS_QLA25XX(ha))
++ goto out;
++ if (!ha->fce)
++ goto out;
+
-+ if (!rhdr && conn->ping_mtask)
-+ return;
++ if (qla2x00_dfs_root)
++ goto create_dir;
+
-+ memset(&hdr, 0, sizeof(struct iscsi_nopout));
-+ hdr.opcode = ISCSI_OP_NOOP_OUT | ISCSI_OP_IMMEDIATE;
-+ hdr.flags = ISCSI_FLAG_CMD_FINAL;
++ atomic_set(&qla2x00_dfs_root_count, 0);
++ qla2x00_dfs_root = debugfs_create_dir(QLA2XXX_DRIVER_NAME, NULL);
++ if (!qla2x00_dfs_root) {
++ qla_printk(KERN_NOTICE, ha,
++ "DebugFS: Unable to create root directory.\n");
++ goto out;
++ }
+
-+ if (rhdr) {
-+ memcpy(hdr.lun, rhdr->lun, 8);
-+ hdr.ttt = rhdr->ttt;
-+ hdr.itt = RESERVED_ITT;
-+ } else
-+ hdr.ttt = RESERVED_ITT;
++create_dir:
++ if (ha->dfs_dir)
++ goto create_nodes;
+
-+ mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)&hdr, NULL, 0);
-+ if (!mtask) {
-+ printk(KERN_ERR "Could not send nopout\n");
-+ return;
++ mutex_init(&ha->fce_mutex);
++ ha->dfs_dir = debugfs_create_dir(ha->host_str, qla2x00_dfs_root);
++ if (!ha->dfs_dir) {
++ qla_printk(KERN_NOTICE, ha,
++ "DebugFS: Unable to create ha directory.\n");
++ goto out;
+ }
+
-+ /* only track our nops */
-+ if (!rhdr) {
-+ conn->ping_mtask = mtask;
-+ conn->last_ping = jiffies;
++ atomic_inc(&qla2x00_dfs_root_count);
++
++create_nodes:
++ ha->dfs_fce = debugfs_create_file("fce", S_IRUSR, ha->dfs_dir, ha,
++ &dfs_fce_ops);
++ if (!ha->dfs_fce) {
++ qla_printk(KERN_NOTICE, ha,
++ "DebugFS: Unable to fce node.\n");
++ goto out;
+ }
-+ scsi_queue_work(conn->session->host, &conn->xmitwork);
++out:
++ return 0;
+}
+
- static int iscsi_handle_reject(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
- char *data, int datalen)
- {
-@@ -374,6 +550,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
- struct iscsi_mgmt_task *mtask;
- uint32_t itt;
-
-+ conn->last_recv = jiffies;
- if (hdr->itt != RESERVED_ITT)
- itt = get_itt(hdr->itt);
- else
-@@ -429,10 +606,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
- */
- if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen))
- rc = ISCSI_ERR_CONN_FAILED;
-- list_del(&mtask->running);
-- if (conn->login_mtask != mtask)
-- __kfifo_put(session->mgmtpool.queue,
-- (void*)&mtask, sizeof(void*));
-+ iscsi_free_mgmt_task(conn, mtask);
- break;
- case ISCSI_OP_SCSI_TMFUNC_RSP:
- if (datalen) {
-@@ -441,20 +615,26 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
- }
-
- iscsi_tmf_rsp(conn, hdr);
-+ iscsi_free_mgmt_task(conn, mtask);
- break;
- case ISCSI_OP_NOOP_IN:
-- if (hdr->ttt != cpu_to_be32(ISCSI_RESERVED_TAG) || datalen) {
-+ if (hdr->ttt != cpu_to_be32(ISCSI_RESERVED_TAG) ||
-+ datalen) {
- rc = ISCSI_ERR_PROTO;
- break;
- }
- conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
-
-- if (iscsi_recv_pdu(conn->cls_conn, hdr, data, datalen))
-- rc = ISCSI_ERR_CONN_FAILED;
-- list_del(&mtask->running);
-- if (conn->login_mtask != mtask)
-- __kfifo_put(session->mgmtpool.queue,
-- (void*)&mtask, sizeof(void*));
-+ if (conn->ping_mtask != mtask) {
-+ /*
-+ * If this is not in response to one of our
-+ * nops then it must be from userspace.
-+ */
-+ if (iscsi_recv_pdu(conn->cls_conn, hdr, data,
-+ datalen))
-+ rc = ISCSI_ERR_CONN_FAILED;
-+ }
-+ iscsi_free_mgmt_task(conn, mtask);
- break;
- default:
- rc = ISCSI_ERR_BAD_OPCODE;
-@@ -473,8 +653,7 @@ int __iscsi_complete_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
- if (hdr->ttt == cpu_to_be32(ISCSI_RESERVED_TAG))
- break;
-
-- if (iscsi_recv_pdu(conn->cls_conn, hdr, NULL, 0))
-- rc = ISCSI_ERR_CONN_FAILED;
-+ iscsi_send_nopout(conn, (struct iscsi_nopin*)hdr);
- break;
- case ISCSI_OP_REJECT:
- rc = iscsi_handle_reject(conn, hdr, data, datalen);
-@@ -609,20 +788,19 @@ static void iscsi_prep_mtask(struct iscsi_conn *conn,
- session->tt->init_mgmt_task(conn, mtask);
-
- debug_scsi("mgmtpdu [op 0x%x hdr->itt 0x%x datalen %d]\n",
-- hdr->opcode, hdr->itt, mtask->data_count);
-+ hdr->opcode & ISCSI_OPCODE_MASK, hdr->itt,
-+ mtask->data_count);
- }
-
- static int iscsi_xmit_mtask(struct iscsi_conn *conn)
- {
- struct iscsi_hdr *hdr = conn->mtask->hdr;
-- int rc, was_logout = 0;
-+ int rc;
-
-+ if ((hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGOUT)
-+ conn->session->state = ISCSI_STATE_LOGGING_OUT;
- spin_unlock_bh(&conn->session->lock);
-- if ((hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGOUT) {
-- conn->session->state = ISCSI_STATE_IN_RECOVERY;
-- iscsi_block_session(session_to_cls(conn->session));
-- was_logout = 1;
-- }
-+
- rc = conn->session->tt->xmit_mgmt_task(conn, conn->mtask);
- spin_lock_bh(&conn->session->lock);
- if (rc)
-@@ -630,11 +808,6 @@ static int iscsi_xmit_mtask(struct iscsi_conn *conn)
-
- /* done with this in-progress mtask */
- conn->mtask = NULL;
--
-- if (was_logout) {
-- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
-- return -ENODATA;
-- }
- return 0;
- }
-
-@@ -658,21 +831,13 @@ static int iscsi_check_cmdsn_window_closed(struct iscsi_conn *conn)
- static int iscsi_xmit_ctask(struct iscsi_conn *conn)
- {
- struct iscsi_cmd_task *ctask = conn->ctask;
-- int rc = 0;
--
-- /*
-- * serialize with TMF AbortTask
-- */
-- if (ctask->state == ISCSI_TASK_ABORTING)
-- goto done;
-+ int rc;
-
- __iscsi_get_ctask(ctask);
- spin_unlock_bh(&conn->session->lock);
- rc = conn->session->tt->xmit_cmd_task(conn, ctask);
- spin_lock_bh(&conn->session->lock);
- __iscsi_put_ctask(ctask);
--
--done:
- if (!rc)
- /* done with this ctask */
- conn->ctask = NULL;
-@@ -680,6 +845,22 @@ done:
- }
-
- /**
-+ * iscsi_requeue_ctask - requeue ctask to run from session workqueue
-+ * @ctask: ctask to requeue
-+ *
-+ * LLDs that need to run a ctask from the session workqueue should call
-+ * this. The session lock must be held.
-+ */
-+void iscsi_requeue_ctask(struct iscsi_cmd_task *ctask)
++int
++qla2x00_dfs_remove(scsi_qla_host_t *ha)
+{
-+ struct iscsi_conn *conn = ctask->conn;
++ if (ha->dfs_fce) {
++ debugfs_remove(ha->dfs_fce);
++ ha->dfs_fce = NULL;
++ }
+
-+ list_move_tail(&ctask->running, &conn->requeue);
-+ scsi_queue_work(conn->session->host, &conn->xmitwork);
-+}
-+EXPORT_SYMBOL_GPL(iscsi_requeue_ctask);
++ if (ha->dfs_dir) {
++ debugfs_remove(ha->dfs_dir);
++ ha->dfs_dir = NULL;
++ atomic_dec(&qla2x00_dfs_root_count);
++ }
+
-+/**
- * iscsi_data_xmit - xmit any command into the scheduled connection
- * @conn: iscsi connection
- *
-@@ -717,36 +898,40 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
- * overflow us with nop-ins
- */
- check_mgmt:
-- while (__kfifo_get(conn->mgmtqueue, (void*)&conn->mtask,
-- sizeof(void*))) {
-+ while (!list_empty(&conn->mgmtqueue)) {
-+ conn->mtask = list_entry(conn->mgmtqueue.next,
-+ struct iscsi_mgmt_task, running);
-+ if (conn->session->state == ISCSI_STATE_LOGGING_OUT) {
-+ iscsi_free_mgmt_task(conn, conn->mtask);
-+ conn->mtask = NULL;
-+ continue;
-+ }
++ if (atomic_read(&qla2x00_dfs_root_count) == 0 &&
++ qla2x00_dfs_root) {
++ debugfs_remove(qla2x00_dfs_root);
++ qla2x00_dfs_root = NULL;
++ }
+
- iscsi_prep_mtask(conn, conn->mtask);
-- list_add_tail(&conn->mtask->running, &conn->mgmt_run_list);
-+ list_move_tail(conn->mgmtqueue.next, &conn->mgmt_run_list);
- rc = iscsi_xmit_mtask(conn);
- if (rc)
- goto again;
- }
++ return 0;
++}
+diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
+index 25364b1..9337e13 100644
+--- a/drivers/scsi/qla2xxx/qla_fw.h
++++ b/drivers/scsi/qla2xxx/qla_fw.h
+@@ -952,9 +952,31 @@ struct device_reg_24xx {
+ uint32_t iobase_sdata;
+ };
-- /* process command queue */
-+ /* process pending command queue */
- while (!list_empty(&conn->xmitqueue)) {
-- /*
-- * iscsi tcp may readd the task to the xmitqueue to send
-- * write data
-- */
-+ if (conn->tmf_state == TMF_QUEUED)
-+ break;
++/* Trace Control *************************************************************/
+
- conn->ctask = list_entry(conn->xmitqueue.next,
- struct iscsi_cmd_task, running);
-- switch (conn->ctask->state) {
-- case ISCSI_TASK_ABORTING:
-- break;
-- case ISCSI_TASK_PENDING:
-- iscsi_prep_scsi_cmd_pdu(conn->ctask);
-- conn->session->tt->init_cmd_task(conn->ctask);
-- /* fall through */
-- default:
-- conn->ctask->state = ISCSI_TASK_RUNNING;
-- break;
-+ if (conn->session->state == ISCSI_STATE_LOGGING_OUT) {
-+ fail_command(conn, conn->ctask, DID_IMM_RETRY << 16);
-+ continue;
-+ }
-+ if (iscsi_prep_scsi_cmd_pdu(conn->ctask)) {
-+ fail_command(conn, conn->ctask, DID_ABORT << 16);
-+ continue;
- }
-- list_move_tail(conn->xmitqueue.next, &conn->run_list);
-
-+ conn->ctask->state = ISCSI_TASK_RUNNING;
-+ list_move_tail(conn->xmitqueue.next, &conn->run_list);
- rc = iscsi_xmit_ctask(conn);
- if (rc)
- goto again;
-@@ -755,7 +940,28 @@ check_mgmt:
- * we need to check the mgmt queue for nops that need to
- * be sent to aviod starvation
- */
-- if (__kfifo_len(conn->mgmtqueue))
-+ if (!list_empty(&conn->mgmtqueue))
-+ goto check_mgmt;
-+ }
++#define TC_AEN_DISABLE 0
+
-+ while (!list_empty(&conn->requeue)) {
-+ if (conn->session->fast_abort && conn->tmf_state != TMF_INITIAL)
-+ break;
++#define TC_EFT_ENABLE 4
++#define TC_EFT_DISABLE 5
+
-+ /*
-+ * we always do fastlogout - conn stop code will clean up.
-+ */
-+ if (conn->session->state == ISCSI_STATE_LOGGING_OUT)
-+ break;
++#define TC_FCE_ENABLE 8
++#define TC_FCE_OPTIONS 0
++#define TC_FCE_DEFAULT_RX_SIZE 2112
++#define TC_FCE_DEFAULT_TX_SIZE 2112
++#define TC_FCE_DISABLE 9
++#define TC_FCE_DISABLE_TRACE BIT_0
+
-+ conn->ctask = list_entry(conn->requeue.next,
-+ struct iscsi_cmd_task, running);
-+ conn->ctask->state = ISCSI_TASK_RUNNING;
-+ list_move_tail(conn->requeue.next, &conn->run_list);
-+ rc = iscsi_xmit_ctask(conn);
-+ if (rc)
-+ goto again;
-+ if (!list_empty(&conn->mgmtqueue))
- goto check_mgmt;
- }
- spin_unlock_bh(&conn->session->lock);
-@@ -790,6 +996,7 @@ enum {
- FAILURE_SESSION_TERMINATE,
- FAILURE_SESSION_IN_RECOVERY,
- FAILURE_SESSION_RECOVERY_TIMEOUT,
-+ FAILURE_SESSION_LOGGING_OUT,
- };
-
- int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
-@@ -805,8 +1012,9 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
- sc->SCp.ptr = NULL;
-
- host = sc->device->host;
-- session = iscsi_hostdata(host->hostdata);
-+ spin_unlock(host->host_lock);
+ /* MID Support ***************************************************************/
-+ session = iscsi_hostdata(host->hostdata);
- spin_lock(&session->lock);
+-#define MAX_MID_VPS 125
++#define MIN_MULTI_ID_FABRIC 64 /* Must be power-of-2. */
++#define MAX_MULTI_ID_FABRIC 256 /* ... */
++
++#define for_each_mapped_vp_idx(_ha, _idx) \
++ for (_idx = find_next_bit((_ha)->vp_idx_map, \
++ (_ha)->max_npiv_vports + 1, 1); \
++ _idx <= (_ha)->max_npiv_vports; \
++ _idx = find_next_bit((_ha)->vp_idx_map, \
++ (_ha)->max_npiv_vports + 1, _idx + 1)) \
- /*
-@@ -822,17 +1030,22 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
- * be entering our queuecommand while a block is starting
- * up because the block code is not locked)
- */
-- if (session->state == ISCSI_STATE_IN_RECOVERY) {
-+ switch (session->state) {
-+ case ISCSI_STATE_IN_RECOVERY:
- reason = FAILURE_SESSION_IN_RECOVERY;
- goto reject;
-- }
--
-- if (session->state == ISCSI_STATE_RECOVERY_FAILED)
-+ case ISCSI_STATE_LOGGING_OUT:
-+ reason = FAILURE_SESSION_LOGGING_OUT;
-+ goto reject;
-+ case ISCSI_STATE_RECOVERY_FAILED:
- reason = FAILURE_SESSION_RECOVERY_TIMEOUT;
-- else if (session->state == ISCSI_STATE_TERMINATE)
-+ break;
-+ case ISCSI_STATE_TERMINATE:
- reason = FAILURE_SESSION_TERMINATE;
-- else
-+ break;
-+ default:
- reason = FAILURE_SESSION_FREED;
-+ }
- goto fault;
- }
+ struct mid_conf_entry_24xx {
+ uint16_t reserved_1;
+@@ -982,7 +1004,7 @@ struct mid_init_cb_24xx {
+ uint16_t count;
+ uint16_t options;
-@@ -859,7 +1072,6 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
+- struct mid_conf_entry_24xx entries[MAX_MID_VPS];
++ struct mid_conf_entry_24xx entries[MAX_MULTI_ID_FABRIC];
+ };
- atomic_set(&ctask->refcount, 1);
- ctask->state = ISCSI_TASK_PENDING;
-- ctask->mtask = NULL;
- ctask->conn = conn;
- ctask->sc = sc;
- INIT_LIST_HEAD(&ctask->running);
-@@ -868,11 +1080,13 @@ int iscsi_queuecommand(struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
- spin_unlock(&session->lock);
- scsi_queue_work(host, &conn->xmitwork);
-+ spin_lock(host->host_lock);
- return 0;
+@@ -1002,10 +1024,6 @@ struct mid_db_entry_24xx {
+ uint8_t reserved_1;
+ };
- reject:
- spin_unlock(&session->lock);
- debug_scsi("cmd 0x%x rejected (%d)\n", sc->cmnd[0], reason);
-+ spin_lock(host->host_lock);
- return SCSI_MLQUEUE_HOST_BUSY;
+-struct mid_db_24xx {
+- struct mid_db_entry_24xx entries[MAX_MID_VPS];
+-};
+-
+ /*
+ * Virtual Fabric ID type definition.
+ */
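For reference, the for_each_mapped_vp_idx() macro added above follows the usual find_next_bit iteration idiom: walk only the set bits of a bitmap, skipping bit 0 (the physical port). This is a simplified userspace sketch, not part of the patch; next_bit() is a hypothetical single-word stand-in for the kernel's find_next_bit(), which operates on arrays of unsigned long:

```c
#include <limits.h>
#include <stddef.h>

/* Hypothetical stand-in for find_next_bit(): returns the index of the
 * next set bit at or after 'start', or 'size' if there is none.  The
 * real helper scans an unsigned long array; one word is enough here. */
static size_t next_bit(unsigned long map, size_t size, size_t start)
{
    for (size_t i = start; i < size; i++)
        if (map & (1UL << i))
            return i;
    return size;
}

/* Mirrors the for_each_mapped_vp_idx() shape: start the scan at bit 1
 * so index 0 (the physical port) is never visited, and re-scan from
 * idx + 1 on each step. */
#define for_each_set_idx(map, max, idx)                 \
    for ((idx) = next_bit((map), (max) + 1, 1);         \
         (idx) <= (max);                                \
         (idx) = next_bit((map), (max) + 1, (idx) + 1))

/* Count the mapped indices, the way the fabric-scan loop walks vports. */
static size_t count_mapped(unsigned long map, size_t max)
{
    size_t idx, n = 0;

    for_each_set_idx(map, max, idx)
        n++;
    return n;
}
```

The point of the macro form is that the loop body sees only valid, in-use indices, so callers such as the fabric-device scan no longer need to range-check against MAX_MULTI_ID_FABRIC themselves.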
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 09cb2a9..ba35fc2 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -65,33 +65,25 @@ extern int ql2xextended_error_logging;
+ extern int ql2xqfullrampup;
+ extern int num_hosts;
- fault:
-@@ -882,6 +1096,7 @@ fault:
- sc->result = (DID_NO_CONNECT << 16);
- scsi_set_resid(sc, scsi_bufflen(sc));
- sc->scsi_done(sc);
-+ spin_lock(host->host_lock);
- return 0;
- }
- EXPORT_SYMBOL_GPL(iscsi_queuecommand);
-@@ -895,72 +1110,15 @@ int iscsi_change_queue_depth(struct scsi_device *sdev, int depth)
- }
- EXPORT_SYMBOL_GPL(iscsi_change_queue_depth);
++extern int qla2x00_loop_reset(scsi_qla_host_t *);
++
+ /*
+ * Global Functions in qla_mid.c source file.
+ */
+-extern struct scsi_host_template qla2x00_driver_template;
+ extern struct scsi_host_template qla24xx_driver_template;
+ extern struct scsi_transport_template *qla2xxx_transport_vport_template;
+-extern uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
+ extern void qla2x00_timer(scsi_qla_host_t *);
+ extern void qla2x00_start_timer(scsi_qla_host_t *, void *, unsigned long);
+-extern void qla2x00_stop_timer(scsi_qla_host_t *);
+-extern uint32_t qla24xx_allocate_vp_id(scsi_qla_host_t *);
+ extern void qla24xx_deallocate_vp_id(scsi_qla_host_t *);
+ extern int qla24xx_disable_vp (scsi_qla_host_t *);
+ extern int qla24xx_enable_vp (scsi_qla_host_t *);
+-extern void qla2x00_mem_free(scsi_qla_host_t *);
+ extern int qla24xx_control_vp(scsi_qla_host_t *, int );
+ extern int qla24xx_modify_vp_config(scsi_qla_host_t *);
+ extern int qla2x00_send_change_request(scsi_qla_host_t *, uint16_t, uint16_t);
+ extern void qla2x00_vp_stop_timer(scsi_qla_host_t *);
+ extern int qla24xx_configure_vhba (scsi_qla_host_t *);
+-extern int qla24xx_get_vp_entry(scsi_qla_host_t *, uint16_t, int);
+-extern int qla24xx_get_vp_database(scsi_qla_host_t *, uint16_t);
+-extern int qla2x00_do_dpc_vp(scsi_qla_host_t *);
+ extern void qla24xx_report_id_acquisition(scsi_qla_host_t *,
+ struct vp_rpt_id_entry_24xx *);
+-extern scsi_qla_host_t * qla24xx_find_vhost_by_name(scsi_qla_host_t *,
+- uint8_t *);
+ extern void qla2x00_do_dpc_all_vps(scsi_qla_host_t *);
+ extern int qla24xx_vport_create_req_sanity_check(struct fc_vport *);
+ extern scsi_qla_host_t * qla24xx_create_vhost(struct fc_vport *);
+@@ -103,8 +95,6 @@ extern char *qla2x00_get_fw_version_str(struct scsi_qla_host *, char *);
+ extern void qla2x00_mark_device_lost(scsi_qla_host_t *, fc_port_t *, int, int);
+ extern void qla2x00_mark_all_devices_lost(scsi_qla_host_t *, int);
--static struct iscsi_mgmt_task *
--__iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
-- char *data, uint32_t data_size)
--{
-- struct iscsi_session *session = conn->session;
-- struct iscsi_mgmt_task *mtask;
--
-- if (session->state == ISCSI_STATE_TERMINATE)
-- return NULL;
--
-- if (hdr->opcode == (ISCSI_OP_LOGIN | ISCSI_OP_IMMEDIATE) ||
-- hdr->opcode == (ISCSI_OP_TEXT | ISCSI_OP_IMMEDIATE))
-- /*
-- * Login and Text are sent serially, in
-- * request-followed-by-response sequence.
-- * Same mtask can be used. Same ITT must be used.
-- * Note that login_mtask is preallocated at conn_create().
-- */
-- mtask = conn->login_mtask;
-- else {
-- BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE);
-- BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
--
-- if (!__kfifo_get(session->mgmtpool.queue,
-- (void*)&mtask, sizeof(void*)))
-- return NULL;
-- }
--
-- if (data_size) {
-- memcpy(mtask->data, data, data_size);
-- mtask->data_count = data_size;
-- } else
-- mtask->data_count = 0;
--
-- INIT_LIST_HEAD(&mtask->running);
-- memcpy(mtask->hdr, hdr, sizeof(struct iscsi_hdr));
-- __kfifo_put(conn->mgmtqueue, (void*)&mtask, sizeof(void*));
-- return mtask;
--}
--
--int iscsi_conn_send_pdu(struct iscsi_cls_conn *cls_conn, struct iscsi_hdr *hdr,
-- char *data, uint32_t data_size)
--{
-- struct iscsi_conn *conn = cls_conn->dd_data;
-- struct iscsi_session *session = conn->session;
-- int err = 0;
--
-- spin_lock_bh(&session->lock);
-- if (!__iscsi_conn_send_pdu(conn, hdr, data, data_size))
-- err = -EPERM;
-- spin_unlock_bh(&session->lock);
-- scsi_queue_work(session->host, &conn->xmitwork);
-- return err;
--}
--EXPORT_SYMBOL_GPL(iscsi_conn_send_pdu);
+-extern int qla2x00_down_timeout(struct semaphore *, unsigned long);
-
- void iscsi_session_recovery_timedout(struct iscsi_cls_session *cls_session)
- {
- struct iscsi_session *session = class_to_transport_session(cls_session);
-- struct iscsi_conn *conn = session->leadconn;
+ extern struct fw_blob *qla2x00_request_firmware(scsi_qla_host_t *);
- spin_lock_bh(&session->lock);
- if (session->state != ISCSI_STATE_LOGGED_IN) {
- session->state = ISCSI_STATE_RECOVERY_FAILED;
-- if (conn)
-- wake_up(&conn->ehwait);
-+ if (session->leadconn)
-+ wake_up(&session->leadconn->ehwait);
- }
- spin_unlock_bh(&session->lock);
- }
-@@ -971,30 +1129,25 @@ int iscsi_eh_host_reset(struct scsi_cmnd *sc)
- struct Scsi_Host *host = sc->device->host;
- struct iscsi_session *session = iscsi_hostdata(host->hostdata);
- struct iscsi_conn *conn = session->leadconn;
-- int fail_session = 0;
+ extern int qla2x00_wait_for_hba_online(scsi_qla_host_t *);
+@@ -113,7 +103,6 @@ extern void qla2xxx_wake_dpc(scsi_qla_host_t *);
+ extern void qla2x00_alert_all_vps(scsi_qla_host_t *, uint16_t *);
+ extern void qla2x00_async_event(scsi_qla_host_t *, uint16_t *);
+ extern void qla2x00_vp_abort_isp(scsi_qla_host_t *);
+-extern int qla24xx_vport_delete(struct fc_vport *);
-+ mutex_lock(&session->eh_mutex);
- spin_lock_bh(&session->lock);
- if (session->state == ISCSI_STATE_TERMINATE) {
- failed:
- debug_scsi("failing host reset: session terminated "
- "[CID %d age %d]\n", conn->id, session->age);
- spin_unlock_bh(&session->lock);
-+ mutex_unlock(&session->eh_mutex);
- return FAILED;
- }
+ /*
+ * Global Function Prototypes in qla_iocb.c source file.
+@@ -222,21 +211,16 @@ extern int
+ qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map);
-- if (sc->SCp.phase == session->age) {
-- debug_scsi("failing connection CID %d due to SCSI host reset\n",
-- conn->id);
-- fail_session = 1;
-- }
- spin_unlock_bh(&session->lock);
--
-+ mutex_unlock(&session->eh_mutex);
- /*
- * we drop the lock here but the leadconn cannot be destroyed while
- * we are in the scsi eh
- */
-- if (fail_session)
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+ extern int
+-qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, link_stat_t *,
+- uint16_t *);
++qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, struct link_statistics *,
++ dma_addr_t);
- debug_scsi("iscsi_eh_host_reset wait for relogin\n");
- wait_event_interruptible(conn->ehwait,
-@@ -1004,73 +1157,56 @@ failed:
- if (signal_pending(current))
- flush_signals(current);
+ extern int
+-qla24xx_get_isp_stats(scsi_qla_host_t *, uint32_t *, uint32_t, uint16_t *);
++qla24xx_get_isp_stats(scsi_qla_host_t *, struct link_statistics *,
++ dma_addr_t);
-+ mutex_lock(&session->eh_mutex);
- spin_lock_bh(&session->lock);
- if (session->state == ISCSI_STATE_LOGGED_IN)
- printk(KERN_INFO "iscsi: host reset succeeded\n");
- else
- goto failed;
- spin_unlock_bh(&session->lock);
--
-+ mutex_unlock(&session->eh_mutex);
- return SUCCESS;
- }
- EXPORT_SYMBOL_GPL(iscsi_eh_host_reset);
+ extern int qla24xx_abort_command(scsi_qla_host_t *, srb_t *);
+ extern int qla24xx_abort_target(fc_port_t *);
--static void iscsi_tmabort_timedout(unsigned long data)
-+static void iscsi_tmf_timedout(unsigned long data)
- {
-- struct iscsi_cmd_task *ctask = (struct iscsi_cmd_task *)data;
-- struct iscsi_conn *conn = ctask->conn;
-+ struct iscsi_conn *conn = (struct iscsi_conn *)data;
- struct iscsi_session *session = conn->session;
+-extern int qla2x00_system_error(scsi_qla_host_t *);
+-
+-extern int
+-qla2x00_get_serdes_params(scsi_qla_host_t *, uint16_t *, uint16_t *,
+- uint16_t *);
+-
+ extern int
+ qla2x00_set_serdes_params(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t);
- spin_lock(&session->lock);
-- if (conn->tmabort_state == TMABORT_INITIAL) {
-- conn->tmabort_state = TMABORT_TIMEDOUT;
-- debug_scsi("tmabort timedout [sc %p itt 0x%x]\n",
-- ctask->sc, ctask->itt);
-+ if (conn->tmf_state == TMF_QUEUED) {
-+ conn->tmf_state = TMF_TIMEDOUT;
-+ debug_scsi("tmf timedout\n");
- /* unblock eh_abort() */
- wake_up(&conn->ehwait);
- }
- spin_unlock(&session->lock);
- }
+@@ -244,13 +228,19 @@ extern int
+ qla2x00_stop_firmware(scsi_qla_host_t *);
--static int iscsi_exec_abort_task(struct scsi_cmnd *sc,
-- struct iscsi_cmd_task *ctask)
-+static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
-+ struct iscsi_tm *hdr, int age,
-+ int timeout)
- {
-- struct iscsi_conn *conn = ctask->conn;
- struct iscsi_session *session = conn->session;
-- struct iscsi_tm *hdr = &conn->tmhdr;
--
-- /*
-- * ctask timed out but session is OK requests must be serialized.
-- */
-- memset(hdr, 0, sizeof(struct iscsi_tm));
-- hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
-- hdr->flags = ISCSI_TM_FUNC_ABORT_TASK;
-- hdr->flags |= ISCSI_FLAG_CMD_FINAL;
-- memcpy(hdr->lun, ctask->hdr->lun, sizeof(hdr->lun));
-- hdr->rtt = ctask->hdr->itt;
-- hdr->refcmdsn = ctask->hdr->cmdsn;
-+ struct iscsi_mgmt_task *mtask;
+ extern int
+-qla2x00_trace_control(scsi_qla_host_t *, uint16_t, dma_addr_t, uint16_t);
++qla2x00_enable_eft_trace(scsi_qla_host_t *, dma_addr_t, uint16_t);
++extern int
++qla2x00_disable_eft_trace(scsi_qla_host_t *);
-- ctask->mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)hdr,
-- NULL, 0);
-- if (!ctask->mtask) {
-+ mtask = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)hdr,
-+ NULL, 0);
-+ if (!mtask) {
- spin_unlock_bh(&session->lock);
- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- spin_lock_bh(&session->lock)
-- debug_scsi("abort sent failure [itt 0x%x]\n", ctask->itt);
-+ spin_lock_bh(&session->lock);
-+ debug_scsi("tmf exec failure\n");
- return -EPERM;
- }
-- ctask->state = ISCSI_TASK_ABORTING;
-+ conn->tmfcmd_pdus_cnt++;
-+ conn->tmf_timer.expires = timeout * HZ + jiffies;
-+ conn->tmf_timer.function = iscsi_tmf_timedout;
-+ conn->tmf_timer.data = (unsigned long)conn;
-+ add_timer(&conn->tmf_timer);
-+ debug_scsi("tmf set timeout\n");
+ extern int
+-qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t);
++qla2x00_enable_fce_trace(scsi_qla_host_t *, dma_addr_t, uint16_t , uint16_t *,
++ uint32_t *);
-- debug_scsi("abort sent [itt 0x%x]\n", ctask->itt);
--
-- if (conn->tmabort_state == TMABORT_INITIAL) {
-- conn->tmfcmd_pdus_cnt++;
-- conn->tmabort_timer.expires = 20*HZ + jiffies;
-- conn->tmabort_timer.function = iscsi_tmabort_timedout;
-- conn->tmabort_timer.data = (unsigned long)ctask;
-- add_timer(&conn->tmabort_timer);
-- debug_scsi("abort set timeout [itt 0x%x]\n", ctask->itt);
-- }
- spin_unlock_bh(&session->lock);
- mutex_unlock(&session->eh_mutex);
- scsi_queue_work(session->host, &conn->xmitwork);
-@@ -1078,113 +1214,197 @@ static int iscsi_exec_abort_task(struct scsi_cmnd *sc,
- /*
- * block eh thread until:
- *
-- * 1) abort response
-- * 2) abort timeout
-+ * 1) tmf response
-+ * 2) tmf timeout
- * 3) session is terminated or restarted or userspace has
- * given up on recovery
- */
-- wait_event_interruptible(conn->ehwait,
-- sc->SCp.phase != session->age ||
-+ wait_event_interruptible(conn->ehwait, age != session->age ||
- session->state != ISCSI_STATE_LOGGED_IN ||
-- conn->tmabort_state != TMABORT_INITIAL);
-+ conn->tmf_state != TMF_QUEUED);
- if (signal_pending(current))
- flush_signals(current);
-- del_timer_sync(&conn->tmabort_timer);
-+ del_timer_sync(&conn->tmf_timer);
+ extern int
+-qla2x00_get_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t *, uint16_t *);
++qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *);
+
- mutex_lock(&session->eh_mutex);
- spin_lock_bh(&session->lock);
-+ /* if the session drops it will clean up the mtask */
-+ if (age != session->age ||
-+ session->state != ISCSI_STATE_LOGGED_IN)
-+ return -ENOTCONN;
- return 0;
- }
++extern int
++qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t);
+ extern int
+ qla2x00_set_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t *);
+@@ -270,11 +260,7 @@ extern void qla2x00_free_irqs(scsi_qla_host_t *);
/*
-- * session lock must be held
-+ * Fail commands. session lock held and recv side suspended and xmit
-+ * thread flushed
+ * Global Function Prototypes in qla_sup.c source file.
*/
--static struct iscsi_mgmt_task *
--iscsi_remove_mgmt_task(struct kfifo *fifo, uint32_t itt)
-+static void fail_all_commands(struct iscsi_conn *conn, unsigned lun)
- {
-- int i, nr_tasks = __kfifo_len(fifo) / sizeof(void*);
-- struct iscsi_mgmt_task *task;
-+ struct iscsi_cmd_task *ctask, *tmp;
-
-- debug_scsi("searching %d tasks\n", nr_tasks);
-+ if (conn->ctask && (conn->ctask->sc->device->lun == lun || lun == -1))
-+ conn->ctask = NULL;
-
-- for (i = 0; i < nr_tasks; i++) {
-- __kfifo_get(fifo, (void*)&task, sizeof(void*));
-- debug_scsi("check task %u\n", task->itt);
-+ /* flush pending */
-+ list_for_each_entry_safe(ctask, tmp, &conn->xmitqueue, running) {
-+ if (lun == ctask->sc->device->lun || lun == -1) {
-+ debug_scsi("failing pending sc %p itt 0x%x\n",
-+ ctask->sc, ctask->itt);
-+ fail_command(conn, ctask, DID_BUS_BUSY << 16);
-+ }
-+ }
-
-- if (task->itt == itt) {
-- debug_scsi("matched task\n");
-- return task;
-+ list_for_each_entry_safe(ctask, tmp, &conn->requeue, running) {
-+ if (lun == ctask->sc->device->lun || lun == -1) {
-+ debug_scsi("failing requeued sc %p itt 0x%x\n",
-+ ctask->sc, ctask->itt);
-+ fail_command(conn, ctask, DID_BUS_BUSY << 16);
- }
-+ }
-
-- __kfifo_put(fifo, (void*)&task, sizeof(void*));
-+ /* fail all other running */
-+ list_for_each_entry_safe(ctask, tmp, &conn->run_list, running) {
-+ if (lun == ctask->sc->device->lun || lun == -1) {
-+ debug_scsi("failing in progress sc %p itt 0x%x\n",
-+ ctask->sc, ctask->itt);
-+ fail_command(conn, ctask, DID_BUS_BUSY << 16);
-+ }
- }
-- return NULL;
- }
-
--static int iscsi_ctask_mtask_cleanup(struct iscsi_cmd_task *ctask)
-+static void iscsi_suspend_tx(struct iscsi_conn *conn)
- {
-- struct iscsi_conn *conn = ctask->conn;
-- struct iscsi_session *session = conn->session;
--
-- if (!ctask->mtask)
-- return -EINVAL;
-+ set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
-+ scsi_flush_work(conn->session->host);
-+}
-
-- if (!iscsi_remove_mgmt_task(conn->mgmtqueue, ctask->mtask->itt))
-- list_del(&ctask->mtask->running);
-- __kfifo_put(session->mgmtpool.queue, (void*)&ctask->mtask,
-- sizeof(void*));
-- ctask->mtask = NULL;
-- return 0;
-+static void iscsi_start_tx(struct iscsi_conn *conn)
-+{
-+ clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
-+ scsi_queue_work(conn->session->host, &conn->xmitwork);
- }
+-extern void qla2x00_lock_nvram_access(scsi_qla_host_t *);
+-extern void qla2x00_unlock_nvram_access(scsi_qla_host_t *);
+ extern void qla2x00_release_nvram_protection(scsi_qla_host_t *);
+-extern uint16_t qla2x00_get_nvram_word(scsi_qla_host_t *, uint32_t);
+-extern void qla2x00_write_nvram_word(scsi_qla_host_t *, uint32_t, uint16_t);
+ extern uint32_t *qla24xx_read_flash_data(scsi_qla_host_t *, uint32_t *,
+ uint32_t, uint32_t);
+ extern uint8_t *qla2x00_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
+@@ -321,7 +307,6 @@ extern void qla25xx_fw_dump(scsi_qla_host_t *, int);
+ extern void qla2x00_dump_regs(scsi_qla_host_t *);
+ extern void qla2x00_dump_buffer(uint8_t *, uint32_t);
+ extern void qla2x00_print_scsi_cmd(struct scsi_cmnd *);
+-extern void qla2x00_dump_pkt(void *);
--/*
-- * session lock must be held
-- */
--static void fail_command(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
-- int err)
-+static enum scsi_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *scmd)
+ /*
+ * Global Function Prototypes in qla_gs.c source file.
+@@ -356,4 +341,10 @@ extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
+ extern void qla2x00_init_host_attr(scsi_qla_host_t *);
+ extern void qla2x00_alloc_sysfs_attr(scsi_qla_host_t *);
+ extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
++
++/*
++ * Global Function Prototypes in qla_dfs.c source file.
++ */
++extern int qla2x00_dfs_setup(scsi_qla_host_t *);
++extern int qla2x00_dfs_remove(scsi_qla_host_t *);
+ #endif /* _QLA_GBL_H */
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 191dafd..d0633ca 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -732,9 +732,9 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
{
-- struct scsi_cmnd *sc;
-+ struct iscsi_cls_session *cls_session;
-+ struct iscsi_session *session;
-+ struct iscsi_conn *conn;
-+ enum scsi_eh_timer_return rc = EH_NOT_HANDLED;
-
-- sc = ctask->sc;
-- if (!sc)
-- return;
-+ cls_session = starget_to_session(scsi_target(scmd->device));
-+ session = class_to_transport_session(cls_session);
+ int rval;
+ uint32_t dump_size, fixed_size, mem_size, req_q_size, rsp_q_size,
+- eft_size;
+- dma_addr_t eft_dma;
+- void *eft;
++ eft_size, fce_size;
++ dma_addr_t tc_dma;
++ void *tc;
-- if (ctask->state == ISCSI_TASK_PENDING)
-+ debug_scsi("scsi cmd %p timedout\n", scmd);
-+
-+ spin_lock(&session->lock);
-+ if (session->state != ISCSI_STATE_LOGGED_IN) {
- /*
-- * cmd never made it to the xmit thread, so we should not count
-- * the cmd in the sequencing
-+ * We are probably in the middle of iscsi recovery so let
-+ * that complete and handle the error.
- */
-- conn->session->queued_cmdsn--;
-- else
-- conn->session->tt->cleanup_cmd_task(conn, ctask);
-- iscsi_ctask_mtask_cleanup(ctask);
-+ rc = EH_RESET_TIMER;
-+ goto done;
-+ }
+ if (ha->fw_dump) {
+ qla_printk(KERN_WARNING, ha,
+@@ -743,7 +743,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
+ }
-- sc->result = err;
-- scsi_set_resid(sc, scsi_bufflen(sc));
-- if (conn->ctask == ctask)
-- conn->ctask = NULL;
-- /* release ref from queuecommand */
-- __iscsi_put_ctask(ctask);
-+ conn = session->leadconn;
-+ if (!conn) {
-+ /* In the middle of shutting down */
-+ rc = EH_RESET_TIMER;
-+ goto done;
-+ }
-+
-+ if (!conn->recv_timeout && !conn->ping_timeout)
-+ goto done;
-+ /*
-+ * if the ping timedout then we are in the middle of cleaning up
-+ * and can let the iscsi eh handle it
-+ */
-+ if (time_before_eq(conn->last_recv + (conn->recv_timeout * HZ) +
-+ (conn->ping_timeout * HZ), jiffies))
-+ rc = EH_RESET_TIMER;
-+ /*
-+ * if we are about to check the transport then give the command
-+ * more time
-+ */
-+ if (time_before_eq(conn->last_recv + (conn->recv_timeout * HZ),
-+ jiffies))
-+ rc = EH_RESET_TIMER;
-+ /* if in the middle of checking the transport then give us more time */
-+ if (conn->ping_mtask)
-+ rc = EH_RESET_TIMER;
-+done:
-+ spin_unlock(&session->lock);
-+ debug_scsi("return %s\n", rc == EH_RESET_TIMER ? "timer reset" : "nh");
-+ return rc;
- }
+ ha->fw_dumped = 0;
+- fixed_size = mem_size = eft_size = 0;
++ fixed_size = mem_size = eft_size = fce_size = 0;
+ if (IS_QLA2100(ha) || IS_QLA2200(ha)) {
+ fixed_size = sizeof(struct qla2100_fw_dump);
+ } else if (IS_QLA23XX(ha)) {
+@@ -758,21 +758,21 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
+ sizeof(uint32_t);
--static void iscsi_suspend_tx(struct iscsi_conn *conn)
-+static void iscsi_check_transport_timeouts(unsigned long data)
- {
-- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
-- scsi_flush_work(conn->session->host);
-+ struct iscsi_conn *conn = (struct iscsi_conn *)data;
-+ struct iscsi_session *session = conn->session;
-+ unsigned long timeout, next_timeout = 0, last_recv;
-+
-+ spin_lock(&session->lock);
-+ if (session->state != ISCSI_STATE_LOGGED_IN)
-+ goto done;
+ /* Allocate memory for Extended Trace Buffer. */
+- eft = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &eft_dma,
++ tc = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &tc_dma,
+ GFP_KERNEL);
+- if (!eft) {
++ if (!tc) {
+ qla_printk(KERN_WARNING, ha, "Unable to allocate "
+ "(%d KB) for EFT.\n", EFT_SIZE / 1024);
+ goto cont_alloc;
+ }
+
+- rval = qla2x00_trace_control(ha, TC_ENABLE, eft_dma,
+- EFT_NUM_BUFFERS);
++ memset(tc, 0, EFT_SIZE);
++ rval = qla2x00_enable_eft_trace(ha, tc_dma, EFT_NUM_BUFFERS);
+ if (rval) {
+ qla_printk(KERN_WARNING, ha, "Unable to initialize "
+ "EFT (%d).\n", rval);
+- dma_free_coherent(&ha->pdev->dev, EFT_SIZE, eft,
+- eft_dma);
++ dma_free_coherent(&ha->pdev->dev, EFT_SIZE, tc,
++ tc_dma);
+ goto cont_alloc;
+ }
+
+@@ -780,9 +780,40 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
+ EFT_SIZE / 1024);
+
+ eft_size = EFT_SIZE;
+- memset(eft, 0, eft_size);
+- ha->eft_dma = eft_dma;
+- ha->eft = eft;
++ ha->eft_dma = tc_dma;
++ ha->eft = tc;
+
-+ timeout = conn->recv_timeout;
-+ if (!timeout)
-+ goto done;
++ /* Allocate memory for Fibre Channel Event Buffer. */
++ if (!IS_QLA25XX(ha))
++ goto cont_alloc;
+
-+ timeout *= HZ;
-+ last_recv = conn->last_recv;
-+ if (time_before_eq(last_recv + timeout + (conn->ping_timeout * HZ),
-+ jiffies)) {
-+ printk(KERN_ERR "ping timeout of %d secs expired, "
-+ "last rx %lu, last ping %lu, now %lu\n",
-+ conn->ping_timeout, last_recv,
-+ conn->last_ping, jiffies);
-+ spin_unlock(&session->lock);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ return;
-+ }
++ tc = dma_alloc_coherent(&ha->pdev->dev, FCE_SIZE, &tc_dma,
++ GFP_KERNEL);
++ if (!tc) {
++ qla_printk(KERN_WARNING, ha, "Unable to allocate "
++ "(%d KB) for FCE.\n", FCE_SIZE / 1024);
++ goto cont_alloc;
++ }
+
-+ if (time_before_eq(last_recv + timeout, jiffies)) {
-+ if (time_before_eq(conn->last_ping, last_recv)) {
-+ /* send a ping to try to provoke some traffic */
-+ debug_scsi("Sending nopout as ping on conn %p\n", conn);
-+ iscsi_send_nopout(conn, NULL);
++ memset(tc, 0, FCE_SIZE);
++ rval = qla2x00_enable_fce_trace(ha, tc_dma, FCE_NUM_BUFFERS,
++ ha->fce_mb, &ha->fce_bufs);
++ if (rval) {
++ qla_printk(KERN_WARNING, ha, "Unable to initialize "
++ "FCE (%d).\n", rval);
++ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, tc,
++ tc_dma);
++ ha->flags.fce_enabled = 0;
++ goto cont_alloc;
+ }
-+ next_timeout = last_recv + timeout + (conn->ping_timeout * HZ);
-+ } else {
-+ next_timeout = last_recv + timeout;
-+ }
+
-+ if (next_timeout) {
-+ debug_scsi("Setting next tmo %lu\n", next_timeout);
-+ mod_timer(&conn->transport_timer, next_timeout);
-+ }
-+done:
-+ spin_unlock(&session->lock);
- }
++ qla_printk(KERN_INFO, ha, "Allocated (%d KB) for FCE...\n",
++ FCE_SIZE / 1024);
++
++ fce_size = sizeof(struct qla2xxx_fce_chain) + EFT_SIZE;
++ ha->flags.fce_enabled = 1;
++ ha->fce_dma = tc_dma;
++ ha->fce = tc;
+ }
+ cont_alloc:
+ req_q_size = ha->request_q_length * sizeof(request_t);
+@@ -790,7 +821,7 @@ cont_alloc:
--static void iscsi_start_tx(struct iscsi_conn *conn)
-+static void iscsi_prep_abort_task_pdu(struct iscsi_cmd_task *ctask,
-+ struct iscsi_tm *hdr)
- {
-- clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
-- scsi_queue_work(conn->session->host, &conn->xmitwork);
-+ memset(hdr, 0, sizeof(*hdr));
-+ hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
-+ hdr->flags = ISCSI_TM_FUNC_ABORT_TASK & ISCSI_FLAG_TM_FUNC_MASK;
-+ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
-+ memcpy(hdr->lun, ctask->hdr->lun, sizeof(hdr->lun));
-+ hdr->rtt = ctask->hdr->itt;
-+ hdr->refcmdsn = ctask->hdr->cmdsn;
- }
+ dump_size = offsetof(struct qla2xxx_fw_dump, isp);
+ dump_size += fixed_size + mem_size + req_q_size + rsp_q_size +
+- eft_size;
++ eft_size + fce_size;
- int iscsi_eh_abort(struct scsi_cmnd *sc)
- {
- struct Scsi_Host *host = sc->device->host;
- struct iscsi_session *session = iscsi_hostdata(host->hostdata);
-- struct iscsi_cmd_task *ctask;
- struct iscsi_conn *conn;
-- int rc;
-+ struct iscsi_cmd_task *ctask;
-+ struct iscsi_tm *hdr;
-+ int rc, age;
+ ha->fw_dump = vmalloc(dump_size);
+ if (!ha->fw_dump) {
+@@ -922,9 +953,9 @@ qla2x00_setup_chip(scsi_qla_host_t *ha)
+ ha->flags.npiv_supported = 1;
+ if ((!ha->max_npiv_vports) ||
+ ((ha->max_npiv_vports + 1) %
+- MAX_MULTI_ID_FABRIC))
++ MIN_MULTI_ID_FABRIC))
+ ha->max_npiv_vports =
+- MAX_NUM_VPORT_FABRIC;
++ MIN_MULTI_ID_FABRIC - 1;
+ }
- mutex_lock(&session->eh_mutex);
- spin_lock_bh(&session->lock);
-@@ -1199,19 +1419,23 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
- return SUCCESS;
- }
+ if (ql2xallocfwdump)
+@@ -1162,7 +1193,10 @@ qla2x00_init_rings(scsi_qla_host_t *ha)
-- ctask = (struct iscsi_cmd_task *)sc->SCp.ptr;
-- conn = ctask->conn;
--
-- conn->eh_abort_cnt++;
-- debug_scsi("aborting [sc %p itt 0x%x]\n", sc, ctask->itt);
--
- /*
- * If we are not logged in or we have started a new session
- * then let the host reset code handle this
- */
-- if (session->state != ISCSI_STATE_LOGGED_IN ||
-- sc->SCp.phase != session->age)
-- goto failed;
-+ if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN ||
-+ sc->SCp.phase != session->age) {
-+ spin_unlock_bh(&session->lock);
-+ mutex_unlock(&session->eh_mutex);
-+ return FAILED;
-+ }
-+
-+ conn = session->leadconn;
-+ conn->eh_abort_cnt++;
-+ age = session->age;
+ DEBUG(printk("scsi(%ld): Issue init firmware.\n", ha->host_no));
+
+- mid_init_cb->count = ha->max_npiv_vports;
++ if (ha->flags.npiv_supported)
++ mid_init_cb->count = cpu_to_le16(ha->max_npiv_vports);
+
-+ ctask = (struct iscsi_cmd_task *)sc->SCp.ptr;
-+ debug_scsi("aborting [sc %p itt 0x%x]\n", sc, ctask->itt);
++ mid_init_cb->options = __constant_cpu_to_le16(BIT_1);
- /* ctask completed before time out */
- if (!ctask->sc) {
-@@ -1219,27 +1443,26 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
- goto success;
- }
+ rval = qla2x00_init_firmware(ha, ha->init_cb_size);
+ if (rval) {
+@@ -2566,14 +2600,7 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *ha, struct list_head *new_fcports)
-- /* what should we do here ? */
-- if (conn->ctask == ctask) {
-- printk(KERN_INFO "iscsi: sc %p itt 0x%x partially sent. "
-- "Failing abort\n", sc, ctask->itt);
-- goto failed;
-- }
+ /* Bypass virtual ports of the same host. */
+ if (pha->num_vhosts) {
+- vp_index = find_next_bit(
+- (unsigned long *)pha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, 1);
-
- if (ctask->state == ISCSI_TASK_PENDING) {
- fail_command(conn, ctask, DID_ABORT << 16);
- goto success;
- }
-
-- conn->tmabort_state = TMABORT_INITIAL;
-- rc = iscsi_exec_abort_task(sc, ctask);
-- if (rc || sc->SCp.phase != session->age ||
-- session->state != ISCSI_STATE_LOGGED_IN)
-+ /* only have one tmf outstanding at a time */
-+ if (conn->tmf_state != TMF_INITIAL)
-+ goto failed;
-+ conn->tmf_state = TMF_QUEUED;
+- for (;vp_index <= MAX_MULTI_ID_FABRIC;
+- vp_index = find_next_bit(
+- (unsigned long *)pha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, vp_index + 1)) {
++ for_each_mapped_vp_idx(pha, vp_index) {
+ empty_vp_index = 1;
+ found_vp = 0;
+ list_for_each_entry(vha, &pha->vp_list,
+@@ -2592,7 +2619,8 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *ha, struct list_head *new_fcports)
+ new_fcport->d_id.b24 == vha->d_id.b24)
+ break;
+ }
+- if (vp_index <= MAX_MULTI_ID_FABRIC)
+
-+ hdr = &conn->tmhdr;
-+ iscsi_prep_abort_task_pdu(ctask, hdr);
++ if (vp_index <= pha->max_npiv_vports)
+ continue;
+ }
+
+@@ -3245,7 +3273,7 @@ qla2x00_abort_isp(scsi_qla_host_t *ha)
+ clear_bit(ISP_ABORT_RETRY, &ha->dpc_flags);
+
+ if (ha->eft) {
+- rval = qla2x00_trace_control(ha, TC_ENABLE,
++ rval = qla2x00_enable_eft_trace(ha,
+ ha->eft_dma, EFT_NUM_BUFFERS);
+ if (rval) {
+ qla_printk(KERN_WARNING, ha,
+@@ -3253,6 +3281,21 @@ qla2x00_abort_isp(scsi_qla_host_t *ha)
+ "(%d).\n", rval);
+ }
+ }
+
-+ if (iscsi_exec_task_mgmt_fn(conn, hdr, age, session->abort_timeout)) {
-+ rc = FAILED;
- goto failed;
-- iscsi_ctask_mtask_cleanup(ctask);
-+ }
++ if (ha->fce) {
++ ha->flags.fce_enabled = 1;
++ memset(ha->fce, 0,
++ fce_calc_size(ha->fce_bufs));
++ rval = qla2x00_enable_fce_trace(ha,
++ ha->fce_dma, ha->fce_bufs, ha->fce_mb,
++ &ha->fce_bufs);
++ if (rval) {
++ qla_printk(KERN_WARNING, ha,
++ "Unable to reinitialize FCE "
++ "(%d).\n", rval);
++ ha->flags.fce_enabled = 0;
++ }
++ }
+ } else { /* failed the ISP abort */
+ ha->flags.online = 1;
+ if (test_bit(ISP_ABORT_RETRY, &ha->dpc_flags)) {
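The EFT and FCE setup added above both follow the same allocate/zero/enable-or-roll-back shape. This is a hedged userspace sketch of that pattern only, with malloc() standing in for dma_alloc_coherent() and enable_trace() standing in for the qla2x00_enable_eft_trace()/qla2x00_enable_fce_trace() mailbox calls; struct trace_buf and the should_fail knob are invented for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Invented container for the sketch; the driver keeps the equivalent
 * fields (eft/eft_dma, fce/fce_dma, flags.fce_enabled) on the ha. */
struct trace_buf {
    void *mem;
    size_t size;
    int enabled;
};

/* Stand-in for the firmware enable call; real code would issue a
 * mailbox command to the HBA and could fail. */
static int enable_trace(struct trace_buf *tb, int should_fail)
{
    (void)tb;
    return should_fail ? -1 : 0;
}

static int setup_trace(struct trace_buf *tb, size_t size, int should_fail)
{
    tb->mem = malloc(size);        /* stands in for dma_alloc_coherent() */
    if (!tb->mem)
        return -1;
    memset(tb->mem, 0, size);      /* buffer is zeroed before enabling */

    if (enable_trace(tb, should_fail)) {
        free(tb->mem);             /* roll back, as the patch does on
                                    * an enable failure */
        tb->mem = NULL;
        tb->enabled = 0;
        return -1;
    }
    tb->size = size;
    tb->enabled = 1;
    return 0;
}
```

On failure nothing is left half-configured: the buffer is freed and the enabled flag stays clear, which is why the patch can `goto cont_alloc` and size the firmware dump without the FCE chain.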
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 1104bd2..642a0c3 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -104,7 +104,7 @@ qla2100_intr_handler(int irq, void *dev_id)
+ if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
+ (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
+ set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
+- up(&ha->mbx_intr_sem);
++ complete(&ha->mbx_intr_comp);
+ }
-- switch (conn->tmabort_state) {
-- case TMABORT_SUCCESS:
-+ switch (conn->tmf_state) {
-+ case TMF_SUCCESS:
- spin_unlock_bh(&session->lock);
- iscsi_suspend_tx(conn);
- /*
-@@ -1248,22 +1471,26 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
- write_lock_bh(conn->recv_lock);
- spin_lock(&session->lock);
- fail_command(conn, ctask, DID_ABORT << 16);
-+ conn->tmf_state = TMF_INITIAL;
- spin_unlock(&session->lock);
- write_unlock_bh(conn->recv_lock);
- iscsi_start_tx(conn);
- goto success_unlocked;
-- case TMABORT_NOT_FOUND:
-- if (!ctask->sc) {
-+ case TMF_TIMEDOUT:
-+ spin_unlock_bh(&session->lock);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ goto failed_unlocked;
-+ case TMF_NOT_FOUND:
-+ if (!sc->SCp.ptr) {
-+ conn->tmf_state = TMF_INITIAL;
- /* ctask completed before tmf abort response */
- debug_scsi("sc completed while abort in progress\n");
- goto success;
- }
- /* fall through */
- default:
-- /* timedout or failed */
-- spin_unlock_bh(&session->lock);
-- iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-- goto failed_unlocked;
-+ conn->tmf_state = TMF_INITIAL;
-+ goto failed;
+ return (IRQ_HANDLED);
+@@ -216,7 +216,7 @@ qla2300_intr_handler(int irq, void *dev_id)
+ if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
+ (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
+ set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
+- up(&ha->mbx_intr_sem);
++ complete(&ha->mbx_intr_comp);
}
- success:
-@@ -1276,65 +1503,152 @@ success_unlocked:
- failed:
- spin_unlock_bh(&session->lock);
- failed_unlocked:
-- debug_scsi("abort failed [sc %lx itt 0x%x]\n", (long)sc, ctask->itt);
-+ debug_scsi("abort failed [sc %p itt 0x%x]\n", sc,
-+ ctask ? ctask->itt : 0);
- mutex_unlock(&session->eh_mutex);
- return FAILED;
+ return (IRQ_HANDLED);
+@@ -347,10 +347,6 @@ qla2x00_async_event(scsi_qla_host_t *ha, uint16_t *mb)
+ break;
+
+ case MBA_SYSTEM_ERR: /* System Error */
+- mb[1] = RD_MAILBOX_REG(ha, reg, 1);
+- mb[2] = RD_MAILBOX_REG(ha, reg, 2);
+- mb[3] = RD_MAILBOX_REG(ha, reg, 3);
+-
+ qla_printk(KERN_INFO, ha,
+ "ISP System Error - mbx1=%xh mbx2=%xh mbx3=%xh.\n",
+ mb[1], mb[2], mb[3]);
+@@ -579,12 +575,15 @@ qla2x00_async_event(scsi_qla_host_t *ha, uint16_t *mb)
+ /* Check if the Vport has issued a SCR */
+ if (ha->parent && test_bit(VP_SCR_NEEDED, &ha->vp_flags))
+ break;
++ /* Only handle SCNs for our Vport index. */
++ if (ha->flags.npiv_supported && ha->vp_idx != mb[3])
++ break;
+
+ DEBUG2(printk("scsi(%ld): Asynchronous RSCR UPDATE.\n",
+ ha->host_no));
+ DEBUG(printk(KERN_INFO
+- "scsi(%ld): RSCN database changed -- %04x %04x.\n",
+- ha->host_no, mb[1], mb[2]));
++ "scsi(%ld): RSCN database changed -- %04x %04x %04x.\n",
++ ha->host_no, mb[1], mb[2], mb[3]));
+
+ rscn_entry = (mb[1] << 16) | mb[2];
+ host_pid = (ha->d_id.b.domain << 16) | (ha->d_id.b.area << 8) |
+@@ -823,6 +822,35 @@ qla2x00_process_response_queue(struct scsi_qla_host *ha)
+ WRT_REG_WORD(ISP_RSP_Q_OUT(ha, reg), ha->rsp_ring_index);
}
- EXPORT_SYMBOL_GPL(iscsi_eh_abort);
-+static void iscsi_prep_lun_reset_pdu(struct scsi_cmnd *sc, struct iscsi_tm *hdr)
-+{
-+ memset(hdr, 0, sizeof(*hdr));
-+ hdr->opcode = ISCSI_OP_SCSI_TMFUNC | ISCSI_OP_IMMEDIATE;
-+ hdr->flags = ISCSI_TM_FUNC_LOGICAL_UNIT_RESET & ISCSI_FLAG_TM_FUNC_MASK;
-+ hdr->flags |= ISCSI_FLAG_CMD_FINAL;
-+ int_to_scsilun(sc->device->lun, (struct scsi_lun *)hdr->lun);
-+ hdr->rtt = RESERVED_ITT;
-+}
-+
-+int iscsi_eh_device_reset(struct scsi_cmnd *sc)
++static inline void
++qla2x00_handle_sense(srb_t *sp, uint8_t *sense_data, uint32_t sense_len)
+{
-+ struct Scsi_Host *host = sc->device->host;
-+ struct iscsi_session *session = iscsi_hostdata(host->hostdata);
-+ struct iscsi_conn *conn;
-+ struct iscsi_tm *hdr;
-+ int rc = FAILED;
-+
-+ debug_scsi("LU Reset [sc %p lun %u]\n", sc, sc->device->lun);
-+
-+ mutex_lock(&session->eh_mutex);
-+ spin_lock_bh(&session->lock);
-+ /*
-+ * Just check if we are not logged in. We cannot check for
-+ * the phase because the reset could come from a ioctl.
-+ */
-+ if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN)
-+ goto unlock;
-+ conn = session->leadconn;
-+
-+ /* only have one tmf outstanding at a time */
-+ if (conn->tmf_state != TMF_INITIAL)
-+ goto unlock;
-+ conn->tmf_state = TMF_QUEUED;
-+
-+ hdr = &conn->tmhdr;
-+ iscsi_prep_lun_reset_pdu(sc, hdr);
-+
-+ if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
-+ session->lu_reset_timeout)) {
-+ rc = FAILED;
-+ goto unlock;
-+ }
++ struct scsi_cmnd *cp = sp->cmd;
+
-+ switch (conn->tmf_state) {
-+ case TMF_SUCCESS:
-+ break;
-+ case TMF_TIMEDOUT:
-+ spin_unlock_bh(&session->lock);
-+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
-+ goto done;
-+ default:
-+ conn->tmf_state = TMF_INITIAL;
-+ goto unlock;
-+ }
++ if (sense_len >= SCSI_SENSE_BUFFERSIZE)
++ sense_len = SCSI_SENSE_BUFFERSIZE;
+
-+ rc = SUCCESS;
-+ spin_unlock_bh(&session->lock);
++ CMD_ACTUAL_SNSLEN(cp) = sense_len;
++ sp->request_sense_length = sense_len;
++ sp->request_sense_ptr = cp->sense_buffer;
++ if (sp->request_sense_length > 32)
++ sense_len = 32;
+
-+ iscsi_suspend_tx(conn);
-+ /* need to grab the recv lock then session lock */
-+ write_lock_bh(conn->recv_lock);
-+ spin_lock(&session->lock);
-+ fail_all_commands(conn, sc->device->lun);
-+ conn->tmf_state = TMF_INITIAL;
-+ spin_unlock(&session->lock);
-+ write_unlock_bh(conn->recv_lock);
++ memcpy(cp->sense_buffer, sense_data, sense_len);
+
-+ iscsi_start_tx(conn);
-+ goto done;
++ sp->request_sense_ptr += sense_len;
++ sp->request_sense_length -= sense_len;
++ if (sp->request_sense_length != 0)
++ sp->ha->status_srb = sp;
+
-+unlock:
-+ spin_unlock_bh(&session->lock);
-+done:
-+ debug_scsi("iscsi_eh_device_reset %s\n",
-+ rc == SUCCESS ? "SUCCESS" : "FAILED");
-+ mutex_unlock(&session->eh_mutex);
-+ return rc;
++ DEBUG5(printk("%s(): Check condition Sense data, scsi(%ld:%d:%d:%d) "
++ "cmd=%p pid=%ld\n", __func__, sp->ha->host_no, cp->device->channel,
++ cp->device->id, cp->device->lun, cp, cp->serial_number));
++ if (sense_len)
++ DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
++ CMD_ACTUAL_SNSLEN(cp)));
+}
-+EXPORT_SYMBOL_GPL(iscsi_eh_device_reset);
+
-+/*
-+ * Pre-allocate a pool of @max items of @item_size. By default, the pool
-+ * should be accessed via kfifo_{get,put} on q->queue.
-+ * Optionally, the caller can obtain the array of object pointers
-+ * by passing in a non-NULL @items pointer
-+ */
- int
--iscsi_pool_init(struct iscsi_queue *q, int max, void ***items, int item_size)
-+iscsi_pool_init(struct iscsi_pool *q, int max, void ***items, int item_size)
- {
-- int i;
-+ int i, num_arrays = 1;
+ /**
+ * qla2x00_status_entry() - Process a Status IOCB entry.
+ * @ha: SCSI driver HA context
+@@ -977,36 +1005,11 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
+ if (lscsi_status != SS_CHECK_CONDITION)
+ break;
-- *items = kmalloc(max * sizeof(void*), GFP_KERNEL);
-- if (*items == NULL)
-- return -ENOMEM;
-+ memset(q, 0, sizeof(*q));
+- /* Copy Sense Data into sense buffer. */
+- memset(cp->sense_buffer, 0, sizeof(cp->sense_buffer));
+-
++ memset(cp->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ if (!(scsi_status & SS_SENSE_LEN_VALID))
+ break;
- q->max = max;
-- q->pool = kmalloc(max * sizeof(void*), GFP_KERNEL);
-- if (q->pool == NULL) {
-- kfree(*items);
-- return -ENOMEM;
-- }
-+
-+ /* If the user passed an items pointer, he wants a copy of
-+ * the array. */
-+ if (items)
-+ num_arrays++;
-+ q->pool = kzalloc(num_arrays * max * sizeof(void*), GFP_KERNEL);
-+ if (q->pool == NULL)
-+ goto enomem;
+- if (sense_len >= sizeof(cp->sense_buffer))
+- sense_len = sizeof(cp->sense_buffer);
+-
+- CMD_ACTUAL_SNSLEN(cp) = sense_len;
+- sp->request_sense_length = sense_len;
+- sp->request_sense_ptr = cp->sense_buffer;
+-
+- if (sp->request_sense_length > 32)
+- sense_len = 32;
+-
+- memcpy(cp->sense_buffer, sense_data, sense_len);
+-
+- sp->request_sense_ptr += sense_len;
+- sp->request_sense_length -= sense_len;
+- if (sp->request_sense_length != 0)
+- ha->status_srb = sp;
+-
+- DEBUG5(printk("%s(): Check condition Sense data, "
+- "scsi(%ld:%d:%d:%d) cmd=%p pid=%ld\n", __func__,
+- ha->host_no, cp->device->channel, cp->device->id,
+- cp->device->lun, cp, cp->serial_number));
+- if (sense_len)
+- DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
+- CMD_ACTUAL_SNSLEN(cp)));
++ qla2x00_handle_sense(sp, sense_data, sense_len);
+ break;
- q->queue = kfifo_init((void*)q->pool, max * sizeof(void*),
- GFP_KERNEL, NULL);
-- if (q->queue == ERR_PTR(-ENOMEM)) {
-- kfree(q->pool);
-- kfree(*items);
-- return -ENOMEM;
-- }
-+ if (q->queue == ERR_PTR(-ENOMEM))
-+ goto enomem;
+ case CS_DATA_UNDERRUN:
+@@ -1061,34 +1064,11 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
+ if (lscsi_status != SS_CHECK_CONDITION)
+ break;
- for (i = 0; i < max; i++) {
-- q->pool[i] = kmalloc(item_size, GFP_KERNEL);
-+ q->pool[i] = kzalloc(item_size, GFP_KERNEL);
- if (q->pool[i] == NULL) {
-- int j;
+- /* Copy Sense Data into sense buffer */
+- memset(cp->sense_buffer, 0, sizeof(cp->sense_buffer));
-
-- for (j = 0; j < i; j++)
-- kfree(q->pool[j]);
++ memset(cp->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ if (!(scsi_status & SS_SENSE_LEN_VALID))
+ break;
+
+- if (sense_len >= sizeof(cp->sense_buffer))
+- sense_len = sizeof(cp->sense_buffer);
-
-- kfifo_free(q->queue);
-- kfree(q->pool);
-- kfree(*items);
-- return -ENOMEM;
-+ q->max = i;
-+ goto enomem;
- }
-- memset(q->pool[i], 0, item_size);
-- (*items)[i] = q->pool[i];
- __kfifo_put(q->queue, (void*)&q->pool[i], sizeof(void*));
- }
-+
-+ if (items) {
-+ *items = q->pool + max;
-+ memcpy(*items, q->pool, max * sizeof(void *));
-+ }
-+
- return 0;
-+
-+enomem:
-+ iscsi_pool_free(q);
-+ return -ENOMEM;
- }
- EXPORT_SYMBOL_GPL(iscsi_pool_init);
+- CMD_ACTUAL_SNSLEN(cp) = sense_len;
+- sp->request_sense_length = sense_len;
+- sp->request_sense_ptr = cp->sense_buffer;
+-
+- if (sp->request_sense_length > 32)
+- sense_len = 32;
+-
+- memcpy(cp->sense_buffer, sense_data, sense_len);
+-
+- sp->request_sense_ptr += sense_len;
+- sp->request_sense_length -= sense_len;
+- if (sp->request_sense_length != 0)
+- ha->status_srb = sp;
+-
+- DEBUG5(printk("%s(): Check condition Sense data, "
+- "scsi(%ld:%d:%d:%d) cmd=%p pid=%ld\n",
+- __func__, ha->host_no, cp->device->channel,
+- cp->device->id, cp->device->lun, cp,
+- cp->serial_number));
++ qla2x00_handle_sense(sp, sense_data, sense_len);
--void iscsi_pool_free(struct iscsi_queue *q, void **items)
-+void iscsi_pool_free(struct iscsi_pool *q)
- {
- int i;
+ /*
+ * In case of a Underrun condition, set both the lscsi
+@@ -1108,10 +1088,6 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
- for (i = 0; i < q->max; i++)
-- kfree(items[i]);
-- kfree(q->pool);
-- kfree(items);
-+ kfree(q->pool[i]);
-+ if (q->pool)
-+ kfree(q->pool);
- }
- EXPORT_SYMBOL_GPL(iscsi_pool_free);
+ cp->result = DID_ERROR << 16 | lscsi_status;
+ }
+-
+- if (sense_len)
+- DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
+- CMD_ACTUAL_SNSLEN(cp)));
+ } else {
+ /*
+ * If RISC reports underrun and target does not report
+@@ -1621,7 +1597,7 @@ qla24xx_intr_handler(int irq, void *dev_id)
+ if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
+ (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
+ set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
+- up(&ha->mbx_intr_sem);
++ complete(&ha->mbx_intr_comp);
+ }
-@@ -1387,7 +1701,7 @@ iscsi_session_setup(struct iscsi_transport *iscsit,
- qdepth = ISCSI_DEF_CMD_PER_LUN;
+ return IRQ_HANDLED;
+@@ -1758,7 +1734,7 @@ qla24xx_msix_default(int irq, void *dev_id)
+ if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
+ (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
+ set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
+- up(&ha->mbx_intr_sem);
++ complete(&ha->mbx_intr_comp);
}
-- if (cmds_max < 2 || (cmds_max & (cmds_max - 1)) ||
-+ if (!is_power_of_2(cmds_max) ||
- cmds_max >= ISCSI_MGMT_ITT_OFFSET) {
- if (cmds_max != 0)
- printk(KERN_ERR "iscsi: invalid can_queue of %d. "
-@@ -1411,12 +1725,16 @@ iscsi_session_setup(struct iscsi_transport *iscsit,
- shost->max_cmd_len = iscsit->max_cmd_len;
- shost->transportt = scsit;
- shost->transportt->create_work_queue = 1;
-+ shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out;
- *hostno = shost->host_no;
+ return IRQ_HANDLED;
+@@ -1853,6 +1829,18 @@ qla2x00_request_irqs(scsi_qla_host_t *ha)
+ goto skip_msix;
+ }
- session = iscsi_hostdata(shost->hostdata);
- memset(session, 0, sizeof(struct iscsi_session));
- session->host = shost;
- session->state = ISCSI_STATE_FREE;
-+ session->fast_abort = 1;
-+ session->lu_reset_timeout = 15;
-+ session->abort_timeout = 10;
- session->mgmtpool_max = ISCSI_MGMT_CMDS_MAX;
- session->cmds_max = cmds_max;
- session->queued_cmdsn = session->cmdsn = initial_cmdsn;
-@@ -1479,9 +1797,9 @@ module_put:
- cls_session_fail:
- scsi_remove_host(shost);
- add_host_fail:
-- iscsi_pool_free(&session->mgmtpool, (void**)session->mgmt_cmds);
-+ iscsi_pool_free(&session->mgmtpool);
- mgmtpool_alloc_fail:
-- iscsi_pool_free(&session->cmdpool, (void**)session->cmds);
-+ iscsi_pool_free(&session->cmdpool);
- cmdpool_alloc_fail:
- scsi_host_put(shost);
- return NULL;
-@@ -1501,11 +1819,11 @@ void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
- struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
- struct module *owner = cls_session->transport->owner;
++ if (ha->pdev->subsystem_vendor == PCI_VENDOR_ID_HP &&
++ (ha->pdev->subsystem_device == 0x7040 ||
++ ha->pdev->subsystem_device == 0x7041 ||
++ ha->pdev->subsystem_device == 0x1705)) {
++ DEBUG2(qla_printk(KERN_WARNING, ha,
++ "MSI-X: Unsupported ISP2432 SSVID/SSDID (0x%X, 0x%X).\n",
++ ha->pdev->subsystem_vendor,
++ ha->pdev->subsystem_device));
++
++ goto skip_msi;
++ }
++
+ ret = qla24xx_enable_msix(ha);
+ if (!ret) {
+ DEBUG2(qla_printk(KERN_INFO, ha,
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index ccd662a..0c10c0b 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -8,19 +8,6 @@
-- iscsi_unblock_session(cls_session);
-+ iscsi_remove_session(cls_session);
- scsi_remove_host(shost);
+ #include <linux/delay.h>
-- iscsi_pool_free(&session->mgmtpool, (void**)session->mgmt_cmds);
-- iscsi_pool_free(&session->cmdpool, (void**)session->cmds);
-+ iscsi_pool_free(&session->mgmtpool);
-+ iscsi_pool_free(&session->cmdpool);
+-static void
+-qla2x00_mbx_sem_timeout(unsigned long data)
+-{
+- struct semaphore *sem_ptr = (struct semaphore *)data;
+-
+- DEBUG11(printk("qla2x00_sem_timeout: entered.\n"));
+-
+- if (sem_ptr != NULL) {
+- up(sem_ptr);
+- }
+-
+- DEBUG11(printk("qla2x00_mbx_sem_timeout: exiting.\n"));
+-}
- kfree(session->password);
- kfree(session->password_in);
-@@ -1516,7 +1834,7 @@ void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
- kfree(session->hwaddress);
- kfree(session->initiatorname);
+ /*
+ * qla2x00_mailbox_command
+@@ -47,7 +34,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
+ int rval;
+ unsigned long flags = 0;
+ device_reg_t __iomem *reg;
+- struct timer_list tmp_intr_timer;
+ uint8_t abort_active;
+ uint8_t io_lock_on;
+ uint16_t command;
+@@ -72,7 +58,8 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
+ * non ISP abort time.
+ */
+ if (!abort_active) {
+- if (qla2x00_down_timeout(&ha->mbx_cmd_sem, mcp->tov * HZ)) {
++ if (!wait_for_completion_timeout(&ha->mbx_cmd_comp,
++ mcp->tov * HZ)) {
+ /* Timeout occurred. Return error. */
+ DEBUG2_3_11(printk("%s(%ld): cmd access timeout. "
+ "Exiting.\n", __func__, ha->host_no));
+@@ -135,22 +122,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
+ /* Wait for mbx cmd completion until timeout */
-- iscsi_destroy_session(cls_session);
-+ iscsi_free_session(cls_session);
- scsi_host_put(shost);
- module_put(owner);
- }
-@@ -1546,17 +1864,17 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
- conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
- conn->id = conn_idx;
- conn->exp_statsn = 0;
-- conn->tmabort_state = TMABORT_INITIAL;
-+ conn->tmf_state = TMF_INITIAL;
-+
-+ init_timer(&conn->transport_timer);
-+ conn->transport_timer.data = (unsigned long)conn;
-+ conn->transport_timer.function = iscsi_check_transport_timeouts;
-+
- INIT_LIST_HEAD(&conn->run_list);
- INIT_LIST_HEAD(&conn->mgmt_run_list);
-+ INIT_LIST_HEAD(&conn->mgmtqueue);
- INIT_LIST_HEAD(&conn->xmitqueue);
+ if (!abort_active && io_lock_on) {
+- /* sleep on completion semaphore */
+- DEBUG11(printk("%s(%ld): INTERRUPT MODE. Initializing timer.\n",
+- __func__, ha->host_no));
-
-- /* initialize general immediate & non-immediate PDU commands queue */
-- conn->mgmtqueue = kfifo_alloc(session->mgmtpool_max * sizeof(void*),
-- GFP_KERNEL, NULL);
-- if (conn->mgmtqueue == ERR_PTR(-ENOMEM))
-- goto mgmtqueue_alloc_fail;
+- init_timer(&tmp_intr_timer);
+- tmp_intr_timer.data = (unsigned long)&ha->mbx_intr_sem;
+- tmp_intr_timer.expires = jiffies + mcp->tov * HZ;
+- tmp_intr_timer.function =
+- (void (*)(unsigned long))qla2x00_mbx_sem_timeout;
-
-+ INIT_LIST_HEAD(&conn->requeue);
- INIT_WORK(&conn->xmitwork, iscsi_xmitworker);
+- DEBUG11(printk("%s(%ld): Adding timer.\n", __func__,
+- ha->host_no));
+- add_timer(&tmp_intr_timer);
+-
+- DEBUG11(printk("%s(%ld): going to unlock & sleep. "
+- "time=0x%lx.\n", __func__, ha->host_no, jiffies));
- /* allocate login_mtask used for the login/text sequences */
-@@ -1574,7 +1892,7 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, uint32_t conn_idx)
- goto login_mtask_data_alloc_fail;
- conn->login_mtask->data = conn->data = data;
+ set_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
-- init_timer(&conn->tmabort_timer);
-+ init_timer(&conn->tmf_timer);
- init_waitqueue_head(&conn->ehwait);
+@@ -160,17 +131,10 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
+ WRT_REG_WORD(®->isp.hccr, HCCR_SET_HOST_INT);
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
- return cls_conn;
-@@ -1583,8 +1901,6 @@ login_mtask_data_alloc_fail:
- __kfifo_put(session->mgmtpool.queue, (void*)&conn->login_mtask,
- sizeof(void*));
- login_mtask_alloc_fail:
-- kfifo_free(conn->mgmtqueue);
--mgmtqueue_alloc_fail:
- iscsi_destroy_conn(cls_conn);
- return NULL;
- }
-@@ -1603,8 +1919,9 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
- struct iscsi_session *session = conn->session;
- unsigned long flags;
+- /* Wait for either the timer to expire
+- * or the mbox completion interrupt
+- */
+- down(&ha->mbx_intr_sem);
++ wait_for_completion_timeout(&ha->mbx_intr_comp, mcp->tov * HZ);
-+ del_timer_sync(&conn->transport_timer);
-+
- spin_lock_bh(&session->lock);
-- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
- conn->c_stage = ISCSI_CONN_CLEANUP_WAIT;
- if (session->leadconn == conn) {
- /*
-@@ -1637,7 +1954,7 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
- }
+- DEBUG11(printk("%s(%ld): waking up. time=0x%lx\n", __func__,
+- ha->host_no, jiffies));
+ clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
- /* flush queued up work because we free the connection below */
-- scsi_flush_work(session->host);
-+ iscsi_suspend_tx(conn);
+- /* delete the timer */
+- del_timer(&tmp_intr_timer);
+ } else {
+ DEBUG3_11(printk("%s(%ld): cmd=%x POLLING MODE.\n", __func__,
+ ha->host_no, command));
+@@ -299,7 +263,7 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
- spin_lock_bh(&session->lock);
- kfree(conn->data);
-@@ -1648,8 +1965,6 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
- session->leadconn = NULL;
- spin_unlock_bh(&session->lock);
+ /* Allow next mbx cmd to come in. */
+ if (!abort_active)
+- up(&ha->mbx_cmd_sem);
++ complete(&ha->mbx_cmd_comp);
-- kfifo_free(conn->mgmtqueue);
+ if (rval) {
+ DEBUG2_3_11(printk("%s(%ld): **** FAILED. mbx0=%x, mbx1=%x, "
+@@ -905,7 +869,7 @@ qla2x00_get_adapter_id(scsi_qla_host_t *ha, uint16_t *id, uint8_t *al_pa,
+
+ mcp->mb[0] = MBC_GET_ADAPTER_LOOP_ID;
+ mcp->mb[9] = ha->vp_idx;
+- mcp->out_mb = MBX_0;
++ mcp->out_mb = MBX_9|MBX_0;
+ mcp->in_mb = MBX_9|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->tov = 30;
+ mcp->flags = 0;
+@@ -1016,7 +980,7 @@ qla2x00_init_firmware(scsi_qla_host_t *ha, uint16_t size)
+ DEBUG11(printk("qla2x00_init_firmware(%ld): entered.\n",
+ ha->host_no));
+
+- if (ha->flags.npiv_supported)
++ if (ha->fw_attributes & BIT_2)
+ mcp->mb[0] = MBC_MID_INITIALIZE_FIRMWARE;
+ else
+ mcp->mb[0] = MBC_INITIALIZE_FIRMWARE;
+@@ -2042,29 +2006,20 @@ qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map)
+ */
+ int
+ qla2x00_get_link_status(scsi_qla_host_t *ha, uint16_t loop_id,
+- link_stat_t *ret_buf, uint16_t *status)
++ struct link_statistics *stats, dma_addr_t stats_dma)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+- link_stat_t *stat_buf;
+- dma_addr_t stat_buf_dma;
++ uint32_t *siter, *diter, dwords;
+
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
+- stat_buf = dma_pool_alloc(ha->s_dma_pool, GFP_ATOMIC, &stat_buf_dma);
+- if (stat_buf == NULL) {
+- DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
+- __func__, ha->host_no));
+- return BIT_0;
+- }
+- memset(stat_buf, 0, sizeof(link_stat_t));
-
- iscsi_destroy_conn(cls_conn);
- }
- EXPORT_SYMBOL_GPL(iscsi_conn_teardown);
-@@ -1672,11 +1987,29 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
- return -EINVAL;
+ mcp->mb[0] = MBC_GET_LINK_STATUS;
+- mcp->mb[2] = MSW(stat_buf_dma);
+- mcp->mb[3] = LSW(stat_buf_dma);
+- mcp->mb[6] = MSW(MSD(stat_buf_dma));
+- mcp->mb[7] = LSW(MSD(stat_buf_dma));
++ mcp->mb[2] = MSW(stats_dma);
++ mcp->mb[3] = LSW(stats_dma);
++ mcp->mb[6] = MSW(MSD(stats_dma));
++ mcp->mb[7] = LSW(MSD(stats_dma));
+ mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
+ mcp->in_mb = MBX_0;
+ if (IS_FWI2_CAPABLE(ha)) {
+@@ -2089,78 +2044,43 @@ qla2x00_get_link_status(scsi_qla_host_t *ha, uint16_t loop_id,
+ if (mcp->mb[0] != MBS_COMMAND_COMPLETE) {
+ DEBUG2_3_11(printk("%s(%ld): cmd failed. mbx0=%x.\n",
+ __func__, ha->host_no, mcp->mb[0]));
+- status[0] = mcp->mb[0];
+- rval = BIT_1;
++ rval = QLA_FUNCTION_FAILED;
+ } else {
+- /* copy over data -- firmware data is LE. */
+- ret_buf->link_fail_cnt =
+- le32_to_cpu(stat_buf->link_fail_cnt);
+- ret_buf->loss_sync_cnt =
+- le32_to_cpu(stat_buf->loss_sync_cnt);
+- ret_buf->loss_sig_cnt =
+- le32_to_cpu(stat_buf->loss_sig_cnt);
+- ret_buf->prim_seq_err_cnt =
+- le32_to_cpu(stat_buf->prim_seq_err_cnt);
+- ret_buf->inval_xmit_word_cnt =
+- le32_to_cpu(stat_buf->inval_xmit_word_cnt);
+- ret_buf->inval_crc_cnt =
+- le32_to_cpu(stat_buf->inval_crc_cnt);
+-
+- DEBUG11(printk("%s(%ld): stat dump: fail_cnt=%d "
+- "loss_sync=%d loss_sig=%d seq_err=%d "
+- "inval_xmt_word=%d inval_crc=%d.\n", __func__,
+- ha->host_no, stat_buf->link_fail_cnt,
+- stat_buf->loss_sync_cnt, stat_buf->loss_sig_cnt,
+- stat_buf->prim_seq_err_cnt,
+- stat_buf->inval_xmit_word_cnt,
+- stat_buf->inval_crc_cnt));
++ /* Copy over data -- firmware data is LE. */
++ dwords = offsetof(struct link_statistics, unused1) / 4;
++ siter = diter = &stats->link_fail_cnt;
++ while (dwords--)
++ *diter++ = le32_to_cpu(*siter++);
+ }
+ } else {
+ /* Failed. */
+ DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
+ ha->host_no, rval));
+- rval = BIT_1;
}
-+ if (conn->ping_timeout && !conn->recv_timeout) {
-+ printk(KERN_ERR "iscsi: invalid recv timeout of zero "
-+ "Using 5 seconds\n.");
-+ conn->recv_timeout = 5;
-+ }
-+
-+ if (conn->recv_timeout && !conn->ping_timeout) {
-+ printk(KERN_ERR "iscsi: invalid ping timeout of zero "
-+ "Using 5 seconds.\n");
-+ conn->ping_timeout = 5;
-+ }
-+
- spin_lock_bh(&session->lock);
- conn->c_stage = ISCSI_CONN_STARTED;
- session->state = ISCSI_STATE_LOGGED_IN;
- session->queued_cmdsn = session->cmdsn;
+- dma_pool_free(ha->s_dma_pool, stat_buf, stat_buf_dma);
+-
+ return rval;
+ }
-+ conn->last_recv = jiffies;
-+ conn->last_ping = jiffies;
-+ if (conn->recv_timeout && conn->ping_timeout)
-+ mod_timer(&conn->transport_timer,
-+ jiffies + (conn->recv_timeout * HZ));
-+
- switch(conn->stop_stage) {
- case STOP_CONN_RECOVER:
- /*
-@@ -1684,7 +2017,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
- * commands after successful recovery
- */
- conn->stop_stage = 0;
-- conn->tmabort_state = TMABORT_INITIAL;
-+ conn->tmf_state = TMF_INITIAL;
- session->age++;
- spin_unlock_bh(&session->lock);
+ int
+-qla24xx_get_isp_stats(scsi_qla_host_t *ha, uint32_t *dwbuf, uint32_t dwords,
+- uint16_t *status)
++qla24xx_get_isp_stats(scsi_qla_host_t *ha, struct link_statistics *stats,
++ dma_addr_t stats_dma)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+- uint32_t *sbuf, *siter;
+- dma_addr_t sbuf_dma;
++ uint32_t *siter, *diter, dwords;
-@@ -1709,55 +2042,27 @@ flush_control_queues(struct iscsi_session *session, struct iscsi_conn *conn)
- struct iscsi_mgmt_task *mtask, *tmp;
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
- /* handle pending */
-- while (__kfifo_get(conn->mgmtqueue, (void*)&mtask, sizeof(void*))) {
-- if (mtask == conn->login_mtask)
-- continue;
-+ list_for_each_entry_safe(mtask, tmp, &conn->mgmtqueue, running) {
- debug_scsi("flushing pending mgmt task itt 0x%x\n", mtask->itt);
-- __kfifo_put(session->mgmtpool.queue, (void*)&mtask,
-- sizeof(void*));
-+ iscsi_free_mgmt_task(conn, mtask);
+- if (dwords > (DMA_POOL_SIZE / 4)) {
+- DEBUG2_3_11(printk("%s(%ld): Unabled to retrieve %d DWORDs "
+- "(max %d).\n", __func__, ha->host_no, dwords,
+- DMA_POOL_SIZE / 4));
+- return BIT_0;
+- }
+- sbuf = dma_pool_alloc(ha->s_dma_pool, GFP_ATOMIC, &sbuf_dma);
+- if (sbuf == NULL) {
+- DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
+- __func__, ha->host_no));
+- return BIT_0;
+- }
+- memset(sbuf, 0, DMA_POOL_SIZE);
+-
+ mcp->mb[0] = MBC_GET_LINK_PRIV_STATS;
+- mcp->mb[2] = MSW(sbuf_dma);
+- mcp->mb[3] = LSW(sbuf_dma);
+- mcp->mb[6] = MSW(MSD(sbuf_dma));
+- mcp->mb[7] = LSW(MSD(sbuf_dma));
+- mcp->mb[8] = dwords;
++ mcp->mb[2] = MSW(stats_dma);
++ mcp->mb[3] = LSW(stats_dma);
++ mcp->mb[6] = MSW(MSD(stats_dma));
++ mcp->mb[7] = LSW(MSD(stats_dma));
++ mcp->mb[8] = sizeof(struct link_statistics) / 4;
++ mcp->mb[9] = ha->vp_idx;
+ mcp->mb[10] = 0;
+- mcp->out_mb = MBX_10|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
++ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
+ mcp->in_mb = MBX_2|MBX_1|MBX_0;
+ mcp->tov = 30;
+ mcp->flags = IOCTL_CMD;
+@@ -2170,23 +2090,20 @@ qla24xx_get_isp_stats(scsi_qla_host_t *ha, uint32_t *dwbuf, uint32_t dwords,
+ if (mcp->mb[0] != MBS_COMMAND_COMPLETE) {
+ DEBUG2_3_11(printk("%s(%ld): cmd failed. mbx0=%x.\n",
+ __func__, ha->host_no, mcp->mb[0]));
+- status[0] = mcp->mb[0];
+- rval = BIT_1;
++ rval = QLA_FUNCTION_FAILED;
+ } else {
+ /* Copy over data -- firmware data is LE. */
+- siter = sbuf;
++ dwords = sizeof(struct link_statistics) / 4;
++ siter = diter = &stats->link_fail_cnt;
+ while (dwords--)
+- *dwbuf++ = le32_to_cpu(*siter++);
++ *diter++ = le32_to_cpu(*siter++);
+ }
+ } else {
+ /* Failed. */
+ DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
+ ha->host_no, rval));
+- rval = BIT_1;
}
- /* handle running */
- list_for_each_entry_safe(mtask, tmp, &conn->mgmt_run_list, running) {
- debug_scsi("flushing running mgmt task itt 0x%x\n", mtask->itt);
-- list_del(&mtask->running);
+- dma_pool_free(ha->s_dma_pool, sbuf, sbuf_dma);
-
-- if (mtask == conn->login_mtask)
-- continue;
-- __kfifo_put(session->mgmtpool.queue, (void*)&mtask,
-- sizeof(void*));
-+ iscsi_free_mgmt_task(conn, mtask);
- }
+ return rval;
+ }
- conn->mtask = NULL;
+@@ -2331,6 +2248,8 @@ atarget_done:
+ return rval;
}
--/* Fail commands. Mutex and session lock held and recv side suspended */
--static void fail_all_commands(struct iscsi_conn *conn)
++#if 0
++
+ int
+ qla2x00_system_error(scsi_qla_host_t *ha)
+ {
+@@ -2360,47 +2279,7 @@ qla2x00_system_error(scsi_qla_host_t *ha)
+ return rval;
+ }
+
+-/**
+- * qla2x00_get_serdes_params() -
+- * @ha: HA context
+- *
+- * Returns
+- */
+-int
+-qla2x00_get_serdes_params(scsi_qla_host_t *ha, uint16_t *sw_em_1g,
+- uint16_t *sw_em_2g, uint16_t *sw_em_4g)
-{
-- struct iscsi_cmd_task *ctask, *tmp;
+- int rval;
+- mbx_cmd_t mc;
+- mbx_cmd_t *mcp = &mc;
-
-- /* flush pending */
-- list_for_each_entry_safe(ctask, tmp, &conn->xmitqueue, running) {
-- debug_scsi("failing pending sc %p itt 0x%x\n", ctask->sc,
-- ctask->itt);
-- fail_command(conn, ctask, DID_BUS_BUSY << 16);
-- }
+- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-
-- /* fail all other running */
-- list_for_each_entry_safe(ctask, tmp, &conn->run_list, running) {
-- debug_scsi("failing in progress sc %p itt 0x%x\n",
-- ctask->sc, ctask->itt);
-- fail_command(conn, ctask, DID_BUS_BUSY << 16);
+- mcp->mb[0] = MBC_SERDES_PARAMS;
+- mcp->mb[1] = 0;
+- mcp->out_mb = MBX_1|MBX_0;
+- mcp->in_mb = MBX_4|MBX_3|MBX_2|MBX_0;
+- mcp->tov = 30;
+- mcp->flags = 0;
+- rval = qla2x00_mailbox_command(ha, mcp);
+-
+- if (rval != QLA_SUCCESS) {
+- /*EMPTY*/
+- DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
+- ha->host_no, rval, mcp->mb[0]));
+- } else {
+- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+-
+- if (sw_em_1g)
+- *sw_em_1g = mcp->mb[2];
+- if (sw_em_2g)
+- *sw_em_2g = mcp->mb[3];
+- if (sw_em_4g)
+- *sw_em_4g = mcp->mb[4];
- }
-
-- conn->ctask = NULL;
+- return rval;
-}
--
- static void iscsi_start_session_recovery(struct iscsi_session *session,
- struct iscsi_conn *conn, int flag)
++#endif /* 0 */
+
+ /**
+ * qla2x00_set_serdes_params() -
+@@ -2471,7 +2350,7 @@ qla2x00_stop_firmware(scsi_qla_host_t *ha)
+ }
+
+ int
+-qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
++qla2x00_enable_eft_trace(scsi_qla_host_t *ha, dma_addr_t eft_dma,
+ uint16_t buffers)
{
- int old_stop_stage;
+ int rval;
+@@ -2484,22 +2363,18 @@ qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-+ del_timer_sync(&conn->transport_timer);
-+
- mutex_lock(&session->eh_mutex);
- spin_lock_bh(&session->lock);
- if (conn->stop_stage == STOP_CONN_TERM) {
-@@ -1818,7 +2123,7 @@ static void iscsi_start_session_recovery(struct iscsi_session *session,
- * flush queues.
- */
- spin_lock_bh(&session->lock);
-- fail_all_commands(conn);
-+ fail_all_commands(conn, -1);
- flush_control_queues(session, conn);
- spin_unlock_bh(&session->lock);
- mutex_unlock(&session->eh_mutex);
-@@ -1869,6 +2174,21 @@ int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
- uint32_t value;
+ mcp->mb[0] = MBC_TRACE_CONTROL;
+- mcp->mb[1] = ctrl;
+- mcp->out_mb = MBX_1|MBX_0;
++ mcp->mb[1] = TC_EFT_ENABLE;
++ mcp->mb[2] = LSW(eft_dma);
++ mcp->mb[3] = MSW(eft_dma);
++ mcp->mb[4] = LSW(MSD(eft_dma));
++ mcp->mb[5] = MSW(MSD(eft_dma));
++ mcp->mb[6] = buffers;
++ mcp->mb[7] = TC_AEN_DISABLE;
++ mcp->out_mb = MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
+- if (ctrl == TC_ENABLE) {
+- mcp->mb[2] = LSW(eft_dma);
+- mcp->mb[3] = MSW(eft_dma);
+- mcp->mb[4] = LSW(MSD(eft_dma));
+- mcp->mb[5] = MSW(MSD(eft_dma));
+- mcp->mb[6] = buffers;
+- mcp->mb[7] = 0;
+- mcp->out_mb |= MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2;
+- }
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+-
+ if (rval != QLA_SUCCESS) {
+ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
+ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
+@@ -2511,8 +2386,7 @@ qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
+ }
- switch(param) {
-+ case ISCSI_PARAM_FAST_ABORT:
-+ sscanf(buf, "%d", &session->fast_abort);
-+ break;
-+ case ISCSI_PARAM_ABORT_TMO:
-+ sscanf(buf, "%d", &session->abort_timeout);
-+ break;
-+ case ISCSI_PARAM_LU_RESET_TMO:
-+ sscanf(buf, "%d", &session->lu_reset_timeout);
-+ break;
-+ case ISCSI_PARAM_PING_TMO:
-+ sscanf(buf, "%d", &conn->ping_timeout);
-+ break;
-+ case ISCSI_PARAM_RECV_TMO:
-+ sscanf(buf, "%d", &conn->recv_timeout);
-+ break;
- case ISCSI_PARAM_MAX_RECV_DLENGTH:
- sscanf(buf, "%d", &conn->max_recv_dlength);
- break;
-@@ -1983,6 +2303,15 @@ int iscsi_session_get_param(struct iscsi_cls_session *cls_session,
- int len;
+ int
+-qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
+- uint16_t off, uint16_t count)
++qla2x00_disable_eft_trace(scsi_qla_host_t *ha)
+ {
+ int rval;
+ mbx_cmd_t mc;
+@@ -2523,24 +2397,16 @@ qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
- switch(param) {
-+ case ISCSI_PARAM_FAST_ABORT:
-+ len = sprintf(buf, "%d\n", session->fast_abort);
-+ break;
-+ case ISCSI_PARAM_ABORT_TMO:
-+ len = sprintf(buf, "%d\n", session->abort_timeout);
-+ break;
-+ case ISCSI_PARAM_LU_RESET_TMO:
-+ len = sprintf(buf, "%d\n", session->lu_reset_timeout);
-+ break;
- case ISCSI_PARAM_INITIAL_R2T_EN:
- len = sprintf(buf, "%d\n", session->initial_r2t_en);
- break;
-@@ -2040,6 +2369,12 @@ int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
- int len;
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
- switch(param) {
-+ case ISCSI_PARAM_PING_TMO:
-+ len = sprintf(buf, "%u\n", conn->ping_timeout);
-+ break;
-+ case ISCSI_PARAM_RECV_TMO:
-+ len = sprintf(buf, "%u\n", conn->recv_timeout);
-+ break;
- case ISCSI_PARAM_MAX_RECV_DLENGTH:
- len = sprintf(buf, "%u\n", conn->max_recv_dlength);
- break;
-diff --git a/drivers/scsi/libsas/Kconfig b/drivers/scsi/libsas/Kconfig
-index c01a40d..18f33cd 100644
---- a/drivers/scsi/libsas/Kconfig
-+++ b/drivers/scsi/libsas/Kconfig
-@@ -38,6 +38,15 @@ config SCSI_SAS_ATA
- Builds in ATA support into libsas. Will necessitate
- the loading of libata along with libsas.
+- mcp->mb[0] = MBC_READ_SFP;
+- mcp->mb[1] = addr;
+- mcp->mb[2] = MSW(sfp_dma);
+- mcp->mb[3] = LSW(sfp_dma);
+- mcp->mb[6] = MSW(MSD(sfp_dma));
+- mcp->mb[7] = LSW(MSD(sfp_dma));
+- mcp->mb[8] = count;
+- mcp->mb[9] = off;
+- mcp->mb[10] = 0;
+- mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
+- mcp->in_mb = MBX_0;
++ mcp->mb[0] = MBC_TRACE_CONTROL;
++ mcp->mb[1] = TC_EFT_DISABLE;
++ mcp->out_mb = MBX_1|MBX_0;
++ mcp->in_mb = MBX_1|MBX_0;
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+-
+ if (rval != QLA_SUCCESS) {
+- DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
+- ha->host_no, rval, mcp->mb[0]));
++ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
++ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
+ } else {
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+ }
+@@ -2549,176 +2415,168 @@ qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
+ }
-+config SCSI_SAS_HOST_SMP
-+ bool "Support for SMP interpretation for SAS hosts"
-+ default y
-+ depends on SCSI_SAS_LIBSAS
-+ help
-+ Allows sas hosts to receive SMP frames. Selecting this
-+ option builds an SMP interpreter into libsas. Say
-+ N here if you want to save the few kb this consumes.
-+
- config SCSI_SAS_LIBSAS_DEBUG
- bool "Compile the SAS Domain Transport Attributes in debug mode"
- default y
-diff --git a/drivers/scsi/libsas/Makefile b/drivers/scsi/libsas/Makefile
-index fd387b9..1ad1323 100644
---- a/drivers/scsi/libsas/Makefile
-+++ b/drivers/scsi/libsas/Makefile
-@@ -33,5 +33,7 @@ libsas-y += sas_init.o \
- sas_dump.o \
- sas_discover.o \
- sas_expander.o \
-- sas_scsi_host.o
-+ sas_scsi_host.o \
-+ sas_task.o
- libsas-$(CONFIG_SCSI_SAS_ATA) += sas_ata.o
-+libsas-$(CONFIG_SCSI_SAS_HOST_SMP) += sas_host_smp.o
-\ No newline at end of file
-diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
-index 0829b55..0996f86 100644
---- a/drivers/scsi/libsas/sas_ata.c
-+++ b/drivers/scsi/libsas/sas_ata.c
-@@ -158,8 +158,8 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
- struct Scsi_Host *host = sas_ha->core.shost;
- struct sas_internal *i = to_sas_internal(host->transportt);
- struct scatterlist *sg;
-- unsigned int num = 0;
- unsigned int xfer = 0;
-+ unsigned int si;
+ int
+-qla2x00_get_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
+- uint16_t *port_speed, uint16_t *mb)
++qla2x00_enable_fce_trace(scsi_qla_host_t *ha, dma_addr_t fce_dma,
++ uint16_t buffers, uint16_t *mb, uint32_t *dwords)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
- task = sas_alloc_task(GFP_ATOMIC);
- if (!task)
-@@ -176,22 +176,20 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+- if (!IS_IIDMA_CAPABLE(ha))
++ if (!IS_QLA25XX(ha))
+ return QLA_FUNCTION_FAILED;
- ata_tf_to_fis(&qc->tf, 1, 0, (u8*)&task->ata_task.fis);
- task->uldd_task = qc;
-- if (is_atapi_taskfile(&qc->tf)) {
-+ if (ata_is_atapi(qc->tf.protocol)) {
- memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
- task->total_xfer_len = qc->nbytes + qc->pad_len;
- task->num_scatter = qc->pad_len ? qc->n_elem + 1 : qc->n_elem;
- } else {
-- ata_for_each_sg(sg, qc) {
-- num++;
-+ for_each_sg(qc->sg, sg, qc->n_elem, si)
- xfer += sg->length;
-- }
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
- task->total_xfer_len = xfer;
-- task->num_scatter = num;
-+ task->num_scatter = si;
+- mcp->mb[0] = MBC_PORT_PARAMS;
+- mcp->mb[1] = loop_id;
+- mcp->mb[2] = mcp->mb[3] = mcp->mb[4] = mcp->mb[5] = 0;
+- mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+- mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
++ mcp->mb[0] = MBC_TRACE_CONTROL;
++ mcp->mb[1] = TC_FCE_ENABLE;
++ mcp->mb[2] = LSW(fce_dma);
++ mcp->mb[3] = MSW(fce_dma);
++ mcp->mb[4] = LSW(MSD(fce_dma));
++ mcp->mb[5] = MSW(MSD(fce_dma));
++ mcp->mb[6] = buffers;
++ mcp->mb[7] = TC_AEN_DISABLE;
++ mcp->mb[8] = 0;
++ mcp->mb[9] = TC_FCE_DEFAULT_RX_SIZE;
++ mcp->mb[10] = TC_FCE_DEFAULT_TX_SIZE;
++ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|
++ MBX_1|MBX_0;
++ mcp->in_mb = MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+-
+- /* Return mailbox statuses. */
+- if (mb != NULL) {
+- mb[0] = mcp->mb[0];
+- mb[1] = mcp->mb[1];
+- mb[3] = mcp->mb[3];
+- mb[4] = mcp->mb[4];
+- mb[5] = mcp->mb[5];
+- }
+-
+ if (rval != QLA_SUCCESS) {
+- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
+- ha->host_no, rval));
++ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
++ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
+ } else {
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+- if (port_speed)
+- *port_speed = mcp->mb[3];
++
++ if (mb)
++ memcpy(mb, mcp->mb, 8 * sizeof(*mb));
++ if (dwords)
++ *dwords = mcp->mb[6];
}
- task->data_dir = qc->dma_dir;
-- task->scatter = qc->__sg;
-+ task->scatter = qc->sg;
- task->ata_task.retry_count = 1;
- task->task_state_flags = SAS_TASK_STATE_PENDING;
- qc->lldd_task = task;
-@@ -200,7 +198,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
- case ATA_PROT_NCQ:
- task->ata_task.use_ncq = 1;
- /* fall through */
-- case ATA_PROT_ATAPI_DMA:
-+ case ATAPI_PROT_DMA:
- case ATA_PROT_DMA:
- task->ata_task.dma_xfer = 1;
- break;
-@@ -500,7 +498,7 @@ static int sas_execute_task(struct sas_task *task, void *buffer, int size,
- goto ex_err;
- }
- wait_for_completion(&task->completion);
-- res = -ETASK;
-+ res = -ECOMM;
- if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
- int res2;
- SAS_DPRINTK("task aborted, flags:0x%x\n",
-diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
-index 5f3a0d7..31b9af2 100644
---- a/drivers/scsi/libsas/sas_discover.c
-+++ b/drivers/scsi/libsas/sas_discover.c
-@@ -98,7 +98,7 @@ static int sas_get_port_device(struct asd_sas_port *port)
- dev->dev_type = SATA_PM;
- else
- dev->dev_type = SATA_DEV;
-- dev->tproto = SATA_PROTO;
-+ dev->tproto = SAS_PROTOCOL_SATA;
- } else {
- struct sas_identify_frame *id =
- (struct sas_identify_frame *) dev->frame_rcvd;
-diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
-index 8727436..aefd865 100644
---- a/drivers/scsi/libsas/sas_expander.c
-+++ b/drivers/scsi/libsas/sas_expander.c
-@@ -96,7 +96,7 @@ static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
- }
+ return rval;
+ }
- wait_for_completion(&task->completion);
-- res = -ETASK;
-+ res = -ECOMM;
- if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
- SAS_DPRINTK("smp task timed out or aborted\n");
- i->dft->lldd_abort_task(task);
-@@ -109,6 +109,16 @@ static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
- task->task_status.stat == SAM_GOOD) {
- res = 0;
- break;
-+ } if (task->task_status.resp == SAS_TASK_COMPLETE &&
-+ task->task_status.stat == SAS_DATA_UNDERRUN) {
-+ /* no error, but return the number of bytes of
-+ * underrun */
-+ res = task->task_status.residual;
-+ break;
-+ } if (task->task_status.resp == SAS_TASK_COMPLETE &&
-+ task->task_status.stat == SAS_DATA_OVERRUN) {
-+ res = -EMSGSIZE;
-+ break;
- } else {
- SAS_DPRINTK("%s: task to dev %016llx response: 0x%x "
- "status 0x%x\n", __FUNCTION__,
-@@ -656,9 +666,9 @@ static struct domain_device *sas_ex_discover_end_dev(
- sas_ex_get_linkrate(parent, child, phy);
+ int
+-qla2x00_set_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
+- uint16_t port_speed, uint16_t *mb)
++qla2x00_disable_fce_trace(scsi_qla_host_t *ha, uint64_t *wr, uint64_t *rd)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
- #ifdef CONFIG_SCSI_SAS_ATA
-- if ((phy->attached_tproto & SAS_PROTO_STP) || phy->attached_sata_dev) {
-+ if ((phy->attached_tproto & SAS_PROTOCOL_STP) || phy->attached_sata_dev) {
- child->dev_type = SATA_DEV;
-- if (phy->attached_tproto & SAS_PROTO_STP)
-+ if (phy->attached_tproto & SAS_PROTOCOL_STP)
- child->tproto = phy->attached_tproto;
- if (phy->attached_sata_dev)
- child->tproto |= SATA_DEV;
-@@ -695,7 +705,7 @@ static struct domain_device *sas_ex_discover_end_dev(
- }
- } else
- #endif
-- if (phy->attached_tproto & SAS_PROTO_SSP) {
-+ if (phy->attached_tproto & SAS_PROTOCOL_SSP) {
- child->dev_type = SAS_END_DEV;
- rphy = sas_end_device_alloc(phy->port);
- /* FIXME: error handling */
-@@ -1896,11 +1906,9 @@ int sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
- }
+- if (!IS_IIDMA_CAPABLE(ha))
++ if (!IS_FWI2_CAPABLE(ha))
+ return QLA_FUNCTION_FAILED;
- /* no rphy means no smp target support (ie aic94xx host) */
-- if (!rphy) {
-- printk("%s: can we send a smp request to a host?\n",
-- __FUNCTION__);
-- return -EINVAL;
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
+- mcp->mb[0] = MBC_PORT_PARAMS;
+- mcp->mb[1] = loop_id;
+- mcp->mb[2] = BIT_0;
+- mcp->mb[3] = port_speed & (BIT_2|BIT_1|BIT_0);
+- mcp->mb[4] = mcp->mb[5] = 0;
+- mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+- mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
++ mcp->mb[0] = MBC_TRACE_CONTROL;
++ mcp->mb[1] = TC_FCE_DISABLE;
++ mcp->mb[2] = TC_FCE_DISABLE_TRACE;
++ mcp->out_mb = MBX_2|MBX_1|MBX_0;
++ mcp->in_mb = MBX_9|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|
++ MBX_1|MBX_0;
+ mcp->tov = 30;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+-
+- /* Return mailbox statuses. */
+- if (mb != NULL) {
+- mb[0] = mcp->mb[0];
+- mb[1] = mcp->mb[1];
+- mb[3] = mcp->mb[3];
+- mb[4] = mcp->mb[4];
+- mb[5] = mcp->mb[5];
- }
-+ if (!rphy)
-+ return sas_smp_host_handler(shost, req, rsp);
+-
+ if (rval != QLA_SUCCESS) {
+- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
+- ha->host_no, rval));
++ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
++ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
+ } else {
+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+
- type = rphy->identify.device_type;
++ if (wr)
++ *wr = (uint64_t) mcp->mb[5] << 48 |
++ (uint64_t) mcp->mb[4] << 32 |
++ (uint64_t) mcp->mb[3] << 16 |
++ (uint64_t) mcp->mb[2];
++ if (rd)
++ *rd = (uint64_t) mcp->mb[9] << 48 |
++ (uint64_t) mcp->mb[8] << 32 |
++ (uint64_t) mcp->mb[7] << 16 |
++ (uint64_t) mcp->mb[6];
+ }
- if (type != SAS_EDGE_EXPANDER_DEVICE &&
-@@ -1926,6 +1934,15 @@ int sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
+ return rval;
+ }
- ret = smp_execute_task(dev, bio_data(req->bio), req->data_len,
- bio_data(rsp->bio), rsp->data_len);
-+ if (ret > 0) {
-+ /* positive number is the untransferred residual */
-+ rsp->data_len = ret;
-+ req->data_len = 0;
-+ ret = 0;
-+ } else if (ret == 0) {
-+ rsp->data_len = 0;
-+ req->data_len = 0;
-+ }
+-/*
+- * qla24xx_get_vp_database
+- * Get the VP's database for all configured ports.
+- *
+- * Input:
+- * ha = adapter block pointer.
+- * size = size of initialization control block.
+- *
+- * Returns:
+- * qla2x00 local function return status code.
+- *
+- * Context:
+- * Kernel context.
+- */
+ int
+-qla24xx_get_vp_database(scsi_qla_host_t *ha, uint16_t size)
++qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
++ uint16_t off, uint16_t count)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
- return ret;
- }
-diff --git a/drivers/scsi/libsas/sas_host_smp.c b/drivers/scsi/libsas/sas_host_smp.c
-new file mode 100644
-index 0000000..16f9312
---- /dev/null
-+++ b/drivers/scsi/libsas/sas_host_smp.c
-@@ -0,0 +1,274 @@
-+/*
-+ * Serial Attached SCSI (SAS) Expander discovery and configuration
-+ *
-+ * Copyright (C) 2007 James E.J. Bottomley
-+ * <James.Bottomley at HansenPartnership.com>
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public License as
-+ * published by the Free Software Foundation; version 2 only.
-+ */
-+#include <linux/scatterlist.h>
-+#include <linux/blkdev.h>
-+
-+#include "sas_internal.h"
-+
-+#include <scsi/scsi_transport.h>
-+#include <scsi/scsi_transport_sas.h>
-+#include "../scsi_sas_internal.h"
-+
-+static void sas_host_smp_discover(struct sas_ha_struct *sas_ha, u8 *resp_data,
-+ u8 phy_id)
-+{
-+ struct sas_phy *phy;
-+ struct sas_rphy *rphy;
-+
-+ if (phy_id >= sas_ha->num_phys) {
-+ resp_data[2] = SMP_RESP_NO_PHY;
-+ return;
-+ }
-+ resp_data[2] = SMP_RESP_FUNC_ACC;
-+
-+ phy = sas_ha->sas_phy[phy_id]->phy;
-+ resp_data[9] = phy_id;
-+ resp_data[13] = phy->negotiated_linkrate;
-+ memcpy(resp_data + 16, sas_ha->sas_addr, SAS_ADDR_SIZE);
-+ memcpy(resp_data + 24, sas_ha->sas_phy[phy_id]->attached_sas_addr,
-+ SAS_ADDR_SIZE);
-+ resp_data[40] = (phy->minimum_linkrate << 4) |
-+ phy->minimum_linkrate_hw;
-+ resp_data[41] = (phy->maximum_linkrate << 4) |
-+ phy->maximum_linkrate_hw;
-+
-+ if (!sas_ha->sas_phy[phy_id]->port ||
-+ !sas_ha->sas_phy[phy_id]->port->port_dev)
-+ return;
-+
-+ rphy = sas_ha->sas_phy[phy_id]->port->port_dev->rphy;
-+ resp_data[12] = rphy->identify.device_type << 4;
-+ resp_data[14] = rphy->identify.initiator_port_protocols;
-+ resp_data[15] = rphy->identify.target_port_protocols;
-+}
-+
-+static void sas_report_phy_sata(struct sas_ha_struct *sas_ha, u8 *resp_data,
-+ u8 phy_id)
-+{
-+ struct sas_rphy *rphy;
-+ struct dev_to_host_fis *fis;
-+ int i;
-+
-+ if (phy_id >= sas_ha->num_phys) {
-+ resp_data[2] = SMP_RESP_NO_PHY;
-+ return;
-+ }
-+
-+ resp_data[2] = SMP_RESP_PHY_NO_SATA;
-+
-+ if (!sas_ha->sas_phy[phy_id]->port)
-+ return;
-+
-+ rphy = sas_ha->sas_phy[phy_id]->port->port_dev->rphy;
-+ fis = (struct dev_to_host_fis *)
-+ sas_ha->sas_phy[phy_id]->port->port_dev->frame_rcvd;
-+ if (rphy->identify.target_port_protocols != SAS_PROTOCOL_SATA)
-+ return;
-+
-+ resp_data[2] = SMP_RESP_FUNC_ACC;
-+ resp_data[9] = phy_id;
-+ memcpy(resp_data + 16, sas_ha->sas_phy[phy_id]->attached_sas_addr,
-+ SAS_ADDR_SIZE);
-+
-+ /* check to see if we have a valid d2h fis */
-+ if (fis->fis_type != 0x34)
-+ return;
-+
-+ /* the d2h fis is required by the standard to be in LE format */
-+ for (i = 0; i < 20; i += 4) {
-+ u8 *dst = resp_data + 24 + i, *src =
-+ &sas_ha->sas_phy[phy_id]->port->port_dev->frame_rcvd[i];
-+ dst[0] = src[3];
-+ dst[1] = src[2];
-+ dst[2] = src[1];
-+ dst[3] = src[0];
-+ }
-+}
-+
-+static void sas_phy_control(struct sas_ha_struct *sas_ha, u8 phy_id,
-+ u8 phy_op, enum sas_linkrate min,
-+ enum sas_linkrate max, u8 *resp_data)
-+{
-+ struct sas_internal *i =
-+ to_sas_internal(sas_ha->core.shost->transportt);
-+ struct sas_phy_linkrates rates;
-+
-+ if (phy_id >= sas_ha->num_phys) {
-+ resp_data[2] = SMP_RESP_NO_PHY;
-+ return;
-+ }
-+ switch (phy_op) {
-+ case PHY_FUNC_NOP:
-+ case PHY_FUNC_LINK_RESET:
-+ case PHY_FUNC_HARD_RESET:
-+ case PHY_FUNC_DISABLE:
-+ case PHY_FUNC_CLEAR_ERROR_LOG:
-+ case PHY_FUNC_CLEAR_AFFIL:
-+ case PHY_FUNC_TX_SATA_PS_SIGNAL:
-+ break;
-+
-+ default:
-+ resp_data[2] = SMP_RESP_PHY_UNK_OP;
-+ return;
-+ }
-+
-+ rates.minimum_linkrate = min;
-+ rates.maximum_linkrate = max;
-+
-+ if (i->dft->lldd_control_phy(sas_ha->sas_phy[phy_id], phy_op, &rates))
-+ resp_data[2] = SMP_RESP_FUNC_FAILED;
-+ else
-+ resp_data[2] = SMP_RESP_FUNC_ACC;
-+}
-+
-+int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
-+ struct request *rsp)
-+{
-+ u8 *req_data = NULL, *resp_data = NULL, *buf;
-+ struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
-+ int error = -EINVAL, resp_data_len = rsp->data_len;
-+
-+ /* eight is the minimum size for request and response frames */
-+ if (req->data_len < 8 || rsp->data_len < 8)
-+ goto out;
-+
-+ if (bio_offset(req->bio) + req->data_len > PAGE_SIZE ||
-+ bio_offset(rsp->bio) + rsp->data_len > PAGE_SIZE) {
-+ shost_printk(KERN_ERR, shost,
-+ "SMP request/response frame crosses page boundary");
-+ goto out;
-+ }
-+
-+ req_data = kzalloc(req->data_len, GFP_KERNEL);
-+
-+ /* make sure frame can always be built ... we copy
-+ * back only the requested length */
-+ resp_data = kzalloc(max(rsp->data_len, 128U), GFP_KERNEL);
-+
-+ if (!req_data || !resp_data) {
-+ error = -ENOMEM;
-+ goto out;
-+ }
-+
-+ local_irq_disable();
-+ buf = kmap_atomic(bio_page(req->bio), KM_USER0) + bio_offset(req->bio);
-+ memcpy(req_data, buf, req->data_len);
-+ kunmap_atomic(buf - bio_offset(req->bio), KM_USER0);
-+ local_irq_enable();
-+
-+ if (req_data[0] != SMP_REQUEST)
-+ goto out;
-+
-+ /* always succeeds ... even if we can't process the request
-+ * the result is in the response frame */
-+ error = 0;
-+
-+ /* set up default don't know response */
-+ resp_data[0] = SMP_RESPONSE;
-+ resp_data[1] = req_data[1];
-+ resp_data[2] = SMP_RESP_FUNC_UNK;
-+
-+ switch (req_data[1]) {
-+ case SMP_REPORT_GENERAL:
-+ req->data_len -= 8;
-+ resp_data_len -= 32;
-+ resp_data[2] = SMP_RESP_FUNC_ACC;
-+ resp_data[9] = sas_ha->num_phys;
-+ break;
-+
-+ case SMP_REPORT_MANUF_INFO:
-+ req->data_len -= 8;
-+ resp_data_len -= 64;
-+ resp_data[2] = SMP_RESP_FUNC_ACC;
-+ memcpy(resp_data + 12, shost->hostt->name,
-+ SAS_EXPANDER_VENDOR_ID_LEN);
-+ memcpy(resp_data + 20, "libsas virt phy",
-+ SAS_EXPANDER_PRODUCT_ID_LEN);
-+ break;
-+
-+ case SMP_READ_GPIO_REG:
-+ /* FIXME: need GPIO support in the transport class */
-+ break;
-+
-+ case SMP_DISCOVER:
-+		req->data_len -= 16;
-+ if (req->data_len < 0) {
-+ req->data_len = 0;
-+ error = -EINVAL;
-+ goto out;
-+ }
-+ resp_data_len -= 56;
-+ sas_host_smp_discover(sas_ha, resp_data, req_data[9]);
-+ break;
-+
-+ case SMP_REPORT_PHY_ERR_LOG:
-+ /* FIXME: could implement this with additional
-+ * libsas callbacks providing the HW supports it */
-+ break;
-+
-+ case SMP_REPORT_PHY_SATA:
-+		req->data_len -= 16;
-+ if (req->data_len < 0) {
-+ req->data_len = 0;
-+ error = -EINVAL;
-+ goto out;
-+ }
-+ resp_data_len -= 60;
-+ sas_report_phy_sata(sas_ha, resp_data, req_data[9]);
-+ break;
-+
-+ case SMP_REPORT_ROUTE_INFO:
-+ /* Can't implement; hosts have no routes */
-+ break;
-+
-+ case SMP_WRITE_GPIO_REG:
-+ /* FIXME: need GPIO support in the transport class */
-+ break;
-+
-+ case SMP_CONF_ROUTE_INFO:
-+ /* Can't implement; hosts have no routes */
-+ break;
-+
-+ case SMP_PHY_CONTROL:
-+		req->data_len -= 44;
-+ if (req->data_len < 0) {
-+ req->data_len = 0;
-+ error = -EINVAL;
-+ goto out;
-+ }
-+ resp_data_len -= 8;
-+ sas_phy_control(sas_ha, req_data[9], req_data[10],
-+ req_data[32] >> 4, req_data[33] >> 4,
-+ resp_data);
-+ break;
-+
-+ case SMP_PHY_TEST_FUNCTION:
-+ /* FIXME: should this be implemented? */
-+ break;
-+
-+ default:
-+ /* probably a 2.0 function */
-+ break;
-+ }
-+
-+ local_irq_disable();
-+ buf = kmap_atomic(bio_page(rsp->bio), KM_USER0) + bio_offset(rsp->bio);
-+ memcpy(buf, resp_data, rsp->data_len);
-+ flush_kernel_dcache_page(bio_page(rsp->bio));
-+ kunmap_atomic(buf - bio_offset(rsp->bio), KM_USER0);
-+ local_irq_enable();
-+ rsp->data_len = resp_data_len;
+- DEBUG11(printk("scsi(%ld):%s - entered.\n",
+- ha->host_no, __func__));
++ if (!IS_FWI2_CAPABLE(ha))
++ return QLA_FUNCTION_FAILED;
+
+- mcp->mb[0] = MBC_MID_GET_VP_DATABASE;
+- mcp->mb[2] = MSW(ha->init_cb_dma);
+- mcp->mb[3] = LSW(ha->init_cb_dma);
+- mcp->mb[4] = 0;
+- mcp->mb[5] = 0;
+- mcp->mb[6] = MSW(MSD(ha->init_cb_dma));
+- mcp->mb[7] = LSW(MSD(ha->init_cb_dma));
+- mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
+- mcp->in_mb = MBX_1|MBX_0;
+- mcp->buf_size = size;
+- mcp->flags = MBX_DMA_OUT;
+- mcp->tov = MBX_TOV_SECONDS;
++ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
-+ out:
-+ kfree(req_data);
-+ kfree(resp_data);
-+ return error;
-+}
-diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
-index 2b8213b..b4f9368 100644
---- a/drivers/scsi/libsas/sas_internal.h
-+++ b/drivers/scsi/libsas/sas_internal.h
-@@ -45,7 +45,7 @@
- void sas_scsi_recover_host(struct Scsi_Host *shost);
++ mcp->mb[0] = MBC_READ_SFP;
++ mcp->mb[1] = addr;
++ mcp->mb[2] = MSW(sfp_dma);
++ mcp->mb[3] = LSW(sfp_dma);
++ mcp->mb[6] = MSW(MSD(sfp_dma));
++ mcp->mb[7] = LSW(MSD(sfp_dma));
++ mcp->mb[8] = count;
++ mcp->mb[9] = off;
++ mcp->mb[10] = 0;
++ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
++ mcp->in_mb = MBX_0;
++ mcp->tov = 30;
++ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
- int sas_show_class(enum sas_class class, char *buf);
--int sas_show_proto(enum sas_proto proto, char *buf);
-+int sas_show_proto(enum sas_protocol proto, char *buf);
- int sas_show_linkrate(enum sas_linkrate linkrate, char *buf);
- int sas_show_oob_mode(enum sas_oob_mode oob_mode, char *buf);
+ if (rval != QLA_SUCCESS) {
+- /*EMPTY*/
+- DEBUG2_3_11(printk("%s(%ld): failed=%x "
+- "mb0=%x.\n",
+- __func__, ha->host_no, rval, mcp->mb[0]));
++ DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
++ ha->host_no, rval, mcp->mb[0]));
+ } else {
+- /*EMPTY*/
+- DEBUG11(printk("%s(%ld): done.\n",
+- __func__, ha->host_no));
++ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+ }
-@@ -80,6 +80,20 @@ struct domain_device *sas_find_dev_by_rphy(struct sas_rphy *rphy);
+ return rval;
+ }
- void sas_hae_reset(struct work_struct *work);
+ int
+-qla24xx_get_vp_entry(scsi_qla_host_t *ha, uint16_t size, int vp_id)
++qla2x00_set_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
++ uint16_t port_speed, uint16_t *mb)
+ {
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
-+#ifdef CONFIG_SCSI_SAS_HOST_SMP
-+extern int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
-+ struct request *rsp);
-+#else
-+static inline int sas_smp_host_handler(struct Scsi_Host *shost,
-+ struct request *req,
-+ struct request *rsp)
-+{
-+ shost_printk(KERN_ERR, shost,
-+ "Cannot send SMP to a sas host (not enabled in CONFIG)\n");
-+ return -EINVAL;
-+}
-+#endif
++ if (!IS_IIDMA_CAPABLE(ha))
++ return QLA_FUNCTION_FAILED;
+
- static inline void sas_queue_event(int event, spinlock_t *lock,
- unsigned long *pending,
- struct work_struct *work,
-diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c
-index 7663841..f869fba 100644
---- a/drivers/scsi/libsas/sas_scsi_host.c
-+++ b/drivers/scsi/libsas/sas_scsi_host.c
-@@ -108,7 +108,7 @@ static void sas_scsi_task_done(struct sas_task *task)
- break;
- case SAM_CHECK_COND:
- memcpy(sc->sense_buffer, ts->buf,
-- max(SCSI_SENSE_BUFFERSIZE, ts->buf_valid_size));
-+ min(SCSI_SENSE_BUFFERSIZE, ts->buf_valid_size));
- stat = SAM_CHECK_COND;
- break;
- default:
-@@ -148,7 +148,6 @@ static struct sas_task *sas_create_task(struct scsi_cmnd *cmd,
- if (!task)
- return NULL;
+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+
+- mcp->mb[0] = MBC_MID_GET_VP_ENTRY;
+- mcp->mb[2] = MSW(ha->init_cb_dma);
+- mcp->mb[3] = LSW(ha->init_cb_dma);
+- mcp->mb[4] = 0;
+- mcp->mb[5] = 0;
+- mcp->mb[6] = MSW(MSD(ha->init_cb_dma));
+- mcp->mb[7] = LSW(MSD(ha->init_cb_dma));
+- mcp->mb[9] = vp_id;
+- mcp->out_mb = MBX_9|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
+- mcp->in_mb = MBX_0;
+- mcp->buf_size = size;
+- mcp->flags = MBX_DMA_OUT;
++ mcp->mb[0] = MBC_PORT_PARAMS;
++ mcp->mb[1] = loop_id;
++ mcp->mb[2] = BIT_0;
++ mcp->mb[3] = port_speed & (BIT_2|BIT_1|BIT_0);
++ mcp->mb[4] = mcp->mb[5] = 0;
++ mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
++ mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
+ mcp->tov = 30;
++ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(ha, mcp);
+
++ /* Return mailbox statuses. */
++ if (mb != NULL) {
++ mb[0] = mcp->mb[0];
++ mb[1] = mcp->mb[1];
++ mb[3] = mcp->mb[3];
++ mb[4] = mcp->mb[4];
++ mb[5] = mcp->mb[5];
++ }
++
+ if (rval != QLA_SUCCESS) {
+- /*EMPTY*/
+- DEBUG2_3_11(printk("qla24xx_get_vp_entry(%ld): failed=%x "
+- "mb0=%x.\n",
+- ha->host_no, rval, mcp->mb[0]));
++ DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
++ ha->host_no, rval));
+ } else {
+- /*EMPTY*/
+- DEBUG11(printk("qla24xx_get_vp_entry(%ld): done.\n",
+- ha->host_no));
++ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+ }
+
+ return rval;
+@@ -2873,7 +2731,7 @@ qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ DEBUG11(printk("%s(%ld): entered. Enabling index %d\n", __func__,
+ ha->host_no, vp_index));
+
+- if (vp_index == 0 || vp_index >= MAX_MULTI_ID_LOOP)
++ if (vp_index == 0 || vp_index >= ha->max_npiv_vports)
+ return QLA_PARAMETER_ERROR;
-- *(u32 *)cmd->sense_buffer = 0;
- task->uldd_task = cmd;
- ASSIGN_SAS_TASK(cmd, task);
+ vce = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &vce_dma);
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 821ee74..cf784cd 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -39,7 +39,7 @@ qla2x00_vp_stop_timer(scsi_qla_host_t *vha)
+ }
+ }
-@@ -200,6 +199,10 @@ int sas_queue_up(struct sas_task *task)
- */
- int sas_queuecommand(struct scsi_cmnd *cmd,
- void (*scsi_done)(struct scsi_cmnd *))
-+ __releases(host->host_lock)
-+ __acquires(dev->sata_dev.ap->lock)
-+ __releases(dev->sata_dev.ap->lock)
-+ __acquires(host->host_lock)
+-uint32_t
++static uint32_t
+ qla24xx_allocate_vp_id(scsi_qla_host_t *vha)
{
- int res = 0;
- struct domain_device *dev = cmd_to_domain_dev(cmd);
-@@ -410,7 +413,7 @@ static int sas_recover_I_T(struct domain_device *dev)
+ uint32_t vp_id;
+@@ -47,16 +47,15 @@ qla24xx_allocate_vp_id(scsi_qla_host_t *vha)
+
+ /* Find an empty slot and assign an vp_id */
+ down(&ha->vport_sem);
+- vp_id = find_first_zero_bit((unsigned long *)ha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC);
+- if (vp_id > MAX_MULTI_ID_FABRIC) {
+- DEBUG15(printk ("vp_id %d is bigger than MAX_MULTI_ID_FABRID\n",
+- vp_id));
++ vp_id = find_first_zero_bit(ha->vp_idx_map, ha->max_npiv_vports + 1);
++ if (vp_id > ha->max_npiv_vports) {
++ DEBUG15(printk ("vp_id %d is bigger than max-supported %d.\n",
++ vp_id, ha->max_npiv_vports));
+ up(&ha->vport_sem);
+ return vp_id;
+ }
+
+- set_bit(vp_id, (unsigned long *)ha->vp_idx_map);
++ set_bit(vp_id, ha->vp_idx_map);
+ ha->num_vhosts++;
+ vha->vp_idx = vp_id;
+ list_add_tail(&vha->vp_list, &ha->vp_list);
+@@ -73,12 +72,12 @@ qla24xx_deallocate_vp_id(scsi_qla_host_t *vha)
+ down(&ha->vport_sem);
+ vp_id = vha->vp_idx;
+ ha->num_vhosts--;
+- clear_bit(vp_id, (unsigned long *)ha->vp_idx_map);
++ clear_bit(vp_id, ha->vp_idx_map);
+ list_del(&vha->vp_list);
+ up(&ha->vport_sem);
}
- /* Find the sas_phy that's attached to this device */
--struct sas_phy *find_local_sas_phy(struct domain_device *dev)
-+static struct sas_phy *find_local_sas_phy(struct domain_device *dev)
+-scsi_qla_host_t *
++static scsi_qla_host_t *
+ qla24xx_find_vhost_by_name(scsi_qla_host_t *ha, uint8_t *port_name)
{
- struct domain_device *pdev = dev->parent;
- struct ex_phy *exphy = NULL;
-@@ -464,7 +467,7 @@ int sas_eh_bus_reset_handler(struct scsi_cmnd *cmd)
- res = sas_phy_reset(phy, 1);
- if (res)
- SAS_DPRINTK("Bus reset of %s failed 0x%x\n",
-- phy->dev.kobj.k_name,
-+ kobject_name(&phy->dev.kobj),
- res);
- if (res == TMF_RESP_FUNC_SUCC || res == TMF_RESP_FUNC_COMPLETE)
- return SUCCESS;
-diff --git a/drivers/scsi/libsas/sas_task.c b/drivers/scsi/libsas/sas_task.c
-new file mode 100644
-index 0000000..594524d
---- /dev/null
-+++ b/drivers/scsi/libsas/sas_task.c
-@@ -0,0 +1,36 @@
-+#include <linux/kernel.h>
-+#include <scsi/sas.h>
-+#include <scsi/libsas.h>
-+
-+/* fill task_status_struct based on SSP response frame */
-+void sas_ssp_task_response(struct device *dev, struct sas_task *task,
-+ struct ssp_response_iu *iu)
-+{
-+ struct task_status_struct *tstat = &task->task_status;
-+
-+ tstat->resp = SAS_TASK_COMPLETE;
-+
-+ if (iu->datapres == 0)
-+ tstat->stat = iu->status;
-+ else if (iu->datapres == 1)
-+ tstat->stat = iu->resp_data[3];
-+ else if (iu->datapres == 2) {
-+ tstat->stat = SAM_CHECK_COND;
-+ tstat->buf_valid_size =
-+ min_t(int, SAS_STATUS_BUF_SIZE,
-+ be32_to_cpu(iu->sense_data_len));
-+ memcpy(tstat->buf, iu->sense_data, tstat->buf_valid_size);
-+
-+ if (iu->status != SAM_CHECK_COND)
-+ dev_printk(KERN_WARNING, dev,
-+ "dev %llx sent sense data, but "
-+ "stat(%x) is not CHECK CONDITION\n",
-+ SAS_ADDR(task->dev->sas_addr),
-+ iu->status);
-+ }
-+ else
-+ /* when datapres contains corrupt/unknown value... */
-+ tstat->stat = SAM_CHECK_COND;
-+}
-+EXPORT_SYMBOL_GPL(sas_ssp_task_response);
-+
-diff --git a/drivers/scsi/libsrp.c b/drivers/scsi/libsrp.c
-index 2ad0a27..5cff020 100644
---- a/drivers/scsi/libsrp.c
-+++ b/drivers/scsi/libsrp.c
-@@ -192,18 +192,18 @@ static int srp_direct_data(struct scsi_cmnd *sc, struct srp_direct_buf *md,
+ scsi_qla_host_t *vha;
+@@ -216,11 +215,7 @@ qla2x00_alert_all_vps(scsi_qla_host_t *ha, uint16_t *mb)
+ if (ha->parent)
+ return;
- if (dma_map) {
- iue = (struct iu_entry *) sc->SCp.ptr;
-- sg = sc->request_buffer;
-+ sg = scsi_sglist(sc);
+- i = find_next_bit((unsigned long *)ha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, 1);
+- for (;i <= MAX_MULTI_ID_FABRIC;
+- i = find_next_bit((unsigned long *)ha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, i + 1)) {
++ for_each_mapped_vp_idx(ha, i) {
+ vp_idx_matched = 0;
-- dprintk("%p %u %u %d\n", iue, sc->request_bufflen,
-- md->len, sc->use_sg);
-+ dprintk("%p %u %u %d\n", iue, scsi_bufflen(sc),
-+ md->len, scsi_sg_count(sc));
+ list_for_each_entry(vha, &ha->vp_list, vp_list) {
+@@ -270,7 +265,7 @@ qla2x00_vp_abort_isp(scsi_qla_host_t *vha)
+ qla24xx_enable_vp(vha);
+ }
-- nsg = dma_map_sg(iue->target->dev, sg, sc->use_sg,
-+ nsg = dma_map_sg(iue->target->dev, sg, scsi_sg_count(sc),
- DMA_BIDIRECTIONAL);
- if (!nsg) {
-- printk("fail to map %p %d\n", iue, sc->use_sg);
-+ printk("fail to map %p %d\n", iue, scsi_sg_count(sc));
- return 0;
- }
-- len = min(sc->request_bufflen, md->len);
-+ len = min(scsi_bufflen(sc), md->len);
- } else
- len = md->len;
+-int
++static int
+ qla2x00_do_dpc_vp(scsi_qla_host_t *vha)
+ {
+ if (test_and_clear_bit(VP_IDX_ACQUIRED, &vha->vp_flags)) {
+@@ -311,11 +306,7 @@ qla2x00_do_dpc_all_vps(scsi_qla_host_t *ha)
-@@ -229,10 +229,10 @@ static int srp_indirect_data(struct scsi_cmnd *sc, struct srp_cmd *cmd,
+ clear_bit(VP_DPC_NEEDED, &ha->dpc_flags);
- if (dma_map || ext_desc) {
- iue = (struct iu_entry *) sc->SCp.ptr;
-- sg = sc->request_buffer;
-+ sg = scsi_sglist(sc);
+- i = find_next_bit((unsigned long *)ha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, 1);
+- for (;i <= MAX_MULTI_ID_FABRIC;
+- i = find_next_bit((unsigned long *)ha->vp_idx_map,
+- MAX_MULTI_ID_FABRIC + 1, i + 1)) {
++ for_each_mapped_vp_idx(ha, i) {
+ vp_idx_matched = 0;
- dprintk("%p %u %u %d %d\n",
-- iue, sc->request_bufflen, id->len,
-+ iue, scsi_bufflen(sc), id->len,
- cmd->data_in_desc_cnt, cmd->data_out_desc_cnt);
+ list_for_each_entry(vha, &ha->vp_list, vp_list) {
+@@ -350,15 +341,17 @@ qla24xx_vport_create_req_sanity_check(struct fc_vport *fc_vport)
+
+ /* Check up unique WWPN */
+ u64_to_wwn(fc_vport->port_name, port_name);
++ if (!memcmp(port_name, ha->port_name, WWN_SIZE))
++ return VPCERR_BAD_WWN;
+ vha = qla24xx_find_vhost_by_name(ha, port_name);
+ if (vha)
+ return VPCERR_BAD_WWN;
+
+ /* Check up max-npiv-supports */
+ if (ha->num_vhosts > ha->max_npiv_vports) {
+- DEBUG15(printk("scsi(%ld): num_vhosts %d is bigger than "
+- "max_npv_vports %d.\n", ha->host_no,
+- (uint16_t) ha->num_vhosts, (int) ha->max_npiv_vports));
++ DEBUG15(printk("scsi(%ld): num_vhosts %ud is bigger than "
++ "max_npv_vports %ud.\n", ha->host_no,
++ ha->num_vhosts, ha->max_npiv_vports));
+ return VPCERR_UNSUPPORTED;
}
+ return 0;
+@@ -412,8 +405,9 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
+ }
+ vha->mgmt_svr_loop_id = 10 + vha->vp_idx;
-@@ -268,13 +268,14 @@ static int srp_indirect_data(struct scsi_cmnd *sc, struct srp_cmd *cmd,
+- init_MUTEX(&vha->mbx_cmd_sem);
+- init_MUTEX_LOCKED(&vha->mbx_intr_sem);
++ init_completion(&vha->mbx_cmd_comp);
++ complete(&vha->mbx_cmd_comp);
++ init_completion(&vha->mbx_intr_comp);
- rdma:
- if (dma_map) {
-- nsg = dma_map_sg(iue->target->dev, sg, sc->use_sg, DMA_BIDIRECTIONAL);
-+ nsg = dma_map_sg(iue->target->dev, sg, scsi_sg_count(sc),
-+ DMA_BIDIRECTIONAL);
- if (!nsg) {
-- eprintk("fail to map %p %d\n", iue, sc->use_sg);
-+ eprintk("fail to map %p %d\n", iue, scsi_sg_count(sc));
- err = -EIO;
- goto free_mem;
- }
-- len = min(sc->request_bufflen, id->len);
-+ len = min(scsi_bufflen(sc), id->len);
- } else
- len = id->len;
+ INIT_LIST_HEAD(&vha->list);
+ INIT_LIST_HEAD(&vha->fcports);
+@@ -450,7 +444,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
+ num_hosts++;
-diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
-index ba3ecab..f26b953 100644
---- a/drivers/scsi/lpfc/lpfc.h
-+++ b/drivers/scsi/lpfc/lpfc.h
-@@ -29,7 +29,8 @@ struct lpfc_sli2_slim;
- #define LPFC_MAX_NS_RETRY 3 /* Number of retry attempts to contact
- the NameServer before giving up. */
- #define LPFC_CMD_PER_LUN 3 /* max outstanding cmds per lun */
--#define LPFC_SG_SEG_CNT 64 /* sg element count per scsi cmnd */
-+#define LPFC_DEFAULT_SG_SEG_CNT 64 /* sg element count per scsi cmnd */
-+#define LPFC_MAX_SG_SEG_CNT 256 /* sg element count per scsi cmnd */
- #define LPFC_IOCB_LIST_CNT 2250 /* list of IOCBs for fast-path usage. */
- #define LPFC_Q_RAMP_UP_INTERVAL 120 /* lun q_depth ramp up interval */
+ down(&ha->vport_sem);
+- set_bit(vha->vp_idx, (unsigned long *)ha->vp_idx_map);
++ set_bit(vha->vp_idx, ha->vp_idx_map);
+ ha->cur_vport_count++;
+ up(&ha->vport_sem);
-@@ -68,6 +69,7 @@ struct lpfc_dmabuf {
- struct list_head list;
- void *virt; /* virtual address ptr */
- dma_addr_t phys; /* mapped address */
-+ uint32_t buffer_tag; /* used for tagged queue ring */
- };
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 8ecc047..aba1e6d 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -105,13 +105,12 @@ static int qla2xxx_eh_abort(struct scsi_cmnd *);
+ static int qla2xxx_eh_device_reset(struct scsi_cmnd *);
+ static int qla2xxx_eh_bus_reset(struct scsi_cmnd *);
+ static int qla2xxx_eh_host_reset(struct scsi_cmnd *);
+-static int qla2x00_loop_reset(scsi_qla_host_t *ha);
+ static int qla2x00_device_reset(scsi_qla_host_t *, fc_port_t *);
- struct lpfc_dma_pool {
-@@ -272,10 +274,16 @@ struct lpfc_vport {
- #define FC_ABORT_DISCOVERY 0x8000 /* we want to abort discovery */
- #define FC_NDISC_ACTIVE 0x10000 /* NPort discovery active */
- #define FC_BYPASSED_MODE 0x20000 /* NPort is in bypassed mode */
--#define FC_RFF_NOT_SUPPORTED 0x40000 /* RFF_ID was rejected by switch */
- #define FC_VPORT_NEEDS_REG_VPI 0x80000 /* Needs to have its vpi registered */
- #define FC_RSCN_DEFERRED 0x100000 /* A deferred RSCN being processed */
+ static int qla2x00_change_queue_depth(struct scsi_device *, int);
+ static int qla2x00_change_queue_type(struct scsi_device *, int);
-+ uint32_t ct_flags;
-+#define FC_CT_RFF_ID 0x1 /* RFF_ID accepted by switch */
-+#define FC_CT_RNN_ID 0x2 /* RNN_ID accepted by switch */
-+#define FC_CT_RSNN_NN 0x4 /* RSNN_NN accepted by switch */
-+#define FC_CT_RSPN_ID 0x8 /* RSPN_ID accepted by switch */
-+#define FC_CT_RFT_ID 0x10 /* RFT_ID accepted by switch */
-+
- struct list_head fc_nodes;
+-struct scsi_host_template qla2x00_driver_template = {
++static struct scsi_host_template qla2x00_driver_template = {
+ .module = THIS_MODULE,
+ .name = QLA2XXX_DRIVER_NAME,
+ .queuecommand = qla2x00_queuecommand,
+@@ -179,13 +178,6 @@ struct scsi_transport_template *qla2xxx_transport_vport_template = NULL;
+ * Timer routines
+ */
- /* Keep counters for the number of entries in each list. */
-@@ -344,6 +352,7 @@ struct lpfc_vport {
- uint32_t cfg_discovery_threads;
- uint32_t cfg_log_verbose;
- uint32_t cfg_max_luns;
-+ uint32_t cfg_enable_da_id;
+-void qla2x00_timer(scsi_qla_host_t *);
+-
+-__inline__ void qla2x00_start_timer(scsi_qla_host_t *,
+- void *, unsigned long);
+-static __inline__ void qla2x00_restart_timer(scsi_qla_host_t *, unsigned long);
+-__inline__ void qla2x00_stop_timer(scsi_qla_host_t *);
+-
+ __inline__ void
+ qla2x00_start_timer(scsi_qla_host_t *ha, void *func, unsigned long interval)
+ {
+@@ -203,7 +195,7 @@ qla2x00_restart_timer(scsi_qla_host_t *ha, unsigned long interval)
+ mod_timer(&ha->timer, jiffies + interval * HZ);
+ }
- uint32_t dev_loss_tmo_changed;
+-__inline__ void
++static __inline__ void
+ qla2x00_stop_timer(scsi_qla_host_t *ha)
+ {
+ del_timer_sync(&ha->timer);
+@@ -214,12 +206,11 @@ static int qla2x00_do_dpc(void *data);
-@@ -360,6 +369,7 @@ struct lpfc_vport {
+ static void qla2x00_rst_aen(scsi_qla_host_t *);
- struct hbq_s {
- uint16_t entry_count; /* Current number of HBQ slots */
-+ uint16_t buffer_count; /* Current number of buffers posted */
- uint32_t next_hbqPutIdx; /* Index to next HBQ slot to use */
- uint32_t hbqPutIdx; /* HBQ slot to use */
- uint32_t local_hbqGetIdx; /* Local copy of Get index from Port */
-@@ -377,6 +387,11 @@ struct hbq_s {
- #define LPFC_ELS_HBQ 0
- #define LPFC_EXTRA_HBQ 1
+-uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
+-void qla2x00_mem_free(scsi_qla_host_t *ha);
++static uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
++static void qla2x00_mem_free(scsi_qla_host_t *ha);
+ static int qla2x00_allocate_sp_pool( scsi_qla_host_t *ha);
+ static void qla2x00_free_sp_pool(scsi_qla_host_t *ha);
+ static void qla2x00_sp_free_dma(scsi_qla_host_t *, srb_t *);
+-void qla2x00_sp_compl(scsi_qla_host_t *ha, srb_t *);
-+enum hba_temp_state {
-+ HBA_NORMAL_TEMP,
-+ HBA_OVER_TEMP
-+};
-+
- struct lpfc_hba {
- struct lpfc_sli sli;
- uint32_t sli_rev; /* SLI2 or SLI3 */
-@@ -457,7 +472,8 @@ struct lpfc_hba {
- uint64_t cfg_soft_wwnn;
- uint64_t cfg_soft_wwpn;
- uint32_t cfg_hba_queue_depth;
+ /* -------------------------------------------------------------------------- */
+
+@@ -1060,7 +1051,7 @@ eh_host_reset_lock:
+ * Returns:
+ * 0 = success
+ */
+-static int
++int
+ qla2x00_loop_reset(scsi_qla_host_t *ha)
+ {
+ int ret;
+@@ -1479,8 +1470,7 @@ qla2x00_set_isp_flags(scsi_qla_host_t *ha)
+ static int
+ qla2x00_iospace_config(scsi_qla_host_t *ha)
+ {
+- unsigned long pio, pio_len, pio_flags;
+- unsigned long mmio, mmio_len, mmio_flags;
++ resource_size_t pio;
+
+ if (pci_request_selected_regions(ha->pdev, ha->bars,
+ QLA2XXX_DRIVER_NAME)) {
+@@ -1495,10 +1485,8 @@ qla2x00_iospace_config(scsi_qla_host_t *ha)
+
+ /* We only need PIO for Flash operations on ISP2312 v2 chips. */
+ pio = pci_resource_start(ha->pdev, 0);
+- pio_len = pci_resource_len(ha->pdev, 0);
+- pio_flags = pci_resource_flags(ha->pdev, 0);
+- if (pio_flags & IORESOURCE_IO) {
+- if (pio_len < MIN_IOBASE_LEN) {
++ if (pci_resource_flags(ha->pdev, 0) & IORESOURCE_IO) {
++ if (pci_resource_len(ha->pdev, 0) < MIN_IOBASE_LEN) {
+ qla_printk(KERN_WARNING, ha,
+ "Invalid PCI I/O region size (%s)...\n",
+ pci_name(ha->pdev));
+@@ -1511,28 +1499,23 @@ qla2x00_iospace_config(scsi_qla_host_t *ha)
+ pio = 0;
+ }
+ ha->pio_address = pio;
+- ha->pio_length = pio_len;
+
+ skip_pio:
+ /* Use MMIO operations for all accesses. */
+- mmio = pci_resource_start(ha->pdev, 1);
+- mmio_len = pci_resource_len(ha->pdev, 1);
+- mmio_flags = pci_resource_flags(ha->pdev, 1);
-
-+ uint32_t cfg_enable_hba_reset;
-+ uint32_t cfg_enable_hba_heartbeat;
+- if (!(mmio_flags & IORESOURCE_MEM)) {
++ if (!(pci_resource_flags(ha->pdev, 1) & IORESOURCE_MEM)) {
+ qla_printk(KERN_ERR, ha,
+- "region #0 not an MMIO resource (%s), aborting\n",
++ "region #1 not an MMIO resource (%s), aborting\n",
+ pci_name(ha->pdev));
+ goto iospace_error_exit;
+ }
+- if (mmio_len < MIN_IOBASE_LEN) {
++ if (pci_resource_len(ha->pdev, 1) < MIN_IOBASE_LEN) {
+ qla_printk(KERN_ERR, ha,
+ "Invalid PCI mem region size (%s), aborting\n",
+ pci_name(ha->pdev));
+ goto iospace_error_exit;
+ }
- lpfc_vpd_t vpd; /* vital product data */
+- ha->iobase = ioremap(mmio, MIN_IOBASE_LEN);
++ ha->iobase = ioremap(pci_resource_start(ha->pdev, 1), MIN_IOBASE_LEN);
+ if (!ha->iobase) {
+ qla_printk(KERN_ERR, ha,
+ "cannot remap MMIO (%s), aborting\n", pci_name(ha->pdev));
+@@ -1701,9 +1684,10 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* load the F/W, read paramaters, and init the H/W */
+ ha->instance = num_hosts;
-@@ -544,8 +560,7 @@ struct lpfc_hba {
- struct list_head port_list;
- struct lpfc_vport *pport; /* physical lpfc_vport pointer */
- uint16_t max_vpi; /* Maximum virtual nports */
--#define LPFC_MAX_VPI 100 /* Max number of VPI supported */
--#define LPFC_MAX_VPORTS (LPFC_MAX_VPI+1)/* Max number of VPorts supported */
-+#define LPFC_MAX_VPI 0xFFFF /* Max number of VPI supported */
- unsigned long *vpi_bmask; /* vpi allocation table */
+- init_MUTEX(&ha->mbx_cmd_sem);
+ init_MUTEX(&ha->vport_sem);
+- init_MUTEX_LOCKED(&ha->mbx_intr_sem);
++ init_completion(&ha->mbx_cmd_comp);
++ complete(&ha->mbx_cmd_comp);
++ init_completion(&ha->mbx_intr_comp);
- /* Data structure used by fabric iocb scheduler */
-@@ -563,16 +578,30 @@ struct lpfc_hba {
- struct dentry *hba_debugfs_root;
- atomic_t debugfs_vport_count;
- struct dentry *debug_hbqinfo;
-- struct dentry *debug_dumpslim;
-+ struct dentry *debug_dumpHostSlim;
-+ struct dentry *debug_dumpHBASlim;
- struct dentry *debug_slow_ring_trc;
- struct lpfc_debugfs_trc *slow_ring_trc;
- atomic_t slow_ring_trc_cnt;
- #endif
+ INIT_LIST_HEAD(&ha->list);
+ INIT_LIST_HEAD(&ha->fcports);
+@@ -1807,6 +1791,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
-+ /* Used for deferred freeing of ELS data buffers */
-+ struct list_head elsbuf;
-+ int elsbuf_cnt;
-+ int elsbuf_prev_cnt;
+ qla2x00_init_host_attr(ha);
+
++ qla2x00_dfs_setup(ha);
+
-+ uint8_t temp_sensor_support;
- /* Fields used for heart beat. */
- unsigned long last_completion_time;
- struct timer_list hb_tmofunc;
- uint8_t hb_outstanding;
-+ /*
-+ * Following bit will be set for all buffer tags which are not
-+ * associated with any HBQ.
-+ */
-+#define QUE_BUFTAG_BIT (1<<31)
-+ uint32_t buffer_tag_count;
-+ enum hba_temp_state over_temp_state;
- };
+ qla_printk(KERN_INFO, ha, "\n"
+ " QLogic Fibre Channel HBA Driver: %s\n"
+ " QLogic %s - %s\n"
+@@ -1838,6 +1824,8 @@ qla2x00_remove_one(struct pci_dev *pdev)
- static inline struct Scsi_Host *
-@@ -598,5 +627,15 @@ lpfc_is_link_up(struct lpfc_hba *phba)
- phba->link_state == LPFC_HBA_READY;
- }
+ ha = pci_get_drvdata(pdev);
--#define FC_REG_DUMP_EVENT 0x10 /* Register for Dump events */
-+#define FC_REG_DUMP_EVENT 0x10 /* Register for Dump events */
-+#define FC_REG_TEMPERATURE_EVENT 0x20 /* Register for temperature
-+ event */
++ qla2x00_dfs_remove(ha);
++
+ qla2x00_free_sysfs_attr(ha);
-+struct temp_event {
-+ uint32_t event_type;
-+ uint32_t event_code;
-+ uint32_t data;
-+};
-+#define LPFC_CRIT_TEMP 0x1
-+#define LPFC_THRESHOLD_TEMP 0x2
-+#define LPFC_NORMAL_TEMP 0x3
-diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
-index 80a1121..4bae4a2 100644
---- a/drivers/scsi/lpfc/lpfc_attr.c
-+++ b/drivers/scsi/lpfc/lpfc_attr.c
-@@ -1,7 +1,7 @@
- /*******************************************************************
- * This file is part of the Emulex Linux Device Driver for *
- * Fibre Channel Host Bus Adapters. *
-- * Copyright (C) 2004-2007 Emulex. All rights reserved. *
-+ * Copyright (C) 2004-2008 Emulex. All rights reserved. *
- * EMULEX and SLI are trademarks of Emulex. *
- * www.emulex.com *
- * Portions Copyright (C) 2004-2005 Christoph Hellwig *
-@@ -45,6 +45,10 @@
- #define LPFC_MIN_DEVLOSS_TMO 1
- #define LPFC_MAX_DEVLOSS_TMO 255
+ fc_remove_host(ha->host);
+@@ -1871,8 +1859,11 @@ qla2x00_free_device(scsi_qla_host_t *ha)
+ kthread_stop(t);
+ }
-+#define LPFC_MAX_LINK_SPEED 8
-+#define LPFC_LINK_SPEED_BITMAP 0x00000117
-+#define LPFC_LINK_SPEED_STRING "0, 1, 2, 4, 8"
++ if (ha->flags.fce_enabled)
++ qla2x00_disable_fce_trace(ha, NULL, NULL);
+
- static void
- lpfc_jedec_to_ascii(int incr, char hdw[])
+ if (ha->eft)
+- qla2x00_trace_control(ha, TC_DISABLE, 0, 0);
++ qla2x00_disable_eft_trace(ha);
+
+ ha->flags.online = 0;
+
+@@ -2016,7 +2007,7 @@ qla2x00_mark_all_devices_lost(scsi_qla_host_t *ha, int defer)
+ * 0 = success.
+ * 1 = failure.
+ */
+-uint8_t
++static uint8_t
+ qla2x00_mem_alloc(scsi_qla_host_t *ha)
{
-@@ -86,6 +90,15 @@ lpfc_serialnum_show(struct class_device *cdev, char *buf)
- }
+ char name[16];
+@@ -2213,7 +2204,7 @@ qla2x00_mem_alloc(scsi_qla_host_t *ha)
+ * Input:
+ * ha = adapter block pointer.
+ */
+-void
++static void
+ qla2x00_mem_free(scsi_qla_host_t *ha)
+ {
+ struct list_head *fcpl, *fcptemp;
+@@ -2228,6 +2219,10 @@ qla2x00_mem_free(scsi_qla_host_t *ha)
+ /* free sp pool */
+ qla2x00_free_sp_pool(ha);
- static ssize_t
-+lpfc_temp_sensor_show(struct class_device *cdev, char *buf)
-+{
-+ struct Scsi_Host *shost = class_to_shost(cdev);
-+ struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
-+ struct lpfc_hba *phba = vport->phba;
-+ return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
-+}
++ if (ha->fce)
++ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, ha->fce,
++ ha->fce_dma);
+
-+static ssize_t
- lpfc_modeldesc_show(struct class_device *cdev, char *buf)
- {
- struct Scsi_Host *shost = class_to_shost(cdev);
-@@ -178,12 +191,9 @@ lpfc_state_show(struct class_device *cdev, char *buf)
- case LPFC_LINK_UP:
- case LPFC_CLEAR_LA:
- case LPFC_HBA_READY:
-- len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - \n");
-+ len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
+ if (ha->fw_dump) {
+ if (ha->eft)
+ dma_free_coherent(&ha->pdev->dev,
+@@ -2748,23 +2743,6 @@ qla2x00_timer(scsi_qla_host_t *ha)
+ qla2x00_restart_timer(ha, WATCH_INTERVAL);
+ }
- switch (vport->port_state) {
-- len += snprintf(buf + len, PAGE_SIZE-len,
-- "initializing\n");
+-/* XXX(hch): crude hack to emulate a down_timeout() */
+-int
+-qla2x00_down_timeout(struct semaphore *sema, unsigned long timeout)
+-{
+- const unsigned int step = 100; /* msecs */
+- unsigned int iterations = jiffies_to_msecs(timeout)/100;
+-
+- do {
+- if (!down_trylock(sema))
+- return 0;
+- if (msleep_interruptible(step))
- break;
- case LPFC_LOCAL_CFG_LINK:
- len += snprintf(buf + len, PAGE_SIZE-len,
- "Configuring Link\n");
-@@ -252,8 +262,7 @@ lpfc_issue_lip(struct Scsi_Host *shost)
- int mbxstatus = MBXERR_ERROR;
+- } while (--iterations > 0);
+-
+- return -ETIMEDOUT;
+-}
+-
+ /* Firmware interface routines. */
- if ((vport->fc_flag & FC_OFFLINE_MODE) ||
-- (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) ||
-- (vport->port_state != LPFC_VPORT_READY))
-+ (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO))
- return -EPERM;
+ #define FW_BLOBS 6
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index ad2fa01..b68fb73 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -22,7 +22,7 @@ static void qla2x00_nv_write(scsi_qla_host_t *, uint16_t);
+ * qla2x00_lock_nvram_access() -
+ * @ha: HA context
+ */
+-void
++static void
+ qla2x00_lock_nvram_access(scsi_qla_host_t *ha)
+ {
+ uint16_t data;
+@@ -55,7 +55,7 @@ qla2x00_lock_nvram_access(scsi_qla_host_t *ha)
+ * qla2x00_unlock_nvram_access() -
+ * @ha: HA context
+ */
+-void
++static void
+ qla2x00_unlock_nvram_access(scsi_qla_host_t *ha)
+ {
+ struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
+@@ -74,7 +74,7 @@ qla2x00_unlock_nvram_access(scsi_qla_host_t *ha)
+ *
+ * Returns the word read from nvram @addr.
+ */
+-uint16_t
++static uint16_t
+ qla2x00_get_nvram_word(scsi_qla_host_t *ha, uint32_t addr)
+ {
+ uint16_t data;
+@@ -93,7 +93,7 @@ qla2x00_get_nvram_word(scsi_qla_host_t *ha, uint32_t addr)
+ * @addr: Address in NVRAM to write
+ * @data: word to program
+ */
+-void
++static void
+ qla2x00_write_nvram_word(scsi_qla_host_t *ha, uint32_t addr, uint16_t data)
+ {
+ int count;
+@@ -550,7 +550,7 @@ qla24xx_write_flash_data(scsi_qla_host_t *ha, uint32_t *dwptr, uint32_t faddr,
+ int ret;
+ uint32_t liter, miter;
+ uint32_t sec_mask, rest_addr, conf_addr;
+- uint32_t fdata, findex ;
++ uint32_t fdata, findex, cnt;
+ uint8_t man_id, flash_id;
+ struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
+ dma_addr_t optrom_dma;
+@@ -690,8 +690,14 @@ qla24xx_write_flash_data(scsi_qla_host_t *ha, uint32_t *dwptr, uint32_t faddr,
+ 0xff0000) | ((fdata >> 16) & 0xff));
+ }
- pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
-@@ -305,12 +314,14 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)
+- /* Enable flash write-protection. */
++ /* Enable flash write-protection and wait for completion. */
+ qla24xx_write_flash_dword(ha, flash_conf_to_access_addr(0x101), 0x9c);
++ for (cnt = 300; cnt &&
++ qla24xx_read_flash_dword(ha,
++ flash_conf_to_access_addr(0x005)) & BIT_0;
++ cnt--) {
++ udelay(10);
++ }
- psli = &phba->sli;
+ /* Disable flash write. */
+ WRT_REG_DWORD(&reg->ctrl_status,
+diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
+index ae6f7a2..2c2f6b4 100644
+--- a/drivers/scsi/qla2xxx/qla_version.h
++++ b/drivers/scsi/qla2xxx/qla_version.h
+@@ -7,7 +7,7 @@
+ /*
+ * Driver version
+ */
+-#define QLA2XXX_VERSION "8.02.00-k5"
++#define QLA2XXX_VERSION "8.02.00-k7"
-+ /* Wait a little for things to settle down, but not
-+ * long enough for dev loss timeout to expire.
-+ */
- for (i = 0; i < psli->num_rings; i++) {
- pring = &psli->ring[i];
-- /* The linkdown event takes 30 seconds to timeout. */
- while (pring->txcmplq_cnt) {
- msleep(10);
-- if (cnt++ > 3000) {
-+ if (cnt++ > 500) { /* 5 secs */
- lpfc_printf_log(phba,
- KERN_WARNING, LOG_INIT,
- "0466 Outstanding IO when "
-@@ -336,6 +347,9 @@ lpfc_selective_reset(struct lpfc_hba *phba)
- struct completion online_compl;
- int status = 0;
+ #define QLA_DRIVER_MAJOR_VER 8
+ #define QLA_DRIVER_MINOR_VER 2
+diff --git a/drivers/scsi/qla4xxx/ql4_init.c b/drivers/scsi/qla4xxx/ql4_init.c
+index d692c71..cbe0a17 100644
+--- a/drivers/scsi/qla4xxx/ql4_init.c
++++ b/drivers/scsi/qla4xxx/ql4_init.c
+@@ -5,6 +5,7 @@
+ * See LICENSE.qla4xxx for copyright and licensing details.
+ */
-+ if (!phba->cfg_enable_hba_reset)
-+ return -EIO;
-+
- status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
++#include <scsi/iscsi_if.h>
+ #include "ql4_def.h"
+ #include "ql4_glbl.h"
+ #include "ql4_dbg.h"
+@@ -1305,7 +1306,8 @@ int qla4xxx_process_ddb_changed(struct scsi_qla_host *ha,
+ atomic_set(&ddb_entry->relogin_timer, 0);
+ clear_bit(DF_RELOGIN, &ddb_entry->flags);
+ clear_bit(DF_NO_RELOGIN, &ddb_entry->flags);
+- iscsi_if_create_session_done(ddb_entry->conn);
++ iscsi_session_event(ddb_entry->sess,
++ ISCSI_KEVENT_CREATE_SESSION);
+ /*
+ * Change the lun state to READY in case the lun TIMEOUT before
+ * the device came back.
+diff --git a/drivers/scsi/qla4xxx/ql4_isr.c b/drivers/scsi/qla4xxx/ql4_isr.c
+index 4a154be..0f029d0 100644
+--- a/drivers/scsi/qla4xxx/ql4_isr.c
++++ b/drivers/scsi/qla4xxx/ql4_isr.c
+@@ -123,15 +123,14 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
+ break;
- if (status != 0)
-@@ -409,6 +423,8 @@ lpfc_board_mode_store(struct class_device *cdev, const char *buf, size_t count)
- struct completion online_compl;
- int status=0;
+ /* Copy Sense Data into sense buffer. */
+- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
-+ if (!phba->cfg_enable_hba_reset)
-+ return -EACCES;
- init_completion(&online_compl);
+ sensebytecnt = le16_to_cpu(sts_entry->senseDataByteCnt);
+ if (sensebytecnt == 0)
+ break;
- if(strncmp(buf, "online", sizeof("online") - 1) == 0) {
-@@ -908,6 +924,8 @@ static CLASS_DEVICE_ATTR(used_rpi, S_IRUGO, lpfc_used_rpi_show, NULL);
- static CLASS_DEVICE_ATTR(max_xri, S_IRUGO, lpfc_max_xri_show, NULL);
- static CLASS_DEVICE_ATTR(used_xri, S_IRUGO, lpfc_used_xri_show, NULL);
- static CLASS_DEVICE_ATTR(npiv_info, S_IRUGO, lpfc_npiv_info_show, NULL);
-+static CLASS_DEVICE_ATTR(lpfc_temp_sensor, S_IRUGO, lpfc_temp_sensor_show,
-+ NULL);
+ memcpy(cmd->sense_buffer, sts_entry->senseData,
+- min(sensebytecnt,
+- (uint16_t) sizeof(cmd->sense_buffer)));
++ min_t(uint16_t, sensebytecnt, SCSI_SENSE_BUFFERSIZE));
+ DEBUG2(printk("scsi%ld:%d:%d:%d: %s: sense key = %x, "
+ "ASC/ASCQ = %02x/%02x\n", ha->host_no,
+@@ -208,8 +207,7 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
+ break;
- static char *lpfc_soft_wwn_key = "C99G71SL8032A";
-@@ -971,6 +989,14 @@ lpfc_soft_wwpn_store(struct class_device *cdev, const char *buf, size_t count)
- unsigned int i, j, cnt=count;
- u8 wwpn[8];
+ /* Copy Sense Data into sense buffer. */
+- memset(cmd->sense_buffer, 0,
+- sizeof(cmd->sense_buffer));
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
-+ if (!phba->cfg_enable_hba_reset)
-+ return -EACCES;
-+ spin_lock_irq(&phba->hbalock);
-+ if (phba->over_temp_state == HBA_OVER_TEMP) {
-+ spin_unlock_irq(&phba->hbalock);
-+ return -EACCES;
-+ }
-+ spin_unlock_irq(&phba->hbalock);
- /* count may include a LF at end of string */
- if (buf[cnt-1] == '\n')
- cnt--;
-@@ -1102,7 +1128,13 @@ MODULE_PARM_DESC(lpfc_sli_mode, "SLI mode selector:"
- " 2 - select SLI-2 even on SLI-3 capable HBAs,"
- " 3 - select SLI-3");
+ sensebytecnt =
+ le16_to_cpu(sts_entry->senseDataByteCnt);
+@@ -217,8 +215,7 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
+ break;
--LPFC_ATTR_R(enable_npiv, 0, 0, 1, "Enable NPIV functionality");
-+int lpfc_enable_npiv = 0;
-+module_param(lpfc_enable_npiv, int, 0);
-+MODULE_PARM_DESC(lpfc_enable_npiv, "Enable NPIV functionality");
-+lpfc_param_show(enable_npiv);
-+lpfc_param_init(enable_npiv, 0, 0, 1);
-+static CLASS_DEVICE_ATTR(lpfc_enable_npiv, S_IRUGO,
-+ lpfc_enable_npiv_show, NULL);
+ memcpy(cmd->sense_buffer, sts_entry->senseData,
+- min(sensebytecnt,
+- (uint16_t) sizeof(cmd->sense_buffer)));
++ min_t(uint16_t, sensebytecnt, SCSI_SENSE_BUFFERSIZE));
- /*
- # lpfc_nodev_tmo: If set, it will hold all I/O errors on devices that disappear
-@@ -1248,6 +1280,13 @@ LPFC_VPORT_ATTR_HEX_RW(log_verbose, 0x0, 0x0, 0xffff,
- "Verbose logging bit-mask");
+ DEBUG2(printk("scsi%ld:%d:%d:%d: %s: sense key = %x, "
+ "ASC/ASCQ = %02x/%02x\n", ha->host_no,
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 89460d2..d3f8664 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -173,18 +173,6 @@ static void qla4xxx_conn_stop(struct iscsi_cls_conn *conn, int flag)
+ printk(KERN_ERR "iscsi: invalid stop flag %d\n", flag);
+ }
- /*
-+# lpfc_enable_da_id: This turns on the DA_ID CT command that deregisters
-+# objects that have been registered with the nameserver after login.
-+*/
-+LPFC_VPORT_ATTR_R(enable_da_id, 0, 0, 1,
-+ "Deregister nameserver objects before LOGO");
-+
-+/*
- # lun_queue_depth: This parameter is used to limit the number of outstanding
- # commands per FCP LUN. Value range is [1,128]. Default value is 30.
- */
-@@ -1369,7 +1408,33 @@ LPFC_VPORT_ATTR_R(scan_down, 1, 0, 1,
- # Set loop mode if you want to run as an NL_Port. Value range is [0,0x6].
- # Default value is 0.
- */
--LPFC_ATTR_RW(topology, 0, 0, 6, "Select Fibre Channel topology");
-+static int
-+lpfc_topology_set(struct lpfc_hba *phba, int val)
-+{
-+ int err;
-+ uint32_t prev_val;
-+ if (val >= 0 && val <= 6) {
-+ prev_val = phba->cfg_topology;
-+ phba->cfg_topology = val;
-+ err = lpfc_issue_lip(lpfc_shost_from_vport(phba->pport));
-+ if (err)
-+ phba->cfg_topology = prev_val;
-+ return err;
-+ }
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "%d:0467 lpfc_topology attribute cannot be set to %d, "
-+ "allowed range is [0, 6]\n",
-+ phba->brd_no, val);
-+ return -EINVAL;
-+}
-+static int lpfc_topology = 0;
-+module_param(lpfc_topology, int, 0);
-+MODULE_PARM_DESC(lpfc_topology, "Select Fibre Channel topology");
-+lpfc_param_show(topology)
-+lpfc_param_init(topology, 0, 0, 6)
-+lpfc_param_store(topology)
-+static CLASS_DEVICE_ATTR(lpfc_topology, S_IRUGO | S_IWUSR,
-+ lpfc_topology_show, lpfc_topology_store);
+-static ssize_t format_addr(char *buf, const unsigned char *addr, int len)
+-{
+- int i;
+- char *cp = buf;
+-
+- for (i = 0; i < len; i++)
+- cp += sprintf(cp, "%02x%c", addr[i],
+- i == (len - 1) ? '\n' : ':');
+- return cp - buf;
+-}
+-
+-
+ static int qla4xxx_host_get_param(struct Scsi_Host *shost,
+ enum iscsi_host_param param, char *buf)
+ {
+@@ -193,7 +181,7 @@ static int qla4xxx_host_get_param(struct Scsi_Host *shost,
- /*
- # lpfc_link_speed: Link speed selection for initializing the Fibre Channel
-@@ -1381,7 +1446,59 @@ LPFC_ATTR_RW(topology, 0, 0, 6, "Select Fibre Channel topology");
- # 8 = 8 Gigabaud
- # Value range is [0,8]. Default value is 0.
- */
--LPFC_ATTR_R(link_speed, 0, 0, 8, "Select link speed");
-+static int
-+lpfc_link_speed_set(struct lpfc_hba *phba, int val)
-+{
-+ int err;
-+ uint32_t prev_val;
-+
-+ if (((val == LINK_SPEED_1G) && !(phba->lmt & LMT_1Gb)) ||
-+ ((val == LINK_SPEED_2G) && !(phba->lmt & LMT_2Gb)) ||
-+ ((val == LINK_SPEED_4G) && !(phba->lmt & LMT_4Gb)) ||
-+ ((val == LINK_SPEED_8G) && !(phba->lmt & LMT_8Gb)) ||
-+ ((val == LINK_SPEED_10G) && !(phba->lmt & LMT_10Gb)))
-+ return -EINVAL;
-+
-+ if ((val >= 0 && val <= LPFC_MAX_LINK_SPEED)
-+ && (LPFC_LINK_SPEED_BITMAP & (1 << val))) {
-+ prev_val = phba->cfg_link_speed;
-+ phba->cfg_link_speed = val;
-+ err = lpfc_issue_lip(lpfc_shost_from_vport(phba->pport));
-+ if (err)
-+ phba->cfg_link_speed = prev_val;
-+ return err;
-+ }
-+
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "%d:0469 lpfc_link_speed attribute cannot be set to %d, "
-+ "allowed range is [0, 8]\n",
-+ phba->brd_no, val);
-+ return -EINVAL;
-+}
-+
-+static int lpfc_link_speed = 0;
-+module_param(lpfc_link_speed, int, 0);
-+MODULE_PARM_DESC(lpfc_link_speed, "Select link speed");
-+lpfc_param_show(link_speed)
-+static int
-+lpfc_link_speed_init(struct lpfc_hba *phba, int val)
-+{
-+ if ((val >= 0 && val <= LPFC_MAX_LINK_SPEED)
-+ && (LPFC_LINK_SPEED_BITMAP & (1 << val))) {
-+ phba->cfg_link_speed = val;
-+ return 0;
-+ }
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "0454 lpfc_link_speed attribute cannot "
-+ "be set to %d, allowed values are "
-+ "["LPFC_LINK_SPEED_STRING"]\n", val);
-+ phba->cfg_link_speed = 0;
-+ return -EINVAL;
-+}
-+
-+lpfc_param_store(link_speed)
-+static CLASS_DEVICE_ATTR(lpfc_link_speed, S_IRUGO | S_IWUSR,
-+ lpfc_link_speed_show, lpfc_link_speed_store);
+ switch (param) {
+ case ISCSI_HOST_PARAM_HWADDRESS:
+- len = format_addr(buf, ha->my_mac, MAC_ADDR_LEN);
++ len = sysfs_format_mac(buf, ha->my_mac, MAC_ADDR_LEN);
+ break;
+ case ISCSI_HOST_PARAM_IPADDRESS:
+ len = sprintf(buf, "%d.%d.%d.%d\n", ha->ip_address[0],
+@@ -298,8 +286,7 @@ void qla4xxx_destroy_sess(struct ddb_entry *ddb_entry)
+ return;
- /*
- # lpfc_fcp_class: Determines FC class to use for the FCP protocol.
-@@ -1479,7 +1596,30 @@ LPFC_ATTR_RW(poll_tmo, 10, 1, 255,
- */
- LPFC_ATTR_R(use_msi, 0, 0, 1, "Use Message Signaled Interrupts, if possible");
+ if (ddb_entry->conn) {
+- iscsi_if_destroy_session_done(ddb_entry->conn);
+- iscsi_destroy_conn(ddb_entry->conn);
++ atomic_set(&ddb_entry->state, DDB_STATE_DEAD);
+ iscsi_remove_session(ddb_entry->sess);
+ }
+ iscsi_free_session(ddb_entry->sess);
+@@ -309,6 +296,7 @@ int qla4xxx_add_sess(struct ddb_entry *ddb_entry)
+ {
+ int err;
-+/*
-+# lpfc_enable_hba_reset: Allow or prevent HBA resets to the hardware.
-+# 0 = HBA resets disabled
-+# 1 = HBA resets enabled (default)
-+# Value range is [0,1]. Default value is 1.
-+*/
-+LPFC_ATTR_R(enable_hba_reset, 1, 0, 1, "Enable HBA resets from the driver.");
++ ddb_entry->sess->recovery_tmo = ddb_entry->ha->port_down_retry_count;
+ err = iscsi_add_session(ddb_entry->sess, ddb_entry->fw_ddb_index);
+ if (err) {
+ DEBUG2(printk(KERN_ERR "Could not add session.\n"));
+@@ -321,9 +309,6 @@ int qla4xxx_add_sess(struct ddb_entry *ddb_entry)
+ DEBUG2(printk(KERN_ERR "Could not add connection.\n"));
+ return -ENOMEM;
+ }
+-
+- ddb_entry->sess->recovery_tmo = ddb_entry->ha->port_down_retry_count;
+- iscsi_if_create_session_done(ddb_entry->conn);
+ return 0;
+ }
+
+diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c
+index 7a2e798..65455ab 100644
+--- a/drivers/scsi/qlogicpti.c
++++ b/drivers/scsi/qlogicpti.c
+@@ -871,11 +871,12 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
+ struct scatterlist *sg, *s;
+ int i, n;
+
+- if (Cmnd->use_sg) {
++ if (scsi_bufflen(Cmnd)) {
+ int sg_count;
+
+- sg = (struct scatterlist *) Cmnd->request_buffer;
+- sg_count = sbus_map_sg(qpti->sdev, sg, Cmnd->use_sg, Cmnd->sc_data_direction);
++ sg = scsi_sglist(Cmnd);
++ sg_count = sbus_map_sg(qpti->sdev, sg, scsi_sg_count(Cmnd),
++ Cmnd->sc_data_direction);
+
+ ds = cmd->dataseg;
+ cmd->segment_cnt = sg_count;
+@@ -914,16 +915,6 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
+ }
+ sg_count -= n;
+ }
+- } else if (Cmnd->request_bufflen) {
+- Cmnd->SCp.ptr = (char *)(unsigned long)
+- sbus_map_single(qpti->sdev,
+- Cmnd->request_buffer,
+- Cmnd->request_bufflen,
+- Cmnd->sc_data_direction);
+-
+- cmd->dataseg[0].d_base = (u32) ((unsigned long)Cmnd->SCp.ptr);
+- cmd->dataseg[0].d_count = Cmnd->request_bufflen;
+- cmd->segment_cnt = 1;
+ } else {
+ cmd->dataseg[0].d_base = 0;
+ cmd->dataseg[0].d_count = 0;
+@@ -1151,7 +1142,7 @@ static struct scsi_cmnd *qlogicpti_intr_handler(struct qlogicpti *qpti)
+
+ if (sts->state_flags & SF_GOT_SENSE)
+ memcpy(Cmnd->sense_buffer, sts->req_sense_data,
+- sizeof(Cmnd->sense_buffer));
++ SCSI_SENSE_BUFFERSIZE);
+
+ if (sts->hdr.entry_type == ENTRY_STATUS)
+ Cmnd->result =
+@@ -1159,17 +1150,11 @@ static struct scsi_cmnd *qlogicpti_intr_handler(struct qlogicpti *qpti)
+ else
+ Cmnd->result = DID_ERROR << 16;
+
+- if (Cmnd->use_sg) {
++ if (scsi_bufflen(Cmnd))
+ sbus_unmap_sg(qpti->sdev,
+- (struct scatterlist *)Cmnd->request_buffer,
+- Cmnd->use_sg,
++ scsi_sglist(Cmnd), scsi_sg_count(Cmnd),
+ Cmnd->sc_data_direction);
+- } else if (Cmnd->request_bufflen) {
+- sbus_unmap_single(qpti->sdev,
+- (__u32)((unsigned long)Cmnd->SCp.ptr),
+- Cmnd->request_bufflen,
+- Cmnd->sc_data_direction);
+- }
+
-+/*
-+# lpfc_enable_hba_heartbeat: Enable HBA heartbeat timer..
-+# 0 = HBA Heartbeat disabled
-+# 1 = HBA Heartbeat enabled (default)
-+# Value range is [0,1]. Default value is 1.
-+*/
-+LPFC_ATTR_R(enable_hba_heartbeat, 1, 0, 1, "Enable HBA Heartbeat.");
+ qpti->cmd_count[Cmnd->device->id]--;
+ sbus_writew(out_ptr, qpti->qregs + MBOX5);
+ Cmnd->host_scribble = (unsigned char *) done_queue;
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 0fb1709..1a9fba6 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -122,6 +122,11 @@ static const char *const scsi_device_types[] = {
+ "Automation/Drive ",
+ };
-+/*
-+ * lpfc_sg_seg_cnt: Initial Maximum DMA Segment Count
-+ * This value can be set to values between 64 and 256. The default value is
-+ * 64, but may be increased to allow for larger Max I/O sizes. The scsi layer
-+ * will be allowed to request I/Os of sizes up to (MAX_SEG_COUNT * SEG_SIZE).
++/**
++ * scsi_device_type - Return 17 char string indicating device type.
++ * @type: type number to look up
+ */
-+LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
-+ LPFC_MAX_SG_SEG_CNT, "Max Scatter Gather Segment Count");
++
+ const char * scsi_device_type(unsigned type)
+ {
+ if (type == 0x1e)
+@@ -136,32 +141,45 @@ const char * scsi_device_type(unsigned type)
+ EXPORT_SYMBOL(scsi_device_type);
- struct class_device_attribute *lpfc_hba_attrs[] = {
- &class_device_attr_info,
-@@ -1494,6 +1634,7 @@ struct class_device_attribute *lpfc_hba_attrs[] = {
- &class_device_attr_state,
- &class_device_attr_num_discovered_ports,
- &class_device_attr_lpfc_drvr_version,
-+ &class_device_attr_lpfc_temp_sensor,
- &class_device_attr_lpfc_log_verbose,
- &class_device_attr_lpfc_lun_queue_depth,
- &class_device_attr_lpfc_hba_queue_depth,
-@@ -1530,6 +1671,9 @@ struct class_device_attribute *lpfc_hba_attrs[] = {
- &class_device_attr_lpfc_soft_wwnn,
- &class_device_attr_lpfc_soft_wwpn,
- &class_device_attr_lpfc_soft_wwn_enable,
-+ &class_device_attr_lpfc_enable_hba_reset,
-+ &class_device_attr_lpfc_enable_hba_heartbeat,
-+ &class_device_attr_lpfc_sg_seg_cnt,
- NULL,
+ struct scsi_host_cmd_pool {
+- struct kmem_cache *slab;
+- unsigned int users;
+- char *name;
+- unsigned int slab_flags;
+- gfp_t gfp_mask;
++ struct kmem_cache *cmd_slab;
++ struct kmem_cache *sense_slab;
++ unsigned int users;
++ char *cmd_name;
++ char *sense_name;
++ unsigned int slab_flags;
++ gfp_t gfp_mask;
};
-@@ -1552,6 +1696,7 @@ struct class_device_attribute *lpfc_vport_attrs[] = {
- &class_device_attr_lpfc_max_luns,
- &class_device_attr_nport_evt_cnt,
- &class_device_attr_npiv_info,
-+ &class_device_attr_lpfc_enable_da_id,
- NULL,
+ static struct scsi_host_cmd_pool scsi_cmd_pool = {
+- .name = "scsi_cmd_cache",
++ .cmd_name = "scsi_cmd_cache",
++ .sense_name = "scsi_sense_cache",
+ .slab_flags = SLAB_HWCACHE_ALIGN,
};
-@@ -1727,13 +1872,18 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
-
- spin_lock_irq(&phba->hbalock);
+ static struct scsi_host_cmd_pool scsi_cmd_dma_pool = {
+- .name = "scsi_cmd_cache(DMA)",
++ .cmd_name = "scsi_cmd_cache(DMA)",
++ .sense_name = "scsi_sense_cache(DMA)",
+ .slab_flags = SLAB_HWCACHE_ALIGN|SLAB_CACHE_DMA,
+ .gfp_mask = __GFP_DMA,
+ };
-+ if (phba->over_temp_state == HBA_OVER_TEMP) {
-+ sysfs_mbox_idle(phba);
-+ spin_unlock_irq(&phba->hbalock);
-+ return -EACCES;
-+ }
-+
- if (off == 0 &&
- phba->sysfs_mbox.state == SMBOX_WRITING &&
- phba->sysfs_mbox.offset >= 2 * sizeof(uint32_t)) {
+ static DEFINE_MUTEX(host_cmd_pool_mutex);
- switch (phba->sysfs_mbox.mbox->mb.mbxCommand) {
- /* Offline only */
-- case MBX_WRITE_NV:
- case MBX_INIT_LINK:
- case MBX_DOWN_LINK:
- case MBX_CONFIG_LINK:
-@@ -1744,9 +1894,7 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
- case MBX_DUMP_CONTEXT:
- case MBX_RUN_DIAGS:
- case MBX_RESTART:
-- case MBX_FLASH_WR_ULA:
- case MBX_SET_MASK:
-- case MBX_SET_SLIM:
- case MBX_SET_DEBUG:
- if (!(vport->fc_flag & FC_OFFLINE_MODE)) {
- printk(KERN_WARNING "mbox_read:Command 0x%x "
-@@ -1756,6 +1904,8 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
- spin_unlock_irq(&phba->hbalock);
- return -EPERM;
- }
-+ case MBX_WRITE_NV:
-+ case MBX_WRITE_VPARMS:
- case MBX_LOAD_SM:
- case MBX_READ_NV:
- case MBX_READ_CONFIG:
-@@ -1772,6 +1922,8 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
- case MBX_LOAD_EXP_ROM:
- case MBX_BEACON:
- case MBX_DEL_LD_ENTRY:
-+ case MBX_SET_VARIABLE:
-+ case MBX_WRITE_WWN:
- break;
- case MBX_READ_SPARM64:
- case MBX_READ_LA:
-@@ -1793,6 +1945,17 @@ sysfs_mbox_read(struct kobject *kobj, struct bin_attribute *bin_attr,
- return -EPERM;
- }
++/**
++ * __scsi_get_command - Allocate a struct scsi_cmnd
++ * @shost: host to transmit command
++ * @gfp_mask: allocation mask
++ *
++ * Description: allocate a struct scsi_cmd from host's slab, recycling from the
++ * host's free_list if necessary.
++ */
+ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
+ {
+ struct scsi_cmnd *cmd;
++ unsigned char *buf;
-+ /* If HBA encountered an error attention, allow only DUMP
-+ * mailbox command until the HBA is restarted.
-+ */
-+ if ((phba->pport->stopped) &&
-+ (phba->sysfs_mbox.mbox->mb.mbxCommand
-+ != MBX_DUMP_MEMORY)) {
-+ sysfs_mbox_idle(phba);
-+ spin_unlock_irq(&phba->hbalock);
-+ return -EPERM;
-+ }
-+
- phba->sysfs_mbox.mbox->vport = vport;
+- cmd = kmem_cache_alloc(shost->cmd_pool->slab,
+- gfp_mask | shost->cmd_pool->gfp_mask);
++ cmd = kmem_cache_alloc(shost->cmd_pool->cmd_slab,
++ gfp_mask | shost->cmd_pool->gfp_mask);
- if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) {
-@@ -1993,7 +2156,8 @@ lpfc_get_host_speed(struct Scsi_Host *shost)
- fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
- break;
+ if (unlikely(!cmd)) {
+ unsigned long flags;
+@@ -173,19 +191,32 @@ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
+ list_del_init(&cmd->list);
}
-- }
-+ } else
-+ fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
+ spin_unlock_irqrestore(&shost->free_list_lock, flags);
++
++ if (cmd) {
++ buf = cmd->sense_buffer;
++ memset(cmd, 0, sizeof(*cmd));
++ cmd->sense_buffer = buf;
++ }
++ } else {
++ buf = kmem_cache_alloc(shost->cmd_pool->sense_slab,
++ gfp_mask | shost->cmd_pool->gfp_mask);
++ if (likely(buf)) {
++ memset(cmd, 0, sizeof(*cmd));
++ cmd->sense_buffer = buf;
++ } else {
++ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
++ cmd = NULL;
++ }
+ }
- spin_unlock_irq(shost->host_lock);
+ return cmd;
}
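[editor's note, not part of the patch] In the `__scsi_get_command()` hunk above, the command now comes from one cache and its sense buffer from another; if the sense allocation fails the command is freed again, and when zeroing a recycled command the code saves and restores the sense pointer so it is not wiped. A hedged userspace sketch of that two-allocation pattern using plain `malloc` in place of the slab caches (all names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SENSE_BUFFERSIZE 96

struct cmd {
	char opaque[64];             /* stands in for the command body */
	unsigned char *sense_buffer; /* separately allocated */
};

/* Allocate a command plus its detached sense buffer; on partial
 * failure, roll back so the caller sees either both or neither. */
struct cmd *get_command(void)
{
	struct cmd *c = malloc(sizeof(*c));
	unsigned char *buf;

	if (!c)
		return NULL;

	buf = malloc(SENSE_BUFFERSIZE);
	if (!buf) {
		free(c);              /* roll back the first allocation */
		return NULL;
	}

	memset(c, 0, sizeof(*c));     /* zero the command... */
	c->sense_buffer = buf;        /* ...then restore the sense pointer */
	return c;
}

void put_command(struct cmd *c)
{
	if (c) {
		free(c->sense_buffer); /* free in reverse order of get */
		free(c);
	}
}

static int demo(void)
{
	struct cmd *c = get_command();
	if (!c || !c->sense_buffer)
		return 0;
	put_command(c);
	return 1;
}
```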
-@@ -2013,7 +2177,7 @@ lpfc_get_host_fabric_name (struct Scsi_Host *shost)
- node_name = wwn_to_u64(phba->fc_fabparam.nodeName.u.wwn);
- else
- /* fabric is local port if there is no F/FL_Port */
-- node_name = wwn_to_u64(vport->fc_nodename.u.wwn);
-+ node_name = 0;
-
- spin_unlock_irq(shost->host_lock);
-
-@@ -2337,8 +2501,6 @@ struct fc_function_template lpfc_transport_functions = {
- .dev_loss_tmo_callbk = lpfc_dev_loss_tmo_callbk,
- .terminate_rport_io = lpfc_terminate_rport_io,
+ EXPORT_SYMBOL_GPL(__scsi_get_command);
-- .vport_create = lpfc_vport_create,
-- .vport_delete = lpfc_vport_delete,
- .dd_fcvport_size = sizeof(struct lpfc_vport *),
- };
+-/*
+- * Function: scsi_get_command()
+- *
+- * Purpose: Allocate and setup a scsi command block
+- *
+- * Arguments: dev - parent scsi device
+- * gfp_mask- allocator flags
++/**
++ * scsi_get_command - Allocate and setup a scsi command block
++ * @dev: parent scsi device
++ * @gfp_mask: allocator flags
+ *
+ * Returns: The allocated scsi command structure.
+ */
+@@ -202,7 +233,6 @@ struct scsi_cmnd *scsi_get_command(struct scsi_device *dev, gfp_t gfp_mask)
+ if (likely(cmd != NULL)) {
+ unsigned long flags;
-@@ -2414,21 +2576,23 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
- lpfc_poll_tmo_init(phba, lpfc_poll_tmo);
- lpfc_enable_npiv_init(phba, lpfc_enable_npiv);
- lpfc_use_msi_init(phba, lpfc_use_msi);
-+ lpfc_enable_hba_reset_init(phba, lpfc_enable_hba_reset);
-+ lpfc_enable_hba_heartbeat_init(phba, lpfc_enable_hba_heartbeat);
- phba->cfg_poll = lpfc_poll;
- phba->cfg_soft_wwnn = 0L;
- phba->cfg_soft_wwpn = 0L;
-- /*
-- * The total number of segments is the configuration value plus 2
-- * since the IOCB need a command and response bde.
-- */
-- phba->cfg_sg_seg_cnt = LPFC_SG_SEG_CNT + 2;
-+ lpfc_sg_seg_cnt_init(phba, lpfc_sg_seg_cnt);
-+ /* Also reinitialize the host templates with new values. */
-+ lpfc_vport_template.sg_tablesize = phba->cfg_sg_seg_cnt;
-+ lpfc_template.sg_tablesize = phba->cfg_sg_seg_cnt;
- /*
- * Since the sg_tablesize is module parameter, the sg_dma_buf_size
-- * used to create the sg_dma_buf_pool must be dynamically calculated
-+ * used to create the sg_dma_buf_pool must be dynamically calculated.
-+ * 2 segments are added since the IOCB needs a command and response bde.
- */
- phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) +
- sizeof(struct fcp_rsp) +
-- (phba->cfg_sg_seg_cnt * sizeof(struct ulp_bde64));
-+ ((phba->cfg_sg_seg_cnt + 2) * sizeof(struct ulp_bde64));
- lpfc_hba_queue_depth_init(phba, lpfc_hba_queue_depth);
- return;
- }
-@@ -2448,5 +2612,6 @@ lpfc_get_vport_cfgparam(struct lpfc_vport *vport)
- lpfc_discovery_threads_init(vport, lpfc_discovery_threads);
- lpfc_max_luns_init(vport, lpfc_max_luns);
- lpfc_scan_down_init(vport, lpfc_scan_down);
-+ lpfc_enable_da_id_init(vport, lpfc_enable_da_id);
- return;
+- memset(cmd, 0, sizeof(*cmd));
+ cmd->device = dev;
+ init_timer(&cmd->eh_timeout);
+ INIT_LIST_HEAD(&cmd->list);
+@@ -217,6 +247,12 @@ struct scsi_cmnd *scsi_get_command(struct scsi_device *dev, gfp_t gfp_mask)
}
-diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
-index a599e15..50fcb7c 100644
---- a/drivers/scsi/lpfc/lpfc_crtn.h
-+++ b/drivers/scsi/lpfc/lpfc_crtn.h
-@@ -23,6 +23,8 @@ typedef int (*node_filter)(struct lpfc_nodelist *ndlp, void *param);
- struct fc_rport;
- void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t);
- void lpfc_read_nv(struct lpfc_hba *, LPFC_MBOXQ_t *);
-+void lpfc_config_async(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
-+
- void lpfc_heart_beat(struct lpfc_hba *, LPFC_MBOXQ_t *);
- int lpfc_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
- struct lpfc_dmabuf *mp);
-@@ -43,9 +45,9 @@ void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
- struct lpfc_vport *lpfc_find_vport_by_did(struct lpfc_hba *, uint32_t);
- void lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove);
- int lpfc_linkdown(struct lpfc_hba *);
-+void lpfc_port_link_failure(struct lpfc_vport *);
- void lpfc_mbx_cmpl_read_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
-
--void lpfc_mbx_cmpl_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
- void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
- void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *, LPFC_MBOXQ_t *);
- void lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
-@@ -66,15 +68,15 @@ int lpfc_check_sli_ndlp(struct lpfc_hba *, struct lpfc_sli_ring *,
- void lpfc_nlp_init(struct lpfc_vport *, struct lpfc_nodelist *, uint32_t);
- struct lpfc_nodelist *lpfc_nlp_get(struct lpfc_nodelist *);
- int lpfc_nlp_put(struct lpfc_nodelist *);
-+int lpfc_nlp_not_used(struct lpfc_nodelist *ndlp);
- struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t);
- void lpfc_disc_list_loopmap(struct lpfc_vport *);
- void lpfc_disc_start(struct lpfc_vport *);
--void lpfc_disc_flush_list(struct lpfc_vport *);
- void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
-+void lpfc_cleanup(struct lpfc_vport *);
- void lpfc_disc_timeout(unsigned long);
+ EXPORT_SYMBOL(scsi_get_command);
- struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
--struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
++/**
++ * __scsi_put_command - Free a struct scsi_cmnd
++ * @shost: dev->host
++ * @cmd: Command to free
++ * @dev: parent scsi device
++ */
+ void __scsi_put_command(struct Scsi_Host *shost, struct scsi_cmnd *cmd,
+ struct device *dev)
+ {
+@@ -230,19 +266,19 @@ void __scsi_put_command(struct Scsi_Host *shost, struct scsi_cmnd *cmd,
+ }
+ spin_unlock_irqrestore(&shost->free_list_lock, flags);
- void lpfc_worker_wake_up(struct lpfc_hba *);
- int lpfc_workq_post_event(struct lpfc_hba *, void *, void *, uint32_t);
-@@ -82,17 +84,17 @@ int lpfc_do_work(void *);
- int lpfc_disc_state_machine(struct lpfc_vport *, struct lpfc_nodelist *, void *,
- uint32_t);
+- if (likely(cmd != NULL))
+- kmem_cache_free(shost->cmd_pool->slab, cmd);
++ if (likely(cmd != NULL)) {
++ kmem_cache_free(shost->cmd_pool->sense_slab,
++ cmd->sense_buffer);
++ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
++ }
--void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
-- struct lpfc_nodelist *);
- void lpfc_do_scr_ns_plogi(struct lpfc_hba *, struct lpfc_vport *);
- int lpfc_check_sparm(struct lpfc_vport *, struct lpfc_nodelist *,
- struct serv_parm *, uint32_t);
- int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist *);
-+void lpfc_more_plogi(struct lpfc_vport *);
-+void lpfc_more_adisc(struct lpfc_vport *);
-+void lpfc_end_rscn(struct lpfc_vport *);
- int lpfc_els_chk_latt(struct lpfc_vport *);
- int lpfc_els_abort_flogi(struct lpfc_hba *);
- int lpfc_initial_flogi(struct lpfc_vport *);
- int lpfc_initial_fdisc(struct lpfc_vport *);
--int lpfc_issue_els_fdisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
- int lpfc_issue_els_plogi(struct lpfc_vport *, uint32_t, uint8_t);
- int lpfc_issue_els_prli(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
- int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
-@@ -112,7 +114,6 @@ int lpfc_els_rsp_prli_acc(struct lpfc_vport *, struct lpfc_iocbq *,
- void lpfc_cancel_retry_delay_tmo(struct lpfc_vport *, struct lpfc_nodelist *);
- void lpfc_els_retry_delay(unsigned long);
- void lpfc_els_retry_delay_handler(struct lpfc_nodelist *);
--void lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *);
- void lpfc_els_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
- struct lpfc_iocbq *);
- int lpfc_els_handle_rscn(struct lpfc_vport *);
-@@ -124,7 +125,6 @@ int lpfc_els_disc_adisc(struct lpfc_vport *);
- int lpfc_els_disc_plogi(struct lpfc_vport *);
- void lpfc_els_timeout(unsigned long);
- void lpfc_els_timeout_handler(struct lpfc_vport *);
--void lpfc_hb_timeout(unsigned long);
- void lpfc_hb_timeout_handler(struct lpfc_hba *);
+ put_device(dev);
+ }
+ EXPORT_SYMBOL(__scsi_put_command);
- void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
-@@ -142,7 +142,6 @@ void lpfc_hba_init(struct lpfc_hba *, uint32_t *);
- int lpfc_post_buffer(struct lpfc_hba *, struct lpfc_sli_ring *, int, int);
- void lpfc_decode_firmware_rev(struct lpfc_hba *, char *, int);
- int lpfc_online(struct lpfc_hba *);
--void lpfc_block_mgmt_io(struct lpfc_hba *);
- void lpfc_unblock_mgmt_io(struct lpfc_hba *);
- void lpfc_offline_prep(struct lpfc_hba *);
- void lpfc_offline(struct lpfc_hba *);
-@@ -165,7 +164,6 @@ int lpfc_mbox_tmo_val(struct lpfc_hba *, int);
+-/*
+- * Function: scsi_put_command()
+- *
+- * Purpose: Free a scsi command block
+- *
+- * Arguments: cmd - command block to free
++/**
++ * scsi_put_command - Free a scsi command block
++ * @cmd: command block to free
+ *
+ * Returns: Nothing.
+ *
+@@ -263,12 +299,13 @@ void scsi_put_command(struct scsi_cmnd *cmd)
+ }
+ EXPORT_SYMBOL(scsi_put_command);
- void lpfc_config_hbq(struct lpfc_hba *, uint32_t, struct lpfc_hbq_init *,
- uint32_t , LPFC_MBOXQ_t *);
--struct lpfc_hbq_entry * lpfc_sli_next_hbq_slot(struct lpfc_hba *, uint32_t);
- struct hbq_dmabuf *lpfc_els_hbq_alloc(struct lpfc_hba *);
- void lpfc_els_hbq_free(struct lpfc_hba *, struct hbq_dmabuf *);
+-/*
+- * Function: scsi_setup_command_freelist()
+- *
+- * Purpose: Setup the command freelist for a scsi host.
++/**
++ * scsi_setup_command_freelist - Setup the command freelist for a scsi host.
++ * @shost: host to allocate the freelist for.
+ *
+- * Arguments: shost - host to allocate the freelist for.
++ * Description: The command freelist protects against system-wide out of memory
++ * deadlock by preallocating one SCSI command structure for each host, so the
++ * system can always write to a swap file on a device associated with that host.
+ *
+ * Returns: Nothing.
+ */
+@@ -282,16 +319,24 @@ int scsi_setup_command_freelist(struct Scsi_Host *shost)
-@@ -178,7 +176,6 @@ void lpfc_poll_start_timer(struct lpfc_hba * phba);
- void lpfc_sli_poll_fcp_ring(struct lpfc_hba * hba);
- struct lpfc_iocbq * lpfc_sli_get_iocbq(struct lpfc_hba *);
- void lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
--void __lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
- uint16_t lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
+ /*
+ * Select a command slab for this host and create it if not
+- * yet existant.
++ * yet existent.
+ */
+ mutex_lock(&host_cmd_pool_mutex);
+ pool = (shost->unchecked_isa_dma ? &scsi_cmd_dma_pool : &scsi_cmd_pool);
+ if (!pool->users) {
+- pool->slab = kmem_cache_create(pool->name,
+- sizeof(struct scsi_cmnd), 0,
+- pool->slab_flags, NULL);
+- if (!pool->slab)
++ pool->cmd_slab = kmem_cache_create(pool->cmd_name,
++ sizeof(struct scsi_cmnd), 0,
++ pool->slab_flags, NULL);
++ if (!pool->cmd_slab)
++ goto fail;
++
++ pool->sense_slab = kmem_cache_create(pool->sense_name,
++ SCSI_SENSE_BUFFERSIZE, 0,
++ pool->slab_flags, NULL);
++ if (!pool->sense_slab) {
++ kmem_cache_destroy(pool->cmd_slab);
+ goto fail;
++ }
+ }
- void lpfc_reset_barrier(struct lpfc_hba * phba);
-@@ -204,11 +201,14 @@ int lpfc_sli_ringpostbuf_put(struct lpfc_hba *, struct lpfc_sli_ring *,
- struct lpfc_dmabuf *lpfc_sli_ringpostbuf_get(struct lpfc_hba *,
- struct lpfc_sli_ring *,
- dma_addr_t);
+ pool->users++;
+@@ -301,29 +346,36 @@ int scsi_setup_command_freelist(struct Scsi_Host *shost)
+ /*
+ * Get one backup command for this host.
+ */
+- cmd = kmem_cache_alloc(shost->cmd_pool->slab,
+- GFP_KERNEL | shost->cmd_pool->gfp_mask);
++ cmd = kmem_cache_alloc(shost->cmd_pool->cmd_slab,
++ GFP_KERNEL | shost->cmd_pool->gfp_mask);
+ if (!cmd)
+ goto fail2;
+- list_add(&cmd->list, &shost->free_list);
+
-+uint32_t lpfc_sli_get_buffer_tag(struct lpfc_hba *);
-+struct lpfc_dmabuf * lpfc_sli_ring_taggedbuf_get(struct lpfc_hba *,
-+ struct lpfc_sli_ring *, uint32_t );
++ cmd->sense_buffer = kmem_cache_alloc(shost->cmd_pool->sense_slab,
++ GFP_KERNEL |
++ shost->cmd_pool->gfp_mask);
++ if (!cmd->sense_buffer)
++ goto fail2;
+
- int lpfc_sli_hbq_count(void);
--int lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *, uint32_t);
- int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
- void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
--struct hbq_dmabuf *lpfc_sli_hbqbuf_find(struct lpfc_hba *, uint32_t);
- int lpfc_sli_hbq_size(void);
- int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
- struct lpfc_iocbq *);
-@@ -219,9 +219,6 @@ int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t,
- void lpfc_mbox_timeout(unsigned long);
- void lpfc_mbox_timeout_handler(struct lpfc_hba *);
++ list_add(&cmd->list, &shost->free_list);
+ return 0;
--struct lpfc_nodelist *__lpfc_find_node(struct lpfc_vport *, node_filter,
-- void *);
--struct lpfc_nodelist *lpfc_find_node(struct lpfc_vport *, node_filter, void *);
- struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
- struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
- struct lpfc_name *);
-@@ -260,6 +257,7 @@ extern struct scsi_host_template lpfc_vport_template;
- extern struct fc_function_template lpfc_transport_functions;
- extern struct fc_function_template lpfc_vport_transport_functions;
- extern int lpfc_sli_mode;
-+extern int lpfc_enable_npiv;
+ fail2:
+- if (!--pool->users)
+- kmem_cache_destroy(pool->slab);
+- return -ENOMEM;
++ if (cmd)
++ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
++ mutex_lock(&host_cmd_pool_mutex);
++ if (!--pool->users) {
++ kmem_cache_destroy(pool->cmd_slab);
++ kmem_cache_destroy(pool->sense_slab);
++ }
+ fail:
+ mutex_unlock(&host_cmd_pool_mutex);
+ return -ENOMEM;
+-
+ }
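[editor's note, not part of the patch] The `scsi_setup_command_freelist()` hunk above pre-allocates one fully formed command, sense buffer included, per host, so that writeout to a swap device can still make progress when the allocator is exhausted; `__scsi_get_command()` falls back to this free list when `kmem_cache_alloc` fails. A small single-threaded sketch of that reserve-then-fallback idea (names hypothetical, `allocator_exhausted` simulates allocation failure):

```c
#include <assert.h>
#include <stdlib.h>

struct cmd { int id; struct cmd *next; };

static struct cmd *free_list;    /* per-host emergency reserve */
static int allocator_exhausted;  /* simulates kmem_cache_alloc failing */

/* Called once at host setup: stock the reserve with one command. */
int setup_freelist(void)
{
	struct cmd *c = malloc(sizeof(*c));
	if (!c)
		return -1;
	c->next = free_list;
	free_list = c;
	return 0;
}

/* Normal path first; dip into the reserve only on failure. */
struct cmd *get_command(void)
{
	struct cmd *c = allocator_exhausted ? NULL : malloc(sizeof(*c));
	if (!c && free_list) {
		c = free_list;
		free_list = c->next;
	}
	return c;
}

/* Returning a command refills the reserve if it is empty. */
void put_command(struct cmd *c)
{
	if (!free_list) {
		c->next = NULL;
		free_list = c;
	} else {
		free(c);
	}
}

static int demo_reserve_path(void)
{
	if (setup_freelist() != 0)
		return -1;
	allocator_exhausted = 1;
	struct cmd *a = get_command();  /* served from the reserve */
	struct cmd *b = get_command();  /* reserve empty -> NULL */
	if (!a || b)
		return -1;
	put_command(a);                 /* refills the reserve */
	return get_command() != NULL;
}
```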
- int lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t);
- void lpfc_terminate_rport_io(struct fc_rport *);
-@@ -281,11 +279,8 @@ extern void lpfc_debugfs_slow_ring_trc(struct lpfc_hba *, char *, uint32_t,
- extern struct lpfc_hbq_init *lpfc_hbq_defs[];
+-/*
+- * Function: scsi_destroy_command_freelist()
+- *
+- * Purpose: Release the command freelist for a scsi host.
+- *
+- * Arguments: shost - host that's freelist is going to be destroyed
++/**
++ * scsi_destroy_command_freelist - Release the command freelist for a scsi host.
++ * @shost: host whose freelist is going to be destroyed
+ */
+ void scsi_destroy_command_freelist(struct Scsi_Host *shost)
+ {
+@@ -332,12 +384,16 @@ void scsi_destroy_command_freelist(struct Scsi_Host *shost)
- /* Interface exported by fabric iocb scheduler */
--int lpfc_issue_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
--void lpfc_fabric_abort_vport(struct lpfc_vport *);
- void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
- void lpfc_fabric_abort_hba(struct lpfc_hba *);
--void lpfc_fabric_abort_flogi(struct lpfc_hba *);
- void lpfc_fabric_block_timeout(unsigned long);
- void lpfc_unblock_fabric_iocbs(struct lpfc_hba *);
- void lpfc_adjust_queue_depth(struct lpfc_hba *);
-diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
-index c701e4d..92441ce 100644
---- a/drivers/scsi/lpfc/lpfc_ct.c
-+++ b/drivers/scsi/lpfc/lpfc_ct.c
-@@ -19,7 +19,7 @@
- *******************************************************************/
+ cmd = list_entry(shost->free_list.next, struct scsi_cmnd, list);
+ list_del_init(&cmd->list);
+- kmem_cache_free(shost->cmd_pool->slab, cmd);
++ kmem_cache_free(shost->cmd_pool->sense_slab,
++ cmd->sense_buffer);
++ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
+ }
- /*
-- * Fibre Channel SCSI LAN Device Driver CT support
-+ * Fibre Channel SCSI LAN Device Driver CT support: FC Generic Services FC-GS
- */
+ mutex_lock(&host_cmd_pool_mutex);
+- if (!--shost->cmd_pool->users)
+- kmem_cache_destroy(shost->cmd_pool->slab);
++ if (!--shost->cmd_pool->users) {
++ kmem_cache_destroy(shost->cmd_pool->cmd_slab);
++ kmem_cache_destroy(shost->cmd_pool->sense_slab);
++ }
+ mutex_unlock(&host_cmd_pool_mutex);
+ }
- #include <linux/blkdev.h>
-@@ -57,45 +57,27 @@
+@@ -441,8 +497,12 @@ void scsi_log_completion(struct scsi_cmnd *cmd, int disposition)
+ }
+ #endif
- static char *lpfc_release_version = LPFC_DRIVER_VERSION;
+-/*
+- * Assign a serial number to the request for error recovery
++/**
++ * scsi_cmd_get_serial - Assign a serial number to a command
++ * @host: the scsi host
++ * @cmd: command to assign serial number to
++ *
++ * Description: a serial number identifies a request for error recovery
+ * and debugging purposes. Protected by the Host_Lock of host.
+ */
+ static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+@@ -452,14 +512,12 @@ static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd
+ cmd->serial_number = host->cmd_serial_number++;
+ }
-/*
-- * lpfc_ct_unsol_event
-- */
- static void
--lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
-- struct lpfc_dmabuf *mp, uint32_t size)
-+lpfc_ct_ignore_hbq_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
-+ struct lpfc_dmabuf *mp, uint32_t size)
+- * Function: scsi_dispatch_command
+- *
+- * Purpose: Dispatch a command to the low-level driver.
+- *
+- * Arguments: cmd - command block we are dispatching.
++/**
++ * scsi_dispatch_command - Dispatch a command to the low-level driver.
++ * @cmd: command block we are dispatching.
+ *
+- * Notes:
++ * Return: nonzero return request was rejected and device's queue needs to be
++ * plugged.
+ */
+ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
{
- if (!mp) {
-- printk(KERN_ERR "%s (%d): Unsolited CT, no buffer, "
-- "piocbq = %p, status = x%x, mp = %p, size = %d\n",
-- __FUNCTION__, __LINE__,
-- piocbq, piocbq->iocb.ulpStatus, mp, size);
-+ lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-+ "0146 Ignoring unsolicted CT No HBQ "
-+ "status = x%x\n",
-+ piocbq->iocb.ulpStatus);
- }
--
-- printk(KERN_ERR "%s (%d): Ignoring unsolicted CT piocbq = %p, "
-- "buffer = %p, size = %d, status = x%x\n",
-- __FUNCTION__, __LINE__,
-- piocbq, mp, size,
-- piocbq->iocb.ulpStatus);
--
-+ lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-+ "0145 Ignoring unsolicted CT HBQ Size:%d "
-+ "status = x%x\n",
-+ size, piocbq->iocb.ulpStatus);
- }
+@@ -585,7 +643,7 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
- static void
--lpfc_ct_ignore_hbq_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
-- struct lpfc_dmabuf *mp, uint32_t size)
-+lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
-+ struct lpfc_dmabuf *mp, uint32_t size)
+ /**
+ * scsi_req_abort_cmd -- Request command recovery for the specified command
+- * cmd: pointer to the SCSI command of interest
++ * @cmd: pointer to the SCSI command of interest
+ *
+ * This function requests that SCSI Core start recovery for the
+ * command by deleting the timer and adding the command to the eh
+@@ -606,9 +664,9 @@ EXPORT_SYMBOL(scsi_req_abort_cmd);
+ * @cmd: The SCSI Command for which a low-level device driver (LLDD) gives
+ * ownership back to SCSI Core -- i.e. the LLDD has finished with it.
+ *
+- * This function is the mid-level's (SCSI Core) interrupt routine, which
+- * regains ownership of the SCSI command (de facto) from a LLDD, and enqueues
+- * the command to the done queue for further processing.
++ * Description: This function is the mid-level's (SCSI Core) interrupt routine,
++ * which regains ownership of the SCSI command (de facto) from a LLDD, and
++ * enqueues the command to the done queue for further processing.
+ *
+ * This is the producer of the done queue who enqueues at the tail.
+ *
+@@ -617,7 +675,7 @@ EXPORT_SYMBOL(scsi_req_abort_cmd);
+ static void scsi_done(struct scsi_cmnd *cmd)
{
-- if (!mp) {
-- printk(KERN_ERR "%s (%d): Unsolited CT, no "
-- "HBQ buffer, piocbq = %p, status = x%x\n",
-- __FUNCTION__, __LINE__,
-- piocbq, piocbq->iocb.ulpStatus);
-- } else {
-- lpfc_ct_unsol_buffer(phba, piocbq, mp, size);
-- printk(KERN_ERR "%s (%d): Ignoring unsolicted CT "
-- "piocbq = %p, buffer = %p, size = %d, "
-- "status = x%x\n",
-- __FUNCTION__, __LINE__,
-- piocbq, mp, size, piocbq->iocb.ulpStatus);
-- }
-+ lpfc_ct_ignore_hbq_buffer(phba, piocbq, mp, size);
+ /*
+- * We don't have to worry about this one timing out any more.
++ * We don't have to worry about this one timing out anymore.
+ * If we are unable to remove the timer, then the command
+ * has already timed out. In which case, we have no choice but to
+ * let the timeout function run, as we have no idea where in fact
+@@ -660,10 +718,11 @@ static struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
+ return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
}
- void
-@@ -109,11 +91,8 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- struct lpfc_iocbq *iocbq;
- dma_addr_t paddr;
- uint32_t size;
-- struct lpfc_dmabuf *bdeBuf1 = piocbq->context2;
-- struct lpfc_dmabuf *bdeBuf2 = piocbq->context3;
--
-- piocbq->context2 = NULL;
-- piocbq->context3 = NULL;
-+ struct list_head head;
-+ struct lpfc_dmabuf *bdeBuf;
+-/*
+- * Function: scsi_finish_command
++/**
++ * scsi_finish_command - cleanup and pass command back to upper layer
++ * @cmd: the command
+ *
+- * Purpose: Pass command off to upper layer for finishing of I/O
++ * Description: Pass command off to upper layer for finishing of I/O
+ * request, waking processes that are waiting on results,
+ * etc.
+ */
+@@ -708,18 +767,14 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
+ }
+ EXPORT_SYMBOL(scsi_finish_command);
- if (unlikely(icmd->ulpStatus == IOSTAT_NEED_BUFFER)) {
- lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
-@@ -122,7 +101,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- /* Not enough posted buffers; Try posting more buffers */
- phba->fc_stat.NoRcvBuf++;
- if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
-- lpfc_post_buffer(phba, pring, 0, 1);
-+ lpfc_post_buffer(phba, pring, 2, 1);
- return;
- }
+-/*
+- * Function: scsi_adjust_queue_depth()
+- *
+- * Purpose: Allow low level drivers to tell us to change the queue depth
+- * on a specific SCSI device
+- *
+- * Arguments: sdev - SCSI Device in question
+- * tagged - Do we use tagged queueing (non-0) or do we treat
+- * this device as an untagged device (0)
+- * tags - Number of tags allowed if tagged queueing enabled,
+- * or number of commands the low level driver can
+- * queue up in non-tagged mode (as per cmd_per_lun).
++/**
++ * scsi_adjust_queue_depth - Let low level drivers change a device's queue depth
++ * @sdev: SCSI Device in question
++ * @tagged: Do we use tagged queueing (non-0) or do we treat
++ * this device as an untagged device (0)
++ * @tags: Number of tags allowed if tagged queueing enabled,
++ * or number of commands the low level driver can
++ * queue up in non-tagged mode (as per cmd_per_lun).
+ *
+ * Returns: Nothing
+ *
+@@ -742,8 +797,8 @@ void scsi_adjust_queue_depth(struct scsi_device *sdev, int tagged, int tags)
-@@ -133,38 +112,34 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- return;
+ spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
- if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
-- list_for_each_entry(iocbq, &piocbq->list, list) {
-+ INIT_LIST_HEAD(&head);
-+ list_add_tail(&head, &piocbq->list);
-+ list_for_each_entry(iocbq, &head, list) {
- icmd = &iocbq->iocb;
-- if (icmd->ulpBdeCount == 0) {
-- printk(KERN_ERR "%s (%d): Unsolited CT, no "
-- "BDE, iocbq = %p, status = x%x\n",
-- __FUNCTION__, __LINE__,
-- iocbq, iocbq->iocb.ulpStatus);
-+ if (icmd->ulpBdeCount == 0)
- continue;
-- }
--
-+ bdeBuf = iocbq->context2;
-+ iocbq->context2 = NULL;
- size = icmd->un.cont64[0].tus.f.bdeSize;
-- lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf1, size);
-- lpfc_in_buf_free(phba, bdeBuf1);
-+ lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf, size);
-+ lpfc_in_buf_free(phba, bdeBuf);
- if (icmd->ulpBdeCount == 2) {
-- lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf2,
-- size);
-- lpfc_in_buf_free(phba, bdeBuf2);
-+ bdeBuf = iocbq->context3;
-+ iocbq->context3 = NULL;
-+ size = icmd->unsli3.rcvsli3.bde2.tus.f.bdeSize;
-+ lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf,
-+ size);
-+ lpfc_in_buf_free(phba, bdeBuf);
- }
- }
-+ list_del(&head);
- } else {
- struct lpfc_iocbq *next;
+- /* Check to see if the queue is managed by the block layer
+- * if it is, and we fail to adjust the depth, exit */
++ /* Check to see if the queue is managed by the block layer.
++ * If it is, and we fail to adjust the depth, exit. */
+ if (blk_queue_tagged(sdev->request_queue) &&
+ blk_queue_resize_tags(sdev->request_queue, tags) != 0)
+ goto out;
+@@ -772,20 +827,17 @@ void scsi_adjust_queue_depth(struct scsi_device *sdev, int tagged, int tags)
+ }
+ EXPORT_SYMBOL(scsi_adjust_queue_depth);
- list_for_each_entry_safe(iocbq, next, &piocbq->list, list) {
- icmd = &iocbq->iocb;
-- if (icmd->ulpBdeCount == 0) {
-- printk(KERN_ERR "%s (%d): Unsolited CT, no "
-- "BDE, iocbq = %p, status = x%x\n",
-- __FUNCTION__, __LINE__,
-- iocbq, iocbq->iocb.ulpStatus);
-- continue;
-- }
--
-+ if (icmd->ulpBdeCount == 0)
-+ lpfc_ct_unsol_buffer(phba, piocbq, NULL, 0);
- for (i = 0; i < icmd->ulpBdeCount; i++) {
- paddr = getPaddr(icmd->un.cont64[i].addrHigh,
- icmd->un.cont64[i].addrLow);
-@@ -176,6 +151,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- }
- list_del(&iocbq->list);
- lpfc_sli_release_iocbq(phba, iocbq);
-+ lpfc_post_buffer(phba, pring, i, 1);
+-/*
+- * Function: scsi_track_queue_full()
++/**
++ * scsi_track_queue_full - track QUEUE_FULL events to adjust queue depth
++ * @sdev: SCSI Device in question
++ * @depth: Current number of outstanding SCSI commands on this device,
++ * not counting the one returned as QUEUE_FULL.
+ *
+- * Purpose: This function will track successive QUEUE_FULL events on a
++ * Description: This function will track successive QUEUE_FULL events on a
+ * specific SCSI device to determine if and when there is a
+ * need to adjust the queue depth on the device.
+ *
+- * Arguments: sdev - SCSI Device in question
+- * depth - Current number of outstanding SCSI commands on
+- * this device, not counting the one returned as
+- * QUEUE_FULL.
+- *
+- * Returns: 0 - No change needed
+- * >0 - Adjust queue depth to this new depth
++ * Returns: 0 - No change needed, >0 - Adjust queue depth to this new depth,
+ * -1 - Drop back to untagged operation using host->cmd_per_lun
+ * as the untagged command depth
+ *
+@@ -824,10 +876,10 @@ int scsi_track_queue_full(struct scsi_device *sdev, int depth)
+ EXPORT_SYMBOL(scsi_track_queue_full);
+
+ /**
+- * scsi_device_get - get an addition reference to a scsi_device
++ * scsi_device_get - get an additional reference to a scsi_device
+ * @sdev: device to get a reference to
+ *
+- * Gets a reference to the scsi_device and increments the use count
++ * Description: Gets a reference to the scsi_device and increments the use count
+ * of the underlying LLDD module. You must hold host_lock of the
+ * parent Scsi_Host or already have a reference when calling this.
+ */
+@@ -849,8 +901,8 @@ EXPORT_SYMBOL(scsi_device_get);
+ * scsi_device_put - release a reference to a scsi_device
+ * @sdev: device to release a reference on.
+ *
+- * Release a reference to the scsi_device and decrements the use count
+- * of the underlying LLDD module. The device is freed once the last
++ * Description: Releases a reference to the scsi_device and decrements the use
++ * count of the underlying LLDD module. The device is freed once the last
+ * user vanishes.
+ */
+ void scsi_device_put(struct scsi_device *sdev)
+@@ -867,7 +919,7 @@ void scsi_device_put(struct scsi_device *sdev)
+ }
+ EXPORT_SYMBOL(scsi_device_put);
+
+-/* helper for shost_for_each_device, thus not documented */
++/* helper for shost_for_each_device, see that for documentation */
+ struct scsi_device *__scsi_iterate_devices(struct Scsi_Host *shost,
+ struct scsi_device *prev)
+ {
+@@ -895,6 +947,8 @@ EXPORT_SYMBOL(__scsi_iterate_devices);
+ /**
+ * starget_for_each_device - helper to walk all devices of a target
+ * @starget: target whose devices we want to iterate over.
++ * @data: Opaque pointer passed to each function call.
++ * @fn: Function to call on each device
+ *
+ * This traverses over each device of @starget. The devices have
+ * a reference that must be released by scsi_host_put when breaking
+@@ -946,13 +1000,13 @@ EXPORT_SYMBOL(__starget_for_each_device);
+ * @starget: SCSI target pointer
+ * @lun: SCSI Logical Unit Number
+ *
+- * Looks up the scsi_device with the specified @lun for a give
+- * @starget. The returned scsi_device does not have an additional
++ * Description: Looks up the scsi_device with the specified @lun for a given
++ * @starget. The returned scsi_device does not have an additional
+ * reference. You must hold the host's host_lock over this call and
+ * any access to the returned scsi_device.
+ *
+- * Note: The only reason why drivers would want to use this is because
+- * they're need to access the device list in irq context. Otherwise you
++ * Note: The only reason why drivers should use this is because
++ * they need to access the device list in irq context. Otherwise you
+ * really want to use scsi_device_lookup_by_target instead.
+ **/
+ struct scsi_device *__scsi_device_lookup_by_target(struct scsi_target *starget,
+@@ -974,9 +1028,9 @@ EXPORT_SYMBOL(__scsi_device_lookup_by_target);
+ * @starget: SCSI target pointer
+ * @lun: SCSI Logical Unit Number
+ *
+- * Looks up the scsi_device with the specified @channel, @id, @lun for a
+- * give host. The returned scsi_device has an additional reference that
+- * needs to be release with scsi_host_put once you're done with it.
++ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
++ * for a given host. The returned scsi_device has an additional reference that
++ * needs to be released with scsi_device_put once you're done with it.
+ **/
+ struct scsi_device *scsi_device_lookup_by_target(struct scsi_target *starget,
+ uint lun)
+@@ -996,19 +1050,19 @@ struct scsi_device *scsi_device_lookup_by_target(struct scsi_target *starget,
+ EXPORT_SYMBOL(scsi_device_lookup_by_target);
+
+ /**
+- * scsi_device_lookup - find a device given the host (UNLOCKED)
++ * __scsi_device_lookup - find a device given the host (UNLOCKED)
+ * @shost: SCSI host pointer
+ * @channel: SCSI channel (zero if only one channel)
+- * @pun: SCSI target number (physical unit number)
++ * @id: SCSI target number (physical unit number)
+ * @lun: SCSI Logical Unit Number
+ *
+- * Looks up the scsi_device with the specified @channel, @id, @lun for a
+- * give host. The returned scsi_device does not have an additional reference.
+- * You must hold the host's host_lock over this call and any access to the
+- * returned scsi_device.
++ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
++ * for a given host. The returned scsi_device does not have an additional
++ * reference. You must hold the host's host_lock over this call and any access
++ * to the returned scsi_device.
+ *
+ * Note: The only reason why drivers would want to use this is because
+- * they're need to access the device list in irq context. Otherwise you
++ * they need to access the device list in irq context. Otherwise you
+ * really want to use scsi_device_lookup instead.
+ **/
+ struct scsi_device *__scsi_device_lookup(struct Scsi_Host *shost,
+@@ -1033,9 +1087,9 @@ EXPORT_SYMBOL(__scsi_device_lookup);
+ * @id: SCSI target number (physical unit number)
+ * @lun: SCSI Logical Unit Number
+ *
+- * Looks up the scsi_device with the specified @channel, @id, @lun for a
+- * give host. The returned scsi_device has an additional reference that
+- * needs to be release with scsi_host_put once you're done with it.
++ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
++ * for a given host. The returned scsi_device has an additional reference that
++ * needs to be released with scsi_device_put once you're done with it.
+ **/
+ struct scsi_device *scsi_device_lookup(struct Scsi_Host *shost,
+ uint channel, uint id, uint lun)
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 46cae5a..82c06f0 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -329,7 +329,7 @@ int scsi_debug_queuecommand(struct scsi_cmnd * SCpnt, done_funct_t done)
+ if (done == NULL)
+ return 0; /* assume mid level reprocessing command */
+
+- SCpnt->resid = 0;
++ scsi_set_resid(SCpnt, 0);
+ if ((SCSI_DEBUG_OPT_NOISE & scsi_debug_opts) && cmd) {
+ printk(KERN_INFO "scsi_debug: cmd ");
+ for (k = 0, len = SCpnt->cmd_len; k < len; ++k)
+@@ -603,26 +603,16 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
+ void * kaddr_off;
+ struct scatterlist * sg;
+
+- if (0 == scp->request_bufflen)
++ if (0 == scsi_bufflen(scp))
+ return 0;
+- if (NULL == scp->request_buffer)
++ if (NULL == scsi_sglist(scp))
+ return (DID_ERROR << 16);
+ if (! ((scp->sc_data_direction == DMA_BIDIRECTIONAL) ||
+ (scp->sc_data_direction == DMA_FROM_DEVICE)))
+ return (DID_ERROR << 16);
+- if (0 == scp->use_sg) {
+- req_len = scp->request_bufflen;
+- act_len = (req_len < arr_len) ? req_len : arr_len;
+- memcpy(scp->request_buffer, arr, act_len);
+- if (scp->resid)
+- scp->resid -= act_len;
+- else
+- scp->resid = req_len - act_len;
+- return 0;
+- }
+ active = 1;
+ req_len = act_len = 0;
+- scsi_for_each_sg(scp, sg, scp->use_sg, k) {
++ scsi_for_each_sg(scp, sg, scsi_sg_count(scp), k) {
+ if (active) {
+ kaddr = (unsigned char *)
+ kmap_atomic(sg_page(sg), KM_USER0);
+@@ -640,10 +630,10 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
}
+ req_len += sg->length;
}
+- if (scp->resid)
+- scp->resid -= act_len;
++ if (scsi_get_resid(scp))
++ scsi_set_resid(scp, scsi_get_resid(scp) - act_len);
+ else
+- scp->resid = req_len - act_len;
++ scsi_set_resid(scp, req_len - act_len);
+ return 0;
+ }
+
+@@ -656,22 +646,15 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
+ void * kaddr_off;
+ struct scatterlist * sg;
+
+- if (0 == scp->request_bufflen)
++ if (0 == scsi_bufflen(scp))
+ return 0;
+- if (NULL == scp->request_buffer)
++ if (NULL == scsi_sglist(scp))
+ return -1;
+ if (! ((scp->sc_data_direction == DMA_BIDIRECTIONAL) ||
+ (scp->sc_data_direction == DMA_TO_DEVICE)))
+ return -1;
+- if (0 == scp->use_sg) {
+- req_len = scp->request_bufflen;
+- len = (req_len < max_arr_len) ? req_len : max_arr_len;
+- memcpy(arr, scp->request_buffer, len);
+- return len;
+- }
+- sg = scsi_sglist(scp);
+ req_len = fin = 0;
+- for (k = 0; k < scp->use_sg; ++k, sg = sg_next(sg)) {
++ scsi_for_each_sg(scp, sg, scsi_sg_count(scp), k) {
+ kaddr = (unsigned char *)kmap_atomic(sg_page(sg), KM_USER0);
+ if (NULL == kaddr)
+ return -1;
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index 348cc5a..b8de041 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -276,11 +276,12 @@ static void scsi_strcpy_devinfo(char *name, char *to, size_t to_length,
}
-@@ -203,7 +179,7 @@ lpfc_alloc_ct_rsp(struct lpfc_hba *phba, int cmdcode, struct ulp_bde64 *bpl,
- struct lpfc_dmabuf *mp;
- int cnt, i = 0;
-
-- /* We get chucks of FCELSSIZE */
-+ /* We get chunks of FCELSSIZE */
- cnt = size > FCELSSIZE ? FCELSSIZE: size;
-
- while (size) {
-@@ -426,6 +402,7 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
-
- lpfc_set_disctmo(vport);
- vport->num_disc_nodes = 0;
-+ vport->fc_ns_retry = 0;
-
-
- list_add_tail(&head, &mp->list);
-@@ -458,7 +435,7 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
- ((lpfc_find_vport_by_did(phba, Did) == NULL) ||
- vport->cfg_peer_port_login)) {
- if ((vport->port_type != LPFC_NPIV_PORT) ||
-- (vport->fc_flag & FC_RFF_NOT_SUPPORTED) ||
-+ (!(vport->ct_flags & FC_CT_RFF_ID)) ||
- (!vport->cfg_restrict_login)) {
- ndlp = lpfc_setup_disc_node(vport, Did);
- if (ndlp) {
-@@ -506,7 +483,17 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
- Did, vport->fc_flag,
- vport->fc_rscn_id_cnt);
-- if (lpfc_ns_cmd(vport,
-+ /* This NPortID was previously
-+ * a FCP target, * Don't even
-+ * bother to send GFF_ID.
-+ */
-+ ndlp = lpfc_findnode_did(vport,
-+ Did);
-+ if (ndlp && (ndlp->nlp_type &
-+ NLP_FCP_TARGET))
-+ lpfc_setup_disc_node
-+ (vport, Did);
-+ else if (lpfc_ns_cmd(vport,
- SLI_CTNS_GFF_ID,
- 0, Did) == 0)
- vport->num_disc_nodes++;
-@@ -554,7 +541,7 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_dmabuf *outp;
- struct lpfc_sli_ct_request *CTrsp;
- struct lpfc_nodelist *ndlp;
-- int rc;
-+ int rc, retry;
+ /**
+- * scsi_dev_info_list_add: add one dev_info list entry.
++ * scsi_dev_info_list_add - add one dev_info list entry.
++ * @compatible: if true, null terminate short strings. Otherwise space pad.
+ * @vendor: vendor string
+ * @model: model (product) string
+ * @strflags: integer string
+- * @flag: if strflags NULL, use this flag value
++ * @flags: if strflags NULL, use this flag value
+ *
+ * Description:
+ * Create and add one dev_info entry for @vendor, @model, @strflags or
+@@ -322,8 +323,7 @@ static int scsi_dev_info_list_add(int compatible, char *vendor, char *model,
+ }
- /* First save ndlp, before we overwrite it */
- ndlp = cmdiocb->context_un.ndlp;
-@@ -574,7 +561,6 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- if (vport->load_flag & FC_UNLOADING)
- goto out;
+ /**
+- * scsi_dev_info_list_add_str: parse dev_list and add to the
+- * scsi_dev_info_list.
++ * scsi_dev_info_list_add_str - parse dev_list and add to the scsi_dev_info_list.
+ * @dev_list: string of device flags to add
+ *
+ * Description:
+@@ -374,15 +374,15 @@ static int scsi_dev_info_list_add_str(char *dev_list)
+ }
--
- if (lpfc_els_chk_latt(vport) || lpfc_error_lost_link(irsp)) {
- lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
- "0216 Link event during NS query\n");
-@@ -585,14 +571,35 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- if (irsp->ulpStatus) {
- /* Check for retry */
- if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
-- if ((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
-- (irsp->un.ulpWord[4] != IOERR_NO_RESOURCES))
-+ retry = 1;
-+ if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) {
-+ switch (irsp->un.ulpWord[4]) {
-+ case IOERR_NO_RESOURCES:
-+ /* We don't increment the retry
-+ * count for this case.
-+ */
-+ break;
-+ case IOERR_LINK_DOWN:
-+ case IOERR_SLI_ABORTED:
-+ case IOERR_SLI_DOWN:
-+ retry = 0;
-+ break;
-+ default:
-+ vport->fc_ns_retry++;
-+ }
-+ }
-+ else
- vport->fc_ns_retry++;
-- /* CT command is being retried */
-- rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
-+
-+ if (retry) {
-+ /* CT command is being retried */
-+ rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
- vport->fc_ns_retry, 0);
-- if (rc == 0)
-- goto out;
-+ if (rc == 0) {
-+ /* success */
-+ goto out;
-+ }
-+ }
- }
- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-@@ -698,7 +705,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_dmabuf *inp = (struct lpfc_dmabuf *) cmdiocb->context1;
- struct lpfc_dmabuf *outp = (struct lpfc_dmabuf *) cmdiocb->context2;
- struct lpfc_sli_ct_request *CTrsp;
-- int did;
-+ int did, rc, retry;
- uint8_t fbits;
- struct lpfc_nodelist *ndlp;
+ /**
+- * get_device_flags - get device specific flags from the dynamic device
+- * list. Called during scan time.
++ * get_device_flags - get device specific flags from the dynamic device list.
++ * @sdev: &scsi_device to get flags for
+ * @vendor: vendor name
+ * @model: model name
+ *
+ * Description:
+ * Search the scsi_dev_info_list for an entry matching @vendor and
+ * @model, if found, return the matching flags value, else return
+- * the host or global default settings.
++ * the host or global default settings. Called during scan time.
+ **/
+ int scsi_get_device_flags(struct scsi_device *sdev,
+ const unsigned char *vendor,
+@@ -483,13 +483,11 @@ stop_output:
+ }
-@@ -729,6 +736,39 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- }
- }
- else {
-+ /* Check for retry */
-+ if (cmdiocb->retry < LPFC_MAX_NS_RETRY) {
-+ retry = 1;
-+ if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) {
-+ switch (irsp->un.ulpWord[4]) {
-+ case IOERR_NO_RESOURCES:
-+ /* We don't increment the retry
-+ * count for this case.
-+ */
-+ break;
-+ case IOERR_LINK_DOWN:
-+ case IOERR_SLI_ABORTED:
-+ case IOERR_SLI_DOWN:
-+ retry = 0;
-+ break;
-+ default:
-+ cmdiocb->retry++;
-+ }
-+ }
-+ else
-+ cmdiocb->retry++;
-+
-+ if (retry) {
-+ /* CT command is being retried */
-+ rc = lpfc_ns_cmd(vport, SLI_CTNS_GFF_ID,
-+ cmdiocb->retry, did);
-+ if (rc == 0) {
-+ /* success */
-+ lpfc_ct_free_iocb(phba, cmdiocb);
-+ return;
-+ }
-+ }
-+ }
- lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
- "0267 NameServer GFF Rsp "
- "x%x Error (%d %d) Data: x%x x%x\n",
-@@ -778,8 +818,8 @@ out:
+ /*
+- * proc_scsi_dev_info_write: allow additions to the scsi_dev_info_list via
+- * /proc.
++ * proc_scsi_dev_info_write - allow additions to scsi_dev_info_list via /proc.
+ *
+- * Use: echo "vendor:model:flag" > /proc/scsi/device_info
+- *
+- * To add a black/white list entry for vendor and model with an integer
+- * value of flag to the scsi device info list.
++ * Description: Adds a black/white list entry for vendor and model with an
++ * integer value of flag to the scsi device info list.
++ * To use, echo "vendor:model:flag" > /proc/scsi/device_info
+ */
+ static int proc_scsi_devinfo_write(struct file *file, const char __user *buf,
+ unsigned long length, void *data)
+@@ -532,8 +530,7 @@ MODULE_PARM_DESC(default_dev_flags,
+ "scsi default device flag integer value");
+ /**
+- * scsi_dev_info_list_delete: called from scsi.c:exit_scsi to remove
+- * the scsi_dev_info_list.
++ * scsi_dev_info_list_delete - called from scsi.c:exit_scsi to remove the scsi_dev_info_list.
+ **/
+ void scsi_exit_devinfo(void)
+ {
+@@ -552,13 +549,12 @@ void scsi_exit_devinfo(void)
+ }
- static void
--lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
-- struct lpfc_iocbq *rspiocb)
-+lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
-+ struct lpfc_iocbq *rspiocb)
+ /**
+- * scsi_dev_list_init: set up the dynamic device list.
+- * @dev_list: string of device flags to add
++ * scsi_init_devinfo - set up the dynamic device list.
+ *
+ * Description:
+- * Add command line @dev_list entries, then add
++ * Add command line entries from scsi_dev_flags, then add
+ * scsi_static_device_list entries to the scsi device info list.
+- **/
++ */
+ int __init scsi_init_devinfo(void)
{
- struct lpfc_vport *vport = cmdiocb->vport;
- struct lpfc_dmabuf *inp;
-@@ -809,7 +849,7 @@ lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ #ifdef CONFIG_SCSI_PROC_FS
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index ebaca4c..547e85a 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -62,7 +62,7 @@ void scsi_eh_wakeup(struct Scsi_Host *shost)
+ * @shost: SCSI host to invoke error handling on.
+ *
+ * Schedule SCSI EH without scmd.
+- **/
++ */
+ void scsi_schedule_eh(struct Scsi_Host *shost)
+ {
+ unsigned long flags;
+@@ -86,7 +86,7 @@ EXPORT_SYMBOL_GPL(scsi_schedule_eh);
+ *
+ * Return value:
+ * 0 on failure.
+- **/
++ */
+ int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag)
+ {
+ struct Scsi_Host *shost = scmd->device->host;
+@@ -121,7 +121,7 @@ int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag)
+ * This should be turned into an inline function. Each scsi command
+ * has its own timer, and as it is added to the queue, we set up the
+ * timer. When the command completes, we cancel the timer.
+- **/
++ */
+ void scsi_add_timer(struct scsi_cmnd *scmd, int timeout,
+ void (*complete)(struct scsi_cmnd *))
+ {
+@@ -155,7 +155,7 @@ void scsi_add_timer(struct scsi_cmnd *scmd, int timeout,
+ * Return value:
+ * 1 if we were able to detach the timer. 0 if we blew it, and the
+ * timer function has already started to run.
+- **/
++ */
+ int scsi_delete_timer(struct scsi_cmnd *scmd)
+ {
+ int rtn;
+@@ -181,7 +181,7 @@ int scsi_delete_timer(struct scsi_cmnd *scmd)
+ * only in that the normal completion handling might run, but if the
+ * normal completion function determines that the timer has already
+ * fired, then it mustn't do anything.
+- **/
++ */
+ void scsi_times_out(struct scsi_cmnd *scmd)
+ {
+ enum scsi_eh_timer_return (* eh_timed_out)(struct scsi_cmnd *);
+@@ -224,7 +224,7 @@ void scsi_times_out(struct scsi_cmnd *scmd)
+ *
+ * Return value:
+ * 0 when dev was taken offline by error recovery. 1 OK to proceed.
+- **/
++ */
+ int scsi_block_when_processing_errors(struct scsi_device *sdev)
+ {
+ int online;
+@@ -245,7 +245,7 @@ EXPORT_SYMBOL(scsi_block_when_processing_errors);
+ * scsi_eh_prt_fail_stats - Log info on failures.
+ * @shost: scsi host being recovered.
+ * @work_q: Queue of scsi cmds to process.
+- **/
++ */
+ static inline void scsi_eh_prt_fail_stats(struct Scsi_Host *shost,
+ struct list_head *work_q)
+ {
+@@ -295,7 +295,7 @@ static inline void scsi_eh_prt_fail_stats(struct Scsi_Host *shost,
+ * Notes:
+ * When a deferred error is detected the current command has
+ * not been executed and needs retrying.
+- **/
++ */
+ static int scsi_check_sense(struct scsi_cmnd *scmd)
+ {
+ struct scsi_sense_hdr sshdr;
+@@ -398,7 +398,7 @@ static int scsi_check_sense(struct scsi_cmnd *scmd)
+ * queued during error recovery. the main difference here is that we
+ * don't allow for the possibility of retries here, and we are a lot
+ * more restrictive about what we consider acceptable.
+- **/
++ */
+ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+ {
+ /*
+@@ -452,7 +452,7 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+ /**
+ * scsi_eh_done - Completion function for error handling.
+ * @scmd: Cmd that is done.
+- **/
++ */
+ static void scsi_eh_done(struct scsi_cmnd *scmd)
+ {
+ struct completion *eh_action;
+@@ -469,7 +469,7 @@ static void scsi_eh_done(struct scsi_cmnd *scmd)
+ /**
+ * scsi_try_host_reset - ask host adapter to reset itself
+ * @scmd: SCSI cmd to send hsot reset.
+- **/
++ */
+ static int scsi_try_host_reset(struct scsi_cmnd *scmd)
+ {
+ unsigned long flags;
+@@ -498,7 +498,7 @@ static int scsi_try_host_reset(struct scsi_cmnd *scmd)
+ /**
+ * scsi_try_bus_reset - ask host to perform a bus reset
+ * @scmd: SCSI cmd to send bus reset.
+- **/
++ */
+ static int scsi_try_bus_reset(struct scsi_cmnd *scmd)
+ {
+ unsigned long flags;
+@@ -533,7 +533,7 @@ static int scsi_try_bus_reset(struct scsi_cmnd *scmd)
+ * unreliable for a given host, then the host itself needs to put a
+ * timer on it, and set the host back to a consistent state prior to
+ * returning.
+- **/
++ */
+ static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
+ {
+ int rtn;
+@@ -568,7 +568,7 @@ static int __scsi_try_to_abort_cmd(struct scsi_cmnd *scmd)
+ * author of the low-level driver wishes this operation to be timed,
+ * they can provide this facility themselves. helper functions in
+ * scsi_error.c can be supplied to make this easier to do.
+- **/
++ */
+ static int scsi_try_to_abort_cmd(struct scsi_cmnd *scmd)
+ {
+ /*
+@@ -601,7 +601,7 @@ static void scsi_abort_eh_cmnd(struct scsi_cmnd *scmd)
+ * sent must be one that does not transfer any data. If @sense_bytes != 0
+ * @cmnd is ignored and this functions sets up a REQUEST_SENSE command
+ * and cmnd buffers to read @sense_bytes into @scmd->sense_buffer.
+- **/
++ */
+ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
+ unsigned char *cmnd, int cmnd_size, unsigned sense_bytes)
+ {
+@@ -625,7 +625,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
- /* RFT request completes status <ulpStatus> CmdRsp <CmdRsp> */
- lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
-- "0209 RFT request completes, latt %d, "
-+ "0209 CT Request completes, latt %d, "
- "ulpStatus x%x CmdRsp x%x, Context x%x, Tag x%x\n",
- latt, irsp->ulpStatus,
- CTrsp->CommandResponse.bits.CmdRsp,
-@@ -848,10 +888,44 @@ out:
+ if (sense_bytes) {
+ scmd->request_bufflen = min_t(unsigned,
+- sizeof(scmd->sense_buffer), sense_bytes);
++ SCSI_SENSE_BUFFERSIZE, sense_bytes);
+ sg_init_one(&ses->sense_sgl, scmd->sense_buffer,
+ scmd->request_bufflen);
+ scmd->request_buffer = &ses->sense_sgl;
+@@ -657,7 +657,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
+ * Zero the sense buffer. The scsi spec mandates that any
+ * untransferred sense data should be interpreted as being zero.
+ */
+- memset(scmd->sense_buffer, 0, sizeof(scmd->sense_buffer));
++ memset(scmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
}
+ EXPORT_SYMBOL(scsi_eh_prep_cmnd);
- static void
-+lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
-+ struct lpfc_iocbq *rspiocb)
-+{
-+ IOCB_t *irsp = &rspiocb->iocb;
-+ struct lpfc_vport *vport = cmdiocb->vport;
-+
-+ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
-+ struct lpfc_dmabuf *outp;
-+ struct lpfc_sli_ct_request *CTrsp;
-+
-+ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-+ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
-+ if (CTrsp->CommandResponse.bits.CmdRsp ==
-+ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
-+ vport->ct_flags |= FC_CT_RFT_ID;
-+ }
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
-+ return;
-+}
-+
-+static void
- lpfc_cmpl_ct_cmd_rnn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_iocbq *rspiocb)
+@@ -667,7 +667,7 @@ EXPORT_SYMBOL(scsi_eh_prep_cmnd);
+ * @ses: saved information from a coresponding call to scsi_prep_eh_cmnd
+ *
+ * Undo any damage done by above scsi_prep_eh_cmnd().
+- **/
++ */
+ void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)
{
-- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
-+ IOCB_t *irsp = &rspiocb->iocb;
-+ struct lpfc_vport *vport = cmdiocb->vport;
-+
-+ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
-+ struct lpfc_dmabuf *outp;
-+ struct lpfc_sli_ct_request *CTrsp;
-+
-+ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-+ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
-+ if (CTrsp->CommandResponse.bits.CmdRsp ==
-+ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
-+ vport->ct_flags |= FC_CT_RNN_ID;
-+ }
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
- return;
- }
+ /*
+@@ -697,7 +697,7 @@ EXPORT_SYMBOL(scsi_eh_restore_cmnd);
+ *
+ * Return value:
+ * SUCCESS or FAILED or NEEDS_RETRY
+- **/
++ */
+ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
+ int cmnd_size, int timeout, unsigned sense_bytes)
+ {
+@@ -765,7 +765,7 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
+ * Some hosts automatically obtain this information, others require
+ * that we obtain it on our own. This function will *not* return until
+ * the command either times out, or it completes.
+- **/
++ */
+ static int scsi_request_sense(struct scsi_cmnd *scmd)
+ {
+ return scsi_send_eh_cmnd(scmd, NULL, 0, SENSE_TIMEOUT, ~0);
+@@ -779,10 +779,10 @@ static int scsi_request_sense(struct scsi_cmnd *scmd)
+ * Notes:
+ * We don't want to use the normal command completion while we are are
+ * still handling errors - it may cause other commands to be queued,
+- * and that would disturb what we are doing. thus we really want to
++ * and that would disturb what we are doing. Thus we really want to
+ * keep a list of pending commands for final completion, and once we
+ * are ready to leave error handling we handle completion for real.
+- **/
++ */
+ void scsi_eh_finish_cmd(struct scsi_cmnd *scmd, struct list_head *done_q)
+ {
+ scmd->device->host->host_failed--;
+@@ -794,7 +794,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
+ /**
+ * scsi_eh_get_sense - Get device sense data.
+ * @work_q: Queue of commands to process.
+- * @done_q: Queue of proccessed commands..
++ * @done_q: Queue of processed commands.
+ *
+ * Description:
+ * See if we need to request sense information. if so, then get it
+@@ -802,7 +802,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
+ *
+ * Notes:
+ * This has the unfortunate side effect that if a shost adapter does
+- * not automatically request sense information, that we end up shutting
++ * not automatically request sense information, we end up shutting
+ * it down before we request it.
+ *
+ * All drivers should request sense information internally these days,
+@@ -810,7 +810,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
+ *
+ * XXX: Long term this code should go away, but that needs an audit of
+ * all LLDDs first.
+- **/
++ */
+ int scsi_eh_get_sense(struct list_head *work_q,
+ struct list_head *done_q)
+ {
+@@ -858,11 +858,11 @@ EXPORT_SYMBOL_GPL(scsi_eh_get_sense);
-@@ -859,7 +933,20 @@ static void
- lpfc_cmpl_ct_cmd_rspn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_iocbq *rspiocb)
+ /**
+ * scsi_eh_tur - Send TUR to device.
+- * @scmd: Scsi cmd to send TUR
++ * @scmd: &scsi_cmnd to send TUR
+ *
+ * Return value:
+ * 0 - Device is ready. 1 - Device NOT ready.
+- **/
++ */
+ static int scsi_eh_tur(struct scsi_cmnd *scmd)
{
-- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
-+ IOCB_t *irsp = &rspiocb->iocb;
-+ struct lpfc_vport *vport = cmdiocb->vport;
-+
-+ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
-+ struct lpfc_dmabuf *outp;
-+ struct lpfc_sli_ct_request *CTrsp;
-+
-+ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-+ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
-+ if (CTrsp->CommandResponse.bits.CmdRsp ==
-+ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
-+ vport->ct_flags |= FC_CT_RSPN_ID;
-+ }
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
- return;
+ static unsigned char tur_command[6] = {TEST_UNIT_READY, 0, 0, 0, 0, 0};
+@@ -887,17 +887,17 @@ retry_tur:
}
-@@ -867,7 +954,32 @@ static void
- lpfc_cmpl_ct_cmd_rsnn_nn(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_iocbq *rspiocb)
+ /**
+- * scsi_eh_abort_cmds - abort canceled commands.
+- * @shost: scsi host being recovered.
+- * @eh_done_q: list_head for processed commands.
++ * scsi_eh_abort_cmds - abort pending commands.
++ * @work_q: &list_head for pending commands.
++ * @done_q: &list_head for processed commands.
+ *
+ * Decription:
+ * Try and see whether or not it makes sense to try and abort the
+- * running command. this only works out to be the case if we have one
+- * command that has timed out. if the command simply failed, it makes
++ * running command. This only works out to be the case if we have one
++ * command that has timed out. If the command simply failed, it makes
+ * no sense to try and abort the command, since as far as the shost
+ * adapter is concerned, it isn't running.
+- **/
++ */
+ static int scsi_eh_abort_cmds(struct list_head *work_q,
+ struct list_head *done_q)
{
-- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
-+ IOCB_t *irsp = &rspiocb->iocb;
-+ struct lpfc_vport *vport = cmdiocb->vport;
-+
-+ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
-+ struct lpfc_dmabuf *outp;
-+ struct lpfc_sli_ct_request *CTrsp;
-+
-+ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-+ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
-+ if (CTrsp->CommandResponse.bits.CmdRsp ==
-+ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
-+ vport->ct_flags |= FC_CT_RSNN_NN;
-+ }
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
-+ return;
-+}
-+
-+static void
-+lpfc_cmpl_ct_cmd_da_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
-+ struct lpfc_iocbq *rspiocb)
-+{
-+ struct lpfc_vport *vport = cmdiocb->vport;
-+
-+ /* even if it fails we will act as though it succeeded. */
-+ vport->ct_flags = 0;
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
- return;
- }
+@@ -931,11 +931,11 @@ static int scsi_eh_abort_cmds(struct list_head *work_q,
-@@ -878,10 +990,17 @@ lpfc_cmpl_ct_cmd_rff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- IOCB_t *irsp = &rspiocb->iocb;
- struct lpfc_vport *vport = cmdiocb->vport;
+ /**
+ * scsi_eh_try_stu - Send START_UNIT to device.
+- * @scmd: Scsi cmd to send START_UNIT
++ * @scmd: &scsi_cmnd to send START_UNIT
+ *
+ * Return value:
+ * 0 - Device is ready. 1 - Device NOT ready.
+- **/
++ */
+ static int scsi_eh_try_stu(struct scsi_cmnd *scmd)
+ {
+ static unsigned char stu_command[6] = {START_STOP, 0, 0, 0, 1, 0};
+@@ -956,13 +956,14 @@ static int scsi_eh_try_stu(struct scsi_cmnd *scmd)
-- if (irsp->ulpStatus != IOSTAT_SUCCESS)
-- vport->fc_flag |= FC_RFF_NOT_SUPPORTED;
-+ if (irsp->ulpStatus == IOSTAT_SUCCESS) {
-+ struct lpfc_dmabuf *outp;
-+ struct lpfc_sli_ct_request *CTrsp;
+ /**
+ * scsi_eh_stu - send START_UNIT if needed
+- * @shost: scsi host being recovered.
+- * @eh_done_q: list_head for processed commands.
++ * @shost: scsi host being recovered.
++ * @work_q: &list_head for pending commands.
++ * @done_q: &list_head for processed commands.
+ *
+ * Notes:
+ * If commands are failing due to not ready, initializing command required,
+ * try revalidating the device, which will end up sending a start unit.
+- **/
++ */
+ static int scsi_eh_stu(struct Scsi_Host *shost,
+ struct list_head *work_q,
+ struct list_head *done_q)
+@@ -1008,14 +1009,15 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
+ /**
+ * scsi_eh_bus_device_reset - send bdr if needed
+ * @shost: scsi host being recovered.
+- * @eh_done_q: list_head for processed commands.
++ * @work_q: &list_head for pending commands.
++ * @done_q: &list_head for processed commands.
+ *
+ * Notes:
+- * Try a bus device reset. still, look to see whether we have multiple
++ * Try a bus device reset. Still, look to see whether we have multiple
+ * devices that are jammed or not - if we have multiple devices, it
+ * makes no sense to try bus_device_reset - we really would need to try
+ * a bus_reset instead.
+- **/
++ */
+ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
+ struct list_head *work_q,
+ struct list_head *done_q)
+@@ -1063,9 +1065,10 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
-- lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
-+ outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-+ CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
-+ if (CTrsp->CommandResponse.bits.CmdRsp ==
-+ be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
-+ vport->ct_flags |= FC_CT_RFF_ID;
-+ }
-+ lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
- return;
+ /**
+ * scsi_eh_bus_reset - send a bus reset
+- * @shost: scsi host being recovered.
+- * @eh_done_q: list_head for processed commands.
+- **/
++ * @shost: scsi host being recovered.
++ * @work_q: &list_head for pending commands.
++ * @done_q: &list_head for processed commands.
++ */
+ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
+ struct list_head *work_q,
+ struct list_head *done_q)
+@@ -1122,7 +1125,7 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
+ * scsi_eh_host_reset - send a host reset
+ * @work_q: list_head for processed commands.
+ * @done_q: list_head for processed commands.
+- **/
++ */
+ static int scsi_eh_host_reset(struct list_head *work_q,
+ struct list_head *done_q)
+ {
+@@ -1157,8 +1160,7 @@ static int scsi_eh_host_reset(struct list_head *work_q,
+ * scsi_eh_offline_sdevs - offline scsi devices that fail to recover
+ * @work_q: list_head for processed commands.
+ * @done_q: list_head for processed commands.
+- *
+- **/
++ */
+ static void scsi_eh_offline_sdevs(struct list_head *work_q,
+ struct list_head *done_q)
+ {
+@@ -1191,7 +1193,7 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
+ * is woken. In cases where the error code indicates an error that
+ * doesn't require the error handler read (i.e. we don't need to
+ * abort/reset), this function should return SUCCESS.
+- **/
++ */
+ int scsi_decide_disposition(struct scsi_cmnd *scmd)
+ {
+ int rtn;
+@@ -1372,7 +1374,7 @@ int scsi_decide_disposition(struct scsi_cmnd *scmd)
+ *
+ * If scsi_allocate_request() fails for what ever reason, we
+ * completely forget to lock the door.
+- **/
++ */
+ static void scsi_eh_lock_door(struct scsi_device *sdev)
+ {
+ unsigned char cmnd[MAX_COMMAND_SIZE];
+@@ -1396,7 +1398,7 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
+ * Notes:
+ * When we entered the error handler, we blocked all further i/o to
+ * this device. we need to 'reverse' this process.
+- **/
++ */
+ static void scsi_restart_operations(struct Scsi_Host *shost)
+ {
+ struct scsi_device *sdev;
+@@ -1440,9 +1442,9 @@ static void scsi_restart_operations(struct Scsi_Host *shost)
+ /**
+ * scsi_eh_ready_devs - check device ready state and recover if not.
+ * @shost: host to be recovered.
+- * @eh_done_q: list_head for processed commands.
+- *
+- **/
++ * @work_q: &list_head for pending commands.
++ * @done_q: &list_head for processed commands.
++ */
+ void scsi_eh_ready_devs(struct Scsi_Host *shost,
+ struct list_head *work_q,
+ struct list_head *done_q)
+@@ -1458,8 +1460,7 @@ EXPORT_SYMBOL_GPL(scsi_eh_ready_devs);
+ /**
+ * scsi_eh_flush_done_q - finish processed commands or retry them.
+ * @done_q: list_head of processed commands.
+- *
+- **/
++ */
+ void scsi_eh_flush_done_q(struct list_head *done_q)
+ {
+ struct scsi_cmnd *scmd, *next;
+@@ -1513,7 +1514,7 @@ EXPORT_SYMBOL(scsi_eh_flush_done_q);
+ * scsi_finish_cmd() called for it. we do all of the retry stuff
+ * here, so when we restart the host after we return it should have an
+ * empty queue.
+- **/
++ */
+ static void scsi_unjam_host(struct Scsi_Host *shost)
+ {
+ unsigned long flags;
+@@ -1540,7 +1541,7 @@ static void scsi_unjam_host(struct Scsi_Host *shost)
+ * Notes:
+ * This is the main error handling loop. This is run as a kernel thread
+ * for every SCSI host and handles all error handling activity.
+- **/
++ */
+ int scsi_error_handler(void *data)
+ {
+ struct Scsi_Host *shost = data;
+@@ -1769,7 +1770,7 @@ EXPORT_SYMBOL(scsi_reset_provider);
+ *
+ * Return value:
+ * 1 if valid sense data information found, else 0;
+- **/
++ */
+ int scsi_normalize_sense(const u8 *sense_buffer, int sb_len,
+ struct scsi_sense_hdr *sshdr)
+ {
+@@ -1819,14 +1820,12 @@ int scsi_command_normalize_sense(struct scsi_cmnd *cmd,
+ struct scsi_sense_hdr *sshdr)
+ {
+ return scsi_normalize_sense(cmd->sense_buffer,
+- sizeof(cmd->sense_buffer), sshdr);
++ SCSI_SENSE_BUFFERSIZE, sshdr);
}
+ EXPORT_SYMBOL(scsi_command_normalize_sense);
-@@ -1001,6 +1120,8 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
- bpl->tus.f.bdeSize = RSPN_REQUEST_SZ;
- else if (cmdcode == SLI_CTNS_RSNN_NN)
- bpl->tus.f.bdeSize = RSNN_REQUEST_SZ;
-+ else if (cmdcode == SLI_CTNS_DA_ID)
-+ bpl->tus.f.bdeSize = DA_ID_REQUEST_SZ;
- else if (cmdcode == SLI_CTNS_RFF_ID)
- bpl->tus.f.bdeSize = RFF_REQUEST_SZ;
- else
-@@ -1029,31 +1150,34 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
- case SLI_CTNS_GFF_ID:
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_GFF_ID);
-- CtReq->un.gff.PortId = be32_to_cpu(context);
-+ CtReq->un.gff.PortId = cpu_to_be32(context);
- cmpl = lpfc_cmpl_ct_cmd_gff_id;
- break;
+ /**
+- * scsi_sense_desc_find - search for a given descriptor type in
+- * descriptor sense data format.
+- *
++ * scsi_sense_desc_find - search for a given descriptor type in descriptor sense data format.
+ * @sense_buffer: byte array of descriptor format sense data
+ * @sb_len: number of valid bytes in sense_buffer
+ * @desc_type: value of descriptor type to find
+@@ -1837,7 +1836,7 @@ EXPORT_SYMBOL(scsi_command_normalize_sense);
+ *
+ * Return value:
+ * pointer to start of (first) descriptor if found else NULL
+- **/
++ */
+ const u8 * scsi_sense_desc_find(const u8 * sense_buffer, int sb_len,
+ int desc_type)
+ {
+@@ -1865,9 +1864,7 @@ const u8 * scsi_sense_desc_find(const u8 * sense_buffer, int sb_len,
+ EXPORT_SYMBOL(scsi_sense_desc_find);
- case SLI_CTNS_RFT_ID:
-+ vport->ct_flags &= ~FC_CT_RFT_ID;
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_RFT_ID);
-- CtReq->un.rft.PortId = be32_to_cpu(vport->fc_myDID);
-+ CtReq->un.rft.PortId = cpu_to_be32(vport->fc_myDID);
- CtReq->un.rft.fcpReg = 1;
- cmpl = lpfc_cmpl_ct_cmd_rft_id;
- break;
+ /**
+- * scsi_get_sense_info_fld - attempts to get information field from
+- * sense data (either fixed or descriptor format)
+- *
++ * scsi_get_sense_info_fld - get information field from sense data (either fixed or descriptor format)
+ * @sense_buffer: byte array of sense data
+ * @sb_len: number of valid bytes in sense_buffer
+ * @info_out: pointer to 64 integer where 8 or 4 byte information
+@@ -1875,7 +1872,7 @@ EXPORT_SYMBOL(scsi_sense_desc_find);
+ *
+ * Return value:
+ * 1 if information field found, 0 if not found.
+- **/
++ */
+ int scsi_get_sense_info_fld(const u8 * sense_buffer, int sb_len,
+ u64 * info_out)
+ {
+diff --git a/drivers/scsi/scsi_ioctl.c b/drivers/scsi/scsi_ioctl.c
+index 32293f4..28b19ef 100644
+--- a/drivers/scsi/scsi_ioctl.c
++++ b/drivers/scsi/scsi_ioctl.c
+@@ -174,10 +174,15 @@ static int scsi_ioctl_get_pci(struct scsi_device *sdev, void __user *arg)
+ }
- case SLI_CTNS_RNN_ID:
-+ vport->ct_flags &= ~FC_CT_RNN_ID;
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_RNN_ID);
-- CtReq->un.rnn.PortId = be32_to_cpu(vport->fc_myDID);
-+ CtReq->un.rnn.PortId = cpu_to_be32(vport->fc_myDID);
- memcpy(CtReq->un.rnn.wwnn, &vport->fc_nodename,
- sizeof (struct lpfc_name));
- cmpl = lpfc_cmpl_ct_cmd_rnn_id;
- break;
- case SLI_CTNS_RSPN_ID:
-+ vport->ct_flags &= ~FC_CT_RSPN_ID;
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_RSPN_ID);
-- CtReq->un.rspn.PortId = be32_to_cpu(vport->fc_myDID);
-+ CtReq->un.rspn.PortId = cpu_to_be32(vport->fc_myDID);
- size = sizeof(CtReq->un.rspn.symbname);
- CtReq->un.rspn.len =
- lpfc_vport_symbolic_port_name(vport,
-@@ -1061,6 +1185,7 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
- cmpl = lpfc_cmpl_ct_cmd_rspn_id;
- break;
- case SLI_CTNS_RSNN_NN:
-+ vport->ct_flags &= ~FC_CT_RSNN_NN;
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_RSNN_NN);
- memcpy(CtReq->un.rsnn.wwnn, &vport->fc_nodename,
-@@ -1071,11 +1196,18 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
- CtReq->un.rsnn.symbname, size);
- cmpl = lpfc_cmpl_ct_cmd_rsnn_nn;
- break;
-+ case SLI_CTNS_DA_ID:
-+ /* Implement DA_ID Nameserver request */
-+ CtReq->CommandResponse.bits.CmdRsp =
-+ be16_to_cpu(SLI_CTNS_DA_ID);
-+ CtReq->un.da_id.port_id = cpu_to_be32(vport->fc_myDID);
-+ cmpl = lpfc_cmpl_ct_cmd_da_id;
-+ break;
- case SLI_CTNS_RFF_ID:
-- vport->fc_flag &= ~FC_RFF_NOT_SUPPORTED;
-+ vport->ct_flags &= ~FC_CT_RFF_ID;
- CtReq->CommandResponse.bits.CmdRsp =
- be16_to_cpu(SLI_CTNS_RFF_ID);
-- CtReq->un.rff.PortId = be32_to_cpu(vport->fc_myDID);;
-+ CtReq->un.rff.PortId = cpu_to_be32(vport->fc_myDID);;
- CtReq->un.rff.fbits = FC4_FEATURE_INIT;
- CtReq->un.rff.type_code = FC_FCP_DATA;
- cmpl = lpfc_cmpl_ct_cmd_rff_id;
-diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
-index d6a98bc..783d1ee 100644
---- a/drivers/scsi/lpfc/lpfc_debugfs.c
-+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
-@@ -43,6 +43,7 @@
- #include "lpfc_crtn.h"
- #include "lpfc_vport.h"
- #include "lpfc_version.h"
-+#include "lpfc_compat.h"
- #include "lpfc_debugfs.h"
+-/*
+- * the scsi_ioctl() function differs from most ioctls in that it does
+- * not take a major/minor number as the dev field. Rather, it takes
+- * a pointer to a scsi_devices[] element, a structure.
++/**
++ * scsi_ioctl - Dispatch ioctl to scsi device
++ * @sdev: scsi device receiving ioctl
++ * @cmd: which ioctl is it
++ * @arg: data associated with ioctl
++ *
++ * Description: The scsi_ioctl() function differs from most ioctls in that it
++ * does not take a major/minor number as the dev field. Rather, it takes
++ * a pointer to a &struct scsi_device.
+ */
+ int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+ {
+@@ -239,7 +244,7 @@ int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+ return scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
+ case SCSI_IOCTL_TEST_UNIT_READY:
+ return scsi_test_unit_ready(sdev, IOCTL_NORMAL_TIMEOUT,
+- NORMAL_RETRIES);
++ NORMAL_RETRIES, NULL);
+ case SCSI_IOCTL_START_UNIT:
+ scsi_cmd[0] = START_STOP;
+ scsi_cmd[1] = 0;
+@@ -264,9 +269,12 @@ int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+ }
+ EXPORT_SYMBOL(scsi_ioctl);
- #ifdef CONFIG_LPFC_DEBUG_FS
-@@ -75,18 +76,18 @@ module_param(lpfc_debugfs_enable, int, 0);
- MODULE_PARM_DESC(lpfc_debugfs_enable, "Enable debugfs services");
+-/*
+- * the scsi_nonblock_ioctl() function is designed for ioctls which may
+- * be executed even if the device is in recovery.
++/**
++ * scsi_nonblock_ioctl() - Handle SG_SCSI_RESET
++ * @sdev: scsi device receiving ioctl
++ * @cmd: Must be SC_SCSI_RESET
++ * @arg: pointer to int containing SG_SCSI_RESET_{DEVICE,BUS,HOST}
++ * @filp: either NULL or a &struct file which must have the O_NONBLOCK flag.
+ */
+ int scsi_nonblockable_ioctl(struct scsi_device *sdev, int cmd,
+ void __user *arg, struct file *filp)
+@@ -276,7 +284,7 @@ int scsi_nonblockable_ioctl(struct scsi_device *sdev, int cmd,
+ /* The first set of iocts may be executed even if we're doing
+ * error processing, as long as the device was opened
+ * non-blocking */
+- if (filp && filp->f_flags & O_NONBLOCK) {
++ if (filp && (filp->f_flags & O_NONBLOCK)) {
+ if (scsi_host_in_recovery(sdev->host))
+ return -ENODEV;
+ } else if (!scsi_block_when_processing_errors(sdev))
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index a9ac5b1..7c4c889 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -175,7 +175,7 @@ int scsi_queue_insert(struct scsi_cmnd *cmd, int reason)
+ *
+ * returns the req->errors value which is the scsi_cmnd result
+ * field.
+- **/
++ */
+ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
+ int data_direction, void *buffer, unsigned bufflen,
+ unsigned char *sense, int timeout, int retries, int flags)
+@@ -274,7 +274,7 @@ static void scsi_bi_endio(struct bio *bio, int error)
+ /**
+ * scsi_req_map_sg - map a scatterlist into a request
+ * @rq: request to fill
+- * @sg: scatterlist
++ * @sgl: scatterlist
+ * @nsegs: number of elements
+ * @bufflen: len of buffer
+ * @gfp: memory allocation flags
+@@ -365,14 +365,16 @@ free_bios:
+ * @sdev: scsi device
+ * @cmd: scsi command
+ * @cmd_len: length of scsi cdb
+- * @data_direction: data direction
++ * @data_direction: DMA_TO_DEVICE, DMA_FROM_DEVICE, or DMA_NONE
+ * @buffer: data buffer (this can be a kernel buffer or scatterlist)
+ * @bufflen: len of buffer
+ * @use_sg: if buffer is a scatterlist this is the number of elements
+ * @timeout: request timeout in seconds
+ * @retries: number of times to retry request
+- * @flags: or into request flags
+- **/
++ * @privdata: data passed to done()
++ * @done: callback function when done
++ * @gfp: memory allocation flags
++ */
+ int scsi_execute_async(struct scsi_device *sdev, const unsigned char *cmd,
+ int cmd_len, int data_direction, void *buffer, unsigned bufflen,
+ int use_sg, int timeout, int retries, void *privdata,
+@@ -439,7 +441,7 @@ static void scsi_init_cmd_errh(struct scsi_cmnd *cmd)
+ {
+ cmd->serial_number = 0;
+ cmd->resid = 0;
+- memset(cmd->sense_buffer, 0, sizeof cmd->sense_buffer);
++ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ if (cmd->cmd_len == 0)
+ cmd->cmd_len = COMMAND_SIZE(cmd->cmnd[0]);
+ }
+@@ -524,7 +526,7 @@ static void scsi_run_queue(struct request_queue *q)
+ struct Scsi_Host *shost = sdev->host;
+ unsigned long flags;
- /* This MUST be a power of 2 */
--static int lpfc_debugfs_max_disc_trc = 0;
-+static int lpfc_debugfs_max_disc_trc;
- module_param(lpfc_debugfs_max_disc_trc, int, 0);
- MODULE_PARM_DESC(lpfc_debugfs_max_disc_trc,
- "Set debugfs discovery trace depth");
+- if (sdev->single_lun)
++ if (scsi_target(sdev)->single_lun)
+ scsi_single_lun_run(sdev);
- /* This MUST be a power of 2 */
--static int lpfc_debugfs_max_slow_ring_trc = 0;
-+static int lpfc_debugfs_max_slow_ring_trc;
- module_param(lpfc_debugfs_max_slow_ring_trc, int, 0);
- MODULE_PARM_DESC(lpfc_debugfs_max_slow_ring_trc,
- "Set debugfs slow ring trace depth");
+ spin_lock_irqsave(shost->host_lock, flags);
+@@ -632,7 +634,7 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
+ * of upper level post-processing and scsi_io_completion).
+ *
+ * Arguments: cmd - command that is complete.
+- * uptodate - 1 if I/O indicates success, <= 0 for I/O error.
++ * error - 0 if I/O indicates success, < 0 for I/O error.
+ * bytes - number of bytes of completed I/O
+ * requeue - indicates whether we should requeue leftovers.
+ *
+@@ -647,26 +649,25 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
+ * at some point during this call.
+ * Notes: If cmd was requeued, upon return it will be a stale pointer.
+ */
+-static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
++static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int error,
+ int bytes, int requeue)
+ {
+ struct request_queue *q = cmd->device->request_queue;
+ struct request *req = cmd->request;
+- unsigned long flags;
--static int lpfc_debugfs_mask_disc_trc = 0;
-+int lpfc_debugfs_mask_disc_trc;
- module_param(lpfc_debugfs_mask_disc_trc, int, 0);
- MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc,
- "Set debugfs discovery trace mask");
-@@ -100,8 +101,11 @@ MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc,
- #define LPFC_NODELIST_SIZE 8192
- #define LPFC_NODELIST_ENTRY_SIZE 120
+ /*
+ * If there are blocks left over at the end, set up the command
+ * to queue the remainder of them.
+ */
+- if (end_that_request_chunk(req, uptodate, bytes)) {
++ if (blk_end_request(req, error, bytes)) {
+ int leftover = (req->hard_nr_sectors << 9);
--/* dumpslim output buffer size */
--#define LPFC_DUMPSLIM_SIZE 4096
-+/* dumpHBASlim output buffer size */
-+#define LPFC_DUMPHBASLIM_SIZE 4096
-+
-+/* dumpHostSlim output buffer size */
-+#define LPFC_DUMPHOSTSLIM_SIZE 4096
+ if (blk_pc_request(req))
+ leftover = req->data_len;
- /* hbqinfo output buffer size */
- #define LPFC_HBQINFO_SIZE 8192
-@@ -243,16 +247,17 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
- raw_index = phba->hbq_get[i];
- getidx = le32_to_cpu(raw_index);
- len += snprintf(buf+len, size-len,
-- "entrys:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
-- hbqs->entry_count, hbqs->hbqPutIdx, hbqs->next_hbqPutIdx,
-- hbqs->local_hbqGetIdx, getidx);
-+ "entrys:%d bufcnt:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
-+ hbqs->entry_count, hbqs->buffer_count, hbqs->hbqPutIdx,
-+ hbqs->next_hbqPutIdx, hbqs->local_hbqGetIdx, getidx);
+ /* kill remainder if no retrys */
+- if (!uptodate && blk_noretry_request(req))
+- end_that_request_chunk(req, 0, leftover);
++ if (error && blk_noretry_request(req))
++ blk_end_request(req, error, leftover);
+ else {
+ if (requeue) {
+ /*
+@@ -681,14 +682,6 @@ static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
+ }
+ }
- hbqe = (struct lpfc_hbq_entry *) phba->hbqs[i].hbq_virt;
- for (j=0; j<hbqs->entry_count; j++) {
- len += snprintf(buf+len, size-len,
- "%03d: %08x %04x %05x ", j,
-- hbqe->bde.addrLow, hbqe->bde.tus.w, hbqe->buffer_tag);
+- add_disk_randomness(req->rq_disk);
-
-+ le32_to_cpu(hbqe->bde.addrLow),
-+ le32_to_cpu(hbqe->bde.tus.w),
-+ le32_to_cpu(hbqe->buffer_tag));
- i = 0;
- found = 0;
-
-@@ -276,7 +281,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
- list_for_each_entry(d_buf, &hbqs->hbq_buffer_list, list) {
- hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
- phys = ((uint64_t)hbq_buf->dbuf.phys & 0xffffffff);
-- if (phys == hbqe->bde.addrLow) {
-+ if (phys == le32_to_cpu(hbqe->bde.addrLow)) {
- len += snprintf(buf+len, size-len,
- "Buf%d: %p %06x\n", i,
- hbq_buf->dbuf.virt, hbq_buf->tag);
-@@ -297,18 +302,58 @@ skipit:
- return len;
+- spin_lock_irqsave(q->queue_lock, flags);
+- if (blk_rq_tagged(req))
+- blk_queue_end_tag(q, req);
+- end_that_request_last(req, uptodate);
+- spin_unlock_irqrestore(q->queue_lock, flags);
+-
+ /*
+ * This will goose the queue request function at the end, so we don't
+ * need to worry about launching another command.
+@@ -737,138 +730,43 @@ static inline unsigned int scsi_sgtable_index(unsigned short nents)
+ return index;
}
-+static int lpfc_debugfs_last_hba_slim_off;
-+
-+static int
-+lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
-+{
-+ int len = 0;
-+ int i, off;
-+ uint32_t *ptr;
-+ char buffer[1024];
-+
-+ off = 0;
-+ spin_lock_irq(&phba->hbalock);
-+
-+ len += snprintf(buf+len, size-len, "HBA SLIM\n");
-+ lpfc_memcpy_from_slim(buffer,
-+ ((uint8_t *)phba->MBslimaddr) + lpfc_debugfs_last_hba_slim_off,
-+ 1024);
-+
-+ ptr = (uint32_t *)&buffer[0];
-+ off = lpfc_debugfs_last_hba_slim_off;
-+
-+ /* Set it up for the next time */
-+ lpfc_debugfs_last_hba_slim_off += 1024;
-+ if (lpfc_debugfs_last_hba_slim_off >= 4096)
-+ lpfc_debugfs_last_hba_slim_off = 0;
-+
-+ i = 1024;
-+ while (i > 0) {
-+ len += snprintf(buf+len, size-len,
-+ "%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
-+ off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
-+ *(ptr+5), *(ptr+6), *(ptr+7));
-+ ptr += 8;
-+ i -= (8 * sizeof(uint32_t));
-+ off += (8 * sizeof(uint32_t));
-+ }
-+
-+ spin_unlock_irq(&phba->hbalock);
-+ return len;
-+}
-+
- static int
--lpfc_debugfs_dumpslim_data(struct lpfc_hba *phba, char *buf, int size)
-+lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
++static void scsi_sg_free(struct scatterlist *sgl, unsigned int nents)
{
- int len = 0;
-- int cnt, i, off;
-+ int i, off;
- uint32_t word0, word1, word2, word3;
- uint32_t *ptr;
- struct lpfc_pgp *pgpp;
- struct lpfc_sli *psli = &phba->sli;
- struct lpfc_sli_ring *pring;
-
-- cnt = LPFC_DUMPSLIM_SIZE;
- off = 0;
- spin_lock_irq(&phba->hbalock);
-
-@@ -620,7 +665,34 @@ out:
- }
+ struct scsi_host_sg_pool *sgp;
+- struct scatterlist *sgl, *prev, *ret;
+- unsigned int index;
+- int this, left;
+-
+- BUG_ON(!cmd->use_sg);
+-
+- left = cmd->use_sg;
+- ret = prev = NULL;
+- do {
+- this = left;
+- if (this > SCSI_MAX_SG_SEGMENTS) {
+- this = SCSI_MAX_SG_SEGMENTS - 1;
+- index = SG_MEMPOOL_NR - 1;
+- } else
+- index = scsi_sgtable_index(this);
- static int
--lpfc_debugfs_dumpslim_open(struct inode *inode, struct file *file)
-+lpfc_debugfs_dumpHBASlim_open(struct inode *inode, struct file *file)
-+{
-+ struct lpfc_hba *phba = inode->i_private;
-+ struct lpfc_debug *debug;
-+ int rc = -ENOMEM;
-+
-+ debug = kmalloc(sizeof(*debug), GFP_KERNEL);
-+ if (!debug)
-+ goto out;
-+
-+ /* Round to page boundry */
-+ debug->buffer = kmalloc(LPFC_DUMPHBASLIM_SIZE, GFP_KERNEL);
-+ if (!debug->buffer) {
-+ kfree(debug);
-+ goto out;
-+ }
-+
-+ debug->len = lpfc_debugfs_dumpHBASlim_data(phba, debug->buffer,
-+ LPFC_DUMPHBASLIM_SIZE);
-+ file->private_data = debug;
-+
-+ rc = 0;
-+out:
-+ return rc;
+- left -= this;
+-
+- sgp = scsi_sg_pools + index;
+-
+- sgl = mempool_alloc(sgp->pool, gfp_mask);
+- if (unlikely(!sgl))
+- goto enomem;
++ sgp = scsi_sg_pools + scsi_sgtable_index(nents);
++ mempool_free(sgl, sgp->pool);
+}
-+
-+static int
-+lpfc_debugfs_dumpHostSlim_open(struct inode *inode, struct file *file)
- {
- struct lpfc_hba *phba = inode->i_private;
- struct lpfc_debug *debug;
-@@ -631,14 +703,14 @@ lpfc_debugfs_dumpslim_open(struct inode *inode, struct file *file)
- goto out;
-
- /* Round to page boundry */
-- debug->buffer = kmalloc(LPFC_DUMPSLIM_SIZE, GFP_KERNEL);
-+ debug->buffer = kmalloc(LPFC_DUMPHOSTSLIM_SIZE, GFP_KERNEL);
- if (!debug->buffer) {
- kfree(debug);
- goto out;
- }
-- debug->len = lpfc_debugfs_dumpslim_data(phba, debug->buffer,
-- LPFC_DUMPSLIM_SIZE);
-+ debug->len = lpfc_debugfs_dumpHostSlim_data(phba, debug->buffer,
-+ LPFC_DUMPHOSTSLIM_SIZE);
- file->private_data = debug;
+- sg_init_table(sgl, sgp->size);
++static struct scatterlist *scsi_sg_alloc(unsigned int nents, gfp_t gfp_mask)
++{
++ struct scsi_host_sg_pool *sgp;
- rc = 0;
-@@ -741,10 +813,19 @@ static struct file_operations lpfc_debugfs_op_hbqinfo = {
- .release = lpfc_debugfs_release,
- };
+- /*
+- * first loop through, set initial index and return value
+- */
+- if (!ret)
+- ret = sgl;
++ sgp = scsi_sg_pools + scsi_sgtable_index(nents);
++ return mempool_alloc(sgp->pool, gfp_mask);
++}
--#undef lpfc_debugfs_op_dumpslim
--static struct file_operations lpfc_debugfs_op_dumpslim = {
-+#undef lpfc_debugfs_op_dumpHBASlim
-+static struct file_operations lpfc_debugfs_op_dumpHBASlim = {
-+ .owner = THIS_MODULE,
-+ .open = lpfc_debugfs_dumpHBASlim_open,
-+ .llseek = lpfc_debugfs_lseek,
-+ .read = lpfc_debugfs_read,
-+ .release = lpfc_debugfs_release,
-+};
-+
-+#undef lpfc_debugfs_op_dumpHostSlim
-+static struct file_operations lpfc_debugfs_op_dumpHostSlim = {
- .owner = THIS_MODULE,
-- .open = lpfc_debugfs_dumpslim_open,
-+ .open = lpfc_debugfs_dumpHostSlim_open,
- .llseek = lpfc_debugfs_lseek,
- .read = lpfc_debugfs_read,
- .release = lpfc_debugfs_release,
-@@ -812,15 +893,27 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
- goto debug_failed;
- }
+- /*
+- * chain previous sglist, if any. we know the previous
+- * sglist must be the biggest one, or we would not have
+- * ended up doing another loop.
+- */
+- if (prev)
+- sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl);
++int scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
++{
++ int ret;
-- /* Setup dumpslim */
-- snprintf(name, sizeof(name), "dumpslim");
-- phba->debug_dumpslim =
-+ /* Setup dumpHBASlim */
-+ snprintf(name, sizeof(name), "dumpHBASlim");
-+ phba->debug_dumpHBASlim =
-+ debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
-+ phba->hba_debugfs_root,
-+ phba, &lpfc_debugfs_op_dumpHBASlim);
-+ if (!phba->debug_dumpHBASlim) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-+ "0409 Cannot create debugfs dumpHBASlim\n");
-+ goto debug_failed;
-+ }
-+
-+ /* Setup dumpHostSlim */
-+ snprintf(name, sizeof(name), "dumpHostSlim");
-+ phba->debug_dumpHostSlim =
- debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
- phba->hba_debugfs_root,
-- phba, &lpfc_debugfs_op_dumpslim);
-- if (!phba->debug_dumpslim) {
-+ phba, &lpfc_debugfs_op_dumpHostSlim);
-+ if (!phba->debug_dumpHostSlim) {
- lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-- "0409 Cannot create debugfs dumpslim\n");
-+ "0409 Cannot create debugfs dumpHostSlim\n");
- goto debug_failed;
- }
+- /*
+- * if we have nothing left, mark the last segment as
+- * end-of-list
+- */
+- if (!left)
+- sg_mark_end(&sgl[this - 1]);
++ BUG_ON(!cmd->use_sg);
-@@ -970,9 +1063,13 @@ lpfc_debugfs_terminate(struct lpfc_vport *vport)
- debugfs_remove(phba->debug_hbqinfo); /* hbqinfo */
- phba->debug_hbqinfo = NULL;
- }
-- if (phba->debug_dumpslim) {
-- debugfs_remove(phba->debug_dumpslim); /* dumpslim */
-- phba->debug_dumpslim = NULL;
-+ if (phba->debug_dumpHBASlim) {
-+ debugfs_remove(phba->debug_dumpHBASlim); /* HBASlim */
-+ phba->debug_dumpHBASlim = NULL;
-+ }
-+ if (phba->debug_dumpHostSlim) {
-+ debugfs_remove(phba->debug_dumpHostSlim); /* HostSlim */
-+ phba->debug_dumpHostSlim = NULL;
- }
- if (phba->slow_ring_trc) {
- kfree(phba->slow_ring_trc);
-diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
-index aacac9a..cfe81c5 100644
---- a/drivers/scsi/lpfc/lpfc_disc.h
-+++ b/drivers/scsi/lpfc/lpfc_disc.h
-@@ -36,7 +36,6 @@ enum lpfc_work_type {
- LPFC_EVT_WARM_START,
- LPFC_EVT_KILL,
- LPFC_EVT_ELS_RETRY,
-- LPFC_EVT_DEV_LOSS_DELAY,
- LPFC_EVT_DEV_LOSS,
- };
+- /*
+- * don't allow subsequent mempool allocs to sleep, it would
+- * violate the mempool principle.
+- */
+- gfp_mask &= ~__GFP_WAIT;
+- gfp_mask |= __GFP_HIGH;
+- prev = sgl;
+- } while (left);
++ ret = __sg_alloc_table(&cmd->sg_table, cmd->use_sg,
++ SCSI_MAX_SG_SEGMENTS, gfp_mask, scsi_sg_alloc);
++ if (unlikely(ret))
++ __sg_free_table(&cmd->sg_table, SCSI_MAX_SG_SEGMENTS,
++ scsi_sg_free);
-@@ -92,6 +91,7 @@ struct lpfc_nodelist {
- #define NLP_LOGO_SND 0x100 /* sent LOGO request for this entry */
- #define NLP_RNID_SND 0x400 /* sent RNID request for this entry */
- #define NLP_ELS_SND_MASK 0x7e0 /* sent ELS request for this entry */
-+#define NLP_DEFER_RM 0x10000 /* Remove this ndlp if no longer used */
- #define NLP_DELAY_TMO 0x20000 /* delay timeout is running for node */
- #define NLP_NPR_2B_DISC 0x40000 /* node is included in num_disc_nodes */
- #define NLP_RCV_PLOGI 0x80000 /* Rcv'ed PLOGI from remote system */
-diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
-index 8085900..c6b739d 100644
---- a/drivers/scsi/lpfc/lpfc_els.c
-+++ b/drivers/scsi/lpfc/lpfc_els.c
-@@ -18,7 +18,7 @@
- * more details, a copy of which can be found in the file COPYING *
- * included with this package. *
- *******************************************************************/
+- /*
+- * ->use_sg may get modified after dma mapping has potentially
+- * shrunk the number of segments, so keep a copy of it for free.
+- */
+- cmd->__use_sg = cmd->use_sg;
++ cmd->request_buffer = cmd->sg_table.sgl;
+ return ret;
+-enomem:
+- if (ret) {
+- /*
+- * Free entries chained off ret. Since we were trying to
+- * allocate another sglist, we know that all entries are of
+- * the max size.
+- */
+- sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1;
+- prev = ret;
+- ret = &ret[SCSI_MAX_SG_SEGMENTS - 1];
-
-+/* See Fibre Channel protocol T11 FC-LS for details */
- #include <linux/blkdev.h>
- #include <linux/pci.h>
- #include <linux/interrupt.h>
-@@ -42,6 +42,14 @@ static int lpfc_els_retry(struct lpfc_hba *, struct lpfc_iocbq *,
- struct lpfc_iocbq *);
- static void lpfc_cmpl_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *,
- struct lpfc_iocbq *);
-+static void lpfc_fabric_abort_vport(struct lpfc_vport *vport);
-+static int lpfc_issue_els_fdisc(struct lpfc_vport *vport,
-+ struct lpfc_nodelist *ndlp, uint8_t retry);
-+static int lpfc_issue_fabric_iocb(struct lpfc_hba *phba,
-+ struct lpfc_iocbq *iocb);
-+static void lpfc_register_new_vport(struct lpfc_hba *phba,
-+ struct lpfc_vport *vport,
-+ struct lpfc_nodelist *ndlp);
-
- static int lpfc_max_els_tries = 3;
-
-@@ -109,14 +117,11 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
-
- /* fill in BDEs for command */
- /* Allocate buffer for command payload */
-- if (((pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
-- ((pcmd->virt = lpfc_mbuf_alloc(phba,
-- MEM_PRI, &(pcmd->phys))) == 0)) {
-- kfree(pcmd);
+- while ((sgl = sg_chain_ptr(ret)) != NULL) {
+- ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1];
+- mempool_free(sgl, sgp->pool);
+- }
-
-- lpfc_sli_release_iocbq(phba, elsiocb);
-- return NULL;
+- mempool_free(prev, sgp->pool);
- }
-+ pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
-+ if (pcmd)
-+ pcmd->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &pcmd->phys);
-+ if (!pcmd || !pcmd->virt)
-+ goto els_iocb_free_pcmb_exit;
+- return NULL;
+ }
- INIT_LIST_HEAD(&pcmd->list);
+ EXPORT_SYMBOL(scsi_alloc_sgtable);
-@@ -126,13 +131,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
- if (prsp)
- prsp->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
- &prsp->phys);
-- if (prsp == 0 || prsp->virt == 0) {
-- kfree(prsp);
-- lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
-- kfree(pcmd);
-- lpfc_sli_release_iocbq(phba, elsiocb);
-- return NULL;
+ void scsi_free_sgtable(struct scsi_cmnd *cmd)
+ {
+- struct scatterlist *sgl = cmd->request_buffer;
+- struct scsi_host_sg_pool *sgp;
+-
+- /*
+- * if this is the biggest size sglist, check if we have
+- * chained parts we need to free
+- */
+- if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) {
+- unsigned short this, left;
+- struct scatterlist *next;
+- unsigned int index;
+-
+- left = cmd->__use_sg - (SCSI_MAX_SG_SEGMENTS - 1);
+- next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]);
+- while (left && next) {
+- sgl = next;
+- this = left;
+- if (this > SCSI_MAX_SG_SEGMENTS) {
+- this = SCSI_MAX_SG_SEGMENTS - 1;
+- index = SG_MEMPOOL_NR - 1;
+- } else
+- index = scsi_sgtable_index(this);
+-
+- left -= this;
+-
+- sgp = scsi_sg_pools + index;
+-
+- if (left)
+- next = sg_chain_ptr(&sgl[sgp->size - 1]);
+-
+- mempool_free(sgl, sgp->pool);
- }
-+ if (!prsp || !prsp->virt)
-+ goto els_iocb_free_prsp_exit;
- INIT_LIST_HEAD(&prsp->list);
- } else {
- prsp = NULL;
-@@ -143,15 +143,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
- if (pbuflist)
- pbuflist->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
- &pbuflist->phys);
-- if (pbuflist == 0 || pbuflist->virt == 0) {
-- lpfc_sli_release_iocbq(phba, elsiocb);
-- lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
-- lpfc_mbuf_free(phba, prsp->virt, prsp->phys);
-- kfree(pcmd);
-- kfree(prsp);
-- kfree(pbuflist);
-- return NULL;
-- }
-+ if (!pbuflist || !pbuflist->virt)
-+ goto els_iocb_free_pbuf_exit;
+-
+- /*
+- * Restore original, will be freed below
+- */
+- sgl = cmd->request_buffer;
+- sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1;
+- } else
+- sgp = scsi_sg_pools + scsi_sgtable_index(cmd->__use_sg);
+-
+- mempool_free(sgl, sgp->pool);
++ __sg_free_table(&cmd->sg_table, SCSI_MAX_SG_SEGMENTS, scsi_sg_free);
+ }
- INIT_LIST_HEAD(&pbuflist->list);
+ EXPORT_SYMBOL(scsi_free_sgtable);
+@@ -985,7 +883,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ * are leftovers and there is some kind of error
+ * (result != 0), retry the rest.
+ */
+- if (scsi_end_request(cmd, 1, good_bytes, result == 0) == NULL)
++ if (scsi_end_request(cmd, 0, good_bytes, result == 0) == NULL)
+ return;
-@@ -196,7 +189,10 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
- bpl->tus.w = le32_to_cpu(bpl->tus.w);
+ /* good_bytes = 0, or (inclusive) there were leftovers and
+@@ -999,7 +897,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ * and quietly refuse further access.
+ */
+ cmd->device->changed = 1;
+- scsi_end_request(cmd, 0, this_count, 1);
++ scsi_end_request(cmd, -EIO, this_count, 1);
+ return;
+ } else {
+ /* Must have been a power glitch, or a
+@@ -1031,7 +929,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ scsi_requeue_command(q, cmd);
+ return;
+ } else {
+- scsi_end_request(cmd, 0, this_count, 1);
++ scsi_end_request(cmd, -EIO, this_count, 1);
+ return;
+ }
+ break;
+@@ -1059,7 +957,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ "Device not ready",
+ &sshdr);
+
+- scsi_end_request(cmd, 0, this_count, 1);
++ scsi_end_request(cmd, -EIO, this_count, 1);
+ return;
+ case VOLUME_OVERFLOW:
+ if (!(req->cmd_flags & REQ_QUIET)) {
+@@ -1069,7 +967,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ scsi_print_sense("", cmd);
+ }
+ /* See SSC3rXX or current. */
+- scsi_end_request(cmd, 0, this_count, 1);
++ scsi_end_request(cmd, -EIO, this_count, 1);
+ return;
+ default:
+ break;
+@@ -1090,7 +988,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ scsi_print_sense("", cmd);
+ }
}
+- scsi_end_request(cmd, 0, this_count, !result);
++ scsi_end_request(cmd, -EIO, this_count, !result);
+ }
-+ /* prevent preparing iocb with NULL ndlp reference */
- elsiocb->context1 = lpfc_nlp_get(ndlp);
-+ if (!elsiocb->context1)
-+ goto els_iocb_free_pbuf_exit;
- elsiocb->context2 = pcmd;
- elsiocb->context3 = pbuflist;
- elsiocb->retry = retry;
-@@ -222,8 +218,20 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
- cmdSize);
+ /*
+@@ -1102,7 +1000,6 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
+ *
+ * Returns: 0 on success
+ * BLKPREP_DEFER if the failure is retryable
+- * BLKPREP_KILL if the failure is fatal
+ */
+ static int scsi_init_io(struct scsi_cmnd *cmd)
+ {
+@@ -1119,8 +1016,7 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
+ /*
+ * If sg table allocation fails, requeue request later.
+ */
+- cmd->request_buffer = scsi_alloc_sgtable(cmd, GFP_ATOMIC);
+- if (unlikely(!cmd->request_buffer)) {
++ if (unlikely(scsi_alloc_sgtable(cmd, GFP_ATOMIC))) {
+ scsi_unprep_request(req);
+ return BLKPREP_DEFER;
}
- return elsiocb;
--}
+@@ -1136,17 +1032,9 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
+ * each segment.
+ */
+ count = blk_rq_map_sg(req->q, req, cmd->request_buffer);
+- if (likely(count <= cmd->use_sg)) {
+- cmd->use_sg = count;
+- return BLKPREP_OK;
+- }
+-
+- printk(KERN_ERR "Incorrect number of segments after building list\n");
+- printk(KERN_ERR "counted %d, received %d\n", count, cmd->use_sg);
+- printk(KERN_ERR "req nr_sec %lu, cur_nr_sec %u\n", req->nr_sectors,
+- req->current_nr_sectors);
+-
+- return BLKPREP_KILL;
++ BUG_ON(count > cmd->use_sg);
++ cmd->use_sg = count;
++ return BLKPREP_OK;
+ }
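The hunk above replaces the old segment-count error path with a `BUG_ON`, relying on the invariant that `blk_rq_map_sg()` can only coalesce adjacent entries, so the mapped count can never exceed the allocated `use_sg`. A minimal userspace sketch of that invariant, using a hypothetical segment model rather than the block layer's real structures:

```c
#include <stddef.h>

/* Toy stand-in for a scatter-gather segment. */
struct seg { unsigned long start, len; };

/* Count mapped entries the way a mapper might: a segment that is
 * physically contiguous with its predecessor merges into it, so the
 * result is always <= n, never more. */
static int map_sg(const struct seg *segs, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (i > 0 && segs[i - 1].start + segs[i - 1].len == segs[i].start)
            continue;   /* merged into the previous mapping */
        count++;
    }
    return count;
}
```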
-+els_iocb_free_pbuf_exit:
-+ lpfc_mbuf_free(phba, prsp->virt, prsp->phys);
-+ kfree(pbuflist);
-+
-+els_iocb_free_prsp_exit:
-+ lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
-+ kfree(prsp);
-+
-+els_iocb_free_pcmb_exit:
-+ kfree(pcmd);
-+ lpfc_sli_release_iocbq(phba, elsiocb);
-+ return NULL;
-+}
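The unwind labels added above follow the kernel's staged-goto cleanup idiom: each buffer that is successfully set up gains a label in the cleanup chain, so a later failure frees exactly what exists so far and nothing else. A userspace sketch of the pattern, with buffer names echoing the driver's but purely illustrative types:

```c
#include <stdlib.h>

struct iocb_bufs {
    char *pcmd;      /* command payload buffer */
    char *prsp;      /* response buffer */
    char *pbuflist;  /* buffer-pointer list */
};

/* Allocate three buffers; on any failure, unwind only what succeeded. */
static struct iocb_bufs *prep_iocb(size_t cmd_size)
{
    struct iocb_bufs *io = calloc(1, sizeof(*io));
    if (!io)
        return NULL;

    io->pcmd = malloc(cmd_size);
    if (!io->pcmd)
        goto free_iocb;

    io->prsp = malloc(cmd_size);
    if (!io->prsp)
        goto free_pcmd;

    io->pbuflist = malloc(cmd_size);
    if (!io->pbuflist)
        goto free_prsp;

    return io;

free_prsp:
    free(io->prsp);
free_pcmd:
    free(io->pcmd);
free_iocb:
    free(io);
    return NULL;
}
```

Each `goto` target falls through into the earlier labels, so the frees run in reverse allocation order, just as in the `els_iocb_free_*_exit` chain above.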
+ static struct scsi_cmnd *scsi_get_cmd_from_req(struct scsi_device *sdev,
+@@ -1557,7 +1445,7 @@ static void scsi_request_fn(struct request_queue *q)
- static int
- lpfc_issue_fabric_reglogin(struct lpfc_vport *vport)
-@@ -234,40 +242,53 @@ lpfc_issue_fabric_reglogin(struct lpfc_vport *vport)
- struct lpfc_nodelist *ndlp;
- struct serv_parm *sp;
- int rc;
-+ int err = 0;
+ if (!scsi_host_queue_ready(q, shost, sdev))
+ goto not_ready;
+- if (sdev->single_lun) {
++ if (scsi_target(sdev)->single_lun) {
+ if (scsi_target(sdev)->starget_sdev_user &&
+ scsi_target(sdev)->starget_sdev_user != sdev)
+ goto not_ready;
+@@ -1675,6 +1563,14 @@ struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost,
- sp = &phba->fc_fabparam;
- ndlp = lpfc_findnode_did(vport, Fabric_DID);
-- if (!ndlp)
-+ if (!ndlp) {
-+ err = 1;
- goto fail;
-+ }
+ if (!shost->use_clustering)
+ clear_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
++
++ /*
++ * set a reasonable default alignment on word boundaries: the
++ * host and device may alter it using
++ * blk_queue_update_dma_alignment() later.
++ */
++ blk_queue_dma_alignment(q, 0x03);
++
+ return q;
+ }
+ EXPORT_SYMBOL(__scsi_alloc_queue);
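The new `blk_queue_dma_alignment(q, 0x03)` call above sets a default alignment mask of `0x03`, meaning buffers are expected to start on 4-byte (word) boundaries. A sketch of how such a mask is tested, using a hypothetical helper rather than the block layer's actual API:

```c
#include <stdint.h>

/* An address passes an alignment mask if the low bits selected by the
 * mask are zero, i.e. it sits on a (mask + 1)-byte boundary. */
static int dma_aligned(uintptr_t addr, uintptr_t mask)
{
    return (addr & mask) == 0;
}
```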
+@@ -1804,7 +1700,7 @@ void scsi_exit_queue(void)
+ * @timeout: command timeout
+ * @retries: number of retries before failing
+ * @data: returns a structure abstracting the mode header data
+- * @sense: place to put sense data (or NULL if no sense to be collected).
++ * @sshdr: place to put sense data (or NULL if no sense to be collected).
+ * must be SCSI_SENSE_BUFFERSIZE big.
+ *
+ * Returns zero if successful; negative error number or scsi
+@@ -1871,8 +1767,7 @@ scsi_mode_select(struct scsi_device *sdev, int pf, int sp, int modepage,
+ EXPORT_SYMBOL_GPL(scsi_mode_select);
- mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-- if (!mbox)
-+ if (!mbox) {
-+ err = 2;
- goto fail;
-+ }
+ /**
+- * scsi_mode_sense - issue a mode sense, falling back from 10 to
+- * six bytes if necessary.
++ * scsi_mode_sense - issue a mode sense, falling back from 10 to six bytes if necessary.
+ * @sdev: SCSI device to be queried
+ * @dbd: set if mode sense will allow block descriptors to be returned
+ * @modepage: mode page being requested
+@@ -1881,13 +1776,13 @@ EXPORT_SYMBOL_GPL(scsi_mode_select);
+ * @timeout: command timeout
+ * @retries: number of retries before failing
+ * @data: returns a structure abstracting the mode header data
+- * @sense: place to put sense data (or NULL if no sense to be collected).
++ * @sshdr: place to put sense data (or NULL if no sense to be collected).
+ * must be SCSI_SENSE_BUFFERSIZE big.
+ *
+ * Returns zero if unsuccessful, or the header offset (either 4
+ * or 8 depending on whether a six or ten byte command was
+ * issued) if successful.
+- **/
++ */
+ int
+ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ unsigned char *buffer, int len, int timeout, int retries,
+@@ -1981,40 +1876,69 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ }
+ EXPORT_SYMBOL(scsi_mode_sense);
- vport->port_state = LPFC_FABRIC_CFG_LINK;
- lpfc_config_link(phba, mbox);
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
- mbox->vport = vport;
++/**
++ * scsi_test_unit_ready - test if unit is ready
++ * @sdev: scsi device to change the state of.
++ * @timeout: command timeout
++ * @retries: number of retries before failing
++ * @sshdr_external: Optional pointer to struct scsi_sense_hdr for
++ * returning sense. Make sure that this is cleared before passing
++ * in.
++ *
++ * Returns zero if successful or an error if TUR failed. For
++ * removable media, a return of NOT_READY or UNIT_ATTENTION is
++ * translated to success, with the ->changed flag updated.
++ **/
+ int
+-scsi_test_unit_ready(struct scsi_device *sdev, int timeout, int retries)
++scsi_test_unit_ready(struct scsi_device *sdev, int timeout, int retries,
++ struct scsi_sense_hdr *sshdr_external)
+ {
+ char cmd[] = {
+ TEST_UNIT_READY, 0, 0, 0, 0, 0,
+ };
+- struct scsi_sense_hdr sshdr;
++ struct scsi_sense_hdr *sshdr;
+ int result;
+-
+- result = scsi_execute_req(sdev, cmd, DMA_NONE, NULL, 0, &sshdr,
+- timeout, retries);
++
++ if (!sshdr_external)
++ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
++ else
++ sshdr = sshdr_external;
++
++ /* try to eat the UNIT_ATTENTION if there are enough retries */
++ do {
++ result = scsi_execute_req(sdev, cmd, DMA_NONE, NULL, 0, sshdr,
++ timeout, retries);
++ } while ((driver_byte(result) & DRIVER_SENSE) &&
++ sshdr && sshdr->sense_key == UNIT_ATTENTION &&
++ --retries);
++
++ if (!sshdr)
++ /* could not allocate sense buffer, so can't process it */
++ return result;
-- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
-- if (rc == MBX_NOT_FINISHED)
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
-+ if (rc == MBX_NOT_FINISHED) {
-+ err = 3;
- goto fail_free_mbox;
-+ }
+ if ((driver_byte(result) & DRIVER_SENSE) && sdev->removable) {
- mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-- if (!mbox)
-+ if (!mbox) {
-+ err = 4;
- goto fail;
-+ }
- rc = lpfc_reg_login(phba, vport->vpi, Fabric_DID, (uint8_t *)sp, mbox,
- 0);
-- if (rc)
-+ if (rc) {
-+ err = 5;
- goto fail_free_mbox;
-+ }
+- if ((scsi_sense_valid(&sshdr)) &&
+- ((sshdr.sense_key == UNIT_ATTENTION) ||
+- (sshdr.sense_key == NOT_READY))) {
++ if ((scsi_sense_valid(sshdr)) &&
++ ((sshdr->sense_key == UNIT_ATTENTION) ||
++ (sshdr->sense_key == NOT_READY))) {
+ sdev->changed = 1;
+ result = 0;
+ }
+ }
++ if (!sshdr_external)
++ kfree(sshdr);
+ return result;
+ }
+ EXPORT_SYMBOL(scsi_test_unit_ready);
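The reworked `scsi_test_unit_ready()` above reissues TEST UNIT READY while the device keeps reporting UNIT_ATTENTION and retries remain. A userspace sketch of that retry loop, with a mock "device" and simplified sense codes standing in for the real SCSI machinery:

```c
#include <stddef.h>

enum { SENSE_NONE = 0, SENSE_UNIT_ATTENTION = 6 };

/* Mock device that reports a few queued UNIT ATTENTIONs before ready. */
struct mock_dev { int pending_uas; };

static int mock_tur(struct mock_dev *dev, int *sense_key)
{
    if (dev->pending_uas > 0) {
        dev->pending_uas--;
        *sense_key = SENSE_UNIT_ATTENTION;
        return -1;          /* check condition */
    }
    *sense_key = SENSE_NONE;
    return 0;               /* unit ready */
}

/* Mirror of the loop above: eat UNIT_ATTENTIONs while retries last. */
static int test_unit_ready(struct mock_dev *dev, int retries)
{
    int result, sense_key;

    do {
        result = mock_tur(dev, &sense_key);
    } while (result != 0 && sense_key == SENSE_UNIT_ATTENTION && --retries);

    return result;
}
```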
- mbox->mbox_cmpl = lpfc_mbx_cmpl_fabric_reg_login;
- mbox->vport = vport;
- mbox->context2 = lpfc_nlp_get(ndlp);
+ /**
+- * scsi_device_set_state - Take the given device through the device
+- * state model.
++ * scsi_device_set_state - Take the given device through the device state model.
+ * @sdev: scsi device to change the state of.
+ * @state: state to change to.
+ *
+ * Returns zero if successful or an error if the requested
+ * transition is illegal.
+- **/
++ */
+ int
+ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
+ {
+@@ -2264,7 +2188,7 @@ EXPORT_SYMBOL_GPL(sdev_evt_send_simple);
+ * Must be called with user context, may sleep.
+ *
+ * Returns zero if successful or an error if not.
+- **/
++ */
+ int
+ scsi_device_quiesce(struct scsi_device *sdev)
+ {
+@@ -2289,7 +2213,7 @@ EXPORT_SYMBOL(scsi_device_quiesce);
+ * queues.
+ *
+ * Must be called with user context, may sleep.
+- **/
++ */
+ void
+ scsi_device_resume(struct scsi_device *sdev)
+ {
+@@ -2326,8 +2250,7 @@ scsi_target_resume(struct scsi_target *starget)
+ EXPORT_SYMBOL(scsi_target_resume);
-- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
-- if (rc == MBX_NOT_FINISHED)
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
-+ if (rc == MBX_NOT_FINISHED) {
-+ err = 6;
- goto fail_issue_reg_login;
-+ }
+ /**
+- * scsi_internal_device_block - internal function to put a device
+- * temporarily into the SDEV_BLOCK state
++ * scsi_internal_device_block - internal function to put a device temporarily into the SDEV_BLOCK state
+ * @sdev: device to block
+ *
+ * Block request made by scsi lld's to temporarily stop all
+@@ -2342,7 +2265,7 @@ EXPORT_SYMBOL(scsi_target_resume);
+ * state, all commands are deferred until the scsi lld reenables
+ * the device with scsi_device_unblock or device_block_tmo fires.
+ * This routine assumes the host_lock is held on entry.
+- **/
++ */
+ int
+ scsi_internal_device_block(struct scsi_device *sdev)
+ {
+@@ -2382,7 +2305,7 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_block);
+ * (which must be a legal transition) allowing the midlayer to
+ * goose the queue for this device. This routine assumes the
+ * host_lock is held upon entry.
+- **/
++ */
+ int
+ scsi_internal_device_unblock(struct scsi_device *sdev)
+ {
+@@ -2460,7 +2383,7 @@ EXPORT_SYMBOL_GPL(scsi_target_unblock);
- return 0;
+ /**
+ * scsi_kmap_atomic_sg - find and atomically map an sg-element
+- * @sg: scatter-gather list
++ * @sgl: scatter-gather list
+ * @sg_count: number of segments in sg
+ * @offset: offset in bytes into sg, on return offset into the mapped area
+ * @len: bytes to map, on return number of bytes mapped
+@@ -2509,8 +2432,7 @@ void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count,
+ EXPORT_SYMBOL(scsi_kmap_atomic_sg);
-@@ -282,7 +303,7 @@ fail_free_mbox:
- fail:
- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-- "0249 Cannot issue Register Fabric login\n");
-+ "0249 Cannot issue Register Fabric login: Err %d\n", err);
- return -ENXIO;
- }
+ /**
+- * scsi_kunmap_atomic_sg - atomically unmap a virtual address, previously
+- * mapped with scsi_kmap_atomic_sg
++ * scsi_kunmap_atomic_sg - atomically unmap a virtual address, previously mapped with scsi_kmap_atomic_sg
+ * @virt: virtual address to be unmapped
+ */
+ void scsi_kunmap_atomic_sg(void *virt)
+diff --git a/drivers/scsi/scsi_netlink.c b/drivers/scsi/scsi_netlink.c
+index 40579ed..370c78c 100644
+--- a/drivers/scsi/scsi_netlink.c
++++ b/drivers/scsi/scsi_netlink.c
+@@ -32,11 +32,12 @@ EXPORT_SYMBOL_GPL(scsi_nl_sock);
-@@ -370,11 +391,12 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- }
- if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
- lpfc_mbx_unreg_vpi(vport);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
-+ spin_unlock_irq(shost->host_lock);
- }
- }
-- ndlp->nlp_sid = irsp->un.ulpWord[4] & Mask_DID;
- lpfc_nlp_set_state(vport, ndlp, NLP_STE_REG_LOGIN_ISSUE);
+ /**
+- * scsi_nl_rcv_msg -
+- * Receive message handler. Extracts message from a receive buffer.
++ * scsi_nl_rcv_msg - Receive message handler.
++ * @skb: socket receive buffer
++ *
++ * Description: Extracts message from a receive buffer.
+ * Validates message header and calls appropriate transport message handler
+ *
+- * @skb: socket receive buffer
+ *
+ **/
+ static void
+@@ -99,9 +100,7 @@ next_msg:
- if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED &&
-@@ -429,8 +451,7 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
- mbox->vport = vport;
-- rc = lpfc_sli_issue_mbox(phba, mbox,
-- MBX_NOWAIT | MBX_STOP_IOCB);
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(mbox, phba->mbox_mem_pool);
- goto fail;
-@@ -463,6 +484,9 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- lpfc_nlp_put(ndlp);
- }
+ /**
+- * scsi_nl_rcv_event -
+- * Event handler for a netlink socket.
+- *
++ * scsi_nl_rcv_event - Event handler for a netlink socket.
+ * @this: event notifier block
+ * @event: event type
+ * @ptr: event payload
+@@ -129,9 +128,7 @@ static struct notifier_block scsi_netlink_notifier = {
-+ /* If we are pt2pt with another NPort, force NPIV off! */
-+ phba->sli3_options &= ~LPFC_SLI3_NPIV_ENABLED;
-+
- spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_PT2PT;
- spin_unlock_irq(shost->host_lock);
-@@ -488,6 +512,9 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- /* Check to see if link went down during discovery */
- if (lpfc_els_chk_latt(vport)) {
-+ /* One additional decrement on node reference count to
-+ * trigger the release of the node
-+ */
- lpfc_nlp_put(ndlp);
- goto out;
- }
-@@ -562,8 +589,13 @@ flogifail:
+ /**
+- * scsi_netlink_init -
+- * Called by SCSI subsystem to intialize the SCSI transport netlink
+- * interface
++ * scsi_netlink_init - Called by SCSI subsystem to initialize the SCSI transport netlink interface
+ *
+ **/
+ void
+@@ -160,16 +157,14 @@ scsi_netlink_init(void)
- /* Start discovery */
- lpfc_disc_start(vport);
-+ } else if (((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
-+ ((irsp->un.ulpWord[4] != IOERR_SLI_ABORTED) &&
-+ (irsp->un.ulpWord[4] != IOERR_SLI_DOWN))) &&
-+ (phba->link_state != LPFC_CLEAR_LA)) {
-+ /* If FLOGI failed enable link interrupt. */
-+ lpfc_issue_clear_la(phba, vport);
- }
--
- out:
- lpfc_els_free_iocb(phba, cmdiocb);
- }
-@@ -685,6 +717,9 @@ lpfc_initial_flogi(struct lpfc_vport *vport)
- struct lpfc_hba *phba = vport->phba;
- struct lpfc_nodelist *ndlp;
-+ vport->port_state = LPFC_FLOGI;
-+ lpfc_set_disctmo(vport);
-+
- /* First look for the Fabric ndlp */
- ndlp = lpfc_findnode_did(vport, Fabric_DID);
- if (!ndlp) {
-@@ -696,7 +731,11 @@ lpfc_initial_flogi(struct lpfc_vport *vport)
- } else {
- lpfc_dequeue_node(vport, ndlp);
- }
-+
- if (lpfc_issue_els_flogi(vport, ndlp, 0)) {
-+ /* This decrement of reference count to node shall kick off
-+ * the release of the node.
-+ */
- lpfc_nlp_put(ndlp);
- }
- return 1;
-@@ -720,11 +759,16 @@ lpfc_initial_fdisc(struct lpfc_vport *vport)
- lpfc_dequeue_node(vport, ndlp);
- }
- if (lpfc_issue_els_fdisc(vport, ndlp, 0)) {
-+ /* decrement node reference count to trigger the release of
-+ * the node.
-+ */
- lpfc_nlp_put(ndlp);
-+ return 0;
+ /**
+- * scsi_netlink_exit -
+- * Called by SCSI subsystem to disable the SCSI transport netlink
+- * interface
++ * scsi_netlink_exit - Called by SCSI subsystem to disable the SCSI transport netlink interface
+ *
+ **/
+ void
+ scsi_netlink_exit(void)
+ {
+ if (scsi_nl_sock) {
+- sock_release(scsi_nl_sock->sk_socket);
++ netlink_kernel_release(scsi_nl_sock);
+ netlink_unregister_notifier(&scsi_netlink_notifier);
}
- return 1;
- }
--static void
+
+diff --git a/drivers/scsi/scsi_proc.c b/drivers/scsi/scsi_proc.c
+index bb6f051..ed39515 100644
+--- a/drivers/scsi/scsi_proc.c
++++ b/drivers/scsi/scsi_proc.c
+@@ -45,6 +45,16 @@ static struct proc_dir_entry *proc_scsi;
+ /* Protect sht->present and sht->proc_dir */
+ static DEFINE_MUTEX(global_host_template_mutex);
+
++/**
++ * proc_scsi_read - handle read from /proc by calling host's proc_info() command
++ * @buffer: passed to proc_info
++ * @start: passed to proc_info
++ * @offset: passed to proc_info
++ * @length: passed to proc_info
++ * @eof: returns whether length read was less than requested
++ * @data: pointer to a &struct Scsi_Host
++ */
+
-+void
- lpfc_more_plogi(struct lpfc_vport *vport)
- {
- int sentplogi;
-@@ -752,6 +796,8 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
+ static int proc_scsi_read(char *buffer, char **start, off_t offset,
+ int length, int *eof, void *data)
{
- struct lpfc_vport *vport = ndlp->vport;
- struct lpfc_nodelist *new_ndlp;
-+ struct lpfc_rport_data *rdata;
-+ struct fc_rport *rport;
- struct serv_parm *sp;
- uint8_t name[sizeof(struct lpfc_name)];
- uint32_t rc;
-@@ -788,11 +834,34 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
- lpfc_unreg_rpi(vport, new_ndlp);
- new_ndlp->nlp_DID = ndlp->nlp_DID;
- new_ndlp->nlp_prev_state = ndlp->nlp_prev_state;
-+
-+ if (ndlp->nlp_flag & NLP_NPR_2B_DISC)
-+ new_ndlp->nlp_flag |= NLP_NPR_2B_DISC;
-+ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
-+
- lpfc_nlp_set_state(vport, new_ndlp, ndlp->nlp_state);
+@@ -57,6 +67,13 @@ static int proc_scsi_read(char *buffer, char **start, off_t offset,
+ return n;
+ }
- /* Move this back to NPR state */
-- if (memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name)) == 0)
-+ if (memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name)) == 0) {
-+ /* The new_ndlp is replacing ndlp totally, so we need
-+ * to put ndlp on UNUSED list and try to free it.
-+ */
-+
-+ /* Fix up the rport accordingly */
-+ rport = ndlp->rport;
-+ if (rport) {
-+ rdata = rport->dd_data;
-+ if (rdata->pnode == ndlp) {
-+ lpfc_nlp_put(ndlp);
-+ ndlp->rport = NULL;
-+ rdata->pnode = lpfc_nlp_get(new_ndlp);
-+ new_ndlp->rport = rport;
-+ }
-+ new_ndlp->nlp_type = ndlp->nlp_type;
-+ }
-+
- lpfc_drop_node(vport, ndlp);
-+ }
- else {
- lpfc_unreg_rpi(vport, ndlp);
- ndlp->nlp_DID = 0; /* Two ndlps cannot have the same did */
-@@ -801,6 +870,27 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
- return new_ndlp;
++/**
++ * proc_scsi_write_proc - Handle write to /proc by calling host's proc_info()
++ * @file: not used
++ * @buf: source of data to write.
++ * @count: number of bytes (at most PROC_BLOCK_SIZE) to write.
++ * @data: pointer to &struct Scsi_Host
++ */
+ static int proc_scsi_write_proc(struct file *file, const char __user *buf,
+ unsigned long count, void *data)
+ {
+@@ -80,6 +97,13 @@ out:
+ return ret;
}
-+void
-+lpfc_end_rscn(struct lpfc_vport *vport)
-+{
-+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
-+
-+ if (vport->fc_flag & FC_RSCN_MODE) {
-+ /*
-+ * Check to see if more RSCNs came in while we were
-+ * processing this one.
-+ */
-+ if (vport->fc_rscn_id_cnt ||
-+ (vport->fc_flag & FC_RSCN_DISCOVERY) != 0)
-+ lpfc_els_handle_rscn(vport);
-+ else {
-+ spin_lock_irq(shost->host_lock);
-+ vport->fc_flag &= ~FC_RSCN_MODE;
-+ spin_unlock_irq(shost->host_lock);
-+ }
-+ }
-+}
++/**
++ * scsi_proc_hostdir_add - Create directory in /proc for a scsi host
++ * @sht: owner of this directory
++ *
++ * Sets sht->proc_dir to the new directory.
++ */
+
- static void
- lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_iocbq *rspiocb)
-@@ -871,13 +961,6 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- goto out;
- }
- /* PLOGI failed */
-- if (ndlp->nlp_DID == NameServer_DID) {
-- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
-- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-- "0250 Nameserver login error: "
-- "0x%x / 0x%x\n",
-- irsp->ulpStatus, irsp->un.ulpWord[4]);
-- }
- /* Do not call DSM for lpfc_els_abort'ed ELS cmds */
- if (lpfc_error_lost_link(irsp)) {
- rc = NLP_STE_FREED_NODE;
-@@ -905,20 +988,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- spin_unlock_irq(shost->host_lock);
-
- lpfc_can_disctmo(vport);
-- if (vport->fc_flag & FC_RSCN_MODE) {
-- /*
-- * Check to see if more RSCNs came in while
-- * we were processing this one.
-- */
-- if ((vport->fc_rscn_id_cnt == 0) &&
-- (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
-- spin_lock_irq(shost->host_lock);
-- vport->fc_flag &= ~FC_RSCN_MODE;
-- spin_unlock_irq(shost->host_lock);
-- } else {
-- lpfc_els_handle_rscn(vport);
-- }
-- }
-+ lpfc_end_rscn(vport);
- }
- }
+ void scsi_proc_hostdir_add(struct scsi_host_template *sht)
+ {
+ if (!sht->proc_info)
+@@ -97,6 +121,10 @@ void scsi_proc_hostdir_add(struct scsi_host_template *sht)
+ mutex_unlock(&global_host_template_mutex);
+ }
-@@ -933,6 +1003,7 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
- struct lpfc_hba *phba = vport->phba;
- struct serv_parm *sp;
- IOCB_t *icmd;
-+ struct lpfc_nodelist *ndlp;
- struct lpfc_iocbq *elsiocb;
- struct lpfc_sli_ring *pring;
- struct lpfc_sli *psli;
-@@ -943,8 +1014,11 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
- psli = &phba->sli;
- pring = &psli->ring[LPFC_ELS_RING]; /* ELS ring */
++/**
++ * scsi_proc_hostdir_rm - remove directory in /proc for a scsi host
++ * @sht: owner of directory
++ */
+ void scsi_proc_hostdir_rm(struct scsi_host_template *sht)
+ {
+ if (!sht->proc_info)
+@@ -110,6 +138,11 @@ void scsi_proc_hostdir_rm(struct scsi_host_template *sht)
+ mutex_unlock(&global_host_template_mutex);
+ }
-+ ndlp = lpfc_findnode_did(vport, did);
-+ /* If ndlp if not NULL, we will bump the reference count on it */
+
- cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
-- elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, NULL, did,
-+ elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, did,
- ELS_CMD_PLOGI);
- if (!elsiocb)
- return 1;
-@@ -1109,7 +1183,7 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- return 0;
++/**
++ * scsi_proc_host_add - Add entry for this host to appropriate /proc dir
++ * @shost: host to add
++ */
+ void scsi_proc_host_add(struct Scsi_Host *shost)
+ {
+ struct scsi_host_template *sht = shost->hostt;
+@@ -133,6 +166,10 @@ void scsi_proc_host_add(struct Scsi_Host *shost)
+ p->owner = sht->module;
}
--static void
-+void
- lpfc_more_adisc(struct lpfc_vport *vport)
- {
- int sentadisc;
-@@ -1134,8 +1208,6 @@ lpfc_more_adisc(struct lpfc_vport *vport)
- static void
- lpfc_rscn_disc(struct lpfc_vport *vport)
++/**
++ * scsi_proc_host_rm - remove this host's entry from /proc
++ * @shost: which host
++ */
+ void scsi_proc_host_rm(struct Scsi_Host *shost)
{
-- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ char name[10];
+@@ -143,7 +180,14 @@ void scsi_proc_host_rm(struct Scsi_Host *shost)
+ sprintf(name,"%d", shost->host_no);
+ remove_proc_entry(name, shost->hostt->proc_dir);
+ }
-
- lpfc_can_disctmo(vport);
-
- /* RSCN discovery */
-@@ -1144,19 +1216,7 @@ lpfc_rscn_disc(struct lpfc_vport *vport)
- if (lpfc_els_disc_plogi(vport))
- return;
++/**
++ * proc_print_scsidevice - return data about this SCSI device
++ * @dev: A scsi device
++ * @data: &struct seq_file to output to.
++ *
++ * Description: prints Host, Channel, Id, Lun, Vendor, Model, Rev, and Type
++ * for the device.
++ */
+ static int proc_print_scsidevice(struct device *dev, void *data)
+ {
+ struct scsi_device *sdev = to_scsi_device(dev);
+@@ -189,6 +233,21 @@ static int proc_print_scsidevice(struct device *dev, void *data)
+ return 0;
+ }
-- if (vport->fc_flag & FC_RSCN_MODE) {
-- /* Check to see if more RSCNs came in while we were
-- * processing this one.
-- */
-- if ((vport->fc_rscn_id_cnt == 0) &&
-- (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
-- spin_lock_irq(shost->host_lock);
-- vport->fc_flag &= ~FC_RSCN_MODE;
-- spin_unlock_irq(shost->host_lock);
-- } else {
-- lpfc_els_handle_rscn(vport);
-- }
-- }
-+ lpfc_end_rscn(vport);
++/**
++ * scsi_add_single_device - Respond to user request to probe for/add device
++ * @host: user-supplied decimal integer
++ * @channel: user-supplied decimal integer
++ * @id: user-supplied decimal integer
++ * @lun: user-supplied decimal integer
++ *
++ * Description: called by writing "scsi add-single-device" to /proc/scsi/scsi.
++ *
++ * Does scsi_host_lookup() and either user_scan() if that transport
++ * type supports it, or else scsi_scan_host_selected().
++ *
++ * Note: this seems to be aimed exclusively at SCSI parallel busses.
++ */
++
+ static int scsi_add_single_device(uint host, uint channel, uint id, uint lun)
+ {
+ struct Scsi_Host *shost;
+@@ -206,6 +265,16 @@ static int scsi_add_single_device(uint host, uint channel, uint id, uint lun)
+ return error;
}
- static void
-@@ -1413,6 +1473,13 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- psli = &phba->sli;
- pring = &psli->ring[LPFC_ELS_RING];
++/**
++ * scsi_remove_single_device - Respond to user request to remove a device
++ * @host: user-supplied decimal integer
++ * @channel: user-supplied decimal integer
++ * @id: user-supplied decimal integer
++ * @lun: user-supplied decimal integer
++ *
++ * Description: called by writing "scsi remove-single-device" to
++ * /proc/scsi/scsi. Does a scsi_device_lookup() and scsi_remove_device()
++ */
+ static int scsi_remove_single_device(uint host, uint channel, uint id, uint lun)
+ {
+ struct scsi_device *sdev;
+@@ -226,6 +295,25 @@ static int scsi_remove_single_device(uint host, uint channel, uint id, uint lun)
+ return error;
+ }
-+ spin_lock_irq(shost->host_lock);
-+ if (ndlp->nlp_flag & NLP_LOGO_SND) {
-+ spin_unlock_irq(shost->host_lock);
-+ return 0;
-+ }
-+ spin_unlock_irq(shost->host_lock);
++/**
++ * proc_scsi_write - handle writes to /proc/scsi/scsi
++ * @file: not used
++ * @buf: buffer to write
++ * @length: length of buf, at most PAGE_SIZE
++ * @ppos: not used
++ *
++ * Description: this provides a legacy mechanism to add or remove devices by
++ * Host, Channel, ID, and Lun. To use,
++ * "echo 'scsi add-single-device 0 1 2 3' > /proc/scsi/scsi" or
++ * "echo 'scsi remove-single-device 0 1 2 3' > /proc/scsi/scsi" with
++ * "0 1 2 3" replaced by the Host, Channel, Id, and Lun.
++ *
++ * Note: this seems to be aimed at parallel SCSI. Most modern busses (USB,
++ * SATA, Firewire, Fibre Channel, etc) dynamically assign these values to
++ * provide a unique identifier and nothing more.
++ */
+
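The kernel-doc block above describes the legacy `echo 'scsi add-single-device 0 1 2 3' > /proc/scsi/scsi` interface. A userspace sketch of parsing that command format; the real handler does more validation, this only shows the expected shape (the helper name is hypothetical):

```c
#include <stdio.h>

/* Parse "scsi add-single-device H C I L" or
 * "scsi remove-single-device H C I L"; *add reports which verb matched. */
static int parse_single_device(const char *buf, unsigned int *host,
                               unsigned int *channel, unsigned int *id,
                               unsigned int *lun, int *add)
{
    if (sscanf(buf, "scsi add-single-device %u %u %u %u",
               host, channel, id, lun) == 4) {
        *add = 1;
        return 0;
    }
    if (sscanf(buf, "scsi remove-single-device %u %u %u %u",
               host, channel, id, lun) == 4) {
        *add = 0;
        return 0;
    }
    return -1;   /* unrecognized command */
}
```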
- cmdsize = (2 * sizeof(uint32_t)) + sizeof(struct lpfc_name);
- elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
- ndlp->nlp_DID, ELS_CMD_LOGO);
-@@ -1499,6 +1566,9 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
- ndlp->nlp_DID, ELS_CMD_SCR);
++
+ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ size_t length, loff_t *ppos)
+ {
+@@ -291,6 +379,11 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ return err;
+ }
- if (!elsiocb) {
-+ /* This will trigger the release of the node just
-+ * allocated
-+ */
- lpfc_nlp_put(ndlp);
- return 1;
- }
-@@ -1520,10 +1590,17 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
- phba->fc_stat.elsXmitSCR++;
- elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
- if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
-+ /* The additional lpfc_nlp_put will cause the following
-+ * lpfc_els_free_iocb routine to trigger the release of
-+ * the node.
-+ */
- lpfc_nlp_put(ndlp);
- lpfc_els_free_iocb(phba, elsiocb);
- return 1;
- }
-+ /* This will cause the callback-function lpfc_cmpl_els_cmd to
-+ * trigger the release of node.
-+ */
- lpfc_nlp_put(ndlp);
++/**
++ * proc_scsi_show - show contents of /proc/scsi/scsi (attached devices)
++ * @s: output goes here
++ * @p: not used
++ */
+ static int proc_scsi_show(struct seq_file *s, void *p)
+ {
+ seq_printf(s, "Attached devices:\n");
+@@ -298,10 +391,17 @@ static int proc_scsi_show(struct seq_file *s, void *p)
return 0;
}
-@@ -1555,6 +1632,9 @@ lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
- elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
- ndlp->nlp_DID, ELS_CMD_RNID);
- if (!elsiocb) {
-+ /* This will trigger the release of the node just
-+ * allocated
-+ */
- lpfc_nlp_put(ndlp);
- return 1;
- }
-@@ -1591,35 +1671,21 @@ lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
- phba->fc_stat.elsXmitFARPR++;
- elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
- if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
-+ /* The additional lpfc_nlp_put will cause the following
-+ * lpfc_els_free_iocb routine to trigger the release of
-+ * the node.
-+ */
- lpfc_nlp_put(ndlp);
- lpfc_els_free_iocb(phba, elsiocb);
- return 1;
- }
-+ /* This will cause the callback-function lpfc_cmpl_els_cmd to
-+ * trigger the release of the node.
-+ */
- lpfc_nlp_put(ndlp);
- return 0;
+
++/**
++ * proc_scsi_open - glue function
++ * @inode: not used
++ * @file: passed to single_open()
++ *
++ * Associates proc_scsi_show with this file
++ */
+ static int proc_scsi_open(struct inode *inode, struct file *file)
+ {
+ /*
+- * We don't really needs this for the write case but it doesn't
++ * We don't really need this for the write case but it doesn't
+ * harm either.
+ */
+ return single_open(file, proc_scsi_show, NULL);
+@@ -315,6 +415,9 @@ static const struct file_operations proc_scsi_operations = {
+ .release = single_release,
+ };
+
++/**
++ * scsi_init_procfs - create scsi and scsi/scsi in procfs
++ */
+ int __init scsi_init_procfs(void)
+ {
+ struct proc_dir_entry *pde;
+@@ -336,6 +439,9 @@ err1:
+ return -ENOMEM;
}
--static void
--lpfc_end_rscn(struct lpfc_vport *vport)
--{
-- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
--
-- if (vport->fc_flag & FC_RSCN_MODE) {
-- /*
-- * Check to see if more RSCNs came in while we were
-- * processing this one.
-- */
-- if (vport->fc_rscn_id_cnt ||
-- (vport->fc_flag & FC_RSCN_DISCOVERY) != 0)
-- lpfc_els_handle_rscn(vport);
-- else {
-- spin_lock_irq(shost->host_lock);
-- vport->fc_flag &= ~FC_RSCN_MODE;
-- spin_unlock_irq(shost->host_lock);
-- }
-- }
--}
--
- void
- lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp)
++/**
++ * scsi_exit_procfs - Remove scsi/scsi and scsi from procfs
++ */
+ void scsi_exit_procfs(void)
{
-@@ -1675,7 +1741,10 @@ lpfc_els_retry_delay(unsigned long ptr)
- return;
- }
+ remove_proc_entry("scsi/scsi", NULL);
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 40ea71c..1dc165a 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -221,6 +221,9 @@ static void scsi_unlock_floptical(struct scsi_device *sdev,
-- evtp->evt_arg1 = ndlp;
-+ /* We need to hold the node by incrementing the reference
-+ * count until the queued work is done
-+ */
-+ evtp->evt_arg1 = lpfc_nlp_get(ndlp);
- evtp->evt = LPFC_EVT_ELS_RETRY;
- list_add_tail(&evtp->evt_listp, &phba->work_list);
- if (phba->work_wait)
-@@ -1759,6 +1828,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- uint32_t *elscmd;
- struct ls_rjt stat;
- int retry = 0, maxretry = lpfc_max_els_tries, delay = 0;
-+ int logerr = 0;
- uint32_t cmd = 0;
- uint32_t did;
+ /**
+ * scsi_alloc_sdev - allocate and setup a scsi_Device
++ * @starget: which target to allocate a &scsi_device for
++ * @lun: which lun
++ * @hostdata: usually NULL and set by ->slave_alloc instead
+ *
+ * Description:
+ * Allocate, initialize for io, and return a pointer to a scsi_Device.
+@@ -472,7 +475,6 @@ static void scsi_target_reap_usercontext(struct work_struct *work)
-@@ -1815,6 +1885,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- break;
+ /**
+ * scsi_target_reap - check to see if target is in use and destroy if not
+- *
+ * @starget: target to be checked
+ *
+ * This is used after removing a LUN or doing a last put of the target
+@@ -863,7 +865,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ sdev->no_start_on_add = 1;
- case IOERR_NO_RESOURCES:
-+ logerr = 1; /* HBA out of resources */
- retry = 1;
- if (cmdiocb->retry > 100)
- delay = 100;
-@@ -1843,6 +1914,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ if (*bflags & BLIST_SINGLELUN)
+- sdev->single_lun = 1;
++ scsi_target(sdev)->single_lun = 1;
- case IOSTAT_NPORT_BSY:
- case IOSTAT_FABRIC_BSY:
-+ logerr = 1; /* Fabric / Remote NPort out of resources */
- retry = 1;
- break;
+ sdev->use_10_for_rw = 1;
-@@ -1923,6 +1995,15 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- if (did == FDMI_DID)
- retry = 1;
+@@ -928,8 +930,7 @@ static inline void scsi_destroy_sdev(struct scsi_device *sdev)
-+ if ((cmd == ELS_CMD_FLOGI) &&
-+ (phba->fc_topology != TOPOLOGY_LOOP)) {
-+ /* FLOGI retry policy */
-+ retry = 1;
-+ maxretry = 48;
-+ if (cmdiocb->retry >= 32)
-+ delay = 1000;
-+ }
-+
- if ((++cmdiocb->retry) >= maxretry) {
- phba->fc_stat.elsRetryExceeded++;
- retry = 0;
-@@ -2006,11 +2087,46 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- }
- }
- /* No retry ELS command <elsCmd> to remote NPORT <did> */
-- lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ if (logerr) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ "0137 No retry ELS command x%x to remote "
-+ "NPORT x%x: Out of Resources: Error:x%x/%x\n",
-+ cmd, did, irsp->ulpStatus,
-+ irsp->un.ulpWord[4]);
-+ }
-+ else {
-+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
- "0108 No retry ELS command x%x to remote "
- "NPORT x%x Retried:%d Error:x%x/%x\n",
- cmd, did, cmdiocb->retry, irsp->ulpStatus,
- irsp->un.ulpWord[4]);
-+ }
-+ return 0;
-+}
-+
-+static int
-+lpfc_els_free_data(struct lpfc_hba *phba, struct lpfc_dmabuf *buf_ptr1)
-+{
-+ struct lpfc_dmabuf *buf_ptr;
-+
-+ /* Free the response before processing the command. */
-+ if (!list_empty(&buf_ptr1->list)) {
-+ list_remove_head(&buf_ptr1->list, buf_ptr,
-+ struct lpfc_dmabuf,
-+ list);
-+ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-+ kfree(buf_ptr);
-+ }
-+ lpfc_mbuf_free(phba, buf_ptr1->virt, buf_ptr1->phys);
-+ kfree(buf_ptr1);
-+ return 0;
-+}
-+
-+static int
-+lpfc_els_free_bpl(struct lpfc_hba *phba, struct lpfc_dmabuf *buf_ptr)
-+{
-+ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-+ kfree(buf_ptr);
- return 0;
- }
+ #ifdef CONFIG_SCSI_LOGGING
+ /**
+- * scsi_inq_str - print INQUIRY data from min to max index,
+- * strip trailing whitespace
++ * scsi_inq_str - print INQUIRY data from min to max index, strip trailing whitespace
+ * @buf: Output buffer with at least end-first+1 bytes of space
+ * @inq: Inquiry buffer (input)
+ * @first: Offset of string into inq
+@@ -957,9 +958,10 @@ static unsigned char *scsi_inq_str(unsigned char *buf, unsigned char *inq,
+ * scsi_probe_and_add_lun - probe a LUN, if a LUN is found add it
+ * @starget: pointer to target device structure
+ * @lun: LUN of target device
+- * @sdevscan: probe the LUN corresponding to this scsi_device
+- * @sdevnew: store the value of any new scsi_device allocated
+ * @bflagsp: store bflags here if not NULL
++ * @sdevp: probe the LUN corresponding to this scsi_device
++ * @rescan: if nonzero skip some code only needed on first scan
++ * @hostdata: passed to scsi_alloc_sdev()
+ *
+ * Description:
+ * Call scsi_probe_lun, if a LUN with an attached device is found,
+@@ -1110,6 +1112,8 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ * scsi_sequential_lun_scan - sequentially scan a SCSI target
+ * @starget: pointer to target structure to scan
+ * @bflags: black/white list flag for LUN 0
++ * @scsi_level: Which version of the standard does this device adhere to
++ * @rescan: passed to scsi_probe_add_lun()
+ *
+ * Description:
+ * Generally, scan from LUN 1 (LUN 0 is assumed to already have been
+@@ -1220,7 +1224,7 @@ EXPORT_SYMBOL(scsilun_to_int);
-@@ -2018,30 +2134,63 @@ int
- lpfc_els_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *elsiocb)
- {
- struct lpfc_dmabuf *buf_ptr, *buf_ptr1;
-+ struct lpfc_nodelist *ndlp;
+ /**
+ * int_to_scsilun: reverts an int into a scsi_lun
+- * @int: integer to be reverted
++ * @lun: integer to be reverted
+ * @scsilun: struct scsi_lun to be set.
+ *
+ * Description:
+@@ -1252,18 +1256,22 @@ EXPORT_SYMBOL(int_to_scsilun);
-- if (elsiocb->context1) {
-- lpfc_nlp_put(elsiocb->context1);
-+ ndlp = (struct lpfc_nodelist *)elsiocb->context1;
-+ if (ndlp) {
-+ if (ndlp->nlp_flag & NLP_DEFER_RM) {
-+ lpfc_nlp_put(ndlp);
-+
-+ /* If the ndlp is not being used by another discovery
-+ * thread, free it.
-+ */
-+ if (!lpfc_nlp_not_used(ndlp)) {
-+ /* If ndlp is being used by another discovery
-+ * thread, just clear NLP_DEFER_RM
-+ */
-+ ndlp->nlp_flag &= ~NLP_DEFER_RM;
-+ }
-+ }
-+ else
-+ lpfc_nlp_put(ndlp);
- elsiocb->context1 = NULL;
- }
- /* context2 = cmd, context2->next = rsp, context3 = bpl */
- if (elsiocb->context2) {
-- buf_ptr1 = (struct lpfc_dmabuf *) elsiocb->context2;
-- /* Free the response before processing the command. */
-- if (!list_empty(&buf_ptr1->list)) {
-- list_remove_head(&buf_ptr1->list, buf_ptr,
-- struct lpfc_dmabuf,
-- list);
-- lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-- kfree(buf_ptr);
-+ if (elsiocb->iocb_flag & LPFC_DELAY_MEM_FREE) {
-+ /* Firmware could still be in progress of DMAing
-+ * payload, so don't free data buffer till after
-+ * a hbeat.
-+ */
-+ elsiocb->iocb_flag &= ~LPFC_DELAY_MEM_FREE;
-+ buf_ptr = elsiocb->context2;
-+ elsiocb->context2 = NULL;
-+ if (buf_ptr) {
-+ buf_ptr1 = NULL;
-+ spin_lock_irq(&phba->hbalock);
-+ if (!list_empty(&buf_ptr->list)) {
-+ list_remove_head(&buf_ptr->list,
-+ buf_ptr1, struct lpfc_dmabuf,
-+ list);
-+ INIT_LIST_HEAD(&buf_ptr1->list);
-+ list_add_tail(&buf_ptr1->list,
-+ &phba->elsbuf);
-+ phba->elsbuf_cnt++;
-+ }
-+ INIT_LIST_HEAD(&buf_ptr->list);
-+ list_add_tail(&buf_ptr->list, &phba->elsbuf);
-+ phba->elsbuf_cnt++;
-+ spin_unlock_irq(&phba->hbalock);
-+ }
-+ } else {
-+ buf_ptr1 = (struct lpfc_dmabuf *) elsiocb->context2;
-+ lpfc_els_free_data(phba, buf_ptr1);
- }
-- lpfc_mbuf_free(phba, buf_ptr1->virt, buf_ptr1->phys);
-- kfree(buf_ptr1);
- }
+ /**
+ * scsi_report_lun_scan - Scan using SCSI REPORT LUN results
+- * @sdevscan: scan the host, channel, and id of this scsi_device
++ * @starget: which target
++ * @bflags: Zero or a mix of BLIST_NOLUN, BLIST_REPORTLUN2, or BLIST_NOREPORTLUN
++ * @rescan: nonzero if we can skip code only needed on first scan
+ *
+ * Description:
+- * If @sdevscan is for a SCSI-3 or up device, send a REPORT LUN
+- * command, and scan the resulting list of LUNs by calling
+- * scsi_probe_and_add_lun.
++ * Fast scanning for modern (SCSI-3) devices by sending a REPORT LUN command.
++ * Scan the resulting list of LUNs by calling scsi_probe_and_add_lun.
+ *
+- * Modifies sdevscan->lun.
++ * If BLINK_REPORTLUN2 is set, scan a target that supports more than 8
++ * LUNs even if it's older than SCSI-3.
++ * If BLIST_NOREPORTLUN is set, return 1 always.
++ * If BLIST_NOLUN is set, return 0 always.
+ *
+ * Return:
+ * 0: scan completed (or no memory, so further scanning is futile)
+- * 1: no report lun scan, or not configured
++ * 1: could not scan with REPORT LUN
+ **/
+ static int scsi_report_lun_scan(struct scsi_target *starget, int bflags,
+ int rescan)
+@@ -1481,6 +1489,7 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host *shost, uint channel,
+ if (scsi_host_scan_allowed(shost))
+ scsi_probe_and_add_lun(starget, lun, NULL, &sdev, 1, hostdata);
+ mutex_unlock(&shost->scan_mutex);
++ transport_configure_device(&starget->dev);
+ scsi_target_reap(starget);
+ put_device(&starget->dev);
- if (elsiocb->context3) {
- buf_ptr = (struct lpfc_dmabuf *) elsiocb->context3;
-- lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-- kfree(buf_ptr);
-+ lpfc_els_free_bpl(phba, buf_ptr);
+@@ -1561,6 +1570,7 @@ static void __scsi_scan_target(struct device *parent, unsigned int channel,
+ out_reap:
+ /* now determine if the target has any children at all
+ * and if not, nuke it */
++ transport_configure_device(&starget->dev);
+ scsi_target_reap(starget);
+
+ put_device(&starget->dev);
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 00b3866..ed83cdb 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -1018,6 +1018,7 @@ int scsi_sysfs_add_host(struct Scsi_Host *shost)
}
- lpfc_sli_release_iocbq(phba, elsiocb);
+
+ transport_register_device(&shost->shost_gendev);
++ transport_configure_device(&shost->shost_gendev);
return 0;
-@@ -2065,15 +2214,20 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- "Data: x%x x%x x%x\n",
- ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
- ndlp->nlp_rpi);
-- switch (ndlp->nlp_state) {
-- case NLP_STE_UNUSED_NODE: /* node is just allocated */
-- lpfc_drop_node(vport, ndlp);
-- break;
-- case NLP_STE_NPR_NODE: /* NPort Recovery mode */
-- lpfc_unreg_rpi(vport, ndlp);
-- break;
-- default:
-- break;
-+
-+ if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
-+ /* NPort Recovery mode or node is just allocated */
-+ if (!lpfc_nlp_not_used(ndlp)) {
-+ /* If the ndlp is being used by another discovery
-+ * thread, just unregister the RPI.
-+ */
-+ lpfc_unreg_rpi(vport, ndlp);
-+ } else {
-+ /* Indicate the node has already released, should
-+ * not reference to it from within lpfc_els_free_iocb.
-+ */
-+ cmdiocb->context1 = NULL;
-+ }
- }
- lpfc_els_free_iocb(phba, cmdiocb);
- return;
-@@ -2089,7 +2243,14 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
- mempool_free(pmb, phba->mbox_mem_pool);
-- lpfc_nlp_put(ndlp);
-+ if (ndlp) {
-+ lpfc_nlp_put(ndlp);
-+ /* This is the end of the default RPI cleanup logic for this
-+ * ndlp. If no other discovery threads are using this ndlp.
-+ * we should free all resources associated with it.
-+ */
-+ lpfc_nlp_not_used(ndlp);
-+ }
- return;
}
-@@ -2100,15 +2261,29 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
- struct lpfc_vport *vport = ndlp ? ndlp->vport : NULL;
- struct Scsi_Host *shost = vport ? lpfc_shost_from_vport(vport) : NULL;
-- IOCB_t *irsp;
-+ IOCB_t *irsp;
-+ uint8_t *pcmd;
- LPFC_MBOXQ_t *mbox = NULL;
- struct lpfc_dmabuf *mp = NULL;
-+ uint32_t ls_rjt = 0;
-
- irsp = &rspiocb->iocb;
+diff --git a/drivers/scsi/scsi_tgt_if.c b/drivers/scsi/scsi_tgt_if.c
+index 9815a1a..d2557db 100644
+--- a/drivers/scsi/scsi_tgt_if.c
++++ b/drivers/scsi/scsi_tgt_if.c
+@@ -112,7 +112,7 @@ int scsi_tgt_uspace_send_cmd(struct scsi_cmnd *cmd, u64 itn_id,
+ memset(&ev, 0, sizeof(ev));
+ ev.p.cmd_req.host_no = shost->host_no;
+ ev.p.cmd_req.itn_id = itn_id;
+- ev.p.cmd_req.data_len = cmd->request_bufflen;
++ ev.p.cmd_req.data_len = scsi_bufflen(cmd);
+ memcpy(ev.p.cmd_req.scb, cmd->cmnd, sizeof(ev.p.cmd_req.scb));
+ memcpy(ev.p.cmd_req.lun, lun, sizeof(ev.p.cmd_req.lun));
+ ev.p.cmd_req.attribute = cmd->tag;
+diff --git a/drivers/scsi/scsi_tgt_lib.c b/drivers/scsi/scsi_tgt_lib.c
+index a91761c..01e03f3 100644
+--- a/drivers/scsi/scsi_tgt_lib.c
++++ b/drivers/scsi/scsi_tgt_lib.c
+@@ -180,7 +180,7 @@ static void scsi_tgt_cmd_destroy(struct work_struct *work)
+ container_of(work, struct scsi_tgt_cmd, work);
+ struct scsi_cmnd *cmd = tcmd->rq->special;
- if (cmdiocb->context_un.mbox)
- mbox = cmdiocb->context_un.mbox;
+- dprintk("cmd %p %d %lu\n", cmd, cmd->sc_data_direction,
++ dprintk("cmd %p %d %u\n", cmd, cmd->sc_data_direction,
+ rq_data_dir(cmd->request));
+ scsi_unmap_user_pages(tcmd);
+ scsi_host_put_command(scsi_tgt_cmd_to_host(cmd), cmd);
+@@ -327,11 +327,11 @@ static void scsi_tgt_cmd_done(struct scsi_cmnd *cmd)
+ {
+ struct scsi_tgt_cmd *tcmd = cmd->request->end_io_data;
-+ /* First determine if this is a LS_RJT cmpl. Note, this callback
-+ * function can have cmdiocb->contest1 (ndlp) field set to NULL.
-+ */
-+ pcmd = (uint8_t *) (((struct lpfc_dmabuf *) cmdiocb->context2)->virt);
-+ if (ndlp && (*((uint32_t *) (pcmd)) == ELS_CMD_LS_RJT)) {
-+ /* A LS_RJT associated with Default RPI cleanup has its own
-+ * seperate code path.
-+ */
-+ if (!(ndlp->nlp_flag & NLP_RM_DFLT_RPI))
-+ ls_rjt = 1;
-+ }
-+
- /* Check to see if link went down during discovery */
- if (!ndlp || lpfc_els_chk_latt(vport)) {
- if (mbox) {
-@@ -2119,6 +2294,15 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- }
- mempool_free(mbox, phba->mbox_mem_pool);
- }
-+ if (ndlp && (ndlp->nlp_flag & NLP_RM_DFLT_RPI))
-+ if (lpfc_nlp_not_used(ndlp)) {
-+ ndlp = NULL;
-+ /* Indicate the node has already released,
-+ * should not reference to it from within
-+ * the routine lpfc_els_free_iocb.
-+ */
-+ cmdiocb->context1 = NULL;
-+ }
- goto out;
- }
+- dprintk("cmd %p %lu\n", cmd, rq_data_dir(cmd->request));
++ dprintk("cmd %p %u\n", cmd, rq_data_dir(cmd->request));
-@@ -2150,20 +2334,39 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- lpfc_nlp_set_state(vport, ndlp,
- NLP_STE_REG_LOGIN_ISSUE);
- }
-- if (lpfc_sli_issue_mbox(phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB))
-+ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
- != MBX_NOT_FINISHED) {
- goto out;
- }
-- lpfc_nlp_put(ndlp);
-- /* NOTE: we should have messages for unsuccessful
-- reglogin */
-+
-+ /* ELS rsp: Cannot issue reg_login for <NPortid> */
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ "0138 ELS rsp: Cannot issue reg_login for x%x "
-+ "Data: x%x x%x x%x\n",
-+ ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
-+ ndlp->nlp_rpi);
-+
-+ if (lpfc_nlp_not_used(ndlp)) {
-+ ndlp = NULL;
-+ /* Indicate node has already been released,
-+ * should not reference to it from within
-+ * the routine lpfc_els_free_iocb.
-+ */
-+ cmdiocb->context1 = NULL;
-+ }
- } else {
- /* Do not drop node for lpfc_els_abort'ed ELS cmds */
- if (!lpfc_error_lost_link(irsp) &&
- ndlp->nlp_flag & NLP_ACC_REGLOGIN) {
-- lpfc_drop_node(vport, ndlp);
-- ndlp = NULL;
-+ if (lpfc_nlp_not_used(ndlp)) {
-+ ndlp = NULL;
-+ /* Indicate node has already been
-+ * released, should not reference
-+ * to it from within the routine
-+ * lpfc_els_free_iocb.
-+ */
-+ cmdiocb->context1 = NULL;
-+ }
- }
- }
- mp = (struct lpfc_dmabuf *) mbox->context1;
-@@ -2178,7 +2381,21 @@ out:
- spin_lock_irq(shost->host_lock);
- ndlp->nlp_flag &= ~(NLP_ACC_REGLOGIN | NLP_RM_DFLT_RPI);
- spin_unlock_irq(shost->host_lock);
-+
-+ /* If the node is not being used by another discovery thread,
-+ * and we are sending a reject, we are done with it.
-+ * Release driver reference count here and free associated
-+ * resources.
-+ */
-+ if (ls_rjt)
-+ if (lpfc_nlp_not_used(ndlp))
-+ /* Indicate node has already been released,
-+ * should not reference to it from within
-+ * the routine lpfc_els_free_iocb.
-+ */
-+ cmdiocb->context1 = NULL;
- }
-+
- lpfc_els_free_iocb(phba, cmdiocb);
- return;
- }
-@@ -2349,14 +2566,6 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
- elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
- rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
+ scsi_tgt_uspace_send_status(cmd, tcmd->itn_id, tcmd->tag);
-- /* If the node is in the UNUSED state, and we are sending
-- * a reject, we are done with it. Release driver reference
-- * count here. The outstanding els will release its reference on
-- * completion and the node can be freed then.
-- */
-- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
-- lpfc_nlp_put(ndlp);
--
- if (rc == IOCB_ERROR) {
- lpfc_els_free_iocb(phba, elsiocb);
- return 1;
-@@ -2642,7 +2851,10 @@ lpfc_els_disc_plogi(struct lpfc_vport *vport)
- }
- }
- }
-- if (sentplogi == 0) {
-+ if (sentplogi) {
-+ lpfc_set_disctmo(vport);
-+ }
-+ else {
- spin_lock_irq(shost->host_lock);
- vport->fc_flag &= ~FC_NLP_MORE;
- spin_unlock_irq(shost->host_lock);
-@@ -2830,10 +3042,10 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- "RCV RSCN defer: did:x%x/ste:x%x flg:x%x",
- ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag);
+- if (cmd->request_buffer)
++ if (scsi_sglist(cmd))
+ scsi_free_sgtable(cmd);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_RSCN_DEFERRED;
- if ((rscn_cnt < FC_MAX_HOLD_RSCN) &&
- !(vport->fc_flag & FC_RSCN_DISCOVERY)) {
-- spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_RSCN_MODE;
- spin_unlock_irq(shost->host_lock);
- if (rscn_cnt) {
-@@ -2862,7 +3074,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- vport->fc_rscn_id_cnt, vport->fc_flag,
- vport->port_state);
- } else {
-- spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_RSCN_DISCOVERY;
- spin_unlock_irq(shost->host_lock);
- /* ReDiscovery RSCN */
-@@ -2877,7 +3088,9 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ queue_work(scsi_tgtd, &tcmd->work);
+@@ -342,7 +342,7 @@ static int scsi_tgt_transfer_response(struct scsi_cmnd *cmd)
+ struct Scsi_Host *shost = scsi_tgt_cmd_to_host(cmd);
+ int err;
- /* send RECOVERY event for ALL nodes that match RSCN payload */
- lpfc_rscn_recovery_check(vport);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag &= ~FC_RSCN_DEFERRED;
-+ spin_unlock_irq(shost->host_lock);
- return 0;
- }
+- dprintk("cmd %p %lu\n", cmd, rq_data_dir(cmd->request));
++ dprintk("cmd %p %u\n", cmd, rq_data_dir(cmd->request));
-@@ -2929,6 +3142,8 @@ lpfc_els_handle_rscn(struct lpfc_vport *vport)
+ err = shost->hostt->transfer_response(cmd, scsi_tgt_cmd_done);
+ switch (err) {
+@@ -359,22 +359,17 @@ static int scsi_tgt_init_cmd(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+ int count;
- /* To process RSCN, first compare RSCN data with NameServer */
- vport->fc_ns_retry = 0;
-+ vport->num_disc_nodes = 0;
-+
- ndlp = lpfc_findnode_did(vport, NameServer_DID);
- if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE) {
- /* Good ndlp, issue CT Request to NameServer */
-@@ -3022,8 +3237,7 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- mbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
- mbox->vport = vport;
-- rc = lpfc_sli_issue_mbox
-- (phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- lpfc_set_loopback_flag(phba);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(mbox, phba->mbox_mem_pool);
-@@ -3140,7 +3354,10 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- elsiocb = lpfc_prep_els_iocb(phba->pport, 0, cmdsize,
- lpfc_max_els_tries, ndlp,
- ndlp->nlp_DID, ELS_CMD_ACC);
-+
-+ /* Decrement the ndlp reference count from previous mbox command */
- lpfc_nlp_put(ndlp);
-+
- if (!elsiocb)
- return;
+ cmd->use_sg = rq->nr_phys_segments;
+- cmd->request_buffer = scsi_alloc_sgtable(cmd, gfp_mask);
+- if (!cmd->request_buffer)
++ if (scsi_alloc_sgtable(cmd, gfp_mask))
+ return -ENOMEM;
-@@ -3160,13 +3377,13 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- status |= 0x4;
+ cmd->request_bufflen = rq->data_len;
- rps_rsp->rsvd1 = 0;
-- rps_rsp->portStatus = be16_to_cpu(status);
-- rps_rsp->linkFailureCnt = be32_to_cpu(mb->un.varRdLnk.linkFailureCnt);
-- rps_rsp->lossSyncCnt = be32_to_cpu(mb->un.varRdLnk.lossSyncCnt);
-- rps_rsp->lossSignalCnt = be32_to_cpu(mb->un.varRdLnk.lossSignalCnt);
-- rps_rsp->primSeqErrCnt = be32_to_cpu(mb->un.varRdLnk.primSeqErrCnt);
-- rps_rsp->invalidXmitWord = be32_to_cpu(mb->un.varRdLnk.invalidXmitWord);
-- rps_rsp->crcCnt = be32_to_cpu(mb->un.varRdLnk.crcCnt);
-+ rps_rsp->portStatus = cpu_to_be16(status);
-+ rps_rsp->linkFailureCnt = cpu_to_be32(mb->un.varRdLnk.linkFailureCnt);
-+ rps_rsp->lossSyncCnt = cpu_to_be32(mb->un.varRdLnk.lossSyncCnt);
-+ rps_rsp->lossSignalCnt = cpu_to_be32(mb->un.varRdLnk.lossSignalCnt);
-+ rps_rsp->primSeqErrCnt = cpu_to_be32(mb->un.varRdLnk.primSeqErrCnt);
-+ rps_rsp->invalidXmitWord = cpu_to_be32(mb->un.varRdLnk.invalidXmitWord);
-+ rps_rsp->crcCnt = cpu_to_be32(mb->un.varRdLnk.crcCnt);
- /* Xmit ELS RPS ACC response tag <ulpIoTag> */
- lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS,
- "0118 Xmit ELS RPS ACC response tag x%x xri x%x, "
-@@ -3223,11 +3440,13 @@ lpfc_els_rcv_rps(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- mbox->context2 = lpfc_nlp_get(ndlp);
- mbox->vport = vport;
- mbox->mbox_cmpl = lpfc_els_rsp_rps_acc;
-- if (lpfc_sli_issue_mbox (phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB)) != MBX_NOT_FINISHED)
-+ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
-+ != MBX_NOT_FINISHED)
- /* Mbox completion will send ELS Response */
- return 0;
+- dprintk("cmd %p cnt %d %lu\n", cmd, cmd->use_sg, rq_data_dir(rq));
+- count = blk_rq_map_sg(rq->q, rq, cmd->request_buffer);
+- if (likely(count <= cmd->use_sg)) {
+- cmd->use_sg = count;
+- return 0;
+- }
-
-+ /* Decrement reference count used for the failed mbox
-+ * command.
-+ */
- lpfc_nlp_put(ndlp);
- mempool_free(mbox, phba->mbox_mem_pool);
- }
-@@ -3461,6 +3680,7 @@ lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- * other NLP_FABRIC logins
- */
- lpfc_drop_node(vport, ndlp);
-+
- } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
- /* Fail outstanding I/O now since this
- * device is marked for PLOGI
-@@ -3469,8 +3689,6 @@ lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
- }
- }
-
-- vport->port_state = LPFC_FLOGI;
-- lpfc_set_disctmo(vport);
- lpfc_initial_flogi(vport);
- return 0;
- }
-@@ -3711,6 +3929,7 @@ static void
- lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- struct lpfc_vport *vport, struct lpfc_iocbq *elsiocb)
- {
-+ struct Scsi_Host *shost;
- struct lpfc_nodelist *ndlp;
- struct ls_rjt stat;
- uint32_t *payload;
-@@ -3750,11 +3969,19 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- goto dropit;
+- eprintk("cmd %p cnt %d\n", cmd, cmd->use_sg);
+- scsi_free_sgtable(cmd);
+- return -EINVAL;
++ dprintk("cmd %p cnt %d %lu\n", cmd, scsi_sg_count(cmd),
++ rq_data_dir(rq));
++ count = blk_rq_map_sg(rq->q, rq, scsi_sglist(cmd));
++ BUG_ON(count > cmd->use_sg);
++ cmd->use_sg = count;
++ return 0;
+ }
- lpfc_nlp_init(vport, ndlp, did);
-+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
- newnode = 1;
- if ((did & Fabric_DID_MASK) == Fabric_DID_MASK) {
- ndlp->nlp_type |= NLP_FABRIC;
- }
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
-+ }
-+ else {
-+ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
-+ /* This is simular to the new node path */
-+ lpfc_nlp_get(ndlp);
-+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
-+ newnode = 1;
-+ }
+ /* TODO: test this crap and replace bio_map_user with new interface maybe */
+@@ -496,8 +491,8 @@ int scsi_tgt_kspace_exec(int host_no, u64 itn_id, int result, u64 tag,
}
+ cmd = rq->special;
- phba->fc_stat.elsRcvFrame++;
-@@ -3783,6 +4010,12 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- rjt_err = LSRJT_UNABLE_TPC;
- break;
- }
-+
-+ shost = lpfc_shost_from_vport(vport);
-+ spin_lock_irq(shost->host_lock);
-+ ndlp->nlp_flag &= ~NLP_TARGET_REMOVE;
-+ spin_unlock_irq(shost->host_lock);
-+
- lpfc_disc_state_machine(vport, ndlp, elsiocb,
- NLP_EVT_RCV_PLOGI);
+- dprintk("cmd %p scb %x result %d len %d bufflen %u %lu %x\n",
+- cmd, cmd->cmnd[0], result, len, cmd->request_bufflen,
++ dprintk("cmd %p scb %x result %d len %d bufflen %u %u %x\n",
++ cmd, cmd->cmnd[0], result, len, scsi_bufflen(cmd),
+ rq_data_dir(rq), cmd->cmnd[0]);
-@@ -3795,7 +4028,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvFLOGI++;
- lpfc_els_rcv_flogi(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- case ELS_CMD_LOGO:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3825,7 +4058,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvRSCN++;
- lpfc_els_rcv_rscn(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- case ELS_CMD_ADISC:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3897,7 +4130,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvLIRR++;
- lpfc_els_rcv_lirr(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- case ELS_CMD_RPS:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3907,7 +4140,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvRPS++;
- lpfc_els_rcv_rps(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- case ELS_CMD_RPL:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3917,7 +4150,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvRPL++;
- lpfc_els_rcv_rpl(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- case ELS_CMD_RNID:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3927,7 +4160,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- phba->fc_stat.elsRcvRNID++;
- lpfc_els_rcv_rnid(vport, elsiocb, ndlp);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- default:
- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
-@@ -3942,7 +4175,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- "0115 Unknown ELS command x%x "
- "received from NPORT x%x\n", cmd, did);
- if (newnode)
-- lpfc_drop_node(vport, ndlp);
-+ lpfc_nlp_put(ndlp);
- break;
- }
+ if (result == TASK_ABORTED) {
+@@ -617,7 +612,7 @@ int scsi_tgt_kspace_it_nexus_rsp(int host_no, u64 itn_id, int result)
+ struct Scsi_Host *shost;
+ int err = -EINVAL;
-@@ -3958,10 +4191,11 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- return;
+- dprintk("%d %d %llx\n", host_no, result, (unsigned long long) mid);
++ dprintk("%d %d%llx\n", host_no, result, (unsigned long long)itn_id);
- dropit:
-- lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
-+ if (vport && !(vport->load_flag & FC_UNLOADING))
-+ lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
- "(%d):0111 Dropping received ELS cmd "
- "Data: x%x x%x x%x\n",
-- vport ? vport->vpi : 0xffff, icmd->ulpStatus,
-+ vport->vpi, icmd->ulpStatus,
- icmd->un.ulpWord[4], icmd->ulpTimeout);
- phba->fc_stat.elsRcvDrop++;
- }
-@@ -4114,8 +4348,9 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
- MAILBOX_t *mb = &pmb->mb;
+ shost = scsi_host_lookup(host_no);
+ if (IS_ERR(shost)) {
+diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
+index 7a7cfe5..b1119da 100644
+--- a/drivers/scsi/scsi_transport_fc.c
++++ b/drivers/scsi/scsi_transport_fc.c
+@@ -481,9 +481,9 @@ MODULE_PARM_DESC(dev_loss_tmo,
+ " exceeded, the scsi target is removed. Value should be"
+ " between 1 and SCSI_DEVICE_BLOCK_MAX_TIMEOUT.");
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
-- lpfc_nlp_put(ndlp);
-+ spin_unlock_irq(shost->host_lock);
+-/**
++/*
+ * Netlink Infrastructure
+- **/
++ */
- if (mb->mbxStatus) {
- lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
-@@ -4135,7 +4370,9 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- default:
- /* Try to recover from this error */
- lpfc_mbx_unreg_vpi(vport);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
-+ spin_unlock_irq(shost->host_lock);
- lpfc_initial_fdisc(vport);
- break;
- }
-@@ -4146,14 +4383,21 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- else
- lpfc_do_scr_ns_plogi(phba, vport);
- }
-+
-+ /* Now, we decrement the ndlp reference count held for this
-+ * callback function
-+ */
-+ lpfc_nlp_put(ndlp);
-+
- mempool_free(pmb, phba->mbox_mem_pool);
- return;
- }
+ static atomic_t fc_event_seq;
--void
-+static void
- lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
- struct lpfc_nodelist *ndlp)
+@@ -491,10 +491,10 @@ static atomic_t fc_event_seq;
+ * fc_get_event_number - Obtain the next sequential FC event number
+ *
+ * Notes:
+- * We could have inline'd this, but it would have required fc_event_seq to
++ * We could have inlined this, but it would have required fc_event_seq to
+ * be exposed. For now, live with the subroutine call.
+ * Atomic used to avoid lock/unlock...
+- **/
++ */
+ u32
+ fc_get_event_number(void)
{
-+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
- LPFC_MBOXQ_t *mbox;
+@@ -505,7 +505,6 @@ EXPORT_SYMBOL(fc_get_event_number);
- mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-@@ -4162,25 +4406,31 @@ lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
- mbox->vport = vport;
- mbox->context2 = lpfc_nlp_get(ndlp);
- mbox->mbox_cmpl = lpfc_cmpl_reg_new_vport;
-- if (lpfc_sli_issue_mbox(phba, mbox,
-- MBX_NOWAIT | MBX_STOP_IOCB)
-+ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
- == MBX_NOT_FINISHED) {
-+ /* mailbox command not success, decrement ndlp
-+ * reference count for this command
-+ */
-+ lpfc_nlp_put(ndlp);
- mempool_free(mbox, phba->mbox_mem_pool);
-- vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+ /**
+ * fc_host_post_event - called to post an even on an fc_host.
+- *
+ * @shost: host the event occurred on
+ * @event_number: fc event number obtained from get_fc_event_number()
+ * @event_code: fc_host event being posted
+@@ -513,7 +512,7 @@ EXPORT_SYMBOL(fc_get_event_number);
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ void
+ fc_host_post_event(struct Scsi_Host *shost, u32 event_number,
+ enum fc_host_event_code event_code, u32 event_data)
+@@ -579,17 +578,16 @@ EXPORT_SYMBOL(fc_host_post_event);
-- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
- lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
- "0253 Register VPI: Can't send mbox\n");
-+ goto mbox_err_exit;
- }
- } else {
-- lpfc_vport_set_state(vport, FC_VPORT_FAILED);
--
- lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
- "0254 Register VPI: no memory\n");
--
-- vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
-- lpfc_nlp_put(ndlp);
-+ goto mbox_err_exit;
- }
-+ return;
-+
-+mbox_err_exit:
-+ lpfc_vport_set_state(vport, FC_VPORT_FAILED);
-+ spin_lock_irq(shost->host_lock);
-+ vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
-+ spin_unlock_irq(shost->host_lock);
-+ return;
- }
- static void
-@@ -4251,7 +4501,9 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- lpfc_unreg_rpi(vport, np);
- }
- lpfc_mbx_unreg_vpi(vport);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
-+ spin_unlock_irq(shost->host_lock);
- }
+ /**
+- * fc_host_post_vendor_event - called to post a vendor unique event on
+- * a fc_host
+- *
++ * fc_host_post_vendor_event - called to post a vendor unique event on an fc_host
+ * @shost: host the event occurred on
+ * @event_number: fc event number obtained from get_fc_event_number()
+ * @data_len: amount, in bytes, of vendor unique data
+ * @data_buf: pointer to vendor unique data
++ * @vendor_id: Vendor id
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ void
+ fc_host_post_vendor_event(struct Scsi_Host *shost, u32 event_number,
+ u32 data_len, char * data_buf, u64 vendor_id)
+@@ -1900,7 +1898,6 @@ static int fc_vport_match(struct attribute_container *cont,
- if (vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)
-@@ -4259,14 +4511,15 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- else
- lpfc_do_scr_ns_plogi(phba, vport);
+ /**
+ * fc_timed_out - FC Transport I/O timeout intercept handler
+- *
+ * @scmd: The SCSI command which timed out
+ *
+ * This routine protects against error handlers getting invoked while a
+@@ -1920,7 +1917,7 @@ static int fc_vport_match(struct attribute_container *cont,
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ static enum scsi_eh_timer_return
+ fc_timed_out(struct scsi_cmnd *scmd)
+ {
+@@ -2133,7 +2130,7 @@ EXPORT_SYMBOL(fc_release_transport);
+ * 1 - work queued for execution
+ * 0 - work is already queued
+ * -EINVAL - work queue doesn't exist
+- **/
++ */
+ static int
+ fc_queue_work(struct Scsi_Host *shost, struct work_struct *work)
+ {
+@@ -2152,7 +2149,7 @@ fc_queue_work(struct Scsi_Host *shost, struct work_struct *work)
+ /**
+ * fc_flush_work - Flush a fc_host's workqueue.
+ * @shost: Pointer to Scsi_Host bound to fc_host.
+- **/
++ */
+ static void
+ fc_flush_work(struct Scsi_Host *shost)
+ {
+@@ -2175,7 +2172,7 @@ fc_flush_work(struct Scsi_Host *shost)
+ *
+ * Return value:
+ * 1 on success / 0 already queued / < 0 for error
+- **/
++ */
+ static int
+ fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
+ unsigned long delay)
+@@ -2195,7 +2192,7 @@ fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
+ /**
+ * fc_flush_devloss - Flush a fc_host's devloss workqueue.
+ * @shost: Pointer to Scsi_Host bound to fc_host.
+- **/
++ */
+ static void
+ fc_flush_devloss(struct Scsi_Host *shost)
+ {
+@@ -2212,21 +2209,20 @@ fc_flush_devloss(struct Scsi_Host *shost)
-- lpfc_nlp_put(ndlp); /* Free Fabric ndlp for vports */
-+ /* Unconditionaly kick off releasing fabric node for vports */
-+ lpfc_nlp_put(ndlp);
- }
- out:
- lpfc_els_free_iocb(phba, cmdiocb);
- }
+ /**
+- * fc_remove_host - called to terminate any fc_transport-related elements
+- * for a scsi host.
+- * @rport: remote port to be unblocked.
++ * fc_remove_host - called to terminate any fc_transport-related elements for a scsi host.
++ * @shost: Which &Scsi_Host
+ *
+ * This routine is expected to be called immediately preceeding the
+ * a driver's call to scsi_remove_host().
+ *
+ * WARNING: A driver utilizing the fc_transport, which fails to call
+- * this routine prior to scsi_remote_host(), will leave dangling
++ * this routine prior to scsi_remove_host(), will leave dangling
+ * objects in /sys/class/fc_remote_ports. Access to any of these
+ * objects can result in a system crash !!!
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ void
+ fc_remove_host(struct Scsi_Host *shost)
+ {
+@@ -2281,10 +2277,10 @@ EXPORT_SYMBOL(fc_remove_host);
--int
-+static int
- lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- uint8_t retry)
+ /**
+ * fc_starget_delete - called to delete the scsi decendents of an rport
+- * (target and all sdevs)
+- *
+ * @work: remote port to be operated on.
+- **/
++ *
++ * Deletes target and all sdevs.
++ */
+ static void
+ fc_starget_delete(struct work_struct *work)
{
-@@ -4539,7 +4792,7 @@ lpfc_cmpl_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- }
- }
+@@ -2303,9 +2299,8 @@ fc_starget_delete(struct work_struct *work)
--int
-+static int
- lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
+ /**
+ * fc_rport_final_delete - finish rport termination and delete it.
+- *
+ * @work: remote port to be deleted.
+- **/
++ */
+ static void
+ fc_rport_final_delete(struct work_struct *work)
{
- unsigned long iflags;
-@@ -4583,7 +4836,7 @@ lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
+@@ -2375,7 +2370,7 @@ fc_rport_final_delete(struct work_struct *work)
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ static struct fc_rport *
+ fc_rport_create(struct Scsi_Host *shost, int channel,
+ struct fc_rport_identifiers *ids)
+@@ -2462,8 +2457,7 @@ delete_rport:
}
+ /**
+- * fc_remote_port_add - notifies the fc transport of the existence
+- * of a remote FC port.
++ * fc_remote_port_add - notify fc transport of the existence of a remote FC port.
+ * @shost: scsi host the remote port is connected to.
+ * @channel: Channel on shost port connected to.
+ * @ids: The world wide names, fc address, and FC4 port
+@@ -2499,7 +2493,7 @@ delete_rport:
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ struct fc_rport *
+ fc_remote_port_add(struct Scsi_Host *shost, int channel,
+ struct fc_rport_identifiers *ids)
+@@ -2683,19 +2677,18 @@ EXPORT_SYMBOL(fc_remote_port_add);
+
--void lpfc_fabric_abort_vport(struct lpfc_vport *vport)
-+static void lpfc_fabric_abort_vport(struct lpfc_vport *vport)
+ /**
+- * fc_remote_port_delete - notifies the fc transport that a remote
+- * port is no longer in existence.
++ * fc_remote_port_delete - notifies the fc transport that a remote port is no longer in existence.
+ * @rport: The remote port that no longer exists
+ *
+ * The LLDD calls this routine to notify the transport that a remote
+ * port is no longer part of the topology. Note: Although a port
+ * may no longer be part of the topology, it may persist in the remote
+ * ports displayed by the fc_host. We do this under 2 conditions:
+- * - If the port was a scsi target, we delay its deletion by "blocking" it.
++ * 1) If the port was a scsi target, we delay its deletion by "blocking" it.
+ * This allows the port to temporarily disappear, then reappear without
+ * disrupting the SCSI device tree attached to it. During the "blocked"
+ * period the port will still exist.
+- * - If the port was a scsi target and disappears for longer than we
++ * 2) If the port was a scsi target and disappears for longer than we
+ * expect, we'll delete the port and the tear down the SCSI device tree
+ * attached to it. However, we want to semi-persist the target id assigned
+ * to that port if it eventually does exist. The port structure will
+@@ -2709,7 +2702,8 @@ EXPORT_SYMBOL(fc_remote_port_add);
+ * temporary blocked state. From the LLDD's perspective, the rport no
+ * longer exists. From the SCSI midlayer's perspective, the SCSI target
+ * exists, but all sdevs on it are blocked from further I/O. The following
+- * is then expected:
++ * is then expected.
++ *
+ * If the remote port does not return (signaled by a LLDD call to
+ * fc_remote_port_add()) within the dev_loss_tmo timeout, then the
+ * scsi target is removed - killing all outstanding i/o and removing the
+@@ -2731,7 +2725,7 @@ EXPORT_SYMBOL(fc_remote_port_add);
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ void
+ fc_remote_port_delete(struct fc_rport *rport)
{
- LIST_HEAD(completions);
- struct lpfc_hba *phba = vport->phba;
-@@ -4663,6 +4916,7 @@ void lpfc_fabric_abort_hba(struct lpfc_hba *phba)
- }
+@@ -2792,12 +2786,12 @@ fc_remote_port_delete(struct fc_rport *rport)
+ EXPORT_SYMBOL(fc_remote_port_delete);
+ /**
+- * fc_remote_port_rolechg - notifies the fc transport that the roles
+- * on a remote may have changed.
++ * fc_remote_port_rolechg - notifies the fc transport that the roles on a remote may have changed.
+ * @rport: The remote port that changed.
++ * @roles: New roles for this port.
+ *
+- * The LLDD calls this routine to notify the transport that the roles
+- * on a remote port may have changed. The largest effect of this is
++ * Description: The LLDD calls this routine to notify the transport that the
++ * roles on a remote port may have changed. The largest effect of this is
+ * if a port now becomes a FCP Target, it must be allocated a
+ * scsi target id. If the port is no longer a FCP target, any
+ * scsi target id value assigned to it will persist in case the
+@@ -2810,7 +2804,7 @@ EXPORT_SYMBOL(fc_remote_port_delete);
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ void
+ fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
+ {
+@@ -2875,12 +2869,12 @@ fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
+ EXPORT_SYMBOL(fc_remote_port_rolechg);
-+#if 0
- void lpfc_fabric_abort_flogi(struct lpfc_hba *phba)
+ /**
+- * fc_timeout_deleted_rport - Timeout handler for a deleted remote port,
+- * which we blocked, and has now failed to return
+- * in the allotted time.
+- *
++ * fc_timeout_deleted_rport - Timeout handler for a deleted remote port.
+ * @work: rport target that failed to reappear in the allotted time.
+- **/
++ *
++ * Description: An attempt to delete a remote port blocks, and if it fails
++ * to return in the allotted time this gets called.
++ */
+ static void
+ fc_timeout_deleted_rport(struct work_struct *work)
{
- LIST_HEAD(completions);
-@@ -4693,5 +4947,6 @@ void lpfc_fabric_abort_flogi(struct lpfc_hba *phba)
- (piocb->iocb_cmpl) (phba, piocb, piocb);
- }
+@@ -2984,14 +2978,12 @@ fc_timeout_deleted_rport(struct work_struct *work)
}
-+#endif /* 0 */
+ /**
+- * fc_timeout_fail_rport_io - Timeout handler for a fast io failing on a
+- * disconnected SCSI target.
+- *
++ * fc_timeout_fail_rport_io - Timeout handler for a fast io failing on a disconnected SCSI target.
+ * @work: rport to terminate io on.
+ *
+ * Notes: Only requests the failure of the io, not that all are flushed
+ * prior to returning.
+- **/
++ */
+ static void
+ fc_timeout_fail_rport_io(struct work_struct *work)
+ {
+@@ -3008,9 +3000,8 @@ fc_timeout_fail_rport_io(struct work_struct *work)
-diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
-index c81c2b3..dc042bd 100644
---- a/drivers/scsi/lpfc/lpfc_hbadisc.c
-+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
-@@ -57,6 +57,7 @@ static uint8_t lpfcAlpaArray[] = {
- };
+ /**
+ * fc_scsi_scan_rport - called to perform a scsi scan on a remote port.
+- *
+ * @work: remote port to be scanned.
+- **/
++ */
+ static void
+ fc_scsi_scan_rport(struct work_struct *work)
+ {
+@@ -3047,7 +3038,7 @@ fc_scsi_scan_rport(struct work_struct *work)
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ static int
+ fc_vport_create(struct Scsi_Host *shost, int channel, struct device *pdev,
+ struct fc_vport_identifiers *ids, struct fc_vport **ret_vport)
+@@ -3172,7 +3163,7 @@ delete_vport:
+ *
+ * Notes:
+ * This routine assumes no locks are held on entry.
+- **/
++ */
+ int
+ fc_vport_terminate(struct fc_vport *vport)
+ {
+@@ -3232,9 +3223,8 @@ EXPORT_SYMBOL(fc_vport_terminate);
- static void lpfc_disc_timeout_handler(struct lpfc_vport *);
-+static void lpfc_disc_flush_list(struct lpfc_vport *vport);
+ /**
+ * fc_vport_sched_delete - workq-based delete request for a vport
+- *
+ * @work: vport to be deleted.
+- **/
++ */
+ static void
+ fc_vport_sched_delete(struct work_struct *work)
+ {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 5428d15..0d7b4e7 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -30,10 +30,10 @@
+ #include <scsi/scsi_transport_iscsi.h>
+ #include <scsi/iscsi_if.h>
- void
- lpfc_terminate_rport_io(struct fc_rport *rport)
-@@ -107,20 +108,14 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
- struct lpfc_nodelist * ndlp;
- struct lpfc_vport *vport;
- struct lpfc_hba *phba;
-- struct completion devloss_compl;
- struct lpfc_work_evt *evtp;
-+ int put_node;
-+ int put_rport;
+-#define ISCSI_SESSION_ATTRS 15
++#define ISCSI_SESSION_ATTRS 18
+ #define ISCSI_CONN_ATTRS 11
+ #define ISCSI_HOST_ATTRS 4
+-#define ISCSI_TRANSPORT_VERSION "2.0-724"
++#define ISCSI_TRANSPORT_VERSION "2.0-867"
- rdata = rport->dd_data;
- ndlp = rdata->pnode;
--
-- if (!ndlp) {
-- if (rport->scsi_target_id != -1) {
-- printk(KERN_ERR "Cannot find remote node"
-- " for rport in dev_loss_tmo_callbk x%x\n",
-- rport->port_id);
-- }
-+ if (!ndlp)
- return;
-- }
+ struct iscsi_internal {
+ int daemon_pid;
+@@ -50,6 +50,7 @@ struct iscsi_internal {
+ };
- vport = ndlp->vport;
- phba = vport->phba;
-@@ -129,15 +124,35 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
- "rport devlosscb: sid:x%x did:x%x flg:x%x",
- ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag);
+ static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
++static struct workqueue_struct *iscsi_eh_timer_workq;
+
+ /*
+ * list of registered transports and lock that must
+@@ -115,6 +116,8 @@ static struct attribute_group iscsi_transport_group = {
+ .attrs = iscsi_transport_attrs,
+ };
-- init_completion(&devloss_compl);
-+ /* Don't defer this if we are in the process of deleting the vport
-+ * or unloading the driver. The unload will cleanup the node
-+ * appropriately we just need to cleanup the ndlp rport info here.
-+ */
-+ if (vport->load_flag & FC_UNLOADING) {
-+ put_node = rdata->pnode != NULL;
-+ put_rport = ndlp->rport != NULL;
-+ rdata->pnode = NULL;
-+ ndlp->rport = NULL;
-+ if (put_node)
-+ lpfc_nlp_put(ndlp);
-+ if (put_rport)
-+ put_device(&rport->dev);
-+ return;
-+ }
+
-+ if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
-+ return;
+
- evtp = &ndlp->dev_loss_evt;
-
- if (!list_empty(&evtp->evt_listp))
- return;
-
- spin_lock_irq(&phba->hbalock);
-- evtp->evt_arg1 = ndlp;
-- evtp->evt_arg2 = &devloss_compl;
-+ /* We need to hold the node by incrementing the reference
-+ * count until this queued work is done
-+ */
-+ evtp->evt_arg1 = lpfc_nlp_get(ndlp);
- evtp->evt = LPFC_EVT_DEV_LOSS;
- list_add_tail(&evtp->evt_listp, &phba->work_list);
- if (phba->work_wait)
-@@ -145,8 +160,6 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ static int iscsi_setup_host(struct transport_container *tc, struct device *dev,
+ struct class_device *cdev)
+ {
+@@ -124,13 +127,30 @@ static int iscsi_setup_host(struct transport_container *tc, struct device *dev,
+ memset(ihost, 0, sizeof(*ihost));
+ INIT_LIST_HEAD(&ihost->sessions);
+ mutex_init(&ihost->mutex);
++
++ snprintf(ihost->unbind_workq_name, KOBJ_NAME_LEN, "iscsi_unbind_%d",
++ shost->host_no);
++ ihost->unbind_workq = create_singlethread_workqueue(
++ ihost->unbind_workq_name);
++ if (!ihost->unbind_workq)
++ return -ENOMEM;
++ return 0;
++}
++
++static int iscsi_remove_host(struct transport_container *tc, struct device *dev,
++ struct class_device *cdev)
++{
++ struct Scsi_Host *shost = dev_to_shost(dev);
++ struct iscsi_host *ihost = shost->shost_data;
++
++ destroy_workqueue(ihost->unbind_workq);
+ return 0;
+ }
- spin_unlock_irq(&phba->hbalock);
+ static DECLARE_TRANSPORT_CLASS(iscsi_host_class,
+ "iscsi_host",
+ iscsi_setup_host,
+- NULL,
++ iscsi_remove_host,
+ NULL);
-- wait_for_completion(&devloss_compl);
--
- return;
+ static DECLARE_TRANSPORT_CLASS(iscsi_session_class,
+@@ -252,7 +272,7 @@ static void session_recovery_timedout(struct work_struct *work)
+ void iscsi_unblock_session(struct iscsi_cls_session *session)
+ {
+ if (!cancel_delayed_work(&session->recovery_work))
+- flush_scheduled_work();
++ flush_workqueue(iscsi_eh_timer_workq);
+ scsi_target_unblock(&session->dev);
}
-
-@@ -154,7 +167,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
- * This function is called from the worker thread when dev_loss_tmo
- * expire.
- */
--void
-+static void
- lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ EXPORT_SYMBOL_GPL(iscsi_unblock_session);
+@@ -260,11 +280,40 @@ EXPORT_SYMBOL_GPL(iscsi_unblock_session);
+ void iscsi_block_session(struct iscsi_cls_session *session)
{
- struct lpfc_rport_data *rdata;
-@@ -162,6 +175,8 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
- struct lpfc_vport *vport;
- struct lpfc_hba *phba;
- uint8_t *name;
-+ int put_node;
-+ int put_rport;
- int warn_on = 0;
-
- rport = ndlp->rport;
-@@ -178,14 +193,32 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
- "rport devlosstmo:did:x%x type:x%x id:x%x",
- ndlp->nlp_DID, ndlp->nlp_type, rport->scsi_target_id);
-
-- if (!(vport->load_flag & FC_UNLOADING) &&
-- ndlp->nlp_state == NLP_STE_MAPPED_NODE)
-+ /* Don't defer this if we are in the process of deleting the vport
-+ * or unloading the driver. The unload will cleanup the node
-+ * appropriately we just need to cleanup the ndlp rport info here.
-+ */
-+ if (vport->load_flag & FC_UNLOADING) {
-+ if (ndlp->nlp_sid != NLP_NO_SID) {
-+ /* flush the target */
-+ lpfc_sli_abort_iocb(vport,
-+ &phba->sli.ring[phba->sli.fcp_ring],
-+ ndlp->nlp_sid, 0, LPFC_CTX_TGT);
-+ }
-+ put_node = rdata->pnode != NULL;
-+ put_rport = ndlp->rport != NULL;
-+ rdata->pnode = NULL;
-+ ndlp->rport = NULL;
-+ if (put_node)
-+ lpfc_nlp_put(ndlp);
-+ if (put_rport)
-+ put_device(&rport->dev);
- return;
-+ }
+ scsi_target_block(&session->dev);
+- schedule_delayed_work(&session->recovery_work,
+- session->recovery_tmo * HZ);
++ queue_delayed_work(iscsi_eh_timer_workq, &session->recovery_work,
++ session->recovery_tmo * HZ);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_block_session);
-- if (ndlp->nlp_type & NLP_FABRIC) {
-- int put_node;
-- int put_rport;
-+ if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
++static void __iscsi_unbind_session(struct work_struct *work)
++{
++ struct iscsi_cls_session *session =
++ container_of(work, struct iscsi_cls_session,
++ unbind_work);
++ struct Scsi_Host *shost = iscsi_session_to_shost(session);
++ struct iscsi_host *ihost = shost->shost_data;
++
++ /* Prevent new scans and make sure scanning is not in progress */
++ mutex_lock(&ihost->mutex);
++ if (list_empty(&session->host_list)) {
++ mutex_unlock(&ihost->mutex);
+ return;
-
-+ if (ndlp->nlp_type & NLP_FABRIC) {
- /* We will clean up these Nodes in linkup */
- put_node = rdata->pnode != NULL;
- put_rport = ndlp->rport != NULL;
-@@ -227,23 +260,20 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
- ndlp->nlp_state, ndlp->nlp_rpi);
- }
-
-+ put_node = rdata->pnode != NULL;
-+ put_rport = ndlp->rport != NULL;
-+ rdata->pnode = NULL;
-+ ndlp->rport = NULL;
-+ if (put_node)
-+ lpfc_nlp_put(ndlp);
-+ if (put_rport)
-+ put_device(&rport->dev);
++ }
++ list_del_init(&session->host_list);
++ mutex_unlock(&ihost->mutex);
+
- if (!(vport->load_flag & FC_UNLOADING) &&
- !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
- !(ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
-- (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE))
-+ (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE)) {
- lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RM);
-- else {
-- int put_node;
-- int put_rport;
--
-- put_node = rdata->pnode != NULL;
-- put_rport = ndlp->rport != NULL;
-- rdata->pnode = NULL;
-- ndlp->rport = NULL;
-- if (put_node)
-- lpfc_nlp_put(ndlp);
-- if (put_rport)
-- put_device(&rport->dev);
- }
- }
++ scsi_remove_target(&session->dev);
++ iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
++}
++
++static int iscsi_unbind_session(struct iscsi_cls_session *session)
++{
++ struct Scsi_Host *shost = iscsi_session_to_shost(session);
++ struct iscsi_host *ihost = shost->shost_data;
++
++ return queue_work(ihost->unbind_workq, &session->unbind_work);
++}
++
+ struct iscsi_cls_session *
+ iscsi_alloc_session(struct Scsi_Host *shost,
+ struct iscsi_transport *transport)
+@@ -281,6 +330,7 @@ iscsi_alloc_session(struct Scsi_Host *shost,
+ INIT_DELAYED_WORK(&session->recovery_work, session_recovery_timedout);
+ INIT_LIST_HEAD(&session->host_list);
+ INIT_LIST_HEAD(&session->sess_list);
++ INIT_WORK(&session->unbind_work, __iscsi_unbind_session);
-@@ -260,7 +290,6 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ /* this is released in the dev's release function */
+ scsi_host_get(shost);
+@@ -297,6 +347,7 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
{
- struct lpfc_work_evt *evtp = NULL;
- struct lpfc_nodelist *ndlp;
-- struct lpfc_vport *vport;
- int free_evt;
-
- spin_lock_irq(&phba->hbalock);
-@@ -270,35 +299,22 @@ lpfc_work_list_done(struct lpfc_hba *phba)
- spin_unlock_irq(&phba->hbalock);
- free_evt = 1;
- switch (evtp->evt) {
-- case LPFC_EVT_DEV_LOSS_DELAY:
-- free_evt = 0; /* evt is part of ndlp */
-- ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
-- vport = ndlp->vport;
-- if (!vport)
-- break;
--
-- lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
-- "rport devlossdly:did:x%x flg:x%x",
-- ndlp->nlp_DID, ndlp->nlp_flag, 0);
--
-- if (!(vport->load_flag & FC_UNLOADING) &&
-- !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
-- !(ndlp->nlp_flag & NLP_NPR_2B_DISC)) {
-- lpfc_disc_state_machine(vport, ndlp, NULL,
-- NLP_EVT_DEVICE_RM);
-- }
-- break;
- case LPFC_EVT_ELS_RETRY:
- ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
- lpfc_els_retry_delay_handler(ndlp);
- free_evt = 0; /* evt is part of ndlp */
-+ /* decrement the node reference count held
-+ * for this queued work
-+ */
-+ lpfc_nlp_put(ndlp);
- break;
- case LPFC_EVT_DEV_LOSS:
- ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
-- lpfc_nlp_get(ndlp);
- lpfc_dev_loss_tmo_handler(ndlp);
- free_evt = 0;
-- complete((struct completion *)(evtp->evt_arg2));
-+ /* decrement the node reference count held for
-+ * this queued work
-+ */
- lpfc_nlp_put(ndlp);
- break;
- case LPFC_EVT_ONLINE:
-@@ -373,7 +389,7 @@ lpfc_work_done(struct lpfc_hba *phba)
- lpfc_handle_latt(phba);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS; i++) {
-+ for(i = 0; i <= phba->max_vpi; i++) {
- /*
- * We could have no vports in array if unloading, so if
- * this happens then just use the pport
-@@ -405,14 +421,14 @@ lpfc_work_done(struct lpfc_hba *phba)
- vport->work_port_events &= ~work_port_events;
- spin_unlock_irq(&vport->work_port_lock);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
+ struct Scsi_Host *shost = iscsi_session_to_shost(session);
+ struct iscsi_host *ihost;
++ unsigned long flags;
+ int err;
- pring = &phba->sli.ring[LPFC_ELS_RING];
- status = (ha_copy & (HA_RXMASK << (4*LPFC_ELS_RING)));
- status >>= (4*LPFC_ELS_RING);
- if ((status & HA_RXMASK)
- || (pring->flag & LPFC_DEFERRED_RING_EVENT)) {
-- if (pring->flag & LPFC_STOP_IOCB_MASK) {
-+ if (pring->flag & LPFC_STOP_IOCB_EVENT) {
- pring->flag |= LPFC_DEFERRED_RING_EVENT;
- } else {
- lpfc_sli_handle_slow_ring_event(phba, pring,
-@@ -544,6 +560,7 @@ lpfc_workq_post_event(struct lpfc_hba *phba, void *arg1, void *arg2,
- void
- lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
- {
-+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
- struct lpfc_hba *phba = vport->phba;
- struct lpfc_nodelist *ndlp, *next_ndlp;
- int rc;
-@@ -552,7 +569,9 @@ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
- continue;
+ ihost = shost->shost_data;
+@@ -313,9 +364,15 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
+ }
+ transport_register_device(&session->dev);
-- if (phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN)
-+ if ((phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) ||
-+ ((vport->port_type == LPFC_NPIV_PORT) &&
-+ (ndlp->nlp_DID == NameServer_DID)))
- lpfc_unreg_rpi(vport, ndlp);
++ spin_lock_irqsave(&sesslock, flags);
++ list_add(&session->sess_list, &sesslist);
++ spin_unlock_irqrestore(&sesslock, flags);
++
+ mutex_lock(&ihost->mutex);
+ list_add(&session->host_list, &ihost->sessions);
+ mutex_unlock(&ihost->mutex);
++
++ iscsi_session_event(session, ISCSI_KEVENT_CREATE_SESSION);
+ return 0;
- /* Leave Fabric nodes alone on link down */
-@@ -565,14 +584,30 @@ lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
- }
- if (phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) {
- lpfc_mbx_unreg_vpi(vport);
-+ spin_lock_irq(shost->host_lock);
- vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
-+ spin_unlock_irq(shost->host_lock);
- }
+ release_host:
+@@ -328,9 +385,10 @@ EXPORT_SYMBOL_GPL(iscsi_add_session);
+ * iscsi_create_session - create iscsi class session
+ * @shost: scsi host
+ * @transport: iscsi transport
++ * @target_id: which target
+ *
+ * This can be called from a LLD or iscsi_transport.
+- **/
++ */
+ struct iscsi_cls_session *
+ iscsi_create_session(struct Scsi_Host *shost,
+ struct iscsi_transport *transport,
+@@ -350,19 +408,58 @@ iscsi_create_session(struct Scsi_Host *shost,
}
+ EXPORT_SYMBOL_GPL(iscsi_create_session);
-+void
-+lpfc_port_link_failure(struct lpfc_vport *vport)
++static void iscsi_conn_release(struct device *dev)
+{
-+ /* Cleanup any outstanding RSCN activity */
-+ lpfc_els_flush_rscn(vport);
++ struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev);
++ struct device *parent = conn->dev.parent;
+
-+ /* Cleanup any outstanding ELS commands */
-+ lpfc_els_flush_cmd(vport);
++ kfree(conn);
++ put_device(parent);
++}
+
-+ lpfc_cleanup_rpis(vport, 0);
++static int iscsi_is_conn_dev(const struct device *dev)
++{
++ return dev->release == iscsi_conn_release;
++}
+
-+ /* Turn off discovery timer if its running */
-+ lpfc_can_disctmo(vport);
++static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
++{
++ if (!iscsi_is_conn_dev(dev))
++ return 0;
++ return iscsi_destroy_conn(iscsi_dev_to_conn(dev));
+}
+
- static void
- lpfc_linkdown_port(struct lpfc_vport *vport)
+ void iscsi_remove_session(struct iscsi_cls_session *session)
{
-- struct lpfc_nodelist *ndlp, *next_ndlp;
- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
-
- fc_host_post_event(shost, fc_get_event_number(), FCH_EVT_LINKDOWN, 0);
-@@ -581,21 +616,8 @@ lpfc_linkdown_port(struct lpfc_vport *vport)
- "Link Down: state:x%x rtry:x%x flg:x%x",
- vport->port_state, vport->fc_ns_retry, vport->fc_flag);
+ struct Scsi_Host *shost = iscsi_session_to_shost(session);
+ struct iscsi_host *ihost = shost->shost_data;
++ unsigned long flags;
++ int err;
-- /* Cleanup any outstanding RSCN activity */
-- lpfc_els_flush_rscn(vport);
--
-- /* Cleanup any outstanding ELS commands */
-- lpfc_els_flush_cmd(vport);
-+ lpfc_port_link_failure(vport);
+- if (!cancel_delayed_work(&session->recovery_work))
+- flush_scheduled_work();
++ spin_lock_irqsave(&sesslock, flags);
++ list_del(&session->sess_list);
++ spin_unlock_irqrestore(&sesslock, flags);
-- lpfc_cleanup_rpis(vport, 0);
--
-- /* free any ndlp's on unused list */
-- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
-- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
-- lpfc_drop_node(vport, ndlp);
--
-- /* Turn off discovery timer if its running */
-- lpfc_can_disctmo(vport);
- }
+- mutex_lock(&ihost->mutex);
+- list_del(&session->host_list);
+- mutex_unlock(&ihost->mutex);
++ /*
++ * If we are blocked let commands flow again. The lld or iscsi
++ * layer should set up the queuecommand to fail commands.
++ */
++ iscsi_unblock_session(session);
++ iscsi_unbind_session(session);
++ /*
++ * If the session dropped while removing devices then we need to make
++ * sure it is not blocked
++ */
++ if (!cancel_delayed_work(&session->recovery_work))
++ flush_workqueue(iscsi_eh_timer_workq);
++ flush_workqueue(ihost->unbind_workq);
- int
-@@ -618,18 +640,18 @@ lpfc_linkdown(struct lpfc_hba *phba)
- spin_unlock_irq(&phba->hbalock);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- /* Issue a LINK DOWN event to all nodes */
- lpfc_linkdown_port(vports[i]);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- /* Clean up any firmware default rpi's */
- mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
- if (mb) {
- lpfc_unreg_did(phba, 0xffff, 0xffffffff, mb);
- mb->vport = vport;
- mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-- if (lpfc_sli_issue_mbox(phba, mb, (MBX_NOWAIT | MBX_STOP_IOCB))
-+ if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
- == MBX_NOT_FINISHED) {
- mempool_free(mb, phba->mbox_mem_pool);
- }
-@@ -643,8 +665,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
- lpfc_config_link(phba, mb);
- mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
- mb->vport = vport;
-- if (lpfc_sli_issue_mbox(phba, mb,
-- (MBX_NOWAIT | MBX_STOP_IOCB))
-+ if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
- == MBX_NOT_FINISHED) {
- mempool_free(mb, phba->mbox_mem_pool);
- }
-@@ -686,7 +707,6 @@ static void
- lpfc_linkup_port(struct lpfc_vport *vport)
- {
- struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
-- struct lpfc_nodelist *ndlp, *next_ndlp;
- struct lpfc_hba *phba = vport->phba;
+- scsi_remove_target(&session->dev);
++ /* hw iscsi may not have removed all connections from session */
++ err = device_for_each_child(&session->dev, NULL,
++ iscsi_iter_destroy_conn_fn);
++ if (err)
++ dev_printk(KERN_ERR, &session->dev, "iscsi: Could not delete "
++ "all connections for session. Error %d.\n", err);
- if ((vport->load_flag & FC_UNLOADING) != 0)
-@@ -713,11 +733,6 @@ lpfc_linkup_port(struct lpfc_vport *vport)
- if (vport->fc_flag & FC_LBIT)
- lpfc_linkup_cleanup_nodes(vport);
+ transport_unregister_device(&session->dev);
+ device_del(&session->dev);
+@@ -371,9 +468,9 @@ EXPORT_SYMBOL_GPL(iscsi_remove_session);
-- /* free any ndlp's in unused state */
-- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
-- nlp_listp)
-- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
-- lpfc_drop_node(vport, ndlp);
+ void iscsi_free_session(struct iscsi_cls_session *session)
+ {
++ iscsi_session_event(session, ISCSI_KEVENT_DESTROY_SESSION);
+ put_device(&session->dev);
}
+-
+ EXPORT_SYMBOL_GPL(iscsi_free_session);
- static int
-@@ -734,9 +749,9 @@ lpfc_linkup(struct lpfc_hba *phba)
-
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++)
- lpfc_linkup_port(vports[i]);
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
- lpfc_issue_clear_la(phba, phba->pport);
+ /**
+@@ -382,7 +479,7 @@ EXPORT_SYMBOL_GPL(iscsi_free_session);
+ *
+ * Can be called by a LLD or iscsi_transport. There must not be
+ * any running connections.
+- **/
++ */
+ int iscsi_destroy_session(struct iscsi_cls_session *session)
+ {
+ iscsi_remove_session(session);
+@@ -391,20 +488,6 @@ int iscsi_destroy_session(struct iscsi_cls_session *session)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_destroy_session);
-@@ -749,7 +764,7 @@ lpfc_linkup(struct lpfc_hba *phba)
- * as the completion routine when the command is
- * handed off to the SLI layer.
- */
--void
-+static void
- lpfc_mbx_cmpl_clear_la(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+-static void iscsi_conn_release(struct device *dev)
+-{
+- struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev);
+- struct device *parent = conn->dev.parent;
+-
+- kfree(conn);
+- put_device(parent);
+-}
+-
+-static int iscsi_is_conn_dev(const struct device *dev)
+-{
+- return dev->release == iscsi_conn_release;
+-}
+-
+ /**
+ * iscsi_create_conn - create iscsi class connection
+ * @session: iscsi cls session
+@@ -418,12 +501,13 @@ static int iscsi_is_conn_dev(const struct device *dev)
+ * for software iscsi we could be trying to preallocate a connection struct
+ * in which case there could be two connection structs and cid would be
+ * non-zero.
+- **/
++ */
+ struct iscsi_cls_conn *
+ iscsi_create_conn(struct iscsi_cls_session *session, uint32_t cid)
{
- struct lpfc_vport *vport = pmb->vport;
-@@ -852,8 +867,6 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- * LPFC_FLOGI while waiting for FLOGI cmpl
- */
- if (vport->port_state != LPFC_FLOGI) {
-- vport->port_state = LPFC_FLOGI;
-- lpfc_set_disctmo(vport);
- lpfc_initial_flogi(vport);
- }
- return;
-@@ -1022,8 +1035,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
- lpfc_read_sparam(phba, sparam_mbox, 0);
- sparam_mbox->vport = vport;
- sparam_mbox->mbox_cmpl = lpfc_mbx_cmpl_read_sparam;
-- rc = lpfc_sli_issue_mbox(phba, sparam_mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, sparam_mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- mp = (struct lpfc_dmabuf *) sparam_mbox->context1;
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
-@@ -1040,8 +1052,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
- lpfc_config_link(phba, cfglink_mbox);
- cfglink_mbox->vport = vport;
- cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
-- rc = lpfc_sli_issue_mbox(phba, cfglink_mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
- if (rc != MBX_NOT_FINISHED)
- return;
- mempool_free(cfglink_mbox, phba->mbox_mem_pool);
-@@ -1174,6 +1185,9 @@ lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
- mempool_free(pmb, phba->mbox_mem_pool);
-+ /* decrement the node reference count held for this callback
-+ * function.
-+ */
- lpfc_nlp_put(ndlp);
+ struct iscsi_transport *transport = session->transport;
+ struct iscsi_cls_conn *conn;
++ unsigned long flags;
+ int err;
- return;
-@@ -1219,7 +1233,7 @@ lpfc_mbx_unreg_vpi(struct lpfc_vport *vport)
- lpfc_unreg_vpi(phba, vport->vpi, mbox);
- mbox->vport = vport;
- mbox->mbox_cmpl = lpfc_mbx_cmpl_unreg_vpi;
-- rc = lpfc_sli_issue_mbox(phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
- "1800 Could not issue unreg_vpi\n");
-@@ -1319,7 +1333,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
- for(i = 0;
-- i < LPFC_MAX_VPORTS && vports[i] != NULL;
-+ i <= phba->max_vpi && vports[i] != NULL;
- i++) {
- if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
- continue;
-@@ -1335,7 +1349,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- "Fabric support\n");
- }
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- lpfc_do_scr_ns_plogi(phba, vport);
+ conn = kzalloc(sizeof(*conn) + transport->conndata_size, GFP_KERNEL);
+@@ -452,6 +536,11 @@ iscsi_create_conn(struct iscsi_cls_session *session, uint32_t cid)
+ goto release_parent_ref;
}
-
-@@ -1361,11 +1375,16 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
-
- if (mb->mbxStatus) {
- out:
-+ /* decrement the node reference count held for this
-+ * callback function.
-+ */
- lpfc_nlp_put(ndlp);
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
- mempool_free(pmb, phba->mbox_mem_pool);
-- lpfc_drop_node(vport, ndlp);
+ transport_register_device(&conn->dev);
+
-+ /* If no other thread is using the ndlp, free it */
-+ lpfc_nlp_not_used(ndlp);
++ spin_lock_irqsave(&connlock, flags);
++ list_add(&conn->conn_list, &connlist);
++ conn->active = 1;
++ spin_unlock_irqrestore(&connlock, flags);
+ return conn;
- if (phba->fc_topology == TOPOLOGY_LOOP) {
- /*
-@@ -1410,6 +1429,9 @@ out:
- goto out;
- }
+ release_parent_ref:
+@@ -465,17 +554,23 @@ EXPORT_SYMBOL_GPL(iscsi_create_conn);
-+ /* decrement the node reference count held for this
-+ * callback function.
-+ */
- lpfc_nlp_put(ndlp);
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
-@@ -1656,8 +1678,18 @@ lpfc_dequeue_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
- void
- lpfc_drop_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ /**
+ * iscsi_destroy_conn - destroy iscsi class connection
+- * @session: iscsi cls session
++ * @conn: iscsi cls session
+ *
+ * This can be called from a LLD or iscsi_transport.
+- **/
++ */
+ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
{
-+ /*
-+ * Use of lpfc_drop_node and UNUSED list: lpfc_drop_node should
-+ * be used if we wish to issue the "last" lpfc_nlp_put() to remove
-+ * the ndlp from the vport. The ndlp is marked as UNUSED on the list
-+ * until ALL other outstanding threads have completed. We check
-+ * that the ndlp is not already in the UNUSED state before we proceed.
-+ */
-+ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
-+ return;
- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
- lpfc_nlp_put(ndlp);
-+ return;
++ unsigned long flags;
++
++ spin_lock_irqsave(&connlock, flags);
++ conn->active = 0;
++ list_del(&conn->conn_list);
++ spin_unlock_irqrestore(&connlock, flags);
++
+ transport_unregister_device(&conn->dev);
+ device_unregister(&conn->dev);
+ return 0;
}
+-
+ EXPORT_SYMBOL_GPL(iscsi_destroy_conn);
/*
-@@ -1868,8 +1900,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
- lpfc_unreg_login(phba, vport->vpi, ndlp->nlp_rpi, mbox);
- mbox->vport = vport;
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-- rc = lpfc_sli_issue_mbox(phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED)
- mempool_free(mbox, phba->mbox_mem_pool);
- }
-@@ -1892,8 +1923,8 @@ lpfc_unreg_all_rpis(struct lpfc_vport *vport)
- lpfc_unreg_login(phba, vport->vpi, 0xffff, mbox);
- mbox->vport = vport;
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-- rc = lpfc_sli_issue_mbox(phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ mbox->context1 = NULL;
-+ rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(mbox, phba->mbox_mem_pool);
- }
-@@ -1912,8 +1943,8 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
- lpfc_unreg_did(phba, vport->vpi, 0xffffffff, mbox);
- mbox->vport = vport;
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-- rc = lpfc_sli_issue_mbox(phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ mbox->context1 = NULL;
-+ rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO);
- if (rc == MBX_NOT_FINISHED) {
- lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
- "1815 Could not issue "
-@@ -1981,11 +2012,6 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
- if (!list_empty(&ndlp->dev_loss_evt.evt_listp))
- list_del_init(&ndlp->dev_loss_evt.evt_listp);
-
-- if (!list_empty(&ndlp->dev_loss_evt.evt_listp)) {
-- list_del_init(&ndlp->dev_loss_evt.evt_listp);
-- complete((struct completion *)(ndlp->dev_loss_evt.evt_arg2));
-- }
--
- lpfc_unreg_rpi(vport, ndlp);
+@@ -685,132 +780,74 @@ iscsi_if_get_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
+ }
- return 0;
-@@ -1999,12 +2025,39 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
- static void
- lpfc_nlp_remove(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ /**
+- * iscsi_if_destroy_session_done - send session destr. completion event
+- * @conn: last connection for session
+- *
+- * This is called by HW iscsi LLDs to notify userspace that its HW has
+- * removed a session.
+- **/
+-int iscsi_if_destroy_session_done(struct iscsi_cls_conn *conn)
++ * iscsi_session_event - send session destr. completion event
++ * @session: iscsi class session
++ * @event: type of event
++ */
++int iscsi_session_event(struct iscsi_cls_session *session,
++ enum iscsi_uevent_e event)
{
-+ struct lpfc_hba *phba = vport->phba;
- struct lpfc_rport_data *rdata;
-+ LPFC_MBOXQ_t *mbox;
-+ int rc;
+ struct iscsi_internal *priv;
+- struct iscsi_cls_session *session;
+ struct Scsi_Host *shost;
+ struct iscsi_uevent *ev;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+- unsigned long flags;
+ int rc, len = NLMSG_SPACE(sizeof(*ev));
- if (ndlp->nlp_flag & NLP_DELAY_TMO) {
- lpfc_cancel_retry_delay_tmo(vport, ndlp);
+- priv = iscsi_if_transport_lookup(conn->transport);
++ priv = iscsi_if_transport_lookup(session->transport);
+ if (!priv)
+ return -EINVAL;
+-
+- session = iscsi_dev_to_session(conn->dev.parent);
+ shost = iscsi_session_to_shost(session);
+
+ skb = alloc_skb(len, GFP_KERNEL);
+ if (!skb) {
+- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+- "session creation event\n");
++ dev_printk(KERN_ERR, &session->dev, "Cannot notify userspace "
++ "of session event %u\n", event);
+ return -ENOMEM;
}
-+ if (ndlp->nlp_flag & NLP_DEFER_RM && !ndlp->nlp_rpi) {
-+ /* For this case we need to cleanup the default rpi
-+ * allocated by the firmware.
-+ */
-+ if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))
-+ != NULL) {
-+ rc = lpfc_reg_login(phba, vport->vpi, ndlp->nlp_DID,
-+ (uint8_t *) &vport->fc_sparam, mbox, 0);
-+ if (rc) {
-+ mempool_free(mbox, phba->mbox_mem_pool);
-+ }
-+ else {
-+ mbox->mbox_flag |= LPFC_MBX_IMED_UNREG;
-+ mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
-+ mbox->vport = vport;
-+ mbox->context2 = NULL;
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
-+ if (rc == MBX_NOT_FINISHED) {
-+ mempool_free(mbox, phba->mbox_mem_pool);
-+ }
-+ }
-+ }
-+ }
-+
- lpfc_cleanup_node(vport, ndlp);
+ nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
+ ev = NLMSG_DATA(nlh);
+- ev->transport_handle = iscsi_handle(conn->transport);
+- ev->type = ISCSI_KEVENT_DESTROY_SESSION;
+- ev->r.d_session.host_no = shost->host_no;
+- ev->r.d_session.sid = session->sid;
+-
+- /*
+- * this will occur if the daemon is not up, so we just warn
+- * the user and when the daemon is restarted it will handle it
+- */
+- rc = iscsi_broadcast_skb(skb, GFP_KERNEL);
+- if (rc < 0)
+- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+- "session destruction event. Check iscsi daemon\n");
+-
+- spin_lock_irqsave(&sesslock, flags);
+- list_del(&session->sess_list);
+- spin_unlock_irqrestore(&sesslock, flags);
++ ev->transport_handle = iscsi_handle(session->transport);
- /*
-@@ -2132,6 +2185,12 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
- }
- if (vport->fc_flag & FC_RSCN_MODE) {
- if (lpfc_rscn_payload_check(vport, did)) {
-+ /* If we've already received a PLOGI from this NPort
-+ * we don't need to try to discover it again.
-+ */
-+ if (ndlp->nlp_flag & NLP_RCV_PLOGI)
-+ return NULL;
-+
- spin_lock_irq(shost->host_lock);
- ndlp->nlp_flag |= NLP_NPR_2B_DISC;
- spin_unlock_irq(shost->host_lock);
-@@ -2144,8 +2203,13 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
- } else
- ndlp = NULL;
- } else {
-+ /* If we've already received a PLOGI from this NPort,
-+ * or we are already in the process of discovery on it,
-+ * we don't need to try to discover it again.
-+ */
- if (ndlp->nlp_state == NLP_STE_ADISC_ISSUE ||
-- ndlp->nlp_state == NLP_STE_PLOGI_ISSUE)
-+ ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
-+ ndlp->nlp_flag & NLP_RCV_PLOGI)
- return NULL;
- lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
- spin_lock_irq(shost->host_lock);
-@@ -2220,8 +2284,7 @@ lpfc_issue_clear_la(struct lpfc_hba *phba, struct lpfc_vport *vport)
- lpfc_clear_la(phba, mbox);
- mbox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
- mbox->vport = vport;
-- rc = lpfc_sli_issue_mbox(phba, mbox, (MBX_NOWAIT |
-- MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(mbox, phba->mbox_mem_pool);
- lpfc_disc_flush_list(vport);
-@@ -2244,8 +2307,7 @@ lpfc_issue_reg_vpi(struct lpfc_hba *phba, struct lpfc_vport *vport)
- lpfc_reg_vpi(phba, vport->vpi, vport->fc_myDID, regvpimbox);
- regvpimbox->mbox_cmpl = lpfc_mbx_cmpl_reg_vpi;
- regvpimbox->vport = vport;
-- if (lpfc_sli_issue_mbox(phba, regvpimbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB))
-+ if (lpfc_sli_issue_mbox(phba, regvpimbox, MBX_NOWAIT)
- == MBX_NOT_FINISHED) {
- mempool_free(regvpimbox, phba->mbox_mem_pool);
- }
-@@ -2414,7 +2476,7 @@ lpfc_free_tx(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
+- spin_lock_irqsave(&connlock, flags);
+- conn->active = 0;
+- list_del(&conn->conn_list);
+- spin_unlock_irqrestore(&connlock, flags);
+-
+- return rc;
+-}
+-EXPORT_SYMBOL_GPL(iscsi_if_destroy_session_done);
+-
+-/**
+- * iscsi_if_create_session_done - send session creation completion event
+- * @conn: leading connection for session
+- *
+- * This is called by HW iscsi LLDs to notify userspace that its HW has
+- * created a session or an existing session is back in the logged in state.
+- **/
+-int iscsi_if_create_session_done(struct iscsi_cls_conn *conn)
+-{
+- struct iscsi_internal *priv;
+- struct iscsi_cls_session *session;
+- struct Scsi_Host *shost;
+- struct iscsi_uevent *ev;
+- struct sk_buff *skb;
+- struct nlmsghdr *nlh;
+- unsigned long flags;
+- int rc, len = NLMSG_SPACE(sizeof(*ev));
+-
+- priv = iscsi_if_transport_lookup(conn->transport);
+- if (!priv)
++ ev->type = event;
++ switch (event) {
++ case ISCSI_KEVENT_DESTROY_SESSION:
++ ev->r.d_session.host_no = shost->host_no;
++ ev->r.d_session.sid = session->sid;
++ break;
++ case ISCSI_KEVENT_CREATE_SESSION:
++ ev->r.c_session_ret.host_no = shost->host_no;
++ ev->r.c_session_ret.sid = session->sid;
++ break;
++ case ISCSI_KEVENT_UNBIND_SESSION:
++ ev->r.unbind_session.host_no = shost->host_no;
++ ev->r.unbind_session.sid = session->sid;
++ break;
++ default:
++ dev_printk(KERN_ERR, &session->dev, "Invalid event %u.\n",
++ event);
++ kfree_skb(skb);
+ return -EINVAL;
+-
+- session = iscsi_dev_to_session(conn->dev.parent);
+- shost = iscsi_session_to_shost(session);
+-
+- skb = alloc_skb(len, GFP_KERNEL);
+- if (!skb) {
+- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+- "session creation event\n");
+- return -ENOMEM;
}
+
+- nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
+- ev = NLMSG_DATA(nlh);
+- ev->transport_handle = iscsi_handle(conn->transport);
+- ev->type = ISCSI_UEVENT_CREATE_SESSION;
+- ev->r.c_session_ret.host_no = shost->host_no;
+- ev->r.c_session_ret.sid = session->sid;
+-
+ /*
+ * this will occur if the daemon is not up, so we just warn
+ * the user and when the daemon is restarted it will handle it
+ */
+ rc = iscsi_broadcast_skb(skb, GFP_KERNEL);
+ if (rc < 0)
+- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
+- "session creation event. Check iscsi daemon\n");
+-
+- spin_lock_irqsave(&sesslock, flags);
+- list_add(&session->sess_list, &sesslist);
+- spin_unlock_irqrestore(&sesslock, flags);
+-
+- spin_lock_irqsave(&connlock, flags);
+- list_add(&conn->conn_list, &connlist);
+- conn->active = 1;
+- spin_unlock_irqrestore(&connlock, flags);
++ dev_printk(KERN_ERR, &session->dev, "Cannot notify userspace "
++ "of session event %u. Check iscsi daemon\n", event);
+ return rc;
}
+-EXPORT_SYMBOL_GPL(iscsi_if_create_session_done);
++EXPORT_SYMBOL_GPL(iscsi_session_event);
--void
-+static void
- lpfc_disc_flush_list(struct lpfc_vport *vport)
+ static int
+ iscsi_if_create_session(struct iscsi_internal *priv, struct iscsi_uevent *ev)
{
- struct lpfc_nodelist *ndlp, *next_ndlp;
-@@ -2426,7 +2488,6 @@ lpfc_disc_flush_list(struct lpfc_vport *vport)
- if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
- ndlp->nlp_state == NLP_STE_ADISC_ISSUE) {
- lpfc_free_tx(phba, ndlp);
-- lpfc_nlp_put(ndlp);
- }
- }
- }
-@@ -2516,6 +2577,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- if (ndlp->nlp_type & NLP_FABRIC) {
- /* Clean up the ndlp on Fabric connections */
- lpfc_drop_node(vport, ndlp);
-+
- } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
- /* Fail outstanding IO now since device
- * is marked for PLOGI.
-@@ -2524,9 +2586,8 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- }
- }
- if (vport->port_state != LPFC_FLOGI) {
-- vport->port_state = LPFC_FLOGI;
-- lpfc_set_disctmo(vport);
- lpfc_initial_flogi(vport);
-+ return;
- }
- break;
-
-@@ -2536,7 +2597,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- /* Initial FLOGI timeout */
- lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
- "0222 Initial %s timeout\n",
-- vport->vpi ? "FLOGI" : "FDISC");
-+ vport->vpi ? "FDISC" : "FLOGI");
-
- /* Assume no Fabric and go on with discovery.
- * Check for outstanding ELS FLOGI to abort.
-@@ -2558,10 +2619,10 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- /* Next look for NameServer ndlp */
- ndlp = lpfc_findnode_did(vport, NameServer_DID);
- if (ndlp)
-- lpfc_nlp_put(ndlp);
-- /* Start discovery */
-- lpfc_disc_start(vport);
-- break;
-+ lpfc_els_abort(phba, ndlp);
-+
-+ /* ReStart discovery */
-+ goto restart_disc;
+ struct iscsi_transport *transport = priv->iscsi_transport;
+ struct iscsi_cls_session *session;
+- unsigned long flags;
+ uint32_t hostno;
- case LPFC_NS_QRY:
- /* Check for wait for NameServer Rsp timeout */
-@@ -2580,6 +2641,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- }
- vport->fc_ns_retry = 0;
+ session = transport->create_session(transport, &priv->t,
+@@ -821,10 +858,6 @@ iscsi_if_create_session(struct iscsi_internal *priv, struct iscsi_uevent *ev)
+ if (!session)
+ return -ENOMEM;
-+restart_disc:
- /*
- * Discovery is over.
- * set port_state to PORT_READY if SLI2.
-@@ -2608,8 +2670,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- initlinkmbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
- initlinkmbox->vport = vport;
- initlinkmbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-- rc = lpfc_sli_issue_mbox(phba, initlinkmbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, initlinkmbox, MBX_NOWAIT);
- lpfc_set_loopback_flag(phba);
- if (rc == MBX_NOT_FINISHED)
- mempool_free(initlinkmbox, phba->mbox_mem_pool);
-@@ -2664,12 +2725,14 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
- clrlaerr = 1;
- break;
+- spin_lock_irqsave(&sesslock, flags);
+- list_add(&session->sess_list, &sesslist);
+- spin_unlock_irqrestore(&sesslock, flags);
+-
+ ev->r.c_session_ret.host_no = hostno;
+ ev->r.c_session_ret.sid = session->sid;
+ return 0;
+@@ -835,7 +868,6 @@ iscsi_if_create_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ {
+ struct iscsi_cls_conn *conn;
+ struct iscsi_cls_session *session;
+- unsigned long flags;
-+ case LPFC_LINK_UP:
-+ lpfc_issue_clear_la(phba, vport);
-+ /* Drop thru */
- case LPFC_LINK_UNKNOWN:
- case LPFC_WARM_START:
- case LPFC_INIT_START:
- case LPFC_INIT_MBX_CMDS:
- case LPFC_LINK_DOWN:
-- case LPFC_LINK_UP:
- case LPFC_HBA_ERROR:
- lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
- "0230 Unexpected timeout, hba link "
-@@ -2723,7 +2786,9 @@ lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
- else
- mod_timer(&vport->fc_fdmitmo, jiffies + HZ * 60);
+ session = iscsi_session_lookup(ev->u.c_conn.sid);
+ if (!session) {
+@@ -854,28 +886,17 @@ iscsi_if_create_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
-- /* Mailbox took a reference to the node */
-+ /* decrement the node reference count held for this callback
-+ * function.
-+ */
- lpfc_nlp_put(ndlp);
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
-@@ -2747,19 +2812,19 @@ lpfc_filter_by_wwpn(struct lpfc_nodelist *ndlp, void *param)
- sizeof(ndlp->nlp_portname)) == 0;
+ ev->r.c_conn_ret.sid = session->sid;
+ ev->r.c_conn_ret.cid = conn->cid;
+-
+- spin_lock_irqsave(&connlock, flags);
+- list_add(&conn->conn_list, &connlist);
+- conn->active = 1;
+- spin_unlock_irqrestore(&connlock, flags);
+-
+ return 0;
}
--struct lpfc_nodelist *
-+static struct lpfc_nodelist *
- __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
+ static int
+ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
{
- struct lpfc_nodelist *ndlp;
+- unsigned long flags;
+ struct iscsi_cls_conn *conn;
- list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
-- if (ndlp->nlp_state != NLP_STE_UNUSED_NODE &&
-- filter(ndlp, param))
-+ if (filter(ndlp, param))
- return ndlp;
- }
- return NULL;
- }
+ conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
+ if (!conn)
+ return -EINVAL;
+- spin_lock_irqsave(&connlock, flags);
+- conn->active = 0;
+- list_del(&conn->conn_list);
+- spin_unlock_irqrestore(&connlock, flags);
-+#if 0
- /*
- * Search node lists for a remote port matching filter criteria
- * Caller needs to hold host_lock before calling this routine.
-@@ -2775,6 +2840,7 @@ lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
- spin_unlock_irq(shost->host_lock);
- return ndlp;
- }
-+#endif /* 0 */
+ if (transport->destroy_conn)
+ transport->destroy_conn(conn);
+@@ -1002,7 +1023,6 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ struct iscsi_internal *priv;
+ struct iscsi_cls_session *session;
+ struct iscsi_cls_conn *conn;
+- unsigned long flags;
- /*
- * This routine looks up the ndlp lists for the given RPI. If rpi found it
-@@ -2786,6 +2852,7 @@ __lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
- return __lpfc_find_node(vport, lpfc_filter_by_rpi, &rpi);
- }
+ priv = iscsi_if_transport_lookup(iscsi_ptr(ev->transport_handle));
+ if (!priv)
+@@ -1020,13 +1040,16 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ break;
+ case ISCSI_UEVENT_DESTROY_SESSION:
+ session = iscsi_session_lookup(ev->u.d_session.sid);
+- if (session) {
+- spin_lock_irqsave(&sesslock, flags);
+- list_del(&session->sess_list);
+- spin_unlock_irqrestore(&sesslock, flags);
+-
++ if (session)
+ transport->destroy_session(session);
+- } else
++ else
++ err = -EINVAL;
++ break;
++ case ISCSI_UEVENT_UNBIND_SESSION:
++ session = iscsi_session_lookup(ev->u.d_session.sid);
++ if (session)
++ iscsi_unbind_session(session);
++ else
+ err = -EINVAL;
+ break;
+ case ISCSI_UEVENT_CREATE_CONN:
+@@ -1179,6 +1202,8 @@ iscsi_conn_attr(port, ISCSI_PARAM_CONN_PORT);
+ iscsi_conn_attr(exp_statsn, ISCSI_PARAM_EXP_STATSN);
+ iscsi_conn_attr(persistent_address, ISCSI_PARAM_PERSISTENT_ADDRESS);
+ iscsi_conn_attr(address, ISCSI_PARAM_CONN_ADDRESS);
++iscsi_conn_attr(ping_tmo, ISCSI_PARAM_PING_TMO);
++iscsi_conn_attr(recv_tmo, ISCSI_PARAM_RECV_TMO);
-+#if 0
- struct lpfc_nodelist *
- lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
- {
-@@ -2797,6 +2864,7 @@ lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
- spin_unlock_irq(shost->host_lock);
- return ndlp;
- }
-+#endif /* 0 */
+ #define iscsi_cdev_to_session(_cdev) \
+ iscsi_dev_to_session(_cdev->dev)
+@@ -1217,6 +1242,9 @@ iscsi_session_attr(username, ISCSI_PARAM_USERNAME, 1);
+ iscsi_session_attr(username_in, ISCSI_PARAM_USERNAME_IN, 1);
+ iscsi_session_attr(password, ISCSI_PARAM_PASSWORD, 1);
+ iscsi_session_attr(password_in, ISCSI_PARAM_PASSWORD_IN, 1);
++iscsi_session_attr(fast_abort, ISCSI_PARAM_FAST_ABORT, 0);
++iscsi_session_attr(abort_tmo, ISCSI_PARAM_ABORT_TMO, 0);
++iscsi_session_attr(lu_reset_tmo, ISCSI_PARAM_LU_RESET_TMO, 0);
- /*
- * This routine looks up the ndlp lists for the given WWPN. If WWPN found it
-@@ -2837,6 +2905,9 @@ lpfc_nlp_init(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- return;
- }
+ #define iscsi_priv_session_attr_show(field, format) \
+ static ssize_t \
+@@ -1413,6 +1441,8 @@ iscsi_register_transport(struct iscsi_transport *tt)
+ SETUP_CONN_RD_ATTR(exp_statsn, ISCSI_EXP_STATSN);
+ SETUP_CONN_RD_ATTR(persistent_address, ISCSI_PERSISTENT_ADDRESS);
+ SETUP_CONN_RD_ATTR(persistent_port, ISCSI_PERSISTENT_PORT);
++ SETUP_CONN_RD_ATTR(ping_tmo, ISCSI_PING_TMO);
++ SETUP_CONN_RD_ATTR(recv_tmo, ISCSI_RECV_TMO);
-+/* This routine releases all resources associated with a specific NPort's ndlp
-+ * and mempool_free's the nodelist.
-+ */
- static void
- lpfc_nlp_release(struct kref *kref)
- {
-@@ -2851,16 +2922,57 @@ lpfc_nlp_release(struct kref *kref)
- mempool_free(ndlp, ndlp->vport->phba->nlp_mem_pool);
- }
+ BUG_ON(count > ISCSI_CONN_ATTRS);
+ priv->conn_attrs[count] = NULL;
+@@ -1438,6 +1468,9 @@ iscsi_register_transport(struct iscsi_transport *tt)
+ SETUP_SESSION_RD_ATTR(password_in, ISCSI_USERNAME_IN);
+ SETUP_SESSION_RD_ATTR(username, ISCSI_PASSWORD);
+ SETUP_SESSION_RD_ATTR(username_in, ISCSI_PASSWORD_IN);
++ SETUP_SESSION_RD_ATTR(fast_abort, ISCSI_FAST_ABORT);
++ SETUP_SESSION_RD_ATTR(abort_tmo, ISCSI_ABORT_TMO);
++ SETUP_SESSION_RD_ATTR(lu_reset_tmo,ISCSI_LU_RESET_TMO);
+ SETUP_PRIV_SESSION_RD_ATTR(recovery_tmo);
-+/* This routine bumps the reference count for a ndlp structure to ensure
-+ * that one discovery thread won't free a ndlp while another discovery thread
-+ * is using it.
-+ */
- struct lpfc_nodelist *
- lpfc_nlp_get(struct lpfc_nodelist *ndlp)
- {
-- if (ndlp)
-+ if (ndlp) {
-+ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
-+ "node get: did:x%x flg:x%x refcnt:x%x",
-+ ndlp->nlp_DID, ndlp->nlp_flag,
-+ atomic_read(&ndlp->kref.refcount));
- kref_get(&ndlp->kref);
-+ }
- return ndlp;
- }
+ BUG_ON(count > ISCSI_SESSION_ATTRS);
+@@ -1518,8 +1551,14 @@ static __init int iscsi_transport_init(void)
+ goto unregister_session_class;
+ }
++ iscsi_eh_timer_workq = create_singlethread_workqueue("iscsi_eh");
++ if (!iscsi_eh_timer_workq)
++ goto release_nls;
+
-+/* This routine decrements the reference count for a ndlp structure. If the
-+ * count goes to 0, this indicates that the associated nodelist should be freed.
-+ */
- int
- lpfc_nlp_put(struct lpfc_nodelist *ndlp)
- {
-+ if (ndlp) {
-+ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
-+ "node put: did:x%x flg:x%x refcnt:x%x",
-+ ndlp->nlp_DID, ndlp->nlp_flag,
-+ atomic_read(&ndlp->kref.refcount));
-+ }
- return ndlp ? kref_put(&ndlp->kref, lpfc_nlp_release) : 0;
- }
-+
-+/* This routine frees the specified nodelist if it is not in use
-+ * by any other discovery thread. This routine returns 1 if the ndlp
-+ * is not being used by anyone and has been freed. A return value of
-+ * 0 indicates it is being used by another discovery thread and the
-+ * refcount is left unchanged.
-+ */
-+int
-+lpfc_nlp_not_used(struct lpfc_nodelist *ndlp)
-+{
-+ lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
-+ "node not used: did:x%x flg:x%x refcnt:x%x",
-+ ndlp->nlp_DID, ndlp->nlp_flag,
-+ atomic_read(&ndlp->kref.refcount));
-+
-+ if (atomic_read(&ndlp->kref.refcount) == 1) {
-+ lpfc_nlp_put(ndlp);
-+ return 1;
-+ }
-+ return 0;
-+}
-+
-diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
-index 451accd..041f83e 100644
---- a/drivers/scsi/lpfc/lpfc_hw.h
-+++ b/drivers/scsi/lpfc/lpfc_hw.h
-@@ -139,6 +139,9 @@ struct lpfc_sli_ct_request {
- uint8_t len;
- uint8_t symbname[255];
- } rsnn;
-+ struct da_id { /* For DA_ID requests */
-+ uint32_t port_id;
-+ } da_id;
- struct rspn { /* For RSPN_ID requests */
- uint32_t PortId;
- uint8_t len;
-@@ -150,11 +153,7 @@ struct lpfc_sli_ct_request {
- struct gff_acc {
- uint8_t fbits[128];
- } gff_acc;
--#ifdef __BIG_ENDIAN_BITFIELD
- #define FCP_TYPE_FEATURE_OFFSET 7
--#else /* __LITTLE_ENDIAN_BITFIELD */
--#define FCP_TYPE_FEATURE_OFFSET 4
--#endif
- struct rff {
- uint32_t PortId;
- uint8_t reserved[2];
-@@ -177,6 +176,8 @@ struct lpfc_sli_ct_request {
- sizeof(struct rnn))
- #define RSNN_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
- sizeof(struct rsnn))
-+#define DA_ID_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
-+ sizeof(struct da_id))
- #define RSPN_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
- sizeof(struct rspn))
+ return 0;
-@@ -1228,7 +1229,8 @@ typedef struct { /* FireFly BIU registers */
- #define HS_FFER3 0x20000000 /* Bit 29 */
- #define HS_FFER2 0x40000000 /* Bit 30 */
- #define HS_FFER1 0x80000000 /* Bit 31 */
--#define HS_FFERM 0xFF000000 /* Mask for error bits 31:24 */
-+#define HS_CRIT_TEMP 0x00000100 /* Bit 8 */
-+#define HS_FFERM 0xFF000100 /* Mask for error bits 31:24 and 8 */
++release_nls:
++ netlink_kernel_release(nls);
+ unregister_session_class:
+ transport_class_unregister(&iscsi_session_class);
+ unregister_conn_class:
+@@ -1533,7 +1572,8 @@ unregister_transport_class:
- /* Host Control Register */
+ static void __exit iscsi_transport_exit(void)
+ {
+- sock_release(nls->sk_socket);
++ destroy_workqueue(iscsi_eh_timer_workq);
++ netlink_kernel_release(nls);
+ transport_class_unregister(&iscsi_connection_class);
+ transport_class_unregister(&iscsi_session_class);
+ transport_class_unregister(&iscsi_host_class);
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index 3120f4b..f2149d0 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -173,6 +173,7 @@ static void sas_smp_request(struct request_queue *q, struct Scsi_Host *shost,
-@@ -1277,12 +1279,14 @@ typedef struct { /* FireFly BIU registers */
- #define MBX_DEL_LD_ENTRY 0x1D
- #define MBX_RUN_PROGRAM 0x1E
- #define MBX_SET_MASK 0x20
--#define MBX_SET_SLIM 0x21
-+#define MBX_SET_VARIABLE 0x21
- #define MBX_UNREG_D_ID 0x23
- #define MBX_KILL_BOARD 0x24
- #define MBX_CONFIG_FARP 0x25
- #define MBX_BEACON 0x2A
- #define MBX_HEARTBEAT 0x31
-+#define MBX_WRITE_VPARMS 0x32
-+#define MBX_ASYNCEVT_ENABLE 0x33
+ handler = to_sas_internal(shost->transportt)->f->smp_handler;
+ ret = handler(shost, rphy, req);
++ req->errors = ret;
- #define MBX_CONFIG_HBQ 0x7C
- #define MBX_LOAD_AREA 0x81
-@@ -1297,7 +1301,7 @@ typedef struct { /* FireFly BIU registers */
- #define MBX_REG_VNPID 0x96
- #define MBX_UNREG_VNPID 0x97
+ spin_lock_irq(q->queue_lock);
--#define MBX_FLASH_WR_ULA 0x98
-+#define MBX_WRITE_WWN 0x98
- #define MBX_SET_DEBUG 0x99
- #define MBX_LOAD_EXP_ROM 0x9C
+@@ -323,7 +324,7 @@ static int do_sas_phy_delete(struct device *dev, void *data)
+ }
-@@ -1344,6 +1348,7 @@ typedef struct { /* FireFly BIU registers */
+ /**
+- * sas_remove_children -- tear down a devices SAS data structures
++ * sas_remove_children - tear down a devices SAS data structures
+ * @dev: device belonging to the sas object
+ *
+ * Removes all SAS PHYs and remote PHYs for a given object
+@@ -336,7 +337,7 @@ void sas_remove_children(struct device *dev)
+ EXPORT_SYMBOL(sas_remove_children);
- /* SLI_2 IOCB Command Set */
+ /**
+- * sas_remove_host -- tear down a Scsi_Host's SAS data structures
++ * sas_remove_host - tear down a Scsi_Host's SAS data structures
+ * @shost: Scsi Host that is torn down
+ *
+ * Removes all SAS PHYs and remote PHYs for a given Scsi_Host.
+@@ -577,7 +578,7 @@ static void sas_phy_release(struct device *dev)
+ }
-+#define CMD_ASYNC_STATUS 0x7C
- #define CMD_RCV_SEQUENCE64_CX 0x81
- #define CMD_XMIT_SEQUENCE64_CR 0x82
- #define CMD_XMIT_SEQUENCE64_CX 0x83
-@@ -1368,6 +1373,7 @@ typedef struct { /* FireFly BIU registers */
- #define CMD_FCP_TRECEIVE64_CX 0xA1
- #define CMD_FCP_TRSP64_CX 0xA3
+ /**
+- * sas_phy_alloc -- allocates and initialize a SAS PHY structure
++ * sas_phy_alloc - allocates and initialize a SAS PHY structure
+ * @parent: Parent device
+ * @number: Phy index
+ *
+@@ -618,7 +619,7 @@ struct sas_phy *sas_phy_alloc(struct device *parent, int number)
+ EXPORT_SYMBOL(sas_phy_alloc);
-+#define CMD_QUE_XRI64_CX 0xB3
- #define CMD_IOCB_RCV_SEQ64_CX 0xB5
- #define CMD_IOCB_RCV_ELS64_CX 0xB7
- #define CMD_IOCB_RCV_CONT64_CX 0xBB
-@@ -1406,6 +1412,8 @@ typedef struct { /* FireFly BIU registers */
- #define MBX_BUSY 0xffffff /* Attempted cmd to busy Mailbox */
- #define MBX_TIMEOUT 0xfffffe /* time-out expired waiting for */
+ /**
+- * sas_phy_add -- add a SAS PHY to the device hierarchy
++ * sas_phy_add - add a SAS PHY to the device hierarchy
+ * @phy: The PHY to be added
+ *
+ * Publishes a SAS PHY to the rest of the system.
+@@ -638,7 +639,7 @@ int sas_phy_add(struct sas_phy *phy)
+ EXPORT_SYMBOL(sas_phy_add);
-+#define TEMPERATURE_OFFSET 0xB0 /* Slim offset for critical temperature event */
-+
- /*
- * Begin Structure Definitions for Mailbox Commands
- */
-@@ -2606,6 +2614,18 @@ typedef struct {
- uint32_t IPAddress;
- } CONFIG_FARP_VAR;
+ /**
+- * sas_phy_free -- free a SAS PHY
++ * sas_phy_free - free a SAS PHY
+ * @phy: SAS PHY to free
+ *
+ * Frees the specified SAS PHY.
+@@ -655,7 +656,7 @@ void sas_phy_free(struct sas_phy *phy)
+ EXPORT_SYMBOL(sas_phy_free);
-+/* Structure for MB Command MBX_ASYNCEVT_ENABLE (0x33) */
-+
-+typedef struct {
-+#ifdef __BIG_ENDIAN_BITFIELD
-+ uint32_t rsvd:30;
-+ uint32_t ring:2; /* Ring for ASYNC_EVENT iocb Bits 0-1*/
-+#else /* __LITTLE_ENDIAN */
-+ uint32_t ring:2; /* Ring for ASYNC_EVENT iocb Bits 0-1*/
-+ uint32_t rsvd:30;
-+#endif
-+} ASYNCEVT_ENABLE_VAR;
-+
- /* Union of all Mailbox Command types */
- #define MAILBOX_CMD_WSIZE 32
- #define MAILBOX_CMD_SIZE (MAILBOX_CMD_WSIZE * sizeof(uint32_t))
-@@ -2645,6 +2665,7 @@ typedef union {
- CONFIG_PORT_VAR varCfgPort; /* cmd = 0x88 (CONFIG_PORT) */
- REG_VPI_VAR varRegVpi; /* cmd = 0x96 (REG_VPI) */
- UNREG_VPI_VAR varUnregVpi; /* cmd = 0x97 (UNREG_VPI) */
-+ ASYNCEVT_ENABLE_VAR varCfgAsyncEvent; /*cmd = x33 (CONFIG_ASYNC) */
- } MAILVARIANTS;
+ /**
+- * sas_phy_delete -- remove SAS PHY
++ * sas_phy_delete - remove SAS PHY
+ * @phy: SAS PHY to remove
+ *
+ * Removes the specified SAS PHY. If the SAS PHY has an
+@@ -677,7 +678,7 @@ sas_phy_delete(struct sas_phy *phy)
+ EXPORT_SYMBOL(sas_phy_delete);
- /*
-@@ -2973,6 +2994,34 @@ typedef struct {
- #endif
- } RCV_ELS_REQ64;
+ /**
+- * scsi_is_sas_phy -- check if a struct device represents a SAS PHY
++ * scsi_is_sas_phy - check if a struct device represents a SAS PHY
+ * @dev: device to check
+ *
+ * Returns:
+@@ -843,7 +844,6 @@ EXPORT_SYMBOL(sas_port_alloc_num);
-+/* IOCB Command template for RCV_SEQ64 */
-+struct rcv_seq64 {
-+ struct ulp_bde64 elsReq;
-+ uint32_t hbq_1;
-+ uint32_t parmRo;
-+#ifdef __BIG_ENDIAN_BITFIELD
-+ uint32_t rctl:8;
-+ uint32_t type:8;
-+ uint32_t dfctl:8;
-+ uint32_t ls:1;
-+ uint32_t fs:1;
-+ uint32_t rsvd2:3;
-+ uint32_t si:1;
-+ uint32_t bc:1;
-+ uint32_t rsvd3:1;
-+#else /* __LITTLE_ENDIAN_BITFIELD */
-+ uint32_t rsvd3:1;
-+ uint32_t bc:1;
-+ uint32_t si:1;
-+ uint32_t rsvd2:3;
-+ uint32_t fs:1;
-+ uint32_t ls:1;
-+ uint32_t dfctl:8;
-+ uint32_t type:8;
-+ uint32_t rctl:8;
-+#endif
-+};
-+
- /* IOCB Command template for all 64 bit FCP Initiator commands */
- typedef struct {
- ULP_BDL bdl;
-@@ -2987,6 +3036,21 @@ typedef struct {
- uint32_t fcpt_Length; /* transfer ready for IWRITE */
- } FCPT_FIELDS64;
+ /**
+ * sas_port_add - add a SAS port to the device hierarchy
+- *
+ * @port: port to be added
+ *
+ * publishes a port to the rest of the system
+@@ -868,7 +868,7 @@ int sas_port_add(struct sas_port *port)
+ EXPORT_SYMBOL(sas_port_add);
-+/* IOCB Command template for Async Status iocb commands */
-+typedef struct {
-+ uint32_t rsvd[4];
-+ uint32_t param;
-+#ifdef __BIG_ENDIAN_BITFIELD
-+ uint16_t evt_code; /* High order bits word 5 */
-+ uint16_t sub_ctxt_tag; /* Low order bits word 5 */
-+#else /* __LITTLE_ENDIAN_BITFIELD */
-+ uint16_t sub_ctxt_tag; /* High order bits word 5 */
-+ uint16_t evt_code; /* Low order bits word 5 */
-+#endif
-+} ASYNCSTAT_FIELDS;
-+#define ASYNC_TEMP_WARN 0x100
-+#define ASYNC_TEMP_SAFE 0x101
-+
- /* IOCB Command template for CMD_IOCB_RCV_ELS64_CX (0xB7)
- or CMD_IOCB_RCV_SEQ64_CX (0xB5) */
+ /**
+- * sas_port_free -- free a SAS PORT
++ * sas_port_free - free a SAS PORT
+ * @port: SAS PORT to free
+ *
+ * Frees the specified SAS PORT.
+@@ -885,7 +885,7 @@ void sas_port_free(struct sas_port *port)
+ EXPORT_SYMBOL(sas_port_free);
-@@ -3004,7 +3068,26 @@ struct rcv_sli3 {
- struct ulp_bde64 bde2;
- };
+ /**
+- * sas_port_delete -- remove SAS PORT
++ * sas_port_delete - remove SAS PORT
+ * @port: SAS PORT to remove
+ *
+ * Removes the specified SAS PORT. If the SAS PORT has an
+@@ -924,7 +924,7 @@ void sas_port_delete(struct sas_port *port)
+ EXPORT_SYMBOL(sas_port_delete);
-+/* Structure used for a single HBQ entry */
-+struct lpfc_hbq_entry {
-+ struct ulp_bde64 bde;
-+ uint32_t buffer_tag;
-+};
+ /**
+- * scsi_is_sas_port -- check if a struct device represents a SAS port
++ * scsi_is_sas_port - check if a struct device represents a SAS port
+ * @dev: device to check
+ *
+ * Returns:
+@@ -1309,6 +1309,7 @@ static void sas_rphy_initialize(struct sas_rphy *rphy)
-+/* IOCB Command template for QUE_XRI64_CX (0xB3) command */
-+typedef struct {
-+ struct lpfc_hbq_entry buff;
-+ uint32_t rsvd;
-+ uint32_t rsvd1;
-+} QUE_XRI64_CX_FIELDS;
-+
-+struct que_xri64cx_ext_fields {
-+ uint32_t iotag64_low;
-+ uint32_t iotag64_high;
-+ uint32_t ebde_count;
-+ uint32_t rsvd;
-+ struct lpfc_hbq_entry buff[5];
-+};
+ /**
+ * sas_end_device_alloc - allocate an rphy for an end device
++ * @parent: which port
+ *
+ * Allocates an SAS remote PHY structure, connected to @parent.
+ *
+@@ -1345,6 +1346,8 @@ EXPORT_SYMBOL(sas_end_device_alloc);
- typedef struct _IOCB { /* IOCB structure */
- union {
-@@ -3028,6 +3111,9 @@ typedef struct _IOCB { /* IOCB structure */
- XMT_SEQ_FIELDS64 xseq64; /* XMIT / BCAST cmd */
- FCPI_FIELDS64 fcpi64; /* FCP 64 bit Initiator template */
- FCPT_FIELDS64 fcpt64; /* FCP 64 bit target template */
-+ ASYNCSTAT_FIELDS asyncstat; /* async_status iocb */
-+ QUE_XRI64_CX_FIELDS quexri64cx; /* que_xri64_cx fields */
-+ struct rcv_seq64 rcvseq64; /* RCV_SEQ64 and RCV_CONT64 */
+ /**
+ * sas_expander_alloc - allocate an rphy for an end device
++ * @parent: which port
++ * @type: SAS_EDGE_EXPANDER_DEVICE or SAS_FANOUT_EXPANDER_DEVICE
+ *
+ * Allocates an SAS remote PHY structure, connected to @parent.
+ *
+@@ -1383,7 +1386,7 @@ struct sas_rphy *sas_expander_alloc(struct sas_port *parent,
+ EXPORT_SYMBOL(sas_expander_alloc);
- uint32_t ulpWord[IOCB_WORD_SZ - 2]; /* generic 6 'words' */
- } un;
-@@ -3085,6 +3171,10 @@ typedef struct _IOCB { /* IOCB structure */
+ /**
+- * sas_rphy_add -- add a SAS remote PHY to the device hierarchy
++ * sas_rphy_add - add a SAS remote PHY to the device hierarchy
+ * @rphy: The remote PHY to be added
+ *
+ * Publishes a SAS remote PHY to the rest of the system.
+@@ -1430,8 +1433,8 @@ int sas_rphy_add(struct sas_rphy *rphy)
+ EXPORT_SYMBOL(sas_rphy_add);
- union {
- struct rcv_sli3 rcvsli3; /* words 8 - 15 */
-+
-+ /* words 8-31 used for que_xri_cx iocb */
-+ struct que_xri64cx_ext_fields que_xri64cx_ext_words;
-+
- uint32_t sli3Words[24]; /* 96 extra bytes for SLI-3 */
- } unsli3;
+ /**
+- * sas_rphy_free -- free a SAS remote PHY
+- * @rphy SAS remote PHY to free
++ * sas_rphy_free - free a SAS remote PHY
++ * @rphy: SAS remote PHY to free
+ *
+ * Frees the specified SAS remote PHY.
+ *
+@@ -1459,7 +1462,7 @@ void sas_rphy_free(struct sas_rphy *rphy)
+ EXPORT_SYMBOL(sas_rphy_free);
-@@ -3124,12 +3214,6 @@ typedef struct _IOCB { /* IOCB structure */
+ /**
+- * sas_rphy_delete -- remove and free SAS remote PHY
++ * sas_rphy_delete - remove and free SAS remote PHY
+ * @rphy: SAS remote PHY to remove and free
+ *
+ * Removes the specified SAS remote PHY and frees it.
+@@ -1473,7 +1476,7 @@ sas_rphy_delete(struct sas_rphy *rphy)
+ EXPORT_SYMBOL(sas_rphy_delete);
- } IOCB_t;
+ /**
+- * sas_rphy_remove -- remove SAS remote PHY
++ * sas_rphy_remove - remove SAS remote PHY
+ * @rphy: SAS remote phy to remove
+ *
+ * Removes the specified SAS remote PHY.
+@@ -1504,7 +1507,7 @@ sas_rphy_remove(struct sas_rphy *rphy)
+ EXPORT_SYMBOL(sas_rphy_remove);
--/* Structure used for a single HBQ entry */
--struct lpfc_hbq_entry {
-- struct ulp_bde64 bde;
-- uint32_t buffer_tag;
--};
--
+ /**
+- * scsi_is_sas_rphy -- check if a struct device represents a SAS remote PHY
++ * scsi_is_sas_rphy - check if a struct device represents a SAS remote PHY
+ * @dev: device to check
+ *
+ * Returns:
+@@ -1604,7 +1607,7 @@ static int sas_user_scan(struct Scsi_Host *shost, uint channel,
+ SETUP_TEMPLATE(expander_attrs, expander_##field, S_IRUGO, 1)
- #define SLI1_SLIM_SIZE (4 * 1024)
+ /**
+- * sas_attach_transport -- instantiate SAS transport template
++ * sas_attach_transport - instantiate SAS transport template
+ * @ft: SAS transport class function template
+ */
+ struct scsi_transport_template *
+@@ -1715,7 +1718,7 @@ sas_attach_transport(struct sas_function_template *ft)
+ EXPORT_SYMBOL(sas_attach_transport);
-@@ -3172,6 +3256,8 @@ lpfc_is_LC_HBA(unsigned short device)
- (device == PCI_DEVICE_ID_BSMB) ||
- (device == PCI_DEVICE_ID_ZMID) ||
- (device == PCI_DEVICE_ID_ZSMB) ||
-+ (device == PCI_DEVICE_ID_SAT_MID) ||
-+ (device == PCI_DEVICE_ID_SAT_SMB) ||
- (device == PCI_DEVICE_ID_RFLY))
- return 1;
- else
-diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
-index ecebdfa..3205f74 100644
---- a/drivers/scsi/lpfc/lpfc_init.c
-+++ b/drivers/scsi/lpfc/lpfc_init.c
-@@ -212,6 +212,18 @@ out_free_mbox:
+ /**
+- * sas_release_transport -- release SAS transport template instance
++ * sas_release_transport - release SAS transport template instance
+ * @t: transport template instance
+ */
+ void sas_release_transport(struct scsi_transport_template *t)
+diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
+index 4df21c9..1fb6031 100644
+--- a/drivers/scsi/scsi_transport_spi.c
++++ b/drivers/scsi/scsi_transport_spi.c
+@@ -52,13 +52,6 @@
+ struct spi_internal {
+ struct scsi_transport_template t;
+ struct spi_function_template *f;
+- /* The actual attributes */
+- struct class_device_attribute private_attrs[SPI_NUM_ATTRS];
+- /* The array of null terminated pointers to attributes
+- * needed by scsi_sysfs.c */
+- struct class_device_attribute *attrs[SPI_NUM_ATTRS + SPI_OTHER_ATTRS + 1];
+- struct class_device_attribute private_host_attrs[SPI_HOST_ATTRS];
+- struct class_device_attribute *host_attrs[SPI_HOST_ATTRS + 1];
+ };
+
+ #define to_spi_internal(tmpl) container_of(tmpl, struct spi_internal, t)
+@@ -174,17 +167,20 @@ static int spi_host_setup(struct transport_container *tc, struct device *dev,
return 0;
}
-+/* Completion handler for config async event mailbox command. */
-+static void
-+lpfc_config_async_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
-+{
-+ if (pmboxq->mb.mbxStatus == MBX_SUCCESS)
-+ phba->temp_sensor_support = 1;
-+ else
-+ phba->temp_sensor_support = 0;
-+ mempool_free(pmboxq, phba->mbox_mem_pool);
-+ return;
-+}
-+
- /************************************************************************/
- /* */
- /* lpfc_config_port_post */
-@@ -234,6 +246,15 @@ lpfc_config_port_post(struct lpfc_hba *phba)
- int i, j;
- int rc;
-
-+ spin_lock_irq(&phba->hbalock);
-+ /*
-+ * If the Config port completed correctly the HBA is not
-+ * over heated any more.
-+ */
-+ if (phba->over_temp_state == HBA_OVER_TEMP)
-+ phba->over_temp_state = HBA_NORMAL_TEMP;
-+ spin_unlock_irq(&phba->hbalock);
++static int spi_host_configure(struct transport_container *tc,
++ struct device *dev,
++ struct class_device *cdev);
+
- pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
- if (!pmb) {
- phba->link_state = LPFC_HBA_ERROR;
-@@ -343,7 +364,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
+ static DECLARE_TRANSPORT_CLASS(spi_host_class,
+ "spi_host",
+ spi_host_setup,
+ NULL,
+- NULL);
++ spi_host_configure);
- phba->link_state = LPFC_LINK_DOWN;
+ static int spi_host_match(struct attribute_container *cont,
+ struct device *dev)
+ {
+ struct Scsi_Host *shost;
+- struct spi_internal *i;
-- /* Only process IOCBs on ring 0 till hba_state is READY */
-+ /* Only process IOCBs on ELS ring till hba_state is READY */
- if (psli->ring[psli->extra_ring].cmdringaddr)
- psli->ring[psli->extra_ring].flag |= LPFC_STOP_IOCB_EVENT;
- if (psli->ring[psli->fcp_ring].cmdringaddr)
-@@ -409,7 +430,21 @@ lpfc_config_port_post(struct lpfc_hba *phba)
- return -EIO;
- }
- /* MBOX buffer will be freed in mbox compl */
-+ pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-+ lpfc_config_async(phba, pmb, LPFC_ELS_RING);
-+ pmb->mbox_cmpl = lpfc_config_async_cmpl;
-+ pmb->vport = phba->pport;
-+ rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
+ if (!scsi_is_host_device(dev))
+ return 0;
+@@ -194,11 +190,13 @@ static int spi_host_match(struct attribute_container *cont,
+ != &spi_host_class.class)
+ return 0;
-+ if ((rc != MBX_BUSY) && (rc != MBX_SUCCESS)) {
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_INIT,
-+ "0456 Adapter failed to issue "
-+ "ASYNCEVT_ENABLE mbox status x%x \n.",
-+ rc);
-+ mempool_free(pmb, phba->mbox_mem_pool);
-+ }
- return (0);
+- i = to_spi_internal(shost->transportt);
+-
+- return &i->t.host_attrs.ac == cont;
++ return &shost->transportt->host_attrs.ac == cont;
}
-@@ -449,6 +484,9 @@ lpfc_hba_down_post(struct lpfc_hba *phba)
- struct lpfc_sli *psli = &phba->sli;
- struct lpfc_sli_ring *pring;
- struct lpfc_dmabuf *mp, *next_mp;
-+ struct lpfc_iocbq *iocb;
-+ IOCB_t *cmd = NULL;
-+ LIST_HEAD(completions);
- int i;
-
- if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED)
-@@ -464,16 +502,42 @@ lpfc_hba_down_post(struct lpfc_hba *phba)
- }
- }
-
-+ spin_lock_irq(&phba->hbalock);
- for (i = 0; i < psli->num_rings; i++) {
- pring = &psli->ring[i];
-+
-+ /* At this point in time the HBA is either reset or DOA. Either
-+ * way, nothing should be on txcmplq as it will NEVER complete.
-+ */
-+ list_splice_init(&pring->txcmplq, &completions);
-+ pring->txcmplq_cnt = 0;
-+ spin_unlock_irq(&phba->hbalock);
-+
-+ while (!list_empty(&completions)) {
-+ iocb = list_get_first(&completions, struct lpfc_iocbq,
-+ list);
-+ cmd = &iocb->iocb;
-+ list_del_init(&iocb->list);
-+
-+ if (!iocb->iocb_cmpl)
-+ lpfc_sli_release_iocbq(phba, iocb);
-+ else {
-+ cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-+ cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-+ (iocb->iocb_cmpl) (phba, iocb, iocb);
-+ }
-+ }
++static int spi_target_configure(struct transport_container *tc,
++ struct device *dev,
++ struct class_device *cdev);
+
- lpfc_sli_abort_iocb_ring(phba, pring);
-+ spin_lock_irq(&phba->hbalock);
- }
-+ spin_unlock_irq(&phba->hbalock);
-
- return 0;
+ static int spi_device_configure(struct transport_container *tc,
+ struct device *dev,
+ struct class_device *cdev)
+@@ -300,8 +298,10 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
+ struct spi_internal *i = to_spi_internal(shost->transportt); \
+ \
++ if (!i->f->set_##field) \
++ return -EINVAL; \
+ val = simple_strtoul(buf, NULL, 0); \
+- i->f->set_##field(starget, val); \
++ i->f->set_##field(starget, val); \
+ return count; \
}
- /* HBA heart beat timeout handler */
--void
-+static void
- lpfc_hb_timeout(unsigned long ptr)
- {
- struct lpfc_hba *phba;
-@@ -512,8 +576,10 @@ void
- lpfc_hb_timeout_handler(struct lpfc_hba *phba)
- {
- LPFC_MBOXQ_t *pmboxq;
-+ struct lpfc_dmabuf *buf_ptr;
- int retval;
- struct lpfc_sli *psli = &phba->sli;
-+ LIST_HEAD(completions);
+@@ -317,6 +317,8 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
+ struct spi_transport_attrs *tp \
+ = (struct spi_transport_attrs *)&starget->starget_data; \
+ \
++ if (i->f->set_##field) \
++ return -EINVAL; \
+ val = simple_strtoul(buf, NULL, 0); \
+ if (val > tp->max_##field) \
+ val = tp->max_##field; \
+@@ -327,14 +329,14 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
+ #define spi_transport_rd_attr(field, format_string) \
+ spi_transport_show_function(field, format_string) \
+ spi_transport_store_function(field, format_string) \
+-static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
++static CLASS_DEVICE_ATTR(field, S_IRUGO, \
+ show_spi_transport_##field, \
+ store_spi_transport_##field);
- if ((phba->link_state == LPFC_HBA_ERROR) ||
- (phba->pport->load_flag & FC_UNLOADING) ||
-@@ -540,49 +606,88 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
- }
- spin_unlock_irq(&phba->pport->work_port_lock);
+ #define spi_transport_simple_attr(field, format_string) \
+ spi_transport_show_simple(field, format_string) \
+ spi_transport_store_simple(field, format_string) \
+-static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
++static CLASS_DEVICE_ATTR(field, S_IRUGO, \
+ show_spi_transport_##field, \
+ store_spi_transport_##field);
-- /* If there is no heart beat outstanding, issue a heartbeat command */
-- if (!phba->hb_outstanding) {
-- pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
-- if (!pmboxq) {
-- mod_timer(&phba->hb_tmofunc,
-- jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
-- return;
-+ if (phba->elsbuf_cnt &&
-+ (phba->elsbuf_cnt == phba->elsbuf_prev_cnt)) {
-+ spin_lock_irq(&phba->hbalock);
-+ list_splice_init(&phba->elsbuf, &completions);
-+ phba->elsbuf_cnt = 0;
-+ phba->elsbuf_prev_cnt = 0;
-+ spin_unlock_irq(&phba->hbalock);
-+
-+ while (!list_empty(&completions)) {
-+ list_remove_head(&completions, buf_ptr,
-+ struct lpfc_dmabuf, list);
-+ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-+ kfree(buf_ptr);
- }
-+ }
-+ phba->elsbuf_prev_cnt = phba->elsbuf_cnt;
+@@ -342,7 +344,7 @@ static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
+ spi_transport_show_function(field, format_string) \
+ spi_transport_store_max(field, format_string) \
+ spi_transport_simple_attr(max_##field, format_string) \
+-static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
++static CLASS_DEVICE_ATTR(field, S_IRUGO, \
+ show_spi_transport_##field, \
+ store_spi_transport_##field);
-- lpfc_heart_beat(phba, pmboxq);
-- pmboxq->mbox_cmpl = lpfc_hb_mbox_cmpl;
-- pmboxq->vport = phba->pport;
-- retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
-+ /* If there is no heart beat outstanding, issue a heartbeat command */
-+ if (phba->cfg_enable_hba_heartbeat) {
-+ if (!phba->hb_outstanding) {
-+ pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
-+ if (!pmboxq) {
-+ mod_timer(&phba->hb_tmofunc,
-+ jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
-+ return;
-+ }
+@@ -472,6 +474,9 @@ store_spi_transport_period(struct class_device *cdev, const char *buf,
+ (struct spi_transport_attrs *)&starget->starget_data;
+ int period, retval;
-- if (retval != MBX_BUSY && retval != MBX_SUCCESS) {
-- mempool_free(pmboxq, phba->mbox_mem_pool);
-+ lpfc_heart_beat(phba, pmboxq);
-+ pmboxq->mbox_cmpl = lpfc_hb_mbox_cmpl;
-+ pmboxq->vport = phba->pport;
-+ retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
-+
-+ if (retval != MBX_BUSY && retval != MBX_SUCCESS) {
-+ mempool_free(pmboxq, phba->mbox_mem_pool);
-+ mod_timer(&phba->hb_tmofunc,
-+ jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
-+ return;
-+ }
- mod_timer(&phba->hb_tmofunc,
-- jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
-+ jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
-+ phba->hb_outstanding = 1;
- return;
-+ } else {
-+ /*
-+ * If heart beat timeout called with hb_outstanding set
-+ * we need to take the HBA offline.
-+ */
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "0459 Adapter heartbeat failure, "
-+ "taking this port offline.\n");
-+
-+ spin_lock_irq(&phba->hbalock);
-+ psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
-+ spin_unlock_irq(&phba->hbalock);
++ if (!i->f->set_period)
++ return -EINVAL;
+
-+ lpfc_offline_prep(phba);
-+ lpfc_offline(phba);
-+ lpfc_unblock_mgmt_io(phba);
-+ phba->link_state = LPFC_HBA_ERROR;
-+ lpfc_hba_down_post(phba);
- }
-- mod_timer(&phba->hb_tmofunc,
-- jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
-- phba->hb_outstanding = 1;
-- return;
-- } else {
-- /*
-- * If heart beat timeout called with hb_outstanding set we
-- * need to take the HBA offline.
-- */
-- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-- "0459 Adapter heartbeat failure, taking "
-- "this port offline.\n");
-+ }
-+}
-
-- spin_lock_irq(&phba->hbalock);
-- psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
-- spin_unlock_irq(&phba->hbalock);
-+static void
-+lpfc_offline_eratt(struct lpfc_hba *phba)
-+{
-+ struct lpfc_sli *psli = &phba->sli;
+ retval = store_spi_transport_period_helper(cdev, buf, count, &period);
-- lpfc_offline_prep(phba);
-- lpfc_offline(phba);
-- lpfc_unblock_mgmt_io(phba);
-- phba->link_state = LPFC_HBA_ERROR;
-- lpfc_hba_down_post(phba);
-- }
-+ spin_lock_irq(&phba->hbalock);
-+ psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
-+ spin_unlock_irq(&phba->hbalock);
-+ lpfc_offline_prep(phba);
-+
-+ lpfc_offline(phba);
-+ lpfc_reset_barrier(phba);
-+ lpfc_sli_brdreset(phba);
-+ lpfc_hba_down_post(phba);
-+ lpfc_sli_brdready(phba, HS_MBRDY);
-+ lpfc_unblock_mgmt_io(phba);
-+ phba->link_state = LPFC_HBA_ERROR;
-+ return;
+ if (period < tp->min_period)
+@@ -482,7 +487,7 @@ store_spi_transport_period(struct class_device *cdev, const char *buf,
+ return retval;
}
- /************************************************************************/
-@@ -601,6 +706,8 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
- struct lpfc_sli_ring *pring;
- struct lpfc_vport **vports;
- uint32_t event_data;
-+ unsigned long temperature;
-+ struct temp_event temp_event_data;
- struct Scsi_Host *shost;
- int i;
+-static CLASS_DEVICE_ATTR(period, S_IRUGO | S_IWUSR,
++static CLASS_DEVICE_ATTR(period, S_IRUGO,
+ show_spi_transport_period,
+ store_spi_transport_period);
-@@ -608,6 +715,9 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
- * since we cannot communicate with the pci card anyway. */
- if (pci_channel_offline(phba->pcidev))
- return;
-+ /* If resets are disabled then leave the HBA alone and return */
-+ if (!phba->cfg_enable_hba_reset)
-+ return;
+@@ -490,9 +495,14 @@ static ssize_t
+ show_spi_transport_min_period(struct class_device *cdev, char *buf)
+ {
+ struct scsi_target *starget = transport_class_to_starget(cdev);
++ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
++ struct spi_internal *i = to_spi_internal(shost->transportt);
+ struct spi_transport_attrs *tp =
+ (struct spi_transport_attrs *)&starget->starget_data;
- if (phba->work_hs & HS_FFER6 ||
- phba->work_hs & HS_FFER5) {
-@@ -620,14 +730,14 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
- for(i = 0;
-- i < LPFC_MAX_VPORTS && vports[i] != NULL;
-+ i <= phba->max_vpi && vports[i] != NULL;
- i++){
- shost = lpfc_shost_from_vport(vports[i]);
- spin_lock_irq(shost->host_lock);
- vports[i]->fc_flag |= FC_ESTABLISH_LINK;
- spin_unlock_irq(shost->host_lock);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- spin_lock_irq(&phba->hbalock);
- psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
- spin_unlock_irq(&phba->hbalock);
-@@ -655,6 +765,31 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
- return;
- }
- lpfc_unblock_mgmt_io(phba);
-+ } else if (phba->work_hs & HS_CRIT_TEMP) {
-+ temperature = readl(phba->MBslimaddr + TEMPERATURE_OFFSET);
-+ temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
-+ temp_event_data.event_code = LPFC_CRIT_TEMP;
-+ temp_event_data.data = (uint32_t)temperature;
-+
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "0459 Adapter maximum temperature exceeded "
-+ "(%ld), taking this port offline "
-+ "Data: x%x x%x x%x\n",
-+ temperature, phba->work_hs,
-+ phba->work_status[0], phba->work_status[1]);
-+
-+ shost = lpfc_shost_from_vport(phba->pport);
-+ fc_host_post_vendor_event(shost, fc_get_event_number(),
-+ sizeof(temp_event_data),
-+ (char *) &temp_event_data,
-+ SCSI_NL_VID_TYPE_PCI
-+ | PCI_VENDOR_ID_EMULEX);
-+
-+ spin_lock_irq(&phba->hbalock);
-+ phba->over_temp_state = HBA_OVER_TEMP;
-+ spin_unlock_irq(&phba->hbalock);
-+ lpfc_offline_eratt(phba);
++ if (!i->f->set_period)
++ return -EINVAL;
+
- } else {
- /* The if clause above forces this code path when the status
- * failure is a value other than FFER6. Do not call the offline
-@@ -672,14 +807,7 @@ lpfc_handle_eratt(struct lpfc_hba *phba)
- sizeof(event_data), (char *) &event_data,
- SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
-
-- spin_lock_irq(&phba->hbalock);
-- psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
-- spin_unlock_irq(&phba->hbalock);
-- lpfc_offline_prep(phba);
-- lpfc_offline(phba);
-- lpfc_unblock_mgmt_io(phba);
-- phba->link_state = LPFC_HBA_ERROR;
-- lpfc_hba_down_post(phba);
-+ lpfc_offline_eratt(phba);
- }
+ return show_spi_transport_period_helper(buf, tp->min_period);
}
-@@ -699,21 +827,25 @@ lpfc_handle_latt(struct lpfc_hba *phba)
- LPFC_MBOXQ_t *pmb;
- volatile uint32_t control;
- struct lpfc_dmabuf *mp;
-- int rc = -ENOMEM;
-+ int rc = 0;
-
- pmb = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-- if (!pmb)
-+ if (!pmb) {
-+ rc = 1;
- goto lpfc_handle_latt_err_exit;
-+ }
-
- mp = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
-- if (!mp)
-+ if (!mp) {
-+ rc = 2;
- goto lpfc_handle_latt_free_pmb;
-+ }
+@@ -509,7 +519,7 @@ store_spi_transport_min_period(struct class_device *cdev, const char *buf,
+ }
- mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
-- if (!mp->virt)
-+ if (!mp->virt) {
-+ rc = 3;
- goto lpfc_handle_latt_free_mp;
--
-- rc = -EIO;
-+ }
- /* Cleanup any outstanding ELS commands */
- lpfc_els_flush_all_cmd(phba);
-@@ -722,9 +854,11 @@ lpfc_handle_latt(struct lpfc_hba *phba)
- lpfc_read_la(phba, pmb, mp);
- pmb->mbox_cmpl = lpfc_mbx_cmpl_read_la;
- pmb->vport = vport;
-- rc = lpfc_sli_issue_mbox (phba, pmb, (MBX_NOWAIT | MBX_STOP_IOCB));
-- if (rc == MBX_NOT_FINISHED)
-+ rc = lpfc_sli_issue_mbox (phba, pmb, MBX_NOWAIT);
-+ if (rc == MBX_NOT_FINISHED) {
-+ rc = 4;
- goto lpfc_handle_latt_free_mbuf;
-+ }
+-static CLASS_DEVICE_ATTR(min_period, S_IRUGO | S_IWUSR,
++static CLASS_DEVICE_ATTR(min_period, S_IRUGO,
+ show_spi_transport_min_period,
+ store_spi_transport_min_period);
- /* Clear Link Attention in HA REG */
- spin_lock_irq(&phba->hbalock);
-@@ -756,10 +890,8 @@ lpfc_handle_latt_err_exit:
- lpfc_linkdown(phba);
- phba->link_state = LPFC_HBA_ERROR;
+@@ -531,12 +541,15 @@ static ssize_t store_spi_host_signalling(struct class_device *cdev,
+ struct spi_internal *i = to_spi_internal(shost->transportt);
+ enum spi_signal_type type = spi_signal_to_value(buf);
-- /* The other case is an error from issue_mbox */
-- if (rc == -ENOMEM)
-- lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX,
-- "0300 READ_LA: no buffers\n");
-+ lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
-+ "0300 LATT: Cannot issue READ_LA: Data:%d\n", rc);
++ if (!i->f->set_signalling)
++ return -EINVAL;
++
+ if (type != SPI_SIGNAL_UNKNOWN)
+ i->f->set_signalling(shost, type);
- return;
- }
-@@ -1088,9 +1220,8 @@ lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt,
- /* Allocate buffer to post */
- mp1 = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
- if (mp1)
-- mp1->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
-- &mp1->phys);
-- if (mp1 == 0 || mp1->virt == 0) {
-+ mp1->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &mp1->phys);
-+ if (!mp1 || !mp1->virt) {
- kfree(mp1);
- lpfc_sli_release_iocbq(phba, iocb);
- pring->missbufcnt = cnt;
-@@ -1104,7 +1235,7 @@ lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt,
- if (mp2)
- mp2->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
- &mp2->phys);
-- if (mp2 == 0 || mp2->virt == 0) {
-+ if (!mp2 || !mp2->virt) {
- kfree(mp2);
- lpfc_mbuf_free(phba, mp1->virt, mp1->phys);
- kfree(mp1);
-@@ -1280,15 +1411,39 @@ lpfc_hba_init(struct lpfc_hba *phba, uint32_t *hbainit)
- kfree(HashWorking);
+ return count;
}
+-static CLASS_DEVICE_ATTR(signalling, S_IRUGO | S_IWUSR,
++static CLASS_DEVICE_ATTR(signalling, S_IRUGO,
+ show_spi_host_signalling,
+ store_spi_host_signalling);
--static void
-+void
- lpfc_cleanup(struct lpfc_vport *vport)
+@@ -1262,35 +1275,6 @@ int spi_print_msg(const unsigned char *msg)
+ EXPORT_SYMBOL(spi_print_msg);
+ #endif /* ! CONFIG_SCSI_CONSTANTS */
+
+-#define SETUP_ATTRIBUTE(field) \
+- i->private_attrs[count] = class_device_attr_##field; \
+- if (!i->f->set_##field) { \
+- i->private_attrs[count].attr.mode = S_IRUGO; \
+- i->private_attrs[count].store = NULL; \
+- } \
+- i->attrs[count] = &i->private_attrs[count]; \
+- if (i->f->show_##field) \
+- count++
+-
+-#define SETUP_RELATED_ATTRIBUTE(field, rel_field) \
+- i->private_attrs[count] = class_device_attr_##field; \
+- if (!i->f->set_##rel_field) { \
+- i->private_attrs[count].attr.mode = S_IRUGO; \
+- i->private_attrs[count].store = NULL; \
+- } \
+- i->attrs[count] = &i->private_attrs[count]; \
+- if (i->f->show_##rel_field) \
+- count++
+-
+-#define SETUP_HOST_ATTRIBUTE(field) \
+- i->private_host_attrs[count] = class_device_attr_##field; \
+- if (!i->f->set_##field) { \
+- i->private_host_attrs[count].attr.mode = S_IRUGO; \
+- i->private_host_attrs[count].store = NULL; \
+- } \
+- i->host_attrs[count] = &i->private_host_attrs[count]; \
+- count++
+-
+ static int spi_device_match(struct attribute_container *cont,
+ struct device *dev)
{
-+ struct lpfc_hba *phba = vport->phba;
- struct lpfc_nodelist *ndlp, *next_ndlp;
-+ int i = 0;
+@@ -1343,16 +1327,156 @@ static DECLARE_TRANSPORT_CLASS(spi_transport_class,
+ "spi_transport",
+ spi_setup_transport_attrs,
+ NULL,
+- NULL);
++ spi_target_configure);
-- /* clean up phba - lpfc specific */
-- lpfc_can_disctmo(vport);
-- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
-- lpfc_nlp_put(ndlp);
-+ if (phba->link_state > LPFC_LINK_DOWN)
-+ lpfc_port_link_failure(vport);
+ static DECLARE_ANON_TRANSPORT_CLASS(spi_device_class,
+ spi_device_match,
+ spi_device_configure);
+
++static struct attribute *host_attributes[] = {
++ &class_device_attr_signalling.attr,
++ NULL
++};
+
-+ list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
-+ if (ndlp->nlp_type & NLP_FABRIC)
-+ lpfc_disc_state_machine(vport, ndlp, NULL,
-+ NLP_EVT_DEVICE_RECOVERY);
-+ lpfc_disc_state_machine(vport, ndlp, NULL,
-+ NLP_EVT_DEVICE_RM);
-+ }
++static struct attribute_group host_attribute_group = {
++ .attrs = host_attributes,
++};
+
-+ /* At this point, ALL ndlp's should be gone
-+ * because of the previous NLP_EVT_DEVICE_RM.
-+ * Lets wait for this to happen, if needed.
-+ */
-+ while (!list_empty(&vport->fc_nodes)) {
++static int spi_host_configure(struct transport_container *tc,
++ struct device *dev,
++ struct class_device *cdev)
++{
++ struct kobject *kobj = &cdev->kobj;
++ struct Scsi_Host *shost = transport_class_to_shost(cdev);
++ struct spi_internal *si = to_spi_internal(shost->transportt);
++ struct attribute *attr = &class_device_attr_signalling.attr;
++ int rc = 0;
++
++ if (si->f->set_signalling)
++ rc = sysfs_chmod_file(kobj, attr, attr->mode | S_IWUSR);
++
++ return rc;
++}
++
++/* returns true if we should be showing the variable. Also
++ * overloads the return by setting 1<<1 if the attribute should
++ * be writeable */
++#define TARGET_ATTRIBUTE_HELPER(name) \
++ (si->f->show_##name ? 1 : 0) + \
++ (si->f->set_##name ? 2 : 0)
++
++static int target_attribute_is_visible(struct kobject *kobj,
++ struct attribute *attr, int i)
++{
++ struct class_device *cdev =
++ container_of(kobj, struct class_device, kobj);
++ struct scsi_target *starget = transport_class_to_starget(cdev);
++ struct Scsi_Host *shost = transport_class_to_shost(cdev);
++ struct spi_internal *si = to_spi_internal(shost->transportt);
++
++ if (attr == &class_device_attr_period.attr &&
++ spi_support_sync(starget))
++ return TARGET_ATTRIBUTE_HELPER(period);
++ else if (attr == &class_device_attr_min_period.attr &&
++ spi_support_sync(starget))
++ return TARGET_ATTRIBUTE_HELPER(period);
++ else if (attr == &class_device_attr_offset.attr &&
++ spi_support_sync(starget))
++ return TARGET_ATTRIBUTE_HELPER(offset);
++ else if (attr == &class_device_attr_max_offset.attr &&
++ spi_support_sync(starget))
++ return TARGET_ATTRIBUTE_HELPER(offset);
++ else if (attr == &class_device_attr_width.attr &&
++ spi_support_wide(starget))
++ return TARGET_ATTRIBUTE_HELPER(width);
++ else if (attr == &class_device_attr_max_width.attr &&
++ spi_support_wide(starget))
++ return TARGET_ATTRIBUTE_HELPER(width);
++ else if (attr == &class_device_attr_iu.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(iu);
++ else if (attr == &class_device_attr_dt.attr &&
++ spi_support_dt(starget))
++ return TARGET_ATTRIBUTE_HELPER(dt);
++ else if (attr == &class_device_attr_qas.attr &&
++ spi_support_qas(starget))
++ return TARGET_ATTRIBUTE_HELPER(qas);
++ else if (attr == &class_device_attr_wr_flow.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(wr_flow);
++ else if (attr == &class_device_attr_rd_strm.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(rd_strm);
++ else if (attr == &class_device_attr_rti.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(rti);
++ else if (attr == &class_device_attr_pcomp_en.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(pcomp_en);
++ else if (attr == &class_device_attr_hold_mcs.attr &&
++ spi_support_ius(starget))
++ return TARGET_ATTRIBUTE_HELPER(hold_mcs);
++ else if (attr == &class_device_attr_revalidate.attr)
++ return 1;
++
++ return 0;
++}
++
++static struct attribute *target_attributes[] = {
++ &class_device_attr_period.attr,
++ &class_device_attr_min_period.attr,
++ &class_device_attr_offset.attr,
++ &class_device_attr_max_offset.attr,
++ &class_device_attr_width.attr,
++ &class_device_attr_max_width.attr,
++ &class_device_attr_iu.attr,
++ &class_device_attr_dt.attr,
++ &class_device_attr_qas.attr,
++ &class_device_attr_wr_flow.attr,
++ &class_device_attr_rd_strm.attr,
++ &class_device_attr_rti.attr,
++ &class_device_attr_pcomp_en.attr,
++ &class_device_attr_hold_mcs.attr,
++ &class_device_attr_revalidate.attr,
++ NULL
++};
++
++static struct attribute_group target_attribute_group = {
++ .attrs = target_attributes,
++ .is_visible = target_attribute_is_visible,
++};
+
-+ if (i++ > 3000) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
-+ "0233 Nodelist not empty\n");
-+ break;
-+ }
++static int spi_target_configure(struct transport_container *tc,
++ struct device *dev,
++ struct class_device *cdev)
++{
++ struct kobject *kobj = &cdev->kobj;
++ int i;
++ struct attribute *attr;
++ int rc;
+
-+ /* Wait for any activity on ndlps to settle */
-+ msleep(10);
++ for (i = 0; (attr = target_attributes[i]) != NULL; i++) {
++ int j = target_attribute_group.is_visible(kobj, attr, i);
++
++ /* FIXME: as well as returning -EEXIST, which we'd like
++ * to ignore, sysfs also does a WARN_ON and dumps a trace,
++ * which is bad, so temporarily, skip attributes that are
++ * already visible (the revalidate one) */
++ if (j && attr != &class_device_attr_revalidate.attr)
++ rc = sysfs_add_file_to_group(kobj, attr,
++ target_attribute_group.name);
++ /* and make the attribute writeable if we have a set
++ * function */
++ if ((j & 1))
++ rc = sysfs_chmod_file(kobj, attr, attr->mode | S_IWUSR);
+ }
- return;
- }
-
-@@ -1307,14 +1462,14 @@ lpfc_establish_link_tmo(unsigned long ptr)
- phba->pport->fc_flag, phba->pport->port_state);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- struct Scsi_Host *shost;
- shost = lpfc_shost_from_vport(vports[i]);
- spin_lock_irqsave(shost->host_lock, iflag);
- vports[i]->fc_flag &= ~FC_ESTABLISH_LINK;
- spin_unlock_irqrestore(shost->host_lock, iflag);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- }
-
- void
-@@ -1339,6 +1494,16 @@ lpfc_stop_phba_timers(struct lpfc_hba *phba)
- return;
- }
-
-+static void
-+lpfc_block_mgmt_io(struct lpfc_hba * phba)
-+{
-+ unsigned long iflag;
+
-+ spin_lock_irqsave(&phba->hbalock, iflag);
-+ phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
-+ spin_unlock_irqrestore(&phba->hbalock, iflag);
++ return 0;
+}
+
- int
- lpfc_online(struct lpfc_hba *phba)
+ struct scsi_transport_template *
+ spi_attach_transport(struct spi_function_template *ft)
{
-@@ -1369,7 +1534,7 @@ lpfc_online(struct lpfc_hba *phba)
+- int count = 0;
+ struct spi_internal *i = kzalloc(sizeof(struct spi_internal),
+ GFP_KERNEL);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- struct Scsi_Host *shost;
- shost = lpfc_shost_from_vport(vports[i]);
- spin_lock_irq(shost->host_lock);
-@@ -1378,23 +1543,13 @@ lpfc_online(struct lpfc_hba *phba)
- vports[i]->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
- spin_unlock_irq(shost->host_lock);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
+@@ -1360,47 +1484,17 @@ spi_attach_transport(struct spi_function_template *ft)
+ return NULL;
- lpfc_unblock_mgmt_io(phba);
- return 0;
- }
+ i->t.target_attrs.ac.class = &spi_transport_class.class;
+- i->t.target_attrs.ac.attrs = &i->attrs[0];
++ i->t.target_attrs.ac.grp = &target_attribute_group;
+ i->t.target_attrs.ac.match = spi_target_match;
+ transport_container_register(&i->t.target_attrs);
+ i->t.target_size = sizeof(struct spi_transport_attrs);
+ i->t.host_attrs.ac.class = &spi_host_class.class;
+- i->t.host_attrs.ac.attrs = &i->host_attrs[0];
++ i->t.host_attrs.ac.grp = &host_attribute_group;
+ i->t.host_attrs.ac.match = spi_host_match;
+ transport_container_register(&i->t.host_attrs);
+ i->t.host_size = sizeof(struct spi_host_attrs);
+ i->f = ft;
- void
--lpfc_block_mgmt_io(struct lpfc_hba * phba)
--{
-- unsigned long iflag;
+- SETUP_ATTRIBUTE(period);
+- SETUP_RELATED_ATTRIBUTE(min_period, period);
+- SETUP_ATTRIBUTE(offset);
+- SETUP_RELATED_ATTRIBUTE(max_offset, offset);
+- SETUP_ATTRIBUTE(width);
+- SETUP_RELATED_ATTRIBUTE(max_width, width);
+- SETUP_ATTRIBUTE(iu);
+- SETUP_ATTRIBUTE(dt);
+- SETUP_ATTRIBUTE(qas);
+- SETUP_ATTRIBUTE(wr_flow);
+- SETUP_ATTRIBUTE(rd_strm);
+- SETUP_ATTRIBUTE(rti);
+- SETUP_ATTRIBUTE(pcomp_en);
+- SETUP_ATTRIBUTE(hold_mcs);
-
-- spin_lock_irqsave(&phba->hbalock, iflag);
-- phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
-- spin_unlock_irqrestore(&phba->hbalock, iflag);
--}
+- /* if you add an attribute but forget to increase SPI_NUM_ATTRS
+- * this bug will trigger */
+- BUG_ON(count > SPI_NUM_ATTRS);
-
--void
- lpfc_unblock_mgmt_io(struct lpfc_hba * phba)
- {
- unsigned long iflag;
-@@ -1409,6 +1564,8 @@ lpfc_offline_prep(struct lpfc_hba * phba)
- {
- struct lpfc_vport *vport = phba->pport;
- struct lpfc_nodelist *ndlp, *next_ndlp;
-+ struct lpfc_vport **vports;
-+ int i;
-
- if (vport->fc_flag & FC_OFFLINE_MODE)
- return;
-@@ -1417,10 +1574,34 @@ lpfc_offline_prep(struct lpfc_hba * phba)
-
- lpfc_linkdown(phba);
+- i->attrs[count++] = &class_device_attr_revalidate;
+-
+- i->attrs[count] = NULL;
+-
+- count = 0;
+- SETUP_HOST_ATTRIBUTE(signalling);
+-
+- BUG_ON(count > SPI_HOST_ATTRS);
+-
+- i->host_attrs[count] = NULL;
+-
+ return &i->t;
+ }
+ EXPORT_SYMBOL(spi_attach_transport);
+diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
+index 65c584d..2445c98 100644
+--- a/drivers/scsi/scsi_transport_srp.c
++++ b/drivers/scsi/scsi_transport_srp.c
+@@ -185,11 +185,10 @@ static int srp_host_match(struct attribute_container *cont, struct device *dev)
-- /* Issue an unreg_login to all nodes */
-- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp)
-- if (ndlp->nlp_state != NLP_STE_UNUSED_NODE)
-- lpfc_unreg_rpi(vport, ndlp);
-+ /* Issue an unreg_login to all nodes on all vports */
-+ vports = lpfc_create_vport_work_array(phba);
-+ if (vports != NULL) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
-+ struct Scsi_Host *shost;
-+
-+ if (vports[i]->load_flag & FC_UNLOADING)
-+ continue;
-+ shost = lpfc_shost_from_vport(vports[i]);
-+ list_for_each_entry_safe(ndlp, next_ndlp,
-+ &vports[i]->fc_nodes,
-+ nlp_listp) {
-+ if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
-+ continue;
-+ if (ndlp->nlp_type & NLP_FABRIC) {
-+ lpfc_disc_state_machine(vports[i], ndlp,
-+ NULL, NLP_EVT_DEVICE_RECOVERY);
-+ lpfc_disc_state_machine(vports[i], ndlp,
-+ NULL, NLP_EVT_DEVICE_RM);
-+ }
-+ spin_lock_irq(shost->host_lock);
-+ ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-+ spin_unlock_irq(shost->host_lock);
-+ lpfc_unreg_rpi(vports[i], ndlp);
-+ }
-+ }
-+ }
-+ lpfc_destroy_vport_work_array(phba, vports);
+ /**
+ * srp_rport_add - add a SRP remote port to the device hierarchy
+- *
+ * @shost: scsi host the remote port is connected to.
+ * @ids: The port id for the remote port.
+ *
+- * publishes a port to the rest of the system
++ * Publishes a port to the rest of the system.
+ */
+ struct srp_rport *srp_rport_add(struct Scsi_Host *shost,
+ struct srp_rport_identifiers *ids)
+@@ -242,8 +241,8 @@ struct srp_rport *srp_rport_add(struct Scsi_Host *shost,
+ EXPORT_SYMBOL_GPL(srp_rport_add);
- lpfc_sli_flush_mbox_queue(phba);
+ /**
+- * srp_rport_del -- remove a SRP remote port
+- * @port: SRP remote port to remove
++ * srp_rport_del - remove a SRP remote port
++ * @rport: SRP remote port to remove
+ *
+ * Removes the specified SRP remote port.
+ */
+@@ -271,7 +270,7 @@ static int do_srp_rport_del(struct device *dev, void *data)
}
-@@ -1439,9 +1620,9 @@ lpfc_offline(struct lpfc_hba *phba)
- lpfc_stop_phba_timers(phba);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++)
- lpfc_stop_vport_timers(vports[i]);
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
- "0460 Bring Adapter offline\n");
- /* Bring down the SLI Layer and cleanup. The HBA is offline
-@@ -1452,15 +1633,14 @@ lpfc_offline(struct lpfc_hba *phba)
- spin_unlock_irq(&phba->hbalock);
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- shost = lpfc_shost_from_vport(vports[i]);
-- lpfc_cleanup(vports[i]);
- spin_lock_irq(shost->host_lock);
- vports[i]->work_port_events = 0;
- vports[i]->fc_flag |= FC_OFFLINE_MODE;
- spin_unlock_irq(shost->host_lock);
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
+
+ /**
+- * srp_remove_host -- tear down a Scsi_Host's SRP data structures
++ * srp_remove_host - tear down a Scsi_Host's SRP data structures
+ * @shost: Scsi Host that is torn down
+ *
+ * Removes all SRP remote ports for a given Scsi_Host.
+@@ -297,7 +296,7 @@ static int srp_it_nexus_response(struct Scsi_Host *shost, u64 nexus, int result)
}
- /******************************************************************************
-@@ -1674,6 +1854,8 @@ void lpfc_host_attrib_init(struct Scsi_Host *shost)
- fc_host_supported_speeds(shost) = 0;
- if (phba->lmt & LMT_10Gb)
- fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT;
-+ if (phba->lmt & LMT_8Gb)
-+ fc_host_supported_speeds(shost) |= FC_PORTSPEED_8GBIT;
- if (phba->lmt & LMT_4Gb)
- fc_host_supported_speeds(shost) |= FC_PORTSPEED_4GBIT;
- if (phba->lmt & LMT_2Gb)
-@@ -1707,13 +1889,14 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- struct Scsi_Host *shost = NULL;
- void *ptr;
- unsigned long bar0map_len, bar2map_len;
-- int error = -ENODEV;
-+ int error = -ENODEV, retval;
- int i, hbq_count;
- uint16_t iotag;
-+ int bars = pci_select_bars(pdev, IORESOURCE_MEM);
+ /**
+- * srp_attach_transport -- instantiate SRP transport template
++ * srp_attach_transport - instantiate SRP transport template
+ * @ft: SRP transport class function template
+ */
+ struct scsi_transport_template *
+@@ -337,7 +336,7 @@ srp_attach_transport(struct srp_function_template *ft)
+ EXPORT_SYMBOL_GPL(srp_attach_transport);
-- if (pci_enable_device(pdev))
-+ if (pci_enable_device_bars(pdev, bars))
- goto out;
-- if (pci_request_regions(pdev, LPFC_DRIVER_NAME))
-+ if (pci_request_selected_regions(pdev, bars, LPFC_DRIVER_NAME))
- goto out_disable_device;
+ /**
+- * srp_release_transport -- release SRP transport template instance
++ * srp_release_transport - release SRP transport template instance
+ * @t: transport template instance
+ */
+ void srp_release_transport(struct scsi_transport_template *t)
+diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c
+index cd68a66..3f21bc6 100644
+--- a/drivers/scsi/scsicam.c
++++ b/drivers/scsi/scsicam.c
+@@ -24,6 +24,14 @@
+ static int setsize(unsigned long capacity, unsigned int *cyls, unsigned int *hds,
+ unsigned int *secs);
- phba = kzalloc(sizeof (struct lpfc_hba), GFP_KERNEL);
-@@ -1823,9 +2006,11 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- lpfc_sli_setup(phba);
- lpfc_sli_queue_setup(phba);
++/**
++ * scsi_bios_ptable - Read PC partition table out of first sector of device.
++ * @dev: from this device
++ *
++ * Description: Reads the first sector from the device and returns %0x42 bytes
++ * starting at offset %0x1be.
++ * Returns: partition table in kmalloc(GFP_KERNEL) memory, or NULL on error.
++ */
+ unsigned char *scsi_bios_ptable(struct block_device *dev)
+ {
+ unsigned char *res = kmalloc(66, GFP_KERNEL);
+@@ -43,15 +51,17 @@ unsigned char *scsi_bios_ptable(struct block_device *dev)
+ }
+ EXPORT_SYMBOL(scsi_bios_ptable);
-- error = lpfc_mem_alloc(phba);
-- if (error)
-+ retval = lpfc_mem_alloc(phba);
-+ if (retval) {
-+ error = retval;
- goto out_free_hbqslimp;
-+ }
+-/*
+- * Function : int scsicam_bios_param (struct block_device *bdev, ector_t capacity, int *ip)
++/**
++ * scsicam_bios_param - Determine geometry of a disk in cylinders/heads/sectors.
++ * @bdev: which device
++ * @capacity: size of the disk in sectors
++ * @ip: return value: ip[0]=heads, ip[1]=sectors, ip[2]=cylinders
+ *
+- * Purpose : to determine the BIOS mapping used for a drive in a
++ * Description : determine the BIOS mapping/geometry used for a drive in a
+ * SCSI-CAM system, storing the results in ip as required
+ * by the HDIO_GETGEO ioctl().
+ *
+ * Returns : -1 on failure, 0 on success.
+- *
+ */
- /* Initialize and populate the iocb list per host. */
- INIT_LIST_HEAD(&phba->lpfc_iocb_list);
-@@ -1880,6 +2065,9 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- /* Initialize list of fabric iocbs */
- INIT_LIST_HEAD(&phba->fabric_iocb_list);
+ int scsicam_bios_param(struct block_device *bdev, sector_t capacity, int *ip)
+@@ -98,15 +108,18 @@ int scsicam_bios_param(struct block_device *bdev, sector_t capacity, int *ip)
+ }
+ EXPORT_SYMBOL(scsicam_bios_param);
-+ /* Initialize list to save ELS buffers */
-+ INIT_LIST_HEAD(&phba->elsbuf);
-+
- vport = lpfc_create_port(phba, phba->brd_no, &phba->pcidev->dev);
- if (!vport)
- goto out_kthread_stop;
-@@ -1891,8 +2079,8 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- pci_set_drvdata(pdev, shost);
+-/*
+- * Function : static int scsi_partsize(unsigned char *buf, unsigned long
+- * capacity,unsigned int *cyls, unsigned int *hds, unsigned int *secs);
++/**
++ * scsi_partsize - Parse cylinders/heads/sectors from PC partition table
++ * @buf: partition table, see scsi_bios_ptable()
++ * @capacity: size of the disk in sectors
++ * @cyls: put cylinders here
++ * @hds: put heads here
++ * @secs: put sectors here
+ *
+- * Purpose : to determine the BIOS mapping used to create the partition
++ * Description: determine the BIOS mapping/geometry used to create the partition
+ * table, storing the results in *cyls, *hds, and *secs
+ *
+- * Returns : -1 on failure, 0 on success.
+- *
++ * Returns: -1 on failure, 0 on success.
+ */
- if (phba->cfg_use_msi) {
-- error = pci_enable_msi(phba->pcidev);
-- if (!error)
-+ retval = pci_enable_msi(phba->pcidev);
-+ if (!retval)
- phba->using_msi = 1;
- else
- lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-@@ -1900,11 +2088,12 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- "with IRQ\n");
+ int scsi_partsize(unsigned char *buf, unsigned long capacity,
+@@ -194,7 +207,7 @@ EXPORT_SYMBOL(scsi_partsize);
+ *
+ * WORKING X3T9.2
+ * DRAFT 792D
+- *
++ * see http://www.t10.org/ftp/t10/drafts/cam/cam-r12b.pdf
+ *
+ * Revision 6
+ * 10-MAR-94
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index a69b155..24eba31 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -395,6 +395,15 @@ static int sd_prep_fn(struct request_queue *q, struct request *rq)
+ goto out;
}
-- error = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
-+ retval = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
- LPFC_DRIVER_NAME, phba);
-- if (error) {
-+ if (retval) {
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "0451 Enable interrupt handler failed\n");
-+ error = retval;
- goto out_disable_msi;
- }
++ /*
++ * Some devices (some sdcards for one) don't like it if the
++ * last sector gets read in a larger then 1 sector read.
++ */
++ if (unlikely(sdp->last_sector_bug &&
++ rq->nr_sectors > sdp->sector_size / 512 &&
++ block + this_count == get_capacity(disk)))
++ this_count -= sdp->sector_size / 512;
++
+ SCSI_LOG_HLQUEUE(2, scmd_printk(KERN_INFO, SCpnt, "block=%llu\n",
+ (unsigned long long)block));
-@@ -1914,11 +2103,15 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
- phba->HSregaddr = phba->ctrl_regs_memmap_p + HS_REG_OFFSET;
- phba->HCregaddr = phba->ctrl_regs_memmap_p + HC_REG_OFFSET;
+@@ -736,6 +745,7 @@ static int sd_media_changed(struct gendisk *disk)
+ {
+ struct scsi_disk *sdkp = scsi_disk(disk);
+ struct scsi_device *sdp = sdkp->device;
++ struct scsi_sense_hdr *sshdr = NULL;
+ int retval;
-- if (lpfc_alloc_sysfs_attr(vport))
-+ if (lpfc_alloc_sysfs_attr(vport)) {
-+ error = -ENOMEM;
- goto out_free_irq;
+ SCSI_LOG_HLQUEUE(3, sd_printk(KERN_INFO, sdkp, "sd_media_changed\n"));
+@@ -749,8 +759,11 @@ static int sd_media_changed(struct gendisk *disk)
+ * can deal with it then. It is only because of unrecoverable errors
+ * that we would ever take a device offline in the first place.
+ */
+- if (!scsi_device_online(sdp))
+- goto not_present;
++ if (!scsi_device_online(sdp)) {
++ set_media_not_present(sdkp);
++ retval = 1;
++ goto out;
+ }
-- if (lpfc_sli_hba_setup(phba))
-+ if (lpfc_sli_hba_setup(phba)) {
-+ error = -ENODEV;
- goto out_remove_device;
+ /*
+ * Using TEST_UNIT_READY enables differentiation between drive with
+@@ -762,8 +775,12 @@ static int sd_media_changed(struct gendisk *disk)
+ * sd_revalidate() is called.
+ */
+ retval = -ENODEV;
+- if (scsi_block_when_processing_errors(sdp))
+- retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES);
++
++ if (scsi_block_when_processing_errors(sdp)) {
++ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
++ retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES,
++ sshdr);
+ }
/*
- * hba setup may have changed the hba_queue_depth so we need to adjust
-@@ -1975,7 +2168,7 @@ out_idr_remove:
- out_free_phba:
- kfree(phba);
- out_release_regions:
-- pci_release_regions(pdev);
-+ pci_release_selected_regions(pdev, bars);
- out_disable_device:
- pci_disable_device(pdev);
- out:
-@@ -1991,6 +2184,8 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
- struct Scsi_Host *shost = pci_get_drvdata(pdev);
- struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
- struct lpfc_hba *phba = vport->phba;
-+ int bars = pci_select_bars(pdev, IORESOURCE_MEM);
-+
- spin_lock_irq(&phba->hbalock);
- vport->load_flag |= FC_UNLOADING;
- spin_unlock_irq(&phba->hbalock);
-@@ -1998,8 +2193,12 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
- kfree(vport->vname);
- lpfc_free_sysfs_attr(vport);
+ * Unable to test, unit probably not ready. This usually
+@@ -771,8 +788,13 @@ static int sd_media_changed(struct gendisk *disk)
+ * and we will figure it out later once the drive is
+ * available again.
+ */
+- if (retval)
+- goto not_present;
++ if (retval || (scsi_sense_valid(sshdr) &&
++ /* 0x3a is medium not present */
++ sshdr->asc == 0x3a)) {
++ set_media_not_present(sdkp);
++ retval = 1;
++ goto out;
++ }
-+ kthread_stop(phba->worker_thread);
-+
- fc_remove_host(shost);
- scsi_remove_host(shost);
-+ lpfc_cleanup(vport);
-+
/*
- * Bring down the SLI Layer. This step disable all interrupts,
- * clears the rings, discards all mailbox commands, and resets
-@@ -2014,9 +2213,6 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
- spin_unlock_irq(&phba->hbalock);
+ * For removable scsi disk we have to recognise the presence
+@@ -783,12 +805,12 @@ static int sd_media_changed(struct gendisk *disk)
- lpfc_debugfs_terminate(vport);
-- lpfc_cleanup(vport);
+ retval = sdp->changed;
+ sdp->changed = 0;
+-
++out:
++ if (retval != sdkp->previous_state)
++ sdev_evt_send_simple(sdp, SDEV_EVT_MEDIA_CHANGE, GFP_KERNEL);
++ sdkp->previous_state = retval;
++ kfree(sshdr);
+ return retval;
+-
+-not_present:
+- set_media_not_present(sdkp);
+- return 1;
+ }
+
+ static int sd_sync_cache(struct scsi_disk *sdkp)
+diff --git a/drivers/scsi/seagate.c b/drivers/scsi/seagate.c
+deleted file mode 100644
+index b113244..0000000
+--- a/drivers/scsi/seagate.c
++++ /dev/null
+@@ -1,1667 +0,0 @@
+-/*
+- * seagate.c Copyright (C) 1992, 1993 Drew Eckhardt
+- * low level scsi driver for ST01/ST02, Future Domain TMC-885,
+- * TMC-950 by Drew Eckhardt <drew at colorado.edu>
+- *
+- * Note : TMC-880 boards don't work because they have two bits in
+- * the status register flipped, I'll fix this "RSN"
+- * [why do I have strong feeling that above message is from 1993? :-)
+- * pavel at ucw.cz]
+- *
+- * This card does all the I/O via memory mapped I/O, so there is no need
+- * to check or allocate a region of the I/O address space.
+- */
+-
+-/* 1996 - to use new read{b,w,l}, write{b,w,l}, and phys_to_virt
+- * macros, replaced assembler routines with C. There's probably a
+- * performance hit, but I only have a cdrom and can't tell. Define
+- * SEAGATE_USE_ASM if you want the old assembler code -- SJT
+- *
+- * 1998-jul-29 - created DPRINTK macros and made it work under
+- * linux 2.1.112, simplified some #defines etc. <pavel at ucw.cz>
+- *
+- * Aug 2000 - aeb - deleted seagate_st0x_biosparam(). It would try to
+- * read the physical disk geometry, a bad mistake. Of course it doesn't
+- * matter much what geometry one invents, but on large disks it
+- * returned 256 (or more) heads, causing all kind of failures.
+- * Of course this means that people might see a different geometry now,
+- * so boot parameters may be necessary in some cases.
+- */
+-
+-/*
+- * Configuration :
+- * To use without BIOS -DOVERRIDE=base_address -DCONTROLLER=FD or SEAGATE
+- * -DIRQ will override the default of 5.
+- * Note: You can now set these options from the kernel's "command line".
+- * The syntax is:
+- *
+- * st0x=ADDRESS,IRQ (for a Seagate controller)
+- * or:
+- * tmc8xx=ADDRESS,IRQ (for a TMC-8xx or TMC-950 controller)
+- * eg:
+- * tmc8xx=0xC8000,15
+- *
+- * will configure the driver for a TMC-8xx style controller using IRQ 15
+- * with a base address of 0xC8000.
+- *
+- * -DARBITRATE
+- * Will cause the host adapter to arbitrate for the
+- * bus for better SCSI-II compatibility, rather than just
+- * waiting for BUS FREE and then doing its thing. Should
+- * let us do one command per Lun when I integrate my
+- * reorganization changes into the distribution sources.
+- *
+- * -DDEBUG=65535
+- * Will activate debug code.
+- *
+- * -DFAST or -DFAST32
+- * Will use blind transfers where possible
+- *
+- * -DPARITY
+- * This will enable parity.
+- *
+- * -DSEAGATE_USE_ASM
+- * Will use older seagate assembly code. should be (very small amount)
+- * Faster.
+- *
+- * -DSLOW_RATE=50
+- * Will allow compatibility with broken devices that don't
+- * handshake fast enough (ie, some CD ROM's) for the Seagate
+- * code.
+- *
+- * 50 is some number, It will let you specify a default
+- * transfer rate if handshaking isn't working correctly.
+- *
+- * -DOLDCNTDATASCEME There is a new sceme to set the CONTROL
+- * and DATA reigsters which complies more closely
+- * with the SCSI2 standard. This hopefully eliminates
+- * the need to swap the order these registers are
+- * 'messed' with. It makes the following two options
+- * obsolete. To reenable the old sceme define this.
+- *
+- * The following to options are patches from the SCSI.HOWTO
+- *
+- * -DSWAPSTAT This will swap the definitions for STAT_MSG and STAT_CD.
+- *
+- * -DSWAPCNTDATA This will swap the order that seagate.c messes with
+- * the CONTROL an DATA registers.
+- */
+-
+-#include <linux/module.h>
+-#include <linux/interrupt.h>
+-#include <linux/spinlock.h>
+-#include <linux/signal.h>
+-#include <linux/string.h>
+-#include <linux/proc_fs.h>
+-#include <linux/init.h>
+-#include <linux/blkdev.h>
+-#include <linux/stat.h>
+-#include <linux/delay.h>
+-#include <linux/io.h>
+-
+-#include <asm/system.h>
+-#include <asm/uaccess.h>
+-
+-#include <scsi/scsi_cmnd.h>
+-#include <scsi/scsi_device.h>
+-#include <scsi/scsi.h>
+-
+-#include <scsi/scsi_dbg.h>
+-#include <scsi/scsi_host.h>
+-
+-
+-#ifdef DEBUG
+-#define DPRINTK( when, msg... ) do { if ( (DEBUG & (when)) == (when) ) printk( msg ); } while (0)
+-#else
+-#define DPRINTK( when, msg... ) do { } while (0)
+-#define DEBUG 0
+-#endif
+-#define DANY( msg... ) DPRINTK( 0xffff, msg );
+-
+-#ifndef IRQ
+-#define IRQ 5
+-#endif
+-
+-#ifdef FAST32
+-#define FAST
+-#endif
+-
+-#undef LINKED /* Linked commands are currently broken! */
+-
+-#if defined(OVERRIDE) && !defined(CONTROLLER)
+-#error Please use -DCONTROLLER=SEAGATE or -DCONTROLLER=FD to override controller type
+-#endif
+-
+-#ifndef __i386__
+-#undef SEAGATE_USE_ASM
+-#endif
+-
+-/*
+- Thanks to Brian Antoine for the example code in his Messy-Loss ST-01
+- driver, and Mitsugu Suzuki for information on the ST-01
+- SCSI host.
+-*/
+-
+-/*
+- CONTROL defines
+-*/
+-
+-#define CMD_RST 0x01
+-#define CMD_SEL 0x02
+-#define CMD_BSY 0x04
+-#define CMD_ATTN 0x08
+-#define CMD_START_ARB 0x10
+-#define CMD_EN_PARITY 0x20
+-#define CMD_INTR 0x40
+-#define CMD_DRVR_ENABLE 0x80
+-
+-/*
+- STATUS
+-*/
+-#ifdef SWAPSTAT
+-#define STAT_MSG 0x08
+-#define STAT_CD 0x02
+-#else
+-#define STAT_MSG 0x02
+-#define STAT_CD 0x08
+-#endif
+-
+-#define STAT_BSY 0x01
+-#define STAT_IO 0x04
+-#define STAT_REQ 0x10
+-#define STAT_SEL 0x20
+-#define STAT_PARITY 0x40
+-#define STAT_ARB_CMPL 0x80
+-
+-/*
+- REQUESTS
+-*/
+-
+-#define REQ_MASK (STAT_CD | STAT_IO | STAT_MSG)
+-#define REQ_DATAOUT 0
+-#define REQ_DATAIN STAT_IO
+-#define REQ_CMDOUT STAT_CD
+-#define REQ_STATIN (STAT_CD | STAT_IO)
+-#define REQ_MSGOUT (STAT_MSG | STAT_CD)
+-#define REQ_MSGIN (STAT_MSG | STAT_CD | STAT_IO)
+-
+-extern volatile int seagate_st0x_timeout;
+-
+-#ifdef PARITY
+-#define BASE_CMD CMD_EN_PARITY
+-#else
+-#define BASE_CMD 0
+-#endif
+-
+-/*
+- Debugging code
+-*/
+-
+-#define PHASE_BUS_FREE 1
+-#define PHASE_ARBITRATION 2
+-#define PHASE_SELECTION 4
+-#define PHASE_DATAIN 8
+-#define PHASE_DATAOUT 0x10
+-#define PHASE_CMDOUT 0x20
+-#define PHASE_MSGIN 0x40
+-#define PHASE_MSGOUT 0x80
+-#define PHASE_STATUSIN 0x100
+-#define PHASE_ETC (PHASE_DATAIN | PHASE_DATAOUT | PHASE_CMDOUT | PHASE_MSGIN | PHASE_MSGOUT | PHASE_STATUSIN)
+-#define PRINT_COMMAND 0x200
+-#define PHASE_EXIT 0x400
+-#define PHASE_RESELECT 0x800
+-#define DEBUG_FAST 0x1000
+-#define DEBUG_SG 0x2000
+-#define DEBUG_LINKED 0x4000
+-#define DEBUG_BORKEN 0x8000
+-
+-/*
+- * Control options - these are timeouts specified in .01 seconds.
+- */
+-
+-/* 30, 20 work */
+-#define ST0X_BUS_FREE_DELAY 25
+-#define ST0X_SELECTION_DELAY 25
+-
+-#define SEAGATE 1 /* these determine the type of the controller */
+-#define FD 2
+-
+-#define ST0X_ID_STR "Seagate ST-01/ST-02"
+-#define FD_ID_STR "TMC-8XX/TMC-950"
+-
+-static int internal_command (unsigned char target, unsigned char lun,
+- const void *cmnd,
+- void *buff, int bufflen, int reselect);
+-
+-static int incommand; /* set if arbitration has finished
+- and we are in some command phase. */
+-
+-static unsigned int base_address = 0; /* Where the card ROM starts, used to
+- calculate memory mapped register
+- location. */
+-
+-static void __iomem *st0x_cr_sr; /* control register write, status
+- register read. 256 bytes in
+- length.
+- Read is status of SCSI BUS, as per
+- STAT masks. */
+-
+-static void __iomem *st0x_dr; /* data register, read write 256
+- bytes in length. */
+-
+-static volatile int st0x_aborted = 0; /* set when we are aborted, ie by a
+- time out, etc. */
+-
+-static unsigned char controller_type = 0; /* set to SEAGATE for ST0x
+- boards or FD for TMC-8xx
+- boards */
+-static int irq = IRQ;
+-
+-module_param(base_address, uint, 0);
+-module_param(controller_type, byte, 0);
+-module_param(irq, int, 0);
+-MODULE_LICENSE("GPL");
+-
+-
+-#define retcode(result) (((result) << 16) | (message << 8) | status)
+-#define STATUS ((u8) readb(st0x_cr_sr))
+-#define DATA ((u8) readb(st0x_dr))
+-#define WRITE_CONTROL(d) { writeb((d), st0x_cr_sr); }
+-#define WRITE_DATA(d) { writeb((d), st0x_dr); }
+-
+-#ifndef OVERRIDE
+-static unsigned int seagate_bases[] = {
+- 0xc8000, 0xca000, 0xcc000,
+- 0xce000, 0xdc000, 0xde000
+-};
+-
+-typedef struct {
+- const unsigned char *signature;
+- unsigned offset;
+- unsigned length;
+- unsigned char type;
+-} Signature;
+-
+-static Signature __initdata signatures[] = {
+- {"ST01 v1.7 (C) Copyright 1987 Seagate", 15, 37, SEAGATE},
+- {"SCSI BIOS 2.00 (C) Copyright 1987 Seagate", 15, 40, SEAGATE},
+-
+-/*
+- * The following two lines are NOT mistakes. One detects ROM revision
+- * 3.0.0, the other 3.2. Since seagate has only one type of SCSI adapter,
+- * and this is not going to change, the "SEAGATE" and "SCSI" together
+- * are probably "good enough"
+- */
+-
+- {"SEAGATE SCSI BIOS ", 16, 17, SEAGATE},
+- {"SEAGATE SCSI BIOS ", 17, 17, SEAGATE},
+-
+-/*
+- * However, future domain makes several incompatible SCSI boards, so specific
+- * signatures must be used.
+- */
+-
+- {"FUTURE DOMAIN CORP. (C) 1986-1989 V5.0C2/14/89", 5, 46, FD},
+- {"FUTURE DOMAIN CORP. (C) 1986-1989 V6.0A7/28/89", 5, 46, FD},
+- {"FUTURE DOMAIN CORP. (C) 1986-1990 V6.0105/31/90", 5, 47, FD},
+- {"FUTURE DOMAIN CORP. (C) 1986-1990 V6.0209/18/90", 5, 47, FD},
+- {"FUTURE DOMAIN CORP. (C) 1986-1990 V7.009/18/90", 5, 46, FD},
+- {"FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92", 5, 44, FD},
+- {"IBM F1 BIOS V1.1004/30/92", 5, 25, FD},
+- {"FUTURE DOMAIN TMC-950", 5, 21, FD},
+- /* Added for 2.2.16 by Matthias_Heidbrink at b.maus.de */
+- {"IBM F1 V1.2009/22/93", 5, 25, FD},
+-};
+-
+-#define NUM_SIGNATURES ARRAY_SIZE(signatures)
+-#endif /* n OVERRIDE */
+-
+-/*
+- * hostno stores the hostnumber, as told to us by the init routine.
+- */
+-
+-static int hostno = -1;
+-static void seagate_reconnect_intr (int, void *);
+-static irqreturn_t do_seagate_reconnect_intr (int, void *);
+-static int seagate_st0x_bus_reset(struct scsi_cmnd *);
+-
+-#ifdef FAST
+-static int fast = 1;
+-#else
+-#define fast 0
+-#endif
+-
+-#ifdef SLOW_RATE
+-/*
+- * Support for broken devices :
+- * The Seagate board has a handshaking problem. Namely, a lack
+- * thereof for slow devices. You can blast 600K/second through
+- * it if you are polling for each byte, more if you do a blind
+- * transfer. In the first case, with a fast device, REQ will
+- * transition high-low or high-low-high before your loop restarts
+- * and you'll have no problems. In the second case, the board
+- * will insert wait states for up to 13.2 usecs for REQ to
+- * transition low->high, and everything will work.
+- *
+- * However, there's nothing in the state machine that says
+- * you *HAVE* to see a high-low-high set of transitions before
+- * sending the next byte, and slow things like the Trantor CD ROMS
+- * will break because of this.
+- *
+- * So, we need to slow things down, which isn't as simple as it
+- * seems. We can't slow things down period, because then people
+- * who don't recompile their kernels will shoot me for ruining
+- * their performance. We need to do it on a case per case basis.
+- *
+- * The best for performance will be to, only for borken devices
+- * (this is stored on a per-target basis in the scsi_devices array)
+- *
+- * Wait for a low->high transition before continuing with that
+- * transfer. If we timeout, continue anyways. We don't need
+- * a long timeout, because REQ should only be asserted until the
+- * corresponding ACK is received and processed.
+- *
+- * Note that we can't use the system timer for this, because of
+- * resolution, and we *really* can't use the timer chip since
+- * gettimeofday() and the beeper routines use that. So,
+- * the best thing for us to do will be to calibrate a timing
+- * loop in the initialization code using the timer chip before
+- * gettimeofday() can screw with it.
+- *
+- * FIXME: this is broken (not borken :-). Empty loop costs less than
+- * loop with ISA access in it! -- pavel at ucw.cz
+- */
+-
+-static int borken_calibration = 0;
+-
+-static void __init borken_init (void)
+-{
+- register int count = 0, start = jiffies + 1, stop = start + 25;
+-
+- /* FIXME: There may be a better approach, this is a straight port for
+- now */
+- preempt_disable();
+- while (time_before (jiffies, start))
+- cpu_relax();
+- for (; time_before (jiffies, stop); ++count)
+- cpu_relax();
+- preempt_enable();
+-
+-/*
+- * Ok, we now have a count for .25 seconds. Convert to a
+- * count per second and divide by transfer rate in K. */
+-
+- borken_calibration = (count * 4) / (SLOW_RATE * 1024);
+-
+- if (borken_calibration < 1)
+- borken_calibration = 1;
+-}
+-
+-static inline void borken_wait (void)
+-{
+- register int count;
+-
+- for (count = borken_calibration; count && (STATUS & STAT_REQ); --count)
+- cpu_relax();
+-
+-#if (DEBUG & DEBUG_BORKEN)
+- if (count)
+- printk ("scsi%d : borken timeout\n", hostno);
+-#endif
+-}
+-
+-#endif /* def SLOW_RATE */
+-
+-/* These beasts only live on ISA, and ISA means 8MHz. Each ULOOP()
+- * contains at least one ISA access, which takes more than 0.125
+- * usec. So if we loop 8 times time in usec, we are safe.
+- */
+-
+-#define ULOOP( i ) for (clock = i*8;;)
+-#define TIMEOUT (!(clock--))
+-
+-static int __init seagate_st0x_detect (struct scsi_host_template * tpnt)
+-{
+- struct Scsi_Host *instance;
+- int i, j;
+- unsigned long cr, dr;
+-
+- tpnt->proc_name = "seagate";
+-/*
+- * First, we try for the manual override.
+- */
+- DANY ("Autodetecting ST0x / TMC-8xx\n");
+-
+- if (hostno != -1) {
+- printk (KERN_ERR "seagate_st0x_detect() called twice?!\n");
+- return 0;
+- }
+-
+-/* If the user specified the controller type from the command line,
+- controller_type will be non-zero, so don't try to detect one */
+-
+- if (!controller_type) {
+-#ifdef OVERRIDE
+- base_address = OVERRIDE;
+- controller_type = CONTROLLER;
+-
+- DANY ("Base address overridden to %x, controller type is %s\n",
+- base_address,
+- controller_type == SEAGATE ? "SEAGATE" : "FD");
+-#else /* OVERRIDE */
+-/*
+- * To detect this card, we simply look for the signature
+- * from the BIOS version notice in all the possible locations
+- * of the ROM's. This has a nice side effect of not trashing
+- * any register locations that might be used by something else.
+- *
+- * XXX - note that we probably should be probing the address
+- * space for the on-board RAM instead.
+- */
+-
+- for (i = 0; i < ARRAY_SIZE(seagate_bases); ++i) {
+- void __iomem *p = ioremap(seagate_bases[i], 0x2000);
+- if (!p)
+- continue;
+- for (j = 0; j < NUM_SIGNATURES; ++j)
+- if (check_signature(p + signatures[j].offset, signatures[j].signature, signatures[j].length)) {
+- base_address = seagate_bases[i];
+- controller_type = signatures[j].type;
+- break;
+- }
+- iounmap(p);
+- }
+-#endif /* OVERRIDE */
+- }
+- /* (! controller_type) */
+- tpnt->this_id = (controller_type == SEAGATE) ? 7 : 6;
+- tpnt->name = (controller_type == SEAGATE) ? ST0X_ID_STR : FD_ID_STR;
+-
+- if (!base_address) {
+- printk(KERN_INFO "seagate: ST0x/TMC-8xx not detected.\n");
+- return 0;
+- }
+-
+- cr = base_address + (controller_type == SEAGATE ? 0x1a00 : 0x1c00);
+- dr = cr + 0x200;
+- st0x_cr_sr = ioremap(cr, 0x100);
+- st0x_dr = ioremap(dr, 0x100);
+-
+- DANY("%s detected. Base address = %x, cr = %x, dr = %x\n",
+- tpnt->name, base_address, cr, dr);
+-
+- /*
+- * At all times, we will use IRQ 5. Should also check for IRQ3
+- * if we lose our first interrupt.
+- */
+- instance = scsi_register (tpnt, 0);
+- if (instance == NULL)
+- return 0;
+-
+- hostno = instance->host_no;
+- if (request_irq (irq, do_seagate_reconnect_intr, IRQF_DISABLED, (controller_type == SEAGATE) ? "seagate" : "tmc-8xx", instance)) {
+- printk(KERN_ERR "scsi%d : unable to allocate IRQ%d\n", hostno, irq);
+- return 0;
+- }
+- instance->irq = irq;
+- instance->io_port = base_address;
+-#ifdef SLOW_RATE
+- printk(KERN_INFO "Calibrating borken timer... ");
+- borken_init();
+- printk(" %d cycles per transfer\n", borken_calibration);
+-#endif
+- printk (KERN_INFO "This is one second... ");
+- {
+- int clock;
+- ULOOP (1 * 1000 * 1000) {
+- STATUS;
+- if (TIMEOUT)
+- break;
+- }
+- }
+-
+- printk ("done, %s options:"
+-#ifdef ARBITRATE
+- " ARBITRATE"
+-#endif
+-#if DEBUG
+- " DEBUG"
+-#endif
+-#ifdef FAST
+- " FAST"
+-#ifdef FAST32
+- "32"
+-#endif
+-#endif
+-#ifdef LINKED
+- " LINKED"
+-#endif
+-#ifdef PARITY
+- " PARITY"
+-#endif
+-#ifdef SEAGATE_USE_ASM
+- " SEAGATE_USE_ASM"
+-#endif
+-#ifdef SLOW_RATE
+- " SLOW_RATE"
+-#endif
+-#ifdef SWAPSTAT
+- " SWAPSTAT"
+-#endif
+-#ifdef SWAPCNTDATA
+- " SWAPCNTDATA"
+-#endif
+- "\n", tpnt->name);
+- return 1;
+-}
+-
+-static const char *seagate_st0x_info (struct Scsi_Host *shpnt)
+-{
+- static char buffer[64];
+-
+- snprintf(buffer, 64, "%s at irq %d, address 0x%05X",
+- (controller_type == SEAGATE) ? ST0X_ID_STR : FD_ID_STR,
+- irq, base_address);
+- return buffer;
+-}
+-
+-/*
+- * These are our saved pointers for the outstanding command that is
+- * waiting for a reconnect
+- */
+-
+-static unsigned char current_target, current_lun;
+-static unsigned char *current_cmnd, *current_data;
+-static int current_nobuffs;
+-static struct scatterlist *current_buffer;
+-static int current_bufflen;
+-
+-#ifdef LINKED
+-/*
+- * linked_connected indicates whether or not we are currently connected to
+- * linked_target, linked_lun and in an INFORMATION TRANSFER phase,
+- * using linked commands.
+- */
+-
+-static int linked_connected = 0;
+-static unsigned char linked_target, linked_lun;
+-#endif
+-
+-static void (*done_fn) (struct scsi_cmnd *) = NULL;
+-static struct scsi_cmnd *SCint = NULL;
+-
+-/*
+- * These control whether or not disconnect / reconnect will be attempted,
+- * or are being attempted.
+- */
+-
+-#define NO_RECONNECT 0
+-#define RECONNECT_NOW 1
+-#define CAN_RECONNECT 2
+-
+-/*
+- * LINKED_RIGHT indicates that we are currently connected to the correct target
+- * for this command, LINKED_WRONG indicates that we are connected to the wrong
+- * target. Note that these imply CAN_RECONNECT and require defined(LINKED).
+- */
+-
+-#define LINKED_RIGHT 3
+-#define LINKED_WRONG 4
+-
+-/*
+- * This determines if we are expecting to reconnect or not.
+- */
+-
+-static int should_reconnect = 0;
+-
+-/*
+- * The seagate_reconnect_intr routine is called when a target reselects the
+- * host adapter. This occurs on the interrupt triggered by the target
+- * asserting SEL.
+- */
+-
+-static irqreturn_t do_seagate_reconnect_intr(int irq, void *dev_id)
+-{
+- unsigned long flags;
+- struct Scsi_Host *dev = dev_id;
+-
+- spin_lock_irqsave (dev->host_lock, flags);
+- seagate_reconnect_intr (irq, dev_id);
+- spin_unlock_irqrestore (dev->host_lock, flags);
+- return IRQ_HANDLED;
+-}
+-
+-static void seagate_reconnect_intr (int irq, void *dev_id)
+-{
+- int temp;
+- struct scsi_cmnd *SCtmp;
+-
+- DPRINTK (PHASE_RESELECT, "scsi%d : seagate_reconnect_intr() called\n", hostno);
+-
+- if (!should_reconnect)
+- printk(KERN_WARNING "scsi%d: unexpected interrupt.\n", hostno);
+- else {
+- should_reconnect = 0;
+-
+- DPRINTK (PHASE_RESELECT, "scsi%d : internal_command(%d, %08x, %08x, RECONNECT_NOW)\n",
+- hostno, current_target, current_data, current_bufflen);
+-
+- temp = internal_command (current_target, current_lun, current_cmnd, current_data, current_bufflen, RECONNECT_NOW);
+-
+- if (msg_byte(temp) != DISCONNECT) {
+- if (done_fn) {
+- DPRINTK(PHASE_RESELECT, "scsi%d : done_fn(%d,%08x)", hostno, hostno, temp);
+- if (!SCint)
+- panic ("SCint == NULL in seagate");
+- SCtmp = SCint;
+- SCint = NULL;
+- SCtmp->result = temp;
+- done_fn(SCtmp);
+- } else
+- printk(KERN_ERR "done_fn() not defined.\n");
+- }
+- }
+-}
+-
+-/*
+- * The seagate_st0x_queue_command() function provides a queued interface
+- * to the seagate SCSI driver. Basically, it just passes control onto the
+- * seagate_command() function, after fixing it so that the done_fn()
+- * is set to the one passed to the function. We have to be very careful,
+- * because there are some commands on some devices that do not disconnect,
+- * and if we simply call the done_fn when the command is done then another
+- * command is started and queue_command is called again... We end up
+- * overflowing the kernel stack, and this tends not to be such a good idea.
+- */
+-
+-static int recursion_depth = 0;
+-
+-static int seagate_st0x_queue_command(struct scsi_cmnd * SCpnt,
+- void (*done) (struct scsi_cmnd *))
+-{
+- int result, reconnect;
+- struct scsi_cmnd *SCtmp;
+-
+- DANY ("seagate: que_command");
+- done_fn = done;
+- current_target = SCpnt->device->id;
+- current_lun = SCpnt->device->lun;
+- current_cmnd = SCpnt->cmnd;
+- current_data = (unsigned char *) SCpnt->request_buffer;
+- current_bufflen = SCpnt->request_bufflen;
+- SCint = SCpnt;
+- if (recursion_depth)
+- return 1;
+- recursion_depth++;
+- do {
+-#ifdef LINKED
+- /*
+- * Set linked command bit in control field of SCSI command.
+- */
+-
+- current_cmnd[SCpnt->cmd_len] |= 0x01;
+- if (linked_connected) {
+- DPRINTK (DEBUG_LINKED, "scsi%d : using linked commands, current I_T_L nexus is ", hostno);
+- if (linked_target == current_target && linked_lun == current_lun)
+- {
+- DPRINTK(DEBUG_LINKED, "correct\n");
+- reconnect = LINKED_RIGHT;
+- } else {
+- DPRINTK(DEBUG_LINKED, "incorrect\n");
+- reconnect = LINKED_WRONG;
+- }
+- } else
+-#endif /* LINKED */
+- reconnect = CAN_RECONNECT;
+-
+- result = internal_command(SCint->device->id, SCint->device->lun, SCint->cmnd,
+- SCint->request_buffer, SCint->request_bufflen, reconnect);
+- if (msg_byte(result) == DISCONNECT)
+- break;
+- SCtmp = SCint;
+- SCint = NULL;
+- SCtmp->result = result;
+- done_fn(SCtmp);
+- }
+- while (SCint);
+- recursion_depth--;
+- return 0;
+-}
+-
+-static int internal_command (unsigned char target, unsigned char lun,
+- const void *cmnd, void *buff, int bufflen, int reselect)
+-{
+- unsigned char *data = NULL;
+- struct scatterlist *buffer = NULL;
+- int clock, temp, nobuffs = 0, done = 0, len = 0;
+-#if DEBUG
+- int transfered = 0, phase = 0, newphase;
+-#endif
+- register unsigned char status_read;
+- unsigned char tmp_data, tmp_control, status = 0, message = 0;
+- unsigned transfersize = 0, underflow = 0;
+-#ifdef SLOW_RATE
+- int borken = (int) SCint->device->borken; /* Does the current target require
+- Very Slow I/O ? */
+-#endif
+-
+- incommand = 0;
+- st0x_aborted = 0;
+-
+-#if (DEBUG & PRINT_COMMAND)
+- printk("scsi%d : target = %d, command = ", hostno, target);
+- __scsi_print_command((unsigned char *) cmnd);
+-#endif
+-
+-#if (DEBUG & PHASE_RESELECT)
+- switch (reselect) {
+- case RECONNECT_NOW:
+- printk("scsi%d : reconnecting\n", hostno);
+- break;
+-#ifdef LINKED
+- case LINKED_RIGHT:
+- printk("scsi%d : connected, can reconnect\n", hostno);
+- break;
+- case LINKED_WRONG:
+- printk("scsi%d : connected to wrong target, can reconnect\n",
+- hostno);
+- break;
+-#endif
+- case CAN_RECONNECT:
+- printk("scsi%d : allowed to reconnect\n", hostno);
+- break;
+- default:
+- printk("scsi%d : not allowed to reconnect\n", hostno);
+- }
+-#endif
+-
+- if (target == (controller_type == SEAGATE ? 7 : 6))
+- return DID_BAD_TARGET;
+-
+- /*
+- * We work it differently depending on whether this is "the first time,"
+- * or a reconnect. If this is a reselect phase, then SEL will
+- * be asserted, and we must skip selection / arbitration phases.
+- */
+-
+- switch (reselect) {
+- case RECONNECT_NOW:
+- DPRINTK (PHASE_RESELECT, "scsi%d : phase RESELECT \n", hostno);
+- /*
+- * At this point, we should find the logical or of our ID
+- * and the original target's ID on the BUS, with BSY, SEL,
+- * and I/O signals asserted.
+- *
+- * After ARBITRATION phase is completed, only SEL, BSY,
+- * and the target ID are asserted. A valid initiator ID
+- * is not on the bus until IO is asserted, so we must wait
+- * for that.
+- */
+- ULOOP (100 * 1000) {
+- temp = STATUS;
+- if ((temp & STAT_IO) && !(temp & STAT_BSY))
+- break;
+- if (TIMEOUT) {
+- DPRINTK (PHASE_RESELECT, "scsi%d : RESELECT timed out while waiting for IO .\n", hostno);
+- return (DID_BAD_INTR << 16);
+- }
+- }
+-
+- /*
+- * After I/O is asserted by the target, we can read our ID
+- * and its ID off of the BUS.
+- */
+-
+- if (!((temp = DATA) & (controller_type == SEAGATE ? 0x80 : 0x40))) {
+- DPRINTK (PHASE_RESELECT, "scsi%d : detected reconnect request to different target.\n\tData bus = %d\n", hostno, temp);
+- return (DID_BAD_INTR << 16);
+- }
+-
+- if (!(temp & (1 << current_target))) {
+- printk(KERN_WARNING "scsi%d : Unexpected reselect interrupt. Data bus = %d\n", hostno, temp);
+- return (DID_BAD_INTR << 16);
+- }
+-
+- buffer = current_buffer;
+- cmnd = current_cmnd; /* WDE add */
+- data = current_data; /* WDE add */
+- len = current_bufflen; /* WDE add */
+- nobuffs = current_nobuffs;
+-
+- /*
+- * We have determined that we have been selected. At this
+- * point, we must respond to the reselection by asserting
+- * BSY ourselves
+- */
+-
+-#if 1
+- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | CMD_BSY);
+-#else
+- WRITE_CONTROL (BASE_CMD | CMD_BSY);
+-#endif
+-
+- /*
+- * The target will drop SEL, and raise BSY, at which time
+- * we must drop BSY.
+- */
+-
+- ULOOP (100 * 1000) {
+- if (!(STATUS & STAT_SEL))
+- break;
+- if (TIMEOUT) {
+- WRITE_CONTROL (BASE_CMD | CMD_INTR);
+- DPRINTK (PHASE_RESELECT, "scsi%d : RESELECT timed out while waiting for SEL.\n", hostno);
+- return (DID_BAD_INTR << 16);
+- }
+- }
+- WRITE_CONTROL (BASE_CMD);
+- /*
+- * At this point, we have connected with the target
+- * and can get on with our lives.
+- */
+- break;
+- case CAN_RECONNECT:
+-#ifdef LINKED
+- /*
+- * This is a bletcherous hack, just as bad as the Unix #!
+- * interpreter stuff. If it turns out we are using the wrong
+- * I_T_L nexus, the easiest way to deal with it is to go into
+- * our INFORMATION TRANSFER PHASE code, send an ABORT
+- * message on MESSAGE OUT phase, and then loop back to here.
+- */
+-connect_loop:
+-#endif
+- DPRINTK (PHASE_BUS_FREE, "scsi%d : phase = BUS FREE \n", hostno);
+-
+- /*
+- * BUS FREE PHASE
+- *
+- * On entry, we make sure that the BUS is in a BUS FREE
+- * phase, by ensuring that both BSY and SEL are low for
+- * at least one bus settle delay. Several reads help
+- * eliminate wire glitch.
+- */
+-
+-#ifndef ARBITRATE
+-#error FIXME: this is broken: we may not use jiffies here - we are under cli(). It will hardlock.
+- clock = jiffies + ST0X_BUS_FREE_DELAY;
+-
+- while (((STATUS | STATUS | STATUS) & (STAT_BSY | STAT_SEL)) && (!st0x_aborted) && time_before (jiffies, clock))
+- cpu_relax();
+-
+- if (time_after (jiffies, clock))
+- return retcode (DID_BUS_BUSY);
+- else if (st0x_aborted)
+- return retcode (st0x_aborted);
+-#endif
+- DPRINTK (PHASE_SELECTION, "scsi%d : phase = SELECTION\n", hostno);
+-
+- clock = jiffies + ST0X_SELECTION_DELAY;
+-
+- /*
+- * Arbitration/selection procedure :
+- * 1. Disable drivers
+- * 2. Write HOST adapter address bit
+- * 3. Set start arbitration.
+- * 4. We get either ARBITRATION COMPLETE or SELECT at this
+- * point.
+- * 5. OR our ID and targets on bus.
+- * 6. Enable SCSI drivers and assert SEL and ATTN
+- */
+-
+-#ifdef ARBITRATE
+- /* FIXME: verify host lock is always held here */
+- WRITE_CONTROL(0);
+- WRITE_DATA((controller_type == SEAGATE) ? 0x80 : 0x40);
+- WRITE_CONTROL(CMD_START_ARB);
+-
+- ULOOP (ST0X_SELECTION_DELAY * 10000) {
+- status_read = STATUS;
+- if (status_read & STAT_ARB_CMPL)
+- break;
+- if (st0x_aborted) /* FIXME: What? We are going to do something even after abort? */
+- break;
+- if (TIMEOUT || (status_read & STAT_SEL)) {
+- printk(KERN_WARNING "scsi%d : arbitration lost or timeout.\n", hostno);
+- WRITE_CONTROL (BASE_CMD);
+- return retcode (DID_NO_CONNECT);
+- }
+- }
+- DPRINTK (PHASE_SELECTION, "scsi%d : arbitration complete\n", hostno);
+-#endif
+-
+- /*
+- * When the SCSI device decides that we're gawking at it,
+- * it will respond by asserting BUSY on the bus.
+- *
+- * Note : the Seagate ST-01/02 product manual says that we
+- * should twiddle the DATA register before the control
+- * register. However, this does not work reliably so we do
+- * it the other way around.
+- *
+- * Probably could be a problem with arbitration too, we
+- * really should try this with a SCSI protocol or logic
+- * analyzer to see what is going on.
+- */
+- tmp_data = (unsigned char) ((1 << target) | (controller_type == SEAGATE ? 0x80 : 0x40));
+- tmp_control = BASE_CMD | CMD_DRVR_ENABLE | CMD_SEL | (reselect ? CMD_ATTN : 0);
+-
+- /* FIXME: verify host lock is always held here */
+-#ifdef OLDCNTDATASCEME
+-#ifdef SWAPCNTDATA
+- WRITE_CONTROL (tmp_control);
+- WRITE_DATA (tmp_data);
+-#else
+- WRITE_DATA (tmp_data);
+- WRITE_CONTROL (tmp_control);
+-#endif
+-#else
+- tmp_control ^= CMD_BSY; /* This is guesswork. What used to be in driver */
+- WRITE_CONTROL (tmp_control); /* could never work: it sent data into control */
+- WRITE_DATA (tmp_data); /* register and control info into data. Hopefully */
+- tmp_control ^= CMD_BSY; /* fixed, but order of first two may be wrong. */
+- WRITE_CONTROL (tmp_control); /* -- pavel at ucw.cz */
+-#endif
+-
+- ULOOP (250 * 1000) {
+- if (st0x_aborted) {
+- /*
+- * If we have been aborted, and we have a
+- * command in progress, IE the target
+- * still has BSY asserted, then we will
+- * reset the bus, and notify the midlevel
+- * driver to expect sense.
+- */
+-
+- WRITE_CONTROL (BASE_CMD);
+- if (STATUS & STAT_BSY) {
+- printk(KERN_WARNING "scsi%d : BSY asserted after we've been aborted.\n", hostno);
+- seagate_st0x_bus_reset(NULL);
+- return retcode (DID_RESET);
+- }
+- return retcode (st0x_aborted);
+- }
+- if (STATUS & STAT_BSY)
+- break;
+- if (TIMEOUT) {
+- DPRINTK (PHASE_SELECTION, "scsi%d : NO CONNECT with target %d, stat = %x \n", hostno, target, STATUS);
+- return retcode (DID_NO_CONNECT);
+- }
+- }
+-
+- /* Establish current pointers. Take into account scatter / gather */
+-
+- if ((nobuffs = SCint->use_sg)) {
+-#if (DEBUG & DEBUG_SG)
+- {
+- int i;
+- printk("scsi%d : scatter gather requested, using %d buffers.\n", hostno, nobuffs);
+- for (i = 0; i < nobuffs; ++i)
+- printk("scsi%d : buffer %d address = %p length = %d\n",
+- hostno, i,
+- sg_virt(&buffer[i]),
+- buffer[i].length);
+- }
+-#endif
+-
+- buffer = (struct scatterlist *) SCint->request_buffer;
+- len = buffer->length;
+- data = sg_virt(buffer);
+- } else {
+- DPRINTK (DEBUG_SG, "scsi%d : scatter gather not requested.\n", hostno);
+- buffer = NULL;
+- len = SCint->request_bufflen;
+- data = (unsigned char *) SCint->request_buffer;
+- }
+-
+- DPRINTK (PHASE_DATAIN | PHASE_DATAOUT, "scsi%d : len = %d\n",
+- hostno, len);
+-
+- break;
+-#ifdef LINKED
+- case LINKED_RIGHT:
+- break;
+- case LINKED_WRONG:
+- break;
+-#endif
+- } /* end of switch(reselect) */
+-
+- /*
+- * There are several conditions under which we wish to send a message :
+- * 1. When we are allowing disconnect / reconnect, and need to
+- * establish the I_T_L nexus via an IDENTIFY with the DiscPriv bit
+- * set.
+- *
+- * 2. When we are doing linked commands, and have the wrong I_T_L
+- * nexus established and want to send an ABORT message.
+- */
+-
+- /* GCC does not like an ifdef inside a macro, so do it the hard way. */
+-#ifdef LINKED
+- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | (((reselect == CAN_RECONNECT)|| (reselect == LINKED_WRONG))? CMD_ATTN : 0));
+-#else
+- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | (((reselect == CAN_RECONNECT))? CMD_ATTN : 0));
+-#endif
+-
+- /*
+- * INFORMATION TRANSFER PHASE
+- *
+- * The nasty looking read / write inline assembler loops we use for
+- * DATAIN and DATAOUT phases are approximately 4-5 times as fast as
+- * the 'C' versions - since we're moving 1024 bytes of data, this
+- * really adds up.
+- *
+- * SJT: The nasty-looking assembler is gone, so it's slower.
+- *
+- */
+-
+- DPRINTK (PHASE_ETC, "scsi%d : phase = INFORMATION TRANSFER\n", hostno);
+-
+- incommand = 1;
+- transfersize = SCint->transfersize;
+- underflow = SCint->underflow;
+-
+- /*
+- * Now, we poll the device for status information,
+- * and handle any requests it makes. Note that since we are unsure
+- * of how much data will be flowing across the system, etc., and
+- * cannot make reasonable timeouts, we will instead have the
+- * midlevel driver handle any timeouts that occur in this phase.
+- */
+-
+- while (((status_read = STATUS) & STAT_BSY) && !st0x_aborted && !done) {
+-#ifdef PARITY
+- if (status_read & STAT_PARITY) {
+- printk(KERN_ERR "scsi%d : got parity error\n", hostno);
+- st0x_aborted = DID_PARITY;
+- }
+-#endif
+- if (status_read & STAT_REQ) {
+-#if ((DEBUG & PHASE_ETC) == PHASE_ETC)
+- if ((newphase = (status_read & REQ_MASK)) != phase) {
+- phase = newphase;
+- switch (phase) {
+- case REQ_DATAOUT:
+- printk ("scsi%d : phase = DATA OUT\n", hostno);
+- break;
+- case REQ_DATAIN:
+- printk ("scsi%d : phase = DATA IN\n", hostno);
+- break;
+- case REQ_CMDOUT:
+- printk
+- ("scsi%d : phase = COMMAND OUT\n", hostno);
+- break;
+- case REQ_STATIN:
+- printk ("scsi%d : phase = STATUS IN\n", hostno);
+- break;
+- case REQ_MSGOUT:
+- printk
+- ("scsi%d : phase = MESSAGE OUT\n", hostno);
+- break;
+- case REQ_MSGIN:
+- printk ("scsi%d : phase = MESSAGE IN\n", hostno);
+- break;
+- default:
+- printk ("scsi%d : phase = UNKNOWN\n", hostno);
+- st0x_aborted = DID_ERROR;
+- }
+- }
+-#endif
+- switch (status_read & REQ_MASK) {
+- case REQ_DATAOUT:
+- /*
+- * If we are in fast mode, then we simply splat
+- * the data out in word-sized chunks as fast as
+- * we can.
+- */
+-
+- if (!len) {
+-#if 0
+- printk("scsi%d: underflow to target %d lun %d \n", hostno, target, lun);
+- st0x_aborted = DID_ERROR;
+- fast = 0;
+-#endif
+- break;
+- }
+-
+- if (fast && transfersize
+- && !(len % transfersize)
+- && (len >= transfersize)
+-#ifdef FAST32
+- && !(transfersize % 4)
+-#endif
+- ) {
+- DPRINTK (DEBUG_FAST,
+- "scsi%d : FAST transfer, underflow = %d, transfersize = %d\n"
+- " len = %d, data = %08x\n",
+- hostno, SCint->underflow,
+- SCint->transfersize, len,
+- data);
+-
+- /* SJT: Start. Fast Write */
+-#ifdef SEAGATE_USE_ASM
+- __asm__ ("cld\n\t"
+-#ifdef FAST32
+- "shr $2, %%ecx\n\t"
+- "1:\t"
+- "lodsl\n\t"
+- "movl %%eax, (%%edi)\n\t"
+-#else
+- "1:\t"
+- "lodsb\n\t"
+- "movb %%al, (%%edi)\n\t"
+-#endif
+- "loop 1b;"
+- /* output */ :
+- /* input */ :"D" (st0x_dr),
+- "S"
+- (data),
+- "c" (SCint->transfersize)
+-/* clobbered */
+- : "eax", "ecx",
+- "esi");
+-#else /* SEAGATE_USE_ASM */
+- memcpy_toio(st0x_dr, data, transfersize);
+-#endif /* SEAGATE_USE_ASM */
+-/* SJT: End */
+- len -= transfersize;
+- data += transfersize;
+- DPRINTK (DEBUG_FAST, "scsi%d : FAST transfer complete len = %d data = %08x\n", hostno, len, data);
+- } else {
+- /*
+- * We loop as long as we are in a
+- * data out phase, there is data to
+- * send, and BSY is still active.
+- */
+-
+-/* SJT: Start. Slow Write. */
+-#ifdef SEAGATE_USE_ASM
+-
+- int __dummy_1, __dummy_2;
+-
+-/*
+- * We loop as long as we are in a data out phase, there is data to send,
+- * and BSY is still active.
+- */
+-/* Local variables : len = ecx , data = esi,
+- st0x_cr_sr = ebx, st0x_dr = edi
+-*/
+- __asm__ (
+- /* Test for any data here at all. */
+- "orl %%ecx, %%ecx\n\t"
+- "jz 2f\n\t" "cld\n\t"
+-/* "movl st0x_cr_sr, %%ebx\n\t" */
+-/* "movl st0x_dr, %%edi\n\t" */
+- "1:\t"
+- "movb (%%ebx), %%al\n\t"
+- /* Test for BSY */
+- "test $1, %%al\n\t"
+- "jz 2f\n\t"
+- /* Test for data out phase - STATUS & REQ_MASK should be
+- REQ_DATAOUT, which is 0. */
+- "test $0xe, %%al\n\t"
+- "jnz 2f\n\t"
+- /* Test for REQ */
+- "test $0x10, %%al\n\t"
+- "jz 1b\n\t"
+- "lodsb\n\t"
+- "movb %%al, (%%edi)\n\t"
+- "loop 1b\n\t" "2:\n"
+- /* output */ :"=S" (data), "=c" (len),
+- "=b"
+- (__dummy_1),
+- "=D" (__dummy_2)
+-/* input */
+- : "0" (data), "1" (len),
+- "2" (st0x_cr_sr),
+- "3" (st0x_dr)
+-/* clobbered */
+- : "eax");
+-#else /* SEAGATE_USE_ASM */
+- while (len) {
+- unsigned char stat;
+-
+- stat = STATUS;
+- if (!(stat & STAT_BSY)
+- || ((stat & REQ_MASK) !=
+- REQ_DATAOUT))
+- break;
+- if (stat & STAT_REQ) {
+- WRITE_DATA (*data++);
+- --len;
+- }
+- }
+-#endif /* SEAGATE_USE_ASM */
+-/* SJT: End. */
+- }
+-
+- if (!len && nobuffs) {
+- --nobuffs;
+- ++buffer;
+- len = buffer->length;
+- data = sg_virt(buffer);
+- DPRINTK (DEBUG_SG,
+- "scsi%d : next scatter-gather buffer len = %d address = %08x\n",
+- hostno, len, data);
+- }
+- break;
+-
+- case REQ_DATAIN:
+-#ifdef SLOW_RATE
+- if (borken) {
+-#if (DEBUG & (PHASE_DATAIN))
+- transfered += len;
+-#endif
+- for (; len && (STATUS & (REQ_MASK | STAT_REQ)) == (REQ_DATAIN | STAT_REQ); --len) {
+- *data++ = DATA;
+- borken_wait();
+- }
+-#if (DEBUG & (PHASE_DATAIN))
+- transfered -= len;
+-#endif
+- } else
+-#endif
+-
+- if (fast && transfersize
+- && !(len % transfersize)
+- && (len >= transfersize)
+-#ifdef FAST32
+- && !(transfersize % 4)
+-#endif
+- ) {
+- DPRINTK (DEBUG_FAST,
+- "scsi%d : FAST transfer, underflow = %d, transfersize = %d\n"
+- " len = %d, data = %08x\n",
+- hostno, SCint->underflow,
+- SCint->transfersize, len,
+- data);
+-
+-/* SJT: Start. Fast Read */
+-#ifdef SEAGATE_USE_ASM
+- __asm__ ("cld\n\t"
+-#ifdef FAST32
+- "shr $2, %%ecx\n\t"
+- "1:\t"
+- "movl (%%esi), %%eax\n\t"
+- "stosl\n\t"
+-#else
+- "1:\t"
+- "movb (%%esi), %%al\n\t"
+- "stosb\n\t"
+-#endif
+- "loop 1b\n\t"
+- /* output */ :
+- /* input */ :"S" (st0x_dr),
+- "D"
+- (data),
+- "c" (SCint->transfersize)
+-/* clobbered */
+- : "eax", "ecx",
+- "edi");
+-#else /* SEAGATE_USE_ASM */
+- memcpy_fromio(data, st0x_dr, len);
+-#endif /* SEAGATE_USE_ASM */
+-/* SJT: End */
+- len -= transfersize;
+- data += transfersize;
+-#if (DEBUG & PHASE_DATAIN)
+- printk ("scsi%d: transfered += %d\n", hostno, transfersize);
+- transfered += transfersize;
+-#endif
+-
+- DPRINTK (DEBUG_FAST, "scsi%d : FAST transfer complete len = %d data = %08x\n", hostno, len, data);
+- } else {
+-
+-#if (DEBUG & PHASE_DATAIN)
+- printk ("scsi%d: transfered += %d\n", hostno, len);
+- transfered += len; /* Assume we'll transfer it all, then
+- subtract what we *didn't* transfer */
+-#endif
+-
+-/*
+- * We loop as long as we are in a data in phase, there is room to read,
+- * and BSY is still active
+- */
+-
+-/* SJT: Start. */
+-#ifdef SEAGATE_USE_ASM
+-
+- int __dummy_3, __dummy_4;
+-
+-/* Dummy clobbering variables for the new gcc-2.95 */
+-
+-/*
+- * We loop as long as we are in a data in phase, there is room to read,
+- * and BSY is still active
+- */
+- /* Local variables : ecx = len, edi = data
+- esi = st0x_cr_sr, ebx = st0x_dr */
+- __asm__ (
+- /* Test for room to read */
+- "orl %%ecx, %%ecx\n\t"
+- "jz 2f\n\t" "cld\n\t"
+-/* "movl st0x_cr_sr, %%esi\n\t" */
+-/* "movl st0x_dr, %%ebx\n\t" */
+- "1:\t"
+- "movb (%%esi), %%al\n\t"
+- /* Test for BSY */
+- "test $1, %%al\n\t"
+- "jz 2f\n\t"
+- /* Test for data in phase - STATUS & REQ_MASK should be REQ_DATAIN,
+- = STAT_IO, which is 4. */
+- "movb $0xe, %%ah\n\t"
+- "andb %%al, %%ah\n\t"
+- "cmpb $0x04, %%ah\n\t"
+- "jne 2f\n\t"
+- /* Test for REQ */
+- "test $0x10, %%al\n\t"
+- "jz 1b\n\t"
+- "movb (%%ebx), %%al\n\t"
+- "stosb\n\t"
+- "loop 1b\n\t" "2:\n"
+- /* output */ :"=D" (data), "=c" (len),
+- "=S"
+- (__dummy_3),
+- "=b" (__dummy_4)
+-/* input */
+- : "0" (data), "1" (len),
+- "2" (st0x_cr_sr),
+- "3" (st0x_dr)
+-/* clobbered */
+- : "eax");
+-#else /* SEAGATE_USE_ASM */
+- while (len) {
+- unsigned char stat;
+-
+- stat = STATUS;
+- if (!(stat & STAT_BSY)
+- || ((stat & REQ_MASK) !=
+- REQ_DATAIN))
+- break;
+- if (stat & STAT_REQ) {
+- *data++ = DATA;
+- --len;
+- }
+- }
+-#endif /* SEAGATE_USE_ASM */
+-/* SJT: End. */
+-#if (DEBUG & PHASE_DATAIN)
+- printk ("scsi%d: transfered -= %d\n", hostno, len);
+- transfered -= len; /* Since we assumed all of len got
+- transferred, correct our mistake */
+-#endif
+- }
+-
+- if (!len && nobuffs) {
+- --nobuffs;
+- ++buffer;
+- len = buffer->length;
+- data = sg_virt(buffer);
+- DPRINTK (DEBUG_SG, "scsi%d : next scatter-gather buffer len = %d address = %08x\n", hostno, len, data);
+- }
+- break;
+-
+- case REQ_CMDOUT:
+- while (((status_read = STATUS) & STAT_BSY) &&
+- ((status_read & REQ_MASK) == REQ_CMDOUT))
+- if (status_read & STAT_REQ) {
+- WRITE_DATA (*(const unsigned char *) cmnd);
+- cmnd = 1 + (const unsigned char *)cmnd;
+-#ifdef SLOW_RATE
+- if (borken)
+- borken_wait ();
+-#endif
+- }
+- break;
+-
+- case REQ_STATIN:
+- status = DATA;
+- break;
+-
+- case REQ_MSGOUT:
+- /*
+- * We can only have sent a MSG OUT if we
+- * requested to do this by raising ATTN.
+- * So, we must drop ATTN.
+- */
+- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE);
+- /*
+- * If we are reconnecting, then we must
+- * send an IDENTIFY message in response
+- * to MSGOUT.
+- */
+- switch (reselect) {
+- case CAN_RECONNECT:
+- WRITE_DATA (IDENTIFY (1, lun));
+- DPRINTK (PHASE_RESELECT | PHASE_MSGOUT, "scsi%d : sent IDENTIFY message.\n", hostno);
+- break;
+-#ifdef LINKED
+- case LINKED_WRONG:
+- WRITE_DATA (ABORT);
+- linked_connected = 0;
+- reselect = CAN_RECONNECT;
+- goto connect_loop;
+- DPRINTK (PHASE_MSGOUT | DEBUG_LINKED, "scsi%d : sent ABORT message to cancel incorrect I_T_L nexus.\n", hostno);
+-#endif /* LINKED */
+- DPRINTK (DEBUG_LINKED, "correct\n");
+- default:
+- WRITE_DATA (NOP);
+- printk("scsi%d : target %d requested MSGOUT, sent NOP message.\n", hostno, target);
+- }
+- break;
+-
+- case REQ_MSGIN:
+- switch (message = DATA) {
+- case DISCONNECT:
+- DANY("seagate: deciding to disconnect\n");
+- should_reconnect = 1;
+- current_data = data; /* WDE add */
+- current_buffer = buffer;
+- current_bufflen = len; /* WDE add */
+- current_nobuffs = nobuffs;
+-#ifdef LINKED
+- linked_connected = 0;
+-#endif
+- done = 1;
+- DPRINTK ((PHASE_RESELECT | PHASE_MSGIN), "scsi%d : disconnected.\n", hostno);
+- break;
+-
+-#ifdef LINKED
+- case LINKED_CMD_COMPLETE:
+- case LINKED_FLG_CMD_COMPLETE:
+-#endif
+- case COMMAND_COMPLETE:
+- /*
+- * Note : we should check for underflow here.
+- */
+- DPRINTK(PHASE_MSGIN, "scsi%d : command complete.\n", hostno);
+- done = 1;
+- break;
+- case ABORT:
+- DPRINTK(PHASE_MSGIN, "scsi%d : abort message.\n", hostno);
+- done = 1;
+- break;
+- case SAVE_POINTERS:
+- current_buffer = buffer;
+- current_bufflen = len; /* WDE add */
+- current_data = data; /* WDE mod */
+- current_nobuffs = nobuffs;
+- DPRINTK (PHASE_MSGIN, "scsi%d : pointers saved.\n", hostno);
+- break;
+- case RESTORE_POINTERS:
+- buffer = current_buffer;
+- cmnd = current_cmnd;
+- data = current_data; /* WDE mod */
+- len = current_bufflen;
+- nobuffs = current_nobuffs;
+- DPRINTK(PHASE_MSGIN, "scsi%d : pointers restored.\n", hostno);
+- break;
+- default:
+-
+- /*
+- * IDENTIFY distinguishes itself
+- * from the other messages by
+- * setting the high bit.
+- *
+- * Note : we need to handle at
+- * least one outstanding command
+- * per LUN, and need to hash the
+- * SCSI command for that I_T_L
+- * nexus based on the known ID
+- * (at this point) and LUN.
+- */
+-
+- if (message & 0x80) {
+- DPRINTK (PHASE_MSGIN, "scsi%d : IDENTIFY message received from id %d, lun %d.\n", hostno, target, message & 7);
+- } else {
+- /*
+- * We should go into a
+- * MESSAGE OUT phase, and
+- * send a MESSAGE_REJECT
+- * if we run into a message
+- * that we don't like. The
+- * seagate driver needs
+- * some serious
+- * restructuring first
+- * though.
+- */
+- DPRINTK (PHASE_MSGIN, "scsi%d : unknown message %d from target %d.\n", hostno, message, target);
+- }
+- }
+- break;
+- default:
+- printk(KERN_ERR "scsi%d : unknown phase.\n", hostno);
+- st0x_aborted = DID_ERROR;
+- } /* end of switch (status_read & REQ_MASK) */
+-#ifdef SLOW_RATE
+- /*
+- * I really don't care to deal with borken devices in
+- * each single byte transfer case (ie, message in,
+- * message out, status), so I'll do the wait here if
+- * necessary.
+- */
+- if(borken)
+- borken_wait();
+-#endif
-
-- kthread_stop(phba->worker_thread);
-
- /* Release the irq reservation */
- free_irq(phba->pcidev->irq, phba);
-@@ -2048,7 +2244,7 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
-
- kfree(phba);
-
-- pci_release_regions(pdev);
-+ pci_release_selected_regions(pdev, bars);
- pci_disable_device(pdev);
- }
-
-@@ -2239,12 +2435,22 @@ lpfc_init(void)
- printk(LPFC_MODULE_DESC "\n");
- printk(LPFC_COPYRIGHT "\n");
+- } /* if(status_read & STAT_REQ) ends */
+- } /* while(((status_read = STATUS)...) ends */
+-
+- DPRINTK(PHASE_DATAIN | PHASE_DATAOUT | PHASE_EXIT, "scsi%d : Transferred %d bytes\n", hostno, transfered);
+-
+-#if (DEBUG & PHASE_EXIT)
+-#if 0 /* Doesn't work for scatter/gather */
+- printk("Buffer : \n");
+- for(i = 0; i < 20; ++i)
+- printk("%02x ", ((unsigned char *) data)[i]); /* WDE mod */
+- printk("\n");
+-#endif
+- printk("scsi%d : status = ", hostno);
+- scsi_print_status(status);
+- printk(" message = %02x\n", message);
+-#endif
+-
+- /* We shouldn't reach this until *after* BSY has been deasserted */
+-
+-#ifdef LINKED
+- else
+- {
+- /*
+- * Fix the message byte so that unsuspecting high level drivers
+- * don't puke when they see a LINKED COMMAND message in place of
+- * the COMMAND COMPLETE they may be expecting. Shouldn't be
+- * necessary, but it's better to be on the safe side.
+- *
+- * A non LINKED* message byte will indicate that the command
+- * completed, and we are now disconnected.
+- */
+-
+- switch (message) {
+- case LINKED_CMD_COMPLETE:
+- case LINKED_FLG_CMD_COMPLETE:
+- message = COMMAND_COMPLETE;
+- linked_target = current_target;
+- linked_lun = current_lun;
+- linked_connected = 1;
+- DPRINTK (DEBUG_LINKED, "scsi%d : keeping I_T_L nexus established for linked command.\n", hostno);
+- /* We also will need to adjust status to accommodate intermediate
+- conditions. */
+- if ((status == INTERMEDIATE_GOOD) || (status == INTERMEDIATE_C_GOOD))
+- status = GOOD;
+- break;
+- /*
+- * We should also handle what are "normal" termination
+- * messages here (ABORT, BUS_DEVICE_RESET?, and
+- * COMMAND_COMPLETE individually, and flake if things
+- * aren't right.
+- */
+- default:
+- DPRINTK (DEBUG_LINKED, "scsi%d : closing I_T_L nexus.\n", hostno);
+- linked_connected = 0;
+- }
+- }
+-#endif /* LINKED */
+-
+- if (should_reconnect) {
+- DPRINTK (PHASE_RESELECT, "scsi%d : exiting seagate_st0x_queue_command() with reconnect enabled.\n", hostno);
+- WRITE_CONTROL (BASE_CMD | CMD_INTR);
+- } else
+- WRITE_CONTROL (BASE_CMD);
+-
+- return retcode (st0x_aborted);
+-} /* end of internal_command */
+-
+-static int seagate_st0x_abort(struct scsi_cmnd * SCpnt)
+-{
+- st0x_aborted = DID_ABORT;
+- return SUCCESS;
+-}
+-
+-#undef ULOOP
+-#undef TIMEOUT
+-
+-/*
+- * the seagate_st0x_reset function resets the SCSI bus
+- *
+- * May be called with SCpnt = NULL
+- */
+-
+-static int seagate_st0x_bus_reset(struct scsi_cmnd * SCpnt)
+-{
+- /* No timeouts - this command is going to fail because it was reset. */
+- DANY ("scsi%d: Reseting bus... ", hostno);
+-
+- /* assert RESET signal on SCSI bus. */
+- WRITE_CONTROL (BASE_CMD | CMD_RST);
+-
+- mdelay (20);
+-
+- WRITE_CONTROL (BASE_CMD);
+- st0x_aborted = DID_RESET;
+-
+- DANY ("done.\n");
+- return SUCCESS;
+-}
+-
+-static int seagate_st0x_release(struct Scsi_Host *shost)
+-{
+- if (shost->irq)
+- free_irq(shost->irq, shost);
+- release_region(shost->io_port, shost->n_io_port);
+- return 0;
+-}
+-
+-static struct scsi_host_template driver_template = {
+- .detect = seagate_st0x_detect,
+- .release = seagate_st0x_release,
+- .info = seagate_st0x_info,
+- .queuecommand = seagate_st0x_queue_command,
+- .eh_abort_handler = seagate_st0x_abort,
+- .eh_bus_reset_handler = seagate_st0x_bus_reset,
+- .can_queue = 1,
+- .this_id = 7,
+- .sg_tablesize = SG_ALL,
+- .cmd_per_lun = 1,
+- .use_clustering = DISABLE_CLUSTERING,
+-};
+-#include "scsi_module.c"
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index f1871ea..aba28f3 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -48,6 +48,7 @@ static int sg_version_num = 30534; /* 2 digits for each component */
+ #include <linux/blkdev.h>
+ #include <linux/delay.h>
+ #include <linux/scatterlist.h>
++#include <linux/blktrace_api.h>
-+ if (lpfc_enable_npiv) {
-+ lpfc_transport_functions.vport_create = lpfc_vport_create;
-+ lpfc_transport_functions.vport_delete = lpfc_vport_delete;
-+ }
- lpfc_transport_template =
- fc_attach_transport(&lpfc_transport_functions);
-- lpfc_vport_transport_template =
-- fc_attach_transport(&lpfc_vport_transport_functions);
-- if (!lpfc_transport_template || !lpfc_vport_transport_template)
-+ if (lpfc_transport_template == NULL)
- return -ENOMEM;
-+ if (lpfc_enable_npiv) {
-+ lpfc_vport_transport_template =
-+ fc_attach_transport(&lpfc_vport_transport_functions);
-+ if (lpfc_vport_transport_template == NULL) {
-+ fc_release_transport(lpfc_transport_template);
-+ return -ENOMEM;
+ #include "scsi.h"
+ #include <scsi/scsi_dbg.h>
+@@ -602,8 +603,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ * but is is possible that the app intended SG_DXFER_TO_DEV, because there
+ * is a non-zero input_size, so emit a warning.
+ */
+- if (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV)
+- if (printk_ratelimit())
++ if (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV) {
++ static char cmd[TASK_COMM_LEN];
++ if (strcmp(current->comm, cmd) && printk_ratelimit()) {
+ printk(KERN_WARNING
+ "sg_write: data in/out %d/%d bytes for SCSI command 0x%x--"
+ "guessing data in;\n" KERN_WARNING " "
+@@ -611,6 +613,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ old_hdr.reply_len - (int)SZ_SG_HEADER,
+ input_size, (unsigned int) cmnd[0],
+ current->comm);
++ strcpy(cmd, current->comm);
+ }
+ }
- error = pci_register_driver(&lpfc_driver);
- if (error) {
- fc_release_transport(lpfc_transport_template);
-@@ -2259,7 +2465,8 @@ lpfc_exit(void)
- {
- pci_unregister_driver(&lpfc_driver);
- fc_release_transport(lpfc_transport_template);
-- fc_release_transport(lpfc_vport_transport_template);
-+ if (lpfc_enable_npiv)
-+ fc_release_transport(lpfc_vport_transport_template);
- }
-
- module_init(lpfc_init);
-diff --git a/drivers/scsi/lpfc/lpfc_logmsg.h b/drivers/scsi/lpfc/lpfc_logmsg.h
-index 626e4d8..c5841d7 100644
---- a/drivers/scsi/lpfc/lpfc_logmsg.h
-+++ b/drivers/scsi/lpfc/lpfc_logmsg.h
-@@ -26,6 +26,7 @@
- #define LOG_IP 0x20 /* IP traffic history */
- #define LOG_FCP 0x40 /* FCP traffic history */
- #define LOG_NODE 0x80 /* Node table events */
-+#define LOG_TEMP 0x100 /* Temperature sensor events */
- #define LOG_MISC 0x400 /* Miscellaneous events */
- #define LOG_SLI 0x800 /* SLI events */
- #define LOG_FCP_ERROR 0x1000 /* log errors, not underruns */
-diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
-index a592733..dfc63f6 100644
---- a/drivers/scsi/lpfc/lpfc_mbox.c
-+++ b/drivers/scsi/lpfc/lpfc_mbox.c
-@@ -82,6 +82,24 @@ lpfc_read_nv(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+ k = sg_common_write(sfp, srp, cmnd, sfp->timeout, blocking);
+ return (k < 0) ? k : count;
}
-
- /**********************************************/
-+/* lpfc_config_async Issue a */
-+/* MBX_ASYNC_EVT_ENABLE mailbox command */
-+/**********************************************/
-+void
-+lpfc_config_async(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
-+ uint32_t ring)
-+{
-+ MAILBOX_t *mb;
-+
-+ mb = &pmb->mb;
-+ memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
-+ mb->mbxCommand = MBX_ASYNCEVT_ENABLE;
-+ mb->un.varCfgAsyncEvent.ring = ring;
-+ mb->mbxOwner = OWN_HOST;
-+ return;
-+}
-+
-+/**********************************************/
- /* lpfc_heart_beat Issue a HEART_BEAT */
- /* mailbox command */
- /**********************************************/
-@@ -270,8 +288,10 @@ lpfc_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, int vpi)
-
- /* Get a buffer to hold the HBAs Service Parameters */
-
-- if (((mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
-- ((mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys))) == 0)) {
-+ mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
-+ if (mp)
-+ mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
-+ if (!mp || !mp->virt) {
- kfree(mp);
- mb->mbxCommand = MBX_READ_SPARM64;
- /* READ_SPARAM: no buffers */
-@@ -369,8 +389,10 @@ lpfc_reg_login(struct lpfc_hba *phba, uint16_t vpi, uint32_t did,
- mb->mbxOwner = OWN_HOST;
-
- /* Get a buffer to hold NPorts Service Parameters */
-- if (((mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL)) == NULL) ||
-- ((mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys))) == 0)) {
-+ mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
-+ if (mp)
-+ mp->virt = lpfc_mbuf_alloc(phba, 0, &mp->phys);
-+ if (!mp || !mp->virt) {
- kfree(mp);
- mb->mbxCommand = MBX_REG_LOGIN64;
- /* REG_LOGIN: no buffers */
-@@ -874,7 +896,7 @@ lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd)
- case MBX_DOWN_LOAD: /* 0x1C */
- case MBX_DEL_LD_ENTRY: /* 0x1D */
- case MBX_LOAD_AREA: /* 0x81 */
-- case MBX_FLASH_WR_ULA: /* 0x98 */
-+ case MBX_WRITE_WWN: /* 0x98 */
- case MBX_LOAD_EXP_ROM: /* 0x9C */
- return LPFC_MBOX_TMO_FLASH_CMD;
+@@ -1063,6 +1068,17 @@ sg_ioctl(struct inode *inode, struct file *filp,
+ case BLKSECTGET:
+ return put_user(sdp->device->request_queue->max_sectors * 512,
+ ip);
++ case BLKTRACESETUP:
++ return blk_trace_setup(sdp->device->request_queue,
++ sdp->disk->disk_name,
++ sdp->device->sdev_gendev.devt,
++ (char *)arg);
++ case BLKTRACESTART:
++ return blk_trace_startstop(sdp->device->request_queue, 1);
++ case BLKTRACESTOP:
++ return blk_trace_startstop(sdp->device->request_queue, 0);
++ case BLKTRACETEARDOWN:
++ return blk_trace_remove(sdp->device->request_queue);
+ default:
+ if (read_only)
+ return -EPERM; /* don't know so take safe approach */
+@@ -1418,7 +1434,6 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
+ goto out;
}
-diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
-index 43c3b8a..6dc5ab8 100644
---- a/drivers/scsi/lpfc/lpfc_mem.c
-+++ b/drivers/scsi/lpfc/lpfc_mem.c
-@@ -98,6 +98,7 @@ lpfc_mem_alloc(struct lpfc_hba * phba)
- fail_free_hbq_pool:
- lpfc_sli_hbqbuf_free_all(phba);
-+ pci_pool_destroy(phba->lpfc_hbq_pool);
- fail_free_nlp_mem_pool:
- mempool_destroy(phba->nlp_mem_pool);
- phba->nlp_mem_pool = NULL;
-diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
-index 880af0c..4a0e340 100644
---- a/drivers/scsi/lpfc/lpfc_nportdisc.c
-+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
-@@ -287,6 +287,24 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
- lp = (uint32_t *) pcmd->virt;
- sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-+ if (wwn_to_u64(sp->portName.u.wwn) == 0) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ "0140 PLOGI Reject: invalid nname\n");
-+ stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
-+ stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_PNAME;
-+ lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
-+ NULL);
-+ return 0;
-+ }
-+ if (wwn_to_u64(sp->nodeName.u.wwn) == 0) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ "0141 PLOGI Reject: invalid pname\n");
-+ stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
-+ stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_NNAME;
-+ lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
-+ NULL);
-+ return 0;
-+ }
- if ((lpfc_check_sparm(vport, ndlp, sp, CLASS3) == 0)) {
- /* Reject this request because invalid parameters */
- stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
-@@ -343,8 +361,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- lpfc_config_link(phba, mbox);
- mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
- mbox->vport = vport;
-- rc = lpfc_sli_issue_mbox
-- (phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
-+ rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(mbox, phba->mbox_mem_pool);
- goto out;
-@@ -407,6 +424,61 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- ndlp, mbox);
- return 1;
- }
-+
-+ /* If the remote NPort logs into us, before we can initiate
-+ * discovery to them, cleanup the NPort from discovery accordingly.
-+ */
-+ if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
-+ spin_lock_irq(shost->host_lock);
-+ ndlp->nlp_flag &= ~NLP_DELAY_TMO;
-+ spin_unlock_irq(shost->host_lock);
-+ del_timer_sync(&ndlp->nlp_delayfunc);
-+ ndlp->nlp_last_elscmd = 0;
-+
-+ if (!list_empty(&ndlp->els_retry_evt.evt_listp))
-+ list_del_init(&ndlp->els_retry_evt.evt_listp);
-+
-+ if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
-+ spin_lock_irq(shost->host_lock);
-+ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
-+ spin_unlock_irq(shost->host_lock);
-+
-+ if ((ndlp->nlp_flag & NLP_ADISC_SND) &&
-+ (vport->num_disc_nodes)) {
-+ /* Check to see if there are more
-+ * ADISCs to be sent
-+ */
-+ lpfc_more_adisc(vport);
-+
-+ if ((vport->num_disc_nodes == 0) &&
-+ (vport->fc_npr_cnt))
-+ lpfc_els_disc_plogi(vport);
-+
-+ if (vport->num_disc_nodes == 0) {
-+ spin_lock_irq(shost->host_lock);
-+ vport->fc_flag &= ~FC_NDISC_ACTIVE;
-+ spin_unlock_irq(shost->host_lock);
-+ lpfc_can_disctmo(vport);
-+ lpfc_end_rscn(vport);
-+ }
-+ }
-+ else if (vport->num_disc_nodes) {
-+ /* Check to see if there are more
-+ * PLOGIs to be sent
-+ */
-+ lpfc_more_plogi(vport);
-+
-+ if (vport->num_disc_nodes == 0) {
-+ spin_lock_irq(shost->host_lock);
-+ vport->fc_flag &= ~FC_NDISC_ACTIVE;
-+ spin_unlock_irq(shost->host_lock);
-+ lpfc_can_disctmo(vport);
-+ lpfc_end_rscn(vport);
-+ }
-+ }
+- class_set_devdata(cl_dev, sdp);
+ error = cdev_add(cdev, MKDEV(SCSI_GENERIC_MAJOR, sdp->index), 1);
+ if (error)
+ goto cdev_add_err;
+@@ -1431,11 +1446,14 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
+ MKDEV(SCSI_GENERIC_MAJOR, sdp->index),
+ cl_dev->dev, "%s",
+ disk->disk_name);
+- if (IS_ERR(sg_class_member))
+- printk(KERN_WARNING "sg_add: "
+- "class_device_create failed\n");
++ if (IS_ERR(sg_class_member)) {
++ printk(KERN_ERR "sg_add: "
++ "class_device_create failed\n");
++ error = PTR_ERR(sg_class_member);
++ goto cdev_add_err;
+ }
-+ }
-+
- lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox);
- return 1;
-
-@@ -501,12 +573,9 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- spin_unlock_irq(shost->host_lock);
-
- ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
-- ndlp->nlp_prev_state = ndlp->nlp_state;
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
-- } else {
-- ndlp->nlp_prev_state = ndlp->nlp_state;
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
- }
-+ ndlp->nlp_prev_state = ndlp->nlp_state;
-+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
-
- spin_lock_irq(shost->host_lock);
- ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-@@ -594,6 +663,25 @@ lpfc_disc_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- return ndlp->nlp_state;
- }
+ class_set_devdata(sg_class_member, sdp);
+- error = sysfs_create_link(&scsidp->sdev_gendev.kobj,
++ error = sysfs_create_link(&scsidp->sdev_gendev.kobj,
+ &sg_class_member->kobj, "generic");
+ if (error)
+ printk(KERN_ERR "sg_add: unable to make symlink "
+@@ -1447,6 +1465,8 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
+ "Attached scsi generic sg%d type %d\n", sdp->index,
+ scsidp->type);
-+static uint32_t
-+lpfc_cmpl_plogi_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
-+ void *arg, uint32_t evt)
-+{
-+ /* This transition is only legal if we previously
-+ * rcv'ed a PLOGI. Since we don't want 2 discovery threads
-+ * working on the same NPortID, do nothing for this thread
-+ * to stop it.
-+ */
-+ if (!(ndlp->nlp_flag & NLP_RCV_PLOGI)) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
-+ "0253 Illegal State Transition: node x%x "
-+ "event x%x, state x%x Data: x%x x%x\n",
-+ ndlp->nlp_DID, evt, ndlp->nlp_state, ndlp->nlp_rpi,
-+ ndlp->nlp_flag);
-+ }
-+ return ndlp->nlp_state;
-+}
++ class_set_devdata(cl_dev, sdp);
+
- /* Start of Discovery State Machine routines */
-
- static uint32_t
-@@ -605,11 +693,8 @@ lpfc_rcv_plogi_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- cmdiocb = (struct lpfc_iocbq *) arg;
-
- if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
-- ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
- return ndlp->nlp_state;
- }
-- lpfc_drop_node(vport, ndlp);
- return NLP_STE_FREED_NODE;
- }
-
-@@ -618,7 +703,6 @@ lpfc_rcv_els_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- void *arg, uint32_t evt)
- {
- lpfc_issue_els_logo(vport, ndlp, 0);
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
- return ndlp->nlp_state;
- }
-
-@@ -633,7 +717,6 @@ lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- ndlp->nlp_flag |= NLP_LOGO_ACC;
- spin_unlock_irq(shost->host_lock);
- lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+ return 0;
- return ndlp->nlp_state;
- }
-@@ -642,7 +725,6 @@ static uint32_t
- lpfc_cmpl_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- void *arg, uint32_t evt)
+ cdev_add_err:
+@@ -2521,7 +2541,7 @@ sg_idr_max_id(int id, void *p, void *data)
+ static int
+ sg_last_dev(void)
{
-- lpfc_drop_node(vport, ndlp);
- return NLP_STE_FREED_NODE;
- }
+- int k = 0;
++ int k = -1;
+ unsigned long iflags;
-@@ -650,7 +732,6 @@ static uint32_t
- lpfc_device_rm_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- void *arg, uint32_t evt)
- {
-- lpfc_drop_node(vport, ndlp);
- return NLP_STE_FREED_NODE;
+ read_lock_irqsave(&sg_index_lock, iflags);
+diff --git a/drivers/scsi/sgiwd93.c b/drivers/scsi/sgiwd93.c
+index eef8275..d4ebe8c 100644
+--- a/drivers/scsi/sgiwd93.c
++++ b/drivers/scsi/sgiwd93.c
+@@ -159,6 +159,7 @@ void sgiwd93_reset(unsigned long base)
+ udelay(50);
+ hregs->ctrl = 0;
}
++EXPORT_SYMBOL_GPL(sgiwd93_reset);
-@@ -752,6 +833,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
- uint32_t evt)
+ static inline void init_hpc_chain(struct hpc_data *hd)
{
- struct lpfc_hba *phba = vport->phba;
-+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
- struct lpfc_iocbq *cmdiocb, *rspiocb;
- struct lpfc_dmabuf *pcmd, *prsp, *mp;
- uint32_t *lp;
-@@ -778,6 +860,12 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
-
- lp = (uint32_t *) prsp->virt;
- sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-+ if (wwn_to_u64(sp->portName.u.wwn) == 0 ||
-+ wwn_to_u64(sp->nodeName.u.wwn) == 0) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-+ "0142 PLOGI RSP: Invalid WWN.\n");
-+ goto out;
-+ }
- if (!lpfc_check_sparm(vport, ndlp, sp, CLASS3))
- goto out;
- /* PLOGI chkparm OK */
-@@ -828,13 +916,15 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
- }
- mbox->context2 = lpfc_nlp_get(ndlp);
- mbox->vport = vport;
-- if (lpfc_sli_issue_mbox(phba, mbox,
-- (MBX_NOWAIT | MBX_STOP_IOCB))
-+ if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
- != MBX_NOT_FINISHED) {
- lpfc_nlp_set_state(vport, ndlp,
- NLP_STE_REG_LOGIN_ISSUE);
- return ndlp->nlp_state;
- }
-+ /* decrement node reference count to the failed mbox
-+ * command
-+ */
- lpfc_nlp_put(ndlp);
- mp = (struct lpfc_dmabuf *) mbox->context1;
- lpfc_mbuf_free(phba, mp->virt, mp->phys);
-@@ -864,13 +954,27 @@ out:
- "0261 Cannot Register NameServer login\n");
- }
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index c619990..1fcee16 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -67,8 +67,6 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_WORM);
-- /* Free this node since the driver cannot login or has the wrong
-- sparm */
-- lpfc_drop_node(vport, ndlp);
-+ spin_lock_irq(shost->host_lock);
-+ ndlp->nlp_flag |= NLP_DEFER_RM;
-+ spin_unlock_irq(shost->host_lock);
- return NLP_STE_FREED_NODE;
- }
+ #define SR_DISKS 256
- static uint32_t
-+lpfc_cmpl_logo_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
-+ void *arg, uint32_t evt)
-+{
-+ return ndlp->nlp_state;
-+}
-+
-+static uint32_t
-+lpfc_cmpl_reglogin_plogi_issue(struct lpfc_vport *vport,
-+ struct lpfc_nodelist *ndlp, void *arg, uint32_t evt)
-+{
-+ return ndlp->nlp_state;
-+}
-+
-+static uint32_t
- lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
- void *arg, uint32_t evt)
+-#define MAX_RETRIES 3
+-#define SR_TIMEOUT (30 * HZ)
+ #define SR_CAPABILITIES \
+ (CDC_CLOSE_TRAY|CDC_OPEN_TRAY|CDC_LOCK|CDC_SELECT_SPEED| \
+ CDC_SELECT_DISC|CDC_MULTI_SESSION|CDC_MCN|CDC_MEDIA_CHANGED| \
+@@ -179,21 +177,28 @@ static int sr_media_change(struct cdrom_device_info *cdi, int slot)
{
-@@ -1137,7 +1241,7 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
- (ndlp == (struct lpfc_nodelist *) mb->context2)) {
- mp = (struct lpfc_dmabuf *) (mb->context1);
- if (mp) {
-- lpfc_mbuf_free(phba, mp->virt, mp->phys);
-+ __lpfc_mbuf_free(phba, mp->virt, mp->phys);
- kfree(mp);
- }
- lpfc_nlp_put(ndlp);
-@@ -1197,8 +1301,8 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
- * retry discovery.
- */
- if (mb->mbxStatus == MBXERR_RPI_FULL) {
-- ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
-+ ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
-+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
- return ndlp->nlp_state;
- }
-
-@@ -1378,7 +1482,7 @@ out:
- lpfc_issue_els_logo(vport, ndlp, 0);
-
- ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
-- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
-+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
- return ndlp->nlp_state;
- }
-
-@@ -1753,7 +1857,7 @@ lpfc_cmpl_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ struct scsi_cd *cd = cdi->handle;
+ int retval;
++ struct scsi_sense_hdr *sshdr;
- irsp = &rspiocb->iocb;
- if (irsp->ulpStatus) {
-- lpfc_drop_node(vport, ndlp);
-+ ndlp->nlp_flag |= NLP_DEFER_RM;
- return NLP_STE_FREED_NODE;
+ if (CDSL_CURRENT != slot) {
+ /* no changer support */
+ return -EINVAL;
}
- return ndlp->nlp_state;
-@@ -1942,9 +2046,9 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
- lpfc_rcv_els_plogi_issue, /* RCV_PRLO */
- lpfc_cmpl_plogi_plogi_issue, /* CMPL_PLOGI */
- lpfc_disc_illegal, /* CMPL_PRLI */
-- lpfc_disc_illegal, /* CMPL_LOGO */
-+ lpfc_cmpl_logo_plogi_issue, /* CMPL_LOGO */
- lpfc_disc_illegal, /* CMPL_ADISC */
-- lpfc_disc_illegal, /* CMPL_REG_LOGIN */
-+ lpfc_cmpl_reglogin_plogi_issue,/* CMPL_REG_LOGIN */
- lpfc_device_rm_plogi_issue, /* DEVICE_RM */
- lpfc_device_recov_plogi_issue, /* DEVICE_RECOVERY */
-
-@@ -1968,7 +2072,7 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
- lpfc_rcv_padisc_reglogin_issue, /* RCV_ADISC */
- lpfc_rcv_padisc_reglogin_issue, /* RCV_PDISC */
- lpfc_rcv_prlo_reglogin_issue, /* RCV_PRLO */
-- lpfc_disc_illegal, /* CMPL_PLOGI */
-+ lpfc_cmpl_plogi_illegal, /* CMPL_PLOGI */
- lpfc_disc_illegal, /* CMPL_PRLI */
- lpfc_disc_illegal, /* CMPL_LOGO */
- lpfc_disc_illegal, /* CMPL_ADISC */
-@@ -1982,7 +2086,7 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
- lpfc_rcv_padisc_prli_issue, /* RCV_ADISC */
- lpfc_rcv_padisc_prli_issue, /* RCV_PDISC */
- lpfc_rcv_prlo_prli_issue, /* RCV_PRLO */
-- lpfc_disc_illegal, /* CMPL_PLOGI */
-+ lpfc_cmpl_plogi_illegal, /* CMPL_PLOGI */
- lpfc_cmpl_prli_prli_issue, /* CMPL_PRLI */
- lpfc_disc_illegal, /* CMPL_LOGO */
- lpfc_disc_illegal, /* CMPL_ADISC */
-diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
-index 4e46045..6483c62 100644
---- a/drivers/scsi/lpfc/lpfc_scsi.c
-+++ b/drivers/scsi/lpfc/lpfc_scsi.c
-@@ -130,7 +130,7 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
-
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- shost = lpfc_shost_from_vport(vports[i]);
- shost_for_each_device(sdev, shost) {
- new_queue_depth =
-@@ -151,7 +151,7 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
- new_queue_depth);
- }
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- atomic_set(&phba->num_rsrc_err, 0);
- atomic_set(&phba->num_cmd_success, 0);
- }
-@@ -166,7 +166,7 @@ lpfc_ramp_up_queue_handler(struct lpfc_hba *phba)
-
- vports = lpfc_create_vport_work_array(phba);
- if (vports != NULL)
-- for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
-+ for(i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
- shost = lpfc_shost_from_vport(vports[i]);
- shost_for_each_device(sdev, shost) {
- if (sdev->ordered_tags)
-@@ -179,7 +179,7 @@ lpfc_ramp_up_queue_handler(struct lpfc_hba *phba)
- sdev->queue_depth+1);
- }
- }
-- lpfc_destroy_vport_work_array(vports);
-+ lpfc_destroy_vport_work_array(phba, vports);
- atomic_set(&phba->num_rsrc_err, 0);
- atomic_set(&phba->num_cmd_success, 0);
- }
-@@ -380,7 +380,7 @@ lpfc_scsi_prep_dma_buf(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
- (num_bde * sizeof (struct ulp_bde64));
- iocb_cmd->ulpBdeCount = 1;
- iocb_cmd->ulpLe = 1;
-- fcp_cmnd->fcpDl = be32_to_cpu(scsi_bufflen(scsi_cmnd));
-+ fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd));
- return 0;
- }
-
-@@ -542,6 +542,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
- int result;
- struct scsi_device *sdev, *tmp_sdev;
- int depth = 0;
-+ unsigned long flags;
-
- lpfc_cmd->result = pIocbOut->iocb.un.ulpWord[4];
- lpfc_cmd->status = pIocbOut->iocb.ulpStatus;
-@@ -608,6 +609,15 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
- cmd->scsi_done(cmd);
- if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) {
-+ /*
-+ * If there is a thread waiting for command completion
-+ * wake up the thread.
+- retval = scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES);
+- if (retval) {
+- /* Unable to test, unit probably not ready. This usually
+- * means there is no disc in the drive. Mark as changed,
+- * and we will figure it out later once the drive is
+- * available again. */
++ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
++ retval = scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES,
++ sshdr);
++ if (retval || (scsi_sense_valid(sshdr) &&
++ /* 0x3a is medium not present */
++ sshdr->asc == 0x3a)) {
++ /* Media not present or unable to test, unit probably not
++ * ready. This usually means there is no disc in the drive.
++ * Mark as changed, and we will figure it out later once
++ * the drive is available again.
+ */
-+ spin_lock_irqsave(sdev->host->host_lock, flags);
-+ lpfc_cmd->pCmd = NULL;
-+ if (lpfc_cmd->waitq)
-+ wake_up(lpfc_cmd->waitq);
-+ spin_unlock_irqrestore(sdev->host->host_lock, flags);
- lpfc_release_scsi_buf(phba, lpfc_cmd);
- return;
- }
-@@ -669,6 +679,16 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
- }
- }
+ cd->device->changed = 1;
+- return 1; /* This will force a flush, if called from
+- * check_disk_change */
++ /* This will force a flush, if called from check_disk_change */
++ retval = 1;
++ goto out;
+ };
-+ /*
-+ * If there is a thread waiting for command completion
-+ * wake up the thread.
-+ */
-+ spin_lock_irqsave(sdev->host->host_lock, flags);
-+ lpfc_cmd->pCmd = NULL;
-+ if (lpfc_cmd->waitq)
-+ wake_up(lpfc_cmd->waitq);
-+ spin_unlock_irqrestore(sdev->host->host_lock, flags);
+ retval = cd->device->changed;
+@@ -203,9 +208,17 @@ static int sr_media_change(struct cdrom_device_info *cdi, int slot)
+ if (retval) {
+ /* check multisession offset etc */
+ sr_cd_check(cdi);
+-
+ get_sectorsize(cd);
+ }
+
- lpfc_release_scsi_buf(phba, lpfc_cmd);
- }
-
-@@ -743,6 +763,8 @@ lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
- piocbq->iocb.ulpContext = pnode->nlp_rpi;
- if (pnode->nlp_fcp_info & NLP_FCP_2_DEVICE)
- piocbq->iocb.ulpFCP2Rcvy = 1;
-+ else
-+ piocbq->iocb.ulpFCP2Rcvy = 0;
-
- piocbq->iocb.ulpClass = (pnode->nlp_fcp_info & 0x0f);
- piocbq->context1 = lpfc_cmd;
-@@ -1018,8 +1040,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
- struct lpfc_iocbq *abtsiocb;
- struct lpfc_scsi_buf *lpfc_cmd;
- IOCB_t *cmd, *icmd;
-- unsigned int loop_count = 0;
- int ret = SUCCESS;
-+ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waitq);
-
- lpfc_block_error_handler(cmnd);
- lpfc_cmd = (struct lpfc_scsi_buf *)cmnd->host_scribble;
-@@ -1074,17 +1096,15 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
- if (phba->cfg_poll & DISABLE_FCP_RING_INT)
- lpfc_sli_poll_fcp_ring (phba);
-
-+ lpfc_cmd->waitq = &waitq;
- /* Wait for abort to complete */
-- while (lpfc_cmd->pCmd == cmnd)
-- {
-- if (phba->cfg_poll & DISABLE_FCP_RING_INT)
-- lpfc_sli_poll_fcp_ring (phba);
-+ wait_event_timeout(waitq,
-+ (lpfc_cmd->pCmd != cmnd),
-+ (2*vport->cfg_devloss_tmo*HZ));
-
-- schedule_timeout_uninterruptible(LPFC_ABORT_WAIT * HZ);
-- if (++loop_count
-- > (2 * vport->cfg_devloss_tmo)/LPFC_ABORT_WAIT)
-- break;
-- }
-+ spin_lock_irq(shost->host_lock);
-+ lpfc_cmd->waitq = NULL;
-+ spin_unlock_irq(shost->host_lock);
-
- if (lpfc_cmd->pCmd == cmnd) {
- ret = FAILED;
-@@ -1438,7 +1458,7 @@ struct scsi_host_template lpfc_template = {
- .slave_destroy = lpfc_slave_destroy,
- .scan_finished = lpfc_scan_finished,
- .this_id = -1,
-- .sg_tablesize = LPFC_SG_SEG_CNT,
-+ .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,
- .use_sg_chaining = ENABLE_SG_CHAINING,
- .cmd_per_lun = LPFC_CMD_PER_LUN,
- .use_clustering = ENABLE_CLUSTERING,
-@@ -1459,7 +1479,7 @@ struct scsi_host_template lpfc_vport_template = {
- .slave_destroy = lpfc_slave_destroy,
- .scan_finished = lpfc_scan_finished,
- .this_id = -1,
-- .sg_tablesize = LPFC_SG_SEG_CNT,
-+ .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,
- .cmd_per_lun = LPFC_CMD_PER_LUN,
- .use_clustering = ENABLE_CLUSTERING,
- .use_sg_chaining = ENABLE_SG_CHAINING,
-diff --git a/drivers/scsi/lpfc/lpfc_scsi.h b/drivers/scsi/lpfc/lpfc_scsi.h
-index 31787bb..daba923 100644
---- a/drivers/scsi/lpfc/lpfc_scsi.h
-+++ b/drivers/scsi/lpfc/lpfc_scsi.h
-@@ -138,6 +138,7 @@ struct lpfc_scsi_buf {
- * Iotag is in here
- */
- struct lpfc_iocbq cur_iocbq;
-+ wait_queue_head_t *waitq;
- };
-
- #define LPFC_SCSI_DMA_EXT_SIZE 264
-diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
-index ce348c5..fdd01e3 100644
---- a/drivers/scsi/lpfc/lpfc_sli.c
-+++ b/drivers/scsi/lpfc/lpfc_sli.c
-@@ -106,7 +106,7 @@ lpfc_sli_get_iocbq(struct lpfc_hba *phba)
- return iocbq;
++out:
++ /* Notify userspace, that media has changed. */
++ if (retval != cd->previous_state)
++ sdev_evt_send_simple(cd->device, SDEV_EVT_MEDIA_CHANGE,
++ GFP_KERNEL);
++ cd->previous_state = retval;
++ kfree(sshdr);
++
+ return retval;
}
+
+diff --git a/drivers/scsi/sr.h b/drivers/scsi/sr.h
+index d65de96..81fbc0b 100644
+--- a/drivers/scsi/sr.h
++++ b/drivers/scsi/sr.h
+@@ -20,6 +20,9 @@
+ #include <linux/genhd.h>
+ #include <linux/kref.h>
--void
-+static void
- __lpfc_sli_release_iocbq(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
- {
- size_t start_clean = offsetof(struct lpfc_iocbq, iocb);
-@@ -199,6 +199,7 @@ lpfc_sli_iocb_cmd_type(uint8_t iocb_cmnd)
- case CMD_RCV_ELS_REQ_CX:
- case CMD_RCV_SEQUENCE64_CX:
- case CMD_RCV_ELS_REQ64_CX:
-+ case CMD_ASYNC_STATUS:
- case CMD_IOCB_RCV_SEQ64_CX:
- case CMD_IOCB_RCV_ELS64_CX:
- case CMD_IOCB_RCV_CONT64_CX:
-@@ -473,8 +474,7 @@ lpfc_sli_resume_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
- if (pring->txq_cnt &&
- lpfc_is_link_up(phba) &&
- (pring->ringno != phba->sli.fcp_ring ||
-- phba->sli.sli_flag & LPFC_PROCESS_LA) &&
-- !(pring->flag & LPFC_STOP_IOCB_MBX)) {
-+ phba->sli.sli_flag & LPFC_PROCESS_LA)) {
++#define MAX_RETRIES 3
++#define SR_TIMEOUT (30 * HZ)
++
+ struct scsi_device;
- while ((iocb = lpfc_sli_next_iocb_slot(phba, pring)) &&
- (nextiocb = lpfc_sli_ringtx_get(phba, pring)))
-@@ -489,32 +489,7 @@ lpfc_sli_resume_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
- return;
- }
+ /* The CDROM is fairly slow, so we need a little extra time */
+@@ -37,6 +40,7 @@ typedef struct scsi_cd {
+ unsigned xa_flag:1; /* CD has XA sectors ? */
+ unsigned readcd_known:1; /* drive supports READ_CD (0xbe) */
+ unsigned readcd_cdda:1; /* reading audio data using READ_CD */
++ unsigned previous_state:1; /* media has changed */
+ struct cdrom_device_info cdi;
+ /* We hold gendisk and scsi_device references on probe and use
+ * the refs on this kref to decide when to release them */
+diff --git a/drivers/scsi/sr_ioctl.c b/drivers/scsi/sr_ioctl.c
+index e1589f9..d5cebff 100644
+--- a/drivers/scsi/sr_ioctl.c
++++ b/drivers/scsi/sr_ioctl.c
+@@ -275,18 +275,6 @@ int sr_do_ioctl(Scsi_CD *cd, struct packet_command *cgc)
+ /* ---------------------------------------------------------------------- */
+ /* interface to cdrom.c */
--/* lpfc_sli_turn_on_ring is only called by lpfc_sli_handle_mb_event below */
--static void
--lpfc_sli_turn_on_ring(struct lpfc_hba *phba, int ringno)
+-static int test_unit_ready(Scsi_CD *cd)
-{
-- struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
-- &phba->slim2p->mbx.us.s3_pgp.port[ringno] :
-- &phba->slim2p->mbx.us.s2.port[ringno];
-- unsigned long iflags;
+- struct packet_command cgc;
-
-- /* If the ring is active, flag it */
-- spin_lock_irqsave(&phba->hbalock, iflags);
-- if (phba->sli.ring[ringno].cmdringaddr) {
-- if (phba->sli.ring[ringno].flag & LPFC_STOP_IOCB_MBX) {
-- phba->sli.ring[ringno].flag &= ~LPFC_STOP_IOCB_MBX;
-- /*
-- * Force update of the local copy of cmdGetInx
-- */
-- phba->sli.ring[ringno].local_getidx
-- = le32_to_cpu(pgp->cmdGetInx);
-- lpfc_sli_resume_iocb(phba, &phba->sli.ring[ringno]);
-- }
-- }
-- spin_unlock_irqrestore(&phba->hbalock, iflags);
+- memset(&cgc, 0, sizeof(struct packet_command));
+- cgc.cmd[0] = GPCMD_TEST_UNIT_READY;
+- cgc.quiet = 1;
+- cgc.data_direction = DMA_NONE;
+- cgc.timeout = IOCTL_TIMEOUT;
+- return sr_do_ioctl(cd, &cgc);
-}
-
--struct lpfc_hbq_entry *
-+static struct lpfc_hbq_entry *
- lpfc_sli_next_hbq_slot(struct lpfc_hba *phba, uint32_t hbqno)
- {
- struct hbq_s *hbqp = &phba->hbqs[hbqno];
-@@ -565,6 +540,7 @@ lpfc_sli_hbqbuf_free_all(struct lpfc_hba *phba)
- list_del(&hbq_buf->dbuf.list);
- (phba->hbqs[i].hbq_free_buffer)(phba, hbq_buf);
- }
-+ phba->hbqs[i].buffer_count = 0;
- }
- }
-
-@@ -633,8 +609,8 @@ lpfc_sli_hbqbuf_fill_hbqs(struct lpfc_hba *phba, uint32_t hbqno, uint32_t count)
- return 0;
- }
-
-- start = lpfc_hbq_defs[hbqno]->buffer_count;
-- end = count + lpfc_hbq_defs[hbqno]->buffer_count;
-+ start = phba->hbqs[hbqno].buffer_count;
-+ end = count + start;
- if (end > lpfc_hbq_defs[hbqno]->entry_count) {
- end = lpfc_hbq_defs[hbqno]->entry_count;
- }
-@@ -646,7 +622,7 @@ lpfc_sli_hbqbuf_fill_hbqs(struct lpfc_hba *phba, uint32_t hbqno, uint32_t count)
- return 1;
- hbq_buffer->tag = (i | (hbqno << 16));
- if (lpfc_sli_hbq_to_firmware(phba, hbqno, hbq_buffer))
-- lpfc_hbq_defs[hbqno]->buffer_count++;
-+ phba->hbqs[hbqno].buffer_count++;
- else
- (phba->hbqs[hbqno].hbq_free_buffer)(phba, hbq_buffer);
- }
-@@ -660,14 +636,14 @@ lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *phba, uint32_t qno)
- lpfc_hbq_defs[qno]->add_count));
- }
-
--int
-+static int
- lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *phba, uint32_t qno)
+ int sr_tray_move(struct cdrom_device_info *cdi, int pos)
{
- return(lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
- lpfc_hbq_defs[qno]->init_count));
- }
+ Scsi_CD *cd = cdi->handle;
+@@ -310,14 +298,46 @@ int sr_lock_door(struct cdrom_device_info *cdi, int lock)
--struct hbq_dmabuf *
-+static struct hbq_dmabuf *
- lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
+ int sr_drive_status(struct cdrom_device_info *cdi, int slot)
{
- struct lpfc_dmabuf *d_buf;
-@@ -686,7 +662,7 @@ lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
- }
- lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_VPORT,
- "1803 Bad hbq tag. Data: x%x x%x\n",
-- tag, lpfc_hbq_defs[tag >> 16]->buffer_count);
-+ tag, phba->hbqs[tag >> 16].buffer_count);
- return NULL;
- }
-
-@@ -712,6 +688,7 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
- case MBX_LOAD_SM:
- case MBX_READ_NV:
- case MBX_WRITE_NV:
-+ case MBX_WRITE_VPARMS:
- case MBX_RUN_BIU_DIAG:
- case MBX_INIT_LINK:
- case MBX_DOWN_LINK:
-@@ -739,7 +716,7 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
- case MBX_DEL_LD_ENTRY:
- case MBX_RUN_PROGRAM:
- case MBX_SET_MASK:
-- case MBX_SET_SLIM:
-+ case MBX_SET_VARIABLE:
- case MBX_UNREG_D_ID:
- case MBX_KILL_BOARD:
- case MBX_CONFIG_FARP:
-@@ -751,9 +728,10 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
- case MBX_READ_RPI64:
- case MBX_REG_LOGIN64:
- case MBX_READ_LA64:
-- case MBX_FLASH_WR_ULA:
-+ case MBX_WRITE_WWN:
- case MBX_SET_DEBUG:
- case MBX_LOAD_EXP_ROM:
-+ case MBX_ASYNCEVT_ENABLE:
- case MBX_REG_VPI:
- case MBX_UNREG_VPI:
- case MBX_HEARTBEAT:
-@@ -953,6 +931,17 @@ lpfc_sli_replace_hbqbuff(struct lpfc_hba *phba, uint32_t tag)
- return &new_hbq_entry->dbuf;
- }
-
-+static struct lpfc_dmabuf *
-+lpfc_sli_get_buff(struct lpfc_hba *phba,
-+ struct lpfc_sli_ring *pring,
-+ uint32_t tag)
-+{
-+ if (tag & QUE_BUFTAG_BIT)
-+ return lpfc_sli_ring_taggedbuf_get(phba, pring, tag);
-+ else
-+ return lpfc_sli_replace_hbqbuff(phba, tag);
-+}
++ struct scsi_cd *cd = cdi->handle;
++ struct scsi_sense_hdr sshdr;
++ struct media_event_desc med;
+
- static int
- lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- struct lpfc_iocbq *saveq)
-@@ -961,19 +950,112 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- WORD5 * w5p;
- uint32_t Rctl, Type;
- uint32_t match, i;
-+ struct lpfc_iocbq *iocbq;
+ if (CDSL_CURRENT != slot) {
+ /* we have no changer support */
+ return -EINVAL;
+ }
+- if (0 == test_unit_ready(cdi->handle))
++ if (0 == scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES,
++ &sshdr))
+ return CDS_DISC_OK;
- match = 0;
- irsp = &(saveq->iocb);
-- if ((irsp->ulpCommand == CMD_RCV_ELS_REQ64_CX)
-- || (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX)
-- || (irsp->ulpCommand == CMD_IOCB_RCV_ELS64_CX)
-- || (irsp->ulpCommand == CMD_IOCB_RCV_CONT64_CX)) {
-+
-+ if (irsp->ulpStatus == IOSTAT_NEED_BUFFER)
-+ return 1;
-+ if (irsp->ulpCommand == CMD_ASYNC_STATUS) {
-+ if (pring->lpfc_sli_rcv_async_status)
-+ pring->lpfc_sli_rcv_async_status(phba, pring, saveq);
+- return CDS_TRAY_OPEN;
++ if (!cdrom_get_media_event(cdi, &med)) {
++ if (med.media_present)
++ return CDS_DISC_OK;
++ else if (med.door_open)
++ return CDS_TRAY_OPEN;
+ else
-+ lpfc_printf_log(phba,
-+ KERN_WARNING,
-+ LOG_SLI,
-+ "0316 Ring %d handler: unexpected "
-+ "ASYNC_STATUS iocb received evt_code "
-+ "0x%x\n",
-+ pring->ringno,
-+ irsp->un.asyncstat.evt_code);
-+ return 1;
++ return CDS_NO_DISC;
+ }
+
-+ if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
-+ if (irsp->ulpBdeCount != 0) {
-+ saveq->context2 = lpfc_sli_get_buff(phba, pring,
-+ irsp->un.ulpWord[3]);
-+ if (!saveq->context2)
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_SLI,
-+ "0341 Ring %d Cannot find buffer for "
-+ "an unsolicited iocb. tag 0x%x\n",
-+ pring->ringno,
-+ irsp->un.ulpWord[3]);
-+ }
-+ if (irsp->ulpBdeCount == 2) {
-+ saveq->context3 = lpfc_sli_get_buff(phba, pring,
-+ irsp->unsli3.sli3Words[7]);
-+ if (!saveq->context3)
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_SLI,
-+ "0342 Ring %d Cannot find buffer for an"
-+ " unsolicited iocb. tag 0x%x\n",
-+ pring->ringno,
-+ irsp->unsli3.sli3Words[7]);
-+ }
-+ list_for_each_entry(iocbq, &saveq->list, list) {
-+ irsp = &(iocbq->iocb);
-+ if (irsp->ulpBdeCount != 0) {
-+ iocbq->context2 = lpfc_sli_get_buff(phba, pring,
-+ irsp->un.ulpWord[3]);
-+ if (!iocbq->context2)
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_SLI,
-+ "0343 Ring %d Cannot find "
-+ "buffer for an unsolicited iocb"
-+ ". tag 0x%x\n", pring->ringno,
-+ irsp->un.ulpWord[3]);
-+ }
-+ if (irsp->ulpBdeCount == 2) {
-+ iocbq->context3 = lpfc_sli_get_buff(phba, pring,
-+ irsp->unsli3.sli3Words[7]);
-+ if (!iocbq->context3)
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_SLI,
-+ "0344 Ring %d Cannot find "
-+ "buffer for an unsolicited "
-+ "iocb. tag 0x%x\n",
-+ pring->ringno,
-+ irsp->unsli3.sli3Words[7]);
-+ }
-+ }
-+ }
-+ if (irsp->ulpBdeCount != 0 &&
-+ (irsp->ulpCommand == CMD_IOCB_RCV_CONT64_CX ||
-+ irsp->ulpStatus == IOSTAT_INTERMED_RSP)) {
-+ int found = 0;
++ /*
++ * 0x04 is format in progress .. but there must be a disc present!
++ */
++ if (sshdr.sense_key == NOT_READY && sshdr.asc == 0x04)
++ return CDS_DISC_OK;
+
-+ /* search continue save q for same XRI */
-+ list_for_each_entry(iocbq, &pring->iocb_continue_saveq, clist) {
-+ if (iocbq->iocb.ulpContext == saveq->iocb.ulpContext) {
-+ list_add_tail(&saveq->list, &iocbq->list);
-+ found = 1;
-+ break;
-+ }
-+ }
-+ if (!found)
-+ list_add_tail(&saveq->clist,
-+ &pring->iocb_continue_saveq);
-+ if (saveq->iocb.ulpStatus != IOSTAT_INTERMED_RSP) {
-+ list_del_init(&iocbq->clist);
-+ saveq = iocbq;
-+ irsp = &(saveq->iocb);
-+ } else
-+ return 0;
-+ }
-+ if ((irsp->ulpCommand == CMD_RCV_ELS_REQ64_CX) ||
-+ (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX) ||
-+ (irsp->ulpCommand == CMD_IOCB_RCV_ELS64_CX)) {
- Rctl = FC_ELS_REQ;
- Type = FC_ELS_DATA;
- } else {
-- w5p =
-- (WORD5 *) & (saveq->iocb.un.
-- ulpWord[5]);
-+ w5p = (WORD5 *)&(saveq->iocb.un.ulpWord[5]);
- Rctl = w5p->hcsw.Rctl;
- Type = w5p->hcsw.Type;
-
-@@ -988,15 +1070,6 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- }
- }
-
-- if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
-- if (irsp->ulpBdeCount != 0)
-- saveq->context2 = lpfc_sli_replace_hbqbuff(phba,
-- irsp->un.ulpWord[3]);
-- if (irsp->ulpBdeCount == 2)
-- saveq->context3 = lpfc_sli_replace_hbqbuff(phba,
-- irsp->unsli3.sli3Words[7]);
-- }
--
- /* unSolicited Responses */
- if (pring->prt[0].profile) {
- if (pring->prt[0].lpfc_sli_rcv_unsol_event)
-@@ -1006,12 +1079,9 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- } else {
- /* We must search, based on rctl / type
- for the right routine */
-- for (i = 0; i < pring->num_mask;
-- i++) {
-- if ((pring->prt[i].rctl ==
-- Rctl)
-- && (pring->prt[i].
-- type == Type)) {
-+ for (i = 0; i < pring->num_mask; i++) {
-+ if ((pring->prt[i].rctl == Rctl)
-+ && (pring->prt[i].type == Type)) {
- if (pring->prt[i].lpfc_sli_rcv_unsol_event)
- (pring->prt[i].lpfc_sli_rcv_unsol_event)
- (phba, pring, saveq);
-@@ -1084,6 +1154,12 @@ lpfc_sli_process_sol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- IOSTAT_LOCAL_REJECT;
- saveq->iocb.un.ulpWord[4] =
- IOERR_SLI_ABORTED;
++ /*
++ * If not using Mt Fuji extended media tray reports,
++ * just return TRAY_OPEN since ATAPI doesn't provide
++ * any other way to detect this...
++ */
++ if (scsi_sense_valid(&sshdr) &&
++ /* 0x3a is medium not present */
++ sshdr.asc == 0x3a)
++ return CDS_NO_DISC;
++ else
++ return CDS_TRAY_OPEN;
+
-+ /* Firmware could still be in progress
-+ * of DMAing payload, so don't free data
-+ * buffer till after a hbeat.
-+ */
-+ saveq->iocb_flag |= LPFC_DELAY_MEM_FREE;
- }
- }
- (cmdiocbp->iocb_cmpl) (phba, cmdiocbp, saveq);
-@@ -1572,12 +1648,7 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba *phba,
++ return CDS_DRIVE_NOT_READY;
+ }
- writel(pring->rspidx, &phba->host_gp[pring->ringno].rspGetInx);
+ int sr_disk_status(struct cdrom_device_info *cdi)
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index 328c47c..7195270 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -9,7 +9,7 @@
+ Steve Hirsch, Andreas Koppenh"ofer, Michael Leodolter, Eyal Lebedinsky,
+ Michael Schaefer, J"org Weule, and Eric Youngdale.
-- if (list_empty(&(pring->iocb_continueq))) {
-- list_add(&rspiocbp->list, &(pring->iocb_continueq));
-- } else {
-- list_add_tail(&rspiocbp->list,
-- &(pring->iocb_continueq));
-- }
-+ list_add_tail(&rspiocbp->list, &(pring->iocb_continueq));
+- Copyright 1992 - 2007 Kai Makisara
++ Copyright 1992 - 2008 Kai Makisara
+ email Kai.Makisara at kolumbus.fi
- pring->iocb_continueq_cnt++;
- if (irsp->ulpLe) {
-@@ -1642,17 +1713,17 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba *phba,
- iocb_cmd_type = irsp->ulpCommand & CMD_IOCB_MASK;
- type = lpfc_sli_iocb_cmd_type(iocb_cmd_type);
- if (type == LPFC_SOL_IOCB) {
-- spin_unlock_irqrestore(&phba->hbalock,
-- iflag);
-+ spin_unlock_irqrestore(&phba->hbalock, iflag);
- rc = lpfc_sli_process_sol_iocb(phba, pring,
- saveq);
- spin_lock_irqsave(&phba->hbalock, iflag);
- } else if (type == LPFC_UNSOL_IOCB) {
-- spin_unlock_irqrestore(&phba->hbalock,
-- iflag);
-+ spin_unlock_irqrestore(&phba->hbalock, iflag);
- rc = lpfc_sli_process_unsol_iocb(phba, pring,
- saveq);
- spin_lock_irqsave(&phba->hbalock, iflag);
-+ if (!rc)
-+ free_saveq = 0;
- } else if (type == LPFC_ABORT_IOCB) {
- if ((irsp->ulpCommand != CMD_XRI_ABORTED_CX) &&
- ((cmdiocbp =
-@@ -1921,8 +1992,8 @@ lpfc_sli_brdkill(struct lpfc_hba *phba)
- "0329 Kill HBA Data: x%x x%x\n",
- phba->pport->port_state, psli->sli_flag);
+ Some small formal changes - aeb, 950809
+@@ -17,7 +17,7 @@
+ Last modified: 18-JAN-1998 Richard Gooch <rgooch at atnf.csiro.au> Devfs support
+ */
-- if ((pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool,
-- GFP_KERNEL)) == 0)
-+ pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-+ if (!pmb)
- return 1;
+-static const char *verstr = "20070203";
++static const char *verstr = "20080117";
- /* Disable the error attention */
-@@ -2113,7 +2184,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
- <status> */
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "0436 Adapter failed to init, "
-- "timeout, status reg x%x\n", status);
-+ "timeout, status reg x%x, "
-+ "FW Data: A8 x%x AC x%x\n", status,
-+ readl(phba->MBslimaddr + 0xa8),
-+ readl(phba->MBslimaddr + 0xac));
- phba->link_state = LPFC_HBA_ERROR;
- return -ETIMEDOUT;
- }
-@@ -2125,7 +2199,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
- <status> */
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "0437 Adapter failed to init, "
-- "chipset, status reg x%x\n", status);
-+ "chipset, status reg x%x, "
-+ "FW Data: A8 x%x AC x%x\n", status,
-+ readl(phba->MBslimaddr + 0xa8),
-+ readl(phba->MBslimaddr + 0xac));
- phba->link_state = LPFC_HBA_ERROR;
- return -EIO;
- }
-@@ -2153,7 +2230,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
- /* Adapter failed to init, chipset, status reg <status> */
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "0438 Adapter failed to init, chipset, "
-- "status reg x%x\n", status);
-+ "status reg x%x, "
-+ "FW Data: A8 x%x AC x%x\n", status,
-+ readl(phba->MBslimaddr + 0xa8),
-+ readl(phba->MBslimaddr + 0xac));
- phba->link_state = LPFC_HBA_ERROR;
- return -EIO;
- }
-@@ -2485,11 +2565,16 @@ lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
- lpfc_sli_abort_iocb_ring(phba, pring);
+ #include <linux/module.h>
- lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
-- "0316 Resetting board due to mailbox timeout\n");
-+ "0345 Resetting board due to mailbox timeout\n");
- /*
- * lpfc_offline calls lpfc_sli_hba_down which will clean up
- * on oustanding mailbox commands.
- */
-+ /* If resets are disabled then set error state and return. */
-+ if (!phba->cfg_enable_hba_reset) {
-+ phba->link_state = LPFC_HBA_ERROR;
-+ return;
-+ }
- lpfc_offline_prep(phba);
- lpfc_offline(phba);
- lpfc_sli_brdrestart(phba);
-@@ -2507,6 +2592,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
- uint32_t status, evtctr;
- uint32_t ha_copy;
- int i;
-+ unsigned long timeout;
- unsigned long drvr_flag = 0;
- volatile uint32_t word0, ldata;
- void __iomem *to_slim;
-@@ -2519,7 +2605,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
- "1806 Mbox x%x failed. No vport\n",
- pmbox->mb.mbxCommand);
- dump_stack();
-- return MBXERR_ERROR;
-+ return MBX_NOT_FINISHED;
- }
- }
+@@ -3214,8 +3214,7 @@ static int partition_tape(struct scsi_tape *STp, int size)
-@@ -2571,21 +2657,6 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
- return MBX_NOT_FINISHED;
- }
-- /* Handle STOP IOCB processing flag. This is only meaningful
-- * if we are not polling for mbox completion.
-- */
-- if (flag & MBX_STOP_IOCB) {
-- flag &= ~MBX_STOP_IOCB;
-- /* Now flag each ring */
-- for (i = 0; i < psli->num_rings; i++) {
-- /* If the ring is active, flag it */
-- if (psli->ring[i].cmdringaddr) {
-- psli->ring[i].flag |=
-- LPFC_STOP_IOCB_MBX;
-- }
-- }
-- }
--
- /* Another mailbox command is still being processed, queue this
- * command to be processed later.
- */
-@@ -2620,23 +2691,6 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
- return MBX_BUSY;
- }
+ /* The ioctl command */
+-static int st_ioctl(struct inode *inode, struct file *file,
+- unsigned int cmd_in, unsigned long arg)
++static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ {
+ int i, cmd_nr, cmd_type, bt;
+ int retval = 0;
+@@ -3870,7 +3869,7 @@ static const struct file_operations st_fops =
+ .owner = THIS_MODULE,
+ .read = st_read,
+ .write = st_write,
+- .ioctl = st_ioctl,
++ .unlocked_ioctl = st_ioctl,
+ #ifdef CONFIG_COMPAT
+ .compat_ioctl = st_compat_ioctl,
+ #endif
+diff --git a/drivers/scsi/sun3_NCR5380.c b/drivers/scsi/sun3_NCR5380.c
+index 2dcde37..bcaba86 100644
+--- a/drivers/scsi/sun3_NCR5380.c
++++ b/drivers/scsi/sun3_NCR5380.c
+@@ -515,9 +515,9 @@ static __inline__ void initialize_SCp(struct scsi_cmnd *cmd)
+ * various queues are valid.
+ */
-- /* Handle STOP IOCB processing flag. This is only meaningful
-- * if we are not polling for mbox completion.
-- */
-- if (flag & MBX_STOP_IOCB) {
-- flag &= ~MBX_STOP_IOCB;
-- if (flag == MBX_NOWAIT) {
-- /* Now flag each ring */
-- for (i = 0; i < psli->num_rings; i++) {
-- /* If the ring is active, flag it */
-- if (psli->ring[i].cmdringaddr) {
-- psli->ring[i].flag |=
-- LPFC_STOP_IOCB_MBX;
-- }
-- }
-- }
-- }
--
- psli->sli_flag |= LPFC_SLI_MBOX_ACTIVE;
+- if (cmd->use_sg) {
+- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.ptr = (char *) SGADDR(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
- /* If we are not polling, we MUST be in SLI2 mode */
-@@ -2714,18 +2768,24 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
+@@ -528,8 +528,8 @@ static __inline__ void initialize_SCp(struct scsi_cmnd *cmd)
+ } else {
+ cmd->SCp.buffer = NULL;
+ cmd->SCp.buffers_residual = 0;
+- cmd->SCp.ptr = (char *) cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
+ }
+
+ }
+@@ -935,7 +935,7 @@ static int NCR5380_queue_command(struct scsi_cmnd *cmd,
+ }
+ # endif
+ # ifdef NCR5380_STAT_LIMIT
+- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
++ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
+ # endif
+ switch (cmd->cmnd[0])
+ {
+@@ -943,14 +943,14 @@ static int NCR5380_queue_command(struct scsi_cmnd *cmd,
+ case WRITE_6:
+ case WRITE_10:
+ hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingw++;
+ break;
+ case READ:
+ case READ_6:
+ case READ_10:
+ hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
+- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
++ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
+ hostdata->pendingr++;
+ break;
}
-
- wmb();
-- /* interrupt board to doit right away */
-- writel(CA_MBATT, phba->CAregaddr);
-- readl(phba->CAregaddr); /* flush */
-
- switch (flag) {
- case MBX_NOWAIT:
-- /* Don't wait for it to finish, just return */
-+ /* Set up reference to mailbox command */
- psli->mbox_active = pmbox;
-+ /* Interrupt board to do it */
-+ writel(CA_MBATT, phba->CAregaddr);
-+ readl(phba->CAregaddr); /* flush */
-+ /* Don't wait for it to finish, just return */
+@@ -1345,7 +1345,7 @@ static void collect_stats(struct NCR5380_hostdata *hostdata,
+ struct scsi_cmnd *cmd)
+ {
+ # ifdef NCR5380_STAT_LIMIT
+- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
++ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
+ # endif
+ switch (cmd->cmnd[0])
+ {
+@@ -1353,14 +1353,14 @@ static void collect_stats(struct NCR5380_hostdata *hostdata,
+ case WRITE_6:
+ case WRITE_10:
+ hostdata->time_write[cmd->device->id] += (jiffies - hostdata->timebase);
+- /*hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;*/
++ /*hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);*/
+ hostdata->pendingw--;
break;
+ case READ:
+ case READ_6:
+ case READ_10:
+ hostdata->time_read[cmd->device->id] += (jiffies - hostdata->timebase);
+- /*hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;*/
++ /*hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);*/
+ hostdata->pendingr--;
+ break;
+ }
+@@ -1863,7 +1863,7 @@ static int do_abort (struct Scsi_Host *host)
+ * the target sees, so we just handshake.
+ */
+
+- while (!(tmp = NCR5380_read(STATUS_REG)) & SR_REQ);
++ while (!((tmp = NCR5380_read(STATUS_REG)) & SR_REQ));
- case MBX_POLL:
-+ /* Set up null reference to mailbox command */
- psli->mbox_active = NULL;
-+ /* Interrupt board to do it */
-+ writel(CA_MBATT, phba->CAregaddr);
-+ readl(phba->CAregaddr); /* flush */
-+
- if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
- /* First read mbox status word */
- word0 = *((volatile uint32_t *)&phba->slim2p->mbx);
-@@ -2737,15 +2797,15 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
-
- /* Read the HBA Host Attention Register */
- ha_copy = readl(phba->HAregaddr);
--
-- i = lpfc_mbox_tmo_val(phba, mb->mbxCommand);
-- i *= 1000; /* Convert to ms */
--
-+ timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
-+ mb->mbxCommand) *
-+ 1000) + jiffies;
-+ i = 0;
- /* Wait for command to complete */
- while (((word0 & OWN_CHIP) == OWN_CHIP) ||
- (!(ha_copy & HA_MBATT) &&
- (phba->link_state > LPFC_WARM_START))) {
-- if (i-- <= 0) {
-+ if (time_after(jiffies, timeout)) {
- psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
- spin_unlock_irqrestore(&phba->hbalock,
- drvr_flag);
-@@ -2758,12 +2818,12 @@ lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
- && (evtctr != psli->slistat.mbox_event))
- break;
-
-- spin_unlock_irqrestore(&phba->hbalock,
-- drvr_flag);
--
-- msleep(1);
--
-- spin_lock_irqsave(&phba->hbalock, drvr_flag);
-+ if (i++ > 10) {
-+ spin_unlock_irqrestore(&phba->hbalock,
-+ drvr_flag);
-+ msleep(1);
-+ spin_lock_irqsave(&phba->hbalock, drvr_flag);
-+ }
+ NCR5380_write(TARGET_COMMAND_REG, PHASE_SR_TO_TCR(tmp));
- if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
- /* First copy command data */
-@@ -2848,7 +2908,7 @@ lpfc_sli_next_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- /*
- * Lockless version of lpfc_sli_issue_iocb.
- */
--int
-+static int
- __lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- struct lpfc_iocbq *piocb, uint32_t flag)
+diff --git a/drivers/scsi/sym53c416.c b/drivers/scsi/sym53c416.c
+index 90cee94..1f6fd16 100644
+--- a/drivers/scsi/sym53c416.c
++++ b/drivers/scsi/sym53c416.c
+@@ -328,27 +328,13 @@ static __inline__ unsigned int sym53c416_write(int base, unsigned char *buffer,
+ static irqreturn_t sym53c416_intr_handle(int irq, void *dev_id)
{
-@@ -2879,9 +2939,9 @@ __lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ struct Scsi_Host *dev = dev_id;
+- int base = 0;
++ int base = dev->io_port;
+ int i;
+ unsigned long flags = 0;
+ unsigned char status_reg, pio_int_reg, int_reg;
+ struct scatterlist *sg;
+ unsigned int tot_trans = 0;
- /*
- * Check to see if we are blocking IOCB processing because of a
-- * outstanding mbox command.
-+ * outstanding event.
+- /* We search the base address of the host adapter which caused the interrupt */
+- /* FIXME: should pass dev_id sensibly as hosts[i] */
+- for(i = 0; i < host_index && !base; i++)
+- if(irq == hosts[i].irq)
+- base = hosts[i].base;
+- /* If no adapter found, we cannot handle the interrupt. Leave a message */
+- /* and continue. This should never happen... */
+- if(!base)
+- {
+- printk(KERN_ERR "sym53c416: No host adapter defined for interrupt %d\n", irq);
+- return IRQ_NONE;
+- }
+- /* Now we have the base address and we can start handling the interrupt */
+-
+ spin_lock_irqsave(dev->host_lock,flags);
+ status_reg = inb(base + STATUS_REG);
+ pio_int_reg = inb(base + PIO_INT_REG);
+diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c
+index 9e0908d..21e926d 100644
+--- a/drivers/scsi/sym53c8xx_2/sym_glue.c
++++ b/drivers/scsi/sym53c8xx_2/sym_glue.c
+@@ -207,10 +207,9 @@ void sym_set_cam_result_error(struct sym_hcb *np, struct sym_ccb *cp, int resid)
+ /*
+ * Bounce back the sense data to user.
+ */
+- memset(&cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
++ memset(&cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(cmd->sense_buffer, cp->sns_bbuf,
+- min(sizeof(cmd->sense_buffer),
+- (size_t)SYM_SNS_BBUF_LEN));
++ min(SCSI_SENSE_BUFFERSIZE, SYM_SNS_BBUF_LEN));
+ #if 0
+ /*
+ * If the device reports a UNIT ATTENTION condition
+@@ -609,22 +608,24 @@ static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
*/
-- if (unlikely(pring->flag & LPFC_STOP_IOCB_MBX))
-+ if (unlikely(pring->flag & LPFC_STOP_IOCB_EVENT))
- goto iocb_busy;
-
- if (unlikely(phba->link_state == LPFC_LINK_DOWN)) {
-@@ -2993,6 +3053,61 @@ lpfc_extra_ring_setup( struct lpfc_hba *phba)
- return 0;
- }
-
-+static void
-+lpfc_sli_async_event_handler(struct lpfc_hba * phba,
-+ struct lpfc_sli_ring * pring, struct lpfc_iocbq * iocbq)
-+{
-+ IOCB_t *icmd;
-+ uint16_t evt_code;
-+ uint16_t temp;
-+ struct temp_event temp_event_data;
-+ struct Scsi_Host *shost;
-+
-+ icmd = &iocbq->iocb;
-+ evt_code = icmd->un.asyncstat.evt_code;
-+ temp = icmd->ulpContext;
-+
-+ if ((evt_code != ASYNC_TEMP_WARN) &&
-+ (evt_code != ASYNC_TEMP_SAFE)) {
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_SLI,
-+ "0346 Ring %d handler: unexpected ASYNC_STATUS"
-+ " evt_code 0x%x\n",
-+ pring->ringno,
-+ icmd->un.asyncstat.evt_code);
-+ return;
-+ }
-+ temp_event_data.data = (uint32_t)temp;
-+ temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
-+ if (evt_code == ASYNC_TEMP_WARN) {
-+ temp_event_data.event_code = LPFC_THRESHOLD_TEMP;
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_TEMP,
-+ "0347 Adapter is very hot, please take "
-+ "corrective action. temperature : %d Celsius\n",
-+ temp);
-+ }
-+ if (evt_code == ASYNC_TEMP_SAFE) {
-+ temp_event_data.event_code = LPFC_NORMAL_TEMP;
-+ lpfc_printf_log(phba,
-+ KERN_ERR,
-+ LOG_TEMP,
-+ "0340 Adapter temperature is OK now. "
-+ "temperature : %d Celsius\n",
-+ temp);
-+ }
-+
-+ /* Send temperature change event to applications */
-+ shost = lpfc_shost_from_vport(phba->pport);
-+ fc_host_post_vendor_event(shost, fc_get_event_number(),
-+ sizeof(temp_event_data), (char *) &temp_event_data,
-+ SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
-+
-+}
-+
-+
- int
- lpfc_sli_setup(struct lpfc_hba *phba)
- {
-@@ -3059,6 +3174,8 @@ lpfc_sli_setup(struct lpfc_hba *phba)
- pring->fast_iotag = 0;
- pring->iotag_ctr = 0;
- pring->iotag_max = 4096;
-+ pring->lpfc_sli_rcv_async_status =
-+ lpfc_sli_async_event_handler;
- pring->num_mask = 4;
- pring->prt[0].profile = 0; /* Mask 0 */
- pring->prt[0].rctl = FC_ELS_REQ;
-@@ -3123,6 +3240,7 @@ lpfc_sli_queue_setup(struct lpfc_hba *phba)
- INIT_LIST_HEAD(&pring->txq);
- INIT_LIST_HEAD(&pring->txcmplq);
- INIT_LIST_HEAD(&pring->iocb_continueq);
-+ INIT_LIST_HEAD(&pring->iocb_continue_saveq);
- INIT_LIST_HEAD(&pring->postbufq);
- }
- spin_unlock_irq(&phba->hbalock);
-@@ -3193,6 +3311,7 @@ lpfc_sli_hba_down(struct lpfc_hba *phba)
- LIST_HEAD(completions);
- struct lpfc_sli *psli = &phba->sli;
- struct lpfc_sli_ring *pring;
-+ struct lpfc_dmabuf *buf_ptr;
- LPFC_MBOXQ_t *pmb;
- struct lpfc_iocbq *iocb;
- IOCB_t *cmd = NULL;
-@@ -3232,6 +3351,19 @@ lpfc_sli_hba_down(struct lpfc_hba *phba)
+ #define WAIT_FOR_PCI_RECOVERY 35
+ if (pci_channel_offline(pdev)) {
+- struct completion *io_reset;
+ int finished_reset = 0;
+ init_completion(&eh_done);
+ spin_lock_irq(shost->host_lock);
+ /* Make sure we didn't race */
+ if (pci_channel_offline(pdev)) {
+- if (!sym_data->io_reset)
+- sym_data->io_reset = &eh_done;
+- io_reset = sym_data->io_reset;
++ BUG_ON(sym_data->io_reset);
++ sym_data->io_reset = &eh_done;
+ } else {
+ finished_reset = 1;
}
+ spin_unlock_irq(shost->host_lock);
+ if (!finished_reset)
+- finished_reset = wait_for_completion_timeout(io_reset,
++ finished_reset = wait_for_completion_timeout
++ (sym_data->io_reset,
+ WAIT_FOR_PCI_RECOVERY*HZ);
++ spin_lock_irq(shost->host_lock);
++ sym_data->io_reset = NULL;
++ spin_unlock_irq(shost->host_lock);
+ if (!finished_reset)
+ return SCSI_FAILED;
}
-
-+ spin_lock_irqsave(&phba->hbalock, flags);
-+ list_splice_init(&phba->elsbuf, &completions);
-+ phba->elsbuf_cnt = 0;
-+ phba->elsbuf_prev_cnt = 0;
-+ spin_unlock_irqrestore(&phba->hbalock, flags);
-+
-+ while (!list_empty(&completions)) {
-+ list_remove_head(&completions, buf_ptr,
-+ struct lpfc_dmabuf, list);
-+ lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
-+ kfree(buf_ptr);
-+ }
-+
- /* Return any active mbox cmds */
- del_timer_sync(&psli->mbox_tmo);
- spin_lock_irqsave(&phba->hbalock, flags);
-@@ -3294,6 +3426,47 @@ lpfc_sli_ringpostbuf_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- return 0;
+@@ -1744,7 +1745,7 @@ static int __devinit sym2_probe(struct pci_dev *pdev,
+ return -ENODEV;
}
-+uint32_t
-+lpfc_sli_get_buffer_tag(struct lpfc_hba *phba)
-+{
-+ spin_lock_irq(&phba->hbalock);
-+ phba->buffer_tag_count++;
-+ /*
-+ * Always set the QUE_BUFTAG_BIT to distiguish between
-+ * a tag assigned by HBQ.
-+ */
-+ phba->buffer_tag_count |= QUE_BUFTAG_BIT;
-+ spin_unlock_irq(&phba->hbalock);
-+ return phba->buffer_tag_count;
-+}
-+
-+struct lpfc_dmabuf *
-+lpfc_sli_ring_taggedbuf_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-+ uint32_t tag)
-+{
-+ struct lpfc_dmabuf *mp, *next_mp;
-+ struct list_head *slp = &pring->postbufq;
-+
-+ /* Search postbufq, from the begining, looking for a match on tag */
-+ spin_lock_irq(&phba->hbalock);
-+ list_for_each_entry_safe(mp, next_mp, &pring->postbufq, list) {
-+ if (mp->buffer_tag == tag) {
-+ list_del_init(&mp->list);
-+ pring->postbufq_cnt--;
-+ spin_unlock_irq(&phba->hbalock);
-+ return mp;
-+ }
-+ }
-+
-+ spin_unlock_irq(&phba->hbalock);
-+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-+ "0410 Cannot find virtual addr for buffer tag on "
-+ "ring %d Data x%lx x%p x%p x%x\n",
-+ pring->ringno, (unsigned long) tag,
-+ slp->next, slp->prev, pring->postbufq_cnt);
-+
-+ return NULL;
-+}
-
- struct lpfc_dmabuf *
- lpfc_sli_ringpostbuf_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-@@ -3361,6 +3534,12 @@ lpfc_sli_abort_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
- pring->txcmplq_cnt--;
- spin_unlock_irq(&phba->hbalock);
-
-+ /* Firmware could still be in progress of DMAing
-+ * payload, so don't free data buffer till after
-+ * a hbeat.
-+ */
-+ abort_iocb->iocb_flag |= LPFC_DELAY_MEM_FREE;
-+
- abort_iocb->iocb_flag &= ~LPFC_DRIVER_ABORTED;
- abort_iocb->iocb.ulpStatus = IOSTAT_LOCAL_REJECT;
- abort_iocb->iocb.un.ulpWord[4] = IOERR_SLI_ABORTED;
-@@ -3699,7 +3878,7 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq,
- unsigned long flag;
-
- /* The caller must leave context1 empty. */
-- if (pmboxq->context1 != 0)
-+ if (pmboxq->context1)
- return MBX_NOT_FINISHED;
-
- /* setup wake call as IOCB callback */
-@@ -3771,7 +3950,6 @@ lpfc_intr_handler(int irq, void *dev_id)
- uint32_t ha_copy;
- uint32_t work_ha_copy;
- unsigned long status;
-- int i;
- uint32_t control;
-
- MAILBOX_t *mbox, *pmbox;
-@@ -3888,7 +4066,6 @@ lpfc_intr_handler(int irq, void *dev_id)
- }
+-static void __devexit sym2_remove(struct pci_dev *pdev)
++static void sym2_remove(struct pci_dev *pdev)
+ {
+ struct Scsi_Host *shost = pci_get_drvdata(pdev);
- if (work_ha_copy & HA_ERATT) {
-- phba->link_state = LPFC_HBA_ERROR;
- /*
- * There was a link/board error. Read the
- * status register to retrieve the error event
-@@ -3920,7 +4097,7 @@ lpfc_intr_handler(int irq, void *dev_id)
- * Stray Mailbox Interrupt, mbxCommand <cmd>
- * mbxStatus <status>
- */
-- lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX |
-+ lpfc_printf_log(phba, KERN_ERR, LOG_MBOX |
- LOG_SLI,
- "(%d):0304 Stray Mailbox "
- "Interrupt mbxCommand x%x "
-@@ -3928,51 +4105,60 @@ lpfc_intr_handler(int irq, void *dev_id)
- (vport ? vport->vpi : 0),
- pmbox->mbxCommand,
- pmbox->mbxStatus);
-- }
-- phba->last_completion_time = jiffies;
-- del_timer_sync(&phba->sli.mbox_tmo);
--
-- phba->sli.mbox_active = NULL;
-- if (pmb->mbox_cmpl) {
-- lpfc_sli_pcimem_bcopy(mbox, pmbox,
-- MAILBOX_CMD_SIZE);
-- }
-- if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
-- pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
-+ /* clear mailbox attention bit */
-+ work_ha_copy &= ~HA_MBATT;
-+ } else {
-+ phba->last_completion_time = jiffies;
-+ del_timer(&phba->sli.mbox_tmo);
+@@ -1879,7 +1880,6 @@ static void sym2_io_resume(struct pci_dev *pdev)
+ spin_lock_irq(shost->host_lock);
+ if (sym_data->io_reset)
+ complete_all(sym_data->io_reset);
+- sym_data->io_reset = NULL;
+ spin_unlock_irq(shost->host_lock);
+ }
-- lpfc_debugfs_disc_trc(vport,
-- LPFC_DISC_TRC_MBOX_VPORT,
-- "MBOX dflt rpi: : status:x%x rpi:x%x",
-- (uint32_t)pmbox->mbxStatus,
-- pmbox->un.varWords[0], 0);
--
-- if ( !pmbox->mbxStatus) {
-- mp = (struct lpfc_dmabuf *)
-- (pmb->context1);
-- ndlp = (struct lpfc_nodelist *)
-- pmb->context2;
--
-- /* Reg_LOGIN of dflt RPI was successful.
-- * new lets get rid of the RPI using the
-- * same mbox buffer.
-- */
-- lpfc_unreg_login(phba, vport->vpi,
-- pmbox->un.varWords[0], pmb);
-- pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
-- pmb->context1 = mp;
-- pmb->context2 = ndlp;
-- pmb->vport = vport;
-- spin_lock(&phba->hbalock);
-- phba->sli.sli_flag &=
-- ~LPFC_SLI_MBOX_ACTIVE;
-- spin_unlock(&phba->hbalock);
-- goto send_current_mbox;
-+ phba->sli.mbox_active = NULL;
-+ if (pmb->mbox_cmpl) {
-+ lpfc_sli_pcimem_bcopy(mbox, pmbox,
-+ MAILBOX_CMD_SIZE);
-+ }
-+ if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
-+ pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
-+
-+ lpfc_debugfs_disc_trc(vport,
-+ LPFC_DISC_TRC_MBOX_VPORT,
-+ "MBOX dflt rpi: : "
-+ "status:x%x rpi:x%x",
-+ (uint32_t)pmbox->mbxStatus,
-+ pmbox->un.varWords[0], 0);
-+
-+ if (!pmbox->mbxStatus) {
-+ mp = (struct lpfc_dmabuf *)
-+ (pmb->context1);
-+ ndlp = (struct lpfc_nodelist *)
-+ pmb->context2;
-+
-+ /* Reg_LOGIN of dflt RPI was
-+ * successful. new lets get
-+ * rid of the RPI using the
-+ * same mbox buffer.
-+ */
-+ lpfc_unreg_login(phba,
-+ vport->vpi,
-+ pmbox->un.varWords[0],
-+ pmb);
-+ pmb->mbox_cmpl =
-+ lpfc_mbx_cmpl_dflt_rpi;
-+ pmb->context1 = mp;
-+ pmb->context2 = ndlp;
-+ pmb->vport = vport;
-+ spin_lock(&phba->hbalock);
-+ phba->sli.sli_flag &=
-+ ~LPFC_SLI_MBOX_ACTIVE;
-+ spin_unlock(&phba->hbalock);
-+ goto send_current_mbox;
-+ }
- }
-+ spin_lock(&phba->pport->work_port_lock);
-+ phba->pport->work_port_events &=
-+ ~WORKER_MBOX_TMO;
-+ spin_unlock(&phba->pport->work_port_lock);
-+ lpfc_mbox_cmpl_put(phba, pmb);
- }
-- spin_lock(&phba->pport->work_port_lock);
-- phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
-- spin_unlock(&phba->pport->work_port_lock);
-- lpfc_mbox_cmpl_put(phba, pmb);
- }
- if ((work_ha_copy & HA_MBATT) &&
- (phba->sli.mbox_active == NULL)) {
-@@ -3990,10 +4176,6 @@ send_current_mbox:
- lpfc_mbox_cmpl_put(phba, pmb);
- goto send_next_mbox;
- }
-- } else {
-- /* Turn on IOCB processing */
-- for (i = 0; i < phba->sli.num_rings; i++)
-- lpfc_sli_turn_on_ring(phba, i);
- }
+@@ -2056,7 +2056,7 @@ static struct pci_driver sym2_driver = {
+ .name = NAME53C8XX,
+ .id_table = sym2_id_table,
+ .probe = sym2_probe,
+- .remove = __devexit_p(sym2_remove),
++ .remove = sym2_remove,
+ .err_handler = &sym2_err_handler,
+ };
- }
-diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h
-index 51b2b6b..7249fd2 100644
---- a/drivers/scsi/lpfc/lpfc_sli.h
-+++ b/drivers/scsi/lpfc/lpfc_sli.h
-@@ -33,6 +33,7 @@ typedef enum _lpfc_ctx_cmd {
- struct lpfc_iocbq {
- /* lpfc_iocbqs are used in double linked lists */
- struct list_head list;
-+ struct list_head clist;
- uint16_t iotag; /* pre-assigned IO tag */
- uint16_t rsvd1;
+diff --git a/drivers/scsi/tmscsim.c b/drivers/scsi/tmscsim.c
+index 4419304..5b04ddf 100644
+--- a/drivers/scsi/tmscsim.c
++++ b/drivers/scsi/tmscsim.c
+@@ -444,7 +444,7 @@ static int dc390_pci_map (struct dc390_srb* pSRB)
-@@ -44,6 +45,7 @@ struct lpfc_iocbq {
- #define LPFC_IO_FCP 4 /* FCP command -- iocbq in scsi_buf */
- #define LPFC_DRIVER_ABORTED 8 /* driver aborted this request */
- #define LPFC_IO_FABRIC 0x10 /* Iocb send using fabric scheduler */
-+#define LPFC_DELAY_MEM_FREE 0x20 /* Defer free'ing of FC data */
+ /* Map sense buffer */
+ if (pSRB->SRBFlag & AUTO_REQSENSE) {
+- pSRB->pSegmentList = dc390_sg_build_single(&pSRB->Segmentx, pcmd->sense_buffer, sizeof(pcmd->sense_buffer));
++ pSRB->pSegmentList = dc390_sg_build_single(&pSRB->Segmentx, pcmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
+ pSRB->SGcount = pci_map_sg(pdev, pSRB->pSegmentList, 1,
+ DMA_FROM_DEVICE);
+ cmdp->saved_dma_handle = sg_dma_address(pSRB->pSegmentList);
+@@ -599,7 +599,7 @@ dc390_StartSCSI( struct dc390_acb* pACB, struct dc390_dcb* pDCB, struct dc390_sr
+ DC390_write8 (ScsiFifo, pDCB->TargetLUN << 5);
+ DC390_write8 (ScsiFifo, 0);
+ DC390_write8 (ScsiFifo, 0);
+- DC390_write8 (ScsiFifo, sizeof(scmd->sense_buffer));
++ DC390_write8 (ScsiFifo, SCSI_SENSE_BUFFERSIZE);
+ DC390_write8 (ScsiFifo, 0);
+ DEBUG1(printk (KERN_DEBUG "DC390: AutoReqSense !\n"));
+ }
+@@ -1389,7 +1389,7 @@ dc390_CommandPhase( struct dc390_acb* pACB, struct dc390_srb* pSRB, u8 *psstatus
+ DC390_write8 (ScsiFifo, pDCB->TargetLUN << 5);
+ DC390_write8 (ScsiFifo, 0);
+ DC390_write8 (ScsiFifo, 0);
+- DC390_write8 (ScsiFifo, sizeof(pSRB->pcmd->sense_buffer));
++ DC390_write8 (ScsiFifo, SCSI_SENSE_BUFFERSIZE);
+ DC390_write8 (ScsiFifo, 0);
+ DEBUG0(printk(KERN_DEBUG "DC390: AutoReqSense (CmndPhase)!\n"));
+ }
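[editor's note] The tmscsim hunks above, like several later ones in this patch, replace `sizeof(...->sense_buffer)` with `SCSI_SENSE_BUFFERSIZE`. The reason: in this kernel series the sense buffer stopped being an array embedded in `struct scsi_cmnd` and became a pointer to a separately allocated buffer, so `sizeof` silently yields the pointer size instead of the 96-byte buffer length. A minimal userspace sketch of the pitfall, using hypothetical stand-in structs rather than the real `<scsi/scsi_cmnd.h>` definitions:

```c
/* Stand-in structs illustrating the sizeof pitfall these hunks fix.
 * SCSI_SENSE_BUFFERSIZE (96) is the constant drivers must use once
 * the buffer is no longer an embedded array. */
#include <assert.h>
#include <stddef.h>

#define SCSI_SENSE_BUFFERSIZE 96

struct old_cmnd { unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]; };
struct new_cmnd { unsigned char *sense_buffer; };

/* sizeof on the embedded array: correct, yields 96 */
static size_t old_len(void)
{
    return sizeof(((struct old_cmnd *)0)->sense_buffer);
}

/* sizeof on the pointer: silently wrong, yields 4 or 8 */
static size_t new_len_wrong(void)
{
    return sizeof(((struct new_cmnd *)0)->sense_buffer);
}

/* what the patch substitutes everywhere */
static size_t new_len_right(void)
{
    return SCSI_SENSE_BUFFERSIZE;
}
```

The same substitution recurs below in u14-34f.c, ultrastor.c, and others; each occurrence is the identical bug.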
+diff --git a/drivers/scsi/u14-34f.c b/drivers/scsi/u14-34f.c
+index 7edd6ce..4bc5407 100644
+--- a/drivers/scsi/u14-34f.c
++++ b/drivers/scsi/u14-34f.c
+@@ -1121,9 +1121,9 @@ static void map_dma(unsigned int i, unsigned int j) {
- uint8_t abort_count;
- uint8_t rsvd2;
-@@ -92,8 +94,6 @@ typedef struct lpfcMboxq {
- #define MBX_POLL 1 /* poll mailbox till command done, then
- return */
- #define MBX_NOWAIT 2 /* issue command then return immediately */
--#define MBX_STOP_IOCB 4 /* Stop iocb processing till mbox cmds
-- complete */
+ if (SCpnt->sense_buffer)
+ cpp->sense_addr = H2DEV(pci_map_single(HD(j)->pdev, SCpnt->sense_buffer,
+- sizeof SCpnt->sense_buffer, PCI_DMA_FROMDEVICE));
++ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE));
- #define LPFC_MAX_RING_MASK 4 /* max num of rctl/type masks allowed per
- ring */
-@@ -129,9 +129,7 @@ struct lpfc_sli_ring {
- uint16_t flag; /* ring flags */
- #define LPFC_DEFERRED_RING_EVENT 0x001 /* Deferred processing a ring event */
- #define LPFC_CALL_RING_AVAILABLE 0x002 /* indicates cmd was full */
--#define LPFC_STOP_IOCB_MBX 0x010 /* Stop processing IOCB cmds mbox */
- #define LPFC_STOP_IOCB_EVENT 0x020 /* Stop processing IOCB cmds event */
--#define LPFC_STOP_IOCB_MASK 0x030 /* Stop processing IOCB cmds mask */
- uint16_t abtsiotag; /* tracks next iotag to use for ABTS */
+- cpp->sense_len = sizeof SCpnt->sense_buffer;
++ cpp->sense_len = SCSI_SENSE_BUFFERSIZE;
- uint32_t local_getidx; /* last available cmd index (from cmdGetInx) */
-@@ -163,9 +161,12 @@ struct lpfc_sli_ring {
- struct list_head iocb_continueq;
- uint16_t iocb_continueq_cnt; /* current length of queue */
- uint16_t iocb_continueq_max; /* max length */
-+ struct list_head iocb_continue_saveq;
+ if (scsi_bufflen(SCpnt)) {
+ count = scsi_dma_map(SCpnt);
+diff --git a/drivers/scsi/ultrastor.c b/drivers/scsi/ultrastor.c
+index 6d1f0ed..75eca6b 100644
+--- a/drivers/scsi/ultrastor.c
++++ b/drivers/scsi/ultrastor.c
+@@ -298,9 +298,16 @@ static inline int find_and_clear_bit_16(unsigned long *field)
+ {
+ int rv;
- struct lpfc_sli_ring_mask prt[LPFC_MAX_RING_MASK];
- uint32_t num_mask; /* number of mask entries in prt array */
-+ void (*lpfc_sli_rcv_async_status) (struct lpfc_hba *,
-+ struct lpfc_sli_ring *, struct lpfc_iocbq *);
+- if (*field == 0) panic("No free mscp");
+- asm("xorl %0,%0\n0:\tbsfw %1,%w0\n\tbtr %0,%1\n\tjnc 0b"
+- : "=&r" (rv), "=m" (*field) : "1" (*field));
++ if (*field == 0)
++ panic("No free mscp");
++
++ asm volatile (
++ "xorl %0,%0\n\t"
++ "0: bsfw %1,%w0\n\t"
++ "btr %0,%1\n\t"
++ "jnc 0b"
++ : "=&r" (rv), "=m" (*field) :);
++
+ return rv;
+ }
- struct lpfc_sli_ring_stat stats; /* SLI statistical info */
+@@ -741,7 +748,7 @@ static int ultrastor_queuecommand(struct scsi_cmnd *SCpnt,
+ }
+ my_mscp->command_link = 0; /*???*/
+ my_mscp->scsi_command_link_id = 0; /*???*/
+- my_mscp->length_of_sense_byte = sizeof SCpnt->sense_buffer;
++ my_mscp->length_of_sense_byte = SCSI_SENSE_BUFFERSIZE;
+ my_mscp->length_of_scsi_cdbs = SCpnt->cmd_len;
+ memcpy(my_mscp->scsi_cdbs, SCpnt->cmnd, my_mscp->length_of_scsi_cdbs);
+ my_mscp->adapter_status = 0;
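[editor's note] The ultrastor hunk rewrites the `bsfw`/`btr` inline assembly into a multi-line `asm volatile` form. What the loop computes is simply "find the index of the lowest set bit and clear it", retrying if a concurrent clear raced. A portable single-threaded C sketch of the same operation (callers guarantee a nonzero field, matching the kernel's `panic()` check, so the scan terminates):

```c
/* Portable equivalent of the bsfw/btr loop in find_and_clear_bit_16:
 * return the index of the lowest set bit in *field and clear it.
 * The real asm retries under SMP races; single-threaded C cannot race. */
static int find_and_clear_bit_16_c(unsigned long *field)
{
    int rv = 0;
    unsigned long f = *field;   /* caller ensures f != 0 */

    while (!(f & (1UL << rv)))
        rv++;

    *field = f & ~(1UL << rv);
    return rv;
}
```

This is only a behavioral reference, not a drop-in replacement: the driver relies on the asm operating atomically enough on the 16-bit mscp free bitmap.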
+diff --git a/drivers/scsi/wd33c93.c b/drivers/scsi/wd33c93.c
+index fdbb92d..f286c37 100644
+--- a/drivers/scsi/wd33c93.c
++++ b/drivers/scsi/wd33c93.c
+@@ -407,16 +407,16 @@ wd33c93_queuecommand(struct scsi_cmnd *cmd,
+ * - SCp.phase records this command's SRCID_ER bit setting
+ */
-@@ -199,9 +200,6 @@ struct lpfc_hbq_init {
- uint32_t add_count; /* number to allocate when starved */
- } ;
+- if (cmd->use_sg) {
+- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
+- cmd->SCp.buffers_residual = cmd->use_sg - 1;
++ if (scsi_bufflen(cmd)) {
++ cmd->SCp.buffer = scsi_sglist(cmd);
++ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
+ cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
+ cmd->SCp.this_residual = cmd->SCp.buffer->length;
+ } else {
+ cmd->SCp.buffer = NULL;
+ cmd->SCp.buffers_residual = 0;
+- cmd->SCp.ptr = (char *) cmd->request_buffer;
+- cmd->SCp.this_residual = cmd->request_bufflen;
++ cmd->SCp.ptr = NULL;
++ cmd->SCp.this_residual = 0;
+ }
--#define LPFC_MAX_HBQ 16
--
--
- /* Structure used to hold SLI statistical counters and info */
- struct lpfc_sli_stat {
- uint64_t mbox_stat_err; /* Mbox cmds completed status error */
-diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
-index 0081f49..4b633d3 100644
---- a/drivers/scsi/lpfc/lpfc_version.h
-+++ b/drivers/scsi/lpfc/lpfc_version.h
-@@ -1,7 +1,7 @@
- /*******************************************************************
- * This file is part of the Emulex Linux Device Driver for *
- * Fibre Channel Host Bus Adapters. *
-- * Copyright (C) 2004-2007 Emulex. All rights reserved. *
-+ * Copyright (C) 2004-2008 Emulex. All rights reserved. *
- * EMULEX and SLI are trademarks of Emulex. *
- * www.emulex.com *
- * *
-@@ -18,10 +18,10 @@
- * included with this package. *
- *******************************************************************/
+ /* WD docs state that at the conclusion of a "LEVEL2" command, the
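[editor's note] The wd33c93 hunk above is part of the tree-wide accessor migration: drivers stop reading `cmd->use_sg` / `cmd->request_buffer` directly and treat every request as a scatterlist, with a zero-length case instead of a flat bounce buffer. A simplified sketch of the new control flow, with stand-in types and helpers rather than the real scsi midlayer ones:

```c
/* Simplified stand-ins for the scsi accessor API this hunk adopts. */
#include <stddef.h>

struct scatterlist { char *addr; unsigned int length; };

struct scsi_cmnd {
    struct scatterlist *sglist;
    unsigned int sg_count;
    unsigned int bufflen;
};

static struct scatterlist *scsi_sglist(struct scsi_cmnd *c) { return c->sglist; }
static unsigned int scsi_sg_count(struct scsi_cmnd *c) { return c->sg_count; }
static unsigned int scsi_bufflen(struct scsi_cmnd *c) { return c->bufflen; }

/* Mirrors the rewritten setup in wd33c93_queuecommand(): returns the
 * first segment's residual length, or 0 for a data-less command. */
static unsigned int setup_transfer(struct scsi_cmnd *cmd,
                                   struct scatterlist **buf,
                                   unsigned int *buffers_residual)
{
    if (scsi_bufflen(cmd)) {
        *buf = scsi_sglist(cmd);
        *buffers_residual = scsi_sg_count(cmd) - 1;
        return (*buf)->length;
    }
    *buf = NULL;
    *buffers_residual = 0;
    return 0;
}
```

The key behavioral change, visible in the hunk, is that the no-data branch now clears the pointer and residual instead of pointing at `request_buffer`.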
+diff --git a/drivers/scsi/wd7000.c b/drivers/scsi/wd7000.c
+index 03cd44f..b4304ae 100644
+--- a/drivers/scsi/wd7000.c
++++ b/drivers/scsi/wd7000.c
+@@ -1108,13 +1108,10 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
+ scb->host = host;
--#define LPFC_DRIVER_VERSION "8.2.2"
-+#define LPFC_DRIVER_VERSION "8.2.4"
+ nseg = scsi_sg_count(SCpnt);
+- if (nseg) {
++ if (nseg > 1) {
+ struct scatterlist *sg;
+ unsigned i;
- #define LPFC_DRIVER_NAME "lpfc"
+- if (SCpnt->device->host->sg_tablesize == SG_NONE) {
+- panic("wd7000_queuecommand: scatter/gather not supported.\n");
+- }
+ dprintk("Using scatter/gather with %d elements.\n", nseg);
- #define LPFC_MODULE_DESC "Emulex LightPulse Fibre Channel SCSI driver " \
- LPFC_DRIVER_VERSION
--#define LPFC_COPYRIGHT "Copyright(c) 2004-2007 Emulex. All rights reserved."
-+#define LPFC_COPYRIGHT "Copyright(c) 2004-2008 Emulex. All rights reserved."
-diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
-index dcb415e..9fad766 100644
---- a/drivers/scsi/lpfc/lpfc_vport.c
-+++ b/drivers/scsi/lpfc/lpfc_vport.c
-@@ -125,15 +125,26 @@ lpfc_vport_sparm(struct lpfc_hba *phba, struct lpfc_vport *vport)
- pmb->vport = vport;
- rc = lpfc_sli_issue_mbox_wait(phba, pmb, phba->fc_ratov * 2);
- if (rc != MBX_SUCCESS) {
-- lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
-- "1818 VPort failed init, mbxCmd x%x "
-- "READ_SPARM mbxStatus x%x, rc = x%x\n",
-- mb->mbxCommand, mb->mbxStatus, rc);
-- lpfc_mbuf_free(phba, mp->virt, mp->phys);
-- kfree(mp);
-- if (rc != MBX_TIMEOUT)
-- mempool_free(pmb, phba->mbox_mem_pool);
-- return -EIO;
-+ if (signal_pending(current)) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
-+ "1830 Signal aborted mbxCmd x%x\n",
-+ mb->mbxCommand);
-+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
-+ kfree(mp);
-+ if (rc != MBX_TIMEOUT)
-+ mempool_free(pmb, phba->mbox_mem_pool);
-+ return -EINTR;
-+ } else {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
-+ "1818 VPort failed init, mbxCmd x%x "
-+ "READ_SPARM mbxStatus x%x, rc = x%x\n",
-+ mb->mbxCommand, mb->mbxStatus, rc);
-+ lpfc_mbuf_free(phba, mp->virt, mp->phys);
-+ kfree(mp);
-+ if (rc != MBX_TIMEOUT)
-+ mempool_free(pmb, phba->mbox_mem_pool);
-+ return -EIO;
+ sgb = scb->sgb;
+@@ -1128,7 +1125,10 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
+ }
+ } else {
+ scb->op = 0;
+- any2scsi(scb->dataptr, isa_virt_to_bus(scsi_sglist(SCpnt)));
++ if (nseg) {
++ struct scatterlist *sg = scsi_sglist(SCpnt);
++ any2scsi(scb->dataptr, isa_page_to_bus(sg_page(sg)) + sg->offset);
+ }
+ any2scsi(scb->maxlen, scsi_bufflen(SCpnt));
}
- memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm));
-@@ -204,6 +215,7 @@ lpfc_vport_create(struct fc_vport *fc_vport, bool disable)
- int instance;
- int vpi;
- int rc = VPORT_ERROR;
-+ int status;
+@@ -1524,7 +1524,7 @@ static __init int wd7000_detect(struct scsi_host_template *tpnt)
+ * For boards before rev 6.0, scatter/gather isn't supported.
+ */
+ if (host->rev1 < 6)
+- sh->sg_tablesize = SG_NONE;
++ sh->sg_tablesize = 1;
- if ((phba->sli_rev < 3) ||
- !(phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)) {
-@@ -248,13 +260,19 @@ lpfc_vport_create(struct fc_vport *fc_vport, bool disable)
- vport->vpi = vpi;
- lpfc_debugfs_initialize(vport);
+ present++; /* count it */
-- if (lpfc_vport_sparm(phba, vport)) {
-- lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
-- "1813 Create VPORT failed. "
-- "Cannot get sparam\n");
-+ if ((status = lpfc_vport_sparm(phba, vport))) {
-+ if (status == -EINTR) {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
-+ "1831 Create VPORT Interrupted.\n");
-+ rc = VPORT_ERROR;
-+ } else {
-+ lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
-+ "1813 Create VPORT failed. "
-+ "Cannot get sparam\n");
-+ rc = VPORT_NORESOURCES;
-+ }
- lpfc_free_vpi(phba, vpi);
- destroy_port(vport);
-- rc = VPORT_NORESOURCES;
- goto error_out;
- }
+diff --git a/drivers/serial/21285.c b/drivers/serial/21285.c
+index facb678..6a48dfa 100644
+--- a/drivers/serial/21285.c
++++ b/drivers/serial/21285.c
+@@ -277,6 +277,8 @@ serial21285_set_termios(struct uart_port *port, struct ktermios *termios,
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= RXSTAT_FRAME | RXSTAT_PARITY;
-@@ -427,7 +445,6 @@ int
- lpfc_vport_delete(struct fc_vport *fc_vport)
++ tty_encode_baud_rate(tty, baud, baud);
++
+ /*
+ * Which character status flags should we ignore?
+ */
+diff --git a/drivers/serial/bfin_5xx.c b/drivers/serial/bfin_5xx.c
+index 6f475b6..ac2a3ef 100644
+--- a/drivers/serial/bfin_5xx.c
++++ b/drivers/serial/bfin_5xx.c
+@@ -442,7 +442,8 @@ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
+ set_bfin_dma_config(DIR_READ, DMA_FLOW_STOP,
+ INTR_ON_BUF,
+ DIMENSION_LINEAR,
+- DATA_SIZE_8));
++ DATA_SIZE_8,
++ DMA_SYNC_RESTART));
+ set_dma_start_addr(uart->tx_dma_channel, (unsigned long)(xmit->buf+xmit->tail));
+ set_dma_x_count(uart->tx_dma_channel, uart->tx_count);
+ set_dma_x_modify(uart->tx_dma_channel, 1);
+@@ -689,7 +690,8 @@ static int bfin_serial_startup(struct uart_port *port)
+ set_dma_config(uart->rx_dma_channel,
+ set_bfin_dma_config(DIR_WRITE, DMA_FLOW_AUTO,
+ INTR_ON_ROW, DIMENSION_2D,
+- DATA_SIZE_8));
++ DATA_SIZE_8,
++ DMA_SYNC_RESTART));
+ set_dma_x_count(uart->rx_dma_channel, DMA_RX_XCOUNT);
+ set_dma_x_modify(uart->rx_dma_channel, 1);
+ set_dma_y_count(uart->rx_dma_channel, DMA_RX_YCOUNT);
+diff --git a/drivers/serial/icom.c b/drivers/serial/icom.c
+index 9d3105b..9c2df5c 100644
+--- a/drivers/serial/icom.c
++++ b/drivers/serial/icom.c
+@@ -48,7 +48,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/smp.h>
+ #include <linux/spinlock.h>
+-#include <linux/kobject.h>
++#include <linux/kref.h>
+ #include <linux/firmware.h>
+ #include <linux/bitops.h>
+
+@@ -65,7 +65,7 @@
+ #define ICOM_VERSION_STR "1.3.1"
+ #define NR_PORTS 128
+ #define ICOM_PORT ((struct icom_port *)port)
+-#define to_icom_adapter(d) container_of(d, struct icom_adapter, kobj)
++#define to_icom_adapter(d) container_of(d, struct icom_adapter, kref)
+
+ static const struct pci_device_id icom_pci_table[] = {
+ {
+@@ -141,6 +141,7 @@ static inline void trace(struct icom_port *, char *, unsigned long) {};
+ #else
+ static inline void trace(struct icom_port *icom_port, char *trace_pt, unsigned long trace_data) {};
+ #endif
++static void icom_kref_release(struct kref *kref);
+
+ static void free_port_memory(struct icom_port *icom_port)
{
- struct lpfc_nodelist *ndlp = NULL;
-- struct lpfc_nodelist *next_ndlp;
- struct Scsi_Host *shost = (struct Scsi_Host *) fc_vport->shost;
- struct lpfc_vport *vport = *(struct lpfc_vport **)fc_vport->dd_data;
- struct lpfc_hba *phba = vport->phba;
-@@ -482,8 +499,18 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
+@@ -1063,11 +1064,11 @@ static int icom_open(struct uart_port *port)
+ {
+ int retval;
- ndlp = lpfc_findnode_did(phba->pport, Fabric_DID);
- if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
-- phba->link_state >= LPFC_LINK_UP) {
--
-+ phba->link_state >= LPFC_LINK_UP) {
-+ if (vport->cfg_enable_da_id) {
-+ timeout = msecs_to_jiffies(phba->fc_ratov * 2000);
-+ if (!lpfc_ns_cmd(vport, SLI_CTNS_DA_ID, 0, 0))
-+ while (vport->ct_flags && timeout)
-+ timeout = schedule_timeout(timeout);
-+ else
-+ lpfc_printf_log(vport->phba, KERN_WARNING,
-+ LOG_VPORT,
-+ "1829 CT command failed to "
-+ "delete objects on fabric. \n");
-+ }
- /* First look for the Fabric ndlp */
- ndlp = lpfc_findnode_did(vport, Fabric_DID);
- if (!ndlp) {
-@@ -503,23 +530,20 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
- }
+- kobject_get(&ICOM_PORT->adapter->kobj);
++ kref_get(&ICOM_PORT->adapter->kref);
+ retval = startup(ICOM_PORT);
- skip_logo:
-+ lpfc_cleanup(vport);
- lpfc_sli_host_down(vport);
+ if (retval) {
+- kobject_put(&ICOM_PORT->adapter->kobj);
++ kref_put(&ICOM_PORT->adapter->kref, icom_kref_release);
+ trace(ICOM_PORT, "STARTUP_ERROR", 0);
+ return retval;
+ }
+@@ -1088,7 +1089,7 @@ static void icom_close(struct uart_port *port)
-- list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
-- lpfc_disc_state_machine(vport, ndlp, NULL,
-- NLP_EVT_DEVICE_RECOVERY);
-- lpfc_disc_state_machine(vport, ndlp, NULL,
-- NLP_EVT_DEVICE_RM);
-- }
--
- lpfc_stop_vport_timers(vport);
- lpfc_unreg_all_rpis(vport);
-- lpfc_unreg_default_rpis(vport);
-- /*
-- * Completion of unreg_vpi (lpfc_mbx_cmpl_unreg_vpi) does the
-- * scsi_host_put() to release the vport.
-- */
-- lpfc_mbx_unreg_vpi(vport);
-+
-+ if (!(phba->pport->load_flag & FC_UNLOADING)) {
-+ lpfc_unreg_default_rpis(vport);
-+ /*
-+ * Completion of unreg_vpi (lpfc_mbx_cmpl_unreg_vpi)
-+ * does the scsi_host_put() to release the vport.
-+ */
-+ lpfc_mbx_unreg_vpi(vport);
-+ }
+ shutdown(ICOM_PORT);
- lpfc_free_vpi(phba, vport->vpi);
- vport->work_port_events = 0;
-@@ -532,16 +556,13 @@ skip_logo:
- return VPORT_OK;
+- kobject_put(&ICOM_PORT->adapter->kobj);
++ kref_put(&ICOM_PORT->adapter->kref, icom_kref_release);
}
--EXPORT_SYMBOL(lpfc_vport_create);
--EXPORT_SYMBOL(lpfc_vport_delete);
--
- struct lpfc_vport **
- lpfc_create_vport_work_array(struct lpfc_hba *phba)
- {
- struct lpfc_vport *port_iterator;
- struct lpfc_vport **vports;
- int index = 0;
-- vports = kzalloc(LPFC_MAX_VPORTS * sizeof(struct lpfc_vport *),
-+ vports = kzalloc((phba->max_vpi + 1) * sizeof(struct lpfc_vport *),
- GFP_KERNEL);
- if (vports == NULL)
- return NULL;
-@@ -560,12 +581,12 @@ lpfc_create_vport_work_array(struct lpfc_hba *phba)
+ static void icom_set_termios(struct uart_port *port,
+@@ -1485,18 +1486,14 @@ static void icom_remove_adapter(struct icom_adapter *icom_adapter)
+ pci_release_regions(icom_adapter->pci_dev);
}
- void
--lpfc_destroy_vport_work_array(struct lpfc_vport **vports)
-+lpfc_destroy_vport_work_array(struct lpfc_hba *phba, struct lpfc_vport **vports)
+-static void icom_kobj_release(struct kobject *kobj)
++static void icom_kref_release(struct kref *kref)
{
- int i;
- if (vports == NULL)
- return;
-- for (i=0; vports[i] != NULL && i < LPFC_MAX_VPORTS; i++)
-+ for (i=0; vports[i] != NULL && i <= phba->max_vpi; i++)
- scsi_host_put(lpfc_shost_from_vport(vports[i]));
- kfree(vports);
- }
-diff --git a/drivers/scsi/lpfc/lpfc_vport.h b/drivers/scsi/lpfc/lpfc_vport.h
-index 91da177..96c4453 100644
---- a/drivers/scsi/lpfc/lpfc_vport.h
-+++ b/drivers/scsi/lpfc/lpfc_vport.h
-@@ -89,7 +89,7 @@ int lpfc_vport_delete(struct fc_vport *);
- int lpfc_vport_getinfo(struct Scsi_Host *, struct vport_info *);
- int lpfc_vport_tgt_remove(struct Scsi_Host *, uint, uint);
- struct lpfc_vport **lpfc_create_vport_work_array(struct lpfc_hba *);
--void lpfc_destroy_vport_work_array(struct lpfc_vport **);
-+void lpfc_destroy_vport_work_array(struct lpfc_hba *, struct lpfc_vport **);
+ struct icom_adapter *icom_adapter;
- /*
- * queuecommand VPORT-specific return codes. Specified in the host byte code.
-diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
-index 66c6520..765c24d 100644
---- a/drivers/scsi/megaraid.c
-+++ b/drivers/scsi/megaraid.c
-@@ -4889,7 +4889,7 @@ __megaraid_shutdown(adapter_t *adapter)
- mdelay(1000);
+- icom_adapter = to_icom_adapter(kobj);
++ icom_adapter = to_icom_adapter(kref);
+ icom_remove_adapter(icom_adapter);
}
--static void
-+static void __devexit
- megaraid_remove_one(struct pci_dev *pdev)
+-static struct kobj_type icom_kobj_type = {
+- .release = icom_kobj_release,
+-};
+-
+ static int __devinit icom_probe(struct pci_dev *dev,
+ const struct pci_device_id *ent)
{
- struct Scsi_Host *host = pci_get_drvdata(pdev);
-diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c
-index c892310..24e32e4 100644
---- a/drivers/scsi/megaraid/megaraid_mbox.c
-+++ b/drivers/scsi/megaraid/megaraid_mbox.c
-@@ -300,7 +300,7 @@ static struct pci_device_id pci_id_table_g[] = {
- MODULE_DEVICE_TABLE(pci, pci_id_table_g);
+@@ -1592,8 +1589,7 @@ static int __devinit icom_probe(struct pci_dev *dev,
+ }
+ }
+- kobject_init(&icom_adapter->kobj);
+- icom_adapter->kobj.ktype = &icom_kobj_type;
++ kref_init(&icom_adapter->kref);
+ return 0;
--static struct pci_driver megaraid_pci_driver_g = {
-+static struct pci_driver megaraid_pci_driver = {
- .name = "megaraid",
- .id_table = pci_id_table_g,
- .probe = megaraid_probe_one,
-@@ -394,7 +394,7 @@ megaraid_init(void)
+ probe_exit2:
+@@ -1619,7 +1615,7 @@ static void __devexit icom_remove(struct pci_dev *dev)
+ icom_adapter = list_entry(tmp, struct icom_adapter,
+ icom_adapter_entry);
+ if (icom_adapter->pci_dev == dev) {
+- kobject_put(&icom_adapter->kobj);
++ kref_put(&icom_adapter->kref, icom_kref_release);
+ return;
+ }
+ }
+diff --git a/drivers/serial/icom.h b/drivers/serial/icom.h
+index e8578d8..0274554 100644
+--- a/drivers/serial/icom.h
++++ b/drivers/serial/icom.h
+@@ -270,7 +270,7 @@ struct icom_adapter {
+ #define V2_ONE_PORT_RVX_ONE_PORT_IMBED_MDM 0x0251
+ int numb_ports;
+ struct list_head icom_adapter_entry;
+- struct kobject kobj;
++ struct kref kref;
+ };
+ /* prototype */
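[editor's note] The icom hunks convert an embedded `struct kobject` used purely for reference counting into a `struct kref` with an explicit release callback passed to `kref_put()`. A minimal userspace sketch of the pattern the driver adopts; the `kref` here is a stand-in for the real `<linux/kref.h>` implementation (which uses atomics), kept only to show the get/put/release lifecycle:

```c
/* Userspace stand-in for the kref pattern adopted by icom: an embedded
 * counter plus a release callback invoked exactly once, on the final put. */
#include <stddef.h>

struct kref { int refcount; };

static void kref_init(struct kref *k) { k->refcount = 1; }
static void kref_get(struct kref *k)  { k->refcount++; }

/* Returns 1 if this put dropped the last reference and ran release(). */
static int kref_put(struct kref *k, void (*release)(struct kref *))
{
    if (--k->refcount == 0) {
        release(k);
        return 1;
    }
    return 0;
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct adapter { int released; struct kref kref; };

/* Analogue of icom_kref_release(): recover the adapter and tear it down. */
static void adapter_release(struct kref *k)
{
    struct adapter *a = container_of(k, struct adapter, kref);
    a->released = 1;
}
```

The hunks map onto this directly: `icom_probe` does `kref_init`, `icom_open` does `kref_get`, and `icom_close`/`icom_remove` do `kref_put(..., icom_kref_release)`, replacing the kobject's `ktype->release` indirection.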
+diff --git a/drivers/serial/sh-sci.c b/drivers/serial/sh-sci.c
+index 73440e2..ddf6391 100644
+--- a/drivers/serial/sh-sci.c
++++ b/drivers/serial/sh-sci.c
+@@ -302,7 +302,7 @@ static void sci_init_pins_scif(struct uart_port* port, unsigned int cflag)
+ }
+ sci_out(port, SCFCR, fcr_val);
+ }
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || defined(CONFIG_CPU_SUBTYPE_SH7721)
+ static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
+ {
+ unsigned int fcr_val = 0;
+@@ -395,7 +395,8 @@ static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
+ } else {
+ #ifdef CONFIG_CPU_SUBTYPE_SH7343
+ /* Nothing */
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7780) || \
++#elif defined(CONFIG_CPU_SUBTYPE_SH7763) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7785) || \
+ defined(CONFIG_CPU_SUBTYPE_SHX3)
+ ctrl_outw(0x0080, SCSPTR0); /* Set RTS = 1 */
+@@ -408,6 +409,7 @@ static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
+ #endif
- // register as a PCI hot-plug driver module
-- rval = pci_register_driver(&megaraid_pci_driver_g);
-+ rval = pci_register_driver(&megaraid_pci_driver);
- if (rval < 0) {
- con_log(CL_ANN, (KERN_WARNING
- "megaraid: could not register hotplug support.\n"));
-@@ -415,7 +415,7 @@ megaraid_exit(void)
- con_log(CL_DLEVEL1, (KERN_NOTICE "megaraid: unloading framework\n"));
+ #if defined(CONFIG_CPU_SUBTYPE_SH7760) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7763) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7785)
+ static inline int scif_txroom(struct uart_port *port)
+diff --git a/drivers/serial/sh-sci.h b/drivers/serial/sh-sci.h
+index d24621c..f5764eb 100644
+--- a/drivers/serial/sh-sci.h
++++ b/drivers/serial/sh-sci.h
+@@ -46,7 +46,8 @@
+ */
+ # define SCSCR_INIT(port) (port->mapbase == SCIF2) ? 0xF3 : 0xF0
+ # define SCIF_ONLY
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ # define SCSCR_INIT(port) 0x0030 /* TIE=0,RIE=0,TE=1,RE=1 */
+ # define SCIF_ONLY
+ #define SCIF_ORER 0x0200 /* overrun error bit */
+@@ -119,6 +120,12 @@
+ # define SCSCR_INIT(port) 0x30 /* TIE=0,RIE=0,TE=1,RE=1 */
+ # define SCI_ONLY
+ # define H8300_SCI_DR(ch) *(volatile char *)(P1DR + h8300_sci_pins[ch].port)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7763)
++# define SCSPTR0 0xffe00024 /* 16 bit SCIF */
++# define SCSPTR1 0xffe08024 /* 16 bit SCIF */
++# define SCIF_ORER 0x0001 /* overrun error bit */
++# define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
++# define SCIF_ONLY
+ #elif defined(CONFIG_CPU_SUBTYPE_SH7770)
+ # define SCSPTR0 0xff923020 /* 16 bit SCIF */
+ # define SCSPTR1 0xff924020 /* 16 bit SCIF */
+@@ -142,7 +149,9 @@
+ # define SCIF_OPER 0x0001 /* Overrun error bit */
+ # define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
+ # define SCIF_ONLY
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7206)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7203) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7206) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7263)
+ # define SCSPTR0 0xfffe8020 /* 16 bit SCIF */
+ # define SCSPTR1 0xfffe8820 /* 16 bit SCIF */
+ # define SCSPTR2 0xfffe9020 /* 16 bit SCIF */
+@@ -214,7 +223,8 @@
+ #define SCIF_DR 0x0001 /* 7705 SCIF, 7707 SCIF, 7709 SCIF, 7750 SCIF */
- // unregister as PCI hotplug driver
-- pci_unregister_driver(&megaraid_pci_driver_g);
-+ pci_unregister_driver(&megaraid_pci_driver);
+ #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define SCIF_ORER 0x0200
+ #define SCIF_ERRORS ( SCIF_PER | SCIF_FER | SCIF_ER | SCIF_BRK | SCIF_ORER)
+ #define SCIF_RFDC_MASK 0x007f
+@@ -252,7 +262,8 @@
+ # define SCxSR_PER(port) SCIF_PER
+ # define SCxSR_BRK(port) SCIF_BRK
+ #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ # define SCxSR_RDxF_CLEAR(port) (sci_in(port,SCxSR)&0xfffc)
+ # define SCxSR_ERROR_CLEAR(port) (sci_in(port,SCxSR)&0xfd73)
+ # define SCxSR_TDxE_CLEAR(port) (sci_in(port,SCxSR)&0xffdf)
+@@ -361,7 +372,8 @@
+ #define SCIF_FNS(name, sh3_scif_offset, sh3_scif_size, sh4_scif_offset, sh4_scif_size) \
+ CPU_SCIF_FNS(name, sh4_scif_offset, sh4_scif_size)
+ #elif defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define SCIF_FNS(name, scif_offset, scif_size) \
+ CPU_SCIF_FNS(name, scif_offset, scif_size)
+ #else
+@@ -388,7 +400,8 @@
+ #endif
- return;
+ #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+
+ SCIF_FNS(SCSMR, 0x00, 16)
+ SCIF_FNS(SCBRR, 0x04, 8)
+@@ -412,6 +425,7 @@ SCIx_FNS(SCxSR, 0x08, 8, 0x10, 8, 0x08, 16, 0x10, 16, 0x04, 8)
+ SCIx_FNS(SCxRDR, 0x0a, 8, 0x14, 8, 0x0A, 8, 0x14, 8, 0x05, 8)
+ SCIF_FNS(SCFCR, 0x0c, 8, 0x18, 16)
+ #if defined(CONFIG_CPU_SUBTYPE_SH7760) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7763) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7785)
+ SCIF_FNS(SCFDR, 0x0e, 16, 0x1C, 16)
+@@ -510,7 +524,8 @@ static inline void set_sh771x_scif_pfc(struct uart_port *port)
+ return;
+ }
}
-diff --git a/drivers/scsi/megaraid/megaraid_sas.c b/drivers/scsi/megaraid/megaraid_sas.c
-index e3c5c52..d7ec921 100644
---- a/drivers/scsi/megaraid/megaraid_sas.c
-+++ b/drivers/scsi/megaraid/megaraid_sas.c
-@@ -2,7 +2,7 @@
- *
- * Linux MegaRAID driver for SAS based RAID controllers
- *
-- * Copyright (c) 2003-2005 LSI Logic Corporation.
-+ * Copyright (c) 2003-2005 LSI Corporation.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
-@@ -10,7 +10,7 @@
- * 2 of the License, or (at your option) any later version.
- *
- * FILE : megaraid_sas.c
-- * Version : v00.00.03.10-rc5
-+ * Version : v00.00.03.16-rc1
- *
- * Authors:
- * (email-id : megaraidlinux at lsi.com)
-@@ -31,6 +31,7 @@
- #include <linux/moduleparam.h>
- #include <linux/module.h>
- #include <linux/spinlock.h>
-+#include <linux/mutex.h>
- #include <linux/interrupt.h>
- #include <linux/delay.h>
- #include <linux/uio.h>
-@@ -46,10 +47,18 @@
- #include <scsi/scsi_host.h>
- #include "megaraid_sas.h"
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ static inline int sci_rxd_in(struct uart_port *port)
+ {
+ if (port->mapbase == 0xa4430000)
+@@ -580,6 +595,15 @@ static inline int sci_rxd_in(struct uart_port *port)
+ int ch = (port->mapbase - SMR0) >> 3;
+ return (H8300_SCI_DR(ch) & h8300_sci_pins[ch].rx) ? 1 : 0;
+ }
++#elif defined(CONFIG_CPU_SUBTYPE_SH7763)
++static inline int sci_rxd_in(struct uart_port *port)
++{
++ if (port->mapbase == 0xffe00000)
++ return ctrl_inw(SCSPTR0) & 0x0001 ? 1 : 0; /* SCIF */
++ if (port->mapbase == 0xffe08000)
++ return ctrl_inw(SCSPTR1) & 0x0001 ? 1 : 0; /* SCIF */
++ return 1;
++}
+ #elif defined(CONFIG_CPU_SUBTYPE_SH7770)
+ static inline int sci_rxd_in(struct uart_port *port)
+ {
+@@ -617,7 +641,9 @@ static inline int sci_rxd_in(struct uart_port *port)
+ return ctrl_inw(SCSPTR5) & 0x0001 ? 1 : 0; /* SCIF */
+ return 1;
+ }
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7206)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7203) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7206) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7263)
+ static inline int sci_rxd_in(struct uart_port *port)
+ {
+ if (port->mapbase == 0xfffe8000)
+@@ -688,11 +714,13 @@ static inline int sci_rxd_in(struct uart_port *port)
+ * -- Mitch Davis - 15 Jul 2000
+ */
-+/*
-+ * poll_mode_io:1- schedule complete completion from q cmd
-+ */
-+static unsigned int poll_mode_io;
-+module_param_named(poll_mode_io, poll_mode_io, int, 0);
-+MODULE_PARM_DESC(poll_mode_io,
-+ "Complete cmds from IO path, (default=0)");
-+
- MODULE_LICENSE("GPL");
- MODULE_VERSION(MEGASAS_VERSION);
- MODULE_AUTHOR("megaraidlinux at lsi.com");
--MODULE_DESCRIPTION("LSI Logic MegaRAID SAS Driver");
-+MODULE_DESCRIPTION("LSI MegaRAID SAS Driver");
+-#if defined(CONFIG_CPU_SUBTYPE_SH7780) || \
++#if defined(CONFIG_CPU_SUBTYPE_SH7763) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7785)
+ #define SCBRR_VALUE(bps, clk) ((clk+16*bps)/(16*bps)-1)
+ #elif defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define SCBRR_VALUE(bps, clk) (((clk*2)+16*bps)/(32*bps)-1)
+ #elif defined(__H8300H__) || defined(__H8300S__)
+ #define SCBRR_VALUE(bps) (((CONFIG_CPU_CLOCK*1000/32)/bps)-1)
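[editor's note] The `SCBRR_VALUE(bps, clk)` variant that the SH7763 is added to above computes the bit-rate register setting from the peripheral clock. Because the dividend is biased by one divisor before the integer divide and then decremented, the expression reduces to `clk / (16 * bps)`. A small check under an assumed 33.333 MHz peripheral clock (the clock value is an example, not taken from this patch):

```c
/* The SH7763/SH7780-class divisor macro from the hunk above. */
#define SCBRR_VALUE(bps, clk) ((clk + 16 * (bps)) / (16 * (bps)) - 1)

static long scbrr_for(long bps, long clk)
{
    /* (clk + d)/d - 1 == clk/d in integer arithmetic, d = 16*bps */
    return SCBRR_VALUE(bps, clk);
}
```

The SH7705/SH7720/SH7721 variant two lines below differs only in using a doubled clock with a 32x divisor, i.e. the same value computed at twice the resolution before truncation.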
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index abf0504..aaaea81 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -153,6 +153,7 @@ config SPI_OMAP24XX
+ config SPI_PXA2XX
+ tristate "PXA2xx SSP SPI master"
+ depends on SPI_MASTER && ARCH_PXA && EXPERIMENTAL
++ select PXA_SSP
+ help
+ This enables using a PXA2xx SSP port as a SPI master controller.
+ The driver can be configured to use any SSP port and additional
+diff --git a/drivers/spi/pxa2xx_spi.c b/drivers/spi/pxa2xx_spi.c
+index 1c2ab54..eb817b8 100644
+--- a/drivers/spi/pxa2xx_spi.c
++++ b/drivers/spi/pxa2xx_spi.c
+@@ -27,6 +27,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/workqueue.h>
+ #include <linux/delay.h>
++#include <linux/clk.h>
- /*
- * PCI ID table for all supported controllers
-@@ -76,6 +85,10 @@ static DEFINE_MUTEX(megasas_async_queue_mutex);
+ #include <asm/io.h>
+ #include <asm/irq.h>
+@@ -36,6 +37,8 @@
- static u32 megasas_dbg_lvl;
+ #include <asm/arch/hardware.h>
+ #include <asm/arch/pxa-regs.h>
++#include <asm/arch/regs-ssp.h>
++#include <asm/arch/ssp.h>
+ #include <asm/arch/pxa2xx_spi.h>
-+static void
-+megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
-+ u8 alt_status);
-+
- /**
- * megasas_get_cmd - Get a command from the free pool
- * @instance: Adapter soft state
-@@ -855,6 +868,12 @@ megasas_queue_command(struct scsi_cmnd *scmd, void (*done) (struct scsi_cmnd *))
- atomic_inc(&instance->fw_outstanding);
+ MODULE_AUTHOR("Stephen Street");
+@@ -80,6 +83,9 @@ struct driver_data {
+ /* Driver model hookup */
+ struct platform_device *pdev;
- instance->instancet->fire_cmd(cmd->frame_phys_addr ,cmd->frame_count-1,instance->reg_set);
-+ /*
-+ * Check if we have pend cmds to be completed
-+ */
-+ if (poll_mode_io && atomic_read(&instance->fw_outstanding))
-+ tasklet_schedule(&instance->isr_tasklet);
++ /* SSP Info */
++ struct ssp_device *ssp;
+
-
- return 0;
-
-@@ -886,6 +905,64 @@ static int megasas_slave_configure(struct scsi_device *sdev)
+ /* SPI framework hookup */
+ enum pxa_ssp_type ssp_type;
+ struct spi_master *master;
+@@ -778,6 +784,16 @@ int set_dma_burst_and_threshold(struct chip_data *chip, struct spi_device *spi,
+ return retval;
}
- /**
-+ * megasas_complete_cmd_dpc - Returns FW's controller structure
-+ * @instance_addr: Address of adapter soft state
-+ *
-+ * Tasklet to complete cmds
-+ */
-+static void megasas_complete_cmd_dpc(unsigned long instance_addr)
++static unsigned int ssp_get_clk_div(struct ssp_device *ssp, int rate)
+{
-+ u32 producer;
-+ u32 consumer;
-+ u32 context;
-+ struct megasas_cmd *cmd;
-+ struct megasas_instance *instance =
-+ (struct megasas_instance *)instance_addr;
-+ unsigned long flags;
-+
-+ /* If we have already declared adapter dead, donot complete cmds */
-+ if (instance->hw_crit_error)
-+ return;
-+
-+ spin_lock_irqsave(&instance->completion_lock, flags);
-+
-+ producer = *instance->producer;
-+ consumer = *instance->consumer;
-+
-+ while (consumer != producer) {
-+ context = instance->reply_queue[consumer];
-+
-+ cmd = instance->cmd_list[context];
-+
-+ megasas_complete_cmd(instance, cmd, DID_OK);
-+
-+ consumer++;
-+ if (consumer == (instance->max_fw_cmds + 1)) {
-+ consumer = 0;
-+ }
-+ }
-+
-+ *instance->consumer = producer;
-+
-+ spin_unlock_irqrestore(&instance->completion_lock, flags);
-+
-+ /*
-+ * Check if we can restore can_queue
-+ */
-+ if (instance->flag & MEGASAS_FW_BUSY
-+ && time_after(jiffies, instance->last_time + 5 * HZ)
-+ && atomic_read(&instance->fw_outstanding) < 17) {
-+
-+ spin_lock_irqsave(instance->host->host_lock, flags);
-+ instance->flag &= ~MEGASAS_FW_BUSY;
-+ instance->host->can_queue =
-+ instance->max_fw_cmds - MEGASAS_INT_CMDS;
++ unsigned long ssp_clk = clk_get_rate(ssp->clk);
+
-+ spin_unlock_irqrestore(instance->host->host_lock, flags);
-+ }
++ if (ssp->type == PXA25x_SSP)
++ return ((ssp_clk / (2 * rate) - 1) & 0xff) << 8;
++ else
++ return ((ssp_clk / rate - 1) & 0xfff) << 8;
+}
+
-+/**
- * megasas_wait_for_outstanding - Wait for all outstanding cmds
- * @instance: Adapter soft state
- *
-@@ -908,6 +985,11 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance)
- if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) {
- printk(KERN_NOTICE "megasas: [%2d]waiting for %d "
- "commands to complete\n",i,outstanding);
-+ /*
-+ * Call cmd completion routine. Cmd to be
-+ * be completed directly without depending on isr.
-+ */
-+ megasas_complete_cmd_dpc((unsigned long)instance);
- }
-
- msleep(1000);
-@@ -1100,7 +1182,7 @@ megasas_service_aen(struct megasas_instance *instance, struct megasas_cmd *cmd)
- static struct scsi_host_template megasas_template = {
-
- .module = THIS_MODULE,
-- .name = "LSI Logic SAS based MegaRAID driver",
-+ .name = "LSI SAS based MegaRAID driver",
- .proc_name = "megaraid_sas",
- .slave_configure = megasas_slave_configure,
- .queuecommand = megasas_queue_command,
-@@ -1749,57 +1831,119 @@ megasas_get_ctrl_info(struct megasas_instance *instance,
- }
-
- /**
-- * megasas_complete_cmd_dpc - Returns FW's controller structure
-- * @instance_addr: Address of adapter soft state
-+ * megasas_issue_init_mfi - Initializes the FW
-+ * @instance: Adapter soft state
- *
-- * Tasklet to complete cmds
-+ * Issues the INIT MFI cmd
- */
--static void megasas_complete_cmd_dpc(unsigned long instance_addr)
-+static int
-+megasas_issue_init_mfi(struct megasas_instance *instance)
+ static void pump_transfers(unsigned long data)
{
-- u32 producer;
-- u32 consumer;
- u32 context;
-+
- struct megasas_cmd *cmd;
-- struct megasas_instance *instance = (struct megasas_instance *)instance_addr;
-- unsigned long flags;
+ struct driver_data *drv_data = (struct driver_data *)data;
+@@ -785,6 +801,7 @@ static void pump_transfers(unsigned long data)
+ struct spi_transfer *transfer = NULL;
+ struct spi_transfer *previous = NULL;
+ struct chip_data *chip = NULL;
++ struct ssp_device *ssp = drv_data->ssp;
+ void *reg = drv_data->ioaddr;
+ u32 clk_div = 0;
+ u8 bits = 0;
+@@ -866,12 +883,7 @@ static void pump_transfers(unsigned long data)
+ if (transfer->bits_per_word)
+ bits = transfer->bits_per_word;
-- /* If we have already declared adapter dead, donot complete cmds */
-- if (instance->hw_crit_error)
-- return;
-+ struct megasas_init_frame *init_frame;
-+ struct megasas_init_queue_info *initq_info;
-+ dma_addr_t init_frame_h;
-+ dma_addr_t initq_info_h;
+- if (reg == SSP1_VIRT)
+- clk_div = SSP1_SerClkDiv(speed);
+- else if (reg == SSP2_VIRT)
+- clk_div = SSP2_SerClkDiv(speed);
+- else if (reg == SSP3_VIRT)
+- clk_div = SSP3_SerClkDiv(speed);
++ clk_div = ssp_get_clk_div(ssp, speed);
-- producer = *instance->producer;
-- consumer = *instance->consumer;
-+ /*
-+ * Prepare a init frame. Note the init frame points to queue info
-+ * structure. Each frame has SGL allocated after first 64 bytes. For
-+ * this frame - since we don't need any SGL - we use SGL's space as
-+ * queue info structure
-+ *
-+ * We will not get a NULL command below. We just created the pool.
-+ */
-+ cmd = megasas_get_cmd(instance);
+ if (bits <= 8) {
+ drv_data->n_bytes = 1;
+@@ -1074,6 +1086,7 @@ static int setup(struct spi_device *spi)
+ struct pxa2xx_spi_chip *chip_info = NULL;
+ struct chip_data *chip;
+ struct driver_data *drv_data = spi_master_get_devdata(spi->master);
++ struct ssp_device *ssp = drv_data->ssp;
+ unsigned int clk_div;
-- while (consumer != producer) {
-- context = instance->reply_queue[consumer];
-+ init_frame = (struct megasas_init_frame *)cmd->frame;
-+ initq_info = (struct megasas_init_queue_info *)
-+ ((unsigned long)init_frame + 64);
+ if (!spi->bits_per_word)
+@@ -1157,18 +1170,7 @@ static int setup(struct spi_device *spi)
+ }
+ }
-- cmd = instance->cmd_list[context];
-+ init_frame_h = cmd->frame_phys_addr;
-+ initq_info_h = init_frame_h + 64;
+- if (drv_data->ioaddr == SSP1_VIRT)
+- clk_div = SSP1_SerClkDiv(spi->max_speed_hz);
+- else if (drv_data->ioaddr == SSP2_VIRT)
+- clk_div = SSP2_SerClkDiv(spi->max_speed_hz);
+- else if (drv_data->ioaddr == SSP3_VIRT)
+- clk_div = SSP3_SerClkDiv(spi->max_speed_hz);
+- else
+- {
+- dev_err(&spi->dev, "failed setup: unknown IO address=0x%p\n",
+- drv_data->ioaddr);
+- return -ENODEV;
+- }
++ clk_div = ssp_get_clk_div(ssp, spi->max_speed_hz);
+ chip->speed_hz = spi->max_speed_hz;
-- megasas_complete_cmd(instance, cmd, DID_OK);
-+ context = init_frame->context;
-+ memset(init_frame, 0, MEGAMFI_FRAME_SIZE);
-+ memset(initq_info, 0, sizeof(struct megasas_init_queue_info));
-+ init_frame->context = context;
+ chip->cr0 = clk_div
+@@ -1183,15 +1185,15 @@ static int setup(struct spi_device *spi)
-- consumer++;
-- if (consumer == (instance->max_fw_cmds + 1)) {
-- consumer = 0;
-- }
-- }
-+ initq_info->reply_queue_entries = instance->max_fw_cmds + 1;
-+ initq_info->reply_queue_start_phys_addr_lo = instance->reply_queue_h;
+ /* NOTE: PXA25x_SSP _could_ use external clocking ... */
+ if (drv_data->ssp_type != PXA25x_SSP)
+- dev_dbg(&spi->dev, "%d bits/word, %d Hz, mode %d\n",
++ dev_dbg(&spi->dev, "%d bits/word, %ld Hz, mode %d\n",
+ spi->bits_per_word,
+- (CLOCK_SPEED_HZ)
++ clk_get_rate(ssp->clk)
+ / (1 + ((chip->cr0 & SSCR0_SCR) >> 8)),
+ spi->mode & 0x3);
+ else
+- dev_dbg(&spi->dev, "%d bits/word, %d Hz, mode %d\n",
++ dev_dbg(&spi->dev, "%d bits/word, %ld Hz, mode %d\n",
+ spi->bits_per_word,
+- (CLOCK_SPEED_HZ/2)
++ clk_get_rate(ssp->clk)
+ / (1 + ((chip->cr0 & SSCR0_SCR) >> 8)),
+ spi->mode & 0x3);
-- *instance->consumer = producer;
-+ initq_info->producer_index_phys_addr_lo = instance->producer_h;
-+ initq_info->consumer_index_phys_addr_lo = instance->consumer_h;
-+
-+ init_frame->cmd = MFI_CMD_INIT;
-+ init_frame->cmd_status = 0xFF;
-+ init_frame->queue_info_new_phys_addr_lo = initq_info_h;
-+
-+ init_frame->data_xfer_len = sizeof(struct megasas_init_queue_info);
+@@ -1323,14 +1325,14 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
+ struct pxa2xx_spi_master *platform_info;
+ struct spi_master *master;
+ struct driver_data *drv_data = 0;
+- struct resource *memory_resource;
+- int irq;
++ struct ssp_device *ssp;
+ int status = 0;
- /*
-- * Check if we can restore can_queue
-+ * disable the intr before firing the init frame to FW
- */
-- if (instance->flag & MEGASAS_FW_BUSY
-- && time_after(jiffies, instance->last_time + 5 * HZ)
-- && atomic_read(&instance->fw_outstanding) < 17) {
-+ instance->instancet->disable_intr(instance->reg_set);
+ platform_info = dev->platform_data;
-- spin_lock_irqsave(instance->host->host_lock, flags);
-- instance->flag &= ~MEGASAS_FW_BUSY;
-- instance->host->can_queue =
-- instance->max_fw_cmds - MEGASAS_INT_CMDS;
-+ /*
-+ * Issue the init frame in polled mode
-+ */
+- if (platform_info->ssp_type == SSP_UNDEFINED) {
+- dev_err(&pdev->dev, "undefined SSP\n");
++ ssp = ssp_request(pdev->id, pdev->name);
++ if (ssp == NULL) {
++ dev_err(&pdev->dev, "failed to request SSP%d\n", pdev->id);
+ return -ENODEV;
+ }
-- spin_unlock_irqrestore(instance->host->host_lock, flags);
-+ if (megasas_issue_polled(instance, cmd)) {
-+ printk(KERN_ERR "megasas: Failed to init firmware\n");
-+ megasas_return_cmd(instance, cmd);
-+ goto fail_fw_init;
+@@ -1338,12 +1340,14 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
+ master = spi_alloc_master(dev, sizeof(struct driver_data) + 16);
+ if (!master) {
+ dev_err(&pdev->dev, "can not alloc spi_master\n");
++ ssp_free(ssp);
+ return -ENOMEM;
}
+ drv_data = spi_master_get_devdata(master);
+ drv_data->master = master;
+ drv_data->master_info = platform_info;
+ drv_data->pdev = pdev;
++ drv_data->ssp = ssp;
-+ megasas_return_cmd(instance, cmd);
-+
-+ return 0;
-+
-+fail_fw_init:
-+ return -EINVAL;
-+}
-+
-+/**
-+ * megasas_start_timer - Initializes a timer object
-+ * @instance: Adapter soft state
-+ * @timer: timer object to be initialized
-+ * @fn: timer function
-+ * @interval: time interval between timer function call
-+ */
-+static inline void
-+megasas_start_timer(struct megasas_instance *instance,
-+ struct timer_list *timer,
-+ void *fn, unsigned long interval)
-+{
-+ init_timer(timer);
-+ timer->expires = jiffies + interval;
-+ timer->data = (unsigned long)instance;
-+ timer->function = fn;
-+ add_timer(timer);
-+}
-+
-+/**
-+ * megasas_io_completion_timer - Timer fn
-+ * @instance_addr: Address of adapter soft state
-+ *
-+ * Schedules tasklet for cmd completion
-+ * if poll_mode_io is set
-+ */
-+static void
-+megasas_io_completion_timer(unsigned long instance_addr)
-+{
-+ struct megasas_instance *instance =
-+ (struct megasas_instance *)instance_addr;
-+
-+ if (atomic_read(&instance->fw_outstanding))
-+ tasklet_schedule(&instance->isr_tasklet);
-+
-+ /* Restart timer */
-+ if (poll_mode_io)
-+ mod_timer(&instance->io_completion_timer,
-+ jiffies + MEGASAS_COMPLETION_TIMER_INTERVAL);
- }
+ master->bus_num = pdev->id;
+ master->num_chipselect = platform_info->num_chipselect;
+@@ -1351,21 +1355,13 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
+ master->setup = setup;
+ master->transfer = transfer;
- /**
-@@ -1814,22 +1958,15 @@ static int megasas_init_mfi(struct megasas_instance *instance)
- u32 reply_q_sz;
- u32 max_sectors_1;
- u32 max_sectors_2;
-+ u32 tmp_sectors;
- struct megasas_register_set __iomem *reg_set;
--
-- struct megasas_cmd *cmd;
- struct megasas_ctrl_info *ctrl_info;
--
-- struct megasas_init_frame *init_frame;
-- struct megasas_init_queue_info *initq_info;
-- dma_addr_t init_frame_h;
-- dma_addr_t initq_info_h;
--
- /*
- * Map the message registers
- */
- instance->base_addr = pci_resource_start(instance->pdev, 0);
+- drv_data->ssp_type = platform_info->ssp_type;
++ drv_data->ssp_type = ssp->type;
+ drv_data->null_dma_buf = (u32 *)ALIGN((u32)(drv_data +
+ sizeof(struct driver_data)), 8);
-- if (pci_request_regions(instance->pdev, "megasas: LSI Logic")) {
-+ if (pci_request_regions(instance->pdev, "megasas: LSI")) {
- printk(KERN_DEBUG "megasas: IO memory region busy!\n");
- return -EBUSY;
- }
-@@ -1900,52 +2037,8 @@ static int megasas_init_mfi(struct megasas_instance *instance)
- goto fail_reply_queue;
+- /* Setup register addresses */
+- memory_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- if (!memory_resource) {
+- dev_err(&pdev->dev, "memory resources not defined\n");
+- status = -ENODEV;
+- goto out_error_master_alloc;
+- }
+-
+- drv_data->ioaddr = (void *)io_p2v((unsigned long)(memory_resource->start));
+- drv_data->ssdr_physical = memory_resource->start + 0x00000010;
+- if (platform_info->ssp_type == PXA25x_SSP) {
++ drv_data->ioaddr = ssp->mmio_base;
++ drv_data->ssdr_physical = ssp->phys_base + SSDR;
++ if (ssp->type == PXA25x_SSP) {
+ drv_data->int_cr1 = SSCR1_TIE | SSCR1_RIE;
+ drv_data->dma_cr1 = 0;
+ drv_data->clear_sr = SSSR_ROR;
+@@ -1377,15 +1373,7 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
+ drv_data->mask_sr = SSSR_TINT | SSSR_RFS | SSSR_TFS | SSSR_ROR;
}
-- /*
-- * Prepare a init frame. Note the init frame points to queue info
-- * structure. Each frame has SGL allocated after first 64 bytes. For
-- * this frame - since we don't need any SGL - we use SGL's space as
-- * queue info structure
-- *
-- * We will not get a NULL command below. We just created the pool.
-- */
-- cmd = megasas_get_cmd(instance);
--
-- init_frame = (struct megasas_init_frame *)cmd->frame;
-- initq_info = (struct megasas_init_queue_info *)
-- ((unsigned long)init_frame + 64);
--
-- init_frame_h = cmd->frame_phys_addr;
-- initq_info_h = init_frame_h + 64;
--
-- memset(init_frame, 0, MEGAMFI_FRAME_SIZE);
-- memset(initq_info, 0, sizeof(struct megasas_init_queue_info));
--
-- initq_info->reply_queue_entries = instance->max_fw_cmds + 1;
-- initq_info->reply_queue_start_phys_addr_lo = instance->reply_queue_h;
--
-- initq_info->producer_index_phys_addr_lo = instance->producer_h;
-- initq_info->consumer_index_phys_addr_lo = instance->consumer_h;
--
-- init_frame->cmd = MFI_CMD_INIT;
-- init_frame->cmd_status = 0xFF;
-- init_frame->queue_info_new_phys_addr_lo = initq_info_h;
--
-- init_frame->data_xfer_len = sizeof(struct megasas_init_queue_info);
--
-- /*
-- * disable the intr before firing the init frame to FW
-- */
-- instance->instancet->disable_intr(instance->reg_set);
--
-- /*
-- * Issue the init frame in polled mode
-- */
-- if (megasas_issue_polled(instance, cmd)) {
-- printk(KERN_DEBUG "megasas: Failed to init firmware\n");
-+ if (megasas_issue_init_mfi(instance))
- goto fail_fw_init;
+- /* Attach to IRQ */
+- irq = platform_get_irq(pdev, 0);
+- if (irq < 0) {
+- dev_err(&pdev->dev, "irq resource not defined\n");
+- status = -ENODEV;
+- goto out_error_master_alloc;
- }
-
-- megasas_return_cmd(instance, cmd);
+- status = request_irq(irq, ssp_int, 0, dev->bus_id, drv_data);
++ status = request_irq(ssp->irq, ssp_int, 0, dev->bus_id, drv_data);
+ if (status < 0) {
+ dev_err(&pdev->dev, "can not get IRQ\n");
+ goto out_error_master_alloc;
+@@ -1418,29 +1406,12 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
+ goto out_error_dma_alloc;
+ }
- ctrl_info = kmalloc(sizeof(struct megasas_ctrl_info), GFP_KERNEL);
+- if (drv_data->ioaddr == SSP1_VIRT) {
+- DRCMRRXSSDR = DRCMR_MAPVLD
+- | drv_data->rx_channel;
+- DRCMRTXSSDR = DRCMR_MAPVLD
+- | drv_data->tx_channel;
+- } else if (drv_data->ioaddr == SSP2_VIRT) {
+- DRCMRRXSS2DR = DRCMR_MAPVLD
+- | drv_data->rx_channel;
+- DRCMRTXSS2DR = DRCMR_MAPVLD
+- | drv_data->tx_channel;
+- } else if (drv_data->ioaddr == SSP3_VIRT) {
+- DRCMRRXSS3DR = DRCMR_MAPVLD
+- | drv_data->rx_channel;
+- DRCMRTXSS3DR = DRCMR_MAPVLD
+- | drv_data->tx_channel;
+- } else {
+- dev_err(dev, "bad SSP type\n");
+- goto out_error_dma_alloc;
+- }
++ DRCMR(ssp->drcmr_rx) = DRCMR_MAPVLD | drv_data->rx_channel;
++ DRCMR(ssp->drcmr_tx) = DRCMR_MAPVLD | drv_data->tx_channel;
+ }
-@@ -1958,17 +2051,20 @@ static int megasas_init_mfi(struct megasas_instance *instance)
- * Note that older firmwares ( < FW ver 30) didn't report information
- * to calculate max_sectors_1. So the number ended up as zero always.
- */
-+ tmp_sectors = 0;
- if (ctrl_info && !megasas_get_ctrl_info(instance, ctrl_info)) {
+ /* Enable SOC clock */
+- pxa_set_cken(platform_info->clock_enable, 1);
++ clk_enable(ssp->clk);
- max_sectors_1 = (1 << ctrl_info->stripe_sz_ops.min) *
- ctrl_info->max_strips_per_io;
- max_sectors_2 = ctrl_info->max_request_size;
+ /* Load default SSP configuration */
+ write_SSCR0(0, drv_data->ioaddr);
+@@ -1479,7 +1450,7 @@ out_error_queue_alloc:
+ destroy_queue(drv_data);
-- instance->max_sectors_per_req = (max_sectors_1 < max_sectors_2)
-- ? max_sectors_1 : max_sectors_2;
-- } else
-- instance->max_sectors_per_req = instance->max_num_sge *
-- PAGE_SIZE / 512;
-+ tmp_sectors = min_t(u32, max_sectors_1 , max_sectors_2);
-+ }
-+
-+ instance->max_sectors_per_req = instance->max_num_sge *
-+ PAGE_SIZE / 512;
-+ if (tmp_sectors && (instance->max_sectors_per_req > tmp_sectors))
-+ instance->max_sectors_per_req = tmp_sectors;
+ out_error_clock_enabled:
+- pxa_set_cken(platform_info->clock_enable, 0);
++ clk_disable(ssp->clk);
- kfree(ctrl_info);
+ out_error_dma_alloc:
+ if (drv_data->tx_channel != -1)
+@@ -1488,17 +1459,18 @@ out_error_dma_alloc:
+ pxa_free_dma(drv_data->rx_channel);
-@@ -1976,12 +2072,17 @@ static int megasas_init_mfi(struct megasas_instance *instance)
- * Setup tasklet for cmd completion
- */
+ out_error_irq_alloc:
+- free_irq(irq, drv_data);
++ free_irq(ssp->irq, drv_data);
-- tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
-- (unsigned long)instance);
-+ tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
-+ (unsigned long)instance);
+ out_error_master_alloc:
+ spi_master_put(master);
++ ssp_free(ssp);
+ return status;
+ }
+
+ static int pxa2xx_spi_remove(struct platform_device *pdev)
+ {
+ struct driver_data *drv_data = platform_get_drvdata(pdev);
+- int irq;
++ struct ssp_device *ssp = drv_data->ssp;
+ int status = 0;
+
+ if (!drv_data)
+@@ -1520,28 +1492,21 @@ static int pxa2xx_spi_remove(struct platform_device *pdev)
+
+ /* Disable the SSP at the peripheral and SOC level */
+ write_SSCR0(0, drv_data->ioaddr);
+- pxa_set_cken(drv_data->master_info->clock_enable, 0);
++ clk_disable(ssp->clk);
+
+ /* Release DMA */
+ if (drv_data->master_info->enable_dma) {
+- if (drv_data->ioaddr == SSP1_VIRT) {
+- DRCMRRXSSDR = 0;
+- DRCMRTXSSDR = 0;
+- } else if (drv_data->ioaddr == SSP2_VIRT) {
+- DRCMRRXSS2DR = 0;
+- DRCMRTXSS2DR = 0;
+- } else if (drv_data->ioaddr == SSP3_VIRT) {
+- DRCMRRXSS3DR = 0;
+- DRCMRTXSS3DR = 0;
+- }
++ DRCMR(ssp->drcmr_rx) = 0;
++ DRCMR(ssp->drcmr_tx) = 0;
+ pxa_free_dma(drv_data->tx_channel);
+ pxa_free_dma(drv_data->rx_channel);
+ }
+
+ /* Release IRQ */
+- irq = platform_get_irq(pdev, 0);
+- if (irq >= 0)
+- free_irq(irq, drv_data);
++ free_irq(ssp->irq, drv_data);
+
-+ /* Initialize the cmd completion timer */
-+ if (poll_mode_io)
-+ megasas_start_timer(instance, &instance->io_completion_timer,
-+ megasas_io_completion_timer,
-+ MEGASAS_COMPLETION_TIMER_INTERVAL);
- return 0;
++ /* Release SSP */
++ ssp_free(ssp);
- fail_fw_init:
-- megasas_return_cmd(instance, cmd);
+ /* Disconnect from the SPI framework */
+ spi_unregister_master(drv_data->master);
+@@ -1576,6 +1541,7 @@ static int suspend_devices(struct device *dev, void *pm_message)
+ static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
+ {
+ struct driver_data *drv_data = platform_get_drvdata(pdev);
++ struct ssp_device *ssp = drv_data->ssp;
+ int status = 0;
+
+ /* Check all childern for current power state */
+@@ -1588,7 +1554,7 @@ static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
+ if (status != 0)
+ return status;
+ write_SSCR0(0, drv_data->ioaddr);
+- pxa_set_cken(drv_data->master_info->clock_enable, 0);
++ clk_disable(ssp->clk);
- pci_free_consistent(instance->pdev, reply_q_sz,
- instance->reply_queue, instance->reply_queue_h);
-@@ -2263,6 +2364,28 @@ static int megasas_io_attach(struct megasas_instance *instance)
return 0;
}
+@@ -1596,10 +1562,11 @@ static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
+ static int pxa2xx_spi_resume(struct platform_device *pdev)
+ {
+ struct driver_data *drv_data = platform_get_drvdata(pdev);
++ struct ssp_device *ssp = drv_data->ssp;
+ int status = 0;
-+static int
-+megasas_set_dma_mask(struct pci_dev *pdev)
+ /* Enable the SSP clock */
+- pxa_set_cken(drv_data->master_info->clock_enable, 1);
++ clk_disable(ssp->clk);
+
+ /* Start the queue running */
+ status = start_queue(drv_data);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 93e9de4..682a6a4 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -485,6 +485,15 @@ void spi_unregister_master(struct spi_master *master)
+ }
+ EXPORT_SYMBOL_GPL(spi_unregister_master);
+
++static int __spi_master_match(struct device *dev, void *data)
+{
-+ /*
-+ * All our contollers are capable of performing 64-bit DMA
-+ */
-+ if (IS_DMA64) {
-+ if (pci_set_dma_mask(pdev, DMA_64BIT_MASK) != 0) {
-+
-+ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
-+ goto fail_set_dma_mask;
-+ }
-+ } else {
-+ if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
-+ goto fail_set_dma_mask;
-+ }
-+ return 0;
++ struct spi_master *m;
++ u16 *bus_num = data;
+
-+fail_set_dma_mask:
-+ return 1;
++ m = container_of(dev, struct spi_master, dev);
++ return m->bus_num == *bus_num;
+}
+
/**
- * megasas_probe_one - PCI hotplug entry point
- * @pdev: PCI device structure
-@@ -2296,19 +2419,8 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
-
- pci_set_master(pdev);
-
-- /*
-- * All our contollers are capable of performing 64-bit DMA
-- */
-- if (IS_DMA64) {
-- if (pci_set_dma_mask(pdev, DMA_64BIT_MASK) != 0) {
+ * spi_busnum_to_master - look up master associated with bus_num
+ * @bus_num: the master's bus number
+@@ -499,17 +508,12 @@ struct spi_master *spi_busnum_to_master(u16 bus_num)
+ {
+ struct device *dev;
+ struct spi_master *master = NULL;
+- struct spi_master *m;
-
-- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
-- goto fail_set_dma_mask;
+- down(&spi_master_class.sem);
+- list_for_each_entry(dev, &spi_master_class.children, node) {
+- m = container_of(dev, struct spi_master, dev);
+- if (m->bus_num == bus_num) {
+- master = spi_master_get(m);
+- break;
- }
-- } else {
-- if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) != 0)
-- goto fail_set_dma_mask;
- }
-+ if (megasas_set_dma_mask(pdev))
-+ goto fail_set_dma_mask;
+- up(&spi_master_class.sem);
++
++ dev = class_find_device(&spi_master_class, &bus_num,
++ __spi_master_match);
++ if (dev)
++ master = container_of(dev, struct spi_master, dev);
++ /* reference got in class_find_device */
+ return master;
+ }
+ EXPORT_SYMBOL_GPL(spi_busnum_to_master);
+diff --git a/drivers/ssb/b43_pci_bridge.c b/drivers/ssb/b43_pci_bridge.c
+index f145d8a..1a31f7a 100644
+--- a/drivers/ssb/b43_pci_bridge.c
++++ b/drivers/ssb/b43_pci_bridge.c
+@@ -27,6 +27,8 @@ static const struct pci_device_id b43_pci_bridge_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4321) },
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4324) },
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4325) },
++ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4328) },
++ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4329) },
+ { 0, },
+ };
+ MODULE_DEVICE_TABLE(pci, b43_pci_bridge_tbl);
+diff --git a/drivers/ssb/main.c b/drivers/ssb/main.c
+index 85a2054..9028ed5 100644
+--- a/drivers/ssb/main.c
++++ b/drivers/ssb/main.c
+@@ -872,14 +872,22 @@ EXPORT_SYMBOL(ssb_clockspeed);
- host = scsi_host_alloc(&megasas_template,
- sizeof(struct megasas_instance));
-@@ -2357,8 +2469,9 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
- init_waitqueue_head(&instance->abort_cmd_wait_q);
+ static u32 ssb_tmslow_reject_bitmask(struct ssb_device *dev)
+ {
++ u32 rev = ssb_read32(dev, SSB_IDLOW) & SSB_IDLOW_SSBREV;
++
+ /* The REJECT bit changed position in TMSLOW between
+ * Backplane revisions. */
+- switch (ssb_read32(dev, SSB_IDLOW) & SSB_IDLOW_SSBREV) {
++ switch (rev) {
+ case SSB_IDLOW_SSBREV_22:
+ return SSB_TMSLOW_REJECT_22;
+ case SSB_IDLOW_SSBREV_23:
+ return SSB_TMSLOW_REJECT_23;
++ case SSB_IDLOW_SSBREV_24: /* TODO - find the proper REJECT bits */
++ case SSB_IDLOW_SSBREV_25: /* same here */
++ case SSB_IDLOW_SSBREV_26: /* same here */
++ case SSB_IDLOW_SSBREV_27: /* same here */
++ return SSB_TMSLOW_REJECT_23; /* this is a guess */
+ default:
++ printk(KERN_INFO "ssb: Backplane Revision 0x%.8X\n", rev);
+ WARN_ON(1);
+ }
+ return (SSB_TMSLOW_REJECT_22 | SSB_TMSLOW_REJECT_23);
+diff --git a/drivers/ssb/pci.c b/drivers/ssb/pci.c
+index 0ab095c..b434df7 100644
+--- a/drivers/ssb/pci.c
++++ b/drivers/ssb/pci.c
+@@ -212,29 +212,29 @@ static inline u8 ssb_crc8(u8 crc, u8 data)
+ return t[crc ^ data];
+ }
- spin_lock_init(&instance->cmd_pool_lock);
-+ spin_lock_init(&instance->completion_lock);
+-static u8 ssb_sprom_crc(const u16 *sprom)
++static u8 ssb_sprom_crc(const u16 *sprom, u16 size)
+ {
+ int word;
+ u8 crc = 0xFF;
-- sema_init(&instance->aen_mutex, 1);
-+ mutex_init(&instance->aen_mutex);
- sema_init(&instance->ioctl_sem, MEGASAS_INT_CMDS);
+- for (word = 0; word < SSB_SPROMSIZE_WORDS - 1; word++) {
++ for (word = 0; word < size - 1; word++) {
+ crc = ssb_crc8(crc, sprom[word] & 0x00FF);
+ crc = ssb_crc8(crc, (sprom[word] & 0xFF00) >> 8);
+ }
+- crc = ssb_crc8(crc, sprom[SPOFF(SSB_SPROM_REVISION)] & 0x00FF);
++ crc = ssb_crc8(crc, sprom[size - 1] & 0x00FF);
+ crc ^= 0xFF;
- /*
-@@ -2490,8 +2603,10 @@ static void megasas_flush_cache(struct megasas_instance *instance)
- /**
- * megasas_shutdown_controller - Instructs FW to shutdown the controller
- * @instance: Adapter soft state
-+ * @opcode: Shutdown/Hibernate
- */
--static void megasas_shutdown_controller(struct megasas_instance *instance)
-+static void megasas_shutdown_controller(struct megasas_instance *instance,
-+ u32 opcode)
+ return crc;
+ }
+
+-static int sprom_check_crc(const u16 *sprom)
++static int sprom_check_crc(const u16 *sprom, u16 size)
{
- struct megasas_cmd *cmd;
- struct megasas_dcmd_frame *dcmd;
-@@ -2514,7 +2629,7 @@ static void megasas_shutdown_controller(struct megasas_instance *instance)
- dcmd->flags = MFI_FRAME_DIR_NONE;
- dcmd->timeout = 0;
- dcmd->data_xfer_len = 0;
-- dcmd->opcode = MR_DCMD_CTRL_SHUTDOWN;
-+ dcmd->opcode = opcode;
+ u8 crc;
+ u8 expected_crc;
+ u16 tmp;
- megasas_issue_blocked_cmd(instance, cmd);
+- crc = ssb_sprom_crc(sprom);
+- tmp = sprom[SPOFF(SSB_SPROM_REVISION)] & SSB_SPROM_REVISION_CRC;
++ crc = ssb_sprom_crc(sprom, size);
++ tmp = sprom[size - 1] & SSB_SPROM_REVISION_CRC;
+ expected_crc = tmp >> SSB_SPROM_REVISION_CRC_SHIFT;
+ if (crc != expected_crc)
+ return -EPROTO;
+@@ -246,8 +246,8 @@ static void sprom_do_read(struct ssb_bus *bus, u16 *sprom)
+ {
+ int i;
-@@ -2524,6 +2639,139 @@ static void megasas_shutdown_controller(struct megasas_instance *instance)
+- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++)
+- sprom[i] = readw(bus->mmio + SSB_SPROM_BASE + (i * 2));
++ for (i = 0; i < bus->sprom_size; i++)
++ sprom[i] = ioread16(bus->mmio + SSB_SPROM_BASE + (i * 2));
}
- /**
-+ * megasas_suspend - driver suspend entry point
-+ * @pdev: PCI device structure
-+ * @state: PCI power state to suspend routine
-+ */
-+static int __devinit
-+megasas_suspend(struct pci_dev *pdev, pm_message_t state)
+ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
+@@ -255,6 +255,7 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
+ struct pci_dev *pdev = bus->host_pci;
+ int i, err;
+ u32 spromctl;
++ u16 size = bus->sprom_size;
+
+ ssb_printk(KERN_NOTICE PFX "Writing SPROM. Do NOT turn off the power! Please stand by...\n");
+ err = pci_read_config_dword(pdev, SSB_SPROMCTL, &spromctl);
+@@ -266,12 +267,12 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
+ goto err_ctlreg;
+ ssb_printk(KERN_NOTICE PFX "[ 0%%");
+ msleep(500);
+- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++) {
+- if (i == SSB_SPROMSIZE_WORDS / 4)
++ for (i = 0; i < size; i++) {
++ if (i == size / 4)
+ ssb_printk("25%%");
+- else if (i == SSB_SPROMSIZE_WORDS / 2)
++ else if (i == size / 2)
+ ssb_printk("50%%");
+- else if (i == (SSB_SPROMSIZE_WORDS / 4) * 3)
++ else if (i == (size * 3) / 4)
+ ssb_printk("75%%");
+ else if (i % 2)
+ ssb_printk(".");
+@@ -296,24 +297,53 @@ err_ctlreg:
+ return err;
+ }
+
+-static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
++static s8 r123_extract_antgain(u8 sprom_revision, const u16 *in,
++ u16 mask, u16 shift)
+{
-+ struct Scsi_Host *host;
-+ struct megasas_instance *instance;
-+
-+ instance = pci_get_drvdata(pdev);
-+ host = instance->host;
-+
-+ if (poll_mode_io)
-+ del_timer_sync(&instance->io_completion_timer);
-+
-+ megasas_flush_cache(instance);
-+ megasas_shutdown_controller(instance, MR_DCMD_HIBERNATE_SHUTDOWN);
-+ tasklet_kill(&instance->isr_tasklet);
-+
-+ pci_set_drvdata(instance->pdev, instance);
-+ instance->instancet->disable_intr(instance->reg_set);
-+ free_irq(instance->pdev->irq, instance);
-+
-+ pci_save_state(pdev);
-+ pci_disable_device(pdev);
++ u16 v;
++ u8 gain;
+
-+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
++ v = in[SPOFF(SSB_SPROM1_AGAIN)];
++ gain = (v & mask) >> shift;
++ if (gain == 0xFF)
++ gain = 2; /* If unset use 2dBm */
++ if (sprom_revision == 1) {
++ /* Convert to Q5.2 */
++ gain <<= 2;
++ } else {
++ /* Q5.2 Fractional part is stored in 0xC0 */
++ gain = ((gain & 0xC0) >> 6) | ((gain & 0x3F) << 2);
++ }
+
-+ return 0;
++ return (s8)gain;
+}
+
-+/**
-+ * megasas_resume- driver resume entry point
-+ * @pdev: PCI device structure
-+ */
-+static int __devinit
-+megasas_resume(struct pci_dev *pdev)
-+{
-+ int rval;
-+ struct Scsi_Host *host;
-+ struct megasas_instance *instance;
-+
-+ instance = pci_get_drvdata(pdev);
-+ host = instance->host;
-+ pci_set_power_state(pdev, PCI_D0);
-+ pci_enable_wake(pdev, PCI_D0, 0);
-+ pci_restore_state(pdev);
-+
-+ /*
-+ * PCI prepping: enable device set bus mastering and dma mask
-+ */
-+ rval = pci_enable_device(pdev);
-+
-+ if (rval) {
-+ printk(KERN_ERR "megasas: Enable device failed\n");
-+ return rval;
++static void sprom_extract_r123(struct ssb_sprom *out, const u16 *in)
+ {
+ int i;
+ u16 v;
++ s8 gain;
++ u16 loc[3];
+
+- SPEX(pci_spid, SSB_SPROM1_SPID, 0xFFFF, 0);
+- SPEX(pci_svid, SSB_SPROM1_SVID, 0xFFFF, 0);
+- SPEX(pci_pid, SSB_SPROM1_PID, 0xFFFF, 0);
++ if (out->revision == 3) { /* rev 3 moved MAC */
++ loc[0] = SSB_SPROM3_IL0MAC;
++ loc[1] = SSB_SPROM3_ET0MAC;
++ loc[2] = SSB_SPROM3_ET1MAC;
++ } else {
++ loc[0] = SSB_SPROM1_IL0MAC;
++ loc[1] = SSB_SPROM1_ET0MAC;
++ loc[2] = SSB_SPROM1_ET1MAC;
+ }
+ for (i = 0; i < 3; i++) {
+- v = in[SPOFF(SSB_SPROM1_IL0MAC) + i];
++ v = in[SPOFF(loc[0]) + i];
+ *(((__be16 *)out->il0mac) + i) = cpu_to_be16(v);
+ }
+ for (i = 0; i < 3; i++) {
+- v = in[SPOFF(SSB_SPROM1_ET0MAC) + i];
++ v = in[SPOFF(loc[1]) + i];
+ *(((__be16 *)out->et0mac) + i) = cpu_to_be16(v);
+ }
+ for (i = 0; i < 3; i++) {
+- v = in[SPOFF(SSB_SPROM1_ET1MAC) + i];
++ v = in[SPOFF(loc[2]) + i];
+ *(((__be16 *)out->et1mac) + i) = cpu_to_be16(v);
+ }
+ SPEX(et0phyaddr, SSB_SPROM1_ETHPHY, SSB_SPROM1_ETHPHY_ET0A, 0);
+@@ -324,9 +354,9 @@ static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
+ SPEX(board_rev, SSB_SPROM1_BINF, SSB_SPROM1_BINF_BREV, 0);
+ SPEX(country_code, SSB_SPROM1_BINF, SSB_SPROM1_BINF_CCODE,
+ SSB_SPROM1_BINF_CCODE_SHIFT);
+- SPEX(antenna_a, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTA,
++ SPEX(ant_available_a, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTA,
+ SSB_SPROM1_BINF_ANTA_SHIFT);
+- SPEX(antenna_bg, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTBG,
++ SPEX(ant_available_bg, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTBG,
+ SSB_SPROM1_BINF_ANTBG_SHIFT);
+ SPEX(pa0b0, SSB_SPROM1_PA0B0, 0xFFFF, 0);
+ SPEX(pa0b1, SSB_SPROM1_PA0B1, 0xFFFF, 0);
+@@ -347,100 +377,108 @@ static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
+ SSB_SPROM1_ITSSI_A_SHIFT);
+ SPEX(itssi_bg, SSB_SPROM1_ITSSI, SSB_SPROM1_ITSSI_BG, 0);
+ SPEX(boardflags_lo, SSB_SPROM1_BFLLO, 0xFFFF, 0);
+- SPEX(antenna_gain_a, SSB_SPROM1_AGAIN, SSB_SPROM1_AGAIN_A, 0);
+- SPEX(antenna_gain_bg, SSB_SPROM1_AGAIN, SSB_SPROM1_AGAIN_BG,
+- SSB_SPROM1_AGAIN_BG_SHIFT);
+- for (i = 0; i < 4; i++) {
+- v = in[SPOFF(SSB_SPROM1_OEM) + i];
+- *(((__le16 *)out->oem) + i) = cpu_to_le16(v);
+- }
++ if (out->revision >= 2)
++ SPEX(boardflags_hi, SSB_SPROM2_BFLHI, 0xFFFF, 0);
+
-+ pci_set_master(pdev);
-+
-+ if (megasas_set_dma_mask(pdev))
-+ goto fail_set_dma_mask;
-+
-+ /*
-+ * Initialize MFI Firmware
-+ */
-+
-+ *instance->producer = 0;
-+ *instance->consumer = 0;
-+
-+ atomic_set(&instance->fw_outstanding, 0);
-+
-+ /*
-+ * We expect the FW state to be READY
-+ */
-+ if (megasas_transition_to_ready(instance))
-+ goto fail_ready_state;
-+
-+ if (megasas_issue_init_mfi(instance))
-+ goto fail_init_mfi;
-+
-+ tasklet_init(&instance->isr_tasklet, megasas_complete_cmd_dpc,
-+ (unsigned long)instance);
-+
-+ /*
-+ * Register IRQ
-+ */
-+ if (request_irq(pdev->irq, megasas_isr, IRQF_SHARED,
-+ "megasas", instance)) {
-+ printk(KERN_ERR "megasas: Failed to register IRQ\n");
-+ goto fail_irq;
++ /* Extract the antenna gain values. */
++ gain = r123_extract_antgain(out->revision, in,
++ SSB_SPROM1_AGAIN_BG,
++ SSB_SPROM1_AGAIN_BG_SHIFT);
++ out->antenna_gain.ghz24.a0 = gain;
++ out->antenna_gain.ghz24.a1 = gain;
++ out->antenna_gain.ghz24.a2 = gain;
++ out->antenna_gain.ghz24.a3 = gain;
++ gain = r123_extract_antgain(out->revision, in,
++ SSB_SPROM1_AGAIN_A,
++ SSB_SPROM1_AGAIN_A_SHIFT);
++ out->antenna_gain.ghz5.a0 = gain;
++ out->antenna_gain.ghz5.a1 = gain;
++ out->antenna_gain.ghz5.a2 = gain;
++ out->antenna_gain.ghz5.a3 = gain;
+ }
+
+-static void sprom_extract_r2(struct ssb_sprom_r2 *out, const u16 *in)
++static void sprom_extract_r4(struct ssb_sprom *out, const u16 *in)
+ {
+ int i;
+ u16 v;
+
+- SPEX(boardflags_hi, SSB_SPROM2_BFLHI, 0xFFFF, 0);
+- SPEX(maxpwr_a_hi, SSB_SPROM2_MAXP_A, SSB_SPROM2_MAXP_A_HI, 0);
+- SPEX(maxpwr_a_lo, SSB_SPROM2_MAXP_A, SSB_SPROM2_MAXP_A_LO,
+- SSB_SPROM2_MAXP_A_LO_SHIFT);
+- SPEX(pa1lob0, SSB_SPROM2_PA1LOB0, 0xFFFF, 0);
+- SPEX(pa1lob1, SSB_SPROM2_PA1LOB1, 0xFFFF, 0);
+- SPEX(pa1lob2, SSB_SPROM2_PA1LOB2, 0xFFFF, 0);
+- SPEX(pa1hib0, SSB_SPROM2_PA1HIB0, 0xFFFF, 0);
+- SPEX(pa1hib1, SSB_SPROM2_PA1HIB1, 0xFFFF, 0);
+- SPEX(pa1hib2, SSB_SPROM2_PA1HIB2, 0xFFFF, 0);
+- SPEX(ofdm_pwr_off, SSB_SPROM2_OPO, SSB_SPROM2_OPO_VALUE, 0);
+- for (i = 0; i < 4; i++) {
+- v = in[SPOFF(SSB_SPROM2_CCODE) + i];
+- *(((__le16 *)out->country_str) + i) = cpu_to_le16(v);
++ /* extract the equivalent of the r1 variables */
++ for (i = 0; i < 3; i++) {
++ v = in[SPOFF(SSB_SPROM4_IL0MAC) + i];
++ *(((__be16 *)out->il0mac) + i) = cpu_to_be16(v);
+ }
++ for (i = 0; i < 3; i++) {
++ v = in[SPOFF(SSB_SPROM4_ET0MAC) + i];
++ *(((__be16 *)out->et0mac) + i) = cpu_to_be16(v);
++ }
++ for (i = 0; i < 3; i++) {
++ v = in[SPOFF(SSB_SPROM4_ET1MAC) + i];
++ *(((__be16 *)out->et1mac) + i) = cpu_to_be16(v);
+ }
++ SPEX(et0phyaddr, SSB_SPROM4_ETHPHY, SSB_SPROM4_ETHPHY_ET0A, 0);
++ SPEX(et1phyaddr, SSB_SPROM4_ETHPHY, SSB_SPROM4_ETHPHY_ET1A,
++ SSB_SPROM4_ETHPHY_ET1A_SHIFT);
++ SPEX(country_code, SSB_SPROM4_CCODE, 0xFFFF, 0);
++ SPEX(boardflags_lo, SSB_SPROM4_BFLLO, 0xFFFF, 0);
++ SPEX(boardflags_hi, SSB_SPROM4_BFLHI, 0xFFFF, 0);
++ SPEX(ant_available_a, SSB_SPROM4_ANTAVAIL, SSB_SPROM4_ANTAVAIL_A,
++ SSB_SPROM4_ANTAVAIL_A_SHIFT);
++ SPEX(ant_available_bg, SSB_SPROM4_ANTAVAIL, SSB_SPROM4_ANTAVAIL_BG,
++ SSB_SPROM4_ANTAVAIL_BG_SHIFT);
++ SPEX(maxpwr_bg, SSB_SPROM4_MAXP_BG, SSB_SPROM4_MAXP_BG_MASK, 0);
++ SPEX(itssi_bg, SSB_SPROM4_MAXP_BG, SSB_SPROM4_ITSSI_BG,
++ SSB_SPROM4_ITSSI_BG_SHIFT);
++ SPEX(maxpwr_a, SSB_SPROM4_MAXP_A, SSB_SPROM4_MAXP_A_MASK, 0);
++ SPEX(itssi_a, SSB_SPROM4_MAXP_A, SSB_SPROM4_ITSSI_A,
++ SSB_SPROM4_ITSSI_A_SHIFT);
++ SPEX(gpio0, SSB_SPROM4_GPIOA, SSB_SPROM4_GPIOA_P0, 0);
++ SPEX(gpio1, SSB_SPROM4_GPIOA, SSB_SPROM4_GPIOA_P1,
++ SSB_SPROM4_GPIOA_P1_SHIFT);
++ SPEX(gpio2, SSB_SPROM4_GPIOB, SSB_SPROM4_GPIOB_P2, 0);
++ SPEX(gpio3, SSB_SPROM4_GPIOB, SSB_SPROM4_GPIOB_P3,
++ SSB_SPROM4_GPIOB_P3_SHIFT);
+
-+ instance->instancet->enable_intr(instance->reg_set);
++ /* Extract the antenna gain values. */
++ SPEX(antenna_gain.ghz24.a0, SSB_SPROM4_AGAIN01,
++ SSB_SPROM4_AGAIN0, SSB_SPROM4_AGAIN0_SHIFT);
++ SPEX(antenna_gain.ghz24.a1, SSB_SPROM4_AGAIN01,
++ SSB_SPROM4_AGAIN1, SSB_SPROM4_AGAIN1_SHIFT);
++ SPEX(antenna_gain.ghz24.a2, SSB_SPROM4_AGAIN23,
++ SSB_SPROM4_AGAIN2, SSB_SPROM4_AGAIN2_SHIFT);
++ SPEX(antenna_gain.ghz24.a3, SSB_SPROM4_AGAIN23,
++ SSB_SPROM4_AGAIN3, SSB_SPROM4_AGAIN3_SHIFT);
++ memcpy(&out->antenna_gain.ghz5, &out->antenna_gain.ghz24,
++ sizeof(out->antenna_gain.ghz5));
+
-+ /*
-+ * Initiate AEN (Asynchronous Event Notification)
-+ */
-+ if (megasas_start_aen(instance))
-+ printk(KERN_ERR "megasas: Start AEN failed\n");
++ /* TODO - get remaining rev 4 stuff needed */
+ }
+
+-static void sprom_extract_r3(struct ssb_sprom_r3 *out, const u16 *in)
+-{
+- out->ofdmapo = (in[SPOFF(SSB_SPROM3_OFDMAPO) + 0] & 0xFF00) >> 8;
+- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 0] & 0x00FF) << 8;
+- out->ofdmapo <<= 16;
+- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 1] & 0xFF00) >> 8;
+- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 1] & 0x00FF) << 8;
+-
+- out->ofdmalpo = (in[SPOFF(SSB_SPROM3_OFDMALPO) + 0] & 0xFF00) >> 8;
+- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 0] & 0x00FF) << 8;
+- out->ofdmalpo <<= 16;
+- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 1] & 0xFF00) >> 8;
+- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 1] & 0x00FF) << 8;
+-
+- out->ofdmahpo = (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 0] & 0xFF00) >> 8;
+- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 0] & 0x00FF) << 8;
+- out->ofdmahpo <<= 16;
+- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 1] & 0xFF00) >> 8;
+- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 1] & 0x00FF) << 8;
+-
+- SPEX(gpioldc_on_cnt, SSB_SPROM3_GPIOLDC, SSB_SPROM3_GPIOLDC_ON,
+- SSB_SPROM3_GPIOLDC_ON_SHIFT);
+- SPEX(gpioldc_off_cnt, SSB_SPROM3_GPIOLDC, SSB_SPROM3_GPIOLDC_OFF,
+- SSB_SPROM3_GPIOLDC_OFF_SHIFT);
+- SPEX(cckpo_1M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_1M, 0);
+- SPEX(cckpo_2M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_2M,
+- SSB_SPROM3_CCKPO_2M_SHIFT);
+- SPEX(cckpo_55M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_55M,
+- SSB_SPROM3_CCKPO_55M_SHIFT);
+- SPEX(cckpo_11M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_11M,
+- SSB_SPROM3_CCKPO_11M_SHIFT);
+-
+- out->ofdmgpo = (in[SPOFF(SSB_SPROM3_OFDMGPO) + 0] & 0xFF00) >> 8;
+- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 0] & 0x00FF) << 8;
+- out->ofdmgpo <<= 16;
+- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 1] & 0xFF00) >> 8;
+- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 1] & 0x00FF) << 8;
+-}
+-
+-static int sprom_extract(struct ssb_bus *bus,
+- struct ssb_sprom *out, const u16 *in)
++static int sprom_extract(struct ssb_bus *bus, struct ssb_sprom *out,
++ const u16 *in, u16 size)
+ {
+ memset(out, 0, sizeof(*out));
+
+- SPEX(revision, SSB_SPROM_REVISION, SSB_SPROM_REVISION_REV, 0);
+- SPEX(crc, SSB_SPROM_REVISION, SSB_SPROM_REVISION_CRC,
+- SSB_SPROM_REVISION_CRC_SHIFT);
+-
++ out->revision = in[size - 1] & 0x00FF;
++ ssb_dprintk(KERN_DEBUG PFX "SPROM revision %d detected.\n", out->revision);
+ if ((bus->chip_id & 0xFF00) == 0x4400) {
+ /* Workaround: The BCM44XX chip has a stupid revision
+ * number stored in the SPROM.
+ * Always extract r1. */
+- sprom_extract_r1(&out->r1, in);
++ out->revision = 1;
++ sprom_extract_r123(out, in);
++ } else if (bus->chip_id == 0x4321) {
++ /* the BCM4328 has a chipid == 0x4321 and a rev 4 SPROM */
++ out->revision = 4;
++ sprom_extract_r4(out, in);
+ } else {
+ if (out->revision == 0)
+ goto unsupported;
+- if (out->revision >= 1 && out->revision <= 3)
+- sprom_extract_r1(&out->r1, in);
+- if (out->revision >= 2 && out->revision <= 3)
+- sprom_extract_r2(&out->r2, in);
+- if (out->revision == 3)
+- sprom_extract_r3(&out->r3, in);
+- if (out->revision >= 4)
++ if (out->revision >= 1 && out->revision <= 3) {
++ sprom_extract_r123(out, in);
++ }
++ if (out->revision == 4)
++ sprom_extract_r4(out, in);
++ if (out->revision >= 5)
+ goto unsupported;
+ }
+
+@@ -448,7 +486,7 @@ static int sprom_extract(struct ssb_bus *bus,
+ unsupported:
+ ssb_printk(KERN_WARNING PFX "Unsupported SPROM revision %d "
+ "detected. Will extract v1\n", out->revision);
+- sprom_extract_r1(&out->r1, in);
++ sprom_extract_r123(out, in);
+ return 0;
+ }
+
+@@ -458,16 +496,29 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
+ int err = -ENOMEM;
+ u16 *buf;
+
+- buf = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
++ buf = kcalloc(SSB_SPROMSIZE_WORDS_R123, sizeof(u16), GFP_KERNEL);
+ if (!buf)
+ goto out;
++ bus->sprom_size = SSB_SPROMSIZE_WORDS_R123;
+ sprom_do_read(bus, buf);
+- err = sprom_check_crc(buf);
++ err = sprom_check_crc(buf, bus->sprom_size);
+ if (err) {
+- ssb_printk(KERN_WARNING PFX
+- "WARNING: Invalid SPROM CRC (corrupt SPROM)\n");
++ /* check for rev 4 sprom - has special signature */
++ if (buf[32] == 0x5372) {
++ kfree(buf);
++ buf = kcalloc(SSB_SPROMSIZE_WORDS_R4, sizeof(u16),
++ GFP_KERNEL);
++ if (!buf)
++ goto out;
++ bus->sprom_size = SSB_SPROMSIZE_WORDS_R4;
++ sprom_do_read(bus, buf);
++ err = sprom_check_crc(buf, bus->sprom_size);
++ }
++ if (err)
++ ssb_printk(KERN_WARNING PFX "WARNING: Invalid"
++ " SPROM CRC (corrupt SPROM)\n");
+ }
+- err = sprom_extract(bus, sprom, buf);
++ err = sprom_extract(bus, sprom, buf, bus->sprom_size);
+
+ kfree(buf);
+ out:
+@@ -581,29 +632,28 @@ const struct ssb_bus_ops ssb_pci_ops = {
+ .write32 = ssb_pci_write32,
+ };
+
+-static int sprom2hex(const u16 *sprom, char *buf, size_t buf_len)
++static int sprom2hex(const u16 *sprom, char *buf, size_t buf_len, u16 size)
+ {
+ int i, pos = 0;
+
+- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++) {
++ for (i = 0; i < size; i++)
+ pos += snprintf(buf + pos, buf_len - pos - 1,
+ "%04X", swab16(sprom[i]) & 0xFFFF);
+- }
+ pos += snprintf(buf + pos, buf_len - pos - 1, "\n");
+
+ return pos + 1;
+ }
+
+-static int hex2sprom(u16 *sprom, const char *dump, size_t len)
++static int hex2sprom(u16 *sprom, const char *dump, size_t len, u16 size)
+ {
+ char tmp[5] = { 0 };
+ int cnt = 0;
+ unsigned long parsed;
+
+- if (len < SSB_SPROMSIZE_BYTES * 2)
++ if (len < size * 2)
+ return -EINVAL;
+
+- while (cnt < SSB_SPROMSIZE_WORDS) {
++ while (cnt < size) {
+ memcpy(tmp, dump, 4);
+ dump += 4;
+ parsed = simple_strtoul(tmp, NULL, 16);
+@@ -627,7 +677,7 @@ static ssize_t ssb_pci_attr_sprom_show(struct device *pcidev,
+ if (!bus)
+ goto out;
+ err = -ENOMEM;
+- sprom = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
++ sprom = kcalloc(bus->sprom_size, sizeof(u16), GFP_KERNEL);
+ if (!sprom)
+ goto out;
+
+@@ -640,7 +690,7 @@ static ssize_t ssb_pci_attr_sprom_show(struct device *pcidev,
+ sprom_do_read(bus, sprom);
+ mutex_unlock(&bus->pci_sprom_mutex);
+
+- count = sprom2hex(sprom, buf, PAGE_SIZE);
++ count = sprom2hex(sprom, buf, PAGE_SIZE, bus->sprom_size);
+ err = 0;
+
+ out_kfree:
+@@ -662,15 +712,15 @@ static ssize_t ssb_pci_attr_sprom_store(struct device *pcidev,
+ if (!bus)
+ goto out;
+ err = -ENOMEM;
+- sprom = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
++ sprom = kcalloc(bus->sprom_size, sizeof(u16), GFP_KERNEL);
+ if (!sprom)
+ goto out;
+- err = hex2sprom(sprom, buf, count);
++ err = hex2sprom(sprom, buf, count, bus->sprom_size);
+ if (err) {
+ err = -EINVAL;
+ goto out_kfree;
+ }
+- err = sprom_check_crc(sprom);
++ err = sprom_check_crc(sprom, bus->sprom_size);
+ if (err) {
+ err = -EINVAL;
+ goto out_kfree;
+diff --git a/drivers/ssb/pcmcia.c b/drivers/ssb/pcmcia.c
+index bb44a76..46816cd 100644
+--- a/drivers/ssb/pcmcia.c
++++ b/drivers/ssb/pcmcia.c
+@@ -94,7 +94,6 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
+ struct ssb_device *dev)
+ {
+ int err;
+- unsigned long flags;
+
+ #if SSB_VERBOSE_PCMCIACORESWITCH_DEBUG
+ ssb_printk(KERN_INFO PFX
+@@ -103,11 +102,9 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
+ dev->core_index);
+ #endif
+
+- spin_lock_irqsave(&bus->bar_lock, flags);
+ err = ssb_pcmcia_switch_coreidx(bus, dev->core_index);
+ if (!err)
+ bus->mapped_device = dev;
+- spin_unlock_irqrestore(&bus->bar_lock, flags);
+
+ return err;
+ }
+@@ -115,14 +112,12 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
+ int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
+ {
+ int attempts = 0;
+- unsigned long flags;
+ conf_reg_t reg;
+- int res, err = 0;
++ int res;
+
+ SSB_WARN_ON((seg != 0) && (seg != 1));
+ reg.Offset = 0x34;
+ reg.Function = 0;
+- spin_lock_irqsave(&bus->bar_lock, flags);
+ while (1) {
+ reg.Action = CS_WRITE;
+ reg.Value = seg;
+@@ -143,13 +138,11 @@ int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
+ udelay(10);
+ }
+ bus->mapped_pcmcia_seg = seg;
+-out_unlock:
+- spin_unlock_irqrestore(&bus->bar_lock, flags);
+- return err;
+
-+ /* Initialize the cmd completion timer */
-+ if (poll_mode_io)
-+ megasas_start_timer(instance, &instance->io_completion_timer,
-+ megasas_io_completion_timer,
-+ MEGASAS_COMPLETION_TIMER_INTERVAL);
+ return 0;
-+
-+fail_irq:
-+fail_init_mfi:
-+ if (instance->evt_detail)
-+ pci_free_consistent(pdev, sizeof(struct megasas_evt_detail),
-+ instance->evt_detail,
-+ instance->evt_detail_h);
-+
-+ if (instance->producer)
-+ pci_free_consistent(pdev, sizeof(u32), instance->producer,
-+ instance->producer_h);
-+ if (instance->consumer)
-+ pci_free_consistent(pdev, sizeof(u32), instance->consumer,
-+ instance->consumer_h);
-+ scsi_host_put(host);
-+
-+fail_set_dma_mask:
-+fail_ready_state:
-+
-+ pci_disable_device(pdev);
-+
+ error:
+ ssb_printk(KERN_ERR PFX "Failed to switch pcmcia segment\n");
+- err = -ENODEV;
+- goto out_unlock;
+ return -ENODEV;
-+}
-+
-+/**
- * megasas_detach_one - PCI hot"un"plug entry point
- * @pdev: PCI device structure
- */
-@@ -2536,9 +2784,12 @@ static void megasas_detach_one(struct pci_dev *pdev)
- instance = pci_get_drvdata(pdev);
- host = instance->host;
+ }
-+ if (poll_mode_io)
-+ del_timer_sync(&instance->io_completion_timer);
-+
- scsi_remove_host(instance->host);
- megasas_flush_cache(instance);
-- megasas_shutdown_controller(instance);
-+ megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN);
- tasklet_kill(&instance->isr_tasklet);
+ static int select_core_and_segment(struct ssb_device *dev,
+@@ -182,22 +175,33 @@ static int select_core_and_segment(struct ssb_device *dev,
+ static u16 ssb_pcmcia_read16(struct ssb_device *dev, u16 offset)
+ {
+ struct ssb_bus *bus = dev->bus;
++ unsigned long flags;
++ int err;
++ u16 value = 0xFFFF;
- /*
-@@ -2660,6 +2911,7 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
- void *sense = NULL;
- dma_addr_t sense_handle;
- u32 *sense_ptr;
-+ unsigned long *sense_buff;
+- if (unlikely(select_core_and_segment(dev, &offset)))
+- return 0xFFFF;
++ spin_lock_irqsave(&bus->bar_lock, flags);
++ err = select_core_and_segment(dev, &offset);
++ if (likely(!err))
++ value = readw(bus->mmio + offset);
++ spin_unlock_irqrestore(&bus->bar_lock, flags);
- memset(kbuff_arr, 0, sizeof(kbuff_arr));
+- return readw(bus->mmio + offset);
++ return value;
+ }
-@@ -2764,14 +3016,16 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
- */
- if (ioc->sense_len) {
- /*
-- * sense_ptr points to the location that has the user
-+ * sense_buff points to the location that has the user
- * sense buffer address
- */
-- sense_ptr = (u32 *) ((unsigned long)ioc->frame.raw +
-- ioc->sense_off);
-+ sense_buff = (unsigned long *) ((unsigned long)ioc->frame.raw +
-+ ioc->sense_off);
+ static u32 ssb_pcmcia_read32(struct ssb_device *dev, u16 offset)
+ {
+ struct ssb_bus *bus = dev->bus;
+- u32 lo, hi;
++ unsigned long flags;
++ int err;
++ u32 lo = 0xFFFFFFFF, hi = 0xFFFFFFFF;
-- if (copy_to_user((void __user *)((unsigned long)(*sense_ptr)),
-- sense, ioc->sense_len)) {
-+ if (copy_to_user((void __user *)(unsigned long)(*sense_buff),
-+ sense, ioc->sense_len)) {
-+ printk(KERN_ERR "megasas: Failed to copy out to user "
-+ "sense data\n");
- error = -EFAULT;
- goto out;
- }
-@@ -2874,10 +3128,10 @@ static int megasas_mgmt_ioctl_aen(struct file *file, unsigned long arg)
- if (!instance)
- return -ENODEV;
+- if (unlikely(select_core_and_segment(dev, &offset)))
+- return 0xFFFFFFFF;
+- lo = readw(bus->mmio + offset);
+- hi = readw(bus->mmio + offset + 2);
++ spin_lock_irqsave(&bus->bar_lock, flags);
++ err = select_core_and_segment(dev, &offset);
++ if (likely(!err)) {
++ lo = readw(bus->mmio + offset);
++ hi = readw(bus->mmio + offset + 2);
++ }
++ spin_unlock_irqrestore(&bus->bar_lock, flags);
-- down(&instance->aen_mutex);
-+ mutex_lock(&instance->aen_mutex);
- error = megasas_register_aen(instance, aen.seq_num,
- aen.class_locale_word);
-- up(&instance->aen_mutex);
-+ mutex_unlock(&instance->aen_mutex);
- return error;
+ return (lo | (hi << 16));
}
+@@ -205,22 +209,31 @@ static u32 ssb_pcmcia_read32(struct ssb_device *dev, u16 offset)
+ static void ssb_pcmcia_write16(struct ssb_device *dev, u16 offset, u16 value)
+ {
+ struct ssb_bus *bus = dev->bus;
++ unsigned long flags;
++ int err;
-@@ -2977,6 +3231,8 @@ static struct pci_driver megasas_pci_driver = {
- .id_table = megasas_pci_table,
- .probe = megasas_probe_one,
- .remove = __devexit_p(megasas_detach_one),
-+ .suspend = megasas_suspend,
-+ .resume = megasas_resume,
- .shutdown = megasas_shutdown,
- };
+- if (unlikely(select_core_and_segment(dev, &offset)))
+- return;
+- writew(value, bus->mmio + offset);
++ spin_lock_irqsave(&bus->bar_lock, flags);
++ err = select_core_and_segment(dev, &offset);
++ if (likely(!err))
++ writew(value, bus->mmio + offset);
++ mmiowb();
++ spin_unlock_irqrestore(&bus->bar_lock, flags);
+ }
-@@ -3004,7 +3260,7 @@ static DRIVER_ATTR(release_date, S_IRUGO, megasas_sysfs_show_release_date,
- static ssize_t
- megasas_sysfs_show_dbg_lvl(struct device_driver *dd, char *buf)
+ static void ssb_pcmcia_write32(struct ssb_device *dev, u16 offset, u32 value)
{
-- return sprintf(buf,"%u",megasas_dbg_lvl);
-+ return sprintf(buf, "%u\n", megasas_dbg_lvl);
+ struct ssb_bus *bus = dev->bus;
++ unsigned long flags;
++ int err;
+
+- if (unlikely(select_core_and_segment(dev, &offset)))
+- return;
+- writeb((value & 0xFF000000) >> 24, bus->mmio + offset + 3);
+- writeb((value & 0x00FF0000) >> 16, bus->mmio + offset + 2);
+- writeb((value & 0x0000FF00) >> 8, bus->mmio + offset + 1);
+- writeb((value & 0x000000FF) >> 0, bus->mmio + offset + 0);
++ spin_lock_irqsave(&bus->bar_lock, flags);
++ err = select_core_and_segment(dev, &offset);
++ if (likely(!err)) {
++ writew((value & 0x0000FFFF), bus->mmio + offset);
++ writew(((value & 0xFFFF0000) >> 16), bus->mmio + offset + 2);
++ }
++ mmiowb();
++ spin_unlock_irqrestore(&bus->bar_lock, flags);
}
- static ssize_t
-@@ -3019,7 +3275,65 @@ megasas_sysfs_set_dbg_lvl(struct device_driver *dd, const char *buf, size_t coun
+ /* Not "static", as it's used in main.c */
+@@ -231,10 +244,12 @@ const struct ssb_bus_ops ssb_pcmcia_ops = {
+ .write32 = ssb_pcmcia_write32,
+ };
+
++#include <linux/etherdevice.h>
+ int ssb_pcmcia_get_invariants(struct ssb_bus *bus,
+ struct ssb_init_invariants *iv)
+ {
+ //TODO
++ random_ether_addr(iv->sprom.il0mac);
+ return 0;
}
- static DRIVER_ATTR(dbg_lvl, S_IRUGO|S_IWUGO, megasas_sysfs_show_dbg_lvl,
-- megasas_sysfs_set_dbg_lvl);
-+ megasas_sysfs_set_dbg_lvl);
-+
-+static ssize_t
-+megasas_sysfs_show_poll_mode_io(struct device_driver *dd, char *buf)
-+{
-+ return sprintf(buf, "%u\n", poll_mode_io);
-+}
-+
-+static ssize_t
-+megasas_sysfs_set_poll_mode_io(struct device_driver *dd,
-+ const char *buf, size_t count)
-+{
-+ int retval = count;
-+ int tmp = poll_mode_io;
-+ int i;
-+ struct megasas_instance *instance;
-+
-+ if (sscanf(buf, "%u", &poll_mode_io) < 1) {
-+ printk(KERN_ERR "megasas: could not set poll_mode_io\n");
-+ retval = -EINVAL;
-+ }
-+
-+ /*
-+ * Check if poll_mode_io is already set or is same as previous value
-+ */
-+ if ((tmp && poll_mode_io) || (tmp == poll_mode_io))
-+ goto out;
-+
-+ if (poll_mode_io) {
-+ /*
-+ * Start timers for all adapters
-+ */
-+ for (i = 0; i < megasas_mgmt_info.max_index; i++) {
-+ instance = megasas_mgmt_info.instance[i];
-+ if (instance) {
-+ megasas_start_timer(instance,
-+ &instance->io_completion_timer,
-+ megasas_io_completion_timer,
-+ MEGASAS_COMPLETION_TIMER_INTERVAL);
-+ }
-+ }
-+ } else {
-+ /*
-+ * Delete timers for all adapters
-+ */
-+ for (i = 0; i < megasas_mgmt_info.max_index; i++) {
-+ instance = megasas_mgmt_info.instance[i];
-+ if (instance)
-+ del_timer_sync(&instance->io_completion_timer);
-+ }
-+ }
-+
-+out:
-+ return retval;
-+}
-+
-+static DRIVER_ATTR(poll_mode_io, S_IRUGO|S_IWUGO,
-+ megasas_sysfs_show_poll_mode_io,
-+ megasas_sysfs_set_poll_mode_io);
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 865f32b..cc246fa 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -34,12 +34,12 @@ struct uio_device {
+ wait_queue_head_t wait;
+ int vma_count;
+ struct uio_info *info;
+- struct kset map_attr_kset;
++ struct kobject *map_dir;
+ };
- /**
- * megasas_init - Driver load entry point
-@@ -3070,8 +3384,16 @@ static int __init megasas_init(void)
- &driver_attr_dbg_lvl);
- if (rval)
- goto err_dcf_dbg_lvl;
-+ rval = driver_create_file(&megasas_pci_driver.driver,
-+ &driver_attr_poll_mode_io);
-+ if (rval)
-+ goto err_dcf_poll_mode_io;
+ static int uio_major;
+ static DEFINE_IDR(uio_idr);
+-static struct file_operations uio_fops;
++static const struct file_operations uio_fops;
- return rval;
-+
-+err_dcf_poll_mode_io:
-+ driver_remove_file(&megasas_pci_driver.driver,
-+ &driver_attr_dbg_lvl);
- err_dcf_dbg_lvl:
- driver_remove_file(&megasas_pci_driver.driver,
- &driver_attr_release_date);
-@@ -3090,6 +3412,8 @@ err_pcidrv:
- static void __exit megasas_exit(void)
- {
- driver_remove_file(&megasas_pci_driver.driver,
-+ &driver_attr_poll_mode_io);
-+ driver_remove_file(&megasas_pci_driver.driver,
- &driver_attr_dbg_lvl);
- driver_remove_file(&megasas_pci_driver.driver,
- &driver_attr_release_date);
-diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
-index 4dffc91..6466bdf 100644
---- a/drivers/scsi/megaraid/megaraid_sas.h
-+++ b/drivers/scsi/megaraid/megaraid_sas.h
-@@ -2,7 +2,7 @@
- *
- * Linux MegaRAID driver for SAS based RAID controllers
- *
-- * Copyright (c) 2003-2005 LSI Logic Corporation.
-+ * Copyright (c) 2003-2005 LSI Corporation.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
-@@ -18,9 +18,9 @@
- /*
- * MegaRAID SAS Driver meta data
+ /* UIO class infrastructure */
+ static struct uio_class {
+@@ -51,47 +51,48 @@ static struct uio_class {
+ * attributes
*/
--#define MEGASAS_VERSION "00.00.03.10-rc5"
--#define MEGASAS_RELDATE "May 17, 2007"
--#define MEGASAS_EXT_VERSION "Thu May 17 10:09:32 PDT 2007"
-+#define MEGASAS_VERSION "00.00.03.16-rc1"
-+#define MEGASAS_RELDATE "Nov. 07, 2007"
-+#define MEGASAS_EXT_VERSION "Thu. Nov. 07 10:09:32 PDT 2007"
- /*
- * Device IDs
-@@ -117,6 +117,7 @@
- #define MR_FLUSH_DISK_CACHE 0x02
+-static struct attribute attr_addr = {
+- .name = "addr",
+- .mode = S_IRUGO,
++struct uio_map {
++ struct kobject kobj;
++ struct uio_mem *mem;
+ };
++#define to_map(map) container_of(map, struct uio_map, kobj)
- #define MR_DCMD_CTRL_SHUTDOWN 0x01050000
-+#define MR_DCMD_HIBERNATE_SHUTDOWN 0x01060000
- #define MR_ENABLE_DRIVE_SPINDOWN 0x01
+-static struct attribute attr_size = {
+- .name = "size",
+- .mode = S_IRUGO,
+-};
- #define MR_DCMD_CTRL_EVENT_GET_INFO 0x01040100
-@@ -570,7 +571,8 @@ struct megasas_ctrl_info {
- #define IS_DMA64 (sizeof(dma_addr_t) == 8)
+-static struct attribute* map_attrs[] = {
+- &attr_addr, &attr_size, NULL
+-};
+-
+-static ssize_t map_attr_show(struct kobject *kobj, struct attribute *attr,
++static ssize_t map_attr_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+ {
+- struct uio_mem *mem = container_of(kobj, struct uio_mem, kobj);
++ struct uio_map *map = to_map(kobj);
++ struct uio_mem *mem = map->mem;
- #define MFI_OB_INTR_STATUS_MASK 0x00000002
--#define MFI_POLL_TIMEOUT_SECS 10
-+#define MFI_POLL_TIMEOUT_SECS 60
-+#define MEGASAS_COMPLETION_TIMER_INTERVAL (HZ/10)
+- if (strncmp(attr->name,"addr",4) == 0)
++ if (strncmp(attr->attr.name, "addr", 4) == 0)
+ return sprintf(buf, "0x%lx\n", mem->addr);
- #define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000
+- if (strncmp(attr->name,"size",4) == 0)
++ if (strncmp(attr->attr.name, "size", 4) == 0)
+ return sprintf(buf, "0x%lx\n", mem->size);
-@@ -1083,13 +1085,15 @@ struct megasas_instance {
- struct megasas_cmd **cmd_list;
- struct list_head cmd_pool;
- spinlock_t cmd_pool_lock;
-+ /* used to synch producer, consumer ptrs in dpc */
-+ spinlock_t completion_lock;
- struct dma_pool *frame_dma_pool;
- struct dma_pool *sense_dma_pool;
+ return -ENODEV;
+ }
- struct megasas_evt_detail *evt_detail;
- dma_addr_t evt_detail_h;
- struct megasas_cmd *aen_cmd;
-- struct semaphore aen_mutex;
-+ struct mutex aen_mutex;
- struct semaphore ioctl_sem;
+-static void map_attr_release(struct kobject *kobj)
+-{
+- /* TODO ??? */
+-}
++static struct kobj_attribute attr_attribute =
++ __ATTR(addr, S_IRUGO, map_attr_show, NULL);
++static struct kobj_attribute size_attribute =
++ __ATTR(size, S_IRUGO, map_attr_show, NULL);
- struct Scsi_Host *host;
-@@ -1108,6 +1112,8 @@ struct megasas_instance {
+-static struct sysfs_ops map_attr_ops = {
+- .show = map_attr_show,
++static struct attribute *attrs[] = {
++ &attr_attribute.attr,
++ &size_attribute.attr,
++ NULL, /* need to NULL terminate the list of attributes */
+ };
- u8 flag;
- unsigned long last_time;
++static void map_release(struct kobject *kobj)
++{
++ struct uio_map *map = to_map(kobj);
++ kfree(map);
++}
+
-+ struct timer_list io_completion_timer;
+ static struct kobj_type map_attr_type = {
+- .release = map_attr_release,
+- .sysfs_ops = &map_attr_ops,
+- .default_attrs = map_attrs,
++ .release = map_release,
++ .default_attrs = attrs,
};
- #define MEGASAS_IS_LOGICAL(scp) \
-diff --git a/drivers/scsi/ncr53c8xx.c b/drivers/scsi/ncr53c8xx.c
-index 016c462..c02771a 100644
---- a/drivers/scsi/ncr53c8xx.c
-+++ b/drivers/scsi/ncr53c8xx.c
-@@ -4963,7 +4963,8 @@ void ncr_complete (struct ncb *np, struct ccb *cp)
- ** Copy back sense data to caller's buffer.
- */
- memcpy(cmd->sense_buffer, cp->sense_buf,
-- min(sizeof(cmd->sense_buffer), sizeof(cp->sense_buf)));
-+ min_t(size_t, SCSI_SENSE_BUFFERSIZE,
-+ sizeof(cp->sense_buf)));
+ static ssize_t show_name(struct device *dev,
+@@ -148,6 +149,7 @@ static int uio_dev_add_attributes(struct uio_device *idev)
+ int mi;
+ int map_found = 0;
+ struct uio_mem *mem;
++ struct uio_map *map;
- if (DEBUG_FLAGS & (DEBUG_RESULT|DEBUG_TINY)) {
- u_char * p = (u_char*) & cmd->sense_buffer;
-diff --git a/drivers/scsi/pcmcia/Kconfig b/drivers/scsi/pcmcia/Kconfig
-index fa481b5..53857c6 100644
---- a/drivers/scsi/pcmcia/Kconfig
-+++ b/drivers/scsi/pcmcia/Kconfig
-@@ -6,7 +6,8 @@ menuconfig SCSI_LOWLEVEL_PCMCIA
- bool "PCMCIA SCSI adapter support"
- depends on SCSI!=n && PCMCIA!=n
+ ret = sysfs_create_group(&idev->dev->kobj, &uio_attr_grp);
+ if (ret)
+@@ -159,31 +161,34 @@ static int uio_dev_add_attributes(struct uio_device *idev)
+ break;
+ if (!map_found) {
+ map_found = 1;
+- kobject_set_name(&idev->map_attr_kset.kobj,"maps");
+- idev->map_attr_kset.ktype = &map_attr_type;
+- idev->map_attr_kset.kobj.parent = &idev->dev->kobj;
+- ret = kset_register(&idev->map_attr_kset);
+- if (ret)
+- goto err_remove_group;
++ idev->map_dir = kobject_create_and_add("maps",
++ &idev->dev->kobj);
++ if (!idev->map_dir)
++ goto err;
+ }
+- kobject_init(&mem->kobj);
+- kobject_set_name(&mem->kobj,"map%d",mi);
+- mem->kobj.parent = &idev->map_attr_kset.kobj;
+- mem->kobj.kset = &idev->map_attr_kset;
+- ret = kobject_add(&mem->kobj);
++ map = kzalloc(sizeof(*map), GFP_KERNEL);
++ if (!map)
++ goto err;
++ kobject_init(&map->kobj, &map_attr_type);
++ map->mem = mem;
++ mem->map = map;
++ ret = kobject_add(&map->kobj, idev->map_dir, "map%d", mi);
++ if (ret)
++ goto err;
++ ret = kobject_uevent(&map->kobj, KOBJ_ADD);
+ if (ret)
+- goto err_remove_maps;
++ goto err;
+ }
--if SCSI_LOWLEVEL_PCMCIA && SCSI && PCMCIA
-+# drivers have problems when build in, so require modules
-+if SCSI_LOWLEVEL_PCMCIA && SCSI && PCMCIA && m
+ return 0;
- config PCMCIA_AHA152X
- tristate "Adaptec AHA152X PCMCIA support"
-diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c
-index a45d89b..5082ca3 100644
---- a/drivers/scsi/pcmcia/nsp_cs.c
-+++ b/drivers/scsi/pcmcia/nsp_cs.c
-@@ -135,6 +135,11 @@ static nsp_hw_data nsp_data_base; /* attach <-> detect glue */
+-err_remove_maps:
++err:
+ for (mi--; mi>=0; mi--) {
+ mem = &idev->info->mem[mi];
+- kobject_unregister(&mem->kobj);
++ map = mem->map;
++ kobject_put(&map->kobj);
+ }
+- kset_unregister(&idev->map_attr_kset); /* Needed ? */
+-err_remove_group:
++ kobject_put(idev->map_dir);
+ sysfs_remove_group(&idev->dev->kobj, &uio_attr_grp);
+ err_group:
+ dev_err(idev->dev, "error creating sysfs files (%d)\n", ret);
+@@ -198,9 +203,9 @@ static void uio_dev_del_attributes(struct uio_device *idev)
+ mem = &idev->info->mem[mi];
+ if (mem->size == 0)
+ break;
+- kobject_unregister(&mem->kobj);
++ kobject_put(&mem->map->kobj);
+ }
+- kset_unregister(&idev->map_attr_kset);
++ kobject_put(idev->map_dir);
+ sysfs_remove_group(&idev->dev->kobj, &uio_attr_grp);
+ }
- #define NSP_DEBUG_BUF_LEN 150
+@@ -503,7 +508,7 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
+ }
+ }
-+static inline void nsp_inc_resid(struct scsi_cmnd *SCpnt, int residInc)
-+{
-+ scsi_set_resid(SCpnt, scsi_get_resid(SCpnt) + residInc);
-+}
-+
- static void nsp_cs_message(const char *func, int line, char *type, char *fmt, ...)
- {
- va_list args;
-@@ -192,8 +197,10 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
+-static struct file_operations uio_fops = {
++static const struct file_operations uio_fops = {
+ .owner = THIS_MODULE,
+ .open = uio_open,
+ .release = uio_release,
+diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig
+index 7580aa5..7a64990 100644
+--- a/drivers/usb/Kconfig
++++ b/drivers/usb/Kconfig
+@@ -33,6 +33,7 @@ config USB_ARCH_HAS_OHCI
+ default y if ARCH_LH7A404
+ default y if ARCH_S3C2410
+ default y if PXA27x
++ default y if PXA3xx
+ default y if ARCH_EP93XX
+ default y if ARCH_AT91
+ default y if ARCH_PNX4008
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index c51f8e9..7c3aaa9 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -91,8 +91,8 @@ static int usb_create_newid_file(struct usb_driver *usb_drv)
+ goto exit;
+
+ if (usb_drv->probe != NULL)
+- error = sysfs_create_file(&usb_drv->drvwrap.driver.kobj,
+- &driver_attr_new_id.attr);
++ error = driver_create_file(&usb_drv->drvwrap.driver,
++ &driver_attr_new_id);
+ exit:
+ return error;
+ }
+@@ -103,8 +103,8 @@ static void usb_remove_newid_file(struct usb_driver *usb_drv)
+ return;
+
+ if (usb_drv->probe != NULL)
+- sysfs_remove_file(&usb_drv->drvwrap.driver.kobj,
+- &driver_attr_new_id.attr);
++ driver_remove_file(&usb_drv->drvwrap.driver,
++ &driver_attr_new_id);
+ }
+
+ static void usb_free_dynids(struct usb_driver *usb_drv)
+diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
+index f81d08d..77a3759 100644
+--- a/drivers/usb/gadget/Kconfig
++++ b/drivers/usb/gadget/Kconfig
+@@ -308,7 +308,7 @@ config USB_S3C2410_DEBUG
+
+ config USB_GADGET_AT91
+ boolean "AT91 USB Device Port"
+- depends on ARCH_AT91 && !ARCH_AT91SAM9RL
++ depends on ARCH_AT91 && !ARCH_AT91SAM9RL && !ARCH_AT91CAP9
+ select USB_GADGET_SELECTED
+ help
+ Many Atmel AT91 processors (such as the AT91RM2000) have a
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index ecfe800..ddd4ee1 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -997,7 +997,7 @@ MODULE_LICENSE ("GPL");
+ #define PLATFORM_DRIVER ohci_hcd_lh7a404_driver
#endif
- nsp_hw_data *data = (nsp_hw_data *)SCpnt->device->host->hostdata;
-- nsp_dbg(NSP_DEBUG_QUEUECOMMAND, "SCpnt=0x%p target=%d lun=%d buff=0x%p bufflen=%d use_sg=%d",
-- SCpnt, target, SCpnt->device->lun, SCpnt->request_buffer, SCpnt->request_bufflen, SCpnt->use_sg);
-+ nsp_dbg(NSP_DEBUG_QUEUECOMMAND,
-+ "SCpnt=0x%p target=%d lun=%d sglist=0x%p bufflen=%d sg_count=%d",
-+ SCpnt, target, SCpnt->device->lun, scsi_sglist(SCpnt),
-+ scsi_bufflen(SCpnt), scsi_sg_count(SCpnt));
- //nsp_dbg(NSP_DEBUG_QUEUECOMMAND, "before CurrentSC=0x%p", data->CurrentSC);
+-#ifdef CONFIG_PXA27x
++#if defined(CONFIG_PXA27x) || defined(CONFIG_PXA3xx)
+ #include "ohci-pxa27x.c"
+ #define PLATFORM_DRIVER ohci_hcd_pxa27x_driver
+ #endif
+diff --git a/drivers/usb/host/ohci-omap.c b/drivers/usb/host/ohci-omap.c
+index 5cfa3d1..74e1f4b 100644
+--- a/drivers/usb/host/ohci-omap.c
++++ b/drivers/usb/host/ohci-omap.c
+@@ -47,7 +47,7 @@
+ #endif
- SCpnt->scsi_done = done;
-@@ -225,7 +232,7 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
- SCpnt->SCp.have_data_in = IO_UNKNOWN;
- SCpnt->SCp.sent_command = 0;
- SCpnt->SCp.phase = PH_UNDETERMINED;
-- SCpnt->resid = SCpnt->request_bufflen;
-+ scsi_set_resid(SCpnt, scsi_bufflen(SCpnt));
+ #ifdef CONFIG_TPS65010
+-#include <asm/arch/tps65010.h>
++#include <linux/i2c/tps65010.h>
+ #else
- /* setup scratch area
- SCp.ptr : buffer pointer
-@@ -233,14 +240,14 @@ static int nsp_queuecommand(struct scsi_cmnd *SCpnt,
- SCp.buffer : next buffer
- SCp.buffers_residual : left buffers in list
- SCp.phase : current state of the command */
-- if (SCpnt->use_sg) {
-- SCpnt->SCp.buffer = (struct scatterlist *) SCpnt->request_buffer;
-+ if (scsi_bufflen(SCpnt)) {
-+ SCpnt->SCp.buffer = scsi_sglist(SCpnt);
- SCpnt->SCp.ptr = BUFFER_ADDR;
- SCpnt->SCp.this_residual = SCpnt->SCp.buffer->length;
-- SCpnt->SCp.buffers_residual = SCpnt->use_sg - 1;
-+ SCpnt->SCp.buffers_residual = scsi_sg_count(SCpnt) - 1;
- } else {
-- SCpnt->SCp.ptr = (char *) SCpnt->request_buffer;
-- SCpnt->SCp.this_residual = SCpnt->request_bufflen;
-+ SCpnt->SCp.ptr = NULL;
-+ SCpnt->SCp.this_residual = 0;
- SCpnt->SCp.buffer = NULL;
- SCpnt->SCp.buffers_residual = 0;
- }
-@@ -721,7 +728,9 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
- ocount = data->FifoCount;
+ #define LOW 0
+diff --git a/drivers/usb/host/ohci-pnx4008.c b/drivers/usb/host/ohci-pnx4008.c
+index ca2a6ab..6c52c66 100644
+--- a/drivers/usb/host/ohci-pnx4008.c
++++ b/drivers/usb/host/ohci-pnx4008.c
+@@ -112,9 +112,9 @@ static int isp1301_detach(struct i2c_client *client);
+ static int isp1301_command(struct i2c_client *client, unsigned int cmd,
+ void *arg);
- nsp_dbg(NSP_DEBUG_DATA_IO, "in SCpnt=0x%p resid=%d ocount=%d ptr=0x%p this_residual=%d buffers=0x%p nbuf=%d",
-- SCpnt, SCpnt->resid, ocount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual, SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual);
-+ SCpnt, scsi_get_resid(SCpnt), ocount, SCpnt->SCp.ptr,
-+ SCpnt->SCp.this_residual, SCpnt->SCp.buffer,
-+ SCpnt->SCp.buffers_residual);
+-static unsigned short normal_i2c[] =
++static const unsigned short normal_i2c[] =
+ { ISP1301_I2C_ADDR, ISP1301_I2C_ADDR + 1, I2C_CLIENT_END };
+-static unsigned short dummy_i2c_addrlist[] = { I2C_CLIENT_END };
++static const unsigned short dummy_i2c_addrlist[] = { I2C_CLIENT_END };
- time_out = 1000;
+ static struct i2c_client_address_data addr_data = {
+ .normal_i2c = normal_i2c,
+@@ -123,7 +123,6 @@ static struct i2c_client_address_data addr_data = {
+ };
-@@ -771,7 +780,7 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
- return;
- }
+ struct i2c_driver isp1301_driver = {
+- .id = I2C_DRIVERID_I2CDEV, /* Fake Id */
+ .class = I2C_CLASS_HWMON,
+ .attach_adapter = isp1301_probe,
+ .detach_client = isp1301_detach,
+diff --git a/drivers/usb/host/ohci-pxa27x.c b/drivers/usb/host/ohci-pxa27x.c
+index 23d2fe5..ff9a798 100644
+--- a/drivers/usb/host/ohci-pxa27x.c
++++ b/drivers/usb/host/ohci-pxa27x.c
+@@ -22,6 +22,7 @@
+ #include <linux/device.h>
+ #include <linux/signal.h>
+ #include <linux/platform_device.h>
++#include <linux/clk.h>
-- SCpnt->resid -= res;
-+ nsp_inc_resid(SCpnt, -res);
- SCpnt->SCp.ptr += res;
- SCpnt->SCp.this_residual -= res;
- ocount += res;
-@@ -795,10 +804,12 @@ static void nsp_pio_read(struct scsi_cmnd *SCpnt)
+ #include <asm/mach-types.h>
+ #include <asm/hardware.h>
+@@ -32,6 +33,8 @@
- if (time_out == 0) {
- nsp_msg(KERN_DEBUG, "pio read timeout resid=%d this_residual=%d buffers_residual=%d",
-- SCpnt->resid, SCpnt->SCp.this_residual, SCpnt->SCp.buffers_residual);
-+ scsi_get_resid(SCpnt), SCpnt->SCp.this_residual,
-+ SCpnt->SCp.buffers_residual);
- }
- nsp_dbg(NSP_DEBUG_DATA_IO, "read ocount=0x%x", ocount);
-- nsp_dbg(NSP_DEBUG_DATA_IO, "r cmd=%d resid=0x%x\n", data->CmdId, SCpnt->resid);
-+ nsp_dbg(NSP_DEBUG_DATA_IO, "r cmd=%d resid=0x%x\n", data->CmdId,
-+ scsi_get_resid(SCpnt));
- }
+ #define UHCRHPS(x) __REG2( 0x4C000050, (x)<<2 )
++static struct clk *usb_clk;
++
/*
-@@ -816,7 +827,9 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
- ocount = data->FifoCount;
+ PMM_NPS_MODE -- PMM Non-power switching mode
+ Ports are powered continuously.
+@@ -80,7 +83,7 @@ static int pxa27x_start_hc(struct device *dev)
- nsp_dbg(NSP_DEBUG_DATA_IO, "in fifocount=%d ptr=0x%p this_residual=%d buffers=0x%p nbuf=%d resid=0x%x",
-- data->FifoCount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual, SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual, SCpnt->resid);
-+ data->FifoCount, SCpnt->SCp.ptr, SCpnt->SCp.this_residual,
-+ SCpnt->SCp.buffer, SCpnt->SCp.buffers_residual,
-+ scsi_get_resid(SCpnt));
+ inf = dev->platform_data;
- time_out = 1000;
+- pxa_set_cken(CKEN_USBHOST, 1);
++ clk_enable(usb_clk);
-@@ -830,7 +843,7 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
+ UHCHR |= UHCHR_FHR;
+ udelay(11);
+@@ -123,7 +126,7 @@ static void pxa27x_stop_hc(struct device *dev)
+ UHCCOMS |= 1;
+ udelay(10);
- nsp_dbg(NSP_DEBUG_DATA_IO, "phase changed stat=0x%x, res=%d\n", stat, res);
- /* Put back pointer */
-- SCpnt->resid += res;
-+ nsp_inc_resid(SCpnt, res);
- SCpnt->SCp.ptr -= res;
- SCpnt->SCp.this_residual += res;
- ocount -= res;
-@@ -866,7 +879,7 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
- break;
- }
+- pxa_set_cken(CKEN_USBHOST, 0);
++ clk_disable(usb_clk);
+ }
-- SCpnt->resid -= res;
-+ nsp_inc_resid(SCpnt, -res);
- SCpnt->SCp.ptr += res;
- SCpnt->SCp.this_residual -= res;
- ocount += res;
-@@ -886,10 +899,12 @@ static void nsp_pio_write(struct scsi_cmnd *SCpnt)
- data->FifoCount = ocount;
- if (time_out == 0) {
-- nsp_msg(KERN_DEBUG, "pio write timeout resid=0x%x", SCpnt->resid);
-+ nsp_msg(KERN_DEBUG, "pio write timeout resid=0x%x",
-+ scsi_get_resid(SCpnt));
+@@ -158,6 +161,10 @@ int usb_hcd_pxa27x_probe (const struct hc_driver *driver, struct platform_device
+ return -ENOMEM;
}
- nsp_dbg(NSP_DEBUG_DATA_IO, "write ocount=0x%x", ocount);
-- nsp_dbg(NSP_DEBUG_DATA_IO, "w cmd=%d resid=0x%x\n", data->CmdId, SCpnt->resid);
-+ nsp_dbg(NSP_DEBUG_DATA_IO, "w cmd=%d resid=0x%x\n", data->CmdId,
-+ scsi_get_resid(SCpnt));
+
++ usb_clk = clk_get(&pdev->dev, "USBCLK");
++ if (IS_ERR(usb_clk))
++ return PTR_ERR(usb_clk);
++
+ hcd = usb_create_hcd (driver, &pdev->dev, "pxa27x");
+ if (!hcd)
+ return -ENOMEM;
+@@ -201,6 +208,7 @@ int usb_hcd_pxa27x_probe (const struct hc_driver *driver, struct platform_device
+ release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
+ err1:
+ usb_put_hcd(hcd);
++ clk_put(usb_clk);
+ return retval;
}
- #undef RFIFO_CRIT
- #undef WFIFO_CRIT
-@@ -911,9 +926,8 @@ static int nsp_nexus(struct scsi_cmnd *SCpnt)
- nsp_index_write(base, SYNCREG, sync->SyncRegister);
- nsp_index_write(base, ACKWIDTH, sync->AckWidth);
-- if (SCpnt->use_sg == 0 ||
-- SCpnt->resid % 4 != 0 ||
-- SCpnt->resid <= PAGE_SIZE ) {
-+ if (scsi_get_resid(SCpnt) % 4 != 0 ||
-+ scsi_get_resid(SCpnt) <= PAGE_SIZE ) {
- data->TransferMode = MODE_IO8;
- } else if (nsp_burst_mode == BURST_MEM32) {
- data->TransferMode = MODE_MEM32;
-diff --git a/drivers/scsi/ppa.c b/drivers/scsi/ppa.c
-index 67ee51a..f655ae3 100644
---- a/drivers/scsi/ppa.c
-+++ b/drivers/scsi/ppa.c
-@@ -750,18 +750,16 @@ static int ppa_engine(ppa_struct *dev, struct scsi_cmnd *cmd)
- cmd->SCp.phase++;
+@@ -225,6 +233,7 @@ void usb_hcd_pxa27x_remove (struct usb_hcd *hcd, struct platform_device *pdev)
+ iounmap(hcd->regs);
+ release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
+ usb_put_hcd(hcd);
++ clk_put(usb_clk);
+ }
- case 4: /* Phase 4 - Setup scatter/gather buffers */
-- if (cmd->use_sg) {
-- /* if many buffers are available, start filling the first */
-- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- } else {
-- /* else fill the only available buffer */
- cmd->SCp.buffer = NULL;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-- cmd->SCp.ptr = cmd->request_buffer;
-+ cmd->SCp.this_residual = 0;
-+ cmd->SCp.ptr = NULL;
- }
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.phase++;
+ /*-------------------------------------------------------------------------*/
+diff --git a/drivers/usb/storage/freecom.c b/drivers/usb/storage/freecom.c
+index 88aa59a..f5a4e8d 100644
+--- a/drivers/usb/storage/freecom.c
++++ b/drivers/usb/storage/freecom.c
+@@ -132,8 +132,7 @@ freecom_readdata (struct scsi_cmnd *srb, struct us_data *us,
+
+ /* Now transfer all of our blocks. */
+ US_DEBUGP("Start of read\n");
+- result = usb_stor_bulk_transfer_sg(us, ipipe, srb->request_buffer,
+- count, srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, ipipe, srb);
+ US_DEBUGP("freecom_readdata done!\n");
+
+ if (result > USB_STOR_XFER_SHORT)
+@@ -166,8 +165,7 @@ freecom_writedata (struct scsi_cmnd *srb, struct us_data *us,
+
+ /* Now transfer all of our blocks. */
+ US_DEBUGP("Start of write\n");
+- result = usb_stor_bulk_transfer_sg(us, opipe, srb->request_buffer,
+- count, srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, opipe, srb);
+
+ US_DEBUGP("freecom_writedata done!\n");
+ if (result > USB_STOR_XFER_SHORT)
+@@ -281,7 +279,7 @@ int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
+ * and such will hang. */
+ US_DEBUGP("Device indicates that it has %d bytes available\n",
+ le16_to_cpu (fst->Count));
+- US_DEBUGP("SCSI requested %d\n", srb->request_bufflen);
++ US_DEBUGP("SCSI requested %d\n", scsi_bufflen(srb));
+
+ /* Find the length we desire to read. */
+ switch (srb->cmnd[0]) {
+@@ -292,12 +290,12 @@ int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
+ length = le16_to_cpu(fst->Count);
+ break;
+ default:
+- length = srb->request_bufflen;
++ length = scsi_bufflen(srb);
+ }
+
+ /* verify that this amount is legal */
+- if (length > srb->request_bufflen) {
+- length = srb->request_bufflen;
++ if (length > scsi_bufflen(srb)) {
++ length = scsi_bufflen(srb);
+ US_DEBUGP("Truncating request to match buffer length: %d\n", length);
+ }
+
+diff --git a/drivers/usb/storage/isd200.c b/drivers/usb/storage/isd200.c
+index 49ba6c0..178e8c2 100644
+--- a/drivers/usb/storage/isd200.c
++++ b/drivers/usb/storage/isd200.c
+@@ -49,6 +49,7 @@
+ #include <linux/slab.h>
+ #include <linux/hdreg.h>
+ #include <linux/ide.h>
++#include <linux/scatterlist.h>
+
+ #include <scsi/scsi.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -287,6 +288,7 @@ struct isd200_info {
+ /* maximum number of LUNs supported */
+ unsigned char MaxLUNs;
+ struct scsi_cmnd srb;
++ struct scatterlist sg;
+ };
+
+
+@@ -398,6 +400,31 @@ static void isd200_build_sense(struct us_data *us, struct scsi_cmnd *srb)
+ * Transport routines
+ ***********************************************************************/
+
++/**************************************************************************
++ * isd200_set_srb(), isd200_srb_set_bufflen()
++ *
++ * Two helpers to facilitate in initialization of scsi_cmnd structure
++ * Will need to change when struct scsi_cmnd changes
++ */
++static void isd200_set_srb(struct isd200_info *info,
++ enum dma_data_direction dir, void* buff, unsigned bufflen)
++{
++ struct scsi_cmnd *srb = &info->srb;
++
++ if (buff)
++ sg_init_one(&info->sg, buff, bufflen);
++
++ srb->sc_data_direction = dir;
++ srb->request_buffer = buff ? &info->sg : NULL;
++ srb->request_bufflen = bufflen;
++ srb->use_sg = buff ? 1 : 0;
++}
++
++static void isd200_srb_set_bufflen(struct scsi_cmnd *srb, unsigned bufflen)
++{
++ srb->request_bufflen = bufflen;
++}
++
+
+ /**************************************************************************
+ * isd200_action
+@@ -432,9 +459,7 @@ static int isd200_action( struct us_data *us, int action,
+ ata.generic.RegisterSelect =
+ REG_CYLINDER_LOW | REG_CYLINDER_HIGH |
+ REG_STATUS | REG_ERROR;
+- srb->sc_data_direction = DMA_FROM_DEVICE;
+- srb->request_buffer = pointer;
+- srb->request_bufflen = value;
++ isd200_set_srb(info, DMA_FROM_DEVICE, pointer, value);
+ break;
+
+ case ACTION_ENUM:
+@@ -444,7 +469,7 @@ static int isd200_action( struct us_data *us, int action,
+ ACTION_SELECT_5;
+ ata.generic.RegisterSelect = REG_DEVICE_HEAD;
+ ata.write.DeviceHeadByte = value;
+- srb->sc_data_direction = DMA_NONE;
++ isd200_set_srb(info, DMA_NONE, NULL, 0);
+ break;
+
+ case ACTION_RESET:
+@@ -453,7 +478,7 @@ static int isd200_action( struct us_data *us, int action,
+ ACTION_SELECT_3|ACTION_SELECT_4;
+ ata.generic.RegisterSelect = REG_DEVICE_CONTROL;
+ ata.write.DeviceControlByte = ATA_DC_RESET_CONTROLLER;
+- srb->sc_data_direction = DMA_NONE;
++ isd200_set_srb(info, DMA_NONE, NULL, 0);
+ break;
+
+ case ACTION_REENABLE:
+@@ -462,7 +487,7 @@ static int isd200_action( struct us_data *us, int action,
+ ACTION_SELECT_3|ACTION_SELECT_4;
+ ata.generic.RegisterSelect = REG_DEVICE_CONTROL;
+ ata.write.DeviceControlByte = ATA_DC_REENABLE_CONTROLLER;
+- srb->sc_data_direction = DMA_NONE;
++ isd200_set_srb(info, DMA_NONE, NULL, 0);
+ break;
+
+ case ACTION_SOFT_RESET:
+@@ -471,21 +496,20 @@ static int isd200_action( struct us_data *us, int action,
+ ata.generic.RegisterSelect = REG_DEVICE_HEAD | REG_COMMAND;
+ ata.write.DeviceHeadByte = info->DeviceHead;
+ ata.write.CommandByte = WIN_SRST;
+- srb->sc_data_direction = DMA_NONE;
++ isd200_set_srb(info, DMA_NONE, NULL, 0);
+ break;
+
+ case ACTION_IDENTIFY:
+ US_DEBUGP(" isd200_action(IDENTIFY)\n");
+ ata.generic.RegisterSelect = REG_COMMAND;
+ ata.write.CommandByte = WIN_IDENTIFY;
+- srb->sc_data_direction = DMA_FROM_DEVICE;
+- srb->request_buffer = (void *) info->id;
+- srb->request_bufflen = sizeof(struct hd_driveid);
++ isd200_set_srb(info, DMA_FROM_DEVICE, info->id,
++ sizeof(struct hd_driveid));
+ break;
+
+ default:
+ US_DEBUGP("Error: Undefined action %d\n",action);
+- break;
++ return ISD200_ERROR;
+ }
+
+ memcpy(srb->cmnd, &ata, sizeof(ata.generic));
+@@ -590,7 +614,7 @@ static void isd200_invoke_transport( struct us_data *us,
+ return;
+ }
+
+- if ((srb->resid > 0) &&
++ if ((scsi_get_resid(srb) > 0) &&
+ !((srb->cmnd[0] == REQUEST_SENSE) ||
+ (srb->cmnd[0] == INQUIRY) ||
+ (srb->cmnd[0] == MODE_SENSE) ||
+@@ -1217,7 +1241,6 @@ static int isd200_get_inquiry_data( struct us_data *us )
+ return(retStatus);
+ }
- case 5: /* Phase 5 - Data transfer stage */
-diff --git a/drivers/scsi/psi240i.c b/drivers/scsi/psi240i.c
-deleted file mode 100644
-index 899e89d..0000000
---- a/drivers/scsi/psi240i.c
-+++ /dev/null
-@@ -1,689 +0,0 @@
--/*+M*************************************************************************
-- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
-- *
-- * Copyright (c) 1997 Perceptive Solutions, Inc.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2, or (at your option)
-- * any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- * GNU General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; see the file COPYING. If not, write to
-- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
-- *
-- *
-- * File Name: psi240i.c
-- *
-- * Description: SCSI driver for the PSI240I EIDE interface card.
-- *
-- *-M*************************************************************************/
--
--#include <linux/module.h>
--
--#include <linux/blkdev.h>
--#include <linux/kernel.h>
--#include <linux/types.h>
--#include <linux/string.h>
--#include <linux/ioport.h>
--#include <linux/delay.h>
--#include <linux/interrupt.h>
--#include <linux/proc_fs.h>
--#include <linux/spinlock.h>
--#include <linux/stat.h>
--
--#include <asm/dma.h>
--#include <asm/system.h>
--#include <asm/io.h>
--#include "scsi.h"
--#include <scsi/scsi_host.h>
--
--#include "psi240i.h"
--#include "psi_chip.h"
--
--//#define DEBUG 1
--
--#ifdef DEBUG
--#define DEB(x) x
--#else
--#define DEB(x)
--#endif
--
--#define MAXBOARDS 6 /* Increase this and the sizes of the arrays below, if you need more. */
--
--#define PORT_DATA 0
--#define PORT_ERROR 1
--#define PORT_SECTOR_COUNT 2
--#define PORT_LBA_0 3
--#define PORT_LBA_8 4
--#define PORT_LBA_16 5
--#define PORT_LBA_24 6
--#define PORT_STAT_CMD 7
--#define PORT_SEL_FAIL 8
--#define PORT_IRQ_STATUS 9
--#define PORT_ADDRESS 10
--#define PORT_FAIL 11
--#define PORT_ALT_STAT 12
--
--typedef struct
-- {
-- UCHAR device; // device code
-- UCHAR byte6; // device select register image
-- UCHAR spigot; // spigot number
-- UCHAR expectingIRQ; // flag for expecting and interrupt
-- USHORT sectors; // number of sectors per track
-- USHORT heads; // number of heads
-- USHORT cylinders; // number of cylinders for this device
-- USHORT spareword; // placeholder
-- ULONG blocks; // number of blocks on device
-- } OUR_DEVICE, *POUR_DEVICE;
--
--typedef struct
-- {
-- USHORT ports[13];
-- OUR_DEVICE device[8];
-- struct scsi_cmnd *pSCmnd;
-- IDE_STRUCT ide;
-- ULONG startSector;
-- USHORT sectorCount;
-- struct scsi_cmnd *SCpnt;
-- VOID *buffer;
-- USHORT expectingIRQ;
-- } ADAPTER240I, *PADAPTER240I;
--
--#define HOSTDATA(host) ((PADAPTER240I)&host->hostdata)
--
--static struct Scsi_Host *PsiHost[6] = {NULL,}; /* One for each IRQ level (10-15) */
--static IDENTIFY_DATA identifyData;
--static SETUP ChipSetup;
--
--static USHORT portAddr[6] = {CHIP_ADRS_0, CHIP_ADRS_1, CHIP_ADRS_2, CHIP_ADRS_3, CHIP_ADRS_4, CHIP_ADRS_5};
--
--/****************************************************************
-- * Name: WriteData :LOCAL
-- *
-- * Description: Write data to device.
-- *
-- * Parameters: padapter - Pointer adapter data structure.
-- *
-- * Returns: TRUE if drive does not assert DRQ in time.
-- *
-- ****************************************************************/
--static int WriteData (PADAPTER240I padapter)
-- {
-- ULONG timer;
-- USHORT *pports = padapter->ports;
--
-- timer = jiffies + TIMEOUT_DRQ; // calculate the timeout value
-- do {
-- if ( inb_p (pports[PORT_STAT_CMD]) & IDE_STATUS_DRQ )
-- {
-- outsw (pports[PORT_DATA], padapter->buffer, (USHORT)padapter->ide.ide.ide[2] * 256);
-- return 0;
-- }
-- } while ( time_after(timer, jiffies) ); // test for timeout
--
-- padapter->ide.ide.ides.cmd = 0; // null out the command byte
-- return 1;
-- }
--/****************************************************************
-- * Name: IdeCmd :LOCAL
-- *
-- * Description: Process a queued command from the SCSI manager.
-- *
-- * Parameters: padapter - Pointer adapter data structure.
-- *
-- * Returns: Zero if no error or status register contents on error.
-- *
-- ****************************************************************/
--static UCHAR IdeCmd (PADAPTER240I padapter)
-- {
-- ULONG timer;
-- USHORT *pports = padapter->ports;
-- UCHAR status;
--
-- outb_p (padapter->ide.ide.ides.spigot, pports[PORT_SEL_FAIL]); // select the spigot
-- outb_p (padapter->ide.ide.ide[6], pports[PORT_LBA_24]); // select the drive
-- timer = jiffies + TIMEOUT_READY; // calculate the timeout value
-- do {
-- status = inb_p (padapter->ports[PORT_STAT_CMD]);
-- if ( status & IDE_STATUS_DRDY )
-- {
-- outb_p (padapter->ide.ide.ide[2], pports[PORT_SECTOR_COUNT]);
-- outb_p (padapter->ide.ide.ide[3], pports[PORT_LBA_0]);
-- outb_p (padapter->ide.ide.ide[4], pports[PORT_LBA_8]);
-- outb_p (padapter->ide.ide.ide[5], pports[PORT_LBA_16]);
-- padapter->expectingIRQ = 1;
-- outb_p (padapter->ide.ide.ide[7], pports[PORT_STAT_CMD]);
--
-- if ( padapter->ide.ide.ides.cmd == IDE_CMD_WRITE_MULTIPLE )
-- return (WriteData (padapter));
--
-- return 0;
-- }
-- } while ( time_after(timer, jiffies) ); // test for timeout
--
-- padapter->ide.ide.ides.cmd = 0; // null out the command byte
-- return status;
-- }
--/****************************************************************
-- * Name: SetupTransfer :LOCAL
-- *
-- * Description: Setup a data transfer command.
-- *
-- * Parameters: padapter - Pointer adapter data structure.
-- * drive - Drive/head register upper nibble only.
-- *
-- * Returns: TRUE if no data to transfer.
-- *
-- ****************************************************************/
--static int SetupTransfer (PADAPTER240I padapter, UCHAR drive)
-- {
-- if ( padapter->sectorCount )
-- {
-- *(ULONG *)padapter->ide.ide.ides.lba = padapter->startSector;
-- padapter->ide.ide.ide[6] |= drive;
-- padapter->ide.ide.ides.sectors = ( padapter->sectorCount > SECTORSXFER ) ? SECTORSXFER : padapter->sectorCount;
-- padapter->sectorCount -= padapter->ide.ide.ides.sectors; // bump the start and count for next xfer
-- padapter->startSector += padapter->ide.ide.ides.sectors;
-- return 0;
-- }
-- else
-- {
-- padapter->ide.ide.ides.cmd = 0; // null out the command byte
-- padapter->SCpnt = NULL;
-- return 1;
-- }
-- }
--/****************************************************************
-- * Name: DecodeError :LOCAL
-- *
-- * Description: Decode and process device errors.
-- *
-- * Parameters: pshost - Pointer to host data block.
-- * status - Status register code.
-- *
-- * Returns: The driver status code.
-- *
-- ****************************************************************/
--static ULONG DecodeError (struct Scsi_Host *pshost, UCHAR status)
-- {
-- PADAPTER240I padapter = HOSTDATA(pshost);
-- UCHAR error;
--
-- padapter->expectingIRQ = 0;
-- padapter->SCpnt = NULL;
-- if ( status & IDE_STATUS_WRITE_FAULT )
-- {
-- return DID_PARITY << 16;
-- }
-- if ( status & IDE_STATUS_BUSY )
-- return DID_BUS_BUSY << 16;
--
-- error = inb_p (padapter->ports[PORT_ERROR]);
-- DEB(printk ("\npsi240i error register: %x", error));
-- switch ( error )
-- {
-- case IDE_ERROR_AMNF:
-- case IDE_ERROR_TKONF:
-- case IDE_ERROR_ABRT:
-- case IDE_ERROR_IDFN:
-- case IDE_ERROR_UNC:
-- case IDE_ERROR_BBK:
-- default:
-- return DID_ERROR << 16;
-- }
-- return DID_ERROR << 16;
-- }
--/****************************************************************
-- * Name: Irq_Handler :LOCAL
-- *
-- * Description: Interrupt handler.
-- *
-- * Parameters: irq - Hardware IRQ number.
-- * dev_id -
-- *
-- * Returns: TRUE if drive is not ready in time.
-- *
-- ****************************************************************/
--static void Irq_Handler (int irq, void *dev_id)
-- {
-- struct Scsi_Host *shost; // Pointer to host data block
-- PADAPTER240I padapter; // Pointer to adapter control structure
-- USHORT *pports; // I/O port array
-- struct scsi_cmnd *SCpnt;
-- UCHAR status;
-- int z;
--
-- DEB(printk ("\npsi240i received interrupt\n"));
--
-- shost = PsiHost[irq - 10];
-- if ( !shost )
-- panic ("Splunge!");
--
-- padapter = HOSTDATA(shost);
-- pports = padapter->ports;
-- SCpnt = padapter->SCpnt;
--
-- if ( !padapter->expectingIRQ )
-- {
-- DEB(printk ("\npsi240i Unsolicited interrupt\n"));
-- return;
-- }
-- padapter->expectingIRQ = 0;
--
-- status = inb_p (padapter->ports[PORT_STAT_CMD]); // read the device status
-- if ( status & (IDE_STATUS_ERROR | IDE_STATUS_WRITE_FAULT) )
-- goto irqerror;
--
-- DEB(printk ("\npsi240i processing interrupt"));
-- switch ( padapter->ide.ide.ides.cmd ) // decide how to handle the interrupt
-- {
-- case IDE_CMD_READ_MULTIPLE:
-- if ( status & IDE_STATUS_DRQ )
-- {
-- insw (pports[PORT_DATA], padapter->buffer, (USHORT)padapter->ide.ide.ides.sectors * 256);
-- padapter->buffer += padapter->ide.ide.ides.sectors * 512;
-- if ( SetupTransfer (padapter, padapter->ide.ide.ide[6] & 0xF0) )
-- {
-- SCpnt->result = DID_OK << 16;
-- padapter->SCpnt = NULL;
-- SCpnt->scsi_done (SCpnt);
-- return;
-- }
-- if ( !(status = IdeCmd (padapter)) )
-- return;
-- }
-- break;
--
-- case IDE_CMD_WRITE_MULTIPLE:
-- padapter->buffer += padapter->ide.ide.ides.sectors * 512;
-- if ( SetupTransfer (padapter, padapter->ide.ide.ide[6] & 0xF0) )
-- {
-- SCpnt->result = DID_OK << 16;
-- padapter->SCpnt = NULL;
-- SCpnt->scsi_done (SCpnt);
-- return;
-- }
-- if ( !(status = IdeCmd (padapter)) )
-- return;
-- break;
--
-- case IDE_COMMAND_IDENTIFY:
-- {
-- PINQUIRYDATA pinquiryData = SCpnt->request_buffer;
--
-- if ( status & IDE_STATUS_DRQ )
-- {
-- insw (pports[PORT_DATA], &identifyData, sizeof (identifyData) >> 1);
--
-- memset (pinquiryData, 0, SCpnt->request_bufflen); // Zero INQUIRY data structure.
-- pinquiryData->DeviceType = 0;
-- pinquiryData->Versions = 2;
-- pinquiryData->AdditionalLength = 35 - 4;
--
-- // Fill in vendor identification fields.
-- for ( z = 0; z < 8; z += 2 )
-- {
-- pinquiryData->VendorId[z] = ((UCHAR *)identifyData.ModelNumber)[z + 1];
-- pinquiryData->VendorId[z + 1] = ((UCHAR *)identifyData.ModelNumber)[z];
-- }
--
-- // Initialize unused portion of product id.
-- for ( z = 0; z < 4; z++ )
-- pinquiryData->ProductId[12 + z] = ' ';
--
-- // Move firmware revision from IDENTIFY data to
-- // product revision in INQUIRY data.
-- for ( z = 0; z < 4; z += 2 )
-- {
-- pinquiryData->ProductRevisionLevel[z] = ((UCHAR *)identifyData.FirmwareRevision)[z + 1];
-- pinquiryData->ProductRevisionLevel[z + 1] = ((UCHAR *)identifyData.FirmwareRevision)[z];
-- }
--
-- SCpnt->result = DID_OK << 16;
-- padapter->SCpnt = NULL;
-- SCpnt->scsi_done (SCpnt);
-- return;
-- }
-- break;
-- }
--
-- default:
-- SCpnt->result = DID_OK << 16;
-- padapter->SCpnt = NULL;
-- SCpnt->scsi_done (SCpnt);
-- return;
-- }
--
--irqerror:;
-- DEB(printk ("\npsi240i error Device Status: %X\n", status));
-- SCpnt->result = DecodeError (shost, status);
-- SCpnt->scsi_done (SCpnt);
-- }
--
--static irqreturn_t do_Irq_Handler (int irq, void *dev_id)
--{
-- unsigned long flags;
-- struct Scsi_Host *dev = dev_id;
--
-- spin_lock_irqsave(dev->host_lock, flags);
-- Irq_Handler(irq, dev_id);
-- spin_unlock_irqrestore(dev->host_lock, flags);
-- return IRQ_HANDLED;
--}
--
--/****************************************************************
-- * Name: Psi240i_QueueCommand
-- *
-- * Description: Process a queued command from the SCSI manager.
-- *
-- * Parameters: SCpnt - Pointer to SCSI command structure.
-- * done - Pointer to done function to call.
-- *
-- * Returns: Status code.
-- *
-- ****************************************************************/
--static int Psi240i_QueueCommand(struct scsi_cmnd *SCpnt,
-- void (*done)(struct scsi_cmnd *))
-- {
-- UCHAR *cdb = (UCHAR *)SCpnt->cmnd;
-- // Pointer to SCSI CDB
-- PADAPTER240I padapter = HOSTDATA (SCpnt->device->host);
-- // Pointer to adapter control structure
-- POUR_DEVICE pdev = &padapter->device [SCpnt->device->id];
-- // Pointer to device information
-- UCHAR rc;
-- // command return code
--
-- SCpnt->scsi_done = done;
-- padapter->ide.ide.ides.spigot = pdev->spigot;
-- padapter->buffer = SCpnt->request_buffer;
-- if (done)
-- {
-- if ( !pdev->device )
-- {
-- SCpnt->result = DID_BAD_TARGET << 16;
-- done (SCpnt);
-- return 0;
-- }
-- }
-- else
-- {
-- printk("psi240i_queuecommand: %02X: done can't be NULL\n", *cdb);
-- return 0;
-- }
--
-- switch ( *cdb )
-- {
-- case SCSIOP_INQUIRY: // inquiry CDB
-- {
-- padapter->ide.ide.ide[6] = pdev->byte6;
-- padapter->ide.ide.ides.cmd = IDE_COMMAND_IDENTIFY;
-- break;
-- }
--
-- case SCSIOP_TEST_UNIT_READY: // test unit ready CDB
-- SCpnt->result = DID_OK << 16;
-- done (SCpnt);
-- return 0;
--
-- case SCSIOP_READ_CAPACITY: // read capctiy CDB
-- {
-- PREAD_CAPACITY_DATA pdata = (PREAD_CAPACITY_DATA)SCpnt->request_buffer;
--
-- pdata->blksiz = 0x20000;
-- XANY2SCSI ((UCHAR *)&pdata->blks, pdev->blocks);
-- SCpnt->result = DID_OK << 16;
-- done (SCpnt);
-- return 0;
-- }
--
-- case SCSIOP_VERIFY: // verify CDB
-- *(ULONG *)padapter->ide.ide.ides.lba = XSCSI2LONG (&cdb[2]);
-- padapter->ide.ide.ide[6] |= pdev->byte6;
-- padapter->ide.ide.ide[2] = (UCHAR)((USHORT)cdb[8] | ((USHORT)cdb[7] << 8));
-- padapter->ide.ide.ides.cmd = IDE_COMMAND_VERIFY;
-- break;
--
-- case SCSIOP_READ: // read10 CDB
-- padapter->startSector = XSCSI2LONG (&cdb[2]);
-- padapter->sectorCount = (USHORT)cdb[8] | ((USHORT)cdb[7] << 8);
-- SetupTransfer (padapter, pdev->byte6);
-- padapter->ide.ide.ides.cmd = IDE_CMD_READ_MULTIPLE;
-- break;
--
-- case SCSIOP_READ6: // read6 CDB
-- padapter->startSector = SCSI2LONG (&cdb[1]);
-- padapter->sectorCount = cdb[4];
-- SetupTransfer (padapter, pdev->byte6);
-- padapter->ide.ide.ides.cmd = IDE_CMD_READ_MULTIPLE;
-- break;
--
-- case SCSIOP_WRITE: // write10 CDB
-- padapter->startSector = XSCSI2LONG (&cdb[2]);
-- padapter->sectorCount = (USHORT)cdb[8] | ((USHORT)cdb[7] << 8);
-- SetupTransfer (padapter, pdev->byte6);
-- padapter->ide.ide.ides.cmd = IDE_CMD_WRITE_MULTIPLE;
-- break;
-- case SCSIOP_WRITE6: // write6 CDB
-- padapter->startSector = SCSI2LONG (&cdb[1]);
-- padapter->sectorCount = cdb[4];
-- SetupTransfer (padapter, pdev->byte6);
-- padapter->ide.ide.ides.cmd = IDE_CMD_WRITE_MULTIPLE;
-- break;
--
-- default:
-- DEB (printk ("psi240i_queuecommand: Unsupported command %02X\n", *cdb));
-- SCpnt->result = DID_ERROR << 16;
-- done (SCpnt);
-- return 0;
-- }
--
-- padapter->SCpnt = SCpnt; // Save this command data
--
-- rc = IdeCmd (padapter);
-- if ( rc )
-- {
-- padapter->expectingIRQ = 0;
-- DEB (printk ("psi240i_queuecommand: %02X, %02X: Device failed to respond for command\n", *cdb, padapter->ide.ide.ides.cmd));
-- SCpnt->result = DID_ERROR << 16;
-- done (SCpnt);
-- return 0;
-- }
-- DEB (printk("psi240i_queuecommand: %02X, %02X now waiting for interrupt ", *cdb, padapter->ide.ide.ides.cmd));
-- return 0;
-- }
--
--/***************************************************************************
-- * Name: ReadChipMemory
-- *
-- * Description: Read information from controller memory.
-- *
-- * Parameters: psetup - Pointer to memory image of setup information.
-- * base - base address of memory.
-- * length - lenght of data space in bytes.
-- * port - I/O address of data port.
-- *
-- * Returns: Nothing.
-- *
-- **************************************************************************/
--static void ReadChipMemory (void *pdata, USHORT base, USHORT length, USHORT port)
-- {
-- USHORT z, zz;
-- UCHAR *pd = (UCHAR *)pdata;
-- outb_p (SEL_NONE, port + REG_SEL_FAIL); // setup data port
-- zz = 0;
-- while ( zz < length )
-- {
-- outw_p (base, port + REG_ADDRESS); // setup address
--
-- for ( z = 0; z < 8; z++ )
-- {
-- if ( (zz + z) < length )
-- *pd++ = inb_p (port + z); // read data byte
-- }
-- zz += 8;
-- base += 8;
-- }
-- }
--/****************************************************************
-- * Name: Psi240i_Detect
-- *
-- * Description: Detect and initialize our boards.
-- *
-- * Parameters: tpnt - Pointer to SCSI host template structure.
-- *
-- * Returns: Number of adapters found.
-- *
-- ****************************************************************/
--static int Psi240i_Detect (struct scsi_host_template *tpnt)
-- {
-- int board;
-- int count = 0;
-- int unit;
-- int z;
-- USHORT port, port_range = 16;
-- CHIP_CONFIG_N chipConfig;
-- CHIP_DEVICE_N chipDevice[8];
-- struct Scsi_Host *pshost;
--
-- for ( board = 0; board < MAXBOARDS; board++ ) // scan for I/O ports
-- {
-- pshost = NULL;
-- port = portAddr[board]; // get base address to test
-- if ( !request_region (port, port_range, "psi240i") )
-- continue;
--		if ( inb_p (port + REG_FAIL) != CHIP_ID )			// do the first test for likelihood that it is us
-- goto host_init_failure;
-- outb_p (SEL_NONE, port + REG_SEL_FAIL); // setup EEPROM/RAM access
-- outw (0, port + REG_ADDRESS); // setup EEPROM address zero
-- if ( inb_p (port) != 0x55 ) // test 1st byte
-- goto host_init_failure; // nope
-- if ( inb_p (port + 1) != 0xAA ) // test 2nd byte
-- goto host_init_failure; // nope
--
-- // at this point our board is found and can be accessed. Now we need to initialize
--		// our information and register with the kernel.
--
--
-- ReadChipMemory (&chipConfig, CHIP_CONFIG, sizeof (chipConfig), port);
-- ReadChipMemory (&chipDevice, CHIP_DEVICE, sizeof (chipDevice), port);
-- ReadChipMemory (&ChipSetup, CHIP_EEPROM_DATA, sizeof (ChipSetup), port);
--
-- if ( !chipConfig.numDrives ) // if no devices on this board
-- goto host_init_failure;
--
-- pshost = scsi_register (tpnt, sizeof(ADAPTER240I));
-- if(pshost == NULL)
-- goto host_init_failure;
--
-- PsiHost[chipConfig.irq - 10] = pshost;
-- pshost->unique_id = port;
-- pshost->io_port = port;
-- pshost->n_io_port = 16; /* Number of bytes of I/O space used */
-- pshost->irq = chipConfig.irq;
--
--		for ( z = 0;  z < 11;  z++ )							// build register address array
-- HOSTDATA(pshost)->ports[z] = port + z;
-- HOSTDATA(pshost)->ports[11] = port + REG_FAIL;
-- HOSTDATA(pshost)->ports[12] = port + REG_ALT_STAT;
-- DEB (printk ("\nPorts ="));
-- DEB (for (z=0;z<13;z++) printk(" %#04X",HOSTDATA(pshost)->ports[z]););
--
-- for ( z = 0; z < chipConfig.numDrives; ++z )
-- {
-- unit = chipDevice[z].channel & 0x0F;
-- HOSTDATA(pshost)->device[unit].device = ChipSetup.setupDevice[unit].device;
-- HOSTDATA(pshost)->device[unit].byte6 = (UCHAR)(((unit & 1) << 4) | 0xE0);
-- HOSTDATA(pshost)->device[unit].spigot = (UCHAR)(1 << (unit >> 1));
-- HOSTDATA(pshost)->device[unit].sectors = ChipSetup.setupDevice[unit].sectors;
-- HOSTDATA(pshost)->device[unit].heads = ChipSetup.setupDevice[unit].heads;
-- HOSTDATA(pshost)->device[unit].cylinders = ChipSetup.setupDevice[unit].cylinders;
-- HOSTDATA(pshost)->device[unit].blocks = ChipSetup.setupDevice[unit].blocks;
-- DEB (printk ("\nHOSTDATA->device = %X", HOSTDATA(pshost)->device[unit].device));
-- DEB (printk ("\n byte6 = %X", HOSTDATA(pshost)->device[unit].byte6));
-- DEB (printk ("\n spigot = %X", HOSTDATA(pshost)->device[unit].spigot));
-- DEB (printk ("\n sectors = %X", HOSTDATA(pshost)->device[unit].sectors));
-- DEB (printk ("\n heads = %X", HOSTDATA(pshost)->device[unit].heads));
-- DEB (printk ("\n cylinders = %X", HOSTDATA(pshost)->device[unit].cylinders));
-- DEB (printk ("\n blocks = %lX", HOSTDATA(pshost)->device[unit].blocks));
-- }
--
-- if ( request_irq (chipConfig.irq, do_Irq_Handler, 0, "psi240i", pshost) == 0 )
-- {
-- printk("\nPSI-240I EIDE CONTROLLER: at I/O = %x IRQ = %d\n", port, chipConfig.irq);
-- printk("(C) 1997 Perceptive Solutions, Inc. All rights reserved\n\n");
-- count++;
-- continue;
-- }
--
-- printk ("Unable to allocate IRQ for PSI-240I controller.\n");
--
--host_init_failure:
--
-- release_region (port, port_range);
-- if (pshost)
-- scsi_unregister (pshost);
--
-- }
-- return count;
-- }
--
--static int Psi240i_Release(struct Scsi_Host *shost)
--{
-- if (shost->irq)
-- free_irq(shost->irq, NULL);
-- if (shost->io_port && shost->n_io_port)
-- release_region(shost->io_port, shost->n_io_port);
-- scsi_unregister(shost);
-- return 0;
--}
--
--/****************************************************************
-- * Name: Psi240i_BiosParam
-- *
-- * Description: Process the biosparam request from the SCSI manager to
-- * return C/H/S data.
-- *
-- * Parameters: disk - Pointer to SCSI disk structure.
-- * dev - Major/minor number from kernel.
-- * geom - Pointer to integer array to place geometry data.
-- *
-- * Returns: zero.
-- *
-- ****************************************************************/
--static int Psi240i_BiosParam (struct scsi_device *sdev, struct block_device *dev,
-- sector_t capacity, int geom[])
-- {
-- POUR_DEVICE pdev;
--
-- pdev = &(HOSTDATA(sdev->host)->device[sdev_id(sdev)]);
--
-- geom[0] = pdev->heads;
-- geom[1] = pdev->sectors;
-- geom[2] = pdev->cylinders;
-- return 0;
-- }
--
--MODULE_LICENSE("GPL");
--
--static struct scsi_host_template driver_template = {
-- .proc_name = "psi240i",
-- .name = "PSI-240I EIDE Disk Controller",
-- .detect = Psi240i_Detect,
-- .release = Psi240i_Release,
-- .queuecommand = Psi240i_QueueCommand,
-- .bios_param = Psi240i_BiosParam,
-- .can_queue = 1,
-- .this_id = -1,
-- .sg_tablesize = SG_NONE,
-- .cmd_per_lun = 1,
-- .use_clustering = DISABLE_CLUSTERING,
--};
--#include "scsi_module.c"
-diff --git a/drivers/scsi/psi240i.h b/drivers/scsi/psi240i.h
-deleted file mode 100644
-index 21ebb92..0000000
---- a/drivers/scsi/psi240i.h
-+++ /dev/null
-@@ -1,315 +0,0 @@
--/*+M*************************************************************************
-- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
-- *
-- * Copyright (c) 1997 Perceptive Solutions, Inc.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2, or (at your option)
-- * any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- * GNU General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; see the file COPYING. If not, write to
-- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
-- *
-- *
-- * File Name: psi240i.h
-- *
-- * Description: Header file for the SCSI driver for the PSI240I
-- * EIDE interface card.
-- *
-- *-M*************************************************************************/
--#ifndef _PSI240I_H
--#define _PSI240I_H
--
--#include <linux/types.h>
--
--#ifndef PSI_EIDE_SCSIOP
--#define PSI_EIDE_SCSIOP 1
--
--/************************************************/
--/* Some defines that we like */
--/************************************************/
--#define CHAR char
--#define UCHAR unsigned char
--#define SHORT short
--#define USHORT unsigned short
--#define BOOL unsigned short
--#define LONG long
--#define ULONG unsigned long
--#define VOID void
--
--/************************************************/
--/*	Timeout constants		 	*/
--/************************************************/
--#define TIMEOUT_READY 10 // 100 mSec
--#define TIMEOUT_DRQ 40 // 400 mSec
--
--/************************************************/
--/* Misc. macros */
--/************************************************/
--#define ANY2SCSI(up, p) \
--((UCHAR *)up)[0] = (((ULONG)(p)) >> 8); \
--((UCHAR *)up)[1] = ((ULONG)(p));
--
--#define SCSI2LONG(up) \
--( (((long)*(((UCHAR *)up))) << 16) \
--+ (((long)(((UCHAR *)up)[1])) << 8) \
--+ ((long)(((UCHAR *)up)[2])) )
--
--#define XANY2SCSI(up, p) \
--((UCHAR *)up)[0] = ((long)(p)) >> 24; \
--((UCHAR *)up)[1] = ((long)(p)) >> 16; \
--((UCHAR *)up)[2] = ((long)(p)) >> 8; \
--((UCHAR *)up)[3] = ((long)(p));
--
--#define XSCSI2LONG(up) \
--( (((long)(((UCHAR *)up)[0])) << 24) \
--+ (((long)(((UCHAR *)up)[1])) << 16) \
--+ (((long)(((UCHAR *)up)[2])) << 8) \
--+ ((long)(((UCHAR *)up)[3])) )
--
--/************************************************/
--/* SCSI CDB operation codes */
--/************************************************/
--#define SCSIOP_TEST_UNIT_READY 0x00
--#define SCSIOP_REZERO_UNIT 0x01
--#define SCSIOP_REWIND 0x01
--#define SCSIOP_REQUEST_BLOCK_ADDR 0x02
--#define SCSIOP_REQUEST_SENSE 0x03
--#define SCSIOP_FORMAT_UNIT 0x04
--#define SCSIOP_READ_BLOCK_LIMITS 0x05
--#define SCSIOP_REASSIGN_BLOCKS 0x07
--#define SCSIOP_READ6 0x08
--#define SCSIOP_RECEIVE 0x08
--#define SCSIOP_WRITE6 0x0A
--#define SCSIOP_PRINT 0x0A
--#define SCSIOP_SEND 0x0A
--#define SCSIOP_SEEK6 0x0B
--#define SCSIOP_TRACK_SELECT 0x0B
--#define SCSIOP_SLEW_PRINT 0x0B
--#define SCSIOP_SEEK_BLOCK 0x0C
--#define SCSIOP_PARTITION 0x0D
--#define SCSIOP_READ_REVERSE 0x0F
--#define SCSIOP_WRITE_FILEMARKS 0x10
--#define SCSIOP_FLUSH_BUFFER 0x10
--#define SCSIOP_SPACE 0x11
--#define SCSIOP_INQUIRY 0x12
--#define SCSIOP_VERIFY6 0x13
--#define SCSIOP_RECOVER_BUF_DATA 0x14
--#define SCSIOP_MODE_SELECT 0x15
--#define SCSIOP_RESERVE_UNIT 0x16
--#define SCSIOP_RELEASE_UNIT 0x17
--#define SCSIOP_COPY 0x18
--#define SCSIOP_ERASE 0x19
--#define SCSIOP_MODE_SENSE 0x1A
--#define SCSIOP_START_STOP_UNIT 0x1B
--#define SCSIOP_STOP_PRINT 0x1B
--#define SCSIOP_LOAD_UNLOAD 0x1B
--#define SCSIOP_RECEIVE_DIAGNOSTIC 0x1C
--#define SCSIOP_SEND_DIAGNOSTIC 0x1D
--#define SCSIOP_MEDIUM_REMOVAL 0x1E
--#define SCSIOP_READ_CAPACITY 0x25
--#define SCSIOP_READ 0x28
--#define SCSIOP_WRITE 0x2A
--#define SCSIOP_SEEK 0x2B
--#define SCSIOP_LOCATE 0x2B
--#define SCSIOP_WRITE_VERIFY 0x2E
--#define SCSIOP_VERIFY 0x2F
--#define SCSIOP_SEARCH_DATA_HIGH 0x30
--#define SCSIOP_SEARCH_DATA_EQUAL 0x31
--#define SCSIOP_SEARCH_DATA_LOW 0x32
--#define SCSIOP_SET_LIMITS 0x33
--#define SCSIOP_READ_POSITION 0x34
--#define SCSIOP_SYNCHRONIZE_CACHE 0x35
--#define SCSIOP_COMPARE 0x39
--#define SCSIOP_COPY_COMPARE 0x3A
--#define SCSIOP_WRITE_DATA_BUFF 0x3B
--#define SCSIOP_READ_DATA_BUFF 0x3C
--#define SCSIOP_CHANGE_DEFINITION 0x40
--#define SCSIOP_READ_SUB_CHANNEL 0x42
--#define SCSIOP_READ_TOC 0x43
--#define SCSIOP_READ_HEADER 0x44
--#define SCSIOP_PLAY_AUDIO 0x45
--#define SCSIOP_PLAY_AUDIO_MSF 0x47
--#define SCSIOP_PLAY_TRACK_INDEX 0x48
--#define SCSIOP_PLAY_TRACK_RELATIVE 0x49
--#define SCSIOP_PAUSE_RESUME 0x4B
--#define SCSIOP_LOG_SELECT 0x4C
--#define SCSIOP_LOG_SENSE 0x4D
--#define SCSIOP_MODE_SELECT10 0x55
--#define SCSIOP_MODE_SENSE10 0x5A
--#define SCSIOP_LOAD_UNLOAD_SLOT 0xA6
--#define SCSIOP_MECHANISM_STATUS 0xBD
--#define SCSIOP_READ_CD 0xBE
--
--// IDE command definitions
--#define IDE_COMMAND_ATAPI_RESET 0x08
--#define IDE_COMMAND_READ 0x20
--#define IDE_COMMAND_WRITE 0x30
--#define IDE_COMMAND_RECALIBRATE 0x10
--#define IDE_COMMAND_SEEK 0x70
--#define IDE_COMMAND_SET_PARAMETERS 0x91
--#define IDE_COMMAND_VERIFY 0x40
--#define IDE_COMMAND_ATAPI_PACKET 0xA0
--#define IDE_COMMAND_ATAPI_IDENTIFY 0xA1
--#define IDE_CMD_READ_MULTIPLE 0xC4
--#define IDE_CMD_WRITE_MULTIPLE 0xC5
--#define IDE_CMD_SET_MULTIPLE 0xC6
--#define IDE_COMMAND_WRITE_DMA 0xCA
--#define IDE_COMMAND_READ_DMA 0xC8
--#define IDE_COMMAND_IDENTIFY 0xEC
--
--// IDE status definitions
--#define IDE_STATUS_ERROR 0x01
--#define IDE_STATUS_INDEX 0x02
--#define IDE_STATUS_CORRECTED_ERROR 0x04
--#define IDE_STATUS_DRQ 0x08
--#define IDE_STATUS_DSC 0x10
--#define IDE_STATUS_WRITE_FAULT 0x20
--#define IDE_STATUS_DRDY 0x40
--#define IDE_STATUS_BUSY 0x80
--
--// IDE error definitions
--#define IDE_ERROR_AMNF 0x01
--#define IDE_ERROR_TKONF 0x02
--#define IDE_ERROR_ABRT 0x04
--#define IDE_ERROR_MCR 0x08
--#define IDE_ERROR_IDFN 0x10
--#define IDE_ERROR_MC 0x20
--#define IDE_ERROR_UNC 0x40
--#define IDE_ERROR_BBK 0x80
--
--// IDE interface structure
--typedef struct _IDE_STRUCT
-- {
-- union
-- {
-- UCHAR ide[9];
-- struct
-- {
-- USHORT data;
-- UCHAR sectors;
-- UCHAR lba[4];
-- UCHAR cmd;
-- UCHAR spigot;
-- } ides;
-- } ide;
-- } IDE_STRUCT;
--
--// SCSI read capacity structure
--typedef struct _READ_CAPACITY_DATA
-- {
-- ULONG blks; /* total blocks (converted to little endian) */
-- ULONG blksiz; /* size of each (converted to little endian) */
-- } READ_CAPACITY_DATA, *PREAD_CAPACITY_DATA;
--
--// SCSI inquiry data
--#ifndef HOSTS_C
--
--typedef struct _INQUIRYDATA
-- {
-- UCHAR DeviceType :5;
-- UCHAR DeviceTypeQualifier :3;
-- UCHAR DeviceTypeModifier :7;
-- UCHAR RemovableMedia :1;
-- UCHAR Versions;
-- UCHAR ResponseDataFormat;
-- UCHAR AdditionalLength;
-- UCHAR Reserved[2];
-- UCHAR SoftReset :1;
-- UCHAR CommandQueue :1;
-- UCHAR Reserved2 :1;
-- UCHAR LinkedCommands :1;
-- UCHAR Synchronous :1;
-- UCHAR Wide16Bit :1;
-- UCHAR Wide32Bit :1;
-- UCHAR RelativeAddressing :1;
-- UCHAR VendorId[8];
-- UCHAR ProductId[16];
-- UCHAR ProductRevisionLevel[4];
-- UCHAR VendorSpecific[20];
-- UCHAR Reserved3[40];
-- } INQUIRYDATA, *PINQUIRYDATA;
--#endif
--
--// IDE IDENTIFY data
--typedef struct _IDENTIFY_DATA
-- {
-- USHORT GeneralConfiguration; // 00
-- USHORT NumberOfCylinders; // 02
-- USHORT Reserved1; // 04
-- USHORT NumberOfHeads; // 06
-- USHORT UnformattedBytesPerTrack; // 08
-- USHORT UnformattedBytesPerSector; // 0A
-- USHORT SectorsPerTrack; // 0C
-- USHORT VendorUnique1[3]; // 0E
-- USHORT SerialNumber[10]; // 14
-- USHORT BufferType; // 28
-- USHORT BufferSectorSize; // 2A
-- USHORT NumberOfEccBytes; // 2C
-- USHORT FirmwareRevision[4]; // 2E
-- USHORT ModelNumber[20]; // 36
-- UCHAR MaximumBlockTransfer; // 5E
-- UCHAR VendorUnique2; // 5F
-- USHORT DoubleWordIo; // 60
-- USHORT Capabilities; // 62
-- USHORT Reserved2; // 64
-- UCHAR VendorUnique3; // 66
-- UCHAR PioCycleTimingMode; // 67
-- UCHAR VendorUnique4; // 68
-- UCHAR DmaCycleTimingMode; // 69
-- USHORT TranslationFieldsValid:1; // 6A
-- USHORT Reserved3:15;
-- USHORT NumberOfCurrentCylinders; // 6C
-- USHORT NumberOfCurrentHeads; // 6E
-- USHORT CurrentSectorsPerTrack; // 70
-- ULONG CurrentSectorCapacity; // 72
-- USHORT Reserved4[197]; // 76
-- } IDENTIFY_DATA, *PIDENTIFY_DATA;
--
--// Identify data without the Reserved4.
--typedef struct _IDENTIFY_DATA2 {
-- USHORT GeneralConfiguration; // 00
-- USHORT NumberOfCylinders; // 02
-- USHORT Reserved1; // 04
-- USHORT NumberOfHeads; // 06
-- USHORT UnformattedBytesPerTrack; // 08
-- USHORT UnformattedBytesPerSector; // 0A
-- USHORT SectorsPerTrack; // 0C
-- USHORT VendorUnique1[3]; // 0E
-- USHORT SerialNumber[10]; // 14
-- USHORT BufferType; // 28
-- USHORT BufferSectorSize; // 2A
-- USHORT NumberOfEccBytes; // 2C
-- USHORT FirmwareRevision[4]; // 2E
-- USHORT ModelNumber[20]; // 36
-- UCHAR MaximumBlockTransfer; // 5E
-- UCHAR VendorUnique2; // 5F
-- USHORT DoubleWordIo; // 60
-- USHORT Capabilities; // 62
-- USHORT Reserved2; // 64
-- UCHAR VendorUnique3; // 66
-- UCHAR PioCycleTimingMode; // 67
-- UCHAR VendorUnique4; // 68
-- UCHAR DmaCycleTimingMode; // 69
-- USHORT TranslationFieldsValid:1; // 6A
-- USHORT Reserved3:15;
-- USHORT NumberOfCurrentCylinders; // 6C
-- USHORT NumberOfCurrentHeads; // 6E
-- USHORT CurrentSectorsPerTrack; // 70
-- ULONG CurrentSectorCapacity; // 72
-- } IDENTIFY_DATA2, *PIDENTIFY_DATA2;
--
--#endif // PSI_EIDE_SCSIOP
--
--// function prototypes
--int Psi240i_Command(struct scsi_cmnd *SCpnt);
--int Psi240i_Abort(struct scsi_cmnd *SCpnt);
--int Psi240i_Reset(struct scsi_cmnd *SCpnt, unsigned int flags);
--#endif
-diff --git a/drivers/scsi/psi_chip.h b/drivers/scsi/psi_chip.h
-deleted file mode 100644
-index 224cf8f..0000000
---- a/drivers/scsi/psi_chip.h
-+++ /dev/null
-@@ -1,195 +0,0 @@
--/*+M*************************************************************************
-- * Perceptive Solutions, Inc. PSI-240I device driver proc support for Linux.
-- *
-- * Copyright (c) 1997 Perceptive Solutions, Inc.
-- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2, or (at your option)
-- * any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- * GNU General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public License
-- * along with this program; see the file COPYING. If not, write to
-- * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
-- *
-- *
-- * File Name: psi_chip.h
-- *
-- * Description: This file contains the interface defines and
-- * error codes.
-- *
-- *-M*************************************************************************/
--#ifndef PSI_CHIP
--#define PSI_CHIP
--
--/************************************************/
--/*		Misc constants					*/
--/************************************************/
--#define CHIP_MAXDRIVES 8
--
--/************************************************/
--/* Chip I/O addresses */
--/************************************************/
--#define CHIP_ADRS_0 0x0130
--#define CHIP_ADRS_1 0x0150
--#define CHIP_ADRS_2 0x0190
--#define CHIP_ADRS_3 0x0210
--#define CHIP_ADRS_4 0x0230
--#define CHIP_ADRS_5 0x0250
--
--/************************************************/
--/* EEPROM locations */
--/************************************************/
--#define CHIP_EEPROM_BIOS 0x0000 // BIOS base address
--#define CHIP_EEPROM_DATA 0x2000 // SETUP data base address
--#define CHIP_EEPROM_FACTORY 0x2400 // FACTORY data base address
--#define CHIP_EEPROM_SETUP 0x3000 // SETUP PROGRAM base address
--
--#define CHIP_EEPROM_SIZE 32768U // size of the entire EEPROM
--#define CHIP_EEPROM_BIOS_SIZE 8192 // size of the BIOS in bytes
--#define CHIP_EEPROM_DATA_SIZE 4096 // size of factory, setup, log data block in bytes
--#define CHIP_EEPROM_SETUP_SIZE 20480U // size of the setup program in bytes
--
--/************************************************/
--/* Chip Interrupts */
--/************************************************/
--#define CHIP_IRQ_10 0x72
--#define CHIP_IRQ_11 0x73
--#define CHIP_IRQ_12 0x74
--
--/************************************************/
--/* Chip Setup addresses */
--/************************************************/
--#define CHIP_SETUP_BASE 0x0000C000L
--
--/************************************************/
--/* Chip Register address offsets */
--/************************************************/
--#define REG_DATA 0x00
--#define REG_ERROR 0x01
--#define REG_SECTOR_COUNT 0x02
--#define REG_LBA_0 0x03
--#define REG_LBA_8 0x04
--#define REG_LBA_16 0x05
--#define REG_LBA_24 0x06
--#define REG_STAT_CMD 0x07
--#define REG_SEL_FAIL 0x08
--#define REG_IRQ_STATUS 0x09
--#define REG_ADDRESS 0x0A
--#define REG_FAIL 0x0C
--#define REG_ALT_STAT 0x0E
--#define REG_DRIVE_ADRS 0x0F
--
--/************************************************/
--/* Chip RAM locations */
--/************************************************/
--#define CHIP_DEVICE 0x8000
--#define CHIP_DEVICE_0 0x8000
--#define CHIP_DEVICE_1 0x8008
--#define CHIP_DEVICE_2 0x8010
--#define CHIP_DEVICE_3 0x8018
--#define CHIP_DEVICE_4 0x8020
--#define CHIP_DEVICE_5 0x8028
--#define CHIP_DEVICE_6 0x8030
--#define CHIP_DEVICE_7 0x8038
--typedef struct
-- {
-- UCHAR channel; // channel of this device (0-8).
-- UCHAR spt; // Sectors Per Track.
-- ULONG spc; // Sectors Per Cylinder.
-- } CHIP_DEVICE_N;
--
--#define CHIP_CONFIG 0x8100 // address of boards configuration.
--typedef struct
-- {
-- UCHAR irq; // interrupt request channel number
-- UCHAR numDrives; // Number of accessible drives
-- UCHAR fastFormat; // Boolean for fast format enable
-- } CHIP_CONFIG_N;
-
--#define CHIP_MAP 0x8108 // eight byte device type map.
+ /**************************************************************************
+ * isd200_scsi_to_ata
+ *
+@@ -1266,7 +1289,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
+ ataCdb->generic.TransferBlockSize = 1;
+ ataCdb->generic.RegisterSelect = REG_COMMAND;
+ ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
+- srb->request_bufflen = 0;
++ isd200_srb_set_bufflen(srb, 0);
+ } else {
+ US_DEBUGP(" Media Status not supported, just report okay\n");
+ srb->result = SAM_STAT_GOOD;
+@@ -1284,7 +1307,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
+ ataCdb->generic.TransferBlockSize = 1;
+ ataCdb->generic.RegisterSelect = REG_COMMAND;
+ ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
+- srb->request_bufflen = 0;
++ isd200_srb_set_bufflen(srb, 0);
+ } else {
+ US_DEBUGP(" Media Status not supported, just report okay\n");
+ srb->result = SAM_STAT_GOOD;
+@@ -1390,7 +1413,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
+ ataCdb->generic.RegisterSelect = REG_COMMAND;
+ ataCdb->write.CommandByte = (srb->cmnd[4] & 0x1) ?
+ WIN_DOORLOCK : WIN_DOORUNLOCK;
+- srb->request_bufflen = 0;
++ isd200_srb_set_bufflen(srb, 0);
+ } else {
+ 			US_DEBUGP("   Not removable media, just report okay\n");
+ srb->result = SAM_STAT_GOOD;
+@@ -1416,7 +1439,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
+ ataCdb->generic.TransferBlockSize = 1;
+ ataCdb->generic.RegisterSelect = REG_COMMAND;
+ ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
+- srb->request_bufflen = 0;
++ isd200_srb_set_bufflen(srb, 0);
+ } else {
+ US_DEBUGP(" Nothing to do, just report okay\n");
+ srb->result = SAM_STAT_GOOD;
+@@ -1525,7 +1548,7 @@ int isd200_Initialization(struct us_data *us)
+
+ void isd200_ata_command(struct scsi_cmnd *srb, struct us_data *us)
+ {
+- int sendToTransport = 1;
++ int sendToTransport = 1, orig_bufflen;
+ union ata_cdb ataCdb;
+
+ /* Make sure driver was initialized */
+@@ -1533,11 +1556,14 @@ void isd200_ata_command(struct scsi_cmnd *srb, struct us_data *us)
+ if (us->extra == NULL)
+ US_DEBUGP("ERROR Driver not initialized\n");
+
+- /* Convert command */
+- srb->resid = 0;
++ scsi_set_resid(srb, 0);
++ /* scsi_bufflen might change in protocol translation to ata */
++ orig_bufflen = scsi_bufflen(srb);
+ sendToTransport = isd200_scsi_to_ata(srb, us, &ataCdb);
+
+ /* send the command to the transport layer */
+ if (sendToTransport)
+ isd200_invoke_transport(us, srb, &ataCdb);
++
++ isd200_srb_set_bufflen(srb, orig_bufflen);
+ }
+diff --git a/drivers/usb/storage/protocol.c b/drivers/usb/storage/protocol.c
+index 889622b..a41ce21 100644
+--- a/drivers/usb/storage/protocol.c
++++ b/drivers/usb/storage/protocol.c
+@@ -149,11 +149,7 @@ void usb_stor_transparent_scsi_command(struct scsi_cmnd *srb,
+ ***********************************************************************/
+
+ /* Copy a buffer of length buflen to/from the srb's transfer buffer.
+- * (Note: for scatter-gather transfers (srb->use_sg > 0), srb->request_buffer
+- * points to a list of s-g entries and we ignore srb->request_bufflen.
+- * For non-scatter-gather transfers, srb->request_buffer points to the
+- * transfer buffer itself and srb->request_bufflen is the buffer's length.)
+- * Update the *index and *offset variables so that the next copy will
++ * Update the **sgptr and *offset variables so that the next copy will
+ * pick up from where this one left off. */
+
+ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
+@@ -162,80 +158,64 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
+ {
+ unsigned int cnt;
+
+- /* If not using scatter-gather, just transfer the data directly.
+- * Make certain it will fit in the available buffer space. */
+- if (srb->use_sg == 0) {
+- if (*offset >= srb->request_bufflen)
+- return 0;
+- cnt = min(buflen, srb->request_bufflen - *offset);
+- if (dir == TO_XFER_BUF)
+- memcpy((unsigned char *) srb->request_buffer + *offset,
+- buffer, cnt);
+- else
+- memcpy(buffer, (unsigned char *) srb->request_buffer +
+- *offset, cnt);
+- *offset += cnt;
-
+- /* Using scatter-gather. We have to go through the list one entry
++ /* We have to go through the list one entry
+ * at a time. Each s-g entry contains some number of pages, and
+ * each page has to be kmap()'ed separately. If the page is already
+ * in kernel-addressable memory then kmap() will return its address.
+ * If the page is not directly accessible -- such as a user buffer
+ * located in high memory -- then kmap() will map it to a temporary
+ * position in the kernel's virtual address space. */
+- } else {
+- struct scatterlist *sg = *sgptr;
-
--#define CHIP_RAID 0x8120 // array of RAID signature structures and LBA
--#define CHIP_RAID_1 0x8120
--#define CHIP_RAID_2 0x8130
--#define CHIP_RAID_3 0x8140
--#define CHIP_RAID_4 0x8150
+- if (!sg)
+- sg = (struct scatterlist *) srb->request_buffer;
-
--/************************************************/
--/* Chip Register Masks */
--/************************************************/
--#define CHIP_ID 0x7B
--#define SEL_RAM 0x8000
--#define MASK_FAIL 0x80
+- /* This loop handles a single s-g list entry, which may
+- * include multiple pages. Find the initial page structure
+- * and the starting offset within the page, and update
+- * the *offset and *index values for the next loop. */
+- cnt = 0;
+- while (cnt < buflen) {
+- struct page *page = sg_page(sg) +
+- ((sg->offset + *offset) >> PAGE_SHIFT);
+- unsigned int poff =
+- (sg->offset + *offset) & (PAGE_SIZE-1);
+- unsigned int sglen = sg->length - *offset;
-
--/************************************************/
--/* Chip cable select bits */
--/************************************************/
--#define SECTORSXFER 8
+- if (sglen > buflen - cnt) {
-
--/************************************************/
--/* Chip cable select bits */
--/************************************************/
--#define SEL_NONE 0x00
--#define SEL_1 0x01
--#define SEL_2 0x02
--#define SEL_3 0x04
--#define SEL_4 0x08
+- /* Transfer ends within this s-g entry */
+- sglen = buflen - cnt;
+- *offset += sglen;
+- } else {
-
--/************************************************/
--/* Programmable Interrupt Controller*/
--/************************************************/
--#define PIC1 0x20 // first 8259 base port address
--#define PIC2 0xA0 // second 8259 base port address
--#define INT_OCW1 1 // Operation Control Word 1: IRQ mask
--#define EOI 0x20 // non-specific end-of-interrupt
+- /* Transfer continues to next s-g entry */
+- *offset = 0;
+- sg = sg_next(sg);
+- }
-
--/************************************************/
--/* Device/Geometry controls */
--/************************************************/
--#define GEOMETRY_NONE 0x0 // No device
--#define GEOMETRY_AUTO 0x1 // Geometry set automatically
--#define GEOMETRY_USER 0x2 // User supplied geometry
+- /* Transfer the data for all the pages in this
+- * s-g entry. For each page: call kmap(), do the
+- * transfer, and call kunmap() immediately after. */
+- while (sglen > 0) {
+- unsigned int plen = min(sglen, (unsigned int)
+- PAGE_SIZE - poff);
+- unsigned char *ptr = kmap(page);
-
--#define DEVICE_NONE 0x0 // No device present
--#define DEVICE_INACTIVE 0x1 // device present but not registered active
--#define DEVICE_ATAPI 0x2 // ATAPI device (CD_ROM, Tape, Etc...)
--#define DEVICE_DASD_NONLBA 0x3 // Non LBA incompatible device
--#define DEVICE_DASD_LBA 0x4 // LBA compatible device
+- if (dir == TO_XFER_BUF)
+- memcpy(ptr + poff, buffer + cnt, plen);
+- else
+- memcpy(buffer + cnt, ptr + poff, plen);
+- kunmap(page);
-
--/************************************************/
--/* Setup Structure Definitions */
--/************************************************/
--typedef struct // device setup parameters
-- {
-- UCHAR geometryControl; // geometry control flags
-- UCHAR device; // device code
-- USHORT sectors; // number of sectors per track
-- USHORT heads; // number of heads
-- USHORT cylinders; // number of cylinders for this device
-- ULONG blocks; // number of blocks on device
-- USHORT spare1;
-- USHORT spare2;
-- } SETUP_DEVICE, *PSETUP_DEVICE;
+- /* Start at the beginning of the next page */
+- poff = 0;
+- ++page;
+- cnt += plen;
+- sglen -= plen;
+- }
++ struct scatterlist *sg = *sgptr;
++
++ if (!sg)
++ sg = scsi_sglist(srb);
++
++ /* This loop handles a single s-g list entry, which may
++ * include multiple pages. Find the initial page structure
++ * and the starting offset within the page, and update
++ * the *offset and **sgptr values for the next loop. */
++ cnt = 0;
++ while (cnt < buflen) {
++ struct page *page = sg_page(sg) +
++ ((sg->offset + *offset) >> PAGE_SHIFT);
++ unsigned int poff =
++ (sg->offset + *offset) & (PAGE_SIZE-1);
++ unsigned int sglen = sg->length - *offset;
++
++ if (sglen > buflen - cnt) {
++
++ /* Transfer ends within this s-g entry */
++ sglen = buflen - cnt;
++ *offset += sglen;
++ } else {
++
++ /* Transfer continues to next s-g entry */
++ *offset = 0;
++ sg = sg_next(sg);
++ }
++
++ /* Transfer the data for all the pages in this
++ * s-g entry. For each page: call kmap(), do the
++ * transfer, and call kunmap() immediately after. */
++ while (sglen > 0) {
++ unsigned int plen = min(sglen, (unsigned int)
++ PAGE_SIZE - poff);
++ unsigned char *ptr = kmap(page);
++
++ if (dir == TO_XFER_BUF)
++ memcpy(ptr + poff, buffer + cnt, plen);
++ else
++ memcpy(buffer + cnt, ptr + poff, plen);
++ kunmap(page);
++
++ /* Start at the beginning of the next page */
++ poff = 0;
++ ++page;
++ cnt += plen;
++ sglen -= plen;
+ }
+- *sgptr = sg;
+ }
++ *sgptr = sg;
+
+ /* Return the amount actually transferred */
+ return cnt;
+@@ -251,6 +231,6 @@ void usb_stor_set_xfer_buf(unsigned char *buffer,
+
+ usb_stor_access_xfer_buf(buffer, buflen, srb, &sg, &offset,
+ TO_XFER_BUF);
+- if (buflen < srb->request_bufflen)
+- srb->resid = srb->request_bufflen - buflen;
++ if (buflen < scsi_bufflen(srb))
++ scsi_set_resid(srb, scsi_bufflen(srb) - buflen);
+ }
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index 7c9593b..8c1e295 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -81,6 +81,16 @@ static int slave_alloc (struct scsi_device *sdev)
+ */
+ sdev->inquiry_len = 36;
+
++ /* Scatter-gather buffers (all but the last) must have a length
++ * divisible by the bulk maxpacket size. Otherwise a data packet
++ * would end up being short, causing a premature end to the data
++ * transfer. Since high-speed bulk pipes have a maxpacket size
++ * of 512, we'll use that as the scsi device queue's DMA alignment
++ * mask. Guaranteeing proper alignment of the first buffer will
++ * have the desired effect because, except at the beginning and
++ * the end, scatter-gather buffers follow page boundaries. */
++ blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
++
+ /*
+ 	 * The UFI spec treats the Peripheral Qualifier bits in an
+ * INQUIRY result as reserved and requires devices to set them
+@@ -100,16 +110,6 @@ static int slave_configure(struct scsi_device *sdev)
+ {
+ struct us_data *us = host_to_us(sdev->host);
+
+- /* Scatter-gather buffers (all but the last) must have a length
+- * divisible by the bulk maxpacket size. Otherwise a data packet
+- * would end up being short, causing a premature end to the data
+- * transfer. Since high-speed bulk pipes have a maxpacket size
+- * of 512, we'll use that as the scsi device queue's DMA alignment
+- * mask. Guaranteeing proper alignment of the first buffer will
+- * have the desired effect because, except at the beginning and
+- * the end, scatter-gather buffers follow page boundaries. */
+- blk_queue_dma_alignment(sdev->request_queue, (512 - 1));
-
--typedef struct // master setup structure
-- {
-- USHORT startupDelay;
-- USHORT promptBIOS;
-- USHORT fastFormat;
-- USHORT spare2;
-- USHORT spare3;
-- USHORT spare4;
-- USHORT spare5;
-- USHORT spare6;
-- SETUP_DEVICE setupDevice[8];
-- } SETUP, *PSETUP;
+ /* Many devices have trouble transferring more than 32KB at a time,
+ * while others have trouble with more than 64K. At this time we
+ * are limiting both to 32K (64 sectors).
+@@ -187,6 +187,10 @@ static int slave_configure(struct scsi_device *sdev)
+ * automatically, requiring a START-STOP UNIT command. */
+ sdev->allow_restart = 1;
+
++ /* Some USB card readers have trouble reading an SD card's last
++ * sector in a larger than 1 sector read; since the performance
++ * impact is negligible we set this flag for all USB disks */
++ sdev->last_sector_bug = 1;
+ } else {
+
+ /* Non-disk-type devices don't need to blacklist any pages
+diff --git a/drivers/usb/storage/sddr09.c b/drivers/usb/storage/sddr09.c
+index b12202c..8972b17 100644
+--- a/drivers/usb/storage/sddr09.c
++++ b/drivers/usb/storage/sddr09.c
+@@ -1623,7 +1623,7 @@ int sddr09_transport(struct scsi_cmnd *srb, struct us_data *us)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+
+- if (srb->request_bufflen == 0)
++ if (scsi_bufflen(srb) == 0)
+ return USB_STOR_TRANSPORT_GOOD;
+
+ if (srb->sc_data_direction == DMA_TO_DEVICE ||
+@@ -1634,12 +1634,9 @@ int sddr09_transport(struct scsi_cmnd *srb, struct us_data *us)
+ US_DEBUGP("SDDR09: %s %d bytes\n",
+ (srb->sc_data_direction == DMA_TO_DEVICE) ?
+ "sending" : "receiving",
+- srb->request_bufflen);
++ scsi_bufflen(srb));
+
+- result = usb_stor_bulk_transfer_sg(us, pipe,
+- srb->request_buffer,
+- srb->request_bufflen,
+- srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, pipe, srb);
+
+ return (result == USB_STOR_XFER_GOOD ?
+ USB_STOR_TRANSPORT_GOOD : USB_STOR_TRANSPORT_ERROR);
+diff --git a/drivers/usb/storage/shuttle_usbat.c b/drivers/usb/storage/shuttle_usbat.c
+index cb22a9a..570c125 100644
+--- a/drivers/usb/storage/shuttle_usbat.c
++++ b/drivers/usb/storage/shuttle_usbat.c
+@@ -130,7 +130,7 @@ static int usbat_write(struct us_data *us,
+ * Convenience function to perform a bulk read
+ */
+ static int usbat_bulk_read(struct us_data *us,
+- unsigned char *data,
++ void* buf,
+ unsigned int len,
+ int use_sg)
+ {
+@@ -138,14 +138,14 @@ static int usbat_bulk_read(struct us_data *us,
+ return USB_STOR_XFER_GOOD;
+
+ US_DEBUGP("usbat_bulk_read: len = %d\n", len);
+- return usb_stor_bulk_transfer_sg(us, us->recv_bulk_pipe, data, len, use_sg, NULL);
++ return usb_stor_bulk_transfer_sg(us, us->recv_bulk_pipe, buf, len, use_sg, NULL);
+ }
+
+ /*
+ * Convenience function to perform a bulk write
+ */
+ static int usbat_bulk_write(struct us_data *us,
+- unsigned char *data,
++ void* buf,
+ unsigned int len,
+ int use_sg)
+ {
+@@ -153,7 +153,7 @@ static int usbat_bulk_write(struct us_data *us,
+ return USB_STOR_XFER_GOOD;
+
+ US_DEBUGP("usbat_bulk_write: len = %d\n", len);
+- return usb_stor_bulk_transfer_sg(us, us->send_bulk_pipe, data, len, use_sg, NULL);
++ return usb_stor_bulk_transfer_sg(us, us->send_bulk_pipe, buf, len, use_sg, NULL);
+ }
+
+ /*
+@@ -314,7 +314,7 @@ static int usbat_wait_not_busy(struct us_data *us, int minutes)
+ * Read block data from the data register
+ */
+ static int usbat_read_block(struct us_data *us,
+- unsigned char *content,
++ void* buf,
+ unsigned short len,
+ int use_sg)
+ {
+@@ -337,7 +337,7 @@ static int usbat_read_block(struct us_data *us,
+ if (result != USB_STOR_XFER_GOOD)
+ return USB_STOR_TRANSPORT_ERROR;
+
+- result = usbat_bulk_read(us, content, len, use_sg);
++ result = usbat_bulk_read(us, buf, len, use_sg);
+ return (result == USB_STOR_XFER_GOOD ?
+ USB_STOR_TRANSPORT_GOOD : USB_STOR_TRANSPORT_ERROR);
+ }
+@@ -347,7 +347,7 @@ static int usbat_read_block(struct us_data *us,
+ */
+ static int usbat_write_block(struct us_data *us,
+ unsigned char access,
+- unsigned char *content,
++ void* buf,
+ unsigned short len,
+ int minutes,
+ int use_sg)
+@@ -372,7 +372,7 @@ static int usbat_write_block(struct us_data *us,
+ if (result != USB_STOR_XFER_GOOD)
+ return USB_STOR_TRANSPORT_ERROR;
+
+- result = usbat_bulk_write(us, content, len, use_sg);
++ result = usbat_bulk_write(us, buf, len, use_sg);
+ if (result != USB_STOR_XFER_GOOD)
+ return USB_STOR_TRANSPORT_ERROR;
+
+@@ -392,7 +392,7 @@ static int usbat_hp8200e_rw_block_test(struct us_data *us,
+ unsigned char timeout,
+ unsigned char qualifier,
+ int direction,
+- unsigned char *content,
++ void *buf,
+ unsigned short len,
+ int use_sg,
+ int minutes)
+@@ -472,7 +472,7 @@ static int usbat_hp8200e_rw_block_test(struct us_data *us,
+ }
+
+ result = usb_stor_bulk_transfer_sg(us,
+- pipe, content, len, use_sg, NULL);
++ pipe, buf, len, use_sg, NULL);
+
+ /*
+ * If we get a stall on the bulk download, we'll retry
+@@ -606,7 +606,7 @@ static int usbat_multiple_write(struct us_data *us,
+ * other related details) are defined beforehand with _set_shuttle_features().
+ */
+ static int usbat_read_blocks(struct us_data *us,
+- unsigned char *buffer,
++ void* buffer,
+ int len,
+ int use_sg)
+ {
+@@ -648,7 +648,7 @@ static int usbat_read_blocks(struct us_data *us,
+ * other related details) are defined beforehand with _set_shuttle_features().
+ */
+ static int usbat_write_blocks(struct us_data *us,
+- unsigned char *buffer,
++ void* buffer,
+ int len,
+ int use_sg)
+ {
+@@ -1170,15 +1170,15 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
+ US_DEBUGP("handle_read10: transfersize %d\n",
+ srb->transfersize);
+
+- if (srb->request_bufflen < 0x10000) {
++ if (scsi_bufflen(srb) < 0x10000) {
+
+ result = usbat_hp8200e_rw_block_test(us, USBAT_ATA,
+ registers, data, 19,
+ USBAT_ATA_DATA, USBAT_ATA_STATUS, 0xFD,
+ (USBAT_QUAL_FCQ | USBAT_QUAL_ALQ),
+ DMA_FROM_DEVICE,
+- srb->request_buffer,
+- srb->request_bufflen, srb->use_sg, 1);
++ scsi_sglist(srb),
++ scsi_bufflen(srb), scsi_sg_count(srb), 1);
+
+ return result;
+ }
+@@ -1196,7 +1196,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
+ len <<= 16;
+ len |= data[7+7];
+ US_DEBUGP("handle_read10: GPCMD_READ_CD: len %d\n", len);
+- srb->transfersize = srb->request_bufflen/len;
++ srb->transfersize = scsi_bufflen(srb)/len;
+ }
+
+ if (!srb->transfersize) {
+@@ -1213,7 +1213,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
+
+ len = (65535/srb->transfersize) * srb->transfersize;
+ US_DEBUGP("Max read is %d bytes\n", len);
+- len = min(len, srb->request_bufflen);
++ len = min(len, scsi_bufflen(srb));
+ buffer = kmalloc(len, GFP_NOIO);
+ if (buffer == NULL) /* bloody hell! */
+ return USB_STOR_TRANSPORT_FAILED;
+@@ -1222,10 +1222,10 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
+ sector |= short_pack(data[7+5], data[7+4]);
+ transferred = 0;
+
+- while (transferred != srb->request_bufflen) {
++ while (transferred != scsi_bufflen(srb)) {
+
+- if (len > srb->request_bufflen - transferred)
+- len = srb->request_bufflen - transferred;
++ if (len > scsi_bufflen(srb) - transferred)
++ len = scsi_bufflen(srb) - transferred;
+
+ data[3] = len&0xFF; /* (cylL) = expected length (L) */
+ data[4] = (len>>8)&0xFF; /* (cylH) = expected length (H) */
+@@ -1261,7 +1261,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
+ transferred += len;
+ sector += len / srb->transfersize;
+
+- } /* while transferred != srb->request_bufflen */
++ } /* while transferred != scsi_bufflen(srb) */
+
+ kfree(buffer);
+ return result;
+@@ -1429,9 +1429,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
+ unsigned char data[32];
+ unsigned int len;
+ int i;
+- char string[64];
+
+- len = srb->request_bufflen;
++ len = scsi_bufflen(srb);
+
+ /* Send A0 (ATA PACKET COMMAND).
+ Note: I guess we're never going to get any of the ATA
+@@ -1472,8 +1471,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
+ USBAT_ATA_DATA, USBAT_ATA_STATUS, 0xFD,
+ (USBAT_QUAL_FCQ | USBAT_QUAL_ALQ),
+ DMA_TO_DEVICE,
+- srb->request_buffer,
+- len, srb->use_sg, 10);
++ scsi_sglist(srb),
++ len, scsi_sg_count(srb), 10);
+
+ if (result == USB_STOR_TRANSPORT_GOOD) {
+ transferred += len;
+@@ -1540,23 +1539,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
+ len = *status;
+
+
+- result = usbat_read_block(us, srb->request_buffer, len, srb->use_sg);
-
--#endif
+- /* Debug-print the first 32 bytes of the transfer */
-
-diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
-index 2886407..c94906a 100644
---- a/drivers/scsi/qla1280.c
-+++ b/drivers/scsi/qla1280.c
-@@ -528,7 +528,7 @@ __setup("qla1280=", qla1280_setup);
- #define CMD_CDBLEN(Cmnd) Cmnd->cmd_len
- #define CMD_CDBP(Cmnd) Cmnd->cmnd
- #define CMD_SNSP(Cmnd) Cmnd->sense_buffer
--#define CMD_SNSLEN(Cmnd) sizeof(Cmnd->sense_buffer)
-+#define CMD_SNSLEN(Cmnd) SCSI_SENSE_BUFFERSIZE
- #define CMD_RESULT(Cmnd) Cmnd->result
- #define CMD_HANDLE(Cmnd) Cmnd->host_scribble
- #define CMD_REQUEST(Cmnd) Cmnd->request->cmd
-@@ -3715,7 +3715,7 @@ qla1280_status_entry(struct scsi_qla_host *ha, struct response *pkt,
- } else
- sense_sz = 0;
- memset(cmd->sense_buffer + sense_sz, 0,
-- sizeof(cmd->sense_buffer) - sense_sz);
-+ SCSI_SENSE_BUFFERSIZE - sense_sz);
+- if (!srb->use_sg) {
+- string[0] = 0;
+- for (i=0; i<len && i<32; i++) {
+- sprintf(string+strlen(string), "%02X ",
+- ((unsigned char *)srb->request_buffer)[i]);
+- if ((i%16)==15) {
+- US_DEBUGP("%s\n", string);
+- string[0] = 0;
+- }
+- }
+- if (string[0]!=0)
+- US_DEBUGP("%s\n", string);
+- }
++ result = usbat_read_block(us, scsi_sglist(srb), len,
++ scsi_sg_count(srb));
+ }
- dprintk(2, "qla1280_status_entry: Check "
- "condition Sense data, b %i, t %i, "
-diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
-index 71ddb5d..c51fd1f 100644
---- a/drivers/scsi/qla2xxx/Makefile
-+++ b/drivers/scsi/qla2xxx/Makefile
-@@ -1,4 +1,4 @@
- qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
-- qla_dbg.o qla_sup.o qla_attr.o qla_mid.o
-+ qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o
+ return result;
+diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
+index c646750..d9f4912 100644
+--- a/drivers/usb/storage/transport.c
++++ b/drivers/usb/storage/transport.c
+@@ -459,6 +459,22 @@ static int usb_stor_bulk_transfer_sglist(struct us_data *us, unsigned int pipe,
+ }
- obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
-diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
-index fb388b8..adf9732 100644
---- a/drivers/scsi/qla2xxx/qla_attr.c
-+++ b/drivers/scsi/qla2xxx/qla_attr.c
-@@ -9,7 +9,7 @@
- #include <linux/kthread.h>
- #include <linux/vmalloc.h>
+ /*
++ * Commonly used function. Transfers a complete command
++ * via usb_stor_bulk_transfer_sglist() above. Sets the cmnd resid.
++ */
++int usb_stor_bulk_srb(struct us_data* us, unsigned int pipe,
++ struct scsi_cmnd* srb)
++{
++ unsigned int partial;
++ int result = usb_stor_bulk_transfer_sglist(us, pipe, scsi_sglist(srb),
++ scsi_sg_count(srb), scsi_bufflen(srb),
++ &partial);
++
++ scsi_set_resid(srb, scsi_bufflen(srb) - partial);
++ return result;
++}
++
++/*
+ * Transfer an entire SCSI command's worth of data payload over the bulk
+ * pipe.
+ *
+@@ -508,7 +524,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+ int result;
--int qla24xx_vport_disable(struct fc_vport *, bool);
-+static int qla24xx_vport_disable(struct fc_vport *, bool);
+ /* send the command to the transport layer */
+- srb->resid = 0;
++ scsi_set_resid(srb, 0);
+ result = us->transport(srb, us);
- /* SYSFS attributes --------------------------------------------------------- */
+ /* if the command gets aborted by the higher layers, we need to
+@@ -568,7 +584,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+ * A short transfer on a command where we don't expect it
+ * is unusual, but it doesn't mean we need to auto-sense.
+ */
+- if ((srb->resid > 0) &&
++ if ((scsi_get_resid(srb) > 0) &&
+ !((srb->cmnd[0] == REQUEST_SENSE) ||
+ (srb->cmnd[0] == INQUIRY) ||
+ (srb->cmnd[0] == MODE_SENSE) ||
+@@ -593,7 +609,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+ srb->cmd_len = 12;
-@@ -958,7 +958,7 @@ qla2x00_issue_lip(struct Scsi_Host *shost)
+ /* issue the auto-sense command */
+- srb->resid = 0;
++ scsi_set_resid(srb, 0);
+ temp_result = us->transport(us->srb, us);
+
+ /* let's clean up right away */
+@@ -649,7 +665,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+
+ /* Did we transfer less than the minimum amount required? */
+ if (srb->result == SAM_STAT_GOOD &&
+- srb->request_bufflen - srb->resid < srb->underflow)
++ scsi_bufflen(srb) - scsi_get_resid(srb) < srb->underflow)
+ srb->result = (DID_ERROR << 16) | (SUGGEST_RETRY << 24);
+
+ return;
+@@ -708,7 +724,7 @@ void usb_stor_stop_transport(struct us_data *us)
+
+ int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
{
- scsi_qla_host_t *ha = shost_priv(shost);
+- unsigned int transfer_length = srb->request_bufflen;
++ unsigned int transfer_length = scsi_bufflen(srb);
+ unsigned int pipe = 0;
+ int result;
+
+@@ -737,9 +753,7 @@ int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
+ if (transfer_length) {
+ pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
+ us->recv_bulk_pipe : us->send_bulk_pipe;
+- result = usb_stor_bulk_transfer_sg(us, pipe,
+- srb->request_buffer, transfer_length,
+- srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, pipe, srb);
+ US_DEBUGP("CBI data stage result is 0x%x\n", result);
+
+ /* if we stalled the data transfer it means command failed */
+@@ -808,7 +822,7 @@ int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
+ */
+ int usb_stor_CB_transport(struct scsi_cmnd *srb, struct us_data *us)
+ {
+- unsigned int transfer_length = srb->request_bufflen;
++ unsigned int transfer_length = scsi_bufflen(srb);
+ int result;
+
+ /* COMMAND STAGE */
+@@ -836,9 +850,7 @@ int usb_stor_CB_transport(struct scsi_cmnd *srb, struct us_data *us)
+ if (transfer_length) {
+ unsigned int pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
+ us->recv_bulk_pipe : us->send_bulk_pipe;
+- result = usb_stor_bulk_transfer_sg(us, pipe,
+- srb->request_buffer, transfer_length,
+- srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, pipe, srb);
+ US_DEBUGP("CB data stage result is 0x%x\n", result);
+
+ /* if we stalled the data transfer it means command failed */
+@@ -904,7 +916,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
+ {
+ struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf;
+ struct bulk_cs_wrap *bcs = (struct bulk_cs_wrap *) us->iobuf;
+- unsigned int transfer_length = srb->request_bufflen;
++ unsigned int transfer_length = scsi_bufflen(srb);
+ unsigned int residue;
+ int result;
+ int fake_sense = 0;
+@@ -955,9 +967,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
+ if (transfer_length) {
+ unsigned int pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
+ us->recv_bulk_pipe : us->send_bulk_pipe;
+- result = usb_stor_bulk_transfer_sg(us, pipe,
+- srb->request_buffer, transfer_length,
+- srb->use_sg, &srb->resid);
++ result = usb_stor_bulk_srb(us, pipe, srb);
+ US_DEBUGP("Bulk data transfer result 0x%x\n", result);
+ if (result == USB_STOR_XFER_ERROR)
+ return USB_STOR_TRANSPORT_ERROR;
+@@ -1036,7 +1046,8 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
+ if (residue) {
+ if (!(us->flags & US_FL_IGNORE_RESIDUE)) {
+ residue = min(residue, transfer_length);
+- srb->resid = max(srb->resid, (int) residue);
++ scsi_set_resid(srb, max(scsi_get_resid(srb),
++ (int) residue));
+ }
+ }
+
+diff --git a/drivers/usb/storage/transport.h b/drivers/usb/storage/transport.h
+index 633a715..ada7c2f 100644
+--- a/drivers/usb/storage/transport.h
++++ b/drivers/usb/storage/transport.h
+@@ -139,6 +139,8 @@ extern int usb_stor_bulk_transfer_buf(struct us_data *us, unsigned int pipe,
+ void *buf, unsigned int length, unsigned int *act_len);
+ extern int usb_stor_bulk_transfer_sg(struct us_data *us, unsigned int pipe,
+ void *buf, unsigned int length, int use_sg, int *residual);
++extern int usb_stor_bulk_srb(struct us_data* us, unsigned int pipe,
++ struct scsi_cmnd* srb);
+
+ extern int usb_stor_port_reset(struct us_data *us);
+ #endif
+diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
+index 5b3dbcf..758435f 100644
+--- a/drivers/video/Kconfig
++++ b/drivers/video/Kconfig
+@@ -889,7 +889,7 @@ config FB_S1D13XXX
+
+ config FB_ATMEL
+ tristate "AT91/AT32 LCD Controller support"
+- depends on FB && (ARCH_AT91SAM9261 || ARCH_AT91SAM9263 || AVR32)
++ depends on FB && (ARCH_AT91SAM9261 || ARCH_AT91SAM9263 || ARCH_AT91SAM9RL || ARCH_AT91CAP9 || AVR32)
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+diff --git a/drivers/video/atmel_lcdfb.c b/drivers/video/atmel_lcdfb.c
+index 7c30cc8..f8e7111 100644
+--- a/drivers/video/atmel_lcdfb.c
++++ b/drivers/video/atmel_lcdfb.c
+@@ -30,7 +30,7 @@
+ #define ATMEL_LCDC_CVAL_DEFAULT 0xc8
+ #define ATMEL_LCDC_DMA_BURST_LEN 8
+
+-#if defined(CONFIG_ARCH_AT91SAM9263)
++#if defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91CAP9)
+ #define ATMEL_LCDC_FIFO_SIZE 2048
+ #else
+ #define ATMEL_LCDC_FIFO_SIZE 512
+diff --git a/drivers/video/bf54x-lq043fb.c b/drivers/video/bf54x-lq043fb.c
+index 74d11c3..c8e7427 100644
+--- a/drivers/video/bf54x-lq043fb.c
++++ b/drivers/video/bf54x-lq043fb.c
+@@ -224,7 +224,8 @@ static int config_dma(struct bfin_bf54xfb_info *fbi)
+ set_dma_config(CH_EPPI0,
+ set_bfin_dma_config(DIR_READ, DMA_FLOW_AUTO,
+ INTR_DISABLE, DIMENSION_2D,
+- DATA_SIZE_32));
++ DATA_SIZE_32,
++ DMA_NOSYNC_KEEP_DMA_BUF));
+ set_dma_x_count(CH_EPPI0, (LCD_X_RES * LCD_BPP) / DMA_BUS_SIZE);
+ set_dma_x_modify(CH_EPPI0, DMA_BUS_SIZE / 8);
+ set_dma_y_count(CH_EPPI0, LCD_Y_RES);
+@@ -263,8 +264,7 @@ static int request_ports(struct bfin_bf54xfb_info *fbi)
+ }
+ }
+
+- gpio_direction_output(disp);
+- gpio_set_value(disp, 1);
++ gpio_direction_output(disp, 1);
-- set_bit(LOOP_RESET_NEEDED, &ha->dpc_flags);
-+ qla2x00_loop_reset(ha);
return 0;
}
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index b87ed37..2b53d1f 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -6,7 +6,7 @@ menu "Console display driver support"
-@@ -967,35 +967,51 @@ qla2x00_get_fc_host_stats(struct Scsi_Host *shost)
+ config VGA_CONSOLE
+ bool "VGA text console" if EMBEDDED || !X86
+- depends on !ARCH_ACORN && !ARCH_EBSA110 && !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && !ARCH_VERSATILE && !SUPERH && !BLACKFIN
++ depends on !ARCH_ACORN && !ARCH_EBSA110 && !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && !ARCH_VERSATILE && !SUPERH && !BLACKFIN && !AVR32
+ default y
+ help
+ Saying Y here will allow you to use Linux in text mode through a
+diff --git a/drivers/video/matrox/matroxfb_maven.c b/drivers/video/matrox/matroxfb_maven.c
+index 49cd53e..0cd58f8 100644
+--- a/drivers/video/matrox/matroxfb_maven.c
++++ b/drivers/video/matrox/matroxfb_maven.c
+@@ -1232,7 +1232,7 @@ static int maven_shutdown_client(struct i2c_client* clnt) {
+ return 0;
+ }
+
+-static unsigned short normal_i2c[] = { MAVEN_I2CID, I2C_CLIENT_END };
++static const unsigned short normal_i2c[] = { MAVEN_I2CID, I2C_CLIENT_END };
+ I2C_CLIENT_INSMOD;
+
+ static struct i2c_driver maven_driver;
+diff --git a/drivers/video/omap/lcd_h3.c b/drivers/video/omap/lcd_h3.c
+index c604d93..31e9783 100644
+--- a/drivers/video/omap/lcd_h3.c
++++ b/drivers/video/omap/lcd_h3.c
+@@ -21,9 +21,9 @@
+
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
++#include <linux/i2c/tps65010.h>
+
+ #include <asm/arch/gpio.h>
+-#include <asm/arch/tps65010.h>
+ #include <asm/arch/omapfb.h>
+
+ #define MODULE_NAME "omapfb-lcd_h3"
+diff --git a/drivers/video/vermilion/vermilion.c b/drivers/video/vermilion/vermilion.c
+index c31f549..1c65666 100644
+--- a/drivers/video/vermilion/vermilion.c
++++ b/drivers/video/vermilion/vermilion.c
+@@ -88,9 +88,7 @@ static int vmlfb_alloc_vram_area(struct vram_area *va, unsigned max_order,
{
- scsi_qla_host_t *ha = shost_priv(shost);
- int rval;
-- uint16_t mb_stat[1];
-- link_stat_t stat_buf;
-+ struct link_statistics *stats;
-+ dma_addr_t stats_dma;
- struct fc_host_statistics *pfc_host_stat;
+ gfp_t flags;
+ unsigned long i;
+- pgprot_t wc_pageprot;
-- rval = QLA_FUNCTION_FAILED;
- pfc_host_stat = &ha->fc_host_stat;
- memset(pfc_host_stat, -1, sizeof(struct fc_host_statistics));
+- wc_pageprot = PAGE_KERNEL_NOCACHE;
+ max_order++;
+ do {
+ /*
+@@ -126,14 +124,8 @@ static int vmlfb_alloc_vram_area(struct vram_area *va, unsigned max_order,
+ /*
+ * Change caching policy of the linear kernel map to avoid
+ * mapping type conflicts with user-space mappings.
+- * The first global_flush_tlb() is really only there to do a global
+- * wbinvd().
+ */
+-
+- global_flush_tlb();
+- change_page_attr(virt_to_page(va->logical), va->size >> PAGE_SHIFT,
+- wc_pageprot);
+- global_flush_tlb();
++ set_pages_uc(virt_to_page(va->logical), va->size >> PAGE_SHIFT);
+
+ printk(KERN_DEBUG MODULE_NAME
+ ": Allocated %ld bytes vram area at 0x%08lx\n",
+@@ -157,9 +149,8 @@ static void vmlfb_free_vram_area(struct vram_area *va)
+ * Reset the linear kernel map caching policy.
+ */
-+ stats = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &stats_dma);
-+ if (stats == NULL) {
-+ DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
-+ __func__, ha->host_no));
-+ goto done;
-+ }
-+ memset(stats, 0, DMA_POOL_SIZE);
+- change_page_attr(virt_to_page(va->logical),
+- va->size >> PAGE_SHIFT, PAGE_KERNEL);
+- global_flush_tlb();
++ set_pages_wb(virt_to_page(va->logical),
++ va->size >> PAGE_SHIFT);
+
+ /*
+ * Decrease the usage count on the pages we've used
+diff --git a/drivers/w1/masters/ds2482.c b/drivers/w1/masters/ds2482.c
+index d93eb62..0fd5820 100644
+--- a/drivers/w1/masters/ds2482.c
++++ b/drivers/w1/masters/ds2482.c
+@@ -29,7 +29,7 @@
+ * However, the chip cannot be detected without doing an i2c write,
+ * so use the force module parameter.
+ */
+-static unsigned short normal_i2c[] = {I2C_CLIENT_END};
++static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
+
+ /**
+ * Insmod parameters
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 52dff40..899fc13 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -223,7 +223,7 @@ config DAVINCI_WATCHDOG
+
+ config AT32AP700X_WDT
+ tristate "AT32AP700x watchdog"
+- depends on CPU_AT32AP7000
++ depends on CPU_AT32AP700X
+ help
+ Watchdog timer embedded into AT32AP700x devices. This will reboot
+ your system when the timeout is reached.
+@@ -639,6 +639,12 @@ config AR7_WDT
+ help
+ Hardware driver for the TI AR7 Watchdog Timer.
+
++config TXX9_WDT
++ tristate "Toshiba TXx9 Watchdog Timer"
++ depends on CPU_TX39XX || CPU_TX49XX
++ help
++ Hardware driver for the built-in watchdog timer on TXx9 MIPS SoCs.
+
-+ rval = QLA_FUNCTION_FAILED;
- if (IS_FWI2_CAPABLE(ha)) {
-- rval = qla24xx_get_isp_stats(ha, (uint32_t *)&stat_buf,
-- sizeof(stat_buf) / 4, mb_stat);
-+ rval = qla24xx_get_isp_stats(ha, stats, stats_dma);
- } else if (atomic_read(&ha->loop_state) == LOOP_READY &&
- !test_bit(ABORT_ISP_ACTIVE, &ha->dpc_flags) &&
- !test_bit(ISP_ABORT_NEEDED, &ha->dpc_flags) &&
- !ha->dpc_active) {
- /* Must be in a 'READY' state for statistics retrieval. */
-- rval = qla2x00_get_link_status(ha, ha->loop_id, &stat_buf,
-- mb_stat);
-+ rval = qla2x00_get_link_status(ha, ha->loop_id, stats,
-+ stats_dma);
+ # PARISC Architecture
+
+ # POWERPC Architecture
+diff --git a/drivers/watchdog/Makefile b/drivers/watchdog/Makefile
+index 87483cc..ebc2114 100644
+--- a/drivers/watchdog/Makefile
++++ b/drivers/watchdog/Makefile
+@@ -93,6 +93,7 @@ obj-$(CONFIG_INDYDOG) += indydog.o
+ obj-$(CONFIG_WDT_MTX1) += mtx-1_wdt.o
+ obj-$(CONFIG_WDT_RM9K_GPI) += rm9k_wdt.o
+ obj-$(CONFIG_AR7_WDT) += ar7_wdt.o
++obj-$(CONFIG_TXX9_WDT) += txx9wdt.o
+
+ # PARISC Architecture
+
+diff --git a/drivers/watchdog/alim1535_wdt.c b/drivers/watchdog/alim1535_wdt.c
+index b481cc0..2b1fbdb 100644
+--- a/drivers/watchdog/alim1535_wdt.c
++++ b/drivers/watchdog/alim1535_wdt.c
+@@ -413,18 +413,18 @@ static int __init watchdog_init(void)
+ /* Calculate the watchdog's timeout */
+ ali_settimer(timeout);
+
+- ret = misc_register(&ali_miscdev);
++ ret = register_reboot_notifier(&ali_notifier);
+ if (ret != 0) {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- WATCHDOG_MINOR, ret);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ ret);
+ goto out;
}
- if (rval != QLA_SUCCESS)
-- goto done;
-+ goto done_free;
-+
-+ pfc_host_stat->link_failure_count = stats->link_fail_cnt;
-+ pfc_host_stat->loss_of_sync_count = stats->loss_sync_cnt;
-+ pfc_host_stat->loss_of_signal_count = stats->loss_sig_cnt;
-+ pfc_host_stat->prim_seq_protocol_err_count = stats->prim_seq_err_cnt;
-+ pfc_host_stat->invalid_tx_word_count = stats->inval_xmit_word_cnt;
-+ pfc_host_stat->invalid_crc_count = stats->inval_crc_cnt;
-+ if (IS_FWI2_CAPABLE(ha)) {
-+ pfc_host_stat->tx_frames = stats->tx_frames;
-+ pfc_host_stat->rx_frames = stats->rx_frames;
-+ pfc_host_stat->dumped_frames = stats->dumped_frames;
-+ pfc_host_stat->nos_count = stats->nos_rcvd;
-+ }
+- ret = register_reboot_notifier(&ali_notifier);
++ ret = misc_register(&ali_miscdev);
+ if (ret != 0) {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- ret);
+- goto unreg_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ WATCHDOG_MINOR, ret);
++ goto unreg_reboot;
+ }
-- pfc_host_stat->link_failure_count = stat_buf.link_fail_cnt;
-- pfc_host_stat->loss_of_sync_count = stat_buf.loss_sync_cnt;
-- pfc_host_stat->loss_of_signal_count = stat_buf.loss_sig_cnt;
-- pfc_host_stat->prim_seq_protocol_err_count = stat_buf.prim_seq_err_cnt;
-- pfc_host_stat->invalid_tx_word_count = stat_buf.inval_xmit_word_cnt;
-- pfc_host_stat->invalid_crc_count = stat_buf.inval_crc_cnt;
-+done_free:
-+ dma_pool_free(ha->s_dma_pool, stats, stats_dma);
- done:
- return pfc_host_stat;
+ printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d)\n",
+@@ -432,8 +432,8 @@ static int __init watchdog_init(void)
+
+ out:
+ return ret;
+-unreg_miscdev:
+- misc_deregister(&ali_miscdev);
++unreg_reboot:
++ unregister_reboot_notifier(&ali_notifier);
+ goto out;
}
-@@ -1113,7 +1129,7 @@ vport_create_failed_2:
- return FC_VPORT_FAILED;
+
+@@ -449,8 +449,8 @@ static void __exit watchdog_exit(void)
+ ali_stop();
+
+ /* Deregister */
+- unregister_reboot_notifier(&ali_notifier);
+ misc_deregister(&ali_miscdev);
++ unregister_reboot_notifier(&ali_notifier);
+ pci_dev_put(ali_pci);
}
--int
-+static int
- qla24xx_vport_delete(struct fc_vport *fc_vport)
- {
- scsi_qla_host_t *ha = shost_priv(fc_vport->shost);
-@@ -1124,7 +1140,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+diff --git a/drivers/watchdog/alim7101_wdt.c b/drivers/watchdog/alim7101_wdt.c
+index 67aed9f..238273c 100644
+--- a/drivers/watchdog/alim7101_wdt.c
++++ b/drivers/watchdog/alim7101_wdt.c
+@@ -377,18 +377,18 @@ static int __init alim7101_wdt_init(void)
+ timeout);
+ }
- down(&ha->vport_sem);
- ha->cur_vport_count--;
-- clear_bit(vha->vp_idx, (unsigned long *)ha->vp_idx_map);
-+ clear_bit(vha->vp_idx, ha->vp_idx_map);
- up(&ha->vport_sem);
+- rc = misc_register(&wdt_miscdev);
++ rc = register_reboot_notifier(&wdt_notifier);
+ if (rc) {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- wdt_miscdev.minor, rc);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ rc);
+ goto err_out;
+ }
- kfree(vha->node_name);
-@@ -1146,7 +1162,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+- rc = register_reboot_notifier(&wdt_notifier);
++ rc = misc_register(&wdt_miscdev);
+ if (rc) {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- rc);
+- goto err_out_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ wdt_miscdev.minor, rc);
++ goto err_out_reboot;
+ }
+
+ if (nowayout) {
+@@ -399,8 +399,8 @@ static int __init alim7101_wdt_init(void)
+ timeout, nowayout);
return 0;
+
+-err_out_miscdev:
+- misc_deregister(&wdt_miscdev);
++err_out_reboot:
++ unregister_reboot_notifier(&wdt_notifier);
+ err_out:
+ pci_dev_put(alim7101_pmu);
+ return rc;
+diff --git a/drivers/watchdog/ar7_wdt.c b/drivers/watchdog/ar7_wdt.c
+index cdaab8c..2eb48c0 100644
+--- a/drivers/watchdog/ar7_wdt.c
++++ b/drivers/watchdog/ar7_wdt.c
+@@ -279,7 +279,7 @@ static int ar7_wdt_ioctl(struct inode *inode, struct file *file,
+ }
}
--int
-+static int
- qla24xx_vport_disable(struct fc_vport *fc_vport, bool disable)
- {
- scsi_qla_host_t *vha = fc_vport->dd_data;
-diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
-index eaa04da..d88e98c 100644
---- a/drivers/scsi/qla2xxx/qla_dbg.c
-+++ b/drivers/scsi/qla2xxx/qla_dbg.c
-@@ -1051,6 +1051,7 @@ qla25xx_fw_dump(scsi_qla_host_t *ha, int hardware_locked)
- struct qla25xx_fw_dump *fw;
- uint32_t ext_mem_cnt;
- void *nxt;
-+ struct qla2xxx_fce_chain *fcec;
+-static struct file_operations ar7_wdt_fops = {
++static const struct file_operations ar7_wdt_fops = {
+ .owner = THIS_MODULE,
+ .write = ar7_wdt_write,
+ .ioctl = ar7_wdt_ioctl,
+diff --git a/drivers/watchdog/bfin_wdt.c b/drivers/watchdog/bfin_wdt.c
+index 31dc7a6..472be10 100644
+--- a/drivers/watchdog/bfin_wdt.c
++++ b/drivers/watchdog/bfin_wdt.c
+@@ -390,7 +390,7 @@ static struct platform_driver bfin_wdt_driver = {
+ .resume = bfin_wdt_resume,
+ };
- risc_address = ext_mem_cnt = 0;
- flags = 0;
-@@ -1321,10 +1322,31 @@ qla25xx_fw_dump(scsi_qla_host_t *ha, int hardware_locked)
- if (rval != QLA_SUCCESS)
- goto qla25xx_fw_dump_failed_0;
+-static struct file_operations bfin_wdt_fops = {
++static const struct file_operations bfin_wdt_fops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = bfin_wdt_write,
+diff --git a/drivers/watchdog/it8712f_wdt.c b/drivers/watchdog/it8712f_wdt.c
+index 6330fc0..1b6d7d1 100644
+--- a/drivers/watchdog/it8712f_wdt.c
++++ b/drivers/watchdog/it8712f_wdt.c
+@@ -296,7 +296,7 @@ it8712f_wdt_release(struct inode *inode, struct file *file)
+ return 0;
+ }
-+ /* Fibre Channel Trace Buffer. */
- nxt = qla2xxx_copy_queues(ha, nxt);
- if (ha->eft)
- memcpy(nxt, ha->eft, ntohl(ha->fw_dump->eft_size));
+-static struct file_operations it8712f_wdt_fops = {
++static const struct file_operations it8712f_wdt_fops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = it8712f_wdt_write,
+diff --git a/drivers/watchdog/mpc5200_wdt.c b/drivers/watchdog/mpc5200_wdt.c
+index 11f6a11..80a91d4 100644
+--- a/drivers/watchdog/mpc5200_wdt.c
++++ b/drivers/watchdog/mpc5200_wdt.c
+@@ -158,7 +158,7 @@ static int mpc5200_wdt_release(struct inode *inode, struct file *file)
+ return 0;
+ }
-+ /* Fibre Channel Event Buffer. */
-+ if (!ha->fce)
-+ goto qla25xx_fw_dump_failed_0;
+-static struct file_operations mpc5200_wdt_fops = {
++static const struct file_operations mpc5200_wdt_fops = {
+ .owner = THIS_MODULE,
+ .write = mpc5200_wdt_write,
+ .ioctl = mpc5200_wdt_ioctl,
+diff --git a/drivers/watchdog/mtx-1_wdt.c b/drivers/watchdog/mtx-1_wdt.c
+index dcfd401..9845174 100644
+--- a/drivers/watchdog/mtx-1_wdt.c
++++ b/drivers/watchdog/mtx-1_wdt.c
+@@ -180,7 +180,7 @@ static ssize_t mtx1_wdt_write(struct file *file, const char *buf, size_t count,
+ return count;
+ }
+
+-static struct file_operations mtx1_wdt_fops = {
++static const struct file_operations mtx1_wdt_fops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .ioctl = mtx1_wdt_ioctl,
+diff --git a/drivers/watchdog/sbc60xxwdt.c b/drivers/watchdog/sbc60xxwdt.c
+index e4f3cb6..ef76f01 100644
+--- a/drivers/watchdog/sbc60xxwdt.c
++++ b/drivers/watchdog/sbc60xxwdt.c
+@@ -359,20 +359,20 @@ static int __init sbc60xxwdt_init(void)
+ }
+ }
+
+- rc = misc_register(&wdt_miscdev);
++ rc = register_reboot_notifier(&wdt_notifier);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- wdt_miscdev.minor, rc);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ rc);
+ goto err_out_region2;
+ }
+
+- rc = register_reboot_notifier(&wdt_notifier);
++ rc = misc_register(&wdt_miscdev);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- rc);
+- goto err_out_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ wdt_miscdev.minor, rc);
++ goto err_out_reboot;
+ }
+
+ printk(KERN_INFO PFX "WDT driver for 60XX single board computer initialised. timeout=%d sec (nowayout=%d)\n",
+@@ -380,8 +380,8 @@ static int __init sbc60xxwdt_init(void)
+
+ return 0;
+
+-err_out_miscdev:
+- misc_deregister(&wdt_miscdev);
++err_out_reboot:
++ unregister_reboot_notifier(&wdt_notifier);
+ err_out_region2:
+ if ((wdt_stop != 0x45) && (wdt_stop != wdt_start))
+ release_region(wdt_stop,1);
+diff --git a/drivers/watchdog/scx200_wdt.c b/drivers/watchdog/scx200_wdt.c
+index d4fd0fa..d55882b 100644
+--- a/drivers/watchdog/scx200_wdt.c
++++ b/drivers/watchdog/scx200_wdt.c
+@@ -231,17 +231,17 @@ static int __init scx200_wdt_init(void)
+
+ sema_init(&open_semaphore, 1);
+
+- r = misc_register(&scx200_wdt_miscdev);
++ r = register_reboot_notifier(&scx200_wdt_notifier);
+ if (r) {
++ printk(KERN_ERR NAME ": unable to register reboot notifier");
+ release_region(scx200_cb_base + SCx200_WDT_OFFSET,
+ SCx200_WDT_SIZE);
+ return r;
+ }
+
+- r = register_reboot_notifier(&scx200_wdt_notifier);
++ r = misc_register(&scx200_wdt_miscdev);
+ if (r) {
+- printk(KERN_ERR NAME ": unable to register reboot notifier");
+- misc_deregister(&scx200_wdt_miscdev);
++ unregister_reboot_notifier(&scx200_wdt_notifier);
+ release_region(scx200_cb_base + SCx200_WDT_OFFSET,
+ SCx200_WDT_SIZE);
+ return r;
+@@ -252,8 +252,8 @@ static int __init scx200_wdt_init(void)
+
+ static void __exit scx200_wdt_cleanup(void)
+ {
+- unregister_reboot_notifier(&scx200_wdt_notifier);
+ misc_deregister(&scx200_wdt_miscdev);
++ unregister_reboot_notifier(&scx200_wdt_notifier);
+ release_region(scx200_cb_base + SCx200_WDT_OFFSET,
+ SCx200_WDT_SIZE);
+ }
+diff --git a/drivers/watchdog/txx9wdt.c b/drivers/watchdog/txx9wdt.c
+new file mode 100644
+index 0000000..328b3c7
+--- /dev/null
++++ b/drivers/watchdog/txx9wdt.c
+@@ -0,0 +1,276 @@
++/*
++ * txx9wdt: A Hardware Watchdog Driver for TXx9 SoCs
++ *
++ * Copyright (C) 2007 Atsushi Nemoto <anemo at mba.ocn.ne.jp>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/types.h>
++#include <linux/miscdevice.h>
++#include <linux/watchdog.h>
++#include <linux/fs.h>
++#include <linux/reboot.h>
++#include <linux/init.h>
++#include <linux/uaccess.h>
++#include <linux/platform_device.h>
++#include <linux/clk.h>
++#include <linux/err.h>
++#include <linux/io.h>
++#include <asm/txx9tmr.h>
++
++#define TIMER_MARGIN 60 /* Default is 60 seconds */
++
++static int timeout = TIMER_MARGIN; /* in seconds */
++module_param(timeout, int, 0);
++MODULE_PARM_DESC(timeout,
++ "Watchdog timeout in seconds. "
++ "(0<timeout<((2^" __MODULE_STRING(TXX9_TIMER_BITS) ")/(IMCLK/256)), "
++ "default=" __MODULE_STRING(TIMER_MARGIN) ")");
++
++static int nowayout = WATCHDOG_NOWAYOUT;
++module_param(nowayout, int, 0);
++MODULE_PARM_DESC(nowayout,
++ "Watchdog cannot be stopped once started "
++ "(default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
++
++#define WD_TIMER_CCD 7 /* 1/256 */
++#define WD_TIMER_CLK (clk_get_rate(txx9_imclk) / (2 << WD_TIMER_CCD))
++#define WD_MAX_TIMEOUT ((0xffffffff >> (32 - TXX9_TIMER_BITS)) / WD_TIMER_CLK)
++
++static unsigned long txx9wdt_alive;
++static int expect_close;
++static struct txx9_tmr_reg __iomem *txx9wdt_reg;
++static struct clk *txx9_imclk;
++
++static void txx9wdt_ping(void)
++{
++ __raw_writel(TXx9_TMWTMR_TWIE | TXx9_TMWTMR_TWC, &txx9wdt_reg->wtmr);
++}
++
++static void txx9wdt_start(void)
++{
++ __raw_writel(WD_TIMER_CLK * timeout, &txx9wdt_reg->cpra);
++ __raw_writel(WD_TIMER_CCD, &txx9wdt_reg->ccdr);
++ __raw_writel(0, &txx9wdt_reg->tisr); /* clear pending interrupt */
++ __raw_writel(TXx9_TMTCR_TCE | TXx9_TMTCR_CCDE | TXx9_TMTCR_TMODE_WDOG,
++ &txx9wdt_reg->tcr);
++ __raw_writel(TXx9_TMWTMR_TWIE | TXx9_TMWTMR_TWC, &txx9wdt_reg->wtmr);
++}
++
++static void txx9wdt_stop(void)
++{
++ __raw_writel(TXx9_TMWTMR_WDIS, &txx9wdt_reg->wtmr);
++ __raw_writel(__raw_readl(&txx9wdt_reg->tcr) & ~TXx9_TMTCR_TCE,
++ &txx9wdt_reg->tcr);
++}
++
++static int txx9wdt_open(struct inode *inode, struct file *file)
++{
++ if (test_and_set_bit(0, &txx9wdt_alive))
++ return -EBUSY;
++
++ if (__raw_readl(&txx9wdt_reg->tcr) & TXx9_TMTCR_TCE) {
++ clear_bit(0, &txx9wdt_alive);
++ return -EBUSY;
++ }
++
++ if (nowayout)
++ __module_get(THIS_MODULE);
++
++ txx9wdt_start();
++ return nonseekable_open(inode, file);
++}
++
++static int txx9wdt_release(struct inode *inode, struct file *file)
++{
++ if (expect_close)
++ txx9wdt_stop();
++ else {
++ printk(KERN_CRIT "txx9wdt: "
++ "Unexpected close, not stopping watchdog!\n");
++ txx9wdt_ping();
++ }
++ clear_bit(0, &txx9wdt_alive);
++ expect_close = 0;
++ return 0;
++}
++
++static ssize_t txx9wdt_write(struct file *file, const char __user *data,
++ size_t len, loff_t *ppos)
++{
++ if (len) {
++ if (!nowayout) {
++ size_t i;
++
++ expect_close = 0;
++ for (i = 0; i != len; i++) {
++ char c;
++ if (get_user(c, data + i))
++ return -EFAULT;
++ if (c == 'V')
++ expect_close = 1;
++ }
++ }
++ txx9wdt_ping();
++ }
++ return len;
++}
++
++static int txx9wdt_ioctl(struct inode *inode, struct file *file,
++ unsigned int cmd, unsigned long arg)
++{
++ void __user *argp = (void __user *)arg;
++ int __user *p = argp;
++ int new_timeout;
++ static struct watchdog_info ident = {
++ .options = WDIOF_SETTIMEOUT |
++ WDIOF_KEEPALIVEPING |
++ WDIOF_MAGICCLOSE,
++ .firmware_version = 0,
++ .identity = "Hardware Watchdog for TXx9",
++ };
++
++ switch (cmd) {
++ default:
++ return -ENOTTY;
++ case WDIOC_GETSUPPORT:
++ return copy_to_user(argp, &ident, sizeof(ident)) ? -EFAULT : 0;
++ case WDIOC_GETSTATUS:
++ case WDIOC_GETBOOTSTATUS:
++ return put_user(0, p);
++ case WDIOC_KEEPALIVE:
++ txx9wdt_ping();
++ return 0;
++ case WDIOC_SETTIMEOUT:
++ if (get_user(new_timeout, p))
++ return -EFAULT;
++ if (new_timeout < 1 || new_timeout > WD_MAX_TIMEOUT)
++ return -EINVAL;
++ timeout = new_timeout;
++ txx9wdt_stop();
++ txx9wdt_start();
++	/* Fall through */
++ case WDIOC_GETTIMEOUT:
++ return put_user(timeout, p);
++ }
++}
++
++static int txx9wdt_notify_sys(struct notifier_block *this, unsigned long code,
++ void *unused)
++{
++ if (code == SYS_DOWN || code == SYS_HALT)
++ txx9wdt_stop();
++ return NOTIFY_DONE;
++}
++
++static const struct file_operations txx9wdt_fops = {
++ .owner = THIS_MODULE,
++ .llseek = no_llseek,
++ .write = txx9wdt_write,
++ .ioctl = txx9wdt_ioctl,
++ .open = txx9wdt_open,
++ .release = txx9wdt_release,
++};
++
++static struct miscdevice txx9wdt_miscdev = {
++ .minor = WATCHDOG_MINOR,
++ .name = "watchdog",
++ .fops = &txx9wdt_fops,
++};
++
++static struct notifier_block txx9wdt_notifier = {
++ .notifier_call = txx9wdt_notify_sys
++};
++
++static int __init txx9wdt_probe(struct platform_device *dev)
++{
++ struct resource *res;
++ int ret;
++
++ txx9_imclk = clk_get(NULL, "imbus_clk");
++ if (IS_ERR(txx9_imclk)) {
++ ret = PTR_ERR(txx9_imclk);
++ txx9_imclk = NULL;
++ goto exit;
++ }
++ ret = clk_enable(txx9_imclk);
++ if (ret) {
++ clk_put(txx9_imclk);
++ txx9_imclk = NULL;
++ goto exit;
++ }
++
++ res = platform_get_resource(dev, IORESOURCE_MEM, 0);
++ if (!res)
++ goto exit_busy;
++ if (!devm_request_mem_region(&dev->dev,
++ res->start, res->end - res->start + 1,
++ "txx9wdt"))
++ goto exit_busy;
++ txx9wdt_reg = devm_ioremap(&dev->dev,
++ res->start, res->end - res->start + 1);
++ if (!txx9wdt_reg)
++ goto exit_busy;
++
++ ret = register_reboot_notifier(&txx9wdt_notifier);
++ if (ret)
++ goto exit;
++
++ ret = misc_register(&txx9wdt_miscdev);
++ if (ret) {
++ unregister_reboot_notifier(&txx9wdt_notifier);
++ goto exit;
++ }
++
++ printk(KERN_INFO "Hardware Watchdog Timer for TXx9: "
++ "timeout=%d sec (max %ld) (nowayout= %d)\n",
++ timeout, WD_MAX_TIMEOUT, nowayout);
++
++ return 0;
++exit_busy:
++ ret = -EBUSY;
++exit:
++ if (txx9_imclk) {
++ clk_disable(txx9_imclk);
++ clk_put(txx9_imclk);
++ }
++ return ret;
++}
++
++static int __exit txx9wdt_remove(struct platform_device *dev)
++{
++ misc_deregister(&txx9wdt_miscdev);
++ unregister_reboot_notifier(&txx9wdt_notifier);
++ clk_disable(txx9_imclk);
++ clk_put(txx9_imclk);
++ return 0;
++}
++
++static struct platform_driver txx9wdt_driver = {
++ .remove = __exit_p(txx9wdt_remove),
++ .driver = {
++ .name = "txx9wdt",
++ .owner = THIS_MODULE,
++ },
++};
++
++static int __init watchdog_init(void)
++{
++ return platform_driver_probe(&txx9wdt_driver, txx9wdt_probe);
++}
++
++static void __exit watchdog_exit(void)
++{
++ platform_driver_unregister(&txx9wdt_driver);
++}
++
++module_init(watchdog_init);
++module_exit(watchdog_exit);
++
++MODULE_DESCRIPTION("TXx9 Watchdog Driver");
++MODULE_LICENSE("GPL");
++MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+diff --git a/drivers/watchdog/w83877f_wdt.c b/drivers/watchdog/w83877f_wdt.c
+index bcc9d48..f510a3a 100644
+--- a/drivers/watchdog/w83877f_wdt.c
++++ b/drivers/watchdog/w83877f_wdt.c
+@@ -373,20 +373,20 @@ static int __init w83877f_wdt_init(void)
+ goto err_out_region1;
+ }
+
+- rc = misc_register(&wdt_miscdev);
++ rc = register_reboot_notifier(&wdt_notifier);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- wdt_miscdev.minor, rc);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ rc);
+ goto err_out_region2;
+ }
+
+- rc = register_reboot_notifier(&wdt_notifier);
++ rc = misc_register(&wdt_miscdev);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- rc);
+- goto err_out_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ wdt_miscdev.minor, rc);
++ goto err_out_reboot;
+ }
+
+ printk(KERN_INFO PFX "WDT driver for W83877F initialised. timeout=%d sec (nowayout=%d)\n",
+@@ -394,8 +394,8 @@ static int __init w83877f_wdt_init(void)
+
+ return 0;
+
+-err_out_miscdev:
+- misc_deregister(&wdt_miscdev);
++err_out_reboot:
++ unregister_reboot_notifier(&wdt_notifier);
+ err_out_region2:
+ release_region(WDT_PING,1);
+ err_out_region1:
+diff --git a/drivers/watchdog/w83977f_wdt.c b/drivers/watchdog/w83977f_wdt.c
+index b475529..b209bcd 100644
+--- a/drivers/watchdog/w83977f_wdt.c
++++ b/drivers/watchdog/w83977f_wdt.c
+@@ -494,20 +494,20 @@ static int __init w83977f_wdt_init(void)
+ goto err_out;
+ }
+
+- rc = misc_register(&wdt_miscdev);
++ rc = register_reboot_notifier(&wdt_notifier);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- wdt_miscdev.minor, rc);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ rc);
+ goto err_out_region;
+ }
+
+- rc = register_reboot_notifier(&wdt_notifier);
++ rc = misc_register(&wdt_miscdev);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- rc);
+- goto err_out_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ wdt_miscdev.minor, rc);
++ goto err_out_reboot;
+ }
+
+ printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d testmode=%d)\n",
+@@ -515,8 +515,8 @@ static int __init w83977f_wdt_init(void)
+
+ return 0;
+
+-err_out_miscdev:
+- misc_deregister(&wdt_miscdev);
++err_out_reboot:
++ unregister_reboot_notifier(&wdt_notifier);
+ err_out_region:
+ release_region(IO_INDEX_PORT,2);
+ err_out:
+diff --git a/drivers/watchdog/wdt.c b/drivers/watchdog/wdt.c
+index 53d0bb4..756fb15 100644
+--- a/drivers/watchdog/wdt.c
++++ b/drivers/watchdog/wdt.c
+@@ -70,6 +70,8 @@ MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default=" _
+ static int io=0x240;
+ static int irq=11;
+
++static DEFINE_SPINLOCK(wdt_lock);
++
+ module_param(io, int, 0);
+ MODULE_PARM_DESC(io, "WDT io port (default=0x240)");
+ module_param(irq, int, 0);
+@@ -109,6 +111,8 @@ static void wdt_ctr_load(int ctr, int val)
+
+ static int wdt_start(void)
+ {
++ unsigned long flags;
++ spin_lock_irqsave(&wdt_lock, flags);
+ inb_p(WDT_DC); /* Disable watchdog */
+ wdt_ctr_mode(0,3); /* Program CTR0 for Mode 3: Square Wave Generator */
+ wdt_ctr_mode(1,2); /* Program CTR1 for Mode 2: Rate Generator */
+@@ -117,6 +121,7 @@ static int wdt_start(void)
+ wdt_ctr_load(1,wd_heartbeat); /* Heartbeat */
+ wdt_ctr_load(2,65535); /* Length of reset pulse */
+ outb_p(0, WDT_DC); /* Enable watchdog */
++ spin_unlock_irqrestore(&wdt_lock, flags);
+ return 0;
+ }
+
+@@ -128,9 +133,12 @@ static int wdt_start(void)
+
+ static int wdt_stop (void)
+ {
++ unsigned long flags;
++ spin_lock_irqsave(&wdt_lock, flags);
+ /* Turn the card off */
+ inb_p(WDT_DC); /* Disable watchdog */
+ wdt_ctr_load(2,0); /* 0 length reset pulses now */
++ spin_unlock_irqrestore(&wdt_lock, flags);
+ return 0;
+ }
+
+@@ -143,11 +151,14 @@ static int wdt_stop (void)
+
+ static int wdt_ping(void)
+ {
++ unsigned long flags;
++ spin_lock_irqsave(&wdt_lock, flags);
+ /* Write a watchdog value */
+ inb_p(WDT_DC); /* Disable watchdog */
+ wdt_ctr_mode(1,2); /* Re-Program CTR1 for Mode 2: Rate Generator */
+ wdt_ctr_load(1,wd_heartbeat); /* Heartbeat */
+ outb_p(0, WDT_DC); /* Enable watchdog */
++ spin_unlock_irqrestore(&wdt_lock, flags);
+ return 0;
+ }
+
+@@ -182,7 +193,12 @@ static int wdt_set_heartbeat(int t)
+
+ static int wdt_get_status(int *status)
+ {
+- unsigned char new_status=inb_p(WDT_SR);
++ unsigned char new_status;
++ unsigned long flags;
++
++ spin_lock_irqsave(&wdt_lock, flags);
++ new_status = inb_p(WDT_SR);
++ spin_unlock_irqrestore(&wdt_lock, flags);
+
+ *status=0;
+ if (new_status & WDC_SR_ISOI0)
+@@ -214,8 +230,12 @@ static int wdt_get_status(int *status)
+
+ static int wdt_get_temperature(int *temperature)
+ {
+- unsigned short c=inb_p(WDT_RT);
++ unsigned short c;
++ unsigned long flags;
+
++ spin_lock_irqsave(&wdt_lock, flags);
++ c = inb_p(WDT_RT);
++ spin_unlock_irqrestore(&wdt_lock, flags);
+ *temperature = (c * 11 / 15) + 7;
+ return 0;
+ }
+@@ -237,7 +257,10 @@ static irqreturn_t wdt_interrupt(int irq, void *dev_id)
+ * Read the status register see what is up and
+ * then printk it.
+ */
+- unsigned char status=inb_p(WDT_SR);
++ unsigned char status;
++
++ spin_lock(&wdt_lock);
++ status = inb_p(WDT_SR);
+
+ printk(KERN_CRIT "WDT status %d\n", status);
+
+@@ -265,6 +288,7 @@ static irqreturn_t wdt_interrupt(int irq, void *dev_id)
+ printk(KERN_CRIT "Reset in 5ms.\n");
+ #endif
+ }
++ spin_unlock(&wdt_lock);
+ return IRQ_HANDLED;
+ }
+
+diff --git a/drivers/watchdog/wdt977.c b/drivers/watchdog/wdt977.c
+index 9b7f6b6..fb4b876 100644
+--- a/drivers/watchdog/wdt977.c
++++ b/drivers/watchdog/wdt977.c
+@@ -470,20 +470,20 @@ static int __init wd977_init(void)
+ }
+ }
+
+- rc = misc_register(&wdt977_miscdev);
++ rc = register_reboot_notifier(&wdt977_notifier);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
+- wdt977_miscdev.minor, rc);
++ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
++ rc);
+ goto err_out_region;
+ }
+
+- rc = register_reboot_notifier(&wdt977_notifier);
++ rc = misc_register(&wdt977_miscdev);
+ if (rc)
+ {
+- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
+- rc);
+- goto err_out_miscdev;
++ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
++ wdt977_miscdev.minor, rc);
++ goto err_out_reboot;
+ }
+
+ printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d, testmode=%i)\n",
+@@ -491,8 +491,8 @@ static int __init wd977_init(void)
+
+ return 0;
+
+-err_out_miscdev:
+- misc_deregister(&wdt977_miscdev);
++err_out_reboot:
++ unregister_reboot_notifier(&wdt977_notifier);
+ err_out_region:
+ if (!machine_is_netwinder())
+ release_region(IO_INDEX_PORT,2);
+diff --git a/fs/Kconfig b/fs/Kconfig
+index 781b47d..219ec06 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -236,6 +236,7 @@ config JBD_DEBUG
+
+ config JBD2
+ tristate
++ select CRC32
+ help
+ This is a generic journaling layer for block devices that support
+ both 32-bit and 64-bit block numbers. It is currently used by
+@@ -440,14 +441,8 @@ config OCFS2_FS
+ Tools web page: http://oss.oracle.com/projects/ocfs2-tools
+ OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
+
+- Note: Features which OCFS2 does not support yet:
+- - extended attributes
+- - quotas
+- - cluster aware flock
+- - Directory change notification (F_NOTIFY)
+- - Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease)
+- - POSIX ACLs
+- - readpages / writepages (not user visible)
++ For more information on OCFS2, see the file
++ <file:Documentation/filesystems/ocfs2.txt>.
+
+ config OCFS2_DEBUG_MASKLOG
+ bool "OCFS2 logging support"
+@@ -1028,8 +1023,8 @@ config HUGETLB_PAGE
+ def_bool HUGETLBFS
+
+ config CONFIGFS_FS
+- tristate "Userspace-driven configuration filesystem (EXPERIMENTAL)"
+- depends on SYSFS && EXPERIMENTAL
++ tristate "Userspace-driven configuration filesystem"
++ depends on SYSFS
+ help
+ configfs is a ram-based filesystem that provides the converse
+ of sysfs's functionality. Where sysfs is a filesystem-based
+@@ -1905,13 +1900,15 @@ config CIFS
+ file servers such as Windows 2000 (including Windows 2003, NT 4
+ and Windows XP) as well by Samba (which provides excellent CIFS
+ server support for Linux and many other operating systems). Limited
+- support for OS/2 and Windows ME and similar servers is provided as well.
+-
+- The intent of the cifs module is to provide an advanced
+- network file system client for mounting to CIFS compliant servers,
+- including support for dfs (hierarchical name space), secure per-user
+- session establishment, safe distributed caching (oplock), optional
+- packet signing, Unicode and other internationalization improvements.
++ support for OS/2 and Windows ME and similar servers is provided as
++ well.
++
++ The cifs module provides an advanced network file system
++ client for mounting to CIFS compliant servers. It includes
++ support for DFS (hierarchical name space), secure per-user
++ session establishment via Kerberos or NTLM or NTLMv2,
++ safe distributed caching (oplock), optional packet
++ signing, Unicode and other internationalization improvements.
+ If you need to mount to Samba or Windows from this machine, say Y.
+
+ config CIFS_STATS
+@@ -1943,7 +1940,8 @@ config CIFS_WEAK_PW_HASH
+ (since 1997) support stronger NTLM (and even NTLMv2 and Kerberos)
+ security mechanisms. These hash the password more securely
+ than the mechanisms used in the older LANMAN version of the
+- SMB protocol needed to establish sessions with old SMB servers.
++ SMB protocol but LANMAN based authentication is needed to
++ establish sessions with some old SMB servers.
+
+ Enabling this option allows the cifs module to mount to older
+ LANMAN based servers such as OS/2 and Windows 95, but such
+@@ -1951,8 +1949,8 @@ config CIFS_WEAK_PW_HASH
+ security mechanisms if you are on a public network. Unless you
+ have a need to access old SMB servers (and are on a private
+ network) you probably want to say N. Even if this support
+- is enabled in the kernel build, they will not be used
+- automatically. At runtime LANMAN mounts are disabled but
++ is enabled in the kernel build, LANMAN authentication will not be
++ used automatically. At runtime LANMAN mounts are disabled but
+ can be set to required (or optional) either in
+ /proc/fs/cifs (see fs/cifs/README for more detail) or via an
+ option on the mount command. This support is disabled by
+@@ -2018,12 +2016,22 @@ config CIFS_UPCALL
+ depends on CIFS_EXPERIMENTAL
+ depends on KEYS
+ help
+- Enables an upcall mechanism for CIFS which will be used to contact
+- userspace helper utilities to provide SPNEGO packaged Kerberos
+- tickets which are needed to mount to certain secure servers
++ Enables an upcall mechanism for CIFS which accesses
++ userspace helper utilities to provide SPNEGO packaged (RFC 4178)
++ Kerberos tickets which are needed to mount to certain secure servers
+ (for which more secure Kerberos authentication is required). If
+ unsure, say N.
+
++config CIFS_DFS_UPCALL
++ bool "DFS feature support (EXPERIMENTAL)"
++ depends on CIFS_EXPERIMENTAL
++ depends on KEYS
++ help
++ Enables an upcall mechanism for CIFS which contacts userspace
++ helper utilities to provide server name resolution (host names to
++ IP addresses) which is needed for implicit mounts of DFS junction
++ points. If unsure, say N.
++
+ config NCP_FS
+ tristate "NCP file system support (to mount NetWare volumes)"
+ depends on IPX!=n || INET
+@@ -2130,4 +2138,3 @@ source "fs/nls/Kconfig"
+ source "fs/dlm/Kconfig"
+-
+ endmenu
+diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
+index d4fc609..7c3d5f9 100644
+--- a/fs/Kconfig.binfmt
++++ b/fs/Kconfig.binfmt
+@@ -23,6 +23,10 @@ config BINFMT_ELF
+ ld.so (check the file <file:Documentation/Changes> for location and
+ latest version).
+
++config COMPAT_BINFMT_ELF
++ bool
++ depends on COMPAT && MMU
++
+ config BINFMT_ELF_FDPIC
+ bool "Kernel support for FDPIC ELF binaries"
+ default y
+diff --git a/fs/Makefile b/fs/Makefile
+index 500cf15..1e7a11b 100644
+--- a/fs/Makefile
++++ b/fs/Makefile
+@@ -39,6 +39,7 @@ obj-$(CONFIG_BINFMT_MISC) += binfmt_misc.o
+ obj-y += binfmt_script.o
+
+ obj-$(CONFIG_BINFMT_ELF) += binfmt_elf.o
++obj-$(CONFIG_COMPAT_BINFMT_ELF) += compat_binfmt_elf.o
+ obj-$(CONFIG_BINFMT_ELF_FDPIC) += binfmt_elf_fdpic.o
+ obj-$(CONFIG_BINFMT_SOM) += binfmt_som.o
+ obj-$(CONFIG_BINFMT_FLAT) += binfmt_flat.o
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 33fe39a..0cc3597 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -546,11 +546,11 @@ static struct dentry *afs_lookup(struct inode *dir, struct dentry *dentry,
+ dentry->d_op = &afs_fs_dentry_operations;
+
+ d_add(dentry, inode);
+- _leave(" = 0 { vn=%u u=%u } -> { ino=%lu v=%lu }",
++ _leave(" = 0 { vn=%u u=%u } -> { ino=%lu v=%llu }",
+ fid.vnode,
+ fid.unique,
+ dentry->d_inode->i_ino,
+- dentry->d_inode->i_version);
++ (unsigned long long)dentry->d_inode->i_version);
+
+ return NULL;
+ }
+@@ -630,9 +630,10 @@ static int afs_d_revalidate(struct dentry *dentry, struct nameidata *nd)
+ * been deleted and replaced, and the original vnode ID has
+ * been reused */
+ if (fid.unique != vnode->fid.unique) {
+- _debug("%s: file deleted (uq %u -> %u I:%lu)",
++ _debug("%s: file deleted (uq %u -> %u I:%llu)",
+ dentry->d_name.name, fid.unique,
+- vnode->fid.unique, dentry->d_inode->i_version);
++ vnode->fid.unique,
++ (unsigned long long)dentry->d_inode->i_version);
+ spin_lock(&vnode->lock);
+ set_bit(AFS_VNODE_DELETED, &vnode->flags);
+ spin_unlock(&vnode->lock);
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index d196840..84750c8 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -301,7 +301,8 @@ int afs_getattr(struct vfsmount *mnt, struct dentry *dentry,
+
+ inode = dentry->d_inode;
+
+- _enter("{ ino=%lu v=%lu }", inode->i_ino, inode->i_version);
++ _enter("{ ino=%lu v=%llu }", inode->i_ino,
++ (unsigned long long)inode->i_version);
+
+ generic_fillattr(inode, stat);
+ return 0;
+diff --git a/fs/aio.c b/fs/aio.c
+index 9dec7d2..8a37dbb 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -397,7 +397,7 @@ void fastcall __put_ioctx(struct kioctx *ctx)
+ * This prevents races between the aio code path referencing the
+ * req (after submitting it) and aio_complete() freeing the req.
+ */
+-static struct kiocb *FASTCALL(__aio_get_req(struct kioctx *ctx));
++static struct kiocb *__aio_get_req(struct kioctx *ctx);
+ static struct kiocb fastcall *__aio_get_req(struct kioctx *ctx)
+ {
+ struct kiocb *req = NULL;
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index f0b3171..18ed6dd 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -45,7 +45,8 @@
+
+ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs);
+ static int load_elf_library(struct file *);
+-static unsigned long elf_map (struct file *, unsigned long, struct elf_phdr *, int, int);
++static unsigned long elf_map(struct file *, unsigned long, struct elf_phdr *,
++ int, int, unsigned long);
+
+ /*
+ * If we don't support core dumping, then supply a NULL so we
+@@ -298,33 +299,70 @@ create_elf_tables(struct linux_binprm *bprm, struct elfhdr *exec,
+ #ifndef elf_map
+
+ static unsigned long elf_map(struct file *filep, unsigned long addr,
+- struct elf_phdr *eppnt, int prot, int type)
++ struct elf_phdr *eppnt, int prot, int type,
++ unsigned long total_size)
+ {
+ unsigned long map_addr;
+- unsigned long pageoffset = ELF_PAGEOFFSET(eppnt->p_vaddr);
++ unsigned long size = eppnt->p_filesz + ELF_PAGEOFFSET(eppnt->p_vaddr);
++ unsigned long off = eppnt->p_offset - ELF_PAGEOFFSET(eppnt->p_vaddr);
++ addr = ELF_PAGESTART(addr);
++ size = ELF_PAGEALIGN(size);
+
+- down_write(¤t->mm->mmap_sem);
+ /* mmap() will return -EINVAL if given a zero size, but a
+ * segment with zero filesize is perfectly valid */
+- if (eppnt->p_filesz + pageoffset)
+- map_addr = do_mmap(filep, ELF_PAGESTART(addr),
+- eppnt->p_filesz + pageoffset, prot, type,
+- eppnt->p_offset - pageoffset);
+- else
+- map_addr = ELF_PAGESTART(addr);
++ if (!size)
++ return addr;
++
++ down_write(¤t->mm->mmap_sem);
++ /*
++ * total_size is the size of the ELF (interpreter) image.
++ * The _first_ mmap needs to know the full size, otherwise
++ * randomization might put this image into an overlapping
++ * position with the ELF binary image. (since size < total_size)
++ * So we first map the 'big' image - and unmap the remainder at
++ * the end. (which unmap is needed for ELF images with holes.)
++ */
++ if (total_size) {
++ total_size = ELF_PAGEALIGN(total_size);
++ map_addr = do_mmap(filep, addr, total_size, prot, type, off);
++ if (!BAD_ADDR(map_addr))
++ do_munmap(current->mm, map_addr+size, total_size-size);
++ } else
++ map_addr = do_mmap(filep, addr, size, prot, type, off);
+
-+static int
-+qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
+ up_write(&current->mm->mmap_sem);
+ return(map_addr);
+ }
+
+ #endif /* !elf_map */
+
++static unsigned long total_mapping_size(struct elf_phdr *cmds, int nr)
+{
-+ scsi_qla_host_t *ha = s->private;
-+ uint32_t cnt;
-+ uint32_t *fce;
-+ uint64_t fce_start;
++ int i, first_idx = -1, last_idx = -1;
+
-+ mutex_lock(&ha->fce_mutex);
++ for (i = 0; i < nr; i++) {
++ if (cmds[i].p_type == PT_LOAD) {
++ last_idx = i;
++ if (first_idx == -1)
++ first_idx = i;
++ }
++ }
++ if (first_idx == -1)
++ return 0;
+
-+ seq_printf(s, "FCE Trace Buffer\n");
-+ seq_printf(s, "In Pointer = %llx\n\n", ha->fce_wr);
-+ seq_printf(s, "Base = %llx\n\n", (unsigned long long) ha->fce_dma);
-+ seq_printf(s, "FCE Enable Registers\n");
-+ seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
-+ ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
-+ ha->fce_mb[5], ha->fce_mb[6]);
++ return cmds[last_idx].p_vaddr + cmds[last_idx].p_memsz -
++ ELF_PAGESTART(cmds[first_idx].p_vaddr);
++}
+
-+ fce = (uint32_t *) ha->fce;
-+ fce_start = (unsigned long long) ha->fce_dma;
-+ for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
-+ if (cnt % 8 == 0)
-+ seq_printf(s, "\n%llx: ",
-+ (unsigned long long)((cnt * 4) + fce_start));
-+ else
-+ seq_printf(s, " ");
-+ seq_printf(s, "%08x", *fce++);
-+ }
+
-+ seq_printf(s, "\nEnd\n");
+ /* This is much more generalized than the library routine read function,
+ so we keep this separate. Technically the library read function
+ is only provided so that we can read a.out libraries that have
+ an ELF header */
+
+ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+- struct file *interpreter, unsigned long *interp_load_addr)
++ struct file *interpreter, unsigned long *interp_map_addr,
++ unsigned long no_base)
+ {
+ struct elf_phdr *elf_phdata;
+ struct elf_phdr *eppnt;
+@@ -332,6 +370,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ int load_addr_set = 0;
+ unsigned long last_bss = 0, elf_bss = 0;
+ unsigned long error = ~0UL;
++ unsigned long total_size;
+ int retval, i, size;
+
+ /* First of all, some simple consistency checks */
+@@ -370,6 +409,12 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ goto out_close;
+ }
+
++ total_size = total_mapping_size(elf_phdata, interp_elf_ex->e_phnum);
++ if (!total_size) {
++ error = -EINVAL;
++ goto out_close;
++ }
+
-+ mutex_unlock(&ha->fce_mutex);
+ eppnt = elf_phdata;
+ for (i = 0; i < interp_elf_ex->e_phnum; i++, eppnt++) {
+ if (eppnt->p_type == PT_LOAD) {
+@@ -387,9 +432,14 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ vaddr = eppnt->p_vaddr;
+ if (interp_elf_ex->e_type == ET_EXEC || load_addr_set)
+ elf_type |= MAP_FIXED;
++ else if (no_base && interp_elf_ex->e_type == ET_DYN)
++ load_addr = -vaddr;
+
+ map_addr = elf_map(interpreter, load_addr + vaddr,
+- eppnt, elf_prot, elf_type);
++ eppnt, elf_prot, elf_type, total_size);
++ total_size = 0;
++ if (!*interp_map_addr)
++ *interp_map_addr = map_addr;
+ error = map_addr;
+ if (BAD_ADDR(map_addr))
+ goto out_close;
+@@ -455,8 +505,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ goto out_close;
+ }
+
+- *interp_load_addr = load_addr;
+- error = ((unsigned long)interp_elf_ex->e_entry) + load_addr;
++ error = load_addr;
+
+ out_close:
+ kfree(elf_phdata);
+@@ -546,14 +595,14 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ int load_addr_set = 0;
+ char * elf_interpreter = NULL;
+ unsigned int interpreter_type = INTERPRETER_NONE;
+- unsigned char ibcs2_interpreter = 0;
+ unsigned long error;
+ struct elf_phdr *elf_ppnt, *elf_phdata;
+ unsigned long elf_bss, elf_brk;
+ int elf_exec_fileno;
+ int retval, i;
+ unsigned int size;
+- unsigned long elf_entry, interp_load_addr = 0;
++ unsigned long elf_entry;
++ unsigned long interp_load_addr = 0;
+ unsigned long start_code, end_code, start_data, end_data;
+ unsigned long reloc_func_desc = 0;
+ char passed_fileno[6];
+@@ -663,14 +712,6 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ if (elf_interpreter[elf_ppnt->p_filesz - 1] != '\0')
+ goto out_free_interp;
+
+- /* If the program interpreter is one of these two,
+- * then assume an iBCS2 image. Otherwise assume
+- * a native linux image.
+- */
+- if (strcmp(elf_interpreter,"/usr/lib/libc.so.1") == 0 ||
+- strcmp(elf_interpreter,"/usr/lib/ld.so.1") == 0)
+- ibcs2_interpreter = 1;
+-
+ /*
+ * The early SET_PERSONALITY here is so that the lookup
+ * for the interpreter happens in the namespace of the
+@@ -690,7 +731,7 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ * switch really is going to happen - do this in
+ * flush_thread(). - akpm
+ */
+- SET_PERSONALITY(loc->elf_ex, ibcs2_interpreter);
++ SET_PERSONALITY(loc->elf_ex, 0);
+
+ interpreter = open_exec(elf_interpreter);
+ retval = PTR_ERR(interpreter);
+@@ -769,7 +810,7 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ goto out_free_dentry;
+ } else {
+ /* Executables without an interpreter also need a personality */
+- SET_PERSONALITY(loc->elf_ex, ibcs2_interpreter);
++ SET_PERSONALITY(loc->elf_ex, 0);
+ }
+
+ /* OK, we are done with that, now set up the arg stuff,
+@@ -803,7 +844,7 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+
+ /* Do this immediately, since STACK_TOP as used in setup_arg_pages
+ may depend on the personality. */
+- SET_PERSONALITY(loc->elf_ex, ibcs2_interpreter);
++ SET_PERSONALITY(loc->elf_ex, 0);
+ if (elf_read_implies_exec(loc->elf_ex, executable_stack))
+ current->personality |= READ_IMPLIES_EXEC;
+
+@@ -825,9 +866,7 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ current->mm->start_stack = bprm->p;
+
+ /* Now we do a little grungy work by mmaping the ELF image into
+- the correct location in memory. At this point, we assume that
+- the image should be loaded at fixed address, not at a variable
+- address. */
++ the correct location in memory. */
+ for(i = 0, elf_ppnt = elf_phdata;
+ i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+ int elf_prot = 0, elf_flags;
+@@ -881,11 +920,15 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ * default mmap base, as well as whatever program they
+ * might try to exec. This is because the brk will
+ * follow the loader, and is not movable. */
++#ifdef CONFIG_X86
++ load_bias = 0;
++#else
+ load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
++#endif
+ }
+
+ error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
+- elf_prot, elf_flags);
++ elf_prot, elf_flags, 0);
+ if (BAD_ADDR(error)) {
+ send_sig(SIGKILL, current, 0);
+ retval = IS_ERR((void *)error) ?
+@@ -961,13 +1004,25 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ }
+
+ if (elf_interpreter) {
+- if (interpreter_type == INTERPRETER_AOUT)
++ if (interpreter_type == INTERPRETER_AOUT) {
+ elf_entry = load_aout_interp(&loc->interp_ex,
+ interpreter);
+- else
++ } else {
++ unsigned long uninitialized_var(interp_map_addr);
+
-+ return 0;
+ elf_entry = load_elf_interp(&loc->interp_elf_ex,
+ interpreter,
+- &interp_load_addr);
++ &interp_map_addr,
++ load_bias);
++ if (!IS_ERR((void *)elf_entry)) {
++ /*
++ * load_elf_interp() returns relocation
++ * adjustment
++ */
++ interp_load_addr = elf_entry;
++ elf_entry += loc->interp_elf_ex.e_entry;
++ }
++ }
+ if (BAD_ADDR(elf_entry)) {
+ force_sig(SIGSEGV, current);
+ retval = IS_ERR((void *)elf_entry) ?
+@@ -1021,6 +1076,12 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+ current->mm->end_data = end_data;
+ current->mm->start_stack = bprm->p;
+
++#ifdef arch_randomize_brk
++ if (current->flags & PF_RANDOMIZE)
++ current->mm->brk = current->mm->start_brk =
++ arch_randomize_brk(current->mm);
++#endif
++
+ if (current->personality & MMAP_PAGE_ZERO) {
+ /* Why this, you ask??? Well SVr4 maps page 0 as read-only,
+ and some applications "depend" upon this behavior.
+@@ -1325,7 +1386,8 @@ static int writenote(struct memelfnote *men, struct file *file,
+ if (!dump_seek(file, (off))) \
+ goto end_coredump;
+
+-static void fill_elf_header(struct elfhdr *elf, int segs)
++static void fill_elf_header(struct elfhdr *elf, int segs,
++ u16 machine, u32 flags, u8 osabi)
+ {
+ memcpy(elf->e_ident, ELFMAG, SELFMAG);
+ elf->e_ident[EI_CLASS] = ELF_CLASS;
+@@ -1335,12 +1397,12 @@ static void fill_elf_header(struct elfhdr *elf, int segs)
+ memset(elf->e_ident+EI_PAD, 0, EI_NIDENT-EI_PAD);
+
+ elf->e_type = ET_CORE;
+- elf->e_machine = ELF_ARCH;
++ elf->e_machine = machine;
+ elf->e_version = EV_CURRENT;
+ elf->e_entry = 0;
+ elf->e_phoff = sizeof(struct elfhdr);
+ elf->e_shoff = 0;
+- elf->e_flags = ELF_CORE_EFLAGS;
++ elf->e_flags = flags;
+ elf->e_ehsize = sizeof(struct elfhdr);
+ elf->e_phentsize = sizeof(struct elf_phdr);
+ elf->e_phnum = segs;
+@@ -1447,6 +1509,238 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
+ return 0;
+ }
+
++static void fill_auxv_note(struct memelfnote *note, struct mm_struct *mm)
++{
++ elf_addr_t *auxv = (elf_addr_t *) mm->saved_auxv;
++ int i = 0;
++ do
++ i += 2;
++ while (auxv[i - 2] != AT_NULL);
++ fill_note(note, "CORE", NT_AUXV, i * sizeof(elf_addr_t), auxv);
+}
+
-+static int
-+qla2x00_dfs_fce_open(struct inode *inode, struct file *file)
-+{
-+ scsi_qla_host_t *ha = inode->i_private;
-+ int rval;
++#ifdef CORE_DUMP_USE_REGSET
++#include <linux/regset.h>
+
-+ if (!ha->flags.fce_enabled)
-+ goto out;
++struct elf_thread_core_info {
++ struct elf_thread_core_info *next;
++ struct task_struct *task;
++ struct elf_prstatus prstatus;
++ struct memelfnote notes[0];
++};
+
-+ mutex_lock(&ha->fce_mutex);
++struct elf_note_info {
++ struct elf_thread_core_info *thread;
++ struct memelfnote psinfo;
++ struct memelfnote auxv;
++ size_t size;
++ int thread_notes;
++};
+
-+ /* Pause tracing to flush FCE buffers. */
-+ rval = qla2x00_disable_fce_trace(ha, &ha->fce_wr, &ha->fce_rd);
-+ if (rval)
-+ qla_printk(KERN_WARNING, ha,
-+ "DebugFS: Unable to disable FCE (%d).\n", rval);
++static int fill_thread_core_info(struct elf_thread_core_info *t,
++ const struct user_regset_view *view,
++ long signr, size_t *total)
++{
++ unsigned int i;
+
-+ ha->flags.fce_enabled = 0;
++ /*
++ * NT_PRSTATUS is the one special case, because the regset data
++ * goes into the pr_reg field inside the note contents, rather
++ * than being the whole note contents. We fill the reset in here.
++ * We assume that regset 0 is NT_PRSTATUS.
++ */
++ fill_prstatus(&t->prstatus, t->task, signr);
++ (void) view->regsets[0].get(t->task, &view->regsets[0],
++ 0, sizeof(t->prstatus.pr_reg),
++ &t->prstatus.pr_reg, NULL);
++
++ fill_note(&t->notes[0], "CORE", NT_PRSTATUS,
++ sizeof(t->prstatus), &t->prstatus);
++ *total += notesize(&t->notes[0]);
++
++ /*
++ * Each other regset might generate a note too. For each regset
++ * that has no core_note_type or is inactive, we leave t->notes[i]
++ * all zero and we'll know to skip writing it later.
++ */
++ for (i = 1; i < view->n; ++i) {
++ const struct user_regset *regset = &view->regsets[i];
++ if (regset->core_note_type &&
++ (!regset->active || regset->active(t->task, regset))) {
++ int ret;
++ size_t size = regset->n * regset->size;
++ void *data = kmalloc(size, GFP_KERNEL);
++ if (unlikely(!data))
++ return 0;
++ ret = regset->get(t->task, regset,
++ 0, size, data, NULL);
++ if (unlikely(ret))
++ kfree(data);
++ else {
++ if (regset->core_note_type != NT_PRFPREG)
++ fill_note(&t->notes[i], "LINUX",
++ regset->core_note_type,
++ size, data);
++ else {
++ t->prstatus.pr_fpvalid = 1;
++ fill_note(&t->notes[i], "CORE",
++ NT_PRFPREG, size, data);
++ }
++ *total += notesize(&t->notes[i]);
++ }
++ }
++ }
+
-+ mutex_unlock(&ha->fce_mutex);
-+out:
-+ return single_open(file, qla2x00_dfs_fce_show, ha);
++ return 1;
+}
+
-+static int
-+qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
-+{
-+ scsi_qla_host_t *ha = inode->i_private;
-+ int rval;
++static int fill_note_info(struct elfhdr *elf, int phdrs,
++ struct elf_note_info *info,
++ long signr, struct pt_regs *regs)
++{
++ struct task_struct *dump_task = current;
++ const struct user_regset_view *view = task_user_regset_view(dump_task);
++ struct elf_thread_core_info *t;
++ struct elf_prpsinfo *psinfo;
++ struct task_struct *g, *p;
++ unsigned int i;
+
-+ if (ha->flags.fce_enabled)
-+ goto out;
++ info->size = 0;
++ info->thread = NULL;
+
-+ mutex_lock(&ha->fce_mutex);
++ psinfo = kmalloc(sizeof(*psinfo), GFP_KERNEL);
++ fill_note(&info->psinfo, "CORE", NT_PRPSINFO, sizeof(*psinfo), psinfo);
+
-+ /* Re-enable FCE tracing. */
-+ ha->flags.fce_enabled = 1;
-+ memset(ha->fce, 0, fce_calc_size(ha->fce_bufs));
-+ rval = qla2x00_enable_fce_trace(ha, ha->fce_dma, ha->fce_bufs,
-+ ha->fce_mb, &ha->fce_bufs);
-+ if (rval) {
-+ qla_printk(KERN_WARNING, ha,
-+ "DebugFS: Unable to reinitialize FCE (%d).\n", rval);
-+ ha->flags.fce_enabled = 0;
++ if (psinfo == NULL)
++ return 0;
++
++ /*
++ * Figure out how many notes we're going to need for each thread.
++ */
++ info->thread_notes = 0;
++ for (i = 0; i < view->n; ++i)
++ if (view->regsets[i].core_note_type != 0)
++ ++info->thread_notes;
++
++ /*
++ * Sanity check. We rely on regset 0 being in NT_PRSTATUS,
++ * since it is our one special case.
++ */
++ if (unlikely(info->thread_notes == 0) ||
++ unlikely(view->regsets[0].core_note_type != NT_PRSTATUS)) {
++ WARN_ON(1);
++ return 0;
+ }
+
-+ mutex_unlock(&ha->fce_mutex);
-+out:
-+ return single_release(inode, file);
++ /*
++ * Initialize the ELF file header.
++ */
++ fill_elf_header(elf, phdrs,
++ view->e_machine, view->e_flags, view->ei_osabi);
++
++ /*
++ * Allocate a structure for each thread.
++ */
++ rcu_read_lock();
++ do_each_thread(g, p)
++ if (p->mm == dump_task->mm) {
++ t = kzalloc(offsetof(struct elf_thread_core_info,
++ notes[info->thread_notes]),
++ GFP_ATOMIC);
++ if (unlikely(!t)) {
++ rcu_read_unlock();
++ return 0;
++ }
++ t->task = p;
++ if (p == dump_task || !info->thread) {
++ t->next = info->thread;
++ info->thread = t;
++ } else {
++ /*
++ * Make sure to keep the original task at
++ * the head of the list.
++ */
++ t->next = info->thread->next;
++ info->thread->next = t;
++ }
++ }
++ while_each_thread(g, p);
++ rcu_read_unlock();
++
++ /*
++ * Now fill in each thread's information.
++ */
++ for (t = info->thread; t != NULL; t = t->next)
++ if (!fill_thread_core_info(t, view, signr, &info->size))
++ return 0;
++
++ /*
++ * Fill in the two process-wide notes.
++ */
++ fill_psinfo(psinfo, dump_task->group_leader, dump_task->mm);
++ info->size += notesize(&info->psinfo);
++
++ fill_auxv_note(&info->auxv, current->mm);
++ info->size += notesize(&info->auxv);
++
++ return 1;
+}
+
-+static const struct file_operations dfs_fce_ops = {
-+ .open = qla2x00_dfs_fce_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = qla2x00_dfs_fce_release,
-+};
++static size_t get_note_info_size(struct elf_note_info *info)
++{
++ return info->size;
++}
+
-+int
-+qla2x00_dfs_setup(scsi_qla_host_t *ha)
++/*
++ * Write all the notes for each thread. When writing the first thread, the
++ * process-wide notes are interleaved after the first thread-specific note.
++ */
++static int write_note_info(struct elf_note_info *info,
++ struct file *file, loff_t *foffset)
+{
-+ if (!IS_QLA25XX(ha))
-+ goto out;
-+ if (!ha->fce)
-+ goto out;
++ bool first = 1;
++ struct elf_thread_core_info *t = info->thread;
+
-+ if (qla2x00_dfs_root)
-+ goto create_dir;
++ do {
++ int i;
+
-+ atomic_set(&qla2x00_dfs_root_count, 0);
-+ qla2x00_dfs_root = debugfs_create_dir(QLA2XXX_DRIVER_NAME, NULL);
-+ if (!qla2x00_dfs_root) {
-+ qla_printk(KERN_NOTICE, ha,
-+ "DebugFS: Unable to create root directory.\n");
-+ goto out;
-+ }
++ if (!writenote(&t->notes[0], file, foffset))
++ return 0;
+
-+create_dir:
-+ if (ha->dfs_dir)
-+ goto create_nodes;
++ if (first && !writenote(&info->psinfo, file, foffset))
++ return 0;
++ if (first && !writenote(&info->auxv, file, foffset))
++ return 0;
+
-+ mutex_init(&ha->fce_mutex);
-+ ha->dfs_dir = debugfs_create_dir(ha->host_str, qla2x00_dfs_root);
-+ if (!ha->dfs_dir) {
-+ qla_printk(KERN_NOTICE, ha,
-+ "DebugFS: Unable to create ha directory.\n");
-+ goto out;
-+ }
++ for (i = 1; i < info->thread_notes; ++i)
++ if (t->notes[i].data &&
++ !writenote(&t->notes[i], file, foffset))
++ return 0;
+
-+ atomic_inc(&qla2x00_dfs_root_count);
++ first = 0;
++ t = t->next;
++ } while (t);
+
-+create_nodes:
-+ ha->dfs_fce = debugfs_create_file("fce", S_IRUSR, ha->dfs_dir, ha,
-+ &dfs_fce_ops);
-+ if (!ha->dfs_fce) {
-+ qla_printk(KERN_NOTICE, ha,
-+ "DebugFS: Unable to fce node.\n");
-+ goto out;
-+ }
-+out:
-+ return 0;
++ return 1;
+}
+
-+int
-+qla2x00_dfs_remove(scsi_qla_host_t *ha)
++static void free_note_info(struct elf_note_info *info)
+{
-+ if (ha->dfs_fce) {
-+ debugfs_remove(ha->dfs_fce);
-+ ha->dfs_fce = NULL;
-+ }
-+
-+ if (ha->dfs_dir) {
-+ debugfs_remove(ha->dfs_dir);
-+ ha->dfs_dir = NULL;
-+ atomic_dec(&qla2x00_dfs_root_count);
++ struct elf_thread_core_info *threads = info->thread;
++ while (threads) {
++ unsigned int i;
++ struct elf_thread_core_info *t = threads;
++ threads = t->next;
++ WARN_ON(t->notes[0].data && t->notes[0].data != &t->prstatus);
++ for (i = 1; i < info->thread_notes; ++i)
++ kfree(t->notes[i].data);
++ kfree(t);
+ }
++ kfree(info->psinfo.data);
++}
+
-+ if (atomic_read(&qla2x00_dfs_root_count) == 0 &&
-+ qla2x00_dfs_root) {
-+ debugfs_remove(qla2x00_dfs_root);
-+ qla2x00_dfs_root = NULL;
-+ }
++#else
+
-+ return 0;
-+}
-diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
-index 25364b1..9337e13 100644
---- a/drivers/scsi/qla2xxx/qla_fw.h
-+++ b/drivers/scsi/qla2xxx/qla_fw.h
-@@ -952,9 +952,31 @@ struct device_reg_24xx {
- uint32_t iobase_sdata;
- };
+ /* Here is the structure in which status of each thread is captured. */
+ struct elf_thread_status
+ {
+@@ -1499,6 +1793,176 @@ static int elf_dump_thread_status(long signr, struct elf_thread_status *t)
+ return sz;
+ }
-+/* Trace Control *************************************************************/
++struct elf_note_info {
++ struct memelfnote *notes;
++ struct elf_prstatus *prstatus; /* NT_PRSTATUS */
++ struct elf_prpsinfo *psinfo; /* NT_PRPSINFO */
++ struct list_head thread_list;
++ elf_fpregset_t *fpu;
++#ifdef ELF_CORE_COPY_XFPREGS
++ elf_fpxregset_t *xfpu;
++#endif
++ int thread_status_size;
++ int numnote;
++};
+
-+#define TC_AEN_DISABLE 0
++static int fill_note_info(struct elfhdr *elf, int phdrs,
++ struct elf_note_info *info,
++ long signr, struct pt_regs *regs)
++{
++#define NUM_NOTES 6
++ struct list_head *t;
++ struct task_struct *g, *p;
+
-+#define TC_EFT_ENABLE 4
-+#define TC_EFT_DISABLE 5
++ info->notes = NULL;
++ info->prstatus = NULL;
++ info->psinfo = NULL;
++ info->fpu = NULL;
++#ifdef ELF_CORE_COPY_XFPREGS
++ info->xfpu = NULL;
++#endif
++ INIT_LIST_HEAD(&info->thread_list);
+
-+#define TC_FCE_ENABLE 8
-+#define TC_FCE_OPTIONS 0
-+#define TC_FCE_DEFAULT_RX_SIZE 2112
-+#define TC_FCE_DEFAULT_TX_SIZE 2112
-+#define TC_FCE_DISABLE 9
-+#define TC_FCE_DISABLE_TRACE BIT_0
++ info->notes = kmalloc(NUM_NOTES * sizeof(struct memelfnote),
++ GFP_KERNEL);
++ if (!info->notes)
++ return 0;
++ info->psinfo = kmalloc(sizeof(*info->psinfo), GFP_KERNEL);
++ if (!info->psinfo)
++ return 0;
++ info->prstatus = kmalloc(sizeof(*info->prstatus), GFP_KERNEL);
++ if (!info->prstatus)
++ return 0;
++ info->fpu = kmalloc(sizeof(*info->fpu), GFP_KERNEL);
++ if (!info->fpu)
++ return 0;
++#ifdef ELF_CORE_COPY_XFPREGS
++ info->xfpu = kmalloc(sizeof(*info->xfpu), GFP_KERNEL);
++ if (!info->xfpu)
++ return 0;
++#endif
+
- /* MID Support ***************************************************************/
-
--#define MAX_MID_VPS 125
-+#define MIN_MULTI_ID_FABRIC 64 /* Must be power-of-2. */
-+#define MAX_MULTI_ID_FABRIC 256 /* ... */
++ info->thread_status_size = 0;
++ if (signr) {
++ struct elf_thread_status *tmp;
++ rcu_read_lock();
++ do_each_thread(g, p)
++ if (current->mm == p->mm && current != p) {
++ tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC);
++ if (!tmp) {
++ rcu_read_unlock();
++ return 0;
++ }
++ tmp->thread = p;
++ list_add(&tmp->list, &info->thread_list);
++ }
++ while_each_thread(g, p);
++ rcu_read_unlock();
++ list_for_each(t, &info->thread_list) {
++ struct elf_thread_status *tmp;
++ int sz;
+
-+#define for_each_mapped_vp_idx(_ha, _idx) \
-+ for (_idx = find_next_bit((_ha)->vp_idx_map, \
-+ (_ha)->max_npiv_vports + 1, 1); \
-+ _idx <= (_ha)->max_npiv_vports; \
-+ _idx = find_next_bit((_ha)->vp_idx_map, \
-+ (_ha)->max_npiv_vports + 1, _idx + 1)) \
-
- struct mid_conf_entry_24xx {
- uint16_t reserved_1;
-@@ -982,7 +1004,7 @@ struct mid_init_cb_24xx {
- uint16_t count;
- uint16_t options;
-
-- struct mid_conf_entry_24xx entries[MAX_MID_VPS];
-+ struct mid_conf_entry_24xx entries[MAX_MULTI_ID_FABRIC];
- };
-
-
-@@ -1002,10 +1024,6 @@ struct mid_db_entry_24xx {
- uint8_t reserved_1;
- };
-
--struct mid_db_24xx {
-- struct mid_db_entry_24xx entries[MAX_MID_VPS];
--};
--
- /*
- * Virtual Fabric ID type definition.
- */
-diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
-index 09cb2a9..ba35fc2 100644
---- a/drivers/scsi/qla2xxx/qla_gbl.h
-+++ b/drivers/scsi/qla2xxx/qla_gbl.h
-@@ -65,33 +65,25 @@ extern int ql2xextended_error_logging;
- extern int ql2xqfullrampup;
- extern int num_hosts;
-
-+extern int qla2x00_loop_reset(scsi_qla_host_t *);
++ tmp = list_entry(t, struct elf_thread_status, list);
++ sz = elf_dump_thread_status(signr, tmp);
++ info->thread_status_size += sz;
++ }
++ }
++ /* now collect the dump for the current */
++ memset(info->prstatus, 0, sizeof(*info->prstatus));
++ fill_prstatus(info->prstatus, current, signr);
++ elf_core_copy_regs(&info->prstatus->pr_reg, regs);
+
- /*
- * Global Functions in qla_mid.c source file.
- */
--extern struct scsi_host_template qla2x00_driver_template;
- extern struct scsi_host_template qla24xx_driver_template;
- extern struct scsi_transport_template *qla2xxx_transport_vport_template;
--extern uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
- extern void qla2x00_timer(scsi_qla_host_t *);
- extern void qla2x00_start_timer(scsi_qla_host_t *, void *, unsigned long);
--extern void qla2x00_stop_timer(scsi_qla_host_t *);
--extern uint32_t qla24xx_allocate_vp_id(scsi_qla_host_t *);
- extern void qla24xx_deallocate_vp_id(scsi_qla_host_t *);
- extern int qla24xx_disable_vp (scsi_qla_host_t *);
- extern int qla24xx_enable_vp (scsi_qla_host_t *);
--extern void qla2x00_mem_free(scsi_qla_host_t *);
- extern int qla24xx_control_vp(scsi_qla_host_t *, int );
- extern int qla24xx_modify_vp_config(scsi_qla_host_t *);
- extern int qla2x00_send_change_request(scsi_qla_host_t *, uint16_t, uint16_t);
- extern void qla2x00_vp_stop_timer(scsi_qla_host_t *);
- extern int qla24xx_configure_vhba (scsi_qla_host_t *);
--extern int qla24xx_get_vp_entry(scsi_qla_host_t *, uint16_t, int);
--extern int qla24xx_get_vp_database(scsi_qla_host_t *, uint16_t);
--extern int qla2x00_do_dpc_vp(scsi_qla_host_t *);
- extern void qla24xx_report_id_acquisition(scsi_qla_host_t *,
- struct vp_rpt_id_entry_24xx *);
--extern scsi_qla_host_t * qla24xx_find_vhost_by_name(scsi_qla_host_t *,
-- uint8_t *);
- extern void qla2x00_do_dpc_all_vps(scsi_qla_host_t *);
- extern int qla24xx_vport_create_req_sanity_check(struct fc_vport *);
- extern scsi_qla_host_t * qla24xx_create_vhost(struct fc_vport *);
-@@ -103,8 +95,6 @@ extern char *qla2x00_get_fw_version_str(struct scsi_qla_host *, char *);
- extern void qla2x00_mark_device_lost(scsi_qla_host_t *, fc_port_t *, int, int);
- extern void qla2x00_mark_all_devices_lost(scsi_qla_host_t *, int);
-
--extern int qla2x00_down_timeout(struct semaphore *, unsigned long);
--
- extern struct fw_blob *qla2x00_request_firmware(scsi_qla_host_t *);
-
- extern int qla2x00_wait_for_hba_online(scsi_qla_host_t *);
-@@ -113,7 +103,6 @@ extern void qla2xxx_wake_dpc(scsi_qla_host_t *);
- extern void qla2x00_alert_all_vps(scsi_qla_host_t *, uint16_t *);
- extern void qla2x00_async_event(scsi_qla_host_t *, uint16_t *);
- extern void qla2x00_vp_abort_isp(scsi_qla_host_t *);
--extern int qla24xx_vport_delete(struct fc_vport *);
-
- /*
- * Global Function Prototypes in qla_iocb.c source file.
-@@ -222,21 +211,16 @@ extern int
- qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map);
-
- extern int
--qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, link_stat_t *,
-- uint16_t *);
-+qla2x00_get_link_status(scsi_qla_host_t *, uint16_t, struct link_statistics *,
-+ dma_addr_t);
-
- extern int
--qla24xx_get_isp_stats(scsi_qla_host_t *, uint32_t *, uint32_t, uint16_t *);
-+qla24xx_get_isp_stats(scsi_qla_host_t *, struct link_statistics *,
-+ dma_addr_t);
-
- extern int qla24xx_abort_command(scsi_qla_host_t *, srb_t *);
- extern int qla24xx_abort_target(fc_port_t *);
-
--extern int qla2x00_system_error(scsi_qla_host_t *);
--
--extern int
--qla2x00_get_serdes_params(scsi_qla_host_t *, uint16_t *, uint16_t *,
-- uint16_t *);
--
- extern int
- qla2x00_set_serdes_params(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t);
-
-@@ -244,13 +228,19 @@ extern int
- qla2x00_stop_firmware(scsi_qla_host_t *);
-
- extern int
--qla2x00_trace_control(scsi_qla_host_t *, uint16_t, dma_addr_t, uint16_t);
-+qla2x00_enable_eft_trace(scsi_qla_host_t *, dma_addr_t, uint16_t);
-+extern int
-+qla2x00_disable_eft_trace(scsi_qla_host_t *);
-
- extern int
--qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t);
-+qla2x00_enable_fce_trace(scsi_qla_host_t *, dma_addr_t, uint16_t , uint16_t *,
-+ uint32_t *);
-
- extern int
--qla2x00_get_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t *, uint16_t *);
-+qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *);
++ /* Set up header */
++ fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS, ELF_OSABI);
+
-+extern int
-+qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t);
-
- extern int
- qla2x00_set_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t *);
-@@ -270,11 +260,7 @@ extern void qla2x00_free_irqs(scsi_qla_host_t *);
- /*
- * Global Function Prototypes in qla_sup.c source file.
- */
--extern void qla2x00_lock_nvram_access(scsi_qla_host_t *);
--extern void qla2x00_unlock_nvram_access(scsi_qla_host_t *);
- extern void qla2x00_release_nvram_protection(scsi_qla_host_t *);
--extern uint16_t qla2x00_get_nvram_word(scsi_qla_host_t *, uint32_t);
--extern void qla2x00_write_nvram_word(scsi_qla_host_t *, uint32_t, uint16_t);
- extern uint32_t *qla24xx_read_flash_data(scsi_qla_host_t *, uint32_t *,
- uint32_t, uint32_t);
- extern uint8_t *qla2x00_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-@@ -321,7 +307,6 @@ extern void qla25xx_fw_dump(scsi_qla_host_t *, int);
- extern void qla2x00_dump_regs(scsi_qla_host_t *);
- extern void qla2x00_dump_buffer(uint8_t *, uint32_t);
- extern void qla2x00_print_scsi_cmd(struct scsi_cmnd *);
--extern void qla2x00_dump_pkt(void *);
-
- /*
- * Global Function Prototypes in qla_gs.c source file.
-@@ -356,4 +341,10 @@ extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
- extern void qla2x00_init_host_attr(scsi_qla_host_t *);
- extern void qla2x00_alloc_sysfs_attr(scsi_qla_host_t *);
- extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
++ /*
++ * Set up the notes in similar form to SVR4 core dumps made
++ * with info from their /proc.
++ */
+
-+/*
-+ * Global Function Prototypes in qla_dfs.c source file.
-+ */
-+extern int qla2x00_dfs_setup(scsi_qla_host_t *);
-+extern int qla2x00_dfs_remove(scsi_qla_host_t *);
- #endif /* _QLA_GBL_H */
-diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
-index 191dafd..d0633ca 100644
---- a/drivers/scsi/qla2xxx/qla_init.c
-+++ b/drivers/scsi/qla2xxx/qla_init.c
-@@ -732,9 +732,9 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
- {
- int rval;
- uint32_t dump_size, fixed_size, mem_size, req_q_size, rsp_q_size,
-- eft_size;
-- dma_addr_t eft_dma;
-- void *eft;
-+ eft_size, fce_size;
-+ dma_addr_t tc_dma;
-+ void *tc;
-
- if (ha->fw_dump) {
- qla_printk(KERN_WARNING, ha,
-@@ -743,7 +743,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
- }
-
- ha->fw_dumped = 0;
-- fixed_size = mem_size = eft_size = 0;
-+ fixed_size = mem_size = eft_size = fce_size = 0;
- if (IS_QLA2100(ha) || IS_QLA2200(ha)) {
- fixed_size = sizeof(struct qla2100_fw_dump);
- } else if (IS_QLA23XX(ha)) {
-@@ -758,21 +758,21 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
- sizeof(uint32_t);
-
- /* Allocate memory for Extended Trace Buffer. */
-- eft = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &eft_dma,
-+ tc = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &tc_dma,
- GFP_KERNEL);
-- if (!eft) {
-+ if (!tc) {
- qla_printk(KERN_WARNING, ha, "Unable to allocate "
- "(%d KB) for EFT.\n", EFT_SIZE / 1024);
- goto cont_alloc;
- }
-
-- rval = qla2x00_trace_control(ha, TC_ENABLE, eft_dma,
-- EFT_NUM_BUFFERS);
-+ memset(tc, 0, EFT_SIZE);
-+ rval = qla2x00_enable_eft_trace(ha, tc_dma, EFT_NUM_BUFFERS);
- if (rval) {
- qla_printk(KERN_WARNING, ha, "Unable to initialize "
- "EFT (%d).\n", rval);
-- dma_free_coherent(&ha->pdev->dev, EFT_SIZE, eft,
-- eft_dma);
-+ dma_free_coherent(&ha->pdev->dev, EFT_SIZE, tc,
-+ tc_dma);
- goto cont_alloc;
- }
-
-@@ -780,9 +780,40 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *ha)
- EFT_SIZE / 1024);
-
- eft_size = EFT_SIZE;
-- memset(eft, 0, eft_size);
-- ha->eft_dma = eft_dma;
-- ha->eft = eft;
-+ ha->eft_dma = tc_dma;
-+ ha->eft = tc;
++ fill_note(info->notes + 0, "CORE", NT_PRSTATUS,
++ sizeof(*info->prstatus), info->prstatus);
++ fill_psinfo(info->psinfo, current->group_leader, current->mm);
++ fill_note(info->notes + 1, "CORE", NT_PRPSINFO,
++ sizeof(*info->psinfo), info->psinfo);
+
-+ /* Allocate memory for Fibre Channel Event Buffer. */
-+ if (!IS_QLA25XX(ha))
-+ goto cont_alloc;
++ info->numnote = 2;
+
-+ tc = dma_alloc_coherent(&ha->pdev->dev, FCE_SIZE, &tc_dma,
-+ GFP_KERNEL);
-+ if (!tc) {
-+ qla_printk(KERN_WARNING, ha, "Unable to allocate "
-+ "(%d KB) for FCE.\n", FCE_SIZE / 1024);
-+ goto cont_alloc;
-+ }
++ fill_auxv_note(&info->notes[info->numnote++], current->mm);
+
-+ memset(tc, 0, FCE_SIZE);
-+ rval = qla2x00_enable_fce_trace(ha, tc_dma, FCE_NUM_BUFFERS,
-+ ha->fce_mb, &ha->fce_bufs);
-+ if (rval) {
-+ qla_printk(KERN_WARNING, ha, "Unable to initialize "
-+ "FCE (%d).\n", rval);
-+ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, tc,
-+ tc_dma);
-+ ha->flags.fce_enabled = 0;
-+ goto cont_alloc;
-+ }
++ /* Try to dump the FPU. */
++ info->prstatus->pr_fpvalid = elf_core_copy_task_fpregs(current, regs,
++ info->fpu);
++ if (info->prstatus->pr_fpvalid)
++ fill_note(info->notes + info->numnote++,
++ "CORE", NT_PRFPREG, sizeof(*info->fpu), info->fpu);
++#ifdef ELF_CORE_COPY_XFPREGS
++ if (elf_core_copy_task_xfpregs(current, info->xfpu))
++ fill_note(info->notes + info->numnote++,
++ "LINUX", ELF_CORE_XFPREG_TYPE,
++ sizeof(*info->xfpu), info->xfpu);
++#endif
+
-+ qla_printk(KERN_INFO, ha, "Allocated (%d KB) for FCE...\n",
-+ FCE_SIZE / 1024);
++ return 1;
+
-+ fce_size = sizeof(struct qla2xxx_fce_chain) + EFT_SIZE;
-+ ha->flags.fce_enabled = 1;
-+ ha->fce_dma = tc_dma;
-+ ha->fce = tc;
- }
- cont_alloc:
- req_q_size = ha->request_q_length * sizeof(request_t);
-@@ -790,7 +821,7 @@ cont_alloc:
-
- dump_size = offsetof(struct qla2xxx_fw_dump, isp);
- dump_size += fixed_size + mem_size + req_q_size + rsp_q_size +
-- eft_size;
-+ eft_size + fce_size;
-
- ha->fw_dump = vmalloc(dump_size);
- if (!ha->fw_dump) {
-@@ -922,9 +953,9 @@ qla2x00_setup_chip(scsi_qla_host_t *ha)
- ha->flags.npiv_supported = 1;
- if ((!ha->max_npiv_vports) ||
- ((ha->max_npiv_vports + 1) %
-- MAX_MULTI_ID_FABRIC))
-+ MIN_MULTI_ID_FABRIC))
- ha->max_npiv_vports =
-- MAX_NUM_VPORT_FABRIC;
-+ MIN_MULTI_ID_FABRIC - 1;
- }
-
- if (ql2xallocfwdump)
-@@ -1162,7 +1193,10 @@ qla2x00_init_rings(scsi_qla_host_t *ha)
-
- DEBUG(printk("scsi(%ld): Issue init firmware.\n", ha->host_no));
-
-- mid_init_cb->count = ha->max_npiv_vports;
-+ if (ha->flags.npiv_supported)
-+ mid_init_cb->count = cpu_to_le16(ha->max_npiv_vports);
++#undef NUM_NOTES
++}
+
-+ mid_init_cb->options = __constant_cpu_to_le16(BIT_1);
-
- rval = qla2x00_init_firmware(ha, ha->init_cb_size);
- if (rval) {
-@@ -2566,14 +2600,7 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *ha, struct list_head *new_fcports)
-
- /* Bypass virtual ports of the same host. */
- if (pha->num_vhosts) {
-- vp_index = find_next_bit(
-- (unsigned long *)pha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, 1);
--
-- for (;vp_index <= MAX_MULTI_ID_FABRIC;
-- vp_index = find_next_bit(
-- (unsigned long *)pha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, vp_index + 1)) {
-+ for_each_mapped_vp_idx(pha, vp_index) {
- empty_vp_index = 1;
- found_vp = 0;
- list_for_each_entry(vha, &pha->vp_list,
-@@ -2592,7 +2619,8 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *ha, struct list_head *new_fcports)
- new_fcport->d_id.b24 == vha->d_id.b24)
- break;
- }
-- if (vp_index <= MAX_MULTI_ID_FABRIC)
++static size_t get_note_info_size(struct elf_note_info *info)
++{
++ int sz = 0;
++ int i;
+
-+ if (vp_index <= pha->max_npiv_vports)
- continue;
- }
-
-@@ -3245,7 +3273,7 @@ qla2x00_abort_isp(scsi_qla_host_t *ha)
- clear_bit(ISP_ABORT_RETRY, &ha->dpc_flags);
-
- if (ha->eft) {
-- rval = qla2x00_trace_control(ha, TC_ENABLE,
-+ rval = qla2x00_enable_eft_trace(ha,
- ha->eft_dma, EFT_NUM_BUFFERS);
- if (rval) {
- qla_printk(KERN_WARNING, ha,
-@@ -3253,6 +3281,21 @@ qla2x00_abort_isp(scsi_qla_host_t *ha)
- "(%d).\n", rval);
- }
- }
++ for (i = 0; i < info->numnote; i++)
++ sz += notesize(info->notes + i);
+
-+ if (ha->fce) {
-+ ha->flags.fce_enabled = 1;
-+ memset(ha->fce, 0,
-+ fce_calc_size(ha->fce_bufs));
-+ rval = qla2x00_enable_fce_trace(ha,
-+ ha->fce_dma, ha->fce_bufs, ha->fce_mb,
-+ &ha->fce_bufs);
-+ if (rval) {
-+ qla_printk(KERN_WARNING, ha,
-+ "Unable to reinitialize FCE "
-+ "(%d).\n", rval);
-+ ha->flags.fce_enabled = 0;
-+ }
-+ }
- } else { /* failed the ISP abort */
- ha->flags.online = 1;
- if (test_bit(ISP_ABORT_RETRY, &ha->dpc_flags)) {
-diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
-index 1104bd2..642a0c3 100644
---- a/drivers/scsi/qla2xxx/qla_isr.c
-+++ b/drivers/scsi/qla2xxx/qla_isr.c
-@@ -104,7 +104,7 @@ qla2100_intr_handler(int irq, void *dev_id)
- if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
- (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
- set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
-- up(&ha->mbx_intr_sem);
-+ complete(&ha->mbx_intr_comp);
- }
-
- return (IRQ_HANDLED);
-@@ -216,7 +216,7 @@ qla2300_intr_handler(int irq, void *dev_id)
- if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
- (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
- set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
-- up(&ha->mbx_intr_sem);
-+ complete(&ha->mbx_intr_comp);
- }
-
- return (IRQ_HANDLED);
-@@ -347,10 +347,6 @@ qla2x00_async_event(scsi_qla_host_t *ha, uint16_t *mb)
- break;
-
- case MBA_SYSTEM_ERR: /* System Error */
-- mb[1] = RD_MAILBOX_REG(ha, reg, 1);
-- mb[2] = RD_MAILBOX_REG(ha, reg, 2);
-- mb[3] = RD_MAILBOX_REG(ha, reg, 3);
--
- qla_printk(KERN_INFO, ha,
- "ISP System Error - mbx1=%xh mbx2=%xh mbx3=%xh.\n",
- mb[1], mb[2], mb[3]);
-@@ -579,12 +575,15 @@ qla2x00_async_event(scsi_qla_host_t *ha, uint16_t *mb)
- /* Check if the Vport has issued a SCR */
- if (ha->parent && test_bit(VP_SCR_NEEDED, &ha->vp_flags))
- break;
-+ /* Only handle SCNs for our Vport index. */
-+ if (ha->flags.npiv_supported && ha->vp_idx != mb[3])
-+ break;
-
- DEBUG2(printk("scsi(%ld): Asynchronous RSCR UPDATE.\n",
- ha->host_no));
- DEBUG(printk(KERN_INFO
-- "scsi(%ld): RSCN database changed -- %04x %04x.\n",
-- ha->host_no, mb[1], mb[2]));
-+ "scsi(%ld): RSCN database changed -- %04x %04x %04x.\n",
-+ ha->host_no, mb[1], mb[2], mb[3]));
-
- rscn_entry = (mb[1] << 16) | mb[2];
- host_pid = (ha->d_id.b.domain << 16) | (ha->d_id.b.area << 8) |
-@@ -823,6 +822,35 @@ qla2x00_process_response_queue(struct scsi_qla_host *ha)
- WRT_REG_WORD(ISP_RSP_Q_OUT(ha, reg), ha->rsp_ring_index);
- }
-
-+static inline void
-+qla2x00_handle_sense(srb_t *sp, uint8_t *sense_data, uint32_t sense_len)
++ sz += info->thread_status_size;
++
++ return sz;
++}
++
++static int write_note_info(struct elf_note_info *info,
++ struct file *file, loff_t *foffset)
+{
-+ struct scsi_cmnd *cp = sp->cmd;
++ int i;
++ struct list_head *t;
+
-+ if (sense_len >= SCSI_SENSE_BUFFERSIZE)
-+ sense_len = SCSI_SENSE_BUFFERSIZE;
++ for (i = 0; i < info->numnote; i++)
++ if (!writenote(info->notes + i, file, foffset))
++ return 0;
+
-+ CMD_ACTUAL_SNSLEN(cp) = sense_len;
-+ sp->request_sense_length = sense_len;
-+ sp->request_sense_ptr = cp->sense_buffer;
-+ if (sp->request_sense_length > 32)
-+ sense_len = 32;
++ /* write out the thread status notes section */
++ list_for_each(t, &info->thread_list) {
++ struct elf_thread_status *tmp =
++ list_entry(t, struct elf_thread_status, list);
+
-+ memcpy(cp->sense_buffer, sense_data, sense_len);
++ for (i = 0; i < tmp->num_notes; i++)
++ if (!writenote(&tmp->notes[i], file, foffset))
++ return 0;
++ }
+
-+ sp->request_sense_ptr += sense_len;
-+ sp->request_sense_length -= sense_len;
-+ if (sp->request_sense_length != 0)
-+ sp->ha->status_srb = sp;
++ return 1;
++}
+
-+ DEBUG5(printk("%s(): Check condition Sense data, scsi(%ld:%d:%d:%d) "
-+ "cmd=%p pid=%ld\n", __func__, sp->ha->host_no, cp->device->channel,
-+ cp->device->id, cp->device->lun, cp, cp->serial_number));
-+ if (sense_len)
-+ DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
-+ CMD_ACTUAL_SNSLEN(cp)));
++static void free_note_info(struct elf_note_info *info)
++{
++ while (!list_empty(&info->thread_list)) {
++ struct list_head *tmp = info->thread_list.next;
++ list_del(tmp);
++ kfree(list_entry(tmp, struct elf_thread_status, list));
++ }
++
++ kfree(info->prstatus);
++ kfree(info->psinfo);
++ kfree(info->notes);
++ kfree(info->fpu);
++#ifdef ELF_CORE_COPY_XFPREGS
++ kfree(info->xfpu);
++#endif
+}
+
- /**
- * qla2x00_status_entry() - Process a Status IOCB entry.
- * @ha: SCSI driver HA context
-@@ -977,36 +1005,11 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
- if (lscsi_status != SS_CHECK_CONDITION)
- break;
-
-- /* Copy Sense Data into sense buffer. */
-- memset(cp->sense_buffer, 0, sizeof(cp->sense_buffer));
--
-+ memset(cp->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- if (!(scsi_status & SS_SENSE_LEN_VALID))
- break;
-
-- if (sense_len >= sizeof(cp->sense_buffer))
-- sense_len = sizeof(cp->sense_buffer);
--
-- CMD_ACTUAL_SNSLEN(cp) = sense_len;
-- sp->request_sense_length = sense_len;
-- sp->request_sense_ptr = cp->sense_buffer;
--
-- if (sp->request_sense_length > 32)
-- sense_len = 32;
--
-- memcpy(cp->sense_buffer, sense_data, sense_len);
--
-- sp->request_sense_ptr += sense_len;
-- sp->request_sense_length -= sense_len;
-- if (sp->request_sense_length != 0)
-- ha->status_srb = sp;
--
-- DEBUG5(printk("%s(): Check condition Sense data, "
-- "scsi(%ld:%d:%d:%d) cmd=%p pid=%ld\n", __func__,
-- ha->host_no, cp->device->channel, cp->device->id,
-- cp->device->lun, cp, cp->serial_number));
-- if (sense_len)
-- DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
-- CMD_ACTUAL_SNSLEN(cp)));
-+ qla2x00_handle_sense(sp, sense_data, sense_len);
- break;
-
- case CS_DATA_UNDERRUN:
-@@ -1061,34 +1064,11 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
- if (lscsi_status != SS_CHECK_CONDITION)
- break;
-
-- /* Copy Sense Data into sense buffer */
-- memset(cp->sense_buffer, 0, sizeof(cp->sense_buffer));
--
-+ memset(cp->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- if (!(scsi_status & SS_SENSE_LEN_VALID))
- break;
-
-- if (sense_len >= sizeof(cp->sense_buffer))
-- sense_len = sizeof(cp->sense_buffer);
--
-- CMD_ACTUAL_SNSLEN(cp) = sense_len;
-- sp->request_sense_length = sense_len;
-- sp->request_sense_ptr = cp->sense_buffer;
--
-- if (sp->request_sense_length > 32)
-- sense_len = 32;
++#endif
++
+ static struct vm_area_struct *first_vma(struct task_struct *tsk,
+ struct vm_area_struct *gate_vma)
+ {
+@@ -1534,29 +1998,15 @@ static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+ */
+ static int elf_core_dump(long signr, struct pt_regs *regs, struct file *file, unsigned long limit)
+ {
+-#define NUM_NOTES 6
+ int has_dumped = 0;
+ mm_segment_t fs;
+ int segs;
+ size_t size = 0;
+- int i;
+ struct vm_area_struct *vma, *gate_vma;
+ struct elfhdr *elf = NULL;
+ loff_t offset = 0, dataoff, foffset;
+- int numnote;
+- struct memelfnote *notes = NULL;
+- struct elf_prstatus *prstatus = NULL; /* NT_PRSTATUS */
+- struct elf_prpsinfo *psinfo = NULL; /* NT_PRPSINFO */
+- struct task_struct *g, *p;
+- LIST_HEAD(thread_list);
+- struct list_head *t;
+- elf_fpregset_t *fpu = NULL;
+-#ifdef ELF_CORE_COPY_XFPREGS
+- elf_fpxregset_t *xfpu = NULL;
+-#endif
+- int thread_status_size = 0;
+- elf_addr_t *auxv;
+ unsigned long mm_flags;
++ struct elf_note_info info;
+
+ /*
+ * We no longer stop all VM operations.
+@@ -1574,52 +2024,6 @@ static int elf_core_dump(long signr, struct pt_regs *regs, struct file *file, un
+ elf = kmalloc(sizeof(*elf), GFP_KERNEL);
+ if (!elf)
+ goto cleanup;
+- prstatus = kmalloc(sizeof(*prstatus), GFP_KERNEL);
+- if (!prstatus)
+- goto cleanup;
+- psinfo = kmalloc(sizeof(*psinfo), GFP_KERNEL);
+- if (!psinfo)
+- goto cleanup;
+- notes = kmalloc(NUM_NOTES * sizeof(struct memelfnote), GFP_KERNEL);
+- if (!notes)
+- goto cleanup;
+- fpu = kmalloc(sizeof(*fpu), GFP_KERNEL);
+- if (!fpu)
+- goto cleanup;
+-#ifdef ELF_CORE_COPY_XFPREGS
+- xfpu = kmalloc(sizeof(*xfpu), GFP_KERNEL);
+- if (!xfpu)
+- goto cleanup;
+-#endif
-
-- memcpy(cp->sense_buffer, sense_data, sense_len);
+- if (signr) {
+- struct elf_thread_status *tmp;
+- rcu_read_lock();
+- do_each_thread(g,p)
+- if (current->mm == p->mm && current != p) {
+- tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC);
+- if (!tmp) {
+- rcu_read_unlock();
+- goto cleanup;
+- }
+- tmp->thread = p;
+- list_add(&tmp->list, &thread_list);
+- }
+- while_each_thread(g,p);
+- rcu_read_unlock();
+- list_for_each(t, &thread_list) {
+- struct elf_thread_status *tmp;
+- int sz;
+-
+- tmp = list_entry(t, struct elf_thread_status, list);
+- sz = elf_dump_thread_status(signr, tmp);
+- thread_status_size += sz;
+- }
+- }
+- /* now collect the dump for the current */
+- memset(prstatus, 0, sizeof(*prstatus));
+- fill_prstatus(prstatus, current, signr);
+- elf_core_copy_regs(&prstatus->pr_reg, regs);
+
+ segs = current->mm->map_count;
+ #ifdef ELF_CORE_EXTRA_PHDRS
+@@ -1630,42 +2034,16 @@ static int elf_core_dump(long signr, struct pt_regs *regs, struct file *file, un
+ if (gate_vma != NULL)
+ segs++;
+
+- /* Set up header */
+- fill_elf_header(elf, segs + 1); /* including notes section */
+-
+- has_dumped = 1;
+- current->flags |= PF_DUMPCORE;
+-
+ /*
+- * Set up the notes in similar form to SVR4 core dumps made
+- * with info from their /proc.
++ * Collect all the non-memory information about the process for the
++ * notes. This also sets up the file header.
+ */
++ if (!fill_note_info(elf, segs + 1, /* including notes section */
++ &info, signr, regs))
++ goto cleanup;
+
+- fill_note(notes + 0, "CORE", NT_PRSTATUS, sizeof(*prstatus), prstatus);
+- fill_psinfo(psinfo, current->group_leader, current->mm);
+- fill_note(notes + 1, "CORE", NT_PRPSINFO, sizeof(*psinfo), psinfo);
+-
+- numnote = 2;
-
-- sp->request_sense_ptr += sense_len;
-- sp->request_sense_length -= sense_len;
-- if (sp->request_sense_length != 0)
-- ha->status_srb = sp;
+- auxv = (elf_addr_t *)current->mm->saved_auxv;
-
-- DEBUG5(printk("%s(): Check condition Sense data, "
-- "scsi(%ld:%d:%d:%d) cmd=%p pid=%ld\n",
-- __func__, ha->host_no, cp->device->channel,
-- cp->device->id, cp->device->lun, cp,
-- cp->serial_number));
-+ qla2x00_handle_sense(sp, sense_data, sense_len);
-
- /*
- * In case of a Underrun condition, set both the lscsi
-@@ -1108,10 +1088,6 @@ qla2x00_status_entry(scsi_qla_host_t *ha, void *pkt)
-
- cp->result = DID_ERROR << 16 | lscsi_status;
- }
+- i = 0;
+- do
+- i += 2;
+- while (auxv[i - 2] != AT_NULL);
+- fill_note(&notes[numnote++], "CORE", NT_AUXV,
+- i * sizeof(elf_addr_t), auxv);
+-
+- /* Try to dump the FPU. */
+- if ((prstatus->pr_fpvalid =
+- elf_core_copy_task_fpregs(current, regs, fpu)))
+- fill_note(notes + numnote++,
+- "CORE", NT_PRFPREG, sizeof(*fpu), fpu);
+-#ifdef ELF_CORE_COPY_XFPREGS
+- if (elf_core_copy_task_xfpregs(current, xfpu))
+- fill_note(notes + numnote++,
+- "LINUX", ELF_CORE_XFPREG_TYPE, sizeof(*xfpu), xfpu);
+-#endif
++ has_dumped = 1;
++ current->flags |= PF_DUMPCORE;
+
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+@@ -1678,12 +2056,7 @@ static int elf_core_dump(long signr, struct pt_regs *regs, struct file *file, un
+ /* Write notes phdr entry */
+ {
+ struct elf_phdr phdr;
+- int sz = 0;
-
-- if (sense_len)
-- DEBUG5(qla2x00_dump_buffer(cp->sense_buffer,
-- CMD_ACTUAL_SNSLEN(cp)));
- } else {
- /*
- * If RISC reports underrun and target does not report
-@@ -1621,7 +1597,7 @@ qla24xx_intr_handler(int irq, void *dev_id)
- if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
- (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
- set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
-- up(&ha->mbx_intr_sem);
-+ complete(&ha->mbx_intr_comp);
- }
+- for (i = 0; i < numnote; i++)
+- sz += notesize(notes + i);
+-
+- sz += thread_status_size;
++ size_t sz = get_note_info_size(&info);
- return IRQ_HANDLED;
-@@ -1758,7 +1734,7 @@ qla24xx_msix_default(int irq, void *dev_id)
- if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) &&
- (status & MBX_INTERRUPT) && ha->flags.mbox_int) {
- set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
-- up(&ha->mbx_intr_sem);
-+ complete(&ha->mbx_intr_comp);
- }
+ sz += elf_coredump_extra_notes_size();
- return IRQ_HANDLED;
-@@ -1853,6 +1829,18 @@ qla2x00_request_irqs(scsi_qla_host_t *ha)
- goto skip_msix;
- }
+@@ -1728,23 +2101,12 @@ static int elf_core_dump(long signr, struct pt_regs *regs, struct file *file, un
+ #endif
-+ if (ha->pdev->subsystem_vendor == PCI_VENDOR_ID_HP &&
-+ (ha->pdev->subsystem_device == 0x7040 ||
-+ ha->pdev->subsystem_device == 0x7041 ||
-+ ha->pdev->subsystem_device == 0x1705)) {
-+ DEBUG2(qla_printk(KERN_WARNING, ha,
-+ "MSI-X: Unsupported ISP2432 SSVID/SSDID (0x%X, 0x%X).\n",
-+ ha->pdev->subsystem_vendor,
-+ ha->pdev->subsystem_device));
-+
-+ goto skip_msi;
-+ }
-+
- ret = qla24xx_enable_msix(ha);
- if (!ret) {
- DEBUG2(qla_printk(KERN_INFO, ha,
-diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
-index ccd662a..0c10c0b 100644
---- a/drivers/scsi/qla2xxx/qla_mbx.c
-+++ b/drivers/scsi/qla2xxx/qla_mbx.c
-@@ -8,19 +8,6 @@
+ /* write out the notes section */
+- for (i = 0; i < numnote; i++)
+- if (!writenote(notes + i, file, &foffset))
+- goto end_coredump;
++ if (!write_note_info(&info, file, &foffset))
++ goto end_coredump;
- #include <linux/delay.h>
+ if (elf_coredump_extra_notes_write(file, &foffset))
+ goto end_coredump;
--static void
--qla2x00_mbx_sem_timeout(unsigned long data)
--{
-- struct semaphore *sem_ptr = (struct semaphore *)data;
--
-- DEBUG11(printk("qla2x00_sem_timeout: entered.\n"));
+- /* write out the thread status notes section */
+- list_for_each(t, &thread_list) {
+- struct elf_thread_status *tmp =
+- list_entry(t, struct elf_thread_status, list);
-
-- if (sem_ptr != NULL) {
-- up(sem_ptr);
+- for (i = 0; i < tmp->num_notes; i++)
+- if (!writenote(&tmp->notes[i], file, &foffset))
+- goto end_coredump;
- }
-
-- DEBUG11(printk("qla2x00_mbx_sem_timeout: exiting.\n"));
--}
+ /* Align to page */
+ DUMP_SEEK(dataoff - foffset);
- /*
- * qla2x00_mailbox_command
-@@ -47,7 +34,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
- int rval;
- unsigned long flags = 0;
- device_reg_t __iomem *reg;
-- struct timer_list tmp_intr_timer;
- uint8_t abort_active;
- uint8_t io_lock_on;
- uint16_t command;
-@@ -72,7 +58,8 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
- * non ISP abort time.
- */
- if (!abort_active) {
-- if (qla2x00_down_timeout(&ha->mbx_cmd_sem, mcp->tov * HZ)) {
-+ if (!wait_for_completion_timeout(&ha->mbx_cmd_comp,
-+ mcp->tov * HZ)) {
- /* Timeout occurred. Return error. */
- DEBUG2_3_11(printk("%s(%ld): cmd access timeout. "
- "Exiting.\n", __func__, ha->host_no));
-@@ -135,22 +122,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
- /* Wait for mbx cmd completion until timeout */
+@@ -1795,22 +2157,9 @@ end_coredump:
+ set_fs(fs);
- if (!abort_active && io_lock_on) {
-- /* sleep on completion semaphore */
-- DEBUG11(printk("%s(%ld): INTERRUPT MODE. Initializing timer.\n",
-- __func__, ha->host_no));
--
-- init_timer(&tmp_intr_timer);
-- tmp_intr_timer.data = (unsigned long)&ha->mbx_intr_sem;
-- tmp_intr_timer.expires = jiffies + mcp->tov * HZ;
-- tmp_intr_timer.function =
-- (void (*)(unsigned long))qla2x00_mbx_sem_timeout;
--
-- DEBUG11(printk("%s(%ld): Adding timer.\n", __func__,
-- ha->host_no));
-- add_timer(&tmp_intr_timer);
+ cleanup:
+- while (!list_empty(&thread_list)) {
+- struct list_head *tmp = thread_list.next;
+- list_del(tmp);
+- kfree(list_entry(tmp, struct elf_thread_status, list));
+- }
-
-- DEBUG11(printk("%s(%ld): going to unlock & sleep. "
-- "time=0x%lx.\n", __func__, ha->host_no, jiffies));
-
- set_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
-
-@@ -160,17 +131,10 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
- WRT_REG_WORD(&reg->isp.hccr, HCCR_SET_HOST_INT);
- spin_unlock_irqrestore(&ha->hardware_lock, flags);
-
-- /* Wait for either the timer to expire
-- * or the mbox completion interrupt
-- */
-- down(&ha->mbx_intr_sem);
-+ wait_for_completion_timeout(&ha->mbx_intr_comp, mcp->tov * HZ);
-
-- DEBUG11(printk("%s(%ld): waking up. time=0x%lx\n", __func__,
-- ha->host_no, jiffies));
- clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
-
-- /* delete the timer */
-- del_timer(&tmp_intr_timer);
- } else {
- DEBUG3_11(printk("%s(%ld): cmd=%x POLLING MODE.\n", __func__,
- ha->host_no, command));
-@@ -299,7 +263,7 @@ qla2x00_mailbox_command(scsi_qla_host_t *pvha, mbx_cmd_t *mcp)
-
- /* Allow next mbx cmd to come in. */
- if (!abort_active)
-- up(&ha->mbx_cmd_sem);
-+ complete(&ha->mbx_cmd_comp);
-
- if (rval) {
- DEBUG2_3_11(printk("%s(%ld): **** FAILED. mbx0=%x, mbx1=%x, "
-@@ -905,7 +869,7 @@ qla2x00_get_adapter_id(scsi_qla_host_t *ha, uint16_t *id, uint8_t *al_pa,
-
- mcp->mb[0] = MBC_GET_ADAPTER_LOOP_ID;
- mcp->mb[9] = ha->vp_idx;
-- mcp->out_mb = MBX_0;
-+ mcp->out_mb = MBX_9|MBX_0;
- mcp->in_mb = MBX_9|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
- mcp->tov = 30;
- mcp->flags = 0;
-@@ -1016,7 +980,7 @@ qla2x00_init_firmware(scsi_qla_host_t *ha, uint16_t size)
- DEBUG11(printk("qla2x00_init_firmware(%ld): entered.\n",
- ha->host_no));
+ kfree(elf);
+- kfree(prstatus);
+- kfree(psinfo);
+- kfree(notes);
+- kfree(fpu);
+-#ifdef ELF_CORE_COPY_XFPREGS
+- kfree(xfpu);
+-#endif
++ free_note_info(&info);
+ return has_dumped;
+-#undef NUM_NOTES
+ }
-- if (ha->flags.npiv_supported)
-+ if (ha->fw_attributes & BIT_2)
- mcp->mb[0] = MBC_MID_INITIALIZE_FIRMWARE;
- else
- mcp->mb[0] = MBC_INITIALIZE_FIRMWARE;
-@@ -2042,29 +2006,20 @@ qla2x00_get_fcal_position_map(scsi_qla_host_t *ha, char *pos_map)
+ #endif /* USE_ELF_CORE_DUMP */
+diff --git a/fs/bio.c b/fs/bio.c
+index d59ddbf..242e409 100644
+--- a/fs/bio.c
++++ b/fs/bio.c
+@@ -248,11 +248,13 @@ inline int bio_hw_segments(struct request_queue *q, struct bio *bio)
*/
- int
- qla2x00_get_link_status(scsi_qla_host_t *ha, uint16_t loop_id,
-- link_stat_t *ret_buf, uint16_t *status)
-+ struct link_statistics *stats, dma_addr_t stats_dma)
+ void __bio_clone(struct bio *bio, struct bio *bio_src)
{
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
-- link_stat_t *stat_buf;
-- dma_addr_t stat_buf_dma;
-+ uint32_t *siter, *diter, dwords;
-
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-
-- stat_buf = dma_pool_alloc(ha->s_dma_pool, GFP_ATOMIC, &stat_buf_dma);
-- if (stat_buf == NULL) {
-- DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
-- __func__, ha->host_no));
-- return BIT_0;
-- }
-- memset(stat_buf, 0, sizeof(link_stat_t));
--
- mcp->mb[0] = MBC_GET_LINK_STATUS;
-- mcp->mb[2] = MSW(stat_buf_dma);
-- mcp->mb[3] = LSW(stat_buf_dma);
-- mcp->mb[6] = MSW(MSD(stat_buf_dma));
-- mcp->mb[7] = LSW(MSD(stat_buf_dma));
-+ mcp->mb[2] = MSW(stats_dma);
-+ mcp->mb[3] = LSW(stats_dma);
-+ mcp->mb[6] = MSW(MSD(stats_dma));
-+ mcp->mb[7] = LSW(MSD(stats_dma));
- mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
- mcp->in_mb = MBX_0;
- if (IS_FWI2_CAPABLE(ha)) {
-@@ -2089,78 +2044,43 @@ qla2x00_get_link_status(scsi_qla_host_t *ha, uint16_t loop_id,
- if (mcp->mb[0] != MBS_COMMAND_COMPLETE) {
- DEBUG2_3_11(printk("%s(%ld): cmd failed. mbx0=%x.\n",
- __func__, ha->host_no, mcp->mb[0]));
-- status[0] = mcp->mb[0];
-- rval = BIT_1;
-+ rval = QLA_FUNCTION_FAILED;
- } else {
-- /* copy over data -- firmware data is LE. */
-- ret_buf->link_fail_cnt =
-- le32_to_cpu(stat_buf->link_fail_cnt);
-- ret_buf->loss_sync_cnt =
-- le32_to_cpu(stat_buf->loss_sync_cnt);
-- ret_buf->loss_sig_cnt =
-- le32_to_cpu(stat_buf->loss_sig_cnt);
-- ret_buf->prim_seq_err_cnt =
-- le32_to_cpu(stat_buf->prim_seq_err_cnt);
-- ret_buf->inval_xmit_word_cnt =
-- le32_to_cpu(stat_buf->inval_xmit_word_cnt);
-- ret_buf->inval_crc_cnt =
-- le32_to_cpu(stat_buf->inval_crc_cnt);
+- struct request_queue *q = bdev_get_queue(bio_src->bi_bdev);
-
-- DEBUG11(printk("%s(%ld): stat dump: fail_cnt=%d "
-- "loss_sync=%d loss_sig=%d seq_err=%d "
-- "inval_xmt_word=%d inval_crc=%d.\n", __func__,
-- ha->host_no, stat_buf->link_fail_cnt,
-- stat_buf->loss_sync_cnt, stat_buf->loss_sig_cnt,
-- stat_buf->prim_seq_err_cnt,
-- stat_buf->inval_xmit_word_cnt,
-- stat_buf->inval_crc_cnt));
-+ /* Copy over data -- firmware data is LE. */
-+ dwords = offsetof(struct link_statistics, unused1) / 4;
-+ siter = diter = &stats->link_fail_cnt;
-+ while (dwords--)
-+ *diter++ = le32_to_cpu(*siter++);
- }
- } else {
- /* Failed. */
- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
- ha->host_no, rval));
-- rval = BIT_1;
- }
+ memcpy(bio->bi_io_vec, bio_src->bi_io_vec,
+ bio_src->bi_max_vecs * sizeof(struct bio_vec));
-- dma_pool_free(ha->s_dma_pool, stat_buf, stat_buf_dma);
--
- return rval;
++ /*
++ * most users will be overriding ->bi_bdev with a new target,
++ * so we don't set nor calculate new physical/hw segment counts here
++ */
+ bio->bi_sector = bio_src->bi_sector;
+ bio->bi_bdev = bio_src->bi_bdev;
+ bio->bi_flags |= 1 << BIO_CLONED;
+@@ -260,8 +262,6 @@ void __bio_clone(struct bio *bio, struct bio *bio_src)
+ bio->bi_vcnt = bio_src->bi_vcnt;
+ bio->bi_size = bio_src->bi_size;
+ bio->bi_idx = bio_src->bi_idx;
+- bio_phys_segments(q, bio);
+- bio_hw_segments(q, bio);
}
- int
--qla24xx_get_isp_stats(scsi_qla_host_t *ha, uint32_t *dwbuf, uint32_t dwords,
-- uint16_t *status)
-+qla24xx_get_isp_stats(scsi_qla_host_t *ha, struct link_statistics *stats,
-+ dma_addr_t stats_dma)
+ /**
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 993f78c..e48a630 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -738,9 +738,9 @@ EXPORT_SYMBOL(bd_release);
+ static struct kobject *bdev_get_kobj(struct block_device *bdev)
{
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
-- uint32_t *sbuf, *siter;
-- dma_addr_t sbuf_dma;
-+ uint32_t *siter, *diter, dwords;
-
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+ if (bdev->bd_contains != bdev)
+- return kobject_get(&bdev->bd_part->kobj);
++ return kobject_get(&bdev->bd_part->dev.kobj);
+ else
+- return kobject_get(&bdev->bd_disk->kobj);
++ return kobject_get(&bdev->bd_disk->dev.kobj);
+ }
-- if (dwords > (DMA_POOL_SIZE / 4)) {
-- DEBUG2_3_11(printk("%s(%ld): Unabled to retrieve %d DWORDs "
-- "(max %d).\n", __func__, ha->host_no, dwords,
-- DMA_POOL_SIZE / 4));
-- return BIT_0;
-- }
-- sbuf = dma_pool_alloc(ha->s_dma_pool, GFP_ATOMIC, &sbuf_dma);
-- if (sbuf == NULL) {
-- DEBUG2_3_11(printk("%s(%ld): Failed to allocate memory.\n",
-- __func__, ha->host_no));
-- return BIT_0;
-- }
-- memset(sbuf, 0, DMA_POOL_SIZE);
--
- mcp->mb[0] = MBC_GET_LINK_PRIV_STATS;
-- mcp->mb[2] = MSW(sbuf_dma);
-- mcp->mb[3] = LSW(sbuf_dma);
-- mcp->mb[6] = MSW(MSD(sbuf_dma));
-- mcp->mb[7] = LSW(MSD(sbuf_dma));
-- mcp->mb[8] = dwords;
-+ mcp->mb[2] = MSW(stats_dma);
-+ mcp->mb[3] = LSW(stats_dma);
-+ mcp->mb[6] = MSW(MSD(stats_dma));
-+ mcp->mb[7] = LSW(MSD(stats_dma));
-+ mcp->mb[8] = sizeof(struct link_statistics) / 4;
-+ mcp->mb[9] = ha->vp_idx;
- mcp->mb[10] = 0;
-- mcp->out_mb = MBX_10|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
-+ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
- mcp->in_mb = MBX_2|MBX_1|MBX_0;
- mcp->tov = 30;
- mcp->flags = IOCTL_CMD;
-@@ -2170,23 +2090,20 @@ qla24xx_get_isp_stats(scsi_qla_host_t *ha, uint32_t *dwbuf, uint32_t dwords,
- if (mcp->mb[0] != MBS_COMMAND_COMPLETE) {
- DEBUG2_3_11(printk("%s(%ld): cmd failed. mbx0=%x.\n",
- __func__, ha->host_no, mcp->mb[0]));
-- status[0] = mcp->mb[0];
-- rval = BIT_1;
-+ rval = QLA_FUNCTION_FAILED;
- } else {
- /* Copy over data -- firmware data is LE. */
-- siter = sbuf;
-+ dwords = sizeof(struct link_statistics) / 4;
-+ siter = diter = &stats->link_fail_cnt;
- while (dwords--)
-- *dwbuf++ = le32_to_cpu(*siter++);
-+ *diter++ = le32_to_cpu(*siter++);
+ static struct kobject *bdev_get_holder(struct block_device *bdev)
+@@ -1176,7 +1176,7 @@ static int do_open(struct block_device *bdev, struct file *file, int for_part)
+ ret = -ENXIO;
+ goto out_first;
+ }
+- kobject_get(&p->kobj);
++ kobject_get(&p->dev.kobj);
+ bdev->bd_part = p;
+ bd_set_size(bdev, (loff_t) p->nr_sects << 9);
}
- } else {
- /* Failed. */
- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
- ha->host_no, rval));
-- rval = BIT_1;
- }
-
-- dma_pool_free(ha->s_dma_pool, sbuf, sbuf_dma);
--
- return rval;
- }
+@@ -1299,7 +1299,7 @@ static int __blkdev_put(struct block_device *bdev, int for_part)
+ module_put(owner);
-@@ -2331,6 +2248,8 @@ atarget_done:
- return rval;
+ if (bdev->bd_contains != bdev) {
+- kobject_put(&bdev->bd_part->kobj);
++ kobject_put(&bdev->bd_part->dev.kobj);
+ bdev->bd_part = NULL;
+ }
+ bdev->bd_disk = NULL;
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 7249e01..456c9ab 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3213,6 +3213,50 @@ static int buffer_cpu_notify(struct notifier_block *self,
+ return NOTIFY_OK;
}
-+#if 0
++/**
++ * bh_uptodate_or_lock: Test whether the buffer is uptodate
++ * @bh: struct buffer_head
++ *
++ * Return true if the buffer is up-to-date and false,
++ * with the buffer locked, if not.
++ */
++int bh_uptodate_or_lock(struct buffer_head *bh)
++{
++ if (!buffer_uptodate(bh)) {
++ lock_buffer(bh);
++ if (!buffer_uptodate(bh))
++ return 0;
++ unlock_buffer(bh);
++ }
++ return 1;
++}
++EXPORT_SYMBOL(bh_uptodate_or_lock);
+
- int
- qla2x00_system_error(scsi_qla_host_t *ha)
++/**
++ * bh_submit_read: Submit a locked buffer for reading
++ * @bh: struct buffer_head
++ *
++ * Returns zero on success and -EIO on error.
++ */
++int bh_submit_read(struct buffer_head *bh)
++{
++ BUG_ON(!buffer_locked(bh));
++
++ if (buffer_uptodate(bh)) {
++ unlock_buffer(bh);
++ return 0;
++ }
++
++ get_bh(bh);
++ bh->b_end_io = end_buffer_read_sync;
++ submit_bh(READ, bh);
++ wait_on_buffer(bh);
++ if (buffer_uptodate(bh))
++ return 0;
++ return -EIO;
++}
++EXPORT_SYMBOL(bh_submit_read);
++
+ void __init buffer_init(void)
{
-@@ -2360,47 +2279,7 @@ qla2x00_system_error(scsi_qla_host_t *ha)
- return rval;
- }
-
--/**
-- * qla2x00_get_serdes_params() -
-- * @ha: HA context
-- *
-- * Returns
-- */
--int
--qla2x00_get_serdes_params(scsi_qla_host_t *ha, uint16_t *sw_em_1g,
-- uint16_t *sw_em_2g, uint16_t *sw_em_4g)
--{
-- int rval;
-- mbx_cmd_t mc;
-- mbx_cmd_t *mcp = &mc;
--
-- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
--
-- mcp->mb[0] = MBC_SERDES_PARAMS;
-- mcp->mb[1] = 0;
-- mcp->out_mb = MBX_1|MBX_0;
-- mcp->in_mb = MBX_4|MBX_3|MBX_2|MBX_0;
-- mcp->tov = 30;
-- mcp->flags = 0;
-- rval = qla2x00_mailbox_command(ha, mcp);
--
-- if (rval != QLA_SUCCESS) {
-- /*EMPTY*/
-- DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
-- ha->host_no, rval, mcp->mb[0]));
-- } else {
-- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
--
-- if (sw_em_1g)
-- *sw_em_1g = mcp->mb[2];
-- if (sw_em_2g)
-- *sw_em_2g = mcp->mb[3];
-- if (sw_em_4g)
-- *sw_em_4g = mcp->mb[4];
-- }
--
-- return rval;
--}
-+#endif /* 0 */
-
- /**
- * qla2x00_set_serdes_params() -
-@@ -2471,7 +2350,7 @@ qla2x00_stop_firmware(scsi_qla_host_t *ha)
- }
-
- int
--qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
-+qla2x00_enable_eft_trace(scsi_qla_host_t *ha, dma_addr_t eft_dma,
- uint16_t buffers)
+ int nrpages;
+diff --git a/fs/char_dev.c b/fs/char_dev.c
+index c3bfa76..2c7a8b5 100644
+--- a/fs/char_dev.c
++++ b/fs/char_dev.c
+@@ -510,9 +510,8 @@ struct cdev *cdev_alloc(void)
{
- int rval;
-@@ -2484,22 +2363,18 @@ qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-
- mcp->mb[0] = MBC_TRACE_CONTROL;
-- mcp->mb[1] = ctrl;
-- mcp->out_mb = MBX_1|MBX_0;
-+ mcp->mb[1] = TC_EFT_ENABLE;
-+ mcp->mb[2] = LSW(eft_dma);
-+ mcp->mb[3] = MSW(eft_dma);
-+ mcp->mb[4] = LSW(MSD(eft_dma));
-+ mcp->mb[5] = MSW(MSD(eft_dma));
-+ mcp->mb[6] = buffers;
-+ mcp->mb[7] = TC_AEN_DISABLE;
-+ mcp->out_mb = MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
- mcp->in_mb = MBX_1|MBX_0;
-- if (ctrl == TC_ENABLE) {
-- mcp->mb[2] = LSW(eft_dma);
-- mcp->mb[3] = MSW(eft_dma);
-- mcp->mb[4] = LSW(MSD(eft_dma));
-- mcp->mb[5] = MSW(MSD(eft_dma));
-- mcp->mb[6] = buffers;
-- mcp->mb[7] = 0;
-- mcp->out_mb |= MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2;
-- }
- mcp->tov = 30;
- mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
--
- if (rval != QLA_SUCCESS) {
- DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
- __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
-@@ -2511,8 +2386,7 @@ qla2x00_trace_control(scsi_qla_host_t *ha, uint16_t ctrl, dma_addr_t eft_dma,
+ struct cdev *p = kzalloc(sizeof(struct cdev), GFP_KERNEL);
+ if (p) {
+- p->kobj.ktype = &ktype_cdev_dynamic;
+ INIT_LIST_HEAD(&p->list);
+- kobject_init(&p->kobj);
++ kobject_init(&p->kobj, &ktype_cdev_dynamic);
+ }
+ return p;
}
-
- int
--qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
-- uint16_t off, uint16_t count)
-+qla2x00_disable_eft_trace(scsi_qla_host_t *ha)
+@@ -529,8 +528,7 @@ void cdev_init(struct cdev *cdev, const struct file_operations *fops)
{
- int rval;
- mbx_cmd_t mc;
-@@ -2523,24 +2397,16 @@ qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
+ memset(cdev, 0, sizeof *cdev);
+ INIT_LIST_HEAD(&cdev->list);
+- cdev->kobj.ktype = &ktype_cdev_default;
+- kobject_init(&cdev->kobj);
++ kobject_init(&cdev->kobj, &ktype_cdev_default);
+ cdev->ops = fops;
+ }
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+diff --git a/fs/cifs/CHANGES b/fs/cifs/CHANGES
+index a609599..edd2483 100644
+--- a/fs/cifs/CHANGES
++++ b/fs/cifs/CHANGES
+@@ -3,7 +3,10 @@ Version 1.52
+ Fix oops on second mount to server when null auth is used.
+ Enable experimental Kerberos support. Return writebehind errors on flush
+ and sync so that events like out of disk space get reported properly on
+-cached files.
++cached files. Fix setxattr failure to certain Samba versions. Fix mount
++of second share to disconnected server session (autoreconnect on this).
++Add ability to modify cifs acls for handling chmod (when mounted with
++cifsacl flag).
-- mcp->mb[0] = MBC_READ_SFP;
-- mcp->mb[1] = addr;
-- mcp->mb[2] = MSW(sfp_dma);
-- mcp->mb[3] = LSW(sfp_dma);
-- mcp->mb[6] = MSW(MSD(sfp_dma));
-- mcp->mb[7] = LSW(MSD(sfp_dma));
-- mcp->mb[8] = count;
-- mcp->mb[9] = off;
-- mcp->mb[10] = 0;
-- mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
-- mcp->in_mb = MBX_0;
-+ mcp->mb[0] = MBC_TRACE_CONTROL;
-+ mcp->mb[1] = TC_EFT_DISABLE;
-+ mcp->out_mb = MBX_1|MBX_0;
-+ mcp->in_mb = MBX_1|MBX_0;
- mcp->tov = 30;
- mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
--
- if (rval != QLA_SUCCESS) {
-- DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
-- ha->host_no, rval, mcp->mb[0]));
-+ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
-+ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
- } else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
- }
-@@ -2549,176 +2415,168 @@ qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
- }
+ Version 1.51
+ ------------
+diff --git a/fs/cifs/Makefile b/fs/cifs/Makefile
+index 45e42fb..6ba43fb 100644
+--- a/fs/cifs/Makefile
++++ b/fs/cifs/Makefile
+@@ -9,3 +9,5 @@ cifs-y := cifsfs.o cifssmb.o cifs_debug.o connect.o dir.o file.o inode.o \
+ readdir.o ioctl.o sess.o export.o cifsacl.o
- int
--qla2x00_get_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
-- uint16_t *port_speed, uint16_t *mb)
-+qla2x00_enable_fce_trace(scsi_qla_host_t *ha, dma_addr_t fce_dma,
-+ uint16_t buffers, uint16_t *mb, uint32_t *dwords)
- {
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
+ cifs-$(CONFIG_CIFS_UPCALL) += cifs_spnego.o
++
++cifs-$(CONFIG_CIFS_DFS_UPCALL) += dns_resolve.o cifs_dfs_ref.o
+diff --git a/fs/cifs/README b/fs/cifs/README
+index bf11329..c623e2f 100644
+--- a/fs/cifs/README
++++ b/fs/cifs/README
+@@ -56,7 +56,8 @@ the CIFS VFS web site) copy it to the same directory in which mount.smbfs and
+ similar files reside (usually /sbin). Although the helper software is not
+ required, mount.cifs is recommended. Eventually the Samba 3.0 utility program
+ "net" may also be helpful since it may someday provide easier mount syntax for
+-users who are used to Windows e.g. net use <mount point> <UNC name or cifs URL>
++users who are used to Windows e.g.
++ net use <mount point> <UNC name or cifs URL>
+ Note that running the Winbind pam/nss module (logon service) on all of your
+ Linux clients is useful in mapping Uids and Gids consistently across the
+ domain to the proper network user. The mount.cifs mount helper can be
+@@ -248,7 +249,7 @@ A partial list of the supported mount options follows:
+ the CIFS session.
+ password The user password. If the mount helper is
+ installed, the user will be prompted for password
+- if it is not supplied.
++ if not supplied.
+ ip The ip address of the target server
+ unc The target server Universal Network Name (export) to
+ mount.
+@@ -283,7 +284,7 @@ A partial list of the supported mount options follows:
+ can be enabled by specifying file_mode and dir_mode on
+ the client. Note that the mount.cifs helper must be
+ at version 1.10 or higher to support specifying the uid
+- (or gid) in non-numberic form.
++ (or gid) in non-numeric form.
+ gid Set the default gid for inodes (similar to above).
+ file_mode If CIFS Unix extensions are not supported by the server
+ this overrides the default mode for file inodes.
+@@ -417,9 +418,10 @@ A partial list of the supported mount options follows:
+ acl Allow setfacl and getfacl to manage posix ACLs if server
+ supports them. (default)
+ noacl Do not allow setfacl and getfacl calls on this mount
+- user_xattr Allow getting and setting user xattrs as OS/2 EAs (extended
+- attributes) to the server (default) e.g. via setfattr
+- and getfattr utilities.
++ user_xattr Allow getting and setting user xattrs (those attributes whose
++ name begins with "user." or "os2.") as OS/2 EAs (extended
++ attributes) to the server. This allows support of the
++ setfattr and getfattr utilities. (default)
+ nouser_xattr Do not allow getfattr/setfattr to get/set/list xattrs
+ mapchars Translate six of the seven reserved characters (not backslash)
+ *?<>|:
+@@ -434,6 +436,7 @@ A partial list of the supported mount options follows:
+ nomapchars Do not translate any of these seven characters (default).
+ nocase Request case insensitive path name matching (case
+ sensitive is the default if the server suports it).
++ (mount option "ignorecase" is identical to "nocase")
+ posixpaths If CIFS Unix extensions are supported, attempt to
+ negotiate posix path name support which allows certain
+ characters forbidden in typical CIFS filenames, without
+@@ -485,6 +488,9 @@ A partial list of the supported mount options follows:
+ ntlmv2i Use NTLMv2 password hashing with packet signing
+ lanman (if configured in kernel config) use older
+ lanman hash
++hard Retry file operations if server is not responding
++soft Limit retries to unresponsive servers (usually only
++ one retry) before returning an error. (default)
-- if (!IS_IIDMA_CAPABLE(ha))
-+ if (!IS_QLA25XX(ha))
- return QLA_FUNCTION_FAILED;
+ The mount.cifs mount helper also accepts a few mount options before -o
+ including:
+@@ -535,8 +541,8 @@ SecurityFlags Flags which control security negotiation and
+ must use NTLM 0x02002
+ may use NTLMv2 0x00004
+ must use NTLMv2 0x04004
+- may use Kerberos security (not implemented yet) 0x00008
+- must use Kerberos (not implemented yet) 0x08008
++ may use Kerberos security 0x00008
++ must use Kerberos 0x08008
+ may use lanman (weak) password hash 0x00010
+ must use lanman password hash 0x10010
+ may use plaintext passwords 0x00020
+@@ -626,6 +632,6 @@ returned success.
+
+ Also note that "cat /proc/fs/cifs/DebugData" will display information about
+ the active sessions and the shares that are mounted.
+-Enabling Kerberos (extended security) works when CONFIG_CIFS_EXPERIMENTAL is enabled
+-but requires a user space helper (from the Samba project). NTLM and NTLMv2 and
+-LANMAN support do not require this helpr.
++Enabling Kerberos (extended security) works when CONFIG_CIFS_EXPERIMENTAL is
++on but requires a user space helper (from the Samba project). NTLM and NTLMv2 and
++LANMAN support do not require this helper.
+diff --git a/fs/cifs/TODO b/fs/cifs/TODO
+index a8852c2..92c9fea 100644
+--- a/fs/cifs/TODO
++++ b/fs/cifs/TODO
+@@ -1,4 +1,4 @@
+-Version 1.49 April 26, 2007
++Version 1.52 January 3, 2008
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
+ A Partial List of Missing Features
+ ==================================
+@@ -16,16 +16,14 @@ SecurityDescriptors
+ c) Better pam/winbind integration (e.g. to handle uid mapping
+ better)
-- mcp->mb[0] = MBC_PORT_PARAMS;
-- mcp->mb[1] = loop_id;
-- mcp->mb[2] = mcp->mb[3] = mcp->mb[4] = mcp->mb[5] = 0;
-- mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
-- mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
-+ mcp->mb[0] = MBC_TRACE_CONTROL;
-+ mcp->mb[1] = TC_FCE_ENABLE;
-+ mcp->mb[2] = LSW(fce_dma);
-+ mcp->mb[3] = MSW(fce_dma);
-+ mcp->mb[4] = LSW(MSD(fce_dma));
-+ mcp->mb[5] = MSW(MSD(fce_dma));
-+ mcp->mb[6] = buffers;
-+ mcp->mb[7] = TC_AEN_DISABLE;
-+ mcp->mb[8] = 0;
-+ mcp->mb[9] = TC_FCE_DEFAULT_RX_SIZE;
-+ mcp->mb[10] = TC_FCE_DEFAULT_TX_SIZE;
-+ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|
-+ MBX_1|MBX_0;
-+ mcp->in_mb = MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
- mcp->tov = 30;
- mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
--
-- /* Return mailbox statuses. */
-- if (mb != NULL) {
-- mb[0] = mcp->mb[0];
-- mb[1] = mcp->mb[1];
-- mb[3] = mcp->mb[3];
-- mb[4] = mcp->mb[4];
-- mb[5] = mcp->mb[5];
-- }
+-d) Verify that Kerberos signing works
-
- if (rval != QLA_SUCCESS) {
-- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
-- ha->host_no, rval));
-+ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
-+ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
- } else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
-- if (port_speed)
-- *port_speed = mcp->mb[3];
+-e) Cleanup now unneeded SessSetup code in
++d) Cleanup now unneeded SessSetup code in
+ fs/cifs/connect.c and add back in NTLMSSP code if any servers
+ need it
+
+-f) MD5-HMAC signing SMB PDUs when SPNEGO style SessionSetup
+-used (Kerberos or NTLMSSP). Signing alreadyimplemented for NTLM
+-and raw NTLMSSP already. This is important when enabling
+-extended security and mounting to Windows 2003 Servers
++e) ms-dfs and ms-dfs host name resolution cleanup
++
++f) fix NTLMv2 signing when two mounts with different users to same
++server.
+
+ g) Directory entry caching relies on a 1 second timer, rather than
+ using FindNotify or equivalent. - (started)
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+new file mode 100644
+index 0000000..413ee23
+--- /dev/null
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -0,0 +1,377 @@
++/*
++ * Contains the CIFS DFS referral mounting routines used for handling
++ * traversal via DFS junction point
++ *
++ * Copyright (c) 2007 Igor Mammedov
++ * Copyright (C) International Business Machines Corp., 2008
++ * Author(s): Igor Mammedov (niallain at gmail.com)
++ * Steve French (sfrench at us.ibm.com)
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * as published by the Free Software Foundation; either version
++ * 2 of the License, or (at your option) any later version.
++ */
++
++#include <linux/dcache.h>
++#include <linux/mount.h>
++#include <linux/namei.h>
++#include <linux/vfs.h>
++#include <linux/fs.h>
++#include "cifsglob.h"
++#include "cifsproto.h"
++#include "cifsfs.h"
++#include "dns_resolve.h"
++#include "cifs_debug.h"
++
++LIST_HEAD(cifs_dfs_automount_list);
++
++/*
++ * DFS functions
++*/
++
++void dfs_shrink_umount_helper(struct vfsmount *vfsmnt)
++{
++ mark_mounts_for_expiry(&cifs_dfs_automount_list);
++ mark_mounts_for_expiry(&cifs_dfs_automount_list);
++ shrink_submounts(vfsmnt, &cifs_dfs_automount_list);
++}
++
++/**
++ * cifs_get_share_name - extracts share name from UNC
++ * @node_name: pointer to UNC string
++ *
++ * Extracts the share name from a full UNC,
++ * i.e. strips from the UNC the trailing path that is not part of the
++ * share name and fixes up a missing '\' at the beginning of the DFS
++ * node referral if necessary.
++ * Returns pointer to share name on success or NULL on error.
++ * Caller is responsible for freeing returned string.
++ */
++static char *cifs_get_share_name(const char *node_name)
++{
++ int len;
++ char *UNC;
++ char *pSep;
++
++ len = strlen(node_name);
++ UNC = kmalloc(len+2 /*for term null and additional \ if it's missed */,
++ GFP_KERNEL);
++ if (!UNC)
++ return NULL;
++
++ /* get share name and server name */
++ if (node_name[1] != '\\') {
++ UNC[0] = '\\';
++ strncpy(UNC+1, node_name, len);
++ len++;
++ UNC[len] = 0;
++ } else {
++ strncpy(UNC, node_name, len);
++ UNC[len] = 0;
++ }
++
++ /* find server name end */
++ pSep = memchr(UNC+2, '\\', len-2);
++ if (!pSep) {
++ cERROR(1, ("%s: no server name end in node name: %s",
++ __FUNCTION__, node_name));
++ kfree(UNC);
++ return NULL;
++ }
++
++ /* find sharename end */
++ pSep++;
++ pSep = memchr(UNC+(pSep-UNC), '\\', len-(pSep-UNC));
++ if (!pSep) {
++ cERROR(1, ("%s:2 cant find share name in node name: %s",
++ __FUNCTION__, node_name));
++ kfree(UNC);
++ return NULL;
++ }
++ /* trim path up to share name end;
++ * now we have the share name in UNC */
++ *pSep = 0;
++
++ return UNC;
++}
++
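The extraction above can be exercised in isolation. Below is a minimal userspace sketch of the same parsing (plain libc in place of kmalloc/cERROR; the name get_share_name is illustrative, not the kernel symbol):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace sketch of cifs_get_share_name(): keep only "\\server\share"
 * from a full DFS node name, adding the leading '\' if it is missing.
 * Like the kernel version, it fails if no path follows the share name. */
static char *get_share_name(const char *node_name)
{
	size_t len = strlen(node_name);
	char *unc;
	char *sep;

	if (len < 2)
		return NULL;
	unc = malloc(len + 2);		/* room for added '\' and NUL */
	if (!unc)
		return NULL;

	if (node_name[1] != '\\') {	/* fix up a missing leading '\' */
		unc[0] = '\\';
		memcpy(unc + 1, node_name, len + 1);
	} else {
		memcpy(unc, node_name, len + 1);
	}

	sep = strchr(unc + 2, '\\');	/* end of the server name */
	if (!sep) {
		free(unc);
		return NULL;
	}
	sep = strchr(sep + 1, '\\');	/* end of the share name */
	if (!sep) {			/* the kernel errors out here too */
		free(unc);
		return NULL;
	}
	*sep = '\0';			/* trim the trailing path */
	return unc;
}
```

For \\server\share\sub\dir this yields \\server\share; a referral that lacks a trailing path returns NULL, matching the kernel's error path.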
++
++/**
++ * compose_mount_options - creates mount options for referral
++ * @sb_mountdata: parent/root DFS mount options (template)
++ * @ref_unc: referral server UNC
++ * @devname: pointer for saving device name
++ *
++ * creates mount options for a submount based on the template options in
++ * sb_mountdata, replacing the unc, ip and prefixpath options with the
++ * ones we have got from ref_unc.
++ *
++ * Returns: pointer to new mount options or ERR_PTR.
++ * Caller is responsible for freeing the returned value if it is not an error.
++ */
++static char *compose_mount_options(const char *sb_mountdata,
++ const char *ref_unc,
++ char **devname)
++{
++ int rc;
++ char *mountdata;
++ int md_len;
++ char *tkn_e;
++ char *srvIP = NULL;
++ char sep = ',';
++ int off, noff;
++
++ if (sb_mountdata == NULL)
++ return ERR_PTR(-EINVAL);
++
++ *devname = cifs_get_share_name(ref_unc);
++ rc = dns_resolve_server_name_to_ip(*devname, &srvIP);
++ if (rc != 0) {
++ cERROR(1, ("%s: Failed to resolve server part of %s to IP",
++ __FUNCTION__, *devname));
++ mountdata = ERR_PTR(rc);
++ goto compose_mount_options_out;
++ }
++ md_len = strlen(sb_mountdata) + strlen(srvIP) + strlen(ref_unc) + 3;
++ mountdata = kzalloc(md_len+1, GFP_KERNEL);
++ if (mountdata == NULL) {
++ mountdata = ERR_PTR(-ENOMEM);
++ goto compose_mount_options_out;
++ }
++
++ /* copy all options except of unc,ip,prefixpath */
++ off = 0;
++ if (strncmp(sb_mountdata, "sep=", 4) == 0) {
++ sep = sb_mountdata[4];
++ strncpy(mountdata, sb_mountdata, 5);
++ off += 5;
++ }
++ while ((tkn_e = strchr(sb_mountdata+off, sep))) {
++ noff = (tkn_e - (sb_mountdata+off)) + 1;
++ if (strnicmp(sb_mountdata+off, "unc=", 4) == 0) {
++ off += noff;
++ continue;
++ }
++ if (strnicmp(sb_mountdata+off, "ip=", 3) == 0) {
++ off += noff;
++ continue;
++ }
++ if (strnicmp(sb_mountdata+off, "prefixpath=", 3) == 0) {
++ off += noff;
++ continue;
++ }
++ strncat(mountdata, sb_mountdata+off, noff);
++ off += noff;
++ }
++ strcat(mountdata, sb_mountdata+off);
++ mountdata[md_len] = '\0';
++
++ /* copy new IP and ref share name */
++ strcat(mountdata, ",ip=");
++ strcat(mountdata, srvIP);
++ strcat(mountdata, ",unc=");
++ strcat(mountdata, *devname);
++
++ /* find & copy prefixpath */
++ tkn_e = strchr(ref_unc+2, '\\');
++ if (tkn_e) {
++ tkn_e = strchr(tkn_e+1, '\\');
++ if (tkn_e) {
++ strcat(mountdata, ",prefixpath=");
++ strcat(mountdata, tkn_e);
++ }
++ }
++
++ /*cFYI(1,("%s: parent mountdata: %s", __FUNCTION__,sb_mountdata));*/
++ /*cFYI(1, ("%s: submount mountdata: %s", __FUNCTION__, mountdata ));*/
++
++compose_mount_options_out:
++ kfree(srvIP);
++ return mountdata;
++}
++
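The token-skipping loop above can be sketched on its own. A userspace approximation follows (the helper name filter_options is hypothetical; the separator is hard-wired to ',' here, whereas the kernel also honors a leading "sep=" override, and note the kernel hunk compares only 3 characters of "prefixpath=" where this sketch compares all 11):

```c
#include <string.h>
#include <strings.h>	/* strncasecmp, the userspace strnicmp */

/* Copy a comma-separated mount-option string, dropping unc=, ip= and
 * prefixpath= so the submount can substitute its own values.
 * out must have room for strlen(in) + 1 bytes. */
static void filter_options(const char *in, char *out)
{
	const char sep = ',';
	size_t off = 0;

	out[0] = '\0';
	while (in[off]) {
		const char *tkn_e = strchr(in + off, sep);
		size_t noff = tkn_e ? (size_t)(tkn_e - (in + off)) + 1
				    : strlen(in + off);

		if (strncasecmp(in + off, "unc=", 4) &&
		    strncasecmp(in + off, "ip=", 3) &&
		    strncasecmp(in + off, "prefixpath=", 11))
			strncat(out, in + off, noff);
		off += noff;
	}
	/* drop a trailing separator left behind by a skipped last token */
	size_t l = strlen(out);
	if (l && out[l - 1] == sep)
		out[l - 1] = '\0';
}
```

The new ip=, unc= and prefixpath= values are then appended from the referral, as the kernel function does after this loop.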
++
++static struct vfsmount *cifs_dfs_do_refmount(const struct vfsmount *mnt_parent,
++ struct dentry *dentry, char *ref_unc)
++{
++ struct cifs_sb_info *cifs_sb;
++ struct vfsmount *mnt;
++ char *mountdata;
++ char *devname = NULL;
++
++ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
++ mountdata = compose_mount_options(cifs_sb->mountdata,
++ ref_unc, &devname);
++
++ if (IS_ERR(mountdata))
++ return (struct vfsmount *)mountdata;
++
++ mnt = vfs_kern_mount(&cifs_fs_type, 0, devname, mountdata);
++ kfree(mountdata);
++ kfree(devname);
++ return mnt;
++
++}
++
++static char *build_full_dfs_path_from_dentry(struct dentry *dentry)
++{
++ char *full_path = NULL;
++ char *search_path;
++ char *tmp_path;
++ size_t l_max_len;
++ struct cifs_sb_info *cifs_sb;
++
++ if (dentry->d_inode == NULL)
++ return NULL;
++
++ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
++
++ if (cifs_sb->tcon == NULL)
++ return NULL;
++
++ search_path = build_path_from_dentry(dentry);
++ if (search_path == NULL)
++ return NULL;
++
++ if (cifs_sb->tcon->Flags & SMB_SHARE_IS_IN_DFS) {
++ /* we should use the full path name for DFS to work correctly */
++ l_max_len = strnlen(cifs_sb->tcon->treeName, MAX_TREE_SIZE+1) +
++ strnlen(search_path, MAX_PATHCONF) + 1;
++ tmp_path = kmalloc(l_max_len, GFP_KERNEL);
++ if (tmp_path == NULL) {
++ kfree(search_path);
++ return NULL;
++ }
++ strncpy(tmp_path, cifs_sb->tcon->treeName, l_max_len);
++ strcat(tmp_path, search_path);
++ tmp_path[l_max_len-1] = 0;
++ full_path = tmp_path;
++ kfree(search_path);
++ } else {
++ full_path = search_path;
++ }
++ return full_path;
++}
++
++static int add_mount_helper(struct vfsmount *newmnt, struct nameidata *nd,
++ struct list_head *mntlist)
++{
++ /* stolen from afs code */
++ int err;
++
++ mntget(newmnt);
++ err = do_add_mount(newmnt, nd, nd->mnt->mnt_flags, mntlist);
++ switch (err) {
++ case 0:
++ dput(nd->dentry);
++ mntput(nd->mnt);
++ nd->mnt = newmnt;
++ nd->dentry = dget(newmnt->mnt_root);
++ break;
++ case -EBUSY:
++ /* someone else made a mount here whilst we were busy */
++ while (d_mountpoint(nd->dentry) &&
++ follow_down(&nd->mnt, &nd->dentry))
++ ;
++ err = 0;
++ default:
++ mntput(newmnt);
++ break;
++ }
++ return err;
++}
++
++static void dump_referral(const struct dfs_info3_param *ref)
++{
++ cFYI(1, ("DFS: ref path: %s", ref->path_name));
++ cFYI(1, ("DFS: node path: %s", ref->node_name));
++ cFYI(1, ("DFS: fl: %hd, srv_type: %hd", ref->flags, ref->server_type));
++ cFYI(1, ("DFS: ref_flags: %hd, path_consumed: %hd", ref->ref_flag,
++ ref->PathConsumed));
++}
++
++
++static void*
++cifs_dfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd)
++{
++ struct dfs_info3_param *referrals = NULL;
++ unsigned int num_referrals = 0;
++ struct cifs_sb_info *cifs_sb;
++ struct cifsSesInfo *ses;
++ char *full_path = NULL;
++ int xid, i;
++ int rc = 0;
++ struct vfsmount *mnt = ERR_PTR(-ENOENT);
++
++ cFYI(1, ("in %s", __FUNCTION__));
++ BUG_ON(IS_ROOT(dentry));
++
++ xid = GetXid();
++
++ dput(nd->dentry);
++ nd->dentry = dget(dentry);
++
++ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
++ ses = cifs_sb->tcon->ses;
++
++ if (!ses) {
++ rc = -EINVAL;
++ goto out_err;
++ }
++
++ full_path = build_full_dfs_path_from_dentry(dentry);
++ if (full_path == NULL) {
++ rc = -ENOMEM;
++ goto out_err;
++ }
++
++ rc = get_dfs_path(xid, ses , full_path, cifs_sb->local_nls,
++ &num_referrals, &referrals,
++ cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);
++
++ for (i = 0; i < num_referrals; i++) {
++ dump_referral(referrals+i);
++ /* connect to a storage node */
++ if (referrals[i].flags & DFSREF_STORAGE_SERVER) {
++ int len;
++ len = strlen(referrals[i].node_name);
++ if (len < 2) {
++ cERROR(1, ("%s: Net Address path too short: %s",
++ __FUNCTION__, referrals[i].node_name));
++ rc = -EINVAL;
++ goto out_err;
++ }
++ mnt = cifs_dfs_do_refmount(nd->mnt, nd->dentry,
++ referrals[i].node_name);
++ cFYI(1, ("%s: cifs_dfs_do_refmount:%s , mnt:%p",
++ __FUNCTION__,
++ referrals[i].node_name, mnt));
+
-+ if (mb)
-+ memcpy(mb, mcp->mb, 8 * sizeof(*mb));
-+ if (dwords)
-+ *dwords = mcp->mb[6];
++ /* complete mount procedure if we acquired a submount */
++ if (!IS_ERR(mnt))
++ break;
++ }
++ }
++
++ /* we need this because the for() above could exit without a valid submount */
++ rc = PTR_ERR(mnt);
++ if (IS_ERR(mnt))
++ goto out_err;
++
++ nd->mnt->mnt_flags |= MNT_SHRINKABLE;
++ rc = add_mount_helper(mnt, nd, &cifs_dfs_automount_list);
++
++out:
++ FreeXid(xid);
++ free_dfs_info_array(referrals, num_referrals);
++ kfree(full_path);
++ cFYI(1, ("leaving %s" , __FUNCTION__));
++ return ERR_PTR(rc);
++out_err:
++ path_release(nd);
++ goto out;
++}
++
++struct inode_operations cifs_dfs_referral_inode_operations = {
++ .follow_link = cifs_dfs_follow_mountpoint,
++};
++
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index 34af556..8ad2330 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -43,6 +43,9 @@ struct cifs_sb_info {
+ mode_t mnt_dir_mode;
+ int mnt_cifs_flags;
+ int prepathlen;
+- char *prepath;
++ char *prepath; /* relative path under the share to mount to */
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ char *mountdata; /* mount options received at mount time */
++#endif
+ };
+ #endif /* _CIFS_FS_SB_H */
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index 1529d2b..d543acc 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -122,11 +122,13 @@ cifs_get_spnego_key(struct cifsSesInfo *sesInfo)
+ cFYI(1, ("key description = %s", description));
+ spnego_key = request_key(&cifs_spnego_key_type, description, "");
+
++#ifdef CONFIG_CIFS_DEBUG2
+ if (cifsFYI && !IS_ERR(spnego_key)) {
+ struct cifs_spnego_msg *msg = spnego_key->payload.data;
+- cifs_dump_mem("SPNEGO reply blob:", msg->data,
+- msg->secblob_len + msg->sesskey_len);
++ cifs_dump_mem("SPNEGO reply blob:", msg->data, min(1024,
++ msg->secblob_len + msg->sesskey_len));
}
++#endif /* CONFIG_CIFS_DEBUG2 */
- return rval;
+ out:
+ kfree(description);
+diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
+index c312adc..a7035bd 100644
+--- a/fs/cifs/cifsacl.c
++++ b/fs/cifs/cifsacl.c
+@@ -129,6 +129,54 @@ int compare_sids(const struct cifs_sid *ctsid, const struct cifs_sid *cwsid)
+ return (1); /* sids compare/match */
}
- int
--qla2x00_set_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
-- uint16_t port_speed, uint16_t *mb)
-+qla2x00_disable_fce_trace(scsi_qla_host_t *ha, uint64_t *wr, uint64_t *rd)
- {
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
-
-- if (!IS_IIDMA_CAPABLE(ha))
-+ if (!IS_FWI2_CAPABLE(ha))
- return QLA_FUNCTION_FAILED;
-
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-
-- mcp->mb[0] = MBC_PORT_PARAMS;
-- mcp->mb[1] = loop_id;
-- mcp->mb[2] = BIT_0;
-- mcp->mb[3] = port_speed & (BIT_2|BIT_1|BIT_0);
-- mcp->mb[4] = mcp->mb[5] = 0;
-- mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
-- mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
-+ mcp->mb[0] = MBC_TRACE_CONTROL;
-+ mcp->mb[1] = TC_FCE_DISABLE;
-+ mcp->mb[2] = TC_FCE_DISABLE_TRACE;
-+ mcp->out_mb = MBX_2|MBX_1|MBX_0;
-+ mcp->in_mb = MBX_9|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|
-+ MBX_1|MBX_0;
- mcp->tov = 30;
- mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
--
-- /* Return mailbox statuses. */
-- if (mb != NULL) {
-- mb[0] = mcp->mb[0];
-- mb[1] = mcp->mb[1];
-- mb[3] = mcp->mb[3];
-- mb[4] = mcp->mb[4];
-- mb[5] = mcp->mb[5];
-- }
--
- if (rval != QLA_SUCCESS) {
-- DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
-- ha->host_no, rval));
-+ DEBUG2_3_11(printk("%s(%ld): failed=%x mb[0]=%x mb[1]=%x.\n",
-+ __func__, ha->host_no, rval, mcp->mb[0], mcp->mb[1]));
- } else {
- DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
+
-+ if (wr)
-+ *wr = (uint64_t) mcp->mb[5] << 48 |
-+ (uint64_t) mcp->mb[4] << 32 |
-+ (uint64_t) mcp->mb[3] << 16 |
-+ (uint64_t) mcp->mb[2];
-+ if (rd)
-+ *rd = (uint64_t) mcp->mb[9] << 48 |
-+ (uint64_t) mcp->mb[8] << 32 |
-+ (uint64_t) mcp->mb[7] << 16 |
-+ (uint64_t) mcp->mb[6];
- }
-
- return rval;
++/* copy ntsd, owner sid, and group sid from a security descriptor to another */
++static void copy_sec_desc(const struct cifs_ntsd *pntsd,
++ struct cifs_ntsd *pnntsd, __u32 sidsoffset)
++{
++ int i;
++
++ struct cifs_sid *owner_sid_ptr, *group_sid_ptr;
++ struct cifs_sid *nowner_sid_ptr, *ngroup_sid_ptr;
++
++ /* copy security descriptor control portion */
++ pnntsd->revision = pntsd->revision;
++ pnntsd->type = pntsd->type;
++ pnntsd->dacloffset = cpu_to_le32(sizeof(struct cifs_ntsd));
++ pnntsd->sacloffset = 0;
++ pnntsd->osidoffset = cpu_to_le32(sidsoffset);
++ pnntsd->gsidoffset = cpu_to_le32(sidsoffset + sizeof(struct cifs_sid));
++
++ /* copy owner sid */
++ owner_sid_ptr = (struct cifs_sid *)((char *)pntsd +
++ le32_to_cpu(pntsd->osidoffset));
++ nowner_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset);
++
++ nowner_sid_ptr->revision = owner_sid_ptr->revision;
++ nowner_sid_ptr->num_subauth = owner_sid_ptr->num_subauth;
++ for (i = 0; i < 6; i++)
++ nowner_sid_ptr->authority[i] = owner_sid_ptr->authority[i];
++ for (i = 0; i < 5; i++)
++ nowner_sid_ptr->sub_auth[i] = owner_sid_ptr->sub_auth[i];
++
++ /* copy group sid */
++ group_sid_ptr = (struct cifs_sid *)((char *)pntsd +
++ le32_to_cpu(pntsd->gsidoffset));
++ ngroup_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset +
++ sizeof(struct cifs_sid));
++
++ ngroup_sid_ptr->revision = group_sid_ptr->revision;
++ ngroup_sid_ptr->num_subauth = group_sid_ptr->num_subauth;
++ for (i = 0; i < 6; i++)
++ ngroup_sid_ptr->authority[i] = group_sid_ptr->authority[i];
++ for (i = 0; i < 5; i++)
++ ngroup_sid_ptr->sub_auth[i] =
++ cpu_to_le32(group_sid_ptr->sub_auth[i]);
++
++ return;
++}
++
++
+ /*
+ change posix mode to reflect permissions
+ pmode is the existing mode (we only want to overwrite part of this
+@@ -220,6 +268,33 @@ static void mode_to_access_flags(umode_t mode, umode_t bits_to_use,
+ return;
}
--/*
-- * qla24xx_get_vp_database
-- * Get the VP's database for all configured ports.
-- *
-- * Input:
-- * ha = adapter block pointer.
-- * size = size of initialization control block.
-- *
-- * Returns:
-- * qla2x00 local function return status code.
-- *
-- * Context:
-- * Kernel context.
-- */
- int
--qla24xx_get_vp_database(scsi_qla_host_t *ha, uint16_t size)
-+qla2x00_read_sfp(scsi_qla_host_t *ha, dma_addr_t sfp_dma, uint16_t addr,
-+ uint16_t off, uint16_t count)
- {
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
-
-- DEBUG11(printk("scsi(%ld):%s - entered.\n",
-- ha->host_no, __func__));
-+ if (!IS_FWI2_CAPABLE(ha))
-+ return QLA_FUNCTION_FAILED;
-
-- mcp->mb[0] = MBC_MID_GET_VP_DATABASE;
-- mcp->mb[2] = MSW(ha->init_cb_dma);
-- mcp->mb[3] = LSW(ha->init_cb_dma);
-- mcp->mb[4] = 0;
-- mcp->mb[5] = 0;
-- mcp->mb[6] = MSW(MSD(ha->init_cb_dma));
-- mcp->mb[7] = LSW(MSD(ha->init_cb_dma));
-- mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
-- mcp->in_mb = MBX_1|MBX_0;
-- mcp->buf_size = size;
-- mcp->flags = MBX_DMA_OUT;
-- mcp->tov = MBX_TOV_SECONDS;
-+ DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
++static __le16 fill_ace_for_sid(struct cifs_ace *pntace,
++ const struct cifs_sid *psid, __u64 nmode, umode_t bits)
++{
++ int i;
++ __u16 size = 0;
++ __u32 access_req = 0;
++
++ pntace->type = ACCESS_ALLOWED;
++ pntace->flags = 0x0;
++ mode_to_access_flags(nmode, bits, &access_req);
++ if (!access_req)
++ access_req = SET_MINIMUM_RIGHTS;
++ pntace->access_req = cpu_to_le32(access_req);
++
++ pntace->sid.revision = psid->revision;
++ pntace->sid.num_subauth = psid->num_subauth;
++ for (i = 0; i < 6; i++)
++ pntace->sid.authority[i] = psid->authority[i];
++ for (i = 0; i < psid->num_subauth; i++)
++ pntace->sid.sub_auth[i] = psid->sub_auth[i];
++
++ size = 1 + 1 + 2 + 4 + 1 + 1 + 6 + (psid->num_subauth * 4);
++ pntace->size = cpu_to_le16(size);
++
++ return (size);
++}
+
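The ACE length summed above is the 8-byte ACE header (type, flags, size, access_req) plus the 8-byte SID header (revision, num_subauth, six authority bytes) plus 4 bytes per sub-authority. A quick sanity check of that arithmetic, as a sketch assuming only the layout implied by the sum in the hunk:

```c
#include <assert.h>

/* ACE size as summed in fill_ace_for_sid() above:
 * 1 (type) + 1 (flags) + 2 (size) + 4 (access_req)
 * + 1 (sid.revision) + 1 (sid.num_subauth) + 6 (sid.authority)
 * + 4 bytes per sub-authority, i.e. 16 + 4 * num_subauth. */
static unsigned short ace_size(unsigned int num_subauth)
{
	return (unsigned short)(16 + 4 * num_subauth);
}
```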
-+ mcp->mb[0] = MBC_READ_SFP;
-+ mcp->mb[1] = addr;
-+ mcp->mb[2] = MSW(sfp_dma);
-+ mcp->mb[3] = LSW(sfp_dma);
-+ mcp->mb[6] = MSW(MSD(sfp_dma));
-+ mcp->mb[7] = LSW(MSD(sfp_dma));
-+ mcp->mb[8] = count;
-+ mcp->mb[9] = off;
-+ mcp->mb[10] = 0;
-+ mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
-+ mcp->in_mb = MBX_0;
-+ mcp->tov = 30;
-+ mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
-
- if (rval != QLA_SUCCESS) {
-- /*EMPTY*/
-- DEBUG2_3_11(printk("%s(%ld): failed=%x "
-- "mb0=%x.\n",
-- __func__, ha->host_no, rval, mcp->mb[0]));
-+ DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
-+ ha->host_no, rval, mcp->mb[0]));
- } else {
-- /*EMPTY*/
-- DEBUG11(printk("%s(%ld): done.\n",
-- __func__, ha->host_no));
-+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
- }
- return rval;
+ #ifdef CONFIG_CIFS_DEBUG2
+ static void dump_ace(struct cifs_ace *pace, char *end_of_acl)
+@@ -243,7 +318,7 @@ static void dump_ace(struct cifs_ace *pace, char *end_of_acl)
+ int i;
+ cFYI(1, ("ACE revision %d num_auth %d type %d flags %d size %d",
+ pace->sid.revision, pace->sid.num_subauth, pace->type,
+- pace->flags, pace->size));
++ pace->flags, le16_to_cpu(pace->size)));
+ for (i = 0; i < num_subauth; ++i) {
+ cFYI(1, ("ACE sub_auth[%d]: 0x%x", i,
+ le32_to_cpu(pace->sid.sub_auth[i])));
+@@ -346,6 +421,28 @@ static void parse_dacl(struct cifs_acl *pdacl, char *end_of_acl,
}
- int
--qla24xx_get_vp_entry(scsi_qla_host_t *ha, uint16_t size, int vp_id)
-+qla2x00_set_idma_speed(scsi_qla_host_t *ha, uint16_t loop_id,
-+ uint16_t port_speed, uint16_t *mb)
- {
- int rval;
- mbx_cmd_t mc;
- mbx_cmd_t *mcp = &mc;
-+ if (!IS_IIDMA_CAPABLE(ha))
-+ return QLA_FUNCTION_FAILED;
++static int set_chmod_dacl(struct cifs_acl *pndacl, struct cifs_sid *pownersid,
++ struct cifs_sid *pgrpsid, __u64 nmode)
++{
++ __le16 size = 0;
++ struct cifs_acl *pnndacl;
+
- DEBUG11(printk("%s(%ld): entered.\n", __func__, ha->host_no));
-
-- mcp->mb[0] = MBC_MID_GET_VP_ENTRY;
-- mcp->mb[2] = MSW(ha->init_cb_dma);
-- mcp->mb[3] = LSW(ha->init_cb_dma);
-- mcp->mb[4] = 0;
-- mcp->mb[5] = 0;
-- mcp->mb[6] = MSW(MSD(ha->init_cb_dma));
-- mcp->mb[7] = LSW(MSD(ha->init_cb_dma));
-- mcp->mb[9] = vp_id;
-- mcp->out_mb = MBX_9|MBX_7|MBX_6|MBX_3|MBX_2|MBX_0;
-- mcp->in_mb = MBX_0;
-- mcp->buf_size = size;
-- mcp->flags = MBX_DMA_OUT;
-+ mcp->mb[0] = MBC_PORT_PARAMS;
-+ mcp->mb[1] = loop_id;
-+ mcp->mb[2] = BIT_0;
-+ mcp->mb[3] = port_speed & (BIT_2|BIT_1|BIT_0);
-+ mcp->mb[4] = mcp->mb[5] = 0;
-+ mcp->out_mb = MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
-+ mcp->in_mb = MBX_5|MBX_4|MBX_3|MBX_1|MBX_0;
- mcp->tov = 30;
-+ mcp->flags = 0;
- rval = qla2x00_mailbox_command(ha, mcp);
-
-+ /* Return mailbox statuses. */
-+ if (mb != NULL) {
-+ mb[0] = mcp->mb[0];
-+ mb[1] = mcp->mb[1];
-+ mb[3] = mcp->mb[3];
-+ mb[4] = mcp->mb[4];
-+ mb[5] = mcp->mb[5];
-+ }
++ pnndacl = (struct cifs_acl *)((char *)pndacl + sizeof(struct cifs_acl));
+
- if (rval != QLA_SUCCESS) {
-- /*EMPTY*/
-- DEBUG2_3_11(printk("qla24xx_get_vp_entry(%ld): failed=%x "
-- "mb0=%x.\n",
-- ha->host_no, rval, mcp->mb[0]));
-+ DEBUG2_3_11(printk("%s(%ld): failed=%x.\n", __func__,
-+ ha->host_no, rval));
- } else {
-- /*EMPTY*/
-- DEBUG11(printk("qla24xx_get_vp_entry(%ld): done.\n",
-- ha->host_no));
-+ DEBUG11(printk("%s(%ld): done.\n", __func__, ha->host_no));
- }
-
- return rval;
-@@ -2873,7 +2731,7 @@ qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
- DEBUG11(printk("%s(%ld): entered. Enabling index %d\n", __func__,
- ha->host_no, vp_index));
-
-- if (vp_index == 0 || vp_index >= MAX_MULTI_ID_LOOP)
-+ if (vp_index == 0 || vp_index >= ha->max_npiv_vports)
- return QLA_PARAMETER_ERROR;
-
- vce = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &vce_dma);
-diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
-index 821ee74..cf784cd 100644
---- a/drivers/scsi/qla2xxx/qla_mid.c
-+++ b/drivers/scsi/qla2xxx/qla_mid.c
-@@ -39,7 +39,7 @@ qla2x00_vp_stop_timer(scsi_qla_host_t *vha)
- }
- }
-
--uint32_t
-+static uint32_t
- qla24xx_allocate_vp_id(scsi_qla_host_t *vha)
++ size += fill_ace_for_sid((struct cifs_ace *) ((char *)pnndacl + size),
++ pownersid, nmode, S_IRWXU);
++ size += fill_ace_for_sid((struct cifs_ace *)((char *)pnndacl + size),
++ pgrpsid, nmode, S_IRWXG);
++ size += fill_ace_for_sid((struct cifs_ace *)((char *)pnndacl + size),
++ &sid_everyone, nmode, S_IRWXO);
++
++ pndacl->size = cpu_to_le16(size + sizeof(struct cifs_acl));
++ pndacl->num_aces = 3;
++
++ return (0);
++}
++
++
+ static int parse_sid(struct cifs_sid *psid, char *end_of_acl)
{
- uint32_t vp_id;
-@@ -47,16 +47,15 @@ qla24xx_allocate_vp_id(scsi_qla_host_t *vha)
+ /* BB need to add parm so we can store the SID BB */
+@@ -432,6 +529,46 @@ static int parse_sec_desc(struct cifs_ntsd *pntsd, int acl_len,
+ }
- /* Find an empty slot and assign an vp_id */
- down(&ha->vport_sem);
-- vp_id = find_first_zero_bit((unsigned long *)ha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC);
-- if (vp_id > MAX_MULTI_ID_FABRIC) {
-- DEBUG15(printk ("vp_id %d is bigger than MAX_MULTI_ID_FABRID\n",
-- vp_id));
-+ vp_id = find_first_zero_bit(ha->vp_idx_map, ha->max_npiv_vports + 1);
-+ if (vp_id > ha->max_npiv_vports) {
-+ DEBUG15(printk ("vp_id %d is bigger than max-supported %d.\n",
-+ vp_id, ha->max_npiv_vports));
- up(&ha->vport_sem);
- return vp_id;
- }
-- set_bit(vp_id, (unsigned long *)ha->vp_idx_map);
-+ set_bit(vp_id, ha->vp_idx_map);
- ha->num_vhosts++;
- vha->vp_idx = vp_id;
- list_add_tail(&vha->vp_list, &ha->vp_list);
-@@ -73,12 +72,12 @@ qla24xx_deallocate_vp_id(scsi_qla_host_t *vha)
- down(&ha->vport_sem);
- vp_id = vha->vp_idx;
- ha->num_vhosts--;
-- clear_bit(vp_id, (unsigned long *)ha->vp_idx_map);
-+ clear_bit(vp_id, ha->vp_idx_map);
- list_del(&vha->vp_list);
- up(&ha->vport_sem);
++/* Convert permission bits from mode to equivalent CIFS ACL */
++static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
++ int acl_len, struct inode *inode, __u64 nmode)
++{
++ int rc = 0;
++ __u32 dacloffset;
++ __u32 ndacloffset;
++ __u32 sidsoffset;
++ struct cifs_sid *owner_sid_ptr, *group_sid_ptr;
++ struct cifs_acl *dacl_ptr = NULL; /* no need for SACL ptr */
++ struct cifs_acl *ndacl_ptr = NULL; /* no need for SACL ptr */
++
++ if ((inode == NULL) || (pntsd == NULL) || (pnntsd == NULL))
++ return (-EIO);
++
++ owner_sid_ptr = (struct cifs_sid *)((char *)pntsd +
++ le32_to_cpu(pntsd->osidoffset));
++ group_sid_ptr = (struct cifs_sid *)((char *)pntsd +
++ le32_to_cpu(pntsd->gsidoffset));
++
++ dacloffset = le32_to_cpu(pntsd->dacloffset);
++ dacl_ptr = (struct cifs_acl *)((char *)pntsd + dacloffset);
++
++ ndacloffset = sizeof(struct cifs_ntsd);
++ ndacl_ptr = (struct cifs_acl *)((char *)pnntsd + ndacloffset);
++ ndacl_ptr->revision = dacl_ptr->revision;
++ ndacl_ptr->size = 0;
++ ndacl_ptr->num_aces = 0;
++
++ rc = set_chmod_dacl(ndacl_ptr, owner_sid_ptr, group_sid_ptr, nmode);
++
++ sidsoffset = ndacloffset + le16_to_cpu(ndacl_ptr->size);
++
++ /* copy security descriptor control portion and owner and group sid */
++ copy_sec_desc(pntsd, pnntsd, sidsoffset);
++
++ return (rc);
++}
++
++
+ /* Retrieve an ACL from the server */
+ static struct cifs_ntsd *get_cifs_acl(u32 *pacllen, struct inode *inode,
+ const char *path)
+@@ -487,6 +624,64 @@ static struct cifs_ntsd *get_cifs_acl(u32 *pacllen, struct inode *inode,
+ return pntsd;
}
--scsi_qla_host_t *
-+static scsi_qla_host_t *
- qla24xx_find_vhost_by_name(scsi_qla_host_t *ha, uint8_t *port_name)
++/* Set an ACL on the server */
++static int set_cifs_acl(struct cifs_ntsd *pnntsd, __u32 acllen,
++ struct inode *inode, const char *path)
++{
++ struct cifsFileInfo *open_file;
++ int unlock_file = FALSE;
++ int xid;
++ int rc = -EIO;
++ __u16 fid;
++ struct super_block *sb;
++ struct cifs_sb_info *cifs_sb;
++
++#ifdef CONFIG_CIFS_DEBUG2
++ cFYI(1, ("set ACL for %s from mode 0x%x", path, inode->i_mode));
++#endif
++
++ if (!inode)
++ return (rc);
++
++ sb = inode->i_sb;
++ if (sb == NULL)
++ return (rc);
++
++ cifs_sb = CIFS_SB(sb);
++ xid = GetXid();
++
++ open_file = find_readable_file(CIFS_I(inode));
++ if (open_file) {
++ unlock_file = TRUE;
++ fid = open_file->netfid;
++ } else {
++ int oplock = FALSE;
++ /* open file */
++ rc = CIFSSMBOpen(xid, cifs_sb->tcon, path, FILE_OPEN,
++ WRITE_DAC, 0, &fid, &oplock, NULL,
++ cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
++ CIFS_MOUNT_MAP_SPECIAL_CHR);
++ if (rc != 0) {
++ cERROR(1, ("Unable to open file to set ACL"));
++ FreeXid(xid);
++ return (rc);
++ }
++ }
++
++ rc = CIFSSMBSetCIFSACL(xid, cifs_sb->tcon, fid, pnntsd, acllen);
++#ifdef CONFIG_CIFS_DEBUG2
++ cFYI(1, ("SetCIFSACL rc = %d", rc));
++#endif
++ if (unlock_file == TRUE)
++ atomic_dec(&open_file->wrtPending);
++ else
++ CIFSSMBClose(xid, cifs_sb->tcon, fid);
++
++ FreeXid(xid);
++
++ return (rc);
++}
++
+ /* Translate the CIFS ACL (simlar to NTFS ACL) for a file into mode bits */
+ void acl_to_uid_mode(struct inode *inode, const char *path)
{
- scsi_qla_host_t *vha;
-@@ -216,11 +215,7 @@ qla2x00_alert_all_vps(scsi_qla_host_t *ha, uint16_t *mb)
- if (ha->parent)
- return;
-
-- i = find_next_bit((unsigned long *)ha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, 1);
-- for (;i <= MAX_MULTI_ID_FABRIC;
-- i = find_next_bit((unsigned long *)ha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, i + 1)) {
-+ for_each_mapped_vp_idx(ha, i) {
- vp_idx_matched = 0;
-
- list_for_each_entry(vha, &ha->vp_list, vp_list) {
-@@ -270,7 +265,7 @@ qla2x00_vp_abort_isp(scsi_qla_host_t *vha)
- qla24xx_enable_vp(vha);
+@@ -510,24 +705,53 @@ void acl_to_uid_mode(struct inode *inode, const char *path)
}
--int
-+static int
- qla2x00_do_dpc_vp(scsi_qla_host_t *vha)
+ /* Convert mode bits to an ACL so we can update the ACL on the server */
+-int mode_to_acl(struct inode *inode, const char *path)
++int mode_to_acl(struct inode *inode, const char *path, __u64 nmode)
{
- if (test_and_clear_bit(VP_IDX_ACQUIRED, &vha->vp_flags)) {
-@@ -311,11 +306,7 @@ qla2x00_do_dpc_all_vps(scsi_qla_host_t *ha)
+ int rc = 0;
+ __u32 acllen = 0;
+- struct cifs_ntsd *pntsd = NULL;
++ struct cifs_ntsd *pntsd = NULL; /* acl obtained from server */
++ struct cifs_ntsd *pnntsd = NULL; /* modified acl to be sent to server */
- clear_bit(VP_DPC_NEEDED, &ha->dpc_flags);
++#ifdef CONFIG_CIFS_DEBUG2
+ cFYI(1, ("set ACL from mode for %s", path));
++#endif
-- i = find_next_bit((unsigned long *)ha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, 1);
-- for (;i <= MAX_MULTI_ID_FABRIC;
-- i = find_next_bit((unsigned long *)ha->vp_idx_map,
-- MAX_MULTI_ID_FABRIC + 1, i + 1)) {
-+ for_each_mapped_vp_idx(ha, i) {
- vp_idx_matched = 0;
+ /* Get the security descriptor */
+ pntsd = get_cifs_acl(&acllen, inode, path);
- list_for_each_entry(vha, &ha->vp_list, vp_list) {
-@@ -350,15 +341,17 @@ qla24xx_vport_create_req_sanity_check(struct fc_vport *fc_vport)
+- /* Add/Modify the three ACEs for owner, group, everyone
+- while retaining the other ACEs */
++ /* Add three ACEs for owner, group, everyone getting rid of
++ other ACEs as chmod disables ACEs and set the security descriptor */
- /* Check up unique WWPN */
- u64_to_wwn(fc_vport->port_name, port_name);
-+ if (!memcmp(port_name, ha->port_name, WWN_SIZE))
-+ return VPCERR_BAD_WWN;
- vha = qla24xx_find_vhost_by_name(ha, port_name);
- if (vha)
- return VPCERR_BAD_WWN;
+- /* Set the security descriptor */
++ if (pntsd) {
++ /* allocate memory for the smb header,
++ set security descriptor request security descriptor
++ parameters, and secuirty descriptor itself */
- /* Check up max-npiv-supports */
- if (ha->num_vhosts > ha->max_npiv_vports) {
-- DEBUG15(printk("scsi(%ld): num_vhosts %d is bigger than "
-- "max_npv_vports %d.\n", ha->host_no,
-- (uint16_t) ha->num_vhosts, (int) ha->max_npiv_vports));
-+ DEBUG15(printk("scsi(%ld): num_vhosts %ud is bigger than "
-+ "max_npv_vports %ud.\n", ha->host_no,
-+ ha->num_vhosts, ha->max_npiv_vports));
- return VPCERR_UNSUPPORTED;
- }
- return 0;
-@@ -412,8 +405,9 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
- }
- vha->mgmt_svr_loop_id = 10 + vha->vp_idx;
++ pnntsd = kmalloc(acllen, GFP_KERNEL);
++ if (!pnntsd) {
++ cERROR(1, ("Unable to allocate security descriptor"));
++ kfree(pntsd);
++ return (-ENOMEM);
++ }
-- init_MUTEX(&vha->mbx_cmd_sem);
-- init_MUTEX_LOCKED(&vha->mbx_intr_sem);
-+ init_completion(&vha->mbx_cmd_comp);
-+ complete(&vha->mbx_cmd_comp);
-+ init_completion(&vha->mbx_intr_comp);
+- kfree(pntsd);
+- return rc;
++ rc = build_sec_desc(pntsd, pnntsd, acllen, inode, nmode);
++
++#ifdef CONFIG_CIFS_DEBUG2
++ cFYI(1, ("build_sec_desc rc: %d", rc));
++#endif
++
++ if (!rc) {
++ /* Set the security descriptor */
++ rc = set_cifs_acl(pnntsd, acllen, inode, path);
++#ifdef CONFIG_CIFS_DEBUG2
++ cFYI(1, ("set_cifs_acl rc: %d", rc));
++#endif
++ }
++
++ kfree(pnntsd);
++ kfree(pntsd);
++ }
++
++ return (rc);
+ }
+ #endif /* CONFIG_CIFS_EXPERIMENTAL */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 093beaa..e9f4ec7 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -44,6 +44,7 @@
+ #include "cifs_fs_sb.h"
+ #include <linux/mm.h>
+ #include <linux/key-type.h>
++#include "dns_resolve.h"
+ #include "cifs_spnego.h"
+ #define CIFS_MAGIC_NUMBER 0xFF534D42 /* the first four bytes of SMB PDUs */
- INIT_LIST_HEAD(&vha->list);
- INIT_LIST_HEAD(&vha->fcports);
-@@ -450,7 +444,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
- num_hosts++;
+@@ -96,6 +97,9 @@ cifs_read_super(struct super_block *sb, void *data,
+ {
+ struct inode *inode;
+ struct cifs_sb_info *cifs_sb;
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ int len;
++#endif
+ int rc = 0;
- down(&ha->vport_sem);
-- set_bit(vha->vp_idx, (unsigned long *)ha->vp_idx_map);
-+ set_bit(vha->vp_idx, ha->vp_idx_map);
- ha->cur_vport_count++;
- up(&ha->vport_sem);
+ /* BB should we make this contingent on mount parm? */
+@@ -105,6 +109,25 @@ cifs_read_super(struct super_block *sb, void *data,
+ if (cifs_sb == NULL)
+ return -ENOMEM;
-diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
-index 8ecc047..aba1e6d 100644
---- a/drivers/scsi/qla2xxx/qla_os.c
-+++ b/drivers/scsi/qla2xxx/qla_os.c
-@@ -105,13 +105,12 @@ static int qla2xxx_eh_abort(struct scsi_cmnd *);
- static int qla2xxx_eh_device_reset(struct scsi_cmnd *);
- static int qla2xxx_eh_bus_reset(struct scsi_cmnd *);
- static int qla2xxx_eh_host_reset(struct scsi_cmnd *);
--static int qla2x00_loop_reset(scsi_qla_host_t *ha);
- static int qla2x00_device_reset(scsi_qla_host_t *, fc_port_t *);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ /* copy mount params to sb for use in submounts */
++ /* BB: should we move this after the mount so we
++ * do not have to do the copy on failed mounts?
++ * BB: May be it is better to do simple copy before
++ * complex operation (mount), and in case of fail
++ * just exit instead of doing mount and attempting
++ * undo it if this copy fails?*/
++ len = strlen(data);
++ cifs_sb->mountdata = kzalloc(len + 1, GFP_KERNEL);
++ if (cifs_sb->mountdata == NULL) {
++ kfree(sb->s_fs_info);
++ sb->s_fs_info = NULL;
++ return -ENOMEM;
++ }
++ strncpy(cifs_sb->mountdata, data, len + 1);
++ cifs_sb->mountdata[len] = '\0';
++#endif
++
+ rc = cifs_mount(sb, cifs_sb, data, devname);
- static int qla2x00_change_queue_depth(struct scsi_device *, int);
- static int qla2x00_change_queue_type(struct scsi_device *, int);
+ if (rc) {
+@@ -154,6 +177,12 @@ out_no_root:
--struct scsi_host_template qla2x00_driver_template = {
-+static struct scsi_host_template qla2x00_driver_template = {
- .module = THIS_MODULE,
- .name = QLA2XXX_DRIVER_NAME,
- .queuecommand = qla2x00_queuecommand,
-@@ -179,13 +178,6 @@ struct scsi_transport_template *qla2xxx_transport_vport_template = NULL;
- * Timer routines
- */
+ out_mount_failed:
+ if (cifs_sb) {
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ if (cifs_sb->mountdata) {
++ kfree(cifs_sb->mountdata);
++ cifs_sb->mountdata = NULL;
++ }
++#endif
+ if (cifs_sb->local_nls)
+ unload_nls(cifs_sb->local_nls);
+ kfree(cifs_sb);
+@@ -177,6 +206,13 @@ cifs_put_super(struct super_block *sb)
+ if (rc) {
+ cERROR(1, ("cifs_umount failed with return code %d", rc));
+ }
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ if (cifs_sb->mountdata) {
++ kfree(cifs_sb->mountdata);
++ cifs_sb->mountdata = NULL;
++ }
++#endif
++
+ unload_nls(cifs_sb->local_nls);
+ kfree(cifs_sb);
+ return;
+@@ -435,6 +471,10 @@ static void cifs_umount_begin(struct vfsmount *vfsmnt, int flags)
+ struct cifs_sb_info *cifs_sb;
+ struct cifsTconInfo *tcon;
--void qla2x00_timer(scsi_qla_host_t *);
--
--__inline__ void qla2x00_start_timer(scsi_qla_host_t *,
-- void *, unsigned long);
--static __inline__ void qla2x00_restart_timer(scsi_qla_host_t *, unsigned long);
--__inline__ void qla2x00_stop_timer(scsi_qla_host_t *);
--
- __inline__ void
- qla2x00_start_timer(scsi_qla_host_t *ha, void *func, unsigned long interval)
- {
-@@ -203,7 +195,7 @@ qla2x00_restart_timer(scsi_qla_host_t *ha, unsigned long interval)
- mod_timer(&ha->timer, jiffies + interval * HZ);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ dfs_shrink_umount_helper(vfsmnt);
++#endif /* CONFIG CIFS_DFS_UPCALL */
++
+ if (!(flags & MNT_FORCE))
+ return;
+ cifs_sb = CIFS_SB(vfsmnt->mnt_sb);
+@@ -552,7 +592,7 @@ static loff_t cifs_llseek(struct file *file, loff_t offset, int origin)
+ return remote_llseek(file, offset, origin);
}
--__inline__ void
-+static __inline__ void
- qla2x00_stop_timer(scsi_qla_host_t *ha)
- {
- del_timer_sync(&ha->timer);
-@@ -214,12 +206,11 @@ static int qla2x00_do_dpc(void *data);
+-static struct file_system_type cifs_fs_type = {
++struct file_system_type cifs_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "cifs",
+ .get_sb = cifs_get_sb,
+@@ -1015,11 +1055,16 @@ init_cifs(void)
+ if (rc)
+ goto out_unregister_filesystem;
+ #endif
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ rc = register_key_type(&key_type_dns_resolver);
++ if (rc)
++ goto out_unregister_key_type;
++#endif
+ oplockThread = kthread_run(cifs_oplock_thread, NULL, "cifsoplockd");
+ if (IS_ERR(oplockThread)) {
+ rc = PTR_ERR(oplockThread);
+ cERROR(1, ("error %d create oplock thread", rc));
+- goto out_unregister_key_type;
++ goto out_unregister_dfs_key_type;
+ }
- static void qla2x00_rst_aen(scsi_qla_host_t *);
+ dnotifyThread = kthread_run(cifs_dnotify_thread, NULL, "cifsdnotifyd");
+@@ -1033,7 +1078,11 @@ init_cifs(void)
--uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
--void qla2x00_mem_free(scsi_qla_host_t *ha);
-+static uint8_t qla2x00_mem_alloc(scsi_qla_host_t *);
-+static void qla2x00_mem_free(scsi_qla_host_t *ha);
- static int qla2x00_allocate_sp_pool( scsi_qla_host_t *ha);
- static void qla2x00_free_sp_pool(scsi_qla_host_t *ha);
- static void qla2x00_sp_free_dma(scsi_qla_host_t *, srb_t *);
--void qla2x00_sp_compl(scsi_qla_host_t *ha, srb_t *);
+ out_stop_oplock_thread:
+ kthread_stop(oplockThread);
++ out_unregister_dfs_key_type:
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ unregister_key_type(&key_type_dns_resolver);
+ out_unregister_key_type:
++#endif
+ #ifdef CONFIG_CIFS_UPCALL
+ unregister_key_type(&cifs_spnego_key_type);
+ out_unregister_filesystem:
+@@ -1059,6 +1108,9 @@ exit_cifs(void)
+ #ifdef CONFIG_PROC_FS
+ cifs_proc_clean();
+ #endif
++#ifdef CONFIG_CIFS_DFS_UPCALL
++ unregister_key_type(&key_type_dns_resolver);
++#endif
+ #ifdef CONFIG_CIFS_UPCALL
+ unregister_key_type(&cifs_spnego_key_type);
+ #endif
+diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
+index 2a21dc6..195b14d 100644
+--- a/fs/cifs/cifsfs.h
++++ b/fs/cifs/cifsfs.h
+@@ -32,6 +32,7 @@
+ #define TRUE 1
+ #endif
- /* -------------------------------------------------------------------------- */
++extern struct file_system_type cifs_fs_type;
+ extern const struct address_space_operations cifs_addr_ops;
+ extern const struct address_space_operations cifs_addr_ops_smallbuf;
-@@ -1060,7 +1051,7 @@ eh_host_reset_lock:
- * Returns:
- * 0 = success
- */
--static int
-+int
- qla2x00_loop_reset(scsi_qla_host_t *ha)
- {
- int ret;
-@@ -1479,8 +1470,7 @@ qla2x00_set_isp_flags(scsi_qla_host_t *ha)
- static int
- qla2x00_iospace_config(scsi_qla_host_t *ha)
- {
-- unsigned long pio, pio_len, pio_flags;
-- unsigned long mmio, mmio_len, mmio_flags;
-+ resource_size_t pio;
+@@ -60,6 +61,10 @@ extern int cifs_setattr(struct dentry *, struct iattr *);
- if (pci_request_selected_regions(ha->pdev, ha->bars,
- QLA2XXX_DRIVER_NAME)) {
-@@ -1495,10 +1485,8 @@ qla2x00_iospace_config(scsi_qla_host_t *ha)
+ extern const struct inode_operations cifs_file_inode_ops;
+ extern const struct inode_operations cifs_symlink_inode_ops;
++extern struct list_head cifs_dfs_automount_list;
++extern struct inode_operations cifs_dfs_referral_inode_operations;
++
++
- /* We only need PIO for Flash operations on ISP2312 v2 chips. */
- pio = pci_resource_start(ha->pdev, 0);
-- pio_len = pci_resource_len(ha->pdev, 0);
-- pio_flags = pci_resource_flags(ha->pdev, 0);
-- if (pio_flags & IORESOURCE_IO) {
-- if (pio_len < MIN_IOBASE_LEN) {
-+ if (pci_resource_flags(ha->pdev, 0) & IORESOURCE_IO) {
-+ if (pci_resource_len(ha->pdev, 0) < MIN_IOBASE_LEN) {
- qla_printk(KERN_WARNING, ha,
- "Invalid PCI I/O region size (%s)...\n",
- pci_name(ha->pdev));
-@@ -1511,28 +1499,23 @@ qla2x00_iospace_config(scsi_qla_host_t *ha)
- pio = 0;
- }
- ha->pio_address = pio;
-- ha->pio_length = pio_len;
+ /* Functions related to files and directories */
+ extern const struct file_operations cifs_file_ops;
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 1fde219..5d32d8d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1,7 +1,7 @@
+ /*
+ * fs/cifs/cifsglob.h
+ *
+- * Copyright (C) International Business Machines Corp., 2002,2007
++ * Copyright (C) International Business Machines Corp., 2002,2008
+ * Author(s): Steve French (sfrench at us.ibm.com)
+ * Jeremy Allison (jra at samba.org)
+ *
+@@ -70,14 +70,6 @@
+ #endif
- skip_pio:
- /* Use MMIO operations for all accesses. */
-- mmio = pci_resource_start(ha->pdev, 1);
-- mmio_len = pci_resource_len(ha->pdev, 1);
-- mmio_flags = pci_resource_flags(ha->pdev, 1);
+ /*
+- * This information is kept on every Server we know about.
+- *
+- * Some things to note:
+- *
+- */
+-#define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1)
-
-- if (!(mmio_flags & IORESOURCE_MEM)) {
-+ if (!(pci_resource_flags(ha->pdev, 1) & IORESOURCE_MEM)) {
- qla_printk(KERN_ERR, ha,
-- "region #0 not an MMIO resource (%s), aborting\n",
-+ "region #1 not an MMIO resource (%s), aborting\n",
- pci_name(ha->pdev));
- goto iospace_error_exit;
- }
-- if (mmio_len < MIN_IOBASE_LEN) {
-+ if (pci_resource_len(ha->pdev, 1) < MIN_IOBASE_LEN) {
- qla_printk(KERN_ERR, ha,
- "Invalid PCI mem region size (%s), aborting\n",
- pci_name(ha->pdev));
- goto iospace_error_exit;
- }
-
-- ha->iobase = ioremap(mmio, MIN_IOBASE_LEN);
-+ ha->iobase = ioremap(pci_resource_start(ha->pdev, 1), MIN_IOBASE_LEN);
- if (!ha->iobase) {
- qla_printk(KERN_ERR, ha,
- "cannot remap MMIO (%s), aborting\n", pci_name(ha->pdev));
-@@ -1701,9 +1684,10 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
- /* load the F/W, read paramaters, and init the H/W */
- ha->instance = num_hosts;
-
-- init_MUTEX(&ha->mbx_cmd_sem);
- init_MUTEX(&ha->vport_sem);
-- init_MUTEX_LOCKED(&ha->mbx_intr_sem);
-+ init_completion(&ha->mbx_cmd_comp);
-+ complete(&ha->mbx_cmd_comp);
-+ init_completion(&ha->mbx_intr_comp);
-
- INIT_LIST_HEAD(&ha->list);
- INIT_LIST_HEAD(&ha->fcports);
-@@ -1807,6 +1791,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+-/*
+ * CIFS vfs client Status information (based on what we know.)
+ */
- qla2x00_init_host_attr(ha);
+@@ -460,6 +452,37 @@ struct dir_notify_req {
+ struct file *pfile;
+ };
-+ qla2x00_dfs_setup(ha);
++struct dfs_info3_param {
++ int flags; /* DFSREF_REFERRAL_SERVER, DFSREF_STORAGE_SERVER*/
++ int PathConsumed;
++ int server_type;
++ int ref_flag;
++ char *path_name;
++ char *node_name;
++};
+
- qla_printk(KERN_INFO, ha, "\n"
- " QLogic Fibre Channel HBA Driver: %s\n"
- " QLogic %s - %s\n"
-@@ -1838,6 +1824,8 @@ qla2x00_remove_one(struct pci_dev *pdev)
-
- ha = pci_get_drvdata(pdev);
-
-+ qla2x00_dfs_remove(ha);
++static inline void free_dfs_info_param(struct dfs_info3_param *param)
++{
++ if (param) {
++ kfree(param->path_name);
++ kfree(param->node_name);
++ kfree(param);
++ }
++}
+
- qla2x00_free_sysfs_attr(ha);
-
- fc_remove_host(ha->host);
-@@ -1871,8 +1859,11 @@ qla2x00_free_device(scsi_qla_host_t *ha)
- kthread_stop(t);
- }
++static inline void free_dfs_info_array(struct dfs_info3_param *param,
++ int number_of_items)
++{
++ int i;
++ if ((number_of_items == 0) || (param == NULL))
++ return;
++ for (i = 0; i < number_of_items; i++) {
++ kfree(param[i].path_name);
++ kfree(param[i].node_name);
++ }
++ kfree(param);
++}
++
+ #define MID_FREE 0
+ #define MID_REQUEST_ALLOCATED 1
+ #define MID_REQUEST_SUBMITTED 2
+diff --git a/fs/cifs/cifspdu.h b/fs/cifs/cifspdu.h
+index dbe6b84..47f7950 100644
+--- a/fs/cifs/cifspdu.h
++++ b/fs/cifs/cifspdu.h
+@@ -237,6 +237,9 @@
+ | DELETE | READ_CONTROL | WRITE_DAC \
+ | WRITE_OWNER | SYNCHRONIZE)
-+ if (ha->flags.fce_enabled)
-+ qla2x00_disable_fce_trace(ha, NULL, NULL);
++#define SET_MINIMUM_RIGHTS (FILE_READ_EA | FILE_READ_ATTRIBUTES \
++ | READ_CONTROL | SYNCHRONIZE)
+
- if (ha->eft)
-- qla2x00_trace_control(ha, TC_DISABLE, 0, 0);
-+ qla2x00_disable_eft_trace(ha);
- ha->flags.online = 0;
+ /*
+ * Invalid readdir handle
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 8350eec..2f09f56 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -1,7 +1,7 @@
+ /*
+ * fs/cifs/cifsproto.h
+ *
+- * Copyright (c) International Business Machines Corp., 2002,2007
++ * Copyright (c) International Business Machines Corp., 2002,2008
+ * Author(s): Steve French (sfrench at us.ibm.com)
+ *
+ * This library is free software; you can redistribute it and/or modify
+@@ -97,11 +97,14 @@ extern int cifs_get_inode_info_unix(struct inode **pinode,
+ const unsigned char *search_path,
+ struct super_block *sb, int xid);
+ extern void acl_to_uid_mode(struct inode *inode, const char *search_path);
+-extern int mode_to_acl(struct inode *inode, const char *path);
++extern int mode_to_acl(struct inode *inode, const char *path, __u64);
-@@ -2016,7 +2007,7 @@ qla2x00_mark_all_devices_lost(scsi_qla_host_t *ha, int defer)
- * 0 = success.
- * 1 = failure.
- */
--uint8_t
-+static uint8_t
- qla2x00_mem_alloc(scsi_qla_host_t *ha)
- {
- char name[16];
-@@ -2213,7 +2204,7 @@ qla2x00_mem_alloc(scsi_qla_host_t *ha)
- * Input:
- * ha = adapter block pointer.
- */
--void
-+static void
- qla2x00_mem_free(scsi_qla_host_t *ha)
- {
- struct list_head *fcpl, *fcptemp;
-@@ -2228,6 +2219,10 @@ qla2x00_mem_free(scsi_qla_host_t *ha)
- /* free sp pool */
- qla2x00_free_sp_pool(ha);
+ extern int cifs_mount(struct super_block *, struct cifs_sb_info *, char *,
+ const char *);
+ extern int cifs_umount(struct super_block *, struct cifs_sb_info *);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++extern void dfs_shrink_umount_helper(struct vfsmount *vfsmnt);
++#endif
+ void cifs_proc_init(void);
+ void cifs_proc_clean(void);
-+ if (ha->fce)
-+ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, ha->fce,
-+ ha->fce_dma);
-+
- if (ha->fw_dump) {
- if (ha->eft)
- dma_free_coherent(&ha->pdev->dev,
-@@ -2748,23 +2743,6 @@ qla2x00_timer(scsi_qla_host_t *ha)
- qla2x00_restart_timer(ha, WATCH_INTERVAL);
+@@ -153,7 +156,7 @@ extern int get_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
+ const char *old_path,
+ const struct nls_table *nls_codepage,
+ unsigned int *pnum_referrals,
+- unsigned char **preferrals,
++ struct dfs_info3_param **preferrals,
+ int remap);
+ extern void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon,
+ struct super_block *sb, struct smb_vol *vol);
+@@ -342,6 +345,8 @@ extern int CIFSSMBSetEA(const int xid, struct cifsTconInfo *tcon,
+ const struct nls_table *nls_codepage, int remap_special_chars);
+ extern int CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon,
+ __u16 fid, struct cifs_ntsd **acl_inf, __u32 *buflen);
++extern int CIFSSMBSetCIFSACL(const int, struct cifsTconInfo *, __u16,
++ struct cifs_ntsd *, __u32);
+ extern int CIFSSMBGetPosixACL(const int xid, struct cifsTconInfo *tcon,
+ const unsigned char *searchName,
+ char *acl_inf, const int buflen, const int acl_type,
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 9e8a6be..9409524 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -3156,6 +3156,71 @@ qsec_out:
+ /* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
+ return rc;
}
++
++int
++CIFSSMBSetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid,
++ struct cifs_ntsd *pntsd, __u32 acllen)
++{
++ __u16 byte_count, param_count, data_count, param_offset, data_offset;
++ int rc = 0;
++ int bytes_returned = 0;
++ SET_SEC_DESC_REQ *pSMB = NULL;
++ NTRANSACT_RSP *pSMBr = NULL;
++
++setCifsAclRetry:
++ rc = smb_init(SMB_COM_NT_TRANSACT, 19, tcon, (void **) &pSMB,
++ (void **) &pSMBr);
++ if (rc)
++ return (rc);
++
++ pSMB->MaxSetupCount = 0;
++ pSMB->Reserved = 0;
++
++ param_count = 8;
++ param_offset = offsetof(struct smb_com_transaction_ssec_req, Fid) - 4;
++ data_count = acllen;
++ data_offset = param_offset + param_count;
++ byte_count = 3 /* pad */ + param_count;
++
++ pSMB->DataCount = cpu_to_le32(data_count);
++ pSMB->TotalDataCount = pSMB->DataCount;
++ pSMB->MaxParameterCount = cpu_to_le32(4);
++ pSMB->MaxDataCount = cpu_to_le32(16384);
++ pSMB->ParameterCount = cpu_to_le32(param_count);
++ pSMB->ParameterOffset = cpu_to_le32(param_offset);
++ pSMB->TotalParameterCount = pSMB->ParameterCount;
++ pSMB->DataOffset = cpu_to_le32(data_offset);
++ pSMB->SetupCount = 0;
++ pSMB->SubCommand = cpu_to_le16(NT_TRANSACT_SET_SECURITY_DESC);
++ pSMB->ByteCount = cpu_to_le16(byte_count+data_count);
++
++ pSMB->Fid = fid; /* file handle always le */
++ pSMB->Reserved2 = 0;
++ pSMB->AclFlags = cpu_to_le32(CIFS_ACL_DACL);
++
++ if (pntsd && acllen) {
++ memcpy((char *) &pSMBr->hdr.Protocol + data_offset,
++ (char *) pntsd,
++ acllen);
++ pSMB->hdr.smb_buf_length += (byte_count + data_count);
++
++ } else
++ pSMB->hdr.smb_buf_length += byte_count;
++
++ rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
++ (struct smb_hdr *) pSMBr, &bytes_returned, 0);
++
++ cFYI(1, ("SetCIFSACL bytes_returned: %d, rc: %d", bytes_returned, rc));
++ if (rc)
++ cFYI(1, ("Set CIFS ACL returned %d", rc));
++ cifs_buf_release(pSMB);
++
++ if (rc == -EAGAIN)
++ goto setCifsAclRetry;
++
++ return (rc);
++}
++
+ #endif /* CONFIG_CIFS_EXPERIMENTAL */
--/* XXX(hch): crude hack to emulate a down_timeout() */
--int
--qla2x00_down_timeout(struct semaphore *sema, unsigned long timeout)
--{
-- const unsigned int step = 100; /* msecs */
-- unsigned int iterations = jiffies_to_msecs(timeout)/100;
--
-- do {
-- if (!down_trylock(sema))
-- return 0;
-- if (msleep_interruptible(step))
-- break;
-- } while (--iterations > 0);
--
-- return -ETIMEDOUT;
--}
--
- /* Firmware interface routines. */
+ /* Legacy Query Path Information call for lookup to old servers such
+@@ -5499,7 +5564,7 @@ SetEARetry:
+ else
+ name_len = strnlen(ea_name, 255);
- #define FW_BLOBS 6
-diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
-index ad2fa01..b68fb73 100644
---- a/drivers/scsi/qla2xxx/qla_sup.c
-+++ b/drivers/scsi/qla2xxx/qla_sup.c
-@@ -22,7 +22,7 @@ static void qla2x00_nv_write(scsi_qla_host_t *, uint16_t);
- * qla2x00_lock_nvram_access() -
- * @ha: HA context
- */
--void
-+static void
- qla2x00_lock_nvram_access(scsi_qla_host_t *ha)
- {
- uint16_t data;
-@@ -55,7 +55,7 @@ qla2x00_lock_nvram_access(scsi_qla_host_t *ha)
- * qla2x00_unlock_nvram_access() -
- * @ha: HA context
- */
--void
-+static void
- qla2x00_unlock_nvram_access(scsi_qla_host_t *ha)
- {
- struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
-@@ -74,7 +74,7 @@ qla2x00_unlock_nvram_access(scsi_qla_host_t *ha)
+- count = sizeof(*parm_data) + ea_value_len + name_len + 1;
++ count = sizeof(*parm_data) + ea_value_len + name_len;
+ pSMB->MaxParameterCount = cpu_to_le16(2);
+ pSMB->MaxDataCount = cpu_to_le16(1000); /* BB find max SMB size from sess */
+ pSMB->MaxSetupCount = 0;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index fd9147c..65d0ba7 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1,7 +1,7 @@
+ /*
+ * fs/cifs/connect.c
*
- * Returns the word read from nvram @addr.
- */
--uint16_t
-+static uint16_t
- qla2x00_get_nvram_word(scsi_qla_host_t *ha, uint32_t addr)
- {
- uint16_t data;
-@@ -93,7 +93,7 @@ qla2x00_get_nvram_word(scsi_qla_host_t *ha, uint32_t addr)
- * @addr: Address in NVRAM to write
- * @data: word to program
- */
--void
-+static void
- qla2x00_write_nvram_word(scsi_qla_host_t *ha, uint32_t addr, uint16_t data)
+- * Copyright (C) International Business Machines Corp., 2002,2007
++ * Copyright (C) International Business Machines Corp., 2002,2008
+ * Author(s): Steve French (sfrench at us.ibm.com)
+ *
+ * This library is free software; you can redistribute it and/or modify
+@@ -1410,7 +1410,7 @@ connect_to_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
+ const char *old_path, const struct nls_table *nls_codepage,
+ int remap)
{
- int count;
-@@ -550,7 +550,7 @@ qla24xx_write_flash_data(scsi_qla_host_t *ha, uint32_t *dwptr, uint32_t faddr,
- int ret;
- uint32_t liter, miter;
- uint32_t sec_mask, rest_addr, conf_addr;
-- uint32_t fdata, findex ;
-+ uint32_t fdata, findex, cnt;
- uint8_t man_id, flash_id;
- struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
- dma_addr_t optrom_dma;
-@@ -690,8 +690,14 @@ qla24xx_write_flash_data(scsi_qla_host_t *ha, uint32_t *dwptr, uint32_t faddr,
- 0xff0000) | ((fdata >> 16) & 0xff));
- }
-
-- /* Enable flash write-protection. */
-+ /* Enable flash write-protection and wait for completion. */
- qla24xx_write_flash_dword(ha, flash_conf_to_access_addr(0x101), 0x9c);
-+ for (cnt = 300; cnt &&
-+ qla24xx_read_flash_dword(ha,
-+ flash_conf_to_access_addr(0x005)) & BIT_0;
-+ cnt--) {
-+ udelay(10);
-+ }
-
- /* Disable flash write. */
- WRT_REG_DWORD(&reg->ctrl_status,
-diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
-index ae6f7a2..2c2f6b4 100644
---- a/drivers/scsi/qla2xxx/qla_version.h
-+++ b/drivers/scsi/qla2xxx/qla_version.h
-@@ -7,7 +7,7 @@
- /*
- * Driver version
- */
--#define QLA2XXX_VERSION "8.02.00-k5"
-+#define QLA2XXX_VERSION "8.02.00-k7"
-
- #define QLA_DRIVER_MAJOR_VER 8
- #define QLA_DRIVER_MINOR_VER 2
-diff --git a/drivers/scsi/qla4xxx/ql4_init.c b/drivers/scsi/qla4xxx/ql4_init.c
-index d692c71..cbe0a17 100644
---- a/drivers/scsi/qla4xxx/ql4_init.c
-+++ b/drivers/scsi/qla4xxx/ql4_init.c
-@@ -5,6 +5,7 @@
- * See LICENSE.qla4xxx for copyright and licensing details.
- */
-
-+#include <scsi/iscsi_if.h>
- #include "ql4_def.h"
- #include "ql4_glbl.h"
- #include "ql4_dbg.h"
-@@ -1305,7 +1306,8 @@ int qla4xxx_process_ddb_changed(struct scsi_qla_host *ha,
- atomic_set(&ddb_entry->relogin_timer, 0);
- clear_bit(DF_RELOGIN, &ddb_entry->flags);
- clear_bit(DF_NO_RELOGIN, &ddb_entry->flags);
-- iscsi_if_create_session_done(ddb_entry->conn);
-+ iscsi_session_event(ddb_entry->sess,
-+ ISCSI_KEVENT_CREATE_SESSION);
- /*
- * Change the lun state to READY in case the lun TIMEOUT before
- * the device came back.
-diff --git a/drivers/scsi/qla4xxx/ql4_isr.c b/drivers/scsi/qla4xxx/ql4_isr.c
-index 4a154be..0f029d0 100644
---- a/drivers/scsi/qla4xxx/ql4_isr.c
-+++ b/drivers/scsi/qla4xxx/ql4_isr.c
-@@ -123,15 +123,14 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
- break;
-
- /* Copy Sense Data into sense buffer. */
-- memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
-
- sensebytecnt = le16_to_cpu(sts_entry->senseDataByteCnt);
- if (sensebytecnt == 0)
- break;
-
- memcpy(cmd->sense_buffer, sts_entry->senseData,
-- min(sensebytecnt,
-- (uint16_t) sizeof(cmd->sense_buffer)));
-+ min_t(uint16_t, sensebytecnt, SCSI_SENSE_BUFFERSIZE));
-
- DEBUG2(printk("scsi%ld:%d:%d:%d: %s: sense key = %x, "
- "ASC/ASCQ = %02x/%02x\n", ha->host_no,
-@@ -208,8 +207,7 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
- break;
+- unsigned char *referrals = NULL;
++ struct dfs_info3_param *referrals = NULL;
+ unsigned int num_referrals;
+ int rc = 0;
- /* Copy Sense Data into sense buffer. */
-- memset(cmd->sense_buffer, 0,
-- sizeof(cmd->sense_buffer));
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+@@ -1429,12 +1429,14 @@ connect_to_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
+ int
+ get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path,
+ const struct nls_table *nls_codepage, unsigned int *pnum_referrals,
+- unsigned char **preferrals, int remap)
++ struct dfs_info3_param **preferrals, int remap)
+ {
+ char *temp_unc;
+ int rc = 0;
++ unsigned char *targetUNCs;
- sensebytecnt =
- le16_to_cpu(sts_entry->senseDataByteCnt);
-@@ -217,8 +215,7 @@ static void qla4xxx_status_entry(struct scsi_qla_host *ha,
- break;
+ *pnum_referrals = 0;
++ *preferrals = NULL;
- memcpy(cmd->sense_buffer, sts_entry->senseData,
-- min(sensebytecnt,
-- (uint16_t) sizeof(cmd->sense_buffer)));
-+ min_t(uint16_t, sensebytecnt, SCSI_SENSE_BUFFERSIZE));
+ if (pSesInfo->ipc_tid == 0) {
+ temp_unc = kmalloc(2 /* for slashes */ +
+@@ -1454,8 +1456,10 @@ get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path,
+ kfree(temp_unc);
+ }
+ if (rc == 0)
+- rc = CIFSGetDFSRefer(xid, pSesInfo, old_path, preferrals,
++ rc = CIFSGetDFSRefer(xid, pSesInfo, old_path, &targetUNCs,
+ pnum_referrals, nls_codepage, remap);
++ /* BB map targetUNCs to dfs_info3 structures, here or
++ in CIFSGetDFSRefer BB */
- DEBUG2(printk("scsi%ld:%d:%d:%d: %s: sense key = %x, "
- "ASC/ASCQ = %02x/%02x\n", ha->host_no,
-diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
-index 89460d2..d3f8664 100644
---- a/drivers/scsi/qla4xxx/ql4_os.c
-+++ b/drivers/scsi/qla4xxx/ql4_os.c
-@@ -173,18 +173,6 @@ static void qla4xxx_conn_stop(struct iscsi_cls_conn *conn, int flag)
- printk(KERN_ERR "iscsi: invalid stop flag %d\n", flag);
+ return rc;
}
+@@ -1964,7 +1968,15 @@ cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,
--static ssize_t format_addr(char *buf, const unsigned char *addr, int len)
--{
-- int i;
-- char *cp = buf;
--
-- for (i = 0; i < len; i++)
-- cp += sprintf(cp, "%02x%c", addr[i],
-- i == (len - 1) ? '\n' : ':');
-- return cp - buf;
--}
--
--
- static int qla4xxx_host_get_param(struct Scsi_Host *shost,
- enum iscsi_host_param param, char *buf)
- {
-@@ -193,7 +181,7 @@ static int qla4xxx_host_get_param(struct Scsi_Host *shost,
-
- switch (param) {
- case ISCSI_HOST_PARAM_HWADDRESS:
-- len = format_addr(buf, ha->my_mac, MAC_ADDR_LEN);
-+ len = sysfs_format_mac(buf, ha->my_mac, MAC_ADDR_LEN);
- break;
- case ISCSI_HOST_PARAM_IPADDRESS:
- len = sprintf(buf, "%d.%d.%d.%d\n", ha->ip_address[0],
-@@ -298,8 +286,7 @@ void qla4xxx_destroy_sess(struct ddb_entry *ddb_entry)
- return;
+ if (existingCifsSes) {
+ pSesInfo = existingCifsSes;
+- cFYI(1, ("Existing smb sess found"));
++ cFYI(1, ("Existing smb sess found (status=%d)",
++ pSesInfo->status));
++ down(&pSesInfo->sesSem);
++ if (pSesInfo->status == CifsNeedReconnect) {
++ cFYI(1, ("Session needs reconnect"));
++ rc = cifs_setup_session(xid, pSesInfo,
++ cifs_sb->local_nls);
++ }
++ up(&pSesInfo->sesSem);
+ } else if (!rc) {
+ cFYI(1, ("Existing smb sess not found"));
+ pSesInfo = sesInfoAlloc();
+@@ -3514,7 +3526,7 @@ cifs_umount(struct super_block *sb, struct cifs_sb_info *cifs_sb)
+ sesInfoFree(ses);
- if (ddb_entry->conn) {
-- iscsi_if_destroy_session_done(ddb_entry->conn);
-- iscsi_destroy_conn(ddb_entry->conn);
-+ atomic_set(&ddb_entry->state, DDB_STATE_DEAD);
- iscsi_remove_session(ddb_entry->sess);
- }
- iscsi_free_session(ddb_entry->sess);
-@@ -309,6 +296,7 @@ int qla4xxx_add_sess(struct ddb_entry *ddb_entry)
- {
- int err;
+ FreeXid(xid);
+- return rc; /* BB check if we should always return zero here */
++ return rc;
+ }
-+ ddb_entry->sess->recovery_tmo = ddb_entry->ha->port_down_retry_count;
- err = iscsi_add_session(ddb_entry->sess, ddb_entry->fw_ddb_index);
- if (err) {
- DEBUG2(printk(KERN_ERR "Could not add session.\n"));
-@@ -321,9 +309,6 @@ int qla4xxx_add_sess(struct ddb_entry *ddb_entry)
- DEBUG2(printk(KERN_ERR "Could not add connection.\n"));
- return -ENOMEM;
+ int cifs_setup_session(unsigned int xid, struct cifsSesInfo *pSesInfo,
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index 37dc97a..699ec11 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -517,12 +517,10 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
+ d_add(direntry, NULL);
+ /* if it was once a directory (but how can we tell?) we could do
+ shrink_dcache_parent(direntry); */
+- } else {
+- cERROR(1, ("Error 0x%x on cifs_get_inode_info in lookup of %s",
+- rc, full_path));
+- /* BB special case check for Access Denied - watch security
+- exposure of returning dir info implicitly via different rc
+- if file exists or not but no access BB */
++ } else if (rc != -EACCES) {
++ cERROR(1, ("Unexpected lookup error %d", rc));
++ /* we special-case Access Denied since it
++ is a common return code */
}
--
-- ddb_entry->sess->recovery_tmo = ddb_entry->ha->port_down_retry_count;
-- iscsi_if_create_session_done(ddb_entry->conn);
- return 0;
- }
-diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c
-index 7a2e798..65455ab 100644
---- a/drivers/scsi/qlogicpti.c
-+++ b/drivers/scsi/qlogicpti.c
-@@ -871,11 +871,12 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
- struct scatterlist *sg, *s;
- int i, n;
+ kfree(full_path);
+diff --git a/fs/cifs/dns_resolve.c b/fs/cifs/dns_resolve.c
+new file mode 100644
+index 0000000..ef7f438
+--- /dev/null
++++ b/fs/cifs/dns_resolve.c
+@@ -0,0 +1,124 @@
++/*
++ * fs/cifs/dns_resolve.c
++ *
++ * Copyright (c) 2007 Igor Mammedov
++ * Author(s): Igor Mammedov (niallain at gmail.com)
++ * Steve French (sfrench at us.ibm.com)
++ *
++ * Contains the CIFS DFS upcall routines used for hostname to
++ * IP address translation.
++ *
++ * This library is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU Lesser General Public License as published
++ * by the Free Software Foundation; either version 2.1 of the License, or
++ * (at your option) any later version.
++ *
++ * This library is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
++ * the GNU Lesser General Public License for more details.
++ *
++ * You should have received a copy of the GNU Lesser General Public License
++ * along with this library; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ */
++
++#include <keys/user-type.h>
++#include "dns_resolve.h"
++#include "cifsglob.h"
++#include "cifsproto.h"
++#include "cifs_debug.h"
++
++static int dns_resolver_instantiate(struct key *key, const void *data,
++ size_t datalen)
++{
++ int rc = 0;
++ char *ip;
++
++ ip = kmalloc(datalen+1, GFP_KERNEL);
++ if (!ip)
++ return -ENOMEM;
++
++ memcpy(ip, data, datalen);
++ ip[datalen] = '\0';
++
++ rcu_assign_pointer(key->payload.data, ip);
++
++ return rc;
++}
++
++struct key_type key_type_dns_resolver = {
++ .name = "dns_resolver",
++ .def_datalen = sizeof(struct in_addr),
++ .describe = user_describe,
++ .instantiate = dns_resolver_instantiate,
++ .match = user_match,
++};
++
++
++/* Resolves server name to ip address.
++ * input:
++ * unc - server UNC
++ * output:
++ * *ip_addr - pointer to server ip, caller responsible for freeing it.
++ * return 0 on success
++ */
++int
++dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
++{
++ int rc = -EAGAIN;
++ struct key *rkey;
++ char *name;
++ int len;
++
++ if (!ip_addr || !unc)
++ return -EINVAL;
++
++ /* search for server name delimiter */
++ len = strlen(unc);
++ if (len < 3) {
++ cFYI(1, ("%s: unc is too short: %s", __FUNCTION__, unc));
++ return -EINVAL;
++ }
++ len -= 2;
++ name = memchr(unc+2, '\\', len);
++ if (!name) {
++ cFYI(1, ("%s: server name is probably the whole unc: %s",
++ __FUNCTION__, unc));
++ } else {
++ len = (name - unc) - 2/* leading // */;
++ }
++
++ name = kmalloc(len+1, GFP_KERNEL);
++ if (!name) {
++ rc = -ENOMEM;
++ return rc;
++ }
++ memcpy(name, unc+2, len);
++ name[len] = 0;
++
++ rkey = request_key(&key_type_dns_resolver, name, "");
++ if (!IS_ERR(rkey)) {
++ len = strlen(rkey->payload.data);
++ *ip_addr = kmalloc(len+1, GFP_KERNEL);
++ if (*ip_addr) {
++ memcpy(*ip_addr, rkey->payload.data, len);
++ (*ip_addr)[len] = '\0';
++ cFYI(1, ("%s: resolved: %s to %s", __FUNCTION__,
++ rkey->description,
++ *ip_addr
++ ));
++ rc = 0;
++ } else {
++ rc = -ENOMEM;
++ }
++ key_put(rkey);
++ } else {
++ cERROR(1, ("%s: unable to resolve: %s", __FUNCTION__, name));
++ }
++
++ kfree(name);
++ return rc;
++}
++
++
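An aside on the new dns_resolve.c above: the server-name scan in dns_resolve_server_name_to_ip() (skip the two leading separators, cut at the next backslash, else take the whole string) can be sketched as a plain userspace helper. The function name below is invented for illustration and is not part of the patch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Extract the server component of a UNC path such as "\\server\share".
 * Mirrors the memchr()-based scan in dns_resolve_server_name_to_ip();
 * the caller frees the returned string. Returns NULL on bad input. */
static char *unc_server_name(const char *unc)
{
	size_t len;
	const char *sep;
	char *name;

	if (!unc || strlen(unc) < 3)
		return NULL;

	len = strlen(unc) - 2;             /* skip the two leading slashes */
	sep = memchr(unc + 2, '\\', len);
	if (sep)
		len = (size_t)(sep - unc) - 2; /* cut at "\\server\..." */

	name = malloc(len + 1);
	if (!name)
		return NULL;
	memcpy(name, unc + 2, len);
	name[len] = '\0';
	return name;
}
```

With no separator after the host part, the whole remainder is treated as the server name, matching the "probably server name is whole unc" branch in the patch.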
+diff --git a/fs/cifs/dns_resolve.h b/fs/cifs/dns_resolve.h
+new file mode 100644
+index 0000000..073fdc3
+--- /dev/null
++++ b/fs/cifs/dns_resolve.h
+@@ -0,0 +1,32 @@
++/*
++ * fs/cifs/dns_resolve.h -- DNS Resolver upcall management for CIFS DFS
++ * Handles host name to IP address resolution
++ *
++ * Copyright (c) International Business Machines Corp., 2008
++ * Author(s): Steve French (sfrench at us.ibm.com)
++ *
++ * This library is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU Lesser General Public License as published
++ * by the Free Software Foundation; either version 2.1 of the License, or
++ * (at your option) any later version.
++ *
++ * This library is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
++ * the GNU Lesser General Public License for more details.
++ *
++ * You should have received a copy of the GNU Lesser General Public License
++ * along with this library; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ */
++
++#ifndef _DNS_RESOLVE_H
++#define _DNS_RESOLVE_H
++
++#ifdef __KERNEL__
++#include <linux/key-type.h>
++extern struct key_type key_type_dns_resolver;
++extern int dns_resolve_server_name_to_ip(const char *unc, char **ip_addr);
++#endif /* __KERNEL__ */
++
++#endif /* _DNS_RESOLVE_H */
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index dd26e27..5f7c374 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1179,12 +1179,10 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
+ atomic_dec(&open_file->wrtPending);
+ /* Does mm or vfs already set times? */
+ inode->i_atime = inode->i_mtime = current_fs_time(inode->i_sb);
+- if ((bytes_written > 0) && (offset)) {
++ if ((bytes_written > 0) && (offset))
+ rc = 0;
+- } else if (bytes_written < 0) {
+- if (rc != -EBADF)
+- rc = bytes_written;
+- }
++ else if (bytes_written < 0)
++ rc = bytes_written;
+ } else {
+ cFYI(1, ("No writeable filehandles for inode"));
+ rc = -EIO;
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index e915eb1..d9567ba 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -54,9 +54,9 @@ int cifs_get_inode_info_unix(struct inode **pinode,
+ MAX_TREE_SIZE + 1) +
+ strnlen(search_path, MAX_PATHCONF) + 1,
+ GFP_KERNEL);
+- if (tmp_path == NULL) {
++ if (tmp_path == NULL)
+ return -ENOMEM;
+- }
++
+ /* have to skip first of the double backslash of
+ UNC name */
+ strncpy(tmp_path, pTcon->treeName, MAX_TREE_SIZE);
+@@ -511,7 +511,8 @@ int cifs_get_inode_info(struct inode **pinode,
+ }
-- if (Cmnd->use_sg) {
-+ if (scsi_bufflen(Cmnd)) {
- int sg_count;
+ spin_lock(&inode->i_lock);
+- if (is_size_safe_to_change(cifsInfo, le64_to_cpu(pfindData->EndOfFile))) {
++ if (is_size_safe_to_change(cifsInfo,
++ le64_to_cpu(pfindData->EndOfFile))) {
+ /* can not safely shrink the file size here if the
+ client is writing to it due to potential races */
+ i_size_write(inode, le64_to_cpu(pfindData->EndOfFile));
+@@ -931,7 +932,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode)
+ (CIFS_UNIX_POSIX_PATH_OPS_CAP &
+ le64_to_cpu(pTcon->fsUnixInfo.Capability))) {
+ u32 oplock = 0;
+- FILE_UNIX_BASIC_INFO * pInfo =
++ FILE_UNIX_BASIC_INFO *pInfo =
+ kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL);
+ if (pInfo == NULL) {
+ rc = -ENOMEM;
+@@ -1607,7 +1608,14 @@ int cifs_setattr(struct dentry *direntry, struct iattr *attrs)
+ CIFS_MOUNT_MAP_SPECIAL_CHR);
+ else if (attrs->ia_valid & ATTR_MODE) {
+ rc = 0;
+- if ((mode & S_IWUGO) == 0) /* not writeable */ {
++#ifdef CONFIG_CIFS_EXPERIMENTAL
++ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL)
++ rc = mode_to_acl(direntry->d_inode, full_path, mode);
++ else if ((mode & S_IWUGO) == 0) {
++#else
++ if ((mode & S_IWUGO) == 0) {
++#endif
++ /* not writeable */
+ if ((cifsInode->cifsAttrs & ATTR_READONLY) == 0) {
+ set_dosattr = TRUE;
+ time_buf.Attributes =
+@@ -1626,10 +1634,10 @@ int cifs_setattr(struct dentry *direntry, struct iattr *attrs)
+ if (time_buf.Attributes == 0)
+ time_buf.Attributes |= cpu_to_le32(ATTR_NORMAL);
+ }
+- /* BB to be implemented -
+- via Windows security descriptors or streams */
+- /* CIFSSMBWinSetPerms(xid, pTcon, full_path, mode, uid, gid,
+- cifs_sb->local_nls); */
++#ifdef CONFIG_CIFS_EXPERIMENTAL
++ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL)
++ mode_to_acl(direntry->d_inode, full_path, mode);
++#endif
+ }
-- sg = (struct scatterlist *) Cmnd->request_buffer;
-- sg_count = sbus_map_sg(qpti->sdev, sg, Cmnd->use_sg, Cmnd->sc_data_direction);
-+ sg = scsi_sglist(Cmnd);
-+ sg_count = sbus_map_sg(qpti->sdev, sg, scsi_sg_count(Cmnd),
-+ Cmnd->sc_data_direction);
+ if (attrs->ia_valid & ATTR_ATIME) {
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index 11f2657..1d6fb01 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -1,7 +1,7 @@
+ /*
+ * fs/cifs/link.c
+ *
+- * Copyright (C) International Business Machines Corp., 2002,2003
++ * Copyright (C) International Business Machines Corp., 2002,2008
+ * Author(s): Steve French (sfrench at us.ibm.com)
+ *
+ * This library is free software; you can redistribute it and/or modify
+@@ -236,8 +236,6 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
+ char *full_path = NULL;
+ char *tmp_path = NULL;
+ char *tmpbuffer;
+- unsigned char *referrals = NULL;
+- unsigned int num_referrals = 0;
+ int len;
+ __u16 fid;
- ds = cmd->dataseg;
- cmd->segment_cnt = sg_count;
-@@ -914,16 +915,6 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
+@@ -297,8 +295,11 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
+ cFYI(1, ("Error closing junction point "
+ "(open for ioctl)"));
}
- sg_count -= n;
++ /* BB unwind this long, nested function, or remove BB */
+ if (rc == -EIO) {
+ /* Query if DFS Junction */
++ unsigned int num_referrals = 0;
++ struct dfs_info3_param *refs = NULL;
+ tmp_path =
+ kmalloc(MAX_TREE_SIZE + MAX_PATHCONF + 1,
+ GFP_KERNEL);
+@@ -310,7 +311,7 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
+ rc = get_dfs_path(xid, pTcon->ses,
+ tmp_path,
+ cifs_sb->local_nls,
+- &num_referrals, &referrals,
++ &num_referrals, &refs,
+ cifs_sb->mnt_cifs_flags &
+ CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cFYI(1, ("Get DFS for %s rc = %d ",
+@@ -320,14 +321,13 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
+ else {
+ cFYI(1, ("num referral: %d",
+ num_referrals));
+- if (referrals) {
+- cFYI(1,("referral string: %s", referrals));
++ if (refs && refs->path_name) {
+ strncpy(tmpbuffer,
+- referrals,
++ refs->path_name,
+ len-1);
+ }
+ }
+- kfree(referrals);
++ kfree(refs);
+ kfree(tmp_path);
+ }
+ /* BB add code like else decode referrals
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index d0cb469..d2153ab 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -528,9 +528,11 @@ CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time,
+ rc = -EOVERFLOW;
+ goto ssetup_exit;
}
-- } else if (Cmnd->request_bufflen) {
-- Cmnd->SCp.ptr = (char *)(unsigned long)
-- sbus_map_single(qpti->sdev,
-- Cmnd->request_buffer,
-- Cmnd->request_bufflen,
-- Cmnd->sc_data_direction);
--
-- cmd->dataseg[0].d_base = (u32) ((unsigned long)Cmnd->SCp.ptr);
-- cmd->dataseg[0].d_count = Cmnd->request_bufflen;
-- cmd->segment_cnt = 1;
- } else {
- cmd->dataseg[0].d_base = 0;
- cmd->dataseg[0].d_count = 0;
-@@ -1151,7 +1142,7 @@ static struct scsi_cmnd *qlogicpti_intr_handler(struct qlogicpti *qpti)
+- ses->server->mac_signing_key.len = msg->sesskey_len;
+- memcpy(ses->server->mac_signing_key.data.krb5, msg->data,
+- msg->sesskey_len);
++ if (first_time) {
++ ses->server->mac_signing_key.len = msg->sesskey_len;
++ memcpy(ses->server->mac_signing_key.data.krb5,
++ msg->data, msg->sesskey_len);
++ }
+ pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC;
+ capabilities |= CAP_EXTENDED_SECURITY;
+ pSMB->req.Capabilities = cpu_to_le32(capabilities);
+@@ -540,7 +542,7 @@ CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time,
- if (sts->state_flags & SF_GOT_SENSE)
- memcpy(Cmnd->sense_buffer, sts->req_sense_data,
-- sizeof(Cmnd->sense_buffer));
-+ SCSI_SENSE_BUFFERSIZE);
+ if (ses->capabilities & CAP_UNICODE) {
+ /* unicode strings must be word aligned */
+- if (iov[0].iov_len % 2) {
++ if ((iov[0].iov_len + iov[1].iov_len) % 2) {
+ *bcc_ptr = 0;
+ bcc_ptr++;
+ }
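The one-line sess.c fix above matters because UTF-16 strings in the session-setup buffer must start on a 2-byte boundary, and the pad decision has to count every byte emitted so far (both iovecs), not just the first one. A minimal sketch of that check (hypothetical helper name, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Decide whether a single pad byte is needed before appending
 * word-aligned (UTF-16) data. As in the fix above, the total length
 * of all buffers written so far is what determines alignment. */
static int needs_unicode_pad(size_t len0, size_t len1)
{
	return (len0 + len1) % 2 != 0;
}
```

Checking only len0, as the pre-patch code did, misaligns the strings whenever the second buffer has odd length.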
+diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c
+index dcc6aea..e3eb355 100644
+--- a/fs/coda/psdev.c
++++ b/fs/coda/psdev.c
+@@ -362,8 +362,8 @@ static int init_coda_psdev(void)
+ goto out_chrdev;
+ }
+ for (i = 0; i < MAX_CODADEVS; i++)
+- class_device_create(coda_psdev_class, NULL,
+- MKDEV(CODA_PSDEV_MAJOR,i), NULL, "cfs%d", i);
++ device_create(coda_psdev_class, NULL,
++ MKDEV(CODA_PSDEV_MAJOR,i), "cfs%d", i);
+ coda_sysctl_init();
+ goto out;
- if (sts->hdr.entry_type == ENTRY_STATUS)
- Cmnd->result =
-@@ -1159,17 +1150,11 @@ static struct scsi_cmnd *qlogicpti_intr_handler(struct qlogicpti *qpti)
- else
- Cmnd->result = DID_ERROR << 16;
+@@ -405,7 +405,7 @@ static int __init init_coda(void)
+ return 0;
+ out:
+ for (i = 0; i < MAX_CODADEVS; i++)
+- class_device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
++ device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
+ class_destroy(coda_psdev_class);
+ unregister_chrdev(CODA_PSDEV_MAJOR, "coda");
+ coda_sysctl_clean();
+@@ -424,7 +424,7 @@ static void __exit exit_coda(void)
+ printk("coda: failed to unregister filesystem\n");
+ }
+ for (i = 0; i < MAX_CODADEVS; i++)
+- class_device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
++ device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
+ class_destroy(coda_psdev_class);
+ unregister_chrdev(CODA_PSDEV_MAJOR, "coda");
+ coda_sysctl_clean();
+diff --git a/fs/compat.c b/fs/compat.c
+index 15078ce..5216c3f 100644
+--- a/fs/compat.c
++++ b/fs/compat.c
+@@ -1104,10 +1104,6 @@ static ssize_t compat_do_readv_writev(int type, struct file *file,
+ if (ret < 0)
+ goto out;
-- if (Cmnd->use_sg) {
-+ if (scsi_bufflen(Cmnd))
- sbus_unmap_sg(qpti->sdev,
-- (struct scatterlist *)Cmnd->request_buffer,
-- Cmnd->use_sg,
-+ scsi_sglist(Cmnd), scsi_sg_count(Cmnd),
- Cmnd->sc_data_direction);
-- } else if (Cmnd->request_bufflen) {
-- sbus_unmap_single(qpti->sdev,
-- (__u32)((unsigned long)Cmnd->SCp.ptr),
-- Cmnd->request_bufflen,
-- Cmnd->sc_data_direction);
-- }
+- ret = security_file_permission(file, type == READ ? MAY_READ:MAY_WRITE);
+- if (ret)
+- goto out;
+-
+ fnv = NULL;
+ if (type == READ) {
+ fn = file->f_op->read;
+diff --git a/fs/compat_binfmt_elf.c b/fs/compat_binfmt_elf.c
+new file mode 100644
+index 0000000..0adced2
+--- /dev/null
++++ b/fs/compat_binfmt_elf.c
+@@ -0,0 +1,131 @@
++/*
++ * 32-bit compatibility support for ELF format executables and core dumps.
++ *
++ * Copyright (C) 2007 Red Hat, Inc. All rights reserved.
++ *
++ * This copyrighted material is made available to anyone wishing to use,
++ * modify, copy, or redistribute it subject to the terms and conditions
++ * of the GNU General Public License v.2.
++ *
++ * Red Hat Author: Roland McGrath.
++ *
++ * This file is used in a 64-bit kernel that wants to support 32-bit ELF.
++ * asm/elf.h is responsible for defining the compat_* and COMPAT_* macros
++ * used below, with definitions appropriate for 32-bit ABI compatibility.
++ *
++ * We use macros to rename the ABI types and machine-dependent
++ * functions used in binfmt_elf.c to compat versions.
++ */
+
- qpti->cmd_count[Cmnd->device->id]--;
- sbus_writew(out_ptr, qpti->qregs + MBOX5);
- Cmnd->host_scribble = (unsigned char *) done_queue;
-diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
-index 0fb1709..1a9fba6 100644
---- a/drivers/scsi/scsi.c
-+++ b/drivers/scsi/scsi.c
-@@ -122,6 +122,11 @@ static const char *const scsi_device_types[] = {
- "Automation/Drive ",
- };
-
-+/**
-+ * scsi_device_type - Return 17 char string indicating device type.
-+ * @type: type number to look up
++#include <linux/elfcore-compat.h>
++#include <linux/time.h>
++
++/*
++ * Rename the basic ELF layout types to refer to the 32-bit class of files.
+ */
++#undef ELF_CLASS
++#define ELF_CLASS ELFCLASS32
+
- const char * scsi_device_type(unsigned type)
++#undef elfhdr
++#undef elf_phdr
++#undef elf_note
++#undef elf_addr_t
++#define elfhdr elf32_hdr
++#define elf_phdr elf32_phdr
++#define elf_note elf32_note
++#define elf_addr_t Elf32_Addr
++
++/*
++ * The machine-dependent core note format types are defined in elfcore-compat.h,
++ * which requires asm/elf.h to define compat_elf_gregset_t et al.
++ */
++#define elf_prstatus compat_elf_prstatus
++#define elf_prpsinfo compat_elf_prpsinfo
++
++/*
++ * Compat version of cputime_to_compat_timeval, perhaps this
++ * should be an inline in <linux/compat.h>.
++ */
++static void cputime_to_compat_timeval(const cputime_t cputime,
++ struct compat_timeval *value)
++{
++ struct timeval tv;
++ cputime_to_timeval(cputime, &tv);
++ value->tv_sec = tv.tv_sec;
++ value->tv_usec = tv.tv_usec;
++}
++
++#undef cputime_to_timeval
++#define cputime_to_timeval cputime_to_compat_timeval
++
++
++/*
++ * To use this file, asm/elf.h must define compat_elf_check_arch.
++ * The other following macros can be defined if the compat versions
++ * differ from the native ones, or omitted when they match.
++ */
++
++#undef ELF_ARCH
++#undef elf_check_arch
++#define elf_check_arch compat_elf_check_arch
++
++#ifdef COMPAT_ELF_PLATFORM
++#undef ELF_PLATFORM
++#define ELF_PLATFORM COMPAT_ELF_PLATFORM
++#endif
++
++#ifdef COMPAT_ELF_HWCAP
++#undef ELF_HWCAP
++#define ELF_HWCAP COMPAT_ELF_HWCAP
++#endif
++
++#ifdef COMPAT_ARCH_DLINFO
++#undef ARCH_DLINFO
++#define ARCH_DLINFO COMPAT_ARCH_DLINFO
++#endif
++
++#ifdef COMPAT_ELF_ET_DYN_BASE
++#undef ELF_ET_DYN_BASE
++#define ELF_ET_DYN_BASE COMPAT_ELF_ET_DYN_BASE
++#endif
++
++#ifdef COMPAT_ELF_EXEC_PAGESIZE
++#undef ELF_EXEC_PAGESIZE
++#define ELF_EXEC_PAGESIZE COMPAT_ELF_EXEC_PAGESIZE
++#endif
++
++#ifdef COMPAT_ELF_PLAT_INIT
++#undef ELF_PLAT_INIT
++#define ELF_PLAT_INIT COMPAT_ELF_PLAT_INIT
++#endif
++
++#ifdef COMPAT_SET_PERSONALITY
++#undef SET_PERSONALITY
++#define SET_PERSONALITY COMPAT_SET_PERSONALITY
++#endif
++
++#ifdef compat_start_thread
++#undef start_thread
++#define start_thread compat_start_thread
++#endif
++
++#ifdef compat_arch_setup_additional_pages
++#undef ARCH_HAS_SETUP_ADDITIONAL_PAGES
++#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
++#undef arch_setup_additional_pages
++#define arch_setup_additional_pages compat_arch_setup_additional_pages
++#endif
++
++/*
++ * Rename a few of the symbols that binfmt_elf.c will define.
++ * These are all local so the names don't really matter, but it
++ * might make some debugging less confusing not to duplicate them.
++ */
++#define elf_format compat_elf_format
++#define init_elf_binfmt init_compat_elf_binfmt
++#define exit_elf_binfmt exit_compat_elf_binfmt
++
++/*
++ * We share all the actual code with the native (64-bit) version.
++ */
++#include "binfmt_elf.c"
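The new compat_binfmt_elf.c builds a second, 32-bit flavour of binfmt_elf.c purely with the preprocessor: rename the types and entry points, then #include the shared implementation. The same technique in miniature, collapsed into one self-contained file (all names invented for the example):

```c
#include <assert.h>
#include <stdint.h>

/* The "shared implementation" below is written against the generic
 * names elf_addr_t and load_binary; redefining those names before each
 * copy is compiled yields two differently-typed flavours of the same
 * code, just as compat_binfmt_elf.c does with #include "binfmt_elf.c". */

#define elf_addr_t uint64_t
#define load_binary load_binary64
static elf_addr_t load_binary(elf_addr_t base)
{
	return base + sizeof(elf_addr_t); /* 64-bit flavour: advance by 8 */
}
#undef elf_addr_t
#undef load_binary

#define elf_addr_t uint32_t
#define load_binary load_binary32
static elf_addr_t load_binary(elf_addr_t base)
{
	return base + sizeof(elf_addr_t); /* 32-bit flavour: advance by 4 */
}
#undef elf_addr_t
#undef load_binary
```

In the real file the second copy lives in binfmt_elf.c and is pulled in once per configuration, so the 64-bit and 32-bit loaders never diverge.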
+diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
+index da8cb3b..ffdc022 100644
+--- a/fs/compat_ioctl.c
++++ b/fs/compat_ioctl.c
+@@ -1376,7 +1376,7 @@ static int do_atm_ioctl(unsigned int fd, unsigned int cmd32, unsigned long arg)
+ return -EINVAL;
+ }
+
+-static __attribute_used__ int
++static __used int
+ ret_einval(unsigned int fd, unsigned int cmd, unsigned long arg)
{
- if (type == 0x1e)
-@@ -136,32 +141,45 @@ const char * scsi_device_type(unsigned type)
- EXPORT_SYMBOL(scsi_device_type);
+ return -EINVAL;
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 50ed691..a48dc7d 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -546,7 +546,7 @@ static int populate_groups(struct config_group *group)
+ * That said, taking our i_mutex is closer to mkdir
+ * emulation, and shouldn't hurt.
+ */
+- mutex_lock(&dentry->d_inode->i_mutex);
++ mutex_lock_nested(&dentry->d_inode->i_mutex, I_MUTEX_CHILD);
- struct scsi_host_cmd_pool {
-- struct kmem_cache *slab;
-- unsigned int users;
-- char *name;
-- unsigned int slab_flags;
-- gfp_t gfp_mask;
-+ struct kmem_cache *cmd_slab;
-+ struct kmem_cache *sense_slab;
-+ unsigned int users;
-+ char *cmd_name;
-+ char *sense_name;
-+ unsigned int slab_flags;
-+ gfp_t gfp_mask;
- };
+ for (i = 0; group->default_groups[i]; i++) {
+ new_group = group->default_groups[i];
+@@ -1405,7 +1405,8 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
+ sd = configfs_sb->s_root->d_fsdata;
+ link_group(to_config_group(sd->s_element), group);
- static struct scsi_host_cmd_pool scsi_cmd_pool = {
-- .name = "scsi_cmd_cache",
-+ .cmd_name = "scsi_cmd_cache",
-+ .sense_name = "scsi_sense_cache",
- .slab_flags = SLAB_HWCACHE_ALIGN,
- };
+- mutex_lock(&configfs_sb->s_root->d_inode->i_mutex);
++ mutex_lock_nested(&configfs_sb->s_root->d_inode->i_mutex,
++ I_MUTEX_PARENT);
- static struct scsi_host_cmd_pool scsi_cmd_dma_pool = {
-- .name = "scsi_cmd_cache(DMA)",
-+ .cmd_name = "scsi_cmd_cache(DMA)",
-+ .sense_name = "scsi_sense_cache(DMA)",
- .slab_flags = SLAB_HWCACHE_ALIGN|SLAB_CACHE_DMA,
- .gfp_mask = __GFP_DMA,
- };
+ name.name = group->cg_item.ci_name;
+ name.len = strlen(name.name);
+diff --git a/fs/configfs/file.c b/fs/configfs/file.c
+index a3658f9..397cb50 100644
+--- a/fs/configfs/file.c
++++ b/fs/configfs/file.c
+@@ -320,7 +320,7 @@ int configfs_add_file(struct dentry * dir, const struct configfs_attribute * att
+ umode_t mode = (attr->ca_mode & S_IALLUGO) | S_IFREG;
+ int error = 0;
+
+- mutex_lock(&dir->d_inode->i_mutex);
++ mutex_lock_nested(&dir->d_inode->i_mutex, I_MUTEX_NORMAL);
+ error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type);
+ mutex_unlock(&dir->d_inode->i_mutex);
+
+diff --git a/fs/configfs/mount.c b/fs/configfs/mount.c
+index 3bf0278..de3b31d 100644
+--- a/fs/configfs/mount.c
++++ b/fs/configfs/mount.c
+@@ -128,7 +128,7 @@ void configfs_release_fs(void)
+ }
- static DEFINE_MUTEX(host_cmd_pool_mutex);
-+/**
-+ * __scsi_get_command - Allocate a struct scsi_cmnd
-+ * @shost: host to transmit command
-+ * @gfp_mask: allocation mask
-+ *
-+ * Description: allocate a struct scsi_cmd from host's slab, recycling from the
-+ * host's free_list if necessary.
-+ */
- struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
- {
- struct scsi_cmnd *cmd;
-+ unsigned char *buf;
+-static decl_subsys(config, NULL, NULL);
++static struct kobject *config_kobj;
-- cmd = kmem_cache_alloc(shost->cmd_pool->slab,
-- gfp_mask | shost->cmd_pool->gfp_mask);
-+ cmd = kmem_cache_alloc(shost->cmd_pool->cmd_slab,
-+ gfp_mask | shost->cmd_pool->gfp_mask);
+ static int __init configfs_init(void)
+ {
+@@ -140,9 +140,8 @@ static int __init configfs_init(void)
+ if (!configfs_dir_cachep)
+ goto out;
- if (unlikely(!cmd)) {
- unsigned long flags;
-@@ -173,19 +191,32 @@ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
- list_del_init(&cmd->list);
- }
- spin_unlock_irqrestore(&shost->free_list_lock, flags);
-+
-+ if (cmd) {
-+ buf = cmd->sense_buffer;
-+ memset(cmd, 0, sizeof(*cmd));
-+ cmd->sense_buffer = buf;
-+ }
-+ } else {
-+ buf = kmem_cache_alloc(shost->cmd_pool->sense_slab,
-+ gfp_mask | shost->cmd_pool->gfp_mask);
-+ if (likely(buf)) {
-+ memset(cmd, 0, sizeof(*cmd));
-+ cmd->sense_buffer = buf;
-+ } else {
-+ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
-+ cmd = NULL;
-+ }
+- kobj_set_kset_s(&config_subsys, kernel_subsys);
+- err = subsystem_register(&config_subsys);
+- if (err) {
++ config_kobj = kobject_create_and_add("config", kernel_kobj);
++ if (!config_kobj) {
+ kmem_cache_destroy(configfs_dir_cachep);
+ configfs_dir_cachep = NULL;
+ goto out;
+@@ -151,7 +150,7 @@ static int __init configfs_init(void)
+ err = register_filesystem(&configfs_fs_type);
+ if (err) {
+ printk(KERN_ERR "configfs: Unable to register filesystem!\n");
+- subsystem_unregister(&config_subsys);
++ kobject_put(config_kobj);
+ kmem_cache_destroy(configfs_dir_cachep);
+ configfs_dir_cachep = NULL;
+ goto out;
+@@ -160,7 +159,7 @@ static int __init configfs_init(void)
+ err = configfs_inode_init();
+ if (err) {
+ unregister_filesystem(&configfs_fs_type);
+- subsystem_unregister(&config_subsys);
++ kobject_put(config_kobj);
+ kmem_cache_destroy(configfs_dir_cachep);
+ configfs_dir_cachep = NULL;
}
-
- return cmd;
+@@ -171,7 +170,7 @@ out:
+ static void __exit configfs_exit(void)
+ {
+ unregister_filesystem(&configfs_fs_type);
+- subsystem_unregister(&config_subsys);
++ kobject_put(config_kobj);
+ kmem_cache_destroy(configfs_dir_cachep);
+ configfs_dir_cachep = NULL;
+ configfs_inode_exit();
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 6a713b3..d26e282 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -426,20 +426,19 @@ exit:
}
- EXPORT_SYMBOL_GPL(__scsi_get_command);
-
--/*
-- * Function: scsi_get_command()
-- *
-- * Purpose: Allocate and setup a scsi command block
-- *
-- * Arguments: dev - parent scsi device
-- * gfp_mask- allocator flags
-+/**
-+ * scsi_get_command - Allocate and setup a scsi command block
-+ * @dev: parent scsi device
-+ * @gfp_mask: allocator flags
- *
- * Returns: The allocated scsi command structure.
- */
-@@ -202,7 +233,6 @@ struct scsi_cmnd *scsi_get_command(struct scsi_device *dev, gfp_t gfp_mask)
- if (likely(cmd != NULL)) {
- unsigned long flags;
+ EXPORT_SYMBOL_GPL(debugfs_rename);
-- memset(cmd, 0, sizeof(*cmd));
- cmd->device = dev;
- init_timer(&cmd->eh_timeout);
- INIT_LIST_HEAD(&cmd->list);
-@@ -217,6 +247,12 @@ struct scsi_cmnd *scsi_get_command(struct scsi_device *dev, gfp_t gfp_mask)
- }
- EXPORT_SYMBOL(scsi_get_command);
+-static decl_subsys(debug, NULL, NULL);
++static struct kobject *debug_kobj;
-+/**
-+ * __scsi_put_command - Free a struct scsi_cmnd
-+ * @shost: dev->host
-+ * @cmd: Command to free
-+ * @dev: parent scsi device
-+ */
- void __scsi_put_command(struct Scsi_Host *shost, struct scsi_cmnd *cmd,
- struct device *dev)
+ static int __init debugfs_init(void)
{
-@@ -230,19 +266,19 @@ void __scsi_put_command(struct Scsi_Host *shost, struct scsi_cmnd *cmd,
- }
- spin_unlock_irqrestore(&shost->free_list_lock, flags);
+ int retval;
-- if (likely(cmd != NULL))
-- kmem_cache_free(shost->cmd_pool->slab, cmd);
-+ if (likely(cmd != NULL)) {
-+ kmem_cache_free(shost->cmd_pool->sense_slab,
-+ cmd->sense_buffer);
-+ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
-+ }
+- kobj_set_kset_s(&debug_subsys, kernel_subsys);
+- retval = subsystem_register(&debug_subsys);
+- if (retval)
+- return retval;
++ debug_kobj = kobject_create_and_add("debug", kernel_kobj);
++ if (!debug_kobj)
++ return -EINVAL;
- put_device(dev);
+ retval = register_filesystem(&debug_fs_type);
+ if (retval)
+- subsystem_unregister(&debug_subsys);
++ kobject_put(debug_kobj);
+ return retval;
}
- EXPORT_SYMBOL(__scsi_put_command);
--/*
-- * Function: scsi_put_command()
-- *
-- * Purpose: Free a scsi command block
-- *
-- * Arguments: cmd - command block to free
-+/**
-+ * scsi_put_command - Free a scsi command block
-+ * @cmd: command block to free
- *
- * Returns: Nothing.
- *
-@@ -263,12 +299,13 @@ void scsi_put_command(struct scsi_cmnd *cmd)
+@@ -447,7 +446,7 @@ static void __exit debugfs_exit(void)
+ {
+ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
+ unregister_filesystem(&debug_fs_type);
+- subsystem_unregister(&debug_subsys);
++ kobject_put(debug_kobj);
}
- EXPORT_SYMBOL(scsi_put_command);
--/*
-- * Function: scsi_setup_command_freelist()
-- *
-- * Purpose: Setup the command freelist for a scsi host.
-+/**
-+ * scsi_setup_command_freelist - Setup the command freelist for a scsi host.
-+ * @shost: host to allocate the freelist for.
- *
-- * Arguments: shost - host to allocate the freelist for.
-+ * Description: The command freelist protects against system-wide out of memory
-+ * deadlock by preallocating one SCSI command structure for each host, so the
-+ * system can always write to a swap file on a device associated with that host.
- *
- * Returns: Nothing.
- */
-@@ -282,16 +319,24 @@ int scsi_setup_command_freelist(struct Scsi_Host *shost)
+ core_initcall(debugfs_init);
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index 6353a83..5c108c4 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -166,26 +166,7 @@ static struct kobj_type dlm_ktype = {
+ .release = lockspace_kobj_release,
+ };
- /*
- * Select a command slab for this host and create it if not
-- * yet existant.
-+ * yet existent.
- */
- mutex_lock(&host_cmd_pool_mutex);
- pool = (shost->unchecked_isa_dma ? &scsi_cmd_dma_pool : &scsi_cmd_pool);
- if (!pool->users) {
-- pool->slab = kmem_cache_create(pool->name,
-- sizeof(struct scsi_cmnd), 0,
-- pool->slab_flags, NULL);
-- if (!pool->slab)
-+ pool->cmd_slab = kmem_cache_create(pool->cmd_name,
-+ sizeof(struct scsi_cmnd), 0,
-+ pool->slab_flags, NULL);
-+ if (!pool->cmd_slab)
-+ goto fail;
-+
-+ pool->sense_slab = kmem_cache_create(pool->sense_name,
-+ SCSI_SENSE_BUFFERSIZE, 0,
-+ pool->slab_flags, NULL);
-+ if (!pool->sense_slab) {
-+ kmem_cache_destroy(pool->cmd_slab);
- goto fail;
-+ }
- }
+-static struct kset dlm_kset = {
+- .ktype = &dlm_ktype,
+-};
+-
+-static int kobject_setup(struct dlm_ls *ls)
+-{
+- char lsname[DLM_LOCKSPACE_LEN];
+- int error;
+-
+- memset(lsname, 0, DLM_LOCKSPACE_LEN);
+- snprintf(lsname, DLM_LOCKSPACE_LEN, "%s", ls->ls_name);
+-
+- error = kobject_set_name(&ls->ls_kobj, "%s", lsname);
+- if (error)
+- return error;
+-
+- ls->ls_kobj.kset = &dlm_kset;
+- ls->ls_kobj.ktype = &dlm_ktype;
+- return 0;
+-}
++static struct kset *dlm_kset;
- pool->users++;
-@@ -301,29 +346,36 @@ int scsi_setup_command_freelist(struct Scsi_Host *shost)
- /*
- * Get one backup command for this host.
- */
-- cmd = kmem_cache_alloc(shost->cmd_pool->slab,
-- GFP_KERNEL | shost->cmd_pool->gfp_mask);
-+ cmd = kmem_cache_alloc(shost->cmd_pool->cmd_slab,
-+ GFP_KERNEL | shost->cmd_pool->gfp_mask);
- if (!cmd)
- goto fail2;
-- list_add(&cmd->list, &shost->free_list);
-+
-+ cmd->sense_buffer = kmem_cache_alloc(shost->cmd_pool->sense_slab,
-+ GFP_KERNEL |
-+ shost->cmd_pool->gfp_mask);
-+ if (!cmd->sense_buffer)
-+ goto fail2;
-+
-+ list_add(&cmd->list, &shost->free_list);
- return 0;
+ static int do_uevent(struct dlm_ls *ls, int in)
+ {
+@@ -220,24 +201,22 @@ static int do_uevent(struct dlm_ls *ls, int in)
- fail2:
-- if (!--pool->users)
-- kmem_cache_destroy(pool->slab);
-- return -ENOMEM;
-+ if (cmd)
-+ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
-+ mutex_lock(&host_cmd_pool_mutex);
-+ if (!--pool->users) {
-+ kmem_cache_destroy(pool->cmd_slab);
-+ kmem_cache_destroy(pool->sense_slab);
-+ }
- fail:
- mutex_unlock(&host_cmd_pool_mutex);
- return -ENOMEM;
+ int dlm_lockspace_init(void)
+ {
+- int error;
-
+ ls_count = 0;
+ mutex_init(&ls_lock);
+ INIT_LIST_HEAD(&lslist);
+ spin_lock_init(&lslist_lock);
+
+- kobject_set_name(&dlm_kset.kobj, "dlm");
+- kobj_set_kset_s(&dlm_kset, kernel_subsys);
+- error = kset_register(&dlm_kset);
+- if (error)
+- printk("dlm_lockspace_init: cannot register kset %d\n", error);
+- return error;
++ dlm_kset = kset_create_and_add("dlm", NULL, kernel_kobj);
++ if (!dlm_kset) {
++ printk(KERN_WARNING "%s: can not create kset\n", __FUNCTION__);
++ return -ENOMEM;
++ }
++ return 0;
}
--/*
-- * Function: scsi_destroy_command_freelist()
-- *
-- * Purpose: Release the command freelist for a scsi host.
-- *
-- * Arguments: shost - host that's freelist is going to be destroyed
-+/**
-+ * scsi_destroy_command_freelist - Release the command freelist for a scsi host.
-+ * @shost: host whose freelist is going to be destroyed
- */
- void scsi_destroy_command_freelist(struct Scsi_Host *shost)
+ void dlm_lockspace_exit(void)
{
-@@ -332,12 +384,16 @@ void scsi_destroy_command_freelist(struct Scsi_Host *shost)
+- kset_unregister(&dlm_kset);
++ kset_unregister(dlm_kset);
+ }
- cmd = list_entry(shost->free_list.next, struct scsi_cmnd, list);
- list_del_init(&cmd->list);
-- kmem_cache_free(shost->cmd_pool->slab, cmd);
-+ kmem_cache_free(shost->cmd_pool->sense_slab,
-+ cmd->sense_buffer);
-+ kmem_cache_free(shost->cmd_pool->cmd_slab, cmd);
+ static int dlm_scand(void *data)
+@@ -549,13 +528,12 @@ static int new_lockspace(char *name, int namelen, void **lockspace,
+ goto out_delist;
}
- mutex_lock(&host_cmd_pool_mutex);
-- if (!--shost->cmd_pool->users)
-- kmem_cache_destroy(shost->cmd_pool->slab);
-+ if (!--shost->cmd_pool->users) {
-+ kmem_cache_destroy(shost->cmd_pool->cmd_slab);
-+ kmem_cache_destroy(shost->cmd_pool->sense_slab);
-+ }
- mutex_unlock(&host_cmd_pool_mutex);
- }
+- error = kobject_setup(ls);
+- if (error)
+- goto out_stop;
+-
+- error = kobject_register(&ls->ls_kobj);
++ ls->ls_kobj.kset = dlm_kset;
++ error = kobject_init_and_add(&ls->ls_kobj, &dlm_ktype, NULL,
++ "%s", ls->ls_name);
+ if (error)
+ goto out_stop;
++ kobject_uevent(&ls->ls_kobj, KOBJ_ADD);
-@@ -441,8 +497,12 @@ void scsi_log_completion(struct scsi_cmnd *cmd, int disposition)
- }
- #endif
+ /* let kobject handle freeing of ls if there's an error */
+ do_unreg = 1;
+@@ -601,7 +579,7 @@ static int new_lockspace(char *name, int namelen, void **lockspace,
+ kfree(ls->ls_rsbtbl);
+ out_lsfree:
+ if (do_unreg)
+- kobject_unregister(&ls->ls_kobj);
++ kobject_put(&ls->ls_kobj);
+ else
+ kfree(ls);
+ out:
+@@ -750,7 +728,7 @@ static int release_lockspace(struct dlm_ls *ls, int force)
+ dlm_clear_members(ls);
+ dlm_clear_members_gone(ls);
+ kfree(ls->ls_node_array);
+- kobject_unregister(&ls->ls_kobj);
++ kobject_put(&ls->ls_kobj);
+ /* The ls structure will be freed when the kobject is done with */
--/*
-- * Assign a serial number to the request for error recovery
-+/**
-+ * scsi_cmd_get_serial - Assign a serial number to a command
-+ * @host: the scsi host
-+ * @cmd: command to assign serial number to
-+ *
-+ * Description: a serial number identifies a request for error recovery
- * and debugging purposes. Protected by the Host_Lock of host.
- */
- static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd *cmd)
-@@ -452,14 +512,12 @@ static inline void scsi_cmd_get_serial(struct Scsi_Host *host, struct scsi_cmnd
- cmd->serial_number = host->cmd_serial_number++;
+ mutex_lock(&ls_lock);
+diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
+index e5580bc..0249aa4 100644
+--- a/fs/ecryptfs/main.c
++++ b/fs/ecryptfs/main.c
+@@ -734,127 +734,40 @@ static int ecryptfs_init_kmem_caches(void)
+ return 0;
}
--/*
-- * Function: scsi_dispatch_command
-- *
-- * Purpose: Dispatch a command to the low-level driver.
-- *
-- * Arguments: cmd - command block we are dispatching.
-+/**
-+ * scsi_dispatch_command - Dispatch a command to the low-level driver.
-+ * @cmd: command block we are dispatching.
- *
-- * Notes:
-+ * Return: nonzero return request was rejected and device's queue needs to be
-+ * plugged.
- */
- int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
- {
-@@ -585,7 +643,7 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
+-struct ecryptfs_obj {
+- char *name;
+- struct list_head slot_list;
+- struct kobject kobj;
+-};
+-
+-struct ecryptfs_attribute {
+- struct attribute attr;
+- ssize_t(*show) (struct ecryptfs_obj *, char *);
+- ssize_t(*store) (struct ecryptfs_obj *, const char *, size_t);
+-};
++static struct kobject *ecryptfs_kobj;
- /**
- * scsi_req_abort_cmd -- Request command recovery for the specified command
-- * cmd: pointer to the SCSI command of interest
-+ * @cmd: pointer to the SCSI command of interest
- *
- * This function requests that SCSI Core start recovery for the
- * command by deleting the timer and adding the command to the eh
-@@ -606,9 +664,9 @@ EXPORT_SYMBOL(scsi_req_abort_cmd);
- * @cmd: The SCSI Command for which a low-level device driver (LLDD) gives
- * ownership back to SCSI Core -- i.e. the LLDD has finished with it.
- *
-- * This function is the mid-level's (SCSI Core) interrupt routine, which
-- * regains ownership of the SCSI command (de facto) from a LLDD, and enqueues
-- * the command to the done queue for further processing.
-+ * Description: This function is the mid-level's (SCSI Core) interrupt routine,
-+ * which regains ownership of the SCSI command (de facto) from a LLDD, and
-+ * enqueues the command to the done queue for further processing.
- *
- * This is the producer of the done queue who enqueues at the tail.
- *
-@@ -617,7 +675,7 @@ EXPORT_SYMBOL(scsi_req_abort_cmd);
- static void scsi_done(struct scsi_cmnd *cmd)
+-static ssize_t
+-ecryptfs_attr_store(struct kobject *kobj,
+- struct attribute *attr, const char *buf, size_t len)
++static ssize_t version_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buff)
{
- /*
-- * We don't have to worry about this one timing out any more.
-+ * We don't have to worry about this one timing out anymore.
- * If we are unable to remove the timer, then the command
- * has already timed out. In which case, we have no choice but to
- * let the timeout function run, as we have no idea where in fact
-@@ -660,10 +718,11 @@ static struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
- return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
+- struct ecryptfs_obj *obj = container_of(kobj, struct ecryptfs_obj,
+- kobj);
+- struct ecryptfs_attribute *attribute =
+- container_of(attr, struct ecryptfs_attribute, attr);
+-
+- return (attribute->store ? attribute->store(obj, buf, len) : 0);
++ return snprintf(buff, PAGE_SIZE, "%d\n", ECRYPTFS_VERSIONING_MASK);
}
--/*
-- * Function: scsi_finish_command
-+/**
-+ * scsi_finish_command - cleanup and pass command back to upper layer
-+ * @cmd: the command
- *
-- * Purpose: Pass command off to upper layer for finishing of I/O
-+ * Description: Pass command off to upper layer for finishing of I/O
- * request, waking processes that are waiting on results,
- * etc.
- */
-@@ -708,18 +767,14 @@ void scsi_finish_command(struct scsi_cmnd *cmd)
- }
- EXPORT_SYMBOL(scsi_finish_command);
+-static ssize_t
+-ecryptfs_attr_show(struct kobject *kobj, struct attribute *attr, char *buf)
+-{
+- struct ecryptfs_obj *obj = container_of(kobj, struct ecryptfs_obj,
+- kobj);
+- struct ecryptfs_attribute *attribute =
+- container_of(attr, struct ecryptfs_attribute, attr);
+-
+- return (attribute->show ? attribute->show(obj, buf) : 0);
+-}
++static struct kobj_attribute version_attr = __ATTR_RO(version);
--/*
-- * Function: scsi_adjust_queue_depth()
-- *
-- * Purpose: Allow low level drivers to tell us to change the queue depth
-- * on a specific SCSI device
-- *
-- * Arguments: sdev - SCSI Device in question
-- * tagged - Do we use tagged queueing (non-0) or do we treat
-- * this device as an untagged device (0)
-- * tags - Number of tags allowed if tagged queueing enabled,
-- * or number of commands the low level driver can
-- * queue up in non-tagged mode (as per cmd_per_lun).
-+/**
-+ * scsi_adjust_queue_depth - Let low level drivers change a device's queue depth
-+ * @sdev: SCSI Device in question
-+ * @tagged: Do we use tagged queueing (non-0) or do we treat
-+ * this device as an untagged device (0)
-+ * @tags: Number of tags allowed if tagged queueing enabled,
-+ * or number of commands the low level driver can
-+ * queue up in non-tagged mode (as per cmd_per_lun).
- *
- * Returns: Nothing
- *
-@@ -742,8 +797,8 @@ void scsi_adjust_queue_depth(struct scsi_device *sdev, int tagged, int tags)
+-static struct sysfs_ops ecryptfs_sysfs_ops = {
+- .show = ecryptfs_attr_show,
+- .store = ecryptfs_attr_store
++static struct attribute *attributes[] = {
++ &version_attr.attr,
++ NULL,
+ };
- spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
+-static struct kobj_type ecryptfs_ktype = {
+- .sysfs_ops = &ecryptfs_sysfs_ops
++static struct attribute_group attr_group = {
++ .attrs = attributes,
+ };
-- /* Check to see if the queue is managed by the block layer
-- * if it is, and we fail to adjust the depth, exit */
-+ /* Check to see if the queue is managed by the block layer.
-+ * If it is, and we fail to adjust the depth, exit. */
- if (blk_queue_tagged(sdev->request_queue) &&
- blk_queue_resize_tags(sdev->request_queue, tags) != 0)
+-static decl_subsys(ecryptfs, &ecryptfs_ktype, NULL);
+-
+-static ssize_t version_show(struct ecryptfs_obj *obj, char *buff)
+-{
+- return snprintf(buff, PAGE_SIZE, "%d\n", ECRYPTFS_VERSIONING_MASK);
+-}
+-
+-static struct ecryptfs_attribute sysfs_attr_version = __ATTR_RO(version);
+-
+-static struct ecryptfs_version_str_map_elem {
+- u32 flag;
+- char *str;
+-} ecryptfs_version_str_map[] = {
+- {ECRYPTFS_VERSIONING_PASSPHRASE, "passphrase"},
+- {ECRYPTFS_VERSIONING_PUBKEY, "pubkey"},
+- {ECRYPTFS_VERSIONING_PLAINTEXT_PASSTHROUGH, "plaintext passthrough"},
+- {ECRYPTFS_VERSIONING_POLICY, "policy"},
+- {ECRYPTFS_VERSIONING_XATTR, "metadata in extended attribute"},
+- {ECRYPTFS_VERSIONING_MULTKEY, "multiple keys per file"}
+-};
+-
+-static ssize_t version_str_show(struct ecryptfs_obj *obj, char *buff)
+-{
+- int i;
+- int remaining = PAGE_SIZE;
+- int total_written = 0;
+-
+- buff[0] = '\0';
+- for (i = 0; i < ARRAY_SIZE(ecryptfs_version_str_map); i++) {
+- int entry_size;
+-
+- if (!(ECRYPTFS_VERSIONING_MASK
+- & ecryptfs_version_str_map[i].flag))
+- continue;
+- entry_size = strlen(ecryptfs_version_str_map[i].str);
+- if ((entry_size + 2) > remaining)
+- goto out;
+- memcpy(buff, ecryptfs_version_str_map[i].str, entry_size);
+- buff[entry_size++] = '\n';
+- buff[entry_size] = '\0';
+- buff += entry_size;
+- total_written += entry_size;
+- remaining -= entry_size;
+- }
+-out:
+- return total_written;
+-}
+-
+-static struct ecryptfs_attribute sysfs_attr_version_str = __ATTR_RO(version_str);
+-
+ static int do_sysfs_registration(void)
+ {
+ int rc;
+
+- rc = subsystem_register(&ecryptfs_subsys);
+- if (rc) {
+- printk(KERN_ERR
+- "Unable to register ecryptfs sysfs subsystem\n");
+- goto out;
+- }
+- rc = sysfs_create_file(&ecryptfs_subsys.kobj,
+- &sysfs_attr_version.attr);
+- if (rc) {
+- printk(KERN_ERR
+- "Unable to create ecryptfs version attribute\n");
+- subsystem_unregister(&ecryptfs_subsys);
++ ecryptfs_kobj = kobject_create_and_add("ecryptfs", fs_kobj);
++ if (!ecryptfs_kobj) {
++ printk(KERN_ERR "Unable to create ecryptfs kset\n");
++ rc = -ENOMEM;
goto out;
-@@ -772,20 +827,17 @@ void scsi_adjust_queue_depth(struct scsi_device *sdev, int tagged, int tags)
- }
- EXPORT_SYMBOL(scsi_adjust_queue_depth);
+ }
+- rc = sysfs_create_file(&ecryptfs_subsys.kobj,
+- &sysfs_attr_version_str.attr);
++ rc = sysfs_create_group(ecryptfs_kobj, &attr_group);
+ if (rc) {
+ printk(KERN_ERR
+- "Unable to create ecryptfs version_str attribute\n");
+- sysfs_remove_file(&ecryptfs_subsys.kobj,
+- &sysfs_attr_version.attr);
+- subsystem_unregister(&ecryptfs_subsys);
+- goto out;
++ "Unable to create ecryptfs version attributes\n");
++ kobject_put(ecryptfs_kobj);
+ }
+ out:
+ return rc;
+@@ -862,11 +775,8 @@ out:
--/*
-- * Function: scsi_track_queue_full()
-+/**
-+ * scsi_track_queue_full - track QUEUE_FULL events to adjust queue depth
-+ * @sdev: SCSI Device in question
-+ * @depth: Current number of outstanding SCSI commands on this device,
-+ * not counting the one returned as QUEUE_FULL.
- *
-- * Purpose: This function will track successive QUEUE_FULL events on a
-+ * Description: This function will track successive QUEUE_FULL events on a
- * specific SCSI device to determine if and when there is a
- * need to adjust the queue depth on the device.
- *
-- * Arguments: sdev - SCSI Device in question
-- * depth - Current number of outstanding SCSI commands on
-- * this device, not counting the one returned as
-- * QUEUE_FULL.
-- *
-- * Returns: 0 - No change needed
-- * >0 - Adjust queue depth to this new depth
-+ * Returns: 0 - No change needed, >0 - Adjust queue depth to this new depth,
- * -1 - Drop back to untagged operation using host->cmd_per_lun
- * as the untagged command depth
- *
-@@ -824,10 +876,10 @@ int scsi_track_queue_full(struct scsi_device *sdev, int depth)
- EXPORT_SYMBOL(scsi_track_queue_full);
+ static void do_sysfs_unregistration(void)
+ {
+- sysfs_remove_file(&ecryptfs_subsys.kobj,
+- &sysfs_attr_version.attr);
+- sysfs_remove_file(&ecryptfs_subsys.kobj,
+- &sysfs_attr_version_str.attr);
+- subsystem_unregister(&ecryptfs_subsys);
++ sysfs_remove_group(ecryptfs_kobj, &attr_group);
++ kobject_put(ecryptfs_kobj);
+ }
- /**
-- * scsi_device_get - get an addition reference to a scsi_device
-+ * scsi_device_get - get an additional reference to a scsi_device
- * @sdev: device to get a reference to
- *
-- * Gets a reference to the scsi_device and increments the use count
-+ * Description: Gets a reference to the scsi_device and increments the use count
- * of the underlying LLDD module. You must hold host_lock of the
- * parent Scsi_Host or already have a reference when calling this.
- */
-@@ -849,8 +901,8 @@ EXPORT_SYMBOL(scsi_device_get);
- * scsi_device_put - release a reference to a scsi_device
- * @sdev: device to release a reference on.
- *
-- * Release a reference to the scsi_device and decrements the use count
-- * of the underlying LLDD module. The device is freed once the last
-+ * Description: Release a reference to the scsi_device and decrements the use
-+ * count of the underlying LLDD module. The device is freed once the last
- * user vanishes.
+ static int __init ecryptfs_init(void)
+@@ -894,7 +804,6 @@ static int __init ecryptfs_init(void)
+ printk(KERN_ERR "Failed to register filesystem\n");
+ goto out_free_kmem_caches;
+ }
+- kobj_set_kset_s(&ecryptfs_subsys, fs_subsys);
+ rc = do_sysfs_registration();
+ if (rc) {
+ printk(KERN_ERR "sysfs registration failed\n");
+diff --git a/fs/ecryptfs/netlink.c b/fs/ecryptfs/netlink.c
+index 9aa3451..f638a69 100644
+--- a/fs/ecryptfs/netlink.c
++++ b/fs/ecryptfs/netlink.c
+@@ -237,7 +237,6 @@ out:
*/
- void scsi_device_put(struct scsi_device *sdev)
-@@ -867,7 +919,7 @@ void scsi_device_put(struct scsi_device *sdev)
+ void ecryptfs_release_netlink(void)
+ {
+- if (ecryptfs_nl_sock && ecryptfs_nl_sock->sk_socket)
+- sock_release(ecryptfs_nl_sock->sk_socket);
++ netlink_kernel_release(ecryptfs_nl_sock);
+ ecryptfs_nl_sock = NULL;
}
- EXPORT_SYMBOL(scsi_device_put);
-
--/* helper for shost_for_each_device, thus not documented */
-+/* helper for shost_for_each_device, see that for documentation */
- struct scsi_device *__scsi_iterate_devices(struct Scsi_Host *shost,
- struct scsi_device *prev)
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 154e25f..6abaf75 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -680,11 +680,31 @@ static int ext2_check_descriptors (struct super_block * sb)
+ static loff_t ext2_max_size(int bits)
{
-@@ -895,6 +947,8 @@ EXPORT_SYMBOL(__scsi_iterate_devices);
- /**
- * starget_for_each_device - helper to walk all devices of a target
- * @starget: target whose devices we want to iterate over.
-+ * @data: Opaque passed to each function call.
-+ * @fn: Function to call on each device
- *
- * This traverses over each device of @starget. The devices have
- * a reference that must be released by scsi_host_put when breaking
-@@ -946,13 +1000,13 @@ EXPORT_SYMBOL(__starget_for_each_device);
- * @starget: SCSI target pointer
- * @lun: SCSI Logical Unit Number
- *
-- * Looks up the scsi_device with the specified @lun for a give
-- * @starget. The returned scsi_device does not have an additional
-+ * Description: Looks up the scsi_device with the specified @lun for a given
-+ * @starget. The returned scsi_device does not have an additional
- * reference. You must hold the host's host_lock over this call and
- * any access to the returned scsi_device.
- *
-- * Note: The only reason why drivers would want to use this is because
-- * they're need to access the device list in irq context. Otherwise you
-+ * Note: The only reason why drivers should use this is because
-+ * they need to access the device list in irq context. Otherwise you
- * really want to use scsi_device_lookup_by_target instead.
- **/
- struct scsi_device *__scsi_device_lookup_by_target(struct scsi_target *starget,
-@@ -974,9 +1028,9 @@ EXPORT_SYMBOL(__scsi_device_lookup_by_target);
- * @starget: SCSI target pointer
- * @lun: SCSI Logical Unit Number
- *
-- * Looks up the scsi_device with the specified @channel, @id, @lun for a
-- * give host. The returned scsi_device has an additional reference that
-- * needs to be release with scsi_host_put once you're done with it.
-+ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
-+ * for a given host. The returned scsi_device has an additional reference that
-+ * needs to be released with scsi_device_put once you're done with it.
- **/
- struct scsi_device *scsi_device_lookup_by_target(struct scsi_target *starget,
- uint lun)
-@@ -996,19 +1050,19 @@ struct scsi_device *scsi_device_lookup_by_target(struct scsi_target *starget,
- EXPORT_SYMBOL(scsi_device_lookup_by_target);
-
- /**
-- * scsi_device_lookup - find a device given the host (UNLOCKED)
-+ * __scsi_device_lookup - find a device given the host (UNLOCKED)
- * @shost: SCSI host pointer
- * @channel: SCSI channel (zero if only one channel)
-- * @pun: SCSI target number (physical unit number)
-+ * @id: SCSI target number (physical unit number)
- * @lun: SCSI Logical Unit Number
- *
-- * Looks up the scsi_device with the specified @channel, @id, @lun for a
-- * give host. The returned scsi_device does not have an additional reference.
-- * You must hold the host's host_lock over this call and any access to the
-- * returned scsi_device.
-+ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
-+ * for a given host. The returned scsi_device does not have an additional
-+ * reference. You must hold the host's host_lock over this call and any access
-+ * to the returned scsi_device.
- *
- * Note: The only reason why drivers would want to use this is because
-- * they're need to access the device list in irq context. Otherwise you
-+ * they need to access the device list in irq context. Otherwise you
- * really want to use scsi_device_lookup instead.
- **/
- struct scsi_device *__scsi_device_lookup(struct Scsi_Host *shost,
-@@ -1033,9 +1087,9 @@ EXPORT_SYMBOL(__scsi_device_lookup);
- * @id: SCSI target number (physical unit number)
- * @lun: SCSI Logical Unit Number
- *
-- * Looks up the scsi_device with the specified @channel, @id, @lun for a
-- * give host. The returned scsi_device has an additional reference that
-- * needs to be release with scsi_host_put once you're done with it.
-+ * Description: Looks up the scsi_device with the specified @channel, @id, @lun
-+ * for a given host. The returned scsi_device has an additional reference that
-+ * needs to be released with scsi_device_put once you're done with it.
- **/
- struct scsi_device *scsi_device_lookup(struct Scsi_Host *shost,
- uint channel, uint id, uint lun)
-diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
-index 46cae5a..82c06f0 100644
---- a/drivers/scsi/scsi_debug.c
-+++ b/drivers/scsi/scsi_debug.c
-@@ -329,7 +329,7 @@ int scsi_debug_queuecommand(struct scsi_cmnd * SCpnt, done_funct_t done)
- if (done == NULL)
- return 0; /* assume mid level reprocessing command */
-
-- SCpnt->resid = 0;
-+ scsi_set_resid(SCpnt, 0);
- if ((SCSI_DEBUG_OPT_NOISE & scsi_debug_opts) && cmd) {
- printk(KERN_INFO "scsi_debug: cmd ");
- for (k = 0, len = SCpnt->cmd_len; k < len; ++k)
-@@ -603,26 +603,16 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
- void * kaddr_off;
- struct scatterlist * sg;
+ loff_t res = EXT2_NDIR_BLOCKS;
+- /* This constant is calculated to be the largest file size for a
+- * dense, 4k-blocksize file such that the total number of
++ int meta_blocks;
++ loff_t upper_limit;
++
++ /* This is calculated to be the largest file size for a
++ * dense, file such that the total number of
+ * sectors in the file, including data and all indirect blocks,
+- * does not exceed 2^32. */
+- const loff_t upper_limit = 0x1ff7fffd000LL;
++ * does not exceed 2^32 -1
++ * __u32 i_blocks representing the total number of
++ * 512 bytes blocks of the file
++ */
++ upper_limit = (1LL << 32) - 1;
++
++ /* total blocks in file system block size */
++ upper_limit >>= (bits - 9);
++
++
++ /* indirect blocks */
++ meta_blocks = 1;
++ /* double indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2));
++ /* tripple indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
++
++ upper_limit -= meta_blocks;
++ upper_limit <<= bits;
-- if (0 == scp->request_bufflen)
-+ if (0 == scsi_bufflen(scp))
- return 0;
-- if (NULL == scp->request_buffer)
-+ if (NULL == scsi_sglist(scp))
- return (DID_ERROR << 16);
- if (! ((scp->sc_data_direction == DMA_BIDIRECTIONAL) ||
- (scp->sc_data_direction == DMA_FROM_DEVICE)))
- return (DID_ERROR << 16);
-- if (0 == scp->use_sg) {
-- req_len = scp->request_bufflen;
-- act_len = (req_len < arr_len) ? req_len : arr_len;
-- memcpy(scp->request_buffer, arr, act_len);
-- if (scp->resid)
-- scp->resid -= act_len;
-- else
-- scp->resid = req_len - act_len;
-- return 0;
-- }
- active = 1;
- req_len = act_len = 0;
-- scsi_for_each_sg(scp, sg, scp->use_sg, k) {
-+ scsi_for_each_sg(scp, sg, scsi_sg_count(scp), k) {
- if (active) {
- kaddr = (unsigned char *)
- kmap_atomic(sg_page(sg), KM_USER0);
-@@ -640,10 +630,10 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
- }
- req_len += sg->length;
- }
-- if (scp->resid)
-- scp->resid -= act_len;
-+ if (scsi_get_resid(scp))
-+ scsi_set_resid(scp, scsi_get_resid(scp) - act_len);
- else
-- scp->resid = req_len - act_len;
-+ scsi_set_resid(scp, req_len - act_len);
- return 0;
+ res += 1LL << (bits-2);
+ res += 1LL << (2*(bits-2));
+@@ -692,6 +712,10 @@ static loff_t ext2_max_size(int bits)
+ res <<= bits;
+ if (res > upper_limit)
+ res = upper_limit;
++
++ if (res > MAX_LFS_FILESIZE)
++ res = MAX_LFS_FILESIZE;
++
+ return res;
}
-@@ -656,22 +646,15 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
- void * kaddr_off;
- struct scatterlist * sg;
-
-- if (0 == scp->request_bufflen)
-+ if (0 == scsi_bufflen(scp))
- return 0;
-- if (NULL == scp->request_buffer)
-+ if (NULL == scsi_sglist(scp))
- return -1;
- if (! ((scp->sc_data_direction == DMA_BIDIRECTIONAL) ||
- (scp->sc_data_direction == DMA_TO_DEVICE)))
- return -1;
-- if (0 == scp->use_sg) {
-- req_len = scp->request_bufflen;
-- len = (req_len < max_arr_len) ? req_len : max_arr_len;
-- memcpy(arr, scp->request_buffer, len);
-- return len;
-- }
-- sg = scsi_sglist(scp);
- req_len = fin = 0;
-- for (k = 0; k < scp->use_sg; ++k, sg = sg_next(sg)) {
-+ scsi_for_each_sg(scp, sg, scsi_sg_count(scp), k) {
- kaddr = (unsigned char *)kmap_atomic(sg_page(sg), KM_USER0);
- if (NULL == kaddr)
- return -1;
-diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
-index 348cc5a..b8de041 100644
---- a/drivers/scsi/scsi_devinfo.c
-+++ b/drivers/scsi/scsi_devinfo.c
-@@ -276,11 +276,12 @@ static void scsi_strcpy_devinfo(char *name, char *to, size_t to_length,
- }
+diff --git a/fs/ext3/super.c b/fs/ext3/super.c
+index cb14de1..f3675cc 100644
+--- a/fs/ext3/super.c
++++ b/fs/ext3/super.c
+@@ -1436,11 +1436,31 @@ static void ext3_orphan_cleanup (struct super_block * sb,
+ static loff_t ext3_max_size(int bits)
+ {
+ loff_t res = EXT3_NDIR_BLOCKS;
+- /* This constant is calculated to be the largest file size for a
+- * dense, 4k-blocksize file such that the total number of
++ int meta_blocks;
++ loff_t upper_limit;
++
++ /* This is calculated to be the largest file size for a
++ * dense, file such that the total number of
+ * sectors in the file, including data and all indirect blocks,
+- * does not exceed 2^32. */
+- const loff_t upper_limit = 0x1ff7fffd000LL;
++ * does not exceed 2^32 -1
++ * __u32 i_blocks representing the total number of
++ * 512 bytes blocks of the file
++ */
++ upper_limit = (1LL << 32) - 1;
++
++ /* total blocks in file system block size */
++ upper_limit >>= (bits - 9);
++
++
++ /* indirect blocks */
++ meta_blocks = 1;
++ /* double indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2));
++ /* tripple indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
++
++ upper_limit -= meta_blocks;
++ upper_limit <<= bits;
- /**
-- * scsi_dev_info_list_add: add one dev_info list entry.
-+ * scsi_dev_info_list_add - add one dev_info list entry.
-+ * @compatible: if true, null terminate short strings. Otherwise space pad.
- * @vendor: vendor string
- * @model: model (product) string
- * @strflags: integer string
-- * @flag: if strflags NULL, use this flag value
-+ * @flags: if strflags NULL, use this flag value
- *
- * Description:
- * Create and add one dev_info entry for @vendor, @model, @strflags or
-@@ -322,8 +323,7 @@ static int scsi_dev_info_list_add(int compatible, char *vendor, char *model,
+ res += 1LL << (bits-2);
+ res += 1LL << (2*(bits-2));
+@@ -1448,6 +1468,10 @@ static loff_t ext3_max_size(int bits)
+ res <<= bits;
+ if (res > upper_limit)
+ res = upper_limit;
++
++ if (res > MAX_LFS_FILESIZE)
++ res = MAX_LFS_FILESIZE;
++
+ return res;
}
- /**
-- * scsi_dev_info_list_add_str: parse dev_list and add to the
-- * scsi_dev_info_list.
-+ * scsi_dev_info_list_add_str - parse dev_list and add to the scsi_dev_info_list.
- * @dev_list: string of device flags to add
- *
- * Description:
-@@ -374,15 +374,15 @@ static int scsi_dev_info_list_add_str(char *dev_list)
- }
+diff --git a/fs/ext4/Makefile b/fs/ext4/Makefile
+index ae6e7e5..ac6fa8c 100644
+--- a/fs/ext4/Makefile
++++ b/fs/ext4/Makefile
+@@ -6,7 +6,7 @@ obj-$(CONFIG_EXT4DEV_FS) += ext4dev.o
- /**
-- * get_device_flags - get device specific flags from the dynamic device
-- * list. Called during scan time.
-+ * get_device_flags - get device specific flags from the dynamic device list.
-+ * @sdev: &scsi_device to get flags for
- * @vendor: vendor name
- * @model: model name
- *
- * Description:
- * Search the scsi_dev_info_list for an entry matching @vendor and
- * @model, if found, return the matching flags value, else return
-- * the host or global default settings.
-+ * the host or global default settings. Called during scan time.
- **/
- int scsi_get_device_flags(struct scsi_device *sdev,
- const unsigned char *vendor,
-@@ -483,13 +483,11 @@ stop_output:
- }
+ ext4dev-y := balloc.o bitmap.o dir.o file.o fsync.o ialloc.o inode.o \
+ ioctl.o namei.o super.o symlink.o hash.o resize.o extents.o \
+- ext4_jbd2.o
++ ext4_jbd2.o migrate.o mballoc.o
- /*
-- * proc_scsi_dev_info_write: allow additions to the scsi_dev_info_list via
-- * /proc.
-+ * proc_scsi_dev_info_write - allow additions to scsi_dev_info_list via /proc.
- *
-- * Use: echo "vendor:model:flag" > /proc/scsi/device_info
-- *
-- * To add a black/white list entry for vendor and model with an integer
-- * value of flag to the scsi device info list.
-+ * Description: Adds a black/white list entry for vendor and model with an
-+ * integer value of flag to the scsi device info list.
-+ * To use, echo "vendor:model:flag" > /proc/scsi/device_info
+ ext4dev-$(CONFIG_EXT4DEV_FS_XATTR) += xattr.o xattr_user.o xattr_trusted.o
+ ext4dev-$(CONFIG_EXT4DEV_FS_POSIX_ACL) += acl.o
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 71ee95e..ac75ea9 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -29,7 +29,7 @@
+ * Calculate the block group number and offset, given a block number
*/
- static int proc_scsi_devinfo_write(struct file *file, const char __user *buf,
- unsigned long length, void *data)
-@@ -532,8 +530,7 @@ MODULE_PARM_DESC(default_dev_flags,
- "scsi default device flag integer value");
-
- /**
-- * scsi_dev_info_list_delete: called from scsi.c:exit_scsi to remove
-- * the scsi_dev_info_list.
-+ * scsi_dev_info_list_delete - called from scsi.c:exit_scsi to remove the scsi_dev_info_list.
- **/
- void scsi_exit_devinfo(void)
+ void ext4_get_group_no_and_offset(struct super_block *sb, ext4_fsblk_t blocknr,
+- unsigned long *blockgrpp, ext4_grpblk_t *offsetp)
++ ext4_group_t *blockgrpp, ext4_grpblk_t *offsetp)
{
-@@ -552,13 +549,12 @@ void scsi_exit_devinfo(void)
+ struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ ext4_grpblk_t offset;
+@@ -46,7 +46,7 @@ void ext4_get_group_no_and_offset(struct super_block *sb, ext4_fsblk_t blocknr,
+ /* Initializes an uninitialized block bitmap if given, and returns the
+ * number of blocks free in the group. */
+ unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
+- int block_group, struct ext4_group_desc *gdp)
++ ext4_group_t block_group, struct ext4_group_desc *gdp)
+ {
+ unsigned long start;
+ int bit, bit_max;
+@@ -60,7 +60,7 @@ unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
+ * essentially implementing a per-group read-only flag. */
+ if (!ext4_group_desc_csum_verify(sbi, block_group, gdp)) {
+ ext4_error(sb, __FUNCTION__,
+- "Checksum bad for group %u\n", block_group);
++ "Checksum bad for group %lu\n", block_group);
+ gdp->bg_free_blocks_count = 0;
+ gdp->bg_free_inodes_count = 0;
+ gdp->bg_itable_unused = 0;
+@@ -153,7 +153,7 @@ unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
+ * group descriptor
+ */
+ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+- unsigned int block_group,
++ ext4_group_t block_group,
+ struct buffer_head ** bh)
+ {
+ unsigned long group_desc;
+@@ -164,7 +164,7 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+ if (block_group >= sbi->s_groups_count) {
+ ext4_error (sb, "ext4_get_group_desc",
+ "block_group >= groups_count - "
+- "block_group = %d, groups_count = %lu",
++ "block_group = %lu, groups_count = %lu",
+ block_group, sbi->s_groups_count);
+
+ return NULL;
+@@ -176,7 +176,7 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+ if (!sbi->s_group_desc[group_desc]) {
+ ext4_error (sb, "ext4_get_group_desc",
+ "Group descriptor not loaded - "
+- "block_group = %d, group_desc = %lu, desc = %lu",
++ "block_group = %lu, group_desc = %lu, desc = %lu",
+ block_group, group_desc, offset);
+ return NULL;
+ }
+@@ -189,18 +189,70 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
+ return desc;
}
++static int ext4_valid_block_bitmap(struct super_block *sb,
++ struct ext4_group_desc *desc,
++ unsigned int block_group,
++ struct buffer_head *bh)
++{
++ ext4_grpblk_t offset;
++ ext4_grpblk_t next_zero_bit;
++ ext4_fsblk_t bitmap_blk;
++ ext4_fsblk_t group_first_block;
++
++ if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) {
++ /* with FLEX_BG, the inode/block bitmaps and itable
++ * blocks may not be in the group at all
++ * so the bitmap validation will be skipped for those groups
++ * or it has to also read the block group where the bitmaps
++ * are located to verify they are set.
++ */
++ return 1;
++ }
++ group_first_block = ext4_group_first_block_no(sb, block_group);
++
++ /* check whether block bitmap block number is set */
++ bitmap_blk = ext4_block_bitmap(sb, desc);
++ offset = bitmap_blk - group_first_block;
++ if (!ext4_test_bit(offset, bh->b_data))
++ /* bad block bitmap */
++ goto err_out;
++
++ /* check whether the inode bitmap block number is set */
++ bitmap_blk = ext4_inode_bitmap(sb, desc);
++ offset = bitmap_blk - group_first_block;
++ if (!ext4_test_bit(offset, bh->b_data))
++ /* bad block bitmap */
++ goto err_out;
++
++ /* check whether the inode table block number is set */
++ bitmap_blk = ext4_inode_table(sb, desc);
++ offset = bitmap_blk - group_first_block;
++ next_zero_bit = ext4_find_next_zero_bit(bh->b_data,
++ offset + EXT4_SB(sb)->s_itb_per_group,
++ offset);
++ if (next_zero_bit >= offset + EXT4_SB(sb)->s_itb_per_group)
++ /* good bitmap for inode tables */
++ return 1;
++
++err_out:
++ ext4_error(sb, __FUNCTION__,
++ "Invalid block bitmap - "
++ "block_group = %d, block = %llu",
++ block_group, bitmap_blk);
++ return 0;
++}
/**
-- * scsi_dev_list_init: set up the dynamic device list.
-- * @dev_list: string of device flags to add
-+ * scsi_init_devinfo - set up the dynamic device list.
- *
- * Description:
-- * Add command line @dev_list entries, then add
-+ * Add command line entries from scsi_dev_flags, then add
- * scsi_static_device_list entries to the scsi device info list.
-- **/
-+ */
- int __init scsi_init_devinfo(void)
- {
- #ifdef CONFIG_SCSI_PROC_FS
-diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
-index ebaca4c..547e85a 100644
---- a/drivers/scsi/scsi_error.c
-+++ b/drivers/scsi/scsi_error.c
-@@ -62,7 +62,7 @@ void scsi_eh_wakeup(struct Scsi_Host *shost)
- * @shost: SCSI host to invoke error handling on.
+ * read_block_bitmap()
+ * @sb: super block
+ * @block_group: given block group
*
- * Schedule SCSI EH without scmd.
-- **/
-+ */
- void scsi_schedule_eh(struct Scsi_Host *shost)
- {
- unsigned long flags;
-@@ -86,7 +86,7 @@ EXPORT_SYMBOL_GPL(scsi_schedule_eh);
+- * Read the bitmap for a given block_group, reading into the specified
+- * slot in the superblock's bitmap cache.
++ * Read the bitmap for a given block_group,and validate the
++ * bits for block/inode/inode tables are set in the bitmaps
*
- * Return value:
- * 0 on failure.
-- **/
-+ */
- int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag)
- {
- struct Scsi_Host *shost = scmd->device->host;
-@@ -121,7 +121,7 @@ int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag)
- * This should be turned into an inline function. Each scsi command
- * has its own timer, and as it is added to the queue, we set up the
- * timer. When the command completes, we cancel the timer.
-- **/
-+ */
- void scsi_add_timer(struct scsi_cmnd *scmd, int timeout,
- void (*complete)(struct scsi_cmnd *))
- {
-@@ -155,7 +155,7 @@ void scsi_add_timer(struct scsi_cmnd *scmd, int timeout,
- * Return value:
- * 1 if we were able to detach the timer. 0 if we blew it, and the
- * timer function has already started to run.
-- **/
-+ */
- int scsi_delete_timer(struct scsi_cmnd *scmd)
+ * Return buffer_head on success or NULL in case of failure.
+ */
+ struct buffer_head *
+-read_block_bitmap(struct super_block *sb, unsigned int block_group)
++read_block_bitmap(struct super_block *sb, ext4_group_t block_group)
{
- int rtn;
-@@ -181,7 +181,7 @@ int scsi_delete_timer(struct scsi_cmnd *scmd)
- * only in that the normal completion handling might run, but if the
- * normal completion function determines that the timer has already
- * fired, then it mustn't do anything.
-- **/
-+ */
- void scsi_times_out(struct scsi_cmnd *scmd)
+ struct ext4_group_desc * desc;
+ struct buffer_head * bh = NULL;
+@@ -210,25 +262,36 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
+ if (!desc)
+ return NULL;
+ bitmap_blk = ext4_block_bitmap(sb, desc);
++ bh = sb_getblk(sb, bitmap_blk);
++ if (unlikely(!bh)) {
++ ext4_error(sb, __FUNCTION__,
++ "Cannot read block bitmap - "
++ "block_group = %d, block_bitmap = %llu",
++ (int)block_group, (unsigned long long)bitmap_blk);
++ return NULL;
++ }
++ if (bh_uptodate_or_lock(bh))
++ return bh;
++
+ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
+- bh = sb_getblk(sb, bitmap_blk);
+- if (!buffer_uptodate(bh)) {
+- lock_buffer(bh);
+- if (!buffer_uptodate(bh)) {
+- ext4_init_block_bitmap(sb, bh, block_group,
+- desc);
+- set_buffer_uptodate(bh);
+- }
+- unlock_buffer(bh);
+- }
+- } else {
+- bh = sb_bread(sb, bitmap_blk);
++ ext4_init_block_bitmap(sb, bh, block_group, desc);
++ set_buffer_uptodate(bh);
++ unlock_buffer(bh);
++ return bh;
+ }
+- if (!bh)
+- ext4_error (sb, __FUNCTION__,
++ if (bh_submit_read(bh) < 0) {
++ put_bh(bh);
++ ext4_error(sb, __FUNCTION__,
+ "Cannot read block bitmap - "
+ "block_group = %d, block_bitmap = %llu",
+- block_group, bitmap_blk);
++ (int)block_group, (unsigned long long)bitmap_blk);
++ return NULL;
++ }
++ if (!ext4_valid_block_bitmap(sb, desc, block_group, bh)) {
++ put_bh(bh);
++ return NULL;
++ }
++
+ return bh;
+ }
+ /*
+@@ -320,7 +383,7 @@ restart:
+ */
+ static int
+ goal_in_my_reservation(struct ext4_reserve_window *rsv, ext4_grpblk_t grp_goal,
+- unsigned int group, struct super_block * sb)
++ ext4_group_t group, struct super_block *sb)
{
- enum scsi_eh_timer_return (* eh_timed_out)(struct scsi_cmnd *);
-@@ -224,7 +224,7 @@ void scsi_times_out(struct scsi_cmnd *scmd)
+ ext4_fsblk_t group_first_block, group_last_block;
+
+@@ -463,7 +526,7 @@ static inline int rsv_is_empty(struct ext4_reserve_window *rsv)
+ * when setting the reservation window size through ioctl before the file
+ * is open for write (needs block allocation).
*
- * Return value:
- * 0 when dev was taken offline by error recovery. 1 OK to proceed.
-- **/
-+ */
- int scsi_block_when_processing_errors(struct scsi_device *sdev)
- {
- int online;
-@@ -245,7 +245,7 @@ EXPORT_SYMBOL(scsi_block_when_processing_errors);
- * scsi_eh_prt_fail_stats - Log info on failures.
- * @shost: scsi host being recovered.
- * @work_q: Queue of scsi cmds to process.
-- **/
-+ */
- static inline void scsi_eh_prt_fail_stats(struct Scsi_Host *shost,
- struct list_head *work_q)
- {
-@@ -295,7 +295,7 @@ static inline void scsi_eh_prt_fail_stats(struct Scsi_Host *shost,
- * Notes:
- * When a deferred error is detected the current command has
- * not been executed and needs retrying.
-- **/
-+ */
- static int scsi_check_sense(struct scsi_cmnd *scmd)
+- * Needs truncate_mutex protection prior to call this function.
++ * Needs down_write(i_data_sem) protection prior to call this function.
+ */
+ void ext4_init_block_alloc_info(struct inode *inode)
{
- struct scsi_sense_hdr sshdr;
-@@ -398,7 +398,7 @@ static int scsi_check_sense(struct scsi_cmnd *scmd)
- * queued during error recovery. the main difference here is that we
- * don't allow for the possibility of retries here, and we are a lot
- * more restrictive about what we consider acceptable.
-- **/
-+ */
- static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
+@@ -514,6 +577,8 @@ void ext4_discard_reservation(struct inode *inode)
+ struct ext4_reserve_window_node *rsv;
+ spinlock_t *rsv_lock = &EXT4_SB(inode->i_sb)->s_rsv_window_lock;
+
++ ext4_mb_discard_inode_preallocations(inode);
++
+ if (!block_i)
+ return;
+
+@@ -540,7 +605,7 @@ void ext4_free_blocks_sb(handle_t *handle, struct super_block *sb,
{
+ struct buffer_head *bitmap_bh = NULL;
+ struct buffer_head *gd_bh;
+- unsigned long block_group;
++ ext4_group_t block_group;
+ ext4_grpblk_t bit;
+ unsigned long i;
+ unsigned long overflow;
+@@ -587,11 +652,13 @@ do_more:
+ in_range(ext4_inode_bitmap(sb, desc), block, count) ||
+ in_range(block, ext4_inode_table(sb, desc), sbi->s_itb_per_group) ||
+ in_range(block + count - 1, ext4_inode_table(sb, desc),
+- sbi->s_itb_per_group))
++ sbi->s_itb_per_group)) {
+ ext4_error (sb, "ext4_free_blocks",
+ "Freeing blocks in system zones - "
+ "Block = %llu, count = %lu",
+ block, count);
++ goto error_return;
++ }
+
/*
-@@ -452,7 +452,7 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
- /**
- * scsi_eh_done - Completion function for error handling.
- * @scmd: Cmd that is done.
-- **/
-+ */
- static void scsi_eh_done(struct scsi_cmnd *scmd)
- {
- struct completion *eh_action;
-@@ -469,7 +469,7 @@ static void scsi_eh_done(struct scsi_cmnd *scmd)
- /**
- * scsi_try_host_reset - ask host adapter to reset itself
- * @scmd: SCSI cmd to send hsot reset.
-- **/
-+ */
- static int scsi_try_host_reset(struct scsi_cmnd *scmd)
+ * We are about to start releasing blocks in the bitmap,
+@@ -720,19 +787,29 @@ error_return:
+ * @inode: inode
+ * @block: start physical block to free
+ * @count: number of blocks to count
++ * @metadata: Are these metadata blocks
+ */
+ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+- ext4_fsblk_t block, unsigned long count)
++ ext4_fsblk_t block, unsigned long count,
++ int metadata)
{
- unsigned long flags;
-@@ -498,7 +498,7 @@ static int scsi_try_host_reset(struct scsi_cmnd *scmd)
- /**
- * scsi_try_bus_reset - ask host to perform a bus reset
- * @scmd: SCSI cmd to send bus reset.
-- **/
-+ */
- static int scsi_try_bus_reset(struct scsi_cmnd *scmd)
+ struct super_block * sb;
+ unsigned long dquot_freed_blocks;
+
++ /* this isn't the right place to decide whether block is metadata
++ * inode.c/extents.c knows better, but for safety ... */
++ if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode) ||
++ ext4_should_journal_data(inode))
++ metadata = 1;
++
+ sb = inode->i_sb;
+- if (!sb) {
+- printk ("ext4_free_blocks: nonexistent device");
+- return;
+- }
+- ext4_free_blocks_sb(handle, sb, block, count, &dquot_freed_blocks);
++
++ if (!test_opt(sb, MBALLOC) || !EXT4_SB(sb)->s_group_info)
++ ext4_free_blocks_sb(handle, sb, block, count,
++ &dquot_freed_blocks);
++ else
++ ext4_mb_free_blocks(handle, inode, block, count,
++ metadata, &dquot_freed_blocks);
+ if (dquot_freed_blocks)
+ DQUOT_FREE_BLOCK(inode, dquot_freed_blocks);
+ return;
+@@ -920,9 +997,10 @@ claim_block(spinlock_t *lock, ext4_grpblk_t block, struct buffer_head *bh)
+ * ext4_journal_release_buffer(), else we'll run out of credits.
+ */
+ static ext4_grpblk_t
+-ext4_try_to_allocate(struct super_block *sb, handle_t *handle, int group,
+- struct buffer_head *bitmap_bh, ext4_grpblk_t grp_goal,
+- unsigned long *count, struct ext4_reserve_window *my_rsv)
++ext4_try_to_allocate(struct super_block *sb, handle_t *handle,
++ ext4_group_t group, struct buffer_head *bitmap_bh,
++ ext4_grpblk_t grp_goal, unsigned long *count,
++ struct ext4_reserve_window *my_rsv)
{
- unsigned long flags;
-@@ -533,7 +533,7 @@ static int scsi_try_bus_reset(struct scsi_cmnd *scmd)
- * unreliable for a given host, then the host itself needs to put a
- * timer on it, and set the host back to a consistent state prior to
- * returning.
-- **/
-+ */
- static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
+ ext4_fsblk_t group_first_block;
+ ext4_grpblk_t start, end;
+@@ -1156,7 +1234,7 @@ static int find_next_reservable_window(
+ */
+ static int alloc_new_reservation(struct ext4_reserve_window_node *my_rsv,
+ ext4_grpblk_t grp_goal, struct super_block *sb,
+- unsigned int group, struct buffer_head *bitmap_bh)
++ ext4_group_t group, struct buffer_head *bitmap_bh)
{
- int rtn;
-@@ -568,7 +568,7 @@ static int __scsi_try_to_abort_cmd(struct scsi_cmnd *scmd)
- * author of the low-level driver wishes this operation to be timed,
- * they can provide this facility themselves. helper functions in
- * scsi_error.c can be supplied to make this easier to do.
-- **/
-+ */
- static int scsi_try_to_abort_cmd(struct scsi_cmnd *scmd)
+ struct ext4_reserve_window_node *search_head;
+ ext4_fsblk_t group_first_block, group_end_block, start_block;
+@@ -1354,7 +1432,7 @@ static void try_to_extend_reservation(struct ext4_reserve_window_node *my_rsv,
+ */
+ static ext4_grpblk_t
+ ext4_try_to_allocate_with_rsv(struct super_block *sb, handle_t *handle,
+- unsigned int group, struct buffer_head *bitmap_bh,
++ ext4_group_t group, struct buffer_head *bitmap_bh,
+ ext4_grpblk_t grp_goal,
+ struct ext4_reserve_window_node * my_rsv,
+ unsigned long *count, int *errp)
+@@ -1510,7 +1588,7 @@ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
+ }
+
+ /**
+- * ext4_new_blocks() -- core block(s) allocation function
++ * ext4_new_blocks_old() -- core block(s) allocation function
+ * @handle: handle to this transaction
+ * @inode: file inode
+ * @goal: given target block(filesystem wide)
+@@ -1523,17 +1601,17 @@ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
+ * any specific goal block.
+ *
+ */
+-ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
++ext4_fsblk_t ext4_new_blocks_old(handle_t *handle, struct inode *inode,
+ ext4_fsblk_t goal, unsigned long *count, int *errp)
{
+ struct buffer_head *bitmap_bh = NULL;
+ struct buffer_head *gdp_bh;
+- unsigned long group_no;
+- int goal_group;
++ ext4_group_t group_no;
++ ext4_group_t goal_group;
+ ext4_grpblk_t grp_target_blk; /* blockgroup relative goal block */
+ ext4_grpblk_t grp_alloc_blk; /* blockgroup-relative allocated block*/
+ ext4_fsblk_t ret_block; /* filesyetem-wide allocated block */
+- int bgi; /* blockgroup iteration index */
++ ext4_group_t bgi; /* blockgroup iteration index */
+ int fatal = 0, err;
+ int performed_allocation = 0;
+ ext4_grpblk_t free_blocks; /* number of free blocks in a group */
+@@ -1544,10 +1622,7 @@ ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
+ struct ext4_reserve_window_node *my_rsv = NULL;
+ struct ext4_block_alloc_info *block_i;
+ unsigned short windowsz = 0;
+-#ifdef EXT4FS_DEBUG
+- static int goal_hits, goal_attempts;
+-#endif
+- unsigned long ngroups;
++ ext4_group_t ngroups;
+ unsigned long num = *count;
+
+ *errp = -ENOSPC;
+@@ -1567,7 +1642,7 @@ ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
+
+ sbi = EXT4_SB(sb);
+ es = EXT4_SB(sb)->s_es;
+- ext4_debug("goal=%lu.\n", goal);
++ ext4_debug("goal=%llu.\n", goal);
/*
-@@ -601,7 +601,7 @@ static void scsi_abort_eh_cmnd(struct scsi_cmnd *scmd)
- * sent must be one that does not transfer any data. If @sense_bytes != 0
- * @cmnd is ignored and this functions sets up a REQUEST_SENSE command
- * and cmnd buffers to read @sense_bytes into @scmd->sense_buffer.
-- **/
-+ */
- void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
- unsigned char *cmnd, int cmnd_size, unsigned sense_bytes)
- {
-@@ -625,7 +625,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
+ * Allocate a block from reservation only when
+ * filesystem is mounted with reservation(default,-o reservation), and
+@@ -1677,7 +1752,7 @@ retry_alloc:
- if (sense_bytes) {
- scmd->request_bufflen = min_t(unsigned,
-- sizeof(scmd->sense_buffer), sense_bytes);
-+ SCSI_SENSE_BUFFERSIZE, sense_bytes);
- sg_init_one(&ses->sense_sgl, scmd->sense_buffer,
- scmd->request_bufflen);
- scmd->request_buffer = &ses->sense_sgl;
-@@ -657,7 +657,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
- * Zero the sense buffer. The scsi spec mandates that any
- * untransferred sense data should be interpreted as being zero.
+ allocated:
+
+- ext4_debug("using block group %d(%d)\n",
++ ext4_debug("using block group %lu(%d)\n",
+ group_no, gdp->bg_free_blocks_count);
+
+ BUFFER_TRACE(gdp_bh, "get_write_access");
+@@ -1692,11 +1767,13 @@ allocated:
+ in_range(ret_block, ext4_inode_table(sb, gdp),
+ EXT4_SB(sb)->s_itb_per_group) ||
+ in_range(ret_block + num - 1, ext4_inode_table(sb, gdp),
+- EXT4_SB(sb)->s_itb_per_group))
++ EXT4_SB(sb)->s_itb_per_group)) {
+ ext4_error(sb, "ext4_new_block",
+ "Allocating block in system zone - "
+ "blocks from %llu, length %lu",
+ ret_block, num);
++ goto out;
++ }
+
+ performed_allocation = 1;
+
+@@ -1743,9 +1820,6 @@ allocated:
+ * list of some description. We don't know in advance whether
+ * the caller wants to use it as metadata or data.
*/
-- memset(scmd->sense_buffer, 0, sizeof(scmd->sense_buffer));
-+ memset(scmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+- ext4_debug("allocating block %lu. Goal hits %d of %d.\n",
+- ret_block, goal_hits, goal_attempts);
+-
+ spin_lock(sb_bgl_lock(sbi, group_no));
+ if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))
+ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+@@ -1787,13 +1861,46 @@ out:
}
- EXPORT_SYMBOL(scsi_eh_prep_cmnd);
-@@ -667,7 +667,7 @@ EXPORT_SYMBOL(scsi_eh_prep_cmnd);
- * @ses: saved information from a coresponding call to scsi_prep_eh_cmnd
- *
- * Undo any damage done by above scsi_prep_eh_cmnd().
-- **/
-+ */
- void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)
- {
- /*
-@@ -697,7 +697,7 @@ EXPORT_SYMBOL(scsi_eh_restore_cmnd);
- *
- * Return value:
- * SUCCESS or FAILED or NEEDS_RETRY
-- **/
-+ */
- static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
- int cmnd_size, int timeout, unsigned sense_bytes)
- {
-@@ -765,7 +765,7 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
- * Some hosts automatically obtain this information, others require
- * that we obtain it on our own. This function will *not* return until
- * the command either times out, or it completes.
-- **/
-+ */
- static int scsi_request_sense(struct scsi_cmnd *scmd)
- {
- return scsi_send_eh_cmnd(scmd, NULL, 0, SENSE_TIMEOUT, ~0);
-@@ -779,10 +779,10 @@ static int scsi_request_sense(struct scsi_cmnd *scmd)
- * Notes:
- * We don't want to use the normal command completion while we are are
- * still handling errors - it may cause other commands to be queued,
-- * and that would disturb what we are doing. thus we really want to
-+ * and that would disturb what we are doing. Thus we really want to
- * keep a list of pending commands for final completion, and once we
- * are ready to leave error handling we handle completion for real.
-- **/
-+ */
- void scsi_eh_finish_cmd(struct scsi_cmnd *scmd, struct list_head *done_q)
- {
- scmd->device->host->host_failed--;
-@@ -794,7 +794,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
- /**
- * scsi_eh_get_sense - Get device sense data.
- * @work_q: Queue of commands to process.
-- * @done_q: Queue of proccessed commands..
-+ * @done_q: Queue of processed commands.
- *
- * Description:
- * See if we need to request sense information. if so, then get it
-@@ -802,7 +802,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
- *
- * Notes:
- * This has the unfortunate side effect that if a shost adapter does
-- * not automatically request sense information, that we end up shutting
-+ * not automatically request sense information, we end up shutting
- * it down before we request it.
- *
- * All drivers should request sense information internally these days,
-@@ -810,7 +810,7 @@ EXPORT_SYMBOL(scsi_eh_finish_cmd);
- *
- * XXX: Long term this code should go away, but that needs an audit of
- * all LLDDs first.
-- **/
-+ */
- int scsi_eh_get_sense(struct list_head *work_q,
- struct list_head *done_q)
+ ext4_fsblk_t ext4_new_block(handle_t *handle, struct inode *inode,
+- ext4_fsblk_t goal, int *errp)
++ ext4_fsblk_t goal, int *errp)
{
-@@ -858,11 +858,11 @@ EXPORT_SYMBOL_GPL(scsi_eh_get_sense);
+- unsigned long count = 1;
++ struct ext4_allocation_request ar;
++ ext4_fsblk_t ret;
- /**
- * scsi_eh_tur - Send TUR to device.
-- * @scmd: Scsi cmd to send TUR
-+ * @scmd: &scsi_cmnd to send TUR
- *
- * Return value:
- * 0 - Device is ready. 1 - Device NOT ready.
-- **/
-+ */
- static int scsi_eh_tur(struct scsi_cmnd *scmd)
- {
- static unsigned char tur_command[6] = {TEST_UNIT_READY, 0, 0, 0, 0, 0};
-@@ -887,17 +887,17 @@ retry_tur:
+- return ext4_new_blocks(handle, inode, goal, &count, errp);
++ if (!test_opt(inode->i_sb, MBALLOC)) {
++ unsigned long count = 1;
++ ret = ext4_new_blocks_old(handle, inode, goal, &count, errp);
++ return ret;
++ }
++
++ memset(&ar, 0, sizeof(ar));
++ ar.inode = inode;
++ ar.goal = goal;
++ ar.len = 1;
++ ret = ext4_mb_new_blocks(handle, &ar, errp);
++ return ret;
++}
++
++ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t goal, unsigned long *count, int *errp)
++{
++ struct ext4_allocation_request ar;
++ ext4_fsblk_t ret;
++
++ if (!test_opt(inode->i_sb, MBALLOC)) {
++ ret = ext4_new_blocks_old(handle, inode, goal, count, errp);
++ return ret;
++ }
++
++ memset(&ar, 0, sizeof(ar));
++ ar.inode = inode;
++ ar.goal = goal;
++ ar.len = *count;
++ ret = ext4_mb_new_blocks(handle, &ar, errp);
++ *count = ar.len;
++ return ret;
}
++
/**
-- * scsi_eh_abort_cmds - abort canceled commands.
-- * @shost: scsi host being recovered.
-- * @eh_done_q: list_head for processed commands.
-+ * scsi_eh_abort_cmds - abort pending commands.
-+ * @work_q: &list_head for pending commands.
-+ * @done_q: &list_head for processed commands.
- *
- * Decription:
- * Try and see whether or not it makes sense to try and abort the
-- * running command. this only works out to be the case if we have one
-- * command that has timed out. if the command simply failed, it makes
-+ * running command. This only works out to be the case if we have one
-+ * command that has timed out. If the command simply failed, it makes
- * no sense to try and abort the command, since as far as the shost
- * adapter is concerned, it isn't running.
-- **/
-+ */
- static int scsi_eh_abort_cmds(struct list_head *work_q,
- struct list_head *done_q)
+ * ext4_count_free_blocks() -- count filesystem free blocks
+ * @sb: superblock
+@@ -1804,8 +1911,8 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
{
-@@ -931,11 +931,11 @@ static int scsi_eh_abort_cmds(struct list_head *work_q,
+ ext4_fsblk_t desc_count;
+ struct ext4_group_desc *gdp;
+- int i;
+- unsigned long ngroups = EXT4_SB(sb)->s_groups_count;
++ ext4_group_t i;
++ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
+ #ifdef EXT4FS_DEBUG
+ struct ext4_super_block *es;
+ ext4_fsblk_t bitmap_count;
+@@ -1829,14 +1936,14 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
+ continue;
- /**
- * scsi_eh_try_stu - Send START_UNIT to device.
-- * @scmd: Scsi cmd to send START_UNIT
-+ * @scmd: &scsi_cmnd to send START_UNIT
- *
- * Return value:
- * 0 - Device is ready. 1 - Device NOT ready.
-- **/
-+ */
- static int scsi_eh_try_stu(struct scsi_cmnd *scmd)
+ x = ext4_count_free(bitmap_bh, sb->s_blocksize);
+- printk("group %d: stored = %d, counted = %lu\n",
++ printk(KERN_DEBUG "group %lu: stored = %d, counted = %lu\n",
+ i, le16_to_cpu(gdp->bg_free_blocks_count), x);
+ bitmap_count += x;
+ }
+ brelse(bitmap_bh);
+ printk("ext4_count_free_blocks: stored = %llu"
+ ", computed = %llu, %llu\n",
+- EXT4_FREE_BLOCKS_COUNT(es),
++ ext4_free_blocks_count(es),
+ desc_count, bitmap_count);
+ return bitmap_count;
+ #else
+@@ -1853,7 +1960,7 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
+ #endif
+ }
+
+-static inline int test_root(int a, int b)
++static inline int test_root(ext4_group_t a, int b)
{
- static unsigned char stu_command[6] = {START_STOP, 0, 0, 0, 1, 0};
-@@ -956,13 +956,14 @@ static int scsi_eh_try_stu(struct scsi_cmnd *scmd)
+ int num = b;
- /**
- * scsi_eh_stu - send START_UNIT if needed
-- * @shost: scsi host being recovered.
-- * @eh_done_q: list_head for processed commands.
-+ * @shost: &scsi host being recovered.
-+ * @work_q: &list_head for pending commands.
-+ * @done_q: &list_head for processed commands.
- *
- * Notes:
- * If commands are failing due to not ready, initializing command required,
- * try revalidating the device, which will end up sending a start unit.
-- **/
-+ */
- static int scsi_eh_stu(struct Scsi_Host *shost,
- struct list_head *work_q,
- struct list_head *done_q)
-@@ -1008,14 +1009,15 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
- /**
- * scsi_eh_bus_device_reset - send bdr if needed
- * @shost: scsi host being recovered.
-- * @eh_done_q: list_head for processed commands.
-+ * @work_q: &list_head for pending commands.
-+ * @done_q: &list_head for processed commands.
- *
- * Notes:
-- * Try a bus device reset. still, look to see whether we have multiple
-+ * Try a bus device reset. Still, look to see whether we have multiple
- * devices that are jammed or not - if we have multiple devices, it
- * makes no sense to try bus_device_reset - we really would need to try
- * a bus_reset instead.
-- **/
-+ */
- static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
- struct list_head *work_q,
- struct list_head *done_q)
-@@ -1063,9 +1065,10 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
+@@ -1862,7 +1969,7 @@ static inline int test_root(int a, int b)
+ return num == a;
+ }
- /**
- * scsi_eh_bus_reset - send a bus reset
-- * @shost: scsi host being recovered.
-- * @eh_done_q: list_head for processed commands.
-- **/
-+ * @shost: &scsi host being recovered.
-+ * @work_q: &list_head for pending commands.
-+ * @done_q: &list_head for processed commands.
-+ */
- static int scsi_eh_bus_reset(struct Scsi_Host *shost,
- struct list_head *work_q,
- struct list_head *done_q)
-@@ -1122,7 +1125,7 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
- * scsi_eh_host_reset - send a host reset
- * @work_q: list_head for processed commands.
- * @done_q: list_head for processed commands.
-- **/
-+ */
- static int scsi_eh_host_reset(struct list_head *work_q,
- struct list_head *done_q)
- {
-@@ -1157,8 +1160,7 @@ static int scsi_eh_host_reset(struct list_head *work_q,
- * scsi_eh_offline_sdevs - offline scsi devices that fail to recover
- * @work_q: list_head for processed commands.
- * @done_q: list_head for processed commands.
-- *
-- **/
-+ */
- static void scsi_eh_offline_sdevs(struct list_head *work_q,
- struct list_head *done_q)
- {
-@@ -1191,7 +1193,7 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
- * is woken. In cases where the error code indicates an error that
- * doesn't require the error handler read (i.e. we don't need to
- * abort/reset), this function should return SUCCESS.
-- **/
-+ */
- int scsi_decide_disposition(struct scsi_cmnd *scmd)
- {
- int rtn;
-@@ -1372,7 +1374,7 @@ int scsi_decide_disposition(struct scsi_cmnd *scmd)
- *
- * If scsi_allocate_request() fails for what ever reason, we
- * completely forget to lock the door.
-- **/
-+ */
- static void scsi_eh_lock_door(struct scsi_device *sdev)
- {
- unsigned char cmnd[MAX_COMMAND_SIZE];
-@@ -1396,7 +1398,7 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
- * Notes:
- * When we entered the error handler, we blocked all further i/o to
- * this device. we need to 'reverse' this process.
-- **/
-+ */
- static void scsi_restart_operations(struct Scsi_Host *shost)
- {
- struct scsi_device *sdev;
-@@ -1440,9 +1442,9 @@ static void scsi_restart_operations(struct Scsi_Host *shost)
- /**
- * scsi_eh_ready_devs - check device ready state and recover if not.
- * @shost: host to be recovered.
-- * @eh_done_q: list_head for processed commands.
-- *
-- **/
-+ * @work_q: &list_head for pending commands.
-+ * @done_q: &list_head for processed commands.
-+ */
- void scsi_eh_ready_devs(struct Scsi_Host *shost,
- struct list_head *work_q,
- struct list_head *done_q)
-@@ -1458,8 +1460,7 @@ EXPORT_SYMBOL_GPL(scsi_eh_ready_devs);
- /**
- * scsi_eh_flush_done_q - finish processed commands or retry them.
- * @done_q: list_head of processed commands.
-- *
-- **/
-+ */
- void scsi_eh_flush_done_q(struct list_head *done_q)
- {
- struct scsi_cmnd *scmd, *next;
-@@ -1513,7 +1514,7 @@ EXPORT_SYMBOL(scsi_eh_flush_done_q);
- * scsi_finish_cmd() called for it. we do all of the retry stuff
- * here, so when we restart the host after we return it should have an
- * empty queue.
-- **/
-+ */
- static void scsi_unjam_host(struct Scsi_Host *shost)
- {
- unsigned long flags;
-@@ -1540,7 +1541,7 @@ static void scsi_unjam_host(struct Scsi_Host *shost)
- * Notes:
- * This is the main error handling loop. This is run as a kernel thread
- * for every SCSI host and handles all error handling activity.
-- **/
-+ */
- int scsi_error_handler(void *data)
- {
- struct Scsi_Host *shost = data;
-@@ -1769,7 +1770,7 @@ EXPORT_SYMBOL(scsi_reset_provider);
- *
- * Return value:
- * 1 if valid sense data information found, else 0;
-- **/
-+ */
- int scsi_normalize_sense(const u8 *sense_buffer, int sb_len,
- struct scsi_sense_hdr *sshdr)
+-static int ext4_group_sparse(int group)
++static int ext4_group_sparse(ext4_group_t group)
{
-@@ -1819,14 +1820,12 @@ int scsi_command_normalize_sense(struct scsi_cmnd *cmd,
- struct scsi_sense_hdr *sshdr)
+ if (group <= 1)
+ return 1;
+@@ -1880,7 +1987,7 @@ static int ext4_group_sparse(int group)
+ * Return the number of blocks used by the superblock (primary or backup)
+ * in this group. Currently this will be only 0 or 1.
+ */
+-int ext4_bg_has_super(struct super_block *sb, int group)
++int ext4_bg_has_super(struct super_block *sb, ext4_group_t group)
{
- return scsi_normalize_sense(cmd->sense_buffer,
-- sizeof(cmd->sense_buffer), sshdr);
-+ SCSI_SENSE_BUFFERSIZE, sshdr);
+ if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
+ EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER) &&
+@@ -1889,18 +1996,20 @@ int ext4_bg_has_super(struct super_block *sb, int group)
+ return 1;
}
- EXPORT_SYMBOL(scsi_command_normalize_sense);
- /**
-- * scsi_sense_desc_find - search for a given descriptor type in
-- * descriptor sense data format.
-- *
-+ * scsi_sense_desc_find - search for a given descriptor type in descriptor sense data format.
- * @sense_buffer: byte array of descriptor format sense data
- * @sb_len: number of valid bytes in sense_buffer
- * @desc_type: value of descriptor type to find
-@@ -1837,7 +1836,7 @@ EXPORT_SYMBOL(scsi_command_normalize_sense);
- *
- * Return value:
- * pointer to start of (first) descriptor if found else NULL
-- **/
-+ */
- const u8 * scsi_sense_desc_find(const u8 * sense_buffer, int sb_len,
- int desc_type)
+-static unsigned long ext4_bg_num_gdb_meta(struct super_block *sb, int group)
++static unsigned long ext4_bg_num_gdb_meta(struct super_block *sb,
++ ext4_group_t group)
{
-@@ -1865,9 +1864,7 @@ const u8 * scsi_sense_desc_find(const u8 * sense_buffer, int sb_len,
- EXPORT_SYMBOL(scsi_sense_desc_find);
+ unsigned long metagroup = group / EXT4_DESC_PER_BLOCK(sb);
+- unsigned long first = metagroup * EXT4_DESC_PER_BLOCK(sb);
+- unsigned long last = first + EXT4_DESC_PER_BLOCK(sb) - 1;
++ ext4_group_t first = metagroup * EXT4_DESC_PER_BLOCK(sb);
++ ext4_group_t last = first + EXT4_DESC_PER_BLOCK(sb) - 1;
- /**
-- * scsi_get_sense_info_fld - attempts to get information field from
-- * sense data (either fixed or descriptor format)
-- *
-+ * scsi_get_sense_info_fld - get information field from sense data (either fixed or descriptor format)
- * @sense_buffer: byte array of sense data
- * @sb_len: number of valid bytes in sense_buffer
- * @info_out: pointer to 64 integer where 8 or 4 byte information
-@@ -1875,7 +1872,7 @@ EXPORT_SYMBOL(scsi_sense_desc_find);
- *
- * Return value:
- * 1 if information field found, 0 if not found.
-- **/
-+ */
- int scsi_get_sense_info_fld(const u8 * sense_buffer, int sb_len,
- u64 * info_out)
- {
-diff --git a/drivers/scsi/scsi_ioctl.c b/drivers/scsi/scsi_ioctl.c
-index 32293f4..28b19ef 100644
---- a/drivers/scsi/scsi_ioctl.c
-+++ b/drivers/scsi/scsi_ioctl.c
-@@ -174,10 +174,15 @@ static int scsi_ioctl_get_pci(struct scsi_device *sdev, void __user *arg)
+ if (group == first || group == first + 1 || group == last)
+ return 1;
+ return 0;
}
+-static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb, int group)
++static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb,
++ ext4_group_t group)
+ {
+ if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
+ EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER) &&
+@@ -1918,7 +2027,7 @@ static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb, int group)
+ * (primary or backup) in this group. In the future there may be a
+ * different number of descriptor blocks in each group.
+ */
+-unsigned long ext4_bg_num_gdb(struct super_block *sb, int group)
++unsigned long ext4_bg_num_gdb(struct super_block *sb, ext4_group_t group)
+ {
+ unsigned long first_meta_bg =
+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_meta_bg);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index f612bef..33888bb 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -67,7 +67,7 @@ int ext4_check_dir_entry (const char * function, struct inode * dir,
+ unsigned long offset)
+ {
+ const char * error_msg = NULL;
+- const int rlen = le16_to_cpu(de->rec_len);
++ const int rlen = ext4_rec_len_from_disk(de->rec_len);
+
+ if (rlen < EXT4_DIR_REC_LEN(1))
+ error_msg = "rec_len is smaller than minimal";
+@@ -124,7 +124,7 @@ static int ext4_readdir(struct file * filp,
+ offset = filp->f_pos & (sb->s_blocksize - 1);
--/*
-- * the scsi_ioctl() function differs from most ioctls in that it does
-- * not take a major/minor number as the dev field. Rather, it takes
-- * a pointer to a scsi_devices[] element, a structure.
-+/**
-+ * scsi_ioctl - Dispatch ioctl to scsi device
-+ * @sdev: scsi device receiving ioctl
-+ * @cmd: which ioctl is it
-+ * @arg: data associated with ioctl
-+ *
-+ * Description: The scsi_ioctl() function differs from most ioctls in that it
-+ * does not take a major/minor number as the dev field. Rather, it takes
-+ * a pointer to a &struct scsi_device.
+ while (!error && !stored && filp->f_pos < inode->i_size) {
+- unsigned long blk = filp->f_pos >> EXT4_BLOCK_SIZE_BITS(sb);
++ ext4_lblk_t blk = filp->f_pos >> EXT4_BLOCK_SIZE_BITS(sb);
+ struct buffer_head map_bh;
+ struct buffer_head *bh = NULL;
+
+@@ -172,10 +172,10 @@ revalidate:
+ * least that it is non-zero. A
+ * failure will be detected in the
+ * dirent test below. */
+- if (le16_to_cpu(de->rec_len) <
+- EXT4_DIR_REC_LEN(1))
++ if (ext4_rec_len_from_disk(de->rec_len)
++ < EXT4_DIR_REC_LEN(1))
+ break;
+- i += le16_to_cpu(de->rec_len);
++ i += ext4_rec_len_from_disk(de->rec_len);
+ }
+ offset = i;
+ filp->f_pos = (filp->f_pos & ~(sb->s_blocksize - 1))
+@@ -197,7 +197,7 @@ revalidate:
+ ret = stored;
+ goto out;
+ }
+- offset += le16_to_cpu(de->rec_len);
++ offset += ext4_rec_len_from_disk(de->rec_len);
+ if (le32_to_cpu(de->inode)) {
+ /* We might block in the next section
+ * if the data destination is
+@@ -219,7 +219,7 @@ revalidate:
+ goto revalidate;
+ stored ++;
+ }
+- filp->f_pos += le16_to_cpu(de->rec_len);
++ filp->f_pos += ext4_rec_len_from_disk(de->rec_len);
+ }
+ offset = 0;
+ brelse (bh);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 8528774..bc7081f 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -61,7 +61,7 @@ static ext4_fsblk_t ext_pblock(struct ext4_extent *ex)
+ * idx_pblock:
+ * combine low and high parts of a leaf physical block number into ext4_fsblk_t
*/
- int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+-static ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
++ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
{
-@@ -239,7 +244,7 @@ int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
- return scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
- case SCSI_IOCTL_TEST_UNIT_READY:
- return scsi_test_unit_ready(sdev, IOCTL_NORMAL_TIMEOUT,
-- NORMAL_RETRIES);
-+ NORMAL_RETRIES, NULL);
- case SCSI_IOCTL_START_UNIT:
- scsi_cmd[0] = START_STOP;
- scsi_cmd[1] = 0;
-@@ -264,9 +269,12 @@ int scsi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
- }
- EXPORT_SYMBOL(scsi_ioctl);
+ ext4_fsblk_t block;
--/*
-- * the scsi_nonblock_ioctl() function is designed for ioctls which may
-- * be executed even if the device is in recovery.
-+/**
-+ * scsi_nonblock_ioctl() - Handle SG_SCSI_RESET
-+ * @sdev: scsi device receiving ioctl
-+ * @cmd: Must be SC_SCSI_RESET
-+ * @arg: pointer to int containing SG_SCSI_RESET_{DEVICE,BUS,HOST}
-+ * @filp: either NULL or a &struct file which must have the O_NONBLOCK flag.
+@@ -75,7 +75,7 @@ static ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
+ * stores a large physical block number into an extent struct,
+ * breaking it into parts
*/
- int scsi_nonblockable_ioctl(struct scsi_device *sdev, int cmd,
- void __user *arg, struct file *filp)
-@@ -276,7 +284,7 @@ int scsi_nonblockable_ioctl(struct scsi_device *sdev, int cmd,
- /* The first set of iocts may be executed even if we're doing
- * error processing, as long as the device was opened
- * non-blocking */
-- if (filp && filp->f_flags & O_NONBLOCK) {
-+ if (filp && (filp->f_flags & O_NONBLOCK)) {
- if (scsi_host_in_recovery(sdev->host))
- return -ENODEV;
- } else if (!scsi_block_when_processing_errors(sdev))
-diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
-index a9ac5b1..7c4c889 100644
---- a/drivers/scsi/scsi_lib.c
-+++ b/drivers/scsi/scsi_lib.c
-@@ -175,7 +175,7 @@ int scsi_queue_insert(struct scsi_cmnd *cmd, int reason)
- *
- * returns the req->errors value which is the scsi_cmnd result
- * field.
-- **/
-+ */
- int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
- int data_direction, void *buffer, unsigned bufflen,
- unsigned char *sense, int timeout, int retries, int flags)
-@@ -274,7 +274,7 @@ static void scsi_bi_endio(struct bio *bio, int error)
- /**
- * scsi_req_map_sg - map a scatterlist into a request
- * @rq: request to fill
-- * @sg: scatterlist
-+ * @sgl: scatterlist
- * @nsegs: number of elements
- * @bufflen: len of buffer
- * @gfp: memory allocation flags
-@@ -365,14 +365,16 @@ free_bios:
- * @sdev: scsi device
- * @cmd: scsi command
- * @cmd_len: length of scsi cdb
-- * @data_direction: data direction
-+ * @data_direction: DMA_TO_DEVICE, DMA_FROM_DEVICE, or DMA_NONE
- * @buffer: data buffer (this can be a kernel buffer or scatterlist)
- * @bufflen: len of buffer
- * @use_sg: if buffer is a scatterlist this is the number of elements
- * @timeout: request timeout in seconds
- * @retries: number of times to retry request
-- * @flags: or into request flags
-- **/
-+ * @privdata: data passed to done()
-+ * @done: callback function when done
-+ * @gfp: memory allocation flags
-+ */
- int scsi_execute_async(struct scsi_device *sdev, const unsigned char *cmd,
- int cmd_len, int data_direction, void *buffer, unsigned bufflen,
- int use_sg, int timeout, int retries, void *privdata,
-@@ -439,7 +441,7 @@ static void scsi_init_cmd_errh(struct scsi_cmnd *cmd)
+-static void ext4_ext_store_pblock(struct ext4_extent *ex, ext4_fsblk_t pb)
++void ext4_ext_store_pblock(struct ext4_extent *ex, ext4_fsblk_t pb)
{
- cmd->serial_number = 0;
- cmd->resid = 0;
-- memset(cmd->sense_buffer, 0, sizeof cmd->sense_buffer);
-+ memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- if (cmd->cmd_len == 0)
- cmd->cmd_len = COMMAND_SIZE(cmd->cmnd[0]);
- }
-@@ -524,7 +526,7 @@ static void scsi_run_queue(struct request_queue *q)
- struct Scsi_Host *shost = sdev->host;
- unsigned long flags;
-
-- if (sdev->single_lun)
-+ if (scsi_target(sdev)->single_lun)
- scsi_single_lun_run(sdev);
+ ex->ee_start_lo = cpu_to_le32((unsigned long) (pb & 0xffffffff));
+ ex->ee_start_hi = cpu_to_le16((unsigned long) ((pb >> 31) >> 1) & 0xffff);
+@@ -144,7 +144,7 @@ static int ext4_ext_dirty(handle_t *handle, struct inode *inode,
- spin_lock_irqsave(shost->host_lock, flags);
-@@ -632,7 +634,7 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
- * of upper level post-processing and scsi_io_completion).
- *
- * Arguments: cmd - command that is complete.
-- * uptodate - 1 if I/O indicates success, <= 0 for I/O error.
-+ * error - 0 if I/O indicates success, < 0 for I/O error.
- * bytes - number of bytes of completed I/O
- * requeue - indicates whether we should requeue leftovers.
- *
-@@ -647,26 +649,25 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
- * at some point during this call.
- * Notes: If cmd was requeued, upon return it will be a stale pointer.
+ static ext4_fsblk_t ext4_ext_find_goal(struct inode *inode,
+ struct ext4_ext_path *path,
+- ext4_fsblk_t block)
++ ext4_lblk_t block)
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ ext4_fsblk_t bg_start;
+@@ -367,13 +367,14 @@ static void ext4_ext_drop_refs(struct ext4_ext_path *path)
+ * the header must be checked before calling this
*/
--static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
-+static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int error,
- int bytes, int requeue)
+ static void
+-ext4_ext_binsearch_idx(struct inode *inode, struct ext4_ext_path *path, int block)
++ext4_ext_binsearch_idx(struct inode *inode,
++ struct ext4_ext_path *path, ext4_lblk_t block)
{
- struct request_queue *q = cmd->device->request_queue;
- struct request *req = cmd->request;
-- unsigned long flags;
+ struct ext4_extent_header *eh = path->p_hdr;
+ struct ext4_extent_idx *r, *l, *m;
- /*
- * If there are blocks left over at the end, set up the command
- * to queue the remainder of them.
- */
-- if (end_that_request_chunk(req, uptodate, bytes)) {
-+ if (blk_end_request(req, error, bytes)) {
- int leftover = (req->hard_nr_sectors << 9);
- if (blk_pc_request(req))
- leftover = req->data_len;
+- ext_debug("binsearch for %d(idx): ", block);
++ ext_debug("binsearch for %u(idx): ", block);
- /* kill remainder if no retrys */
-- if (!uptodate && blk_noretry_request(req))
-- end_that_request_chunk(req, 0, leftover);
-+ if (error && blk_noretry_request(req))
-+ blk_end_request(req, error, leftover);
- else {
- if (requeue) {
- /*
-@@ -681,14 +682,6 @@ static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
- }
+ l = EXT_FIRST_INDEX(eh) + 1;
+ r = EXT_LAST_INDEX(eh);
+@@ -425,7 +426,8 @@ ext4_ext_binsearch_idx(struct inode *inode, struct ext4_ext_path *path, int bloc
+ * the header must be checked before calling this
+ */
+ static void
+-ext4_ext_binsearch(struct inode *inode, struct ext4_ext_path *path, int block)
++ext4_ext_binsearch(struct inode *inode,
++ struct ext4_ext_path *path, ext4_lblk_t block)
+ {
+ struct ext4_extent_header *eh = path->p_hdr;
+ struct ext4_extent *r, *l, *m;
+@@ -438,7 +440,7 @@ ext4_ext_binsearch(struct inode *inode, struct ext4_ext_path *path, int block)
+ return;
}
-- add_disk_randomness(req->rq_disk);
--
-- spin_lock_irqsave(q->queue_lock, flags);
-- if (blk_rq_tagged(req))
-- blk_queue_end_tag(q, req);
-- end_that_request_last(req, uptodate);
-- spin_unlock_irqrestore(q->queue_lock, flags);
--
- /*
- * This will goose the queue request function at the end, so we don't
- * need to worry about launching another command.
-@@ -737,138 +730,43 @@ static inline unsigned int scsi_sgtable_index(unsigned short nents)
- return index;
+- ext_debug("binsearch for %d: ", block);
++ ext_debug("binsearch for %u: ", block);
+
+ l = EXT_FIRST_EXTENT(eh) + 1;
+ r = EXT_LAST_EXTENT(eh);
+@@ -494,7 +496,8 @@ int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
}
--struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
-+static void scsi_sg_free(struct scatterlist *sgl, unsigned int nents)
+ struct ext4_ext_path *
+-ext4_ext_find_extent(struct inode *inode, int block, struct ext4_ext_path *path)
++ext4_ext_find_extent(struct inode *inode, ext4_lblk_t block,
++ struct ext4_ext_path *path)
{
- struct scsi_host_sg_pool *sgp;
-- struct scatterlist *sgl, *prev, *ret;
-- unsigned int index;
-- int this, left;
--
-- BUG_ON(!cmd->use_sg);
--
-- left = cmd->use_sg;
-- ret = prev = NULL;
-- do {
-- this = left;
-- if (this > SCSI_MAX_SG_SEGMENTS) {
-- this = SCSI_MAX_SG_SEGMENTS - 1;
-- index = SG_MEMPOOL_NR - 1;
-- } else
-- index = scsi_sgtable_index(this);
+ struct ext4_extent_header *eh;
+ struct buffer_head *bh;
+@@ -763,7 +766,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ while (k--) {
+ oldblock = newblock;
+ newblock = ablocks[--a];
+- bh = sb_getblk(inode->i_sb, (ext4_fsblk_t)newblock);
++ bh = sb_getblk(inode->i_sb, newblock);
+ if (!bh) {
+ err = -EIO;
+ goto cleanup;
+@@ -783,9 +786,8 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ fidx->ei_block = border;
+ ext4_idx_store_pblock(fidx, oldblock);
-- left -= this;
--
-- sgp = scsi_sg_pools + index;
--
-- sgl = mempool_alloc(sgp->pool, gfp_mask);
-- if (unlikely(!sgl))
-- goto enomem;
-+ sgp = scsi_sg_pools + scsi_sgtable_index(nents);
-+ mempool_free(sgl, sgp->pool);
-+}
+- ext_debug("int.index at %d (block %llu): %lu -> %llu\n", i,
+- newblock, (unsigned long) le32_to_cpu(border),
+- oldblock);
++ ext_debug("int.index at %d (block %llu): %u -> %llu\n",
++ i, newblock, le32_to_cpu(border), oldblock);
+ /* copy indexes */
+ m = 0;
+ path[i].p_idx++;
+@@ -851,7 +853,7 @@ cleanup:
+ for (i = 0; i < depth; i++) {
+ if (!ablocks[i])
+ continue;
+- ext4_free_blocks(handle, inode, ablocks[i], 1);
++ ext4_free_blocks(handle, inode, ablocks[i], 1, 1);
+ }
+ }
+ kfree(ablocks);
+@@ -979,8 +981,8 @@ repeat:
+ /* refill path */
+ ext4_ext_drop_refs(path);
+ path = ext4_ext_find_extent(inode,
+- le32_to_cpu(newext->ee_block),
+- path);
++ (ext4_lblk_t)le32_to_cpu(newext->ee_block),
++ path);
+ if (IS_ERR(path))
+ err = PTR_ERR(path);
+ } else {
+@@ -992,8 +994,8 @@ repeat:
+ /* refill path */
+ ext4_ext_drop_refs(path);
+ path = ext4_ext_find_extent(inode,
+- le32_to_cpu(newext->ee_block),
+- path);
++ (ext4_lblk_t)le32_to_cpu(newext->ee_block),
++ path);
+ if (IS_ERR(path)) {
+ err = PTR_ERR(path);
+ goto out;
+@@ -1015,13 +1017,157 @@ out:
+ }
-- sg_init_table(sgl, sgp->size);
-+static struct scatterlist *scsi_sg_alloc(unsigned int nents, gfp_t gfp_mask)
+ /*
++ * search the closest allocated block to the left for *logical
++ * and returns it at @logical + it's physical address at @phys
++ * if *logical is the smallest allocated block, the function
++ * returns 0 at @phys
++ * return value contains 0 (success) or error code
++ */
++int
++ext4_ext_search_left(struct inode *inode, struct ext4_ext_path *path,
++ ext4_lblk_t *logical, ext4_fsblk_t *phys)
+{
-+ struct scsi_host_sg_pool *sgp;
-
-- /*
-- * first loop through, set initial index and return value
-- */
-- if (!ret)
-- ret = sgl;
-+ sgp = scsi_sg_pools + scsi_sgtable_index(nents);
-+ return mempool_alloc(sgp->pool, gfp_mask);
++ struct ext4_extent_idx *ix;
++ struct ext4_extent *ex;
++ int depth, ee_len;
++
++ BUG_ON(path == NULL);
++ depth = path->p_depth;
++ *phys = 0;
++
++ if (depth == 0 && path->p_ext == NULL)
++ return 0;
++
++ /* usually extent in the path covers blocks smaller
++ * then *logical, but it can be that extent is the
++ * first one in the file */
++
++ ex = path[depth].p_ext;
++ ee_len = ext4_ext_get_actual_len(ex);
++ if (*logical < le32_to_cpu(ex->ee_block)) {
++ BUG_ON(EXT_FIRST_EXTENT(path[depth].p_hdr) != ex);
++ while (--depth >= 0) {
++ ix = path[depth].p_idx;
++ BUG_ON(ix != EXT_FIRST_INDEX(path[depth].p_hdr));
++ }
++ return 0;
++ }
++
++ BUG_ON(*logical < (le32_to_cpu(ex->ee_block) + ee_len));
++
++ *logical = le32_to_cpu(ex->ee_block) + ee_len - 1;
++ *phys = ext_pblock(ex) + ee_len - 1;
++ return 0;
+}
-
-- /*
-- * chain previous sglist, if any. we know the previous
-- * sglist must be the biggest one, or we would not have
-- * ended up doing another loop.
-- */
-- if (prev)
-- sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl);
-+int scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
++
++/*
++ * search the closest allocated block to the right for *logical
++ * and returns it at @logical + it's physical address at @phys
++ * if *logical is the smallest allocated block, the function
++ * returns 0 at @phys
++ * return value contains 0 (success) or error code
++ */
++int
++ext4_ext_search_right(struct inode *inode, struct ext4_ext_path *path,
++ ext4_lblk_t *logical, ext4_fsblk_t *phys)
+{
-+ int ret;
++ struct buffer_head *bh = NULL;
++ struct ext4_extent_header *eh;
++ struct ext4_extent_idx *ix;
++ struct ext4_extent *ex;
++ ext4_fsblk_t block;
++ int depth, ee_len;
++
++ BUG_ON(path == NULL);
++ depth = path->p_depth;
++ *phys = 0;
++
++ if (depth == 0 && path->p_ext == NULL)
++ return 0;
++
++ /* usually extent in the path covers blocks smaller
++ * then *logical, but it can be that extent is the
++ * first one in the file */
++
++ ex = path[depth].p_ext;
++ ee_len = ext4_ext_get_actual_len(ex);
++ if (*logical < le32_to_cpu(ex->ee_block)) {
++ BUG_ON(EXT_FIRST_EXTENT(path[depth].p_hdr) != ex);
++ while (--depth >= 0) {
++ ix = path[depth].p_idx;
++ BUG_ON(ix != EXT_FIRST_INDEX(path[depth].p_hdr));
++ }
++ *logical = le32_to_cpu(ex->ee_block);
++ *phys = ext_pblock(ex);
++ return 0;
++ }
++
++ BUG_ON(*logical < (le32_to_cpu(ex->ee_block) + ee_len));
++
++ if (ex != EXT_LAST_EXTENT(path[depth].p_hdr)) {
++ /* next allocated block in this leaf */
++ ex++;
++ *logical = le32_to_cpu(ex->ee_block);
++ *phys = ext_pblock(ex);
++ return 0;
++ }
++
++ /* go up and search for index to the right */
++ while (--depth >= 0) {
++ ix = path[depth].p_idx;
++ if (ix != EXT_LAST_INDEX(path[depth].p_hdr))
++ break;
++ }
++
++ if (depth < 0) {
++ /* we've gone up to the root and
++ * found no index to the right */
++ return 0;
++ }
++
++ /* we've found index to the right, let's
++ * follow it and find the closest allocated
++ * block to the right */
++ ix++;
++ block = idx_pblock(ix);
++ while (++depth < path->p_depth) {
++ bh = sb_bread(inode->i_sb, block);
++ if (bh == NULL)
++ return -EIO;
++ eh = ext_block_hdr(bh);
++ if (ext4_ext_check_header(inode, eh, depth)) {
++ put_bh(bh);
++ return -EIO;
++ }
++ ix = EXT_FIRST_INDEX(eh);
++ block = idx_pblock(ix);
++ put_bh(bh);
++ }
++
++ bh = sb_bread(inode->i_sb, block);
++ if (bh == NULL)
++ return -EIO;
++ eh = ext_block_hdr(bh);
++ if (ext4_ext_check_header(inode, eh, path->p_depth - depth)) {
++ put_bh(bh);
++ return -EIO;
++ }
++ ex = EXT_FIRST_EXTENT(eh);
++ *logical = le32_to_cpu(ex->ee_block);
++ *phys = ext_pblock(ex);
++ put_bh(bh);
++ return 0;
++
++}
++
++/*
+ * ext4_ext_next_allocated_block:
+ * returns allocated block in subsequent extent or EXT_MAX_BLOCK.
+ * NOTE: it considers block number from index entry as
+ * allocated block. Thus, index entries have to be consistent
+ * with leaves.
+ */
+-static unsigned long
++static ext4_lblk_t
+ ext4_ext_next_allocated_block(struct ext4_ext_path *path)
+ {
+ int depth;
+@@ -1054,7 +1200,7 @@ ext4_ext_next_allocated_block(struct ext4_ext_path *path)
+ * ext4_ext_next_leaf_block:
+ * returns first allocated block from next leaf or EXT_MAX_BLOCK
+ */
+-static unsigned ext4_ext_next_leaf_block(struct inode *inode,
++static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
+ struct ext4_ext_path *path)
+ {
+ int depth;
+@@ -1072,7 +1218,8 @@ static unsigned ext4_ext_next_leaf_block(struct inode *inode,
+ while (depth >= 0) {
+ if (path[depth].p_idx !=
+ EXT_LAST_INDEX(path[depth].p_hdr))
+- return le32_to_cpu(path[depth].p_idx[1].ei_block);
++ return (ext4_lblk_t)
++ le32_to_cpu(path[depth].p_idx[1].ei_block);
+ depth--;
+ }
-- /*
-- * if we have nothing left, mark the last segment as
-- * end-of-list
-- */
-- if (!left)
-- sg_mark_end(&sgl[this - 1]);
-+ BUG_ON(!cmd->use_sg);
+@@ -1085,7 +1232,7 @@ static unsigned ext4_ext_next_leaf_block(struct inode *inode,
+ * then we have to correct all indexes above.
+ * TODO: do we need to correct tree in all cases?
+ */
+-int ext4_ext_correct_indexes(handle_t *handle, struct inode *inode,
++static int ext4_ext_correct_indexes(handle_t *handle, struct inode *inode,
+ struct ext4_ext_path *path)
+ {
+ struct ext4_extent_header *eh;
+@@ -1171,7 +1318,7 @@ ext4_can_extents_be_merged(struct inode *inode, struct ext4_extent *ex1,
+ if (ext1_ee_len + ext2_ee_len > max_len)
+ return 0;
+ #ifdef AGGRESSIVE_TEST
+- if (le16_to_cpu(ex1->ee_len) >= 4)
++ if (ext1_ee_len >= 4)
+ return 0;
+ #endif
-- /*
-- * don't allow subsequent mempool allocs to sleep, it would
-- * violate the mempool principle.
-- */
-- gfp_mask &= ~__GFP_WAIT;
-- gfp_mask |= __GFP_HIGH;
-- prev = sgl;
-- } while (left);
-+ ret = __sg_alloc_table(&cmd->sg_table, cmd->use_sg,
-+ SCSI_MAX_SG_SEGMENTS, gfp_mask, scsi_sg_alloc);
-+ if (unlikely(ret))
-+ __sg_free_table(&cmd->sg_table, SCSI_MAX_SG_SEGMENTS,
-+ scsi_sg_free);
+@@ -1239,7 +1386,7 @@ unsigned int ext4_ext_check_overlap(struct inode *inode,
+ struct ext4_extent *newext,
+ struct ext4_ext_path *path)
+ {
+- unsigned long b1, b2;
++ ext4_lblk_t b1, b2;
+ unsigned int depth, len1;
+ unsigned int ret = 0;
-- /*
-- * ->use_sg may get modified after dma mapping has potentially
-- * shrunk the number of segments, so keep a copy of it for free.
-- */
-- cmd->__use_sg = cmd->use_sg;
-+ cmd->request_buffer = cmd->sg_table.sgl;
- return ret;
--enomem:
-- if (ret) {
-- /*
-- * Free entries chained off ret. Since we were trying to
-- * allocate another sglist, we know that all entries are of
-- * the max size.
-- */
-- sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1;
-- prev = ret;
-- ret = &ret[SCSI_MAX_SG_SEGMENTS - 1];
--
-- while ((sgl = sg_chain_ptr(ret)) != NULL) {
-- ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1];
-- mempool_free(sgl, sgp->pool);
-- }
--
-- mempool_free(prev, sgp->pool);
-- }
-- return NULL;
- }
+@@ -1260,7 +1407,7 @@ unsigned int ext4_ext_check_overlap(struct inode *inode,
+ goto out;
+ }
- EXPORT_SYMBOL(scsi_alloc_sgtable);
+- /* check for wrap through zero */
++ /* check for wrap through zero on extent logical start block*/
+ if (b1 + len1 < b1) {
+ len1 = EXT_MAX_BLOCK - b1;
+ newext->ee_len = cpu_to_le16(len1);
+@@ -1290,7 +1437,8 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
+ struct ext4_extent *ex, *fex;
+ struct ext4_extent *nearex; /* nearest extent */
+ struct ext4_ext_path *npath = NULL;
+- int depth, len, err, next;
++ int depth, len, err;
++ ext4_lblk_t next;
+ unsigned uninitialized = 0;
- void scsi_free_sgtable(struct scsi_cmnd *cmd)
- {
-- struct scatterlist *sgl = cmd->request_buffer;
-- struct scsi_host_sg_pool *sgp;
+ BUG_ON(ext4_ext_get_actual_len(newext) == 0);
+@@ -1435,114 +1583,8 @@ cleanup:
+ return err;
+ }
+
+-int ext4_ext_walk_space(struct inode *inode, unsigned long block,
+- unsigned long num, ext_prepare_callback func,
+- void *cbdata)
+-{
+- struct ext4_ext_path *path = NULL;
+- struct ext4_ext_cache cbex;
+- struct ext4_extent *ex;
+- unsigned long next, start = 0, end = 0;
+- unsigned long last = block + num;
+- int depth, exists, err = 0;
-
-- /*
-- * if this is the biggest size sglist, check if we have
-- * chained parts we need to free
-- */
-- if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) {
-- unsigned short this, left;
-- struct scatterlist *next;
-- unsigned int index;
+- BUG_ON(func == NULL);
+- BUG_ON(inode == NULL);
-
-- left = cmd->__use_sg - (SCSI_MAX_SG_SEGMENTS - 1);
-- next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]);
-- while (left && next) {
-- sgl = next;
-- this = left;
-- if (this > SCSI_MAX_SG_SEGMENTS) {
-- this = SCSI_MAX_SG_SEGMENTS - 1;
-- index = SG_MEMPOOL_NR - 1;
-- } else
-- index = scsi_sgtable_index(this);
+- while (block < last && block != EXT_MAX_BLOCK) {
+- num = last - block;
+- /* find extent for this block */
+- path = ext4_ext_find_extent(inode, block, path);
+- if (IS_ERR(path)) {
+- err = PTR_ERR(path);
+- path = NULL;
+- break;
+- }
-
-- left -= this;
+- depth = ext_depth(inode);
+- BUG_ON(path[depth].p_hdr == NULL);
+- ex = path[depth].p_ext;
+- next = ext4_ext_next_allocated_block(path);
-
-- sgp = scsi_sg_pools + index;
+- exists = 0;
+- if (!ex) {
+- /* there is no extent yet, so try to allocate
+- * all requested space */
+- start = block;
+- end = block + num;
+- } else if (le32_to_cpu(ex->ee_block) > block) {
+- /* need to allocate space before found extent */
+- start = block;
+- end = le32_to_cpu(ex->ee_block);
+- if (block + num < end)
+- end = block + num;
+- } else if (block >= le32_to_cpu(ex->ee_block)
+- + ext4_ext_get_actual_len(ex)) {
+- /* need to allocate space after found extent */
+- start = block;
+- end = block + num;
+- if (end >= next)
+- end = next;
+- } else if (block >= le32_to_cpu(ex->ee_block)) {
+- /*
+- * some part of requested space is covered
+- * by found extent
+- */
+- start = block;
+- end = le32_to_cpu(ex->ee_block)
+- + ext4_ext_get_actual_len(ex);
+- if (block + num < end)
+- end = block + num;
+- exists = 1;
+- } else {
+- BUG();
+- }
+- BUG_ON(end <= start);
-
-- if (left)
-- next = sg_chain_ptr(&sgl[sgp->size - 1]);
+- if (!exists) {
+- cbex.ec_block = start;
+- cbex.ec_len = end - start;
+- cbex.ec_start = 0;
+- cbex.ec_type = EXT4_EXT_CACHE_GAP;
+- } else {
+- cbex.ec_block = le32_to_cpu(ex->ee_block);
+- cbex.ec_len = ext4_ext_get_actual_len(ex);
+- cbex.ec_start = ext_pblock(ex);
+- cbex.ec_type = EXT4_EXT_CACHE_EXTENT;
+- }
-
-- mempool_free(sgl, sgp->pool);
+- BUG_ON(cbex.ec_len == 0);
+- err = func(inode, path, &cbex, cbdata);
+- ext4_ext_drop_refs(path);
+-
+- if (err < 0)
+- break;
+- if (err == EXT_REPEAT)
+- continue;
+- else if (err == EXT_BREAK) {
+- err = 0;
+- break;
- }
-
-- /*
-- * Restore original, will be freed below
-- */
-- sgl = cmd->request_buffer;
-- sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1;
-- } else
-- sgp = scsi_sg_pools + scsi_sgtable_index(cmd->__use_sg);
+- if (ext_depth(inode) != depth) {
+- /* depth was changed. we have to realloc path */
+- kfree(path);
+- path = NULL;
+- }
-
-- mempool_free(sgl, sgp->pool);
-+ __sg_free_table(&cmd->sg_table, SCSI_MAX_SG_SEGMENTS, scsi_sg_free);
+- block = cbex.ec_block + cbex.ec_len;
+- }
+-
+- if (path) {
+- ext4_ext_drop_refs(path);
+- kfree(path);
+- }
+-
+- return err;
+-}
+-
+ static void
+-ext4_ext_put_in_cache(struct inode *inode, __u32 block,
++ext4_ext_put_in_cache(struct inode *inode, ext4_lblk_t block,
+ __u32 len, ext4_fsblk_t start, int type)
+ {
+ struct ext4_ext_cache *cex;
+@@ -1561,10 +1603,11 @@ ext4_ext_put_in_cache(struct inode *inode, __u32 block,
+ */
+ static void
+ ext4_ext_put_gap_in_cache(struct inode *inode, struct ext4_ext_path *path,
+- unsigned long block)
++ ext4_lblk_t block)
+ {
+ int depth = ext_depth(inode);
+- unsigned long lblock, len;
++ unsigned long len;
++ ext4_lblk_t lblock;
+ struct ext4_extent *ex;
+
+ ex = path[depth].p_ext;
+@@ -1576,32 +1619,34 @@ ext4_ext_put_gap_in_cache(struct inode *inode, struct ext4_ext_path *path,
+ } else if (block < le32_to_cpu(ex->ee_block)) {
+ lblock = block;
+ len = le32_to_cpu(ex->ee_block) - block;
+- ext_debug("cache gap(before): %lu [%lu:%lu]",
+- (unsigned long) block,
+- (unsigned long) le32_to_cpu(ex->ee_block),
+- (unsigned long) ext4_ext_get_actual_len(ex));
++ ext_debug("cache gap(before): %u [%u:%u]",
++ block,
++ le32_to_cpu(ex->ee_block),
++ ext4_ext_get_actual_len(ex));
+ } else if (block >= le32_to_cpu(ex->ee_block)
+ + ext4_ext_get_actual_len(ex)) {
++ ext4_lblk_t next;
+ lblock = le32_to_cpu(ex->ee_block)
+ + ext4_ext_get_actual_len(ex);
+- len = ext4_ext_next_allocated_block(path);
+- ext_debug("cache gap(after): [%lu:%lu] %lu",
+- (unsigned long) le32_to_cpu(ex->ee_block),
+- (unsigned long) ext4_ext_get_actual_len(ex),
+- (unsigned long) block);
+- BUG_ON(len == lblock);
+- len = len - lblock;
++
++ next = ext4_ext_next_allocated_block(path);
++ ext_debug("cache gap(after): [%u:%u] %u",
++ le32_to_cpu(ex->ee_block),
++ ext4_ext_get_actual_len(ex),
++ block);
++ BUG_ON(next == lblock);
++ len = next - lblock;
+ } else {
+ lblock = len = 0;
+ BUG();
+ }
+
+- ext_debug(" -> %lu:%lu\n", (unsigned long) lblock, len);
++ ext_debug(" -> %u:%lu\n", lblock, len);
+ ext4_ext_put_in_cache(inode, lblock, len, 0, EXT4_EXT_CACHE_GAP);
}
- EXPORT_SYMBOL(scsi_free_sgtable);
-@@ -985,7 +883,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- * are leftovers and there is some kind of error
- * (result != 0), retry the rest.
- */
-- if (scsi_end_request(cmd, 1, good_bytes, result == 0) == NULL)
-+ if (scsi_end_request(cmd, 0, good_bytes, result == 0) == NULL)
- return;
+ static int
+-ext4_ext_in_cache(struct inode *inode, unsigned long block,
++ext4_ext_in_cache(struct inode *inode, ext4_lblk_t block,
+ struct ext4_extent *ex)
+ {
+ struct ext4_ext_cache *cex;
+@@ -1618,11 +1663,9 @@ ext4_ext_in_cache(struct inode *inode, unsigned long block,
+ ex->ee_block = cpu_to_le32(cex->ec_block);
+ ext4_ext_store_pblock(ex, cex->ec_start);
+ ex->ee_len = cpu_to_le16(cex->ec_len);
+- ext_debug("%lu cached by %lu:%lu:%llu\n",
+- (unsigned long) block,
+- (unsigned long) cex->ec_block,
+- (unsigned long) cex->ec_len,
+- cex->ec_start);
++ ext_debug("%u cached by %u:%u:%llu\n",
++ block,
++ cex->ec_block, cex->ec_len, cex->ec_start);
+ return cex->ec_type;
+ }
- /* good_bytes = 0, or (inclusive) there were leftovers and
-@@ -999,7 +897,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- * and quietly refuse further access.
- */
- cmd->device->changed = 1;
-- scsi_end_request(cmd, 0, this_count, 1);
-+ scsi_end_request(cmd, -EIO, this_count, 1);
- return;
- } else {
- /* Must have been a power glitch, or a
-@@ -1031,7 +929,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- scsi_requeue_command(q, cmd);
- return;
- } else {
-- scsi_end_request(cmd, 0, this_count, 1);
-+ scsi_end_request(cmd, -EIO, this_count, 1);
- return;
- }
- break;
-@@ -1059,7 +957,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- "Device not ready",
- &sshdr);
+@@ -1636,7 +1679,7 @@ ext4_ext_in_cache(struct inode *inode, unsigned long block,
+ * It's used in truncate case only, thus all requests are for
+ * last index in the block only.
+ */
+-int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
++static int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
+ struct ext4_ext_path *path)
+ {
+ struct buffer_head *bh;
+@@ -1657,7 +1700,7 @@ int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
+ ext_debug("index is empty, remove it, free block %llu\n", leaf);
+ bh = sb_find_get_block(inode->i_sb, leaf);
+ ext4_forget(handle, 1, inode, bh, leaf);
+- ext4_free_blocks(handle, inode, leaf, 1);
++ ext4_free_blocks(handle, inode, leaf, 1, 1);
+ return err;
+ }
-- scsi_end_request(cmd, 0, this_count, 1);
-+ scsi_end_request(cmd, -EIO, this_count, 1);
- return;
- case VOLUME_OVERFLOW:
- if (!(req->cmd_flags & REQ_QUIET)) {
-@@ -1069,7 +967,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- scsi_print_sense("", cmd);
- }
- /* See SSC3rXX or current. */
-- scsi_end_request(cmd, 0, this_count, 1);
-+ scsi_end_request(cmd, -EIO, this_count, 1);
- return;
- default:
- break;
-@@ -1090,7 +988,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- scsi_print_sense("", cmd);
+@@ -1666,7 +1709,7 @@ int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
+ * This routine returns max. credits that the extent tree can consume.
+ * It should be OK for low-performance paths like ->writepage()
+ * To allow many writing processes to fit into a single transaction,
+- * the caller should calculate credits under truncate_mutex and
++ * the caller should calculate credits under i_data_sem and
+ * pass the actual path.
+ */
+ int ext4_ext_calc_credits_for_insert(struct inode *inode,
+@@ -1714,12 +1757,14 @@ int ext4_ext_calc_credits_for_insert(struct inode *inode,
+
+ static int ext4_remove_blocks(handle_t *handle, struct inode *inode,
+ struct ext4_extent *ex,
+- unsigned long from, unsigned long to)
++ ext4_lblk_t from, ext4_lblk_t to)
+ {
+ struct buffer_head *bh;
+ unsigned short ee_len = ext4_ext_get_actual_len(ex);
+- int i;
++ int i, metadata = 0;
+
++ if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
++ metadata = 1;
+ #ifdef EXTENTS_STATS
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+@@ -1738,42 +1783,45 @@ static int ext4_remove_blocks(handle_t *handle, struct inode *inode,
+ if (from >= le32_to_cpu(ex->ee_block)
+ && to == le32_to_cpu(ex->ee_block) + ee_len - 1) {
+ /* tail removal */
+- unsigned long num;
++ ext4_lblk_t num;
+ ext4_fsblk_t start;
++
+ num = le32_to_cpu(ex->ee_block) + ee_len - from;
+ start = ext_pblock(ex) + ee_len - num;
+- ext_debug("free last %lu blocks starting %llu\n", num, start);
++ ext_debug("free last %u blocks starting %llu\n", num, start);
+ for (i = 0; i < num; i++) {
+ bh = sb_find_get_block(inode->i_sb, start + i);
+ ext4_forget(handle, 0, inode, bh, start + i);
}
+- ext4_free_blocks(handle, inode, start, num);
++ ext4_free_blocks(handle, inode, start, num, metadata);
+ } else if (from == le32_to_cpu(ex->ee_block)
+ && to <= le32_to_cpu(ex->ee_block) + ee_len - 1) {
+- printk("strange request: removal %lu-%lu from %u:%u\n",
++ printk(KERN_INFO "strange request: removal %u-%u from %u:%u\n",
+ from, to, le32_to_cpu(ex->ee_block), ee_len);
+ } else {
+- printk("strange request: removal(2) %lu-%lu from %u:%u\n",
+- from, to, le32_to_cpu(ex->ee_block), ee_len);
++ printk(KERN_INFO "strange request: removal(2) "
++ "%u-%u from %u:%u\n",
++ from, to, le32_to_cpu(ex->ee_block), ee_len);
}
-- scsi_end_request(cmd, 0, this_count, !result);
-+ scsi_end_request(cmd, -EIO, this_count, !result);
+ return 0;
}
- /*
-@@ -1102,7 +1000,6 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
- *
- * Returns: 0 on success
- * BLKPREP_DEFER if the failure is retryable
-- * BLKPREP_KILL if the failure is fatal
+ static int
+ ext4_ext_rm_leaf(handle_t *handle, struct inode *inode,
+- struct ext4_ext_path *path, unsigned long start)
++ struct ext4_ext_path *path, ext4_lblk_t start)
+ {
+ int err = 0, correct_index = 0;
+ int depth = ext_depth(inode), credits;
+ struct ext4_extent_header *eh;
+- unsigned a, b, block, num;
+- unsigned long ex_ee_block;
++ ext4_lblk_t a, b, block;
++ unsigned num;
++ ext4_lblk_t ex_ee_block;
+ unsigned short ex_ee_len;
+ unsigned uninitialized = 0;
+ struct ext4_extent *ex;
+
+ /* the header must be checked already in ext4_ext_remove_space() */
+- ext_debug("truncate since %lu in leaf\n", start);
++ ext_debug("truncate since %u in leaf\n", start);
+ if (!path[depth].p_hdr)
+ path[depth].p_hdr = ext_block_hdr(path[depth].p_bh);
+ eh = path[depth].p_hdr;
+@@ -1904,7 +1952,7 @@ ext4_ext_more_to_rm(struct ext4_ext_path *path)
+ return 1;
+ }
+
+-int ext4_ext_remove_space(struct inode *inode, unsigned long start)
++static int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start)
+ {
+ struct super_block *sb = inode->i_sb;
+ int depth = ext_depth(inode);
+@@ -1912,7 +1960,7 @@ int ext4_ext_remove_space(struct inode *inode, unsigned long start)
+ handle_t *handle;
+ int i = 0, err = 0;
+
+- ext_debug("truncate since %lu\n", start);
++ ext_debug("truncate since %u\n", start);
+
+ /* probably first extent we're gonna free will be last in block */
+ handle = ext4_journal_start(inode, depth + 1);
+@@ -2094,17 +2142,19 @@ void ext4_ext_release(struct super_block *sb)
+ * b> Splits in two extents: Write is happening at either end of the extent
+ * c> Splits in three extents: Somone is writing in middle of the extent
*/
- static int scsi_init_io(struct scsi_cmnd *cmd)
+-int ext4_ext_convert_to_initialized(handle_t *handle, struct inode *inode,
+- struct ext4_ext_path *path,
+- ext4_fsblk_t iblock,
+- unsigned long max_blocks)
++static int ext4_ext_convert_to_initialized(handle_t *handle,
++ struct inode *inode,
++ struct ext4_ext_path *path,
++ ext4_lblk_t iblock,
++ unsigned long max_blocks)
{
-@@ -1119,8 +1016,7 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
+ struct ext4_extent *ex, newex;
+ struct ext4_extent *ex1 = NULL;
+ struct ext4_extent *ex2 = NULL;
+ struct ext4_extent *ex3 = NULL;
+ struct ext4_extent_header *eh;
+- unsigned int allocated, ee_block, ee_len, depth;
++ ext4_lblk_t ee_block;
++ unsigned int allocated, ee_len, depth;
+ ext4_fsblk_t newblock;
+ int err = 0;
+ int ret = 0;
+@@ -2225,8 +2275,13 @@ out:
+ return err ? err : allocated;
+ }
+
++/*
++ * Need to be called with
++ * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
++ * (ie, create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
++ */
+ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+- ext4_fsblk_t iblock,
++ ext4_lblk_t iblock,
+ unsigned long max_blocks, struct buffer_head *bh_result,
+ int create, int extend_disksize)
+ {
+@@ -2236,11 +2291,11 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ ext4_fsblk_t goal, newblock;
+ int err = 0, depth, ret;
+ unsigned long allocated = 0;
++ struct ext4_allocation_request ar;
+
+ __clear_bit(BH_New, &bh_result->b_state);
+- ext_debug("blocks %d/%lu requested for inode %u\n", (int) iblock,
+- max_blocks, (unsigned) inode->i_ino);
+- mutex_lock(&EXT4_I(inode)->truncate_mutex);
++ ext_debug("blocks %u/%lu requested for inode %u\n",
++ iblock, max_blocks, inode->i_ino);
+
+ /* check in cache */
+ goal = ext4_ext_in_cache(inode, iblock, &newex);
+@@ -2260,7 +2315,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ - le32_to_cpu(newex.ee_block)
+ + ext_pblock(&newex);
+ /* number of remaining blocks in the extent */
+- allocated = le16_to_cpu(newex.ee_len) -
++ allocated = ext4_ext_get_actual_len(&newex) -
+ (iblock - le32_to_cpu(newex.ee_block));
+ goto out;
+ } else {
+@@ -2288,7 +2343,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+
+ ex = path[depth].p_ext;
+ if (ex) {
+- unsigned long ee_block = le32_to_cpu(ex->ee_block);
++ ext4_lblk_t ee_block = le32_to_cpu(ex->ee_block);
+ ext4_fsblk_t ee_start = ext_pblock(ex);
+ unsigned short ee_len;
+
+@@ -2302,7 +2357,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ newblock = iblock - ee_block + ee_start;
+ /* number of remaining blocks in the extent */
+ allocated = ee_len - (iblock - ee_block);
+- ext_debug("%d fit into %lu:%d -> %llu\n", (int) iblock,
++ ext_debug("%u fit into %lu:%d -> %llu\n", iblock,
+ ee_block, ee_len, newblock);
+
+ /* Do not put uninitialized extent in the cache */
+@@ -2320,9 +2375,10 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ ret = ext4_ext_convert_to_initialized(handle, inode,
+ path, iblock,
+ max_blocks);
+- if (ret <= 0)
++ if (ret <= 0) {
++ err = ret;
+ goto out2;
+- else
++ } else
+ allocated = ret;
+ goto outnew;
+ }
+@@ -2347,8 +2403,15 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ if (S_ISREG(inode->i_mode) && (!EXT4_I(inode)->i_block_alloc_info))
+ ext4_init_block_alloc_info(inode);
+
+- /* allocate new block */
+- goal = ext4_ext_find_goal(inode, path, iblock);
++ /* find neighbour allocated blocks */
++ ar.lleft = iblock;
++ err = ext4_ext_search_left(inode, path, &ar.lleft, &ar.pleft);
++ if (err)
++ goto out2;
++ ar.lright = iblock;
++ err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright);
++ if (err)
++ goto out2;
+
/*
- * If sg table allocation fails, requeue request later.
- */
-- cmd->request_buffer = scsi_alloc_sgtable(cmd, GFP_ATOMIC);
-- if (unlikely(!cmd->request_buffer)) {
-+ if (unlikely(scsi_alloc_sgtable(cmd, GFP_ATOMIC))) {
- scsi_unprep_request(req);
- return BLKPREP_DEFER;
+ * See if request is beyond maximum number of blocks we can have in
+@@ -2368,10 +2431,21 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ newex.ee_len = cpu_to_le16(max_blocks);
+ err = ext4_ext_check_overlap(inode, &newex, path);
+ if (err)
+- allocated = le16_to_cpu(newex.ee_len);
++ allocated = ext4_ext_get_actual_len(&newex);
+ else
+ allocated = max_blocks;
+- newblock = ext4_new_blocks(handle, inode, goal, &allocated, &err);
++
++ /* allocate new block */
++ ar.inode = inode;
++ ar.goal = ext4_ext_find_goal(inode, path, iblock);
++ ar.logical = iblock;
++ ar.len = allocated;
++ if (S_ISREG(inode->i_mode))
++ ar.flags = EXT4_MB_HINT_DATA;
++ else
++ /* disable in-core preallocation for non-regular files */
++ ar.flags = 0;
++ newblock = ext4_mb_new_blocks(handle, &ar, &err);
+ if (!newblock)
+ goto out2;
+ ext_debug("allocate new block: goal %llu, found %llu/%lu\n",
+@@ -2379,14 +2453,17 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+
+ /* try to insert new extent into found leaf and return */
+ ext4_ext_store_pblock(&newex, newblock);
+- newex.ee_len = cpu_to_le16(allocated);
++ newex.ee_len = cpu_to_le16(ar.len);
+ if (create == EXT4_CREATE_UNINITIALIZED_EXT) /* Mark uninitialized */
+ ext4_ext_mark_uninitialized(&newex);
+ err = ext4_ext_insert_extent(handle, inode, path, &newex);
+ if (err) {
+ /* free data blocks we just allocated */
++ /* not a good idea to call discard here directly,
++ * but otherwise we'd need to call it every free() */
++ ext4_mb_discard_inode_preallocations(inode);
+ ext4_free_blocks(handle, inode, ext_pblock(&newex),
+- le16_to_cpu(newex.ee_len));
++ ext4_ext_get_actual_len(&newex), 0);
+ goto out2;
}
-@@ -1136,17 +1032,9 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
- * each segment.
- */
- count = blk_rq_map_sg(req->q, req, cmd->request_buffer);
-- if (likely(count <= cmd->use_sg)) {
-- cmd->use_sg = count;
-- return BLKPREP_OK;
-- }
--
-- printk(KERN_ERR "Incorrect number of segments after building list\n");
-- printk(KERN_ERR "counted %d, received %d\n", count, cmd->use_sg);
-- printk(KERN_ERR "req nr_sec %lu, cur_nr_sec %u\n", req->nr_sectors,
-- req->current_nr_sectors);
+
+@@ -2395,6 +2472,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+
+ /* previous routine could use block we allocated */
+ newblock = ext_pblock(&newex);
++ allocated = ext4_ext_get_actual_len(&newex);
+ outnew:
+ __set_bit(BH_New, &bh_result->b_state);
+
+@@ -2414,8 +2492,6 @@ out2:
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ }
+- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
-
-- return BLKPREP_KILL;
-+ BUG_ON(count > cmd->use_sg);
-+ cmd->use_sg = count;
-+ return BLKPREP_OK;
+ return err ? err : allocated;
}
- static struct scsi_cmnd *scsi_get_cmd_from_req(struct scsi_device *sdev,
-@@ -1557,7 +1445,7 @@ static void scsi_request_fn(struct request_queue *q)
+@@ -2423,7 +2499,7 @@ void ext4_ext_truncate(struct inode * inode, struct page *page)
+ {
+ struct address_space *mapping = inode->i_mapping;
+ struct super_block *sb = inode->i_sb;
+- unsigned long last_block;
++ ext4_lblk_t last_block;
+ handle_t *handle;
+ int err = 0;
- if (!scsi_host_queue_ready(q, shost, sdev))
- goto not_ready;
-- if (sdev->single_lun) {
-+ if (scsi_target(sdev)->single_lun) {
- if (scsi_target(sdev)->starget_sdev_user &&
- scsi_target(sdev)->starget_sdev_user != sdev)
- goto not_ready;
-@@ -1675,6 +1563,14 @@ struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost,
+@@ -2445,9 +2521,11 @@ void ext4_ext_truncate(struct inode * inode, struct page *page)
+ if (page)
+ ext4_block_truncate_page(handle, page, mapping, inode->i_size);
- if (!shost->use_clustering)
- clear_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
-+
-+ /*
-+ * set a reasonable default alignment on word boundaries: the
-+ * host and device may alter it using
-+ * blk_queue_update_dma_alignment() later.
-+ */
-+ blk_queue_dma_alignment(q, 0x03);
+- mutex_lock(&EXT4_I(inode)->truncate_mutex);
++ down_write(&EXT4_I(inode)->i_data_sem);
+ ext4_ext_invalidate_cache(inode);
+
++ ext4_mb_discard_inode_preallocations(inode);
+
- return q;
- }
- EXPORT_SYMBOL(__scsi_alloc_queue);
-@@ -1804,7 +1700,7 @@ void scsi_exit_queue(void)
- * @timeout: command timeout
- * @retries: number of retries before failing
- * @data: returns a structure abstracting the mode header data
-- * @sense: place to put sense data (or NULL if no sense to be collected).
-+ * @sshdr: place to put sense data (or NULL if no sense to be collected).
- * must be SCSI_SENSE_BUFFERSIZE big.
- *
- * Returns zero if successful; negative error number or scsi
-@@ -1871,8 +1767,7 @@ scsi_mode_select(struct scsi_device *sdev, int pf, int sp, int modepage,
- EXPORT_SYMBOL_GPL(scsi_mode_select);
+ /*
+ * TODO: optimization is possible here.
+ * Probably we need not scan at all,
+@@ -2481,7 +2559,7 @@ out_stop:
+ if (inode->i_nlink)
+ ext4_orphan_del(handle, inode);
- /**
-- * scsi_mode_sense - issue a mode sense, falling back from 10 to
-- * six bytes if necessary.
-+ * scsi_mode_sense - issue a mode sense, falling back from 10 to six bytes if necessary.
- * @sdev: SCSI device to be queried
- * @dbd: set if mode sense will allow block descriptors to be returned
- * @modepage: mode page being requested
-@@ -1881,13 +1776,13 @@ EXPORT_SYMBOL_GPL(scsi_mode_select);
- * @timeout: command timeout
- * @retries: number of retries before failing
- * @data: returns a structure abstracting the mode header data
-- * @sense: place to put sense data (or NULL if no sense to be collected).
-+ * @sshdr: place to put sense data (or NULL if no sense to be collected).
- * must be SCSI_SENSE_BUFFERSIZE big.
- *
- * Returns zero if unsuccessful, or the header offset (either 4
- * or 8 depending on whether a six or ten byte command was
- * issued) if successful.
-- **/
-+ */
- int
- scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
- unsigned char *buffer, int len, int timeout, int retries,
-@@ -1981,40 +1876,69 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
++ up_write(&EXT4_I(inode)->i_data_sem);
+ ext4_journal_stop(handle);
}
- EXPORT_SYMBOL(scsi_mode_sense);
-+/**
-+ * scsi_test_unit_ready - test if unit is ready
-+ * @sdev: scsi device to change the state of.
-+ * @timeout: command timeout
-+ * @retries: number of retries before failing
-+ * @sshdr_external: Optional pointer to struct scsi_sense_hdr for
-+ * returning sense. Make sure that this is cleared before passing
-+ * in.
-+ *
-+ * Returns zero if unsuccessful or an error if TUR failed. For
-+ * removable media, a return of NOT_READY or UNIT_ATTENTION is
-+ * translated to success, with the ->changed flag updated.
-+ **/
- int
--scsi_test_unit_ready(struct scsi_device *sdev, int timeout, int retries)
-+scsi_test_unit_ready(struct scsi_device *sdev, int timeout, int retries,
-+ struct scsi_sense_hdr *sshdr_external)
+@@ -2516,7 +2594,8 @@ int ext4_ext_writepage_trans_blocks(struct inode *inode, int num)
+ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
{
- char cmd[] = {
- TEST_UNIT_READY, 0, 0, 0, 0, 0,
- };
-- struct scsi_sense_hdr sshdr;
-+ struct scsi_sense_hdr *sshdr;
- int result;
--
-- result = scsi_execute_req(sdev, cmd, DMA_NONE, NULL, 0, &sshdr,
-- timeout, retries);
+ handle_t *handle;
+- ext4_fsblk_t block, max_blocks;
++ ext4_lblk_t block;
++ unsigned long max_blocks;
+ ext4_fsblk_t nblocks = 0;
+ int ret = 0;
+ int ret2 = 0;
+@@ -2544,6 +2623,7 @@ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
+ * modify 1 super block, 1 block bitmap and 1 group descriptor.
+ */
+ credits = EXT4_DATA_TRANS_BLOCKS(inode->i_sb) + 3;
++ down_write((&EXT4_I(inode)->i_data_sem));
+ retry:
+ while (ret >= 0 && ret < max_blocks) {
+ block = block + ret;
+@@ -2557,12 +2637,12 @@ retry:
+ ret = ext4_ext_get_blocks(handle, inode, block,
+ max_blocks, &map_bh,
+ EXT4_CREATE_UNINITIALIZED_EXT, 0);
+- WARN_ON(!ret);
+- if (!ret) {
++ WARN_ON(ret <= 0);
++ if (ret <= 0) {
+ ext4_error(inode->i_sb, "ext4_fallocate",
+- "ext4_ext_get_blocks returned 0! inode#%lu"
+- ", block=%llu, max_blocks=%llu",
+- inode->i_ino, block, max_blocks);
++ "ext4_ext_get_blocks returned error: "
++ "inode#%lu, block=%u, max_blocks=%lu",
++ inode->i_ino, block, max_blocks);
+ ret = -EIO;
+ ext4_mark_inode_dirty(handle, inode);
+ ret2 = ext4_journal_stop(handle);
+@@ -2600,6 +2680,7 @@ retry:
+ if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
+ goto retry;
+
++ up_write((&EXT4_I(inode)->i_data_sem));
+ /*
+ * Time to update the file size.
+ * Update only when preallocation was requested beyond the file size.
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 1a81cd6..ac35ec5 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -37,9 +37,9 @@ static int ext4_release_file (struct inode * inode, struct file * filp)
+ if ((filp->f_mode & FMODE_WRITE) &&
+ (atomic_read(&inode->i_writecount) == 1))
+ {
+- mutex_lock(&EXT4_I(inode)->truncate_mutex);
++ down_write(&EXT4_I(inode)->i_data_sem);
+ ext4_discard_reservation(inode);
+- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
++ up_write(&EXT4_I(inode)->i_data_sem);
+ }
+ if (is_dx(inode) && filp->private_data)
+ ext4_htree_free_dir_info(filp->private_data);
+@@ -56,8 +56,25 @@ ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
+ ssize_t ret;
+ int err;
+
+- ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
++ /*
++ * If we have encountered a bitmap-format file, the size limit
++ * is smaller than s_maxbytes, which is for extent-mapped files.
++ */
+
-+ if (!sshdr_external)
-+ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
-+ else
-+ sshdr = sshdr_external;
++ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++ size_t length = iov_length(iov, nr_segs);
+
++ if (pos > sbi->s_bitmap_maxbytes)
++ return -EFBIG;
+
-+ /* try to eat the UNIT_ATTENTION if there are enough retries */
-+ do {
-+ result = scsi_execute_req(sdev, cmd, DMA_NONE, NULL, 0, sshdr,
-+ timeout, retries);
-+ } while ((driver_byte(result) & DRIVER_SENSE) &&
-+ sshdr && sshdr->sense_key == UNIT_ATTENTION &&
-+ --retries);
++ if (pos + length > sbi->s_bitmap_maxbytes) {
++ nr_segs = iov_shorten((struct iovec *)iov, nr_segs,
++ sbi->s_bitmap_maxbytes - pos);
++ }
++ }
+
-+ if (!sshdr)
-+ /* could not allocate sense buffer, so can't process it */
-+ return result;
++ ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
+ /*
+ * Skip flushing if there was an error, or if nothing was written.
+ */
+diff --git a/fs/ext4/group.h b/fs/ext4/group.h
+index 1577910..7eb0604 100644
+--- a/fs/ext4/group.h
++++ b/fs/ext4/group.h
+@@ -14,14 +14,16 @@ extern __le16 ext4_group_desc_csum(struct ext4_sb_info *sbi, __u32 group,
+ extern int ext4_group_desc_csum_verify(struct ext4_sb_info *sbi, __u32 group,
+ struct ext4_group_desc *gdp);
+ struct buffer_head *read_block_bitmap(struct super_block *sb,
+- unsigned int block_group);
++ ext4_group_t block_group);
+ extern unsigned ext4_init_block_bitmap(struct super_block *sb,
+- struct buffer_head *bh, int group,
++ struct buffer_head *bh,
++ ext4_group_t group,
+ struct ext4_group_desc *desc);
+ #define ext4_free_blocks_after_init(sb, group, desc) \
+ ext4_init_block_bitmap(sb, NULL, group, desc)
+ extern unsigned ext4_init_inode_bitmap(struct super_block *sb,
+- struct buffer_head *bh, int group,
++ struct buffer_head *bh,
++ ext4_group_t group,
+ struct ext4_group_desc *desc);
+ extern void mark_bitmap_end(int start_bit, int end_bit, char *bitmap);
+ #endif /* _LINUX_EXT4_GROUP_H */
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index c61f37f..575b521 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -64,8 +64,8 @@ void mark_bitmap_end(int start_bit, int end_bit, char *bitmap)
+ }
- if ((driver_byte(result) & DRIVER_SENSE) && sdev->removable) {
+ /* Initializes an uninitialized inode bitmap */
+-unsigned ext4_init_inode_bitmap(struct super_block *sb,
+- struct buffer_head *bh, int block_group,
++unsigned ext4_init_inode_bitmap(struct super_block *sb, struct buffer_head *bh,
++ ext4_group_t block_group,
+ struct ext4_group_desc *gdp)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+@@ -75,7 +75,7 @@ unsigned ext4_init_inode_bitmap(struct super_block *sb,
+ /* If checksum is bad mark all blocks and inodes use to prevent
+ * allocation, essentially implementing a per-group read-only flag. */
+ if (!ext4_group_desc_csum_verify(sbi, block_group, gdp)) {
+- ext4_error(sb, __FUNCTION__, "Checksum bad for group %u\n",
++ ext4_error(sb, __FUNCTION__, "Checksum bad for group %lu\n",
+ block_group);
+ gdp->bg_free_blocks_count = 0;
+ gdp->bg_free_inodes_count = 0;
+@@ -98,7 +98,7 @@ unsigned ext4_init_inode_bitmap(struct super_block *sb,
+ * Return buffer_head of bitmap on success or NULL.
+ */
+ static struct buffer_head *
+-read_inode_bitmap(struct super_block * sb, unsigned long block_group)
++read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ {
+ struct ext4_group_desc *desc;
+ struct buffer_head *bh = NULL;
+@@ -152,7 +152,7 @@ void ext4_free_inode (handle_t *handle, struct inode * inode)
+ unsigned long ino;
+ struct buffer_head *bitmap_bh = NULL;
+ struct buffer_head *bh2;
+- unsigned long block_group;
++ ext4_group_t block_group;
+ unsigned long bit;
+ struct ext4_group_desc * gdp;
+ struct ext4_super_block * es;
+@@ -260,12 +260,14 @@ error_return:
+ * For other inodes, search forward from the parent directory\'s block
+ * group to find a free inode.
+ */
+-static int find_group_dir(struct super_block *sb, struct inode *parent)
++static int find_group_dir(struct super_block *sb, struct inode *parent,
++ ext4_group_t *best_group)
+ {
+- int ngroups = EXT4_SB(sb)->s_groups_count;
++ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
+ unsigned int freei, avefreei;
+ struct ext4_group_desc *desc, *best_desc = NULL;
+- int group, best_group = -1;
++ ext4_group_t group;
++ int ret = -1;
-- if ((scsi_sense_valid(&sshdr)) &&
-- ((sshdr.sense_key == UNIT_ATTENTION) ||
-- (sshdr.sense_key == NOT_READY))) {
-+ if ((scsi_sense_valid(sshdr)) &&
-+ ((sshdr->sense_key == UNIT_ATTENTION) ||
-+ (sshdr->sense_key == NOT_READY))) {
- sdev->changed = 1;
- result = 0;
+ freei = percpu_counter_read_positive(&EXT4_SB(sb)->s_freeinodes_counter);
+ avefreei = freei / ngroups;
+@@ -279,11 +281,12 @@ static int find_group_dir(struct super_block *sb, struct inode *parent)
+ if (!best_desc ||
+ (le16_to_cpu(desc->bg_free_blocks_count) >
+ le16_to_cpu(best_desc->bg_free_blocks_count))) {
+- best_group = group;
++ *best_group = group;
+ best_desc = desc;
++ ret = 0;
}
}
-+ if (!sshdr_external)
-+ kfree(sshdr);
- return result;
+- return best_group;
++ return ret;
}
- EXPORT_SYMBOL(scsi_test_unit_ready);
- /**
-- * scsi_device_set_state - Take the given device through the device
-- * state model.
-+ * scsi_device_set_state - Take the given device through the device state model.
- * @sdev: scsi device to change the state of.
- * @state: state to change to.
- *
- * Returns zero if unsuccessful or an error if the requested
- * transition is illegal.
-- **/
-+ */
- int
- scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
- {
-@@ -2264,7 +2188,7 @@ EXPORT_SYMBOL_GPL(sdev_evt_send_simple);
- * Must be called with user context, may sleep.
- *
- * Returns zero if unsuccessful or an error if not.
-- **/
-+ */
- int
- scsi_device_quiesce(struct scsi_device *sdev)
- {
-@@ -2289,7 +2213,7 @@ EXPORT_SYMBOL(scsi_device_quiesce);
- * queues.
- *
- * Must be called with user context, may sleep.
-- **/
-+ */
- void
- scsi_device_resume(struct scsi_device *sdev)
- {
-@@ -2326,8 +2250,7 @@ scsi_target_resume(struct scsi_target *starget)
- EXPORT_SYMBOL(scsi_target_resume);
+ /*
+@@ -314,12 +317,13 @@ static int find_group_dir(struct super_block *sb, struct inode *parent)
+ #define INODE_COST 64
+ #define BLOCK_COST 256
- /**
-- * scsi_internal_device_block - internal function to put a device
-- * temporarily into the SDEV_BLOCK state
-+ * scsi_internal_device_block - internal function to put a device temporarily into the SDEV_BLOCK state
- * @sdev: device to block
- *
- * Block request made by scsi lld's to temporarily stop all
-@@ -2342,7 +2265,7 @@ EXPORT_SYMBOL(scsi_target_resume);
- * state, all commands are deferred until the scsi lld reenables
- * the device with scsi_device_unblock or device_block_tmo fires.
- * This routine assumes the host_lock is held on entry.
-- **/
-+ */
- int
- scsi_internal_device_block(struct scsi_device *sdev)
- {
-@@ -2382,7 +2305,7 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_block);
- * (which must be a legal transition) allowing the midlayer to
- * goose the queue for this device. This routine assumes the
- * host_lock is held upon entry.
-- **/
-+ */
- int
- scsi_internal_device_unblock(struct scsi_device *sdev)
+-static int find_group_orlov(struct super_block *sb, struct inode *parent)
++static int find_group_orlov(struct super_block *sb, struct inode *parent,
++ ext4_group_t *group)
{
-@@ -2460,7 +2383,7 @@ EXPORT_SYMBOL_GPL(scsi_target_unblock);
+- int parent_group = EXT4_I(parent)->i_block_group;
++ ext4_group_t parent_group = EXT4_I(parent)->i_block_group;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ struct ext4_super_block *es = sbi->s_es;
+- int ngroups = sbi->s_groups_count;
++ ext4_group_t ngroups = sbi->s_groups_count;
+ int inodes_per_group = EXT4_INODES_PER_GROUP(sb);
+ unsigned int freei, avefreei;
+ ext4_fsblk_t freeb, avefreeb;
+@@ -327,7 +331,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
+ unsigned int ndirs;
+ int max_debt, max_dirs, min_inodes;
+ ext4_grpblk_t min_blocks;
+- int group = -1, i;
++ ext4_group_t i;
+ struct ext4_group_desc *desc;
- /**
- * scsi_kmap_atomic_sg - find and atomically map an sg-elemnt
-- * @sg: scatter-gather list
-+ * @sgl: scatter-gather list
- * @sg_count: number of segments in sg
- * @offset: offset in bytes into sg, on return offset into the mapped area
- * @len: bytes to map, on return number of bytes mapped
-@@ -2509,8 +2432,7 @@ void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count,
- EXPORT_SYMBOL(scsi_kmap_atomic_sg);
+ freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter);
+@@ -340,13 +344,14 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
+ if ((parent == sb->s_root->d_inode) ||
+ (EXT4_I(parent)->i_flags & EXT4_TOPDIR_FL)) {
+ int best_ndir = inodes_per_group;
+- int best_group = -1;
++ ext4_group_t grp;
++ int ret = -1;
- /**
-- * scsi_kunmap_atomic_sg - atomically unmap a virtual address, previously
-- * mapped with scsi_kmap_atomic_sg
-+ * scsi_kunmap_atomic_sg - atomically unmap a virtual address, previously mapped with scsi_kmap_atomic_sg
- * @virt: virtual address to be unmapped
- */
- void scsi_kunmap_atomic_sg(void *virt)
-diff --git a/drivers/scsi/scsi_netlink.c b/drivers/scsi/scsi_netlink.c
-index 40579ed..370c78c 100644
---- a/drivers/scsi/scsi_netlink.c
-+++ b/drivers/scsi/scsi_netlink.c
-@@ -32,11 +32,12 @@ EXPORT_SYMBOL_GPL(scsi_nl_sock);
+- get_random_bytes(&group, sizeof(group));
+- parent_group = (unsigned)group % ngroups;
++ get_random_bytes(&grp, sizeof(grp));
++ parent_group = (unsigned)grp % ngroups;
+ for (i = 0; i < ngroups; i++) {
+- group = (parent_group + i) % ngroups;
+- desc = ext4_get_group_desc (sb, group, NULL);
++ grp = (parent_group + i) % ngroups;
++ desc = ext4_get_group_desc(sb, grp, NULL);
+ if (!desc || !desc->bg_free_inodes_count)
+ continue;
+ if (le16_to_cpu(desc->bg_used_dirs_count) >= best_ndir)
+@@ -355,11 +360,12 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
+ continue;
+ if (le16_to_cpu(desc->bg_free_blocks_count) < avefreeb)
+ continue;
+- best_group = group;
++ *group = grp;
++ ret = 0;
+ best_ndir = le16_to_cpu(desc->bg_used_dirs_count);
+ }
+- if (best_group >= 0)
+- return best_group;
++ if (ret == 0)
++ return ret;
+ goto fallback;
+ }
+@@ -380,8 +386,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
+ max_debt = 1;
- /**
-- * scsi_nl_rcv_msg -
-- * Receive message handler. Extracts message from a receive buffer.
-+ * scsi_nl_rcv_msg - Receive message handler.
-+ * @skb: socket receive buffer
-+ *
-+ * Description: Extracts message from a receive buffer.
- * Validates message header and calls appropriate transport message handler
- *
-- * @skb: socket receive buffer
- *
- **/
- static void
-@@ -99,9 +100,7 @@ next_msg:
+ for (i = 0; i < ngroups; i++) {
+- group = (parent_group + i) % ngroups;
+- desc = ext4_get_group_desc (sb, group, NULL);
++ *group = (parent_group + i) % ngroups;
++ desc = ext4_get_group_desc(sb, *group, NULL);
+ if (!desc || !desc->bg_free_inodes_count)
+ continue;
+ if (le16_to_cpu(desc->bg_used_dirs_count) >= max_dirs)
+@@ -390,17 +396,16 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
+ continue;
+ if (le16_to_cpu(desc->bg_free_blocks_count) < min_blocks)
+ continue;
+- return group;
++ return 0;
+ }
+ fallback:
+ for (i = 0; i < ngroups; i++) {
+- group = (parent_group + i) % ngroups;
+- desc = ext4_get_group_desc (sb, group, NULL);
+- if (!desc || !desc->bg_free_inodes_count)
+- continue;
+- if (le16_to_cpu(desc->bg_free_inodes_count) >= avefreei)
+- return group;
++ *group = (parent_group + i) % ngroups;
++ desc = ext4_get_group_desc(sb, *group, NULL);
++ if (desc && desc->bg_free_inodes_count &&
++ le16_to_cpu(desc->bg_free_inodes_count) >= avefreei)
++ return 0;
+ }
- /**
-- * scsi_nl_rcv_event -
-- * Event handler for a netlink socket.
-- *
-+ * scsi_nl_rcv_event - Event handler for a netlink socket.
- * @this: event notifier block
- * @event: event type
- * @ptr: event payload
-@@ -129,9 +128,7 @@ static struct notifier_block scsi_netlink_notifier = {
+ if (avefreei) {
+@@ -415,21 +420,22 @@ fallback:
+ return -1;
+ }
+-static int find_group_other(struct super_block *sb, struct inode *parent)
++static int find_group_other(struct super_block *sb, struct inode *parent,
++ ext4_group_t *group)
+ {
+- int parent_group = EXT4_I(parent)->i_block_group;
+- int ngroups = EXT4_SB(sb)->s_groups_count;
++ ext4_group_t parent_group = EXT4_I(parent)->i_block_group;
++ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
+ struct ext4_group_desc *desc;
+- int group, i;
++ ext4_group_t i;
- /**
-- * scsi_netlink_init -
-- * Called by SCSI subsystem to intialize the SCSI transport netlink
-- * interface
-+ * scsi_netlink_init - Called by SCSI subsystem to intialize the SCSI transport netlink interface
- *
- **/
- void
-@@ -160,16 +157,14 @@ scsi_netlink_init(void)
+ /*
+ * Try to place the inode in its parent directory
+ */
+- group = parent_group;
+- desc = ext4_get_group_desc (sb, group, NULL);
++ *group = parent_group;
++ desc = ext4_get_group_desc(sb, *group, NULL);
+ if (desc && le16_to_cpu(desc->bg_free_inodes_count) &&
+ le16_to_cpu(desc->bg_free_blocks_count))
+- return group;
++ return 0;
+ /*
+ * We're going to place this inode in a different blockgroup from its
+@@ -440,33 +446,33 @@ static int find_group_other(struct super_block *sb, struct inode *parent)
+ *
+ * So add our directory's i_ino into the starting point for the hash.
+ */
+- group = (group + parent->i_ino) % ngroups;
++ *group = (*group + parent->i_ino) % ngroups;
- /**
-- * scsi_netlink_exit -
-- * Called by SCSI subsystem to disable the SCSI transport netlink
-- * interface
-+ * scsi_netlink_exit - Called by SCSI subsystem to disable the SCSI transport netlink interface
- *
- **/
- void
- scsi_netlink_exit(void)
- {
- if (scsi_nl_sock) {
-- sock_release(scsi_nl_sock->sk_socket);
-+ netlink_kernel_release(scsi_nl_sock);
- netlink_unregister_notifier(&scsi_netlink_notifier);
+ /*
+ * Use a quadratic hash to find a group with a free inode and some free
+ * blocks.
+ */
+ for (i = 1; i < ngroups; i <<= 1) {
+- group += i;
+- if (group >= ngroups)
+- group -= ngroups;
+- desc = ext4_get_group_desc (sb, group, NULL);
++ *group += i;
++ if (*group >= ngroups)
++ *group -= ngroups;
++ desc = ext4_get_group_desc(sb, *group, NULL);
+ if (desc && le16_to_cpu(desc->bg_free_inodes_count) &&
+ le16_to_cpu(desc->bg_free_blocks_count))
+- return group;
++ return 0;
}
-diff --git a/drivers/scsi/scsi_proc.c b/drivers/scsi/scsi_proc.c
-index bb6f051..ed39515 100644
---- a/drivers/scsi/scsi_proc.c
-+++ b/drivers/scsi/scsi_proc.c
-@@ -45,6 +45,16 @@ static struct proc_dir_entry *proc_scsi;
- /* Protect sht->present and sht->proc_dir */
- static DEFINE_MUTEX(global_host_template_mutex);
+ /*
+ * That failed: try linear search for a free inode, even if that group
+ * has no free blocks.
+ */
+- group = parent_group;
++ *group = parent_group;
+ for (i = 0; i < ngroups; i++) {
+- if (++group >= ngroups)
+- group = 0;
+- desc = ext4_get_group_desc (sb, group, NULL);
++ if (++*group >= ngroups)
++ *group = 0;
++ desc = ext4_get_group_desc(sb, *group, NULL);
+ if (desc && le16_to_cpu(desc->bg_free_inodes_count))
+- return group;
++ return 0;
+ }
-+/**
-+ * proc_scsi_read - handle read from /proc by calling host's proc_info() command
-+ * @buffer: passed to proc_info
-+ * @start: passed to proc_info
-+ * @offset: passed to proc_info
-+ * @length: passed to proc_info
-+ * @eof: returns whether length read was less than requested
-+ * @data: pointer to a &struct Scsi_Host
-+ */
-+
- static int proc_scsi_read(char *buffer, char **start, off_t offset,
- int length, int *eof, void *data)
- {
-@@ -57,6 +67,13 @@ static int proc_scsi_read(char *buffer, char **start, off_t offset,
- return n;
- }
+ return -1;
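The search order that the hunk above converts to pointer-returned groups (try the parent's group, then a power-of-two "quadratic hash" probe seeded with the directory inode number, then a plain linear sweep) can be sketched outside the kernel. `free_map`/`has_free` below are toy stand-ins for the group-descriptor free-inode/free-block checks, not real ext4 structures:

```c
#include <assert.h>

/* Toy state standing in for the group descriptors: free_map[g] is nonzero
 * when group g has both free inodes and free blocks. */
#define NGROUPS 8
static unsigned char free_map[NGROUPS];

static int has_free(unsigned group)
{
	return free_map[group];
}

/* Mirror of the probe order in find_group_other(): the parent's group
 * first, then a power-of-two stride sequence, then a linear sweep.
 * Returns 0 with the chosen group stored through *group, or -1 when
 * every group is full -- matching the new calling convention. */
static int find_group_sketch(unsigned parent_group, unsigned long dir_ino,
			     unsigned *group)
{
	unsigned i;

	*group = parent_group;
	if (has_free(*group))
		return 0;

	*group = (*group + dir_ino) % NGROUPS;
	for (i = 1; i < NGROUPS; i <<= 1) {
		*group += i;
		if (*group >= NGROUPS)
			*group -= NGROUPS;
		if (has_free(*group))
			return 0;
	}

	*group = parent_group;
	for (i = 0; i < NGROUPS; i++) {
		if (++*group >= NGROUPS)
			*group = 0;
		if (has_free(*group))
			return 0;
	}
	return -1;
}
```

Returning the group through a pointer is what lets the function widen to `ext4_group_t` while keeping a plain `int` status code.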
+@@ -487,16 +493,17 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
+ struct super_block *sb;
+ struct buffer_head *bitmap_bh = NULL;
+ struct buffer_head *bh2;
+- int group;
++ ext4_group_t group = 0;
+ unsigned long ino = 0;
+ struct inode * inode;
+ struct ext4_group_desc * gdp = NULL;
+ struct ext4_super_block * es;
+ struct ext4_inode_info *ei;
+ struct ext4_sb_info *sbi;
+- int err = 0;
++ int ret2, err = 0;
+ struct inode *ret;
+- int i, free = 0;
++ ext4_group_t i;
++ int free = 0;
-+/**
-+ * proc_scsi_write_proc - Handle write to /proc by calling host's proc_info()
-+ * @file: not used
-+ * @buf: source of data to write.
-+ * @count: number of bytes (at most PROC_BLOCK_SIZE) to write.
-+ * @data: pointer to &struct Scsi_Host
-+ */
- static int proc_scsi_write_proc(struct file *file, const char __user *buf,
- unsigned long count, void *data)
- {
-@@ -80,6 +97,13 @@ out:
- return ret;
- }
+ /* Cannot create files in a deleted directory */
+ if (!dir || !dir->i_nlink)
+@@ -512,14 +519,14 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
+ es = sbi->s_es;
+ if (S_ISDIR(mode)) {
+ if (test_opt (sb, OLDALLOC))
+- group = find_group_dir(sb, dir);
++ ret2 = find_group_dir(sb, dir, &group);
+ else
+- group = find_group_orlov(sb, dir);
++ ret2 = find_group_orlov(sb, dir, &group);
+ } else
+- group = find_group_other(sb, dir);
++ ret2 = find_group_other(sb, dir, &group);
-+/**
-+ * scsi_proc_hostdir_add - Create directory in /proc for a scsi host
-+ * @sht: owner of this directory
-+ *
-+ * Sets sht->proc_dir to the new directory.
-+ */
-+
- void scsi_proc_hostdir_add(struct scsi_host_template *sht)
- {
- if (!sht->proc_info)
-@@ -97,6 +121,10 @@ void scsi_proc_hostdir_add(struct scsi_host_template *sht)
- mutex_unlock(&global_host_template_mutex);
- }
+ err = -ENOSPC;
+- if (group == -1)
++ if (ret2 == -1)
+ goto out;
-+/**
-+ * scsi_proc_hostdir_rm - remove directory in /proc for a scsi host
-+ * @sht: owner of directory
-+ */
- void scsi_proc_hostdir_rm(struct scsi_host_template *sht)
+ for (i = 0; i < sbi->s_groups_count; i++) {
+@@ -583,7 +590,7 @@ got:
+ ino > EXT4_INODES_PER_GROUP(sb)) {
+ ext4_error(sb, __FUNCTION__,
+ "reserved inode or inode > inodes count - "
+- "block_group = %d, inode=%lu", group,
++ "block_group = %lu, inode=%lu", group,
+ ino + group * EXT4_INODES_PER_GROUP(sb));
+ err = -EIO;
+ goto fail;
+@@ -702,7 +709,6 @@ got:
+ if (!S_ISDIR(mode))
+ ei->i_flags &= ~EXT4_DIRSYNC_FL;
+ ei->i_file_acl = 0;
+- ei->i_dir_acl = 0;
+ ei->i_dtime = 0;
+ ei->i_block_alloc_info = NULL;
+ ei->i_block_group = group;
+@@ -741,13 +747,10 @@ got:
+ if (test_opt(sb, EXTENTS)) {
+ EXT4_I(inode)->i_flags |= EXT4_EXTENTS_FL;
+ ext4_ext_tree_init(handle, inode);
+- if (!EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS)) {
+- err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
+- if (err) goto fail;
+- EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS);
+- BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "call ext4_journal_dirty_metadata");
+- err = ext4_journal_dirty_metadata(handle, EXT4_SB(sb)->s_sbh);
+- }
++ err = ext4_update_incompat_feature(handle, sb,
++ EXT4_FEATURE_INCOMPAT_EXTENTS);
++ if (err)
++ goto fail;
+ }
+
+ ext4_debug("allocating inode %lu\n", inode->i_ino);
+@@ -777,7 +780,7 @@ fail_drop:
+ struct inode *ext4_orphan_get(struct super_block *sb, unsigned long ino)
{
- if (!sht->proc_info)
-@@ -110,6 +138,11 @@ void scsi_proc_hostdir_rm(struct scsi_host_template *sht)
- mutex_unlock(&global_host_template_mutex);
- }
+ unsigned long max_ino = le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count);
+- unsigned long block_group;
++ ext4_group_t block_group;
+ int bit;
+ struct buffer_head *bitmap_bh = NULL;
+ struct inode *inode = NULL;
+@@ -833,7 +836,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
+ {
+ unsigned long desc_count;
+ struct ext4_group_desc *gdp;
+- int i;
++ ext4_group_t i;
+ #ifdef EXT4FS_DEBUG
+ struct ext4_super_block *es;
+ unsigned long bitmap_count, x;
+@@ -854,7 +857,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
+ continue;
-+
-+/**
-+ * scsi_proc_host_add - Add entry for this host to appropriate /proc dir
-+ * @shost: host to add
-+ */
- void scsi_proc_host_add(struct Scsi_Host *shost)
+ x = ext4_count_free(bitmap_bh, EXT4_INODES_PER_GROUP(sb) / 8);
+- printk("group %d: stored = %d, counted = %lu\n",
++ printk(KERN_DEBUG "group %lu: stored = %d, counted = %lu\n",
+ i, le16_to_cpu(gdp->bg_free_inodes_count), x);
+ bitmap_count += x;
+ }
+@@ -879,7 +882,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
+ unsigned long ext4_count_dirs (struct super_block * sb)
{
- struct scsi_host_template *sht = shost->hostt;
-@@ -133,6 +166,10 @@ void scsi_proc_host_add(struct Scsi_Host *shost)
- p->owner = sht->module;
- }
+ unsigned long count = 0;
+- int i;
++ ext4_group_t i;
-+/**
-+ * scsi_proc_host_rm - remove this host's entry from /proc
-+ * @shost: which host
-+ */
- void scsi_proc_host_rm(struct Scsi_Host *shost)
+ for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
+ struct ext4_group_desc *gdp = ext4_get_group_desc (sb, i, NULL);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 5489703..bb717cb 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -105,7 +105,7 @@ int ext4_forget(handle_t *handle, int is_metadata, struct inode *inode,
+ */
+ static unsigned long blocks_for_truncate(struct inode *inode)
{
- char name[10];
-@@ -143,7 +180,14 @@ void scsi_proc_host_rm(struct Scsi_Host *shost)
- sprintf(name,"%d", shost->host_no);
- remove_proc_entry(name, shost->hostt->proc_dir);
+- unsigned long needed;
++ ext4_lblk_t needed;
+
+ needed = inode->i_blocks >> (inode->i_sb->s_blocksize_bits - 9);
+
+@@ -243,13 +243,6 @@ static inline void add_chain(Indirect *p, struct buffer_head *bh, __le32 *v)
+ p->bh = bh;
}
+
+-static int verify_chain(Indirect *from, Indirect *to)
+-{
+- while (from <= to && from->key == *from->p)
+- from++;
+- return (from > to);
+-}
-
-+/**
-+ * proc_print_scsidevice - return data about this host
-+ * @dev: A scsi device
-+ * @data: &struct seq_file to output to.
-+ *
-+ * Description: prints Host, Channel, Id, Lun, Vendor, Model, Rev, Type,
-+ * and revision.
-+ */
- static int proc_print_scsidevice(struct device *dev, void *data)
- {
- struct scsi_device *sdev = to_scsi_device(dev);
-@@ -189,6 +233,21 @@ static int proc_print_scsidevice(struct device *dev, void *data)
- return 0;
- }
+ /**
+ * ext4_block_to_path - parse the block number into array of offsets
+ * @inode: inode in question (we are only interested in its superblock)
+@@ -282,7 +275,8 @@ static int verify_chain(Indirect *from, Indirect *to)
+ */
-+/**
-+ * scsi_add_single_device - Respond to user request to probe for/add device
-+ * @host: user-supplied decimal integer
-+ * @channel: user-supplied decimal integer
-+ * @id: user-supplied decimal integer
-+ * @lun: user-supplied decimal integer
-+ *
-+ * Description: called by writing "scsi add-single-device" to /proc/scsi/scsi.
-+ *
-+ * does scsi_host_lookup() and either user_scan() if that transport
-+ * type supports it, or else scsi_scan_host_selected()
+ static int ext4_block_to_path(struct inode *inode,
+- long i_block, int offsets[4], int *boundary)
++ ext4_lblk_t i_block,
++ ext4_lblk_t offsets[4], int *boundary)
+ {
+ int ptrs = EXT4_ADDR_PER_BLOCK(inode->i_sb);
+ int ptrs_bits = EXT4_ADDR_PER_BLOCK_BITS(inode->i_sb);
+@@ -313,7 +307,10 @@ static int ext4_block_to_path(struct inode *inode,
+ offsets[n++] = i_block & (ptrs - 1);
+ final = ptrs;
+ } else {
+- ext4_warning(inode->i_sb, "ext4_block_to_path", "block > big");
++ ext4_warning(inode->i_sb, "ext4_block_to_path",
++ "block %lu > max",
++ i_block + direct_blocks +
++ indirect_blocks + double_blocks);
+ }
+ if (boundary)
+ *boundary = final - 1 - (i_block & (ptrs - 1));
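The hunk above only widens `ext4_block_to_path()`'s offsets to `ext4_lblk_t`; the underlying decomposition of a logical block into per-level indirect offsets is worth seeing whole. This is a simplified sketch (direct, single- and double-indirect levels only; `NDIR` and the slot numbers are illustrative, not the real superblock-derived constants):

```c
#include <assert.h>

/* Simplified ext2/3/4-style block-to-path walk: map a logical block number
 * onto up to three per-level offsets.  ptrs is the number of block pointers
 * that fit in one indirect block, ptrs_bits its log2. */
#define NDIR 12	/* direct pointers held in the inode itself */

static int block_to_path(unsigned long i_block, unsigned long ptrs,
			 unsigned ptrs_bits, unsigned long offsets[4])
{
	int n = 0;

	if (i_block < NDIR) {
		offsets[n++] = i_block;
	} else if ((i_block -= NDIR) < ptrs) {
		offsets[n++] = NDIR;		/* slot of the indirect block */
		offsets[n++] = i_block;
	} else if ((i_block -= ptrs) < ptrs << ptrs_bits) {
		offsets[n++] = NDIR + 1;	/* double-indirect slot */
		offsets[n++] = i_block >> ptrs_bits;
		offsets[n++] = i_block & (ptrs - 1);
	} else {
		return 0;	/* out of range: the "block > max" warning case */
	}
	return n;		/* depth of the lookup chain */
}
```

The out-of-range branch is where the patch improves the warning from a bare "block > big" to printing the offending block number against the maximum.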
+@@ -344,12 +341,14 @@ static int ext4_block_to_path(struct inode *inode,
+ * (pointer to last triple returned, *@err == 0)
+ * or when it gets an IO error reading an indirect block
+ * (ditto, *@err == -EIO)
+- * or when it notices that chain had been changed while it was reading
+- * (ditto, *@err == -EAGAIN)
+ * or when it reads all @depth-1 indirect blocks successfully and finds
+ * the whole chain, all way to the data (returns %NULL, *err == 0).
+ *
-+ * Note: this seems to be aimed exclusively at SCSI parallel busses.
-+ */
-+
- static int scsi_add_single_device(uint host, uint channel, uint id, uint lun)
++ * Need to be called with
++ * down_read(&EXT4_I(inode)->i_data_sem)
+ */
+-static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
++static Indirect *ext4_get_branch(struct inode *inode, int depth,
++ ext4_lblk_t *offsets,
+ Indirect chain[4], int *err)
{
- struct Scsi_Host *shost;
-@@ -206,6 +265,16 @@ static int scsi_add_single_device(uint host, uint channel, uint id, uint lun)
- return error;
- }
+ struct super_block *sb = inode->i_sb;
+@@ -365,9 +364,6 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
+ bh = sb_bread(sb, le32_to_cpu(p->key));
+ if (!bh)
+ goto failure;
+- /* Reader: pointers */
+- if (!verify_chain(chain, p))
+- goto changed;
+ add_chain(++p, bh, (__le32*)bh->b_data + *++offsets);
+ /* Reader: end */
+ if (!p->key)
+@@ -375,10 +371,6 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
+ }
+ return NULL;
-+/**
-+ * scsi_remove_single_device - Respond to user request to remove a device
-+ * @host: user-supplied decimal integer
-+ * @channel: user-supplied decimal integer
-+ * @id: user-supplied decimal integer
-+ * @lun: user-supplied decimal integer
-+ *
-+ * Description: called by writing "scsi remove-single-device" to
-+ * /proc/scsi/scsi. Does a scsi_device_lookup() and scsi_remove_device()
-+ */
- static int scsi_remove_single_device(uint host, uint channel, uint id, uint lun)
+-changed:
+- brelse(bh);
+- *err = -EAGAIN;
+- goto no_block;
+ failure:
+ *err = -EIO;
+ no_block:
+@@ -445,7 +437,7 @@ static ext4_fsblk_t ext4_find_near(struct inode *inode, Indirect *ind)
+ * stores it in *@goal and returns zero.
+ */
+
+-static ext4_fsblk_t ext4_find_goal(struct inode *inode, long block,
++static ext4_fsblk_t ext4_find_goal(struct inode *inode, ext4_lblk_t block,
+ Indirect chain[4], Indirect *partial)
{
- struct scsi_device *sdev;
-@@ -226,6 +295,25 @@ static int scsi_remove_single_device(uint host, uint channel, uint id, uint lun)
- return error;
+ struct ext4_block_alloc_info *block_i;
+@@ -559,7 +551,7 @@ static int ext4_alloc_blocks(handle_t *handle, struct inode *inode,
+ return ret;
+ failed_out:
+ for (i = 0; i <index; i++)
+- ext4_free_blocks(handle, inode, new_blocks[i], 1);
++ ext4_free_blocks(handle, inode, new_blocks[i], 1, 0);
+ return ret;
}
-+/**
-+ * proc_scsi_write - handle writes to /proc/scsi/scsi
-+ * @file: not used
-+ * @buf: buffer to write
-+ * @length: length of buf, at most PAGE_SIZE
-+ * @ppos: not used
-+ *
-+ * Description: this provides a legacy mechanism to add or remove devices by
-+ * Host, Channel, ID, and Lun. To use,
-+ * "echo 'scsi add-single-device 0 1 2 3' > /proc/scsi/scsi" or
-+ * "echo 'scsi remove-single-device 0 1 2 3' > /proc/scsi/scsi" with
-+ * "0 1 2 3" replaced by the Host, Channel, Id, and Lun.
-+ *
-+ * Note: this seems to be aimed at parallel SCSI. Most modern busses (USB,
-+ * SATA, Firewire, Fibre Channel, etc) dynamically assign these values to
-+ * provide a unique identifier and nothing more.
-+ */
-+
-+
- static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
- size_t length, loff_t *ppos)
+@@ -590,7 +582,7 @@ failed_out:
+ */
+ static int ext4_alloc_branch(handle_t *handle, struct inode *inode,
+ int indirect_blks, int *blks, ext4_fsblk_t goal,
+- int *offsets, Indirect *branch)
++ ext4_lblk_t *offsets, Indirect *branch)
{
-@@ -291,6 +379,11 @@ static ssize_t proc_scsi_write(struct file *file, const char __user *buf,
+ int blocksize = inode->i_sb->s_blocksize;
+ int i, n = 0;
+@@ -658,9 +650,9 @@ failed:
+ ext4_journal_forget(handle, branch[i].bh);
+ }
+ for (i = 0; i <indirect_blks; i++)
+- ext4_free_blocks(handle, inode, new_blocks[i], 1);
++ ext4_free_blocks(handle, inode, new_blocks[i], 1, 0);
+
+- ext4_free_blocks(handle, inode, new_blocks[i], num);
++ ext4_free_blocks(handle, inode, new_blocks[i], num, 0);
+
return err;
}
-
-+/**
-+ * proc_scsi_show - show contents of /proc/scsi/scsi (attached devices)
-+ * @s: output goes here
-+ * @p: not used
-+ */
- static int proc_scsi_show(struct seq_file *s, void *p)
+@@ -680,7 +672,7 @@ failed:
+ * chain to new block and return 0.
+ */
+ static int ext4_splice_branch(handle_t *handle, struct inode *inode,
+- long block, Indirect *where, int num, int blks)
++ ext4_lblk_t block, Indirect *where, int num, int blks)
{
- seq_printf(s, "Attached devices:\n");
-@@ -298,10 +391,17 @@ static int proc_scsi_show(struct seq_file *s, void *p)
- return 0;
- }
+ int i;
+ int err = 0;
+@@ -757,9 +749,10 @@ err_out:
+ for (i = 1; i <= num; i++) {
+ BUFFER_TRACE(where[i].bh, "call jbd2_journal_forget");
+ ext4_journal_forget(handle, where[i].bh);
+- ext4_free_blocks(handle,inode,le32_to_cpu(where[i-1].key),1);
++ ext4_free_blocks(handle, inode,
++ le32_to_cpu(where[i-1].key), 1, 0);
+ }
+- ext4_free_blocks(handle, inode, le32_to_cpu(where[num].key), blks);
++ ext4_free_blocks(handle, inode, le32_to_cpu(where[num].key), blks, 0);
-+/**
-+ * proc_scsi_open - glue function
-+ * @inode: not used
-+ * @file: passed to single_open()
+ return err;
+ }
+@@ -782,14 +775,19 @@ err_out:
+ * return > 0, # of blocks mapped or allocated.
+ * return = 0, if plain lookup failed.
+ * return < 0, error case.
+ *
-+ * Associates proc_scsi_show with this file
-+ */
- static int proc_scsi_open(struct inode *inode, struct file *file)
++ *
++ * Need to be called with
++ * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
++ * (ie, create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
+ */
+ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
+- sector_t iblock, unsigned long maxblocks,
++ ext4_lblk_t iblock, unsigned long maxblocks,
+ struct buffer_head *bh_result,
+ int create, int extend_disksize)
{
- /*
-- * We don't really needs this for the write case but it doesn't
-+ * We don't really need this for the write case but it doesn't
- * harm either.
- */
- return single_open(file, proc_scsi_show, NULL);
-@@ -315,6 +415,9 @@ static const struct file_operations proc_scsi_operations = {
- .release = single_release,
- };
+ int err = -EIO;
+- int offsets[4];
++ ext4_lblk_t offsets[4];
+ Indirect chain[4];
+ Indirect *partial;
+ ext4_fsblk_t goal;
+@@ -803,7 +801,8 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
-+/**
-+ * scsi_init_procfs - create scsi and scsi/scsi in procfs
-+ */
- int __init scsi_init_procfs(void)
- {
- struct proc_dir_entry *pde;
-@@ -336,6 +439,9 @@ err1:
- return -ENOMEM;
- }
+ J_ASSERT(!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL));
+ J_ASSERT(handle != NULL || create == 0);
+- depth = ext4_block_to_path(inode,iblock,offsets,&blocks_to_boundary);
++ depth = ext4_block_to_path(inode, iblock, offsets,
++ &blocks_to_boundary);
-+/**
-+ * scsi_exit_procfs - Remove scsi/scsi and scsi from procfs
-+ */
- void scsi_exit_procfs(void)
- {
- remove_proc_entry("scsi/scsi", NULL);
-diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
-index 40ea71c..1dc165a 100644
---- a/drivers/scsi/scsi_scan.c
-+++ b/drivers/scsi/scsi_scan.c
-@@ -221,6 +221,9 @@ static void scsi_unlock_floptical(struct scsi_device *sdev,
+ if (depth == 0)
+ goto out;
+@@ -819,18 +818,6 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
+ while (count < maxblocks && count <= blocks_to_boundary) {
+ ext4_fsblk_t blk;
- /**
- * scsi_alloc_sdev - allocate and setup a scsi_Device
-+ * @starget: which target to allocate a &scsi_device for
-+ * @lun: which lun
-+ * @hostdata: usually NULL and set by ->slave_alloc instead
- *
- * Description:
- * Allocate, initialize for io, and return a pointer to a scsi_Device.
-@@ -472,7 +475,6 @@ static void scsi_target_reap_usercontext(struct work_struct *work)
+- if (!verify_chain(chain, partial)) {
+- /*
+- * Indirect block might be removed by
+- * truncate while we were reading it.
+- * Handling of that case: forget what we've
+- * got now. Flag the err as EAGAIN, so it
+- * will reread.
+- */
+- err = -EAGAIN;
+- count = 0;
+- break;
+- }
+ blk = le32_to_cpu(*(chain[depth-1].p + count));
- /**
- * scsi_target_reap - check to see if target is in use and destroy if not
-- *
- * @starget: target to be checked
- *
- * This is used after removing a LUN or doing a last put of the target
-@@ -863,7 +865,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
- sdev->no_start_on_add = 1;
+ if (blk == first_block + count)
+@@ -838,44 +825,13 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
+ else
+ break;
+ }
+- if (err != -EAGAIN)
+- goto got_it;
++ goto got_it;
+ }
- if (*bflags & BLIST_SINGLELUN)
-- sdev->single_lun = 1;
-+ scsi_target(sdev)->single_lun = 1;
+ /* Next simple case - plain lookup or failed read of indirect block */
+ if (!create || err == -EIO)
+ goto cleanup;
- sdev->use_10_for_rw = 1;
+- mutex_lock(&ei->truncate_mutex);
+-
+- /*
+- * If the indirect block is missing while we are reading
+- * the chain(ext4_get_branch() returns -EAGAIN err), or
+- * if the chain has been changed after we grab the semaphore,
+- * (either because another process truncated this branch, or
+- * another get_block allocated this branch) re-grab the chain to see if
+- * the request block has been allocated or not.
+- *
+- * Since we already block the truncate/other get_block
+- * at this point, we will have the current copy of the chain when we
+- * splice the branch into the tree.
+- */
+- if (err == -EAGAIN || !verify_chain(chain, partial)) {
+- while (partial > chain) {
+- brelse(partial->bh);
+- partial--;
+- }
+- partial = ext4_get_branch(inode, depth, offsets, chain, &err);
+- if (!partial) {
+- count++;
+- mutex_unlock(&ei->truncate_mutex);
+- if (err)
+- goto cleanup;
+- clear_buffer_new(bh_result);
+- goto got_it;
+- }
+- }
+-
+ /*
+ * Okay, we need to do block allocation. Lazily initialize the block
+ * allocation info here if necessary
+@@ -911,13 +867,12 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
+ err = ext4_splice_branch(handle, inode, iblock,
+ partial, indirect_blks, count);
+ /*
+- * i_disksize growing is protected by truncate_mutex. Don't forget to
++ * i_disksize growing is protected by i_data_sem. Don't forget to
+ * protect it if you're about to implement concurrent
+ * ext4_get_block() -bzzz
+ */
+ if (!err && extend_disksize && inode->i_size > ei->i_disksize)
+ ei->i_disksize = inode->i_size;
+- mutex_unlock(&ei->truncate_mutex);
+ if (err)
+ goto cleanup;
-@@ -928,8 +930,7 @@ static inline void scsi_destroy_sdev(struct scsi_device *sdev)
+@@ -942,6 +897,47 @@ out:
- #ifdef CONFIG_SCSI_LOGGING
- /**
-- * scsi_inq_str - print INQUIRY data from min to max index,
-- * strip trailing whitespace
-+ * scsi_inq_str - print INQUIRY data from min to max index, strip trailing whitespace
- * @buf: Output buffer with at least end-first+1 bytes of space
- * @inq: Inquiry buffer (input)
- * @first: Offset of string into inq
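The behaviour the `scsi_inq_str()` kernel-doc describes (copy a byte range out of an INQUIRY buffer, strip trailing whitespace) is simple enough to show standalone. This is an illustrative equivalent of the documented contract, not the kernel implementation:

```c
#include <string.h>
#include <ctype.h>

/* Copy bytes [first, end] of an INQUIRY-style buffer into buf, trim any
 * trailing whitespace, and NUL-terminate.  buf must hold at least
 * end - first + 2 bytes. */
static char *inq_str(char *buf, const unsigned char *inq,
		     unsigned first, unsigned end)
{
	unsigned len = end - first + 1;

	memcpy(buf, inq + first, len);
	while (len && isspace((unsigned char)buf[len - 1]))
		len--;
	buf[len] = '\0';
	return buf;
}
```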
-@@ -957,9 +958,10 @@ static unsigned char *scsi_inq_str(unsigned char *buf, unsigned char *inq,
- * scsi_probe_and_add_lun - probe a LUN, if a LUN is found add it
- * @starget: pointer to target device structure
- * @lun: LUN of target device
-- * @sdevscan: probe the LUN corresponding to this scsi_device
-- * @sdevnew: store the value of any new scsi_device allocated
- * @bflagsp: store bflags here if not NULL
-+ * @sdevp: probe the LUN corresponding to this scsi_device
-+ * @rescan: if nonzero skip some code only needed on first scan
-+ * @hostdata: passed to scsi_alloc_sdev()
- *
- * Description:
- * Call scsi_probe_lun, if a LUN with an attached device is found,
-@@ -1110,6 +1112,8 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
- * scsi_sequential_lun_scan - sequentially scan a SCSI target
- * @starget: pointer to target structure to scan
- * @bflags: black/white list flag for LUN 0
-+ * @scsi_level: Which version of the standard does this device adhere to
-+ * @rescan: passed to scsi_probe_add_lun()
- *
- * Description:
- * Generally, scan from LUN 1 (LUN 0 is assumed to already have been
-@@ -1220,7 +1224,7 @@ EXPORT_SYMBOL(scsilun_to_int);
+ #define DIO_CREDITS (EXT4_RESERVE_TRANS_BLOCKS + 32)
- /**
- * int_to_scsilun: reverts an int into a scsi_lun
-- * @int: integer to be reverted
-+ * @lun: integer to be reverted
- * @scsilun: struct scsi_lun to be set.
- *
- * Description:
-@@ -1252,18 +1256,22 @@ EXPORT_SYMBOL(int_to_scsilun);
++int ext4_get_blocks_wrap(handle_t *handle, struct inode *inode, sector_t block,
++ unsigned long max_blocks, struct buffer_head *bh,
++ int create, int extend_disksize)
++{
++ int retval;
++ /*
++ * Try to see if we can get the block without requesting
++ * for new file system block.
++ */
++ down_read((&EXT4_I(inode)->i_data_sem));
++ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
++ retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
++ bh, 0, 0);
++ } else {
++ retval = ext4_get_blocks_handle(handle,
++ inode, block, max_blocks, bh, 0, 0);
++ }
++ up_read((&EXT4_I(inode)->i_data_sem));
++ if (!create || (retval > 0))
++ return retval;
++
++ /*
++ * We need to allocate new blocks which will result
++ * in i_data update
++ */
++ down_write((&EXT4_I(inode)->i_data_sem));
++ /*
++ * We need to check for EXT4 here because migrate
++ * could have changed the inode type in between
++ */
++ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
++ retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
++ bh, create, extend_disksize);
++ } else {
++ retval = ext4_get_blocks_handle(handle, inode, block,
++ max_blocks, bh, create, extend_disksize);
++ }
++ up_write((&EXT4_I(inode)->i_data_sem));
++ return retval;
++}
++
+ static int ext4_get_block(struct inode *inode, sector_t iblock,
+ struct buffer_head *bh_result, int create)
+ {
+@@ -996,7 +992,7 @@ get_block:
+ * `handle' can be NULL if create is zero
+ */
+ struct buffer_head *ext4_getblk(handle_t *handle, struct inode *inode,
+- long block, int create, int *errp)
++ ext4_lblk_t block, int create, int *errp)
+ {
+ struct buffer_head dummy;
+ int fatal = 0, err;
+@@ -1063,7 +1059,7 @@ err:
+ }
- /**
- * scsi_report_lun_scan - Scan using SCSI REPORT LUN results
-- * @sdevscan: scan the host, channel, and id of this scsi_device
-+ * @starget: which target
-+ * @bflags: Zero or a mix of BLIST_NOLUN, BLIST_REPORTLUN2, or BLIST_NOREPORTLUN
-+ * @rescan: nonzero if we can skip code only needed on first scan
- *
- * Description:
-- * If @sdevscan is for a SCSI-3 or up device, send a REPORT LUN
-- * command, and scan the resulting list of LUNs by calling
-- * scsi_probe_and_add_lun.
-+ * Fast scanning for modern (SCSI-3) devices by sending a REPORT LUN command.
-+ * Scan the resulting list of LUNs by calling scsi_probe_and_add_lun.
+ struct buffer_head *ext4_bread(handle_t *handle, struct inode *inode,
+- int block, int create, int *err)
++ ext4_lblk_t block, int create, int *err)
+ {
+ struct buffer_head * bh;
+
+@@ -1446,7 +1442,7 @@ static int jbd2_journal_dirty_data_fn(handle_t *handle, struct buffer_head *bh)
+ * ext4_file_write() -> generic_file_write() -> __alloc_pages() -> ...
*
-- * Modifies sdevscan->lun.
-+ * If BLINK_REPORTLUN2 is set, scan a target that supports more than 8
-+ * LUNs even if it's older than SCSI-3.
-+ * If BLIST_NOREPORTLUN is set, return 1 always.
-+ * If BLIST_NOLUN is set, return 0 always.
+ * Same applies to ext4_get_block(). We will deadlock on various things like
+- * lock_journal and i_truncate_mutex.
++ * lock_journal and i_data_sem
*
- * Return:
- * 0: scan completed (or no memory, so further scanning is futile)
-- * 1: no report lun scan, or not configured
-+ * 1: could not scan with REPORT LUN
- **/
- static int scsi_report_lun_scan(struct scsi_target *starget, int bflags,
- int rescan)
-@@ -1481,6 +1489,7 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host *shost, uint channel,
- if (scsi_host_scan_allowed(shost))
- scsi_probe_and_add_lun(starget, lun, NULL, &sdev, 1, hostdata);
- mutex_unlock(&shost->scan_mutex);
-+ transport_configure_device(&starget->dev);
- scsi_target_reap(starget);
- put_device(&starget->dev);
+ * Setting PF_MEMALLOC here doesn't work - too many internal memory
+ * allocations fail.
+@@ -1828,7 +1824,8 @@ int ext4_block_truncate_page(handle_t *handle, struct page *page,
+ {
+ ext4_fsblk_t index = from >> PAGE_CACHE_SHIFT;
+ unsigned offset = from & (PAGE_CACHE_SIZE-1);
+- unsigned blocksize, iblock, length, pos;
++ unsigned blocksize, length, pos;
++ ext4_lblk_t iblock;
+ struct inode *inode = mapping->host;
+ struct buffer_head *bh;
+ int err = 0;
+@@ -1964,7 +1961,7 @@ static inline int all_zeroes(__le32 *p, __le32 *q)
+ * (no partially truncated stuff there). */
-@@ -1561,6 +1570,7 @@ static void __scsi_scan_target(struct device *parent, unsigned int channel,
- out_reap:
- /* now determine if the target has any children at all
- * and if not, nuke it */
-+ transport_configure_device(&starget->dev);
- scsi_target_reap(starget);
+ static Indirect *ext4_find_shared(struct inode *inode, int depth,
+- int offsets[4], Indirect chain[4], __le32 *top)
++ ext4_lblk_t offsets[4], Indirect chain[4], __le32 *top)
+ {
+ Indirect *partial, *p;
+ int k, err;
+@@ -2048,15 +2045,15 @@ static void ext4_clear_blocks(handle_t *handle, struct inode *inode,
+ for (p = first; p < last; p++) {
+ u32 nr = le32_to_cpu(*p);
+ if (nr) {
+- struct buffer_head *bh;
++ struct buffer_head *tbh;
- put_device(&starget->dev);
-diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
-index 00b3866..ed83cdb 100644
---- a/drivers/scsi/scsi_sysfs.c
-+++ b/drivers/scsi/scsi_sysfs.c
-@@ -1018,6 +1018,7 @@ int scsi_sysfs_add_host(struct Scsi_Host *shost)
+ *p = 0;
+- bh = sb_find_get_block(inode->i_sb, nr);
+- ext4_forget(handle, 0, inode, bh, nr);
++ tbh = sb_find_get_block(inode->i_sb, nr);
++ ext4_forget(handle, 0, inode, tbh, nr);
+ }
}
- transport_register_device(&shost->shost_gendev);
-+ transport_configure_device(&shost->shost_gendev);
- return 0;
+- ext4_free_blocks(handle, inode, block_to_free, count);
++ ext4_free_blocks(handle, inode, block_to_free, count, 0);
}
-diff --git a/drivers/scsi/scsi_tgt_if.c b/drivers/scsi/scsi_tgt_if.c
-index 9815a1a..d2557db 100644
---- a/drivers/scsi/scsi_tgt_if.c
-+++ b/drivers/scsi/scsi_tgt_if.c
-@@ -112,7 +112,7 @@ int scsi_tgt_uspace_send_cmd(struct scsi_cmnd *cmd, u64 itn_id,
- memset(&ev, 0, sizeof(ev));
- ev.p.cmd_req.host_no = shost->host_no;
- ev.p.cmd_req.itn_id = itn_id;
-- ev.p.cmd_req.data_len = cmd->request_bufflen;
-+ ev.p.cmd_req.data_len = scsi_bufflen(cmd);
- memcpy(ev.p.cmd_req.scb, cmd->cmnd, sizeof(ev.p.cmd_req.scb));
- memcpy(ev.p.cmd_req.lun, lun, sizeof(ev.p.cmd_req.lun));
- ev.p.cmd_req.attribute = cmd->tag;
-diff --git a/drivers/scsi/scsi_tgt_lib.c b/drivers/scsi/scsi_tgt_lib.c
-index a91761c..01e03f3 100644
---- a/drivers/scsi/scsi_tgt_lib.c
-+++ b/drivers/scsi/scsi_tgt_lib.c
-@@ -180,7 +180,7 @@ static void scsi_tgt_cmd_destroy(struct work_struct *work)
- container_of(work, struct scsi_tgt_cmd, work);
- struct scsi_cmnd *cmd = tcmd->rq->special;
+ /**
+@@ -2229,7 +2226,7 @@ static void ext4_free_branches(handle_t *handle, struct inode *inode,
+ ext4_journal_test_restart(handle, inode);
+ }
-- dprintk("cmd %p %d %lu\n", cmd, cmd->sc_data_direction,
-+ dprintk("cmd %p %d %u\n", cmd, cmd->sc_data_direction,
- rq_data_dir(cmd->request));
- scsi_unmap_user_pages(tcmd);
- scsi_host_put_command(scsi_tgt_cmd_to_host(cmd), cmd);
-@@ -327,11 +327,11 @@ static void scsi_tgt_cmd_done(struct scsi_cmnd *cmd)
- {
- struct scsi_tgt_cmd *tcmd = cmd->request->end_io_data;
+- ext4_free_blocks(handle, inode, nr, 1);
++ ext4_free_blocks(handle, inode, nr, 1, 1);
-- dprintk("cmd %p %lu\n", cmd, rq_data_dir(cmd->request));
-+ dprintk("cmd %p %u\n", cmd, rq_data_dir(cmd->request));
+ if (parent_bh) {
+ /*
+@@ -2289,12 +2286,12 @@ void ext4_truncate(struct inode *inode)
+ __le32 *i_data = ei->i_data;
+ int addr_per_block = EXT4_ADDR_PER_BLOCK(inode->i_sb);
+ struct address_space *mapping = inode->i_mapping;
+- int offsets[4];
++ ext4_lblk_t offsets[4];
+ Indirect chain[4];
+ Indirect *partial;
+ __le32 nr = 0;
+ int n;
+- long last_block;
++ ext4_lblk_t last_block;
+ unsigned blocksize = inode->i_sb->s_blocksize;
+ struct page *page;
- scsi_tgt_uspace_send_status(cmd, tcmd->itn_id, tcmd->tag);
+@@ -2320,8 +2317,10 @@ void ext4_truncate(struct inode *inode)
+ return;
+ }
-- if (cmd->request_buffer)
-+ if (scsi_sglist(cmd))
- scsi_free_sgtable(cmd);
+- if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)
+- return ext4_ext_truncate(inode, page);
++ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
++ ext4_ext_truncate(inode, page);
++ return;
++ }
- queue_work(scsi_tgtd, &tcmd->work);
-@@ -342,7 +342,7 @@ static int scsi_tgt_transfer_response(struct scsi_cmnd *cmd)
- struct Scsi_Host *shost = scsi_tgt_cmd_to_host(cmd);
- int err;
+ handle = start_transaction(inode);
+ if (IS_ERR(handle)) {
+@@ -2369,7 +2368,7 @@ void ext4_truncate(struct inode *inode)
+ * From here we block out all ext4_get_block() callers who want to
+ * modify the block allocation tree.
+ */
+- mutex_lock(&ei->truncate_mutex);
++ down_write(&ei->i_data_sem);
-- dprintk("cmd %p %lu\n", cmd, rq_data_dir(cmd->request));
-+ dprintk("cmd %p %u\n", cmd, rq_data_dir(cmd->request));
+ if (n == 1) { /* direct blocks */
+ ext4_free_data(handle, inode, NULL, i_data+offsets[0],
+@@ -2433,7 +2432,7 @@ do_indirects:
- err = shost->hostt->transfer_response(cmd, scsi_tgt_cmd_done);
- switch (err) {
-@@ -359,22 +359,17 @@ static int scsi_tgt_init_cmd(struct scsi_cmnd *cmd, gfp_t gfp_mask)
- int count;
+ ext4_discard_reservation(inode);
- cmd->use_sg = rq->nr_phys_segments;
-- cmd->request_buffer = scsi_alloc_sgtable(cmd, gfp_mask);
-- if (!cmd->request_buffer)
-+ if (scsi_alloc_sgtable(cmd, gfp_mask))
- return -ENOMEM;
+- mutex_unlock(&ei->truncate_mutex);
++ up_write(&ei->i_data_sem);
+ inode->i_mtime = inode->i_ctime = ext4_current_time(inode);
+ ext4_mark_inode_dirty(handle, inode);
- cmd->request_bufflen = rq->data_len;
+@@ -2460,7 +2459,8 @@ out_stop:
+ static ext4_fsblk_t ext4_get_inode_block(struct super_block *sb,
+ unsigned long ino, struct ext4_iloc *iloc)
+ {
+- unsigned long desc, group_desc, block_group;
++ unsigned long desc, group_desc;
++ ext4_group_t block_group;
+ unsigned long offset;
+ ext4_fsblk_t block;
+ struct buffer_head *bh;
+@@ -2547,7 +2547,7 @@ static int __ext4_get_inode_loc(struct inode *inode,
+ struct ext4_group_desc *desc;
+ int inodes_per_buffer;
+ int inode_offset, i;
+- int block_group;
++ ext4_group_t block_group;
+ int start;
-- dprintk("cmd %p cnt %d %lu\n", cmd, cmd->use_sg, rq_data_dir(rq));
-- count = blk_rq_map_sg(rq->q, rq, cmd->request_buffer);
-- if (likely(count <= cmd->use_sg)) {
-- cmd->use_sg = count;
-- return 0;
-- }
--
-- eprintk("cmd %p cnt %d\n", cmd, cmd->use_sg);
-- scsi_free_sgtable(cmd);
-- return -EINVAL;
-+ dprintk("cmd %p cnt %d %lu\n", cmd, scsi_sg_count(cmd),
-+ rq_data_dir(rq));
-+ count = blk_rq_map_sg(rq->q, rq, scsi_sglist(cmd));
-+ BUG_ON(count > cmd->use_sg);
-+ cmd->use_sg = count;
-+ return 0;
+ block_group = (inode->i_ino - 1) /
+@@ -2660,6 +2660,28 @@ void ext4_get_inode_flags(struct ext4_inode_info *ei)
+ if (flags & S_DIRSYNC)
+ ei->i_flags |= EXT4_DIRSYNC_FL;
}
++static blkcnt_t ext4_inode_blocks(struct ext4_inode *raw_inode,
++ struct ext4_inode_info *ei)
++{
++ blkcnt_t i_blocks;
++ struct inode *inode = &(ei->vfs_inode);
++ struct super_block *sb = inode->i_sb;
++
++ if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
++ EXT4_FEATURE_RO_COMPAT_HUGE_FILE)) {
++ /* we are using combined 48 bit field */
++ i_blocks = ((u64)le16_to_cpu(raw_inode->i_blocks_high)) << 32 |
++ le32_to_cpu(raw_inode->i_blocks_lo);
++ if (ei->i_flags & EXT4_HUGE_FILE_FL) {
++ /* i_blocks is in units of file system block size */
++ return i_blocks << (inode->i_blkbits - 9);
++ } else {
++ return i_blocks;
++ }
++ } else {
++ return le32_to_cpu(raw_inode->i_blocks_lo);
++ }
++}
- /* TODO: test this crap and replace bio_map_user with new interface maybe */
-@@ -496,8 +491,8 @@ int scsi_tgt_kspace_exec(int host_no, u64 itn_id, int result, u64 tag,
+ void ext4_read_inode(struct inode * inode)
+ {
+@@ -2687,7 +2709,6 @@ void ext4_read_inode(struct inode * inode)
+ inode->i_gid |= le16_to_cpu(raw_inode->i_gid_high) << 16;
}
- cmd = rq->special;
+ inode->i_nlink = le16_to_cpu(raw_inode->i_links_count);
+- inode->i_size = le32_to_cpu(raw_inode->i_size);
-- dprintk("cmd %p scb %x result %d len %d bufflen %u %lu %x\n",
-- cmd, cmd->cmnd[0], result, len, cmd->request_bufflen,
-+ dprintk("cmd %p scb %x result %d len %d bufflen %u %u %x\n",
-+ cmd, cmd->cmnd[0], result, len, scsi_bufflen(cmd),
- rq_data_dir(rq), cmd->cmnd[0]);
+ ei->i_state = 0;
+ ei->i_dir_start_lookup = 0;
+@@ -2709,19 +2730,15 @@ void ext4_read_inode(struct inode * inode)
+ * recovery code: that's fine, we're about to complete
+ * the process of deleting those. */
+ }
+- inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
+ ei->i_flags = le32_to_cpu(raw_inode->i_flags);
+- ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl);
++ inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
++ ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
+ if (EXT4_SB(inode->i_sb)->s_es->s_creator_os !=
+- cpu_to_le32(EXT4_OS_HURD))
++ cpu_to_le32(EXT4_OS_HURD)) {
+ ei->i_file_acl |=
+ ((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32;
+- if (!S_ISREG(inode->i_mode)) {
+- ei->i_dir_acl = le32_to_cpu(raw_inode->i_dir_acl);
+- } else {
+- inode->i_size |=
+- ((__u64)le32_to_cpu(raw_inode->i_size_high)) << 32;
+ }
++ inode->i_size = ext4_isize(raw_inode);
+ ei->i_disksize = inode->i_size;
+ inode->i_generation = le32_to_cpu(raw_inode->i_generation);
+ ei->i_block_group = iloc.block_group;
+@@ -2765,6 +2782,13 @@ void ext4_read_inode(struct inode * inode)
+ EXT4_INODE_GET_XTIME(i_atime, inode, raw_inode);
+ EXT4_EINODE_GET_XTIME(i_crtime, ei, raw_inode);
- if (result == TASK_ABORTED) {
-@@ -617,7 +612,7 @@ int scsi_tgt_kspace_it_nexus_rsp(int host_no, u64 itn_id, int result)
- struct Scsi_Host *shost;
- int err = -EINVAL;
++ inode->i_version = le32_to_cpu(raw_inode->i_disk_version);
++ if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE) {
++ if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
++ inode->i_version |=
++ (__u64)(le32_to_cpu(raw_inode->i_version_hi)) << 32;
++ }
++
+ if (S_ISREG(inode->i_mode)) {
+ inode->i_op = &ext4_file_inode_operations;
+ inode->i_fop = &ext4_file_operations;
+@@ -2797,6 +2821,55 @@ bad_inode:
+ return;
+ }
-- dprintk("%d %d %llx\n", host_no, result, (unsigned long long) mid);
-+ dprintk("%d %d%llx\n", host_no, result, (unsigned long long)itn_id);
++static int ext4_inode_blocks_set(handle_t *handle,
++ struct ext4_inode *raw_inode,
++ struct ext4_inode_info *ei)
++{
++ struct inode *inode = &(ei->vfs_inode);
++ u64 i_blocks = inode->i_blocks;
++ struct super_block *sb = inode->i_sb;
++ int err = 0;
++
++ if (i_blocks <= ~0U) {
++ /*
++ * i_blocks can be represented in a 32 bit variable
++ * as multiple of 512 bytes
++ */
++ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
++ raw_inode->i_blocks_high = 0;
++ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
++ } else if (i_blocks <= 0xffffffffffffULL) {
++ /*
++ * i_blocks can be represented in a 48 bit variable
++ * as multiple of 512 bytes
++ */
++ err = ext4_update_rocompat_feature(handle, sb,
++ EXT4_FEATURE_RO_COMPAT_HUGE_FILE);
++ if (err)
++ goto err_out;
++ /* i_block is stored in the split 48 bit fields */
++ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
++ raw_inode->i_blocks_high = cpu_to_le16(i_blocks >> 32);
++ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
++ } else {
++ /*
++ * i_blocks should be represented in a 48 bit variable
++ * as multiple of file system block size
++ */
++ err = ext4_update_rocompat_feature(handle, sb,
++ EXT4_FEATURE_RO_COMPAT_HUGE_FILE);
++ if (err)
++ goto err_out;
++ ei->i_flags |= EXT4_HUGE_FILE_FL;
++ /* i_block is stored in file system block size */
++ i_blocks = i_blocks >> (inode->i_blkbits - 9);
++ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
++ raw_inode->i_blocks_high = cpu_to_le16(i_blocks >> 32);
++ }
++err_out:
++ return err;
++}
++
+ /*
+ * Post the struct inode info into an on-disk inode location in the
+ * buffer-cache. This gobbles the caller's reference to the
+@@ -2845,47 +2918,42 @@ static int ext4_do_update_inode(handle_t *handle,
+ raw_inode->i_gid_high = 0;
+ }
+ raw_inode->i_links_count = cpu_to_le16(inode->i_nlink);
+- raw_inode->i_size = cpu_to_le32(ei->i_disksize);
- shost = scsi_host_lookup(host_no);
- if (IS_ERR(shost)) {
-diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
-index 7a7cfe5..b1119da 100644
---- a/drivers/scsi/scsi_transport_fc.c
-+++ b/drivers/scsi/scsi_transport_fc.c
-@@ -481,9 +481,9 @@ MODULE_PARM_DESC(dev_loss_tmo,
- " exceeded, the scsi target is removed. Value should be"
- " between 1 and SCSI_DEVICE_BLOCK_MAX_TIMEOUT.");
+ EXT4_INODE_SET_XTIME(i_ctime, inode, raw_inode);
+ EXT4_INODE_SET_XTIME(i_mtime, inode, raw_inode);
+ EXT4_INODE_SET_XTIME(i_atime, inode, raw_inode);
+ EXT4_EINODE_SET_XTIME(i_crtime, ei, raw_inode);
--/**
-+/*
- * Netlink Infrastructure
-- **/
-+ */
+- raw_inode->i_blocks = cpu_to_le32(inode->i_blocks);
++ if (ext4_inode_blocks_set(handle, raw_inode, ei))
++ goto out_brelse;
+ raw_inode->i_dtime = cpu_to_le32(ei->i_dtime);
+ raw_inode->i_flags = cpu_to_le32(ei->i_flags);
+ if (EXT4_SB(inode->i_sb)->s_es->s_creator_os !=
+ cpu_to_le32(EXT4_OS_HURD))
+ raw_inode->i_file_acl_high =
+ cpu_to_le16(ei->i_file_acl >> 32);
+- raw_inode->i_file_acl = cpu_to_le32(ei->i_file_acl);
+- if (!S_ISREG(inode->i_mode)) {
+- raw_inode->i_dir_acl = cpu_to_le32(ei->i_dir_acl);
+- } else {
+- raw_inode->i_size_high =
+- cpu_to_le32(ei->i_disksize >> 32);
+- if (ei->i_disksize > 0x7fffffffULL) {
+- struct super_block *sb = inode->i_sb;
+- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
+- EXT4_FEATURE_RO_COMPAT_LARGE_FILE) ||
+- EXT4_SB(sb)->s_es->s_rev_level ==
+- cpu_to_le32(EXT4_GOOD_OLD_REV)) {
+- /* If this is the first large file
+- * created, add a flag to the superblock.
+- */
+- err = ext4_journal_get_write_access(handle,
+- EXT4_SB(sb)->s_sbh);
+- if (err)
+- goto out_brelse;
+- ext4_update_dynamic_rev(sb);
+- EXT4_SET_RO_COMPAT_FEATURE(sb,
++ raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
++ ext4_isize_set(raw_inode, ei->i_disksize);
++ if (ei->i_disksize > 0x7fffffffULL) {
++ struct super_block *sb = inode->i_sb;
++ if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
++ EXT4_FEATURE_RO_COMPAT_LARGE_FILE) ||
++ EXT4_SB(sb)->s_es->s_rev_level ==
++ cpu_to_le32(EXT4_GOOD_OLD_REV)) {
++ /* If this is the first large file
++ * created, add a flag to the superblock.
++ */
++ err = ext4_journal_get_write_access(handle,
++ EXT4_SB(sb)->s_sbh);
++ if (err)
++ goto out_brelse;
++ ext4_update_dynamic_rev(sb);
++ EXT4_SET_RO_COMPAT_FEATURE(sb,
+ EXT4_FEATURE_RO_COMPAT_LARGE_FILE);
+- sb->s_dirt = 1;
+- handle->h_sync = 1;
+- err = ext4_journal_dirty_metadata(handle,
+- EXT4_SB(sb)->s_sbh);
+- }
++ sb->s_dirt = 1;
++ handle->h_sync = 1;
++ err = ext4_journal_dirty_metadata(handle,
++ EXT4_SB(sb)->s_sbh);
+ }
+ }
+ raw_inode->i_generation = cpu_to_le32(inode->i_generation);
+@@ -2903,8 +2971,14 @@ static int ext4_do_update_inode(handle_t *handle,
+ } else for (block = 0; block < EXT4_N_BLOCKS; block++)
+ raw_inode->i_block[block] = ei->i_data[block];
- static atomic_t fc_event_seq;
+- if (ei->i_extra_isize)
++ raw_inode->i_disk_version = cpu_to_le32(inode->i_version);
++ if (ei->i_extra_isize) {
++ if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
++ raw_inode->i_version_hi =
++ cpu_to_le32(inode->i_version >> 32);
+ raw_inode->i_extra_isize = cpu_to_le16(ei->i_extra_isize);
++ }
++
-@@ -491,10 +491,10 @@ static atomic_t fc_event_seq;
- * fc_get_event_number - Obtain the next sequential FC event number
- *
- * Notes:
-- * We could have inline'd this, but it would have required fc_event_seq to
-+ * We could have inlined this, but it would have required fc_event_seq to
- * be exposed. For now, live with the subroutine call.
- * Atomic used to avoid lock/unlock...
-- **/
-+ */
- u32
- fc_get_event_number(void)
+ BUFFER_TRACE(bh, "call ext4_journal_dirty_metadata");
+ rc = ext4_journal_dirty_metadata(handle, bh);
+@@ -3024,6 +3098,17 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ ext4_journal_stop(handle);
+ }
+
++ if (attr->ia_valid & ATTR_SIZE) {
++ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++
++ if (attr->ia_size > sbi->s_bitmap_maxbytes) {
++ error = -EFBIG;
++ goto err_out;
++ }
++ }
++ }
++
+ if (S_ISREG(inode->i_mode) &&
+ attr->ia_valid & ATTR_SIZE && attr->ia_size < inode->i_size) {
+ handle_t *handle;
+@@ -3120,6 +3205,9 @@ int ext4_mark_iloc_dirty(handle_t *handle,
{
-@@ -505,7 +505,6 @@ EXPORT_SYMBOL(fc_get_event_number);
+ int err = 0;
- /**
- * fc_host_post_event - called to post an even on an fc_host.
-- *
- * @shost: host the event occurred on
- * @event_number: fc event number obtained from get_fc_event_number()
- * @event_code: fc_host event being posted
-@@ -513,7 +512,7 @@ EXPORT_SYMBOL(fc_get_event_number);
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
-+ */
- void
- fc_host_post_event(struct Scsi_Host *shost, u32 event_number,
- enum fc_host_event_code event_code, u32 event_data)
-@@ -579,17 +578,16 @@ EXPORT_SYMBOL(fc_host_post_event);
++ if (test_opt(inode->i_sb, I_VERSION))
++ inode_inc_iversion(inode);
++
+ /* the do_update_inode consumes one bh->b_count */
+ get_bh(iloc->bh);
+@@ -3158,8 +3246,10 @@ ext4_reserve_inode_write(handle_t *handle, struct inode *inode,
+ * Expand an inode by new_extra_isize bytes.
+ * Returns 0 on success or negative error number on failure.
+ */
+-int ext4_expand_extra_isize(struct inode *inode, unsigned int new_extra_isize,
+- struct ext4_iloc iloc, handle_t *handle)
++static int ext4_expand_extra_isize(struct inode *inode,
++ unsigned int new_extra_isize,
++ struct ext4_iloc iloc,
++ handle_t *handle)
+ {
+ struct ext4_inode *raw_inode;
+ struct ext4_xattr_ibody_header *header;
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index e7f894b..2ed7c37 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -199,7 +199,7 @@ flags_err:
+ * need to allocate reservation structure for this inode
+ * before set the window size
+ */
+- mutex_lock(&ei->truncate_mutex);
++ down_write(&ei->i_data_sem);
+ if (!ei->i_block_alloc_info)
+ ext4_init_block_alloc_info(inode);
- /**
-- * fc_host_post_vendor_event - called to post a vendor unique event on
-- * a fc_host
-- *
-+ * fc_host_post_vendor_event - called to post a vendor unique event on an fc_host
- * @shost: host the event occurred on
- * @event_number: fc event number obtained from get_fc_event_number()
- * @data_len: amount, in bytes, of vendor unique data
- * @data_buf: pointer to vendor unique data
-+ * @vendor_id: Vendor id
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
-+ */
- void
- fc_host_post_vendor_event(struct Scsi_Host *shost, u32 event_number,
- u32 data_len, char * data_buf, u64 vendor_id)
-@@ -1900,7 +1898,6 @@ static int fc_vport_match(struct attribute_container *cont,
+@@ -207,7 +207,7 @@ flags_err:
+ struct ext4_reserve_window_node *rsv = &ei->i_block_alloc_info->rsv_window_node;
+ rsv->rsv_goal_size = rsv_window_size;
+ }
+- mutex_unlock(&ei->truncate_mutex);
++ up_write(&ei->i_data_sem);
+ return 0;
+ }
+ case EXT4_IOC_GROUP_EXTEND: {
+@@ -254,6 +254,9 @@ flags_err:
+ return err;
+ }
- /**
- * fc_timed_out - FC Transport I/O timeout intercept handler
-- *
- * @scmd: The SCSI command which timed out
- *
- * This routine protects against error handlers getting invoked while a
-@@ -1920,7 +1917,7 @@ static int fc_vport_match(struct attribute_container *cont,
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++ case EXT4_IOC_MIGRATE:
++ return ext4_ext_migrate(inode, filp, cmd, arg);
++
+ default:
+ return -ENOTTY;
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+new file mode 100644
+index 0000000..76e5fed
+--- /dev/null
++++ b/fs/ext4/mballoc.c
+@@ -0,0 +1,4552 @@
++/*
++ * Copyright (c) 2003-2006, Cluster File Systems, Inc, info at clusterfs.com
++ * Written by Alex Tomas <alex at clusterfs.com>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-
+ */
- static enum scsi_eh_timer_return
- fc_timed_out(struct scsi_cmnd *scmd)
- {
-@@ -2133,7 +2130,7 @@ EXPORT_SYMBOL(fc_release_transport);
- * 1 - work queued for execution
- * 0 - work is already queued
- * -EINVAL - work queue doesn't exist
-- **/
++
++
++/*
++ * mballoc.c contains the multiblocks allocation routines
+ */
- static int
- fc_queue_work(struct Scsi_Host *shost, struct work_struct *work)
- {
-@@ -2152,7 +2149,7 @@ fc_queue_work(struct Scsi_Host *shost, struct work_struct *work)
- /**
- * fc_flush_work - Flush a fc_host's workqueue.
- * @shost: Pointer to Scsi_Host bound to fc_host.
-- **/
++
++#include <linux/time.h>
++#include <linux/fs.h>
++#include <linux/namei.h>
++#include <linux/ext4_jbd2.h>
++#include <linux/ext4_fs.h>
++#include <linux/quotaops.h>
++#include <linux/buffer_head.h>
++#include <linux/module.h>
++#include <linux/swap.h>
++#include <linux/proc_fs.h>
++#include <linux/pagemap.h>
++#include <linux/seq_file.h>
++#include <linux/version.h>
++#include "group.h"
++
++/*
++ * MUSTDO:
++ * - test ext4_ext_search_left() and ext4_ext_search_right()
++ * - search for metadata in few groups
++ *
++ * TODO v4:
++ * - normalization should take into account whether file is still open
++ * - discard preallocations if no free space left (policy?)
++ * - don't normalize tails
++ * - quota
++ * - reservation for superuser
++ *
++ * TODO v3:
++ * - bitmap read-ahead (proposed by Oleg Drokin aka green)
++ * - track min/max extents in each group for better group selection
++ * - mb_mark_used() may allocate chunk right after splitting buddy
++ * - tree of groups sorted by number of free blocks
++ * - error handling
+ */
- static void
- fc_flush_work(struct Scsi_Host *shost)
- {
-@@ -2175,7 +2172,7 @@ fc_flush_work(struct Scsi_Host *shost)
- *
- * Return value:
- * 1 on success / 0 already queued / < 0 for error
-- **/
++
++/*
++ * The allocation request involves a request for multiple blocks
++ * near to the goal (block) value specified.
++ *
++ * During initialization phase of the allocator we decide to use the group
++ * preallocation or inode preallocation depending on the size of the file.
++ * The size of the file could be the resulting file size we would have
++ * after allocation or the current file size, whichever is larger. If the
++ * size is less than sbi->s_mb_stream_request we select the group
++ * preallocation. The default value of s_mb_stream_request is 16
++ * blocks. This can also be tuned via
++ * /proc/fs/ext4/<partition>/stream_req. The value is represented in terms
++ * of number of blocks.
++ *
++ * The main motivation for having small files use group preallocation is
++ * to ensure that we keep small files close together on the disk.
++ *
++ * In the first stage the allocator looks at the inode prealloc list,
++ * ext4_inode_info->i_prealloc_list, which contains the prealloc spaces
++ * for this particular inode. The inode prealloc space is represented as:
++ *
++ * pa_lstart -> the logical start block for this prealloc space
++ * pa_pstart -> the physical start block for this prealloc space
++ * pa_len -> lenght for this prealloc space
++ * pa_free -> free space available in this prealloc space
++ *
++ * The inode preallocation space is used looking at the _logical_ start
++ * block. Only if the logical file block falls within the range of a
++ * prealloc space do we consume that particular prealloc space. This
++ * makes sure that we have contiguous physical blocks representing the file blocks.
++ *
++ * The important thing to be noted in case of inode prealloc space is that
++ * we don't modify the values associated to inode prealloc space except
++ * pa_free.
++ *
++ * If we are not able to find blocks in the inode prealloc space and if we
++ * have the group allocation flag set then we look at the locality group
++ * prealloc space. These are per-CPU prealloc lists represented as
++ *
++ * ext4_sb_info.s_locality_groups[smp_processor_id()]
++ *
++ * The reason for having a per cpu locality group is to reduce the contention
++ * between CPUs. It is possible to get scheduled at this point.
++ *
++ * The locality group prealloc space is used looking at whether we have
++ * enough free space (pa_free) within the prealloc space.
++ *
++ * If we can't allocate blocks via inode prealloc or/and locality group
++ * prealloc then we look at the buddy cache. The buddy cache is represented
++ * by ext4_sb_info.s_buddy_cache (struct inode) whose file offset gets
++ * mapped to the buddy and bitmap information regarding different
++ * groups. The buddy information is attached to buddy cache inode so that
++ * we can access them through the page cache. The information regarding
++ * each group is loaded via ext4_mb_load_buddy. The information involves
++ * the block bitmap and buddy information. The information is stored in
++ * the inode as:
++ *
++ * { page }
++ * [ group 0 buddy][ group 0 bitmap] [group 1][ group 1]...
++ *
++ *
++ * one block each for bitmap and buddy information. So for each group we
++ * take up 2 blocks. A page can contain blocks_per_page (PAGE_CACHE_SIZE /
++ * blocksize) blocks. So it can have information regarding groups_per_page
++ * which is blocks_per_page/2
++ *
++ * The buddy cache inode is not stored on disk. The inode is thrown
++ * away when the filesystem is unmounted.
++ *
++ * We look for count number of blocks in the buddy cache. If we were able
++ * to locate that many free blocks we return with additional information
++ * regarding rest of the contiguous physical block available
++ *
++ * Before allocating blocks via buddy cache we normalize the request
++ * blocks. This ensures we ask for more blocks than we need. The extra
++ * blocks that we get after allocation are added to the respective prealloc
++ * list. In case of inode preallocation we follow a list of heuristics
++ * based on file size. This can be found in ext4_mb_normalize_request. If
++ * we are doing a group prealloc we try to normalize the request to
++ * sbi->s_mb_group_prealloc. Default value of s_mb_group_prealloc is set to
++ * 512 blocks. This can be tuned via
++ * /proc/fs/ext4/<partition>/group_prealloc. The value is represented in
++ * terms of number of blocks. If we have mounted the file system with -O
++ * stripe=<value> option the group prealloc request is normalized to the
++ * stripe value (sbi->s_stripe)
++ *
++ * The regular allocator (using the buddy cache) supports a few tunables.
++ *
++ * /proc/fs/ext4/<partition>/min_to_scan
++ * /proc/fs/ext4/<partition>/max_to_scan
++ * /proc/fs/ext4/<partition>/order2_req
++ *
++ * The regular allocator uses buddy scan only if the request len is a power of
++ * 2 blocks and the order of allocation is >= sbi->s_mb_order2_reqs. The
++ * value of s_mb_order2_reqs can be tuned via
++ * /proc/fs/ext4/<partition>/order2_req. If the request len is equal to
++ * stripe size (sbi->s_stripe), we try to search for contiguous blocks in
++ * stripe size. This should result in better allocation on RAID setups. If
++ * not we search in the specific group using the bitmap for best extents. The
++ * tunables min_to_scan and max_to_scan control the behaviour here.
++ * min_to_scan indicates how long mballoc __must__ look for a best
++ * extent and max_to_scan indicates how long mballoc __can__ look for a
++ * best extent in the found extents. Searching for the blocks starts with
++ * the group specified as the goal value in allocation context via
++ * ac_g_ex. Each group is first checked based on the criteria whether it
++ * can used for allocation. ext4_mb_good_group explains how the groups are
++ * checked.
++ *
++ * Both prealloc spaces get populated as above. So for the first
++ * request we will hit the buddy cache, which will result in this prealloc
++ * space getting filled. The prealloc space is then later used for
++ * subsequent requests.
+ */
- static int
- fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
- unsigned long delay)
-@@ -2195,7 +2192,7 @@ fc_queue_devloss_work(struct Scsi_Host *shost, struct delayed_work *work,
- /**
- * fc_flush_devloss - Flush a fc_host's devloss workqueue.
- * @shost: Pointer to Scsi_Host bound to fc_host.
-- **/
++
++/*
++ * mballoc operates on the following data:
++ * - on-disk bitmap
++ * - in-core buddy (actually includes buddy and bitmap)
++ * - preallocation descriptors (PAs)
++ *
++ * there are two types of preallocations:
++ * - inode
++ * assigned to a specific inode and can be used for this inode only.
++ * it describes part of the inode's space preallocated to specific
++ * physical blocks. any block from that preallocation can be used
++ * independently. the descriptor just tracks the number of blocks left
++ * unused. so, before taking some block from the descriptor, one must
++ * make sure the corresponding logical block isn't allocated yet. this
++ * also means that freeing any block within descriptor's range
++ * must discard all preallocated blocks.
++ * - locality group
++ * assigned to specific locality group which does not translate to
++ * permanent set of inodes: inode can join and leave group. space
++ * from this type of preallocation can be used for any inode. thus
++ * it's consumed from the beginning to the end.
++ *
++ * relation between them can be expressed as:
++ * in-core buddy = on-disk bitmap + preallocation descriptors
++ *
++ * this means the blocks mballoc considers used are:
++ * - allocated blocks (persistent)
++ * - preallocated blocks (non-persistent)
++ *
++ * consistency in mballoc world means that at any time a block is either
++ * free or used in ALL structures. notice: "any time" should not be read
++ * literally -- time is discrete and delimited by locks.
++ *
++ * to keep it simple, we don't use block numbers, instead we count number of
++ * blocks: how many blocks marked used/free in on-disk bitmap, buddy and PA.
++ *
++ * all operations can be expressed as:
++ * - init buddy: buddy = on-disk + PAs
++ * - new PA: buddy += N; PA = N
++ * - use inode PA: on-disk += N; PA -= N
++ * - discard inode PA buddy -= on-disk - PA; PA = 0
++ * - use locality group PA on-disk += N; PA -= N
++ * - discard locality group PA buddy -= PA; PA = 0
++ * note: 'buddy -= on-disk - PA' is used to show that on-disk bitmap
++ * is used in real operation because we can't know actual used
++ * bits from PA, only from on-disk bitmap
++ *
++ * if we follow this strict logic, then all operations above should be atomic.
++ * given some of them can block, we'd have to use something like semaphores
++ * killing performance on high-end SMP hardware. let's try to relax it using
++ * the following knowledge:
++ * 1) if buddy is referenced, it's already initialized
++ * 2) while block is used in buddy and the buddy is referenced,
++ * nobody can re-allocate that block
++ * 3) we work on bitmaps and '+' actually means 'set bits'. if on-disk has
++ * bit set and PA claims same block, it's OK. IOW, one can set bit in
++ * on-disk bitmap if buddy has the same bit set and/or PA covers the corresponding
++ * block
++ *
++ * so, now we're building a concurrency table:
++ * - init buddy vs.
++ * - new PA
++ * blocks for PA are allocated in the buddy, buddy must be referenced
++ * until PA is linked to allocation group to avoid concurrent buddy init
++ * - use inode PA
++ * we need to make sure that either on-disk bitmap or PA has uptodate data
++ * given (3) we care that PA-=N operation doesn't interfere with init
++ * - discard inode PA
++ * the simplest way would be to have buddy initialized by the discard
++ * - use locality group PA
++ * again PA-=N must be serialized with init
++ * - discard locality group PA
++ * the simplest way would be to have buddy initialized by the discard
++ * - new PA vs.
++ * - use inode PA
++ * i_data_sem serializes them
++ * - discard inode PA
++ * discard process must wait until PA isn't used by another process
++ * - use locality group PA
++ * some mutex should serialize them
++ * - discard locality group PA
++ * discard process must wait until PA isn't used by another process
++ * - use inode PA
++ * - use inode PA
++ * i_data_sem or another mutex should serialize them
++ * - discard inode PA
++ * discard process must wait until PA isn't used by another process
++ * - use locality group PA
++ * nothing wrong here -- they're different PAs covering different blocks
++ * - discard locality group PA
++ * discard process must wait until PA isn't used by another process
++ *
++ * now we're ready to note a few consequences:
++ * - PA is referenced and, while it is, no discard is possible
++ * - PA is referenced until block isn't marked in on-disk bitmap
++ * - PA changes only after on-disk bitmap
++ * - discard must not compete with init. either init is done before
++ * any discard or they're serialized somehow
++ * - buddy init as sum of on-disk bitmap and PAs is done atomically
++ *
++ * a special case when we've used PA to emptiness. no need to modify buddy
++ * in this case, but we should care about concurrent init
++ *
+ */
- static void
- fc_flush_devloss(struct Scsi_Host *shost)
- {
-@@ -2212,21 +2209,20 @@ fc_flush_devloss(struct Scsi_Host *shost)
-
-
- /**
-- * fc_remove_host - called to terminate any fc_transport-related elements
-- * for a scsi host.
-- * @rport: remote port to be unblocked.
-+ * fc_remove_host - called to terminate any fc_transport-related elements for a scsi host.
-+ * @shost: Which &Scsi_Host
- *
- * This routine is expected to be called immediately preceeding the
- * a driver's call to scsi_remove_host().
- *
- * WARNING: A driver utilizing the fc_transport, which fails to call
-- * this routine prior to scsi_remote_host(), will leave dangling
-+ * this routine prior to scsi_remove_host(), will leave dangling
- * objects in /sys/class/fc_remote_ports. Access to any of these
- * objects can result in a system crash !!!
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++
++ /*
++ * Logic in few words:
++ *
++ * - allocation:
++ * load group
++ * find blocks
++ * mark bits in on-disk bitmap
++ * release group
++ *
++ * - use preallocation:
++ * find proper PA (per-inode or group)
++ * load group
++ * mark bits in on-disk bitmap
++ * release group
++ * release PA
++ *
++ * - free:
++ * load group
++ * mark bits in on-disk bitmap
++ * release group
++ *
++ * - discard preallocations in group:
++ * mark PAs deleted
++ * move them onto local list
++ * load on-disk bitmap
++ * load group
++ * remove PA from object (inode or locality group)
++ * mark free blocks in-core
++ *
++ * - discard inode's preallocations:
+ */
- void
- fc_remove_host(struct Scsi_Host *shost)
- {
-@@ -2281,10 +2277,10 @@ EXPORT_SYMBOL(fc_remove_host);
-
- /**
- * fc_starget_delete - called to delete the scsi decendents of an rport
-- * (target and all sdevs)
-- *
- * @work: remote port to be operated on.
-- **/
++
++/*
++ * Locking rules
++ *
++ * Locks:
++ * - bitlock on a group (group)
++ * - object (inode/locality) (object)
++ * - per-pa lock (pa)
++ *
++ * Paths:
++ * - new pa
++ * object
++ * group
++ *
++ * - find and use pa:
++ * pa
++ *
++ * - release consumed pa:
++ * pa
++ * group
++ * object
++ *
++ * - generate in-core bitmap:
++ * group
++ * pa
++ *
++ * - discard all for given object (inode, locality group):
++ * object
++ * pa
++ * group
++ *
++ * - discard all for given group:
++ * group
++ * pa
++ * group
++ * object
+ *
-+ * Deletes target and all sdevs.
+ */
- static void
- fc_starget_delete(struct work_struct *work)
- {
-@@ -2303,9 +2299,8 @@ fc_starget_delete(struct work_struct *work)
-
- /**
- * fc_rport_final_delete - finish rport termination and delete it.
-- *
- * @work: remote port to be deleted.
-- **/
++
++/*
++ * with AGGRESSIVE_CHECK allocator runs consistency checks over
++ * structures. these checks slow things down a lot
+ */
- static void
- fc_rport_final_delete(struct work_struct *work)
- {
-@@ -2375,7 +2370,7 @@ fc_rport_final_delete(struct work_struct *work)
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++#define AGGRESSIVE_CHECK__
++
++/*
++ * with DOUBLE_CHECK defined mballoc creates persistent in-core
++ * bitmaps, maintains and uses them to check for double allocations
+ */
- static struct fc_rport *
- fc_rport_create(struct Scsi_Host *shost, int channel,
- struct fc_rport_identifiers *ids)
-@@ -2462,8 +2457,7 @@ delete_rport:
- }
-
- /**
-- * fc_remote_port_add - notifies the fc transport of the existence
-- * of a remote FC port.
-+ * fc_remote_port_add - notify fc transport of the existence of a remote FC port.
- * @shost: scsi host the remote port is connected to.
- * @channel: Channel on shost port connected to.
- * @ids: The world wide names, fc address, and FC4 port
-@@ -2499,7 +2493,7 @@ delete_rport:
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++#define DOUBLE_CHECK__
++
++/*
+ */
- struct fc_rport *
- fc_remote_port_add(struct Scsi_Host *shost, int channel,
- struct fc_rport_identifiers *ids)
-@@ -2683,19 +2677,18 @@ EXPORT_SYMBOL(fc_remote_port_add);
-
-
- /**
-- * fc_remote_port_delete - notifies the fc transport that a remote
-- * port is no longer in existence.
-+ * fc_remote_port_delete - notifies the fc transport that a remote port is no longer in existence.
- * @rport: The remote port that no longer exists
- *
- * The LLDD calls this routine to notify the transport that a remote
- * port is no longer part of the topology. Note: Although a port
- * may no longer be part of the topology, it may persist in the remote
- * ports displayed by the fc_host. We do this under 2 conditions:
-- * - If the port was a scsi target, we delay its deletion by "blocking" it.
-+ * 1) If the port was a scsi target, we delay its deletion by "blocking" it.
- * This allows the port to temporarily disappear, then reappear without
- * disrupting the SCSI device tree attached to it. During the "blocked"
- * period the port will still exist.
-- * - If the port was a scsi target and disappears for longer than we
-+ * 2) If the port was a scsi target and disappears for longer than we
- * expect, we'll delete the port and the tear down the SCSI device tree
- * attached to it. However, we want to semi-persist the target id assigned
- * to that port if it eventually does exist. The port structure will
-@@ -2709,7 +2702,8 @@ EXPORT_SYMBOL(fc_remote_port_add);
- * temporary blocked state. From the LLDD's perspective, the rport no
- * longer exists. From the SCSI midlayer's perspective, the SCSI target
- * exists, but all sdevs on it are blocked from further I/O. The following
-- * is then expected:
-+ * is then expected.
-+ *
- * If the remote port does not return (signaled by a LLDD call to
- * fc_remote_port_add()) within the dev_loss_tmo timeout, then the
- * scsi target is removed - killing all outstanding i/o and removing the
-@@ -2731,7 +2725,7 @@ EXPORT_SYMBOL(fc_remote_port_add);
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++#define MB_DEBUG__
++#ifdef MB_DEBUG
++#define mb_debug(fmt, a...) printk(fmt, ##a)
++#else
++#define mb_debug(fmt, a...)
++#endif
++
++/*
++ * with EXT4_MB_HISTORY mballoc stores the last N allocations in memory
++ * and you can monitor it in /proc/fs/ext4/<dev>/mb_history
+ */
- void
- fc_remote_port_delete(struct fc_rport *rport)
- {
-@@ -2792,12 +2786,12 @@ fc_remote_port_delete(struct fc_rport *rport)
- EXPORT_SYMBOL(fc_remote_port_delete);
-
- /**
-- * fc_remote_port_rolechg - notifies the fc transport that the roles
-- * on a remote may have changed.
-+ * fc_remote_port_rolechg - notifies the fc transport that the roles on a remote may have changed.
- * @rport: The remote port that changed.
-+ * @roles: New roles for this port.
- *
-- * The LLDD calls this routine to notify the transport that the roles
-- * on a remote port may have changed. The largest effect of this is
-+ * Description: The LLDD calls this routine to notify the transport that the
-+ * roles on a remote port may have changed. The largest effect of this is
- * if a port now becomes a FCP Target, it must be allocated a
- * scsi target id. If the port is no longer a FCP target, any
- * scsi target id value assigned to it will persist in case the
-@@ -2810,7 +2804,7 @@ EXPORT_SYMBOL(fc_remote_port_delete);
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
++#define EXT4_MB_HISTORY
++#define EXT4_MB_HISTORY_ALLOC 1 /* allocation */
++#define EXT4_MB_HISTORY_PREALLOC 2 /* preallocated blocks used */
++#define EXT4_MB_HISTORY_DISCARD 4 /* preallocation discarded */
++#define EXT4_MB_HISTORY_FREE 8 /* free */
++
++#define EXT4_MB_HISTORY_DEFAULT (EXT4_MB_HISTORY_ALLOC | \
++ EXT4_MB_HISTORY_PREALLOC)
++
++/*
++ * How long mballoc can look for a best extent (in found extents)
+ */
- void
- fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
- {
-@@ -2875,12 +2869,12 @@ fc_remote_port_rolechg(struct fc_rport *rport, u32 roles)
- EXPORT_SYMBOL(fc_remote_port_rolechg);
-
- /**
-- * fc_timeout_deleted_rport - Timeout handler for a deleted remote port,
-- * which we blocked, and has now failed to return
-- * in the allotted time.
-- *
-+ * fc_timeout_deleted_rport - Timeout handler for a deleted remote port.
- * @work: rport target that failed to reappear in the allotted time.
-- **/
++#define MB_DEFAULT_MAX_TO_SCAN 200
++
++/*
++ * How long mballoc must look for a best extent
++ */
++#define MB_DEFAULT_MIN_TO_SCAN 10
++
++/*
++ * How many groups mballoc will scan looking for the best chunk
++ */
++#define MB_DEFAULT_MAX_GROUPS_TO_SCAN 5
++
++/*
++ * with 'ext4_mb_stats' set, the allocator collects stats that
++ * are shown at umount. Collecting them is not free, though!
++ */
++#define MB_DEFAULT_STATS 1
++
++/*
++ * files smaller than MB_DEFAULT_STREAM_THRESHOLD are served
++ * by the stream allocator, whose purpose is to pack requests
++ * as close to each other as possible to produce smooth I/O traffic.
++ * We use locality group prealloc space for stream requests.
++ * We can tune this via /proc/fs/ext4/<partition>/stream_req
++ */
++#define MB_DEFAULT_STREAM_THRESHOLD 16 /* 64K */
++
++/*
++ * minimum request order for which the 2^N buddy search is used
++ */
++#define MB_DEFAULT_ORDER2_REQS 2
++
++/*
++ * default group prealloc size 512 blocks
++ */
++#define MB_DEFAULT_GROUP_PREALLOC 512
++
++static struct kmem_cache *ext4_pspace_cachep;
++
++#ifdef EXT4_BB_MAX_BLOCKS
++#undef EXT4_BB_MAX_BLOCKS
++#endif
++#define EXT4_BB_MAX_BLOCKS 30
++
++struct ext4_free_metadata {
++ ext4_group_t group;
++ unsigned short num;
++ ext4_grpblk_t blocks[EXT4_BB_MAX_BLOCKS];
++ struct list_head list;
++};
++
++struct ext4_group_info {
++ unsigned long bb_state;
++ unsigned long bb_tid;
++ struct ext4_free_metadata *bb_md_cur;
++ unsigned short bb_first_free;
++ unsigned short bb_free;
++ unsigned short bb_fragments;
++ struct list_head bb_prealloc_list;
++#ifdef DOUBLE_CHECK
++ void *bb_bitmap;
++#endif
++ unsigned short bb_counters[];
++};
++
++#define EXT4_GROUP_INFO_NEED_INIT_BIT 0
++#define EXT4_GROUP_INFO_LOCKED_BIT 1
++
++#define EXT4_MB_GRP_NEED_INIT(grp) \
++ (test_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &((grp)->bb_state)))
++
++
++struct ext4_prealloc_space {
++ struct list_head pa_inode_list;
++ struct list_head pa_group_list;
++ union {
++ struct list_head pa_tmp_list;
++ struct rcu_head pa_rcu;
++ } u;
++ spinlock_t pa_lock;
++ atomic_t pa_count;
++ unsigned pa_deleted;
++ ext4_fsblk_t pa_pstart; /* phys. block */
++ ext4_lblk_t pa_lstart; /* log. block */
++ unsigned short pa_len; /* len of preallocated chunk */
++ unsigned short pa_free; /* how many blocks are free */
++ unsigned short pa_linear; /* consumed in one direction
++ * strictly, for grp prealloc */
++ spinlock_t *pa_obj_lock;
++ struct inode *pa_inode; /* hack, for history only */
++};
++
++
++struct ext4_free_extent {
++ ext4_lblk_t fe_logical;
++ ext4_grpblk_t fe_start;
++ ext4_group_t fe_group;
++ int fe_len;
++};
++
++/*
++ * Locality group:
++ * we try to group all related changes together
++ * so that writeback can flush/allocate them together as well
++ */
++struct ext4_locality_group {
++ /* for allocator */
++ struct mutex lg_mutex; /* to serialize allocates */
++ struct list_head lg_prealloc_list;/* list of preallocations */
++ spinlock_t lg_prealloc_lock;
++};
++
++struct ext4_allocation_context {
++ struct inode *ac_inode;
++ struct super_block *ac_sb;
++
++ /* original request */
++ struct ext4_free_extent ac_o_ex;
++
++ /* goal request (after normalization) */
++ struct ext4_free_extent ac_g_ex;
++
++ /* the best found extent */
++ struct ext4_free_extent ac_b_ex;
++
++	/* copy of the best found extent taken before preallocation efforts */
++ struct ext4_free_extent ac_f_ex;
++
++ /* number of iterations done. we have to track to limit searching */
++ unsigned long ac_ex_scanned;
++ __u16 ac_groups_scanned;
++ __u16 ac_found;
++ __u16 ac_tail;
++ __u16 ac_buddy;
++ __u16 ac_flags; /* allocation hints */
++ __u8 ac_status;
++ __u8 ac_criteria;
++ __u8 ac_repeats;
++ __u8 ac_2order; /* if request is to allocate 2^N blocks and
++ * N > 0, the field stores N, otherwise 0 */
++ __u8 ac_op; /* operation, for history only */
++ struct page *ac_bitmap_page;
++ struct page *ac_buddy_page;
++ struct ext4_prealloc_space *ac_pa;
++ struct ext4_locality_group *ac_lg;
++};
++
++#define AC_STATUS_CONTINUE 1
++#define AC_STATUS_FOUND 2
++#define AC_STATUS_BREAK 3
++
++struct ext4_mb_history {
++ struct ext4_free_extent orig; /* orig allocation */
++ struct ext4_free_extent goal; /* goal allocation */
++ struct ext4_free_extent result; /* result allocation */
++ unsigned pid;
++ unsigned ino;
++ __u16 found; /* how many extents have been found */
++ __u16 groups; /* how many groups have been scanned */
++ __u16 tail; /* what tail broke some buddy */
++ __u16 buddy; /* buddy the tail ^^^ broke */
++ __u16 flags;
++ __u8 cr:3; /* which phase the result extent was found at */
++ __u8 op:4;
++ __u8 merged:1;
++};
++
++struct ext4_buddy {
++ struct page *bd_buddy_page;
++ void *bd_buddy;
++ struct page *bd_bitmap_page;
++ void *bd_bitmap;
++ struct ext4_group_info *bd_info;
++ struct super_block *bd_sb;
++ __u16 bd_blkbits;
++ ext4_group_t bd_group;
++};
++#define EXT4_MB_BITMAP(e4b) ((e4b)->bd_bitmap)
++#define EXT4_MB_BUDDY(e4b) ((e4b)->bd_buddy)
++
++#ifndef EXT4_MB_HISTORY
++static inline void ext4_mb_store_history(struct ext4_allocation_context *ac)
++{
++ return;
++}
++#else
++static void ext4_mb_store_history(struct ext4_allocation_context *ac);
++#endif
++
++#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
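The in_range() macro above treats its range as a closed interval. A minimal standalone sketch of the same check, with the macro rewritten as a function for testing (the name `in_range_` is hypothetical):

```c
#include <assert.h>

/* Same semantics as the in_range() macro: b lies in the
 * closed interval [first, first + len - 1]. */
static int in_range_(unsigned b, unsigned first, unsigned len)
{
    return b >= first && b <= first + len - 1;
}
```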
++
++static struct proc_dir_entry *proc_root_ext4;
++struct buffer_head *read_block_bitmap(struct super_block *, ext4_group_t);
++ext4_fsblk_t ext4_new_blocks_old(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t goal, unsigned long *count, int *errp);
++
++static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
++ ext4_group_t group);
++static void ext4_mb_poll_new_transaction(struct super_block *, handle_t *);
++static void ext4_mb_free_committed_blocks(struct super_block *);
++static void ext4_mb_return_to_preallocation(struct inode *inode,
++ struct ext4_buddy *e4b, sector_t block,
++ int count);
++static void ext4_mb_put_pa(struct ext4_allocation_context *,
++ struct super_block *, struct ext4_prealloc_space *pa);
++static int ext4_mb_init_per_dev_proc(struct super_block *sb);
++static int ext4_mb_destroy_per_dev_proc(struct super_block *sb);
++
++
++static inline void ext4_lock_group(struct super_block *sb, ext4_group_t group)
++{
++ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
++
++ bit_spin_lock(EXT4_GROUP_INFO_LOCKED_BIT, &(grinfo->bb_state));
++}
++
++static inline void ext4_unlock_group(struct super_block *sb,
++ ext4_group_t group)
++{
++ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
++
++ bit_spin_unlock(EXT4_GROUP_INFO_LOCKED_BIT, &(grinfo->bb_state));
++}
++
++static inline int ext4_is_group_locked(struct super_block *sb,
++ ext4_group_t group)
++{
++ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
++
++ return bit_spin_is_locked(EXT4_GROUP_INFO_LOCKED_BIT,
++ &(grinfo->bb_state));
++}
++
++static ext4_fsblk_t ext4_grp_offs_to_block(struct super_block *sb,
++ struct ext4_free_extent *fex)
++{
++ ext4_fsblk_t block;
++
++ block = (ext4_fsblk_t) fex->fe_group * EXT4_BLOCKS_PER_GROUP(sb)
++ + fex->fe_start
++ + le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
++ return block;
++}
++
++#if BITS_PER_LONG == 64
++#define mb_correct_addr_and_bit(bit, addr) \
++{ \
++ bit += ((unsigned long) addr & 7UL) << 3; \
++ addr = (void *) ((unsigned long) addr & ~7UL); \
++}
++#elif BITS_PER_LONG == 32
++#define mb_correct_addr_and_bit(bit, addr) \
++{ \
++ bit += ((unsigned long) addr & 3UL) << 3; \
++ addr = (void *) ((unsigned long) addr & ~3UL); \
++}
++#else
++#error "how many bits you are?!"
++#endif
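The 64-bit branch of mb_correct_addr_and_bit() can be illustrated in isolation: the low three address bits are folded into the bit index (8 bits per skipped byte) so the address becomes 8-byte aligned while still naming the same bit. A minimal sketch, with the macro rewritten as a function (`correct_addr_and_bit` is a hypothetical name):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the 64-bit mb_correct_addr_and_bit():
 * realign addr down to an 8-byte boundary and compensate by
 * raising the bit index. */
static int correct_addr_and_bit(int bit, void **addr)
{
    uintptr_t a = (uintptr_t)*addr;

    bit += (int)((a & 7UL) << 3);  /* 8 bits per skipped byte */
    *addr = (void *)(a & ~7UL);
    return bit;
}
```

E.g. an address 5 bytes past an aligned base with bit 2 becomes the base address with bit 2 + 5*8 = 42.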
++
++static inline int mb_test_bit(int bit, void *addr)
++{
++ /*
++	 * ext4_test_bit on architectures like powerpc
++	 * needs an unsigned long aligned address
++ */
++ mb_correct_addr_and_bit(bit, addr);
++ return ext4_test_bit(bit, addr);
++}
++
++static inline void mb_set_bit(int bit, void *addr)
++{
++ mb_correct_addr_and_bit(bit, addr);
++ ext4_set_bit(bit, addr);
++}
++
++static inline void mb_set_bit_atomic(spinlock_t *lock, int bit, void *addr)
++{
++ mb_correct_addr_and_bit(bit, addr);
++ ext4_set_bit_atomic(lock, bit, addr);
++}
++
++static inline void mb_clear_bit(int bit, void *addr)
++{
++ mb_correct_addr_and_bit(bit, addr);
++ ext4_clear_bit(bit, addr);
++}
++
++static inline void mb_clear_bit_atomic(spinlock_t *lock, int bit, void *addr)
++{
++ mb_correct_addr_and_bit(bit, addr);
++ ext4_clear_bit_atomic(lock, bit, addr);
++}
++
++static void *mb_find_buddy(struct ext4_buddy *e4b, int order, int *max)
++{
++ char *bb;
++
++ /* FIXME!! is this needed */
++ BUG_ON(EXT4_MB_BITMAP(e4b) == EXT4_MB_BUDDY(e4b));
++ BUG_ON(max == NULL);
++
++ if (order > e4b->bd_blkbits + 1) {
++ *max = 0;
++ return NULL;
++ }
++
++ /* at order 0 we see each particular block */
++ *max = 1 << (e4b->bd_blkbits + 3);
++ if (order == 0)
++ return EXT4_MB_BITMAP(e4b);
++
++ bb = EXT4_MB_BUDDY(e4b) + EXT4_SB(e4b->bd_sb)->s_mb_offsets[order];
++ *max = EXT4_SB(e4b->bd_sb)->s_mb_maxs[order];
++
++ return bb;
++}
++
++#ifdef DOUBLE_CHECK
++static void mb_free_blocks_double(struct inode *inode, struct ext4_buddy *e4b,
++ int first, int count)
++{
++ int i;
++ struct super_block *sb = e4b->bd_sb;
++
++ if (unlikely(e4b->bd_info->bb_bitmap == NULL))
++ return;
++ BUG_ON(!ext4_is_group_locked(sb, e4b->bd_group));
++ for (i = 0; i < count; i++) {
++ if (!mb_test_bit(first + i, e4b->bd_info->bb_bitmap)) {
++ ext4_fsblk_t blocknr;
++ blocknr = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb);
++ blocknr += first + i;
++ blocknr +=
++ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
++
++ ext4_error(sb, __FUNCTION__, "double-free of inode"
++ " %lu's block %llu(bit %u in group %lu)\n",
++ inode ? inode->i_ino : 0, blocknr,
++ first + i, e4b->bd_group);
++ }
++ mb_clear_bit(first + i, e4b->bd_info->bb_bitmap);
++ }
++}
++
++static void mb_mark_used_double(struct ext4_buddy *e4b, int first, int count)
++{
++ int i;
++
++ if (unlikely(e4b->bd_info->bb_bitmap == NULL))
++ return;
++ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
++ for (i = 0; i < count; i++) {
++ BUG_ON(mb_test_bit(first + i, e4b->bd_info->bb_bitmap));
++ mb_set_bit(first + i, e4b->bd_info->bb_bitmap);
++ }
++}
++
++static void mb_cmp_bitmaps(struct ext4_buddy *e4b, void *bitmap)
++{
++ if (memcmp(e4b->bd_info->bb_bitmap, bitmap, e4b->bd_sb->s_blocksize)) {
++ unsigned char *b1, *b2;
++ int i;
++ b1 = (unsigned char *) e4b->bd_info->bb_bitmap;
++ b2 = (unsigned char *) bitmap;
++ for (i = 0; i < e4b->bd_sb->s_blocksize; i++) {
++ if (b1[i] != b2[i]) {
++ printk("corruption in group %lu at byte %u(%u):"
++ " %x in copy != %x on disk/prealloc\n",
++ e4b->bd_group, i, i * 8, b1[i], b2[i]);
++ BUG();
++ }
++ }
++ }
++}
++
++#else
++static inline void mb_free_blocks_double(struct inode *inode,
++ struct ext4_buddy *e4b, int first, int count)
++{
++ return;
++}
++static inline void mb_mark_used_double(struct ext4_buddy *e4b,
++ int first, int count)
++{
++ return;
++}
++static inline void mb_cmp_bitmaps(struct ext4_buddy *e4b, void *bitmap)
++{
++ return;
++}
++#endif
++
++#ifdef AGGRESSIVE_CHECK
++
++#define MB_CHECK_ASSERT(assert) \
++do { \
++ if (!(assert)) { \
++ printk(KERN_EMERG \
++ "Assertion failure in %s() at %s:%d: \"%s\"\n", \
++ function, file, line, # assert); \
++ BUG(); \
++ } \
++} while (0)
++
++static int __mb_check_buddy(struct ext4_buddy *e4b, char *file,
++ const char *function, int line)
++{
++ struct super_block *sb = e4b->bd_sb;
++ int order = e4b->bd_blkbits + 1;
++ int max;
++ int max2;
++ int i;
++ int j;
++ int k;
++ int count;
++ struct ext4_group_info *grp;
++ int fragments = 0;
++ int fstart;
++ struct list_head *cur;
++ void *buddy;
++ void *buddy2;
++
++ if (!test_opt(sb, MBALLOC))
++ return 0;
++
++ {
++ static int mb_check_counter;
++ if (mb_check_counter++ % 100 != 0)
++ return 0;
++ }
++
++ while (order > 1) {
++ buddy = mb_find_buddy(e4b, order, &max);
++ MB_CHECK_ASSERT(buddy);
++ buddy2 = mb_find_buddy(e4b, order - 1, &max2);
++ MB_CHECK_ASSERT(buddy2);
++ MB_CHECK_ASSERT(buddy != buddy2);
++ MB_CHECK_ASSERT(max * 2 == max2);
++
++ count = 0;
++ for (i = 0; i < max; i++) {
++
++ if (mb_test_bit(i, buddy)) {
++ /* only single bit in buddy2 may be 1 */
++ if (!mb_test_bit(i << 1, buddy2)) {
++ MB_CHECK_ASSERT(
++ mb_test_bit((i<<1)+1, buddy2));
++ } else if (!mb_test_bit((i << 1) + 1, buddy2)) {
++ MB_CHECK_ASSERT(
++ mb_test_bit(i << 1, buddy2));
++ }
++ continue;
++ }
++
++ /* both bits in buddy2 must be 0 */
++ MB_CHECK_ASSERT(mb_test_bit(i << 1, buddy2));
++ MB_CHECK_ASSERT(mb_test_bit((i << 1) + 1, buddy2));
++
++ for (j = 0; j < (1 << order); j++) {
++ k = (i * (1 << order)) + j;
++ MB_CHECK_ASSERT(
++ !mb_test_bit(k, EXT4_MB_BITMAP(e4b)));
++ }
++ count++;
++ }
++ MB_CHECK_ASSERT(e4b->bd_info->bb_counters[order] == count);
++ order--;
++ }
++
++ fstart = -1;
++ buddy = mb_find_buddy(e4b, 0, &max);
++ for (i = 0; i < max; i++) {
++ if (!mb_test_bit(i, buddy)) {
++ MB_CHECK_ASSERT(i >= e4b->bd_info->bb_first_free);
++ if (fstart == -1) {
++ fragments++;
++ fstart = i;
++ }
++ continue;
++ }
++ fstart = -1;
++ /* check used bits only */
++ for (j = 0; j < e4b->bd_blkbits + 1; j++) {
++ buddy2 = mb_find_buddy(e4b, j, &max2);
++ k = i >> j;
++ MB_CHECK_ASSERT(k < max2);
++ MB_CHECK_ASSERT(mb_test_bit(k, buddy2));
++ }
++ }
++ MB_CHECK_ASSERT(!EXT4_MB_GRP_NEED_INIT(e4b->bd_info));
++ MB_CHECK_ASSERT(e4b->bd_info->bb_fragments == fragments);
++
++ grp = ext4_get_group_info(sb, e4b->bd_group);
++ buddy = mb_find_buddy(e4b, 0, &max);
++ list_for_each(cur, &grp->bb_prealloc_list) {
++ ext4_group_t groupnr;
++ struct ext4_prealloc_space *pa;
++		pa = list_entry(cur, struct ext4_prealloc_space, pa_group_list);
++		ext4_get_group_no_and_offset(sb, pa->pa_pstart, &groupnr, &k);
++		MB_CHECK_ASSERT(groupnr == e4b->bd_group);
++		for (i = 0; i < pa->pa_len; i++)
++			MB_CHECK_ASSERT(mb_test_bit(k + i, buddy));
++ }
++ return 0;
++}
++#undef MB_CHECK_ASSERT
++#define mb_check_buddy(e4b) __mb_check_buddy(e4b, \
++ __FILE__, __FUNCTION__, __LINE__)
++#else
++#define mb_check_buddy(e4b)
++#endif
++
++/* FIXME!! need more doc */
++static void ext4_mb_mark_free_simple(struct super_block *sb,
++ void *buddy, unsigned first, int len,
++ struct ext4_group_info *grp)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ unsigned short min;
++ unsigned short max;
++ unsigned short chunk;
++ unsigned short border;
++
++ BUG_ON(len >= EXT4_BLOCKS_PER_GROUP(sb));
++
++ border = 2 << sb->s_blocksize_bits;
++
++ while (len > 0) {
++ /* find how many blocks can be covered since this position */
++ max = ffs(first | border) - 1;
++
++ /* find how many blocks of power 2 we need to mark */
++ min = fls(len) - 1;
++
++ if (max < min)
++ min = max;
++ chunk = 1 << min;
++
++ /* mark multiblock chunks only */
++ grp->bb_counters[min]++;
++ if (min > 0)
++ mb_clear_bit(first >> min,
++ buddy + sbi->s_mb_offsets[min]);
++
++ len -= chunk;
++ first += chunk;
++ }
++}
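The loop in ext4_mb_mark_free_simple() splits a free range into naturally aligned power-of-two chunks, capped at the group border. A self-contained sketch of just that decomposition, under the assumption that portable stand-ins for the kernel's ffs()/fls() are acceptable (the `split_range` helper and its output array are hypothetical):

```c
#include <assert.h>

/* Portable stand-ins for the kernel's ffs()/fls(). */
static int ffs_(unsigned x)
{
    int r = 1;
    if (!x)
        return 0;
    while (!(x & 1)) { x >>= 1; r++; }
    return r;
}
static int fls_(unsigned x)
{
    int r = 0;
    while (x) { x >>= 1; r++; }
    return r;
}

/* Sketch of the chunking loop in ext4_mb_mark_free_simple():
 * split the range [first, first + len) into aligned chunks of
 * size 2^order, recording each chunk's order in out[]. 'border'
 * caps chunks at the group boundary. Returns the chunk count. */
static int split_range(unsigned first, int len, unsigned border,
                       int *out, int max_out)
{
    int n = 0;
    while (len > 0 && n < max_out) {
        /* largest aligned chunk possible at this position */
        int max = ffs_(first | border) - 1;
        /* largest power of two not exceeding the remaining length */
        int min = fls_((unsigned)len) - 1;
        if (max < min)
            min = max;
        out[n++] = min;
        len   -= 1 << min;
        first += 1 << min;
    }
    return n;
}
```

E.g. a free range starting at block 5 of length 11 decomposes into chunks of orders 0, 1, 3 (sizes 1 + 2 + 8 = 11).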
++
++static void ext4_mb_generate_buddy(struct super_block *sb,
++ void *buddy, void *bitmap, ext4_group_t group)
++{
++ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
++ unsigned short max = EXT4_BLOCKS_PER_GROUP(sb);
++ unsigned short i = 0;
++ unsigned short first;
++ unsigned short len;
++ unsigned free = 0;
++ unsigned fragments = 0;
++ unsigned long long period = get_cycles();
++
++	/* initialize buddy from the bitmap, which is an aggregation
++	 * of the on-disk bitmap and preallocations */
++ i = ext4_find_next_zero_bit(bitmap, max, 0);
++ grp->bb_first_free = i;
++ while (i < max) {
++ fragments++;
++ first = i;
++ i = ext4_find_next_bit(bitmap, max, i);
++ len = i - first;
++ free += len;
++ if (len > 1)
++ ext4_mb_mark_free_simple(sb, buddy, first, len, grp);
++ else
++ grp->bb_counters[0]++;
++ if (i < max)
++ i = ext4_find_next_zero_bit(bitmap, max, i);
++ }
++ grp->bb_fragments = fragments;
++
++ if (free != grp->bb_free) {
++ printk(KERN_DEBUG
++ "EXT4-fs: group %lu: %u blocks in bitmap, %u in gd\n",
++ group, free, grp->bb_free);
++ grp->bb_free = free;
++ }
++
++ clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
++
++ period = get_cycles() - period;
++ spin_lock(&EXT4_SB(sb)->s_bal_lock);
++ EXT4_SB(sb)->s_mb_buddies_generated++;
++ EXT4_SB(sb)->s_mb_generation_time += period;
++ spin_unlock(&EXT4_SB(sb)->s_bal_lock);
++}
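The scan loop above counts free blocks and fragments (runs of consecutive free blocks) in one pass over the bitmap. A simplified sketch of that accounting, using a byte-per-bit array instead of ext4_find_next_bit()/ext4_find_next_zero_bit() for clarity (the helper name is hypothetical):

```c
#include <assert.h>

/* Sketch of the counting done by ext4_mb_generate_buddy():
 * walk a block bitmap (1 = used, 0 = free, one byte per bit
 * here for simplicity) and tally free blocks and fragments. */
static void count_free_fragments(const unsigned char *bitmap, int max,
                                 int *free_out, int *frag_out)
{
    int i = 0, nfree = 0, fragments = 0;

    while (i < max) {
        if (bitmap[i]) {        /* skip used blocks */
            i++;
            continue;
        }
        fragments++;            /* a new free run starts here */
        while (i < max && !bitmap[i]) {
            nfree++;
            i++;
        }
    }
    *free_out = nfree;
    *frag_out = fragments;
}
```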
++
++/* The buddy information is attached to the buddy cache inode
++ * for convenience. The information regarding each group
++ * is loaded via ext4_mb_load_buddy. It comprises the block
++ * bitmap and the buddy information, which are stored in the
++ * inode as
+ *
-+ * Description: An attempt to delete a remote port blocks, and if it fails
-+ * to return in the allotted time this gets called.
-+ */
- static void
- fc_timeout_deleted_rport(struct work_struct *work)
- {
-@@ -2984,14 +2978,12 @@ fc_timeout_deleted_rport(struct work_struct *work)
- }
-
- /**
-- * fc_timeout_fail_rport_io - Timeout handler for a fast io failing on a
-- * disconnected SCSI target.
-- *
-+ * fc_timeout_fail_rport_io - Timeout handler for a fast io failing on a disconnected SCSI target.
- * @work: rport to terminate io on.
- *
- * Notes: Only requests the failure of the io, not that all are flushed
- * prior to returning.
-- **/
-+ */
- static void
- fc_timeout_fail_rport_io(struct work_struct *work)
- {
-@@ -3008,9 +3000,8 @@ fc_timeout_fail_rport_io(struct work_struct *work)
-
- /**
- * fc_scsi_scan_rport - called to perform a scsi scan on a remote port.
-- *
- * @work: remote port to be scanned.
-- **/
-+ */
- static void
- fc_scsi_scan_rport(struct work_struct *work)
- {
-@@ -3047,7 +3038,7 @@ fc_scsi_scan_rport(struct work_struct *work)
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
-+ */
- static int
- fc_vport_create(struct Scsi_Host *shost, int channel, struct device *pdev,
- struct fc_vport_identifiers *ids, struct fc_vport **ret_vport)
-@@ -3172,7 +3163,7 @@ delete_vport:
- *
- * Notes:
- * This routine assumes no locks are held on entry.
-- **/
-+ */
- int
- fc_vport_terminate(struct fc_vport *vport)
- {
-@@ -3232,9 +3223,8 @@ EXPORT_SYMBOL(fc_vport_terminate);
-
- /**
- * fc_vport_sched_delete - workq-based delete request for a vport
-- *
- * @work: vport to be deleted.
-- **/
++ * { page }
++ * [ group 0 buddy][ group 0 bitmap] [group 1][ group 1]...
++ *
++ *
++ * one block each for bitmap and buddy information.
++ * So for each group we take up 2 blocks. A page can
++ * contain blocks_per_page (PAGE_CACHE_SIZE / blocksize) blocks.
++ * So a page can hold information for groups_per_page groups,
++ * where groups_per_page is blocks_per_page/2.
+ */
- static void
- fc_vport_sched_delete(struct work_struct *work)
- {
-diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
-index 5428d15..0d7b4e7 100644
---- a/drivers/scsi/scsi_transport_iscsi.c
-+++ b/drivers/scsi/scsi_transport_iscsi.c
-@@ -30,10 +30,10 @@
- #include <scsi/scsi_transport_iscsi.h>
- #include <scsi/iscsi_if.h>
-
--#define ISCSI_SESSION_ATTRS 15
-+#define ISCSI_SESSION_ATTRS 18
- #define ISCSI_CONN_ATTRS 11
- #define ISCSI_HOST_ATTRS 4
--#define ISCSI_TRANSPORT_VERSION "2.0-724"
-+#define ISCSI_TRANSPORT_VERSION "2.0-867"
-
- struct iscsi_internal {
- int daemon_pid;
-@@ -50,6 +50,7 @@ struct iscsi_internal {
- };
-
- static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
-+static struct workqueue_struct *iscsi_eh_timer_workq;
-
- /*
- * list of registered transports and lock that must
-@@ -115,6 +116,8 @@ static struct attribute_group iscsi_transport_group = {
- .attrs = iscsi_transport_attrs,
- };
-
+
++static int ext4_mb_init_cache(struct page *page, char *incore)
++{
++ int blocksize;
++ int blocks_per_page;
++ int groups_per_page;
++ int err = 0;
++ int i;
++ ext4_group_t first_group;
++ int first_block;
++ struct super_block *sb;
++ struct buffer_head *bhs;
++ struct buffer_head **bh;
++ struct inode *inode;
++ char *data;
++ char *bitmap;
+
- static int iscsi_setup_host(struct transport_container *tc, struct device *dev,
- struct class_device *cdev)
- {
-@@ -124,13 +127,30 @@ static int iscsi_setup_host(struct transport_container *tc, struct device *dev,
- memset(ihost, 0, sizeof(*ihost));
- INIT_LIST_HEAD(&ihost->sessions);
- mutex_init(&ihost->mutex);
++ mb_debug("init page %lu\n", page->index);
++
++ inode = page->mapping->host;
++ sb = inode->i_sb;
++ blocksize = 1 << inode->i_blkbits;
++ blocks_per_page = PAGE_CACHE_SIZE / blocksize;
++
++ groups_per_page = blocks_per_page >> 1;
++ if (groups_per_page == 0)
++ groups_per_page = 1;
++
++ /* allocate buffer_heads to read bitmaps */
++ if (groups_per_page > 1) {
++ err = -ENOMEM;
++ i = sizeof(struct buffer_head *) * groups_per_page;
++ bh = kzalloc(i, GFP_NOFS);
++ if (bh == NULL)
++ goto out;
++ } else
++ bh = &bhs;
++
++ first_group = page->index * blocks_per_page / 2;
++
++ /* read all groups the page covers into the cache */
++ for (i = 0; i < groups_per_page; i++) {
++ struct ext4_group_desc *desc;
++
++ if (first_group + i >= EXT4_SB(sb)->s_groups_count)
++ break;
++
++ err = -EIO;
++ desc = ext4_get_group_desc(sb, first_group + i, NULL);
++ if (desc == NULL)
++ goto out;
++
++ err = -ENOMEM;
++ bh[i] = sb_getblk(sb, ext4_block_bitmap(sb, desc));
++ if (bh[i] == NULL)
++ goto out;
++
++ if (bh_uptodate_or_lock(bh[i]))
++ continue;
++
++ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ ext4_init_block_bitmap(sb, bh[i],
++ first_group + i, desc);
++ set_buffer_uptodate(bh[i]);
++ unlock_buffer(bh[i]);
++ continue;
++ }
++ get_bh(bh[i]);
++ bh[i]->b_end_io = end_buffer_read_sync;
++ submit_bh(READ, bh[i]);
++ mb_debug("read bitmap for group %lu\n", first_group + i);
++ }
++
++ /* wait for I/O completion */
++ for (i = 0; i < groups_per_page && bh[i]; i++)
++ wait_on_buffer(bh[i]);
++
++ err = -EIO;
++ for (i = 0; i < groups_per_page && bh[i]; i++)
++ if (!buffer_uptodate(bh[i]))
++ goto out;
++
++ first_block = page->index * blocks_per_page;
++ for (i = 0; i < blocks_per_page; i++) {
++ int group;
++ struct ext4_group_info *grinfo;
++
++ group = (first_block + i) >> 1;
++ if (group >= EXT4_SB(sb)->s_groups_count)
++ break;
++
++ /*
++		 * data carries information regarding this
++		 * particular group in the format specified
++		 * above
++ *
++ */
++ data = page_address(page) + (i * blocksize);
++ bitmap = bh[group - first_group]->b_data;
++
++ /*
++ * We place the buddy block and bitmap block
++ * close together
++ */
++ if ((first_block + i) & 1) {
++ /* this is block of buddy */
++ BUG_ON(incore == NULL);
++ mb_debug("put buddy for group %u in page %lu/%x\n",
++ group, page->index, i * blocksize);
++ memset(data, 0xff, blocksize);
++ grinfo = ext4_get_group_info(sb, group);
++ grinfo->bb_fragments = 0;
++ memset(grinfo->bb_counters, 0,
++ sizeof(unsigned short)*(sb->s_blocksize_bits+2));
++ /*
++ * incore got set to the group block bitmap below
++ */
++ ext4_mb_generate_buddy(sb, data, incore, group);
++ incore = NULL;
++ } else {
++ /* this is block of bitmap */
++ BUG_ON(incore != NULL);
++ mb_debug("put bitmap for group %u in page %lu/%x\n",
++ group, page->index, i * blocksize);
++
++ /* see comments in ext4_mb_put_pa() */
++ ext4_lock_group(sb, group);
++ memcpy(data, bitmap, blocksize);
++
++ /* mark all preallocated blks used in in-core bitmap */
++ ext4_mb_generate_from_pa(sb, data, group);
++ ext4_unlock_group(sb, group);
++
++ /* set incore so that the buddy information can be
++ * generated using this
++ */
++ incore = data;
++ }
++ }
++ SetPageUptodate(page);
++
++out:
++ if (bh) {
++ for (i = 0; i < groups_per_page && bh[i]; i++)
++ brelse(bh[i]);
++ if (bh != &bhs)
++ kfree(bh);
++ }
++ return err;
++}
++
++static int ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
++ struct ext4_buddy *e4b)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct inode *inode = sbi->s_buddy_cache;
++ int blocks_per_page;
++ int block;
++ int pnum;
++ int poff;
++ struct page *page;
++
++ mb_debug("load group %lu\n", group);
++
++ blocks_per_page = PAGE_CACHE_SIZE / sb->s_blocksize;
++
++ e4b->bd_blkbits = sb->s_blocksize_bits;
++ e4b->bd_info = ext4_get_group_info(sb, group);
++ e4b->bd_sb = sb;
++ e4b->bd_group = group;
++ e4b->bd_buddy_page = NULL;
++ e4b->bd_bitmap_page = NULL;
++
++ /*
++ * the buddy cache inode stores the block bitmap
++ * and buddy information in consecutive blocks.
++ * So for each group we need two blocks.
++ */
++ block = group * 2;
++ pnum = block / blocks_per_page;
++ poff = block % blocks_per_page;
++
++	/* we could use find_or_create_page(), but it locks the page,
++	 * which we'd like to avoid in the fast path ... */
++ page = find_get_page(inode->i_mapping, pnum);
++ if (page == NULL || !PageUptodate(page)) {
++ if (page)
++ page_cache_release(page);
++ page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
++ if (page) {
++ BUG_ON(page->mapping != inode->i_mapping);
++ if (!PageUptodate(page)) {
++ ext4_mb_init_cache(page, NULL);
++ mb_cmp_bitmaps(e4b, page_address(page) +
++ (poff * sb->s_blocksize));
++ }
++ unlock_page(page);
++ }
++ }
++ if (page == NULL || !PageUptodate(page))
++ goto err;
++ e4b->bd_bitmap_page = page;
++ e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
++ mark_page_accessed(page);
++
++ block++;
++ pnum = block / blocks_per_page;
++ poff = block % blocks_per_page;
++
++ page = find_get_page(inode->i_mapping, pnum);
++ if (page == NULL || !PageUptodate(page)) {
++ if (page)
++ page_cache_release(page);
++ page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
++ if (page) {
++ BUG_ON(page->mapping != inode->i_mapping);
++ if (!PageUptodate(page))
++ ext4_mb_init_cache(page, e4b->bd_bitmap);
++
++ unlock_page(page);
++ }
++ }
++ if (page == NULL || !PageUptodate(page))
++ goto err;
++ e4b->bd_buddy_page = page;
++ e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize);
++ mark_page_accessed(page);
++
++ BUG_ON(e4b->bd_bitmap_page == NULL);
++ BUG_ON(e4b->bd_buddy_page == NULL);
+
-+ snprintf(ihost->unbind_workq_name, KOBJ_NAME_LEN, "iscsi_unbind_%d",
-+ shost->host_no);
-+ ihost->unbind_workq = create_singlethread_workqueue(
-+ ihost->unbind_workq_name);
-+ if (!ihost->unbind_workq)
-+ return -ENOMEM;
+ return 0;
++
++err:
++ if (e4b->bd_bitmap_page)
++ page_cache_release(e4b->bd_bitmap_page);
++ if (e4b->bd_buddy_page)
++ page_cache_release(e4b->bd_buddy_page);
++ e4b->bd_buddy = NULL;
++ e4b->bd_bitmap = NULL;
++ return -EIO;
+}
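The index math in ext4_mb_load_buddy() — two consecutive cache-inode blocks per group, bitmap first, buddy second — can be checked in isolation. A sketch with hypothetical names, where `page_size` and `blocksize` stand in for PAGE_CACHE_SIZE and sb->s_blocksize:

```c
#include <assert.h>

struct cache_pos {
    int pnum;  /* page index within the buddy cache inode */
    int poff;  /* block offset inside that page */
};

/* Sketch of the buddy-cache addressing in ext4_mb_load_buddy():
 * group g uses blocks 2g (bitmap) and 2g+1 (buddy). */
static struct cache_pos group_block_pos(unsigned group, int which,
                                        int page_size, int blocksize)
{
    int blocks_per_page = page_size / blocksize;
    int block = group * 2 + which;  /* which: 0 = bitmap, 1 = buddy */
    struct cache_pos pos;

    pos.pnum = block / blocks_per_page;
    pos.poff = block % blocks_per_page;
    return pos;
}
```

With 4K pages and 1K blocks (four blocks per page), group 5's bitmap lands at page 2, offset 2, and its buddy at page 2, offset 3 — adjacent, as the layout comment above describes.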
+
-+static int iscsi_remove_host(struct transport_container *tc, struct device *dev,
-+ struct class_device *cdev)
++static void ext4_mb_release_desc(struct ext4_buddy *e4b)
+{
-+ struct Scsi_Host *shost = dev_to_shost(dev);
-+ struct iscsi_host *ihost = shost->shost_data;
++ if (e4b->bd_bitmap_page)
++ page_cache_release(e4b->bd_bitmap_page);
++ if (e4b->bd_buddy_page)
++ page_cache_release(e4b->bd_buddy_page);
++}
+
-+ destroy_workqueue(ihost->unbind_workq);
- return 0;
- }
-
- static DECLARE_TRANSPORT_CLASS(iscsi_host_class,
- "iscsi_host",
- iscsi_setup_host,
-- NULL,
-+ iscsi_remove_host,
- NULL);
-
- static DECLARE_TRANSPORT_CLASS(iscsi_session_class,
-@@ -252,7 +272,7 @@ static void session_recovery_timedout(struct work_struct *work)
- void iscsi_unblock_session(struct iscsi_cls_session *session)
- {
- if (!cancel_delayed_work(&session->recovery_work))
-- flush_scheduled_work();
-+ flush_workqueue(iscsi_eh_timer_workq);
- scsi_target_unblock(&session->dev);
- }
- EXPORT_SYMBOL_GPL(iscsi_unblock_session);
-@@ -260,11 +280,40 @@ EXPORT_SYMBOL_GPL(iscsi_unblock_session);
- void iscsi_block_session(struct iscsi_cls_session *session)
- {
- scsi_target_block(&session->dev);
-- schedule_delayed_work(&session->recovery_work,
-- session->recovery_tmo * HZ);
-+ queue_delayed_work(iscsi_eh_timer_workq, &session->recovery_work,
-+ session->recovery_tmo * HZ);
- }
- EXPORT_SYMBOL_GPL(iscsi_block_session);
-
-+static void __iscsi_unbind_session(struct work_struct *work)
++
++static int mb_find_order_for_block(struct ext4_buddy *e4b, int block)
+{
-+ struct iscsi_cls_session *session =
-+ container_of(work, struct iscsi_cls_session,
-+ unbind_work);
-+ struct Scsi_Host *shost = iscsi_session_to_shost(session);
-+ struct iscsi_host *ihost = shost->shost_data;
++ int order = 1;
++ void *bb;
+
-+ /* Prevent new scans and make sure scanning is not in progress */
-+ mutex_lock(&ihost->mutex);
-+ if (list_empty(&session->host_list)) {
-+ mutex_unlock(&ihost->mutex);
-+ return;
++ BUG_ON(EXT4_MB_BITMAP(e4b) == EXT4_MB_BUDDY(e4b));
++ BUG_ON(block >= (1 << (e4b->bd_blkbits + 3)));
++
++ bb = EXT4_MB_BUDDY(e4b);
++ while (order <= e4b->bd_blkbits + 1) {
++ block = block >> 1;
++ if (!mb_test_bit(block, bb)) {
++ /* this block is part of buddy of order 'order' */
++ return order;
++ }
++ bb += 1 << (e4b->bd_blkbits - order);
++ order++;
+ }
-+ list_del_init(&session->host_list);
-+ mutex_unlock(&ihost->mutex);
++ return 0;
++}
+
-+ scsi_remove_target(&session->dev);
-+ iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
++static void mb_clear_bits(spinlock_t *lock, void *bm, int cur, int len)
++{
++ __u32 *addr;
++
++ len = cur + len;
++ while (cur < len) {
++ if ((cur & 31) == 0 && (len - cur) >= 32) {
++ /* fast path: clear whole word at once */
++ addr = bm + (cur >> 3);
++ *addr = 0;
++ cur += 32;
++ continue;
++ }
++ mb_clear_bit_atomic(lock, cur, bm);
++ cur++;
++ }
+}
+
-+static int iscsi_unbind_session(struct iscsi_cls_session *session)
++static void mb_set_bits(spinlock_t *lock, void *bm, int cur, int len)
+{
-+ struct Scsi_Host *shost = iscsi_session_to_shost(session);
-+ struct iscsi_host *ihost = shost->shost_data;
++ __u32 *addr;
+
-+ return queue_work(ihost->unbind_workq, &session->unbind_work);
++ len = cur + len;
++ while (cur < len) {
++ if ((cur & 31) == 0 && (len - cur) >= 32) {
++ /* fast path: set whole word at once */
++ addr = bm + (cur >> 3);
++ *addr = 0xffffffff;
++ cur += 32;
++ continue;
++ }
++ mb_set_bit_atomic(lock, cur, bm);
++ cur++;
++ }
+}
+
- struct iscsi_cls_session *
- iscsi_alloc_session(struct Scsi_Host *shost,
- struct iscsi_transport *transport)
-@@ -281,6 +330,7 @@ iscsi_alloc_session(struct Scsi_Host *shost,
- INIT_DELAYED_WORK(&session->recovery_work, session_recovery_timedout);
- INIT_LIST_HEAD(&session->host_list);
- INIT_LIST_HEAD(&session->sess_list);
-+ INIT_WORK(&session->unbind_work, __iscsi_unbind_session);
-
- /* this is released in the dev's release function */
- scsi_host_get(shost);
-@@ -297,6 +347,7 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
- {
- struct Scsi_Host *shost = iscsi_session_to_shost(session);
- struct iscsi_host *ihost;
-+ unsigned long flags;
- int err;
-
- ihost = shost->shost_data;
-@@ -313,9 +364,15 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
- }
- transport_register_device(&session->dev);
-
-+ spin_lock_irqsave(&sesslock, flags);
-+ list_add(&session->sess_list, &sesslist);
-+ spin_unlock_irqrestore(&sesslock, flags);
++static int mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
++ int first, int count)
++{
++ int block = 0;
++ int max = 0;
++ int order;
++ void *buddy;
++ void *buddy2;
++ struct super_block *sb = e4b->bd_sb;
+
- mutex_lock(&ihost->mutex);
- list_add(&session->host_list, &ihost->sessions);
- mutex_unlock(&ihost->mutex);
++ BUG_ON(first + count > (sb->s_blocksize << 3));
++ BUG_ON(!ext4_is_group_locked(sb, e4b->bd_group));
++ mb_check_buddy(e4b);
++ mb_free_blocks_double(inode, e4b, first, count);
+
-+ iscsi_session_event(session, ISCSI_KEVENT_CREATE_SESSION);
- return 0;
-
- release_host:
-@@ -328,9 +385,10 @@ EXPORT_SYMBOL_GPL(iscsi_add_session);
- * iscsi_create_session - create iscsi class session
- * @shost: scsi host
- * @transport: iscsi transport
-+ * @target_id: which target
- *
- * This can be called from a LLD or iscsi_transport.
-- **/
-+ */
- struct iscsi_cls_session *
- iscsi_create_session(struct Scsi_Host *shost,
- struct iscsi_transport *transport,
-@@ -350,19 +408,58 @@ iscsi_create_session(struct Scsi_Host *shost,
- }
- EXPORT_SYMBOL_GPL(iscsi_create_session);
-
-+static void iscsi_conn_release(struct device *dev)
++ e4b->bd_info->bb_free += count;
++ if (first < e4b->bd_info->bb_first_free)
++ e4b->bd_info->bb_first_free = first;
++
++ /* let's maintain fragments counter */
++ if (first != 0)
++ block = !mb_test_bit(first - 1, EXT4_MB_BITMAP(e4b));
++ if (first + count < EXT4_SB(sb)->s_mb_maxs[0])
++ max = !mb_test_bit(first + count, EXT4_MB_BITMAP(e4b));
++ if (block && max)
++ e4b->bd_info->bb_fragments--;
++ else if (!block && !max)
++ e4b->bd_info->bb_fragments++;
++
++ /* let's maintain buddy itself */
++ while (count-- > 0) {
++ block = first++;
++ order = 0;
++
++ if (!mb_test_bit(block, EXT4_MB_BITMAP(e4b))) {
++ ext4_fsblk_t blocknr;
++ blocknr = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb);
++ blocknr += block;
++ blocknr +=
++ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
++
++ ext4_error(sb, __FUNCTION__, "double-free of inode"
++ " %lu's block %llu (bit %u in group %lu)\n",
++ inode ? inode->i_ino : 0, blocknr, block,
++ e4b->bd_group);
++ }
++ mb_clear_bit(block, EXT4_MB_BITMAP(e4b));
++ e4b->bd_info->bb_counters[order]++;
++
++ /* start of the buddy */
++ buddy = mb_find_buddy(e4b, order, &max);
++
++ do {
++ block &= ~1UL;
++ if (mb_test_bit(block, buddy) ||
++ mb_test_bit(block + 1, buddy))
++ break;
++
++ /* both the buddies are free, try to coalesce them */
++ buddy2 = mb_find_buddy(e4b, order + 1, &max);
++
++ if (!buddy2)
++ break;
++
++ if (order > 0) {
++ /* for special purposes, we don't set
++ * free bits in bitmap */
++ mb_set_bit(block, buddy);
++ mb_set_bit(block + 1, buddy);
++ }
++ e4b->bd_info->bb_counters[order]--;
++ e4b->bd_info->bb_counters[order]--;
++
++ block = block >> 1;
++ order++;
++ e4b->bd_info->bb_counters[order]++;
++
++ mb_clear_bit(block, buddy2);
++ buddy = buddy2;
++ } while (1);
++ }
++ mb_check_buddy(e4b);
++
++ return 0;
++}
++
++static int mb_find_extent(struct ext4_buddy *e4b, int order, int block,
++ int needed, struct ext4_free_extent *ex)
+{
-+ struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev);
-+ struct device *parent = conn->dev.parent;
++ int next = block;
++ int max;
++ int ord;
++ void *buddy;
+
-+ kfree(conn);
-+ put_device(parent);
++ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
++ BUG_ON(ex == NULL);
++
++ buddy = mb_find_buddy(e4b, order, &max);
++ BUG_ON(buddy == NULL);
++ BUG_ON(block >= max);
++ if (mb_test_bit(block, buddy)) {
++ ex->fe_len = 0;
++ ex->fe_start = 0;
++ ex->fe_group = 0;
++ return 0;
++ }
++
++ /* FIXME: drop order completely? */
++ if (likely(order == 0)) {
++ /* find actual order */
++ order = mb_find_order_for_block(e4b, block);
++ block = block >> order;
++ }
++
++ ex->fe_len = 1 << order;
++ ex->fe_start = block << order;
++ ex->fe_group = e4b->bd_group;
++
++ /* calc difference from given start */
++ next = next - ex->fe_start;
++ ex->fe_len -= next;
++ ex->fe_start += next;
++
++ while (needed > ex->fe_len &&
++ (buddy = mb_find_buddy(e4b, order, &max))) {
++
++ if (block + 1 >= max)
++ break;
++
++ next = (block + 1) * (1 << order);
++ if (mb_test_bit(next, EXT4_MB_BITMAP(e4b)))
++ break;
++
++ ord = mb_find_order_for_block(e4b, next);
++
++ order = ord;
++ block = next >> order;
++ ex->fe_len += 1 << order;
++ }
++
++ BUG_ON(ex->fe_start + ex->fe_len > (1 << (e4b->bd_blkbits + 3)));
++ return ex->fe_len;
+}
+
-+static int iscsi_is_conn_dev(const struct device *dev)
++static int mb_mark_used(struct ext4_buddy *e4b, struct ext4_free_extent *ex)
+{
-+ return dev->release == iscsi_conn_release;
++ int ord;
++ int mlen = 0;
++ int max = 0;
++ int cur;
++ int start = ex->fe_start;
++ int len = ex->fe_len;
++ unsigned ret = 0;
++ int len0 = len;
++ void *buddy;
++
++ BUG_ON(start + len > (e4b->bd_sb->s_blocksize << 3));
++ BUG_ON(e4b->bd_group != ex->fe_group);
++ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
++ mb_check_buddy(e4b);
++ mb_mark_used_double(e4b, start, len);
++
++ e4b->bd_info->bb_free -= len;
++ if (e4b->bd_info->bb_first_free == start)
++ e4b->bd_info->bb_first_free += len;
++
++ /* let's maintain fragments counter */
++ if (start != 0)
++ mlen = !mb_test_bit(start - 1, EXT4_MB_BITMAP(e4b));
++ if (start + len < EXT4_SB(e4b->bd_sb)->s_mb_maxs[0])
++ max = !mb_test_bit(start + len, EXT4_MB_BITMAP(e4b));
++ if (mlen && max)
++ e4b->bd_info->bb_fragments++;
++ else if (!mlen && !max)
++ e4b->bd_info->bb_fragments--;
++
++ /* let's maintain buddy itself */
++ while (len) {
++ ord = mb_find_order_for_block(e4b, start);
++
++ if (((start >> ord) << ord) == start && len >= (1 << ord)) {
++ /* the whole chunk may be allocated at once! */
++ mlen = 1 << ord;
++ buddy = mb_find_buddy(e4b, ord, &max);
++ BUG_ON((start >> ord) >= max);
++ mb_set_bit(start >> ord, buddy);
++ e4b->bd_info->bb_counters[ord]--;
++ start += mlen;
++ len -= mlen;
++ BUG_ON(len < 0);
++ continue;
++ }
++
++ /* store for history */
++ if (ret == 0)
++ ret = len | (ord << 16);
++
++ /* we have to split large buddy */
++ BUG_ON(ord <= 0);
++ buddy = mb_find_buddy(e4b, ord, &max);
++ mb_set_bit(start >> ord, buddy);
++ e4b->bd_info->bb_counters[ord]--;
++
++ ord--;
++ cur = (start >> ord) & ~1U;
++ buddy = mb_find_buddy(e4b, ord, &max);
++ mb_clear_bit(cur, buddy);
++ mb_clear_bit(cur + 1, buddy);
++ e4b->bd_info->bb_counters[ord]++;
++ e4b->bd_info->bb_counters[ord]++;
++ }
++
++ mb_set_bits(sb_bgl_lock(EXT4_SB(e4b->bd_sb), ex->fe_group),
++ EXT4_MB_BITMAP(e4b), ex->fe_start, len0);
++ mb_check_buddy(e4b);
++
++ return ret;
+}
+
-+static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
++/*
++ * Must be called under group lock!
++ */
++static void ext4_mb_use_best_found(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
+{
-+ if (!iscsi_is_conn_dev(dev))
-+ return 0;
-+ return iscsi_destroy_conn(iscsi_dev_to_conn(dev));
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++ int ret;
++
++ BUG_ON(ac->ac_b_ex.fe_group != e4b->bd_group);
++ BUG_ON(ac->ac_status == AC_STATUS_FOUND);
++
++ ac->ac_b_ex.fe_len = min(ac->ac_b_ex.fe_len, ac->ac_g_ex.fe_len);
++ ac->ac_b_ex.fe_logical = ac->ac_g_ex.fe_logical;
++ ret = mb_mark_used(e4b, &ac->ac_b_ex);
++
++ /* preallocation can change ac_b_ex, thus we store actually
++ * allocated blocks for history */
++ ac->ac_f_ex = ac->ac_b_ex;
++
++ ac->ac_status = AC_STATUS_FOUND;
++ ac->ac_tail = ret & 0xffff;
++ ac->ac_buddy = ret >> 16;
++
++ /* XXXXXXX: SUCH A HORRIBLE **CK */
++ /*FIXME!! Why ? */
++ ac->ac_bitmap_page = e4b->bd_bitmap_page;
++ get_page(ac->ac_bitmap_page);
++ ac->ac_buddy_page = e4b->bd_buddy_page;
++ get_page(ac->ac_buddy_page);
++
++ /* store last allocated for subsequent stream allocation */
++ if ((ac->ac_flags & EXT4_MB_HINT_DATA)) {
++ spin_lock(&sbi->s_md_lock);
++ sbi->s_mb_last_group = ac->ac_f_ex.fe_group;
++ sbi->s_mb_last_start = ac->ac_f_ex.fe_start;
++ spin_unlock(&sbi->s_md_lock);
++ }
+}
+
- void iscsi_remove_session(struct iscsi_cls_session *session)
- {
- struct Scsi_Host *shost = iscsi_session_to_shost(session);
- struct iscsi_host *ihost = shost->shost_data;
-+ unsigned long flags;
-+ int err;
-
-- if (!cancel_delayed_work(&session->recovery_work))
-- flush_scheduled_work();
-+ spin_lock_irqsave(&sesslock, flags);
-+ list_del(&session->sess_list);
-+ spin_unlock_irqrestore(&sesslock, flags);
-
-- mutex_lock(&ihost->mutex);
-- list_del(&session->host_list);
-- mutex_unlock(&ihost->mutex);
++/*
++ * regular allocator, for general purposes allocation
++ */
++
++static void ext4_mb_check_limits(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b,
++ int finish_group)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++ struct ext4_free_extent *bex = &ac->ac_b_ex;
++ struct ext4_free_extent *gex = &ac->ac_g_ex;
++ struct ext4_free_extent ex;
++ int max;
++
+ /*
-+ * If we are blocked let commands flow again. The lld or iscsi
-+ * layer should set up the queuecommand to fail commands.
++ * We don't want to scan for a whole year
+ */
-+ iscsi_unblock_session(session);
-+ iscsi_unbind_session(session);
++ if (ac->ac_found > sbi->s_mb_max_to_scan &&
++ !(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
++ ac->ac_status = AC_STATUS_BREAK;
++ return;
++ }
++
+ /*
-+ * If the session dropped while removing devices then we need to make
-+ * sure it is not blocked
++ * Haven't found good chunk so far, let's continue
+ */
-+ if (!cancel_delayed_work(&session->recovery_work))
-+ flush_workqueue(iscsi_eh_timer_workq);
-+ flush_workqueue(ihost->unbind_workq);
-
-- scsi_remove_target(&session->dev);
-+ /* hw iscsi may not have removed all connections from session */
-+ err = device_for_each_child(&session->dev, NULL,
-+ iscsi_iter_destroy_conn_fn);
-+ if (err)
-+ dev_printk(KERN_ERR, &session->dev, "iscsi: Could not delete "
-+ "all connections for session. Error %d.\n", err);
-
- transport_unregister_device(&session->dev);
- device_del(&session->dev);
-@@ -371,9 +468,9 @@ EXPORT_SYMBOL_GPL(iscsi_remove_session);
-
- void iscsi_free_session(struct iscsi_cls_session *session)
- {
-+ iscsi_session_event(session, ISCSI_KEVENT_DESTROY_SESSION);
- put_device(&session->dev);
- }
--
- EXPORT_SYMBOL_GPL(iscsi_free_session);
-
- /**
-@@ -382,7 +479,7 @@ EXPORT_SYMBOL_GPL(iscsi_free_session);
- *
- * Can be called by a LLD or iscsi_transport. There must not be
- * any running connections.
-- **/
-+ */
- int iscsi_destroy_session(struct iscsi_cls_session *session)
- {
- iscsi_remove_session(session);
-@@ -391,20 +488,6 @@ int iscsi_destroy_session(struct iscsi_cls_session *session)
- }
- EXPORT_SYMBOL_GPL(iscsi_destroy_session);
-
--static void iscsi_conn_release(struct device *dev)
--{
-- struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev);
-- struct device *parent = conn->dev.parent;
--
-- kfree(conn);
-- put_device(parent);
--}
--
--static int iscsi_is_conn_dev(const struct device *dev)
--{
-- return dev->release == iscsi_conn_release;
--}
--
- /**
- * iscsi_create_conn - create iscsi class connection
- * @session: iscsi cls session
-@@ -418,12 +501,13 @@ static int iscsi_is_conn_dev(const struct device *dev)
- * for software iscsi we could be trying to preallocate a connection struct
- * in which case there could be two connection structs and cid would be
- * non-zero.
-- **/
-+ */
- struct iscsi_cls_conn *
- iscsi_create_conn(struct iscsi_cls_session *session, uint32_t cid)
- {
- struct iscsi_transport *transport = session->transport;
- struct iscsi_cls_conn *conn;
-+ unsigned long flags;
- int err;
-
- conn = kzalloc(sizeof(*conn) + transport->conndata_size, GFP_KERNEL);
-@@ -452,6 +536,11 @@ iscsi_create_conn(struct iscsi_cls_session *session, uint32_t cid)
- goto release_parent_ref;
- }
- transport_register_device(&conn->dev);
-+
-+ spin_lock_irqsave(&connlock, flags);
-+ list_add(&conn->conn_list, &connlist);
-+ conn->active = 1;
-+ spin_unlock_irqrestore(&connlock, flags);
- return conn;
-
- release_parent_ref:
-@@ -465,17 +554,23 @@ EXPORT_SYMBOL_GPL(iscsi_create_conn);
-
- /**
- * iscsi_destroy_conn - destroy iscsi class connection
-- * @session: iscsi cls session
-+ * @conn: iscsi cls session
- *
- * This can be called from a LLD or iscsi_transport.
-- **/
-+ */
- int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
- {
-+ unsigned long flags;
++ if (bex->fe_len < gex->fe_len)
++ return;
+
-+ spin_lock_irqsave(&connlock, flags);
-+ conn->active = 0;
-+ list_del(&conn->conn_list);
-+ spin_unlock_irqrestore(&connlock, flags);
++ if ((finish_group || ac->ac_found > sbi->s_mb_min_to_scan)
++ && bex->fe_group == e4b->bd_group) {
++ /* recheck chunk's availability - we don't know
++ * when it was found (within this lock-unlock
++ * period or not) */
++ max = mb_find_extent(e4b, 0, bex->fe_start, gex->fe_len, &ex);
++ if (max >= gex->fe_len) {
++ ext4_mb_use_best_found(ac, e4b);
++ return;
++ }
++ }
++}
+
- transport_unregister_device(&conn->dev);
- device_unregister(&conn->dev);
- return 0;
- }
--
- EXPORT_SYMBOL_GPL(iscsi_destroy_conn);
-
- /*
-@@ -685,132 +780,74 @@ iscsi_if_get_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
- }
-
- /**
-- * iscsi_if_destroy_session_done - send session destr. completion event
-- * @conn: last connection for session
-- *
-- * This is called by HW iscsi LLDs to notify userpsace that its HW has
-- * removed a session.
-- **/
--int iscsi_if_destroy_session_done(struct iscsi_cls_conn *conn)
-+ * iscsi_session_event - send session destr. completion event
-+ * @session: iscsi class session
-+ * @event: type of event
++/*
++ * The routine checks whether the found extent is good enough. If it is,
++ * the extent is marked used and a flag is set in the context to stop
++ * scanning. Otherwise, the extent is compared with the previously found
++ * extent; if the new one is better, it is stored in the context. Later,
++ * the best found extent will be used if mballoc can't find a good
++ * enough one.
++ *
++ * FIXME: real allocation policy is to be designed yet!
+ */
-+int iscsi_session_event(struct iscsi_cls_session *session,
-+ enum iscsi_uevent_e event)
- {
- struct iscsi_internal *priv;
-- struct iscsi_cls_session *session;
- struct Scsi_Host *shost;
- struct iscsi_uevent *ev;
- struct sk_buff *skb;
- struct nlmsghdr *nlh;
-- unsigned long flags;
- int rc, len = NLMSG_SPACE(sizeof(*ev));
-
-- priv = iscsi_if_transport_lookup(conn->transport);
-+ priv = iscsi_if_transport_lookup(session->transport);
- if (!priv)
- return -EINVAL;
--
-- session = iscsi_dev_to_session(conn->dev.parent);
- shost = iscsi_session_to_shost(session);
-
- skb = alloc_skb(len, GFP_KERNEL);
- if (!skb) {
-- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
-- "session creation event\n");
-+ dev_printk(KERN_ERR, &session->dev, "Cannot notify userspace "
-+ "of session event %u\n", event);
- return -ENOMEM;
- }
-
- nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
- ev = NLMSG_DATA(nlh);
-- ev->transport_handle = iscsi_handle(conn->transport);
-- ev->type = ISCSI_KEVENT_DESTROY_SESSION;
-- ev->r.d_session.host_no = shost->host_no;
-- ev->r.d_session.sid = session->sid;
--
-- /*
-- * this will occur if the daemon is not up, so we just warn
-- * the user and when the daemon is restarted it will handle it
-- */
-- rc = iscsi_broadcast_skb(skb, GFP_KERNEL);
-- if (rc < 0)
-- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
-- "session destruction event. Check iscsi daemon\n");
--
-- spin_lock_irqsave(&sesslock, flags);
-- list_del(&session->sess_list);
-- spin_unlock_irqrestore(&sesslock, flags);
-+ ev->transport_handle = iscsi_handle(session->transport);
-
-- spin_lock_irqsave(&connlock, flags);
-- conn->active = 0;
-- list_del(&conn->conn_list);
-- spin_unlock_irqrestore(&connlock, flags);
--
-- return rc;
--}
--EXPORT_SYMBOL_GPL(iscsi_if_destroy_session_done);
--
--/**
-- * iscsi_if_create_session_done - send session creation completion event
-- * @conn: leading connection for session
-- *
-- * This is called by HW iscsi LLDs to notify userpsace that its HW has
-- * created a session or a existing session is back in the logged in state.
-- **/
--int iscsi_if_create_session_done(struct iscsi_cls_conn *conn)
--{
-- struct iscsi_internal *priv;
-- struct iscsi_cls_session *session;
-- struct Scsi_Host *shost;
-- struct iscsi_uevent *ev;
-- struct sk_buff *skb;
-- struct nlmsghdr *nlh;
-- unsigned long flags;
-- int rc, len = NLMSG_SPACE(sizeof(*ev));
--
-- priv = iscsi_if_transport_lookup(conn->transport);
-- if (!priv)
-+ ev->type = event;
-+ switch (event) {
-+ case ISCSI_KEVENT_DESTROY_SESSION:
-+ ev->r.d_session.host_no = shost->host_no;
-+ ev->r.d_session.sid = session->sid;
-+ break;
-+ case ISCSI_KEVENT_CREATE_SESSION:
-+ ev->r.c_session_ret.host_no = shost->host_no;
-+ ev->r.c_session_ret.sid = session->sid;
-+ break;
-+ case ISCSI_KEVENT_UNBIND_SESSION:
-+ ev->r.unbind_session.host_no = shost->host_no;
-+ ev->r.unbind_session.sid = session->sid;
-+ break;
-+ default:
-+ dev_printk(KERN_ERR, &session->dev, "Invalid event %u.\n",
-+ event);
-+ kfree_skb(skb);
- return -EINVAL;
--
-- session = iscsi_dev_to_session(conn->dev.parent);
-- shost = iscsi_session_to_shost(session);
--
-- skb = alloc_skb(len, GFP_KERNEL);
-- if (!skb) {
-- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
-- "session creation event\n");
-- return -ENOMEM;
- }
-
-- nlh = __nlmsg_put(skb, priv->daemon_pid, 0, 0, (len - sizeof(*nlh)), 0);
-- ev = NLMSG_DATA(nlh);
-- ev->transport_handle = iscsi_handle(conn->transport);
-- ev->type = ISCSI_UEVENT_CREATE_SESSION;
-- ev->r.c_session_ret.host_no = shost->host_no;
-- ev->r.c_session_ret.sid = session->sid;
--
- /*
- * this will occur if the daemon is not up, so we just warn
- * the user and when the daemon is restarted it will handle it
- */
- rc = iscsi_broadcast_skb(skb, GFP_KERNEL);
- if (rc < 0)
-- dev_printk(KERN_ERR, &conn->dev, "Cannot notify userspace of "
-- "session creation event. Check iscsi daemon\n");
--
-- spin_lock_irqsave(&sesslock, flags);
-- list_add(&session->sess_list, &sesslist);
-- spin_unlock_irqrestore(&sesslock, flags);
--
-- spin_lock_irqsave(&connlock, flags);
-- list_add(&conn->conn_list, &connlist);
-- conn->active = 1;
-- spin_unlock_irqrestore(&connlock, flags);
-+ dev_printk(KERN_ERR, &session->dev, "Cannot notify userspace "
-+ "of session event %u. Check iscsi daemon\n", event);
- return rc;
- }
--EXPORT_SYMBOL_GPL(iscsi_if_create_session_done);
-+EXPORT_SYMBOL_GPL(iscsi_session_event);
-
- static int
- iscsi_if_create_session(struct iscsi_internal *priv, struct iscsi_uevent *ev)
- {
- struct iscsi_transport *transport = priv->iscsi_transport;
- struct iscsi_cls_session *session;
-- unsigned long flags;
- uint32_t hostno;
-
- session = transport->create_session(transport, &priv->t,
-@@ -821,10 +858,6 @@ iscsi_if_create_session(struct iscsi_internal *priv, struct iscsi_uevent *ev)
- if (!session)
- return -ENOMEM;
-
-- spin_lock_irqsave(&sesslock, flags);
-- list_add(&session->sess_list, &sesslist);
-- spin_unlock_irqrestore(&sesslock, flags);
--
- ev->r.c_session_ret.host_no = hostno;
- ev->r.c_session_ret.sid = session->sid;
- return 0;
-@@ -835,7 +868,6 @@ iscsi_if_create_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
- {
- struct iscsi_cls_conn *conn;
- struct iscsi_cls_session *session;
-- unsigned long flags;
-
- session = iscsi_session_lookup(ev->u.c_conn.sid);
- if (!session) {
-@@ -854,28 +886,17 @@ iscsi_if_create_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
-
- ev->r.c_conn_ret.sid = session->sid;
- ev->r.c_conn_ret.cid = conn->cid;
--
-- spin_lock_irqsave(&connlock, flags);
-- list_add(&conn->conn_list, &connlist);
-- conn->active = 1;
-- spin_unlock_irqrestore(&connlock, flags);
--
- return 0;
- }
-
- static int
- iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
- {
-- unsigned long flags;
- struct iscsi_cls_conn *conn;
-
- conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
- if (!conn)
- return -EINVAL;
-- spin_lock_irqsave(&connlock, flags);
-- conn->active = 0;
-- list_del(&conn->conn_list);
-- spin_unlock_irqrestore(&connlock, flags);
-
- if (transport->destroy_conn)
- transport->destroy_conn(conn);
-@@ -1002,7 +1023,6 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
- struct iscsi_internal *priv;
- struct iscsi_cls_session *session;
- struct iscsi_cls_conn *conn;
-- unsigned long flags;
-
- priv = iscsi_if_transport_lookup(iscsi_ptr(ev->transport_handle));
- if (!priv)
-@@ -1020,13 +1040,16 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
- break;
- case ISCSI_UEVENT_DESTROY_SESSION:
- session = iscsi_session_lookup(ev->u.d_session.sid);
-- if (session) {
-- spin_lock_irqsave(&sesslock, flags);
-- list_del(&session->sess_list);
-- spin_unlock_irqrestore(&sesslock, flags);
--
-+ if (session)
- transport->destroy_session(session);
-- } else
-+ else
-+ err = -EINVAL;
-+ break;
-+ case ISCSI_UEVENT_UNBIND_SESSION:
-+ session = iscsi_session_lookup(ev->u.d_session.sid);
-+ if (session)
-+ iscsi_unbind_session(session);
-+ else
- err = -EINVAL;
- break;
- case ISCSI_UEVENT_CREATE_CONN:
-@@ -1179,6 +1202,8 @@ iscsi_conn_attr(port, ISCSI_PARAM_CONN_PORT);
- iscsi_conn_attr(exp_statsn, ISCSI_PARAM_EXP_STATSN);
- iscsi_conn_attr(persistent_address, ISCSI_PARAM_PERSISTENT_ADDRESS);
- iscsi_conn_attr(address, ISCSI_PARAM_CONN_ADDRESS);
-+iscsi_conn_attr(ping_tmo, ISCSI_PARAM_PING_TMO);
-+iscsi_conn_attr(recv_tmo, ISCSI_PARAM_RECV_TMO);
-
- #define iscsi_cdev_to_session(_cdev) \
- iscsi_dev_to_session(_cdev->dev)
-@@ -1217,6 +1242,9 @@ iscsi_session_attr(username, ISCSI_PARAM_USERNAME, 1);
- iscsi_session_attr(username_in, ISCSI_PARAM_USERNAME_IN, 1);
- iscsi_session_attr(password, ISCSI_PARAM_PASSWORD, 1);
- iscsi_session_attr(password_in, ISCSI_PARAM_PASSWORD_IN, 1);
-+iscsi_session_attr(fast_abort, ISCSI_PARAM_FAST_ABORT, 0);
-+iscsi_session_attr(abort_tmo, ISCSI_PARAM_ABORT_TMO, 0);
-+iscsi_session_attr(lu_reset_tmo, ISCSI_PARAM_LU_RESET_TMO, 0);
-
- #define iscsi_priv_session_attr_show(field, format) \
- static ssize_t \
-@@ -1413,6 +1441,8 @@ iscsi_register_transport(struct iscsi_transport *tt)
- SETUP_CONN_RD_ATTR(exp_statsn, ISCSI_EXP_STATSN);
- SETUP_CONN_RD_ATTR(persistent_address, ISCSI_PERSISTENT_ADDRESS);
- SETUP_CONN_RD_ATTR(persistent_port, ISCSI_PERSISTENT_PORT);
-+ SETUP_CONN_RD_ATTR(ping_tmo, ISCSI_PING_TMO);
-+ SETUP_CONN_RD_ATTR(recv_tmo, ISCSI_RECV_TMO);
-
- BUG_ON(count > ISCSI_CONN_ATTRS);
- priv->conn_attrs[count] = NULL;
-@@ -1438,6 +1468,9 @@ iscsi_register_transport(struct iscsi_transport *tt)
- SETUP_SESSION_RD_ATTR(password_in, ISCSI_USERNAME_IN);
- SETUP_SESSION_RD_ATTR(username, ISCSI_PASSWORD);
- SETUP_SESSION_RD_ATTR(username_in, ISCSI_PASSWORD_IN);
-+ SETUP_SESSION_RD_ATTR(fast_abort, ISCSI_FAST_ABORT);
-+ SETUP_SESSION_RD_ATTR(abort_tmo, ISCSI_ABORT_TMO);
-+ SETUP_SESSION_RD_ATTR(lu_reset_tmo,ISCSI_LU_RESET_TMO);
- SETUP_PRIV_SESSION_RD_ATTR(recovery_tmo);
-
- BUG_ON(count > ISCSI_SESSION_ATTRS);
-@@ -1518,8 +1551,14 @@ static __init int iscsi_transport_init(void)
- goto unregister_session_class;
- }
-
-+ iscsi_eh_timer_workq = create_singlethread_workqueue("iscsi_eh");
-+ if (!iscsi_eh_timer_workq)
-+ goto release_nls;
++static void ext4_mb_measure_extent(struct ext4_allocation_context *ac,
++ struct ext4_free_extent *ex,
++ struct ext4_buddy *e4b)
++{
++ struct ext4_free_extent *bex = &ac->ac_b_ex;
++ struct ext4_free_extent *gex = &ac->ac_g_ex;
+
- return 0;
-
-+release_nls:
-+ netlink_kernel_release(nls);
- unregister_session_class:
- transport_class_unregister(&iscsi_session_class);
- unregister_conn_class:
-@@ -1533,7 +1572,8 @@ unregister_transport_class:
-
- static void __exit iscsi_transport_exit(void)
- {
-- sock_release(nls->sk_socket);
-+ destroy_workqueue(iscsi_eh_timer_workq);
-+ netlink_kernel_release(nls);
- transport_class_unregister(&iscsi_connection_class);
- transport_class_unregister(&iscsi_session_class);
- transport_class_unregister(&iscsi_host_class);
-diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
-index 3120f4b..f2149d0 100644
---- a/drivers/scsi/scsi_transport_sas.c
-+++ b/drivers/scsi/scsi_transport_sas.c
-@@ -173,6 +173,7 @@ static void sas_smp_request(struct request_queue *q, struct Scsi_Host *shost,
-
- handler = to_sas_internal(shost->transportt)->f->smp_handler;
- ret = handler(shost, rphy, req);
-+ req->errors = ret;
-
- spin_lock_irq(q->queue_lock);
-
-@@ -323,7 +324,7 @@ static int do_sas_phy_delete(struct device *dev, void *data)
- }
-
- /**
-- * sas_remove_children -- tear down a devices SAS data structures
-+ * sas_remove_children - tear down a devices SAS data structures
- * @dev: device belonging to the sas object
- *
- * Removes all SAS PHYs and remote PHYs for a given object
-@@ -336,7 +337,7 @@ void sas_remove_children(struct device *dev)
- EXPORT_SYMBOL(sas_remove_children);
-
- /**
-- * sas_remove_host -- tear down a Scsi_Host's SAS data structures
-+ * sas_remove_host - tear down a Scsi_Host's SAS data structures
- * @shost: Scsi Host that is torn down
- *
- * Removes all SAS PHYs and remote PHYs for a given Scsi_Host.
-@@ -577,7 +578,7 @@ static void sas_phy_release(struct device *dev)
- }
-
- /**
-- * sas_phy_alloc -- allocates and initialize a SAS PHY structure
-+ * sas_phy_alloc - allocates and initialize a SAS PHY structure
- * @parent: Parent device
- * @number: Phy index
- *
-@@ -618,7 +619,7 @@ struct sas_phy *sas_phy_alloc(struct device *parent, int number)
- EXPORT_SYMBOL(sas_phy_alloc);
-
- /**
-- * sas_phy_add -- add a SAS PHY to the device hierarchy
-+ * sas_phy_add - add a SAS PHY to the device hierarchy
- * @phy: The PHY to be added
- *
- * Publishes a SAS PHY to the rest of the system.
-@@ -638,7 +639,7 @@ int sas_phy_add(struct sas_phy *phy)
- EXPORT_SYMBOL(sas_phy_add);
-
- /**
-- * sas_phy_free -- free a SAS PHY
-+ * sas_phy_free - free a SAS PHY
- * @phy: SAS PHY to free
- *
- * Frees the specified SAS PHY.
-@@ -655,7 +656,7 @@ void sas_phy_free(struct sas_phy *phy)
- EXPORT_SYMBOL(sas_phy_free);
-
- /**
-- * sas_phy_delete -- remove SAS PHY
-+ * sas_phy_delete - remove SAS PHY
- * @phy: SAS PHY to remove
- *
- * Removes the specified SAS PHY. If the SAS PHY has an
-@@ -677,7 +678,7 @@ sas_phy_delete(struct sas_phy *phy)
- EXPORT_SYMBOL(sas_phy_delete);
-
- /**
-- * scsi_is_sas_phy -- check if a struct device represents a SAS PHY
-+ * scsi_is_sas_phy - check if a struct device represents a SAS PHY
- * @dev: device to check
- *
- * Returns:
-@@ -843,7 +844,6 @@ EXPORT_SYMBOL(sas_port_alloc_num);
-
- /**
- * sas_port_add - add a SAS port to the device hierarchy
-- *
- * @port: port to be added
- *
- * publishes a port to the rest of the system
-@@ -868,7 +868,7 @@ int sas_port_add(struct sas_port *port)
- EXPORT_SYMBOL(sas_port_add);
-
- /**
-- * sas_port_free -- free a SAS PORT
-+ * sas_port_free - free a SAS PORT
- * @port: SAS PORT to free
- *
- * Frees the specified SAS PORT.
-@@ -885,7 +885,7 @@ void sas_port_free(struct sas_port *port)
- EXPORT_SYMBOL(sas_port_free);
-
- /**
-- * sas_port_delete -- remove SAS PORT
-+ * sas_port_delete - remove SAS PORT
- * @port: SAS PORT to remove
- *
- * Removes the specified SAS PORT. If the SAS PORT has an
-@@ -924,7 +924,7 @@ void sas_port_delete(struct sas_port *port)
- EXPORT_SYMBOL(sas_port_delete);
-
- /**
-- * scsi_is_sas_port -- check if a struct device represents a SAS port
-+ * scsi_is_sas_port - check if a struct device represents a SAS port
- * @dev: device to check
- *
- * Returns:
-@@ -1309,6 +1309,7 @@ static void sas_rphy_initialize(struct sas_rphy *rphy)
-
- /**
- * sas_end_device_alloc - allocate an rphy for an end device
-+ * @parent: which port
- *
- * Allocates an SAS remote PHY structure, connected to @parent.
- *
-@@ -1345,6 +1346,8 @@ EXPORT_SYMBOL(sas_end_device_alloc);
-
- /**
- * sas_expander_alloc - allocate an rphy for an end device
-+ * @parent: which port
-+ * @type: SAS_EDGE_EXPANDER_DEVICE or SAS_FANOUT_EXPANDER_DEVICE
- *
- * Allocates an SAS remote PHY structure, connected to @parent.
- *
-@@ -1383,7 +1386,7 @@ struct sas_rphy *sas_expander_alloc(struct sas_port *parent,
- EXPORT_SYMBOL(sas_expander_alloc);
-
- /**
-- * sas_rphy_add -- add a SAS remote PHY to the device hierarchy
-+ * sas_rphy_add - add a SAS remote PHY to the device hierarchy
- * @rphy: The remote PHY to be added
- *
- * Publishes a SAS remote PHY to the rest of the system.
-@@ -1430,8 +1433,8 @@ int sas_rphy_add(struct sas_rphy *rphy)
- EXPORT_SYMBOL(sas_rphy_add);
-
- /**
-- * sas_rphy_free -- free a SAS remote PHY
-- * @rphy SAS remote PHY to free
-+ * sas_rphy_free - free a SAS remote PHY
-+ * @rphy: SAS remote PHY to free
- *
- * Frees the specified SAS remote PHY.
- *
-@@ -1459,7 +1462,7 @@ void sas_rphy_free(struct sas_rphy *rphy)
- EXPORT_SYMBOL(sas_rphy_free);
-
- /**
-- * sas_rphy_delete -- remove and free SAS remote PHY
-+ * sas_rphy_delete - remove and free SAS remote PHY
- * @rphy: SAS remote PHY to remove and free
- *
- * Removes the specified SAS remote PHY and frees it.
-@@ -1473,7 +1476,7 @@ sas_rphy_delete(struct sas_rphy *rphy)
- EXPORT_SYMBOL(sas_rphy_delete);
-
- /**
-- * sas_rphy_remove -- remove SAS remote PHY
-+ * sas_rphy_remove - remove SAS remote PHY
- * @rphy: SAS remote phy to remove
- *
- * Removes the specified SAS remote PHY.
-@@ -1504,7 +1507,7 @@ sas_rphy_remove(struct sas_rphy *rphy)
- EXPORT_SYMBOL(sas_rphy_remove);
-
- /**
-- * scsi_is_sas_rphy -- check if a struct device represents a SAS remote PHY
-+ * scsi_is_sas_rphy - check if a struct device represents a SAS remote PHY
- * @dev: device to check
- *
- * Returns:
-@@ -1604,7 +1607,7 @@ static int sas_user_scan(struct Scsi_Host *shost, uint channel,
- SETUP_TEMPLATE(expander_attrs, expander_##field, S_IRUGO, 1)
-
- /**
-- * sas_attach_transport -- instantiate SAS transport template
-+ * sas_attach_transport - instantiate SAS transport template
- * @ft: SAS transport class function template
- */
- struct scsi_transport_template *
-@@ -1715,7 +1718,7 @@ sas_attach_transport(struct sas_function_template *ft)
- EXPORT_SYMBOL(sas_attach_transport);
-
- /**
-- * sas_release_transport -- release SAS transport template instance
-+ * sas_release_transport - release SAS transport template instance
- * @t: transport template instance
- */
- void sas_release_transport(struct scsi_transport_template *t)
-diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
-index 4df21c9..1fb6031 100644
---- a/drivers/scsi/scsi_transport_spi.c
-+++ b/drivers/scsi/scsi_transport_spi.c
-@@ -52,13 +52,6 @@
- struct spi_internal {
- struct scsi_transport_template t;
- struct spi_function_template *f;
-- /* The actual attributes */
-- struct class_device_attribute private_attrs[SPI_NUM_ATTRS];
-- /* The array of null terminated pointers to attributes
-- * needed by scsi_sysfs.c */
-- struct class_device_attribute *attrs[SPI_NUM_ATTRS + SPI_OTHER_ATTRS + 1];
-- struct class_device_attribute private_host_attrs[SPI_HOST_ATTRS];
-- struct class_device_attribute *host_attrs[SPI_HOST_ATTRS + 1];
- };
-
- #define to_spi_internal(tmpl) container_of(tmpl, struct spi_internal, t)
-@@ -174,17 +167,20 @@ static int spi_host_setup(struct transport_container *tc, struct device *dev,
- return 0;
- }
-
-+static int spi_host_configure(struct transport_container *tc,
-+ struct device *dev,
-+ struct class_device *cdev);
++ BUG_ON(ex->fe_len <= 0);
++ BUG_ON(ex->fe_len >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
++ BUG_ON(ex->fe_start >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
++ BUG_ON(ac->ac_status != AC_STATUS_CONTINUE);
+
- static DECLARE_TRANSPORT_CLASS(spi_host_class,
- "spi_host",
- spi_host_setup,
- NULL,
-- NULL);
-+ spi_host_configure);
-
- static int spi_host_match(struct attribute_container *cont,
- struct device *dev)
- {
- struct Scsi_Host *shost;
-- struct spi_internal *i;
-
- if (!scsi_is_host_device(dev))
- return 0;
-@@ -194,11 +190,13 @@ static int spi_host_match(struct attribute_container *cont,
- != &spi_host_class.class)
- return 0;
-
-- i = to_spi_internal(shost->transportt);
--
-- return &i->t.host_attrs.ac == cont;
-+ return &shost->transportt->host_attrs.ac == cont;
- }
-
-+static int spi_target_configure(struct transport_container *tc,
-+ struct device *dev,
-+ struct class_device *cdev);
++ ac->ac_found++;
+
- static int spi_device_configure(struct transport_container *tc,
- struct device *dev,
- struct class_device *cdev)
-@@ -300,8 +298,10 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
- struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
- struct spi_internal *i = to_spi_internal(shost->transportt); \
- \
-+ if (!i->f->set_##field) \
-+ return -EINVAL; \
- val = simple_strtoul(buf, NULL, 0); \
-- i->f->set_##field(starget, val); \
-+ i->f->set_##field(starget, val); \
- return count; \
- }
-
-@@ -317,6 +317,8 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
- struct spi_transport_attrs *tp \
- = (struct spi_transport_attrs *)&starget->starget_data; \
- \
-+ if (i->f->set_##field) \
-+ return -EINVAL; \
- val = simple_strtoul(buf, NULL, 0); \
- if (val > tp->max_##field) \
- val = tp->max_##field; \
-@@ -327,14 +329,14 @@ store_spi_transport_##field(struct class_device *cdev, const char *buf, \
- #define spi_transport_rd_attr(field, format_string) \
- spi_transport_show_function(field, format_string) \
- spi_transport_store_function(field, format_string) \
--static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
-+static CLASS_DEVICE_ATTR(field, S_IRUGO, \
- show_spi_transport_##field, \
- store_spi_transport_##field);
-
- #define spi_transport_simple_attr(field, format_string) \
- spi_transport_show_simple(field, format_string) \
- spi_transport_store_simple(field, format_string) \
--static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
-+static CLASS_DEVICE_ATTR(field, S_IRUGO, \
- show_spi_transport_##field, \
- store_spi_transport_##field);
-
-@@ -342,7 +344,7 @@ static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
- spi_transport_show_function(field, format_string) \
- spi_transport_store_max(field, format_string) \
- spi_transport_simple_attr(max_##field, format_string) \
--static CLASS_DEVICE_ATTR(field, S_IRUGO | S_IWUSR, \
-+static CLASS_DEVICE_ATTR(field, S_IRUGO, \
- show_spi_transport_##field, \
- store_spi_transport_##field);
-
-@@ -472,6 +474,9 @@ store_spi_transport_period(struct class_device *cdev, const char *buf,
- (struct spi_transport_attrs *)&starget->starget_data;
- int period, retval;
-
-+ if (!i->f->set_period)
-+ return -EINVAL;
++ /*
++ * The special case: take whatever we find first
++ */
++ if (unlikely(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
++ *bex = *ex;
++ ext4_mb_use_best_found(ac, e4b);
++ return;
++ }
+
- retval = store_spi_transport_period_helper(cdev, buf, count, &period);
-
- if (period < tp->min_period)
-@@ -482,7 +487,7 @@ store_spi_transport_period(struct class_device *cdev, const char *buf,
- return retval;
- }
-
--static CLASS_DEVICE_ATTR(period, S_IRUGO | S_IWUSR,
-+static CLASS_DEVICE_ATTR(period, S_IRUGO,
- show_spi_transport_period,
- store_spi_transport_period);
-
-@@ -490,9 +495,14 @@ static ssize_t
- show_spi_transport_min_period(struct class_device *cdev, char *buf)
- {
- struct scsi_target *starget = transport_class_to_starget(cdev);
-+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
-+ struct spi_internal *i = to_spi_internal(shost->transportt);
- struct spi_transport_attrs *tp =
- (struct spi_transport_attrs *)&starget->starget_data;
-
-+ if (!i->f->set_period)
-+ return -EINVAL;
++ /*
++ * Let's check whether the chunk is good enough
++ */
++ if (ex->fe_len == gex->fe_len) {
++ *bex = *ex;
++ ext4_mb_use_best_found(ac, e4b);
++ return;
++ }
+
- return show_spi_transport_period_helper(buf, tp->min_period);
- }
-
-@@ -509,7 +519,7 @@ store_spi_transport_min_period(struct class_device *cdev, const char *buf,
- }
-
-
--static CLASS_DEVICE_ATTR(min_period, S_IRUGO | S_IWUSR,
-+static CLASS_DEVICE_ATTR(min_period, S_IRUGO,
- show_spi_transport_min_period,
- store_spi_transport_min_period);
-
-@@ -531,12 +541,15 @@ static ssize_t store_spi_host_signalling(struct class_device *cdev,
- struct spi_internal *i = to_spi_internal(shost->transportt);
- enum spi_signal_type type = spi_signal_to_value(buf);
-
-+ if (!i->f->set_signalling)
-+ return -EINVAL;
++ /*
++ * If this is the first found extent, just store it in the context
++ */
++ if (bex->fe_len == 0) {
++ *bex = *ex;
++ return;
++ }
+
- if (type != SPI_SIGNAL_UNKNOWN)
- i->f->set_signalling(shost, type);
-
- return count;
- }
--static CLASS_DEVICE_ATTR(signalling, S_IRUGO | S_IWUSR,
-+static CLASS_DEVICE_ATTR(signalling, S_IRUGO,
- show_spi_host_signalling,
- store_spi_host_signalling);
-
-@@ -1262,35 +1275,6 @@ int spi_print_msg(const unsigned char *msg)
- EXPORT_SYMBOL(spi_print_msg);
- #endif /* ! CONFIG_SCSI_CONSTANTS */
-
--#define SETUP_ATTRIBUTE(field) \
-- i->private_attrs[count] = class_device_attr_##field; \
-- if (!i->f->set_##field) { \
-- i->private_attrs[count].attr.mode = S_IRUGO; \
-- i->private_attrs[count].store = NULL; \
-- } \
-- i->attrs[count] = &i->private_attrs[count]; \
-- if (i->f->show_##field) \
-- count++
--
--#define SETUP_RELATED_ATTRIBUTE(field, rel_field) \
-- i->private_attrs[count] = class_device_attr_##field; \
-- if (!i->f->set_##rel_field) { \
-- i->private_attrs[count].attr.mode = S_IRUGO; \
-- i->private_attrs[count].store = NULL; \
-- } \
-- i->attrs[count] = &i->private_attrs[count]; \
-- if (i->f->show_##rel_field) \
-- count++
--
--#define SETUP_HOST_ATTRIBUTE(field) \
-- i->private_host_attrs[count] = class_device_attr_##field; \
-- if (!i->f->set_##field) { \
-- i->private_host_attrs[count].attr.mode = S_IRUGO; \
-- i->private_host_attrs[count].store = NULL; \
-- } \
-- i->host_attrs[count] = &i->private_host_attrs[count]; \
-- count++
--
- static int spi_device_match(struct attribute_container *cont,
- struct device *dev)
- {
-@@ -1343,16 +1327,156 @@ static DECLARE_TRANSPORT_CLASS(spi_transport_class,
- "spi_transport",
- spi_setup_transport_attrs,
- NULL,
-- NULL);
-+ spi_target_configure);
-
- static DECLARE_ANON_TRANSPORT_CLASS(spi_device_class,
- spi_device_match,
- spi_device_configure);
-
-+static struct attribute *host_attributes[] = {
-+ &class_device_attr_signalling.attr,
-+ NULL
-+};
++ /*
++ * If new found extent is better, store it in the context
++ */
++ if (bex->fe_len < gex->fe_len) {
++ /* if the request isn't satisfied, any found extent
++ * larger than the previous best one is better */
++ if (ex->fe_len > bex->fe_len)
++ *bex = *ex;
++ } else if (ex->fe_len > gex->fe_len) {
++ /* if the request is satisfied, then we try to find
++ * an extent that still satisfies the request, but is
++ * smaller than the previous one */
++ if (ex->fe_len < bex->fe_len)
++ *bex = *ex;
++ }
+
-+static struct attribute_group host_attribute_group = {
-+ .attrs = host_attributes,
-+};
++ ext4_mb_check_limits(ac, e4b, 0);
++}
+
-+static int spi_host_configure(struct transport_container *tc,
-+ struct device *dev,
-+ struct class_device *cdev)
++static int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
+{
-+ struct kobject *kobj = &cdev->kobj;
-+ struct Scsi_Host *shost = transport_class_to_shost(cdev);
-+ struct spi_internal *si = to_spi_internal(shost->transportt);
-+ struct attribute *attr = &class_device_attr_signalling.attr;
-+ int rc = 0;
++ struct ext4_free_extent ex = ac->ac_b_ex;
++ ext4_group_t group = ex.fe_group;
++ int max;
++ int err;
+
-+ if (si->f->set_signalling)
-+ rc = sysfs_chmod_file(kobj, attr, attr->mode | S_IWUSR);
++ BUG_ON(ex.fe_len <= 0);
++ err = ext4_mb_load_buddy(ac->ac_sb, group, e4b);
++ if (err)
++ return err;
+
-+ return rc;
-+}
++ ext4_lock_group(ac->ac_sb, group);
++ max = mb_find_extent(e4b, 0, ex.fe_start, ex.fe_len, &ex);
+
-+/* returns true if we should be showing the variable. Also
-+ * overloads the return by setting 1<<1 if the attribute should
-+ * be writeable */
-+#define TARGET_ATTRIBUTE_HELPER(name) \
-+ (si->f->show_##name ? 1 : 0) + \
-+ (si->f->set_##name ? 2 : 0)
++ if (max > 0) {
++ ac->ac_b_ex = ex;
++ ext4_mb_use_best_found(ac, e4b);
++ }
+
-+static int target_attribute_is_visible(struct kobject *kobj,
-+ struct attribute *attr, int i)
++ ext4_unlock_group(ac->ac_sb, group);
++ ext4_mb_release_desc(e4b);
++
++ return 0;
++}
++
++static int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
+{
-+ struct class_device *cdev =
-+ container_of(kobj, struct class_device, kobj);
-+ struct scsi_target *starget = transport_class_to_starget(cdev);
-+ struct Scsi_Host *shost = transport_class_to_shost(cdev);
-+ struct spi_internal *si = to_spi_internal(shost->transportt);
++ ext4_group_t group = ac->ac_g_ex.fe_group;
++ int max;
++ int err;
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++ struct ext4_super_block *es = sbi->s_es;
++ struct ext4_free_extent ex;
+
-+ if (attr == &class_device_attr_period.attr &&
-+ spi_support_sync(starget))
-+ return TARGET_ATTRIBUTE_HELPER(period);
-+ else if (attr == &class_device_attr_min_period.attr &&
-+ spi_support_sync(starget))
-+ return TARGET_ATTRIBUTE_HELPER(period);
-+ else if (attr == &class_device_attr_offset.attr &&
-+ spi_support_sync(starget))
-+ return TARGET_ATTRIBUTE_HELPER(offset);
-+ else if (attr == &class_device_attr_max_offset.attr &&
-+ spi_support_sync(starget))
-+ return TARGET_ATTRIBUTE_HELPER(offset);
-+ else if (attr == &class_device_attr_width.attr &&
-+ spi_support_wide(starget))
-+ return TARGET_ATTRIBUTE_HELPER(width);
-+ else if (attr == &class_device_attr_max_width.attr &&
-+ spi_support_wide(starget))
-+ return TARGET_ATTRIBUTE_HELPER(width);
-+ else if (attr == &class_device_attr_iu.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(iu);
-+ else if (attr == &class_device_attr_dt.attr &&
-+ spi_support_dt(starget))
-+ return TARGET_ATTRIBUTE_HELPER(dt);
-+ else if (attr == &class_device_attr_qas.attr &&
-+ spi_support_qas(starget))
-+ return TARGET_ATTRIBUTE_HELPER(qas);
-+ else if (attr == &class_device_attr_wr_flow.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(wr_flow);
-+ else if (attr == &class_device_attr_rd_strm.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(rd_strm);
-+ else if (attr == &class_device_attr_rti.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(rti);
-+ else if (attr == &class_device_attr_pcomp_en.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(pcomp_en);
-+ else if (attr == &class_device_attr_hold_mcs.attr &&
-+ spi_support_ius(starget))
-+ return TARGET_ATTRIBUTE_HELPER(hold_mcs);
-+ else if (attr == &class_device_attr_revalidate.attr)
-+ return 1;
++ if (!(ac->ac_flags & EXT4_MB_HINT_TRY_GOAL))
++ return 0;
++
++ err = ext4_mb_load_buddy(ac->ac_sb, group, e4b);
++ if (err)
++ return err;
++
++ ext4_lock_group(ac->ac_sb, group);
++ max = mb_find_extent(e4b, 0, ac->ac_g_ex.fe_start,
++ ac->ac_g_ex.fe_len, &ex);
++
++ if (max >= ac->ac_g_ex.fe_len && ac->ac_g_ex.fe_len == sbi->s_stripe) {
++ ext4_fsblk_t start;
++
++ start = (e4b->bd_group * EXT4_BLOCKS_PER_GROUP(ac->ac_sb)) +
++ ex.fe_start + le32_to_cpu(es->s_first_data_block);
++ /* use do_div to get remainder (would be 64-bit modulo) */
++ if (do_div(start, sbi->s_stripe) == 0) {
++ ac->ac_found++;
++ ac->ac_b_ex = ex;
++ ext4_mb_use_best_found(ac, e4b);
++ }
++ } else if (max >= ac->ac_g_ex.fe_len) {
++ BUG_ON(ex.fe_len <= 0);
++ BUG_ON(ex.fe_group != ac->ac_g_ex.fe_group);
++ BUG_ON(ex.fe_start != ac->ac_g_ex.fe_start);
++ ac->ac_found++;
++ ac->ac_b_ex = ex;
++ ext4_mb_use_best_found(ac, e4b);
++ } else if (max > 0 && (ac->ac_flags & EXT4_MB_HINT_MERGE)) {
++ /* Sometimes, the caller may want to merge even a small
++ * number of blocks into an existing extent */
++ BUG_ON(ex.fe_len <= 0);
++ BUG_ON(ex.fe_group != ac->ac_g_ex.fe_group);
++ BUG_ON(ex.fe_start != ac->ac_g_ex.fe_start);
++ ac->ac_found++;
++ ac->ac_b_ex = ex;
++ ext4_mb_use_best_found(ac, e4b);
++ }
++ ext4_unlock_group(ac->ac_sb, group);
++ ext4_mb_release_desc(e4b);
+
+ return 0;
+}
+
-+static struct attribute *target_attributes[] = {
-+ &class_device_attr_period.attr,
-+ &class_device_attr_min_period.attr,
-+ &class_device_attr_offset.attr,
-+ &class_device_attr_max_offset.attr,
-+ &class_device_attr_width.attr,
-+ &class_device_attr_max_width.attr,
-+ &class_device_attr_iu.attr,
-+ &class_device_attr_dt.attr,
-+ &class_device_attr_qas.attr,
-+ &class_device_attr_wr_flow.attr,
-+ &class_device_attr_rd_strm.attr,
-+ &class_device_attr_rti.attr,
-+ &class_device_attr_pcomp_en.attr,
-+ &class_device_attr_hold_mcs.attr,
-+ &class_device_attr_revalidate.attr,
-+ NULL
-+};
++/*
++ * The routine scans buddy structures (not the bitmap!) from the given order
++ * to the max order and tries to find a big enough chunk to satisfy the request
++ */
++static void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
++{
++ struct super_block *sb = ac->ac_sb;
++ struct ext4_group_info *grp = e4b->bd_info;
++ void *buddy;
++ int i;
++ int k;
++ int max;
+
-+static struct attribute_group target_attribute_group = {
-+ .attrs = target_attributes,
-+ .is_visible = target_attribute_is_visible,
-+};
++ BUG_ON(ac->ac_2order <= 0);
++ for (i = ac->ac_2order; i <= sb->s_blocksize_bits + 1; i++) {
++ if (grp->bb_counters[i] == 0)
++ continue;
+
-+static int spi_target_configure(struct transport_container *tc,
-+ struct device *dev,
-+ struct class_device *cdev)
++ buddy = mb_find_buddy(e4b, i, &max);
++ BUG_ON(buddy == NULL);
++
++ k = ext4_find_next_zero_bit(buddy, max, 0);
++ BUG_ON(k >= max);
++
++ ac->ac_found++;
++
++ ac->ac_b_ex.fe_len = 1 << i;
++ ac->ac_b_ex.fe_start = k << i;
++ ac->ac_b_ex.fe_group = e4b->bd_group;
++
++ ext4_mb_use_best_found(ac, e4b);
++
++ BUG_ON(ac->ac_b_ex.fe_len != ac->ac_g_ex.fe_len);
++
++ if (EXT4_SB(sb)->s_mb_stats)
++ atomic_inc(&EXT4_SB(sb)->s_bal_2orders);
++
++ break;
++ }
++}
++
++/*
++ * The routine scans the group and measures all found extents.
++ * In order to optimize scanning, the caller must pass the number of
++ * free blocks in the group, so the routine knows the upper limit.
++ */
++static void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
+{
-+ struct kobject *kobj = &cdev->kobj;
++ struct super_block *sb = ac->ac_sb;
++ void *bitmap = EXT4_MB_BITMAP(e4b);
++ struct ext4_free_extent ex;
+ int i;
-+ struct attribute *attr;
-+ int rc;
++ int free;
+
-+ for (i = 0; (attr = target_attributes[i]) != NULL; i++) {
-+ int j = target_attribute_group.is_visible(kobj, attr, i);
++ free = e4b->bd_info->bb_free;
++ BUG_ON(free <= 0);
++
++ i = e4b->bd_info->bb_first_free;
++
++ while (free && ac->ac_status == AC_STATUS_CONTINUE) {
++ i = ext4_find_next_zero_bit(bitmap,
++ EXT4_BLOCKS_PER_GROUP(sb), i);
++ if (i >= EXT4_BLOCKS_PER_GROUP(sb)) {
++ BUG_ON(free != 0);
++ break;
++ }
++
++ mb_find_extent(e4b, 0, i, ac->ac_g_ex.fe_len, &ex);
++ BUG_ON(ex.fe_len <= 0);
++ BUG_ON(free < ex.fe_len);
++
++ ext4_mb_measure_extent(ac, &ex, e4b);
++
++ i += ex.fe_len;
++ free -= ex.fe_len;
++ }
++
++ ext4_mb_check_limits(ac, e4b, 1);
++}
++
++/*
++ * This is a special case for storage like RAID5: we try to find
++ * stripe-aligned chunks for stripe-size requests
++ * XXX should do so at least for multiples of the stripe size as well
++ */
++static void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
++ struct ext4_buddy *e4b)
++{
++ struct super_block *sb = ac->ac_sb;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ void *bitmap = EXT4_MB_BITMAP(e4b);
++ struct ext4_free_extent ex;
++ ext4_fsblk_t first_group_block;
++ ext4_fsblk_t a;
++ ext4_grpblk_t i;
++ int max;
++
++ BUG_ON(sbi->s_stripe == 0);
++
++ /* find first stripe-aligned block in group */
++ first_group_block = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb)
++ + le32_to_cpu(sbi->s_es->s_first_data_block);
++ a = first_group_block + sbi->s_stripe - 1;
++ do_div(a, sbi->s_stripe);
++ i = (a * sbi->s_stripe) - first_group_block;
++
++ while (i < EXT4_BLOCKS_PER_GROUP(sb)) {
++ if (!mb_test_bit(i, bitmap)) {
++ max = mb_find_extent(e4b, 0, i, sbi->s_stripe, &ex);
++ if (max >= sbi->s_stripe) {
++ ac->ac_found++;
++ ac->ac_b_ex = ex;
++ ext4_mb_use_best_found(ac, e4b);
++ break;
++ }
++ }
++ i += sbi->s_stripe;
++ }
++}
++
++static int ext4_mb_good_group(struct ext4_allocation_context *ac,
++ ext4_group_t group, int cr)
++{
++ unsigned free, fragments;
++ unsigned i, bits;
++ struct ext4_group_desc *desc;
++ struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
++
++ BUG_ON(cr < 0 || cr >= 4);
++ BUG_ON(EXT4_MB_GRP_NEED_INIT(grp));
++
++ free = grp->bb_free;
++ fragments = grp->bb_fragments;
++ if (free == 0)
++ return 0;
++ if (fragments == 0)
++ return 0;
++
++ switch (cr) {
++ case 0:
++ BUG_ON(ac->ac_2order == 0);
++ /* If this group is uninitialized, skip it initially */
++ desc = ext4_get_group_desc(ac->ac_sb, group, NULL);
++ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))
++ return 0;
+
-+ /* FIXME: as well as returning -EEXIST, which we'd like
-+ * to ignore, sysfs also does a WARN_ON and dumps a trace,
-+ * which is bad, so temporarily, skip attributes that are
-+ * already visible (the revalidate one) */
-+ if (j && attr != &class_device_attr_revalidate.attr)
-+ rc = sysfs_add_file_to_group(kobj, attr,
-+ target_attribute_group.name);
-+ /* and make the attribute writeable if we have a set
-+ * function */
-+ if ((j & 1))
-+ rc = sysfs_chmod_file(kobj, attr, attr->mode | S_IWUSR);
++ bits = ac->ac_sb->s_blocksize_bits + 1;
++ for (i = ac->ac_2order; i <= bits; i++)
++ if (grp->bb_counters[i] > 0)
++ return 1;
++ break;
++ case 1:
++ if ((free / fragments) >= ac->ac_g_ex.fe_len)
++ return 1;
++ break;
++ case 2:
++ if (free >= ac->ac_g_ex.fe_len)
++ return 1;
++ break;
++ case 3:
++ return 1;
++ default:
++ BUG();
+ }
+
+ return 0;
+}
+
- struct scsi_transport_template *
- spi_attach_transport(struct spi_function_template *ft)
- {
-- int count = 0;
- struct spi_internal *i = kzalloc(sizeof(struct spi_internal),
- GFP_KERNEL);
-
-@@ -1360,47 +1484,17 @@ spi_attach_transport(struct spi_function_template *ft)
- return NULL;
-
- i->t.target_attrs.ac.class = &spi_transport_class.class;
-- i->t.target_attrs.ac.attrs = &i->attrs[0];
-+ i->t.target_attrs.ac.grp = &target_attribute_group;
- i->t.target_attrs.ac.match = spi_target_match;
- transport_container_register(&i->t.target_attrs);
- i->t.target_size = sizeof(struct spi_transport_attrs);
- i->t.host_attrs.ac.class = &spi_host_class.class;
-- i->t.host_attrs.ac.attrs = &i->host_attrs[0];
-+ i->t.host_attrs.ac.grp = &host_attribute_group;
- i->t.host_attrs.ac.match = spi_host_match;
- transport_container_register(&i->t.host_attrs);
- i->t.host_size = sizeof(struct spi_host_attrs);
- i->f = ft;
-
-- SETUP_ATTRIBUTE(period);
-- SETUP_RELATED_ATTRIBUTE(min_period, period);
-- SETUP_ATTRIBUTE(offset);
-- SETUP_RELATED_ATTRIBUTE(max_offset, offset);
-- SETUP_ATTRIBUTE(width);
-- SETUP_RELATED_ATTRIBUTE(max_width, width);
-- SETUP_ATTRIBUTE(iu);
-- SETUP_ATTRIBUTE(dt);
-- SETUP_ATTRIBUTE(qas);
-- SETUP_ATTRIBUTE(wr_flow);
-- SETUP_ATTRIBUTE(rd_strm);
-- SETUP_ATTRIBUTE(rti);
-- SETUP_ATTRIBUTE(pcomp_en);
-- SETUP_ATTRIBUTE(hold_mcs);
--
-- /* if you add an attribute but forget to increase SPI_NUM_ATTRS
-- * this bug will trigger */
-- BUG_ON(count > SPI_NUM_ATTRS);
--
-- i->attrs[count++] = &class_device_attr_revalidate;
--
-- i->attrs[count] = NULL;
--
-- count = 0;
-- SETUP_HOST_ATTRIBUTE(signalling);
--
-- BUG_ON(count > SPI_HOST_ATTRS);
--
-- i->host_attrs[count] = NULL;
--
- return &i->t;
- }
- EXPORT_SYMBOL(spi_attach_transport);
-diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
-index 65c584d..2445c98 100644
---- a/drivers/scsi/scsi_transport_srp.c
-+++ b/drivers/scsi/scsi_transport_srp.c
-@@ -185,11 +185,10 @@ static int srp_host_match(struct attribute_container *cont, struct device *dev)
-
- /**
- * srp_rport_add - add a SRP remote port to the device hierarchy
-- *
- * @shost: scsi host the remote port is connected to.
- * @ids: The port id for the remote port.
- *
-- * publishes a port to the rest of the system
-+ * Publishes a port to the rest of the system.
- */
- struct srp_rport *srp_rport_add(struct Scsi_Host *shost,
- struct srp_rport_identifiers *ids)
-@@ -242,8 +241,8 @@ struct srp_rport *srp_rport_add(struct Scsi_Host *shost,
- EXPORT_SYMBOL_GPL(srp_rport_add);
-
- /**
-- * srp_rport_del -- remove a SRP remote port
-- * @port: SRP remote port to remove
-+ * srp_rport_del - remove a SRP remote port
-+ * @rport: SRP remote port to remove
- *
- * Removes the specified SRP remote port.
- */
-@@ -271,7 +270,7 @@ static int do_srp_rport_del(struct device *dev, void *data)
- }
-
- /**
-- * srp_remove_host -- tear down a Scsi_Host's SRP data structures
-+ * srp_remove_host - tear down a Scsi_Host's SRP data structures
- * @shost: Scsi Host that is torn down
- *
- * Removes all SRP remote ports for a given Scsi_Host.
-@@ -297,7 +296,7 @@ static int srp_it_nexus_response(struct Scsi_Host *shost, u64 nexus, int result)
- }
-
- /**
-- * srp_attach_transport -- instantiate SRP transport template
-+ * srp_attach_transport - instantiate SRP transport template
- * @ft: SRP transport class function template
- */
- struct scsi_transport_template *
-@@ -337,7 +336,7 @@ srp_attach_transport(struct srp_function_template *ft)
- EXPORT_SYMBOL_GPL(srp_attach_transport);
-
- /**
-- * srp_release_transport -- release SRP transport template instance
-+ * srp_release_transport - release SRP transport template instance
- * @t: transport template instance
- */
- void srp_release_transport(struct scsi_transport_template *t)
-diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c
-index cd68a66..3f21bc6 100644
---- a/drivers/scsi/scsicam.c
-+++ b/drivers/scsi/scsicam.c
-@@ -24,6 +24,14 @@
- static int setsize(unsigned long capacity, unsigned int *cyls, unsigned int *hds,
- unsigned int *secs);
-
-+/**
-+ * scsi_bios_ptable - Read PC partition table out of first sector of device.
-+ * @dev: from this device
-+ *
-+ * Description: Reads the first sector from the device and returns %0x42 bytes
-+ * starting at offset %0x1be.
-+ * Returns: partition table in kmalloc(GFP_KERNEL) memory, or NULL on error.
-+ */
- unsigned char *scsi_bios_ptable(struct block_device *dev)
- {
- unsigned char *res = kmalloc(66, GFP_KERNEL);
-@@ -43,15 +51,17 @@ unsigned char *scsi_bios_ptable(struct block_device *dev)
- }
- EXPORT_SYMBOL(scsi_bios_ptable);
-
--/*
-- * Function : int scsicam_bios_param (struct block_device *bdev, ector_t capacity, int *ip)
-+/**
-+ * scsicam_bios_param - Determine geometry of a disk in cylinders/heads/sectors.
-+ * @bdev: which device
-+ * @capacity: size of the disk in sectors
-+ * @ip: return value: ip[0]=heads, ip[1]=sectors, ip[2]=cylinders
- *
-- * Purpose : to determine the BIOS mapping used for a drive in a
-+ * Description : determine the BIOS mapping/geometry used for a drive in a
- * SCSI-CAM system, storing the results in ip as required
- * by the HDIO_GETGEO ioctl().
- *
- * Returns : -1 on failure, 0 on success.
-- *
- */
-
- int scsicam_bios_param(struct block_device *bdev, sector_t capacity, int *ip)
-@@ -98,15 +108,18 @@ int scsicam_bios_param(struct block_device *bdev, sector_t capacity, int *ip)
- }
- EXPORT_SYMBOL(scsicam_bios_param);
-
--/*
-- * Function : static int scsi_partsize(unsigned char *buf, unsigned long
-- * capacity,unsigned int *cyls, unsigned int *hds, unsigned int *secs);
-+/**
-+ * scsi_partsize - Parse cylinders/heads/sectors from PC partition table
-+ * @buf: partition table, see scsi_bios_ptable()
-+ * @capacity: size of the disk in sectors
-+ * @cyls: put cylinders here
-+ * @hds: put heads here
-+ * @secs: put sectors here
- *
-- * Purpose : to determine the BIOS mapping used to create the partition
-+ * Description: determine the BIOS mapping/geometry used to create the partition
- * table, storing the results in *cyls, *hds, and *secs
- *
-- * Returns : -1 on failure, 0 on success.
-- *
-+ * Returns: -1 on failure, 0 on success.
- */
-
- int scsi_partsize(unsigned char *buf, unsigned long capacity,
-@@ -194,7 +207,7 @@ EXPORT_SYMBOL(scsi_partsize);
- *
- * WORKING X3T9.2
- * DRAFT 792D
-- *
-+ * see http://www.t10.org/ftp/t10/drafts/cam/cam-r12b.pdf
- *
- * Revision 6
- * 10-MAR-94
-diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
-index a69b155..24eba31 100644
---- a/drivers/scsi/sd.c
-+++ b/drivers/scsi/sd.c
-@@ -395,6 +395,15 @@ static int sd_prep_fn(struct request_queue *q, struct request *rq)
- goto out;
- }
-
-+ /*
-+ * Some devices (some sdcards for one) don't like it if the
-+ * last sector gets read in a larger then 1 sector read.
-+ */
-+ if (unlikely(sdp->last_sector_bug &&
-+ rq->nr_sectors > sdp->sector_size / 512 &&
-+ block + this_count == get_capacity(disk)))
-+ this_count -= sdp->sector_size / 512;
++static int ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
++{
++ ext4_group_t group;
++ ext4_group_t i;
++ int cr;
++ int err = 0;
++ int bsbits;
++ struct ext4_sb_info *sbi;
++ struct super_block *sb;
++ struct ext4_buddy e4b;
++ loff_t size, isize;
+
- SCSI_LOG_HLQUEUE(2, scmd_printk(KERN_INFO, SCpnt, "block=%llu\n",
- (unsigned long long)block));
-
-@@ -736,6 +745,7 @@ static int sd_media_changed(struct gendisk *disk)
- {
- struct scsi_disk *sdkp = scsi_disk(disk);
- struct scsi_device *sdp = sdkp->device;
-+ struct scsi_sense_hdr *sshdr = NULL;
- int retval;
-
- SCSI_LOG_HLQUEUE(3, sd_printk(KERN_INFO, sdkp, "sd_media_changed\n"));
-@@ -749,8 +759,11 @@ static int sd_media_changed(struct gendisk *disk)
- * can deal with it then. It is only because of unrecoverable errors
- * that we would ever take a device offline in the first place.
- */
-- if (!scsi_device_online(sdp))
-- goto not_present;
-+ if (!scsi_device_online(sdp)) {
-+ set_media_not_present(sdkp);
-+ retval = 1;
++ sb = ac->ac_sb;
++ sbi = EXT4_SB(sb);
++ BUG_ON(ac->ac_status == AC_STATUS_FOUND);
++
++ /* first, try the goal */
++ err = ext4_mb_find_by_goal(ac, &e4b);
++ if (err || ac->ac_status == AC_STATUS_FOUND)
+ goto out;
-+ }
-
- /*
- * Using TEST_UNIT_READY enables differentiation between drive with
-@@ -762,8 +775,12 @@ static int sd_media_changed(struct gendisk *disk)
- * sd_revalidate() is called.
- */
- retval = -ENODEV;
-- if (scsi_block_when_processing_errors(sdp))
-- retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES);
+
-+ if (scsi_block_when_processing_errors(sdp)) {
-+ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
-+ retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES,
-+ sshdr);
-+ }
-
- /*
- * Unable to test, unit probably not ready. This usually
-@@ -771,8 +788,13 @@ static int sd_media_changed(struct gendisk *disk)
- * and we will figure it out later once the drive is
- * available again.
- */
-- if (retval)
-- goto not_present;
-+ if (retval || (scsi_sense_valid(sshdr) &&
-+ /* 0x3a is medium not present */
-+ sshdr->asc == 0x3a)) {
-+ set_media_not_present(sdkp);
-+ retval = 1;
++ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
+ goto out;
++
++ /*
++ * ac->ac_2order is set only if fe_len is a power of 2;
++ * if ac_2order is set we also set the criteria to 0 so that we
++ * try exact allocation using buddy.
++ */
++ i = fls(ac->ac_g_ex.fe_len);
++ ac->ac_2order = 0;
++ /*
++ * We search using buddy data only if the order of the request
++ * is greater than or equal to sbi->s_mb_order2_reqs.
++ * You can tune it via /proc/fs/ext4/<partition>/order2_req
++ */
++ if (i >= sbi->s_mb_order2_reqs) {
++ /*
++ * This should tell if fe_len is exactly a power of 2
++ */
++ if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
++ ac->ac_2order = i - 1;
+ }
-
- /*
- * For removable scsi disk we have to recognise the presence
-@@ -783,12 +805,12 @@ static int sd_media_changed(struct gendisk *disk)
-
- retval = sdp->changed;
- sdp->changed = 0;
--
-+out:
-+ if (retval != sdkp->previous_state)
-+ sdev_evt_send_simple(sdp, SDEV_EVT_MEDIA_CHANGE, GFP_KERNEL);
-+ sdkp->previous_state = retval;
-+ kfree(sshdr);
- return retval;
--
--not_present:
-- set_media_not_present(sdkp);
-- return 1;
- }
-
- static int sd_sync_cache(struct scsi_disk *sdkp)
-diff --git a/drivers/scsi/seagate.c b/drivers/scsi/seagate.c
-deleted file mode 100644
-index b113244..0000000
---- a/drivers/scsi/seagate.c
-+++ /dev/null
-@@ -1,1667 +0,0 @@
--/*
-- * seagate.c Copyright (C) 1992, 1993 Drew Eckhardt
-- * low level scsi driver for ST01/ST02, Future Domain TMC-885,
-- * TMC-950 by Drew Eckhardt <drew at colorado.edu>
-- *
-- * Note : TMC-880 boards don't work because they have two bits in
-- * the status register flipped, I'll fix this "RSN"
-- * [why do I have strong feeling that above message is from 1993? :-)
-- * pavel at ucw.cz]
-- *
-- * This card does all the I/O via memory mapped I/O, so there is no need
-- * to check or allocate a region of the I/O address space.
-- */
--
--/* 1996 - to use new read{b,w,l}, write{b,w,l}, and phys_to_virt
-- * macros, replaced assembler routines with C. There's probably a
-- * performance hit, but I only have a cdrom and can't tell. Define
-- * SEAGATE_USE_ASM if you want the old assembler code -- SJT
-- *
-- * 1998-jul-29 - created DPRINTK macros and made it work under
-- * linux 2.1.112, simplified some #defines etc. <pavel at ucw.cz>
-- *
-- * Aug 2000 - aeb - deleted seagate_st0x_biosparam(). It would try to
-- * read the physical disk geometry, a bad mistake. Of course it doesn't
-- * matter much what geometry one invents, but on large disks it
-- * returned 256 (or more) heads, causing all kind of failures.
-- * Of course this means that people might see a different geometry now,
-- * so boot parameters may be necessary in some cases.
-- */
--
--/*
-- * Configuration :
-- * To use without BIOS -DOVERRIDE=base_address -DCONTROLLER=FD or SEAGATE
-- * -DIRQ will override the default of 5.
-- * Note: You can now set these options from the kernel's "command line".
-- * The syntax is:
-- *
-- * st0x=ADDRESS,IRQ (for a Seagate controller)
-- * or:
-- * tmc8xx=ADDRESS,IRQ (for a TMC-8xx or TMC-950 controller)
-- * eg:
-- * tmc8xx=0xC8000,15
-- *
-- * will configure the driver for a TMC-8xx style controller using IRQ 15
-- * with a base address of 0xC8000.
-- *
-- * -DARBITRATE
-- * Will cause the host adapter to arbitrate for the
-- * bus for better SCSI-II compatibility, rather than just
-- * waiting for BUS FREE and then doing its thing. Should
-- * let us do one command per Lun when I integrate my
-- * reorganization changes into the distribution sources.
-- *
-- * -DDEBUG=65535
-- * Will activate debug code.
-- *
-- * -DFAST or -DFAST32
-- * Will use blind transfers where possible
-- *
-- * -DPARITY
-- * This will enable parity.
-- *
-- * -DSEAGATE_USE_ASM
-- * Will use older seagate assembly code. should be (very small amount)
-- * Faster.
-- *
-- * -DSLOW_RATE=50
-- * Will allow compatibility with broken devices that don't
-- * handshake fast enough (ie, some CD ROM's) for the Seagate
-- * code.
-- *
-- * 50 is some number, It will let you specify a default
-- * transfer rate if handshaking isn't working correctly.
-- *
-- * -DOLDCNTDATASCEME There is a new sceme to set the CONTROL
-- * and DATA reigsters which complies more closely
-- * with the SCSI2 standard. This hopefully eliminates
-- * the need to swap the order these registers are
-- * 'messed' with. It makes the following two options
-- * obsolete. To reenable the old sceme define this.
-- *
-- * The following to options are patches from the SCSI.HOWTO
-- *
-- * -DSWAPSTAT This will swap the definitions for STAT_MSG and STAT_CD.
-- *
-- * -DSWAPCNTDATA This will swap the order that seagate.c messes with
-- * the CONTROL an DATA registers.
-- */
--
--#include <linux/module.h>
--#include <linux/interrupt.h>
--#include <linux/spinlock.h>
--#include <linux/signal.h>
--#include <linux/string.h>
--#include <linux/proc_fs.h>
--#include <linux/init.h>
--#include <linux/blkdev.h>
--#include <linux/stat.h>
--#include <linux/delay.h>
--#include <linux/io.h>
--
--#include <asm/system.h>
--#include <asm/uaccess.h>
--
--#include <scsi/scsi_cmnd.h>
--#include <scsi/scsi_device.h>
--#include <scsi/scsi.h>
--
--#include <scsi/scsi_dbg.h>
--#include <scsi/scsi_host.h>
--
--
--#ifdef DEBUG
--#define DPRINTK( when, msg... ) do { if ( (DEBUG & (when)) == (when) ) printk( msg ); } while (0)
--#else
--#define DPRINTK( when, msg... ) do { } while (0)
--#define DEBUG 0
--#endif
--#define DANY( msg... ) DPRINTK( 0xffff, msg );
--
--#ifndef IRQ
--#define IRQ 5
--#endif
--
--#ifdef FAST32
--#define FAST
--#endif
--
--#undef LINKED /* Linked commands are currently broken! */
--
--#if defined(OVERRIDE) && !defined(CONTROLLER)
--#error Please use -DCONTROLLER=SEAGATE or -DCONTROLLER=FD to override controller type
--#endif
--
--#ifndef __i386__
--#undef SEAGATE_USE_ASM
--#endif
--
--/*
-- Thanks to Brian Antoine for the example code in his Messy-Loss ST-01
-- driver, and Mitsugu Suzuki for information on the ST-01
-- SCSI host.
--*/
--
--/*
-- CONTROL defines
--*/
--
--#define CMD_RST 0x01
--#define CMD_SEL 0x02
--#define CMD_BSY 0x04
--#define CMD_ATTN 0x08
--#define CMD_START_ARB 0x10
--#define CMD_EN_PARITY 0x20
--#define CMD_INTR 0x40
--#define CMD_DRVR_ENABLE 0x80
--
--/*
-- STATUS
--*/
--#ifdef SWAPSTAT
--#define STAT_MSG 0x08
--#define STAT_CD 0x02
--#else
--#define STAT_MSG 0x02
--#define STAT_CD 0x08
--#endif
--
--#define STAT_BSY 0x01
--#define STAT_IO 0x04
--#define STAT_REQ 0x10
--#define STAT_SEL 0x20
--#define STAT_PARITY 0x40
--#define STAT_ARB_CMPL 0x80
--
--/*
-- REQUESTS
--*/
--
--#define REQ_MASK (STAT_CD | STAT_IO | STAT_MSG)
--#define REQ_DATAOUT 0
--#define REQ_DATAIN STAT_IO
--#define REQ_CMDOUT STAT_CD
--#define REQ_STATIN (STAT_CD | STAT_IO)
--#define REQ_MSGOUT (STAT_MSG | STAT_CD)
--#define REQ_MSGIN (STAT_MSG | STAT_CD | STAT_IO)
--
--extern volatile int seagate_st0x_timeout;
--
--#ifdef PARITY
--#define BASE_CMD CMD_EN_PARITY
--#else
--#define BASE_CMD 0
--#endif
--
--/*
-- Debugging code
--*/
--
--#define PHASE_BUS_FREE 1
--#define PHASE_ARBITRATION 2
--#define PHASE_SELECTION 4
--#define PHASE_DATAIN 8
--#define PHASE_DATAOUT 0x10
--#define PHASE_CMDOUT 0x20
--#define PHASE_MSGIN 0x40
--#define PHASE_MSGOUT 0x80
--#define PHASE_STATUSIN 0x100
--#define PHASE_ETC (PHASE_DATAIN | PHASE_DATAOUT | PHASE_CMDOUT | PHASE_MSGIN | PHASE_MSGOUT | PHASE_STATUSIN)
--#define PRINT_COMMAND 0x200
--#define PHASE_EXIT 0x400
--#define PHASE_RESELECT 0x800
--#define DEBUG_FAST 0x1000
--#define DEBUG_SG 0x2000
--#define DEBUG_LINKED 0x4000
--#define DEBUG_BORKEN 0x8000
--
--/*
-- * Control options - these are timeouts specified in .01 seconds.
-- */
--
--/* 30, 20 work */
--#define ST0X_BUS_FREE_DELAY 25
--#define ST0X_SELECTION_DELAY 25
--
--#define SEAGATE 1 /* these determine the type of the controller */
--#define FD 2
--
--#define ST0X_ID_STR "Seagate ST-01/ST-02"
--#define FD_ID_STR "TMC-8XX/TMC-950"
--
--static int internal_command (unsigned char target, unsigned char lun,
-- const void *cmnd,
-- void *buff, int bufflen, int reselect);
--
--static int incommand; /* set if arbitration has finished
-- and we are in some command phase. */
--
--static unsigned int base_address = 0; /* Where the card ROM starts, used to
-- calculate memory mapped register
-- location. */
--
--static void __iomem *st0x_cr_sr; /* control register write, status
-- register read. 256 bytes in
-- length.
-- Read is status of SCSI BUS, as per
-- STAT masks. */
--
--static void __iomem *st0x_dr; /* data register, read write 256
-- bytes in length. */
--
--static volatile int st0x_aborted = 0; /* set when we are aborted, ie by a
-- time out, etc. */
--
--static unsigned char controller_type = 0; /* set to SEAGATE for ST0x
-- boards or FD for TMC-8xx
-- boards */
--static int irq = IRQ;
--
--module_param(base_address, uint, 0);
--module_param(controller_type, byte, 0);
--module_param(irq, int, 0);
--MODULE_LICENSE("GPL");
--
--
--#define retcode(result) (((result) << 16) | (message << 8) | status)
--#define STATUS ((u8) readb(st0x_cr_sr))
--#define DATA ((u8) readb(st0x_dr))
--#define WRITE_CONTROL(d) { writeb((d), st0x_cr_sr); }
--#define WRITE_DATA(d) { writeb((d), st0x_dr); }
--
--#ifndef OVERRIDE
--static unsigned int seagate_bases[] = {
-- 0xc8000, 0xca000, 0xcc000,
-- 0xce000, 0xdc000, 0xde000
--};
--
--typedef struct {
-- const unsigned char *signature;
-- unsigned offset;
-- unsigned length;
-- unsigned char type;
--} Signature;
--
--static Signature __initdata signatures[] = {
-- {"ST01 v1.7 (C) Copyright 1987 Seagate", 15, 37, SEAGATE},
-- {"SCSI BIOS 2.00 (C) Copyright 1987 Seagate", 15, 40, SEAGATE},
--
--/*
-- * The following two lines are NOT mistakes. One detects ROM revision
-- * 3.0.0, the other 3.2. Since seagate has only one type of SCSI adapter,
-- * and this is not going to change, the "SEAGATE" and "SCSI" together
-- * are probably "good enough"
-- */
--
-- {"SEAGATE SCSI BIOS ", 16, 17, SEAGATE},
-- {"SEAGATE SCSI BIOS ", 17, 17, SEAGATE},
--
--/*
-- * However, future domain makes several incompatible SCSI boards, so specific
-- * signatures must be used.
-- */
--
-- {"FUTURE DOMAIN CORP. (C) 1986-1989 V5.0C2/14/89", 5, 46, FD},
-- {"FUTURE DOMAIN CORP. (C) 1986-1989 V6.0A7/28/89", 5, 46, FD},
-- {"FUTURE DOMAIN CORP. (C) 1986-1990 V6.0105/31/90", 5, 47, FD},
-- {"FUTURE DOMAIN CORP. (C) 1986-1990 V6.0209/18/90", 5, 47, FD},
-- {"FUTURE DOMAIN CORP. (C) 1986-1990 V7.009/18/90", 5, 46, FD},
-- {"FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92", 5, 44, FD},
-- {"IBM F1 BIOS V1.1004/30/92", 5, 25, FD},
-- {"FUTURE DOMAIN TMC-950", 5, 21, FD},
-- /* Added for 2.2.16 by Matthias_Heidbrink at b.maus.de */
-- {"IBM F1 V1.2009/22/93", 5, 25, FD},
--};
--
--#define NUM_SIGNATURES ARRAY_SIZE(signatures)
--#endif /* n OVERRIDE */
--
--/*
-- * hostno stores the hostnumber, as told to us by the init routine.
-- */
--
--static int hostno = -1;
--static void seagate_reconnect_intr (int, void *);
--static irqreturn_t do_seagate_reconnect_intr (int, void *);
--static int seagate_st0x_bus_reset(struct scsi_cmnd *);
--
--#ifdef FAST
--static int fast = 1;
--#else
--#define fast 0
--#endif
--
--#ifdef SLOW_RATE
--/*
-- * Support for broken devices :
-- * The Seagate board has a handshaking problem. Namely, a lack
-- * thereof for slow devices. You can blast 600K/second through
-- * it if you are polling for each byte, more if you do a blind
-- * transfer. In the first case, with a fast device, REQ will
-- * transition high-low or high-low-high before your loop restarts
-- * and you'll have no problems. In the second case, the board
-- * will insert wait states for up to 13.2 usecs for REQ to
-- * transition low->high, and everything will work.
-- *
-- * However, there's nothing in the state machine that says
-- * you *HAVE* to see a high-low-high set of transitions before
-- * sending the next byte, and slow things like the Trantor CD ROMS
-- * will break because of this.
-- *
-- * So, we need to slow things down, which isn't as simple as it
-- * seems. We can't slow things down period, because then people
-- * who don't recompile their kernels will shoot me for ruining
-- * their performance. We need to do it on a case per case basis.
-- *
-- * The best for performance will be to, only for borken devices
-- * (this is stored on a per-target basis in the scsi_devices array)
-- *
-- * Wait for a low->high transition before continuing with that
-- * transfer. If we timeout, continue anyways. We don't need
-- * a long timeout, because REQ should only be asserted until the
-- * corresponding ACK is received and processed.
-- *
-- * Note that we can't use the system timer for this, because of
-- * resolution, and we *really* can't use the timer chip since
-- * gettimeofday() and the beeper routines use that. So,
-- * the best thing for us to do will be to calibrate a timing
-- * loop in the initialization code using the timer chip before
-- * gettimeofday() can screw with it.
-- *
-- * FIXME: this is broken (not borken :-). Empty loop costs less than
-- * loop with ISA access in it! -- pavel at ucw.cz
-- */
--
--static int borken_calibration = 0;
--
--static void __init borken_init (void)
--{
-- register int count = 0, start = jiffies + 1, stop = start + 25;
--
-- /* FIXME: There may be a better approach, this is a straight port for
-- now */
-- preempt_disable();
-- while (time_before (jiffies, start))
-- cpu_relax();
-- for (; time_before (jiffies, stop); ++count)
-- cpu_relax();
-- preempt_enable();
--
--/*
-- * Ok, we now have a count for .25 seconds. Convert to a
-- * count per second and divide by transfer rate in K. */
--
-- borken_calibration = (count * 4) / (SLOW_RATE * 1024);
--
-- if (borken_calibration < 1)
-- borken_calibration = 1;
--}
--
--static inline void borken_wait (void)
--{
-- register int count;
--
-- for (count = borken_calibration; count && (STATUS & STAT_REQ); --count)
-- cpu_relax();
--
--#if (DEBUG & DEBUG_BORKEN)
-- if (count)
-- printk ("scsi%d : borken timeout\n", hostno);
--#endif
--}
--
--#endif /* def SLOW_RATE */
--
--/* These beasts only live on ISA, and ISA means 8MHz. Each ULOOP()
-- * contains at least one ISA access, which takes more than 0.125
-- * usec. So if we loop 8 times time in usec, we are safe.
-- */
--
--#define ULOOP( i ) for (clock = i*8;;)
--#define TIMEOUT (!(clock--))
--
--static int __init seagate_st0x_detect (struct scsi_host_template * tpnt)
--{
-- struct Scsi_Host *instance;
-- int i, j;
-- unsigned long cr, dr;
--
-- tpnt->proc_name = "seagate";
--/*
-- * First, we try for the manual override.
-- */
-- DANY ("Autodetecting ST0x / TMC-8xx\n");
--
-- if (hostno != -1) {
-- printk (KERN_ERR "seagate_st0x_detect() called twice?!\n");
-- return 0;
-- }
--
--/* If the user specified the controller type from the command line,
-- controller_type will be non-zero, so don't try to detect one */
--
-- if (!controller_type) {
--#ifdef OVERRIDE
-- base_address = OVERRIDE;
-- controller_type = CONTROLLER;
--
-- DANY ("Base address overridden to %x, controller type is %s\n",
-- base_address,
-- controller_type == SEAGATE ? "SEAGATE" : "FD");
--#else /* OVERRIDE */
--/*
-- * To detect this card, we simply look for the signature
-- * from the BIOS version notice in all the possible locations
-- * of the ROM's. This has a nice side effect of not trashing
-- * any register locations that might be used by something else.
-- *
-- * XXX - note that we probably should be probing the address
-- * space for the on-board RAM instead.
-- */
--
-- for (i = 0; i < ARRAY_SIZE(seagate_bases); ++i) {
-- void __iomem *p = ioremap(seagate_bases[i], 0x2000);
-- if (!p)
-- continue;
-- for (j = 0; j < NUM_SIGNATURES; ++j)
-- if (check_signature(p + signatures[j].offset, signatures[j].signature, signatures[j].length)) {
-- base_address = seagate_bases[i];
-- controller_type = signatures[j].type;
-- break;
-- }
-- iounmap(p);
-- }
--#endif /* OVERRIDE */
-- }
-- /* (! controller_type) */
-- tpnt->this_id = (controller_type == SEAGATE) ? 7 : 6;
-- tpnt->name = (controller_type == SEAGATE) ? ST0X_ID_STR : FD_ID_STR;
--
-- if (!base_address) {
-- printk(KERN_INFO "seagate: ST0x/TMC-8xx not detected.\n");
-- return 0;
-- }
--
-- cr = base_address + (controller_type == SEAGATE ? 0x1a00 : 0x1c00);
-- dr = cr + 0x200;
-- st0x_cr_sr = ioremap(cr, 0x100);
-- st0x_dr = ioremap(dr, 0x100);
--
-- DANY("%s detected. Base address = %x, cr = %x, dr = %x\n",
-- tpnt->name, base_address, cr, dr);
--
-- /*
-- * At all times, we will use IRQ 5. Should also check for IRQ3
-- * if we lose our first interrupt.
-- */
-- instance = scsi_register (tpnt, 0);
-- if (instance == NULL)
-- return 0;
--
-- hostno = instance->host_no;
-- if (request_irq (irq, do_seagate_reconnect_intr, IRQF_DISABLED, (controller_type == SEAGATE) ? "seagate" : "tmc-8xx", instance)) {
-- printk(KERN_ERR "scsi%d : unable to allocate IRQ%d\n", hostno, irq);
-- return 0;
-- }
-- instance->irq = irq;
-- instance->io_port = base_address;
--#ifdef SLOW_RATE
-- printk(KERN_INFO "Calibrating borken timer... ");
-- borken_init();
-- printk(" %d cycles per transfer\n", borken_calibration);
--#endif
-- printk (KERN_INFO "This is one second... ");
-- {
-- int clock;
-- ULOOP (1 * 1000 * 1000) {
-- STATUS;
-- if (TIMEOUT)
-- break;
-- }
-- }
--
-- printk ("done, %s options:"
--#ifdef ARBITRATE
-- " ARBITRATE"
--#endif
--#if DEBUG
-- " DEBUG"
--#endif
--#ifdef FAST
-- " FAST"
--#ifdef FAST32
-- "32"
--#endif
--#endif
--#ifdef LINKED
-- " LINKED"
--#endif
--#ifdef PARITY
-- " PARITY"
--#endif
--#ifdef SEAGATE_USE_ASM
-- " SEAGATE_USE_ASM"
--#endif
--#ifdef SLOW_RATE
-- " SLOW_RATE"
--#endif
--#ifdef SWAPSTAT
-- " SWAPSTAT"
--#endif
--#ifdef SWAPCNTDATA
-- " SWAPCNTDATA"
--#endif
-- "\n", tpnt->name);
-- return 1;
--}
--
--static const char *seagate_st0x_info (struct Scsi_Host *shpnt)
--{
-- static char buffer[64];
--
-- snprintf(buffer, 64, "%s at irq %d, address 0x%05X",
-- (controller_type == SEAGATE) ? ST0X_ID_STR : FD_ID_STR,
-- irq, base_address);
-- return buffer;
--}
--
--/*
-- * These are our saved pointers for the outstanding command that is
-- * waiting for a reconnect
-- */
--
--static unsigned char current_target, current_lun;
--static unsigned char *current_cmnd, *current_data;
--static int current_nobuffs;
--static struct scatterlist *current_buffer;
--static int current_bufflen;
--
--#ifdef LINKED
--/*
-- * linked_connected indicates whether or not we are currently connected to
-- * linked_target, linked_lun and in an INFORMATION TRANSFER phase,
-- * using linked commands.
-- */
--
--static int linked_connected = 0;
--static unsigned char linked_target, linked_lun;
--#endif
--
--static void (*done_fn) (struct scsi_cmnd *) = NULL;
--static struct scsi_cmnd *SCint = NULL;
--
--/*
-- * These control whether or not disconnect / reconnect will be attempted,
-- * or are being attempted.
-- */
--
--#define NO_RECONNECT 0
--#define RECONNECT_NOW 1
--#define CAN_RECONNECT 2
--
--/*
-- * LINKED_RIGHT indicates that we are currently connected to the correct target
-- * for this command, LINKED_WRONG indicates that we are connected to the wrong
-- * target. Note that these imply CAN_RECONNECT and require defined(LINKED).
-- */
--
--#define LINKED_RIGHT 3
--#define LINKED_WRONG 4
--
--/*
-- * This determines if we are expecting to reconnect or not.
-- */
--
--static int should_reconnect = 0;
--
--/*
-- * The seagate_reconnect_intr routine is called when a target reselects the
-- * host adapter. This occurs on the interrupt triggered by the target
-- * asserting SEL.
-- */
--
--static irqreturn_t do_seagate_reconnect_intr(int irq, void *dev_id)
--{
-- unsigned long flags;
-- struct Scsi_Host *dev = dev_id;
--
-- spin_lock_irqsave (dev->host_lock, flags);
-- seagate_reconnect_intr (irq, dev_id);
-- spin_unlock_irqrestore (dev->host_lock, flags);
-- return IRQ_HANDLED;
--}
--
--static void seagate_reconnect_intr (int irq, void *dev_id)
--{
-- int temp;
-- struct scsi_cmnd *SCtmp;
--
-- DPRINTK (PHASE_RESELECT, "scsi%d : seagate_reconnect_intr() called\n", hostno);
--
-- if (!should_reconnect)
-- printk(KERN_WARNING "scsi%d: unexpected interrupt.\n", hostno);
-- else {
-- should_reconnect = 0;
--
-- DPRINTK (PHASE_RESELECT, "scsi%d : internal_command(%d, %08x, %08x, RECONNECT_NOW\n",
-- hostno, current_target, current_data, current_bufflen);
--
-- temp = internal_command (current_target, current_lun, current_cmnd, current_data, current_bufflen, RECONNECT_NOW);
--
-- if (msg_byte(temp) != DISCONNECT) {
-- if (done_fn) {
-- DPRINTK(PHASE_RESELECT, "scsi%d : done_fn(%d,%08x)", hostno, hostno, temp);
-- if (!SCint)
-- panic ("SCint == NULL in seagate");
-- SCtmp = SCint;
-- SCint = NULL;
-- SCtmp->result = temp;
-- done_fn(SCtmp);
-- } else
-- printk(KERN_ERR "done_fn() not defined.\n");
-- }
-- }
--}
--
--/*
-- * The seagate_st0x_queue_command() function provides a queued interface
-- * to the seagate SCSI driver. Basically, it just passes control onto the
-- * seagate_command() function, after fixing it so that the done_fn()
-- * is set to the one passed to the function. We have to be very careful,
-- * because there are some commands on some devices that do not disconnect,
-- * and if we simply call the done_fn when the command is done then another
-- * command is started and queue_command is called again... We end up
-- * overflowing the kernel stack, and this tends not to be such a good idea.
-- */
--
--static int recursion_depth = 0;
--
--static int seagate_st0x_queue_command(struct scsi_cmnd * SCpnt,
-- void (*done) (struct scsi_cmnd *))
--{
-- int result, reconnect;
-- struct scsi_cmnd *SCtmp;
--
-- DANY ("seagate: que_command");
-- done_fn = done;
-- current_target = SCpnt->device->id;
-- current_lun = SCpnt->device->lun;
-- current_cmnd = SCpnt->cmnd;
-- current_data = (unsigned char *) SCpnt->request_buffer;
-- current_bufflen = SCpnt->request_bufflen;
-- SCint = SCpnt;
-- if (recursion_depth)
-- return 1;
-- recursion_depth++;
-- do {
--#ifdef LINKED
-- /*
-- * Set linked command bit in control field of SCSI command.
-- */
--
-- current_cmnd[SCpnt->cmd_len] |= 0x01;
-- if (linked_connected) {
-- DPRINTK (DEBUG_LINKED, "scsi%d : using linked commands, current I_T_L nexus is ", hostno);
-- if (linked_target == current_target && linked_lun == current_lun)
-- {
-- DPRINTK(DEBUG_LINKED, "correct\n");
-- reconnect = LINKED_RIGHT;
-- } else {
-- DPRINTK(DEBUG_LINKED, "incorrect\n");
-- reconnect = LINKED_WRONG;
-- }
-- } else
--#endif /* LINKED */
-- reconnect = CAN_RECONNECT;
--
-- result = internal_command(SCint->device->id, SCint->device->lun, SCint->cmnd,
-- SCint->request_buffer, SCint->request_bufflen, reconnect);
-- if (msg_byte(result) == DISCONNECT)
-- break;
-- SCtmp = SCint;
-- SCint = NULL;
-- SCtmp->result = result;
-- done_fn(SCtmp);
-- }
-- while (SCint);
-- recursion_depth--;
-- return 0;
--}
--
--static int internal_command (unsigned char target, unsigned char lun,
-- const void *cmnd, void *buff, int bufflen, int reselect)
--{
-- unsigned char *data = NULL;
-- struct scatterlist *buffer = NULL;
-- int clock, temp, nobuffs = 0, done = 0, len = 0;
--#if DEBUG
-- int transfered = 0, phase = 0, newphase;
--#endif
-- register unsigned char status_read;
-- unsigned char tmp_data, tmp_control, status = 0, message = 0;
-- unsigned transfersize = 0, underflow = 0;
--#ifdef SLOW_RATE
-- int borken = (int) SCint->device->borken; /* Does the current target require
-- Very Slow I/O ? */
--#endif
--
-- incommand = 0;
-- st0x_aborted = 0;
--
--#if (DEBUG & PRINT_COMMAND)
-- printk("scsi%d : target = %d, command = ", hostno, target);
-- __scsi_print_command((unsigned char *) cmnd);
--#endif
--
--#if (DEBUG & PHASE_RESELECT)
-- switch (reselect) {
-- case RECONNECT_NOW:
-- printk("scsi%d : reconnecting\n", hostno);
-- break;
--#ifdef LINKED
-- case LINKED_RIGHT:
-- printk("scsi%d : connected, can reconnect\n", hostno);
-- break;
-- case LINKED_WRONG:
-- printk("scsi%d : connected to wrong target, can reconnect\n",
-- hostno);
-- break;
--#endif
-- case CAN_RECONNECT:
-- printk("scsi%d : allowed to reconnect\n", hostno);
-- break;
-- default:
-- printk("scsi%d : not allowed to reconnect\n", hostno);
-- }
--#endif
--
-- if (target == (controller_type == SEAGATE ? 7 : 6))
-- return DID_BAD_TARGET;
--
-- /*
-- * We work it differently depending on if this is is "the first time,"
-- * or a reconnect. If this is a reselect phase, then SEL will
-- * be asserted, and we must skip selection / arbitration phases.
-- */
--
-- switch (reselect) {
-- case RECONNECT_NOW:
-- DPRINTK (PHASE_RESELECT, "scsi%d : phase RESELECT \n", hostno);
-- /*
-- * At this point, we should find the logical or of our ID
-- * and the original target's ID on the BUS, with BSY, SEL,
-- * and I/O signals asserted.
-- *
-- * After ARBITRATION phase is completed, only SEL, BSY,
-- * and the target ID are asserted. A valid initiator ID
-- * is not on the bus until IO is asserted, so we must wait
-- * for that.
-- */
-- ULOOP (100 * 1000) {
-- temp = STATUS;
-- if ((temp & STAT_IO) && !(temp & STAT_BSY))
-- break;
-- if (TIMEOUT) {
-- DPRINTK (PHASE_RESELECT, "scsi%d : RESELECT timed out while waiting for IO .\n", hostno);
-- return (DID_BAD_INTR << 16);
-- }
-- }
--
-- /*
-- * After I/O is asserted by the target, we can read our ID
-- * and its ID off of the BUS.
-- */
--
-- if (!((temp = DATA) & (controller_type == SEAGATE ? 0x80 : 0x40))) {
-- DPRINTK (PHASE_RESELECT, "scsi%d : detected reconnect request to different target.\n\tData bus = %d\n", hostno, temp);
-- return (DID_BAD_INTR << 16);
-- }
--
-- if (!(temp & (1 << current_target))) {
-- printk(KERN_WARNING "scsi%d : Unexpected reselect interrupt. Data bus = %d\n", hostno, temp);
-- return (DID_BAD_INTR << 16);
-- }
--
-- buffer = current_buffer;
-- cmnd = current_cmnd; /* WDE add */
-- data = current_data; /* WDE add */
-- len = current_bufflen; /* WDE add */
-- nobuffs = current_nobuffs;
--
-- /*
-- * We have determined that we have been selected. At this
-- * point, we must respond to the reselection by asserting
-- * BSY ourselves
-- */
--
--#if 1
-- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | CMD_BSY);
--#else
-- WRITE_CONTROL (BASE_CMD | CMD_BSY);
--#endif
--
-- /*
-- * The target will drop SEL, and raise BSY, at which time
-- * we must drop BSY.
-- */
--
-- ULOOP (100 * 1000) {
-- if (!(STATUS & STAT_SEL))
-- break;
-- if (TIMEOUT) {
-- WRITE_CONTROL (BASE_CMD | CMD_INTR);
-- DPRINTK (PHASE_RESELECT, "scsi%d : RESELECT timed out while waiting for SEL.\n", hostno);
-- return (DID_BAD_INTR << 16);
-- }
-- }
-- WRITE_CONTROL (BASE_CMD);
-- /*
-- * At this point, we have connected with the target
-- * and can get on with our lives.
-- */
-- break;
-- case CAN_RECONNECT:
--#ifdef LINKED
-- /*
-- * This is a bletcherous hack, just as bad as the Unix #!
-- * interpreter stuff. If it turns out we are using the wrong
-- * I_T_L nexus, the easiest way to deal with it is to go into
-- * our INFORMATION TRANSFER PHASE code, send a ABORT
-- * message on MESSAGE OUT phase, and then loop back to here.
-- */
--connect_loop:
--#endif
-- DPRINTK (PHASE_BUS_FREE, "scsi%d : phase = BUS FREE \n", hostno);
--
-- /*
-- * BUS FREE PHASE
-- *
-- * On entry, we make sure that the BUS is in a BUS FREE
-- * phase, by insuring that both BSY and SEL are low for
-- * at least one bus settle delay. Several reads help
-- * eliminate wire glitch.
-- */
--
--#ifndef ARBITRATE
--#error FIXME: this is broken: we may not use jiffies here - we are under cli(). It will hardlock.
-- clock = jiffies + ST0X_BUS_FREE_DELAY;
--
-- while (((STATUS | STATUS | STATUS) & (STAT_BSY | STAT_SEL)) && (!st0x_aborted) && time_before (jiffies, clock))
-- cpu_relax();
--
-- if (time_after (jiffies, clock))
-- return retcode (DID_BUS_BUSY);
-- else if (st0x_aborted)
-- return retcode (st0x_aborted);
--#endif
-- DPRINTK (PHASE_SELECTION, "scsi%d : phase = SELECTION\n", hostno);
--
-- clock = jiffies + ST0X_SELECTION_DELAY;
--
-- /*
-- * Arbitration/selection procedure :
-- * 1. Disable drivers
-- * 2. Write HOST adapter address bit
-- * 3. Set start arbitration.
-- * 4. We get either ARBITRATION COMPLETE or SELECT at this
-- * point.
-- * 5. OR our ID and targets on bus.
-- * 6. Enable SCSI drivers and asserted SEL and ATTN
-- */
--
--#ifdef ARBITRATE
-- /* FIXME: verify host lock is always held here */
-- WRITE_CONTROL(0);
-- WRITE_DATA((controller_type == SEAGATE) ? 0x80 : 0x40);
-- WRITE_CONTROL(CMD_START_ARB);
--
-- ULOOP (ST0X_SELECTION_DELAY * 10000) {
-- status_read = STATUS;
-- if (status_read & STAT_ARB_CMPL)
-- break;
-- if (st0x_aborted) /* FIXME: What? We are going to do something even after abort? */
-- break;
-- if (TIMEOUT || (status_read & STAT_SEL)) {
-- printk(KERN_WARNING "scsi%d : arbitration lost or timeout.\n", hostno);
-- WRITE_CONTROL (BASE_CMD);
-- return retcode (DID_NO_CONNECT);
-- }
-- }
-- DPRINTK (PHASE_SELECTION, "scsi%d : arbitration complete\n", hostno);
--#endif
--
-- /*
-- * When the SCSI device decides that we're gawking at it,
-- * it will respond by asserting BUSY on the bus.
-- *
-- * Note : the Seagate ST-01/02 product manual says that we
-- * should twiddle the DATA register before the control
-- * register. However, this does not work reliably so we do
-- * it the other way around.
-- *
-- * Probably could be a problem with arbitration too, we
-- * really should try this with a SCSI protocol or logic
-- * analyzer to see what is going on.
-- */
-- tmp_data = (unsigned char) ((1 << target) | (controller_type == SEAGATE ? 0x80 : 0x40));
-- tmp_control = BASE_CMD | CMD_DRVR_ENABLE | CMD_SEL | (reselect ? CMD_ATTN : 0);
--
-- /* FIXME: verify host lock is always held here */
--#ifdef OLDCNTDATASCEME
--#ifdef SWAPCNTDATA
-- WRITE_CONTROL (tmp_control);
-- WRITE_DATA (tmp_data);
--#else
-- WRITE_DATA (tmp_data);
-- WRITE_CONTROL (tmp_control);
--#endif
--#else
-- tmp_control ^= CMD_BSY; /* This is guesswork. What used to be in driver */
-- WRITE_CONTROL (tmp_control); /* could never work: it sent data into control */
-- WRITE_DATA (tmp_data); /* register and control info into data. Hopefully */
-- tmp_control ^= CMD_BSY; /* fixed, but order of first two may be wrong. */
-- WRITE_CONTROL (tmp_control); /* -- pavel at ucw.cz */
--#endif
--
-- ULOOP (250 * 1000) {
-- if (st0x_aborted) {
-- /*
-- * If we have been aborted, and we have a
-- * command in progress, IE the target
-- * still has BSY asserted, then we will
-- * reset the bus, and notify the midlevel
-- * driver to expect sense.
-- */
--
-- WRITE_CONTROL (BASE_CMD);
-- if (STATUS & STAT_BSY) {
-- printk(KERN_WARNING "scsi%d : BST asserted after we've been aborted.\n", hostno);
-- seagate_st0x_bus_reset(NULL);
-- return retcode (DID_RESET);
-- }
-- return retcode (st0x_aborted);
-- }
-- if (STATUS & STAT_BSY)
-- break;
-- if (TIMEOUT) {
-- DPRINTK (PHASE_SELECTION, "scsi%d : NO CONNECT with target %d, stat = %x \n", hostno, target, STATUS);
-- return retcode (DID_NO_CONNECT);
-- }
-- }
--
-- /* Establish current pointers. Take into account scatter / gather */
--
-- if ((nobuffs = SCint->use_sg)) {
--#if (DEBUG & DEBUG_SG)
-- {
-- int i;
-- printk("scsi%d : scatter gather requested, using %d buffers.\n", hostno, nobuffs);
-- for (i = 0; i < nobuffs; ++i)
-- printk("scsi%d : buffer %d address = %p length = %d\n",
-- hostno, i,
-- sg_virt(&buffer[i]),
-- buffer[i].length);
-- }
--#endif
--
-- buffer = (struct scatterlist *) SCint->request_buffer;
-- len = buffer->length;
-- data = sg_virt(buffer);
-- } else {
-- DPRINTK (DEBUG_SG, "scsi%d : scatter gather not requested.\n", hostno);
-- buffer = NULL;
-- len = SCint->request_bufflen;
-- data = (unsigned char *) SCint->request_buffer;
-- }
--
-- DPRINTK (PHASE_DATAIN | PHASE_DATAOUT, "scsi%d : len = %d\n",
-- hostno, len);
--
-- break;
--#ifdef LINKED
-- case LINKED_RIGHT:
-- break;
-- case LINKED_WRONG:
-- break;
--#endif
-- } /* end of switch(reselect) */
--
-- /*
-- * There are several conditions under which we wish to send a message :
-- * 1. When we are allowing disconnect / reconnect, and need to
-- * establish the I_T_L nexus via an IDENTIFY with the DiscPriv bit
-- * set.
-- *
-- * 2. When we are doing linked commands, are have the wrong I_T_L
-- * nexus established and want to send an ABORT message.
-- */
--
-- /* GCC does not like an ifdef inside a macro, so do it the hard way. */
--#ifdef LINKED
-- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | (((reselect == CAN_RECONNECT)|| (reselect == LINKED_WRONG))? CMD_ATTN : 0));
--#else
-- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE | (((reselect == CAN_RECONNECT))? CMD_ATTN : 0));
--#endif
--
-- /*
-- * INFORMATION TRANSFER PHASE
-- *
-- * The nasty looking read / write inline assembler loops we use for
-- * DATAIN and DATAOUT phases are approximately 4-5 times as fast as
-- * the 'C' versions - since we're moving 1024 bytes of data, this
-- * really adds up.
-- *
-- * SJT: The nasty-looking assembler is gone, so it's slower.
-- *
-- */
--
-- DPRINTK (PHASE_ETC, "scsi%d : phase = INFORMATION TRANSFER\n", hostno);
--
-- incommand = 1;
-- transfersize = SCint->transfersize;
-- underflow = SCint->underflow;
--
-- /*
-- * Now, we poll the device for status information,
-- * and handle any requests it makes. Note that since we are unsure
-- * of how much data will be flowing across the system, etc and
-- * cannot make reasonable timeouts, that we will instead have the
-- * midlevel driver handle any timeouts that occur in this phase.
-- */
--
-- while (((status_read = STATUS) & STAT_BSY) && !st0x_aborted && !done) {
--#ifdef PARITY
-- if (status_read & STAT_PARITY) {
-- printk(KERN_ERR "scsi%d : got parity error\n", hostno);
-- st0x_aborted = DID_PARITY;
-- }
--#endif
-- if (status_read & STAT_REQ) {
--#if ((DEBUG & PHASE_ETC) == PHASE_ETC)
-- if ((newphase = (status_read & REQ_MASK)) != phase) {
-- phase = newphase;
-- switch (phase) {
-- case REQ_DATAOUT:
-- printk ("scsi%d : phase = DATA OUT\n", hostno);
-- break;
-- case REQ_DATAIN:
-- printk ("scsi%d : phase = DATA IN\n", hostno);
-- break;
-- case REQ_CMDOUT:
-- printk
-- ("scsi%d : phase = COMMAND OUT\n", hostno);
-- break;
-- case REQ_STATIN:
-- printk ("scsi%d : phase = STATUS IN\n", hostno);
-- break;
-- case REQ_MSGOUT:
-- printk
-- ("scsi%d : phase = MESSAGE OUT\n", hostno);
-- break;
-- case REQ_MSGIN:
-- printk ("scsi%d : phase = MESSAGE IN\n", hostno);
-- break;
-- default:
-- printk ("scsi%d : phase = UNKNOWN\n", hostno);
-- st0x_aborted = DID_ERROR;
-- }
-- }
--#endif
-- switch (status_read & REQ_MASK) {
-- case REQ_DATAOUT:
-- /*
-- * If we are in fast mode, then we simply splat
-- * the data out in word-sized chunks as fast as
-- * we can.
-- */
--
-- if (!len) {
--#if 0
-- printk("scsi%d: underflow to target %d lun %d \n", hostno, target, lun);
-- st0x_aborted = DID_ERROR;
-- fast = 0;
--#endif
-- break;
-- }
--
-- if (fast && transfersize
-- && !(len % transfersize)
-- && (len >= transfersize)
--#ifdef FAST32
-- && !(transfersize % 4)
--#endif
-- ) {
-- DPRINTK (DEBUG_FAST,
-- "scsi%d : FAST transfer, underflow = %d, transfersize = %d\n"
-- " len = %d, data = %08x\n",
-- hostno, SCint->underflow,
-- SCint->transfersize, len,
-- data);
--
-- /* SJT: Start. Fast Write */
--#ifdef SEAGATE_USE_ASM
-- __asm__ ("cld\n\t"
--#ifdef FAST32
-- "shr $2, %%ecx\n\t"
-- "1:\t"
-- "lodsl\n\t"
-- "movl %%eax, (%%edi)\n\t"
--#else
-- "1:\t"
-- "lodsb\n\t"
-- "movb %%al, (%%edi)\n\t"
--#endif
-- "loop 1b;"
-- /* output */ :
-- /* input */ :"D" (st0x_dr),
-- "S"
-- (data),
-- "c" (SCint->transfersize)
--/* clobbered */
-- : "eax", "ecx",
-- "esi");
--#else /* SEAGATE_USE_ASM */
-- memcpy_toio(st0x_dr, data, transfersize);
--#endif /* SEAGATE_USE_ASM */
--/* SJT: End */
-- len -= transfersize;
-- data += transfersize;
-- DPRINTK (DEBUG_FAST, "scsi%d : FAST transfer complete len = %d data = %08x\n", hostno, len, data);
-- } else {
-- /*
-- * We loop as long as we are in a
-- * data out phase, there is data to
-- * send, and BSY is still active.
-- */
--
--/* SJT: Start. Slow Write. */
--#ifdef SEAGATE_USE_ASM
--
-- int __dummy_1, __dummy_2;
--
--/*
-- * We loop as long as we are in a data out phase, there is data to send,
-- * and BSY is still active.
-- */
--/* Local variables : len = ecx , data = esi,
-- st0x_cr_sr = ebx, st0x_dr = edi
--*/
-- __asm__ (
-- /* Test for any data here at all. */
-- "orl %%ecx, %%ecx\n\t"
-- "jz 2f\n\t" "cld\n\t"
--/* "movl st0x_cr_sr, %%ebx\n\t" */
--/* "movl st0x_dr, %%edi\n\t" */
-- "1:\t"
-- "movb (%%ebx), %%al\n\t"
-- /* Test for BSY */
-- "test $1, %%al\n\t"
-- "jz 2f\n\t"
-- /* Test for data out phase - STATUS & REQ_MASK should be
-- REQ_DATAOUT, which is 0. */
-- "test $0xe, %%al\n\t"
-- "jnz 2f\n\t"
-- /* Test for REQ */
-- "test $0x10, %%al\n\t"
-- "jz 1b\n\t"
-- "lodsb\n\t"
-- "movb %%al, (%%edi)\n\t"
-- "loop 1b\n\t" "2:\n"
-- /* output */ :"=S" (data), "=c" (len),
-- "=b"
-- (__dummy_1),
-- "=D" (__dummy_2)
--/* input */
-- : "0" (data), "1" (len),
-- "2" (st0x_cr_sr),
-- "3" (st0x_dr)
--/* clobbered */
-- : "eax");
--#else /* SEAGATE_USE_ASM */
-- while (len) {
-- unsigned char stat;
--
-- stat = STATUS;
-- if (!(stat & STAT_BSY)
-- || ((stat & REQ_MASK) !=
-- REQ_DATAOUT))
-- break;
-- if (stat & STAT_REQ) {
-- WRITE_DATA (*data++);
-- --len;
-- }
-- }
--#endif /* SEAGATE_USE_ASM */
--/* SJT: End. */
-- }
--
-- if (!len && nobuffs) {
-- --nobuffs;
-- ++buffer;
-- len = buffer->length;
-- data = sg_virt(buffer);
-- DPRINTK (DEBUG_SG,
-- "scsi%d : next scatter-gather buffer len = %d address = %08x\n",
-- hostno, len, data);
-- }
-- break;
--
-- case REQ_DATAIN:
--#ifdef SLOW_RATE
-- if (borken) {
--#if (DEBUG & (PHASE_DATAIN))
-- transfered += len;
--#endif
-- for (; len && (STATUS & (REQ_MASK | STAT_REQ)) == (REQ_DATAIN | STAT_REQ); --len) {
-- *data++ = DATA;
-- borken_wait();
-- }
--#if (DEBUG & (PHASE_DATAIN))
-- transfered -= len;
--#endif
-- } else
--#endif
--
-- if (fast && transfersize
-- && !(len % transfersize)
-- && (len >= transfersize)
--#ifdef FAST32
-- && !(transfersize % 4)
--#endif
-- ) {
-- DPRINTK (DEBUG_FAST,
-- "scsi%d : FAST transfer, underflow = %d, transfersize = %d\n"
-- " len = %d, data = %08x\n",
-- hostno, SCint->underflow,
-- SCint->transfersize, len,
-- data);
--
--/* SJT: Start. Fast Read */
--#ifdef SEAGATE_USE_ASM
-- __asm__ ("cld\n\t"
--#ifdef FAST32
-- "shr $2, %%ecx\n\t"
-- "1:\t"
-- "movl (%%esi), %%eax\n\t"
-- "stosl\n\t"
--#else
-- "1:\t"
-- "movb (%%esi), %%al\n\t"
-- "stosb\n\t"
--#endif
-- "loop 1b\n\t"
-- /* output */ :
-- /* input */ :"S" (st0x_dr),
-- "D"
-- (data),
-- "c" (SCint->transfersize)
--/* clobbered */
-- : "eax", "ecx",
-- "edi");
--#else /* SEAGATE_USE_ASM */
-- memcpy_fromio(data, st0x_dr, len);
--#endif /* SEAGATE_USE_ASM */
--/* SJT: End */
-- len -= transfersize;
-- data += transfersize;
--#if (DEBUG & PHASE_DATAIN)
-- printk ("scsi%d: transfered += %d\n", hostno, transfersize);
-- transfered += transfersize;
--#endif
--
-- DPRINTK (DEBUG_FAST, "scsi%d : FAST transfer complete len = %d data = %08x\n", hostno, len, data);
-- } else {
--
--#if (DEBUG & PHASE_DATAIN)
-- printk ("scsi%d: transfered += %d\n", hostno, len);
-- transfered += len; /* Assume we'll transfer it all, then
-- subtract what we *didn't* transfer */
--#endif
--
--/*
-- * We loop as long as we are in a data in phase, there is room to read,
-- * and BSY is still active
-- */
--
--/* SJT: Start. */
--#ifdef SEAGATE_USE_ASM
--
-- int __dummy_3, __dummy_4;
--
--/* Dummy clobbering variables for the new gcc-2.95 */
--
--/*
-- * We loop as long as we are in a data in phase, there is room to read,
-- * and BSY is still active
-- */
-- /* Local variables : ecx = len, edi = data
-- esi = st0x_cr_sr, ebx = st0x_dr */
-- __asm__ (
-- /* Test for room to read */
-- "orl %%ecx, %%ecx\n\t"
-- "jz 2f\n\t" "cld\n\t"
--/* "movl st0x_cr_sr, %%esi\n\t" */
--/* "movl st0x_dr, %%ebx\n\t" */
-- "1:\t"
-- "movb (%%esi), %%al\n\t"
-- /* Test for BSY */
-- "test $1, %%al\n\t"
-- "jz 2f\n\t"
-- /* Test for data in phase - STATUS & REQ_MASK should be REQ_DATAIN,
-- = STAT_IO, which is 4. */
-- "movb $0xe, %%ah\n\t"
-- "andb %%al, %%ah\n\t"
-- "cmpb $0x04, %%ah\n\t"
-- "jne 2f\n\t"
-- /* Test for REQ */
-- "test $0x10, %%al\n\t"
-- "jz 1b\n\t"
-- "movb (%%ebx), %%al\n\t"
-- "stosb\n\t"
-- "loop 1b\n\t" "2:\n"
-- /* output */ :"=D" (data), "=c" (len),
-- "=S"
-- (__dummy_3),
-- "=b" (__dummy_4)
--/* input */
-- : "0" (data), "1" (len),
-- "2" (st0x_cr_sr),
-- "3" (st0x_dr)
--/* clobbered */
-- : "eax");
--#else /* SEAGATE_USE_ASM */
-- while (len) {
-- unsigned char stat;
--
-- stat = STATUS;
-- if (!(stat & STAT_BSY)
-- || ((stat & REQ_MASK) !=
-- REQ_DATAIN))
-- break;
-- if (stat & STAT_REQ) {
-- *data++ = DATA;
-- --len;
-- }
-- }
--#endif /* SEAGATE_USE_ASM */
--/* SJT: End. */
--#if (DEBUG & PHASE_DATAIN)
-- printk ("scsi%d: transfered -= %d\n", hostno, len);
-- transfered -= len; /* Since we assumed all of Len got *
-- transfered, correct our mistake */
--#endif
-- }
--
-- if (!len && nobuffs) {
-- --nobuffs;
-- ++buffer;
-- len = buffer->length;
-- data = sg_virt(buffer);
-- DPRINTK (DEBUG_SG, "scsi%d : next scatter-gather buffer len = %d address = %08x\n", hostno, len, data);
-- }
-- break;
--
-- case REQ_CMDOUT:
-- while (((status_read = STATUS) & STAT_BSY) &&
-- ((status_read & REQ_MASK) == REQ_CMDOUT))
-- if (status_read & STAT_REQ) {
-- WRITE_DATA (*(const unsigned char *) cmnd);
-- cmnd = 1 + (const unsigned char *)cmnd;
--#ifdef SLOW_RATE
-- if (borken)
-- borken_wait ();
--#endif
-- }
-- break;
--
-- case REQ_STATIN:
-- status = DATA;
-- break;
--
-- case REQ_MSGOUT:
-- /*
-- * We can only have sent a MSG OUT if we
-- * requested to do this by raising ATTN.
-- * So, we must drop ATTN.
-- */
-- WRITE_CONTROL (BASE_CMD | CMD_DRVR_ENABLE);
-- /*
-- * If we are reconnecting, then we must
-- * send an IDENTIFY message in response
-- * to MSGOUT.
-- */
-- switch (reselect) {
-- case CAN_RECONNECT:
-- WRITE_DATA (IDENTIFY (1, lun));
-- DPRINTK (PHASE_RESELECT | PHASE_MSGOUT, "scsi%d : sent IDENTIFY message.\n", hostno);
-- break;
--#ifdef LINKED
-- case LINKED_WRONG:
-- WRITE_DATA (ABORT);
-- linked_connected = 0;
-- reselect = CAN_RECONNECT;
-- goto connect_loop;
-- DPRINTK (PHASE_MSGOUT | DEBUG_LINKED, "scsi%d : sent ABORT message to cancel incorrect I_T_L nexus.\n", hostno);
--#endif /* LINKED */
-- DPRINTK (DEBUG_LINKED, "correct\n");
-- default:
-- WRITE_DATA (NOP);
-- printk("scsi%d : target %d requested MSGOUT, sent NOP message.\n", hostno, target);
-- }
-- break;
--
-- case REQ_MSGIN:
-- switch (message = DATA) {
-- case DISCONNECT:
-- DANY("seagate: deciding to disconnect\n");
-- should_reconnect = 1;
-- current_data = data; /* WDE add */
-- current_buffer = buffer;
-- current_bufflen = len; /* WDE add */
-- current_nobuffs = nobuffs;
--#ifdef LINKED
-- linked_connected = 0;
--#endif
-- done = 1;
-- DPRINTK ((PHASE_RESELECT | PHASE_MSGIN), "scsi%d : disconnected.\n", hostno);
-- break;
--
--#ifdef LINKED
-- case LINKED_CMD_COMPLETE:
-- case LINKED_FLG_CMD_COMPLETE:
--#endif
-- case COMMAND_COMPLETE:
-- /*
-- * Note : we should check for underflow here.
-- */
-- DPRINTK(PHASE_MSGIN, "scsi%d : command complete.\n", hostno);
-- done = 1;
-- break;
-- case ABORT:
-- DPRINTK(PHASE_MSGIN, "scsi%d : abort message.\n", hostno);
-- done = 1;
-- break;
-- case SAVE_POINTERS:
-- current_buffer = buffer;
-- current_bufflen = len; /* WDE add */
-- current_data = data; /* WDE mod */
-- current_nobuffs = nobuffs;
-- DPRINTK (PHASE_MSGIN, "scsi%d : pointers saved.\n", hostno);
-- break;
-- case RESTORE_POINTERS:
-- buffer = current_buffer;
-- cmnd = current_cmnd;
-- data = current_data; /* WDE mod */
-- len = current_bufflen;
-- nobuffs = current_nobuffs;
-- DPRINTK(PHASE_MSGIN, "scsi%d : pointers restored.\n", hostno);
-- break;
-- default:
--
-- /*
-- * IDENTIFY distinguishes itself
-- * from the other messages by
-- * setting the high bit.
-- *
-- * Note : we need to handle at
-- * least one outstanding command
-- * per LUN, and need to hash the
-- * SCSI command for that I_T_L
-- * nexus based on the known ID
-- * (at this point) and LUN.
-- */
--
-- if (message & 0x80) {
-- DPRINTK (PHASE_MSGIN, "scsi%d : IDENTIFY message received from id %d, lun %d.\n", hostno, target, message & 7);
-- } else {
-- /*
-- * We should go into a
-- * MESSAGE OUT phase, and
-- * send a MESSAGE_REJECT
-- * if we run into a message
-- * that we don't like. The
-- * seagate driver needs
-- * some serious
-- * restructuring first
-- * though.
-- */
-- DPRINTK (PHASE_MSGIN, "scsi%d : unknown message %d from target %d.\n", hostno, message, target);
-- }
-- }
-- break;
-- default:
-- printk(KERN_ERR "scsi%d : unknown phase.\n", hostno);
-- st0x_aborted = DID_ERROR;
-- } /* end of switch (status_read & REQ_MASK) */
--#ifdef SLOW_RATE
-- /*
-- * I really don't care to deal with borken devices in
-- * each single byte transfer case (ie, message in,
-- * message out, status), so I'll do the wait here if
-- * necessary.
-- */
-- if(borken)
-- borken_wait();
--#endif
--
-- } /* if(status_read & STAT_REQ) ends */
-- } /* while(((status_read = STATUS)...) ends */
--
-- DPRINTK(PHASE_DATAIN | PHASE_DATAOUT | PHASE_EXIT, "scsi%d : Transfered %d bytes\n", hostno, transfered);
--
--#if (DEBUG & PHASE_EXIT)
--#if 0 /* Doesn't work for scatter/gather */
-- printk("Buffer : \n");
-- for(i = 0; i < 20; ++i)
-- printk("%02x ", ((unsigned char *) data)[i]); /* WDE mod */
-- printk("\n");
--#endif
-- printk("scsi%d : status = ", hostno);
-- scsi_print_status(status);
-- printk(" message = %02x\n", message);
--#endif
--
-- /* We shouldn't reach this until *after* BSY has been deasserted */
--
--#ifdef LINKED
-- else
-- {
-- /*
-- * Fix the message byte so that unsuspecting high level drivers
-- * don't puke when they see a LINKED COMMAND message in place of
-- * the COMMAND COMPLETE they may be expecting. Shouldn't be
-- * necessary, but it's better to be on the safe side.
-- *
-- * A non LINKED* message byte will indicate that the command
-- * completed, and we are now disconnected.
-- */
--
-- switch (message) {
-- case LINKED_CMD_COMPLETE:
-- case LINKED_FLG_CMD_COMPLETE:
-- message = COMMAND_COMPLETE;
-- linked_target = current_target;
-- linked_lun = current_lun;
-- linked_connected = 1;
-- DPRINTK (DEBUG_LINKED, "scsi%d : keeping I_T_L nexus established for linked command.\n", hostno);
-- /* We also will need to adjust status to accommodate intermediate
-- conditions. */
-- if ((status == INTERMEDIATE_GOOD) || (status == INTERMEDIATE_C_GOOD))
-- status = GOOD;
-- break;
-- /*
-- * We should also handle what are "normal" termination
-- * messages here (ABORT, BUS_DEVICE_RESET?, and
-- * COMMAND_COMPLETE individually, and flake if things
-- * aren't right.
-- */
-- default:
-- DPRINTK (DEBUG_LINKED, "scsi%d : closing I_T_L nexus.\n", hostno);
-- linked_connected = 0;
-- }
-- }
--#endif /* LINKED */
--
-- if (should_reconnect) {
-- DPRINTK (PHASE_RESELECT, "scsi%d : exiting seagate_st0x_queue_command() with reconnect enabled.\n", hostno);
-- WRITE_CONTROL (BASE_CMD | CMD_INTR);
-- } else
-- WRITE_CONTROL (BASE_CMD);
--
-- return retcode (st0x_aborted);
--} /* end of internal_command */
--
--static int seagate_st0x_abort(struct scsi_cmnd * SCpnt)
--{
-- st0x_aborted = DID_ABORT;
-- return SUCCESS;
--}
--
--#undef ULOOP
--#undef TIMEOUT
--
--/*
-- * the seagate_st0x_reset function resets the SCSI bus
-- *
-- * May be called with SCpnt = NULL
-- */
--
--static int seagate_st0x_bus_reset(struct scsi_cmnd * SCpnt)
--{
-- /* No timeouts - this command is going to fail because it was reset. */
-- DANY ("scsi%d: Reseting bus... ", hostno);
--
-- /* assert RESET signal on SCSI bus. */
-- WRITE_CONTROL (BASE_CMD | CMD_RST);
--
-- mdelay (20);
--
-- WRITE_CONTROL (BASE_CMD);
-- st0x_aborted = DID_RESET;
--
-- DANY ("done.\n");
-- return SUCCESS;
--}
--
--static int seagate_st0x_release(struct Scsi_Host *shost)
--{
-- if (shost->irq)
-- free_irq(shost->irq, shost);
-- release_region(shost->io_port, shost->n_io_port);
-- return 0;
--}
--
--static struct scsi_host_template driver_template = {
-- .detect = seagate_st0x_detect,
-- .release = seagate_st0x_release,
-- .info = seagate_st0x_info,
-- .queuecommand = seagate_st0x_queue_command,
-- .eh_abort_handler = seagate_st0x_abort,
-- .eh_bus_reset_handler = seagate_st0x_bus_reset,
-- .can_queue = 1,
-- .this_id = 7,
-- .sg_tablesize = SG_ALL,
-- .cmd_per_lun = 1,
-- .use_clustering = DISABLE_CLUSTERING,
--};
--#include "scsi_module.c"
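The seagate driver removed above drains its non-assembly DATA OUT fallback one byte per REQ pulse, bailing out when BSY drops or the target leaves the phase. A user-space model of that loop (bit values mirror STAT_BSY/STAT_REQ/REQ_MASK; the callback shape and `struct sink` harness are invented for illustration — the real driver pokes I/O ports directly):

```c
#include <stddef.h>

/* User-space model of the slow data-out fallback: poll a status byte
 * and hand the target one byte per REQ pulse, stopping as soon as BSY
 * drops or the phase leaves DATA OUT. */
enum { STAT_BSY = 0x01, REQ_MASK = 0x0e, REQ_DATAOUT = 0x00, STAT_REQ = 0x10 };

typedef unsigned char (*status_fn)(void *ctx);        /* read status register */
typedef void (*write_fn)(void *ctx, unsigned char b); /* write data register */

size_t slow_data_out(status_fn status, write_fn put, void *ctx,
		     const unsigned char *data, size_t len)
{
	size_t sent = 0;

	while (len) {
		unsigned char stat = status(ctx);

		if (!(stat & STAT_BSY) || ((stat & REQ_MASK) != REQ_DATAOUT))
			break;		/* BSY dropped or phase changed */
		if (stat & STAT_REQ) {	/* target requests the next byte */
			put(ctx, *data++);
			--len;
			++sent;
		}
	}
	return sent;
}

/* Minimal fake target: always BSY+REQ in DATA OUT, captures the bytes. */
struct sink { unsigned char buf[8]; size_t n; };
unsigned char always_ready(void *ctx) { (void)ctx; return STAT_BSY | STAT_REQ; }
void capture(void *ctx, unsigned char b)
{
	struct sink *s = ctx;
	s->buf[s->n++] = b;
}
```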
-diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
-index f1871ea..aba28f3 100644
---- a/drivers/scsi/sg.c
-+++ b/drivers/scsi/sg.c
-@@ -48,6 +48,7 @@ static int sg_version_num = 30534; /* 2 digits for each component */
- #include <linux/blkdev.h>
- #include <linux/delay.h>
- #include <linux/scatterlist.h>
-+#include <linux/blktrace_api.h>
-
- #include "scsi.h"
- #include <scsi/scsi_dbg.h>
-@@ -602,8 +603,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
- * but is is possible that the app intended SG_DXFER_TO_DEV, because there
- * is a non-zero input_size, so emit a warning.
- */
-- if (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV)
-- if (printk_ratelimit())
-+ if (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV) {
-+ static char cmd[TASK_COMM_LEN];
-+ if (strcmp(current->comm, cmd) && printk_ratelimit()) {
- printk(KERN_WARNING
- "sg_write: data in/out %d/%d bytes for SCSI command 0x%x--"
- "guessing data in;\n" KERN_WARNING " "
-@@ -611,6 +613,9 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
- old_hdr.reply_len - (int)SZ_SG_HEADER,
- input_size, (unsigned int) cmnd[0],
- current->comm);
-+ strcpy(cmd, current->comm);
-+ }
++
++ bsbits = ac->ac_sb->s_blocksize_bits;
++ /* if stream allocation is enabled, use global goal */
++ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
++ isize = i_size_read(ac->ac_inode) >> bsbits;
++ if (size < isize)
++ size = isize;
++
++ if (size < sbi->s_mb_stream_request &&
++ (ac->ac_flags & EXT4_MB_HINT_DATA)) {
++ /* TBD: may be hot point */
++ spin_lock(&sbi->s_md_lock);
++ ac->ac_g_ex.fe_group = sbi->s_mb_last_group;
++ ac->ac_g_ex.fe_start = sbi->s_mb_last_start;
++ spin_unlock(&sbi->s_md_lock);
+ }
- k = sg_common_write(sfp, srp, cmnd, sfp->timeout, blocking);
- return (k < 0) ? k : count;
- }
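The sg_write() hunk above adds a static `cmd[TASK_COMM_LEN]` buffer so the "guessing data in" warning fires once per offending process name rather than on every write (the real code additionally gates on printk_ratelimit()). A sketch of that dedupe, with the function name invented:

```c
#include <string.h>

#define TASK_COMM_LEN 16	/* kernel's fixed process-name length */

/* Remember the last process name that triggered the warning; report 1
 * (warn) only when a different name shows up, like the static `cmd`
 * buffer added to sg_write() above. */
int warn_once_per_comm(const char *comm)
{
	static char last[TASK_COMM_LEN];

	if (strcmp(comm, last) == 0)
		return 0;		/* same offender: stay quiet */
	strncpy(last, comm, TASK_COMM_LEN - 1);
	last[TASK_COMM_LEN - 1] = '\0';
	return 1;			/* new offender: emit the warning */
}
```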
-@@ -1063,6 +1068,17 @@ sg_ioctl(struct inode *inode, struct file *filp,
- case BLKSECTGET:
- return put_user(sdp->device->request_queue->max_sectors * 512,
- ip);
-+ case BLKTRACESETUP:
-+ return blk_trace_setup(sdp->device->request_queue,
-+ sdp->disk->disk_name,
-+ sdp->device->sdev_gendev.devt,
-+ (char *)arg);
-+ case BLKTRACESTART:
-+ return blk_trace_startstop(sdp->device->request_queue, 1);
-+ case BLKTRACESTOP:
-+ return blk_trace_startstop(sdp->device->request_queue, 0);
-+ case BLKTRACETEARDOWN:
-+ return blk_trace_remove(sdp->device->request_queue);
- default:
- if (read_only)
- return -EPERM; /* don't know so take safe approach */
-@@ -1418,7 +1434,6 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
- goto out;
- }
-
-- class_set_devdata(cl_dev, sdp);
- error = cdev_add(cdev, MKDEV(SCSI_GENERIC_MAJOR, sdp->index), 1);
- if (error)
- goto cdev_add_err;
-@@ -1431,11 +1446,14 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
- MKDEV(SCSI_GENERIC_MAJOR, sdp->index),
- cl_dev->dev, "%s",
- disk->disk_name);
-- if (IS_ERR(sg_class_member))
-- printk(KERN_WARNING "sg_add: "
-- "class_device_create failed\n");
-+ if (IS_ERR(sg_class_member)) {
-+ printk(KERN_ERR "sg_add: "
-+ "class_device_create failed\n");
-+ error = PTR_ERR(sg_class_member);
-+ goto cdev_add_err;
++
++ /* searching for the right group start from the goal value specified */
++ group = ac->ac_g_ex.fe_group;
++
++ /* Let's just scan groups to find more-less suitable blocks */
++ cr = ac->ac_2order ? 0 : 1;
++ /*
++ * cr == 0 try to get exact allocation,
++ * cr == 3 try to get anything
++ */
++repeat:
++ for (; cr < 4 && ac->ac_status == AC_STATUS_CONTINUE; cr++) {
++ ac->ac_criteria = cr;
++ for (i = 0; i < EXT4_SB(sb)->s_groups_count; group++, i++) {
++ struct ext4_group_info *grp;
++ struct ext4_group_desc *desc;
++
++ if (group == EXT4_SB(sb)->s_groups_count)
++ group = 0;
++
++ /* quick check to skip empty groups */
++ grp = ext4_get_group_info(ac->ac_sb, group);
++ if (grp->bb_free == 0)
++ continue;
++
++ /*
++ * if the group is already init we check whether it is
++ * a good group and if not we don't load the buddy
++ */
++ if (EXT4_MB_GRP_NEED_INIT(grp)) {
++ /*
++ * we need full data about the group
++ * to make a good selection
++ */
++ err = ext4_mb_load_buddy(sb, group, &e4b);
++ if (err)
++ goto out;
++ ext4_mb_release_desc(&e4b);
++ }
++
++ /*
++ * If the particular group doesn't satisfy our
++ * criteria we continue with the next group
++ */
++ if (!ext4_mb_good_group(ac, group, cr))
++ continue;
++
++ err = ext4_mb_load_buddy(sb, group, &e4b);
++ if (err)
++ goto out;
++
++ ext4_lock_group(sb, group);
++ if (!ext4_mb_good_group(ac, group, cr)) {
++ /* someone did allocation from this group */
++ ext4_unlock_group(sb, group);
++ ext4_mb_release_desc(&e4b);
++ continue;
++ }
++
++ ac->ac_groups_scanned++;
++ desc = ext4_get_group_desc(sb, group, NULL);
++ if (cr == 0 || (desc->bg_flags &
++ cpu_to_le16(EXT4_BG_BLOCK_UNINIT) &&
++ ac->ac_2order != 0))
++ ext4_mb_simple_scan_group(ac, &e4b);
++ else if (cr == 1 &&
++ ac->ac_g_ex.fe_len == sbi->s_stripe)
++ ext4_mb_scan_aligned(ac, &e4b);
++ else
++ ext4_mb_complex_scan_group(ac, &e4b);
++
++ ext4_unlock_group(sb, group);
++ ext4_mb_release_desc(&e4b);
++
++ if (ac->ac_status != AC_STATUS_CONTINUE)
++ break;
+ }
- class_set_devdata(sg_class_member, sdp);
-- error = sysfs_create_link(&scsidp->sdev_gendev.kobj,
-+ error = sysfs_create_link(&scsidp->sdev_gendev.kobj,
- &sg_class_member->kobj, "generic");
- if (error)
- printk(KERN_ERR "sg_add: unable to make symlink "
-@@ -1447,6 +1465,8 @@ sg_add(struct class_device *cl_dev, struct class_interface *cl_intf)
- "Attached scsi generic sg%d type %d\n", sdp->index,
- scsidp->type);
-
-+ class_set_devdata(cl_dev, sdp);
++ }
+
- return 0;
-
- cdev_add_err:
-@@ -2521,7 +2541,7 @@ sg_idr_max_id(int id, void *p, void *data)
- static int
- sg_last_dev(void)
- {
-- int k = 0;
-+ int k = -1;
- unsigned long iflags;
-
- read_lock_irqsave(&sg_index_lock, iflags);
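The one-character sg_last_dev() fix above matters because the caller reports `max id + 1`: seeding the scan with 0 makes an empty idr look like one device, while seeding with -1 yields 0. A toy model of the max-id fold (array-based, since the idr is kernel-internal):

```c
#include <stddef.h>

/* Fold over registered ids keeping the maximum, seeded with -1 as in
 * the fix above, then report one past the highest id. */
static int max_id(const int *ids, int n)
{
	int k = -1;			/* the fixed seed */

	for (int i = 0; i < n; i++)
		if (ids[i] > k)
			k = ids[i];
	return k;
}

int last_dev(const int *ids, int n)
{
	return max_id(ids, n) + 1;	/* 0 when no devices exist */
}
```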
-diff --git a/drivers/scsi/sgiwd93.c b/drivers/scsi/sgiwd93.c
-index eef8275..d4ebe8c 100644
---- a/drivers/scsi/sgiwd93.c
-+++ b/drivers/scsi/sgiwd93.c
-@@ -159,6 +159,7 @@ void sgiwd93_reset(unsigned long base)
- udelay(50);
- hregs->ctrl = 0;
- }
-+EXPORT_SYMBOL_GPL(sgiwd93_reset);
-
- static inline void init_hpc_chain(struct hpc_data *hd)
- {
-diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
-index c619990..1fcee16 100644
---- a/drivers/scsi/sr.c
-+++ b/drivers/scsi/sr.c
-@@ -67,8 +67,6 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_WORM);
-
- #define SR_DISKS 256
-
--#define MAX_RETRIES 3
--#define SR_TIMEOUT (30 * HZ)
- #define SR_CAPABILITIES \
- (CDC_CLOSE_TRAY|CDC_OPEN_TRAY|CDC_LOCK|CDC_SELECT_SPEED| \
- CDC_SELECT_DISC|CDC_MULTI_SESSION|CDC_MCN|CDC_MEDIA_CHANGED| \
-@@ -179,21 +177,28 @@ static int sr_media_change(struct cdrom_device_info *cdi, int slot)
- {
- struct scsi_cd *cd = cdi->handle;
- int retval;
-+ struct scsi_sense_hdr *sshdr;
-
- if (CDSL_CURRENT != slot) {
- /* no changer support */
- return -EINVAL;
- }
-
-- retval = scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES);
-- if (retval) {
-- /* Unable to test, unit probably not ready. This usually
-- * means there is no disc in the drive. Mark as changed,
-- * and we will figure it out later once the drive is
-- * available again. */
-+ sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL);
-+ retval = scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES,
-+ sshdr);
-+ if (retval || (scsi_sense_valid(sshdr) &&
-+ /* 0x3a is medium not present */
-+ sshdr->asc == 0x3a)) {
-+ /* Media not present or unable to test, unit probably not
-+ * ready. This usually means there is no disc in the drive.
-+ * Mark as changed, and we will figure it out later once
-+ * the drive is available again.
++ if (ac->ac_b_ex.fe_len > 0 && ac->ac_status != AC_STATUS_FOUND &&
++ !(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
++ /*
++ * We've been searching too long. Let's try to allocate
++ * the best chunk we've found so far
+ */
- cd->device->changed = 1;
-- return 1; /* This will force a flush, if called from
-- * check_disk_change */
-+ /* This will force a flush, if called from check_disk_change */
-+ retval = 1;
-+ goto out;
- };
-
- retval = cd->device->changed;
-@@ -203,9 +208,17 @@ static int sr_media_change(struct cdrom_device_info *cdi, int slot)
- if (retval) {
- /* check multisession offset etc */
- sr_cd_check(cdi);
--
- get_sectorsize(cd);
- }
+
++ ext4_mb_try_best_found(ac, &e4b);
++ if (ac->ac_status != AC_STATUS_FOUND) {
++ /*
++ * Someone more lucky has already allocated it.
++ * The only thing we can do is just take first
++ * found block(s)
++ printk(KERN_DEBUG "EXT4-fs: someone won our chunk\n");
++ */
++ ac->ac_b_ex.fe_group = 0;
++ ac->ac_b_ex.fe_start = 0;
++ ac->ac_b_ex.fe_len = 0;
++ ac->ac_status = AC_STATUS_CONTINUE;
++ ac->ac_flags |= EXT4_MB_HINT_FIRST;
++ cr = 3;
++ atomic_inc(&sbi->s_mb_lost_chunks);
++ goto repeat;
++ }
++ }
+out:
-+ /* Notify userspace, that media has changed. */
-+ if (retval != cd->previous_state)
-+ sdev_evt_send_simple(cd->device, SDEV_EVT_MEDIA_CHANGE,
-+ GFP_KERNEL);
-+ cd->previous_state = retval;
-+ kfree(sshdr);
++ return err;
++}
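The mballoc allocator added above scans all groups under progressively looser criteria (`cr == 0` wants an exact fit, `cr == 3` takes anything), starting each pass at the goal group and wrapping. A toy model of that escalation loop — the goodness test is a stand-in for ext4_mb_good_group(), and the thresholds are illustrative only:

```c
#include <stdbool.h>

/* Stand-in for ext4_mb_good_group(): stricter at low cr. */
static bool good_group(unsigned free_blocks, unsigned want, int cr)
{
	switch (cr) {
	case 0:  return free_blocks == want;	/* exact allocation */
	case 1:  return free_blocks >= 2 * want;
	case 2:  return free_blocks >= want;
	default: return free_blocks > 0;	/* cr == 3: take anything */
	}
}

/* Scan groups from the goal under cr = 0..3; return the chosen group,
 * or -1 if even the loosest criterion finds nothing. */
int scan_groups(const unsigned *freeb, int ngroups, int goal, unsigned want)
{
	for (int cr = 0; cr < 4; cr++)
		for (int i = 0; i < ngroups; i++) {
			int g = (goal + i) % ngroups;	/* start at goal, wrap */

			if (freeb[g] == 0)
				continue;		/* quick empty check */
			if (good_group(freeb[g], want, cr))
				return g;
		}
	return -1;
}
```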
+
- return retval;
- }
-
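The reworked sr_media_change() above now passes a sense buffer to scsi_test_unit_ready() and treats ASC 0x3a (medium not present) as "changed" even when the command itself succeeds. The decision can be sketched as (field names follow struct scsi_sense_hdr loosely):

```c
#include <stdbool.h>

/* Simplified sense header: just the fields the check above consults. */
struct sense { bool valid; unsigned char asc; };

/* Media counts as changed when TEST UNIT READY fails outright, or when
 * valid sense data reports ASC 0x3a (medium not present). */
bool media_changed(int tur_result, const struct sense *s)
{
	return tur_result != 0 || (s->valid && s->asc == 0x3a);
}
```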
-diff --git a/drivers/scsi/sr.h b/drivers/scsi/sr.h
-index d65de96..81fbc0b 100644
---- a/drivers/scsi/sr.h
-+++ b/drivers/scsi/sr.h
-@@ -20,6 +20,9 @@
- #include <linux/genhd.h>
- #include <linux/kref.h>
-
-+#define MAX_RETRIES 3
-+#define SR_TIMEOUT (30 * HZ)
++#ifdef EXT4_MB_HISTORY
++struct ext4_mb_proc_session {
++ struct ext4_mb_history *history;
++ struct super_block *sb;
++ int start;
++ int max;
++};
+
- struct scsi_device;
-
- /* The CDROM is fairly slow, so we need a little extra time */
-@@ -37,6 +40,7 @@ typedef struct scsi_cd {
- unsigned xa_flag:1; /* CD has XA sectors ? */
- unsigned readcd_known:1; /* drive supports READ_CD (0xbe) */
- unsigned readcd_cdda:1; /* reading audio data using READ_CD */
-+ unsigned previous_state:1; /* media has changed */
- struct cdrom_device_info cdi;
- /* We hold gendisk and scsi_device references on probe and use
- * the refs on this kref to decide when to release them */
-diff --git a/drivers/scsi/sr_ioctl.c b/drivers/scsi/sr_ioctl.c
-index e1589f9..d5cebff 100644
---- a/drivers/scsi/sr_ioctl.c
-+++ b/drivers/scsi/sr_ioctl.c
-@@ -275,18 +275,6 @@ int sr_do_ioctl(Scsi_CD *cd, struct packet_command *cgc)
- /* ---------------------------------------------------------------------- */
- /* interface to cdrom.c */
-
--static int test_unit_ready(Scsi_CD *cd)
--{
-- struct packet_command cgc;
--
-- memset(&cgc, 0, sizeof(struct packet_command));
-- cgc.cmd[0] = GPCMD_TEST_UNIT_READY;
-- cgc.quiet = 1;
-- cgc.data_direction = DMA_NONE;
-- cgc.timeout = IOCTL_TIMEOUT;
-- return sr_do_ioctl(cd, &cgc);
--}
--
- int sr_tray_move(struct cdrom_device_info *cdi, int pos)
- {
- Scsi_CD *cd = cdi->handle;
-@@ -310,14 +298,46 @@ int sr_lock_door(struct cdrom_device_info *cdi, int lock)
-
- int sr_drive_status(struct cdrom_device_info *cdi, int slot)
- {
-+ struct scsi_cd *cd = cdi->handle;
-+ struct scsi_sense_hdr sshdr;
-+ struct media_event_desc med;
++static void *ext4_mb_history_skip_empty(struct ext4_mb_proc_session *s,
++ struct ext4_mb_history *hs,
++ int first)
++{
++ if (hs == s->history + s->max)
++ hs = s->history;
++ if (!first && hs == s->history + s->start)
++ return NULL;
++ while (hs->orig.fe_len == 0) {
++ hs++;
++ if (hs == s->history + s->max)
++ hs = s->history;
++ if (hs == s->history + s->start)
++ return NULL;
++ }
++ return hs;
++}
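ext4_mb_history_skip_empty() above walks a fixed-size ring of history slots: wrap past the end, skip never-written (zero-length) entries, and return NULL once the scan arrives back at `start`. The same logic on a plain int ring (0 meaning "empty slot"; names are invented):

```c
#include <stddef.h>

/* Advance `cur` over a ring of `max` ints, skipping zero entries and
 * wrapping, until a non-empty slot is found or the scan returns to
 * `start` (NULL = done).  `first` suppresses the start check so the
 * very first call can begin at the start slot itself. */
const int *ring_skip_empty(const int *ring, size_t max, size_t start,
			   const int *cur, int first)
{
	if (cur == ring + max)
		cur = ring;			/* wrap around */
	if (!first && cur == ring + start)
		return NULL;			/* completed a full lap */
	while (*cur == 0) {
		cur++;
		if (cur == ring + max)
			cur = ring;
		if (cur == ring + start)
			return NULL;
	}
	return cur;
}
```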
+
- if (CDSL_CURRENT != slot) {
- /* we have no changer support */
- return -EINVAL;
- }
-- if (0 == test_unit_ready(cdi->handle))
-+ if (0 == scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES,
-+ &sshdr))
- return CDS_DISC_OK;
-
-- return CDS_TRAY_OPEN;
-+ if (!cdrom_get_media_event(cdi, &med)) {
-+ if (med.media_present)
-+ return CDS_DISC_OK;
-+ else if (med.door_open)
-+ return CDS_TRAY_OPEN;
-+ else
-+ return CDS_NO_DISC;
++static void *ext4_mb_seq_history_start(struct seq_file *seq, loff_t *pos)
++{
++ struct ext4_mb_proc_session *s = seq->private;
++ struct ext4_mb_history *hs;
++ int l = *pos;
++
++ if (l == 0)
++ return SEQ_START_TOKEN;
++ hs = ext4_mb_history_skip_empty(s, s->history + s->start, 1);
++ if (!hs)
++ return NULL;
++ while (--l && (hs = ext4_mb_history_skip_empty(s, ++hs, 0)) != NULL);
++ return hs;
++}
++
++static void *ext4_mb_seq_history_next(struct seq_file *seq, void *v,
++ loff_t *pos)
++{
++ struct ext4_mb_proc_session *s = seq->private;
++ struct ext4_mb_history *hs = v;
++
++ ++*pos;
++ if (v == SEQ_START_TOKEN)
++ return ext4_mb_history_skip_empty(s, s->history + s->start, 1);
++ else
++ return ext4_mb_history_skip_empty(s, ++hs, 0);
++}
++
++static int ext4_mb_seq_history_show(struct seq_file *seq, void *v)
++{
++ char buf[25], buf2[25], buf3[25], *fmt;
++ struct ext4_mb_history *hs = v;
++
++ if (v == SEQ_START_TOKEN) {
++ seq_printf(seq, "%-5s %-8s %-23s %-23s %-23s %-5s "
++ "%-5s %-2s %-5s %-5s %-5s %-6s\n",
++ "pid", "inode", "original", "goal", "result", "found",
++ "grps", "cr", "flags", "merge", "tail", "broken");
++ return 0;
+ }
+
-+ /*
-+ * 0x04 is format in progress .. but there must be a disc present!
-+ */
-+ if (sshdr.sense_key == NOT_READY && sshdr.asc == 0x04)
-+ return CDS_DISC_OK;
++ if (hs->op == EXT4_MB_HISTORY_ALLOC) {
++ fmt = "%-5u %-8u %-23s %-23s %-23s %-5u %-5u %-2u "
++ "%-5u %-5s %-5u %-6u\n";
++ sprintf(buf2, "%lu/%d/%u@%u", hs->result.fe_group,
++ hs->result.fe_start, hs->result.fe_len,
++ hs->result.fe_logical);
++ sprintf(buf, "%lu/%d/%u@%u", hs->orig.fe_group,
++ hs->orig.fe_start, hs->orig.fe_len,
++ hs->orig.fe_logical);
++ sprintf(buf3, "%lu/%d/%u@%u", hs->goal.fe_group,
++ hs->goal.fe_start, hs->goal.fe_len,
++ hs->goal.fe_logical);
++ seq_printf(seq, fmt, hs->pid, hs->ino, buf, buf3, buf2,
++ hs->found, hs->groups, hs->cr, hs->flags,
++ hs->merged ? "M" : "", hs->tail,
++ hs->buddy ? 1 << hs->buddy : 0);
++ } else if (hs->op == EXT4_MB_HISTORY_PREALLOC) {
++ fmt = "%-5u %-8u %-23s %-23s %-23s\n";
++ sprintf(buf2, "%lu/%d/%u@%u", hs->result.fe_group,
++ hs->result.fe_start, hs->result.fe_len,
++ hs->result.fe_logical);
++ sprintf(buf, "%lu/%d/%u@%u", hs->orig.fe_group,
++ hs->orig.fe_start, hs->orig.fe_len,
++ hs->orig.fe_logical);
++ seq_printf(seq, fmt, hs->pid, hs->ino, buf, "", buf2);
++ } else if (hs->op == EXT4_MB_HISTORY_DISCARD) {
++ sprintf(buf2, "%lu/%d/%u", hs->result.fe_group,
++ hs->result.fe_start, hs->result.fe_len);
++ seq_printf(seq, "%-5u %-8u %-23s discard\n",
++ hs->pid, hs->ino, buf2);
++ } else if (hs->op == EXT4_MB_HISTORY_FREE) {
++ sprintf(buf2, "%lu/%d/%u", hs->result.fe_group,
++ hs->result.fe_start, hs->result.fe_len);
++ seq_printf(seq, "%-5u %-8u %-23s free\n",
++ hs->pid, hs->ino, buf2);
++ }
++ return 0;
++}
++
++static void ext4_mb_seq_history_stop(struct seq_file *seq, void *v)
++{
++}
++
++static struct seq_operations ext4_mb_seq_history_ops = {
++ .start = ext4_mb_seq_history_start,
++ .next = ext4_mb_seq_history_next,
++ .stop = ext4_mb_seq_history_stop,
++ .show = ext4_mb_seq_history_show,
++};
++
++static int ext4_mb_seq_history_open(struct inode *inode, struct file *file)
++{
++ struct super_block *sb = PDE(inode)->data;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct ext4_mb_proc_session *s;
++ int rc;
++ int size;
++
++ s = kmalloc(sizeof(*s), GFP_KERNEL);
++ if (s == NULL)
++ return -ENOMEM;
++ s->sb = sb;
++ size = sizeof(struct ext4_mb_history) * sbi->s_mb_history_max;
++ s->history = kmalloc(size, GFP_KERNEL);
++ if (s->history == NULL) {
++ kfree(s);
++ return -ENOMEM;
++ }
++
++ spin_lock(&sbi->s_mb_history_lock);
++ memcpy(s->history, sbi->s_mb_history, size);
++ s->max = sbi->s_mb_history_max;
++ s->start = sbi->s_mb_history_cur % s->max;
++ spin_unlock(&sbi->s_mb_history_lock);
++
++ rc = seq_open(file, &ext4_mb_seq_history_ops);
++ if (rc == 0) {
++ struct seq_file *m = (struct seq_file *)file->private_data;
++ m->private = s;
++ } else {
++ kfree(s->history);
++ kfree(s);
++ }
++ return rc;
++
++}
++
++static int ext4_mb_seq_history_release(struct inode *inode, struct file *file)
++{
++ struct seq_file *seq = (struct seq_file *)file->private_data;
++ struct ext4_mb_proc_session *s = seq->private;
++ kfree(s->history);
++ kfree(s);
++ return seq_release(inode, file);
++}
++
++static ssize_t ext4_mb_seq_history_write(struct file *file,
++ const char __user *buffer,
++ size_t count, loff_t *ppos)
++{
++ struct seq_file *seq = (struct seq_file *)file->private_data;
++ struct ext4_mb_proc_session *s = seq->private;
++ struct super_block *sb = s->sb;
++ char str[32];
++ int value;
++
++ if (count >= sizeof(str)) {
++ printk(KERN_ERR "EXT4-fs: %s string too long, max %u bytes\n",
++ "mb_history", (int)sizeof(str));
++ return -EOVERFLOW;
++ }
++
++ if (copy_from_user(str, buffer, count))
++ return -EFAULT;
++
++ value = simple_strtol(str, NULL, 0);
++ if (value < 0)
++ return -ERANGE;
++ EXT4_SB(sb)->s_mb_history_filter = value;
++
++ return count;
++}
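ext4_mb_seq_history_write() above bounds the write against a 32-byte stack buffer, copies it in, and parses a non-negative integer for the history filter. A user-space sketch of that input handling (errno values and sizes mirror the hunk; unlike the kernel code, the sketch NUL-terminates explicitly before parsing):

```c
#include <stdlib.h>
#include <string.h>

/* Parse a proc-style write: reject input that won't fit the local
 * buffer, terminate it, and accept only non-negative values.
 * Returns 0 on success, or a negative errno-style code. */
int parse_filter(const char *buf, size_t count, long *out)
{
	char str[32];
	long v;

	if (count >= sizeof(str))
		return -75;		/* -EOVERFLOW */
	memcpy(str, buf, count);
	str[count] = '\0';
	v = strtol(str, NULL, 0);	/* base 0: accepts 0x.. too */
	if (v < 0)
		return -34;		/* -ERANGE */
	*out = v;
	return 0;
}
```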
++
++static struct file_operations ext4_mb_seq_history_fops = {
++ .owner = THIS_MODULE,
++ .open = ext4_mb_seq_history_open,
++ .read = seq_read,
++ .write = ext4_mb_seq_history_write,
++ .llseek = seq_lseek,
++ .release = ext4_mb_seq_history_release,
++};
++
++static void *ext4_mb_seq_groups_start(struct seq_file *seq, loff_t *pos)
++{
++ struct super_block *sb = seq->private;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ ext4_group_t group;
++
++ if (*pos < 0 || *pos >= sbi->s_groups_count)
++ return NULL;
++
++ group = *pos + 1;
++ return (void *) group;
++}
++
++static void *ext4_mb_seq_groups_next(struct seq_file *seq, void *v, loff_t *pos)
++{
++ struct super_block *sb = seq->private;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ ext4_group_t group;
++
++ ++*pos;
++ if (*pos < 0 || *pos >= sbi->s_groups_count)
++ return NULL;
++ group = *pos + 1;
++ return (void *) group;
++}
++
++static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
++{
++ struct super_block *sb = seq->private;
++ long group = (long) v;
++ int i;
++ int err;
++ struct ext4_buddy e4b;
++ struct sg {
++ struct ext4_group_info info;
++ unsigned short counters[16];
++ } sg;
++
++ group--;
++ if (group == 0)
++ seq_printf(seq, "#%-5s: %-5s %-5s %-5s "
++ "[ %-5s %-5s %-5s %-5s %-5s %-5s %-5s "
++ "%-5s %-5s %-5s %-5s %-5s %-5s %-5s ]\n",
++ "group", "free", "frags", "first",
++ "2^0", "2^1", "2^2", "2^3", "2^4", "2^5", "2^6",
++ "2^7", "2^8", "2^9", "2^10", "2^11", "2^12", "2^13");
++
++ i = (sb->s_blocksize_bits + 2) * sizeof(sg.info.bb_counters[0]) +
++ sizeof(struct ext4_group_info);
++ err = ext4_mb_load_buddy(sb, group, &e4b);
++ if (err) {
++ seq_printf(seq, "#%-5lu: I/O error\n", group);
++ return 0;
++ }
++ ext4_lock_group(sb, group);
++ memcpy(&sg, ext4_get_group_info(sb, group), i);
++ ext4_unlock_group(sb, group);
++ ext4_mb_release_desc(&e4b);
++
++ seq_printf(seq, "#%-5lu: %-5u %-5u %-5u [", group, sg.info.bb_free,
++ sg.info.bb_fragments, sg.info.bb_first_free);
++ for (i = 0; i <= 13; i++)
++ seq_printf(seq, " %-5u", i <= sb->s_blocksize_bits + 1 ?
++ sg.info.bb_counters[i] : 0);
++ seq_printf(seq, " ]\n");
++
++ return 0;
++}
++
++static void ext4_mb_seq_groups_stop(struct seq_file *seq, void *v)
++{
++}
++
++static struct seq_operations ext4_mb_seq_groups_ops = {
++ .start = ext4_mb_seq_groups_start,
++ .next = ext4_mb_seq_groups_next,
++ .stop = ext4_mb_seq_groups_stop,
++ .show = ext4_mb_seq_groups_show,
++};
++
++static int ext4_mb_seq_groups_open(struct inode *inode, struct file *file)
++{
++ struct super_block *sb = PDE(inode)->data;
++ int rc;
++
++ rc = seq_open(file, &ext4_mb_seq_groups_ops);
++ if (rc == 0) {
++ struct seq_file *m = (struct seq_file *)file->private_data;
++ m->private = sb;
++ }
++ return rc;
++
++}
++
++static struct file_operations ext4_mb_seq_groups_fops = {
++ .owner = THIS_MODULE,
++ .open = ext4_mb_seq_groups_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = seq_release,
++};
++
++static void ext4_mb_history_release(struct super_block *sb)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++ remove_proc_entry("mb_groups", sbi->s_mb_proc);
++ remove_proc_entry("mb_history", sbi->s_mb_proc);
++
++ kfree(sbi->s_mb_history);
++}
++
++static void ext4_mb_history_init(struct super_block *sb)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ int i;
++
++ if (sbi->s_mb_proc != NULL) {
++ struct proc_dir_entry *p;
++ p = create_proc_entry("mb_history", S_IRUGO, sbi->s_mb_proc);
++ if (p) {
++ p->proc_fops = &ext4_mb_seq_history_fops;
++ p->data = sb;
++ }
++ p = create_proc_entry("mb_groups", S_IRUGO, sbi->s_mb_proc);
++ if (p) {
++ p->proc_fops = &ext4_mb_seq_groups_fops;
++ p->data = sb;
++ }
++ }
++
++ sbi->s_mb_history_max = 1000;
++ sbi->s_mb_history_cur = 0;
++ spin_lock_init(&sbi->s_mb_history_lock);
++ i = sbi->s_mb_history_max * sizeof(struct ext4_mb_history);
++ sbi->s_mb_history = kmalloc(i, GFP_KERNEL);
++ if (likely(sbi->s_mb_history != NULL))
++ memset(sbi->s_mb_history, 0, i);
++ /* if we can't allocate history, then we simply won't use it */
++}
++
++static void ext4_mb_store_history(struct ext4_allocation_context *ac)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++ struct ext4_mb_history h;
++
++ if (unlikely(sbi->s_mb_history == NULL))
++ return;
++
++ if (!(ac->ac_op & sbi->s_mb_history_filter))
++ return;
++
++ h.op = ac->ac_op;
++ h.pid = current->pid;
++ h.ino = ac->ac_inode ? ac->ac_inode->i_ino : 0;
++ h.orig = ac->ac_o_ex;
++ h.result = ac->ac_b_ex;
++ h.flags = ac->ac_flags;
++ h.found = ac->ac_found;
++ h.groups = ac->ac_groups_scanned;
++ h.cr = ac->ac_criteria;
++ h.tail = ac->ac_tail;
++ h.buddy = ac->ac_buddy;
++ h.merged = 0;
++ if (ac->ac_op == EXT4_MB_HISTORY_ALLOC) {
++ if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
++ ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
++ h.merged = 1;
++ h.goal = ac->ac_g_ex;
++ h.result = ac->ac_f_ex;
++ }
++
++ spin_lock(&sbi->s_mb_history_lock);
++ memcpy(sbi->s_mb_history + sbi->s_mb_history_cur, &h, sizeof(h));
++ if (++sbi->s_mb_history_cur >= sbi->s_mb_history_max)
++ sbi->s_mb_history_cur = 0;
++ spin_unlock(&sbi->s_mb_history_lock);
++}
++
++#else
++#define ext4_mb_history_release(sb)
++#define ext4_mb_history_init(sb)
++#endif
++
++static int ext4_mb_init_backend(struct super_block *sb)
++{
++ ext4_group_t i;
++ int j, len, metalen;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ int num_meta_group_infos =
++ (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) >>
++ EXT4_DESC_PER_BLOCK_BITS(sb);
++ struct ext4_group_info **meta_group_info;
++
++ /* An 8TB filesystem with 64-bit pointers requires a 4096 byte
++ * kmalloc. A 128kb malloc should suffice for a 256TB filesystem.
++ * So a two level scheme suffices for now. */
++ sbi->s_group_info = kmalloc(sizeof(*sbi->s_group_info) *
++ num_meta_group_infos, GFP_KERNEL);
++ if (sbi->s_group_info == NULL) {
++ printk(KERN_ERR "EXT4-fs: can't allocate buddy meta group\n");
++ return -ENOMEM;
++ }
++ sbi->s_buddy_cache = new_inode(sb);
++ if (sbi->s_buddy_cache == NULL) {
++ printk(KERN_ERR "EXT4-fs: can't get new inode\n");
++ goto err_freesgi;
++ }
++ EXT4_I(sbi->s_buddy_cache)->i_disksize = 0;
++
++ metalen = sizeof(*meta_group_info) << EXT4_DESC_PER_BLOCK_BITS(sb);
++ for (i = 0; i < num_meta_group_infos; i++) {
++ if ((i + 1) == num_meta_group_infos)
++ metalen = sizeof(*meta_group_info) *
++ (sbi->s_groups_count -
++ (i << EXT4_DESC_PER_BLOCK_BITS(sb)));
++ meta_group_info = kmalloc(metalen, GFP_KERNEL);
++ if (meta_group_info == NULL) {
++ printk(KERN_ERR "EXT4-fs: can't allocate mem for a "
++ "buddy group\n");
++ goto err_freemeta;
++ }
++ sbi->s_group_info[i] = meta_group_info;
++ }
+
+ /*
-+ * If not using Mt Fuji extended media tray reports,
-+ * just return TRAY_OPEN since ATAPI doesn't provide
-+ * any other way to detect this...
++ * calculate the needed size. if the bb_counters size changes,
++ * don't forget to update ext4_mb_generate_buddy()
+ */
-+ if (scsi_sense_valid(&sshdr) &&
-+ /* 0x3a is medium not present */
-+ sshdr.asc == 0x3a)
-+ return CDS_NO_DISC;
-+ else
-+ return CDS_TRAY_OPEN;
++ len = sizeof(struct ext4_group_info);
++ len += sizeof(unsigned short) * (sb->s_blocksize_bits + 2);
++ for (i = 0; i < sbi->s_groups_count; i++) {
++ struct ext4_group_desc *desc;
+
-+ return CDS_DRIVE_NOT_READY;
- }
-
- int sr_disk_status(struct cdrom_device_info *cdi)
-diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
-index 328c47c..7195270 100644
---- a/drivers/scsi/st.c
-+++ b/drivers/scsi/st.c
-@@ -9,7 +9,7 @@
- Steve Hirsch, Andreas Koppenh"ofer, Michael Leodolter, Eyal Lebedinsky,
- Michael Schaefer, J"org Weule, and Eric Youngdale.
-
-- Copyright 1992 - 2007 Kai Makisara
-+ Copyright 1992 - 2008 Kai Makisara
- email Kai.Makisara at kolumbus.fi
-
- Some small formal changes - aeb, 950809
-@@ -17,7 +17,7 @@
- Last modified: 18-JAN-1998 Richard Gooch <rgooch at atnf.csiro.au> Devfs support
- */
-
--static const char *verstr = "20070203";
-+static const char *verstr = "20080117";
-
- #include <linux/module.h>
-
-@@ -3214,8 +3214,7 @@ static int partition_tape(struct scsi_tape *STp, int size)
-
-
- /* The ioctl command */
--static int st_ioctl(struct inode *inode, struct file *file,
-- unsigned int cmd_in, unsigned long arg)
-+static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
- {
- int i, cmd_nr, cmd_type, bt;
- int retval = 0;
-@@ -3870,7 +3869,7 @@ static const struct file_operations st_fops =
- .owner = THIS_MODULE,
- .read = st_read,
- .write = st_write,
-- .ioctl = st_ioctl,
-+ .unlocked_ioctl = st_ioctl,
- #ifdef CONFIG_COMPAT
- .compat_ioctl = st_compat_ioctl,
- #endif
-diff --git a/drivers/scsi/sun3_NCR5380.c b/drivers/scsi/sun3_NCR5380.c
-index 2dcde37..bcaba86 100644
---- a/drivers/scsi/sun3_NCR5380.c
-+++ b/drivers/scsi/sun3_NCR5380.c
-@@ -515,9 +515,9 @@ static __inline__ void initialize_SCp(struct scsi_cmnd *cmd)
- * various queues are valid.
- */
-
-- if (cmd->use_sg) {
-- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.ptr = (char *) SGADDR(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
-
-@@ -528,8 +528,8 @@ static __inline__ void initialize_SCp(struct scsi_cmnd *cmd)
- } else {
- cmd->SCp.buffer = NULL;
- cmd->SCp.buffers_residual = 0;
-- cmd->SCp.ptr = (char *) cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
- }
-
- }
-@@ -935,7 +935,7 @@ static int NCR5380_queue_command(struct scsi_cmnd *cmd,
- }
- # endif
- # ifdef NCR5380_STAT_LIMIT
-- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
-+ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
- # endif
- switch (cmd->cmnd[0])
- {
-@@ -943,14 +943,14 @@ static int NCR5380_queue_command(struct scsi_cmnd *cmd,
- case WRITE_6:
- case WRITE_10:
- hostdata->time_write[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingw++;
- break;
- case READ:
- case READ_6:
- case READ_10:
- hostdata->time_read[cmd->device->id] -= (jiffies - hostdata->timebase);
-- hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;
-+ hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);
- hostdata->pendingr++;
- break;
- }
-@@ -1345,7 +1345,7 @@ static void collect_stats(struct NCR5380_hostdata *hostdata,
- struct scsi_cmnd *cmd)
- {
- # ifdef NCR5380_STAT_LIMIT
-- if (cmd->request_bufflen > NCR5380_STAT_LIMIT)
-+ if (scsi_bufflen(cmd) > NCR5380_STAT_LIMIT)
- # endif
- switch (cmd->cmnd[0])
- {
-@@ -1353,14 +1353,14 @@ static void collect_stats(struct NCR5380_hostdata *hostdata,
- case WRITE_6:
- case WRITE_10:
- hostdata->time_write[cmd->device->id] += (jiffies - hostdata->timebase);
-- /*hostdata->bytes_write[cmd->device->id] += cmd->request_bufflen;*/
-+ /*hostdata->bytes_write[cmd->device->id] += scsi_bufflen(cmd);*/
- hostdata->pendingw--;
- break;
- case READ:
- case READ_6:
- case READ_10:
- hostdata->time_read[cmd->device->id] += (jiffies - hostdata->timebase);
-- /*hostdata->bytes_read[cmd->device->id] += cmd->request_bufflen;*/
-+ /*hostdata->bytes_read[cmd->device->id] += scsi_bufflen(cmd);*/
- hostdata->pendingr--;
- break;
- }
-@@ -1863,7 +1863,7 @@ static int do_abort (struct Scsi_Host *host)
- * the target sees, so we just handshake.
- */
-
-- while (!(tmp = NCR5380_read(STATUS_REG)) & SR_REQ);
-+ while (!((tmp = NCR5380_read(STATUS_REG)) & SR_REQ));
-
- NCR5380_write(TARGET_COMMAND_REG, PHASE_SR_TO_TCR(tmp));
-
-diff --git a/drivers/scsi/sym53c416.c b/drivers/scsi/sym53c416.c
-index 90cee94..1f6fd16 100644
---- a/drivers/scsi/sym53c416.c
-+++ b/drivers/scsi/sym53c416.c
-@@ -328,27 +328,13 @@ static __inline__ unsigned int sym53c416_write(int base, unsigned char *buffer,
- static irqreturn_t sym53c416_intr_handle(int irq, void *dev_id)
- {
- struct Scsi_Host *dev = dev_id;
-- int base = 0;
-+ int base = dev->io_port;
- int i;
- unsigned long flags = 0;
- unsigned char status_reg, pio_int_reg, int_reg;
- struct scatterlist *sg;
- unsigned int tot_trans = 0;
-
-- /* We search the base address of the host adapter which caused the interrupt */
-- /* FIXME: should pass dev_id sensibly as hosts[i] */
-- for(i = 0; i < host_index && !base; i++)
-- if(irq == hosts[i].irq)
-- base = hosts[i].base;
-- /* If no adapter found, we cannot handle the interrupt. Leave a message */
-- /* and continue. This should never happen... */
-- if(!base)
-- {
-- printk(KERN_ERR "sym53c416: No host adapter defined for interrupt %d\n", irq);
-- return IRQ_NONE;
-- }
-- /* Now we have the base address and we can start handling the interrupt */
--
- spin_lock_irqsave(dev->host_lock,flags);
- status_reg = inb(base + STATUS_REG);
- pio_int_reg = inb(base + PIO_INT_REG);
-diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c
-index 9e0908d..21e926d 100644
---- a/drivers/scsi/sym53c8xx_2/sym_glue.c
-+++ b/drivers/scsi/sym53c8xx_2/sym_glue.c
-@@ -207,10 +207,9 @@ void sym_set_cam_result_error(struct sym_hcb *np, struct sym_ccb *cp, int resid)
- /*
- * Bounce back the sense data to user.
- */
-- memset(&cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
-+ memset(&cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
- memcpy(cmd->sense_buffer, cp->sns_bbuf,
-- min(sizeof(cmd->sense_buffer),
-- (size_t)SYM_SNS_BBUF_LEN));
-+ min(SCSI_SENSE_BUFFERSIZE, SYM_SNS_BBUF_LEN));
- #if 0
- /*
- * If the device reports a UNIT ATTENTION condition
-@@ -609,22 +608,24 @@ static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
- */
- #define WAIT_FOR_PCI_RECOVERY 35
- if (pci_channel_offline(pdev)) {
-- struct completion *io_reset;
- int finished_reset = 0;
- init_completion(&eh_done);
- spin_lock_irq(shost->host_lock);
- /* Make sure we didn't race */
- if (pci_channel_offline(pdev)) {
-- if (!sym_data->io_reset)
-- sym_data->io_reset = &eh_done;
-- io_reset = sym_data->io_reset;
-+ BUG_ON(sym_data->io_reset);
-+ sym_data->io_reset = &eh_done;
- } else {
- finished_reset = 1;
- }
- spin_unlock_irq(shost->host_lock);
- if (!finished_reset)
-- finished_reset = wait_for_completion_timeout(io_reset,
-+ finished_reset = wait_for_completion_timeout
-+ (sym_data->io_reset,
- WAIT_FOR_PCI_RECOVERY*HZ);
-+ spin_lock_irq(shost->host_lock);
-+ sym_data->io_reset = NULL;
-+ spin_unlock_irq(shost->host_lock);
- if (!finished_reset)
- return SCSI_FAILED;
- }
-@@ -1744,7 +1745,7 @@ static int __devinit sym2_probe(struct pci_dev *pdev,
- return -ENODEV;
- }
-
--static void __devexit sym2_remove(struct pci_dev *pdev)
-+static void sym2_remove(struct pci_dev *pdev)
- {
- struct Scsi_Host *shost = pci_get_drvdata(pdev);
-
-@@ -1879,7 +1880,6 @@ static void sym2_io_resume(struct pci_dev *pdev)
- spin_lock_irq(shost->host_lock);
- if (sym_data->io_reset)
- complete_all(sym_data->io_reset);
-- sym_data->io_reset = NULL;
- spin_unlock_irq(shost->host_lock);
- }
-
-@@ -2056,7 +2056,7 @@ static struct pci_driver sym2_driver = {
- .name = NAME53C8XX,
- .id_table = sym2_id_table,
- .probe = sym2_probe,
-- .remove = __devexit_p(sym2_remove),
-+ .remove = sym2_remove,
- .err_handler = &sym2_err_handler,
- };
-
-diff --git a/drivers/scsi/tmscsim.c b/drivers/scsi/tmscsim.c
-index 4419304..5b04ddf 100644
---- a/drivers/scsi/tmscsim.c
-+++ b/drivers/scsi/tmscsim.c
-@@ -444,7 +444,7 @@ static int dc390_pci_map (struct dc390_srb* pSRB)
-
- /* Map sense buffer */
- if (pSRB->SRBFlag & AUTO_REQSENSE) {
-- pSRB->pSegmentList = dc390_sg_build_single(&pSRB->Segmentx, pcmd->sense_buffer, sizeof(pcmd->sense_buffer));
-+ pSRB->pSegmentList = dc390_sg_build_single(&pSRB->Segmentx, pcmd->sense_buffer, SCSI_SENSE_BUFFERSIZE);
- pSRB->SGcount = pci_map_sg(pdev, pSRB->pSegmentList, 1,
- DMA_FROM_DEVICE);
- cmdp->saved_dma_handle = sg_dma_address(pSRB->pSegmentList);
-@@ -599,7 +599,7 @@ dc390_StartSCSI( struct dc390_acb* pACB, struct dc390_dcb* pDCB, struct dc390_sr
- DC390_write8 (ScsiFifo, pDCB->TargetLUN << 5);
- DC390_write8 (ScsiFifo, 0);
- DC390_write8 (ScsiFifo, 0);
-- DC390_write8 (ScsiFifo, sizeof(scmd->sense_buffer));
-+ DC390_write8 (ScsiFifo, SCSI_SENSE_BUFFERSIZE);
- DC390_write8 (ScsiFifo, 0);
- DEBUG1(printk (KERN_DEBUG "DC390: AutoReqSense !\n"));
- }
-@@ -1389,7 +1389,7 @@ dc390_CommandPhase( struct dc390_acb* pACB, struct dc390_srb* pSRB, u8 *psstatus
- DC390_write8 (ScsiFifo, pDCB->TargetLUN << 5);
- DC390_write8 (ScsiFifo, 0);
- DC390_write8 (ScsiFifo, 0);
-- DC390_write8 (ScsiFifo, sizeof(pSRB->pcmd->sense_buffer));
-+ DC390_write8 (ScsiFifo, SCSI_SENSE_BUFFERSIZE);
- DC390_write8 (ScsiFifo, 0);
- DEBUG0(printk(KERN_DEBUG "DC390: AutoReqSense (CmndPhase)!\n"));
- }
-diff --git a/drivers/scsi/u14-34f.c b/drivers/scsi/u14-34f.c
-index 7edd6ce..4bc5407 100644
---- a/drivers/scsi/u14-34f.c
-+++ b/drivers/scsi/u14-34f.c
-@@ -1121,9 +1121,9 @@ static void map_dma(unsigned int i, unsigned int j) {
-
- if (SCpnt->sense_buffer)
- cpp->sense_addr = H2DEV(pci_map_single(HD(j)->pdev, SCpnt->sense_buffer,
-- sizeof SCpnt->sense_buffer, PCI_DMA_FROMDEVICE));
-+ SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE));
-
-- cpp->sense_len = sizeof SCpnt->sense_buffer;
-+ cpp->sense_len = SCSI_SENSE_BUFFERSIZE;
-
- if (scsi_bufflen(SCpnt)) {
- count = scsi_dma_map(SCpnt);
-diff --git a/drivers/scsi/ultrastor.c b/drivers/scsi/ultrastor.c
-index 6d1f0ed..75eca6b 100644
---- a/drivers/scsi/ultrastor.c
-+++ b/drivers/scsi/ultrastor.c
-@@ -298,9 +298,16 @@ static inline int find_and_clear_bit_16(unsigned long *field)
- {
- int rv;
-
-- if (*field == 0) panic("No free mscp");
-- asm("xorl %0,%0\n0:\tbsfw %1,%w0\n\tbtr %0,%1\n\tjnc 0b"
-- : "=&r" (rv), "=m" (*field) : "1" (*field));
-+ if (*field == 0)
-+ panic("No free mscp");
++ meta_group_info =
++ sbi->s_group_info[i >> EXT4_DESC_PER_BLOCK_BITS(sb)];
++ j = i & (EXT4_DESC_PER_BLOCK(sb) - 1);
+
-+ asm volatile (
-+ "xorl %0,%0\n\t"
-+ "0: bsfw %1,%w0\n\t"
-+ "btr %0,%1\n\t"
-+ "jnc 0b"
-+ : "=&r" (rv), "=m" (*field) :);
++ meta_group_info[j] = kzalloc(len, GFP_KERNEL);
++ if (meta_group_info[j] == NULL) {
++ printk(KERN_ERR "EXT4-fs: can't allocate buddy mem\n");
++ i--;
++ goto err_freebuddy;
++ }
++ desc = ext4_get_group_desc(sb, i, NULL);
++ if (desc == NULL) {
++ printk(KERN_ERR
++ "EXT4-fs: can't read descriptor %lu\n", i);
++ goto err_freebuddy;
++ }
++ set_bit(EXT4_GROUP_INFO_NEED_INIT_BIT,
++ &(meta_group_info[j]->bb_state));
+
- return rv;
- }
-
-@@ -741,7 +748,7 @@ static int ultrastor_queuecommand(struct scsi_cmnd *SCpnt,
- }
- my_mscp->command_link = 0; /*???*/
- my_mscp->scsi_command_link_id = 0; /*???*/
-- my_mscp->length_of_sense_byte = sizeof SCpnt->sense_buffer;
-+ my_mscp->length_of_sense_byte = SCSI_SENSE_BUFFERSIZE;
- my_mscp->length_of_scsi_cdbs = SCpnt->cmd_len;
- memcpy(my_mscp->scsi_cdbs, SCpnt->cmnd, my_mscp->length_of_scsi_cdbs);
- my_mscp->adapter_status = 0;
-diff --git a/drivers/scsi/wd33c93.c b/drivers/scsi/wd33c93.c
-index fdbb92d..f286c37 100644
---- a/drivers/scsi/wd33c93.c
-+++ b/drivers/scsi/wd33c93.c
-@@ -407,16 +407,16 @@ wd33c93_queuecommand(struct scsi_cmnd *cmd,
- * - SCp.phase records this command's SRCID_ER bit setting
- */
-
-- if (cmd->use_sg) {
-- cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer;
-- cmd->SCp.buffers_residual = cmd->use_sg - 1;
-+ if (scsi_bufflen(cmd)) {
-+ cmd->SCp.buffer = scsi_sglist(cmd);
-+ cmd->SCp.buffers_residual = scsi_sg_count(cmd) - 1;
- cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
- cmd->SCp.this_residual = cmd->SCp.buffer->length;
- } else {
- cmd->SCp.buffer = NULL;
- cmd->SCp.buffers_residual = 0;
-- cmd->SCp.ptr = (char *) cmd->request_buffer;
-- cmd->SCp.this_residual = cmd->request_bufflen;
-+ cmd->SCp.ptr = NULL;
-+ cmd->SCp.this_residual = 0;
- }
-
- /* WD docs state that at the conclusion of a "LEVEL2" command, the
-diff --git a/drivers/scsi/wd7000.c b/drivers/scsi/wd7000.c
-index 03cd44f..b4304ae 100644
---- a/drivers/scsi/wd7000.c
-+++ b/drivers/scsi/wd7000.c
-@@ -1108,13 +1108,10 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
- scb->host = host;
-
- nseg = scsi_sg_count(SCpnt);
-- if (nseg) {
-+ if (nseg > 1) {
- struct scatterlist *sg;
- unsigned i;
-
-- if (SCpnt->device->host->sg_tablesize == SG_NONE) {
-- panic("wd7000_queuecommand: scatter/gather not supported.\n");
-- }
- dprintk("Using scatter/gather with %d elements.\n", nseg);
-
- sgb = scb->sgb;
-@@ -1128,7 +1125,10 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
- }
- } else {
- scb->op = 0;
-- any2scsi(scb->dataptr, isa_virt_to_bus(scsi_sglist(SCpnt)));
-+ if (nseg) {
-+ struct scatterlist *sg = scsi_sglist(SCpnt);
-+ any2scsi(scb->dataptr, isa_page_to_bus(sg_page(sg)) + sg->offset);
++ /*
++ * initialize bb_free to be able to skip
++ * empty groups without initialization
++ */
++ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ meta_group_info[j]->bb_free =
++ ext4_free_blocks_after_init(sb, i, desc);
++ } else {
++ meta_group_info[j]->bb_free =
++ le16_to_cpu(desc->bg_free_blocks_count);
+ }
- any2scsi(scb->maxlen, scsi_bufflen(SCpnt));
- }
-
-@@ -1524,7 +1524,7 @@ static __init int wd7000_detect(struct scsi_host_template *tpnt)
- * For boards before rev 6.0, scatter/gather isn't supported.
- */
- if (host->rev1 < 6)
-- sh->sg_tablesize = SG_NONE;
-+ sh->sg_tablesize = 1;
-
- present++; /* count it */
-
-diff --git a/drivers/serial/21285.c b/drivers/serial/21285.c
-index facb678..6a48dfa 100644
---- a/drivers/serial/21285.c
-+++ b/drivers/serial/21285.c
-@@ -277,6 +277,8 @@ serial21285_set_termios(struct uart_port *port, struct ktermios *termios,
- if (termios->c_iflag & INPCK)
- port->read_status_mask |= RXSTAT_FRAME | RXSTAT_PARITY;
-
-+ tty_encode_baud_rate(tty, baud, baud);
+
- /*
- * Which character status flags should we ignore?
- */
-diff --git a/drivers/serial/bfin_5xx.c b/drivers/serial/bfin_5xx.c
-index 6f475b6..ac2a3ef 100644
---- a/drivers/serial/bfin_5xx.c
-+++ b/drivers/serial/bfin_5xx.c
-@@ -442,7 +442,8 @@ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
- set_bfin_dma_config(DIR_READ, DMA_FLOW_STOP,
- INTR_ON_BUF,
- DIMENSION_LINEAR,
-- DATA_SIZE_8));
-+ DATA_SIZE_8,
-+ DMA_SYNC_RESTART));
- set_dma_start_addr(uart->tx_dma_channel, (unsigned long)(xmit->buf+xmit->tail));
- set_dma_x_count(uart->tx_dma_channel, uart->tx_count);
- set_dma_x_modify(uart->tx_dma_channel, 1);
-@@ -689,7 +690,8 @@ static int bfin_serial_startup(struct uart_port *port)
- set_dma_config(uart->rx_dma_channel,
- set_bfin_dma_config(DIR_WRITE, DMA_FLOW_AUTO,
- INTR_ON_ROW, DIMENSION_2D,
-- DATA_SIZE_8));
-+ DATA_SIZE_8,
-+ DMA_SYNC_RESTART));
- set_dma_x_count(uart->rx_dma_channel, DMA_RX_XCOUNT);
- set_dma_x_modify(uart->rx_dma_channel, 1);
- set_dma_y_count(uart->rx_dma_channel, DMA_RX_YCOUNT);
-diff --git a/drivers/serial/icom.c b/drivers/serial/icom.c
-index 9d3105b..9c2df5c 100644
---- a/drivers/serial/icom.c
-+++ b/drivers/serial/icom.c
-@@ -48,7 +48,7 @@
- #include <linux/vmalloc.h>
- #include <linux/smp.h>
- #include <linux/spinlock.h>
--#include <linux/kobject.h>
-+#include <linux/kref.h>
- #include <linux/firmware.h>
- #include <linux/bitops.h>
-
-@@ -65,7 +65,7 @@
- #define ICOM_VERSION_STR "1.3.1"
- #define NR_PORTS 128
- #define ICOM_PORT ((struct icom_port *)port)
--#define to_icom_adapter(d) container_of(d, struct icom_adapter, kobj)
-+#define to_icom_adapter(d) container_of(d, struct icom_adapter, kref)
-
- static const struct pci_device_id icom_pci_table[] = {
- {
-@@ -141,6 +141,7 @@ static inline void trace(struct icom_port *, char *, unsigned long) {};
- #else
- static inline void trace(struct icom_port *icom_port, char *trace_pt, unsigned long trace_data) {};
- #endif
-+static void icom_kref_release(struct kref *kref);
-
- static void free_port_memory(struct icom_port *icom_port)
- {
-@@ -1063,11 +1064,11 @@ static int icom_open(struct uart_port *port)
- {
- int retval;
-
-- kobject_get(&ICOM_PORT->adapter->kobj);
-+ kref_get(&ICOM_PORT->adapter->kref);
- retval = startup(ICOM_PORT);
-
- if (retval) {
-- kobject_put(&ICOM_PORT->adapter->kobj);
-+ kref_put(&ICOM_PORT->adapter->kref, icom_kref_release);
- trace(ICOM_PORT, "STARTUP_ERROR", 0);
- return retval;
- }
-@@ -1088,7 +1089,7 @@ static void icom_close(struct uart_port *port)
-
- shutdown(ICOM_PORT);
-
-- kobject_put(&ICOM_PORT->adapter->kobj);
-+ kref_put(&ICOM_PORT->adapter->kref, icom_kref_release);
- }
-
- static void icom_set_termios(struct uart_port *port,
-@@ -1485,18 +1486,14 @@ static void icom_remove_adapter(struct icom_adapter *icom_adapter)
- pci_release_regions(icom_adapter->pci_dev);
- }
-
--static void icom_kobj_release(struct kobject *kobj)
-+static void icom_kref_release(struct kref *kref)
- {
- struct icom_adapter *icom_adapter;
-
-- icom_adapter = to_icom_adapter(kobj);
-+ icom_adapter = to_icom_adapter(kref);
- icom_remove_adapter(icom_adapter);
- }
-
--static struct kobj_type icom_kobj_type = {
-- .release = icom_kobj_release,
--};
--
- static int __devinit icom_probe(struct pci_dev *dev,
- const struct pci_device_id *ent)
- {
-@@ -1592,8 +1589,7 @@ static int __devinit icom_probe(struct pci_dev *dev,
- }
- }
-
-- kobject_init(&icom_adapter->kobj);
-- icom_adapter->kobj.ktype = &icom_kobj_type;
-+ kref_init(&icom_adapter->kref);
- return 0;
-
- probe_exit2:
-@@ -1619,7 +1615,7 @@ static void __devexit icom_remove(struct pci_dev *dev)
- icom_adapter = list_entry(tmp, struct icom_adapter,
- icom_adapter_entry);
- if (icom_adapter->pci_dev == dev) {
-- kobject_put(&icom_adapter->kobj);
-+ kref_put(&icom_adapter->kref, icom_kref_release);
- return;
- }
- }
-diff --git a/drivers/serial/icom.h b/drivers/serial/icom.h
-index e8578d8..0274554 100644
---- a/drivers/serial/icom.h
-+++ b/drivers/serial/icom.h
-@@ -270,7 +270,7 @@ struct icom_adapter {
- #define V2_ONE_PORT_RVX_ONE_PORT_IMBED_MDM 0x0251
- int numb_ports;
- struct list_head icom_adapter_entry;
-- struct kobject kobj;
-+ struct kref kref;
- };
-
- /* prototype */
-diff --git a/drivers/serial/sh-sci.c b/drivers/serial/sh-sci.c
-index 73440e2..ddf6391 100644
---- a/drivers/serial/sh-sci.c
-+++ b/drivers/serial/sh-sci.c
-@@ -302,7 +302,7 @@ static void sci_init_pins_scif(struct uart_port* port, unsigned int cflag)
- }
- sci_out(port, SCFCR, fcr_val);
- }
--#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || defined(CONFIG_CPU_SUBTYPE_SH7721)
- static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
- {
- unsigned int fcr_val = 0;
-@@ -395,7 +395,8 @@ static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
- } else {
- #ifdef CONFIG_CPU_SUBTYPE_SH7343
- /* Nothing */
--#elif defined(CONFIG_CPU_SUBTYPE_SH7780) || \
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7763) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
- defined(CONFIG_CPU_SUBTYPE_SH7785) || \
- defined(CONFIG_CPU_SUBTYPE_SHX3)
- ctrl_outw(0x0080, SCSPTR0); /* Set RTS = 1 */
-@@ -408,6 +409,7 @@ static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
- #endif
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7760) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7763) || \
- defined(CONFIG_CPU_SUBTYPE_SH7780) || \
- defined(CONFIG_CPU_SUBTYPE_SH7785)
- static inline int scif_txroom(struct uart_port *port)
-diff --git a/drivers/serial/sh-sci.h b/drivers/serial/sh-sci.h
-index d24621c..f5764eb 100644
---- a/drivers/serial/sh-sci.h
-+++ b/drivers/serial/sh-sci.h
-@@ -46,7 +46,8 @@
- */
- # define SCSCR_INIT(port) (port->mapbase == SCIF2) ? 0xF3 : 0xF0
- # define SCIF_ONLY
--#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- # define SCSCR_INIT(port) 0x0030 /* TIE=0,RIE=0,TE=1,RE=1 */
- # define SCIF_ONLY
- #define SCIF_ORER 0x0200 /* overrun error bit */
-@@ -119,6 +120,12 @@
- # define SCSCR_INIT(port) 0x30 /* TIE=0,RIE=0,TE=1,RE=1 */
- # define SCI_ONLY
- # define H8300_SCI_DR(ch) *(volatile char *)(P1DR + h8300_sci_pins[ch].port)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7763)
-+# define SCSPTR0 0xffe00024 /* 16 bit SCIF */
-+# define SCSPTR1 0xffe08024 /* 16 bit SCIF */
-+# define SCIF_ORER 0x0001 /* overrun error bit */
-+# define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
-+# define SCIF_ONLY
- #elif defined(CONFIG_CPU_SUBTYPE_SH7770)
- # define SCSPTR0 0xff923020 /* 16 bit SCIF */
- # define SCSPTR1 0xff924020 /* 16 bit SCIF */
-@@ -142,7 +149,9 @@
- # define SCIF_OPER 0x0001 /* Overrun error bit */
- # define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
- # define SCIF_ONLY
--#elif defined(CONFIG_CPU_SUBTYPE_SH7206)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7203) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7206) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7263)
- # define SCSPTR0 0xfffe8020 /* 16 bit SCIF */
- # define SCSPTR1 0xfffe8820 /* 16 bit SCIF */
- # define SCSPTR2 0xfffe9020 /* 16 bit SCIF */
-@@ -214,7 +223,8 @@
- #define SCIF_DR 0x0001 /* 7705 SCIF, 7707 SCIF, 7709 SCIF, 7750 SCIF */
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define SCIF_ORER 0x0200
- #define SCIF_ERRORS ( SCIF_PER | SCIF_FER | SCIF_ER | SCIF_BRK | SCIF_ORER)
- #define SCIF_RFDC_MASK 0x007f
-@@ -252,7 +262,8 @@
- # define SCxSR_PER(port) SCIF_PER
- # define SCxSR_BRK(port) SCIF_BRK
- #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- # define SCxSR_RDxF_CLEAR(port) (sci_in(port,SCxSR)&0xfffc)
- # define SCxSR_ERROR_CLEAR(port) (sci_in(port,SCxSR)&0xfd73)
- # define SCxSR_TDxE_CLEAR(port) (sci_in(port,SCxSR)&0xffdf)
-@@ -361,7 +372,8 @@
- #define SCIF_FNS(name, sh3_scif_offset, sh3_scif_size, sh4_scif_offset, sh4_scif_size) \
- CPU_SCIF_FNS(name, sh4_scif_offset, sh4_scif_size)
- #elif defined(CONFIG_CPU_SUBTYPE_SH7705) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define SCIF_FNS(name, scif_offset, scif_size) \
- CPU_SCIF_FNS(name, scif_offset, scif_size)
- #else
-@@ -388,7 +400,8 @@
- #endif
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
-
- SCIF_FNS(SCSMR, 0x00, 16)
- SCIF_FNS(SCBRR, 0x04, 8)
-@@ -412,6 +425,7 @@ SCIx_FNS(SCxSR, 0x08, 8, 0x10, 8, 0x08, 16, 0x10, 16, 0x04, 8)
- SCIx_FNS(SCxRDR, 0x0a, 8, 0x14, 8, 0x0A, 8, 0x14, 8, 0x05, 8)
- SCIF_FNS(SCFCR, 0x0c, 8, 0x18, 16)
- #if defined(CONFIG_CPU_SUBTYPE_SH7760) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7763) || \
- defined(CONFIG_CPU_SUBTYPE_SH7780) || \
- defined(CONFIG_CPU_SUBTYPE_SH7785)
- SCIF_FNS(SCFDR, 0x0e, 16, 0x1C, 16)
-@@ -510,7 +524,8 @@ static inline void set_sh771x_scif_pfc(struct uart_port *port)
- return;
- }
- }
--#elif defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- static inline int sci_rxd_in(struct uart_port *port)
- {
- if (port->mapbase == 0xa4430000)
-@@ -580,6 +595,15 @@ static inline int sci_rxd_in(struct uart_port *port)
- int ch = (port->mapbase - SMR0) >> 3;
- return (H8300_SCI_DR(ch) & h8300_sci_pins[ch].rx) ? 1 : 0;
- }
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7763)
-+static inline int sci_rxd_in(struct uart_port *port)
++ INIT_LIST_HEAD(&meta_group_info[j]->bb_prealloc_list);
++
++#ifdef DOUBLE_CHECK
++ {
++ struct buffer_head *bh;
++ meta_group_info[j]->bb_bitmap =
++ kmalloc(sb->s_blocksize, GFP_KERNEL);
++ BUG_ON(meta_group_info[j]->bb_bitmap == NULL);
++ bh = read_block_bitmap(sb, i);
++ BUG_ON(bh == NULL);
++ memcpy(meta_group_info[j]->bb_bitmap, bh->b_data,
++ sb->s_blocksize);
++ put_bh(bh);
++ }
++#endif
++
++ }
++
++ return 0;
++
++err_freebuddy:
++ while ((long) i >= 0) {
++ kfree(ext4_get_group_info(sb, i));
++ i--;
++ }
++ i = num_meta_group_infos;
++err_freemeta:
++ while (--i >= 0)
++ kfree(sbi->s_group_info[i]);
++ iput(sbi->s_buddy_cache);
++err_freesgi:
++ kfree(sbi->s_group_info);
++ return -ENOMEM;
++}
++
++int ext4_mb_init(struct super_block *sb, int needs_recovery)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ unsigned i;
++ unsigned offset;
++ unsigned max;
++
++ if (!test_opt(sb, MBALLOC))
++ return 0;
++
++ i = (sb->s_blocksize_bits + 2) * sizeof(unsigned short);
++
++ sbi->s_mb_offsets = kmalloc(i, GFP_KERNEL);
++ if (sbi->s_mb_offsets == NULL) {
++ clear_opt(sbi->s_mount_opt, MBALLOC);
++ return -ENOMEM;
++ }
++ sbi->s_mb_maxs = kmalloc(i, GFP_KERNEL);
++ if (sbi->s_mb_maxs == NULL) {
++ clear_opt(sbi->s_mount_opt, MBALLOC);
++		kfree(sbi->s_mb_offsets);
++ return -ENOMEM;
++ }
++
++ /* order 0 is regular bitmap */
++ sbi->s_mb_maxs[0] = sb->s_blocksize << 3;
++ sbi->s_mb_offsets[0] = 0;
++
++ i = 1;
++ offset = 0;
++ max = sb->s_blocksize << 2;
++ do {
++ sbi->s_mb_offsets[i] = offset;
++ sbi->s_mb_maxs[i] = max;
++ offset += 1 << (sb->s_blocksize_bits - i);
++ max = max >> 1;
++ i++;
++ } while (i <= sb->s_blocksize_bits + 1);
++
++ /* init file for buddy data */
++ i = ext4_mb_init_backend(sb);
++ if (i) {
++ clear_opt(sbi->s_mount_opt, MBALLOC);
++ kfree(sbi->s_mb_offsets);
++ kfree(sbi->s_mb_maxs);
++ return i;
++ }
++
++ spin_lock_init(&sbi->s_md_lock);
++ INIT_LIST_HEAD(&sbi->s_active_transaction);
++ INIT_LIST_HEAD(&sbi->s_closed_transaction);
++ INIT_LIST_HEAD(&sbi->s_committed_transaction);
++ spin_lock_init(&sbi->s_bal_lock);
++
++ sbi->s_mb_max_to_scan = MB_DEFAULT_MAX_TO_SCAN;
++ sbi->s_mb_min_to_scan = MB_DEFAULT_MIN_TO_SCAN;
++ sbi->s_mb_stats = MB_DEFAULT_STATS;
++ sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
++ sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
++ sbi->s_mb_history_filter = EXT4_MB_HISTORY_DEFAULT;
++ sbi->s_mb_group_prealloc = MB_DEFAULT_GROUP_PREALLOC;
++
++ i = sizeof(struct ext4_locality_group) * NR_CPUS;
++ sbi->s_locality_groups = kmalloc(i, GFP_KERNEL);
++ if (sbi->s_locality_groups == NULL) {
++ clear_opt(sbi->s_mount_opt, MBALLOC);
++ kfree(sbi->s_mb_offsets);
++ kfree(sbi->s_mb_maxs);
++ return -ENOMEM;
++ }
++ for (i = 0; i < NR_CPUS; i++) {
++ struct ext4_locality_group *lg;
++ lg = &sbi->s_locality_groups[i];
++ mutex_init(&lg->lg_mutex);
++ INIT_LIST_HEAD(&lg->lg_prealloc_list);
++ spin_lock_init(&lg->lg_prealloc_lock);
++ }
++
++ ext4_mb_init_per_dev_proc(sb);
++ ext4_mb_history_init(sb);
++
++ printk("EXT4-fs: mballoc enabled\n");
++ return 0;
++}
++
++/* must be called with the ext4 group lock held (ext4_lock_group) */
++static void ext4_mb_cleanup_pa(struct ext4_group_info *grp)
++{
++ struct ext4_prealloc_space *pa;
++ struct list_head *cur, *tmp;
++ int count = 0;
++
++ list_for_each_safe(cur, tmp, &grp->bb_prealloc_list) {
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_group_list);
++ list_del(&pa->pa_group_list);
++ count++;
++ kfree(pa);
++ }
++ if (count)
++ mb_debug("mballoc: %u PAs left\n", count);
++
++}
++
++int ext4_mb_release(struct super_block *sb)
++{
++ ext4_group_t i;
++ int num_meta_group_infos;
++ struct ext4_group_info *grinfo;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++ if (!test_opt(sb, MBALLOC))
++ return 0;
++
++ /* release freed, non-committed blocks */
++ spin_lock(&sbi->s_md_lock);
++ list_splice_init(&sbi->s_closed_transaction,
++ &sbi->s_committed_transaction);
++ list_splice_init(&sbi->s_active_transaction,
++ &sbi->s_committed_transaction);
++ spin_unlock(&sbi->s_md_lock);
++ ext4_mb_free_committed_blocks(sb);
++
++ if (sbi->s_group_info) {
++ for (i = 0; i < sbi->s_groups_count; i++) {
++ grinfo = ext4_get_group_info(sb, i);
++#ifdef DOUBLE_CHECK
++ kfree(grinfo->bb_bitmap);
++#endif
++ ext4_lock_group(sb, i);
++ ext4_mb_cleanup_pa(grinfo);
++ ext4_unlock_group(sb, i);
++ kfree(grinfo);
++ }
++ num_meta_group_infos = (sbi->s_groups_count +
++ EXT4_DESC_PER_BLOCK(sb) - 1) >>
++ EXT4_DESC_PER_BLOCK_BITS(sb);
++ for (i = 0; i < num_meta_group_infos; i++)
++ kfree(sbi->s_group_info[i]);
++ kfree(sbi->s_group_info);
++ }
++ kfree(sbi->s_mb_offsets);
++ kfree(sbi->s_mb_maxs);
++ if (sbi->s_buddy_cache)
++ iput(sbi->s_buddy_cache);
++ if (sbi->s_mb_stats) {
++ printk(KERN_INFO
++ "EXT4-fs: mballoc: %u blocks %u reqs (%u success)\n",
++ atomic_read(&sbi->s_bal_allocated),
++ atomic_read(&sbi->s_bal_reqs),
++ atomic_read(&sbi->s_bal_success));
++ printk(KERN_INFO
++ "EXT4-fs: mballoc: %u extents scanned, %u goal hits, "
++ "%u 2^N hits, %u breaks, %u lost\n",
++ atomic_read(&sbi->s_bal_ex_scanned),
++ atomic_read(&sbi->s_bal_goals),
++ atomic_read(&sbi->s_bal_2orders),
++ atomic_read(&sbi->s_bal_breaks),
++ atomic_read(&sbi->s_mb_lost_chunks));
++ printk(KERN_INFO
++ "EXT4-fs: mballoc: %lu generated and it took %Lu\n",
++ sbi->s_mb_buddies_generated++,
++ sbi->s_mb_generation_time);
++ printk(KERN_INFO
++ "EXT4-fs: mballoc: %u preallocated, %u discarded\n",
++ atomic_read(&sbi->s_mb_preallocated),
++ atomic_read(&sbi->s_mb_discarded));
++ }
++
++ kfree(sbi->s_locality_groups);
++
++ ext4_mb_history_release(sb);
++ ext4_mb_destroy_per_dev_proc(sb);
++
++ return 0;
++}
++
++static void ext4_mb_free_committed_blocks(struct super_block *sb)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ int err;
++ int i;
++ int count = 0;
++ int count2 = 0;
++ struct ext4_free_metadata *md;
++ struct ext4_buddy e4b;
++
++ if (list_empty(&sbi->s_committed_transaction))
++ return;
++
++	/* there are committed blocks still to be freed */
++ do {
++ /* get next array of blocks */
++ md = NULL;
++ spin_lock(&sbi->s_md_lock);
++ if (!list_empty(&sbi->s_committed_transaction)) {
++ md = list_entry(sbi->s_committed_transaction.next,
++ struct ext4_free_metadata, list);
++ list_del(&md->list);
++ }
++ spin_unlock(&sbi->s_md_lock);
++
++ if (md == NULL)
++ break;
++
++ mb_debug("gonna free %u blocks in group %lu (0x%p):",
++ md->num, md->group, md);
++
++ err = ext4_mb_load_buddy(sb, md->group, &e4b);
++ /* we expect to find existing buddy because it's pinned */
++ BUG_ON(err != 0);
++
++ /* there are blocks to put in buddy to make them really free */
++ count += md->num;
++ count2++;
++ ext4_lock_group(sb, md->group);
++ for (i = 0; i < md->num; i++) {
++ mb_debug(" %u", md->blocks[i]);
++ err = mb_free_blocks(NULL, &e4b, md->blocks[i], 1);
++ BUG_ON(err != 0);
++ }
++ mb_debug("\n");
++ ext4_unlock_group(sb, md->group);
++
++ /* balance refcounts from ext4_mb_free_metadata() */
++ page_cache_release(e4b.bd_buddy_page);
++ page_cache_release(e4b.bd_bitmap_page);
++
++ kfree(md);
++ ext4_mb_release_desc(&e4b);
++
++ } while (md);
++
++ mb_debug("freed %u blocks in %u structures\n", count, count2);
++}
++
++#define EXT4_ROOT "ext4"
++#define EXT4_MB_STATS_NAME "stats"
++#define EXT4_MB_MAX_TO_SCAN_NAME "max_to_scan"
++#define EXT4_MB_MIN_TO_SCAN_NAME "min_to_scan"
++#define EXT4_MB_ORDER2_REQ "order2_req"
++#define EXT4_MB_STREAM_REQ "stream_req"
++#define EXT4_MB_GROUP_PREALLOC "group_prealloc"
++
++
++
++#define MB_PROC_VALUE_READ(name) \
++static int ext4_mb_read_##name(char *page, char **start, \
++ off_t off, int count, int *eof, void *data) \
++{ \
++ struct ext4_sb_info *sbi = data; \
++ int len; \
++ *eof = 1; \
++ if (off != 0) \
++ return 0; \
++ len = sprintf(page, "%ld\n", sbi->s_mb_##name); \
++ *start = page; \
++ return len; \
++}
++
++#define MB_PROC_VALUE_WRITE(name) \
++static int ext4_mb_write_##name(struct file *file, \
++ const char __user *buf, unsigned long cnt, void *data) \
++{ \
++ struct ext4_sb_info *sbi = data; \
++ char str[32]; \
++ long value; \
++ if (cnt >= sizeof(str)) \
++ return -EINVAL; \
++ if (copy_from_user(str, buf, cnt)) \
++ return -EFAULT; \
++ value = simple_strtol(str, NULL, 0); \
++ if (value <= 0) \
++ return -ERANGE; \
++ sbi->s_mb_##name = value; \
++ return cnt; \
++}
++
++MB_PROC_VALUE_READ(stats);
++MB_PROC_VALUE_WRITE(stats);
++MB_PROC_VALUE_READ(max_to_scan);
++MB_PROC_VALUE_WRITE(max_to_scan);
++MB_PROC_VALUE_READ(min_to_scan);
++MB_PROC_VALUE_WRITE(min_to_scan);
++MB_PROC_VALUE_READ(order2_reqs);
++MB_PROC_VALUE_WRITE(order2_reqs);
++MB_PROC_VALUE_READ(stream_request);
++MB_PROC_VALUE_WRITE(stream_request);
++MB_PROC_VALUE_READ(group_prealloc);
++MB_PROC_VALUE_WRITE(group_prealloc);
++
++#define MB_PROC_HANDLER(name, var) \
++do { \
++ proc = create_proc_entry(name, mode, sbi->s_mb_proc); \
++ if (proc == NULL) { \
++		printk(KERN_ERR "EXT4-fs: can't create %s\n", name);	\
++ goto err_out; \
++ } \
++ proc->data = sbi; \
++ proc->read_proc = ext4_mb_read_##var ; \
++ proc->write_proc = ext4_mb_write_##var; \
++} while (0)
++
++static int ext4_mb_init_per_dev_proc(struct super_block *sb)
++{
++ mode_t mode = S_IFREG | S_IRUGO | S_IWUSR;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct proc_dir_entry *proc;
++ char devname[64];
++
++ snprintf(devname, sizeof(devname) - 1, "%s",
++ bdevname(sb->s_bdev, devname));
++ sbi->s_mb_proc = proc_mkdir(devname, proc_root_ext4);
++
++ MB_PROC_HANDLER(EXT4_MB_STATS_NAME, stats);
++ MB_PROC_HANDLER(EXT4_MB_MAX_TO_SCAN_NAME, max_to_scan);
++ MB_PROC_HANDLER(EXT4_MB_MIN_TO_SCAN_NAME, min_to_scan);
++ MB_PROC_HANDLER(EXT4_MB_ORDER2_REQ, order2_reqs);
++ MB_PROC_HANDLER(EXT4_MB_STREAM_REQ, stream_request);
++ MB_PROC_HANDLER(EXT4_MB_GROUP_PREALLOC, group_prealloc);
++
++ return 0;
++
++err_out:
++ printk(KERN_ERR "EXT4-fs: Unable to create %s\n", devname);
++ remove_proc_entry(EXT4_MB_GROUP_PREALLOC, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_STREAM_REQ, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_ORDER2_REQ, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_MIN_TO_SCAN_NAME, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_MAX_TO_SCAN_NAME, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_STATS_NAME, sbi->s_mb_proc);
++ remove_proc_entry(devname, proc_root_ext4);
++ sbi->s_mb_proc = NULL;
++
++ return -ENOMEM;
++}
++
++static int ext4_mb_destroy_per_dev_proc(struct super_block *sb)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ char devname[64];
++
++ if (sbi->s_mb_proc == NULL)
++ return -EINVAL;
++
++ snprintf(devname, sizeof(devname) - 1, "%s",
++ bdevname(sb->s_bdev, devname));
++ remove_proc_entry(EXT4_MB_GROUP_PREALLOC, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_STREAM_REQ, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_ORDER2_REQ, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_MIN_TO_SCAN_NAME, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_MAX_TO_SCAN_NAME, sbi->s_mb_proc);
++ remove_proc_entry(EXT4_MB_STATS_NAME, sbi->s_mb_proc);
++ remove_proc_entry(devname, proc_root_ext4);
++
++ return 0;
++}
++
++int __init init_ext4_mballoc(void)
++{
++ ext4_pspace_cachep =
++ kmem_cache_create("ext4_prealloc_space",
++ sizeof(struct ext4_prealloc_space),
++ 0, SLAB_RECLAIM_ACCOUNT, NULL);
++ if (ext4_pspace_cachep == NULL)
++ return -ENOMEM;
++
++#ifdef CONFIG_PROC_FS
++ proc_root_ext4 = proc_mkdir(EXT4_ROOT, proc_root_fs);
++ if (proc_root_ext4 == NULL)
++ printk(KERN_ERR "EXT4-fs: Unable to create %s\n", EXT4_ROOT);
++#endif
++
++ return 0;
++}
++
++void exit_ext4_mballoc(void)
++{
++ /* XXX: synchronize_rcu(); */
++ kmem_cache_destroy(ext4_pspace_cachep);
++#ifdef CONFIG_PROC_FS
++ remove_proc_entry(EXT4_ROOT, proc_root_fs);
++#endif
++}
++
++
++/*
++ * Check quota and mark the chosen space (ac->ac_b_ex) non-free in bitmaps.
++ * Returns 0 on success, or an error code.
++ */
++static int ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
++ handle_t *handle)
++{
++ struct buffer_head *bitmap_bh = NULL;
++ struct ext4_super_block *es;
++ struct ext4_group_desc *gdp;
++ struct buffer_head *gdp_bh;
++ struct ext4_sb_info *sbi;
++ struct super_block *sb;
++ ext4_fsblk_t block;
++ int err;
++
++ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
++ BUG_ON(ac->ac_b_ex.fe_len <= 0);
++
++ sb = ac->ac_sb;
++ sbi = EXT4_SB(sb);
++ es = sbi->s_es;
++
++	err = -EIO;
++	bitmap_bh = read_block_bitmap(sb, ac->ac_b_ex.fe_group);
++	if (!bitmap_bh)
++		goto out_err;
++
++	err = ext4_journal_get_write_access(handle, bitmap_bh);
++	if (err)
++		goto out_err;
++
++	err = -EIO;
++	gdp = ext4_get_group_desc(sb, ac->ac_b_ex.fe_group, &gdp_bh);
++	if (!gdp)
++		goto out_err;
++
++	ext4_debug("using block group %lu(%d)\n", ac->ac_b_ex.fe_group,
++			gdp->bg_free_blocks_count);
++
++ err = ext4_journal_get_write_access(handle, gdp_bh);
++ if (err)
++ goto out_err;
++
++ block = ac->ac_b_ex.fe_group * EXT4_BLOCKS_PER_GROUP(sb)
++ + ac->ac_b_ex.fe_start
++ + le32_to_cpu(es->s_first_data_block);
++
++ if (block == ext4_block_bitmap(sb, gdp) ||
++ block == ext4_inode_bitmap(sb, gdp) ||
++ in_range(block, ext4_inode_table(sb, gdp),
++ EXT4_SB(sb)->s_itb_per_group)) {
++
++ ext4_error(sb, __FUNCTION__,
++ "Allocating block in system zone - block = %llu",
++ block);
++ }
++#ifdef AGGRESSIVE_CHECK
++ {
++ int i;
++ for (i = 0; i < ac->ac_b_ex.fe_len; i++) {
++ BUG_ON(mb_test_bit(ac->ac_b_ex.fe_start + i,
++ bitmap_bh->b_data));
++ }
++ }
++#endif
++ mb_set_bits(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group), bitmap_bh->b_data,
++ ac->ac_b_ex.fe_start, ac->ac_b_ex.fe_len);
++
++ spin_lock(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group));
++ if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
++ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
++ gdp->bg_free_blocks_count =
++ cpu_to_le16(ext4_free_blocks_after_init(sb,
++ ac->ac_b_ex.fe_group,
++ gdp));
++ }
++ gdp->bg_free_blocks_count =
++ cpu_to_le16(le16_to_cpu(gdp->bg_free_blocks_count)
++ - ac->ac_b_ex.fe_len);
++ gdp->bg_checksum = ext4_group_desc_csum(sbi, ac->ac_b_ex.fe_group, gdp);
++ spin_unlock(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group));
++ percpu_counter_sub(&sbi->s_freeblocks_counter, ac->ac_b_ex.fe_len);
++
++ err = ext4_journal_dirty_metadata(handle, bitmap_bh);
++ if (err)
++ goto out_err;
++ err = ext4_journal_dirty_metadata(handle, gdp_bh);
++
++out_err:
++ sb->s_dirt = 1;
++ put_bh(bitmap_bh);
++ return err;
++}
++
++/*
++ * Normalize a request for a locality group. Group requests are normalized
++ * to the stripe size (s_stripe) if it was set via mount option; otherwise
++ * to s_mb_group_prealloc, which can be tuned via
++ * /proc/fs/ext4/<partition>/group_prealloc
++ *
++ * XXX: should we try to preallocate more than the group has now?
++ */
++static void ext4_mb_normalize_group_request(struct ext4_allocation_context *ac)
++{
++ struct super_block *sb = ac->ac_sb;
++ struct ext4_locality_group *lg = ac->ac_lg;
++
++ BUG_ON(lg == NULL);
++ if (EXT4_SB(sb)->s_stripe)
++ ac->ac_g_ex.fe_len = EXT4_SB(sb)->s_stripe;
++ else
++ ac->ac_g_ex.fe_len = EXT4_SB(sb)->s_mb_group_prealloc;
++ mb_debug("#%u: goal %lu blocks for locality group\n",
++ current->pid, ac->ac_g_ex.fe_len);
++}
++
++/*
++ * Normalization means making request better in terms of
++ * size and alignment
++ */
++static void ext4_mb_normalize_request(struct ext4_allocation_context *ac,
++ struct ext4_allocation_request *ar)
++{
++ int bsbits, max;
++ ext4_lblk_t end;
++ struct list_head *cur;
++ loff_t size, orig_size, start_off;
++ ext4_lblk_t start, orig_start;
++ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
++
++	/* only normalize data requests; metadata requests
++ do not need preallocation */
++ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
++ return;
++
++	/* sometimes the caller may want exactly these blocks */
++ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
++ return;
++
++ /* caller may indicate that preallocation isn't
++ * required (it's a tail, for example) */
++ if (ac->ac_flags & EXT4_MB_HINT_NOPREALLOC)
++ return;
++
++ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC) {
++ ext4_mb_normalize_group_request(ac);
++		return;
++ }
++
++ bsbits = ac->ac_sb->s_blocksize_bits;
++
++ /* first, let's learn actual file size
++ * given current request is allocated */
++ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
++ size = size << bsbits;
++ if (size < i_size_read(ac->ac_inode))
++ size = i_size_read(ac->ac_inode);
++
++ /* max available blocks in a free group */
++ max = EXT4_BLOCKS_PER_GROUP(ac->ac_sb) - 1 - 1 -
++ EXT4_SB(ac->ac_sb)->s_itb_per_group;
++
++#define NRL_CHECK_SIZE(req, size, max, bits)	\
++		((req) <= (size) || (max) <= ((size) >> (bits)))
++
++ /* first, try to predict filesize */
++ /* XXX: should this table be tunable? */
++ start_off = 0;
++ if (size <= 16 * 1024) {
++ size = 16 * 1024;
++ } else if (size <= 32 * 1024) {
++ size = 32 * 1024;
++ } else if (size <= 64 * 1024) {
++ size = 64 * 1024;
++ } else if (size <= 128 * 1024) {
++ size = 128 * 1024;
++ } else if (size <= 256 * 1024) {
++ size = 256 * 1024;
++ } else if (size <= 512 * 1024) {
++ size = 512 * 1024;
++ } else if (size <= 1024 * 1024) {
++ size = 1024 * 1024;
++ } else if (NRL_CHECK_SIZE(size, 4 * 1024 * 1024, max, bsbits)) {
++ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
++ (20 - bsbits)) << 20;
++ size = 1024 * 1024;
++ } else if (NRL_CHECK_SIZE(size, 8 * 1024 * 1024, max, bsbits)) {
++ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
++ (22 - bsbits)) << 22;
++ size = 4 * 1024 * 1024;
++ } else if (NRL_CHECK_SIZE(ac->ac_o_ex.fe_len,
++ (8<<20)>>bsbits, max, bsbits)) {
++ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
++ (23 - bsbits)) << 23;
++ size = 8 * 1024 * 1024;
++ } else {
++ start_off = (loff_t)ac->ac_o_ex.fe_logical << bsbits;
++ size = ac->ac_o_ex.fe_len << bsbits;
++ }
++ orig_size = size = size >> bsbits;
++ orig_start = start = start_off >> bsbits;
++
++ /* don't cover already allocated blocks in selected range */
++ if (ar->pleft && start <= ar->lleft) {
++ size -= ar->lleft + 1 - start;
++ start = ar->lleft + 1;
++ }
++ if (ar->pright && start + size - 1 >= ar->lright)
++ size -= start + size - ar->lright;
++
++ end = start + size;
++
++ /* check we don't cross already preallocated blocks */
++ rcu_read_lock();
++ list_for_each_rcu(cur, &ei->i_prealloc_list) {
++ struct ext4_prealloc_space *pa;
++ unsigned long pa_end;
++
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
++
++ if (pa->pa_deleted)
++ continue;
++ spin_lock(&pa->pa_lock);
++ if (pa->pa_deleted) {
++ spin_unlock(&pa->pa_lock);
++ continue;
++ }
++
++ pa_end = pa->pa_lstart + pa->pa_len;
++
++ /* PA must not overlap original request */
++ BUG_ON(!(ac->ac_o_ex.fe_logical >= pa_end ||
++ ac->ac_o_ex.fe_logical < pa->pa_lstart));
++
++		/* skip PAs that the normalized request does not overlap */
++ if (pa->pa_lstart >= end) {
++ spin_unlock(&pa->pa_lock);
++ continue;
++ }
++ if (pa_end <= start) {
++ spin_unlock(&pa->pa_lock);
++ continue;
++ }
++ BUG_ON(pa->pa_lstart <= start && pa_end >= end);
++
++ if (pa_end <= ac->ac_o_ex.fe_logical) {
++ BUG_ON(pa_end < start);
++ start = pa_end;
++ }
++
++ if (pa->pa_lstart > ac->ac_o_ex.fe_logical) {
++ BUG_ON(pa->pa_lstart > end);
++ end = pa->pa_lstart;
++ }
++ spin_unlock(&pa->pa_lock);
++ }
++ rcu_read_unlock();
++ size = end - start;
++
++ /* XXX: extra loop to check we really don't overlap preallocations */
++ rcu_read_lock();
++ list_for_each_rcu(cur, &ei->i_prealloc_list) {
++ struct ext4_prealloc_space *pa;
++ unsigned long pa_end;
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
++ spin_lock(&pa->pa_lock);
++ if (pa->pa_deleted == 0) {
++ pa_end = pa->pa_lstart + pa->pa_len;
++ BUG_ON(!(start >= pa_end || end <= pa->pa_lstart));
++ }
++ spin_unlock(&pa->pa_lock);
++ }
++ rcu_read_unlock();
++
++ if (start + size <= ac->ac_o_ex.fe_logical &&
++ start > ac->ac_o_ex.fe_logical) {
++ printk(KERN_ERR "start %lu, size %lu, fe_logical %lu\n",
++ (unsigned long) start, (unsigned long) size,
++ (unsigned long) ac->ac_o_ex.fe_logical);
++ }
++ BUG_ON(start + size <= ac->ac_o_ex.fe_logical &&
++ start > ac->ac_o_ex.fe_logical);
++ BUG_ON(size <= 0 || size >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
++
++ /* now prepare goal request */
++
++	/* XXX: is it better to align blocks with respect to logical
++	 * placement, or to satisfy a big request as is? */
++ ac->ac_g_ex.fe_logical = start;
++ ac->ac_g_ex.fe_len = size;
++
++ /* define goal start in order to merge */
++ if (ar->pright && (ar->lright == (start + size))) {
++ /* merge to the right */
++ ext4_get_group_no_and_offset(ac->ac_sb, ar->pright - size,
++ &ac->ac_f_ex.fe_group,
++ &ac->ac_f_ex.fe_start);
++ ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
++ }
++ if (ar->pleft && (ar->lleft + 1 == start)) {
++ /* merge to the left */
++ ext4_get_group_no_and_offset(ac->ac_sb, ar->pleft + 1,
++ &ac->ac_f_ex.fe_group,
++ &ac->ac_f_ex.fe_start);
++ ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
++ }
++
++ mb_debug("goal: %u(was %u) blocks at %u\n", (unsigned) size,
++ (unsigned) orig_size, (unsigned) start);
++}
++
++static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
+{
-+ if (port->mapbase == 0xffe00000)
-+ return ctrl_inw(SCSPTR0) & 0x0001 ? 1 : 0; /* SCIF */
-+ if (port->mapbase == 0xffe08000)
-+ return ctrl_inw(SCSPTR1) & 0x0001 ? 1 : 0; /* SCIF */
-+ return 1;
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++
++ if (sbi->s_mb_stats && ac->ac_g_ex.fe_len > 1) {
++ atomic_inc(&sbi->s_bal_reqs);
++ atomic_add(ac->ac_b_ex.fe_len, &sbi->s_bal_allocated);
++ if (ac->ac_o_ex.fe_len >= ac->ac_g_ex.fe_len)
++ atomic_inc(&sbi->s_bal_success);
++ atomic_add(ac->ac_found, &sbi->s_bal_ex_scanned);
++ if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
++ ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
++ atomic_inc(&sbi->s_bal_goals);
++ if (ac->ac_found > sbi->s_mb_max_to_scan)
++ atomic_inc(&sbi->s_bal_breaks);
++ }
++
++ ext4_mb_store_history(ac);
+}
- #elif defined(CONFIG_CPU_SUBTYPE_SH7770)
- static inline int sci_rxd_in(struct uart_port *port)
- {
-@@ -617,7 +641,9 @@ static inline int sci_rxd_in(struct uart_port *port)
- return ctrl_inw(SCSPTR5) & 0x0001 ? 1 : 0; /* SCIF */
- return 1;
- }
--#elif defined(CONFIG_CPU_SUBTYPE_SH7206)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7203) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7206) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7263)
- static inline int sci_rxd_in(struct uart_port *port)
- {
- if (port->mapbase == 0xfffe8000)
-@@ -688,11 +714,13 @@ static inline int sci_rxd_in(struct uart_port *port)
- * -- Mitch Davis - 15 Jul 2000
- */
-
--#if defined(CONFIG_CPU_SUBTYPE_SH7780) || \
-+#if defined(CONFIG_CPU_SUBTYPE_SH7763) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7780) || \
- defined(CONFIG_CPU_SUBTYPE_SH7785)
- #define SCBRR_VALUE(bps, clk) ((clk+16*bps)/(16*bps)-1)
- #elif defined(CONFIG_CPU_SUBTYPE_SH7705) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define SCBRR_VALUE(bps, clk) (((clk*2)+16*bps)/(32*bps)-1)
- #elif defined(__H8300H__) || defined(__H8300S__)
- #define SCBRR_VALUE(bps) (((CONFIG_CPU_CLOCK*1000/32)/bps)-1)
-diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
-index abf0504..aaaea81 100644
---- a/drivers/spi/Kconfig
-+++ b/drivers/spi/Kconfig
-@@ -153,6 +153,7 @@ config SPI_OMAP24XX
- config SPI_PXA2XX
- tristate "PXA2xx SSP SPI master"
- depends on SPI_MASTER && ARCH_PXA && EXPERIMENTAL
-+ select PXA_SSP
- help
- This enables using a PXA2xx SSP port as a SPI master controller.
- The driver can be configured to use any SSP port and additional
-diff --git a/drivers/spi/pxa2xx_spi.c b/drivers/spi/pxa2xx_spi.c
-index 1c2ab54..eb817b8 100644
---- a/drivers/spi/pxa2xx_spi.c
-+++ b/drivers/spi/pxa2xx_spi.c
-@@ -27,6 +27,7 @@
- #include <linux/spi/spi.h>
- #include <linux/workqueue.h>
- #include <linux/delay.h>
-+#include <linux/clk.h>
-
- #include <asm/io.h>
- #include <asm/irq.h>
-@@ -36,6 +37,8 @@
-
- #include <asm/arch/hardware.h>
- #include <asm/arch/pxa-regs.h>
-+#include <asm/arch/regs-ssp.h>
-+#include <asm/arch/ssp.h>
- #include <asm/arch/pxa2xx_spi.h>
-
- MODULE_AUTHOR("Stephen Street");
-@@ -80,6 +83,9 @@ struct driver_data {
- /* Driver model hookup */
- struct platform_device *pdev;
-
-+ /* SSP Info */
-+ struct ssp_device *ssp;
+
- /* SPI framework hookup */
- enum pxa_ssp_type ssp_type;
- struct spi_master *master;
-@@ -778,6 +784,16 @@ int set_dma_burst_and_threshold(struct chip_data *chip, struct spi_device *spi,
- return retval;
- }
-
-+static unsigned int ssp_get_clk_div(struct ssp_device *ssp, int rate)
++/*
++ * use blocks preallocated to inode
++ */
++static void ext4_mb_use_inode_pa(struct ext4_allocation_context *ac,
++ struct ext4_prealloc_space *pa)
+{
-+ unsigned long ssp_clk = clk_get_rate(ssp->clk);
++ ext4_fsblk_t start;
++ ext4_fsblk_t end;
++ int len;
+
-+ if (ssp->type == PXA25x_SSP)
-+ return ((ssp_clk / (2 * rate) - 1) & 0xff) << 8;
-+ else
-+ return ((ssp_clk / rate - 1) & 0xfff) << 8;
++ /* found preallocated blocks, use them */
++ start = pa->pa_pstart + (ac->ac_o_ex.fe_logical - pa->pa_lstart);
++ end = min(pa->pa_pstart + pa->pa_len, start + ac->ac_o_ex.fe_len);
++ len = end - start;
++ ext4_get_group_no_and_offset(ac->ac_sb, start, &ac->ac_b_ex.fe_group,
++ &ac->ac_b_ex.fe_start);
++ ac->ac_b_ex.fe_len = len;
++ ac->ac_status = AC_STATUS_FOUND;
++ ac->ac_pa = pa;
++
++ BUG_ON(start < pa->pa_pstart);
++ BUG_ON(start + len > pa->pa_pstart + pa->pa_len);
++ BUG_ON(pa->pa_free < len);
++ pa->pa_free -= len;
++
++ mb_debug("use %llu/%lu from inode pa %p\n", start, len, pa);
+}
+
- static void pump_transfers(unsigned long data)
- {
- struct driver_data *drv_data = (struct driver_data *)data;
-@@ -785,6 +801,7 @@ static void pump_transfers(unsigned long data)
- struct spi_transfer *transfer = NULL;
- struct spi_transfer *previous = NULL;
- struct chip_data *chip = NULL;
-+ struct ssp_device *ssp = drv_data->ssp;
- void *reg = drv_data->ioaddr;
- u32 clk_div = 0;
- u8 bits = 0;
-@@ -866,12 +883,7 @@ static void pump_transfers(unsigned long data)
- if (transfer->bits_per_word)
- bits = transfer->bits_per_word;
-
-- if (reg == SSP1_VIRT)
-- clk_div = SSP1_SerClkDiv(speed);
-- else if (reg == SSP2_VIRT)
-- clk_div = SSP2_SerClkDiv(speed);
-- else if (reg == SSP3_VIRT)
-- clk_div = SSP3_SerClkDiv(speed);
-+ clk_div = ssp_get_clk_div(ssp, speed);
-
- if (bits <= 8) {
- drv_data->n_bytes = 1;
-@@ -1074,6 +1086,7 @@ static int setup(struct spi_device *spi)
- struct pxa2xx_spi_chip *chip_info = NULL;
- struct chip_data *chip;
- struct driver_data *drv_data = spi_master_get_devdata(spi->master);
-+ struct ssp_device *ssp = drv_data->ssp;
- unsigned int clk_div;
-
- if (!spi->bits_per_word)
-@@ -1157,18 +1170,7 @@ static int setup(struct spi_device *spi)
- }
- }
-
-- if (drv_data->ioaddr == SSP1_VIRT)
-- clk_div = SSP1_SerClkDiv(spi->max_speed_hz);
-- else if (drv_data->ioaddr == SSP2_VIRT)
-- clk_div = SSP2_SerClkDiv(spi->max_speed_hz);
-- else if (drv_data->ioaddr == SSP3_VIRT)
-- clk_div = SSP3_SerClkDiv(spi->max_speed_hz);
-- else
-- {
-- dev_err(&spi->dev, "failed setup: unknown IO address=0x%p\n",
-- drv_data->ioaddr);
-- return -ENODEV;
-- }
-+ clk_div = ssp_get_clk_div(ssp, spi->max_speed_hz);
- chip->speed_hz = spi->max_speed_hz;
-
- chip->cr0 = clk_div
-@@ -1183,15 +1185,15 @@ static int setup(struct spi_device *spi)
-
- /* NOTE: PXA25x_SSP _could_ use external clocking ... */
- if (drv_data->ssp_type != PXA25x_SSP)
-- dev_dbg(&spi->dev, "%d bits/word, %d Hz, mode %d\n",
-+ dev_dbg(&spi->dev, "%d bits/word, %ld Hz, mode %d\n",
- spi->bits_per_word,
-- (CLOCK_SPEED_HZ)
-+ clk_get_rate(ssp->clk)
- / (1 + ((chip->cr0 & SSCR0_SCR) >> 8)),
- spi->mode & 0x3);
- else
-- dev_dbg(&spi->dev, "%d bits/word, %d Hz, mode %d\n",
-+ dev_dbg(&spi->dev, "%d bits/word, %ld Hz, mode %d\n",
- spi->bits_per_word,
-- (CLOCK_SPEED_HZ/2)
-+ clk_get_rate(ssp->clk)
- / (1 + ((chip->cr0 & SSCR0_SCR) >> 8)),
- spi->mode & 0x3);
-
-@@ -1323,14 +1325,14 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
- struct pxa2xx_spi_master *platform_info;
- struct spi_master *master;
- struct driver_data *drv_data = 0;
-- struct resource *memory_resource;
-- int irq;
-+ struct ssp_device *ssp;
- int status = 0;
-
- platform_info = dev->platform_data;
-
-- if (platform_info->ssp_type == SSP_UNDEFINED) {
-- dev_err(&pdev->dev, "undefined SSP\n");
-+ ssp = ssp_request(pdev->id, pdev->name);
-+ if (ssp == NULL) {
-+ dev_err(&pdev->dev, "failed to request SSP%d\n", pdev->id);
- return -ENODEV;
- }
-
-@@ -1338,12 +1340,14 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
- master = spi_alloc_master(dev, sizeof(struct driver_data) + 16);
- if (!master) {
- dev_err(&pdev->dev, "can not alloc spi_master\n");
-+ ssp_free(ssp);
- return -ENOMEM;
- }
- drv_data = spi_master_get_devdata(master);
- drv_data->master = master;
- drv_data->master_info = platform_info;
- drv_data->pdev = pdev;
-+ drv_data->ssp = ssp;
-
- master->bus_num = pdev->id;
- master->num_chipselect = platform_info->num_chipselect;
-@@ -1351,21 +1355,13 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
- master->setup = setup;
- master->transfer = transfer;
-
-- drv_data->ssp_type = platform_info->ssp_type;
-+ drv_data->ssp_type = ssp->type;
- drv_data->null_dma_buf = (u32 *)ALIGN((u32)(drv_data +
- sizeof(struct driver_data)), 8);
-
-- /* Setup register addresses */
-- memory_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-- if (!memory_resource) {
-- dev_err(&pdev->dev, "memory resources not defined\n");
-- status = -ENODEV;
-- goto out_error_master_alloc;
-- }
--
-- drv_data->ioaddr = (void *)io_p2v((unsigned long)(memory_resource->start));
-- drv_data->ssdr_physical = memory_resource->start + 0x00000010;
-- if (platform_info->ssp_type == PXA25x_SSP) {
-+ drv_data->ioaddr = ssp->mmio_base;
-+ drv_data->ssdr_physical = ssp->phys_base + SSDR;
-+ if (ssp->type == PXA25x_SSP) {
- drv_data->int_cr1 = SSCR1_TIE | SSCR1_RIE;
- drv_data->dma_cr1 = 0;
- drv_data->clear_sr = SSSR_ROR;
-@@ -1377,15 +1373,7 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
- drv_data->mask_sr = SSSR_TINT | SSSR_RFS | SSSR_TFS | SSSR_ROR;
- }
-
-- /* Attach to IRQ */
-- irq = platform_get_irq(pdev, 0);
-- if (irq < 0) {
-- dev_err(&pdev->dev, "irq resource not defined\n");
-- status = -ENODEV;
-- goto out_error_master_alloc;
-- }
--
-- status = request_irq(irq, ssp_int, 0, dev->bus_id, drv_data);
-+ status = request_irq(ssp->irq, ssp_int, 0, dev->bus_id, drv_data);
- if (status < 0) {
- dev_err(&pdev->dev, "can not get IRQ\n");
- goto out_error_master_alloc;
-@@ -1418,29 +1406,12 @@ static int __init pxa2xx_spi_probe(struct platform_device *pdev)
- goto out_error_dma_alloc;
- }
-
-- if (drv_data->ioaddr == SSP1_VIRT) {
-- DRCMRRXSSDR = DRCMR_MAPVLD
-- | drv_data->rx_channel;
-- DRCMRTXSSDR = DRCMR_MAPVLD
-- | drv_data->tx_channel;
-- } else if (drv_data->ioaddr == SSP2_VIRT) {
-- DRCMRRXSS2DR = DRCMR_MAPVLD
-- | drv_data->rx_channel;
-- DRCMRTXSS2DR = DRCMR_MAPVLD
-- | drv_data->tx_channel;
-- } else if (drv_data->ioaddr == SSP3_VIRT) {
-- DRCMRRXSS3DR = DRCMR_MAPVLD
-- | drv_data->rx_channel;
-- DRCMRTXSS3DR = DRCMR_MAPVLD
-- | drv_data->tx_channel;
-- } else {
-- dev_err(dev, "bad SSP type\n");
-- goto out_error_dma_alloc;
-- }
-+ DRCMR(ssp->drcmr_rx) = DRCMR_MAPVLD | drv_data->rx_channel;
-+ DRCMR(ssp->drcmr_tx) = DRCMR_MAPVLD | drv_data->tx_channel;
- }
-
- /* Enable SOC clock */
-- pxa_set_cken(platform_info->clock_enable, 1);
-+ clk_enable(ssp->clk);
-
- /* Load default SSP configuration */
- write_SSCR0(0, drv_data->ioaddr);
-@@ -1479,7 +1450,7 @@ out_error_queue_alloc:
- destroy_queue(drv_data);
-
- out_error_clock_enabled:
-- pxa_set_cken(platform_info->clock_enable, 0);
-+ clk_disable(ssp->clk);
-
- out_error_dma_alloc:
- if (drv_data->tx_channel != -1)
-@@ -1488,17 +1459,18 @@ out_error_dma_alloc:
- pxa_free_dma(drv_data->rx_channel);
-
- out_error_irq_alloc:
-- free_irq(irq, drv_data);
-+ free_irq(ssp->irq, drv_data);
-
- out_error_master_alloc:
- spi_master_put(master);
-+ ssp_free(ssp);
- return status;
- }
-
- static int pxa2xx_spi_remove(struct platform_device *pdev)
- {
- struct driver_data *drv_data = platform_get_drvdata(pdev);
-- int irq;
-+ struct ssp_device *ssp = drv_data->ssp;
- int status = 0;
-
- if (!drv_data)
-@@ -1520,28 +1492,21 @@ static int pxa2xx_spi_remove(struct platform_device *pdev)
-
- /* Disable the SSP at the peripheral and SOC level */
- write_SSCR0(0, drv_data->ioaddr);
-- pxa_set_cken(drv_data->master_info->clock_enable, 0);
-+ clk_disable(ssp->clk);
-
- /* Release DMA */
- if (drv_data->master_info->enable_dma) {
-- if (drv_data->ioaddr == SSP1_VIRT) {
-- DRCMRRXSSDR = 0;
-- DRCMRTXSSDR = 0;
-- } else if (drv_data->ioaddr == SSP2_VIRT) {
-- DRCMRRXSS2DR = 0;
-- DRCMRTXSS2DR = 0;
-- } else if (drv_data->ioaddr == SSP3_VIRT) {
-- DRCMRRXSS3DR = 0;
-- DRCMRTXSS3DR = 0;
-- }
-+ DRCMR(ssp->drcmr_rx) = 0;
-+ DRCMR(ssp->drcmr_tx) = 0;
- pxa_free_dma(drv_data->tx_channel);
- pxa_free_dma(drv_data->rx_channel);
- }
-
- /* Release IRQ */
-- irq = platform_get_irq(pdev, 0);
-- if (irq >= 0)
-- free_irq(irq, drv_data);
-+ free_irq(ssp->irq, drv_data);
++/*
++ * use blocks preallocated to locality group
++ */
++static void ext4_mb_use_group_pa(struct ext4_allocation_context *ac,
++ struct ext4_prealloc_space *pa)
++{
++ unsigned len = ac->ac_o_ex.fe_len;
+
-+ /* Release SSP */
-+ ssp_free(ssp);
-
- /* Disconnect from the SPI framework */
- spi_unregister_master(drv_data->master);
-@@ -1576,6 +1541,7 @@ static int suspend_devices(struct device *dev, void *pm_message)
- static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
- {
- struct driver_data *drv_data = platform_get_drvdata(pdev);
-+ struct ssp_device *ssp = drv_data->ssp;
- int status = 0;
-
- 	/* Check all children for current power state */
-@@ -1588,7 +1554,7 @@ static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
- if (status != 0)
- return status;
- write_SSCR0(0, drv_data->ioaddr);
-- pxa_set_cken(drv_data->master_info->clock_enable, 0);
-+ clk_disable(ssp->clk);
-
- return 0;
- }
-@@ -1596,10 +1562,11 @@ static int pxa2xx_spi_suspend(struct platform_device *pdev, pm_message_t state)
- static int pxa2xx_spi_resume(struct platform_device *pdev)
- {
- struct driver_data *drv_data = platform_get_drvdata(pdev);
-+ struct ssp_device *ssp = drv_data->ssp;
- int status = 0;
-
- /* Enable the SSP clock */
-- pxa_set_cken(drv_data->master_info->clock_enable, 1);
-+ clk_disable(ssp->clk);
-
- /* Start the queue running */
- status = start_queue(drv_data);
-diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
-index 93e9de4..682a6a4 100644
---- a/drivers/spi/spi.c
-+++ b/drivers/spi/spi.c
-@@ -485,6 +485,15 @@ void spi_unregister_master(struct spi_master *master)
- }
- EXPORT_SYMBOL_GPL(spi_unregister_master);
-
-+static int __spi_master_match(struct device *dev, void *data)
++ ext4_get_group_no_and_offset(ac->ac_sb, pa->pa_pstart,
++ &ac->ac_b_ex.fe_group,
++ &ac->ac_b_ex.fe_start);
++ ac->ac_b_ex.fe_len = len;
++ ac->ac_status = AC_STATUS_FOUND;
++ ac->ac_pa = pa;
++
++ /* we don't correct pa_pstart or pa_plen here to avoid
++	 * possible race when the group is being loaded concurrently
++ * instead we correct pa later, after blocks are marked
++ * in on-disk bitmap -- see ext4_mb_release_context() */
++ /*
++ * FIXME!! but the other CPUs can look at this particular
++	 * pa and think that it has enough free blocks if we
++	 * don't update pa_free here, right?
++ */
++ mb_debug("use %u/%u from group pa %p\n", pa->pa_lstart-len, len, pa);
++}
++
++/*
++ * search goal blocks in preallocated space
++ */
++static int ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
+{
-+ struct spi_master *m;
-+ u16 *bus_num = data;
++ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
++ struct ext4_locality_group *lg;
++ struct ext4_prealloc_space *pa;
++ struct list_head *cur;
+
-+ m = container_of(dev, struct spi_master, dev);
-+ return m->bus_num == *bus_num;
++ /* only data can be preallocated */
++ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
++ return 0;
++
++ /* first, try per-file preallocation */
++ rcu_read_lock();
++ list_for_each_rcu(cur, &ei->i_prealloc_list) {
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
++
++ /* all fields in this condition don't change,
++ * so we can skip locking for them */
++ if (ac->ac_o_ex.fe_logical < pa->pa_lstart ||
++ ac->ac_o_ex.fe_logical >= pa->pa_lstart + pa->pa_len)
++ continue;
++
++ /* found preallocated blocks, use them */
++ spin_lock(&pa->pa_lock);
++ if (pa->pa_deleted == 0 && pa->pa_free) {
++ atomic_inc(&pa->pa_count);
++ ext4_mb_use_inode_pa(ac, pa);
++ spin_unlock(&pa->pa_lock);
++ ac->ac_criteria = 10;
++ rcu_read_unlock();
++ return 1;
++ }
++ spin_unlock(&pa->pa_lock);
++ }
++ rcu_read_unlock();
++
++ /* can we use group allocation? */
++ if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
++ return 0;
++
++ /* inode may have no locality group for some reason */
++ lg = ac->ac_lg;
++ if (lg == NULL)
++ return 0;
++
++ rcu_read_lock();
++ list_for_each_rcu(cur, &lg->lg_prealloc_list) {
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
++ spin_lock(&pa->pa_lock);
++ if (pa->pa_deleted == 0 && pa->pa_free >= ac->ac_o_ex.fe_len) {
++ atomic_inc(&pa->pa_count);
++ ext4_mb_use_group_pa(ac, pa);
++ spin_unlock(&pa->pa_lock);
++ ac->ac_criteria = 20;
++ rcu_read_unlock();
++ return 1;
++ }
++ spin_unlock(&pa->pa_lock);
++ }
++ rcu_read_unlock();
++
++ return 0;
+}
+
- /**
- * spi_busnum_to_master - look up master associated with bus_num
- * @bus_num: the master's bus number
-@@ -499,17 +508,12 @@ struct spi_master *spi_busnum_to_master(u16 bus_num)
- {
- struct device *dev;
- struct spi_master *master = NULL;
-- struct spi_master *m;
--
-- down(&spi_master_class.sem);
-- list_for_each_entry(dev, &spi_master_class.children, node) {
-- m = container_of(dev, struct spi_master, dev);
-- if (m->bus_num == bus_num) {
-- master = spi_master_get(m);
-- break;
-- }
-- }
-- up(&spi_master_class.sem);
++/*
++ * the function goes through all preallocation in this group and marks them
++ * used in in-core bitmap. buddy must be generated from this bitmap
++ * Needs to be called with the ext4 group lock (ext4_lock_group)
++ */
++static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
++ ext4_group_t group)
++{
++ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
++ struct ext4_prealloc_space *pa;
++ struct list_head *cur;
++ ext4_group_t groupnr;
++ ext4_grpblk_t start;
++ int preallocated = 0;
++ int count = 0;
++ int len;
+
-+ dev = class_find_device(&spi_master_class, &bus_num,
-+ __spi_master_match);
-+ if (dev)
-+ master = container_of(dev, struct spi_master, dev);
-+ /* reference got in class_find_device */
- return master;
- }
- EXPORT_SYMBOL_GPL(spi_busnum_to_master);
-diff --git a/drivers/ssb/b43_pci_bridge.c b/drivers/ssb/b43_pci_bridge.c
-index f145d8a..1a31f7a 100644
---- a/drivers/ssb/b43_pci_bridge.c
-+++ b/drivers/ssb/b43_pci_bridge.c
-@@ -27,6 +27,8 @@ static const struct pci_device_id b43_pci_bridge_tbl[] = {
- { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4321) },
- { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4324) },
- { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4325) },
-+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4328) },
-+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4329) },
- { 0, },
- };
- MODULE_DEVICE_TABLE(pci, b43_pci_bridge_tbl);
-diff --git a/drivers/ssb/main.c b/drivers/ssb/main.c
-index 85a2054..9028ed5 100644
---- a/drivers/ssb/main.c
-+++ b/drivers/ssb/main.c
-@@ -872,14 +872,22 @@ EXPORT_SYMBOL(ssb_clockspeed);
-
- static u32 ssb_tmslow_reject_bitmask(struct ssb_device *dev)
- {
-+ u32 rev = ssb_read32(dev, SSB_IDLOW) & SSB_IDLOW_SSBREV;
++	/* all forms of preallocation discard the first load group,
++ * so the only competing code is preallocation use.
++ * we don't need any locking here
++ * notice we do NOT ignore preallocations with pa_deleted
++ * otherwise we could leave used blocks available for
++ * allocation in buddy when concurrent ext4_mb_put_pa()
++ * is dropping preallocation
++ */
++ list_for_each(cur, &grp->bb_prealloc_list) {
++ pa = list_entry(cur, struct ext4_prealloc_space, pa_group_list);
++ spin_lock(&pa->pa_lock);
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart,
++ &groupnr, &start);
++ len = pa->pa_len;
++ spin_unlock(&pa->pa_lock);
++ if (unlikely(len == 0))
++ continue;
++ BUG_ON(groupnr != group);
++ mb_set_bits(sb_bgl_lock(EXT4_SB(sb), group),
++ bitmap, start, len);
++ preallocated += len;
++ count++;
++ }
++	mb_debug("preallocated %u for group %lu\n", preallocated, group);
++}
+
- /* The REJECT bit changed position in TMSLOW between
- * Backplane revisions. */
-- switch (ssb_read32(dev, SSB_IDLOW) & SSB_IDLOW_SSBREV) {
-+ switch (rev) {
- case SSB_IDLOW_SSBREV_22:
- return SSB_TMSLOW_REJECT_22;
- case SSB_IDLOW_SSBREV_23:
- return SSB_TMSLOW_REJECT_23;
-+ case SSB_IDLOW_SSBREV_24: /* TODO - find the proper REJECT bits */
-+ case SSB_IDLOW_SSBREV_25: /* same here */
-+ case SSB_IDLOW_SSBREV_26: /* same here */
-+ case SSB_IDLOW_SSBREV_27: /* same here */
-+ return SSB_TMSLOW_REJECT_23; /* this is a guess */
- default:
-+ printk(KERN_INFO "ssb: Backplane Revision 0x%.8X\n", rev);
- WARN_ON(1);
- }
- return (SSB_TMSLOW_REJECT_22 | SSB_TMSLOW_REJECT_23);
-diff --git a/drivers/ssb/pci.c b/drivers/ssb/pci.c
-index 0ab095c..b434df7 100644
---- a/drivers/ssb/pci.c
-+++ b/drivers/ssb/pci.c
-@@ -212,29 +212,29 @@ static inline u8 ssb_crc8(u8 crc, u8 data)
- return t[crc ^ data];
- }
-
--static u8 ssb_sprom_crc(const u16 *sprom)
-+static u8 ssb_sprom_crc(const u16 *sprom, u16 size)
- {
- int word;
- u8 crc = 0xFF;
-
-- for (word = 0; word < SSB_SPROMSIZE_WORDS - 1; word++) {
-+ for (word = 0; word < size - 1; word++) {
- crc = ssb_crc8(crc, sprom[word] & 0x00FF);
- crc = ssb_crc8(crc, (sprom[word] & 0xFF00) >> 8);
- }
-- crc = ssb_crc8(crc, sprom[SPOFF(SSB_SPROM_REVISION)] & 0x00FF);
-+ crc = ssb_crc8(crc, sprom[size - 1] & 0x00FF);
- crc ^= 0xFF;
-
- return crc;
- }
-
--static int sprom_check_crc(const u16 *sprom)
-+static int sprom_check_crc(const u16 *sprom, u16 size)
- {
- u8 crc;
- u8 expected_crc;
- u16 tmp;
-
-- crc = ssb_sprom_crc(sprom);
-- tmp = sprom[SPOFF(SSB_SPROM_REVISION)] & SSB_SPROM_REVISION_CRC;
-+ crc = ssb_sprom_crc(sprom, size);
-+ tmp = sprom[size - 1] & SSB_SPROM_REVISION_CRC;
- expected_crc = tmp >> SSB_SPROM_REVISION_CRC_SHIFT;
- if (crc != expected_crc)
- return -EPROTO;
-@@ -246,8 +246,8 @@ static void sprom_do_read(struct ssb_bus *bus, u16 *sprom)
- {
- int i;
-
-- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++)
-- sprom[i] = readw(bus->mmio + SSB_SPROM_BASE + (i * 2));
-+ for (i = 0; i < bus->sprom_size; i++)
-+ sprom[i] = ioread16(bus->mmio + SSB_SPROM_BASE + (i * 2));
- }
-
- static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
-@@ -255,6 +255,7 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
- struct pci_dev *pdev = bus->host_pci;
- int i, err;
- u32 spromctl;
-+ u16 size = bus->sprom_size;
-
- ssb_printk(KERN_NOTICE PFX "Writing SPROM. Do NOT turn off the power! Please stand by...\n");
- err = pci_read_config_dword(pdev, SSB_SPROMCTL, &spromctl);
-@@ -266,12 +267,12 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
- goto err_ctlreg;
- ssb_printk(KERN_NOTICE PFX "[ 0%%");
- msleep(500);
-- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++) {
-- if (i == SSB_SPROMSIZE_WORDS / 4)
-+ for (i = 0; i < size; i++) {
-+ if (i == size / 4)
- ssb_printk("25%%");
-- else if (i == SSB_SPROMSIZE_WORDS / 2)
-+ else if (i == size / 2)
- ssb_printk("50%%");
-- else if (i == (SSB_SPROMSIZE_WORDS / 4) * 3)
-+ else if (i == (size * 3) / 4)
- ssb_printk("75%%");
- else if (i % 2)
- ssb_printk(".");
-@@ -296,24 +297,53 @@ err_ctlreg:
- return err;
- }
-
--static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
-+static s8 r123_extract_antgain(u8 sprom_revision, const u16 *in,
-+ u16 mask, u16 shift)
++static void ext4_mb_pa_callback(struct rcu_head *head)
+{
-+ u16 v;
-+ u8 gain;
++ struct ext4_prealloc_space *pa;
++ pa = container_of(head, struct ext4_prealloc_space, u.pa_rcu);
++ kmem_cache_free(ext4_pspace_cachep, pa);
++}
+
-+ v = in[SPOFF(SSB_SPROM1_AGAIN)];
-+ gain = (v & mask) >> shift;
-+ if (gain == 0xFF)
-+ gain = 2; /* If unset use 2dBm */
-+ if (sprom_revision == 1) {
-+ /* Convert to Q5.2 */
-+ gain <<= 2;
-+ } else {
-+ /* Q5.2 Fractional part is stored in 0xC0 */
-+ gain = ((gain & 0xC0) >> 6) | ((gain & 0x3F) << 2);
++/*
++ * drops a reference to preallocated space descriptor
++ * if this was the last reference and the space is consumed
++ */
++static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
++ struct super_block *sb, struct ext4_prealloc_space *pa)
++{
++ unsigned long grp;
++
++ if (!atomic_dec_and_test(&pa->pa_count) || pa->pa_free != 0)
++ return;
++
++ /* in this short window concurrent discard can set pa_deleted */
++ spin_lock(&pa->pa_lock);
++ if (pa->pa_deleted == 1) {
++ spin_unlock(&pa->pa_lock);
++ return;
+ }
+
-+ return (s8)gain;
++ pa->pa_deleted = 1;
++ spin_unlock(&pa->pa_lock);
++
++ /* -1 is to protect from crossing allocation group */
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart - 1, &grp, NULL);
++
++ /*
++ * possible race:
++ *
++ * P1 (buddy init) P2 (regular allocation)
++ * find block B in PA
++ * copy on-disk bitmap to buddy
++ * mark B in on-disk bitmap
++ * drop PA from group
++ * mark all PAs in buddy
++ *
++ * thus, P1 initializes buddy with B available. to prevent this
++ * we make "copy" and "mark all PAs" atomic and serialize "drop PA"
++ * against that pair
++ */
++ ext4_lock_group(sb, grp);
++ list_del(&pa->pa_group_list);
++ ext4_unlock_group(sb, grp);
++
++ spin_lock(pa->pa_obj_lock);
++ list_del_rcu(&pa->pa_inode_list);
++ spin_unlock(pa->pa_obj_lock);
++
++ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+}
+
-+static void sprom_extract_r123(struct ssb_sprom *out, const u16 *in)
- {
- int i;
- u16 v;
-+ s8 gain;
-+ u16 loc[3];
-
-- SPEX(pci_spid, SSB_SPROM1_SPID, 0xFFFF, 0);
-- SPEX(pci_svid, SSB_SPROM1_SVID, 0xFFFF, 0);
-- SPEX(pci_pid, SSB_SPROM1_PID, 0xFFFF, 0);
-+ if (out->revision == 3) { /* rev 3 moved MAC */
-+ loc[0] = SSB_SPROM3_IL0MAC;
-+ loc[1] = SSB_SPROM3_ET0MAC;
-+ loc[2] = SSB_SPROM3_ET1MAC;
-+ } else {
-+ loc[0] = SSB_SPROM1_IL0MAC;
-+ loc[1] = SSB_SPROM1_ET0MAC;
-+ loc[2] = SSB_SPROM1_ET1MAC;
++/*
++ * creates new preallocated space for given inode
++ */
++static int ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
++{
++ struct super_block *sb = ac->ac_sb;
++ struct ext4_prealloc_space *pa;
++ struct ext4_group_info *grp;
++ struct ext4_inode_info *ei;
++
++	/* preallocate only when found space is larger than requested */
++ BUG_ON(ac->ac_o_ex.fe_len >= ac->ac_b_ex.fe_len);
++ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
++ BUG_ON(!S_ISREG(ac->ac_inode->i_mode));
++
++ pa = kmem_cache_alloc(ext4_pspace_cachep, GFP_NOFS);
++ if (pa == NULL)
++ return -ENOMEM;
++
++ if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
++ int winl;
++ int wins;
++ int win;
++ int offs;
++
++ /* we can't allocate as much as normalizer wants.
++ * so, found space must get proper lstart
++ * to cover original request */
++ BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
++ BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
++
++ /* we're limited by original request in that
++ * logical block must be covered any way
++ * winl is window we can move our chunk within */
++ winl = ac->ac_o_ex.fe_logical - ac->ac_g_ex.fe_logical;
++
++ /* also, we should cover whole original request */
++ wins = ac->ac_b_ex.fe_len - ac->ac_o_ex.fe_len;
++
++ /* the smallest one defines real window */
++ win = min(winl, wins);
++
++ offs = ac->ac_o_ex.fe_logical % ac->ac_b_ex.fe_len;
++ if (offs && offs < win)
++ win = offs;
++
++ ac->ac_b_ex.fe_logical = ac->ac_o_ex.fe_logical - win;
++ BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
++ BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
+ }
- for (i = 0; i < 3; i++) {
-- v = in[SPOFF(SSB_SPROM1_IL0MAC) + i];
-+ v = in[SPOFF(loc[0]) + i];
- *(((__be16 *)out->il0mac) + i) = cpu_to_be16(v);
- }
- for (i = 0; i < 3; i++) {
-- v = in[SPOFF(SSB_SPROM1_ET0MAC) + i];
-+ v = in[SPOFF(loc[1]) + i];
- *(((__be16 *)out->et0mac) + i) = cpu_to_be16(v);
- }
- for (i = 0; i < 3; i++) {
-- v = in[SPOFF(SSB_SPROM1_ET1MAC) + i];
-+ v = in[SPOFF(loc[2]) + i];
- *(((__be16 *)out->et1mac) + i) = cpu_to_be16(v);
- }
- SPEX(et0phyaddr, SSB_SPROM1_ETHPHY, SSB_SPROM1_ETHPHY_ET0A, 0);
-@@ -324,9 +354,9 @@ static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
- SPEX(board_rev, SSB_SPROM1_BINF, SSB_SPROM1_BINF_BREV, 0);
- SPEX(country_code, SSB_SPROM1_BINF, SSB_SPROM1_BINF_CCODE,
- SSB_SPROM1_BINF_CCODE_SHIFT);
-- SPEX(antenna_a, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTA,
-+ SPEX(ant_available_a, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTA,
- SSB_SPROM1_BINF_ANTA_SHIFT);
-- SPEX(antenna_bg, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTBG,
-+ SPEX(ant_available_bg, SSB_SPROM1_BINF, SSB_SPROM1_BINF_ANTBG,
- SSB_SPROM1_BINF_ANTBG_SHIFT);
- SPEX(pa0b0, SSB_SPROM1_PA0B0, 0xFFFF, 0);
- SPEX(pa0b1, SSB_SPROM1_PA0B1, 0xFFFF, 0);
-@@ -347,100 +377,108 @@ static void sprom_extract_r1(struct ssb_sprom_r1 *out, const u16 *in)
- SSB_SPROM1_ITSSI_A_SHIFT);
- SPEX(itssi_bg, SSB_SPROM1_ITSSI, SSB_SPROM1_ITSSI_BG, 0);
- SPEX(boardflags_lo, SSB_SPROM1_BFLLO, 0xFFFF, 0);
-- SPEX(antenna_gain_a, SSB_SPROM1_AGAIN, SSB_SPROM1_AGAIN_A, 0);
-- SPEX(antenna_gain_bg, SSB_SPROM1_AGAIN, SSB_SPROM1_AGAIN_BG,
-- SSB_SPROM1_AGAIN_BG_SHIFT);
-- for (i = 0; i < 4; i++) {
-- v = in[SPOFF(SSB_SPROM1_OEM) + i];
-- *(((__le16 *)out->oem) + i) = cpu_to_le16(v);
-- }
-+ if (out->revision >= 2)
-+ SPEX(boardflags_hi, SSB_SPROM2_BFLHI, 0xFFFF, 0);
+
-+ /* Extract the antenna gain values. */
-+ gain = r123_extract_antgain(out->revision, in,
-+ SSB_SPROM1_AGAIN_BG,
-+ SSB_SPROM1_AGAIN_BG_SHIFT);
-+ out->antenna_gain.ghz24.a0 = gain;
-+ out->antenna_gain.ghz24.a1 = gain;
-+ out->antenna_gain.ghz24.a2 = gain;
-+ out->antenna_gain.ghz24.a3 = gain;
-+ gain = r123_extract_antgain(out->revision, in,
-+ SSB_SPROM1_AGAIN_A,
-+ SSB_SPROM1_AGAIN_A_SHIFT);
-+ out->antenna_gain.ghz5.a0 = gain;
-+ out->antenna_gain.ghz5.a1 = gain;
-+ out->antenna_gain.ghz5.a2 = gain;
-+ out->antenna_gain.ghz5.a3 = gain;
- }
-
--static void sprom_extract_r2(struct ssb_sprom_r2 *out, const u16 *in)
-+static void sprom_extract_r4(struct ssb_sprom *out, const u16 *in)
- {
- int i;
- u16 v;
-
-- SPEX(boardflags_hi, SSB_SPROM2_BFLHI, 0xFFFF, 0);
-- SPEX(maxpwr_a_hi, SSB_SPROM2_MAXP_A, SSB_SPROM2_MAXP_A_HI, 0);
-- SPEX(maxpwr_a_lo, SSB_SPROM2_MAXP_A, SSB_SPROM2_MAXP_A_LO,
-- SSB_SPROM2_MAXP_A_LO_SHIFT);
-- SPEX(pa1lob0, SSB_SPROM2_PA1LOB0, 0xFFFF, 0);
-- SPEX(pa1lob1, SSB_SPROM2_PA1LOB1, 0xFFFF, 0);
-- SPEX(pa1lob2, SSB_SPROM2_PA1LOB2, 0xFFFF, 0);
-- SPEX(pa1hib0, SSB_SPROM2_PA1HIB0, 0xFFFF, 0);
-- SPEX(pa1hib1, SSB_SPROM2_PA1HIB1, 0xFFFF, 0);
-- SPEX(pa1hib2, SSB_SPROM2_PA1HIB2, 0xFFFF, 0);
-- SPEX(ofdm_pwr_off, SSB_SPROM2_OPO, SSB_SPROM2_OPO_VALUE, 0);
-- for (i = 0; i < 4; i++) {
-- v = in[SPOFF(SSB_SPROM2_CCODE) + i];
-- *(((__le16 *)out->country_str) + i) = cpu_to_le16(v);
-+ /* extract the equivalent of the r1 variables */
-+ for (i = 0; i < 3; i++) {
-+ v = in[SPOFF(SSB_SPROM4_IL0MAC) + i];
-+ *(((__be16 *)out->il0mac) + i) = cpu_to_be16(v);
- }
-+ for (i = 0; i < 3; i++) {
-+ v = in[SPOFF(SSB_SPROM4_ET0MAC) + i];
-+ *(((__be16 *)out->et0mac) + i) = cpu_to_be16(v);
++ /* preallocation can change ac_b_ex, thus we store actually
++ * allocated blocks for history */
++ ac->ac_f_ex = ac->ac_b_ex;
++
++ pa->pa_lstart = ac->ac_b_ex.fe_logical;
++ pa->pa_pstart = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
++ pa->pa_len = ac->ac_b_ex.fe_len;
++ pa->pa_free = pa->pa_len;
++ atomic_set(&pa->pa_count, 1);
++ spin_lock_init(&pa->pa_lock);
++ pa->pa_deleted = 0;
++ pa->pa_linear = 0;
++
++ mb_debug("new inode pa %p: %llu/%u for %u\n", pa,
++ pa->pa_pstart, pa->pa_len, pa->pa_lstart);
++
++ ext4_mb_use_inode_pa(ac, pa);
++ atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
++
++ ei = EXT4_I(ac->ac_inode);
++ grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++
++ pa->pa_obj_lock = &ei->i_prealloc_lock;
++ pa->pa_inode = ac->ac_inode;
++
++ ext4_lock_group(sb, ac->ac_b_ex.fe_group);
++ list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
++ ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
++
++ spin_lock(pa->pa_obj_lock);
++ list_add_rcu(&pa->pa_inode_list, &ei->i_prealloc_list);
++ spin_unlock(pa->pa_obj_lock);
++
++ return 0;
++}
++
++/*
++ * creates new preallocated space for the locality group the inode belongs to
++ */
++static int ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
++{
++ struct super_block *sb = ac->ac_sb;
++ struct ext4_locality_group *lg;
++ struct ext4_prealloc_space *pa;
++ struct ext4_group_info *grp;
++
++	/* preallocate only when found space is larger than requested */
++ BUG_ON(ac->ac_o_ex.fe_len >= ac->ac_b_ex.fe_len);
++ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
++ BUG_ON(!S_ISREG(ac->ac_inode->i_mode));
++
++ BUG_ON(ext4_pspace_cachep == NULL);
++ pa = kmem_cache_alloc(ext4_pspace_cachep, GFP_NOFS);
++ if (pa == NULL)
++ return -ENOMEM;
++
++ /* preallocation can change ac_b_ex, thus we store actually
++ * allocated blocks for history */
++ ac->ac_f_ex = ac->ac_b_ex;
++
++ pa->pa_pstart = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
++ pa->pa_lstart = pa->pa_pstart;
++ pa->pa_len = ac->ac_b_ex.fe_len;
++ pa->pa_free = pa->pa_len;
++ atomic_set(&pa->pa_count, 1);
++ spin_lock_init(&pa->pa_lock);
++ pa->pa_deleted = 0;
++ pa->pa_linear = 1;
++
++ mb_debug("new group pa %p: %llu/%u for %u\n", pa,
++ pa->pa_pstart, pa->pa_len, pa->pa_lstart);
++
++ ext4_mb_use_group_pa(ac, pa);
++ atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
++
++ grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
++ lg = ac->ac_lg;
++ BUG_ON(lg == NULL);
++
++ pa->pa_obj_lock = &lg->lg_prealloc_lock;
++ pa->pa_inode = NULL;
++
++ ext4_lock_group(sb, ac->ac_b_ex.fe_group);
++ list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
++ ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
++
++ spin_lock(pa->pa_obj_lock);
++ list_add_tail_rcu(&pa->pa_inode_list, &lg->lg_prealloc_list);
++ spin_unlock(pa->pa_obj_lock);
++
++ return 0;
++}
++
++static int ext4_mb_new_preallocation(struct ext4_allocation_context *ac)
++{
++ int err;
++
++ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
++ err = ext4_mb_new_group_pa(ac);
++ else
++ err = ext4_mb_new_inode_pa(ac);
++ return err;
++}
++
++/*
++ * finds all unused blocks in on-disk bitmap, frees them in
++ * in-core bitmap and buddy.
++ * @pa must be unlinked from inode and group lists, so that
++ * nobody else can find/use it.
++ * the caller MUST hold group/inode locks.
++ * TODO: optimize the case when there are no in-core structures yet
++ */
++static int ext4_mb_release_inode_pa(struct ext4_buddy *e4b,
++ struct buffer_head *bitmap_bh,
++ struct ext4_prealloc_space *pa)
++{
++ struct ext4_allocation_context ac;
++ struct super_block *sb = e4b->bd_sb;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ unsigned long end;
++ unsigned long next;
++ ext4_group_t group;
++ ext4_grpblk_t bit;
++ sector_t start;
++ int err = 0;
++ int free = 0;
++
++ BUG_ON(pa->pa_deleted == 0);
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
++ BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
++ end = bit + pa->pa_len;
++
++ ac.ac_sb = sb;
++ ac.ac_inode = pa->pa_inode;
++ ac.ac_op = EXT4_MB_HISTORY_DISCARD;
++
++ while (bit < end) {
++ bit = ext4_find_next_zero_bit(bitmap_bh->b_data, end, bit);
++ if (bit >= end)
++ break;
++ next = ext4_find_next_bit(bitmap_bh->b_data, end, bit);
++ if (next > end)
++ next = end;
++ start = group * EXT4_BLOCKS_PER_GROUP(sb) + bit +
++ le32_to_cpu(sbi->s_es->s_first_data_block);
++ mb_debug(" free preallocated %u/%u in group %u\n",
++ (unsigned) start, (unsigned) next - bit,
++ (unsigned) group);
++ free += next - bit;
++
++ ac.ac_b_ex.fe_group = group;
++ ac.ac_b_ex.fe_start = bit;
++ ac.ac_b_ex.fe_len = next - bit;
++ ac.ac_b_ex.fe_logical = 0;
++ ext4_mb_store_history(&ac);
++
++ mb_free_blocks(pa->pa_inode, e4b, bit, next - bit);
++ bit = next + 1;
+ }
-+ for (i = 0; i < 3; i++) {
-+ v = in[SPOFF(SSB_SPROM4_ET1MAC) + i];
-+ *(((__be16 *)out->et1mac) + i) = cpu_to_be16(v);
++ if (free != pa->pa_free) {
++ printk(KERN_ERR "pa %p: logic %lu, phys. %lu, len %lu\n",
++ pa, (unsigned long) pa->pa_lstart,
++ (unsigned long) pa->pa_pstart,
++ (unsigned long) pa->pa_len);
++ printk(KERN_ERR "free %u, pa_free %u\n", free, pa->pa_free);
+ }
-+ SPEX(et0phyaddr, SSB_SPROM4_ETHPHY, SSB_SPROM4_ETHPHY_ET0A, 0);
-+ SPEX(et1phyaddr, SSB_SPROM4_ETHPHY, SSB_SPROM4_ETHPHY_ET1A,
-+ SSB_SPROM4_ETHPHY_ET1A_SHIFT);
-+ SPEX(country_code, SSB_SPROM4_CCODE, 0xFFFF, 0);
-+ SPEX(boardflags_lo, SSB_SPROM4_BFLLO, 0xFFFF, 0);
-+ SPEX(boardflags_hi, SSB_SPROM4_BFLHI, 0xFFFF, 0);
-+ SPEX(ant_available_a, SSB_SPROM4_ANTAVAIL, SSB_SPROM4_ANTAVAIL_A,
-+ SSB_SPROM4_ANTAVAIL_A_SHIFT);
-+ SPEX(ant_available_bg, SSB_SPROM4_ANTAVAIL, SSB_SPROM4_ANTAVAIL_BG,
-+ SSB_SPROM4_ANTAVAIL_BG_SHIFT);
-+ SPEX(maxpwr_bg, SSB_SPROM4_MAXP_BG, SSB_SPROM4_MAXP_BG_MASK, 0);
-+ SPEX(itssi_bg, SSB_SPROM4_MAXP_BG, SSB_SPROM4_ITSSI_BG,
-+ SSB_SPROM4_ITSSI_BG_SHIFT);
-+ SPEX(maxpwr_a, SSB_SPROM4_MAXP_A, SSB_SPROM4_MAXP_A_MASK, 0);
-+ SPEX(itssi_a, SSB_SPROM4_MAXP_A, SSB_SPROM4_ITSSI_A,
-+ SSB_SPROM4_ITSSI_A_SHIFT);
-+ SPEX(gpio0, SSB_SPROM4_GPIOA, SSB_SPROM4_GPIOA_P0, 0);
-+ SPEX(gpio1, SSB_SPROM4_GPIOA, SSB_SPROM4_GPIOA_P1,
-+ SSB_SPROM4_GPIOA_P1_SHIFT);
-+ SPEX(gpio2, SSB_SPROM4_GPIOB, SSB_SPROM4_GPIOB_P2, 0);
-+ SPEX(gpio3, SSB_SPROM4_GPIOB, SSB_SPROM4_GPIOB_P3,
-+ SSB_SPROM4_GPIOB_P3_SHIFT);
++ BUG_ON(free != pa->pa_free);
++ atomic_add(free, &sbi->s_mb_discarded);
+
-+ /* Extract the antenna gain values. */
-+ SPEX(antenna_gain.ghz24.a0, SSB_SPROM4_AGAIN01,
-+ SSB_SPROM4_AGAIN0, SSB_SPROM4_AGAIN0_SHIFT);
-+ SPEX(antenna_gain.ghz24.a1, SSB_SPROM4_AGAIN01,
-+ SSB_SPROM4_AGAIN1, SSB_SPROM4_AGAIN1_SHIFT);
-+ SPEX(antenna_gain.ghz24.a2, SSB_SPROM4_AGAIN23,
-+ SSB_SPROM4_AGAIN2, SSB_SPROM4_AGAIN2_SHIFT);
-+ SPEX(antenna_gain.ghz24.a3, SSB_SPROM4_AGAIN23,
-+ SSB_SPROM4_AGAIN3, SSB_SPROM4_AGAIN3_SHIFT);
-+ memcpy(&out->antenna_gain.ghz5, &out->antenna_gain.ghz24,
-+ sizeof(out->antenna_gain.ghz5));
++ return err;
++}
+
-+ /* TODO - get remaining rev 4 stuff needed */
- }
-
--static void sprom_extract_r3(struct ssb_sprom_r3 *out, const u16 *in)
--{
-- out->ofdmapo = (in[SPOFF(SSB_SPROM3_OFDMAPO) + 0] & 0xFF00) >> 8;
-- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 0] & 0x00FF) << 8;
-- out->ofdmapo <<= 16;
-- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 1] & 0xFF00) >> 8;
-- out->ofdmapo |= (in[SPOFF(SSB_SPROM3_OFDMAPO) + 1] & 0x00FF) << 8;
--
-- out->ofdmalpo = (in[SPOFF(SSB_SPROM3_OFDMALPO) + 0] & 0xFF00) >> 8;
-- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 0] & 0x00FF) << 8;
-- out->ofdmalpo <<= 16;
-- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 1] & 0xFF00) >> 8;
-- out->ofdmalpo |= (in[SPOFF(SSB_SPROM3_OFDMALPO) + 1] & 0x00FF) << 8;
--
-- out->ofdmahpo = (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 0] & 0xFF00) >> 8;
-- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 0] & 0x00FF) << 8;
-- out->ofdmahpo <<= 16;
-- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 1] & 0xFF00) >> 8;
-- out->ofdmahpo |= (in[SPOFF(SSB_SPROM3_OFDMAHPO) + 1] & 0x00FF) << 8;
--
-- SPEX(gpioldc_on_cnt, SSB_SPROM3_GPIOLDC, SSB_SPROM3_GPIOLDC_ON,
-- SSB_SPROM3_GPIOLDC_ON_SHIFT);
-- SPEX(gpioldc_off_cnt, SSB_SPROM3_GPIOLDC, SSB_SPROM3_GPIOLDC_OFF,
-- SSB_SPROM3_GPIOLDC_OFF_SHIFT);
-- SPEX(cckpo_1M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_1M, 0);
-- SPEX(cckpo_2M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_2M,
-- SSB_SPROM3_CCKPO_2M_SHIFT);
-- SPEX(cckpo_55M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_55M,
-- SSB_SPROM3_CCKPO_55M_SHIFT);
-- SPEX(cckpo_11M, SSB_SPROM3_CCKPO, SSB_SPROM3_CCKPO_11M,
-- SSB_SPROM3_CCKPO_11M_SHIFT);
--
-- out->ofdmgpo = (in[SPOFF(SSB_SPROM3_OFDMGPO) + 0] & 0xFF00) >> 8;
-- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 0] & 0x00FF) << 8;
-- out->ofdmgpo <<= 16;
-- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 1] & 0xFF00) >> 8;
-- out->ofdmgpo |= (in[SPOFF(SSB_SPROM3_OFDMGPO) + 1] & 0x00FF) << 8;
--}
--
--static int sprom_extract(struct ssb_bus *bus,
-- struct ssb_sprom *out, const u16 *in)
-+static int sprom_extract(struct ssb_bus *bus, struct ssb_sprom *out,
-+ const u16 *in, u16 size)
- {
- memset(out, 0, sizeof(*out));
-
-- SPEX(revision, SSB_SPROM_REVISION, SSB_SPROM_REVISION_REV, 0);
-- SPEX(crc, SSB_SPROM_REVISION, SSB_SPROM_REVISION_CRC,
-- SSB_SPROM_REVISION_CRC_SHIFT);
--
-+ out->revision = in[size - 1] & 0x00FF;
-+ ssb_dprintk(KERN_DEBUG PFX "SPROM revision %d detected.\n", out->revision);
- if ((bus->chip_id & 0xFF00) == 0x4400) {
- /* Workaround: The BCM44XX chip has a stupid revision
- * number stored in the SPROM.
- * Always extract r1. */
-- sprom_extract_r1(&out->r1, in);
-+ out->revision = 1;
-+ sprom_extract_r123(out, in);
-+ } else if (bus->chip_id == 0x4321) {
-+ /* the BCM4328 has a chipid == 0x4321 and a rev 4 SPROM */
-+ out->revision = 4;
-+ sprom_extract_r4(out, in);
- } else {
- if (out->revision == 0)
- goto unsupported;
-- if (out->revision >= 1 && out->revision <= 3)
-- sprom_extract_r1(&out->r1, in);
-- if (out->revision >= 2 && out->revision <= 3)
-- sprom_extract_r2(&out->r2, in);
-- if (out->revision == 3)
-- sprom_extract_r3(&out->r3, in);
-- if (out->revision >= 4)
-+ if (out->revision >= 1 && out->revision <= 3) {
-+ sprom_extract_r123(out, in);
-+ }
-+ if (out->revision == 4)
-+ sprom_extract_r4(out, in);
-+ if (out->revision >= 5)
- goto unsupported;
- }
-
-@@ -448,7 +486,7 @@ static int sprom_extract(struct ssb_bus *bus,
- unsupported:
- ssb_printk(KERN_WARNING PFX "Unsupported SPROM revision %d "
- "detected. Will extract v1\n", out->revision);
-- sprom_extract_r1(&out->r1, in);
-+ sprom_extract_r123(out, in);
- return 0;
- }
-
-@@ -458,16 +496,29 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
- int err = -ENOMEM;
- u16 *buf;
-
-- buf = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
-+ buf = kcalloc(SSB_SPROMSIZE_WORDS_R123, sizeof(u16), GFP_KERNEL);
- if (!buf)
- goto out;
-+ bus->sprom_size = SSB_SPROMSIZE_WORDS_R123;
- sprom_do_read(bus, buf);
-- err = sprom_check_crc(buf);
-+ err = sprom_check_crc(buf, bus->sprom_size);
- if (err) {
-- ssb_printk(KERN_WARNING PFX
-- "WARNING: Invalid SPROM CRC (corrupt SPROM)\n");
-+ /* check for rev 4 sprom - has special signature */
-+ if (buf[32] == 0x5372) {
-+ kfree(buf);
-+ buf = kcalloc(SSB_SPROMSIZE_WORDS_R4, sizeof(u16),
-+ GFP_KERNEL);
-+ if (!buf)
-+ goto out;
-+ bus->sprom_size = SSB_SPROMSIZE_WORDS_R4;
-+ sprom_do_read(bus, buf);
-+ err = sprom_check_crc(buf, bus->sprom_size);
-+ }
-+ if (err)
-+ ssb_printk(KERN_WARNING PFX "WARNING: Invalid"
-+ " SPROM CRC (corrupt SPROM)\n");
- }
-- err = sprom_extract(bus, sprom, buf);
-+ err = sprom_extract(bus, sprom, buf, bus->sprom_size);
-
- kfree(buf);
- out:
-@@ -581,29 +632,28 @@ const struct ssb_bus_ops ssb_pci_ops = {
- .write32 = ssb_pci_write32,
- };
-
--static int sprom2hex(const u16 *sprom, char *buf, size_t buf_len)
-+static int sprom2hex(const u16 *sprom, char *buf, size_t buf_len, u16 size)
- {
- int i, pos = 0;
-
-- for (i = 0; i < SSB_SPROMSIZE_WORDS; i++) {
-+ for (i = 0; i < size; i++)
- pos += snprintf(buf + pos, buf_len - pos - 1,
- "%04X", swab16(sprom[i]) & 0xFFFF);
-- }
- pos += snprintf(buf + pos, buf_len - pos - 1, "\n");
-
- return pos + 1;
- }
-
--static int hex2sprom(u16 *sprom, const char *dump, size_t len)
-+static int hex2sprom(u16 *sprom, const char *dump, size_t len, u16 size)
- {
- char tmp[5] = { 0 };
- int cnt = 0;
- unsigned long parsed;
-
-- if (len < SSB_SPROMSIZE_BYTES * 2)
-+ if (len < size * 2)
- return -EINVAL;
-
-- while (cnt < SSB_SPROMSIZE_WORDS) {
-+ while (cnt < size) {
- memcpy(tmp, dump, 4);
- dump += 4;
- parsed = simple_strtoul(tmp, NULL, 16);
-@@ -627,7 +677,7 @@ static ssize_t ssb_pci_attr_sprom_show(struct device *pcidev,
- if (!bus)
- goto out;
- err = -ENOMEM;
-- sprom = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
-+ sprom = kcalloc(bus->sprom_size, sizeof(u16), GFP_KERNEL);
- if (!sprom)
- goto out;
-
-@@ -640,7 +690,7 @@ static ssize_t ssb_pci_attr_sprom_show(struct device *pcidev,
- sprom_do_read(bus, sprom);
- mutex_unlock(&bus->pci_sprom_mutex);
-
-- count = sprom2hex(sprom, buf, PAGE_SIZE);
-+ count = sprom2hex(sprom, buf, PAGE_SIZE, bus->sprom_size);
- err = 0;
-
- out_kfree:
-@@ -662,15 +712,15 @@ static ssize_t ssb_pci_attr_sprom_store(struct device *pcidev,
- if (!bus)
- goto out;
- err = -ENOMEM;
-- sprom = kcalloc(SSB_SPROMSIZE_WORDS, sizeof(u16), GFP_KERNEL);
-+ sprom = kcalloc(bus->sprom_size, sizeof(u16), GFP_KERNEL);
- if (!sprom)
- goto out;
-- err = hex2sprom(sprom, buf, count);
-+ err = hex2sprom(sprom, buf, count, bus->sprom_size);
- if (err) {
- err = -EINVAL;
- goto out_kfree;
- }
-- err = sprom_check_crc(sprom);
-+ err = sprom_check_crc(sprom, bus->sprom_size);
- if (err) {
- err = -EINVAL;
- goto out_kfree;
-diff --git a/drivers/ssb/pcmcia.c b/drivers/ssb/pcmcia.c
-index bb44a76..46816cd 100644
---- a/drivers/ssb/pcmcia.c
-+++ b/drivers/ssb/pcmcia.c
-@@ -94,7 +94,6 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
- struct ssb_device *dev)
- {
- int err;
-- unsigned long flags;
-
- #if SSB_VERBOSE_PCMCIACORESWITCH_DEBUG
- ssb_printk(KERN_INFO PFX
-@@ -103,11 +102,9 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
- dev->core_index);
- #endif
-
-- spin_lock_irqsave(&bus->bar_lock, flags);
- err = ssb_pcmcia_switch_coreidx(bus, dev->core_index);
- if (!err)
- bus->mapped_device = dev;
-- spin_unlock_irqrestore(&bus->bar_lock, flags);
-
- return err;
- }
-@@ -115,14 +112,12 @@ int ssb_pcmcia_switch_core(struct ssb_bus *bus,
- int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
- {
- int attempts = 0;
-- unsigned long flags;
- conf_reg_t reg;
-- int res, err = 0;
-+ int res;
-
- SSB_WARN_ON((seg != 0) && (seg != 1));
- reg.Offset = 0x34;
- reg.Function = 0;
-- spin_lock_irqsave(&bus->bar_lock, flags);
- while (1) {
- reg.Action = CS_WRITE;
- reg.Value = seg;
-@@ -143,13 +138,11 @@ int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
- udelay(10);
- }
- bus->mapped_pcmcia_seg = seg;
--out_unlock:
-- spin_unlock_irqrestore(&bus->bar_lock, flags);
-- return err;
++static int ext4_mb_release_group_pa(struct ext4_buddy *e4b,
++ struct ext4_prealloc_space *pa)
++{
++ struct ext4_allocation_context ac;
++ struct super_block *sb = e4b->bd_sb;
++ ext4_group_t group;
++ ext4_grpblk_t bit;
++
++ ac.ac_op = EXT4_MB_HISTORY_DISCARD;
++
++ BUG_ON(pa->pa_deleted == 0);
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
++ BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
++ mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len);
++ atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded);
++
++ ac.ac_sb = sb;
++ ac.ac_inode = NULL;
++ ac.ac_b_ex.fe_group = group;
++ ac.ac_b_ex.fe_start = bit;
++ ac.ac_b_ex.fe_len = pa->pa_len;
++ ac.ac_b_ex.fe_logical = 0;
++ ext4_mb_store_history(&ac);
+
+ return 0;
- error:
- ssb_printk(KERN_ERR PFX "Failed to switch pcmcia segment\n");
-- err = -ENODEV;
-- goto out_unlock;
-+ return -ENODEV;
- }
-
- static int select_core_and_segment(struct ssb_device *dev,
-@@ -182,22 +175,33 @@ static int select_core_and_segment(struct ssb_device *dev,
- static u16 ssb_pcmcia_read16(struct ssb_device *dev, u16 offset)
- {
- struct ssb_bus *bus = dev->bus;
-+ unsigned long flags;
-+ int err;
-+ u16 value = 0xFFFF;
-
-- if (unlikely(select_core_and_segment(dev, &offset)))
-- return 0xFFFF;
-+ spin_lock_irqsave(&bus->bar_lock, flags);
-+ err = select_core_and_segment(dev, &offset);
-+ if (likely(!err))
-+ value = readw(bus->mmio + offset);
-+ spin_unlock_irqrestore(&bus->bar_lock, flags);
-
-- return readw(bus->mmio + offset);
-+ return value;
- }
-
- static u32 ssb_pcmcia_read32(struct ssb_device *dev, u16 offset)
- {
- struct ssb_bus *bus = dev->bus;
-- u32 lo, hi;
-+ unsigned long flags;
++}
++
++/*
++ * releases all preallocations in given group
++ *
++ * first, we need to decide discard policy:
++ * - when do we discard
++ * 1) ENOSPC
++ * - how many do we discard
++ * 1) how many requested
++ */
++static int ext4_mb_discard_group_preallocations(struct super_block *sb,
++ ext4_group_t group, int needed)
++{
++ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
++ struct buffer_head *bitmap_bh = NULL;
++ struct ext4_prealloc_space *pa, *tmp;
++ struct list_head list;
++ struct ext4_buddy e4b;
+ int err;
-+ u32 lo = 0xFFFFFFFF, hi = 0xFFFFFFFF;
-
-- if (unlikely(select_core_and_segment(dev, &offset)))
-- return 0xFFFFFFFF;
-- lo = readw(bus->mmio + offset);
-- hi = readw(bus->mmio + offset + 2);
-+ spin_lock_irqsave(&bus->bar_lock, flags);
-+ err = select_core_and_segment(dev, &offset);
-+ if (likely(!err)) {
-+ lo = readw(bus->mmio + offset);
-+ hi = readw(bus->mmio + offset + 2);
++ int busy = 0;
++ int free = 0;
++
++ mb_debug("discard preallocation for group %lu\n", group);
++
++ if (list_empty(&grp->bb_prealloc_list))
++ return 0;
++
++ bitmap_bh = read_block_bitmap(sb, group);
++ if (bitmap_bh == NULL) {
++ /* error handling here */
++ ext4_mb_release_desc(&e4b);
++ BUG_ON(bitmap_bh == NULL);
+ }
-+ spin_unlock_irqrestore(&bus->bar_lock, flags);
-
- return (lo | (hi << 16));
- }
-@@ -205,22 +209,31 @@ static u32 ssb_pcmcia_read32(struct ssb_device *dev, u16 offset)
- static void ssb_pcmcia_write16(struct ssb_device *dev, u16 offset, u16 value)
- {
- struct ssb_bus *bus = dev->bus;
-+ unsigned long flags;
-+ int err;
-
-- if (unlikely(select_core_and_segment(dev, &offset)))
-- return;
-- writew(value, bus->mmio + offset);
-+ spin_lock_irqsave(&bus->bar_lock, flags);
-+ err = select_core_and_segment(dev, &offset);
-+ if (likely(!err))
-+ writew(value, bus->mmio + offset);
-+ mmiowb();
-+ spin_unlock_irqrestore(&bus->bar_lock, flags);
- }
-
- static void ssb_pcmcia_write32(struct ssb_device *dev, u16 offset, u32 value)
- {
- struct ssb_bus *bus = dev->bus;
-+ unsigned long flags;
-+ int err;
-
-- if (unlikely(select_core_and_segment(dev, &offset)))
-- return;
-- writeb((value & 0xFF000000) >> 24, bus->mmio + offset + 3);
-- writeb((value & 0x00FF0000) >> 16, bus->mmio + offset + 2);
-- writeb((value & 0x0000FF00) >> 8, bus->mmio + offset + 1);
-- writeb((value & 0x000000FF) >> 0, bus->mmio + offset + 0);
-+ spin_lock_irqsave(&bus->bar_lock, flags);
-+ err = select_core_and_segment(dev, &offset);
-+ if (likely(!err)) {
-+ writew((value & 0x0000FFFF), bus->mmio + offset);
-+ writew(((value & 0xFFFF0000) >> 16), bus->mmio + offset + 2);
++
++ err = ext4_mb_load_buddy(sb, group, &e4b);
++ BUG_ON(err != 0); /* error handling here */
++
++ if (needed == 0)
++ needed = EXT4_BLOCKS_PER_GROUP(sb) + 1;
++
++ grp = ext4_get_group_info(sb, group);
++ INIT_LIST_HEAD(&list);
++
++repeat:
++ ext4_lock_group(sb, group);
++ list_for_each_entry_safe(pa, tmp,
++ &grp->bb_prealloc_list, pa_group_list) {
++ spin_lock(&pa->pa_lock);
++ if (atomic_read(&pa->pa_count)) {
++ spin_unlock(&pa->pa_lock);
++ busy = 1;
++ continue;
++ }
++ if (pa->pa_deleted) {
++ spin_unlock(&pa->pa_lock);
++ continue;
++ }
++
++ /* seems this one can be freed ... */
++ pa->pa_deleted = 1;
++
++ /* we can trust pa_free ... */
++ free += pa->pa_free;
++
++ spin_unlock(&pa->pa_lock);
++
++ list_del(&pa->pa_group_list);
++ list_add(&pa->u.pa_tmp_list, &list);
+ }
-+ mmiowb();
-+ spin_unlock_irqrestore(&bus->bar_lock, flags);
- }
-
- /* Not "static", as it's used in main.c */
-@@ -231,10 +244,12 @@ const struct ssb_bus_ops ssb_pcmcia_ops = {
- .write32 = ssb_pcmcia_write32,
- };
-
-+#include <linux/etherdevice.h>
- int ssb_pcmcia_get_invariants(struct ssb_bus *bus,
- struct ssb_init_invariants *iv)
- {
- //TODO
-+ random_ether_addr(iv->sprom.il0mac);
- return 0;
- }
-
-diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
-index 865f32b..cc246fa 100644
---- a/drivers/uio/uio.c
-+++ b/drivers/uio/uio.c
-@@ -34,12 +34,12 @@ struct uio_device {
- wait_queue_head_t wait;
- int vma_count;
- struct uio_info *info;
-- struct kset map_attr_kset;
-+ struct kobject *map_dir;
- };
-
- static int uio_major;
- static DEFINE_IDR(uio_idr);
--static struct file_operations uio_fops;
-+static const struct file_operations uio_fops;
-
- /* UIO class infrastructure */
- static struct uio_class {
-@@ -51,47 +51,48 @@ static struct uio_class {
- * attributes
- */
-
--static struct attribute attr_addr = {
-- .name = "addr",
-- .mode = S_IRUGO,
-+struct uio_map {
-+ struct kobject kobj;
-+ struct uio_mem *mem;
- };
-+#define to_map(map) container_of(map, struct uio_map, kobj)
-
--static struct attribute attr_size = {
-- .name = "size",
-- .mode = S_IRUGO,
--};
-
--static struct attribute* map_attrs[] = {
-- &attr_addr, &attr_size, NULL
--};
--
--static ssize_t map_attr_show(struct kobject *kobj, struct attribute *attr,
-+static ssize_t map_attr_show(struct kobject *kobj, struct kobj_attribute *attr,
- char *buf)
- {
-- struct uio_mem *mem = container_of(kobj, struct uio_mem, kobj);
-+ struct uio_map *map = to_map(kobj);
-+ struct uio_mem *mem = map->mem;
-
-- if (strncmp(attr->name,"addr",4) == 0)
-+ if (strncmp(attr->attr.name, "addr", 4) == 0)
- return sprintf(buf, "0x%lx\n", mem->addr);
-
-- if (strncmp(attr->name,"size",4) == 0)
-+ if (strncmp(attr->attr.name, "size", 4) == 0)
- return sprintf(buf, "0x%lx\n", mem->size);
-
- return -ENODEV;
- }
-
--static void map_attr_release(struct kobject *kobj)
--{
-- /* TODO ??? */
--}
-+static struct kobj_attribute attr_attribute =
-+ __ATTR(addr, S_IRUGO, map_attr_show, NULL);
-+static struct kobj_attribute size_attribute =
-+ __ATTR(size, S_IRUGO, map_attr_show, NULL);
-
--static struct sysfs_ops map_attr_ops = {
-- .show = map_attr_show,
-+static struct attribute *attrs[] = {
-+ &attr_attribute.attr,
-+ &size_attribute.attr,
-+ NULL, /* need to NULL terminate the list of attributes */
- };
-
-+static void map_release(struct kobject *kobj)
-+{
-+ struct uio_map *map = to_map(kobj);
-+ kfree(map);
++
++ /* if we still need more blocks and some PAs were used, try again */
++ if (free < needed && busy) {
++ busy = 0;
++ ext4_unlock_group(sb, group);
++ /*
++ * Yield the CPU here so that we don't get soft lockup
++ * in non preempt case.
++ */
++ yield();
++ goto repeat;
++ }
++
++ /* found anything to free? */
++ if (list_empty(&list)) {
++ BUG_ON(free != 0);
++ goto out;
++ }
++
++ /* now free all selected PAs */
++ list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
++
++ /* remove from object (inode or locality group) */
++ spin_lock(pa->pa_obj_lock);
++ list_del_rcu(&pa->pa_inode_list);
++ spin_unlock(pa->pa_obj_lock);
++
++ if (pa->pa_linear)
++ ext4_mb_release_group_pa(&e4b, pa);
++ else
++ ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
++
++ list_del(&pa->u.pa_tmp_list);
++ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
++ }
++
++out:
++ ext4_unlock_group(sb, group);
++ ext4_mb_release_desc(&e4b);
++ put_bh(bitmap_bh);
++ return free;
+}
+
- static struct kobj_type map_attr_type = {
-- .release = map_attr_release,
-- .sysfs_ops = &map_attr_ops,
-- .default_attrs = map_attrs,
-+ .release = map_release,
-+ .default_attrs = attrs,
- };
-
- static ssize_t show_name(struct device *dev,
-@@ -148,6 +149,7 @@ static int uio_dev_add_attributes(struct uio_device *idev)
- int mi;
- int map_found = 0;
- struct uio_mem *mem;
-+ struct uio_map *map;
-
- ret = sysfs_create_group(&idev->dev->kobj, &uio_attr_grp);
- if (ret)
-@@ -159,31 +161,34 @@ static int uio_dev_add_attributes(struct uio_device *idev)
- break;
- if (!map_found) {
- map_found = 1;
-- kobject_set_name(&idev->map_attr_kset.kobj,"maps");
-- idev->map_attr_kset.ktype = &map_attr_type;
-- idev->map_attr_kset.kobj.parent = &idev->dev->kobj;
-- ret = kset_register(&idev->map_attr_kset);
-- if (ret)
-- goto err_remove_group;
-+ idev->map_dir = kobject_create_and_add("maps",
-+ &idev->dev->kobj);
-+ if (!idev->map_dir)
-+ goto err;
- }
-- kobject_init(&mem->kobj);
-- kobject_set_name(&mem->kobj,"map%d",mi);
-- mem->kobj.parent = &idev->map_attr_kset.kobj;
-- mem->kobj.kset = &idev->map_attr_kset;
-- ret = kobject_add(&mem->kobj);
-+ map = kzalloc(sizeof(*map), GFP_KERNEL);
-+ if (!map)
-+ goto err;
-+ kobject_init(&map->kobj, &map_attr_type);
-+ map->mem = mem;
-+ mem->map = map;
-+ ret = kobject_add(&map->kobj, idev->map_dir, "map%d", mi);
-+ if (ret)
-+ goto err;
-+ ret = kobject_uevent(&map->kobj, KOBJ_ADD);
- if (ret)
-- goto err_remove_maps;
-+ goto err;
- }
-
- return 0;
-
--err_remove_maps:
-+err:
- for (mi--; mi>=0; mi--) {
- mem = &idev->info->mem[mi];
-- kobject_unregister(&mem->kobj);
-+ map = mem->map;
-+ kobject_put(&map->kobj);
- }
-- kset_unregister(&idev->map_attr_kset); /* Needed ? */
--err_remove_group:
-+ kobject_put(idev->map_dir);
- sysfs_remove_group(&idev->dev->kobj, &uio_attr_grp);
- err_group:
- dev_err(idev->dev, "error creating sysfs files (%d)\n", ret);
-@@ -198,9 +203,9 @@ static void uio_dev_del_attributes(struct uio_device *idev)
- mem = &idev->info->mem[mi];
- if (mem->size == 0)
- break;
-- kobject_unregister(&mem->kobj);
-+ kobject_put(&mem->map->kobj);
- }
-- kset_unregister(&idev->map_attr_kset);
-+ kobject_put(idev->map_dir);
- sysfs_remove_group(&idev->dev->kobj, &uio_attr_grp);
- }
-
-@@ -503,7 +508,7 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
- }
- }
-
--static struct file_operations uio_fops = {
-+static const struct file_operations uio_fops = {
- .owner = THIS_MODULE,
- .open = uio_open,
- .release = uio_release,
-diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig
-index 7580aa5..7a64990 100644
---- a/drivers/usb/Kconfig
-+++ b/drivers/usb/Kconfig
-@@ -33,6 +33,7 @@ config USB_ARCH_HAS_OHCI
- default y if ARCH_LH7A404
- default y if ARCH_S3C2410
- default y if PXA27x
-+ default y if PXA3xx
- default y if ARCH_EP93XX
- default y if ARCH_AT91
- default y if ARCH_PNX4008
-diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
-index c51f8e9..7c3aaa9 100644
---- a/drivers/usb/core/driver.c
-+++ b/drivers/usb/core/driver.c
-@@ -91,8 +91,8 @@ static int usb_create_newid_file(struct usb_driver *usb_drv)
- goto exit;
-
- if (usb_drv->probe != NULL)
-- error = sysfs_create_file(&usb_drv->drvwrap.driver.kobj,
-- &driver_attr_new_id.attr);
-+ error = driver_create_file(&usb_drv->drvwrap.driver,
-+ &driver_attr_new_id);
- exit:
- return error;
- }
-@@ -103,8 +103,8 @@ static void usb_remove_newid_file(struct usb_driver *usb_drv)
- return;
-
- if (usb_drv->probe != NULL)
-- sysfs_remove_file(&usb_drv->drvwrap.driver.kobj,
-- &driver_attr_new_id.attr);
-+ driver_remove_file(&usb_drv->drvwrap.driver,
-+ &driver_attr_new_id);
- }
-
- static void usb_free_dynids(struct usb_driver *usb_drv)
-diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
-index f81d08d..77a3759 100644
---- a/drivers/usb/gadget/Kconfig
-+++ b/drivers/usb/gadget/Kconfig
-@@ -308,7 +308,7 @@ config USB_S3C2410_DEBUG
-
- config USB_GADGET_AT91
- boolean "AT91 USB Device Port"
-- depends on ARCH_AT91 && !ARCH_AT91SAM9RL
-+ depends on ARCH_AT91 && !ARCH_AT91SAM9RL && !ARCH_AT91CAP9
- select USB_GADGET_SELECTED
- help
- Many Atmel AT91 processors (such as the AT91RM2000) have a
-diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
-index ecfe800..ddd4ee1 100644
---- a/drivers/usb/host/ohci-hcd.c
-+++ b/drivers/usb/host/ohci-hcd.c
-@@ -997,7 +997,7 @@ MODULE_LICENSE ("GPL");
- #define PLATFORM_DRIVER ohci_hcd_lh7a404_driver
- #endif
-
--#ifdef CONFIG_PXA27x
-+#if defined(CONFIG_PXA27x) || defined(CONFIG_PXA3xx)
- #include "ohci-pxa27x.c"
- #define PLATFORM_DRIVER ohci_hcd_pxa27x_driver
- #endif
-diff --git a/drivers/usb/host/ohci-omap.c b/drivers/usb/host/ohci-omap.c
-index 5cfa3d1..74e1f4b 100644
---- a/drivers/usb/host/ohci-omap.c
-+++ b/drivers/usb/host/ohci-omap.c
-@@ -47,7 +47,7 @@
- #endif
-
- #ifdef CONFIG_TPS65010
--#include <asm/arch/tps65010.h>
-+#include <linux/i2c/tps65010.h>
- #else
-
- #define LOW 0
-diff --git a/drivers/usb/host/ohci-pnx4008.c b/drivers/usb/host/ohci-pnx4008.c
-index ca2a6ab..6c52c66 100644
---- a/drivers/usb/host/ohci-pnx4008.c
-+++ b/drivers/usb/host/ohci-pnx4008.c
-@@ -112,9 +112,9 @@ static int isp1301_detach(struct i2c_client *client);
- static int isp1301_command(struct i2c_client *client, unsigned int cmd,
- void *arg);
-
--static unsigned short normal_i2c[] =
-+static const unsigned short normal_i2c[] =
- { ISP1301_I2C_ADDR, ISP1301_I2C_ADDR + 1, I2C_CLIENT_END };
--static unsigned short dummy_i2c_addrlist[] = { I2C_CLIENT_END };
-+static const unsigned short dummy_i2c_addrlist[] = { I2C_CLIENT_END };
-
- static struct i2c_client_address_data addr_data = {
- .normal_i2c = normal_i2c,
-@@ -123,7 +123,6 @@ static struct i2c_client_address_data addr_data = {
- };
-
- struct i2c_driver isp1301_driver = {
-- .id = I2C_DRIVERID_I2CDEV, /* Fake Id */
- .class = I2C_CLASS_HWMON,
- .attach_adapter = isp1301_probe,
- .detach_client = isp1301_detach,
-diff --git a/drivers/usb/host/ohci-pxa27x.c b/drivers/usb/host/ohci-pxa27x.c
-index 23d2fe5..ff9a798 100644
---- a/drivers/usb/host/ohci-pxa27x.c
-+++ b/drivers/usb/host/ohci-pxa27x.c
-@@ -22,6 +22,7 @@
- #include <linux/device.h>
- #include <linux/signal.h>
- #include <linux/platform_device.h>
-+#include <linux/clk.h>
-
- #include <asm/mach-types.h>
- #include <asm/hardware.h>
-@@ -32,6 +33,8 @@
-
- #define UHCRHPS(x) __REG2( 0x4C000050, (x)<<2 )
-
-+static struct clk *usb_clk;
++/*
++ * releases all non-used preallocated blocks for given inode
++ *
++ * It's important to discard preallocations under i_data_sem
++ * We don't want another block to be served from the prealloc
++ * space when we are discarding the inode prealloc space.
++ *
++ * FIXME!! Make sure it is valid at all the call sites
++ */
++void ext4_mb_discard_inode_preallocations(struct inode *inode)
++{
++ struct ext4_inode_info *ei = EXT4_I(inode);
++ struct super_block *sb = inode->i_sb;
++ struct buffer_head *bitmap_bh = NULL;
++ struct ext4_prealloc_space *pa, *tmp;
++ ext4_group_t group = 0;
++ struct list_head list;
++ struct ext4_buddy e4b;
++ int err;
++
++ if (!test_opt(sb, MBALLOC) || !S_ISREG(inode->i_mode)) {
++ /*BUG_ON(!list_empty(&ei->i_prealloc_list));*/
++ return;
++ }
++
++ mb_debug("discard preallocation for inode %lu\n", inode->i_ino);
++
++ INIT_LIST_HEAD(&list);
++
++repeat:
++ /* first, collect all pa's in the inode */
++ spin_lock(&ei->i_prealloc_lock);
++ while (!list_empty(&ei->i_prealloc_list)) {
++ pa = list_entry(ei->i_prealloc_list.next,
++ struct ext4_prealloc_space, pa_inode_list);
++ BUG_ON(pa->pa_obj_lock != &ei->i_prealloc_lock);
++ spin_lock(&pa->pa_lock);
++ if (atomic_read(&pa->pa_count)) {
++ /* this shouldn't happen often - nobody should
++ * use preallocation while we're discarding it */
++ spin_unlock(&pa->pa_lock);
++ spin_unlock(&ei->i_prealloc_lock);
++ printk(KERN_ERR "uh-oh! used pa while discarding\n");
++ WARN_ON(1);
++ schedule_timeout_uninterruptible(HZ);
++ goto repeat;
+
- /*
- PMM_NPS_MODE -- PMM Non-power switching mode
- Ports are powered continuously.
-@@ -80,7 +83,7 @@ static int pxa27x_start_hc(struct device *dev)
-
- inf = dev->platform_data;
-
-- pxa_set_cken(CKEN_USBHOST, 1);
-+ clk_enable(usb_clk);
-
- UHCHR |= UHCHR_FHR;
- udelay(11);
-@@ -123,7 +126,7 @@ static void pxa27x_stop_hc(struct device *dev)
- UHCCOMS |= 1;
- udelay(10);
-
-- pxa_set_cken(CKEN_USBHOST, 0);
-+ clk_disable(usb_clk);
- }
-
-
-@@ -158,6 +161,10 @@ int usb_hcd_pxa27x_probe (const struct hc_driver *driver, struct platform_device
- return -ENOMEM;
- }
-
-+ usb_clk = clk_get(&pdev->dev, "USBCLK");
-+ if (IS_ERR(usb_clk))
-+ return PTR_ERR(usb_clk);
++ }
++ if (pa->pa_deleted == 0) {
++ pa->pa_deleted = 1;
++ spin_unlock(&pa->pa_lock);
++ list_del_rcu(&pa->pa_inode_list);
++ list_add(&pa->u.pa_tmp_list, &list);
++ continue;
++ }
+
- hcd = usb_create_hcd (driver, &pdev->dev, "pxa27x");
- if (!hcd)
- return -ENOMEM;
-@@ -201,6 +208,7 @@ int usb_hcd_pxa27x_probe (const struct hc_driver *driver, struct platform_device
- release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
- err1:
- usb_put_hcd(hcd);
-+ clk_put(usb_clk);
- return retval;
- }
-
-@@ -225,6 +233,7 @@ void usb_hcd_pxa27x_remove (struct usb_hcd *hcd, struct platform_device *pdev)
- iounmap(hcd->regs);
- release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
- usb_put_hcd(hcd);
-+ clk_put(usb_clk);
- }
-
- /*-------------------------------------------------------------------------*/
-diff --git a/drivers/usb/storage/freecom.c b/drivers/usb/storage/freecom.c
-index 88aa59a..f5a4e8d 100644
---- a/drivers/usb/storage/freecom.c
-+++ b/drivers/usb/storage/freecom.c
-@@ -132,8 +132,7 @@ freecom_readdata (struct scsi_cmnd *srb, struct us_data *us,
-
- /* Now transfer all of our blocks. */
- US_DEBUGP("Start of read\n");
-- result = usb_stor_bulk_transfer_sg(us, ipipe, srb->request_buffer,
-- count, srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, ipipe, srb);
- US_DEBUGP("freecom_readdata done!\n");
-
- if (result > USB_STOR_XFER_SHORT)
-@@ -166,8 +165,7 @@ freecom_writedata (struct scsi_cmnd *srb, struct us_data *us,
-
- /* Now transfer all of our blocks. */
- US_DEBUGP("Start of write\n");
-- result = usb_stor_bulk_transfer_sg(us, opipe, srb->request_buffer,
-- count, srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, opipe, srb);
-
- US_DEBUGP("freecom_writedata done!\n");
- if (result > USB_STOR_XFER_SHORT)
-@@ -281,7 +279,7 @@ int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
- * and such will hang. */
- US_DEBUGP("Device indicates that it has %d bytes available\n",
- le16_to_cpu (fst->Count));
-- US_DEBUGP("SCSI requested %d\n", srb->request_bufflen);
-+ US_DEBUGP("SCSI requested %d\n", scsi_bufflen(srb));
-
- /* Find the length we desire to read. */
- switch (srb->cmnd[0]) {
-@@ -292,12 +290,12 @@ int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
- length = le16_to_cpu(fst->Count);
- break;
- default:
-- length = srb->request_bufflen;
-+ length = scsi_bufflen(srb);
- }
-
- /* verify that this amount is legal */
-- if (length > srb->request_bufflen) {
-- length = srb->request_bufflen;
-+ if (length > scsi_bufflen(srb)) {
-+ length = scsi_bufflen(srb);
- US_DEBUGP("Truncating request to match buffer length: %d\n", length);
- }
-
-diff --git a/drivers/usb/storage/isd200.c b/drivers/usb/storage/isd200.c
-index 49ba6c0..178e8c2 100644
---- a/drivers/usb/storage/isd200.c
-+++ b/drivers/usb/storage/isd200.c
-@@ -49,6 +49,7 @@
- #include <linux/slab.h>
- #include <linux/hdreg.h>
- #include <linux/ide.h>
-+#include <linux/scatterlist.h>
-
- #include <scsi/scsi.h>
- #include <scsi/scsi_cmnd.h>
-@@ -287,6 +288,7 @@ struct isd200_info {
- /* maximum number of LUNs supported */
- unsigned char MaxLUNs;
- struct scsi_cmnd srb;
-+ struct scatterlist sg;
- };
-
-
-@@ -398,6 +400,31 @@ static void isd200_build_sense(struct us_data *us, struct scsi_cmnd *srb)
- * Transport routines
- ***********************************************************************/
-
-+/**************************************************************************
-+ * isd200_set_srb(), isd200_srb_set_bufflen()
-+ *
-+ * Two helpers to facilitate in initialization of scsi_cmnd structure
-+ * Will need to change when struct scsi_cmnd changes
++ /* someone is deleting pa right now */
++ spin_unlock(&pa->pa_lock);
++ spin_unlock(&ei->i_prealloc_lock);
++
++ /* we have to wait here because pa_deleted
++ * doesn't mean pa is already unlinked from
++ * the list. as we might be called from
++ * ->clear_inode() the inode will get freed
++ * and concurrent thread which is unlinking
++ * pa from inode's list may access already
++ * freed memory, bad-bad-bad */
++
++ /* XXX: if this happens too often, we can
++ * add a flag to force wait only in case
++ * of ->clear_inode(), but not in case of
++ * regular truncate */
++ schedule_timeout_uninterruptible(HZ);
++ goto repeat;
++ }
++ spin_unlock(&ei->i_prealloc_lock);
++
++ list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
++ BUG_ON(pa->pa_linear != 0);
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, NULL);
++
++ err = ext4_mb_load_buddy(sb, group, &e4b);
++ BUG_ON(err != 0); /* error handling here */
++
++ bitmap_bh = read_block_bitmap(sb, group);
++ if (bitmap_bh == NULL) {
++ /* error handling here */
++ ext4_mb_release_desc(&e4b);
++ BUG_ON(bitmap_bh == NULL);
++ }
++
++ ext4_lock_group(sb, group);
++ list_del(&pa->pa_group_list);
++ ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
++ ext4_unlock_group(sb, group);
++
++ ext4_mb_release_desc(&e4b);
++ put_bh(bitmap_bh);
++
++ list_del(&pa->u.pa_tmp_list);
++ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
++ }
++}
++
++/*
++ * finds all preallocated spaces and return blocks being freed to them
++ * if preallocated space becomes full (no block is used from the space)
++ * then the function frees space in buddy
++ * XXX: at the moment, truncate (which is the only way to free blocks)
++ * discards all preallocations
+ */
-+static void isd200_set_srb(struct isd200_info *info,
-+ enum dma_data_direction dir, void* buff, unsigned bufflen)
++static void ext4_mb_return_to_preallocation(struct inode *inode,
++ struct ext4_buddy *e4b,
++ sector_t block, int count)
+{
-+ struct scsi_cmnd *srb = &info->srb;
++ BUG_ON(!list_empty(&EXT4_I(inode)->i_prealloc_list));
++}
++#ifdef MB_DEBUG
++static void ext4_mb_show_ac(struct ext4_allocation_context *ac)
++{
++ struct super_block *sb = ac->ac_sb;
++ ext4_group_t i;
+
-+ if (buff)
-+ sg_init_one(&info->sg, buff, bufflen);
++ printk(KERN_ERR "EXT4-fs: Can't allocate:"
++ " Allocation context details:\n");
++ printk(KERN_ERR "EXT4-fs: status %d flags %d\n",
++ ac->ac_status, ac->ac_flags);
++ printk(KERN_ERR "EXT4-fs: orig %lu/%lu/%lu@%lu, goal %lu/%lu/%lu@%lu, "
++ "best %lu/%lu/%lu@%lu cr %d\n",
++ (unsigned long)ac->ac_o_ex.fe_group,
++ (unsigned long)ac->ac_o_ex.fe_start,
++ (unsigned long)ac->ac_o_ex.fe_len,
++ (unsigned long)ac->ac_o_ex.fe_logical,
++ (unsigned long)ac->ac_g_ex.fe_group,
++ (unsigned long)ac->ac_g_ex.fe_start,
++ (unsigned long)ac->ac_g_ex.fe_len,
++ (unsigned long)ac->ac_g_ex.fe_logical,
++ (unsigned long)ac->ac_b_ex.fe_group,
++ (unsigned long)ac->ac_b_ex.fe_start,
++ (unsigned long)ac->ac_b_ex.fe_len,
++ (unsigned long)ac->ac_b_ex.fe_logical,
++ (int)ac->ac_criteria);
++ printk(KERN_ERR "EXT4-fs: %lu scanned, %d found\n", ac->ac_ex_scanned,
++ ac->ac_found);
++ printk(KERN_ERR "EXT4-fs: groups: \n");
++ for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
++ struct ext4_group_info *grp = ext4_get_group_info(sb, i);
++ struct ext4_prealloc_space *pa;
++ ext4_grpblk_t start;
++ struct list_head *cur;
++ ext4_lock_group(sb, i);
++ list_for_each(cur, &grp->bb_prealloc_list) {
++ pa = list_entry(cur, struct ext4_prealloc_space,
++ pa_group_list);
++ spin_lock(&pa->pa_lock);
++ ext4_get_group_no_and_offset(sb, pa->pa_pstart,
++ NULL, &start);
++ spin_unlock(&pa->pa_lock);
++ printk(KERN_ERR "PA:%lu:%d:%u \n", i,
++ start, pa->pa_len);
++ }
++ ext4_lock_group(sb, i);
+
-+ srb->sc_data_direction = dir;
-+ srb->request_buffer = buff ? &info->sg : NULL;
-+ srb->request_bufflen = bufflen;
-+ srb->use_sg = buff ? 1 : 0;
++ if (grp->bb_free == 0)
++ continue;
++ printk(KERN_ERR "%lu: %d/%d \n",
++ i, grp->bb_free, grp->bb_fragments);
++ }
++ printk(KERN_ERR "\n");
++}
++#else
++static inline void ext4_mb_show_ac(struct ext4_allocation_context *ac)
++{
++ return;
+}
++#endif
+
-+static void isd200_srb_set_bufflen(struct scsi_cmnd *srb, unsigned bufflen)
++/*
++ * We use locality group preallocation for small size file. The size of the
++ * file is determined by the current size or the resulting size after
++ * allocation which ever is larger
++ *
++ * One can tune this size via /proc/fs/ext4/<partition>/stream_req
++ */
++static void ext4_mb_group_or_file(struct ext4_allocation_context *ac)
+{
-+ srb->request_bufflen = bufflen;
++ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
++ int bsbits = ac->ac_sb->s_blocksize_bits;
++ loff_t size, isize;
++
++ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
++ return;
++
++ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
++ isize = i_size_read(ac->ac_inode) >> bsbits;
++ size = max(size, isize);
++
++ /* don't use group allocation for large files */
++ if (size >= sbi->s_mb_stream_request)
++ return;
++
++ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
++ return;
++
++ BUG_ON(ac->ac_lg != NULL);
++ /*
++ * locality group prealloc space are per cpu. The reason for having
++ * per cpu locality group is to reduce the contention between block
++ * request from multiple CPUs.
++ */
++ ac->ac_lg = &sbi->s_locality_groups[get_cpu()];
++ put_cpu();
++
++ /* we're going to use group allocation */
++ ac->ac_flags |= EXT4_MB_HINT_GROUP_ALLOC;
++
++ /* serialize all allocations in the group */
++ mutex_lock(&ac->ac_lg->lg_mutex);
+}
+
-
- /**************************************************************************
- * isd200_action
-@@ -432,9 +459,7 @@ static int isd200_action( struct us_data *us, int action,
- ata.generic.RegisterSelect =
- REG_CYLINDER_LOW | REG_CYLINDER_HIGH |
- REG_STATUS | REG_ERROR;
-- srb->sc_data_direction = DMA_FROM_DEVICE;
-- srb->request_buffer = pointer;
-- srb->request_bufflen = value;
-+ isd200_set_srb(info, DMA_FROM_DEVICE, pointer, value);
- break;
-
- case ACTION_ENUM:
-@@ -444,7 +469,7 @@ static int isd200_action( struct us_data *us, int action,
- ACTION_SELECT_5;
- ata.generic.RegisterSelect = REG_DEVICE_HEAD;
- ata.write.DeviceHeadByte = value;
-- srb->sc_data_direction = DMA_NONE;
-+ isd200_set_srb(info, DMA_NONE, NULL, 0);
- break;
-
- case ACTION_RESET:
-@@ -453,7 +478,7 @@ static int isd200_action( struct us_data *us, int action,
- ACTION_SELECT_3|ACTION_SELECT_4;
- ata.generic.RegisterSelect = REG_DEVICE_CONTROL;
- ata.write.DeviceControlByte = ATA_DC_RESET_CONTROLLER;
-- srb->sc_data_direction = DMA_NONE;
-+ isd200_set_srb(info, DMA_NONE, NULL, 0);
- break;
-
- case ACTION_REENABLE:
-@@ -462,7 +487,7 @@ static int isd200_action( struct us_data *us, int action,
- ACTION_SELECT_3|ACTION_SELECT_4;
- ata.generic.RegisterSelect = REG_DEVICE_CONTROL;
- ata.write.DeviceControlByte = ATA_DC_REENABLE_CONTROLLER;
-- srb->sc_data_direction = DMA_NONE;
-+ isd200_set_srb(info, DMA_NONE, NULL, 0);
- break;
-
- case ACTION_SOFT_RESET:
-@@ -471,21 +496,20 @@ static int isd200_action( struct us_data *us, int action,
- ata.generic.RegisterSelect = REG_DEVICE_HEAD | REG_COMMAND;
- ata.write.DeviceHeadByte = info->DeviceHead;
- ata.write.CommandByte = WIN_SRST;
-- srb->sc_data_direction = DMA_NONE;
-+ isd200_set_srb(info, DMA_NONE, NULL, 0);
- break;
-
- case ACTION_IDENTIFY:
- US_DEBUGP(" isd200_action(IDENTIFY)\n");
- ata.generic.RegisterSelect = REG_COMMAND;
- ata.write.CommandByte = WIN_IDENTIFY;
-- srb->sc_data_direction = DMA_FROM_DEVICE;
-- srb->request_buffer = (void *) info->id;
-- srb->request_bufflen = sizeof(struct hd_driveid);
-+ isd200_set_srb(info, DMA_FROM_DEVICE, info->id,
-+ sizeof(struct hd_driveid));
- break;
-
- default:
- US_DEBUGP("Error: Undefined action %d\n",action);
-- break;
-+ return ISD200_ERROR;
- }
-
- memcpy(srb->cmnd, &ata, sizeof(ata.generic));
-@@ -590,7 +614,7 @@ static void isd200_invoke_transport( struct us_data *us,
- return;
- }
-
-- if ((srb->resid > 0) &&
-+ if ((scsi_get_resid(srb) > 0) &&
- !((srb->cmnd[0] == REQUEST_SENSE) ||
- (srb->cmnd[0] == INQUIRY) ||
- (srb->cmnd[0] == MODE_SENSE) ||
-@@ -1217,7 +1241,6 @@ static int isd200_get_inquiry_data( struct us_data *us )
- return(retStatus);
- }
-
--
- /**************************************************************************
- * isd200_scsi_to_ata
- *
-@@ -1266,7 +1289,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
- ataCdb->generic.TransferBlockSize = 1;
- ataCdb->generic.RegisterSelect = REG_COMMAND;
- ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
-- srb->request_bufflen = 0;
-+ isd200_srb_set_bufflen(srb, 0);
- } else {
- US_DEBUGP(" Media Status not supported, just report okay\n");
- srb->result = SAM_STAT_GOOD;
-@@ -1284,7 +1307,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
- ataCdb->generic.TransferBlockSize = 1;
- ataCdb->generic.RegisterSelect = REG_COMMAND;
- ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
-- srb->request_bufflen = 0;
-+ isd200_srb_set_bufflen(srb, 0);
- } else {
- US_DEBUGP(" Media Status not supported, just report okay\n");
- srb->result = SAM_STAT_GOOD;
-@@ -1390,7 +1413,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
- ataCdb->generic.RegisterSelect = REG_COMMAND;
- ataCdb->write.CommandByte = (srb->cmnd[4] & 0x1) ?
- WIN_DOORLOCK : WIN_DOORUNLOCK;
-- srb->request_bufflen = 0;
-+ isd200_srb_set_bufflen(srb, 0);
- } else {
- US_DEBUGP(" Not removeable media, just report okay\n");
- srb->result = SAM_STAT_GOOD;
-@@ -1416,7 +1439,7 @@ static int isd200_scsi_to_ata(struct scsi_cmnd *srb, struct us_data *us,
- ataCdb->generic.TransferBlockSize = 1;
- ataCdb->generic.RegisterSelect = REG_COMMAND;
- ataCdb->write.CommandByte = ATA_COMMAND_GET_MEDIA_STATUS;
-- srb->request_bufflen = 0;
-+ isd200_srb_set_bufflen(srb, 0);
- } else {
- US_DEBUGP(" Nothing to do, just report okay\n");
- srb->result = SAM_STAT_GOOD;
-@@ -1525,7 +1548,7 @@ int isd200_Initialization(struct us_data *us)
-
- void isd200_ata_command(struct scsi_cmnd *srb, struct us_data *us)
- {
-- int sendToTransport = 1;
-+ int sendToTransport = 1, orig_bufflen;
- union ata_cdb ataCdb;
-
- /* Make sure driver was initialized */
-@@ -1533,11 +1556,14 @@ void isd200_ata_command(struct scsi_cmnd *srb, struct us_data *us)
- if (us->extra == NULL)
- US_DEBUGP("ERROR Driver not initialized\n");
-
-- /* Convert command */
-- srb->resid = 0;
-+ scsi_set_resid(srb, 0);
-+ /* scsi_bufflen might change in protocol translation to ata */
-+ orig_bufflen = scsi_bufflen(srb);
- sendToTransport = isd200_scsi_to_ata(srb, us, &ataCdb);
-
- /* send the command to the transport layer */
- if (sendToTransport)
- isd200_invoke_transport(us, srb, &ataCdb);
++static int ext4_mb_initialize_context(struct ext4_allocation_context *ac,
++ struct ext4_allocation_request *ar)
++{
++ struct super_block *sb = ar->inode->i_sb;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct ext4_super_block *es = sbi->s_es;
++ ext4_group_t group;
++ unsigned long len;
++ unsigned long goal;
++ ext4_grpblk_t block;
+
-+ isd200_srb_set_bufflen(srb, orig_bufflen);
- }
-diff --git a/drivers/usb/storage/protocol.c b/drivers/usb/storage/protocol.c
-index 889622b..a41ce21 100644
---- a/drivers/usb/storage/protocol.c
-+++ b/drivers/usb/storage/protocol.c
-@@ -149,11 +149,7 @@ void usb_stor_transparent_scsi_command(struct scsi_cmnd *srb,
- ***********************************************************************/
-
- /* Copy a buffer of length buflen to/from the srb's transfer buffer.
-- * (Note: for scatter-gather transfers (srb->use_sg > 0), srb->request_buffer
-- * points to a list of s-g entries and we ignore srb->request_bufflen.
-- * For non-scatter-gather transfers, srb->request_buffer points to the
-- * transfer buffer itself and srb->request_bufflen is the buffer's length.)
-- * Update the *index and *offset variables so that the next copy will
-+ * Update the **sgptr and *offset variables so that the next copy will
- * pick up from where this one left off. */
-
- unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
-@@ -162,80 +158,64 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
- {
- unsigned int cnt;
-
-- /* If not using scatter-gather, just transfer the data directly.
-- * Make certain it will fit in the available buffer space. */
-- if (srb->use_sg == 0) {
-- if (*offset >= srb->request_bufflen)
-- return 0;
-- cnt = min(buflen, srb->request_bufflen - *offset);
-- if (dir == TO_XFER_BUF)
-- memcpy((unsigned char *) srb->request_buffer + *offset,
-- buffer, cnt);
-- else
-- memcpy(buffer, (unsigned char *) srb->request_buffer +
-- *offset, cnt);
-- *offset += cnt;
--
-- /* Using scatter-gather. We have to go through the list one entry
-+ /* We have to go through the list one entry
- * at a time. Each s-g entry contains some number of pages, and
- * each page has to be kmap()'ed separately. If the page is already
- * in kernel-addressable memory then kmap() will return its address.
- * If the page is not directly accessible -- such as a user buffer
- * located in high memory -- then kmap() will map it to a temporary
- * position in the kernel's virtual address space. */
-- } else {
-- struct scatterlist *sg = *sgptr;
--
-- if (!sg)
-- sg = (struct scatterlist *) srb->request_buffer;
--
-- /* This loop handles a single s-g list entry, which may
-- * include multiple pages. Find the initial page structure
-- * and the starting offset within the page, and update
-- * the *offset and *index values for the next loop. */
-- cnt = 0;
-- while (cnt < buflen) {
-- struct page *page = sg_page(sg) +
-- ((sg->offset + *offset) >> PAGE_SHIFT);
-- unsigned int poff =
-- (sg->offset + *offset) & (PAGE_SIZE-1);
-- unsigned int sglen = sg->length - *offset;
--
-- if (sglen > buflen - cnt) {
--
-- /* Transfer ends within this s-g entry */
-- sglen = buflen - cnt;
-- *offset += sglen;
-- } else {
--
-- /* Transfer continues to next s-g entry */
-- *offset = 0;
-- sg = sg_next(sg);
-- }
--
-- /* Transfer the data for all the pages in this
-- * s-g entry. For each page: call kmap(), do the
-- * transfer, and call kunmap() immediately after. */
-- while (sglen > 0) {
-- unsigned int plen = min(sglen, (unsigned int)
-- PAGE_SIZE - poff);
-- unsigned char *ptr = kmap(page);
--
-- if (dir == TO_XFER_BUF)
-- memcpy(ptr + poff, buffer + cnt, plen);
-- else
-- memcpy(buffer + cnt, ptr + poff, plen);
-- kunmap(page);
--
-- /* Start at the beginning of the next page */
-- poff = 0;
-- ++page;
-- cnt += plen;
-- sglen -= plen;
-- }
-+ struct scatterlist *sg = *sgptr;
++ /* we can't allocate > group size */
++ len = ar->len;
+
-+ if (!sg)
-+ sg = scsi_sglist(srb);
++ /* just a dirty hack to filter too big requests */
++ if (len >= EXT4_BLOCKS_PER_GROUP(sb) - 10)
++ len = EXT4_BLOCKS_PER_GROUP(sb) - 10;
+
-+ /* This loop handles a single s-g list entry, which may
-+ * include multiple pages. Find the initial page structure
-+ * and the starting offset within the page, and update
-+ * the *offset and **sgptr values for the next loop. */
-+ cnt = 0;
-+ while (cnt < buflen) {
-+ struct page *page = sg_page(sg) +
-+ ((sg->offset + *offset) >> PAGE_SHIFT);
-+ unsigned int poff =
-+ (sg->offset + *offset) & (PAGE_SIZE-1);
-+ unsigned int sglen = sg->length - *offset;
++ /* start searching from the goal */
++ goal = ar->goal;
++ if (goal < le32_to_cpu(es->s_first_data_block) ||
++ goal >= ext4_blocks_count(es))
++ goal = le32_to_cpu(es->s_first_data_block);
++ ext4_get_group_no_and_offset(sb, goal, &group, &block);
+
-+ if (sglen > buflen - cnt) {
++ /* set up allocation goals */
++ ac->ac_b_ex.fe_logical = ar->logical;
++ ac->ac_b_ex.fe_group = 0;
++ ac->ac_b_ex.fe_start = 0;
++ ac->ac_b_ex.fe_len = 0;
++ ac->ac_status = AC_STATUS_CONTINUE;
++ ac->ac_groups_scanned = 0;
++ ac->ac_ex_scanned = 0;
++ ac->ac_found = 0;
++ ac->ac_sb = sb;
++ ac->ac_inode = ar->inode;
++ ac->ac_o_ex.fe_logical = ar->logical;
++ ac->ac_o_ex.fe_group = group;
++ ac->ac_o_ex.fe_start = block;
++ ac->ac_o_ex.fe_len = len;
++ ac->ac_g_ex.fe_logical = ar->logical;
++ ac->ac_g_ex.fe_group = group;
++ ac->ac_g_ex.fe_start = block;
++ ac->ac_g_ex.fe_len = len;
++ ac->ac_f_ex.fe_len = 0;
++ ac->ac_flags = ar->flags;
++ ac->ac_2order = 0;
++ ac->ac_criteria = 0;
++ ac->ac_pa = NULL;
++ ac->ac_bitmap_page = NULL;
++ ac->ac_buddy_page = NULL;
++ ac->ac_lg = NULL;
+
-+ /* Transfer ends within this s-g entry */
-+ sglen = buflen - cnt;
-+ *offset += sglen;
-+ } else {
++	/* we have to define the context: whether we'll work with a file or
++	 * a locality group. this is a policy decision, actually */
++ ext4_mb_group_or_file(ac);
+
-+ /* Transfer continues to next s-g entry */
-+ *offset = 0;
-+ sg = sg_next(sg);
++ mb_debug("init ac: %u blocks @ %u, goal %u, flags %x, 2^%d, "
++ "left: %u/%u, right %u/%u to %swritable\n",
++ (unsigned) ar->len, (unsigned) ar->logical,
++ (unsigned) ar->goal, ac->ac_flags, ac->ac_2order,
++ (unsigned) ar->lleft, (unsigned) ar->pleft,
++ (unsigned) ar->lright, (unsigned) ar->pright,
++ atomic_read(&ar->inode->i_writecount) ? "" : "non-");
++ return 0;
++
++}
++
++/*
++ * release all resources used during allocation
++ */
++static int ext4_mb_release_context(struct ext4_allocation_context *ac)
++{
++ if (ac->ac_pa) {
++ if (ac->ac_pa->pa_linear) {
++ /* see comment in ext4_mb_use_group_pa() */
++ spin_lock(&ac->ac_pa->pa_lock);
++ ac->ac_pa->pa_pstart += ac->ac_b_ex.fe_len;
++ ac->ac_pa->pa_lstart += ac->ac_b_ex.fe_len;
++ ac->ac_pa->pa_free -= ac->ac_b_ex.fe_len;
++ ac->ac_pa->pa_len -= ac->ac_b_ex.fe_len;
++ spin_unlock(&ac->ac_pa->pa_lock);
+ }
++ ext4_mb_put_pa(ac, ac->ac_sb, ac->ac_pa);
++ }
++ if (ac->ac_bitmap_page)
++ page_cache_release(ac->ac_bitmap_page);
++ if (ac->ac_buddy_page)
++ page_cache_release(ac->ac_buddy_page);
++ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
++ mutex_unlock(&ac->ac_lg->lg_mutex);
++ ext4_mb_collect_stats(ac);
++ return 0;
++}
+
-+ /* Transfer the data for all the pages in this
-+ * s-g entry. For each page: call kmap(), do the
-+ * transfer, and call kunmap() immediately after. */
-+ while (sglen > 0) {
-+ unsigned int plen = min(sglen, (unsigned int)
-+ PAGE_SIZE - poff);
-+ unsigned char *ptr = kmap(page);
++static int ext4_mb_discard_preallocations(struct super_block *sb, int needed)
++{
++ ext4_group_t i;
++ int ret;
++ int freed = 0;
+
-+ if (dir == TO_XFER_BUF)
-+ memcpy(ptr + poff, buffer + cnt, plen);
-+ else
-+ memcpy(buffer + cnt, ptr + poff, plen);
-+ kunmap(page);
++ for (i = 0; i < EXT4_SB(sb)->s_groups_count && needed > 0; i++) {
++ ret = ext4_mb_discard_group_preallocations(sb, i, needed);
++ freed += ret;
++ needed -= ret;
++ }
+
-+ /* Start at the beginning of the next page */
-+ poff = 0;
-+ ++page;
-+ cnt += plen;
-+ sglen -= plen;
- }
-- *sgptr = sg;
- }
-+ *sgptr = sg;
-
- /* Return the amount actually transferred */
- return cnt;
-@@ -251,6 +231,6 @@ void usb_stor_set_xfer_buf(unsigned char *buffer,
-
- usb_stor_access_xfer_buf(buffer, buflen, srb, &sg, &offset,
- TO_XFER_BUF);
-- if (buflen < srb->request_bufflen)
-- srb->resid = srb->request_bufflen - buflen;
-+ if (buflen < scsi_bufflen(srb))
-+ scsi_set_resid(srb, scsi_bufflen(srb) - buflen);
- }
-diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
-index 7c9593b..8c1e295 100644
---- a/drivers/usb/storage/scsiglue.c
-+++ b/drivers/usb/storage/scsiglue.c
-@@ -81,6 +81,16 @@ static int slave_alloc (struct scsi_device *sdev)
- */
- sdev->inquiry_len = 36;
-
-+ /* Scatter-gather buffers (all but the last) must have a length
-+ * divisible by the bulk maxpacket size. Otherwise a data packet
-+ * would end up being short, causing a premature end to the data
-+ * transfer. Since high-speed bulk pipes have a maxpacket size
-+ * of 512, we'll use that as the scsi device queue's DMA alignment
-+ * mask. Guaranteeing proper alignment of the first buffer will
-+ * have the desired effect because, except at the beginning and
-+ * the end, scatter-gather buffers follow page boundaries. */
-+ blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
++ return freed;
++}
+
- /*
- * The UFI spec treates the Peripheral Qualifier bits in an
- * INQUIRY result as reserved and requires devices to set them
-@@ -100,16 +110,6 @@ static int slave_configure(struct scsi_device *sdev)
- {
- struct us_data *us = host_to_us(sdev->host);
-
-- /* Scatter-gather buffers (all but the last) must have a length
-- * divisible by the bulk maxpacket size. Otherwise a data packet
-- * would end up being short, causing a premature end to the data
-- * transfer. Since high-speed bulk pipes have a maxpacket size
-- * of 512, we'll use that as the scsi device queue's DMA alignment
-- * mask. Guaranteeing proper alignment of the first buffer will
-- * have the desired effect because, except at the beginning and
-- * the end, scatter-gather buffers follow page boundaries. */
-- blk_queue_dma_alignment(sdev->request_queue, (512 - 1));
--
- /* Many devices have trouble transfering more than 32KB at a time,
- * while others have trouble with more than 64K. At this time we
- * are limiting both to 32K (64 sectores).
-@@ -187,6 +187,10 @@ static int slave_configure(struct scsi_device *sdev)
- * automatically, requiring a START-STOP UNIT command. */
- sdev->allow_restart = 1;
-
-+ /* Some USB cardreaders have trouble reading an sdcard's last
-+ * sector in a larger then 1 sector read, since the performance
-+ * impact is negible we set this flag for all USB disks */
-+ sdev->last_sector_bug = 1;
- } else {
-
- /* Non-disk-type devices don't need to blacklist any pages
-diff --git a/drivers/usb/storage/sddr09.c b/drivers/usb/storage/sddr09.c
-index b12202c..8972b17 100644
---- a/drivers/usb/storage/sddr09.c
-+++ b/drivers/usb/storage/sddr09.c
-@@ -1623,7 +1623,7 @@ int sddr09_transport(struct scsi_cmnd *srb, struct us_data *us)
- return USB_STOR_TRANSPORT_ERROR;
- }
-
-- if (srb->request_bufflen == 0)
-+ if (scsi_bufflen(srb) == 0)
- return USB_STOR_TRANSPORT_GOOD;
-
- if (srb->sc_data_direction == DMA_TO_DEVICE ||
-@@ -1634,12 +1634,9 @@ int sddr09_transport(struct scsi_cmnd *srb, struct us_data *us)
- US_DEBUGP("SDDR09: %s %d bytes\n",
- (srb->sc_data_direction == DMA_TO_DEVICE) ?
- "sending" : "receiving",
-- srb->request_bufflen);
-+ scsi_bufflen(srb));
-
-- result = usb_stor_bulk_transfer_sg(us, pipe,
-- srb->request_buffer,
-- srb->request_bufflen,
-- srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, pipe, srb);
-
- return (result == USB_STOR_XFER_GOOD ?
- USB_STOR_TRANSPORT_GOOD : USB_STOR_TRANSPORT_ERROR);
-diff --git a/drivers/usb/storage/shuttle_usbat.c b/drivers/usb/storage/shuttle_usbat.c
-index cb22a9a..570c125 100644
---- a/drivers/usb/storage/shuttle_usbat.c
-+++ b/drivers/usb/storage/shuttle_usbat.c
-@@ -130,7 +130,7 @@ static int usbat_write(struct us_data *us,
- * Convenience function to perform a bulk read
- */
- static int usbat_bulk_read(struct us_data *us,
-- unsigned char *data,
-+ void* buf,
- unsigned int len,
- int use_sg)
- {
-@@ -138,14 +138,14 @@ static int usbat_bulk_read(struct us_data *us,
- return USB_STOR_XFER_GOOD;
-
- US_DEBUGP("usbat_bulk_read: len = %d\n", len);
-- return usb_stor_bulk_transfer_sg(us, us->recv_bulk_pipe, data, len, use_sg, NULL);
-+ return usb_stor_bulk_transfer_sg(us, us->recv_bulk_pipe, buf, len, use_sg, NULL);
- }
-
- /*
- * Convenience function to perform a bulk write
- */
- static int usbat_bulk_write(struct us_data *us,
-- unsigned char *data,
-+ void* buf,
- unsigned int len,
- int use_sg)
- {
-@@ -153,7 +153,7 @@ static int usbat_bulk_write(struct us_data *us,
- return USB_STOR_XFER_GOOD;
-
- US_DEBUGP("usbat_bulk_write: len = %d\n", len);
-- return usb_stor_bulk_transfer_sg(us, us->send_bulk_pipe, data, len, use_sg, NULL);
-+ return usb_stor_bulk_transfer_sg(us, us->send_bulk_pipe, buf, len, use_sg, NULL);
- }
-
- /*
-@@ -314,7 +314,7 @@ static int usbat_wait_not_busy(struct us_data *us, int minutes)
- * Read block data from the data register
- */
- static int usbat_read_block(struct us_data *us,
-- unsigned char *content,
-+ void* buf,
- unsigned short len,
- int use_sg)
- {
-@@ -337,7 +337,7 @@ static int usbat_read_block(struct us_data *us,
- if (result != USB_STOR_XFER_GOOD)
- return USB_STOR_TRANSPORT_ERROR;
-
-- result = usbat_bulk_read(us, content, len, use_sg);
-+ result = usbat_bulk_read(us, buf, len, use_sg);
- return (result == USB_STOR_XFER_GOOD ?
- USB_STOR_TRANSPORT_GOOD : USB_STOR_TRANSPORT_ERROR);
- }
-@@ -347,7 +347,7 @@ static int usbat_read_block(struct us_data *us,
- */
- static int usbat_write_block(struct us_data *us,
- unsigned char access,
-- unsigned char *content,
-+ void* buf,
- unsigned short len,
- int minutes,
- int use_sg)
-@@ -372,7 +372,7 @@ static int usbat_write_block(struct us_data *us,
- if (result != USB_STOR_XFER_GOOD)
- return USB_STOR_TRANSPORT_ERROR;
-
-- result = usbat_bulk_write(us, content, len, use_sg);
-+ result = usbat_bulk_write(us, buf, len, use_sg);
- if (result != USB_STOR_XFER_GOOD)
- return USB_STOR_TRANSPORT_ERROR;
-
-@@ -392,7 +392,7 @@ static int usbat_hp8200e_rw_block_test(struct us_data *us,
- unsigned char timeout,
- unsigned char qualifier,
- int direction,
-- unsigned char *content,
-+ void *buf,
- unsigned short len,
- int use_sg,
- int minutes)
-@@ -472,7 +472,7 @@ static int usbat_hp8200e_rw_block_test(struct us_data *us,
- }
-
- result = usb_stor_bulk_transfer_sg(us,
-- pipe, content, len, use_sg, NULL);
-+ pipe, buf, len, use_sg, NULL);
-
- /*
- * If we get a stall on the bulk download, we'll retry
-@@ -606,7 +606,7 @@ static int usbat_multiple_write(struct us_data *us,
- * other related details) are defined beforehand with _set_shuttle_features().
- */
- static int usbat_read_blocks(struct us_data *us,
-- unsigned char *buffer,
-+ void* buffer,
- int len,
- int use_sg)
- {
-@@ -648,7 +648,7 @@ static int usbat_read_blocks(struct us_data *us,
- * other related details) are defined beforehand with _set_shuttle_features().
- */
- static int usbat_write_blocks(struct us_data *us,
-- unsigned char *buffer,
-+ void* buffer,
- int len,
- int use_sg)
- {
-@@ -1170,15 +1170,15 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
- US_DEBUGP("handle_read10: transfersize %d\n",
- srb->transfersize);
-
-- if (srb->request_bufflen < 0x10000) {
-+ if (scsi_bufflen(srb) < 0x10000) {
-
- result = usbat_hp8200e_rw_block_test(us, USBAT_ATA,
- registers, data, 19,
- USBAT_ATA_DATA, USBAT_ATA_STATUS, 0xFD,
- (USBAT_QUAL_FCQ | USBAT_QUAL_ALQ),
- DMA_FROM_DEVICE,
-- srb->request_buffer,
-- srb->request_bufflen, srb->use_sg, 1);
-+ scsi_sglist(srb),
-+ scsi_bufflen(srb), scsi_sg_count(srb), 1);
-
- return result;
- }
-@@ -1196,7 +1196,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
- len <<= 16;
- len |= data[7+7];
- US_DEBUGP("handle_read10: GPCMD_READ_CD: len %d\n", len);
-- srb->transfersize = srb->request_bufflen/len;
-+ srb->transfersize = scsi_bufflen(srb)/len;
- }
-
- if (!srb->transfersize) {
-@@ -1213,7 +1213,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
-
- len = (65535/srb->transfersize) * srb->transfersize;
- US_DEBUGP("Max read is %d bytes\n", len);
-- len = min(len, srb->request_bufflen);
-+ len = min(len, scsi_bufflen(srb));
- buffer = kmalloc(len, GFP_NOIO);
- if (buffer == NULL) /* bloody hell! */
- return USB_STOR_TRANSPORT_FAILED;
-@@ -1222,10 +1222,10 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
- sector |= short_pack(data[7+5], data[7+4]);
- transferred = 0;
-
-- while (transferred != srb->request_bufflen) {
-+ while (transferred != scsi_bufflen(srb)) {
-
-- if (len > srb->request_bufflen - transferred)
-- len = srb->request_bufflen - transferred;
-+ if (len > scsi_bufflen(srb) - transferred)
-+ len = scsi_bufflen(srb) - transferred;
-
- data[3] = len&0xFF; /* (cylL) = expected length (L) */
- data[4] = (len>>8)&0xFF; /* (cylH) = expected length (H) */
-@@ -1261,7 +1261,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
- transferred += len;
- sector += len / srb->transfersize;
-
-- } /* while transferred != srb->request_bufflen */
-+ } /* while transferred != scsi_bufflen(srb) */
-
- kfree(buffer);
- return result;
-@@ -1429,9 +1429,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
- unsigned char data[32];
- unsigned int len;
- int i;
-- char string[64];
-
-- len = srb->request_bufflen;
-+ len = scsi_bufflen(srb);
-
- /* Send A0 (ATA PACKET COMMAND).
- Note: I guess we're never going to get any of the ATA
-@@ -1472,8 +1471,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
- USBAT_ATA_DATA, USBAT_ATA_STATUS, 0xFD,
- (USBAT_QUAL_FCQ | USBAT_QUAL_ALQ),
- DMA_TO_DEVICE,
-- srb->request_buffer,
-- len, srb->use_sg, 10);
-+ scsi_sglist(srb),
-+ len, scsi_sg_count(srb), 10);
-
- if (result == USB_STOR_TRANSPORT_GOOD) {
- transferred += len;
-@@ -1540,23 +1539,8 @@ static int usbat_hp8200e_transport(struct scsi_cmnd *srb, struct us_data *us)
- len = *status;
-
-
-- result = usbat_read_block(us, srb->request_buffer, len, srb->use_sg);
--
-- /* Debug-print the first 32 bytes of the transfer */
--
-- if (!srb->use_sg) {
-- string[0] = 0;
-- for (i=0; i<len && i<32; i++) {
-- sprintf(string+strlen(string), "%02X ",
-- ((unsigned char *)srb->request_buffer)[i]);
-- if ((i%16)==15) {
-- US_DEBUGP("%s\n", string);
-- string[0] = 0;
-- }
-- }
-- if (string[0]!=0)
-- US_DEBUGP("%s\n", string);
-- }
-+ result = usbat_read_block(us, scsi_sglist(srb), len,
-+ scsi_sg_count(srb));
- }
-
- return result;
-diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
-index c646750..d9f4912 100644
---- a/drivers/usb/storage/transport.c
-+++ b/drivers/usb/storage/transport.c
-@@ -459,6 +459,22 @@ static int usb_stor_bulk_transfer_sglist(struct us_data *us, unsigned int pipe,
- }
-
- /*
-+ * Common used function. Transfer a complete command
-+ * via usb_stor_bulk_transfer_sglist() above. Set cmnd resid
++/*
++ * Main entry point into mballoc to allocate blocks;
++ * it tries to use preallocation first, then falls back
++ * to regular allocation
+ */
-+int usb_stor_bulk_srb(struct us_data* us, unsigned int pipe,
-+ struct scsi_cmnd* srb)
++ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
++ struct ext4_allocation_request *ar, int *errp)
+{
-+ unsigned int partial;
-+ int result = usb_stor_bulk_transfer_sglist(us, pipe, scsi_sglist(srb),
-+ scsi_sg_count(srb), scsi_bufflen(srb),
-+ &partial);
++ struct ext4_allocation_context ac;
++ struct ext4_sb_info *sbi;
++ struct super_block *sb;
++ ext4_fsblk_t block = 0;
++ int freed;
++ int inquota;
+
-+ scsi_set_resid(srb, scsi_bufflen(srb) - partial);
-+ return result;
++ sb = ar->inode->i_sb;
++ sbi = EXT4_SB(sb);
++
++ if (!test_opt(sb, MBALLOC)) {
++ block = ext4_new_blocks_old(handle, ar->inode, ar->goal,
++ &(ar->len), errp);
++ return block;
++ }
++
++ while (ar->len && DQUOT_ALLOC_BLOCK(ar->inode, ar->len)) {
++ ar->flags |= EXT4_MB_HINT_NOPREALLOC;
++ ar->len--;
++ }
++ if (ar->len == 0) {
++ *errp = -EDQUOT;
++ return 0;
++ }
++ inquota = ar->len;
++
++ ext4_mb_poll_new_transaction(sb, handle);
++
++ *errp = ext4_mb_initialize_context(&ac, ar);
++ if (*errp) {
++ ar->len = 0;
++ goto out;
++ }
++
++ ac.ac_op = EXT4_MB_HISTORY_PREALLOC;
++ if (!ext4_mb_use_preallocated(&ac)) {
++
++ ac.ac_op = EXT4_MB_HISTORY_ALLOC;
++ ext4_mb_normalize_request(&ac, ar);
++
++repeat:
++ /* allocate space in core */
++ ext4_mb_regular_allocator(&ac);
++
++ /* as we've just preallocated more space than
++		 * user originally requested, we store the allocated
++ * space in a special descriptor */
++ if (ac.ac_status == AC_STATUS_FOUND &&
++ ac.ac_o_ex.fe_len < ac.ac_b_ex.fe_len)
++ ext4_mb_new_preallocation(&ac);
++ }
++
++ if (likely(ac.ac_status == AC_STATUS_FOUND)) {
++ ext4_mb_mark_diskspace_used(&ac, handle);
++ *errp = 0;
++ block = ext4_grp_offs_to_block(sb, &ac.ac_b_ex);
++ ar->len = ac.ac_b_ex.fe_len;
++ } else {
++ freed = ext4_mb_discard_preallocations(sb, ac.ac_o_ex.fe_len);
++ if (freed)
++ goto repeat;
++ *errp = -ENOSPC;
++ ac.ac_b_ex.fe_len = 0;
++ ar->len = 0;
++ ext4_mb_show_ac(&ac);
++ }
++
++ ext4_mb_release_context(&ac);
++
++out:
++ if (ar->len < inquota)
++ DQUOT_FREE_BLOCK(ar->inode, inquota - ar->len);
++
++ return block;
++}
++static void ext4_mb_poll_new_transaction(struct super_block *sb,
++ handle_t *handle)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++ if (sbi->s_last_transaction == handle->h_transaction->t_tid)
++ return;
++
++	/* new transaction! time to close the last one and free blocks
++	 * for the committed transaction. we know that only one
++	 * transaction can be active, so the previous transaction may
++	 * still be being logged and the transaction before the previous
++	 * one is known to be already logged. this means that we may now
++	 * free blocks freed in all transactions before the previous one. */
++
++ spin_lock(&sbi->s_md_lock);
++ if (sbi->s_last_transaction != handle->h_transaction->t_tid) {
++ mb_debug("new transaction %lu, old %lu\n",
++ (unsigned long) handle->h_transaction->t_tid,
++ (unsigned long) sbi->s_last_transaction);
++ list_splice_init(&sbi->s_closed_transaction,
++ &sbi->s_committed_transaction);
++ list_splice_init(&sbi->s_active_transaction,
++ &sbi->s_closed_transaction);
++ sbi->s_last_transaction = handle->h_transaction->t_tid;
++ }
++ spin_unlock(&sbi->s_md_lock);
++
++ ext4_mb_free_committed_blocks(sb);
++}
++
++static int ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
++ ext4_group_t group, ext4_grpblk_t block, int count)
++{
++ struct ext4_group_info *db = e4b->bd_info;
++ struct super_block *sb = e4b->bd_sb;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ struct ext4_free_metadata *md;
++ int i;
++
++ BUG_ON(e4b->bd_bitmap_page == NULL);
++ BUG_ON(e4b->bd_buddy_page == NULL);
++
++ ext4_lock_group(sb, group);
++ for (i = 0; i < count; i++) {
++ md = db->bb_md_cur;
++ if (md && db->bb_tid != handle->h_transaction->t_tid) {
++ db->bb_md_cur = NULL;
++ md = NULL;
++ }
++
++ if (md == NULL) {
++ ext4_unlock_group(sb, group);
++ md = kmalloc(sizeof(*md), GFP_NOFS);
++ if (md == NULL)
++ return -ENOMEM;
++ md->num = 0;
++ md->group = group;
++
++ ext4_lock_group(sb, group);
++ if (db->bb_md_cur == NULL) {
++ spin_lock(&sbi->s_md_lock);
++ list_add(&md->list, &sbi->s_active_transaction);
++ spin_unlock(&sbi->s_md_lock);
++ /* protect buddy cache from being freed,
++ * otherwise we'll refresh it from
++ * on-disk bitmap and lose not-yet-available
++ * blocks */
++ page_cache_get(e4b->bd_buddy_page);
++ page_cache_get(e4b->bd_bitmap_page);
++ db->bb_md_cur = md;
++ db->bb_tid = handle->h_transaction->t_tid;
++ mb_debug("new md 0x%p for group %lu\n",
++ md, md->group);
++ } else {
++ kfree(md);
++ md = db->bb_md_cur;
++ }
++ }
++
++ BUG_ON(md->num >= EXT4_BB_MAX_BLOCKS);
++ md->blocks[md->num] = block + i;
++ md->num++;
++ if (md->num == EXT4_BB_MAX_BLOCKS) {
++ /* no more space, put full container on a sb's list */
++ db->bb_md_cur = NULL;
++ }
++ }
++ ext4_unlock_group(sb, group);
++ return 0;
+}
+
+/*
- * Transfer an entire SCSI command's worth of data payload over the bulk
- * pipe.
- *
-@@ -508,7 +524,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
- int result;
-
- /* send the command to the transport layer */
-- srb->resid = 0;
-+ scsi_set_resid(srb, 0);
- result = us->transport(srb, us);
-
- /* if the command gets aborted by the higher layers, we need to
-@@ -568,7 +584,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
- * A short transfer on a command where we don't expect it
- * is unusual, but it doesn't mean we need to auto-sense.
- */
-- if ((srb->resid > 0) &&
-+ if ((scsi_get_resid(srb) > 0) &&
- !((srb->cmnd[0] == REQUEST_SENSE) ||
- (srb->cmnd[0] == INQUIRY) ||
- (srb->cmnd[0] == MODE_SENSE) ||
-@@ -593,7 +609,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
- srb->cmd_len = 12;
-
- /* issue the auto-sense command */
-- srb->resid = 0;
-+ scsi_set_resid(srb, 0);
- temp_result = us->transport(us->srb, us);
-
- /* let's clean up right away */
-@@ -649,7 +665,7 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
-
- /* Did we transfer less than the minimum amount required? */
- if (srb->result == SAM_STAT_GOOD &&
-- srb->request_bufflen - srb->resid < srb->underflow)
-+ scsi_bufflen(srb) - scsi_get_resid(srb) < srb->underflow)
- srb->result = (DID_ERROR << 16) | (SUGGEST_RETRY << 24);
-
- return;
-@@ -708,7 +724,7 @@ void usb_stor_stop_transport(struct us_data *us)
-
- int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
- {
-- unsigned int transfer_length = srb->request_bufflen;
-+ unsigned int transfer_length = scsi_bufflen(srb);
- unsigned int pipe = 0;
- int result;
-
-@@ -737,9 +753,7 @@ int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
- if (transfer_length) {
- pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
- us->recv_bulk_pipe : us->send_bulk_pipe;
-- result = usb_stor_bulk_transfer_sg(us, pipe,
-- srb->request_buffer, transfer_length,
-- srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, pipe, srb);
- US_DEBUGP("CBI data stage result is 0x%x\n", result);
-
- /* if we stalled the data transfer it means command failed */
-@@ -808,7 +822,7 @@ int usb_stor_CBI_transport(struct scsi_cmnd *srb, struct us_data *us)
- */
- int usb_stor_CB_transport(struct scsi_cmnd *srb, struct us_data *us)
- {
-- unsigned int transfer_length = srb->request_bufflen;
-+ unsigned int transfer_length = scsi_bufflen(srb);
- int result;
-
- /* COMMAND STAGE */
-@@ -836,9 +850,7 @@ int usb_stor_CB_transport(struct scsi_cmnd *srb, struct us_data *us)
- if (transfer_length) {
- unsigned int pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
- us->recv_bulk_pipe : us->send_bulk_pipe;
-- result = usb_stor_bulk_transfer_sg(us, pipe,
-- srb->request_buffer, transfer_length,
-- srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, pipe, srb);
- US_DEBUGP("CB data stage result is 0x%x\n", result);
-
- /* if we stalled the data transfer it means command failed */
-@@ -904,7 +916,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
- {
- struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf;
- struct bulk_cs_wrap *bcs = (struct bulk_cs_wrap *) us->iobuf;
-- unsigned int transfer_length = srb->request_bufflen;
-+ unsigned int transfer_length = scsi_bufflen(srb);
- unsigned int residue;
- int result;
- int fake_sense = 0;
-@@ -955,9 +967,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
- if (transfer_length) {
- unsigned int pipe = srb->sc_data_direction == DMA_FROM_DEVICE ?
- us->recv_bulk_pipe : us->send_bulk_pipe;
-- result = usb_stor_bulk_transfer_sg(us, pipe,
-- srb->request_buffer, transfer_length,
-- srb->use_sg, &srb->resid);
-+ result = usb_stor_bulk_srb(us, pipe, srb);
- US_DEBUGP("Bulk data transfer result 0x%x\n", result);
- if (result == USB_STOR_XFER_ERROR)
- return USB_STOR_TRANSPORT_ERROR;
-@@ -1036,7 +1046,8 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
- if (residue) {
- if (!(us->flags & US_FL_IGNORE_RESIDUE)) {
- residue = min(residue, transfer_length);
-- srb->resid = max(srb->resid, (int) residue);
-+ scsi_set_resid(srb, max(scsi_get_resid(srb),
-+ (int) residue));
- }
- }
-
-diff --git a/drivers/usb/storage/transport.h b/drivers/usb/storage/transport.h
-index 633a715..ada7c2f 100644
---- a/drivers/usb/storage/transport.h
-+++ b/drivers/usb/storage/transport.h
-@@ -139,6 +139,8 @@ extern int usb_stor_bulk_transfer_buf(struct us_data *us, unsigned int pipe,
- void *buf, unsigned int length, unsigned int *act_len);
- extern int usb_stor_bulk_transfer_sg(struct us_data *us, unsigned int pipe,
- void *buf, unsigned int length, int use_sg, int *residual);
-+extern int usb_stor_bulk_srb(struct us_data* us, unsigned int pipe,
-+ struct scsi_cmnd* srb);
-
- extern int usb_stor_port_reset(struct us_data *us);
- #endif
-diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
-index 5b3dbcf..758435f 100644
---- a/drivers/video/Kconfig
-+++ b/drivers/video/Kconfig
-@@ -889,7 +889,7 @@ config FB_S1D13XXX
-
- config FB_ATMEL
- tristate "AT91/AT32 LCD Controller support"
-- depends on FB && (ARCH_AT91SAM9261 || ARCH_AT91SAM9263 || AVR32)
-+ depends on FB && (ARCH_AT91SAM9261 || ARCH_AT91SAM9263 || ARCH_AT91SAM9RL || ARCH_AT91CAP9 || AVR32)
- select FB_CFB_FILLRECT
- select FB_CFB_COPYAREA
- select FB_CFB_IMAGEBLIT
-diff --git a/drivers/video/atmel_lcdfb.c b/drivers/video/atmel_lcdfb.c
-index 7c30cc8..f8e7111 100644
---- a/drivers/video/atmel_lcdfb.c
-+++ b/drivers/video/atmel_lcdfb.c
-@@ -30,7 +30,7 @@
- #define ATMEL_LCDC_CVAL_DEFAULT 0xc8
- #define ATMEL_LCDC_DMA_BURST_LEN 8
-
--#if defined(CONFIG_ARCH_AT91SAM9263)
-+#if defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91CAP9)
- #define ATMEL_LCDC_FIFO_SIZE 2048
- #else
- #define ATMEL_LCDC_FIFO_SIZE 512
-diff --git a/drivers/video/bf54x-lq043fb.c b/drivers/video/bf54x-lq043fb.c
-index 74d11c3..c8e7427 100644
---- a/drivers/video/bf54x-lq043fb.c
-+++ b/drivers/video/bf54x-lq043fb.c
-@@ -224,7 +224,8 @@ static int config_dma(struct bfin_bf54xfb_info *fbi)
- set_dma_config(CH_EPPI0,
- set_bfin_dma_config(DIR_READ, DMA_FLOW_AUTO,
- INTR_DISABLE, DIMENSION_2D,
-- DATA_SIZE_32));
-+ DATA_SIZE_32,
-+ DMA_NOSYNC_KEEP_DMA_BUF));
- set_dma_x_count(CH_EPPI0, (LCD_X_RES * LCD_BPP) / DMA_BUS_SIZE);
- set_dma_x_modify(CH_EPPI0, DMA_BUS_SIZE / 8);
- set_dma_y_count(CH_EPPI0, LCD_Y_RES);
-@@ -263,8 +264,7 @@ static int request_ports(struct bfin_bf54xfb_info *fbi)
- }
- }
-
-- gpio_direction_output(disp);
-- gpio_set_value(disp, 1);
-+ gpio_direction_output(disp, 1);
-
- return 0;
- }
-diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
-index b87ed37..2b53d1f 100644
---- a/drivers/video/console/Kconfig
-+++ b/drivers/video/console/Kconfig
-@@ -6,7 +6,7 @@ menu "Console display driver support"
-
- config VGA_CONSOLE
- bool "VGA text console" if EMBEDDED || !X86
-- depends on !ARCH_ACORN && !ARCH_EBSA110 && !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && !ARCH_VERSATILE && !SUPERH && !BLACKFIN
-+ depends on !ARCH_ACORN && !ARCH_EBSA110 && !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && !ARCH_VERSATILE && !SUPERH && !BLACKFIN && !AVR32
- default y
- help
- Saying Y here will allow you to use Linux in text mode through a
-diff --git a/drivers/video/matrox/matroxfb_maven.c b/drivers/video/matrox/matroxfb_maven.c
-index 49cd53e..0cd58f8 100644
---- a/drivers/video/matrox/matroxfb_maven.c
-+++ b/drivers/video/matrox/matroxfb_maven.c
-@@ -1232,7 +1232,7 @@ static int maven_shutdown_client(struct i2c_client* clnt) {
- return 0;
- }
-
--static unsigned short normal_i2c[] = { MAVEN_I2CID, I2C_CLIENT_END };
-+static const unsigned short normal_i2c[] = { MAVEN_I2CID, I2C_CLIENT_END };
- I2C_CLIENT_INSMOD;
-
- static struct i2c_driver maven_driver;
-diff --git a/drivers/video/omap/lcd_h3.c b/drivers/video/omap/lcd_h3.c
-index c604d93..31e9783 100644
---- a/drivers/video/omap/lcd_h3.c
-+++ b/drivers/video/omap/lcd_h3.c
-@@ -21,9 +21,9 @@
-
- #include <linux/module.h>
- #include <linux/platform_device.h>
-+#include <linux/i2c/tps65010.h>
-
- #include <asm/arch/gpio.h>
--#include <asm/arch/tps65010.h>
- #include <asm/arch/omapfb.h>
-
- #define MODULE_NAME "omapfb-lcd_h3"
-diff --git a/drivers/w1/masters/ds2482.c b/drivers/w1/masters/ds2482.c
-index d93eb62..0fd5820 100644
---- a/drivers/w1/masters/ds2482.c
-+++ b/drivers/w1/masters/ds2482.c
-@@ -29,7 +29,7 @@
- * However, the chip cannot be detected without doing an i2c write,
- * so use the force module parameter.
- */
--static unsigned short normal_i2c[] = {I2C_CLIENT_END};
-+static const unsigned short normal_i2c[] = { I2C_CLIENT_END };
-
- /**
- * Insmod parameters
-diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
-index 52dff40..899fc13 100644
---- a/drivers/watchdog/Kconfig
-+++ b/drivers/watchdog/Kconfig
-@@ -223,7 +223,7 @@ config DAVINCI_WATCHDOG
-
- config AT32AP700X_WDT
- tristate "AT32AP700x watchdog"
-- depends on CPU_AT32AP7000
-+ depends on CPU_AT32AP700X
- help
- Watchdog timer embedded into AT32AP700x devices. This will reboot
- your system when the timeout is reached.
-@@ -639,6 +639,12 @@ config AR7_WDT
- help
- Hardware driver for the TI AR7 Watchdog Timer.
-
-+config TXX9_WDT
-+ tristate "Toshiba TXx9 Watchdog Timer"
-+ depends on CPU_TX39XX || CPU_TX49XX
-+ help
-+ Hardware driver for the built-in watchdog timer on TXx9 MIPS SoCs.
++ * Main entry point into mballoc to free blocks
++ */
++void ext4_mb_free_blocks(handle_t *handle, struct inode *inode,
++ unsigned long block, unsigned long count,
++ int metadata, unsigned long *freed)
++{
++ struct buffer_head *bitmap_bh = 0;
++ struct super_block *sb = inode->i_sb;
++ struct ext4_allocation_context ac;
++ struct ext4_group_desc *gdp;
++ struct ext4_super_block *es;
++ unsigned long overflow;
++ ext4_grpblk_t bit;
++ struct buffer_head *gd_bh;
++ ext4_group_t block_group;
++ struct ext4_sb_info *sbi;
++ struct ext4_buddy e4b;
++ int err = 0;
++ int ret;
+
- # PARISC Architecture
-
- # POWERPC Architecture
-diff --git a/drivers/watchdog/Makefile b/drivers/watchdog/Makefile
-index 87483cc..ebc2114 100644
---- a/drivers/watchdog/Makefile
-+++ b/drivers/watchdog/Makefile
-@@ -93,6 +93,7 @@ obj-$(CONFIG_INDYDOG) += indydog.o
- obj-$(CONFIG_WDT_MTX1) += mtx-1_wdt.o
- obj-$(CONFIG_WDT_RM9K_GPI) += rm9k_wdt.o
- obj-$(CONFIG_AR7_WDT) += ar7_wdt.o
-+obj-$(CONFIG_TXX9_WDT) += txx9wdt.o
-
- # PARISC Architecture
-
-diff --git a/drivers/watchdog/alim1535_wdt.c b/drivers/watchdog/alim1535_wdt.c
-index b481cc0..2b1fbdb 100644
---- a/drivers/watchdog/alim1535_wdt.c
-+++ b/drivers/watchdog/alim1535_wdt.c
-@@ -413,18 +413,18 @@ static int __init watchdog_init(void)
- /* Calculate the watchdog's timeout */
- ali_settimer(timeout);
-
-- ret = misc_register(&ali_miscdev);
-+ ret = register_reboot_notifier(&ali_notifier);
- if (ret != 0) {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- WATCHDOG_MINOR, ret);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ ret);
- goto out;
- }
-
-- ret = register_reboot_notifier(&ali_notifier);
-+ ret = misc_register(&ali_miscdev);
- if (ret != 0) {
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- ret);
-- goto unreg_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ WATCHDOG_MINOR, ret);
-+ goto unreg_reboot;
- }
-
- printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d)\n",
-@@ -432,8 +432,8 @@ static int __init watchdog_init(void)
-
- out:
- return ret;
--unreg_miscdev:
-- misc_deregister(&ali_miscdev);
-+unreg_reboot:
-+ unregister_reboot_notifier(&ali_notifier);
- goto out;
- }
-
-@@ -449,8 +449,8 @@ static void __exit watchdog_exit(void)
- ali_stop();
-
- /* Deregister */
-- unregister_reboot_notifier(&ali_notifier);
- misc_deregister(&ali_miscdev);
-+ unregister_reboot_notifier(&ali_notifier);
- pci_dev_put(ali_pci);
- }
-
-diff --git a/drivers/watchdog/alim7101_wdt.c b/drivers/watchdog/alim7101_wdt.c
-index 67aed9f..238273c 100644
---- a/drivers/watchdog/alim7101_wdt.c
-+++ b/drivers/watchdog/alim7101_wdt.c
-@@ -377,18 +377,18 @@ static int __init alim7101_wdt_init(void)
- timeout);
- }
-
-- rc = misc_register(&wdt_miscdev);
-+ rc = register_reboot_notifier(&wdt_notifier);
- if (rc) {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- wdt_miscdev.minor, rc);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ rc);
- goto err_out;
- }
-
-- rc = register_reboot_notifier(&wdt_notifier);
-+ rc = misc_register(&wdt_miscdev);
- if (rc) {
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- rc);
-- goto err_out_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ wdt_miscdev.minor, rc);
-+ goto err_out_reboot;
- }
-
- if (nowayout) {
-@@ -399,8 +399,8 @@ static int __init alim7101_wdt_init(void)
- timeout, nowayout);
- return 0;
-
--err_out_miscdev:
-- misc_deregister(&wdt_miscdev);
-+err_out_reboot:
-+ unregister_reboot_notifier(&wdt_notifier);
- err_out:
- pci_dev_put(alim7101_pmu);
- return rc;
-diff --git a/drivers/watchdog/ar7_wdt.c b/drivers/watchdog/ar7_wdt.c
-index cdaab8c..2eb48c0 100644
---- a/drivers/watchdog/ar7_wdt.c
-+++ b/drivers/watchdog/ar7_wdt.c
-@@ -279,7 +279,7 @@ static int ar7_wdt_ioctl(struct inode *inode, struct file *file,
- }
- }
-
--static struct file_operations ar7_wdt_fops = {
-+static const struct file_operations ar7_wdt_fops = {
- .owner = THIS_MODULE,
- .write = ar7_wdt_write,
- .ioctl = ar7_wdt_ioctl,
-diff --git a/drivers/watchdog/bfin_wdt.c b/drivers/watchdog/bfin_wdt.c
-index 31dc7a6..472be10 100644
---- a/drivers/watchdog/bfin_wdt.c
-+++ b/drivers/watchdog/bfin_wdt.c
-@@ -390,7 +390,7 @@ static struct platform_driver bfin_wdt_driver = {
- .resume = bfin_wdt_resume,
- };
-
--static struct file_operations bfin_wdt_fops = {
-+static const struct file_operations bfin_wdt_fops = {
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .write = bfin_wdt_write,
-diff --git a/drivers/watchdog/it8712f_wdt.c b/drivers/watchdog/it8712f_wdt.c
-index 6330fc0..1b6d7d1 100644
---- a/drivers/watchdog/it8712f_wdt.c
-+++ b/drivers/watchdog/it8712f_wdt.c
-@@ -296,7 +296,7 @@ it8712f_wdt_release(struct inode *inode, struct file *file)
- return 0;
- }
-
--static struct file_operations it8712f_wdt_fops = {
-+static const struct file_operations it8712f_wdt_fops = {
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .write = it8712f_wdt_write,
-diff --git a/drivers/watchdog/mpc5200_wdt.c b/drivers/watchdog/mpc5200_wdt.c
-index 11f6a11..80a91d4 100644
---- a/drivers/watchdog/mpc5200_wdt.c
-+++ b/drivers/watchdog/mpc5200_wdt.c
-@@ -158,7 +158,7 @@ static int mpc5200_wdt_release(struct inode *inode, struct file *file)
- return 0;
- }
-
--static struct file_operations mpc5200_wdt_fops = {
-+static const struct file_operations mpc5200_wdt_fops = {
- .owner = THIS_MODULE,
- .write = mpc5200_wdt_write,
- .ioctl = mpc5200_wdt_ioctl,
-diff --git a/drivers/watchdog/mtx-1_wdt.c b/drivers/watchdog/mtx-1_wdt.c
-index dcfd401..9845174 100644
---- a/drivers/watchdog/mtx-1_wdt.c
-+++ b/drivers/watchdog/mtx-1_wdt.c
-@@ -180,7 +180,7 @@ static ssize_t mtx1_wdt_write(struct file *file, const char *buf, size_t count,
- return count;
- }
-
--static struct file_operations mtx1_wdt_fops = {
-+static const struct file_operations mtx1_wdt_fops = {
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .ioctl = mtx1_wdt_ioctl,
-diff --git a/drivers/watchdog/sbc60xxwdt.c b/drivers/watchdog/sbc60xxwdt.c
-index e4f3cb6..ef76f01 100644
---- a/drivers/watchdog/sbc60xxwdt.c
-+++ b/drivers/watchdog/sbc60xxwdt.c
-@@ -359,20 +359,20 @@ static int __init sbc60xxwdt_init(void)
- }
- }
-
-- rc = misc_register(&wdt_miscdev);
-+ rc = register_reboot_notifier(&wdt_notifier);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- wdt_miscdev.minor, rc);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ rc);
- goto err_out_region2;
- }
-
-- rc = register_reboot_notifier(&wdt_notifier);
-+ rc = misc_register(&wdt_miscdev);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- rc);
-- goto err_out_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ wdt_miscdev.minor, rc);
-+ goto err_out_reboot;
- }
-
- printk(KERN_INFO PFX "WDT driver for 60XX single board computer initialised. timeout=%d sec (nowayout=%d)\n",
-@@ -380,8 +380,8 @@ static int __init sbc60xxwdt_init(void)
-
- return 0;
-
--err_out_miscdev:
-- misc_deregister(&wdt_miscdev);
-+err_out_reboot:
-+ unregister_reboot_notifier(&wdt_notifier);
- err_out_region2:
- if ((wdt_stop != 0x45) && (wdt_stop != wdt_start))
- release_region(wdt_stop,1);
-diff --git a/drivers/watchdog/scx200_wdt.c b/drivers/watchdog/scx200_wdt.c
-index d4fd0fa..d55882b 100644
---- a/drivers/watchdog/scx200_wdt.c
-+++ b/drivers/watchdog/scx200_wdt.c
-@@ -231,17 +231,17 @@ static int __init scx200_wdt_init(void)
-
- sema_init(&open_semaphore, 1);
-
-- r = misc_register(&scx200_wdt_miscdev);
-+ r = register_reboot_notifier(&scx200_wdt_notifier);
- if (r) {
-+ printk(KERN_ERR NAME ": unable to register reboot notifier");
- release_region(scx200_cb_base + SCx200_WDT_OFFSET,
- SCx200_WDT_SIZE);
- return r;
- }
-
-- r = register_reboot_notifier(&scx200_wdt_notifier);
-+ r = misc_register(&scx200_wdt_miscdev);
- if (r) {
-- printk(KERN_ERR NAME ": unable to register reboot notifier");
-- misc_deregister(&scx200_wdt_miscdev);
-+ unregister_reboot_notifier(&scx200_wdt_notifier);
- release_region(scx200_cb_base + SCx200_WDT_OFFSET,
- SCx200_WDT_SIZE);
- return r;
-@@ -252,8 +252,8 @@ static int __init scx200_wdt_init(void)
-
- static void __exit scx200_wdt_cleanup(void)
- {
-- unregister_reboot_notifier(&scx200_wdt_notifier);
- misc_deregister(&scx200_wdt_miscdev);
-+ unregister_reboot_notifier(&scx200_wdt_notifier);
- release_region(scx200_cb_base + SCx200_WDT_OFFSET,
- SCx200_WDT_SIZE);
- }
-diff --git a/drivers/watchdog/txx9wdt.c b/drivers/watchdog/txx9wdt.c
++ *freed = 0;
++
++ ext4_mb_poll_new_transaction(sb, handle);
++
++ sbi = EXT4_SB(sb);
++ es = EXT4_SB(sb)->s_es;
++ if (block < le32_to_cpu(es->s_first_data_block) ||
++ block + count < block ||
++ block + count > ext4_blocks_count(es)) {
++ ext4_error(sb, __FUNCTION__,
++ "Freeing blocks not in datazone - "
++ "block = %lu, count = %lu", block, count);
++ goto error_return;
++ }
++
++ ext4_debug("freeing block %lu\n", block);
++
++ ac.ac_op = EXT4_MB_HISTORY_FREE;
++ ac.ac_inode = inode;
++ ac.ac_sb = sb;
++
++do_more:
++ overflow = 0;
++ ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
++
++ /*
++ * Check to see if we are freeing blocks across a group
++ * boundary.
++ */
++ if (bit + count > EXT4_BLOCKS_PER_GROUP(sb)) {
++ overflow = bit + count - EXT4_BLOCKS_PER_GROUP(sb);
++ count -= overflow;
++ }
++ bitmap_bh = read_block_bitmap(sb, block_group);
++ if (!bitmap_bh)
++ goto error_return;
++ gdp = ext4_get_group_desc(sb, block_group, &gd_bh);
++ if (!gdp)
++ goto error_return;
++
++ if (in_range(ext4_block_bitmap(sb, gdp), block, count) ||
++ in_range(ext4_inode_bitmap(sb, gdp), block, count) ||
++ in_range(block, ext4_inode_table(sb, gdp),
++ EXT4_SB(sb)->s_itb_per_group) ||
++ in_range(block + count - 1, ext4_inode_table(sb, gdp),
++ EXT4_SB(sb)->s_itb_per_group)) {
++
++ ext4_error(sb, __FUNCTION__,
++ "Freeing blocks in system zone - "
++ "Block = %lu, count = %lu", block, count);
++ }
++
++ BUFFER_TRACE(bitmap_bh, "getting write access");
++ err = ext4_journal_get_write_access(handle, bitmap_bh);
++ if (err)
++ goto error_return;
++
++ /*
++ * We are about to modify some metadata. Call the journal APIs
++ * to unshare ->b_data if a currently-committing transaction is
++ * using it
++ */
++ BUFFER_TRACE(gd_bh, "get_write_access");
++ err = ext4_journal_get_write_access(handle, gd_bh);
++ if (err)
++ goto error_return;
++
++ err = ext4_mb_load_buddy(sb, block_group, &e4b);
++ if (err)
++ goto error_return;
++
++#ifdef AGGRESSIVE_CHECK
++ {
++ int i;
++ for (i = 0; i < count; i++)
++ BUG_ON(!mb_test_bit(bit + i, bitmap_bh->b_data));
++ }
++#endif
++ mb_clear_bits(sb_bgl_lock(sbi, block_group), bitmap_bh->b_data,
++ bit, count);
++
++ /* We dirtied the bitmap block */
++ BUFFER_TRACE(bitmap_bh, "dirtied bitmap block");
++ err = ext4_journal_dirty_metadata(handle, bitmap_bh);
++
++ ac.ac_b_ex.fe_group = block_group;
++ ac.ac_b_ex.fe_start = bit;
++ ac.ac_b_ex.fe_len = count;
++ ext4_mb_store_history(&ac);
++
++ if (metadata) {
++ /* blocks being freed are metadata. these blocks shouldn't
++ * be used until this transaction is committed */
++ ext4_mb_free_metadata(handle, &e4b, block_group, bit, count);
++ } else {
++ ext4_lock_group(sb, block_group);
++ err = mb_free_blocks(inode, &e4b, bit, count);
++ ext4_mb_return_to_preallocation(inode, &e4b, block, count);
++ ext4_unlock_group(sb, block_group);
++ BUG_ON(err != 0);
++ }
++
++ spin_lock(sb_bgl_lock(sbi, block_group));
++ gdp->bg_free_blocks_count =
++ cpu_to_le16(le16_to_cpu(gdp->bg_free_blocks_count) + count);
++ gdp->bg_checksum = ext4_group_desc_csum(sbi, block_group, gdp);
++ spin_unlock(sb_bgl_lock(sbi, block_group));
++ percpu_counter_add(&sbi->s_freeblocks_counter, count);
++
++ ext4_mb_release_desc(&e4b);
++
++ *freed += count;
++
++ /* And the group descriptor block */
++ BUFFER_TRACE(gd_bh, "dirtied group descriptor block");
++ ret = ext4_journal_dirty_metadata(handle, gd_bh);
++ if (!err)
++ err = ret;
++
++ if (overflow && !err) {
++ block += count;
++ count = overflow;
++ put_bh(bitmap_bh);
++ goto do_more;
++ }
++ sb->s_dirt = 1;
++error_return:
++ brelse(bitmap_bh);
++ ext4_std_error(sb, err);
++ return;
++}
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
new file mode 100644
-index 0000000..328b3c7
+index 0000000..3ebc233
--- /dev/null
-+++ b/drivers/watchdog/txx9wdt.c
-@@ -0,0 +1,276 @@
++++ b/fs/ext4/migrate.c
+@@ -0,0 +1,560 @@
+/*
-+ * txx9wdt: A Hardware Watchdog Driver for TXx9 SoCs
++ * Copyright IBM Corporation, 2007
++ * Author Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
+ *
-+ * Copyright (C) 2007 Atsushi Nemoto <anemo at mba.ocn.ne.jp>
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of version 2.1 of the GNU Lesser General Public License
++ * as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it would be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
+ */
++
+#include <linux/module.h>
-+#include <linux/moduleparam.h>
-+#include <linux/types.h>
-+#include <linux/miscdevice.h>
-+#include <linux/watchdog.h>
-+#include <linux/fs.h>
-+#include <linux/reboot.h>
-+#include <linux/init.h>
-+#include <linux/uaccess.h>
-+#include <linux/platform_device.h>
-+#include <linux/clk.h>
-+#include <linux/err.h>
-+#include <linux/io.h>
-+#include <asm/txx9tmr.h>
++#include <linux/ext4_jbd2.h>
++#include <linux/ext4_fs_extents.h>
+
-+#define TIMER_MARGIN 60 /* Default is 60 seconds */
++/*
++ * The contiguous blocks details which can be
++ * represented by a single extent
++ */
++struct list_blocks_struct {
++ ext4_lblk_t first_block, last_block;
++ ext4_fsblk_t first_pblock, last_pblock;
++};
+
-+static int timeout = TIMER_MARGIN; /* in seconds */
-+module_param(timeout, int, 0);
-+MODULE_PARM_DESC(timeout,
-+ "Watchdog timeout in seconds. "
-+ "(0<timeout<((2^" __MODULE_STRING(TXX9_TIMER_BITS) ")/(IMCLK/256)), "
-+ "default=" __MODULE_STRING(TIMER_MARGIN) ")");
++static int finish_range(handle_t *handle, struct inode *inode,
++ struct list_blocks_struct *lb)
+
-+static int nowayout = WATCHDOG_NOWAYOUT;
-+module_param(nowayout, int, 0);
-+MODULE_PARM_DESC(nowayout,
-+ "Watchdog cannot be stopped once started "
-+ "(default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
++{
++ int retval = 0, needed;
++ struct ext4_extent newext;
++ struct ext4_ext_path *path;
++ if (lb->first_pblock == 0)
++ return 0;
+
-+#define WD_TIMER_CCD 7 /* 1/256 */
-+#define WD_TIMER_CLK (clk_get_rate(txx9_imclk) / (2 << WD_TIMER_CCD))
-+#define WD_MAX_TIMEOUT ((0xffffffff >> (32 - TXX9_TIMER_BITS)) / WD_TIMER_CLK)
++	/* Add the extent to temp inode */
++ newext.ee_block = cpu_to_le32(lb->first_block);
++ newext.ee_len = cpu_to_le16(lb->last_block - lb->first_block + 1);
++ ext4_ext_store_pblock(&newext, lb->first_pblock);
++ path = ext4_ext_find_extent(inode, lb->first_block, NULL);
+
-+static unsigned long txx9wdt_alive;
-+static int expect_close;
-+static struct txx9_tmr_reg __iomem *txx9wdt_reg;
-+static struct clk *txx9_imclk;
++ if (IS_ERR(path)) {
++ retval = PTR_ERR(path);
++ goto err_out;
++ }
+
-+static void txx9wdt_ping(void)
++ /*
++	 * Calculate the credit needed for inserting this extent.
++	 * Since we are doing this in a loop we may accumulate extra
++	 * credit. But below we try not to accumulate too much
++	 * of it by restarting the journal.
++ */
++ needed = ext4_ext_calc_credits_for_insert(inode, path);
++
++ /*
++	 * Make sure the credit we accumulated is not too high
++ */
++ if (needed && handle->h_buffer_credits >= EXT4_RESERVE_TRANS_BLOCKS) {
++ retval = ext4_journal_restart(handle, needed);
++ if (retval)
++ goto err_out;
++ }
++ if (needed) {
++ retval = ext4_journal_extend(handle, needed);
++ if (retval != 0) {
++ /*
++			 * If we are not able to extend the journal, restart it
++ */
++ retval = ext4_journal_restart(handle, needed);
++ if (retval)
++ goto err_out;
++ }
++ }
++ retval = ext4_ext_insert_extent(handle, inode, path, &newext);
++err_out:
++ lb->first_pblock = 0;
++ return retval;
++}
++
++static int update_extent_range(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t pblock, ext4_lblk_t blk_num,
++ struct list_blocks_struct *lb)
+{
-+ __raw_writel(TXx9_TMWTMR_TWIE | TXx9_TMWTMR_TWC, &txx9wdt_reg->wtmr);
++ int retval;
++ /*
++ * See if we can add on to the existing range (if it exists)
++ */
++ if (lb->first_pblock &&
++ (lb->last_pblock+1 == pblock) &&
++ (lb->last_block+1 == blk_num)) {
++ lb->last_pblock = pblock;
++ lb->last_block = blk_num;
++ return 0;
++ }
++ /*
++ * Start a new range.
++ */
++ retval = finish_range(handle, inode, lb);
++ lb->first_pblock = lb->last_pblock = pblock;
++ lb->first_block = lb->last_block = blk_num;
++
++ return retval;
++}
++
++static int update_ind_extent_range(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
++ struct list_blocks_struct *lb)
++{
++ struct buffer_head *bh;
++ __le32 *i_data;
++ int i, retval = 0;
++ ext4_lblk_t blk_count = *blk_nump;
++ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++
++ if (!pblock) {
++ /* Only update the file block number */
++ *blk_nump += max_entries;
++ return 0;
++ }
++
++ bh = sb_bread(inode->i_sb, pblock);
++ if (!bh)
++ return -EIO;
++
++ i_data = (__le32 *)bh->b_data;
++ for (i = 0; i < max_entries; i++, blk_count++) {
++ if (i_data[i]) {
++ retval = update_extent_range(handle, inode,
++ le32_to_cpu(i_data[i]),
++ blk_count, lb);
++ if (retval)
++ break;
++ }
++ }
++
++ /* Update the file block number */
++ *blk_nump = blk_count;
++ put_bh(bh);
++ return retval;
++
+}
+
-+static void txx9wdt_start(void)
-+{
-+ __raw_writel(WD_TIMER_CLK * timeout, &txx9wdt_reg->cpra);
-+ __raw_writel(WD_TIMER_CCD, &txx9wdt_reg->ccdr);
-+ __raw_writel(0, &txx9wdt_reg->tisr); /* clear pending interrupt */
-+ __raw_writel(TXx9_TMTCR_TCE | TXx9_TMTCR_CCDE | TXx9_TMTCR_TMODE_WDOG,
-+ &txx9wdt_reg->tcr);
-+ __raw_writel(TXx9_TMWTMR_TWIE | TXx9_TMWTMR_TWC, &txx9wdt_reg->wtmr);
-+}
++static int update_dind_extent_range(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
++ struct list_blocks_struct *lb)
++{
++ struct buffer_head *bh;
++ __le32 *i_data;
++ int i, retval = 0;
++ ext4_lblk_t blk_count = *blk_nump;
++ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++
++ if (!pblock) {
++ /* Only update the file block number */
++ *blk_nump += max_entries * max_entries;
++ return 0;
++ }
++ bh = sb_bread(inode->i_sb, pblock);
++ if (!bh)
++ return -EIO;
++
++ i_data = (__le32 *)bh->b_data;
++ for (i = 0; i < max_entries; i++) {
++ if (i_data[i]) {
++ retval = update_ind_extent_range(handle, inode,
++ le32_to_cpu(i_data[i]),
++ &blk_count, lb);
++ if (retval)
++ break;
++ } else {
++ /* Only update the file block number */
++ blk_count += max_entries;
++ }
++ }
++
++ /* Update the file block number */
++ *blk_nump = blk_count;
++ put_bh(bh);
++ return retval;
+
-+static void txx9wdt_stop(void)
-+{
-+ __raw_writel(TXx9_TMWTMR_WDIS, &txx9wdt_reg->wtmr);
-+ __raw_writel(__raw_readl(&txx9wdt_reg->tcr) & ~TXx9_TMTCR_TCE,
-+ &txx9wdt_reg->tcr);
+}
+
-+static int txx9wdt_open(struct inode *inode, struct file *file)
++static int update_tind_extent_range(handle_t *handle, struct inode *inode,
++ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
++ struct list_blocks_struct *lb)
+{
-+ if (test_and_set_bit(0, &txx9wdt_alive))
-+ return -EBUSY;
++ struct buffer_head *bh;
++ __le32 *i_data;
++ int i, retval = 0;
++ ext4_lblk_t blk_count = *blk_nump;
++ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
+
-+ if (__raw_readl(&txx9wdt_reg->tcr) & TXx9_TMTCR_TCE) {
-+ clear_bit(0, &txx9wdt_alive);
-+ return -EBUSY;
++ if (!pblock) {
++ /* Only update the file block number */
++ *blk_nump += max_entries * max_entries * max_entries;
++ return 0;
+ }
++ bh = sb_bread(inode->i_sb, pblock);
++ if (!bh)
++ return -EIO;
+
-+ if (nowayout)
-+ __module_get(THIS_MODULE);
++ i_data = (__le32 *)bh->b_data;
++ for (i = 0; i < max_entries; i++) {
++ if (i_data[i]) {
++ retval = update_dind_extent_range(handle, inode,
++ le32_to_cpu(i_data[i]),
++ &blk_count, lb);
++ if (retval)
++ break;
++ } else
++ /* Only update the file block number */
++ blk_count += max_entries * max_entries;
++ }
++ /* Update the file block number */
++ *blk_nump = blk_count;
++ put_bh(bh);
++ return retval;
+
-+ txx9wdt_start();
-+ return nonseekable_open(inode, file);
+}
+
-+static int txx9wdt_release(struct inode *inode, struct file *file)
++static int free_dind_blocks(handle_t *handle,
++ struct inode *inode, __le32 i_data)
+{
-+ if (expect_close)
-+ txx9wdt_stop();
-+ else {
-+ printk(KERN_CRIT "txx9wdt: "
-+ "Unexpected close, not stopping watchdog!\n");
-+ txx9wdt_ping();
++ int i;
++ __le32 *tmp_idata;
++ struct buffer_head *bh;
++ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++
++ bh = sb_bread(inode->i_sb, le32_to_cpu(i_data));
++ if (!bh)
++ return -EIO;
++
++ tmp_idata = (__le32 *)bh->b_data;
++ for (i = 0; i < max_entries; i++) {
++ if (tmp_idata[i])
++ ext4_free_blocks(handle, inode,
++ le32_to_cpu(tmp_idata[i]), 1, 1);
+ }
-+ clear_bit(0, &txx9wdt_alive);
-+ expect_close = 0;
++ put_bh(bh);
++ ext4_free_blocks(handle, inode, le32_to_cpu(i_data), 1, 1);
+ return 0;
+}
+
-+static ssize_t txx9wdt_write(struct file *file, const char __user *data,
-+ size_t len, loff_t *ppos)
++static int free_tind_blocks(handle_t *handle,
++ struct inode *inode, __le32 i_data)
+{
-+ if (len) {
-+ if (!nowayout) {
-+ size_t i;
++ int i, retval = 0;
++ __le32 *tmp_idata;
++ struct buffer_head *bh;
++ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
+
-+ expect_close = 0;
-+ for (i = 0; i != len; i++) {
-+ char c;
-+ if (get_user(c, data + i))
-+ return -EFAULT;
-+ if (c == 'V')
-+ expect_close = 1;
++ bh = sb_bread(inode->i_sb, le32_to_cpu(i_data));
++ if (!bh)
++ return -EIO;
++
++ tmp_idata = (__le32 *)bh->b_data;
++ for (i = 0; i < max_entries; i++) {
++ if (tmp_idata[i]) {
++ retval = free_dind_blocks(handle,
++ inode, tmp_idata[i]);
++ if (retval) {
++ put_bh(bh);
++ return retval;
+ }
+ }
-+ txx9wdt_ping();
+ }
-+ return len;
++ put_bh(bh);
++ ext4_free_blocks(handle, inode, le32_to_cpu(i_data), 1, 1);
++ return 0;
+}
+
-+static int txx9wdt_ioctl(struct inode *inode, struct file *file,
-+ unsigned int cmd, unsigned long arg)
++static int free_ind_block(handle_t *handle, struct inode *inode)
+{
-+ void __user *argp = (void __user *)arg;
-+ int __user *p = argp;
-+ int new_timeout;
-+ static struct watchdog_info ident = {
-+ .options = WDIOF_SETTIMEOUT |
-+ WDIOF_KEEPALIVEPING |
-+ WDIOF_MAGICCLOSE,
-+ .firmware_version = 0,
-+ .identity = "Hardware Watchdog for TXx9",
-+ };
++ int retval;
++ struct ext4_inode_info *ei = EXT4_I(inode);
+
-+ switch (cmd) {
-+ default:
-+ return -ENOTTY;
-+ case WDIOC_GETSUPPORT:
-+ return copy_to_user(argp, &ident, sizeof(ident)) ? -EFAULT : 0;
-+ case WDIOC_GETSTATUS:
-+ case WDIOC_GETBOOTSTATUS:
-+ return put_user(0, p);
-+ case WDIOC_KEEPALIVE:
-+ txx9wdt_ping();
-+ return 0;
-+ case WDIOC_SETTIMEOUT:
-+ if (get_user(new_timeout, p))
-+ return -EFAULT;
-+ if (new_timeout < 1 || new_timeout > WD_MAX_TIMEOUT)
-+ return -EINVAL;
-+ timeout = new_timeout;
-+ txx9wdt_stop();
-+ txx9wdt_start();
-+ /* Fall */
-+ case WDIOC_GETTIMEOUT:
-+ return put_user(timeout, p);
++ if (ei->i_data[EXT4_IND_BLOCK])
++ ext4_free_blocks(handle, inode,
++ le32_to_cpu(ei->i_data[EXT4_IND_BLOCK]), 1, 1);
++
++ if (ei->i_data[EXT4_DIND_BLOCK]) {
++ retval = free_dind_blocks(handle, inode,
++ ei->i_data[EXT4_DIND_BLOCK]);
++ if (retval)
++ return retval;
++ }
++
++ if (ei->i_data[EXT4_TIND_BLOCK]) {
++ retval = free_tind_blocks(handle, inode,
++ ei->i_data[EXT4_TIND_BLOCK]);
++ if (retval)
++ return retval;
+ }
++ return 0;
+}
+
-+static int txx9wdt_notify_sys(struct notifier_block *this, unsigned long code,
-+ void *unused)
++static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
++ struct inode *tmp_inode, int retval)
+{
-+ if (code == SYS_DOWN || code == SYS_HALT)
-+ txx9wdt_stop();
-+ return NOTIFY_DONE;
-+}
++ struct ext4_inode_info *ei = EXT4_I(inode);
++ struct ext4_inode_info *tmp_ei = EXT4_I(tmp_inode);
+
-+static const struct file_operations txx9wdt_fops = {
-+ .owner = THIS_MODULE,
-+ .llseek = no_llseek,
-+ .write = txx9wdt_write,
-+ .ioctl = txx9wdt_ioctl,
-+ .open = txx9wdt_open,
-+ .release = txx9wdt_release,
-+};
++ retval = free_ind_block(handle, inode);
++ if (retval)
++ goto err_out;
+
-+static struct miscdevice txx9wdt_miscdev = {
-+ .minor = WATCHDOG_MINOR,
-+ .name = "watchdog",
-+ .fops = &txx9wdt_fops,
-+};
++ /*
++ * One credit accounted for writing the
++ * i_data field of the original inode
++ */
++ retval = ext4_journal_extend(handle, 1);
++ if (retval != 0) {
++ retval = ext4_journal_restart(handle, 1);
++ if (retval)
++ goto err_out;
++ }
+
-+static struct notifier_block txx9wdt_notifier = {
-+ .notifier_call = txx9wdt_notify_sys
-+};
++ /*
++	 * We have the extent map built with the tmp inode.
++ * Now copy the i_data across
++ */
++ ei->i_flags |= EXT4_EXTENTS_FL;
++ memcpy(ei->i_data, tmp_ei->i_data, sizeof(ei->i_data));
+
-+static int __init txx9wdt_probe(struct platform_device *dev)
++ /*
++ * Update i_blocks with the new blocks that got
++ * allocated while adding extents for extent index
++ * blocks.
++ *
++ * While converting to extents we need not
++	 * update the original inode i_blocks for extent blocks
++ * via quota APIs. The quota update happened via tmp_inode already.
++ */
++ spin_lock(&inode->i_lock);
++ inode->i_blocks += tmp_inode->i_blocks;
++ spin_unlock(&inode->i_lock);
++
++ ext4_mark_inode_dirty(handle, inode);
++err_out:
++ return retval;
++}
++
++static int free_ext_idx(handle_t *handle, struct inode *inode,
++ struct ext4_extent_idx *ix)
+{
-+ struct resource *res;
-+ int ret;
++ int i, retval = 0;
++ ext4_fsblk_t block;
++ struct buffer_head *bh;
++ struct ext4_extent_header *eh;
+
-+ txx9_imclk = clk_get(NULL, "imbus_clk");
-+ if (IS_ERR(txx9_imclk)) {
-+ ret = PTR_ERR(txx9_imclk);
-+ txx9_imclk = NULL;
-+ goto exit;
++ block = idx_pblock(ix);
++ bh = sb_bread(inode->i_sb, block);
++ if (!bh)
++ return -EIO;
++
++ eh = (struct ext4_extent_header *)bh->b_data;
++ if (eh->eh_depth != 0) {
++ ix = EXT_FIRST_INDEX(eh);
++ for (i = 0; i < le16_to_cpu(eh->eh_entries); i++, ix++) {
++ retval = free_ext_idx(handle, inode, ix);
++ if (retval)
++ break;
++ }
+ }
-+ ret = clk_enable(txx9_imclk);
-+ if (ret) {
-+ clk_put(txx9_imclk);
-+ txx9_imclk = NULL;
-+ goto exit;
++ put_bh(bh);
++ ext4_free_blocks(handle, inode, block, 1, 1);
++ return retval;
++}
++
++/*
++ * Free the extent meta data blocks only
++ */
++static int free_ext_block(handle_t *handle, struct inode *inode)
++{
++ int i, retval = 0;
++ struct ext4_inode_info *ei = EXT4_I(inode);
++ struct ext4_extent_header *eh = (struct ext4_extent_header *)ei->i_data;
++ struct ext4_extent_idx *ix;
++ if (eh->eh_depth == 0)
++ /*
++ * No extra blocks allocated for extent meta data
++ */
++ return 0;
++ ix = EXT_FIRST_INDEX(eh);
++ for (i = 0; i < le16_to_cpu(eh->eh_entries); i++, ix++) {
++ retval = free_ext_idx(handle, inode, ix);
++ if (retval)
++ return retval;
+ }
++ return retval;
+
-+ res = platform_get_resource(dev, IORESOURCE_MEM, 0);
-+ if (!res)
-+ goto exit_busy;
-+ if (!devm_request_mem_region(&dev->dev,
-+ res->start, res->end - res->start + 1,
-+ "txx9wdt"))
-+ goto exit_busy;
-+ txx9wdt_reg = devm_ioremap(&dev->dev,
-+ res->start, res->end - res->start + 1);
-+ if (!txx9wdt_reg)
-+ goto exit_busy;
++}
+
-+ ret = register_reboot_notifier(&txx9wdt_notifier);
-+ if (ret)
-+ goto exit;
++int ext4_ext_migrate(struct inode *inode, struct file *filp,
++ unsigned int cmd, unsigned long arg)
++{
++ handle_t *handle;
++ int retval = 0, i;
++ __le32 *i_data;
++ ext4_lblk_t blk_count = 0;
++ struct ext4_inode_info *ei;
++ struct inode *tmp_inode = NULL;
++ struct list_blocks_struct lb;
++ unsigned long max_entries;
+
-+ ret = misc_register(&txx9wdt_miscdev);
-+ if (ret) {
-+ unregister_reboot_notifier(&txx9wdt_notifier);
-+ goto exit;
++ if (!test_opt(inode->i_sb, EXTENTS))
++ /*
++		 * If mounted with noextents, we don't allow the migration
++ */
++ return -EINVAL;
++
++ if ((EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
++ return -EINVAL;
++
++ down_write(&EXT4_I(inode)->i_data_sem);
++ handle = ext4_journal_start(inode,
++ EXT4_DATA_TRANS_BLOCKS(inode->i_sb) +
++ EXT4_INDEX_EXTRA_TRANS_BLOCKS + 3 +
++ 2 * EXT4_QUOTA_INIT_BLOCKS(inode->i_sb)
++ + 1);
++ if (IS_ERR(handle)) {
++ retval = PTR_ERR(handle);
++ goto err_out;
++ }
++ tmp_inode = ext4_new_inode(handle,
++ inode->i_sb->s_root->d_inode,
++ S_IFREG);
++ if (IS_ERR(tmp_inode)) {
++ retval = -ENOMEM;
++ ext4_journal_stop(handle);
++ tmp_inode = NULL;
++ goto err_out;
+ }
++ i_size_write(tmp_inode, i_size_read(inode));
++ /*
++ * We don't want the inode to be reclaimed
++ * if we got interrupted in between. We have
++ * this tmp inode carrying reference to the
++ * data blocks of the original file. We set
++ * the i_nlink to zero at the last stage after
++ * switching the original file to extent format
++ */
++ tmp_inode->i_nlink = 1;
+
-+ printk(KERN_INFO "Hardware Watchdog Timer for TXx9: "
-+ "timeout=%d sec (max %ld) (nowayout= %d)\n",
-+ timeout, WD_MAX_TIMEOUT, nowayout);
++ ext4_ext_tree_init(handle, tmp_inode);
++ ext4_orphan_add(handle, tmp_inode);
++ ext4_journal_stop(handle);
+
-+ return 0;
-+exit_busy:
-+ ret = -EBUSY;
-+exit:
-+ if (txx9_imclk) {
-+ clk_disable(txx9_imclk);
-+ clk_put(txx9_imclk);
++ ei = EXT4_I(inode);
++ i_data = ei->i_data;
++ memset(&lb, 0, sizeof(lb));
++
++ /* 32 bit block address 4 bytes */
++ max_entries = inode->i_sb->s_blocksize >> 2;
++
++ /*
++ * start with one credit accounted for
++ * superblock modification.
++ *
++ * For the tmp_inode we already have committed the
++ * transaction that created the inode. Later, as and
++ * when we add extents, we extend the journal
++ */
++ handle = ext4_journal_start(inode, 1);
++ for (i = 0; i < EXT4_NDIR_BLOCKS; i++, blk_count++) {
++ if (i_data[i]) {
++ retval = update_extent_range(handle, tmp_inode,
++ le32_to_cpu(i_data[i]),
++ blk_count, &lb);
++ if (retval)
++ goto err_out;
++ }
+ }
-+ return ret;
-+}
++ if (i_data[EXT4_IND_BLOCK]) {
++ retval = update_ind_extent_range(handle, tmp_inode,
++ le32_to_cpu(i_data[EXT4_IND_BLOCK]),
++ &blk_count, &lb);
++ if (retval)
++ goto err_out;
++ } else
++ blk_count += max_entries;
++ if (i_data[EXT4_DIND_BLOCK]) {
++ retval = update_dind_extent_range(handle, tmp_inode,
++ le32_to_cpu(i_data[EXT4_DIND_BLOCK]),
++ &blk_count, &lb);
++ if (retval)
++ goto err_out;
++ } else
++ blk_count += max_entries * max_entries;
++ if (i_data[EXT4_TIND_BLOCK]) {
++ retval = update_tind_extent_range(handle, tmp_inode,
++ le32_to_cpu(i_data[EXT4_TIND_BLOCK]),
++ &blk_count, &lb);
++ if (retval)
++ goto err_out;
++ }
++ /*
++ * Build the last extent
++ */
++ retval = finish_range(handle, tmp_inode, &lb);
++err_out:
++ /*
++ * We are either freeing extent information or indirect
++ * blocks. During this we touch superblock, group descriptor
++ * and block bitmap. Later we mark the tmp_inode dirty
++ * via ext4_ext_tree_init. So allocate a credit of 4.
++ * We may also update quota (user and group).
++ *
++ * FIXME!! we may be touching bitmaps in different block groups.
++ */
++ if (ext4_journal_extend(handle,
++ 4 + 2*EXT4_QUOTA_TRANS_BLOCKS(inode->i_sb)) != 0)
++ ext4_journal_restart(handle,
++ 4 + 2*EXT4_QUOTA_TRANS_BLOCKS(inode->i_sb));
++ if (retval)
++ /*
++ * On failure, delete the extent information carried by the
++ * tmp_inode
++ */
++ free_ext_block(handle, tmp_inode);
++ else
++ retval = ext4_ext_swap_inode_data(handle, inode,
++ tmp_inode, retval);
+
-+static int __exit txx9wdt_remove(struct platform_device *dev)
-+{
-+ misc_deregister(&txx9wdt_miscdev);
-+ unregister_reboot_notifier(&txx9wdt_notifier);
-+ clk_disable(txx9_imclk);
-+ clk_put(txx9_imclk);
-+ return 0;
-+}
++ /*
++ * Mark the tmp_inode as of size zero
++ */
++ i_size_write(tmp_inode, 0);
+
-+static struct platform_driver txx9wdt_driver = {
-+ .remove = __exit_p(txx9wdt_remove),
-+ .driver = {
-+ .name = "txx9wdt",
-+ .owner = THIS_MODULE,
-+ },
-+};
++ /*
++ * set the i_blocks count to zero
++ * so that the ext4_delete_inode does the
++ * right job
++ *
++ * We don't need to take the i_lock because
++ * the inode is not visible to user space.
++ */
++ tmp_inode->i_blocks = 0;
+
-+static int __init watchdog_init(void)
-+{
-+ return platform_driver_probe(&txx9wdt_driver, txx9wdt_probe);
-+}
++ /* Reset the extent details */
++ ext4_ext_tree_init(handle, tmp_inode);
+
-+static void __exit watchdog_exit(void)
-+{
-+ platform_driver_unregister(&txx9wdt_driver);
-+}
++ /*
++ * Set the i_nlink to zero so that
++ * generic_drop_inode really deletes the
++ * inode
++ */
++ tmp_inode->i_nlink = 0;
+
-+module_init(watchdog_init);
-+module_exit(watchdog_exit);
++ ext4_journal_stop(handle);
+
-+MODULE_DESCRIPTION("TXx9 Watchdog Driver");
-+MODULE_LICENSE("GPL");
-+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
-diff --git a/drivers/watchdog/w83877f_wdt.c b/drivers/watchdog/w83877f_wdt.c
-index bcc9d48..f510a3a 100644
---- a/drivers/watchdog/w83877f_wdt.c
-+++ b/drivers/watchdog/w83877f_wdt.c
-@@ -373,20 +373,20 @@ static int __init w83877f_wdt_init(void)
- goto err_out_region1;
- }
++ up_write(&EXT4_I(inode)->i_data_sem);
++
++ if (tmp_inode)
++ iput(tmp_inode);
++
++ return retval;
++}
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 94ee6f3..67b6d8a 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -51,7 +51,7 @@
-- rc = misc_register(&wdt_miscdev);
-+ rc = register_reboot_notifier(&wdt_notifier);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- wdt_miscdev.minor, rc);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ rc);
- goto err_out_region2;
- }
+ static struct buffer_head *ext4_append(handle_t *handle,
+ struct inode *inode,
+- u32 *block, int *err)
++ ext4_lblk_t *block, int *err)
+ {
+ struct buffer_head *bh;
-- rc = register_reboot_notifier(&wdt_notifier);
-+ rc = misc_register(&wdt_miscdev);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- rc);
-- goto err_out_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ wdt_miscdev.minor, rc);
-+ goto err_out_reboot;
- }
+@@ -144,8 +144,8 @@ struct dx_map_entry
+ u16 size;
+ };
- printk(KERN_INFO PFX "WDT driver for W83877F initialised. timeout=%d sec (nowayout=%d)\n",
-@@ -394,8 +394,8 @@ static int __init w83877f_wdt_init(void)
+-static inline unsigned dx_get_block (struct dx_entry *entry);
+-static void dx_set_block (struct dx_entry *entry, unsigned value);
++static inline ext4_lblk_t dx_get_block(struct dx_entry *entry);
++static void dx_set_block(struct dx_entry *entry, ext4_lblk_t value);
+ static inline unsigned dx_get_hash (struct dx_entry *entry);
+ static void dx_set_hash (struct dx_entry *entry, unsigned value);
+ static unsigned dx_get_count (struct dx_entry *entries);
+@@ -166,7 +166,8 @@ static void dx_sort_map(struct dx_map_entry *map, unsigned count);
+ static struct ext4_dir_entry_2 *dx_move_dirents (char *from, char *to,
+ struct dx_map_entry *offsets, int count);
+ static struct ext4_dir_entry_2* dx_pack_dirents (char *base, int size);
+-static void dx_insert_block (struct dx_frame *frame, u32 hash, u32 block);
++static void dx_insert_block(struct dx_frame *frame,
++ u32 hash, ext4_lblk_t block);
+ static int ext4_htree_next_block(struct inode *dir, __u32 hash,
+ struct dx_frame *frame,
+ struct dx_frame *frames,
+@@ -181,12 +182,12 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
+ * Mask them off for now.
+ */
- return 0;
+-static inline unsigned dx_get_block (struct dx_entry *entry)
++static inline ext4_lblk_t dx_get_block(struct dx_entry *entry)
+ {
+ return le32_to_cpu(entry->block) & 0x00ffffff;
+ }
--err_out_miscdev:
-- misc_deregister(&wdt_miscdev);
-+err_out_reboot:
-+ unregister_reboot_notifier(&wdt_notifier);
- err_out_region2:
- release_region(WDT_PING,1);
- err_out_region1:
-diff --git a/drivers/watchdog/w83977f_wdt.c b/drivers/watchdog/w83977f_wdt.c
-index b475529..b209bcd 100644
---- a/drivers/watchdog/w83977f_wdt.c
-+++ b/drivers/watchdog/w83977f_wdt.c
-@@ -494,20 +494,20 @@ static int __init w83977f_wdt_init(void)
- goto err_out;
+-static inline void dx_set_block (struct dx_entry *entry, unsigned value)
++static inline void dx_set_block(struct dx_entry *entry, ext4_lblk_t value)
+ {
+ entry->block = cpu_to_le32(value);
+ }
+@@ -243,8 +244,8 @@ static void dx_show_index (char * label, struct dx_entry *entries)
+ int i, n = dx_get_count (entries);
+ printk("%s index ", label);
+ for (i = 0; i < n; i++) {
+- printk("%x->%u ", i? dx_get_hash(entries + i) :
+- 0, dx_get_block(entries + i));
++ printk("%x->%lu ", i? dx_get_hash(entries + i) :
++ 0, (unsigned long)dx_get_block(entries + i));
}
-
-- rc = misc_register(&wdt_miscdev);
-+ rc = register_reboot_notifier(&wdt_notifier);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- wdt_miscdev.minor, rc);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ rc);
- goto err_out_region;
+ printk("\n");
+ }
+@@ -280,7 +281,7 @@ static struct stats dx_show_leaf(struct dx_hash_info *hinfo, struct ext4_dir_ent
+ space += EXT4_DIR_REC_LEN(de->name_len);
+ names++;
+ }
+- de = (struct ext4_dir_entry_2 *) ((char *) de + le16_to_cpu(de->rec_len));
++ de = ext4_next_entry(de);
}
-
-- rc = register_reboot_notifier(&wdt_notifier);
-+ rc = misc_register(&wdt_miscdev);
- if (rc)
+ printk("(%i)\n", names);
+ return (struct stats) { names, space, 1 };
+@@ -297,7 +298,8 @@ struct stats dx_show_entries(struct dx_hash_info *hinfo, struct inode *dir,
+ printk("%i indexed blocks...\n", count);
+ for (i = 0; i < count; i++, entries++)
{
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- rc);
-- goto err_out_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ wdt_miscdev.minor, rc);
-+ goto err_out_reboot;
- }
-
- printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d testmode=%d)\n",
-@@ -515,8 +515,8 @@ static int __init w83977f_wdt_init(void)
-
- return 0;
-
--err_out_miscdev:
-- misc_deregister(&wdt_miscdev);
-+err_out_reboot:
-+ unregister_reboot_notifier(&wdt_notifier);
- err_out_region:
- release_region(IO_INDEX_PORT,2);
- err_out:
-diff --git a/drivers/watchdog/wdt.c b/drivers/watchdog/wdt.c
-index 53d0bb4..756fb15 100644
---- a/drivers/watchdog/wdt.c
-+++ b/drivers/watchdog/wdt.c
-@@ -70,6 +70,8 @@ MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default=" _
- static int io=0x240;
- static int irq=11;
-
-+static DEFINE_SPINLOCK(wdt_lock);
-+
- module_param(io, int, 0);
- MODULE_PARM_DESC(io, "WDT io port (default=0x240)");
- module_param(irq, int, 0);
-@@ -109,6 +111,8 @@ static void wdt_ctr_load(int ctr, int val)
-
- static int wdt_start(void)
+- u32 block = dx_get_block(entries), hash = i? dx_get_hash(entries): 0;
++ ext4_lblk_t block = dx_get_block(entries);
++ ext4_lblk_t hash = i ? dx_get_hash(entries): 0;
+ u32 range = i < count - 1? (dx_get_hash(entries + 1) - hash): ~hash;
+ struct stats stats;
+ printk("%s%3u:%03u hash %8x/%8x ",levels?"":" ", i, block, hash, range);
+@@ -551,7 +553,8 @@ static int ext4_htree_next_block(struct inode *dir, __u32 hash,
+ */
+ static inline struct ext4_dir_entry_2 *ext4_next_entry(struct ext4_dir_entry_2 *p)
{
-+ unsigned long flags;
-+ spin_lock_irqsave(&wdt_lock, flags);
- inb_p(WDT_DC); /* Disable watchdog */
- wdt_ctr_mode(0,3); /* Program CTR0 for Mode 3: Square Wave Generator */
- wdt_ctr_mode(1,2); /* Program CTR1 for Mode 2: Rate Generator */
-@@ -117,6 +121,7 @@ static int wdt_start(void)
- wdt_ctr_load(1,wd_heartbeat); /* Heartbeat */
- wdt_ctr_load(2,65535); /* Length of reset pulse */
- outb_p(0, WDT_DC); /* Enable watchdog */
-+ spin_unlock_irqrestore(&wdt_lock, flags);
- return 0;
+- return (struct ext4_dir_entry_2 *)((char*)p + le16_to_cpu(p->rec_len));
++ return (struct ext4_dir_entry_2 *)((char *)p +
++ ext4_rec_len_from_disk(p->rec_len));
}
-@@ -128,9 +133,12 @@ static int wdt_start(void)
-
- static int wdt_stop (void)
+ /*
+@@ -560,7 +563,7 @@ static inline struct ext4_dir_entry_2 *ext4_next_entry(struct ext4_dir_entry_2 *
+ * into the tree. If there is an error it is returned in err.
+ */
+ static int htree_dirblock_to_tree(struct file *dir_file,
+- struct inode *dir, int block,
++ struct inode *dir, ext4_lblk_t block,
+ struct dx_hash_info *hinfo,
+ __u32 start_hash, __u32 start_minor_hash)
{
-+ unsigned long flags;
-+ spin_lock_irqsave(&wdt_lock, flags);
- /* Turn the card off */
- inb_p(WDT_DC); /* Disable watchdog */
- wdt_ctr_load(2,0); /* 0 length reset pulses now */
-+ spin_unlock_irqrestore(&wdt_lock, flags);
- return 0;
- }
+@@ -568,7 +571,8 @@ static int htree_dirblock_to_tree(struct file *dir_file,
+ struct ext4_dir_entry_2 *de, *top;
+ int err, count = 0;
-@@ -143,11 +151,14 @@ static int wdt_stop (void)
+- dxtrace(printk("In htree dirblock_to_tree: block %d\n", block));
++ dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n",
++ (unsigned long)block));
+ if (!(bh = ext4_bread (NULL, dir, block, 0, &err)))
+ return err;
- static int wdt_ping(void)
- {
-+ unsigned long flags;
-+ spin_lock_irqsave(&wdt_lock, flags);
- /* Write a watchdog value */
- inb_p(WDT_DC); /* Disable watchdog */
- wdt_ctr_mode(1,2); /* Re-Program CTR1 for Mode 2: Rate Generator */
- wdt_ctr_load(1,wd_heartbeat); /* Heartbeat */
- outb_p(0, WDT_DC); /* Enable watchdog */
-+ spin_unlock_irqrestore(&wdt_lock, flags);
- return 0;
- }
+@@ -620,9 +624,9 @@ int ext4_htree_fill_tree(struct file *dir_file, __u32 start_hash,
+ struct ext4_dir_entry_2 *de;
+ struct dx_frame frames[2], *frame;
+ struct inode *dir;
+- int block, err;
++ ext4_lblk_t block;
+ int count = 0;
+- int ret;
++ int ret, err;
+ __u32 hashval;
-@@ -182,7 +193,12 @@ static int wdt_set_heartbeat(int t)
+ dxtrace(printk("In htree_fill_tree, start hash: %x:%x\n", start_hash,
+@@ -720,7 +724,7 @@ static int dx_make_map (struct ext4_dir_entry_2 *de, int size,
+ cond_resched();
+ }
+ /* XXX: do we need to check rec_len == 0 case? -Chris */
+- de = (struct ext4_dir_entry_2 *) ((char *) de + le16_to_cpu(de->rec_len));
++ de = ext4_next_entry(de);
+ }
+ return count;
+ }
+@@ -752,7 +756,7 @@ static void dx_sort_map (struct dx_map_entry *map, unsigned count)
+ } while(more);
+ }
- static int wdt_get_status(int *status)
+-static void dx_insert_block(struct dx_frame *frame, u32 hash, u32 block)
++static void dx_insert_block(struct dx_frame *frame, u32 hash, ext4_lblk_t block)
{
-- unsigned char new_status=inb_p(WDT_SR);
-+ unsigned char new_status;
-+ unsigned long flags;
-+
-+ spin_lock_irqsave(&wdt_lock, flags);
-+ new_status = inb_p(WDT_SR);
-+ spin_unlock_irqrestore(&wdt_lock, flags);
+ struct dx_entry *entries = frame->entries;
+ struct dx_entry *old = frame->at, *new = old + 1;
+@@ -820,7 +824,7 @@ static inline int search_dirblock(struct buffer_head * bh,
+ return 1;
+ }
+ /* prevent looping on a bad block */
+- de_len = le16_to_cpu(de->rec_len);
++ de_len = ext4_rec_len_from_disk(de->rec_len);
+ if (de_len <= 0)
+ return -1;
+ offset += de_len;
+@@ -847,23 +851,20 @@ static struct buffer_head * ext4_find_entry (struct dentry *dentry,
+ struct super_block * sb;
+ struct buffer_head * bh_use[NAMEI_RA_SIZE];
+ struct buffer_head * bh, *ret = NULL;
+- unsigned long start, block, b;
++ ext4_lblk_t start, block, b;
+ int ra_max = 0; /* Number of bh's in the readahead
+ buffer, bh_use[] */
+ int ra_ptr = 0; /* Current index into readahead
+ buffer */
+ int num = 0;
+- int nblocks, i, err;
++ ext4_lblk_t nblocks;
++ int i, err;
+ struct inode *dir = dentry->d_parent->d_inode;
+ int namelen;
+- const u8 *name;
+- unsigned blocksize;
- *status=0;
- if (new_status & WDC_SR_ISOI0)
-@@ -214,8 +230,12 @@ static int wdt_get_status(int *status)
+ *res_dir = NULL;
+ sb = dir->i_sb;
+- blocksize = sb->s_blocksize;
+ namelen = dentry->d_name.len;
+- name = dentry->d_name.name;
+ if (namelen > EXT4_NAME_LEN)
+ return NULL;
+ if (is_dx(dir)) {
+@@ -914,7 +915,8 @@ restart:
+ if (!buffer_uptodate(bh)) {
+ /* read error, skip block & hope for the best */
+ ext4_error(sb, __FUNCTION__, "reading directory #%lu "
+- "offset %lu", dir->i_ino, block);
++ "offset %lu", dir->i_ino,
++ (unsigned long)block);
+ brelse(bh);
+ goto next;
+ }
+@@ -961,7 +963,7 @@ static struct buffer_head * ext4_dx_find_entry(struct dentry *dentry,
+ struct dx_frame frames[2], *frame;
+ struct ext4_dir_entry_2 *de, *top;
+ struct buffer_head *bh;
+- unsigned long block;
++ ext4_lblk_t block;
+ int retval;
+ int namelen = dentry->d_name.len;
+ const u8 *name = dentry->d_name.name;
+@@ -1128,7 +1130,7 @@ dx_move_dirents(char *from, char *to, struct dx_map_entry *map, int count)
+ rec_len = EXT4_DIR_REC_LEN(de->name_len);
+ memcpy (to, de, rec_len);
+ ((struct ext4_dir_entry_2 *) to)->rec_len =
+- cpu_to_le16(rec_len);
++ ext4_rec_len_to_disk(rec_len);
+ de->inode = 0;
+ map++;
+ to += rec_len;
+@@ -1147,13 +1149,12 @@ static struct ext4_dir_entry_2* dx_pack_dirents(char *base, int size)
- static int wdt_get_temperature(int *temperature)
- {
-- unsigned short c=inb_p(WDT_RT);
-+ unsigned short c;
-+ unsigned long flags;
+ prev = to = de;
+ while ((char*)de < base + size) {
+- next = (struct ext4_dir_entry_2 *) ((char *) de +
+- le16_to_cpu(de->rec_len));
++ next = ext4_next_entry(de);
+ if (de->inode && de->name_len) {
+ rec_len = EXT4_DIR_REC_LEN(de->name_len);
+ if (de > to)
+ memmove(to, de, rec_len);
+- to->rec_len = cpu_to_le16(rec_len);
++ to->rec_len = ext4_rec_len_to_disk(rec_len);
+ prev = to;
+ to = (struct ext4_dir_entry_2 *) (((char *) to) + rec_len);
+ }
+@@ -1174,7 +1175,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ unsigned blocksize = dir->i_sb->s_blocksize;
+ unsigned count, continued;
+ struct buffer_head *bh2;
+- u32 newblock;
++ ext4_lblk_t newblock;
+ u32 hash2;
+ struct dx_map_entry *map;
+ char *data1 = (*bh)->b_data, *data2;
+@@ -1221,14 +1222,15 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ split = count - move;
+ hash2 = map[split].hash;
+ continued = hash2 == map[split - 1].hash;
+- dxtrace(printk("Split block %i at %x, %i/%i\n",
+- dx_get_block(frame->at), hash2, split, count-split));
++ dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
++ (unsigned long)dx_get_block(frame->at),
++ hash2, split, count-split));
-+ spin_lock_irqsave(&wdt_lock, flags);
-+ c = inb_p(WDT_RT);
-+ spin_unlock_irqrestore(&wdt_lock, flags);
- *temperature = (c * 11 / 15) + 7;
- return 0;
- }
-@@ -237,7 +257,10 @@ static irqreturn_t wdt_interrupt(int irq, void *dev_id)
- * Read the status register see what is up and
- * then printk it.
- */
-- unsigned char status=inb_p(WDT_SR);
-+ unsigned char status;
-+
-+ spin_lock(&wdt_lock);
-+ status = inb_p(WDT_SR);
+ /* Fancy dance to stay within two buffers */
+ de2 = dx_move_dirents(data1, data2, map + split, count - split);
+ de = dx_pack_dirents(data1,blocksize);
+- de->rec_len = cpu_to_le16(data1 + blocksize - (char *) de);
+- de2->rec_len = cpu_to_le16(data2 + blocksize - (char *) de2);
++ de->rec_len = ext4_rec_len_to_disk(data1 + blocksize - (char *) de);
++ de2->rec_len = ext4_rec_len_to_disk(data2 + blocksize - (char *) de2);
+ dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data1, blocksize, 1));
+ dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data2, blocksize, 1));
- printk(KERN_CRIT "WDT status %d\n", status);
+@@ -1297,7 +1299,7 @@ static int add_dirent_to_buf(handle_t *handle, struct dentry *dentry,
+ return -EEXIST;
+ }
+ nlen = EXT4_DIR_REC_LEN(de->name_len);
+- rlen = le16_to_cpu(de->rec_len);
++ rlen = ext4_rec_len_from_disk(de->rec_len);
+ if ((de->inode? rlen - nlen: rlen) >= reclen)
+ break;
+ de = (struct ext4_dir_entry_2 *)((char *)de + rlen);
+@@ -1316,11 +1318,11 @@ static int add_dirent_to_buf(handle_t *handle, struct dentry *dentry,
-@@ -265,6 +288,7 @@ static irqreturn_t wdt_interrupt(int irq, void *dev_id)
- printk(KERN_CRIT "Reset in 5ms.\n");
- #endif
+ /* By now the buffer is marked for journaling */
+ nlen = EXT4_DIR_REC_LEN(de->name_len);
+- rlen = le16_to_cpu(de->rec_len);
++ rlen = ext4_rec_len_from_disk(de->rec_len);
+ if (de->inode) {
+ struct ext4_dir_entry_2 *de1 = (struct ext4_dir_entry_2 *)((char *)de + nlen);
+- de1->rec_len = cpu_to_le16(rlen - nlen);
+- de->rec_len = cpu_to_le16(nlen);
++ de1->rec_len = ext4_rec_len_to_disk(rlen - nlen);
++ de->rec_len = ext4_rec_len_to_disk(nlen);
+ de = de1;
}
-+ spin_unlock(&wdt_lock);
- return IRQ_HANDLED;
+ de->file_type = EXT4_FT_UNKNOWN;
+@@ -1374,7 +1376,7 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
+ int retval;
+ unsigned blocksize;
+ struct dx_hash_info hinfo;
+- u32 block;
++ ext4_lblk_t block;
+ struct fake_dirent *fde;
+
+ blocksize = dir->i_sb->s_blocksize;
+@@ -1397,17 +1399,18 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
+
+ /* The 0th block becomes the root, move the dirents out */
+ fde = &root->dotdot;
+- de = (struct ext4_dir_entry_2 *)((char *)fde + le16_to_cpu(fde->rec_len));
++ de = (struct ext4_dir_entry_2 *)((char *)fde +
++ ext4_rec_len_from_disk(fde->rec_len));
+ len = ((char *) root) + blocksize - (char *) de;
+ memcpy (data1, de, len);
+ de = (struct ext4_dir_entry_2 *) data1;
+ top = data1 + len;
+- while ((char *)(de2=(void*)de+le16_to_cpu(de->rec_len)) < top)
++ while ((char *)(de2 = ext4_next_entry(de)) < top)
+ de = de2;
+- de->rec_len = cpu_to_le16(data1 + blocksize - (char *) de);
++ de->rec_len = ext4_rec_len_to_disk(data1 + blocksize - (char *) de);
+ /* Initialize the root; the dot dirents already exist */
+ de = (struct ext4_dir_entry_2 *) (&root->dotdot);
+- de->rec_len = cpu_to_le16(blocksize - EXT4_DIR_REC_LEN(2));
++ de->rec_len = ext4_rec_len_to_disk(blocksize - EXT4_DIR_REC_LEN(2));
+ memset (&root->info, 0, sizeof(root->info));
+ root->info.info_length = sizeof(root->info);
+ root->info.hash_version = EXT4_SB(dir->i_sb)->s_def_hash_version;
+@@ -1454,7 +1457,7 @@ static int ext4_add_entry (handle_t *handle, struct dentry *dentry,
+ int retval;
+ int dx_fallback=0;
+ unsigned blocksize;
+- u32 block, blocks;
++ ext4_lblk_t block, blocks;
+
+ sb = dir->i_sb;
+ blocksize = sb->s_blocksize;
+@@ -1487,7 +1490,7 @@ static int ext4_add_entry (handle_t *handle, struct dentry *dentry,
+ return retval;
+ de = (struct ext4_dir_entry_2 *) bh->b_data;
+ de->inode = 0;
+- de->rec_len = cpu_to_le16(blocksize);
++ de->rec_len = ext4_rec_len_to_disk(blocksize);
+ return add_dirent_to_buf(handle, dentry, inode, de, bh);
}
-diff --git a/drivers/watchdog/wdt977.c b/drivers/watchdog/wdt977.c
-index 9b7f6b6..fb4b876 100644
---- a/drivers/watchdog/wdt977.c
-+++ b/drivers/watchdog/wdt977.c
-@@ -470,20 +470,20 @@ static int __init wd977_init(void)
+@@ -1531,7 +1534,7 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
+ dx_get_count(entries), dx_get_limit(entries)));
+ /* Need to split index? */
+ if (dx_get_count(entries) == dx_get_limit(entries)) {
+- u32 newblock;
++ ext4_lblk_t newblock;
+ unsigned icount = dx_get_count(entries);
+ int levels = frame - frames;
+ struct dx_entry *entries2;
+@@ -1550,7 +1553,7 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
+ goto cleanup;
+ node2 = (struct dx_node *)(bh2->b_data);
+ entries2 = node2->entries;
+- node2->fake.rec_len = cpu_to_le16(sb->s_blocksize);
++ node2->fake.rec_len = ext4_rec_len_to_disk(sb->s_blocksize);
+ node2->fake.inode = 0;
+ BUFFER_TRACE(frame->bh, "get_write_access");
+ err = ext4_journal_get_write_access(handle, frame->bh);
+@@ -1648,9 +1651,9 @@ static int ext4_delete_entry (handle_t *handle,
+ BUFFER_TRACE(bh, "get_write_access");
+ ext4_journal_get_write_access(handle, bh);
+ if (pde)
+- pde->rec_len =
+- cpu_to_le16(le16_to_cpu(pde->rec_len) +
+- le16_to_cpu(de->rec_len));
++ pde->rec_len = ext4_rec_len_to_disk(
++ ext4_rec_len_from_disk(pde->rec_len) +
++ ext4_rec_len_from_disk(de->rec_len));
+ else
+ de->inode = 0;
+ dir->i_version++;
+@@ -1658,10 +1661,9 @@ static int ext4_delete_entry (handle_t *handle,
+ ext4_journal_dirty_metadata(handle, bh);
+ return 0;
}
+- i += le16_to_cpu(de->rec_len);
++ i += ext4_rec_len_from_disk(de->rec_len);
+ pde = de;
+- de = (struct ext4_dir_entry_2 *)
+- ((char *) de + le16_to_cpu(de->rec_len));
++ de = ext4_next_entry(de);
}
-
-- rc = misc_register(&wdt977_miscdev);
-+ rc = register_reboot_notifier(&wdt977_notifier);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-- wdt977_miscdev.minor, rc);
-+ printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-+ rc);
- goto err_out_region;
+ return -ENOENT;
+ }
+@@ -1824,13 +1826,13 @@ retry:
+ de = (struct ext4_dir_entry_2 *) dir_block->b_data;
+ de->inode = cpu_to_le32(inode->i_ino);
+ de->name_len = 1;
+- de->rec_len = cpu_to_le16(EXT4_DIR_REC_LEN(de->name_len));
++ de->rec_len = ext4_rec_len_to_disk(EXT4_DIR_REC_LEN(de->name_len));
+ strcpy (de->name, ".");
+ ext4_set_de_type(dir->i_sb, de, S_IFDIR);
+- de = (struct ext4_dir_entry_2 *)
+- ((char *) de + le16_to_cpu(de->rec_len));
++ de = ext4_next_entry(de);
+ de->inode = cpu_to_le32(dir->i_ino);
+- de->rec_len = cpu_to_le16(inode->i_sb->s_blocksize-EXT4_DIR_REC_LEN(1));
++ de->rec_len = ext4_rec_len_to_disk(inode->i_sb->s_blocksize -
++ EXT4_DIR_REC_LEN(1));
+ de->name_len = 2;
+ strcpy (de->name, "..");
+ ext4_set_de_type(dir->i_sb, de, S_IFDIR);
+@@ -1882,8 +1884,7 @@ static int empty_dir (struct inode * inode)
+ return 1;
}
-
-- rc = register_reboot_notifier(&wdt977_notifier);
-+ rc = misc_register(&wdt977_miscdev);
- if (rc)
- {
-- printk(KERN_ERR PFX "cannot register reboot notifier (err=%d)\n",
-- rc);
-- goto err_out_miscdev;
-+ printk(KERN_ERR PFX "cannot register miscdev on minor=%d (err=%d)\n",
-+ wdt977_miscdev.minor, rc);
-+ goto err_out_reboot;
+ de = (struct ext4_dir_entry_2 *) bh->b_data;
+- de1 = (struct ext4_dir_entry_2 *)
+- ((char *) de + le16_to_cpu(de->rec_len));
++ de1 = ext4_next_entry(de);
+ if (le32_to_cpu(de->inode) != inode->i_ino ||
+ !le32_to_cpu(de1->inode) ||
+ strcmp (".", de->name) ||
+@@ -1894,9 +1895,9 @@ static int empty_dir (struct inode * inode)
+ brelse (bh);
+ return 1;
}
+- offset = le16_to_cpu(de->rec_len) + le16_to_cpu(de1->rec_len);
+- de = (struct ext4_dir_entry_2 *)
+- ((char *) de1 + le16_to_cpu(de1->rec_len));
++ offset = ext4_rec_len_from_disk(de->rec_len) +
++ ext4_rec_len_from_disk(de1->rec_len);
++ de = ext4_next_entry(de1);
+ while (offset < inode->i_size ) {
+ if (!bh ||
+ (void *) de >= (void *) (bh->b_data+sb->s_blocksize)) {
+@@ -1925,9 +1926,8 @@ static int empty_dir (struct inode * inode)
+ brelse (bh);
+ return 0;
+ }
+- offset += le16_to_cpu(de->rec_len);
+- de = (struct ext4_dir_entry_2 *)
+- ((char *) de + le16_to_cpu(de->rec_len));
++ offset += ext4_rec_len_from_disk(de->rec_len);
++ de = ext4_next_entry(de);
+ }
+ brelse (bh);
+ return 1;
+@@ -2282,8 +2282,7 @@ retry:
+ }
- printk(KERN_INFO PFX "initialized. timeout=%d sec (nowayout=%d, testmode=%i)\n",
-@@ -491,8 +491,8 @@ static int __init wd977_init(void)
-
- return 0;
-
--err_out_miscdev:
-- misc_deregister(&wdt977_miscdev);
-+err_out_reboot:
-+ unregister_reboot_notifier(&wdt977_notifier);
- err_out_region:
- if (!machine_is_netwinder())
- release_region(IO_INDEX_PORT,2);
-diff --git a/fs/Kconfig b/fs/Kconfig
-index 781b47d..219ec06 100644
---- a/fs/Kconfig
-+++ b/fs/Kconfig
-@@ -236,6 +236,7 @@ config JBD_DEBUG
-
- config JBD2
- tristate
-+ select CRC32
- help
- This is a generic journaling layer for block devices that support
- both 32-bit and 64-bit block numbers. It is currently used by
-@@ -440,14 +441,8 @@ config OCFS2_FS
- Tools web page: http://oss.oracle.com/projects/ocfs2-tools
- OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
-
-- Note: Features which OCFS2 does not support yet:
-- - extended attributes
-- - quotas
-- - cluster aware flock
-- - Directory change notification (F_NOTIFY)
-- - Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease)
-- - POSIX ACLs
-- - readpages / writepages (not user visible)
-+ For more information on OCFS2, see the file
-+ <file:Documentation/filesystems/ocfs2.txt>.
+ #define PARENT_INO(buffer) \
+- ((struct ext4_dir_entry_2 *) ((char *) buffer + \
+- le16_to_cpu(((struct ext4_dir_entry_2 *) buffer)->rec_len)))->inode
++ (ext4_next_entry((struct ext4_dir_entry_2 *)(buffer))->inode)
- config OCFS2_DEBUG_MASKLOG
- bool "OCFS2 logging support"
-@@ -1028,8 +1023,8 @@ config HUGETLB_PAGE
- def_bool HUGETLBFS
+ /*
+ * Anybody can rename anything with this: the permission checks are left to the
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index bd8a52b..4fbba60 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -28,7 +28,7 @@ static int verify_group_input(struct super_block *sb,
+ struct ext4_super_block *es = sbi->s_es;
+ ext4_fsblk_t start = ext4_blocks_count(es);
+ ext4_fsblk_t end = start + input->blocks_count;
+- unsigned group = input->group;
++ ext4_group_t group = input->group;
+ ext4_fsblk_t itend = input->inode_table + sbi->s_itb_per_group;
+ unsigned overhead = ext4_bg_has_super(sb, group) ?
+ (1 + ext4_bg_num_gdb(sb, group) +
+@@ -206,7 +206,7 @@ static int setup_new_group_blocks(struct super_block *sb,
+ }
- config CONFIGFS_FS
-- tristate "Userspace-driven configuration filesystem (EXPERIMENTAL)"
-- depends on SYSFS && EXPERIMENTAL
-+ tristate "Userspace-driven configuration filesystem"
-+ depends on SYSFS
- help
- configfs is a ram-based filesystem that provides the converse
- of sysfs's functionality. Where sysfs is a filesystem-based
-@@ -1905,13 +1900,15 @@ config CIFS
- file servers such as Windows 2000 (including Windows 2003, NT 4
- and Windows XP) as well by Samba (which provides excellent CIFS
- server support for Linux and many other operating systems). Limited
-- support for OS/2 and Windows ME and similar servers is provided as well.
--
-- The intent of the cifs module is to provide an advanced
-- network file system client for mounting to CIFS compliant servers,
-- including support for dfs (hierarchical name space), secure per-user
-- session establishment, safe distributed caching (oplock), optional
-- packet signing, Unicode and other internationalization improvements.
-+ support for OS/2 and Windows ME and similar servers is provided as
-+ well.
-+
-+ The cifs module provides an advanced network file system
-+ client for mounting to CIFS compliant servers. It includes
-+ support for DFS (hierarchical name space), secure per-user
-+ session establishment via Kerberos or NTLM or NTLMv2,
-+ safe distributed caching (oplock), optional packet
-+ signing, Unicode and other internationalization improvements.
- If you need to mount to Samba or Windows from this machine, say Y.
+ if (ext4_bg_has_super(sb, input->group)) {
+- ext4_debug("mark backup superblock %#04lx (+0)\n", start);
++ ext4_debug("mark backup superblock %#04llx (+0)\n", start);
+ ext4_set_bit(0, bh->b_data);
+ }
- config CIFS_STATS
-@@ -1943,7 +1940,8 @@ config CIFS_WEAK_PW_HASH
- (since 1997) support stronger NTLM (and even NTLMv2 and Kerberos)
- security mechanisms. These hash the password more securely
- than the mechanisms used in the older LANMAN version of the
-- SMB protocol needed to establish sessions with old SMB servers.
-+ SMB protocol but LANMAN based authentication is needed to
-+ establish sessions with some old SMB servers.
+@@ -215,7 +215,7 @@ static int setup_new_group_blocks(struct super_block *sb,
+ i < gdblocks; i++, block++, bit++) {
+ struct buffer_head *gdb;
- Enabling this option allows the cifs module to mount to older
- LANMAN based servers such as OS/2 and Windows 95, but such
-@@ -1951,8 +1949,8 @@ config CIFS_WEAK_PW_HASH
- security mechanisms if you are on a public network. Unless you
- have a need to access old SMB servers (and are on a private
- network) you probably want to say N. Even if this support
-- is enabled in the kernel build, they will not be used
-- automatically. At runtime LANMAN mounts are disabled but
-+ is enabled in the kernel build, LANMAN authentication will not be
-+ used automatically. At runtime LANMAN mounts are disabled but
- can be set to required (or optional) either in
- /proc/fs/cifs (see fs/cifs/README for more detail) or via an
- option on the mount command. This support is disabled by
-@@ -2018,12 +2016,22 @@ config CIFS_UPCALL
- depends on CIFS_EXPERIMENTAL
- depends on KEYS
- help
-- Enables an upcall mechanism for CIFS which will be used to contact
-- userspace helper utilities to provide SPNEGO packaged Kerberos
-- tickets which are needed to mount to certain secure servers
-+ Enables an upcall mechanism for CIFS which accesses
-+ userspace helper utilities to provide SPNEGO packaged (RFC 4178)
-+ Kerberos tickets which are needed to mount to certain secure servers
- (for which more secure Kerberos authentication is required). If
- unsure, say N.
+- ext4_debug("update backup group %#04lx (+%d)\n", block, bit);
++ ext4_debug("update backup group %#04llx (+%d)\n", block, bit);
-+config CIFS_DFS_UPCALL
-+ bool "DFS feature support (EXPERIMENTAL)"
-+ depends on CIFS_EXPERIMENTAL
-+ depends on KEYS
-+ help
-+ Enables an upcall mechanism for CIFS which contacts userspace
-+ helper utilities to provide server name resolution (host names to
-+ IP addresses) which is needed for implicit mounts of DFS junction
-+ points. If unsure, say N.
-+
- config NCP_FS
- tristate "NCP file system support (to mount NetWare volumes)"
- depends on IPX!=n || INET
-@@ -2130,4 +2138,3 @@ source "fs/nls/Kconfig"
- source "fs/dlm/Kconfig"
+ if ((err = extend_or_restart_transaction(handle, 1, bh)))
+ goto exit_bh;
+@@ -243,7 +243,7 @@ static int setup_new_group_blocks(struct super_block *sb,
+ i < reserved_gdb; i++, block++, bit++) {
+ struct buffer_head *gdb;
- endmenu
--
-diff --git a/fs/afs/dir.c b/fs/afs/dir.c
-index 33fe39a..0cc3597 100644
---- a/fs/afs/dir.c
-+++ b/fs/afs/dir.c
-@@ -546,11 +546,11 @@ static struct dentry *afs_lookup(struct inode *dir, struct dentry *dentry,
- dentry->d_op = &afs_fs_dentry_operations;
+- ext4_debug("clear reserved block %#04lx (+%d)\n", block, bit);
++ ext4_debug("clear reserved block %#04llx (+%d)\n", block, bit);
- d_add(dentry, inode);
-- _leave(" = 0 { vn=%u u=%u } -> { ino=%lu v=%lu }",
-+ _leave(" = 0 { vn=%u u=%u } -> { ino=%lu v=%llu }",
- fid.vnode,
- fid.unique,
- dentry->d_inode->i_ino,
-- dentry->d_inode->i_version);
-+ (unsigned long long)dentry->d_inode->i_version);
+ if ((err = extend_or_restart_transaction(handle, 1, bh)))
+ goto exit_bh;
+@@ -256,10 +256,10 @@ static int setup_new_group_blocks(struct super_block *sb,
+ ext4_set_bit(bit, bh->b_data);
+ brelse(gdb);
+ }
+- ext4_debug("mark block bitmap %#04x (+%ld)\n", input->block_bitmap,
++ ext4_debug("mark block bitmap %#04llx (+%llu)\n", input->block_bitmap,
+ input->block_bitmap - start);
+ ext4_set_bit(input->block_bitmap - start, bh->b_data);
+- ext4_debug("mark inode bitmap %#04x (+%ld)\n", input->inode_bitmap,
++ ext4_debug("mark inode bitmap %#04llx (+%llu)\n", input->inode_bitmap,
+ input->inode_bitmap - start);
+ ext4_set_bit(input->inode_bitmap - start, bh->b_data);
- return NULL;
- }
-@@ -630,9 +630,10 @@ static int afs_d_revalidate(struct dentry *dentry, struct nameidata *nd)
- * been deleted and replaced, and the original vnode ID has
- * been reused */
- if (fid.unique != vnode->fid.unique) {
-- _debug("%s: file deleted (uq %u -> %u I:%lu)",
-+ _debug("%s: file deleted (uq %u -> %u I:%llu)",
- dentry->d_name.name, fid.unique,
-- vnode->fid.unique, dentry->d_inode->i_version);
-+ vnode->fid.unique,
-+ (unsigned long long)dentry->d_inode->i_version);
- spin_lock(&vnode->lock);
- set_bit(AFS_VNODE_DELETED, &vnode->flags);
- spin_unlock(&vnode->lock);
-diff --git a/fs/afs/inode.c b/fs/afs/inode.c
-index d196840..84750c8 100644
---- a/fs/afs/inode.c
-+++ b/fs/afs/inode.c
-@@ -301,7 +301,8 @@ int afs_getattr(struct vfsmount *mnt, struct dentry *dentry,
+@@ -268,7 +268,7 @@ static int setup_new_group_blocks(struct super_block *sb,
+ i < sbi->s_itb_per_group; i++, bit++, block++) {
+ struct buffer_head *it;
- inode = dentry->d_inode;
+- ext4_debug("clear inode block %#04lx (+%d)\n", block, bit);
++ ext4_debug("clear inode block %#04llx (+%d)\n", block, bit);
-- _enter("{ ino=%lu v=%lu }", inode->i_ino, inode->i_version);
-+ _enter("{ ino=%lu v=%llu }", inode->i_ino,
-+ (unsigned long long)inode->i_version);
+ if ((err = extend_or_restart_transaction(handle, 1, bh)))
+ goto exit_bh;
+@@ -291,7 +291,7 @@ static int setup_new_group_blocks(struct super_block *sb,
+ brelse(bh);
- generic_fillattr(inode, stat);
- return 0;
-diff --git a/fs/bio.c b/fs/bio.c
-index d59ddbf..242e409 100644
---- a/fs/bio.c
-+++ b/fs/bio.c
-@@ -248,11 +248,13 @@ inline int bio_hw_segments(struct request_queue *q, struct bio *bio)
- */
- void __bio_clone(struct bio *bio, struct bio *bio_src)
+ /* Mark unused entries in inode bitmap used */
+- ext4_debug("clear inode bitmap %#04x (+%ld)\n",
++ ext4_debug("clear inode bitmap %#04llx (+%llu)\n",
+ input->inode_bitmap, input->inode_bitmap - start);
+ if (IS_ERR(bh = bclean(handle, sb, input->inode_bitmap))) {
+ err = PTR_ERR(bh);
+@@ -357,7 +357,7 @@ static int verify_reserved_gdb(struct super_block *sb,
+ struct buffer_head *primary)
{
-- struct request_queue *q = bdev_get_queue(bio_src->bi_bdev);
--
- memcpy(bio->bi_io_vec, bio_src->bi_io_vec,
- bio_src->bi_max_vecs * sizeof(struct bio_vec));
-
-+ /*
-+ * most users will be overriding ->bi_bdev with a new target,
-+ * so we don't set nor calculate new physical/hw segment counts here
-+ */
- bio->bi_sector = bio_src->bi_sector;
- bio->bi_bdev = bio_src->bi_bdev;
- bio->bi_flags |= 1 << BIO_CLONED;
-@@ -260,8 +262,6 @@ void __bio_clone(struct bio *bio, struct bio *bio_src)
- bio->bi_vcnt = bio_src->bi_vcnt;
- bio->bi_size = bio_src->bi_size;
- bio->bi_idx = bio_src->bi_idx;
-- bio_phys_segments(q, bio);
-- bio_hw_segments(q, bio);
- }
-
- /**
-diff --git a/fs/block_dev.c b/fs/block_dev.c
-index 993f78c..e48a630 100644
---- a/fs/block_dev.c
-+++ b/fs/block_dev.c
-@@ -738,9 +738,9 @@ EXPORT_SYMBOL(bd_release);
- static struct kobject *bdev_get_kobj(struct block_device *bdev)
+ const ext4_fsblk_t blk = primary->b_blocknr;
+- const unsigned long end = EXT4_SB(sb)->s_groups_count;
++ const ext4_group_t end = EXT4_SB(sb)->s_groups_count;
+ unsigned three = 1;
+ unsigned five = 5;
+ unsigned seven = 7;
+@@ -656,12 +656,12 @@ static void update_backups(struct super_block *sb,
+ int blk_off, char *data, int size)
{
- if (bdev->bd_contains != bdev)
-- return kobject_get(&bdev->bd_part->kobj);
-+ return kobject_get(&bdev->bd_part->dev.kobj);
- else
-- return kobject_get(&bdev->bd_disk->kobj);
-+ return kobject_get(&bdev->bd_disk->dev.kobj);
- }
-
- static struct kobject *bdev_get_holder(struct block_device *bdev)
-@@ -1176,7 +1176,7 @@ static int do_open(struct block_device *bdev, struct file *file, int for_part)
- ret = -ENXIO;
- goto out_first;
- }
-- kobject_get(&p->kobj);
-+ kobject_get(&p->dev.kobj);
- bdev->bd_part = p;
- bd_set_size(bdev, (loff_t) p->nr_sects << 9);
- }
-@@ -1299,7 +1299,7 @@ static int __blkdev_put(struct block_device *bdev, int for_part)
- module_put(owner);
-
- if (bdev->bd_contains != bdev) {
-- kobject_put(&bdev->bd_part->kobj);
-+ kobject_put(&bdev->bd_part->dev.kobj);
- bdev->bd_part = NULL;
- }
- bdev->bd_disk = NULL;
-diff --git a/fs/buffer.c b/fs/buffer.c
-index 7249e01..456c9ab 100644
---- a/fs/buffer.c
-+++ b/fs/buffer.c
-@@ -3213,6 +3213,50 @@ static int buffer_cpu_notify(struct notifier_block *self,
- return NOTIFY_OK;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+- const unsigned long last = sbi->s_groups_count;
++ const ext4_group_t last = sbi->s_groups_count;
+ const int bpg = EXT4_BLOCKS_PER_GROUP(sb);
+ unsigned three = 1;
+ unsigned five = 5;
+ unsigned seven = 7;
+- unsigned group;
++ ext4_group_t group;
+ int rest = sb->s_blocksize - size;
+ handle_t *handle;
+ int err = 0, err2;
+@@ -716,7 +716,7 @@ static void update_backups(struct super_block *sb,
+ exit_err:
+ if (err) {
+ ext4_warning(sb, __FUNCTION__,
+- "can't update backup for group %d (err %d), "
++ "can't update backup for group %lu (err %d), "
+ "forcing fsck on next reboot", group, err);
+ sbi->s_mount_state &= ~EXT4_VALID_FS;
+ sbi->s_es->s_state &= cpu_to_le16(~EXT4_VALID_FS);
+@@ -952,7 +952,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
+ ext4_fsblk_t n_blocks_count)
+ {
+ ext4_fsblk_t o_blocks_count;
+- unsigned long o_groups_count;
++ ext4_group_t o_groups_count;
+ ext4_grpblk_t last;
+ ext4_grpblk_t add;
+ struct buffer_head * bh;
+@@ -1054,7 +1054,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
+ ext4_journal_dirty_metadata(handle, EXT4_SB(sb)->s_sbh);
+ sb->s_dirt = 1;
+ unlock_super(sb);
+- ext4_debug("freeing blocks %lu through %llu\n", o_blocks_count,
++ ext4_debug("freeing blocks %llu through %llu\n", o_blocks_count,
+ o_blocks_count + add);
+ ext4_free_blocks_sb(handle, sb, o_blocks_count, add, &freed_blocks);
+ ext4_debug("freed blocks %llu through %llu\n", o_blocks_count,
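A recurring change in the resize.c hunks above is widening format specifiers (`%#04lx` → `%#04llx`, `%lu` → `%llu`) because `ext4_fsblk_t` is a 64-bit quantity; with a mismatched specifier, a 32-bit build reads the wrong number of vararg bytes and prints garbage. A standalone userspace sketch of the pattern follows (not kernel code; `format_block` is a hypothetical helper for this illustration only):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Illustrates the patched ext4_debug("... %#04llx ...") calls: a
 * 64-bit block number must be printed with %llx after a cast to
 * unsigned long long, never with %lx (which is 32 bits on i386). */
static void format_block(char *buf, size_t n, uint64_t block)
{
	snprintf(buf, n, "%#04llx", (unsigned long long)block);
}
```

The explicit cast matters even with `%llx`: on platforms where the typedef is `unsigned long`, the cast keeps the promoted argument and the specifier in agreement.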
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 1ca0f54..055a0cd 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -373,6 +373,66 @@ void ext4_update_dynamic_rev(struct super_block *sb)
+ */
}
-+/**
-+ * bh_uptodate_or_lock: Test whether the buffer is uptodate
-+ * @bh: struct buffer_head
-+ *
-+ * Return true if the buffer is up-to-date and false,
-+ * with the buffer locked, if not.
-+ */
-+int bh_uptodate_or_lock(struct buffer_head *bh)
++int ext4_update_compat_feature(handle_t *handle,
++ struct super_block *sb, __u32 compat)
+{
-+ if (!buffer_uptodate(bh)) {
-+ lock_buffer(bh);
-+ if (!buffer_uptodate(bh))
-+ return 0;
-+ unlock_buffer(bh);
++ int err = 0;
++ if (!EXT4_HAS_COMPAT_FEATURE(sb, compat)) {
++ err = ext4_journal_get_write_access(handle,
++ EXT4_SB(sb)->s_sbh);
++ if (err)
++ return err;
++ EXT4_SET_COMPAT_FEATURE(sb, compat);
++ sb->s_dirt = 1;
++ handle->h_sync = 1;
++ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
++ "call ext4_journal_dirty_metadata");
++ err = ext4_journal_dirty_metadata(handle,
++ EXT4_SB(sb)->s_sbh);
+ }
-+ return 1;
++ return err;
+}
-+EXPORT_SYMBOL(bh_uptodate_or_lock);
+
-+/**
-+ * bh_submit_read: Submit a locked buffer for reading
-+ * @bh: struct buffer_head
-+ *
-+ * Returns zero on success and -EIO on error.
-+ */
-+int bh_submit_read(struct buffer_head *bh)
++int ext4_update_rocompat_feature(handle_t *handle,
++ struct super_block *sb, __u32 rocompat)
+{
-+ BUG_ON(!buffer_locked(bh));
-+
-+ if (buffer_uptodate(bh)) {
-+ unlock_buffer(bh);
-+ return 0;
++ int err = 0;
++ if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, rocompat)) {
++ err = ext4_journal_get_write_access(handle,
++ EXT4_SB(sb)->s_sbh);
++ if (err)
++ return err;
++ EXT4_SET_RO_COMPAT_FEATURE(sb, rocompat);
++ sb->s_dirt = 1;
++ handle->h_sync = 1;
++ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
++ "call ext4_journal_dirty_metadata");
++ err = ext4_journal_dirty_metadata(handle,
++ EXT4_SB(sb)->s_sbh);
+ }
++ return err;
++}
+
-+ get_bh(bh);
-+ bh->b_end_io = end_buffer_read_sync;
-+ submit_bh(READ, bh);
-+ wait_on_buffer(bh);
-+ if (buffer_uptodate(bh))
-+ return 0;
-+ return -EIO;
++int ext4_update_incompat_feature(handle_t *handle,
++ struct super_block *sb, __u32 incompat)
++{
++ int err = 0;
++ if (!EXT4_HAS_INCOMPAT_FEATURE(sb, incompat)) {
++ err = ext4_journal_get_write_access(handle,
++ EXT4_SB(sb)->s_sbh);
++ if (err)
++ return err;
++ EXT4_SET_INCOMPAT_FEATURE(sb, incompat);
++ sb->s_dirt = 1;
++ handle->h_sync = 1;
++ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
++ "call ext4_journal_dirty_metadata");
++ err = ext4_journal_dirty_metadata(handle,
++ EXT4_SB(sb)->s_sbh);
++ }
++ return err;
+}
-+EXPORT_SYMBOL(bh_submit_read);
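The two buffer.c helpers above encapsulate a common "check, lock, re-check" pattern so filesystems don't open-code it. A userspace sketch of the `bh_uptodate_or_lock()` contract (struct fields and the "lock" are plain-int stand-ins for the kernel types, purely illustrative):

```c
#include <assert.h>

/* Sketch of the contract: return 1 if the buffer is already up to
 * date; otherwise return 0 with the buffer left locked, so the
 * caller can submit the read (cf. bh_submit_read) and then unlock. */
struct fake_bh {
	int uptodate;
	int locked;
};

static int fake_bh_uptodate_or_lock(struct fake_bh *bh)
{
	if (!bh->uptodate) {
		bh->locked = 1;		/* lock_buffer(bh) */
		if (!bh->uptodate)	/* re-check under the lock */
			return 0;	/* caller reads, then unlocks */
		bh->locked = 0;		/* unlock_buffer(bh) */
	}
	return 1;
}
```

The second `uptodate` test matters: another thread may have completed the read between the first check and acquiring the lock.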
+
- void __init buffer_init(void)
- {
- int nrpages;
-diff --git a/fs/char_dev.c b/fs/char_dev.c
-index c3bfa76..2c7a8b5 100644
---- a/fs/char_dev.c
-+++ b/fs/char_dev.c
-@@ -510,9 +510,8 @@ struct cdev *cdev_alloc(void)
- {
- struct cdev *p = kzalloc(sizeof(struct cdev), GFP_KERNEL);
- if (p) {
-- p->kobj.ktype = &ktype_cdev_dynamic;
- INIT_LIST_HEAD(&p->list);
-- kobject_init(&p->kobj);
-+ kobject_init(&p->kobj, &ktype_cdev_dynamic);
- }
- return p;
- }
-@@ -529,8 +528,7 @@ void cdev_init(struct cdev *cdev, const struct file_operations *fops)
- {
- memset(cdev, 0, sizeof *cdev);
- INIT_LIST_HEAD(&cdev->list);
-- cdev->kobj.ktype = &ktype_cdev_default;
-- kobject_init(&cdev->kobj);
-+ kobject_init(&cdev->kobj, &ktype_cdev_default);
- cdev->ops = fops;
+ /*
+ * Open the external journal device
+ */
+@@ -443,6 +503,7 @@ static void ext4_put_super (struct super_block * sb)
+ struct ext4_super_block *es = sbi->s_es;
+ int i;
+
++ ext4_mb_release(sb);
+ ext4_ext_release(sb);
+ ext4_xattr_put_super(sb);
+ jbd2_journal_destroy(sbi->s_journal);
+@@ -509,6 +570,8 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
+ ei->i_block_alloc_info = NULL;
+ ei->vfs_inode.i_version = 1;
+ memset(&ei->i_cached_extent, 0, sizeof(struct ext4_ext_cache));
++ INIT_LIST_HEAD(&ei->i_prealloc_list);
++ spin_lock_init(&ei->i_prealloc_lock);
+ return &ei->vfs_inode;
}
-diff --git a/fs/cifs/CHANGES b/fs/cifs/CHANGES
-index a609599..edd2483 100644
---- a/fs/cifs/CHANGES
-+++ b/fs/cifs/CHANGES
-@@ -3,7 +3,10 @@ Version 1.52
- Fix oops on second mount to server when null auth is used.
- Enable experimental Kerberos support. Return writebehind errors on flush
- and sync so that events like out of disk space get reported properly on
--cached files.
-+cached files. Fix setxattr failure to certain Samba versions. Fix mount
-+of second share to disconnected server session (autoreconnect on this).
-+Add ability to modify cifs acls for handling chmod (when mounted with
-+cifsacl flag).
+@@ -533,7 +596,7 @@ static void init_once(struct kmem_cache *cachep, void *foo)
+ #ifdef CONFIG_EXT4DEV_FS_XATTR
+ init_rwsem(&ei->xattr_sem);
+ #endif
+- mutex_init(&ei->truncate_mutex);
++ init_rwsem(&ei->i_data_sem);
+ inode_init_once(&ei->vfs_inode);
+ }
- Version 1.51
- ------------
-diff --git a/fs/cifs/Makefile b/fs/cifs/Makefile
-index 45e42fb..6ba43fb 100644
---- a/fs/cifs/Makefile
-+++ b/fs/cifs/Makefile
-@@ -9,3 +9,5 @@ cifs-y := cifsfs.o cifssmb.o cifs_debug.o connect.o dir.o file.o inode.o \
- readdir.o ioctl.o sess.o export.o cifsacl.o
+@@ -605,18 +668,20 @@ static inline void ext4_show_quota_options(struct seq_file *seq, struct super_bl
+ */
+ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ {
++ int def_errors;
++ unsigned long def_mount_opts;
+ struct super_block *sb = vfs->mnt_sb;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ struct ext4_super_block *es = sbi->s_es;
+- unsigned long def_mount_opts;
- cifs-$(CONFIG_CIFS_UPCALL) += cifs_spnego.o
-+
-+cifs-$(CONFIG_CIFS_DFS_UPCALL) += dns_resolve.o cifs_dfs_ref.o
-diff --git a/fs/cifs/README b/fs/cifs/README
-index bf11329..c623e2f 100644
---- a/fs/cifs/README
-+++ b/fs/cifs/README
-@@ -56,7 +56,8 @@ the CIFS VFS web site) copy it to the same directory in which mount.smbfs and
- similar files reside (usually /sbin). Although the helper software is not
- required, mount.cifs is recommended. Eventually the Samba 3.0 utility program
- "net" may also be helpful since it may someday provide easier mount syntax for
--users who are used to Windows e.g. net use <mount point> <UNC name or cifs URL>
-+users who are used to Windows e.g.
-+ net use <mount point> <UNC name or cifs URL>
- Note that running the Winbind pam/nss module (logon service) on all of your
- Linux clients is useful in mapping Uids and Gids consistently across the
- domain to the proper network user. The mount.cifs mount helper can be
-@@ -248,7 +249,7 @@ A partial list of the supported mount options follows:
- the CIFS session.
- password The user password. If the mount helper is
- installed, the user will be prompted for password
-- if it is not supplied.
-+ if not supplied.
- ip The ip address of the target server
- unc The target server Universal Network Name (export) to
- mount.
-@@ -283,7 +284,7 @@ A partial list of the supported mount options follows:
- can be enabled by specifying file_mode and dir_mode on
- the client. Note that the mount.cifs helper must be
- at version 1.10 or higher to support specifying the uid
-- (or gid) in non-numberic form.
-+ (or gid) in non-numeric form.
- gid Set the default gid for inodes (similar to above).
- file_mode If CIFS Unix extensions are not supported by the server
- this overrides the default mode for file inodes.
-@@ -417,9 +418,10 @@ A partial list of the supported mount options follows:
- acl Allow setfacl and getfacl to manage posix ACLs if server
- supports them. (default)
- noacl Do not allow setfacl and getfacl calls on this mount
-- user_xattr Allow getting and setting user xattrs as OS/2 EAs (extended
-- attributes) to the server (default) e.g. via setfattr
-- and getfattr utilities.
-+ user_xattr Allow getting and setting user xattrs (those attributes whose
-+ name begins with "user." or "os2.") as OS/2 EAs (extended
-+ attributes) to the server. This allows support of the
-+ setfattr and getfattr utilities. (default)
- nouser_xattr Do not allow getfattr/setfattr to get/set/list xattrs
- mapchars Translate six of the seven reserved characters (not backslash)
- *?<>|:
-@@ -434,6 +436,7 @@ A partial list of the supported mount options follows:
- nomapchars Do not translate any of these seven characters (default).
- nocase Request case insensitive path name matching (case
- sensitive is the default if the server suports it).
-+ (mount option "ignorecase" is identical to "nocase")
- posixpaths If CIFS Unix extensions are supported, attempt to
- negotiate posix path name support which allows certain
- characters forbidden in typical CIFS filenames, without
-@@ -485,6 +488,9 @@ A partial list of the supported mount options follows:
- ntlmv2i Use NTLMv2 password hashing with packet signing
- lanman (if configured in kernel config) use older
- lanman hash
-+hard Retry file operations if server is not responding
-+soft Limit retries to unresponsive servers (usually only
-+ one retry) before returning an error. (default)
+ def_mount_opts = le32_to_cpu(es->s_default_mount_opts);
++ def_errors = le16_to_cpu(es->s_errors);
- The mount.cifs mount helper also accepts a few mount options before -o
- including:
-@@ -535,8 +541,8 @@ SecurityFlags Flags which control security negotiation and
- must use NTLM 0x02002
- may use NTLMv2 0x00004
- must use NTLMv2 0x04004
-- may use Kerberos security (not implemented yet) 0x00008
-- must use Kerberos (not implemented yet) 0x08008
-+ may use Kerberos security 0x00008
-+ must use Kerberos 0x08008
- may use lanman (weak) password hash 0x00010
- must use lanman password hash 0x10010
- may use plaintext passwords 0x00020
-@@ -626,6 +632,6 @@ returned success.
-
- Also note that "cat /proc/fs/cifs/DebugData" will display information about
- the active sessions and the shares that are mounted.
--Enabling Kerberos (extended security) works when CONFIG_CIFS_EXPERIMENTAL is enabled
--but requires a user space helper (from the Samba project). NTLM and NTLMv2 and
--LANMAN support do not require this helpr.
-+Enabling Kerberos (extended security) works when CONFIG_CIFS_EXPERIMENTAL is
-+on but requires a user space helper (from the Samba project). NTLM and NTLMv2 and
-+LANMAN support do not require this helper.
-diff --git a/fs/cifs/TODO b/fs/cifs/TODO
-index a8852c2..92c9fea 100644
---- a/fs/cifs/TODO
-+++ b/fs/cifs/TODO
-@@ -1,4 +1,4 @@
--Version 1.49 April 26, 2007
-+Version 1.52 January 3, 2008
+ if (sbi->s_sb_block != 1)
+ seq_printf(seq, ",sb=%llu", sbi->s_sb_block);
+ if (test_opt(sb, MINIX_DF))
+ seq_puts(seq, ",minixdf");
+- if (test_opt(sb, GRPID))
++ if (test_opt(sb, GRPID) && !(def_mount_opts & EXT4_DEFM_BSDGROUPS))
+ seq_puts(seq, ",grpid");
+ if (!test_opt(sb, GRPID) && (def_mount_opts & EXT4_DEFM_BSDGROUPS))
+ seq_puts(seq, ",nogrpid");
+@@ -628,34 +693,33 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ le16_to_cpu(es->s_def_resgid) != EXT4_DEF_RESGID) {
+ seq_printf(seq, ",resgid=%u", sbi->s_resgid);
+ }
+- if (test_opt(sb, ERRORS_CONT)) {
+- int def_errors = le16_to_cpu(es->s_errors);
+-
++ if (test_opt(sb, ERRORS_RO)) {
+ if (def_errors == EXT4_ERRORS_PANIC ||
+- def_errors == EXT4_ERRORS_RO) {
+- seq_puts(seq, ",errors=continue");
++ def_errors == EXT4_ERRORS_CONTINUE) {
++ seq_puts(seq, ",errors=remount-ro");
+ }
+ }
+- if (test_opt(sb, ERRORS_RO))
+- seq_puts(seq, ",errors=remount-ro");
+- if (test_opt(sb, ERRORS_PANIC))
++ if (test_opt(sb, ERRORS_CONT) && def_errors != EXT4_ERRORS_CONTINUE)
++ seq_puts(seq, ",errors=continue");
++ if (test_opt(sb, ERRORS_PANIC) && def_errors != EXT4_ERRORS_PANIC)
+ seq_puts(seq, ",errors=panic");
+- if (test_opt(sb, NO_UID32))
++ if (test_opt(sb, NO_UID32) && !(def_mount_opts & EXT4_DEFM_UID16))
+ seq_puts(seq, ",nouid32");
+- if (test_opt(sb, DEBUG))
++ if (test_opt(sb, DEBUG) && !(def_mount_opts & EXT4_DEFM_DEBUG))
+ seq_puts(seq, ",debug");
+ if (test_opt(sb, OLDALLOC))
+ seq_puts(seq, ",oldalloc");
+-#ifdef CONFIG_EXT4_FS_XATTR
+- if (test_opt(sb, XATTR_USER))
++#ifdef CONFIG_EXT4DEV_FS_XATTR
++ if (test_opt(sb, XATTR_USER) &&
++ !(def_mount_opts & EXT4_DEFM_XATTR_USER))
+ seq_puts(seq, ",user_xattr");
+ if (!test_opt(sb, XATTR_USER) &&
+ (def_mount_opts & EXT4_DEFM_XATTR_USER)) {
+ seq_puts(seq, ",nouser_xattr");
+ }
+ #endif
+-#ifdef CONFIG_EXT4_FS_POSIX_ACL
+- if (test_opt(sb, POSIX_ACL))
++#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
++ if (test_opt(sb, POSIX_ACL) && !(def_mount_opts & EXT4_DEFM_ACL))
+ seq_puts(seq, ",acl");
+ if (!test_opt(sb, POSIX_ACL) && (def_mount_opts & EXT4_DEFM_ACL))
+ seq_puts(seq, ",noacl");
+@@ -672,7 +736,17 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ seq_puts(seq, ",nobh");
+ if (!test_opt(sb, EXTENTS))
+ seq_puts(seq, ",noextents");
++ if (!test_opt(sb, MBALLOC))
++ seq_puts(seq, ",nomballoc");
++ if (test_opt(sb, I_VERSION))
++ seq_puts(seq, ",i_version");
- A Partial List of Missing Features
- ==================================
-@@ -16,16 +16,14 @@ SecurityDescriptors
- c) Better pam/winbind integration (e.g. to handle uid mapping
- better)
++ if (sbi->s_stripe)
++ seq_printf(seq, ",stripe=%lu", sbi->s_stripe);
++ /*
++ * journal mode get enabled in different ways
++ * So just print the value even if we didn't specify it
++ */
+ if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
+ seq_puts(seq, ",data=journal");
+ else if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA)
+@@ -681,7 +755,6 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ seq_puts(seq, ",data=writeback");
--d) Verify that Kerberos signing works
+ ext4_show_quota_options(seq, sb);
-
--e) Cleanup now unneeded SessSetup code in
-+d) Cleanup now unneeded SessSetup code in
- fs/cifs/connect.c and add back in NTLMSSP code if any servers
- need it
+ return 0;
+ }
--f) MD5-HMAC signing SMB PDUs when SPNEGO style SessionSetup
--used (Kerberos or NTLMSSP). Signing alreadyimplemented for NTLM
--and raw NTLMSSP already. This is important when enabling
--extended security and mounting to Windows 2003 Servers
-+e) ms-dfs and ms-dfs host name resolution cleanup
-+
-+f) fix NTLMv2 signing when two mounts with different users to same
-+server.
+@@ -809,11 +882,13 @@ enum {
+ Opt_user_xattr, Opt_nouser_xattr, Opt_acl, Opt_noacl,
+ Opt_reservation, Opt_noreservation, Opt_noload, Opt_nobh, Opt_bh,
+ Opt_commit, Opt_journal_update, Opt_journal_inum, Opt_journal_dev,
++ Opt_journal_checksum, Opt_journal_async_commit,
+ Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
+ Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
+ Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_quota, Opt_noquota,
+ Opt_ignore, Opt_barrier, Opt_err, Opt_resize, Opt_usrquota,
+- Opt_grpquota, Opt_extents, Opt_noextents,
++ Opt_grpquota, Opt_extents, Opt_noextents, Opt_i_version,
++ Opt_mballoc, Opt_nomballoc, Opt_stripe,
+ };
- g) Directory entry caching relies on a 1 second timer, rather than
- using FindNotify or equivalent. - (started)
-diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
-new file mode 100644
-index 0000000..413ee23
---- /dev/null
-+++ b/fs/cifs/cifs_dfs_ref.c
-@@ -0,0 +1,377 @@
-+/*
-+ * Contains the CIFS DFS referral mounting routines used for handling
-+ * traversal via DFS junction point
-+ *
-+ * Copyright (c) 2007 Igor Mammedov
-+ * Copyright (C) International Business Machines Corp., 2008
-+ * Author(s): Igor Mammedov (niallain at gmail.com)
-+ * Steve French (sfrench at us.ibm.com)
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public License
-+ * as published by the Free Software Foundation; either version
-+ * 2 of the License, or (at your option) any later version.
-+ */
-+
-+#include <linux/dcache.h>
-+#include <linux/mount.h>
-+#include <linux/namei.h>
-+#include <linux/vfs.h>
-+#include <linux/fs.h>
-+#include "cifsglob.h"
-+#include "cifsproto.h"
-+#include "cifsfs.h"
-+#include "dns_resolve.h"
-+#include "cifs_debug.h"
-+
-+LIST_HEAD(cifs_dfs_automount_list);
-+
-+/*
-+ * DFS functions
-+*/
-+
-+void dfs_shrink_umount_helper(struct vfsmount *vfsmnt)
-+{
-+ mark_mounts_for_expiry(&cifs_dfs_automount_list);
-+ mark_mounts_for_expiry(&cifs_dfs_automount_list);
-+ shrink_submounts(vfsmnt, &cifs_dfs_automount_list);
-+}
-+
-+/**
-+ * cifs_get_share_name - extracts share name from UNC
-+ * @node_name: pointer to UNC string
-+ *
-+ * Extracts the share name from the full UNC, i.e. strips the trailing
-+ * path that is not part of the share name, and fixes up a missing '\'
-+ * at the beginning of the DFS node referral if necessary.
-+ * Returns pointer to share name on success or NULL on error.
-+ * Caller is responsible for freeing returned string.
-+ */
-+static char *cifs_get_share_name(const char *node_name)
-+{
-+ int len;
-+ char *UNC;
-+ char *pSep;
-+
-+ len = strlen(node_name);
-+ UNC = kmalloc(len+2 /*for term null and additional \ if it's missed */,
-+ GFP_KERNEL);
-+ if (!UNC)
-+ return NULL;
-+
-+ /* get share name and server name */
-+ if (node_name[1] != '\\') {
-+ UNC[0] = '\\';
-+ strncpy(UNC+1, node_name, len);
-+ len++;
-+ UNC[len] = 0;
-+ } else {
-+ strncpy(UNC, node_name, len);
-+ UNC[len] = 0;
-+ }
-+
-+ /* find server name end */
-+ pSep = memchr(UNC+2, '\\', len-2);
-+ if (!pSep) {
-+ cERROR(1, ("%s: no server name end in node name: %s",
-+ __FUNCTION__, node_name));
-+ kfree(UNC);
-+ return NULL;
-+ }
-+
-+ /* find sharename end */
-+ pSep++;
-+ pSep = memchr(UNC+(pSep-UNC), '\\', len-(pSep-UNC));
-+ if (!pSep) {
-+ cERROR(1, ("%s:2 cant find share name in node name: %s",
-+ __FUNCTION__, node_name));
-+ kfree(UNC);
-+ return NULL;
-+ }
-+ /* trim path up to the share name end;
-+ * now we have the share name in UNC */
-+ *pSep = 0;
-+
-+ return UNC;
-+}
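The parsing in `cifs_get_share_name()` above trims a DFS node name down to `\\server\share`. A self-contained userspace sketch of the same steps (the function name and simplified error handling are for this illustration only; the kernel version uses `memchr` and cifs error macros):

```c
#include <stdlib.h>
#include <string.h>

/* Ensure the UNC starts with "\\", then cut everything past the
 * share component. Returns a malloc'd string or NULL, mirroring the
 * kernel helper, which also rejects a bare \\server\share with no
 * trailing path. */
static char *get_share_name(const char *node)
{
	size_t len = strlen(node);
	char *unc, *sep;

	if (len < 2)
		return NULL;
	unc = malloc(len + 2);	/* room for a prepended '\' + NUL */
	if (!unc)
		return NULL;
	if (node[1] != '\\') {	/* referral missing one leading '\' */
		unc[0] = '\\';
		memcpy(unc + 1, node, len + 1);
	} else {
		memcpy(unc, node, len + 1);
	}
	sep = strchr(unc + 2, '\\');		/* end of server part */
	if (sep)
		sep = strchr(sep + 1, '\\');	/* end of share part */
	if (!sep) {
		free(unc);
		return NULL;
	}
	*sep = '\0';		/* drop the trailing path */
	return unc;
}
```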
-+
-+
-+/**
-+ * compose_mount_options - creates mount options for referral
-+ * @sb_mountdata: parent/root DFS mount options (template)
-+ * @ref_unc: refferral server UNC
-+ * @devname: pointer for saving device name
-+ *
-+ * creates mount options for submount based on template options sb_mountdata
-+ * and replacing the unc, ip and prefixpath options with ones we've got from ref_unc.
+ static match_table_t tokens = {
+@@ -848,6 +923,8 @@ static match_table_t tokens = {
+ {Opt_journal_update, "journal=update"},
+ {Opt_journal_inum, "journal=%u"},
+ {Opt_journal_dev, "journal_dev=%u"},
++ {Opt_journal_checksum, "journal_checksum"},
++ {Opt_journal_async_commit, "journal_async_commit"},
+ {Opt_abort, "abort"},
+ {Opt_data_journal, "data=journal"},
+ {Opt_data_ordered, "data=ordered"},
+@@ -865,6 +942,10 @@ static match_table_t tokens = {
+ {Opt_barrier, "barrier=%u"},
+ {Opt_extents, "extents"},
+ {Opt_noextents, "noextents"},
++ {Opt_i_version, "i_version"},
++ {Opt_mballoc, "mballoc"},
++ {Opt_nomballoc, "nomballoc"},
++ {Opt_stripe, "stripe=%u"},
+ {Opt_err, NULL},
+ {Opt_resize, "resize"},
+ };
+@@ -1035,6 +1116,13 @@ static int parse_options (char *options, struct super_block *sb,
+ return 0;
+ *journal_devnum = option;
+ break;
++ case Opt_journal_checksum:
++ set_opt(sbi->s_mount_opt, JOURNAL_CHECKSUM);
++ break;
++ case Opt_journal_async_commit:
++ set_opt(sbi->s_mount_opt, JOURNAL_ASYNC_COMMIT);
++ set_opt(sbi->s_mount_opt, JOURNAL_CHECKSUM);
++ break;
+ case Opt_noload:
+ set_opt (sbi->s_mount_opt, NOLOAD);
+ break;
+@@ -1203,6 +1291,23 @@ clear_qf_name:
+ case Opt_noextents:
+ clear_opt (sbi->s_mount_opt, EXTENTS);
+ break;
++ case Opt_i_version:
++ set_opt(sbi->s_mount_opt, I_VERSION);
++ sb->s_flags |= MS_I_VERSION;
++ break;
++ case Opt_mballoc:
++ set_opt(sbi->s_mount_opt, MBALLOC);
++ break;
++ case Opt_nomballoc:
++ clear_opt(sbi->s_mount_opt, MBALLOC);
++ break;
++ case Opt_stripe:
++ if (match_int(&args[0], &option))
++ return 0;
++ if (option < 0)
++ return 0;
++ sbi->s_stripe = option;
++ break;
+ default:
+ printk (KERN_ERR
+ "EXT4-fs: Unrecognized mount option \"%s\" "
+@@ -1364,7 +1469,7 @@ static int ext4_check_descriptors (struct super_block * sb)
+ struct ext4_group_desc * gdp = NULL;
+ int desc_block = 0;
+ int flexbg_flag = 0;
+- int i;
++ ext4_group_t i;
+
+ if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG))
+ flexbg_flag = 1;
+@@ -1386,7 +1491,7 @@ static int ext4_check_descriptors (struct super_block * sb)
+ if (block_bitmap < first_block || block_bitmap > last_block)
+ {
+ ext4_error (sb, "ext4_check_descriptors",
+- "Block bitmap for group %d"
++ "Block bitmap for group %lu"
+ " not in group (block %llu)!",
+ i, block_bitmap);
+ return 0;
+@@ -1395,7 +1500,7 @@ static int ext4_check_descriptors (struct super_block * sb)
+ if (inode_bitmap < first_block || inode_bitmap > last_block)
+ {
+ ext4_error (sb, "ext4_check_descriptors",
+- "Inode bitmap for group %d"
++ "Inode bitmap for group %lu"
+ " not in group (block %llu)!",
+ i, inode_bitmap);
+ return 0;
+@@ -1405,17 +1510,16 @@ static int ext4_check_descriptors (struct super_block * sb)
+ inode_table + sbi->s_itb_per_group - 1 > last_block)
+ {
+ ext4_error (sb, "ext4_check_descriptors",
+- "Inode table for group %d"
++ "Inode table for group %lu"
+ " not in group (block %llu)!",
+ i, inode_table);
+ return 0;
+ }
+ if (!ext4_group_desc_csum_verify(sbi, i, gdp)) {
+ ext4_error(sb, __FUNCTION__,
+- "Checksum for group %d failed (%u!=%u)\n", i,
+- le16_to_cpu(ext4_group_desc_csum(sbi, i,
+- gdp)),
+- le16_to_cpu(gdp->bg_checksum));
++ "Checksum for group %lu failed (%u!=%u)\n",
++ i, le16_to_cpu(ext4_group_desc_csum(sbi, i,
++ gdp)), le16_to_cpu(gdp->bg_checksum));
+ return 0;
+ }
+ if (!flexbg_flag)
+@@ -1429,7 +1533,6 @@ static int ext4_check_descriptors (struct super_block * sb)
+ return 1;
+ }
+
+-
+ /* ext4_orphan_cleanup() walks a singly-linked list of inodes (starting at
+ * the superblock) which were deleted from all directories, but held open by
+ * a process at the time of a crash. We walk the list and try to delete these
+@@ -1542,20 +1645,95 @@ static void ext4_orphan_cleanup (struct super_block * sb,
+ #endif
+ sb->s_flags = s_flags; /* Restore MS_RDONLY status */
+ }
++/*
++ * Maximal extent format file size.
++ * Resulting logical blkno at s_maxbytes must fit in our on-disk
++ * extent format containers, within a sector_t, and within i_blocks
++ * in the vfs. ext4 inode has 48 bits of i_block in fsblock units,
++ * so that won't be a limiting factor.
+ *
-+ * Returns: pointer to new mount options or ERR_PTR.
-+ * Caller is responsible for freeing the returned value if it is not an error.
++ * Note, this does *not* consider any metadata overhead for vfs i_blocks.
+ */
-+static char *compose_mount_options(const char *sb_mountdata,
-+ const char *ref_unc,
-+ char **devname)
++static loff_t ext4_max_size(int blkbits)
+{
-+ int rc;
-+ char *mountdata;
-+ int md_len;
-+ char *tkn_e;
-+ char *srvIP = NULL;
-+ char sep = ',';
-+ int off, noff;
-+
-+ if (sb_mountdata == NULL)
-+ return ERR_PTR(-EINVAL);
-+
-+ *devname = cifs_get_share_name(ref_unc);
-+ rc = dns_resolve_server_name_to_ip(*devname, &srvIP);
-+ if (rc != 0) {
-+ cERROR(1, ("%s: Failed to resolve server part of %s to IP",
-+ __FUNCTION__, *devname));
-+ mountdata = ERR_PTR(rc);
-+ goto compose_mount_options_out;
-+ }
-+ md_len = strlen(sb_mountdata) + strlen(srvIP) + strlen(ref_unc) + 3;
-+ mountdata = kzalloc(md_len+1, GFP_KERNEL);
-+ if (mountdata == NULL) {
-+ mountdata = ERR_PTR(-ENOMEM);
-+ goto compose_mount_options_out;
-+ }
-+
-+ /* copy all options except of unc,ip,prefixpath */
-+ off = 0;
-+ if (strncmp(sb_mountdata, "sep=", 4) == 0) {
-+ sep = sb_mountdata[4];
-+ strncpy(mountdata, sb_mountdata, 5);
-+ off += 5;
-+ }
-+ while ((tkn_e = strchr(sb_mountdata+off, sep))) {
-+ noff = (tkn_e - (sb_mountdata+off)) + 1;
-+ if (strnicmp(sb_mountdata+off, "unc=", 4) == 0) {
-+ off += noff;
-+ continue;
-+ }
-+ if (strnicmp(sb_mountdata+off, "ip=", 3) == 0) {
-+ off += noff;
-+ continue;
-+ }
-+ if (strnicmp(sb_mountdata+off, "prefixpath=", 3) == 0) {
-+ off += noff;
-+ continue;
-+ }
-+ strncat(mountdata, sb_mountdata+off, noff);
-+ off += noff;
-+ }
-+ strcat(mountdata, sb_mountdata+off);
-+ mountdata[md_len] = '\0';
++ loff_t res;
++ loff_t upper_limit = MAX_LFS_FILESIZE;
+
-+ /* copy new IP and ref share name */
-+ strcat(mountdata, ",ip=");
-+ strcat(mountdata, srvIP);
-+ strcat(mountdata, ",unc=");
-+ strcat(mountdata, *devname);
++ /* small i_blocks in vfs inode? */
++ if (sizeof(blkcnt_t) < sizeof(u64)) {
++ /*
++ * CONFIG_LSF is not enabled implies the inode
++ * i_blocks represents total blocks in 512-byte units
++ * 32 == size of vfs inode i_blocks * 8
++ */
++ upper_limit = (1LL << 32) - 1;
+
-+ /* find & copy prefixpath */
-+ tkn_e = strchr(ref_unc+2, '\\');
-+ if (tkn_e) {
-+ tkn_e = strchr(tkn_e+1, '\\');
-+ if (tkn_e) {
-+ strcat(mountdata, ",prefixpath=");
-+ strcat(mountdata, tkn_e);
-+ }
++ /* total blocks in file system block size */
++ upper_limit >>= (blkbits - 9);
++ upper_limit <<= blkbits;
+ }
+
-+ /*cFYI(1,("%s: parent mountdata: %s", __FUNCTION__,sb_mountdata));*/
-+ /*cFYI(1, ("%s: submount mountdata: %s", __FUNCTION__, mountdata ));*/
-+
-+compose_mount_options_out:
-+ kfree(srvIP);
-+ return mountdata;
-+}
-+
-+
-+static struct vfsmount *cifs_dfs_do_refmount(const struct vfsmount *mnt_parent,
-+ struct dentry *dentry, char *ref_unc)
-+{
-+ struct cifs_sb_info *cifs_sb;
-+ struct vfsmount *mnt;
-+ char *mountdata;
-+ char *devname = NULL;
-+
-+ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
-+ mountdata = compose_mount_options(cifs_sb->mountdata,
-+ ref_unc, &devname);
-+
-+ if (IS_ERR(mountdata))
-+ return (struct vfsmount *)mountdata;
++ /* 32-bit extent-start container, ee_block */
++ res = 1LL << 32;
++ res <<= blkbits;
++ res -= 1;
+
-+ mnt = vfs_kern_mount(&cifs_fs_type, 0, devname, mountdata);
-+ kfree(mountdata);
-+ kfree(devname);
-+ return mnt;
++ /* Sanity check against vm- & vfs- imposed limits */
++ if (res > upper_limit)
++ res = upper_limit;
+
++ return res;
+}
+
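The new ext4_max_size() above is just min(what the 32-bit ee_block extent container can address, what i_blocks can count). A standalone userspace sketch of the same arithmetic — the MAX_LFS_FILESIZE stand-in of 2^63 - 1 is an assumption here; the kernel macro depends on page size and word size:

```c
#include <stdint.h>

/* Assumed stand-in for the kernel's MAX_LFS_FILESIZE (page/word dependent). */
#define SKETCH_MAX_LFS_FILESIZE ((int64_t)0x7fffffffffffffffLL)

/* Mirrors the ext4_max_size() arithmetic: the 32-bit ee_block extent
 * container allows 2^32 logical blocks; a 32-bit vfs i_blocks (no
 * CONFIG_LSF) additionally caps the file at 2^32 - 1 512-byte sectors. */
static int64_t sketch_max_extent_size(int blkbits, int small_i_blocks)
{
    int64_t upper = SKETCH_MAX_LFS_FILESIZE;

    if (small_i_blocks) {
        upper = (1LL << 32) - 1;   /* sectors */
        upper >>= (blkbits - 9);   /* -> whole fs blocks */
        upper <<= blkbits;         /* -> bytes */
    }

    int64_t res = ((1LL << 32) << blkbits) - 1;  /* 2^32 blocks, in bytes */
    return res > upper ? upper : res;
}
```

With 4 KiB blocks this yields 2199023251456 bytes (~2 TiB) while i_blocks is 32-bit, and 2^44 - 1 bytes (~16 TiB) once 48-bit i_blocks lifts the sector cap.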
+ /*
+- * Maximal file size. There is a direct, and {,double-,triple-}indirect
+- * block limit, and also a limit of (2^32 - 1) 512-byte sectors in i_blocks.
+- * We need to be 1 filesystem block less than the 2^32 sector limit.
++ * Maximal bitmap file size. There is a direct, and {,double-,triple-}indirect
++ * block limit, and also a limit of (2^48 - 1) 512-byte sectors in i_blocks.
++ * We need to be 1 filesystem block less than the 2^48 sector limit.
+ */
+-static loff_t ext4_max_size(int bits)
++static loff_t ext4_max_bitmap_size(int bits)
+ {
+ loff_t res = EXT4_NDIR_BLOCKS;
+- /* This constant is calculated to be the largest file size for a
+- * dense, 4k-blocksize file such that the total number of
++ int meta_blocks;
++ loff_t upper_limit;
++ /* This is calculated to be the largest file size for a
++ * dense, bitmapped file such that the total number of
+ * sectors in the file, including data and all indirect blocks,
+- * does not exceed 2^32. */
+- const loff_t upper_limit = 0x1ff7fffd000LL;
++ * does not exceed 2^48 - 1.
++ * __u32 i_blocks_lo and __u16 i_blocks_high represent the
++ * total number of 512-byte blocks of the file
++ */
+
-+static char *build_full_dfs_path_from_dentry(struct dentry *dentry)
-+{
-+ char *full_path = NULL;
-+ char *search_path;
-+ char *tmp_path;
-+ size_t l_max_len;
-+ struct cifs_sb_info *cifs_sb;
-+
-+ if (dentry->d_inode == NULL)
-+ return NULL;
-+
-+ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
-+
-+ if (cifs_sb->tcon == NULL)
-+ return NULL;
++ if (sizeof(blkcnt_t) < sizeof(u64)) {
++ /*
++ * CONFIG_LSF is not enabled implies the inode
++ * i_blocks represents total blocks in 512-byte units
++ * 32 == size of vfs inode i_blocks * 8
++ */
++ upper_limit = (1LL << 32) - 1;
+
-+ search_path = build_path_from_dentry(dentry);
-+ if (search_path == NULL)
-+ return NULL;
++ /* total blocks in file system block size */
++ upper_limit >>= (bits - 9);
+
-+ if (cifs_sb->tcon->Flags & SMB_SHARE_IS_IN_DFS) {
-+ /* we should use full path name to correct working with DFS */
-+ l_max_len = strnlen(cifs_sb->tcon->treeName, MAX_TREE_SIZE+1) +
-+ strnlen(search_path, MAX_PATHCONF) + 1;
-+ tmp_path = kmalloc(l_max_len, GFP_KERNEL);
-+ if (tmp_path == NULL) {
-+ kfree(search_path);
-+ return NULL;
-+ }
-+ strncpy(tmp_path, cifs_sb->tcon->treeName, l_max_len);
-+ strcat(tmp_path, search_path);
-+ tmp_path[l_max_len-1] = 0;
-+ full_path = tmp_path;
-+ kfree(search_path);
+ } else {
-+ full_path = search_path;
-+ }
-+ return full_path;
-+}
-+
-+static int add_mount_helper(struct vfsmount *newmnt, struct nameidata *nd,
-+ struct list_head *mntlist)
-+{
-+ /* stolen from afs code */
-+ int err;
++ /*
++ * We use 48 bit ext4_inode i_blocks
++ * With EXT4_HUGE_FILE_FL set the i_blocks
++ * represent total number of blocks in
++ * file system block size
++ */
++ upper_limit = (1LL << 48) - 1;
+
-+ mntget(newmnt);
-+ err = do_add_mount(newmnt, nd, nd->mnt->mnt_flags, mntlist);
-+ switch (err) {
-+ case 0:
-+ dput(nd->dentry);
-+ mntput(nd->mnt);
-+ nd->mnt = newmnt;
-+ nd->dentry = dget(newmnt->mnt_root);
-+ break;
-+ case -EBUSY:
-+ /* someone else made a mount here whilst we were busy */
-+ while (d_mountpoint(nd->dentry) &&
-+ follow_down(&nd->mnt, &nd->dentry))
-+ ;
-+ err = 0;
-+ default:
-+ mntput(newmnt);
-+ break;
+ }
-+ return err;
-+}
+
-+static void dump_referral(const struct dfs_info3_param *ref)
-+{
-+ cFYI(1, ("DFS: ref path: %s", ref->path_name));
-+ cFYI(1, ("DFS: node path: %s", ref->node_name));
-+ cFYI(1, ("DFS: fl: %hd, srv_type: %hd", ref->flags, ref->server_type));
-+ cFYI(1, ("DFS: ref_flags: %hd, path_consumed: %hd", ref->ref_flag,
-+ ref->PathConsumed));
-+}
++ /* indirect blocks */
++ meta_blocks = 1;
++ /* double indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2));
++ /* triple indirect blocks */
++ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
++
++ upper_limit -= meta_blocks;
++ upper_limit <<= bits;
+
+ res += 1LL << (bits-2);
+ res += 1LL << (2*(bits-2));
+@@ -1563,6 +1741,10 @@ static loff_t ext4_max_size(int bits)
+ res <<= bits;
+ if (res > upper_limit)
+ res = upper_limit;
+
++ if (res > MAX_LFS_FILESIZE)
++ res = MAX_LFS_FILESIZE;
+
-+static void*
-+cifs_dfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd)
+ return res;
+ }
+
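Taken together, the renamed ext4_max_bitmap_size() (its body is split across the hunks above) counts the data blocks reachable through 12 direct plus single/double/triple indirect pointers, and separately subtracts worst-case metadata from the new 2^48 - 1 sector budget. A userspace sketch of the combined arithmetic for the 48-bit i_blocks branch:

```c
#include <stdint.h>

/* Mirrors the bitmapped-file limit for the 48-bit i_blocks branch; `bits`
 * is the block-size shift (12 for 4 KiB blocks), so one block holds
 * 2^(bits-2) four-byte block pointers. */
static int64_t sketch_max_bitmap_size(int bits)
{
    int64_t ptrs = 1LL << (bits - 2);   /* pointers per block */
    int64_t upper = (1LL << 48) - 1;    /* i_blocks sector budget */
    int64_t meta, res;

    /* worst-case metadata blocks: indirect, double, triple trees */
    meta  = 1;
    meta += 1 + ptrs;
    meta += 1 + ptrs + ptrs * ptrs;
    upper -= meta;
    upper <<= bits;

    /* addressable data blocks, converted to bytes */
    res = 12 + ptrs + ptrs * ptrs + ptrs * ptrs * ptrs;
    res <<= bits;

    return res > upper ? upper : res;
}
```

For 4 KiB blocks this gives 4402345721856 bytes (~4 TiB); for 1 KiB blocks, 17247252480 bytes (~16 GiB) — at common block sizes the indirect-tree fan-out, not i_blocks, is the binding limit.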
+@@ -1570,7 +1752,7 @@ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
+ ext4_fsblk_t logical_sb_block, int nr)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+- unsigned long bg, first_meta_bg;
++ ext4_group_t bg, first_meta_bg;
+ int has_super = 0;
+
+ first_meta_bg = le32_to_cpu(sbi->s_es->s_first_meta_bg);
+@@ -1584,8 +1766,39 @@ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
+ return (has_super + ext4_group_first_block_no(sb, bg));
+ }
+
++/**
++ * ext4_get_stripe_size: Get the stripe size.
++ * @sbi: In memory super block info
++ *
++ * If the stripe size was specified via a mount option, use that value.
++ * If the mount-time value is greater than blocks per group, fall back
++ * to the super block value. If the super block value is also greater
++ * than blocks per group, return 0.
++ * The allocator needs it to be less than blocks per group.
++ *
++ */
++static unsigned long ext4_get_stripe_size(struct ext4_sb_info *sbi)
+{
-+ struct dfs_info3_param *referrals = NULL;
-+ unsigned int num_referrals = 0;
-+ struct cifs_sb_info *cifs_sb;
-+ struct cifsSesInfo *ses;
-+ char *full_path = NULL;
-+ int xid, i;
-+ int rc = 0;
-+ struct vfsmount *mnt = ERR_PTR(-ENOENT);
-+
-+ cFYI(1, ("in %s", __FUNCTION__));
-+ BUG_ON(IS_ROOT(dentry));
++ unsigned long stride = le16_to_cpu(sbi->s_es->s_raid_stride);
++ unsigned long stripe_width =
++ le32_to_cpu(sbi->s_es->s_raid_stripe_width);
+
-+ xid = GetXid();
++ if (sbi->s_stripe && sbi->s_stripe <= sbi->s_blocks_per_group)
++ return sbi->s_stripe;
+
-+ dput(nd->dentry);
-+ nd->dentry = dget(dentry);
++ if (stripe_width <= sbi->s_blocks_per_group)
++ return stripe_width;
+
-+ cifs_sb = CIFS_SB(dentry->d_inode->i_sb);
-+ ses = cifs_sb->tcon->ses;
++ if (stride <= sbi->s_blocks_per_group)
++ return stride;
+
-+ if (!ses) {
-+ rc = -EINVAL;
-+ goto out_err;
-+ }
++ return 0;
++}
+
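The precedence ext4_get_stripe_size() implements — mount option, then superblock stripe width, then stride, each accepted only if it fits within a block group — condenses to a few comparisons. A hypothetical userspace mirror (note that, exactly as in the code above, a zero s_raid_stripe_width still satisfies the `<=` test and is returned as "no stripe"):

```c
/* Mirrors ext4_get_stripe_size() precedence: -o stripe= wins, then the
 * superblock RAID stripe width, then the RAID stride; the allocator needs
 * the result to be below blocks-per-group, and 0 means "none". */
static unsigned long sketch_stripe_size(unsigned long mount_stripe,
                                        unsigned long stripe_width,
                                        unsigned long stride,
                                        unsigned long blocks_per_group)
{
    if (mount_stripe && mount_stripe <= blocks_per_group)
        return mount_stripe;
    if (stripe_width <= blocks_per_group)
        return stripe_width;
    if (stride <= blocks_per_group)
        return stride;
    return 0;
}
```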
+ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
++ __releases(kernel_sem)
++ __acquires(kernel_sem)
+
-+ full_path = build_full_dfs_path_from_dentry(dentry);
-+ if (full_path == NULL) {
-+ rc = -ENOMEM;
-+ goto out_err;
+ {
+ struct buffer_head * bh;
+ struct ext4_super_block *es = NULL;
+@@ -1599,7 +1812,6 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ unsigned long def_mount_opts;
+ struct inode *root;
+ int blocksize;
+- int hblock;
+ int db_count;
+ int i;
+ int needs_recovery;
+@@ -1624,6 +1836,11 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ goto out_fail;
+ }
+
++ if (!sb_set_blocksize(sb, blocksize)) {
++ printk(KERN_ERR "EXT4-fs: bad blocksize %d.\n", blocksize);
++ goto out_fail;
+ }
+
-+ rc = get_dfs_path(xid, ses , full_path, cifs_sb->local_nls,
-+ &num_referrals, &referrals,
-+ cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);
-+
-+ for (i = 0; i < num_referrals; i++) {
-+ dump_referral(referrals+i);
-+ /* connect to a storage node */
-+ if (referrals[i].flags & DFSREF_STORAGE_SERVER) {
-+ int len;
-+ len = strlen(referrals[i].node_name);
-+ if (len < 2) {
-+ cERROR(1, ("%s: Net Address path too short: %s",
-+ __FUNCTION__, referrals[i].node_name));
-+ rc = -EINVAL;
-+ goto out_err;
-+ }
-+ mnt = cifs_dfs_do_refmount(nd->mnt, nd->dentry,
-+ referrals[i].node_name);
-+ cFYI(1, ("%s: cifs_dfs_do_refmount:%s , mnt:%p",
-+ __FUNCTION__,
-+ referrals[i].node_name, mnt));
-+
-+ /* complete mount procedure if we accured submount */
-+ if (!IS_ERR(mnt))
-+ break;
+ /*
+ * The ext4 superblock will not be buffer aligned for other than 1kB
+ * block sizes. We need to calculate the offset from buffer start.
+@@ -1674,10 +1891,10 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+
+ if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_PANIC)
+ set_opt(sbi->s_mount_opt, ERRORS_PANIC);
+- else if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_RO)
+- set_opt(sbi->s_mount_opt, ERRORS_RO);
+- else
++ else if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_CONTINUE)
+ set_opt(sbi->s_mount_opt, ERRORS_CONT);
++ else
++ set_opt(sbi->s_mount_opt, ERRORS_RO);
+
+ sbi->s_resuid = le16_to_cpu(es->s_def_resuid);
+ sbi->s_resgid = le16_to_cpu(es->s_def_resgid);
+@@ -1689,6 +1906,11 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ * User -o noextents to turn it off
+ */
+ set_opt(sbi->s_mount_opt, EXTENTS);
++ /*
++ * turn on mballoc feature by default in ext4 filesystem
++ * Use -o nomballoc to turn it off
++ */
++ set_opt(sbi->s_mount_opt, MBALLOC);
+
+ if (!parse_options ((char *) data, sb, &journal_inum, &journal_devnum,
+ NULL, 0))
+@@ -1723,6 +1945,19 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ sb->s_id, le32_to_cpu(features));
+ goto failed_mount;
+ }
++ if (EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_HUGE_FILE)) {
++ /*
++ * A file system with huge files enabled can only be
++ * mounted read-write if the kernel is built with CONFIG_LSF
++ */
++ if (sizeof(root->i_blocks) < sizeof(u64) &&
++ !(sb->s_flags & MS_RDONLY)) {
++ printk(KERN_ERR "EXT4-fs: %s: Filesystem with huge "
++ "files cannot be mounted read-write "
++ "without CONFIG_LSF.\n", sb->s_id);
++ goto failed_mount;
+ }
+ }
+ blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
+
+ if (blocksize < EXT4_MIN_BLOCK_SIZE ||
+@@ -1733,20 +1968,16 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ goto failed_mount;
+ }
+
+- hblock = bdev_hardsect_size(sb->s_bdev);
+ if (sb->s_blocksize != blocksize) {
+- /*
+- * Make sure the blocksize for the filesystem is larger
+- * than the hardware sectorsize for the machine.
+- */
+- if (blocksize < hblock) {
+- printk(KERN_ERR "EXT4-fs: blocksize %d too small for "
+- "device blocksize %d.\n", blocksize, hblock);
+
-+ /* we need it cause for() above could exit without valid submount */
-+ rc = PTR_ERR(mnt);
-+ if (IS_ERR(mnt))
-+ goto out_err;
-+
-+ nd->mnt->mnt_flags |= MNT_SHRINKABLE;
-+ rc = add_mount_helper(mnt, nd, &cifs_dfs_automount_list);
-+
-+out:
-+ FreeXid(xid);
-+ free_dfs_info_array(referrals, num_referrals);
-+ kfree(full_path);
-+ cFYI(1, ("leaving %s" , __FUNCTION__));
-+ return ERR_PTR(rc);
-+out_err:
-+ path_release(nd);
-+ goto out;
-+}
-+
-+struct inode_operations cifs_dfs_referral_inode_operations = {
-+ .follow_link = cifs_dfs_follow_mountpoint,
-+};
-+
-diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
-index 34af556..8ad2330 100644
---- a/fs/cifs/cifs_fs_sb.h
-+++ b/fs/cifs/cifs_fs_sb.h
-@@ -43,6 +43,9 @@ struct cifs_sb_info {
- mode_t mnt_dir_mode;
- int mnt_cifs_flags;
- int prepathlen;
-- char *prepath;
-+ char *prepath; /* relative path under the share to mount to */
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ char *mountdata; /* mount options received at mount time */
-+#endif
- };
- #endif /* _CIFS_FS_SB_H */
-diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
-index 1529d2b..d543acc 100644
---- a/fs/cifs/cifs_spnego.c
-+++ b/fs/cifs/cifs_spnego.c
-@@ -122,11 +122,13 @@ cifs_get_spnego_key(struct cifsSesInfo *sesInfo)
- cFYI(1, ("key description = %s", description));
- spnego_key = request_key(&cifs_spnego_key_type, description, "");
++ /* Validate the filesystem blocksize */
++ if (!sb_set_blocksize(sb, blocksize)) {
++ printk(KERN_ERR "EXT4-fs: bad block size %d.\n",
++ blocksize);
+ goto failed_mount;
+ }
-+#ifdef CONFIG_CIFS_DEBUG2
- if (cifsFYI && !IS_ERR(spnego_key)) {
- struct cifs_spnego_msg *msg = spnego_key->payload.data;
-- cifs_dump_mem("SPNEGO reply blob:", msg->data,
-- msg->secblob_len + msg->sesskey_len);
-+ cifs_dump_mem("SPNEGO reply blob:", msg->data, min(1024,
-+ msg->secblob_len + msg->sesskey_len));
+ brelse (bh);
+- sb_set_blocksize(sb, blocksize);
+ logical_sb_block = sb_block * EXT4_MIN_BLOCK_SIZE;
+ offset = do_div(logical_sb_block, blocksize);
+ bh = sb_bread(sb, logical_sb_block);
+@@ -1764,6 +1995,7 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ }
}
-+#endif /* CONFIG_CIFS_DEBUG2 */
- out:
- kfree(description);
-diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
-index c312adc..a7035bd 100644
---- a/fs/cifs/cifsacl.c
-+++ b/fs/cifs/cifsacl.c
-@@ -129,6 +129,54 @@ int compare_sids(const struct cifs_sid *ctsid, const struct cifs_sid *cwsid)
- return (1); /* sids compare/match */
- }
++ sbi->s_bitmap_maxbytes = ext4_max_bitmap_size(sb->s_blocksize_bits);
+ sb->s_maxbytes = ext4_max_size(sb->s_blocksize_bits);
+ if (le32_to_cpu(es->s_rev_level) == EXT4_GOOD_OLD_REV) {
+@@ -1838,6 +2070,17 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+
+ if (EXT4_BLOCKS_PER_GROUP(sb) == 0)
+ goto cantfind_ext4;
+
-+/* copy ntsd, owner sid, and group sid from a security descriptor to another */
-+static void copy_sec_desc(const struct cifs_ntsd *pntsd,
-+ struct cifs_ntsd *pnntsd, __u32 sidsoffset)
-+{
-+ int i;
-+
-+ struct cifs_sid *owner_sid_ptr, *group_sid_ptr;
-+ struct cifs_sid *nowner_sid_ptr, *ngroup_sid_ptr;
-+
-+ /* copy security descriptor control portion */
-+ pnntsd->revision = pntsd->revision;
-+ pnntsd->type = pntsd->type;
-+ pnntsd->dacloffset = cpu_to_le32(sizeof(struct cifs_ntsd));
-+ pnntsd->sacloffset = 0;
-+ pnntsd->osidoffset = cpu_to_le32(sidsoffset);
-+ pnntsd->gsidoffset = cpu_to_le32(sidsoffset + sizeof(struct cifs_sid));
-+
-+ /* copy owner sid */
-+ owner_sid_ptr = (struct cifs_sid *)((char *)pntsd +
-+ le32_to_cpu(pntsd->osidoffset));
-+ nowner_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset);
-+
-+ nowner_sid_ptr->revision = owner_sid_ptr->revision;
-+ nowner_sid_ptr->num_subauth = owner_sid_ptr->num_subauth;
-+ for (i = 0; i < 6; i++)
-+ nowner_sid_ptr->authority[i] = owner_sid_ptr->authority[i];
-+ for (i = 0; i < 5; i++)
-+ nowner_sid_ptr->sub_auth[i] = owner_sid_ptr->sub_auth[i];
-+
-+ /* copy group sid */
-+ group_sid_ptr = (struct cifs_sid *)((char *)pntsd +
-+ le32_to_cpu(pntsd->gsidoffset));
-+ ngroup_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset +
-+ sizeof(struct cifs_sid));
++ /* ensure blocks_count calculation below doesn't sign-extend */
++ if (ext4_blocks_count(es) + EXT4_BLOCKS_PER_GROUP(sb) <
++ le32_to_cpu(es->s_first_data_block) + 1) {
++ printk(KERN_WARNING "EXT4-fs: bad geometry: block count %llu, "
++ "first data block %u, blocks per group %lu\n",
++ ext4_blocks_count(es),
++ le32_to_cpu(es->s_first_data_block),
++ EXT4_BLOCKS_PER_GROUP(sb));
++ goto failed_mount;
++ }
+ blocks_count = (ext4_blocks_count(es) -
+ le32_to_cpu(es->s_first_data_block) +
+ EXT4_BLOCKS_PER_GROUP(sb) - 1);
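The new geometry check guards the blocks_count expression that follows it: with 64-bit block counts, ext4_blocks_count(es) + EXT4_BLOCKS_PER_GROUP(sb) could wrap (or the count could sit below s_first_data_block), turning the group-count arithmetic into nonsense. A sketch of the guarded computation, with assumed plain-integer stand-ins for the accessors:

```c
#include <stdint.h>

/* Guarded group count as in the hunk above: reject geometries where
 * blocks + bpg wraps around or the count falls below the first data
 * block, then take ceil((blocks - first) / bpg). Returns -1 on bad
 * geometry, the group count otherwise. */
static int64_t sketch_group_count(uint64_t blocks, uint32_t first,
                                  uint32_t bpg)
{
    if (blocks + bpg < (uint64_t)first + 1)
        return -1;                       /* bad geometry / wraparound */
    return (int64_t)((blocks - first + bpg - 1) / bpg);
}
```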
+@@ -1900,6 +2143,8 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ sbi->s_rsv_window_head.rsv_goal_size = 0;
+ ext4_rsv_window_add(sb, &sbi->s_rsv_window_head);
+
++ sbi->s_stripe = ext4_get_stripe_size(sbi);
+
-+ ngroup_sid_ptr->revision = group_sid_ptr->revision;
-+ ngroup_sid_ptr->num_subauth = group_sid_ptr->num_subauth;
-+ for (i = 0; i < 6; i++)
-+ ngroup_sid_ptr->authority[i] = group_sid_ptr->authority[i];
-+ for (i = 0; i < 5; i++)
-+ ngroup_sid_ptr->sub_auth[i] =
-+ cpu_to_le32(group_sid_ptr->sub_auth[i]);
+ /*
+ * set up enough so that it can read an inode
+ */
+@@ -1944,6 +2189,21 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ goto failed_mount4;
+ }
+
++ if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
++ jbd2_journal_set_features(sbi->s_journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM, 0,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
++ } else if (test_opt(sb, JOURNAL_CHECKSUM)) {
++ jbd2_journal_set_features(sbi->s_journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM, 0, 0);
++ jbd2_journal_clear_features(sbi->s_journal, 0, 0,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
++ } else {
++ jbd2_journal_clear_features(sbi->s_journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM, 0,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
++ }
+
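The three-way jbd2 feature selection above reduces to: journal_async_commit implies checksumming plus the async-commit incompat bit, journal_checksum sets checksums alone, and neither option clears both. A flag-level sketch of that decision — the flag values here are invented for illustration, not jbd2's:

```c
/* Invented flag values for illustration only. */
#define F_CHECKSUM      0x1u  /* compat: commit-block checksums */
#define F_ASYNC_COMMIT  0x2u  /* incompat: commit without waiting */

/* Mirrors the mount-option logic: async commit carries checksums with it,
 * since a commit block written ahead of its data must be validatable. */
static unsigned sketch_journal_features(int async_commit, int checksum)
{
    if (async_commit)
        return F_CHECKSUM | F_ASYNC_COMMIT;
    if (checksum)
        return F_CHECKSUM;
    return 0;
}
```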
-+ return;
-+}
+ /* We have now updated the journal if required, so we can
+ * validate the data journaling mode. */
+ switch (test_opt(sb, DATA_FLAGS)) {
+@@ -2044,6 +2304,7 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ "writeback");
+
+ ext4_ext_init(sb);
++ ext4_mb_init(sb, needs_recovery);
+
+ lock_kernel();
+ return 0;
+@@ -2673,7 +2934,7 @@ static int ext4_statfs (struct dentry * dentry, struct kstatfs * buf)
+ if (test_opt(sb, MINIX_DF)) {
+ sbi->s_overhead_last = 0;
+ } else if (sbi->s_blocks_last != ext4_blocks_count(es)) {
+- unsigned long ngroups = sbi->s_groups_count, i;
++ ext4_group_t ngroups = sbi->s_groups_count, i;
+ ext4_fsblk_t overhead = 0;
+ smp_rmb();
+
+@@ -2909,7 +3170,7 @@ static ssize_t ext4_quota_read(struct super_block *sb, int type, char *data,
+ size_t len, loff_t off)
+ {
+ struct inode *inode = sb_dqopt(sb)->files[type];
+- sector_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
++ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
+ int err = 0;
+ int offset = off & (sb->s_blocksize - 1);
+ int tocopy;
+@@ -2947,7 +3208,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ const char *data, size_t len, loff_t off)
+ {
+ struct inode *inode = sb_dqopt(sb)->files[type];
+- sector_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
++ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
+ int err = 0;
+ int offset = off & (sb->s_blocksize - 1);
+ int tocopy;
+@@ -3002,7 +3263,6 @@ out:
+ i_size_write(inode, off+len-towrite);
+ EXT4_I(inode)->i_disksize = inode->i_size;
+ }
+- inode->i_version++;
+ inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ ext4_mark_inode_dirty(handle, inode);
+ mutex_unlock(&inode->i_mutex);
+@@ -3027,9 +3287,15 @@ static struct file_system_type ext4dev_fs_type = {
+
+ static int __init init_ext4_fs(void)
+ {
+- int err = init_ext4_xattr();
++ int err;
+
++ err = init_ext4_mballoc();
+ if (err)
+ return err;
+
- /*
- change posix mode to reflect permissions
- pmode is the existing mode (we only want to overwrite part of this
-@@ -220,6 +268,33 @@ static void mode_to_access_flags(umode_t mode, umode_t bits_to_use,
- return;
++ err = init_ext4_xattr();
++ if (err)
++ goto out2;
+ err = init_inodecache();
+ if (err)
+ goto out1;
+@@ -3041,6 +3307,8 @@ out:
+ destroy_inodecache();
+ out1:
+ exit_ext4_xattr();
++out2:
++ exit_ext4_mballoc();
+ return err;
}
-+static __le16 fill_ace_for_sid(struct cifs_ace *pntace,
-+ const struct cifs_sid *psid, __u64 nmode, umode_t bits)
-+{
-+ int i;
-+ __u16 size = 0;
-+ __u32 access_req = 0;
-+
-+ pntace->type = ACCESS_ALLOWED;
-+ pntace->flags = 0x0;
-+ mode_to_access_flags(nmode, bits, &access_req);
-+ if (!access_req)
-+ access_req = SET_MINIMUM_RIGHTS;
-+ pntace->access_req = cpu_to_le32(access_req);
-+
-+ pntace->sid.revision = psid->revision;
-+ pntace->sid.num_subauth = psid->num_subauth;
-+ for (i = 0; i < 6; i++)
-+ pntace->sid.authority[i] = psid->authority[i];
-+ for (i = 0; i < psid->num_subauth; i++)
-+ pntace->sid.sub_auth[i] = psid->sub_auth[i];
-+
-+ size = 1 + 1 + 2 + 4 + 1 + 1 + 6 + (psid->num_subauth * 4);
-+ pntace->size = cpu_to_le16(size);
-+
-+ return (size);
-+}
-+
+@@ -3049,6 +3317,7 @@ static void __exit exit_ext4_fs(void)
+ unregister_filesystem(&ext4dev_fs_type);
+ destroy_inodecache();
+ exit_ext4_xattr();
++ exit_ext4_mballoc();
+ }
- #ifdef CONFIG_CIFS_DEBUG2
- static void dump_ace(struct cifs_ace *pace, char *end_of_acl)
-@@ -243,7 +318,7 @@ static void dump_ace(struct cifs_ace *pace, char *end_of_acl)
- int i;
- cFYI(1, ("ACE revision %d num_auth %d type %d flags %d size %d",
- pace->sid.revision, pace->sid.num_subauth, pace->type,
-- pace->flags, pace->size));
-+ pace->flags, le16_to_cpu(pace->size)));
- for (i = 0; i < num_subauth; ++i) {
- cFYI(1, ("ACE sub_auth[%d]: 0x%x", i,
- le32_to_cpu(pace->sid.sub_auth[i])));
-@@ -346,6 +421,28 @@ static void parse_dacl(struct cifs_acl *pdacl, char *end_of_acl,
+ MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 8638730..d796213 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -480,7 +480,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ ea_bdebug(bh, "refcount now=0; freeing");
+ if (ce)
+ mb_cache_entry_free(ce);
+- ext4_free_blocks(handle, inode, bh->b_blocknr, 1);
++ ext4_free_blocks(handle, inode, bh->b_blocknr, 1, 1);
+ get_bh(bh);
+ ext4_forget(handle, 1, inode, bh, bh->b_blocknr);
+ } else {
+@@ -821,7 +821,7 @@ inserted:
+ new_bh = sb_getblk(sb, block);
+ if (!new_bh) {
+ getblk_failed:
+- ext4_free_blocks(handle, inode, block, 1);
++ ext4_free_blocks(handle, inode, block, 1, 1);
+ error = -EIO;
+ goto cleanup;
+ }
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 84f9f7d..e5e80d1 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -744,9 +744,6 @@ static inline void unregister_fuseblk(void)
}
+ #endif
+-static decl_subsys(fuse, NULL, NULL);
+-static decl_subsys(connections, NULL, NULL);
+-
+ static void fuse_inode_init_once(struct kmem_cache *cachep, void *foo)
+ {
+ struct inode * inode = foo;
+@@ -791,32 +788,37 @@ static void fuse_fs_cleanup(void)
+ kmem_cache_destroy(fuse_inode_cachep);
+ }
-+static int set_chmod_dacl(struct cifs_acl *pndacl, struct cifs_sid *pownersid,
-+ struct cifs_sid *pgrpsid, __u64 nmode)
-+{
-+ __le16 size = 0;
-+ struct cifs_acl *pnndacl;
-+
-+ pnndacl = (struct cifs_acl *)((char *)pndacl + sizeof(struct cifs_acl));
-+
-+ size += fill_ace_for_sid((struct cifs_ace *) ((char *)pnndacl + size),
-+ pownersid, nmode, S_IRWXU);
-+ size += fill_ace_for_sid((struct cifs_ace *)((char *)pnndacl + size),
-+ pgrpsid, nmode, S_IRWXG);
-+ size += fill_ace_for_sid((struct cifs_ace *)((char *)pnndacl + size),
-+ &sid_everyone, nmode, S_IRWXO);
-+
-+ pndacl->size = cpu_to_le16(size + sizeof(struct cifs_acl));
-+ pndacl->num_aces = 3;
-+
-+ return (0);
-+}
-+
++static struct kobject *fuse_kobj;
++static struct kobject *connections_kobj;
+
- static int parse_sid(struct cifs_sid *psid, char *end_of_acl)
+ static int fuse_sysfs_init(void)
{
- /* BB need to add parm so we can store the SID BB */
-@@ -432,6 +529,46 @@ static int parse_sec_desc(struct cifs_ntsd *pntsd, int acl_len,
- }
+ int err;
+- kobj_set_kset_s(&fuse_subsys, fs_subsys);
+- err = subsystem_register(&fuse_subsys);
+- if (err)
++ fuse_kobj = kobject_create_and_add("fuse", fs_kobj);
++ if (!fuse_kobj) {
++ err = -ENOMEM;
+ goto out_err;
++ }
-+/* Convert permission bits from mode to equivalent CIFS ACL */
-+static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
-+ int acl_len, struct inode *inode, __u64 nmode)
-+{
-+ int rc = 0;
-+ __u32 dacloffset;
-+ __u32 ndacloffset;
-+ __u32 sidsoffset;
-+ struct cifs_sid *owner_sid_ptr, *group_sid_ptr;
-+ struct cifs_acl *dacl_ptr = NULL; /* no need for SACL ptr */
-+ struct cifs_acl *ndacl_ptr = NULL; /* no need for SACL ptr */
-+
-+ if ((inode == NULL) || (pntsd == NULL) || (pnntsd == NULL))
-+ return (-EIO);
-+
-+ owner_sid_ptr = (struct cifs_sid *)((char *)pntsd +
-+ le32_to_cpu(pntsd->osidoffset));
-+ group_sid_ptr = (struct cifs_sid *)((char *)pntsd +
-+ le32_to_cpu(pntsd->gsidoffset));
-+
-+ dacloffset = le32_to_cpu(pntsd->dacloffset);
-+ dacl_ptr = (struct cifs_acl *)((char *)pntsd + dacloffset);
-+
-+ ndacloffset = sizeof(struct cifs_ntsd);
-+ ndacl_ptr = (struct cifs_acl *)((char *)pnntsd + ndacloffset);
-+ ndacl_ptr->revision = dacl_ptr->revision;
-+ ndacl_ptr->size = 0;
-+ ndacl_ptr->num_aces = 0;
-+
-+ rc = set_chmod_dacl(ndacl_ptr, owner_sid_ptr, group_sid_ptr, nmode);
-+
-+ sidsoffset = ndacloffset + le16_to_cpu(ndacl_ptr->size);
-+
-+ /* copy security descriptor control portion and owner and group sid */
-+ copy_sec_desc(pntsd, pnntsd, sidsoffset);
-+
-+ return (rc);
-+}
-+
-+
- /* Retrieve an ACL from the server */
- static struct cifs_ntsd *get_cifs_acl(u32 *pacllen, struct inode *inode,
- const char *path)
-@@ -487,6 +624,64 @@ static struct cifs_ntsd *get_cifs_acl(u32 *pacllen, struct inode *inode,
- return pntsd;
+- kobj_set_kset_s(&connections_subsys, fuse_subsys);
+- err = subsystem_register(&connections_subsys);
+- if (err)
++ connections_kobj = kobject_create_and_add("connections", fuse_kobj);
++ if (!connections_kobj) {
++ err = -ENOMEM;
+ goto out_fuse_unregister;
++ }
+
+ return 0;
+
+ out_fuse_unregister:
+- subsystem_unregister(&fuse_subsys);
++ kobject_put(fuse_kobj);
+ out_err:
+ return err;
}
-+/* Set an ACL on the server */
-+static int set_cifs_acl(struct cifs_ntsd *pnntsd, __u32 acllen,
-+ struct inode *inode, const char *path)
-+{
-+ struct cifsFileInfo *open_file;
-+ int unlock_file = FALSE;
-+ int xid;
-+ int rc = -EIO;
-+ __u16 fid;
-+ struct super_block *sb;
-+ struct cifs_sb_info *cifs_sb;
-+
-+#ifdef CONFIG_CIFS_DEBUG2
-+ cFYI(1, ("set ACL for %s from mode 0x%x", path, inode->i_mode));
-+#endif
-+
-+ if (!inode)
-+ return (rc);
-+
-+ sb = inode->i_sb;
-+ if (sb == NULL)
-+ return (rc);
-+
-+ cifs_sb = CIFS_SB(sb);
-+ xid = GetXid();
-+
-+ open_file = find_readable_file(CIFS_I(inode));
-+ if (open_file) {
-+ unlock_file = TRUE;
-+ fid = open_file->netfid;
-+ } else {
-+ int oplock = FALSE;
-+ /* open file */
-+ rc = CIFSSMBOpen(xid, cifs_sb->tcon, path, FILE_OPEN,
-+ WRITE_DAC, 0, &fid, &oplock, NULL,
-+ cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
-+ CIFS_MOUNT_MAP_SPECIAL_CHR);
-+ if (rc != 0) {
-+ cERROR(1, ("Unable to open file to set ACL"));
-+ FreeXid(xid);
-+ return (rc);
-+ }
-+ }
-+
-+ rc = CIFSSMBSetCIFSACL(xid, cifs_sb->tcon, fid, pnntsd, acllen);
-+#ifdef CONFIG_CIFS_DEBUG2
-+ cFYI(1, ("SetCIFSACL rc = %d", rc));
-+#endif
-+ if (unlock_file == TRUE)
-+ atomic_dec(&open_file->wrtPending);
-+ else
-+ CIFSSMBClose(xid, cifs_sb->tcon, fid);
-+
-+ FreeXid(xid);
-+
-+ return (rc);
-+}
-+
- /* Translate the CIFS ACL (simlar to NTFS ACL) for a file into mode bits */
- void acl_to_uid_mode(struct inode *inode, const char *path)
+ static void fuse_sysfs_cleanup(void)
{
-@@ -510,24 +705,53 @@ void acl_to_uid_mode(struct inode *inode, const char *path)
+- subsystem_unregister(&connections_subsys);
+- subsystem_unregister(&fuse_subsys);
++ kobject_put(connections_kobj);
++ kobject_put(fuse_kobj);
}
- /* Convert mode bits to an ACL so we can update the ACL on the server */
--int mode_to_acl(struct inode *inode, const char *path)
-+int mode_to_acl(struct inode *inode, const char *path, __u64 nmode)
+ static int __init fuse_init(void)
+diff --git a/fs/gfs2/Makefile b/fs/gfs2/Makefile
+index 04ad0ca..8fff110 100644
+--- a/fs/gfs2/Makefile
++++ b/fs/gfs2/Makefile
+@@ -2,7 +2,7 @@ obj-$(CONFIG_GFS2_FS) += gfs2.o
+ gfs2-y := acl.o bmap.o daemon.o dir.o eaops.o eattr.o glock.o \
+ glops.o inode.o lm.o log.o lops.o locking.o main.o meta_io.o \
+ mount.o ops_address.o ops_dentry.o ops_export.o ops_file.o \
+- ops_fstype.o ops_inode.o ops_super.o ops_vm.o quota.o \
++ ops_fstype.o ops_inode.o ops_super.o quota.o \
+ recovery.o rgrp.o super.o sys.o trans.o util.o
+
+ obj-$(CONFIG_GFS2_FS_LOCKING_NOLOCK) += locking/nolock/
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 93fa427..e4effc4 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -59,7 +59,6 @@ struct strip_mine {
+ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
+ u64 block, struct page *page)
{
- int rc = 0;
- __u32 acllen = 0;
-- struct cifs_ntsd *pntsd = NULL;
-+ struct cifs_ntsd *pntsd = NULL; /* acl obtained from server */
-+ struct cifs_ntsd *pnntsd = NULL; /* modified acl to be sent to server */
+- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ struct inode *inode = &ip->i_inode;
+ struct buffer_head *bh;
+ int release = 0;
+@@ -95,7 +94,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
+ set_buffer_uptodate(bh);
+ if (!gfs2_is_jdata(ip))
+ mark_buffer_dirty(bh);
+- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
++ if (!gfs2_is_writeback(ip))
+ gfs2_trans_add_bh(ip->i_gl, bh, 0);
-+#ifdef CONFIG_CIFS_DEBUG2
- cFYI(1, ("set ACL from mode for %s", path));
-+#endif
+ if (release) {
+@@ -453,8 +452,8 @@ static inline void bmap_unlock(struct inode *inode, int create)
+ * Returns: errno
+ */
- /* Get the security descriptor */
- pntsd = get_cifs_acl(&acllen, inode, path);
+-int gfs2_block_map(struct inode *inode, u64 lblock, int create,
+- struct buffer_head *bh_map)
++int gfs2_block_map(struct inode *inode, sector_t lblock,
++ struct buffer_head *bh_map, int create)
+ {
+ struct gfs2_inode *ip = GFS2_I(inode);
+ struct gfs2_sbd *sdp = GFS2_SB(inode);
+@@ -470,6 +469,7 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
+ unsigned int maxlen = bh_map->b_size >> inode->i_blkbits;
+ struct metapath mp;
+ u64 size;
++ struct buffer_head *dibh = NULL;
-- /* Add/Modify the three ACEs for owner, group, everyone
-- while retaining the other ACEs */
-+ /* Add three ACEs for owner, group, everyone getting rid of
-+ other ACEs as chmod disables ACEs and set the security descriptor */
+ BUG_ON(maxlen == 0);
-- /* Set the security descriptor */
-+ if (pntsd) {
-+ /* allocate memory for the smb header,
-+ set security descriptor request security descriptor
-+ parameters, and secuirty descriptor itself */
+@@ -500,6 +500,8 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
+ error = gfs2_meta_inode_buffer(ip, &bh);
+ if (error)
+ goto out_fail;
++ dibh = bh;
++ get_bh(dibh);
-+ pnntsd = kmalloc(acllen, GFP_KERNEL);
-+ if (!pnntsd) {
-+ cERROR(1, ("Unable to allocate security descriptor"));
-+ kfree(pntsd);
-+ return (-ENOMEM);
-+ }
+ for (x = 0; x < end_of_metadata; x++) {
+ lookup_block(ip, bh, x, &mp, create, &new, &dblock);
+@@ -518,13 +520,8 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
+ if (boundary)
+ set_buffer_boundary(bh_map);
+ if (new) {
+- struct buffer_head *dibh;
+- error = gfs2_meta_inode_buffer(ip, &dibh);
+- if (!error) {
+- gfs2_trans_add_bh(ip->i_gl, dibh, 1);
+- gfs2_dinode_out(ip, dibh->b_data);
+- brelse(dibh);
+- }
++ gfs2_trans_add_bh(ip->i_gl, dibh, 1);
++ gfs2_dinode_out(ip, dibh->b_data);
+ set_buffer_new(bh_map);
+ goto out_brelse;
+ }
+@@ -545,6 +542,8 @@ out_brelse:
+ out_ok:
+ error = 0;
+ out_fail:
++ if (dibh)
++ brelse(dibh);
+ bmap_unlock(inode, create);
+ return error;
+ }
+@@ -560,7 +559,7 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi
+ BUG_ON(!new);
-- kfree(pntsd);
-- return rc;
-+ rc = build_sec_desc(pntsd, pnntsd, acllen, inode, nmode);
-+
-+#ifdef CONFIG_CIFS_DEBUG2
-+ cFYI(1, ("build_sec_desc rc: %d", rc));
-+#endif
-+
-+ if (!rc) {
-+ /* Set the security descriptor */
-+ rc = set_cifs_acl(pnntsd, acllen, inode, path);
-+#ifdef CONFIG_CIFS_DEBUG2
-+ cFYI(1, ("set_cifs_acl rc: %d", rc));
-+#endif
+ bh.b_size = 1 << (inode->i_blkbits + 5);
+- ret = gfs2_block_map(inode, lblock, create, &bh);
++ ret = gfs2_block_map(inode, lblock, &bh, create);
+ *extlen = bh.b_size >> inode->i_blkbits;
+ *dblock = bh.b_blocknr;
+ if (buffer_new(&bh))
+@@ -684,7 +683,7 @@ static int do_strip(struct gfs2_inode *ip, struct buffer_head *dibh,
+ if (metadata)
+ revokes = (height) ? sdp->sd_inptrs : sdp->sd_diptrs;
+
+- error = gfs2_rindex_hold(sdp, &ip->i_alloc.al_ri_gh);
++ error = gfs2_rindex_hold(sdp, &ip->i_alloc->al_ri_gh);
+ if (error)
+ return error;
+
+@@ -786,7 +785,7 @@ out_rg_gunlock:
+ out_rlist:
+ gfs2_rlist_free(&rlist);
+ out:
+- gfs2_glock_dq_uninit(&ip->i_alloc.al_ri_gh);
++ gfs2_glock_dq_uninit(&ip->i_alloc->al_ri_gh);
+ return error;
+ }
+
+@@ -879,7 +878,6 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
+ {
+ struct inode *inode = mapping->host;
+ struct gfs2_inode *ip = GFS2_I(inode);
+- struct gfs2_sbd *sdp = GFS2_SB(inode);
+ loff_t from = inode->i_size;
+ unsigned long index = from >> PAGE_CACHE_SHIFT;
+ unsigned offset = from & (PAGE_CACHE_SIZE-1);
+@@ -911,7 +909,7 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
+ err = 0;
+
+ if (!buffer_mapped(bh)) {
+- gfs2_get_block(inode, iblock, bh, 0);
++ gfs2_block_map(inode, iblock, bh, 0);
+ /* unmapped? It's a hole - nothing to do */
+ if (!buffer_mapped(bh))
+ goto unlock;
+@@ -931,7 +929,7 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
+ err = 0;
+ }
+
+- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
++ if (!gfs2_is_writeback(ip))
+ gfs2_trans_add_bh(ip->i_gl, bh, 0);
+
+ zero_user_page(page, offset, length, KM_USER0);
+@@ -1224,8 +1222,13 @@ int gfs2_write_alloc_required(struct gfs2_inode *ip, u64 offset,
+ do_div(lblock_stop, bsize);
+ } else {
+ unsigned int shift = sdp->sd_sb.sb_bsize_shift;
++ u64 end_of_file = (ip->i_di.di_size + sdp->sd_sb.sb_bsize - 1) >> shift;
+ lblock = offset >> shift;
+ lblock_stop = (offset + len + sdp->sd_sb.sb_bsize - 1) >> shift;
++ if (lblock_stop > end_of_file) {
++ *alloc_required = 1;
++ return 0;
+ }
-+
-+ kfree(pnntsd);
-+ kfree(pntsd);
-+ }
-+
-+ return (rc);
+ }
+
+ for (; lblock < lblock_stop; lblock += extlen) {
+diff --git a/fs/gfs2/bmap.h b/fs/gfs2/bmap.h
+index ac2fd04..4e6cde2 100644
+--- a/fs/gfs2/bmap.h
++++ b/fs/gfs2/bmap.h
+@@ -15,7 +15,7 @@ struct gfs2_inode;
+ struct page;
+
+ int gfs2_unstuff_dinode(struct gfs2_inode *ip, struct page *page);
+-int gfs2_block_map(struct inode *inode, u64 lblock, int create, struct buffer_head *bh);
++int gfs2_block_map(struct inode *inode, sector_t lblock, struct buffer_head *bh, int create);
+ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsigned *extlen);
+
+ int gfs2_truncatei(struct gfs2_inode *ip, u64 size);
+diff --git a/fs/gfs2/daemon.c b/fs/gfs2/daemon.c
+index 3731ab0..e519919 100644
+--- a/fs/gfs2/daemon.c
++++ b/fs/gfs2/daemon.c
+@@ -83,56 +83,6 @@ int gfs2_recoverd(void *data)
}
- #endif /* CONFIG_CIFS_EXPERIMENTAL */
-diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
-index 093beaa..e9f4ec7 100644
---- a/fs/cifs/cifsfs.c
-+++ b/fs/cifs/cifsfs.c
-@@ -44,6 +44,7 @@
- #include "cifs_fs_sb.h"
- #include <linux/mm.h>
- #include <linux/key-type.h>
-+#include "dns_resolve.h"
- #include "cifs_spnego.h"
- #define CIFS_MAGIC_NUMBER 0xFF534D42 /* the first four bytes of SMB PDUs */
-@@ -96,6 +97,9 @@ cifs_read_super(struct super_block *sb, void *data,
+ /**
+- * gfs2_logd - Update log tail as Active Items get flushed to in-place blocks
+- * @sdp: Pointer to GFS2 superblock
+- *
+- * Also, periodically check to make sure that we're using the most recent
+- * journal index.
+- */
+-
+-int gfs2_logd(void *data)
+-{
+- struct gfs2_sbd *sdp = data;
+- struct gfs2_holder ji_gh;
+- unsigned long t;
+- int need_flush;
+-
+- while (!kthread_should_stop()) {
+- /* Advance the log tail */
+-
+- t = sdp->sd_log_flush_time +
+- gfs2_tune_get(sdp, gt_log_flush_secs) * HZ;
+-
+- gfs2_ail1_empty(sdp, DIO_ALL);
+- gfs2_log_lock(sdp);
+- need_flush = sdp->sd_log_num_buf > gfs2_tune_get(sdp, gt_incore_log_blocks);
+- gfs2_log_unlock(sdp);
+- if (need_flush || time_after_eq(jiffies, t)) {
+- gfs2_log_flush(sdp, NULL);
+- sdp->sd_log_flush_time = jiffies;
+- }
+-
+- /* Check for latest journal index */
+-
+- t = sdp->sd_jindex_refresh_time +
+- gfs2_tune_get(sdp, gt_jindex_refresh_secs) * HZ;
+-
+- if (time_after_eq(jiffies, t)) {
+- if (!gfs2_jindex_hold(sdp, &ji_gh))
+- gfs2_glock_dq_uninit(&ji_gh);
+- sdp->sd_jindex_refresh_time = jiffies;
+- }
+-
+- t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
+- if (freezing(current))
+- refrigerator();
+- schedule_timeout_interruptible(t);
+- }
+-
+- return 0;
+-}
+-
+-/**
+ * gfs2_quotad - Write cached quota changes into the quota file
+ * @sdp: Pointer to GFS2 superblock
+ *
+diff --git a/fs/gfs2/daemon.h b/fs/gfs2/daemon.h
+index 0de9b35..4be084f 100644
+--- a/fs/gfs2/daemon.h
++++ b/fs/gfs2/daemon.h
+@@ -12,7 +12,6 @@
+
+ int gfs2_glockd(void *data);
+ int gfs2_recoverd(void *data);
+-int gfs2_logd(void *data);
+ int gfs2_quotad(void *data);
+
+ #endif /* __DAEMON_DOT_H__ */
+diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
+index 9949bb7..57e2ed9 100644
+--- a/fs/gfs2/dir.c
++++ b/fs/gfs2/dir.c
+@@ -1876,7 +1876,7 @@ static int leaf_dealloc(struct gfs2_inode *dip, u32 index, u32 len,
+ if (error)
+ goto out;
+
+- error = gfs2_rindex_hold(sdp, &dip->i_alloc.al_ri_gh);
++ error = gfs2_rindex_hold(sdp, &dip->i_alloc->al_ri_gh);
+ if (error)
+ goto out_qs;
+
+@@ -1949,7 +1949,7 @@ out_rg_gunlock:
+ gfs2_glock_dq_m(rlist.rl_rgrps, rlist.rl_ghs);
+ out_rlist:
+ gfs2_rlist_free(&rlist);
+- gfs2_glock_dq_uninit(&dip->i_alloc.al_ri_gh);
++ gfs2_glock_dq_uninit(&dip->i_alloc->al_ri_gh);
+ out_qs:
+ gfs2_quota_unhold(dip);
+ out:
+diff --git a/fs/gfs2/eaops.c b/fs/gfs2/eaops.c
+index aa8dbf3..f114ba2 100644
+--- a/fs/gfs2/eaops.c
++++ b/fs/gfs2/eaops.c
+@@ -56,46 +56,6 @@ unsigned int gfs2_ea_name2type(const char *name, const char **truncated_name)
+ return type;
+ }
+
+-static int user_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+- int error = permission(inode, MAY_READ, NULL);
+- if (error)
+- return error;
+-
+- return gfs2_ea_get_i(ip, er);
+-}
+-
+-static int user_eo_set(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+-
+- if (S_ISREG(inode->i_mode) ||
+- (S_ISDIR(inode->i_mode) && !(inode->i_mode & S_ISVTX))) {
+- int error = permission(inode, MAY_WRITE, NULL);
+- if (error)
+- return error;
+- } else
+- return -EPERM;
+-
+- return gfs2_ea_set_i(ip, er);
+-}
+-
+-static int user_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+-
+- if (S_ISREG(inode->i_mode) ||
+- (S_ISDIR(inode->i_mode) && !(inode->i_mode & S_ISVTX))) {
+- int error = permission(inode, MAY_WRITE, NULL);
+- if (error)
+- return error;
+- } else
+- return -EPERM;
+-
+- return gfs2_ea_remove_i(ip, er);
+-}
+-
+ static int system_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
{
- struct inode *inode;
- struct cifs_sb_info *cifs_sb;
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ int len;
-+#endif
- int rc = 0;
+ if (!GFS2_ACL_IS_ACCESS(er->er_name, er->er_name_len) &&
+@@ -108,8 +68,6 @@ static int system_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+ GFS2_ACL_IS_DEFAULT(er->er_name, er->er_name_len)))
+ return -EOPNOTSUPP;
- /* BB should we make this contingent on mount parm? */
-@@ -105,6 +109,25 @@ cifs_read_super(struct super_block *sb, void *data,
- if (cifs_sb == NULL)
- return -ENOMEM;
+-
+-
+ return gfs2_ea_get_i(ip, er);
+ }
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ /* copy mount params to sb for use in submounts */
-+ /* BB: should we move this after the mount so we
-+ * do not have to do the copy on failed mounts?
-+ * BB: May be it is better to do simple copy before
-+ * complex operation (mount), and in case of fail
-+ * just exit instead of doing mount and attempting
-+ * undo it if this copy fails?*/
-+ len = strlen(data);
-+ cifs_sb->mountdata = kzalloc(len + 1, GFP_KERNEL);
-+ if (cifs_sb->mountdata == NULL) {
-+ kfree(sb->s_fs_info);
-+ sb->s_fs_info = NULL;
-+ return -ENOMEM;
-+ }
-+ strncpy(cifs_sb->mountdata, data, len + 1);
-+ cifs_sb->mountdata[len] = '\0';
-+#endif
-+
- rc = cifs_mount(sb, cifs_sb, data, devname);
+@@ -170,40 +128,10 @@ static int system_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+ return gfs2_ea_remove_i(ip, er);
+ }
- if (rc) {
-@@ -154,6 +177,12 @@ out_no_root:
+-static int security_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+- int error = permission(inode, MAY_READ, NULL);
+- if (error)
+- return error;
+-
+- return gfs2_ea_get_i(ip, er);
+-}
+-
+-static int security_eo_set(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+- int error = permission(inode, MAY_WRITE, NULL);
+- if (error)
+- return error;
+-
+- return gfs2_ea_set_i(ip, er);
+-}
+-
+-static int security_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
+-{
+- struct inode *inode = &ip->i_inode;
+- int error = permission(inode, MAY_WRITE, NULL);
+- if (error)
+- return error;
+-
+- return gfs2_ea_remove_i(ip, er);
+-}
+-
+ static const struct gfs2_eattr_operations gfs2_user_eaops = {
+- .eo_get = user_eo_get,
+- .eo_set = user_eo_set,
+- .eo_remove = user_eo_remove,
++ .eo_get = gfs2_ea_get_i,
++ .eo_set = gfs2_ea_set_i,
++ .eo_remove = gfs2_ea_remove_i,
+ .eo_name = "user",
+ };
- out_mount_failed:
- if (cifs_sb) {
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ if (cifs_sb->mountdata) {
-+ kfree(cifs_sb->mountdata);
-+ cifs_sb->mountdata = NULL;
-+ }
-+#endif
- if (cifs_sb->local_nls)
- unload_nls(cifs_sb->local_nls);
- kfree(cifs_sb);
-@@ -177,6 +206,13 @@ cifs_put_super(struct super_block *sb)
- if (rc) {
- cERROR(1, ("cifs_umount failed with return code %d", rc));
+@@ -215,9 +143,9 @@ const struct gfs2_eattr_operations gfs2_system_eaops = {
+ };
+
+ static const struct gfs2_eattr_operations gfs2_security_eaops = {
+- .eo_get = security_eo_get,
+- .eo_set = security_eo_set,
+- .eo_remove = security_eo_remove,
++ .eo_get = gfs2_ea_get_i,
++ .eo_set = gfs2_ea_set_i,
++ .eo_remove = gfs2_ea_remove_i,
+ .eo_name = "security",
+ };
+
+diff --git a/fs/gfs2/eattr.c b/fs/gfs2/eattr.c
+index 2a7435b..bee9970 100644
+--- a/fs/gfs2/eattr.c
++++ b/fs/gfs2/eattr.c
+@@ -1418,7 +1418,7 @@ out:
+ static int ea_dealloc_block(struct gfs2_inode *ip)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_rgrpd *rgd;
+ struct buffer_head *dibh;
+ int error;
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index a37efe4..80e09c5 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -217,7 +217,6 @@ int gfs2_glock_put(struct gfs2_glock *gl)
+ if (atomic_dec_and_test(&gl->gl_ref)) {
+ hlist_del(&gl->gl_list);
+ write_unlock(gl_lock_addr(gl->gl_hash));
+- BUG_ON(spin_is_locked(&gl->gl_spin));
+ gfs2_assert(sdp, gl->gl_state == LM_ST_UNLOCKED);
+ gfs2_assert(sdp, list_empty(&gl->gl_reclaim));
+ gfs2_assert(sdp, list_empty(&gl->gl_holders));
+@@ -346,7 +345,6 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ gl->gl_object = NULL;
+ gl->gl_sbd = sdp;
+ gl->gl_aspace = NULL;
+- lops_init_le(&gl->gl_le, &gfs2_glock_lops);
+ INIT_DELAYED_WORK(&gl->gl_work, glock_work_func);
+
+ /* If this glock protects actual on-disk data or metadata blocks,
+@@ -461,7 +459,6 @@ static void wait_on_holder(struct gfs2_holder *gh)
+
+ static void gfs2_demote_wake(struct gfs2_glock *gl)
+ {
+- BUG_ON(!spin_is_locked(&gl->gl_spin));
+ gl->gl_demote_state = LM_ST_EXCLUSIVE;
+ clear_bit(GLF_DEMOTE, &gl->gl_flags);
+ smp_mb__after_clear_bit();
+@@ -507,21 +504,12 @@ static int rq_mutex(struct gfs2_holder *gh)
+ static int rq_promote(struct gfs2_holder *gh)
+ {
+ struct gfs2_glock *gl = gh->gh_gl;
+- struct gfs2_sbd *sdp = gl->gl_sbd;
+
+ if (!relaxed_state_ok(gl->gl_state, gh->gh_state, gh->gh_flags)) {
+ if (list_empty(&gl->gl_holders)) {
+ gl->gl_req_gh = gh;
+ set_bit(GLF_LOCK, &gl->gl_flags);
+ spin_unlock(&gl->gl_spin);
+-
+- if (atomic_read(&sdp->sd_reclaim_count) >
+- gfs2_tune_get(sdp, gt_reclaim_limit) &&
+- !(gh->gh_flags & LM_FLAG_PRIORITY)) {
+- gfs2_reclaim_glock(sdp);
+- gfs2_reclaim_glock(sdp);
+- }
+-
+ gfs2_glock_xmote_th(gh->gh_gl, gh);
+ spin_lock(&gl->gl_spin);
+ }
+@@ -567,7 +555,10 @@ static int rq_demote(struct gfs2_glock *gl)
+ gfs2_demote_wake(gl);
+ return 0;
}
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ if (cifs_sb->mountdata) {
-+ kfree(cifs_sb->mountdata);
-+ cifs_sb->mountdata = NULL;
-+ }
-+#endif
+
- unload_nls(cifs_sb->local_nls);
- kfree(cifs_sb);
- return;
-@@ -435,6 +471,10 @@ static void cifs_umount_begin(struct vfsmount *vfsmnt, int flags)
- struct cifs_sb_info *cifs_sb;
- struct cifsTconInfo *tcon;
+ set_bit(GLF_LOCK, &gl->gl_flags);
++ set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
++
+ if (gl->gl_demote_state == LM_ST_UNLOCKED ||
+ gl->gl_state != LM_ST_EXCLUSIVE) {
+ spin_unlock(&gl->gl_spin);
+@@ -576,7 +567,9 @@ static int rq_demote(struct gfs2_glock *gl)
+ spin_unlock(&gl->gl_spin);
+ gfs2_glock_xmote_th(gl, NULL);
+ }
++
+ spin_lock(&gl->gl_spin);
++ clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ dfs_shrink_umount_helper(vfsmnt);
-+#endif /* CONFIG CIFS_DFS_UPCALL */
+ return 0;
+ }
+@@ -598,23 +591,18 @@ static void run_queue(struct gfs2_glock *gl)
+ if (!list_empty(&gl->gl_waiters1)) {
+ gh = list_entry(gl->gl_waiters1.next,
+ struct gfs2_holder, gh_list);
+-
+- if (test_bit(HIF_MUTEX, &gh->gh_iflags))
+- blocked = rq_mutex(gh);
+- else
+- gfs2_assert_warn(gl->gl_sbd, 0);
+-
++ blocked = rq_mutex(gh);
+ } else if (test_bit(GLF_DEMOTE, &gl->gl_flags)) {
+ blocked = rq_demote(gl);
++ if (gl->gl_waiters2 && !blocked) {
++ set_bit(GLF_DEMOTE, &gl->gl_flags);
++ gl->gl_demote_state = LM_ST_UNLOCKED;
++ }
++ gl->gl_waiters2 = 0;
+ } else if (!list_empty(&gl->gl_waiters3)) {
+ gh = list_entry(gl->gl_waiters3.next,
+ struct gfs2_holder, gh_list);
+-
+- if (test_bit(HIF_PROMOTE, &gh->gh_iflags))
+- blocked = rq_promote(gh);
+- else
+- gfs2_assert_warn(gl->gl_sbd, 0);
+-
++ blocked = rq_promote(gh);
+ } else
+ break;
+
+@@ -632,27 +620,21 @@ static void run_queue(struct gfs2_glock *gl)
+
+ static void gfs2_glmutex_lock(struct gfs2_glock *gl)
+ {
+- struct gfs2_holder gh;
+-
+- gfs2_holder_init(gl, 0, 0, &gh);
+- set_bit(HIF_MUTEX, &gh.gh_iflags);
+- if (test_and_set_bit(HIF_WAIT, &gh.gh_iflags))
+- BUG();
+-
+ spin_lock(&gl->gl_spin);
+ if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
++ struct gfs2_holder gh;
+
- if (!(flags & MNT_FORCE))
- return;
- cifs_sb = CIFS_SB(vfsmnt->mnt_sb);
-@@ -552,7 +592,7 @@ static loff_t cifs_llseek(struct file *file, loff_t offset, int origin)
- return remote_llseek(file, offset, origin);
++ gfs2_holder_init(gl, 0, 0, &gh);
++ set_bit(HIF_WAIT, &gh.gh_iflags);
+ list_add_tail(&gh.gh_list, &gl->gl_waiters1);
++ spin_unlock(&gl->gl_spin);
++ wait_on_holder(&gh);
++ gfs2_holder_uninit(&gh);
+ } else {
+ gl->gl_owner_pid = current->pid;
+ gl->gl_ip = (unsigned long)__builtin_return_address(0);
+- clear_bit(HIF_WAIT, &gh.gh_iflags);
+- smp_mb();
+- wake_up_bit(&gh.gh_iflags, HIF_WAIT);
++ spin_unlock(&gl->gl_spin);
+ }
+- spin_unlock(&gl->gl_spin);
+-
+- wait_on_holder(&gh);
+- gfs2_holder_uninit(&gh);
}
--static struct file_system_type cifs_fs_type = {
-+struct file_system_type cifs_fs_type = {
- .owner = THIS_MODULE,
- .name = "cifs",
- .get_sb = cifs_get_sb,
-@@ -1015,11 +1055,16 @@ init_cifs(void)
- if (rc)
- goto out_unregister_filesystem;
- #endif
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ rc = register_key_type(&key_type_dns_resolver);
-+ if (rc)
-+ goto out_unregister_key_type;
-+#endif
- oplockThread = kthread_run(cifs_oplock_thread, NULL, "cifsoplockd");
- if (IS_ERR(oplockThread)) {
- rc = PTR_ERR(oplockThread);
- cERROR(1, ("error %d create oplock thread", rc));
-- goto out_unregister_key_type;
-+ goto out_unregister_dfs_key_type;
+ /**
+@@ -691,7 +673,6 @@ static void gfs2_glmutex_unlock(struct gfs2_glock *gl)
+ gl->gl_owner_pid = 0;
+ gl->gl_ip = 0;
+ run_queue(gl);
+- BUG_ON(!spin_is_locked(&gl->gl_spin));
+ spin_unlock(&gl->gl_spin);
+ }
+
+@@ -722,7 +703,10 @@ static void handle_callback(struct gfs2_glock *gl, unsigned int state,
+ }
+ } else if (gl->gl_demote_state != LM_ST_UNLOCKED &&
+ gl->gl_demote_state != state) {
+- gl->gl_demote_state = LM_ST_UNLOCKED;
++ if (test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags))
++ gl->gl_waiters2 = 1;
++ else
++ gl->gl_demote_state = LM_ST_UNLOCKED;
}
+ spin_unlock(&gl->gl_spin);
+ }
+@@ -943,8 +927,8 @@ static void gfs2_glock_drop_th(struct gfs2_glock *gl)
+ const struct gfs2_glock_operations *glops = gl->gl_ops;
+ unsigned int ret;
- dnotifyThread = kthread_run(cifs_dnotify_thread, NULL, "cifsdnotifyd");
-@@ -1033,7 +1078,11 @@ init_cifs(void)
+- if (glops->go_drop_th)
+- glops->go_drop_th(gl);
++ if (glops->go_xmote_th)
++ glops->go_xmote_th(gl);
- out_stop_oplock_thread:
- kthread_stop(oplockThread);
-+ out_unregister_dfs_key_type:
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ unregister_key_type(&key_type_dns_resolver);
- out_unregister_key_type:
-+#endif
- #ifdef CONFIG_CIFS_UPCALL
- unregister_key_type(&cifs_spnego_key_type);
- out_unregister_filesystem:
-@@ -1059,6 +1108,9 @@ exit_cifs(void)
- #ifdef CONFIG_PROC_FS
- cifs_proc_clean();
- #endif
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+ unregister_key_type(&key_type_dns_resolver);
-+#endif
- #ifdef CONFIG_CIFS_UPCALL
- unregister_key_type(&cifs_spnego_key_type);
- #endif
-diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
-index 2a21dc6..195b14d 100644
---- a/fs/cifs/cifsfs.h
-+++ b/fs/cifs/cifsfs.h
-@@ -32,6 +32,7 @@
- #define TRUE 1
- #endif
+ gfs2_assert_warn(sdp, test_bit(GLF_LOCK, &gl->gl_flags));
+ gfs2_assert_warn(sdp, list_empty(&gl->gl_holders));
+@@ -1156,8 +1140,6 @@ restart:
+ return -EIO;
+ }
-+extern struct file_system_type cifs_fs_type;
- extern const struct address_space_operations cifs_addr_ops;
- extern const struct address_space_operations cifs_addr_ops_smallbuf;
+- set_bit(HIF_PROMOTE, &gh->gh_iflags);
+-
+ spin_lock(&gl->gl_spin);
+ add_to_queue(gh);
+ run_queue(gl);
+@@ -1248,12 +1230,11 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
+ list_del_init(&gh->gh_list);
-@@ -60,6 +61,10 @@ extern int cifs_setattr(struct dentry *, struct iattr *);
+ if (list_empty(&gl->gl_holders)) {
+- spin_unlock(&gl->gl_spin);
+-
+- if (glops->go_unlock)
++ if (glops->go_unlock) {
++ spin_unlock(&gl->gl_spin);
+ glops->go_unlock(gh);
+-
+- spin_lock(&gl->gl_spin);
++ spin_lock(&gl->gl_spin);
++ }
+ gl->gl_stamp = jiffies;
+ }
- extern const struct inode_operations cifs_file_inode_ops;
- extern const struct inode_operations cifs_symlink_inode_ops;
-+extern struct list_head cifs_dfs_automount_list;
-+extern struct inode_operations cifs_dfs_referral_inode_operations;
-+
+@@ -1910,8 +1891,6 @@ static int dump_glock(struct glock_iter *gi, struct gfs2_glock *gl)
+ print_dbg(gi, " req_bh = %s\n", (gl->gl_req_bh) ? "yes" : "no");
+ print_dbg(gi, " lvb_count = %d\n", atomic_read(&gl->gl_lvb_count));
+ print_dbg(gi, " object = %s\n", (gl->gl_object) ? "yes" : "no");
+- print_dbg(gi, " le = %s\n",
+- (list_empty(&gl->gl_le.le_list)) ? "no" : "yes");
+ print_dbg(gi, " reclaim = %s\n",
+ (list_empty(&gl->gl_reclaim)) ? "no" : "yes");
+ if (gl->gl_aspace)
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 4670dcb..c663b7a 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -56,7 +56,7 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl)
+ bd = list_entry(head->next, struct gfs2_bufdata,
+ bd_ail_gl_list);
+ bh = bd->bd_bh;
+- gfs2_remove_from_ail(NULL, bd);
++ gfs2_remove_from_ail(bd);
+ bd->bd_bh = NULL;
+ bh->b_private = NULL;
+ bd->bd_blkno = bh->b_blocknr;
+@@ -86,15 +86,10 @@ static void gfs2_pte_inval(struct gfs2_glock *gl)
+ if (!ip || !S_ISREG(inode->i_mode))
+ return;
+
+- if (!test_bit(GIF_PAGED, &ip->i_flags))
+- return;
+-
+ unmap_shared_mapping_range(inode->i_mapping, 0, 0);
+-
+ if (test_bit(GIF_SW_PAGED, &ip->i_flags))
+ set_bit(GLF_DIRTY, &gl->gl_flags);
+
+- clear_bit(GIF_SW_PAGED, &ip->i_flags);
+ }
+
+ /**
+@@ -143,44 +138,34 @@ static void meta_go_inval(struct gfs2_glock *gl, int flags)
+ static void inode_go_sync(struct gfs2_glock *gl)
+ {
+ struct gfs2_inode *ip = gl->gl_object;
++ struct address_space *metamapping = gl->gl_aspace->i_mapping;
++ int error;
+
++ if (gl->gl_state != LM_ST_UNLOCKED)
++ gfs2_pte_inval(gl);
++ if (gl->gl_state != LM_ST_EXCLUSIVE)
++ return;
- /* Functions related to files and directories */
- extern const struct file_operations cifs_file_ops;
-diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
-index 1fde219..5d32d8d 100644
---- a/fs/cifs/cifsglob.h
-+++ b/fs/cifs/cifsglob.h
-@@ -1,7 +1,7 @@
- /*
- * fs/cifs/cifsglob.h
- *
-- * Copyright (C) International Business Machines Corp., 2002,2007
-+ * Copyright (C) International Business Machines Corp., 2002,2008
- * Author(s): Steve French (sfrench at us.ibm.com)
- * Jeremy Allison (jra at samba.org)
- *
-@@ -70,14 +70,6 @@
- #endif
+ if (ip && !S_ISREG(ip->i_inode.i_mode))
+ ip = NULL;
- /*
-- * This information is kept on every Server we know about.
-- *
-- * Some things to note:
+ if (test_bit(GLF_DIRTY, &gl->gl_flags)) {
+- if (ip && !gfs2_is_jdata(ip))
+- filemap_fdatawrite(ip->i_inode.i_mapping);
+ gfs2_log_flush(gl->gl_sbd, gl);
+- if (ip && gfs2_is_jdata(ip))
+- filemap_fdatawrite(ip->i_inode.i_mapping);
+- gfs2_meta_sync(gl);
++ filemap_fdatawrite(metamapping);
+ if (ip) {
+ struct address_space *mapping = ip->i_inode.i_mapping;
+- int error = filemap_fdatawait(mapping);
++ filemap_fdatawrite(mapping);
++ error = filemap_fdatawait(mapping);
+ mapping_set_error(mapping, error);
+ }
++ error = filemap_fdatawait(metamapping);
++ mapping_set_error(metamapping, error);
+ clear_bit(GLF_DIRTY, &gl->gl_flags);
+ gfs2_ail_empty_gl(gl);
+ }
+ }
+
+ /**
+- * inode_go_xmote_th - promote/demote a glock
+- * @gl: the glock
+- * @state: the requested state
+- * @flags:
- *
- */
--#define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1)
-
--/*
- * CIFS vfs client Status information (based on what we know.)
- */
+-static void inode_go_xmote_th(struct gfs2_glock *gl)
+-{
+- if (gl->gl_state != LM_ST_UNLOCKED)
+- gfs2_pte_inval(gl);
+- if (gl->gl_state == LM_ST_EXCLUSIVE)
+- inode_go_sync(gl);
+-}
+-
+-/**
+ * inode_go_xmote_bh - After promoting/demoting a glock
+ * @gl: the glock
+ *
+@@ -201,22 +186,6 @@ static void inode_go_xmote_bh(struct gfs2_glock *gl)
+ }
-@@ -460,6 +452,37 @@ struct dir_notify_req {
- struct file *pfile;
- };
+ /**
+- * inode_go_drop_th - unlock a glock
+- * @gl: the glock
+- *
+- * Invoked from rq_demote().
+- * Another node needs the lock in EXCLUSIVE mode, or lock (unused for too long)
+- * is being purged from our node's glock cache; we're dropping lock.
+- */
+-
+-static void inode_go_drop_th(struct gfs2_glock *gl)
+-{
+- gfs2_pte_inval(gl);
+- if (gl->gl_state == LM_ST_EXCLUSIVE)
+- inode_go_sync(gl);
+-}
+-
+-/**
+ * inode_go_inval - prepare a inode glock to be released
+ * @gl: the glock
+ * @flags:
+@@ -234,10 +203,8 @@ static void inode_go_inval(struct gfs2_glock *gl, int flags)
+ set_bit(GIF_INVALID, &ip->i_flags);
+ }
-+struct dfs_info3_param {
-+ int flags; /* DFSREF_REFERRAL_SERVER, DFSREF_STORAGE_SERVER*/
-+ int PathConsumed;
-+ int server_type;
-+ int ref_flag;
-+ char *path_name;
-+ char *node_name;
-+};
-+
-+static inline void free_dfs_info_param(struct dfs_info3_param *param)
-+{
-+ if (param) {
-+ kfree(param->path_name);
-+ kfree(param->node_name);
-+ kfree(param);
-+ }
-+}
-+
-+static inline void free_dfs_info_array(struct dfs_info3_param *param,
-+ int number_of_items)
-+{
-+ int i;
-+ if ((number_of_items == 0) || (param == NULL))
-+ return;
-+ for (i = 0; i < number_of_items; i++) {
-+ kfree(param[i].path_name);
-+ kfree(param[i].node_name);
-+ }
-+ kfree(param);
-+}
-+
- #define MID_FREE 0
- #define MID_REQUEST_ALLOCATED 1
- #define MID_REQUEST_SUBMITTED 2
-diff --git a/fs/cifs/cifspdu.h b/fs/cifs/cifspdu.h
-index dbe6b84..47f7950 100644
---- a/fs/cifs/cifspdu.h
-+++ b/fs/cifs/cifspdu.h
-@@ -237,6 +237,9 @@
- | DELETE | READ_CONTROL | WRITE_DAC \
- | WRITE_OWNER | SYNCHRONIZE)
+- if (ip && S_ISREG(ip->i_inode.i_mode)) {
++ if (ip && S_ISREG(ip->i_inode.i_mode))
+ truncate_inode_pages(ip->i_inode.i_mapping, 0);
+- clear_bit(GIF_PAGED, &ip->i_flags);
+- }
+ }
-+#define SET_MINIMUM_RIGHTS (FILE_READ_EA | FILE_READ_ATTRIBUTES \
-+ | READ_CONTROL | SYNCHRONIZE)
-+
+ /**
+@@ -294,23 +261,6 @@ static int inode_go_lock(struct gfs2_holder *gh)
+ }
- /*
- * Invalid readdir handle
-diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
-index 8350eec..2f09f56 100644
---- a/fs/cifs/cifsproto.h
-+++ b/fs/cifs/cifsproto.h
-@@ -1,7 +1,7 @@
- /*
- * fs/cifs/cifsproto.h
+ /**
+- * inode_go_unlock - operation done before an inode lock is unlocked by a
+- * process
+- * @gl: the glock
+- * @flags:
+- *
+- */
+-
+-static void inode_go_unlock(struct gfs2_holder *gh)
+-{
+- struct gfs2_glock *gl = gh->gh_gl;
+- struct gfs2_inode *ip = gl->gl_object;
+-
+- if (ip)
+- gfs2_meta_cache_flush(ip);
+-}
+-
+-/**
+ * rgrp_go_demote_ok - Check to see if it's ok to unlock a RG's glock
+ * @gl: the glock
*
-- * Copyright (c) International Business Machines Corp., 2002,2007
-+ * Copyright (c) International Business Machines Corp., 2002,2008
- * Author(s): Steve French (sfrench at us.ibm.com)
+@@ -350,14 +300,14 @@ static void rgrp_go_unlock(struct gfs2_holder *gh)
+ }
+
+ /**
+- * trans_go_xmote_th - promote/demote the transaction glock
++ * trans_go_sync - promote/demote the transaction glock
+ * @gl: the glock
+ * @state: the requested state
+ * @flags:
*
- * This library is free software; you can redistribute it and/or modify
-@@ -97,11 +97,14 @@ extern int cifs_get_inode_info_unix(struct inode **pinode,
- const unsigned char *search_path,
- struct super_block *sb, int xid);
- extern void acl_to_uid_mode(struct inode *inode, const char *search_path);
--extern int mode_to_acl(struct inode *inode, const char *path);
-+extern int mode_to_acl(struct inode *inode, const char *path, __u64);
+ */
- extern int cifs_mount(struct super_block *, struct cifs_sb_info *, char *,
- const char *);
- extern int cifs_umount(struct super_block *, struct cifs_sb_info *);
-+#ifdef CONFIG_CIFS_DFS_UPCALL
-+extern void dfs_shrink_umount_helper(struct vfsmount *vfsmnt);
-+#endif
- void cifs_proc_init(void);
- void cifs_proc_clean(void);
+-static void trans_go_xmote_th(struct gfs2_glock *gl)
++static void trans_go_sync(struct gfs2_glock *gl)
+ {
+ struct gfs2_sbd *sdp = gl->gl_sbd;
-@@ -153,7 +156,7 @@ extern int get_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
- const char *old_path,
- const struct nls_table *nls_codepage,
- unsigned int *pnum_referrals,
-- unsigned char **preferrals,
-+ struct dfs_info3_param **preferrals,
- int remap);
- extern void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon,
- struct super_block *sb, struct smb_vol *vol);
-@@ -342,6 +345,8 @@ extern int CIFSSMBSetEA(const int xid, struct cifsTconInfo *tcon,
- const struct nls_table *nls_codepage, int remap_special_chars);
- extern int CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon,
- __u16 fid, struct cifs_ntsd **acl_inf, __u32 *buflen);
-+extern int CIFSSMBSetCIFSACL(const int, struct cifsTconInfo *, __u16,
-+ struct cifs_ntsd *, __u32);
- extern int CIFSSMBGetPosixACL(const int xid, struct cifsTconInfo *tcon,
- const unsigned char *searchName,
- char *acl_inf, const int buflen, const int acl_type,
-diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
-index 9e8a6be..9409524 100644
---- a/fs/cifs/cifssmb.c
-+++ b/fs/cifs/cifssmb.c
-@@ -3156,6 +3156,71 @@ qsec_out:
- /* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
- return rc;
+@@ -384,7 +334,6 @@ static void trans_go_xmote_bh(struct gfs2_glock *gl)
+
+ if (gl->gl_state != LM_ST_UNLOCKED &&
+ test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) {
+- gfs2_meta_cache_flush(GFS2_I(sdp->sd_jdesc->jd_inode));
+ j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+
+ error = gfs2_find_jhead(sdp->sd_jdesc, &head);
+@@ -402,24 +351,6 @@ static void trans_go_xmote_bh(struct gfs2_glock *gl)
}
-+
-+int
-+CIFSSMBSetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid,
-+ struct cifs_ntsd *pntsd, __u32 acllen)
-+{
-+ __u16 byte_count, param_count, data_count, param_offset, data_offset;
-+ int rc = 0;
-+ int bytes_returned = 0;
-+ SET_SEC_DESC_REQ *pSMB = NULL;
-+ NTRANSACT_RSP *pSMBr = NULL;
-+
-+setCifsAclRetry:
-+ rc = smb_init(SMB_COM_NT_TRANSACT, 19, tcon, (void **) &pSMB,
-+ (void **) &pSMBr);
-+ if (rc)
-+ return (rc);
-+
-+ pSMB->MaxSetupCount = 0;
-+ pSMB->Reserved = 0;
-+
-+ param_count = 8;
-+ param_offset = offsetof(struct smb_com_transaction_ssec_req, Fid) - 4;
-+ data_count = acllen;
-+ data_offset = param_offset + param_count;
-+ byte_count = 3 /* pad */ + param_count;
-+
-+ pSMB->DataCount = cpu_to_le32(data_count);
-+ pSMB->TotalDataCount = pSMB->DataCount;
-+ pSMB->MaxParameterCount = cpu_to_le32(4);
-+ pSMB->MaxDataCount = cpu_to_le32(16384);
-+ pSMB->ParameterCount = cpu_to_le32(param_count);
-+ pSMB->ParameterOffset = cpu_to_le32(param_offset);
-+ pSMB->TotalParameterCount = pSMB->ParameterCount;
-+ pSMB->DataOffset = cpu_to_le32(data_offset);
-+ pSMB->SetupCount = 0;
-+ pSMB->SubCommand = cpu_to_le16(NT_TRANSACT_SET_SECURITY_DESC);
-+ pSMB->ByteCount = cpu_to_le16(byte_count+data_count);
-+
-+ pSMB->Fid = fid; /* file handle always le */
-+ pSMB->Reserved2 = 0;
-+ pSMB->AclFlags = cpu_to_le32(CIFS_ACL_DACL);
-+
-+ if (pntsd && acllen) {
-+ memcpy((char *) &pSMBr->hdr.Protocol + data_offset,
-+ (char *) pntsd,
-+ acllen);
-+ pSMB->hdr.smb_buf_length += (byte_count + data_count);
-+
-+ } else
-+ pSMB->hdr.smb_buf_length += byte_count;
-+
-+ rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
-+ (struct smb_hdr *) pSMBr, &bytes_returned, 0);
-+
-+ cFYI(1, ("SetCIFSACL bytes_returned: %d, rc: %d", bytes_returned, rc));
-+ if (rc)
-+ cFYI(1, ("Set CIFS ACL returned %d", rc));
-+ cifs_buf_release(pSMB);
-+
-+ if (rc == -EAGAIN)
-+ goto setCifsAclRetry;
-+
-+ return (rc);
-+}
-+
- #endif /* CONFIG_CIFS_EXPERIMENTAL */
- /* Legacy Query Path Information call for lookup to old servers such
-@@ -5499,7 +5564,7 @@ SetEARetry:
- else
- name_len = strnlen(ea_name, 255);
+ /**
+- * trans_go_drop_th - unlock the transaction glock
+- * @gl: the glock
+- *
+- * We want to sync the device even with localcaching. Remember
+- * that localcaching journal replay only marks buffers dirty.
+- */
+-
+-static void trans_go_drop_th(struct gfs2_glock *gl)
+-{
+- struct gfs2_sbd *sdp = gl->gl_sbd;
+-
+- if (test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) {
+- gfs2_meta_syncfs(sdp);
+- gfs2_log_shutdown(sdp);
+- }
+-}
+-
+-/**
+ * quota_go_demote_ok - Check to see if it's ok to unlock a quota glock
+ * @gl: the glock
+ *
+@@ -433,25 +364,21 @@ static int quota_go_demote_ok(struct gfs2_glock *gl)
+
+ const struct gfs2_glock_operations gfs2_meta_glops = {
+ .go_xmote_th = meta_go_sync,
+- .go_drop_th = meta_go_sync,
+ .go_type = LM_TYPE_META,
+ };
-- count = sizeof(*parm_data) + ea_value_len + name_len + 1;
-+ count = sizeof(*parm_data) + ea_value_len + name_len;
- pSMB->MaxParameterCount = cpu_to_le16(2);
- pSMB->MaxDataCount = cpu_to_le16(1000); /* BB find max SMB size from sess */
- pSMB->MaxSetupCount = 0;
-diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
-index fd9147c..65d0ba7 100644
---- a/fs/cifs/connect.c
-+++ b/fs/cifs/connect.c
-@@ -1,7 +1,7 @@
+ const struct gfs2_glock_operations gfs2_inode_glops = {
+- .go_xmote_th = inode_go_xmote_th,
++ .go_xmote_th = inode_go_sync,
+ .go_xmote_bh = inode_go_xmote_bh,
+- .go_drop_th = inode_go_drop_th,
+ .go_inval = inode_go_inval,
+ .go_demote_ok = inode_go_demote_ok,
+ .go_lock = inode_go_lock,
+- .go_unlock = inode_go_unlock,
+ .go_type = LM_TYPE_INODE,
+ .go_min_hold_time = HZ / 10,
+ };
+
+ const struct gfs2_glock_operations gfs2_rgrp_glops = {
+ .go_xmote_th = meta_go_sync,
+- .go_drop_th = meta_go_sync,
+ .go_inval = meta_go_inval,
+ .go_demote_ok = rgrp_go_demote_ok,
+ .go_lock = rgrp_go_lock,
+@@ -461,9 +388,8 @@ const struct gfs2_glock_operations gfs2_rgrp_glops = {
+ };
+
+ const struct gfs2_glock_operations gfs2_trans_glops = {
+- .go_xmote_th = trans_go_xmote_th,
++ .go_xmote_th = trans_go_sync,
+ .go_xmote_bh = trans_go_xmote_bh,
+- .go_drop_th = trans_go_drop_th,
+ .go_type = LM_TYPE_NONDISK,
+ };
+
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index eaddfb5..513aaf0 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -1,6 +1,6 @@
/*
- * fs/cifs/connect.c
- *
-- * Copyright (C) International Business Machines Corp., 2002,2007
-+ * Copyright (C) International Business Machines Corp., 2002,2008
- * Author(s): Steve French (sfrench at us.ibm.com)
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
- * This library is free software; you can redistribute it and/or modify
-@@ -1410,7 +1410,7 @@ connect_to_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
- const char *old_path, const struct nls_table *nls_codepage,
- int remap)
- {
-- unsigned char *referrals = NULL;
-+ struct dfs_info3_param *referrals = NULL;
- unsigned int num_referrals;
- int rc = 0;
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -131,7 +131,6 @@ struct gfs2_bufdata {
+ struct gfs2_glock_operations {
+ void (*go_xmote_th) (struct gfs2_glock *gl);
+ void (*go_xmote_bh) (struct gfs2_glock *gl);
+- void (*go_drop_th) (struct gfs2_glock *gl);
+ void (*go_inval) (struct gfs2_glock *gl, int flags);
+ int (*go_demote_ok) (struct gfs2_glock *gl);
+ int (*go_lock) (struct gfs2_holder *gh);
+@@ -141,10 +140,6 @@ struct gfs2_glock_operations {
+ };
-@@ -1429,12 +1429,14 @@ connect_to_dfs_path(int xid, struct cifsSesInfo *pSesInfo,
- int
- get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path,
- const struct nls_table *nls_codepage, unsigned int *pnum_referrals,
-- unsigned char **preferrals, int remap)
-+ struct dfs_info3_param **preferrals, int remap)
- {
- char *temp_unc;
- int rc = 0;
-+ unsigned char *targetUNCs;
+ enum {
+- /* Actions */
+- HIF_MUTEX = 0,
+- HIF_PROMOTE = 1,
+-
+ /* States */
+ HIF_HOLDER = 6,
+ HIF_FIRST = 7,
+@@ -171,6 +166,8 @@ enum {
+ GLF_DEMOTE = 3,
+ GLF_PENDING_DEMOTE = 4,
+ GLF_DIRTY = 5,
++ GLF_DEMOTE_IN_PROGRESS = 6,
++ GLF_LFLUSH = 7,
+ };
- *pnum_referrals = 0;
-+ *preferrals = NULL;
+ struct gfs2_glock {
+@@ -190,6 +187,7 @@ struct gfs2_glock {
+ struct list_head gl_holders;
+ struct list_head gl_waiters1; /* HIF_MUTEX */
+ struct list_head gl_waiters3; /* HIF_PROMOTE */
++ int gl_waiters2; /* GIF_DEMOTE */
- if (pSesInfo->ipc_tid == 0) {
- temp_unc = kmalloc(2 /* for slashes */ +
-@@ -1454,8 +1456,10 @@ get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path,
- kfree(temp_unc);
- }
- if (rc == 0)
-- rc = CIFSGetDFSRefer(xid, pSesInfo, old_path, preferrals,
-+ rc = CIFSGetDFSRefer(xid, pSesInfo, old_path, &targetUNCs,
- pnum_referrals, nls_codepage, remap);
-+ /* BB map targetUNCs to dfs_info3 structures, here or
-+ in CIFSGetDFSRefer BB */
+ const struct gfs2_glock_operations *gl_ops;
- return rc;
- }
-@@ -1964,7 +1968,15 @@ cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,
+@@ -210,7 +208,6 @@ struct gfs2_glock {
+ struct gfs2_sbd *gl_sbd;
- if (existingCifsSes) {
- pSesInfo = existingCifsSes;
-- cFYI(1, ("Existing smb sess found"));
-+ cFYI(1, ("Existing smb sess found (status=%d)",
-+ pSesInfo->status));
-+ down(&pSesInfo->sesSem);
-+ if (pSesInfo->status == CifsNeedReconnect) {
-+ cFYI(1, ("Session needs reconnect"));
-+ rc = cifs_setup_session(xid, pSesInfo,
-+ cifs_sb->local_nls);
-+ }
-+ up(&pSesInfo->sesSem);
- } else if (!rc) {
- cFYI(1, ("Existing smb sess not found"));
- pSesInfo = sesInfoAlloc();
-@@ -3514,7 +3526,7 @@ cifs_umount(struct super_block *sb, struct cifs_sb_info *cifs_sb)
- sesInfoFree(ses);
+ struct inode *gl_aspace;
+- struct gfs2_log_element gl_le;
+ struct list_head gl_ail_list;
+ atomic_t gl_ail_count;
+ struct delayed_work gl_work;
+@@ -239,7 +236,6 @@ struct gfs2_alloc {
+ enum {
+ GIF_INVALID = 0,
+ GIF_QD_LOCKED = 1,
+- GIF_PAGED = 2,
+ GIF_SW_PAGED = 3,
+ };
- FreeXid(xid);
-- return rc; /* BB check if we should always return zero here */
-+ return rc;
+@@ -268,14 +264,10 @@ struct gfs2_inode {
+ struct gfs2_glock *i_gl; /* Move into i_gh? */
+ struct gfs2_holder i_iopen_gh;
+ struct gfs2_holder i_gh; /* for prepare/commit_write only */
+- struct gfs2_alloc i_alloc;
++ struct gfs2_alloc *i_alloc;
+ u64 i_last_rg_alloc;
+
+- spinlock_t i_spin;
+ struct rw_semaphore i_rw_mutex;
+- unsigned long i_last_pfault;
+-
+- struct buffer_head *i_cache[GFS2_MAX_META_HEIGHT];
+ };
+
+ /*
+@@ -287,19 +279,12 @@ static inline struct gfs2_inode *GFS2_I(struct inode *inode)
+ return container_of(inode, struct gfs2_inode, i_inode);
}
- int cifs_setup_session(unsigned int xid, struct cifsSesInfo *pSesInfo,
-diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
-index 37dc97a..699ec11 100644
---- a/fs/cifs/dir.c
-+++ b/fs/cifs/dir.c
-@@ -517,12 +517,10 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
- d_add(direntry, NULL);
- /* if it was once a directory (but how can we tell?) we could do
- shrink_dcache_parent(direntry); */
-- } else {
-- cERROR(1, ("Error 0x%x on cifs_get_inode_info in lookup of %s",
-- rc, full_path));
-- /* BB special case check for Access Denied - watch security
-- exposure of returning dir info implicitly via different rc
-- if file exists or not but no access BB */
-+ } else if (rc != -EACCES) {
-+ cERROR(1, ("Unexpected lookup error %d", rc));
-+ /* We special case check for Access Denied - since that
-+ is a common return code */
- }
+-/* To be removed? */
+-static inline struct gfs2_sbd *GFS2_SB(struct inode *inode)
++static inline struct gfs2_sbd *GFS2_SB(const struct inode *inode)
+ {
+ return inode->i_sb->s_fs_info;
+ }
- kfree(full_path);
-diff --git a/fs/cifs/dns_resolve.c b/fs/cifs/dns_resolve.c
-new file mode 100644
-index 0000000..ef7f438
---- /dev/null
-+++ b/fs/cifs/dns_resolve.c
-@@ -0,0 +1,124 @@
-+/*
-+ * fs/cifs/dns_resolve.c
-+ *
-+ * Copyright (c) 2007 Igor Mammedov
-+ * Author(s): Igor Mammedov (niallain at gmail.com)
-+ * Steve French (sfrench at us.ibm.com)
-+ *
-+ * Contains the CIFS DFS upcall routines used for hostname to
-+ * IP address translation.
-+ *
-+ * This library is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU Lesser General Public License as published
-+ * by the Free Software Foundation; either version 2.1 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This library is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
-+ * the GNU Lesser General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU Lesser General Public License
-+ * along with this library; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-+ */
-+
-+#include <keys/user-type.h>
-+#include "dns_resolve.h"
-+#include "cifsglob.h"
-+#include "cifsproto.h"
-+#include "cifs_debug.h"
-+
-+static int dns_resolver_instantiate(struct key *key, const void *data,
-+ size_t datalen)
-+{
-+ int rc = 0;
-+ char *ip;
-+
-+ ip = kmalloc(datalen+1, GFP_KERNEL);
-+ if (!ip)
-+ return -ENOMEM;
-+
-+ memcpy(ip, data, datalen);
-+ ip[datalen] = '\0';
-+
-+ rcu_assign_pointer(key->payload.data, ip);
-+
-+ return rc;
-+}
+-enum {
+- GFF_DID_DIRECT_ALLOC = 0,
+- GFF_EXLOCK = 1,
+-};
+-
+ struct gfs2_file {
+- unsigned long f_flags; /* GFF_... */
+ struct mutex f_fl_mutex;
+ struct gfs2_holder f_fl_gh;
+ };
+@@ -373,8 +358,17 @@ struct gfs2_ail {
+ u64 ai_sync_gen;
+ };
+
++struct gfs2_journal_extent {
++ struct list_head extent_list;
+
-+struct key_type key_type_dns_resolver = {
-+ .name = "dns_resolver",
-+ .def_datalen = sizeof(struct in_addr),
-+ .describe = user_describe,
-+ .instantiate = dns_resolver_instantiate,
-+ .match = user_match,
++ unsigned int lblock; /* First logical block */
++ u64 dblock; /* First disk block */
++ u64 blocks;
+};
+
-+
-+/* Resolves server name to ip address.
-+ * input:
-+ * unc - server UNC
-+ * output:
-+ * *ip_addr - pointer to server ip, caller responsible for freeing it.
-+ * return 0 on success
-+ */
-+int
-+dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
-+{
-+ int rc = -EAGAIN;
-+ struct key *rkey;
-+ char *name;
-+ int len;
-+
-+ if (!ip_addr || !unc)
-+ return -EINVAL;
-+
-+ /* search for server name delimiter */
-+ len = strlen(unc);
-+ if (len < 3) {
-+ cFYI(1, ("%s: unc is too short: %s", __FUNCTION__, unc));
-+ return -EINVAL;
-+ }
-+ len -= 2;
-+ name = memchr(unc+2, '\\', len);
-+ if (!name) {
-+ cFYI(1, ("%s: probably server name is whole unc: %s",
-+ __FUNCTION__, unc));
-+ } else {
-+ len = (name - unc) - 2/* leading // */;
-+ }
-+
-+ name = kmalloc(len+1, GFP_KERNEL);
-+ if (!name) {
-+ rc = -ENOMEM;
-+ return rc;
-+ }
-+ memcpy(name, unc+2, len);
-+ name[len] = 0;
-+
-+ rkey = request_key(&key_type_dns_resolver, name, "");
-+ if (!IS_ERR(rkey)) {
-+ len = strlen(rkey->payload.data);
-+ *ip_addr = kmalloc(len+1, GFP_KERNEL);
-+ if (*ip_addr) {
-+ memcpy(*ip_addr, rkey->payload.data, len);
-+ (*ip_addr)[len] = '\0';
-+ cFYI(1, ("%s: resolved: %s to %s", __FUNCTION__,
-+ rkey->description,
-+ *ip_addr
-+ ));
-+ rc = 0;
-+ } else {
-+ rc = -ENOMEM;
-+ }
-+ key_put(rkey);
-+ } else {
-+ cERROR(1, ("%s: unable to resolve: %s", __FUNCTION__, name));
-+ }
-+
-+ kfree(name);
-+ return rc;
-+}
-+
-+
-diff --git a/fs/cifs/dns_resolve.h b/fs/cifs/dns_resolve.h
-new file mode 100644
-index 0000000..073fdc3
---- /dev/null
-+++ b/fs/cifs/dns_resolve.h
-@@ -0,0 +1,32 @@
-+/*
-+ * fs/cifs/dns_resolve.h -- DNS Resolver upcall management for CIFS DFS
-+ * Handles host name to IP address resolution
-+ *
-+ * Copyright (c) International Business Machines Corp., 2008
-+ * Author(s): Steve French (sfrench at us.ibm.com)
-+ *
-+ * This library is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU Lesser General Public License as published
-+ * by the Free Software Foundation; either version 2.1 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This library is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
-+ * the GNU Lesser General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU Lesser General Public License
-+ * along with this library; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-+ */
-+
-+#ifndef _DNS_RESOLVE_H
-+#define _DNS_RESOLVE_H
-+
-+#ifdef __KERNEL__
-+#include <linux/key-type.h>
-+extern struct key_type key_type_dns_resolver;
-+extern int dns_resolve_server_name_to_ip(const char *unc, char **ip_addr);
-+#endif /* KERNEL */
-+
-+#endif /* _DNS_RESOLVE_H */
-diff --git a/fs/cifs/file.c b/fs/cifs/file.c
-index dd26e27..5f7c374 100644
---- a/fs/cifs/file.c
-+++ b/fs/cifs/file.c
-@@ -1179,12 +1179,10 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
- atomic_dec(&open_file->wrtPending);
- /* Does mm or vfs already set times? */
- inode->i_atime = inode->i_mtime = current_fs_time(inode->i_sb);
-- if ((bytes_written > 0) && (offset)) {
-+ if ((bytes_written > 0) && (offset))
- rc = 0;
-- } else if (bytes_written < 0) {
-- if (rc != -EBADF)
-- rc = bytes_written;
-- }
-+ else if (bytes_written < 0)
-+ rc = bytes_written;
- } else {
- cFYI(1, ("No writeable filehandles for inode"));
- rc = -EIO;
-diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
-index e915eb1..d9567ba 100644
---- a/fs/cifs/inode.c
-+++ b/fs/cifs/inode.c
-@@ -54,9 +54,9 @@ int cifs_get_inode_info_unix(struct inode **pinode,
- MAX_TREE_SIZE + 1) +
- strnlen(search_path, MAX_PATHCONF) + 1,
- GFP_KERNEL);
-- if (tmp_path == NULL) {
-+ if (tmp_path == NULL)
- return -ENOMEM;
-- }
-+
- /* have to skip first of the double backslash of
- UNC name */
- strncpy(tmp_path, pTcon->treeName, MAX_TREE_SIZE);
-@@ -511,7 +511,8 @@ int cifs_get_inode_info(struct inode **pinode,
- }
+ struct gfs2_jdesc {
+ struct list_head jd_list;
++ struct list_head extent_list;
- spin_lock(&inode->i_lock);
-- if (is_size_safe_to_change(cifsInfo, le64_to_cpu(pfindData->EndOfFile))) {
-+ if (is_size_safe_to_change(cifsInfo,
-+ le64_to_cpu(pfindData->EndOfFile))) {
- /* can not safely shrink the file size here if the
- client is writing to it due to potential races */
- i_size_write(inode, le64_to_cpu(pfindData->EndOfFile));
-@@ -931,7 +932,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode)
- (CIFS_UNIX_POSIX_PATH_OPS_CAP &
- le64_to_cpu(pTcon->fsUnixInfo.Capability))) {
- u32 oplock = 0;
-- FILE_UNIX_BASIC_INFO * pInfo =
-+ FILE_UNIX_BASIC_INFO *pInfo =
- kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL);
- if (pInfo == NULL) {
- rc = -ENOMEM;
-@@ -1607,7 +1608,14 @@ int cifs_setattr(struct dentry *direntry, struct iattr *attrs)
- CIFS_MOUNT_MAP_SPECIAL_CHR);
- else if (attrs->ia_valid & ATTR_MODE) {
- rc = 0;
-- if ((mode & S_IWUGO) == 0) /* not writeable */ {
-+#ifdef CONFIG_CIFS_EXPERIMENTAL
-+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL)
-+ rc = mode_to_acl(direntry->d_inode, full_path, mode);
-+ else if ((mode & S_IWUGO) == 0) {
-+#else
-+ if ((mode & S_IWUGO) == 0) {
-+#endif
-+ /* not writeable */
- if ((cifsInode->cifsAttrs & ATTR_READONLY) == 0) {
- set_dosattr = TRUE;
- time_buf.Attributes =
-@@ -1626,10 +1634,10 @@ int cifs_setattr(struct dentry *direntry, struct iattr *attrs)
- if (time_buf.Attributes == 0)
- time_buf.Attributes |= cpu_to_le32(ATTR_NORMAL);
- }
-- /* BB to be implemented -
-- via Windows security descriptors or streams */
-- /* CIFSSMBWinSetPerms(xid, pTcon, full_path, mode, uid, gid,
-- cifs_sb->local_nls); */
-+#ifdef CONFIG_CIFS_EXPERIMENTAL
-+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL)
-+ mode_to_acl(direntry->d_inode, full_path, mode);
-+#endif
- }
+ struct inode *jd_inode;
+ unsigned int jd_jid;
+@@ -421,13 +415,9 @@ struct gfs2_args {
+ struct gfs2_tune {
+ spinlock_t gt_spin;
- if (attrs->ia_valid & ATTR_ATIME) {
-diff --git a/fs/cifs/link.c b/fs/cifs/link.c
-index 11f2657..1d6fb01 100644
---- a/fs/cifs/link.c
-+++ b/fs/cifs/link.c
-@@ -1,7 +1,7 @@
- /*
- * fs/cifs/link.c
- *
-- * Copyright (C) International Business Machines Corp., 2002,2003
-+ * Copyright (C) International Business Machines Corp., 2002,2008
- * Author(s): Steve French (sfrench at us.ibm.com)
- *
- * This library is free software; you can redistribute it and/or modify
-@@ -236,8 +236,6 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
- char *full_path = NULL;
- char *tmp_path = NULL;
- char *tmpbuffer;
-- unsigned char *referrals = NULL;
-- unsigned int num_referrals = 0;
- int len;
- __u16 fid;
+- unsigned int gt_ilimit;
+- unsigned int gt_ilimit_tries;
+- unsigned int gt_ilimit_min;
+ unsigned int gt_demote_secs; /* Cache retention for unheld glock */
+ unsigned int gt_incore_log_blocks;
+ unsigned int gt_log_flush_secs;
+- unsigned int gt_jindex_refresh_secs; /* Check for new journal index */
-@@ -297,8 +295,11 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
- cFYI(1, ("Error closing junction point "
- "(open for ioctl)"));
- }
-+ /* BB unwind this long, nested function, or remove BB */
- if (rc == -EIO) {
- /* Query if DFS Junction */
-+ unsigned int num_referrals = 0;
-+ struct dfs_info3_param *refs = NULL;
- tmp_path =
- kmalloc(MAX_TREE_SIZE + MAX_PATHCONF + 1,
- GFP_KERNEL);
-@@ -310,7 +311,7 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
- rc = get_dfs_path(xid, pTcon->ses,
- tmp_path,
- cifs_sb->local_nls,
-- &num_referrals, &referrals,
-+ &num_referrals, &refs,
- cifs_sb->mnt_cifs_flags &
- CIFS_MOUNT_MAP_SPECIAL_CHR);
- cFYI(1, ("Get DFS for %s rc = %d ",
-@@ -320,14 +321,13 @@ cifs_readlink(struct dentry *direntry, char __user *pBuffer, int buflen)
- else {
- cFYI(1, ("num referral: %d",
- num_referrals));
-- if (referrals) {
-- cFYI(1,("referral string: %s", referrals));
-+ if (refs && refs->path_name) {
- strncpy(tmpbuffer,
-- referrals,
-+ refs->path_name,
- len-1);
- }
- }
-- kfree(referrals);
-+ kfree(refs);
- kfree(tmp_path);
- }
- /* BB add code like else decode referrals
-diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
-index d0cb469..d2153ab 100644
---- a/fs/cifs/sess.c
-+++ b/fs/cifs/sess.c
-@@ -528,9 +528,11 @@ CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time,
- rc = -EOVERFLOW;
- goto ssetup_exit;
- }
-- ses->server->mac_signing_key.len = msg->sesskey_len;
-- memcpy(ses->server->mac_signing_key.data.krb5, msg->data,
-- msg->sesskey_len);
-+ if (first_time) {
-+ ses->server->mac_signing_key.len = msg->sesskey_len;
-+ memcpy(ses->server->mac_signing_key.data.krb5,
-+ msg->data, msg->sesskey_len);
-+ }
- pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC;
- capabilities |= CAP_EXTENDED_SECURITY;
- pSMB->req.Capabilities = cpu_to_le32(capabilities);
-@@ -540,7 +542,7 @@ CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time,
+ unsigned int gt_recoverd_secs;
+ unsigned int gt_logd_secs;
+@@ -443,10 +433,8 @@ struct gfs2_tune {
+ unsigned int gt_new_files_jdata;
+ unsigned int gt_new_files_directio;
+ unsigned int gt_max_readahead; /* Max bytes to read-ahead from disk */
+- unsigned int gt_lockdump_size;
+ unsigned int gt_stall_secs; /* Detects trouble! */
+ unsigned int gt_complain_secs;
+- unsigned int gt_reclaim_limit; /* Max num of glocks in reclaim list */
+ unsigned int gt_statfs_quantum;
+ unsigned int gt_statfs_slow;
+ };
+@@ -539,7 +527,6 @@ struct gfs2_sbd {
+ /* StatFS stuff */
- if (ses->capabilities & CAP_UNICODE) {
- /* unicode strings must be word aligned */
-- if (iov[0].iov_len % 2) {
-+ if ((iov[0].iov_len + iov[1].iov_len) % 2) {
- *bcc_ptr = 0;
- bcc_ptr++;
- }
-diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c
-index dcc6aea..e3eb355 100644
---- a/fs/coda/psdev.c
-+++ b/fs/coda/psdev.c
-@@ -362,8 +362,8 @@ static int init_coda_psdev(void)
- goto out_chrdev;
- }
- for (i = 0; i < MAX_CODADEVS; i++)
-- class_device_create(coda_psdev_class, NULL,
-- MKDEV(CODA_PSDEV_MAJOR,i), NULL, "cfs%d", i);
-+ device_create(coda_psdev_class, NULL,
-+ MKDEV(CODA_PSDEV_MAJOR,i), "cfs%d", i);
- coda_sysctl_init();
- goto out;
+ spinlock_t sd_statfs_spin;
+- struct mutex sd_statfs_mutex;
+ struct gfs2_statfs_change_host sd_statfs_master;
+ struct gfs2_statfs_change_host sd_statfs_local;
+ unsigned long sd_statfs_sync_time;
+@@ -602,20 +589,18 @@ struct gfs2_sbd {
+ unsigned int sd_log_commited_databuf;
+ unsigned int sd_log_commited_revoke;
-@@ -405,7 +405,7 @@ static int __init init_coda(void)
- return 0;
- out:
- for (i = 0; i < MAX_CODADEVS; i++)
-- class_device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
-+ device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
- class_destroy(coda_psdev_class);
- unregister_chrdev(CODA_PSDEV_MAJOR, "coda");
- coda_sysctl_clean();
-@@ -424,7 +424,7 @@ static void __exit exit_coda(void)
- printk("coda: failed to unregister filesystem\n");
- }
- for (i = 0; i < MAX_CODADEVS; i++)
-- class_device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
-+ device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i));
- class_destroy(coda_psdev_class);
- unregister_chrdev(CODA_PSDEV_MAJOR, "coda");
- coda_sysctl_clean();
-diff --git a/fs/compat.c b/fs/compat.c
-index 15078ce..5216c3f 100644
---- a/fs/compat.c
-+++ b/fs/compat.c
-@@ -1104,10 +1104,6 @@ static ssize_t compat_do_readv_writev(int type, struct file *file,
- if (ret < 0)
- goto out;
+- unsigned int sd_log_num_gl;
+ unsigned int sd_log_num_buf;
+ unsigned int sd_log_num_revoke;
+ unsigned int sd_log_num_rg;
+ unsigned int sd_log_num_databuf;
-- ret = security_file_permission(file, type == READ ? MAY_READ:MAY_WRITE);
-- if (ret)
-- goto out;
--
- fnv = NULL;
- if (type == READ) {
- fn = file->f_op->read;
-diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
-index da8cb3b..ffdc022 100644
---- a/fs/compat_ioctl.c
-+++ b/fs/compat_ioctl.c
-@@ -1376,7 +1376,7 @@ static int do_atm_ioctl(unsigned int fd, unsigned int cmd32, unsigned long arg)
- return -EINVAL;
- }
+- struct list_head sd_log_le_gl;
+ struct list_head sd_log_le_buf;
+ struct list_head sd_log_le_revoke;
+ struct list_head sd_log_le_rg;
+ struct list_head sd_log_le_databuf;
+ struct list_head sd_log_le_ordered;
--static __attribute_used__ int
-+static __used int
- ret_einval(unsigned int fd, unsigned int cmd, unsigned long arg)
- {
- return -EINVAL;
-diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
-index 50ed691..a48dc7d 100644
---- a/fs/configfs/dir.c
-+++ b/fs/configfs/dir.c
-@@ -546,7 +546,7 @@ static int populate_groups(struct config_group *group)
- * That said, taking our i_mutex is closer to mkdir
- * emulation, and shouldn't hurt.
- */
-- mutex_lock(&dentry->d_inode->i_mutex);
-+ mutex_lock_nested(&dentry->d_inode->i_mutex, I_MUTEX_CHILD);
+- unsigned int sd_log_blks_free;
++ atomic_t sd_log_blks_free;
+ struct mutex sd_log_reserve_mutex;
- for (i = 0; group->default_groups[i]; i++) {
- new_group = group->default_groups[i];
-@@ -1405,7 +1405,8 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
- sd = configfs_sb->s_root->d_fsdata;
- link_group(to_config_group(sd->s_element), group);
+ u64 sd_log_sequence;
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 5f6dc32..728d316 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -31,7 +31,6 @@
+ #include "log.h"
+ #include "meta_io.h"
+ #include "ops_address.h"
+-#include "ops_file.h"
+ #include "ops_inode.h"
+ #include "quota.h"
+ #include "rgrp.h"
+@@ -132,15 +131,21 @@ static struct inode *gfs2_iget_skip(struct super_block *sb,
-- mutex_lock(&configfs_sb->s_root->d_inode->i_mutex);
-+ mutex_lock_nested(&configfs_sb->s_root->d_inode->i_mutex,
-+ I_MUTEX_PARENT);
+ void gfs2_set_iop(struct inode *inode)
+ {
++ struct gfs2_sbd *sdp = GFS2_SB(inode);
+ umode_t mode = inode->i_mode;
- name.name = group->cg_item.ci_name;
- name.len = strlen(name.name);
-diff --git a/fs/configfs/file.c b/fs/configfs/file.c
-index a3658f9..397cb50 100644
---- a/fs/configfs/file.c
-+++ b/fs/configfs/file.c
-@@ -320,7 +320,7 @@ int configfs_add_file(struct dentry * dir, const struct configfs_attribute * att
- umode_t mode = (attr->ca_mode & S_IALLUGO) | S_IFREG;
- int error = 0;
+ if (S_ISREG(mode)) {
+ inode->i_op = &gfs2_file_iops;
+- inode->i_fop = &gfs2_file_fops;
+- inode->i_mapping->a_ops = &gfs2_file_aops;
++ if (sdp->sd_args.ar_localflocks)
++ inode->i_fop = &gfs2_file_fops_nolock;
++ else
++ inode->i_fop = &gfs2_file_fops;
+ } else if (S_ISDIR(mode)) {
+ inode->i_op = &gfs2_dir_iops;
+- inode->i_fop = &gfs2_dir_fops;
++ if (sdp->sd_args.ar_localflocks)
++ inode->i_fop = &gfs2_dir_fops_nolock;
++ else
++ inode->i_fop = &gfs2_dir_fops;
+ } else if (S_ISLNK(mode)) {
+ inode->i_op = &gfs2_symlink_iops;
+ } else {
+@@ -291,12 +296,10 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ di->di_entries = be32_to_cpu(str->di_entries);
-- mutex_lock(&dir->d_inode->i_mutex);
-+ mutex_lock_nested(&dir->d_inode->i_mutex, I_MUTEX_NORMAL);
- error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type);
- mutex_unlock(&dir->d_inode->i_mutex);
+ di->di_eattr = be64_to_cpu(str->di_eattr);
+- return 0;
+-}
++ if (S_ISREG(ip->i_inode.i_mode))
++ gfs2_set_aops(&ip->i_inode);
-diff --git a/fs/configfs/mount.c b/fs/configfs/mount.c
-index 3bf0278..de3b31d 100644
---- a/fs/configfs/mount.c
-+++ b/fs/configfs/mount.c
-@@ -128,7 +128,7 @@ void configfs_release_fs(void)
+-static void gfs2_inode_bh(struct gfs2_inode *ip, struct buffer_head *bh)
+-{
+- ip->i_cache[0] = bh;
++ return 0;
}
+ /**
+@@ -366,7 +369,8 @@ int gfs2_dinode_dealloc(struct gfs2_inode *ip)
+ if (error)
+ goto out_rg_gunlock;
--static decl_subsys(config, NULL, NULL);
-+static struct kobject *config_kobj;
+- gfs2_trans_add_gl(ip->i_gl);
++ set_bit(GLF_DIRTY, &ip->i_gl->gl_flags);
++ set_bit(GLF_LFLUSH, &ip->i_gl->gl_flags);
- static int __init configfs_init(void)
- {
-@@ -140,9 +140,8 @@ static int __init configfs_init(void)
- if (!configfs_dir_cachep)
- goto out;
+ gfs2_free_di(rgd, ip);
-- kobj_set_kset_s(&config_subsys, kernel_subsys);
-- err = subsystem_register(&config_subsys);
-- if (err) {
-+ config_kobj = kobject_create_and_add("config", kernel_kobj);
-+ if (!config_kobj) {
- kmem_cache_destroy(configfs_dir_cachep);
- configfs_dir_cachep = NULL;
- goto out;
-@@ -151,7 +150,7 @@ static int __init configfs_init(void)
- err = register_filesystem(&configfs_fs_type);
- if (err) {
- printk(KERN_ERR "configfs: Unable to register filesystem!\n");
-- subsystem_unregister(&config_subsys);
-+ kobject_put(config_kobj);
- kmem_cache_destroy(configfs_dir_cachep);
- configfs_dir_cachep = NULL;
+@@ -707,9 +711,10 @@ static int alloc_dinode(struct gfs2_inode *dip, u64 *no_addr, u64 *generation)
+ struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
+ int error;
+
+- gfs2_alloc_get(dip);
++ if (gfs2_alloc_get(dip) == NULL)
++ return -ENOMEM;
+
+- dip->i_alloc.al_requested = RES_DINODE;
++ dip->i_alloc->al_requested = RES_DINODE;
+ error = gfs2_inplace_reserve(dip);
+ if (error)
goto out;
-@@ -160,7 +159,7 @@ static int __init configfs_init(void)
- err = configfs_inode_init();
- if (err) {
- unregister_filesystem(&configfs_fs_type);
-- subsystem_unregister(&config_subsys);
-+ kobject_put(config_kobj);
- kmem_cache_destroy(configfs_dir_cachep);
- configfs_dir_cachep = NULL;
- }
-@@ -171,7 +170,7 @@ out:
- static void __exit configfs_exit(void)
- {
- unregister_filesystem(&configfs_fs_type);
-- subsystem_unregister(&config_subsys);
-+ kobject_put(config_kobj);
- kmem_cache_destroy(configfs_dir_cachep);
- configfs_dir_cachep = NULL;
- configfs_inode_exit();
-diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
-index 6a713b3..d26e282 100644
---- a/fs/debugfs/inode.c
-+++ b/fs/debugfs/inode.c
-@@ -426,20 +426,19 @@ exit:
+@@ -855,7 +860,7 @@ static int link_dinode(struct gfs2_inode *dip, const struct qstr *name,
+
+ error = alloc_required = gfs2_diradd_alloc_required(&dip->i_inode, name);
+ if (alloc_required < 0)
+- goto fail;
++ goto fail_quota_locks;
+ if (alloc_required) {
+ error = gfs2_quota_check(dip, dip->i_inode.i_uid, dip->i_inode.i_gid);
+ if (error)
+@@ -896,7 +901,7 @@ fail_end_trans:
+ gfs2_trans_end(sdp);
+
+ fail_ipreserv:
+- if (dip->i_alloc.al_rgd)
++ if (dip->i_alloc->al_rgd)
+ gfs2_inplace_release(dip);
+
+ fail_quota_locks:
+@@ -966,7 +971,7 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
+ struct gfs2_inum_host inum = { .no_addr = 0, .no_formal_ino = 0 };
+ int error;
+ u64 generation;
+- struct buffer_head *bh=NULL;
++ struct buffer_head *bh = NULL;
+
+ if (!name->len || name->len > GFS2_FNAMESIZE)
+ return ERR_PTR(-ENAMETOOLONG);
+@@ -1003,8 +1008,6 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
+ if (IS_ERR(inode))
+ goto fail_gunlock2;
+
+- gfs2_inode_bh(GFS2_I(inode), bh);
+-
+ error = gfs2_inode_refresh(GFS2_I(inode));
+ if (error)
+ goto fail_gunlock2;
+@@ -1021,6 +1024,8 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
+ if (error)
+ goto fail_gunlock2;
+
++ if (bh)
++ brelse(bh);
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+ return inode;
+@@ -1032,6 +1037,8 @@ fail_gunlock2:
+ fail_gunlock:
+ gfs2_glock_dq(ghs);
+ fail:
++ if (bh)
++ brelse(bh);
+ return ERR_PTR(error);
}
- EXPORT_SYMBOL_GPL(debugfs_rename);
--static decl_subsys(debug, NULL, NULL);
-+static struct kobject *debug_kobj;
+diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
+index 351ac87..d446506 100644
+--- a/fs/gfs2/inode.h
++++ b/fs/gfs2/inode.h
+@@ -20,6 +20,18 @@ static inline int gfs2_is_jdata(const struct gfs2_inode *ip)
+ return ip->i_di.di_flags & GFS2_DIF_JDATA;
+ }
- static int __init debugfs_init(void)
++static inline int gfs2_is_writeback(const struct gfs2_inode *ip)
++{
++ const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++ return (sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK) && !gfs2_is_jdata(ip);
++}
++
++static inline int gfs2_is_ordered(const struct gfs2_inode *ip)
++{
++ const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++ return (sdp->sd_args.ar_data == GFS2_DATA_ORDERED) && !gfs2_is_jdata(ip);
++}
++
+ static inline int gfs2_is_dir(const struct gfs2_inode *ip)
{
- int retval;
+ return S_ISDIR(ip->i_inode.i_mode);
+diff --git a/fs/gfs2/locking/dlm/mount.c b/fs/gfs2/locking/dlm/mount.c
+index 41c5b04..f2efff4 100644
+--- a/fs/gfs2/locking/dlm/mount.c
++++ b/fs/gfs2/locking/dlm/mount.c
+@@ -67,6 +67,11 @@ static int make_args(struct gdlm_ls *ls, char *data_arg, int *nodir)
+ memset(data, 0, 256);
+ strncpy(data, data_arg, 255);
-- kobj_set_kset_s(&debug_subsys, kernel_subsys);
-- retval = subsystem_register(&debug_subsys);
-- if (retval)
-- return retval;
-+ debug_kobj = kobject_create_and_add("debug", kernel_kobj);
-+ if (!debug_kobj)
++ if (!strlen(data)) {
++ log_error("no mount options, (u)mount helpers not installed");
+ return -EINVAL;
++ }
++
+ for (options = data; (x = strsep(&options, ":")); ) {
+ if (!*x)
+ continue;
+diff --git a/fs/gfs2/locking/dlm/plock.c b/fs/gfs2/locking/dlm/plock.c
+index 1f7b038..2ebd374 100644
+--- a/fs/gfs2/locking/dlm/plock.c
++++ b/fs/gfs2/locking/dlm/plock.c
+@@ -89,15 +89,19 @@ int gdlm_plock(void *lockspace, struct lm_lockname *name,
+ op->info.number = name->ln_number;
+ op->info.start = fl->fl_start;
+ op->info.end = fl->fl_end;
+- op->info.owner = (__u64)(long) fl->fl_owner;
+ if (fl->fl_lmops && fl->fl_lmops->fl_grant) {
++ /* fl_owner is lockd which doesn't distinguish
++ processes on the nfs client */
++ op->info.owner = (__u64) fl->fl_pid;
+ xop->callback = fl->fl_lmops->fl_grant;
+ locks_init_lock(&xop->flc);
+ locks_copy_lock(&xop->flc, fl);
+ xop->fl = fl;
+ xop->file = file;
+- } else
++ } else {
++ op->info.owner = (__u64)(long) fl->fl_owner;
+ xop->callback = NULL;
++ }
- retval = register_filesystem(&debug_fs_type);
- if (retval)
-- subsystem_unregister(&debug_subsys);
-+ kobject_put(debug_kobj);
- return retval;
- }
+ send_op(op);
-@@ -447,7 +446,7 @@ static void __exit debugfs_exit(void)
- {
- simple_release_fs(&debugfs_mount, &debugfs_mount_count);
- unregister_filesystem(&debug_fs_type);
-- subsystem_unregister(&debug_subsys);
-+ kobject_put(debug_kobj);
- }
+@@ -203,7 +207,10 @@ int gdlm_punlock(void *lockspace, struct lm_lockname *name,
+ op->info.number = name->ln_number;
+ op->info.start = fl->fl_start;
+ op->info.end = fl->fl_end;
+- op->info.owner = (__u64)(long) fl->fl_owner;
++ if (fl->fl_lmops && fl->fl_lmops->fl_grant)
++ op->info.owner = (__u64) fl->fl_pid;
++ else
++ op->info.owner = (__u64)(long) fl->fl_owner;
- core_initcall(debugfs_init);
-diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
-index 6353a83..5c108c4 100644
---- a/fs/dlm/lockspace.c
-+++ b/fs/dlm/lockspace.c
-@@ -166,26 +166,7 @@ static struct kobj_type dlm_ktype = {
- .release = lockspace_kobj_release,
+ send_op(op);
+ wait_event(recv_wq, (op->done != 0));
+@@ -242,7 +249,10 @@ int gdlm_plock_get(void *lockspace, struct lm_lockname *name,
+ op->info.number = name->ln_number;
+ op->info.start = fl->fl_start;
+ op->info.end = fl->fl_end;
+- op->info.owner = (__u64)(long) fl->fl_owner;
++ if (fl->fl_lmops && fl->fl_lmops->fl_grant)
++ op->info.owner = (__u64) fl->fl_pid;
++ else
++ op->info.owner = (__u64)(long) fl->fl_owner;
+
+ send_op(op);
+ wait_event(recv_wq, (op->done != 0));
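The three plock hunks apply one rule: when a request arrives via lockd (`fl_lmops->fl_grant` is set), `fl_owner` is lockd itself and cannot distinguish processes on the NFS client, so the pid is used as the owner instead. A minimal model of that choice (all names here are illustrative, not the kernel structures):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

struct flock_sketch {
    bool from_lockd;  /* models fl_lmops && fl_lmops->fl_grant */
    long fl_owner;    /* a pointer-sized token in the kernel */
    int  fl_pid;
};

/* Pick the owner identity recorded with the posix lock request. */
static uint64_t plock_owner(const struct flock_sketch *fl)
{
    return fl->from_lockd ? (uint64_t)fl->fl_pid
                          : (uint64_t)fl->fl_owner;
}
```

The important property is that two requests from the same NFS client but different client processes now get distinct owners.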
+diff --git a/fs/gfs2/locking/dlm/sysfs.c b/fs/gfs2/locking/dlm/sysfs.c
+index ae9e6a2..a87b098 100644
+--- a/fs/gfs2/locking/dlm/sysfs.c
++++ b/fs/gfs2/locking/dlm/sysfs.c
+@@ -189,51 +189,39 @@ static struct kobj_type gdlm_ktype = {
+ .sysfs_ops = &gdlm_attr_ops,
};
--static struct kset dlm_kset = {
-- .ktype = &dlm_ktype,
+-static struct kset gdlm_kset = {
+- .ktype = &gdlm_ktype,
-};
--
--static int kobject_setup(struct dlm_ls *ls)
--{
-- char lsname[DLM_LOCKSPACE_LEN];
-- int error;
--
-- memset(lsname, 0, DLM_LOCKSPACE_LEN);
-- snprintf(lsname, DLM_LOCKSPACE_LEN, "%s", ls->ls_name);
--
-- error = kobject_set_name(&ls->ls_kobj, "%s", lsname);
-- if (error)
++static struct kset *gdlm_kset;
+
+ int gdlm_kobject_setup(struct gdlm_ls *ls, struct kobject *fskobj)
+ {
+ int error;
+
+- error = kobject_set_name(&ls->kobj, "%s", "lock_module");
+- if (error) {
+- log_error("can't set kobj name %d", error);
- return error;
+- }
-
-- ls->ls_kobj.kset = &dlm_kset;
-- ls->ls_kobj.ktype = &dlm_ktype;
-- return 0;
--}
-+static struct kset *dlm_kset;
+- ls->kobj.kset = &gdlm_kset;
+- ls->kobj.ktype = &gdlm_ktype;
+- ls->kobj.parent = fskobj;
+-
+- error = kobject_register(&ls->kobj);
++ ls->kobj.kset = gdlm_kset;
++ error = kobject_init_and_add(&ls->kobj, &gdlm_ktype, fskobj,
++ "lock_module");
+ if (error)
+ log_error("can't register kobj %d", error);
++ kobject_uevent(&ls->kobj, KOBJ_ADD);
- static int do_uevent(struct dlm_ls *ls, int in)
+ return error;
+ }
+
+ void gdlm_kobject_release(struct gdlm_ls *ls)
{
-@@ -220,24 +201,22 @@ static int do_uevent(struct dlm_ls *ls, int in)
+- kobject_unregister(&ls->kobj);
++ kobject_put(&ls->kobj);
+ }
- int dlm_lockspace_init(void)
+ int gdlm_sysfs_init(void)
{
- int error;
-
- ls_count = 0;
- mutex_init(&ls_lock);
- INIT_LIST_HEAD(&lslist);
- spin_lock_init(&lslist_lock);
-
-- kobject_set_name(&dlm_kset.kobj, "dlm");
-- kobj_set_kset_s(&dlm_kset, kernel_subsys);
-- error = kset_register(&dlm_kset);
+- kobject_set_name(&gdlm_kset.kobj, "lock_dlm");
+- kobj_set_kset_s(&gdlm_kset, kernel_subsys);
+- error = kset_register(&gdlm_kset);
- if (error)
-- printk("dlm_lockspace_init: cannot register kset %d\n", error);
+- printk("lock_dlm: cannot register kset %d\n", error);
+-
- return error;
-+ dlm_kset = kset_create_and_add("dlm", NULL, kernel_kobj);
-+ if (!dlm_kset) {
++ gdlm_kset = kset_create_and_add("lock_dlm", NULL, kernel_kobj);
++ if (!gdlm_kset) {
+ printk(KERN_WARNING "%s: can not create kset\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ return 0;
}
- void dlm_lockspace_exit(void)
+ void gdlm_sysfs_exit(void)
{
-- kset_unregister(&dlm_kset);
-+ kset_unregister(dlm_kset);
+- kset_unregister(&gdlm_kset);
++ kset_unregister(gdlm_kset);
}
- static int dlm_scand(void *data)
-@@ -549,13 +528,12 @@ static int new_lockspace(char *name, int namelen, void **lockspace,
- goto out_delist;
- }
+diff --git a/fs/gfs2/locking/dlm/thread.c b/fs/gfs2/locking/dlm/thread.c
+index bd938f0..521694f 100644
+--- a/fs/gfs2/locking/dlm/thread.c
++++ b/fs/gfs2/locking/dlm/thread.c
+@@ -273,18 +273,13 @@ static int gdlm_thread(void *data, int blist)
+ struct gdlm_ls *ls = (struct gdlm_ls *) data;
+ struct gdlm_lock *lp = NULL;
+ uint8_t complete, blocking, submit, drop;
+- DECLARE_WAITQUEUE(wait, current);
-- error = kobject_setup(ls);
-- if (error)
-- goto out_stop;
--
-- error = kobject_register(&ls->ls_kobj);
-+ ls->ls_kobj.kset = dlm_kset;
-+ error = kobject_init_and_add(&ls->ls_kobj, &dlm_ktype, NULL,
-+ "%s", ls->ls_name);
- if (error)
- goto out_stop;
-+ kobject_uevent(&ls->ls_kobj, KOBJ_ADD);
+ /* Only thread1 is allowed to do blocking callbacks since gfs
+ may wait for a completion callback within a blocking cb. */
- /* let kobject handle freeing of ls if there's an error */
- do_unreg = 1;
-@@ -601,7 +579,7 @@ static int new_lockspace(char *name, int namelen, void **lockspace,
- kfree(ls->ls_rsbtbl);
- out_lsfree:
- if (do_unreg)
-- kobject_unregister(&ls->ls_kobj);
-+ kobject_put(&ls->ls_kobj);
- else
- kfree(ls);
- out:
-@@ -750,7 +728,7 @@ static int release_lockspace(struct dlm_ls *ls, int force)
- dlm_clear_members(ls);
- dlm_clear_members_gone(ls);
- kfree(ls->ls_node_array);
-- kobject_unregister(&ls->ls_kobj);
-+ kobject_put(&ls->ls_kobj);
- /* The ls structure will be freed when the kobject is done with */
+ while (!kthread_should_stop()) {
+- set_current_state(TASK_INTERRUPTIBLE);
+- add_wait_queue(&ls->thread_wait, &wait);
+- if (no_work(ls, blist))
+- schedule();
+- remove_wait_queue(&ls->thread_wait, &wait);
+- set_current_state(TASK_RUNNING);
++ wait_event_interruptible(ls->thread_wait,
++ !no_work(ls, blist) || kthread_should_stop());
- mutex_lock(&ls_lock);
-diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
-index e5580bc..0249aa4 100644
---- a/fs/ecryptfs/main.c
-+++ b/fs/ecryptfs/main.c
-@@ -734,127 +734,40 @@ static int ecryptfs_init_kmem_caches(void)
- return 0;
- }
+ complete = blocking = submit = drop = 0;
--struct ecryptfs_obj {
-- char *name;
-- struct list_head slot_list;
-- struct kobject kobj;
--};
--
--struct ecryptfs_attribute {
-- struct attribute attr;
-- ssize_t(*show) (struct ecryptfs_obj *, char *);
-- ssize_t(*store) (struct ecryptfs_obj *, const char *, size_t);
--};
-+static struct kobject *ecryptfs_kobj;
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 7df7024..161ab6f 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -16,6 +16,8 @@
+ #include <linux/crc32.h>
+ #include <linux/lm_interface.h>
+ #include <linux/delay.h>
++#include <linux/kthread.h>
++#include <linux/freezer.h>
--static ssize_t
--ecryptfs_attr_store(struct kobject *kobj,
-- struct attribute *attr, const char *buf, size_t len)
-+static ssize_t version_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buff)
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -68,14 +70,12 @@ unsigned int gfs2_struct2blk(struct gfs2_sbd *sdp, unsigned int nstruct,
+ *
+ */
+
+-void gfs2_remove_from_ail(struct address_space *mapping, struct gfs2_bufdata *bd)
++void gfs2_remove_from_ail(struct gfs2_bufdata *bd)
{
-- struct ecryptfs_obj *obj = container_of(kobj, struct ecryptfs_obj,
-- kobj);
-- struct ecryptfs_attribute *attribute =
-- container_of(attr, struct ecryptfs_attribute, attr);
--
-- return (attribute->store ? attribute->store(obj, buf, len) : 0);
-+ return snprintf(buff, PAGE_SIZE, "%d\n", ECRYPTFS_VERSIONING_MASK);
+ bd->bd_ail = NULL;
+ list_del_init(&bd->bd_ail_st_list);
+ list_del_init(&bd->bd_ail_gl_list);
+ atomic_dec(&bd->bd_gl->gl_ail_count);
+- if (mapping)
+- gfs2_meta_cache_flush(GFS2_I(mapping->host));
+ brelse(bd->bd_bh);
}
--static ssize_t
--ecryptfs_attr_show(struct kobject *kobj, struct attribute *attr, char *buf)
--{
-- struct ecryptfs_obj *obj = container_of(kobj, struct ecryptfs_obj,
-- kobj);
-- struct ecryptfs_attribute *attribute =
-- container_of(attr, struct ecryptfs_attribute, attr);
--
-- return (attribute->show ? attribute->show(obj, buf) : 0);
--}
-+static struct kobj_attribute version_attr = __ATTR_RO(version);
-
--static struct sysfs_ops ecryptfs_sysfs_ops = {
-- .show = ecryptfs_attr_show,
-- .store = ecryptfs_attr_store
-+static struct attribute *attributes[] = {
-+ &version_attr.attr,
-+ NULL,
- };
-
--static struct kobj_type ecryptfs_ktype = {
-- .sysfs_ops = &ecryptfs_sysfs_ops
-+static struct attribute_group attr_group = {
-+ .attrs = attributes,
- };
+@@ -92,8 +92,6 @@ static void gfs2_ail1_start_one(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
+ struct buffer_head *bh;
+ int retry;
--static decl_subsys(ecryptfs, &ecryptfs_ktype, NULL);
--
--static ssize_t version_show(struct ecryptfs_obj *obj, char *buff)
--{
-- return snprintf(buff, PAGE_SIZE, "%d\n", ECRYPTFS_VERSIONING_MASK);
--}
--
--static struct ecryptfs_attribute sysfs_attr_version = __ATTR_RO(version);
--
--static struct ecryptfs_version_str_map_elem {
-- u32 flag;
-- char *str;
--} ecryptfs_version_str_map[] = {
-- {ECRYPTFS_VERSIONING_PASSPHRASE, "passphrase"},
-- {ECRYPTFS_VERSIONING_PUBKEY, "pubkey"},
-- {ECRYPTFS_VERSIONING_PLAINTEXT_PASSTHROUGH, "plaintext passthrough"},
-- {ECRYPTFS_VERSIONING_POLICY, "policy"},
-- {ECRYPTFS_VERSIONING_XATTR, "metadata in extended attribute"},
-- {ECRYPTFS_VERSIONING_MULTKEY, "multiple keys per file"}
--};
--
--static ssize_t version_str_show(struct ecryptfs_obj *obj, char *buff)
--{
-- int i;
-- int remaining = PAGE_SIZE;
-- int total_written = 0;
--
-- buff[0] = '\0';
-- for (i = 0; i < ARRAY_SIZE(ecryptfs_version_str_map); i++) {
-- int entry_size;
--
-- if (!(ECRYPTFS_VERSIONING_MASK
-- & ecryptfs_version_str_map[i].flag))
-- continue;
-- entry_size = strlen(ecryptfs_version_str_map[i].str);
-- if ((entry_size + 2) > remaining)
-- goto out;
-- memcpy(buff, ecryptfs_version_str_map[i].str, entry_size);
-- buff[entry_size++] = '\n';
-- buff[entry_size] = '\0';
-- buff += entry_size;
-- total_written += entry_size;
-- remaining -= entry_size;
-- }
--out:
-- return total_written;
--}
--
--static struct ecryptfs_attribute sysfs_attr_version_str = __ATTR_RO(version_str);
+- BUG_ON(!spin_is_locked(&sdp->sd_log_lock));
-
- static int do_sysfs_registration(void)
- {
- int rc;
+ do {
+ retry = 0;
-- rc = subsystem_register(&ecryptfs_subsys);
-- if (rc) {
-- printk(KERN_ERR
-- "Unable to register ecryptfs sysfs subsystem\n");
-- goto out;
-- }
-- rc = sysfs_create_file(&ecryptfs_subsys.kobj,
-- &sysfs_attr_version.attr);
-- if (rc) {
-- printk(KERN_ERR
-- "Unable to create ecryptfs version attribute\n");
-- subsystem_unregister(&ecryptfs_subsys);
-+ ecryptfs_kobj = kobject_create_and_add("ecryptfs", fs_kobj);
-+ if (!ecryptfs_kobj) {
-+ printk(KERN_ERR "Unable to create ecryptfs kset\n");
-+ rc = -ENOMEM;
- goto out;
- }
-- rc = sysfs_create_file(&ecryptfs_subsys.kobj,
-- &sysfs_attr_version_str.attr);
-+ rc = sysfs_create_group(ecryptfs_kobj, &attr_group);
- if (rc) {
- printk(KERN_ERR
-- "Unable to create ecryptfs version_str attribute\n");
-- sysfs_remove_file(&ecryptfs_subsys.kobj,
-- &sysfs_attr_version.attr);
-- subsystem_unregister(&ecryptfs_subsys);
-- goto out;
-+ "Unable to create ecryptfs version attributes\n");
-+ kobject_put(ecryptfs_kobj);
- }
- out:
- return rc;
-@@ -862,11 +775,8 @@ out:
+@@ -210,7 +208,7 @@ static void gfs2_ail1_start(struct gfs2_sbd *sdp, int flags)
+ gfs2_log_unlock(sdp);
+ }
- static void do_sysfs_unregistration(void)
+-int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags)
++static int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags)
{
-- sysfs_remove_file(&ecryptfs_subsys.kobj,
-- &sysfs_attr_version.attr);
-- sysfs_remove_file(&ecryptfs_subsys.kobj,
-- &sysfs_attr_version_str.attr);
-- subsystem_unregister(&ecryptfs_subsys);
-+ sysfs_remove_group(ecryptfs_kobj, &attr_group);
-+ kobject_put(ecryptfs_kobj);
+ struct gfs2_ail *ai, *s;
+ int ret;
+@@ -248,7 +246,7 @@ static void gfs2_ail2_empty_one(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
+ bd = list_entry(head->prev, struct gfs2_bufdata,
+ bd_ail_st_list);
+ gfs2_assert(sdp, bd->bd_ail == ai);
+- gfs2_remove_from_ail(bd->bd_bh->b_page->mapping, bd);
++ gfs2_remove_from_ail(bd);
+ }
}
- static int __init ecryptfs_init(void)
-@@ -894,7 +804,6 @@ static int __init ecryptfs_init(void)
- printk(KERN_ERR "Failed to register filesystem\n");
- goto out_free_kmem_caches;
+@@ -303,7 +301,7 @@ int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks)
+
+ mutex_lock(&sdp->sd_log_reserve_mutex);
+ gfs2_log_lock(sdp);
+- while(sdp->sd_log_blks_free <= (blks + reserved_blks)) {
++ while(atomic_read(&sdp->sd_log_blks_free) <= (blks + reserved_blks)) {
+ gfs2_log_unlock(sdp);
+ gfs2_ail1_empty(sdp, 0);
+ gfs2_log_flush(sdp, NULL);
+@@ -312,7 +310,7 @@ int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks)
+ gfs2_ail1_start(sdp, 0);
+ gfs2_log_lock(sdp);
}
-- kobj_set_kset_s(&ecryptfs_subsys, fs_subsys);
- rc = do_sysfs_registration();
- if (rc) {
- printk(KERN_ERR "sysfs registration failed\n");
-diff --git a/fs/ecryptfs/netlink.c b/fs/ecryptfs/netlink.c
-index 9aa3451..f638a69 100644
---- a/fs/ecryptfs/netlink.c
-+++ b/fs/ecryptfs/netlink.c
-@@ -237,7 +237,6 @@ out:
- */
- void ecryptfs_release_netlink(void)
- {
-- if (ecryptfs_nl_sock && ecryptfs_nl_sock->sk_socket)
-- sock_release(ecryptfs_nl_sock->sk_socket);
-+ netlink_kernel_release(ecryptfs_nl_sock);
- ecryptfs_nl_sock = NULL;
- }
-diff --git a/fs/ext2/super.c b/fs/ext2/super.c
-index 154e25f..6abaf75 100644
---- a/fs/ext2/super.c
-+++ b/fs/ext2/super.c
-@@ -680,11 +680,31 @@ static int ext2_check_descriptors (struct super_block * sb)
- static loff_t ext2_max_size(int bits)
+- sdp->sd_log_blks_free -= blks;
++ atomic_sub(blks, &sdp->sd_log_blks_free);
+ gfs2_log_unlock(sdp);
+ mutex_unlock(&sdp->sd_log_reserve_mutex);
+
+@@ -332,27 +330,23 @@ void gfs2_log_release(struct gfs2_sbd *sdp, unsigned int blks)
{
- loff_t res = EXT2_NDIR_BLOCKS;
-- /* This constant is calculated to be the largest file size for a
-- * dense, 4k-blocksize file such that the total number of
-+ int meta_blocks;
-+ loff_t upper_limit;
-+
-+ /* This is calculated to be the largest file size for a
-+ * dense, file such that the total number of
- * sectors in the file, including data and all indirect blocks,
-- * does not exceed 2^32. */
-- const loff_t upper_limit = 0x1ff7fffd000LL;
-+ * does not exceed 2^32 -1
-+ * __u32 i_blocks representing the total number of
-+ * 512 bytes blocks of the file
-+ */
-+ upper_limit = (1LL << 32) - 1;
-+
-+ /* total blocks in file system block size */
-+ upper_limit >>= (bits - 9);
-+
-+
-+ /* indirect blocks */
-+ meta_blocks = 1;
-+ /* double indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2));
-+ /* tripple indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
-+
-+ upper_limit -= meta_blocks;
-+ upper_limit <<= bits;
- res += 1LL << (bits-2);
- res += 1LL << (2*(bits-2));
-@@ -692,6 +712,10 @@ static loff_t ext2_max_size(int bits)
- res <<= bits;
- if (res > upper_limit)
- res = upper_limit;
-+
-+ if (res > MAX_LFS_FILESIZE)
-+ res = MAX_LFS_FILESIZE;
-+
- return res;
+ gfs2_log_lock(sdp);
+- sdp->sd_log_blks_free += blks;
++ atomic_add(blks, &sdp->sd_log_blks_free);
+ gfs2_assert_withdraw(sdp,
+- sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks);
++ atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks);
+ gfs2_log_unlock(sdp);
+ up_read(&sdp->sd_log_flush_lock);
}
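The ext2 hunk above replaces the hard-coded `0x1ff7fffd000LL` constant (valid only for 4k blocks) with a derivation from the real constraint: `i_blocks` is a `__u32` count of 512-byte sectors covering data plus all indirect blocks. The result can be checked numerically in userspace; a re-derivation of the patched `ext2_max_size()` with the `MAX_LFS_FILESIZE` clamp omitted:

```c
#include <assert.h>

#define NDIR_BLOCKS 12 /* EXT2_NDIR_BLOCKS */

/* Largest file size, in bytes, such that the __u32 i_blocks sector
 * count (data + indirect blocks) cannot overflow; bits is the
 * filesystem block-size shift (12 for 4k blocks). */
static long long max_size(int bits)
{
    long long res = NDIR_BLOCKS;
    long long meta_blocks, upper_limit;

    upper_limit = (1LL << 32) - 1;  /* max 512-byte sectors */
    upper_limit >>= (bits - 9);     /* in filesystem blocks */

    meta_blocks = 1;                                    /* indirect */
    meta_blocks += 1 + (1LL << (bits - 2));             /* double   */
    meta_blocks += 1 + (1LL << (bits - 2))
                 + (1LL << (2 * (bits - 2)));           /* triple   */

    upper_limit -= meta_blocks;
    upper_limit <<= bits;

    /* Size addressable by the block-pointer tree itself. */
    res += 1LL << (bits - 2);
    res += 1LL << (2 * (bits - 2));
    res += 1LL << (3 * (bits - 2));
    res <<= bits;

    return res < upper_limit ? res : upper_limit;
}
```

For 4k blocks this yields 2194719883264 bytes (just under 2 TiB), slightly below the old constant: the old value did not subtract the indirect blocks themselves from the sector budget.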
-diff --git a/fs/ext3/super.c b/fs/ext3/super.c
-index cb14de1..f3675cc 100644
---- a/fs/ext3/super.c
-+++ b/fs/ext3/super.c
-@@ -1436,11 +1436,31 @@ static void ext3_orphan_cleanup (struct super_block * sb,
- static loff_t ext3_max_size(int bits)
+ static u64 log_bmap(struct gfs2_sbd *sdp, unsigned int lbn)
{
- loff_t res = EXT3_NDIR_BLOCKS;
-- /* This constant is calculated to be the largest file size for a
-- * dense, 4k-blocksize file such that the total number of
-+ int meta_blocks;
-+ loff_t upper_limit;
-+
-+ /* This is calculated to be the largest file size for a
-+ * dense, file such that the total number of
- * sectors in the file, including data and all indirect blocks,
-- * does not exceed 2^32. */
-- const loff_t upper_limit = 0x1ff7fffd000LL;
-+ * does not exceed 2^32 -1
-+ * __u32 i_blocks representing the total number of
-+ * 512 bytes blocks of the file
-+ */
-+ upper_limit = (1LL << 32) - 1;
-+
-+ /* total blocks in file system block size */
-+ upper_limit >>= (bits - 9);
-+
-+
-+ /* indirect blocks */
-+ meta_blocks = 1;
-+ /* double indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2));
-+ /* tripple indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
-+
-+ upper_limit -= meta_blocks;
-+ upper_limit <<= bits;
-
- res += 1LL << (bits-2);
- res += 1LL << (2*(bits-2));
-@@ -1448,6 +1468,10 @@ static loff_t ext3_max_size(int bits)
- res <<= bits;
- if (res > upper_limit)
- res = upper_limit;
+- struct inode *inode = sdp->sd_jdesc->jd_inode;
+- int error;
+- struct buffer_head bh_map = { .b_state = 0, .b_blocknr = 0 };
+-
+- bh_map.b_size = 1 << inode->i_blkbits;
+- error = gfs2_block_map(inode, lbn, 0, &bh_map);
+- if (error || !bh_map.b_blocknr)
+- printk(KERN_INFO "error=%d, dbn=%llu lbn=%u", error,
+- (unsigned long long)bh_map.b_blocknr, lbn);
+- gfs2_assert_withdraw(sdp, !error && bh_map.b_blocknr);
+-
+- return bh_map.b_blocknr;
++ struct gfs2_journal_extent *je;
+
-+ if (res > MAX_LFS_FILESIZE)
-+ res = MAX_LFS_FILESIZE;
++ list_for_each_entry(je, &sdp->sd_jdesc->extent_list, extent_list) {
++ if (lbn >= je->lblock && lbn < je->lblock + je->blocks)
++ return je->dblock + lbn - je->lblock;
++ }
+
- return res;
++ return -1;
}
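The rewritten `log_bmap()` above replaces a per-call `gfs2_block_map()` (with its buffer_head setup and error path) by a walk of a precomputed journal extent list. A minimal userspace model of that lookup, with field names mirroring `struct gfs2_journal_extent` but otherwise assumed:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One contiguous run of journal blocks: logical [lblock, lblock+blocks)
 * maps to physical [dblock, dblock+blocks). */
struct extent {
    uint64_t lblock;
    uint64_t blocks;
    uint64_t dblock;
};

/* Translate a logical journal block to its disk block, or return
 * UINT64_MAX when no extent covers lbn (the kernel version returns -1,
 * which the caller treats as a reason to withdraw). */
static uint64_t log_bmap(const struct extent *tab, size_t n, uint64_t lbn)
{
    for (size_t i = 0; i < n; i++)
        if (lbn >= tab[i].lblock && lbn < tab[i].lblock + tab[i].blocks)
            return tab[i].dblock + lbn - tab[i].lblock;
    return UINT64_MAX;
}
```

Since a journal is typically a handful of large extents, the linear scan is cheap and needs no I/O, unlike the block-mapping call it replaces.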
-diff --git a/fs/ext4/Makefile b/fs/ext4/Makefile
-index ae6e7e5..ac6fa8c 100644
---- a/fs/ext4/Makefile
-+++ b/fs/ext4/Makefile
-@@ -6,7 +6,7 @@ obj-$(CONFIG_EXT4DEV_FS) += ext4dev.o
+ /**
+@@ -561,8 +555,8 @@ static void log_pull_tail(struct gfs2_sbd *sdp, unsigned int new_tail)
+ ail2_empty(sdp, new_tail);
- ext4dev-y := balloc.o bitmap.o dir.o file.o fsync.o ialloc.o inode.o \
- ioctl.o namei.o super.o symlink.o hash.o resize.o extents.o \
-- ext4_jbd2.o
-+ ext4_jbd2.o migrate.o mballoc.o
+ gfs2_log_lock(sdp);
+- sdp->sd_log_blks_free += dist;
+- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks);
++ atomic_add(dist, &sdp->sd_log_blks_free);
++ gfs2_assert_withdraw(sdp, atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks);
+ gfs2_log_unlock(sdp);
- ext4dev-$(CONFIG_EXT4DEV_FS_XATTR) += xattr.o xattr_user.o xattr_trusted.o
- ext4dev-$(CONFIG_EXT4DEV_FS_POSIX_ACL) += acl.o
-diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
-index 71ee95e..ac75ea9 100644
---- a/fs/ext4/balloc.c
-+++ b/fs/ext4/balloc.c
-@@ -29,7 +29,7 @@
- * Calculate the block group number and offset, given a block number
+ sdp->sd_log_tail = new_tail;
+@@ -652,7 +646,7 @@ static void gfs2_ordered_write(struct gfs2_sbd *sdp)
+ get_bh(bh);
+ gfs2_log_unlock(sdp);
+ lock_buffer(bh);
+- if (test_clear_buffer_dirty(bh)) {
++ if (buffer_mapped(bh) && test_clear_buffer_dirty(bh)) {
+ bh->b_end_io = end_buffer_write_sync;
+ submit_bh(WRITE, bh);
+ } else {
+@@ -694,20 +688,16 @@ static void gfs2_ordered_wait(struct gfs2_sbd *sdp)
+ *
*/
- void ext4_get_group_no_and_offset(struct super_block *sb, ext4_fsblk_t blocknr,
-- unsigned long *blockgrpp, ext4_grpblk_t *offsetp)
-+ ext4_group_t *blockgrpp, ext4_grpblk_t *offsetp)
- {
- struct ext4_super_block *es = EXT4_SB(sb)->s_es;
- ext4_grpblk_t offset;
-@@ -46,7 +46,7 @@ void ext4_get_group_no_and_offset(struct super_block *sb, ext4_fsblk_t blocknr,
- /* Initializes an uninitialized block bitmap if given, and returns the
- * number of blocks free in the group. */
- unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
-- int block_group, struct ext4_group_desc *gdp)
-+ ext4_group_t block_group, struct ext4_group_desc *gdp)
+
+-void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
++void __gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
{
- unsigned long start;
- int bit, bit_max;
-@@ -60,7 +60,7 @@ unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
- * essentially implementing a per-group read-only flag. */
- if (!ext4_group_desc_csum_verify(sbi, block_group, gdp)) {
- ext4_error(sb, __FUNCTION__,
-- "Checksum bad for group %u\n", block_group);
-+ "Checksum bad for group %lu\n", block_group);
- gdp->bg_free_blocks_count = 0;
- gdp->bg_free_inodes_count = 0;
- gdp->bg_itable_unused = 0;
-@@ -153,7 +153,7 @@ unsigned ext4_init_block_bitmap(struct super_block *sb, struct buffer_head *bh,
- * group descriptor
- */
- struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
-- unsigned int block_group,
-+ ext4_group_t block_group,
- struct buffer_head ** bh)
+ struct gfs2_ail *ai;
+
+ down_write(&sdp->sd_log_flush_lock);
+
+- if (gl) {
+- gfs2_log_lock(sdp);
+- if (list_empty(&gl->gl_le.le_list)) {
+- gfs2_log_unlock(sdp);
+- up_write(&sdp->sd_log_flush_lock);
+- return;
+- }
+- gfs2_log_unlock(sdp);
++ /* Log might have been flushed while we waited for the flush lock */
++ if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags)) {
++ up_write(&sdp->sd_log_flush_lock);
++ return;
+ }
+
+ ai = kzalloc(sizeof(struct gfs2_ail), GFP_NOFS | __GFP_NOFAIL);
+@@ -739,7 +729,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
+ log_flush_commit(sdp);
+ else if (sdp->sd_log_tail != current_tail(sdp) && !sdp->sd_log_idle){
+ gfs2_log_lock(sdp);
+- sdp->sd_log_blks_free--; /* Adjust for unreserved buffer */
++ atomic_dec(&sdp->sd_log_blks_free); /* Adjust for unreserved buffer */
+ gfs2_log_unlock(sdp);
+ log_write_header(sdp, 0, PULL);
+ }
+@@ -767,7 +757,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
+ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
{
- unsigned long group_desc;
-@@ -164,7 +164,7 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
- if (block_group >= sbi->s_groups_count) {
- ext4_error (sb, "ext4_get_group_desc",
- "block_group >= groups_count - "
-- "block_group = %d, groups_count = %lu",
-+ "block_group = %lu, groups_count = %lu",
- block_group, sbi->s_groups_count);
+ unsigned int reserved;
+- unsigned int old;
++ unsigned int unused;
- return NULL;
-@@ -176,7 +176,7 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
- if (!sbi->s_group_desc[group_desc]) {
- ext4_error (sb, "ext4_get_group_desc",
- "Group descriptor not loaded - "
-- "block_group = %d, group_desc = %lu, desc = %lu",
-+ "block_group = %lu, group_desc = %lu, desc = %lu",
- block_group, group_desc, offset);
- return NULL;
+ gfs2_log_lock(sdp);
+
+@@ -779,14 +769,11 @@ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+ sdp->sd_log_commited_revoke += tr->tr_num_revoke - tr->tr_num_revoke_rm;
+ gfs2_assert_withdraw(sdp, ((int)sdp->sd_log_commited_revoke) >= 0);
+ reserved = calc_reserved(sdp);
+- old = sdp->sd_log_blks_free;
+- sdp->sd_log_blks_free += tr->tr_reserved -
+- (reserved - sdp->sd_log_blks_reserved);
+-
+- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free >= old);
+- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free <=
++ unused = sdp->sd_log_blks_reserved - reserved + tr->tr_reserved;
++ gfs2_assert_withdraw(sdp, unused >= 0);
++ atomic_add(unused, &sdp->sd_log_blks_free);
++ gfs2_assert_withdraw(sdp, atomic_read(&sdp->sd_log_blks_free) <=
+ sdp->sd_jdesc->jd_blocks);
+-
+ sdp->sd_log_blks_reserved = reserved;
+
+ gfs2_log_unlock(sdp);
+@@ -825,7 +812,6 @@ void gfs2_log_shutdown(struct gfs2_sbd *sdp)
+ down_write(&sdp->sd_log_flush_lock);
+
+ gfs2_assert_withdraw(sdp, !sdp->sd_log_blks_reserved);
+- gfs2_assert_withdraw(sdp, !sdp->sd_log_num_gl);
+ gfs2_assert_withdraw(sdp, !sdp->sd_log_num_buf);
+ gfs2_assert_withdraw(sdp, !sdp->sd_log_num_revoke);
+ gfs2_assert_withdraw(sdp, !sdp->sd_log_num_rg);
+@@ -838,7 +824,7 @@ void gfs2_log_shutdown(struct gfs2_sbd *sdp)
+ log_write_header(sdp, GFS2_LOG_HEAD_UNMOUNT,
+ (sdp->sd_log_tail == current_tail(sdp)) ? 0 : PULL);
+
+- gfs2_assert_warn(sdp, sdp->sd_log_blks_free == sdp->sd_jdesc->jd_blocks);
++ gfs2_assert_warn(sdp, atomic_read(&sdp->sd_log_blks_free) == sdp->sd_jdesc->jd_blocks);
+ gfs2_assert_warn(sdp, sdp->sd_log_head == sdp->sd_log_tail);
+ gfs2_assert_warn(sdp, list_empty(&sdp->sd_ail2_list));
+
+@@ -866,3 +852,42 @@ void gfs2_meta_syncfs(struct gfs2_sbd *sdp)
}
-@@ -189,18 +189,70 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block * sb,
- return desc;
}
-+static int ext4_valid_block_bitmap(struct super_block *sb,
-+ struct ext4_group_desc *desc,
-+ unsigned int block_group,
-+ struct buffer_head *bh)
++
++/**
++ * gfs2_logd - Update log tail as Active Items get flushed to in-place blocks
++ * @sdp: Pointer to GFS2 superblock
++ *
++ * Also, periodically check to make sure that we're using the most recent
++ * journal index.
++ */
++
++int gfs2_logd(void *data)
+{
-+ ext4_grpblk_t offset;
-+ ext4_grpblk_t next_zero_bit;
-+ ext4_fsblk_t bitmap_blk;
-+ ext4_fsblk_t group_first_block;
++ struct gfs2_sbd *sdp = data;
++ unsigned long t;
++ int need_flush;
+
-+ if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) {
-+ /* with FLEX_BG, the inode/block bitmaps and itable
-+ * blocks may not be in the group at all
-+ * so the bitmap validation will be skipped for those groups
-+ * or it has to also read the block group where the bitmaps
-+ * are located to verify they are set.
-+ */
-+ return 1;
-+ }
-+ group_first_block = ext4_group_first_block_no(sb, block_group);
++ while (!kthread_should_stop()) {
++ /* Advance the log tail */
+
-+ /* check whether block bitmap block number is set */
-+ bitmap_blk = ext4_block_bitmap(sb, desc);
-+ offset = bitmap_blk - group_first_block;
-+ if (!ext4_test_bit(offset, bh->b_data))
-+ /* bad block bitmap */
-+ goto err_out;
++ t = sdp->sd_log_flush_time +
++ gfs2_tune_get(sdp, gt_log_flush_secs) * HZ;
+
-+ /* check whether the inode bitmap block number is set */
-+ bitmap_blk = ext4_inode_bitmap(sb, desc);
-+ offset = bitmap_blk - group_first_block;
-+ if (!ext4_test_bit(offset, bh->b_data))
-+ /* bad block bitmap */
-+ goto err_out;
++ gfs2_ail1_empty(sdp, DIO_ALL);
++ gfs2_log_lock(sdp);
++ need_flush = sdp->sd_log_num_buf > gfs2_tune_get(sdp, gt_incore_log_blocks);
++ gfs2_log_unlock(sdp);
++ if (need_flush || time_after_eq(jiffies, t)) {
++ gfs2_log_flush(sdp, NULL);
++ sdp->sd_log_flush_time = jiffies;
++ }
+
-+ /* check whether the inode table block number is set */
-+ bitmap_blk = ext4_inode_table(sb, desc);
-+ offset = bitmap_blk - group_first_block;
-+ next_zero_bit = ext4_find_next_zero_bit(bh->b_data,
-+ offset + EXT4_SB(sb)->s_itb_per_group,
-+ offset);
-+ if (next_zero_bit >= offset + EXT4_SB(sb)->s_itb_per_group)
-+ /* good bitmap for inode tables */
-+ return 1;
++ t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
++ if (freezing(current))
++ refrigerator();
++ schedule_timeout_interruptible(t);
++ }
+
-+err_out:
-+ ext4_error(sb, __FUNCTION__,
-+ "Invalid block bitmap - "
-+ "block_group = %d, block = %llu",
-+ block_group, bitmap_blk);
+ return 0;
+}
- /**
- * read_block_bitmap()
- * @sb: super block
- * @block_group: given block group
- *
-- * Read the bitmap for a given block_group, reading into the specified
-- * slot in the superblock's bitmap cache.
-+ * Read the bitmap for a given block_group,and validate the
-+ * bits for block/inode/inode tables are set in the bitmaps
- *
- * Return buffer_head on success or NULL in case of failure.
- */
- struct buffer_head *
--read_block_bitmap(struct super_block *sb, unsigned int block_group)
-+read_block_bitmap(struct super_block *sb, ext4_group_t block_group)
- {
- struct ext4_group_desc * desc;
- struct buffer_head * bh = NULL;
-@@ -210,25 +262,36 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
- if (!desc)
- return NULL;
- bitmap_blk = ext4_block_bitmap(sb, desc);
-+ bh = sb_getblk(sb, bitmap_blk);
-+ if (unlikely(!bh)) {
-+ ext4_error(sb, __FUNCTION__,
-+ "Cannot read block bitmap - "
-+ "block_group = %d, block_bitmap = %llu",
-+ (int)block_group, (unsigned long long)bitmap_blk);
-+ return NULL;
-+ }
-+ if (bh_uptodate_or_lock(bh))
-+ return bh;
-+
- if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
-- bh = sb_getblk(sb, bitmap_blk);
-- if (!buffer_uptodate(bh)) {
-- lock_buffer(bh);
-- if (!buffer_uptodate(bh)) {
-- ext4_init_block_bitmap(sb, bh, block_group,
-- desc);
-- set_buffer_uptodate(bh);
-- }
-- unlock_buffer(bh);
-- }
-- } else {
-- bh = sb_bread(sb, bitmap_blk);
-+ ext4_init_block_bitmap(sb, bh, block_group, desc);
-+ set_buffer_uptodate(bh);
-+ unlock_buffer(bh);
-+ return bh;
- }
-- if (!bh)
-- ext4_error (sb, __FUNCTION__,
-+ if (bh_submit_read(bh) < 0) {
-+ put_bh(bh);
-+ ext4_error(sb, __FUNCTION__,
- "Cannot read block bitmap - "
- "block_group = %d, block_bitmap = %llu",
-- block_group, bitmap_blk);
-+ (int)block_group, (unsigned long long)bitmap_blk);
-+ return NULL;
-+ }
-+ if (!ext4_valid_block_bitmap(sb, desc, block_group, bh)) {
-+ put_bh(bh);
-+ return NULL;
-+ }
+
- return bh;
- }
- /*
-@@ -320,7 +383,7 @@ restart:
- */
- static int
- goal_in_my_reservation(struct ext4_reserve_window *rsv, ext4_grpblk_t grp_goal,
-- unsigned int group, struct super_block * sb)
-+ ext4_group_t group, struct super_block *sb)
- {
- ext4_fsblk_t group_first_block, group_last_block;
-
-@@ -463,7 +526,7 @@ static inline int rsv_is_empty(struct ext4_reserve_window *rsv)
- * when setting the reservation window size through ioctl before the file
- * is open for write (needs block allocation).
- *
-- * Needs truncate_mutex protection prior to call this function.
-+ * Needs down_write(i_data_sem) protection prior to call this function.
- */
- void ext4_init_block_alloc_info(struct inode *inode)
- {
-@@ -514,6 +577,8 @@ void ext4_discard_reservation(struct inode *inode)
- struct ext4_reserve_window_node *rsv;
- spinlock_t *rsv_lock = &EXT4_SB(inode->i_sb)->s_rsv_window_lock;
+diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
+index dae2824..7711528 100644
+--- a/fs/gfs2/log.h
++++ b/fs/gfs2/log.h
+@@ -48,8 +48,6 @@ static inline void gfs2_log_pointers_init(struct gfs2_sbd *sdp,
+ unsigned int gfs2_struct2blk(struct gfs2_sbd *sdp, unsigned int nstruct,
+ unsigned int ssize);
-+ ext4_mb_discard_inode_preallocations(inode);
+-int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags);
+-
+ int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks);
+ void gfs2_log_release(struct gfs2_sbd *sdp, unsigned int blks);
+ void gfs2_log_incr_head(struct gfs2_sbd *sdp);
+@@ -57,11 +55,19 @@ void gfs2_log_incr_head(struct gfs2_sbd *sdp);
+ struct buffer_head *gfs2_log_get_buf(struct gfs2_sbd *sdp);
+ struct buffer_head *gfs2_log_fake_buf(struct gfs2_sbd *sdp,
+ struct buffer_head *real);
+-void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl);
++void __gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl);
+
- if (!block_i)
- return;
++static inline void gfs2_log_flush(struct gfs2_sbd *sbd, struct gfs2_glock *gl)
++{
++ if (!gl || test_bit(GLF_LFLUSH, &gl->gl_flags))
++ __gfs2_log_flush(sbd, gl);
++}
++
+ void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans);
+-void gfs2_remove_from_ail(struct address_space *mapping, struct gfs2_bufdata *bd);
++void gfs2_remove_from_ail(struct gfs2_bufdata *bd);
-@@ -540,7 +605,7 @@ void ext4_free_blocks_sb(handle_t *handle, struct super_block *sb,
- {
- struct buffer_head *bitmap_bh = NULL;
- struct buffer_head *gd_bh;
-- unsigned long block_group;
-+ ext4_group_t block_group;
- ext4_grpblk_t bit;
- unsigned long i;
- unsigned long overflow;
-@@ -587,11 +652,13 @@ do_more:
- in_range(ext4_inode_bitmap(sb, desc), block, count) ||
- in_range(block, ext4_inode_table(sb, desc), sbi->s_itb_per_group) ||
- in_range(block + count - 1, ext4_inode_table(sb, desc),
-- sbi->s_itb_per_group))
-+ sbi->s_itb_per_group)) {
- ext4_error (sb, "ext4_free_blocks",
- "Freeing blocks in system zones - "
- "Block = %llu, count = %lu",
- block, count);
-+ goto error_return;
-+ }
+ void gfs2_log_shutdown(struct gfs2_sbd *sdp);
+ void gfs2_meta_syncfs(struct gfs2_sbd *sdp);
++int gfs2_logd(void *data);
- /*
- * We are about to start releasing blocks in the bitmap,
-@@ -720,19 +787,29 @@ error_return:
- * @inode: inode
- * @block: start physical block to free
- * @count: number of blocks to count
-+ * @metadata: Are these metadata blocks
- */
- void ext4_free_blocks(handle_t *handle, struct inode *inode,
-- ext4_fsblk_t block, unsigned long count)
-+ ext4_fsblk_t block, unsigned long count,
-+ int metadata)
- {
- struct super_block * sb;
- unsigned long dquot_freed_blocks;
+ #endif /* __LOG_DOT_H__ */
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index 6c27cea..fae59d6 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -87,6 +87,7 @@ static void gfs2_unpin(struct gfs2_sbd *sdp, struct buffer_head *bh,
+ }
+ bd->bd_ail = ai;
+ list_add(&bd->bd_ail_st_list, &ai->ai_ail1_list);
++ clear_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
+ gfs2_log_unlock(sdp);
+ unlock_buffer(bh);
+ }
+@@ -124,49 +125,6 @@ static struct buffer_head *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type)
+ return bh;
+ }
-+ /* this isn't the right place to decide whether block is metadata
-+ * inode.c/extents.c knows better, but for safety ... */
-+ if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode) ||
-+ ext4_should_journal_data(inode))
-+ metadata = 1;
-+
- sb = inode->i_sb;
-- if (!sb) {
-- printk ("ext4_free_blocks: nonexistent device");
+-static void __glock_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
+-{
+- struct gfs2_glock *gl;
+- struct gfs2_trans *tr = current->journal_info;
+-
+- tr->tr_touched = 1;
+-
+- gl = container_of(le, struct gfs2_glock, gl_le);
+- if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(gl)))
- return;
+-
+- if (!list_empty(&le->le_list))
+- return;
+-
+- gfs2_glock_hold(gl);
+- set_bit(GLF_DIRTY, &gl->gl_flags);
+- sdp->sd_log_num_gl++;
+- list_add(&le->le_list, &sdp->sd_log_le_gl);
+-}
+-
+-static void glock_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
+-{
+- gfs2_log_lock(sdp);
+- __glock_lo_add(sdp, le);
+- gfs2_log_unlock(sdp);
+-}
+-
+-static void glock_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
+-{
+- struct list_head *head = &sdp->sd_log_le_gl;
+- struct gfs2_glock *gl;
+-
+- while (!list_empty(head)) {
+- gl = list_entry(head->next, struct gfs2_glock, gl_le.le_list);
+- list_del_init(&gl->gl_le.le_list);
+- sdp->sd_log_num_gl--;
+-
+- gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(gl));
+- gfs2_glock_put(gl);
- }
-- ext4_free_blocks_sb(handle, sb, block, count, &dquot_freed_blocks);
-+
-+ if (!test_opt(sb, MBALLOC) || !EXT4_SB(sb)->s_group_info)
-+ ext4_free_blocks_sb(handle, sb, block, count,
-+ &dquot_freed_blocks);
-+ else
-+ ext4_mb_free_blocks(handle, inode, block, count,
-+ metadata, &dquot_freed_blocks);
- if (dquot_freed_blocks)
- DQUOT_FREE_BLOCK(inode, dquot_freed_blocks);
- return;
-@@ -920,9 +997,10 @@ claim_block(spinlock_t *lock, ext4_grpblk_t block, struct buffer_head *bh)
- * ext4_journal_release_buffer(), else we'll run out of credits.
- */
- static ext4_grpblk_t
--ext4_try_to_allocate(struct super_block *sb, handle_t *handle, int group,
-- struct buffer_head *bitmap_bh, ext4_grpblk_t grp_goal,
-- unsigned long *count, struct ext4_reserve_window *my_rsv)
-+ext4_try_to_allocate(struct super_block *sb, handle_t *handle,
-+ ext4_group_t group, struct buffer_head *bitmap_bh,
-+ ext4_grpblk_t grp_goal, unsigned long *count,
-+ struct ext4_reserve_window *my_rsv)
- {
- ext4_fsblk_t group_first_block;
- ext4_grpblk_t start, end;
-@@ -1156,7 +1234,7 @@ static int find_next_reservable_window(
- */
- static int alloc_new_reservation(struct ext4_reserve_window_node *my_rsv,
- ext4_grpblk_t grp_goal, struct super_block *sb,
-- unsigned int group, struct buffer_head *bitmap_bh)
-+ ext4_group_t group, struct buffer_head *bitmap_bh)
- {
- struct ext4_reserve_window_node *search_head;
- ext4_fsblk_t group_first_block, group_end_block, start_block;
-@@ -1354,7 +1432,7 @@ static void try_to_extend_reservation(struct ext4_reserve_window_node *my_rsv,
- */
- static ext4_grpblk_t
- ext4_try_to_allocate_with_rsv(struct super_block *sb, handle_t *handle,
-- unsigned int group, struct buffer_head *bitmap_bh,
-+ ext4_group_t group, struct buffer_head *bitmap_bh,
- ext4_grpblk_t grp_goal,
- struct ext4_reserve_window_node * my_rsv,
- unsigned long *count, int *errp)
-@@ -1510,7 +1588,7 @@ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
- }
-
- /**
-- * ext4_new_blocks() -- core block(s) allocation function
-+ * ext4_new_blocks_old() -- core block(s) allocation function
- * @handle: handle to this transaction
- * @inode: file inode
- * @goal: given target block(filesystem wide)
-@@ -1523,17 +1601,17 @@ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
- * any specific goal block.
- *
- */
--ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
-+ext4_fsblk_t ext4_new_blocks_old(handle_t *handle, struct inode *inode,
- ext4_fsblk_t goal, unsigned long *count, int *errp)
- {
- struct buffer_head *bitmap_bh = NULL;
- struct buffer_head *gdp_bh;
-- unsigned long group_no;
-- int goal_group;
-+ ext4_group_t group_no;
-+ ext4_group_t goal_group;
- ext4_grpblk_t grp_target_blk; /* blockgroup relative goal block */
- ext4_grpblk_t grp_alloc_blk; /* blockgroup-relative allocated block*/
- ext4_fsblk_t ret_block; /* filesyetem-wide allocated block */
-- int bgi; /* blockgroup iteration index */
-+ ext4_group_t bgi; /* blockgroup iteration index */
- int fatal = 0, err;
- int performed_allocation = 0;
- ext4_grpblk_t free_blocks; /* number of free blocks in a group */
-@@ -1544,10 +1622,7 @@ ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
- struct ext4_reserve_window_node *my_rsv = NULL;
- struct ext4_block_alloc_info *block_i;
- unsigned short windowsz = 0;
--#ifdef EXT4FS_DEBUG
-- static int goal_hits, goal_attempts;
--#endif
-- unsigned long ngroups;
-+ ext4_group_t ngroups;
- unsigned long num = *count;
-
- *errp = -ENOSPC;
-@@ -1567,7 +1642,7 @@ ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
-
- sbi = EXT4_SB(sb);
- es = EXT4_SB(sb)->s_es;
-- ext4_debug("goal=%lu.\n", goal);
-+ ext4_debug("goal=%llu.\n", goal);
- /*
- * Allocate a block from reservation only when
- * filesystem is mounted with reservation(default,-o reservation), and
-@@ -1677,7 +1752,7 @@ retry_alloc:
-
- allocated:
+- gfs2_assert_warn(sdp, !sdp->sd_log_num_gl);
+-}
+-
+ static void buf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
+ {
+ struct gfs2_bufdata *bd = container_of(le, struct gfs2_bufdata, bd_le);
+@@ -182,7 +140,8 @@ static void buf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
+ list_add(&bd->bd_list_tr, &tr->tr_list_buf);
+ if (!list_empty(&le->le_list))
+ goto out;
+- __glock_lo_add(sdp, &bd->bd_gl->gl_le);
++ set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
++ set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags);
+ gfs2_meta_check(sdp, bd->bd_bh);
+ gfs2_pin(sdp, bd->bd_bh);
+ sdp->sd_log_num_buf++;
+@@ -556,17 +515,20 @@ static void databuf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
-- ext4_debug("using block group %d(%d)\n",
-+ ext4_debug("using block group %lu(%d)\n",
- group_no, gdp->bg_free_blocks_count);
+ lock_buffer(bd->bd_bh);
+ gfs2_log_lock(sdp);
+- if (!list_empty(&bd->bd_list_tr))
+- goto out;
+- tr->tr_touched = 1;
+- if (gfs2_is_jdata(ip)) {
+- tr->tr_num_buf++;
+- list_add(&bd->bd_list_tr, &tr->tr_list_buf);
++ if (tr) {
++ if (!list_empty(&bd->bd_list_tr))
++ goto out;
++ tr->tr_touched = 1;
++ if (gfs2_is_jdata(ip)) {
++ tr->tr_num_buf++;
++ list_add(&bd->bd_list_tr, &tr->tr_list_buf);
++ }
+ }
+ if (!list_empty(&le->le_list))
+ goto out;
- BUFFER_TRACE(gdp_bh, "get_write_access");
-@@ -1692,11 +1767,13 @@ allocated:
- in_range(ret_block, ext4_inode_table(sb, gdp),
- EXT4_SB(sb)->s_itb_per_group) ||
- in_range(ret_block + num - 1, ext4_inode_table(sb, gdp),
-- EXT4_SB(sb)->s_itb_per_group))
-+ EXT4_SB(sb)->s_itb_per_group)) {
- ext4_error(sb, "ext4_new_block",
- "Allocating block in system zone - "
- "blocks from %llu, length %lu",
- ret_block, num);
-+ goto out;
-+ }
+- __glock_lo_add(sdp, &bd->bd_gl->gl_le);
++ set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
++ set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags);
+ if (gfs2_is_jdata(ip)) {
+ gfs2_pin(sdp, bd->bd_bh);
+ tr->tr_num_databuf_new++;
+@@ -773,12 +735,6 @@ static void databuf_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
+ }
- performed_allocation = 1;
-@@ -1743,9 +1820,6 @@ allocated:
- * list of some description. We don't know in advance whether
- * the caller wants to use it as metadata or data.
- */
-- ext4_debug("allocating block %lu. Goal hits %d of %d.\n",
-- ret_block, goal_hits, goal_attempts);
+-const struct gfs2_log_operations gfs2_glock_lops = {
+- .lo_add = glock_lo_add,
+- .lo_after_commit = glock_lo_after_commit,
+- .lo_name = "glock",
+-};
-
- spin_lock(sb_bgl_lock(sbi, group_no));
- if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))
- gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
-@@ -1787,13 +1861,46 @@ out:
- }
+ const struct gfs2_log_operations gfs2_buf_lops = {
+ .lo_add = buf_lo_add,
+ .lo_incore_commit = buf_lo_incore_commit,
+@@ -816,7 +772,6 @@ const struct gfs2_log_operations gfs2_databuf_lops = {
+ };
- ext4_fsblk_t ext4_new_block(handle_t *handle, struct inode *inode,
-- ext4_fsblk_t goal, int *errp)
-+ ext4_fsblk_t goal, int *errp)
- {
-- unsigned long count = 1;
-+ struct ext4_allocation_request ar;
-+ ext4_fsblk_t ret;
+ const struct gfs2_log_operations *gfs2_log_ops[] = {
+- &gfs2_glock_lops,
+ &gfs2_databuf_lops,
+ &gfs2_buf_lops,
+ &gfs2_rg_lops,
+diff --git a/fs/gfs2/main.c b/fs/gfs2/main.c
+index 7ecfe0d..9c7765c 100644
+--- a/fs/gfs2/main.c
++++ b/fs/gfs2/main.c
+@@ -29,9 +29,8 @@ static void gfs2_init_inode_once(struct kmem_cache *cachep, void *foo)
+ struct gfs2_inode *ip = foo;
-- return ext4_new_blocks(handle, inode, goal, &count, errp);
-+ if (!test_opt(inode->i_sb, MBALLOC)) {
-+ unsigned long count = 1;
-+ ret = ext4_new_blocks_old(handle, inode, goal, &count, errp);
-+ return ret;
-+ }
-+
-+ memset(&ar, 0, sizeof(ar));
-+ ar.inode = inode;
-+ ar.goal = goal;
-+ ar.len = 1;
-+ ret = ext4_mb_new_blocks(handle, &ar, errp);
-+ return ret;
-+}
-+
-+ext4_fsblk_t ext4_new_blocks(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t goal, unsigned long *count, int *errp)
-+{
-+ struct ext4_allocation_request ar;
-+ ext4_fsblk_t ret;
-+
-+ if (!test_opt(inode->i_sb, MBALLOC)) {
-+ ret = ext4_new_blocks_old(handle, inode, goal, count, errp);
-+ return ret;
-+ }
-+
-+ memset(&ar, 0, sizeof(ar));
-+ ar.inode = inode;
-+ ar.goal = goal;
-+ ar.len = *count;
-+ ret = ext4_mb_new_blocks(handle, &ar, errp);
-+ *count = ar.len;
-+ return ret;
+ inode_init_once(&ip->i_inode);
+- spin_lock_init(&ip->i_spin);
+ init_rwsem(&ip->i_rw_mutex);
+- memset(ip->i_cache, 0, sizeof(ip->i_cache));
++ ip->i_alloc = NULL;
}
-+
+ static void gfs2_init_glock_once(struct kmem_cache *cachep, void *foo)
+diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
+index 4da4239..85aea27 100644
+--- a/fs/gfs2/meta_io.c
++++ b/fs/gfs2/meta_io.c
+@@ -50,6 +50,7 @@ static int gfs2_aspace_writepage(struct page *page,
+ static const struct address_space_operations aspace_aops = {
+ .writepage = gfs2_aspace_writepage,
+ .releasepage = gfs2_releasepage,
++ .sync_page = block_sync_page,
+ };
+
/**
- * ext4_count_free_blocks() -- count filesystem free blocks
- * @sb: superblock
-@@ -1804,8 +1911,8 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
+@@ -221,13 +222,14 @@ int gfs2_meta_read(struct gfs2_glock *gl, u64 blkno, int flags,
+ struct buffer_head **bhp)
{
- ext4_fsblk_t desc_count;
- struct ext4_group_desc *gdp;
-- int i;
-- unsigned long ngroups = EXT4_SB(sb)->s_groups_count;
-+ ext4_group_t i;
-+ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
- #ifdef EXT4FS_DEBUG
- struct ext4_super_block *es;
- ext4_fsblk_t bitmap_count;
-@@ -1829,14 +1936,14 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
- continue;
+ *bhp = getbuf(gl, blkno, CREATE);
+- if (!buffer_uptodate(*bhp))
++ if (!buffer_uptodate(*bhp)) {
+ ll_rw_block(READ_META, 1, bhp);
+- if (flags & DIO_WAIT) {
+- int error = gfs2_meta_wait(gl->gl_sbd, *bhp);
+- if (error) {
+- brelse(*bhp);
+- return error;
++ if (flags & DIO_WAIT) {
++ int error = gfs2_meta_wait(gl->gl_sbd, *bhp);
++ if (error) {
++ brelse(*bhp);
++ return error;
++ }
+ }
+ }
- x = ext4_count_free(bitmap_bh, sb->s_blocksize);
-- printk("group %d: stored = %d, counted = %lu\n",
-+ printk(KERN_DEBUG "group %lu: stored = %d, counted = %lu\n",
- i, le16_to_cpu(gdp->bg_free_blocks_count), x);
- bitmap_count += x;
+@@ -282,7 +284,7 @@ void gfs2_attach_bufdata(struct gfs2_glock *gl, struct buffer_head *bh,
+ return;
}
- brelse(bitmap_bh);
- printk("ext4_count_free_blocks: stored = %llu"
- ", computed = %llu, %llu\n",
-- EXT4_FREE_BLOCKS_COUNT(es),
-+ ext4_free_blocks_count(es),
- desc_count, bitmap_count);
- return bitmap_count;
- #else
-@@ -1853,7 +1960,7 @@ ext4_fsblk_t ext4_count_free_blocks(struct super_block *sb)
- #endif
- }
--static inline int test_root(int a, int b)
-+static inline int test_root(ext4_group_t a, int b)
- {
- int num = b;
+- bd = kmem_cache_zalloc(gfs2_bufdata_cachep, GFP_NOFS | __GFP_NOFAIL),
++ bd = kmem_cache_zalloc(gfs2_bufdata_cachep, GFP_NOFS | __GFP_NOFAIL);
+ bd->bd_bh = bh;
+ bd->bd_gl = gl;
-@@ -1862,7 +1969,7 @@ static inline int test_root(int a, int b)
- return num == a;
+@@ -317,7 +319,7 @@ void gfs2_remove_from_journal(struct buffer_head *bh, struct gfs2_trans *tr, int
+ }
+ if (bd) {
+ if (bd->bd_ail) {
+- gfs2_remove_from_ail(NULL, bd);
++ gfs2_remove_from_ail(bd);
+ bh->b_private = NULL;
+ bd->bd_bh = NULL;
+ bd->bd_blkno = bh->b_blocknr;
+@@ -358,32 +360,6 @@ void gfs2_meta_wipe(struct gfs2_inode *ip, u64 bstart, u32 blen)
}
--static int ext4_group_sparse(int group)
-+static int ext4_group_sparse(ext4_group_t group)
- {
- if (group <= 1)
- return 1;
-@@ -1880,7 +1987,7 @@ static int ext4_group_sparse(int group)
- * Return the number of blocks used by the superblock (primary or backup)
- * in this group. Currently this will be only 0 or 1.
+ /**
+- * gfs2_meta_cache_flush - get rid of any references on buffers for this inode
+- * @ip: The GFS2 inode
+- *
+- * This releases buffers that are in the most-recently-used array of
+- * blocks used for indirect block addressing for this inode.
+- */
+-
+-void gfs2_meta_cache_flush(struct gfs2_inode *ip)
+-{
+- struct buffer_head **bh_slot;
+- unsigned int x;
+-
+- spin_lock(&ip->i_spin);
+-
+- for (x = 0; x < GFS2_MAX_META_HEIGHT; x++) {
+- bh_slot = &ip->i_cache[x];
+- if (*bh_slot) {
+- brelse(*bh_slot);
+- *bh_slot = NULL;
+- }
+- }
+-
+- spin_unlock(&ip->i_spin);
+-}
+-
+-/**
+ * gfs2_meta_indirect_buffer - Get a metadata buffer
+ * @ip: The GFS2 inode
+ * @height: The level of this buf in the metadata (indir addr) tree (if any)
+@@ -391,8 +367,6 @@ void gfs2_meta_cache_flush(struct gfs2_inode *ip)
+ * @new: Non-zero if we may create a new buffer
+ * @bhp: the buffer is returned here
+ *
+- * Try to use the gfs2_inode's MRU metadata tree cache.
+- *
+ * Returns: errno
*/
--int ext4_bg_has_super(struct super_block *sb, int group)
-+int ext4_bg_has_super(struct super_block *sb, ext4_group_t group)
- {
- if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
- EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER) &&
-@@ -1889,18 +1996,20 @@ int ext4_bg_has_super(struct super_block *sb, int group)
- return 1;
- }
--static unsigned long ext4_bg_num_gdb_meta(struct super_block *sb, int group)
-+static unsigned long ext4_bg_num_gdb_meta(struct super_block *sb,
-+ ext4_group_t group)
+@@ -401,58 +375,25 @@ int gfs2_meta_indirect_buffer(struct gfs2_inode *ip, int height, u64 num,
{
- unsigned long metagroup = group / EXT4_DESC_PER_BLOCK(sb);
-- unsigned long first = metagroup * EXT4_DESC_PER_BLOCK(sb);
-- unsigned long last = first + EXT4_DESC_PER_BLOCK(sb) - 1;
-+ ext4_group_t first = metagroup * EXT4_DESC_PER_BLOCK(sb);
-+ ext4_group_t last = first + EXT4_DESC_PER_BLOCK(sb) - 1;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ struct gfs2_glock *gl = ip->i_gl;
+- struct buffer_head *bh = NULL, **bh_slot = ip->i_cache + height;
+- int in_cache = 0;
+-
+- BUG_ON(!gl);
+- BUG_ON(!sdp);
+-
+- spin_lock(&ip->i_spin);
+- if (*bh_slot && (*bh_slot)->b_blocknr == num) {
+- bh = *bh_slot;
+- get_bh(bh);
+- in_cache = 1;
+- }
+- spin_unlock(&ip->i_spin);
+-
+- if (!bh)
+- bh = getbuf(gl, num, CREATE);
+-
+- if (!bh)
+- return -ENOBUFS;
++ struct buffer_head *bh;
++ int ret = 0;
- if (group == first || group == first + 1 || group == last)
- return 1;
- return 0;
+ if (new) {
+- if (gfs2_assert_warn(sdp, height))
+- goto err;
+- meta_prep_new(bh);
++ BUG_ON(height == 0);
++ bh = gfs2_meta_new(gl, num);
+ gfs2_trans_add_bh(ip->i_gl, bh, 1);
+ gfs2_metatype_set(bh, GFS2_METATYPE_IN, GFS2_FORMAT_IN);
+ gfs2_buffer_clear_tail(bh, sizeof(struct gfs2_meta_header));
+ } else {
+ u32 mtype = height ? GFS2_METATYPE_IN : GFS2_METATYPE_DI;
+- if (!buffer_uptodate(bh)) {
+- ll_rw_block(READ_META, 1, &bh);
+- if (gfs2_meta_wait(sdp, bh))
+- goto err;
++ ret = gfs2_meta_read(gl, num, DIO_WAIT, &bh);
++ if (ret == 0 && gfs2_metatype_check(sdp, bh, mtype)) {
++ brelse(bh);
++ ret = -EIO;
+ }
+- if (gfs2_metatype_check(sdp, bh, mtype))
+- goto err;
+- }
+-
+- if (!in_cache) {
+- spin_lock(&ip->i_spin);
+- if (*bh_slot)
+- brelse(*bh_slot);
+- *bh_slot = bh;
+- get_bh(bh);
+- spin_unlock(&ip->i_spin);
+ }
+-
+ *bhp = bh;
+- return 0;
+-err:
+- brelse(bh);
+- return -EIO;
++ return ret;
}
--static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb, int group)
-+static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb,
-+ ext4_group_t group)
- {
- if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
- EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER) &&
-@@ -1918,7 +2027,7 @@ static unsigned long ext4_bg_num_gdb_nometa(struct super_block *sb, int group)
- * (primary or backup) in this group. In the future there may be a
- * different number of descriptor blocks in each group.
- */
--unsigned long ext4_bg_num_gdb(struct super_block *sb, int group)
-+unsigned long ext4_bg_num_gdb(struct super_block *sb, ext4_group_t group)
- {
- unsigned long first_meta_bg =
- le32_to_cpu(EXT4_SB(sb)->s_es->s_first_meta_bg);
-diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
-index f612bef..33888bb 100644
---- a/fs/ext4/dir.c
-+++ b/fs/ext4/dir.c
-@@ -67,7 +67,7 @@ int ext4_check_dir_entry (const char * function, struct inode * dir,
- unsigned long offset)
- {
- const char * error_msg = NULL;
-- const int rlen = le16_to_cpu(de->rec_len);
-+ const int rlen = ext4_rec_len_from_disk(de->rec_len);
+ /**
+diff --git a/fs/gfs2/meta_io.h b/fs/gfs2/meta_io.h
+index b704822..73e3b1c 100644
+--- a/fs/gfs2/meta_io.h
++++ b/fs/gfs2/meta_io.h
+@@ -56,7 +56,6 @@ void gfs2_remove_from_journal(struct buffer_head *bh, struct gfs2_trans *tr,
- if (rlen < EXT4_DIR_REC_LEN(1))
- error_msg = "rec_len is smaller than minimal";
-@@ -124,7 +124,7 @@ static int ext4_readdir(struct file * filp,
- offset = filp->f_pos & (sb->s_blocksize - 1);
+ void gfs2_meta_wipe(struct gfs2_inode *ip, u64 bstart, u32 blen);
- while (!error && !stored && filp->f_pos < inode->i_size) {
-- unsigned long blk = filp->f_pos >> EXT4_BLOCK_SIZE_BITS(sb);
-+ ext4_lblk_t blk = filp->f_pos >> EXT4_BLOCK_SIZE_BITS(sb);
- struct buffer_head map_bh;
- struct buffer_head *bh = NULL;
+-void gfs2_meta_cache_flush(struct gfs2_inode *ip);
+ int gfs2_meta_indirect_buffer(struct gfs2_inode *ip, int height, u64 num,
+ int new, struct buffer_head **bhp);
-@@ -172,10 +172,10 @@ revalidate:
- * least that it is non-zero. A
- * failure will be detected in the
- * dirent test below. */
-- if (le16_to_cpu(de->rec_len) <
-- EXT4_DIR_REC_LEN(1))
-+ if (ext4_rec_len_from_disk(de->rec_len)
-+ < EXT4_DIR_REC_LEN(1))
- break;
-- i += le16_to_cpu(de->rec_len);
-+ i += ext4_rec_len_from_disk(de->rec_len);
- }
- offset = i;
- filp->f_pos = (filp->f_pos & ~(sb->s_blocksize - 1))
-@@ -197,7 +197,7 @@ revalidate:
- ret = stored;
- goto out;
- }
-- offset += le16_to_cpu(de->rec_len);
-+ offset += ext4_rec_len_from_disk(de->rec_len);
- if (le32_to_cpu(de->inode)) {
- /* We might block in the next section
- * if the data destination is
-@@ -219,7 +219,7 @@ revalidate:
- goto revalidate;
- stored ++;
- }
-- filp->f_pos += le16_to_cpu(de->rec_len);
-+ filp->f_pos += ext4_rec_len_from_disk(de->rec_len);
- }
- offset = 0;
- brelse (bh);
-diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
-index 8528774..bc7081f 100644
---- a/fs/ext4/extents.c
-+++ b/fs/ext4/extents.c
-@@ -61,7 +61,7 @@ static ext4_fsblk_t ext_pblock(struct ext4_extent *ex)
- * idx_pblock:
- * combine low and high parts of a leaf physical block number into ext4_fsblk_t
- */
--static ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
-+ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
- {
- ext4_fsblk_t block;
+diff --git a/fs/gfs2/ops_address.c b/fs/gfs2/ops_address.c
+index 9679f8b..38dbe99 100644
+--- a/fs/gfs2/ops_address.c
++++ b/fs/gfs2/ops_address.c
+@@ -20,6 +20,8 @@
+ #include <linux/swap.h>
+ #include <linux/gfs2_ondisk.h>
+ #include <linux/lm_interface.h>
++#include <linux/backing-dev.h>
++#include <linux/pagevec.h>
-@@ -75,7 +75,7 @@ static ext4_fsblk_t idx_pblock(struct ext4_extent_idx *ix)
- * stores a large physical block number into an extent struct,
- * breaking it into parts
- */
--static void ext4_ext_store_pblock(struct ext4_extent *ex, ext4_fsblk_t pb)
-+void ext4_ext_store_pblock(struct ext4_extent *ex, ext4_fsblk_t pb)
- {
- ex->ee_start_lo = cpu_to_le32((unsigned long) (pb & 0xffffffff));
- ex->ee_start_hi = cpu_to_le16((unsigned long) ((pb >> 31) >> 1) & 0xffff);
-@@ -144,7 +144,7 @@ static int ext4_ext_dirty(handle_t *handle, struct inode *inode,
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -32,7 +34,6 @@
+ #include "quota.h"
+ #include "trans.h"
+ #include "rgrp.h"
+-#include "ops_file.h"
+ #include "super.h"
+ #include "util.h"
+ #include "glops.h"
+@@ -58,22 +59,6 @@ static void gfs2_page_add_databufs(struct gfs2_inode *ip, struct page *page,
+ }
- static ext4_fsblk_t ext4_ext_find_goal(struct inode *inode,
- struct ext4_ext_path *path,
-- ext4_fsblk_t block)
-+ ext4_lblk_t block)
- {
- struct ext4_inode_info *ei = EXT4_I(inode);
- ext4_fsblk_t bg_start;
-@@ -367,13 +367,14 @@ static void ext4_ext_drop_refs(struct ext4_ext_path *path)
- * the header must be checked before calling this
- */
- static void
--ext4_ext_binsearch_idx(struct inode *inode, struct ext4_ext_path *path, int block)
-+ext4_ext_binsearch_idx(struct inode *inode,
-+ struct ext4_ext_path *path, ext4_lblk_t block)
+ /**
+- * gfs2_get_block - Fills in a buffer head with details about a block
+- * @inode: The inode
+- * @lblock: The block number to look up
+- * @bh_result: The buffer head to return the result in
+- * @create: Non-zero if we may add block to the file
+- *
+- * Returns: errno
+- */
+-
+-int gfs2_get_block(struct inode *inode, sector_t lblock,
+- struct buffer_head *bh_result, int create)
+-{
+- return gfs2_block_map(inode, lblock, create, bh_result);
+-}
+-
+-/**
+ * gfs2_get_block_noalloc - Fills in a buffer head with details about a block
+ * @inode: The inode
+ * @lblock: The block number to look up
+@@ -88,7 +73,7 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
{
- struct ext4_extent_header *eh = path->p_hdr;
- struct ext4_extent_idx *r, *l, *m;
-
-
-- ext_debug("binsearch for %d(idx): ", block);
-+ ext_debug("binsearch for %u(idx): ", block);
+ int error;
- l = EXT_FIRST_INDEX(eh) + 1;
- r = EXT_LAST_INDEX(eh);
-@@ -425,7 +426,8 @@ ext4_ext_binsearch_idx(struct inode *inode, struct ext4_ext_path *path, int bloc
- * the header must be checked before calling this
- */
- static void
--ext4_ext_binsearch(struct inode *inode, struct ext4_ext_path *path, int block)
-+ext4_ext_binsearch(struct inode *inode,
-+ struct ext4_ext_path *path, ext4_lblk_t block)
+- error = gfs2_block_map(inode, lblock, 0, bh_result);
++ error = gfs2_block_map(inode, lblock, bh_result, 0);
+ if (error)
+ return error;
+ if (!buffer_mapped(bh_result))
+@@ -99,20 +84,19 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
+ static int gfs2_get_block_direct(struct inode *inode, sector_t lblock,
+ struct buffer_head *bh_result, int create)
{
- struct ext4_extent_header *eh = path->p_hdr;
- struct ext4_extent *r, *l, *m;
-@@ -438,7 +440,7 @@ ext4_ext_binsearch(struct inode *inode, struct ext4_ext_path *path, int block)
- return;
- }
-
-- ext_debug("binsearch for %d: ", block);
-+ ext_debug("binsearch for %u: ", block);
-
- l = EXT_FIRST_EXTENT(eh) + 1;
- r = EXT_LAST_EXTENT(eh);
-@@ -494,7 +496,8 @@ int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+- return gfs2_block_map(inode, lblock, 0, bh_result);
++ return gfs2_block_map(inode, lblock, bh_result, 0);
}
- struct ext4_ext_path *
--ext4_ext_find_extent(struct inode *inode, int block, struct ext4_ext_path *path)
-+ext4_ext_find_extent(struct inode *inode, ext4_lblk_t block,
-+ struct ext4_ext_path *path)
- {
- struct ext4_extent_header *eh;
- struct buffer_head *bh;
-@@ -763,7 +766,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
- while (k--) {
- oldblock = newblock;
- newblock = ablocks[--a];
-- bh = sb_getblk(inode->i_sb, (ext4_fsblk_t)newblock);
-+ bh = sb_getblk(inode->i_sb, newblock);
- if (!bh) {
- err = -EIO;
- goto cleanup;
-@@ -783,9 +786,8 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
- fidx->ei_block = border;
- ext4_idx_store_pblock(fidx, oldblock);
+ /**
+- * gfs2_writepage - Write complete page
+- * @page: Page to write
++ * gfs2_writepage_common - Common bits of writepage
++ * @page: The page to be written
++ * @wbc: The writeback control
+ *
+- * Returns: errno
+- *
+- * Some of this is copied from block_write_full_page() although we still
+- * call it to do most of the work.
++ * Returns: 1 if writepage is ok, otherwise an error code or zero if no error.
+ */
-- ext_debug("int.index at %d (block %llu): %lu -> %llu\n", i,
-- newblock, (unsigned long) le32_to_cpu(border),
-- oldblock);
-+ ext_debug("int.index at %d (block %llu): %u -> %llu\n",
-+ i, newblock, le32_to_cpu(border), oldblock);
- /* copy indexes */
- m = 0;
- path[i].p_idx++;
-@@ -851,7 +853,7 @@ cleanup:
- for (i = 0; i < depth; i++) {
- if (!ablocks[i])
- continue;
-- ext4_free_blocks(handle, inode, ablocks[i], 1);
-+ ext4_free_blocks(handle, inode, ablocks[i], 1, 1);
- }
- }
- kfree(ablocks);
-@@ -979,8 +981,8 @@ repeat:
- /* refill path */
- ext4_ext_drop_refs(path);
- path = ext4_ext_find_extent(inode,
-- le32_to_cpu(newext->ee_block),
-- path);
-+ (ext4_lblk_t)le32_to_cpu(newext->ee_block),
-+ path);
- if (IS_ERR(path))
- err = PTR_ERR(path);
- } else {
-@@ -992,8 +994,8 @@ repeat:
- /* refill path */
- ext4_ext_drop_refs(path);
- path = ext4_ext_find_extent(inode,
-- le32_to_cpu(newext->ee_block),
-- path);
-+ (ext4_lblk_t)le32_to_cpu(newext->ee_block),
-+ path);
- if (IS_ERR(path)) {
- err = PTR_ERR(path);
- goto out;
-@@ -1015,13 +1017,157 @@ out:
- }
+-static int gfs2_writepage(struct page *page, struct writeback_control *wbc)
++static int gfs2_writepage_common(struct page *page,
++ struct writeback_control *wbc)
+ {
+ struct inode *inode = page->mapping->host;
+ struct gfs2_inode *ip = GFS2_I(inode);
+@@ -120,41 +104,133 @@ static int gfs2_writepage(struct page *page, struct writeback_control *wbc)
+ loff_t i_size = i_size_read(inode);
+ pgoff_t end_index = i_size >> PAGE_CACHE_SHIFT;
+ unsigned offset;
+- int error;
+- int done_trans = 0;
++ int ret = -EIO;
- /*
-+ * search the closest allocated block to the left for *logical
-+ * and returns it at @logical + it's physical address at @phys
-+ * if *logical is the smallest allocated block, the function
-+ * returns 0 at @phys
-+ * return value contains 0 (success) or error code
+- if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(ip->i_gl))) {
+- unlock_page(page);
+- return -EIO;
+- }
++ if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(ip->i_gl)))
++ goto out;
++ ret = 0;
+ if (current->journal_info)
+- goto out_ignore;
+-
++ goto redirty;
+ /* Is the page fully outside i_size? (truncate in progress) */
+- offset = i_size & (PAGE_CACHE_SIZE-1);
++ offset = i_size & (PAGE_CACHE_SIZE-1);
+ if (page->index > end_index || (page->index == end_index && !offset)) {
+ page->mapping->a_ops->invalidatepage(page, 0);
+- unlock_page(page);
+- return 0; /* don't care */
++ goto out;
++ }
++ return 1;
++redirty:
++ redirty_page_for_writepage(wbc, page);
++out:
++ unlock_page(page);
++ return 0;
++}
++
++/**
++ * gfs2_writeback_writepage - Write page for writeback mappings
++ * @page: The page
++ * @wbc: The writeback control
++ *
+ */
-+int
-+ext4_ext_search_left(struct inode *inode, struct ext4_ext_path *path,
-+ ext4_lblk_t *logical, ext4_fsblk_t *phys)
++
++static int gfs2_writeback_writepage(struct page *page,
++ struct writeback_control *wbc)
+{
-+ struct ext4_extent_idx *ix;
-+ struct ext4_extent *ex;
-+ int depth, ee_len;
++ int ret;
+
-+ BUG_ON(path == NULL);
-+ depth = path->p_depth;
-+ *phys = 0;
++ ret = gfs2_writepage_common(page, wbc);
++ if (ret <= 0)
++ return ret;
+
-+ if (depth == 0 && path->p_ext == NULL)
-+ return 0;
++ ret = mpage_writepage(page, gfs2_get_block_noalloc, wbc);
++ if (ret == -EAGAIN)
++ ret = block_write_full_page(page, gfs2_get_block_noalloc, wbc);
++ return ret;
++}
+
-+ /* usually extent in the path covers blocks smaller
-+ * then *logical, but it can be that extent is the
-+ * first one in the file */
++/**
++ * gfs2_ordered_writepage - Write page for ordered data files
++ * @page: The page to write
++ * @wbc: The writeback control
++ *
++ */
+
-+ ex = path[depth].p_ext;
-+ ee_len = ext4_ext_get_actual_len(ex);
-+ if (*logical < le32_to_cpu(ex->ee_block)) {
-+ BUG_ON(EXT_FIRST_EXTENT(path[depth].p_hdr) != ex);
-+ while (--depth >= 0) {
-+ ix = path[depth].p_idx;
-+ BUG_ON(ix != EXT_FIRST_INDEX(path[depth].p_hdr));
-+ }
-+ return 0;
-+ }
++static int gfs2_ordered_writepage(struct page *page,
++ struct writeback_control *wbc)
++{
++ struct inode *inode = page->mapping->host;
++ struct gfs2_inode *ip = GFS2_I(inode);
++ int ret;
+
-+ BUG_ON(*logical < (le32_to_cpu(ex->ee_block) + ee_len));
++ ret = gfs2_writepage_common(page, wbc);
++ if (ret <= 0)
++ return ret;
+
-+ *logical = le32_to_cpu(ex->ee_block) + ee_len - 1;
-+ *phys = ext_pblock(ex) + ee_len - 1;
-+ return 0;
++ if (!page_has_buffers(page)) {
++ create_empty_buffers(page, inode->i_sb->s_blocksize,
++ (1 << BH_Dirty)|(1 << BH_Uptodate));
+ }
++ gfs2_page_add_databufs(ip, page, 0, inode->i_sb->s_blocksize-1);
++ return block_write_full_page(page, gfs2_get_block_noalloc, wbc);
++}
+
+- if ((sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip)) &&
+- PageChecked(page)) {
++/**
++ * __gfs2_jdata_writepage - The core of jdata writepage
++ * @page: The page to write
++ * @wbc: The writeback control
++ *
++ * This is shared between writepage and writepages and implements the
++ * core of the writepage operation. If a transaction is required then
++ * PageChecked will have been set and the transaction will have
++ * already been started before this is called.
++ */
++
++static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
++{
++ struct inode *inode = page->mapping->host;
++ struct gfs2_inode *ip = GFS2_I(inode);
++ struct gfs2_sbd *sdp = GFS2_SB(inode);
++
++ if (PageChecked(page)) {
+ ClearPageChecked(page);
+- error = gfs2_trans_begin(sdp, RES_DINODE + 1, 0);
+- if (error)
+- goto out_ignore;
+ if (!page_has_buffers(page)) {
+ create_empty_buffers(page, inode->i_sb->s_blocksize,
+ (1 << BH_Dirty)|(1 << BH_Uptodate));
+ }
+ gfs2_page_add_databufs(ip, page, 0, sdp->sd_vfs->s_blocksize-1);
++ }
++ return block_write_full_page(page, gfs2_get_block_noalloc, wbc);
+}
+
-+/*
-+ * search the closest allocated block to the right for *logical
-+ * and returns it at @logical + it's physical address at @phys
-+ * if *logical is the smallest allocated block, the function
-+ * returns 0 at @phys
-+ * return value contains 0 (success) or error code
++/**
++ * gfs2_jdata_writepage - Write complete page
++ * @page: Page to write
++ *
++ * Returns: errno
++ *
+ */
-+int
-+ext4_ext_search_right(struct inode *inode, struct ext4_ext_path *path,
-+ ext4_lblk_t *logical, ext4_fsblk_t *phys)
++
++static int gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
+{
-+ struct buffer_head *bh = NULL;
-+ struct ext4_extent_header *eh;
-+ struct ext4_extent_idx *ix;
-+ struct ext4_extent *ex;
-+ ext4_fsblk_t block;
-+ int depth, ee_len;
++ struct inode *inode = page->mapping->host;
++ struct gfs2_sbd *sdp = GFS2_SB(inode);
++ int error;
++ int done_trans = 0;
+
-+ BUG_ON(path == NULL);
-+ depth = path->p_depth;
-+ *phys = 0;
++ error = gfs2_writepage_common(page, wbc);
++ if (error <= 0)
++ return error;
+
-+ if (depth == 0 && path->p_ext == NULL)
-+ return 0;
++ if (PageChecked(page)) {
++ if (wbc->sync_mode != WB_SYNC_ALL)
++ goto out_ignore;
++ error = gfs2_trans_begin(sdp, RES_DINODE + 1, 0);
++ if (error)
++ goto out_ignore;
+ done_trans = 1;
+ }
+- error = block_write_full_page(page, gfs2_get_block_noalloc, wbc);
++ error = __gfs2_jdata_writepage(page, wbc);
+ if (done_trans)
+ gfs2_trans_end(sdp);
+- gfs2_meta_cache_flush(ip);
+ return error;
+
+ out_ignore:
+@@ -164,29 +240,190 @@ out_ignore:
+ }
+
+ /**
+- * gfs2_writepages - Write a bunch of dirty pages back to disk
++ * gfs2_writeback_writepages - Write a bunch of dirty pages back to disk
+ * @mapping: The mapping to write
+ * @wbc: Write-back control
+ *
+- * For journaled files and/or ordered writes this just falls back to the
+- * kernel's default writepages path for now. We will probably want to change
+- * that eventually (i.e. when we look at allocate on flush).
+- *
+- * For the data=writeback case though we can already ignore buffer heads
++ * For the data=writeback case we can already ignore buffer heads
+ * and write whole extents at once. This is a big reduction in the
+ * number of I/O requests we send and the bmap calls we make in this case.
+ */
+-static int gfs2_writepages(struct address_space *mapping,
+- struct writeback_control *wbc)
++static int gfs2_writeback_writepages(struct address_space *mapping,
++ struct writeback_control *wbc)
++{
++ return mpage_writepages(mapping, wbc, gfs2_get_block_noalloc);
++}
+
-+ /* usually extent in the path covers blocks smaller
-+ * then *logical, but it can be that extent is the
-+ * first one in the file */
++/**
++ * gfs2_write_jdata_pagevec - Write back a pagevec's worth of pages
++ * @mapping: The mapping
++ * @wbc: The writeback control
++ * @writepage: The writepage function to call for each page
++ * @pvec: The vector of pages
++ * @nr_pages: The number of pages to write
++ *
++ * Returns: non-zero if loop should terminate, zero otherwise
++ */
+
-+ ex = path[depth].p_ext;
-+ ee_len = ext4_ext_get_actual_len(ex);
-+ if (*logical < le32_to_cpu(ex->ee_block)) {
-+ BUG_ON(EXT_FIRST_EXTENT(path[depth].p_hdr) != ex);
-+ while (--depth >= 0) {
-+ ix = path[depth].p_idx;
-+ BUG_ON(ix != EXT_FIRST_INDEX(path[depth].p_hdr));
++static int gfs2_write_jdata_pagevec(struct address_space *mapping,
++ struct writeback_control *wbc,
++ struct pagevec *pvec,
++ int nr_pages, pgoff_t end)
+ {
+ struct inode *inode = mapping->host;
+- struct gfs2_inode *ip = GFS2_I(inode);
+ struct gfs2_sbd *sdp = GFS2_SB(inode);
++ loff_t i_size = i_size_read(inode);
++ pgoff_t end_index = i_size >> PAGE_CACHE_SHIFT;
++ unsigned offset = i_size & (PAGE_CACHE_SIZE-1);
++ unsigned nrblocks = nr_pages * (PAGE_CACHE_SIZE/inode->i_sb->s_blocksize);
++ struct backing_dev_info *bdi = mapping->backing_dev_info;
++ int i;
++ int ret;
++
++ ret = gfs2_trans_begin(sdp, nrblocks, 0);
++ if (ret < 0)
++ return ret;
++
++ for(i = 0; i < nr_pages; i++) {
++ struct page *page = pvec->pages[i];
++
++ lock_page(page);
++
++ if (unlikely(page->mapping != mapping)) {
++ unlock_page(page);
++ continue;
+ }
-+ *logical = le32_to_cpu(ex->ee_block);
-+ *phys = ext_pblock(ex);
-+ return 0;
-+ }
+
-+ BUG_ON(*logical < (le32_to_cpu(ex->ee_block) + ee_len));
++ if (!wbc->range_cyclic && page->index > end) {
++ ret = 1;
++ unlock_page(page);
++ continue;
++ }
+
-+ if (ex != EXT_LAST_EXTENT(path[depth].p_hdr)) {
-+ /* next allocated block in this leaf */
-+ ex++;
-+ *logical = le32_to_cpu(ex->ee_block);
-+ *phys = ext_pblock(ex);
-+ return 0;
-+ }
++ if (wbc->sync_mode != WB_SYNC_NONE)
++ wait_on_page_writeback(page);
++
++ if (PageWriteback(page) ||
++ !clear_page_dirty_for_io(page)) {
++ unlock_page(page);
++ continue;
++ }
++
++ /* Is the page fully outside i_size? (truncate in progress) */
++ if (page->index > end_index || (page->index == end_index && !offset)) {
++ page->mapping->a_ops->invalidatepage(page, 0);
++ unlock_page(page);
++ continue;
++ }
++
++ ret = __gfs2_jdata_writepage(page, wbc);
++
++ if (ret || (--(wbc->nr_to_write) <= 0))
++ ret = 1;
++ if (wbc->nonblocking && bdi_write_congested(bdi)) {
++ wbc->encountered_congestion = 1;
++ ret = 1;
++ }
+
-+ /* go up and search for index to the right */
-+ while (--depth >= 0) {
-+ ix = path[depth].p_idx;
-+ if (ix != EXT_LAST_INDEX(path[depth].p_hdr))
-+ break;
+ }
++ gfs2_trans_end(sdp);
++ return ret;
++}
+
-+ if (depth < 0) {
-+ /* we've gone up to the root and
-+ * found no index to the right */
++/**
++ * gfs2_write_cache_jdata - Like write_cache_pages but different
++ * @mapping: The mapping to write
++ * @wbc: The writeback control
++ * @writepage: The writepage function to call
++ * @data: The data to pass to writepage
++ *
++ * The reason that we use our own function here is that we need to
++ * start transactions before we grab page locks. This allows us
++ * to get the ordering right.
++ */
++
++static int gfs2_write_cache_jdata(struct address_space *mapping,
++ struct writeback_control *wbc)
++{
++ struct backing_dev_info *bdi = mapping->backing_dev_info;
++ int ret = 0;
++ int done = 0;
++ struct pagevec pvec;
++ int nr_pages;
++ pgoff_t index;
++ pgoff_t end;
++ int scanned = 0;
++ int range_whole = 0;
++
++ if (wbc->nonblocking && bdi_write_congested(bdi)) {
++ wbc->encountered_congestion = 1;
+ return 0;
+ }
+
-+ /* we've found index to the right, let's
-+ * follow it and find the closest allocated
-+ * block to the right */
-+ ix++;
-+ block = idx_pblock(ix);
-+ while (++depth < path->p_depth) {
-+ bh = sb_bread(inode->i_sb, block);
-+ if (bh == NULL)
-+ return -EIO;
-+ eh = ext_block_hdr(bh);
-+ if (ext4_ext_check_header(inode, eh, depth)) {
-+ put_bh(bh);
-+ return -EIO;
-+ }
-+ ix = EXT_FIRST_INDEX(eh);
-+ block = idx_pblock(ix);
-+ put_bh(bh);
++ pagevec_init(&pvec, 0);
++ if (wbc->range_cyclic) {
++ index = mapping->writeback_index; /* Start from prev offset */
++ end = -1;
++ } else {
++ index = wbc->range_start >> PAGE_CACHE_SHIFT;
++ end = wbc->range_end >> PAGE_CACHE_SHIFT;
++ if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
++ range_whole = 1;
++ scanned = 1;
+ }
+
+- if (sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK && !gfs2_is_jdata(ip))
+- return mpage_writepages(mapping, wbc, gfs2_get_block_noalloc);
++retry:
++ while (!done && (index <= end) &&
++ (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
++ PAGECACHE_TAG_DIRTY,
++ min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
++ scanned = 1;
++ ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, end);
++ if (ret)
++ done = 1;
++ if (ret > 0)
++ ret = 0;
+
-+ bh = sb_bread(inode->i_sb, block);
-+ if (bh == NULL)
-+ return -EIO;
-+ eh = ext_block_hdr(bh);
-+ if (ext4_ext_check_header(inode, eh, path->p_depth - depth)) {
-+ put_bh(bh);
-+ return -EIO;
++ pagevec_release(&pvec);
++ cond_resched();
++ }
++
++ if (!scanned && !done) {
++ /*
++ * We hit the last page and there is more work to be done: wrap
++ * back to the start of the file
++ */
++ scanned = 1;
++ index = 0;
++ goto retry;
+ }
-+ ex = EXT_FIRST_EXTENT(eh);
-+ *logical = le32_to_cpu(ex->ee_block);
-+ *phys = ext_pblock(ex);
-+ put_bh(bh);
-+ return 0;
+
++ if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
++ mapping->writeback_index = index;
++ return ret;
+}
+
-+/*
- * ext4_ext_next_allocated_block:
- * returns allocated block in subsequent extent or EXT_MAX_BLOCK.
- * NOTE: it considers block number from index entry as
- * allocated block. Thus, index entries have to be consistent
- * with leaves.
- */
--static unsigned long
-+static ext4_lblk_t
- ext4_ext_next_allocated_block(struct ext4_ext_path *path)
- {
- int depth;
-@@ -1054,7 +1200,7 @@ ext4_ext_next_allocated_block(struct ext4_ext_path *path)
- * ext4_ext_next_leaf_block:
- * returns first allocated block from next leaf or EXT_MAX_BLOCK
- */
--static unsigned ext4_ext_next_leaf_block(struct inode *inode,
-+static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
- struct ext4_ext_path *path)
- {
- int depth;
-@@ -1072,7 +1218,8 @@ static unsigned ext4_ext_next_leaf_block(struct inode *inode,
- while (depth >= 0) {
- if (path[depth].p_idx !=
- EXT_LAST_INDEX(path[depth].p_hdr))
-- return le32_to_cpu(path[depth].p_idx[1].ei_block);
-+ return (ext4_lblk_t)
-+ le32_to_cpu(path[depth].p_idx[1].ei_block);
- depth--;
- }
++
++/**
++ * gfs2_jdata_writepages - Write a bunch of dirty pages back to disk
++ * @mapping: The mapping to write
++ * @wbc: The writeback control
++ *
++ */
-@@ -1085,7 +1232,7 @@ static unsigned ext4_ext_next_leaf_block(struct inode *inode,
- * then we have to correct all indexes above.
- * TODO: do we need to correct tree in all cases?
+- return generic_writepages(mapping, wbc);
++static int gfs2_jdata_writepages(struct address_space *mapping,
++ struct writeback_control *wbc)
++{
++ struct gfs2_inode *ip = GFS2_I(mapping->host);
++ struct gfs2_sbd *sdp = GFS2_SB(mapping->host);
++ int ret;
++
++ ret = gfs2_write_cache_jdata(mapping, wbc);
++ if (ret == 0 && wbc->sync_mode == WB_SYNC_ALL) {
++ gfs2_log_flush(sdp, ip->i_gl);
++ ret = gfs2_write_cache_jdata(mapping, wbc);
++ }
++ return ret;
+ }
+
+ /**
+@@ -231,62 +468,107 @@ static int stuffed_readpage(struct gfs2_inode *ip, struct page *page)
+
+
+ /**
+- * gfs2_readpage - readpage with locking
+- * @file: The file to read a page for. N.B. This may be NULL if we are
+- * reading an internal file.
++ * __gfs2_readpage - readpage
++ * @file: The file to read a page for
+ * @page: The page to read
+ *
+- * Returns: errno
++ * This is the core of gfs2's readpage. Its used by the internal file
++ * reading code as in that case we already hold the glock. Also its
++ * called by gfs2_readpage() once the required lock has been granted.
++ *
*/
--int ext4_ext_correct_indexes(handle_t *handle, struct inode *inode,
-+static int ext4_ext_correct_indexes(handle_t *handle, struct inode *inode,
- struct ext4_ext_path *path)
- {
- struct ext4_extent_header *eh;
-@@ -1171,7 +1318,7 @@ ext4_can_extents_be_merged(struct inode *inode, struct ext4_extent *ex1,
- if (ext1_ee_len + ext2_ee_len > max_len)
- return 0;
- #ifdef AGGRESSIVE_TEST
-- if (le16_to_cpu(ex1->ee_len) >= 4)
-+ if (ext1_ee_len >= 4)
- return 0;
- #endif
-@@ -1239,7 +1386,7 @@ unsigned int ext4_ext_check_overlap(struct inode *inode,
- struct ext4_extent *newext,
- struct ext4_ext_path *path)
+-static int gfs2_readpage(struct file *file, struct page *page)
++static int __gfs2_readpage(void *file, struct page *page)
{
-- unsigned long b1, b2;
-+ ext4_lblk_t b1, b2;
- unsigned int depth, len1;
- unsigned int ret = 0;
+ struct gfs2_inode *ip = GFS2_I(page->mapping->host);
+ struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host);
+- struct gfs2_file *gf = NULL;
+- struct gfs2_holder gh;
+ int error;
+- int do_unlock = 0;
+-
+- if (likely(file != &gfs2_internal_file_sentinel)) {
+- if (file) {
+- gf = file->private_data;
+- if (test_bit(GFF_EXLOCK, &gf->f_flags))
+- /* gfs2_sharewrite_fault has grabbed the ip->i_gl already */
+- goto skip_lock;
+- }
+- gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME|LM_FLAG_TRY_1CB, &gh);
+- do_unlock = 1;
+- error = gfs2_glock_nq_atime(&gh);
+- if (unlikely(error))
+- goto out_unlock;
+- }
-@@ -1260,7 +1407,7 @@ unsigned int ext4_ext_check_overlap(struct inode *inode,
- goto out;
- }
+-skip_lock:
+ if (gfs2_is_stuffed(ip)) {
+ error = stuffed_readpage(ip, page);
+ unlock_page(page);
+- } else
+- error = mpage_readpage(page, gfs2_get_block);
++ } else {
++ error = mpage_readpage(page, gfs2_block_map);
++ }
-- /* check for wrap through zero */
-+ /* check for wrap through zero on extent logical start block*/
- if (b1 + len1 < b1) {
- len1 = EXT_MAX_BLOCK - b1;
- newext->ee_len = cpu_to_le16(len1);
-@@ -1290,7 +1437,8 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
- struct ext4_extent *ex, *fex;
- struct ext4_extent *nearex; /* nearest extent */
- struct ext4_ext_path *npath = NULL;
-- int depth, len, err, next;
-+ int depth, len, err;
-+ ext4_lblk_t next;
- unsigned uninitialized = 0;
+ if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
+- error = -EIO;
++ return -EIO;
++
++ return error;
++}
++
++/**
++ * gfs2_readpage - read a page of a file
++ * @file: The file to read
++ * @page: The page of the file
++ *
++ * This deals with the locking required. We use a trylock in order to
++ * avoid the page lock / glock ordering problems returning AOP_TRUNCATED_PAGE
++ * in the event that we are unable to get the lock.
++ */
++
++static int gfs2_readpage(struct file *file, struct page *page)
++{
++ struct gfs2_inode *ip = GFS2_I(page->mapping->host);
++ struct gfs2_holder gh;
++ int error;
- BUG_ON(ext4_ext_get_actual_len(newext) == 0);
-@@ -1435,114 +1583,8 @@ cleanup:
- return err;
+- if (do_unlock) {
+- gfs2_glock_dq_m(1, &gh);
+- gfs2_holder_uninit(&gh);
++ gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME|LM_FLAG_TRY_1CB, &gh);
++ error = gfs2_glock_nq_atime(&gh);
++ if (unlikely(error)) {
++ unlock_page(page);
++ goto out;
+ }
++ error = __gfs2_readpage(file, page);
++ gfs2_glock_dq(&gh);
+ out:
+- return error;
+-out_unlock:
+- unlock_page(page);
++ gfs2_holder_uninit(&gh);
+ if (error == GLR_TRYFAILED) {
+- error = AOP_TRUNCATED_PAGE;
+ yield();
++ return AOP_TRUNCATED_PAGE;
+ }
+- if (do_unlock)
+- gfs2_holder_uninit(&gh);
+- goto out;
++ return error;
++}
++
++/**
++ * gfs2_internal_read - read an internal file
++ * @ip: The gfs2 inode
++ * @ra_state: The readahead state (or NULL for no readahead)
++ * @buf: The buffer to fill
++ * @pos: The file position
++ * @size: The amount to read
++ *
++ */
++
++int gfs2_internal_read(struct gfs2_inode *ip, struct file_ra_state *ra_state,
++ char *buf, loff_t *pos, unsigned size)
++{
++ struct address_space *mapping = ip->i_inode.i_mapping;
++ unsigned long index = *pos / PAGE_CACHE_SIZE;
++ unsigned offset = *pos & (PAGE_CACHE_SIZE - 1);
++ unsigned copied = 0;
++ unsigned amt;
++ struct page *page;
++ void *p;
++
++ do {
++ amt = size - copied;
++ if (offset + size > PAGE_CACHE_SIZE)
++ amt = PAGE_CACHE_SIZE - offset;
++ page = read_cache_page(mapping, index, __gfs2_readpage, NULL);
++ if (IS_ERR(page))
++ return PTR_ERR(page);
++ p = kmap_atomic(page, KM_USER0);
++ memcpy(buf + copied, p + offset, amt);
++ kunmap_atomic(p, KM_USER0);
++ mark_page_accessed(page);
++ page_cache_release(page);
++ copied += amt;
++ index++;
++ offset = 0;
++ } while(copied < size);
++ (*pos) += size;
++ return size;
}
--int ext4_ext_walk_space(struct inode *inode, unsigned long block,
-- unsigned long num, ext_prepare_callback func,
-- void *cbdata)
--{
-- struct ext4_ext_path *path = NULL;
-- struct ext4_ext_cache cbex;
-- struct ext4_extent *ex;
-- unsigned long next, start = 0, end = 0;
-- unsigned long last = block + num;
-- int depth, exists, err = 0;
--
-- BUG_ON(func == NULL);
-- BUG_ON(inode == NULL);
--
-- while (block < last && block != EXT_MAX_BLOCK) {
-- num = last - block;
-- /* find extent for this block */
-- path = ext4_ext_find_extent(inode, block, path);
-- if (IS_ERR(path)) {
-- err = PTR_ERR(path);
-- path = NULL;
-- break;
-- }
--
-- depth = ext_depth(inode);
-- BUG_ON(path[depth].p_hdr == NULL);
-- ex = path[depth].p_ext;
-- next = ext4_ext_next_allocated_block(path);
--
-- exists = 0;
-- if (!ex) {
-- /* there is no extent yet, so try to allocate
-- * all requested space */
-- start = block;
-- end = block + num;
-- } else if (le32_to_cpu(ex->ee_block) > block) {
-- /* need to allocate space before found extent */
-- start = block;
-- end = le32_to_cpu(ex->ee_block);
-- if (block + num < end)
-- end = block + num;
-- } else if (block >= le32_to_cpu(ex->ee_block)
-- + ext4_ext_get_actual_len(ex)) {
-- /* need to allocate space after found extent */
-- start = block;
-- end = block + num;
-- if (end >= next)
-- end = next;
-- } else if (block >= le32_to_cpu(ex->ee_block)) {
-- /*
-- * some part of requested space is covered
-- * by found extent
-- */
-- start = block;
-- end = le32_to_cpu(ex->ee_block)
-- + ext4_ext_get_actual_len(ex);
-- if (block + num < end)
-- end = block + num;
-- exists = 1;
-- } else {
-- BUG();
-- }
-- BUG_ON(end <= start);
--
-- if (!exists) {
-- cbex.ec_block = start;
-- cbex.ec_len = end - start;
-- cbex.ec_start = 0;
-- cbex.ec_type = EXT4_EXT_CACHE_GAP;
-- } else {
-- cbex.ec_block = le32_to_cpu(ex->ee_block);
-- cbex.ec_len = ext4_ext_get_actual_len(ex);
-- cbex.ec_start = ext_pblock(ex);
-- cbex.ec_type = EXT4_EXT_CACHE_EXTENT;
-- }
--
-- BUG_ON(cbex.ec_len == 0);
-- err = func(inode, path, &cbex, cbdata);
-- ext4_ext_drop_refs(path);
--
-- if (err < 0)
-- break;
-- if (err == EXT_REPEAT)
-- continue;
-- else if (err == EXT_BREAK) {
-- err = 0;
-- break;
-- }
--
-- if (ext_depth(inode) != depth) {
-- /* depth was changed. we have to realloc path */
-- kfree(path);
-- path = NULL;
+ /**
+@@ -300,10 +582,9 @@ out_unlock:
+ * Any I/O we ignore at this time will be done via readpage later.
+ * 2. We don't handle stuffed files here we let readpage do the honours.
+ * 3. mpage_readpages() does most of the heavy lifting in the common case.
+- * 4. gfs2_get_block() is relied upon to set BH_Boundary in the right places.
+- * 5. We use LM_FLAG_TRY_1CB here, effectively we then have lock-ahead as
+- * well as read-ahead.
++ * 4. gfs2_block_map() is relied upon to set BH_Boundary in the right places.
+ */
++
+ static int gfs2_readpages(struct file *file, struct address_space *mapping,
+ struct list_head *pages, unsigned nr_pages)
+ {
+@@ -311,42 +592,20 @@ static int gfs2_readpages(struct file *file, struct address_space *mapping,
+ struct gfs2_inode *ip = GFS2_I(inode);
+ struct gfs2_sbd *sdp = GFS2_SB(inode);
+ struct gfs2_holder gh;
+- int ret = 0;
+- int do_unlock = 0;
++ int ret;
+
+- if (likely(file != &gfs2_internal_file_sentinel)) {
+- if (file) {
+- struct gfs2_file *gf = file->private_data;
+- if (test_bit(GFF_EXLOCK, &gf->f_flags))
+- goto skip_lock;
- }
--
-- block = cbex.ec_block + cbex.ec_len;
+- gfs2_holder_init(ip->i_gl, LM_ST_SHARED,
+- LM_FLAG_TRY_1CB|GL_ATIME, &gh);
+- do_unlock = 1;
+- ret = gfs2_glock_nq_atime(&gh);
+- if (ret == GLR_TRYFAILED)
+- goto out_noerror;
+- if (unlikely(ret))
+- goto out_unlock;
- }
+-skip_lock:
++ gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME, &gh);
++ ret = gfs2_glock_nq_atime(&gh);
++ if (unlikely(ret))
++ goto out_uninit;
+ if (!gfs2_is_stuffed(ip))
+- ret = mpage_readpages(mapping, pages, nr_pages, gfs2_get_block);
-
-- if (path) {
-- ext4_ext_drop_refs(path);
-- kfree(path);
+- if (do_unlock) {
+- gfs2_glock_dq_m(1, &gh);
+- gfs2_holder_uninit(&gh);
- }
+-out:
++ ret = mpage_readpages(mapping, pages, nr_pages, gfs2_block_map);
++ gfs2_glock_dq(&gh);
++out_uninit:
++ gfs2_holder_uninit(&gh);
+ if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
+ ret = -EIO;
+ return ret;
+-out_noerror:
+- ret = 0;
+-out_unlock:
+- if (do_unlock)
+- gfs2_holder_uninit(&gh);
+- goto out;
+ }
+
+ /**
+@@ -382,20 +641,11 @@ static int gfs2_write_begin(struct file *file, struct address_space *mapping,
+ if (unlikely(error))
+ goto out_uninit;
+
+- error = -ENOMEM;
+- page = __grab_cache_page(mapping, index);
+- *pagep = page;
+- if (!page)
+- goto out_unlock;
-
-- return err;
--}
+ gfs2_write_calc_reserv(ip, len, &data_blocks, &ind_blocks);
-
- static void
--ext4_ext_put_in_cache(struct inode *inode, __u32 block,
-+ext4_ext_put_in_cache(struct inode *inode, ext4_lblk_t block,
- __u32 len, ext4_fsblk_t start, int type)
- {
- struct ext4_ext_cache *cex;
-@@ -1561,10 +1603,11 @@ ext4_ext_put_in_cache(struct inode *inode, __u32 block,
- */
- static void
- ext4_ext_put_gap_in_cache(struct inode *inode, struct ext4_ext_path *path,
-- unsigned long block)
-+ ext4_lblk_t block)
- {
- int depth = ext_depth(inode);
-- unsigned long lblock, len;
-+ unsigned long len;
-+ ext4_lblk_t lblock;
- struct ext4_extent *ex;
+ error = gfs2_write_alloc_required(ip, pos, len, &alloc_required);
+ if (error)
+- goto out_putpage;
+-
++ goto out_unlock;
- ex = path[depth].p_ext;
-@@ -1576,32 +1619,34 @@ ext4_ext_put_gap_in_cache(struct inode *inode, struct ext4_ext_path *path,
- } else if (block < le32_to_cpu(ex->ee_block)) {
- lblock = block;
- len = le32_to_cpu(ex->ee_block) - block;
-- ext_debug("cache gap(before): %lu [%lu:%lu]",
-- (unsigned long) block,
-- (unsigned long) le32_to_cpu(ex->ee_block),
-- (unsigned long) ext4_ext_get_actual_len(ex));
-+ ext_debug("cache gap(before): %u [%u:%u]",
-+ block,
-+ le32_to_cpu(ex->ee_block),
-+ ext4_ext_get_actual_len(ex));
- } else if (block >= le32_to_cpu(ex->ee_block)
- + ext4_ext_get_actual_len(ex)) {
-+ ext4_lblk_t next;
- lblock = le32_to_cpu(ex->ee_block)
- + ext4_ext_get_actual_len(ex);
-- len = ext4_ext_next_allocated_block(path);
-- ext_debug("cache gap(after): [%lu:%lu] %lu",
-- (unsigned long) le32_to_cpu(ex->ee_block),
-- (unsigned long) ext4_ext_get_actual_len(ex),
-- (unsigned long) block);
-- BUG_ON(len == lblock);
-- len = len - lblock;
+- ip->i_alloc.al_requested = 0;
+ if (alloc_required) {
+ al = gfs2_alloc_get(ip);
+
+@@ -424,40 +674,47 @@ static int gfs2_write_begin(struct file *file, struct address_space *mapping,
+ if (error)
+ goto out_trans_fail;
+
++ error = -ENOMEM;
++ page = __grab_cache_page(mapping, index);
++ *pagep = page;
++ if (unlikely(!page))
++ goto out_endtrans;
+
-+ next = ext4_ext_next_allocated_block(path);
-+ ext_debug("cache gap(after): [%u:%u] %u",
-+ le32_to_cpu(ex->ee_block),
-+ ext4_ext_get_actual_len(ex),
-+ block);
-+ BUG_ON(next == lblock);
-+ len = next - lblock;
- } else {
- lblock = len = 0;
- BUG();
+ if (gfs2_is_stuffed(ip)) {
++ error = 0;
+ if (pos + len > sdp->sd_sb.sb_bsize - sizeof(struct gfs2_dinode)) {
+ error = gfs2_unstuff_dinode(ip, page);
+ if (error == 0)
+ goto prepare_write;
+- } else if (!PageUptodate(page))
++ } else if (!PageUptodate(page)) {
+ error = stuffed_readpage(ip, page);
++ }
+ goto out;
}
-- ext_debug(" -> %lu:%lu\n", (unsigned long) lblock, len);
-+ ext_debug(" -> %u:%lu\n", lblock, len);
- ext4_ext_put_in_cache(inode, lblock, len, 0, EXT4_EXT_CACHE_GAP);
+ prepare_write:
+- error = block_prepare_write(page, from, to, gfs2_get_block);
+-
++ error = block_prepare_write(page, from, to, gfs2_block_map);
+ out:
+- if (error) {
+- gfs2_trans_end(sdp);
++ if (error == 0)
++ return 0;
++
++ page_cache_release(page);
++ if (pos + len > ip->i_inode.i_size)
++ vmtruncate(&ip->i_inode, ip->i_inode.i_size);
++out_endtrans:
++ gfs2_trans_end(sdp);
+ out_trans_fail:
+- if (alloc_required) {
+- gfs2_inplace_release(ip);
++ if (alloc_required) {
++ gfs2_inplace_release(ip);
+ out_qunlock:
+- gfs2_quota_unlock(ip);
++ gfs2_quota_unlock(ip);
+ out_alloc_put:
+- gfs2_alloc_put(ip);
+- }
+-out_putpage:
+- page_cache_release(page);
+- if (pos + len > ip->i_inode.i_size)
+- vmtruncate(&ip->i_inode, ip->i_inode.i_size);
++ gfs2_alloc_put(ip);
++ }
+ out_unlock:
+- gfs2_glock_dq_m(1, &ip->i_gh);
++ gfs2_glock_dq(&ip->i_gh);
+ out_uninit:
+- gfs2_holder_uninit(&ip->i_gh);
+- }
+-
++ gfs2_holder_uninit(&ip->i_gh);
+ return error;
}
- static int
--ext4_ext_in_cache(struct inode *inode, unsigned long block,
-+ext4_ext_in_cache(struct inode *inode, ext4_lblk_t block,
- struct ext4_extent *ex)
- {
- struct ext4_ext_cache *cex;
-@@ -1618,11 +1663,9 @@ ext4_ext_in_cache(struct inode *inode, unsigned long block,
- ex->ee_block = cpu_to_le32(cex->ec_block);
- ext4_ext_store_pblock(ex, cex->ec_start);
- ex->ee_len = cpu_to_le16(cex->ec_len);
-- ext_debug("%lu cached by %lu:%lu:%llu\n",
-- (unsigned long) block,
-- (unsigned long) cex->ec_block,
-- (unsigned long) cex->ec_len,
-- cex->ec_start);
-+ ext_debug("%u cached by %u:%u:%llu\n",
-+ block,
-+ cex->ec_block, cex->ec_len, cex->ec_start);
- return cex->ec_type;
+@@ -565,7 +822,7 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
+ struct gfs2_inode *ip = GFS2_I(inode);
+ struct gfs2_sbd *sdp = GFS2_SB(inode);
+ struct buffer_head *dibh;
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_dinode *di;
+ unsigned int from = pos & (PAGE_CACHE_SIZE - 1);
+ unsigned int to = from + len;
+@@ -585,19 +842,16 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
+ if (gfs2_is_stuffed(ip))
+ return gfs2_stuffed_write_end(inode, dibh, pos, len, copied, page);
+
+- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
++ if (!gfs2_is_writeback(ip))
+ gfs2_page_add_databufs(ip, page, from, to);
+
+ ret = generic_write_end(file, mapping, pos, len, copied, page, fsdata);
+
+- if (likely(ret >= 0)) {
+- copied = ret;
+- if ((pos + copied) > inode->i_size) {
+- di = (struct gfs2_dinode *)dibh->b_data;
+- ip->i_di.di_size = inode->i_size;
+- di->di_size = cpu_to_be64(inode->i_size);
+- mark_inode_dirty(inode);
+- }
++ if (likely(ret >= 0) && (inode->i_size > ip->i_di.di_size)) {
++ di = (struct gfs2_dinode *)dibh->b_data;
++ ip->i_di.di_size = inode->i_size;
++ di->di_size = cpu_to_be64(inode->i_size);
++ mark_inode_dirty(inode);
}
-@@ -1636,7 +1679,7 @@ ext4_ext_in_cache(struct inode *inode, unsigned long block,
- * It's used in truncate case only, thus all requests are for
- * last index in the block only.
- */
--int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
-+static int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
- struct ext4_ext_path *path)
+ if (inode == sdp->sd_rindex)
+@@ -606,7 +860,7 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
+ brelse(dibh);
+ gfs2_trans_end(sdp);
+ failed:
+- if (al->al_requested) {
++ if (al) {
+ gfs2_inplace_release(ip);
+ gfs2_quota_unlock(ip);
+ gfs2_alloc_put(ip);
+@@ -625,11 +879,7 @@ failed:
+
+ static int gfs2_set_page_dirty(struct page *page)
{
- struct buffer_head *bh;
-@@ -1657,7 +1700,7 @@ int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
- ext_debug("index is empty, remove it, free block %llu\n", leaf);
- bh = sb_find_get_block(inode->i_sb, leaf);
- ext4_forget(handle, 1, inode, bh, leaf);
-- ext4_free_blocks(handle, inode, leaf, 1);
-+ ext4_free_blocks(handle, inode, leaf, 1, 1);
- return err;
+- struct gfs2_inode *ip = GFS2_I(page->mapping->host);
+- struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host);
+-
+- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
+- SetPageChecked(page);
++ SetPageChecked(page);
+ return __set_page_dirty_buffers(page);
}
-@@ -1666,7 +1709,7 @@ int ext4_ext_rm_idx(handle_t *handle, struct inode *inode,
- * This routine returns max. credits that the extent tree can consume.
- * It should be OK for low-performance paths like ->writepage()
- * To allow many writing processes to fit into a single transaction,
-- * the caller should calculate credits under truncate_mutex and
-+ * the caller should calculate credits under i_data_sem and
- * pass the actual path.
- */
- int ext4_ext_calc_credits_for_insert(struct inode *inode,
-@@ -1714,12 +1757,14 @@ int ext4_ext_calc_credits_for_insert(struct inode *inode,
+@@ -653,7 +903,7 @@ static sector_t gfs2_bmap(struct address_space *mapping, sector_t lblock)
+ return 0;
- static int ext4_remove_blocks(handle_t *handle, struct inode *inode,
- struct ext4_extent *ex,
-- unsigned long from, unsigned long to)
-+ ext4_lblk_t from, ext4_lblk_t to)
+ if (!gfs2_is_stuffed(ip))
+- dblock = generic_block_bmap(mapping, lblock, gfs2_get_block);
++ dblock = generic_block_bmap(mapping, lblock, gfs2_block_map);
+
+ gfs2_glock_dq_uninit(&i_gh);
+
+@@ -719,13 +969,9 @@ static int gfs2_ok_for_dio(struct gfs2_inode *ip, int rw, loff_t offset)
{
- struct buffer_head *bh;
- unsigned short ee_len = ext4_ext_get_actual_len(ex);
-- int i;
-+ int i, metadata = 0;
+ /*
+ * Should we return an error here? I can't see that O_DIRECT for
+- * a journaled file makes any sense. For now we'll silently fall
+- * back to buffered I/O, likewise we do the same for stuffed
+- * files since they are (a) small and (b) unaligned.
++ * a stuffed file makes any sense. For now we'll silently fall
++ * back to buffered I/O
+ */
+- if (gfs2_is_jdata(ip))
+- return 0;
+-
+ if (gfs2_is_stuffed(ip))
+ return 0;
-+ if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
-+ metadata = 1;
- #ifdef EXTENTS_STATS
- {
- struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
-@@ -1738,42 +1783,45 @@ static int ext4_remove_blocks(handle_t *handle, struct inode *inode,
- if (from >= le32_to_cpu(ex->ee_block)
- && to == le32_to_cpu(ex->ee_block) + ee_len - 1) {
- /* tail removal */
-- unsigned long num;
-+ ext4_lblk_t num;
- ext4_fsblk_t start;
-+
- num = le32_to_cpu(ex->ee_block) + ee_len - from;
- start = ext_pblock(ex) + ee_len - num;
-- ext_debug("free last %lu blocks starting %llu\n", num, start);
-+ ext_debug("free last %u blocks starting %llu\n", num, start);
- for (i = 0; i < num; i++) {
- bh = sb_find_get_block(inode->i_sb, start + i);
- ext4_forget(handle, 0, inode, bh, start + i);
- }
-- ext4_free_blocks(handle, inode, start, num);
-+ ext4_free_blocks(handle, inode, start, num, metadata);
- } else if (from == le32_to_cpu(ex->ee_block)
- && to <= le32_to_cpu(ex->ee_block) + ee_len - 1) {
-- printk("strange request: removal %lu-%lu from %u:%u\n",
-+ printk(KERN_INFO "strange request: removal %u-%u from %u:%u\n",
- from, to, le32_to_cpu(ex->ee_block), ee_len);
- } else {
-- printk("strange request: removal(2) %lu-%lu from %u:%u\n",
-- from, to, le32_to_cpu(ex->ee_block), ee_len);
-+ printk(KERN_INFO "strange request: removal(2) "
-+ "%u-%u from %u:%u\n",
-+ from, to, le32_to_cpu(ex->ee_block), ee_len);
- }
+@@ -836,9 +1082,23 @@ cannot_release:
return 0;
}
- static int
- ext4_ext_rm_leaf(handle_t *handle, struct inode *inode,
-- struct ext4_ext_path *path, unsigned long start)
-+ struct ext4_ext_path *path, ext4_lblk_t start)
- {
- int err = 0, correct_index = 0;
- int depth = ext_depth(inode), credits;
- struct ext4_extent_header *eh;
-- unsigned a, b, block, num;
-- unsigned long ex_ee_block;
-+ ext4_lblk_t a, b, block;
-+ unsigned num;
-+ ext4_lblk_t ex_ee_block;
- unsigned short ex_ee_len;
- unsigned uninitialized = 0;
- struct ext4_extent *ex;
+-const struct address_space_operations gfs2_file_aops = {
+- .writepage = gfs2_writepage,
+- .writepages = gfs2_writepages,
++static const struct address_space_operations gfs2_writeback_aops = {
++ .writepage = gfs2_writeback_writepage,
++ .writepages = gfs2_writeback_writepages,
++ .readpage = gfs2_readpage,
++ .readpages = gfs2_readpages,
++ .sync_page = block_sync_page,
++ .write_begin = gfs2_write_begin,
++ .write_end = gfs2_write_end,
++ .bmap = gfs2_bmap,
++ .invalidatepage = gfs2_invalidatepage,
++ .releasepage = gfs2_releasepage,
++ .direct_IO = gfs2_direct_IO,
++ .migratepage = buffer_migrate_page,
++};
++
++static const struct address_space_operations gfs2_ordered_aops = {
++ .writepage = gfs2_ordered_writepage,
+ .readpage = gfs2_readpage,
+ .readpages = gfs2_readpages,
+ .sync_page = block_sync_page,
+@@ -849,5 +1109,34 @@ const struct address_space_operations gfs2_file_aops = {
+ .invalidatepage = gfs2_invalidatepage,
+ .releasepage = gfs2_releasepage,
+ .direct_IO = gfs2_direct_IO,
++ .migratepage = buffer_migrate_page,
+ };
- /* the header must be checked already in ext4_ext_remove_space() */
-- ext_debug("truncate since %lu in leaf\n", start);
-+ ext_debug("truncate since %u in leaf\n", start);
- if (!path[depth].p_hdr)
- path[depth].p_hdr = ext_block_hdr(path[depth].p_bh);
- eh = path[depth].p_hdr;
-@@ -1904,7 +1952,7 @@ ext4_ext_more_to_rm(struct ext4_ext_path *path)
- return 1;
- }
++static const struct address_space_operations gfs2_jdata_aops = {
++ .writepage = gfs2_jdata_writepage,
++ .writepages = gfs2_jdata_writepages,
++ .readpage = gfs2_readpage,
++ .readpages = gfs2_readpages,
++ .sync_page = block_sync_page,
++ .write_begin = gfs2_write_begin,
++ .write_end = gfs2_write_end,
++ .set_page_dirty = gfs2_set_page_dirty,
++ .bmap = gfs2_bmap,
++ .invalidatepage = gfs2_invalidatepage,
++ .releasepage = gfs2_releasepage,
++};
++
++void gfs2_set_aops(struct inode *inode)
++{
++ struct gfs2_inode *ip = GFS2_I(inode);
++
++ if (gfs2_is_writeback(ip))
++ inode->i_mapping->a_ops = &gfs2_writeback_aops;
++ else if (gfs2_is_ordered(ip))
++ inode->i_mapping->a_ops = &gfs2_ordered_aops;
++ else if (gfs2_is_jdata(ip))
++ inode->i_mapping->a_ops = &gfs2_jdata_aops;
++ else
++ BUG();
++}
++
+diff --git a/fs/gfs2/ops_address.h b/fs/gfs2/ops_address.h
+index fa1b5b3..5da2128 100644
+--- a/fs/gfs2/ops_address.h
++++ b/fs/gfs2/ops_address.h
+@@ -14,9 +14,10 @@
+ #include <linux/buffer_head.h>
+ #include <linux/mm.h>
--int ext4_ext_remove_space(struct inode *inode, unsigned long start)
-+static int ext4_ext_remove_space(struct inode *inode, ext4_lblk_t start)
- {
- struct super_block *sb = inode->i_sb;
- int depth = ext_depth(inode);
-@@ -1912,7 +1960,7 @@ int ext4_ext_remove_space(struct inode *inode, unsigned long start)
- handle_t *handle;
- int i = 0, err = 0;
+-extern const struct address_space_operations gfs2_file_aops;
+-extern int gfs2_get_block(struct inode *inode, sector_t lblock,
+- struct buffer_head *bh_result, int create);
+ extern int gfs2_releasepage(struct page *page, gfp_t gfp_mask);
++extern int gfs2_internal_read(struct gfs2_inode *ip,
++ struct file_ra_state *ra_state,
++ char *buf, loff_t *pos, unsigned size);
++extern void gfs2_set_aops(struct inode *inode);
-- ext_debug("truncate since %lu\n", start);
-+ ext_debug("truncate since %u\n", start);
+ #endif /* __OPS_ADDRESS_DOT_H__ */
+diff --git a/fs/gfs2/ops_file.c b/fs/gfs2/ops_file.c
+index bb11fd6..f4842f2 100644
+--- a/fs/gfs2/ops_file.c
++++ b/fs/gfs2/ops_file.c
+@@ -33,57 +33,12 @@
+ #include "lm.h"
+ #include "log.h"
+ #include "meta_io.h"
+-#include "ops_file.h"
+-#include "ops_vm.h"
+ #include "quota.h"
+ #include "rgrp.h"
+ #include "trans.h"
+ #include "util.h"
+ #include "eaops.h"
+-
+-/*
+- * Most fields left uninitialised to catch anybody who tries to
+- * use them. f_flags set to prevent file_accessed() from touching
+- * any other part of this. Its use is purely as a flag so that we
+- * know (in readpage()) whether or not to do locking.
+- */
+-struct file gfs2_internal_file_sentinel = {
+- .f_flags = O_NOATIME|O_RDONLY,
+-};
+-
+-static int gfs2_read_actor(read_descriptor_t *desc, struct page *page,
+- unsigned long offset, unsigned long size)
+-{
+- char *kaddr;
+- unsigned long count = desc->count;
+-
+- if (size > count)
+- size = count;
+-
+- kaddr = kmap(page);
+- memcpy(desc->arg.data, kaddr + offset, size);
+- kunmap(page);
+-
+- desc->count = count - size;
+- desc->written += size;
+- desc->arg.buf += size;
+- return size;
+-}
+-
+-int gfs2_internal_read(struct gfs2_inode *ip, struct file_ra_state *ra_state,
+- char *buf, loff_t *pos, unsigned size)
+-{
+- struct inode *inode = &ip->i_inode;
+- read_descriptor_t desc;
+- desc.written = 0;
+- desc.arg.data = buf;
+- desc.count = size;
+- desc.error = 0;
+- do_generic_mapping_read(inode->i_mapping, ra_state,
+- &gfs2_internal_file_sentinel, pos, &desc,
+- gfs2_read_actor);
+- return desc.written ? desc.written : desc.error;
+-}
++#include "ops_address.h"
- /* probably first extent we're gonna free will be last in block */
- handle = ext4_journal_start(inode, depth + 1);
-@@ -2094,17 +2142,19 @@ void ext4_ext_release(struct super_block *sb)
- * b> Splits in two extents: Write is happening at either end of the extent
- * c> Splits in three extents: Someone is writing in the middle of the extent
- */
--int ext4_ext_convert_to_initialized(handle_t *handle, struct inode *inode,
-- struct ext4_ext_path *path,
-- ext4_fsblk_t iblock,
-- unsigned long max_blocks)
-+static int ext4_ext_convert_to_initialized(handle_t *handle,
-+ struct inode *inode,
-+ struct ext4_ext_path *path,
-+ ext4_lblk_t iblock,
-+ unsigned long max_blocks)
- {
- struct ext4_extent *ex, newex;
- struct ext4_extent *ex1 = NULL;
- struct ext4_extent *ex2 = NULL;
- struct ext4_extent *ex3 = NULL;
- struct ext4_extent_header *eh;
-- unsigned int allocated, ee_block, ee_len, depth;
-+ ext4_lblk_t ee_block;
-+ unsigned int allocated, ee_len, depth;
- ext4_fsblk_t newblock;
- int err = 0;
- int ret = 0;
-@@ -2225,8 +2275,13 @@ out:
- return err ? err : allocated;
+ /**
+ * gfs2_llseek - seek to a location in a file
+@@ -214,7 +169,7 @@ static int gfs2_get_flags(struct file *filp, u32 __user *ptr)
+ if (put_user(fsflags, ptr))
+ error = -EFAULT;
+
+- gfs2_glock_dq_m(1, &gh);
++ gfs2_glock_dq(&gh);
+ gfs2_holder_uninit(&gh);
+ return error;
+ }
+@@ -291,7 +246,16 @@ static int do_gfs2_set_flags(struct file *filp, u32 reqflags, u32 mask)
+ if (error)
+ goto out;
+ }
+-
++ if ((flags ^ new_flags) & GFS2_DIF_JDATA) {
++ if (flags & GFS2_DIF_JDATA)
++ gfs2_log_flush(sdp, ip->i_gl);
++ error = filemap_fdatawrite(inode->i_mapping);
++ if (error)
++ goto out;
++ error = filemap_fdatawait(inode->i_mapping);
++ if (error)
++ goto out;
++ }
+ error = gfs2_trans_begin(sdp, RES_DINODE, 0);
+ if (error)
+ goto out;
+@@ -303,6 +267,7 @@ static int do_gfs2_set_flags(struct file *filp, u32 reqflags, u32 mask)
+ gfs2_dinode_out(ip, bh->b_data);
+ brelse(bh);
+ gfs2_set_inode_flags(inode);
++ gfs2_set_aops(inode);
+ out_trans_end:
+ gfs2_trans_end(sdp);
+ out:
+@@ -338,6 +303,128 @@ static long gfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ return -ENOTTY;
}
-+/*
-+ * Need to be called with
-+ * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
-+ * (ie, create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
-+ */
- int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
-- ext4_fsblk_t iblock,
-+ ext4_lblk_t iblock,
- unsigned long max_blocks, struct buffer_head *bh_result,
- int create, int extend_disksize)
- {
-@@ -2236,11 +2291,11 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- ext4_fsblk_t goal, newblock;
- int err = 0, depth, ret;
- unsigned long allocated = 0;
-+ struct ext4_allocation_request ar;
++/**
++ * gfs2_allocate_page_backing - Use bmap to allocate blocks
++ * @page: The (locked) page to allocate backing for
++ *
++ * We try to allocate all the blocks required for the page in
++ * one go. This might fail for various reasons, so we keep
++ * trying until all the blocks to back this page are allocated.
++ * If some of the blocks are already allocated, that's ok too.
++ */
++
++static int gfs2_allocate_page_backing(struct page *page)
++{
++ struct inode *inode = page->mapping->host;
++ struct buffer_head bh;
++ unsigned long size = PAGE_CACHE_SIZE;
++ u64 lblock = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
++
++ do {
++ bh.b_state = 0;
++ bh.b_size = size;
++ gfs2_block_map(inode, lblock, &bh, 1);
++ if (!buffer_mapped(&bh))
++ return -EIO;
++ size -= bh.b_size;
++ lblock += (bh.b_size >> inode->i_blkbits);
++ } while(size > 0);
++ return 0;
++}
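The do/while in `gfs2_allocate_page_backing()` keeps calling the block mapper until every byte of the page is backed, advancing the logical block cursor by however much each call managed to map. A self-contained sketch of that loop, using a hypothetical `fake_block_map()` in place of `gfs2_block_map()` (which reports mapped bytes through `bh.b_size`):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  4096u
#define BLOCK_SIZE 512u

/* Hypothetical stand-in for gfs2_block_map(): reports how many bytes it
 * could map starting at 'lblock', up to 'size'. Here it maps at most two
 * 1024-byte chunks short of the request, forcing the caller to loop. */
static unsigned fake_block_map(uint64_t lblock, unsigned size)
{
    (void)lblock;
    return size < 1024 ? size : 1024;
}

/* Returns the number of mapping calls needed to back one page, mirroring
 * the retry loop in gfs2_allocate_page_backing(). */
static int allocate_page_backing(uint64_t lblock)
{
    unsigned size = PAGE_SIZE;
    int calls = 0;

    do {
        unsigned mapped = fake_block_map(lblock, size);
        calls++;
        size -= mapped;
        lblock += mapped / BLOCK_SIZE; /* advance past what was just mapped */
    } while (size > 0);
    return calls;
}
```

With 1024 bytes mapped per call, a 4096-byte page takes four iterations; a mapper that covers the whole page in one go would exit after a single pass.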
++
++/**
++ * gfs2_page_mkwrite - Make a shared, mmap()ed, page writable
++ * @vma: The virtual memory area
++ * @page: The page which is about to become writable
++ *
++ * When the page becomes writable, we need to ensure that we have
++ * blocks allocated on disk to back that page.
++ */
++
++static int gfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
++{
++ struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
++ struct gfs2_inode *ip = GFS2_I(inode);
++ struct gfs2_sbd *sdp = GFS2_SB(inode);
++ unsigned long last_index;
++ u64 pos = page->index << (PAGE_CACHE_SIZE - inode->i_blkbits);
++ unsigned int data_blocks, ind_blocks, rblocks;
++ int alloc_required = 0;
++ struct gfs2_holder gh;
++ struct gfs2_alloc *al;
++ int ret;
++
++ gfs2_holder_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_ATIME, &gh);
++ ret = gfs2_glock_nq_atime(&gh);
++ if (ret)
++ goto out;
++
++ set_bit(GIF_SW_PAGED, &ip->i_flags);
++ gfs2_write_calc_reserv(ip, PAGE_CACHE_SIZE, &data_blocks, &ind_blocks);
++ ret = gfs2_write_alloc_required(ip, pos, PAGE_CACHE_SIZE, &alloc_required);
++ if (ret || !alloc_required)
++ goto out_unlock;
++ ret = -ENOMEM;
++ al = gfs2_alloc_get(ip);
++ if (al == NULL)
++ goto out_unlock;
++
++ ret = gfs2_quota_lock(ip, NO_QUOTA_CHANGE, NO_QUOTA_CHANGE);
++ if (ret)
++ goto out_alloc_put;
++ ret = gfs2_quota_check(ip, ip->i_inode.i_uid, ip->i_inode.i_gid);
++ if (ret)
++ goto out_quota_unlock;
++ al->al_requested = data_blocks + ind_blocks;
++ ret = gfs2_inplace_reserve(ip);
++ if (ret)
++ goto out_quota_unlock;
++
++ rblocks = RES_DINODE + ind_blocks;
++ if (gfs2_is_jdata(ip))
++ rblocks += data_blocks ? data_blocks : 1;
++ if (ind_blocks || data_blocks)
++ rblocks += RES_STATFS + RES_QUOTA;
++ ret = gfs2_trans_begin(sdp, rblocks, 0);
++ if (ret)
++ goto out_trans_fail;
++
++ lock_page(page);
++ ret = -EINVAL;
++ last_index = ip->i_inode.i_size >> PAGE_CACHE_SHIFT;
++ if (page->index > last_index)
++ goto out_unlock_page;
++ ret = 0;
++ if (!PageUptodate(page) || page->mapping != ip->i_inode.i_mapping)
++ goto out_unlock_page;
++ if (gfs2_is_stuffed(ip)) {
++ ret = gfs2_unstuff_dinode(ip, page);
++ if (ret)
++ goto out_unlock_page;
++ }
++ ret = gfs2_allocate_page_backing(page);
++
++out_unlock_page:
++ unlock_page(page);
++ gfs2_trans_end(sdp);
++out_trans_fail:
++ gfs2_inplace_release(ip);
++out_quota_unlock:
++ gfs2_quota_unlock(ip);
++out_alloc_put:
++ gfs2_alloc_put(ip);
++out_unlock:
++ gfs2_glock_dq(&gh);
++out:
++ gfs2_holder_uninit(&gh);
++ return ret;
++}
++
++static struct vm_operations_struct gfs2_vm_ops = {
++ .fault = filemap_fault,
++ .page_mkwrite = gfs2_page_mkwrite,
++};
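The reservation math in `gfs2_page_mkwrite()` above sizes the transaction: a dinode block plus indirect blocks always, the data blocks only in jdata mode (minimum one), and statfs/quota blocks whenever anything is actually allocated. A sketch of that calculation, with illustrative values for the `RES_*` constants (the real ones live in the gfs2 headers):

```c
#include <assert.h>

/* Illustrative reservation constants; assumed, not the real gfs2 values. */
#define RES_DINODE 1u
#define RES_STATFS 1u
#define RES_QUOTA  2u

/* Transaction block reservation for making one mmap()ed page writable,
 * mirroring the rblocks computation in gfs2_page_mkwrite(). */
static unsigned mkwrite_rblocks(unsigned data_blocks, unsigned ind_blocks,
                                int is_jdata)
{
    unsigned rblocks = RES_DINODE + ind_blocks;

    if (is_jdata)
        rblocks += data_blocks ? data_blocks : 1;
    if (ind_blocks || data_blocks)
        rblocks += RES_STATFS + RES_QUOTA;
    return rblocks;
}
```

In ordered or writeback mode the data blocks are not journaled, so only jdata inodes pay for them in the reservation.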
++
+
+ /**
+ * gfs2_mmap -
+@@ -360,14 +447,7 @@ static int gfs2_mmap(struct file *file, struct vm_area_struct *vma)
+ return error;
+ }
- __clear_bit(BH_New, &bh_result->b_state);
-- ext_debug("blocks %d/%lu requested for inode %u\n", (int) iblock,
-- max_blocks, (unsigned) inode->i_ino);
-- mutex_lock(&EXT4_I(inode)->truncate_mutex);
-+ ext_debug("blocks %u/%lu requested for inode %u\n",
-+ iblock, max_blocks, inode->i_ino);
+- /* This is VM_MAYWRITE instead of VM_WRITE because a call
+- to mprotect() can turn on VM_WRITE later. */
+-
+- if ((vma->vm_flags & (VM_MAYSHARE | VM_MAYWRITE)) ==
+- (VM_MAYSHARE | VM_MAYWRITE))
+- vma->vm_ops = &gfs2_vm_ops_sharewrite;
+- else
+- vma->vm_ops = &gfs2_vm_ops_private;
++ vma->vm_ops = &gfs2_vm_ops;
- /* check in cache */
- goal = ext4_ext_in_cache(inode, iblock, &newex);
-@@ -2260,7 +2315,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- - le32_to_cpu(newex.ee_block)
- + ext_pblock(&newex);
- /* number of remaining blocks in the extent */
-- allocated = le16_to_cpu(newex.ee_len) -
-+ allocated = ext4_ext_get_actual_len(&newex) -
- (iblock - le32_to_cpu(newex.ee_block));
- goto out;
- } else {
-@@ -2288,7 +2343,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ gfs2_glock_dq_uninit(&i_gh);
- ex = path[depth].p_ext;
- if (ex) {
-- unsigned long ee_block = le32_to_cpu(ex->ee_block);
-+ ext4_lblk_t ee_block = le32_to_cpu(ex->ee_block);
- ext4_fsblk_t ee_start = ext_pblock(ex);
- unsigned short ee_len;
+@@ -538,15 +618,6 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
+ if (__mandatory_lock(&ip->i_inode))
+ return -ENOLCK;
-@@ -2302,7 +2357,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- newblock = iblock - ee_block + ee_start;
- /* number of remaining blocks in the extent */
- allocated = ee_len - (iblock - ee_block);
-- ext_debug("%d fit into %lu:%d -> %llu\n", (int) iblock,
-+ ext_debug("%u fit into %lu:%d -> %llu\n", iblock,
- ee_block, ee_len, newblock);
+- if (sdp->sd_args.ar_localflocks) {
+- if (IS_GETLK(cmd)) {
+- posix_test_lock(file, fl);
+- return 0;
+- } else {
+- return posix_lock_file_wait(file, fl);
+- }
+- }
+-
+ if (cmd == F_CANCELLK) {
+ /* Hack: */
+ cmd = F_SETLK;
+@@ -632,16 +703,12 @@ static void do_unflock(struct file *file, struct file_lock *fl)
+ static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
+ {
+ struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
+- struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
- /* Do not put uninitialized extent in the cache */
-@@ -2320,9 +2375,10 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- ret = ext4_ext_convert_to_initialized(handle, inode,
- path, iblock,
- max_blocks);
-- if (ret <= 0)
-+ if (ret <= 0) {
-+ err = ret;
- goto out2;
-- else
-+ } else
- allocated = ret;
- goto outnew;
- }
-@@ -2347,8 +2403,15 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- if (S_ISREG(inode->i_mode) && (!EXT4_I(inode)->i_block_alloc_info))
- ext4_init_block_alloc_info(inode);
+ if (!(fl->fl_flags & FL_FLOCK))
+ return -ENOLCK;
+ if (__mandatory_lock(&ip->i_inode))
+ return -ENOLCK;
-- /* allocate new block */
-- goal = ext4_ext_find_goal(inode, path, iblock);
-+ /* find neighbour allocated blocks */
-+ ar.lleft = iblock;
-+ err = ext4_ext_search_left(inode, path, &ar.lleft, &ar.pleft);
-+ if (err)
-+ goto out2;
-+ ar.lright = iblock;
-+ err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright);
-+ if (err)
-+ goto out2;
+- if (sdp->sd_args.ar_localflocks)
+- return flock_lock_file_wait(file, fl);
+-
+ if (fl->fl_type == F_UNLCK) {
+ do_unflock(file, fl);
+ return 0;
+@@ -678,3 +745,27 @@ const struct file_operations gfs2_dir_fops = {
+ .flock = gfs2_flock,
+ };
- /*
- * See if request is beyond maximum number of blocks we can have in
-@@ -2368,10 +2431,21 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
- newex.ee_len = cpu_to_le16(max_blocks);
- err = ext4_ext_check_overlap(inode, &newex, path);
- if (err)
-- allocated = le16_to_cpu(newex.ee_len);
-+ allocated = ext4_ext_get_actual_len(&newex);
- else
- allocated = max_blocks;
-- newblock = ext4_new_blocks(handle, inode, goal, &allocated, &err);
++const struct file_operations gfs2_file_fops_nolock = {
++ .llseek = gfs2_llseek,
++ .read = do_sync_read,
++ .aio_read = generic_file_aio_read,
++ .write = do_sync_write,
++ .aio_write = generic_file_aio_write,
++ .unlocked_ioctl = gfs2_ioctl,
++ .mmap = gfs2_mmap,
++ .open = gfs2_open,
++ .release = gfs2_close,
++ .fsync = gfs2_fsync,
++ .splice_read = generic_file_splice_read,
++ .splice_write = generic_file_splice_write,
++ .setlease = gfs2_setlease,
++};
+
-+ /* allocate new block */
-+ ar.inode = inode;
-+ ar.goal = ext4_ext_find_goal(inode, path, iblock);
-+ ar.logical = iblock;
-+ ar.len = allocated;
-+ if (S_ISREG(inode->i_mode))
-+ ar.flags = EXT4_MB_HINT_DATA;
-+ else
-+ /* disable in-core preallocation for non-regular files */
-+ ar.flags = 0;
-+ newblock = ext4_mb_new_blocks(handle, &ar, &err);
- if (!newblock)
- goto out2;
- ext_debug("allocate new block: goal %llu, found %llu/%lu\n",
-@@ -2379,14 +2453,17 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
-
- /* try to insert new extent into found leaf and return */
- ext4_ext_store_pblock(&newex, newblock);
-- newex.ee_len = cpu_to_le16(allocated);
-+ newex.ee_len = cpu_to_le16(ar.len);
- if (create == EXT4_CREATE_UNINITIALIZED_EXT) /* Mark uninitialized */
- ext4_ext_mark_uninitialized(&newex);
- err = ext4_ext_insert_extent(handle, inode, path, &newex);
- if (err) {
- /* free data blocks we just allocated */
-+ /* not a good idea to call discard here directly,
-+ * but otherwise we'd need to call it on every free() */
-+ ext4_mb_discard_inode_preallocations(inode);
- ext4_free_blocks(handle, inode, ext_pblock(&newex),
-- le16_to_cpu(newex.ee_len));
-+ ext4_ext_get_actual_len(&newex), 0);
- goto out2;
- }
-
-@@ -2395,6 +2472,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
-
- /* previous routine could use block we allocated */
- newblock = ext_pblock(&newex);
-+ allocated = ext4_ext_get_actual_len(&newex);
- outnew:
- __set_bit(BH_New, &bh_result->b_state);
-
-@@ -2414,8 +2492,6 @@ out2:
- ext4_ext_drop_refs(path);
- kfree(path);
- }
-- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
++const struct file_operations gfs2_dir_fops_nolock = {
++ .readdir = gfs2_readdir,
++ .unlocked_ioctl = gfs2_ioctl,
++ .open = gfs2_open,
++ .release = gfs2_close,
++ .fsync = gfs2_fsync,
++};
++
+diff --git a/fs/gfs2/ops_file.h b/fs/gfs2/ops_file.h
+deleted file mode 100644
+index 7e5d8ec..0000000
+--- a/fs/gfs2/ops_file.h
++++ /dev/null
+@@ -1,24 +0,0 @@
+-/*
+- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+- *
+- * This copyrighted material is made available to anyone wishing to use,
+- * modify, copy, or redistribute it subject to the terms and conditions
+- * of the GNU General Public License version 2.
+- */
-
- return err ? err : allocated;
- }
+-#ifndef __OPS_FILE_DOT_H__
+-#define __OPS_FILE_DOT_H__
+-
+-#include <linux/fs.h>
+-struct gfs2_inode;
+-
+-extern struct file gfs2_internal_file_sentinel;
+-extern int gfs2_internal_read(struct gfs2_inode *ip,
+- struct file_ra_state *ra_state,
+- char *buf, loff_t *pos, unsigned size);
+-extern void gfs2_set_inode_flags(struct inode *inode);
+-extern const struct file_operations gfs2_file_fops;
+-extern const struct file_operations gfs2_dir_fops;
+-
+-#endif /* __OPS_FILE_DOT_H__ */
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 17de58e..43d511b 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -21,6 +21,7 @@
-@@ -2423,7 +2499,7 @@ void ext4_ext_truncate(struct inode * inode, struct page *page)
- {
- struct address_space *mapping = inode->i_mapping;
- struct super_block *sb = inode->i_sb;
-- unsigned long last_block;
-+ ext4_lblk_t last_block;
- handle_t *handle;
- int err = 0;
+ #include "gfs2.h"
+ #include "incore.h"
++#include "bmap.h"
+ #include "daemon.h"
+ #include "glock.h"
+ #include "glops.h"
+@@ -59,7 +60,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
-@@ -2445,9 +2521,11 @@ void ext4_ext_truncate(struct inode * inode, struct page *page)
- if (page)
- ext4_block_truncate_page(handle, page, mapping, inode->i_size);
+ mutex_init(&sdp->sd_inum_mutex);
+ spin_lock_init(&sdp->sd_statfs_spin);
+- mutex_init(&sdp->sd_statfs_mutex);
-- mutex_lock(&EXT4_I(inode)->truncate_mutex);
-+ down_write(&EXT4_I(inode)->i_data_sem);
- ext4_ext_invalidate_cache(inode);
+ spin_lock_init(&sdp->sd_rindex_spin);
+ mutex_init(&sdp->sd_rindex_mutex);
+@@ -77,7 +77,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
-+ ext4_mb_discard_inode_preallocations(inode);
-+
- /*
- * TODO: optimization is possible here.
- * Probably we need not scan at all,
-@@ -2481,7 +2559,7 @@ out_stop:
- if (inode->i_nlink)
- ext4_orphan_del(handle, inode);
+ spin_lock_init(&sdp->sd_log_lock);
-- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
-+ up_write(&EXT4_I(inode)->i_data_sem);
- ext4_journal_stop(handle);
+- INIT_LIST_HEAD(&sdp->sd_log_le_gl);
+ INIT_LIST_HEAD(&sdp->sd_log_le_buf);
+ INIT_LIST_HEAD(&sdp->sd_log_le_revoke);
+ INIT_LIST_HEAD(&sdp->sd_log_le_rg);
+@@ -303,6 +302,67 @@ out:
+ return error;
}
-@@ -2516,7 +2594,8 @@ int ext4_ext_writepage_trans_blocks(struct inode *inode, int num)
- long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
- {
- handle_t *handle;
-- ext4_fsblk_t block, max_blocks;
-+ ext4_lblk_t block;
-+ unsigned long max_blocks;
- ext4_fsblk_t nblocks = 0;
- int ret = 0;
- int ret2 = 0;
-@@ -2544,6 +2623,7 @@ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
- * modify 1 super block, 1 block bitmap and 1 group descriptor.
- */
- credits = EXT4_DATA_TRANS_BLOCKS(inode->i_sb) + 3;
-+ down_write((&EXT4_I(inode)->i_data_sem));
- retry:
- while (ret >= 0 && ret < max_blocks) {
- block = block + ret;
-@@ -2557,12 +2637,12 @@ retry:
- ret = ext4_ext_get_blocks(handle, inode, block,
- max_blocks, &map_bh,
- EXT4_CREATE_UNINITIALIZED_EXT, 0);
-- WARN_ON(!ret);
-- if (!ret) {
-+ WARN_ON(ret <= 0);
-+ if (ret <= 0) {
- ext4_error(inode->i_sb, "ext4_fallocate",
-- "ext4_ext_get_blocks returned 0! inode#%lu"
-- ", block=%llu, max_blocks=%llu",
-- inode->i_ino, block, max_blocks);
-+ "ext4_ext_get_blocks returned error: "
-+ "inode#%lu, block=%u, max_blocks=%lu",
-+ inode->i_ino, block, max_blocks);
- ret = -EIO;
- ext4_mark_inode_dirty(handle, inode);
- ret2 = ext4_journal_stop(handle);
-@@ -2600,6 +2680,7 @@ retry:
- if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
- goto retry;
-
-+ up_write((&EXT4_I(inode)->i_data_sem));
- /*
- * Time to update the file size.
- * Update only when preallocation was requested beyond the file size.
-diff --git a/fs/ext4/file.c b/fs/ext4/file.c
-index 1a81cd6..ac35ec5 100644
---- a/fs/ext4/file.c
-+++ b/fs/ext4/file.c
-@@ -37,9 +37,9 @@ static int ext4_release_file (struct inode * inode, struct file * filp)
- if ((filp->f_mode & FMODE_WRITE) &&
- (atomic_read(&inode->i_writecount) == 1))
- {
-- mutex_lock(&EXT4_I(inode)->truncate_mutex);
-+ down_write(&EXT4_I(inode)->i_data_sem);
- ext4_discard_reservation(inode);
-- mutex_unlock(&EXT4_I(inode)->truncate_mutex);
-+ up_write(&EXT4_I(inode)->i_data_sem);
- }
- if (is_dx(inode) && filp->private_data)
- ext4_htree_free_dir_info(filp->private_data);
-@@ -56,8 +56,25 @@ ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
- ssize_t ret;
- int err;
-
-- ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
-+ /*
-+ * If we have encountered a bitmap-format file, the size limit
-+ * is smaller than s_maxbytes, which is for extent-mapped files.
-+ */
++/**
++ * map_journal_extents - create a reusable "extent" mapping from all logical
++ * blocks to all physical blocks for the given journal. This will save
++ * us time when writing journal blocks. Most journals will have only one
++ * extent that maps all their logical blocks. That's because gfs2.mkfs
++ * arranges the journal blocks sequentially to maximize performance.
++ * So the extent would map the first block for the entire file length.
++ * However, gfs2_jadd can happen while file activity is happening, so
++ * those journals may not be sequential. Less likely is the case where
++ * the users created their own journals by mounting the metafs and
++ * laying it out. But it's still possible. These journals might have
++ * several extents.
++ *
++ * TODO: This should be done in bigger chunks rather than one block at a time,
++ * but since it's only done at mount time, I'm not worried about the
++ * time it takes.
++ */
++static int map_journal_extents(struct gfs2_sbd *sdp)
++{
++ struct gfs2_jdesc *jd = sdp->sd_jdesc;
++ unsigned int lb;
++ u64 db, prev_db; /* logical block, disk block, prev disk block */
++ struct gfs2_inode *ip = GFS2_I(jd->jd_inode);
++ struct gfs2_journal_extent *jext = NULL;
++ struct buffer_head bh;
++ int rc = 0;
+
-+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
-+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
-+ size_t length = iov_length(iov, nr_segs);
-
-+ if (pos > sbi->s_bitmap_maxbytes)
-+ return -EFBIG;
++ prev_db = 0;
+
-+ if (pos + length > sbi->s_bitmap_maxbytes) {
-+ nr_segs = iov_shorten((struct iovec *)iov, nr_segs,
-+ sbi->s_bitmap_maxbytes - pos);
++ for (lb = 0; lb < ip->i_di.di_size >> sdp->sd_sb.sb_bsize_shift; lb++) {
++ bh.b_state = 0;
++ bh.b_blocknr = 0;
++ bh.b_size = 1 << ip->i_inode.i_blkbits;
++ rc = gfs2_block_map(jd->jd_inode, lb, &bh, 0);
++ db = bh.b_blocknr;
++ if (rc || !db) {
++ printk(KERN_INFO "GFS2 journal mapping error %d: lb="
++ "%u db=%llu\n", rc, lb, (unsigned long long)db);
++ break;
++ }
++ if (!prev_db || db != prev_db + 1) {
++ jext = kzalloc(sizeof(struct gfs2_journal_extent),
++ GFP_KERNEL);
++ if (!jext) {
++ printk(KERN_INFO "GFS2 error: out of memory "
++ "mapping journal extents.\n");
++ rc = -ENOMEM;
++ break;
++ }
++ jext->dblock = db;
++ jext->lblock = lb;
++ jext->blocks = 1;
++ list_add_tail(&jext->extent_list, &jd->extent_list);
++ } else {
++ jext->blocks++;
+ }
++ prev_db = db;
+ }
++ return rc;
++}
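`map_journal_extents()` above is a run-length coalescer: it walks the journal one logical block at a time and extends the current extent while the disk blocks stay consecutive, starting a new extent otherwise. A minimal sketch of that coalescing over an in-memory array (the struct and function names are stand-ins, not the gfs2 API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct jextent {
    uint64_t dblock;  /* first disk block of the run */
    unsigned lblock;  /* first logical block of the run */
    unsigned blocks;  /* run length */
};

/* Coalesce a logical-to-disk block mapping into extents, one block at a
 * time as map_journal_extents() does. 'db[lb]' is the disk block backing
 * logical block 'lb'; 'out' must have room for up to 'n' extents.
 * Returns the number of extents produced. */
static size_t build_extents(const uint64_t *db, unsigned n,
                            struct jextent *out)
{
    size_t count = 0;
    uint64_t prev_db = 0;

    for (unsigned lb = 0; lb < n; lb++) {
        if (!prev_db || db[lb] != prev_db + 1) {
            out[count].dblock = db[lb];
            out[count].lblock = lb;
            out[count].blocks = 1;
            count++;
        } else {
            out[count - 1].blocks++;
        }
        prev_db = db[lb];
    }
    return count;
}
```

A journal laid out sequentially by mkfs collapses to a single extent; one grown later by gfs2_jadd typically produces a handful.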
+
-+ ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
- /*
- * Skip flushing if there was an error, or if nothing was written.
- */
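The `ext4_file_write` hunk above enforces the smaller size limit of bitmap-format (non-extent) files: a write starting past `s_bitmap_maxbytes` fails with `-EFBIG`, and one crossing the limit is shortened to fit, as `iov_shorten()` does for the iovec. A sketch of that clamp, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

#define EFBIG 27  /* illustrative errno value */

/* Clamp a write of 'len' bytes at offset 'pos' against the bitmap-file
 * limit 'maxbytes'. Returns the permitted length, or -EFBIG when the
 * write starts beyond the limit, mirroring the ext4_file_write() check. */
static int64_t clamp_write(uint64_t pos, uint64_t len, uint64_t maxbytes)
{
    if (pos > maxbytes)
        return -EFBIG;
    if (pos + len > maxbytes)
        return (int64_t)(maxbytes - pos); /* shortened, like iov_shorten() */
    return (int64_t)len;
}
```

Extent-mapped files skip this path entirely and are bounded by `s_maxbytes` instead.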
-diff --git a/fs/ext4/group.h b/fs/ext4/group.h
-index 1577910..7eb0604 100644
---- a/fs/ext4/group.h
-+++ b/fs/ext4/group.h
-@@ -14,14 +14,16 @@ extern __le16 ext4_group_desc_csum(struct ext4_sb_info *sbi, __u32 group,
- extern int ext4_group_desc_csum_verify(struct ext4_sb_info *sbi, __u32 group,
- struct ext4_group_desc *gdp);
- struct buffer_head *read_block_bitmap(struct super_block *sb,
-- unsigned int block_group);
-+ ext4_group_t block_group);
- extern unsigned ext4_init_block_bitmap(struct super_block *sb,
-- struct buffer_head *bh, int group,
-+ struct buffer_head *bh,
-+ ext4_group_t group,
- struct ext4_group_desc *desc);
- #define ext4_free_blocks_after_init(sb, group, desc) \
- ext4_init_block_bitmap(sb, NULL, group, desc)
- extern unsigned ext4_init_inode_bitmap(struct super_block *sb,
-- struct buffer_head *bh, int group,
-+ struct buffer_head *bh,
-+ ext4_group_t group,
- struct ext4_group_desc *desc);
- extern void mark_bitmap_end(int start_bit, int end_bit, char *bitmap);
- #endif /* _LINUX_EXT4_GROUP_H */
-diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
-index c61f37f..575b521 100644
---- a/fs/ext4/ialloc.c
-+++ b/fs/ext4/ialloc.c
-@@ -64,8 +64,8 @@ void mark_bitmap_end(int start_bit, int end_bit, char *bitmap)
- }
-
- /* Initializes an uninitialized inode bitmap */
--unsigned ext4_init_inode_bitmap(struct super_block *sb,
-- struct buffer_head *bh, int block_group,
-+unsigned ext4_init_inode_bitmap(struct super_block *sb, struct buffer_head *bh,
-+ ext4_group_t block_group,
- struct ext4_group_desc *gdp)
- {
- struct ext4_sb_info *sbi = EXT4_SB(sb);
-@@ -75,7 +75,7 @@ unsigned ext4_init_inode_bitmap(struct super_block *sb,
- /* If checksum is bad mark all blocks and inodes use to prevent
- * allocation, essentially implementing a per-group read-only flag. */
- if (!ext4_group_desc_csum_verify(sbi, block_group, gdp)) {
-- ext4_error(sb, __FUNCTION__, "Checksum bad for group %u\n",
-+ ext4_error(sb, __FUNCTION__, "Checksum bad for group %lu\n",
- block_group);
- gdp->bg_free_blocks_count = 0;
- gdp->bg_free_inodes_count = 0;
-@@ -98,7 +98,7 @@ unsigned ext4_init_inode_bitmap(struct super_block *sb,
- * Return buffer_head of bitmap on success or NULL.
- */
- static struct buffer_head *
--read_inode_bitmap(struct super_block * sb, unsigned long block_group)
-+read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
- {
- struct ext4_group_desc *desc;
- struct buffer_head *bh = NULL;
-@@ -152,7 +152,7 @@ void ext4_free_inode (handle_t *handle, struct inode * inode)
- unsigned long ino;
- struct buffer_head *bitmap_bh = NULL;
- struct buffer_head *bh2;
-- unsigned long block_group;
-+ ext4_group_t block_group;
- unsigned long bit;
- struct ext4_group_desc * gdp;
- struct ext4_super_block * es;
-@@ -260,12 +260,14 @@ error_return:
- * For other inodes, search forward from the parent directory\'s block
- * group to find a free inode.
- */
--static int find_group_dir(struct super_block *sb, struct inode *parent)
-+static int find_group_dir(struct super_block *sb, struct inode *parent,
-+ ext4_group_t *best_group)
+ static int init_journal(struct gfs2_sbd *sdp, int undo)
{
-- int ngroups = EXT4_SB(sb)->s_groups_count;
-+ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
- unsigned int freei, avefreei;
- struct ext4_group_desc *desc, *best_desc = NULL;
-- int group, best_group = -1;
-+ ext4_group_t group;
-+ int ret = -1;
+ struct gfs2_holder ji_gh;
+@@ -340,7 +400,7 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
- freei = percpu_counter_read_positive(&EXT4_SB(sb)->s_freeinodes_counter);
- avefreei = freei / ngroups;
-@@ -279,11 +281,12 @@ static int find_group_dir(struct super_block *sb, struct inode *parent)
- if (!best_desc ||
- (le16_to_cpu(desc->bg_free_blocks_count) >
- le16_to_cpu(best_desc->bg_free_blocks_count))) {
-- best_group = group;
-+ *best_group = group;
- best_desc = desc;
-+ ret = 0;
+ if (sdp->sd_args.ar_spectator) {
+ sdp->sd_jdesc = gfs2_jdesc_find(sdp, 0);
+- sdp->sd_log_blks_free = sdp->sd_jdesc->jd_blocks;
++ atomic_set(&sdp->sd_log_blks_free, sdp->sd_jdesc->jd_blocks);
+ } else {
+ if (sdp->sd_lockstruct.ls_jid >= gfs2_jindex_size(sdp)) {
+ fs_err(sdp, "can't mount journal #%u\n",
+@@ -377,7 +437,10 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
+ sdp->sd_jdesc->jd_jid, error);
+ goto fail_jinode_gh;
}
+- sdp->sd_log_blks_free = sdp->sd_jdesc->jd_blocks;
++ atomic_set(&sdp->sd_log_blks_free, sdp->sd_jdesc->jd_blocks);
++
++ /* Map the extents for this journal's blocks */
++ map_journal_extents(sdp);
}
-- return best_group;
-+ return ret;
- }
- /*
-@@ -314,12 +317,13 @@ static int find_group_dir(struct super_block *sb, struct inode *parent)
- #define INODE_COST 64
- #define BLOCK_COST 256
+ if (sdp->sd_lockstruct.ls_first) {
+diff --git a/fs/gfs2/ops_inode.c b/fs/gfs2/ops_inode.c
+index 291f0c7..9f71372 100644
+--- a/fs/gfs2/ops_inode.c
++++ b/fs/gfs2/ops_inode.c
+@@ -61,7 +61,7 @@ static int gfs2_create(struct inode *dir, struct dentry *dentry,
+ inode = gfs2_createi(ghs, &dentry->d_name, S_IFREG | mode, 0);
+ if (!IS_ERR(inode)) {
+ gfs2_trans_end(sdp);
+- if (dip->i_alloc.al_rgd)
++ if (dip->i_alloc->al_rgd)
+ gfs2_inplace_release(dip);
+ gfs2_quota_unlock(dip);
+ gfs2_alloc_put(dip);
+@@ -113,8 +113,18 @@ static struct dentry *gfs2_lookup(struct inode *dir, struct dentry *dentry,
+ if (inode && IS_ERR(inode))
+ return ERR_PTR(PTR_ERR(inode));
--static int find_group_orlov(struct super_block *sb, struct inode *parent)
-+static int find_group_orlov(struct super_block *sb, struct inode *parent,
-+ ext4_group_t *group)
- {
-- int parent_group = EXT4_I(parent)->i_block_group;
-+ ext4_group_t parent_group = EXT4_I(parent)->i_block_group;
- struct ext4_sb_info *sbi = EXT4_SB(sb);
- struct ext4_super_block *es = sbi->s_es;
-- int ngroups = sbi->s_groups_count;
-+ ext4_group_t ngroups = sbi->s_groups_count;
- int inodes_per_group = EXT4_INODES_PER_GROUP(sb);
- unsigned int freei, avefreei;
- ext4_fsblk_t freeb, avefreeb;
-@@ -327,7 +331,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
- unsigned int ndirs;
- int max_debt, max_dirs, min_inodes;
- ext4_grpblk_t min_blocks;
-- int group = -1, i;
-+ ext4_group_t i;
- struct ext4_group_desc *desc;
+- if (inode)
++ if (inode) {
++ struct gfs2_glock *gl = GFS2_I(inode)->i_gl;
++ struct gfs2_holder gh;
++ int error;
++ error = gfs2_glock_nq_init(gl, LM_ST_SHARED, LM_FLAG_ANY, &gh);
++ if (error) {
++ iput(inode);
++ return ERR_PTR(error);
++ }
++ gfs2_glock_dq_uninit(&gh);
+ return d_splice_alias(inode, dentry);
++ }
+ d_add(dentry, inode);
- freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter);
-@@ -340,13 +344,14 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
- if ((parent == sb->s_root->d_inode) ||
- (EXT4_I(parent)->i_flags & EXT4_TOPDIR_FL)) {
- int best_ndir = inodes_per_group;
-- int best_group = -1;
-+ ext4_group_t grp;
-+ int ret = -1;
+ return NULL;
+@@ -366,7 +376,7 @@ static int gfs2_symlink(struct inode *dir, struct dentry *dentry,
+ }
-- get_random_bytes(&group, sizeof(group));
-- parent_group = (unsigned)group % ngroups;
-+ get_random_bytes(&grp, sizeof(grp));
-+ parent_group = (unsigned)grp % ngroups;
- for (i = 0; i < ngroups; i++) {
-- group = (parent_group + i) % ngroups;
-- desc = ext4_get_group_desc (sb, group, NULL);
-+ grp = (parent_group + i) % ngroups;
-+ desc = ext4_get_group_desc(sb, grp, NULL);
- if (!desc || !desc->bg_free_inodes_count)
- continue;
- if (le16_to_cpu(desc->bg_used_dirs_count) >= best_ndir)
-@@ -355,11 +360,12 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
- continue;
- if (le16_to_cpu(desc->bg_free_blocks_count) < avefreeb)
- continue;
-- best_group = group;
-+ *group = grp;
-+ ret = 0;
- best_ndir = le16_to_cpu(desc->bg_used_dirs_count);
- }
-- if (best_group >= 0)
-- return best_group;
-+ if (ret == 0)
-+ return ret;
- goto fallback;
+ gfs2_trans_end(sdp);
+- if (dip->i_alloc.al_rgd)
++ if (dip->i_alloc->al_rgd)
+ gfs2_inplace_release(dip);
+ gfs2_quota_unlock(dip);
+ gfs2_alloc_put(dip);
+@@ -442,7 +452,7 @@ static int gfs2_mkdir(struct inode *dir, struct dentry *dentry, int mode)
+ gfs2_assert_withdraw(sdp, !error); /* dip already pinned */
+
+ gfs2_trans_end(sdp);
+- if (dip->i_alloc.al_rgd)
++ if (dip->i_alloc->al_rgd)
+ gfs2_inplace_release(dip);
+ gfs2_quota_unlock(dip);
+ gfs2_alloc_put(dip);
+@@ -548,7 +558,7 @@ static int gfs2_mknod(struct inode *dir, struct dentry *dentry, int mode,
}
-@@ -380,8 +386,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
- max_debt = 1;
+ gfs2_trans_end(sdp);
+- if (dip->i_alloc.al_rgd)
++ if (dip->i_alloc->al_rgd)
+ gfs2_inplace_release(dip);
+ gfs2_quota_unlock(dip);
+ gfs2_alloc_put(dip);
+diff --git a/fs/gfs2/ops_inode.h b/fs/gfs2/ops_inode.h
+index 34f0caa..fd8cee2 100644
+--- a/fs/gfs2/ops_inode.h
++++ b/fs/gfs2/ops_inode.h
+@@ -16,5 +16,11 @@ extern const struct inode_operations gfs2_file_iops;
+ extern const struct inode_operations gfs2_dir_iops;
+ extern const struct inode_operations gfs2_symlink_iops;
+ extern const struct inode_operations gfs2_dev_iops;
++extern const struct file_operations gfs2_file_fops;
++extern const struct file_operations gfs2_dir_fops;
++extern const struct file_operations gfs2_file_fops_nolock;
++extern const struct file_operations gfs2_dir_fops_nolock;
++
++extern void gfs2_set_inode_flags(struct inode *inode);
- for (i = 0; i < ngroups; i++) {
-- group = (parent_group + i) % ngroups;
-- desc = ext4_get_group_desc (sb, group, NULL);
-+ *group = (parent_group + i) % ngroups;
-+ desc = ext4_get_group_desc(sb, *group, NULL);
- if (!desc || !desc->bg_free_inodes_count)
- continue;
- if (le16_to_cpu(desc->bg_used_dirs_count) >= max_dirs)
-@@ -390,17 +396,16 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
- continue;
- if (le16_to_cpu(desc->bg_free_blocks_count) < min_blocks)
- continue;
-- return group;
-+ return 0;
+ #endif /* __OPS_INODE_DOT_H__ */
+diff --git a/fs/gfs2/ops_super.c b/fs/gfs2/ops_super.c
+index 950f314..5e52421 100644
+--- a/fs/gfs2/ops_super.c
++++ b/fs/gfs2/ops_super.c
+@@ -487,7 +487,6 @@ static struct inode *gfs2_alloc_inode(struct super_block *sb)
+ if (ip) {
+ ip->i_flags = 0;
+ ip->i_gl = NULL;
+- ip->i_last_pfault = jiffies;
}
+ return &ip->i_inode;
+ }
+diff --git a/fs/gfs2/ops_vm.c b/fs/gfs2/ops_vm.c
+deleted file mode 100644
+index 927d739..0000000
+--- a/fs/gfs2/ops_vm.c
++++ /dev/null
+@@ -1,169 +0,0 @@
+-/*
+- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+- *
+- * This copyrighted material is made available to anyone wishing to use,
+- * modify, copy, or redistribute it subject to the terms and conditions
+- * of the GNU General Public License version 2.
+- */
+-
+-#include <linux/slab.h>
+-#include <linux/spinlock.h>
+-#include <linux/completion.h>
+-#include <linux/buffer_head.h>
+-#include <linux/mm.h>
+-#include <linux/pagemap.h>
+-#include <linux/gfs2_ondisk.h>
+-#include <linux/lm_interface.h>
+-
+-#include "gfs2.h"
+-#include "incore.h"
+-#include "bmap.h"
+-#include "glock.h"
+-#include "inode.h"
+-#include "ops_vm.h"
+-#include "quota.h"
+-#include "rgrp.h"
+-#include "trans.h"
+-#include "util.h"
+-
+-static int gfs2_private_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+-{
+- struct gfs2_inode *ip = GFS2_I(vma->vm_file->f_mapping->host);
+-
+- set_bit(GIF_PAGED, &ip->i_flags);
+- return filemap_fault(vma, vmf);
+-}
+-
+-static int alloc_page_backing(struct gfs2_inode *ip, struct page *page)
+-{
+- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- unsigned long index = page->index;
+- u64 lblock = index << (PAGE_CACHE_SHIFT -
+- sdp->sd_sb.sb_bsize_shift);
+- unsigned int blocks = PAGE_CACHE_SIZE >> sdp->sd_sb.sb_bsize_shift;
+- struct gfs2_alloc *al;
+- unsigned int data_blocks, ind_blocks;
+- unsigned int x;
+- int error;
+-
+- al = gfs2_alloc_get(ip);
+-
+- error = gfs2_quota_lock(ip, NO_QUOTA_CHANGE, NO_QUOTA_CHANGE);
+- if (error)
+- goto out;
+-
+- error = gfs2_quota_check(ip, ip->i_inode.i_uid, ip->i_inode.i_gid);
+- if (error)
+- goto out_gunlock_q;
+-
+- gfs2_write_calc_reserv(ip, PAGE_CACHE_SIZE, &data_blocks, &ind_blocks);
+-
+- al->al_requested = data_blocks + ind_blocks;
+-
+- error = gfs2_inplace_reserve(ip);
+- if (error)
+- goto out_gunlock_q;
+-
+- error = gfs2_trans_begin(sdp, al->al_rgd->rd_length +
+- ind_blocks + RES_DINODE +
+- RES_STATFS + RES_QUOTA, 0);
+- if (error)
+- goto out_ipres;
+-
+- if (gfs2_is_stuffed(ip)) {
+- error = gfs2_unstuff_dinode(ip, NULL);
+- if (error)
+- goto out_trans;
+- }
+-
+- for (x = 0; x < blocks; ) {
+- u64 dblock;
+- unsigned int extlen;
+- int new = 1;
+-
+- error = gfs2_extent_map(&ip->i_inode, lblock, &new, &dblock, &extlen);
+- if (error)
+- goto out_trans;
+-
+- lblock += extlen;
+- x += extlen;
+- }
+-
+- gfs2_assert_warn(sdp, al->al_alloced);
+-
+-out_trans:
+- gfs2_trans_end(sdp);
+-out_ipres:
+- gfs2_inplace_release(ip);
+-out_gunlock_q:
+- gfs2_quota_unlock(ip);
+-out:
+- gfs2_alloc_put(ip);
+- return error;
+-}
+-
+-static int gfs2_sharewrite_fault(struct vm_area_struct *vma,
+- struct vm_fault *vmf)
+-{
+- struct file *file = vma->vm_file;
+- struct gfs2_file *gf = file->private_data;
+- struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
+- struct gfs2_holder i_gh;
+- int alloc_required;
+- int error;
+- int ret = 0;
+-
+- error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &i_gh);
+- if (error)
+- goto out;
+-
+- set_bit(GIF_PAGED, &ip->i_flags);
+- set_bit(GIF_SW_PAGED, &ip->i_flags);
+-
+- error = gfs2_write_alloc_required(ip,
+- (u64)vmf->pgoff << PAGE_CACHE_SHIFT,
+- PAGE_CACHE_SIZE, &alloc_required);
+- if (error) {
+- ret = VM_FAULT_OOM; /* XXX: are these right? */
+- goto out_unlock;
+- }
+-
+- set_bit(GFF_EXLOCK, &gf->f_flags);
+- ret = filemap_fault(vma, vmf);
+- clear_bit(GFF_EXLOCK, &gf->f_flags);
+- if (ret & VM_FAULT_ERROR)
+- goto out_unlock;
+-
+- if (alloc_required) {
+- /* XXX: do we need to drop page lock around alloc_page_backing?*/
+- error = alloc_page_backing(ip, vmf->page);
+- if (error) {
+- /*
+- * VM_FAULT_LOCKED should always be the case for
+- * filemap_fault, but it may not be in a future
+- * implementation.
+- */
+- if (ret & VM_FAULT_LOCKED)
+- unlock_page(vmf->page);
+- page_cache_release(vmf->page);
+- ret = VM_FAULT_OOM;
+- goto out_unlock;
+- }
+- set_page_dirty(vmf->page);
+- }
+-
+-out_unlock:
+- gfs2_glock_dq_uninit(&i_gh);
+-out:
+- return ret;
+-}
+-
+-struct vm_operations_struct gfs2_vm_ops_private = {
+- .fault = gfs2_private_fault,
+-};
+-
+-struct vm_operations_struct gfs2_vm_ops_sharewrite = {
+- .fault = gfs2_sharewrite_fault,
+-};
+-
+diff --git a/fs/gfs2/ops_vm.h b/fs/gfs2/ops_vm.h
+deleted file mode 100644
+index 4ae8f43..0000000
+--- a/fs/gfs2/ops_vm.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/*
+- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+- *
+- * This copyrighted material is made available to anyone wishing to use,
+- * modify, copy, or redistribute it subject to the terms and conditions
+- * of the GNU General Public License version 2.
+- */
+-
+-#ifndef __OPS_VM_DOT_H__
+-#define __OPS_VM_DOT_H__
+-
+-#include <linux/mm.h>
+-
+-extern struct vm_operations_struct gfs2_vm_ops_private;
+-extern struct vm_operations_struct gfs2_vm_ops_sharewrite;
+-
+-#endif /* __OPS_VM_DOT_H__ */
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index addb51e..a08dabd 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -59,7 +59,6 @@
+ #include "super.h"
+ #include "trans.h"
+ #include "inode.h"
+-#include "ops_file.h"
+ #include "ops_address.h"
+ #include "util.h"
- fallback:
- for (i = 0; i < ngroups; i++) {
-- group = (parent_group + i) % ngroups;
-- desc = ext4_get_group_desc (sb, group, NULL);
-- if (!desc || !desc->bg_free_inodes_count)
-- continue;
-- if (le16_to_cpu(desc->bg_free_inodes_count) >= avefreei)
-- return group;
-+ *group = (parent_group + i) % ngroups;
-+ desc = ext4_get_group_desc(sb, *group, NULL);
-+ if (desc && desc->bg_free_inodes_count &&
-+ le16_to_cpu(desc->bg_free_inodes_count) >= avefreei)
-+ return 0;
+@@ -274,10 +273,10 @@ static int bh_get(struct gfs2_quota_data *qd)
}
- if (avefreei) {
-@@ -415,21 +420,22 @@ fallback:
- return -1;
- }
+ block = qd->qd_slot / sdp->sd_qc_per_block;
+- offset = qd->qd_slot % sdp->sd_qc_per_block;;
++ offset = qd->qd_slot % sdp->sd_qc_per_block;
--static int find_group_other(struct super_block *sb, struct inode *parent)
-+static int find_group_other(struct super_block *sb, struct inode *parent,
-+ ext4_group_t *group)
+ bh_map.b_size = 1 << ip->i_inode.i_blkbits;
+- error = gfs2_block_map(&ip->i_inode, block, 0, &bh_map);
++ error = gfs2_block_map(&ip->i_inode, block, &bh_map, 0);
+ if (error)
+ goto fail;
+ error = gfs2_meta_read(ip->i_gl, bh_map.b_blocknr, DIO_WAIT, &bh);
+@@ -454,7 +453,7 @@ static void qdsb_put(struct gfs2_quota_data *qd)
+ int gfs2_quota_hold(struct gfs2_inode *ip, u32 uid, u32 gid)
{
-- int parent_group = EXT4_I(parent)->i_block_group;
-- int ngroups = EXT4_SB(sb)->s_groups_count;
-+ ext4_group_t parent_group = EXT4_I(parent)->i_block_group;
-+ ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
- struct ext4_group_desc *desc;
-- int group, i;
-+ ext4_group_t i;
-
- /*
- * Try to place the inode in its parent directory
- */
-- group = parent_group;
-- desc = ext4_get_group_desc (sb, group, NULL);
-+ *group = parent_group;
-+ desc = ext4_get_group_desc(sb, *group, NULL);
- if (desc && le16_to_cpu(desc->bg_free_inodes_count) &&
- le16_to_cpu(desc->bg_free_blocks_count))
-- return group;
-+ return 0;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_quota_data **qd = al->al_qd;
+ int error;
- /*
- * We're going to place this inode in a different blockgroup from its
-@@ -440,33 +446,33 @@ static int find_group_other(struct super_block *sb, struct inode *parent)
- *
- * So add our directory's i_ino into the starting point for the hash.
- */
-- group = (group + parent->i_ino) % ngroups;
-+ *group = (*group + parent->i_ino) % ngroups;
+@@ -502,7 +501,7 @@ out:
+ void gfs2_quota_unhold(struct gfs2_inode *ip)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ unsigned int x;
- /*
- * Use a quadratic hash to find a group with a free inode and some free
- * blocks.
- */
- for (i = 1; i < ngroups; i <<= 1) {
-- group += i;
-- if (group >= ngroups)
-- group -= ngroups;
-- desc = ext4_get_group_desc (sb, group, NULL);
-+ *group += i;
-+ if (*group >= ngroups)
-+ *group -= ngroups;
-+ desc = ext4_get_group_desc(sb, *group, NULL);
- if (desc && le16_to_cpu(desc->bg_free_inodes_count) &&
- le16_to_cpu(desc->bg_free_blocks_count))
-- return group;
-+ return 0;
+ gfs2_assert_warn(sdp, !test_bit(GIF_QD_LOCKED, &ip->i_flags));
+@@ -646,7 +645,7 @@ static int gfs2_adjust_quota(struct gfs2_inode *ip, loff_t loc,
}
- /*
- * That failed: try linear search for a free inode, even if that group
- * has no free blocks.
- */
-- group = parent_group;
-+ *group = parent_group;
- for (i = 0; i < ngroups; i++) {
-- if (++group >= ngroups)
-- group = 0;
-- desc = ext4_get_group_desc (sb, group, NULL);
-+ if (++*group >= ngroups)
-+ *group = 0;
-+ desc = ext4_get_group_desc(sb, *group, NULL);
- if (desc && le16_to_cpu(desc->bg_free_inodes_count))
-- return group;
-+ return 0;
+ if (!buffer_mapped(bh)) {
+- gfs2_get_block(inode, iblock, bh, 1);
++ gfs2_block_map(inode, iblock, bh, 1);
+ if (!buffer_mapped(bh))
+ goto unlock;
}
+@@ -793,11 +792,9 @@ static int do_glock(struct gfs2_quota_data *qd, int force_refresh,
+ struct gfs2_holder i_gh;
+ struct gfs2_quota_host q;
+ char buf[sizeof(struct gfs2_quota)];
+- struct file_ra_state ra_state;
+ int error;
+ struct gfs2_quota_lvb *qlvb;
- return -1;
-@@ -487,16 +493,17 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
- struct super_block *sb;
- struct buffer_head *bitmap_bh = NULL;
- struct buffer_head *bh2;
-- int group;
-+ ext4_group_t group = 0;
- unsigned long ino = 0;
- struct inode * inode;
- struct ext4_group_desc * gdp = NULL;
- struct ext4_super_block * es;
- struct ext4_inode_info *ei;
- struct ext4_sb_info *sbi;
-- int err = 0;
-+ int ret2, err = 0;
- struct inode *ret;
-- int i, free = 0;
-+ ext4_group_t i;
-+ int free = 0;
+- file_ra_state_init(&ra_state, sdp->sd_quota_inode->i_mapping);
+ restart:
+ error = gfs2_glock_nq_init(qd->qd_gl, LM_ST_SHARED, 0, q_gh);
+ if (error)
+@@ -820,8 +817,8 @@ restart:
- /* Cannot create files in a deleted directory */
- if (!dir || !dir->i_nlink)
-@@ -512,14 +519,14 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
- es = sbi->s_es;
- if (S_ISDIR(mode)) {
- if (test_opt (sb, OLDALLOC))
-- group = find_group_dir(sb, dir);
-+ ret2 = find_group_dir(sb, dir, &group);
- else
-- group = find_group_orlov(sb, dir);
-+ ret2 = find_group_orlov(sb, dir, &group);
- } else
-- group = find_group_other(sb, dir);
-+ ret2 = find_group_other(sb, dir, &group);
+ memset(buf, 0, sizeof(struct gfs2_quota));
+ pos = qd2offset(qd);
+- error = gfs2_internal_read(ip, &ra_state, buf,
+- &pos, sizeof(struct gfs2_quota));
++ error = gfs2_internal_read(ip, NULL, buf, &pos,
++ sizeof(struct gfs2_quota));
+ if (error < 0)
+ goto fail_gunlock;
- err = -ENOSPC;
-- if (group == -1)
-+ if (ret2 == -1)
- goto out;
+@@ -856,7 +853,7 @@ fail:
+ int gfs2_quota_lock(struct gfs2_inode *ip, u32 uid, u32 gid)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ unsigned int x;
+ int error = 0;
- for (i = 0; i < sbi->s_groups_count; i++) {
-@@ -583,7 +590,7 @@ got:
- ino > EXT4_INODES_PER_GROUP(sb)) {
- ext4_error(sb, __FUNCTION__,
- "reserved inode or inode > inodes count - "
-- "block_group = %d, inode=%lu", group,
-+ "block_group = %lu, inode=%lu", group,
- ino + group * EXT4_INODES_PER_GROUP(sb));
- err = -EIO;
- goto fail;
-@@ -702,7 +709,6 @@ got:
- if (!S_ISDIR(mode))
- ei->i_flags &= ~EXT4_DIRSYNC_FL;
- ei->i_file_acl = 0;
-- ei->i_dir_acl = 0;
- ei->i_dtime = 0;
- ei->i_block_alloc_info = NULL;
- ei->i_block_group = group;
-@@ -741,13 +747,10 @@ got:
- if (test_opt(sb, EXTENTS)) {
- EXT4_I(inode)->i_flags |= EXT4_EXTENTS_FL;
- ext4_ext_tree_init(handle, inode);
-- if (!EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS)) {
-- err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
-- if (err) goto fail;
-- EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS);
-- BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "call ext4_journal_dirty_metadata");
-- err = ext4_journal_dirty_metadata(handle, EXT4_SB(sb)->s_sbh);
-- }
-+ err = ext4_update_incompat_feature(handle, sb,
-+ EXT4_FEATURE_INCOMPAT_EXTENTS);
-+ if (err)
-+ goto fail;
- }
+@@ -924,7 +921,7 @@ static int need_sync(struct gfs2_quota_data *qd)
- ext4_debug("allocating inode %lu\n", inode->i_ino);
-@@ -777,7 +780,7 @@ fail_drop:
- struct inode *ext4_orphan_get(struct super_block *sb, unsigned long ino)
+ void gfs2_quota_unlock(struct gfs2_inode *ip)
{
- unsigned long max_ino = le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count);
-- unsigned long block_group;
-+ ext4_group_t block_group;
- int bit;
- struct buffer_head *bitmap_bh = NULL;
- struct inode *inode = NULL;
-@@ -833,7 +836,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_quota_data *qda[4];
+ unsigned int count = 0;
+ unsigned int x;
+@@ -972,7 +969,7 @@ static int print_message(struct gfs2_quota_data *qd, char *type)
+ int gfs2_quota_check(struct gfs2_inode *ip, u32 uid, u32 gid)
{
- unsigned long desc_count;
- struct ext4_group_desc *gdp;
-- int i;
-+ ext4_group_t i;
- #ifdef EXT4FS_DEBUG
- struct ext4_super_block *es;
- unsigned long bitmap_count, x;
-@@ -854,7 +857,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
- continue;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_quota_data *qd;
+ s64 value;
+ unsigned int x;
+@@ -1016,10 +1013,9 @@ int gfs2_quota_check(struct gfs2_inode *ip, u32 uid, u32 gid)
+ void gfs2_quota_change(struct gfs2_inode *ip, s64 change,
+ u32 uid, u32 gid)
+ {
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_quota_data *qd;
+ unsigned int x;
+- unsigned int found = 0;
- x = ext4_count_free(bitmap_bh, EXT4_INODES_PER_GROUP(sb) / 8);
-- printk("group %d: stored = %d, counted = %lu\n",
-+ printk(KERN_DEBUG "group %lu: stored = %d, counted = %lu\n",
- i, le16_to_cpu(gdp->bg_free_inodes_count), x);
- bitmap_count += x;
+ if (gfs2_assert_warn(GFS2_SB(&ip->i_inode), change))
+ return;
+@@ -1032,7 +1028,6 @@ void gfs2_quota_change(struct gfs2_inode *ip, s64 change,
+ if ((qd->qd_id == uid && test_bit(QDF_USER, &qd->qd_flags)) ||
+ (qd->qd_id == gid && !test_bit(QDF_USER, &qd->qd_flags))) {
+ do_qc(qd, change);
+- found++;
+ }
}
-@@ -879,7 +882,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
- unsigned long ext4_count_dirs (struct super_block * sb)
- {
- unsigned long count = 0;
-- int i;
-+ ext4_group_t i;
+ }
+diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
+index beb6c7a..b249e29 100644
+--- a/fs/gfs2/recovery.c
++++ b/fs/gfs2/recovery.c
+@@ -391,7 +391,7 @@ static int clean_journal(struct gfs2_jdesc *jd, struct gfs2_log_header_host *hea
+ lblock = head->lh_blkno;
+ gfs2_replay_incr_blk(sdp, &lblock);
+ bh_map.b_size = 1 << ip->i_inode.i_blkbits;
+- error = gfs2_block_map(&ip->i_inode, lblock, 0, &bh_map);
++ error = gfs2_block_map(&ip->i_inode, lblock, &bh_map, 0);
+ if (error)
+ return error;
+ if (!bh_map.b_blocknr) {
+@@ -504,13 +504,21 @@ int gfs2_recover_journal(struct gfs2_jdesc *jd)
+ if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
+ ro = 1;
+ } else {
+- if (sdp->sd_vfs->s_flags & MS_RDONLY)
+- ro = 1;
++ if (sdp->sd_vfs->s_flags & MS_RDONLY) {
++ /* check if device itself is read-only */
++ ro = bdev_read_only(sdp->sd_vfs->s_bdev);
++ if (!ro) {
++ fs_info(sdp, "recovery required on "
++ "read-only filesystem.\n");
++ fs_info(sdp, "write access will be "
++ "enabled during recovery.\n");
++ }
++ }
+ }
- for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
- struct ext4_group_desc *gdp = ext4_get_group_desc (sb, i, NULL);
-diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
-index 5489703..bb717cb 100644
---- a/fs/ext4/inode.c
-+++ b/fs/ext4/inode.c
-@@ -105,7 +105,7 @@ int ext4_forget(handle_t *handle, int is_metadata, struct inode *inode,
+ if (ro) {
+- fs_warn(sdp, "jid=%u: Can't replay: read-only FS\n",
+- jd->jd_jid);
++ fs_warn(sdp, "jid=%u: Can't replay: read-only block "
++ "device\n", jd->jd_jid);
+ error = -EROFS;
+ goto fail_gunlock_tr;
+ }
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 708c287..3552110 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -25,10 +25,10 @@
+ #include "rgrp.h"
+ #include "super.h"
+ #include "trans.h"
+-#include "ops_file.h"
+ #include "util.h"
+ #include "log.h"
+ #include "inode.h"
++#include "ops_address.h"
+
+ #define BFITNOENT ((u32)~0)
+ #define NO_BLOCK ((u64)~0)
+@@ -126,41 +126,43 @@ static unsigned char gfs2_testbit(struct gfs2_rgrpd *rgd, unsigned char *buffer,
+ * Return: the block number (bitmap buffer scope) that was found
*/
- static unsigned long blocks_for_truncate(struct inode *inode)
+
+-static u32 gfs2_bitfit(struct gfs2_rgrpd *rgd, unsigned char *buffer,
+- unsigned int buflen, u32 goal,
+- unsigned char old_state)
++static u32 gfs2_bitfit(unsigned char *buffer, unsigned int buflen, u32 goal,
++ unsigned char old_state)
{
-- unsigned long needed;
-+ ext4_lblk_t needed;
+- unsigned char *byte, *end, alloc;
++ unsigned char *byte;
+ u32 blk = goal;
+- unsigned int bit;
++ unsigned int bit, bitlong;
++ unsigned long *plong, plong55;
- needed = inode->i_blocks >> (inode->i_sb->s_blocksize_bits - 9);
+ byte = buffer + (goal / GFS2_NBBY);
++ plong = (unsigned long *)(buffer + (goal / GFS2_NBBY));
+ bit = (goal % GFS2_NBBY) * GFS2_BIT_SIZE;
+- end = buffer + buflen;
+- alloc = (old_state == GFS2_BLKST_FREE) ? 0x55 : 0;
+-
+- while (byte < end) {
+- /* If we're looking for a free block we can eliminate all
+- bitmap settings with 0x55, which represents four data
+- blocks in a row. If we're looking for a data block, we can
+- eliminate 0x00 which corresponds to four free blocks. */
+- if ((*byte & 0x55) == alloc) {
+- blk += (8 - bit) >> 1;
+-
+- bit = 0;
+- byte++;
+-
++ bitlong = bit;
++#if BITS_PER_LONG == 32
++ plong55 = 0x55555555;
++#else
++ plong55 = 0x5555555555555555;
++#endif
++ while (byte < buffer + buflen) {
++
++ if (bitlong == 0 && old_state == 0 && *plong == plong55) {
++ plong++;
++ byte += sizeof(unsigned long);
++ blk += sizeof(unsigned long) * GFS2_NBBY;
+ continue;
+ }
+-
+ if (((*byte >> bit) & GFS2_BIT_MASK) == old_state)
+ return blk;
+-
+ bit += GFS2_BIT_SIZE;
+ if (bit >= 8) {
+ bit = 0;
+ byte++;
+ }
++ bitlong += GFS2_BIT_SIZE;
++ if (bitlong >= sizeof(unsigned long) * 8) {
++ bitlong = 0;
++ plong++;
++ }
-@@ -243,13 +243,6 @@ static inline void add_chain(Indirect *p, struct buffer_head *bh, __le32 *v)
- p->bh = bh;
- }
+ blk++;
+ }
+@@ -817,11 +819,9 @@ void gfs2_rgrp_repolish_clones(struct gfs2_rgrpd *rgd)
--static int verify_chain(Indirect *from, Indirect *to)
--{
-- while (from <= to && from->key == *from->p)
-- from++;
-- return (from > to);
--}
+ struct gfs2_alloc *gfs2_alloc_get(struct gfs2_inode *ip)
+ {
+- struct gfs2_alloc *al = &ip->i_alloc;
-
+- /* FIXME: Should assert that the correct locks are held here... */
+- memset(al, 0, sizeof(*al));
+- return al;
++ BUG_ON(ip->i_alloc != NULL);
++ ip->i_alloc = kzalloc(sizeof(struct gfs2_alloc), GFP_KERNEL);
++ return ip->i_alloc;
+ }
+
/**
- * ext4_block_to_path - parse the block number into array of offsets
- * @inode: inode in question (we are only interested in its superblock)
-@@ -282,7 +275,8 @@ static int verify_chain(Indirect *from, Indirect *to)
- */
+@@ -1059,26 +1059,34 @@ static struct inode *get_local_rgrp(struct gfs2_inode *ip, u64 *last_unlinked)
+ struct inode *inode = NULL;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ struct gfs2_rgrpd *rgd, *begin = NULL;
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ int flags = LM_FLAG_TRY;
+ int skipped = 0;
+ int loops = 0;
+- int error;
++ int error, rg_locked;
- static int ext4_block_to_path(struct inode *inode,
-- long i_block, int offsets[4], int *boundary)
-+ ext4_lblk_t i_block,
-+ ext4_lblk_t offsets[4], int *boundary)
- {
- int ptrs = EXT4_ADDR_PER_BLOCK(inode->i_sb);
- int ptrs_bits = EXT4_ADDR_PER_BLOCK_BITS(inode->i_sb);
-@@ -313,7 +307,10 @@ static int ext4_block_to_path(struct inode *inode,
- offsets[n++] = i_block & (ptrs - 1);
- final = ptrs;
- } else {
-- ext4_warning(inode->i_sb, "ext4_block_to_path", "block > big");
-+ ext4_warning(inode->i_sb, "ext4_block_to_path",
-+ "block %lu > max",
-+ i_block + direct_blocks +
-+ indirect_blocks + double_blocks);
- }
- if (boundary)
- *boundary = final - 1 - (i_block & (ptrs - 1));
-@@ -344,12 +341,14 @@ static int ext4_block_to_path(struct inode *inode,
- * (pointer to last triple returned, *@err == 0)
- * or when it gets an IO error reading an indirect block
- * (ditto, *@err == -EIO)
-- * or when it notices that chain had been changed while it was reading
-- * (ditto, *@err == -EAGAIN)
- * or when it reads all @depth-1 indirect blocks successfully and finds
- * the whole chain, all way to the data (returns %NULL, *err == 0).
-+ *
-+ * Need to be called with
-+ * down_read(&EXT4_I(inode)->i_data_sem)
- */
--static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
-+static Indirect *ext4_get_branch(struct inode *inode, int depth,
-+ ext4_lblk_t *offsets,
- Indirect chain[4], int *err)
- {
- struct super_block *sb = inode->i_sb;
-@@ -365,9 +364,6 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
- bh = sb_bread(sb, le32_to_cpu(p->key));
- if (!bh)
- goto failure;
-- /* Reader: pointers */
-- if (!verify_chain(chain, p))
-- goto changed;
- add_chain(++p, bh, (__le32*)bh->b_data + *++offsets);
- /* Reader: end */
- if (!p->key)
-@@ -375,10 +371,6 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth, int *offsets,
- }
- return NULL;
+ /* Try recently successful rgrps */
--changed:
-- brelse(bh);
-- *err = -EAGAIN;
-- goto no_block;
- failure:
- *err = -EIO;
- no_block:
-@@ -445,7 +437,7 @@ static ext4_fsblk_t ext4_find_near(struct inode *inode, Indirect *ind)
- * stores it in *@goal and returns zero.
- */
+ rgd = recent_rgrp_first(sdp, ip->i_last_rg_alloc);
--static ext4_fsblk_t ext4_find_goal(struct inode *inode, long block,
-+static ext4_fsblk_t ext4_find_goal(struct inode *inode, ext4_lblk_t block,
- Indirect chain[4], Indirect *partial)
- {
- struct ext4_block_alloc_info *block_i;
-@@ -559,7 +551,7 @@ static int ext4_alloc_blocks(handle_t *handle, struct inode *inode,
- return ret;
- failed_out:
- for (i = 0; i <index; i++)
-- ext4_free_blocks(handle, inode, new_blocks[i], 1);
-+ ext4_free_blocks(handle, inode, new_blocks[i], 1, 0);
- return ret;
- }
+ while (rgd) {
+- error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
+- LM_FLAG_TRY, &al->al_rgd_gh);
++ rg_locked = 0;
++
++ if (gfs2_glock_is_locked_by_me(rgd->rd_gl)) {
++ rg_locked = 1;
++ error = 0;
++ } else {
++ error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
++ LM_FLAG_TRY, &al->al_rgd_gh);
++ }
+ switch (error) {
+ case 0:
+ if (try_rgrp_fit(rgd, al))
+ goto out;
+ if (rgd->rd_flags & GFS2_RDF_CHECK)
+ inode = try_rgrp_unlink(rgd, last_unlinked);
+- gfs2_glock_dq_uninit(&al->al_rgd_gh);
++ if (!rg_locked)
++ gfs2_glock_dq_uninit(&al->al_rgd_gh);
+ if (inode)
+ return inode;
+ rgd = recent_rgrp_next(rgd, 1);
+@@ -1098,15 +1106,23 @@ static struct inode *get_local_rgrp(struct gfs2_inode *ip, u64 *last_unlinked)
+ begin = rgd = forward_rgrp_get(sdp);
-@@ -590,7 +582,7 @@ failed_out:
- */
- static int ext4_alloc_branch(handle_t *handle, struct inode *inode,
- int indirect_blks, int *blks, ext4_fsblk_t goal,
-- int *offsets, Indirect *branch)
-+ ext4_lblk_t *offsets, Indirect *branch)
+ for (;;) {
+- error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, flags,
+- &al->al_rgd_gh);
++ rg_locked = 0;
++
++ if (gfs2_glock_is_locked_by_me(rgd->rd_gl)) {
++ rg_locked = 1;
++ error = 0;
++ } else {
++ error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, flags,
++ &al->al_rgd_gh);
++ }
+ switch (error) {
+ case 0:
+ if (try_rgrp_fit(rgd, al))
+ goto out;
+ if (rgd->rd_flags & GFS2_RDF_CHECK)
+ inode = try_rgrp_unlink(rgd, last_unlinked);
+- gfs2_glock_dq_uninit(&al->al_rgd_gh);
++ if (!rg_locked)
++ gfs2_glock_dq_uninit(&al->al_rgd_gh);
+ if (inode)
+ return inode;
+ break;
+@@ -1158,7 +1174,7 @@ out:
+ int gfs2_inplace_reserve_i(struct gfs2_inode *ip, char *file, unsigned int line)
{
- int blocksize = inode->i_sb->s_blocksize;
- int i, n = 0;
-@@ -658,9 +650,9 @@ failed:
- ext4_journal_forget(handle, branch[i].bh);
- }
- for (i = 0; i <indirect_blks; i++)
-- ext4_free_blocks(handle, inode, new_blocks[i], 1);
-+ ext4_free_blocks(handle, inode, new_blocks[i], 1, 0);
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct inode *inode;
+ int error = 0;
+ u64 last_unlinked = NO_BLOCK;
+@@ -1204,7 +1220,7 @@ try_again:
+ void gfs2_inplace_release(struct gfs2_inode *ip)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
-- ext4_free_blocks(handle, inode, new_blocks[i], num);
-+ ext4_free_blocks(handle, inode, new_blocks[i], num, 0);
+ if (gfs2_assert_warn(sdp, al->al_alloced <= al->al_requested) == -1)
+ fs_warn(sdp, "al_alloced = %u, al_requested = %u "
+@@ -1213,7 +1229,8 @@ void gfs2_inplace_release(struct gfs2_inode *ip)
+ al->al_line);
- return err;
+ al->al_rgd = NULL;
+- gfs2_glock_dq_uninit(&al->al_rgd_gh);
++ if (al->al_rgd_gh.gh_gl)
++ gfs2_glock_dq_uninit(&al->al_rgd_gh);
+ if (ip != GFS2_I(sdp->sd_rindex))
+ gfs2_glock_dq_uninit(&al->al_ri_gh);
}
-@@ -680,7 +672,7 @@ failed:
- * chain to new block and return 0.
- */
- static int ext4_splice_branch(handle_t *handle, struct inode *inode,
-- long block, Indirect *where, int num, int blks)
-+ ext4_lblk_t block, Indirect *where, int num, int blks)
+@@ -1301,11 +1318,10 @@ static u32 rgblk_search(struct gfs2_rgrpd *rgd, u32 goal,
+ /* The GFS2_BLKST_UNLINKED state doesn't apply to the clone
+ bitmaps, so we must search the originals for that. */
+ if (old_state != GFS2_BLKST_UNLINKED && bi->bi_clone)
+- blk = gfs2_bitfit(rgd, bi->bi_clone + bi->bi_offset,
++ blk = gfs2_bitfit(bi->bi_clone + bi->bi_offset,
+ bi->bi_len, goal, old_state);
+ else
+- blk = gfs2_bitfit(rgd,
+- bi->bi_bh->b_data + bi->bi_offset,
++ blk = gfs2_bitfit(bi->bi_bh->b_data + bi->bi_offset,
+ bi->bi_len, goal, old_state);
+ if (blk != BFITNOENT)
+ break;
+@@ -1394,7 +1410,7 @@ static struct gfs2_rgrpd *rgblk_free(struct gfs2_sbd *sdp, u64 bstart,
+ u64 gfs2_alloc_data(struct gfs2_inode *ip)
{
- int i;
- int err = 0;
-@@ -757,9 +749,10 @@ err_out:
- for (i = 1; i <= num; i++) {
- BUFFER_TRACE(where[i].bh, "call jbd2_journal_forget");
- ext4_journal_forget(handle, where[i].bh);
-- ext4_free_blocks(handle,inode,le32_to_cpu(where[i-1].key),1);
-+ ext4_free_blocks(handle, inode,
-+ le32_to_cpu(where[i-1].key), 1, 0);
- }
-- ext4_free_blocks(handle, inode, le32_to_cpu(where[num].key), blks);
-+ ext4_free_blocks(handle, inode, le32_to_cpu(where[num].key), blks, 0);
-
- return err;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_rgrpd *rgd = al->al_rgd;
+ u32 goal, blk;
+ u64 block;
+@@ -1439,7 +1455,7 @@ u64 gfs2_alloc_data(struct gfs2_inode *ip)
+ u64 gfs2_alloc_meta(struct gfs2_inode *ip)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+- struct gfs2_alloc *al = &ip->i_alloc;
++ struct gfs2_alloc *al = ip->i_alloc;
+ struct gfs2_rgrpd *rgd = al->al_rgd;
+ u32 goal, blk;
+ u64 block;
+@@ -1485,7 +1501,7 @@ u64 gfs2_alloc_meta(struct gfs2_inode *ip)
+ u64 gfs2_alloc_di(struct gfs2_inode *dip, u64 *generation)
+ {
+ struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
+- struct gfs2_alloc *al = &dip->i_alloc;
++ struct gfs2_alloc *al = dip->i_alloc;
+ struct gfs2_rgrpd *rgd = al->al_rgd;
+ u32 blk;
+ u64 block;
+diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
+index b4c6adf..149bb16 100644
+--- a/fs/gfs2/rgrp.h
++++ b/fs/gfs2/rgrp.h
+@@ -32,7 +32,9 @@ void gfs2_rgrp_repolish_clones(struct gfs2_rgrpd *rgd);
+ struct gfs2_alloc *gfs2_alloc_get(struct gfs2_inode *ip);
+ static inline void gfs2_alloc_put(struct gfs2_inode *ip)
+ {
+- return; /* So we can see where ip->i_alloc is used */
++ BUG_ON(ip->i_alloc == NULL);
++ kfree(ip->i_alloc);
++ ip->i_alloc = NULL;
}
-@@ -782,14 +775,19 @@ err_out:
- * return > 0, # of blocks mapped or allocated.
- * return = 0, if plain lookup failed.
- * return < 0, error case.
-+ *
-+ *
-+ * Need to be called with
-+ * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
-+ * (ie, create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
- */
- int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
-- sector_t iblock, unsigned long maxblocks,
-+ ext4_lblk_t iblock, unsigned long maxblocks,
- struct buffer_head *bh_result,
- int create, int extend_disksize)
+
+ int gfs2_inplace_reserve_i(struct gfs2_inode *ip,
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index dd3e737..ef0562c 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
+- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
++ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+@@ -51,13 +51,9 @@ void gfs2_tune_init(struct gfs2_tune *gt)
{
- int err = -EIO;
-- int offsets[4];
-+ ext4_lblk_t offsets[4];
- Indirect chain[4];
- Indirect *partial;
- ext4_fsblk_t goal;
-@@ -803,7 +801,8 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
+ spin_lock_init(&gt->gt_spin);
- J_ASSERT(!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL));
- J_ASSERT(handle != NULL || create == 0);
-- depth = ext4_block_to_path(inode,iblock,offsets,&blocks_to_boundary);
-+ depth = ext4_block_to_path(inode, iblock, offsets,
-+ &blocks_to_boundary);
+- gt->gt_ilimit = 100;
+- gt->gt_ilimit_tries = 3;
+- gt->gt_ilimit_min = 1;
+ gt->gt_demote_secs = 300;
+ gt->gt_incore_log_blocks = 1024;
+ gt->gt_log_flush_secs = 60;
+- gt->gt_jindex_refresh_secs = 60;
+ gt->gt_recoverd_secs = 60;
+ gt->gt_logd_secs = 1;
+ gt->gt_quotad_secs = 5;
+@@ -71,10 +67,8 @@ void gfs2_tune_init(struct gfs2_tune *gt)
+ gt->gt_new_files_jdata = 0;
+ gt->gt_new_files_directio = 0;
+ gt->gt_max_readahead = 1 << 18;
+- gt->gt_lockdump_size = 131072;
+ gt->gt_stall_secs = 600;
+ gt->gt_complain_secs = 10;
+- gt->gt_reclaim_limit = 5000;
+ gt->gt_statfs_quantum = 30;
+ gt->gt_statfs_slow = 0;
+ }
+@@ -393,6 +387,7 @@ int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
+ if (!jd)
+ break;
- if (depth == 0)
- goto out;
-@@ -819,18 +818,6 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
- while (count < maxblocks && count <= blocks_to_boundary) {
- ext4_fsblk_t blk;
++ INIT_LIST_HEAD(&jd->extent_list);
+ jd->jd_inode = gfs2_lookupi(sdp->sd_jindex, &name, 1, NULL);
+ if (!jd->jd_inode || IS_ERR(jd->jd_inode)) {
+ if (!jd->jd_inode)
+@@ -422,8 +417,9 @@ int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
-- if (!verify_chain(chain, partial)) {
-- /*
-- * Indirect block might be removed by
-- * truncate while we were reading it.
-- * Handling of that case: forget what we've
-- * got now. Flag the err as EAGAIN, so it
-- * will reread.
-- */
-- err = -EAGAIN;
-- count = 0;
-- break;
-- }
- blk = le32_to_cpu(*(chain[depth-1].p + count));
+ void gfs2_jindex_free(struct gfs2_sbd *sdp)
+ {
+- struct list_head list;
++ struct list_head list, *head;
+ struct gfs2_jdesc *jd;
++ struct gfs2_journal_extent *jext;
- if (blk == first_block + count)
-@@ -838,44 +825,13 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
- else
- break;
- }
-- if (err != -EAGAIN)
-- goto got_it;
-+ goto got_it;
- }
+ spin_lock(&sdp->sd_jindex_spin);
+ list_add(&list, &sdp->sd_jindex_list);
+@@ -433,6 +429,14 @@ void gfs2_jindex_free(struct gfs2_sbd *sdp)
- /* Next simple case - plain lookup or failed read of indirect block */
- if (!create || err == -EIO)
- goto cleanup;
+ while (!list_empty(&list)) {
+ jd = list_entry(list.next, struct gfs2_jdesc, jd_list);
++ head = &jd->extent_list;
++ while (!list_empty(head)) {
++ jext = list_entry(head->next,
++ struct gfs2_journal_extent,
++ extent_list);
++ list_del(&jext->extent_list);
++ kfree(jext);
++ }
+ list_del(&jd->jd_list);
+ iput(jd->jd_inode);
+ kfree(jd);
+@@ -543,7 +547,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ if (error)
+ return error;
-- mutex_lock(&ei->truncate_mutex);
--
-- /*
-- * If the indirect block is missing while we are reading
-- * the chain(ext4_get_branch() returns -EAGAIN err), or
-- * if the chain has been changed after we grab the semaphore,
-- * (either because another process truncated this branch, or
-- * another get_block allocated this branch) re-grab the chain to see if
-- * the request block has been allocated or not.
-- *
-- * Since we already block the truncate/other get_block
-- * at this point, we will have the current copy of the chain when we
-- * splice the branch into the tree.
-- */
-- if (err == -EAGAIN || !verify_chain(chain, partial)) {
-- while (partial > chain) {
-- brelse(partial->bh);
-- partial--;
-- }
-- partial = ext4_get_branch(inode, depth, offsets, chain, &err);
-- if (!partial) {
-- count++;
-- mutex_unlock(&ei->truncate_mutex);
-- if (err)
-- goto cleanup;
-- clear_buffer_new(bh_result);
-- goto got_it;
-- }
-- }
--
- /*
- * Okay, we need to do block allocation. Lazily initialize the block
- * allocation info here if necessary
-@@ -911,13 +867,12 @@ int ext4_get_blocks_handle(handle_t *handle, struct inode *inode,
- err = ext4_splice_branch(handle, inode, iblock,
- partial, indirect_blks, count);
- /*
-- * i_disksize growing is protected by truncate_mutex. Don't forget to
-+ * i_disksize growing is protected by i_data_sem. Don't forget to
- * protect it if you're about to implement concurrent
- * ext4_get_block() -bzzz
- */
- if (!err && extend_disksize && inode->i_size > ei->i_disksize)
- ei->i_disksize = inode->i_size;
-- mutex_unlock(&ei->truncate_mutex);
- if (err)
- goto cleanup;
+- gfs2_meta_cache_flush(ip);
+ j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
-@@ -942,6 +897,47 @@ out:
+ error = gfs2_find_jhead(sdp->sd_jdesc, &head);
+@@ -686,9 +689,7 @@ void gfs2_statfs_change(struct gfs2_sbd *sdp, s64 total, s64 free,
+ if (error)
+ return;
- #define DIO_CREDITS (EXT4_RESERVE_TRANS_BLOCKS + 32)
+- mutex_lock(&sdp->sd_statfs_mutex);
+ gfs2_trans_add_bh(l_ip->i_gl, l_bh, 1);
+- mutex_unlock(&sdp->sd_statfs_mutex);
-+int ext4_get_blocks_wrap(handle_t *handle, struct inode *inode, sector_t block,
-+ unsigned long max_blocks, struct buffer_head *bh,
-+ int create, int extend_disksize)
-+{
-+ int retval;
-+ /*
-+ * Try to see if we can get the block without requesting
-+ * for new file system block.
-+ */
-+ down_read((&EXT4_I(inode)->i_data_sem));
-+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
-+ retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
-+ bh, 0, 0);
-+ } else {
-+ retval = ext4_get_blocks_handle(handle,
-+ inode, block, max_blocks, bh, 0, 0);
-+ }
-+ up_read((&EXT4_I(inode)->i_data_sem));
-+ if (!create || (retval > 0))
-+ return retval;
-+
-+ /*
-+ * We need to allocate new blocks which will result
-+ * in i_data update
-+ */
-+ down_write((&EXT4_I(inode)->i_data_sem));
-+ /*
-+ * We need to check for EXT4 here because migrate
-+ * could have changed the inode type in between
-+ */
-+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
-+ retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
-+ bh, create, extend_disksize);
-+ } else {
-+ retval = ext4_get_blocks_handle(handle, inode, block,
-+ max_blocks, bh, create, extend_disksize);
-+ }
-+ up_write((&EXT4_I(inode)->i_data_sem));
-+ return retval;
-+}
-+
- static int ext4_get_block(struct inode *inode, sector_t iblock,
- struct buffer_head *bh_result, int create)
- {
-@@ -996,7 +992,7 @@ get_block:
- * `handle' can be NULL if create is zero
- */
- struct buffer_head *ext4_getblk(handle_t *handle, struct inode *inode,
-- long block, int create, int *errp)
-+ ext4_lblk_t block, int create, int *errp)
- {
- struct buffer_head dummy;
- int fatal = 0, err;
-@@ -1063,7 +1059,7 @@ err:
- }
+ spin_lock(&sdp->sd_statfs_spin);
+ l_sc->sc_total += total;
+@@ -736,9 +737,7 @@ int gfs2_statfs_sync(struct gfs2_sbd *sdp)
+ if (error)
+ goto out_bh2;
- struct buffer_head *ext4_bread(handle_t *handle, struct inode *inode,
-- int block, int create, int *err)
-+ ext4_lblk_t block, int create, int *err)
- {
- struct buffer_head * bh;
+- mutex_lock(&sdp->sd_statfs_mutex);
+ gfs2_trans_add_bh(l_ip->i_gl, l_bh, 1);
+- mutex_unlock(&sdp->sd_statfs_mutex);
-@@ -1446,7 +1442,7 @@ static int jbd2_journal_dirty_data_fn(handle_t *handle, struct buffer_head *bh)
- * ext4_file_write() -> generic_file_write() -> __alloc_pages() -> ...
- *
- * Same applies to ext4_get_block(). We will deadlock on various things like
-- * lock_journal and i_truncate_mutex.
-+ * lock_journal and i_data_sem
- *
- * Setting PF_MEMALLOC here doesn't work - too many internal memory
- * allocations fail.
-@@ -1828,7 +1824,8 @@ int ext4_block_truncate_page(handle_t *handle, struct page *page,
- {
- ext4_fsblk_t index = from >> PAGE_CACHE_SHIFT;
- unsigned offset = from & (PAGE_CACHE_SIZE-1);
-- unsigned blocksize, iblock, length, pos;
-+ unsigned blocksize, length, pos;
-+ ext4_lblk_t iblock;
- struct inode *inode = mapping->host;
- struct buffer_head *bh;
- int err = 0;
-@@ -1964,7 +1961,7 @@ static inline int all_zeroes(__le32 *p, __le32 *q)
- * (no partially truncated stuff there). */
+ spin_lock(&sdp->sd_statfs_spin);
+ m_sc->sc_total += l_sc->sc_total;
+diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
+index 06e0b77..eaa3b7b 100644
+--- a/fs/gfs2/sys.c
++++ b/fs/gfs2/sys.c
+@@ -32,7 +32,8 @@ spinlock_t gfs2_sys_margs_lock;
- static Indirect *ext4_find_shared(struct inode *inode, int depth,
-- int offsets[4], Indirect chain[4], __le32 *top)
-+ ext4_lblk_t offsets[4], Indirect chain[4], __le32 *top)
+ static ssize_t id_show(struct gfs2_sbd *sdp, char *buf)
{
- Indirect *partial, *p;
- int k, err;
-@@ -2048,15 +2045,15 @@ static void ext4_clear_blocks(handle_t *handle, struct inode *inode,
- for (p = first; p < last; p++) {
- u32 nr = le32_to_cpu(*p);
- if (nr) {
-- struct buffer_head *bh;
-+ struct buffer_head *tbh;
-
- *p = 0;
-- bh = sb_find_get_block(inode->i_sb, nr);
-- ext4_forget(handle, 0, inode, bh, nr);
-+ tbh = sb_find_get_block(inode->i_sb, nr);
-+ ext4_forget(handle, 0, inode, tbh, nr);
- }
- }
-
-- ext4_free_blocks(handle, inode, block_to_free, count);
-+ ext4_free_blocks(handle, inode, block_to_free, count, 0);
+- return snprintf(buf, PAGE_SIZE, "%s\n", sdp->sd_vfs->s_id);
++ return snprintf(buf, PAGE_SIZE, "%u:%u\n",
++ MAJOR(sdp->sd_vfs->s_dev), MINOR(sdp->sd_vfs->s_dev));
}
- /**
-@@ -2229,7 +2226,7 @@ static void ext4_free_branches(handle_t *handle, struct inode *inode,
- ext4_journal_test_restart(handle, inode);
- }
-
-- ext4_free_blocks(handle, inode, nr, 1);
-+ ext4_free_blocks(handle, inode, nr, 1, 1);
+ static ssize_t fsname_show(struct gfs2_sbd *sdp, char *buf)
+@@ -221,9 +222,7 @@ static struct kobj_type gfs2_ktype = {
+ .sysfs_ops = &gfs2_attr_ops,
+ };
- if (parent_bh) {
- /*
-@@ -2289,12 +2286,12 @@ void ext4_truncate(struct inode *inode)
- __le32 *i_data = ei->i_data;
- int addr_per_block = EXT4_ADDR_PER_BLOCK(inode->i_sb);
- struct address_space *mapping = inode->i_mapping;
-- int offsets[4];
-+ ext4_lblk_t offsets[4];
- Indirect chain[4];
- Indirect *partial;
- __le32 nr = 0;
- int n;
-- long last_block;
-+ ext4_lblk_t last_block;
- unsigned blocksize = inode->i_sb->s_blocksize;
- struct page *page;
+-static struct kset gfs2_kset = {
+- .ktype = &gfs2_ktype,
+-};
++static struct kset *gfs2_kset;
-@@ -2320,8 +2317,10 @@ void ext4_truncate(struct inode *inode)
- return;
- }
+ /*
+ * display struct lm_lockstruct fields
+@@ -427,13 +426,11 @@ TUNE_ATTR_2(name, name##_store)
+ TUNE_ATTR(demote_secs, 0);
+ TUNE_ATTR(incore_log_blocks, 0);
+ TUNE_ATTR(log_flush_secs, 0);
+-TUNE_ATTR(jindex_refresh_secs, 0);
+ TUNE_ATTR(quota_warn_period, 0);
+ TUNE_ATTR(quota_quantum, 0);
+ TUNE_ATTR(atime_quantum, 0);
+ TUNE_ATTR(max_readahead, 0);
+ TUNE_ATTR(complain_secs, 0);
+-TUNE_ATTR(reclaim_limit, 0);
+ TUNE_ATTR(statfs_slow, 0);
+ TUNE_ATTR(new_files_jdata, 0);
+ TUNE_ATTR(new_files_directio, 0);
+@@ -450,13 +447,11 @@ static struct attribute *tune_attrs[] = {
+ &tune_attr_demote_secs.attr,
+ &tune_attr_incore_log_blocks.attr,
+ &tune_attr_log_flush_secs.attr,
+- &tune_attr_jindex_refresh_secs.attr,
+ &tune_attr_quota_warn_period.attr,
+ &tune_attr_quota_quantum.attr,
+ &tune_attr_atime_quantum.attr,
+ &tune_attr_max_readahead.attr,
+ &tune_attr_complain_secs.attr,
+- &tune_attr_reclaim_limit.attr,
+ &tune_attr_statfs_slow.attr,
+ &tune_attr_quota_simul_sync.attr,
+ &tune_attr_quota_cache_secs.attr,
+@@ -495,14 +490,9 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
+ {
+ int error;
-- if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)
-- return ext4_ext_truncate(inode, page);
-+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
-+ ext4_ext_truncate(inode, page);
-+ return;
-+ }
+- sdp->sd_kobj.kset = &gfs2_kset;
+- sdp->sd_kobj.ktype = &gfs2_ktype;
+-
+- error = kobject_set_name(&sdp->sd_kobj, "%s", sdp->sd_table_name);
+- if (error)
+- goto fail;
+-
+- error = kobject_register(&sdp->sd_kobj);
++ sdp->sd_kobj.kset = gfs2_kset;
++ error = kobject_init_and_add(&sdp->sd_kobj, &gfs2_ktype, NULL,
++ "%s", sdp->sd_table_name);
+ if (error)
+ goto fail;
- handle = start_transaction(inode);
- if (IS_ERR(handle)) {
-@@ -2369,7 +2368,7 @@ void ext4_truncate(struct inode *inode)
- * From here we block out all ext4_get_block() callers who want to
- * modify the block allocation tree.
- */
-- mutex_lock(&ei->truncate_mutex);
-+ down_write(&ei->i_data_sem);
+@@ -522,6 +512,7 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
+ if (error)
+ goto fail_args;
- if (n == 1) { /* direct blocks */
- ext4_free_data(handle, inode, NULL, i_data+offsets[0],
-@@ -2433,7 +2432,7 @@ do_indirects:
++ kobject_uevent(&sdp->sd_kobj, KOBJ_ADD);
+ return 0;
- ext4_discard_reservation(inode);
+ fail_args:
+@@ -531,7 +522,7 @@ fail_counters:
+ fail_lockstruct:
+ sysfs_remove_group(&sdp->sd_kobj, &lockstruct_group);
+ fail_reg:
+- kobject_unregister(&sdp->sd_kobj);
++ kobject_put(&sdp->sd_kobj);
+ fail:
+ fs_err(sdp, "error %d adding sysfs files", error);
+ return error;
+@@ -543,21 +534,22 @@ void gfs2_sys_fs_del(struct gfs2_sbd *sdp)
+ sysfs_remove_group(&sdp->sd_kobj, &args_group);
+ sysfs_remove_group(&sdp->sd_kobj, &counters_group);
+ sysfs_remove_group(&sdp->sd_kobj, &lockstruct_group);
+- kobject_unregister(&sdp->sd_kobj);
++ kobject_put(&sdp->sd_kobj);
+ }
-- mutex_unlock(&ei->truncate_mutex);
-+ up_write(&ei->i_data_sem);
- inode->i_mtime = inode->i_ctime = ext4_current_time(inode);
- ext4_mark_inode_dirty(handle, inode);
+ int gfs2_sys_init(void)
+ {
+ gfs2_sys_margs = NULL;
+ spin_lock_init(&gfs2_sys_margs_lock);
+- kobject_set_name(&gfs2_kset.kobj, "gfs2");
+- kobj_set_kset_s(&gfs2_kset, fs_subsys);
+- return kset_register(&gfs2_kset);
++ gfs2_kset = kset_create_and_add("gfs2", NULL, fs_kobj);
++ if (!gfs2_kset)
++ return -ENOMEM;
++ return 0;
+ }
-@@ -2460,7 +2459,8 @@ out_stop:
- static ext4_fsblk_t ext4_get_inode_block(struct super_block *sb,
- unsigned long ino, struct ext4_iloc *iloc)
+ void gfs2_sys_uninit(void)
{
-- unsigned long desc, group_desc, block_group;
-+ unsigned long desc, group_desc;
-+ ext4_group_t block_group;
- unsigned long offset;
- ext4_fsblk_t block;
- struct buffer_head *bh;
-@@ -2547,7 +2547,7 @@ static int __ext4_get_inode_loc(struct inode *inode,
- struct ext4_group_desc *desc;
- int inodes_per_buffer;
- int inode_offset, i;
-- int block_group;
-+ ext4_group_t block_group;
- int start;
+ kfree(gfs2_sys_margs);
+- kset_unregister(&gfs2_kset);
++ kset_unregister(gfs2_kset);
+ }
- block_group = (inode->i_ino - 1) /
-@@ -2660,6 +2660,28 @@ void ext4_get_inode_flags(struct ext4_inode_info *ei)
- if (flags & S_DIRSYNC)
- ei->i_flags |= EXT4_DIRSYNC_FL;
+diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
+index 717983e..73e5d92 100644
+--- a/fs/gfs2/trans.c
++++ b/fs/gfs2/trans.c
+@@ -114,11 +114,6 @@ void gfs2_trans_end(struct gfs2_sbd *sdp)
+ gfs2_log_flush(sdp, NULL);
}
-+static blkcnt_t ext4_inode_blocks(struct ext4_inode *raw_inode,
-+ struct ext4_inode_info *ei)
-+{
-+ blkcnt_t i_blocks ;
-+ struct inode *inode = &(ei->vfs_inode);
-+ struct super_block *sb = inode->i_sb;
-+
-+ if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
-+ EXT4_FEATURE_RO_COMPAT_HUGE_FILE)) {
-+ /* we are using combined 48 bit field */
-+ i_blocks = ((u64)le16_to_cpu(raw_inode->i_blocks_high)) << 32 |
-+ le32_to_cpu(raw_inode->i_blocks_lo);
-+ if (ei->i_flags & EXT4_HUGE_FILE_FL) {
-+ /* i_blocks represent file system block size */
-+ return i_blocks << (inode->i_blkbits - 9);
-+ } else {
-+ return i_blocks;
-+ }
-+ } else {
-+ return le32_to_cpu(raw_inode->i_blocks_lo);
-+ }
-+}
- void ext4_read_inode(struct inode * inode)
- {
-@@ -2687,7 +2709,6 @@ void ext4_read_inode(struct inode * inode)
- inode->i_gid |= le16_to_cpu(raw_inode->i_gid_high) << 16;
- }
- inode->i_nlink = le16_to_cpu(raw_inode->i_links_count);
-- inode->i_size = le32_to_cpu(raw_inode->i_size);
+-void gfs2_trans_add_gl(struct gfs2_glock *gl)
+-{
+- lops_add(gl->gl_sbd, &gl->gl_le);
+-}
+-
+ /**
+ * gfs2_trans_add_bh - Add a to-be-modified buffer to the current transaction
+ * @gl: the glock the buffer belongs to
+diff --git a/fs/gfs2/trans.h b/fs/gfs2/trans.h
+index 043d5f4..e826f0d 100644
+--- a/fs/gfs2/trans.h
++++ b/fs/gfs2/trans.h
+@@ -30,7 +30,6 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks,
- ei->i_state = 0;
- ei->i_dir_start_lookup = 0;
-@@ -2709,19 +2730,15 @@ void ext4_read_inode(struct inode * inode)
- * recovery code: that's fine, we're about to complete
- * the process of deleting those. */
- }
-- inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
- ei->i_flags = le32_to_cpu(raw_inode->i_flags);
-- ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl);
-+ inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
-+ ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
- if (EXT4_SB(inode->i_sb)->s_es->s_creator_os !=
-- cpu_to_le32(EXT4_OS_HURD))
-+ cpu_to_le32(EXT4_OS_HURD)) {
- ei->i_file_acl |=
- ((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32;
-- if (!S_ISREG(inode->i_mode)) {
-- ei->i_dir_acl = le32_to_cpu(raw_inode->i_dir_acl);
-- } else {
-- inode->i_size |=
-- ((__u64)le32_to_cpu(raw_inode->i_size_high)) << 32;
+ void gfs2_trans_end(struct gfs2_sbd *sdp);
+
+-void gfs2_trans_add_gl(struct gfs2_glock *gl);
+ void gfs2_trans_add_bh(struct gfs2_glock *gl, struct buffer_head *bh, int meta);
+ void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
+ void gfs2_trans_add_unrevoke(struct gfs2_sbd *sdp, u64 blkno);
+diff --git a/fs/inode.c b/fs/inode.c
+index ed35383..276ffd6 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1276,6 +1276,11 @@ void file_update_time(struct file *file)
+ sync_it = 1;
}
-+ inode->i_size = ext4_isize(raw_inode);
- ei->i_disksize = inode->i_size;
- inode->i_generation = le32_to_cpu(raw_inode->i_generation);
- ei->i_block_group = iloc.block_group;
-@@ -2765,6 +2782,13 @@ void ext4_read_inode(struct inode * inode)
- EXT4_INODE_GET_XTIME(i_atime, inode, raw_inode);
- EXT4_EINODE_GET_XTIME(i_crtime, ei, raw_inode);
-+ inode->i_version = le32_to_cpu(raw_inode->i_disk_version);
-+ if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE) {
-+ if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
-+ inode->i_version |=
-+ (__u64)(le32_to_cpu(raw_inode->i_version_hi)) << 32;
++ if (IS_I_VERSION(inode)) {
++ inode_inc_iversion(inode);
++ sync_it = 1;
+ }
+
- if (S_ISREG(inode->i_mode)) {
- inode->i_op = &ext4_file_inode_operations;
- inode->i_fop = &ext4_file_operations;
-@@ -2797,6 +2821,55 @@ bad_inode:
- return;
+ if (sync_it)
+ mark_inode_dirty_sync(inode);
}
+diff --git a/fs/ioprio.c b/fs/ioprio.c
+index e4e01bc..c4a1c3c 100644
+--- a/fs/ioprio.c
++++ b/fs/ioprio.c
+@@ -41,18 +41,28 @@ static int set_task_ioprio(struct task_struct *task, int ioprio)
+ return err;
-+static int ext4_inode_blocks_set(handle_t *handle,
-+ struct ext4_inode *raw_inode,
-+ struct ext4_inode_info *ei)
-+{
-+ struct inode *inode = &(ei->vfs_inode);
-+ u64 i_blocks = inode->i_blocks;
-+ struct super_block *sb = inode->i_sb;
-+ int err = 0;
-+
-+ if (i_blocks <= ~0U) {
-+ /*
-+ * i_blocks can be represented in a 32 bit variable
-+ * as multiple of 512 bytes
-+ */
-+ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
-+ raw_inode->i_blocks_high = 0;
-+ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
-+ } else if (i_blocks <= 0xffffffffffffULL) {
-+ /*
-+ * i_blocks can be represented in a 48 bit variable
-+ * as multiple of 512 bytes
-+ */
-+ err = ext4_update_rocompat_feature(handle, sb,
-+ EXT4_FEATURE_RO_COMPAT_HUGE_FILE);
-+ if (err)
-+ goto err_out;
-+ /* i_block is stored in the split 48 bit fields */
-+ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
-+ raw_inode->i_blocks_high = cpu_to_le16(i_blocks >> 32);
-+ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
-+ } else {
-+ /*
-+ * i_blocks should be represented in a 48 bit variable
-+ * as multiple of file system block size
-+ */
-+ err = ext4_update_rocompat_feature(handle, sb,
-+ EXT4_FEATURE_RO_COMPAT_HUGE_FILE);
-+ if (err)
-+ goto err_out;
-+ ei->i_flags |= EXT4_HUGE_FILE_FL;
-+ /* i_block is stored in file system block size */
-+ i_blocks = i_blocks >> (inode->i_blkbits - 9);
-+ raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
-+ raw_inode->i_blocks_high = cpu_to_le16(i_blocks >> 32);
+ task_lock(task);
++ do {
++ ioc = task->io_context;
++ /* see wmb() in current_io_context() */
++ smp_read_barrier_depends();
++ if (ioc)
++ break;
+
+- task->ioprio = ioprio;
+-
+- ioc = task->io_context;
+- /* see wmb() in current_io_context() */
+- smp_read_barrier_depends();
++ ioc = alloc_io_context(GFP_ATOMIC, -1);
++ if (!ioc) {
++ err = -ENOMEM;
++ break;
++ }
++ task->io_context = ioc;
++ } while (1);
+
+- if (ioc)
++ if (!err) {
++ ioc->ioprio = ioprio;
+ ioc->ioprio_changed = 1;
+ }
-+err_out:
+
+ task_unlock(task);
+- return 0;
+ return err;
-+}
-+
- /*
- * Post the struct inode info into an on-disk inode location in the
- * buffer-cache. This gobbles the caller's reference to the
-@@ -2845,47 +2918,42 @@ static int ext4_do_update_inode(handle_t *handle,
- raw_inode->i_gid_high = 0;
- }
- raw_inode->i_links_count = cpu_to_le16(inode->i_nlink);
-- raw_inode->i_size = cpu_to_le32(ei->i_disksize);
+ }
- EXT4_INODE_SET_XTIME(i_ctime, inode, raw_inode);
- EXT4_INODE_SET_XTIME(i_mtime, inode, raw_inode);
- EXT4_INODE_SET_XTIME(i_atime, inode, raw_inode);
- EXT4_EINODE_SET_XTIME(i_crtime, ei, raw_inode);
+ asmlinkage long sys_ioprio_set(int which, int who, int ioprio)
+@@ -75,8 +85,6 @@ asmlinkage long sys_ioprio_set(int which, int who, int ioprio)
-- raw_inode->i_blocks = cpu_to_le32(inode->i_blocks);
-+ if (ext4_inode_blocks_set(handle, raw_inode, ei))
-+ goto out_brelse;
- raw_inode->i_dtime = cpu_to_le32(ei->i_dtime);
- raw_inode->i_flags = cpu_to_le32(ei->i_flags);
- if (EXT4_SB(inode->i_sb)->s_es->s_creator_os !=
- cpu_to_le32(EXT4_OS_HURD))
- raw_inode->i_file_acl_high =
- cpu_to_le16(ei->i_file_acl >> 32);
-- raw_inode->i_file_acl = cpu_to_le32(ei->i_file_acl);
-- if (!S_ISREG(inode->i_mode)) {
-- raw_inode->i_dir_acl = cpu_to_le32(ei->i_dir_acl);
-- } else {
-- raw_inode->i_size_high =
-- cpu_to_le32(ei->i_disksize >> 32);
-- if (ei->i_disksize > 0x7fffffffULL) {
-- struct super_block *sb = inode->i_sb;
-- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
-- EXT4_FEATURE_RO_COMPAT_LARGE_FILE) ||
-- EXT4_SB(sb)->s_es->s_rev_level ==
-- cpu_to_le32(EXT4_GOOD_OLD_REV)) {
-- /* If this is the first large file
-- * created, add a flag to the superblock.
-- */
-- err = ext4_journal_get_write_access(handle,
-- EXT4_SB(sb)->s_sbh);
-- if (err)
-- goto out_brelse;
-- ext4_update_dynamic_rev(sb);
-- EXT4_SET_RO_COMPAT_FEATURE(sb,
-+ raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
-+ ext4_isize_set(raw_inode, ei->i_disksize);
-+ if (ei->i_disksize > 0x7fffffffULL) {
-+ struct super_block *sb = inode->i_sb;
-+ if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
-+ EXT4_FEATURE_RO_COMPAT_LARGE_FILE) ||
-+ EXT4_SB(sb)->s_es->s_rev_level ==
-+ cpu_to_le32(EXT4_GOOD_OLD_REV)) {
-+ /* If this is the first large file
-+ * created, add a flag to the superblock.
-+ */
-+ err = ext4_journal_get_write_access(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ if (err)
-+ goto out_brelse;
-+ ext4_update_dynamic_rev(sb);
-+ EXT4_SET_RO_COMPAT_FEATURE(sb,
- EXT4_FEATURE_RO_COMPAT_LARGE_FILE);
-- sb->s_dirt = 1;
-- handle->h_sync = 1;
-- err = ext4_journal_dirty_metadata(handle,
-- EXT4_SB(sb)->s_sbh);
-- }
-+ sb->s_dirt = 1;
-+ handle->h_sync = 1;
-+ err = ext4_journal_dirty_metadata(handle,
-+ EXT4_SB(sb)->s_sbh);
+ break;
+ case IOPRIO_CLASS_IDLE:
+- if (!capable(CAP_SYS_ADMIN))
+- return -EPERM;
+ break;
+ case IOPRIO_CLASS_NONE:
+ if (data)
+@@ -148,7 +156,9 @@ static int get_task_ioprio(struct task_struct *p)
+ ret = security_task_getioprio(p);
+ if (ret)
+ goto out;
+- ret = p->ioprio;
++ ret = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, IOPRIO_NORM);
++ if (p->io_context)
++ ret = p->io_context->ioprio;
+ out:
+ return ret;
+ }
+diff --git a/fs/jbd/checkpoint.c b/fs/jbd/checkpoint.c
+index 0f69c41..a5432bb 100644
+--- a/fs/jbd/checkpoint.c
++++ b/fs/jbd/checkpoint.c
+@@ -347,7 +347,8 @@ restart:
+ break;
+ }
+ retry = __process_buffer(journal, jh, bhs,&batch_count);
+- if (!retry && lock_need_resched(&journal->j_list_lock)){
++ if (!retry && (need_resched() ||
++ spin_needbreak(&journal->j_list_lock))) {
+ spin_unlock(&journal->j_list_lock);
+ retry = 1;
+ break;
+diff --git a/fs/jbd/commit.c b/fs/jbd/commit.c
+index 610264b..31853eb 100644
+--- a/fs/jbd/commit.c
++++ b/fs/jbd/commit.c
+@@ -265,7 +265,7 @@ write_out_data:
+ put_bh(bh);
}
- }
- raw_inode->i_generation = cpu_to_le32(inode->i_generation);
-@@ -2903,8 +2971,14 @@ static int ext4_do_update_inode(handle_t *handle,
- } else for (block = 0; block < EXT4_N_BLOCKS; block++)
- raw_inode->i_block[block] = ei->i_data[block];
-- if (ei->i_extra_isize)
-+ raw_inode->i_disk_version = cpu_to_le32(inode->i_version);
-+ if (ei->i_extra_isize) {
-+ if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
-+ raw_inode->i_version_hi =
-+ cpu_to_le32(inode->i_version >> 32);
- raw_inode->i_extra_isize = cpu_to_le16(ei->i_extra_isize);
-+ }
-+
+- if (lock_need_resched(&journal->j_list_lock)) {
++ if (need_resched() || spin_needbreak(&journal->j_list_lock)) {
+ spin_unlock(&journal->j_list_lock);
+ goto write_out_data;
+ }
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 3fccde7..6914598 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -232,7 +232,8 @@ __flush_batch(journal_t *journal, struct buffer_head **bhs, int *batch_count)
+ * Called under jbd_lock_bh_state(jh2bh(jh)), and drops it
+ */
+ static int __process_buffer(journal_t *journal, struct journal_head *jh,
+- struct buffer_head **bhs, int *batch_count)
++ struct buffer_head **bhs, int *batch_count,
++ transaction_t *transaction)
+ {
+ struct buffer_head *bh = jh2bh(jh);
+ int ret = 0;
+@@ -250,6 +251,7 @@ static int __process_buffer(journal_t *journal, struct journal_head *jh,
+ transaction_t *t = jh->b_transaction;
+ tid_t tid = t->t_tid;
- BUFFER_TRACE(bh, "call ext4_journal_dirty_metadata");
- rc = ext4_journal_dirty_metadata(handle, bh);
-@@ -3024,6 +3098,17 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
- ext4_journal_stop(handle);
++ transaction->t_chp_stats.cs_forced_to_close++;
+ spin_unlock(&journal->j_list_lock);
+ jbd_unlock_bh_state(bh);
+ jbd2_log_start_commit(journal, tid);
+@@ -279,6 +281,7 @@ static int __process_buffer(journal_t *journal, struct journal_head *jh,
+ bhs[*batch_count] = bh;
+ __buffer_relink_io(jh);
+ jbd_unlock_bh_state(bh);
++ transaction->t_chp_stats.cs_written++;
+ (*batch_count)++;
+ if (*batch_count == NR_BATCH) {
+ spin_unlock(&journal->j_list_lock);
+@@ -322,6 +325,8 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ if (!journal->j_checkpoint_transactions)
+ goto out;
+ transaction = journal->j_checkpoint_transactions;
++ if (transaction->t_chp_stats.cs_chp_time == 0)
++ transaction->t_chp_stats.cs_chp_time = jiffies;
+ this_tid = transaction->t_tid;
+ restart:
+ /*
+@@ -346,8 +351,10 @@ restart:
+ retry = 1;
+ break;
+ }
+- retry = __process_buffer(journal, jh, bhs,&batch_count);
+- if (!retry && lock_need_resched(&journal->j_list_lock)){
++ retry = __process_buffer(journal, jh, bhs, &batch_count,
++ transaction);
++ if (!retry && (need_resched() ||
++ spin_needbreak(&journal->j_list_lock))) {
+ spin_unlock(&journal->j_list_lock);
+ retry = 1;
+ break;
+@@ -602,15 +609,15 @@ int __jbd2_journal_remove_checkpoint(struct journal_head *jh)
+
+ /*
+ * There is one special case to worry about: if we have just pulled the
+- * buffer off a committing transaction's forget list, then even if the
+- * checkpoint list is empty, the transaction obviously cannot be
+- * dropped!
+- * buffer off a running or committing transaction's checkpoint list,
++ * then even if the checkpoint list is empty, the transaction obviously
++ * cannot be dropped!
+ *
+- * The locking here around j_committing_transaction is a bit sleazy.
++ * The locking here around t_state is a bit sleazy.
+ * See the comment at the end of jbd2_journal_commit_transaction().
+ */
+- if (transaction == journal->j_committing_transaction) {
+- JBUFFER_TRACE(jh, "belongs to committing transaction");
++ if (transaction->t_state != T_FINISHED) {
++ JBUFFER_TRACE(jh, "belongs to running/committing transaction");
+ goto out;
}
-+ if (attr->ia_valid & ATTR_SIZE) {
-+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
-+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
-+
-+ if (attr->ia_size > sbi->s_bitmap_maxbytes) {
-+ error = -EFBIG;
-+ goto err_out;
-+ }
-+ }
-+ }
-+
- if (S_ISREG(inode->i_mode) &&
- attr->ia_valid & ATTR_SIZE && attr->ia_size < inode->i_size) {
- handle_t *handle;
-@@ -3120,6 +3205,9 @@ int ext4_mark_iloc_dirty(handle_t *handle,
- {
- int err = 0;
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 6986f33..4f302d2 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -20,6 +20,8 @@
+ #include <linux/slab.h>
+ #include <linux/mm.h>
+ #include <linux/pagemap.h>
++#include <linux/jiffies.h>
++#include <linux/crc32.h>
-+ if (test_opt(inode->i_sb, I_VERSION))
-+ inode_inc_iversion(inode);
-+
- /* the do_update_inode consumes one bh->b_count */
- get_bh(iloc->bh);
+ /*
+ * Default IO end handler for temporary BJ_IO buffer_heads.
+@@ -92,19 +94,23 @@ static int inverted_lock(journal_t *journal, struct buffer_head *bh)
+ return 1;
+ }
-@@ -3158,8 +3246,10 @@ ext4_reserve_inode_write(handle_t *handle, struct inode *inode,
- * Expand an inode by new_extra_isize bytes.
- * Returns 0 on success or negative error number on failure.
+-/* Done it all: now write the commit record. We should have
++/*
++ * Done it all: now submit the commit record. We should have
+ * cleaned up our previous buffers by now, so if we are in abort
+ * mode we can now just skip the rest of the journal write
+ * entirely.
+ *
+ * Returns 1 if the journal needs to be aborted or 0 on success
*/
--int ext4_expand_extra_isize(struct inode *inode, unsigned int new_extra_isize,
-- struct ext4_iloc iloc, handle_t *handle)
-+static int ext4_expand_extra_isize(struct inode *inode,
-+ unsigned int new_extra_isize,
-+ struct ext4_iloc iloc,
-+ handle_t *handle)
+-static int journal_write_commit_record(journal_t *journal,
+- transaction_t *commit_transaction)
++static int journal_submit_commit_record(journal_t *journal,
++ transaction_t *commit_transaction,
++ struct buffer_head **cbh,
++ __u32 crc32_sum)
{
- struct ext4_inode *raw_inode;
- struct ext4_xattr_ibody_header *header;
-diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
-index e7f894b..2ed7c37 100644
---- a/fs/ext4/ioctl.c
-+++ b/fs/ext4/ioctl.c
-@@ -199,7 +199,7 @@ flags_err:
- * need to allocate reservation structure for this inode
- * before set the window size
- */
-- mutex_lock(&ei->truncate_mutex);
-+ down_write(&ei->i_data_sem);
- if (!ei->i_block_alloc_info)
- ext4_init_block_alloc_info(inode);
+ struct journal_head *descriptor;
++ struct commit_header *tmp;
+ struct buffer_head *bh;
+- int i, ret;
++ int ret;
+ int barrier_done = 0;
-@@ -207,7 +207,7 @@ flags_err:
- struct ext4_reserve_window_node *rsv = &ei->i_block_alloc_info->rsv_window_node;
- rsv->rsv_goal_size = rsv_window_size;
- }
-- mutex_unlock(&ei->truncate_mutex);
-+ up_write(&ei->i_data_sem);
- return 0;
- }
- case EXT4_IOC_GROUP_EXTEND: {
-@@ -254,6 +254,9 @@ flags_err:
- return err;
- }
+ if (is_journal_aborted(journal))
+@@ -116,21 +122,33 @@ static int journal_write_commit_record(journal_t *journal,
-+ case EXT4_IOC_MIGRATE:
-+ return ext4_ext_migrate(inode, filp, cmd, arg);
+ bh = jh2bh(descriptor);
+
+- /* AKPM: buglet - add `i' to tmp! */
+- for (i = 0; i < bh->b_size; i += 512) {
+- journal_header_t *tmp = (journal_header_t*)bh->b_data;
+- tmp->h_magic = cpu_to_be32(JBD2_MAGIC_NUMBER);
+- tmp->h_blocktype = cpu_to_be32(JBD2_COMMIT_BLOCK);
+- tmp->h_sequence = cpu_to_be32(commit_transaction->t_tid);
++ tmp = (struct commit_header *)bh->b_data;
++ tmp->h_magic = cpu_to_be32(JBD2_MAGIC_NUMBER);
++ tmp->h_blocktype = cpu_to_be32(JBD2_COMMIT_BLOCK);
++ tmp->h_sequence = cpu_to_be32(commit_transaction->t_tid);
+
- default:
- return -ENOTTY;
++ if (JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM)) {
++ tmp->h_chksum_type = JBD2_CRC32_CHKSUM;
++ tmp->h_chksum_size = JBD2_CRC32_CHKSUM_SIZE;
++ tmp->h_chksum[0] = cpu_to_be32(crc32_sum);
}
-diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
-new file mode 100644
-index 0000000..76e5fed
---- /dev/null
-+++ b/fs/ext4/mballoc.c
-@@ -0,0 +1,4552 @@
-+/*
-+ * Copyright (c) 2003-2006, Cluster File Systems, Inc, info at clusterfs.com
-+ * Written by Alex Tomas <alex at clusterfs.com>
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-+ */
-+
-+
-+/*
-+ * mballoc.c contains the multiblocks allocation routines
-+ */
-+
-+#include <linux/time.h>
-+#include <linux/fs.h>
-+#include <linux/namei.h>
-+#include <linux/ext4_jbd2.h>
-+#include <linux/ext4_fs.h>
-+#include <linux/quotaops.h>
-+#include <linux/buffer_head.h>
-+#include <linux/module.h>
-+#include <linux/swap.h>
-+#include <linux/proc_fs.h>
-+#include <linux/pagemap.h>
-+#include <linux/seq_file.h>
-+#include <linux/version.h>
-+#include "group.h"
-+
-+/*
-+ * MUSTDO:
-+ * - test ext4_ext_search_left() and ext4_ext_search_right()
-+ * - search for metadata in few groups
-+ *
-+ * TODO v4:
-+ * - normalization should take into account whether file is still open
-+ * - discard preallocations if no free space left (policy?)
-+ * - don't normalize tails
-+ * - quota
-+ * - reservation for superuser
-+ *
-+ * TODO v3:
-+ * - bitmap read-ahead (proposed by Oleg Drokin aka green)
-+ * - track min/max extents in each group for better group selection
-+ * - mb_mark_used() may allocate chunk right after splitting buddy
-+ * - tree of groups sorted by number of free blocks
-+ * - error handling
-+ */
-+
-+/*
-+ * An allocation request involves a request for multiple blocks
-+ * near the specified goal (block) value.
-+ *
-+ * During the initialization phase of the allocator we decide to use either
-+ * group preallocation or inode preallocation depending on the file size.
-+ * The size of the file could be the resulting file size we would have
-+ * after allocation or the current file size, whichever is larger. If the
-+ * size is less than sbi->s_mb_stream_request we select group
-+ * preallocation. The default value of s_mb_stream_request is 16
-+ * blocks. This can also be tuned via
-+ * /proc/fs/ext4/<partition>/stream_req. The value is represented in terms
-+ * of number of blocks.
-+ *
-+ * The main motivation for having small files use group preallocation is to
-+ * ensure that small files are placed close together on the disk.
-+ *
-+ * In the first stage the allocator looks at the inode prealloc list,
-+ * ext4_inode_info->i_prealloc_list, which contains the list of prealloc
-+ * spaces for this particular inode. The inode prealloc space is represented as:
-+ *
-+ * pa_lstart -> the logical start block for this prealloc space
-+ * pa_pstart -> the physical start block for this prealloc space
-+ * pa_len -> length of this prealloc space
-+ * pa_free -> free space available in this prealloc space
-+ *
-+ * The inode preallocation space is used by looking at the _logical_ start
-+ * block. Only if the logical file block falls within the range of a prealloc
-+ * space do we consume that particular prealloc space. This makes sure
-+ * that we have contiguous physical blocks representing the file blocks.
-+ *
-+ * The important thing to note about inode prealloc space is that we
-+ * don't modify any of the values associated with it except
-+ * pa_free.
-+ *
-+ * If we are not able to find blocks in the inode prealloc space and if we
-+ * have the group allocation flag set then we look at the locality group
-+ * prealloc space. These are per-CPU prealloc lists represented as
-+ *
-+ * ext4_sb_info.s_locality_groups[smp_processor_id()]
-+ *
-+ * The reason for having a per cpu locality group is to reduce the contention
-+ * between CPUs. It is possible to get scheduled at this point.
-+ *
-+ * The locality group prealloc space is used by checking whether we have
-+ * enough free space (pa_free) within the prealloc space.
-+ *
-+ * If we can't allocate blocks via inode prealloc and/or locality group
-+ * prealloc then we look at the buddy cache. The buddy cache is represented
-+ * by ext4_sb_info.s_buddy_cache (struct inode) whose file offset gets
-+ * mapped to the buddy and bitmap information regarding different
-+ * groups. The buddy information is attached to the buddy cache inode so
-+ * that we can access it through the page cache. The information regarding
-+ * each group is loaded via ext4_mb_load_buddy and consists of the block
-+ * bitmap and the buddy information, stored in the
-+ * inode as:
-+ *
-+ * { page }
-+ * [ group 0 buddy][ group 0 bitmap] [group 1][ group 1]...
-+ *
-+ *
-+ * one block each for the bitmap and the buddy information. So for each
-+ * group we take up 2 blocks. A page can contain blocks_per_page
-+ * (PAGE_CACHE_SIZE / blocksize) blocks, so it can hold information for
-+ * groups_per_page groups, which is blocks_per_page/2.
-+ *
-+ * The buddy cache inode is not stored on disk. The inode is thrown
-+ * away when the filesystem is unmounted.
-+ *
-+ * We look for count number of blocks in the buddy cache. If we are able
-+ * to locate that many free blocks we return with additional information
-+ * regarding the rest of the contiguous physical blocks available.
-+ *
-+ * Before allocating blocks via the buddy cache we normalize the request
-+ * blocks. This ensures we ask for more blocks than we need. The extra
-+ * blocks that we get after allocation are added to the respective prealloc
-+ * list. In case of inode preallocation we follow a set of heuristics
-+ * based on file size. This can be found in ext4_mb_normalize_request. If
-+ * we are doing a group prealloc we try to normalize the request to
-+ * sbi->s_mb_group_prealloc. The default value of s_mb_group_prealloc is
-+ * 512 blocks. This can be tuned via
-+ * /proc/fs/ext4/<partition>/group_prealloc. The value is represented in
-+ * terms of number of blocks. If we have mounted the file system with the
-+ * -O stripe=<value> option the group prealloc request is normalized to the
-+ * stripe value (sbi->s_stripe).
-+ *
-+ * The regular allocator (using the buddy cache) supports a few tunables.
-+ *
-+ * /proc/fs/ext4/<partition>/min_to_scan
-+ * /proc/fs/ext4/<partition>/max_to_scan
-+ * /proc/fs/ext4/<partition>/order2_req
-+ *
-+ * The regular allocator uses the buddy scan only if the request length is a
-+ * power of 2 blocks and the order of allocation is >= sbi->s_mb_order2_reqs.
-+ * The value of s_mb_order2_reqs can be tuned via
-+ * /proc/fs/ext4/<partition>/order2_req. If the request length is equal to
-+ * the stripe size (sbi->s_stripe), we search for contiguous blocks in
-+ * stripe-size units. This should result in better allocation on RAID setups.
-+ * If not, we search in the specific group using the bitmap for best extents.
-+ * The tunables min_to_scan and max_to_scan control the behaviour here.
-+ * min_to_scan indicates how long mballoc __must__ look for a best
-+ * extent and max_to_scan indicates how long mballoc __can__ look for a
-+ * best extent among the found extents. Searching for the blocks starts with
-+ * the group specified as the goal value in the allocation context via
-+ * ac_g_ex. Each group is first checked against the criteria for whether it
-+ * can be used for allocation. ext4_mb_good_group explains how the groups
-+ * are checked.
-+ *
-+ * Both prealloc spaces are populated as above. So for the first
-+ * request we will hit the buddy cache, which will result in the prealloc
-+ * space getting filled. The prealloc space is then used for
-+ * subsequent requests.
-+ */
-+
-+/*
-+ * mballoc operates on the following data:
-+ * - on-disk bitmap
-+ * - in-core buddy (actually includes buddy and bitmap)
-+ * - preallocation descriptors (PAs)
-+ *
-+ * there are two types of preallocations:
-+ * - inode
-+ * assigned to a specific inode and can be used for this inode only.
-+ * it describes part of the inode's space preallocated to specific
-+ * physical blocks. any block from that preallocation can be used
-+ * independently. the descriptor just tracks the number of blocks left
-+ * unused. so, before taking some block from the descriptor, one must
-+ * make sure the corresponding logical block isn't allocated yet. this
-+ * also means that freeing any block within the descriptor's range
-+ * must discard all preallocated blocks.
-+ * - locality group
-+ * assigned to specific locality group which does not translate to
-+ * permanent set of inodes: inode can join and leave group. space
-+ * from this type of preallocation can be used for any inode. thus
-+ * it's consumed from the beginning to the end.
-+ *
-+ * relation between them can be expressed as:
-+ * in-core buddy = on-disk bitmap + preallocation descriptors
-+ *
-+ * this means the blocks mballoc considers used are:
-+ * - allocated blocks (persistent)
-+ * - preallocated blocks (non-persistent)
-+ *
-+ * consistency in mballoc world means that at any time a block is either
-+ * free or used in ALL structures. notice: "any time" should not be read
-+ * literally -- time is discrete and delimited by locks.
-+ *
-+ * to keep it simple, we don't use block numbers, instead we count the number
-+ * of blocks: how many blocks are marked used/free in the on-disk bitmap, buddy and PA.
-+ *
-+ * all operations can be expressed as:
-+ * - init buddy: buddy = on-disk + PAs
-+ * - new PA: buddy += N; PA = N
-+ * - use inode PA: on-disk += N; PA -= N
-+ * - discard inode PA buddy -= on-disk - PA; PA = 0
-+ * - use locality group PA on-disk += N; PA -= N
-+ * - discard locality group PA buddy -= PA; PA = 0
-+ * note: 'buddy -= on-disk - PA' is used to show that on-disk bitmap
-+ * is used in real operation because we can't know actual used
-+ * bits from PA, only from on-disk bitmap
-+ *
-+ * if we follow this strict logic, then all operations above should be atomic.
-+ * given some of them can block, we'd have to use something like semaphores
-+ * killing performance on high-end SMP hardware. let's try to relax it using
-+ * the following knowledge:
-+ * 1) if buddy is referenced, it's already initialized
-+ * 2) while block is used in buddy and the buddy is referenced,
-+ * nobody can re-allocate that block
-+ * 3) we work on bitmaps and '+' actually means 'set bits'. if on-disk has
-+ * bit set and PA claims same block, it's OK. IOW, one can set bit in
-+ * on-disk bitmap if the buddy has the same bit set and/or a PA covers the corresponding
-+ * block
-+ *
-+ * so, now we're building a concurrency table:
-+ * - init buddy vs.
-+ * - new PA
-+ * blocks for PA are allocated in the buddy, buddy must be referenced
-+ * until PA is linked to allocation group to avoid concurrent buddy init
-+ * - use inode PA
-+ * we need to make sure that either on-disk bitmap or PA has uptodate data
-+ * given (3) we care that PA-=N operation doesn't interfere with init
-+ * - discard inode PA
-+ * the simplest way would be to have buddy initialized by the discard
-+ * - use locality group PA
-+ * again PA-=N must be serialized with init
-+ * - discard locality group PA
-+ * the simplest way would be to have buddy initialized by the discard
-+ * - new PA vs.
-+ * - use inode PA
-+ * i_data_sem serializes them
-+ * - discard inode PA
-+ * discard process must wait until PA isn't used by another process
-+ * - use locality group PA
-+ * some mutex should serialize them
-+ * - discard locality group PA
-+ * discard process must wait until PA isn't used by another process
-+ * - use inode PA
-+ * - use inode PA
-+ * i_data_sem or another mutex should serialize them
-+ * - discard inode PA
-+ * discard process must wait until PA isn't used by another process
-+ * - use locality group PA
-+ * nothing wrong here -- they're different PAs covering different blocks
-+ * - discard locality group PA
-+ * discard process must wait until PA isn't used by another process
-+ *
-+ * now we're ready to draw a few conclusions:
-+ * - while a PA is referenced, no discard is possible
-+ * - a PA is referenced until its blocks are marked in the on-disk bitmap
-+ * - a PA changes only after the on-disk bitmap does
-+ * - discard must not compete with init. either init is done before
-+ * any discard or they're serialized somehow
-+ * - buddy init as sum of on-disk bitmap and PAs is done atomically
-+ *
-+ * a special case is when we've used a PA down to emptiness. no need to
-+ * modify the buddy in this case, but we must still take care about concurrent init
-+ *
-+ */
-+
-+ /*
-+ * Logic in few words:
-+ *
-+ * - allocation:
-+ * load group
-+ * find blocks
-+ * mark bits in on-disk bitmap
-+ * release group
-+ *
-+ * - use preallocation:
-+ * find proper PA (per-inode or group)
-+ * load group
-+ * mark bits in on-disk bitmap
-+ * release group
-+ * release PA
-+ *
-+ * - free:
-+ * load group
-+ * mark bits in on-disk bitmap
-+ * release group
-+ *
-+ * - discard preallocations in group:
-+ * mark PAs deleted
-+ * move them onto local list
-+ * load on-disk bitmap
-+ * load group
-+ * remove PA from object (inode or locality group)
-+ * mark free blocks in-core
-+ *
-+ * - discard inode's preallocations:
-+ */
-+
-+/*
-+ * Locking rules
-+ *
-+ * Locks:
-+ * - bitlock on a group (group)
-+ * - object (inode/locality) (object)
-+ * - per-pa lock (pa)
-+ *
-+ * Paths:
-+ * - new pa
-+ * object
-+ * group
-+ *
-+ * - find and use pa:
-+ * pa
-+ *
-+ * - release consumed pa:
-+ * pa
-+ * group
-+ * object
-+ *
-+ * - generate in-core bitmap:
-+ * group
-+ * pa
-+ *
-+ * - discard all for given object (inode, locality group):
-+ * object
-+ * pa
-+ * group
-+ *
-+ * - discard all for given group:
-+ * group
-+ * pa
-+ * group
-+ * object
-+ *
-+ */
-+
-+/*
-+ * with AGGRESSIVE_CHECK allocator runs consistency checks over
-+ * structures. these checks slow things down a lot
-+ */
-+#define AGGRESSIVE_CHECK__
-+
-+/*
-+ * with DOUBLE_CHECK defined mballoc creates persistent in-core
-+ * bitmaps, maintains and uses them to check for double allocations
-+ */
-+#define DOUBLE_CHECK__
-+
-+/*
-+ * with MB_DEBUG defined mballoc prints debug messages via mb_debug()
-+ */
-+#define MB_DEBUG__
-+#ifdef MB_DEBUG
-+#define mb_debug(fmt, a...) printk(fmt, ##a)
-+#else
-+#define mb_debug(fmt, a...)
-+#endif
-+
-+/*
-+ * with EXT4_MB_HISTORY mballoc stores last N allocations in memory
-+ * and you can monitor it in /proc/fs/ext4/<dev>/mb_history
-+ */
-+#define EXT4_MB_HISTORY
-+#define EXT4_MB_HISTORY_ALLOC 1 /* allocation */
-+#define EXT4_MB_HISTORY_PREALLOC 2 /* preallocated blocks used */
-+#define EXT4_MB_HISTORY_DISCARD 4 /* preallocation discarded */
-+#define EXT4_MB_HISTORY_FREE 8 /* free */
-+
-+#define EXT4_MB_HISTORY_DEFAULT (EXT4_MB_HISTORY_ALLOC | \
-+ EXT4_MB_HISTORY_PREALLOC)
-+
-+/*
-+ * How long mballoc can look for a best extent (in found extents)
-+ */
-+#define MB_DEFAULT_MAX_TO_SCAN 200
-+
-+/*
-+ * How long mballoc must look for a best extent
-+ */
-+#define MB_DEFAULT_MIN_TO_SCAN 10
-+
-+/*
-+ * How many groups mballoc will scan looking for the best chunk
-+ */
-+#define MB_DEFAULT_MAX_GROUPS_TO_SCAN 5
+
+- JBUFFER_TRACE(descriptor, "write commit block");
++ JBUFFER_TRACE(descriptor, "submit commit block");
++ lock_buffer(bh);
+
-+/*
-+ * with 'ext4_mb_stats' allocator will collect stats that will be
-+ * shown at umount. The collecting costs though!
-+ */
-+#define MB_DEFAULT_STATS 1
+ set_buffer_dirty(bh);
+- if (journal->j_flags & JBD2_BARRIER) {
++ set_buffer_uptodate(bh);
++ bh->b_end_io = journal_end_buffer_io_sync;
+
-+/*
-+ * files smaller than MB_DEFAULT_STREAM_THRESHOLD are served
-+ * by the stream allocator, whose purpose is to pack requests
-+ * as close to each other as possible to produce smooth I/O traffic.
-+ * We use the locality group prealloc space for stream requests.
-+ * This can be tuned via /proc/fs/ext4/<partition>/stream_req.
-+ */
-+#define MB_DEFAULT_STREAM_THRESHOLD 16 /* 64K */
++ if (journal->j_flags & JBD2_BARRIER &&
++ !JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
+ set_buffer_ordered(bh);
+ barrier_done = 1;
+ }
+- ret = sync_dirty_buffer(bh);
++ ret = submit_bh(WRITE, bh);
+
-+/*
-+ * for which requests use 2^N search using buddies
-+ */
-+#define MB_DEFAULT_ORDER2_REQS 2
+ /* is it possible for another commit to fail at roughly
+ * the same time as this one? If so, we don't want to
+ * trust the barrier flag in the super, but instead want
+@@ -151,14 +169,72 @@ static int journal_write_commit_record(journal_t *journal,
+ clear_buffer_ordered(bh);
+ set_buffer_uptodate(bh);
+ set_buffer_dirty(bh);
+- ret = sync_dirty_buffer(bh);
++ ret = submit_bh(WRITE, bh);
+ }
+- put_bh(bh); /* One for getblk() */
+- jbd2_journal_put_journal_head(descriptor);
++ *cbh = bh;
++ return ret;
++}
+
+/*
-+ * default group prealloc size 512 blocks
++ * This function, along with journal_submit_commit_record,
++ * allows the commit record to be written asynchronously.
+ */
-+#define MB_DEFAULT_GROUP_PREALLOC 512
-+
-+static struct kmem_cache *ext4_pspace_cachep;
-+
-+#ifdef EXT4_BB_MAX_BLOCKS
-+#undef EXT4_BB_MAX_BLOCKS
-+#endif
-+#define EXT4_BB_MAX_BLOCKS 30
-+
-+struct ext4_free_metadata {
-+ ext4_group_t group;
-+ unsigned short num;
-+ ext4_grpblk_t blocks[EXT4_BB_MAX_BLOCKS];
-+ struct list_head list;
-+};
-+
-+struct ext4_group_info {
-+ unsigned long bb_state;
-+ unsigned long bb_tid;
-+ struct ext4_free_metadata *bb_md_cur;
-+ unsigned short bb_first_free;
-+ unsigned short bb_free;
-+ unsigned short bb_fragments;
-+ struct list_head bb_prealloc_list;
-+#ifdef DOUBLE_CHECK
-+ void *bb_bitmap;
-+#endif
-+ unsigned short bb_counters[];
-+};
-+
-+#define EXT4_GROUP_INFO_NEED_INIT_BIT 0
-+#define EXT4_GROUP_INFO_LOCKED_BIT 1
-+
-+#define EXT4_MB_GRP_NEED_INIT(grp) \
-+ (test_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &((grp)->bb_state)))
-+
-+
-+struct ext4_prealloc_space {
-+ struct list_head pa_inode_list;
-+ struct list_head pa_group_list;
-+ union {
-+ struct list_head pa_tmp_list;
-+ struct rcu_head pa_rcu;
-+ } u;
-+ spinlock_t pa_lock;
-+ atomic_t pa_count;
-+ unsigned pa_deleted;
-+ ext4_fsblk_t pa_pstart; /* phys. block */
-+ ext4_lblk_t pa_lstart; /* log. block */
-+ unsigned short pa_len; /* len of preallocated chunk */
-+ unsigned short pa_free; /* how many blocks are free */
-+ unsigned short pa_linear; /* consumed in one direction
-+ * strictly, for grp prealloc */
-+ spinlock_t *pa_obj_lock;
-+ struct inode *pa_inode; /* hack, for history only */
-+};
-+
++static int journal_wait_on_commit_record(struct buffer_head *bh)
++{
++ int ret = 0;
+
-+struct ext4_free_extent {
-+ ext4_lblk_t fe_logical;
-+ ext4_grpblk_t fe_start;
-+ ext4_group_t fe_group;
-+ int fe_len;
-+};
++ clear_buffer_dirty(bh);
++ wait_on_buffer(bh);
+
++ if (unlikely(!buffer_uptodate(bh)))
++ ret = -EIO;
++ put_bh(bh); /* One for getblk() */
++ jbd2_journal_put_journal_head(bh2jh(bh));
+
+- return (ret == -EIO);
++ return ret;
+ }
+
+/*
-+ * Locality group:
-+ * we try to group all related changes together
-+ * so that writeback can flush/allocate them together as well
++ * Wait for all submitted IO to complete.
+ */
-+struct ext4_locality_group {
-+ /* for allocator */
-+ struct mutex lg_mutex; /* to serialize allocates */
-+ struct list_head lg_prealloc_list;/* list of preallocations */
-+ spinlock_t lg_prealloc_lock;
-+};
-+
-+struct ext4_allocation_context {
-+ struct inode *ac_inode;
-+ struct super_block *ac_sb;
-+
-+ /* original request */
-+ struct ext4_free_extent ac_o_ex;
-+
-+ /* goal request (after normalization) */
-+ struct ext4_free_extent ac_g_ex;
-+
-+ /* the best found extent */
-+ struct ext4_free_extent ac_b_ex;
-+
-+ /* copy of the best found extent taken before preallocation efforts */
-+ struct ext4_free_extent ac_f_ex;
-+
-+ /* number of iterations done. we have to track to limit searching */
-+ unsigned long ac_ex_scanned;
-+ __u16 ac_groups_scanned;
-+ __u16 ac_found;
-+ __u16 ac_tail;
-+ __u16 ac_buddy;
-+ __u16 ac_flags; /* allocation hints */
-+ __u8 ac_status;
-+ __u8 ac_criteria;
-+ __u8 ac_repeats;
-+ __u8 ac_2order; /* if request is to allocate 2^N blocks and
-+ * N > 0, the field stores N, otherwise 0 */
-+ __u8 ac_op; /* operation, for history only */
-+ struct page *ac_bitmap_page;
-+ struct page *ac_buddy_page;
-+ struct ext4_prealloc_space *ac_pa;
-+ struct ext4_locality_group *ac_lg;
-+};
-+
-+#define AC_STATUS_CONTINUE 1
-+#define AC_STATUS_FOUND 2
-+#define AC_STATUS_BREAK 3
-+
-+struct ext4_mb_history {
-+ struct ext4_free_extent orig; /* orig allocation */
-+ struct ext4_free_extent goal; /* goal allocation */
-+ struct ext4_free_extent result; /* result allocation */
-+ unsigned pid;
-+ unsigned ino;
-+ __u16 found; /* how many extents have been found */
-+ __u16 groups; /* how many groups have been scanned */
-+ __u16 tail; /* what tail broke some buddy */
-+ __u16 buddy; /* buddy the tail ^^^ broke */
-+ __u16 flags;
-+ __u8 cr:3; /* which phase the result extent was found at */
-+ __u8 op:4;
-+ __u8 merged:1;
-+};
-+
-+struct ext4_buddy {
-+ struct page *bd_buddy_page;
-+ void *bd_buddy;
-+ struct page *bd_bitmap_page;
-+ void *bd_bitmap;
-+ struct ext4_group_info *bd_info;
-+ struct super_block *bd_sb;
-+ __u16 bd_blkbits;
-+ ext4_group_t bd_group;
-+};
-+#define EXT4_MB_BITMAP(e4b) ((e4b)->bd_bitmap)
-+#define EXT4_MB_BUDDY(e4b) ((e4b)->bd_buddy)
-+
-+#ifndef EXT4_MB_HISTORY
-+static inline void ext4_mb_store_history(struct ext4_allocation_context *ac)
++static int journal_wait_on_locked_list(journal_t *journal,
++ transaction_t *commit_transaction)
+{
-+ return;
-+}
-+#else
-+static void ext4_mb_store_history(struct ext4_allocation_context *ac);
-+#endif
-+
-+#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-+
-+static struct proc_dir_entry *proc_root_ext4;
-+struct buffer_head *read_block_bitmap(struct super_block *, ext4_group_t);
-+ext4_fsblk_t ext4_new_blocks_old(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t goal, unsigned long *count, int *errp);
++ int ret = 0;
++ struct journal_head *jh;
+
-+static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
-+ ext4_group_t group);
-+static void ext4_mb_poll_new_transaction(struct super_block *, handle_t *);
-+static void ext4_mb_free_committed_blocks(struct super_block *);
-+static void ext4_mb_return_to_preallocation(struct inode *inode,
-+ struct ext4_buddy *e4b, sector_t block,
-+ int count);
-+static void ext4_mb_put_pa(struct ext4_allocation_context *,
-+ struct super_block *, struct ext4_prealloc_space *pa);
-+static int ext4_mb_init_per_dev_proc(struct super_block *sb);
-+static int ext4_mb_destroy_per_dev_proc(struct super_block *sb);
++ while (commit_transaction->t_locked_list) {
++ struct buffer_head *bh;
+
++ jh = commit_transaction->t_locked_list->b_tprev;
++ bh = jh2bh(jh);
++ get_bh(bh);
++ if (buffer_locked(bh)) {
++ spin_unlock(&journal->j_list_lock);
++ wait_on_buffer(bh);
++ if (unlikely(!buffer_uptodate(bh)))
++ ret = -EIO;
++ spin_lock(&journal->j_list_lock);
++ }
++ if (!inverted_lock(journal, bh)) {
++ put_bh(bh);
++ spin_lock(&journal->j_list_lock);
++ continue;
++ }
++ if (buffer_jbd(bh) && jh->b_jlist == BJ_Locked) {
++ __jbd2_journal_unfile_buffer(jh);
++ jbd_unlock_bh_state(bh);
++ jbd2_journal_remove_journal_head(bh);
++ put_bh(bh);
++ } else {
++ jbd_unlock_bh_state(bh);
++ }
++ put_bh(bh);
++ cond_resched_lock(&journal->j_list_lock);
++ }
++ return ret;
++ }
+
-+static inline void ext4_lock_group(struct super_block *sb, ext4_group_t group)
+ static void journal_do_submit_data(struct buffer_head **wbuf, int bufs)
+ {
+ int i;
+@@ -265,7 +341,7 @@ write_out_data:
+ put_bh(bh);
+ }
+
+- if (lock_need_resched(&journal->j_list_lock)) {
++ if (need_resched() || spin_needbreak(&journal->j_list_lock)) {
+ spin_unlock(&journal->j_list_lock);
+ goto write_out_data;
+ }
+@@ -274,7 +350,21 @@ write_out_data:
+ journal_do_submit_data(wbuf, bufs);
+ }
+
+-static inline void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
++static __u32 jbd2_checksum_data(__u32 crc32_sum, struct buffer_head *bh)
+{
-+ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
++ struct page *page = bh->b_page;
++ char *addr;
++ __u32 checksum;
+
-+ bit_spin_lock(EXT4_GROUP_INFO_LOCKED_BIT, &(grinfo->bb_state));
++ addr = kmap_atomic(page, KM_USER0);
++ checksum = crc32_be(crc32_sum,
++ (void *)(addr + offset_in_page(bh->b_data)), bh->b_size);
++ kunmap_atomic(addr, KM_USER0);
++
++ return checksum;
+}
+
-+static inline void ext4_unlock_group(struct super_block *sb,
-+ ext4_group_t group)
-+{
-+ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
++static void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
+ unsigned long long block)
+ {
+ tag->t_blocknr = cpu_to_be32(block & (u32)~0);
+@@ -290,6 +380,7 @@ static inline void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
+ */
+ void jbd2_journal_commit_transaction(journal_t *journal)
+ {
++ struct transaction_stats_s stats;
+ transaction_t *commit_transaction;
+ struct journal_head *jh, *new_jh, *descriptor;
+ struct buffer_head **wbuf = journal->j_wbuf;
+@@ -305,6 +396,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ int tag_flag;
+ int i;
+ int tag_bytes = journal_tag_bytes(journal);
++ struct buffer_head *cbh = NULL; /* For transactional checksums */
++ __u32 crc32_sum = ~0;
+
+ /*
+ * First job: lock down the current transaction and wait for
+@@ -337,6 +430,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ spin_lock(&journal->j_state_lock);
+ commit_transaction->t_state = T_LOCKED;
+
++ stats.u.run.rs_wait = commit_transaction->t_max_wait;
++ stats.u.run.rs_locked = jiffies;
++ stats.u.run.rs_running = jbd2_time_diff(commit_transaction->t_start,
++ stats.u.run.rs_locked);
+
-+ bit_spin_unlock(EXT4_GROUP_INFO_LOCKED_BIT, &(grinfo->bb_state));
-+}
+ spin_lock(&commit_transaction->t_handle_lock);
+ while (commit_transaction->t_updates) {
+ DEFINE_WAIT(wait);
+@@ -407,6 +505,10 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ */
+ jbd2_journal_switch_revoke_table(journal);
+
++ stats.u.run.rs_flushing = jiffies;
++ stats.u.run.rs_locked = jbd2_time_diff(stats.u.run.rs_locked,
++ stats.u.run.rs_flushing);
+
-+static inline int ext4_is_group_locked(struct super_block *sb,
-+ ext4_group_t group)
-+{
-+ struct ext4_group_info *grinfo = ext4_get_group_info(sb, group);
+ commit_transaction->t_state = T_FLUSH;
+ journal->j_committing_transaction = commit_transaction;
+ journal->j_running_transaction = NULL;
+@@ -440,38 +542,15 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ journal_submit_data_buffers(journal, commit_transaction);
+
+ /*
+- * Wait for all previously submitted IO to complete.
++ * Wait for all previously submitted IO to complete if commit
++ * record is to be written synchronously.
+ */
+ spin_lock(&journal->j_list_lock);
+- while (commit_transaction->t_locked_list) {
+- struct buffer_head *bh;
++ if (!JBD2_HAS_INCOMPAT_FEATURE(journal,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT))
++ err = journal_wait_on_locked_list(journal,
++ commit_transaction);
+
+- jh = commit_transaction->t_locked_list->b_tprev;
+- bh = jh2bh(jh);
+- get_bh(bh);
+- if (buffer_locked(bh)) {
+- spin_unlock(&journal->j_list_lock);
+- wait_on_buffer(bh);
+- if (unlikely(!buffer_uptodate(bh)))
+- err = -EIO;
+- spin_lock(&journal->j_list_lock);
+- }
+- if (!inverted_lock(journal, bh)) {
+- put_bh(bh);
+- spin_lock(&journal->j_list_lock);
+- continue;
+- }
+- if (buffer_jbd(bh) && jh->b_jlist == BJ_Locked) {
+- __jbd2_journal_unfile_buffer(jh);
+- jbd_unlock_bh_state(bh);
+- jbd2_journal_remove_journal_head(bh);
+- put_bh(bh);
+- } else {
+- jbd_unlock_bh_state(bh);
+- }
+- put_bh(bh);
+- cond_resched_lock(&journal->j_list_lock);
+- }
+ spin_unlock(&journal->j_list_lock);
+
+ if (err)
+@@ -498,6 +577,12 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ */
+ commit_transaction->t_state = T_COMMIT;
+
++ stats.u.run.rs_logging = jiffies;
++ stats.u.run.rs_flushing = jbd2_time_diff(stats.u.run.rs_flushing,
++ stats.u.run.rs_logging);
++ stats.u.run.rs_blocks = commit_transaction->t_outstanding_credits;
++ stats.u.run.rs_blocks_logged = 0;
+
-+ return bit_spin_is_locked(EXT4_GROUP_INFO_LOCKED_BIT,
-+ &(grinfo->bb_state));
-+}
+ descriptor = NULL;
+ bufs = 0;
+ while (commit_transaction->t_buffers) {
+@@ -639,6 +724,15 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ start_journal_io:
+ for (i = 0; i < bufs; i++) {
+ struct buffer_head *bh = wbuf[i];
++ /*
++ * Compute checksum.
++ */
++ if (JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM)) {
++ crc32_sum =
++ jbd2_checksum_data(crc32_sum, bh);
++ }
+
-+static ext4_fsblk_t ext4_grp_offs_to_block(struct super_block *sb,
-+ struct ext4_free_extent *fex)
-+{
-+ ext4_fsblk_t block;
+ lock_buffer(bh);
+ clear_buffer_dirty(bh);
+ set_buffer_uptodate(bh);
+@@ -646,6 +740,7 @@ start_journal_io:
+ submit_bh(WRITE, bh);
+ }
+ cond_resched();
++ stats.u.run.rs_blocks_logged += bufs;
+
+ /* Force a new descriptor to be generated next
+ time round the loop. */
+@@ -654,6 +749,23 @@ start_journal_io:
+ }
+ }
+
++ /* Done it all: now write the commit record asynchronously. */
+
-+ block = (ext4_fsblk_t) fex->fe_group * EXT4_BLOCKS_PER_GROUP(sb)
-+ + fex->fe_start
-+ + le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
-+ return block;
-+}
++ if (JBD2_HAS_INCOMPAT_FEATURE(journal,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
++ err = journal_submit_commit_record(journal, commit_transaction,
++ &cbh, crc32_sum);
++ if (err)
++ __jbd2_journal_abort_hard(journal);
+
-+#if BITS_PER_LONG == 64
-+#define mb_correct_addr_and_bit(bit, addr) \
-+{ \
-+ bit += ((unsigned long) addr & 7UL) << 3; \
-+ addr = (void *) ((unsigned long) addr & ~7UL); \
-+}
-+#elif BITS_PER_LONG == 32
-+#define mb_correct_addr_and_bit(bit, addr) \
-+{ \
-+ bit += ((unsigned long) addr & 3UL) << 3; \
-+ addr = (void *) ((unsigned long) addr & ~3UL); \
-+}
-+#else
-+#error "how many bits you are?!"
-+#endif
++ spin_lock(&journal->j_list_lock);
++ err = journal_wait_on_locked_list(journal,
++ commit_transaction);
++ spin_unlock(&journal->j_list_lock);
++ if (err)
++ __jbd2_journal_abort_hard(journal);
++ }
++
+ /* Lo and behold: we have just managed to send a transaction to
+ the log. Before we can commit it, wait for the IO so far to
+ complete. Control buffers being written are on the
+@@ -753,8 +865,14 @@ wait_for_iobuf:
+
+ jbd_debug(3, "JBD: commit phase 6\n");
+
+- if (journal_write_commit_record(journal, commit_transaction))
+- err = -EIO;
++ if (!JBD2_HAS_INCOMPAT_FEATURE(journal,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
++ err = journal_submit_commit_record(journal, commit_transaction,
++ &cbh, crc32_sum);
++ if (err)
++ __jbd2_journal_abort_hard(journal);
++ }
++ err = journal_wait_on_commit_record(cbh);
+
+ if (err)
+ jbd2_journal_abort(journal, err);
+@@ -816,6 +934,7 @@ restart_loop:
+ cp_transaction = jh->b_cp_transaction;
+ if (cp_transaction) {
+ JBUFFER_TRACE(jh, "remove from old cp transaction");
++ cp_transaction->t_chp_stats.cs_dropped++;
+ __jbd2_journal_remove_checkpoint(jh);
+ }
+
+@@ -867,10 +986,10 @@ restart_loop:
+ }
+ spin_unlock(&journal->j_list_lock);
+ /*
+- * This is a bit sleazy. We borrow j_list_lock to protect
+- * journal->j_committing_transaction in __jbd2_journal_remove_checkpoint.
+- * Really, __jbd2_journal_remove_checkpoint should be using j_state_lock but
+- * it's a bit hassle to hold that across __jbd2_journal_remove_checkpoint
++ * This is a bit sleazy. We use j_list_lock to protect transition
++ * of a transaction into T_FINISHED state and calling
++ * __jbd2_journal_drop_transaction(). Otherwise we could race with
++ * other checkpointing code processing the transaction...
+ */
+ spin_lock(&journal->j_state_lock);
+ spin_lock(&journal->j_list_lock);
+@@ -890,6 +1009,36 @@ restart_loop:
+
+ J_ASSERT(commit_transaction->t_state == T_COMMIT);
+
++ commit_transaction->t_start = jiffies;
++ stats.u.run.rs_logging = jbd2_time_diff(stats.u.run.rs_logging,
++ commit_transaction->t_start);
+
-+static inline int mb_test_bit(int bit, void *addr)
-+{
+ /*
-+ * ext4_test_bit on architecture like powerpc
-+ * needs unsigned long aligned address
++ * File the transaction for history
+ */
-+ mb_correct_addr_and_bit(bit, addr);
-+ return ext4_test_bit(bit, addr);
-+}
-+
-+static inline void mb_set_bit(int bit, void *addr)
-+{
-+ mb_correct_addr_and_bit(bit, addr);
-+ ext4_set_bit(bit, addr);
-+}
-+
-+static inline void mb_set_bit_atomic(spinlock_t *lock, int bit, void *addr)
-+{
-+ mb_correct_addr_and_bit(bit, addr);
-+ ext4_set_bit_atomic(lock, bit, addr);
-+}
++ stats.ts_type = JBD2_STATS_RUN;
++ stats.ts_tid = commit_transaction->t_tid;
++ stats.u.run.rs_handle_count = commit_transaction->t_handle_count;
++ spin_lock(&journal->j_history_lock);
++ memcpy(journal->j_history + journal->j_history_cur, &stats,
++ sizeof(stats));
++ if (++journal->j_history_cur == journal->j_history_max)
++ journal->j_history_cur = 0;
+
-+static inline void mb_clear_bit(int bit, void *addr)
-+{
-+ mb_correct_addr_and_bit(bit, addr);
-+ ext4_clear_bit(bit, addr);
-+}
++ /*
++ * Calculate overall stats
++ */
++ journal->j_stats.ts_tid++;
++ journal->j_stats.u.run.rs_wait += stats.u.run.rs_wait;
++ journal->j_stats.u.run.rs_running += stats.u.run.rs_running;
++ journal->j_stats.u.run.rs_locked += stats.u.run.rs_locked;
++ journal->j_stats.u.run.rs_flushing += stats.u.run.rs_flushing;
++ journal->j_stats.u.run.rs_logging += stats.u.run.rs_logging;
++ journal->j_stats.u.run.rs_handle_count += stats.u.run.rs_handle_count;
++ journal->j_stats.u.run.rs_blocks += stats.u.run.rs_blocks;
++ journal->j_stats.u.run.rs_blocks_logged += stats.u.run.rs_blocks_logged;
++ spin_unlock(&journal->j_history_lock);
+
-+static inline void mb_clear_bit_atomic(spinlock_t *lock, int bit, void *addr)
-+{
-+ mb_correct_addr_and_bit(bit, addr);
-+ ext4_clear_bit_atomic(lock, bit, addr);
-+}
+ commit_transaction->t_state = T_FINISHED;
+ J_ASSERT(commit_transaction == journal->j_committing_transaction);
+ journal->j_commit_sequence = commit_transaction->t_tid;
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 6ddc553..96ba846 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -36,6 +36,7 @@
+ #include <linux/poison.h>
+ #include <linux/proc_fs.h>
+ #include <linux/debugfs.h>
++#include <linux/seq_file.h>
+
+ #include <asm/uaccess.h>
+ #include <asm/page.h>
+@@ -640,6 +641,312 @@ struct journal_head *jbd2_journal_get_descriptor_buffer(journal_t *journal)
+ return jbd2_journal_add_journal_head(bh);
+ }
+
++struct jbd2_stats_proc_session {
++ journal_t *journal;
++ struct transaction_stats_s *stats;
++ int start;
++ int max;
++};
+
-+static void *mb_find_buddy(struct ext4_buddy *e4b, int order, int *max)
++static void *jbd2_history_skip_empty(struct jbd2_stats_proc_session *s,
++ struct transaction_stats_s *ts,
++ int first)
+{
-+ char *bb;
-+
-+ /* FIXME!! is this needed */
-+ BUG_ON(EXT4_MB_BITMAP(e4b) == EXT4_MB_BUDDY(e4b));
-+ BUG_ON(max == NULL);
-+
-+ if (order > e4b->bd_blkbits + 1) {
-+ *max = 0;
++ if (ts == s->stats + s->max)
++ ts = s->stats;
++ if (!first && ts == s->stats + s->start)
+ return NULL;
++ while (ts->ts_type == 0) {
++ ts++;
++ if (ts == s->stats + s->max)
++ ts = s->stats;
++ if (ts == s->stats + s->start)
++ return NULL;
+ }
++ return ts;
+
-+ /* at order 0 we see each particular block */
-+ *max = 1 << (e4b->bd_blkbits + 3);
-+ if (order == 0)
-+ return EXT4_MB_BITMAP(e4b);
-+
-+ bb = EXT4_MB_BUDDY(e4b) + EXT4_SB(e4b->bd_sb)->s_mb_offsets[order];
-+ *max = EXT4_SB(e4b->bd_sb)->s_mb_maxs[order];
-+
-+ return bb;
+}
+
-+#ifdef DOUBLE_CHECK
-+static void mb_free_blocks_double(struct inode *inode, struct ext4_buddy *e4b,
-+ int first, int count)
++static void *jbd2_seq_history_start(struct seq_file *seq, loff_t *pos)
+{
-+ int i;
-+ struct super_block *sb = e4b->bd_sb;
-+
-+ if (unlikely(e4b->bd_info->bb_bitmap == NULL))
-+ return;
-+ BUG_ON(!ext4_is_group_locked(sb, e4b->bd_group));
-+ for (i = 0; i < count; i++) {
-+ if (!mb_test_bit(first + i, e4b->bd_info->bb_bitmap)) {
-+ ext4_fsblk_t blocknr;
-+ blocknr = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb);
-+ blocknr += first + i;
-+ blocknr +=
-+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
++ struct jbd2_stats_proc_session *s = seq->private;
++ struct transaction_stats_s *ts;
++ int l = *pos;
+
-+ ext4_error(sb, __FUNCTION__, "double-free of inode"
-+ " %lu's block %llu(bit %u in group %lu)\n",
-+ inode ? inode->i_ino : 0, blocknr,
-+ first + i, e4b->bd_group);
-+ }
-+ mb_clear_bit(first + i, e4b->bd_info->bb_bitmap);
++ if (l == 0)
++ return SEQ_START_TOKEN;
++ ts = jbd2_history_skip_empty(s, s->stats + s->start, 1);
++ if (!ts)
++ return NULL;
++ l--;
++ while (l) {
++ ts = jbd2_history_skip_empty(s, ++ts, 0);
++ if (!ts)
++ break;
++ l--;
+ }
++ return ts;
+}
+
-+static void mb_mark_used_double(struct ext4_buddy *e4b, int first, int count)
++static void *jbd2_seq_history_next(struct seq_file *seq, void *v, loff_t *pos)
+{
-+ int i;
++ struct jbd2_stats_proc_session *s = seq->private;
++ struct transaction_stats_s *ts = v;
+
-+ if (unlikely(e4b->bd_info->bb_bitmap == NULL))
-+ return;
-+ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
-+ for (i = 0; i < count; i++) {
-+ BUG_ON(mb_test_bit(first + i, e4b->bd_info->bb_bitmap));
-+ mb_set_bit(first + i, e4b->bd_info->bb_bitmap);
-+ }
++ ++*pos;
++ if (v == SEQ_START_TOKEN)
++ return jbd2_history_skip_empty(s, s->stats + s->start, 1);
++ else
++ return jbd2_history_skip_empty(s, ++ts, 0);
+}
+
-+static void mb_cmp_bitmaps(struct ext4_buddy *e4b, void *bitmap)
++static int jbd2_seq_history_show(struct seq_file *seq, void *v)
+{
-+ if (memcmp(e4b->bd_info->bb_bitmap, bitmap, e4b->bd_sb->s_blocksize)) {
-+ unsigned char *b1, *b2;
-+ int i;
-+ b1 = (unsigned char *) e4b->bd_info->bb_bitmap;
-+ b2 = (unsigned char *) bitmap;
-+ for (i = 0; i < e4b->bd_sb->s_blocksize; i++) {
-+ if (b1[i] != b2[i]) {
-+ printk("corruption in group %lu at byte %u(%u):"
-+ " %x in copy != %x on disk/prealloc\n",
-+ e4b->bd_group, i, i * 8, b1[i], b2[i]);
-+ BUG();
-+ }
-+ }
++ struct transaction_stats_s *ts = v;
++ if (v == SEQ_START_TOKEN) {
++ seq_printf(seq, "%-4s %-5s %-5s %-5s %-5s %-5s %-5s %-6s %-5s "
++ "%-5s %-5s %-5s %-5s %-5s\n", "R/C", "tid",
++ "wait", "run", "lock", "flush", "log", "hndls",
++ "block", "inlog", "ctime", "write", "drop",
++ "close");
++ return 0;
+ }
++ if (ts->ts_type == JBD2_STATS_RUN)
++ seq_printf(seq, "%-4s %-5lu %-5u %-5u %-5u %-5u %-5u "
++ "%-6lu %-5lu %-5lu\n", "R", ts->ts_tid,
++ jiffies_to_msecs(ts->u.run.rs_wait),
++ jiffies_to_msecs(ts->u.run.rs_running),
++ jiffies_to_msecs(ts->u.run.rs_locked),
++ jiffies_to_msecs(ts->u.run.rs_flushing),
++ jiffies_to_msecs(ts->u.run.rs_logging),
++ ts->u.run.rs_handle_count,
++ ts->u.run.rs_blocks,
++ ts->u.run.rs_blocks_logged);
++ else if (ts->ts_type == JBD2_STATS_CHECKPOINT)
++ seq_printf(seq, "%-4s %-5lu %48s %-5u %-5lu %-5lu %-5lu\n",
++ "C", ts->ts_tid, " ",
++ jiffies_to_msecs(ts->u.chp.cs_chp_time),
++ ts->u.chp.cs_written, ts->u.chp.cs_dropped,
++ ts->u.chp.cs_forced_to_close);
++ else
++ J_ASSERT(0);
++ return 0;
+}
+
-+#else
-+static inline void mb_free_blocks_double(struct inode *inode,
-+ struct ext4_buddy *e4b, int first, int count)
-+{
-+ return;
-+}
-+static inline void mb_mark_used_double(struct ext4_buddy *e4b,
-+ int first, int count)
-+{
-+ return;
-+}
-+static inline void mb_cmp_bitmaps(struct ext4_buddy *e4b, void *bitmap)
++static void jbd2_seq_history_stop(struct seq_file *seq, void *v)
+{
-+ return;
+}
-+#endif
+
-+#ifdef AGGRESSIVE_CHECK
-+
-+#define MB_CHECK_ASSERT(assert) \
-+do { \
-+ if (!(assert)) { \
-+ printk(KERN_EMERG \
-+ "Assertion failure in %s() at %s:%d: \"%s\"\n", \
-+ function, file, line, # assert); \
-+ BUG(); \
-+ } \
-+} while (0)
++static struct seq_operations jbd2_seq_history_ops = {
++ .start = jbd2_seq_history_start,
++ .next = jbd2_seq_history_next,
++ .stop = jbd2_seq_history_stop,
++ .show = jbd2_seq_history_show,
++};
+
-+static int __mb_check_buddy(struct ext4_buddy *e4b, char *file,
-+ const char *function, int line)
++static int jbd2_seq_history_open(struct inode *inode, struct file *file)
+{
-+ struct super_block *sb = e4b->bd_sb;
-+ int order = e4b->bd_blkbits + 1;
-+ int max;
-+ int max2;
-+ int i;
-+ int j;
-+ int k;
-+ int count;
-+ struct ext4_group_info *grp;
-+ int fragments = 0;
-+ int fstart;
-+ struct list_head *cur;
-+ void *buddy;
-+ void *buddy2;
-+
-+ if (!test_opt(sb, MBALLOC))
-+ return 0;
-+
-+ {
-+ static int mb_check_counter;
-+ if (mb_check_counter++ % 100 != 0)
-+ return 0;
-+ }
-+
-+ while (order > 1) {
-+ buddy = mb_find_buddy(e4b, order, &max);
-+ MB_CHECK_ASSERT(buddy);
-+ buddy2 = mb_find_buddy(e4b, order - 1, &max2);
-+ MB_CHECK_ASSERT(buddy2);
-+ MB_CHECK_ASSERT(buddy != buddy2);
-+ MB_CHECK_ASSERT(max * 2 == max2);
-+
-+ count = 0;
-+ for (i = 0; i < max; i++) {
-+
-+ if (mb_test_bit(i, buddy)) {
-+ /* only single bit in buddy2 may be 1 */
-+ if (!mb_test_bit(i << 1, buddy2)) {
-+ MB_CHECK_ASSERT(
-+ mb_test_bit((i<<1)+1, buddy2));
-+ } else if (!mb_test_bit((i << 1) + 1, buddy2)) {
-+ MB_CHECK_ASSERT(
-+ mb_test_bit(i << 1, buddy2));
-+ }
-+ continue;
-+ }
-+
-+ /* both bits in buddy2 must be 0 */
-+ MB_CHECK_ASSERT(mb_test_bit(i << 1, buddy2));
-+ MB_CHECK_ASSERT(mb_test_bit((i << 1) + 1, buddy2));
++ journal_t *journal = PDE(inode)->data;
++ struct jbd2_stats_proc_session *s;
++ int rc, size;
+
-+ for (j = 0; j < (1 << order); j++) {
-+ k = (i * (1 << order)) + j;
-+ MB_CHECK_ASSERT(
-+ !mb_test_bit(k, EXT4_MB_BITMAP(e4b)));
-+ }
-+ count++;
-+ }
-+ MB_CHECK_ASSERT(e4b->bd_info->bb_counters[order] == count);
-+ order--;
++ s = kmalloc(sizeof(*s), GFP_KERNEL);
++ if (s == NULL)
++ return -ENOMEM;
++ size = sizeof(struct transaction_stats_s) * journal->j_history_max;
++ s->stats = kmalloc(size, GFP_KERNEL);
++ if (s->stats == NULL) {
++ kfree(s);
++ return -ENOMEM;
+ }
++ spin_lock(&journal->j_history_lock);
++ memcpy(s->stats, journal->j_history, size);
++ s->max = journal->j_history_max;
++ s->start = journal->j_history_cur % s->max;
++ spin_unlock(&journal->j_history_lock);
+
-+ fstart = -1;
-+ buddy = mb_find_buddy(e4b, 0, &max);
-+ for (i = 0; i < max; i++) {
-+ if (!mb_test_bit(i, buddy)) {
-+ MB_CHECK_ASSERT(i >= e4b->bd_info->bb_first_free);
-+ if (fstart == -1) {
-+ fragments++;
-+ fstart = i;
-+ }
-+ continue;
-+ }
-+ fstart = -1;
-+ /* check used bits only */
-+ for (j = 0; j < e4b->bd_blkbits + 1; j++) {
-+ buddy2 = mb_find_buddy(e4b, j, &max2);
-+ k = i >> j;
-+ MB_CHECK_ASSERT(k < max2);
-+ MB_CHECK_ASSERT(mb_test_bit(k, buddy2));
-+ }
++ rc = seq_open(file, &jbd2_seq_history_ops);
++ if (rc == 0) {
++ struct seq_file *m = file->private_data;
++ m->private = s;
++ } else {
++ kfree(s->stats);
++ kfree(s);
+ }
-+ MB_CHECK_ASSERT(!EXT4_MB_GRP_NEED_INIT(e4b->bd_info));
-+ MB_CHECK_ASSERT(e4b->bd_info->bb_fragments == fragments);
++ return rc;
+
-+ grp = ext4_get_group_info(sb, e4b->bd_group);
-+ buddy = mb_find_buddy(e4b, 0, &max);
-+ list_for_each(cur, &grp->bb_prealloc_list) {
-+ ext4_group_t groupnr;
-+ struct ext4_prealloc_space *pa;
-+ pa = list_entry(cur, struct ext4_prealloc_space, group_list);
-+ ext4_get_group_no_and_offset(sb, pa->pstart, &groupnr, &k);
-+ MB_CHECK_ASSERT(groupnr == e4b->bd_group);
-+ for (i = 0; i < pa->len; i++)
-+ MB_CHECK_ASSERT(mb_test_bit(k + i, buddy));
-+ }
-+ return 0;
+}
-+#undef MB_CHECK_ASSERT
-+#define mb_check_buddy(e4b) __mb_check_buddy(e4b, \
-+ __FILE__, __FUNCTION__, __LINE__)
-+#else
-+#define mb_check_buddy(e4b)
-+#endif
+
-+/* FIXME!! need more doc */
-+static void ext4_mb_mark_free_simple(struct super_block *sb,
-+ void *buddy, unsigned first, int len,
-+ struct ext4_group_info *grp)
++static int jbd2_seq_history_release(struct inode *inode, struct file *file)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ unsigned short min;
-+ unsigned short max;
-+ unsigned short chunk;
-+ unsigned short border;
-+
-+ BUG_ON(len >= EXT4_BLOCKS_PER_GROUP(sb));
-+
-+ border = 2 << sb->s_blocksize_bits;
-+
-+ while (len > 0) {
-+ /* find how many blocks can be covered since this position */
-+ max = ffs(first | border) - 1;
-+
-+ /* find how many blocks of power 2 we need to mark */
-+ min = fls(len) - 1;
++ struct seq_file *seq = file->private_data;
++ struct jbd2_stats_proc_session *s = seq->private;
+
-+ if (max < min)
-+ min = max;
-+ chunk = 1 << min;
++ kfree(s->stats);
++ kfree(s);
++ return seq_release(inode, file);
++}
+
-+ /* mark multiblock chunks only */
-+ grp->bb_counters[min]++;
-+ if (min > 0)
-+ mb_clear_bit(first >> min,
-+ buddy + sbi->s_mb_offsets[min]);
++static struct file_operations jbd2_seq_history_fops = {
++ .owner = THIS_MODULE,
++ .open = jbd2_seq_history_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = jbd2_seq_history_release,
++};
+
-+ len -= chunk;
-+ first += chunk;
-+ }
++static void *jbd2_seq_info_start(struct seq_file *seq, loff_t *pos)
++{
++ return *pos ? NULL : SEQ_START_TOKEN;
+}
+
-+static void ext4_mb_generate_buddy(struct super_block *sb,
-+ void *buddy, void *bitmap, ext4_group_t group)
++static void *jbd2_seq_info_next(struct seq_file *seq, void *v, loff_t *pos)
+{
-+ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
-+ unsigned short max = EXT4_BLOCKS_PER_GROUP(sb);
-+ unsigned short i = 0;
-+ unsigned short first;
-+ unsigned short len;
-+ unsigned free = 0;
-+ unsigned fragments = 0;
-+ unsigned long long period = get_cycles();
-+
-+ /* initialize buddy from bitmap which is aggregation
-+ * of on-disk bitmap and preallocations */
-+ i = ext4_find_next_zero_bit(bitmap, max, 0);
-+ grp->bb_first_free = i;
-+ while (i < max) {
-+ fragments++;
-+ first = i;
-+ i = ext4_find_next_bit(bitmap, max, i);
-+ len = i - first;
-+ free += len;
-+ if (len > 1)
-+ ext4_mb_mark_free_simple(sb, buddy, first, len, grp);
-+ else
-+ grp->bb_counters[0]++;
-+ if (i < max)
-+ i = ext4_find_next_zero_bit(bitmap, max, i);
-+ }
-+ grp->bb_fragments = fragments;
-+
-+ if (free != grp->bb_free) {
-+ printk(KERN_DEBUG
-+ "EXT4-fs: group %lu: %u blocks in bitmap, %u in gd\n",
-+ group, free, grp->bb_free);
-+ grp->bb_free = free;
-+ }
-+
-+ clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
-+
-+ period = get_cycles() - period;
-+ spin_lock(&EXT4_SB(sb)->s_bal_lock);
-+ EXT4_SB(sb)->s_mb_buddies_generated++;
-+ EXT4_SB(sb)->s_mb_generation_time += period;
-+ spin_unlock(&EXT4_SB(sb)->s_bal_lock);
++ return NULL;
+}
+
-+/* The buddy information is attached the buddy cache inode
-+ * for convenience. The information regarding each group
-+ * is loaded via ext4_mb_load_buddy. The information involve
-+ * block bitmap and buddy information. The information are
-+ * stored in the inode as
-+ *
-+ * { page }
-+ * [ group 0 buddy][ group 0 bitmap] [group 1][ group 1]...
-+ *
-+ *
-+ * one block each for bitmap and buddy information.
-+ * So for each group we take up 2 blocks. A page can
-+ * contain blocks_per_page (PAGE_CACHE_SIZE / blocksize) blocks.
-+ * So it can have information regarding groups_per_page which
-+ * is blocks_per_page/2
-+ */
-+
-+static int ext4_mb_init_cache(struct page *page, char *incore)
++static int jbd2_seq_info_show(struct seq_file *seq, void *v)
+{
-+ int blocksize;
-+ int blocks_per_page;
-+ int groups_per_page;
-+ int err = 0;
-+ int i;
-+ ext4_group_t first_group;
-+ int first_block;
-+ struct super_block *sb;
-+ struct buffer_head *bhs;
-+ struct buffer_head **bh;
-+ struct inode *inode;
-+ char *data;
-+ char *bitmap;
-+
-+ mb_debug("init page %lu\n", page->index);
-+
-+ inode = page->mapping->host;
-+ sb = inode->i_sb;
-+ blocksize = 1 << inode->i_blkbits;
-+ blocks_per_page = PAGE_CACHE_SIZE / blocksize;
-+
-+ groups_per_page = blocks_per_page >> 1;
-+ if (groups_per_page == 0)
-+ groups_per_page = 1;
-+
-+ /* allocate buffer_heads to read bitmaps */
-+ if (groups_per_page > 1) {
-+ err = -ENOMEM;
-+ i = sizeof(struct buffer_head *) * groups_per_page;
-+ bh = kzalloc(i, GFP_NOFS);
-+ if (bh == NULL)
-+ goto out;
-+ } else
-+ bh = &bhs;
-+
-+ first_group = page->index * blocks_per_page / 2;
-+
-+ /* read all groups the page covers into the cache */
-+ for (i = 0; i < groups_per_page; i++) {
-+ struct ext4_group_desc *desc;
-+
-+ if (first_group + i >= EXT4_SB(sb)->s_groups_count)
-+ break;
-+
-+ err = -EIO;
-+ desc = ext4_get_group_desc(sb, first_group + i, NULL);
-+ if (desc == NULL)
-+ goto out;
-+
-+ err = -ENOMEM;
-+ bh[i] = sb_getblk(sb, ext4_block_bitmap(sb, desc));
-+ if (bh[i] == NULL)
-+ goto out;
-+
-+ if (bh_uptodate_or_lock(bh[i]))
-+ continue;
-+
-+ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
-+ ext4_init_block_bitmap(sb, bh[i],
-+ first_group + i, desc);
-+ set_buffer_uptodate(bh[i]);
-+ unlock_buffer(bh[i]);
-+ continue;
-+ }
-+ get_bh(bh[i]);
-+ bh[i]->b_end_io = end_buffer_read_sync;
-+ submit_bh(READ, bh[i]);
-+ mb_debug("read bitmap for group %lu\n", first_group + i);
-+ }
-+
-+ /* wait for I/O completion */
-+ for (i = 0; i < groups_per_page && bh[i]; i++)
-+ wait_on_buffer(bh[i]);
-+
-+ err = -EIO;
-+ for (i = 0; i < groups_per_page && bh[i]; i++)
-+ if (!buffer_uptodate(bh[i]))
-+ goto out;
-+
-+ first_block = page->index * blocks_per_page;
-+ for (i = 0; i < blocks_per_page; i++) {
-+ int group;
-+ struct ext4_group_info *grinfo;
-+
-+ group = (first_block + i) >> 1;
-+ if (group >= EXT4_SB(sb)->s_groups_count)
-+ break;
-+
-+ /*
-+ * data carry information regarding this
-+ * particular group in the format specified
-+ * above
-+ *
-+ */
-+ data = page_address(page) + (i * blocksize);
-+ bitmap = bh[group - first_group]->b_data;
++ struct jbd2_stats_proc_session *s = seq->private;
+
-+ /*
-+ * We place the buddy block and bitmap block
-+ * close together
-+ */
-+ if ((first_block + i) & 1) {
-+ /* this is block of buddy */
-+ BUG_ON(incore == NULL);
-+ mb_debug("put buddy for group %u in page %lu/%x\n",
-+ group, page->index, i * blocksize);
-+ memset(data, 0xff, blocksize);
-+ grinfo = ext4_get_group_info(sb, group);
-+ grinfo->bb_fragments = 0;
-+ memset(grinfo->bb_counters, 0,
-+ sizeof(unsigned short)*(sb->s_blocksize_bits+2));
-+ /*
-+ * incore got set to the group block bitmap below
-+ */
-+ ext4_mb_generate_buddy(sb, data, incore, group);
-+ incore = NULL;
-+ } else {
-+ /* this is block of bitmap */
-+ BUG_ON(incore != NULL);
-+ mb_debug("put bitmap for group %u in page %lu/%x\n",
-+ group, page->index, i * blocksize);
++ if (v != SEQ_START_TOKEN)
++ return 0;
++ seq_printf(seq, "%lu transaction, each upto %u blocks\n",
++ s->stats->ts_tid,
++ s->journal->j_max_transaction_buffers);
++ if (s->stats->ts_tid == 0)
++ return 0;
++ seq_printf(seq, "average: \n %ums waiting for transaction\n",
++ jiffies_to_msecs(s->stats->u.run.rs_wait / s->stats->ts_tid));
++ seq_printf(seq, " %ums running transaction\n",
++ jiffies_to_msecs(s->stats->u.run.rs_running / s->stats->ts_tid));
++ seq_printf(seq, " %ums transaction was being locked\n",
++ jiffies_to_msecs(s->stats->u.run.rs_locked / s->stats->ts_tid));
++ seq_printf(seq, " %ums flushing data (in ordered mode)\n",
++ jiffies_to_msecs(s->stats->u.run.rs_flushing / s->stats->ts_tid));
++ seq_printf(seq, " %ums logging transaction\n",
++ jiffies_to_msecs(s->stats->u.run.rs_logging / s->stats->ts_tid));
++ seq_printf(seq, " %lu handles per transaction\n",
++ s->stats->u.run.rs_handle_count / s->stats->ts_tid);
++ seq_printf(seq, " %lu blocks per transaction\n",
++ s->stats->u.run.rs_blocks / s->stats->ts_tid);
++ seq_printf(seq, " %lu logged blocks per transaction\n",
++ s->stats->u.run.rs_blocks_logged / s->stats->ts_tid);
++ return 0;
++}
+
-+ /* see comments in ext4_mb_put_pa() */
-+ ext4_lock_group(sb, group);
-+ memcpy(data, bitmap, blocksize);
++static void jbd2_seq_info_stop(struct seq_file *seq, void *v)
++{
++}
+
-+ /* mark all preallocated blks used in in-core bitmap */
-+ ext4_mb_generate_from_pa(sb, data, group);
-+ ext4_unlock_group(sb, group);
++static struct seq_operations jbd2_seq_info_ops = {
++ .start = jbd2_seq_info_start,
++ .next = jbd2_seq_info_next,
++ .stop = jbd2_seq_info_stop,
++ .show = jbd2_seq_info_show,
++};
+
-+ /* set incore so that the buddy information can be
-+ * generated using this
-+ */
-+ incore = data;
-+ }
++static int jbd2_seq_info_open(struct inode *inode, struct file *file)
++{
++ journal_t *journal = PDE(inode)->data;
++ struct jbd2_stats_proc_session *s;
++ int rc, size;
++
++ s = kmalloc(sizeof(*s), GFP_KERNEL);
++ if (s == NULL)
++ return -ENOMEM;
++ size = sizeof(struct transaction_stats_s);
++ s->stats = kmalloc(size, GFP_KERNEL);
++ if (s->stats == NULL) {
++ kfree(s);
++ return -ENOMEM;
+ }
-+ SetPageUptodate(page);
++ spin_lock(&journal->j_history_lock);
++ memcpy(s->stats, &journal->j_stats, size);
++ s->journal = journal;
++ spin_unlock(&journal->j_history_lock);
+
-+out:
-+ if (bh) {
-+ for (i = 0; i < groups_per_page && bh[i]; i++)
-+ brelse(bh[i]);
-+ if (bh != &bhs)
-+ kfree(bh);
++ rc = seq_open(file, &jbd2_seq_info_ops);
++ if (rc == 0) {
++ struct seq_file *m = file->private_data;
++ m->private = s;
++ } else {
++ kfree(s->stats);
++ kfree(s);
+ }
-+ return err;
++ return rc;
++
+}
+
-+static int ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
-+ struct ext4_buddy *e4b)
++static int jbd2_seq_info_release(struct inode *inode, struct file *file)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ struct inode *inode = sbi->s_buddy_cache;
-+ int blocks_per_page;
-+ int block;
-+ int pnum;
-+ int poff;
-+ struct page *page;
-+
-+ mb_debug("load group %lu\n", group);
++ struct seq_file *seq = file->private_data;
++ struct jbd2_stats_proc_session *s = seq->private;
++ kfree(s->stats);
++ kfree(s);
++ return seq_release(inode, file);
++}
+
-+ blocks_per_page = PAGE_CACHE_SIZE / sb->s_blocksize;
++static struct file_operations jbd2_seq_info_fops = {
++ .owner = THIS_MODULE,
++ .open = jbd2_seq_info_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = jbd2_seq_info_release,
++};
+
-+ e4b->bd_blkbits = sb->s_blocksize_bits;
-+ e4b->bd_info = ext4_get_group_info(sb, group);
-+ e4b->bd_sb = sb;
-+ e4b->bd_group = group;
-+ e4b->bd_buddy_page = NULL;
-+ e4b->bd_bitmap_page = NULL;
++static struct proc_dir_entry *proc_jbd2_stats;
+
-+ /*
-+ * the buddy cache inode stores the block bitmap
-+ * and buddy information in consecutive blocks.
-+ * So for each group we need two blocks.
-+ */
-+ block = group * 2;
-+ pnum = block / blocks_per_page;
-+ poff = block % blocks_per_page;
++static void jbd2_stats_proc_init(journal_t *journal)
++{
++ char name[BDEVNAME_SIZE];
+
-+ /* we could use find_or_create_page(), but it locks page
-+ * what we'd like to avoid in fast path ... */
-+ page = find_get_page(inode->i_mapping, pnum);
-+ if (page == NULL || !PageUptodate(page)) {
-+ if (page)
-+ page_cache_release(page);
-+ page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
-+ if (page) {
-+ BUG_ON(page->mapping != inode->i_mapping);
-+ if (!PageUptodate(page)) {
-+ ext4_mb_init_cache(page, NULL);
-+ mb_cmp_bitmaps(e4b, page_address(page) +
-+ (poff * sb->s_blocksize));
++ snprintf(name, sizeof(name) - 1, "%s", bdevname(journal->j_dev, name));
++ journal->j_proc_entry = proc_mkdir(name, proc_jbd2_stats);
++ if (journal->j_proc_entry) {
++ struct proc_dir_entry *p;
++ p = create_proc_entry("history", S_IRUGO,
++ journal->j_proc_entry);
++ if (p) {
++ p->proc_fops = &jbd2_seq_history_fops;
++ p->data = journal;
++ p = create_proc_entry("info", S_IRUGO,
++ journal->j_proc_entry);
++ if (p) {
++ p->proc_fops = &jbd2_seq_info_fops;
++ p->data = journal;
+ }
-+ unlock_page(page);
+ }
+ }
-+ if (page == NULL || !PageUptodate(page))
-+ goto err;
-+ e4b->bd_bitmap_page = page;
-+ e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
-+ mark_page_accessed(page);
++}
+
-+ block++;
-+ pnum = block / blocks_per_page;
-+ poff = block % blocks_per_page;
++static void jbd2_stats_proc_exit(journal_t *journal)
++{
++ char name[BDEVNAME_SIZE];
+
-+ page = find_get_page(inode->i_mapping, pnum);
-+ if (page == NULL || !PageUptodate(page)) {
-+ if (page)
-+ page_cache_release(page);
-+ page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
-+ if (page) {
-+ BUG_ON(page->mapping != inode->i_mapping);
-+ if (!PageUptodate(page))
-+ ext4_mb_init_cache(page, e4b->bd_bitmap);
++ snprintf(name, sizeof(name) - 1, "%s", bdevname(journal->j_dev, name));
++ remove_proc_entry("info", journal->j_proc_entry);
++ remove_proc_entry("history", journal->j_proc_entry);
++ remove_proc_entry(name, proc_jbd2_stats);
++}
+
-+ unlock_page(page);
-+ }
++static void journal_init_stats(journal_t *journal)
++{
++ int size;
++
++ if (!proc_jbd2_stats)
++ return;
++
++ journal->j_history_max = 100;
++ size = sizeof(struct transaction_stats_s) * journal->j_history_max;
++ journal->j_history = kzalloc(size, GFP_KERNEL);
++ if (!journal->j_history) {
++ journal->j_history_max = 0;
++ return;
+ }
-+ if (page == NULL || !PageUptodate(page))
-+ goto err;
-+ e4b->bd_buddy_page = page;
-+ e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize);
-+ mark_page_accessed(page);
++ spin_lock_init(&journal->j_history_lock);
++}
+
-+ BUG_ON(e4b->bd_bitmap_page == NULL);
-+ BUG_ON(e4b->bd_buddy_page == NULL);
+ /*
+ * Management for journal control blocks: functions to create and
+ * destroy journal_t structures, and to initialise and read existing
+@@ -681,6 +988,9 @@ static journal_t * journal_init_common (void)
+ kfree(journal);
+ goto fail;
+ }
+
-+ return 0;
++ journal_init_stats(journal);
+
-+err:
-+ if (e4b->bd_bitmap_page)
-+ page_cache_release(e4b->bd_bitmap_page);
-+ if (e4b->bd_buddy_page)
-+ page_cache_release(e4b->bd_buddy_page);
-+ e4b->bd_buddy = NULL;
-+ e4b->bd_bitmap = NULL;
-+ return -EIO;
+ return journal;
+ fail:
+ return NULL;
+@@ -735,6 +1045,7 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev,
+ journal->j_fs_dev = fs_dev;
+ journal->j_blk_offset = start;
+ journal->j_maxlen = len;
++ jbd2_stats_proc_init(journal);
+
+ bh = __getblk(journal->j_dev, start, journal->j_blocksize);
+ J_ASSERT(bh != NULL);
+@@ -773,6 +1084,7 @@ journal_t * jbd2_journal_init_inode (struct inode *inode)
+
+ journal->j_maxlen = inode->i_size >> inode->i_sb->s_blocksize_bits;
+ journal->j_blocksize = inode->i_sb->s_blocksize;
++ jbd2_stats_proc_init(journal);
+
+ /* journal descriptor can store up to n blocks -bzzz */
+ n = journal->j_blocksize / sizeof(journal_block_tag_t);
+@@ -1153,6 +1465,8 @@ void jbd2_journal_destroy(journal_t *journal)
+ brelse(journal->j_sb_buffer);
+ }
+
++ if (journal->j_proc_entry)
++ jbd2_stats_proc_exit(journal);
+ if (journal->j_inode)
+ iput(journal->j_inode);
+ if (journal->j_revoke)
+@@ -1264,6 +1578,32 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ return 1;
+ }
+
++/*
++ * jbd2_journal_clear_features () - Clear a given journal feature in the
++ * superblock
++ * @journal: Journal to act on.
++ * @compat: bitmask of compatible features
++ * @ro: bitmask of features that force read-only mount
++ * @incompat: bitmask of incompatible features
++ *
++ * Clear a given journal feature as present on the
++ * superblock.
++ */
++void jbd2_journal_clear_features(journal_t *journal, unsigned long compat,
++ unsigned long ro, unsigned long incompat)
++{
++ journal_superblock_t *sb;
++
++ jbd_debug(1, "Clear features 0x%lx/0x%lx/0x%lx\n",
++ compat, ro, incompat);
++
++ sb = journal->j_superblock;
++
++ sb->s_feature_compat &= ~cpu_to_be32(compat);
++ sb->s_feature_ro_compat &= ~cpu_to_be32(ro);
++ sb->s_feature_incompat &= ~cpu_to_be32(incompat);
+}
++EXPORT_SYMBOL(jbd2_journal_clear_features);
+
+ /**
+ * int jbd2_journal_update_format () - Update on-disk journal structure.
+@@ -1633,7 +1973,7 @@ static int journal_init_jbd2_journal_head_cache(void)
+ jbd2_journal_head_cache = kmem_cache_create("jbd2_journal_head",
+ sizeof(struct journal_head),
+ 0, /* offset */
+- 0, /* flags */
++ SLAB_TEMPORARY, /* flags */
+ NULL); /* ctor */
+ retval = 0;
+ if (jbd2_journal_head_cache == 0) {
+@@ -1900,6 +2240,28 @@ static void __exit jbd2_remove_debugfs_entry(void)
+
+ #endif
+
++#ifdef CONFIG_PROC_FS
+
-+static void ext4_mb_release_desc(struct ext4_buddy *e4b)
++#define JBD2_STATS_PROC_NAME "fs/jbd2"
++
++static void __init jbd2_create_jbd_stats_proc_entry(void)
+{
-+ if (e4b->bd_bitmap_page)
-+ page_cache_release(e4b->bd_bitmap_page);
-+ if (e4b->bd_buddy_page)
-+ page_cache_release(e4b->bd_buddy_page);
++ proc_jbd2_stats = proc_mkdir(JBD2_STATS_PROC_NAME, NULL);
+}
+
++static void __exit jbd2_remove_jbd_stats_proc_entry(void)
++{
++ if (proc_jbd2_stats)
++ remove_proc_entry(JBD2_STATS_PROC_NAME, NULL);
++}
+
-+static int mb_find_order_for_block(struct ext4_buddy *e4b, int block)
++#else
++
++#define jbd2_create_jbd_stats_proc_entry() do {} while (0)
++#define jbd2_remove_jbd_stats_proc_entry() do {} while (0)
++
++#endif
++
+ struct kmem_cache *jbd2_handle_cache;
+
+ static int __init journal_init_handle_cache(void)
+@@ -1907,7 +2269,7 @@ static int __init journal_init_handle_cache(void)
+ jbd2_handle_cache = kmem_cache_create("jbd2_journal_handle",
+ sizeof(handle_t),
+ 0, /* offset */
+- 0, /* flags */
++ SLAB_TEMPORARY, /* flags */
+ NULL); /* ctor */
+ if (jbd2_handle_cache == NULL) {
+ printk(KERN_EMERG "JBD: failed to create handle cache\n");
+@@ -1955,6 +2317,7 @@ static int __init journal_init(void)
+ if (ret != 0)
+ jbd2_journal_destroy_caches();
+ jbd2_create_debugfs_entry();
++ jbd2_create_jbd_stats_proc_entry();
+ return ret;
+ }
+
+@@ -1966,6 +2329,7 @@ static void __exit journal_exit(void)
+ printk(KERN_EMERG "JBD: leaked %d journal_heads!\n", n);
+ #endif
+ jbd2_remove_debugfs_entry();
++ jbd2_remove_jbd_stats_proc_entry();
+ jbd2_journal_destroy_caches();
+ }
+
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index d0ce627..9216806 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -21,6 +21,7 @@
+ #include <linux/jbd2.h>
+ #include <linux/errno.h>
+ #include <linux/slab.h>
++#include <linux/crc32.h>
+ #endif
+
+ /*
+@@ -316,6 +317,37 @@ static inline unsigned long long read_tag_block(int tag_bytes, journal_block_tag
+ return block;
+ }
+
++/*
++ * calc_chksums calculates the checksums for the blocks described in the
++ * descriptor block.
++ */
++static int calc_chksums(journal_t *journal, struct buffer_head *bh,
++ unsigned long *next_log_block, __u32 *crc32_sum)
+{
-+ int order = 1;
-+ void *bb;
++ int i, num_blks, err;
++ unsigned long io_block;
++ struct buffer_head *obh;
+
-+ BUG_ON(EXT4_MB_BITMAP(e4b) == EXT4_MB_BUDDY(e4b));
-+ BUG_ON(block >= (1 << (e4b->bd_blkbits + 3)));
++ num_blks = count_tags(journal, bh);
++ /* Calculate checksum of the descriptor block. */
++ *crc32_sum = crc32_be(*crc32_sum, (void *)bh->b_data, bh->b_size);
+
-+ bb = EXT4_MB_BUDDY(e4b);
-+ while (order <= e4b->bd_blkbits + 1) {
-+ block = block >> 1;
-+ if (!mb_test_bit(block, bb)) {
-+ /* this block is part of buddy of order 'order' */
-+ return order;
++ for (i = 0; i < num_blks; i++) {
++ io_block = (*next_log_block)++;
++ wrap(journal, *next_log_block);
++ err = jread(&obh, journal, io_block);
++ if (err) {
++ printk(KERN_ERR "JBD: IO error %d recovering block "
++ "%lu in log\n", err, io_block);
++ return 1;
++ } else {
++ *crc32_sum = crc32_be(*crc32_sum, (void *)obh->b_data,
++ obh->b_size);
+ }
-+ bb += 1 << (e4b->bd_blkbits - order);
-+ order++;
+ }
+ return 0;
+}
+
-+static void mb_clear_bits(spinlock_t *lock, void *bm, int cur, int len)
+ static int do_one_pass(journal_t *journal,
+ struct recovery_info *info, enum passtype pass)
+ {
+@@ -328,6 +360,7 @@ static int do_one_pass(journal_t *journal,
+ unsigned int sequence;
+ int blocktype;
+ int tag_bytes = journal_tag_bytes(journal);
++ __u32 crc32_sum = ~0; /* Transactional Checksums */
+
+ /* Precompute the maximum metadata descriptors in a descriptor block */
+ int MAX_BLOCKS_PER_DESC;
+@@ -419,12 +452,26 @@ static int do_one_pass(journal_t *journal,
+ switch(blocktype) {
+ case JBD2_DESCRIPTOR_BLOCK:
+ /* If it is a valid descriptor block, replay it
+- * in pass REPLAY; otherwise, just skip over the
+- * blocks it describes. */
++ * in pass REPLAY; if journal_checksums is enabled,
++ * calculate checksums in PASS_SCAN; otherwise
++ * just skip over the blocks it describes. */
+ if (pass != PASS_REPLAY) {
++ if (pass == PASS_SCAN &&
++ JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM) &&
++ !info->end_transaction) {
++ if (calc_chksums(journal, bh,
++ &next_log_block,
++ &crc32_sum)) {
++ put_bh(bh);
++ break;
++ }
++ put_bh(bh);
++ continue;
++ }
+ next_log_block += count_tags(journal, bh);
+ wrap(journal, next_log_block);
+- brelse(bh);
++ put_bh(bh);
+ continue;
+ }
+
+@@ -516,9 +563,96 @@ static int do_one_pass(journal_t *journal,
+ continue;
+
+ case JBD2_COMMIT_BLOCK:
+- /* Found an expected commit block: not much to
+- * do other than move on to the next sequence
++ /* How to differentiate between interrupted commit
++ * and journal corruption?
++ *
++ * {nth transaction}
++ * Checksum Verification Failed
++ * |
++ * ____________________
++ * | |
++ * async_commit sync_commit
++ * | |
++ * | GO TO NEXT "Journal Corruption"
++ * | TRANSACTION
++ * |
++ * {(n+1)th transaction}
++ * |
++ * _______|______________
++ * | |
++ * Commit block found Commit block not found
++ * | |
++ * "Journal Corruption" |
++ * _____________|_________
++ * | |
++ * nth trans corrupt OR nth trans
++ * and (n+1)th interrupted interrupted
++ * before commit block
++ * could reach the disk.
++ * (We cannot distinguish between the above
++ * conditions, hence assume
++ * "Interrupted Commit".)
++ */
++
++ /* Found an expected commit block: if checksums
++ * are present verify them in PASS_SCAN; else not
++ * much to do other than move on to the next sequence
+ * number. */
++ if (pass == PASS_SCAN &&
++ JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_COMPAT_CHECKSUM)) {
++ int chksum_err, chksum_seen;
++ struct commit_header *cbh =
++ (struct commit_header *)bh->b_data;
++ unsigned found_chksum =
++ be32_to_cpu(cbh->h_chksum[0]);
++
++ chksum_err = chksum_seen = 0;
++
++ if (info->end_transaction) {
++ printk(KERN_ERR "JBD: Transaction %u "
++ "found to be corrupt.\n",
++ next_commit_ID - 1);
++ brelse(bh);
++ break;
++ }
++
++ if (crc32_sum == found_chksum &&
++ cbh->h_chksum_type == JBD2_CRC32_CHKSUM &&
++ cbh->h_chksum_size ==
++ JBD2_CRC32_CHKSUM_SIZE)
++ chksum_seen = 1;
++ else if (!(cbh->h_chksum_type == 0 &&
++ cbh->h_chksum_size == 0 &&
++ found_chksum == 0 &&
++ !chksum_seen))
++ /*
++ * If the fs is mounted with an old kernel and then
++ * a kernel with journal_checksum is used, we can
++ * get a situation where the journal has the
++ * checksum feature flag set but checksums are not
++ * present, i.e. chksum == 0, in the individual
++ * commit blocks.
++ * Hence, to avoid checksum failures in this
++ * situation, this extra check is added.
++ */
++ chksum_err = 1;
++
++ if (chksum_err) {
++ info->end_transaction = next_commit_ID;
++
++ if (!JBD2_HAS_COMPAT_FEATURE(journal,
++ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)){
++ printk(KERN_ERR
++ "JBD: Transaction %u "
++ "found to be corrupt.\n",
++ next_commit_ID);
++ brelse(bh);
++ break;
++ }
++ }
++ crc32_sum = ~0;
++ }
+ brelse(bh);
+ next_commit_ID++;
+ continue;
+@@ -554,9 +688,10 @@ static int do_one_pass(journal_t *journal,
+ * transaction marks the end of the valid log.
+ */
+
+- if (pass == PASS_SCAN)
+- info->end_transaction = next_commit_ID;
+- else {
++ if (pass == PASS_SCAN) {
++ if (!info->end_transaction)
++ info->end_transaction = next_commit_ID;
++ } else {
+ /* It's really bad news if different passes end up at
+ * different places (but possible due to IO errors). */
+ if (info->end_transaction != next_commit_ID) {
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index 3595fd4..df36f42 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -171,13 +171,15 @@ int __init jbd2_journal_init_revoke_caches(void)
+ {
+ jbd2_revoke_record_cache = kmem_cache_create("jbd2_revoke_record",
+ sizeof(struct jbd2_revoke_record_s),
+- 0, SLAB_HWCACHE_ALIGN, NULL);
++ 0,
++ SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY,
++ NULL);
+ if (jbd2_revoke_record_cache == 0)
+ return -ENOMEM;
+
+ jbd2_revoke_table_cache = kmem_cache_create("jbd2_revoke_table",
+ sizeof(struct jbd2_revoke_table_s),
+- 0, 0, NULL);
++ 0, SLAB_TEMPORARY, NULL);
+ if (jbd2_revoke_table_cache == 0) {
+ kmem_cache_destroy(jbd2_revoke_record_cache);
+ jbd2_revoke_record_cache = NULL;
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index b1fcf2b..b9b0b6f 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -54,11 +54,13 @@ jbd2_get_transaction(journal_t *journal, transaction_t *transaction)
+ spin_lock_init(&transaction->t_handle_lock);
+
+ /* Set up the commit timer for the new transaction. */
+- journal->j_commit_timer.expires = transaction->t_expires;
++ journal->j_commit_timer.expires = round_jiffies(transaction->t_expires);
+ add_timer(&journal->j_commit_timer);
+
+ J_ASSERT(journal->j_running_transaction == NULL);
+ journal->j_running_transaction = transaction;
++ transaction->t_max_wait = 0;
++ transaction->t_start = jiffies;
+
+ return transaction;
+ }
+@@ -85,6 +87,7 @@ static int start_this_handle(journal_t *journal, handle_t *handle)
+ int nblocks = handle->h_buffer_credits;
+ transaction_t *new_transaction = NULL;
+ int ret = 0;
++ unsigned long ts = jiffies;
+
+ if (nblocks > journal->j_max_transaction_buffers) {
+ printk(KERN_ERR "JBD: %s wants too many credits (%d > %d)\n",
+@@ -217,6 +220,12 @@ repeat_locked:
+ /* OK, account for the buffers that this operation expects to
+ * use and add the handle to the running transaction. */
+
++ if (time_after(transaction->t_start, ts)) {
++ ts = jbd2_time_diff(ts, transaction->t_start);
++ if (ts > transaction->t_max_wait)
++ transaction->t_max_wait = ts;
++ }
++
+ handle->h_transaction = transaction;
+ transaction->t_outstanding_credits += nblocks;
+ transaction->t_updates++;
+@@ -232,6 +241,8 @@ out:
+ return ret;
+ }
+
++static struct lock_class_key jbd2_handle_key;
++
+ /* Allocate a new handle. This should probably be in a slab... */
+ static handle_t *new_handle(int nblocks)
+ {
+@@ -242,6 +253,9 @@ static handle_t *new_handle(int nblocks)
+ handle->h_buffer_credits = nblocks;
+ handle->h_ref = 1;
+
++ lockdep_init_map(&handle->h_lockdep_map, "jbd2_handle",
++ &jbd2_handle_key, 0);
++
+ return handle;
+ }
+
+@@ -284,7 +298,11 @@ handle_t *jbd2_journal_start(journal_t *journal, int nblocks)
+ jbd2_free_handle(handle);
+ current->journal_info = NULL;
+ handle = ERR_PTR(err);
++ goto out;
+ }
++
++ lock_acquire(&handle->h_lockdep_map, 0, 0, 0, 2, _THIS_IP_);
++out:
+ return handle;
+ }
+
+@@ -1164,7 +1182,7 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ }
+
+ /* That test should have eliminated the following case: */
+- J_ASSERT_JH(jh, jh->b_frozen_data == 0);
++ J_ASSERT_JH(jh, jh->b_frozen_data == NULL);
+
+ JBUFFER_TRACE(jh, "file as BJ_Metadata");
+ spin_lock(&journal->j_list_lock);
+@@ -1410,6 +1428,8 @@ int jbd2_journal_stop(handle_t *handle)
+ spin_unlock(&journal->j_state_lock);
+ }
+
++ lock_release(&handle->h_lockdep_map, 1, _THIS_IP_);
++
+ jbd2_free_handle(handle);
+ return err;
+ }
+@@ -1512,7 +1532,7 @@ void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh)
+
+ J_ASSERT_JH(jh, jh->b_jlist < BJ_Types);
+ if (jh->b_jlist != BJ_None)
+- J_ASSERT_JH(jh, transaction != 0);
++ J_ASSERT_JH(jh, transaction != NULL);
+
+ switch (jh->b_jlist) {
+ case BJ_None:
+@@ -1581,11 +1601,11 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
+ if (buffer_locked(bh) || buffer_dirty(bh))
+ goto out;
+
+- if (jh->b_next_transaction != 0)
++ if (jh->b_next_transaction != NULL)
+ goto out;
+
+ spin_lock(&journal->j_list_lock);
+- if (jh->b_transaction != 0 && jh->b_cp_transaction == 0) {
++ if (jh->b_transaction != NULL && jh->b_cp_transaction == NULL) {
+ if (jh->b_jlist == BJ_SyncData || jh->b_jlist == BJ_Locked) {
+ /* A written-back ordered data buffer */
+ JBUFFER_TRACE(jh, "release data");
+@@ -1593,7 +1613,7 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
+ jbd2_journal_remove_journal_head(bh);
+ __brelse(bh);
+ }
+- } else if (jh->b_cp_transaction != 0 && jh->b_transaction == 0) {
++ } else if (jh->b_cp_transaction != NULL && jh->b_transaction == NULL) {
+ /* written-back checkpointed metadata buffer */
+ if (jh->b_jlist == BJ_None) {
+ JBUFFER_TRACE(jh, "remove from checkpoint list");
+@@ -1953,7 +1973,7 @@ void __jbd2_journal_file_buffer(struct journal_head *jh,
+
+ J_ASSERT_JH(jh, jh->b_jlist < BJ_Types);
+ J_ASSERT_JH(jh, jh->b_transaction == transaction ||
+- jh->b_transaction == 0);
++ jh->b_transaction == NULL);
+
+ if (jh->b_transaction && jh->b_jlist == jlist)
+ return;
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index df25ecc..4dcc058 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -284,11 +284,11 @@ static struct dir_table_slot *find_index(struct inode *ip, u32 index,
+ release_metapage(*mp);
+ *mp = NULL;
+ }
+- if (*mp == 0) {
++ if (!(*mp)) {
+ *lblock = blkno;
+ *mp = read_index_page(ip, blkno);
+ }
+- if (*mp == 0) {
++ if (!(*mp)) {
+ jfs_err("free_index: error reading directory table");
+ return NULL;
+ }
+@@ -413,7 +413,8 @@ static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
+ }
+ ip->i_size = PSIZE;
+
+- if ((mp = get_index_page(ip, 0)) == 0) {
++ mp = get_index_page(ip, 0);
++ if (!mp) {
+ jfs_err("add_index: get_metapage failed!");
+ xtTruncate(tid, ip, 0, COMMIT_PWMAP);
+ memcpy(&jfs_ip->i_dirtable, temp_table,
+@@ -461,7 +462,7 @@ static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
+ } else
+ mp = read_index_page(ip, blkno);
+
+- if (mp == 0) {
++ if (!mp) {
+ jfs_err("add_index: get/read_metapage failed!");
+ goto clean_up;
+ }
+@@ -499,7 +500,7 @@ static void free_index(tid_t tid, struct inode *ip, u32 index, u32 next)
+
+ dirtab_slot = find_index(ip, index, &mp, &lblock);
+
+- if (dirtab_slot == 0)
++ if (!dirtab_slot)
+ return;
+
+ dirtab_slot->flag = DIR_INDEX_FREE;
+@@ -526,7 +527,7 @@ static void modify_index(tid_t tid, struct inode *ip, u32 index, s64 bn,
+
+ dirtab_slot = find_index(ip, index, mp, lblock);
+
+- if (dirtab_slot == 0)
++ if (!dirtab_slot)
+ return;
+
+ DTSaddress(dirtab_slot, bn);
+@@ -552,7 +553,7 @@ static int read_index(struct inode *ip, u32 index,
+ struct dir_table_slot *slot;
+
+ slot = find_index(ip, index, &mp, &lblock);
+- if (slot == 0) {
++ if (!slot) {
+ return -EIO;
+ }
+
+@@ -592,10 +593,8 @@ int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
+ struct component_name ciKey;
+ struct super_block *sb = ip->i_sb;
+
+- ciKey.name =
+- (wchar_t *) kmalloc((JFS_NAME_MAX + 1) * sizeof(wchar_t),
+- GFP_NOFS);
+- if (ciKey.name == 0) {
++ ciKey.name = kmalloc((JFS_NAME_MAX + 1) * sizeof(wchar_t), GFP_NOFS);
++ if (!ciKey.name) {
+ rc = -ENOMEM;
+ goto dtSearch_Exit2;
+ }
+@@ -957,10 +956,8 @@ static int dtSplitUp(tid_t tid,
+ smp = split->mp;
+ sp = DT_PAGE(ip, smp);
+
+- key.name =
+- (wchar_t *) kmalloc((JFS_NAME_MAX + 2) * sizeof(wchar_t),
+- GFP_NOFS);
+- if (key.name == 0) {
++ key.name = kmalloc((JFS_NAME_MAX + 2) * sizeof(wchar_t), GFP_NOFS);
++ if (!key.name) {
+ DT_PUTPAGE(smp);
+ rc = -ENOMEM;
+ goto dtSplitUp_Exit;
+diff --git a/fs/jfs/jfs_dtree.h b/fs/jfs/jfs_dtree.h
+index 8561c6e..cdac2d5 100644
+--- a/fs/jfs/jfs_dtree.h
++++ b/fs/jfs/jfs_dtree.h
+@@ -74,7 +74,7 @@ struct idtentry {
+ #define DTIHDRDATALEN 11
+
+ /* compute number of slots for entry */
+-#define NDTINTERNAL(klen) ( ((4 + (klen)) + (15 - 1)) / 15 )
++#define NDTINTERNAL(klen) (DIV_ROUND_UP((4 + (klen)), 15))
+
+
+ /*
+@@ -133,7 +133,7 @@ struct dir_table_slot {
+ ( ((s64)((dts)->addr1)) << 32 | __le32_to_cpu((dts)->addr2) )
+
+ /* compute number of slots for entry */
+-#define NDTLEAF_LEGACY(klen) ( ((2 + (klen)) + (15 - 1)) / 15 )
++#define NDTLEAF_LEGACY(klen) (DIV_ROUND_UP((2 + (klen)), 15))
+ #define NDTLEAF NDTINTERNAL
+
+
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 3870ba8..9bf29f7 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -381,7 +381,7 @@ int diRead(struct inode *ip)
+
+ /* read the page of disk inode */
+ mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
+- if (mp == 0) {
++ if (!mp) {
+ jfs_err("diRead: read_metapage failed");
+ return -EIO;
+ }
+@@ -654,7 +654,7 @@ int diWrite(tid_t tid, struct inode *ip)
+ /* read the page of disk inode */
+ retry:
+ mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
+- if (mp == 0)
++ if (!mp)
+ return -EIO;
+
+ /* get the pointer to the disk inode */
+diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
+index 15a3974..325a967 100644
+--- a/fs/jfs/jfs_logmgr.c
++++ b/fs/jfs/jfs_logmgr.c
+@@ -208,6 +208,17 @@ static struct lmStat {
+ } lmStat;
+ #endif
+
++static void write_special_inodes(struct jfs_log *log,
++ int (*writer)(struct address_space *))
+{
-+ __u32 *addr;
++ struct jfs_sb_info *sbi;
+
-+ len = cur + len;
-+ while (cur < len) {
-+ if ((cur & 31) == 0 && (len - cur) >= 32) {
-+ /* fast path: clear whole word at once */
-+ addr = bm + (cur >> 3);
-+ *addr = 0;
-+ cur += 32;
-+ continue;
-+ }
-+ mb_clear_bit_atomic(lock, cur, bm);
-+ cur++;
++ list_for_each_entry(sbi, &log->sb_list, log_list) {
++ writer(sbi->ipbmap->i_mapping);
++ writer(sbi->ipimap->i_mapping);
++ writer(sbi->direct_inode->i_mapping);
+ }
+}
+
+ /*
+ * NAME: lmLog()
+@@ -935,22 +946,13 @@ static int lmLogSync(struct jfs_log * log, int hard_sync)
+ struct lrd lrd;
+ int lsn;
+ struct logsyncblk *lp;
+- struct jfs_sb_info *sbi;
+ unsigned long flags;
+
+ /* push dirty metapages out to disk */
+ if (hard_sync)
+- list_for_each_entry(sbi, &log->sb_list, log_list) {
+- filemap_fdatawrite(sbi->ipbmap->i_mapping);
+- filemap_fdatawrite(sbi->ipimap->i_mapping);
+- filemap_fdatawrite(sbi->direct_inode->i_mapping);
+- }
++ write_special_inodes(log, filemap_fdatawrite);
+ else
+- list_for_each_entry(sbi, &log->sb_list, log_list) {
+- filemap_flush(sbi->ipbmap->i_mapping);
+- filemap_flush(sbi->ipimap->i_mapping);
+- filemap_flush(sbi->direct_inode->i_mapping);
+- }
++ write_special_inodes(log, filemap_flush);
+
+ /*
+ * forward syncpt
+@@ -1536,7 +1538,6 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
+ {
+ int i;
+ struct tblock *target = NULL;
+- struct jfs_sb_info *sbi;
+
+ /* jfs_write_inode may call us during read-only mount */
+ if (!log)
+@@ -1598,11 +1599,7 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
+ if (wait < 2)
+ return;
+
+- list_for_each_entry(sbi, &log->sb_list, log_list) {
+- filemap_fdatawrite(sbi->ipbmap->i_mapping);
+- filemap_fdatawrite(sbi->ipimap->i_mapping);
+- filemap_fdatawrite(sbi->direct_inode->i_mapping);
+- }
++ write_special_inodes(log, filemap_fdatawrite);
+
+ /*
+ * If there was recent activity, we may need to wait
+@@ -1611,6 +1608,7 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
+ if ((!list_empty(&log->cqueue)) || !list_empty(&log->synclist)) {
+ for (i = 0; i < 200; i++) { /* Too much? */
+ msleep(250);
++ write_special_inodes(log, filemap_fdatawrite);
+ if (list_empty(&log->cqueue) &&
+ list_empty(&log->synclist))
+ break;
+@@ -2347,7 +2345,7 @@ int jfsIOWait(void *arg)
+
+ do {
+ spin_lock_irq(&log_redrive_lock);
+- while ((bp = log_redrive_list) != 0) {
++ while ((bp = log_redrive_list)) {
+ log_redrive_list = bp->l_redrive_next;
+ bp->l_redrive_next = NULL;
+ spin_unlock_irq(&log_redrive_lock);
+diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
+index f5cd8d3..d1e64f2 100644
+--- a/fs/jfs/jfs_metapage.c
++++ b/fs/jfs/jfs_metapage.c
+@@ -39,11 +39,11 @@ static struct {
+ #endif
+
+ #define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
+-#define trylock_metapage(mp) test_and_set_bit(META_locked, &(mp)->flag)
++#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
+
+ static inline void unlock_metapage(struct metapage *mp)
+ {
+- clear_bit(META_locked, &mp->flag);
++ clear_bit_unlock(META_locked, &mp->flag);
+ wake_up(&mp->wait);
+ }
+
+@@ -88,7 +88,7 @@ struct meta_anchor {
+ };
+ #define mp_anchor(page) ((struct meta_anchor *)page_private(page))
+
+-static inline struct metapage *page_to_mp(struct page *page, uint offset)
++static inline struct metapage *page_to_mp(struct page *page, int offset)
+ {
+ if (!PagePrivate(page))
+ return NULL;
+@@ -153,7 +153,7 @@ static inline void dec_io(struct page *page, void (*handler) (struct page *))
+ }
+
+ #else
+-static inline struct metapage *page_to_mp(struct page *page, uint offset)
++static inline struct metapage *page_to_mp(struct page *page, int offset)
+ {
+ return PagePrivate(page) ? (struct metapage *)page_private(page) : NULL;
+ }
+@@ -249,7 +249,7 @@ static inline void drop_metapage(struct page *page, struct metapage *mp)
+ */
+
+ static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
+- unsigned int *len)
++ int *len)
+ {
+ int rc = 0;
+ int xflag;
+@@ -352,25 +352,27 @@ static void metapage_write_end_io(struct bio *bio, int err)
+ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
+ {
+ struct bio *bio = NULL;
+- unsigned int block_offset; /* block offset of mp within page */
++ int block_offset; /* block offset of mp within page */
+ struct inode *inode = page->mapping->host;
+- unsigned int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
+- unsigned int len;
+- unsigned int xlen;
++ int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
++ int len;
++ int xlen;
+ struct metapage *mp;
+ int redirty = 0;
+ sector_t lblock;
++ int nr_underway = 0;
+ sector_t pblock;
+ sector_t next_block = 0;
+ sector_t page_start;
+ unsigned long bio_bytes = 0;
+ unsigned long bio_offset = 0;
+- unsigned int offset;
++ int offset;
+
+ page_start = (sector_t)page->index <<
+ (PAGE_CACHE_SHIFT - inode->i_blkbits);
+ BUG_ON(!PageLocked(page));
+ BUG_ON(PageWriteback(page));
++ set_page_writeback(page);
+
+ for (offset = 0; offset < PAGE_CACHE_SIZE; offset += PSIZE) {
+ mp = page_to_mp(page, offset);
+@@ -413,11 +415,10 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
+ if (!bio->bi_size)
+ goto dump_bio;
+ submit_bio(WRITE, bio);
++ nr_underway++;
+ bio = NULL;
+- } else {
+- set_page_writeback(page);
++ } else
+ inc_io(page);
+- }
+ xlen = (PAGE_CACHE_SIZE - offset) >> inode->i_blkbits;
+ pblock = metapage_get_blocks(inode, lblock, &xlen);
+ if (!pblock) {
+@@ -427,7 +428,7 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
+ continue;
+ }
+ set_bit(META_io, &mp->flag);
+- len = min(xlen, (uint) JFS_SBI(inode->i_sb)->nbperpage);
++ len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
+
+ bio = bio_alloc(GFP_NOFS, 1);
+ bio->bi_bdev = inode->i_sb->s_bdev;
+@@ -449,12 +450,16 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
+ goto dump_bio;
+
+ submit_bio(WRITE, bio);
++ nr_underway++;
+ }
+ if (redirty)
+ redirty_page_for_writepage(wbc, page);
+
+ unlock_page(page);
+
++ if (nr_underway == 0)
++ end_page_writeback(page);
+
-+static void mb_set_bits(spinlock_t *lock, void *bm, int cur, int len)
+ return 0;
+ add_failed:
+ /* We should never reach here, since we're only adding one vec */
+@@ -475,13 +480,13 @@ static int metapage_readpage(struct file *fp, struct page *page)
+ {
+ struct inode *inode = page->mapping->host;
+ struct bio *bio = NULL;
+- unsigned int block_offset;
+- unsigned int blocks_per_page = PAGE_CACHE_SIZE >> inode->i_blkbits;
++ int block_offset;
++ int blocks_per_page = PAGE_CACHE_SIZE >> inode->i_blkbits;
+ sector_t page_start; /* address of page in fs blocks */
+ sector_t pblock;
+- unsigned int xlen;
++ int xlen;
+ unsigned int len;
+- unsigned int offset;
++ int offset;
+
+ BUG_ON(!PageLocked(page));
+ page_start = (sector_t)page->index <<
+@@ -530,7 +535,7 @@ static int metapage_releasepage(struct page *page, gfp_t gfp_mask)
+ {
+ struct metapage *mp;
+ int ret = 1;
+- unsigned int offset;
++ int offset;
+
+ for (offset = 0; offset < PAGE_CACHE_SIZE; offset += PSIZE) {
+ mp = page_to_mp(page, offset);
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index 644429a..7b698f2 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -147,7 +147,7 @@ int jfs_mount(struct super_block *sb)
+ */
+ if ((sbi->mntflag & JFS_BAD_SAIT) == 0) {
+ ipaimap2 = diReadSpecial(sb, AGGREGATE_I, 1);
+- if (ipaimap2 == 0) {
++ if (!ipaimap2) {
+ jfs_err("jfs_mount: Faild to read AGGREGATE_I");
+ rc = -EIO;
+ goto errout35;
+diff --git a/fs/jfs/jfs_umount.c b/fs/jfs/jfs_umount.c
+index 7971f37..adcf92d 100644
+--- a/fs/jfs/jfs_umount.c
++++ b/fs/jfs/jfs_umount.c
+@@ -68,7 +68,7 @@ int jfs_umount(struct super_block *sb)
+ /*
+ * Wait for outstanding transactions to be written to log:
+ */
+- jfs_flush_journal(log, 2);
++ jfs_flush_journal(log, 1);
+
+ /*
+ * close fileset inode allocation map (aka fileset inode)
+@@ -146,7 +146,7 @@ int jfs_umount_rw(struct super_block *sb)
+ *
+ * remove file system from log active file system list.
+ */
+- jfs_flush_journal(log, 2);
++ jfs_flush_journal(log, 1);
+
+ /*
+ * Make sure all metadata makes it to disk
+diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
+index 4e0a849..f8718de 100644
+--- a/fs/jfs/namei.c
++++ b/fs/jfs/namei.c
+@@ -1103,8 +1103,8 @@ static int jfs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ * Make sure dest inode number (if any) is what we think it is
+ */
+ rc = dtSearch(new_dir, &new_dname, &ino, &btstack, JFS_LOOKUP);
+- if (rc == 0) {
+- if ((new_ip == 0) || (ino != new_ip->i_ino)) {
++ if (!rc) {
++ if ((!new_ip) || (ino != new_ip->i_ino)) {
+ rc = -ESTALE;
+ goto out3;
+ }
+diff --git a/fs/jfs/resize.c b/fs/jfs/resize.c
+index 71984ee..7f24a0b 100644
+--- a/fs/jfs/resize.c
++++ b/fs/jfs/resize.c
+@@ -172,7 +172,7 @@ int jfs_extendfs(struct super_block *sb, s64 newLVSize, int newLogSize)
+ */
+ t64 = ((newLVSize - newLogSize + BPERDMAP - 1) >> L2BPERDMAP)
+ << L2BPERDMAP;
+- t32 = ((t64 + (BITSPERPAGE - 1)) / BITSPERPAGE) + 1 + 50;
++ t32 = DIV_ROUND_UP(t64, BITSPERPAGE) + 1 + 50;
+ newFSCKSize = t32 << sbi->l2nbperpage;
+ newFSCKAddress = newLogAddress - newFSCKSize;
+
+diff --git a/fs/jfs/super.c b/fs/jfs/super.c
+index 314bb4f..70a1400 100644
+--- a/fs/jfs/super.c
++++ b/fs/jfs/super.c
+@@ -598,6 +598,12 @@ static int jfs_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ seq_printf(seq, ",umask=%03o", sbi->umask);
+ if (sbi->flag & JFS_NOINTEGRITY)
+ seq_puts(seq, ",nointegrity");
++ if (sbi->nls_tab)
++ seq_printf(seq, ",iocharset=%s", sbi->nls_tab->charset);
++ if (sbi->flag & JFS_ERR_CONTINUE)
++ seq_printf(seq, ",errors=continue");
++ if (sbi->flag & JFS_ERR_PANIC)
++ seq_printf(seq, ",errors=panic");
+
+ #ifdef CONFIG_QUOTA
+ if (sbi->flag & JFS_USRQUOTA)
+diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
+index d070b18..0b45fd3 100644
+--- a/fs/lockd/clntlock.c
++++ b/fs/lockd/clntlock.c
+@@ -41,6 +41,48 @@ struct nlm_wait {
+
+ static LIST_HEAD(nlm_blocked);
+
++/**
++ * nlmclnt_init - Set up per-NFS mount point lockd data structures
++ * @nlm_init: pointer to arguments structure
++ *
++ * Returns pointer to an appropriate nlm_host struct,
++ * or an ERR_PTR value.
++ */
++struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
+{
-+ __u32 *addr;
++ struct nlm_host *host;
++ u32 nlm_version = (nlm_init->nfs_version == 2) ? 1 : 4;
++ int status;
+
-+ len = cur + len;
-+ while (cur < len) {
-+ if ((cur & 31) == 0 && (len - cur) >= 32) {
-+ /* fast path: set whole word at once */
-+ addr = bm + (cur >> 3);
-+ *addr = 0xffffffff;
-+ cur += 32;
-+ continue;
-+ }
-+ mb_set_bit_atomic(lock, cur, bm);
-+ cur++;
++ status = lockd_up(nlm_init->protocol);
++ if (status < 0)
++ return ERR_PTR(status);
++
++ host = nlmclnt_lookup_host((struct sockaddr_in *)nlm_init->address,
++ nlm_init->protocol, nlm_version,
++ nlm_init->hostname,
++ strlen(nlm_init->hostname));
++ if (host == NULL) {
++ lockd_down();
++ return ERR_PTR(-ENOLCK);
+ }
++
++ return host;
+}
++EXPORT_SYMBOL_GPL(nlmclnt_init);
+
-+static int mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
-+ int first, int count)
++/**
++ * nlmclnt_done - Release resources allocated by nlmclnt_init()
++ * @host: nlm_host structure reserved by nlmclnt_init()
++ *
++ */
++void nlmclnt_done(struct nlm_host *host)
+{
-+ int block = 0;
-+ int max = 0;
-+ int order;
-+ void *buddy;
-+ void *buddy2;
-+ struct super_block *sb = e4b->bd_sb;
++ nlm_release_host(host);
++ lockd_down();
++}
++EXPORT_SYMBOL_GPL(nlmclnt_done);
+
-+ BUG_ON(first + count > (sb->s_blocksize << 3));
-+ BUG_ON(!ext4_is_group_locked(sb, e4b->bd_group));
-+ mb_check_buddy(e4b);
-+ mb_free_blocks_double(inode, e4b, first, count);
+ /*
+ * Queue up a lock for blocking so that the GRANTED request can see it
+ */
+diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
+index a10343b..b6b74a6 100644
+--- a/fs/lockd/clntproc.c
++++ b/fs/lockd/clntproc.c
+@@ -145,34 +145,21 @@ static void nlmclnt_release_lockargs(struct nlm_rqst *req)
+ BUG_ON(req->a_args.lock.fl.fl_ops != NULL);
+ }
+
+-/*
+- * This is the main entry point for the NLM client.
++/**
++ * nlmclnt_proc - Perform a single client-side lock request
++ * @host: address of a valid nlm_host context representing the NLM server
++ * @cmd: fcntl-style file lock operation to perform
++ * @fl: address of arguments for the lock operation
++ *
+ */
+-int
+-nlmclnt_proc(struct inode *inode, int cmd, struct file_lock *fl)
++int nlmclnt_proc(struct nlm_host *host, int cmd, struct file_lock *fl)
+ {
+- struct rpc_clnt *client = NFS_CLIENT(inode);
+- struct sockaddr_in addr;
+- struct nfs_server *nfssrv = NFS_SERVER(inode);
+- struct nlm_host *host;
+ struct nlm_rqst *call;
+ sigset_t oldset;
+ unsigned long flags;
+- int status, vers;
+-
+- vers = (NFS_PROTO(inode)->version == 3) ? 4 : 1;
+- if (NFS_PROTO(inode)->version > 3) {
+- printk(KERN_NOTICE "NFSv4 file locking not implemented!\n");
+- return -ENOLCK;
+- }
+-
+- rpc_peeraddr(client, (struct sockaddr *) &addr, sizeof(addr));
+- host = nlmclnt_lookup_host(&addr, client->cl_xprt->prot, vers,
+- nfssrv->nfs_client->cl_hostname,
+- strlen(nfssrv->nfs_client->cl_hostname));
+- if (host == NULL)
+- return -ENOLCK;
++ int status;
+
++ nlm_get_host(host);
+ call = nlm_alloc_call(host);
+ if (call == NULL)
+ return -ENOMEM;
+@@ -219,7 +206,7 @@ nlmclnt_proc(struct inode *inode, int cmd, struct file_lock *fl)
+ dprintk("lockd: clnt proc returns %d\n", status);
+ return status;
+ }
+-EXPORT_SYMBOL(nlmclnt_proc);
++EXPORT_SYMBOL_GPL(nlmclnt_proc);
+
+ /*
+ * Allocate an NLM RPC call struct
+@@ -257,7 +244,7 @@ void nlm_release_call(struct nlm_rqst *call)
+
+ static void nlmclnt_rpc_release(void *data)
+ {
+- return nlm_release_call(data);
++ nlm_release_call(data);
+ }
+
+ static int nlm_wait_on_grace(wait_queue_head_t *queue)
+diff --git a/fs/lockd/xdr.c b/fs/lockd/xdr.c
+index 633653b..3e459e1 100644
+--- a/fs/lockd/xdr.c
++++ b/fs/lockd/xdr.c
+@@ -612,8 +612,7 @@ const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie)
+ * called with BKL held.
+ */
+ static char buf[2*NLM_MAXCOOKIELEN+1];
+- int i;
+- int len = sizeof(buf);
++ unsigned int i, len = sizeof(buf);
+ char *p = buf;
+
+ len--; /* allow for trailing \0 */
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 0608388..61bf376 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -41,8 +41,8 @@ static struct kmem_cache *mnt_cache __read_mostly;
+ static struct rw_semaphore namespace_sem;
+
+ /* /sys/fs */
+-decl_subsys(fs, NULL, NULL);
+-EXPORT_SYMBOL_GPL(fs_subsys);
++struct kobject *fs_kobj;
++EXPORT_SYMBOL_GPL(fs_kobj);
+
+ static inline unsigned long hash(struct vfsmount *mnt, struct dentry *dentry)
+ {
+@@ -1861,10 +1861,9 @@ void __init mnt_init(void)
+ if (err)
+ printk(KERN_WARNING "%s: sysfs_init error: %d\n",
+ __FUNCTION__, err);
+- err = subsystem_register(&fs_subsys);
+- if (err)
+- printk(KERN_WARNING "%s: subsystem_register error: %d\n",
+- __FUNCTION__, err);
++ fs_kobj = kobject_create_and_add("fs", NULL);
++ if (!fs_kobj)
++ printk(KERN_WARNING "%s: kobj create error\n", __FUNCTION__);
+ init_rootfs();
+ init_mount_tree();
+ }
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index a796be5..9b6bbf1 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -73,8 +73,6 @@ static void nfs_callback_svc(struct svc_rqst *rqstp)
+ complete(&nfs_callback_info.started);
+
+ for(;;) {
+- char buf[RPC_MAX_ADDRBUFLEN];
+-
+ if (signalled()) {
+ if (nfs_callback_info.users == 0)
+ break;
+@@ -92,8 +90,6 @@ static void nfs_callback_svc(struct svc_rqst *rqstp)
+ __FUNCTION__, -err);
+ break;
+ }
+- dprintk("%s: request from %s\n", __FUNCTION__,
+- svc_print_addr(rqstp, buf, sizeof(buf)));
+ svc_process(rqstp);
+ }
+
+@@ -168,12 +164,11 @@ void nfs_callback_down(void)
+
+ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
+ {
+- struct sockaddr_in *addr = svc_addr_in(rqstp);
+ struct nfs_client *clp;
+ char buf[RPC_MAX_ADDRBUFLEN];
+
+ /* Don't talk to strangers */
+- clp = nfs_find_client(addr, 4);
++ clp = nfs_find_client(svc_addr(rqstp), 4);
+ if (clp == NULL)
+ return SVC_DROP;
+
+diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
+index c2bb14e..bb25d21 100644
+--- a/fs/nfs/callback.h
++++ b/fs/nfs/callback.h
+@@ -38,7 +38,7 @@ struct cb_compound_hdr_res {
+ };
+
+ struct cb_getattrargs {
+- struct sockaddr_in *addr;
++ struct sockaddr *addr;
+ struct nfs_fh fh;
+ uint32_t bitmap[2];
+ };
+@@ -53,7 +53,7 @@ struct cb_getattrres {
+ };
+
+ struct cb_recallargs {
+- struct sockaddr_in *addr;
++ struct sockaddr *addr;
+ struct nfs_fh fh;
+ nfs4_stateid stateid;
+ uint32_t truncate;
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 72e55d8..15f7785 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -12,7 +12,9 @@
+ #include "delegation.h"
+ #include "internal.h"
+
++#ifdef NFS_DEBUG
+ #define NFSDBG_FACILITY NFSDBG_CALLBACK
++#endif
+
+ __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res)
+ {
+@@ -20,12 +22,16 @@ __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *
+ struct nfs_delegation *delegation;
+ struct nfs_inode *nfsi;
+ struct inode *inode;
+-
+
-+ e4b->bd_info->bb_free += count;
-+ if (first < e4b->bd_info->bb_first_free)
-+ e4b->bd_info->bb_first_free = first;
+ res->bitmap[0] = res->bitmap[1] = 0;
+ res->status = htonl(NFS4ERR_BADHANDLE);
+ clp = nfs_find_client(args->addr, 4);
+ if (clp == NULL)
+ goto out;
+
-+ /* let's maintain fragments counter */
-+ if (first != 0)
-+ block = !mb_test_bit(first - 1, EXT4_MB_BITMAP(e4b));
-+ if (first + count < EXT4_SB(sb)->s_mb_maxs[0])
-+ max = !mb_test_bit(first + count, EXT4_MB_BITMAP(e4b));
-+ if (block && max)
-+ e4b->bd_info->bb_fragments--;
-+ else if (!block && !max)
-+ e4b->bd_info->bb_fragments++;
++ dprintk("NFS: GETATTR callback request from %s\n",
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
+
-+ /* let's maintain buddy itself */
-+ while (count-- > 0) {
-+ block = first++;
-+ order = 0;
+ inode = nfs_delegation_find_inode(clp, &args->fh);
+ if (inode == NULL)
+ goto out_putclient;
+@@ -65,23 +71,32 @@ __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
+ clp = nfs_find_client(args->addr, 4);
+ if (clp == NULL)
+ goto out;
+- inode = nfs_delegation_find_inode(clp, &args->fh);
+- if (inode == NULL)
+- goto out_putclient;
+- /* Set up a helper thread to actually return the delegation */
+- switch(nfs_async_inode_return_delegation(inode, &args->stateid)) {
+- case 0:
+- res = 0;
+- break;
+- case -ENOENT:
+- res = htonl(NFS4ERR_BAD_STATEID);
+- break;
+- default:
+- res = htonl(NFS4ERR_RESOURCE);
+- }
+- iput(inode);
+-out_putclient:
+- nfs_put_client(clp);
+
-+ if (!mb_test_bit(block, EXT4_MB_BITMAP(e4b))) {
-+ ext4_fsblk_t blocknr;
-+ blocknr = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb);
-+ blocknr += block;
-+ blocknr +=
-+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
++ dprintk("NFS: RECALL callback request from %s\n",
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
+
-+ ext4_error(sb, __FUNCTION__, "double-free of inode"
-+ " %lu's block %llu(bit %u in group %lu)\n",
-+ inode ? inode->i_ino : 0, blocknr, block,
-+ e4b->bd_group);
++ do {
++ struct nfs_client *prev = clp;
++
++ inode = nfs_delegation_find_inode(clp, &args->fh);
++ if (inode != NULL) {
++ /* Set up a helper thread to actually return the delegation */
++ switch(nfs_async_inode_return_delegation(inode, &args->stateid)) {
++ case 0:
++ res = 0;
++ break;
++ case -ENOENT:
++ if (res != 0)
++ res = htonl(NFS4ERR_BAD_STATEID);
++ break;
++ default:
++ res = htonl(NFS4ERR_RESOURCE);
++ }
++ iput(inode);
+ }
-+ mb_clear_bit(block, EXT4_MB_BITMAP(e4b));
-+ e4b->bd_info->bb_counters[order]++;
++ clp = nfs_find_client_next(prev);
++ nfs_put_client(prev);
++ } while (clp != NULL);
+ out:
+ dprintk("%s: exit with status = %d\n", __FUNCTION__, ntohl(res));
+ return res;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index 058ade7..c63eb72 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -139,7 +139,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
+ if (unlikely(status != 0))
+ return status;
+ /* We do not like overly long tags! */
+- if (hdr->taglen > CB_OP_TAGLEN_MAXSZ-12 || hdr->taglen < 0) {
++ if (hdr->taglen > CB_OP_TAGLEN_MAXSZ - 12) {
+ printk("NFSv4 CALLBACK %s: client sent tag of length %u\n",
+ __FUNCTION__, hdr->taglen);
+ return htonl(NFS4ERR_RESOURCE);
+@@ -176,7 +176,7 @@ static __be32 decode_getattr_args(struct svc_rqst *rqstp, struct xdr_stream *xdr
+ status = decode_fh(xdr, &args->fh);
+ if (unlikely(status != 0))
+ goto out;
+- args->addr = svc_addr_in(rqstp);
++ args->addr = svc_addr(rqstp);
+ status = decode_bitmap(xdr, args->bitmap);
+ out:
+ dprintk("%s: exit with status = %d\n", __FUNCTION__, ntohl(status));
+@@ -188,7 +188,7 @@ static __be32 decode_recall_args(struct svc_rqst *rqstp, struct xdr_stream *xdr,
+ __be32 *p;
+ __be32 status;
+
+- args->addr = svc_addr_in(rqstp);
++ args->addr = svc_addr(rqstp);
+ status = decode_stateid(xdr, &args->stateid);
+ if (unlikely(status != 0))
+ goto out;
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index a6f6254..685c43f 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -34,6 +34,8 @@
+ #include <linux/nfs_idmap.h>
+ #include <linux/vfs.h>
+ #include <linux/inet.h>
++#include <linux/in6.h>
++#include <net/ipv6.h>
+ #include <linux/nfs_xdr.h>
+
+ #include <asm/system.h>
+@@ -93,22 +95,30 @@ struct rpc_program nfsacl_program = {
+ };
+ #endif /* CONFIG_NFS_V3_ACL */
+
++struct nfs_client_initdata {
++ const char *hostname;
++ const struct sockaddr *addr;
++ size_t addrlen;
++ const struct nfs_rpc_ops *rpc_ops;
++ int proto;
++};
+
-+ /* start of the buddy */
-+ buddy = mb_find_buddy(e4b, order, &max);
+ /*
+ * Allocate a shared client record
+ *
+ * Since these are allocated/deallocated very rarely, we don't
+ * bother putting them in a slab cache...
+ */
+-static struct nfs_client *nfs_alloc_client(const char *hostname,
+- const struct sockaddr_in *addr,
+- int nfsversion)
++static struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
+ {
+ struct nfs_client *clp;
+
+ if ((clp = kzalloc(sizeof(*clp), GFP_KERNEL)) == NULL)
+ goto error_0;
+
+- if (nfsversion == 4) {
++ clp->rpc_ops = cl_init->rpc_ops;
+
-+ do {
-+ block &= ~1UL;
-+ if (mb_test_bit(block, buddy) ||
-+ mb_test_bit(block + 1, buddy))
-+ break;
++ if (cl_init->rpc_ops->version == 4) {
+ if (nfs_callback_up() < 0)
+ goto error_2;
+ __set_bit(NFS_CS_CALLBACK, &clp->cl_res_state);
+@@ -117,11 +127,11 @@ static struct nfs_client *nfs_alloc_client(const char *hostname,
+ atomic_set(&clp->cl_count, 1);
+ clp->cl_cons_state = NFS_CS_INITING;
+
+- clp->cl_nfsversion = nfsversion;
+- memcpy(&clp->cl_addr, addr, sizeof(clp->cl_addr));
++ memcpy(&clp->cl_addr, cl_init->addr, cl_init->addrlen);
++ clp->cl_addrlen = cl_init->addrlen;
+
+- if (hostname) {
+- clp->cl_hostname = kstrdup(hostname, GFP_KERNEL);
++ if (cl_init->hostname) {
++ clp->cl_hostname = kstrdup(cl_init->hostname, GFP_KERNEL);
+ if (!clp->cl_hostname)
+ goto error_3;
+ }
+@@ -129,6 +139,8 @@ static struct nfs_client *nfs_alloc_client(const char *hostname,
+ INIT_LIST_HEAD(&clp->cl_superblocks);
+ clp->cl_rpcclient = ERR_PTR(-EINVAL);
+
++ clp->cl_proto = cl_init->proto;
+
-+ /* both the buddies are free, try to coalesce them */
-+ buddy2 = mb_find_buddy(e4b, order + 1, &max);
+ #ifdef CONFIG_NFS_V4
+ init_rwsem(&clp->cl_sem);
+ INIT_LIST_HEAD(&clp->cl_delegations);
+@@ -166,7 +178,7 @@ static void nfs4_shutdown_client(struct nfs_client *clp)
+ */
+ static void nfs_free_client(struct nfs_client *clp)
+ {
+- dprintk("--> nfs_free_client(%d)\n", clp->cl_nfsversion);
++ dprintk("--> nfs_free_client(%u)\n", clp->rpc_ops->version);
+
+ nfs4_shutdown_client(clp);
+
+@@ -203,76 +215,148 @@ void nfs_put_client(struct nfs_client *clp)
+ }
+ }
+
++static int nfs_sockaddr_match_ipaddr4(const struct sockaddr_in *sa1,
++ const struct sockaddr_in *sa2)
++{
++ return sa1->sin_addr.s_addr == sa2->sin_addr.s_addr;
++}
+
-+ if (!buddy2)
-+ break;
++static int nfs_sockaddr_match_ipaddr6(const struct sockaddr_in6 *sa1,
++ const struct sockaddr_in6 *sa2)
++{
++ return ipv6_addr_equal(&sa1->sin6_addr, &sa2->sin6_addr);
++}
+
-+ if (order > 0) {
-+ /* for special purposes, we don't set
-+ * free bits in bitmap */
-+ mb_set_bit(block, buddy);
-+ mb_set_bit(block + 1, buddy);
-+ }
-+ e4b->bd_info->bb_counters[order]--;
-+ e4b->bd_info->bb_counters[order]--;
++static int nfs_sockaddr_match_ipaddr(const struct sockaddr *sa1,
++ const struct sockaddr *sa2)
++{
++ switch (sa1->sa_family) {
++ case AF_INET:
++ return nfs_sockaddr_match_ipaddr4((const struct sockaddr_in *)sa1,
++ (const struct sockaddr_in *)sa2);
++ case AF_INET6:
++ return nfs_sockaddr_match_ipaddr6((const struct sockaddr_in6 *)sa1,
++ (const struct sockaddr_in6 *)sa2);
++ }
++ BUG();
++}
+
-+ block = block >> 1;
-+ order++;
-+ e4b->bd_info->bb_counters[order]++;
+ /*
+- * Find a client by address
+- * - caller must hold nfs_client_lock
++ * Find a client by IP address and protocol version
++ * - returns NULL if no such client
+ */
+-static struct nfs_client *__nfs_find_client(const struct sockaddr_in *addr, int nfsversion, int match_port)
++struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
+ {
+ struct nfs_client *clp;
+
++ spin_lock(&nfs_client_lock);
+ list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
++ struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
++
+ /* Don't match clients that failed to initialise properly */
+- if (clp->cl_cons_state < 0)
++ if (clp->cl_cons_state != NFS_CS_READY)
+ continue;
+
+ /* Different NFS versions cannot share the same nfs_client */
+- if (clp->cl_nfsversion != nfsversion)
++ if (clp->rpc_ops->version != nfsversion)
+ continue;
+
+- if (memcmp(&clp->cl_addr.sin_addr, &addr->sin_addr,
+- sizeof(clp->cl_addr.sin_addr)) != 0)
++ if (addr->sa_family != clap->sa_family)
++ continue;
++ /* Match only the IP address, not the port number */
++ if (!nfs_sockaddr_match_ipaddr(addr, clap))
+ continue;
+
+- if (!match_port || clp->cl_addr.sin_port == addr->sin_port)
+- goto found;
++ atomic_inc(&clp->cl_count);
++ spin_unlock(&nfs_client_lock);
++ return clp;
+ }
+-
++ spin_unlock(&nfs_client_lock);
+ return NULL;
+-
+-found:
+- atomic_inc(&clp->cl_count);
+- return clp;
+ }
+
+ /*
+ * Find a client by IP address and protocol version
+ * - returns NULL if no such client
+ */
+-struct nfs_client *nfs_find_client(const struct sockaddr_in *addr, int nfsversion)
++struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
+ {
+- struct nfs_client *clp;
++ struct sockaddr *sap = (struct sockaddr *)&clp->cl_addr;
++ u32 nfsvers = clp->rpc_ops->version;
+
+ spin_lock(&nfs_client_lock);
+- clp = __nfs_find_client(addr, nfsversion, 0);
++ list_for_each_entry_continue(clp, &nfs_client_list, cl_share_link) {
++ struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
+
-+ mb_clear_bit(block, buddy2);
-+ buddy = buddy2;
-+ } while (1);
-+ }
-+ mb_check_buddy(e4b);
++ /* Don't match clients that failed to initialise properly */
++ if (clp->cl_cons_state != NFS_CS_READY)
++ continue;
+
-+ return 0;
++ /* Different NFS versions cannot share the same nfs_client */
++ if (clp->rpc_ops->version != nfsvers)
++ continue;
++
++ if (sap->sa_family != clap->sa_family)
++ continue;
++ /* Match only the IP address, not the port number */
++ if (!nfs_sockaddr_match_ipaddr(sap, clap))
++ continue;
++
++ atomic_inc(&clp->cl_count);
++ spin_unlock(&nfs_client_lock);
++ return clp;
++ }
+ spin_unlock(&nfs_client_lock);
+- if (clp != NULL && clp->cl_cons_state != NFS_CS_READY) {
+- nfs_put_client(clp);
+- clp = NULL;
++ return NULL;
+}
+
-+static int mb_find_extent(struct ext4_buddy *e4b, int order, int block,
-+ int needed, struct ext4_free_extent *ex)
++/*
++ * Find an nfs_client on the list that matches the initialisation data
++ * that is supplied.
++ */
++static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *data)
+{
-+ int next = block;
-+ int max;
-+ int ord;
-+ void *buddy;
++ struct nfs_client *clp;
+
-+ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
-+ BUG_ON(ex == NULL);
++ list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
++ /* Don't match clients that failed to initialise properly */
++ if (clp->cl_cons_state < 0)
++ continue;
+
-+ buddy = mb_find_buddy(e4b, order, &max);
-+ BUG_ON(buddy == NULL);
-+ BUG_ON(block >= max);
-+ if (mb_test_bit(block, buddy)) {
-+ ex->fe_len = 0;
-+ ex->fe_start = 0;
-+ ex->fe_group = 0;
-+ return 0;
-+ }
++ /* Different NFS versions cannot share the same nfs_client */
++ if (clp->rpc_ops != data->rpc_ops)
++ continue;
+
-+ /* FIXME dorp order completely ? */
-+ if (likely(order == 0)) {
-+ /* find actual order */
-+ order = mb_find_order_for_block(e4b, block);
-+ block = block >> order;
-+ }
++ if (clp->cl_proto != data->proto)
++ continue;
+
-+ ex->fe_len = 1 << order;
-+ ex->fe_start = block << order;
-+ ex->fe_group = e4b->bd_group;
++ /* Match the full socket address */
++ if (memcmp(&clp->cl_addr, data->addr, sizeof(clp->cl_addr)) != 0)
++ continue;
+
-+ /* calc difference from given start */
-+ next = next - ex->fe_start;
-+ ex->fe_len -= next;
-+ ex->fe_start += next;
++ atomic_inc(&clp->cl_count);
++ return clp;
+ }
+- return clp;
++ return NULL;
+ }
+
+ /*
+ * Look up a client by IP address and protocol version
+ * - creates a new record if one doesn't yet exist
+ */
+-static struct nfs_client *nfs_get_client(const char *hostname,
+- const struct sockaddr_in *addr,
+- int nfsversion)
++static struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ {
+ struct nfs_client *clp, *new = NULL;
+ int error;
+
+- dprintk("--> nfs_get_client(%s,"NIPQUAD_FMT":%d,%d)\n",
+- hostname ?: "", NIPQUAD(addr->sin_addr),
+- addr->sin_port, nfsversion);
++ dprintk("--> nfs_get_client(%s,v%u)\n",
++ cl_init->hostname ?: "", cl_init->rpc_ops->version);
+
+ /* see if the client already exists */
+ do {
+ spin_lock(&nfs_client_lock);
+
+- clp = __nfs_find_client(addr, nfsversion, 1);
++ clp = nfs_match_client(cl_init);
+ if (clp)
+ goto found_client;
+ if (new)
+@@ -280,7 +364,7 @@ static struct nfs_client *nfs_get_client(const char *hostname,
+
+ spin_unlock(&nfs_client_lock);
+
+- new = nfs_alloc_client(hostname, addr, nfsversion);
++ new = nfs_alloc_client(cl_init);
+ } while (new);
+
+ return ERR_PTR(-ENOMEM);
+@@ -344,12 +428,16 @@ static void nfs_init_timeout_values(struct rpc_timeout *to, int proto,
+ switch (proto) {
+ case XPRT_TRANSPORT_TCP:
+ case XPRT_TRANSPORT_RDMA:
+- if (!to->to_initval)
++ if (to->to_initval == 0)
+ to->to_initval = 60 * HZ;
+ if (to->to_initval > NFS_MAX_TCP_TIMEOUT)
+ to->to_initval = NFS_MAX_TCP_TIMEOUT;
+ to->to_increment = to->to_initval;
+ to->to_maxval = to->to_initval + (to->to_increment * to->to_retries);
++ if (to->to_maxval > NFS_MAX_TCP_TIMEOUT)
++ to->to_maxval = NFS_MAX_TCP_TIMEOUT;
++ if (to->to_maxval < to->to_initval)
++ to->to_maxval = to->to_initval;
+ to->to_exponential = 0;
+ break;
+ case XPRT_TRANSPORT_UDP:
+@@ -367,19 +455,17 @@ static void nfs_init_timeout_values(struct rpc_timeout *to, int proto,
+ /*
+ * Create an RPC client handle
+ */
+-static int nfs_create_rpc_client(struct nfs_client *clp, int proto,
+- unsigned int timeo,
+- unsigned int retrans,
+- rpc_authflavor_t flavor,
+- int flags)
++static int nfs_create_rpc_client(struct nfs_client *clp,
++ const struct rpc_timeout *timeparms,
++ rpc_authflavor_t flavor,
++ int flags)
+ {
+- struct rpc_timeout timeparms;
+ struct rpc_clnt *clnt = NULL;
+ struct rpc_create_args args = {
+- .protocol = proto,
++ .protocol = clp->cl_proto,
+ .address = (struct sockaddr *)&clp->cl_addr,
+- .addrsize = sizeof(clp->cl_addr),
+- .timeout = &timeparms,
++ .addrsize = clp->cl_addrlen,
++ .timeout = timeparms,
+ .servername = clp->cl_hostname,
+ .program = &nfs_program,
+ .version = clp->rpc_ops->version,
+@@ -390,10 +476,6 @@ static int nfs_create_rpc_client(struct nfs_client *clp, int proto,
+ if (!IS_ERR(clp->cl_rpcclient))
+ return 0;
+
+- nfs_init_timeout_values(&timeparms, proto, timeo, retrans);
+- clp->retrans_timeo = timeparms.to_initval;
+- clp->retrans_count = timeparms.to_retries;
+-
+ clnt = rpc_create(&args);
+ if (IS_ERR(clnt)) {
+ dprintk("%s: cannot create RPC client. Error = %ld\n",
+@@ -411,7 +493,7 @@ static int nfs_create_rpc_client(struct nfs_client *clp, int proto,
+ static void nfs_destroy_server(struct nfs_server *server)
+ {
+ if (!(server->flags & NFS_MOUNT_NONLM))
+- lockd_down(); /* release rpc.lockd */
++ nlmclnt_done(server->nlm_host);
+ }
+
+ /*
+@@ -419,20 +501,29 @@ static void nfs_destroy_server(struct nfs_server *server)
+ */
+ static int nfs_start_lockd(struct nfs_server *server)
+ {
+- int error = 0;
++ struct nlm_host *host;
++ struct nfs_client *clp = server->nfs_client;
++ struct nlmclnt_initdata nlm_init = {
++ .hostname = clp->cl_hostname,
++ .address = (struct sockaddr *)&clp->cl_addr,
++ .addrlen = clp->cl_addrlen,
++ .protocol = server->flags & NFS_MOUNT_TCP ?
++ IPPROTO_TCP : IPPROTO_UDP,
++ .nfs_version = clp->rpc_ops->version,
++ };
+
+- if (server->nfs_client->cl_nfsversion > 3)
+- goto out;
++ if (nlm_init.nfs_version > 3)
++ return 0;
+ if (server->flags & NFS_MOUNT_NONLM)
+- goto out;
+- error = lockd_up((server->flags & NFS_MOUNT_TCP) ?
+- IPPROTO_TCP : IPPROTO_UDP);
+- if (error < 0)
+- server->flags |= NFS_MOUNT_NONLM;
+- else
+- server->destroy = nfs_destroy_server;
+-out:
+- return error;
++ return 0;
+
-+ while (needed > ex->fe_len &&
-+ (buddy = mb_find_buddy(e4b, order, &max))) {
++ host = nlmclnt_init(&nlm_init);
++ if (IS_ERR(host))
++ return PTR_ERR(host);
+
-+ if (block + 1 >= max)
-+ break;
++ server->nlm_host = host;
++ server->destroy = nfs_destroy_server;
++ return 0;
+ }
+
+ /*
+@@ -441,7 +532,7 @@ out:
+ #ifdef CONFIG_NFS_V3_ACL
+ static void nfs_init_server_aclclient(struct nfs_server *server)
+ {
+- if (server->nfs_client->cl_nfsversion != 3)
++ if (server->nfs_client->rpc_ops->version != 3)
+ goto out_noacl;
+ if (server->flags & NFS_MOUNT_NOACL)
+ goto out_noacl;
+@@ -468,7 +559,9 @@ static inline void nfs_init_server_aclclient(struct nfs_server *server)
+ /*
+ * Create a general RPC client
+ */
+-static int nfs_init_server_rpcclient(struct nfs_server *server, rpc_authflavor_t pseudoflavour)
++static int nfs_init_server_rpcclient(struct nfs_server *server,
++ const struct rpc_timeout *timeo,
++ rpc_authflavor_t pseudoflavour)
+ {
+ struct nfs_client *clp = server->nfs_client;
+
+@@ -478,6 +571,11 @@ static int nfs_init_server_rpcclient(struct nfs_server *server, rpc_authflavor_t
+ return PTR_ERR(server->client);
+ }
+
++ memcpy(&server->client->cl_timeout_default,
++ timeo,
++ sizeof(server->client->cl_timeout_default));
++ server->client->cl_timeout = &server->client->cl_timeout_default;
+
-+ next = (block + 1) * (1 << order);
-+ if (mb_test_bit(next, EXT4_MB_BITMAP(e4b)))
-+ break;
+ if (pseudoflavour != clp->cl_rpcclient->cl_auth->au_flavor) {
+ struct rpc_auth *auth;
+
+@@ -502,6 +600,7 @@ static int nfs_init_server_rpcclient(struct nfs_server *server, rpc_authflavor_t
+ * Initialise an NFS2 or NFS3 client
+ */
+ static int nfs_init_client(struct nfs_client *clp,
++ const struct rpc_timeout *timeparms,
+ const struct nfs_parsed_mount_data *data)
+ {
+ int error;
+@@ -512,18 +611,11 @@ static int nfs_init_client(struct nfs_client *clp,
+ return 0;
+ }
+
+- /* Check NFS protocol revision and initialize RPC op vector */
+- clp->rpc_ops = &nfs_v2_clientops;
+-#ifdef CONFIG_NFS_V3
+- if (clp->cl_nfsversion == 3)
+- clp->rpc_ops = &nfs_v3_clientops;
+-#endif
+ /*
+ * Create a client RPC handle for doing FSSTAT with UNIX auth only
+ * - RFC 2623, sec 2.3.2
+ */
+- error = nfs_create_rpc_client(clp, data->nfs_server.protocol,
+- data->timeo, data->retrans, RPC_AUTH_UNIX, 0);
++ error = nfs_create_rpc_client(clp, timeparms, RPC_AUTH_UNIX, 0);
+ if (error < 0)
+ goto error;
+ nfs_mark_client_ready(clp, NFS_CS_READY);
+@@ -541,25 +633,34 @@ error:
+ static int nfs_init_server(struct nfs_server *server,
+ const struct nfs_parsed_mount_data *data)
+ {
++ struct nfs_client_initdata cl_init = {
++ .hostname = data->nfs_server.hostname,
++ .addr = (const struct sockaddr *)&data->nfs_server.address,
++ .addrlen = data->nfs_server.addrlen,
++ .rpc_ops = &nfs_v2_clientops,
++ .proto = data->nfs_server.protocol,
++ };
++ struct rpc_timeout timeparms;
+ struct nfs_client *clp;
+- int error, nfsvers = 2;
++ int error;
+
+ dprintk("--> nfs_init_server()\n");
+
+ #ifdef CONFIG_NFS_V3
+ if (data->flags & NFS_MOUNT_VER3)
+- nfsvers = 3;
++ cl_init.rpc_ops = &nfs_v3_clientops;
+ #endif
+
+ /* Allocate or find a client reference we can use */
+- clp = nfs_get_client(data->nfs_server.hostname,
+- &data->nfs_server.address, nfsvers);
++ clp = nfs_get_client(&cl_init);
+ if (IS_ERR(clp)) {
+ dprintk("<-- nfs_init_server() = error %ld\n", PTR_ERR(clp));
+ return PTR_ERR(clp);
+ }
+
+- error = nfs_init_client(clp, data);
++ nfs_init_timeout_values(&timeparms, data->nfs_server.protocol,
++ data->timeo, data->retrans);
++ error = nfs_init_client(clp, &timeparms, data);
+ if (error < 0)
+ goto error;
+
+@@ -583,7 +684,7 @@ static int nfs_init_server(struct nfs_server *server,
+ if (error < 0)
+ goto error;
+
+- error = nfs_init_server_rpcclient(server, data->auth_flavors[0]);
++ error = nfs_init_server_rpcclient(server, &timeparms, data->auth_flavors[0]);
+ if (error < 0)
+ goto error;
+
+@@ -729,6 +830,9 @@ static struct nfs_server *nfs_alloc_server(void)
+ INIT_LIST_HEAD(&server->client_link);
+ INIT_LIST_HEAD(&server->master_link);
+
++ init_waitqueue_head(&server->active_wq);
++ atomic_set(&server->active, 0);
++
+ server->io_stats = nfs_alloc_iostats();
+ if (!server->io_stats) {
+ kfree(server);
+@@ -840,7 +944,7 @@ error:
+ * Initialise an NFS4 client record
+ */
+ static int nfs4_init_client(struct nfs_client *clp,
+- int proto, int timeo, int retrans,
++ const struct rpc_timeout *timeparms,
+ const char *ip_addr,
+ rpc_authflavor_t authflavour)
+ {
+@@ -855,7 +959,7 @@ static int nfs4_init_client(struct nfs_client *clp,
+ /* Check NFS protocol revision and initialize RPC op vector */
+ clp->rpc_ops = &nfs_v4_clientops;
+
+- error = nfs_create_rpc_client(clp, proto, timeo, retrans, authflavour,
++ error = nfs_create_rpc_client(clp, timeparms, authflavour,
+ RPC_CLNT_CREATE_DISCRTRY);
+ if (error < 0)
+ goto error;
+@@ -882,23 +986,32 @@ error:
+ * Set up an NFS4 client
+ */
+ static int nfs4_set_client(struct nfs_server *server,
+- const char *hostname, const struct sockaddr_in *addr,
++ const char *hostname,
++ const struct sockaddr *addr,
++ const size_t addrlen,
+ const char *ip_addr,
+ rpc_authflavor_t authflavour,
+- int proto, int timeo, int retrans)
++ int proto, const struct rpc_timeout *timeparms)
+ {
++ struct nfs_client_initdata cl_init = {
++ .hostname = hostname,
++ .addr = addr,
++ .addrlen = addrlen,
++ .rpc_ops = &nfs_v4_clientops,
++ .proto = proto,
++ };
+ struct nfs_client *clp;
+ int error;
+
+ dprintk("--> nfs4_set_client()\n");
+
+ /* Allocate or find a client reference we can use */
+- clp = nfs_get_client(hostname, addr, 4);
++ clp = nfs_get_client(&cl_init);
+ if (IS_ERR(clp)) {
+ error = PTR_ERR(clp);
+ goto error;
+ }
+- error = nfs4_init_client(clp, proto, timeo, retrans, ip_addr, authflavour);
++ error = nfs4_init_client(clp, timeparms, ip_addr, authflavour);
+ if (error < 0)
+ goto error_put;
+
+@@ -919,10 +1032,26 @@ error:
+ static int nfs4_init_server(struct nfs_server *server,
+ const struct nfs_parsed_mount_data *data)
+ {
++ struct rpc_timeout timeparms;
+ int error;
+
+ dprintk("--> nfs4_init_server()\n");
+
++ nfs_init_timeout_values(&timeparms, data->nfs_server.protocol,
++ data->timeo, data->retrans);
+
-+ ord = mb_find_order_for_block(e4b, next);
++ /* Get a client record */
++ error = nfs4_set_client(server,
++ data->nfs_server.hostname,
++ (const struct sockaddr *)&data->nfs_server.address,
++ data->nfs_server.addrlen,
++ data->client_address,
++ data->auth_flavors[0],
++ data->nfs_server.protocol,
++ &timeparms);
++ if (error < 0)
++ goto error;
+
-+ order = ord;
-+ block = next >> order;
-+ ex->fe_len += 1 << order;
-+ }
+ /* Initialise the client representation from the mount data */
+ server->flags = data->flags & NFS_MOUNT_FLAGMASK;
+ server->caps |= NFS_CAP_ATOMIC_OPEN;
+@@ -937,8 +1066,9 @@ static int nfs4_init_server(struct nfs_server *server,
+ server->acdirmin = data->acdirmin * HZ;
+ server->acdirmax = data->acdirmax * HZ;
+
+- error = nfs_init_server_rpcclient(server, data->auth_flavors[0]);
++ error = nfs_init_server_rpcclient(server, &timeparms, data->auth_flavors[0]);
+
++error:
+ /* Done */
+ dprintk("<-- nfs4_init_server() = %d\n", error);
+ return error;
+@@ -961,17 +1091,6 @@ struct nfs_server *nfs4_create_server(const struct nfs_parsed_mount_data *data,
+ if (!server)
+ return ERR_PTR(-ENOMEM);
+
+- /* Get a client record */
+- error = nfs4_set_client(server,
+- data->nfs_server.hostname,
+- &data->nfs_server.address,
+- data->client_address,
+- data->auth_flavors[0],
+- data->nfs_server.protocol,
+- data->timeo, data->retrans);
+- if (error < 0)
+- goto error;
+-
+ /* set up the general RPC client */
+ error = nfs4_init_server(server, data);
+ if (error < 0)
+@@ -1039,12 +1158,13 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+
+ /* Get a client representation.
+ * Note: NFSv4 always uses TCP, */
+- error = nfs4_set_client(server, data->hostname, data->addr,
+- parent_client->cl_ipaddr,
+- data->authflavor,
+- parent_server->client->cl_xprt->prot,
+- parent_client->retrans_timeo,
+- parent_client->retrans_count);
++ error = nfs4_set_client(server, data->hostname,
++ data->addr,
++ data->addrlen,
++ parent_client->cl_ipaddr,
++ data->authflavor,
++ parent_server->client->cl_xprt->prot,
++ parent_server->client->cl_timeout);
+ if (error < 0)
+ goto error;
+
+@@ -1052,7 +1172,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ nfs_server_copy_userdata(server, parent_server);
+ server->caps |= NFS_CAP_ATOMIC_OPEN;
+
+- error = nfs_init_server_rpcclient(server, data->authflavor);
++ error = nfs_init_server_rpcclient(server, parent_server->client->cl_timeout, data->authflavor);
+ if (error < 0)
+ goto error;
+
+@@ -1121,7 +1241,9 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
+
+ server->fsid = fattr->fsid;
+
+- error = nfs_init_server_rpcclient(server, source->client->cl_auth->au_flavor);
++ error = nfs_init_server_rpcclient(server,
++ source->client->cl_timeout,
++ source->client->cl_auth->au_flavor);
+ if (error < 0)
+ goto out_free_server;
+ if (!IS_ERR(source->client_acl))
+@@ -1263,10 +1385,10 @@ static int nfs_server_list_show(struct seq_file *m, void *v)
+ /* display one transport per line on subsequent lines */
+ clp = list_entry(v, struct nfs_client, cl_share_link);
+
+- seq_printf(m, "v%d %02x%02x%02x%02x %4hx %3d %s\n",
+- clp->cl_nfsversion,
+- NIPQUAD(clp->cl_addr.sin_addr),
+- ntohs(clp->cl_addr.sin_port),
++ seq_printf(m, "v%u %s %s %3d %s\n",
++ clp->rpc_ops->version,
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_ADDR),
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_PORT),
+ atomic_read(&clp->cl_count),
+ clp->cl_hostname);
+
+@@ -1342,10 +1464,10 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)
+ (unsigned long long) server->fsid.major,
+ (unsigned long long) server->fsid.minor);
+
+- seq_printf(m, "v%d %02x%02x%02x%02x %4hx %-7s %-17s\n",
+- clp->cl_nfsversion,
+- NIPQUAD(clp->cl_addr.sin_addr),
+- ntohs(clp->cl_addr.sin_port),
++ seq_printf(m, "v%u %s %s %-7s %-17s\n",
++ clp->rpc_ops->version,
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_ADDR),
++ rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_PORT),
+ dev,
+ fsid);
+
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 11833f4..b9eadd1 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -125,6 +125,32 @@ void nfs_inode_reclaim_delegation(struct inode *inode, struct rpc_cred *cred, st
+ put_rpccred(oldcred);
+ }
+
++static int nfs_do_return_delegation(struct inode *inode, struct nfs_delegation *delegation, int issync)
++{
++ int res = 0;
+
-+ BUG_ON(ex->fe_start + ex->fe_len > (1 << (e4b->bd_blkbits + 3)));
-+ return ex->fe_len;
++ res = nfs4_proc_delegreturn(inode, delegation->cred, &delegation->stateid, issync);
++ nfs_free_delegation(delegation);
++ return res;
+}
+
-+static int mb_mark_used(struct ext4_buddy *e4b, struct ext4_free_extent *ex)
++static struct nfs_delegation *nfs_detach_delegation_locked(struct nfs_inode *nfsi, const nfs4_stateid *stateid)
+{
-+ int ord;
-+ int mlen = 0;
-+ int max = 0;
-+ int cur;
-+ int start = ex->fe_start;
-+ int len = ex->fe_len;
-+ unsigned ret = 0;
-+ int len0 = len;
-+ void *buddy;
++ struct nfs_delegation *delegation = rcu_dereference(nfsi->delegation);
+
-+ BUG_ON(start + len > (e4b->bd_sb->s_blocksize << 3));
-+ BUG_ON(e4b->bd_group != ex->fe_group);
-+ BUG_ON(!ext4_is_group_locked(e4b->bd_sb, e4b->bd_group));
-+ mb_check_buddy(e4b);
-+ mb_mark_used_double(e4b, start, len);
++ if (delegation == NULL)
++ goto nomatch;
++ if (stateid != NULL && memcmp(delegation->stateid.data, stateid->data,
++ sizeof(delegation->stateid.data)) != 0)
++ goto nomatch;
++ list_del_rcu(&delegation->super_list);
++ nfsi->delegation_state = 0;
++ rcu_assign_pointer(nfsi->delegation, NULL);
++ return delegation;
++nomatch:
++ return NULL;
++}
+
-+ e4b->bd_info->bb_free -= len;
-+ if (e4b->bd_info->bb_first_free == start)
-+ e4b->bd_info->bb_first_free += len;
+ /*
+ * Set up a delegation on an inode
+ */
+@@ -133,6 +159,7 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
+ struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
+ struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_delegation *delegation;
++ struct nfs_delegation *freeme = NULL;
+ int status = 0;
+
+ delegation = kmalloc(sizeof(*delegation), GFP_KERNEL);
+@@ -147,41 +174,45 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
+ delegation->inode = inode;
+
+ spin_lock(&clp->cl_lock);
+- if (rcu_dereference(nfsi->delegation) == NULL) {
+- list_add_rcu(&delegation->super_list, &clp->cl_delegations);
+- nfsi->delegation_state = delegation->type;
+- rcu_assign_pointer(nfsi->delegation, delegation);
+- delegation = NULL;
+- } else {
++ if (rcu_dereference(nfsi->delegation) != NULL) {
+ if (memcmp(&delegation->stateid, &nfsi->delegation->stateid,
+- sizeof(delegation->stateid)) != 0 ||
+- delegation->type != nfsi->delegation->type) {
+- printk("%s: server %u.%u.%u.%u, handed out a duplicate delegation!\n",
+- __FUNCTION__, NIPQUAD(clp->cl_addr.sin_addr));
+- status = -EIO;
++ sizeof(delegation->stateid)) == 0 &&
++ delegation->type == nfsi->delegation->type) {
++ goto out;
++ }
++ /*
++ * Deal with broken servers that hand out two
++ * delegations for the same file.
++ */
++ dfprintk(FILE, "%s: server %s handed out "
++ "a duplicate delegation!\n",
++ __FUNCTION__, clp->cl_hostname);
++ if (delegation->type <= nfsi->delegation->type) {
++ freeme = delegation;
++ delegation = NULL;
++ goto out;
+ }
++ freeme = nfs_detach_delegation_locked(nfsi, NULL);
+ }
++ list_add_rcu(&delegation->super_list, &clp->cl_delegations);
++ nfsi->delegation_state = delegation->type;
++ rcu_assign_pointer(nfsi->delegation, delegation);
++ delegation = NULL;
+
+ /* Ensure we revalidate the attributes and page cache! */
+ spin_lock(&inode->i_lock);
+ nfsi->cache_validity |= NFS_INO_REVAL_FORCED;
+ spin_unlock(&inode->i_lock);
+
++out:
+ spin_unlock(&clp->cl_lock);
+ if (delegation != NULL)
+ nfs_free_delegation(delegation);
++ if (freeme != NULL)
++ nfs_do_return_delegation(inode, freeme, 0);
+ return status;
+ }
+
+-static int nfs_do_return_delegation(struct inode *inode, struct nfs_delegation *delegation)
+-{
+- int res = 0;
+-
+- res = nfs4_proc_delegreturn(inode, delegation->cred, &delegation->stateid);
+- nfs_free_delegation(delegation);
+- return res;
+-}
+-
+ /* Sync all data to disk upon delegation return */
+ static void nfs_msync_inode(struct inode *inode)
+ {
+@@ -207,24 +238,28 @@ static int __nfs_inode_return_delegation(struct inode *inode, struct nfs_delegat
+ up_read(&clp->cl_sem);
+ nfs_msync_inode(inode);
+
+- return nfs_do_return_delegation(inode, delegation);
++ return nfs_do_return_delegation(inode, delegation, 1);
+ }
+
+-static struct nfs_delegation *nfs_detach_delegation_locked(struct nfs_inode *nfsi, const nfs4_stateid *stateid)
++/*
++ * This function returns the delegation without reclaiming opens
++ * or protecting against delegation reclaims.
++ * It is therefore really only safe to be called from
++ * nfs4_clear_inode()
++ */
++void nfs_inode_return_delegation_noreclaim(struct inode *inode)
+ {
+- struct nfs_delegation *delegation = rcu_dereference(nfsi->delegation);
++ struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++ struct nfs_inode *nfsi = NFS_I(inode);
++ struct nfs_delegation *delegation;
+
+- if (delegation == NULL)
+- goto nomatch;
+- if (stateid != NULL && memcmp(delegation->stateid.data, stateid->data,
+- sizeof(delegation->stateid.data)) != 0)
+- goto nomatch;
+- list_del_rcu(&delegation->super_list);
+- nfsi->delegation_state = 0;
+- rcu_assign_pointer(nfsi->delegation, NULL);
+- return delegation;
+-nomatch:
+- return NULL;
++ if (rcu_dereference(nfsi->delegation) != NULL) {
++ spin_lock(&clp->cl_lock);
++ delegation = nfs_detach_delegation_locked(nfsi, NULL);
++ spin_unlock(&clp->cl_lock);
++ if (delegation != NULL)
++ nfs_do_return_delegation(inode, delegation, 0);
++ }
+ }
+
+ int nfs_inode_return_delegation(struct inode *inode)
+@@ -314,8 +349,9 @@ void nfs_expire_all_delegations(struct nfs_client *clp)
+ __module_get(THIS_MODULE);
+ atomic_inc(&clp->cl_count);
+ task = kthread_run(nfs_do_expire_all_delegations, clp,
+- "%u.%u.%u.%u-delegreturn",
+- NIPQUAD(clp->cl_addr.sin_addr));
++ "%s-delegreturn",
++ rpc_peeraddr2str(clp->cl_rpcclient,
++ RPC_DISPLAY_ADDR));
+ if (!IS_ERR(task))
+ return;
+ nfs_put_client(clp);
+@@ -386,7 +422,7 @@ static int recall_thread(void *data)
+ nfs_msync_inode(inode);
+
+ if (delegation != NULL)
+- nfs_do_return_delegation(inode, delegation);
++ nfs_do_return_delegation(inode, delegation, 1);
+ iput(inode);
+ module_put_and_exit(0);
+ }
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index 5874ce7..f1c5e2a 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -29,6 +29,7 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
+ void nfs_inode_reclaim_delegation(struct inode *inode, struct rpc_cred *cred, struct nfs_openres *res);
+ int nfs_inode_return_delegation(struct inode *inode);
+ int nfs_async_inode_return_delegation(struct inode *inode, const nfs4_stateid *stateid);
++void nfs_inode_return_delegation_noreclaim(struct inode *inode);
+
+ struct inode *nfs_delegation_find_inode(struct nfs_client *clp, const struct nfs_fh *fhandle);
+ void nfs_return_all_delegations(struct super_block *sb);
+@@ -39,7 +40,7 @@ void nfs_delegation_mark_reclaim(struct nfs_client *clp);
+ void nfs_delegation_reap_unclaimed(struct nfs_client *clp);
+
+ /* NFSv4 delegation-related procedures */
+-int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid);
++int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync);
+ int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid);
+ int nfs4_lock_delegation_recall(struct nfs4_state *state, struct file_lock *fl);
+ int nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode);
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index f697b5c..476cb0f 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -192,7 +192,7 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page *page)
+ /* We requested READDIRPLUS, but the server doesn't grok it */
+ if (error == -ENOTSUPP && desc->plus) {
+ NFS_SERVER(inode)->caps &= ~NFS_CAP_READDIRPLUS;
+- clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_FLAGS(inode));
++ clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags);
+ desc->plus = 0;
+ goto again;
+ }
+@@ -537,12 +537,6 @@ static int nfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
+
+ lock_kernel();
+
+- res = nfs_revalidate_mapping_nolock(inode, filp->f_mapping);
+- if (res < 0) {
+- unlock_kernel();
+- return res;
+- }
+-
+ /*
+ * filp->f_pos points to the dirent entry number.
+ * *desc->dir_cookie has the cookie for the next entry. We have
+@@ -564,6 +558,10 @@ static int nfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
+ desc->entry = &my_entry;
+
+ nfs_block_sillyrename(dentry);
++ res = nfs_revalidate_mapping_nolock(inode, filp->f_mapping);
++ if (res < 0)
++ goto out;
++
+ while(!desc->entry->eof) {
+ res = readdir_search_pagecache(desc);
+
+@@ -579,7 +577,7 @@ static int nfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
+ break;
+ }
+ if (res == -ETOOSMALL && desc->plus) {
+- clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_FLAGS(inode));
++ clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags);
+ nfs_zap_caches(inode);
+ desc->plus = 0;
+ desc->entry->eof = 0;
+@@ -594,6 +592,7 @@ static int nfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
+ break;
+ }
+ }
++out:
+ nfs_unblock_sillyrename(dentry);
+ unlock_kernel();
+ if (res > 0)
+@@ -639,6 +638,21 @@ static int nfs_fsync_dir(struct file *filp, struct dentry *dentry, int datasync)
+ return 0;
+ }
+
++/**
++ * nfs_force_lookup_revalidate - Mark the directory as having changed
++ * @dir - pointer to directory inode
++ *
++ * This forces the revalidation code in nfs_lookup_revalidate() to do a
++ * full lookup on all child dentries of 'dir' whenever a change occurs
++ * on the server that might have invalidated our dcache.
++ *
++ * The caller should be holding dir->i_lock
++ */
++void nfs_force_lookup_revalidate(struct inode *dir)
++{
++ NFS_I(dir)->cache_change_attribute = jiffies;
++}
++
+ /*
+ * A check for whether or not the parent directory has changed.
+ * In the case it has, we assume that the dentries are untrustworthy
+@@ -827,6 +841,10 @@ static int nfs_dentry_delete(struct dentry *dentry)
+ dentry->d_parent->d_name.name, dentry->d_name.name,
+ dentry->d_flags);
+
++ /* Unhash any dentry with a stale inode */
++ if (dentry->d_inode != NULL && NFS_STALE(dentry->d_inode))
++ return 1;
++
+ if (dentry->d_flags & DCACHE_NFSFS_RENAMED) {
+ /* Unhash it, so that ->d_iput() would be called */
+ return 1;
+@@ -846,7 +864,6 @@ static int nfs_dentry_delete(struct dentry *dentry)
+ */
+ static void nfs_dentry_iput(struct dentry *dentry, struct inode *inode)
+ {
+- nfs_inode_return_delegation(inode);
+ if (S_ISDIR(inode->i_mode))
+ /* drop any readdir cache as it could easily be old */
+ NFS_I(inode)->cache_validity |= NFS_INO_INVALID_DATA;
+@@ -1268,6 +1285,12 @@ out_err:
+ return error;
+ }
+
++static void nfs_dentry_handle_enoent(struct dentry *dentry)
++{
++ if (dentry->d_inode != NULL && !d_unhashed(dentry))
++ d_delete(dentry);
++}
++
+ static int nfs_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+ int error;
+@@ -1280,6 +1303,8 @@ static int nfs_rmdir(struct inode *dir, struct dentry *dentry)
+ /* Ensure the VFS deletes this inode */
+ if (error == 0 && dentry->d_inode != NULL)
+ clear_nlink(dentry->d_inode);
++ else if (error == -ENOENT)
++ nfs_dentry_handle_enoent(dentry);
+ unlock_kernel();
+
+ return error;
+@@ -1386,6 +1411,8 @@ static int nfs_safe_remove(struct dentry *dentry)
+ nfs_mark_for_revalidate(inode);
+ } else
+ error = NFS_PROTO(dir)->remove(dir, &dentry->d_name);
++ if (error == -ENOENT)
++ nfs_dentry_handle_enoent(dentry);
+ out:
+ return error;
+ }
+@@ -1422,7 +1449,7 @@ static int nfs_unlink(struct inode *dir, struct dentry *dentry)
+ spin_unlock(&dentry->d_lock);
+ spin_unlock(&dcache_lock);
+ error = nfs_safe_remove(dentry);
+- if (!error) {
++ if (!error || error == -ENOENT) {
+ nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
+ } else if (need_rehash)
+ d_rehash(dentry);
+@@ -1635,7 +1662,8 @@ out:
+ d_move(old_dentry, new_dentry);
+ nfs_set_verifier(new_dentry,
+ nfs_save_change_attribute(new_dir));
+- }
++ } else if (error == -ENOENT)
++ nfs_dentry_handle_enoent(old_dentry);
+
+ /* new dentry created? */
+ if (dentry)
+@@ -1666,13 +1694,19 @@ int nfs_access_cache_shrinker(int nr_to_scan, gfp_t gfp_mask)
+ restart:
+ spin_lock(&nfs_access_lru_lock);
+ list_for_each_entry(nfsi, &nfs_access_lru_list, access_cache_inode_lru) {
++ struct rw_semaphore *s_umount;
+ struct inode *inode;
+
+ if (nr_to_scan-- == 0)
+ break;
++ s_umount = &nfsi->vfs_inode.i_sb->s_umount;
++ if (!down_read_trylock(s_umount))
++ continue;
+ inode = igrab(&nfsi->vfs_inode);
+- if (inode == NULL)
++ if (inode == NULL) {
++ up_read(s_umount);
+ continue;
++ }
+ spin_lock(&inode->i_lock);
+ if (list_empty(&nfsi->access_cache_entry_lru))
+ goto remove_lru_entry;
+@@ -1691,6 +1725,7 @@ remove_lru_entry:
+ spin_unlock(&inode->i_lock);
+ spin_unlock(&nfs_access_lru_lock);
+ iput(inode);
++ up_read(s_umount);
+ goto restart;
+ }
+ spin_unlock(&nfs_access_lru_lock);
+@@ -1731,7 +1766,7 @@ static void __nfs_access_zap_cache(struct inode *inode)
+ void nfs_access_zap_cache(struct inode *inode)
+ {
+ /* Remove from global LRU init */
+- if (test_and_clear_bit(NFS_INO_ACL_LRU_SET, &NFS_FLAGS(inode))) {
++ if (test_and_clear_bit(NFS_INO_ACL_LRU_SET, &NFS_I(inode)->flags)) {
+ spin_lock(&nfs_access_lru_lock);
+ list_del_init(&NFS_I(inode)->access_cache_inode_lru);
+ spin_unlock(&nfs_access_lru_lock);
+@@ -1845,7 +1880,7 @@ static void nfs_access_add_cache(struct inode *inode, struct nfs_access_entry *s
+ smp_mb__after_atomic_inc();
+
+ /* Add inode to global LRU list */
+- if (!test_and_set_bit(NFS_INO_ACL_LRU_SET, &NFS_FLAGS(inode))) {
++ if (!test_and_set_bit(NFS_INO_ACL_LRU_SET, &NFS_I(inode)->flags)) {
+ spin_lock(&nfs_access_lru_lock);
+ list_add_tail(&NFS_I(inode)->access_cache_inode_lru, &nfs_access_lru_list);
+ spin_unlock(&nfs_access_lru_lock);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 3c9d16b..f8e165c 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -188,12 +188,17 @@ static void nfs_direct_req_release(struct nfs_direct_req *dreq)
+ static ssize_t nfs_direct_wait(struct nfs_direct_req *dreq)
+ {
+ ssize_t result = -EIOCBQUEUED;
++ struct rpc_clnt *clnt;
++ sigset_t oldset;
+
+ /* Async requests don't wait here */
+ if (dreq->iocb)
+ goto out;
+
++ clnt = NFS_CLIENT(dreq->inode);
++ rpc_clnt_sigmask(clnt, &oldset);
+ result = wait_for_completion_interruptible(&dreq->completion);
++ rpc_clnt_sigunmask(clnt, &oldset);
+
+ if (!result)
+ result = dreq->error;
+@@ -272,6 +277,16 @@ static ssize_t nfs_direct_read_schedule_segment(struct nfs_direct_req *dreq,
+ unsigned long user_addr = (unsigned long)iov->iov_base;
+ size_t count = iov->iov_len;
+ size_t rsize = NFS_SERVER(inode)->rsize;
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_cred = ctx->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs_read_direct_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+ unsigned int pgbase;
+ int result;
+ ssize_t started = 0;
+@@ -311,7 +326,7 @@ static ssize_t nfs_direct_read_schedule_segment(struct nfs_direct_req *dreq,
+
+ data->req = (struct nfs_page *) dreq;
+ data->inode = inode;
+- data->cred = ctx->cred;
++ data->cred = msg.rpc_cred;
+ data->args.fh = NFS_FH(inode);
+ data->args.context = ctx;
+ data->args.offset = pos;
+@@ -321,14 +336,16 @@ static ssize_t nfs_direct_read_schedule_segment(struct nfs_direct_req *dreq,
+ data->res.fattr = &data->fattr;
+ data->res.eof = 0;
+ data->res.count = bytes;
++ msg.rpc_argp = &data->args;
++ msg.rpc_resp = &data->res;
+
+- rpc_init_task(&data->task, NFS_CLIENT(inode), RPC_TASK_ASYNC,
+- &nfs_read_direct_ops, data);
+- NFS_PROTO(inode)->read_setup(data);
+-
+- data->task.tk_cookie = (unsigned long) inode;
++ task_setup_data.task = &data->task;
++ task_setup_data.callback_data = data;
++ NFS_PROTO(inode)->read_setup(data, &msg);
+
+- rpc_execute(&data->task);
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+
+ dprintk("NFS: %5u initiated direct read call "
+ "(req %s/%Ld, %zu bytes @ offset %Lu)\n",
+@@ -391,9 +408,7 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+ {
+ ssize_t result = 0;
+- sigset_t oldset;
+ struct inode *inode = iocb->ki_filp->f_mapping->host;
+- struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ struct nfs_direct_req *dreq;
+
+ dreq = nfs_direct_req_alloc();
+@@ -405,11 +420,9 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
+ if (!is_sync_kiocb(iocb))
+ dreq->iocb = iocb;
+
+- rpc_clnt_sigmask(clnt, &oldset);
+ result = nfs_direct_read_schedule_iovec(dreq, iov, nr_segs, pos);
+ if (!result)
+ result = nfs_direct_wait(dreq);
+- rpc_clnt_sigunmask(clnt, &oldset);
+ nfs_direct_req_release(dreq);
+
+ return result;
+@@ -431,6 +444,15 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ struct inode *inode = dreq->inode;
+ struct list_head *p;
+ struct nfs_write_data *data;
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_cred = dreq->ctx->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(inode),
++ .callback_ops = &nfs_write_direct_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+
+ dreq->count = 0;
+ get_dreq(dreq);
+@@ -440,6 +462,9 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+
+ get_dreq(dreq);
+
++ /* Use stable writes */
++ data->args.stable = NFS_FILE_SYNC;
++
+ /*
+ * Reset data->res.
+ */
+@@ -451,17 +476,18 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ * Reuse data->task; data->args should not have changed
+ * since the original request was sent.
+ */
+- rpc_init_task(&data->task, NFS_CLIENT(inode), RPC_TASK_ASYNC,
+- &nfs_write_direct_ops, data);
+- NFS_PROTO(inode)->write_setup(data, FLUSH_STABLE);
+-
+- data->task.tk_priority = RPC_PRIORITY_NORMAL;
+- data->task.tk_cookie = (unsigned long) inode;
++ task_setup_data.task = &data->task;
++ task_setup_data.callback_data = data;
++ msg.rpc_argp = &data->args;
++ msg.rpc_resp = &data->res;
++ NFS_PROTO(inode)->write_setup(data, &msg);
+
+ /*
+ * We're called via an RPC callback, so BKL is already held.
+ */
+- rpc_execute(&data->task);
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+
+ dprintk("NFS: %5u rescheduled direct write call (req %s/%Ld, %u bytes @ offset %Lu)\n",
+ data->task.tk_pid,
+@@ -504,9 +530,23 @@ static const struct rpc_call_ops nfs_commit_direct_ops = {
+ static void nfs_direct_commit_schedule(struct nfs_direct_req *dreq)
+ {
+ struct nfs_write_data *data = dreq->commit_data;
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_argp = &data->args,
++ .rpc_resp = &data->res,
++ .rpc_cred = dreq->ctx->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .task = &data->task,
++ .rpc_client = NFS_CLIENT(dreq->inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs_commit_direct_ops,
++ .callback_data = data,
++ .flags = RPC_TASK_ASYNC,
++ };
+
+ data->inode = dreq->inode;
+- data->cred = dreq->ctx->cred;
++ data->cred = msg.rpc_cred;
+
+ data->args.fh = NFS_FH(data->inode);
+ data->args.offset = 0;
+@@ -515,18 +555,16 @@ static void nfs_direct_commit_schedule(struct nfs_direct_req *dreq)
+ data->res.fattr = &data->fattr;
+ data->res.verf = &data->verf;
+
+- rpc_init_task(&data->task, NFS_CLIENT(dreq->inode), RPC_TASK_ASYNC,
+- &nfs_commit_direct_ops, data);
+- NFS_PROTO(data->inode)->commit_setup(data, 0);
++ NFS_PROTO(data->inode)->commit_setup(data, &msg);
+
+- data->task.tk_priority = RPC_PRIORITY_NORMAL;
+- data->task.tk_cookie = (unsigned long)data->inode;
+ /* Note: task.tk_ops->rpc_release will free dreq->commit_data */
+ dreq->commit_data = NULL;
+
+ dprintk("NFS: %5u initiated commit call\n", data->task.tk_pid);
+
+- rpc_execute(&data->task);
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+ }
+
+ static void nfs_direct_write_complete(struct nfs_direct_req *dreq, struct inode *inode)
+@@ -641,6 +679,16 @@ static ssize_t nfs_direct_write_schedule_segment(struct nfs_direct_req *dreq,
+ struct inode *inode = ctx->path.dentry->d_inode;
+ unsigned long user_addr = (unsigned long)iov->iov_base;
+ size_t count = iov->iov_len;
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_cred = ctx->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs_write_direct_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+ size_t wsize = NFS_SERVER(inode)->wsize;
+ unsigned int pgbase;
+ int result;
+@@ -683,25 +731,27 @@ static ssize_t nfs_direct_write_schedule_segment(struct nfs_direct_req *dreq,
+
+ data->req = (struct nfs_page *) dreq;
+ data->inode = inode;
+- data->cred = ctx->cred;
++ data->cred = msg.rpc_cred;
+ data->args.fh = NFS_FH(inode);
+ data->args.context = ctx;
+ data->args.offset = pos;
+ data->args.pgbase = pgbase;
+ data->args.pages = data->pagevec;
+ data->args.count = bytes;
++ data->args.stable = sync;
+ data->res.fattr = &data->fattr;
+ data->res.count = bytes;
+ data->res.verf = &data->verf;
+
+- rpc_init_task(&data->task, NFS_CLIENT(inode), RPC_TASK_ASYNC,
+- &nfs_write_direct_ops, data);
+- NFS_PROTO(inode)->write_setup(data, sync);
+-
+- data->task.tk_priority = RPC_PRIORITY_NORMAL;
+- data->task.tk_cookie = (unsigned long) inode;
++ task_setup_data.task = &data->task;
++ task_setup_data.callback_data = data;
++ msg.rpc_argp = &data->args;
++ msg.rpc_resp = &data->res;
++ NFS_PROTO(inode)->write_setup(data, &msg);
+
+- rpc_execute(&data->task);
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+
+ dprintk("NFS: %5u initiated direct write call "
+ "(req %s/%Ld, %zu bytes @ offset %Lu)\n",
+@@ -767,12 +817,10 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
+ size_t count)
+ {
+ ssize_t result = 0;
+- sigset_t oldset;
+ struct inode *inode = iocb->ki_filp->f_mapping->host;
+- struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ struct nfs_direct_req *dreq;
+ size_t wsize = NFS_SERVER(inode)->wsize;
+- int sync = 0;
++ int sync = NFS_UNSTABLE;
+
+ dreq = nfs_direct_req_alloc();
+ if (!dreq)
+@@ -780,18 +828,16 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
+ nfs_alloc_commit_data(dreq);
+
+ if (dreq->commit_data == NULL || count < wsize)
+- sync = FLUSH_STABLE;
++ sync = NFS_FILE_SYNC;
+
+ dreq->inode = inode;
+ dreq->ctx = get_nfs_open_context(nfs_file_open_context(iocb->ki_filp));
+ if (!is_sync_kiocb(iocb))
+ dreq->iocb = iocb;
+
+- rpc_clnt_sigmask(clnt, &oldset);
+ result = nfs_direct_write_schedule_iovec(dreq, iov, nr_segs, pos, sync);
+ if (!result)
+ result = nfs_direct_wait(dreq);
+- rpc_clnt_sigunmask(clnt, &oldset);
+ nfs_direct_req_release(dreq);
+
+ return result;
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index b3bb89f..ef57a5a 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -349,7 +349,9 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
+ unlock_page(page);
+ page_cache_release(page);
+
+- return status < 0 ? status : copied;
++ if (status < 0)
++ return status;
++ return copied;
+ }
+
+ static void nfs_invalidate_page(struct page *page, unsigned long offset)
+@@ -392,35 +394,27 @@ static int nfs_vm_page_mkwrite(struct vm_area_struct *vma, struct page *page)
+ struct file *filp = vma->vm_file;
+ unsigned pagelen;
+ int ret = -EINVAL;
+- void *fsdata;
+ struct address_space *mapping;
+- loff_t offset;
+
+ lock_page(page);
+ mapping = page->mapping;
+- if (mapping != vma->vm_file->f_path.dentry->d_inode->i_mapping) {
+- unlock_page(page);
+- return -EINVAL;
+- }
++ if (mapping != vma->vm_file->f_path.dentry->d_inode->i_mapping)
++ goto out_unlock;
++
++ ret = 0;
+ pagelen = nfs_page_length(page);
+- offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
+- unlock_page(page);
++ if (pagelen == 0)
++ goto out_unlock;
+
+- /*
+- * we can use mapping after releasing the page lock, because:
+- * we hold mmap_sem on the fault path, which should pin the vma
+- * which should pin the file, which pins the dentry which should
+- * hold a reference on inode.
+- */
++ ret = nfs_flush_incompatible(filp, page);
++ if (ret != 0)
++ goto out_unlock;
+
+- if (pagelen) {
+- struct page *page2 = NULL;
+- ret = nfs_write_begin(filp, mapping, offset, pagelen,
+- 0, &page2, &fsdata);
+- if (!ret)
+- ret = nfs_write_end(filp, mapping, offset, pagelen,
+- pagelen, page2, fsdata);
+- }
++ ret = nfs_updatepage(filp, page, 0, pagelen);
++ if (ret == 0)
++ ret = pagelen;
++out_unlock:
++ unlock_page(page);
+ return ret;
+ }
+
+diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
+index d11eb05..8ae5dba 100644
+--- a/fs/nfs/idmap.c
++++ b/fs/nfs/idmap.c
+@@ -72,39 +72,39 @@ module_param_call(idmap_cache_timeout, param_set_idmap_timeout, param_get_int,
+ &nfs_idmap_cache_timeout, 0644);
+
+ struct idmap_hashent {
+- unsigned long ih_expires;
+- __u32 ih_id;
+- int ih_namelen;
+- char ih_name[IDMAP_NAMESZ];
++ unsigned long ih_expires;
++ __u32 ih_id;
++ size_t ih_namelen;
++ char ih_name[IDMAP_NAMESZ];
+ };
+
+ struct idmap_hashtable {
+- __u8 h_type;
+- struct idmap_hashent h_entries[IDMAP_HASH_SZ];
++ __u8 h_type;
++ struct idmap_hashent h_entries[IDMAP_HASH_SZ];
+ };
+
+ struct idmap {
+- struct dentry *idmap_dentry;
+- wait_queue_head_t idmap_wq;
+- struct idmap_msg idmap_im;
+- struct mutex idmap_lock; /* Serializes upcalls */
+- struct mutex idmap_im_lock; /* Protects the hashtable */
+- struct idmap_hashtable idmap_user_hash;
+- struct idmap_hashtable idmap_group_hash;
++ struct dentry *idmap_dentry;
++ wait_queue_head_t idmap_wq;
++ struct idmap_msg idmap_im;
++ struct mutex idmap_lock; /* Serializes upcalls */
++ struct mutex idmap_im_lock; /* Protects the hashtable */
++ struct idmap_hashtable idmap_user_hash;
++ struct idmap_hashtable idmap_group_hash;
+ };
+
+-static ssize_t idmap_pipe_upcall(struct file *, struct rpc_pipe_msg *,
+- char __user *, size_t);
+-static ssize_t idmap_pipe_downcall(struct file *, const char __user *,
+- size_t);
+-static void idmap_pipe_destroy_msg(struct rpc_pipe_msg *);
++static ssize_t idmap_pipe_upcall(struct file *, struct rpc_pipe_msg *,
++ char __user *, size_t);
++static ssize_t idmap_pipe_downcall(struct file *, const char __user *,
++ size_t);
++static void idmap_pipe_destroy_msg(struct rpc_pipe_msg *);
+
+ static unsigned int fnvhash32(const void *, size_t);
+
+ static struct rpc_pipe_ops idmap_upcall_ops = {
+- .upcall = idmap_pipe_upcall,
+- .downcall = idmap_pipe_downcall,
+- .destroy_msg = idmap_pipe_destroy_msg,
++ .upcall = idmap_pipe_upcall,
++ .downcall = idmap_pipe_downcall,
++ .destroy_msg = idmap_pipe_destroy_msg,
+ };
+
+ int
+@@ -115,19 +115,20 @@ nfs_idmap_new(struct nfs_client *clp)
+
+ BUG_ON(clp->cl_idmap != NULL);
+
+- if ((idmap = kzalloc(sizeof(*idmap), GFP_KERNEL)) == NULL)
+- return -ENOMEM;
++ idmap = kzalloc(sizeof(*idmap), GFP_KERNEL);
++ if (idmap == NULL)
++ return -ENOMEM;
+
+- idmap->idmap_dentry = rpc_mkpipe(clp->cl_rpcclient->cl_dentry, "idmap",
+- idmap, &idmap_upcall_ops, 0);
+- if (IS_ERR(idmap->idmap_dentry)) {
++ idmap->idmap_dentry = rpc_mkpipe(clp->cl_rpcclient->cl_dentry, "idmap",
++ idmap, &idmap_upcall_ops, 0);
++ if (IS_ERR(idmap->idmap_dentry)) {
+ error = PTR_ERR(idmap->idmap_dentry);
+ kfree(idmap);
+ return error;
+ }
+
+- mutex_init(&idmap->idmap_lock);
+- mutex_init(&idmap->idmap_im_lock);
++ mutex_init(&idmap->idmap_lock);
++ mutex_init(&idmap->idmap_im_lock);
+ init_waitqueue_head(&idmap->idmap_wq);
+ idmap->idmap_user_hash.h_type = IDMAP_TYPE_USER;
+ idmap->idmap_group_hash.h_type = IDMAP_TYPE_GROUP;
+@@ -192,7 +193,7 @@ idmap_lookup_id(struct idmap_hashtable *h, __u32 id)
+ * pretty trivial.
+ */
+ static inline struct idmap_hashent *
+-idmap_alloc_name(struct idmap_hashtable *h, char *name, unsigned len)
++idmap_alloc_name(struct idmap_hashtable *h, char *name, size_t len)
+ {
+ return idmap_name_hash(h, name, len);
+ }
+@@ -285,7 +286,7 @@ nfs_idmap_id(struct idmap *idmap, struct idmap_hashtable *h,
+ memset(im, 0, sizeof(*im));
+ mutex_unlock(&idmap->idmap_im_lock);
+ mutex_unlock(&idmap->idmap_lock);
+- return (ret);
++ return ret;
+ }
+
+ /*
+@@ -354,42 +355,40 @@ nfs_idmap_name(struct idmap *idmap, struct idmap_hashtable *h,
+ /* RPC pipefs upcall/downcall routines */
+ static ssize_t
+ idmap_pipe_upcall(struct file *filp, struct rpc_pipe_msg *msg,
+- char __user *dst, size_t buflen)
++ char __user *dst, size_t buflen)
+ {
+- char *data = (char *)msg->data + msg->copied;
+- ssize_t mlen = msg->len - msg->copied;
+- ssize_t left;
+-
+- if (mlen > buflen)
+- mlen = buflen;
+-
+- left = copy_to_user(dst, data, mlen);
+- if (left < 0) {
+- msg->errno = left;
+- return left;
++ char *data = (char *)msg->data + msg->copied;
++ size_t mlen = min(msg->len, buflen);
++ unsigned long left;
++
++ left = copy_to_user(dst, data, mlen);
++ if (left == mlen) {
++ msg->errno = -EFAULT;
++ return -EFAULT;
+ }
+
+ mlen -= left;
+ msg->copied += mlen;
+ msg->errno = 0;
+- return mlen;
++ return mlen;
+ }
+
+ static ssize_t
+ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ {
+- struct rpc_inode *rpci = RPC_I(filp->f_path.dentry->d_inode);
++ struct rpc_inode *rpci = RPC_I(filp->f_path.dentry->d_inode);
+ struct idmap *idmap = (struct idmap *)rpci->private;
+ struct idmap_msg im_in, *im = &idmap->idmap_im;
+ struct idmap_hashtable *h;
+ struct idmap_hashent *he = NULL;
+- int namelen_in;
++ size_t namelen_in;
+ int ret;
+
+- if (mlen != sizeof(im_in))
+- return (-ENOSPC);
++ if (mlen != sizeof(im_in))
++ return -ENOSPC;
+
+- if (copy_from_user(&im_in, src, mlen) != 0)
+- return (-EFAULT);
++ if (copy_from_user(&im_in, src, mlen) != 0)
++ return -EFAULT;
+
+ mutex_lock(&idmap->idmap_im_lock);
+
+@@ -487,7 +486,7 @@ static unsigned int fnvhash32(const void *buf, size_t buflen)
+ hash ^= (unsigned int)*p;
+ }
+
+- return (hash);
++ return hash;
+ }
+
+ int nfs_map_name_to_uid(struct nfs_client *clp, const char *name, size_t namelen, __u32 *uid)
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index db5d96d..3f332e5 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -192,7 +192,7 @@ void nfs_invalidate_atime(struct inode *inode)
+ */
+ static void nfs_invalidate_inode(struct inode *inode)
+ {
+- set_bit(NFS_INO_STALE, &NFS_FLAGS(inode));
++ set_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
+ nfs_zap_caches_locked(inode);
+ }
+
+@@ -229,7 +229,7 @@ nfs_init_locked(struct inode *inode, void *opaque)
+ struct nfs_find_desc *desc = (struct nfs_find_desc *)opaque;
+ struct nfs_fattr *fattr = desc->fattr;
+
+- NFS_FILEID(inode) = fattr->fileid;
++ set_nfs_fileid(inode, fattr->fileid);
+ nfs_copy_fh(NFS_FH(inode), desc->fh);
+ return 0;
+ }
+@@ -291,7 +291,7 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
+ inode->i_fop = &nfs_dir_operations;
+ if (nfs_server_capable(inode, NFS_CAP_READDIRPLUS)
+ && fattr->size <= NFS_LIMIT_READDIRPLUS)
+- set_bit(NFS_INO_ADVISE_RDPLUS, &NFS_FLAGS(inode));
++ set_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags);
+ /* Deal with crossing mountpoints */
+ if (!nfs_fsid_equal(&NFS_SB(sb)->fsid, &fattr->fsid)) {
+ if (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL)
+@@ -461,9 +461,18 @@ int nfs_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat)
+ int need_atime = NFS_I(inode)->cache_validity & NFS_INO_INVALID_ATIME;
+ int err;
+
+- /* Flush out writes to the server in order to update c/mtime */
+- if (S_ISREG(inode->i_mode))
++ /*
++ * Flush out writes to the server in order to update c/mtime.
++ *
++ * Hold the i_mutex to suspend application writes temporarily;
++ * this prevents long-running writing applications from blocking
++ * nfs_wb_nocommit.
++ */
++ if (S_ISREG(inode->i_mode)) {
++ mutex_lock(&inode->i_mutex);
+ nfs_wb_nocommit(inode);
++ mutex_unlock(&inode->i_mutex);
++ }
+
+ /*
+ * We may force a getattr if the user cares about atime.
+@@ -659,7 +668,7 @@ __nfs_revalidate_inode(struct nfs_server *server, struct inode *inode)
+ if (status == -ESTALE) {
+ nfs_zap_caches(inode);
+ if (!S_ISDIR(inode->i_mode))
+- set_bit(NFS_INO_STALE, &NFS_FLAGS(inode));
++ set_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
+ }
+ goto out;
+ }
+@@ -814,8 +823,9 @@ static void nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ if (S_ISDIR(inode->i_mode))
+ nfsi->cache_validity |= NFS_INO_INVALID_DATA;
+ }
+- if (inode->i_size == fattr->pre_size && nfsi->npages == 0)
+- inode->i_size = fattr->size;
++ if (inode->i_size == nfs_size_to_loff_t(fattr->pre_size) &&
++ nfsi->npages == 0)
++ inode->i_size = nfs_size_to_loff_t(fattr->size);
+ }
+ }
+
+@@ -1019,7 +1029,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ dprintk("NFS: mtime change on server for file %s/%ld\n",
+ inode->i_sb->s_id, inode->i_ino);
+ invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_DATA;
+- nfsi->cache_change_attribute = now;
++ if (S_ISDIR(inode->i_mode))
++ nfs_force_lookup_revalidate(inode);
+ }
+ /* If ctime has changed we should definitely clear access+acl caches */
+ if (!timespec_equal(&inode->i_ctime, &fattr->ctime))
+@@ -1028,7 +1039,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ dprintk("NFS: change_attr change on server for file %s/%ld\n",
+ inode->i_sb->s_id, inode->i_ino);
+ invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_DATA|NFS_INO_INVALID_ACCESS|NFS_INO_INVALID_ACL;
+- nfsi->cache_change_attribute = now;
++ if (S_ISDIR(inode->i_mode))
++ nfs_force_lookup_revalidate(inode);
+ }
+
+ /* Check if our cached file size is stale */
+@@ -1133,7 +1145,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ void nfs4_clear_inode(struct inode *inode)
+ {
+ /* If we are holding a delegation, return it! */
+- nfs_inode_return_delegation(inode);
++ nfs_inode_return_delegation_noreclaim(inode);
+ /* First call standard NFS clear_inode() code */
+ nfs_clear_inode(inode);
+ }
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index f3acf48..0f56196 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -21,7 +21,8 @@ struct nfs_clone_mount {
+ struct nfs_fattr *fattr;
+ char *hostname;
+ char *mnt_path;
+- struct sockaddr_in *addr;
++ struct sockaddr *addr;
++ size_t addrlen;
+ rpc_authflavor_t authflavor;
+ };
+
+@@ -41,19 +42,19 @@ struct nfs_parsed_mount_data {
+ char *client_address;
+
+ struct {
+- struct sockaddr_in address;
++ struct sockaddr_storage address;
++ size_t addrlen;
+ char *hostname;
+- unsigned int program;
+ unsigned int version;
+ unsigned short port;
+ int protocol;
+ } mount_server;
+
+ struct {
+- struct sockaddr_in address;
++ struct sockaddr_storage address;
++ size_t addrlen;
+ char *hostname;
+ char *export_path;
+- unsigned int program;
+ int protocol;
+ } nfs_server;
+ };
+@@ -62,7 +63,8 @@ struct nfs_parsed_mount_data {
+ extern struct rpc_program nfs_program;
+
+ extern void nfs_put_client(struct nfs_client *);
+-extern struct nfs_client *nfs_find_client(const struct sockaddr_in *, int);
++extern struct nfs_client *nfs_find_client(const struct sockaddr *, u32);
++extern struct nfs_client *nfs_find_client_next(struct nfs_client *);
+ extern struct nfs_server *nfs_create_server(
+ const struct nfs_parsed_mount_data *,
+ struct nfs_fh *);
+@@ -160,6 +162,8 @@ extern struct rpc_stat nfs_rpcstat;
+
+ extern int __init register_nfs_fs(void);
+ extern void __exit unregister_nfs_fs(void);
++extern void nfs_sb_active(struct nfs_server *server);
++extern void nfs_sb_deactive(struct nfs_server *server);
+
+ /* namespace.c */
+ extern char *nfs_path(const char *base,
+diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
+index acfc56f..be4ce1c 100644
+--- a/fs/nfs/namespace.c
++++ b/fs/nfs/namespace.c
+@@ -188,7 +188,7 @@ static struct vfsmount *nfs_do_clone_mount(struct nfs_server *server,
+ {
+ #ifdef CONFIG_NFS_V4
+ struct vfsmount *mnt = NULL;
+- switch (server->nfs_client->cl_nfsversion) {
++ switch (server->nfs_client->rpc_ops->version) {
+ case 2:
+ case 3:
+ mnt = vfs_kern_mount(&nfs_xdev_fs_type, 0, devname, mountdata);
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index 668ab96..1f7ea67 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -262,7 +262,9 @@ static int
+ nfs_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ {
+ struct kvec *iov = req->rq_rcv_buf.head;
+- int status, count, recvd, hdrlen;
++ size_t hdrlen;
++ u32 count, recvd;
++ int status;
+
+ if ((status = ntohl(*p++)))
+ return -nfs_stat_to_errno(status);
+@@ -273,7 +275,7 @@ nfs_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READ reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READ header is short. iovec will be shifted.\n");
+@@ -283,11 +285,11 @@ nfs_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ recvd = req->rq_rcv_buf.len - hdrlen;
+ if (count > recvd) {
+ dprintk("NFS: server cheating in read reply: "
+- "count %d > recvd %d\n", count, recvd);
++ "count %u > recvd %u\n", count, recvd);
+ count = recvd;
+ }
+
+- dprintk("RPC: readres OK count %d\n", count);
++ dprintk("RPC: readres OK count %u\n", count);
+ if (count < res->count)
+ res->count = count;
+
+@@ -423,9 +425,10 @@ nfs_xdr_readdirres(struct rpc_rqst *req, __be32 *p, void *dummy)
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct kvec *iov = rcvbuf->head;
+ struct page **page;
+- int hdrlen, recvd;
++ size_t hdrlen;
++ unsigned int pglen, recvd;
++ u32 len;
+ int status, nr;
+- unsigned int len, pglen;
+ __be32 *end, *entry, *kaddr;
+
+ if ((status = ntohl(*p++)))
+@@ -434,7 +437,7 @@ nfs_xdr_readdirres(struct rpc_rqst *req, __be32 *p, void *dummy)
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READDIR reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READDIR header is short. iovec will be shifted.\n");
+@@ -576,7 +579,8 @@ nfs_xdr_readlinkres(struct rpc_rqst *req, __be32 *p, void *dummy)
+ {
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct kvec *iov = rcvbuf->head;
+- int hdrlen, len, recvd;
++ size_t hdrlen;
++ u32 len, recvd;
+ char *kaddr;
+ int status;
+
+@@ -584,14 +588,14 @@ nfs_xdr_readlinkres(struct rpc_rqst *req, __be32 *p, void *dummy)
+ return -nfs_stat_to_errno(status);
+ /* Convert length of symlink */
+ len = ntohl(*p++);
+- if (len >= rcvbuf->page_len || len <= 0) {
++ if (len >= rcvbuf->page_len) {
+ dprintk("nfs: server returned giant symlink!\n");
+ return -ENAMETOOLONG;
+ }
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READLINK reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READLINK header is short. iovec will be shifted.\n");
+diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
+index 4cdc236..b353c1a 100644
+--- a/fs/nfs/nfs3proc.c
++++ b/fs/nfs/nfs3proc.c
+@@ -732,16 +732,9 @@ static int nfs3_read_done(struct rpc_task *task, struct nfs_read_data *data)
+ return 0;
+ }
+
+-static void nfs3_proc_read_setup(struct nfs_read_data *data)
++static void nfs3_proc_read_setup(struct nfs_read_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs3_procedures[NFS3PROC_READ],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs3_procedures[NFS3PROC_READ];
+ }
+
+ static int nfs3_write_done(struct rpc_task *task, struct nfs_write_data *data)
+@@ -753,24 +746,9 @@ static int nfs3_write_done(struct rpc_task *task, struct nfs_write_data *data)
+ return 0;
+ }
+
+-static void nfs3_proc_write_setup(struct nfs_write_data *data, int how)
++static void nfs3_proc_write_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs3_procedures[NFS3PROC_WRITE],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+- data->args.stable = NFS_UNSTABLE;
+- if (how & FLUSH_STABLE) {
+- data->args.stable = NFS_FILE_SYNC;
+- if (NFS_I(data->inode)->ncommit)
+- data->args.stable = NFS_DATA_SYNC;
+- }
+-
+- /* Finalize the task. */
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs3_procedures[NFS3PROC_WRITE];
+ }
+
+ static int nfs3_commit_done(struct rpc_task *task, struct nfs_write_data *data)
+@@ -781,22 +759,17 @@ static int nfs3_commit_done(struct rpc_task *task, struct nfs_write_data *data)
+ return 0;
+ }
+
+-static void nfs3_proc_commit_setup(struct nfs_write_data *data, int how)
++static void nfs3_proc_commit_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs3_procedures[NFS3PROC_COMMIT],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs3_procedures[NFS3PROC_COMMIT];
+ }
+
+ static int
+ nfs3_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
+ {
+- return nlmclnt_proc(filp->f_path.dentry->d_inode, cmd, fl);
++ struct inode *inode = filp->f_path.dentry->d_inode;
+
++ return nlmclnt_proc(NFS_SERVER(inode)->nlm_host, cmd, fl);
+ }
+
+ const struct nfs_rpc_ops nfs_v3_clientops = {
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 616d326..3917e2f 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -506,9 +506,9 @@ nfs3_xdr_readdirres(struct rpc_rqst *req, __be32 *p, struct nfs3_readdirres *res
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct kvec *iov = rcvbuf->head;
+ struct page **page;
+- int hdrlen, recvd;
++ size_t hdrlen;
++ u32 len, recvd, pglen;
+ int status, nr;
+- unsigned int len, pglen;
+ __be32 *entry, *end, *kaddr;
+
+ status = ntohl(*p++);
+@@ -527,7 +527,7 @@ nfs3_xdr_readdirres(struct rpc_rqst *req, __be32 *p, struct nfs3_readdirres *res
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READDIR reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READDIR header is short. iovec will be shifted.\n");
+@@ -549,7 +549,7 @@ nfs3_xdr_readdirres(struct rpc_rqst *req, __be32 *p, struct nfs3_readdirres *res
+ len = ntohl(*p++); /* string length */
+ p += XDR_QUADLEN(len) + 2; /* name + cookie */
+ if (len > NFS3_MAXNAMLEN) {
+- dprintk("NFS: giant filename in readdir (len %x)!\n",
++ dprintk("NFS: giant filename in readdir (len 0x%x)!\n",
+ len);
+ goto err_unmap;
+ }
+@@ -570,7 +570,7 @@ nfs3_xdr_readdirres(struct rpc_rqst *req, __be32 *p, struct nfs3_readdirres *res
+ len = ntohl(*p++);
+ if (len > NFS3_FHSIZE) {
+ dprintk("NFS: giant filehandle in "
+- "readdir (len %x)!\n", len);
++ "readdir (len 0x%x)!\n", len);
+ goto err_unmap;
+ }
+ p += XDR_QUADLEN(len);
+@@ -815,7 +815,8 @@ nfs3_xdr_readlinkres(struct rpc_rqst *req, __be32 *p, struct nfs_fattr *fattr)
+ {
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct kvec *iov = rcvbuf->head;
+- int hdrlen, len, recvd;
++ size_t hdrlen;
++ u32 len, recvd;
+ char *kaddr;
+ int status;
+
+@@ -827,7 +828,7 @@ nfs3_xdr_readlinkres(struct rpc_rqst *req, __be32 *p, struct nfs_fattr *fattr)
+
+ /* Convert length of symlink */
+ len = ntohl(*p++);
+- if (len >= rcvbuf->page_len || len <= 0) {
++ if (len >= rcvbuf->page_len) {
+ dprintk("nfs: server returned giant symlink!\n");
+ return -ENAMETOOLONG;
+ }
+@@ -835,7 +836,7 @@ nfs3_xdr_readlinkres(struct rpc_rqst *req, __be32 *p, struct nfs_fattr *fattr)
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READLINK reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READLINK header is short. "
+@@ -863,7 +864,9 @@ static int
+ nfs3_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ {
+ struct kvec *iov = req->rq_rcv_buf.head;
+- int status, count, ocount, recvd, hdrlen;
++ size_t hdrlen;
++ u32 count, ocount, recvd;
++ int status;
+
+ status = ntohl(*p++);
+ p = xdr_decode_post_op_attr(p, res->fattr);
+@@ -871,7 +874,7 @@ nfs3_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ if (status != 0)
+ return -nfs_stat_to_errno(status);
+
+- /* Decode reply could and EOF flag. NFSv3 is somewhat redundant
++ /* Decode reply count and EOF flag. NFSv3 is somewhat redundant
+ * in that it puts the count both in the res struct and in the
+ * opaque data count. */
+ count = ntohl(*p++);
+@@ -886,7 +889,7 @@ nfs3_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ hdrlen = (u8 *) p - (u8 *) iov->iov_base;
+ if (iov->iov_len < hdrlen) {
+ dprintk("NFS: READ reply header overflowed:"
+- "length %d > %Zu\n", hdrlen, iov->iov_len);
++ "length %Zu > %Zu\n", hdrlen, iov->iov_len);
+ return -errno_NFSERR_IO;
+ } else if (iov->iov_len != hdrlen) {
+ dprintk("NFS: READ header is short. iovec will be shifted.\n");
+@@ -896,7 +899,7 @@ nfs3_xdr_readres(struct rpc_rqst *req, __be32 *p, struct nfs_readres *res)
+ recvd = req->rq_rcv_buf.len - hdrlen;
+ if (count > recvd) {
+ dprintk("NFS: server cheating in read reply: "
+- "count %d > recvd %d\n", count, recvd);
++ "count %u > recvd %u\n", count, recvd);
+ count = recvd;
+ res->eof = 0;
+ }
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index dd5fef2..5f9ba41 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -114,10 +114,7 @@ static inline int valid_ipaddr4(const char *buf)
+ * nfs_follow_referral - set up mountpoint when hitting a referral on moved error
+ * @mnt_parent - mountpoint of parent directory
+ * @dentry - parent directory
+- * @fspath - fs path returned in fs_locations
+- * @mntpath - mount path to new server
+- * @hostname - hostname of new server
+- * @addr - host addr of new server
++ * @locations - array of NFSv4 server location information
+ *
+ */
+ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
+@@ -131,7 +128,8 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
+ .authflavor = NFS_SB(mnt_parent->mnt_sb)->client->cl_auth->au_flavor,
+ };
+ char *page = NULL, *page2 = NULL;
+- int loc, s, error;
++ unsigned int s;
++ int loc, error;
+
+ if (locations == NULL || locations->nlocations <= 0)
+ goto out;
+@@ -174,7 +172,10 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
+
+ s = 0;
+ while (s < location->nservers) {
+- struct sockaddr_in addr = {};
++ struct sockaddr_in addr = {
++ .sin_family = AF_INET,
++ .sin_port = htons(NFS_PORT),
++ };
+
+ if (location->servers[s].len <= 0 ||
+ valid_ipaddr4(location->servers[s].data) < 0) {
+@@ -183,10 +184,9 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
+ }
+
+ mountdata.hostname = location->servers[s].data;
+- addr.sin_addr.s_addr = in_aton(mountdata.hostname);
+- addr.sin_family = AF_INET;
+- addr.sin_port = htons(NFS_PORT);
+- mountdata.addr = &addr;
++ addr.sin_addr.s_addr = in_aton(mountdata.hostname),
++ mountdata.addr = (struct sockaddr *)&addr;
++ mountdata.addrlen = sizeof(addr);
+
+ snprintf(page, PAGE_SIZE, "%s:%s",
+ mountdata.hostname,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9e2e1c7..5c189bd 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -210,7 +210,7 @@ static void update_changeattr(struct inode *dir, struct nfs4_change_info *cinfo)
+ spin_lock(&dir->i_lock);
+ nfsi->cache_validity |= NFS_INO_INVALID_ATTR|NFS_INO_REVAL_PAGECACHE|NFS_INO_INVALID_DATA;
+ if (!cinfo->atomic || cinfo->before != nfsi->change_attr)
+- nfsi->cache_change_attribute = jiffies;
++ nfs_force_lookup_revalidate(dir);
+ nfsi->change_attr = cinfo->after;
+ spin_unlock(&dir->i_lock);
+ }
+@@ -718,19 +718,6 @@ int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state
+ return err;
+ }
+
+-static void nfs4_open_confirm_prepare(struct rpc_task *task, void *calldata)
+-{
+- struct nfs4_opendata *data = calldata;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_CONFIRM],
+- .rpc_argp = &data->c_arg,
+- .rpc_resp = &data->c_res,
+- .rpc_cred = data->owner->so_cred,
+- };
+- data->timestamp = jiffies;
+- rpc_call_setup(task, &msg, 0);
+-}
+-
+ static void nfs4_open_confirm_done(struct rpc_task *task, void *calldata)
+ {
+ struct nfs4_opendata *data = calldata;
+@@ -767,7 +754,6 @@ out_free:
+ }
+
+ static const struct rpc_call_ops nfs4_open_confirm_ops = {
+- .rpc_call_prepare = nfs4_open_confirm_prepare,
+ .rpc_call_done = nfs4_open_confirm_done,
+ .rpc_release = nfs4_open_confirm_release,
+ };
+@@ -779,12 +765,26 @@ static int _nfs4_proc_open_confirm(struct nfs4_opendata *data)
+ {
+ struct nfs_server *server = NFS_SERVER(data->dir->d_inode);
+ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_CONFIRM],
++ .rpc_argp = &data->c_arg,
++ .rpc_resp = &data->c_res,
++ .rpc_cred = data->owner->so_cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = server->client,
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_open_confirm_ops,
++ .callback_data = data,
++ .flags = RPC_TASK_ASYNC,
++ };
+ int status;
+
+ kref_get(&data->kref);
+ data->rpc_done = 0;
+ data->rpc_status = 0;
+- task = rpc_run_task(server->client, RPC_TASK_ASYNC, &nfs4_open_confirm_ops, data);
++ data->timestamp = jiffies;
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ status = nfs4_wait_for_completion_rpc_task(task);
+@@ -801,13 +801,7 @@ static void nfs4_open_prepare(struct rpc_task *task, void *calldata)
+ {
+ struct nfs4_opendata *data = calldata;
+ struct nfs4_state_owner *sp = data->owner;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN],
+- .rpc_argp = &data->o_arg,
+- .rpc_resp = &data->o_res,
+- .rpc_cred = sp->so_cred,
+- };
+-
++
+ if (nfs_wait_on_sequence(data->o_arg.seqid, task) != 0)
+ return;
+ /*
+@@ -832,11 +826,11 @@ static void nfs4_open_prepare(struct rpc_task *task, void *calldata)
+ data->o_arg.id = sp->so_owner_id.id;
+ data->o_arg.clientid = sp->so_client->cl_clientid;
+ if (data->o_arg.claim == NFS4_OPEN_CLAIM_PREVIOUS) {
+- msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_NOATTR];
++ task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_NOATTR];
+ nfs_copy_fh(&data->o_res.fh, data->o_arg.fh);
+ }
+ data->timestamp = jiffies;
+- rpc_call_setup(task, &msg, 0);
++ rpc_call_start(task);
+ return;
+ out_no_action:
+ task->tk_action = NULL;
+@@ -908,13 +902,26 @@ static int _nfs4_proc_open(struct nfs4_opendata *data)
+ struct nfs_openargs *o_arg = &data->o_arg;
+ struct nfs_openres *o_res = &data->o_res;
+ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN],
++ .rpc_argp = o_arg,
++ .rpc_resp = o_res,
++ .rpc_cred = data->owner->so_cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = server->client,
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_open_ops,
++ .callback_data = data,
++ .flags = RPC_TASK_ASYNC,
++ };
+ int status;
+
+ kref_get(&data->kref);
+ data->rpc_done = 0;
+ data->rpc_status = 0;
+ data->cancelled = 0;
+- task = rpc_run_task(server->client, RPC_TASK_ASYNC, &nfs4_open_ops, data);
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ status = nfs4_wait_for_completion_rpc_task(task);
+@@ -1244,12 +1251,6 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ {
+ struct nfs4_closedata *calldata = data;
+ struct nfs4_state *state = calldata->state;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_CLOSE],
+- .rpc_argp = &calldata->arg,
+- .rpc_resp = &calldata->res,
+- .rpc_cred = state->owner->so_cred,
+- };
+ int clear_rd, clear_wr, clear_rdwr;
+
+ if (nfs_wait_on_sequence(calldata->arg.seqid, task) != 0)
+@@ -1276,14 +1277,14 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ }
+ nfs_fattr_init(calldata->res.fattr);
+ if (test_bit(NFS_O_RDONLY_STATE, &state->flags) != 0) {
+- msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE];
++ task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE];
+ calldata->arg.open_flags = FMODE_READ;
+ } else if (test_bit(NFS_O_WRONLY_STATE, &state->flags) != 0) {
+- msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE];
++ task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE];
+ calldata->arg.open_flags = FMODE_WRITE;
+ }
+ calldata->timestamp = jiffies;
+- rpc_call_setup(task, &msg, 0);
++ rpc_call_start(task);
+ }
+
+ static const struct rpc_call_ops nfs4_close_ops = {
+@@ -1309,6 +1310,16 @@ int nfs4_do_close(struct path *path, struct nfs4_state *state, int wait)
+ struct nfs4_closedata *calldata;
+ struct nfs4_state_owner *sp = state->owner;
+ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_CLOSE],
++ .rpc_cred = state->owner->so_cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = server->client,
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_close_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+ int status = -ENOMEM;
+
+ calldata = kmalloc(sizeof(*calldata), GFP_KERNEL);
+@@ -1328,7 +1339,10 @@ int nfs4_do_close(struct path *path, struct nfs4_state *state, int wait)
+ calldata->path.mnt = mntget(path->mnt);
+ calldata->path.dentry = dget(path->dentry);
+
+- task = rpc_run_task(server->client, RPC_TASK_ASYNC, &nfs4_close_ops, calldata);
++ msg.rpc_argp = &calldata->arg,
++ msg.rpc_resp = &calldata->res,
++ task_setup_data.callback_data = calldata;
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ status = 0;
+@@ -2414,18 +2428,10 @@ static int nfs4_read_done(struct rpc_task *task, struct nfs_read_data *data)
+ return 0;
+ }
+
+-static void nfs4_proc_read_setup(struct nfs_read_data *data)
++static void nfs4_proc_read_setup(struct nfs_read_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+ data->timestamp = jiffies;
+-
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ];
+ }
+
+ static int nfs4_write_done(struct rpc_task *task, struct nfs_write_data *data)
+@@ -2443,33 +2449,15 @@ static int nfs4_write_done(struct rpc_task *task, struct nfs_write_data *data)
+ return 0;
+ }
+
+-static void nfs4_proc_write_setup(struct nfs_write_data *data, int how)
++static void nfs4_proc_write_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_WRITE],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+- struct inode *inode = data->inode;
+- struct nfs_server *server = NFS_SERVER(inode);
+- int stable;
+-
+- if (how & FLUSH_STABLE) {
+- if (!NFS_I(inode)->ncommit)
+- stable = NFS_FILE_SYNC;
+- else
+- stable = NFS_DATA_SYNC;
+- } else
+- stable = NFS_UNSTABLE;
+- data->args.stable = stable;
++ struct nfs_server *server = NFS_SERVER(data->inode);
++
+ data->args.bitmask = server->attr_bitmask;
+ data->res.server = server;
+-
+ data->timestamp = jiffies;
+
+- /* Finalize the task. */
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_WRITE];
+ }
+
+ static int nfs4_commit_done(struct rpc_task *task, struct nfs_write_data *data)
+@@ -2484,20 +2472,13 @@ static int nfs4_commit_done(struct rpc_task *task, struct nfs_write_data *data)
+ return 0;
+ }
+
+-static void nfs4_proc_commit_setup(struct nfs_write_data *data, int how)
++static void nfs4_proc_commit_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_COMMIT],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+ struct nfs_server *server = NFS_SERVER(data->inode);
+
+ data->args.bitmask = server->attr_bitmask;
+ data->res.server = server;
+-
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_COMMIT];
+ }
+
+ /*
+@@ -2910,14 +2891,20 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program, unsigned short po
+
+ for(;;) {
+ setclientid.sc_name_len = scnprintf(setclientid.sc_name,
+- sizeof(setclientid.sc_name), "%s/%u.%u.%u.%u %s %u",
+- clp->cl_ipaddr, NIPQUAD(clp->cl_addr.sin_addr),
++ sizeof(setclientid.sc_name), "%s/%s %s %s %u",
++ clp->cl_ipaddr,
++ rpc_peeraddr2str(clp->cl_rpcclient,
++ RPC_DISPLAY_ADDR),
++ rpc_peeraddr2str(clp->cl_rpcclient,
++ RPC_DISPLAY_PROTO),
+ cred->cr_ops->cr_name,
+ clp->cl_id_uniquifier);
+ setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
+- sizeof(setclientid.sc_netid), "tcp");
++ sizeof(setclientid.sc_netid),
++ rpc_peeraddr2str(clp->cl_rpcclient,
++ RPC_DISPLAY_NETID));
+ setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
+- sizeof(setclientid.sc_uaddr), "%s.%d.%d",
++ sizeof(setclientid.sc_uaddr), "%s.%u.%u",
+ clp->cl_ipaddr, port >> 8, port & 255);
+
+ status = rpc_call_sync(clp->cl_rpcclient, &msg, 0);
+@@ -2981,25 +2968,11 @@ struct nfs4_delegreturndata {
+ struct nfs4_delegreturnres res;
+ struct nfs_fh fh;
+ nfs4_stateid stateid;
+- struct rpc_cred *cred;
+ unsigned long timestamp;
+ struct nfs_fattr fattr;
+ int rpc_status;
+ };
+
+-static void nfs4_delegreturn_prepare(struct rpc_task *task, void *calldata)
+-{
+- struct nfs4_delegreturndata *data = calldata;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_DELEGRETURN],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+- nfs_fattr_init(data->res.fattr);
+- rpc_call_setup(task, &msg, 0);
+-}
+-
+ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata)
+ {
+ struct nfs4_delegreturndata *data = calldata;
+@@ -3010,24 +2983,30 @@ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata)
+
+ static void nfs4_delegreturn_release(void *calldata)
+ {
+- struct nfs4_delegreturndata *data = calldata;
+-
+- put_rpccred(data->cred);
+ kfree(calldata);
+ }
+
+ static const struct rpc_call_ops nfs4_delegreturn_ops = {
+- .rpc_call_prepare = nfs4_delegreturn_prepare,
+ .rpc_call_done = nfs4_delegreturn_done,
+ .rpc_release = nfs4_delegreturn_release,
+ };
+
+-static int _nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid)
++static int _nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync)
+ {
+ struct nfs4_delegreturndata *data;
+ struct nfs_server *server = NFS_SERVER(inode);
+ struct rpc_task *task;
+- int status;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_DELEGRETURN],
++ .rpc_cred = cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = server->client,
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_delegreturn_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
++ int status = 0;
+
+ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (data == NULL)
+@@ -3039,30 +3018,37 @@ static int _nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, co
+ memcpy(&data->stateid, stateid, sizeof(data->stateid));
+ data->res.fattr = &data->fattr;
+ data->res.server = server;
+- data->cred = get_rpccred(cred);
++ nfs_fattr_init(data->res.fattr);
+ data->timestamp = jiffies;
+ data->rpc_status = 0;
+
+- task = rpc_run_task(NFS_CLIENT(inode), RPC_TASK_ASYNC, &nfs4_delegreturn_ops, data);
++ task_setup_data.callback_data = data;
++ msg.rpc_argp = &data->args,
++ msg.rpc_resp = &data->res,
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
++ if (!issync)
++ goto out;
+ status = nfs4_wait_for_completion_rpc_task(task);
+- if (status == 0) {
+- status = data->rpc_status;
+- if (status == 0)
+- nfs_refresh_inode(inode, &data->fattr);
+- }
++ if (status != 0)
++ goto out;
++ status = data->rpc_status;
++ if (status != 0)
++ goto out;
++ nfs_refresh_inode(inode, &data->fattr);
++out:
+ rpc_put_task(task);
+ return status;
+ }
+
+-int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid)
++int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync)
+ {
+ struct nfs_server *server = NFS_SERVER(inode);
+ struct nfs4_exception exception = { };
+ int err;
+ do {
+- err = _nfs4_proc_delegreturn(inode, cred, stateid);
++ err = _nfs4_proc_delegreturn(inode, cred, stateid, issync);
+ switch (err) {
+ case -NFS4ERR_STALE_STATEID:
+ case -NFS4ERR_EXPIRED:
+@@ -3230,12 +3216,6 @@ static void nfs4_locku_done(struct rpc_task *task, void *data)
+ static void nfs4_locku_prepare(struct rpc_task *task, void *data)
+ {
+ struct nfs4_unlockdata *calldata = data;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOCKU],
+- .rpc_argp = &calldata->arg,
+- .rpc_resp = &calldata->res,
+- .rpc_cred = calldata->lsp->ls_state->owner->so_cred,
+- };
+
+ if (nfs_wait_on_sequence(calldata->arg.seqid, task) != 0)
+ return;
+@@ -3245,7 +3225,7 @@ static void nfs4_locku_prepare(struct rpc_task *task, void *data)
+ return;
+ }
+ calldata->timestamp = jiffies;
+- rpc_call_setup(task, &msg, 0);
++ rpc_call_start(task);
+ }
+
+ static const struct rpc_call_ops nfs4_locku_ops = {
+@@ -3260,6 +3240,16 @@ static struct rpc_task *nfs4_do_unlck(struct file_lock *fl,
+ struct nfs_seqid *seqid)
+ {
+ struct nfs4_unlockdata *data;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOCKU],
++ .rpc_cred = ctx->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(lsp->ls_state->inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_locku_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+
+ /* Ensure this is an unlock - when canceling a lock, the
+ * canceled lock is passed in, and it won't be an unlock.
+@@ -3272,7 +3262,10 @@ static struct rpc_task *nfs4_do_unlck(struct file_lock *fl,
+ return ERR_PTR(-ENOMEM);
+ }
+
+- return rpc_run_task(NFS_CLIENT(lsp->ls_state->inode), RPC_TASK_ASYNC, &nfs4_locku_ops, data);
++ msg.rpc_argp = &data->arg,
++ msg.rpc_resp = &data->res,
++ task_setup_data.callback_data = data;
++ return rpc_run_task(&task_setup_data);
+ }
+
+ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *request)
+@@ -3331,15 +3324,12 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
+
+ p->arg.fh = NFS_FH(inode);
+ p->arg.fl = &p->fl;
+- if (!(lsp->ls_seqid.flags & NFS_SEQID_CONFIRMED)) {
+- p->arg.open_seqid = nfs_alloc_seqid(&lsp->ls_state->owner->so_seqid);
+- if (p->arg.open_seqid == NULL)
+- goto out_free;
+-
+- }
++ p->arg.open_seqid = nfs_alloc_seqid(&lsp->ls_state->owner->so_seqid);
++ if (p->arg.open_seqid == NULL)
++ goto out_free;
+ p->arg.lock_seqid = nfs_alloc_seqid(&lsp->ls_seqid);
+ if (p->arg.lock_seqid == NULL)
+- goto out_free;
++ goto out_free_seqid;
+ p->arg.lock_stateid = &lsp->ls_stateid;
+ p->arg.lock_owner.clientid = server->nfs_client->cl_clientid;
+ p->arg.lock_owner.id = lsp->ls_id.id;
+@@ -3348,9 +3338,9 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
+ p->ctx = get_nfs_open_context(ctx);
+ memcpy(&p->fl, fl, sizeof(p->fl));
+ return p;
++out_free_seqid:
++ nfs_free_seqid(p->arg.open_seqid);
+ out_free:
+- if (p->arg.open_seqid != NULL)
+- nfs_free_seqid(p->arg.open_seqid);
+ kfree(p);
+ return NULL;
+ }
+@@ -3359,31 +3349,20 @@ static void nfs4_lock_prepare(struct rpc_task *task, void *calldata)
+ {
+ struct nfs4_lockdata *data = calldata;
+ struct nfs4_state *state = data->lsp->ls_state;
+- struct nfs4_state_owner *sp = state->owner;
+- struct rpc_message msg = {
+- .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOCK],
+- .rpc_argp = &data->arg,
+- .rpc_resp = &data->res,
+- .rpc_cred = sp->so_cred,
+- };
+
+ dprintk("%s: begin!\n", __FUNCTION__);
++ if (nfs_wait_on_sequence(data->arg.lock_seqid, task) != 0)
++ return;
+ /* Do we need to do an open_to_lock_owner? */
+ if (!(data->arg.lock_seqid->sequence->flags & NFS_SEQID_CONFIRMED)) {
+ if (nfs_wait_on_sequence(data->arg.open_seqid, task) != 0)
+ return;
+ data->arg.open_stateid = &state->stateid;
+ data->arg.new_lock_owner = 1;
+- /* Retest in case we raced... */
+- if (!(data->arg.lock_seqid->sequence->flags & NFS_SEQID_CONFIRMED))
+- goto do_rpc;
+- }
+- if (nfs_wait_on_sequence(data->arg.lock_seqid, task) != 0)
+- return;
+- data->arg.new_lock_owner = 0;
+-do_rpc:
++ } else
++ data->arg.new_lock_owner = 0;
+ data->timestamp = jiffies;
+- rpc_call_setup(task, &msg, 0);
++ rpc_call_start(task);
+ dprintk("%s: done!, ret = %d\n", __FUNCTION__, data->rpc_status);
+ }
+
+@@ -3419,6 +3398,7 @@ static void nfs4_lock_release(void *calldata)
+ struct nfs4_lockdata *data = calldata;
+
+ dprintk("%s: begin!\n", __FUNCTION__);
++ nfs_free_seqid(data->arg.open_seqid);
+ if (data->cancelled != 0) {
+ struct rpc_task *task;
+ task = nfs4_do_unlck(&data->fl, data->ctx, data->lsp,
+@@ -3428,8 +3408,6 @@ static void nfs4_lock_release(void *calldata)
+ dprintk("%s: cancelling lock!\n", __FUNCTION__);
+ } else
+ nfs_free_seqid(data->arg.lock_seqid);
+- if (data->arg.open_seqid != NULL)
+- nfs_free_seqid(data->arg.open_seqid);
+ nfs4_put_lock_state(data->lsp);
+ put_nfs_open_context(data->ctx);
+ kfree(data);
+@@ -3446,6 +3424,16 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
+ {
+ struct nfs4_lockdata *data;
+ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOCK],
++ .rpc_cred = state->owner->so_cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(state->inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs4_lock_ops,
++ .flags = RPC_TASK_ASYNC,
++ };
+ int ret;
+
+ dprintk("%s: begin!\n", __FUNCTION__);
+@@ -3457,8 +3445,10 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
+ data->arg.block = 1;
+ if (reclaim != 0)
+ data->arg.reclaim = 1;
+- task = rpc_run_task(NFS_CLIENT(state->inode), RPC_TASK_ASYNC,
+- &nfs4_lock_ops, data);
++ msg.rpc_argp = &data->arg,
++ msg.rpc_resp = &data->res,
++ task_setup_data.callback_data = data;
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ ret = nfs4_wait_for_completion_rpc_task(task);
+@@ -3631,10 +3621,6 @@ int nfs4_setxattr(struct dentry *dentry, const char *key, const void *buf,
+ if (strcmp(key, XATTR_NAME_NFSV4_ACL) != 0)
+ return -EOPNOTSUPP;
+
+- if (!S_ISREG(inode->i_mode) &&
+- (!S_ISDIR(inode->i_mode) || inode->i_mode & S_ISVTX))
+- return -EPERM;
+-
+ return nfs4_proc_set_acl(inode, buf, buflen);
+ }
+
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 5a39c6f..f9c7432 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -644,27 +644,26 @@ void nfs4_copy_stateid(nfs4_stateid *dst, struct nfs4_state *state, fl_owner_t f
+
+ struct nfs_seqid *nfs_alloc_seqid(struct nfs_seqid_counter *counter)
+ {
+- struct rpc_sequence *sequence = counter->sequence;
+ struct nfs_seqid *new;
+
+ new = kmalloc(sizeof(*new), GFP_KERNEL);
+ if (new != NULL) {
+ new->sequence = counter;
+- spin_lock(&sequence->lock);
+- list_add_tail(&new->list, &sequence->list);
+- spin_unlock(&sequence->lock);
++ INIT_LIST_HEAD(&new->list);
+ }
+ return new;
+ }
+
+ void nfs_free_seqid(struct nfs_seqid *seqid)
+ {
+- struct rpc_sequence *sequence = seqid->sequence->sequence;
++ if (!list_empty(&seqid->list)) {
++ struct rpc_sequence *sequence = seqid->sequence->sequence;
+
+- spin_lock(&sequence->lock);
+- list_del(&seqid->list);
+- spin_unlock(&sequence->lock);
+- rpc_wake_up(&sequence->wait);
++ spin_lock(&sequence->lock);
++ list_del(&seqid->list);
++ spin_unlock(&sequence->lock);
++ rpc_wake_up(&sequence->wait);
++ }
+ kfree(seqid);
+ }
+
+@@ -675,6 +674,7 @@ void nfs_free_seqid(struct nfs_seqid *seqid)
+ */
+ static void nfs_increment_seqid(int status, struct nfs_seqid *seqid)
+ {
++ BUG_ON(list_first_entry(&seqid->sequence->sequence->list, struct nfs_seqid, list) != seqid);
+ switch (status) {
+ case 0:
+ break;
+@@ -726,15 +726,15 @@ int nfs_wait_on_sequence(struct nfs_seqid *seqid, struct rpc_task *task)
+ struct rpc_sequence *sequence = seqid->sequence->sequence;
+ int status = 0;
+
+- if (sequence->list.next == &seqid->list)
+- goto out;
+ spin_lock(&sequence->lock);
+- if (sequence->list.next != &seqid->list) {
+- rpc_sleep_on(&sequence->wait, task, NULL, NULL);
+- status = -EAGAIN;
+- }
++ if (list_empty(&seqid->list))
++ list_add_tail(&seqid->list, &sequence->list);
++ if (list_first_entry(&sequence->list, struct nfs_seqid, list) == seqid)
++ goto unlock;
++ rpc_sleep_on(&sequence->wait, task, NULL, NULL);
++ status = -EAGAIN;
++unlock:
+ spin_unlock(&sequence->lock);
+-out:
+ return status;
+ }
+
+@@ -758,8 +758,9 @@ static void nfs4_recover_state(struct nfs_client *clp)
+
+ __module_get(THIS_MODULE);
+ atomic_inc(&clp->cl_count);
+- task = kthread_run(reclaimer, clp, "%u.%u.%u.%u-reclaim",
+- NIPQUAD(clp->cl_addr.sin_addr));
++ task = kthread_run(reclaimer, clp, "%s-reclaim",
++ rpc_peeraddr2str(clp->cl_rpcclient,
++ RPC_DISPLAY_ADDR));
+ if (!IS_ERR(task))
+ return;
+ nfs4_clear_recover_bit(clp);
+@@ -970,8 +971,8 @@ out:
+ module_put_and_exit(0);
+ return 0;
+ out_error:
+- printk(KERN_WARNING "Error: state recovery failed on NFSv4 server %u.%u.%u.%u with error %d\n",
+- NIPQUAD(clp->cl_addr.sin_addr), -status);
++ printk(KERN_WARNING "Error: state recovery failed on NFSv4 server %s"
++ " with error %d\n", clp->cl_hostname, -status);
+ set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
+ goto out;
+ }
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 51dd380..db1ed9c 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -116,10 +116,12 @@ static int nfs4_stat_to_errno(int);
+ #define decode_renew_maxsz (op_decode_hdr_maxsz)
+ #define encode_setclientid_maxsz \
+ (op_encode_hdr_maxsz + \
+- 4 /*server->ip_addr*/ + \
+- 1 /*Netid*/ + \
+- 6 /*uaddr*/ + \
+- 6 + (NFS4_VERIFIER_SIZE >> 2))
++ XDR_QUADLEN(NFS4_VERIFIER_SIZE) + \
++ XDR_QUADLEN(NFS4_SETCLIENTID_NAMELEN) + \
++ 1 /* sc_prog */ + \
++ XDR_QUADLEN(RPCBIND_MAXNETIDLEN) + \
++ XDR_QUADLEN(RPCBIND_MAXUADDRLEN) + \
++ 1) /* sc_cb_ident */
+ #define decode_setclientid_maxsz \
+ (op_decode_hdr_maxsz + \
+ 2 + \
+@@ -2515,14 +2517,12 @@ static int decode_attr_files_total(struct xdr_stream *xdr, uint32_t *bitmap, uin
+
+ static int decode_pathname(struct xdr_stream *xdr, struct nfs4_pathname *path)
+ {
+- int n;
++ u32 n;
+ __be32 *p;
+ int status = 0;
+
+ READ_BUF(4);
+ READ32(n);
+- if (n < 0)
+- goto out_eio;
+ if (n == 0)
+ goto root_path;
+ dprintk("path ");
+@@ -2579,13 +2579,11 @@ static int decode_attr_fs_locations(struct xdr_stream *xdr, uint32_t *bitmap, st
+ goto out_eio;
+ res->nlocations = 0;
+ while (res->nlocations < n) {
+- int m;
++ u32 m;
+ struct nfs4_fs_location *loc = &res->locations[res->nlocations];
+
+ READ_BUF(4);
+ READ32(m);
+- if (m <= 0)
+- goto out_eio;
+
+ loc->nservers = 0;
+ dprintk("%s: servers ", __FUNCTION__);
+@@ -2598,8 +2596,12 @@ static int decode_attr_fs_locations(struct xdr_stream *xdr, uint32_t *bitmap, st
+ if (loc->nservers < NFS4_FS_LOCATION_MAXSERVERS)
+ loc->nservers++;
+ else {
+- int i;
+- dprintk("%s: using first %d of %d servers returned for location %d\n", __FUNCTION__, NFS4_FS_LOCATION_MAXSERVERS, m, res->nlocations);
++ unsigned int i;
++ dprintk("%s: using first %u of %u servers "
++ "returned for location %u\n",
++ __FUNCTION__,
++ NFS4_FS_LOCATION_MAXSERVERS,
++ m, res->nlocations);
+ for (i = loc->nservers; i < m; i++) {
+ unsigned int len;
+ char *data;
+@@ -3476,10 +3478,11 @@ static int decode_readdir(struct xdr_stream *xdr, struct rpc_rqst *req, struct n
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct page *page = *rcvbuf->pages;
+ struct kvec *iov = rcvbuf->head;
+- unsigned int nr, pglen = rcvbuf->page_len;
++ size_t hdrlen;
++ u32 recvd, pglen = rcvbuf->page_len;
+ __be32 *end, *entry, *p, *kaddr;
+- uint32_t len, attrlen, xlen;
+- int hdrlen, recvd, status;
++ unsigned int nr;
++ int status;
+
+ status = decode_op_hdr(xdr, OP_READDIR);
+ if (status)
+@@ -3503,6 +3506,7 @@ static int decode_readdir(struct xdr_stream *xdr, struct rpc_rqst *req, struct n
+ end = p + ((pglen + readdir->pgbase) >> 2);
+ entry = p;
+ for (nr = 0; *p++; nr++) {
++ u32 len, attrlen, xlen;
+ if (end - p < 3)
+ goto short_pkt;
+ dprintk("cookie = %Lu, ", *((unsigned long long *)p));
+@@ -3551,7 +3555,8 @@ static int decode_readlink(struct xdr_stream *xdr, struct rpc_rqst *req)
+ {
+ struct xdr_buf *rcvbuf = &req->rq_rcv_buf;
+ struct kvec *iov = rcvbuf->head;
+- int hdrlen, len, recvd;
++ size_t hdrlen;
++ u32 len, recvd;
+ __be32 *p;
+ char *kaddr;
+ int status;
+@@ -3646,7 +3651,8 @@ static int decode_getacl(struct xdr_stream *xdr, struct rpc_rqst *req,
+ if (unlikely(bitmap[0] & (FATTR4_WORD0_ACL - 1U)))
+ return -EIO;
+ if (likely(bitmap[0] & FATTR4_WORD0_ACL)) {
+- int hdrlen, recvd;
++ size_t hdrlen;
++ u32 recvd;
+
+ /* We ignore &savep and don't do consistency checks on
+ * the attr length. Let userspace figure it out.... */
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 345bb9b..3b3dbb9 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -111,13 +111,14 @@ void nfs_unlock_request(struct nfs_page *req)
+ * nfs_set_page_tag_locked - Tag a request as locked
+ * @req:
+ */
+-static int nfs_set_page_tag_locked(struct nfs_page *req)
++int nfs_set_page_tag_locked(struct nfs_page *req)
+ {
+ struct nfs_inode *nfsi = NFS_I(req->wb_context->path.dentry->d_inode);
+
+- if (!nfs_lock_request(req))
++ if (!nfs_lock_request_dontget(req))
+ return 0;
+- radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
++ if (req->wb_page != NULL)
++ radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
+ return 1;
+ }
+
+@@ -132,9 +133,10 @@ void nfs_clear_page_tag_locked(struct nfs_page *req)
+ if (req->wb_page != NULL) {
+ spin_lock(&inode->i_lock);
+ radix_tree_tag_clear(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
++ nfs_unlock_request(req);
+ spin_unlock(&inode->i_lock);
+- }
+- nfs_unlock_request(req);
++ } else
++ nfs_unlock_request(req);
+ }
+
+ /**
+@@ -421,6 +423,7 @@ int nfs_scan_list(struct nfs_inode *nfsi,
+ goto out;
+ idx_start = req->wb_index + 1;
+ if (nfs_set_page_tag_locked(req)) {
++ kref_get(&req->wb_kref);
+ nfs_list_remove_request(req);
+ radix_tree_tag_clear(&nfsi->nfs_page_tree,
+ req->wb_index, tag);
+diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
+index 4f80d88..5ccf7fa 100644
+--- a/fs/nfs/proc.c
++++ b/fs/nfs/proc.c
+@@ -565,16 +565,9 @@ static int nfs_read_done(struct rpc_task *task, struct nfs_read_data *data)
+ return 0;
+ }
+
+-static void nfs_proc_read_setup(struct nfs_read_data *data)
++static void nfs_proc_read_setup(struct nfs_read_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs_procedures[NFSPROC_READ],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs_procedures[NFSPROC_READ];
+ }
+
+ static int nfs_write_done(struct rpc_task *task, struct nfs_write_data *data)
+@@ -584,24 +577,15 @@ static int nfs_write_done(struct rpc_task *task, struct nfs_write_data *data)
+ return 0;
+ }
+
+-static void nfs_proc_write_setup(struct nfs_write_data *data, int how)
++static void nfs_proc_write_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+- struct rpc_message msg = {
+- .rpc_proc = &nfs_procedures[NFSPROC_WRITE],
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+ /* Note: NFSv2 ignores @stable and always uses NFS_FILE_SYNC */
+ data->args.stable = NFS_FILE_SYNC;
+-
+- /* Finalize the task. */
+- rpc_call_setup(&data->task, &msg, 0);
++ msg->rpc_proc = &nfs_procedures[NFSPROC_WRITE];
+ }
+
+ static void
+-nfs_proc_commit_setup(struct nfs_write_data *data, int how)
++nfs_proc_commit_setup(struct nfs_write_data *data, struct rpc_message *msg)
+ {
+ BUG();
+ }
+@@ -609,7 +593,9 @@ nfs_proc_commit_setup(struct nfs_write_data *data, int how)
+ static int
+ nfs_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
+ {
+- return nlmclnt_proc(filp->f_path.dentry->d_inode, cmd, fl);
++ struct inode *inode = filp->f_path.dentry->d_inode;
+
-+ return ret;
++ return nlmclnt_proc(NFS_SERVER(inode)->nlm_host, cmd, fl);
+ }
+
+
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 4587a86..8fd6dfb 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -160,12 +160,26 @@ static void nfs_read_rpcsetup(struct nfs_page *req, struct nfs_read_data *data,
+ const struct rpc_call_ops *call_ops,
+ unsigned int count, unsigned int offset)
+ {
+- struct inode *inode;
+- int flags;
++ struct inode *inode = req->wb_context->path.dentry->d_inode;
++ int swap_flags = IS_SWAPFILE(inode) ? NFS_RPC_SWAPFLAGS : 0;
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_argp = &data->args,
++ .rpc_resp = &data->res,
++ .rpc_cred = req->wb_context->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .task = &data->task,
++ .rpc_client = NFS_CLIENT(inode),
++ .rpc_message = &msg,
++ .callback_ops = call_ops,
++ .callback_data = data,
++ .flags = RPC_TASK_ASYNC | swap_flags,
++ };
+
+ data->req = req;
+- data->inode = inode = req->wb_context->path.dentry->d_inode;
+- data->cred = req->wb_context->cred;
++ data->inode = inode;
++ data->cred = msg.rpc_cred;
+
+ data->args.fh = NFS_FH(inode);
+ data->args.offset = req_offset(req) + offset;
+@@ -180,11 +194,7 @@ static void nfs_read_rpcsetup(struct nfs_page *req, struct nfs_read_data *data,
+ nfs_fattr_init(&data->fattr);
+
+ /* Set up the initial task struct. */
+- flags = RPC_TASK_ASYNC | (IS_SWAPFILE(inode)? NFS_RPC_SWAPFLAGS : 0);
+- rpc_init_task(&data->task, NFS_CLIENT(inode), flags, call_ops, data);
+- NFS_PROTO(inode)->read_setup(data);
+-
+- data->task.tk_cookie = (unsigned long)inode;
++ NFS_PROTO(inode)->read_setup(data, &msg);
+
+ dprintk("NFS: %5u initiated read call (req %s/%Ld, %u bytes @ offset %Lu)\n",
+ data->task.tk_pid,
+@@ -192,6 +202,10 @@ static void nfs_read_rpcsetup(struct nfs_page *req, struct nfs_read_data *data,
+ (long long)NFS_FILEID(inode),
+ count,
+ (unsigned long long)data->args.offset);
++
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+ }
+
+ static void
+@@ -208,19 +222,6 @@ nfs_async_read_error(struct list_head *head)
+ }
+
+ /*
+- * Start an async read operation
+- */
+-static void nfs_execute_read(struct nfs_read_data *data)
+-{
+- struct rpc_clnt *clnt = NFS_CLIENT(data->inode);
+- sigset_t oldset;
+-
+- rpc_clnt_sigmask(clnt, &oldset);
+- rpc_execute(&data->task);
+- rpc_clnt_sigunmask(clnt, &oldset);
+-}
+-
+-/*
+ * Generate multiple requests to fill a single page.
+ *
+ * We optimize to reduce the number of read operations on the wire. If we
+@@ -274,7 +275,6 @@ static int nfs_pagein_multi(struct inode *inode, struct list_head *head, unsigne
+ rsize, offset);
+ offset += rsize;
+ nbytes -= rsize;
+- nfs_execute_read(data);
+ } while (nbytes != 0);
+
+ return 0;
+@@ -312,8 +312,6 @@ static int nfs_pagein_one(struct inode *inode, struct list_head *head, unsigned
+ req = nfs_list_entry(data->pages.next);
+
+ nfs_read_rpcsetup(req, data, &nfs_read_full_ops, count, 0);
+-
+- nfs_execute_read(data);
+ return 0;
+ out_bad:
+ nfs_async_read_error(head);
+@@ -338,7 +336,7 @@ int nfs_readpage_result(struct rpc_task *task, struct nfs_read_data *data)
+ nfs_add_stats(data->inode, NFSIOS_SERVERREADBYTES, data->res.count);
+
+ if (task->tk_status == -ESTALE) {
+- set_bit(NFS_INO_STALE, &NFS_FLAGS(data->inode));
++ set_bit(NFS_INO_STALE, &NFS_I(data->inode)->flags);
+ nfs_mark_for_revalidate(data->inode);
+ }
+ return 0;
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 0b0c72a..22c49c0 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -45,6 +45,8 @@
+ #include <linux/nfs_idmap.h>
+ #include <linux/vfs.h>
+ #include <linux/inet.h>
++#include <linux/in6.h>
++#include <net/ipv6.h>
+ #include <linux/nfs_xdr.h>
+ #include <linux/magic.h>
+ #include <linux/parser.h>
+@@ -83,11 +85,11 @@ enum {
+ Opt_actimeo,
+ Opt_namelen,
+ Opt_mountport,
+- Opt_mountprog, Opt_mountvers,
+- Opt_nfsprog, Opt_nfsvers,
++ Opt_mountvers,
++ Opt_nfsvers,
+
+ /* Mount options that take string arguments */
+- Opt_sec, Opt_proto, Opt_mountproto,
++ Opt_sec, Opt_proto, Opt_mountproto, Opt_mounthost,
+ Opt_addr, Opt_mountaddr, Opt_clientaddr,
+
+ /* Mount options that are ignored */
+@@ -137,9 +139,7 @@ static match_table_t nfs_mount_option_tokens = {
+ { Opt_userspace, "retry=%u" },
+ { Opt_namelen, "namlen=%u" },
+ { Opt_mountport, "mountport=%u" },
+- { Opt_mountprog, "mountprog=%u" },
+ { Opt_mountvers, "mountvers=%u" },
+- { Opt_nfsprog, "nfsprog=%u" },
+ { Opt_nfsvers, "nfsvers=%u" },
+ { Opt_nfsvers, "vers=%u" },
+
+@@ -148,7 +148,7 @@ static match_table_t nfs_mount_option_tokens = {
+ { Opt_mountproto, "mountproto=%s" },
+ { Opt_addr, "addr=%s" },
+ { Opt_clientaddr, "clientaddr=%s" },
+- { Opt_userspace, "mounthost=%s" },
++ { Opt_mounthost, "mounthost=%s" },
+ { Opt_mountaddr, "mountaddr=%s" },
+
+ { Opt_err, NULL }
+@@ -202,6 +202,7 @@ static int nfs_get_sb(struct file_system_type *, int, const char *, void *, stru
+ static int nfs_xdev_get_sb(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *raw_data, struct vfsmount *mnt);
+ static void nfs_kill_super(struct super_block *);
++static void nfs_put_super(struct super_block *);
+
+ static struct file_system_type nfs_fs_type = {
+ .owner = THIS_MODULE,
+@@ -223,6 +224,7 @@ static const struct super_operations nfs_sops = {
+ .alloc_inode = nfs_alloc_inode,
+ .destroy_inode = nfs_destroy_inode,
+ .write_inode = nfs_write_inode,
++ .put_super = nfs_put_super,
+ .statfs = nfs_statfs,
+ .clear_inode = nfs_clear_inode,
+ .umount_begin = nfs_umount_begin,
+@@ -325,6 +327,28 @@ void __exit unregister_nfs_fs(void)
+ unregister_filesystem(&nfs_fs_type);
+ }
+
++void nfs_sb_active(struct nfs_server *server)
++{
++ atomic_inc(&server->active);
+}
+
-+/*
-+ * Must be called under group lock!
-+ */
-+static void ext4_mb_use_best_found(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
++void nfs_sb_deactive(struct nfs_server *server)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+ int ret;
-+
-+ BUG_ON(ac->ac_b_ex.fe_group != e4b->bd_group);
-+ BUG_ON(ac->ac_status == AC_STATUS_FOUND);
-+
-+ ac->ac_b_ex.fe_len = min(ac->ac_b_ex.fe_len, ac->ac_g_ex.fe_len);
-+ ac->ac_b_ex.fe_logical = ac->ac_g_ex.fe_logical;
-+ ret = mb_mark_used(e4b, &ac->ac_b_ex);
-+
-+ /* preallocation can change ac_b_ex, thus we store actually
-+ * allocated blocks for history */
-+ ac->ac_f_ex = ac->ac_b_ex;
-+
-+ ac->ac_status = AC_STATUS_FOUND;
-+ ac->ac_tail = ret & 0xffff;
-+ ac->ac_buddy = ret >> 16;
-+
-+ /* XXXXXXX: SUCH A HORRIBLE **CK */
-+ /*FIXME!! Why ? */
-+ ac->ac_bitmap_page = e4b->bd_bitmap_page;
-+ get_page(ac->ac_bitmap_page);
-+ ac->ac_buddy_page = e4b->bd_buddy_page;
-+ get_page(ac->ac_buddy_page);
-+
-+ /* store last allocated for subsequent stream allocation */
-+ if ((ac->ac_flags & EXT4_MB_HINT_DATA)) {
-+ spin_lock(&sbi->s_md_lock);
-+ sbi->s_mb_last_group = ac->ac_f_ex.fe_group;
-+ sbi->s_mb_last_start = ac->ac_f_ex.fe_start;
-+ spin_unlock(&sbi->s_md_lock);
-+ }
++ if (atomic_dec_and_test(&server->active))
++ wake_up(&server->active_wq);
+}
+
-+/*
-+ * regular allocator, for general purposes allocation
-+ */
-+
-+static void ext4_mb_check_limits(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b,
-+ int finish_group)
++static void nfs_put_super(struct super_block *sb)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+ struct ext4_free_extent *bex = &ac->ac_b_ex;
-+ struct ext4_free_extent *gex = &ac->ac_g_ex;
-+ struct ext4_free_extent ex;
-+ int max;
-+
-+ /*
-+ * We don't want to scan for a whole year
-+ */
-+ if (ac->ac_found > sbi->s_mb_max_to_scan &&
-+ !(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
-+ ac->ac_status = AC_STATUS_BREAK;
-+ return;
-+ }
-+
++ struct nfs_server *server = NFS_SB(sb);
+ /*
-+ * Haven't found good chunk so far, let's continue
++ * Make sure there are no outstanding ops to this server.
++ * If so, wait for them to finish before allowing the
++ * unmount to continue.
+ */
-+ if (bex->fe_len < gex->fe_len)
-+ return;
++ wait_event(server->active_wq, atomic_read(&server->active) == 0);
++}
+
-+ if ((finish_group || ac->ac_found > sbi->s_mb_min_to_scan)
-+ && bex->fe_group == e4b->bd_group) {
-+ /* recheck chunk's availability - we don't know
-+ * when it was found (within this lock-unlock
-+ * period or not) */
-+ max = mb_find_extent(e4b, 0, bex->fe_start, gex->fe_len, &ex);
-+ if (max >= gex->fe_len) {
-+ ext4_mb_use_best_found(ac, e4b);
-+ return;
-+ }
+ /*
+ * Deliver file system statistics to userspace
+ */
+@@ -455,8 +479,8 @@ static void nfs_show_mount_options(struct seq_file *m, struct nfs_server *nfss,
+ }
+ seq_printf(m, ",proto=%s",
+ rpc_peeraddr2str(nfss->client, RPC_DISPLAY_PROTO));
+- seq_printf(m, ",timeo=%lu", 10U * clp->retrans_timeo / HZ);
+- seq_printf(m, ",retrans=%u", clp->retrans_count);
++ seq_printf(m, ",timeo=%lu", 10U * nfss->client->cl_timeout->to_initval / HZ);
++ seq_printf(m, ",retrans=%u", nfss->client->cl_timeout->to_retries);
+ seq_printf(m, ",sec=%s", nfs_pseudoflavour_to_name(nfss->client->cl_auth->au_flavor));
+ }
+
+@@ -469,8 +493,9 @@ static int nfs_show_options(struct seq_file *m, struct vfsmount *mnt)
+
+ nfs_show_mount_options(m, nfss, 0);
+
+- seq_printf(m, ",addr="NIPQUAD_FMT,
+- NIPQUAD(nfss->nfs_client->cl_addr.sin_addr));
++ seq_printf(m, ",addr=%s",
++ rpc_peeraddr2str(nfss->nfs_client->cl_rpcclient,
++ RPC_DISPLAY_ADDR));
+
+ return 0;
+ }
+@@ -507,7 +532,7 @@ static int nfs_show_stats(struct seq_file *m, struct vfsmount *mnt)
+ seq_printf(m, ",namelen=%d", nfss->namelen);
+
+ #ifdef CONFIG_NFS_V4
+- if (nfss->nfs_client->cl_nfsversion == 4) {
++ if (nfss->nfs_client->rpc_ops->version == 4) {
+ seq_printf(m, "\n\tnfsv4:\t");
+ seq_printf(m, "bm0=0x%x", nfss->attr_bitmask[0]);
+ seq_printf(m, ",bm1=0x%x", nfss->attr_bitmask[1]);
+@@ -575,16 +600,40 @@ static void nfs_umount_begin(struct vfsmount *vfsmnt, int flags)
+ }
+
+ /*
+- * Sanity-check a server address provided by the mount command
++ * Set the port number in an address. Be agnostic about the address family.
++ */
++static void nfs_set_port(struct sockaddr *sap, unsigned short port)
++{
++ switch (sap->sa_family) {
++ case AF_INET: {
++ struct sockaddr_in *ap = (struct sockaddr_in *)sap;
++ ap->sin_port = htons(port);
++ break;
++ }
++ case AF_INET6: {
++ struct sockaddr_in6 *ap = (struct sockaddr_in6 *)sap;
++ ap->sin6_port = htons(port);
++ break;
++ }
+ }
+}
+
+/*
-+ * The routine checks whether found extent is good enough. If it is,
-+ * then the extent gets marked used and flag is set to the context
-+ * to stop scanning. Otherwise, the extent is compared with the
-+ * previous found extent and if new one is better, then it's stored
-+ * in the context. Later, the best found extent will be used, if
-+ * mballoc can't find good enough extent.
++ * Sanity-check a server address provided by the mount command.
+ *
-+ * FIXME: real allocation policy is to be designed yet!
++ * Address family must be initialized, and address must not be
++ * the ANY address for that family.
+ */
+ static int nfs_verify_server_address(struct sockaddr *addr)
+ {
+ switch (addr->sa_family) {
+ case AF_INET: {
+- struct sockaddr_in *sa = (struct sockaddr_in *) addr;
+- if (sa->sin_addr.s_addr != INADDR_ANY)
+- return 1;
+- break;
++ struct sockaddr_in *sa = (struct sockaddr_in *)addr;
++ return sa->sin_addr.s_addr != INADDR_ANY;
++ }
++ case AF_INET6: {
++ struct in6_addr *sa = &((struct sockaddr_in6 *)addr)->sin6_addr;
++ return !ipv6_addr_any(sa);
+ }
+ }
+
+@@ -592,6 +641,40 @@ static int nfs_verify_server_address(struct sockaddr *addr)
+ }
+
+ /*
++ * Parse string addresses passed in via a mount option,
++ * and construct a sockaddr based on the result.
++ *
++ * If address parsing fails, set the sockaddr's address
++ * family to AF_UNSPEC to force nfs_verify_server_address()
++ * to punt the mount.
+ */
-+static void ext4_mb_measure_extent(struct ext4_allocation_context *ac,
-+ struct ext4_free_extent *ex,
-+ struct ext4_buddy *e4b)
++static void nfs_parse_server_address(char *value,
++ struct sockaddr *sap,
++ size_t *len)
+{
-+ struct ext4_free_extent *bex = &ac->ac_b_ex;
-+ struct ext4_free_extent *gex = &ac->ac_g_ex;
-+
-+ BUG_ON(ex->fe_len <= 0);
-+ BUG_ON(ex->fe_len >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
-+ BUG_ON(ex->fe_start >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
-+ BUG_ON(ac->ac_status != AC_STATUS_CONTINUE);
-+
-+ ac->ac_found++;
-+
-+ /*
-+ * The special case - take what you catch first
-+ */
-+ if (unlikely(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
-+ *bex = *ex;
-+ ext4_mb_use_best_found(ac, e4b);
-+ return;
-+ }
-+
-+ /*
-+ * Let's check whether the chuck is good enough
-+ */
-+ if (ex->fe_len == gex->fe_len) {
-+ *bex = *ex;
-+ ext4_mb_use_best_found(ac, e4b);
-+ return;
-+ }
++ if (strchr(value, ':')) {
++ struct sockaddr_in6 *ap = (struct sockaddr_in6 *)sap;
++ u8 *addr = (u8 *)&ap->sin6_addr.in6_u;
+
-+ /*
-+ * If this is first found extent, just store it in the context
-+ */
-+ if (bex->fe_len == 0) {
-+ *bex = *ex;
-+ return;
-+ }
++ ap->sin6_family = AF_INET6;
++ *len = sizeof(*ap);
++ if (in6_pton(value, -1, addr, '\0', NULL))
++ return;
++ } else {
++ struct sockaddr_in *ap = (struct sockaddr_in *)sap;
++ u8 *addr = (u8 *)&ap->sin_addr.s_addr;
+
-+ /*
-+ * If new found extent is better, store it in the context
-+ */
-+ if (bex->fe_len < gex->fe_len) {
-+ /* if the request isn't satisfied, any found extent
-+ * larger than previous best one is better */
-+ if (ex->fe_len > bex->fe_len)
-+ *bex = *ex;
-+ } else if (ex->fe_len > gex->fe_len) {
-+ /* if the request is satisfied, then we try to find
-+ * an extent that still satisfy the request, but is
-+ * smaller than previous one */
-+ if (ex->fe_len < bex->fe_len)
-+ *bex = *ex;
++ ap->sin_family = AF_INET;
++ *len = sizeof(*ap);
++ if (in4_pton(value, -1, addr, '\0', NULL))
++ return;
+ }
+
-+ ext4_mb_check_limits(ac, e4b, 0);
++ sap->sa_family = AF_UNSPEC;
++ *len = 0;
+}
+
-+static int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
-+{
-+ struct ext4_free_extent ex = ac->ac_b_ex;
-+ ext4_group_t group = ex.fe_group;
-+ int max;
-+ int err;
-+
-+ BUG_ON(ex.fe_len <= 0);
-+ err = ext4_mb_load_buddy(ac->ac_sb, group, e4b);
-+ if (err)
-+ return err;
++/*
+ * Error-check and convert a string of mount options from user space into
+ * a data structure
+ */
+@@ -599,6 +682,7 @@ static int nfs_parse_mount_options(char *raw,
+ struct nfs_parsed_mount_data *mnt)
+ {
+ char *p, *string;
++ unsigned short port = 0;
+
+ if (!raw) {
+ dfprintk(MOUNT, "NFS: mount options string was NULL.\n");
+@@ -701,7 +785,7 @@ static int nfs_parse_mount_options(char *raw,
+ return 0;
+ if (option < 0 || option > 65535)
+ return 0;
+- mnt->nfs_server.address.sin_port = htons(option);
++ port = option;
+ break;
+ case Opt_rsize:
+ if (match_int(args, &mnt->rsize))
+@@ -763,13 +847,6 @@ static int nfs_parse_mount_options(char *raw,
+ return 0;
+ mnt->mount_server.port = option;
+ break;
+- case Opt_mountprog:
+- if (match_int(args, &option))
+- return 0;
+- if (option < 0)
+- return 0;
+- mnt->mount_server.program = option;
+- break;
+ case Opt_mountvers:
+ if (match_int(args, &option))
+ return 0;
+@@ -777,13 +854,6 @@ static int nfs_parse_mount_options(char *raw,
+ return 0;
+ mnt->mount_server.version = option;
+ break;
+- case Opt_nfsprog:
+- if (match_int(args, &option))
+- return 0;
+- if (option < 0)
+- return 0;
+- mnt->nfs_server.program = option;
+- break;
+ case Opt_nfsvers:
+ if (match_int(args, &option))
+ return 0;
+@@ -927,24 +997,32 @@ static int nfs_parse_mount_options(char *raw,
+ string = match_strdup(args);
+ if (string == NULL)
+ goto out_nomem;
+- mnt->nfs_server.address.sin_family = AF_INET;
+- mnt->nfs_server.address.sin_addr.s_addr =
+- in_aton(string);
++ nfs_parse_server_address(string, (struct sockaddr *)
++ &mnt->nfs_server.address,
++ &mnt->nfs_server.addrlen);
+ kfree(string);
+ break;
+ case Opt_clientaddr:
+ string = match_strdup(args);
+ if (string == NULL)
+ goto out_nomem;
++ kfree(mnt->client_address);
+ mnt->client_address = string;
+ break;
++ case Opt_mounthost:
++ string = match_strdup(args);
++ if (string == NULL)
++ goto out_nomem;
++ kfree(mnt->mount_server.hostname);
++ mnt->mount_server.hostname = string;
++ break;
+ case Opt_mountaddr:
+ string = match_strdup(args);
+ if (string == NULL)
+ goto out_nomem;
+- mnt->mount_server.address.sin_family = AF_INET;
+- mnt->mount_server.address.sin_addr.s_addr =
+- in_aton(string);
++ nfs_parse_server_address(string, (struct sockaddr *)
++ &mnt->mount_server.address,
++ &mnt->mount_server.addrlen);
+ kfree(string);
+ break;
+
+@@ -957,6 +1035,8 @@ static int nfs_parse_mount_options(char *raw,
+ }
+ }
+
++ nfs_set_port((struct sockaddr *)&mnt->nfs_server.address, port);
+
-+ ext4_lock_group(ac->ac_sb, group);
-+ max = mb_find_extent(e4b, 0, ex.fe_start, ex.fe_len, &ex);
+ return 1;
+
+ out_nomem:
+@@ -987,7 +1067,8 @@ out_unknown:
+ static int nfs_try_mount(struct nfs_parsed_mount_data *args,
+ struct nfs_fh *root_fh)
+ {
+- struct sockaddr_in sin;
++ struct sockaddr *sap = (struct sockaddr *)&args->mount_server.address;
++ char *hostname;
+ int status;
+
+ if (args->mount_server.version == 0) {
+@@ -997,25 +1078,32 @@ static int nfs_try_mount(struct nfs_parsed_mount_data *args,
+ args->mount_server.version = NFS_MNT_VERSION;
+ }
+
++ if (args->mount_server.hostname)
++ hostname = args->mount_server.hostname;
++ else
++ hostname = args->nfs_server.hostname;
+
-+ if (max > 0) {
-+ ac->ac_b_ex = ex;
-+ ext4_mb_use_best_found(ac, e4b);
+ /*
+ * Construct the mount server's address.
+ */
+- if (args->mount_server.address.sin_addr.s_addr != INADDR_ANY)
+- sin = args->mount_server.address;
+- else
+- sin = args->nfs_server.address;
++ if (args->mount_server.address.ss_family == AF_UNSPEC) {
++ memcpy(sap, &args->nfs_server.address,
++ args->nfs_server.addrlen);
++ args->mount_server.addrlen = args->nfs_server.addrlen;
+ }
+
-+ ext4_unlock_group(ac->ac_sb, group);
-+ ext4_mb_release_desc(e4b);
-+
-+ return 0;
-+}
-+
-+static int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
+ /*
+ * autobind will be used if mount_server.port == 0
+ */
+- sin.sin_port = htons(args->mount_server.port);
++ nfs_set_port(sap, args->mount_server.port);
+
+ /*
+ * Now ask the mount server to map our export path
+ * to a file handle.
+ */
+- status = nfs_mount((struct sockaddr *) &sin,
+- sizeof(sin),
+- args->nfs_server.hostname,
++ status = nfs_mount(sap,
++ args->mount_server.addrlen,
++ hostname,
+ args->nfs_server.export_path,
+ args->mount_server.version,
+ args->mount_server.protocol,
+@@ -1023,8 +1111,8 @@ static int nfs_try_mount(struct nfs_parsed_mount_data *args,
+ if (status == 0)
+ return 0;
+
+- dfprintk(MOUNT, "NFS: unable to mount server " NIPQUAD_FMT
+- ", error %d\n", NIPQUAD(sin.sin_addr.s_addr), status);
++ dfprintk(MOUNT, "NFS: unable to mount server %s, error %d",
++ hostname, status);
+ return status;
+ }
+
+@@ -1043,9 +1131,6 @@ static int nfs_try_mount(struct nfs_parsed_mount_data *args,
+ *
+ * + breaking back: trying proto=udp after proto=tcp, v2 after v3,
+ * mountproto=tcp after mountproto=udp, and so on
+- *
+- * XXX: as far as I can tell, changing the NFS program number is not
+- * supported in the NFS client.
+ */
+ static int nfs_validate_mount_data(void *options,
+ struct nfs_parsed_mount_data *args,
+@@ -1069,9 +1154,7 @@ static int nfs_validate_mount_data(void *options,
+ args->acdirmin = 30;
+ args->acdirmax = 60;
+ args->mount_server.protocol = XPRT_TRANSPORT_UDP;
+- args->mount_server.program = NFS_MNT_PROGRAM;
+ args->nfs_server.protocol = XPRT_TRANSPORT_TCP;
+- args->nfs_server.program = NFS_PROGRAM;
+
+ switch (data->version) {
+ case 1:
+@@ -1102,9 +1185,6 @@ static int nfs_validate_mount_data(void *options,
+ memset(mntfh->data + mntfh->size, 0,
+ sizeof(mntfh->data) - mntfh->size);
+
+- if (!nfs_verify_server_address((struct sockaddr *) &data->addr))
+- goto out_no_address;
+-
+ /*
+ * Translate to nfs_parsed_mount_data, which nfs_fill_super
+ * can deal with.
+@@ -1119,7 +1199,14 @@ static int nfs_validate_mount_data(void *options,
+ args->acregmax = data->acregmax;
+ args->acdirmin = data->acdirmin;
+ args->acdirmax = data->acdirmax;
+- args->nfs_server.address = data->addr;
++
++ memcpy(&args->nfs_server.address, &data->addr,
++ sizeof(data->addr));
++ args->nfs_server.addrlen = sizeof(data->addr);
++ if (!nfs_verify_server_address((struct sockaddr *)
++ &args->nfs_server.address))
++ goto out_no_address;
++
+ if (!(data->flags & NFS_MOUNT_TCP))
+ args->nfs_server.protocol = XPRT_TRANSPORT_UDP;
+ /* N.B. caller will free nfs_server.hostname in all cases */
+@@ -1322,15 +1409,50 @@ static int nfs_set_super(struct super_block *s, void *data)
+ return ret;
+ }
+
++static int nfs_compare_super_address(struct nfs_server *server1,
++ struct nfs_server *server2)
+{
-+ ext4_group_t group = ac->ac_g_ex.fe_group;
-+ int max;
-+ int err;
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+ struct ext4_super_block *es = sbi->s_es;
-+ struct ext4_free_extent ex;
-+
-+ if (!(ac->ac_flags & EXT4_MB_HINT_TRY_GOAL))
-+ return 0;
-+
-+ err = ext4_mb_load_buddy(ac->ac_sb, group, e4b);
-+ if (err)
-+ return err;
++ struct sockaddr *sap1, *sap2;
+
-+ ext4_lock_group(ac->ac_sb, group);
-+ max = mb_find_extent(e4b, 0, ac->ac_g_ex.fe_start,
-+ ac->ac_g_ex.fe_len, &ex);
++ sap1 = (struct sockaddr *)&server1->nfs_client->cl_addr;
++ sap2 = (struct sockaddr *)&server2->nfs_client->cl_addr;
+
-+ if (max >= ac->ac_g_ex.fe_len && ac->ac_g_ex.fe_len == sbi->s_stripe) {
-+ ext4_fsblk_t start;
++ if (sap1->sa_family != sap2->sa_family)
++ return 0;
+
-+ start = (e4b->bd_group * EXT4_BLOCKS_PER_GROUP(ac->ac_sb)) +
-+ ex.fe_start + le32_to_cpu(es->s_first_data_block);
-+ /* use do_div to get remainder (would be 64-bit modulo) */
-+ if (do_div(start, sbi->s_stripe) == 0) {
-+ ac->ac_found++;
-+ ac->ac_b_ex = ex;
-+ ext4_mb_use_best_found(ac, e4b);
-+ }
-+ } else if (max >= ac->ac_g_ex.fe_len) {
-+ BUG_ON(ex.fe_len <= 0);
-+ BUG_ON(ex.fe_group != ac->ac_g_ex.fe_group);
-+ BUG_ON(ex.fe_start != ac->ac_g_ex.fe_start);
-+ ac->ac_found++;
-+ ac->ac_b_ex = ex;
-+ ext4_mb_use_best_found(ac, e4b);
-+ } else if (max > 0 && (ac->ac_flags & EXT4_MB_HINT_MERGE)) {
-+ /* Sometimes, caller may want to merge even small
-+ * number of blocks to an existing extent */
-+ BUG_ON(ex.fe_len <= 0);
-+ BUG_ON(ex.fe_group != ac->ac_g_ex.fe_group);
-+ BUG_ON(ex.fe_start != ac->ac_g_ex.fe_start);
-+ ac->ac_found++;
-+ ac->ac_b_ex = ex;
-+ ext4_mb_use_best_found(ac, e4b);
++ switch (sap1->sa_family) {
++ case AF_INET: {
++ struct sockaddr_in *sin1 = (struct sockaddr_in *)sap1;
++ struct sockaddr_in *sin2 = (struct sockaddr_in *)sap2;
++ if (sin1->sin_addr.s_addr != sin2->sin_addr.s_addr)
++ return 0;
++ if (sin1->sin_port != sin2->sin_port)
++ return 0;
++ break;
++ }
++ case AF_INET6: {
++ struct sockaddr_in6 *sin1 = (struct sockaddr_in6 *)sap1;
++ struct sockaddr_in6 *sin2 = (struct sockaddr_in6 *)sap2;
++ if (!ipv6_addr_equal(&sin1->sin6_addr, &sin2->sin6_addr))
++ return 0;
++ if (sin1->sin6_port != sin2->sin6_port)
++ return 0;
++ break;
++ }
++ default:
++ return 0;
+ }
-+ ext4_unlock_group(ac->ac_sb, group);
-+ ext4_mb_release_desc(e4b);
+
-+ return 0;
++ return 1;
+}
+
-+/*
-+ * The routine scans buddy structures (not bitmap!) from given order
-+ * to max order and tries to find big enough chunk to satisfy the req
+ static int nfs_compare_super(struct super_block *sb, void *data)
+ {
+ struct nfs_sb_mountdata *sb_mntdata = data;
+ struct nfs_server *server = sb_mntdata->server, *old = NFS_SB(sb);
+ int mntflags = sb_mntdata->mntflags;
+
+- if (memcmp(&old->nfs_client->cl_addr,
+- &server->nfs_client->cl_addr,
+- sizeof(old->nfs_client->cl_addr)) != 0)
++ if (!nfs_compare_super_address(old, server))
+ return 0;
+ /* Note: NFS_MOUNT_UNSHARED == NFS4_MOUNT_UNSHARED */
+ if (old->flags & NFS_MOUNT_UNSHARED)
+@@ -1400,6 +1522,7 @@ static int nfs_get_sb(struct file_system_type *fs_type,
+
+ out:
+ kfree(data.nfs_server.hostname);
++ kfree(data.mount_server.hostname);
+ return error;
+
+ out_err_nosb:
+@@ -1528,12 +1651,35 @@ static void nfs4_fill_super(struct super_block *sb)
+ }
+
+ /*
++ * If the user didn't specify a port, set the port number to
++ * the NFS version 4 default port.
+ */
-+static void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
++static void nfs4_default_port(struct sockaddr *sap)
+{
-+ struct super_block *sb = ac->ac_sb;
-+ struct ext4_group_info *grp = e4b->bd_info;
-+ void *buddy;
-+ int i;
-+ int k;
-+ int max;
-+
-+ BUG_ON(ac->ac_2order <= 0);
-+ for (i = ac->ac_2order; i <= sb->s_blocksize_bits + 1; i++) {
-+ if (grp->bb_counters[i] == 0)
-+ continue;
-+
-+ buddy = mb_find_buddy(e4b, i, &max);
-+ BUG_ON(buddy == NULL);
-+
-+ k = ext4_find_next_zero_bit(buddy, max, 0);
-+ BUG_ON(k >= max);
-+
-+ ac->ac_found++;
-+
-+ ac->ac_b_ex.fe_len = 1 << i;
-+ ac->ac_b_ex.fe_start = k << i;
-+ ac->ac_b_ex.fe_group = e4b->bd_group;
-+
-+ ext4_mb_use_best_found(ac, e4b);
-+
-+ BUG_ON(ac->ac_b_ex.fe_len != ac->ac_g_ex.fe_len);
-+
-+ if (EXT4_SB(sb)->s_mb_stats)
-+ atomic_inc(&EXT4_SB(sb)->s_bal_2orders);
-+
++ switch (sap->sa_family) {
++ case AF_INET: {
++ struct sockaddr_in *ap = (struct sockaddr_in *)sap;
++ if (ap->sin_port == 0)
++ ap->sin_port = htons(NFS_PORT);
+ break;
+ }
++ case AF_INET6: {
++ struct sockaddr_in6 *ap = (struct sockaddr_in6 *)sap;
++ if (ap->sin6_port == 0)
++ ap->sin6_port = htons(NFS_PORT);
++ break;
++ }
++ }
+}
+
+/*
-+ * The routine scans the group and measures all found extents.
-+ * In order to optimize scanning, caller must pass number of
-+ * free blocks in the group, so the routine can know upper limit.
-+ */
-+static void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
-+{
-+ struct super_block *sb = ac->ac_sb;
-+ void *bitmap = EXT4_MB_BITMAP(e4b);
-+ struct ext4_free_extent ex;
-+ int i;
-+ int free;
-+
-+ free = e4b->bd_info->bb_free;
-+ BUG_ON(free <= 0);
+ * Validate NFSv4 mount options
+ */
+ static int nfs4_validate_mount_data(void *options,
+ struct nfs_parsed_mount_data *args,
+ const char *dev_name)
+ {
++ struct sockaddr_in *ap;
+ struct nfs4_mount_data *data = (struct nfs4_mount_data *)options;
+ char *c;
+
+@@ -1554,18 +1700,21 @@ static int nfs4_validate_mount_data(void *options,
+
+ switch (data->version) {
+ case 1:
+- if (data->host_addrlen != sizeof(args->nfs_server.address))
++ ap = (struct sockaddr_in *)&args->nfs_server.address;
++ if (data->host_addrlen > sizeof(args->nfs_server.address))
+ goto out_no_address;
+- if (copy_from_user(&args->nfs_server.address,
+- data->host_addr,
+- sizeof(args->nfs_server.address)))
++ if (data->host_addrlen == 0)
++ goto out_no_address;
++ args->nfs_server.addrlen = data->host_addrlen;
++ if (copy_from_user(ap, data->host_addr, data->host_addrlen))
+ return -EFAULT;
+- if (args->nfs_server.address.sin_port == 0)
+- args->nfs_server.address.sin_port = htons(NFS_PORT);
+ if (!nfs_verify_server_address((struct sockaddr *)
+ &args->nfs_server.address))
+ goto out_no_address;
+
++ nfs4_default_port((struct sockaddr *)
++ &args->nfs_server.address);
+
-+ i = e4b->bd_info->bb_first_free;
+ switch (data->auth_flavourlen) {
+ case 0:
+ args->auth_flavors[0] = RPC_AUTH_UNIX;
+@@ -1623,6 +1772,9 @@ static int nfs4_validate_mount_data(void *options,
+ &args->nfs_server.address))
+ return -EINVAL;
+
++ nfs4_default_port((struct sockaddr *)
++ &args->nfs_server.address);
+
-+ while (free && ac->ac_status == AC_STATUS_CONTINUE) {
-+ i = ext4_find_next_zero_bit(bitmap,
-+ EXT4_BLOCKS_PER_GROUP(sb), i);
-+ if (i >= EXT4_BLOCKS_PER_GROUP(sb)) {
-+ BUG_ON(free != 0);
-+ break;
-+ }
+ switch (args->auth_flavor_len) {
+ case 0:
+ args->auth_flavors[0] = RPC_AUTH_UNIX;
+@@ -1643,21 +1795,16 @@ static int nfs4_validate_mount_data(void *options,
+ len = c - dev_name;
+ if (len > NFS4_MAXNAMLEN)
+ return -ENAMETOOLONG;
+- args->nfs_server.hostname = kzalloc(len, GFP_KERNEL);
+- if (args->nfs_server.hostname == NULL)
+- return -ENOMEM;
+- strncpy(args->nfs_server.hostname, dev_name, len - 1);
++ /* N.B. caller will free nfs_server.hostname in all cases */
++ args->nfs_server.hostname = kstrndup(dev_name, len, GFP_KERNEL);
+
+ c++; /* step over the ':' */
+ len = strlen(c);
+ if (len > NFS4_MAXPATHLEN)
+ return -ENAMETOOLONG;
+- args->nfs_server.export_path = kzalloc(len + 1, GFP_KERNEL);
+- if (args->nfs_server.export_path == NULL)
+- return -ENOMEM;
+- strncpy(args->nfs_server.export_path, c, len);
++ args->nfs_server.export_path = kstrndup(c, len, GFP_KERNEL);
+
+- dprintk("MNTPATH: %s\n", args->nfs_server.export_path);
++ dprintk("NFS: MNTPATH: '%s'\n", args->nfs_server.export_path);
+
+ if (args->client_address == NULL)
+ goto out_no_client_address;
+diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c
+index 233ad38..7574153 100644
+--- a/fs/nfs/unlink.c
++++ b/fs/nfs/unlink.c
+@@ -14,6 +14,8 @@
+ #include <linux/sched.h>
+ #include <linux/wait.h>
+
++#include "internal.h"
+
-+ mb_find_extent(e4b, 0, i, ac->ac_g_ex.fe_len, &ex);
-+ BUG_ON(ex.fe_len <= 0);
-+ BUG_ON(free < ex.fe_len);
+ struct nfs_unlinkdata {
+ struct hlist_node list;
+ struct nfs_removeargs args;
+@@ -69,24 +71,6 @@ static void nfs_dec_sillycount(struct inode *dir)
+ }
+
+ /**
+- * nfs_async_unlink_init - Initialize the RPC info
+- * task: rpc_task of the sillydelete
+- */
+-static void nfs_async_unlink_init(struct rpc_task *task, void *calldata)
+-{
+- struct nfs_unlinkdata *data = calldata;
+- struct inode *dir = data->dir;
+- struct rpc_message msg = {
+- .rpc_argp = &data->args,
+- .rpc_resp = &data->res,
+- .rpc_cred = data->cred,
+- };
+-
+- NFS_PROTO(dir)->unlink_setup(&msg, dir);
+- rpc_call_setup(task, &msg, 0);
+-}
+-
+-/**
+ * nfs_async_unlink_done - Sillydelete post-processing
+ * @task: rpc_task of the sillydelete
+ *
+@@ -113,32 +97,45 @@ static void nfs_async_unlink_release(void *calldata)
+ struct nfs_unlinkdata *data = calldata;
+
+ nfs_dec_sillycount(data->dir);
++ nfs_sb_deactive(NFS_SERVER(data->dir));
+ nfs_free_unlinkdata(data);
+ }
+
+ static const struct rpc_call_ops nfs_unlink_ops = {
+- .rpc_call_prepare = nfs_async_unlink_init,
+ .rpc_call_done = nfs_async_unlink_done,
+ .rpc_release = nfs_async_unlink_release,
+ };
+
+ static int nfs_do_call_unlink(struct dentry *parent, struct inode *dir, struct nfs_unlinkdata *data)
+ {
++ struct rpc_message msg = {
++ .rpc_argp = &data->args,
++ .rpc_resp = &data->res,
++ .rpc_cred = data->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_message = &msg,
++ .callback_ops = &nfs_unlink_ops,
++ .callback_data = data,
++ .flags = RPC_TASK_ASYNC,
++ };
+ struct rpc_task *task;
+ struct dentry *alias;
+
+ alias = d_lookup(parent, &data->args.name);
+ if (alias != NULL) {
+ int ret = 0;
+
-+ ext4_mb_measure_extent(ac, &ex, e4b);
+ /*
+ * Hey, we raced with lookup... See if we need to transfer
+ * the sillyrename information to the aliased dentry.
+ */
+ nfs_free_dname(data);
+ spin_lock(&alias->d_lock);
+- if (!(alias->d_flags & DCACHE_NFSFS_RENAMED)) {
++ if (alias->d_inode != NULL &&
++ !(alias->d_flags & DCACHE_NFSFS_RENAMED)) {
+ alias->d_fsdata = data;
+- alias->d_flags ^= DCACHE_NFSFS_RENAMED;
++ alias->d_flags |= DCACHE_NFSFS_RENAMED;
+ ret = 1;
+ }
+ spin_unlock(&alias->d_lock);
+@@ -151,10 +148,14 @@ static int nfs_do_call_unlink(struct dentry *parent, struct inode *dir, struct n
+ nfs_dec_sillycount(dir);
+ return 0;
+ }
++ nfs_sb_active(NFS_SERVER(dir));
+ data->args.fh = NFS_FH(dir);
+ nfs_fattr_init(&data->res.dir_attr);
+
+- task = rpc_run_task(NFS_CLIENT(dir), RPC_TASK_ASYNC, &nfs_unlink_ops, data);
++ NFS_PROTO(dir)->unlink_setup(&msg, dir);
++
++ task_setup_data.rpc_client = NFS_CLIENT(dir);
++ task = rpc_run_task(&task_setup_data);
+ if (!IS_ERR(task))
+ rpc_put_task(task);
+ return 1;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 51cc1bd..5ac5b27 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -196,7 +196,7 @@ static int nfs_writepage_setup(struct nfs_open_context *ctx, struct page *page,
+ }
+ /* Update file length */
+ nfs_grow_file(page, offset, count);
+- nfs_unlock_request(req);
++ nfs_clear_page_tag_locked(req);
+ return 0;
+ }
+
+@@ -252,7 +252,6 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ struct page *page)
+ {
+ struct inode *inode = page->mapping->host;
+- struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_page *req;
+ int ret;
+
+@@ -263,10 +262,10 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ spin_unlock(&inode->i_lock);
+ return 0;
+ }
+- if (nfs_lock_request_dontget(req))
++ if (nfs_set_page_tag_locked(req))
+ break;
+ /* Note: If we hold the page lock, as is the case in nfs_writepage,
+- * then the call to nfs_lock_request_dontget() will always
++ * then the call to nfs_set_page_tag_locked() will always
+ * succeed provided that someone hasn't already marked the
+ * request as dirty (in which case we don't care).
+ */
+@@ -280,7 +279,7 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ if (test_bit(PG_NEED_COMMIT, &req->wb_flags)) {
+ /* This request is marked for commit */
+ spin_unlock(&inode->i_lock);
+- nfs_unlock_request(req);
++ nfs_clear_page_tag_locked(req);
+ nfs_pageio_complete(pgio);
+ return 0;
+ }
+@@ -288,8 +287,6 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ spin_unlock(&inode->i_lock);
+ BUG();
+ }
+- radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index,
+- NFS_PAGE_TAG_LOCKED);
+ spin_unlock(&inode->i_lock);
+ nfs_pageio_add_request(pgio, req);
+ return 0;
+@@ -381,6 +378,7 @@ static int nfs_inode_add_request(struct inode *inode, struct nfs_page *req)
+ set_page_private(req->wb_page, (unsigned long)req);
+ nfsi->npages++;
+ kref_get(&req->wb_kref);
++ radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
+ return 0;
+ }
+
+@@ -596,7 +594,7 @@ static struct nfs_page * nfs_update_request(struct nfs_open_context* ctx,
+ spin_lock(&inode->i_lock);
+ req = nfs_page_find_request_locked(page);
+ if (req) {
+- if (!nfs_lock_request_dontget(req)) {
++ if (!nfs_set_page_tag_locked(req)) {
+ int error;
+
+ spin_unlock(&inode->i_lock);
+@@ -646,7 +644,7 @@ static struct nfs_page * nfs_update_request(struct nfs_open_context* ctx,
+ || req->wb_page != page
+ || !nfs_dirty_request(req)
+ || offset > rqend || end < req->wb_offset) {
+- nfs_unlock_request(req);
++ nfs_clear_page_tag_locked(req);
+ return ERR_PTR(-EBUSY);
+ }
+
+@@ -755,7 +753,7 @@ static void nfs_writepage_release(struct nfs_page *req)
+ nfs_clear_page_tag_locked(req);
+ }
+
+-static inline int flush_task_priority(int how)
++static int flush_task_priority(int how)
+ {
+ switch (how & (FLUSH_HIGHPRI|FLUSH_LOWPRI)) {
+ case FLUSH_HIGHPRI:
+@@ -775,15 +773,31 @@ static void nfs_write_rpcsetup(struct nfs_page *req,
+ unsigned int count, unsigned int offset,
+ int how)
+ {
+- struct inode *inode;
+- int flags;
++ struct inode *inode = req->wb_context->path.dentry->d_inode;
++ int flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC;
++ int priority = flush_task_priority(how);
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_argp = &data->args,
++ .rpc_resp = &data->res,
++ .rpc_cred = req->wb_context->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = NFS_CLIENT(inode),
++ .task = &data->task,
++ .rpc_message = &msg,
++ .callback_ops = call_ops,
++ .callback_data = data,
++ .flags = flags,
++ .priority = priority,
++ };
+
+ /* Set up the RPC argument and reply structs
+ * NB: take care not to mess about with data->commit et al. */
+
+ data->req = req;
+ data->inode = inode = req->wb_context->path.dentry->d_inode;
+- data->cred = req->wb_context->cred;
++ data->cred = msg.rpc_cred;
+
+ data->args.fh = NFS_FH(inode);
+ data->args.offset = req_offset(req) + offset;
+@@ -791,6 +805,12 @@ static void nfs_write_rpcsetup(struct nfs_page *req,
+ data->args.pages = data->pagevec;
+ data->args.count = count;
+ data->args.context = req->wb_context;
++ data->args.stable = NFS_UNSTABLE;
++ if (how & FLUSH_STABLE) {
++ data->args.stable = NFS_DATA_SYNC;
++ if (!NFS_I(inode)->ncommit)
++ data->args.stable = NFS_FILE_SYNC;
++ }
+
+ data->res.fattr = &data->fattr;
+ data->res.count = count;
+@@ -798,12 +818,7 @@ static void nfs_write_rpcsetup(struct nfs_page *req,
+ nfs_fattr_init(&data->fattr);
+
+ /* Set up the initial task struct. */
+- flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC;
+- rpc_init_task(&data->task, NFS_CLIENT(inode), flags, call_ops, data);
+- NFS_PROTO(inode)->write_setup(data, how);
+-
+- data->task.tk_priority = flush_task_priority(how);
+- data->task.tk_cookie = (unsigned long)inode;
++ NFS_PROTO(inode)->write_setup(data, &msg);
+
+ dprintk("NFS: %5u initiated write call "
+ "(req %s/%Ld, %u bytes @ offset %Lu)\n",
+@@ -812,16 +827,10 @@ static void nfs_write_rpcsetup(struct nfs_page *req,
+ (long long)NFS_FILEID(inode),
+ count,
+ (unsigned long long)data->args.offset);
+-}
+-
+-static void nfs_execute_write(struct nfs_write_data *data)
+-{
+- struct rpc_clnt *clnt = NFS_CLIENT(data->inode);
+- sigset_t oldset;
+
+- rpc_clnt_sigmask(clnt, &oldset);
+- rpc_execute(&data->task);
+- rpc_clnt_sigunmask(clnt, &oldset);
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+ }
+
+ /*
+@@ -868,7 +877,6 @@ static int nfs_flush_multi(struct inode *inode, struct list_head *head, unsigned
+ wsize, offset, how);
+ offset += wsize;
+ nbytes -= wsize;
+- nfs_execute_write(data);
+ } while (nbytes != 0);
+
+ return 0;
+@@ -916,7 +924,6 @@ static int nfs_flush_one(struct inode *inode, struct list_head *head, unsigned i
+ /* Set up the argument struct */
+ nfs_write_rpcsetup(req, data, &nfs_write_full_ops, count, 0, how);
+
+- nfs_execute_write(data);
+ return 0;
+ out_bad:
+ while (!list_empty(head)) {
+@@ -932,7 +939,7 @@ static int nfs_flush_one(struct inode *inode, struct list_head *head, unsigned i
+ static void nfs_pageio_init_write(struct nfs_pageio_descriptor *pgio,
+ struct inode *inode, int ioflags)
+ {
+- int wsize = NFS_SERVER(inode)->wsize;
++ size_t wsize = NFS_SERVER(inode)->wsize;
+
+ if (wsize < PAGE_CACHE_SIZE)
+ nfs_pageio_init(pgio, inode, nfs_flush_multi, wsize, ioflags);
+@@ -1146,19 +1153,33 @@ static void nfs_commit_rpcsetup(struct list_head *head,
+ struct nfs_write_data *data,
+ int how)
+ {
+- struct nfs_page *first;
+- struct inode *inode;
+- int flags;
++ struct nfs_page *first = nfs_list_entry(head->next);
++ struct inode *inode = first->wb_context->path.dentry->d_inode;
++ int flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC;
++ int priority = flush_task_priority(how);
++ struct rpc_task *task;
++ struct rpc_message msg = {
++ .rpc_argp = &data->args,
++ .rpc_resp = &data->res,
++ .rpc_cred = first->wb_context->cred,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .task = &data->task,
++ .rpc_client = NFS_CLIENT(inode),
++ .rpc_message = &msg,
++ .callback_ops = &nfs_commit_ops,
++ .callback_data = data,
++ .flags = flags,
++ .priority = priority,
++ };
+
+ /* Set up the RPC argument and reply structs
+ * NB: take care not to mess about with data->commit et al. */
+
+ list_splice_init(head, &data->pages);
+- first = nfs_list_entry(data->pages.next);
+- inode = first->wb_context->path.dentry->d_inode;
+
+ data->inode = inode;
+- data->cred = first->wb_context->cred;
++ data->cred = msg.rpc_cred;
+
+ data->args.fh = NFS_FH(data->inode);
+ /* Note: we always request a commit of the entire inode */
+@@ -1170,14 +1191,13 @@ static void nfs_commit_rpcsetup(struct list_head *head,
+ nfs_fattr_init(&data->fattr);
+
+ /* Set up the initial task struct. */
+- flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC;
+- rpc_init_task(&data->task, NFS_CLIENT(inode), flags, &nfs_commit_ops, data);
+- NFS_PROTO(inode)->commit_setup(data, how);
++ NFS_PROTO(inode)->commit_setup(data, &msg);
+
+- data->task.tk_priority = flush_task_priority(how);
+- data->task.tk_cookie = (unsigned long)inode;
+-
+ dprintk("NFS: %5u initiated commit call\n", data->task.tk_pid);
+
-+ i += ex.fe_len;
-+ free -= ex.fe_len;
-+ }
++ task = rpc_run_task(&task_setup_data);
++ if (!IS_ERR(task))
++ rpc_put_task(task);
+ }
+
+ /*
+@@ -1197,7 +1217,6 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how)
+ /* Set up the argument struct */
+ nfs_commit_rpcsetup(head, data, how);
+
+- nfs_execute_write(data);
+ return 0;
+ out_bad:
+ while (!list_empty(head)) {
+diff --git a/fs/ocfs2/Makefile b/fs/ocfs2/Makefile
+index 9fb8132..4d4ce48 100644
+--- a/fs/ocfs2/Makefile
++++ b/fs/ocfs2/Makefile
+@@ -19,16 +19,17 @@ ocfs2-objs := \
+ ioctl.o \
+ journal.o \
+ localalloc.o \
++ locks.o \
+ mmap.o \
+ namei.o \
++ resize.o \
+ slot_map.o \
+ suballoc.o \
+ super.o \
+ symlink.o \
+ sysfile.o \
+ uptodate.o \
+- ver.o \
+- vote.o
++ ver.o
+
+ obj-$(CONFIG_OCFS2_FS) += cluster/
+ obj-$(CONFIG_OCFS2_FS) += dlm/
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index 23c8cda..e6df06a 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -4731,7 +4731,7 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb)
+
+ mutex_lock(&data_alloc_inode->i_mutex);
+
+- status = ocfs2_meta_lock(data_alloc_inode, &data_alloc_bh, 1);
++ status = ocfs2_inode_lock(data_alloc_inode, &data_alloc_bh, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto out_mutex;
+@@ -4753,7 +4753,7 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb)
+
+ out_unlock:
+ brelse(data_alloc_bh);
+- ocfs2_meta_unlock(data_alloc_inode, 1);
++ ocfs2_inode_unlock(data_alloc_inode, 1);
+
+ out_mutex:
+ mutex_unlock(&data_alloc_inode->i_mutex);
+@@ -5077,7 +5077,7 @@ static int ocfs2_free_cached_items(struct ocfs2_super *osb,
+
+ mutex_lock(&inode->i_mutex);
+
+- ret = ocfs2_meta_lock(inode, &di_bh, 1);
++ ret = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (ret) {
+ mlog_errno(ret);
+ goto out_mutex;
+@@ -5118,7 +5118,7 @@ out_journal:
+ ocfs2_commit_trans(osb, handle);
+
+ out_unlock:
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ brelse(di_bh);
+ out_mutex:
+ mutex_unlock(&inode->i_mutex);
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 56f7790..bc7b4cb 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -26,6 +26,7 @@
+ #include <asm/byteorder.h>
+ #include <linux/swap.h>
+ #include <linux/pipe_fs_i.h>
++#include <linux/mpage.h>
+
+ #define MLOG_MASK_PREFIX ML_FILE_IO
+ #include <cluster/masklog.h>
+@@ -139,7 +140,8 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ {
+ int err = 0;
+ unsigned int ext_flags;
+- u64 p_blkno, past_eof;
++ u64 max_blocks = bh_result->b_size >> inode->i_blkbits;
++ u64 p_blkno, count, past_eof;
+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
+ mlog_entry("(0x%p, %llu, 0x%p, %d)\n", inode,
+@@ -155,7 +157,7 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ goto bail;
+ }
+
+- err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, NULL,
++ err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, &count,
+ &ext_flags);
+ if (err) {
+ mlog(ML_ERROR, "Error %d from get_blocks(0x%p, %llu, 1, "
+@@ -164,6 +166,9 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ goto bail;
+ }
+
++ if (max_blocks < count)
++ count = max_blocks;
+
-+ ext4_mb_check_limits(ac, e4b, 1);
-+}
+ /*
+ * ocfs2 never allocates in this function - the only time we
+ * need to use BH_New is when we're extending i_size on a file
+@@ -178,6 +183,8 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ if (p_blkno && !(ext_flags & OCFS2_EXT_UNWRITTEN))
+ map_bh(bh_result, inode->i_sb, p_blkno);
+
++ bh_result->b_size = count << inode->i_blkbits;
+
+ if (!ocfs2_sparse_alloc(osb)) {
+ if (p_blkno == 0) {
+ err = -EIO;
+@@ -210,7 +217,7 @@ int ocfs2_read_inline_data(struct inode *inode, struct page *page,
+ struct buffer_head *di_bh)
+ {
+ void *kaddr;
+- unsigned int size;
++ loff_t size;
+ struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+
+ if (!(le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL)) {
+@@ -224,8 +231,9 @@ int ocfs2_read_inline_data(struct inode *inode, struct page *page,
+ if (size > PAGE_CACHE_SIZE ||
+ size > ocfs2_max_inline_data(inode->i_sb)) {
+ ocfs2_error(inode->i_sb,
+- "Inode %llu has with inline data has bad size: %u",
+- (unsigned long long)OCFS2_I(inode)->ip_blkno, size);
++ "Inode %llu has with inline data has bad size: %Lu",
++ (unsigned long long)OCFS2_I(inode)->ip_blkno,
++ (unsigned long long)size);
+ return -EROFS;
+ }
+
+@@ -275,7 +283,7 @@ static int ocfs2_readpage(struct file *file, struct page *page)
+
+ mlog_entry("(0x%p, %lu)\n", file, (page ? page->index : 0));
+
+- ret = ocfs2_meta_lock_with_page(inode, NULL, 0, page);
++ ret = ocfs2_inode_lock_with_page(inode, NULL, 0, page);
+ if (ret != 0) {
+ if (ret == AOP_TRUNCATED_PAGE)
+ unlock = 0;
+@@ -285,7 +293,7 @@ static int ocfs2_readpage(struct file *file, struct page *page)
+
+ if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
+ ret = AOP_TRUNCATED_PAGE;
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+
+ /*
+@@ -305,25 +313,16 @@ static int ocfs2_readpage(struct file *file, struct page *page)
+ goto out_alloc;
+ }
+
+- ret = ocfs2_data_lock_with_page(inode, 0, page);
+- if (ret != 0) {
+- if (ret == AOP_TRUNCATED_PAGE)
+- unlock = 0;
+- mlog_errno(ret);
+- goto out_alloc;
+- }
+-
+ if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL)
+ ret = ocfs2_readpage_inline(inode, page);
+ else
+ ret = block_read_full_page(page, ocfs2_get_block);
+ unlock = 0;
+
+- ocfs2_data_unlock(inode, 0);
+ out_alloc:
+ up_read(&OCFS2_I(inode)->ip_alloc_sem);
+-out_meta_unlock:
+- ocfs2_meta_unlock(inode, 0);
++out_inode_unlock:
++ ocfs2_inode_unlock(inode, 0);
+ out:
+ if (unlock)
+ unlock_page(page);
+@@ -331,6 +330,62 @@ out:
+ return ret;
+ }
+
+/*
-+ * This is a special case for storages like raid5
-+ * we try to find stripe-aligned chunks for stripe-size requests
-+ * XXX should do so at least for multiples of stripe size as well
++ * This is used only for read-ahead. Failures or difficult to handle
++ * situations are safe to ignore.
++ *
++ * Right now, we don't bother with BH_Boundary - in-inode extent lists
++ * are quite large (243 extents on 4k blocks), so most inodes don't
++ * grow out to a tree. If need be, detecting boundary extents could
++ * trivially be added in a future version of ocfs2_get_block().
+ */
-+static void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
-+ struct ext4_buddy *e4b)
-+{
-+ struct super_block *sb = ac->ac_sb;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ void *bitmap = EXT4_MB_BITMAP(e4b);
-+ struct ext4_free_extent ex;
-+ ext4_fsblk_t first_group_block;
-+ ext4_fsblk_t a;
-+ ext4_grpblk_t i;
-+ int max;
-+
-+ BUG_ON(sbi->s_stripe == 0);
-+
-+ /* find first stripe-aligned block in group */
-+ first_group_block = e4b->bd_group * EXT4_BLOCKS_PER_GROUP(sb)
-+ + le32_to_cpu(sbi->s_es->s_first_data_block);
-+ a = first_group_block + sbi->s_stripe - 1;
-+ do_div(a, sbi->s_stripe);
-+ i = (a * sbi->s_stripe) - first_group_block;
-+
-+ while (i < EXT4_BLOCKS_PER_GROUP(sb)) {
-+ if (!mb_test_bit(i, bitmap)) {
-+ max = mb_find_extent(e4b, 0, i, sbi->s_stripe, &ex);
-+ if (max >= sbi->s_stripe) {
-+ ac->ac_found++;
-+ ac->ac_b_ex = ex;
-+ ext4_mb_use_best_found(ac, e4b);
-+ break;
-+ }
-+ }
-+ i += sbi->s_stripe;
-+ }
-+}
-+
-+static int ext4_mb_good_group(struct ext4_allocation_context *ac,
-+ ext4_group_t group, int cr)
-+{
-+ unsigned free, fragments;
-+ unsigned i, bits;
-+ struct ext4_group_desc *desc;
-+ struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
-+
-+ BUG_ON(cr < 0 || cr >= 4);
-+ BUG_ON(EXT4_MB_GRP_NEED_INIT(grp));
-+
-+ free = grp->bb_free;
-+ fragments = grp->bb_fragments;
-+ if (free == 0)
-+ return 0;
-+ if (fragments == 0)
-+ return 0;
-+
-+ switch (cr) {
-+ case 0:
-+ BUG_ON(ac->ac_2order == 0);
-+ /* If this group is uninitialized, skip it initially */
-+ desc = ext4_get_group_desc(ac->ac_sb, group, NULL);
-+ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))
-+ return 0;
-+
-+ bits = ac->ac_sb->s_blocksize_bits + 1;
-+ for (i = ac->ac_2order; i <= bits; i++)
-+ if (grp->bb_counters[i] > 0)
-+ return 1;
-+ break;
-+ case 1:
-+ if ((free / fragments) >= ac->ac_g_ex.fe_len)
-+ return 1;
-+ break;
-+ case 2:
-+ if (free >= ac->ac_g_ex.fe_len)
-+ return 1;
-+ break;
-+ case 3:
-+ return 1;
-+ default:
-+ BUG();
-+ }
-+
-+ return 0;
-+}
-+
-+static int ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
++static int ocfs2_readpages(struct file *filp, struct address_space *mapping,
++ struct list_head *pages, unsigned nr_pages)
+{
-+ ext4_group_t group;
-+ ext4_group_t i;
-+ int cr;
-+ int err = 0;
-+ int bsbits;
-+ struct ext4_sb_info *sbi;
-+ struct super_block *sb;
-+ struct ext4_buddy e4b;
-+ loff_t size, isize;
-+
-+ sb = ac->ac_sb;
-+ sbi = EXT4_SB(sb);
-+ BUG_ON(ac->ac_status == AC_STATUS_FOUND);
-+
-+ /* first, try the goal */
-+ err = ext4_mb_find_by_goal(ac, &e4b);
-+ if (err || ac->ac_status == AC_STATUS_FOUND)
-+ goto out;
-+
-+ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
-+ goto out;
++ int ret, err = -EIO;
++ struct inode *inode = mapping->host;
++ struct ocfs2_inode_info *oi = OCFS2_I(inode);
++ loff_t start;
++ struct page *last;
+
+ /*
-+ * ac->ac2_order is set only if the fe_len is a power of 2
-+ * if ac2_order is set we also set criteria to 0 so that we
-+ * try exact allocation using buddy.
-+ */
-+ i = fls(ac->ac_g_ex.fe_len);
-+ ac->ac_2order = 0;
-+ /*
-+ * We search using buddy data only if the order of the request
-+ * is greater than equal to the sbi_s_mb_order2_reqs
-+ * You can tune it via /proc/fs/ext4/<partition>/order2_req
++ * Use the nonblocking flag for the dlm code to avoid page
++ * lock inversion, but don't bother with retrying.
+ */
-+ if (i >= sbi->s_mb_order2_reqs) {
-+ /*
-+ * This should tell if fe_len is exactly power of 2
-+ */
-+ if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
-+ ac->ac_2order = i - 1;
-+ }
-+
-+ bsbits = ac->ac_sb->s_blocksize_bits;
-+ /* if stream allocation is enabled, use global goal */
-+ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
-+ isize = i_size_read(ac->ac_inode) >> bsbits;
-+ if (size < isize)
-+ size = isize;
++ ret = ocfs2_inode_lock_full(inode, NULL, 0, OCFS2_LOCK_NONBLOCK);
++ if (ret)
++ return err;
+
-+ if (size < sbi->s_mb_stream_request &&
-+ (ac->ac_flags & EXT4_MB_HINT_DATA)) {
-+ /* TBD: may be hot point */
-+ spin_lock(&sbi->s_md_lock);
-+ ac->ac_g_ex.fe_group = sbi->s_mb_last_group;
-+ ac->ac_g_ex.fe_start = sbi->s_mb_last_start;
-+ spin_unlock(&sbi->s_md_lock);
++ if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
++ ocfs2_inode_unlock(inode, 0);
++ return err;
+ }
+
-+ /* searching for the right group start from the goal value specified */
-+ group = ac->ac_g_ex.fe_group;
-+
-+ /* Let's just scan groups to find more-less suitable blocks */
-+ cr = ac->ac_2order ? 0 : 1;
+ /*
-+ * cr == 0 try to get exact allocation,
-+ * cr == 3 try to get anything
++ * Don't bother with inline-data. There isn't anything
++ * to read-ahead in that case anyway...
+ */
-+repeat:
-+ for (; cr < 4 && ac->ac_status == AC_STATUS_CONTINUE; cr++) {
-+ ac->ac_criteria = cr;
-+ for (i = 0; i < EXT4_SB(sb)->s_groups_count; group++, i++) {
-+ struct ext4_group_info *grp;
-+ struct ext4_group_desc *desc;
-+
-+ if (group == EXT4_SB(sb)->s_groups_count)
-+ group = 0;
-+
-+ /* quick check to skip empty groups */
-+ grp = ext4_get_group_info(ac->ac_sb, group);
-+ if (grp->bb_free == 0)
-+ continue;
-+
-+ /*
-+ * if the group is already init we check whether it is
-+ * a good group and if not we don't load the buddy
-+ */
-+ if (EXT4_MB_GRP_NEED_INIT(grp)) {
-+ /*
-+ * we need full data about the group
-+ * to make a good selection
-+ */
-+ err = ext4_mb_load_buddy(sb, group, &e4b);
-+ if (err)
-+ goto out;
-+ ext4_mb_release_desc(&e4b);
-+ }
-+
-+ /*
-+ * If the particular group doesn't satisfy our
-+ * criteria we continue with the next group
-+ */
-+ if (!ext4_mb_good_group(ac, group, cr))
-+ continue;
-+
-+ err = ext4_mb_load_buddy(sb, group, &e4b);
-+ if (err)
-+ goto out;
-+
-+ ext4_lock_group(sb, group);
-+ if (!ext4_mb_good_group(ac, group, cr)) {
-+ /* someone did allocation from this group */
-+ ext4_unlock_group(sb, group);
-+ ext4_mb_release_desc(&e4b);
-+ continue;
-+ }
-+
-+ ac->ac_groups_scanned++;
-+ desc = ext4_get_group_desc(sb, group, NULL);
-+ if (cr == 0 || (desc->bg_flags &
-+ cpu_to_le16(EXT4_BG_BLOCK_UNINIT) &&
-+ ac->ac_2order != 0))
-+ ext4_mb_simple_scan_group(ac, &e4b);
-+ else if (cr == 1 &&
-+ ac->ac_g_ex.fe_len == sbi->s_stripe)
-+ ext4_mb_scan_aligned(ac, &e4b);
-+ else
-+ ext4_mb_complex_scan_group(ac, &e4b);
++ if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL)
++ goto out_unlock;
+
-+ ext4_unlock_group(sb, group);
-+ ext4_mb_release_desc(&e4b);
++ /*
++ * Check whether a remote node truncated this file - we just
++ * drop out in that case as it's not worth handling here.
++ */
++ last = list_entry(pages->prev, struct page, lru);
++ start = (loff_t)last->index << PAGE_CACHE_SHIFT;
++ if (start >= i_size_read(inode))
++ goto out_unlock;
+
-+ if (ac->ac_status != AC_STATUS_CONTINUE)
-+ break;
-+ }
-+ }
++ err = mpage_readpages(mapping, pages, nr_pages, ocfs2_get_block);
+
-+ if (ac->ac_b_ex.fe_len > 0 && ac->ac_status != AC_STATUS_FOUND &&
-+ !(ac->ac_flags & EXT4_MB_HINT_FIRST)) {
-+ /*
-+ * We've been searching too long. Let's try to allocate
-+ * the best chunk we've found so far
-+ */
++out_unlock:
++ up_read(&oi->ip_alloc_sem);
++ ocfs2_inode_unlock(inode, 0);
+
-+ ext4_mb_try_best_found(ac, &e4b);
-+ if (ac->ac_status != AC_STATUS_FOUND) {
-+ /*
-+ * Someone more lucky has already allocated it.
-+ * The only thing we can do is just take first
-+ * found block(s)
-+ printk(KERN_DEBUG "EXT4-fs: someone won our chunk\n");
-+ */
-+ ac->ac_b_ex.fe_group = 0;
-+ ac->ac_b_ex.fe_start = 0;
-+ ac->ac_b_ex.fe_len = 0;
-+ ac->ac_status = AC_STATUS_CONTINUE;
-+ ac->ac_flags |= EXT4_MB_HINT_FIRST;
-+ cr = 3;
-+ atomic_inc(&sbi->s_mb_lost_chunks);
-+ goto repeat;
-+ }
-+ }
-+out:
+ return err;
+}
+
-+#ifdef EXT4_MB_HISTORY
-+struct ext4_mb_proc_session {
-+ struct ext4_mb_history *history;
-+ struct super_block *sb;
-+ int start;
-+ int max;
-+};
-+
-+static void *ext4_mb_history_skip_empty(struct ext4_mb_proc_session *s,
-+ struct ext4_mb_history *hs,
-+ int first)
-+{
-+ if (hs == s->history + s->max)
-+ hs = s->history;
-+ if (!first && hs == s->history + s->start)
-+ return NULL;
-+ while (hs->orig.fe_len == 0) {
-+ hs++;
-+ if (hs == s->history + s->max)
-+ hs = s->history;
-+ if (hs == s->history + s->start)
-+ return NULL;
-+ }
-+ return hs;
-+}
-+
-+static void *ext4_mb_seq_history_start(struct seq_file *seq, loff_t *pos)
-+{
-+ struct ext4_mb_proc_session *s = seq->private;
-+ struct ext4_mb_history *hs;
-+ int l = *pos;
+ /* Note: Because we don't support holes, our allocation has
+ * already happened (allocation writes zeros to the file data)
+ * so we don't have to worry about ordered writes in
+@@ -452,7 +507,7 @@ static sector_t ocfs2_bmap(struct address_space *mapping, sector_t block)
+ * accessed concurrently from multiple nodes.
+ */
+ if (!INODE_JOURNAL(inode)) {
+- err = ocfs2_meta_lock(inode, NULL, 0);
++ err = ocfs2_inode_lock(inode, NULL, 0);
+ if (err) {
+ if (err != -ENOENT)
+ mlog_errno(err);
+@@ -467,7 +522,7 @@ static sector_t ocfs2_bmap(struct address_space *mapping, sector_t block)
+
+ if (!INODE_JOURNAL(inode)) {
+ up_read(&OCFS2_I(inode)->ip_alloc_sem);
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+ }
+
+ if (err) {
+@@ -638,34 +693,12 @@ static ssize_t ocfs2_direct_IO(int rw,
+ if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL)
+ return 0;
+
+- if (!ocfs2_sparse_alloc(OCFS2_SB(inode->i_sb))) {
+- /*
+- * We get PR data locks even for O_DIRECT. This
+- * allows concurrent O_DIRECT I/O but doesn't let
+- * O_DIRECT with extending and buffered zeroing writes
+- * race. If they did race then the buffered zeroing
+- * could be written back after the O_DIRECT I/O. It's
+- * one thing to tell people not to mix buffered and
+- * O_DIRECT writes, but expecting them to understand
+- * that file extension is also an implicit buffered
+- * write is too much. By getting the PR we force
+- * writeback of the buffered zeroing before
+- * proceeding.
+- */
+- ret = ocfs2_data_lock(inode, 0);
+- if (ret < 0) {
+- mlog_errno(ret);
+- goto out;
+- }
+- ocfs2_data_unlock(inode, 0);
+- }
+-
+ ret = blockdev_direct_IO_no_locking(rw, iocb, inode,
+ inode->i_sb->s_bdev, iov, offset,
+ nr_segs,
+ ocfs2_direct_IO_get_blocks,
+ ocfs2_dio_end_io);
+-out:
+
-+ if (l == 0)
-+ return SEQ_START_TOKEN;
-+ hs = ext4_mb_history_skip_empty(s, s->history + s->start, 1);
-+ if (!hs)
-+ return NULL;
-+ while (--l && (hs = ext4_mb_history_skip_empty(s, ++hs, 0)) != NULL);
-+ return hs;
-+}
+ mlog_exit(ret);
+ return ret;
+ }
+@@ -1754,7 +1787,7 @@ static int ocfs2_write_begin(struct file *file, struct address_space *mapping,
+ struct buffer_head *di_bh = NULL;
+ struct inode *inode = mapping->host;
+
+- ret = ocfs2_meta_lock(inode, &di_bh, 1);
++ ret = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (ret) {
+ mlog_errno(ret);
+ return ret;
+@@ -1769,30 +1802,22 @@ static int ocfs2_write_begin(struct file *file, struct address_space *mapping,
+ */
+ down_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+- ret = ocfs2_data_lock(inode, 1);
+- if (ret) {
+- mlog_errno(ret);
+- goto out_fail;
+- }
+-
+ ret = ocfs2_write_begin_nolock(mapping, pos, len, flags, pagep,
+ fsdata, di_bh, NULL);
+ if (ret) {
+ mlog_errno(ret);
+- goto out_fail_data;
++ goto out_fail;
+ }
+
+ brelse(di_bh);
+
+ return 0;
+
+-out_fail_data:
+- ocfs2_data_unlock(inode, 1);
+ out_fail:
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+ brelse(di_bh);
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ return ret;
+ }
+@@ -1908,15 +1933,15 @@ static int ocfs2_write_end(struct file *file, struct address_space *mapping,
+
+ ret = ocfs2_write_end_nolock(mapping, pos, len, copied, page, fsdata);
+
+- ocfs2_data_unlock(inode, 1);
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ return ret;
+ }
+
+ const struct address_space_operations ocfs2_aops = {
+ .readpage = ocfs2_readpage,
++ .readpages = ocfs2_readpages,
+ .writepage = ocfs2_writepage,
+ .write_begin = ocfs2_write_begin,
+ .write_end = ocfs2_write_end,
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index c903741..f136639 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -79,7 +79,7 @@ int ocfs2_write_block(struct ocfs2_super *osb, struct buffer_head *bh,
+ * information for this bh as it's not marked locally
+ * uptodate. */
+ ret = -EIO;
+- brelse(bh);
++ put_bh(bh);
+ }
+
+ mutex_unlock(&OCFS2_I(inode)->ip_io_mutex);
+@@ -256,7 +256,7 @@ int ocfs2_read_blocks(struct ocfs2_super *osb, u64 block, int nr,
+ * for this bh as it's not marked locally
+ * uptodate. */
+ status = -EIO;
+- brelse(bh);
++ put_bh(bh);
+ bhs[i] = NULL;
+ continue;
+ }
+@@ -280,3 +280,64 @@ bail:
+ mlog_exit(status);
+ return status;
+ }
+
-+static void *ext4_mb_seq_history_next(struct seq_file *seq, void *v,
-+ loff_t *pos)
++/* Check whether the blkno is the super block or one of the backups. */
++static void ocfs2_check_super_or_backup(struct super_block *sb,
++ sector_t blkno)
+{
-+ struct ext4_mb_proc_session *s = seq->private;
-+ struct ext4_mb_history *hs = v;
-+
-+ ++*pos;
-+ if (v == SEQ_START_TOKEN)
-+ return ext4_mb_history_skip_empty(s, s->history + s->start, 1);
-+ else
-+ return ext4_mb_history_skip_empty(s, ++hs, 0);
-+}
++ int i;
++ u64 backup_blkno;
+
-+static int ext4_mb_seq_history_show(struct seq_file *seq, void *v)
-+{
-+ char buf[25], buf2[25], buf3[25], *fmt;
-+ struct ext4_mb_history *hs = v;
++ if (blkno == OCFS2_SUPER_BLOCK_BLKNO)
++ return;
+
-+ if (v == SEQ_START_TOKEN) {
-+ seq_printf(seq, "%-5s %-8s %-23s %-23s %-23s %-5s "
-+ "%-5s %-2s %-5s %-5s %-5s %-6s\n",
-+ "pid", "inode", "original", "goal", "result", "found",
-+ "grps", "cr", "flags", "merge", "tail", "broken");
-+ return 0;
++ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
++ backup_blkno = ocfs2_backup_super_blkno(sb, i);
++ if (backup_blkno == blkno)
++ return;
+ }
+
-+ if (hs->op == EXT4_MB_HISTORY_ALLOC) {
-+ fmt = "%-5u %-8u %-23s %-23s %-23s %-5u %-5u %-2u "
-+ "%-5u %-5s %-5u %-6u\n";
-+ sprintf(buf2, "%lu/%d/%u@%u", hs->result.fe_group,
-+ hs->result.fe_start, hs->result.fe_len,
-+ hs->result.fe_logical);
-+ sprintf(buf, "%lu/%d/%u@%u", hs->orig.fe_group,
-+ hs->orig.fe_start, hs->orig.fe_len,
-+ hs->orig.fe_logical);
-+ sprintf(buf3, "%lu/%d/%u@%u", hs->goal.fe_group,
-+ hs->goal.fe_start, hs->goal.fe_len,
-+ hs->goal.fe_logical);
-+ seq_printf(seq, fmt, hs->pid, hs->ino, buf, buf3, buf2,
-+ hs->found, hs->groups, hs->cr, hs->flags,
-+ hs->merged ? "M" : "", hs->tail,
-+ hs->buddy ? 1 << hs->buddy : 0);
-+ } else if (hs->op == EXT4_MB_HISTORY_PREALLOC) {
-+ fmt = "%-5u %-8u %-23s %-23s %-23s\n";
-+ sprintf(buf2, "%lu/%d/%u@%u", hs->result.fe_group,
-+ hs->result.fe_start, hs->result.fe_len,
-+ hs->result.fe_logical);
-+ sprintf(buf, "%lu/%d/%u@%u", hs->orig.fe_group,
-+ hs->orig.fe_start, hs->orig.fe_len,
-+ hs->orig.fe_logical);
-+ seq_printf(seq, fmt, hs->pid, hs->ino, buf, "", buf2);
-+ } else if (hs->op == EXT4_MB_HISTORY_DISCARD) {
-+ sprintf(buf2, "%lu/%d/%u", hs->result.fe_group,
-+ hs->result.fe_start, hs->result.fe_len);
-+ seq_printf(seq, "%-5u %-8u %-23s discard\n",
-+ hs->pid, hs->ino, buf2);
-+ } else if (hs->op == EXT4_MB_HISTORY_FREE) {
-+ sprintf(buf2, "%lu/%d/%u", hs->result.fe_group,
-+ hs->result.fe_start, hs->result.fe_len);
-+ seq_printf(seq, "%-5u %-8u %-23s free\n",
-+ hs->pid, hs->ino, buf2);
-+ }
-+ return 0;
++ BUG();
+}
+
-+static void ext4_mb_seq_history_stop(struct seq_file *seq, void *v)
++/*
++ * Writing the super block and its backups doesn't need to collaborate
++ * with the journal, so we don't need to lock ip_io_mutex, and an inode
++ * doesn't need to be passed into this function.
++ */
++int ocfs2_write_super_or_backup(struct ocfs2_super *osb,
++ struct buffer_head *bh)
+{
-+}
++ int ret = 0;
+
-+static struct seq_operations ext4_mb_seq_history_ops = {
-+ .start = ext4_mb_seq_history_start,
-+ .next = ext4_mb_seq_history_next,
-+ .stop = ext4_mb_seq_history_stop,
-+ .show = ext4_mb_seq_history_show,
-+};
++ mlog_entry_void();
+
-+static int ext4_mb_seq_history_open(struct inode *inode, struct file *file)
-+{
-+ struct super_block *sb = PDE(inode)->data;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ struct ext4_mb_proc_session *s;
-+ int rc;
-+ int size;
++ BUG_ON(buffer_jbd(bh));
++ ocfs2_check_super_or_backup(osb->sb, bh->b_blocknr);
+
-+ s = kmalloc(sizeof(*s), GFP_KERNEL);
-+ if (s == NULL)
-+ return -ENOMEM;
-+ s->sb = sb;
-+ size = sizeof(struct ext4_mb_history) * sbi->s_mb_history_max;
-+ s->history = kmalloc(size, GFP_KERNEL);
-+ if (s->history == NULL) {
-+ kfree(s);
-+ return -ENOMEM;
++ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb)) {
++ ret = -EROFS;
++ goto out;
+ }
+
-+ spin_lock(&sbi->s_mb_history_lock);
-+ memcpy(s->history, sbi->s_mb_history, size);
-+ s->max = sbi->s_mb_history_max;
-+ s->start = sbi->s_mb_history_cur % s->max;
-+ spin_unlock(&sbi->s_mb_history_lock);
-+
-+ rc = seq_open(file, &ext4_mb_seq_history_ops);
-+ if (rc == 0) {
-+ struct seq_file *m = (struct seq_file *)file->private_data;
-+ m->private = s;
-+ } else {
-+ kfree(s->history);
-+ kfree(s);
-+ }
-+ return rc;
++ lock_buffer(bh);
++ set_buffer_uptodate(bh);
+
-+}
++ /* remove from dirty list before I/O. */
++ clear_buffer_dirty(bh);
+
-+static int ext4_mb_seq_history_release(struct inode *inode, struct file *file)
-+{
-+ struct seq_file *seq = (struct seq_file *)file->private_data;
-+ struct ext4_mb_proc_session *s = seq->private;
-+ kfree(s->history);
-+ kfree(s);
-+ return seq_release(inode, file);
-+}
++ get_bh(bh); /* for end_buffer_write_sync() */
++ bh->b_end_io = end_buffer_write_sync;
++ submit_bh(WRITE, bh);
+
-+static ssize_t ext4_mb_seq_history_write(struct file *file,
-+ const char __user *buffer,
-+ size_t count, loff_t *ppos)
-+{
-+ struct seq_file *seq = (struct seq_file *)file->private_data;
-+ struct ext4_mb_proc_session *s = seq->private;
-+ struct super_block *sb = s->sb;
-+ char str[32];
-+ int value;
++ wait_on_buffer(bh);
+
-+ if (count >= sizeof(str)) {
-+ printk(KERN_ERR "EXT4-fs: %s string too long, max %u bytes\n",
-+ "mb_history", (int)sizeof(str));
-+ return -EOVERFLOW;
++ if (!buffer_uptodate(bh)) {
++ ret = -EIO;
++ put_bh(bh);
+ }
+
-+ if (copy_from_user(str, buffer, count))
-+ return -EFAULT;
-+
-+ value = simple_strtol(str, NULL, 0);
-+ if (value < 0)
-+ return -ERANGE;
-+ EXT4_SB(sb)->s_mb_history_filter = value;
-+
-+ return count;
++out:
++ mlog_exit(ret);
++ return ret;
+}
+diff --git a/fs/ocfs2/buffer_head_io.h b/fs/ocfs2/buffer_head_io.h
+index 6cc2093..c2e7861 100644
+--- a/fs/ocfs2/buffer_head_io.h
++++ b/fs/ocfs2/buffer_head_io.h
+@@ -47,6 +47,8 @@ int ocfs2_read_blocks(struct ocfs2_super *osb,
+ int flags,
+ struct inode *inode);
+
++int ocfs2_write_super_or_backup(struct ocfs2_super *osb,
++ struct buffer_head *bh);
+
+ #define OCFS2_BH_CACHED 1
+ #define OCFS2_BH_READAHEAD 8
+diff --git a/fs/ocfs2/cluster/heartbeat.h b/fs/ocfs2/cluster/heartbeat.h
+index 35397dd..e511339 100644
+--- a/fs/ocfs2/cluster/heartbeat.h
++++ b/fs/ocfs2/cluster/heartbeat.h
+@@ -35,7 +35,7 @@
+ #define O2HB_LIVE_THRESHOLD 2
+ /* number of equal samples to be seen as dead */
+ extern unsigned int o2hb_dead_threshold;
+-#define O2HB_DEFAULT_DEAD_THRESHOLD 7
++#define O2HB_DEFAULT_DEAD_THRESHOLD 31
+ /* Otherwise MAX_WRITE_TIMEOUT will be zero... */
+ #define O2HB_MIN_DEAD_THRESHOLD 2
+ #define O2HB_MAX_WRITE_TIMEOUT_MS (O2HB_REGION_TIMEOUT_MS * (o2hb_dead_threshold - 1))
+diff --git a/fs/ocfs2/cluster/masklog.c b/fs/ocfs2/cluster/masklog.c
+index a4882c8..23c732f 100644
+--- a/fs/ocfs2/cluster/masklog.c
++++ b/fs/ocfs2/cluster/masklog.c
+@@ -146,7 +146,7 @@ static struct kset mlog_kset = {
+ .kobj = {.ktype = &mlog_ktype},
+ };
+
+-int mlog_sys_init(struct kset *o2cb_subsys)
++int mlog_sys_init(struct kset *o2cb_kset)
+ {
+ int i = 0;
+
+@@ -157,7 +157,7 @@ int mlog_sys_init(struct kset *o2cb_subsys)
+ mlog_attr_ptrs[i] = NULL;
+
+ kobject_set_name(&mlog_kset.kobj, "logmask");
+- kobj_set_kset_s(&mlog_kset, *o2cb_subsys);
++ mlog_kset.kobj.kset = o2cb_kset;
+ return kset_register(&mlog_kset);
+ }
+
+diff --git a/fs/ocfs2/cluster/sys.c b/fs/ocfs2/cluster/sys.c
+index 64f6f37..0c095ce 100644
+--- a/fs/ocfs2/cluster/sys.c
++++ b/fs/ocfs2/cluster/sys.c
+@@ -28,96 +28,55 @@
+ #include <linux/module.h>
+ #include <linux/kobject.h>
+ #include <linux/sysfs.h>
++#include <linux/fs.h>
+
+ #include "ocfs2_nodemanager.h"
+ #include "masklog.h"
+ #include "sys.h"
+
+-struct o2cb_attribute {
+- struct attribute attr;
+- ssize_t (*show)(char *buf);
+- ssize_t (*store)(const char *buf, size_t count);
+-};
+-
+-#define O2CB_ATTR(_name, _mode, _show, _store) \
+-struct o2cb_attribute o2cb_attr_##_name = __ATTR(_name, _mode, _show, _store)
+-
+-#define to_o2cb_attr(_attr) container_of(_attr, struct o2cb_attribute, attr)
+
+-static ssize_t o2cb_interface_revision_show(char *buf)
++static ssize_t version_show(struct kobject *kobj, struct kobj_attribute *attr,
++ char *buf)
+ {
+ return snprintf(buf, PAGE_SIZE, "%u\n", O2NM_API_VERSION);
+ }
+-
+-static O2CB_ATTR(interface_revision, S_IFREG | S_IRUGO, o2cb_interface_revision_show, NULL);
++static struct kobj_attribute attr_version =
++ __ATTR(interface_revision, S_IFREG | S_IRUGO, version_show, NULL);
+
+ static struct attribute *o2cb_attrs[] = {
+- &o2cb_attr_interface_revision.attr,
++ &attr_version.attr,
+ NULL,
+ };
+
+-static ssize_t
+-o2cb_show(struct kobject * kobj, struct attribute * attr, char * buffer);
+-static ssize_t
+-o2cb_store(struct kobject * kobj, struct attribute * attr,
+- const char * buffer, size_t count);
+-static struct sysfs_ops o2cb_sysfs_ops = {
+- .show = o2cb_show,
+- .store = o2cb_store,
++static struct attribute_group o2cb_attr_group = {
++ .attrs = o2cb_attrs,
+ };
+
+-static struct kobj_type o2cb_subsys_type = {
+- .default_attrs = o2cb_attrs,
+- .sysfs_ops = &o2cb_sysfs_ops,
+-};
+-
+-/* gives us o2cb_subsys */
+-static decl_subsys(o2cb, NULL, NULL);
+-
+-static ssize_t
+-o2cb_show(struct kobject * kobj, struct attribute * attr, char * buffer)
+-{
+- struct o2cb_attribute *o2cb_attr = to_o2cb_attr(attr);
+- struct kset *sbs = to_kset(kobj);
+-
+- BUG_ON(sbs != &o2cb_subsys);
+-
+- if (o2cb_attr->show)
+- return o2cb_attr->show(buffer);
+- return -EIO;
+-}
+-
+-static ssize_t
+-o2cb_store(struct kobject * kobj, struct attribute * attr,
+- const char * buffer, size_t count)
+-{
+- struct o2cb_attribute *o2cb_attr = to_o2cb_attr(attr);
+- struct kset *sbs = to_kset(kobj);
+-
+- BUG_ON(sbs != &o2cb_subsys);
+-
+- if (o2cb_attr->store)
+- return o2cb_attr->store(buffer, count);
+- return -EIO;
+-}
++static struct kset *o2cb_kset;
+
+ void o2cb_sys_shutdown(void)
+ {
+ mlog_sys_shutdown();
+- subsystem_unregister(&o2cb_subsys);
++ kset_unregister(o2cb_kset);
+ }
+
+ int o2cb_sys_init(void)
+ {
+ int ret;
+
+- o2cb_subsys.kobj.ktype = &o2cb_subsys_type;
+- ret = subsystem_register(&o2cb_subsys);
++ o2cb_kset = kset_create_and_add("o2cb", NULL, NULL);
++ if (!o2cb_kset)
++ return -ENOMEM;
+
-+static struct file_operations ext4_mb_seq_history_fops = {
-+ .owner = THIS_MODULE,
-+ .open = ext4_mb_seq_history_open,
-+ .read = seq_read,
-+ .write = ext4_mb_seq_history_write,
-+ .llseek = seq_lseek,
-+ .release = ext4_mb_seq_history_release,
-+};
-+
-+static void *ext4_mb_seq_groups_start(struct seq_file *seq, loff_t *pos)
-+{
-+ struct super_block *sb = seq->private;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ ext4_group_t group;
-+
-+ if (*pos < 0 || *pos >= sbi->s_groups_count)
-+ return NULL;
++ ret = sysfs_create_group(&o2cb_kset->kobj, &o2cb_attr_group);
+ if (ret)
+- return ret;
++ goto error;
+
+- ret = mlog_sys_init(&o2cb_subsys);
++ ret = mlog_sys_init(o2cb_kset);
+ if (ret)
+- subsystem_unregister(&o2cb_subsys);
++ goto error;
++ return 0;
++error:
++ kset_unregister(o2cb_kset);
+ return ret;
+ }
+diff --git a/fs/ocfs2/cluster/tcp.h b/fs/ocfs2/cluster/tcp.h
+index da880fc..f36f66a 100644
+--- a/fs/ocfs2/cluster/tcp.h
++++ b/fs/ocfs2/cluster/tcp.h
+@@ -60,8 +60,8 @@ typedef void (o2net_post_msg_handler_func)(int status, void *data,
+ /* same as hb delay, we're waiting for another node to recognize our hb */
+ #define O2NET_RECONNECT_DELAY_MS_DEFAULT 2000
+
+-#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 5000
+-#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 10000
++#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 2000
++#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 30000
+
+
+ /* TODO: figure this out.... */
+diff --git a/fs/ocfs2/cluster/tcp_internal.h b/fs/ocfs2/cluster/tcp_internal.h
+index 9606111..b2e832a 100644
+--- a/fs/ocfs2/cluster/tcp_internal.h
++++ b/fs/ocfs2/cluster/tcp_internal.h
+@@ -38,6 +38,12 @@
+ * locking semantics of the file system using the protocol. It should
+ * be somewhere else, I'm sure, but right now it isn't.
+ *
++ * New in version 10:
++ * - Meta/data locks combined
++ *
++ * New in version 9:
++ * - All votes removed
++ *
+ * New in version 8:
+ * - Replace delete inode votes with a cluster lock
+ *
+@@ -60,7 +66,7 @@
+ * - full 64 bit i_size in the metadata lock lvbs
+ * - introduction of "rw" lock and pushing meta/data locking down
+ */
+-#define O2NET_PROTOCOL_VERSION 8ULL
++#define O2NET_PROTOCOL_VERSION 10ULL
+ struct o2net_handshake {
+ __be64 protocol_version;
+ __be64 connector_id;
+diff --git a/fs/ocfs2/cluster/ver.c b/fs/ocfs2/cluster/ver.c
+index 7286c48..a56eee6 100644
+--- a/fs/ocfs2/cluster/ver.c
++++ b/fs/ocfs2/cluster/ver.c
+@@ -28,7 +28,7 @@
+
+ #include "ver.h"
+
+-#define CLUSTER_BUILD_VERSION "1.3.3"
++#define CLUSTER_BUILD_VERSION "1.5.0"
+
+ #define VERSION_STR "OCFS2 Node Manager " CLUSTER_BUILD_VERSION
+
+diff --git a/fs/ocfs2/dcache.c b/fs/ocfs2/dcache.c
+index 9923278..b1cc7c3 100644
+--- a/fs/ocfs2/dcache.c
++++ b/fs/ocfs2/dcache.c
+@@ -128,9 +128,9 @@ static int ocfs2_match_dentry(struct dentry *dentry,
+ /*
+ * Walk the inode alias list, and find a dentry which has a given
+ * parent. ocfs2_dentry_attach_lock() wants to find _any_ alias as it
+- * is looking for a dentry_lock reference. The vote thread is looking
+- * to unhash aliases, so we allow it to skip any that already have
+- * that property.
++ * is looking for a dentry_lock reference. The downconvert thread is
++ * looking to unhash aliases, so we allow it to skip any that already
++ * have that property.
+ */
+ struct dentry *ocfs2_find_local_alias(struct inode *inode,
+ u64 parent_blkno,
+@@ -266,7 +266,7 @@ int ocfs2_dentry_attach_lock(struct dentry *dentry,
+ dl->dl_count = 0;
+ /*
+ * Does this have to happen below, for all attaches, in case
+- * the struct inode gets blown away by votes?
++ * the struct inode gets blown away by the downconvert thread?
+ */
+ dl->dl_inode = igrab(inode);
+ dl->dl_parent_blkno = parent_blkno;
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index 63b28fd..6b0107f 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -846,14 +846,14 @@ int ocfs2_readdir(struct file * filp, void * dirent, filldir_t filldir)
+ mlog_entry("dirino=%llu\n",
+ (unsigned long long)OCFS2_I(inode)->ip_blkno);
+
+- error = ocfs2_meta_lock_atime(inode, filp->f_vfsmnt, &lock_level);
++ error = ocfs2_inode_lock_atime(inode, filp->f_vfsmnt, &lock_level);
+ if (lock_level && error >= 0) {
+ /* We release EX lock which used to update atime
+ * and get PR lock again to reduce contention
+ * on commonly accessed directories. */
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ lock_level = 0;
+- error = ocfs2_meta_lock(inode, NULL, 0);
++ error = ocfs2_inode_lock(inode, NULL, 0);
+ }
+ if (error < 0) {
+ if (error != -ENOENT)
+@@ -865,7 +865,7 @@ int ocfs2_readdir(struct file * filp, void * dirent, filldir_t filldir)
+ error = ocfs2_dir_foreach_blk(inode, &filp->f_version, &filp->f_pos,
+ dirent, filldir, NULL);
+
+- ocfs2_meta_unlock(inode, lock_level);
++ ocfs2_inode_unlock(inode, lock_level);
+
+ bail_nolock:
+ mlog_exit(error);
+diff --git a/fs/ocfs2/dlm/dlmfsver.c b/fs/ocfs2/dlm/dlmfsver.c
+index d2be3ad..a733b33 100644
+--- a/fs/ocfs2/dlm/dlmfsver.c
++++ b/fs/ocfs2/dlm/dlmfsver.c
+@@ -28,7 +28,7 @@
+
+ #include "dlmfsver.h"
+
+-#define DLM_BUILD_VERSION "1.3.3"
++#define DLM_BUILD_VERSION "1.5.0"
+
+ #define VERSION_STR "OCFS2 DLMFS " DLM_BUILD_VERSION
+
+diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
+index 2fde7bf..91f747b 100644
+--- a/fs/ocfs2/dlm/dlmrecovery.c
++++ b/fs/ocfs2/dlm/dlmrecovery.c
+@@ -2270,6 +2270,12 @@ static void __dlm_hb_node_down(struct dlm_ctxt *dlm, int idx)
+ }
+ }
+
++ /* Clean up join state on node death. */
++ if (dlm->joining_node == idx) {
++ mlog(0, "Clearing join state for node %u\n", idx);
++ __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
++ }
+
-+ group = *pos + 1;
-+ return (void *) group;
-+}
+ /* check to see if the node is already considered dead */
+ if (!test_bit(idx, dlm->live_nodes_map)) {
+ mlog(0, "for domain %s, node %d is already dead. "
+@@ -2288,12 +2294,6 @@ static void __dlm_hb_node_down(struct dlm_ctxt *dlm, int idx)
+
+ clear_bit(idx, dlm->live_nodes_map);
+
+- /* Clean up join state on node death. */
+- if (dlm->joining_node == idx) {
+- mlog(0, "Clearing join state for node %u\n", idx);
+- __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
+- }
+-
+ /* make sure local cleanup occurs before the heartbeat events */
+ if (!test_bit(idx, dlm->recovery_map))
+ dlm_do_local_recovery_cleanup(dlm, idx);
+@@ -2321,6 +2321,13 @@ void dlm_hb_node_down_cb(struct o2nm_node *node, int idx, void *data)
+ if (!dlm_grab(dlm))
+ return;
+
++ /*
++ * This will notify any dlm users that a node in our domain
++ * went away without notifying us first.
++ */
++ if (test_bit(idx, dlm->domain_map))
++ dlm_fire_domain_eviction_callbacks(dlm, idx);
+
-+static void *ext4_mb_seq_groups_next(struct seq_file *seq, void *v, loff_t *pos)
-+{
-+ struct super_block *sb = seq->private;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ ext4_group_t group;
+ spin_lock(&dlm->spinlock);
+ __dlm_hb_node_down(dlm, idx);
+ spin_unlock(&dlm->spinlock);
+diff --git a/fs/ocfs2/dlm/dlmver.c b/fs/ocfs2/dlm/dlmver.c
+index 7ef2653..dfc0da4 100644
+--- a/fs/ocfs2/dlm/dlmver.c
++++ b/fs/ocfs2/dlm/dlmver.c
+@@ -28,7 +28,7 @@
+
+ #include "dlmver.h"
+
+-#define DLM_BUILD_VERSION "1.3.3"
++#define DLM_BUILD_VERSION "1.5.0"
+
+ #define VERSION_STR "OCFS2 DLM " DLM_BUILD_VERSION
+
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 4e97dcc..3867244 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -55,7 +55,6 @@
+ #include "slot_map.h"
+ #include "super.h"
+ #include "uptodate.h"
+-#include "vote.h"
+
+ #include "buffer_head_io.h"
+
+@@ -69,6 +68,7 @@ struct ocfs2_mask_waiter {
+
+ static struct ocfs2_super *ocfs2_get_dentry_osb(struct ocfs2_lock_res *lockres);
+ static struct ocfs2_super *ocfs2_get_inode_osb(struct ocfs2_lock_res *lockres);
++static struct ocfs2_super *ocfs2_get_file_osb(struct ocfs2_lock_res *lockres);
+
+ /*
+ * Return value from ->downconvert_worker functions.
+@@ -153,10 +153,10 @@ struct ocfs2_lock_res_ops {
+ struct ocfs2_super * (*get_osb)(struct ocfs2_lock_res *);
+
+ /*
+- * Optionally called in the downconvert (or "vote") thread
+- * after a successful downconvert. The lockres will not be
+- * referenced after this callback is called, so it is safe to
+- * free memory, etc.
++ * Optionally called in the downconvert thread after a
++ * successful downconvert. The lockres will not be referenced
++ * after this callback is called, so it is safe to free
++ * memory, etc.
+ *
+ * The exact semantics of when this is called are controlled
+ * by ->downconvert_worker()
+@@ -225,17 +225,12 @@ static struct ocfs2_lock_res_ops ocfs2_inode_rw_lops = {
+ .flags = 0,
+ };
+
+-static struct ocfs2_lock_res_ops ocfs2_inode_meta_lops = {
++static struct ocfs2_lock_res_ops ocfs2_inode_inode_lops = {
+ .get_osb = ocfs2_get_inode_osb,
+ .check_downconvert = ocfs2_check_meta_downconvert,
+ .set_lvb = ocfs2_set_meta_lvb,
+- .flags = LOCK_TYPE_REQUIRES_REFRESH|LOCK_TYPE_USES_LVB,
+-};
+-
+-static struct ocfs2_lock_res_ops ocfs2_inode_data_lops = {
+- .get_osb = ocfs2_get_inode_osb,
+ .downconvert_worker = ocfs2_data_convert_worker,
+- .flags = 0,
++ .flags = LOCK_TYPE_REQUIRES_REFRESH|LOCK_TYPE_USES_LVB,
+ };
+
+ static struct ocfs2_lock_res_ops ocfs2_super_lops = {
+@@ -258,10 +253,14 @@ static struct ocfs2_lock_res_ops ocfs2_inode_open_lops = {
+ .flags = 0,
+ };
+
++static struct ocfs2_lock_res_ops ocfs2_flock_lops = {
++ .get_osb = ocfs2_get_file_osb,
++ .flags = 0,
++};
+
-+ ++*pos;
-+ if (*pos < 0 || *pos >= sbi->s_groups_count)
-+ return NULL;
-+ group = *pos + 1;
-+ return (void *) group;;
-+}
+ static inline int ocfs2_is_inode_lock(struct ocfs2_lock_res *lockres)
+ {
+ return lockres->l_type == OCFS2_LOCK_TYPE_META ||
+- lockres->l_type == OCFS2_LOCK_TYPE_DATA ||
+ lockres->l_type == OCFS2_LOCK_TYPE_RW ||
+ lockres->l_type == OCFS2_LOCK_TYPE_OPEN;
+ }
+@@ -310,12 +309,24 @@ static inline void ocfs2_recover_from_dlm_error(struct ocfs2_lock_res *lockres,
+ "resource %s: %s\n", dlm_errname(_stat), _func, \
+ _lockres->l_name, dlm_errmsg(_stat)); \
+ } while (0)
+-static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
+- struct ocfs2_lock_res *lockres);
+-static int ocfs2_meta_lock_update(struct inode *inode,
++static int ocfs2_downconvert_thread(void *arg);
++static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
++ struct ocfs2_lock_res *lockres);
++static int ocfs2_inode_lock_update(struct inode *inode,
+ struct buffer_head **bh);
+ static void ocfs2_drop_osb_locks(struct ocfs2_super *osb);
+ static inline int ocfs2_highest_compat_lock_level(int level);
++static void ocfs2_prepare_downconvert(struct ocfs2_lock_res *lockres,
++ int new_level);
++static int ocfs2_downconvert_lock(struct ocfs2_super *osb,
++ struct ocfs2_lock_res *lockres,
++ int new_level,
++ int lvb);
++static int ocfs2_prepare_cancel_convert(struct ocfs2_super *osb,
++ struct ocfs2_lock_res *lockres);
++static int ocfs2_cancel_convert(struct ocfs2_super *osb,
++ struct ocfs2_lock_res *lockres);
+
-+static int ext4_mb_seq_groups_show(struct seq_file *seq, void *v)
+
+ static void ocfs2_build_lock_name(enum ocfs2_lock_type type,
+ u64 blkno,
+@@ -402,10 +413,7 @@ void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
+ ops = &ocfs2_inode_rw_lops;
+ break;
+ case OCFS2_LOCK_TYPE_META:
+- ops = &ocfs2_inode_meta_lops;
+- break;
+- case OCFS2_LOCK_TYPE_DATA:
+- ops = &ocfs2_inode_data_lops;
++ ops = &ocfs2_inode_inode_lops;
+ break;
+ case OCFS2_LOCK_TYPE_OPEN:
+ ops = &ocfs2_inode_open_lops;
+@@ -428,6 +436,13 @@ static struct ocfs2_super *ocfs2_get_inode_osb(struct ocfs2_lock_res *lockres)
+ return OCFS2_SB(inode->i_sb);
+ }
+
++static struct ocfs2_super *ocfs2_get_file_osb(struct ocfs2_lock_res *lockres)
+{
-+ struct super_block *sb = seq->private;
-+ long group = (long) v;
-+ int i;
-+ int err;
-+ struct ext4_buddy e4b;
-+ struct sg {
-+ struct ext4_group_info info;
-+ unsigned short counters[16];
-+ } sg;
-+
-+ group--;
-+ if (group == 0)
-+ seq_printf(seq, "#%-5s: %-5s %-5s %-5s "
-+ "[ %-5s %-5s %-5s %-5s %-5s %-5s %-5s "
-+ "%-5s %-5s %-5s %-5s %-5s %-5s %-5s ]\n",
-+ "group", "free", "frags", "first",
-+ "2^0", "2^1", "2^2", "2^3", "2^4", "2^5", "2^6",
-+ "2^7", "2^8", "2^9", "2^10", "2^11", "2^12", "2^13");
-+
-+ i = (sb->s_blocksize_bits + 2) * sizeof(sg.info.bb_counters[0]) +
-+ sizeof(struct ext4_group_info);
-+ err = ext4_mb_load_buddy(sb, group, &e4b);
-+ if (err) {
-+ seq_printf(seq, "#%-5lu: I/O error\n", group);
-+ return 0;
-+ }
-+ ext4_lock_group(sb, group);
-+ memcpy(&sg, ext4_get_group_info(sb, group), i);
-+ ext4_unlock_group(sb, group);
-+ ext4_mb_release_desc(&e4b);
-+
-+ seq_printf(seq, "#%-5lu: %-5u %-5u %-5u [", group, sg.info.bb_free,
-+ sg.info.bb_fragments, sg.info.bb_first_free);
-+ for (i = 0; i <= 13; i++)
-+ seq_printf(seq, " %-5u", i <= sb->s_blocksize_bits + 1 ?
-+ sg.info.bb_counters[i] : 0);
-+ seq_printf(seq, " ]\n");
-+
-+ return 0;
-+}
++ struct ocfs2_file_private *fp = lockres->l_priv;
+
-+static void ext4_mb_seq_groups_stop(struct seq_file *seq, void *v)
-+{
++ return OCFS2_SB(fp->fp_file->f_mapping->host->i_sb);
+}
+
-+static struct seq_operations ext4_mb_seq_groups_ops = {
-+ .start = ext4_mb_seq_groups_start,
-+ .next = ext4_mb_seq_groups_next,
-+ .stop = ext4_mb_seq_groups_stop,
-+ .show = ext4_mb_seq_groups_show,
-+};
-+
-+static int ext4_mb_seq_groups_open(struct inode *inode, struct file *file)
+ static __u64 ocfs2_get_dentry_lock_ino(struct ocfs2_lock_res *lockres)
+ {
+ __be64 inode_blkno_be;
+@@ -508,6 +523,21 @@ static void ocfs2_rename_lock_res_init(struct ocfs2_lock_res *res,
+ &ocfs2_rename_lops, osb);
+ }
+
++void ocfs2_file_lock_res_init(struct ocfs2_lock_res *lockres,
++ struct ocfs2_file_private *fp)
+{
-+ struct super_block *sb = PDE(inode)->data;
-+ int rc;
-+
-+ rc = seq_open(file, &ext4_mb_seq_groups_ops);
-+ if (rc == 0) {
-+ struct seq_file *m = (struct seq_file *)file->private_data;
-+ m->private = sb;
-+ }
-+ return rc;
++ struct inode *inode = fp->fp_file->f_mapping->host;
++ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+
++ ocfs2_lock_res_init_once(lockres);
++ ocfs2_build_lock_name(OCFS2_LOCK_TYPE_FLOCK, oi->ip_blkno,
++ inode->i_generation, lockres->l_name);
++ ocfs2_lock_res_init_common(OCFS2_SB(inode->i_sb), lockres,
++ OCFS2_LOCK_TYPE_FLOCK, &ocfs2_flock_lops,
++ fp);
++ lockres->l_flags |= OCFS2_LOCK_NOCACHE;
+}
+
-+static struct file_operations ext4_mb_seq_groups_fops = {
-+ .owner = THIS_MODULE,
-+ .open = ext4_mb_seq_groups_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = seq_release,
-+};
+ void ocfs2_lock_res_free(struct ocfs2_lock_res *res)
+ {
+ mlog_entry_void();
+@@ -724,6 +754,13 @@ static void ocfs2_blocking_ast(void *opaque, int level)
+ lockres->l_name, level, lockres->l_level,
+ ocfs2_lock_type_string(lockres->l_type));
+
++ /*
++ * We can skip the bast for locks which don't enable caching -
++ * they'll be dropped at the earliest possible time anyway.
++ */
++ if (lockres->l_flags & OCFS2_LOCK_NOCACHE)
++ return;
+
-+static void ext4_mb_history_release(struct super_block *sb)
+ spin_lock_irqsave(&lockres->l_lock, flags);
+ needs_downconvert = ocfs2_generic_handle_bast(lockres, level);
+ if (needs_downconvert)
+@@ -732,7 +769,7 @@ static void ocfs2_blocking_ast(void *opaque, int level)
+
+ wake_up(&lockres->l_event);
+
+- ocfs2_kick_vote_thread(osb);
++ ocfs2_wake_downconvert_thread(osb);
+ }
+
+ static void ocfs2_locking_ast(void *opaque)
+@@ -935,6 +972,21 @@ static int lockres_remove_mask_waiter(struct ocfs2_lock_res *lockres,
+
+ }
+
++static int ocfs2_wait_for_mask_interruptible(struct ocfs2_mask_waiter *mw,
++ struct ocfs2_lock_res *lockres)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+
-+ remove_proc_entry("mb_groups", sbi->s_mb_proc);
-+ remove_proc_entry("mb_history", sbi->s_mb_proc);
++ int ret;
+
-+ kfree(sbi->s_mb_history);
++ ret = wait_for_completion_interruptible(&mw->mw_complete);
++ if (ret)
++ lockres_remove_mask_waiter(lockres, mw);
++ else
++ ret = mw->mw_status;
++ /* Re-arm the completion in case we want to wait on it again */
++ INIT_COMPLETION(mw->mw_complete);
++ return ret;
+}
+
-+static void ext4_mb_history_init(struct super_block *sb)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ int i;
-+
-+ if (sbi->s_mb_proc != NULL) {
-+ struct proc_dir_entry *p;
-+ p = create_proc_entry("mb_history", S_IRUGO, sbi->s_mb_proc);
-+ if (p) {
-+ p->proc_fops = &ext4_mb_seq_history_fops;
-+ p->data = sb;
-+ }
-+ p = create_proc_entry("mb_groups", S_IRUGO, sbi->s_mb_proc);
-+ if (p) {
-+ p->proc_fops = &ext4_mb_seq_groups_fops;
-+ p->data = sb;
+ static int ocfs2_cluster_lock(struct ocfs2_super *osb,
+ struct ocfs2_lock_res *lockres,
+ int level,
+@@ -1089,7 +1141,7 @@ static void ocfs2_cluster_unlock(struct ocfs2_super *osb,
+ mlog_entry_void();
+ spin_lock_irqsave(&lockres->l_lock, flags);
+ ocfs2_dec_holders(lockres, level);
+- ocfs2_vote_on_unlock(osb, lockres);
++ ocfs2_downconvert_on_unlock(osb, lockres);
+ spin_unlock_irqrestore(&lockres->l_lock, flags);
+ mlog_exit_void();
+ }
+@@ -1147,13 +1199,7 @@ int ocfs2_create_new_inode_locks(struct inode *inode)
+ * We don't want to use LKM_LOCAL on a meta data lock as they
+ * don't use a generation in their lock names.
+ */
+- ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_meta_lockres, 1, 0);
+- if (ret) {
+- mlog_errno(ret);
+- goto bail;
+- }
+-
+- ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_data_lockres, 1, 1);
++ ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_inode_lockres, 1, 0);
+ if (ret) {
+ mlog_errno(ret);
+ goto bail;
+@@ -1311,76 +1357,221 @@ out:
+ mlog_exit_void();
+ }
+
+-int ocfs2_data_lock_full(struct inode *inode,
+- int write,
+- int arg_flags)
++static int ocfs2_flock_handle_signal(struct ocfs2_lock_res *lockres,
++ int level)
+ {
+- int status = 0, level;
+- struct ocfs2_lock_res *lockres;
+- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
++ int ret;
++ struct ocfs2_super *osb = ocfs2_get_lockres_osb(lockres);
++ unsigned long flags;
++ struct ocfs2_mask_waiter mw;
+
+- BUG_ON(!inode);
++ ocfs2_init_mask_waiter(&mw);
+
+- mlog_entry_void();
++retry_cancel:
++ spin_lock_irqsave(&lockres->l_lock, flags);
++ if (lockres->l_flags & OCFS2_LOCK_BUSY) {
++ ret = ocfs2_prepare_cancel_convert(osb, lockres);
++ if (ret) {
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
++ ret = ocfs2_cancel_convert(osb, lockres);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out;
++ }
++ goto retry_cancel;
+ }
++ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
+
+- mlog(0, "inode %llu take %s DATA lock\n",
+- (unsigned long long)OCFS2_I(inode)->ip_blkno,
+- write ? "EXMODE" : "PRMODE");
++ ocfs2_wait_for_mask(&mw);
++ goto retry_cancel;
+ }
+
+- /* We'll allow faking a readonly data lock for
+- * rodevices. */
+- if (ocfs2_is_hard_readonly(OCFS2_SB(inode->i_sb))) {
+- if (write) {
+- status = -EROFS;
+- mlog_errno(status);
++ ret = -ERESTARTSYS;
++ /*
++ * We may still have gotten the lock, in which case there's no
++ * point to restarting the syscall.
++ */
++ if (lockres->l_level == level)
++ ret = 0;
+
-+ sbi->s_mb_history_max = 1000;
-+ sbi->s_mb_history_cur = 0;
-+ spin_lock_init(&sbi->s_mb_history_lock);
-+ i = sbi->s_mb_history_max * sizeof(struct ext4_mb_history);
-+ sbi->s_mb_history = kmalloc(i, GFP_KERNEL);
-+ if (likely(sbi->s_mb_history != NULL))
-+ memset(sbi->s_mb_history, 0, i);
-+ /* if we can't allocate history, then we simple won't use it */
-+}
-+
-+static void ext4_mb_store_history(struct ext4_allocation_context *ac)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+ struct ext4_mb_history h;
-+
-+ if (unlikely(sbi->s_mb_history == NULL))
-+ return;
-+
-+ if (!(ac->ac_op & sbi->s_mb_history_filter))
-+ return;
++ mlog(0, "Cancel returning %d. flags: 0x%lx, level: %d, act: %d\n", ret,
++ lockres->l_flags, lockres->l_level, lockres->l_action);
+
-+ h.op = ac->ac_op;
-+ h.pid = current->pid;
-+ h.ino = ac->ac_inode ? ac->ac_inode->i_ino : 0;
-+ h.orig = ac->ac_o_ex;
-+ h.result = ac->ac_b_ex;
-+ h.flags = ac->ac_flags;
-+ h.found = ac->ac_found;
-+ h.groups = ac->ac_groups_scanned;
-+ h.cr = ac->ac_criteria;
-+ h.tail = ac->ac_tail;
-+ h.buddy = ac->ac_buddy;
-+ h.merged = 0;
-+ if (ac->ac_op == EXT4_MB_HISTORY_ALLOC) {
-+ if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
-+ ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
-+ h.merged = 1;
-+ h.goal = ac->ac_g_ex;
-+ h.result = ac->ac_f_ex;
-+ }
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
+
-+ spin_lock(&sbi->s_mb_history_lock);
-+ memcpy(sbi->s_mb_history + sbi->s_mb_history_cur, &h, sizeof(h));
-+ if (++sbi->s_mb_history_cur >= sbi->s_mb_history_max)
-+ sbi->s_mb_history_cur = 0;
-+ spin_unlock(&sbi->s_mb_history_lock);
++out:
++ return ret;
+}
+
-+#else
-+#define ext4_mb_history_release(sb)
-+#define ext4_mb_history_init(sb)
-+#endif
-+
-+static int ext4_mb_init_backend(struct super_block *sb)
++/*
++ * ocfs2_file_lock() and ocfs2_file_unlock() map to a single pair of
++ * flock() calls. The locking approach this requires is sufficiently
++ * different from all other cluster lock types that we implement a
++ * separate path to the "low-level" dlm calls. In particular:
++ *
++ * - No optimization of lock levels is done - we take exactly
++ * what's been requested.
++ *
++ * - No lock caching is employed. We immediately downconvert to
++ * no-lock at unlock time. This also means flock locks never go on
++ * the blocking list.
++ *
++ * - Since userspace can trivially deadlock itself with flock, we make
++ * sure to allow cancellation of a misbehaving application's flock()
++ * request.
++ *
++ * - Access to any flock lockres doesn't require concurrency, so we
++ * can simplify the code by requiring the caller to guarantee
++ * serialization of dlmglue flock calls.
++ */
++int ocfs2_file_lock(struct file *file, int ex, int trylock)
+{
-+ ext4_group_t i;
-+ int j, len, metalen;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ int num_meta_group_infos =
-+ (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) >>
-+ EXT4_DESC_PER_BLOCK_BITS(sb);
-+ struct ext4_group_info **meta_group_info;
++ int ret, level = ex ? LKM_EXMODE : LKM_PRMODE;
++ unsigned int lkm_flags = trylock ? LKM_NOQUEUE : 0;
++ unsigned long flags;
++ struct ocfs2_file_private *fp = file->private_data;
++ struct ocfs2_lock_res *lockres = &fp->fp_flock;
++ struct ocfs2_super *osb = OCFS2_SB(file->f_mapping->host->i_sb);
++ struct ocfs2_mask_waiter mw;
+
-+ /* An 8TB filesystem with 64-bit pointers requires a 4096 byte
-+ * kmalloc. A 128kb malloc should suffice for a 256TB filesystem.
-+ * So a two level scheme suffices for now. */
-+ sbi->s_group_info = kmalloc(sizeof(*sbi->s_group_info) *
-+ num_meta_group_infos, GFP_KERNEL);
-+ if (sbi->s_group_info == NULL) {
-+ printk(KERN_ERR "EXT4-fs: can't allocate buddy meta group\n");
-+ return -ENOMEM;
-+ }
-+ sbi->s_buddy_cache = new_inode(sb);
-+ if (sbi->s_buddy_cache == NULL) {
-+ printk(KERN_ERR "EXT4-fs: can't get new inode\n");
-+ goto err_freesgi;
-+ }
-+ EXT4_I(sbi->s_buddy_cache)->i_disksize = 0;
++ ocfs2_init_mask_waiter(&mw);
+
-+ metalen = sizeof(*meta_group_info) << EXT4_DESC_PER_BLOCK_BITS(sb);
-+ for (i = 0; i < num_meta_group_infos; i++) {
-+ if ((i + 1) == num_meta_group_infos)
-+ metalen = sizeof(*meta_group_info) *
-+ (sbi->s_groups_count -
-+ (i << EXT4_DESC_PER_BLOCK_BITS(sb)));
-+ meta_group_info = kmalloc(metalen, GFP_KERNEL);
-+ if (meta_group_info == NULL) {
-+ printk(KERN_ERR "EXT4-fs: can't allocate mem for a "
-+ "buddy group\n");
-+ goto err_freemeta;
-+ }
-+ sbi->s_group_info[i] = meta_group_info;
++ if ((lockres->l_flags & OCFS2_LOCK_BUSY) ||
++ (lockres->l_level > LKM_NLMODE)) {
++ mlog(ML_ERROR,
++ "File lock \"%s\" has busy or locked state: flags: 0x%lx, "
++ "level: %u\n", lockres->l_name, lockres->l_flags,
++ lockres->l_level);
++ return -EINVAL;
+ }
+
-+ /*
-+ * calculate needed size. if change bb_counters size,
-+ * don't forget about ext4_mb_generate_buddy()
-+ */
-+ len = sizeof(struct ext4_group_info);
-+ len += sizeof(unsigned short) * (sb->s_blocksize_bits + 2);
-+ for (i = 0; i < sbi->s_groups_count; i++) {
-+ struct ext4_group_desc *desc;
-+
-+ meta_group_info =
-+ sbi->s_group_info[i >> EXT4_DESC_PER_BLOCK_BITS(sb)];
-+ j = i & (EXT4_DESC_PER_BLOCK(sb) - 1);
-+
-+ meta_group_info[j] = kzalloc(len, GFP_KERNEL);
-+ if (meta_group_info[j] == NULL) {
-+ printk(KERN_ERR "EXT4-fs: can't allocate buddy mem\n");
-+ i--;
-+ goto err_freebuddy;
-+ }
-+ desc = ext4_get_group_desc(sb, i, NULL);
-+ if (desc == NULL) {
-+ printk(KERN_ERR
-+ "EXT4-fs: can't read descriptor %lu\n", i);
-+ goto err_freebuddy;
-+ }
-+ memset(meta_group_info[j], 0, len);
-+ set_bit(EXT4_GROUP_INFO_NEED_INIT_BIT,
-+ &(meta_group_info[j]->bb_state));
++ spin_lock_irqsave(&lockres->l_lock, flags);
++ if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED)) {
++ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
+
+ /*
-+ * initialize bb_free to be able to skip
-+ * empty groups without initialization
++ * Get the lock at NLMODE to start - that way we
++ * can cancel the upconvert request if need be.
+ */
-+ if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
-+ meta_group_info[j]->bb_free =
-+ ext4_free_blocks_after_init(sb, i, desc);
-+ } else {
-+ meta_group_info[j]->bb_free =
-+ le16_to_cpu(desc->bg_free_blocks_count);
-+ }
-+
-+ INIT_LIST_HEAD(&meta_group_info[j]->bb_prealloc_list);
++ ret = ocfs2_lock_create(osb, lockres, LKM_NLMODE, 0);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out;
+ }
+- goto out;
+
-+#ifdef DOUBLE_CHECK
-+ {
-+ struct buffer_head *bh;
-+ meta_group_info[j]->bb_bitmap =
-+ kmalloc(sb->s_blocksize, GFP_KERNEL);
-+ BUG_ON(meta_group_info[j]->bb_bitmap == NULL);
-+ bh = read_block_bitmap(sb, i);
-+ BUG_ON(bh == NULL);
-+ memcpy(meta_group_info[j]->bb_bitmap, bh->b_data,
-+ sb->s_blocksize);
-+ put_bh(bh);
++ ret = ocfs2_wait_for_mask(&mw);
++ if (ret) {
++ mlog_errno(ret);
++ goto out;
+ }
-+#endif
-+
++ spin_lock_irqsave(&lockres->l_lock, flags);
+ }
+
+- if (ocfs2_mount_local(osb))
+- goto out;
++ lockres->l_action = OCFS2_AST_CONVERT;
++ lkm_flags |= LKM_CONVERT;
++ lockres->l_requested = level;
++ lockres_or_flags(lockres, OCFS2_LOCK_BUSY);
+
+- lockres = &OCFS2_I(inode)->ip_data_lockres;
++ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
+
+- level = write ? LKM_EXMODE : LKM_PRMODE;
++ ret = dlmlock(osb->dlm, level, &lockres->l_lksb, lkm_flags,
++ lockres->l_name, OCFS2_LOCK_ID_MAX_LEN - 1,
++ ocfs2_locking_ast, lockres, ocfs2_blocking_ast);
++ if (ret != DLM_NORMAL) {
++ if (trylock && ret == DLM_NOTQUEUED)
++ ret = -EAGAIN;
++ else {
++ ocfs2_log_dlm_error("dlmlock", ret, lockres);
++ ret = -EINVAL;
++ }
+
+- status = ocfs2_cluster_lock(OCFS2_SB(inode->i_sb), lockres, level,
+- 0, arg_flags);
+- if (status < 0 && status != -EAGAIN)
+- mlog_errno(status);
++ ocfs2_recover_from_dlm_error(lockres, 1);
++ lockres_remove_mask_waiter(lockres, &mw);
++ goto out;
+ }
+
-+ return 0;
-+
-+err_freebuddy:
-+ while (i >= 0) {
-+ kfree(ext4_get_group_info(sb, i));
-+ i--;
++ ret = ocfs2_wait_for_mask_interruptible(&mw, lockres);
++ if (ret == -ERESTARTSYS) {
++ /*
++ * Userspace can cause deadlock itself with
++ * flock(). Current behavior locally is to allow the
++ * deadlock, but abort the system call if a signal is
++ * received. We follow this example, otherwise a
++ * poorly written program could sit in kernel until
++ * reboot.
++ *
++ * Handling this is a bit more complicated for Ocfs2
++ * though. We can't exit this function with an
++ * outstanding lock request, so a cancel convert is
++ * required. We intentionally overwrite 'ret' - if the
++ * cancel fails and the lock was granted, it's easier
++ * to just bubble success back up to the user.
++ */
++ ret = ocfs2_flock_handle_signal(lockres, level);
+ }
-+ i = num_meta_group_infos;
-+err_freemeta:
-+ while (--i >= 0)
-+ kfree(sbi->s_group_info[i]);
-+ iput(sbi->s_buddy_cache);
-+err_freesgi:
-+ kfree(sbi->s_group_info);
-+ return -ENOMEM;
-+}
-+
-+int ext4_mb_init(struct super_block *sb, int needs_recovery)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ unsigned i;
-+ unsigned offset;
-+ unsigned max;
-+
-+ if (!test_opt(sb, MBALLOC))
-+ return 0;
-+
-+ i = (sb->s_blocksize_bits + 2) * sizeof(unsigned short);
+
+ out:
+- mlog_exit(status);
+- return status;
+
-+ sbi->s_mb_offsets = kmalloc(i, GFP_KERNEL);
-+ if (sbi->s_mb_offsets == NULL) {
-+ clear_opt(sbi->s_mount_opt, MBALLOC);
-+ return -ENOMEM;
-+ }
-+ sbi->s_mb_maxs = kmalloc(i, GFP_KERNEL);
-+ if (sbi->s_mb_maxs == NULL) {
-+ clear_opt(sbi->s_mount_opt, MBALLOC);
-+ kfree(sbi->s_mb_maxs);
-+ return -ENOMEM;
-+ }
++ mlog(0, "Lock: \"%s\" ex: %d, trylock: %d, returns: %d\n",
++ lockres->l_name, ex, trylock, ret);
++ return ret;
+ }
+
+-/* see ocfs2_meta_lock_with_page() */
+-int ocfs2_data_lock_with_page(struct inode *inode,
+- int write,
+- struct page *page)
++void ocfs2_file_unlock(struct file *file)
+ {
+ int ret;
++ unsigned long flags;
++ struct ocfs2_file_private *fp = file->private_data;
++ struct ocfs2_lock_res *lockres = &fp->fp_flock;
++ struct ocfs2_super *osb = OCFS2_SB(file->f_mapping->host->i_sb);
++ struct ocfs2_mask_waiter mw;
+
+- ret = ocfs2_data_lock_full(inode, write, OCFS2_LOCK_NONBLOCK);
+- if (ret == -EAGAIN) {
+- unlock_page(page);
+- if (ocfs2_data_lock(inode, write) == 0)
+- ocfs2_data_unlock(inode, write);
+- ret = AOP_TRUNCATED_PAGE;
++ ocfs2_init_mask_waiter(&mw);
+
-+ /* order 0 is regular bitmap */
-+ sbi->s_mb_maxs[0] = sb->s_blocksize << 3;
-+ sbi->s_mb_offsets[0] = 0;
++ if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED))
++ return;
+
-+ i = 1;
-+ offset = 0;
-+ max = sb->s_blocksize << 2;
-+ do {
-+ sbi->s_mb_offsets[i] = offset;
-+ sbi->s_mb_maxs[i] = max;
-+ offset += 1 << (sb->s_blocksize_bits - i);
-+ max = max >> 1;
-+ i++;
-+ } while (i <= sb->s_blocksize_bits + 1);
++ if (lockres->l_level == LKM_NLMODE)
++ return;
+
-+ /* init file for buddy data */
-+ i = ext4_mb_init_backend(sb);
-+ if (i) {
-+ clear_opt(sbi->s_mount_opt, MBALLOC);
-+ kfree(sbi->s_mb_offsets);
-+ kfree(sbi->s_mb_maxs);
-+ return i;
-+ }
++ mlog(0, "Unlock: \"%s\" flags: 0x%lx, level: %d, act: %d\n",
++ lockres->l_name, lockres->l_flags, lockres->l_level,
++ lockres->l_action);
+
-+ spin_lock_init(&sbi->s_md_lock);
-+ INIT_LIST_HEAD(&sbi->s_active_transaction);
-+ INIT_LIST_HEAD(&sbi->s_closed_transaction);
-+ INIT_LIST_HEAD(&sbi->s_committed_transaction);
-+ spin_lock_init(&sbi->s_bal_lock);
++ spin_lock_irqsave(&lockres->l_lock, flags);
++ /*
++ * Fake a blocking ast for the downconvert code.
++ */
++ lockres_or_flags(lockres, OCFS2_LOCK_BLOCKED);
++ lockres->l_blocking = LKM_EXMODE;
+
-+ sbi->s_mb_max_to_scan = MB_DEFAULT_MAX_TO_SCAN;
-+ sbi->s_mb_min_to_scan = MB_DEFAULT_MIN_TO_SCAN;
-+ sbi->s_mb_stats = MB_DEFAULT_STATS;
-+ sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
-+ sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
-+ sbi->s_mb_history_filter = EXT4_MB_HISTORY_DEFAULT;
-+ sbi->s_mb_group_prealloc = MB_DEFAULT_GROUP_PREALLOC;
++ ocfs2_prepare_downconvert(lockres, LKM_NLMODE);
++ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
++ spin_unlock_irqrestore(&lockres->l_lock, flags);
+
-+ i = sizeof(struct ext4_locality_group) * NR_CPUS;
-+ sbi->s_locality_groups = kmalloc(i, GFP_KERNEL);
-+ if (sbi->s_locality_groups == NULL) {
-+ clear_opt(sbi->s_mount_opt, MBALLOC);
-+ kfree(sbi->s_mb_offsets);
-+ kfree(sbi->s_mb_maxs);
-+ return -ENOMEM;
-+ }
-+ for (i = 0; i < NR_CPUS; i++) {
-+ struct ext4_locality_group *lg;
-+ lg = &sbi->s_locality_groups[i];
-+ mutex_init(&lg->lg_mutex);
-+ INIT_LIST_HEAD(&lg->lg_prealloc_list);
-+ spin_lock_init(&lg->lg_prealloc_lock);
-+ }
++ ret = ocfs2_downconvert_lock(osb, lockres, LKM_NLMODE, 0);
++ if (ret) {
++ mlog_errno(ret);
++ return;
+ }
+
+- return ret;
++ ret = ocfs2_wait_for_mask(&mw);
++ if (ret)
++ mlog_errno(ret);
+ }
+
+-static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
+- struct ocfs2_lock_res *lockres)
++static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
++ struct ocfs2_lock_res *lockres)
+ {
+ int kick = 0;
+
+ mlog_entry_void();
+
+ /* If we know that another node is waiting on our lock, kick
+- * the vote thread * pre-emptively when we reach a release
++ * the downconvert thread pre-emptively when we reach a release
+ * condition. */
+ if (lockres->l_flags & OCFS2_LOCK_BLOCKED) {
+ switch(lockres->l_blocking) {
+@@ -1398,27 +1589,7 @@ static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
+ }
+
+ if (kick)
+- ocfs2_kick_vote_thread(osb);
+-
+- mlog_exit_void();
+-}
+-
+-void ocfs2_data_unlock(struct inode *inode,
+- int write)
+-{
+- int level = write ? LKM_EXMODE : LKM_PRMODE;
+- struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_data_lockres;
+- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+-
+- mlog_entry_void();
+-
+- mlog(0, "inode %llu drop %s DATA lock\n",
+- (unsigned long long)OCFS2_I(inode)->ip_blkno,
+- write ? "EXMODE" : "PRMODE");
+-
+- if (!ocfs2_is_hard_readonly(OCFS2_SB(inode->i_sb)) &&
+- !ocfs2_mount_local(osb))
+- ocfs2_cluster_unlock(OCFS2_SB(inode->i_sb), lockres, level);
++ ocfs2_wake_downconvert_thread(osb);
+
+ mlog_exit_void();
+ }
+@@ -1442,11 +1613,11 @@ static u64 ocfs2_pack_timespec(struct timespec *spec)
+
+ /* Call this with the lockres locked. I am reasonably sure we don't
+ * need ip_lock in this function as anyone who would be changing those
+- * values is supposed to be blocked in ocfs2_meta_lock right now. */
++ * values is supposed to be blocked in ocfs2_inode_lock right now. */
+ static void __ocfs2_stuff_meta_lvb(struct inode *inode)
+ {
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
++ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
+ struct ocfs2_meta_lvb *lvb;
+
+ mlog_entry_void();
+@@ -1496,7 +1667,7 @@ static void ocfs2_unpack_timespec(struct timespec *spec,
+ static void ocfs2_refresh_inode_from_lvb(struct inode *inode)
+ {
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
++ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
+ struct ocfs2_meta_lvb *lvb;
+
+ mlog_entry_void();
+@@ -1604,12 +1775,12 @@ static inline void ocfs2_complete_lock_res_refresh(struct ocfs2_lock_res *lockre
+ }
+
+ /* may or may not return a bh if it went to disk. */
+-static int ocfs2_meta_lock_update(struct inode *inode,
++static int ocfs2_inode_lock_update(struct inode *inode,
+ struct buffer_head **bh)
+ {
+ int status = 0;
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
++ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
+ struct ocfs2_dinode *fe;
+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
+@@ -1721,7 +1892,7 @@ static int ocfs2_assign_bh(struct inode *inode,
+ * returns < 0 error if the callback will never be called, otherwise
+ * the result of the lock will be communicated via the callback.
+ */
+-int ocfs2_meta_lock_full(struct inode *inode,
++int ocfs2_inode_lock_full(struct inode *inode,
+ struct buffer_head **ret_bh,
+ int ex,
+ int arg_flags)
+@@ -1756,7 +1927,7 @@ int ocfs2_meta_lock_full(struct inode *inode,
+ wait_event(osb->recovery_event,
+ ocfs2_node_map_is_empty(osb, &osb->recovery_map));
+
+- lockres = &OCFS2_I(inode)->ip_meta_lockres;
++ lockres = &OCFS2_I(inode)->ip_inode_lockres;
+ level = ex ? LKM_EXMODE : LKM_PRMODE;
+ dlm_flags = 0;
+ if (arg_flags & OCFS2_META_LOCK_NOQUEUE)
+@@ -1795,11 +1966,11 @@ local:
+ }
+
+ /* This is fun. The caller may want a bh back, or it may
+- * not. ocfs2_meta_lock_update definitely wants one in, but
++ * not. ocfs2_inode_lock_update definitely wants one in, but
+ * may or may not read one, depending on what's in the
+ * LVB. The result of all of this is that we've *only* gone to
+ * disk if we have to, so the complexity is worthwhile. */
+- status = ocfs2_meta_lock_update(inode, &local_bh);
++ status = ocfs2_inode_lock_update(inode, &local_bh);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1821,7 +1992,7 @@ bail:
+ *ret_bh = NULL;
+ }
+ if (acquired)
+- ocfs2_meta_unlock(inode, ex);
++ ocfs2_inode_unlock(inode, ex);
+ }
+
+ if (local_bh)
+@@ -1832,19 +2003,20 @@ bail:
+ }
+
+ /*
+- * This is working around a lock inversion between tasks acquiring DLM locks
+- * while holding a page lock and the vote thread which blocks dlm lock acquiry
+- * while acquiring page locks.
++ * This is working around a lock inversion between tasks acquiring DLM
++ * locks while holding a page lock and the downconvert thread which
++ * blocks dlm lock acquiry while acquiring page locks.
+ *
+ * ** These _with_page variants are only intended to be called from aop
+ * methods that hold page locks and return a very specific *positive* error
+ * code that aop methods pass up to the VFS -- test for errors with != 0. **
+ *
+- * The DLM is called such that it returns -EAGAIN if it would have blocked
+- * waiting for the vote thread. In that case we unlock our page so the vote
+- * thread can make progress. Once we've done this we have to return
+- * AOP_TRUNCATED_PAGE so the aop method that called us can bubble that back up
+- * into the VFS who will then immediately retry the aop call.
++ * The DLM is called such that it returns -EAGAIN if it would have
++ * blocked waiting for the downconvert thread. In that case we unlock
++ * our page so the downconvert thread can make progress. Once we've
++ * done this we have to return AOP_TRUNCATED_PAGE so the aop method
++ * that called us can bubble that back up into the VFS who will then
++ * immediately retry the aop call.
+ *
+ * We do a blocking lock and immediate unlock before returning, though, so that
+ * the lock has a great chance of being cached on this node by the time the VFS
+@@ -1852,32 +2024,32 @@ bail:
+ * ping locks back and forth, but that's a risk we're willing to take to avoid
+ * the lock inversion simply.
+ */
+-int ocfs2_meta_lock_with_page(struct inode *inode,
++int ocfs2_inode_lock_with_page(struct inode *inode,
+ struct buffer_head **ret_bh,
+ int ex,
+ struct page *page)
+ {
+ int ret;
+
+- ret = ocfs2_meta_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
++ ret = ocfs2_inode_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
+ if (ret == -EAGAIN) {
+ unlock_page(page);
+- if (ocfs2_meta_lock(inode, ret_bh, ex) == 0)
+- ocfs2_meta_unlock(inode, ex);
++ if (ocfs2_inode_lock(inode, ret_bh, ex) == 0)
++ ocfs2_inode_unlock(inode, ex);
+ ret = AOP_TRUNCATED_PAGE;
+ }
+
+ return ret;
+ }
+
+-int ocfs2_meta_lock_atime(struct inode *inode,
++int ocfs2_inode_lock_atime(struct inode *inode,
+ struct vfsmount *vfsmnt,
+ int *level)
+ {
+ int ret;
+
+ mlog_entry_void();
+- ret = ocfs2_meta_lock(inode, NULL, 0);
++ ret = ocfs2_inode_lock(inode, NULL, 0);
+ if (ret < 0) {
+ mlog_errno(ret);
+ return ret;
+@@ -1890,8 +2062,8 @@ int ocfs2_meta_lock_atime(struct inode *inode,
+ if (ocfs2_should_update_atime(inode, vfsmnt)) {
+ struct buffer_head *bh = NULL;
+
+- ocfs2_meta_unlock(inode, 0);
+- ret = ocfs2_meta_lock(inode, &bh, 1);
++ ocfs2_inode_unlock(inode, 0);
++ ret = ocfs2_inode_lock(inode, &bh, 1);
+ if (ret < 0) {
+ mlog_errno(ret);
+ return ret;
+@@ -1908,11 +2080,11 @@ int ocfs2_meta_lock_atime(struct inode *inode,
+ return ret;
+ }
+
+-void ocfs2_meta_unlock(struct inode *inode,
++void ocfs2_inode_unlock(struct inode *inode,
+ int ex)
+ {
+ int level = ex ? LKM_EXMODE : LKM_PRMODE;
+- struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_meta_lockres;
++ struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_inode_lockres;
+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
+ mlog_entry_void();
+@@ -2320,11 +2492,11 @@ int ocfs2_dlm_init(struct ocfs2_super *osb)
+ goto bail;
+ }
+
+- /* launch vote thread */
+- osb->vote_task = kthread_run(ocfs2_vote_thread, osb, "ocfs2vote");
+- if (IS_ERR(osb->vote_task)) {
+- status = PTR_ERR(osb->vote_task);
+- osb->vote_task = NULL;
++ /* launch downconvert thread */
++ osb->dc_task = kthread_run(ocfs2_downconvert_thread, osb, "ocfs2dc");
++ if (IS_ERR(osb->dc_task)) {
++ status = PTR_ERR(osb->dc_task);
++ osb->dc_task = NULL;
+ mlog_errno(status);
+ goto bail;
+ }
+@@ -2353,8 +2525,8 @@ local:
+ bail:
+ if (status < 0) {
+ ocfs2_dlm_shutdown_debug(osb);
+- if (osb->vote_task)
+- kthread_stop(osb->vote_task);
++ if (osb->dc_task)
++ kthread_stop(osb->dc_task);
+ }
+
+ mlog_exit(status);
+@@ -2369,9 +2541,9 @@ void ocfs2_dlm_shutdown(struct ocfs2_super *osb)
+
+ ocfs2_drop_osb_locks(osb);
+
+- if (osb->vote_task) {
+- kthread_stop(osb->vote_task);
+- osb->vote_task = NULL;
++ if (osb->dc_task) {
++ kthread_stop(osb->dc_task);
++ osb->dc_task = NULL;
+ }
+
+ ocfs2_lock_res_free(&osb->osb_super_lockres);
+@@ -2527,7 +2699,7 @@ out:
+
+ /* Mark the lockres as being dropped. It will no longer be
+ * queued if blocking, but we still may have to wait on it
+- * being dequeued from the vote thread before we can consider
++ * being dequeued from the downconvert thread before we can consider
+ * it safe to drop.
+ *
+ * You can *not* attempt to call cluster_lock on this lockres anymore. */
+@@ -2590,14 +2762,7 @@ int ocfs2_drop_inode_locks(struct inode *inode)
+ status = err;
+
+ err = ocfs2_drop_lock(OCFS2_SB(inode->i_sb),
+- &OCFS2_I(inode)->ip_data_lockres);
+- if (err < 0)
+- mlog_errno(err);
+- if (err < 0 && !status)
+- status = err;
+-
+- err = ocfs2_drop_lock(OCFS2_SB(inode->i_sb),
+- &OCFS2_I(inode)->ip_meta_lockres);
++ &OCFS2_I(inode)->ip_inode_lockres);
+ if (err < 0)
+ mlog_errno(err);
+ if (err < 0 && !status)
+@@ -2850,6 +3015,9 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ inode = ocfs2_lock_res_inode(lockres);
+ mapping = inode->i_mapping;
+
++ if (S_ISREG(inode->i_mode))
++ goto out;
+
-+ ext4_mb_init_per_dev_proc(sb);
-+ ext4_mb_history_init(sb);
+ /*
+ * We need this before the filemap_fdatawrite() so that it can
+ * transfer the dirty bit from the PTE to the
+@@ -2875,6 +3043,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ filemap_fdatawait(mapping);
+ }
+
++out:
+ return UNBLOCK_CONTINUE;
+ }
+
+@@ -2903,7 +3072,7 @@ static void ocfs2_set_meta_lvb(struct ocfs2_lock_res *lockres)
+
+ /*
+ * Does the final reference drop on our dentry lock. Right now this
+- * happens in the vote thread, but we could choose to simplify the
++ * happens in the downconvert thread, but we could choose to simplify the
+ * dlmglue API and push these off to the ocfs2_wq in the future.
+ */
+ static void ocfs2_dentry_post_unlock(struct ocfs2_super *osb,
+@@ -3042,7 +3211,7 @@ void ocfs2_process_blocked_lock(struct ocfs2_super *osb,
+ mlog(0, "lockres %s blocked.\n", lockres->l_name);
+
+ /* Detect whether a lock has been marked as going away while
+- * the vote thread was processing other things. A lock can
++ * the downconvert thread was processing other things. A lock can
+ * still be marked with OCFS2_LOCK_FREEING after this check,
+ * but short circuiting here will still save us some
+ * performance. */
+@@ -3091,13 +3260,104 @@ static void ocfs2_schedule_blocked_lock(struct ocfs2_super *osb,
+
+ lockres_or_flags(lockres, OCFS2_LOCK_QUEUED);
+
+- spin_lock(&osb->vote_task_lock);
++ spin_lock(&osb->dc_task_lock);
+ if (list_empty(&lockres->l_blocked_list)) {
+ list_add_tail(&lockres->l_blocked_list,
+ &osb->blocked_lock_list);
+ osb->blocked_lock_count++;
+ }
+- spin_unlock(&osb->vote_task_lock);
++ spin_unlock(&osb->dc_task_lock);
+
-+ printk("EXT4-fs: mballoc enabled\n");
-+ return 0;
++ mlog_exit_void();
+}
+
-+/* need to called with ext4 group lock (ext4_lock_group) */
-+static void ext4_mb_cleanup_pa(struct ext4_group_info *grp)
++static void ocfs2_downconvert_thread_do_work(struct ocfs2_super *osb)
+{
-+ struct ext4_prealloc_space *pa;
-+ struct list_head *cur, *tmp;
-+ int count = 0;
++ unsigned long processed;
++ struct ocfs2_lock_res *lockres;
+
-+ list_for_each_safe(cur, tmp, &grp->bb_prealloc_list) {
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_group_list);
-+ list_del(&pa->pa_group_list);
-+ count++;
-+ kfree(pa);
-+ }
-+ if (count)
-+ mb_debug("mballoc: %u PAs left\n", count);
++ mlog_entry_void();
+
-+}
++ spin_lock(&osb->dc_task_lock);
++ /* grab this early so we know to try again if a state change and
++ * wake happens part-way through our work */
++ osb->dc_work_sequence = osb->dc_wake_sequence;
+
-+int ext4_mb_release(struct super_block *sb)
-+{
-+ ext4_group_t i;
-+ int num_meta_group_infos;
-+ struct ext4_group_info *grinfo;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ processed = osb->blocked_lock_count;
++ while (processed) {
++ BUG_ON(list_empty(&osb->blocked_lock_list));
+
-+ if (!test_opt(sb, MBALLOC))
-+ return 0;
++ lockres = list_entry(osb->blocked_lock_list.next,
++ struct ocfs2_lock_res, l_blocked_list);
++ list_del_init(&lockres->l_blocked_list);
++ osb->blocked_lock_count--;
++ spin_unlock(&osb->dc_task_lock);
+
-+ /* release freed, non-committed blocks */
-+ spin_lock(&sbi->s_md_lock);
-+ list_splice_init(&sbi->s_closed_transaction,
-+ &sbi->s_committed_transaction);
-+ list_splice_init(&sbi->s_active_transaction,
-+ &sbi->s_committed_transaction);
-+ spin_unlock(&sbi->s_md_lock);
-+ ext4_mb_free_committed_blocks(sb);
++ BUG_ON(!processed);
++ processed--;
+
-+ if (sbi->s_group_info) {
-+ for (i = 0; i < sbi->s_groups_count; i++) {
-+ grinfo = ext4_get_group_info(sb, i);
-+#ifdef DOUBLE_CHECK
-+ kfree(grinfo->bb_bitmap);
-+#endif
-+ ext4_lock_group(sb, i);
-+ ext4_mb_cleanup_pa(grinfo);
-+ ext4_unlock_group(sb, i);
-+ kfree(grinfo);
-+ }
-+ num_meta_group_infos = (sbi->s_groups_count +
-+ EXT4_DESC_PER_BLOCK(sb) - 1) >>
-+ EXT4_DESC_PER_BLOCK_BITS(sb);
-+ for (i = 0; i < num_meta_group_infos; i++)
-+ kfree(sbi->s_group_info[i]);
-+ kfree(sbi->s_group_info);
-+ }
-+ kfree(sbi->s_mb_offsets);
-+ kfree(sbi->s_mb_maxs);
-+ if (sbi->s_buddy_cache)
-+ iput(sbi->s_buddy_cache);
-+ if (sbi->s_mb_stats) {
-+ printk(KERN_INFO
-+ "EXT4-fs: mballoc: %u blocks %u reqs (%u success)\n",
-+ atomic_read(&sbi->s_bal_allocated),
-+ atomic_read(&sbi->s_bal_reqs),
-+ atomic_read(&sbi->s_bal_success));
-+ printk(KERN_INFO
-+ "EXT4-fs: mballoc: %u extents scanned, %u goal hits, "
-+ "%u 2^N hits, %u breaks, %u lost\n",
-+ atomic_read(&sbi->s_bal_ex_scanned),
-+ atomic_read(&sbi->s_bal_goals),
-+ atomic_read(&sbi->s_bal_2orders),
-+ atomic_read(&sbi->s_bal_breaks),
-+ atomic_read(&sbi->s_mb_lost_chunks));
-+ printk(KERN_INFO
-+ "EXT4-fs: mballoc: %lu generated and it took %Lu\n",
-+ sbi->s_mb_buddies_generated++,
-+ sbi->s_mb_generation_time);
-+ printk(KERN_INFO
-+ "EXT4-fs: mballoc: %u preallocated, %u discarded\n",
-+ atomic_read(&sbi->s_mb_preallocated),
-+ atomic_read(&sbi->s_mb_discarded));
++ ocfs2_process_blocked_lock(osb, lockres);
++
++ spin_lock(&osb->dc_task_lock);
+ }
++ spin_unlock(&osb->dc_task_lock);
+
+ mlog_exit_void();
+ }
+
-+ kfree(sbi->s_locality_groups);
++static int ocfs2_downconvert_thread_lists_empty(struct ocfs2_super *osb)
++{
++ int empty = 0;
+
-+ ext4_mb_history_release(sb);
-+ ext4_mb_destroy_per_dev_proc(sb);
++ spin_lock(&osb->dc_task_lock);
++ if (list_empty(&osb->blocked_lock_list))
++ empty = 1;
+
-+ return 0;
++ spin_unlock(&osb->dc_task_lock);
++ return empty;
+}
+
-+static void ext4_mb_free_committed_blocks(struct super_block *sb)
++static int ocfs2_downconvert_thread_should_wake(struct ocfs2_super *osb)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ int err;
-+ int i;
-+ int count = 0;
-+ int count2 = 0;
-+ struct ext4_free_metadata *md;
-+ struct ext4_buddy e4b;
-+
-+ if (list_empty(&sbi->s_committed_transaction))
-+ return;
-+
-+ /* there is committed blocks to be freed yet */
-+ do {
-+ /* get next array of blocks */
-+ md = NULL;
-+ spin_lock(&sbi->s_md_lock);
-+ if (!list_empty(&sbi->s_committed_transaction)) {
-+ md = list_entry(sbi->s_committed_transaction.next,
-+ struct ext4_free_metadata, list);
-+ list_del(&md->list);
-+ }
-+ spin_unlock(&sbi->s_md_lock);
-+
-+ if (md == NULL)
-+ break;
-+
-+ mb_debug("gonna free %u blocks in group %lu (0x%p):",
-+ md->num, md->group, md);
-+
-+ err = ext4_mb_load_buddy(sb, md->group, &e4b);
-+ /* we expect to find existing buddy because it's pinned */
-+ BUG_ON(err != 0);
-+
-+ /* there are blocks to put in buddy to make them really free */
-+ count += md->num;
-+ count2++;
-+ ext4_lock_group(sb, md->group);
-+ for (i = 0; i < md->num; i++) {
-+ mb_debug(" %u", md->blocks[i]);
-+ err = mb_free_blocks(NULL, &e4b, md->blocks[i], 1);
-+ BUG_ON(err != 0);
-+ }
-+ mb_debug("\n");
-+ ext4_unlock_group(sb, md->group);
++ int should_wake = 0;
+
-+ /* balance refcounts from ext4_mb_free_metadata() */
-+ page_cache_release(e4b.bd_buddy_page);
-+ page_cache_release(e4b.bd_bitmap_page);
++ spin_lock(&osb->dc_task_lock);
++ if (osb->dc_work_sequence != osb->dc_wake_sequence)
++ should_wake = 1;
++ spin_unlock(&osb->dc_task_lock);
+
-+ kfree(md);
-+ ext4_mb_release_desc(&e4b);
++ return should_wake;
++}
+
-+ } while (md);
++int ocfs2_downconvert_thread(void *arg)
++{
++ int status = 0;
++ struct ocfs2_super *osb = arg;
+
-+ mb_debug("freed %u blocks in %u structures\n", count, count2);
-+}
++ /* only quit once we've been asked to stop and there is no more
++ * work available */
++ while (!(kthread_should_stop() &&
++ ocfs2_downconvert_thread_lists_empty(osb))) {
+
-+#define EXT4_ROOT "ext4"
-+#define EXT4_MB_STATS_NAME "stats"
-+#define EXT4_MB_MAX_TO_SCAN_NAME "max_to_scan"
-+#define EXT4_MB_MIN_TO_SCAN_NAME "min_to_scan"
-+#define EXT4_MB_ORDER2_REQ "order2_req"
-+#define EXT4_MB_STREAM_REQ "stream_req"
-+#define EXT4_MB_GROUP_PREALLOC "group_prealloc"
++ wait_event_interruptible(osb->dc_event,
++ ocfs2_downconvert_thread_should_wake(osb) ||
++ kthread_should_stop());
+
++ mlog(0, "downconvert_thread: awoken\n");
+
++ ocfs2_downconvert_thread_do_work(osb);
++ }
+
-+#define MB_PROC_VALUE_READ(name) \
-+static int ext4_mb_read_##name(char *page, char **start, \
-+ off_t off, int count, int *eof, void *data) \
-+{ \
-+ struct ext4_sb_info *sbi = data; \
-+ int len; \
-+ *eof = 1; \
-+ if (off != 0) \
-+ return 0; \
-+ len = sprintf(page, "%ld\n", sbi->s_mb_##name); \
-+ *start = page; \
-+ return len; \
++ osb->dc_task = NULL;
++ return status;
+}
+
-+#define MB_PROC_VALUE_WRITE(name) \
-+static int ext4_mb_write_##name(struct file *file, \
-+ const char __user *buf, unsigned long cnt, void *data) \
-+{ \
-+ struct ext4_sb_info *sbi = data; \
-+ char str[32]; \
-+ long value; \
-+ if (cnt >= sizeof(str)) \
-+ return -EINVAL; \
-+ if (copy_from_user(str, buf, cnt)) \
-+ return -EFAULT; \
-+ value = simple_strtol(str, NULL, 0); \
-+ if (value <= 0) \
-+ return -ERANGE; \
-+ sbi->s_mb_##name = value; \
-+ return cnt; \
++void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb)
++{
++ spin_lock(&osb->dc_task_lock);
++ /* make sure the voting thread gets a swipe at whatever changes
++ * the caller may have made to the voting state */
++ osb->dc_wake_sequence++;
++ spin_unlock(&osb->dc_task_lock);
++ wake_up(&osb->dc_event);
+}
-+
-+MB_PROC_VALUE_READ(stats);
-+MB_PROC_VALUE_WRITE(stats);
-+MB_PROC_VALUE_READ(max_to_scan);
-+MB_PROC_VALUE_WRITE(max_to_scan);
-+MB_PROC_VALUE_READ(min_to_scan);
-+MB_PROC_VALUE_WRITE(min_to_scan);
-+MB_PROC_VALUE_READ(order2_reqs);
-+MB_PROC_VALUE_WRITE(order2_reqs);
-+MB_PROC_VALUE_READ(stream_request);
-+MB_PROC_VALUE_WRITE(stream_request);
-+MB_PROC_VALUE_READ(group_prealloc);
-+MB_PROC_VALUE_WRITE(group_prealloc);
-+
-+#define MB_PROC_HANDLER(name, var) \
-+do { \
-+ proc = create_proc_entry(name, mode, sbi->s_mb_proc); \
-+ if (proc == NULL) { \
-+ printk(KERN_ERR "EXT4-fs: can't to create %s\n", name); \
-+ goto err_out; \
-+ } \
-+ proc->data = sbi; \
-+ proc->read_proc = ext4_mb_read_##var ; \
-+ proc->write_proc = ext4_mb_write_##var; \
-+} while (0)
-+
-+static int ext4_mb_init_per_dev_proc(struct super_block *sb)
+diff --git a/fs/ocfs2/dlmglue.h b/fs/ocfs2/dlmglue.h
+index 87a785e..5f17243 100644
+--- a/fs/ocfs2/dlmglue.h
++++ b/fs/ocfs2/dlmglue.h
+@@ -49,12 +49,12 @@ struct ocfs2_meta_lvb {
+ __be32 lvb_reserved2;
+ };
+
+-/* ocfs2_meta_lock_full() and ocfs2_data_lock_full() 'arg_flags' flags */
++/* ocfs2_inode_lock_full() 'arg_flags' flags */
+ /* don't wait on recovery. */
+ #define OCFS2_META_LOCK_RECOVERY (0x01)
+ /* Instruct the dlm not to queue ourselves on the other node. */
+ #define OCFS2_META_LOCK_NOQUEUE (0x02)
+-/* don't block waiting for the vote thread, instead return -EAGAIN */
++/* don't block waiting for the downconvert thread, instead return -EAGAIN */
+ #define OCFS2_LOCK_NONBLOCK (0x04)
+
+ int ocfs2_dlm_init(struct ocfs2_super *osb);
+@@ -66,38 +66,32 @@ void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
+ struct inode *inode);
+ void ocfs2_dentry_lock_res_init(struct ocfs2_dentry_lock *dl,
+ u64 parent, struct inode *inode);
++struct ocfs2_file_private;
++void ocfs2_file_lock_res_init(struct ocfs2_lock_res *lockres,
++ struct ocfs2_file_private *fp);
+ void ocfs2_lock_res_free(struct ocfs2_lock_res *res);
+ int ocfs2_create_new_inode_locks(struct inode *inode);
+ int ocfs2_drop_inode_locks(struct inode *inode);
+-int ocfs2_data_lock_full(struct inode *inode,
+- int write,
+- int arg_flags);
+-#define ocfs2_data_lock(inode, write) ocfs2_data_lock_full(inode, write, 0)
+-int ocfs2_data_lock_with_page(struct inode *inode,
+- int write,
+- struct page *page);
+-void ocfs2_data_unlock(struct inode *inode,
+- int write);
+ int ocfs2_rw_lock(struct inode *inode, int write);
+ void ocfs2_rw_unlock(struct inode *inode, int write);
+ int ocfs2_open_lock(struct inode *inode);
+ int ocfs2_try_open_lock(struct inode *inode, int write);
+ void ocfs2_open_unlock(struct inode *inode);
+-int ocfs2_meta_lock_atime(struct inode *inode,
++int ocfs2_inode_lock_atime(struct inode *inode,
+ struct vfsmount *vfsmnt,
+ int *level);
+-int ocfs2_meta_lock_full(struct inode *inode,
++int ocfs2_inode_lock_full(struct inode *inode,
+ struct buffer_head **ret_bh,
+ int ex,
+ int arg_flags);
+-int ocfs2_meta_lock_with_page(struct inode *inode,
++int ocfs2_inode_lock_with_page(struct inode *inode,
+ struct buffer_head **ret_bh,
+ int ex,
+ struct page *page);
+ /* 99% of the time we don't want to supply any additional flags --
+ * those are for very specific cases only. */
+-#define ocfs2_meta_lock(i, b, e) ocfs2_meta_lock_full(i, b, e, 0)
+-void ocfs2_meta_unlock(struct inode *inode,
++#define ocfs2_inode_lock(i, b, e) ocfs2_inode_lock_full(i, b, e, 0)
++void ocfs2_inode_unlock(struct inode *inode,
+ int ex);
+ int ocfs2_super_lock(struct ocfs2_super *osb,
+ int ex);
+@@ -107,14 +101,17 @@ int ocfs2_rename_lock(struct ocfs2_super *osb);
+ void ocfs2_rename_unlock(struct ocfs2_super *osb);
+ int ocfs2_dentry_lock(struct dentry *dentry, int ex);
+ void ocfs2_dentry_unlock(struct dentry *dentry, int ex);
++int ocfs2_file_lock(struct file *file, int ex, int trylock);
++void ocfs2_file_unlock(struct file *file);
+
+ void ocfs2_mark_lockres_freeing(struct ocfs2_lock_res *lockres);
+ void ocfs2_simple_drop_lockres(struct ocfs2_super *osb,
+ struct ocfs2_lock_res *lockres);
+
+-/* for the vote thread */
++/* for the downconvert thread */
+ void ocfs2_process_blocked_lock(struct ocfs2_super *osb,
+ struct ocfs2_lock_res *lockres);
++void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb);
+
+ struct ocfs2_dlm_debug *ocfs2_new_dlm_debug(void);
+ void ocfs2_put_dlm_debug(struct ocfs2_dlm_debug *dlm_debug);
+diff --git a/fs/ocfs2/endian.h b/fs/ocfs2/endian.h
+index ff25762..1942e09 100644
+--- a/fs/ocfs2/endian.h
++++ b/fs/ocfs2/endian.h
+@@ -37,11 +37,6 @@ static inline void le64_add_cpu(__le64 *var, u64 val)
+ *var = cpu_to_le64(le64_to_cpu(*var) + val);
+ }
+
+-static inline void le32_and_cpu(__le32 *var, u32 val)
+-{
+- *var = cpu_to_le32(le32_to_cpu(*var) & val);
+-}
+-
+ static inline void be32_add_cpu(__be32 *var, u32 val)
+ {
+ *var = cpu_to_be32(be32_to_cpu(*var) + val);
+diff --git a/fs/ocfs2/export.c b/fs/ocfs2/export.c
+index 535bfa9..67527ce 100644
+--- a/fs/ocfs2/export.c
++++ b/fs/ocfs2/export.c
+@@ -58,7 +58,7 @@ static struct dentry *ocfs2_get_dentry(struct super_block *sb,
+ return ERR_PTR(-ESTALE);
+ }
+
+- inode = ocfs2_iget(OCFS2_SB(sb), handle->ih_blkno, 0);
++ inode = ocfs2_iget(OCFS2_SB(sb), handle->ih_blkno, 0, 0);
+
+ if (IS_ERR(inode))
+ return (void *)inode;
+@@ -95,7 +95,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ mlog(0, "find parent of directory %llu\n",
+ (unsigned long long)OCFS2_I(dir)->ip_blkno);
+
+- status = ocfs2_meta_lock(dir, NULL, 0);
++ status = ocfs2_inode_lock(dir, NULL, 0);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -109,7 +109,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ goto bail_unlock;
+ }
+
+- inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0);
++ inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0);
+ if (IS_ERR(inode)) {
+ mlog(ML_ERROR, "Unable to create inode %llu\n",
+ (unsigned long long)blkno);
+@@ -126,7 +126,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ parent->d_op = &ocfs2_dentry_ops;
+
+ bail_unlock:
+- ocfs2_meta_unlock(dir, 0);
++ ocfs2_inode_unlock(dir, 0);
+
+ bail:
+ mlog_exit_ptr(parent);
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index b75b2e1..ed5d523 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -51,6 +51,7 @@
+ #include "inode.h"
+ #include "ioctl.h"
+ #include "journal.h"
++#include "locks.h"
+ #include "mmap.h"
+ #include "suballoc.h"
+ #include "super.h"
+@@ -63,6 +64,35 @@ static int ocfs2_sync_inode(struct inode *inode)
+ return sync_mapping_buffers(inode->i_mapping);
+ }
+
++static int ocfs2_init_file_private(struct inode *inode, struct file *file)
+{
-+ mode_t mode = S_IFREG | S_IRUGO | S_IWUSR;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ struct proc_dir_entry *proc;
-+ char devname[64];
++ struct ocfs2_file_private *fp;
+
-+ snprintf(devname, sizeof(devname) - 1, "%s",
-+ bdevname(sb->s_bdev, devname));
-+ sbi->s_mb_proc = proc_mkdir(devname, proc_root_ext4);
++ fp = kzalloc(sizeof(struct ocfs2_file_private), GFP_KERNEL);
++ if (!fp)
++ return -ENOMEM;
+
-+ MB_PROC_HANDLER(EXT4_MB_STATS_NAME, stats);
-+ MB_PROC_HANDLER(EXT4_MB_MAX_TO_SCAN_NAME, max_to_scan);
-+ MB_PROC_HANDLER(EXT4_MB_MIN_TO_SCAN_NAME, min_to_scan);
-+ MB_PROC_HANDLER(EXT4_MB_ORDER2_REQ, order2_reqs);
-+ MB_PROC_HANDLER(EXT4_MB_STREAM_REQ, stream_request);
-+ MB_PROC_HANDLER(EXT4_MB_GROUP_PREALLOC, group_prealloc);
++ fp->fp_file = file;
++ mutex_init(&fp->fp_mutex);
++ ocfs2_file_lock_res_init(&fp->fp_flock, fp);
++ file->private_data = fp;
+
+ return 0;
-+
-+err_out:
-+ printk(KERN_ERR "EXT4-fs: Unable to create %s\n", devname);
-+ remove_proc_entry(EXT4_MB_GROUP_PREALLOC, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_STREAM_REQ, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_ORDER2_REQ, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_MIN_TO_SCAN_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_MAX_TO_SCAN_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_STATS_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(devname, proc_root_ext4);
-+ sbi->s_mb_proc = NULL;
-+
-+ return -ENOMEM;
+}
+
-+static int ext4_mb_destroy_per_dev_proc(struct super_block *sb)
++static void ocfs2_free_file_private(struct inode *inode, struct file *file)
+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ char devname[64];
-+
-+ if (sbi->s_mb_proc == NULL)
-+ return -EINVAL;
-+
-+ snprintf(devname, sizeof(devname) - 1, "%s",
-+ bdevname(sb->s_bdev, devname));
-+ remove_proc_entry(EXT4_MB_GROUP_PREALLOC, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_STREAM_REQ, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_ORDER2_REQ, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_MIN_TO_SCAN_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_MAX_TO_SCAN_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(EXT4_MB_STATS_NAME, sbi->s_mb_proc);
-+ remove_proc_entry(devname, proc_root_ext4);
++ struct ocfs2_file_private *fp = file->private_data;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
-+ return 0;
++ if (fp) {
++ ocfs2_simple_drop_lockres(osb, &fp->fp_flock);
++ ocfs2_lock_res_free(&fp->fp_flock);
++ kfree(fp);
++ file->private_data = NULL;
++ }
+}
+
-+int __init init_ext4_mballoc(void)
-+{
-+ ext4_pspace_cachep =
-+ kmem_cache_create("ext4_prealloc_space",
-+ sizeof(struct ext4_prealloc_space),
-+ 0, SLAB_RECLAIM_ACCOUNT, NULL);
-+ if (ext4_pspace_cachep == NULL)
-+ return -ENOMEM;
+ static int ocfs2_file_open(struct inode *inode, struct file *file)
+ {
+ int status;
+@@ -89,7 +119,18 @@ static int ocfs2_file_open(struct inode *inode, struct file *file)
+
+ oi->ip_open_count++;
+ spin_unlock(&oi->ip_lock);
+- status = 0;
+
-+#ifdef CONFIG_PROC_FS
-+ proc_root_ext4 = proc_mkdir(EXT4_ROOT, proc_root_fs);
-+ if (proc_root_ext4 == NULL)
-+ printk(KERN_ERR "EXT4-fs: Unable to create %s\n", EXT4_ROOT);
-+#endif
++ status = ocfs2_init_file_private(inode, file);
++ if (status) {
++ /*
++ * We want to set open count back if we're failing the
++ * open.
++ */
++ spin_lock(&oi->ip_lock);
++ oi->ip_open_count--;
++ spin_unlock(&oi->ip_lock);
++ }
+
-+ return 0;
-+}
+ leave:
+ mlog_exit(status);
+ return status;
+@@ -108,11 +149,24 @@ static int ocfs2_file_release(struct inode *inode, struct file *file)
+ oi->ip_flags &= ~OCFS2_INODE_OPEN_DIRECT;
+ spin_unlock(&oi->ip_lock);
+
++ ocfs2_free_file_private(inode, file);
+
-+void exit_ext4_mballoc(void)
+ mlog_exit(0);
+
+ return 0;
+ }
+
++static int ocfs2_dir_open(struct inode *inode, struct file *file)
+{
-+ /* XXX: synchronize_rcu(); */
-+ kmem_cache_destroy(ext4_pspace_cachep);
-+#ifdef CONFIG_PROC_FS
-+ remove_proc_entry(EXT4_ROOT, proc_root_fs);
-+#endif
++ return ocfs2_init_file_private(inode, file);
+}
+
-+
-+/*
-+ * Check quota and mark choosed space (ac->ac_b_ex) non-free in bitmaps
-+ * Returns 0 if success or error code
-+ */
-+static int ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
-+ handle_t *handle)
++static int ocfs2_dir_release(struct inode *inode, struct file *file)
+{
-+ struct buffer_head *bitmap_bh = NULL;
-+ struct ext4_super_block *es;
-+ struct ext4_group_desc *gdp;
-+ struct buffer_head *gdp_bh;
-+ struct ext4_sb_info *sbi;
-+ struct super_block *sb;
-+ ext4_fsblk_t block;
-+ int err;
++ ocfs2_free_file_private(inode, file);
++ return 0;
++}
+
-+ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
-+ BUG_ON(ac->ac_b_ex.fe_len <= 0);
+ static int ocfs2_sync_file(struct file *file,
+ struct dentry *dentry,
+ int datasync)
+@@ -382,18 +436,13 @@ static int ocfs2_truncate_file(struct inode *inode,
+
+ down_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+- /* This forces other nodes to sync and drop their pages. Do
+- * this even if we have a truncate without allocation change -
+- * ocfs2 cluster sizes can be much greater than page size, so
+- * we have to truncate them anyway. */
+- status = ocfs2_data_lock(inode, 1);
+- if (status < 0) {
+- up_write(&OCFS2_I(inode)->ip_alloc_sem);
+-
+- mlog_errno(status);
+- goto bail;
+- }
+-
++ /*
++ * The inode lock forced other nodes to sync and drop their
++ * pages, which (correctly) happens even if we have a truncate
++ * without allocation change - ocfs2 cluster sizes can be much
++ * greater than page size, so we have to truncate them
++ * anyway.
++ */
+ unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
+ truncate_inode_pages(inode->i_mapping, new_i_size);
+
+@@ -403,7 +452,7 @@ static int ocfs2_truncate_file(struct inode *inode,
+ if (status)
+ mlog_errno(status);
+
+- goto bail_unlock_data;
++ goto bail_unlock_sem;
+ }
+
+ /* alright, we're going to need to do a full blown alloc size
+@@ -413,25 +462,23 @@ static int ocfs2_truncate_file(struct inode *inode,
+ status = ocfs2_orphan_for_truncate(osb, inode, di_bh, new_i_size);
+ if (status < 0) {
+ mlog_errno(status);
+- goto bail_unlock_data;
++ goto bail_unlock_sem;
+ }
+
+ status = ocfs2_prepare_truncate(osb, inode, di_bh, &tc);
+ if (status < 0) {
+ mlog_errno(status);
+- goto bail_unlock_data;
++ goto bail_unlock_sem;
+ }
+
+ status = ocfs2_commit_truncate(osb, inode, di_bh, tc);
+ if (status < 0) {
+ mlog_errno(status);
+- goto bail_unlock_data;
++ goto bail_unlock_sem;
+ }
+
+ /* TODO: orphan dir cleanup here. */
+-bail_unlock_data:
+- ocfs2_data_unlock(inode, 1);
+-
++bail_unlock_sem:
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+ bail:
+@@ -579,7 +626,7 @@ int ocfs2_lock_allocators(struct inode *inode, struct ocfs2_dinode *di,
+
+ mlog(0, "extend inode %llu, i_size = %lld, di->i_clusters = %u, "
+ "clusters_to_add = %u, extents_to_split = %u\n",
+- (unsigned long long)OCFS2_I(inode)->ip_blkno, i_size_read(inode),
++ (unsigned long long)OCFS2_I(inode)->ip_blkno, (long long)i_size_read(inode),
+ le32_to_cpu(di->i_clusters), clusters_to_add, extents_to_split);
+
+ num_free_extents = ocfs2_num_free_extents(osb, inode, di);
+@@ -760,7 +807,7 @@ restarted_transaction:
+ le32_to_cpu(fe->i_clusters),
+ (unsigned long long)le64_to_cpu(fe->i_size));
+ mlog(0, "inode: ip_clusters=%u, i_size=%lld\n",
+- OCFS2_I(inode)->ip_clusters, i_size_read(inode));
++ OCFS2_I(inode)->ip_clusters, (long long)i_size_read(inode));
+
+ leave:
+ if (handle) {
+@@ -917,7 +964,7 @@ static int ocfs2_extend_file(struct inode *inode,
+ struct buffer_head *di_bh,
+ u64 new_i_size)
+ {
+- int ret = 0, data_locked = 0;
++ int ret = 0;
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+
+ BUG_ON(!di_bh);
+@@ -943,20 +990,6 @@ static int ocfs2_extend_file(struct inode *inode,
+ && ocfs2_sparse_alloc(OCFS2_SB(inode->i_sb)))
+ goto out_update_size;
+
+- /*
+- * protect the pages that ocfs2_zero_extend is going to be
+- * pulling into the page cache.. we do this before the
+- * metadata extend so that we don't get into the situation
+- * where we've extended the metadata but can't get the data
+- * lock to zero.
+- */
+- ret = ocfs2_data_lock(inode, 1);
+- if (ret < 0) {
+- mlog_errno(ret);
+- goto out;
+- }
+- data_locked = 1;
+-
+ /*
+ * The alloc sem blocks people in read/write from reading our
+ * allocation until we're done changing it. We depend on
+@@ -980,7 +1013,7 @@ static int ocfs2_extend_file(struct inode *inode,
+ up_write(&oi->ip_alloc_sem);
+
+ mlog_errno(ret);
+- goto out_unlock;
++ goto out;
+ }
+ }
+
+@@ -991,7 +1024,7 @@ static int ocfs2_extend_file(struct inode *inode,
+
+ if (ret < 0) {
+ mlog_errno(ret);
+- goto out_unlock;
++ goto out;
+ }
+
+ out_update_size:
+@@ -999,10 +1032,6 @@ out_update_size:
+ if (ret < 0)
+ mlog_errno(ret);
+
+-out_unlock:
+- if (data_locked)
+- ocfs2_data_unlock(inode, 1);
+-
+ out:
+ return ret;
+ }
+@@ -1050,7 +1079,7 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
+ }
+ }
+
+- status = ocfs2_meta_lock(inode, &bh, 1);
++ status = ocfs2_inode_lock(inode, &bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1102,7 +1131,7 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
+ bail_commit:
+ ocfs2_commit_trans(osb, handle);
+ bail_unlock:
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ bail_unlock_rw:
+ if (size_change)
+ ocfs2_rw_unlock(inode, 1);
+@@ -1149,7 +1178,7 @@ int ocfs2_permission(struct inode *inode, int mask, struct nameidata *nd)
+
+ mlog_entry_void();
+
+- ret = ocfs2_meta_lock(inode, NULL, 0);
++ ret = ocfs2_inode_lock(inode, NULL, 0);
+ if (ret) {
+ if (ret != -ENOENT)
+ mlog_errno(ret);
+@@ -1158,7 +1187,7 @@ int ocfs2_permission(struct inode *inode, int mask, struct nameidata *nd)
+
+ ret = generic_permission(inode, mask, NULL);
+
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+ out:
+ mlog_exit(ret);
+ return ret;
+@@ -1630,7 +1659,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ goto out;
+ }
+
+- ret = ocfs2_meta_lock(inode, &di_bh, 1);
++ ret = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (ret) {
+ mlog_errno(ret);
+ goto out_rw_unlock;
+@@ -1638,7 +1667,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+
+ if (inode->i_flags & (S_IMMUTABLE|S_APPEND)) {
+ ret = -EPERM;
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+
+ switch (sr->l_whence) {
+@@ -1652,7 +1681,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ break;
+ default:
+ ret = -EINVAL;
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+ sr->l_whence = 0;
+
+@@ -1663,14 +1692,14 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ || (sr->l_start + llen) < 0
+ || (sr->l_start + llen) > max_off) {
+ ret = -EINVAL;
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+ size = sr->l_start + sr->l_len;
+
+ if (cmd == OCFS2_IOC_RESVSP || cmd == OCFS2_IOC_RESVSP64) {
+ if (sr->l_len <= 0) {
+ ret = -EINVAL;
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+ }
+
+@@ -1678,7 +1707,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ ret = __ocfs2_write_remove_suid(inode, di_bh);
+ if (ret) {
+ mlog_errno(ret);
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+ }
+
+@@ -1704,7 +1733,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
+ if (ret) {
+ mlog_errno(ret);
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+
+ /*
+@@ -1714,7 +1743,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ if (IS_ERR(handle)) {
+ ret = PTR_ERR(handle);
+ mlog_errno(ret);
+- goto out_meta_unlock;
++ goto out_inode_unlock;
+ }
+
+ if (change_size && i_size_read(inode) < size)
+@@ -1727,9 +1756,9 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+
+ ocfs2_commit_trans(osb, handle);
+
+-out_meta_unlock:
++out_inode_unlock:
+ brelse(di_bh);
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ out_rw_unlock:
+ ocfs2_rw_unlock(inode, 1);
+
+@@ -1799,7 +1828,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
+ * if we need to make modifications here.
+ */
+ for(;;) {
+- ret = ocfs2_meta_lock(inode, NULL, meta_level);
++ ret = ocfs2_inode_lock(inode, NULL, meta_level);
+ if (ret < 0) {
+ meta_level = -1;
+ mlog_errno(ret);
+@@ -1817,7 +1846,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
+ * set inode->i_size at the end of a write. */
+ if (should_remove_suid(dentry)) {
+ if (meta_level == 0) {
+- ocfs2_meta_unlock(inode, meta_level);
++ ocfs2_inode_unlock(inode, meta_level);
+ meta_level = 1;
+ continue;
+ }
+@@ -1886,7 +1915,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
+ *ppos = saved_pos;
+
+ out_unlock:
+- ocfs2_meta_unlock(inode, meta_level);
++ ocfs2_inode_unlock(inode, meta_level);
+
+ out:
+ return ret;
+@@ -2099,12 +2128,12 @@ static ssize_t ocfs2_file_splice_read(struct file *in,
+ /*
+ * See the comment in ocfs2_file_aio_read()
+ */
+- ret = ocfs2_meta_lock(inode, NULL, 0);
++ ret = ocfs2_inode_lock(inode, NULL, 0);
+ if (ret < 0) {
+ mlog_errno(ret);
+ goto bail;
+ }
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+
+ ret = generic_file_splice_read(in, ppos, pipe, len, flags);
+
+@@ -2160,12 +2189,12 @@ static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
+ * like i_size. This allows the checks down below
+ * generic_file_aio_read() a chance of actually working.
+ */
+- ret = ocfs2_meta_lock_atime(inode, filp->f_vfsmnt, &lock_level);
++ ret = ocfs2_inode_lock_atime(inode, filp->f_vfsmnt, &lock_level);
+ if (ret < 0) {
+ mlog_errno(ret);
+ goto bail;
+ }
+- ocfs2_meta_unlock(inode, lock_level);
++ ocfs2_inode_unlock(inode, lock_level);
+
+ ret = generic_file_aio_read(iocb, iov, nr_segs, iocb->ki_pos);
+ if (ret == -EINVAL)
+@@ -2204,6 +2233,7 @@ const struct inode_operations ocfs2_special_file_iops = {
+ };
+
+ const struct file_operations ocfs2_fops = {
++ .llseek = generic_file_llseek,
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .mmap = ocfs2_mmap,
+@@ -2216,16 +2246,21 @@ const struct file_operations ocfs2_fops = {
+ #ifdef CONFIG_COMPAT
+ .compat_ioctl = ocfs2_compat_ioctl,
+ #endif
++ .flock = ocfs2_flock,
+ .splice_read = ocfs2_file_splice_read,
+ .splice_write = ocfs2_file_splice_write,
+ };
+
+ const struct file_operations ocfs2_dops = {
++ .llseek = generic_file_llseek,
+ .read = generic_read_dir,
+ .readdir = ocfs2_readdir,
+ .fsync = ocfs2_sync_file,
++ .release = ocfs2_dir_release,
++ .open = ocfs2_dir_open,
+ .ioctl = ocfs2_ioctl,
+ #ifdef CONFIG_COMPAT
+ .compat_ioctl = ocfs2_compat_ioctl,
+ #endif
++ .flock = ocfs2_flock,
+ };
+diff --git a/fs/ocfs2/file.h b/fs/ocfs2/file.h
+index 066f14a..048ddca 100644
+--- a/fs/ocfs2/file.h
++++ b/fs/ocfs2/file.h
+@@ -32,6 +32,12 @@ extern const struct inode_operations ocfs2_file_iops;
+ extern const struct inode_operations ocfs2_special_file_iops;
+ struct ocfs2_alloc_context;
+
++struct ocfs2_file_private {
++ struct file *fp_file;
++ struct mutex fp_mutex;
++ struct ocfs2_lock_res fp_flock;
++};
+
-+ sb = ac->ac_sb;
-+ sbi = EXT4_SB(sb);
-+ es = sbi->s_es;
+ enum ocfs2_alloc_restarted {
+ RESTART_NONE = 0,
+ RESTART_TRANS,
+diff --git a/fs/ocfs2/heartbeat.c b/fs/ocfs2/heartbeat.c
+index c4c3617..c0efd94 100644
+--- a/fs/ocfs2/heartbeat.c
++++ b/fs/ocfs2/heartbeat.c
+@@ -30,9 +30,6 @@
+ #include <linux/highmem.h>
+ #include <linux/kmod.h>
+
+-#include <cluster/heartbeat.h>
+-#include <cluster/nodemanager.h>
+-
+ #include <dlm/dlmapi.h>
+
+ #define MLOG_MASK_PREFIX ML_SUPER
+@@ -44,13 +41,9 @@
+ #include "heartbeat.h"
+ #include "inode.h"
+ #include "journal.h"
+-#include "vote.h"
+
+ #include "buffer_head_io.h"
+
+-#define OCFS2_HB_NODE_DOWN_PRI (0x0000002)
+-#define OCFS2_HB_NODE_UP_PRI OCFS2_HB_NODE_DOWN_PRI
+-
+ static inline void __ocfs2_node_map_set_bit(struct ocfs2_node_map *map,
+ int bit);
+ static inline void __ocfs2_node_map_clear_bit(struct ocfs2_node_map *map,
+@@ -64,9 +57,7 @@ static void __ocfs2_node_map_set(struct ocfs2_node_map *target,
+ void ocfs2_init_node_maps(struct ocfs2_super *osb)
+ {
+ spin_lock_init(&osb->node_map_lock);
+- ocfs2_node_map_init(&osb->mounted_map);
+ ocfs2_node_map_init(&osb->recovery_map);
+- ocfs2_node_map_init(&osb->umount_map);
+ ocfs2_node_map_init(&osb->osb_recovering_orphan_dirs);
+ }
+
+@@ -87,24 +78,7 @@ static void ocfs2_do_node_down(int node_num,
+ return;
+ }
+
+- if (ocfs2_node_map_test_bit(osb, &osb->umount_map, node_num)) {
+- /* If a node is in the umount map, then we've been
+- * expecting him to go down and we know ahead of time
+- * that recovery is not necessary. */
+- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
+- return;
+- }
+-
+ ocfs2_recovery_thread(osb, node_num);
+-
+- ocfs2_remove_node_from_vote_queues(osb, node_num);
+-}
+-
+-static void ocfs2_hb_node_down_cb(struct o2nm_node *node,
+- int node_num,
+- void *data)
+-{
+- ocfs2_do_node_down(node_num, (struct ocfs2_super *) data);
+ }
+
+ /* Called from the dlm when it's about to evict a node. We may also
+@@ -121,27 +95,8 @@ static void ocfs2_dlm_eviction_cb(int node_num,
+ ocfs2_do_node_down(node_num, osb);
+ }
+
+-static void ocfs2_hb_node_up_cb(struct o2nm_node *node,
+- int node_num,
+- void *data)
+-{
+- struct ocfs2_super *osb = data;
+-
+- BUG_ON(osb->node_num == node_num);
+-
+- mlog(0, "node up event for %d\n", node_num);
+- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
+-}
+-
+ void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb)
+ {
+- o2hb_setup_callback(&osb->osb_hb_down, O2HB_NODE_DOWN_CB,
+- ocfs2_hb_node_down_cb, osb,
+- OCFS2_HB_NODE_DOWN_PRI);
+-
+- o2hb_setup_callback(&osb->osb_hb_up, O2HB_NODE_UP_CB,
+- ocfs2_hb_node_up_cb, osb, OCFS2_HB_NODE_UP_PRI);
+-
+ /* Not exactly a heartbeat callback, but leads to essentially
+ * the same path so we set it up here. */
+ dlm_setup_eviction_cb(&osb->osb_eviction_cb,
+@@ -149,39 +104,6 @@ void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb)
+ osb);
+ }
+
+-/* Most functions here are just stubs for now... */
+-int ocfs2_register_hb_callbacks(struct ocfs2_super *osb)
+-{
+- int status;
+-
+- if (ocfs2_mount_local(osb))
+- return 0;
+-
+- status = o2hb_register_callback(osb->uuid_str, &osb->osb_hb_down);
+- if (status < 0) {
+- mlog_errno(status);
+- goto bail;
+- }
+-
+- status = o2hb_register_callback(osb->uuid_str, &osb->osb_hb_up);
+- if (status < 0) {
+- mlog_errno(status);
+- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_down);
+- }
+-
+-bail:
+- return status;
+-}
+-
+-void ocfs2_clear_hb_callbacks(struct ocfs2_super *osb)
+-{
+- if (ocfs2_mount_local(osb))
+- return;
+-
+- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_down);
+- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_up);
+-}
+-
+ void ocfs2_stop_heartbeat(struct ocfs2_super *osb)
+ {
+ int ret;
+@@ -341,8 +263,6 @@ int ocfs2_recovery_map_set(struct ocfs2_super *osb,
+
+ spin_lock(&osb->node_map_lock);
+
+- __ocfs2_node_map_clear_bit(&osb->mounted_map, num);
+-
+ if (!test_bit(num, osb->recovery_map.map)) {
+ __ocfs2_node_map_set_bit(&osb->recovery_map, num);
+ set = 1;
+diff --git a/fs/ocfs2/heartbeat.h b/fs/ocfs2/heartbeat.h
+index e8fb079..5685921 100644
+--- a/fs/ocfs2/heartbeat.h
++++ b/fs/ocfs2/heartbeat.h
+@@ -29,8 +29,6 @@
+ void ocfs2_init_node_maps(struct ocfs2_super *osb);
+
+ void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb);
+-int ocfs2_register_hb_callbacks(struct ocfs2_super *osb);
+-void ocfs2_clear_hb_callbacks(struct ocfs2_super *osb);
+ void ocfs2_stop_heartbeat(struct ocfs2_super *osb);
+
+ /* node map functions - used to keep track of mounted and in-recovery
+diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c
+index ebb2bbe..7e9e4c7 100644
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -49,7 +49,6 @@
+ #include "symlink.h"
+ #include "sysfile.h"
+ #include "uptodate.h"
+-#include "vote.h"
+
+ #include "buffer_head_io.h"
+
+@@ -58,8 +57,11 @@ struct ocfs2_find_inode_args
+ u64 fi_blkno;
+ unsigned long fi_ino;
+ unsigned int fi_flags;
++ unsigned int fi_sysfile_type;
+ };
+
++static struct lock_class_key ocfs2_sysfile_lock_key[NUM_SYSTEM_INODES];
++
+ static int ocfs2_read_locked_inode(struct inode *inode,
+ struct ocfs2_find_inode_args *args);
+ static int ocfs2_init_locked_inode(struct inode *inode, void *opaque);
+@@ -107,7 +109,8 @@ void ocfs2_get_inode_flags(struct ocfs2_inode_info *oi)
+ oi->ip_attr |= OCFS2_DIRSYNC_FL;
+ }
+
+-struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, int flags)
++struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
++ int sysfile_type)
+ {
+ struct inode *inode = NULL;
+ struct super_block *sb = osb->sb;
+@@ -127,6 +130,7 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, int flags)
+ args.fi_blkno = blkno;
+ args.fi_flags = flags;
+ args.fi_ino = ino_from_blkno(sb, blkno);
++ args.fi_sysfile_type = sysfile_type;
+
+ inode = iget5_locked(sb, args.fi_ino, ocfs2_find_actor,
+ ocfs2_init_locked_inode, &args);
+@@ -201,6 +205,9 @@ static int ocfs2_init_locked_inode(struct inode *inode, void *opaque)
+
+ inode->i_ino = args->fi_ino;
+ OCFS2_I(inode)->ip_blkno = args->fi_blkno;
++ if (args->fi_sysfile_type != 0)
++ lockdep_set_class(&inode->i_mutex,
++ &ocfs2_sysfile_lock_key[args->fi_sysfile_type]);
+
+ mlog_exit(0);
+ return 0;
+@@ -322,7 +329,7 @@ int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
+ */
+ BUG_ON(le32_to_cpu(fe->i_flags) & OCFS2_SYSTEM_FL);
+
+- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_meta_lockres,
++ ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_inode_lockres,
+ OCFS2_LOCK_TYPE_META, 0, inode);
+
+ ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_open_lockres,
+@@ -333,10 +340,6 @@ int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
+ OCFS2_LOCK_TYPE_RW, inode->i_generation,
+ inode);
+
+- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_data_lockres,
+- OCFS2_LOCK_TYPE_DATA, inode->i_generation,
+- inode);
+-
+ ocfs2_set_inode_flags(inode);
+
+ status = 0;
+@@ -414,7 +417,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
+ if (args->fi_flags & OCFS2_FI_FLAG_SYSFILE)
+ generation = osb->fs_generation;
+
+- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_meta_lockres,
++ ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_inode_lockres,
+ OCFS2_LOCK_TYPE_META,
+ generation, inode);
+
+@@ -429,7 +432,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
+ mlog_errno(status);
+ return status;
+ }
+- status = ocfs2_meta_lock(inode, NULL, 0);
++ status = ocfs2_inode_lock(inode, NULL, 0);
+ if (status) {
+ make_bad_inode(inode);
+ mlog_errno(status);
+@@ -484,7 +487,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
+
+ bail:
+ if (can_lock)
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+
+ if (status < 0)
+ make_bad_inode(inode);
+@@ -586,7 +589,7 @@ static int ocfs2_remove_inode(struct inode *inode,
+ }
+
+ mutex_lock(&inode_alloc_inode->i_mutex);
+- status = ocfs2_meta_lock(inode_alloc_inode, &inode_alloc_bh, 1);
++ status = ocfs2_inode_lock(inode_alloc_inode, &inode_alloc_bh, 1);
+ if (status < 0) {
+ mutex_unlock(&inode_alloc_inode->i_mutex);
+
+@@ -617,7 +620,7 @@ static int ocfs2_remove_inode(struct inode *inode,
+ }
+
+ di->i_dtime = cpu_to_le64(CURRENT_TIME.tv_sec);
+- le32_and_cpu(&di->i_flags, ~(OCFS2_VALID_FL | OCFS2_ORPHANED_FL));
++ di->i_flags &= cpu_to_le32(~(OCFS2_VALID_FL | OCFS2_ORPHANED_FL));
+
+ status = ocfs2_journal_dirty(handle, di_bh);
+ if (status < 0) {
+@@ -635,7 +638,7 @@ static int ocfs2_remove_inode(struct inode *inode,
+ bail_commit:
+ ocfs2_commit_trans(osb, handle);
+ bail_unlock:
+- ocfs2_meta_unlock(inode_alloc_inode, 1);
++ ocfs2_inode_unlock(inode_alloc_inode, 1);
+ mutex_unlock(&inode_alloc_inode->i_mutex);
+ brelse(inode_alloc_bh);
+ bail:
+@@ -709,7 +712,7 @@ static int ocfs2_wipe_inode(struct inode *inode,
+ * delete_inode operation. We do this now to avoid races with
+ * recovery completion on other nodes. */
+ mutex_lock(&orphan_dir_inode->i_mutex);
+- status = ocfs2_meta_lock(orphan_dir_inode, &orphan_dir_bh, 1);
++ status = ocfs2_inode_lock(orphan_dir_inode, &orphan_dir_bh, 1);
+ if (status < 0) {
+ mutex_unlock(&orphan_dir_inode->i_mutex);
+
+@@ -718,8 +721,8 @@ static int ocfs2_wipe_inode(struct inode *inode,
+ }
+
+ /* we do this while holding the orphan dir lock because we
+- * don't want recovery being run from another node to vote for
+- * an inode delete on us -- this will result in two nodes
++ * don't want recovery being run from another node to try an
++ * inode delete underneath us -- this will result in two nodes
+ * truncating the same file! */
+ status = ocfs2_truncate_for_delete(osb, inode, di_bh);
+ if (status < 0) {
+@@ -733,7 +736,7 @@ static int ocfs2_wipe_inode(struct inode *inode,
+ mlog_errno(status);
+
+ bail_unlock_dir:
+- ocfs2_meta_unlock(orphan_dir_inode, 1);
++ ocfs2_inode_unlock(orphan_dir_inode, 1);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
+ brelse(orphan_dir_bh);
+ bail:
+@@ -744,7 +747,7 @@ bail:
+ }
+
+ /* There is a series of simple checks that should be done before a
+- * vote is even considered. Encapsulate those in this function. */
++ * trylock is even considered. Encapsulate those in this function. */
+ static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
+ {
+ int ret = 0;
+@@ -758,14 +761,14 @@ static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
+ goto bail;
+ }
+
+- /* If we're coming from process_vote we can't go into our own
++ /* If we're coming from downconvert_thread we can't go into our own
+ * voting [hello, deadlock city!], so unforuntately we just
+ * have to skip deleting this guy. That's OK though because
+ * the node who's doing the actual deleting should handle it
+ * anyway. */
+- if (current == osb->vote_task) {
++ if (current == osb->dc_task) {
+ mlog(0, "Skipping delete of %lu because we're currently "
+- "in process_vote\n", inode->i_ino);
++ "in downconvert\n", inode->i_ino);
+ goto bail;
+ }
+
+@@ -779,10 +782,9 @@ static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
+ goto bail_unlock;
+ }
+
+- /* If we have voted "yes" on the wipe of this inode for
+- * another node, it will be marked here so we can safely skip
+- * it. Recovery will cleanup any inodes we might inadvertantly
+- * skip here. */
++ /* If we have allowed wipe of this inode for another node, it
++ * will be marked here so we can safely skip it. Recovery will
++ * cleanup any inodes we might inadvertently skip here. */
+ if (oi->ip_flags & OCFS2_INODE_SKIP_DELETE) {
+ mlog(0, "Skipping delete of %lu because another node "
+ "has done this for us.\n", inode->i_ino);
+@@ -929,13 +931,13 @@ void ocfs2_delete_inode(struct inode *inode)
+
+ /* Lock down the inode. This gives us an up to date view of
+ * it's metadata (for verification), and allows us to
+- * serialize delete_inode votes.
++ * serialize delete_inode on multiple nodes.
+ *
+ * Even though we might be doing a truncate, we don't take the
+ * allocation lock here as it won't be needed - nobody will
+ * have the file open.
+ */
+- status = ocfs2_meta_lock(inode, &di_bh, 1);
++ status = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -947,15 +949,15 @@ void ocfs2_delete_inode(struct inode *inode)
+ * before we go ahead and wipe the inode. */
+ status = ocfs2_query_inode_wipe(inode, di_bh, &wipe);
+ if (!wipe || status < 0) {
+- /* Error and inode busy vote both mean we won't be
++ /* Error and remote inode busy both mean we won't be
+ * removing the inode, so they take almost the same
+ * path. */
+ if (status < 0)
+ mlog_errno(status);
+
+- /* Someone in the cluster has voted to not wipe this
+- * inode, or it was never completely orphaned. Write
+- * out the pages and exit now. */
++ /* Someone in the cluster has disallowed a wipe of
++ * this inode, or it was never completely
++ * orphaned. Write out the pages and exit now. */
+ ocfs2_cleanup_delete_inode(inode, 1);
+ goto bail_unlock_inode;
+ }
+@@ -981,7 +983,7 @@ void ocfs2_delete_inode(struct inode *inode)
+ OCFS2_I(inode)->ip_flags |= OCFS2_INODE_DELETED;
+
+ bail_unlock_inode:
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ brelse(di_bh);
+ bail_unblock:
+ status = sigprocmask(SIG_SETMASK, &oldset, NULL);
+@@ -1008,15 +1010,14 @@ void ocfs2_clear_inode(struct inode *inode)
+ mlog_bug_on_msg(OCFS2_SB(inode->i_sb) == NULL,
+ "Inode=%lu\n", inode->i_ino);
+
+- /* For remove delete_inode vote, we hold open lock before,
+- * now it is time to unlock PR and EX open locks. */
++ /* To prevent remote deletes we hold open lock before, now it
++ * is time to unlock PR and EX open locks. */
+ ocfs2_open_unlock(inode);
+
+ /* Do these before all the other work so that we don't bounce
+- * the vote thread while waiting to destroy the locks. */
++ * the downconvert thread while waiting to destroy the locks. */
+ ocfs2_mark_lockres_freeing(&oi->ip_rw_lockres);
+- ocfs2_mark_lockres_freeing(&oi->ip_meta_lockres);
+- ocfs2_mark_lockres_freeing(&oi->ip_data_lockres);
++ ocfs2_mark_lockres_freeing(&oi->ip_inode_lockres);
+ ocfs2_mark_lockres_freeing(&oi->ip_open_lockres);
+
+ /* We very well may get a clear_inode before all an inodes
+@@ -1039,8 +1040,7 @@ void ocfs2_clear_inode(struct inode *inode)
+ mlog_errno(status);
+
+ ocfs2_lock_res_free(&oi->ip_rw_lockres);
+- ocfs2_lock_res_free(&oi->ip_meta_lockres);
+- ocfs2_lock_res_free(&oi->ip_data_lockres);
++ ocfs2_lock_res_free(&oi->ip_inode_lockres);
+ ocfs2_lock_res_free(&oi->ip_open_lockres);
+
+ ocfs2_metadata_cache_purge(inode);
+@@ -1184,15 +1184,15 @@ int ocfs2_inode_revalidate(struct dentry *dentry)
+ }
+ spin_unlock(&OCFS2_I(inode)->ip_lock);
+
+- /* Let ocfs2_meta_lock do the work of updating our struct
++ /* Let ocfs2_inode_lock do the work of updating our struct
+ * inode for us. */
+- status = ocfs2_meta_lock(inode, NULL, 0);
++ status = ocfs2_inode_lock(inode, NULL, 0);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+ goto bail;
+ }
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+ bail:
+ mlog_exit(status);
+
+diff --git a/fs/ocfs2/inode.h b/fs/ocfs2/inode.h
+index 70e881c..390a855 100644
+--- a/fs/ocfs2/inode.h
++++ b/fs/ocfs2/inode.h
+@@ -34,8 +34,7 @@ struct ocfs2_inode_info
+ u64 ip_blkno;
+
+ struct ocfs2_lock_res ip_rw_lockres;
+- struct ocfs2_lock_res ip_meta_lockres;
+- struct ocfs2_lock_res ip_data_lockres;
++ struct ocfs2_lock_res ip_inode_lockres;
+ struct ocfs2_lock_res ip_open_lockres;
+
+ /* protects allocation changes on this inode. */
+@@ -121,9 +120,10 @@ void ocfs2_delete_inode(struct inode *inode);
+ void ocfs2_drop_inode(struct inode *inode);
+
+ /* Flags for ocfs2_iget() */
+-#define OCFS2_FI_FLAG_SYSFILE 0x4
+-#define OCFS2_FI_FLAG_ORPHAN_RECOVERY 0x8
+-struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 feoff, int flags);
++#define OCFS2_FI_FLAG_SYSFILE 0x1
++#define OCFS2_FI_FLAG_ORPHAN_RECOVERY 0x2
++struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 feoff, unsigned flags,
++ int sysfile_type);
+ int ocfs2_inode_init_private(struct inode *inode);
+ int ocfs2_inode_revalidate(struct dentry *dentry);
+ int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
+diff --git a/fs/ocfs2/ioctl.c b/fs/ocfs2/ioctl.c
+index 87dcece..5177fba 100644
+--- a/fs/ocfs2/ioctl.c
++++ b/fs/ocfs2/ioctl.c
+@@ -20,6 +20,7 @@
+
+ #include "ocfs2_fs.h"
+ #include "ioctl.h"
++#include "resize.h"
+
+ #include <linux/ext2_fs.h>
+
+@@ -27,14 +28,14 @@ static int ocfs2_get_inode_attr(struct inode *inode, unsigned *flags)
+ {
+ int status;
+
+- status = ocfs2_meta_lock(inode, NULL, 0);
++ status = ocfs2_inode_lock(inode, NULL, 0);
+ if (status < 0) {
+ mlog_errno(status);
+ return status;
+ }
+ ocfs2_get_inode_flags(OCFS2_I(inode));
+ *flags = OCFS2_I(inode)->ip_attr;
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+
+ mlog_exit(status);
+ return status;
+@@ -52,7 +53,7 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
+
+ mutex_lock(&inode->i_mutex);
+
+- status = ocfs2_meta_lock(inode, &bh, 1);
++ status = ocfs2_inode_lock(inode, &bh, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto bail;
+@@ -100,7 +101,7 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
+
+ ocfs2_commit_trans(osb, handle);
+ bail_unlock:
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ bail:
+ mutex_unlock(&inode->i_mutex);
+
+@@ -115,8 +116,10 @@ int ocfs2_ioctl(struct inode * inode, struct file * filp,
+ unsigned int cmd, unsigned long arg)
+ {
+ unsigned int flags;
++ int new_clusters;
+ int status;
+ struct ocfs2_space_resv sr;
++ struct ocfs2_new_group_input input;
+
+ switch (cmd) {
+ case OCFS2_IOC_GETFLAGS:
+@@ -140,6 +143,23 @@ int ocfs2_ioctl(struct inode * inode, struct file * filp,
+ return -EFAULT;
+
+ return ocfs2_change_file_space(filp, cmd, &sr);
++ case OCFS2_IOC_GROUP_EXTEND:
++ if (!capable(CAP_SYS_RESOURCE))
++ return -EPERM;
+
-+ ext4_debug("using block group %lu(%d)\n", ac->ac_b_ex.fe_group,
-+ gdp->bg_free_blocks_count);
++ if (get_user(new_clusters, (int __user *)arg))
++ return -EFAULT;
+
-+ err = -EIO;
-+ bitmap_bh = read_block_bitmap(sb, ac->ac_b_ex.fe_group);
-+ if (!bitmap_bh)
-+ goto out_err;
++ return ocfs2_group_extend(inode, new_clusters);
++ case OCFS2_IOC_GROUP_ADD:
++ case OCFS2_IOC_GROUP_ADD64:
++ if (!capable(CAP_SYS_RESOURCE))
++ return -EPERM;
+
-+ err = ext4_journal_get_write_access(handle, bitmap_bh);
-+ if (err)
-+ goto out_err;
++ if (copy_from_user(&input, (int __user *) arg, sizeof(input)))
++ return -EFAULT;
+
-+ err = -EIO;
-+ gdp = ext4_get_group_desc(sb, ac->ac_b_ex.fe_group, &gdp_bh);
-+ if (!gdp)
-+ goto out_err;
++ return ocfs2_group_add(inode, &input);
+ default:
+ return -ENOTTY;
+ }
+@@ -162,6 +182,9 @@ long ocfs2_compat_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ case OCFS2_IOC_RESVSP64:
+ case OCFS2_IOC_UNRESVSP:
+ case OCFS2_IOC_UNRESVSP64:
++ case OCFS2_IOC_GROUP_EXTEND:
++ case OCFS2_IOC_GROUP_ADD:
++ case OCFS2_IOC_GROUP_ADD64:
+ break;
+ default:
+ return -ENOIOCTLCMD;
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 8d81f6c..f31c7e8 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -44,7 +44,6 @@
+ #include "localalloc.h"
+ #include "slot_map.h"
+ #include "super.h"
+-#include "vote.h"
+ #include "sysfile.h"
+
+ #include "buffer_head_io.h"
+@@ -103,7 +102,7 @@ static int ocfs2_commit_cache(struct ocfs2_super *osb)
+ mlog(0, "commit_thread: flushed transaction %lu (%u handles)\n",
+ journal->j_trans_id, flushed);
+
+- ocfs2_kick_vote_thread(osb);
++ ocfs2_wake_downconvert_thread(osb);
+ wake_up(&journal->j_checkpointed);
+ finally:
+ mlog_exit(status);
+@@ -314,14 +313,18 @@ int ocfs2_journal_dirty_data(handle_t *handle,
+ return err;
+ }
+
+-#define OCFS2_DEFAULT_COMMIT_INTERVAL (HZ * 5)
++#define OCFS2_DEFAULT_COMMIT_INTERVAL (HZ * JBD_DEFAULT_MAX_COMMIT_AGE)
+
+ void ocfs2_set_journal_params(struct ocfs2_super *osb)
+ {
+ journal_t *journal = osb->journal->j_journal;
++ unsigned long commit_interval = OCFS2_DEFAULT_COMMIT_INTERVAL;
+
-+ err = ext4_journal_get_write_access(handle, gdp_bh);
-+ if (err)
-+ goto out_err;
++ if (osb->osb_commit_interval)
++ commit_interval = osb->osb_commit_interval;
+
+ spin_lock(&journal->j_state_lock);
+- journal->j_commit_interval = OCFS2_DEFAULT_COMMIT_INTERVAL;
++ journal->j_commit_interval = commit_interval;
+ if (osb->s_mount_opt & OCFS2_MOUNT_BARRIER)
+ journal->j_flags |= JFS_BARRIER;
+ else
+@@ -337,7 +340,7 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
+ struct ocfs2_dinode *di = NULL;
+ struct buffer_head *bh = NULL;
+ struct ocfs2_super *osb;
+- int meta_lock = 0;
++ int inode_lock = 0;
+
+ mlog_entry_void();
+
+@@ -367,14 +370,14 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
+ /* Skip recovery waits here - journal inode metadata never
+ * changes in a live cluster so it can be considered an
+ * exception to the rule. */
+- status = ocfs2_meta_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
++ status = ocfs2_inode_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
+ if (status < 0) {
+ if (status != -ERESTARTSYS)
+ mlog(ML_ERROR, "Could not get lock on journal!\n");
+ goto done;
+ }
+
+- meta_lock = 1;
++ inode_lock = 1;
+ di = (struct ocfs2_dinode *)bh->b_data;
+
+ if (inode->i_size < OCFS2_MIN_JOURNAL_SIZE) {
+@@ -414,8 +417,8 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
+ status = 0;
+ done:
+ if (status < 0) {
+- if (meta_lock)
+- ocfs2_meta_unlock(inode, 1);
++ if (inode_lock)
++ ocfs2_inode_unlock(inode, 1);
+ if (bh != NULL)
+ brelse(bh);
+ if (inode) {
+@@ -544,7 +547,7 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
+ OCFS2_I(inode)->ip_open_count--;
+
+ /* unlock our journal */
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ brelse(journal->j_bh);
+ journal->j_bh = NULL;
+@@ -883,8 +886,8 @@ restart:
+ ocfs2_super_unlock(osb, 1);
+
+ /* We always run recovery on our own orphan dir - the dead
+- * node(s) may have voted "no" on an inode delete earlier. A
+- * revote is therefore required. */
++ * node(s) may have disallowed a previous inode delete. Re-processing
++ * is therefore required. */
+ ocfs2_queue_recovery_completion(osb->journal, osb->slot_num, NULL,
+ NULL);
+
+@@ -973,9 +976,9 @@ static int ocfs2_replay_journal(struct ocfs2_super *osb,
+ }
+ SET_INODE_JOURNAL(inode);
+
+- status = ocfs2_meta_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
++ status = ocfs2_inode_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
+ if (status < 0) {
+- mlog(0, "status returned from ocfs2_meta_lock=%d\n", status);
++ mlog(0, "status returned from ocfs2_inode_lock=%d\n", status);
+ if (status != -ERESTARTSYS)
+ mlog(ML_ERROR, "Could not lock journal!\n");
+ goto done;
+@@ -1047,7 +1050,7 @@ static int ocfs2_replay_journal(struct ocfs2_super *osb,
+ done:
+ /* drop the lock on this nodes journal */
+ if (got_lock)
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ if (inode)
+ iput(inode);
+@@ -1162,14 +1165,14 @@ static int ocfs2_trylock_journal(struct ocfs2_super *osb,
+ SET_INODE_JOURNAL(inode);
+
+ flags = OCFS2_META_LOCK_RECOVERY | OCFS2_META_LOCK_NOQUEUE;
+- status = ocfs2_meta_lock_full(inode, NULL, 1, flags);
++ status = ocfs2_inode_lock_full(inode, NULL, 1, flags);
+ if (status < 0) {
+ if (status != -EAGAIN)
+ mlog_errno(status);
+ goto bail;
+ }
+
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+ bail:
+ if (inode)
+ iput(inode);
+@@ -1241,7 +1244,7 @@ static int ocfs2_orphan_filldir(void *priv, const char *name, int name_len,
+
+ /* Skip bad inodes so that recovery can continue */
+ iter = ocfs2_iget(p->osb, ino,
+- OCFS2_FI_FLAG_ORPHAN_RECOVERY);
++ OCFS2_FI_FLAG_ORPHAN_RECOVERY, 0);
+ if (IS_ERR(iter))
+ return 0;
+
+@@ -1277,7 +1280,7 @@ static int ocfs2_queue_orphans(struct ocfs2_super *osb,
+ }
+
+ mutex_lock(&orphan_dir_inode->i_mutex);
+- status = ocfs2_meta_lock(orphan_dir_inode, NULL, 0);
++ status = ocfs2_inode_lock(orphan_dir_inode, NULL, 0);
+ if (status < 0) {
+ mlog_errno(status);
+ goto out;
+@@ -1293,7 +1296,7 @@ static int ocfs2_queue_orphans(struct ocfs2_super *osb,
+ *head = priv.head;
+
+ out_cluster:
+- ocfs2_meta_unlock(orphan_dir_inode, 0);
++ ocfs2_inode_unlock(orphan_dir_inode, 0);
+ out:
+ mutex_unlock(&orphan_dir_inode->i_mutex);
+ iput(orphan_dir_inode);
+@@ -1380,10 +1383,10 @@ static int ocfs2_recover_orphans(struct ocfs2_super *osb,
+ iter = oi->ip_next_orphan;
+
+ spin_lock(&oi->ip_lock);
+- /* Delete voting may have set these on the assumption
+- * that the other node would wipe them successfully.
+- * If they are still in the node's orphan dir, we need
+- * to reset that state. */
++ /* The remote delete code may have set these on the
++ * assumption that the other node would wipe them
++ * successfully. If they are still in the node's
++ * orphan dir, we need to reset that state. */
+ oi->ip_flags &= ~(OCFS2_INODE_DELETED|OCFS2_INODE_SKIP_DELETE);
+
+ /* Set the proper information to get us going into
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index 4b32e09..220f3e8 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -278,6 +278,12 @@ int ocfs2_journal_dirty_data(handle_t *handle,
+ /* simple file updates like chmod, etc. */
+ #define OCFS2_INODE_UPDATE_CREDITS 1
+
++/* group extend. inode update and last group update. */
++#define OCFS2_GROUP_EXTEND_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1)
+
-+ block = ac->ac_b_ex.fe_group * EXT4_BLOCKS_PER_GROUP(sb)
-+ + ac->ac_b_ex.fe_start
-+ + le32_to_cpu(es->s_first_data_block);
++/* group add. inode update and the new group update. */
++#define OCFS2_GROUP_ADD_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1)
+
-+ if (block == ext4_block_bitmap(sb, gdp) ||
-+ block == ext4_inode_bitmap(sb, gdp) ||
-+ in_range(block, ext4_inode_table(sb, gdp),
-+ EXT4_SB(sb)->s_itb_per_group)) {
+ /* get one bit out of a suballocator: dinode + group descriptor +
+ * prev. group desc. if we relink. */
+ #define OCFS2_SUBALLOC_ALLOC (3)
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 58ea88b..add1ffd 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -75,18 +75,12 @@ static int ocfs2_local_alloc_new_window(struct ocfs2_super *osb,
+ static int ocfs2_local_alloc_slide_window(struct ocfs2_super *osb,
+ struct inode *local_alloc_inode);
+
+-/*
+- * Determine how large our local alloc window should be, in bits.
+- *
+- * These values (and the behavior in ocfs2_alloc_should_use_local) have
+- * been chosen so that most allocations, including new block groups go
+- * through local alloc.
+- */
+ static inline int ocfs2_local_alloc_window_bits(struct ocfs2_super *osb)
+ {
+- BUG_ON(osb->s_clustersize_bits < 12);
++ BUG_ON(osb->s_clustersize_bits > 20);
+
+- return 2048 >> (osb->s_clustersize_bits - 12);
++ /* Size local alloc windows by the megabyte */
++ return osb->local_alloc_size << (20 - osb->s_clustersize_bits);
+ }
+
+ /*
+@@ -96,18 +90,23 @@ static inline int ocfs2_local_alloc_window_bits(struct ocfs2_super *osb)
+ int ocfs2_alloc_should_use_local(struct ocfs2_super *osb, u64 bits)
+ {
+ int la_bits = ocfs2_local_alloc_window_bits(osb);
++ int ret = 0;
+
+ if (osb->local_alloc_state != OCFS2_LA_ENABLED)
+- return 0;
++ goto bail;
+
+ /* la_bits should be at least twice the size (in clusters) of
+ * a new block group. We want to be sure block group
+ * allocations go through the local alloc, so allow an
+ * allocation to take up to half the bitmap. */
+ if (bits > (la_bits / 2))
+- return 0;
++ goto bail;
+
+- return 1;
++ ret = 1;
++bail:
++ mlog(0, "state=%d, bits=%llu, la_bits=%d, ret=%d\n",
++ osb->local_alloc_state, (unsigned long long)bits, la_bits, ret);
++ return ret;
+ }
+
+ int ocfs2_load_local_alloc(struct ocfs2_super *osb)
+@@ -121,6 +120,19 @@ int ocfs2_load_local_alloc(struct ocfs2_super *osb)
+
+ mlog_entry_void();
+
++ if (ocfs2_mount_local(osb))
++ goto bail;
+
-+ ext4_error(sb, __FUNCTION__,
-+ "Allocating block in system zone - block = %llu",
-+ block);
-+ }
-+#ifdef AGGRESSIVE_CHECK
-+ {
-+ int i;
-+ for (i = 0; i < ac->ac_b_ex.fe_len; i++) {
-+ BUG_ON(mb_test_bit(ac->ac_b_ex.fe_start + i,
-+ bitmap_bh->b_data));
-+ }
-+ }
-+#endif
-+ mb_set_bits(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group), bitmap_bh->b_data,
-+ ac->ac_b_ex.fe_start, ac->ac_b_ex.fe_len);
++ if (osb->local_alloc_size == 0)
++ goto bail;
+
-+ spin_lock(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group));
-+ if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
-+ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
-+ gdp->bg_free_blocks_count =
-+ cpu_to_le16(ext4_free_blocks_after_init(sb,
-+ ac->ac_b_ex.fe_group,
-+ gdp));
++ if (ocfs2_local_alloc_window_bits(osb) >= osb->bitmap_cpg) {
++ mlog(ML_NOTICE, "Requested local alloc window %d is larger "
++ "than max possible %u. Using defaults.\n",
++ ocfs2_local_alloc_window_bits(osb), (osb->bitmap_cpg - 1));
++ osb->local_alloc_size = OCFS2_DEFAULT_LOCAL_ALLOC_SIZE;
+ }
-+ gdp->bg_free_blocks_count =
-+ cpu_to_le16(le16_to_cpu(gdp->bg_free_blocks_count)
-+ - ac->ac_b_ex.fe_len);
-+ gdp->bg_checksum = ext4_group_desc_csum(sbi, ac->ac_b_ex.fe_group, gdp);
-+ spin_unlock(sb_bgl_lock(sbi, ac->ac_b_ex.fe_group));
-+ percpu_counter_sub(&sbi->s_freeblocks_counter, ac->ac_b_ex.fe_len);
+
-+ err = ext4_journal_dirty_metadata(handle, bitmap_bh);
-+ if (err)
-+ goto out_err;
-+ err = ext4_journal_dirty_metadata(handle, gdp_bh);
+ /* read the alloc off disk */
+ inode = ocfs2_get_system_file_inode(osb, LOCAL_ALLOC_SYSTEM_INODE,
+ osb->slot_num);
+@@ -181,6 +193,9 @@ bail:
+ if (inode)
+ iput(inode);
+
++ mlog(0, "Local alloc window bits = %d\n",
++ ocfs2_local_alloc_window_bits(osb));
+
-+out_err:
-+ sb->s_dirt = 1;
-+ put_bh(bitmap_bh);
-+ return err;
-+}
+ mlog_exit(status);
+ return status;
+ }
+@@ -231,7 +246,7 @@ void ocfs2_shutdown_local_alloc(struct ocfs2_super *osb)
+
+ mutex_lock(&main_bm_inode->i_mutex);
+
+- status = ocfs2_meta_lock(main_bm_inode, &main_bm_bh, 1);
++ status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto out_mutex;
+@@ -286,7 +301,7 @@ out_unlock:
+ if (main_bm_bh)
+ brelse(main_bm_bh);
+
+- ocfs2_meta_unlock(main_bm_inode, 1);
++ ocfs2_inode_unlock(main_bm_inode, 1);
+
+ out_mutex:
+ mutex_unlock(&main_bm_inode->i_mutex);
+@@ -399,7 +414,7 @@ int ocfs2_complete_local_alloc_recovery(struct ocfs2_super *osb,
+
+ mutex_lock(&main_bm_inode->i_mutex);
+
+- status = ocfs2_meta_lock(main_bm_inode, &main_bm_bh, 1);
++ status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto out_mutex;
+@@ -424,7 +439,7 @@ int ocfs2_complete_local_alloc_recovery(struct ocfs2_super *osb,
+ ocfs2_commit_trans(osb, handle);
+
+ out_unlock:
+- ocfs2_meta_unlock(main_bm_inode, 1);
++ ocfs2_inode_unlock(main_bm_inode, 1);
+
+ out_mutex:
+ mutex_unlock(&main_bm_inode->i_mutex);
+@@ -521,6 +536,9 @@ bail:
+ iput(local_alloc_inode);
+ }
+
++ mlog(0, "bits=%d, slot=%d, ret=%d\n", bits_wanted, osb->slot_num,
++ status);
+
-+/*
-+ * here we normalize request for locality group
-+ * Group request are normalized to s_strip size if we set the same via mount
-+ * option. If not we set it to s_mb_group_prealloc which can be configured via
-+ * /proc/fs/ext4/<partition>/group_prealloc
+ mlog_exit(status);
+ return status;
+ }
+diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
+new file mode 100644
+index 0000000..203f871
+--- /dev/null
++++ b/fs/ocfs2/locks.c
+@@ -0,0 +1,125 @@
++/* -*- mode: c; c-basic-offset: 8; -*-
++ * vim: noexpandtab sw=8 ts=8 sts=0:
+ *
-+ * XXX: should we try to preallocate more than the group has now?
-+ */
-+static void ext4_mb_normalize_group_request(struct ext4_allocation_context *ac)
-+{
-+ struct super_block *sb = ac->ac_sb;
-+ struct ext4_locality_group *lg = ac->ac_lg;
-+
-+ BUG_ON(lg == NULL);
-+ if (EXT4_SB(sb)->s_stripe)
-+ ac->ac_g_ex.fe_len = EXT4_SB(sb)->s_stripe;
-+ else
-+ ac->ac_g_ex.fe_len = EXT4_SB(sb)->s_mb_group_prealloc;
-+ mb_debug("#%u: goal %lu blocks for locality group\n",
-+ current->pid, ac->ac_g_ex.fe_len);
-+}
-+
-+/*
-+ * Normalization means making request better in terms of
-+ * size and alignment
-+ */
-+static void ext4_mb_normalize_request(struct ext4_allocation_context *ac,
-+ struct ext4_allocation_request *ar)
-+{
-+ int bsbits, max;
-+ ext4_lblk_t end;
-+ struct list_head *cur;
-+ loff_t size, orig_size, start_off;
-+ ext4_lblk_t start, orig_start;
-+ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
-+
-+ /* do normalize only data requests, metadata requests
-+ do not need preallocation */
-+ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
-+ return;
-+
-+ /* sometime caller may want exact blocks */
-+ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
-+ return;
-+
-+ /* caller may indicate that preallocation isn't
-+ * required (it's a tail, for example) */
-+ if (ac->ac_flags & EXT4_MB_HINT_NOPREALLOC)
-+ return;
-+
-+ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC) {
-+ ext4_mb_normalize_group_request(ac);
-+ return ;
-+ }
-+
-+ bsbits = ac->ac_sb->s_blocksize_bits;
-+
-+ /* first, let's learn actual file size
-+ * given current request is allocated */
-+ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
-+ size = size << bsbits;
-+ if (size < i_size_read(ac->ac_inode))
-+ size = i_size_read(ac->ac_inode);
-+
-+ /* max available blocks in a free group */
-+ max = EXT4_BLOCKS_PER_GROUP(ac->ac_sb) - 1 - 1 -
-+ EXT4_SB(ac->ac_sb)->s_itb_per_group;
-+
-+#define NRL_CHECK_SIZE(req, size, max,bits) \
-+ (req <= (size) || max <= ((size) >> bits))
-+
-+ /* first, try to predict filesize */
-+ /* XXX: should this table be tunable? */
-+ start_off = 0;
-+ if (size <= 16 * 1024) {
-+ size = 16 * 1024;
-+ } else if (size <= 32 * 1024) {
-+ size = 32 * 1024;
-+ } else if (size <= 64 * 1024) {
-+ size = 64 * 1024;
-+ } else if (size <= 128 * 1024) {
-+ size = 128 * 1024;
-+ } else if (size <= 256 * 1024) {
-+ size = 256 * 1024;
-+ } else if (size <= 512 * 1024) {
-+ size = 512 * 1024;
-+ } else if (size <= 1024 * 1024) {
-+ size = 1024 * 1024;
-+ } else if (NRL_CHECK_SIZE(size, 4 * 1024 * 1024, max, bsbits)) {
-+ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
-+ (20 - bsbits)) << 20;
-+ size = 1024 * 1024;
-+ } else if (NRL_CHECK_SIZE(size, 8 * 1024 * 1024, max, bsbits)) {
-+ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
-+ (22 - bsbits)) << 22;
-+ size = 4 * 1024 * 1024;
-+ } else if (NRL_CHECK_SIZE(ac->ac_o_ex.fe_len,
-+ (8<<20)>>bsbits, max, bsbits)) {
-+ start_off = ((loff_t)ac->ac_o_ex.fe_logical >>
-+ (23 - bsbits)) << 23;
-+ size = 8 * 1024 * 1024;
-+ } else {
-+ start_off = (loff_t)ac->ac_o_ex.fe_logical << bsbits;
-+ size = ac->ac_o_ex.fe_len << bsbits;
-+ }
-+ orig_size = size = size >> bsbits;
-+ orig_start = start = start_off >> bsbits;
-+
-+ /* don't cover already allocated blocks in selected range */
-+ if (ar->pleft && start <= ar->lleft) {
-+ size -= ar->lleft + 1 - start;
-+ start = ar->lleft + 1;
-+ }
-+ if (ar->pright && start + size - 1 >= ar->lright)
-+ size -= start + size - ar->lright;
-+
-+ end = start + size;
-+
-+ /* check we don't cross already preallocated blocks */
-+ rcu_read_lock();
-+ list_for_each_rcu(cur, &ei->i_prealloc_list) {
-+ struct ext4_prealloc_space *pa;
-+ unsigned long pa_end;
-+
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
-+
-+ if (pa->pa_deleted)
-+ continue;
-+ spin_lock(&pa->pa_lock);
-+ if (pa->pa_deleted) {
-+ spin_unlock(&pa->pa_lock);
-+ continue;
-+ }
-+
-+ pa_end = pa->pa_lstart + pa->pa_len;
-+
-+ /* PA must not overlap original request */
-+ BUG_ON(!(ac->ac_o_ex.fe_logical >= pa_end ||
-+ ac->ac_o_ex.fe_logical < pa->pa_lstart));
-+
-+ /* skip PA normalized request doesn't overlap with */
-+ if (pa->pa_lstart >= end) {
-+ spin_unlock(&pa->pa_lock);
-+ continue;
-+ }
-+ if (pa_end <= start) {
-+ spin_unlock(&pa->pa_lock);
-+ continue;
-+ }
-+ BUG_ON(pa->pa_lstart <= start && pa_end >= end);
-+
-+ if (pa_end <= ac->ac_o_ex.fe_logical) {
-+ BUG_ON(pa_end < start);
-+ start = pa_end;
-+ }
-+
-+ if (pa->pa_lstart > ac->ac_o_ex.fe_logical) {
-+ BUG_ON(pa->pa_lstart > end);
-+ end = pa->pa_lstart;
-+ }
-+ spin_unlock(&pa->pa_lock);
-+ }
-+ rcu_read_unlock();
-+ size = end - start;
-+
-+ /* XXX: extra loop to check we really don't overlap preallocations */
-+ rcu_read_lock();
-+ list_for_each_rcu(cur, &ei->i_prealloc_list) {
-+ struct ext4_prealloc_space *pa;
-+ unsigned long pa_end;
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
-+ spin_lock(&pa->pa_lock);
-+ if (pa->pa_deleted == 0) {
-+ pa_end = pa->pa_lstart + pa->pa_len;
-+ BUG_ON(!(start >= pa_end || end <= pa->pa_lstart));
-+ }
-+ spin_unlock(&pa->pa_lock);
-+ }
-+ rcu_read_unlock();
-+
-+ if (start + size <= ac->ac_o_ex.fe_logical &&
-+ start > ac->ac_o_ex.fe_logical) {
-+ printk(KERN_ERR "start %lu, size %lu, fe_logical %lu\n",
-+ (unsigned long) start, (unsigned long) size,
-+ (unsigned long) ac->ac_o_ex.fe_logical);
-+ }
-+ BUG_ON(start + size <= ac->ac_o_ex.fe_logical &&
-+ start > ac->ac_o_ex.fe_logical);
-+ BUG_ON(size <= 0 || size >= EXT4_BLOCKS_PER_GROUP(ac->ac_sb));
-+
-+ /* now prepare goal request */
-+
-+ /* XXX: is it better to align blocks WRT to logical
-+ * placement or satisfy big request as is */
-+ ac->ac_g_ex.fe_logical = start;
-+ ac->ac_g_ex.fe_len = size;
-+
-+ /* define goal start in order to merge */
-+ if (ar->pright && (ar->lright == (start + size))) {
-+ /* merge to the right */
-+ ext4_get_group_no_and_offset(ac->ac_sb, ar->pright - size,
-+ &ac->ac_f_ex.fe_group,
-+ &ac->ac_f_ex.fe_start);
-+ ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
-+ }
-+ if (ar->pleft && (ar->lleft + 1 == start)) {
-+ /* merge to the left */
-+ ext4_get_group_no_and_offset(ac->ac_sb, ar->pleft + 1,
-+ &ac->ac_f_ex.fe_group,
-+ &ac->ac_f_ex.fe_start);
-+ ac->ac_flags |= EXT4_MB_HINT_TRY_GOAL;
-+ }
-+
-+ mb_debug("goal: %u(was %u) blocks at %u\n", (unsigned) size,
-+ (unsigned) orig_size, (unsigned) start);
-+}
-+
-+static void ext4_mb_collect_stats(struct ext4_allocation_context *ac)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+
-+ if (sbi->s_mb_stats && ac->ac_g_ex.fe_len > 1) {
-+ atomic_inc(&sbi->s_bal_reqs);
-+ atomic_add(ac->ac_b_ex.fe_len, &sbi->s_bal_allocated);
-+ if (ac->ac_o_ex.fe_len >= ac->ac_g_ex.fe_len)
-+ atomic_inc(&sbi->s_bal_success);
-+ atomic_add(ac->ac_found, &sbi->s_bal_ex_scanned);
-+ if (ac->ac_g_ex.fe_start == ac->ac_b_ex.fe_start &&
-+ ac->ac_g_ex.fe_group == ac->ac_b_ex.fe_group)
-+ atomic_inc(&sbi->s_bal_goals);
-+ if (ac->ac_found > sbi->s_mb_max_to_scan)
-+ atomic_inc(&sbi->s_bal_breaks);
-+ }
-+
-+ ext4_mb_store_history(ac);
-+}
-+
-+/*
-+ * use blocks preallocated to inode
++ * locks.c
++ *
++ * Userspace file locking support
++ *
++ * Copyright (C) 2007 Oracle. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public
++ * License as published by the Free Software Foundation; either
++ * version 2 of the License, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public
++ * License along with this program; if not, write to the
++ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
++ * Boston, MA 021110-1307, USA.
+ */
-+static void ext4_mb_use_inode_pa(struct ext4_allocation_context *ac,
-+ struct ext4_prealloc_space *pa)
-+{
-+ ext4_fsblk_t start;
-+ ext4_fsblk_t end;
-+ int len;
-+
-+ /* found preallocated blocks, use them */
-+ start = pa->pa_pstart + (ac->ac_o_ex.fe_logical - pa->pa_lstart);
-+ end = min(pa->pa_pstart + pa->pa_len, start + ac->ac_o_ex.fe_len);
-+ len = end - start;
-+ ext4_get_group_no_and_offset(ac->ac_sb, start, &ac->ac_b_ex.fe_group,
-+ &ac->ac_b_ex.fe_start);
-+ ac->ac_b_ex.fe_len = len;
-+ ac->ac_status = AC_STATUS_FOUND;
-+ ac->ac_pa = pa;
-+
-+ BUG_ON(start < pa->pa_pstart);
-+ BUG_ON(start + len > pa->pa_pstart + pa->pa_len);
-+ BUG_ON(pa->pa_free < len);
-+ pa->pa_free -= len;
+
-+ mb_debug("use %llu/%lu from inode pa %p\n", start, len, pa);
-+}
++#include <linux/fs.h>
+
-+/*
-+ * use blocks preallocated to locality group
-+ */
-+static void ext4_mb_use_group_pa(struct ext4_allocation_context *ac,
-+ struct ext4_prealloc_space *pa)
-+{
-+ unsigned len = ac->ac_o_ex.fe_len;
++#define MLOG_MASK_PREFIX ML_INODE
++#include <cluster/masklog.h>
+
-+ ext4_get_group_no_and_offset(ac->ac_sb, pa->pa_pstart,
-+ &ac->ac_b_ex.fe_group,
-+ &ac->ac_b_ex.fe_start);
-+ ac->ac_b_ex.fe_len = len;
-+ ac->ac_status = AC_STATUS_FOUND;
-+ ac->ac_pa = pa;
++#include "ocfs2.h"
+
-+ /* we don't correct pa_pstart or pa_plen here to avoid
-+ * possible race when tte group is being loaded concurrently
-+ * instead we correct pa later, after blocks are marked
-+ * in on-disk bitmap -- see ext4_mb_release_context() */
-+ /*
-+ * FIXME!! but the other CPUs can look at this particular
-+ * pa and think that it have enought free blocks if we
-+ * don't update pa_free here right ?
-+ */
-+ mb_debug("use %u/%u from group pa %p\n", pa->pa_lstart-len, len, pa);
-+}
++#include "dlmglue.h"
++#include "file.h"
++#include "locks.h"
+
-+/*
-+ * search goal blocks in preallocated space
-+ */
-+static int ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
++static int ocfs2_do_flock(struct file *file, struct inode *inode,
++ int cmd, struct file_lock *fl)
+{
-+ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
-+ struct ext4_locality_group *lg;
-+ struct ext4_prealloc_space *pa;
-+ struct list_head *cur;
-+
-+ /* only data can be preallocated */
-+ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
-+ return 0;
-+
-+ /* first, try per-file preallocation */
-+ rcu_read_lock();
-+ list_for_each_rcu(cur, &ei->i_prealloc_list) {
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
++ int ret = 0, level = 0, trylock = 0;
++ struct ocfs2_file_private *fp = file->private_data;
++ struct ocfs2_lock_res *lockres = &fp->fp_flock;
+
-+ /* all fields in this condition don't change,
-+ * so we can skip locking for them */
-+ if (ac->ac_o_ex.fe_logical < pa->pa_lstart ||
-+ ac->ac_o_ex.fe_logical >= pa->pa_lstart + pa->pa_len)
-+ continue;
++ if (fl->fl_type == F_WRLCK)
++ level = 1;
++ if (!IS_SETLKW(cmd))
++ trylock = 1;
+
-+ /* found preallocated blocks, use them */
-+ spin_lock(&pa->pa_lock);
-+ if (pa->pa_deleted == 0 && pa->pa_free) {
-+ atomic_inc(&pa->pa_count);
-+ ext4_mb_use_inode_pa(ac, pa);
-+ spin_unlock(&pa->pa_lock);
-+ ac->ac_criteria = 10;
-+ rcu_read_unlock();
-+ return 1;
-+ }
-+ spin_unlock(&pa->pa_lock);
-+ }
-+ rcu_read_unlock();
++ mutex_lock(&fp->fp_mutex);
+
-+ /* can we use group allocation? */
-+ if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
-+ return 0;
++ if (lockres->l_flags & OCFS2_LOCK_ATTACHED &&
++ lockres->l_level > LKM_NLMODE) {
++ int old_level = 0;
+
-+ /* inode may have no locality group for some reason */
-+ lg = ac->ac_lg;
-+ if (lg == NULL)
-+ return 0;
++ if (lockres->l_level == LKM_EXMODE)
++ old_level = 1;
+
-+ rcu_read_lock();
-+ list_for_each_rcu(cur, &lg->lg_prealloc_list) {
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_inode_list);
-+ spin_lock(&pa->pa_lock);
-+ if (pa->pa_deleted == 0 && pa->pa_free >= ac->ac_o_ex.fe_len) {
-+ atomic_inc(&pa->pa_count);
-+ ext4_mb_use_group_pa(ac, pa);
-+ spin_unlock(&pa->pa_lock);
-+ ac->ac_criteria = 20;
-+ rcu_read_unlock();
-+ return 1;
-+ }
-+ spin_unlock(&pa->pa_lock);
-+ }
-+ rcu_read_unlock();
++ if (level == old_level)
++ goto out;
+
-+ return 0;
-+}
++ /*
++ * Converting an existing lock is not guaranteed to be
++ * atomic, so we can get away with simply unlocking
++ * here and allowing the lock code to try at the new
++ * level.
++ */
+
-+/*
-+ * the function goes through all preallocation in this group and marks them
-+ * used in in-core bitmap. buddy must be generated from this bitmap
-+ * Need to be called with ext4 group lock (ext4_lock_group)
-+ */
-+static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
-+ ext4_group_t group)
-+{
-+ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
-+ struct ext4_prealloc_space *pa;
-+ struct list_head *cur;
-+ ext4_group_t groupnr;
-+ ext4_grpblk_t start;
-+ int preallocated = 0;
-+ int count = 0;
-+ int len;
++ flock_lock_file_wait(file,
++ &(struct file_lock){.fl_type = F_UNLCK});
+
-+ /* all form of preallocation discards first load group,
-+ * so the only competing code is preallocation use.
-+ * we don't need any locking here
-+ * notice we do NOT ignore preallocations with pa_deleted
-+ * otherwise we could leave used blocks available for
-+ * allocation in buddy when concurrent ext4_mb_put_pa()
-+ * is dropping preallocation
-+ */
-+ list_for_each(cur, &grp->bb_prealloc_list) {
-+ pa = list_entry(cur, struct ext4_prealloc_space, pa_group_list);
-+ spin_lock(&pa->pa_lock);
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart,
-+ &groupnr, &start);
-+ len = pa->pa_len;
-+ spin_unlock(&pa->pa_lock);
-+ if (unlikely(len == 0))
-+ continue;
-+ BUG_ON(groupnr != group);
-+ mb_set_bits(sb_bgl_lock(EXT4_SB(sb), group),
-+ bitmap, start, len);
-+ preallocated += len;
-+ count++;
++ ocfs2_file_unlock(file);
+ }
-+ mb_debug("prellocated %u for group %lu\n", preallocated, group);
-+}
-+
-+static void ext4_mb_pa_callback(struct rcu_head *head)
-+{
-+ struct ext4_prealloc_space *pa;
-+ pa = container_of(head, struct ext4_prealloc_space, u.pa_rcu);
-+ kmem_cache_free(ext4_pspace_cachep, pa);
-+}
-+
-+/*
-+ * drops a reference to preallocated space descriptor
-+ * if this was the last reference and the space is consumed
-+ */
-+static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
-+ struct super_block *sb, struct ext4_prealloc_space *pa)
-+{
-+ unsigned long grp;
-+
-+ if (!atomic_dec_and_test(&pa->pa_count) || pa->pa_free != 0)
-+ return;
+
-+ /* in this short window concurrent discard can set pa_deleted */
-+ spin_lock(&pa->pa_lock);
-+ if (pa->pa_deleted == 1) {
-+ spin_unlock(&pa->pa_lock);
-+ return;
++ ret = ocfs2_file_lock(file, level, trylock);
++ if (ret) {
++ if (ret == -EAGAIN && trylock)
++ ret = -EWOULDBLOCK;
++ else
++ mlog_errno(ret);
++ goto out;
+ }
+
-+ pa->pa_deleted = 1;
-+ spin_unlock(&pa->pa_lock);
-+
-+ /* -1 is to protect from crossing allocation group */
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart - 1, &grp, NULL);
-+
-+ /*
-+ * possible race:
-+ *
-+ * P1 (buddy init) P2 (regular allocation)
-+ * find block B in PA
-+ * copy on-disk bitmap to buddy
-+ * mark B in on-disk bitmap
-+ * drop PA from group
-+ * mark all PAs in buddy
-+ *
-+ * thus, P1 initializes buddy with B available. to prevent this
-+ * we make "copy" and "mark all PAs" atomic and serialize "drop PA"
-+ * against that pair
-+ */
-+ ext4_lock_group(sb, grp);
-+ list_del(&pa->pa_group_list);
-+ ext4_unlock_group(sb, grp);
++ ret = flock_lock_file_wait(file, fl);
+
-+ spin_lock(pa->pa_obj_lock);
-+ list_del_rcu(&pa->pa_inode_list);
-+ spin_unlock(pa->pa_obj_lock);
++out:
++ mutex_unlock(&fp->fp_mutex);
+
-+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
++ return ret;
+}
+
-+/*
-+ * creates new preallocated space for given inode
-+ */
-+static int ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
++static int ocfs2_do_funlock(struct file *file, int cmd, struct file_lock *fl)
+{
-+ struct super_block *sb = ac->ac_sb;
-+ struct ext4_prealloc_space *pa;
-+ struct ext4_group_info *grp;
-+ struct ext4_inode_info *ei;
-+
-+ /* preallocate only when found space is larger then requested */
-+ BUG_ON(ac->ac_o_ex.fe_len >= ac->ac_b_ex.fe_len);
-+ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
-+ BUG_ON(!S_ISREG(ac->ac_inode->i_mode));
-+
-+ pa = kmem_cache_alloc(ext4_pspace_cachep, GFP_NOFS);
-+ if (pa == NULL)
-+ return -ENOMEM;
-+
-+ if (ac->ac_b_ex.fe_len < ac->ac_g_ex.fe_len) {
-+ int winl;
-+ int wins;
-+ int win;
-+ int offs;
-+
-+ /* we can't allocate as much as normalizer wants.
-+ * so, found space must get proper lstart
-+ * to cover original request */
-+ BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
-+ BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
-+
-+ /* we're limited by original request in that
-+ * logical block must be covered any way
-+ * winl is window we can move our chunk within */
-+ winl = ac->ac_o_ex.fe_logical - ac->ac_g_ex.fe_logical;
-+
-+ /* also, we should cover whole original request */
-+ wins = ac->ac_b_ex.fe_len - ac->ac_o_ex.fe_len;
-+
-+ /* the smallest one defines real window */
-+ win = min(winl, wins);
-+
-+ offs = ac->ac_o_ex.fe_logical % ac->ac_b_ex.fe_len;
-+ if (offs && offs < win)
-+ win = offs;
-+
-+ ac->ac_b_ex.fe_logical = ac->ac_o_ex.fe_logical - win;
-+ BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
-+ BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
-+ }
-+
-+ /* preallocation can change ac_b_ex, thus we store actually
-+ * allocated blocks for history */
-+ ac->ac_f_ex = ac->ac_b_ex;
-+
-+ pa->pa_lstart = ac->ac_b_ex.fe_logical;
-+ pa->pa_pstart = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
-+ pa->pa_len = ac->ac_b_ex.fe_len;
-+ pa->pa_free = pa->pa_len;
-+ atomic_set(&pa->pa_count, 1);
-+ spin_lock_init(&pa->pa_lock);
-+ pa->pa_deleted = 0;
-+ pa->pa_linear = 0;
-+
-+ mb_debug("new inode pa %p: %llu/%u for %u\n", pa,
-+ pa->pa_pstart, pa->pa_len, pa->pa_lstart);
-+
-+ ext4_mb_use_inode_pa(ac, pa);
-+ atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
-+
-+ ei = EXT4_I(ac->ac_inode);
-+ grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
-+
-+ pa->pa_obj_lock = &ei->i_prealloc_lock;
-+ pa->pa_inode = ac->ac_inode;
-+
-+ ext4_lock_group(sb, ac->ac_b_ex.fe_group);
-+ list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
-+ ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
++ int ret;
++ struct ocfs2_file_private *fp = file->private_data;
+
-+ spin_lock(pa->pa_obj_lock);
-+ list_add_rcu(&pa->pa_inode_list, &ei->i_prealloc_list);
-+ spin_unlock(pa->pa_obj_lock);
++ mutex_lock(&fp->fp_mutex);
++ ocfs2_file_unlock(file);
++ ret = flock_lock_file_wait(file, fl);
++ mutex_unlock(&fp->fp_mutex);
+
-+ return 0;
++ return ret;
+}
+
+/*
-+ * creates new preallocated space for locality group inodes belongs to
++ * Overall flow of ocfs2_flock() was influenced by gfs2_flock().
+ */
-+static int ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
++int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl)
+{
-+ struct super_block *sb = ac->ac_sb;
-+ struct ext4_locality_group *lg;
-+ struct ext4_prealloc_space *pa;
-+ struct ext4_group_info *grp;
-+
-+ /* preallocate only when found space is larger then requested */
-+ BUG_ON(ac->ac_o_ex.fe_len >= ac->ac_b_ex.fe_len);
-+ BUG_ON(ac->ac_status != AC_STATUS_FOUND);
-+ BUG_ON(!S_ISREG(ac->ac_inode->i_mode));
-+
-+ BUG_ON(ext4_pspace_cachep == NULL);
-+ pa = kmem_cache_alloc(ext4_pspace_cachep, GFP_NOFS);
-+ if (pa == NULL)
-+ return -ENOMEM;
-+
-+ /* preallocation can change ac_b_ex, thus we store actually
-+ * allocated blocks for history */
-+ ac->ac_f_ex = ac->ac_b_ex;
-+
-+ pa->pa_pstart = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
-+ pa->pa_lstart = pa->pa_pstart;
-+ pa->pa_len = ac->ac_b_ex.fe_len;
-+ pa->pa_free = pa->pa_len;
-+ atomic_set(&pa->pa_count, 1);
-+ spin_lock_init(&pa->pa_lock);
-+ pa->pa_deleted = 0;
-+ pa->pa_linear = 1;
-+
-+ mb_debug("new group pa %p: %llu/%u for %u\n", pa,
-+ pa->pa_pstart, pa->pa_len, pa->pa_lstart);
-+
-+ ext4_mb_use_group_pa(ac, pa);
-+ atomic_add(pa->pa_free, &EXT4_SB(sb)->s_mb_preallocated);
-+
-+ grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);
-+ lg = ac->ac_lg;
-+ BUG_ON(lg == NULL);
-+
-+ pa->pa_obj_lock = &lg->lg_prealloc_lock;
-+ pa->pa_inode = NULL;
-+
-+ ext4_lock_group(sb, ac->ac_b_ex.fe_group);
-+ list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
-+ ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
-+
-+ spin_lock(pa->pa_obj_lock);
-+ list_add_tail_rcu(&pa->pa_inode_list, &lg->lg_prealloc_list);
-+ spin_unlock(pa->pa_obj_lock);
++ struct inode *inode = file->f_mapping->host;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
-+ return 0;
-+}
++ if (!(fl->fl_flags & FL_FLOCK))
++ return -ENOLCK;
++ if (__mandatory_lock(inode))
++ return -ENOLCK;
+
-+static int ext4_mb_new_preallocation(struct ext4_allocation_context *ac)
-+{
-+ int err;
++ if ((osb->s_mount_opt & OCFS2_MOUNT_LOCALFLOCKS) ||
++ ocfs2_mount_local(osb))
++ return flock_lock_file_wait(file, fl);
+
-+ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
-+ err = ext4_mb_new_group_pa(ac);
++ if (fl->fl_type == F_UNLCK)
++ return ocfs2_do_funlock(file, cmd, fl);
+ else
-+ err = ext4_mb_new_inode_pa(ac);
-+ return err;
-+}
-+
-+/*
-+ * finds all unused blocks in on-disk bitmap, frees them in
-+ * in-core bitmap and buddy.
-+ * @pa must be unlinked from inode and group lists, so that
-+ * nobody else can find/use it.
-+ * the caller MUST hold group/inode locks.
-+ * TODO: optimize the case when there are no in-core structures yet
-+ */
-+static int ext4_mb_release_inode_pa(struct ext4_buddy *e4b,
-+ struct buffer_head *bitmap_bh,
-+ struct ext4_prealloc_space *pa)
-+{
-+ struct ext4_allocation_context ac;
-+ struct super_block *sb = e4b->bd_sb;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ unsigned long end;
-+ unsigned long next;
-+ ext4_group_t group;
-+ ext4_grpblk_t bit;
-+ sector_t start;
-+ int err = 0;
-+ int free = 0;
-+
-+ BUG_ON(pa->pa_deleted == 0);
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
-+ BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
-+ end = bit + pa->pa_len;
-+
-+ ac.ac_sb = sb;
-+ ac.ac_inode = pa->pa_inode;
-+ ac.ac_op = EXT4_MB_HISTORY_DISCARD;
-+
-+ while (bit < end) {
-+ bit = ext4_find_next_zero_bit(bitmap_bh->b_data, end, bit);
-+ if (bit >= end)
-+ break;
-+ next = ext4_find_next_bit(bitmap_bh->b_data, end, bit);
-+ if (next > end)
-+ next = end;
-+ start = group * EXT4_BLOCKS_PER_GROUP(sb) + bit +
-+ le32_to_cpu(sbi->s_es->s_first_data_block);
-+ mb_debug(" free preallocated %u/%u in group %u\n",
-+ (unsigned) start, (unsigned) next - bit,
-+ (unsigned) group);
-+ free += next - bit;
-+
-+ ac.ac_b_ex.fe_group = group;
-+ ac.ac_b_ex.fe_start = bit;
-+ ac.ac_b_ex.fe_len = next - bit;
-+ ac.ac_b_ex.fe_logical = 0;
-+ ext4_mb_store_history(&ac);
-+
-+ mb_free_blocks(pa->pa_inode, e4b, bit, next - bit);
-+ bit = next + 1;
-+ }
-+ if (free != pa->pa_free) {
-+ printk(KERN_ERR "pa %p: logic %lu, phys. %lu, len %lu\n",
-+ pa, (unsigned long) pa->pa_lstart,
-+ (unsigned long) pa->pa_pstart,
-+ (unsigned long) pa->pa_len);
-+ printk(KERN_ERR "free %u, pa_free %u\n", free, pa->pa_free);
-+ }
-+ BUG_ON(free != pa->pa_free);
-+ atomic_add(free, &sbi->s_mb_discarded);
-+
-+ return err;
-+}
-+
-+static int ext4_mb_release_group_pa(struct ext4_buddy *e4b,
-+ struct ext4_prealloc_space *pa)
-+{
-+ struct ext4_allocation_context ac;
-+ struct super_block *sb = e4b->bd_sb;
-+ ext4_group_t group;
-+ ext4_grpblk_t bit;
-+
-+ ac.ac_op = EXT4_MB_HISTORY_DISCARD;
-+
-+ BUG_ON(pa->pa_deleted == 0);
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
-+ BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
-+ mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len);
-+ atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded);
-+
-+ ac.ac_sb = sb;
-+ ac.ac_inode = NULL;
-+ ac.ac_b_ex.fe_group = group;
-+ ac.ac_b_ex.fe_start = bit;
-+ ac.ac_b_ex.fe_len = pa->pa_len;
-+ ac.ac_b_ex.fe_logical = 0;
-+ ext4_mb_store_history(&ac);
-+
-+ return 0;
++ return ocfs2_do_flock(file, inode, cmd, fl);
+}
-+
-+/*
-+ * releases all preallocations in given group
+diff --git a/fs/ocfs2/locks.h b/fs/ocfs2/locks.h
+new file mode 100644
+index 0000000..9743ef2
+--- /dev/null
++++ b/fs/ocfs2/locks.h
+@@ -0,0 +1,31 @@
++/* -*- mode: c; c-basic-offset: 8; -*-
++ * vim: noexpandtab sw=8 ts=8 sts=0:
+ *
-+ * first, we need to decide discard policy:
-+ * - when do we discard
-+ * 1) ENOSPC
-+ * - how many do we discard
-+ * 1) how many requested
-+ */
-+static int ext4_mb_discard_group_preallocations(struct super_block *sb,
-+ ext4_group_t group, int needed)
-+{
-+ struct ext4_group_info *grp = ext4_get_group_info(sb, group);
-+ struct buffer_head *bitmap_bh = NULL;
-+ struct ext4_prealloc_space *pa, *tmp;
-+ struct list_head list;
-+ struct ext4_buddy e4b;
-+ int err;
-+ int busy = 0;
-+ int free = 0;
-+
-+ mb_debug("discard preallocation for group %lu\n", group);
-+
-+ if (list_empty(&grp->bb_prealloc_list))
-+ return 0;
-+
-+ bitmap_bh = read_block_bitmap(sb, group);
-+ if (bitmap_bh == NULL) {
-+ /* error handling here */
-+ ext4_mb_release_desc(&e4b);
-+ BUG_ON(bitmap_bh == NULL);
-+ }
-+
-+ err = ext4_mb_load_buddy(sb, group, &e4b);
-+ BUG_ON(err != 0); /* error handling here */
-+
-+ if (needed == 0)
-+ needed = EXT4_BLOCKS_PER_GROUP(sb) + 1;
-+
-+ grp = ext4_get_group_info(sb, group);
-+ INIT_LIST_HEAD(&list);
-+
-+repeat:
-+ ext4_lock_group(sb, group);
-+ list_for_each_entry_safe(pa, tmp,
-+ &grp->bb_prealloc_list, pa_group_list) {
-+ spin_lock(&pa->pa_lock);
-+ if (atomic_read(&pa->pa_count)) {
-+ spin_unlock(&pa->pa_lock);
-+ busy = 1;
-+ continue;
-+ }
-+ if (pa->pa_deleted) {
-+ spin_unlock(&pa->pa_lock);
-+ continue;
-+ }
-+
-+ /* seems this one can be freed ... */
-+ pa->pa_deleted = 1;
-+
-+ /* we can trust pa_free ... */
-+ free += pa->pa_free;
-+
-+ spin_unlock(&pa->pa_lock);
-+
-+ list_del(&pa->pa_group_list);
-+ list_add(&pa->u.pa_tmp_list, &list);
-+ }
-+
-+ /* if we still need more blocks and some PAs were used, try again */
-+ if (free < needed && busy) {
-+ busy = 0;
-+ ext4_unlock_group(sb, group);
-+ /*
-+ * Yield the CPU here so that we don't get soft lockup
-+ * in non preempt case.
-+ */
-+ yield();
-+ goto repeat;
-+ }
-+
-+ /* found anything to free? */
-+ if (list_empty(&list)) {
-+ BUG_ON(free != 0);
-+ goto out;
-+ }
-+
-+ /* now free all selected PAs */
-+ list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
-+
-+ /* remove from object (inode or locality group) */
-+ spin_lock(pa->pa_obj_lock);
-+ list_del_rcu(&pa->pa_inode_list);
-+ spin_unlock(pa->pa_obj_lock);
-+
-+ if (pa->pa_linear)
-+ ext4_mb_release_group_pa(&e4b, pa);
-+ else
-+ ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
-+
-+ list_del(&pa->u.pa_tmp_list);
-+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
-+ }
-+
-+out:
-+ ext4_unlock_group(sb, group);
-+ ext4_mb_release_desc(&e4b);
-+ put_bh(bitmap_bh);
-+ return free;
-+}
-+
-+/*
-+ * releases all non-used preallocated blocks for given inode
++ * locks.h
+ *
-+ * It's important to discard preallocations under i_data_sem
-+ * We don't want another block to be served from the prealloc
-+ * space when we are discarding the inode prealloc space.
++ * Function prototypes for Userspace file locking support
+ *
-+ * FIXME!! Make sure it is valid at all the call sites
-+ */
-+void ext4_mb_discard_inode_preallocations(struct inode *inode)
-+{
-+ struct ext4_inode_info *ei = EXT4_I(inode);
-+ struct super_block *sb = inode->i_sb;
-+ struct buffer_head *bitmap_bh = NULL;
-+ struct ext4_prealloc_space *pa, *tmp;
-+ ext4_group_t group = 0;
-+ struct list_head list;
-+ struct ext4_buddy e4b;
-+ int err;
-+
-+ if (!test_opt(sb, MBALLOC) || !S_ISREG(inode->i_mode)) {
-+ /*BUG_ON(!list_empty(&ei->i_prealloc_list));*/
-+ return;
-+ }
-+
-+ mb_debug("discard preallocation for inode %lu\n", inode->i_ino);
-+
-+ INIT_LIST_HEAD(&list);
-+
-+repeat:
-+ /* first, collect all pa's in the inode */
-+ spin_lock(&ei->i_prealloc_lock);
-+ while (!list_empty(&ei->i_prealloc_list)) {
-+ pa = list_entry(ei->i_prealloc_list.next,
-+ struct ext4_prealloc_space, pa_inode_list);
-+ BUG_ON(pa->pa_obj_lock != &ei->i_prealloc_lock);
-+ spin_lock(&pa->pa_lock);
-+ if (atomic_read(&pa->pa_count)) {
-+ /* this shouldn't happen often - nobody should
-+ * use preallocation while we're discarding it */
-+ spin_unlock(&pa->pa_lock);
-+ spin_unlock(&ei->i_prealloc_lock);
-+ printk(KERN_ERR "uh-oh! used pa while discarding\n");
-+ WARN_ON(1);
-+ schedule_timeout_uninterruptible(HZ);
-+ goto repeat;
-+
-+ }
-+ if (pa->pa_deleted == 0) {
-+ pa->pa_deleted = 1;
-+ spin_unlock(&pa->pa_lock);
-+ list_del_rcu(&pa->pa_inode_list);
-+ list_add(&pa->u.pa_tmp_list, &list);
-+ continue;
-+ }
-+
-+ /* someone is deleting pa right now */
-+ spin_unlock(&pa->pa_lock);
-+ spin_unlock(&ei->i_prealloc_lock);
-+
-+ /* we have to wait here because pa_deleted
-+ * doesn't mean pa is already unlinked from
-+ * the list. as we might be called from
-+ * ->clear_inode() the inode will get freed
-+ * and concurrent thread which is unlinking
-+ * pa from inode's list may access already
-+ * freed memory, bad-bad-bad */
-+
-+ /* XXX: if this happens too often, we can
-+ * add a flag to force wait only in case
-+ * of ->clear_inode(), but not in case of
-+ * regular truncate */
-+ schedule_timeout_uninterruptible(HZ);
-+ goto repeat;
-+ }
-+ spin_unlock(&ei->i_prealloc_lock);
-+
-+ list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
-+ BUG_ON(pa->pa_linear != 0);
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, NULL);
-+
-+ err = ext4_mb_load_buddy(sb, group, &e4b);
-+ BUG_ON(err != 0); /* error handling here */
-+
-+ bitmap_bh = read_block_bitmap(sb, group);
-+ if (bitmap_bh == NULL) {
-+ /* error handling here */
-+ ext4_mb_release_desc(&e4b);
-+ BUG_ON(bitmap_bh == NULL);
-+ }
-+
-+ ext4_lock_group(sb, group);
-+ list_del(&pa->pa_group_list);
-+ ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
-+ ext4_unlock_group(sb, group);
-+
-+ ext4_mb_release_desc(&e4b);
-+ put_bh(bitmap_bh);
-+
-+ list_del(&pa->u.pa_tmp_list);
-+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
-+ }
-+}
-+
-+/*
-+ * finds all preallocated spaces and return blocks being freed to them
-+ * if preallocated space becomes full (no block is used from the space)
-+ * then the function frees space in buddy
-+ * XXX: at the moment, truncate (which is the only way to free blocks)
-+ * discards all preallocations
-+ */
-+static void ext4_mb_return_to_preallocation(struct inode *inode,
-+ struct ext4_buddy *e4b,
-+ sector_t block, int count)
-+{
-+ BUG_ON(!list_empty(&EXT4_I(inode)->i_prealloc_list));
-+}
-+#ifdef MB_DEBUG
-+static void ext4_mb_show_ac(struct ext4_allocation_context *ac)
-+{
-+ struct super_block *sb = ac->ac_sb;
-+ ext4_group_t i;
-+
-+ printk(KERN_ERR "EXT4-fs: Can't allocate:"
-+ " Allocation context details:\n");
-+ printk(KERN_ERR "EXT4-fs: status %d flags %d\n",
-+ ac->ac_status, ac->ac_flags);
-+ printk(KERN_ERR "EXT4-fs: orig %lu/%lu/%lu@%lu, goal %lu/%lu/%lu@%lu, "
-+ "best %lu/%lu/%lu@%lu cr %d\n",
-+ (unsigned long)ac->ac_o_ex.fe_group,
-+ (unsigned long)ac->ac_o_ex.fe_start,
-+ (unsigned long)ac->ac_o_ex.fe_len,
-+ (unsigned long)ac->ac_o_ex.fe_logical,
-+ (unsigned long)ac->ac_g_ex.fe_group,
-+ (unsigned long)ac->ac_g_ex.fe_start,
-+ (unsigned long)ac->ac_g_ex.fe_len,
-+ (unsigned long)ac->ac_g_ex.fe_logical,
-+ (unsigned long)ac->ac_b_ex.fe_group,
-+ (unsigned long)ac->ac_b_ex.fe_start,
-+ (unsigned long)ac->ac_b_ex.fe_len,
-+ (unsigned long)ac->ac_b_ex.fe_logical,
-+ (int)ac->ac_criteria);
-+ printk(KERN_ERR "EXT4-fs: %lu scanned, %d found\n", ac->ac_ex_scanned,
-+ ac->ac_found);
-+ printk(KERN_ERR "EXT4-fs: groups: \n");
-+ for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
-+ struct ext4_group_info *grp = ext4_get_group_info(sb, i);
-+ struct ext4_prealloc_space *pa;
-+ ext4_grpblk_t start;
-+ struct list_head *cur;
-+ ext4_lock_group(sb, i);
-+ list_for_each(cur, &grp->bb_prealloc_list) {
-+ pa = list_entry(cur, struct ext4_prealloc_space,
-+ pa_group_list);
-+ spin_lock(&pa->pa_lock);
-+ ext4_get_group_no_and_offset(sb, pa->pa_pstart,
-+ NULL, &start);
-+ spin_unlock(&pa->pa_lock);
-+ printk(KERN_ERR "PA:%lu:%d:%u \n", i,
-+ start, pa->pa_len);
-+ }
-+ ext4_lock_group(sb, i);
-+
-+ if (grp->bb_free == 0)
-+ continue;
-+ printk(KERN_ERR "%lu: %d/%d \n",
-+ i, grp->bb_free, grp->bb_fragments);
-+ }
-+ printk(KERN_ERR "\n");
-+}
-+#else
-+static inline void ext4_mb_show_ac(struct ext4_allocation_context *ac)
-+{
-+ return;
-+}
-+#endif
-+
-+/*
-+ * We use locality group preallocation for small size file. The size of the
-+ * file is determined by the current size or the resulting size after
-+ * allocation which ever is larger
++ * Copyright (C) 2002, 2004 Oracle. All rights reserved.
+ *
-+ * One can tune this size via /proc/fs/ext4/<partition>/stream_req
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public
++ * License as published by the Free Software Foundation; either
++ * version 2 of the License, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public
++ * License along with this program; if not, write to the
++ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
++ * Boston, MA 021110-1307, USA.
+ */
-+static void ext4_mb_group_or_file(struct ext4_allocation_context *ac)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-+ int bsbits = ac->ac_sb->s_blocksize_bits;
-+ loff_t size, isize;
-+
-+ if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
-+ return;
-+
-+ size = ac->ac_o_ex.fe_logical + ac->ac_o_ex.fe_len;
-+ isize = i_size_read(ac->ac_inode) >> bsbits;
-+ size = max(size, isize);
+
-+ /* don't use group allocation for large files */
-+ if (size >= sbi->s_mb_stream_request)
-+ return;
++#ifndef OCFS2_LOCKS_H
++#define OCFS2_LOCKS_H
+
-+ if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
-+ return;
++int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl);
+
-+ BUG_ON(ac->ac_lg != NULL);
++#endif /* OCFS2_LOCKS_H */
+diff --git a/fs/ocfs2/mmap.c b/fs/ocfs2/mmap.c
+index 9875615..3dc18d6 100644
+--- a/fs/ocfs2/mmap.c
++++ b/fs/ocfs2/mmap.c
+@@ -168,7 +168,7 @@ static int ocfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
+ * node. Taking the data lock will also ensure that we don't
+ * attempt page truncation as part of a downconvert.
+ */
+- ret = ocfs2_meta_lock(inode, &di_bh, 1);
++ ret = ocfs2_inode_lock(inode, &di_bh, 1);
+ if (ret < 0) {
+ mlog_errno(ret);
+ goto out;
+@@ -181,21 +181,12 @@ static int ocfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
+ */
+ down_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+- ret = ocfs2_data_lock(inode, 1);
+- if (ret < 0) {
+- mlog_errno(ret);
+- goto out_meta_unlock;
+- }
+-
+ ret = __ocfs2_page_mkwrite(inode, di_bh, page);
+
+- ocfs2_data_unlock(inode, 1);
+-
+-out_meta_unlock:
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
+
+ brelse(di_bh);
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ out:
+ ret2 = ocfs2_vm_op_unblock_sigs(&oldset);
+@@ -214,13 +205,13 @@ int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ int ret = 0, lock_level = 0;
+
+- ret = ocfs2_meta_lock_atime(file->f_dentry->d_inode,
++ ret = ocfs2_inode_lock_atime(file->f_dentry->d_inode,
+ file->f_vfsmnt, &lock_level);
+ if (ret < 0) {
+ mlog_errno(ret);
+ goto out;
+ }
+- ocfs2_meta_unlock(file->f_dentry->d_inode, lock_level);
++ ocfs2_inode_unlock(file->f_dentry->d_inode, lock_level);
+ out:
+ vma->vm_ops = &ocfs2_file_vm_ops;
+ vma->vm_flags |= VM_CAN_NONLINEAR;
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 989ac27..ae9ad95 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -60,7 +60,6 @@
+ #include "symlink.h"
+ #include "sysfile.h"
+ #include "uptodate.h"
+-#include "vote.h"
+
+ #include "buffer_head_io.h"
+
+@@ -116,7 +115,7 @@ static struct dentry *ocfs2_lookup(struct inode *dir, struct dentry *dentry,
+ mlog(0, "find name %.*s in directory %llu\n", dentry->d_name.len,
+ dentry->d_name.name, (unsigned long long)OCFS2_I(dir)->ip_blkno);
+
+- status = ocfs2_meta_lock(dir, NULL, 0);
++ status = ocfs2_inode_lock(dir, NULL, 0);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -129,7 +128,7 @@ static struct dentry *ocfs2_lookup(struct inode *dir, struct dentry *dentry,
+ if (status < 0)
+ goto bail_add;
+
+- inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0);
++ inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0);
+ if (IS_ERR(inode)) {
+ ret = ERR_PTR(-EACCES);
+ goto bail_unlock;
+@@ -176,8 +175,8 @@ bail_unlock:
+ /* Don't drop the cluster lock until *after* the d_add --
+ * unlink on another node will message us to remove that
+ * dentry under this lock so otherwise we can race this with
+- * the vote thread and have a stale dentry. */
+- ocfs2_meta_unlock(dir, 0);
++ * the downconvert thread and have a stale dentry. */
++ ocfs2_inode_unlock(dir, 0);
+
+ bail:
+
+@@ -209,7 +208,7 @@ static int ocfs2_mknod(struct inode *dir,
+ /* get our super block */
+ osb = OCFS2_SB(dir->i_sb);
+
+- status = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
++ status = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -323,7 +322,7 @@ leave:
+ if (handle)
+ ocfs2_commit_trans(osb, handle);
+
+- ocfs2_meta_unlock(dir, 1);
++ ocfs2_inode_unlock(dir, 1);
+
+ if (status == -ENOSPC)
+ mlog(0, "Disk is full\n");
+@@ -553,7 +552,7 @@ static int ocfs2_link(struct dentry *old_dentry,
+ if (S_ISDIR(inode->i_mode))
+ return -EPERM;
+
+- err = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
++ err = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
+ if (err < 0) {
+ if (err != -ENOENT)
+ mlog_errno(err);
+@@ -578,7 +577,7 @@ static int ocfs2_link(struct dentry *old_dentry,
+ goto out;
+ }
+
+- err = ocfs2_meta_lock(inode, &fe_bh, 1);
++ err = ocfs2_inode_lock(inode, &fe_bh, 1);
+ if (err < 0) {
+ if (err != -ENOENT)
+ mlog_errno(err);
+@@ -643,10 +642,10 @@ static int ocfs2_link(struct dentry *old_dentry,
+ out_commit:
+ ocfs2_commit_trans(osb, handle);
+ out_unlock_inode:
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ out:
+- ocfs2_meta_unlock(dir, 1);
++ ocfs2_inode_unlock(dir, 1);
+
+ if (de_bh)
+ brelse(de_bh);
+@@ -720,7 +719,7 @@ static int ocfs2_unlink(struct inode *dir,
+ return -EPERM;
+ }
+
+- status = ocfs2_meta_lock(dir, &parent_node_bh, 1);
++ status = ocfs2_inode_lock(dir, &parent_node_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -745,7 +744,7 @@ static int ocfs2_unlink(struct inode *dir,
+ goto leave;
+ }
+
+- status = ocfs2_meta_lock(inode, &fe_bh, 1);
++ status = ocfs2_inode_lock(inode, &fe_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -765,7 +764,7 @@ static int ocfs2_unlink(struct inode *dir,
+
+ status = ocfs2_remote_dentry_delete(dentry);
+ if (status < 0) {
+- /* This vote should succeed under all normal
++ /* This remote delete should succeed under all normal
+ * circumstances. */
+ mlog_errno(status);
+ goto leave;
+@@ -841,13 +840,13 @@ leave:
+ ocfs2_commit_trans(osb, handle);
+
+ if (child_locked)
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+- ocfs2_meta_unlock(dir, 1);
++ ocfs2_inode_unlock(dir, 1);
+
+ if (orphan_dir) {
+ /* This was locked for us in ocfs2_prepare_orphan_dir() */
+- ocfs2_meta_unlock(orphan_dir, 1);
++ ocfs2_inode_unlock(orphan_dir, 1);
+ mutex_unlock(&orphan_dir->i_mutex);
+ iput(orphan_dir);
+ }
+@@ -908,7 +907,7 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
+ inode1 = tmpinode;
+ }
+ /* lock id2 */
+- status = ocfs2_meta_lock(inode2, bh2, 1);
++ status = ocfs2_inode_lock(inode2, bh2, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -917,14 +916,14 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
+ }
+
+ /* lock id1 */
+- status = ocfs2_meta_lock(inode1, bh1, 1);
++ status = ocfs2_inode_lock(inode1, bh1, 1);
+ if (status < 0) {
+ /*
+ * An error return must mean that no cluster locks
+ * were held on function exit.
+ */
+ if (oi1->ip_blkno != oi2->ip_blkno)
+- ocfs2_meta_unlock(inode2, 1);
++ ocfs2_inode_unlock(inode2, 1);
+
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -937,10 +936,10 @@ bail:
+
+ static void ocfs2_double_unlock(struct inode *inode1, struct inode *inode2)
+ {
+- ocfs2_meta_unlock(inode1, 1);
++ ocfs2_inode_unlock(inode1, 1);
+
+ if (inode1 != inode2)
+- ocfs2_meta_unlock(inode2, 1);
++ ocfs2_inode_unlock(inode2, 1);
+ }
+
+ static int ocfs2_rename(struct inode *old_dir,
+@@ -1031,10 +1030,11 @@ static int ocfs2_rename(struct inode *old_dir,
+
+ /*
+ * Aside from allowing a meta data update, the locking here
+- * also ensures that the vote thread on other nodes won't have
+- * to concurrently downconvert the inode and the dentry locks.
++ * also ensures that the downconvert thread on other nodes
++ * won't have to concurrently downconvert the inode and the
++ * dentry locks.
+ */
+- status = ocfs2_meta_lock(old_inode, &old_inode_bh, 1);
++ status = ocfs2_inode_lock(old_inode, &old_inode_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1143,7 +1143,7 @@ static int ocfs2_rename(struct inode *old_dir,
+ goto bail;
+ }
+
+- status = ocfs2_meta_lock(new_inode, &newfe_bh, 1);
++ status = ocfs2_inode_lock(new_inode, &newfe_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1355,14 +1355,14 @@ bail:
+ ocfs2_double_unlock(old_dir, new_dir);
+
+ if (old_child_locked)
+- ocfs2_meta_unlock(old_inode, 1);
++ ocfs2_inode_unlock(old_inode, 1);
+
+ if (new_child_locked)
+- ocfs2_meta_unlock(new_inode, 1);
++ ocfs2_inode_unlock(new_inode, 1);
+
+ if (orphan_dir) {
+ /* This was locked for us in ocfs2_prepare_orphan_dir() */
+- ocfs2_meta_unlock(orphan_dir, 1);
++ ocfs2_inode_unlock(orphan_dir, 1);
+ mutex_unlock(&orphan_dir->i_mutex);
+ iput(orphan_dir);
+ }
+@@ -1530,7 +1530,7 @@ static int ocfs2_symlink(struct inode *dir,
+ credits = ocfs2_calc_symlink_credits(sb);
+
+ /* lock the parent directory */
+- status = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
++ status = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1657,7 +1657,7 @@ bail:
+ if (handle)
+ ocfs2_commit_trans(osb, handle);
+
+- ocfs2_meta_unlock(dir, 1);
++ ocfs2_inode_unlock(dir, 1);
+
+ if (new_fe_bh)
+ brelse(new_fe_bh);
+@@ -1735,7 +1735,7 @@ static int ocfs2_prepare_orphan_dir(struct ocfs2_super *osb,
+
+ mutex_lock(&orphan_dir_inode->i_mutex);
+
+- status = ocfs2_meta_lock(orphan_dir_inode, &orphan_dir_bh, 1);
++ status = ocfs2_inode_lock(orphan_dir_inode, &orphan_dir_bh, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto leave;
+@@ -1745,7 +1745,7 @@ static int ocfs2_prepare_orphan_dir(struct ocfs2_super *osb,
+ orphan_dir_bh, name,
+ OCFS2_ORPHAN_NAMELEN, de_bh);
+ if (status < 0) {
+- ocfs2_meta_unlock(orphan_dir_inode, 1);
++ ocfs2_inode_unlock(orphan_dir_inode, 1);
+
+ mlog_errno(status);
+ goto leave;
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 60a23e1..d084805 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -101,6 +101,7 @@ enum ocfs2_unlock_action {
+ * about to be
+ * dropped. */
+ #define OCFS2_LOCK_QUEUED (0x00000100) /* queued for downconvert */
++#define OCFS2_LOCK_NOCACHE (0x00000200) /* don't use a holder count */
+
+ struct ocfs2_lock_res_ops;
+
+@@ -170,6 +171,7 @@ enum ocfs2_mount_options
+ OCFS2_MOUNT_NOINTR = 1 << 2, /* Don't catch signals */
+ OCFS2_MOUNT_ERRORS_PANIC = 1 << 3, /* Panic on errors */
+ OCFS2_MOUNT_DATA_WRITEBACK = 1 << 4, /* No data ordering */
++ OCFS2_MOUNT_LOCALFLOCKS = 1 << 5, /* No cluster aware user file locks */
+ };
+
+ #define OCFS2_OSB_SOFT_RO 0x0001
+@@ -189,9 +191,7 @@ struct ocfs2_super
+ struct ocfs2_slot_info *slot_info;
+
+ spinlock_t node_map_lock;
+- struct ocfs2_node_map mounted_map;
+ struct ocfs2_node_map recovery_map;
+- struct ocfs2_node_map umount_map;
+
+ u64 root_blkno;
+ u64 system_dir_blkno;
+@@ -231,7 +231,9 @@ struct ocfs2_super
+ wait_queue_head_t checkpoint_event;
+ atomic_t needs_checkpoint;
+ struct ocfs2_journal *journal;
++ unsigned long osb_commit_interval;
+
++ int local_alloc_size;
+ enum ocfs2_local_alloc_state local_alloc_state;
+ struct buffer_head *local_alloc_bh;
+ u64 la_last_gd;
+@@ -254,28 +256,21 @@ struct ocfs2_super
+
+ wait_queue_head_t recovery_event;
+
+- spinlock_t vote_task_lock;
+- struct task_struct *vote_task;
+- wait_queue_head_t vote_event;
+- unsigned long vote_wake_sequence;
+- unsigned long vote_work_sequence;
++ spinlock_t dc_task_lock;
++ struct task_struct *dc_task;
++ wait_queue_head_t dc_event;
++ unsigned long dc_wake_sequence;
++ unsigned long dc_work_sequence;
+
+ /*
-+ * locality group prealloc space are per cpu. The reason for having
-+ * per cpu locality group is to reduce the contention between block
-+ * request from multiple CPUs.
++ * Any thread can add locks to the list, but the downconvert
++ * thread is the only one allowed to remove locks. Any change
++ * to this rule requires updating
++ * ocfs2_downconvert_thread_do_work().
+ */
-+ ac->ac_lg = &sbi->s_locality_groups[get_cpu()];
-+ put_cpu();
-+
-+ /* we're going to use group allocation */
-+ ac->ac_flags |= EXT4_MB_HINT_GROUP_ALLOC;
-+
-+ /* serialize all allocations in the group */
-+ mutex_lock(&ac->ac_lg->lg_mutex);
-+}
+ struct list_head blocked_lock_list;
+ unsigned long blocked_lock_count;
+
+- struct list_head vote_list;
+- int vote_count;
+-
+- u32 net_key;
+- spinlock_t net_response_lock;
+- unsigned int net_response_ids;
+- struct list_head net_response_list;
+-
+- struct o2hb_callback_func osb_hb_up;
+- struct o2hb_callback_func osb_hb_down;
+-
+- struct list_head osb_net_handlers;
+-
+ wait_queue_head_t osb_mount_event;
+
+ /* Truncate log info */
+diff --git a/fs/ocfs2/ocfs2_fs.h b/fs/ocfs2/ocfs2_fs.h
+index 6ef8767..3633edd 100644
+--- a/fs/ocfs2/ocfs2_fs.h
++++ b/fs/ocfs2/ocfs2_fs.h
+@@ -231,6 +231,20 @@ struct ocfs2_space_resv {
+ #define OCFS2_IOC_RESVSP64 _IOW ('X', 42, struct ocfs2_space_resv)
+ #define OCFS2_IOC_UNRESVSP64 _IOW ('X', 43, struct ocfs2_space_resv)
+
++/* Used to pass group descriptor data when online resize is done */
++struct ocfs2_new_group_input {
++ __u64 group; /* Group descriptor's blkno. */
++ __u32 clusters; /* Total number of clusters in this group */
++ __u32 frees; /* Total free clusters in this group */
++ __u16 chain; /* Chain for this group */
++ __u16 reserved1;
++ __u32 reserved2;
++};
+
-+static int ext4_mb_initialize_context(struct ext4_allocation_context *ac,
-+ struct ext4_allocation_request *ar)
-+{
-+ struct super_block *sb = ar->inode->i_sb;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ struct ext4_super_block *es = sbi->s_es;
-+ ext4_group_t group;
-+ unsigned long len;
-+ unsigned long goal;
-+ ext4_grpblk_t block;
++#define OCFS2_IOC_GROUP_EXTEND _IOW('o', 1, int)
++#define OCFS2_IOC_GROUP_ADD _IOW('o', 2,struct ocfs2_new_group_input)
++#define OCFS2_IOC_GROUP_ADD64 _IOW('o', 3,struct ocfs2_new_group_input)
+
-+ /* we can't allocate > group size */
-+ len = ar->len;
+ /*
+ * Journal Flags (ocfs2_dinode.id1.journal1.i_flags)
+ */
+@@ -256,6 +270,14 @@ struct ocfs2_space_resv {
+ /* Journal limits (in bytes) */
+ #define OCFS2_MIN_JOURNAL_SIZE (4 * 1024 * 1024)
+
++/*
++ * Default local alloc size (in megabytes)
++ *
++ * The value chosen should be such that most allocations, including new
++ * block groups, use local alloc.
++ */
++#define OCFS2_DEFAULT_LOCAL_ALLOC_SIZE 8
+
-+ /* just a dirty hack to filter too big requests */
-+ if (len >= EXT4_BLOCKS_PER_GROUP(sb) - 10)
-+ len = EXT4_BLOCKS_PER_GROUP(sb) - 10;
+ struct ocfs2_system_inode_info {
+ char *si_name;
+ int si_iflags;
+diff --git a/fs/ocfs2/ocfs2_lockid.h b/fs/ocfs2/ocfs2_lockid.h
+index 4ca02b1..86f3e37 100644
+--- a/fs/ocfs2/ocfs2_lockid.h
++++ b/fs/ocfs2/ocfs2_lockid.h
+@@ -45,6 +45,7 @@ enum ocfs2_lock_type {
+ OCFS2_LOCK_TYPE_RW,
+ OCFS2_LOCK_TYPE_DENTRY,
+ OCFS2_LOCK_TYPE_OPEN,
++ OCFS2_LOCK_TYPE_FLOCK,
+ OCFS2_NUM_LOCK_TYPES
+ };
+
+@@ -73,6 +74,9 @@ static inline char ocfs2_lock_type_char(enum ocfs2_lock_type type)
+ case OCFS2_LOCK_TYPE_OPEN:
+ c = 'O';
+ break;
++ case OCFS2_LOCK_TYPE_FLOCK:
++ c = 'F';
++ break;
+ default:
+ c = '\0';
+ }
+@@ -90,6 +94,7 @@ static char *ocfs2_lock_type_strings[] = {
+ [OCFS2_LOCK_TYPE_RW] = "Write/Read",
+ [OCFS2_LOCK_TYPE_DENTRY] = "Dentry",
+ [OCFS2_LOCK_TYPE_OPEN] = "Open",
++ [OCFS2_LOCK_TYPE_FLOCK] = "Flock",
+ };
+
+ static inline const char *ocfs2_lock_type_string(enum ocfs2_lock_type type)
+diff --git a/fs/ocfs2/resize.c b/fs/ocfs2/resize.c
+new file mode 100644
+index 0000000..37835ff
+--- /dev/null
++++ b/fs/ocfs2/resize.c
+@@ -0,0 +1,634 @@
++/* -*- mode: c; c-basic-offset: 8; -*-
++ * vim: noexpandtab sw=8 ts=8 sts=0:
++ *
++ * resize.c
++ *
++ * volume resize.
++ * Inspired by ext3/resize.c.
++ *
++ * Copyright (C) 2007 Oracle. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public
++ * License as published by the Free Software Foundation; either
++ * version 2 of the License, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public
++ * License along with this program; if not, write to the
++ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
++ * Boston, MA 021110-1307, USA.
++ */
+
-+ /* start searching from the goal */
-+ goal = ar->goal;
-+ if (goal < le32_to_cpu(es->s_first_data_block) ||
-+ goal >= ext4_blocks_count(es))
-+ goal = le32_to_cpu(es->s_first_data_block);
-+ ext4_get_group_no_and_offset(sb, goal, &group, &block);
++#include <linux/fs.h>
++#include <linux/types.h>
+
-+ /* set up allocation goals */
-+ ac->ac_b_ex.fe_logical = ar->logical;
-+ ac->ac_b_ex.fe_group = 0;
-+ ac->ac_b_ex.fe_start = 0;
-+ ac->ac_b_ex.fe_len = 0;
-+ ac->ac_status = AC_STATUS_CONTINUE;
-+ ac->ac_groups_scanned = 0;
-+ ac->ac_ex_scanned = 0;
-+ ac->ac_found = 0;
-+ ac->ac_sb = sb;
-+ ac->ac_inode = ar->inode;
-+ ac->ac_o_ex.fe_logical = ar->logical;
-+ ac->ac_o_ex.fe_group = group;
-+ ac->ac_o_ex.fe_start = block;
-+ ac->ac_o_ex.fe_len = len;
-+ ac->ac_g_ex.fe_logical = ar->logical;
-+ ac->ac_g_ex.fe_group = group;
-+ ac->ac_g_ex.fe_start = block;
-+ ac->ac_g_ex.fe_len = len;
-+ ac->ac_f_ex.fe_len = 0;
-+ ac->ac_flags = ar->flags;
-+ ac->ac_2order = 0;
-+ ac->ac_criteria = 0;
-+ ac->ac_pa = NULL;
-+ ac->ac_bitmap_page = NULL;
-+ ac->ac_buddy_page = NULL;
-+ ac->ac_lg = NULL;
++#define MLOG_MASK_PREFIX ML_DISK_ALLOC
++#include <cluster/masklog.h>
+
-+ /* we have to define context: we'll we work with a file or
-+ * locality group. this is a policy, actually */
-+ ext4_mb_group_or_file(ac);
++#include "ocfs2.h"
+
-+ mb_debug("init ac: %u blocks @ %u, goal %u, flags %x, 2^%d, "
-+ "left: %u/%u, right %u/%u to %swritable\n",
-+ (unsigned) ar->len, (unsigned) ar->logical,
-+ (unsigned) ar->goal, ac->ac_flags, ac->ac_2order,
-+ (unsigned) ar->lleft, (unsigned) ar->pleft,
-+ (unsigned) ar->lright, (unsigned) ar->pright,
-+ atomic_read(&ar->inode->i_writecount) ? "" : "non-");
-+ return 0;
++#include "alloc.h"
++#include "dlmglue.h"
++#include "inode.h"
++#include "journal.h"
++#include "super.h"
++#include "sysfile.h"
++#include "uptodate.h"
+
-+}
++#include "buffer_head_io.h"
++#include "suballoc.h"
++#include "resize.h"
+
+/*
-+ * release all resource we used in allocation
++ * Check whether there are new backup superblocks exist
++ * in the last group. If there are some, mark them or clear
++ * them in the bitmap.
++ *
++ * Return how many backups we find in the last group.
+ */
-+static int ext4_mb_release_context(struct ext4_allocation_context *ac)
++static u16 ocfs2_calc_new_backup_super(struct inode *inode,
++ struct ocfs2_group_desc *gd,
++ int new_clusters,
++ u32 first_new_cluster,
++ u16 cl_cpg,
++ int set)
+{
-+ if (ac->ac_pa) {
-+ if (ac->ac_pa->pa_linear) {
-+ /* see comment in ext4_mb_use_group_pa() */
-+ spin_lock(&ac->ac_pa->pa_lock);
-+ ac->ac_pa->pa_pstart += ac->ac_b_ex.fe_len;
-+ ac->ac_pa->pa_lstart += ac->ac_b_ex.fe_len;
-+ ac->ac_pa->pa_free -= ac->ac_b_ex.fe_len;
-+ ac->ac_pa->pa_len -= ac->ac_b_ex.fe_len;
-+ spin_unlock(&ac->ac_pa->pa_lock);
-+ }
-+ ext4_mb_put_pa(ac, ac->ac_sb, ac->ac_pa);
-+ }
-+ if (ac->ac_bitmap_page)
-+ page_cache_release(ac->ac_bitmap_page);
-+ if (ac->ac_buddy_page)
-+ page_cache_release(ac->ac_buddy_page);
-+ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
-+ mutex_unlock(&ac->ac_lg->lg_mutex);
-+ ext4_mb_collect_stats(ac);
-+ return 0;
-+}
++ int i;
++ u16 backups = 0;
++ u32 cluster;
++ u64 blkno, gd_blkno, lgd_blkno = le64_to_cpu(gd->bg_blkno);
+
-+static int ext4_mb_discard_preallocations(struct super_block *sb, int needed)
-+{
-+ ext4_group_t i;
-+ int ret;
-+ int freed = 0;
++ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
++ blkno = ocfs2_backup_super_blkno(inode->i_sb, i);
++ cluster = ocfs2_blocks_to_clusters(inode->i_sb, blkno);
+
-+ for (i = 0; i < EXT4_SB(sb)->s_groups_count && needed > 0; i++) {
-+ ret = ext4_mb_discard_group_preallocations(sb, i, needed);
-+ freed += ret;
-+ needed -= ret;
++ gd_blkno = ocfs2_which_cluster_group(inode, cluster);
++ if (gd_blkno < lgd_blkno)
++ continue;
++ else if (gd_blkno > lgd_blkno)
++ break;
++
++ if (set)
++ ocfs2_set_bit(cluster % cl_cpg,
++ (unsigned long *)gd->bg_bitmap);
++ else
++ ocfs2_clear_bit(cluster % cl_cpg,
++ (unsigned long *)gd->bg_bitmap);
++ backups++;
+ }
+
-+ return freed;
++ mlog_exit_void();
++ return backups;
+}
+
-+/*
-+ * Main entry point into mballoc to allocate blocks
-+ * it tries to use preallocation first, then falls back
-+ * to usual allocation
-+ */
-+ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
-+ struct ext4_allocation_request *ar, int *errp)
++static int ocfs2_update_last_group_and_inode(handle_t *handle,
++ struct inode *bm_inode,
++ struct buffer_head *bm_bh,
++ struct buffer_head *group_bh,
++ u32 first_new_cluster,
++ int new_clusters)
+{
-+ struct ext4_allocation_context ac;
-+ struct ext4_sb_info *sbi;
-+ struct super_block *sb;
-+ ext4_fsblk_t block = 0;
-+ int freed;
-+ int inquota;
-+
-+ sb = ar->inode->i_sb;
-+ sbi = EXT4_SB(sb);
-+
-+ if (!test_opt(sb, MBALLOC)) {
-+ block = ext4_new_blocks_old(handle, ar->inode, ar->goal,
-+ &(ar->len), errp);
-+ return block;
-+ }
-+
-+ while (ar->len && DQUOT_ALLOC_BLOCK(ar->inode, ar->len)) {
-+ ar->flags |= EXT4_MB_HINT_NOPREALLOC;
-+ ar->len--;
-+ }
-+ if (ar->len == 0) {
-+ *errp = -EDQUOT;
-+ return 0;
-+ }
-+ inquota = ar->len;
++ int ret = 0;
++ struct ocfs2_super *osb = OCFS2_SB(bm_inode->i_sb);
++ struct ocfs2_dinode *fe = (struct ocfs2_dinode *) bm_bh->b_data;
++ struct ocfs2_chain_list *cl = &fe->id2.i_chain;
++ struct ocfs2_chain_rec *cr;
++ struct ocfs2_group_desc *group;
++ u16 chain, num_bits, backups = 0;
++ u16 cl_bpc = le16_to_cpu(cl->cl_bpc);
++ u16 cl_cpg = le16_to_cpu(cl->cl_cpg);
+
-+ ext4_mb_poll_new_transaction(sb, handle);
++ mlog_entry("(new_clusters=%d, first_new_cluster = %u)\n",
++ new_clusters, first_new_cluster);
+
-+ *errp = ext4_mb_initialize_context(&ac, ar);
-+ if (*errp) {
-+ ar->len = 0;
++ ret = ocfs2_journal_access(handle, bm_inode, group_bh,
++ OCFS2_JOURNAL_ACCESS_WRITE);
++ if (ret < 0) {
++ mlog_errno(ret);
+ goto out;
+ }
+
-+ ac.ac_op = EXT4_MB_HISTORY_PREALLOC;
-+ if (!ext4_mb_use_preallocated(&ac)) {
-+
-+ ac.ac_op = EXT4_MB_HISTORY_ALLOC;
-+ ext4_mb_normalize_request(&ac, ar);
++ group = (struct ocfs2_group_desc *)group_bh->b_data;
+
-+repeat:
-+ /* allocate space in core */
-+ ext4_mb_regular_allocator(&ac);
++ /* update the group first. */
++ num_bits = new_clusters * cl_bpc;
++ le16_add_cpu(&group->bg_bits, num_bits);
++ le16_add_cpu(&group->bg_free_bits_count, num_bits);
+
-+ /* as we've just preallocated more space than
-+ * user requested orinally, we store allocated
-+ * space in a special descriptor */
-+ if (ac.ac_status == AC_STATUS_FOUND &&
-+ ac.ac_o_ex.fe_len < ac.ac_b_ex.fe_len)
-+ ext4_mb_new_preallocation(&ac);
++ /*
++ * check whether there are some new backup superblocks exist in
++ * this group and update the group bitmap accordingly.
++ */
++ if (OCFS2_HAS_COMPAT_FEATURE(osb->sb,
++ OCFS2_FEATURE_COMPAT_BACKUP_SB)) {
++ backups = ocfs2_calc_new_backup_super(bm_inode,
++ group,
++ new_clusters,
++ first_new_cluster,
++ cl_cpg, 1);
++ le16_add_cpu(&group->bg_free_bits_count, -1 * backups);
+ }
+
-+ if (likely(ac.ac_status == AC_STATUS_FOUND)) {
-+ ext4_mb_mark_diskspace_used(&ac, handle);
-+ *errp = 0;
-+ block = ext4_grp_offs_to_block(sb, &ac.ac_b_ex);
-+ ar->len = ac.ac_b_ex.fe_len;
-+ } else {
-+ freed = ext4_mb_discard_preallocations(sb, ac.ac_o_ex.fe_len);
-+ if (freed)
-+ goto repeat;
-+ *errp = -ENOSPC;
-+ ac.ac_b_ex.fe_len = 0;
-+ ar->len = 0;
-+ ext4_mb_show_ac(&ac);
++ ret = ocfs2_journal_dirty(handle, group_bh);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_rollback;
+ }
+
-+ ext4_mb_release_context(&ac);
++ /* update the inode accordingly. */
++ ret = ocfs2_journal_access(handle, bm_inode, bm_bh,
++ OCFS2_JOURNAL_ACCESS_WRITE);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_rollback;
++ }
+
-+out:
-+ if (ar->len < inquota)
-+ DQUOT_FREE_BLOCK(ar->inode, inquota - ar->len);
++ chain = le16_to_cpu(group->bg_chain);
++ cr = (&cl->cl_recs[chain]);
++ le32_add_cpu(&cr->c_total, num_bits);
++ le32_add_cpu(&cr->c_free, num_bits);
++ le32_add_cpu(&fe->id1.bitmap1.i_total, num_bits);
++ le32_add_cpu(&fe->i_clusters, new_clusters);
+
-+ return block;
-+}
-+static void ext4_mb_poll_new_transaction(struct super_block *sb,
-+ handle_t *handle)
-+{
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ if (backups) {
++ le32_add_cpu(&cr->c_free, -1 * backups);
++ le32_add_cpu(&fe->id1.bitmap1.i_used, backups);
++ }
+
-+ if (sbi->s_last_transaction == handle->h_transaction->t_tid)
-+ return;
++ spin_lock(&OCFS2_I(bm_inode)->ip_lock);
++ OCFS2_I(bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
++ le64_add_cpu(&fe->i_size, new_clusters << osb->s_clustersize_bits);
++ spin_unlock(&OCFS2_I(bm_inode)->ip_lock);
++ i_size_write(bm_inode, le64_to_cpu(fe->i_size));
+
-+ /* new transaction! time to close last one and free blocks for
-+ * committed transaction. we know that only transaction can be
-+ * active, so previos transaction can be being logged and we
-+ * know that transaction before previous is known to be already
-+ * logged. this means that now we may free blocks freed in all
-+ * transactions before previous one. hope I'm clear enough ... */
++ ocfs2_journal_dirty(handle, bm_bh);
+
-+ spin_lock(&sbi->s_md_lock);
-+ if (sbi->s_last_transaction != handle->h_transaction->t_tid) {
-+ mb_debug("new transaction %lu, old %lu\n",
-+ (unsigned long) handle->h_transaction->t_tid,
-+ (unsigned long) sbi->s_last_transaction);
-+ list_splice_init(&sbi->s_closed_transaction,
-+ &sbi->s_committed_transaction);
-+ list_splice_init(&sbi->s_active_transaction,
-+ &sbi->s_closed_transaction);
-+ sbi->s_last_transaction = handle->h_transaction->t_tid;
++out_rollback:
++ if (ret < 0) {
++ ocfs2_calc_new_backup_super(bm_inode,
++ group,
++ new_clusters,
++ first_new_cluster,
++ cl_cpg, 0);
++ le16_add_cpu(&group->bg_free_bits_count, backups);
++ le16_add_cpu(&group->bg_bits, -1 * num_bits);
++ le16_add_cpu(&group->bg_free_bits_count, -1 * num_bits);
+ }
-+ spin_unlock(&sbi->s_md_lock);
-+
-+ ext4_mb_free_committed_blocks(sb);
++out:
++ mlog_exit(ret);
++ return ret;
+}
+
-+static int ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
-+ ext4_group_t group, ext4_grpblk_t block, int count)
++static int update_backups(struct inode * inode, u32 clusters, char *data)
+{
-+ struct ext4_group_info *db = e4b->bd_info;
-+ struct super_block *sb = e4b->bd_sb;
-+ struct ext4_sb_info *sbi = EXT4_SB(sb);
-+ struct ext4_free_metadata *md;
-+ int i;
++ int i, ret = 0;
++ u32 cluster;
++ u64 blkno;
++ struct buffer_head *backup = NULL;
++ struct ocfs2_dinode *backup_di = NULL;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
-+ BUG_ON(e4b->bd_bitmap_page == NULL);
-+ BUG_ON(e4b->bd_buddy_page == NULL);
++ /* calculate the real backups we need to update. */
++ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
++ blkno = ocfs2_backup_super_blkno(inode->i_sb, i);
++ cluster = ocfs2_blocks_to_clusters(inode->i_sb, blkno);
++ if (cluster > clusters)
++ break;
+
-+ ext4_lock_group(sb, group);
-+ for (i = 0; i < count; i++) {
-+ md = db->bb_md_cur;
-+ if (md && db->bb_tid != handle->h_transaction->t_tid) {
-+ db->bb_md_cur = NULL;
-+ md = NULL;
++ ret = ocfs2_read_block(osb, blkno, &backup, 0, NULL);
++ if (ret < 0) {
++ mlog_errno(ret);
++ break;
+ }
+
-+ if (md == NULL) {
-+ ext4_unlock_group(sb, group);
-+ md = kmalloc(sizeof(*md), GFP_NOFS);
-+ if (md == NULL)
-+ return -ENOMEM;
-+ md->num = 0;
-+ md->group = group;
++ memcpy(backup->b_data, data, inode->i_sb->s_blocksize);
+
-+ ext4_lock_group(sb, group);
-+ if (db->bb_md_cur == NULL) {
-+ spin_lock(&sbi->s_md_lock);
-+ list_add(&md->list, &sbi->s_active_transaction);
-+ spin_unlock(&sbi->s_md_lock);
-+ /* protect buddy cache from being freed,
-+ * otherwise we'll refresh it from
-+ * on-disk bitmap and lose not-yet-available
-+ * blocks */
-+ page_cache_get(e4b->bd_buddy_page);
-+ page_cache_get(e4b->bd_bitmap_page);
-+ db->bb_md_cur = md;
-+ db->bb_tid = handle->h_transaction->t_tid;
-+ mb_debug("new md 0x%p for group %lu\n",
-+ md, md->group);
-+ } else {
-+ kfree(md);
-+ md = db->bb_md_cur;
-+ }
-+ }
++ backup_di = (struct ocfs2_dinode *)backup->b_data;
++ backup_di->i_blkno = cpu_to_le64(blkno);
+
-+ BUG_ON(md->num >= EXT4_BB_MAX_BLOCKS);
-+ md->blocks[md->num] = block + i;
-+ md->num++;
-+ if (md->num == EXT4_BB_MAX_BLOCKS) {
-+ /* no more space, put full container on a sb's list */
-+ db->bb_md_cur = NULL;
++ ret = ocfs2_write_super_or_backup(osb, backup);
++ brelse(backup);
++ backup = NULL;
++ if (ret < 0) {
++ mlog_errno(ret);
++ break;
+ }
+ }
-+ ext4_unlock_group(sb, group);
-+ return 0;
++
++ return ret;
+}
+
-+/*
-+ * Main entry point into mballoc to free blocks
-+ */
-+void ext4_mb_free_blocks(handle_t *handle, struct inode *inode,
-+ unsigned long block, unsigned long count,
-+ int metadata, unsigned long *freed)
++static void ocfs2_update_super_and_backups(struct inode *inode,
++ int new_clusters)
+{
-+ struct buffer_head *bitmap_bh = 0;
-+ struct super_block *sb = inode->i_sb;
-+ struct ext4_allocation_context ac;
-+ struct ext4_group_desc *gdp;
-+ struct ext4_super_block *es;
-+ unsigned long overflow;
-+ ext4_grpblk_t bit;
-+ struct buffer_head *gd_bh;
-+ ext4_group_t block_group;
-+ struct ext4_sb_info *sbi;
-+ struct ext4_buddy e4b;
-+ int err = 0;
+ int ret;
-+
-+ *freed = 0;
-+
-+ ext4_mb_poll_new_transaction(sb, handle);
-+
-+ sbi = EXT4_SB(sb);
-+ es = EXT4_SB(sb)->s_es;
-+ if (block < le32_to_cpu(es->s_first_data_block) ||
-+ block + count < block ||
-+ block + count > ext4_blocks_count(es)) {
-+ ext4_error(sb, __FUNCTION__,
-+ "Freeing blocks not in datazone - "
-+ "block = %lu, count = %lu", block, count);
-+ goto error_return;
-+ }
-+
-+ ext4_debug("freeing block %lu\n", block);
-+
-+ ac.ac_op = EXT4_MB_HISTORY_FREE;
-+ ac.ac_inode = inode;
-+ ac.ac_sb = sb;
-+
-+do_more:
-+ overflow = 0;
-+ ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
-+
-+ /*
-+ * Check to see if we are freeing blocks across a group
-+ * boundary.
-+ */
-+ if (bit + count > EXT4_BLOCKS_PER_GROUP(sb)) {
-+ overflow = bit + count - EXT4_BLOCKS_PER_GROUP(sb);
-+ count -= overflow;
-+ }
-+ bitmap_bh = read_block_bitmap(sb, block_group);
-+ if (!bitmap_bh)
-+ goto error_return;
-+ gdp = ext4_get_group_desc(sb, block_group, &gd_bh);
-+ if (!gdp)
-+ goto error_return;
-+
-+ if (in_range(ext4_block_bitmap(sb, gdp), block, count) ||
-+ in_range(ext4_inode_bitmap(sb, gdp), block, count) ||
-+ in_range(block, ext4_inode_table(sb, gdp),
-+ EXT4_SB(sb)->s_itb_per_group) ||
-+ in_range(block + count - 1, ext4_inode_table(sb, gdp),
-+ EXT4_SB(sb)->s_itb_per_group)) {
-+
-+ ext4_error(sb, __FUNCTION__,
-+ "Freeing blocks in system zone - "
-+ "Block = %lu, count = %lu", block, count);
-+ }
-+
-+ BUFFER_TRACE(bitmap_bh, "getting write access");
-+ err = ext4_journal_get_write_access(handle, bitmap_bh);
-+ if (err)
-+ goto error_return;
++ u32 clusters = 0;
++ struct buffer_head *super_bh = NULL;
++ struct ocfs2_dinode *super_di = NULL;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+
+ /*
-+ * We are about to modify some metadata. Call the journal APIs
-+ * to unshare ->b_data if a currently-committing transaction is
-+ * using it
++ * update the superblock last.
++ * It doesn't matter if the write failed.
+ */
-+ BUFFER_TRACE(gd_bh, "get_write_access");
-+ err = ext4_journal_get_write_access(handle, gd_bh);
-+ if (err)
-+ goto error_return;
-+
-+ err = ext4_mb_load_buddy(sb, block_group, &e4b);
-+ if (err)
-+ goto error_return;
-+
-+#ifdef AGGRESSIVE_CHECK
-+ {
-+ int i;
-+ for (i = 0; i < count; i++)
-+ BUG_ON(!mb_test_bit(bit + i, bitmap_bh->b_data));
++ ret = ocfs2_read_block(osb, OCFS2_SUPER_BLOCK_BLKNO,
++ &super_bh, 0, NULL);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out;
+ }
-+#endif
-+ mb_clear_bits(sb_bgl_lock(sbi, block_group), bitmap_bh->b_data,
-+ bit, count);
-+
-+ /* We dirtied the bitmap block */
-+ BUFFER_TRACE(bitmap_bh, "dirtied bitmap block");
-+ err = ext4_journal_dirty_metadata(handle, bitmap_bh);
+
-+ ac.ac_b_ex.fe_group = block_group;
-+ ac.ac_b_ex.fe_start = bit;
-+ ac.ac_b_ex.fe_len = count;
-+ ext4_mb_store_history(&ac);
++ super_di = (struct ocfs2_dinode *)super_bh->b_data;
++ le32_add_cpu(&super_di->i_clusters, new_clusters);
++ clusters = le32_to_cpu(super_di->i_clusters);
+
-+ if (metadata) {
-+ /* blocks being freed are metadata. these blocks shouldn't
-+ * be used until this transaction is committed */
-+ ext4_mb_free_metadata(handle, &e4b, block_group, bit, count);
-+ } else {
-+ ext4_lock_group(sb, block_group);
-+ err = mb_free_blocks(inode, &e4b, bit, count);
-+ ext4_mb_return_to_preallocation(inode, &e4b, block, count);
-+ ext4_unlock_group(sb, block_group);
-+ BUG_ON(err != 0);
++ ret = ocfs2_write_super_or_backup(osb, super_bh);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out;
+ }
+
-+ spin_lock(sb_bgl_lock(sbi, block_group));
-+ gdp->bg_free_blocks_count =
-+ cpu_to_le16(le16_to_cpu(gdp->bg_free_blocks_count) + count);
-+ gdp->bg_checksum = ext4_group_desc_csum(sbi, block_group, gdp);
-+ spin_unlock(sb_bgl_lock(sbi, block_group));
-+ percpu_counter_add(&sbi->s_freeblocks_counter, count);
-+
-+ ext4_mb_release_desc(&e4b);
-+
-+ *freed += count;
-+
-+ /* And the group descriptor block */
-+ BUFFER_TRACE(gd_bh, "dirtied group descriptor block");
-+ ret = ext4_journal_dirty_metadata(handle, gd_bh);
-+ if (!err)
-+ err = ret;
++ if (OCFS2_HAS_COMPAT_FEATURE(osb->sb, OCFS2_FEATURE_COMPAT_BACKUP_SB))
++ ret = update_backups(inode, clusters, super_bh->b_data);
+
-+ if (overflow && !err) {
-+ block += count;
-+ count = overflow;
-+ put_bh(bitmap_bh);
-+ goto do_more;
-+ }
-+ sb->s_dirt = 1;
-+error_return:
-+ brelse(bitmap_bh);
-+ ext4_std_error(sb, err);
++out:
++ brelse(super_bh);
++ if (ret)
++ printk(KERN_WARNING "ocfs2: Failed to update super blocks on %s"
++ " during fs resize. This condition is not fatal,"
++ " but fsck.ocfs2 should be run to fix it\n",
++ osb->dev_str);
+ return;
+}
-diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
-new file mode 100644
-index 0000000..3ebc233
---- /dev/null
-+++ b/fs/ext4/migrate.c
-@@ -0,0 +1,560 @@
-+/*
-+ * Copyright IBM Corporation, 2007
-+ * Author Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
-+ *
-+ * This program is free software; you can redistribute it and/or modify it
-+ * under the terms of version 2.1 of the GNU Lesser General Public License
-+ * as published by the Free Software Foundation.
-+ *
-+ * This program is distributed in the hope that it would be useful, but
-+ * WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-+ *
-+ */
-+
-+#include <linux/module.h>
-+#include <linux/ext4_jbd2.h>
-+#include <linux/ext4_fs_extents.h>
+
+/*
-+ * The contiguous blocks details which can be
-+ * represented by a single extent
++ * Extend the filesystem to the new number of clusters specified. This entry
++ * point is only used to extend the current filesystem to the end of the last
++ * existing group.
+ */
-+struct list_blocks_struct {
-+ ext4_lblk_t first_block, last_block;
-+ ext4_fsblk_t first_pblock, last_pblock;
-+};
++int ocfs2_group_extend(struct inode * inode, int new_clusters)
++{
++ int ret;
++ handle_t *handle;
++ struct buffer_head *main_bm_bh = NULL;
++ struct buffer_head *group_bh = NULL;
++ struct inode *main_bm_inode = NULL;
++ struct ocfs2_dinode *fe = NULL;
++ struct ocfs2_group_desc *group = NULL;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
++ u16 cl_bpc;
++ u32 first_new_cluster;
++ u64 lgd_blkno;
+
-+static int finish_range(handle_t *handle, struct inode *inode,
-+ struct list_blocks_struct *lb)
++ mlog_entry_void();
+
-+{
-+ int retval = 0, needed;
-+ struct ext4_extent newext;
-+ struct ext4_ext_path *path;
-+ if (lb->first_pblock == 0)
-+ return 0;
++ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb))
++ return -EROFS;
+
-+ /* Add the extent to temp inode*/
-+ newext.ee_block = cpu_to_le32(lb->first_block);
-+ newext.ee_len = cpu_to_le16(lb->last_block - lb->first_block + 1);
-+ ext4_ext_store_pblock(&newext, lb->first_pblock);
-+ path = ext4_ext_find_extent(inode, lb->first_block, NULL);
++ if (new_clusters < 0)
++ return -EINVAL;
++ else if (new_clusters == 0)
++ return 0;
+
-+ if (IS_ERR(path)) {
-+ retval = PTR_ERR(path);
-+ goto err_out;
++ main_bm_inode = ocfs2_get_system_file_inode(osb,
++ GLOBAL_BITMAP_SYSTEM_INODE,
++ OCFS2_INVALID_SLOT);
++ if (!main_bm_inode) {
++ ret = -EINVAL;
++ mlog_errno(ret);
++ goto out;
+ }
+
-+ /*
-+ * Calculate the credit needed to inserting this extent
-+ * Since we are doing this in loop we may accumalate extra
-+ * credit. But below we try to not accumalate too much
-+ * of them by restarting the journal.
-+ */
-+ needed = ext4_ext_calc_credits_for_insert(inode, path);
++ mutex_lock(&main_bm_inode->i_mutex);
+
-+ /*
-+ * Make sure the credit we accumalated is not really high
-+ */
-+ if (needed && handle->h_buffer_credits >= EXT4_RESERVE_TRANS_BLOCKS) {
-+ retval = ext4_journal_restart(handle, needed);
-+ if (retval)
-+ goto err_out;
-+ }
-+ if (needed) {
-+ retval = ext4_journal_extend(handle, needed);
-+ if (retval != 0) {
-+ /*
-+ * IF not able to extend the journal restart the journal
-+ */
-+ retval = ext4_journal_restart(handle, needed);
-+ if (retval)
-+ goto err_out;
-+ }
++ ret = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_mutex;
+ }
-+ retval = ext4_ext_insert_extent(handle, inode, path, &newext);
-+err_out:
-+ lb->first_pblock = 0;
-+ return retval;
-+}
+
-+static int update_extent_range(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t pblock, ext4_lblk_t blk_num,
-+ struct list_blocks_struct *lb)
-+{
-+ int retval;
-+ /*
-+ * See if we can add on to the existing range (if it exists)
-+ */
-+ if (lb->first_pblock &&
-+ (lb->last_pblock+1 == pblock) &&
-+ (lb->last_block+1 == blk_num)) {
-+ lb->last_pblock = pblock;
-+ lb->last_block = blk_num;
-+ return 0;
++ fe = (struct ocfs2_dinode *)main_bm_bh->b_data;
++
++ if (le16_to_cpu(fe->id2.i_chain.cl_cpg) !=
++ ocfs2_group_bitmap_size(osb->sb) * 8) {
++ mlog(ML_ERROR, "The disk is too old and small. "
++ "Force to do offline resize.");
++ ret = -EINVAL;
++ goto out_unlock;
+ }
-+ /*
-+ * Start a new range.
-+ */
-+ retval = finish_range(handle, inode, lb);
-+ lb->first_pblock = lb->last_pblock = pblock;
-+ lb->first_block = lb->last_block = blk_num;
+
-+ return retval;
-+}
++ if (!OCFS2_IS_VALID_DINODE(fe)) {
++ OCFS2_RO_ON_INVALID_DINODE(main_bm_inode->i_sb, fe);
++ ret = -EIO;
++ goto out_unlock;
++ }
+
-+static int update_ind_extent_range(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
-+ struct list_blocks_struct *lb)
-+{
-+ struct buffer_head *bh;
-+ __le32 *i_data;
-+ int i, retval = 0;
-+ ext4_lblk_t blk_count = *blk_nump;
-+ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++ first_new_cluster = le32_to_cpu(fe->i_clusters);
++ lgd_blkno = ocfs2_which_cluster_group(main_bm_inode,
++ first_new_cluster - 1);
+
-+ if (!pblock) {
-+ /* Only update the file block number */
-+ *blk_nump += max_entries;
-+ return 0;
++ ret = ocfs2_read_block(osb, lgd_blkno, &group_bh, OCFS2_BH_CACHED,
++ main_bm_inode);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_unlock;
+ }
+
-+ bh = sb_bread(inode->i_sb, pblock);
-+ if (!bh)
-+ return -EIO;
++ group = (struct ocfs2_group_desc *)group_bh->b_data;
+
-+ i_data = (__le32 *)bh->b_data;
-+ for (i = 0; i < max_entries; i++, blk_count++) {
-+ if (i_data[i]) {
-+ retval = update_extent_range(handle, inode,
-+ le32_to_cpu(i_data[i]),
-+ blk_count, lb);
-+ if (retval)
-+ break;
-+ }
++ ret = ocfs2_check_group_descriptor(inode->i_sb, fe, group);
++ if (ret) {
++ mlog_errno(ret);
++ goto out_unlock;
+ }
+
-+ /* Update the file block number */
-+ *blk_nump = blk_count;
-+ put_bh(bh);
-+ return retval;
-+
-+}
++ cl_bpc = le16_to_cpu(fe->id2.i_chain.cl_bpc);
++ if (le16_to_cpu(group->bg_bits) / cl_bpc + new_clusters >
++ le16_to_cpu(fe->id2.i_chain.cl_cpg)) {
++ ret = -EINVAL;
++ goto out_unlock;
++ }
+
-+static int update_dind_extent_range(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
-+ struct list_blocks_struct *lb)
-+{
-+ struct buffer_head *bh;
-+ __le32 *i_data;
-+ int i, retval = 0;
-+ ext4_lblk_t blk_count = *blk_nump;
-+ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++ mlog(0, "extend the last group at %llu, new clusters = %d\n",
++ (unsigned long long)le64_to_cpu(group->bg_blkno), new_clusters);
+
-+ if (!pblock) {
-+ /* Only update the file block number */
-+ *blk_nump += max_entries * max_entries;
-+ return 0;
++ handle = ocfs2_start_trans(osb, OCFS2_GROUP_EXTEND_CREDITS);
++ if (IS_ERR(handle)) {
++ mlog_errno(PTR_ERR(handle));
++ ret = -EINVAL;
++ goto out_unlock;
+ }
-+ bh = sb_bread(inode->i_sb, pblock);
-+ if (!bh)
-+ return -EIO;
+
-+ i_data = (__le32 *)bh->b_data;
-+ for (i = 0; i < max_entries; i++) {
-+ if (i_data[i]) {
-+ retval = update_ind_extent_range(handle, inode,
-+ le32_to_cpu(i_data[i]),
-+ &blk_count, lb);
-+ if (retval)
-+ break;
-+ } else {
-+ /* Only update the file block number */
-+ blk_count += max_entries;
-+ }
++ /* update the last group descriptor and inode. */
++ ret = ocfs2_update_last_group_and_inode(handle, main_bm_inode,
++ main_bm_bh, group_bh,
++ first_new_cluster,
++ new_clusters);
++ if (ret) {
++ mlog_errno(ret);
++ goto out_commit;
+ }
+
-+ /* Update the file block number */
-+ *blk_nump = blk_count;
-+ put_bh(bh);
-+ return retval;
-+
-+}
++ ocfs2_update_super_and_backups(main_bm_inode, new_clusters);
+
-+static int update_tind_extent_range(handle_t *handle, struct inode *inode,
-+ ext4_fsblk_t pblock, ext4_lblk_t *blk_nump,
-+ struct list_blocks_struct *lb)
-+{
-+ struct buffer_head *bh;
-+ __le32 *i_data;
-+ int i, retval = 0;
-+ ext4_lblk_t blk_count = *blk_nump;
-+ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++out_commit:
++ ocfs2_commit_trans(osb, handle);
++out_unlock:
++ brelse(group_bh);
++ brelse(main_bm_bh);
+
-+ if (!pblock) {
-+ /* Only update the file block number */
-+ *blk_nump += max_entries * max_entries * max_entries;
-+ return 0;
-+ }
-+ bh = sb_bread(inode->i_sb, pblock);
-+ if (!bh)
-+ return -EIO;
++ ocfs2_inode_unlock(main_bm_inode, 1);
+
-+ i_data = (__le32 *)bh->b_data;
-+ for (i = 0; i < max_entries; i++) {
-+ if (i_data[i]) {
-+ retval = update_dind_extent_range(handle, inode,
-+ le32_to_cpu(i_data[i]),
-+ &blk_count, lb);
-+ if (retval)
-+ break;
-+ } else
-+ /* Only update the file block number */
-+ blk_count += max_entries * max_entries;
-+ }
-+ /* Update the file block number */
-+ *blk_nump = blk_count;
-+ put_bh(bh);
-+ return retval;
++out_mutex:
++ mutex_unlock(&main_bm_inode->i_mutex);
++ iput(main_bm_inode);
+
++out:
++ mlog_exit_void();
++ return ret;
+}
+
-+static int free_dind_blocks(handle_t *handle,
-+ struct inode *inode, __le32 i_data)
++static int ocfs2_check_new_group(struct inode *inode,
++ struct ocfs2_dinode *di,
++ struct ocfs2_new_group_input *input,
++ struct buffer_head *group_bh)
+{
-+ int i;
-+ __le32 *tmp_idata;
-+ struct buffer_head *bh;
-+ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
-+
-+ bh = sb_bread(inode->i_sb, le32_to_cpu(i_data));
-+ if (!bh)
-+ return -EIO;
++ int ret;
++ struct ocfs2_group_desc *gd;
++ u16 cl_bpc = le16_to_cpu(di->id2.i_chain.cl_bpc);
++ unsigned int max_bits = le16_to_cpu(di->id2.i_chain.cl_cpg) *
++ le16_to_cpu(di->id2.i_chain.cl_bpc);
+
-+ tmp_idata = (__le32 *)bh->b_data;
-+ for (i = 0; i < max_entries; i++) {
-+ if (tmp_idata[i])
-+ ext4_free_blocks(handle, inode,
-+ le32_to_cpu(tmp_idata[i]), 1, 1);
-+ }
-+ put_bh(bh);
-+ ext4_free_blocks(handle, inode, le32_to_cpu(i_data), 1, 1);
-+ return 0;
-+}
+
-+static int free_tind_blocks(handle_t *handle,
-+ struct inode *inode, __le32 i_data)
-+{
-+ int i, retval = 0;
-+ __le32 *tmp_idata;
-+ struct buffer_head *bh;
-+ unsigned long max_entries = inode->i_sb->s_blocksize >> 2;
++ gd = (struct ocfs2_group_desc *)group_bh->b_data;
+
-+ bh = sb_bread(inode->i_sb, le32_to_cpu(i_data));
-+ if (!bh)
-+ return -EIO;
++ ret = -EIO;
++ if (!OCFS2_IS_VALID_GROUP_DESC(gd))
++ mlog(ML_ERROR, "Group descriptor # %llu isn't valid.\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno));
++ else if (di->i_blkno != gd->bg_parent_dinode)
++ mlog(ML_ERROR, "Group descriptor # %llu has bad parent "
++ "pointer (%llu, expected %llu)\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ (unsigned long long)le64_to_cpu(gd->bg_parent_dinode),
++ (unsigned long long)le64_to_cpu(di->i_blkno));
++ else if (le16_to_cpu(gd->bg_bits) > max_bits)
++ mlog(ML_ERROR, "Group descriptor # %llu has bit count of %u\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_bits));
++ else if (le16_to_cpu(gd->bg_free_bits_count) > le16_to_cpu(gd->bg_bits))
++ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
++ "claims that %u are free\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_bits),
++ le16_to_cpu(gd->bg_free_bits_count));
++ else if (le16_to_cpu(gd->bg_bits) > (8 * le16_to_cpu(gd->bg_size)))
++ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
++ "max bitmap bits of %u\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_bits),
++ 8 * le16_to_cpu(gd->bg_size));
++ else if (le16_to_cpu(gd->bg_chain) != input->chain)
++ mlog(ML_ERROR, "Group descriptor # %llu has bad chain %u "
++ "while input has %u set.\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_chain), input->chain);
++ else if (le16_to_cpu(gd->bg_bits) != input->clusters * cl_bpc)
++ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
++ "input has %u clusters set\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_bits), input->clusters);
++ else if (le16_to_cpu(gd->bg_free_bits_count) != input->frees * cl_bpc)
++ mlog(ML_ERROR, "Group descriptor # %llu has free bit count %u "
++ "but it should have %u set\n",
++ (unsigned long long)le64_to_cpu(gd->bg_blkno),
++ le16_to_cpu(gd->bg_bits),
++ input->frees * cl_bpc);
++ else
++ ret = 0;
+
-+ tmp_idata = (__le32 *)bh->b_data;
-+ for (i = 0; i < max_entries; i++) {
-+ if (tmp_idata[i]) {
-+ retval = free_dind_blocks(handle,
-+ inode, tmp_idata[i]);
-+ if (retval) {
-+ put_bh(bh);
-+ return retval;
-+ }
-+ }
-+ }
-+ put_bh(bh);
-+ ext4_free_blocks(handle, inode, le32_to_cpu(i_data), 1, 1);
-+ return 0;
++ return ret;
+}
+
-+static int free_ind_block(handle_t *handle, struct inode *inode)
++static int ocfs2_verify_group_and_input(struct inode *inode,
++ struct ocfs2_dinode *di,
++ struct ocfs2_new_group_input *input,
++ struct buffer_head *group_bh)
+{
-+ int retval;
-+ struct ext4_inode_info *ei = EXT4_I(inode);
-+
-+ if (ei->i_data[EXT4_IND_BLOCK])
-+ ext4_free_blocks(handle, inode,
-+ le32_to_cpu(ei->i_data[EXT4_IND_BLOCK]), 1, 1);
++ u16 cl_count = le16_to_cpu(di->id2.i_chain.cl_count);
++ u16 cl_cpg = le16_to_cpu(di->id2.i_chain.cl_cpg);
++ u16 next_free = le16_to_cpu(di->id2.i_chain.cl_next_free_rec);
++ u32 cluster = ocfs2_blocks_to_clusters(inode->i_sb, input->group);
++ u32 total_clusters = le32_to_cpu(di->i_clusters);
++ int ret = -EINVAL;
+
-+ if (ei->i_data[EXT4_DIND_BLOCK]) {
-+ retval = free_dind_blocks(handle, inode,
-+ ei->i_data[EXT4_DIND_BLOCK]);
-+ if (retval)
-+ return retval;
-+ }
++ if (cluster < total_clusters)
++ mlog(ML_ERROR, "add a group which is in the current volume.\n");
++ else if (input->chain >= cl_count)
++ mlog(ML_ERROR, "input chain exceeds the limit.\n");
++ else if (next_free != cl_count && next_free != input->chain)
++ mlog(ML_ERROR,
++ "the add group should be in chain %u\n", next_free);
++ else if (total_clusters + input->clusters < total_clusters)
++ mlog(ML_ERROR, "add group's clusters overflow.\n");
++ else if (input->clusters > cl_cpg)
++ mlog(ML_ERROR, "the cluster exceeds the maximum of a group\n");
++ else if (input->frees > input->clusters)
++ mlog(ML_ERROR, "the free cluster exceeds the total clusters\n");
++ else if (total_clusters % cl_cpg != 0)
++ mlog(ML_ERROR,
++ "the last group isn't full. Use group extend first.\n");
++ else if (input->group != ocfs2_which_cluster_group(inode, cluster))
++ mlog(ML_ERROR, "group blkno is invalid\n");
++ else if ((ret = ocfs2_check_new_group(inode, di, input, group_bh)))
++ mlog(ML_ERROR, "group descriptor check failed.\n");
++ else
++ ret = 0;
+
-+ if (ei->i_data[EXT4_TIND_BLOCK]) {
-+ retval = free_tind_blocks(handle, inode,
-+ ei->i_data[EXT4_TIND_BLOCK]);
-+ if (retval)
-+ return retval;
-+ }
-+ return 0;
++ return ret;
+}
+
-+static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
-+ struct inode *tmp_inode, int retval)
++/* Add a new group descriptor to global_bitmap. */
++int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input)
+{
-+ struct ext4_inode_info *ei = EXT4_I(inode);
-+ struct ext4_inode_info *tmp_ei = EXT4_I(tmp_inode);
-+
-+ retval = free_ind_block(handle, inode);
-+ if (retval)
-+ goto err_out;
++ int ret;
++ handle_t *handle;
++ struct buffer_head *main_bm_bh = NULL;
++ struct inode *main_bm_inode = NULL;
++ struct ocfs2_dinode *fe = NULL;
++ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
++ struct buffer_head *group_bh = NULL;
++ struct ocfs2_group_desc *group = NULL;
++ struct ocfs2_chain_list *cl;
++ struct ocfs2_chain_rec *cr;
++ u16 cl_bpc;
+
-+ /*
-+ * One credit accounted for writing the
-+ * i_data field of the original inode
-+ */
-+ retval = ext4_journal_extend(handle, 1);
-+ if (retval != 0) {
-+ retval = ext4_journal_restart(handle, 1);
-+ if (retval)
-+ goto err_out;
-+ }
++ mlog_entry_void();
+
-+ /*
-+ * We have the extent map build with the tmp inode.
-+ * Now copy the i_data across
-+ */
-+ ei->i_flags |= EXT4_EXTENTS_FL;
-+ memcpy(ei->i_data, tmp_ei->i_data, sizeof(ei->i_data));
++ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb))
++ return -EROFS;
+
-+ /*
-+ * Update i_blocks with the new blocks that got
-+ * allocated while adding extents for extent index
-+ * blocks.
-+ *
-+ * While converting to extents we need not
-+ * update the orignal inode i_blocks for extent blocks
-+ * via quota APIs. The quota update happened via tmp_inode already.
-+ */
-+ spin_lock(&inode->i_lock);
-+ inode->i_blocks += tmp_inode->i_blocks;
-+ spin_unlock(&inode->i_lock);
++ main_bm_inode = ocfs2_get_system_file_inode(osb,
++ GLOBAL_BITMAP_SYSTEM_INODE,
++ OCFS2_INVALID_SLOT);
++ if (!main_bm_inode) {
++ ret = -EINVAL;
++ mlog_errno(ret);
++ goto out;
++ }
+
-+ ext4_mark_inode_dirty(handle, inode);
-+err_out:
-+ return retval;
-+}
++ mutex_lock(&main_bm_inode->i_mutex);
+
-+static int free_ext_idx(handle_t *handle, struct inode *inode,
-+ struct ext4_extent_idx *ix)
-+{
-+ int i, retval = 0;
-+ ext4_fsblk_t block;
-+ struct buffer_head *bh;
-+ struct ext4_extent_header *eh;
++ ret = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_mutex;
++ }
+
-+ block = idx_pblock(ix);
-+ bh = sb_bread(inode->i_sb, block);
-+ if (!bh)
-+ return -EIO;
++ fe = (struct ocfs2_dinode *)main_bm_bh->b_data;
+
-+ eh = (struct ext4_extent_header *)bh->b_data;
-+ if (eh->eh_depth != 0) {
-+ ix = EXT_FIRST_INDEX(eh);
-+ for (i = 0; i < le16_to_cpu(eh->eh_entries); i++, ix++) {
-+ retval = free_ext_idx(handle, inode, ix);
-+ if (retval)
-+ break;
-+ }
++ if (le16_to_cpu(fe->id2.i_chain.cl_cpg) !=
++ ocfs2_group_bitmap_size(osb->sb) * 8) {
++ mlog(ML_ERROR, "The disk is too old and small."
++ " Force to do offline resize.");
++ ret = -EINVAL;
++ goto out_unlock;
+ }
-+ put_bh(bh);
-+ ext4_free_blocks(handle, inode, block, 1, 1);
-+ return retval;
-+}
+
-+/*
-+ * Free the extent meta data blocks only
-+ */
-+static int free_ext_block(handle_t *handle, struct inode *inode)
-+{
-+ int i, retval = 0;
-+ struct ext4_inode_info *ei = EXT4_I(inode);
-+ struct ext4_extent_header *eh = (struct ext4_extent_header *)ei->i_data;
-+ struct ext4_extent_idx *ix;
-+ if (eh->eh_depth == 0)
-+ /*
-+ * No extra blocks allocated for extent meta data
-+ */
-+ return 0;
-+ ix = EXT_FIRST_INDEX(eh);
-+ for (i = 0; i < le16_to_cpu(eh->eh_entries); i++, ix++) {
-+ retval = free_ext_idx(handle, inode, ix);
-+ if (retval)
-+ return retval;
++ ret = ocfs2_read_block(osb, input->group, &group_bh, 0, NULL);
++ if (ret < 0) {
++ mlog(ML_ERROR, "Can't read the group descriptor # %llu "
++ "from the device.", (unsigned long long)input->group);
++ goto out_unlock;
+ }
-+ return retval;
+
-+}
-+
-+int ext4_ext_migrate(struct inode *inode, struct file *filp,
-+ unsigned int cmd, unsigned long arg)
-+{
-+ handle_t *handle;
-+ int retval = 0, i;
-+ __le32 *i_data;
-+ ext4_lblk_t blk_count = 0;
-+ struct ext4_inode_info *ei;
-+ struct inode *tmp_inode = NULL;
-+ struct list_blocks_struct lb;
-+ unsigned long max_entries;
++ ocfs2_set_new_buffer_uptodate(inode, group_bh);
+
-+ if (!test_opt(inode->i_sb, EXTENTS))
-+ /*
-+ * if mounted with noextents we don't allow the migrate
-+ */
-+ return -EINVAL;
++ ret = ocfs2_verify_group_and_input(main_bm_inode, fe, input, group_bh);
++ if (ret) {
++ mlog_errno(ret);
++ goto out_unlock;
++ }
+
-+ if ((EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
-+ return -EINVAL;
++ mlog(0, "Add a new group %llu in chain = %u, length = %u\n",
++ (unsigned long long)input->group, input->chain, input->clusters);
+
-+ down_write(&EXT4_I(inode)->i_data_sem);
-+ handle = ext4_journal_start(inode,
-+ EXT4_DATA_TRANS_BLOCKS(inode->i_sb) +
-+ EXT4_INDEX_EXTRA_TRANS_BLOCKS + 3 +
-+ 2 * EXT4_QUOTA_INIT_BLOCKS(inode->i_sb)
-+ + 1);
++ handle = ocfs2_start_trans(osb, OCFS2_GROUP_ADD_CREDITS);
+ if (IS_ERR(handle)) {
-+ retval = PTR_ERR(handle);
-+ goto err_out;
-+ }
-+ tmp_inode = ext4_new_inode(handle,
-+ inode->i_sb->s_root->d_inode,
-+ S_IFREG);
-+ if (IS_ERR(tmp_inode)) {
-+ retval = -ENOMEM;
-+ ext4_journal_stop(handle);
-+ tmp_inode = NULL;
-+ goto err_out;
++ mlog_errno(PTR_ERR(handle));
++ ret = -EINVAL;
++ goto out_unlock;
+ }
-+ i_size_write(tmp_inode, i_size_read(inode));
-+ /*
-+ * We don't want the inode to be reclaimed
-+ * if we got interrupted in between. We have
-+ * this tmp inode carrying reference to the
-+ * data blocks of the original file. We set
-+ * the i_nlink to zero at the last stage after
-+ * switching the original file to extent format
-+ */
-+ tmp_inode->i_nlink = 1;
+
-+ ext4_ext_tree_init(handle, tmp_inode);
-+ ext4_orphan_add(handle, tmp_inode);
-+ ext4_journal_stop(handle);
++ cl_bpc = le16_to_cpu(fe->id2.i_chain.cl_bpc);
++ cl = &fe->id2.i_chain;
++ cr = &cl->cl_recs[input->chain];
+
-+ ei = EXT4_I(inode);
-+ i_data = ei->i_data;
-+ memset(&lb, 0, sizeof(lb));
++ ret = ocfs2_journal_access(handle, main_bm_inode, group_bh,
++ OCFS2_JOURNAL_ACCESS_WRITE);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_commit;
++ }
+
-+ /* 32 bit block address 4 bytes */
-+ max_entries = inode->i_sb->s_blocksize >> 2;
++ group = (struct ocfs2_group_desc *)group_bh->b_data;
++ group->bg_next_group = cr->c_blkno;
+
-+ /*
-+ * start with one credit accounted for
-+ * superblock modification.
-+ *
-+ * For the tmp_inode we already have commited the
-+ * trascation that created the inode. Later as and
-+ * when we add extents we extent the journal
-+ */
-+ handle = ext4_journal_start(inode, 1);
-+ for (i = 0; i < EXT4_NDIR_BLOCKS; i++, blk_count++) {
-+ if (i_data[i]) {
-+ retval = update_extent_range(handle, tmp_inode,
-+ le32_to_cpu(i_data[i]),
-+ blk_count, &lb);
-+ if (retval)
-+ goto err_out;
-+ }
++ ret = ocfs2_journal_dirty(handle, group_bh);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_commit;
+ }
-+ if (i_data[EXT4_IND_BLOCK]) {
-+ retval = update_ind_extent_range(handle, tmp_inode,
-+ le32_to_cpu(i_data[EXT4_IND_BLOCK]),
-+ &blk_count, &lb);
-+ if (retval)
-+ goto err_out;
-+ } else
-+ blk_count += max_entries;
-+ if (i_data[EXT4_DIND_BLOCK]) {
-+ retval = update_dind_extent_range(handle, tmp_inode,
-+ le32_to_cpu(i_data[EXT4_DIND_BLOCK]),
-+ &blk_count, &lb);
-+ if (retval)
-+ goto err_out;
-+ } else
-+ blk_count += max_entries * max_entries;
-+ if (i_data[EXT4_TIND_BLOCK]) {
-+ retval = update_tind_extent_range(handle, tmp_inode,
-+ le32_to_cpu(i_data[EXT4_TIND_BLOCK]),
-+ &blk_count, &lb);
-+ if (retval)
-+ goto err_out;
++
++ ret = ocfs2_journal_access(handle, main_bm_inode, main_bm_bh,
++ OCFS2_JOURNAL_ACCESS_WRITE);
++ if (ret < 0) {
++ mlog_errno(ret);
++ goto out_commit;
+ }
-+ /*
-+ * Build the last extent
-+ */
-+ retval = finish_range(handle, tmp_inode, &lb);
-+err_out:
-+ /*
-+ * We are either freeing extent information or indirect
-+ * blocks. During this we touch superblock, group descriptor
-+ * and block bitmap. Later we mark the tmp_inode dirty
-+ * via ext4_ext_tree_init. So allocate a credit of 4
-+ * We may update quota (user and group).
-+ *
-+ * FIXME!! we may be touching bitmaps in different block groups.
-+ */
-+ if (ext4_journal_extend(handle,
-+ 4 + 2*EXT4_QUOTA_TRANS_BLOCKS(inode->i_sb)) != 0)
-+ ext4_journal_restart(handle,
-+ 4 + 2*EXT4_QUOTA_TRANS_BLOCKS(inode->i_sb));
-+ if (retval)
-+ /*
-+ * Failure case delete the extent information with the
-+ * tmp_inode
-+ */
-+ free_ext_block(handle, tmp_inode);
-+ else
-+ retval = ext4_ext_swap_inode_data(handle, inode,
-+ tmp_inode, retval);
+
-+ /*
-+ * Mark the tmp_inode as of size zero
-+ */
-+ i_size_write(tmp_inode, 0);
++ if (input->chain == le16_to_cpu(cl->cl_next_free_rec)) {
++ le16_add_cpu(&cl->cl_next_free_rec, 1);
++ memset(cr, 0, sizeof(struct ocfs2_chain_rec));
++ }
+
-+ /*
-+ * set the i_blocks count to zero
-+ * so that the ext4_delete_inode does the
-+ * right job
-+ *
-+ * We don't need to take the i_lock because
-+ * the inode is not visible to user space.
-+ */
-+ tmp_inode->i_blocks = 0;
++ cr->c_blkno = le64_to_cpu(input->group);
++ le32_add_cpu(&cr->c_total, input->clusters * cl_bpc);
++ le32_add_cpu(&cr->c_free, input->frees * cl_bpc);
+
-+ /* Reset the extent details */
-+ ext4_ext_tree_init(handle, tmp_inode);
++ le32_add_cpu(&fe->id1.bitmap1.i_total, input->clusters *cl_bpc);
++ le32_add_cpu(&fe->id1.bitmap1.i_used,
++ (input->clusters - input->frees) * cl_bpc);
++ le32_add_cpu(&fe->i_clusters, input->clusters);
+
-+ /*
-+ * Set the i_nlink to zero so that
-+ * generic_drop_inode really deletes the
-+ * inode
-+ */
-+ tmp_inode->i_nlink = 0;
++ ocfs2_journal_dirty(handle, main_bm_bh);
+
-+ ext4_journal_stop(handle);
++ spin_lock(&OCFS2_I(main_bm_inode)->ip_lock);
++ OCFS2_I(main_bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
++ le64_add_cpu(&fe->i_size, input->clusters << osb->s_clustersize_bits);
++ spin_unlock(&OCFS2_I(main_bm_inode)->ip_lock);
++ i_size_write(main_bm_inode, le64_to_cpu(fe->i_size));
+
-+ up_write(&EXT4_I(inode)->i_data_sem);
++ ocfs2_update_super_and_backups(main_bm_inode, input->clusters);
+
-+ if (tmp_inode)
-+ iput(tmp_inode);
++out_commit:
++ ocfs2_commit_trans(osb, handle);
++out_unlock:
++ brelse(group_bh);
++ brelse(main_bm_bh);
+
-+ return retval;
-+}
-diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
-index 94ee6f3..67b6d8a 100644
---- a/fs/ext4/namei.c
-+++ b/fs/ext4/namei.c
-@@ -51,7 +51,7 @@
-
- static struct buffer_head *ext4_append(handle_t *handle,
- struct inode *inode,
-- u32 *block, int *err)
-+ ext4_lblk_t *block, int *err)
- {
- struct buffer_head *bh;
-
-@@ -144,8 +144,8 @@ struct dx_map_entry
- u16 size;
- };
-
--static inline unsigned dx_get_block (struct dx_entry *entry);
--static void dx_set_block (struct dx_entry *entry, unsigned value);
-+static inline ext4_lblk_t dx_get_block(struct dx_entry *entry);
-+static void dx_set_block(struct dx_entry *entry, ext4_lblk_t value);
- static inline unsigned dx_get_hash (struct dx_entry *entry);
- static void dx_set_hash (struct dx_entry *entry, unsigned value);
- static unsigned dx_get_count (struct dx_entry *entries);
-@@ -166,7 +166,8 @@ static void dx_sort_map(struct dx_map_entry *map, unsigned count);
- static struct ext4_dir_entry_2 *dx_move_dirents (char *from, char *to,
- struct dx_map_entry *offsets, int count);
- static struct ext4_dir_entry_2* dx_pack_dirents (char *base, int size);
--static void dx_insert_block (struct dx_frame *frame, u32 hash, u32 block);
-+static void dx_insert_block(struct dx_frame *frame,
-+ u32 hash, ext4_lblk_t block);
- static int ext4_htree_next_block(struct inode *dir, __u32 hash,
- struct dx_frame *frame,
- struct dx_frame *frames,
-@@ -181,12 +182,12 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
- * Mask them off for now.
- */
-
--static inline unsigned dx_get_block (struct dx_entry *entry)
-+static inline ext4_lblk_t dx_get_block(struct dx_entry *entry)
- {
- return le32_to_cpu(entry->block) & 0x00ffffff;
- }
-
--static inline void dx_set_block (struct dx_entry *entry, unsigned value)
-+static inline void dx_set_block(struct dx_entry *entry, ext4_lblk_t value)
- {
- entry->block = cpu_to_le32(value);
- }
-@@ -243,8 +244,8 @@ static void dx_show_index (char * label, struct dx_entry *entries)
- int i, n = dx_get_count (entries);
- printk("%s index ", label);
- for (i = 0; i < n; i++) {
-- printk("%x->%u ", i? dx_get_hash(entries + i) :
-- 0, dx_get_block(entries + i));
-+ printk("%x->%lu ", i? dx_get_hash(entries + i) :
-+ 0, (unsigned long)dx_get_block(entries + i));
- }
- printk("\n");
- }
-@@ -280,7 +281,7 @@ static struct stats dx_show_leaf(struct dx_hash_info *hinfo, struct ext4_dir_ent
- space += EXT4_DIR_REC_LEN(de->name_len);
- names++;
- }
-- de = (struct ext4_dir_entry_2 *) ((char *) de + le16_to_cpu(de->rec_len));
-+ de = ext4_next_entry(de);
- }
- printk("(%i)\n", names);
- return (struct stats) { names, space, 1 };
-@@ -297,7 +298,8 @@ struct stats dx_show_entries(struct dx_hash_info *hinfo, struct inode *dir,
- printk("%i indexed blocks...\n", count);
- for (i = 0; i < count; i++, entries++)
- {
-- u32 block = dx_get_block(entries), hash = i? dx_get_hash(entries): 0;
-+ ext4_lblk_t block = dx_get_block(entries);
-+ ext4_lblk_t hash = i ? dx_get_hash(entries): 0;
- u32 range = i < count - 1? (dx_get_hash(entries + 1) - hash): ~hash;
- struct stats stats;
- printk("%s%3u:%03u hash %8x/%8x ",levels?"":" ", i, block, hash, range);
-@@ -551,7 +553,8 @@ static int ext4_htree_next_block(struct inode *dir, __u32 hash,
- */
- static inline struct ext4_dir_entry_2 *ext4_next_entry(struct ext4_dir_entry_2 *p)
- {
-- return (struct ext4_dir_entry_2 *)((char*)p + le16_to_cpu(p->rec_len));
-+ return (struct ext4_dir_entry_2 *)((char *)p +
-+ ext4_rec_len_from_disk(p->rec_len));
- }
-
- /*
-@@ -560,7 +563,7 @@ static inline struct ext4_dir_entry_2 *ext4_next_entry(struct ext4_dir_entry_2 *
- * into the tree. If there is an error it is returned in err.
- */
- static int htree_dirblock_to_tree(struct file *dir_file,
-- struct inode *dir, int block,
-+ struct inode *dir, ext4_lblk_t block,
- struct dx_hash_info *hinfo,
- __u32 start_hash, __u32 start_minor_hash)
- {
-@@ -568,7 +571,8 @@ static int htree_dirblock_to_tree(struct file *dir_file,
- struct ext4_dir_entry_2 *de, *top;
- int err, count = 0;
-
-- dxtrace(printk("In htree dirblock_to_tree: block %d\n", block));
-+ dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n",
-+ (unsigned long)block));
- if (!(bh = ext4_bread (NULL, dir, block, 0, &err)))
- return err;
-
-@@ -620,9 +624,9 @@ int ext4_htree_fill_tree(struct file *dir_file, __u32 start_hash,
- struct ext4_dir_entry_2 *de;
- struct dx_frame frames[2], *frame;
- struct inode *dir;
-- int block, err;
-+ ext4_lblk_t block;
- int count = 0;
-- int ret;
-+ int ret, err;
- __u32 hashval;
-
- dxtrace(printk("In htree_fill_tree, start hash: %x:%x\n", start_hash,
-@@ -720,7 +724,7 @@ static int dx_make_map (struct ext4_dir_entry_2 *de, int size,
- cond_resched();
- }
- /* XXX: do we need to check rec_len == 0 case? -Chris */
-- de = (struct ext4_dir_entry_2 *) ((char *) de + le16_to_cpu(de->rec_len));
-+ de = ext4_next_entry(de);
- }
- return count;
- }
-@@ -752,7 +756,7 @@ static void dx_sort_map (struct dx_map_entry *map, unsigned count)
- } while(more);
- }
-
--static void dx_insert_block(struct dx_frame *frame, u32 hash, u32 block)
-+static void dx_insert_block(struct dx_frame *frame, u32 hash, ext4_lblk_t block)
- {
- struct dx_entry *entries = frame->entries;
- struct dx_entry *old = frame->at, *new = old + 1;
-@@ -820,7 +824,7 @@ static inline int search_dirblock(struct buffer_head * bh,
- return 1;
- }
- /* prevent looping on a bad block */
-- de_len = le16_to_cpu(de->rec_len);
-+ de_len = ext4_rec_len_from_disk(de->rec_len);
- if (de_len <= 0)
- return -1;
- offset += de_len;
-@@ -847,23 +851,20 @@ static struct buffer_head * ext4_find_entry (struct dentry *dentry,
- struct super_block * sb;
- struct buffer_head * bh_use[NAMEI_RA_SIZE];
- struct buffer_head * bh, *ret = NULL;
-- unsigned long start, block, b;
-+ ext4_lblk_t start, block, b;
- int ra_max = 0; /* Number of bh's in the readahead
- buffer, bh_use[] */
- int ra_ptr = 0; /* Current index into readahead
- buffer */
- int num = 0;
-- int nblocks, i, err;
-+ ext4_lblk_t nblocks;
-+ int i, err;
- struct inode *dir = dentry->d_parent->d_inode;
- int namelen;
-- const u8 *name;
-- unsigned blocksize;
-
- *res_dir = NULL;
- sb = dir->i_sb;
-- blocksize = sb->s_blocksize;
- namelen = dentry->d_name.len;
-- name = dentry->d_name.name;
- if (namelen > EXT4_NAME_LEN)
- return NULL;
- if (is_dx(dir)) {
-@@ -914,7 +915,8 @@ restart:
- if (!buffer_uptodate(bh)) {
- /* read error, skip block & hope for the best */
- ext4_error(sb, __FUNCTION__, "reading directory #%lu "
-- "offset %lu", dir->i_ino, block);
-+ "offset %lu", dir->i_ino,
-+ (unsigned long)block);
- brelse(bh);
- goto next;
- }
-@@ -961,7 +963,7 @@ static struct buffer_head * ext4_dx_find_entry(struct dentry *dentry,
- struct dx_frame frames[2], *frame;
- struct ext4_dir_entry_2 *de, *top;
- struct buffer_head *bh;
-- unsigned long block;
-+ ext4_lblk_t block;
- int retval;
- int namelen = dentry->d_name.len;
- const u8 *name = dentry->d_name.name;
-@@ -1128,7 +1130,7 @@ dx_move_dirents(char *from, char *to, struct dx_map_entry *map, int count)
- rec_len = EXT4_DIR_REC_LEN(de->name_len);
- memcpy (to, de, rec_len);
- ((struct ext4_dir_entry_2 *) to)->rec_len =
-- cpu_to_le16(rec_len);
-+ ext4_rec_len_to_disk(rec_len);
- de->inode = 0;
- map++;
- to += rec_len;
-@@ -1147,13 +1149,12 @@ static struct ext4_dir_entry_2* dx_pack_dirents(char *base, int size)
-
- prev = to = de;
- while ((char*)de < base + size) {
-- next = (struct ext4_dir_entry_2 *) ((char *) de +
-- le16_to_cpu(de->rec_len));
-+ next = ext4_next_entry(de);
- if (de->inode && de->name_len) {
- rec_len = EXT4_DIR_REC_LEN(de->name_len);
- if (de > to)
- memmove(to, de, rec_len);
-- to->rec_len = cpu_to_le16(rec_len);
-+ to->rec_len = ext4_rec_len_to_disk(rec_len);
- prev = to;
- to = (struct ext4_dir_entry_2 *) (((char *) to) + rec_len);
- }
-@@ -1174,7 +1175,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
- unsigned blocksize = dir->i_sb->s_blocksize;
- unsigned count, continued;
- struct buffer_head *bh2;
-- u32 newblock;
-+ ext4_lblk_t newblock;
- u32 hash2;
- struct dx_map_entry *map;
- char *data1 = (*bh)->b_data, *data2;
-@@ -1221,14 +1222,15 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
- split = count - move;
- hash2 = map[split].hash;
- continued = hash2 == map[split - 1].hash;
-- dxtrace(printk("Split block %i at %x, %i/%i\n",
-- dx_get_block(frame->at), hash2, split, count-split));
-+ dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
-+ (unsigned long)dx_get_block(frame->at),
-+ hash2, split, count-split));
-
- /* Fancy dance to stay within two buffers */
- de2 = dx_move_dirents(data1, data2, map + split, count - split);
- de = dx_pack_dirents(data1,blocksize);
-- de->rec_len = cpu_to_le16(data1 + blocksize - (char *) de);
-- de2->rec_len = cpu_to_le16(data2 + blocksize - (char *) de2);
-+ de->rec_len = ext4_rec_len_to_disk(data1 + blocksize - (char *) de);
-+ de2->rec_len = ext4_rec_len_to_disk(data2 + blocksize - (char *) de2);
- dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data1, blocksize, 1));
- dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data2, blocksize, 1));
-
-@@ -1297,7 +1299,7 @@ static int add_dirent_to_buf(handle_t *handle, struct dentry *dentry,
- return -EEXIST;
- }
- nlen = EXT4_DIR_REC_LEN(de->name_len);
-- rlen = le16_to_cpu(de->rec_len);
-+ rlen = ext4_rec_len_from_disk(de->rec_len);
- if ((de->inode? rlen - nlen: rlen) >= reclen)
- break;
- de = (struct ext4_dir_entry_2 *)((char *)de + rlen);
-@@ -1316,11 +1318,11 @@ static int add_dirent_to_buf(handle_t *handle, struct dentry *dentry,
-
- /* By now the buffer is marked for journaling */
- nlen = EXT4_DIR_REC_LEN(de->name_len);
-- rlen = le16_to_cpu(de->rec_len);
-+ rlen = ext4_rec_len_from_disk(de->rec_len);
- if (de->inode) {
- struct ext4_dir_entry_2 *de1 = (struct ext4_dir_entry_2 *)((char *)de + nlen);
-- de1->rec_len = cpu_to_le16(rlen - nlen);
-- de->rec_len = cpu_to_le16(nlen);
-+ de1->rec_len = ext4_rec_len_to_disk(rlen - nlen);
-+ de->rec_len = ext4_rec_len_to_disk(nlen);
- de = de1;
- }
- de->file_type = EXT4_FT_UNKNOWN;
-@@ -1374,7 +1376,7 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
- int retval;
- unsigned blocksize;
- struct dx_hash_info hinfo;
-- u32 block;
-+ ext4_lblk_t block;
- struct fake_dirent *fde;
-
- blocksize = dir->i_sb->s_blocksize;
-@@ -1397,17 +1399,18 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
-
- /* The 0th block becomes the root, move the dirents out */
- fde = &root->dotdot;
-- de = (struct ext4_dir_entry_2 *)((char *)fde + le16_to_cpu(fde->rec_len));
-+ de = (struct ext4_dir_entry_2 *)((char *)fde +
-+ ext4_rec_len_from_disk(fde->rec_len));
- len = ((char *) root) + blocksize - (char *) de;
- memcpy (data1, de, len);
- de = (struct ext4_dir_entry_2 *) data1;
- top = data1 + len;
-- while ((char *)(de2=(void*)de+le16_to_cpu(de->rec_len)) < top)
-+ while ((char *)(de2 = ext4_next_entry(de)) < top)
- de = de2;
-- de->rec_len = cpu_to_le16(data1 + blocksize - (char *) de);
-+ de->rec_len = ext4_rec_len_to_disk(data1 + blocksize - (char *) de);
- /* Initialize the root; the dot dirents already exist */
- de = (struct ext4_dir_entry_2 *) (&root->dotdot);
-- de->rec_len = cpu_to_le16(blocksize - EXT4_DIR_REC_LEN(2));
-+ de->rec_len = ext4_rec_len_to_disk(blocksize - EXT4_DIR_REC_LEN(2));
- memset (&root->info, 0, sizeof(root->info));
- root->info.info_length = sizeof(root->info);
- root->info.hash_version = EXT4_SB(dir->i_sb)->s_def_hash_version;
-@@ -1454,7 +1457,7 @@ static int ext4_add_entry (handle_t *handle, struct dentry *dentry,
- int retval;
- int dx_fallback=0;
- unsigned blocksize;
-- u32 block, blocks;
-+ ext4_lblk_t block, blocks;
-
- sb = dir->i_sb;
- blocksize = sb->s_blocksize;
-@@ -1487,7 +1490,7 @@ static int ext4_add_entry (handle_t *handle, struct dentry *dentry,
- return retval;
- de = (struct ext4_dir_entry_2 *) bh->b_data;
- de->inode = 0;
-- de->rec_len = cpu_to_le16(blocksize);
-+ de->rec_len = ext4_rec_len_to_disk(blocksize);
- return add_dirent_to_buf(handle, dentry, inode, de, bh);
- }
-
-@@ -1531,7 +1534,7 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
- dx_get_count(entries), dx_get_limit(entries)));
- /* Need to split index? */
- if (dx_get_count(entries) == dx_get_limit(entries)) {
-- u32 newblock;
-+ ext4_lblk_t newblock;
- unsigned icount = dx_get_count(entries);
- int levels = frame - frames;
- struct dx_entry *entries2;
-@@ -1550,7 +1553,7 @@ static int ext4_dx_add_entry(handle_t *handle, struct dentry *dentry,
- goto cleanup;
- node2 = (struct dx_node *)(bh2->b_data);
- entries2 = node2->entries;
-- node2->fake.rec_len = cpu_to_le16(sb->s_blocksize);
-+ node2->fake.rec_len = ext4_rec_len_to_disk(sb->s_blocksize);
- node2->fake.inode = 0;
- BUFFER_TRACE(frame->bh, "get_write_access");
- err = ext4_journal_get_write_access(handle, frame->bh);
-@@ -1648,9 +1651,9 @@ static int ext4_delete_entry (handle_t *handle,
- BUFFER_TRACE(bh, "get_write_access");
- ext4_journal_get_write_access(handle, bh);
- if (pde)
-- pde->rec_len =
-- cpu_to_le16(le16_to_cpu(pde->rec_len) +
-- le16_to_cpu(de->rec_len));
-+ pde->rec_len = ext4_rec_len_to_disk(
-+ ext4_rec_len_from_disk(pde->rec_len) +
-+ ext4_rec_len_from_disk(de->rec_len));
- else
- de->inode = 0;
- dir->i_version++;
-@@ -1658,10 +1661,9 @@ static int ext4_delete_entry (handle_t *handle,
- ext4_journal_dirty_metadata(handle, bh);
- return 0;
- }
-- i += le16_to_cpu(de->rec_len);
-+ i += ext4_rec_len_from_disk(de->rec_len);
- pde = de;
-- de = (struct ext4_dir_entry_2 *)
-- ((char *) de + le16_to_cpu(de->rec_len));
-+ de = ext4_next_entry(de);
- }
- return -ENOENT;
- }
-@@ -1824,13 +1826,13 @@ retry:
- de = (struct ext4_dir_entry_2 *) dir_block->b_data;
- de->inode = cpu_to_le32(inode->i_ino);
- de->name_len = 1;
-- de->rec_len = cpu_to_le16(EXT4_DIR_REC_LEN(de->name_len));
-+ de->rec_len = ext4_rec_len_to_disk(EXT4_DIR_REC_LEN(de->name_len));
- strcpy (de->name, ".");
- ext4_set_de_type(dir->i_sb, de, S_IFDIR);
-- de = (struct ext4_dir_entry_2 *)
-- ((char *) de + le16_to_cpu(de->rec_len));
-+ de = ext4_next_entry(de);
- de->inode = cpu_to_le32(dir->i_ino);
-- de->rec_len = cpu_to_le16(inode->i_sb->s_blocksize-EXT4_DIR_REC_LEN(1));
-+ de->rec_len = ext4_rec_len_to_disk(inode->i_sb->s_blocksize -
-+ EXT4_DIR_REC_LEN(1));
- de->name_len = 2;
- strcpy (de->name, "..");
- ext4_set_de_type(dir->i_sb, de, S_IFDIR);
-@@ -1882,8 +1884,7 @@ static int empty_dir (struct inode * inode)
- return 1;
- }
- de = (struct ext4_dir_entry_2 *) bh->b_data;
-- de1 = (struct ext4_dir_entry_2 *)
-- ((char *) de + le16_to_cpu(de->rec_len));
-+ de1 = ext4_next_entry(de);
- if (le32_to_cpu(de->inode) != inode->i_ino ||
- !le32_to_cpu(de1->inode) ||
- strcmp (".", de->name) ||
-@@ -1894,9 +1895,9 @@ static int empty_dir (struct inode * inode)
- brelse (bh);
- return 1;
- }
-- offset = le16_to_cpu(de->rec_len) + le16_to_cpu(de1->rec_len);
-- de = (struct ext4_dir_entry_2 *)
-- ((char *) de1 + le16_to_cpu(de1->rec_len));
-+ offset = ext4_rec_len_from_disk(de->rec_len) +
-+ ext4_rec_len_from_disk(de1->rec_len);
-+ de = ext4_next_entry(de1);
- while (offset < inode->i_size ) {
- if (!bh ||
- (void *) de >= (void *) (bh->b_data+sb->s_blocksize)) {
-@@ -1925,9 +1926,8 @@ static int empty_dir (struct inode * inode)
- brelse (bh);
- return 0;
- }
-- offset += le16_to_cpu(de->rec_len);
-- de = (struct ext4_dir_entry_2 *)
-- ((char *) de + le16_to_cpu(de->rec_len));
-+ offset += ext4_rec_len_from_disk(de->rec_len);
-+ de = ext4_next_entry(de);
- }
- brelse (bh);
- return 1;
-@@ -2282,8 +2282,7 @@ retry:
- }
-
- #define PARENT_INO(buffer) \
-- ((struct ext4_dir_entry_2 *) ((char *) buffer + \
-- le16_to_cpu(((struct ext4_dir_entry_2 *) buffer)->rec_len)))->inode
-+ (ext4_next_entry((struct ext4_dir_entry_2 *)(buffer))->inode)
-
- /*
- * Anybody can rename anything with this: the permission checks are left to the
-diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
-index bd8a52b..4fbba60 100644
---- a/fs/ext4/resize.c
-+++ b/fs/ext4/resize.c
-@@ -28,7 +28,7 @@ static int verify_group_input(struct super_block *sb,
- struct ext4_super_block *es = sbi->s_es;
- ext4_fsblk_t start = ext4_blocks_count(es);
- ext4_fsblk_t end = start + input->blocks_count;
-- unsigned group = input->group;
-+ ext4_group_t group = input->group;
- ext4_fsblk_t itend = input->inode_table + sbi->s_itb_per_group;
- unsigned overhead = ext4_bg_has_super(sb, group) ?
- (1 + ext4_bg_num_gdb(sb, group) +
-@@ -206,7 +206,7 @@ static int setup_new_group_blocks(struct super_block *sb,
- }
-
- if (ext4_bg_has_super(sb, input->group)) {
-- ext4_debug("mark backup superblock %#04lx (+0)\n", start);
-+ ext4_debug("mark backup superblock %#04llx (+0)\n", start);
- ext4_set_bit(0, bh->b_data);
- }
-
-@@ -215,7 +215,7 @@ static int setup_new_group_blocks(struct super_block *sb,
- i < gdblocks; i++, block++, bit++) {
- struct buffer_head *gdb;
-
-- ext4_debug("update backup group %#04lx (+%d)\n", block, bit);
-+ ext4_debug("update backup group %#04llx (+%d)\n", block, bit);
-
- if ((err = extend_or_restart_transaction(handle, 1, bh)))
- goto exit_bh;
-@@ -243,7 +243,7 @@ static int setup_new_group_blocks(struct super_block *sb,
- i < reserved_gdb; i++, block++, bit++) {
- struct buffer_head *gdb;
-
-- ext4_debug("clear reserved block %#04lx (+%d)\n", block, bit);
-+ ext4_debug("clear reserved block %#04llx (+%d)\n", block, bit);
-
- if ((err = extend_or_restart_transaction(handle, 1, bh)))
- goto exit_bh;
-@@ -256,10 +256,10 @@ static int setup_new_group_blocks(struct super_block *sb,
- ext4_set_bit(bit, bh->b_data);
- brelse(gdb);
- }
-- ext4_debug("mark block bitmap %#04x (+%ld)\n", input->block_bitmap,
-+ ext4_debug("mark block bitmap %#04llx (+%llu)\n", input->block_bitmap,
- input->block_bitmap - start);
- ext4_set_bit(input->block_bitmap - start, bh->b_data);
-- ext4_debug("mark inode bitmap %#04x (+%ld)\n", input->inode_bitmap,
-+ ext4_debug("mark inode bitmap %#04llx (+%llu)\n", input->inode_bitmap,
- input->inode_bitmap - start);
- ext4_set_bit(input->inode_bitmap - start, bh->b_data);
-
-@@ -268,7 +268,7 @@ static int setup_new_group_blocks(struct super_block *sb,
- i < sbi->s_itb_per_group; i++, bit++, block++) {
- struct buffer_head *it;
-
-- ext4_debug("clear inode block %#04lx (+%d)\n", block, bit);
-+ ext4_debug("clear inode block %#04llx (+%d)\n", block, bit);
-
- if ((err = extend_or_restart_transaction(handle, 1, bh)))
- goto exit_bh;
-@@ -291,7 +291,7 @@ static int setup_new_group_blocks(struct super_block *sb,
- brelse(bh);
-
- /* Mark unused entries in inode bitmap used */
-- ext4_debug("clear inode bitmap %#04x (+%ld)\n",
-+ ext4_debug("clear inode bitmap %#04llx (+%llu)\n",
- input->inode_bitmap, input->inode_bitmap - start);
- if (IS_ERR(bh = bclean(handle, sb, input->inode_bitmap))) {
- err = PTR_ERR(bh);
-@@ -357,7 +357,7 @@ static int verify_reserved_gdb(struct super_block *sb,
- struct buffer_head *primary)
- {
- const ext4_fsblk_t blk = primary->b_blocknr;
-- const unsigned long end = EXT4_SB(sb)->s_groups_count;
-+ const ext4_group_t end = EXT4_SB(sb)->s_groups_count;
- unsigned three = 1;
- unsigned five = 5;
- unsigned seven = 7;
-@@ -656,12 +656,12 @@ static void update_backups(struct super_block *sb,
- int blk_off, char *data, int size)
- {
- struct ext4_sb_info *sbi = EXT4_SB(sb);
-- const unsigned long last = sbi->s_groups_count;
-+ const ext4_group_t last = sbi->s_groups_count;
- const int bpg = EXT4_BLOCKS_PER_GROUP(sb);
- unsigned three = 1;
- unsigned five = 5;
- unsigned seven = 7;
-- unsigned group;
-+ ext4_group_t group;
- int rest = sb->s_blocksize - size;
- handle_t *handle;
- int err = 0, err2;
-@@ -716,7 +716,7 @@ static void update_backups(struct super_block *sb,
- exit_err:
- if (err) {
- ext4_warning(sb, __FUNCTION__,
-- "can't update backup for group %d (err %d), "
-+ "can't update backup for group %lu (err %d), "
- "forcing fsck on next reboot", group, err);
- sbi->s_mount_state &= ~EXT4_VALID_FS;
- sbi->s_es->s_state &= cpu_to_le16(~EXT4_VALID_FS);
-@@ -952,7 +952,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
- ext4_fsblk_t n_blocks_count)
- {
- ext4_fsblk_t o_blocks_count;
-- unsigned long o_groups_count;
-+ ext4_group_t o_groups_count;
- ext4_grpblk_t last;
- ext4_grpblk_t add;
- struct buffer_head * bh;
-@@ -1054,7 +1054,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
- ext4_journal_dirty_metadata(handle, EXT4_SB(sb)->s_sbh);
- sb->s_dirt = 1;
- unlock_super(sb);
-- ext4_debug("freeing blocks %lu through %llu\n", o_blocks_count,
-+ ext4_debug("freeing blocks %llu through %llu\n", o_blocks_count,
- o_blocks_count + add);
- ext4_free_blocks_sb(handle, sb, o_blocks_count, add, &freed_blocks);
- ext4_debug("freed blocks %llu through %llu\n", o_blocks_count,
-diff --git a/fs/ext4/super.c b/fs/ext4/super.c
-index 1ca0f54..055a0cd 100644
---- a/fs/ext4/super.c
-+++ b/fs/ext4/super.c
-@@ -373,6 +373,66 @@ void ext4_update_dynamic_rev(struct super_block *sb)
- */
- }
-
-+int ext4_update_compat_feature(handle_t *handle,
-+ struct super_block *sb, __u32 compat)
-+{
-+ int err = 0;
-+ if (!EXT4_HAS_COMPAT_FEATURE(sb, compat)) {
-+ err = ext4_journal_get_write_access(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ if (err)
-+ return err;
-+ EXT4_SET_COMPAT_FEATURE(sb, compat);
-+ sb->s_dirt = 1;
-+ handle->h_sync = 1;
-+ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
-+ "call ext4_journal_dirty_met adata");
-+ err = ext4_journal_dirty_metadata(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ }
-+ return err;
-+}
++ ocfs2_inode_unlock(main_bm_inode, 1);
+
-+int ext4_update_rocompat_feature(handle_t *handle,
-+ struct super_block *sb, __u32 rocompat)
-+{
-+ int err = 0;
-+ if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, rocompat)) {
-+ err = ext4_journal_get_write_access(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ if (err)
-+ return err;
-+ EXT4_SET_RO_COMPAT_FEATURE(sb, rocompat);
-+ sb->s_dirt = 1;
-+ handle->h_sync = 1;
-+ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
-+ "call ext4_journal_dirty_met adata");
-+ err = ext4_journal_dirty_metadata(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ }
-+ return err;
++out_mutex:
++ mutex_unlock(&main_bm_inode->i_mutex);
++ iput(main_bm_inode);
++
++out:
++ mlog_exit_void();
++ return ret;
+}
+diff --git a/fs/ocfs2/resize.h b/fs/ocfs2/resize.h
+new file mode 100644
+index 0000000..f38841a
+--- /dev/null
++++ b/fs/ocfs2/resize.h
+@@ -0,0 +1,32 @@
++/* -*- mode: c; c-basic-offset: 8; -*-
++ * vim: noexpandtab sw=8 ts=8 sts=0:
++ *
++ * resize.h
++ *
++ * Function prototypes
++ *
++ * Copyright (C) 2007 Oracle. All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public
++ * License as published by the Free Software Foundation; either
++ * version 2 of the License, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public
++ * License along with this program; if not, write to the
++ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
++ * Boston, MA 021110-1307, USA.
++ */
+
-+int ext4_update_incompat_feature(handle_t *handle,
-+ struct super_block *sb, __u32 incompat)
-+{
-+ int err = 0;
-+ if (!EXT4_HAS_INCOMPAT_FEATURE(sb, incompat)) {
-+ err = ext4_journal_get_write_access(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ if (err)
-+ return err;
-+ EXT4_SET_INCOMPAT_FEATURE(sb, incompat);
-+ sb->s_dirt = 1;
-+ handle->h_sync = 1;
-+ BUFFER_TRACE(EXT4_SB(sb)->s_sbh,
-+ "call ext4_journal_dirty_met adata");
-+ err = ext4_journal_dirty_metadata(handle,
-+ EXT4_SB(sb)->s_sbh);
-+ }
-+ return err;
-+}
++#ifndef OCFS2_RESIZE_H
++#define OCFS2_RESIZE_H
+
- /*
- * Open the external journal device
- */
-@@ -443,6 +503,7 @@ static void ext4_put_super (struct super_block * sb)
- struct ext4_super_block *es = sbi->s_es;
- int i;
++int ocfs2_group_extend(struct inode * inode, int new_clusters);
++int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input);
++
++#endif /* OCFS2_RESIZE_H */
+diff --git a/fs/ocfs2/slot_map.c b/fs/ocfs2/slot_map.c
+index af4882b..3a50ce5 100644
+--- a/fs/ocfs2/slot_map.c
++++ b/fs/ocfs2/slot_map.c
+@@ -48,25 +48,6 @@ static void __ocfs2_fill_slot(struct ocfs2_slot_info *si,
+ s16 slot_num,
+ s16 node_num);
-+ ext4_mb_release(sb);
- ext4_ext_release(sb);
- ext4_xattr_put_super(sb);
- jbd2_journal_destroy(sbi->s_journal);
-@@ -509,6 +570,8 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
- ei->i_block_alloc_info = NULL;
- ei->vfs_inode.i_version = 1;
- memset(&ei->i_cached_extent, 0, sizeof(struct ext4_ext_cache));
-+ INIT_LIST_HEAD(&ei->i_prealloc_list);
-+ spin_lock_init(&ei->i_prealloc_lock);
- return &ei->vfs_inode;
- }
+-/* Use the slot information we've collected to create a map of mounted
+- * nodes. Should be holding an EX on super block. assumes slot info is
+- * up to date. Note that we call this *after* we find a slot, so our
+- * own node should be set in the map too... */
+-void ocfs2_populate_mounted_map(struct ocfs2_super *osb)
+-{
+- int i;
+- struct ocfs2_slot_info *si = osb->slot_info;
+-
+- spin_lock(&si->si_lock);
+-
+- for (i = 0; i < si->si_size; i++)
+- if (si->si_global_node_nums[i] != OCFS2_INVALID_SLOT)
+- ocfs2_node_map_set_bit(osb, &osb->mounted_map,
+- si->si_global_node_nums[i]);
+-
+- spin_unlock(&si->si_lock);
+-}
+-
+ /* post the slot information on disk into our slot_info struct. */
+ void ocfs2_update_slot_info(struct ocfs2_slot_info *si)
+ {
+diff --git a/fs/ocfs2/slot_map.h b/fs/ocfs2/slot_map.h
+index d8c8cee..1025872 100644
+--- a/fs/ocfs2/slot_map.h
++++ b/fs/ocfs2/slot_map.h
+@@ -52,8 +52,6 @@ s16 ocfs2_node_num_to_slot(struct ocfs2_slot_info *si,
+ void ocfs2_clear_slot(struct ocfs2_slot_info *si,
+ s16 slot_num);
-@@ -533,7 +596,7 @@ static void init_once(struct kmem_cache *cachep, void *foo)
- #ifdef CONFIG_EXT4DEV_FS_XATTR
- init_rwsem(&ei->xattr_sem);
- #endif
-- mutex_init(&ei->truncate_mutex);
-+ init_rwsem(&ei->i_data_sem);
- inode_init_once(&ei->vfs_inode);
+-void ocfs2_populate_mounted_map(struct ocfs2_super *osb);
+-
+ static inline int ocfs2_is_empty_slot(struct ocfs2_slot_info *si,
+ int slot_num)
+ {
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 8f09f52..7e397e2 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -101,8 +101,6 @@ static inline int ocfs2_block_group_reasonably_empty(struct ocfs2_group_desc *bg
+ static inline u32 ocfs2_desc_bitmap_to_cluster_off(struct inode *inode,
+ u64 bg_blkno,
+ u16 bg_bit_off);
+-static inline u64 ocfs2_which_cluster_group(struct inode *inode,
+- u32 cluster);
+ static inline void ocfs2_block_to_cluster_group(struct inode *inode,
+ u64 data_blkno,
+ u64 *bg_blkno,
+@@ -114,7 +112,7 @@ void ocfs2_free_alloc_context(struct ocfs2_alloc_context *ac)
+
+ if (inode) {
+ if (ac->ac_which != OCFS2_AC_USE_LOCAL)
+- ocfs2_meta_unlock(inode, 1);
++ ocfs2_inode_unlock(inode, 1);
+
+ mutex_unlock(&inode->i_mutex);
+
+@@ -131,9 +129,9 @@ static u32 ocfs2_bits_per_group(struct ocfs2_chain_list *cl)
}
-@@ -605,18 +668,20 @@ static inline void ext4_show_quota_options(struct seq_file *seq, struct super_bl
- */
- static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
+ /* somewhat more expensive than our other checks, so use sparingly. */
+-static int ocfs2_check_group_descriptor(struct super_block *sb,
+- struct ocfs2_dinode *di,
+- struct ocfs2_group_desc *gd)
++int ocfs2_check_group_descriptor(struct super_block *sb,
++ struct ocfs2_dinode *di,
++ struct ocfs2_group_desc *gd)
{
-+ int def_errors;
-+ unsigned long def_mount_opts;
- struct super_block *sb = vfs->mnt_sb;
- struct ext4_sb_info *sbi = EXT4_SB(sb);
- struct ext4_super_block *es = sbi->s_es;
-- unsigned long def_mount_opts;
+ unsigned int max_bits;
- def_mount_opts = le32_to_cpu(es->s_default_mount_opts);
-+ def_errors = le16_to_cpu(es->s_errors);
+@@ -412,7 +410,7 @@ static int ocfs2_reserve_suballoc_bits(struct ocfs2_super *osb,
- if (sbi->s_sb_block != 1)
- seq_printf(seq, ",sb=%llu", sbi->s_sb_block);
- if (test_opt(sb, MINIX_DF))
- seq_puts(seq, ",minixdf");
-- if (test_opt(sb, GRPID))
-+ if (test_opt(sb, GRPID) && !(def_mount_opts & EXT4_DEFM_BSDGROUPS))
- seq_puts(seq, ",grpid");
- if (!test_opt(sb, GRPID) && (def_mount_opts & EXT4_DEFM_BSDGROUPS))
- seq_puts(seq, ",nogrpid");
-@@ -628,34 +693,33 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
- le16_to_cpu(es->s_def_resgid) != EXT4_DEF_RESGID) {
- seq_printf(seq, ",resgid=%u", sbi->s_resgid);
- }
-- if (test_opt(sb, ERRORS_CONT)) {
-- int def_errors = le16_to_cpu(es->s_errors);
--
-+ if (test_opt(sb, ERRORS_RO)) {
- if (def_errors == EXT4_ERRORS_PANIC ||
-- def_errors == EXT4_ERRORS_RO) {
-- seq_puts(seq, ",errors=continue");
-+ def_errors == EXT4_ERRORS_CONTINUE) {
-+ seq_puts(seq, ",errors=remount-ro");
+ mutex_lock(&alloc_inode->i_mutex);
+
+- status = ocfs2_meta_lock(alloc_inode, &bh, 1);
++ status = ocfs2_inode_lock(alloc_inode, &bh, 1);
+ if (status < 0) {
+ mutex_unlock(&alloc_inode->i_mutex);
+ iput(alloc_inode);
+@@ -1443,8 +1441,7 @@ static inline u32 ocfs2_desc_bitmap_to_cluster_off(struct inode *inode,
+
+ /* given a cluster offset, calculate which block group it belongs to
+ * and return that block offset. */
+-static inline u64 ocfs2_which_cluster_group(struct inode *inode,
+- u32 cluster)
++u64 ocfs2_which_cluster_group(struct inode *inode, u32 cluster)
+ {
+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ u32 group_no;
+@@ -1519,8 +1516,9 @@ int __ocfs2_claim_clusters(struct ocfs2_super *osb,
+ if (min_clusters > (osb->bitmap_cpg - 1)) {
+ /* The only paths asking for contiguousness
+ * should know about this already. */
+- mlog(ML_ERROR, "minimum allocation requested exceeds "
+- "group bitmap size!");
++ mlog(ML_ERROR, "minimum allocation requested %u exceeds "
++ "group bitmap size %u!\n", min_clusters,
++ osb->bitmap_cpg);
+ status = -ENOSPC;
+ goto bail;
}
- }
-- if (test_opt(sb, ERRORS_RO))
-- seq_puts(seq, ",errors=remount-ro");
-- if (test_opt(sb, ERRORS_PANIC))
-+ if (test_opt(sb, ERRORS_CONT) && def_errors != EXT4_ERRORS_CONTINUE)
-+ seq_puts(seq, ",errors=continue");
-+ if (test_opt(sb, ERRORS_PANIC) && def_errors != EXT4_ERRORS_PANIC)
- seq_puts(seq, ",errors=panic");
-- if (test_opt(sb, NO_UID32))
-+ if (test_opt(sb, NO_UID32) && !(def_mount_opts & EXT4_DEFM_UID16))
- seq_puts(seq, ",nouid32");
-- if (test_opt(sb, DEBUG))
-+ if (test_opt(sb, DEBUG) && !(def_mount_opts & EXT4_DEFM_DEBUG))
- seq_puts(seq, ",debug");
- if (test_opt(sb, OLDALLOC))
- seq_puts(seq, ",oldalloc");
--#ifdef CONFIG_EXT4_FS_XATTR
-- if (test_opt(sb, XATTR_USER))
-+#ifdef CONFIG_EXT4DEV_FS_XATTR
-+ if (test_opt(sb, XATTR_USER) &&
-+ !(def_mount_opts & EXT4_DEFM_XATTR_USER))
- seq_puts(seq, ",user_xattr");
- if (!test_opt(sb, XATTR_USER) &&
- (def_mount_opts & EXT4_DEFM_XATTR_USER)) {
- seq_puts(seq, ",nouser_xattr");
- }
- #endif
--#ifdef CONFIG_EXT4_FS_POSIX_ACL
-- if (test_opt(sb, POSIX_ACL))
-+#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
-+ if (test_opt(sb, POSIX_ACL) && !(def_mount_opts & EXT4_DEFM_ACL))
- seq_puts(seq, ",acl");
- if (!test_opt(sb, POSIX_ACL) && (def_mount_opts & EXT4_DEFM_ACL))
- seq_puts(seq, ",noacl");
-@@ -672,7 +736,17 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
- seq_puts(seq, ",nobh");
- if (!test_opt(sb, EXTENTS))
- seq_puts(seq, ",noextents");
-+ if (!test_opt(sb, MBALLOC))
-+ seq_puts(seq, ",nomballoc");
-+ if (test_opt(sb, I_VERSION))
-+ seq_puts(seq, ",i_version");
+diff --git a/fs/ocfs2/suballoc.h b/fs/ocfs2/suballoc.h
+index cafe937..8799033 100644
+--- a/fs/ocfs2/suballoc.h
++++ b/fs/ocfs2/suballoc.h
+@@ -147,4 +147,12 @@ static inline int ocfs2_is_cluster_bitmap(struct inode *inode)
+ int ocfs2_reserve_cluster_bitmap_bits(struct ocfs2_super *osb,
+ struct ocfs2_alloc_context *ac);
-+ if (sbi->s_stripe)
-+ seq_printf(seq, ",stripe=%lu", sbi->s_stripe);
-+ /*
-+ * journal mode get enabled in different ways
-+ * So just print the value even if we didn't specify it
-+ */
- if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
- seq_puts(seq, ",data=journal");
- else if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA)
-@@ -681,7 +755,6 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs)
- seq_puts(seq, ",data=writeback");
++/* given a cluster offset, calculate which block group it belongs to
++ * and return that block offset. */
++u64 ocfs2_which_cluster_group(struct inode *inode, u32 cluster);
++
++/* somewhat more expensive than our other checks, so use sparingly. */
++int ocfs2_check_group_descriptor(struct super_block *sb,
++ struct ocfs2_dinode *di,
++ struct ocfs2_group_desc *gd);
+ #endif /* _CHAINALLOC_H_ */
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 5ee7754..01fe40e 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -65,7 +65,6 @@
+ #include "sysfile.h"
+ #include "uptodate.h"
+ #include "ver.h"
+-#include "vote.h"
- ext4_show_quota_options(seq, sb);
--
- return 0;
- }
+ #include "buffer_head_io.h"
-@@ -809,11 +882,13 @@ enum {
- Opt_user_xattr, Opt_nouser_xattr, Opt_acl, Opt_noacl,
- Opt_reservation, Opt_noreservation, Opt_noload, Opt_nobh, Opt_bh,
- Opt_commit, Opt_journal_update, Opt_journal_inum, Opt_journal_dev,
-+ Opt_journal_checksum, Opt_journal_async_commit,
- Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
- Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
- Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_quota, Opt_noquota,
- Opt_ignore, Opt_barrier, Opt_err, Opt_resize, Opt_usrquota,
-- Opt_grpquota, Opt_extents, Opt_noextents,
-+ Opt_grpquota, Opt_extents, Opt_noextents, Opt_i_version,
-+ Opt_mballoc, Opt_nomballoc, Opt_stripe,
+@@ -84,9 +83,11 @@ MODULE_LICENSE("GPL");
+
+ struct mount_options
+ {
++ unsigned long commit_interval;
+ unsigned long mount_opt;
+ unsigned int atime_quantum;
+ signed short slot;
++ unsigned int localalloc_opt;
};
- static match_table_t tokens = {
-@@ -848,6 +923,8 @@ static match_table_t tokens = {
- {Opt_journal_update, "journal=update"},
- {Opt_journal_inum, "journal=%u"},
- {Opt_journal_dev, "journal_dev=%u"},
-+ {Opt_journal_checksum, "journal_checksum"},
-+ {Opt_journal_async_commit, "journal_async_commit"},
- {Opt_abort, "abort"},
- {Opt_data_journal, "data=journal"},
- {Opt_data_ordered, "data=ordered"},
-@@ -865,6 +942,10 @@ static match_table_t tokens = {
- {Opt_barrier, "barrier=%u"},
- {Opt_extents, "extents"},
- {Opt_noextents, "noextents"},
-+ {Opt_i_version, "i_version"},
-+ {Opt_mballoc, "mballoc"},
-+ {Opt_nomballoc, "nomballoc"},
-+ {Opt_stripe, "stripe=%u"},
- {Opt_err, NULL},
- {Opt_resize, "resize"},
+ static int ocfs2_parse_options(struct super_block *sb, char *options,
+@@ -150,6 +151,9 @@ enum {
+ Opt_data_writeback,
+ Opt_atime_quantum,
+ Opt_slot,
++ Opt_commit,
++ Opt_localalloc,
++ Opt_localflocks,
+ Opt_err,
};
-@@ -1035,6 +1116,13 @@ static int parse_options (char *options, struct super_block *sb,
- return 0;
- *journal_devnum = option;
- break;
-+ case Opt_journal_checksum:
-+ set_opt(sbi->s_mount_opt, JOURNAL_CHECKSUM);
-+ break;
-+ case Opt_journal_async_commit:
-+ set_opt(sbi->s_mount_opt, JOURNAL_ASYNC_COMMIT);
-+ set_opt(sbi->s_mount_opt, JOURNAL_CHECKSUM);
-+ break;
- case Opt_noload:
- set_opt (sbi->s_mount_opt, NOLOAD);
- break;
-@@ -1203,6 +1291,23 @@ clear_qf_name:
- case Opt_noextents:
- clear_opt (sbi->s_mount_opt, EXTENTS);
+
+@@ -165,6 +169,9 @@ static match_table_t tokens = {
+ {Opt_data_writeback, "data=writeback"},
+ {Opt_atime_quantum, "atime_quantum=%u"},
+ {Opt_slot, "preferred_slot=%u"},
++ {Opt_commit, "commit=%u"},
++ {Opt_localalloc, "localalloc=%d"},
++ {Opt_localflocks, "localflocks"},
+ {Opt_err, NULL}
+ };
+
+@@ -213,7 +220,7 @@ static int ocfs2_init_global_system_inodes(struct ocfs2_super *osb)
+
+ mlog_entry_void();
+
+- new = ocfs2_iget(osb, osb->root_blkno, OCFS2_FI_FLAG_SYSFILE);
++ new = ocfs2_iget(osb, osb->root_blkno, OCFS2_FI_FLAG_SYSFILE, 0);
+ if (IS_ERR(new)) {
+ status = PTR_ERR(new);
+ mlog_errno(status);
+@@ -221,7 +228,7 @@ static int ocfs2_init_global_system_inodes(struct ocfs2_super *osb)
+ }
+ osb->root_inode = new;
+
+- new = ocfs2_iget(osb, osb->system_dir_blkno, OCFS2_FI_FLAG_SYSFILE);
++ new = ocfs2_iget(osb, osb->system_dir_blkno, OCFS2_FI_FLAG_SYSFILE, 0);
+ if (IS_ERR(new)) {
+ status = PTR_ERR(new);
+ mlog_errno(status);
+@@ -443,6 +450,8 @@ unlock_osb:
+ osb->s_mount_opt = parsed_options.mount_opt;
+ osb->s_atime_quantum = parsed_options.atime_quantum;
+ osb->preferred_slot = parsed_options.slot;
++ if (parsed_options.commit_interval)
++ osb->osb_commit_interval = parsed_options.commit_interval;
+
+ if (!ocfs2_is_hard_readonly(osb))
+ ocfs2_set_journal_params(osb);
+@@ -597,6 +606,8 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
+ osb->s_mount_opt = parsed_options.mount_opt;
+ osb->s_atime_quantum = parsed_options.atime_quantum;
+ osb->preferred_slot = parsed_options.slot;
++ osb->osb_commit_interval = parsed_options.commit_interval;
++ osb->local_alloc_size = parsed_options.localalloc_opt;
+
+ sb->s_magic = OCFS2_SUPER_MAGIC;
+
+@@ -747,9 +758,11 @@ static int ocfs2_parse_options(struct super_block *sb,
+ mlog_entry("remount: %d, options: \"%s\"\n", is_remount,
+ options ? options : "(none)");
+
++ mopt->commit_interval = 0;
+ mopt->mount_opt = 0;
+ mopt->atime_quantum = OCFS2_DEFAULT_ATIME_QUANTUM;
+ mopt->slot = OCFS2_INVALID_SLOT;
++ mopt->localalloc_opt = OCFS2_DEFAULT_LOCAL_ALLOC_SIZE;
+
+ if (!options) {
+ status = 1;
+@@ -816,6 +829,41 @@ static int ocfs2_parse_options(struct super_block *sb,
+ if (option)
+ mopt->slot = (s16)option;
break;
-+ case Opt_i_version:
-+ set_opt(sbi->s_mount_opt, I_VERSION);
-+ sb->s_flags |= MS_I_VERSION;
-+ break;
-+ case Opt_mballoc:
-+ set_opt(sbi->s_mount_opt, MBALLOC);
-+ break;
-+ case Opt_nomballoc:
-+ clear_opt(sbi->s_mount_opt, MBALLOC);
-+ break;
-+ case Opt_stripe:
-+ if (match_int(&args[0], &option))
-+ return 0;
++ case Opt_commit:
++ option = 0;
++ if (match_int(&args[0], &option)) {
++ status = 0;
++ goto bail;
++ }
+ if (option < 0)
+ return 0;
-+ sbi->s_stripe = option;
++ if (option == 0)
++ option = JBD_DEFAULT_MAX_COMMIT_AGE;
++ mopt->commit_interval = HZ * option;
++ break;
++ case Opt_localalloc:
++ option = 0;
++ if (match_int(&args[0], &option)) {
++ status = 0;
++ goto bail;
++ }
++ if (option >= 0 && (option <= ocfs2_local_alloc_size(sb) * 8))
++ mopt->localalloc_opt = option;
++ break;
++ case Opt_localflocks:
++ /*
++ * Changing this during remount could race
++ * flock() requests, or "unbalance" existing
++ * ones (e.g., a lock is taken in one mode but
++ * dropped in the other). If users care enough
++ * to flip locking modes during remount, we
++ * could add a "local" flag to individual
++ * flock structures for proper tracking of
++ * state.
++ */
++ if (!is_remount)
++ mopt->mount_opt |= OCFS2_MOUNT_LOCALFLOCKS;
+ break;
default:
- printk (KERN_ERR
- "EXT4-fs: Unrecognized mount option \"%s\" "
-@@ -1364,7 +1469,7 @@ static int ext4_check_descriptors (struct super_block * sb)
- struct ext4_group_desc * gdp = NULL;
- int desc_block = 0;
- int flexbg_flag = 0;
-- int i;
-+ ext4_group_t i;
-
- if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG))
- flexbg_flag = 1;
-@@ -1386,7 +1491,7 @@ static int ext4_check_descriptors (struct super_block * sb)
- if (block_bitmap < first_block || block_bitmap > last_block)
- {
- ext4_error (sb, "ext4_check_descriptors",
-- "Block bitmap for group %d"
-+ "Block bitmap for group %lu"
- " not in group (block %llu)!",
- i, block_bitmap);
- return 0;
-@@ -1395,7 +1500,7 @@ static int ext4_check_descriptors (struct super_block * sb)
- if (inode_bitmap < first_block || inode_bitmap > last_block)
- {
- ext4_error (sb, "ext4_check_descriptors",
-- "Inode bitmap for group %d"
-+ "Inode bitmap for group %lu"
- " not in group (block %llu)!",
- i, inode_bitmap);
- return 0;
-@@ -1405,17 +1510,16 @@ static int ext4_check_descriptors (struct super_block * sb)
- inode_table + sbi->s_itb_per_group - 1 > last_block)
- {
- ext4_error (sb, "ext4_check_descriptors",
-- "Inode table for group %d"
-+ "Inode table for group %lu"
- " not in group (block %llu)!",
- i, inode_table);
- return 0;
- }
- if (!ext4_group_desc_csum_verify(sbi, i, gdp)) {
- ext4_error(sb, __FUNCTION__,
-- "Checksum for group %d failed (%u!=%u)\n", i,
-- le16_to_cpu(ext4_group_desc_csum(sbi, i,
-- gdp)),
-- le16_to_cpu(gdp->bg_checksum));
-+ "Checksum for group %lu failed (%u!=%u)\n",
-+ i, le16_to_cpu(ext4_group_desc_csum(sbi, i,
-+ gdp)), le16_to_cpu(gdp->bg_checksum));
- return 0;
- }
- if (!flexbg_flag)
-@@ -1429,7 +1533,6 @@ static int ext4_check_descriptors (struct super_block * sb)
- return 1;
- }
-
--
- /* ext4_orphan_cleanup() walks a singly-linked list of inodes (starting at
- * the superblock) which were deleted from all directories, but held open by
- * a process at the time of a crash. We walk the list and try to delete these
-@@ -1542,20 +1645,95 @@ static void ext4_orphan_cleanup (struct super_block * sb,
- #endif
- sb->s_flags = s_flags; /* Restore MS_RDONLY status */
- }
-+/*
-+ * Maximal extent format file size.
-+ * Resulting logical blkno at s_maxbytes must fit in our on-disk
-+ * extent format containers, within a sector_t, and within i_blocks
-+ * in the vfs. ext4 inode has 48 bits of i_block in fsblock units,
-+ * so that won't be a limiting factor.
-+ *
-+ * Note, this does *not* consider any metadata overhead for vfs i_blocks.
-+ */
-+static loff_t ext4_max_size(int blkbits)
-+{
-+ loff_t res;
-+ loff_t upper_limit = MAX_LFS_FILESIZE;
-+
-+ /* small i_blocks in vfs inode? */
-+ if (sizeof(blkcnt_t) < sizeof(u64)) {
-+ /*
-+ * CONFIG_LSF is not enabled implies the inode
-+ * i_block represent total blocks in 512 bytes
-+ * 32 == size of vfs inode i_blocks * 8
-+ */
-+ upper_limit = (1LL << 32) - 1;
-+
-+ /* total blocks in file system block size */
-+ upper_limit >>= (blkbits - 9);
-+ upper_limit <<= blkbits;
-+ }
-+
-+ /* 32-bit extent-start container, ee_block */
-+ res = 1LL << 32;
-+ res <<= blkbits;
-+ res -= 1;
-+
-+ /* Sanity check against vm- & vfs- imposed limits */
-+ if (res > upper_limit)
-+ res = upper_limit;
-+
-+ return res;
-+}
+ mlog(ML_ERROR,
+ "Unrecognized mount option \"%s\" "
+@@ -864,6 +912,16 @@ static int ocfs2_show_options(struct seq_file *s, struct vfsmount *mnt)
+ if (osb->s_atime_quantum != OCFS2_DEFAULT_ATIME_QUANTUM)
+ seq_printf(s, ",atime_quantum=%u", osb->s_atime_quantum);
- /*
-- * Maximal file size. There is a direct, and {,double-,triple-}indirect
-- * block limit, and also a limit of (2^32 - 1) 512-byte sectors in i_blocks.
-- * We need to be 1 filesystem block less than the 2^32 sector limit.
-+ * Maximal bitmap file size. There is a direct, and {,double-,triple-}indirect
-+ * block limit, and also a limit of (2^48 - 1) 512-byte sectors in i_blocks.
-+ * We need to be 1 filesystem block less than the 2^48 sector limit.
- */
--static loff_t ext4_max_size(int bits)
-+static loff_t ext4_max_bitmap_size(int bits)
- {
- loff_t res = EXT4_NDIR_BLOCKS;
-- /* This constant is calculated to be the largest file size for a
-- * dense, 4k-blocksize file such that the total number of
-+ int meta_blocks;
-+ loff_t upper_limit;
-+ /* This is calculated to be the largest file size for a
-+ * dense, bitmapped file such that the total number of
- * sectors in the file, including data and all indirect blocks,
-- * does not exceed 2^32. */
-- const loff_t upper_limit = 0x1ff7fffd000LL;
-+ * does not exceed 2^48 -1
-+ * __u32 i_blocks_lo and _u16 i_blocks_high representing the
-+ * total number of 512 bytes blocks of the file
-+ */
-+
-+ if (sizeof(blkcnt_t) < sizeof(u64)) {
-+ /*
-+ * CONFIG_LSF is not enabled implies the inode
-+ * i_block represent total blocks in 512 bytes
-+ * 32 == size of vfs inode i_blocks * 8
-+ */
-+ upper_limit = (1LL << 32) - 1;
-+
-+ /* total blocks in file system block size */
-+ upper_limit >>= (bits - 9);
-+
-+ } else {
-+ /*
-+ * We use 48 bit ext4_inode i_blocks
-+ * With EXT4_HUGE_FILE_FL set the i_blocks
-+ * represent total number of blocks in
-+ * file system block size
-+ */
-+ upper_limit = (1LL << 48) - 1;
-+
-+ }
-+
-+ /* indirect blocks */
-+ meta_blocks = 1;
-+ /* double indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2));
-+ /* tripple indirect blocks */
-+ meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
++ if (osb->osb_commit_interval)
++ seq_printf(s, ",commit=%u",
++ (unsigned) (osb->osb_commit_interval / HZ));
+
-+ upper_limit -= meta_blocks;
-+ upper_limit <<= bits;
-
- res += 1LL << (bits-2);
- res += 1LL << (2*(bits-2));
-@@ -1563,6 +1741,10 @@ static loff_t ext4_max_size(int bits)
- res <<= bits;
- if (res > upper_limit)
- res = upper_limit;
++ if (osb->local_alloc_size != OCFS2_DEFAULT_LOCAL_ALLOC_SIZE)
++ seq_printf(s, ",localalloc=%d", osb->local_alloc_size);
+
-+ if (res > MAX_LFS_FILESIZE)
-+ res = MAX_LFS_FILESIZE;
++ if (opts & OCFS2_MOUNT_LOCALFLOCKS)
++ seq_printf(s, ",localflocks,");
+
- return res;
+ return 0;
}
-@@ -1570,7 +1752,7 @@ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
- ext4_fsblk_t logical_sb_block, int nr)
- {
- struct ext4_sb_info *sbi = EXT4_SB(sb);
-- unsigned long bg, first_meta_bg;
-+ ext4_group_t bg, first_meta_bg;
- int has_super = 0;
-
- first_meta_bg = le32_to_cpu(sbi->s_es->s_first_meta_bg);
-@@ -1584,8 +1766,39 @@ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
- return (has_super + ext4_group_first_block_no(sb, bg));
- }
+@@ -965,7 +1023,7 @@ static int ocfs2_statfs(struct dentry *dentry, struct kstatfs *buf)
+ goto bail;
+ }
-+/**
-+ * ext4_get_stripe_size: Get the stripe size.
-+ * @sbi: In memory super block info
-+ *
-+ * If we have specified it via mount option, then
-+ * use the mount option value. If the value specified at mount time is
-+ * greater than the blocks per group use the super block value.
-+ * If the super block value is greater than blocks per group return 0.
-+ * Allocator needs it be less than blocks per group.
-+ *
-+ */
-+static unsigned long ext4_get_stripe_size(struct ext4_sb_info *sbi)
-+{
-+ unsigned long stride = le16_to_cpu(sbi->s_es->s_raid_stride);
-+ unsigned long stripe_width =
-+ le32_to_cpu(sbi->s_es->s_raid_stripe_width);
-+
-+ if (sbi->s_stripe && sbi->s_stripe <= sbi->s_blocks_per_group)
-+ return sbi->s_stripe;
-+
-+ if (stripe_width <= sbi->s_blocks_per_group)
-+ return stripe_width;
-+
-+ if (stride <= sbi->s_blocks_per_group)
-+ return stride;
-+
-+ return 0;
-+}
+- status = ocfs2_meta_lock(inode, &bh, 0);
++ status = ocfs2_inode_lock(inode, &bh, 0);
+ if (status < 0) {
+ mlog_errno(status);
+ goto bail;
+@@ -989,7 +1047,7 @@ static int ocfs2_statfs(struct dentry *dentry, struct kstatfs *buf)
- static int ext4_fill_super (struct super_block *sb, void *data, int silent)
-+ __releases(kernel_sem)
-+ __acquires(kernel_sem)
-+
- {
- struct buffer_head * bh;
- struct ext4_super_block *es = NULL;
-@@ -1599,7 +1812,6 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- unsigned long def_mount_opts;
- struct inode *root;
- int blocksize;
-- int hblock;
- int db_count;
- int i;
- int needs_recovery;
-@@ -1624,6 +1836,11 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- goto out_fail;
- }
+ brelse(bh);
-+ if (!sb_set_blocksize(sb, blocksize)) {
-+ printk(KERN_ERR "EXT4-fs: bad blocksize %d.\n", blocksize);
-+ goto out_fail;
-+ }
-+
- /*
- * The ext4 superblock will not be buffer aligned for other than 1kB
- * block sizes. We need to calculate the offset from buffer start.
-@@ -1674,10 +1891,10 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+- ocfs2_meta_unlock(inode, 0);
++ ocfs2_inode_unlock(inode, 0);
+ status = 0;
+ bail:
+ if (inode)
+@@ -1020,8 +1078,7 @@ static void ocfs2_inode_init_once(struct kmem_cache *cachep, void *data)
+ oi->ip_clusters = 0;
- if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_PANIC)
- set_opt(sbi->s_mount_opt, ERRORS_PANIC);
-- else if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_RO)
-- set_opt(sbi->s_mount_opt, ERRORS_RO);
-- else
-+ else if (le16_to_cpu(sbi->s_es->s_errors) == EXT4_ERRORS_CONTINUE)
- set_opt(sbi->s_mount_opt, ERRORS_CONT);
-+ else
-+ set_opt(sbi->s_mount_opt, ERRORS_RO);
+ ocfs2_lock_res_init_once(&oi->ip_rw_lockres);
+- ocfs2_lock_res_init_once(&oi->ip_meta_lockres);
+- ocfs2_lock_res_init_once(&oi->ip_data_lockres);
++ ocfs2_lock_res_init_once(&oi->ip_inode_lockres);
+ ocfs2_lock_res_init_once(&oi->ip_open_lockres);
- sbi->s_resuid = le16_to_cpu(es->s_def_resuid);
- sbi->s_resgid = le16_to_cpu(es->s_def_resgid);
-@@ -1689,6 +1906,11 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- * User -o noextents to turn it off
- */
- set_opt(sbi->s_mount_opt, EXTENTS);
-+ /*
-+ * turn on mballoc feature by default in ext4 filesystem
-+ * User -o nomballoc to turn it off
-+ */
-+ set_opt(sbi->s_mount_opt, MBALLOC);
+ ocfs2_metadata_cache_init(&oi->vfs_inode);
+@@ -1117,25 +1174,12 @@ static int ocfs2_mount_volume(struct super_block *sb)
+ goto leave;
+ }
- if (!parse_options ((char *) data, sb, &journal_inum, &journal_devnum,
- NULL, 0))
-@@ -1723,6 +1945,19 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- sb->s_id, le32_to_cpu(features));
- goto failed_mount;
+- status = ocfs2_register_hb_callbacks(osb);
+- if (status < 0) {
+- mlog_errno(status);
+- goto leave;
+- }
+-
+ status = ocfs2_dlm_init(osb);
+ if (status < 0) {
+ mlog_errno(status);
+ goto leave;
}
-+ if (EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_HUGE_FILE)) {
-+ /*
-+ * Large file size enabled file system can only be
-+ * mount if kernel is build with CONFIG_LSF
-+ */
-+ if (sizeof(root->i_blocks) < sizeof(u64) &&
-+ !(sb->s_flags & MS_RDONLY)) {
-+ printk(KERN_ERR "EXT4-fs: %s: Filesystem with huge "
-+ "files cannot be mounted read-write "
-+ "without CONFIG_LSF.\n", sb->s_id);
-+ goto failed_mount;
-+ }
-+ }
- blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
- if (blocksize < EXT4_MIN_BLOCK_SIZE ||
-@@ -1733,20 +1968,16 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- goto failed_mount;
+- /* requires vote_thread to be running. */
+- status = ocfs2_register_net_handlers(osb);
+- if (status < 0) {
+- mlog_errno(status);
+- goto leave;
+- }
+-
+ status = ocfs2_super_lock(osb, 1);
+ if (status < 0) {
+ mlog_errno(status);
+@@ -1150,8 +1194,6 @@ static int ocfs2_mount_volume(struct super_block *sb)
+ goto leave;
}
-- hblock = bdev_hardsect_size(sb->s_bdev);
- if (sb->s_blocksize != blocksize) {
-- /*
-- * Make sure the blocksize for the filesystem is larger
-- * than the hardware sectorsize for the machine.
-- */
-- if (blocksize < hblock) {
-- printk(KERN_ERR "EXT4-fs: blocksize %d too small for "
-- "device blocksize %d.\n", blocksize, hblock);
-+
-+ /* Validate the filesystem blocksize */
-+ if (!sb_set_blocksize(sb, blocksize)) {
-+ printk(KERN_ERR "EXT4-fs: bad block size %d.\n",
-+ blocksize);
- goto failed_mount;
- }
+- ocfs2_populate_mounted_map(osb);
+-
+ /* load all node-local system inodes */
+ status = ocfs2_init_local_system_inodes(osb);
+ if (status < 0) {
+@@ -1174,15 +1216,6 @@ static int ocfs2_mount_volume(struct super_block *sb)
+ if (ocfs2_mount_local(osb))
+ goto leave;
- brelse (bh);
-- sb_set_blocksize(sb, blocksize);
- logical_sb_block = sb_block * EXT4_MIN_BLOCK_SIZE;
- offset = do_div(logical_sb_block, blocksize);
- bh = sb_bread(sb, logical_sb_block);
-@@ -1764,6 +1995,7 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+- /* This should be sent *after* we recovered our journal as it
+- * will cause other nodes to unmark us as needing
+- * recovery. However, we need to send it *before* dropping the
+- * super block lock as otherwise their recovery threads might
+- * try to clean us up while we're live! */
+- status = ocfs2_request_mount_vote(osb);
+- if (status < 0)
+- mlog_errno(status);
+-
+ leave:
+ if (unlock_super)
+ ocfs2_super_unlock(osb, 1);
+@@ -1240,10 +1273,6 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
+ mlog_errno(tmp);
+ return;
}
+-
+- tmp = ocfs2_request_umount_vote(osb);
+- if (tmp < 0)
+- mlog_errno(tmp);
}
-+ sbi->s_bitmap_maxbytes = ext4_max_bitmap_size(sb->s_blocksize_bits);
- sb->s_maxbytes = ext4_max_size(sb->s_blocksize_bits);
+ if (osb->slot_num != OCFS2_INVALID_SLOT)
+@@ -1254,13 +1283,8 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
- if (le32_to_cpu(es->s_rev_level) == EXT4_GOOD_OLD_REV) {
-@@ -1838,6 +2070,17 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
+ ocfs2_release_system_inodes(osb);
- if (EXT4_BLOCKS_PER_GROUP(sb) == 0)
- goto cantfind_ext4;
-+
-+ /* ensure blocks_count calculation below doesn't sign-extend */
-+ if (ext4_blocks_count(es) + EXT4_BLOCKS_PER_GROUP(sb) <
-+ le32_to_cpu(es->s_first_data_block) + 1) {
-+ printk(KERN_WARNING "EXT4-fs: bad geometry: block count %llu, "
-+ "first data block %u, blocks per group %lu\n",
-+ ext4_blocks_count(es),
-+ le32_to_cpu(es->s_first_data_block),
-+ EXT4_BLOCKS_PER_GROUP(sb));
-+ goto failed_mount;
-+ }
- blocks_count = (ext4_blocks_count(es) -
- le32_to_cpu(es->s_first_data_block) +
- EXT4_BLOCKS_PER_GROUP(sb) - 1);
-@@ -1900,6 +2143,8 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- sbi->s_rsv_window_head.rsv_goal_size = 0;
- ext4_rsv_window_add(sb, &sbi->s_rsv_window_head);
+- if (osb->dlm) {
+- ocfs2_unregister_net_handlers(osb);
+-
++ if (osb->dlm)
+ ocfs2_dlm_shutdown(osb);
+- }
+-
+- ocfs2_clear_hb_callbacks(osb);
-+ sbi->s_stripe = ext4_get_stripe_size(sbi);
-+
- /*
- * set up enough so that it can read an inode
- */
-@@ -1944,6 +2189,21 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- goto failed_mount4;
+ debugfs_remove(osb->osb_debug_root);
+
+@@ -1315,7 +1339,6 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ int i, cbits, bbits;
+ struct ocfs2_dinode *di = (struct ocfs2_dinode *)bh->b_data;
+ struct inode *inode = NULL;
+- struct buffer_head *bitmap_bh = NULL;
+ struct ocfs2_journal *journal;
+ __le32 uuid_net_key;
+ struct ocfs2_super *osb;
+@@ -1344,19 +1367,13 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ osb->s_sectsize_bits = blksize_bits(sector_size);
+ BUG_ON(!osb->s_sectsize_bits);
+
+- osb->net_response_ids = 0;
+- spin_lock_init(&osb->net_response_lock);
+- INIT_LIST_HEAD(&osb->net_response_list);
+-
+- INIT_LIST_HEAD(&osb->osb_net_handlers);
+ init_waitqueue_head(&osb->recovery_event);
+- spin_lock_init(&osb->vote_task_lock);
+- init_waitqueue_head(&osb->vote_event);
+- osb->vote_work_sequence = 0;
+- osb->vote_wake_sequence = 0;
++ spin_lock_init(&osb->dc_task_lock);
++ init_waitqueue_head(&osb->dc_event);
++ osb->dc_work_sequence = 0;
++ osb->dc_wake_sequence = 0;
+ INIT_LIST_HEAD(&osb->blocked_lock_list);
+ osb->blocked_lock_count = 0;
+- INIT_LIST_HEAD(&osb->vote_list);
+ spin_lock_init(&osb->osb_lock);
+
+ atomic_set(&osb->alloc_stats.moves, 0);
+@@ -1496,7 +1513,6 @@ static int ocfs2_initialize_super(struct super_block *sb,
}
-+ if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
-+ jbd2_journal_set_features(sbi->s_journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM, 0,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
-+ } else if (test_opt(sb, JOURNAL_CHECKSUM)) {
-+ jbd2_journal_set_features(sbi->s_journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM, 0, 0);
-+ jbd2_journal_clear_features(sbi->s_journal, 0, 0,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
-+ } else {
-+ jbd2_journal_clear_features(sbi->s_journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM, 0,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
-+ }
-+
- /* We have now updated the journal if required, so we can
- * validate the data journaling mode. */
- switch (test_opt(sb, DATA_FLAGS)) {
-@@ -2044,6 +2304,7 @@ static int ext4_fill_super (struct super_block *sb, void *data, int silent)
- "writeback");
+ memcpy(&uuid_net_key, di->id2.i_super.s_uuid, sizeof(uuid_net_key));
+- osb->net_key = le32_to_cpu(uuid_net_key);
- ext4_ext_init(sb);
-+ ext4_mb_init(sb, needs_recovery);
+ strncpy(osb->vol_label, di->id2.i_super.s_label, 63);
+ osb->vol_label[63] = '\0';
+@@ -1539,25 +1555,9 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ }
- lock_kernel();
- return 0;
-@@ -2673,7 +2934,7 @@ static int ext4_statfs (struct dentry * dentry, struct kstatfs * buf)
- if (test_opt(sb, MINIX_DF)) {
- sbi->s_overhead_last = 0;
- } else if (sbi->s_blocks_last != ext4_blocks_count(es)) {
-- unsigned long ngroups = sbi->s_groups_count, i;
-+ ext4_group_t ngroups = sbi->s_groups_count, i;
- ext4_fsblk_t overhead = 0;
- smp_rmb();
+ osb->bitmap_blkno = OCFS2_I(inode)->ip_blkno;
+-
+- /* We don't have a cluster lock on the bitmap here because
+- * we're only interested in static information and the extra
+- * complexity at mount time isn't worht it. Don't pass the
+- * inode in to the read function though as we don't want it to
+- * be put in the cache. */
+- status = ocfs2_read_block(osb, osb->bitmap_blkno, &bitmap_bh, 0,
+- NULL);
+ iput(inode);
+- if (status < 0) {
+- mlog_errno(status);
+- goto bail;
+- }
-@@ -2909,7 +3170,7 @@ static ssize_t ext4_quota_read(struct super_block *sb, int type, char *data,
- size_t len, loff_t off)
- {
- struct inode *inode = sb_dqopt(sb)->files[type];
-- sector_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
-+ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
- int err = 0;
- int offset = off & (sb->s_blocksize - 1);
- int tocopy;
-@@ -2947,7 +3208,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
- const char *data, size_t len, loff_t off)
- {
- struct inode *inode = sb_dqopt(sb)->files[type];
-- sector_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
-+ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
- int err = 0;
- int offset = off & (sb->s_blocksize - 1);
- int tocopy;
-@@ -3002,7 +3263,6 @@ out:
- i_size_write(inode, off+len-towrite);
- EXT4_I(inode)->i_disksize = inode->i_size;
+- di = (struct ocfs2_dinode *) bitmap_bh->b_data;
+- osb->bitmap_cpg = le16_to_cpu(di->id2.i_chain.cl_cpg);
+- brelse(bitmap_bh);
+- mlog(0, "cluster bitmap inode: %llu, clusters per group: %u\n",
+- (unsigned long long)osb->bitmap_blkno, osb->bitmap_cpg);
++ osb->bitmap_cpg = ocfs2_group_bitmap_size(sb) * 8;
+
+ status = ocfs2_init_slot_info(osb);
+ if (status < 0) {
+diff --git a/fs/ocfs2/sysfile.c b/fs/ocfs2/sysfile.c
+index fd2e846..ab713eb 100644
+--- a/fs/ocfs2/sysfile.c
++++ b/fs/ocfs2/sysfile.c
+@@ -112,7 +112,7 @@ static struct inode * _ocfs2_get_system_file_inode(struct ocfs2_super *osb,
+ goto bail;
}
-- inode->i_version++;
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
- ext4_mark_inode_dirty(handle, inode);
- mutex_unlock(&inode->i_mutex);
-@@ -3027,9 +3287,15 @@ static struct file_system_type ext4dev_fs_type = {
- static int __init init_ext4_fs(void)
- {
-- int err = init_ext4_xattr();
-+ int err;
-+
-+ err = init_ext4_mballoc();
- if (err)
- return err;
-+
-+ err = init_ext4_xattr();
-+ if (err)
-+ goto out2;
- err = init_inodecache();
- if (err)
- goto out1;
-@@ -3041,6 +3307,8 @@ out:
- destroy_inodecache();
- out1:
- exit_ext4_xattr();
-+out2:
-+ exit_ext4_mballoc();
- return err;
- }
+- inode = ocfs2_iget(osb, blkno, OCFS2_FI_FLAG_SYSFILE);
++ inode = ocfs2_iget(osb, blkno, OCFS2_FI_FLAG_SYSFILE, type);
+ if (IS_ERR(inode)) {
+ mlog_errno(PTR_ERR(inode));
+ inode = NULL;
+diff --git a/fs/ocfs2/ver.c b/fs/ocfs2/ver.c
+index 5405ce1..e2488f4 100644
+--- a/fs/ocfs2/ver.c
++++ b/fs/ocfs2/ver.c
+@@ -29,7 +29,7 @@
-@@ -3049,6 +3317,7 @@ static void __exit exit_ext4_fs(void)
- unregister_filesystem(&ext4dev_fs_type);
- destroy_inodecache();
- exit_ext4_xattr();
-+ exit_ext4_mballoc();
+ #include "ver.h"
+
+-#define OCFS2_BUILD_VERSION "1.3.3"
++#define OCFS2_BUILD_VERSION "1.5.0"
+
+ #define VERSION_STR "OCFS2 " OCFS2_BUILD_VERSION
+
+diff --git a/fs/ocfs2/vote.c b/fs/ocfs2/vote.c
+deleted file mode 100644
+index c053585..0000000
+--- a/fs/ocfs2/vote.c
++++ /dev/null
+@@ -1,756 +0,0 @@
+-/* -*- mode: c; c-basic-offset: 8; -*-
+- * vim: noexpandtab sw=8 ts=8 sts=0:
+- *
+- * vote.c
+- *
+- * description here
+- *
+- * Copyright (C) 2003, 2004 Oracle. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or
+- * modify it under the terms of the GNU General Public
+- * License as published by the Free Software Foundation; either
+- * version 2 of the License, or (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public
+- * License along with this program; if not, write to the
+- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+- * Boston, MA 021110-1307, USA.
+- */
+-
+-#include <linux/types.h>
+-#include <linux/slab.h>
+-#include <linux/highmem.h>
+-#include <linux/kthread.h>
+-
+-#include <cluster/heartbeat.h>
+-#include <cluster/nodemanager.h>
+-#include <cluster/tcp.h>
+-
+-#include <dlm/dlmapi.h>
+-
+-#define MLOG_MASK_PREFIX ML_VOTE
+-#include <cluster/masklog.h>
+-
+-#include "ocfs2.h"
+-
+-#include "alloc.h"
+-#include "dlmglue.h"
+-#include "extent_map.h"
+-#include "heartbeat.h"
+-#include "inode.h"
+-#include "journal.h"
+-#include "slot_map.h"
+-#include "vote.h"
+-
+-#include "buffer_head_io.h"
+-
+-#define OCFS2_MESSAGE_TYPE_VOTE (0x1)
+-#define OCFS2_MESSAGE_TYPE_RESPONSE (0x2)
+-struct ocfs2_msg_hdr
+-{
+- __be32 h_response_id; /* used to lookup message handle on sending
+- * node. */
+- __be32 h_request;
+- __be64 h_blkno;
+- __be32 h_generation;
+- __be32 h_node_num; /* node sending this particular message. */
+-};
+-
+-struct ocfs2_vote_msg
+-{
+- struct ocfs2_msg_hdr v_hdr;
+- __be32 v_reserved1;
+-} __attribute__ ((packed));
+-
+-/* Responses are given these values to maintain backwards
+- * compatibility with older ocfs2 versions */
+-#define OCFS2_RESPONSE_OK (0)
+-#define OCFS2_RESPONSE_BUSY (-16)
+-#define OCFS2_RESPONSE_BAD_MSG (-22)
+-
+-struct ocfs2_response_msg
+-{
+- struct ocfs2_msg_hdr r_hdr;
+- __be32 r_response;
+-} __attribute__ ((packed));
+-
+-struct ocfs2_vote_work {
+- struct list_head w_list;
+- struct ocfs2_vote_msg w_msg;
+-};
+-
+-enum ocfs2_vote_request {
+- OCFS2_VOTE_REQ_INVALID = 0,
+- OCFS2_VOTE_REQ_MOUNT,
+- OCFS2_VOTE_REQ_UMOUNT,
+- OCFS2_VOTE_REQ_LAST
+-};
+-
+-static inline int ocfs2_is_valid_vote_request(int request)
+-{
+- return OCFS2_VOTE_REQ_INVALID < request &&
+- request < OCFS2_VOTE_REQ_LAST;
+-}
+-
+-typedef void (*ocfs2_net_response_callback)(void *priv,
+- struct ocfs2_response_msg *resp);
+-struct ocfs2_net_response_cb {
+- ocfs2_net_response_callback rc_cb;
+- void *rc_priv;
+-};
+-
+-struct ocfs2_net_wait_ctxt {
+- struct list_head n_list;
+- u32 n_response_id;
+- wait_queue_head_t n_event;
+- struct ocfs2_node_map n_node_map;
+- int n_response; /* an agreggate response. 0 if
+- * all nodes are go, < 0 on any
+- * negative response from any
+- * node or network error. */
+- struct ocfs2_net_response_cb *n_callback;
+-};
+-
+-static void ocfs2_process_mount_request(struct ocfs2_super *osb,
+- unsigned int node_num)
+-{
+- mlog(0, "MOUNT vote from node %u\n", node_num);
+- /* The other node only sends us this message when he has an EX
+- * on the superblock, so our recovery threads (if having been
+- * launched) are waiting on it.*/
+- ocfs2_recovery_map_clear(osb, node_num);
+- ocfs2_node_map_set_bit(osb, &osb->mounted_map, node_num);
+-
+- /* We clear the umount map here because a node may have been
+- * previously mounted, safely unmounted but never stopped
+- * heartbeating - in which case we'd have a stale entry. */
+- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
+-}
+-
+-static void ocfs2_process_umount_request(struct ocfs2_super *osb,
+- unsigned int node_num)
+-{
+- mlog(0, "UMOUNT vote from node %u\n", node_num);
+- ocfs2_node_map_clear_bit(osb, &osb->mounted_map, node_num);
+- ocfs2_node_map_set_bit(osb, &osb->umount_map, node_num);
+-}
+-
+-static void ocfs2_process_vote(struct ocfs2_super *osb,
+- struct ocfs2_vote_msg *msg)
+-{
+- int net_status, vote_response;
+- unsigned int node_num;
+- u64 blkno;
+- enum ocfs2_vote_request request;
+- struct ocfs2_msg_hdr *hdr = &msg->v_hdr;
+- struct ocfs2_response_msg response;
+-
+- /* decode the network mumbo jumbo into local variables. */
+- request = be32_to_cpu(hdr->h_request);
+- blkno = be64_to_cpu(hdr->h_blkno);
+- node_num = be32_to_cpu(hdr->h_node_num);
+-
+- mlog(0, "processing vote: request = %u, blkno = %llu, node_num = %u\n",
+- request, (unsigned long long)blkno, node_num);
+-
+- if (!ocfs2_is_valid_vote_request(request)) {
+- mlog(ML_ERROR, "Invalid vote request %d from node %u\n",
+- request, node_num);
+- vote_response = OCFS2_RESPONSE_BAD_MSG;
+- goto respond;
+- }
+-
+- vote_response = OCFS2_RESPONSE_OK;
+-
+- switch (request) {
+- case OCFS2_VOTE_REQ_UMOUNT:
+- ocfs2_process_umount_request(osb, node_num);
+- goto respond;
+- case OCFS2_VOTE_REQ_MOUNT:
+- ocfs2_process_mount_request(osb, node_num);
+- goto respond;
+- default:
+- /* avoids a gcc warning */
+- break;
+- }
+-
+-respond:
+- /* Response struture is small so we just put it on the stack
+- * and stuff it inline. */
+- memset(&response, 0, sizeof(struct ocfs2_response_msg));
+- response.r_hdr.h_response_id = hdr->h_response_id;
+- response.r_hdr.h_blkno = hdr->h_blkno;
+- response.r_hdr.h_generation = hdr->h_generation;
+- response.r_hdr.h_node_num = cpu_to_be32(osb->node_num);
+- response.r_response = cpu_to_be32(vote_response);
+-
+- net_status = o2net_send_message(OCFS2_MESSAGE_TYPE_RESPONSE,
+- osb->net_key,
+- &response,
+- sizeof(struct ocfs2_response_msg),
+- node_num,
+- NULL);
+- /* We still want to error print for ENOPROTOOPT here. The
+- * sending node shouldn't have unregistered his net handler
+- * without sending an unmount vote 1st */
+- if (net_status < 0
+- && net_status != -ETIMEDOUT
+- && net_status != -ENOTCONN)
+- mlog(ML_ERROR, "message to node %u fails with error %d!\n",
+- node_num, net_status);
+-}
+-
+-static void ocfs2_vote_thread_do_work(struct ocfs2_super *osb)
+-{
+- unsigned long processed;
+- struct ocfs2_lock_res *lockres;
+- struct ocfs2_vote_work *work;
+-
+- mlog_entry_void();
+-
+- spin_lock(&osb->vote_task_lock);
+- /* grab this early so we know to try again if a state change and
+- * wake happens part-way through our work */
+- osb->vote_work_sequence = osb->vote_wake_sequence;
+-
+- processed = osb->blocked_lock_count;
+- while (processed) {
+- BUG_ON(list_empty(&osb->blocked_lock_list));
+-
+- lockres = list_entry(osb->blocked_lock_list.next,
+- struct ocfs2_lock_res, l_blocked_list);
+- list_del_init(&lockres->l_blocked_list);
+- osb->blocked_lock_count--;
+- spin_unlock(&osb->vote_task_lock);
+-
+- BUG_ON(!processed);
+- processed--;
+-
+- ocfs2_process_blocked_lock(osb, lockres);
+-
+- spin_lock(&osb->vote_task_lock);
+- }
+-
+- while (osb->vote_count) {
+- BUG_ON(list_empty(&osb->vote_list));
+- work = list_entry(osb->vote_list.next,
+- struct ocfs2_vote_work, w_list);
+- list_del(&work->w_list);
+- osb->vote_count--;
+- spin_unlock(&osb->vote_task_lock);
+-
+- ocfs2_process_vote(osb, &work->w_msg);
+- kfree(work);
+-
+- spin_lock(&osb->vote_task_lock);
+- }
+- spin_unlock(&osb->vote_task_lock);
+-
+- mlog_exit_void();
+-}
+-
+-static int ocfs2_vote_thread_lists_empty(struct ocfs2_super *osb)
+-{
+- int empty = 0;
+-
+- spin_lock(&osb->vote_task_lock);
+- if (list_empty(&osb->blocked_lock_list) &&
+- list_empty(&osb->vote_list))
+- empty = 1;
+-
+- spin_unlock(&osb->vote_task_lock);
+- return empty;
+-}
+-
+-static int ocfs2_vote_thread_should_wake(struct ocfs2_super *osb)
+-{
+- int should_wake = 0;
+-
+- spin_lock(&osb->vote_task_lock);
+- if (osb->vote_work_sequence != osb->vote_wake_sequence)
+- should_wake = 1;
+- spin_unlock(&osb->vote_task_lock);
+-
+- return should_wake;
+-}
+-
+-int ocfs2_vote_thread(void *arg)
+-{
+- int status = 0;
+- struct ocfs2_super *osb = arg;
+-
+- /* only quit once we've been asked to stop and there is no more
+- * work available */
+- while (!(kthread_should_stop() &&
+- ocfs2_vote_thread_lists_empty(osb))) {
+-
+- wait_event_interruptible(osb->vote_event,
+- ocfs2_vote_thread_should_wake(osb) ||
+- kthread_should_stop());
+-
+- mlog(0, "vote_thread: awoken\n");
+-
+- ocfs2_vote_thread_do_work(osb);
+- }
+-
+- osb->vote_task = NULL;
+- return status;
+-}
+-
+-static struct ocfs2_net_wait_ctxt *ocfs2_new_net_wait_ctxt(unsigned int response_id)
+-{
+- struct ocfs2_net_wait_ctxt *w;
+-
+- w = kzalloc(sizeof(*w), GFP_NOFS);
+- if (!w) {
+- mlog_errno(-ENOMEM);
+- goto bail;
+- }
+-
+- INIT_LIST_HEAD(&w->n_list);
+- init_waitqueue_head(&w->n_event);
+- ocfs2_node_map_init(&w->n_node_map);
+- w->n_response_id = response_id;
+- w->n_callback = NULL;
+-bail:
+- return w;
+-}
+-
+-static unsigned int ocfs2_new_response_id(struct ocfs2_super *osb)
+-{
+- unsigned int ret;
+-
+- spin_lock(&osb->net_response_lock);
+- ret = ++osb->net_response_ids;
+- spin_unlock(&osb->net_response_lock);
+-
+- return ret;
+-}
+-
+-static void ocfs2_dequeue_net_wait_ctxt(struct ocfs2_super *osb,
+- struct ocfs2_net_wait_ctxt *w)
+-{
+- spin_lock(&osb->net_response_lock);
+- list_del(&w->n_list);
+- spin_unlock(&osb->net_response_lock);
+-}
+-
+-static void ocfs2_queue_net_wait_ctxt(struct ocfs2_super *osb,
+- struct ocfs2_net_wait_ctxt *w)
+-{
+- spin_lock(&osb->net_response_lock);
+- list_add_tail(&w->n_list,
+- &osb->net_response_list);
+- spin_unlock(&osb->net_response_lock);
+-}
+-
+-static void __ocfs2_mark_node_responded(struct ocfs2_super *osb,
+- struct ocfs2_net_wait_ctxt *w,
+- int node_num)
+-{
+- assert_spin_locked(&osb->net_response_lock);
+-
+- ocfs2_node_map_clear_bit(osb, &w->n_node_map, node_num);
+- if (ocfs2_node_map_is_empty(osb, &w->n_node_map))
+- wake_up(&w->n_event);
+-}
+-
+-/* Intended to be called from the node down callback, we fake remove
+- * the node from all our response contexts */
+-void ocfs2_remove_node_from_vote_queues(struct ocfs2_super *osb,
+- int node_num)
+-{
+- struct list_head *p;
+- struct ocfs2_net_wait_ctxt *w = NULL;
+-
+- spin_lock(&osb->net_response_lock);
+-
+- list_for_each(p, &osb->net_response_list) {
+- w = list_entry(p, struct ocfs2_net_wait_ctxt, n_list);
+-
+- __ocfs2_mark_node_responded(osb, w, node_num);
+- }
+-
+- spin_unlock(&osb->net_response_lock);
+-}
+-
+-static int ocfs2_broadcast_vote(struct ocfs2_super *osb,
+- struct ocfs2_vote_msg *request,
+- unsigned int response_id,
+- int *response,
+- struct ocfs2_net_response_cb *callback)
+-{
+- int status, i, remote_err;
+- struct ocfs2_net_wait_ctxt *w = NULL;
+- int dequeued = 0;
+-
+- mlog_entry_void();
+-
+- w = ocfs2_new_net_wait_ctxt(response_id);
+- if (!w) {
+- status = -ENOMEM;
+- mlog_errno(status);
+- goto bail;
+- }
+- w->n_callback = callback;
+-
+- /* we're pretty much ready to go at this point, and this fills
+- * in n_response which we need anyway... */
+- ocfs2_queue_net_wait_ctxt(osb, w);
+-
+- i = ocfs2_node_map_iterate(osb, &osb->mounted_map, 0);
+-
+- while (i != O2NM_INVALID_NODE_NUM) {
+- if (i != osb->node_num) {
+- mlog(0, "trying to send request to node %i\n", i);
+- ocfs2_node_map_set_bit(osb, &w->n_node_map, i);
+-
+- remote_err = 0;
+- status = o2net_send_message(OCFS2_MESSAGE_TYPE_VOTE,
+- osb->net_key,
+- request,
+- sizeof(*request),
+- i,
+- &remote_err);
+- if (status == -ETIMEDOUT) {
+- mlog(0, "remote node %d timed out!\n", i);
+- status = -EAGAIN;
+- goto bail;
+- }
+- if (remote_err < 0) {
+- status = remote_err;
+- mlog(0, "remote error %d on node %d!\n",
+- remote_err, i);
+- mlog_errno(status);
+- goto bail;
+- }
+- if (status < 0) {
+- mlog_errno(status);
+- goto bail;
+- }
+- }
+- i++;
+- i = ocfs2_node_map_iterate(osb, &osb->mounted_map, i);
+- mlog(0, "next is %d, i am %d\n", i, osb->node_num);
+- }
+- mlog(0, "done sending, now waiting on responses...\n");
+-
+- wait_event(w->n_event, ocfs2_node_map_is_empty(osb, &w->n_node_map));
+-
+- ocfs2_dequeue_net_wait_ctxt(osb, w);
+- dequeued = 1;
+-
+- *response = w->n_response;
+- status = 0;
+-bail:
+- if (w) {
+- if (!dequeued)
+- ocfs2_dequeue_net_wait_ctxt(osb, w);
+- kfree(w);
+- }
+-
+- mlog_exit(status);
+- return status;
+-}
+-
+-static struct ocfs2_vote_msg * ocfs2_new_vote_request(struct ocfs2_super *osb,
+- u64 blkno,
+- unsigned int generation,
+- enum ocfs2_vote_request type)
+-{
+- struct ocfs2_vote_msg *request;
+- struct ocfs2_msg_hdr *hdr;
+-
+- BUG_ON(!ocfs2_is_valid_vote_request(type));
+-
+- request = kzalloc(sizeof(*request), GFP_NOFS);
+- if (!request) {
+- mlog_errno(-ENOMEM);
+- } else {
+- hdr = &request->v_hdr;
+- hdr->h_node_num = cpu_to_be32(osb->node_num);
+- hdr->h_request = cpu_to_be32(type);
+- hdr->h_blkno = cpu_to_be64(blkno);
+- hdr->h_generation = cpu_to_be32(generation);
+- }
+-
+- return request;
+-}
+-
+-/* Complete the buildup of a new vote request and process the
+- * broadcast return value. */
+-static int ocfs2_do_request_vote(struct ocfs2_super *osb,
+- struct ocfs2_vote_msg *request,
+- struct ocfs2_net_response_cb *callback)
+-{
+- int status, response = -EBUSY;
+- unsigned int response_id;
+- struct ocfs2_msg_hdr *hdr;
+-
+- response_id = ocfs2_new_response_id(osb);
+-
+- hdr = &request->v_hdr;
+- hdr->h_response_id = cpu_to_be32(response_id);
+-
+- status = ocfs2_broadcast_vote(osb, request, response_id, &response,
+- callback);
+- if (status < 0) {
+- mlog_errno(status);
+- goto bail;
+- }
+-
+- status = response;
+-bail:
+-
+- return status;
+-}
+-
+-int ocfs2_request_mount_vote(struct ocfs2_super *osb)
+-{
+- int status;
+- struct ocfs2_vote_msg *request = NULL;
+-
+- request = ocfs2_new_vote_request(osb, 0ULL, 0, OCFS2_VOTE_REQ_MOUNT);
+- if (!request) {
+- status = -ENOMEM;
+- goto bail;
+- }
+-
+- status = -EAGAIN;
+- while (status == -EAGAIN) {
+- if (!(osb->s_mount_opt & OCFS2_MOUNT_NOINTR) &&
+- signal_pending(current)) {
+- status = -ERESTARTSYS;
+- goto bail;
+- }
+-
+- if (ocfs2_node_map_is_only(osb, &osb->mounted_map,
+- osb->node_num)) {
+- status = 0;
+- goto bail;
+- }
+-
+- status = ocfs2_do_request_vote(osb, request, NULL);
+- }
+-
+-bail:
+- kfree(request);
+- return status;
+-}
+-
+-int ocfs2_request_umount_vote(struct ocfs2_super *osb)
+-{
+- int status;
+- struct ocfs2_vote_msg *request = NULL;
+-
+- request = ocfs2_new_vote_request(osb, 0ULL, 0, OCFS2_VOTE_REQ_UMOUNT);
+- if (!request) {
+- status = -ENOMEM;
+- goto bail;
+- }
+-
+- status = -EAGAIN;
+- while (status == -EAGAIN) {
+- /* Do not check signals on this vote... We really want
+- * this one to go all the way through. */
+-
+- if (ocfs2_node_map_is_only(osb, &osb->mounted_map,
+- osb->node_num)) {
+- status = 0;
+- goto bail;
+- }
+-
+- status = ocfs2_do_request_vote(osb, request, NULL);
+- }
+-
+-bail:
+- kfree(request);
+- return status;
+-}
+-
+-/* TODO: This should eventually be a hash table! */
+-static struct ocfs2_net_wait_ctxt * __ocfs2_find_net_wait_ctxt(struct ocfs2_super *osb,
+- u32 response_id)
+-{
+- struct list_head *p;
+- struct ocfs2_net_wait_ctxt *w = NULL;
+-
+- list_for_each(p, &osb->net_response_list) {
+- w = list_entry(p, struct ocfs2_net_wait_ctxt, n_list);
+- if (response_id == w->n_response_id)
+- break;
+- w = NULL;
+- }
+-
+- return w;
+-}
+-
+-/* Translate response codes into local node errno values */
+-static inline int ocfs2_translate_response(int response)
+-{
+- int ret;
+-
+- switch (response) {
+- case OCFS2_RESPONSE_OK:
+- ret = 0;
+- break;
+-
+- case OCFS2_RESPONSE_BUSY:
+- ret = -EBUSY;
+- break;
+-
+- default:
+- ret = -EINVAL;
+- }
+-
+- return ret;
+-}
+-
+-static int ocfs2_handle_response_message(struct o2net_msg *msg,
+- u32 len,
+- void *data, void **ret_data)
+-{
+- unsigned int response_id, node_num;
+- int response_status;
+- struct ocfs2_super *osb = data;
+- struct ocfs2_response_msg *resp;
+- struct ocfs2_net_wait_ctxt * w;
+- struct ocfs2_net_response_cb *resp_cb;
+-
+- resp = (struct ocfs2_response_msg *) msg->buf;
+-
+- response_id = be32_to_cpu(resp->r_hdr.h_response_id);
+- node_num = be32_to_cpu(resp->r_hdr.h_node_num);
+- response_status =
+- ocfs2_translate_response(be32_to_cpu(resp->r_response));
+-
+- mlog(0, "received response message:\n");
+- mlog(0, "h_response_id = %u\n", response_id);
+- mlog(0, "h_request = %u\n", be32_to_cpu(resp->r_hdr.h_request));
+- mlog(0, "h_blkno = %llu\n",
+- (unsigned long long)be64_to_cpu(resp->r_hdr.h_blkno));
+- mlog(0, "h_generation = %u\n", be32_to_cpu(resp->r_hdr.h_generation));
+- mlog(0, "h_node_num = %u\n", node_num);
+- mlog(0, "r_response = %d\n", response_status);
+-
+- spin_lock(&osb->net_response_lock);
+- w = __ocfs2_find_net_wait_ctxt(osb, response_id);
+- if (!w) {
+- mlog(0, "request not found!\n");
+- goto bail;
+- }
+- resp_cb = w->n_callback;
+-
+- if (response_status && (!w->n_response)) {
+- /* we only really need one negative response so don't
+- * set it twice. */
+- w->n_response = response_status;
+- }
+-
+- if (resp_cb) {
+- spin_unlock(&osb->net_response_lock);
+-
+- resp_cb->rc_cb(resp_cb->rc_priv, resp);
+-
+- spin_lock(&osb->net_response_lock);
+- }
+-
+- __ocfs2_mark_node_responded(osb, w, node_num);
+-bail:
+- spin_unlock(&osb->net_response_lock);
+-
+- return 0;
+-}
+-
+-static int ocfs2_handle_vote_message(struct o2net_msg *msg,
+- u32 len,
+- void *data, void **ret_data)
+-{
+- int status;
+- struct ocfs2_super *osb = data;
+- struct ocfs2_vote_work *work;
+-
+- work = kmalloc(sizeof(struct ocfs2_vote_work), GFP_NOFS);
+- if (!work) {
+- status = -ENOMEM;
+- mlog_errno(status);
+- goto bail;
+- }
+-
+- INIT_LIST_HEAD(&work->w_list);
+- memcpy(&work->w_msg, msg->buf, sizeof(struct ocfs2_vote_msg));
+-
+- mlog(0, "scheduling vote request:\n");
+- mlog(0, "h_response_id = %u\n",
+- be32_to_cpu(work->w_msg.v_hdr.h_response_id));
+- mlog(0, "h_request = %u\n", be32_to_cpu(work->w_msg.v_hdr.h_request));
+- mlog(0, "h_blkno = %llu\n",
+- (unsigned long long)be64_to_cpu(work->w_msg.v_hdr.h_blkno));
+- mlog(0, "h_generation = %u\n",
+- be32_to_cpu(work->w_msg.v_hdr.h_generation));
+- mlog(0, "h_node_num = %u\n",
+- be32_to_cpu(work->w_msg.v_hdr.h_node_num));
+-
+- spin_lock(&osb->vote_task_lock);
+- list_add_tail(&work->w_list, &osb->vote_list);
+- osb->vote_count++;
+- spin_unlock(&osb->vote_task_lock);
+-
+- ocfs2_kick_vote_thread(osb);
+-
+- status = 0;
+-bail:
+- return status;
+-}
+-
+-void ocfs2_unregister_net_handlers(struct ocfs2_super *osb)
+-{
+- if (!osb->net_key)
+- return;
+-
+- o2net_unregister_handler_list(&osb->osb_net_handlers);
+-
+- if (!list_empty(&osb->net_response_list))
+- mlog(ML_ERROR, "net response list not empty!\n");
+-
+- osb->net_key = 0;
+-}
+-
+-int ocfs2_register_net_handlers(struct ocfs2_super *osb)
+-{
+- int status = 0;
+-
+- if (ocfs2_mount_local(osb))
+- return 0;
+-
+- status = o2net_register_handler(OCFS2_MESSAGE_TYPE_RESPONSE,
+- osb->net_key,
+- sizeof(struct ocfs2_response_msg),
+- ocfs2_handle_response_message,
+- osb, NULL, &osb->osb_net_handlers);
+- if (status) {
+- mlog_errno(status);
+- goto bail;
+- }
+-
+- status = o2net_register_handler(OCFS2_MESSAGE_TYPE_VOTE,
+- osb->net_key,
+- sizeof(struct ocfs2_vote_msg),
+- ocfs2_handle_vote_message,
+- osb, NULL, &osb->osb_net_handlers);
+- if (status) {
+- mlog_errno(status);
+- goto bail;
+- }
+-bail:
+- if (status < 0)
+- ocfs2_unregister_net_handlers(osb);
+-
+- return status;
+-}
+diff --git a/fs/ocfs2/vote.h b/fs/ocfs2/vote.h
+deleted file mode 100644
+index 9ea46f6..0000000
+--- a/fs/ocfs2/vote.h
++++ /dev/null
+@@ -1,48 +0,0 @@
+-/* -*- mode: c; c-basic-offset: 8; -*-
+- * vim: noexpandtab sw=8 ts=8 sts=0:
+- *
+- * vote.h
+- *
+- * description here
+- *
+- * Copyright (C) 2002, 2004 Oracle. All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or
+- * modify it under the terms of the GNU General Public
+- * License as published by the Free Software Foundation; either
+- * version 2 of the License, or (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+- * General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public
+- * License along with this program; if not, write to the
+- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+- * Boston, MA 021110-1307, USA.
+- */
+-
+-
+-#ifndef VOTE_H
+-#define VOTE_H
+-
+-int ocfs2_vote_thread(void *arg);
+-static inline void ocfs2_kick_vote_thread(struct ocfs2_super *osb)
+-{
+- spin_lock(&osb->vote_task_lock);
+- /* make sure the voting thread gets a swipe at whatever changes
+- * the caller may have made to the voting state */
+- osb->vote_wake_sequence++;
+- spin_unlock(&osb->vote_task_lock);
+- wake_up(&osb->vote_event);
+-}
+-
+-int ocfs2_request_mount_vote(struct ocfs2_super *osb);
+-int ocfs2_request_umount_vote(struct ocfs2_super *osb);
+-int ocfs2_register_net_handlers(struct ocfs2_super *osb);
+-void ocfs2_unregister_net_handlers(struct ocfs2_super *osb);
+-
+-void ocfs2_remove_node_from_vote_queues(struct ocfs2_super *osb,
+- int node_num);
+-#endif
+diff --git a/fs/partitions/check.c b/fs/partitions/check.c
+index 722e12e..739da70 100644
+--- a/fs/partitions/check.c
++++ b/fs/partitions/check.c
+@@ -195,96 +195,45 @@ check_partition(struct gendisk *hd, struct block_device *bdev)
+ return ERR_PTR(res);
}
- MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");
-diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
-index 8638730..d796213 100644
---- a/fs/ext4/xattr.c
-+++ b/fs/ext4/xattr.c
-@@ -480,7 +480,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
- ea_bdebug(bh, "refcount now=0; freeing");
- if (ce)
- mb_cache_entry_free(ce);
-- ext4_free_blocks(handle, inode, bh->b_blocknr, 1);
-+ ext4_free_blocks(handle, inode, bh->b_blocknr, 1, 1);
- get_bh(bh);
- ext4_forget(handle, 1, inode, bh, bh->b_blocknr);
- } else {
-@@ -821,7 +821,7 @@ inserted:
- new_bh = sb_getblk(sb, block);
- if (!new_bh) {
- getblk_failed:
-- ext4_free_blocks(handle, inode, block, 1);
-+ ext4_free_blocks(handle, inode, block, 1, 1);
- error = -EIO;
- goto cleanup;
- }
-diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
-index 84f9f7d..e5e80d1 100644
---- a/fs/fuse/inode.c
-+++ b/fs/fuse/inode.c
-@@ -744,9 +744,6 @@ static inline void unregister_fuseblk(void)
+-/*
+- * sysfs bindings for partitions
+- */
+-
+-struct part_attribute {
+- struct attribute attr;
+- ssize_t (*show)(struct hd_struct *,char *);
+- ssize_t (*store)(struct hd_struct *,const char *, size_t);
+-};
+-
+-static ssize_t
+-part_attr_show(struct kobject * kobj, struct attribute * attr, char * page)
++static ssize_t part_start_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
+ {
+- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
+- struct part_attribute * part_attr = container_of(attr,struct part_attribute,attr);
+- ssize_t ret = 0;
+- if (part_attr->show)
+- ret = part_attr->show(p, page);
+- return ret;
+-}
+-static ssize_t
+-part_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char *page, size_t count)
+-{
+- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
+- struct part_attribute * part_attr = container_of(attr,struct part_attribute,attr);
+- ssize_t ret = 0;
++ struct hd_struct *p = dev_to_part(dev);
+
+- if (part_attr->store)
+- ret = part_attr->store(p, page, count);
+- return ret;
++ return sprintf(buf, "%llu\n",(unsigned long long)p->start_sect);
}
- #endif
--static decl_subsys(fuse, NULL, NULL);
--static decl_subsys(connections, NULL, NULL);
+-static struct sysfs_ops part_sysfs_ops = {
+- .show = part_attr_show,
+- .store = part_attr_store,
+-};
-
- static void fuse_inode_init_once(struct kmem_cache *cachep, void *foo)
+-static ssize_t part_uevent_store(struct hd_struct * p,
+- const char *page, size_t count)
++static ssize_t part_size_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
{
- struct inode * inode = foo;
-@@ -791,32 +788,37 @@ static void fuse_fs_cleanup(void)
- kmem_cache_destroy(fuse_inode_cachep);
+- kobject_uevent(&p->kobj, KOBJ_ADD);
+- return count;
++ struct hd_struct *p = dev_to_part(dev);
++ return sprintf(buf, "%llu\n",(unsigned long long)p->nr_sects);
+ }
+-static ssize_t part_dev_read(struct hd_struct * p, char *page)
+-{
+- struct gendisk *disk = container_of(p->kobj.parent,struct gendisk,kobj);
+- dev_t dev = MKDEV(disk->major, disk->first_minor + p->partno);
+- return print_dev_t(page, dev);
+-}
+-static ssize_t part_start_read(struct hd_struct * p, char *page)
+-{
+- return sprintf(page, "%llu\n",(unsigned long long)p->start_sect);
+-}
+-static ssize_t part_size_read(struct hd_struct * p, char *page)
+-{
+- return sprintf(page, "%llu\n",(unsigned long long)p->nr_sects);
+-}
+-static ssize_t part_stat_read(struct hd_struct * p, char *page)
++
++static ssize_t part_stat_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
+ {
+- return sprintf(page, "%8u %8llu %8u %8llu\n",
++ struct hd_struct *p = dev_to_part(dev);
++
++ return sprintf(buf, "%8u %8llu %8u %8llu\n",
+ p->ios[0], (unsigned long long)p->sectors[0],
+ p->ios[1], (unsigned long long)p->sectors[1]);
}
+-static struct part_attribute part_attr_uevent = {
+- .attr = {.name = "uevent", .mode = S_IWUSR },
+- .store = part_uevent_store
+-};
+-static struct part_attribute part_attr_dev = {
+- .attr = {.name = "dev", .mode = S_IRUGO },
+- .show = part_dev_read
+-};
+-static struct part_attribute part_attr_start = {
+- .attr = {.name = "start", .mode = S_IRUGO },
+- .show = part_start_read
+-};
+-static struct part_attribute part_attr_size = {
+- .attr = {.name = "size", .mode = S_IRUGO },
+- .show = part_size_read
+-};
+-static struct part_attribute part_attr_stat = {
+- .attr = {.name = "stat", .mode = S_IRUGO },
+- .show = part_stat_read
+-};
-+static struct kobject *fuse_kobj;
-+static struct kobject *connections_kobj;
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
++static ssize_t part_fail_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ struct hd_struct *p = dev_to_part(dev);
+
+-static ssize_t part_fail_store(struct hd_struct * p,
++ return sprintf(buf, "%d\n", p->make_it_fail);
++}
+
- static int fuse_sysfs_init(void)
++static ssize_t part_fail_store(struct device *dev,
++ struct device_attribute *attr,
+ const char *buf, size_t count)
{
- int err;
++ struct hd_struct *p = dev_to_part(dev);
+ int i;
-- kobj_set_kset_s(&fuse_subsys, fs_subsys);
-- err = subsystem_register(&fuse_subsys);
-- if (err)
-+ fuse_kobj = kobject_create_and_add("fuse", fs_kobj);
-+ if (!fuse_kobj) {
-+ err = -ENOMEM;
- goto out_err;
-+ }
+ if (count > 0 && sscanf(buf, "%d", &i) > 0)
+@@ -292,50 +241,53 @@ static ssize_t part_fail_store(struct hd_struct * p,
-- kobj_set_kset_s(&connections_subsys, fuse_subsys);
-- err = subsystem_register(&connections_subsys);
-- if (err)
-+ connections_kobj = kobject_create_and_add("connections", fuse_kobj);
-+ if (!connections_kobj) {
-+ err = -ENOMEM;
- goto out_fuse_unregister;
-+ }
+ return count;
+ }
+-static ssize_t part_fail_read(struct hd_struct * p, char *page)
+-{
+- return sprintf(page, "%d\n", p->make_it_fail);
+-}
+-static struct part_attribute part_attr_fail = {
+- .attr = {.name = "make-it-fail", .mode = S_IRUGO | S_IWUSR },
+- .store = part_fail_store,
+- .show = part_fail_read
+-};
++#endif
- return 0;
++static DEVICE_ATTR(start, S_IRUGO, part_start_show, NULL);
++static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL);
++static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL);
++#ifdef CONFIG_FAIL_MAKE_REQUEST
++static struct device_attribute dev_attr_fail =
++ __ATTR(make-it-fail, S_IRUGO|S_IWUSR, part_fail_show, part_fail_store);
+ #endif
- out_fuse_unregister:
-- subsystem_unregister(&fuse_subsys);
-+ kobject_put(fuse_kobj);
- out_err:
- return err;
- }
+-static struct attribute * default_attrs[] = {
+- &part_attr_uevent.attr,
+- &part_attr_dev.attr,
+- &part_attr_start.attr,
+- &part_attr_size.attr,
+- &part_attr_stat.attr,
++static struct attribute *part_attrs[] = {
++ &dev_attr_start.attr,
++ &dev_attr_size.attr,
++ &dev_attr_stat.attr,
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
+- &part_attr_fail.attr,
++ &dev_attr_fail.attr,
+ #endif
+- NULL,
++ NULL
+ };
- static void fuse_sysfs_cleanup(void)
+-extern struct kset block_subsys;
++static struct attribute_group part_attr_group = {
++ .attrs = part_attrs,
++};
+
+-static void part_release(struct kobject *kobj)
++static struct attribute_group *part_attr_groups[] = {
++ &part_attr_group,
++ NULL
++};
++
++static void part_release(struct device *dev)
{
-- subsystem_unregister(&connections_subsys);
-- subsystem_unregister(&fuse_subsys);
-+ kobject_put(connections_kobj);
-+ kobject_put(fuse_kobj);
+- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
++ struct hd_struct *p = dev_to_part(dev);
+ kfree(p);
}
- static int __init fuse_init(void)
-diff --git a/fs/gfs2/Makefile b/fs/gfs2/Makefile
-index 04ad0ca..8fff110 100644
---- a/fs/gfs2/Makefile
-+++ b/fs/gfs2/Makefile
-@@ -2,7 +2,7 @@ obj-$(CONFIG_GFS2_FS) += gfs2.o
- gfs2-y := acl.o bmap.o daemon.o dir.o eaops.o eattr.o glock.o \
- glops.o inode.o lm.o log.o lops.o locking.o main.o meta_io.o \
- mount.o ops_address.o ops_dentry.o ops_export.o ops_file.o \
-- ops_fstype.o ops_inode.o ops_super.o ops_vm.o quota.o \
-+ ops_fstype.o ops_inode.o ops_super.o quota.o \
- recovery.o rgrp.o super.o sys.o trans.o util.o
+-struct kobj_type ktype_part = {
++struct device_type part_type = {
++ .name = "partition",
++ .groups = part_attr_groups,
+ .release = part_release,
+- .default_attrs = default_attrs,
+- .sysfs_ops = &part_sysfs_ops,
+ };
- obj-$(CONFIG_GFS2_FS_LOCKING_NOLOCK) += locking/nolock/
-diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
-index 93fa427..e4effc4 100644
---- a/fs/gfs2/bmap.c
-+++ b/fs/gfs2/bmap.c
-@@ -59,7 +59,6 @@ struct strip_mine {
- static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
- u64 block, struct page *page)
+ static inline void partition_sysfs_add_subdir(struct hd_struct *p)
{
-- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
- struct inode *inode = &ip->i_inode;
- struct buffer_head *bh;
- int release = 0;
-@@ -95,7 +94,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
- set_buffer_uptodate(bh);
- if (!gfs2_is_jdata(ip))
- mark_buffer_dirty(bh);
-- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
-+ if (!gfs2_is_writeback(ip))
- gfs2_trans_add_bh(ip->i_gl, bh, 0);
+ struct kobject *k;
- if (release) {
-@@ -453,8 +452,8 @@ static inline void bmap_unlock(struct inode *inode, int create)
- * Returns: errno
- */
+- k = kobject_get(&p->kobj);
+- p->holder_dir = kobject_add_dir(k, "holders");
++ k = kobject_get(&p->dev.kobj);
++ p->holder_dir = kobject_create_and_add("holders", k);
+ kobject_put(k);
+ }
--int gfs2_block_map(struct inode *inode, u64 lblock, int create,
-- struct buffer_head *bh_map)
-+int gfs2_block_map(struct inode *inode, sector_t lblock,
-+ struct buffer_head *bh_map, int create)
+@@ -343,15 +295,16 @@ static inline void disk_sysfs_add_subdirs(struct gendisk *disk)
{
- struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_sbd *sdp = GFS2_SB(inode);
-@@ -470,6 +469,7 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
- unsigned int maxlen = bh_map->b_size >> inode->i_blkbits;
- struct metapath mp;
- u64 size;
-+ struct buffer_head *dibh = NULL;
-
- BUG_ON(maxlen == 0);
-
-@@ -500,6 +500,8 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
- error = gfs2_meta_inode_buffer(ip, &bh);
- if (error)
- goto out_fail;
-+ dibh = bh;
-+ get_bh(dibh);
+ struct kobject *k;
- for (x = 0; x < end_of_metadata; x++) {
- lookup_block(ip, bh, x, &mp, create, &new, &dblock);
-@@ -518,13 +520,8 @@ int gfs2_block_map(struct inode *inode, u64 lblock, int create,
- if (boundary)
- set_buffer_boundary(bh_map);
- if (new) {
-- struct buffer_head *dibh;
-- error = gfs2_meta_inode_buffer(ip, &dibh);
-- if (!error) {
-- gfs2_trans_add_bh(ip->i_gl, dibh, 1);
-- gfs2_dinode_out(ip, dibh->b_data);
-- brelse(dibh);
-- }
-+ gfs2_trans_add_bh(ip->i_gl, dibh, 1);
-+ gfs2_dinode_out(ip, dibh->b_data);
- set_buffer_new(bh_map);
- goto out_brelse;
- }
-@@ -545,6 +542,8 @@ out_brelse:
- out_ok:
- error = 0;
- out_fail:
-+ if (dibh)
-+ brelse(dibh);
- bmap_unlock(inode, create);
- return error;
+- k = kobject_get(&disk->kobj);
+- disk->holder_dir = kobject_add_dir(k, "holders");
+- disk->slave_dir = kobject_add_dir(k, "slaves");
++ k = kobject_get(&disk->dev.kobj);
++ disk->holder_dir = kobject_create_and_add("holders", k);
++ disk->slave_dir = kobject_create_and_add("slaves", k);
+ kobject_put(k);
}
-@@ -560,7 +559,7 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi
- BUG_ON(!new);
-
- bh.b_size = 1 << (inode->i_blkbits + 5);
-- ret = gfs2_block_map(inode, lblock, create, &bh);
-+ ret = gfs2_block_map(inode, lblock, &bh, create);
- *extlen = bh.b_size >> inode->i_blkbits;
- *dblock = bh.b_blocknr;
- if (buffer_new(&bh))
-@@ -684,7 +683,7 @@ static int do_strip(struct gfs2_inode *ip, struct buffer_head *dibh,
- if (metadata)
- revokes = (height) ? sdp->sd_inptrs : sdp->sd_diptrs;
-
-- error = gfs2_rindex_hold(sdp, &ip->i_alloc.al_ri_gh);
-+ error = gfs2_rindex_hold(sdp, &ip->i_alloc->al_ri_gh);
- if (error)
- return error;
-@@ -786,7 +785,7 @@ out_rg_gunlock:
- out_rlist:
- gfs2_rlist_free(&rlist);
- out:
-- gfs2_glock_dq_uninit(&ip->i_alloc.al_ri_gh);
-+ gfs2_glock_dq_uninit(&ip->i_alloc->al_ri_gh);
- return error;
+ void delete_partition(struct gendisk *disk, int part)
+ {
+ struct hd_struct *p = disk->part[part-1];
++
+ if (!p)
+ return;
+ if (!p->nr_sects)
+@@ -361,113 +314,55 @@ void delete_partition(struct gendisk *disk, int part)
+ p->nr_sects = 0;
+ p->ios[0] = p->ios[1] = 0;
+ p->sectors[0] = p->sectors[1] = 0;
+- sysfs_remove_link(&p->kobj, "subsystem");
+- kobject_unregister(p->holder_dir);
+- kobject_uevent(&p->kobj, KOBJ_REMOVE);
+- kobject_del(&p->kobj);
+- kobject_put(&p->kobj);
++ kobject_put(p->holder_dir);
++ device_del(&p->dev);
++ put_device(&p->dev);
}
-@@ -879,7 +878,6 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
+ void add_partition(struct gendisk *disk, int part, sector_t start, sector_t len, int flags)
{
- struct inode *inode = mapping->host;
- struct gfs2_inode *ip = GFS2_I(inode);
-- struct gfs2_sbd *sdp = GFS2_SB(inode);
- loff_t from = inode->i_size;
- unsigned long index = from >> PAGE_CACHE_SHIFT;
- unsigned offset = from & (PAGE_CACHE_SIZE-1);
-@@ -911,7 +909,7 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
- err = 0;
-
- if (!buffer_mapped(bh)) {
-- gfs2_get_block(inode, iblock, bh, 0);
-+ gfs2_block_map(inode, iblock, bh, 0);
- /* unmapped? It's a hole - nothing to do */
- if (!buffer_mapped(bh))
- goto unlock;
-@@ -931,7 +929,7 @@ static int gfs2_block_truncate_page(struct address_space *mapping)
- err = 0;
- }
+ struct hd_struct *p;
++ int err;
-- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
-+ if (!gfs2_is_writeback(ip))
- gfs2_trans_add_bh(ip->i_gl, bh, 0);
+ p = kzalloc(sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return;
+-
++
+ p->start_sect = start;
+ p->nr_sects = len;
+ p->partno = part;
+ p->policy = disk->policy;
- zero_user_page(page, offset, length, KM_USER0);
-@@ -1224,8 +1222,13 @@ int gfs2_write_alloc_required(struct gfs2_inode *ip, u64 offset,
- do_div(lblock_stop, bsize);
- } else {
- unsigned int shift = sdp->sd_sb.sb_bsize_shift;
-+ u64 end_of_file = (ip->i_di.di_size + sdp->sd_sb.sb_bsize - 1) >> shift;
- lblock = offset >> shift;
- lblock_stop = (offset + len + sdp->sd_sb.sb_bsize - 1) >> shift;
-+ if (lblock_stop > end_of_file) {
-+ *alloc_required = 1;
-+ return 0;
-+ }
+- if (isdigit(disk->kobj.k_name[strlen(disk->kobj.k_name)-1]))
+- kobject_set_name(&p->kobj, "%sp%d",
+- kobject_name(&disk->kobj), part);
++ if (isdigit(disk->dev.bus_id[strlen(disk->dev.bus_id)-1]))
++ snprintf(p->dev.bus_id, BUS_ID_SIZE,
++ "%sp%d", disk->dev.bus_id, part);
+ else
+- kobject_set_name(&p->kobj, "%s%d",
+- kobject_name(&disk->kobj),part);
+- p->kobj.parent = &disk->kobj;
+- p->kobj.ktype = &ktype_part;
+- kobject_init(&p->kobj);
+- kobject_add(&p->kobj);
+- if (!disk->part_uevent_suppress)
+- kobject_uevent(&p->kobj, KOBJ_ADD);
+- sysfs_create_link(&p->kobj, &block_subsys.kobj, "subsystem");
++ snprintf(p->dev.bus_id, BUS_ID_SIZE,
++ "%s%d", disk->dev.bus_id, part);
++
++ device_initialize(&p->dev);
++ p->dev.devt = MKDEV(disk->major, disk->first_minor + part);
++ p->dev.class = &block_class;
++ p->dev.type = &part_type;
++ p->dev.parent = &disk->dev;
++ disk->part[part-1] = p;
++
++ /* delay uevent until 'holders' subdir is created */
++ p->dev.uevent_suppress = 1;
++ device_add(&p->dev);
++ partition_sysfs_add_subdir(p);
++ p->dev.uevent_suppress = 0;
+ if (flags & ADDPART_FLAG_WHOLEDISK) {
+ static struct attribute addpartattr = {
+ .name = "whole_disk",
+ .mode = S_IRUSR | S_IRGRP | S_IROTH,
+ };
+-
+- sysfs_create_file(&p->kobj, &addpartattr);
++ err = sysfs_create_file(&p->dev.kobj, &addpartattr);
}
+- partition_sysfs_add_subdir(p);
+- disk->part[part-1] = p;
+-}
- for (; lblock < lblock_stop; lblock += extlen) {
-diff --git a/fs/gfs2/bmap.h b/fs/gfs2/bmap.h
-index ac2fd04..4e6cde2 100644
---- a/fs/gfs2/bmap.h
-+++ b/fs/gfs2/bmap.h
-@@ -15,7 +15,7 @@ struct gfs2_inode;
- struct page;
-
- int gfs2_unstuff_dinode(struct gfs2_inode *ip, struct page *page);
--int gfs2_block_map(struct inode *inode, u64 lblock, int create, struct buffer_head *bh);
-+int gfs2_block_map(struct inode *inode, sector_t lblock, struct buffer_head *bh, int create);
- int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsigned *extlen);
-
- int gfs2_truncatei(struct gfs2_inode *ip, u64 size);
-diff --git a/fs/gfs2/daemon.c b/fs/gfs2/daemon.c
-index 3731ab0..e519919 100644
---- a/fs/gfs2/daemon.c
-+++ b/fs/gfs2/daemon.c
-@@ -83,56 +83,6 @@ int gfs2_recoverd(void *data)
- }
-
- /**
-- * gfs2_logd - Update log tail as Active Items get flushed to in-place blocks
-- * @sdp: Pointer to GFS2 superblock
-- *
-- * Also, periodically check to make sure that we're using the most recent
-- * journal index.
-- */
--
--int gfs2_logd(void *data)
+-static char *make_block_name(struct gendisk *disk)
-{
-- struct gfs2_sbd *sdp = data;
-- struct gfs2_holder ji_gh;
-- unsigned long t;
-- int need_flush;
+- char *name;
+- static char *block_str = "block:";
+- int size;
+- char *s;
-
-- while (!kthread_should_stop()) {
-- /* Advance the log tail */
+- size = strlen(block_str) + strlen(disk->disk_name) + 1;
+- name = kmalloc(size, GFP_KERNEL);
+- if (!name)
+- return NULL;
+- strcpy(name, block_str);
+- strcat(name, disk->disk_name);
+- /* ewww... some of these buggers have / in name... */
+- s = strchr(name, '/');
+- if (s)
+- *s = '!';
+- return name;
+-}
-
-- t = sdp->sd_log_flush_time +
-- gfs2_tune_get(sdp, gt_log_flush_secs) * HZ;
+-static int disk_sysfs_symlinks(struct gendisk *disk)
+-{
+- struct device *target = get_device(disk->driverfs_dev);
+- int err;
+- char *disk_name = NULL;
-
-- gfs2_ail1_empty(sdp, DIO_ALL);
-- gfs2_log_lock(sdp);
-- need_flush = sdp->sd_log_num_buf > gfs2_tune_get(sdp, gt_incore_log_blocks);
-- gfs2_log_unlock(sdp);
-- if (need_flush || time_after_eq(jiffies, t)) {
-- gfs2_log_flush(sdp, NULL);
-- sdp->sd_log_flush_time = jiffies;
+- if (target) {
+- disk_name = make_block_name(disk);
+- if (!disk_name) {
+- err = -ENOMEM;
+- goto err_out;
- }
-
-- /* Check for latest journal index */
+- err = sysfs_create_link(&disk->kobj, &target->kobj, "device");
+- if (err)
+- goto err_out_disk_name;
-
-- t = sdp->sd_jindex_refresh_time +
-- gfs2_tune_get(sdp, gt_jindex_refresh_secs) * HZ;
+- err = sysfs_create_link(&target->kobj, &disk->kobj, disk_name);
+- if (err)
+- goto err_out_dev_link;
+- }
-
-- if (time_after_eq(jiffies, t)) {
-- if (!gfs2_jindex_hold(sdp, &ji_gh))
-- gfs2_glock_dq_uninit(&ji_gh);
-- sdp->sd_jindex_refresh_time = jiffies;
-- }
+- err = sysfs_create_link(&disk->kobj, &block_subsys.kobj,
+- "subsystem");
+- if (err)
+- goto err_out_disk_name_lnk;
-
-- t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
-- if (freezing(current))
-- refrigerator();
-- schedule_timeout_interruptible(t);
-- }
+- kfree(disk_name);
-
- return 0;
--}
-
--/**
- * gfs2_quotad - Write cached quota changes into the quota file
- * @sdp: Pointer to GFS2 superblock
- *
-diff --git a/fs/gfs2/daemon.h b/fs/gfs2/daemon.h
-index 0de9b35..4be084f 100644
---- a/fs/gfs2/daemon.h
-+++ b/fs/gfs2/daemon.h
-@@ -12,7 +12,6 @@
-
- int gfs2_glockd(void *data);
- int gfs2_recoverd(void *data);
--int gfs2_logd(void *data);
- int gfs2_quotad(void *data);
+-err_out_disk_name_lnk:
+- if (target) {
+- sysfs_remove_link(&target->kobj, disk_name);
+-err_out_dev_link:
+- sysfs_remove_link(&disk->kobj, "device");
+-err_out_disk_name:
+- kfree(disk_name);
+-err_out:
+- put_device(target);
+- }
+- return err;
++ /* suppress uevent if the disk supresses it */
++ if (!disk->dev.uevent_suppress)
++ kobject_uevent(&p->dev.kobj, KOBJ_ADD);
+ }
- #endif /* __DAEMON_DOT_H__ */
-diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
-index 9949bb7..57e2ed9 100644
---- a/fs/gfs2/dir.c
-+++ b/fs/gfs2/dir.c
-@@ -1876,7 +1876,7 @@ static int leaf_dealloc(struct gfs2_inode *dip, u32 index, u32 len,
- if (error)
- goto out;
+ /* Not exported, helper to add_disk(). */
+@@ -479,19 +374,29 @@ void register_disk(struct gendisk *disk)
+ struct hd_struct *p;
+ int err;
-- error = gfs2_rindex_hold(sdp, &dip->i_alloc.al_ri_gh);
-+ error = gfs2_rindex_hold(sdp, &dip->i_alloc->al_ri_gh);
- if (error)
- goto out_qs;
+- kobject_set_name(&disk->kobj, "%s", disk->disk_name);
+- /* ewww... some of these buggers have / in name... */
+- s = strchr(disk->kobj.k_name, '/');
++ disk->dev.parent = disk->driverfs_dev;
++ disk->dev.devt = MKDEV(disk->major, disk->first_minor);
++
++ strlcpy(disk->dev.bus_id, disk->disk_name, KOBJ_NAME_LEN);
++ /* ewww... some of these buggers have / in the name... */
++ s = strchr(disk->dev.bus_id, '/');
+ if (s)
+ *s = '!';
+- if ((err = kobject_add(&disk->kobj)))
++
++ /* delay uevents, until we scanned partition table */
++ disk->dev.uevent_suppress = 1;
++
++ if (device_add(&disk->dev))
+ return;
+- err = disk_sysfs_symlinks(disk);
++#ifndef CONFIG_SYSFS_DEPRECATED
++ err = sysfs_create_link(block_depr, &disk->dev.kobj,
++ kobject_name(&disk->dev.kobj));
+ if (err) {
+- kobject_del(&disk->kobj);
++ device_del(&disk->dev);
+ return;
+ }
+- disk_sysfs_add_subdirs(disk);
++#endif
++ disk_sysfs_add_subdirs(disk);
-@@ -1949,7 +1949,7 @@ out_rg_gunlock:
- gfs2_glock_dq_m(rlist.rl_rgrps, rlist.rl_ghs);
- out_rlist:
- gfs2_rlist_free(&rlist);
-- gfs2_glock_dq_uninit(&dip->i_alloc.al_ri_gh);
-+ gfs2_glock_dq_uninit(&dip->i_alloc->al_ri_gh);
- out_qs:
- gfs2_quota_unhold(dip);
- out:
-diff --git a/fs/gfs2/eaops.c b/fs/gfs2/eaops.c
-index aa8dbf3..f114ba2 100644
---- a/fs/gfs2/eaops.c
-+++ b/fs/gfs2/eaops.c
-@@ -56,46 +56,6 @@ unsigned int gfs2_ea_name2type(const char *name, const char **truncated_name)
- return type;
- }
+ /* No minors to use for partitions */
+ if (disk->minors == 1)
+@@ -505,25 +410,23 @@ void register_disk(struct gendisk *disk)
+ if (!bdev)
+ goto exit;
--static int user_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
-- int error = permission(inode, MAY_READ, NULL);
-- if (error)
-- return error;
--
-- return gfs2_ea_get_i(ip, er);
--}
--
--static int user_eo_set(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
--
-- if (S_ISREG(inode->i_mode) ||
-- (S_ISDIR(inode->i_mode) && !(inode->i_mode & S_ISVTX))) {
-- int error = permission(inode, MAY_WRITE, NULL);
-- if (error)
-- return error;
-- } else
-- return -EPERM;
--
-- return gfs2_ea_set_i(ip, er);
--}
--
--static int user_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
--
-- if (S_ISREG(inode->i_mode) ||
-- (S_ISDIR(inode->i_mode) && !(inode->i_mode & S_ISVTX))) {
-- int error = permission(inode, MAY_WRITE, NULL);
-- if (error)
-- return error;
-- } else
-- return -EPERM;
--
-- return gfs2_ea_remove_i(ip, er);
--}
--
- static int system_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
- {
- if (!GFS2_ACL_IS_ACCESS(er->er_name, er->er_name_len) &&
-@@ -108,8 +68,6 @@ static int system_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
- GFS2_ACL_IS_DEFAULT(er->er_name, er->er_name_len)))
- return -EOPNOTSUPP;
+- /* scan partition table, but suppress uevents */
+ bdev->bd_invalidated = 1;
+- disk->part_uevent_suppress = 1;
+ err = blkdev_get(bdev, FMODE_READ, 0);
+- disk->part_uevent_suppress = 0;
+ if (err < 0)
+ goto exit;
+ blkdev_put(bdev);
--
--
- return gfs2_ea_get_i(ip, er);
- }
+ exit:
+- /* announce disk after possible partitions are already created */
+- kobject_uevent(&disk->kobj, KOBJ_ADD);
++ /* announce disk after possible partitions are created */
++ disk->dev.uevent_suppress = 0;
++ kobject_uevent(&disk->dev.kobj, KOBJ_ADD);
-@@ -170,40 +128,10 @@ static int system_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
- return gfs2_ea_remove_i(ip, er);
+ /* announce possible partitions */
+ for (i = 1; i < disk->minors; i++) {
+ p = disk->part[i-1];
+ if (!p || !p->nr_sects)
+ continue;
+- kobject_uevent(&p->kobj, KOBJ_ADD);
++ kobject_uevent(&p->dev.kobj, KOBJ_ADD);
+ }
}
--static int security_eo_get(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
-- int error = permission(inode, MAY_READ, NULL);
-- if (error)
-- return error;
--
-- return gfs2_ea_get_i(ip, er);
--}
--
--static int security_eo_set(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
-- int error = permission(inode, MAY_WRITE, NULL);
-- if (error)
-- return error;
--
-- return gfs2_ea_set_i(ip, er);
--}
--
--static int security_eo_remove(struct gfs2_inode *ip, struct gfs2_ea_request *er)
--{
-- struct inode *inode = &ip->i_inode;
-- int error = permission(inode, MAY_WRITE, NULL);
-- if (error)
-- return error;
--
-- return gfs2_ea_remove_i(ip, er);
--}
--
- static const struct gfs2_eattr_operations gfs2_user_eaops = {
-- .eo_get = user_eo_get,
-- .eo_set = user_eo_set,
-- .eo_remove = user_eo_remove,
-+ .eo_get = gfs2_ea_get_i,
-+ .eo_set = gfs2_ea_set_i,
-+ .eo_remove = gfs2_ea_remove_i,
- .eo_name = "user",
- };
+@@ -602,19 +505,11 @@ void del_gendisk(struct gendisk *disk)
+ disk_stat_set_all(disk, 0);
+ disk->stamp = 0;
-@@ -215,9 +143,9 @@ const struct gfs2_eattr_operations gfs2_system_eaops = {
- };
+- kobject_uevent(&disk->kobj, KOBJ_REMOVE);
+- kobject_unregister(disk->holder_dir);
+- kobject_unregister(disk->slave_dir);
+- if (disk->driverfs_dev) {
+- char *disk_name = make_block_name(disk);
+- sysfs_remove_link(&disk->kobj, "device");
+- if (disk_name) {
+- sysfs_remove_link(&disk->driverfs_dev->kobj, disk_name);
+- kfree(disk_name);
+- }
+- put_device(disk->driverfs_dev);
+- disk->driverfs_dev = NULL;
+- }
+- sysfs_remove_link(&disk->kobj, "subsystem");
+- kobject_del(&disk->kobj);
++ kobject_put(disk->holder_dir);
++ kobject_put(disk->slave_dir);
++ disk->driverfs_dev = NULL;
++#ifndef CONFIG_SYSFS_DEPRECATED
++ sysfs_remove_link(block_depr, disk->dev.bus_id);
++#endif
++ device_del(&disk->dev);
+ }
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 7411bfb..91fa8e6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -310,6 +310,77 @@ static int proc_pid_schedstat(struct task_struct *task, char *buffer)
+ }
+ #endif
- static const struct gfs2_eattr_operations gfs2_security_eaops = {
-- .eo_get = security_eo_get,
-- .eo_set = security_eo_set,
-- .eo_remove = security_eo_remove,
-+ .eo_get = gfs2_ea_get_i,
-+ .eo_set = gfs2_ea_set_i,
-+ .eo_remove = gfs2_ea_remove_i,
- .eo_name = "security",
++#ifdef CONFIG_LATENCYTOP
++static int lstats_show_proc(struct seq_file *m, void *v)
++{
++ int i;
++ struct task_struct *task = m->private;
++ seq_puts(m, "Latency Top version : v0.1\n");
++
++ for (i = 0; i < 32; i++) {
++ if (task->latency_record[i].backtrace[0]) {
++ int q;
++ seq_printf(m, "%i %li %li ",
++ task->latency_record[i].count,
++ task->latency_record[i].time,
++ task->latency_record[i].max);
++ for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
++ char sym[KSYM_NAME_LEN];
++ char *c;
++ if (!task->latency_record[i].backtrace[q])
++ break;
++ if (task->latency_record[i].backtrace[q] == ULONG_MAX)
++ break;
++ sprint_symbol(sym, task->latency_record[i].backtrace[q]);
++ c = strchr(sym, '+');
++ if (c)
++ *c = 0;
++ seq_printf(m, "%s ", sym);
++ }
++ seq_printf(m, "\n");
++ }
++
++ }
++ return 0;
++}
++
++static int lstats_open(struct inode *inode, struct file *file)
++{
++ int ret;
++ struct seq_file *m;
++ struct task_struct *task = get_proc_task(inode);
++
++ ret = single_open(file, lstats_show_proc, NULL);
++ if (!ret) {
++ m = file->private_data;
++ m->private = task;
++ }
++ return ret;
++}
++
++static ssize_t lstats_write(struct file *file, const char __user *buf,
++ size_t count, loff_t *offs)
++{
++ struct seq_file *m;
++ struct task_struct *task;
++
++ m = file->private_data;
++ task = m->private;
++ clear_all_latency_tracing(task);
++
++ return count;
++}
++
++static const struct file_operations proc_lstats_operations = {
++ .open = lstats_open,
++ .read = seq_read,
++ .write = lstats_write,
++ .llseek = seq_lseek,
++ .release = single_release,
++};
++
++#endif
++
+ /* The badness from the OOM killer */
+ unsigned long badness(struct task_struct *p, unsigned long uptime);
+ static int proc_oom_score(struct task_struct *task, char *buffer)
+@@ -1020,6 +1091,7 @@ static const struct file_operations proc_fault_inject_operations = {
};
+ #endif
-diff --git a/fs/gfs2/eattr.c b/fs/gfs2/eattr.c
-index 2a7435b..bee9970 100644
---- a/fs/gfs2/eattr.c
-+++ b/fs/gfs2/eattr.c
-@@ -1418,7 +1418,7 @@ out:
- static int ea_dealloc_block(struct gfs2_inode *ip)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_rgrpd *rgd;
- struct buffer_head *dibh;
- int error;
-diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
-index a37efe4..80e09c5 100644
---- a/fs/gfs2/glock.c
-+++ b/fs/gfs2/glock.c
-@@ -1,6 +1,6 @@
++
+ #ifdef CONFIG_SCHED_DEBUG
/*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
- *
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -217,7 +217,6 @@ int gfs2_glock_put(struct gfs2_glock *gl)
- if (atomic_dec_and_test(&gl->gl_ref)) {
- hlist_del(&gl->gl_list);
- write_unlock(gl_lock_addr(gl->gl_hash));
-- BUG_ON(spin_is_locked(&gl->gl_spin));
- gfs2_assert(sdp, gl->gl_state == LM_ST_UNLOCKED);
- gfs2_assert(sdp, list_empty(&gl->gl_reclaim));
- gfs2_assert(sdp, list_empty(&gl->gl_holders));
-@@ -346,7 +345,6 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
- gl->gl_object = NULL;
- gl->gl_sbd = sdp;
- gl->gl_aspace = NULL;
-- lops_init_le(&gl->gl_le, &gfs2_glock_lops);
- INIT_DELAYED_WORK(&gl->gl_work, glock_work_func);
+ * Print out various scheduling related per-task fields:
+@@ -2230,6 +2302,9 @@ static const struct pid_entry tgid_base_stuff[] = {
+ #ifdef CONFIG_SCHEDSTATS
+ INF("schedstat", S_IRUGO, pid_schedstat),
+ #endif
++#ifdef CONFIG_LATENCYTOP
++ REG("latency", S_IRUGO, lstats),
++#endif
+ #ifdef CONFIG_PROC_PID_CPUSET
+ REG("cpuset", S_IRUGO, cpuset),
+ #endif
+@@ -2555,6 +2630,9 @@ static const struct pid_entry tid_base_stuff[] = {
+ #ifdef CONFIG_SCHEDSTATS
+ INF("schedstat", S_IRUGO, pid_schedstat),
+ #endif
++#ifdef CONFIG_LATENCYTOP
++ REG("latency", S_IRUGO, lstats),
++#endif
+ #ifdef CONFIG_PROC_PID_CPUSET
+ REG("cpuset", S_IRUGO, cpuset),
+ #endif
+diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
+index 0afe21e..4823c96 100644
+--- a/fs/proc/proc_net.c
++++ b/fs/proc/proc_net.c
+@@ -22,10 +22,48 @@
+ #include <linux/mount.h>
+ #include <linux/nsproxy.h>
+ #include <net/net_namespace.h>
++#include <linux/seq_file.h>
- /* If this glock protects actual on-disk data or metadata blocks,
-@@ -461,7 +459,6 @@ static void wait_on_holder(struct gfs2_holder *gh)
+ #include "internal.h"
- static void gfs2_demote_wake(struct gfs2_glock *gl)
- {
-- BUG_ON(!spin_is_locked(&gl->gl_spin));
- gl->gl_demote_state = LM_ST_EXCLUSIVE;
- clear_bit(GLF_DEMOTE, &gl->gl_flags);
- smp_mb__after_clear_bit();
-@@ -507,21 +504,12 @@ static int rq_mutex(struct gfs2_holder *gh)
- static int rq_promote(struct gfs2_holder *gh)
- {
- struct gfs2_glock *gl = gh->gh_gl;
-- struct gfs2_sbd *sdp = gl->gl_sbd;
- if (!relaxed_state_ok(gl->gl_state, gh->gh_state, gh->gh_flags)) {
- if (list_empty(&gl->gl_holders)) {
- gl->gl_req_gh = gh;
- set_bit(GLF_LOCK, &gl->gl_flags);
- spin_unlock(&gl->gl_spin);
--
-- if (atomic_read(&sdp->sd_reclaim_count) >
-- gfs2_tune_get(sdp, gt_reclaim_limit) &&
-- !(gh->gh_flags & LM_FLAG_PRIORITY)) {
-- gfs2_reclaim_glock(sdp);
-- gfs2_reclaim_glock(sdp);
-- }
--
- gfs2_glock_xmote_th(gh->gh_gl, gh);
- spin_lock(&gl->gl_spin);
- }
-@@ -567,7 +555,10 @@ static int rq_demote(struct gfs2_glock *gl)
- gfs2_demote_wake(gl);
- return 0;
- }
++int seq_open_net(struct inode *ino, struct file *f,
++ const struct seq_operations *ops, int size)
++{
++ struct net *net;
++ struct seq_net_private *p;
+
- set_bit(GLF_LOCK, &gl->gl_flags);
-+ set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
++ BUG_ON(size < sizeof(*p));
+
- if (gl->gl_demote_state == LM_ST_UNLOCKED ||
- gl->gl_state != LM_ST_EXCLUSIVE) {
- spin_unlock(&gl->gl_spin);
-@@ -576,7 +567,9 @@ static int rq_demote(struct gfs2_glock *gl)
- spin_unlock(&gl->gl_spin);
- gfs2_glock_xmote_th(gl, NULL);
- }
++ net = get_proc_net(ino);
++ if (net == NULL)
++ return -ENXIO;
+
- spin_lock(&gl->gl_spin);
-+ clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
-
- return 0;
++ p = __seq_open_private(f, ops, size);
++ if (p == NULL) {
++ put_net(net);
++ return -ENOMEM;
++ }
++ p->net = net;
++ return 0;
++}
++EXPORT_SYMBOL_GPL(seq_open_net);
++
++int seq_release_net(struct inode *ino, struct file *f)
++{
++ struct seq_file *seq;
++ struct seq_net_private *p;
++
++ seq = f->private_data;
++ p = seq->private;
++
++ put_net(p->net);
++ seq_release_private(ino, f);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(seq_release_net);
++
++
+ struct proc_dir_entry *proc_net_fops_create(struct net *net,
+ const char *name, mode_t mode, const struct file_operations *fops)
+ {
+@@ -58,6 +96,17 @@ static struct proc_dir_entry *proc_net_shadow(struct task_struct *task,
+ return task->nsproxy->net_ns->proc_net;
}
-@@ -598,23 +591,18 @@ static void run_queue(struct gfs2_glock *gl)
- if (!list_empty(&gl->gl_waiters1)) {
- gh = list_entry(gl->gl_waiters1.next,
- struct gfs2_holder, gh_list);
--
-- if (test_bit(HIF_MUTEX, &gh->gh_iflags))
-- blocked = rq_mutex(gh);
-- else
-- gfs2_assert_warn(gl->gl_sbd, 0);
--
-+ blocked = rq_mutex(gh);
- } else if (test_bit(GLF_DEMOTE, &gl->gl_flags)) {
- blocked = rq_demote(gl);
-+ if (gl->gl_waiters2 && !blocked) {
-+ set_bit(GLF_DEMOTE, &gl->gl_flags);
-+ gl->gl_demote_state = LM_ST_UNLOCKED;
-+ }
-+ gl->gl_waiters2 = 0;
- } else if (!list_empty(&gl->gl_waiters3)) {
- gh = list_entry(gl->gl_waiters3.next,
- struct gfs2_holder, gh_list);
--
-- if (test_bit(HIF_PROMOTE, &gh->gh_iflags))
-- blocked = rq_promote(gh);
-- else
-- gfs2_assert_warn(gl->gl_sbd, 0);
--
-+ blocked = rq_promote(gh);
- } else
- break;
-@@ -632,27 +620,21 @@ static void run_queue(struct gfs2_glock *gl)
++struct proc_dir_entry *proc_net_mkdir(struct net *net, const char *name,
++ struct proc_dir_entry *parent)
++{
++ struct proc_dir_entry *pde;
++ pde = proc_mkdir_mode(name, S_IRUGO | S_IXUGO, parent);
++ if (pde != NULL)
++ pde->data = net;
++ return pde;
++}
++EXPORT_SYMBOL_GPL(proc_net_mkdir);
++
+ static __net_init int proc_net_ns_init(struct net *net)
+ {
+ struct proc_dir_entry *root, *netd, *net_statd;
+@@ -69,18 +118,16 @@ static __net_init int proc_net_ns_init(struct net *net)
+ goto out;
- static void gfs2_glmutex_lock(struct gfs2_glock *gl)
+ err = -EEXIST;
+- netd = proc_mkdir("net", root);
++ netd = proc_net_mkdir(net, "net", root);
+ if (!netd)
+ goto free_root;
+
+ err = -EEXIST;
+- net_statd = proc_mkdir("stat", netd);
++ net_statd = proc_net_mkdir(net, "stat", netd);
+ if (!net_statd)
+ goto free_net;
+
+ root->data = net;
+- netd->data = net;
+- net_statd->data = net;
+
+ net->proc_net_root = root;
+ net->proc_net = netd;
+diff --git a/fs/read_write.c b/fs/read_write.c
+index ea1f94c..1c177f2 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -197,25 +197,27 @@ int rw_verify_area(int read_write, struct file *file, loff_t *ppos, size_t count
{
-- struct gfs2_holder gh;
--
-- gfs2_holder_init(gl, 0, 0, &gh);
-- set_bit(HIF_MUTEX, &gh.gh_iflags);
-- if (test_and_set_bit(HIF_WAIT, &gh.gh_iflags))
-- BUG();
--
- spin_lock(&gl->gl_spin);
- if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
-+ struct gfs2_holder gh;
-+
-+ gfs2_holder_init(gl, 0, 0, &gh);
-+ set_bit(HIF_WAIT, &gh.gh_iflags);
- list_add_tail(&gh.gh_list, &gl->gl_waiters1);
-+ spin_unlock(&gl->gl_spin);
-+ wait_on_holder(&gh);
-+ gfs2_holder_uninit(&gh);
- } else {
- gl->gl_owner_pid = current->pid;
- gl->gl_ip = (unsigned long)__builtin_return_address(0);
-- clear_bit(HIF_WAIT, &gh.gh_iflags);
-- smp_mb();
-- wake_up_bit(&gh.gh_iflags, HIF_WAIT);
-+ spin_unlock(&gl->gl_spin);
+ struct inode *inode;
+ loff_t pos;
++ int retval = -EINVAL;
+
+ inode = file->f_path.dentry->d_inode;
+ if (unlikely((ssize_t) count < 0))
+- goto Einval;
++ return retval;
+ pos = *ppos;
+ if (unlikely((pos < 0) || (loff_t) (pos + count) < 0))
+- goto Einval;
++ return retval;
+
+ if (unlikely(inode->i_flock && mandatory_lock(inode))) {
+- int retval = locks_mandatory_area(
++ retval = locks_mandatory_area(
+ read_write == READ ? FLOCK_VERIFY_READ : FLOCK_VERIFY_WRITE,
+ inode, file, pos, count);
+ if (retval < 0)
+ return retval;
}
-- spin_unlock(&gl->gl_spin);
++ retval = security_file_permission(file,
++ read_write == READ ? MAY_READ : MAY_WRITE);
++ if (retval)
++ return retval;
+ return count > MAX_RW_COUNT ? MAX_RW_COUNT : count;
-
-- wait_on_holder(&gh);
-- gfs2_holder_uninit(&gh);
+-Einval:
+- return -EINVAL;
}
- /**
-@@ -691,7 +673,6 @@ static void gfs2_glmutex_unlock(struct gfs2_glock *gl)
- gl->gl_owner_pid = 0;
- gl->gl_ip = 0;
- run_queue(gl);
-- BUG_ON(!spin_is_locked(&gl->gl_spin));
- spin_unlock(&gl->gl_spin);
- }
+ static void wait_on_retry_sync_kiocb(struct kiocb *iocb)
+@@ -267,18 +269,15 @@ ssize_t vfs_read(struct file *file, char __user *buf, size_t count, loff_t *pos)
+ ret = rw_verify_area(READ, file, pos, count);
+ if (ret >= 0) {
+ count = ret;
+- ret = security_file_permission (file, MAY_READ);
+- if (!ret) {
+- if (file->f_op->read)
+- ret = file->f_op->read(file, buf, count, pos);
+- else
+- ret = do_sync_read(file, buf, count, pos);
+- if (ret > 0) {
+- fsnotify_access(file->f_path.dentry);
+- add_rchar(current, ret);
+- }
+- inc_syscr(current);
++ if (file->f_op->read)
++ ret = file->f_op->read(file, buf, count, pos);
++ else
++ ret = do_sync_read(file, buf, count, pos);
++ if (ret > 0) {
++ fsnotify_access(file->f_path.dentry);
++ add_rchar(current, ret);
+ }
++ inc_syscr(current);
+ }
-@@ -722,7 +703,10 @@ static void handle_callback(struct gfs2_glock *gl, unsigned int state,
+ return ret;
+@@ -325,18 +324,15 @@ ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_
+ ret = rw_verify_area(WRITE, file, pos, count);
+ if (ret >= 0) {
+ count = ret;
+- ret = security_file_permission (file, MAY_WRITE);
+- if (!ret) {
+- if (file->f_op->write)
+- ret = file->f_op->write(file, buf, count, pos);
+- else
+- ret = do_sync_write(file, buf, count, pos);
+- if (ret > 0) {
+- fsnotify_modify(file->f_path.dentry);
+- add_wchar(current, ret);
+- }
+- inc_syscw(current);
++ if (file->f_op->write)
++ ret = file->f_op->write(file, buf, count, pos);
++ else
++ ret = do_sync_write(file, buf, count, pos);
++ if (ret > 0) {
++ fsnotify_modify(file->f_path.dentry);
++ add_wchar(current, ret);
}
- } else if (gl->gl_demote_state != LM_ST_UNLOCKED &&
- gl->gl_demote_state != state) {
-- gl->gl_demote_state = LM_ST_UNLOCKED;
-+ if (test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags))
-+ gl->gl_waiters2 = 1;
-+ else
-+ gl->gl_demote_state = LM_ST_UNLOCKED;
++ inc_syscw(current);
}
- spin_unlock(&gl->gl_spin);
+
+ return ret;
+@@ -450,6 +446,7 @@ unsigned long iov_shorten(struct iovec *iov, unsigned long nr_segs, size_t to)
+ }
+ return seg;
}
-@@ -943,8 +927,8 @@ static void gfs2_glock_drop_th(struct gfs2_glock *gl)
- const struct gfs2_glock_operations *glops = gl->gl_ops;
- unsigned int ret;
++EXPORT_SYMBOL(iov_shorten);
-- if (glops->go_drop_th)
-- glops->go_drop_th(gl);
-+ if (glops->go_xmote_th)
-+ glops->go_xmote_th(gl);
+ ssize_t do_sync_readv_writev(struct file *filp, const struct iovec *iov,
+ unsigned long nr_segs, size_t len, loff_t *ppos, iov_fn_t fn)
+@@ -603,9 +600,6 @@ static ssize_t do_readv_writev(int type, struct file *file,
+ ret = rw_verify_area(type, file, pos, tot_len);
+ if (ret < 0)
+ goto out;
+- ret = security_file_permission(file, type == READ ? MAY_READ : MAY_WRITE);
+- if (ret)
+- goto out;
- gfs2_assert_warn(sdp, test_bit(GLF_LOCK, &gl->gl_flags));
- gfs2_assert_warn(sdp, list_empty(&gl->gl_holders));
-@@ -1156,8 +1140,6 @@ restart:
- return -EIO;
- }
+ fnv = NULL;
+ if (type == READ) {
+@@ -737,10 +731,6 @@ static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
+ goto fput_in;
+ count = retval;
-- set_bit(HIF_PROMOTE, &gh->gh_iflags);
+- retval = security_file_permission (in_file, MAY_READ);
+- if (retval)
+- goto fput_in;
-
- spin_lock(&gl->gl_spin);
- add_to_queue(gh);
- run_queue(gl);
-@@ -1248,12 +1230,11 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
- list_del_init(&gh->gh_list);
+ /*
+ * Get output file, and verify that it is ok..
+ */
+@@ -759,10 +749,6 @@ static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
+ goto fput_out;
+ count = retval;
- if (list_empty(&gl->gl_holders)) {
-- spin_unlock(&gl->gl_spin);
--
-- if (glops->go_unlock)
-+ if (glops->go_unlock) {
-+ spin_unlock(&gl->gl_spin);
- glops->go_unlock(gh);
+- retval = security_file_permission (out_file, MAY_WRITE);
+- if (retval)
+- goto fput_out;
-
-- spin_lock(&gl->gl_spin);
-+ spin_lock(&gl->gl_spin);
-+ }
- gl->gl_stamp = jiffies;
- }
+ if (!max)
+ max = min(in_inode->i_sb->s_maxbytes, out_inode->i_sb->s_maxbytes);
-@@ -1910,8 +1891,6 @@ static int dump_glock(struct glock_iter *gi, struct gfs2_glock *gl)
- print_dbg(gi, " req_bh = %s\n", (gl->gl_req_bh) ? "yes" : "no");
- print_dbg(gi, " lvb_count = %d\n", atomic_read(&gl->gl_lvb_count));
- print_dbg(gi, " object = %s\n", (gl->gl_object) ? "yes" : "no");
-- print_dbg(gi, " le = %s\n",
-- (list_empty(&gl->gl_le.le_list)) ? "no" : "yes");
- print_dbg(gi, " reclaim = %s\n",
- (list_empty(&gl->gl_reclaim)) ? "no" : "yes");
- if (gl->gl_aspace)
-diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
-index 4670dcb..c663b7a 100644
---- a/fs/gfs2/glops.c
-+++ b/fs/gfs2/glops.c
-@@ -56,7 +56,7 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl)
- bd = list_entry(head->next, struct gfs2_bufdata,
- bd_ail_gl_list);
- bh = bd->bd_bh;
-- gfs2_remove_from_ail(NULL, bd);
-+ gfs2_remove_from_ail(bd);
- bd->bd_bh = NULL;
- bh->b_private = NULL;
- bd->bd_blkno = bh->b_blocknr;
-@@ -86,15 +86,10 @@ static void gfs2_pte_inval(struct gfs2_glock *gl)
- if (!ip || !S_ISREG(inode->i_mode))
- return;
+diff --git a/fs/smbfs/Makefile b/fs/smbfs/Makefile
+index 6673ee8..4faf8c4 100644
+--- a/fs/smbfs/Makefile
++++ b/fs/smbfs/Makefile
+@@ -16,23 +16,3 @@ EXTRA_CFLAGS += -DSMBFS_PARANOIA
+ #EXTRA_CFLAGS += -DDEBUG_SMB_TIMESTAMP
+ #EXTRA_CFLAGS += -Werror
-- if (!test_bit(GIF_PAGED, &ip->i_flags))
-- return;
+-#
+-# Maintainer rules
+-#
-
- unmap_shared_mapping_range(inode->i_mapping, 0, 0);
+-# getopt.c not included. It is intentionally separate
+-SRC = proc.c dir.c cache.c sock.c inode.c file.c ioctl.c smbiod.c request.c \
+- symlink.c
-
- if (test_bit(GIF_SW_PAGED, &ip->i_flags))
- set_bit(GLF_DIRTY, &gl->gl_flags);
+-proto:
+- -rm -f proto.h
+- @echo > proto2.h "/*"
+- @echo >> proto2.h " * Autogenerated with cproto on: " `date`
+- @echo >> proto2.h " */"
+- @echo >> proto2.h ""
+- @echo >> proto2.h "struct smb_request;"
+- @echo >> proto2.h "struct sock;"
+- @echo >> proto2.h "struct statfs;"
+- @echo >> proto2.h ""
+- cproto -E "gcc -E" -e -v -I $(TOPDIR)/include -DMAKING_PROTO -D__KERNEL__ $(SRC) >> proto2.h
+- mv proto2.h proto.h
+diff --git a/fs/splice.c b/fs/splice.c
+index 6bdcb61..1577a73 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -254,11 +254,16 @@ ssize_t splice_to_pipe(struct pipe_inode_info *pipe,
+ }
-- clear_bit(GIF_SW_PAGED, &ip->i_flags);
+ while (page_nr < spd_pages)
+- page_cache_release(spd->pages[page_nr++]);
++ spd->spd_release(spd, page_nr++);
+
+ return ret;
}
- /**
-@@ -143,44 +138,34 @@ static void meta_go_inval(struct gfs2_glock *gl, int flags)
- static void inode_go_sync(struct gfs2_glock *gl)
- {
- struct gfs2_inode *ip = gl->gl_object;
-+ struct address_space *metamapping = gl->gl_aspace->i_mapping;
-+ int error;
++static void spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
++{
++ page_cache_release(spd->pages[i]);
++}
+
-+ if (gl->gl_state != LM_ST_UNLOCKED)
-+ gfs2_pte_inval(gl);
-+ if (gl->gl_state != LM_ST_EXCLUSIVE)
-+ return;
+ static int
+ __generic_file_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+@@ -277,6 +282,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
+ .partial = partial,
+ .flags = flags,
+ .ops = &page_cache_pipe_buf_ops,
++ .spd_release = spd_release_page,
+ };
- if (ip && !S_ISREG(ip->i_inode.i_mode))
- ip = NULL;
+ index = *ppos >> PAGE_CACHE_SHIFT;
+@@ -908,10 +914,6 @@ static long do_splice_from(struct pipe_inode_info *pipe, struct file *out,
+ if (unlikely(ret < 0))
+ return ret;
- if (test_bit(GLF_DIRTY, &gl->gl_flags)) {
-- if (ip && !gfs2_is_jdata(ip))
-- filemap_fdatawrite(ip->i_inode.i_mapping);
- gfs2_log_flush(gl->gl_sbd, gl);
-- if (ip && gfs2_is_jdata(ip))
-- filemap_fdatawrite(ip->i_inode.i_mapping);
-- gfs2_meta_sync(gl);
-+ filemap_fdatawrite(metamapping);
- if (ip) {
- struct address_space *mapping = ip->i_inode.i_mapping;
-- int error = filemap_fdatawait(mapping);
-+ filemap_fdatawrite(mapping);
-+ error = filemap_fdatawait(mapping);
- mapping_set_error(mapping, error);
- }
-+ error = filemap_fdatawait(metamapping);
-+ mapping_set_error(metamapping, error);
- clear_bit(GLF_DIRTY, &gl->gl_flags);
- gfs2_ail_empty_gl(gl);
- }
+- ret = security_file_permission(out, MAY_WRITE);
+- if (unlikely(ret < 0))
+- return ret;
+-
+ return out->f_op->splice_write(pipe, out, ppos, len, flags);
}
- /**
-- * inode_go_xmote_th - promote/demote a glock
-- * @gl: the glock
-- * @state: the requested state
-- * @flags:
-- *
-- */
--
--static void inode_go_xmote_th(struct gfs2_glock *gl)
--{
-- if (gl->gl_state != LM_ST_UNLOCKED)
-- gfs2_pte_inval(gl);
-- if (gl->gl_state == LM_ST_EXCLUSIVE)
-- inode_go_sync(gl);
--}
+@@ -934,10 +936,6 @@ static long do_splice_to(struct file *in, loff_t *ppos,
+ if (unlikely(ret < 0))
+ return ret;
+
+- ret = security_file_permission(in, MAY_READ);
+- if (unlikely(ret < 0))
+- return ret;
-
--/**
- * inode_go_xmote_bh - After promoting/demoting a glock
- * @gl: the glock
- *
-@@ -201,22 +186,6 @@ static void inode_go_xmote_bh(struct gfs2_glock *gl)
+ return in->f_op->splice_read(in, ppos, pipe, len, flags);
}
- /**
-- * inode_go_drop_th - unlock a glock
-- * @gl: the glock
-- *
-- * Invoked from rq_demote().
-- * Another node needs the lock in EXCLUSIVE mode, or lock (unused for too long)
-- * is being purged from our node's glock cache; we're dropping lock.
-- */
--
--static void inode_go_drop_th(struct gfs2_glock *gl)
--{
-- gfs2_pte_inval(gl);
-- if (gl->gl_state == LM_ST_EXCLUSIVE)
-- inode_go_sync(gl);
--}
--
--/**
- * inode_go_inval - prepare a inode glock to be released
- * @gl: the glock
- * @flags:
-@@ -234,10 +203,8 @@ static void inode_go_inval(struct gfs2_glock *gl, int flags)
- set_bit(GIF_INVALID, &ip->i_flags);
+@@ -1033,7 +1031,11 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
+ goto out_release;
}
-- if (ip && S_ISREG(ip->i_inode.i_mode)) {
-+ if (ip && S_ISREG(ip->i_inode.i_mode))
- truncate_inode_pages(ip->i_inode.i_mapping, 0);
-- clear_bit(GIF_PAGED, &ip->i_flags);
-- }
- }
-
- /**
-@@ -294,23 +261,6 @@ static int inode_go_lock(struct gfs2_holder *gh)
- }
++done:
+ pipe->nrbufs = pipe->curbuf = 0;
++ if (bytes > 0)
++ file_accessed(in);
++
+ return bytes;
- /**
-- * inode_go_unlock - operation done before an inode lock is unlocked by a
-- * process
-- * @gl: the glock
-- * @flags:
-- *
-- */
--
--static void inode_go_unlock(struct gfs2_holder *gh)
--{
-- struct gfs2_glock *gl = gh->gh_gl;
-- struct gfs2_inode *ip = gl->gl_object;
--
-- if (ip)
-- gfs2_meta_cache_flush(ip);
--}
+ out_release:
+@@ -1049,16 +1051,11 @@ out_release:
+ buf->ops = NULL;
+ }
+ }
+- pipe->nrbufs = pipe->curbuf = 0;
-
--/**
- * rgrp_go_demote_ok - Check to see if it's ok to unlock a RG's glock
- * @gl: the glock
- *
-@@ -350,14 +300,14 @@ static void rgrp_go_unlock(struct gfs2_holder *gh)
- }
+- /*
+- * If we transferred some data, return the number of bytes:
+- */
+- if (bytes > 0)
+- return bytes;
- /**
-- * trans_go_xmote_th - promote/demote the transaction glock
-+ * trans_go_sync - promote/demote the transaction glock
- * @gl: the glock
- * @state: the requested state
- * @flags:
- *
- */
+- return ret;
++ if (!bytes)
++ bytes = ret;
--static void trans_go_xmote_th(struct gfs2_glock *gl)
-+static void trans_go_sync(struct gfs2_glock *gl)
- {
- struct gfs2_sbd *sdp = gl->gl_sbd;
++ goto done;
+ }
+ EXPORT_SYMBOL(splice_direct_to_actor);
-@@ -384,7 +334,6 @@ static void trans_go_xmote_bh(struct gfs2_glock *gl)
+@@ -1440,6 +1437,7 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
+ .partial = partial,
+ .flags = flags,
+ .ops = &user_page_pipe_buf_ops,
++ .spd_release = spd_release_page,
+ };
- if (gl->gl_state != LM_ST_UNLOCKED &&
- test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) {
-- gfs2_meta_cache_flush(GFS2_I(sdp->sd_jdesc->jd_inode));
- j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ pipe = pipe_info(file->f_path.dentry->d_inode);
+diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
+index f281cc6..4948d9b 100644
+--- a/fs/sysfs/dir.c
++++ b/fs/sysfs/dir.c
+@@ -440,7 +440,7 @@ int sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
+ /**
+ * sysfs_remove_one - remove sysfs_dirent from parent
+ * @acxt: addrm context to use
+- * @sd: sysfs_dirent to be added
++ * @sd: sysfs_dirent to be removed
+ *
+ * Mark @sd removed and drop nlink of parent inode if @sd is a
+ * directory. @sd is unlinked from the children list.
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index 4045bdc..a271c87 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -20,43 +20,6 @@
- error = gfs2_find_jhead(sdp->sd_jdesc, &head);
-@@ -402,24 +351,6 @@ static void trans_go_xmote_bh(struct gfs2_glock *gl)
- }
+ #include "sysfs.h"
- /**
-- * trans_go_drop_th - unlock the transaction glock
-- * @gl: the glock
-- *
-- * We want to sync the device even with localcaching. Remember
-- * that localcaching journal replay only marks buffers dirty.
-- */
+-#define to_sattr(a) container_of(a,struct subsys_attribute, attr)
-
--static void trans_go_drop_th(struct gfs2_glock *gl)
+-/*
+- * Subsystem file operations.
+- * These operations allow subsystems to have files that can be
+- * read/written.
+- */
+-static ssize_t
+-subsys_attr_show(struct kobject * kobj, struct attribute * attr, char * page)
-{
-- struct gfs2_sbd *sdp = gl->gl_sbd;
+- struct kset *kset = to_kset(kobj);
+- struct subsys_attribute * sattr = to_sattr(attr);
+- ssize_t ret = -EIO;
-
-- if (test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) {
-- gfs2_meta_syncfs(sdp);
-- gfs2_log_shutdown(sdp);
-- }
+- if (sattr->show)
+- ret = sattr->show(kset, page);
+- return ret;
-}
-
--/**
- * quota_go_demote_ok - Check to see if it's ok to unlock a quota glock
- * @gl: the glock
- *
-@@ -433,25 +364,21 @@ static int quota_go_demote_ok(struct gfs2_glock *gl)
-
- const struct gfs2_glock_operations gfs2_meta_glops = {
- .go_xmote_th = meta_go_sync,
-- .go_drop_th = meta_go_sync,
- .go_type = LM_TYPE_META,
- };
-
- const struct gfs2_glock_operations gfs2_inode_glops = {
-- .go_xmote_th = inode_go_xmote_th,
-+ .go_xmote_th = inode_go_sync,
- .go_xmote_bh = inode_go_xmote_bh,
-- .go_drop_th = inode_go_drop_th,
- .go_inval = inode_go_inval,
- .go_demote_ok = inode_go_demote_ok,
- .go_lock = inode_go_lock,
-- .go_unlock = inode_go_unlock,
- .go_type = LM_TYPE_INODE,
- .go_min_hold_time = HZ / 10,
- };
-
- const struct gfs2_glock_operations gfs2_rgrp_glops = {
- .go_xmote_th = meta_go_sync,
-- .go_drop_th = meta_go_sync,
- .go_inval = meta_go_inval,
- .go_demote_ok = rgrp_go_demote_ok,
- .go_lock = rgrp_go_lock,
-@@ -461,9 +388,8 @@ const struct gfs2_glock_operations gfs2_rgrp_glops = {
- };
-
- const struct gfs2_glock_operations gfs2_trans_glops = {
-- .go_xmote_th = trans_go_xmote_th,
-+ .go_xmote_th = trans_go_sync,
- .go_xmote_bh = trans_go_xmote_bh,
-- .go_drop_th = trans_go_drop_th,
- .go_type = LM_TYPE_NONDISK,
- };
-
-diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
-index eaddfb5..513aaf0 100644
---- a/fs/gfs2/incore.h
-+++ b/fs/gfs2/incore.h
-@@ -1,6 +1,6 @@
- /*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
- *
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -131,7 +131,6 @@ struct gfs2_bufdata {
- struct gfs2_glock_operations {
- void (*go_xmote_th) (struct gfs2_glock *gl);
- void (*go_xmote_bh) (struct gfs2_glock *gl);
-- void (*go_drop_th) (struct gfs2_glock *gl);
- void (*go_inval) (struct gfs2_glock *gl, int flags);
- int (*go_demote_ok) (struct gfs2_glock *gl);
- int (*go_lock) (struct gfs2_holder *gh);
-@@ -141,10 +140,6 @@ struct gfs2_glock_operations {
- };
-
- enum {
-- /* Actions */
-- HIF_MUTEX = 0,
-- HIF_PROMOTE = 1,
+-static ssize_t
+-subsys_attr_store(struct kobject * kobj, struct attribute * attr,
+- const char * page, size_t count)
+-{
+- struct kset *kset = to_kset(kobj);
+- struct subsys_attribute * sattr = to_sattr(attr);
+- ssize_t ret = -EIO;
-
- /* States */
- HIF_HOLDER = 6,
- HIF_FIRST = 7,
-@@ -171,6 +166,8 @@ enum {
- GLF_DEMOTE = 3,
- GLF_PENDING_DEMOTE = 4,
- GLF_DIRTY = 5,
-+ GLF_DEMOTE_IN_PROGRESS = 6,
-+ GLF_LFLUSH = 7,
- };
-
- struct gfs2_glock {
-@@ -190,6 +187,7 @@ struct gfs2_glock {
- struct list_head gl_holders;
- struct list_head gl_waiters1; /* HIF_MUTEX */
- struct list_head gl_waiters3; /* HIF_PROMOTE */
-+ int gl_waiters2; /* GIF_DEMOTE */
-
- const struct gfs2_glock_operations *gl_ops;
-
-@@ -210,7 +208,6 @@ struct gfs2_glock {
- struct gfs2_sbd *gl_sbd;
-
- struct inode *gl_aspace;
-- struct gfs2_log_element gl_le;
- struct list_head gl_ail_list;
- atomic_t gl_ail_count;
- struct delayed_work gl_work;
-@@ -239,7 +236,6 @@ struct gfs2_alloc {
- enum {
- GIF_INVALID = 0,
- GIF_QD_LOCKED = 1,
-- GIF_PAGED = 2,
- GIF_SW_PAGED = 3,
- };
-
-@@ -268,14 +264,10 @@ struct gfs2_inode {
- struct gfs2_glock *i_gl; /* Move into i_gh? */
- struct gfs2_holder i_iopen_gh;
- struct gfs2_holder i_gh; /* for prepare/commit_write only */
-- struct gfs2_alloc i_alloc;
-+ struct gfs2_alloc *i_alloc;
- u64 i_last_rg_alloc;
-
-- spinlock_t i_spin;
- struct rw_semaphore i_rw_mutex;
-- unsigned long i_last_pfault;
+- if (sattr->store)
+- ret = sattr->store(kset, page, count);
+- return ret;
+-}
-
-- struct buffer_head *i_cache[GFS2_MAX_META_HEIGHT];
- };
-
- /*
-@@ -287,19 +279,12 @@ static inline struct gfs2_inode *GFS2_I(struct inode *inode)
- return container_of(inode, struct gfs2_inode, i_inode);
- }
-
--/* To be removed? */
--static inline struct gfs2_sbd *GFS2_SB(struct inode *inode)
-+static inline struct gfs2_sbd *GFS2_SB(const struct inode *inode)
- {
- return inode->i_sb->s_fs_info;
- }
-
--enum {
-- GFF_DID_DIRECT_ALLOC = 0,
-- GFF_EXLOCK = 1,
+-static struct sysfs_ops subsys_sysfs_ops = {
+- .show = subsys_attr_show,
+- .store = subsys_attr_store,
-};
-
- struct gfs2_file {
-- unsigned long f_flags; /* GFF_... */
- struct mutex f_fl_mutex;
- struct gfs2_holder f_fl_gh;
- };
-@@ -373,8 +358,17 @@ struct gfs2_ail {
- u64 ai_sync_gen;
- };
-
-+struct gfs2_journal_extent {
-+ struct list_head extent_list;
-+
-+ unsigned int lblock; /* First logical block */
-+ u64 dblock; /* First disk block */
-+ u64 blocks;
-+};
-+
- struct gfs2_jdesc {
- struct list_head jd_list;
-+ struct list_head extent_list;
+ /*
+ * There's one sysfs_buffer for each open file and one
+ * sysfs_open_dirent for each sysfs_dirent with one or more open
+@@ -66,7 +29,7 @@ static struct sysfs_ops subsys_sysfs_ops = {
+ * sysfs_dirent->s_attr.open points to sysfs_open_dirent. s_attr.open
+ * is protected by sysfs_open_dirent_lock.
+ */
+-static spinlock_t sysfs_open_dirent_lock = SPIN_LOCK_UNLOCKED;
++static DEFINE_SPINLOCK(sysfs_open_dirent_lock);
- struct inode *jd_inode;
- unsigned int jd_jid;
-@@ -421,13 +415,9 @@ struct gfs2_args {
- struct gfs2_tune {
- spinlock_t gt_spin;
+ struct sysfs_open_dirent {
+ atomic_t refcnt;
+@@ -354,31 +317,23 @@ static int sysfs_open_file(struct inode *inode, struct file *file)
+ {
+ struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata;
+ struct kobject *kobj = attr_sd->s_parent->s_dir.kobj;
+- struct sysfs_buffer * buffer;
+- struct sysfs_ops * ops = NULL;
+- int error;
++ struct sysfs_buffer *buffer;
++ struct sysfs_ops *ops;
++ int error = -EACCES;
-- unsigned int gt_ilimit;
-- unsigned int gt_ilimit_tries;
-- unsigned int gt_ilimit_min;
- unsigned int gt_demote_secs; /* Cache retention for unheld glock */
- unsigned int gt_incore_log_blocks;
- unsigned int gt_log_flush_secs;
-- unsigned int gt_jindex_refresh_secs; /* Check for new journal index */
+ /* need attr_sd for attr and ops, its parent for kobj */
+ if (!sysfs_get_active_two(attr_sd))
+ return -ENODEV;
- unsigned int gt_recoverd_secs;
- unsigned int gt_logd_secs;
-@@ -443,10 +433,8 @@ struct gfs2_tune {
- unsigned int gt_new_files_jdata;
- unsigned int gt_new_files_directio;
- unsigned int gt_max_readahead; /* Max bytes to read-ahead from disk */
-- unsigned int gt_lockdump_size;
- unsigned int gt_stall_secs; /* Detects trouble! */
- unsigned int gt_complain_secs;
-- unsigned int gt_reclaim_limit; /* Max num of glocks in reclaim list */
- unsigned int gt_statfs_quantum;
- unsigned int gt_statfs_slow;
- };
-@@ -539,7 +527,6 @@ struct gfs2_sbd {
- /* StatFS stuff */
+- /* if the kobject has no ktype, then we assume that it is a subsystem
+- * itself, and use ops for it.
+- */
+- if (kobj->kset && kobj->kset->ktype)
+- ops = kobj->kset->ktype->sysfs_ops;
+- else if (kobj->ktype)
++ /* every kobject with an attribute needs a ktype assigned */
++ if (kobj->ktype && kobj->ktype->sysfs_ops)
+ ops = kobj->ktype->sysfs_ops;
+- else
+- ops = &subsys_sysfs_ops;
+-
+- error = -EACCES;
+-
+- /* No sysfs operations, either from having no subsystem,
+- * or the subsystem have no operations.
+- */
+- if (!ops)
++ else {
++ printk(KERN_ERR "missing sysfs attribute operations for "
++ "kobject: %s\n", kobject_name(kobj));
++ WARN_ON(1);
+ goto err_out;
++ }
- spinlock_t sd_statfs_spin;
-- struct mutex sd_statfs_mutex;
- struct gfs2_statfs_change_host sd_statfs_master;
- struct gfs2_statfs_change_host sd_statfs_local;
- unsigned long sd_statfs_sync_time;
-@@ -602,20 +589,18 @@ struct gfs2_sbd {
- unsigned int sd_log_commited_databuf;
- unsigned int sd_log_commited_revoke;
+ /* File needs write support.
+ * The inode's perms must say it's ok,
+@@ -568,7 +523,11 @@ int sysfs_add_file_to_group(struct kobject *kobj,
+ struct sysfs_dirent *dir_sd;
+ int error;
-- unsigned int sd_log_num_gl;
- unsigned int sd_log_num_buf;
- unsigned int sd_log_num_revoke;
- unsigned int sd_log_num_rg;
- unsigned int sd_log_num_databuf;
+- dir_sd = sysfs_get_dirent(kobj->sd, group);
++ if (group)
++ dir_sd = sysfs_get_dirent(kobj->sd, group);
++ else
++ dir_sd = sysfs_get(kobj->sd);
++
+ if (!dir_sd)
+ return -ENOENT;
-- struct list_head sd_log_le_gl;
- struct list_head sd_log_le_buf;
- struct list_head sd_log_le_revoke;
- struct list_head sd_log_le_rg;
- struct list_head sd_log_le_databuf;
- struct list_head sd_log_le_ordered;
+@@ -656,7 +615,10 @@ void sysfs_remove_file_from_group(struct kobject *kobj,
+ {
+ struct sysfs_dirent *dir_sd;
-- unsigned int sd_log_blks_free;
-+ atomic_t sd_log_blks_free;
- struct mutex sd_log_reserve_mutex;
+- dir_sd = sysfs_get_dirent(kobj->sd, group);
++ if (group)
++ dir_sd = sysfs_get_dirent(kobj->sd, group);
++ else
++ dir_sd = sysfs_get(kobj->sd);
+ if (dir_sd) {
+ sysfs_hash_and_remove(dir_sd, attr->name);
+ sysfs_put(dir_sd);
+diff --git a/fs/sysfs/group.c b/fs/sysfs/group.c
+index d197237..0871c3d 100644
+--- a/fs/sysfs/group.c
++++ b/fs/sysfs/group.c
+@@ -16,25 +16,31 @@
+ #include "sysfs.h"
- u64 sd_log_sequence;
-diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
-index 5f6dc32..728d316 100644
---- a/fs/gfs2/inode.c
-+++ b/fs/gfs2/inode.c
-@@ -31,7 +31,6 @@
- #include "log.h"
- #include "meta_io.h"
- #include "ops_address.h"
--#include "ops_file.h"
- #include "ops_inode.h"
- #include "quota.h"
- #include "rgrp.h"
-@@ -132,15 +131,21 @@ static struct inode *gfs2_iget_skip(struct super_block *sb,
- void gfs2_set_iop(struct inode *inode)
+-static void remove_files(struct sysfs_dirent *dir_sd,
++static void remove_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
+ const struct attribute_group *grp)
{
-+ struct gfs2_sbd *sdp = GFS2_SB(inode);
- umode_t mode = inode->i_mode;
-
- if (S_ISREG(mode)) {
- inode->i_op = &gfs2_file_iops;
-- inode->i_fop = &gfs2_file_fops;
-- inode->i_mapping->a_ops = &gfs2_file_aops;
-+ if (sdp->sd_args.ar_localflocks)
-+ inode->i_fop = &gfs2_file_fops_nolock;
-+ else
-+ inode->i_fop = &gfs2_file_fops;
- } else if (S_ISDIR(mode)) {
- inode->i_op = &gfs2_dir_iops;
-- inode->i_fop = &gfs2_dir_fops;
-+ if (sdp->sd_args.ar_localflocks)
-+ inode->i_fop = &gfs2_dir_fops_nolock;
-+ else
-+ inode->i_fop = &gfs2_dir_fops;
- } else if (S_ISLNK(mode)) {
- inode->i_op = &gfs2_symlink_iops;
- } else {
-@@ -291,12 +296,10 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
- di->di_entries = be32_to_cpu(str->di_entries);
-
- di->di_eattr = be64_to_cpu(str->di_eattr);
-- return 0;
--}
-+ if (S_ISREG(ip->i_inode.i_mode))
-+ gfs2_set_aops(&ip->i_inode);
+ struct attribute *const* attr;
++ int i;
--static void gfs2_inode_bh(struct gfs2_inode *ip, struct buffer_head *bh)
--{
-- ip->i_cache[0] = bh;
-+ return 0;
+- for (attr = grp->attrs; *attr; attr++)
+- sysfs_hash_and_remove(dir_sd, (*attr)->name);
++ for (i = 0, attr = grp->attrs; *attr; i++, attr++)
++ if (!grp->is_visible ||
++ grp->is_visible(kobj, *attr, i))
++ sysfs_hash_and_remove(dir_sd, (*attr)->name);
}
- /**
-@@ -366,7 +369,8 @@ int gfs2_dinode_dealloc(struct gfs2_inode *ip)
- if (error)
- goto out_rg_gunlock;
-
-- gfs2_trans_add_gl(ip->i_gl);
-+ set_bit(GLF_DIRTY, &ip->i_gl->gl_flags);
-+ set_bit(GLF_LFLUSH, &ip->i_gl->gl_flags);
-
- gfs2_free_di(rgd, ip);
-
-@@ -707,9 +711,10 @@ static int alloc_dinode(struct gfs2_inode *dip, u64 *no_addr, u64 *generation)
- struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
- int error;
-
-- gfs2_alloc_get(dip);
-+ if (gfs2_alloc_get(dip) == NULL)
-+ return -ENOMEM;
+-static int create_files(struct sysfs_dirent *dir_sd,
++static int create_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
+ const struct attribute_group *grp)
+ {
+ struct attribute *const* attr;
+- int error = 0;
++ int error = 0, i;
-- dip->i_alloc.al_requested = RES_DINODE;
-+ dip->i_alloc->al_requested = RES_DINODE;
- error = gfs2_inplace_reserve(dip);
+- for (attr = grp->attrs; *attr && !error; attr++)
+- error = sysfs_add_file(dir_sd, *attr, SYSFS_KOBJ_ATTR);
++ for (i = 0, attr = grp->attrs; *attr && !error; i++, attr++)
++ if (!grp->is_visible ||
++ grp->is_visible(kobj, *attr, i))
++ error |=
++ sysfs_add_file(dir_sd, *attr, SYSFS_KOBJ_ATTR);
if (error)
- goto out;
-@@ -855,7 +860,7 @@ static int link_dinode(struct gfs2_inode *dip, const struct qstr *name,
+- remove_files(dir_sd, grp);
++ remove_files(dir_sd, kobj, grp);
+ return error;
+ }
- error = alloc_required = gfs2_diradd_alloc_required(&dip->i_inode, name);
- if (alloc_required < 0)
-- goto fail;
-+ goto fail_quota_locks;
- if (alloc_required) {
- error = gfs2_quota_check(dip, dip->i_inode.i_uid, dip->i_inode.i_gid);
- if (error)
-@@ -896,7 +901,7 @@ fail_end_trans:
- gfs2_trans_end(sdp);
+@@ -54,7 +60,7 @@ int sysfs_create_group(struct kobject * kobj,
+ } else
+ sd = kobj->sd;
+ sysfs_get(sd);
+- error = create_files(sd, grp);
++ error = create_files(sd, kobj, grp);
+ if (error) {
+ if (grp->name)
+ sysfs_remove_subdir(sd);
+@@ -75,7 +81,7 @@ void sysfs_remove_group(struct kobject * kobj,
+ } else
+ sd = sysfs_get(dir_sd);
- fail_ipreserv:
-- if (dip->i_alloc.al_rgd)
-+ if (dip->i_alloc->al_rgd)
- gfs2_inplace_release(dip);
+- remove_files(sd, grp);
++ remove_files(sd, kobj, grp);
+ if (grp->name)
+ sysfs_remove_subdir(sd);
- fail_quota_locks:
-@@ -966,7 +971,7 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
- struct gfs2_inum_host inum = { .no_addr = 0, .no_formal_ino = 0 };
- int error;
- u64 generation;
-- struct buffer_head *bh=NULL;
-+ struct buffer_head *bh = NULL;
+diff --git a/fs/sysfs/symlink.c b/fs/sysfs/symlink.c
+index 3eac20c..5f66c44 100644
+--- a/fs/sysfs/symlink.c
++++ b/fs/sysfs/symlink.c
+@@ -19,39 +19,6 @@
- if (!name->len || name->len > GFS2_FNAMESIZE)
- return ERR_PTR(-ENAMETOOLONG);
-@@ -1003,8 +1008,6 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
- if (IS_ERR(inode))
- goto fail_gunlock2;
+ #include "sysfs.h"
-- gfs2_inode_bh(GFS2_I(inode), bh);
+-static int object_depth(struct sysfs_dirent *sd)
+-{
+- int depth = 0;
-
- error = gfs2_inode_refresh(GFS2_I(inode));
- if (error)
- goto fail_gunlock2;
-@@ -1021,6 +1024,8 @@ struct inode *gfs2_createi(struct gfs2_holder *ghs, const struct qstr *name,
- if (error)
- goto fail_gunlock2;
-
-+ if (bh)
-+ brelse(bh);
- if (!inode)
- return ERR_PTR(-ENOMEM);
- return inode;
-@@ -1032,6 +1037,8 @@ fail_gunlock2:
- fail_gunlock:
- gfs2_glock_dq(ghs);
- fail:
-+ if (bh)
-+ brelse(bh);
- return ERR_PTR(error);
+- for (; sd->s_parent; sd = sd->s_parent)
+- depth++;
+-
+- return depth;
+-}
+-
+-static int object_path_length(struct sysfs_dirent * sd)
+-{
+- int length = 1;
+-
+- for (; sd->s_parent; sd = sd->s_parent)
+- length += strlen(sd->s_name) + 1;
+-
+- return length;
+-}
+-
+-static void fill_object_path(struct sysfs_dirent *sd, char *buffer, int length)
+-{
+- --length;
+- for (; sd->s_parent; sd = sd->s_parent) {
+- int cur = strlen(sd->s_name);
+-
+- /* back up enough to print this bus id with '/' */
+- length -= cur;
+- strncpy(buffer + length, sd->s_name, cur);
+- *(buffer + --length) = '/';
+- }
+-}
+-
+ /**
+ * sysfs_create_link - create symlink between two objects.
+ * @kobj: object whose directory we're creating the link in.
+@@ -112,7 +79,6 @@ int sysfs_create_link(struct kobject * kobj, struct kobject * target, const char
+ return error;
}
-diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
-index 351ac87..d446506 100644
---- a/fs/gfs2/inode.h
-+++ b/fs/gfs2/inode.h
-@@ -20,6 +20,18 @@ static inline int gfs2_is_jdata(const struct gfs2_inode *ip)
- return ip->i_di.di_flags & GFS2_DIF_JDATA;
+-
+ /**
+ * sysfs_remove_link - remove symlink in object's directory.
+ * @kobj: object we're acting for.
+@@ -124,24 +90,54 @@ void sysfs_remove_link(struct kobject * kobj, const char * name)
+ sysfs_hash_and_remove(kobj->sd, name);
}
-+static inline int gfs2_is_writeback(const struct gfs2_inode *ip)
-+{
-+ const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-+ return (sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK) && !gfs2_is_jdata(ip);
-+}
+-static int sysfs_get_target_path(struct sysfs_dirent * parent_sd,
+- struct sysfs_dirent * target_sd, char *path)
++static int sysfs_get_target_path(struct sysfs_dirent *parent_sd,
++ struct sysfs_dirent *target_sd, char *path)
+ {
+- char * s;
+- int depth, size;
++ struct sysfs_dirent *base, *sd;
++ char *s = path;
++ int len = 0;
+
-+static inline int gfs2_is_ordered(const struct gfs2_inode *ip)
-+{
-+ const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-+ return (sdp->sd_args.ar_data == GFS2_DATA_ORDERED) && !gfs2_is_jdata(ip);
-+}
++ /* go up to the root, stop at the base */
++ base = parent_sd;
++ while (base->s_parent) {
++ sd = target_sd->s_parent;
++ while (sd->s_parent && base != sd)
++ sd = sd->s_parent;
+
- static inline int gfs2_is_dir(const struct gfs2_inode *ip)
- {
- return S_ISDIR(ip->i_inode.i_mode);
-diff --git a/fs/gfs2/locking/dlm/mount.c b/fs/gfs2/locking/dlm/mount.c
-index 41c5b04..f2efff4 100644
---- a/fs/gfs2/locking/dlm/mount.c
-+++ b/fs/gfs2/locking/dlm/mount.c
-@@ -67,6 +67,11 @@ static int make_args(struct gdlm_ls *ls, char *data_arg, int *nodir)
- memset(data, 0, 256);
- strncpy(data, data_arg, 255);
-
-+ if (!strlen(data)) {
-+ log_error("no mount options, (u)mount helpers not installed");
-+ return -EINVAL;
++ if (base == sd)
++ break;
++
++ strcpy(s, "../");
++ s += 3;
++ base = base->s_parent;
+ }
+
- for (options = data; (x = strsep(&options, ":")); ) {
- if (!*x)
- continue;
-diff --git a/fs/gfs2/locking/dlm/plock.c b/fs/gfs2/locking/dlm/plock.c
-index 1f7b038..2ebd374 100644
---- a/fs/gfs2/locking/dlm/plock.c
-+++ b/fs/gfs2/locking/dlm/plock.c
-@@ -89,15 +89,19 @@ int gdlm_plock(void *lockspace, struct lm_lockname *name,
- op->info.number = name->ln_number;
- op->info.start = fl->fl_start;
- op->info.end = fl->fl_end;
-- op->info.owner = (__u64)(long) fl->fl_owner;
- if (fl->fl_lmops && fl->fl_lmops->fl_grant) {
-+ /* fl_owner is lockd which doesn't distinguish
-+ processes on the nfs client */
-+ op->info.owner = (__u64) fl->fl_pid;
- xop->callback = fl->fl_lmops->fl_grant;
- locks_init_lock(&xop->flc);
- locks_copy_lock(&xop->flc, fl);
- xop->fl = fl;
- xop->file = file;
-- } else
-+ } else {
-+ op->info.owner = (__u64)(long) fl->fl_owner;
- xop->callback = NULL;
++ /* determine end of target string for reverse fillup */
++ sd = target_sd;
++ while (sd->s_parent && sd != base) {
++ len += strlen(sd->s_name) + 1;
++ sd = sd->s_parent;
+ }
- send_op(op);
+- depth = object_depth(parent_sd);
+- size = object_path_length(target_sd) + depth * 3 - 1;
+- if (size > PATH_MAX)
++ /* check limits */
++ if (len < 2)
++ return -EINVAL;
++ len--;
++ if ((s - path) + len > PATH_MAX)
+ return -ENAMETOOLONG;
-@@ -203,7 +207,10 @@ int gdlm_punlock(void *lockspace, struct lm_lockname *name,
- op->info.number = name->ln_number;
- op->info.start = fl->fl_start;
- op->info.end = fl->fl_end;
-- op->info.owner = (__u64)(long) fl->fl_owner;
-+ if (fl->fl_lmops && fl->fl_lmops->fl_grant)
-+ op->info.owner = (__u64) fl->fl_pid;
-+ else
-+ op->info.owner = (__u64)(long) fl->fl_owner;
+- pr_debug("%s: depth = %d, size = %d\n", __FUNCTION__, depth, size);
++ /* reverse fillup of target string from target to base */
++ sd = target_sd;
++ while (sd->s_parent && sd != base) {
++ int slen = strlen(sd->s_name);
+
+- for (s = path; depth--; s += 3)
+- strcpy(s,"../");
++ len -= slen;
++ strncpy(s + len, sd->s_name, slen);
++ if (len)
++ s[--len] = '/';
- send_op(op);
- wait_event(recv_wq, (op->done != 0));
-@@ -242,7 +249,10 @@ int gdlm_plock_get(void *lockspace, struct lm_lockname *name,
- op->info.number = name->ln_number;
- op->info.start = fl->fl_start;
- op->info.end = fl->fl_end;
-- op->info.owner = (__u64)(long) fl->fl_owner;
-+ if (fl->fl_lmops && fl->fl_lmops->fl_grant)
-+ op->info.owner = (__u64) fl->fl_pid;
-+ else
-+ op->info.owner = (__u64)(long) fl->fl_owner;
+- fill_object_path(target_sd, path, size);
+- pr_debug("%s: path = '%s'\n", __FUNCTION__, path);
++ sd = sd->s_parent;
++ }
- send_op(op);
- wait_event(recv_wq, (op->done != 0));
-diff --git a/fs/gfs2/locking/dlm/sysfs.c b/fs/gfs2/locking/dlm/sysfs.c
-index ae9e6a2..a87b098 100644
---- a/fs/gfs2/locking/dlm/sysfs.c
-+++ b/fs/gfs2/locking/dlm/sysfs.c
-@@ -189,51 +189,39 @@ static struct kobj_type gdlm_ktype = {
- .sysfs_ops = &gdlm_attr_ops,
+ return 0;
+ }
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 7b74b60..fb7171b 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -319,7 +319,7 @@ struct acpi_bus_event {
+ u32 data;
};
--static struct kset gdlm_kset = {
-- .ktype = &gdlm_ktype,
--};
-+static struct kset *gdlm_kset;
+-extern struct kset acpi_subsys;
++extern struct kobject *acpi_kobj;
+ extern int acpi_bus_generate_netlink_event(const char*, const char*, u8, int);
+ /*
+ * External Functions
+diff --git a/include/acpi/reboot.h b/include/acpi/reboot.h
+new file mode 100644
+index 0000000..8857f57
+--- /dev/null
++++ b/include/acpi/reboot.h
+@@ -0,0 +1,9 @@
++
++/*
++ * Dummy placeholder to make the EFI patches apply to the x86 tree.
++ * Andrew/Len, please just kill this file if you encounter it.
++ */
++#ifndef acpi_reboot
++# define acpi_reboot() do { } while (0)
++#endif
++
+diff --git a/include/asm-alpha/agp.h b/include/asm-alpha/agp.h
+index ef855a3..26c1791 100644
+--- a/include/asm-alpha/agp.h
++++ b/include/asm-alpha/agp.h
+@@ -7,7 +7,6 @@
- int gdlm_kobject_setup(struct gdlm_ls *ls, struct kobject *fskobj)
- {
- int error;
+ #define map_page_into_agp(page)
+ #define unmap_page_from_agp(page)
+-#define flush_agp_mappings()
+ #define flush_agp_cache() mb()
-- error = kobject_set_name(&ls->kobj, "%s", "lock_module");
-- if (error) {
-- log_error("can't set kobj name %d", error);
-- return error;
-- }
+ /* Convert a physical address to an address suitable for the GART. */
+diff --git a/include/asm-arm/arch-at91/at91_lcdc.h b/include/asm-arm/arch-at91/at91_lcdc.h
+deleted file mode 100644
+index ab040a4..0000000
+--- a/include/asm-arm/arch-at91/at91_lcdc.h
++++ /dev/null
+@@ -1,148 +0,0 @@
+-/*
+- * include/asm-arm/arch-at91/at91_lcdc.h
+- *
+- * LCD Controller (LCDC).
+- * Based on AT91SAM9261 datasheet revision E.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- */
-
-- ls->kobj.kset = &gdlm_kset;
-- ls->kobj.ktype = &gdlm_ktype;
-- ls->kobj.parent = fskobj;
+-#ifndef AT91_LCDC_H
+-#define AT91_LCDC_H
-
-- error = kobject_register(&ls->kobj);
-+ ls->kobj.kset = gdlm_kset;
-+ error = kobject_init_and_add(&ls->kobj, &gdlm_ktype, fskobj,
-+ "lock_module");
- if (error)
- log_error("can't register kobj %d", error);
-+ kobject_uevent(&ls->kobj, KOBJ_ADD);
-
- return error;
- }
-
- void gdlm_kobject_release(struct gdlm_ls *ls)
- {
-- kobject_unregister(&ls->kobj);
-+ kobject_put(&ls->kobj);
- }
-
- int gdlm_sysfs_init(void)
- {
-- int error;
+-#define AT91_LCDC_DMABADDR1 0x00 /* DMA Base Address Register 1 */
+-#define AT91_LCDC_DMABADDR2 0x04 /* DMA Base Address Register 2 */
+-#define AT91_LCDC_DMAFRMPT1 0x08 /* DMA Frame Pointer Register 1 */
+-#define AT91_LCDC_DMAFRMPT2 0x0c /* DMA Frame Pointer Register 2 */
+-#define AT91_LCDC_DMAFRMADD1 0x10 /* DMA Frame Address Register 1 */
+-#define AT91_LCDC_DMAFRMADD2 0x14 /* DMA Frame Address Register 2 */
-
-- kobject_set_name(&gdlm_kset.kobj, "lock_dlm");
-- kobj_set_kset_s(&gdlm_kset, kernel_subsys);
-- error = kset_register(&gdlm_kset);
-- if (error)
-- printk("lock_dlm: cannot register kset %d\n", error);
+-#define AT91_LCDC_DMAFRMCFG 0x18 /* DMA Frame Configuration Register */
+-#define AT91_LCDC_FRSIZE (0x7fffff << 0) /* Frame Size */
+-#define AT91_LCDC_BLENGTH (0x7f << 24) /* Burst Length */
-
-- return error;
-+ gdlm_kset = kset_create_and_add("lock_dlm", NULL, kernel_kobj);
-+ if (!gdlm_kset) {
-+ printk(KERN_WARNING "%s: can not create kset\n", __FUNCTION__);
-+ return -ENOMEM;
-+ }
-+ return 0;
- }
+-#define AT91_LCDC_DMACON 0x1c /* DMA Control Register */
+-#define AT91_LCDC_DMAEN (0x1 << 0) /* DMA Enable */
+-#define AT91_LCDC_DMARST (0x1 << 1) /* DMA Reset */
+-#define AT91_LCDC_DMABUSY (0x1 << 2) /* DMA Busy */
+-
+-#define AT91_LCDC_LCDCON1 0x0800 /* LCD Control Register 1 */
+-#define AT91_LCDC_BYPASS (1 << 0) /* Bypass lcd_dotck divider */
+-#define AT91_LCDC_CLKVAL (0x1ff << 12) /* Clock Divider */
+-#define AT91_LCDC_LINCNT (0x7ff << 21) /* Line Counter */
+-
+-#define AT91_LCDC_LCDCON2 0x0804 /* LCD Control Register 2 */
+-#define AT91_LCDC_DISTYPE (3 << 0) /* Display Type */
+-#define AT91_LCDC_DISTYPE_STNMONO (0 << 0)
+-#define AT91_LCDC_DISTYPE_STNCOLOR (1 << 0)
+-#define AT91_LCDC_DISTYPE_TFT (2 << 0)
+-#define AT91_LCDC_SCANMOD (1 << 2) /* Scan Mode */
+-#define AT91_LCDC_SCANMOD_SINGLE (0 << 2)
+-#define AT91_LCDC_SCANMOD_DUAL (1 << 2)
+-#define AT91_LCDC_IFWIDTH (3 << 3) /*Interface Width */
+-#define AT91_LCDC_IFWIDTH_4 (0 << 3)
+-#define AT91_LCDC_IFWIDTH_8 (1 << 3)
+-#define AT91_LCDC_IFWIDTH_16 (2 << 3)
+-#define AT91_LCDC_PIXELSIZE (7 << 5) /* Bits per pixel */
+-#define AT91_LCDC_PIXELSIZE_1 (0 << 5)
+-#define AT91_LCDC_PIXELSIZE_2 (1 << 5)
+-#define AT91_LCDC_PIXELSIZE_4 (2 << 5)
+-#define AT91_LCDC_PIXELSIZE_8 (3 << 5)
+-#define AT91_LCDC_PIXELSIZE_16 (4 << 5)
+-#define AT91_LCDC_PIXELSIZE_24 (5 << 5)
+-#define AT91_LCDC_INVVD (1 << 8) /* LCD Data polarity */
+-#define AT91_LCDC_INVVD_NORMAL (0 << 8)
+-#define AT91_LCDC_INVVD_INVERTED (1 << 8)
+-#define AT91_LCDC_INVFRAME (1 << 9 ) /* LCD VSync polarity */
+-#define AT91_LCDC_INVFRAME_NORMAL (0 << 9)
+-#define AT91_LCDC_INVFRAME_INVERTED (1 << 9)
+-#define AT91_LCDC_INVLINE (1 << 10) /* LCD HSync polarity */
+-#define AT91_LCDC_INVLINE_NORMAL (0 << 10)
+-#define AT91_LCDC_INVLINE_INVERTED (1 << 10)
+-#define AT91_LCDC_INVCLK (1 << 11) /* LCD dotclk polarity */
+-#define AT91_LCDC_INVCLK_NORMAL (0 << 11)
+-#define AT91_LCDC_INVCLK_INVERTED (1 << 11)
+-#define AT91_LCDC_INVDVAL (1 << 12) /* LCD dval polarity */
+-#define AT91_LCDC_INVDVAL_NORMAL (0 << 12)
+-#define AT91_LCDC_INVDVAL_INVERTED (1 << 12)
+-#define AT91_LCDC_CLKMOD (1 << 15) /* LCD dotclk mode */
+-#define AT91_LCDC_CLKMOD_ACTIVEDISPLAY (0 << 15)
+-#define AT91_LCDC_CLKMOD_ALWAYSACTIVE (1 << 15)
+-#define AT91_LCDC_MEMOR (1 << 31) /* Memory Ordering Format */
+-#define AT91_LCDC_MEMOR_BIG (0 << 31)
+-#define AT91_LCDC_MEMOR_LITTLE (1 << 31)
+-
+-#define AT91_LCDC_TIM1 0x0808 /* LCD Timing Register 1 */
+-#define AT91_LCDC_VFP (0xff << 0) /* Vertical Front Porch */
+-#define AT91_LCDC_VBP (0xff << 8) /* Vertical Back Porch */
+-#define AT91_LCDC_VPW (0x3f << 16) /* Vertical Synchronization Pulse Width */
+-#define AT91_LCDC_VHDLY (0xf << 24) /* Vertical to Horizontal Delay */
+-
+-#define AT91_LCDC_TIM2 0x080c /* LCD Timing Register 2 */
+-#define AT91_LCDC_HBP (0xff << 0) /* Horizontal Back Porch */
+-#define AT91_LCDC_HPW (0x3f << 8) /* Horizontal Synchronization Pulse Width */
+-#define AT91_LCDC_HFP (0x7ff << 21) /* Horizontal Front Porch */
+-
+-#define AT91_LCDC_LCDFRMCFG 0x0810 /* LCD Frame Configuration Register */
+-#define AT91_LCDC_LINEVAL (0x7ff << 0) /* Vertical Size of LCD Module */
+-#define AT91_LCDC_HOZVAL (0x7ff << 21) /* Horizontal Size of LCD Module */
+-
+-#define AT91_LCDC_FIFO 0x0814 /* LCD FIFO Register */
+-#define AT91_LCDC_FIFOTH (0xffff) /* FIFO Threshold */
+-
+-#define AT91_LCDC_DP1_2 0x081c /* Dithering Pattern DP1_2 Register */
+-#define AT91_LCDC_DP4_7 0x0820 /* Dithering Pattern DP4_7 Register */
+-#define AT91_LCDC_DP3_5 0x0824 /* Dithering Pattern DP3_5 Register */
+-#define AT91_LCDC_DP2_3 0x0828 /* Dithering Pattern DP2_3 Register */
+-#define AT91_LCDC_DP5_7 0x082c /* Dithering Pattern DP5_7 Register */
+-#define AT91_LCDC_DP3_4 0x0830 /* Dithering Pattern DP3_4 Register */
+-#define AT91_LCDC_DP4_5 0x0834 /* Dithering Pattern DP4_5 Register */
+-#define AT91_LCDC_DP6_7 0x0838 /* Dithering Pattern DP6_7 Register */
+-#define AT91_LCDC_DP1_2_VAL (0xff)
+-#define AT91_LCDC_DP4_7_VAL (0xfffffff)
+-#define AT91_LCDC_DP3_5_VAL (0xfffff)
+-#define AT91_LCDC_DP2_3_VAL (0xfff)
+-#define AT91_LCDC_DP5_7_VAL (0xfffffff)
+-#define AT91_LCDC_DP3_4_VAL (0xffff)
+-#define AT91_LCDC_DP4_5_VAL (0xfffff)
+-#define AT91_LCDC_DP6_7_VAL (0xfffffff)
+-
+-#define AT91_LCDC_PWRCON 0x083c /* Power Control Register */
+-#define AT91_LCDC_PWR (1 << 0) /* LCD Module Power Control */
+-#define AT91_LCDC_GUARDT (0x7f << 1) /* Delay in Frame Period */
+-#define AT91_LCDC_BUSY (1 << 31) /* LCD Busy */
+-
+-#define AT91_LCDC_CONTRAST_CTR 0x0840 /* Contrast Control Register */
+-#define AT91_LCDC_PS (3 << 0) /* Contrast Counter Prescaler */
+-#define AT91_LCDC_PS_DIV1 (0 << 0)
+-#define AT91_LCDC_PS_DIV2 (1 << 0)
+-#define AT91_LCDC_PS_DIV4 (2 << 0)
+-#define AT91_LCDC_PS_DIV8 (3 << 0)
+-#define AT91_LCDC_POL (1 << 2) /* Polarity of output Pulse */
+-#define AT91_LCDC_POL_NEGATIVE (0 << 2)
+-#define AT91_LCDC_POL_POSITIVE (1 << 2)
+-#define AT91_LCDC_ENA (1 << 3) /* PWM generator Control */
+-#define AT91_LCDC_ENA_PWMDISABLE (0 << 3)
+-#define AT91_LCDC_ENA_PWMENABLE (1 << 3)
+-
+-#define AT91_LCDC_CONTRAST_VAL 0x0844 /* Contrast Value Register */
+-#define AT91_LCDC_CVAL (0xff) /* PWM compare value */
+-
+-#define AT91_LCDC_IER 0x0848 /* Interrupt Enable Register */
+-#define AT91_LCDC_IDR 0x084c /* Interrupt Disable Register */
+-#define AT91_LCDC_IMR 0x0850 /* Interrupt Mask Register */
+-#define AT91_LCDC_ISR 0x0854 /* Interrupt Enable Register */
+-#define AT91_LCDC_ICR 0x0858 /* Interrupt Clear Register */
+-#define AT91_LCDC_LNI (1 << 0) /* Line Interrupt */
+-#define AT91_LCDC_LSTLNI (1 << 1) /* Last Line Interrupt */
+-#define AT91_LCDC_EOFI (1 << 2) /* DMA End Of Frame Interrupt */
+-#define AT91_LCDC_UFLWI (1 << 4) /* FIFO Underflow Interrupt */
+-#define AT91_LCDC_OWRI (1 << 5) /* FIFO Overwrite Interrupt */
+-#define AT91_LCDC_MERI (1 << 6) /* DMA Memory Error Interrupt */
+-
+-#define AT91_LCDC_LUT_(n) (0x0c00 + ((n)*4)) /* Palette Entry 0..255 */
+-
+-#endif
+diff --git a/include/asm-arm/arch-at91/at91_pmc.h b/include/asm-arm/arch-at91/at91_pmc.h
+index 33ff5b6..52cd8e5 100644
+--- a/include/asm-arm/arch-at91/at91_pmc.h
++++ b/include/asm-arm/arch-at91/at91_pmc.h
+@@ -25,6 +25,7 @@
+ #define AT91RM9200_PMC_MCKUDP (1 << 2) /* USB Device Port Master Clock Automatic Disable on Suspend [AT91RM9200 only] */
+ #define AT91RM9200_PMC_UHP (1 << 4) /* USB Host Port Clock [AT91RM9200 only] */
+ #define AT91SAM926x_PMC_UHP (1 << 6) /* USB Host Port Clock [AT91SAM926x only] */
++#define AT91CAP9_PMC_UHP (1 << 6) /* USB Host Port Clock [AT91CAP9 only] */
+ #define AT91SAM926x_PMC_UDP (1 << 7) /* USB Devcice Port Clock [AT91SAM926x only] */
+ #define AT91_PMC_PCK0 (1 << 8) /* Programmable Clock 0 */
+ #define AT91_PMC_PCK1 (1 << 9) /* Programmable Clock 1 */
+@@ -37,7 +38,9 @@
+ #define AT91_PMC_PCDR (AT91_PMC + 0x14) /* Peripheral Clock Disable Register */
+ #define AT91_PMC_PCSR (AT91_PMC + 0x18) /* Peripheral Clock Status Register */
- void gdlm_sysfs_exit(void)
- {
-- kset_unregister(&gdlm_kset);
-+ kset_unregister(gdlm_kset);
- }
+-#define AT91_CKGR_MOR (AT91_PMC + 0x20) /* Main Oscillator Register */
++#define AT91_CKGR_UCKR (AT91_PMC + 0x1C) /* UTMI Clock Register [SAM9RL, CAP9] */
++
++#define AT91_CKGR_MOR (AT91_PMC + 0x20) /* Main Oscillator Register [not on SAM9RL] */
+ #define AT91_PMC_MOSCEN (1 << 0) /* Main Oscillator Enable */
+ #define AT91_PMC_OSCBYPASS (1 << 1) /* Oscillator Bypass [AT91SAM926x only] */
+ #define AT91_PMC_OSCOUNT (0xff << 8) /* Main Oscillator Start-up Time */
+@@ -52,6 +55,10 @@
+ #define AT91_PMC_PLLCOUNT (0x3f << 8) /* PLL Counter */
+ #define AT91_PMC_OUT (3 << 14) /* PLL Clock Frequency Range */
+ #define AT91_PMC_MUL (0x7ff << 16) /* PLL Multiplier */
++#define AT91_PMC_USBDIV (3 << 28) /* USB Divisor (PLLB only) */
++#define AT91_PMC_USBDIV_1 (0 << 28)
++#define AT91_PMC_USBDIV_2 (1 << 28)
++#define AT91_PMC_USBDIV_4 (2 << 28)
+ #define AT91_PMC_USB96M (1 << 28) /* Divider by 2 Enable (PLLB only) */
-diff --git a/fs/gfs2/locking/dlm/thread.c b/fs/gfs2/locking/dlm/thread.c
-index bd938f0..521694f 100644
---- a/fs/gfs2/locking/dlm/thread.c
-+++ b/fs/gfs2/locking/dlm/thread.c
-@@ -273,18 +273,13 @@ static int gdlm_thread(void *data, int blist)
- struct gdlm_ls *ls = (struct gdlm_ls *) data;
- struct gdlm_lock *lp = NULL;
- uint8_t complete, blocking, submit, drop;
-- DECLARE_WAITQUEUE(wait, current);
+ #define AT91_PMC_MCKR (AT91_PMC + 0x30) /* Master Clock Register */
+diff --git a/include/asm-arm/arch-at91/at91_rtt.h b/include/asm-arm/arch-at91/at91_rtt.h
+index bae1103..39a3263 100644
+--- a/include/asm-arm/arch-at91/at91_rtt.h
++++ b/include/asm-arm/arch-at91/at91_rtt.h
+@@ -13,19 +13,19 @@
+ #ifndef AT91_RTT_H
+ #define AT91_RTT_H
- /* Only thread1 is allowed to do blocking callbacks since gfs
- may wait for a completion callback within a blocking cb. */
+-#define AT91_RTT_MR (AT91_RTT + 0x00) /* Real-time Mode Register */
++#define AT91_RTT_MR 0x00 /* Real-time Mode Register */
+ #define AT91_RTT_RTPRES (0xffff << 0) /* Real-time Timer Prescaler Value */
+ #define AT91_RTT_ALMIEN (1 << 16) /* Alarm Interrupt Enable */
+ #define AT91_RTT_RTTINCIEN (1 << 17) /* Real Time Timer Increment Interrupt Enable */
+ #define AT91_RTT_RTTRST (1 << 18) /* Real Time Timer Restart */
- while (!kthread_should_stop()) {
-- set_current_state(TASK_INTERRUPTIBLE);
-- add_wait_queue(&ls->thread_wait, &wait);
-- if (no_work(ls, blist))
-- schedule();
-- remove_wait_queue(&ls->thread_wait, &wait);
-- set_current_state(TASK_RUNNING);
-+ wait_event_interruptible(ls->thread_wait,
-+ !no_work(ls, blist) || kthread_should_stop());
+-#define AT91_RTT_AR (AT91_RTT + 0x04) /* Real-time Alarm Register */
++#define AT91_RTT_AR 0x04 /* Real-time Alarm Register */
+ #define AT91_RTT_ALMV (0xffffffff) /* Alarm Value */
- complete = blocking = submit = drop = 0;
+-#define AT91_RTT_VR (AT91_RTT + 0x08) /* Real-time Value Register */
++#define AT91_RTT_VR 0x08 /* Real-time Value Register */
+ #define AT91_RTT_CRTV (0xffffffff) /* Current Real-time Value */
-diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
-index 7df7024..161ab6f 100644
---- a/fs/gfs2/log.c
-+++ b/fs/gfs2/log.c
-@@ -1,6 +1,6 @@
- /*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
- *
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -16,6 +16,8 @@
- #include <linux/crc32.h>
- #include <linux/lm_interface.h>
- #include <linux/delay.h>
-+#include <linux/kthread.h>
-+#include <linux/freezer.h>
+-#define AT91_RTT_SR (AT91_RTT + 0x0c) /* Real-time Status Register */
++#define AT91_RTT_SR 0x0c /* Real-time Status Register */
+ #define AT91_RTT_ALMS (1 << 0) /* Real-time Alarm Status */
+ #define AT91_RTT_RTTINC (1 << 1) /* Real-time Timer Increment */
- #include "gfs2.h"
- #include "incore.h"
-@@ -68,14 +70,12 @@ unsigned int gfs2_struct2blk(struct gfs2_sbd *sdp, unsigned int nstruct,
- *
- */
+diff --git a/include/asm-arm/arch-at91/at91_twi.h b/include/asm-arm/arch-at91/at91_twi.h
+index ca9a907..f9f2e3c 100644
+--- a/include/asm-arm/arch-at91/at91_twi.h
++++ b/include/asm-arm/arch-at91/at91_twi.h
+@@ -21,6 +21,8 @@
+ #define AT91_TWI_STOP (1 << 1) /* Send a Stop Condition */
+ #define AT91_TWI_MSEN (1 << 2) /* Master Transfer Enable */
+ #define AT91_TWI_MSDIS (1 << 3) /* Master Transfer Disable */
++#define AT91_TWI_SVEN (1 << 4) /* Slave Transfer Enable [SAM9260 only] */
++#define AT91_TWI_SVDIS (1 << 5) /* Slave Transfer Disable [SAM9260 only] */
+ #define AT91_TWI_SWRST (1 << 7) /* Software Reset */
--void gfs2_remove_from_ail(struct address_space *mapping, struct gfs2_bufdata *bd)
-+void gfs2_remove_from_ail(struct gfs2_bufdata *bd)
- {
- bd->bd_ail = NULL;
- list_del_init(&bd->bd_ail_st_list);
- list_del_init(&bd->bd_ail_gl_list);
- atomic_dec(&bd->bd_gl->gl_ail_count);
-- if (mapping)
-- gfs2_meta_cache_flush(GFS2_I(mapping->host));
- brelse(bd->bd_bh);
- }
+ #define AT91_TWI_MMR 0x04 /* Master Mode Register */
+@@ -32,6 +34,9 @@
+ #define AT91_TWI_MREAD (1 << 12) /* Master Read Direction */
+ #define AT91_TWI_DADR (0x7f << 16) /* Device Address */
-@@ -92,8 +92,6 @@ static void gfs2_ail1_start_one(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
- struct buffer_head *bh;
- int retry;
++#define AT91_TWI_SMR 0x08 /* Slave Mode Register [SAM9260 only] */
++#define AT91_TWI_SADR (0x7f << 16) /* Slave Address */
++
+ #define AT91_TWI_IADR 0x0c /* Internal Address Register */
-- BUG_ON(!spin_is_locked(&sdp->sd_log_lock));
--
- do {
- retry = 0;
+ #define AT91_TWI_CWGR 0x10 /* Clock Waveform Generator Register */
+@@ -43,9 +48,15 @@
+ #define AT91_TWI_TXCOMP (1 << 0) /* Transmission Complete */
+ #define AT91_TWI_RXRDY (1 << 1) /* Receive Holding Register Ready */
+ #define AT91_TWI_TXRDY (1 << 2) /* Transmit Holding Register Ready */
++#define AT91_TWI_SVREAD (1 << 3) /* Slave Read [SAM9260 only] */
++#define AT91_TWI_SVACC (1 << 4) /* Slave Access [SAM9260 only] */
++#define AT91_TWI_GACC (1 << 5) /* General Call Access [SAM9260 only] */
+ #define AT91_TWI_OVRE (1 << 6) /* Overrun Error [AT91RM9200 only] */
+ #define AT91_TWI_UNRE (1 << 7) /* Underrun Error [AT91RM9200 only] */
+ #define AT91_TWI_NACK (1 << 8) /* Not Acknowledged */
++#define AT91_TWI_ARBLST (1 << 9) /* Arbitration Lost [SAM9260 only] */
++#define AT91_TWI_SCLWS (1 << 10) /* Clock Wait State [SAM9260 only] */
++#define AT91_TWI_EOSACC (1 << 11) /* End of Slave Address [SAM9260 only] */
-@@ -210,7 +208,7 @@ static void gfs2_ail1_start(struct gfs2_sbd *sdp, int flags)
- gfs2_log_unlock(sdp);
- }
+ #define AT91_TWI_IER 0x24 /* Interrupt Enable Register */
+ #define AT91_TWI_IDR 0x28 /* Interrupt Disable Register */
+diff --git a/include/asm-arm/arch-at91/at91cap9.h b/include/asm-arm/arch-at91/at91cap9.h
+new file mode 100644
+index 0000000..73e1fcf
+--- /dev/null
++++ b/include/asm-arm/arch-at91/at91cap9.h
+@@ -0,0 +1,121 @@
++/*
++ * include/asm-arm/arch-at91/at91cap9.h
++ *
++ * Copyright (C) 2007 Stelian Pop <stelian.pop at leadtechdesign.com>
++ * Copyright (C) 2007 Lead Tech Design <www.leadtechdesign.com>
++ * Copyright (C) 2007 Atmel Corporation.
++ *
++ * Common definitions.
++ * Based on AT91CAP9 datasheet revision B (Preliminary).
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef AT91CAP9_H
++#define AT91CAP9_H
++
++/*
++ * Peripheral identifiers/interrupts.
++ */
++#define AT91_ID_FIQ 0 /* Advanced Interrupt Controller (FIQ) */
++#define AT91_ID_SYS 1 /* System Peripherals */
++#define AT91CAP9_ID_PIOABCD 2 /* Parallel IO Controller A, B, C and D */
++#define AT91CAP9_ID_MPB0 3 /* MP Block Peripheral 0 */
++#define AT91CAP9_ID_MPB1 4 /* MP Block Peripheral 1 */
++#define AT91CAP9_ID_MPB2 5 /* MP Block Peripheral 2 */
++#define AT91CAP9_ID_MPB3 6 /* MP Block Peripheral 3 */
++#define AT91CAP9_ID_MPB4 7 /* MP Block Peripheral 4 */
++#define AT91CAP9_ID_US0 8 /* USART 0 */
++#define AT91CAP9_ID_US1 9 /* USART 1 */
++#define AT91CAP9_ID_US2 10 /* USART 2 */
++#define AT91CAP9_ID_MCI0 11 /* Multimedia Card Interface 0 */
++#define AT91CAP9_ID_MCI1 12 /* Multimedia Card Interface 1 */
++#define AT91CAP9_ID_CAN 13 /* CAN */
++#define AT91CAP9_ID_TWI 14 /* Two-Wire Interface */
++#define AT91CAP9_ID_SPI0 15 /* Serial Peripheral Interface 0 */
++#define AT91CAP9_ID_SPI1	16	/* Serial Peripheral Interface 1 */
++#define AT91CAP9_ID_SSC0 17 /* Serial Synchronous Controller 0 */
++#define AT91CAP9_ID_SSC1 18 /* Serial Synchronous Controller 1 */
++#define AT91CAP9_ID_AC97C 19 /* AC97 Controller */
++#define AT91CAP9_ID_TCB 20 /* Timer Counter 0, 1 and 2 */
++#define AT91CAP9_ID_PWMC 21 /* Pulse Width Modulation Controller */
++#define AT91CAP9_ID_EMAC 22 /* Ethernet */
++#define AT91CAP9_ID_AESTDES 23 /* Advanced Encryption Standard, Triple DES */
++#define AT91CAP9_ID_ADC 24 /* Analog-to-Digital Converter */
++#define AT91CAP9_ID_ISI 25 /* Image Sensor Interface */
++#define AT91CAP9_ID_LCDC 26 /* LCD Controller */
++#define AT91CAP9_ID_DMA 27 /* DMA Controller */
++#define AT91CAP9_ID_UDPHS 28 /* USB High Speed Device Port */
++#define AT91CAP9_ID_UHP 29 /* USB Host Port */
++#define AT91CAP9_ID_IRQ0 30 /* Advanced Interrupt Controller (IRQ0) */
++#define AT91CAP9_ID_IRQ1 31 /* Advanced Interrupt Controller (IRQ1) */
++
++/*
++ * User Peripheral physical base addresses.
++ */
++#define AT91CAP9_BASE_UDPHS 0xfff78000
++#define AT91CAP9_BASE_TCB0 0xfff7c000
++#define AT91CAP9_BASE_TC0 0xfff7c000
++#define AT91CAP9_BASE_TC1 0xfff7c040
++#define AT91CAP9_BASE_TC2 0xfff7c080
++#define AT91CAP9_BASE_MCI0 0xfff80000
++#define AT91CAP9_BASE_MCI1 0xfff84000
++#define AT91CAP9_BASE_TWI 0xfff88000
++#define AT91CAP9_BASE_US0 0xfff8c000
++#define AT91CAP9_BASE_US1 0xfff90000
++#define AT91CAP9_BASE_US2 0xfff94000
++#define AT91CAP9_BASE_SSC0 0xfff98000
++#define AT91CAP9_BASE_SSC1 0xfff9c000
++#define AT91CAP9_BASE_AC97C 0xfffa0000
++#define AT91CAP9_BASE_SPI0 0xfffa4000
++#define AT91CAP9_BASE_SPI1 0xfffa8000
++#define AT91CAP9_BASE_CAN 0xfffac000
++#define AT91CAP9_BASE_PWMC 0xfffb8000
++#define AT91CAP9_BASE_EMAC 0xfffbc000
++#define AT91CAP9_BASE_ADC 0xfffc0000
++#define AT91CAP9_BASE_ISI 0xfffc4000
++#define AT91_BASE_SYS 0xffffe200
++
++/*
++ * System Peripherals (offset from AT91_BASE_SYS)
++ */
++#define AT91_ECC (0xffffe200 - AT91_BASE_SYS)
++#define AT91_BCRAMC (0xffffe400 - AT91_BASE_SYS)
++#define AT91_DDRSDRC (0xffffe600 - AT91_BASE_SYS)
++#define AT91_SMC (0xffffe800 - AT91_BASE_SYS)
++#define AT91_MATRIX (0xffffea00 - AT91_BASE_SYS)
++#define AT91_CCFG (0xffffeb10 - AT91_BASE_SYS)
++#define AT91_DMA (0xffffec00 - AT91_BASE_SYS)
++#define AT91_DBGU (0xffffee00 - AT91_BASE_SYS)
++#define AT91_AIC (0xfffff000 - AT91_BASE_SYS)
++#define AT91_PIOA (0xfffff200 - AT91_BASE_SYS)
++#define AT91_PIOB (0xfffff400 - AT91_BASE_SYS)
++#define AT91_PIOC (0xfffff600 - AT91_BASE_SYS)
++#define AT91_PIOD (0xfffff800 - AT91_BASE_SYS)
++#define AT91_PMC (0xfffffc00 - AT91_BASE_SYS)
++#define AT91_RSTC (0xfffffd00 - AT91_BASE_SYS)
++#define AT91_SHDC (0xfffffd10 - AT91_BASE_SYS)
++#define AT91_RTT (0xfffffd20 - AT91_BASE_SYS)
++#define AT91_PIT (0xfffffd30 - AT91_BASE_SYS)
++#define AT91_WDT (0xfffffd40 - AT91_BASE_SYS)
++#define AT91_GPBR (0xfffffd50 - AT91_BASE_SYS)
++
++/*
++ * Internal Memory.
++ */
++#define AT91CAP9_SRAM_BASE 0x00100000 /* Internal SRAM base address */
++#define AT91CAP9_SRAM_SIZE	(32 * SZ_1K)	/* Internal SRAM size (32KB) */
++
++#define AT91CAP9_ROM_BASE 0x00400000 /* Internal ROM base address */
++#define AT91CAP9_ROM_SIZE	(32 * SZ_1K)	/* Internal ROM size (32KB) */
++
++#define AT91CAP9_LCDC_BASE 0x00500000 /* LCD Controller */
++#define AT91CAP9_UDPHS_BASE 0x00600000 /* USB High Speed Device Port */
++#define AT91CAP9_UHP_BASE 0x00700000 /* USB Host controller */
++
++#define CONFIG_DRAM_BASE AT91_CHIPSELECT_6
++
++#endif
+diff --git a/include/asm-arm/arch-at91/at91cap9_matrix.h b/include/asm-arm/arch-at91/at91cap9_matrix.h
+new file mode 100644
+index 0000000..a641686
+--- /dev/null
++++ b/include/asm-arm/arch-at91/at91cap9_matrix.h
+@@ -0,0 +1,132 @@
++/*
++ * include/asm-arm/arch-at91/at91cap9_matrix.h
++ *
++ * Copyright (C) 2007 Stelian Pop <stelian.pop at leadtechdesign.com>
++ * Copyright (C) 2007 Lead Tech Design <www.leadtechdesign.com>
++ * Copyright (C) 2006 Atmel Corporation.
++ *
++ * Memory Controllers (MATRIX, EBI) - System peripherals registers.
++ * Based on AT91CAP9 datasheet revision B (Preliminary).
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef AT91CAP9_MATRIX_H
++#define AT91CAP9_MATRIX_H
++
++#define AT91_MATRIX_MCFG0 (AT91_MATRIX + 0x00) /* Master Configuration Register 0 */
++#define AT91_MATRIX_MCFG1 (AT91_MATRIX + 0x04) /* Master Configuration Register 1 */
++#define AT91_MATRIX_MCFG2 (AT91_MATRIX + 0x08) /* Master Configuration Register 2 */
++#define AT91_MATRIX_MCFG3 (AT91_MATRIX + 0x0C) /* Master Configuration Register 3 */
++#define AT91_MATRIX_MCFG4 (AT91_MATRIX + 0x10) /* Master Configuration Register 4 */
++#define AT91_MATRIX_MCFG5 (AT91_MATRIX + 0x14) /* Master Configuration Register 5 */
++#define AT91_MATRIX_MCFG6 (AT91_MATRIX + 0x18) /* Master Configuration Register 6 */
++#define AT91_MATRIX_MCFG7 (AT91_MATRIX + 0x1C) /* Master Configuration Register 7 */
++#define AT91_MATRIX_MCFG8 (AT91_MATRIX + 0x20) /* Master Configuration Register 8 */
++#define AT91_MATRIX_MCFG9 (AT91_MATRIX + 0x24) /* Master Configuration Register 9 */
++#define AT91_MATRIX_MCFG10 (AT91_MATRIX + 0x28) /* Master Configuration Register 10 */
++#define AT91_MATRIX_MCFG11 (AT91_MATRIX + 0x2C) /* Master Configuration Register 11 */
++#define AT91_MATRIX_ULBT (7 << 0) /* Undefined Length Burst Type */
++#define AT91_MATRIX_ULBT_INFINITE (0 << 0)
++#define AT91_MATRIX_ULBT_SINGLE (1 << 0)
++#define AT91_MATRIX_ULBT_FOUR (2 << 0)
++#define AT91_MATRIX_ULBT_EIGHT (3 << 0)
++#define AT91_MATRIX_ULBT_SIXTEEN (4 << 0)
++
++#define AT91_MATRIX_SCFG0 (AT91_MATRIX + 0x40) /* Slave Configuration Register 0 */
++#define AT91_MATRIX_SCFG1 (AT91_MATRIX + 0x44) /* Slave Configuration Register 1 */
++#define AT91_MATRIX_SCFG2 (AT91_MATRIX + 0x48) /* Slave Configuration Register 2 */
++#define AT91_MATRIX_SCFG3 (AT91_MATRIX + 0x4C) /* Slave Configuration Register 3 */
++#define AT91_MATRIX_SCFG4 (AT91_MATRIX + 0x50) /* Slave Configuration Register 4 */
++#define AT91_MATRIX_SCFG5 (AT91_MATRIX + 0x54) /* Slave Configuration Register 5 */
++#define AT91_MATRIX_SCFG6 (AT91_MATRIX + 0x58) /* Slave Configuration Register 6 */
++#define AT91_MATRIX_SCFG7 (AT91_MATRIX + 0x5C) /* Slave Configuration Register 7 */
++#define AT91_MATRIX_SCFG8 (AT91_MATRIX + 0x60) /* Slave Configuration Register 8 */
++#define AT91_MATRIX_SCFG9 (AT91_MATRIX + 0x64) /* Slave Configuration Register 9 */
++#define AT91_MATRIX_SLOT_CYCLE (0xff << 0) /* Maximum Number of Allowed Cycles for a Burst */
++#define AT91_MATRIX_DEFMSTR_TYPE (3 << 16) /* Default Master Type */
++#define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
++#define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
++#define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
++#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
++#define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
++#define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
++#define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
++
++#define AT91_MATRIX_PRAS0 (AT91_MATRIX + 0x80) /* Priority Register A for Slave 0 */
++#define AT91_MATRIX_PRBS0 (AT91_MATRIX + 0x84) /* Priority Register B for Slave 0 */
++#define AT91_MATRIX_PRAS1 (AT91_MATRIX + 0x88) /* Priority Register A for Slave 1 */
++#define AT91_MATRIX_PRBS1 (AT91_MATRIX + 0x8C) /* Priority Register B for Slave 1 */
++#define AT91_MATRIX_PRAS2 (AT91_MATRIX + 0x90) /* Priority Register A for Slave 2 */
++#define AT91_MATRIX_PRBS2 (AT91_MATRIX + 0x94) /* Priority Register B for Slave 2 */
++#define AT91_MATRIX_PRAS3 (AT91_MATRIX + 0x98) /* Priority Register A for Slave 3 */
++#define AT91_MATRIX_PRBS3 (AT91_MATRIX + 0x9C) /* Priority Register B for Slave 3 */
++#define AT91_MATRIX_PRAS4 (AT91_MATRIX + 0xA0) /* Priority Register A for Slave 4 */
++#define AT91_MATRIX_PRBS4 (AT91_MATRIX + 0xA4) /* Priority Register B for Slave 4 */
++#define AT91_MATRIX_PRAS5 (AT91_MATRIX + 0xA8) /* Priority Register A for Slave 5 */
++#define AT91_MATRIX_PRBS5 (AT91_MATRIX + 0xAC) /* Priority Register B for Slave 5 */
++#define AT91_MATRIX_PRAS6 (AT91_MATRIX + 0xB0) /* Priority Register A for Slave 6 */
++#define AT91_MATRIX_PRBS6 (AT91_MATRIX + 0xB4) /* Priority Register B for Slave 6 */
++#define AT91_MATRIX_PRAS7 (AT91_MATRIX + 0xB8) /* Priority Register A for Slave 7 */
++#define AT91_MATRIX_PRBS7 (AT91_MATRIX + 0xBC) /* Priority Register B for Slave 7 */
++#define AT91_MATRIX_PRAS8 (AT91_MATRIX + 0xC0) /* Priority Register A for Slave 8 */
++#define AT91_MATRIX_PRBS8 (AT91_MATRIX + 0xC4) /* Priority Register B for Slave 8 */
++#define AT91_MATRIX_PRAS9 (AT91_MATRIX + 0xC8) /* Priority Register A for Slave 9 */
++#define AT91_MATRIX_PRBS9 (AT91_MATRIX + 0xCC) /* Priority Register B for Slave 9 */
++#define AT91_MATRIX_M0PR (3 << 0) /* Master 0 Priority */
++#define AT91_MATRIX_M1PR (3 << 4) /* Master 1 Priority */
++#define AT91_MATRIX_M2PR (3 << 8) /* Master 2 Priority */
++#define AT91_MATRIX_M3PR (3 << 12) /* Master 3 Priority */
++#define AT91_MATRIX_M4PR (3 << 16) /* Master 4 Priority */
++#define AT91_MATRIX_M5PR (3 << 20) /* Master 5 Priority */
++#define AT91_MATRIX_M6PR (3 << 24) /* Master 6 Priority */
++#define AT91_MATRIX_M7PR (3 << 28) /* Master 7 Priority */
++#define AT91_MATRIX_M8PR (3 << 0) /* Master 8 Priority (in Register B) */
++#define AT91_MATRIX_M9PR (3 << 4) /* Master 9 Priority (in Register B) */
++#define AT91_MATRIX_M10PR (3 << 8) /* Master 10 Priority (in Register B) */
++#define AT91_MATRIX_M11PR (3 << 12) /* Master 11 Priority (in Register B) */
++
++#define AT91_MATRIX_MRCR (AT91_MATRIX + 0x100) /* Master Remap Control Register */
++#define AT91_MATRIX_RCB0 (1 << 0) /* Remap Command for AHB Master 0 (ARM926EJ-S Instruction Master) */
++#define AT91_MATRIX_RCB1 (1 << 1) /* Remap Command for AHB Master 1 (ARM926EJ-S Data Master) */
++#define AT91_MATRIX_RCB2 (1 << 2)
++#define AT91_MATRIX_RCB3 (1 << 3)
++#define AT91_MATRIX_RCB4 (1 << 4)
++#define AT91_MATRIX_RCB5 (1 << 5)
++#define AT91_MATRIX_RCB6 (1 << 6)
++#define AT91_MATRIX_RCB7 (1 << 7)
++#define AT91_MATRIX_RCB8 (1 << 8)
++#define AT91_MATRIX_RCB9 (1 << 9)
++#define AT91_MATRIX_RCB10 (1 << 10)
++#define AT91_MATRIX_RCB11 (1 << 11)
++
++#define AT91_MPBS0_SFR (AT91_MATRIX + 0x114) /* MPBlock Slave 0 Special Function Register */
++#define AT91_MPBS1_SFR (AT91_MATRIX + 0x11C) /* MPBlock Slave 1 Special Function Register */
++
++#define AT91_MATRIX_EBICSA (AT91_MATRIX + 0x120) /* EBI Chip Select Assignment Register */
++#define AT91_MATRIX_EBI_CS1A (1 << 1) /* Chip Select 1 Assignment */
++#define AT91_MATRIX_EBI_CS1A_SMC (0 << 1)
++#define AT91_MATRIX_EBI_CS1A_BCRAMC (1 << 1)
++#define AT91_MATRIX_EBI_CS3A (1 << 3) /* Chip Select 3 Assignment */
++#define AT91_MATRIX_EBI_CS3A_SMC (0 << 3)
++#define AT91_MATRIX_EBI_CS3A_SMC_SMARTMEDIA (1 << 3)
++#define AT91_MATRIX_EBI_CS4A (1 << 4) /* Chip Select 4 Assignment */
++#define AT91_MATRIX_EBI_CS4A_SMC (0 << 4)
++#define AT91_MATRIX_EBI_CS4A_SMC_CF1 (1 << 4)
++#define AT91_MATRIX_EBI_CS5A (1 << 5) /* Chip Select 5 Assignment */
++#define AT91_MATRIX_EBI_CS5A_SMC (0 << 5)
++#define AT91_MATRIX_EBI_CS5A_SMC_CF2 (1 << 5)
++#define AT91_MATRIX_EBI_DBPUC (1 << 8) /* Data Bus Pull-up Configuration */
++#define AT91_MATRIX_EBI_DQSPDC (1 << 9) /* Data Qualifier Strobe Pull-Down Configuration */
++#define AT91_MATRIX_EBI_VDDIOMSEL (1 << 16) /* Memory voltage selection */
++#define AT91_MATRIX_EBI_VDDIOMSEL_1_8V (0 << 16)
++#define AT91_MATRIX_EBI_VDDIOMSEL_3_3V (1 << 16)
++
++#define AT91_MPBS2_SFR (AT91_MATRIX + 0x12C) /* MPBlock Slave 2 Special Function Register */
++#define AT91_MPBS3_SFR (AT91_MATRIX + 0x130) /* MPBlock Slave 3 Special Function Register */
++#define AT91_APB_SFR (AT91_MATRIX + 0x134) /* APB Bridge Special Function Register */
++
++#endif
+diff --git a/include/asm-arm/arch-at91/at91sam9260_matrix.h b/include/asm-arm/arch-at91/at91sam9260_matrix.h
+index aacb1e9..a8e9fec 100644
+--- a/include/asm-arm/arch-at91/at91sam9260_matrix.h
++++ b/include/asm-arm/arch-at91/at91sam9260_matrix.h
+@@ -67,7 +67,7 @@
+ #define AT91_MATRIX_CS4A (1 << 4) /* Chip Select 4 Assignment */
+ #define AT91_MATRIX_CS4A_SMC (0 << 4)
+ #define AT91_MATRIX_CS4A_SMC_CF1 (1 << 4)
+-#define AT91_MATRIX_CS5A (1 << 5 ) /* Chip Select 5 Assignment */
++#define AT91_MATRIX_CS5A (1 << 5) /* Chip Select 5 Assignment */
+ #define AT91_MATRIX_CS5A_SMC (0 << 5)
+ #define AT91_MATRIX_CS5A_SMC_CF2 (1 << 5)
+ #define AT91_MATRIX_DBPUC (1 << 8) /* Data Bus Pull-up Configuration */
+diff --git a/include/asm-arm/arch-at91/at91sam9263_matrix.h b/include/asm-arm/arch-at91/at91sam9263_matrix.h
+index 6fc6e4b..72f6e66 100644
+--- a/include/asm-arm/arch-at91/at91sam9263_matrix.h
++++ b/include/asm-arm/arch-at91/at91sam9263_matrix.h
+@@ -44,7 +44,7 @@
+ #define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
+ #define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
+ #define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
+-#define AT91_MATRIX_FIXED_DEFMSTR (7 << 18) /* Fixed Index of Default Master */
++#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
+ #define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
+ #define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
+ #define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
+diff --git a/include/asm-arm/arch-at91/at91sam9rl_matrix.h b/include/asm-arm/arch-at91/at91sam9rl_matrix.h
+index b15f11b..8422417 100644
+--- a/include/asm-arm/arch-at91/at91sam9rl_matrix.h
++++ b/include/asm-arm/arch-at91/at91sam9rl_matrix.h
+@@ -38,7 +38,7 @@
+ #define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
+ #define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
+ #define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
+-#define AT91_MATRIX_FIXED_DEFMSTR (7 << 18) /* Fixed Index of Default Master */
++#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
+ #define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
+ #define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
+ #define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
+diff --git a/include/asm-arm/arch-at91/board.h b/include/asm-arm/arch-at91/board.h
+index 7905496..55b07bd 100644
+--- a/include/asm-arm/arch-at91/board.h
++++ b/include/asm-arm/arch-at91/board.h
+@@ -34,6 +34,7 @@
+ #include <linux/mtd/partitions.h>
+ #include <linux/device.h>
+ #include <linux/i2c.h>
++#include <linux/leds.h>
+ #include <linux/spi/spi.h>
--int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags)
-+static int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags)
- {
- struct gfs2_ail *ai, *s;
- int ret;
-@@ -248,7 +246,7 @@ static void gfs2_ail2_empty_one(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
- bd = list_entry(head->prev, struct gfs2_bufdata,
- bd_ail_st_list);
- gfs2_assert(sdp, bd->bd_ail == ai);
-- gfs2_remove_from_ail(bd->bd_bh->b_page->mapping, bd);
-+ gfs2_remove_from_ail(bd);
- }
- }
+ /* USB Device */
+@@ -71,7 +72,7 @@ struct at91_eth_data {
+ };
+ extern void __init at91_add_device_eth(struct at91_eth_data *data);
-@@ -303,7 +301,7 @@ int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks)
+-#if defined(CONFIG_ARCH_AT91SAM9260) || defined(CONFIG_ARCH_AT91SAM9263)
++#if defined(CONFIG_ARCH_AT91SAM9260) || defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91CAP9)
+ #define eth_platform_data at91_eth_data
+ #endif
- mutex_lock(&sdp->sd_log_reserve_mutex);
- gfs2_log_lock(sdp);
-- while(sdp->sd_log_blks_free <= (blks + reserved_blks)) {
-+ while(atomic_read(&sdp->sd_log_blks_free) <= (blks + reserved_blks)) {
- gfs2_log_unlock(sdp);
- gfs2_ail1_empty(sdp, 0);
- gfs2_log_flush(sdp, NULL);
-@@ -312,7 +310,7 @@ int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks)
- gfs2_ail1_start(sdp, 0);
- gfs2_log_lock(sdp);
- }
-- sdp->sd_log_blks_free -= blks;
-+ atomic_sub(blks, &sdp->sd_log_blks_free);
- gfs2_log_unlock(sdp);
- mutex_unlock(&sdp->sd_log_reserve_mutex);
+@@ -101,13 +102,23 @@ extern void __init at91_add_device_i2c(struct i2c_board_info *devices, int nr_de
+ extern void __init at91_add_device_spi(struct spi_board_info *devices, int nr_devices);
-@@ -332,27 +330,23 @@ void gfs2_log_release(struct gfs2_sbd *sdp, unsigned int blks)
- {
+ /* Serial */
++#define ATMEL_UART_CTS 0x01
++#define ATMEL_UART_RTS 0x02
++#define ATMEL_UART_DSR 0x04
++#define ATMEL_UART_DTR 0x08
++#define ATMEL_UART_DCD 0x10
++#define ATMEL_UART_RI 0x20
++
++extern void __init at91_register_uart(unsigned id, unsigned portnr, unsigned pins);
++extern void __init at91_set_serial_console(unsigned portnr);
++
+ struct at91_uart_config {
+ unsigned short console_tty; /* tty number of serial console */
+ unsigned short nr_tty; /* number of serial tty's */
+ short tty_map[]; /* map UART to tty number */
+ };
+ extern struct platform_device *atmel_default_console_device;
+-extern void __init at91_init_serial(struct at91_uart_config *config);
++extern void __init __deprecated at91_init_serial(struct at91_uart_config *config);
- gfs2_log_lock(sdp);
-- sdp->sd_log_blks_free += blks;
-+ atomic_add(blks, &sdp->sd_log_blks_free);
- gfs2_assert_withdraw(sdp,
-- sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks);
-+ atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks);
- gfs2_log_unlock(sdp);
- up_read(&sdp->sd_log_flush_lock);
- }
+ struct atmel_uart_data {
+ short use_dma_tx; /* use transmit DMA? */
+@@ -116,6 +127,23 @@ struct atmel_uart_data {
+ };
+ extern void __init at91_add_device_serial(void);
- static u64 log_bmap(struct gfs2_sbd *sdp, unsigned int lbn)
- {
-- struct inode *inode = sdp->sd_jdesc->jd_inode;
-- int error;
-- struct buffer_head bh_map = { .b_state = 0, .b_blocknr = 0 };
--
-- bh_map.b_size = 1 << inode->i_blkbits;
-- error = gfs2_block_map(inode, lbn, 0, &bh_map);
-- if (error || !bh_map.b_blocknr)
-- printk(KERN_INFO "error=%d, dbn=%llu lbn=%u", error,
-- (unsigned long long)bh_map.b_blocknr, lbn);
-- gfs2_assert_withdraw(sdp, !error && bh_map.b_blocknr);
--
-- return bh_map.b_blocknr;
-+ struct gfs2_journal_extent *je;
++/*
++ * SSC -- accessed through ssc_request(id). Drivers don't bind to SSC
++ * platform devices. Their SSC ID is part of their configuration data,
++ * along with information about which SSC signals they should use.
++ */
++#define ATMEL_SSC_TK 0x01
++#define ATMEL_SSC_TF 0x02
++#define ATMEL_SSC_TD 0x04
++#define ATMEL_SSC_TX (ATMEL_SSC_TK | ATMEL_SSC_TF | ATMEL_SSC_TD)
+
-+ list_for_each_entry(je, &sdp->sd_jdesc->extent_list, extent_list) {
-+ if (lbn >= je->lblock && lbn < je->lblock + je->blocks)
-+ return je->dblock + lbn - je->lblock;
-+ }
++#define ATMEL_SSC_RK 0x10
++#define ATMEL_SSC_RF 0x20
++#define ATMEL_SSC_RD 0x40
++#define ATMEL_SSC_RX (ATMEL_SSC_RK | ATMEL_SSC_RF | ATMEL_SSC_RD)
+
-+ return -1;
- }
++extern void __init at91_add_device_ssc(unsigned id, unsigned pins);
++
+ /* LCD Controller */
+ struct atmel_lcdfb_info;
+ extern void __init at91_add_device_lcdc(struct atmel_lcdfb_info *data);
+@@ -126,10 +154,12 @@ struct atmel_ac97_data {
+ };
+ extern void __init at91_add_device_ac97(struct atmel_ac97_data *data);
- /**
-@@ -561,8 +555,8 @@ static void log_pull_tail(struct gfs2_sbd *sdp, unsigned int new_tail)
- ail2_empty(sdp, new_tail);
++ /* ISI */
++extern void __init at91_add_device_isi(void);
++
+ /* LEDs */
+-extern u8 at91_leds_cpu;
+-extern u8 at91_leds_timer;
+ extern void __init at91_init_leds(u8 cpu_led, u8 timer_led);
++extern void __init at91_gpio_leds(struct gpio_led *leds, int nr);
- gfs2_log_lock(sdp);
-- sdp->sd_log_blks_free += dist;
-- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks);
-+ atomic_add(dist, &sdp->sd_log_blks_free);
-+ gfs2_assert_withdraw(sdp, atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks);
- gfs2_log_unlock(sdp);
+ /* FIXME: this needs a better location, but gets stuff building again */
+ extern int at91_suspend_entering_slow_clock(void);
+diff --git a/include/asm-arm/arch-at91/cpu.h b/include/asm-arm/arch-at91/cpu.h
+index 080cbb4..7145166 100644
+--- a/include/asm-arm/arch-at91/cpu.h
++++ b/include/asm-arm/arch-at91/cpu.h
+@@ -21,13 +21,13 @@
+ #define ARCH_ID_AT91SAM9260 0x019803a0
+ #define ARCH_ID_AT91SAM9261 0x019703a0
+ #define ARCH_ID_AT91SAM9263 0x019607a0
++#define ARCH_ID_AT91SAM9RL64 0x019b03a0
++#define ARCH_ID_AT91CAP9 0x039A03A0
- sdp->sd_log_tail = new_tail;
-@@ -652,7 +646,7 @@ static void gfs2_ordered_write(struct gfs2_sbd *sdp)
- get_bh(bh);
- gfs2_log_unlock(sdp);
- lock_buffer(bh);
-- if (test_clear_buffer_dirty(bh)) {
-+ if (buffer_mapped(bh) && test_clear_buffer_dirty(bh)) {
- bh->b_end_io = end_buffer_write_sync;
- submit_bh(WRITE, bh);
- } else {
-@@ -694,20 +688,16 @@ static void gfs2_ordered_wait(struct gfs2_sbd *sdp)
- *
- */
+ #define ARCH_ID_AT91SAM9XE128 0x329973a0
+ #define ARCH_ID_AT91SAM9XE256 0x329a93a0
+ #define ARCH_ID_AT91SAM9XE512 0x329aa3a0
--void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
-+void __gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
- {
- struct gfs2_ail *ai;
+-#define ARCH_ID_AT91SAM9RL64 0x019b03a0
+-
+ #define ARCH_ID_AT91M40800 0x14080044
+ #define ARCH_ID_AT91R40807 0x44080746
+ #define ARCH_ID_AT91M40807 0x14080745
+@@ -81,6 +81,11 @@ static inline unsigned long at91_arch_identify(void)
+ #define cpu_is_at91sam9rl() (0)
+ #endif
- down_write(&sdp->sd_log_flush_lock);
++#ifdef CONFIG_ARCH_AT91CAP9
++#define cpu_is_at91cap9() (at91_cpu_identify() == ARCH_ID_AT91CAP9)
++#else
++#define cpu_is_at91cap9() (0)
++#endif
-- if (gl) {
-- gfs2_log_lock(sdp);
-- if (list_empty(&gl->gl_le.le_list)) {
-- gfs2_log_unlock(sdp);
-- up_write(&sdp->sd_log_flush_lock);
-- return;
-- }
-- gfs2_log_unlock(sdp);
-+ /* Log might have been flushed while we waited for the flush lock */
-+ if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags)) {
-+ up_write(&sdp->sd_log_flush_lock);
-+ return;
- }
+ /*
+ * Since this is ARM, we will never run on any AVR32 CPU. But these
+diff --git a/include/asm-arm/arch-at91/entry-macro.S b/include/asm-arm/arch-at91/entry-macro.S
+index cc1d850..1005eee 100644
+--- a/include/asm-arm/arch-at91/entry-macro.S
++++ b/include/asm-arm/arch-at91/entry-macro.S
+@@ -17,13 +17,13 @@
+ .endm
- ai = kzalloc(sizeof(struct gfs2_ail), GFP_NOFS | __GFP_NOFAIL);
-@@ -739,7 +729,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
- log_flush_commit(sdp);
- else if (sdp->sd_log_tail != current_tail(sdp) && !sdp->sd_log_idle){
- gfs2_log_lock(sdp);
-- sdp->sd_log_blks_free--; /* Adjust for unreserved buffer */
-+ atomic_dec(&sdp->sd_log_blks_free); /* Adjust for unreserved buffer */
- gfs2_log_unlock(sdp);
- log_write_header(sdp, 0, PULL);
- }
-@@ -767,7 +757,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl)
- static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
- {
- unsigned int reserved;
-- unsigned int old;
-+ unsigned int unused;
+ .macro get_irqnr_preamble, base, tmp
++ ldr \base, =(AT91_VA_BASE_SYS + AT91_AIC) @ base virtual address of AIC peripheral
+ .endm
- gfs2_log_lock(sdp);
+ .macro arch_ret_to_user, tmp1, tmp2
+ .endm
-@@ -779,14 +769,11 @@ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
- sdp->sd_log_commited_revoke += tr->tr_num_revoke - tr->tr_num_revoke_rm;
- gfs2_assert_withdraw(sdp, ((int)sdp->sd_log_commited_revoke) >= 0);
- reserved = calc_reserved(sdp);
-- old = sdp->sd_log_blks_free;
-- sdp->sd_log_blks_free += tr->tr_reserved -
-- (reserved - sdp->sd_log_blks_reserved);
--
-- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free >= old);
-- gfs2_assert_withdraw(sdp, sdp->sd_log_blks_free <=
-+ unused = sdp->sd_log_blks_reserved - reserved + tr->tr_reserved;
-+ gfs2_assert_withdraw(sdp, unused >= 0);
-+ atomic_add(unused, &sdp->sd_log_blks_free);
-+ gfs2_assert_withdraw(sdp, atomic_read(&sdp->sd_log_blks_free) <=
- sdp->sd_jdesc->jd_blocks);
--
- sdp->sd_log_blks_reserved = reserved;
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+- ldr \base, =(AT91_VA_BASE_SYS + AT91_AIC) @ base virtual address of AIC peripheral
+ ldr \irqnr, [\base, #(AT91_AIC_IVR - AT91_AIC)] @ read IRQ vector register: de-asserts nIRQ to processor (and clears interrupt)
+ ldr \irqstat, [\base, #(AT91_AIC_ISR - AT91_AIC)] @ read interrupt source number
+ teq \irqstat, #0 @ ISR is 0 when no current interrupt, or spurious interrupt
+diff --git a/include/asm-arm/arch-at91/hardware.h b/include/asm-arm/arch-at91/hardware.h
+index 8f1cdd3..2c826d8 100644
+--- a/include/asm-arm/arch-at91/hardware.h
++++ b/include/asm-arm/arch-at91/hardware.h
+@@ -26,6 +26,8 @@
+ #include <asm/arch/at91sam9263.h>
+ #elif defined(CONFIG_ARCH_AT91SAM9RL)
+ #include <asm/arch/at91sam9rl.h>
++#elif defined(CONFIG_ARCH_AT91CAP9)
++#include <asm/arch/at91cap9.h>
+ #elif defined(CONFIG_ARCH_AT91X40)
+ #include <asm/arch/at91x40.h>
+ #else
+diff --git a/include/asm-arm/arch-at91/timex.h b/include/asm-arm/arch-at91/timex.h
+index a310698..f1933b0 100644
+--- a/include/asm-arm/arch-at91/timex.h
++++ b/include/asm-arm/arch-at91/timex.h
+@@ -42,6 +42,11 @@
+ #define AT91SAM9_MASTER_CLOCK 100000000
+ #define CLOCK_TICK_RATE (AT91SAM9_MASTER_CLOCK/16)
- gfs2_log_unlock(sdp);
-@@ -825,7 +812,6 @@ void gfs2_log_shutdown(struct gfs2_sbd *sdp)
- down_write(&sdp->sd_log_flush_lock);
++#elif defined(CONFIG_ARCH_AT91CAP9)
++
++#define AT91CAP9_MASTER_CLOCK 100000000
++#define CLOCK_TICK_RATE (AT91CAP9_MASTER_CLOCK/16)
++
+ #elif defined(CONFIG_ARCH_AT91X40)
- gfs2_assert_withdraw(sdp, !sdp->sd_log_blks_reserved);
-- gfs2_assert_withdraw(sdp, !sdp->sd_log_num_gl);
- gfs2_assert_withdraw(sdp, !sdp->sd_log_num_buf);
- gfs2_assert_withdraw(sdp, !sdp->sd_log_num_revoke);
- gfs2_assert_withdraw(sdp, !sdp->sd_log_num_rg);
-@@ -838,7 +824,7 @@ void gfs2_log_shutdown(struct gfs2_sbd *sdp)
- log_write_header(sdp, GFS2_LOG_HEAD_UNMOUNT,
- (sdp->sd_log_tail == current_tail(sdp)) ? 0 : PULL);
+ #define AT91X40_MASTER_CLOCK 40000000
+diff --git a/include/asm-arm/arch-ep93xx/gpio.h b/include/asm-arm/arch-ep93xx/gpio.h
+index 1ee14a1..9b1864b 100644
+--- a/include/asm-arm/arch-ep93xx/gpio.h
++++ b/include/asm-arm/arch-ep93xx/gpio.h
+@@ -5,16 +5,6 @@
+ #ifndef __ASM_ARCH_GPIO_H
+ #define __ASM_ARCH_GPIO_H
-- gfs2_assert_warn(sdp, sdp->sd_log_blks_free == sdp->sd_jdesc->jd_blocks);
-+ gfs2_assert_warn(sdp, atomic_read(&sdp->sd_log_blks_free) == sdp->sd_jdesc->jd_blocks);
- gfs2_assert_warn(sdp, sdp->sd_log_head == sdp->sd_log_tail);
- gfs2_assert_warn(sdp, list_empty(&sdp->sd_ail2_list));
+-#define GPIO_IN 0
+-#define GPIO_OUT 1
+-
+-#define EP93XX_GPIO_LOW 0
+-#define EP93XX_GPIO_HIGH 1
+-
+-extern void gpio_line_config(int line, int direction);
+-extern int gpio_line_get(int line);
+-extern void gpio_line_set(int line, int value);
+-
+ /* GPIO port A. */
+ #define EP93XX_GPIO_LINE_A(x) ((x) + 0)
+ #define EP93XX_GPIO_LINE_EGPIO0 EP93XX_GPIO_LINE_A(0)
+@@ -38,7 +28,7 @@ extern void gpio_line_set(int line, int value);
+ #define EP93XX_GPIO_LINE_EGPIO15 EP93XX_GPIO_LINE_B(7)
-@@ -866,3 +852,42 @@ void gfs2_meta_syncfs(struct gfs2_sbd *sdp)
- }
- }
+ /* GPIO port C. */
+-#define EP93XX_GPIO_LINE_C(x) ((x) + 16)
++#define EP93XX_GPIO_LINE_C(x) ((x) + 40)
+ #define EP93XX_GPIO_LINE_ROW0 EP93XX_GPIO_LINE_C(0)
+ #define EP93XX_GPIO_LINE_ROW1 EP93XX_GPIO_LINE_C(1)
+ #define EP93XX_GPIO_LINE_ROW2 EP93XX_GPIO_LINE_C(2)
+@@ -71,7 +61,7 @@ extern void gpio_line_set(int line, int value);
+ #define EP93XX_GPIO_LINE_IDEDA2 EP93XX_GPIO_LINE_E(7)
+
+ /* GPIO port F. */
+-#define EP93XX_GPIO_LINE_F(x) ((x) + 40)
++#define EP93XX_GPIO_LINE_F(x) ((x) + 16)
+ #define EP93XX_GPIO_LINE_WP EP93XX_GPIO_LINE_F(0)
+ #define EP93XX_GPIO_LINE_MCCD1 EP93XX_GPIO_LINE_F(1)
+ #define EP93XX_GPIO_LINE_MCCD2 EP93XX_GPIO_LINE_F(2)
+@@ -103,5 +93,49 @@ extern void gpio_line_set(int line, int value);
+ #define EP93XX_GPIO_LINE_DD6 EP93XX_GPIO_LINE_H(6)
+ #define EP93XX_GPIO_LINE_DD7 EP93XX_GPIO_LINE_H(7)
++/* maximum value for gpio line identifiers */
++#define EP93XX_GPIO_LINE_MAX EP93XX_GPIO_LINE_H(7)
+
-+/**
-+ * gfs2_logd - Update log tail as Active Items get flushed to in-place blocks
-+ * @sdp: Pointer to GFS2 superblock
-+ *
-+ * Also, periodically check to make sure that we're using the most recent
-+ * journal index.
-+ */
++/* maximum value for irq capable line identifiers */
++#define EP93XX_GPIO_LINE_MAX_IRQ EP93XX_GPIO_LINE_F(7)
+
-+int gfs2_logd(void *data)
++/* new generic GPIO API - see Documentation/gpio.txt */
++
++static inline int gpio_request(unsigned gpio, const char *label)
+{
-+ struct gfs2_sbd *sdp = data;
-+ unsigned long t;
-+ int need_flush;
++ if (gpio > EP93XX_GPIO_LINE_MAX)
++ return -EINVAL;
++ return 0;
++}
+
-+ while (!kthread_should_stop()) {
-+ /* Advance the log tail */
++static inline void gpio_free(unsigned gpio)
++{
++}
+
-+ t = sdp->sd_log_flush_time +
-+ gfs2_tune_get(sdp, gt_log_flush_secs) * HZ;
++int gpio_direction_input(unsigned gpio);
++int gpio_direction_output(unsigned gpio, int value);
++int gpio_get_value(unsigned gpio);
++void gpio_set_value(unsigned gpio, int value);
+
-+ gfs2_ail1_empty(sdp, DIO_ALL);
-+ gfs2_log_lock(sdp);
-+ need_flush = sdp->sd_log_num_buf > gfs2_tune_get(sdp, gt_incore_log_blocks);
-+ gfs2_log_unlock(sdp);
-+ if (need_flush || time_after_eq(jiffies, t)) {
-+ gfs2_log_flush(sdp, NULL);
-+ sdp->sd_log_flush_time = jiffies;
-+ }
++#include <asm-generic/gpio.h> /* cansleep wrappers */
+
-+ t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
-+ if (freezing(current))
-+ refrigerator();
-+ schedule_timeout_interruptible(t);
-+ }
++/*
++ * Map GPIO A0..A7 (0..7) to irq 64..71,
++ * B0..B7 (7..15) to irq 72..79, and
++ * F0..F7 (16..24) to irq 80..87.
++ */
+
-+ return 0;
-+}
++static inline int gpio_to_irq(unsigned gpio)
++{
++ if (gpio <= EP93XX_GPIO_LINE_MAX_IRQ)
++ return 64 + gpio;
+
-diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
-index dae2824..7711528 100644
---- a/fs/gfs2/log.h
-+++ b/fs/gfs2/log.h
-@@ -48,8 +48,6 @@ static inline void gfs2_log_pointers_init(struct gfs2_sbd *sdp,
- unsigned int gfs2_struct2blk(struct gfs2_sbd *sdp, unsigned int nstruct,
- unsigned int ssize);
-
--int gfs2_ail1_empty(struct gfs2_sbd *sdp, int flags);
--
- int gfs2_log_reserve(struct gfs2_sbd *sdp, unsigned int blks);
- void gfs2_log_release(struct gfs2_sbd *sdp, unsigned int blks);
- void gfs2_log_incr_head(struct gfs2_sbd *sdp);
-@@ -57,11 +55,19 @@ void gfs2_log_incr_head(struct gfs2_sbd *sdp);
- struct buffer_head *gfs2_log_get_buf(struct gfs2_sbd *sdp);
- struct buffer_head *gfs2_log_fake_buf(struct gfs2_sbd *sdp,
- struct buffer_head *real);
--void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl);
-+void __gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl);
++ return -EINVAL;
++}
+
-+static inline void gfs2_log_flush(struct gfs2_sbd *sbd, struct gfs2_glock *gl)
++static inline int irq_to_gpio(unsigned irq)
+{
-+ if (!gl || test_bit(GLF_LFLUSH, &gl->gl_flags))
-+ __gfs2_log_flush(sbd, gl);
++ return irq - gpio_to_irq(0);
+}
-+
- void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans);
--void gfs2_remove_from_ail(struct address_space *mapping, struct gfs2_bufdata *bd);
-+void gfs2_remove_from_ail(struct gfs2_bufdata *bd);
-
- void gfs2_log_shutdown(struct gfs2_sbd *sdp);
- void gfs2_meta_syncfs(struct gfs2_sbd *sdp);
-+int gfs2_logd(void *data);
- #endif /* __LOG_DOT_H__ */
-diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
-index 6c27cea..fae59d6 100644
---- a/fs/gfs2/lops.c
-+++ b/fs/gfs2/lops.c
-@@ -87,6 +87,7 @@ static void gfs2_unpin(struct gfs2_sbd *sdp, struct buffer_head *bh,
- }
- bd->bd_ail = ai;
- list_add(&bd->bd_ail_st_list, &ai->ai_ail1_list);
-+ clear_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
- gfs2_log_unlock(sdp);
- unlock_buffer(bh);
- }
-@@ -124,49 +125,6 @@ static struct buffer_head *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type)
- return bh;
- }
+ #endif
+diff --git a/include/asm-arm/arch-ep93xx/irqs.h b/include/asm-arm/arch-ep93xx/irqs.h
+index 2a8c636..53d4a68 100644
+--- a/include/asm-arm/arch-ep93xx/irqs.h
++++ b/include/asm-arm/arch-ep93xx/irqs.h
+@@ -67,12 +67,6 @@
+ #define IRQ_EP93XX_SAI 60
+ #define EP93XX_VIC2_VALID_IRQ_MASK 0x1fffffff
--static void __glock_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
--{
-- struct gfs2_glock *gl;
-- struct gfs2_trans *tr = current->journal_info;
--
-- tr->tr_touched = 1;
--
-- gl = container_of(le, struct gfs2_glock, gl_le);
-- if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(gl)))
-- return;
--
-- if (!list_empty(&le->le_list))
-- return;
--
-- gfs2_glock_hold(gl);
-- set_bit(GLF_DIRTY, &gl->gl_flags);
-- sdp->sd_log_num_gl++;
-- list_add(&le->le_list, &sdp->sd_log_le_gl);
--}
--
--static void glock_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
--{
-- gfs2_log_lock(sdp);
-- __glock_lo_add(sdp, le);
-- gfs2_log_unlock(sdp);
--}
--
--static void glock_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
--{
-- struct list_head *head = &sdp->sd_log_le_gl;
-- struct gfs2_glock *gl;
--
-- while (!list_empty(head)) {
-- gl = list_entry(head->next, struct gfs2_glock, gl_le.le_list);
-- list_del_init(&gl->gl_le.le_list);
-- sdp->sd_log_num_gl--;
--
-- gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(gl));
-- gfs2_glock_put(gl);
-- }
-- gfs2_assert_warn(sdp, !sdp->sd_log_num_gl);
--}
+-/*
+- * Map GPIO A0..A7 to irq 64..71, B0..B7 to 72..79, and
+- * F0..F7 to 80..87.
+- */
+-#define IRQ_EP93XX_GPIO(x) (64 + (((x) + (((x) >> 2) & 8)) & 0x1f))
-
- static void buf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
- {
- struct gfs2_bufdata *bd = container_of(le, struct gfs2_bufdata, bd_le);
-@@ -182,7 +140,8 @@ static void buf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
- list_add(&bd->bd_list_tr, &tr->tr_list_buf);
- if (!list_empty(&le->le_list))
- goto out;
-- __glock_lo_add(sdp, &bd->bd_gl->gl_le);
-+ set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
-+ set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags);
- gfs2_meta_check(sdp, bd->bd_bh);
- gfs2_pin(sdp, bd->bd_bh);
- sdp->sd_log_num_buf++;
-@@ -556,17 +515,20 @@ static void databuf_lo_add(struct gfs2_sbd *sdp, struct gfs2_log_element *le)
+ #define NR_EP93XX_IRQS (64 + 24)
- lock_buffer(bd->bd_bh);
- gfs2_log_lock(sdp);
-- if (!list_empty(&bd->bd_list_tr))
-- goto out;
-- tr->tr_touched = 1;
-- if (gfs2_is_jdata(ip)) {
-- tr->tr_num_buf++;
-- list_add(&bd->bd_list_tr, &tr->tr_list_buf);
-+ if (tr) {
-+ if (!list_empty(&bd->bd_list_tr))
-+ goto out;
-+ tr->tr_touched = 1;
-+ if (gfs2_is_jdata(ip)) {
-+ tr->tr_num_buf++;
-+ list_add(&bd->bd_list_tr, &tr->tr_list_buf);
-+ }
- }
- if (!list_empty(&le->le_list))
- goto out;
+ #define EP93XX_BOARD_IRQ(x) (NR_EP93XX_IRQS + (x))
+diff --git a/include/asm-arm/arch-ixp4xx/io.h b/include/asm-arm/arch-ixp4xx/io.h
+index eeeea90..9c5d235 100644
+--- a/include/asm-arm/arch-ixp4xx/io.h
++++ b/include/asm-arm/arch-ixp4xx/io.h
+@@ -61,13 +61,13 @@ __ixp4xx_ioremap(unsigned long addr, size_t size, unsigned int mtype)
+ if((addr < PCIBIOS_MIN_MEM) || (addr > 0x4fffffff))
+ return __arm_ioremap(addr, size, mtype);
-- __glock_lo_add(sdp, &bd->bd_gl->gl_le);
-+ set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
-+ set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags);
- if (gfs2_is_jdata(ip)) {
- gfs2_pin(sdp, bd->bd_bh);
- tr->tr_num_databuf_new++;
-@@ -773,12 +735,6 @@ static void databuf_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_ail *ai)
+- return (void *)addr;
++ return (void __iomem *)addr;
}
-
--const struct gfs2_log_operations gfs2_glock_lops = {
-- .lo_add = glock_lo_add,
-- .lo_after_commit = glock_lo_after_commit,
-- .lo_name = "glock",
--};
--
- const struct gfs2_log_operations gfs2_buf_lops = {
- .lo_add = buf_lo_add,
- .lo_incore_commit = buf_lo_incore_commit,
-@@ -816,7 +772,6 @@ const struct gfs2_log_operations gfs2_databuf_lops = {
- };
-
- const struct gfs2_log_operations *gfs2_log_ops[] = {
-- &gfs2_glock_lops,
- &gfs2_databuf_lops,
- &gfs2_buf_lops,
- &gfs2_rg_lops,
-diff --git a/fs/gfs2/main.c b/fs/gfs2/main.c
-index 7ecfe0d..9c7765c 100644
---- a/fs/gfs2/main.c
-+++ b/fs/gfs2/main.c
-@@ -29,9 +29,8 @@ static void gfs2_init_inode_once(struct kmem_cache *cachep, void *foo)
- struct gfs2_inode *ip = foo;
-
- inode_init_once(&ip->i_inode);
-- spin_lock_init(&ip->i_spin);
- init_rwsem(&ip->i_rw_mutex);
-- memset(ip->i_cache, 0, sizeof(ip->i_cache));
-+ ip->i_alloc = NULL;
+ static inline void
+ __ixp4xx_iounmap(void __iomem *addr)
+ {
+- if ((u32)addr >= VMALLOC_START)
++ if ((__force u32)addr >= VMALLOC_START)
+ __iounmap(addr);
}
- static void gfs2_init_glock_once(struct kmem_cache *cachep, void *foo)
-diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
-index 4da4239..85aea27 100644
---- a/fs/gfs2/meta_io.c
-+++ b/fs/gfs2/meta_io.c
-@@ -50,6 +50,7 @@ static int gfs2_aspace_writepage(struct page *page,
- static const struct address_space_operations aspace_aops = {
- .writepage = gfs2_aspace_writepage,
- .releasepage = gfs2_releasepage,
-+ .sync_page = block_sync_page,
- };
-
- /**
-@@ -221,13 +222,14 @@ int gfs2_meta_read(struct gfs2_glock *gl, u64 blkno, int flags,
- struct buffer_head **bhp)
+@@ -141,9 +141,9 @@ __ixp4xx_writesw(volatile void __iomem *bus_addr, const u16 *vaddr, int count)
+ static inline void
+ __ixp4xx_writel(u32 value, volatile void __iomem *p)
{
- *bhp = getbuf(gl, blkno, CREATE);
-- if (!buffer_uptodate(*bhp))
-+ if (!buffer_uptodate(*bhp)) {
- ll_rw_block(READ_META, 1, bhp);
-- if (flags & DIO_WAIT) {
-- int error = gfs2_meta_wait(gl->gl_sbd, *bhp);
-- if (error) {
-- brelse(*bhp);
-- return error;
-+ if (flags & DIO_WAIT) {
-+ int error = gfs2_meta_wait(gl->gl_sbd, *bhp);
-+ if (error) {
-+ brelse(*bhp);
-+ return error;
-+ }
- }
- }
-
-@@ -282,7 +284,7 @@ void gfs2_attach_bufdata(struct gfs2_glock *gl, struct buffer_head *bh,
+- u32 addr = (u32)p;
++ u32 addr = (__force u32)p;
+ if (addr >= VMALLOC_START) {
+- __raw_writel(value, addr);
++ __raw_writel(value, p);
return;
}
-- bd = kmem_cache_zalloc(gfs2_bufdata_cachep, GFP_NOFS | __GFP_NOFAIL),
-+ bd = kmem_cache_zalloc(gfs2_bufdata_cachep, GFP_NOFS | __GFP_NOFAIL);
- bd->bd_bh = bh;
- bd->bd_gl = gl;
+@@ -208,11 +208,11 @@ __ixp4xx_readsw(const volatile void __iomem *bus_addr, u16 *vaddr, u32 count)
+ static inline unsigned long
+ __ixp4xx_readl(const volatile void __iomem *p)
+ {
+- u32 addr = (u32)p;
++ u32 addr = (__force u32)p;
+ u32 data;
-@@ -317,7 +319,7 @@ void gfs2_remove_from_journal(struct buffer_head *bh, struct gfs2_trans *tr, int
- }
- if (bd) {
- if (bd->bd_ail) {
-- gfs2_remove_from_ail(NULL, bd);
-+ gfs2_remove_from_ail(bd);
- bh->b_private = NULL;
- bd->bd_bh = NULL;
- bd->bd_blkno = bh->b_blocknr;
-@@ -358,32 +360,6 @@ void gfs2_meta_wipe(struct gfs2_inode *ip, u64 bstart, u32 blen)
- }
+ if (addr >= VMALLOC_START)
+- return __raw_readl(addr);
++ return __raw_readl(p);
- /**
-- * gfs2_meta_cache_flush - get rid of any references on buffers for this inode
-- * @ip: The GFS2 inode
+ if (ixp4xx_pci_read(addr, NP_CMD_MEMREAD, &data))
+ return 0xffffffff;
+@@ -438,7 +438,7 @@ __ixp4xx_ioread32(const void __iomem *addr)
+ return (unsigned int)__ixp4xx_inl(port & PIO_MASK);
+ else {
+ #ifndef CONFIG_IXP4XX_INDIRECT_PCI
+- return le32_to_cpu(__raw_readl((u32)port));
++ return le32_to_cpu((__force __le32)__raw_readl(addr));
+ #else
+ return (unsigned int)__ixp4xx_readl(addr);
+ #endif
+@@ -523,7 +523,7 @@ __ixp4xx_iowrite32(u32 value, void __iomem *addr)
+ __ixp4xx_outl(value, port & PIO_MASK);
+ else
+ #ifndef CONFIG_IXP4XX_INDIRECT_PCI
+- __raw_writel(cpu_to_le32(value), port);
++ __raw_writel((u32 __force)cpu_to_le32(value), addr);
+ #else
+ __ixp4xx_writel(value, addr);
+ #endif
+diff --git a/include/asm-arm/arch-ixp4xx/platform.h b/include/asm-arm/arch-ixp4xx/platform.h
+index 2a44d3d..2ce28e3 100644
+--- a/include/asm-arm/arch-ixp4xx/platform.h
++++ b/include/asm-arm/arch-ixp4xx/platform.h
+@@ -76,17 +76,6 @@ extern unsigned long ixp4xx_exp_bus_size;
+ #define IXP4XX_UART_XTAL 14745600
+
+ /*
+- * The IXP4xx chips do not have an I2C unit, so GPIO lines are just
+- * used to
+- * Used as platform_data to provide GPIO pin information to the ixp42x
+- * I2C driver.
+- */
+-struct ixp4xx_i2c_pins {
+- unsigned long sda_pin;
+- unsigned long scl_pin;
+-};
+-
+-/*
+ * This structure provide a means for the board setup code
+ * to give information to th pata_ixp4xx driver. It is
+ * passed as platform_data.
+diff --git a/include/asm-arm/arch-ks8695/regs-gpio.h b/include/asm-arm/arch-ks8695/regs-gpio.h
+index 57fcf9f..6b95d77 100644
+--- a/include/asm-arm/arch-ks8695/regs-gpio.h
++++ b/include/asm-arm/arch-ks8695/regs-gpio.h
+@@ -49,5 +49,7 @@
+ #define IOPC_TM_FALLING (4) /* Falling Edge Detection */
+ #define IOPC_TM_EDGE (6) /* Both Edge Detection */
+
++/* Port Data Register */
++#define IOPD_(x) (1 << (x)) /* Signal Level of GPIO Pin x */
+
+ #endif
+diff --git a/include/asm-arm/arch-msm/board.h b/include/asm-arm/arch-msm/board.h
+new file mode 100644
+index 0000000..763051f
+--- /dev/null
++++ b/include/asm-arm/arch-msm/board.h
+@@ -0,0 +1,37 @@
++/* linux/include/asm-arm/arch-msm/board.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ * Author: Brian Swetland <swetland at google.com>
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_BOARD_H
++#define __ASM_ARCH_MSM_BOARD_H
++
++#include <linux/types.h>
++
++/* platform device data structures */
++
++struct msm_mddi_platform_data
++{
++ void (*panel_power)(int on);
++ unsigned has_vsync_irq:1;
++};
++
++/* common init routines for use by arch/arm/mach-msm/board-*.c */
++
++void __init msm_add_devices(void);
++void __init msm_map_common_io(void);
++void __init msm_init_irq(void);
++void __init msm_init_gpio(void);
++
++#endif
+diff --git a/include/asm-arm/arch-msm/debug-macro.S b/include/asm-arm/arch-msm/debug-macro.S
+new file mode 100644
+index 0000000..393d527
+--- /dev/null
++++ b/include/asm-arm/arch-msm/debug-macro.S
+@@ -0,0 +1,40 @@
++/* include/asm-arm/arch-msm7200/debug-macro.S
++ *
++ * Copyright (C) 2007 Google, Inc.
++ * Author: Brian Swetland <swetland at google.com>
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#include <asm/hardware.h>
++#include <asm/arch/msm_iomap.h>
++
++ .macro addruart,rx
++ @ see if the MMU is enabled and select appropriate base address
++ mrc p15, 0, \rx, c1, c0
++ tst \rx, #1
++ ldreq \rx, =MSM_UART1_PHYS
++ ldrne \rx, =MSM_UART1_BASE
++ .endm
++
++ .macro senduart,rd,rx
++ str \rd, [\rx, #0x0C]
++ .endm
++
++ .macro waituart,rd,rx
++ @ wait for TX_READY
++1: ldr \rd, [\rx, #0x08]
++ tst \rd, #0x04
++ beq 1b
++ .endm
++
++ .macro busyuart,rd,rx
++ .endm
+diff --git a/include/asm-arm/arch-msm/dma.h b/include/asm-arm/arch-msm/dma.h
+new file mode 100644
+index 0000000..e4b565b
+--- /dev/null
++++ b/include/asm-arm/arch-msm/dma.h
+@@ -0,0 +1,151 @@
++/* linux/include/asm-arm/arch-msm/dma.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_DMA_H
++
++#include <linux/list.h>
++#include <asm/arch/msm_iomap.h>
++
++struct msm_dmov_cmd {
++ struct list_head list;
++ unsigned int cmdptr;
++ void (*complete_func)(struct msm_dmov_cmd *cmd, unsigned int result);
++/* void (*user_result_func)(struct msm_dmov_cmd *cmd); */
++};
++
++void msm_dmov_enqueue_cmd(unsigned id, struct msm_dmov_cmd *cmd);
++void msm_dmov_stop_cmd(unsigned id, struct msm_dmov_cmd *cmd);
++int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr);
++/* int msm_dmov_exec_cmd_etc(unsigned id, unsigned int cmdptr, int timeout, int interruptible); */
++
++
++
++#define DMOV_SD0(off, ch) (MSM_DMOV_BASE + 0x0000 + (off) + ((ch) << 2))
++#define DMOV_SD1(off, ch) (MSM_DMOV_BASE + 0x0400 + (off) + ((ch) << 2))
++#define DMOV_SD2(off, ch) (MSM_DMOV_BASE + 0x0800 + (off) + ((ch) << 2))
++#define DMOV_SD3(off, ch) (MSM_DMOV_BASE + 0x0C00 + (off) + ((ch) << 2))
++
++/* only security domain 3 is available to the ARM11
++ * SD0 -> mARM trusted, SD1 -> mARM nontrusted, SD2 -> aDSP, SD3 -> aARM
++ */
++
++#define DMOV_CMD_PTR(ch) DMOV_SD3(0x000, ch)
++#define DMOV_CMD_LIST (0 << 29) /* does not work */
++#define DMOV_CMD_PTR_LIST (1 << 29) /* works */
++#define DMOV_CMD_INPUT_CFG (2 << 29) /* untested */
++#define DMOV_CMD_OUTPUT_CFG (3 << 29) /* untested */
++#define DMOV_CMD_ADDR(addr) ((addr) >> 3)
++
++#define DMOV_RSLT(ch) DMOV_SD3(0x040, ch)
++#define DMOV_RSLT_VALID (1 << 31) /* 0 == host has empties result fifo */
++#define DMOV_RSLT_ERROR (1 << 3)
++#define DMOV_RSLT_FLUSH (1 << 2)
++#define DMOV_RSLT_DONE (1 << 1) /* top pointer done */
++#define DMOV_RSLT_USER (1 << 0) /* command with FR force result */
++
++#define DMOV_FLUSH0(ch) DMOV_SD3(0x080, ch)
++#define DMOV_FLUSH1(ch) DMOV_SD3(0x0C0, ch)
++#define DMOV_FLUSH2(ch) DMOV_SD3(0x100, ch)
++#define DMOV_FLUSH3(ch) DMOV_SD3(0x140, ch)
++#define DMOV_FLUSH4(ch) DMOV_SD3(0x180, ch)
++#define DMOV_FLUSH5(ch) DMOV_SD3(0x1C0, ch)
++
++#define DMOV_STATUS(ch) DMOV_SD3(0x200, ch)
++#define DMOV_STATUS_RSLT_COUNT(n) (((n) >> 29))
++#define DMOV_STATUS_CMD_COUNT(n) (((n) >> 27) & 3)
++#define DMOV_STATUS_RSLT_VALID (1 << 1)
++#define DMOV_STATUS_CMD_PTR_RDY (1 << 0)
++
++#define DMOV_ISR DMOV_SD3(0x380, 0)
++
++#define DMOV_CONFIG(ch) DMOV_SD3(0x300, ch)
++#define DMOV_CONFIG_FORCE_TOP_PTR_RSLT (1 << 2)
++#define DMOV_CONFIG_FORCE_FLUSH_RSLT (1 << 1)
++#define DMOV_CONFIG_IRQ_EN (1 << 0)
++
++/* channel assignments */
++
++#define DMOV_NAND_CHAN 7
++#define DMOV_NAND_CRCI_CMD 5
++#define DMOV_NAND_CRCI_DATA 4
++
++#define DMOV_SDC1_CHAN 8
++#define DMOV_SDC1_CRCI 6
++
++#define DMOV_SDC2_CHAN 8
++#define DMOV_SDC2_CRCI 7
++
++#define DMOV_TSIF_CHAN 10
++#define DMOV_TSIF_CRCI 10
++
++#define DMOV_USB_CHAN 11
++
++/* no client rate control ifc (eg, ram) */
++#define DMOV_NONE_CRCI 0
++
++
++/* If the CMD_PTR register has CMD_PTR_LIST selected, the data mover
++ * is going to walk a list of 32bit pointers as described below. Each
++ * pointer points to a *array* of dmov_s, etc structs. The last pointer
++ * in the list is marked with CMD_PTR_LP. The last struct in each array
++ * is marked with CMD_LC (see below).
++ */
++#define CMD_PTR_ADDR(addr) ((addr) >> 3)
++#define CMD_PTR_LP (1 << 31) /* last pointer */
++#define CMD_PTR_PT (3 << 29) /* ? */
++
++/* Single Item Mode */
++typedef struct {
++ unsigned cmd;
++ unsigned src;
++ unsigned dst;
++ unsigned len;
++} dmov_s;
++
++/* Scatter/Gather Mode */
++typedef struct {
++ unsigned cmd;
++ unsigned src_dscr;
++ unsigned dst_dscr;
++ unsigned _reserved;
++} dmov_sg;
++
++/* bits for the cmd field of the above structures */
++
++#define CMD_LC (1 << 31) /* last command */
++#define CMD_FR (1 << 22) /* force result -- does not work? */
++#define CMD_OCU (1 << 21) /* other channel unblock */
++#define CMD_OCB (1 << 20) /* other channel block */
++#define CMD_TCB (1 << 19) /* ? */
++#define CMD_DAH (1 << 18) /* destination address hold -- does not work?*/
++#define CMD_SAH (1 << 17) /* source address hold -- does not work? */
++
++#define CMD_MODE_SINGLE (0 << 0) /* dmov_s structure used */
++#define CMD_MODE_SG (1 << 0) /* untested */
++#define CMD_MODE_IND_SG (2 << 0) /* untested */
++#define CMD_MODE_BOX (3 << 0) /* untested */
++
++#define CMD_DST_SWAP_BYTES (1 << 14) /* exchange each byte n with byte n+1 */
++#define CMD_DST_SWAP_SHORTS (1 << 15) /* exchange each short n with short n+1 */
++#define CMD_DST_SWAP_WORDS (1 << 16) /* exchange each word n with word n+1 */
++
++#define CMD_SRC_SWAP_BYTES (1 << 11) /* exchange each byte n with byte n+1 */
++#define CMD_SRC_SWAP_SHORTS (1 << 12) /* exchange each short n with short n+1 */
++#define CMD_SRC_SWAP_WORDS (1 << 13) /* exchange each word n with word n+1 */
++
++#define CMD_DST_CRCI(n) (((n) & 15) << 7)
++#define CMD_SRC_CRCI(n) (((n) & 15) << 3)
++
++#endif
+diff --git a/include/asm-arm/arch-msm/entry-macro.S b/include/asm-arm/arch-msm/entry-macro.S
+new file mode 100644
+index 0000000..ee24aec
+--- /dev/null
++++ b/include/asm-arm/arch-msm/entry-macro.S
+@@ -0,0 +1,38 @@
++/* include/asm-arm/arch-msm7200/entry-macro.S
++ *
++ * Copyright (C) 2007 Google, Inc.
++ * Author: Brian Swetland <swetland at google.com>
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#include <asm/arch/msm_iomap.h>
++
++ .macro disable_fiq
++ .endm
++
++ .macro get_irqnr_preamble, base, tmp
++ @ enable imprecise aborts
++ cpsie a
++ mov \base, #MSM_VIC_BASE
++ .endm
++
++ .macro arch_ret_to_user, tmp1, tmp2
++ .endm
++
++ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
++ @ 0xD0 has irq# or old irq# if the irq has been handled
++ @ 0xD4 has irq# or -1 if none pending *but* if you just
++ @ read 0xD4 you never get the first irq for some reason
++ ldr \irqnr, [\base, #0xD0]
++ ldr \irqnr, [\base, #0xD4]
++ cmp \irqnr, #0xffffffff
++ .endm
+diff --git a/include/asm-arm/arch-msm/hardware.h b/include/asm-arm/arch-msm/hardware.h
+new file mode 100644
+index 0000000..89af2b7
+--- /dev/null
++++ b/include/asm-arm/arch-msm/hardware.h
+@@ -0,0 +1,18 @@
++/* linux/include/asm-arm/arch-msm/hardware.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_HARDWARE_H
++
++#endif
+diff --git a/include/asm-arm/arch-msm/io.h b/include/asm-arm/arch-msm/io.h
+new file mode 100644
+index 0000000..4645ae2
+--- /dev/null
++++ b/include/asm-arm/arch-msm/io.h
+@@ -0,0 +1,33 @@
++/* include/asm-arm/arch-msm/io.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARM_ARCH_IO_H
++#define __ASM_ARM_ARCH_IO_H
++
++#define IO_SPACE_LIMIT 0xffffffff
++
++#define __arch_ioremap __msm_ioremap
++#define __arch_iounmap __iounmap
++
++void __iomem *__msm_ioremap(unsigned long phys_addr, size_t size, unsigned int mtype);
++
++static inline void __iomem *__io(unsigned long addr)
++{
++ return (void __iomem *)addr;
++}
++#define __io(a) __io(a)
++#define __mem_pci(a) (a)
++
++#endif
+diff --git a/include/asm-arm/arch-msm/irqs.h b/include/asm-arm/arch-msm/irqs.h
+new file mode 100644
+index 0000000..565430c
+--- /dev/null
++++ b/include/asm-arm/arch-msm/irqs.h
+@@ -0,0 +1,89 @@
++/* linux/include/asm-arm/arch-msm/irqs.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ * Author: Brian Swetland <swetland at google.com>
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_IRQS_H
++
++/* MSM ARM11 Interrupt Numbers */
++/* See 80-VE113-1 A, pp219-221 */
++
++#define INT_A9_M2A_0 0
++#define INT_A9_M2A_1 1
++#define INT_A9_M2A_2 2
++#define INT_A9_M2A_3 3
++#define INT_A9_M2A_4 4
++#define INT_A9_M2A_5 5
++#define INT_A9_M2A_6 6
++#define INT_GP_TIMER_EXP 7
++#define INT_DEBUG_TIMER_EXP 8
++#define INT_UART1 9
++#define INT_UART2 10
++#define INT_UART3 11
++#define INT_UART1_RX 12
++#define INT_UART2_RX 13
++#define INT_UART3_RX 14
++#define INT_USB_OTG 15
++#define INT_MDDI_PRI 16
++#define INT_MDDI_EXT 17
++#define INT_MDDI_CLIENT 18
++#define INT_MDP 19
++#define INT_GRAPHICS 20
++#define INT_ADM_AARM 21
++#define INT_ADSP_A11 22
++#define INT_ADSP_A9_A11 23
++#define INT_SDC1_0 24
++#define INT_SDC1_1 25
++#define INT_SDC2_0 26
++#define INT_SDC2_1 27
++#define INT_KEYSENSE 28
++#define INT_TCHSCRN_SSBI 29
++#define INT_TCHSCRN1 30
++#define INT_TCHSCRN2 31
++
++#define INT_GPIO_GROUP1 (32 + 0)
++#define INT_GPIO_GROUP2 (32 + 1)
++#define INT_PWB_I2C (32 + 2)
++#define INT_SOFTRESET (32 + 3)
++#define INT_NAND_WR_ER_DONE (32 + 4)
++#define INT_NAND_OP_DONE (32 + 5)
++#define INT_PBUS_ARM11 (32 + 6)
++#define INT_AXI_MPU_SMI (32 + 7)
++#define INT_AXI_MPU_EBI1 (32 + 8)
++#define INT_AD_HSSD (32 + 9)
++#define INT_ARM11_PMU (32 + 10)
++#define INT_ARM11_DMA (32 + 11)
++#define INT_TSIF_IRQ (32 + 12)
++#define INT_UART1DM_IRQ (32 + 13)
++#define INT_UART1DM_RX (32 + 14)
++#define INT_USB_HS (32 + 15)
++#define INT_SDC3_0 (32 + 16)
++#define INT_SDC3_1 (32 + 17)
++#define INT_SDC4_0 (32 + 18)
++#define INT_SDC4_1 (32 + 19)
++#define INT_UART2DM_RX (32 + 20)
++#define INT_UART2DM_IRQ (32 + 21)
++
++/* 22-31 are reserved */
++
++#define MSM_IRQ_BIT(irq) (1 << ((irq) & 31))
++
++#define NR_MSM_IRQS 64
++#define NR_GPIO_IRQS 122
++#define NR_BOARD_IRQS 64
++#define NR_IRQS (NR_MSM_IRQS + NR_GPIO_IRQS + NR_BOARD_IRQS)
++
++#define MSM_GPIO_TO_INT(n) (NR_MSM_IRQS + (n))
++
++#endif
+diff --git a/include/asm-arm/arch-msm/memory.h b/include/asm-arm/arch-msm/memory.h
+new file mode 100644
+index 0000000..b5ce0e9
+--- /dev/null
++++ b/include/asm-arm/arch-msm/memory.h
+@@ -0,0 +1,27 @@
++/* linux/include/asm-arm/arch-msm/memory.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MEMORY_H
++#define __ASM_ARCH_MEMORY_H
++
++/* physical offset of RAM */
++#define PHYS_OFFSET UL(0x10000000)
++
++/* bus address and physical addresses are identical */
++#define __virt_to_bus(x) __virt_to_phys(x)
++#define __bus_to_virt(x) __phys_to_virt(x)
++
++#endif
++
+diff --git a/include/asm-arm/arch-msm/msm_iomap.h b/include/asm-arm/arch-msm/msm_iomap.h
+new file mode 100644
+index 0000000..b8955cc
+--- /dev/null
++++ b/include/asm-arm/arch-msm/msm_iomap.h
+@@ -0,0 +1,104 @@
++/* linux/include/asm-arm/arch-msm/msm_iomap.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ * Author: Brian Swetland <swetland at google.com>
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ *
++ * The MSM peripherals are spread all over across 768MB of physical
++ * space, which makes just having a simple IO_ADDRESS macro to slide
++ * them into the right virtual location rough. Instead, we will
++ * provide a master phys->virt mapping for peripherals here.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_IOMAP_H
++#define __ASM_ARCH_MSM_IOMAP_H
++
++#include <asm/sizes.h>
++
++/* Physical base address and size of peripherals.
++ * Ordered by the virtual base addresses they will be mapped at.
++ *
++ * MSM_VIC_BASE must be an value that can be loaded via a "mov"
++ * instruction, otherwise entry-macro.S will not compile.
++ *
++ * If you add or remove entries here, you'll want to edit the
++ * msm_io_desc array in arch/arm/mach-msm/io.c to reflect your
++ * changes.
++ *
++ */
++
++#define MSM_VIC_BASE 0xE0000000
++#define MSM_VIC_PHYS 0xC0000000
++#define MSM_VIC_SIZE SZ_4K
++
++#define MSM_CSR_BASE 0xE0001000
++#define MSM_CSR_PHYS 0xC0100000
++#define MSM_CSR_SIZE SZ_4K
++
++#define MSM_GPT_PHYS MSM_CSR_PHYS
++#define MSM_GPT_BASE MSM_CSR_BASE
++#define MSM_GPT_SIZE SZ_4K
++
++#define MSM_DMOV_BASE 0xE0002000
++#define MSM_DMOV_PHYS 0xA9700000
++#define MSM_DMOV_SIZE SZ_4K
++
++#define MSM_UART1_BASE 0xE0003000
++#define MSM_UART1_PHYS 0xA9A00000
++#define MSM_UART1_SIZE SZ_4K
++
++#define MSM_UART2_BASE 0xE0004000
++#define MSM_UART2_PHYS 0xA9B00000
++#define MSM_UART2_SIZE SZ_4K
++
++#define MSM_UART3_BASE 0xE0005000
++#define MSM_UART3_PHYS 0xA9C00000
++#define MSM_UART3_SIZE SZ_4K
++
++#define MSM_I2C_BASE 0xE0006000
++#define MSM_I2C_PHYS 0xA9900000
++#define MSM_I2C_SIZE SZ_4K
++
++#define MSM_GPIO1_BASE 0xE0007000
++#define MSM_GPIO1_PHYS 0xA9200000
++#define MSM_GPIO1_SIZE SZ_4K
++
++#define MSM_GPIO2_BASE 0xE0008000
++#define MSM_GPIO2_PHYS 0xA9300000
++#define MSM_GPIO2_SIZE SZ_4K
++
++#define MSM_HSUSB_BASE 0xE0009000
++#define MSM_HSUSB_PHYS 0xA0800000
++#define MSM_HSUSB_SIZE SZ_4K
++
++#define MSM_CLK_CTL_BASE 0xE000A000
++#define MSM_CLK_CTL_PHYS 0xA8600000
++#define MSM_CLK_CTL_SIZE SZ_4K
++
++#define MSM_PMDH_BASE 0xE000B000
++#define MSM_PMDH_PHYS 0xAA600000
++#define MSM_PMDH_SIZE SZ_4K
++
++#define MSM_EMDH_BASE 0xE000C000
++#define MSM_EMDH_PHYS 0xAA700000
++#define MSM_EMDH_SIZE SZ_4K
++
++#define MSM_MDP_BASE 0xE0010000
++#define MSM_MDP_PHYS 0xAA200000
++#define MSM_MDP_SIZE 0x000F0000
++
++#define MSM_SHARED_RAM_BASE 0xE0100000
++#define MSM_SHARED_RAM_PHYS 0x01F00000
++#define MSM_SHARED_RAM_SIZE SZ_1M
++
++#endif
+diff --git a/include/asm-arm/arch-msm/system.h b/include/asm-arm/arch-msm/system.h
+new file mode 100644
+index 0000000..7c5544b
+--- /dev/null
++++ b/include/asm-arm/arch-msm/system.h
+@@ -0,0 +1,23 @@
++/* linux/include/asm-arm/arch-msm/system.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#include <asm/hardware.h>
++
++void arch_idle(void);
++
++static inline void arch_reset(char mode)
++{
++ for (;;) ; /* depends on IPC w/ other core */
++}
+diff --git a/include/asm-arm/arch-msm/timex.h b/include/asm-arm/arch-msm/timex.h
+new file mode 100644
+index 0000000..154b23f
+--- /dev/null
++++ b/include/asm-arm/arch-msm/timex.h
+@@ -0,0 +1,20 @@
++/* linux/include/asm-arm/arch-msm/timex.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_TIMEX_H
++
++#define CLOCK_TICK_RATE 1000000
++
++#endif
+diff --git a/include/asm-arm/arch-msm/uncompress.h b/include/asm-arm/arch-msm/uncompress.h
+new file mode 100644
+index 0000000..e91ed78
+--- /dev/null
++++ b/include/asm-arm/arch-msm/uncompress.h
+@@ -0,0 +1,36 @@
++/* linux/include/asm-arm/arch-msm/uncompress.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_UNCOMPRESS_H
++
++#include "hardware.h"
++
++static void putc(int c)
++{
++}
++
++static inline void flush(void)
++{
++}
++
++static inline void arch_decomp_setup(void)
++{
++}
++
++static inline void arch_decomp_wdog(void)
++{
++}
++
++#endif
+diff --git a/include/asm-arm/arch-msm/vmalloc.h b/include/asm-arm/arch-msm/vmalloc.h
+new file mode 100644
+index 0000000..60f8d91
+--- /dev/null
++++ b/include/asm-arm/arch-msm/vmalloc.h
+@@ -0,0 +1,22 @@
++/* linux/include/asm-arm/arch-msm/vmalloc.h
++ *
++ * Copyright (C) 2007 Google, Inc.
++ *
++ * This software is licensed under the terms of the GNU General Public
++ * License version 2, as published by the Free Software Foundation, and
++ * may be copied, distributed, and modified under those terms.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ */
++
++#ifndef __ASM_ARCH_MSM_VMALLOC_H
++#define __ASM_ARCH_MSM_VMALLOC_H
++
++#define VMALLOC_END (PAGE_OFFSET + 0x10000000)
++
++#endif
++
+diff --git a/include/asm-arm/arch-omap/tps65010.h b/include/asm-arm/arch-omap/tps65010.h
+deleted file mode 100644
+index b9aa2b3..0000000
+--- a/include/asm-arm/arch-omap/tps65010.h
++++ /dev/null
+@@ -1,156 +0,0 @@
+-/* linux/include/asm-arm/arch-omap/tps65010.h
- *
-- * This releases buffers that are in the most-recently-used array of
-- * blocks used for indirect block addressing for this inode.
+- * Functions to access TPS65010 power management device.
+- *
+- * Copyright (C) 2004 Dirk Behme <dirk.behme at de.bosch.com>
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms of the GNU General Public License as published by the
+- * Free Software Foundation; either version 2 of the License, or (at your
+- * option) any later version.
+- *
+- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+- * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+- * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+- * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+- * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+- *
+- * You should have received a copy of the GNU General Public License along
+- * with this program; if not, write to the Free Software Foundation, Inc.,
+- * 675 Mass Ave, Cambridge, MA 02139, USA.
- */
-
--void gfs2_meta_cache_flush(struct gfs2_inode *ip)
--{
-- struct buffer_head **bh_slot;
-- unsigned int x;
+-#ifndef __ASM_ARCH_TPS65010_H
+-#define __ASM_ARCH_TPS65010_H
-
-- spin_lock(&ip->i_spin);
+-/*
+- * ----------------------------------------------------------------------------
+- * Registers, all 8 bits
+- * ----------------------------------------------------------------------------
+- */
-
-- for (x = 0; x < GFS2_MAX_META_HEIGHT; x++) {
-- bh_slot = &ip->i_cache[x];
-- if (*bh_slot) {
-- brelse(*bh_slot);
-- *bh_slot = NULL;
-- }
-- }
+-#define TPS_CHGSTATUS 0x01
+-# define TPS_CHG_USB (1 << 7)
+-# define TPS_CHG_AC (1 << 6)
+-# define TPS_CHG_THERM (1 << 5)
+-# define TPS_CHG_TERM (1 << 4)
+-# define TPS_CHG_TAPER_TMO (1 << 3)
+-# define TPS_CHG_CHG_TMO (1 << 2)
+-# define TPS_CHG_PRECHG_TMO (1 << 1)
+-# define TPS_CHG_TEMP_ERR (1 << 0)
+-#define TPS_REGSTATUS 0x02
+-# define TPS_REG_ONOFF (1 << 7)
+-# define TPS_REG_COVER (1 << 6)
+-# define TPS_REG_UVLO (1 << 5)
+-# define TPS_REG_NO_CHG (1 << 4) /* tps65013 */
+-# define TPS_REG_PG_LD02 (1 << 3)
+-# define TPS_REG_PG_LD01 (1 << 2)
+-# define TPS_REG_PG_MAIN (1 << 1)
+-# define TPS_REG_PG_CORE (1 << 0)
+-#define TPS_MASK1 0x03
+-#define TPS_MASK2 0x04
+-#define TPS_ACKINT1 0x05
+-#define TPS_ACKINT2 0x06
+-#define TPS_CHGCONFIG 0x07
+-# define TPS_CHARGE_POR (1 << 7) /* 65010/65012 */
+-# define TPS65013_AUA (1 << 7) /* 65011/65013 */
+-# define TPS_CHARGE_RESET (1 << 6)
+-# define TPS_CHARGE_FAST (1 << 5)
+-# define TPS_CHARGE_CURRENT (3 << 3)
+-# define TPS_VBUS_500MA (1 << 2)
+-# define TPS_VBUS_CHARGING (1 << 1)
+-# define TPS_CHARGE_ENABLE (1 << 0)
+-#define TPS_LED1_ON 0x08
+-#define TPS_LED1_PER 0x09
+-#define TPS_LED2_ON 0x0a
+-#define TPS_LED2_PER 0x0b
+-#define TPS_VDCDC1 0x0c
+-# define TPS_ENABLE_LP (1 << 3)
+-#define TPS_VDCDC2 0x0d
+-#define TPS_VREGS1 0x0e
+-# define TPS_LDO2_ENABLE (1 << 7)
+-# define TPS_LDO2_OFF (1 << 6)
+-# define TPS_VLDO2_3_0V (3 << 4)
+-# define TPS_VLDO2_2_75V (2 << 4)
+-# define TPS_VLDO2_2_5V (1 << 4)
+-# define TPS_VLDO2_1_8V (0 << 4)
+-# define TPS_LDO1_ENABLE (1 << 3)
+-# define TPS_LDO1_OFF (1 << 2)
+-# define TPS_VLDO1_3_0V (3 << 0)
+-# define TPS_VLDO1_2_75V (2 << 0)
+-# define TPS_VLDO1_2_5V (1 << 0)
+-# define TPS_VLDO1_ADJ (0 << 0)
+-#define TPS_MASK3 0x0f
+-#define TPS_DEFGPIO 0x10
-
-- spin_unlock(&ip->i_spin);
--}
+-/*
+- * ----------------------------------------------------------------------------
+- * Macros used by exported functions
+- * ----------------------------------------------------------------------------
+- */
-
--/**
- * gfs2_meta_indirect_buffer - Get a metadata buffer
- * @ip: The GFS2 inode
- * @height: The level of this buf in the metadata (indir addr) tree (if any)
-@@ -391,8 +367,6 @@ void gfs2_meta_cache_flush(struct gfs2_inode *ip)
- * @new: Non-zero if we may create a new buffer
- * @bhp: the buffer is returned here
- *
-- * Try to use the gfs2_inode's MRU metadata tree cache.
-- *
- * Returns: errno
- */
-
-@@ -401,58 +375,25 @@ int gfs2_meta_indirect_buffer(struct gfs2_inode *ip, int height, u64 num,
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
- struct gfs2_glock *gl = ip->i_gl;
-- struct buffer_head *bh = NULL, **bh_slot = ip->i_cache + height;
-- int in_cache = 0;
+-#define LED1 1
+-#define LED2 2
+-#define OFF 0
+-#define ON 1
+-#define BLINK 2
+-#define GPIO1 1
+-#define GPIO2 2
+-#define GPIO3 3
+-#define GPIO4 4
+-#define LOW 0
+-#define HIGH 1
-
-- BUG_ON(!gl);
-- BUG_ON(!sdp);
+-/*
+- * ----------------------------------------------------------------------------
+- * Exported functions
+- * ----------------------------------------------------------------------------
+- */
-
-- spin_lock(&ip->i_spin);
-- if (*bh_slot && (*bh_slot)->b_blocknr == num) {
-- bh = *bh_slot;
-- get_bh(bh);
-- in_cache = 1;
-- }
-- spin_unlock(&ip->i_spin);
+-/* Draw from VBUS:
+- * 0 mA -- DON'T DRAW (might supply power instead)
+- * 100 mA -- usb unit load (slowest charge rate)
+- * 500 mA -- usb high power (fast battery charge)
+- */
+-extern int tps65010_set_vbus_draw(unsigned mA);
-
-- if (!bh)
-- bh = getbuf(gl, num, CREATE);
+-/* tps65010_set_gpio_out_value parameter:
+- * gpio: GPIO1, GPIO2, GPIO3 or GPIO4
+- * value: LOW or HIGH
+- */
+-extern int tps65010_set_gpio_out_value(unsigned gpio, unsigned value);
-
-- if (!bh)
-- return -ENOBUFS;
-+ struct buffer_head *bh;
-+ int ret = 0;
-
- if (new) {
-- if (gfs2_assert_warn(sdp, height))
-- goto err;
-- meta_prep_new(bh);
-+ BUG_ON(height == 0);
-+ bh = gfs2_meta_new(gl, num);
- gfs2_trans_add_bh(ip->i_gl, bh, 1);
- gfs2_metatype_set(bh, GFS2_METATYPE_IN, GFS2_FORMAT_IN);
- gfs2_buffer_clear_tail(bh, sizeof(struct gfs2_meta_header));
- } else {
- u32 mtype = height ? GFS2_METATYPE_IN : GFS2_METATYPE_DI;
-- if (!buffer_uptodate(bh)) {
-- ll_rw_block(READ_META, 1, &bh);
-- if (gfs2_meta_wait(sdp, bh))
-- goto err;
-+ ret = gfs2_meta_read(gl, num, DIO_WAIT, &bh);
-+ if (ret == 0 && gfs2_metatype_check(sdp, bh, mtype)) {
-+ brelse(bh);
-+ ret = -EIO;
- }
-- if (gfs2_metatype_check(sdp, bh, mtype))
-- goto err;
-- }
+-/* tps65010_set_led parameter:
+- * led: LED1 or LED2
+- * mode: ON, OFF or BLINK
+- */
+-extern int tps65010_set_led(unsigned led, unsigned mode);
-
-- if (!in_cache) {
-- spin_lock(&ip->i_spin);
-- if (*bh_slot)
-- brelse(*bh_slot);
-- *bh_slot = bh;
-- get_bh(bh);
-- spin_unlock(&ip->i_spin);
- }
+-/* tps65010_set_vib parameter:
+- * value: ON or OFF
+- */
+-extern int tps65010_set_vib(unsigned value);
-
- *bhp = bh;
-- return 0;
--err:
-- brelse(bh);
-- return -EIO;
-+ return ret;
- }
-
- /**
-diff --git a/fs/gfs2/meta_io.h b/fs/gfs2/meta_io.h
-index b704822..73e3b1c 100644
---- a/fs/gfs2/meta_io.h
-+++ b/fs/gfs2/meta_io.h
-@@ -56,7 +56,6 @@ void gfs2_remove_from_journal(struct buffer_head *bh, struct gfs2_trans *tr,
-
- void gfs2_meta_wipe(struct gfs2_inode *ip, u64 bstart, u32 blen);
-
--void gfs2_meta_cache_flush(struct gfs2_inode *ip);
- int gfs2_meta_indirect_buffer(struct gfs2_inode *ip, int height, u64 num,
- int new, struct buffer_head **bhp);
-
-diff --git a/fs/gfs2/ops_address.c b/fs/gfs2/ops_address.c
-index 9679f8b..38dbe99 100644
---- a/fs/gfs2/ops_address.c
-+++ b/fs/gfs2/ops_address.c
-@@ -20,6 +20,8 @@
- #include <linux/swap.h>
- #include <linux/gfs2_ondisk.h>
- #include <linux/lm_interface.h>
-+#include <linux/backing-dev.h>
-+#include <linux/pagevec.h>
-
- #include "gfs2.h"
- #include "incore.h"
-@@ -32,7 +34,6 @@
- #include "quota.h"
- #include "trans.h"
- #include "rgrp.h"
--#include "ops_file.h"
- #include "super.h"
- #include "util.h"
- #include "glops.h"
-@@ -58,22 +59,6 @@ static void gfs2_page_add_databufs(struct gfs2_inode *ip, struct page *page,
- }
-
- /**
-- * gfs2_get_block - Fills in a buffer head with details about a block
-- * @inode: The inode
-- * @lblock: The block number to look up
-- * @bh_result: The buffer head to return the result in
-- * @create: Non-zero if we may add block to the file
-- *
-- * Returns: errno
+-/* tps65010_set_low_pwr parameter:
+- * mode: ON or OFF
- */
+-extern int tps65010_set_low_pwr(unsigned mode);
-
--int gfs2_get_block(struct inode *inode, sector_t lblock,
-- struct buffer_head *bh_result, int create)
--{
-- return gfs2_block_map(inode, lblock, create, bh_result);
--}
+-/* tps65010_config_vregs1 parameter:
+- * value to be written to VREGS1 register
+- * Note: The complete register is written, set all bits you need
+- */
+-extern int tps65010_config_vregs1(unsigned value);
-
--/**
- * gfs2_get_block_noalloc - Fills in a buffer head with details about a block
- * @inode: The inode
- * @lblock: The block number to look up
-@@ -88,7 +73,7 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
- {
- int error;
-
-- error = gfs2_block_map(inode, lblock, 0, bh_result);
-+ error = gfs2_block_map(inode, lblock, bh_result, 0);
- if (error)
- return error;
- if (!buffer_mapped(bh_result))
-@@ -99,20 +84,19 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
- static int gfs2_get_block_direct(struct inode *inode, sector_t lblock,
- struct buffer_head *bh_result, int create)
- {
-- return gfs2_block_map(inode, lblock, 0, bh_result);
-+ return gfs2_block_map(inode, lblock, bh_result, 0);
- }
-
- /**
-- * gfs2_writepage - Write complete page
-- * @page: Page to write
-+ * gfs2_writepage_common - Common bits of writepage
-+ * @page: The page to be written
-+ * @wbc: The writeback control
- *
-- * Returns: errno
-- *
-- * Some of this is copied from block_write_full_page() although we still
-- * call it to do most of the work.
-+ * Returns: 1 if writepage is ok, otherwise an error code or zero if no error.
- */
-
--static int gfs2_writepage(struct page *page, struct writeback_control *wbc)
-+static int gfs2_writepage_common(struct page *page,
-+ struct writeback_control *wbc)
- {
- struct inode *inode = page->mapping->host;
- struct gfs2_inode *ip = GFS2_I(inode);
-@@ -120,41 +104,133 @@ static int gfs2_writepage(struct page *page, struct writeback_control *wbc)
- loff_t i_size = i_size_read(inode);
- pgoff_t end_index = i_size >> PAGE_CACHE_SHIFT;
- unsigned offset;
-- int error;
-- int done_trans = 0;
-+ int ret = -EIO;
-
-- if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(ip->i_gl))) {
-- unlock_page(page);
-- return -EIO;
-- }
-+ if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(ip->i_gl)))
-+ goto out;
-+ ret = 0;
- if (current->journal_info)
-- goto out_ignore;
+-/* tps65013_set_low_pwr parameter:
+- * mode: ON or OFF
+- */
+-extern int tps65013_set_low_pwr(unsigned mode);
-
-+ goto redirty;
- /* Is the page fully outside i_size? (truncate in progress) */
-- offset = i_size & (PAGE_CACHE_SIZE-1);
-+ offset = i_size & (PAGE_CACHE_SIZE-1);
- if (page->index > end_index || (page->index == end_index && !offset)) {
- page->mapping->a_ops->invalidatepage(page, 0);
-- unlock_page(page);
-- return 0; /* don't care */
-+ goto out;
-+ }
-+ return 1;
-+redirty:
-+ redirty_page_for_writepage(wbc, page);
-+out:
-+ unlock_page(page);
-+ return 0;
-+}
+-#endif /* __ASM_ARCH_TPS65010_H */
+-
+diff --git a/include/asm-arm/arch-orion/debug-macro.S b/include/asm-arm/arch-orion/debug-macro.S
+new file mode 100644
+index 0000000..e2a8064
+--- /dev/null
++++ b/include/asm-arm/arch-orion/debug-macro.S
+@@ -0,0 +1,17 @@
++/*
++ * linux/include/asm-arm/arch-orion/debug-macro.S
++ *
++ * Debugging macro include header
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++*/
+
-+/**
-+ * gfs2_writeback_writepage - Write page for writeback mappings
-+ * @page: The page
-+ * @wbc: The writeback control
++ .macro addruart,rx
++ mov \rx, #0xf1000000
++ orr \rx, \rx, #0x00012000
++ .endm
++
++#define UART_SHIFT 2
++#include <asm/hardware/debug-8250.S>
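The hardcoded address that `addruart` assembles in two steps (ARM `mov`/`orr` immediates are limited to 8-bit rotated constants) should correspond to the UART0 register block that `orion.h` later in this patch defines as `ORION_DEV_BUS_REG(0x2000)`. A minimal consistency sketch, with the macros copied from the `orion.h` hunk below and `orion_addruart()` as an illustrative stand-in for the assembly macro:

```c
#include <stdint.h>

/* Macros reproduced from the orion.h hunk in this patch. */
#define ORION_REGS_BASE      0xf1000000u
#define ORION_DEV_BUS_REG(x) ((ORION_REGS_BASE | 0x10000u) | (x))
#define UART0_BASE           ORION_DEV_BUS_REG(0x2000)

/* Mirrors the addruart macro:
 *   mov rx, #0xf1000000
 *   orr rx, rx, #0x00012000
 */
static uint32_t orion_addruart(void)
{
    return 0xf1000000u | 0x00012000u;
}
```

Both expressions evaluate to `0xf1012000`, so the early-debug UART and the mapped UART0 register base agree.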
+diff --git a/include/asm-arm/arch-orion/dma.h b/include/asm-arm/arch-orion/dma.h
+new file mode 100644
+index 0000000..40a8c17
+--- /dev/null
++++ b/include/asm-arm/arch-orion/dma.h
+@@ -0,0 +1 @@
++/* empty */
+diff --git a/include/asm-arm/arch-orion/entry-macro.S b/include/asm-arm/arch-orion/entry-macro.S
+new file mode 100644
+index 0000000..b76075a
+--- /dev/null
++++ b/include/asm-arm/arch-orion/entry-macro.S
+@@ -0,0 +1,31 @@
++/*
++ * include/asm-arm/arch-orion/entry-macro.S
++ *
++ * Low-level IRQ helper macros for Orion platforms
+ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+static int gfs2_writeback_writepage(struct page *page,
-+ struct writeback_control *wbc)
-+{
-+ int ret;
++#include <asm/arch/orion.h>
+
-+ ret = gfs2_writepage_common(page, wbc);
-+ if (ret <= 0)
-+ return ret;
++ .macro disable_fiq
++ .endm
+
-+ ret = mpage_writepage(page, gfs2_get_block_noalloc, wbc);
-+ if (ret == -EAGAIN)
-+ ret = block_write_full_page(page, gfs2_get_block_noalloc, wbc);
-+ return ret;
-+}
++ .macro arch_ret_to_user, tmp1, tmp2
++ .endm
+
-+/**
-+ * gfs2_ordered_writepage - Write page for ordered data files
-+ * @page: The page to write
-+ * @wbc: The writeback control
++ .macro get_irqnr_preamble, base, tmp
++ ldr \base, =MAIN_IRQ_CAUSE
++ .endm
++
++ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
++ ldr \irqstat, [\base, #0] @ main cause
++ ldr \tmp, [\base, #(MAIN_IRQ_MASK - MAIN_IRQ_CAUSE)] @ main mask
++ mov \irqnr, #0 @ default irqnr
++ @ find cause bits that are unmasked
++ ands \irqstat, \irqstat, \tmp @ clear Z flag if any
++ clzne \irqnr, \irqstat @ calc irqnr
++ rsbne \irqnr, \irqnr, #31
++ .endm
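The `get_irqnr_and_base` macro above ANDs the main cause register with the main mask register, then uses `clz`/`rsb` to turn the highest pending unmasked bit into an IRQ number, leaving the default of 0 when nothing is pending. A C sketch of that decode, where `orion_decode_irq` is an illustrative name and `__builtin_clz` stands in for the ARM `CLZ` instruction:

```c
#include <stdint.h>

/* Decode the Orion main interrupt controller the way the
 * assembly macro does: mask the cause bits, then pick the
 * highest-numbered bit that is both pending and enabled. */
static int orion_decode_irq(uint32_t cause, uint32_t mask)
{
    uint32_t stat = cause & mask;   /* unmasked pending sources */
    int irqnr = 0;                  /* default irqnr, as in the macro */

    if (stat)
        irqnr = 31 - __builtin_clz(stat);  /* clzne + rsbne #31 */
    return irqnr;
}
```

With the numbering from `irqs.h` below, a pending SATA interrupt (bit 29) decodes to `IRQ_ORION_SATA`, and when two sources are pending the higher-numbered one wins.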
+diff --git a/include/asm-arm/arch-orion/gpio.h b/include/asm-arm/arch-orion/gpio.h
+new file mode 100644
+index 0000000..d66284f
+--- /dev/null
++++ b/include/asm-arm/arch-orion/gpio.h
+@@ -0,0 +1,28 @@
++/*
++ * include/asm-arm/arch-orion/gpio.h
+ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+static int gfs2_ordered_writepage(struct page *page,
-+ struct writeback_control *wbc)
-+{
-+ struct inode *inode = page->mapping->host;
-+ struct gfs2_inode *ip = GFS2_I(inode);
-+ int ret;
++extern int gpio_request(unsigned pin, const char *label);
++extern void gpio_free(unsigned pin);
++extern int gpio_direction_input(unsigned pin);
++extern int gpio_direction_output(unsigned pin, int value);
++extern int gpio_get_value(unsigned pin);
++extern void gpio_set_value(unsigned pin, int value);
++extern void orion_gpio_set_blink(unsigned pin, int blink);
++extern void gpio_display(void); /* debug */
+
-+ ret = gfs2_writepage_common(page, wbc);
-+ if (ret <= 0)
-+ return ret;
++static inline int gpio_to_irq(int pin)
++{
++ return pin + IRQ_ORION_GPIO_START;
++}
+
-+ if (!page_has_buffers(page)) {
-+ create_empty_buffers(page, inode->i_sb->s_blocksize,
-+ (1 << BH_Dirty)|(1 << BH_Uptodate));
- }
-+ gfs2_page_add_databufs(ip, page, 0, inode->i_sb->s_blocksize-1);
-+ return block_write_full_page(page, gfs2_get_block_noalloc, wbc);
++static inline int irq_to_gpio(int irq)
++{
++ return irq - IRQ_ORION_GPIO_START;
+}
-
-- if ((sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip)) &&
-- PageChecked(page)) {
-+/**
-+ * __gfs2_jdata_writepage - The core of jdata writepage
-+ * @page: The page to write
-+ * @wbc: The writeback control
++
++#include <asm-generic/gpio.h> /* cansleep wrappers */
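The `gpio_to_irq()`/`irq_to_gpio()` pair in the header above is a fixed-offset mapping: GPIO interrupts sit directly after the 32 main-controller sources, with `IRQ_ORION_GPIO_START` (32) taken from the `irqs.h` hunk of this same patch. A standalone sketch of the round trip, using illustrative `orion_`-prefixed names to avoid clashing with the real inlines:

```c
/* Value from the irqs.h hunk in this patch. */
#define IRQ_ORION_GPIO_START 32

/* Fixed-offset pin <-> irq translation, as in arch-orion/gpio.h. */
static int orion_gpio_to_irq(int pin) { return pin + IRQ_ORION_GPIO_START; }
static int orion_irq_to_gpio(int irq) { return irq - IRQ_ORION_GPIO_START; }
```

The two functions are exact inverses, so any pin survives a pin-to-irq-to-pin round trip.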
+diff --git a/include/asm-arm/arch-orion/hardware.h b/include/asm-arm/arch-orion/hardware.h
+new file mode 100644
+index 0000000..8a12d21
+--- /dev/null
++++ b/include/asm-arm/arch-orion/hardware.h
+@@ -0,0 +1,24 @@
++/*
++ * include/asm-arm/arch-orion/hardware.h
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
+ *
-+ * This is shared between writepage and writepages and implements the
-+ * core of the writepage operation. If a transaction is required then
-+ * PageChecked will have been set and the transaction will have
-+ * already been started before this is called.
+ */
+
-+static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
-+{
-+ struct inode *inode = page->mapping->host;
-+ struct gfs2_inode *ip = GFS2_I(inode);
-+ struct gfs2_sbd *sdp = GFS2_SB(inode);
++#ifndef __ASM_ARCH_HARDWARE_H__
++#define __ASM_ARCH_HARDWARE_H__
+
-+ if (PageChecked(page)) {
- ClearPageChecked(page);
-- error = gfs2_trans_begin(sdp, RES_DINODE + 1, 0);
-- if (error)
-- goto out_ignore;
- if (!page_has_buffers(page)) {
- create_empty_buffers(page, inode->i_sb->s_blocksize,
- (1 << BH_Dirty)|(1 << BH_Uptodate));
- }
- gfs2_page_add_databufs(ip, page, 0, sdp->sd_vfs->s_blocksize-1);
-+ }
-+ return block_write_full_page(page, gfs2_get_block_noalloc, wbc);
-+}
++#include "orion.h"
+
-+/**
-+ * gfs2_jdata_writepage - Write complete page
-+ * @page: Page to write
++#define PCI_MEMORY_VADDR ORION_PCI_SYS_MEM_BASE
++#define PCI_IO_VADDR ORION_PCI_SYS_IO_BASE
++
++#define pcibios_assign_all_busses() 1
++
++#define PCIBIOS_MIN_IO 0x1000
++#define PCIBIOS_MIN_MEM 0x01000000
++#define PCIMEM_BASE PCI_MEMORY_VADDR /* mem base for VGA */
++
++#endif /* _ASM_ARCH_HARDWARE_H */
+diff --git a/include/asm-arm/arch-orion/io.h b/include/asm-arm/arch-orion/io.h
+new file mode 100644
+index 0000000..e0b8c39
+--- /dev/null
++++ b/include/asm-arm/arch-orion/io.h
+@@ -0,0 +1,27 @@
++/*
++ * include/asm-arm/arch-orion/io.h
+ *
-+ * Returns: errno
++ * Tzachi Perelstein <tzachi at marvell.com>
+ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+static int gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
-+{
-+ struct inode *inode = page->mapping->host;
-+ struct gfs2_sbd *sdp = GFS2_SB(inode);
-+ int error;
-+ int done_trans = 0;
++#ifndef __ASM_ARM_ARCH_IO_H
++#define __ASM_ARM_ARCH_IO_H
+
-+ error = gfs2_writepage_common(page, wbc);
-+ if (error <= 0)
-+ return error;
++#include "orion.h"
+
-+ if (PageChecked(page)) {
-+ if (wbc->sync_mode != WB_SYNC_ALL)
-+ goto out_ignore;
-+ error = gfs2_trans_begin(sdp, RES_DINODE + 1, 0);
-+ if (error)
-+ goto out_ignore;
- done_trans = 1;
- }
-- error = block_write_full_page(page, gfs2_get_block_noalloc, wbc);
-+ error = __gfs2_jdata_writepage(page, wbc);
- if (done_trans)
- gfs2_trans_end(sdp);
-- gfs2_meta_cache_flush(ip);
- return error;
-
- out_ignore:
-@@ -164,29 +240,190 @@ out_ignore:
- }
-
- /**
-- * gfs2_writepages - Write a bunch of dirty pages back to disk
-+ * gfs2_writeback_writepages - Write a bunch of dirty pages back to disk
- * @mapping: The mapping to write
- * @wbc: Write-back control
- *
-- * For journaled files and/or ordered writes this just falls back to the
-- * kernel's default writepages path for now. We will probably want to change
-- * that eventually (i.e. when we look at allocate on flush).
-- *
-- * For the data=writeback case though we can already ignore buffer heads
-+ * For the data=writeback case we can already ignore buffer heads
- * and write whole extents at once. This is a big reduction in the
- * number of I/O requests we send and the bmap calls we make in this case.
- */
--static int gfs2_writepages(struct address_space *mapping,
-- struct writeback_control *wbc)
-+static int gfs2_writeback_writepages(struct address_space *mapping,
-+ struct writeback_control *wbc)
++#define IO_SPACE_LIMIT 0xffffffff
++#define IO_SPACE_REMAP ORION_PCI_SYS_IO_BASE
++
++static inline void __iomem *__io(unsigned long addr)
+{
-+ return mpage_writepages(mapping, wbc, gfs2_get_block_noalloc);
++ return (void __iomem *)addr;
+}
+
-+/**
-+ * gfs2_write_jdata_pagevec - Write back a pagevec's worth of pages
-+ * @mapping: The mapping
-+ * @wbc: The writeback control
-+ * @writepage: The writepage function to call for each page
-+ * @pvec: The vector of pages
-+ * @nr_pages: The number of pages to write
++#define __io(a) __io(a)
++#define __mem_pci(a) (a)
++
++#endif
+diff --git a/include/asm-arm/arch-orion/irqs.h b/include/asm-arm/arch-orion/irqs.h
+new file mode 100644
+index 0000000..eea65ca
+--- /dev/null
++++ b/include/asm-arm/arch-orion/irqs.h
+@@ -0,0 +1,61 @@
++/*
++ * include/asm-arm/arch-orion/irqs.h
+ *
-+ * Returns: non-zero if loop should terminate, zero otherwise
++ * IRQ definitions for Orion SoC
++ *
++ * Maintainer: Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+static int gfs2_write_jdata_pagevec(struct address_space *mapping,
-+ struct writeback_control *wbc,
-+ struct pagevec *pvec,
-+ int nr_pages, pgoff_t end)
- {
- struct inode *inode = mapping->host;
-- struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_sbd *sdp = GFS2_SB(inode);
-+ loff_t i_size = i_size_read(inode);
-+ pgoff_t end_index = i_size >> PAGE_CACHE_SHIFT;
-+ unsigned offset = i_size & (PAGE_CACHE_SIZE-1);
-+ unsigned nrblocks = nr_pages * (PAGE_CACHE_SIZE/inode->i_sb->s_blocksize);
-+ struct backing_dev_info *bdi = mapping->backing_dev_info;
-+ int i;
-+ int ret;
++#ifndef __ASM_ARCH_IRQS_H__
++#define __ASM_ARCH_IRQS_H__
+
-+ ret = gfs2_trans_begin(sdp, nrblocks, 0);
-+ if (ret < 0)
-+ return ret;
++#include "orion.h" /* need GPIO_MAX */
+
-+ for(i = 0; i < nr_pages; i++) {
-+ struct page *page = pvec->pages[i];
++/*
++ * Orion Main Interrupt Controller
++ */
++#define IRQ_ORION_BRIDGE 0
++#define IRQ_ORION_DOORBELL_H2C 1
++#define IRQ_ORION_DOORBELL_C2H 2
++#define IRQ_ORION_UART0 3
++#define IRQ_ORION_UART1 4
++#define IRQ_ORION_I2C 5
++#define IRQ_ORION_GPIO_0_7 6
++#define IRQ_ORION_GPIO_8_15 7
++#define IRQ_ORION_GPIO_16_23 8
++#define IRQ_ORION_GPIO_24_31 9
++#define IRQ_ORION_PCIE0_ERR 10
++#define IRQ_ORION_PCIE0_INT 11
++#define IRQ_ORION_USB1_CTRL 12
++#define IRQ_ORION_DEV_BUS_ERR 14
++#define IRQ_ORION_PCI_ERR 15
++#define IRQ_ORION_USB_BR_ERR 16
++#define IRQ_ORION_USB0_CTRL 17
++#define IRQ_ORION_ETH_RX 18
++#define IRQ_ORION_ETH_TX 19
++#define IRQ_ORION_ETH_MISC 20
++#define IRQ_ORION_ETH_SUM 21
++#define IRQ_ORION_ETH_ERR 22
++#define IRQ_ORION_IDMA_ERR 23
++#define IRQ_ORION_IDMA_0 24
++#define IRQ_ORION_IDMA_1 25
++#define IRQ_ORION_IDMA_2 26
++#define IRQ_ORION_IDMA_3 27
++#define IRQ_ORION_CESA 28
++#define IRQ_ORION_SATA 29
++#define IRQ_ORION_XOR0 30
++#define IRQ_ORION_XOR1 31
+
-+ lock_page(page);
++/*
++ * Orion General Purpose Pins
++ */
++#define IRQ_ORION_GPIO_START 32
++#define NR_GPIO_IRQS GPIO_MAX
+
-+ if (unlikely(page->mapping != mapping)) {
-+ unlock_page(page);
-+ continue;
-+ }
++#define NR_IRQS (IRQ_ORION_GPIO_START + NR_GPIO_IRQS)
+
-+ if (!wbc->range_cyclic && page->index > end) {
-+ ret = 1;
-+ unlock_page(page);
-+ continue;
-+ }
++#endif /* __ASM_ARCH_IRQS_H__ */
+diff --git a/include/asm-arm/arch-orion/memory.h b/include/asm-arm/arch-orion/memory.h
+new file mode 100644
+index 0000000..d954dba
+--- /dev/null
++++ b/include/asm-arm/arch-orion/memory.h
+@@ -0,0 +1,15 @@
++/*
++ * include/asm-arm/arch-orion/memory.h
++ *
++ * Marvell Orion memory definitions
++ */
+
-+ if (wbc->sync_mode != WB_SYNC_NONE)
-+ wait_on_page_writeback(page);
++#ifndef __ASM_ARCH_MMU_H
++#define __ASM_ARCH_MMU_H
+
-+ if (PageWriteback(page) ||
-+ !clear_page_dirty_for_io(page)) {
-+ unlock_page(page);
-+ continue;
-+ }
++#define PHYS_OFFSET UL(0x00000000)
+
-+ /* Is the page fully outside i_size? (truncate in progress) */
-+ if (page->index > end_index || (page->index == end_index && !offset)) {
-+ page->mapping->a_ops->invalidatepage(page, 0);
-+ unlock_page(page);
-+ continue;
-+ }
++#define __virt_to_bus(x) __virt_to_phys(x)
++#define __bus_to_virt(x) __phys_to_virt(x)
+
-+ ret = __gfs2_jdata_writepage(page, wbc);
++#endif
+diff --git a/include/asm-arm/arch-orion/orion.h b/include/asm-arm/arch-orion/orion.h
+new file mode 100644
+index 0000000..f787f75
+--- /dev/null
++++ b/include/asm-arm/arch-orion/orion.h
+@@ -0,0 +1,143 @@
++/*
++ * include/asm-arm/arch-orion/orion.h
++ *
++ * Generic definitions of Orion SoC flavors:
++ * Orion-1, Orion-NAS, Orion-VoIP, and Orion-2.
++ *
++ * Maintainer: Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
+
-+ if (ret || (--(wbc->nr_to_write) <= 0))
-+ ret = 1;
-+ if (wbc->nonblocking && bdi_write_congested(bdi)) {
-+ wbc->encountered_congestion = 1;
-+ ret = 1;
-+ }
++#ifndef __ASM_ARCH_ORION_H__
++#define __ASM_ARCH_ORION_H__
+
-+ }
-+ gfs2_trans_end(sdp);
-+ return ret;
-+}
++/*******************************************************************************
++ * Orion Address Map
++ * Use the same mapping (1:1 virtual:physical) of internal registers and
++ * PCI system (PCI+PCIE) for all machines.
++ * Each machine defines the rest of its mapping (e.g. device bus flashes)
++ ******************************************************************************/
++#define ORION_REGS_BASE 0xf1000000
++#define ORION_REGS_SIZE SZ_1M
+
-+/**
-+ * gfs2_write_cache_jdata - Like write_cache_pages but different
-+ * @mapping: The mapping to write
-+ * @wbc: The writeback control
-+ * @writepage: The writepage function to call
-+ * @data: The data to pass to writepage
-+ *
-+ * The reason that we use our own function here is that we need to
-+ * start transactions before we grab page locks. This allows us
-+ * to get the ordering right.
-+ */
++#define ORION_PCI_SYS_MEM_BASE 0xe0000000
++#define ORION_PCIE_MEM_BASE ORION_PCI_SYS_MEM_BASE
++#define ORION_PCIE_MEM_SIZE SZ_128M
++#define ORION_PCI_MEM_BASE (ORION_PCIE_MEM_BASE + ORION_PCIE_MEM_SIZE)
++#define ORION_PCI_MEM_SIZE SZ_128M
+
-+static int gfs2_write_cache_jdata(struct address_space *mapping,
-+ struct writeback_control *wbc)
-+{
-+ struct backing_dev_info *bdi = mapping->backing_dev_info;
-+ int ret = 0;
-+ int done = 0;
-+ struct pagevec pvec;
-+ int nr_pages;
-+ pgoff_t index;
-+ pgoff_t end;
-+ int scanned = 0;
-+ int range_whole = 0;
++#define ORION_PCI_SYS_IO_BASE 0xf2000000
++#define ORION_PCIE_IO_BASE ORION_PCI_SYS_IO_BASE
++#define ORION_PCIE_IO_SIZE SZ_1M
++#define ORION_PCIE_IO_REMAP (ORION_PCIE_IO_BASE - ORION_PCI_SYS_IO_BASE)
++#define ORION_PCI_IO_BASE (ORION_PCIE_IO_BASE + ORION_PCIE_IO_SIZE)
++#define ORION_PCI_IO_SIZE SZ_1M
++#define ORION_PCI_IO_REMAP (ORION_PCI_IO_BASE - ORION_PCI_SYS_IO_BASE)
++/* Relevant only for Orion-NAS */
++#define ORION_PCIE_WA_BASE 0xf0000000
++#define ORION_PCIE_WA_SIZE SZ_16M
+
-+ if (wbc->nonblocking && bdi_write_congested(bdi)) {
-+ wbc->encountered_congestion = 1;
-+ return 0;
-+ }
++/*******************************************************************************
++ * Supported Devices & Revisions
++ ******************************************************************************/
++/* Orion-1 (88F5181) */
++#define MV88F5181_DEV_ID 0x5181
++#define MV88F5181_REV_B1 3
++/* Orion-NAS (88F5182) */
++#define MV88F5182_DEV_ID 0x5182
++#define MV88F5182_REV_A2 2
++/* Orion-2 (88F5281) */
++#define MV88F5281_DEV_ID 0x5281
++#define MV88F5281_REV_D1 5
++#define MV88F5281_REV_D2 6
+
-+ pagevec_init(&pvec, 0);
-+ if (wbc->range_cyclic) {
-+ index = mapping->writeback_index; /* Start from prev offset */
-+ end = -1;
-+ } else {
-+ index = wbc->range_start >> PAGE_CACHE_SHIFT;
-+ end = wbc->range_end >> PAGE_CACHE_SHIFT;
-+ if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
-+ range_whole = 1;
-+ scanned = 1;
-+ }
-
-- if (sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK && !gfs2_is_jdata(ip))
-- return mpage_writepages(mapping, wbc, gfs2_get_block_noalloc);
-+retry:
-+ while (!done && (index <= end) &&
-+ (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
-+ PAGECACHE_TAG_DIRTY,
-+ min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
-+ scanned = 1;
-+ ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, end);
-+ if (ret)
-+ done = 1;
-+ if (ret > 0)
-+ ret = 0;
++/*******************************************************************************
++ * Orion Registers Map
++ ******************************************************************************/
++#define ORION_DDR_REG_BASE (ORION_REGS_BASE | 0x00000)
++#define ORION_DEV_BUS_REG_BASE (ORION_REGS_BASE | 0x10000)
++#define ORION_BRIDGE_REG_BASE (ORION_REGS_BASE | 0x20000)
++#define ORION_PCI_REG_BASE (ORION_REGS_BASE | 0x30000)
++#define ORION_PCIE_REG_BASE (ORION_REGS_BASE | 0x40000)
++#define ORION_USB0_REG_BASE (ORION_REGS_BASE | 0x50000)
++#define ORION_ETH_REG_BASE (ORION_REGS_BASE | 0x70000)
++#define ORION_SATA_REG_BASE (ORION_REGS_BASE | 0x80000)
++#define ORION_USB1_REG_BASE (ORION_REGS_BASE | 0xa0000)
+
-+ pagevec_release(&pvec);
-+ cond_resched();
-+ }
++#define ORION_DDR_REG(x) (ORION_DDR_REG_BASE | (x))
++#define ORION_DEV_BUS_REG(x) (ORION_DEV_BUS_REG_BASE | (x))
++#define ORION_BRIDGE_REG(x) (ORION_BRIDGE_REG_BASE | (x))
++#define ORION_PCI_REG(x) (ORION_PCI_REG_BASE | (x))
++#define ORION_PCIE_REG(x) (ORION_PCIE_REG_BASE | (x))
++#define ORION_USB0_REG(x) (ORION_USB0_REG_BASE | (x))
++#define ORION_USB1_REG(x) (ORION_USB1_REG_BASE | (x))
++#define ORION_ETH_REG(x) (ORION_ETH_REG_BASE | (x))
++#define ORION_SATA_REG(x) (ORION_SATA_REG_BASE | (x))
+
-+ if (!scanned && !done) {
-+ /*
-+ * We hit the last page and there is more work to be done: wrap
-+ * back to the start of the file
-+ */
-+ scanned = 1;
-+ index = 0;
-+ goto retry;
-+ }
++/*******************************************************************************
++ * Device Bus Registers
++ ******************************************************************************/
++#define MPP_0_7_CTRL ORION_DEV_BUS_REG(0x000)
++#define MPP_8_15_CTRL ORION_DEV_BUS_REG(0x004)
++#define MPP_16_19_CTRL ORION_DEV_BUS_REG(0x050)
++#define MPP_DEV_CTRL ORION_DEV_BUS_REG(0x008)
++#define MPP_RESET_SAMPLE ORION_DEV_BUS_REG(0x010)
++#define GPIO_OUT ORION_DEV_BUS_REG(0x100)
++#define GPIO_IO_CONF ORION_DEV_BUS_REG(0x104)
++#define GPIO_BLINK_EN ORION_DEV_BUS_REG(0x108)
++#define GPIO_IN_POL ORION_DEV_BUS_REG(0x10c)
++#define GPIO_DATA_IN ORION_DEV_BUS_REG(0x110)
++#define GPIO_EDGE_CAUSE ORION_DEV_BUS_REG(0x114)
++#define GPIO_EDGE_MASK ORION_DEV_BUS_REG(0x118)
++#define GPIO_LEVEL_MASK ORION_DEV_BUS_REG(0x11c)
++#define DEV_BANK_0_PARAM ORION_DEV_BUS_REG(0x45c)
++#define DEV_BANK_1_PARAM ORION_DEV_BUS_REG(0x460)
++#define DEV_BANK_2_PARAM ORION_DEV_BUS_REG(0x464)
++#define DEV_BANK_BOOT_PARAM ORION_DEV_BUS_REG(0x46c)
++#define DEV_BUS_CTRL ORION_DEV_BUS_REG(0x4c0)
++#define DEV_BUS_INT_CAUSE ORION_DEV_BUS_REG(0x4d0)
++#define DEV_BUS_INT_MASK ORION_DEV_BUS_REG(0x4d4)
++#define I2C_BASE ORION_DEV_BUS_REG(0x1000)
++#define UART0_BASE ORION_DEV_BUS_REG(0x2000)
++#define UART1_BASE ORION_DEV_BUS_REG(0x2100)
++#define GPIO_MAX 32
+
-+ if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
-+ mapping->writeback_index = index;
-+ return ret;
-+}
++/***************************************************************************
++ * Orion CPU Bridge Registers
++ **************************************************************************/
++#define CPU_CONF ORION_BRIDGE_REG(0x100)
++#define CPU_CTRL ORION_BRIDGE_REG(0x104)
++#define CPU_RESET_MASK ORION_BRIDGE_REG(0x108)
++#define CPU_SOFT_RESET ORION_BRIDGE_REG(0x10c)
++#define POWER_MNG_CTRL_REG ORION_BRIDGE_REG(0x11C)
++#define BRIDGE_CAUSE ORION_BRIDGE_REG(0x110)
++#define BRIDGE_MASK ORION_BRIDGE_REG(0x114)
++#define MAIN_IRQ_CAUSE ORION_BRIDGE_REG(0x200)
++#define MAIN_IRQ_MASK ORION_BRIDGE_REG(0x204)
++#define TIMER_CTRL ORION_BRIDGE_REG(0x300)
++#define TIMER_VAL(x) ORION_BRIDGE_REG(0x314 + ((x) * 8))
++#define TIMER_VAL_RELOAD(x) ORION_BRIDGE_REG(0x310 + ((x) * 8))
+
++#ifndef __ASSEMBLY__
+
-+/**
-+ * gfs2_jdata_writepages - Write a bunch of dirty pages back to disk
-+ * @mapping: The mapping to write
-+ * @wbc: The writeback control
-+ *
++/*******************************************************************************
++ * Helpers to access Orion registers
++ ******************************************************************************/
++#include <asm/types.h>
++#include <asm/io.h>
++
++#define orion_read(r) __raw_readl(r)
++#define orion_write(r, val) __raw_writel(val, r)
++
++/*
++ * These are not preempt-safe. Locking, if needed, must be handled by the caller.
+ */
-
-- return generic_writepages(mapping, wbc);
-+static int gfs2_jdata_writepages(struct address_space *mapping,
-+ struct writeback_control *wbc)
-+{
-+ struct gfs2_inode *ip = GFS2_I(mapping->host);
-+ struct gfs2_sbd *sdp = GFS2_SB(mapping->host);
-+ int ret;
++#define orion_setbits(r, mask) orion_write((r), orion_read(r) | (mask))
++#define orion_clrbits(r, mask) orion_write((r), orion_read(r) & ~(mask))
+
-+ ret = gfs2_write_cache_jdata(mapping, wbc);
-+ if (ret == 0 && wbc->sync_mode == WB_SYNC_ALL) {
-+ gfs2_log_flush(sdp, ip->i_gl);
-+ ret = gfs2_write_cache_jdata(mapping, wbc);
-+ }
-+ return ret;
- }
-
- /**
-@@ -231,62 +468,107 @@ static int stuffed_readpage(struct gfs2_inode *ip, struct page *page)
-
-
- /**
-- * gfs2_readpage - readpage with locking
-- * @file: The file to read a page for. N.B. This may be NULL if we are
-- * reading an internal file.
-+ * __gfs2_readpage - readpage
-+ * @file: The file to read a page for
- * @page: The page to read
- *
-- * Returns: errno
-+ * This is the core of gfs2's readpage. Its used by the internal file
-+ * reading code as in that case we already hold the glock. Also its
-+ * called by gfs2_readpage() once the required lock has been granted.
++#endif /* __ASSEMBLY__ */
++
++#endif /* __ASM_ARCH_ORION_H__ */
+diff --git a/include/asm-arm/arch-orion/platform.h b/include/asm-arm/arch-orion/platform.h
+new file mode 100644
+index 0000000..143c38e
+--- /dev/null
++++ b/include/asm-arm/arch-orion/platform.h
+@@ -0,0 +1,25 @@
++/*
++ * include/asm-arm/arch-orion/platform.h
+ *
- */
-
--static int gfs2_readpage(struct file *file, struct page *page)
-+static int __gfs2_readpage(void *file, struct page *page)
- {
- struct gfs2_inode *ip = GFS2_I(page->mapping->host);
- struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host);
-- struct gfs2_file *gf = NULL;
-- struct gfs2_holder gh;
- int error;
-- int do_unlock = 0;
--
-- if (likely(file != &gfs2_internal_file_sentinel)) {
-- if (file) {
-- gf = file->private_data;
-- if (test_bit(GFF_EXLOCK, &gf->f_flags))
-- /* gfs2_sharewrite_fault has grabbed the ip->i_gl already */
-- goto skip_lock;
-- }
-- gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME|LM_FLAG_TRY_1CB, &gh);
-- do_unlock = 1;
-- error = gfs2_glock_nq_atime(&gh);
-- if (unlikely(error))
-- goto out_unlock;
-- }
-
--skip_lock:
- if (gfs2_is_stuffed(ip)) {
- error = stuffed_readpage(ip, page);
- unlock_page(page);
-- } else
-- error = mpage_readpage(page, gfs2_get_block);
-+ } else {
-+ error = mpage_readpage(page, gfs2_block_map);
-+ }
-
- if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
-- error = -EIO;
-+ return -EIO;
++ * Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
+
-+ return error;
-+}
++#ifndef __ASM_ARCH_PLATFORM_H__
++#define __ASM_ARCH_PLATFORM_H__
+
-+/**
-+ * gfs2_readpage - read a page of a file
-+ * @file: The file to read
-+ * @page: The page of the file
++/*
++ * Device bus NAND private data
++ */
++struct orion_nand_data {
++ struct mtd_partition *parts;
++ u32 nr_parts;
++ u8 ale; /* address line number connected to ALE */
++ u8 cle; /* address line number connected to CLE */
++ u8 width; /* buswidth */
++};
++
++#endif
+diff --git a/include/asm-arm/arch-orion/system.h b/include/asm-arm/arch-orion/system.h
+new file mode 100644
+index 0000000..17704c6
+--- /dev/null
++++ b/include/asm-arm/arch-orion/system.h
+@@ -0,0 +1,31 @@
++/*
++ * include/asm-arm/arch-orion/system.h
+ *
-+ * This deals with the locking required. We use a trylock in order to
-+ * avoid the page lock / glock ordering problems returning AOP_TRUNCATED_PAGE
-+ * in the event that we are unable to get the lock.
++ * Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+static int gfs2_readpage(struct file *file, struct page *page)
++#ifndef __ASM_ARCH_SYSTEM_H
++#define __ASM_ARCH_SYSTEM_H
++
++#include <asm/arch/hardware.h>
++#include <asm/arch/orion.h>
++
++static inline void arch_idle(void)
+{
-+ struct gfs2_inode *ip = GFS2_I(page->mapping->host);
-+ struct gfs2_holder gh;
-+ int error;
-
-- if (do_unlock) {
-- gfs2_glock_dq_m(1, &gh);
-- gfs2_holder_uninit(&gh);
-+ gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME|LM_FLAG_TRY_1CB, &gh);
-+ error = gfs2_glock_nq_atime(&gh);
-+ if (unlikely(error)) {
-+ unlock_page(page);
-+ goto out;
- }
-+ error = __gfs2_readpage(file, page);
-+ gfs2_glock_dq(&gh);
- out:
-- return error;
--out_unlock:
-- unlock_page(page);
-+ gfs2_holder_uninit(&gh);
- if (error == GLR_TRYFAILED) {
-- error = AOP_TRUNCATED_PAGE;
- yield();
-+ return AOP_TRUNCATED_PAGE;
- }
-- if (do_unlock)
-- gfs2_holder_uninit(&gh);
-- goto out;
-+ return error;
++ cpu_do_idle();
+}
+
-+/**
-+ * gfs2_internal_read - read an internal file
-+ * @ip: The gfs2 inode
-+ * @ra_state: The readahead state (or NULL for no readahead)
-+ * @buf: The buffer to fill
-+ * @pos: The file position
-+ * @size: The amount to read
++static inline void arch_reset(char mode)
++{
++ /*
++ * Enable and issue soft reset
++ */
++ orion_setbits(CPU_RESET_MASK, (1 << 2));
++ orion_setbits(CPU_SOFT_RESET, 1);
++}
++
++#endif
+diff --git a/include/asm-arm/arch-orion/timex.h b/include/asm-arm/arch-orion/timex.h
+new file mode 100644
+index 0000000..26c2c91
+--- /dev/null
++++ b/include/asm-arm/arch-orion/timex.h
+@@ -0,0 +1,12 @@
++/*
++ * include/asm-arm/arch-orion/timex.h
+ *
++ * Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
+ */
+
-+int gfs2_internal_read(struct gfs2_inode *ip, struct file_ra_state *ra_state,
-+ char *buf, loff_t *pos, unsigned size)
-+{
-+ struct address_space *mapping = ip->i_inode.i_mapping;
-+ unsigned long index = *pos / PAGE_CACHE_SIZE;
-+ unsigned offset = *pos & (PAGE_CACHE_SIZE - 1);
-+ unsigned copied = 0;
-+ unsigned amt;
-+ struct page *page;
-+ void *p;
++#define ORION_TCLK 166666667
++#define CLOCK_TICK_RATE ORION_TCLK
+diff --git a/include/asm-arm/arch-orion/uncompress.h b/include/asm-arm/arch-orion/uncompress.h
+new file mode 100644
+index 0000000..a1a222f
+--- /dev/null
++++ b/include/asm-arm/arch-orion/uncompress.h
+@@ -0,0 +1,44 @@
++/*
++ * include/asm-arm/arch-orion/uncompress.h
++ *
++ * Tzachi Perelstein <tzachi at marvell.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
+
-+ do {
-+ amt = size - copied;
-+ if (offset + size > PAGE_CACHE_SIZE)
-+ amt = PAGE_CACHE_SIZE - offset;
-+ page = read_cache_page(mapping, index, __gfs2_readpage, NULL);
-+ if (IS_ERR(page))
-+ return PTR_ERR(page);
-+ p = kmap_atomic(page, KM_USER0);
-+ memcpy(buf + copied, p + offset, amt);
-+ kunmap_atomic(p, KM_USER0);
-+ mark_page_accessed(page);
-+ page_cache_release(page);
-+ copied += amt;
-+ index++;
-+ offset = 0;
-+ } while(copied < size);
-+ (*pos) += size;
-+ return size;
- }
-
- /**
-@@ -300,10 +582,9 @@ out_unlock:
- * Any I/O we ignore at this time will be done via readpage later.
- * 2. We don't handle stuffed files here we let readpage do the honours.
- * 3. mpage_readpages() does most of the heavy lifting in the common case.
-- * 4. gfs2_get_block() is relied upon to set BH_Boundary in the right places.
-- * 5. We use LM_FLAG_TRY_1CB here, effectively we then have lock-ahead as
-- * well as read-ahead.
-+ * 4. gfs2_block_map() is relied upon to set BH_Boundary in the right places.
- */
++#include <asm/arch/orion.h>
+
- static int gfs2_readpages(struct file *file, struct address_space *mapping,
- struct list_head *pages, unsigned nr_pages)
- {
-@@ -311,42 +592,20 @@ static int gfs2_readpages(struct file *file, struct address_space *mapping,
- struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_sbd *sdp = GFS2_SB(inode);
- struct gfs2_holder gh;
-- int ret = 0;
-- int do_unlock = 0;
-+ int ret;
-
-- if (likely(file != &gfs2_internal_file_sentinel)) {
-- if (file) {
-- struct gfs2_file *gf = file->private_data;
-- if (test_bit(GFF_EXLOCK, &gf->f_flags))
-- goto skip_lock;
-- }
-- gfs2_holder_init(ip->i_gl, LM_ST_SHARED,
-- LM_FLAG_TRY_1CB|GL_ATIME, &gh);
-- do_unlock = 1;
-- ret = gfs2_glock_nq_atime(&gh);
-- if (ret == GLR_TRYFAILED)
-- goto out_noerror;
-- if (unlikely(ret))
-- goto out_unlock;
-- }
--skip_lock:
-+ gfs2_holder_init(ip->i_gl, LM_ST_SHARED, GL_ATIME, &gh);
-+ ret = gfs2_glock_nq_atime(&gh);
-+ if (unlikely(ret))
-+ goto out_uninit;
- if (!gfs2_is_stuffed(ip))
-- ret = mpage_readpages(mapping, pages, nr_pages, gfs2_get_block);
--
-- if (do_unlock) {
-- gfs2_glock_dq_m(1, &gh);
-- gfs2_holder_uninit(&gh);
-- }
--out:
-+ ret = mpage_readpages(mapping, pages, nr_pages, gfs2_block_map);
-+ gfs2_glock_dq(&gh);
-+out_uninit:
-+ gfs2_holder_uninit(&gh);
- if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
- ret = -EIO;
- return ret;
--out_noerror:
-- ret = 0;
--out_unlock:
-- if (do_unlock)
-- gfs2_holder_uninit(&gh);
-- goto out;
- }
-
- /**
-@@ -382,20 +641,11 @@ static int gfs2_write_begin(struct file *file, struct address_space *mapping,
- if (unlikely(error))
- goto out_uninit;
-
-- error = -ENOMEM;
-- page = __grab_cache_page(mapping, index);
-- *pagep = page;
-- if (!page)
-- goto out_unlock;
--
- gfs2_write_calc_reserv(ip, len, &data_blocks, &ind_blocks);
--
- error = gfs2_write_alloc_required(ip, pos, len, &alloc_required);
- if (error)
-- goto out_putpage;
--
-+ goto out_unlock;
-
-- ip->i_alloc.al_requested = 0;
- if (alloc_required) {
- al = gfs2_alloc_get(ip);
-
-@@ -424,40 +674,47 @@ static int gfs2_write_begin(struct file *file, struct address_space *mapping,
- if (error)
- goto out_trans_fail;
-
-+ error = -ENOMEM;
-+ page = __grab_cache_page(mapping, index);
-+ *pagep = page;
-+ if (unlikely(!page))
-+ goto out_endtrans;
++#define MV_UART_LSR ((volatile unsigned char *)(UART0_BASE + 0x14))
++#define MV_UART_THR ((volatile unsigned char *)(UART0_BASE + 0x0))
+
- if (gfs2_is_stuffed(ip)) {
-+ error = 0;
- if (pos + len > sdp->sd_sb.sb_bsize - sizeof(struct gfs2_dinode)) {
- error = gfs2_unstuff_dinode(ip, page);
- if (error == 0)
- goto prepare_write;
-- } else if (!PageUptodate(page))
-+ } else if (!PageUptodate(page)) {
- error = stuffed_readpage(ip, page);
-+ }
- goto out;
- }
-
- prepare_write:
-- error = block_prepare_write(page, from, to, gfs2_get_block);
--
-+ error = block_prepare_write(page, from, to, gfs2_block_map);
- out:
-- if (error) {
-- gfs2_trans_end(sdp);
-+ if (error == 0)
-+ return 0;
++#define LSR_THRE 0x20
+
-+ page_cache_release(page);
-+ if (pos + len > ip->i_inode.i_size)
-+ vmtruncate(&ip->i_inode, ip->i_inode.i_size);
-+out_endtrans:
-+ gfs2_trans_end(sdp);
- out_trans_fail:
-- if (alloc_required) {
-- gfs2_inplace_release(ip);
-+ if (alloc_required) {
-+ gfs2_inplace_release(ip);
- out_qunlock:
-- gfs2_quota_unlock(ip);
-+ gfs2_quota_unlock(ip);
- out_alloc_put:
-- gfs2_alloc_put(ip);
-- }
--out_putpage:
-- page_cache_release(page);
-- if (pos + len > ip->i_inode.i_size)
-- vmtruncate(&ip->i_inode, ip->i_inode.i_size);
-+ gfs2_alloc_put(ip);
++static void putc(const char c)
++{
++ int j = 0x1000;
++ while (--j && !(*MV_UART_LSR & LSR_THRE))
++ barrier();
++ *MV_UART_THR = c;
++}
++
++static void flush(void)
++{
++}
++
++static void orion_early_putstr(const char *ptr)
++{
++ char c;
++ while ((c = *ptr++) != '\0') {
++ if (c == '\n')
++ putc('\r');
++ putc(c);
+ }
- out_unlock:
-- gfs2_glock_dq_m(1, &ip->i_gh);
-+ gfs2_glock_dq(&ip->i_gh);
- out_uninit:
-- gfs2_holder_uninit(&ip->i_gh);
-- }
--
-+ gfs2_holder_uninit(&ip->i_gh);
- return error;
- }
-
-@@ -565,7 +822,7 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
- struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_sbd *sdp = GFS2_SB(inode);
- struct buffer_head *dibh;
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_dinode *di;
- unsigned int from = pos & (PAGE_CACHE_SIZE - 1);
- unsigned int to = from + len;
-@@ -585,19 +842,16 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
- if (gfs2_is_stuffed(ip))
- return gfs2_stuffed_write_end(inode, dibh, pos, len, copied, page);
-
-- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
-+ if (!gfs2_is_writeback(ip))
- gfs2_page_add_databufs(ip, page, from, to);
-
- ret = generic_write_end(file, mapping, pos, len, copied, page, fsdata);
-
-- if (likely(ret >= 0)) {
-- copied = ret;
-- if ((pos + copied) > inode->i_size) {
-- di = (struct gfs2_dinode *)dibh->b_data;
-- ip->i_di.di_size = inode->i_size;
-- di->di_size = cpu_to_be64(inode->i_size);
-- mark_inode_dirty(inode);
-- }
-+ if (likely(ret >= 0) && (inode->i_size > ip->i_di.di_size)) {
-+ di = (struct gfs2_dinode *)dibh->b_data;
-+ ip->i_di.di_size = inode->i_size;
-+ di->di_size = cpu_to_be64(inode->i_size);
-+ mark_inode_dirty(inode);
- }
-
- if (inode == sdp->sd_rindex)
-@@ -606,7 +860,7 @@ static int gfs2_write_end(struct file *file, struct address_space *mapping,
- brelse(dibh);
- gfs2_trans_end(sdp);
- failed:
-- if (al->al_requested) {
-+ if (al) {
- gfs2_inplace_release(ip);
- gfs2_quota_unlock(ip);
- gfs2_alloc_put(ip);
-@@ -625,11 +879,7 @@ failed:
-
- static int gfs2_set_page_dirty(struct page *page)
- {
-- struct gfs2_inode *ip = GFS2_I(page->mapping->host);
-- struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host);
--
-- if (sdp->sd_args.ar_data == GFS2_DATA_ORDERED || gfs2_is_jdata(ip))
-- SetPageChecked(page);
-+ SetPageChecked(page);
- return __set_page_dirty_buffers(page);
- }
-
-@@ -653,7 +903,7 @@ static sector_t gfs2_bmap(struct address_space *mapping, sector_t lblock)
- return 0;
-
- if (!gfs2_is_stuffed(ip))
-- dblock = generic_block_bmap(mapping, lblock, gfs2_get_block);
-+ dblock = generic_block_bmap(mapping, lblock, gfs2_block_map);
-
- gfs2_glock_dq_uninit(&i_gh);
++}
++
++/*
++ * nothing to do
++ */
++#define arch_decomp_setup()
++#define arch_decomp_wdog()
+diff --git a/include/asm-arm/arch-orion/vmalloc.h b/include/asm-arm/arch-orion/vmalloc.h
+new file mode 100644
+index 0000000..23e2a10
+--- /dev/null
++++ b/include/asm-arm/arch-orion/vmalloc.h
+@@ -0,0 +1,5 @@
++/*
++ * include/asm-arm/arch-orion/vmalloc.h
++ */
++
++#define VMALLOC_END 0xf0000000
+diff --git a/include/asm-arm/arch-pxa/colibri.h b/include/asm-arm/arch-pxa/colibri.h
+new file mode 100644
+index 0000000..2ae373f
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/colibri.h
+@@ -0,0 +1,19 @@
++#ifndef _COLIBRI_H_
++#define _COLIBRI_H_
++
++/* physical memory regions */
++#define COLIBRI_FLASH_PHYS (PXA_CS0_PHYS) /* Flash region */
++#define COLIBRI_ETH_PHYS (PXA_CS2_PHYS) /* Ethernet DM9000 region */
++#define COLIBRI_SDRAM_BASE 0xa0000000 /* SDRAM region */
++
++/* virtual memory regions */
++#define COLIBRI_DISK_VIRT 0xF0000000 /* Disk On Chip region */
++
++/* size of flash */
++#define COLIBRI_FLASH_SIZE 0x02000000 /* Flash size 32 MB */
++
++/* Ethernet Controller Davicom DM9000 */
++#define GPIO_DM9000 114
++#define COLIBRI_ETH_IRQ IRQ_GPIO(GPIO_DM9000)
++
++#endif /* _COLIBRI_H_ */
+diff --git a/include/asm-arm/arch-pxa/corgi.h b/include/asm-arm/arch-pxa/corgi.h
+index e554caa..bf85650 100644
+--- a/include/asm-arm/arch-pxa/corgi.h
++++ b/include/asm-arm/arch-pxa/corgi.h
+@@ -104,7 +104,6 @@
+ */
+ extern struct platform_device corgiscoop_device;
+ extern struct platform_device corgissp_device;
+-extern struct platform_device corgifb_device;
-@@ -719,13 +969,9 @@ static int gfs2_ok_for_dio(struct gfs2_inode *ip, int rw, loff_t offset)
- {
- /*
- * Should we return an error here? I can't see that O_DIRECT for
-- * a journaled file makes any sense. For now we'll silently fall
-- * back to buffered I/O, likewise we do the same for stuffed
-- * files since they are (a) small and (b) unaligned.
-+ * a stuffed file makes any sense. For now we'll silently fall
-+ * back to buffered I/O
- */
-- if (gfs2_is_jdata(ip))
-- return 0;
--
- if (gfs2_is_stuffed(ip))
- return 0;
+ #endif /* __ASM_ARCH_CORGI_H */
-@@ -836,9 +1082,23 @@ cannot_release:
- return 0;
- }
+diff --git a/include/asm-arm/arch-pxa/i2c.h b/include/asm-arm/arch-pxa/i2c.h
+index e404b23..80596b0 100644
+--- a/include/asm-arm/arch-pxa/i2c.h
++++ b/include/asm-arm/arch-pxa/i2c.h
+@@ -65,7 +65,13 @@ struct i2c_pxa_platform_data {
+ unsigned int slave_addr;
+ struct i2c_slave_client *slave;
+ unsigned int class;
++ int use_pio;
+ };
--const struct address_space_operations gfs2_file_aops = {
-- .writepage = gfs2_writepage,
-- .writepages = gfs2_writepages,
-+static const struct address_space_operations gfs2_writeback_aops = {
-+ .writepage = gfs2_writeback_writepage,
-+ .writepages = gfs2_writeback_writepages,
-+ .readpage = gfs2_readpage,
-+ .readpages = gfs2_readpages,
-+ .sync_page = block_sync_page,
-+ .write_begin = gfs2_write_begin,
-+ .write_end = gfs2_write_end,
-+ .bmap = gfs2_bmap,
-+ .invalidatepage = gfs2_invalidatepage,
-+ .releasepage = gfs2_releasepage,
-+ .direct_IO = gfs2_direct_IO,
-+ .migratepage = buffer_migrate_page,
-+};
+ extern void pxa_set_i2c_info(struct i2c_pxa_platform_data *info);
+
-+static const struct address_space_operations gfs2_ordered_aops = {
-+ .writepage = gfs2_ordered_writepage,
- .readpage = gfs2_readpage,
- .readpages = gfs2_readpages,
- .sync_page = block_sync_page,
-@@ -849,5 +1109,34 @@ const struct address_space_operations gfs2_file_aops = {
- .invalidatepage = gfs2_invalidatepage,
- .releasepage = gfs2_releasepage,
- .direct_IO = gfs2_direct_IO,
-+ .migratepage = buffer_migrate_page,
- };
++#ifdef CONFIG_PXA27x
++extern void pxa_set_i2c_power_info(struct i2c_pxa_platform_data *info);
++#endif
++
+ #endif
+diff --git a/include/asm-arm/arch-pxa/irqs.h b/include/asm-arm/arch-pxa/irqs.h
+index b76ee6d..c562b97 100644
+--- a/include/asm-arm/arch-pxa/irqs.h
++++ b/include/asm-arm/arch-pxa/irqs.h
+@@ -180,7 +180,8 @@
+ #define NR_IRQS (IRQ_LOCOMO_SPI_TEND + 1)
+ #elif defined(CONFIG_ARCH_LUBBOCK) || \
+ defined(CONFIG_MACH_LOGICPD_PXA270) || \
+- defined(CONFIG_MACH_MAINSTONE)
++ defined(CONFIG_MACH_MAINSTONE) || \
++ defined(CONFIG_MACH_PCM027)
+ #define NR_IRQS (IRQ_BOARD_END)
+ #else
+ #define NR_IRQS (IRQ_BOARD_START)
+@@ -227,6 +228,13 @@
+ #define IRQ_LOCOMO_LT_BASE (IRQ_BOARD_START + 2)
+ #define IRQ_LOCOMO_SPI_BASE (IRQ_BOARD_START + 3)
-+static const struct address_space_operations gfs2_jdata_aops = {
-+ .writepage = gfs2_jdata_writepage,
-+ .writepages = gfs2_jdata_writepages,
-+ .readpage = gfs2_readpage,
-+ .readpages = gfs2_readpages,
-+ .sync_page = block_sync_page,
-+ .write_begin = gfs2_write_begin,
-+ .write_end = gfs2_write_end,
-+ .set_page_dirty = gfs2_set_page_dirty,
-+ .bmap = gfs2_bmap,
-+ .invalidatepage = gfs2_invalidatepage,
-+ .releasepage = gfs2_releasepage,
-+};
++/* phyCORE-PXA270 (PCM027) Interrupts */
++#define PCM027_IRQ(x) (IRQ_BOARD_START + (x))
++#define PCM027_BTDET_IRQ PCM027_IRQ(0)
++#define PCM027_FF_RI_IRQ PCM027_IRQ(1)
++#define PCM027_MMCDET_IRQ PCM027_IRQ(2)
++#define PCM027_PM_5V_IRQ PCM027_IRQ(3)
+
-+void gfs2_set_aops(struct inode *inode)
-+{
-+ struct gfs2_inode *ip = GFS2_I(inode);
+ /* ITE8152 irqs */
+ /* add IT8152 IRQs beyond BOARD_END */
+ #ifdef CONFIG_PCI_HOST_ITE8152
+diff --git a/include/asm-arm/arch-pxa/littleton.h b/include/asm-arm/arch-pxa/littleton.h
+new file mode 100644
+index 0000000..79d209b
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/littleton.h
+@@ -0,0 +1,6 @@
++#ifndef __ASM_ARCH_LITTLETON_H
++#define __ASM_ARCH_LITTLETON_H
+
-+ if (gfs2_is_writeback(ip))
-+ inode->i_mapping->a_ops = &gfs2_writeback_aops;
-+ else if (gfs2_is_ordered(ip))
-+ inode->i_mapping->a_ops = &gfs2_ordered_aops;
-+ else if (gfs2_is_jdata(ip))
-+ inode->i_mapping->a_ops = &gfs2_jdata_aops;
-+ else
-+ BUG();
-+}
++#define LITTLETON_ETH_PHYS 0x30000000
+
-diff --git a/fs/gfs2/ops_address.h b/fs/gfs2/ops_address.h
-index fa1b5b3..5da2128 100644
---- a/fs/gfs2/ops_address.h
-+++ b/fs/gfs2/ops_address.h
-@@ -14,9 +14,10 @@
- #include <linux/buffer_head.h>
- #include <linux/mm.h>
-
--extern const struct address_space_operations gfs2_file_aops;
--extern int gfs2_get_block(struct inode *inode, sector_t lblock,
-- struct buffer_head *bh_result, int create);
- extern int gfs2_releasepage(struct page *page, gfp_t gfp_mask);
-+extern int gfs2_internal_read(struct gfs2_inode *ip,
-+ struct file_ra_state *ra_state,
-+ char *buf, loff_t *pos, unsigned size);
-+extern void gfs2_set_aops(struct inode *inode);
++#endif /* __ASM_ARCH_LITTLETON_H */
+diff --git a/include/asm-arm/arch-pxa/magician.h b/include/asm-arm/arch-pxa/magician.h
+new file mode 100644
+index 0000000..337f51f
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/magician.h
+@@ -0,0 +1,111 @@
++/*
++ * GPIO and IRQ definitions for HTC Magician PDA phones
++ *
++ * Copyright (c) 2007 Philipp Zabel
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ */
++
++#ifndef _MAGICIAN_H_
++#define _MAGICIAN_H_
++
++#include <asm/arch/pxa-regs.h>
++
++/*
++ * PXA GPIOs
++ */
++
++#define GPIO0_MAGICIAN_KEY_POWER 0
++#define GPIO9_MAGICIAN_UNKNOWN 9
++#define GPIO10_MAGICIAN_GSM_IRQ 10
++#define GPIO11_MAGICIAN_GSM_OUT1 11
++#define GPIO13_MAGICIAN_CPLD_IRQ 13
++#define GPIO18_MAGICIAN_UNKNOWN 18
++#define GPIO22_MAGICIAN_VIBRA_EN 22
++#define GPIO26_MAGICIAN_GSM_POWER 26
++#define GPIO27_MAGICIAN_USBC_PUEN 27
++#define GPIO30_MAGICIAN_nCHARGE_EN 30
++#define GPIO37_MAGICIAN_KEY_HANGUP 37
++#define GPIO38_MAGICIAN_KEY_CONTACTS 38
++#define GPIO40_MAGICIAN_GSM_OUT2 40
++#define GPIO48_MAGICIAN_UNKNOWN 48
++#define GPIO56_MAGICIAN_UNKNOWN 56
++#define GPIO57_MAGICIAN_CAM_RESET 57
++#define GPIO83_MAGICIAN_nIR_EN 83
++#define GPIO86_MAGICIAN_GSM_RESET 86
++#define GPIO87_MAGICIAN_GSM_SELECT 87
++#define GPIO90_MAGICIAN_KEY_CALENDAR 90
++#define GPIO91_MAGICIAN_KEY_CAMERA 91
++#define GPIO93_MAGICIAN_KEY_UP 93
++#define GPIO94_MAGICIAN_KEY_DOWN 94
++#define GPIO95_MAGICIAN_KEY_LEFT 95
++#define GPIO96_MAGICIAN_KEY_RIGHT 96
++#define GPIO97_MAGICIAN_KEY_ENTER 97
++#define GPIO98_MAGICIAN_KEY_RECORD 98
++#define GPIO99_MAGICIAN_HEADPHONE_IN 99
++#define GPIO100_MAGICIAN_KEY_VOL_UP 100
++#define GPIO101_MAGICIAN_KEY_VOL_DOWN 101
++#define GPIO102_MAGICIAN_KEY_PHONE 102
++#define GPIO103_MAGICIAN_LED_KP 103
++#define GPIO104_MAGICIAN_LCD_POWER_1 104
++#define GPIO105_MAGICIAN_LCD_POWER_2 105
++#define GPIO106_MAGICIAN_LCD_POWER_3 106
++#define GPIO107_MAGICIAN_DS1WM_IRQ 107
++#define GPIO108_MAGICIAN_GSM_READY 108
++#define GPIO114_MAGICIAN_UNKNOWN 114
++#define GPIO115_MAGICIAN_nPEN_IRQ 115
++#define GPIO116_MAGICIAN_nCAM_EN 116
++#define GPIO119_MAGICIAN_UNKNOWN 119
++#define GPIO120_MAGICIAN_UNKNOWN 120
++
++/*
++ * PXA GPIO alternate function mode & direction
++ */
++
++#define GPIO0_MAGICIAN_KEY_POWER_MD (0 | GPIO_IN)
++#define GPIO9_MAGICIAN_UNKNOWN_MD (9 | GPIO_IN)
++#define GPIO10_MAGICIAN_GSM_IRQ_MD (10 | GPIO_IN)
++#define GPIO11_MAGICIAN_GSM_OUT1_MD (11 | GPIO_OUT)
++#define GPIO13_MAGICIAN_CPLD_IRQ_MD (13 | GPIO_IN)
++#define GPIO18_MAGICIAN_UNKNOWN_MD (18 | GPIO_OUT)
++#define GPIO22_MAGICIAN_VIBRA_EN_MD (22 | GPIO_OUT)
++#define GPIO26_MAGICIAN_GSM_POWER_MD (26 | GPIO_OUT)
++#define GPIO27_MAGICIAN_USBC_PUEN_MD (27 | GPIO_OUT)
++#define GPIO30_MAGICIAN_nCHARGE_EN_MD (30 | GPIO_OUT)
++#define GPIO37_MAGICIAN_KEY_HANGUP_MD (37 | GPIO_OUT)
++#define GPIO38_MAGICIAN_KEY_CONTACTS_MD (38 | GPIO_OUT)
++#define GPIO40_MAGICIAN_GSM_OUT2_MD (40 | GPIO_OUT)
++#define GPIO48_MAGICIAN_UNKNOWN_MD (48 | GPIO_OUT)
++#define GPIO56_MAGICIAN_UNKNOWN_MD (56 | GPIO_OUT)
++#define GPIO57_MAGICIAN_CAM_RESET_MD (57 | GPIO_OUT)
++#define GPIO83_MAGICIAN_nIR_EN_MD (83 | GPIO_OUT)
++#define GPIO86_MAGICIAN_GSM_RESET_MD (86 | GPIO_OUT)
++#define GPIO87_MAGICIAN_GSM_SELECT_MD (87 | GPIO_OUT)
++#define GPIO90_MAGICIAN_KEY_CALENDAR_MD (90 | GPIO_OUT)
++#define GPIO91_MAGICIAN_KEY_CAMERA_MD (91 | GPIO_OUT)
++#define GPIO93_MAGICIAN_KEY_UP_MD (93 | GPIO_IN)
++#define GPIO94_MAGICIAN_KEY_DOWN_MD (94 | GPIO_IN)
++#define GPIO95_MAGICIAN_KEY_LEFT_MD (95 | GPIO_IN)
++#define GPIO96_MAGICIAN_KEY_RIGHT_MD (96 | GPIO_IN)
++#define GPIO97_MAGICIAN_KEY_ENTER_MD (97 | GPIO_IN)
++#define GPIO98_MAGICIAN_KEY_RECORD_MD (98 | GPIO_IN)
++#define GPIO99_MAGICIAN_HEADPHONE_IN_MD (99 | GPIO_IN)
++#define GPIO100_MAGICIAN_KEY_VOL_UP_MD (100 | GPIO_IN)
++#define GPIO101_MAGICIAN_KEY_VOL_DOWN_MD (101 | GPIO_IN)
++#define GPIO102_MAGICIAN_KEY_PHONE_MD (102 | GPIO_IN)
++#define GPIO103_MAGICIAN_LED_KP_MD (103 | GPIO_OUT)
++#define GPIO104_MAGICIAN_LCD_POWER_1_MD (104 | GPIO_OUT)
++#define GPIO105_MAGICIAN_LCD_POWER_2_MD (105 | GPIO_OUT)
++#define GPIO106_MAGICIAN_LCD_POWER_3_MD (106 | GPIO_OUT)
++#define GPIO107_MAGICIAN_DS1WM_IRQ_MD (107 | GPIO_IN)
++#define GPIO108_MAGICIAN_GSM_READY_MD (108 | GPIO_IN)
++#define GPIO114_MAGICIAN_UNKNOWN_MD (114 | GPIO_OUT)
++#define GPIO115_MAGICIAN_nPEN_IRQ_MD (115 | GPIO_IN)
++#define GPIO116_MAGICIAN_nCAM_EN_MD (116 | GPIO_OUT)
++#define GPIO119_MAGICIAN_UNKNOWN_MD (119 | GPIO_OUT)
++#define GPIO120_MAGICIAN_UNKNOWN_MD (120 | GPIO_OUT)
++
++#endif /* _MAGICIAN_H_ */
+diff --git a/include/asm-arm/arch-pxa/mfp-pxa300.h b/include/asm-arm/arch-pxa/mfp-pxa300.h
+index a209966..bb41031 100644
+--- a/include/asm-arm/arch-pxa/mfp-pxa300.h
++++ b/include/asm-arm/arch-pxa/mfp-pxa300.h
+@@ -16,6 +16,7 @@
+ #define __ASM_ARCH_MFP_PXA300_H
- #endif /* __OPS_ADDRESS_DOT_H__ */
-diff --git a/fs/gfs2/ops_file.c b/fs/gfs2/ops_file.c
-index bb11fd6..f4842f2 100644
---- a/fs/gfs2/ops_file.c
-+++ b/fs/gfs2/ops_file.c
-@@ -33,57 +33,12 @@
- #include "lm.h"
- #include "log.h"
- #include "meta_io.h"
--#include "ops_file.h"
--#include "ops_vm.h"
- #include "quota.h"
- #include "rgrp.h"
- #include "trans.h"
- #include "util.h"
- #include "eaops.h"
--
--/*
-- * Most fields left uninitialised to catch anybody who tries to
-- * use them. f_flags set to prevent file_accessed() from touching
-- * any other part of this. Its use is purely as a flag so that we
-- * know (in readpage()) whether or not do to locking.
-- */
--struct file gfs2_internal_file_sentinel = {
-- .f_flags = O_NOATIME|O_RDONLY,
--};
--
--static int gfs2_read_actor(read_descriptor_t *desc, struct page *page,
-- unsigned long offset, unsigned long size)
--{
-- char *kaddr;
-- unsigned long count = desc->count;
--
-- if (size > count)
-- size = count;
--
-- kaddr = kmap(page);
-- memcpy(desc->arg.data, kaddr + offset, size);
-- kunmap(page);
--
-- desc->count = count - size;
-- desc->written += size;
-- desc->arg.buf += size;
-- return size;
--}
--
--int gfs2_internal_read(struct gfs2_inode *ip, struct file_ra_state *ra_state,
-- char *buf, loff_t *pos, unsigned size)
--{
-- struct inode *inode = &ip->i_inode;
-- read_descriptor_t desc;
-- desc.written = 0;
-- desc.arg.data = buf;
-- desc.count = size;
-- desc.error = 0;
-- do_generic_mapping_read(inode->i_mapping, ra_state,
-- &gfs2_internal_file_sentinel, pos, &desc,
-- gfs2_read_actor);
-- return desc.written ? desc.written : desc.error;
--}
-+#include "ops_address.h"
+ #include <asm/arch/mfp.h>
++#include <asm/arch/mfp-pxa3xx.h>
- /**
- * gfs2_llseek - seek to a location in a file
-@@ -214,7 +169,7 @@ static int gfs2_get_flags(struct file *filp, u32 __user *ptr)
- if (put_user(fsflags, ptr))
- error = -EFAULT;
+ /* GPIO */
+ #define GPIO46_GPIO MFP_CFG(GPIO46, AF1)
+diff --git a/include/asm-arm/arch-pxa/mfp-pxa320.h b/include/asm-arm/arch-pxa/mfp-pxa320.h
+index 52deedc..576aa46 100644
+--- a/include/asm-arm/arch-pxa/mfp-pxa320.h
++++ b/include/asm-arm/arch-pxa/mfp-pxa320.h
+@@ -16,6 +16,7 @@
+ #define __ASM_ARCH_MFP_PXA320_H
-- gfs2_glock_dq_m(1, &gh);
-+ gfs2_glock_dq(&gh);
- gfs2_holder_uninit(&gh);
- return error;
- }
-@@ -291,7 +246,16 @@ static int do_gfs2_set_flags(struct file *filp, u32 reqflags, u32 mask)
- if (error)
- goto out;
- }
--
-+ if ((flags ^ new_flags) & GFS2_DIF_JDATA) {
-+ if (flags & GFS2_DIF_JDATA)
-+ gfs2_log_flush(sdp, ip->i_gl);
-+ error = filemap_fdatawrite(inode->i_mapping);
-+ if (error)
-+ goto out;
-+ error = filemap_fdatawait(inode->i_mapping);
-+ if (error)
-+ goto out;
-+ }
- error = gfs2_trans_begin(sdp, RES_DINODE, 0);
- if (error)
- goto out;
-@@ -303,6 +267,7 @@ static int do_gfs2_set_flags(struct file *filp, u32 reqflags, u32 mask)
- gfs2_dinode_out(ip, bh->b_data);
- brelse(bh);
- gfs2_set_inode_flags(inode);
-+ gfs2_set_aops(inode);
- out_trans_end:
- gfs2_trans_end(sdp);
- out:
-@@ -338,6 +303,128 @@ static long gfs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
- return -ENOTTY;
- }
+ #include <asm/arch/mfp.h>
++#include <asm/arch/mfp-pxa3xx.h>
-+/**
-+ * gfs2_allocate_page_backing - Use bmap to allocate blocks
-+ * @page: The (locked) page to allocate backing for
+ /* GPIO */
+ #define GPIO46_GPIO MFP_CFG(GPIO46, AF0)
+diff --git a/include/asm-arm/arch-pxa/mfp-pxa3xx.h b/include/asm-arm/arch-pxa/mfp-pxa3xx.h
+new file mode 100644
+index 0000000..1f6b35c
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/mfp-pxa3xx.h
+@@ -0,0 +1,252 @@
++#ifndef __ASM_ARCH_MFP_PXA3XX_H
++#define __ASM_ARCH_MFP_PXA3XX_H
++
++#define MFPR_BASE (0x40e10000)
++#define MFPR_SIZE (PAGE_SIZE)
++
++/* MFPR register bit definitions */
++#define MFPR_PULL_SEL (0x1 << 15)
++#define MFPR_PULLUP_EN (0x1 << 14)
++#define MFPR_PULLDOWN_EN (0x1 << 13)
++#define MFPR_SLEEP_SEL (0x1 << 9)
++#define MFPR_SLEEP_OE_N (0x1 << 7)
++#define MFPR_EDGE_CLEAR (0x1 << 6)
++#define MFPR_EDGE_FALL_EN (0x1 << 5)
++#define MFPR_EDGE_RISE_EN (0x1 << 4)
++
++#define MFPR_SLEEP_DATA(x) ((x) << 8)
++#define MFPR_DRIVE(x) (((x) & 0x7) << 10)
++#define MFPR_AF_SEL(x) (((x) & 0x7) << 0)
++
++#define MFPR_EDGE_NONE (0)
++#define MFPR_EDGE_RISE (MFPR_EDGE_RISE_EN)
++#define MFPR_EDGE_FALL (MFPR_EDGE_FALL_EN)
++#define MFPR_EDGE_BOTH (MFPR_EDGE_RISE | MFPR_EDGE_FALL)
++
++/*
++ * Table that determines the low power mode outputs, with actual settings
++ * used in parentheses for don't-care values. Except for the float output,
++ * the configured driven and pulled levels match, so if there is a need for
++ * non-LPM pulled output, the same configuration could probably be used.
+ *
-+ * We try to allocate all the blocks required for the page in
-+ * one go. This might fail for various reasons, so we keep
-+ * trying until all the blocks to back this page are allocated.
-+ * If some of the blocks are already allocated, thats ok too.
++ * Output value sleep_oe_n sleep_data pullup_en pulldown_en pull_sel
++ * (bit 7) (bit 8) (bit 14) (bit 13) (bit 15)
++ *
++ * Input 0 X(0) X(0) X(0) 0
++ * Drive 0 0 0 0 X(1) 0
++ * Drive 1 0 1 X(1) 0 0
++ * Pull hi (1) 1 X(1) 1 0 0
++ * Pull lo (0) 1 X(0) 0 1 0
++ * Z (float) 1 X(0) 0 0 0
++ */
++#define MFPR_LPM_INPUT (0)
++#define MFPR_LPM_DRIVE_LOW (MFPR_SLEEP_DATA(0) | MFPR_PULLDOWN_EN)
++#define MFPR_LPM_DRIVE_HIGH (MFPR_SLEEP_DATA(1) | MFPR_PULLUP_EN)
++#define MFPR_LPM_PULL_LOW (MFPR_LPM_DRIVE_LOW | MFPR_SLEEP_OE_N)
++#define MFPR_LPM_PULL_HIGH (MFPR_LPM_DRIVE_HIGH | MFPR_SLEEP_OE_N)
++#define MFPR_LPM_FLOAT (MFPR_SLEEP_OE_N)
++#define MFPR_LPM_MASK (0xe080)
++
++/*
++ * The pullup and pulldown state of the MFP pin at run mode is by default
++ * determined by the selected alternate function. In case some buggy
++ * devices need to override this default behavior, the definitions below
++ * indicate the settings of the corresponding MFPR bits.
++ *
++ * Definition pull_sel pullup_en pulldown_en
++ * MFPR_PULL_NONE 0 0 0
++ * MFPR_PULL_LOW 1 0 1
++ * MFPR_PULL_HIGH 1 1 0
++ * MFPR_PULL_BOTH 1 1 1
++ */
++#define MFPR_PULL_NONE (0)
++#define MFPR_PULL_LOW (MFPR_PULL_SEL | MFPR_PULLDOWN_EN)
++#define MFPR_PULL_BOTH (MFPR_PULL_LOW | MFPR_PULLUP_EN)
++#define MFPR_PULL_HIGH (MFPR_PULL_SEL | MFPR_PULLUP_EN)
++
++/* PXA3xx common MFP configurations - processor specific ones defined
++ * in mfp-pxa300.h and mfp-pxa320.h
+ */
++#define GPIO0_GPIO MFP_CFG(GPIO0, AF0)
++#define GPIO1_GPIO MFP_CFG(GPIO1, AF0)
++#define GPIO2_GPIO MFP_CFG(GPIO2, AF0)
++#define GPIO3_GPIO MFP_CFG(GPIO3, AF0)
++#define GPIO4_GPIO MFP_CFG(GPIO4, AF0)
++#define GPIO5_GPIO MFP_CFG(GPIO5, AF0)
++#define GPIO6_GPIO MFP_CFG(GPIO6, AF0)
++#define GPIO7_GPIO MFP_CFG(GPIO7, AF0)
++#define GPIO8_GPIO MFP_CFG(GPIO8, AF0)
++#define GPIO9_GPIO MFP_CFG(GPIO9, AF0)
++#define GPIO10_GPIO MFP_CFG(GPIO10, AF0)
++#define GPIO11_GPIO MFP_CFG(GPIO11, AF0)
++#define GPIO12_GPIO MFP_CFG(GPIO12, AF0)
++#define GPIO13_GPIO MFP_CFG(GPIO13, AF0)
++#define GPIO14_GPIO MFP_CFG(GPIO14, AF0)
++#define GPIO15_GPIO MFP_CFG(GPIO15, AF0)
++#define GPIO16_GPIO MFP_CFG(GPIO16, AF0)
++#define GPIO17_GPIO MFP_CFG(GPIO17, AF0)
++#define GPIO18_GPIO MFP_CFG(GPIO18, AF0)
++#define GPIO19_GPIO MFP_CFG(GPIO19, AF0)
++#define GPIO20_GPIO MFP_CFG(GPIO20, AF0)
++#define GPIO21_GPIO MFP_CFG(GPIO21, AF0)
++#define GPIO22_GPIO MFP_CFG(GPIO22, AF0)
++#define GPIO23_GPIO MFP_CFG(GPIO23, AF0)
++#define GPIO24_GPIO MFP_CFG(GPIO24, AF0)
++#define GPIO25_GPIO MFP_CFG(GPIO25, AF0)
++#define GPIO26_GPIO MFP_CFG(GPIO26, AF0)
++#define GPIO27_GPIO MFP_CFG(GPIO27, AF0)
++#define GPIO28_GPIO MFP_CFG(GPIO28, AF0)
++#define GPIO29_GPIO MFP_CFG(GPIO29, AF0)
++#define GPIO30_GPIO MFP_CFG(GPIO30, AF0)
++#define GPIO31_GPIO MFP_CFG(GPIO31, AF0)
++#define GPIO32_GPIO MFP_CFG(GPIO32, AF0)
++#define GPIO33_GPIO MFP_CFG(GPIO33, AF0)
++#define GPIO34_GPIO MFP_CFG(GPIO34, AF0)
++#define GPIO35_GPIO MFP_CFG(GPIO35, AF0)
++#define GPIO36_GPIO MFP_CFG(GPIO36, AF0)
++#define GPIO37_GPIO MFP_CFG(GPIO37, AF0)
++#define GPIO38_GPIO MFP_CFG(GPIO38, AF0)
++#define GPIO39_GPIO MFP_CFG(GPIO39, AF0)
++#define GPIO40_GPIO MFP_CFG(GPIO40, AF0)
++#define GPIO41_GPIO MFP_CFG(GPIO41, AF0)
++#define GPIO42_GPIO MFP_CFG(GPIO42, AF0)
++#define GPIO43_GPIO MFP_CFG(GPIO43, AF0)
++#define GPIO44_GPIO MFP_CFG(GPIO44, AF0)
++#define GPIO45_GPIO MFP_CFG(GPIO45, AF0)
+
-+static int gfs2_allocate_page_backing(struct page *page)
-+{
-+ struct inode *inode = page->mapping->host;
-+ struct buffer_head bh;
-+ unsigned long size = PAGE_CACHE_SIZE;
-+ u64 lblock = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
++#define GPIO47_GPIO MFP_CFG(GPIO47, AF0)
++#define GPIO48_GPIO MFP_CFG(GPIO48, AF0)
+
-+ do {
-+ bh.b_state = 0;
-+ bh.b_size = size;
-+ gfs2_block_map(inode, lblock, &bh, 1);
-+ if (!buffer_mapped(&bh))
-+ return -EIO;
-+ size -= bh.b_size;
-+ lblock += (bh.b_size >> inode->i_blkbits);
-+ } while(size > 0);
-+ return 0;
-+}
++#define GPIO53_GPIO MFP_CFG(GPIO53, AF0)
++#define GPIO54_GPIO MFP_CFG(GPIO54, AF0)
++#define GPIO55_GPIO MFP_CFG(GPIO55, AF0)
+
-+/**
-+ * gfs2_page_mkwrite - Make a shared, mmap()ed, page writable
-+ * @vma: The virtual memory area
-+ * @page: The page which is about to become writable
-+ *
-+ * When the page becomes writable, we need to ensure that we have
-+ * blocks allocated on disk to back that page.
-+ */
++#define GPIO57_GPIO MFP_CFG(GPIO57, AF0)
+
-+static int gfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
-+{
-+ struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
-+ struct gfs2_inode *ip = GFS2_I(inode);
-+ struct gfs2_sbd *sdp = GFS2_SB(inode);
-+ unsigned long last_index;
-+ u64 pos = page->index << (PAGE_CACHE_SIZE - inode->i_blkbits);
-+ unsigned int data_blocks, ind_blocks, rblocks;
-+ int alloc_required = 0;
-+ struct gfs2_holder gh;
-+ struct gfs2_alloc *al;
-+ int ret;
++#define GPIO63_GPIO MFP_CFG(GPIO63, AF0)
++#define GPIO64_GPIO MFP_CFG(GPIO64, AF0)
++#define GPIO65_GPIO MFP_CFG(GPIO65, AF0)
++#define GPIO66_GPIO MFP_CFG(GPIO66, AF0)
++#define GPIO67_GPIO MFP_CFG(GPIO67, AF0)
++#define GPIO68_GPIO MFP_CFG(GPIO68, AF0)
++#define GPIO69_GPIO MFP_CFG(GPIO69, AF0)
++#define GPIO70_GPIO MFP_CFG(GPIO70, AF0)
++#define GPIO71_GPIO MFP_CFG(GPIO71, AF0)
++#define GPIO72_GPIO MFP_CFG(GPIO72, AF0)
++#define GPIO73_GPIO MFP_CFG(GPIO73, AF0)
++#define GPIO74_GPIO MFP_CFG(GPIO74, AF0)
++#define GPIO75_GPIO MFP_CFG(GPIO75, AF0)
++#define GPIO76_GPIO MFP_CFG(GPIO76, AF0)
++#define GPIO77_GPIO MFP_CFG(GPIO77, AF0)
++#define GPIO78_GPIO MFP_CFG(GPIO78, AF0)
++#define GPIO79_GPIO MFP_CFG(GPIO79, AF0)
++#define GPIO80_GPIO MFP_CFG(GPIO80, AF0)
++#define GPIO81_GPIO MFP_CFG(GPIO81, AF0)
++#define GPIO82_GPIO MFP_CFG(GPIO82, AF0)
++#define GPIO83_GPIO MFP_CFG(GPIO83, AF0)
++#define GPIO84_GPIO MFP_CFG(GPIO84, AF0)
++#define GPIO85_GPIO MFP_CFG(GPIO85, AF0)
++#define GPIO86_GPIO MFP_CFG(GPIO86, AF0)
++#define GPIO87_GPIO MFP_CFG(GPIO87, AF0)
++#define GPIO88_GPIO MFP_CFG(GPIO88, AF0)
++#define GPIO89_GPIO MFP_CFG(GPIO89, AF0)
++#define GPIO90_GPIO MFP_CFG(GPIO90, AF0)
++#define GPIO91_GPIO MFP_CFG(GPIO91, AF0)
++#define GPIO92_GPIO MFP_CFG(GPIO92, AF0)
++#define GPIO93_GPIO MFP_CFG(GPIO93, AF0)
++#define GPIO94_GPIO MFP_CFG(GPIO94, AF0)
++#define GPIO95_GPIO MFP_CFG(GPIO95, AF0)
++#define GPIO96_GPIO MFP_CFG(GPIO96, AF0)
++#define GPIO97_GPIO MFP_CFG(GPIO97, AF0)
++#define GPIO98_GPIO MFP_CFG(GPIO98, AF0)
++#define GPIO99_GPIO MFP_CFG(GPIO99, AF0)
++#define GPIO100_GPIO MFP_CFG(GPIO100, AF0)
++#define GPIO101_GPIO MFP_CFG(GPIO101, AF0)
++#define GPIO102_GPIO MFP_CFG(GPIO102, AF0)
++#define GPIO103_GPIO MFP_CFG(GPIO103, AF0)
++#define GPIO104_GPIO MFP_CFG(GPIO104, AF0)
++#define GPIO105_GPIO MFP_CFG(GPIO105, AF0)
++#define GPIO106_GPIO MFP_CFG(GPIO106, AF0)
++#define GPIO107_GPIO MFP_CFG(GPIO107, AF0)
++#define GPIO108_GPIO MFP_CFG(GPIO108, AF0)
++#define GPIO109_GPIO MFP_CFG(GPIO109, AF0)
++#define GPIO110_GPIO MFP_CFG(GPIO110, AF0)
++#define GPIO111_GPIO MFP_CFG(GPIO111, AF0)
++#define GPIO112_GPIO MFP_CFG(GPIO112, AF0)
++#define GPIO113_GPIO MFP_CFG(GPIO113, AF0)
++#define GPIO114_GPIO MFP_CFG(GPIO114, AF0)
++#define GPIO115_GPIO MFP_CFG(GPIO115, AF0)
++#define GPIO116_GPIO MFP_CFG(GPIO116, AF0)
++#define GPIO117_GPIO MFP_CFG(GPIO117, AF0)
++#define GPIO118_GPIO MFP_CFG(GPIO118, AF0)
++#define GPIO119_GPIO MFP_CFG(GPIO119, AF0)
++#define GPIO120_GPIO MFP_CFG(GPIO120, AF0)
++#define GPIO121_GPIO MFP_CFG(GPIO121, AF0)
++#define GPIO122_GPIO MFP_CFG(GPIO122, AF0)
++#define GPIO123_GPIO MFP_CFG(GPIO123, AF0)
++#define GPIO124_GPIO MFP_CFG(GPIO124, AF0)
++#define GPIO125_GPIO MFP_CFG(GPIO125, AF0)
++#define GPIO126_GPIO MFP_CFG(GPIO126, AF0)
++#define GPIO127_GPIO MFP_CFG(GPIO127, AF0)
+
-+ gfs2_holder_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_ATIME, &gh);
-+ ret = gfs2_glock_nq_atime(&gh);
-+ if (ret)
-+ goto out;
++#define GPIO0_2_GPIO MFP_CFG(GPIO0_2, AF0)
++#define GPIO1_2_GPIO MFP_CFG(GPIO1_2, AF0)
++#define GPIO2_2_GPIO MFP_CFG(GPIO2_2, AF0)
++#define GPIO3_2_GPIO MFP_CFG(GPIO3_2, AF0)
++#define GPIO4_2_GPIO MFP_CFG(GPIO4_2, AF0)
++#define GPIO5_2_GPIO MFP_CFG(GPIO5_2, AF0)
++#define GPIO6_2_GPIO MFP_CFG(GPIO6_2, AF0)
+
-+ set_bit(GIF_SW_PAGED, &ip->i_flags);
-+ gfs2_write_calc_reserv(ip, PAGE_CACHE_SIZE, &data_blocks, &ind_blocks);
-+ ret = gfs2_write_alloc_required(ip, pos, PAGE_CACHE_SIZE, &alloc_required);
-+ if (ret || !alloc_required)
-+ goto out_unlock;
-+ ret = -ENOMEM;
-+ al = gfs2_alloc_get(ip);
-+ if (al == NULL)
-+ goto out_unlock;
++/*
++ * Each MFP pin has an MFPR register. Since the offset of the
++ * register varies between processors, the processor specific code
++ * should initialize the pin offsets with pxa3xx_mfp_init_addr().
++ *
++ * pxa3xx_mfp_init_addr - accepts a table of "pxa3xx_mfp_addr_map"
++ * structures, each of which represents a range of MFP pins from
++ * "start" to "end", with the register offset beginning at "offset";
++ * to define a single pin, let "end" = -1
++ *
++ * use
++ *
++ * MFP_ADDR_X() to define a range of pins
++ * MFP_ADDR() to define a single pin
++ * MFP_ADDR_END to signal the end of pin offset definitions
++ */
++struct pxa3xx_mfp_addr_map {
++ unsigned int start;
++ unsigned int end;
++ unsigned long offset;
++};
+
-+ ret = gfs2_quota_lock(ip, NO_QUOTA_CHANGE, NO_QUOTA_CHANGE);
-+ if (ret)
-+ goto out_alloc_put;
-+ ret = gfs2_quota_check(ip, ip->i_inode.i_uid, ip->i_inode.i_gid);
-+ if (ret)
-+ goto out_quota_unlock;
-+ al->al_requested = data_blocks + ind_blocks;
-+ ret = gfs2_inplace_reserve(ip);
-+ if (ret)
-+ goto out_quota_unlock;
++#define MFP_ADDR_X(start, end, offset) \
++ { MFP_PIN_##start, MFP_PIN_##end, offset }
+
-+ rblocks = RES_DINODE + ind_blocks;
-+ if (gfs2_is_jdata(ip))
-+ rblocks += data_blocks ? data_blocks : 1;
-+ if (ind_blocks || data_blocks)
-+ rblocks += RES_STATFS + RES_QUOTA;
-+ ret = gfs2_trans_begin(sdp, rblocks, 0);
-+ if (ret)
-+ goto out_trans_fail;
++#define MFP_ADDR(pin, offset) \
++ { MFP_PIN_##pin, -1, offset }
+
-+ lock_page(page);
-+ ret = -EINVAL;
-+ last_index = ip->i_inode.i_size >> PAGE_CACHE_SHIFT;
-+ if (page->index > last_index)
-+ goto out_unlock_page;
-+ ret = 0;
-+ if (!PageUptodate(page) || page->mapping != ip->i_inode.i_mapping)
-+ goto out_unlock_page;
-+ if (gfs2_is_stuffed(ip)) {
-+ ret = gfs2_unstuff_dinode(ip, page);
-+ if (ret)
-+ goto out_unlock_page;
-+ }
-+ ret = gfs2_allocate_page_backing(page);
++#define MFP_ADDR_END { MFP_PIN_INVALID, 0 }
+
-+out_unlock_page:
-+ unlock_page(page);
-+ gfs2_trans_end(sdp);
-+out_trans_fail:
-+ gfs2_inplace_release(ip);
-+out_quota_unlock:
-+ gfs2_quota_unlock(ip);
-+out_alloc_put:
-+ gfs2_alloc_put(ip);
-+out_unlock:
-+ gfs2_glock_dq(&gh);
-+out:
-+ gfs2_holder_uninit(&gh);
-+ return ret;
-+}
++/*
++ * pxa3xx_mfp_read()/pxa3xx_mfp_write() - for direct read/write access
++ * to the MFPR register
++ */
++unsigned long pxa3xx_mfp_read(int mfp);
++void pxa3xx_mfp_write(int mfp, unsigned long mfpr_val);
+
-+static struct vm_operations_struct gfs2_vm_ops = {
-+ .fault = filemap_fault,
-+ .page_mkwrite = gfs2_page_mkwrite,
-+};
++/*
++ * pxa3xx_mfp_config - configure the MFPR registers
++ *
++ * used by board specific initialization code
++ */
++void pxa3xx_mfp_config(unsigned long *mfp_cfgs, int num);
+
++/*
++ * pxa3xx_mfp_init_addr() - initialize the mapping between mfp pin
++ * index and MFPR register offset
++ *
++ * used by processor specific code
++ */
++void __init pxa3xx_mfp_init_addr(struct pxa3xx_mfp_addr_map *);
++void __init pxa3xx_init_mfp(void);
++#endif /* __ASM_ARCH_MFP_PXA3XX_H */
+diff --git a/include/asm-arm/arch-pxa/mfp.h b/include/asm-arm/arch-pxa/mfp.h
+index 03c508d..02f6157 100644
+--- a/include/asm-arm/arch-pxa/mfp.h
++++ b/include/asm-arm/arch-pxa/mfp.h
+@@ -16,9 +16,6 @@
+ #ifndef __ASM_ARCH_MFP_H
+ #define __ASM_ARCH_MFP_H
- /**
- * gfs2_mmap -
-@@ -360,14 +447,7 @@ static int gfs2_mmap(struct file *file, struct vm_area_struct *vma)
- return error;
- }
-
-- /* This is VM_MAYWRITE instead of VM_WRITE because a call
-- to mprotect() can turn on VM_WRITE later. */
--
-- if ((vma->vm_flags & (VM_MAYSHARE | VM_MAYWRITE)) ==
-- (VM_MAYSHARE | VM_MAYWRITE))
-- vma->vm_ops = &gfs2_vm_ops_sharewrite;
-- else
-- vma->vm_ops = &gfs2_vm_ops_private;
-+ vma->vm_ops = &gfs2_vm_ops;
-
- gfs2_glock_dq_uninit(&i_gh);
-
-@@ -538,15 +618,6 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
- if (__mandatory_lock(&ip->i_inode))
- return -ENOLCK;
-
-- if (sdp->sd_args.ar_localflocks) {
-- if (IS_GETLK(cmd)) {
-- posix_test_lock(file, fl);
-- return 0;
-- } else {
-- return posix_lock_file_wait(file, fl);
-- }
-- }
+-#define MFPR_BASE (0x40e10000)
+-#define MFPR_SIZE (PAGE_SIZE)
-
- if (cmd == F_CANCELLK) {
- /* Hack: */
- cmd = F_SETLK;
-@@ -632,16 +703,12 @@ static void do_unflock(struct file *file, struct file_lock *fl)
- static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
- {
- struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
-- struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
-
- if (!(fl->fl_flags & FL_FLOCK))
- return -ENOLCK;
- if (__mandatory_lock(&ip->i_inode))
- return -ENOLCK;
+ #define mfp_to_gpio(m) ((m) % 128)
-- if (sdp->sd_args.ar_localflocks)
-- return flock_lock_file_wait(file, fl);
--
- if (fl->fl_type == F_UNLCK) {
- do_unflock(file, fl);
- return 0;
-@@ -678,3 +745,27 @@ const struct file_operations gfs2_dir_fops = {
- .flock = gfs2_flock,
+ /* list of all the configurable MFP pins */
+@@ -217,114 +214,21 @@ enum {
};
-+const struct file_operations gfs2_file_fops_nolock = {
-+ .llseek = gfs2_llseek,
-+ .read = do_sync_read,
-+ .aio_read = generic_file_aio_read,
-+ .write = do_sync_write,
-+ .aio_write = generic_file_aio_write,
-+ .unlocked_ioctl = gfs2_ioctl,
-+ .mmap = gfs2_mmap,
-+ .open = gfs2_open,
-+ .release = gfs2_close,
-+ .fsync = gfs2_fsync,
-+ .splice_read = generic_file_splice_read,
-+ .splice_write = generic_file_splice_write,
-+ .setlease = gfs2_setlease,
-+};
-+
-+const struct file_operations gfs2_dir_fops_nolock = {
-+ .readdir = gfs2_readdir,
-+ .unlocked_ioctl = gfs2_ioctl,
-+ .open = gfs2_open,
-+ .release = gfs2_close,
-+ .fsync = gfs2_fsync,
-+};
-+
-diff --git a/fs/gfs2/ops_file.h b/fs/gfs2/ops_file.h
-deleted file mode 100644
-index 7e5d8ec..0000000
---- a/fs/gfs2/ops_file.h
-+++ /dev/null
-@@ -1,24 +0,0 @@
+ /*
+- * Table that determines the low power modes outputs, with actual settings
+- * used in parentheses for don't-care values. Except for the float output,
+- * the configured driven and pulled levels match, so if there is a need for
+- * non-LPM pulled output, the same configuration could probably be used.
+- *
+- * Output value sleep_oe_n sleep_data pullup_en pulldown_en pull_sel
+- * (bit 7) (bit 8) (bit 14d) (bit 13d)
+- *
+- * Drive 0 0 0 0 X (1) 0
+- * Drive 1 0 1 X (1) 0 0
+- * Pull hi (1) 1 X(1) 1 0 0
+- * Pull lo (0) 1 X(0) 0 1 0
+- * Z (float) 1 X(0) 0 0 0
+- */
+-#define MFP_LPM_DRIVE_LOW 0x8
+-#define MFP_LPM_DRIVE_HIGH 0x6
+-#define MFP_LPM_PULL_HIGH 0x7
+-#define MFP_LPM_PULL_LOW 0x9
+-#define MFP_LPM_FLOAT 0x1
+-#define MFP_LPM_PULL_NEITHER 0x0
+-
-/*
-- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+- * The pullup and pulldown state of the MFP pin is by default determined by
+- * selected alternate function. In case some buggy devices need to override
+- * this default behavior, pxa3xx_mfp_set_pull() can be invoked with one of
+- * the following definition as the parameter.
- *
-- * This copyrighted material is made available to anyone wishing to use,
-- * modify, copy, or redistribute it subject to the terms and conditions
-- * of the GNU General Public License version 2.
+- * Definition pull_sel pullup_en pulldown_en
+- * MFP_PULL_HIGH 1 1 0
+- * MFP_PULL_LOW 1 0 1
+- * MFP_PULL_BOTH 1 1 1
+- * MFP_PULL_NONE 1 0 0
+- * MFP_PULL_DEFAULT 0 X X
+- *
+- * NOTE: pxa3xx_mfp_set_pull() will modify the PULLUP_EN and PULLDOWN_EN
+- * bits, which will cause potential conflicts with the low power mode
+- * setting, device drivers should take care of this
- */
+-#define MFP_PULL_BOTH (0x7u)
+-#define MFP_PULL_HIGH (0x6u)
+-#define MFP_PULL_LOW (0x5u)
+-#define MFP_PULL_NONE (0x4u)
+-#define MFP_PULL_DEFAULT (0x0u)
-
--#ifndef __OPS_FILE_DOT_H__
--#define __OPS_FILE_DOT_H__
+-#define MFP_AF0 (0)
+-#define MFP_AF1 (1)
+-#define MFP_AF2 (2)
+-#define MFP_AF3 (3)
+-#define MFP_AF4 (4)
+-#define MFP_AF5 (5)
+-#define MFP_AF6 (6)
+-#define MFP_AF7 (7)
-
--#include <linux/fs.h>
--struct gfs2_inode;
+-#define MFP_DS01X (0)
+-#define MFP_DS02X (1)
+-#define MFP_DS03X (2)
+-#define MFP_DS04X (3)
+-#define MFP_DS06X (4)
+-#define MFP_DS08X (5)
+-#define MFP_DS10X (6)
+-#define MFP_DS12X (7)
-
--extern struct file gfs2_internal_file_sentinel;
--extern int gfs2_internal_read(struct gfs2_inode *ip,
-- struct file_ra_state *ra_state,
-- char *buf, loff_t *pos, unsigned size);
--extern void gfs2_set_inode_flags(struct inode *inode);
--extern const struct file_operations gfs2_file_fops;
--extern const struct file_operations gfs2_dir_fops;
+-#define MFP_EDGE_BOTH 0x3
+-#define MFP_EDGE_RISE 0x2
+-#define MFP_EDGE_FALL 0x1
+-#define MFP_EDGE_NONE 0x0
-
--#endif /* __OPS_FILE_DOT_H__ */
-diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
-index 17de58e..43d511b 100644
---- a/fs/gfs2/ops_fstype.c
-+++ b/fs/gfs2/ops_fstype.c
-@@ -1,6 +1,6 @@
- /*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
+-#define MFPR_AF_MASK 0x0007
+-#define MFPR_DRV_MASK 0x1c00
+-#define MFPR_RDH_MASK 0x0200
+-#define MFPR_LPM_MASK 0xe180
+-#define MFPR_PULL_MASK 0xe000
+-#define MFPR_EDGE_MASK 0x0070
+-
+-#define MFPR_ALT_OFFSET 0
+-#define MFPR_ERE_OFFSET 4
+-#define MFPR_EFE_OFFSET 5
+-#define MFPR_EC_OFFSET 6
+-#define MFPR_SON_OFFSET 7
+-#define MFPR_SD_OFFSET 8
+-#define MFPR_SS_OFFSET 9
+-#define MFPR_DRV_OFFSET 10
+-#define MFPR_PD_OFFSET 13
+-#define MFPR_PU_OFFSET 14
+-#define MFPR_PS_OFFSET 15
+-
+-#define MFPR(af, drv, rdh, lpm, edge) \
+- (((af) & 0x7) | (((drv) & 0x7) << 10) |\
+- (((rdh) & 0x1) << 9) |\
+- (((lpm) & 0x3) << 7) |\
+- (((lpm) & 0x4) << 12)|\
+- (((lpm) & 0x8) << 10)|\
+- ((!(edge)) << 6) |\
+- (((edge) & 0x1) << 5) |\
+- (((edge) & 0x2) << 3))
+-
+-/*
+ * a possible MFP configuration is represented by a 32-bit integer
+- * bit 0..15 - MFPR value (16-bit)
+- * bit 16..31 - mfp pin index (used to obtain the MFPR offset)
++ *
++ * bit 0.. 9 - MFP Pin Number (1024 Pins Maximum)
++ * bit 10..12 - Alternate Function Selection
++ * bit 13..15 - Drive Strength
++ * bit 16..18 - Low Power Mode State
++ * bit 19..20 - Low Power Mode Edge Detection
++ * bit 21..22 - Run Mode Pull State
*
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -21,6 +21,7 @@
-
- #include "gfs2.h"
- #include "incore.h"
-+#include "bmap.h"
- #include "daemon.h"
- #include "glock.h"
- #include "glops.h"
-@@ -59,7 +60,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
-
- mutex_init(&sdp->sd_inum_mutex);
- spin_lock_init(&sdp->sd_statfs_spin);
-- mutex_init(&sdp->sd_statfs_mutex);
-
- spin_lock_init(&sdp->sd_rindex_spin);
- mutex_init(&sdp->sd_rindex_mutex);
-@@ -77,7 +77,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
-
- spin_lock_init(&sdp->sd_log_lock);
-
-- INIT_LIST_HEAD(&sdp->sd_log_le_gl);
- INIT_LIST_HEAD(&sdp->sd_log_le_buf);
- INIT_LIST_HEAD(&sdp->sd_log_le_revoke);
- INIT_LIST_HEAD(&sdp->sd_log_le_rg);
-@@ -303,6 +302,67 @@ out:
- return error;
- }
+ * to facilitate the definition, the following macros are provided
+ *
+- * MFPR_DEFAULT - default MFPR value, with
++ * MFP_CFG_DEFAULT - default MFP configuration value, with
+ * alternate function = 0,
+- * drive strength = fast 1mA (MFP_DS01X)
++ * drive strength = fast 3mA (MFP_DS03X)
+ * low power mode = default
+- * release dalay hold = false (RDH bit)
+ * edge detection = none
+ *
+ * MFP_CFG - default MFPR value with alternate function
+@@ -334,251 +238,74 @@ enum {
+ * low power mode
+ * MFP_CFG_X - default MFPR value with alternate function,
+ * pin drive strength and low power mode
+- *
+- * use
+- *
+- * MFP_CFG_PIN - to get the MFP pin index
+- * MFP_CFG_VAL - to get the corresponding MFPR value
+ */
-+/**
-+ * map_journal_extents - create a reusable "extent" mapping from all logical
-+ * blocks to all physical blocks for the given journal. This will save
-+ * us time when writing journal blocks. Most journals will have only one
-+ * extent that maps all their logical blocks. That's because gfs2.mkfs
-+ * arranges the journal blocks sequentially to maximize performance.
-+ * So the extent would map the first block for the entire file length.
-+ * However, gfs2_jadd can happen while file activity is happening, so
-+ * those journals may not be sequential. Less likely is the case where
-+ * the users created their own journals by mounting the metafs and
-+ * laying it out. But it's still possible. These journals might have
-+ * several extents.
-+ *
-+ * TODO: This should be done in bigger chunks rather than one block at a time,
-+ * but since it's only done at mount time, I'm not worried about the
-+ * time it takes.
-+ */
-+static int map_journal_extents(struct gfs2_sbd *sdp)
-+{
-+ struct gfs2_jdesc *jd = sdp->sd_jdesc;
-+ unsigned int lb;
-+ u64 db, prev_db; /* logical block, disk block, prev disk block */
-+ struct gfs2_inode *ip = GFS2_I(jd->jd_inode);
-+ struct gfs2_journal_extent *jext = NULL;
-+ struct buffer_head bh;
-+ int rc = 0;
+-typedef uint32_t mfp_cfg_t;
+-
+-#define MFP_CFG_PIN(mfp_cfg) (((mfp_cfg) >> 16) & 0xffff)
+-#define MFP_CFG_VAL(mfp_cfg) ((mfp_cfg) & 0xffff)
+-
+-/*
+- * MFP register defaults to
+- * drive strength fast 3mA (010'b)
+- * edge detection logic disabled
+- * alternate function 0
+- */
+-#define MFPR_DEFAULT (0x0840)
++typedef unsigned long mfp_cfg_t;
+
-+ prev_db = 0;
++#define MFP_PIN(x) ((x) & 0x3ff)
+
-+ for (lb = 0; lb < ip->i_di.di_size >> sdp->sd_sb.sb_bsize_shift; lb++) {
-+ bh.b_state = 0;
-+ bh.b_blocknr = 0;
-+ bh.b_size = 1 << ip->i_inode.i_blkbits;
-+ rc = gfs2_block_map(jd->jd_inode, lb, &bh, 0);
-+ db = bh.b_blocknr;
-+ if (rc || !db) {
-+ printk(KERN_INFO "GFS2 journal mapping error %d: lb="
-+ "%u db=%llu\n", rc, lb, (unsigned long long)db);
-+ break;
-+ }
-+ if (!prev_db || db != prev_db + 1) {
-+ jext = kzalloc(sizeof(struct gfs2_journal_extent),
-+ GFP_KERNEL);
-+ if (!jext) {
-+ printk(KERN_INFO "GFS2 error: out of memory "
-+ "mapping journal extents.\n");
-+ rc = -ENOMEM;
-+ break;
-+ }
-+ jext->dblock = db;
-+ jext->lblock = lb;
-+ jext->blocks = 1;
-+ list_add_tail(&jext->extent_list, &jd->extent_list);
-+ } else {
-+ jext->blocks++;
-+ }
-+ prev_db = db;
-+ }
-+ return rc;
-+}
++#define MFP_AF0 (0x0 << 10)
++#define MFP_AF1 (0x1 << 10)
++#define MFP_AF2 (0x2 << 10)
++#define MFP_AF3 (0x3 << 10)
++#define MFP_AF4 (0x4 << 10)
++#define MFP_AF5 (0x5 << 10)
++#define MFP_AF6 (0x6 << 10)
++#define MFP_AF7 (0x7 << 10)
++#define MFP_AF_MASK (0x7 << 10)
++#define MFP_AF(x) (((x) >> 10) & 0x7)
+
- static int init_journal(struct gfs2_sbd *sdp, int undo)
- {
- struct gfs2_holder ji_gh;
-@@ -340,7 +400,7 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
-
- if (sdp->sd_args.ar_spectator) {
- sdp->sd_jdesc = gfs2_jdesc_find(sdp, 0);
-- sdp->sd_log_blks_free = sdp->sd_jdesc->jd_blocks;
-+ atomic_set(&sdp->sd_log_blks_free, sdp->sd_jdesc->jd_blocks);
- } else {
- if (sdp->sd_lockstruct.ls_jid >= gfs2_jindex_size(sdp)) {
- fs_err(sdp, "can't mount journal #%u\n",
-@@ -377,7 +437,10 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
- sdp->sd_jdesc->jd_jid, error);
- goto fail_jinode_gh;
- }
-- sdp->sd_log_blks_free = sdp->sd_jdesc->jd_blocks;
-+ atomic_set(&sdp->sd_log_blks_free, sdp->sd_jdesc->jd_blocks);
++#define MFP_DS01X (0x0 << 13)
++#define MFP_DS02X (0x1 << 13)
++#define MFP_DS03X (0x2 << 13)
++#define MFP_DS04X (0x3 << 13)
++#define MFP_DS06X (0x4 << 13)
++#define MFP_DS08X (0x5 << 13)
++#define MFP_DS10X (0x6 << 13)
++#define MFP_DS13X (0x7 << 13)
++#define MFP_DS_MASK (0x7 << 13)
++#define MFP_DS(x) (((x) >> 13) & 0x7)
+
-+ /* Map the extents for this journal's blocks */
-+ map_journal_extents(sdp);
- }
-
- if (sdp->sd_lockstruct.ls_first) {
-diff --git a/fs/gfs2/ops_inode.c b/fs/gfs2/ops_inode.c
-index 291f0c7..9f71372 100644
---- a/fs/gfs2/ops_inode.c
-+++ b/fs/gfs2/ops_inode.c
-@@ -61,7 +61,7 @@ static int gfs2_create(struct inode *dir, struct dentry *dentry,
- inode = gfs2_createi(ghs, &dentry->d_name, S_IFREG | mode, 0);
- if (!IS_ERR(inode)) {
- gfs2_trans_end(sdp);
-- if (dip->i_alloc.al_rgd)
-+ if (dip->i_alloc->al_rgd)
- gfs2_inplace_release(dip);
- gfs2_quota_unlock(dip);
- gfs2_alloc_put(dip);
-@@ -113,8 +113,18 @@ static struct dentry *gfs2_lookup(struct inode *dir, struct dentry *dentry,
- if (inode && IS_ERR(inode))
- return ERR_PTR(PTR_ERR(inode));
-
-- if (inode)
-+ if (inode) {
-+ struct gfs2_glock *gl = GFS2_I(inode)->i_gl;
-+ struct gfs2_holder gh;
-+ int error;
-+ error = gfs2_glock_nq_init(gl, LM_ST_SHARED, LM_FLAG_ANY, &gh);
-+ if (error) {
-+ iput(inode);
-+ return ERR_PTR(error);
-+ }
-+ gfs2_glock_dq_uninit(&gh);
- return d_splice_alias(inode, dentry);
-+ }
- d_add(dentry, inode);
-
- return NULL;
-@@ -366,7 +376,7 @@ static int gfs2_symlink(struct inode *dir, struct dentry *dentry,
- }
++#define MFP_LPM_INPUT (0x0 << 16)
++#define MFP_LPM_DRIVE_LOW (0x1 << 16)
++#define MFP_LPM_DRIVE_HIGH (0x2 << 16)
++#define MFP_LPM_PULL_LOW (0x3 << 16)
++#define MFP_LPM_PULL_HIGH (0x4 << 16)
++#define MFP_LPM_FLOAT (0x5 << 16)
++#define MFP_LPM_STATE_MASK (0x7 << 16)
++#define MFP_LPM_STATE(x) (((x) >> 16) & 0x7)
++
++#define MFP_LPM_EDGE_NONE (0x0 << 19)
++#define MFP_LPM_EDGE_RISE (0x1 << 19)
++#define MFP_LPM_EDGE_FALL (0x2 << 19)
++#define MFP_LPM_EDGE_BOTH (0x3 << 19)
++#define MFP_LPM_EDGE_MASK (0x3 << 19)
++#define MFP_LPM_EDGE(x) (((x) >> 19) & 0x3)
++
++#define MFP_PULL_NONE (0x0 << 21)
++#define MFP_PULL_LOW (0x1 << 21)
++#define MFP_PULL_HIGH (0x2 << 21)
++#define MFP_PULL_BOTH (0x3 << 21)
++#define MFP_PULL_MASK (0x3 << 21)
++#define MFP_PULL(x) (((x) >> 21) & 0x3)
++
++#define MFP_CFG_DEFAULT (MFP_AF0 | MFP_DS03X | MFP_LPM_INPUT |\
++ MFP_LPM_EDGE_NONE | MFP_PULL_NONE)
- gfs2_trans_end(sdp);
-- if (dip->i_alloc.al_rgd)
-+ if (dip->i_alloc->al_rgd)
- gfs2_inplace_release(dip);
- gfs2_quota_unlock(dip);
- gfs2_alloc_put(dip);
-@@ -442,7 +452,7 @@ static int gfs2_mkdir(struct inode *dir, struct dentry *dentry, int mode)
- gfs2_assert_withdraw(sdp, !error); /* dip already pinned */
+ #define MFP_CFG(pin, af) \
+- ((MFP_PIN_##pin << 16) | MFPR_DEFAULT | (MFP_##af))
++ ((MFP_CFG_DEFAULT & ~MFP_AF_MASK) |\
++ (MFP_PIN(MFP_PIN_##pin) | MFP_##af))
- gfs2_trans_end(sdp);
-- if (dip->i_alloc.al_rgd)
-+ if (dip->i_alloc->al_rgd)
- gfs2_inplace_release(dip);
- gfs2_quota_unlock(dip);
- gfs2_alloc_put(dip);
-@@ -548,7 +558,7 @@ static int gfs2_mknod(struct inode *dir, struct dentry *dentry, int mode,
- }
+ #define MFP_CFG_DRV(pin, af, drv) \
+- ((MFP_PIN_##pin << 16) | (MFPR_DEFAULT & ~MFPR_DRV_MASK) |\
+- ((MFP_##drv) << 10) | (MFP_##af))
++ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_DS_MASK)) |\
++ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_##drv))
- gfs2_trans_end(sdp);
-- if (dip->i_alloc.al_rgd)
-+ if (dip->i_alloc->al_rgd)
- gfs2_inplace_release(dip);
- gfs2_quota_unlock(dip);
- gfs2_alloc_put(dip);
-diff --git a/fs/gfs2/ops_inode.h b/fs/gfs2/ops_inode.h
-index 34f0caa..fd8cee2 100644
---- a/fs/gfs2/ops_inode.h
-+++ b/fs/gfs2/ops_inode.h
-@@ -16,5 +16,11 @@ extern const struct inode_operations gfs2_file_iops;
- extern const struct inode_operations gfs2_dir_iops;
- extern const struct inode_operations gfs2_symlink_iops;
- extern const struct inode_operations gfs2_dev_iops;
-+extern const struct file_operations gfs2_file_fops;
-+extern const struct file_operations gfs2_dir_fops;
-+extern const struct file_operations gfs2_file_fops_nolock;
-+extern const struct file_operations gfs2_dir_fops_nolock;
-+
-+extern void gfs2_set_inode_flags(struct inode *inode);
+ #define MFP_CFG_LPM(pin, af, lpm) \
+- ((MFP_PIN_##pin << 16) | (MFPR_DEFAULT & ~MFPR_LPM_MASK) |\
+- (((MFP_LPM_##lpm) & 0x3) << 7) |\
+- (((MFP_LPM_##lpm) & 0x4) << 12) |\
+- (((MFP_LPM_##lpm) & 0x8) << 10) |\
+- (MFP_##af))
++ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_LPM_STATE_MASK)) |\
++ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_LPM_##lpm))
- #endif /* __OPS_INODE_DOT_H__ */
-diff --git a/fs/gfs2/ops_super.c b/fs/gfs2/ops_super.c
-index 950f314..5e52421 100644
---- a/fs/gfs2/ops_super.c
-+++ b/fs/gfs2/ops_super.c
-@@ -487,7 +487,6 @@ static struct inode *gfs2_alloc_inode(struct super_block *sb)
- if (ip) {
- ip->i_flags = 0;
- ip->i_gl = NULL;
-- ip->i_last_pfault = jiffies;
- }
- return &ip->i_inode;
- }
-diff --git a/fs/gfs2/ops_vm.c b/fs/gfs2/ops_vm.c
-deleted file mode 100644
-index 927d739..0000000
---- a/fs/gfs2/ops_vm.c
-+++ /dev/null
-@@ -1,169 +0,0 @@
--/*
-- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-- *
-- * This copyrighted material is made available to anyone wishing to use,
-- * modify, copy, or redistribute it subject to the terms and conditions
-- * of the GNU General Public License version 2.
-- */
--
--#include <linux/slab.h>
--#include <linux/spinlock.h>
--#include <linux/completion.h>
--#include <linux/buffer_head.h>
--#include <linux/mm.h>
--#include <linux/pagemap.h>
--#include <linux/gfs2_ondisk.h>
--#include <linux/lm_interface.h>
--
--#include "gfs2.h"
--#include "incore.h"
--#include "bmap.h"
--#include "glock.h"
--#include "inode.h"
--#include "ops_vm.h"
--#include "quota.h"
--#include "rgrp.h"
--#include "trans.h"
--#include "util.h"
--
--static int gfs2_private_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
--{
-- struct gfs2_inode *ip = GFS2_I(vma->vm_file->f_mapping->host);
--
-- set_bit(GIF_PAGED, &ip->i_flags);
-- return filemap_fault(vma, vmf);
--}
--
--static int alloc_page_backing(struct gfs2_inode *ip, struct page *page)
--{
-- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- unsigned long index = page->index;
-- u64 lblock = index << (PAGE_CACHE_SHIFT -
-- sdp->sd_sb.sb_bsize_shift);
-- unsigned int blocks = PAGE_CACHE_SIZE >> sdp->sd_sb.sb_bsize_shift;
-- struct gfs2_alloc *al;
-- unsigned int data_blocks, ind_blocks;
-- unsigned int x;
-- int error;
--
-- al = gfs2_alloc_get(ip);
--
-- error = gfs2_quota_lock(ip, NO_QUOTA_CHANGE, NO_QUOTA_CHANGE);
-- if (error)
-- goto out;
--
-- error = gfs2_quota_check(ip, ip->i_inode.i_uid, ip->i_inode.i_gid);
-- if (error)
-- goto out_gunlock_q;
--
-- gfs2_write_calc_reserv(ip, PAGE_CACHE_SIZE, &data_blocks, &ind_blocks);
--
-- al->al_requested = data_blocks + ind_blocks;
--
-- error = gfs2_inplace_reserve(ip);
-- if (error)
-- goto out_gunlock_q;
--
-- error = gfs2_trans_begin(sdp, al->al_rgd->rd_length +
-- ind_blocks + RES_DINODE +
-- RES_STATFS + RES_QUOTA, 0);
-- if (error)
-- goto out_ipres;
--
-- if (gfs2_is_stuffed(ip)) {
-- error = gfs2_unstuff_dinode(ip, NULL);
-- if (error)
-- goto out_trans;
-- }
--
-- for (x = 0; x < blocks; ) {
-- u64 dblock;
-- unsigned int extlen;
-- int new = 1;
--
-- error = gfs2_extent_map(&ip->i_inode, lblock, &new, &dblock, &extlen);
-- if (error)
-- goto out_trans;
--
-- lblock += extlen;
-- x += extlen;
-- }
--
-- gfs2_assert_warn(sdp, al->al_alloced);
--
--out_trans:
-- gfs2_trans_end(sdp);
--out_ipres:
-- gfs2_inplace_release(ip);
--out_gunlock_q:
-- gfs2_quota_unlock(ip);
--out:
-- gfs2_alloc_put(ip);
-- return error;
--}
--
--static int gfs2_sharewrite_fault(struct vm_area_struct *vma,
-- struct vm_fault *vmf)
--{
-- struct file *file = vma->vm_file;
-- struct gfs2_file *gf = file->private_data;
-- struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
-- struct gfs2_holder i_gh;
-- int alloc_required;
-- int error;
-- int ret = 0;
--
-- error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &i_gh);
-- if (error)
-- goto out;
--
-- set_bit(GIF_PAGED, &ip->i_flags);
-- set_bit(GIF_SW_PAGED, &ip->i_flags);
+ #define MFP_CFG_X(pin, af, drv, lpm) \
+- ((MFP_PIN_##pin << 16) |\
+- (MFPR_DEFAULT & ~(MFPR_DRV_MASK | MFPR_LPM_MASK)) |\
+- ((MFP_##drv) << 10) | (MFP_##af) |\
+- (((MFP_LPM_##lpm) & 0x3) << 7) |\
+- (((MFP_LPM_##lpm) & 0x4) << 12) |\
+- (((MFP_LPM_##lpm) & 0x8) << 10))
-
-- error = gfs2_write_alloc_required(ip,
-- (u64)vmf->pgoff << PAGE_CACHE_SHIFT,
-- PAGE_CACHE_SIZE, &alloc_required);
-- if (error) {
-- ret = VM_FAULT_OOM; /* XXX: are these right? */
-- goto out_unlock;
-- }
+-/* common MFP configurations - processor specific ones defined
+- * in mfp-pxa3xx.h
+- */
+-#define GPIO0_GPIO MFP_CFG(GPIO0, AF0)
+-#define GPIO1_GPIO MFP_CFG(GPIO1, AF0)
+-#define GPIO2_GPIO MFP_CFG(GPIO2, AF0)
+-#define GPIO3_GPIO MFP_CFG(GPIO3, AF0)
+-#define GPIO4_GPIO MFP_CFG(GPIO4, AF0)
+-#define GPIO5_GPIO MFP_CFG(GPIO5, AF0)
+-#define GPIO6_GPIO MFP_CFG(GPIO6, AF0)
+-#define GPIO7_GPIO MFP_CFG(GPIO7, AF0)
+-#define GPIO8_GPIO MFP_CFG(GPIO8, AF0)
+-#define GPIO9_GPIO MFP_CFG(GPIO9, AF0)
+-#define GPIO10_GPIO MFP_CFG(GPIO10, AF0)
+-#define GPIO11_GPIO MFP_CFG(GPIO11, AF0)
+-#define GPIO12_GPIO MFP_CFG(GPIO12, AF0)
+-#define GPIO13_GPIO MFP_CFG(GPIO13, AF0)
+-#define GPIO14_GPIO MFP_CFG(GPIO14, AF0)
+-#define GPIO15_GPIO MFP_CFG(GPIO15, AF0)
+-#define GPIO16_GPIO MFP_CFG(GPIO16, AF0)
+-#define GPIO17_GPIO MFP_CFG(GPIO17, AF0)
+-#define GPIO18_GPIO MFP_CFG(GPIO18, AF0)
+-#define GPIO19_GPIO MFP_CFG(GPIO19, AF0)
+-#define GPIO20_GPIO MFP_CFG(GPIO20, AF0)
+-#define GPIO21_GPIO MFP_CFG(GPIO21, AF0)
+-#define GPIO22_GPIO MFP_CFG(GPIO22, AF0)
+-#define GPIO23_GPIO MFP_CFG(GPIO23, AF0)
+-#define GPIO24_GPIO MFP_CFG(GPIO24, AF0)
+-#define GPIO25_GPIO MFP_CFG(GPIO25, AF0)
+-#define GPIO26_GPIO MFP_CFG(GPIO26, AF0)
+-#define GPIO27_GPIO MFP_CFG(GPIO27, AF0)
+-#define GPIO28_GPIO MFP_CFG(GPIO28, AF0)
+-#define GPIO29_GPIO MFP_CFG(GPIO29, AF0)
+-#define GPIO30_GPIO MFP_CFG(GPIO30, AF0)
+-#define GPIO31_GPIO MFP_CFG(GPIO31, AF0)
+-#define GPIO32_GPIO MFP_CFG(GPIO32, AF0)
+-#define GPIO33_GPIO MFP_CFG(GPIO33, AF0)
+-#define GPIO34_GPIO MFP_CFG(GPIO34, AF0)
+-#define GPIO35_GPIO MFP_CFG(GPIO35, AF0)
+-#define GPIO36_GPIO MFP_CFG(GPIO36, AF0)
+-#define GPIO37_GPIO MFP_CFG(GPIO37, AF0)
+-#define GPIO38_GPIO MFP_CFG(GPIO38, AF0)
+-#define GPIO39_GPIO MFP_CFG(GPIO39, AF0)
+-#define GPIO40_GPIO MFP_CFG(GPIO40, AF0)
+-#define GPIO41_GPIO MFP_CFG(GPIO41, AF0)
+-#define GPIO42_GPIO MFP_CFG(GPIO42, AF0)
+-#define GPIO43_GPIO MFP_CFG(GPIO43, AF0)
+-#define GPIO44_GPIO MFP_CFG(GPIO44, AF0)
+-#define GPIO45_GPIO MFP_CFG(GPIO45, AF0)
-
-- set_bit(GFF_EXLOCK, &gf->f_flags);
-- ret = filemap_fault(vma, vmf);
-- clear_bit(GFF_EXLOCK, &gf->f_flags);
-- if (ret & VM_FAULT_ERROR)
-- goto out_unlock;
+-#define GPIO47_GPIO MFP_CFG(GPIO47, AF0)
+-#define GPIO48_GPIO MFP_CFG(GPIO48, AF0)
-
-- if (alloc_required) {
-- /* XXX: do we need to drop page lock around alloc_page_backing?*/
-- error = alloc_page_backing(ip, vmf->page);
-- if (error) {
-- /*
-- * VM_FAULT_LOCKED should always be the case for
-- * filemap_fault, but it may not be in a future
-- * implementation.
-- */
-- if (ret & VM_FAULT_LOCKED)
-- unlock_page(vmf->page);
-- page_cache_release(vmf->page);
-- ret = VM_FAULT_OOM;
-- goto out_unlock;
-- }
-- set_page_dirty(vmf->page);
-- }
+-#define GPIO53_GPIO MFP_CFG(GPIO53, AF0)
+-#define GPIO54_GPIO MFP_CFG(GPIO54, AF0)
+-#define GPIO55_GPIO MFP_CFG(GPIO55, AF0)
-
--out_unlock:
-- gfs2_glock_dq_uninit(&i_gh);
--out:
-- return ret;
--}
+-#define GPIO57_GPIO MFP_CFG(GPIO57, AF0)
-
--struct vm_operations_struct gfs2_vm_ops_private = {
-- .fault = gfs2_private_fault,
--};
+-#define GPIO63_GPIO MFP_CFG(GPIO63, AF0)
+-#define GPIO64_GPIO MFP_CFG(GPIO64, AF0)
+-#define GPIO65_GPIO MFP_CFG(GPIO65, AF0)
+-#define GPIO66_GPIO MFP_CFG(GPIO66, AF0)
+-#define GPIO67_GPIO MFP_CFG(GPIO67, AF0)
+-#define GPIO68_GPIO MFP_CFG(GPIO68, AF0)
+-#define GPIO69_GPIO MFP_CFG(GPIO69, AF0)
+-#define GPIO70_GPIO MFP_CFG(GPIO70, AF0)
+-#define GPIO71_GPIO MFP_CFG(GPIO71, AF0)
+-#define GPIO72_GPIO MFP_CFG(GPIO72, AF0)
+-#define GPIO73_GPIO MFP_CFG(GPIO73, AF0)
+-#define GPIO74_GPIO MFP_CFG(GPIO74, AF0)
+-#define GPIO75_GPIO MFP_CFG(GPIO75, AF0)
+-#define GPIO76_GPIO MFP_CFG(GPIO76, AF0)
+-#define GPIO77_GPIO MFP_CFG(GPIO77, AF0)
+-#define GPIO78_GPIO MFP_CFG(GPIO78, AF0)
+-#define GPIO79_GPIO MFP_CFG(GPIO79, AF0)
+-#define GPIO80_GPIO MFP_CFG(GPIO80, AF0)
+-#define GPIO81_GPIO MFP_CFG(GPIO81, AF0)
+-#define GPIO82_GPIO MFP_CFG(GPIO82, AF0)
+-#define GPIO83_GPIO MFP_CFG(GPIO83, AF0)
+-#define GPIO84_GPIO MFP_CFG(GPIO84, AF0)
+-#define GPIO85_GPIO MFP_CFG(GPIO85, AF0)
+-#define GPIO86_GPIO MFP_CFG(GPIO86, AF0)
+-#define GPIO87_GPIO MFP_CFG(GPIO87, AF0)
+-#define GPIO88_GPIO MFP_CFG(GPIO88, AF0)
+-#define GPIO89_GPIO MFP_CFG(GPIO89, AF0)
+-#define GPIO90_GPIO MFP_CFG(GPIO90, AF0)
+-#define GPIO91_GPIO MFP_CFG(GPIO91, AF0)
+-#define GPIO92_GPIO MFP_CFG(GPIO92, AF0)
+-#define GPIO93_GPIO MFP_CFG(GPIO93, AF0)
+-#define GPIO94_GPIO MFP_CFG(GPIO94, AF0)
+-#define GPIO95_GPIO MFP_CFG(GPIO95, AF0)
+-#define GPIO96_GPIO MFP_CFG(GPIO96, AF0)
+-#define GPIO97_GPIO MFP_CFG(GPIO97, AF0)
+-#define GPIO98_GPIO MFP_CFG(GPIO98, AF0)
+-#define GPIO99_GPIO MFP_CFG(GPIO99, AF0)
+-#define GPIO100_GPIO MFP_CFG(GPIO100, AF0)
+-#define GPIO101_GPIO MFP_CFG(GPIO101, AF0)
+-#define GPIO102_GPIO MFP_CFG(GPIO102, AF0)
+-#define GPIO103_GPIO MFP_CFG(GPIO103, AF0)
+-#define GPIO104_GPIO MFP_CFG(GPIO104, AF0)
+-#define GPIO105_GPIO MFP_CFG(GPIO105, AF0)
+-#define GPIO106_GPIO MFP_CFG(GPIO106, AF0)
+-#define GPIO107_GPIO MFP_CFG(GPIO107, AF0)
+-#define GPIO108_GPIO MFP_CFG(GPIO108, AF0)
+-#define GPIO109_GPIO MFP_CFG(GPIO109, AF0)
+-#define GPIO110_GPIO MFP_CFG(GPIO110, AF0)
+-#define GPIO111_GPIO MFP_CFG(GPIO111, AF0)
+-#define GPIO112_GPIO MFP_CFG(GPIO112, AF0)
+-#define GPIO113_GPIO MFP_CFG(GPIO113, AF0)
+-#define GPIO114_GPIO MFP_CFG(GPIO114, AF0)
+-#define GPIO115_GPIO MFP_CFG(GPIO115, AF0)
+-#define GPIO116_GPIO MFP_CFG(GPIO116, AF0)
+-#define GPIO117_GPIO MFP_CFG(GPIO117, AF0)
+-#define GPIO118_GPIO MFP_CFG(GPIO118, AF0)
+-#define GPIO119_GPIO MFP_CFG(GPIO119, AF0)
+-#define GPIO120_GPIO MFP_CFG(GPIO120, AF0)
+-#define GPIO121_GPIO MFP_CFG(GPIO121, AF0)
+-#define GPIO122_GPIO MFP_CFG(GPIO122, AF0)
+-#define GPIO123_GPIO MFP_CFG(GPIO123, AF0)
+-#define GPIO124_GPIO MFP_CFG(GPIO124, AF0)
+-#define GPIO125_GPIO MFP_CFG(GPIO125, AF0)
+-#define GPIO126_GPIO MFP_CFG(GPIO126, AF0)
+-#define GPIO127_GPIO MFP_CFG(GPIO127, AF0)
-
--struct vm_operations_struct gfs2_vm_ops_sharewrite = {
-- .fault = gfs2_sharewrite_fault,
--};
+-#define GPIO0_2_GPIO MFP_CFG(GPIO0_2, AF0)
+-#define GPIO1_2_GPIO MFP_CFG(GPIO1_2, AF0)
+-#define GPIO2_2_GPIO MFP_CFG(GPIO2_2, AF0)
+-#define GPIO3_2_GPIO MFP_CFG(GPIO3_2, AF0)
+-#define GPIO4_2_GPIO MFP_CFG(GPIO4_2, AF0)
+-#define GPIO5_2_GPIO MFP_CFG(GPIO5_2, AF0)
+-#define GPIO6_2_GPIO MFP_CFG(GPIO6_2, AF0)
-
-diff --git a/fs/gfs2/ops_vm.h b/fs/gfs2/ops_vm.h
-deleted file mode 100644
-index 4ae8f43..0000000
---- a/fs/gfs2/ops_vm.h
-+++ /dev/null
-@@ -1,18 +0,0 @@
-/*
-- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+- * each MFP pin will have a MFPR register, since the offset of the
+- * register varies between processors, the processor specific code
+- * should initialize the pin offsets by pxa3xx_mfp_init_addr()
- *
-- * This copyrighted material is made available to anyone wishing to use,
-- * modify, copy, or redistribute it subject to the terms and conditions
-- * of the GNU General Public License version 2.
+- * pxa3xx_mfp_init_addr - accepts a table of "pxa3xx_mfp_addr_map"
+- * structure, which represents a range of MFP pins from "start" to
+- * "end", with the offset begining at "offset", to define a single
+- * pin, let "end" = -1
+- *
+- * use
+- *
+- * MFP_ADDR_X() to define a range of pins
+- * MFP_ADDR() to define a single pin
+- * MFP_ADDR_END to signal the end of pin offset definitions
- */
+-struct pxa3xx_mfp_addr_map {
+- unsigned int start;
+- unsigned int end;
+- unsigned long offset;
+-};
-
--#ifndef __OPS_VM_DOT_H__
--#define __OPS_VM_DOT_H__
--
--#include <linux/mm.h>
--
--extern struct vm_operations_struct gfs2_vm_ops_private;
--extern struct vm_operations_struct gfs2_vm_ops_sharewrite;
--
--#endif /* __OPS_VM_DOT_H__ */
-diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
-index addb51e..a08dabd 100644
---- a/fs/gfs2/quota.c
-+++ b/fs/gfs2/quota.c
-@@ -1,6 +1,6 @@
- /*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
- *
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -59,7 +59,6 @@
- #include "super.h"
- #include "trans.h"
- #include "inode.h"
--#include "ops_file.h"
- #include "ops_address.h"
- #include "util.h"
-
-@@ -274,10 +273,10 @@ static int bh_get(struct gfs2_quota_data *qd)
- }
-
- block = qd->qd_slot / sdp->sd_qc_per_block;
-- offset = qd->qd_slot % sdp->sd_qc_per_block;;
-+ offset = qd->qd_slot % sdp->sd_qc_per_block;
-
- bh_map.b_size = 1 << ip->i_inode.i_blkbits;
-- error = gfs2_block_map(&ip->i_inode, block, 0, &bh_map);
-+ error = gfs2_block_map(&ip->i_inode, block, &bh_map, 0);
- if (error)
- goto fail;
- error = gfs2_meta_read(ip->i_gl, bh_map.b_blocknr, DIO_WAIT, &bh);
-@@ -454,7 +453,7 @@ static void qdsb_put(struct gfs2_quota_data *qd)
- int gfs2_quota_hold(struct gfs2_inode *ip, u32 uid, u32 gid)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_quota_data **qd = al->al_qd;
- int error;
-
-@@ -502,7 +501,7 @@ out:
- void gfs2_quota_unhold(struct gfs2_inode *ip)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- unsigned int x;
-
- gfs2_assert_warn(sdp, !test_bit(GIF_QD_LOCKED, &ip->i_flags));
-@@ -646,7 +645,7 @@ static int gfs2_adjust_quota(struct gfs2_inode *ip, loff_t loc,
- }
-
- if (!buffer_mapped(bh)) {
-- gfs2_get_block(inode, iblock, bh, 1);
-+ gfs2_block_map(inode, iblock, bh, 1);
- if (!buffer_mapped(bh))
- goto unlock;
- }
-@@ -793,11 +792,9 @@ static int do_glock(struct gfs2_quota_data *qd, int force_refresh,
- struct gfs2_holder i_gh;
- struct gfs2_quota_host q;
- char buf[sizeof(struct gfs2_quota)];
-- struct file_ra_state ra_state;
- int error;
- struct gfs2_quota_lvb *qlvb;
-
-- file_ra_state_init(&ra_state, sdp->sd_quota_inode->i_mapping);
- restart:
- error = gfs2_glock_nq_init(qd->qd_gl, LM_ST_SHARED, 0, q_gh);
- if (error)
-@@ -820,8 +817,8 @@ restart:
-
- memset(buf, 0, sizeof(struct gfs2_quota));
- pos = qd2offset(qd);
-- error = gfs2_internal_read(ip, &ra_state, buf,
-- &pos, sizeof(struct gfs2_quota));
-+ error = gfs2_internal_read(ip, NULL, buf, &pos,
-+ sizeof(struct gfs2_quota));
- if (error < 0)
- goto fail_gunlock;
-
-@@ -856,7 +853,7 @@ fail:
- int gfs2_quota_lock(struct gfs2_inode *ip, u32 uid, u32 gid)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- unsigned int x;
- int error = 0;
-
-@@ -924,7 +921,7 @@ static int need_sync(struct gfs2_quota_data *qd)
-
- void gfs2_quota_unlock(struct gfs2_inode *ip)
- {
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_quota_data *qda[4];
- unsigned int count = 0;
- unsigned int x;
-@@ -972,7 +969,7 @@ static int print_message(struct gfs2_quota_data *qd, char *type)
- int gfs2_quota_check(struct gfs2_inode *ip, u32 uid, u32 gid)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_quota_data *qd;
- s64 value;
- unsigned int x;
-@@ -1016,10 +1013,9 @@ int gfs2_quota_check(struct gfs2_inode *ip, u32 uid, u32 gid)
- void gfs2_quota_change(struct gfs2_inode *ip, s64 change,
- u32 uid, u32 gid)
- {
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_quota_data *qd;
- unsigned int x;
-- unsigned int found = 0;
-
- if (gfs2_assert_warn(GFS2_SB(&ip->i_inode), change))
- return;
-@@ -1032,7 +1028,6 @@ void gfs2_quota_change(struct gfs2_inode *ip, s64 change,
- if ((qd->qd_id == uid && test_bit(QDF_USER, &qd->qd_flags)) ||
- (qd->qd_id == gid && !test_bit(QDF_USER, &qd->qd_flags))) {
- do_qc(qd, change);
-- found++;
- }
- }
- }
-diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
-index beb6c7a..b249e29 100644
---- a/fs/gfs2/recovery.c
-+++ b/fs/gfs2/recovery.c
-@@ -391,7 +391,7 @@ static int clean_journal(struct gfs2_jdesc *jd, struct gfs2_log_header_host *hea
- lblock = head->lh_blkno;
- gfs2_replay_incr_blk(sdp, &lblock);
- bh_map.b_size = 1 << ip->i_inode.i_blkbits;
-- error = gfs2_block_map(&ip->i_inode, lblock, 0, &bh_map);
-+ error = gfs2_block_map(&ip->i_inode, lblock, &bh_map, 0);
- if (error)
- return error;
- if (!bh_map.b_blocknr) {
-@@ -504,13 +504,21 @@ int gfs2_recover_journal(struct gfs2_jdesc *jd)
- if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
- ro = 1;
- } else {
-- if (sdp->sd_vfs->s_flags & MS_RDONLY)
-- ro = 1;
-+ if (sdp->sd_vfs->s_flags & MS_RDONLY) {
-+ /* check if device itself is read-only */
-+ ro = bdev_read_only(sdp->sd_vfs->s_bdev);
-+ if (!ro) {
-+ fs_info(sdp, "recovery required on "
-+ "read-only filesystem.\n");
-+ fs_info(sdp, "write access will be "
-+ "enabled during recovery.\n");
-+ }
-+ }
- }
-
- if (ro) {
-- fs_warn(sdp, "jid=%u: Can't replay: read-only FS\n",
-- jd->jd_jid);
-+ fs_warn(sdp, "jid=%u: Can't replay: read-only block "
-+ "device\n", jd->jd_jid);
- error = -EROFS;
- goto fail_gunlock_tr;
- }
-diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
-index 708c287..3552110 100644
---- a/fs/gfs2/rgrp.c
-+++ b/fs/gfs2/rgrp.c
-@@ -25,10 +25,10 @@
- #include "rgrp.h"
- #include "super.h"
- #include "trans.h"
--#include "ops_file.h"
- #include "util.h"
- #include "log.h"
- #include "inode.h"
-+#include "ops_address.h"
-
- #define BFITNOENT ((u32)~0)
- #define NO_BLOCK ((u64)~0)
-@@ -126,41 +126,43 @@ static unsigned char gfs2_testbit(struct gfs2_rgrpd *rgd, unsigned char *buffer,
- * Return: the block number (bitmap buffer scope) that was found
- */
-
--static u32 gfs2_bitfit(struct gfs2_rgrpd *rgd, unsigned char *buffer,
-- unsigned int buflen, u32 goal,
-- unsigned char old_state)
-+static u32 gfs2_bitfit(unsigned char *buffer, unsigned int buflen, u32 goal,
-+ unsigned char old_state)
- {
-- unsigned char *byte, *end, alloc;
-+ unsigned char *byte;
- u32 blk = goal;
-- unsigned int bit;
-+ unsigned int bit, bitlong;
-+ unsigned long *plong, plong55;
-
- byte = buffer + (goal / GFS2_NBBY);
-+ plong = (unsigned long *)(buffer + (goal / GFS2_NBBY));
- bit = (goal % GFS2_NBBY) * GFS2_BIT_SIZE;
-- end = buffer + buflen;
-- alloc = (old_state == GFS2_BLKST_FREE) ? 0x55 : 0;
--
-- while (byte < end) {
-- /* If we're looking for a free block we can eliminate all
-- bitmap settings with 0x55, which represents four data
-- blocks in a row. If we're looking for a data block, we can
-- eliminate 0x00 which corresponds to four free blocks. */
-- if ((*byte & 0x55) == alloc) {
-- blk += (8 - bit) >> 1;
--
-- bit = 0;
-- byte++;
--
-+ bitlong = bit;
-+#if BITS_PER_LONG == 32
-+ plong55 = 0x55555555;
-+#else
-+ plong55 = 0x5555555555555555;
-+#endif
-+ while (byte < buffer + buflen) {
-+
-+ if (bitlong == 0 && old_state == 0 && *plong == plong55) {
-+ plong++;
-+ byte += sizeof(unsigned long);
-+ blk += sizeof(unsigned long) * GFS2_NBBY;
- continue;
- }
+-#define MFP_ADDR_X(start, end, offset) \
+- { MFP_PIN_##start, MFP_PIN_##end, offset }
-
- if (((*byte >> bit) & GFS2_BIT_MASK) == old_state)
- return blk;
+-#define MFP_ADDR(pin, offset) \
+- { MFP_PIN_##pin, -1, offset }
-
- bit += GFS2_BIT_SIZE;
- if (bit >= 8) {
- bit = 0;
- byte++;
- }
-+ bitlong += GFS2_BIT_SIZE;
-+ if (bitlong >= sizeof(unsigned long) * 8) {
-+ bitlong = 0;
-+ plong++;
-+ }
-
- blk++;
- }
-@@ -817,11 +819,9 @@ void gfs2_rgrp_repolish_clones(struct gfs2_rgrpd *rgd)
-
- struct gfs2_alloc *gfs2_alloc_get(struct gfs2_inode *ip)
- {
-- struct gfs2_alloc *al = &ip->i_alloc;
+-#define MFP_ADDR_END { MFP_PIN_INVALID, 0 }
-
-- /* FIXME: Should assert that the correct locks are held here... */
-- memset(al, 0, sizeof(*al));
-- return al;
-+ BUG_ON(ip->i_alloc != NULL);
-+ ip->i_alloc = kzalloc(sizeof(struct gfs2_alloc), GFP_KERNEL);
-+ return ip->i_alloc;
- }
-
- /**
-@@ -1059,26 +1059,34 @@ static struct inode *get_local_rgrp(struct gfs2_inode *ip, u64 *last_unlinked)
- struct inode *inode = NULL;
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
- struct gfs2_rgrpd *rgd, *begin = NULL;
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- int flags = LM_FLAG_TRY;
- int skipped = 0;
- int loops = 0;
-- int error;
-+ int error, rg_locked;
-
- /* Try recently successful rgrps */
-
- rgd = recent_rgrp_first(sdp, ip->i_last_rg_alloc);
-
- while (rgd) {
-- error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
-- LM_FLAG_TRY, &al->al_rgd_gh);
-+ rg_locked = 0;
-+
-+ if (gfs2_glock_is_locked_by_me(rgd->rd_gl)) {
-+ rg_locked = 1;
-+ error = 0;
-+ } else {
-+ error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
-+ LM_FLAG_TRY, &al->al_rgd_gh);
-+ }
- switch (error) {
- case 0:
- if (try_rgrp_fit(rgd, al))
- goto out;
- if (rgd->rd_flags & GFS2_RDF_CHECK)
- inode = try_rgrp_unlink(rgd, last_unlinked);
-- gfs2_glock_dq_uninit(&al->al_rgd_gh);
-+ if (!rg_locked)
-+ gfs2_glock_dq_uninit(&al->al_rgd_gh);
- if (inode)
- return inode;
- rgd = recent_rgrp_next(rgd, 1);
-@@ -1098,15 +1106,23 @@ static struct inode *get_local_rgrp(struct gfs2_inode *ip, u64 *last_unlinked)
- begin = rgd = forward_rgrp_get(sdp);
-
- for (;;) {
-- error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, flags,
-- &al->al_rgd_gh);
-+ rg_locked = 0;
-+
-+ if (gfs2_glock_is_locked_by_me(rgd->rd_gl)) {
-+ rg_locked = 1;
-+ error = 0;
-+ } else {
-+ error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, flags,
-+ &al->al_rgd_gh);
-+ }
- switch (error) {
- case 0:
- if (try_rgrp_fit(rgd, al))
- goto out;
- if (rgd->rd_flags & GFS2_RDF_CHECK)
- inode = try_rgrp_unlink(rgd, last_unlinked);
-- gfs2_glock_dq_uninit(&al->al_rgd_gh);
-+ if (!rg_locked)
-+ gfs2_glock_dq_uninit(&al->al_rgd_gh);
- if (inode)
- return inode;
- break;
-@@ -1158,7 +1174,7 @@ out:
- int gfs2_inplace_reserve_i(struct gfs2_inode *ip, char *file, unsigned int line)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct inode *inode;
- int error = 0;
- u64 last_unlinked = NO_BLOCK;
-@@ -1204,7 +1220,7 @@ try_again:
- void gfs2_inplace_release(struct gfs2_inode *ip)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
-
- if (gfs2_assert_warn(sdp, al->al_alloced <= al->al_requested) == -1)
- fs_warn(sdp, "al_alloced = %u, al_requested = %u "
-@@ -1213,7 +1229,8 @@ void gfs2_inplace_release(struct gfs2_inode *ip)
- al->al_line);
-
- al->al_rgd = NULL;
-- gfs2_glock_dq_uninit(&al->al_rgd_gh);
-+ if (al->al_rgd_gh.gh_gl)
-+ gfs2_glock_dq_uninit(&al->al_rgd_gh);
- if (ip != GFS2_I(sdp->sd_rindex))
- gfs2_glock_dq_uninit(&al->al_ri_gh);
- }
-@@ -1301,11 +1318,10 @@ static u32 rgblk_search(struct gfs2_rgrpd *rgd, u32 goal,
- /* The GFS2_BLKST_UNLINKED state doesn't apply to the clone
- bitmaps, so we must search the originals for that. */
- if (old_state != GFS2_BLKST_UNLINKED && bi->bi_clone)
-- blk = gfs2_bitfit(rgd, bi->bi_clone + bi->bi_offset,
-+ blk = gfs2_bitfit(bi->bi_clone + bi->bi_offset,
- bi->bi_len, goal, old_state);
- else
-- blk = gfs2_bitfit(rgd,
-- bi->bi_bh->b_data + bi->bi_offset,
-+ blk = gfs2_bitfit(bi->bi_bh->b_data + bi->bi_offset,
- bi->bi_len, goal, old_state);
- if (blk != BFITNOENT)
- break;
-@@ -1394,7 +1410,7 @@ static struct gfs2_rgrpd *rgblk_free(struct gfs2_sbd *sdp, u64 bstart,
- u64 gfs2_alloc_data(struct gfs2_inode *ip)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_rgrpd *rgd = al->al_rgd;
- u32 goal, blk;
- u64 block;
-@@ -1439,7 +1455,7 @@ u64 gfs2_alloc_data(struct gfs2_inode *ip)
- u64 gfs2_alloc_meta(struct gfs2_inode *ip)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
-- struct gfs2_alloc *al = &ip->i_alloc;
-+ struct gfs2_alloc *al = ip->i_alloc;
- struct gfs2_rgrpd *rgd = al->al_rgd;
- u32 goal, blk;
- u64 block;
-@@ -1485,7 +1501,7 @@ u64 gfs2_alloc_meta(struct gfs2_inode *ip)
- u64 gfs2_alloc_di(struct gfs2_inode *dip, u64 *generation)
- {
- struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
-- struct gfs2_alloc *al = &dip->i_alloc;
-+ struct gfs2_alloc *al = dip->i_alloc;
- struct gfs2_rgrpd *rgd = al->al_rgd;
- u32 blk;
- u64 block;
-diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
-index b4c6adf..149bb16 100644
---- a/fs/gfs2/rgrp.h
-+++ b/fs/gfs2/rgrp.h
-@@ -32,7 +32,9 @@ void gfs2_rgrp_repolish_clones(struct gfs2_rgrpd *rgd);
- struct gfs2_alloc *gfs2_alloc_get(struct gfs2_inode *ip);
- static inline void gfs2_alloc_put(struct gfs2_inode *ip)
- {
-- return; /* So we can see where ip->i_alloc is used */
-+ BUG_ON(ip->i_alloc == NULL);
-+ kfree(ip->i_alloc);
-+ ip->i_alloc = NULL;
- }
-
- int gfs2_inplace_reserve_i(struct gfs2_inode *ip,
-diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
-index dd3e737..ef0562c 100644
---- a/fs/gfs2/super.c
-+++ b/fs/gfs2/super.c
-@@ -1,6 +1,6 @@
- /*
- * Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-- * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
-+ * Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
- *
- * This copyrighted material is made available to anyone wishing to use,
- * modify, copy, or redistribute it subject to the terms and conditions
-@@ -51,13 +51,9 @@ void gfs2_tune_init(struct gfs2_tune *gt)
- {
- spin_lock_init(>->gt_spin);
-
-- gt->gt_ilimit = 100;
-- gt->gt_ilimit_tries = 3;
-- gt->gt_ilimit_min = 1;
- gt->gt_demote_secs = 300;
- gt->gt_incore_log_blocks = 1024;
- gt->gt_log_flush_secs = 60;
-- gt->gt_jindex_refresh_secs = 60;
- gt->gt_recoverd_secs = 60;
- gt->gt_logd_secs = 1;
- gt->gt_quotad_secs = 5;
-@@ -71,10 +67,8 @@ void gfs2_tune_init(struct gfs2_tune *gt)
- gt->gt_new_files_jdata = 0;
- gt->gt_new_files_directio = 0;
- gt->gt_max_readahead = 1 << 18;
-- gt->gt_lockdump_size = 131072;
- gt->gt_stall_secs = 600;
- gt->gt_complain_secs = 10;
-- gt->gt_reclaim_limit = 5000;
- gt->gt_statfs_quantum = 30;
- gt->gt_statfs_slow = 0;
- }
-@@ -393,6 +387,7 @@ int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
- if (!jd)
- break;
-
-+ INIT_LIST_HEAD(&jd->extent_list);
- jd->jd_inode = gfs2_lookupi(sdp->sd_jindex, &name, 1, NULL);
- if (!jd->jd_inode || IS_ERR(jd->jd_inode)) {
- if (!jd->jd_inode)
-@@ -422,8 +417,9 @@ int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
-
- void gfs2_jindex_free(struct gfs2_sbd *sdp)
- {
-- struct list_head list;
-+ struct list_head list, *head;
- struct gfs2_jdesc *jd;
-+ struct gfs2_journal_extent *jext;
-
- spin_lock(&sdp->sd_jindex_spin);
- list_add(&list, &sdp->sd_jindex_list);
-@@ -433,6 +429,14 @@ void gfs2_jindex_free(struct gfs2_sbd *sdp)
-
- while (!list_empty(&list)) {
- jd = list_entry(list.next, struct gfs2_jdesc, jd_list);
-+ head = &jd->extent_list;
-+ while (!list_empty(head)) {
-+ jext = list_entry(head->next,
-+ struct gfs2_journal_extent,
-+ extent_list);
-+ list_del(&jext->extent_list);
-+ kfree(jext);
-+ }
- list_del(&jd->jd_list);
- iput(jd->jd_inode);
- kfree(jd);
-@@ -543,7 +547,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
- if (error)
- return error;
-
-- gfs2_meta_cache_flush(ip);
- j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
-
- error = gfs2_find_jhead(sdp->sd_jdesc, &head);
-@@ -686,9 +689,7 @@ void gfs2_statfs_change(struct gfs2_sbd *sdp, s64 total, s64 free,
- if (error)
- return;
-
-- mutex_lock(&sdp->sd_statfs_mutex);
- gfs2_trans_add_bh(l_ip->i_gl, l_bh, 1);
-- mutex_unlock(&sdp->sd_statfs_mutex);
-
- spin_lock(&sdp->sd_statfs_spin);
- l_sc->sc_total += total;
-@@ -736,9 +737,7 @@ int gfs2_statfs_sync(struct gfs2_sbd *sdp)
- if (error)
- goto out_bh2;
-
-- mutex_lock(&sdp->sd_statfs_mutex);
- gfs2_trans_add_bh(l_ip->i_gl, l_bh, 1);
-- mutex_unlock(&sdp->sd_statfs_mutex);
-
- spin_lock(&sdp->sd_statfs_spin);
- m_sc->sc_total += l_sc->sc_total;
-diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
-index 06e0b77..eaa3b7b 100644
---- a/fs/gfs2/sys.c
-+++ b/fs/gfs2/sys.c
-@@ -32,7 +32,8 @@ spinlock_t gfs2_sys_margs_lock;
-
- static ssize_t id_show(struct gfs2_sbd *sdp, char *buf)
- {
-- return snprintf(buf, PAGE_SIZE, "%s\n", sdp->sd_vfs->s_id);
-+ return snprintf(buf, PAGE_SIZE, "%u:%u\n",
-+ MAJOR(sdp->sd_vfs->s_dev), MINOR(sdp->sd_vfs->s_dev));
- }
-
- static ssize_t fsname_show(struct gfs2_sbd *sdp, char *buf)
-@@ -221,9 +222,7 @@ static struct kobj_type gfs2_ktype = {
- .sysfs_ops = &gfs2_attr_ops,
- };
-
--static struct kset gfs2_kset = {
-- .ktype = &gfs2_ktype,
+-struct pxa3xx_mfp_pin {
+- unsigned long mfpr_off; /* MFPRxx register offset */
+- unsigned long mfpr_val; /* MFPRxx register value */
-};
-+static struct kset *gfs2_kset;
-
- /*
- * display struct lm_lockstruct fields
-@@ -427,13 +426,11 @@ TUNE_ATTR_2(name, name##_store)
- TUNE_ATTR(demote_secs, 0);
- TUNE_ATTR(incore_log_blocks, 0);
- TUNE_ATTR(log_flush_secs, 0);
--TUNE_ATTR(jindex_refresh_secs, 0);
- TUNE_ATTR(quota_warn_period, 0);
- TUNE_ATTR(quota_quantum, 0);
- TUNE_ATTR(atime_quantum, 0);
- TUNE_ATTR(max_readahead, 0);
- TUNE_ATTR(complain_secs, 0);
--TUNE_ATTR(reclaim_limit, 0);
- TUNE_ATTR(statfs_slow, 0);
- TUNE_ATTR(new_files_jdata, 0);
- TUNE_ATTR(new_files_directio, 0);
-@@ -450,13 +447,11 @@ static struct attribute *tune_attrs[] = {
- &tune_attr_demote_secs.attr,
- &tune_attr_incore_log_blocks.attr,
- &tune_attr_log_flush_secs.attr,
-- &tune_attr_jindex_refresh_secs.attr,
- &tune_attr_quota_warn_period.attr,
- &tune_attr_quota_quantum.attr,
- &tune_attr_atime_quantum.attr,
- &tune_attr_max_readahead.attr,
- &tune_attr_complain_secs.attr,
-- &tune_attr_reclaim_limit.attr,
- &tune_attr_statfs_slow.attr,
- &tune_attr_quota_simul_sync.attr,
- &tune_attr_quota_cache_secs.attr,
-@@ -495,14 +490,9 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
- {
- int error;
-
-- sdp->sd_kobj.kset = &gfs2_kset;
-- sdp->sd_kobj.ktype = &gfs2_ktype;
-
-- error = kobject_set_name(&sdp->sd_kobj, "%s", sdp->sd_table_name);
-- if (error)
-- goto fail;
+-/*
+- * pxa3xx_mfp_read()/pxa3xx_mfp_write() - for direct read/write access
+- * to the MFPR register
+- */
+-unsigned long pxa3xx_mfp_read(int mfp);
+-void pxa3xx_mfp_write(int mfp, unsigned long mfpr_val);
-
-- error = kobject_register(&sdp->sd_kobj);
-+ sdp->sd_kobj.kset = gfs2_kset;
-+ error = kobject_init_and_add(&sdp->sd_kobj, &gfs2_ktype, NULL,
-+ "%s", sdp->sd_table_name);
- if (error)
- goto fail;
-
-@@ -522,6 +512,7 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
- if (error)
- goto fail_args;
-
-+ kobject_uevent(&sdp->sd_kobj, KOBJ_ADD);
- return 0;
-
- fail_args:
-@@ -531,7 +522,7 @@ fail_counters:
- fail_lockstruct:
- sysfs_remove_group(&sdp->sd_kobj, &lockstruct_group);
- fail_reg:
-- kobject_unregister(&sdp->sd_kobj);
-+ kobject_put(&sdp->sd_kobj);
- fail:
- fs_err(sdp, "error %d adding sysfs files", error);
- return error;
-@@ -543,21 +534,22 @@ void gfs2_sys_fs_del(struct gfs2_sbd *sdp)
- sysfs_remove_group(&sdp->sd_kobj, &args_group);
- sysfs_remove_group(&sdp->sd_kobj, &counters_group);
- sysfs_remove_group(&sdp->sd_kobj, &lockstruct_group);
-- kobject_unregister(&sdp->sd_kobj);
-+ kobject_put(&sdp->sd_kobj);
- }
-
- int gfs2_sys_init(void)
- {
- gfs2_sys_margs = NULL;
- spin_lock_init(&gfs2_sys_margs_lock);
-- kobject_set_name(&gfs2_kset.kobj, "gfs2");
-- kobj_set_kset_s(&gfs2_kset, fs_subsys);
-- return kset_register(&gfs2_kset);
-+ gfs2_kset = kset_create_and_add("gfs2", NULL, fs_kobj);
-+ if (!gfs2_kset)
-+ return -ENOMEM;
-+ return 0;
- }
-
- void gfs2_sys_uninit(void)
- {
- kfree(gfs2_sys_margs);
-- kset_unregister(&gfs2_kset);
-+ kset_unregister(gfs2_kset);
- }
-
-diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
-index 717983e..73e5d92 100644
---- a/fs/gfs2/trans.c
-+++ b/fs/gfs2/trans.c
-@@ -114,11 +114,6 @@ void gfs2_trans_end(struct gfs2_sbd *sdp)
- gfs2_log_flush(sdp, NULL);
- }
-
--void gfs2_trans_add_gl(struct gfs2_glock *gl)
--{
-- lops_add(gl->gl_sbd, &gl->gl_le);
--}
+-/*
+- * pxa3xx_mfp_set_afds - set MFP alternate function and drive strength
+- * pxa3xx_mfp_set_rdh - set MFP release delay hold on/off
+- * pxa3xx_mfp_set_lpm - set MFP low power mode state
+- * pxa3xx_mfp_set_edge - set MFP edge detection in low power mode
+- *
+- * use these functions to override/change the default configuration
+- * done by pxa3xx_mfp_set_config(s)
+- */
+-void pxa3xx_mfp_set_afds(int mfp, int af, int ds);
+-void pxa3xx_mfp_set_rdh(int mfp, int rdh);
+-void pxa3xx_mfp_set_lpm(int mfp, int lpm);
+-void pxa3xx_mfp_set_edge(int mfp, int edge);
-
- /**
- * gfs2_trans_add_bh - Add a to-be-modified buffer to the current transaction
- * @gl: the glock the buffer belongs to
-diff --git a/fs/gfs2/trans.h b/fs/gfs2/trans.h
-index 043d5f4..e826f0d 100644
---- a/fs/gfs2/trans.h
-+++ b/fs/gfs2/trans.h
-@@ -30,7 +30,6 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks,
-
- void gfs2_trans_end(struct gfs2_sbd *sdp);
-
--void gfs2_trans_add_gl(struct gfs2_glock *gl);
- void gfs2_trans_add_bh(struct gfs2_glock *gl, struct buffer_head *bh, int meta);
- void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
- void gfs2_trans_add_unrevoke(struct gfs2_sbd *sdp, u64 blkno);
-diff --git a/fs/inode.c b/fs/inode.c
-index ed35383..276ffd6 100644
---- a/fs/inode.c
-+++ b/fs/inode.c
-@@ -1276,6 +1276,11 @@ void file_update_time(struct file *file)
- sync_it = 1;
- }
-
-+ if (IS_I_VERSION(inode)) {
-+ inode_inc_iversion(inode);
-+ sync_it = 1;
-+ }
-+
- if (sync_it)
- mark_inode_dirty_sync(inode);
- }
-diff --git a/fs/ioprio.c b/fs/ioprio.c
-index e4e01bc..c4a1c3c 100644
---- a/fs/ioprio.c
-+++ b/fs/ioprio.c
-@@ -41,18 +41,28 @@ static int set_task_ioprio(struct task_struct *task, int ioprio)
- return err;
-
- task_lock(task);
-+ do {
-+ ioc = task->io_context;
-+ /* see wmb() in current_io_context() */
-+ smp_read_barrier_depends();
-+ if (ioc)
-+ break;
-
-- task->ioprio = ioprio;
+-/*
+- * pxa3xx_mfp_config - configure the MFPR registers
+- *
+- * used by board specific initialization code
+- */
+-void pxa3xx_mfp_config(mfp_cfg_t *mfp_cfgs, int num);
-
-- ioc = task->io_context;
-- /* see wmb() in current_io_context() */
-- smp_read_barrier_depends();
-+ ioc = alloc_io_context(GFP_ATOMIC, -1);
-+ if (!ioc) {
-+ err = -ENOMEM;
-+ break;
-+ }
-+ task->io_context = ioc;
-+ } while (1);
-
-- if (ioc)
-+ if (!err) {
-+ ioc->ioprio = ioprio;
- ioc->ioprio_changed = 1;
-+ }
-
- task_unlock(task);
-- return 0;
-+ return err;
- }
-
- asmlinkage long sys_ioprio_set(int which, int who, int ioprio)
-@@ -75,8 +85,6 @@ asmlinkage long sys_ioprio_set(int which, int who, int ioprio)
-
- break;
- case IOPRIO_CLASS_IDLE:
-- if (!capable(CAP_SYS_ADMIN))
-- return -EPERM;
- break;
- case IOPRIO_CLASS_NONE:
- if (data)
-@@ -148,7 +156,9 @@ static int get_task_ioprio(struct task_struct *p)
- ret = security_task_getioprio(p);
- if (ret)
- goto out;
-- ret = p->ioprio;
-+ ret = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, IOPRIO_NORM);
-+ if (p->io_context)
-+ ret = p->io_context->ioprio;
- out:
- return ret;
- }
-diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
-index 3fccde7..1b7f282 100644
---- a/fs/jbd2/checkpoint.c
-+++ b/fs/jbd2/checkpoint.c
-@@ -232,7 +232,8 @@ __flush_batch(journal_t *journal, struct buffer_head **bhs, int *batch_count)
- * Called under jbd_lock_bh_state(jh2bh(jh)), and drops it
- */
- static int __process_buffer(journal_t *journal, struct journal_head *jh,
-- struct buffer_head **bhs, int *batch_count)
-+ struct buffer_head **bhs, int *batch_count,
-+ transaction_t *transaction)
- {
- struct buffer_head *bh = jh2bh(jh);
- int ret = 0;
-@@ -250,6 +251,7 @@ static int __process_buffer(journal_t *journal, struct journal_head *jh,
- transaction_t *t = jh->b_transaction;
- tid_t tid = t->t_tid;
-
-+ transaction->t_chp_stats.cs_forced_to_close++;
- spin_unlock(&journal->j_list_lock);
- jbd_unlock_bh_state(bh);
- jbd2_log_start_commit(journal, tid);
-@@ -279,6 +281,7 @@ static int __process_buffer(journal_t *journal, struct journal_head *jh,
- bhs[*batch_count] = bh;
- __buffer_relink_io(jh);
- jbd_unlock_bh_state(bh);
-+ transaction->t_chp_stats.cs_written++;
- (*batch_count)++;
- if (*batch_count == NR_BATCH) {
- spin_unlock(&journal->j_list_lock);
-@@ -322,6 +325,8 @@ int jbd2_log_do_checkpoint(journal_t *journal)
- if (!journal->j_checkpoint_transactions)
- goto out;
- transaction = journal->j_checkpoint_transactions;
-+ if (transaction->t_chp_stats.cs_chp_time == 0)
-+ transaction->t_chp_stats.cs_chp_time = jiffies;
- this_tid = transaction->t_tid;
- restart:
- /*
-@@ -346,7 +351,8 @@ restart:
- retry = 1;
- break;
- }
-- retry = __process_buffer(journal, jh, bhs,&batch_count);
-+ retry = __process_buffer(journal, jh, bhs, &batch_count,
-+ transaction);
- if (!retry && lock_need_resched(&journal->j_list_lock)){
- spin_unlock(&journal->j_list_lock);
- retry = 1;
-@@ -602,15 +608,15 @@ int __jbd2_journal_remove_checkpoint(struct journal_head *jh)
-
- /*
- * There is one special case to worry about: if we have just pulled the
-- * buffer off a committing transaction's forget list, then even if the
-- * checkpoint list is empty, the transaction obviously cannot be
-- * dropped!
-+ * buffer off a running or committing transaction's checkpoint list,
-+ * then even if the checkpoint list is empty, the transaction obviously
-+ * cannot be dropped!
- *
-- * The locking here around j_committing_transaction is a bit sleazy.
-+ * The locking here around t_state is a bit sleazy.
- * See the comment at the end of jbd2_journal_commit_transaction().
- */
-- if (transaction == journal->j_committing_transaction) {
-- JBUFFER_TRACE(jh, "belongs to committing transaction");
-+ if (transaction->t_state != T_FINISHED) {
-+ JBUFFER_TRACE(jh, "belongs to running/committing transaction");
- goto out;
- }
-
-diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
-index 6986f33..da8d0eb 100644
---- a/fs/jbd2/commit.c
-+++ b/fs/jbd2/commit.c
-@@ -20,6 +20,8 @@
- #include <linux/slab.h>
- #include <linux/mm.h>
- #include <linux/pagemap.h>
-+#include <linux/jiffies.h>
-+#include <linux/crc32.h>
-
- /*
- * Default IO end handler for temporary BJ_IO buffer_heads.
-@@ -92,19 +94,23 @@ static int inverted_lock(journal_t *journal, struct buffer_head *bh)
- return 1;
- }
-
--/* Done it all: now write the commit record. We should have
-+/*
-+ * Done it all: now submit the commit record. We should have
- * cleaned up our previous buffers by now, so if we are in abort
- * mode we can now just skip the rest of the journal write
- * entirely.
- *
- * Returns 1 if the journal needs to be aborted or 0 on success
- */
--static int journal_write_commit_record(journal_t *journal,
-- transaction_t *commit_transaction)
-+static int journal_submit_commit_record(journal_t *journal,
-+ transaction_t *commit_transaction,
-+ struct buffer_head **cbh,
-+ __u32 crc32_sum)
- {
- struct journal_head *descriptor;
-+ struct commit_header *tmp;
- struct buffer_head *bh;
-- int i, ret;
-+ int ret;
- int barrier_done = 0;
-
- if (is_journal_aborted(journal))
-@@ -116,21 +122,33 @@ static int journal_write_commit_record(journal_t *journal,
-
- bh = jh2bh(descriptor);
-
-- /* AKPM: buglet - add `i' to tmp! */
-- for (i = 0; i < bh->b_size; i += 512) {
-- journal_header_t *tmp = (journal_header_t*)bh->b_data;
-- tmp->h_magic = cpu_to_be32(JBD2_MAGIC_NUMBER);
-- tmp->h_blocktype = cpu_to_be32(JBD2_COMMIT_BLOCK);
-- tmp->h_sequence = cpu_to_be32(commit_transaction->t_tid);
-+ tmp = (struct commit_header *)bh->b_data;
-+ tmp->h_magic = cpu_to_be32(JBD2_MAGIC_NUMBER);
-+ tmp->h_blocktype = cpu_to_be32(JBD2_COMMIT_BLOCK);
-+ tmp->h_sequence = cpu_to_be32(commit_transaction->t_tid);
-+
-+ if (JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM)) {
-+ tmp->h_chksum_type = JBD2_CRC32_CHKSUM;
-+ tmp->h_chksum_size = JBD2_CRC32_CHKSUM_SIZE;
-+ tmp->h_chksum[0] = cpu_to_be32(crc32_sum);
- }
-
-- JBUFFER_TRACE(descriptor, "write commit block");
-+ JBUFFER_TRACE(descriptor, "submit commit block");
-+ lock_buffer(bh);
-+
- set_buffer_dirty(bh);
-- if (journal->j_flags & JBD2_BARRIER) {
-+ set_buffer_uptodate(bh);
-+ bh->b_end_io = journal_end_buffer_io_sync;
-+
-+ if (journal->j_flags & JBD2_BARRIER &&
-+ !JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
- set_buffer_ordered(bh);
- barrier_done = 1;
- }
-- ret = sync_dirty_buffer(bh);
-+ ret = submit_bh(WRITE, bh);
-+
- /* is it possible for another commit to fail at roughly
- * the same time as this one? If so, we don't want to
- * trust the barrier flag in the super, but instead want
-@@ -151,14 +169,72 @@ static int journal_write_commit_record(journal_t *journal,
- clear_buffer_ordered(bh);
- set_buffer_uptodate(bh);
- set_buffer_dirty(bh);
-- ret = sync_dirty_buffer(bh);
-+ ret = submit_bh(WRITE, bh);
- }
-- put_bh(bh); /* One for getblk() */
-- jbd2_journal_put_journal_head(descriptor);
-+ *cbh = bh;
-+ return ret;
-+}
-+
-+/*
-+ * This function along with journal_submit_commit_record
-+ * allows to write the commit record asynchronously.
-+ */
-+static int journal_wait_on_commit_record(struct buffer_head *bh)
-+{
-+ int ret = 0;
-+
-+ clear_buffer_dirty(bh);
-+ wait_on_buffer(bh);
-+
-+ if (unlikely(!buffer_uptodate(bh)))
-+ ret = -EIO;
-+ put_bh(bh); /* One for getblk() */
-+ jbd2_journal_put_journal_head(bh2jh(bh));
+-/*
+- * pxa3xx_mfp_init_addr() - initialize the mapping between mfp pin
+- * index and MFPR register offset
+- *
+- * used by processor specific code
+- */
+-void __init pxa3xx_mfp_init_addr(struct pxa3xx_mfp_addr_map *);
+-void __init pxa3xx_init_mfp(void);
++ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_DS_MASK | MFP_LPM_STATE_MASK)) |\
++ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_##drv | MFP_LPM_##lpm))
-- return (ret == -EIO);
-+ return ret;
- }
+ #endif /* __ASM_ARCH_MFP_H */
+diff --git a/include/asm-arm/arch-pxa/mmc.h b/include/asm-arm/arch-pxa/mmc.h
+index ef4f570..6d1304c 100644
+--- a/include/asm-arm/arch-pxa/mmc.h
++++ b/include/asm-arm/arch-pxa/mmc.h
+@@ -17,5 +17,7 @@ struct pxamci_platform_data {
+ };
+
+ extern void pxa_set_mci_info(struct pxamci_platform_data *info);
++extern void pxa3xx_set_mci2_info(struct pxamci_platform_data *info);
++extern void pxa3xx_set_mci3_info(struct pxamci_platform_data *info);
+ #endif
+diff --git a/include/asm-arm/arch-pxa/pcm027.h b/include/asm-arm/arch-pxa/pcm027.h
+new file mode 100644
+index 0000000..7beae14
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/pcm027.h
+@@ -0,0 +1,75 @@
+/*
-+ * Wait for all submitted IO to complete.
++ * linux/include/asm-arm/arch-pxa/pcm027.h
++ *
++ * (c) 2003 Phytec Messtechnik GmbH <armlinux at phytec.de>
++ * (c) 2007 Juergen Beisert <j.beisert at pengutronix.de>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
-+static int journal_wait_on_locked_list(journal_t *journal,
-+ transaction_t *commit_transaction)
-+{
-+ int ret = 0;
-+ struct journal_head *jh;
+
-+ while (commit_transaction->t_locked_list) {
-+ struct buffer_head *bh;
++/*
++ * Definitions of CPU card resources only
++ */
+
-+ jh = commit_transaction->t_locked_list->b_tprev;
-+ bh = jh2bh(jh);
-+ get_bh(bh);
-+ if (buffer_locked(bh)) {
-+ spin_unlock(&journal->j_list_lock);
-+ wait_on_buffer(bh);
-+ if (unlikely(!buffer_uptodate(bh)))
-+ ret = -EIO;
-+ spin_lock(&journal->j_list_lock);
-+ }
-+ if (!inverted_lock(journal, bh)) {
-+ put_bh(bh);
-+ spin_lock(&journal->j_list_lock);
-+ continue;
-+ }
-+ if (buffer_jbd(bh) && jh->b_jlist == BJ_Locked) {
-+ __jbd2_journal_unfile_buffer(jh);
-+ jbd_unlock_bh_state(bh);
-+ jbd2_journal_remove_journal_head(bh);
-+ put_bh(bh);
-+ } else {
-+ jbd_unlock_bh_state(bh);
-+ }
-+ put_bh(bh);
-+ cond_resched_lock(&journal->j_list_lock);
-+ }
-+ return ret;
-+ }
++/* I2C RTC */
++#define PCM027_RTC_IRQ_GPIO 0
++#define PCM027_RTC_IRQ IRQ_GPIO(PCM027_RTC_IRQ_GPIO)
++#define PCM027_RTC_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
++#define ADR_PCM027_RTC 0x51 /* I2C address */
+
- static void journal_do_submit_data(struct buffer_head **wbuf, int bufs)
- {
- int i;
-@@ -274,7 +350,21 @@ write_out_data:
- journal_do_submit_data(wbuf, bufs);
- }
-
--static inline void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
-+static __u32 jbd2_checksum_data(__u32 crc32_sum, struct buffer_head *bh)
-+{
-+ struct page *page = bh->b_page;
-+ char *addr;
-+ __u32 checksum;
++/* I2C EEPROM */
++#define ADR_PCM027_EEPROM 0x54 /* I2C address */
+
-+ addr = kmap_atomic(page, KM_USER0);
-+ checksum = crc32_be(crc32_sum,
-+ (void *)(addr + offset_in_page(bh->b_data)), bh->b_size);
-+ kunmap_atomic(addr, KM_USER0);
++/* Ethernet chip (SMSC91C111) */
++#define PCM027_ETH_IRQ_GPIO 52
++#define PCM027_ETH_IRQ IRQ_GPIO(PCM027_ETH_IRQ_GPIO)
++#define PCM027_ETH_IRQ_EDGE IRQ_TYPE_EDGE_RISING
++#define PCM027_ETH_PHYS PXA_CS5_PHYS
++#define PCM027_ETH_SIZE (1*1024*1024)
+
-+ return checksum;
-+}
++/* CAN controller SJA1000 (unsupported yet) */
++#define PCM027_CAN_IRQ_GPIO 114
++#define PCM027_CAN_IRQ IRQ_GPIO(PCM027_CAN_IRQ_GPIO)
++#define PCM027_CAN_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
++#define PCM027_CAN_PHYS 0x22000000
++#define PCM027_CAN_SIZE 0x100
+
-+static void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
- unsigned long long block)
- {
- tag->t_blocknr = cpu_to_be32(block & (u32)~0);
-@@ -290,6 +380,7 @@ static inline void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
- */
- void jbd2_journal_commit_transaction(journal_t *journal)
- {
-+ struct transaction_stats_s stats;
- transaction_t *commit_transaction;
- struct journal_head *jh, *new_jh, *descriptor;
- struct buffer_head **wbuf = journal->j_wbuf;
-@@ -305,6 +396,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- int tag_flag;
- int i;
- int tag_bytes = journal_tag_bytes(journal);
-+ struct buffer_head *cbh = NULL; /* For transactional checksums */
-+ __u32 crc32_sum = ~0;
-
- /*
- * First job: lock down the current transaction and wait for
-@@ -337,6 +430,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- spin_lock(&journal->j_state_lock);
- commit_transaction->t_state = T_LOCKED;
-
-+ stats.u.run.rs_wait = commit_transaction->t_max_wait;
-+ stats.u.run.rs_locked = jiffies;
-+ stats.u.run.rs_running = jbd2_time_diff(commit_transaction->t_start,
-+ stats.u.run.rs_locked);
++/* SPI GPIO expander (unsupported yet) */
++#define PCM027_EGPIO_IRQ_GPIO 27
++#define PCM027_EGPIO_IRQ IRQ_GPIO(PCM027_EGPIO_IRQ_GPIO)
++#define PCM027_EGPIO_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
++#define PCM027_EGPIO_CS 24
++/*
++ * TODO: Switch this pin from dedicated usage to GPIO if
++ * more than the MAX7301 device is connected to this SPI bus
++ */
++#define PCM027_EGPIO_CS_MODE GPIO24_SFRM_MD
+
- spin_lock(&commit_transaction->t_handle_lock);
- while (commit_transaction->t_updates) {
- DEFINE_WAIT(wait);
-@@ -407,6 +505,10 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- */
- jbd2_journal_switch_revoke_table(journal);
-
-+ stats.u.run.rs_flushing = jiffies;
-+ stats.u.run.rs_locked = jbd2_time_diff(stats.u.run.rs_locked,
-+ stats.u.run.rs_flushing);
++/* Flash memory */
++#define PCM027_FLASH_PHYS 0x00000000
++#define PCM027_FLASH_SIZE 0x02000000
+
- commit_transaction->t_state = T_FLUSH;
- journal->j_committing_transaction = commit_transaction;
- journal->j_running_transaction = NULL;
-@@ -440,38 +542,15 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- journal_submit_data_buffers(journal, commit_transaction);
-
- /*
-- * Wait for all previously submitted IO to complete.
-+ * Wait for all previously submitted IO to complete if commit
-+ * record is to be written synchronously.
- */
- spin_lock(&journal->j_list_lock);
-- while (commit_transaction->t_locked_list) {
-- struct buffer_head *bh;
-+ if (!JBD2_HAS_INCOMPAT_FEATURE(journal,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT))
-+ err = journal_wait_on_locked_list(journal,
-+ commit_transaction);
-
-- jh = commit_transaction->t_locked_list->b_tprev;
-- bh = jh2bh(jh);
-- get_bh(bh);
-- if (buffer_locked(bh)) {
-- spin_unlock(&journal->j_list_lock);
-- wait_on_buffer(bh);
-- if (unlikely(!buffer_uptodate(bh)))
-- err = -EIO;
-- spin_lock(&journal->j_list_lock);
-- }
-- if (!inverted_lock(journal, bh)) {
-- put_bh(bh);
-- spin_lock(&journal->j_list_lock);
-- continue;
-- }
-- if (buffer_jbd(bh) && jh->b_jlist == BJ_Locked) {
-- __jbd2_journal_unfile_buffer(jh);
-- jbd_unlock_bh_state(bh);
-- jbd2_journal_remove_journal_head(bh);
-- put_bh(bh);
-- } else {
-- jbd_unlock_bh_state(bh);
-- }
-- put_bh(bh);
-- cond_resched_lock(&journal->j_list_lock);
-- }
- spin_unlock(&journal->j_list_lock);
-
- if (err)
-@@ -498,6 +577,12 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- */
- commit_transaction->t_state = T_COMMIT;
-
-+ stats.u.run.rs_logging = jiffies;
-+ stats.u.run.rs_flushing = jbd2_time_diff(stats.u.run.rs_flushing,
-+ stats.u.run.rs_logging);
-+ stats.u.run.rs_blocks = commit_transaction->t_outstanding_credits;
-+ stats.u.run.rs_blocks_logged = 0;
++/* onboard LEDs connected to GPIO */
++#define PCM027_LED_CPU 90
++#define PCM027_LED_HEARD_BEAT 91
+
- descriptor = NULL;
- bufs = 0;
- while (commit_transaction->t_buffers) {
-@@ -639,6 +724,15 @@ void jbd2_journal_commit_transaction(journal_t *journal)
- start_journal_io:
- for (i = 0; i < bufs; i++) {
- struct buffer_head *bh = wbuf[i];
-+ /*
-+ * Compute checksum.
-+ */
-+ if (JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM)) {
-+ crc32_sum =
-+ jbd2_checksum_data(crc32_sum, bh);
-+ }
++/*
++ * This CPU module needs a baseboard to work. After basic initializing
++ * its own devices, it calls baseboard's init function.
++ * TODO: Add your own baseboard init function and call it from
++ * inside pcm027_init(). This example here is for the development board.
++ * Refer to pcm990-baseboard.c
++ */
++extern void pcm990_baseboard_init(void);
+diff --git a/include/asm-arm/arch-pxa/pcm990_baseboard.h b/include/asm-arm/arch-pxa/pcm990_baseboard.h
+new file mode 100644
+index 0000000..b699d0d
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/pcm990_baseboard.h
+@@ -0,0 +1,275 @@
++/*
++ * include/asm-arm/arch-pxa/pcm990_baseboard.h
++ *
++ * (c) 2003 Phytec Messtechnik GmbH <armlinux at phytec.de>
++ * (c) 2007 Juergen Beisert <j.beisert at pengutronix.de>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ */
+
- lock_buffer(bh);
- clear_buffer_dirty(bh);
- set_buffer_uptodate(bh);
-@@ -646,6 +740,7 @@ start_journal_io:
- submit_bh(WRITE, bh);
- }
- cond_resched();
-+ stats.u.run.rs_blocks_logged += bufs;
-
- /* Force a new descriptor to be generated next
- time round the loop. */
-@@ -654,6 +749,23 @@ start_journal_io:
- }
- }
-
-+ /* Done it all: now write the commit record asynchronously. */
++#include <asm/arch/pcm027.h>
+
-+ if (JBD2_HAS_INCOMPAT_FEATURE(journal,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
-+ err = journal_submit_commit_record(journal, commit_transaction,
-+ &cbh, crc32_sum);
-+ if (err)
-+ __jbd2_journal_abort_hard(journal);
++/*
++ * definitions relevant only when the PCM-990
++ * development base board is in use
++ */
+
-+ spin_lock(&journal->j_list_lock);
-+ err = journal_wait_on_locked_list(journal,
-+ commit_transaction);
-+ spin_unlock(&journal->j_list_lock);
-+ if (err)
-+ __jbd2_journal_abort_hard(journal);
-+ }
++/* CPLD's interrupt controller is connected to PCM-027 GPIO 9 */
++#define PCM990_CTRL_INT_IRQ_GPIO 9
++#define PCM990_CTRL_INT_IRQ IRQ_GPIO(PCM990_CTRL_INT_IRQ_GPIO)
++#define PCM990_CTRL_INT_IRQ_EDGE IRQT_RISING
++#define PCM990_CTRL_PHYS PXA_CS1_PHYS /* 16-Bit */
++#define PCM990_CTRL_BASE 0xea000000
++#define PCM990_CTRL_SIZE (1*1024*1024)
+
- /* Lo and behold: we have just managed to send a transaction to
- the log. Before we can commit it, wait for the IO so far to
- complete. Control buffers being written are on the
-@@ -753,8 +865,14 @@ wait_for_iobuf:
-
- jbd_debug(3, "JBD: commit phase 6\n");
-
-- if (journal_write_commit_record(journal, commit_transaction))
-- err = -EIO;
-+ if (!JBD2_HAS_INCOMPAT_FEATURE(journal,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
-+ err = journal_submit_commit_record(journal, commit_transaction,
-+ &cbh, crc32_sum);
-+ if (err)
-+ __jbd2_journal_abort_hard(journal);
-+ }
-+ err = journal_wait_on_commit_record(cbh);
-
- if (err)
- jbd2_journal_abort(journal, err);
-@@ -816,6 +934,7 @@ restart_loop:
- cp_transaction = jh->b_cp_transaction;
- if (cp_transaction) {
- JBUFFER_TRACE(jh, "remove from old cp transaction");
-+ cp_transaction->t_chp_stats.cs_dropped++;
- __jbd2_journal_remove_checkpoint(jh);
- }
-
-@@ -867,10 +986,10 @@ restart_loop:
- }
- spin_unlock(&journal->j_list_lock);
- /*
-- * This is a bit sleazy. We borrow j_list_lock to protect
-- * journal->j_committing_transaction in __jbd2_journal_remove_checkpoint.
-- * Really, __jbd2_journal_remove_checkpoint should be using j_state_lock but
-- * it's a bit hassle to hold that across __jbd2_journal_remove_checkpoint
-+ * This is a bit sleazy. We use j_list_lock to protect transition
-+ * of a transaction into T_FINISHED state and calling
-+ * __jbd2_journal_drop_transaction(). Otherwise we could race with
-+ * other checkpointing code processing the transaction...
- */
- spin_lock(&journal->j_state_lock);
- spin_lock(&journal->j_list_lock);
-@@ -890,6 +1009,36 @@ restart_loop:
-
- J_ASSERT(commit_transaction->t_state == T_COMMIT);
-
-+ commit_transaction->t_start = jiffies;
-+ stats.u.run.rs_logging = jbd2_time_diff(stats.u.run.rs_logging,
-+ commit_transaction->t_start);
++#define PCM990_CTRL_PWR_IRQ_GPIO 14
++#define PCM990_CTRL_PWR_IRQ IRQ_GPIO(PCM990_CTRL_PWR_IRQ_GPIO)
++#define PCM990_CTRL_PWR_IRQ_EDGE IRQT_RISING
+
-+ /*
-+ * File the transaction for history
-+ */
-+ stats.ts_type = JBD2_STATS_RUN;
-+ stats.ts_tid = commit_transaction->t_tid;
-+ stats.u.run.rs_handle_count = commit_transaction->t_handle_count;
-+ spin_lock(&journal->j_history_lock);
-+ memcpy(journal->j_history + journal->j_history_cur, &stats,
-+ sizeof(stats));
-+ if (++journal->j_history_cur == journal->j_history_max)
-+ journal->j_history_cur = 0;
++/* visible CPLD (U7) registers */
++#define PCM990_CTRL_REG0 0x0000 /* RESET REGISTER */
++#define PCM990_CTRL_SYSRES 0x0001 /* System RESET REGISTER */
++#define PCM990_CTRL_RESOUT 0x0002 /* RESETOUT Enable REGISTER */
++#define PCM990_CTRL_RESGPIO 0x0004 /* RESETGPIO Enable REGISTER */
+
-+ /*
-+ * Calculate overall stats
-+ */
-+ journal->j_stats.ts_tid++;
-+ journal->j_stats.u.run.rs_wait += stats.u.run.rs_wait;
-+ journal->j_stats.u.run.rs_running += stats.u.run.rs_running;
-+ journal->j_stats.u.run.rs_locked += stats.u.run.rs_locked;
-+ journal->j_stats.u.run.rs_flushing += stats.u.run.rs_flushing;
-+ journal->j_stats.u.run.rs_logging += stats.u.run.rs_logging;
-+ journal->j_stats.u.run.rs_handle_count += stats.u.run.rs_handle_count;
-+ journal->j_stats.u.run.rs_blocks += stats.u.run.rs_blocks;
-+ journal->j_stats.u.run.rs_blocks_logged += stats.u.run.rs_blocks_logged;
-+ spin_unlock(&journal->j_history_lock);
++#define PCM990_CTRL_REG1 0x0002 /* Power REGISTER */
++#define PCM990_CTRL_5VOFF 0x0001 /* Disable 5V Regulators */
++#define PCM990_CTRL_CANPWR 0x0004 /* Enable CANPWR ADUM */
++#define PCM990_CTRL_PM_5V 0x0008 /* Read 5V OK */
+
- commit_transaction->t_state = T_FINISHED;
- J_ASSERT(commit_transaction == journal->j_committing_transaction);
- journal->j_commit_sequence = commit_transaction->t_tid;
-diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
-index 6ddc553..96ba846 100644
---- a/fs/jbd2/journal.c
-+++ b/fs/jbd2/journal.c
-@@ -36,6 +36,7 @@
- #include <linux/poison.h>
- #include <linux/proc_fs.h>
- #include <linux/debugfs.h>
-+#include <linux/seq_file.h>
-
- #include <asm/uaccess.h>
- #include <asm/page.h>
-@@ -640,6 +641,312 @@ struct journal_head *jbd2_journal_get_descriptor_buffer(journal_t *journal)
- return jbd2_journal_add_journal_head(bh);
- }
-
-+struct jbd2_stats_proc_session {
-+ journal_t *journal;
-+ struct transaction_stats_s *stats;
-+ int start;
-+ int max;
-+};
++#define PCM990_CTRL_REG2 0x0004 /* LED REGISTER */
++#define PCM990_CTRL_LEDPWR 0x0001 /* POWER LED enable */
++#define PCM990_CTRL_LEDBAS 0x0002 /* BASIS LED enable */
++#define PCM990_CTRL_LEDUSR 0x0004 /* USER LED enable */
+
-+static void *jbd2_history_skip_empty(struct jbd2_stats_proc_session *s,
-+ struct transaction_stats_s *ts,
-+ int first)
-+{
-+ if (ts == s->stats + s->max)
-+ ts = s->stats;
-+ if (!first && ts == s->stats + s->start)
-+ return NULL;
-+ while (ts->ts_type == 0) {
-+ ts++;
-+ if (ts == s->stats + s->max)
-+ ts = s->stats;
-+ if (ts == s->stats + s->start)
-+ return NULL;
-+ }
-+ return ts;
++#define PCM990_CTRL_REG3 0x0006 /* LCD CTRL REGISTER 3 */
++#define PCM990_CTRL_LCDPWR 0x0001 /* RW LCD Power on */
++#define PCM990_CTRL_LCDON 0x0002 /* RW LCD Latch on */
++#define PCM990_CTRL_LCDPOS1 0x0004 /* RW POS 1 */
++#define PCM990_CTRL_LCDPOS2 0x0008 /* RW POS 2 */
+
-+}
++#define PCM990_CTRL_REG4 0x0008 /* MMC1 CTRL REGISTER 4 */
++#define PCM990_CTRL_MMC1PWR 0x0001 /* RW MMC1 Power on */
+
-+static void *jbd2_seq_history_start(struct seq_file *seq, loff_t *pos)
-+{
-+ struct jbd2_stats_proc_session *s = seq->private;
-+ struct transaction_stats_s *ts;
-+ int l = *pos;
++#define PCM990_CTRL_REG5 0x000A /* MMC2 CTRL REGISTER 5 */
++#define PCM990_CTRL_MMC2PWR 0x0001 /* RW MMC2 Power on */
++#define PCM990_CTRL_MMC2LED 0x0002 /* RW MMC2 LED */
++#define PCM990_CTRL_MMC2DE 0x0004 /* R MMC2 Card detect */
++#define PCM990_CTRL_MMC2WP 0x0008 /* R MMC2 Card write protect */
+
-+ if (l == 0)
-+ return SEQ_START_TOKEN;
-+ ts = jbd2_history_skip_empty(s, s->stats + s->start, 1);
-+ if (!ts)
-+ return NULL;
-+ l--;
-+ while (l) {
-+ ts = jbd2_history_skip_empty(s, ++ts, 0);
-+ if (!ts)
-+ break;
-+ l--;
-+ }
-+ return ts;
-+}
++#define PCM990_CTRL_REG6 0x000C /* Interrupt Clear REGISTER */
++#define PCM990_CTRL_INTC0 0x0001 /* Clear Reg BT Detect */
++#define PCM990_CTRL_INTC1 0x0002 /* Clear Reg FR RI */
++#define PCM990_CTRL_INTC2 0x0004 /* Clear Reg MMC1 Detect */
++#define PCM990_CTRL_INTC3 0x0008 /* Clear Reg PM_5V off */
+
-+static void *jbd2_seq_history_next(struct seq_file *seq, void *v, loff_t *pos)
-+{
-+ struct jbd2_stats_proc_session *s = seq->private;
-+ struct transaction_stats_s *ts = v;
++#define PCM990_CTRL_REG7 0x000E /* Interrupt Enable REGISTER */
++#define PCM990_CTRL_ENAINT0 0x0001 /* Enable Int BT Detect */
++#define PCM990_CTRL_ENAINT1 0x0002 /* Enable Int FR RI */
++#define PCM990_CTRL_ENAINT2 0x0004 /* Enable Int MMC1 Detect */
++#define PCM990_CTRL_ENAINT3 0x0008 /* Enable Int PM_5V off */
+
-+ ++*pos;
-+ if (v == SEQ_START_TOKEN)
-+ return jbd2_history_skip_empty(s, s->stats + s->start, 1);
-+ else
-+ return jbd2_history_skip_empty(s, ++ts, 0);
-+}
++#define PCM990_CTRL_REG8 0x0014 /* Uart REGISTER */
++#define PCM990_CTRL_FFSD 0x0001 /* BT Uart Enable */
++#define PCM990_CTRL_BTSD 0x0002 /* FF Uart Enable */
++#define PCM990_CTRL_FFRI 0x0004 /* FF Uart RI detect */
++#define PCM990_CTRL_BTRX 0x0008 /* BT Uart Rx detect */
+
-+static int jbd2_seq_history_show(struct seq_file *seq, void *v)
-+{
-+ struct transaction_stats_s *ts = v;
-+ if (v == SEQ_START_TOKEN) {
-+ seq_printf(seq, "%-4s %-5s %-5s %-5s %-5s %-5s %-5s %-6s %-5s "
-+ "%-5s %-5s %-5s %-5s %-5s\n", "R/C", "tid",
-+ "wait", "run", "lock", "flush", "log", "hndls",
-+ "block", "inlog", "ctime", "write", "drop",
-+ "close");
-+ return 0;
-+ }
-+ if (ts->ts_type == JBD2_STATS_RUN)
-+ seq_printf(seq, "%-4s %-5lu %-5u %-5u %-5u %-5u %-5u "
-+ "%-6lu %-5lu %-5lu\n", "R", ts->ts_tid,
-+ jiffies_to_msecs(ts->u.run.rs_wait),
-+ jiffies_to_msecs(ts->u.run.rs_running),
-+ jiffies_to_msecs(ts->u.run.rs_locked),
-+ jiffies_to_msecs(ts->u.run.rs_flushing),
-+ jiffies_to_msecs(ts->u.run.rs_logging),
-+ ts->u.run.rs_handle_count,
-+ ts->u.run.rs_blocks,
-+ ts->u.run.rs_blocks_logged);
-+ else if (ts->ts_type == JBD2_STATS_CHECKPOINT)
-+ seq_printf(seq, "%-4s %-5lu %48s %-5u %-5lu %-5lu %-5lu\n",
-+ "C", ts->ts_tid, " ",
-+ jiffies_to_msecs(ts->u.chp.cs_chp_time),
-+ ts->u.chp.cs_written, ts->u.chp.cs_dropped,
-+ ts->u.chp.cs_forced_to_close);
-+ else
-+ J_ASSERT(0);
-+ return 0;
-+}
++#define PCM990_CTRL_REG9 0x0010 /* AC97 Flash REGISTER */
++#define PCM990_CTRL_FLWP 0x0001 /* pC Flash Write Protect */
++#define PCM990_CTRL_FLDIS 0x0002 /* pC Flash Disable */
++#define PCM990_CTRL_AC97ENA 0x0004 /* Enable AC97 Expansion */
+
-+static void jbd2_seq_history_stop(struct seq_file *seq, void *v)
-+{
-+}
++#define PCM990_CTRL_REG10 0x0012 /* GPS-REGISTER */
++#define PCM990_CTRL_GPSPWR 0x0004 /* GPS module Power on */
++#define PCM990_CTRL_GPSENA 0x0008 /* GPS module Enable */
+
-+static struct seq_operations jbd2_seq_history_ops = {
-+ .start = jbd2_seq_history_start,
-+ .next = jbd2_seq_history_next,
-+ .stop = jbd2_seq_history_stop,
-+ .show = jbd2_seq_history_show,
-+};
++#define PCM990_CTRL_REG11 0x0014 /* Accu REGISTER */
++#define PCM990_CTRL_ACENA 0x0001 /* Charge Enable */
++#define PCM990_CTRL_ACSEL 0x0002 /* Charge Battery -> DC Enable */
++#define PCM990_CTRL_ACPRES 0x0004 /* DC Present */
++#define PCM990_CTRL_ACALARM 0x0008 /* Battery Error */
+
-+static int jbd2_seq_history_open(struct inode *inode, struct file *file)
-+{
-+ journal_t *journal = PDE(inode)->data;
-+ struct jbd2_stats_proc_session *s;
-+ int rc, size;
++#define PCM990_CTRL_P2V(x) ((x) - PCM990_CTRL_PHYS + PCM990_CTRL_BASE)
++#define PCM990_CTRL_V2P(x) ((x) - PCM990_CTRL_BASE + PCM990_CTRL_PHYS)
+
-+ s = kmalloc(sizeof(*s), GFP_KERNEL);
-+ if (s == NULL)
-+ return -ENOMEM;
-+ size = sizeof(struct transaction_stats_s) * journal->j_history_max;
-+ s->stats = kmalloc(size, GFP_KERNEL);
-+ if (s->stats == NULL) {
-+ kfree(s);
-+ return -ENOMEM;
-+ }
-+ spin_lock(&journal->j_history_lock);
-+ memcpy(s->stats, journal->j_history, size);
-+ s->max = journal->j_history_max;
-+ s->start = journal->j_history_cur % s->max;
-+ spin_unlock(&journal->j_history_lock);
++#ifndef __ASSEMBLY__
++# define __PCM990_CTRL_REG(x) \
++ (*((volatile unsigned char *)PCM990_CTRL_P2V(x)))
++#else
++# define __PCM990_CTRL_REG(x) PCM990_CTRL_P2V(x)
++#endif
+
-+ rc = seq_open(file, &jbd2_seq_history_ops);
-+ if (rc == 0) {
-+ struct seq_file *m = file->private_data;
-+ m->private = s;
-+ } else {
-+ kfree(s->stats);
-+ kfree(s);
-+ }
-+ return rc;
++#define PCM990_INTMSKENA __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG7)
++#define PCM990_INTSETCLR __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG6)
++#define PCM990_CTRL0 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG0)
++#define PCM990_CTRL1 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG1)
++#define PCM990_CTRL2 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG2)
++#define PCM990_CTRL3 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG3)
++#define PCM990_CTRL4 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG4)
++#define PCM990_CTRL5 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG5)
++#define PCM990_CTRL6 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG6)
++#define PCM990_CTRL7 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG7)
++#define PCM990_CTRL8 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG8)
++#define PCM990_CTRL9 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG9)
++#define PCM990_CTRL10 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG10)
++#define PCM990_CTRL11 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG11)
+
-+}
+
-+static int jbd2_seq_history_release(struct inode *inode, struct file *file)
-+{
-+ struct seq_file *seq = file->private_data;
-+ struct jbd2_stats_proc_session *s = seq->private;
++/*
++ * IDE
++ */
++#define PCM990_IDE_IRQ_GPIO 13
++#define PCM990_IDE_IRQ IRQ_GPIO(PCM990_IDE_IRQ_GPIO)
++#define PCM990_IDE_IRQ_EDGE IRQT_RISING
++#define PCM990_IDE_PLD_PHYS 0x20000000 /* 16 bit wide */
++#define PCM990_IDE_PLD_BASE 0xee000000
++#define PCM990_IDE_PLD_SIZE (1*1024*1024)
+
-+ kfree(s->stats);
-+ kfree(s);
-+ return seq_release(inode, file);
-+}
++/* visible CPLD (U6) registers */
++#define PCM990_IDE_PLD_REG0 0x1000 /* OFFSET IDE REGISTER 0 */
++#define PCM990_IDE_PM5V 0x0004 /* R System VCC_5V */
++#define PCM990_IDE_STBY 0x0008 /* R System StandBy */
+
-+static struct file_operations jbd2_seq_history_fops = {
-+ .owner = THIS_MODULE,
-+ .open = jbd2_seq_history_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = jbd2_seq_history_release,
-+};
++#define PCM990_IDE_PLD_REG1 0x1002 /* OFFSET IDE REGISTER 1 */
++#define PCM990_IDE_IDEMODE 0x0001 /* R TrueIDE Mode */
++#define PCM990_IDE_DMAENA 0x0004 /* RW DMA Enable */
++#define PCM990_IDE_DMA1_0 0x0008 /* RW 1=DREQ1 0=DREQ0 */
+
-+static void *jbd2_seq_info_start(struct seq_file *seq, loff_t *pos)
-+{
-+ return *pos ? NULL : SEQ_START_TOKEN;
-+}
++#define PCM990_IDE_PLD_REG2 0x1004 /* OFFSET IDE REGISTER 2 */
++#define PCM990_IDE_RESENA 0x0001 /* RW IDE Reset Bit enable */
++#define PCM990_IDE_RES 0x0002 /* RW IDE Reset Bit */
++#define PCM990_IDE_RDY 0x0008 /* RDY */
+
-+static void *jbd2_seq_info_next(struct seq_file *seq, void *v, loff_t *pos)
-+{
-+ return NULL;
-+}
++#define PCM990_IDE_PLD_REG3 0x1006 /* OFFSET IDE REGISTER 3 */
++#define PCM990_IDE_IDEOE 0x0001 /* RW Latch on Databus */
++#define PCM990_IDE_IDEON 0x0002 /* RW Latch on Control Address */
++#define PCM990_IDE_IDEIN 0x0004 /* RW Latch on Interrupt usw. */
+
-+static int jbd2_seq_info_show(struct seq_file *seq, void *v)
-+{
-+ struct jbd2_stats_proc_session *s = seq->private;
++#define PCM990_IDE_PLD_REG4 0x1008 /* OFFSET IDE REGISTER 4 */
++#define PCM990_IDE_PWRENA 0x0001 /* RW IDE Power enable */
++#define PCM990_IDE_5V 0x0002 /* R IDE Power 5V */
++#define PCM990_IDE_PWG 0x0008 /* R IDE Power is on */
+
-+ if (v != SEQ_START_TOKEN)
-+ return 0;
-+ seq_printf(seq, "%lu transaction, each upto %u blocks\n",
-+ s->stats->ts_tid,
-+ s->journal->j_max_transaction_buffers);
-+ if (s->stats->ts_tid == 0)
-+ return 0;
-+ seq_printf(seq, "average: \n %ums waiting for transaction\n",
-+ jiffies_to_msecs(s->stats->u.run.rs_wait / s->stats->ts_tid));
-+ seq_printf(seq, " %ums running transaction\n",
-+ jiffies_to_msecs(s->stats->u.run.rs_running / s->stats->ts_tid));
-+ seq_printf(seq, " %ums transaction was being locked\n",
-+ jiffies_to_msecs(s->stats->u.run.rs_locked / s->stats->ts_tid));
-+ seq_printf(seq, " %ums flushing data (in ordered mode)\n",
-+ jiffies_to_msecs(s->stats->u.run.rs_flushing / s->stats->ts_tid));
-+ seq_printf(seq, " %ums logging transaction\n",
-+ jiffies_to_msecs(s->stats->u.run.rs_logging / s->stats->ts_tid));
-+ seq_printf(seq, " %lu handles per transaction\n",
-+ s->stats->u.run.rs_handle_count / s->stats->ts_tid);
-+ seq_printf(seq, " %lu blocks per transaction\n",
-+ s->stats->u.run.rs_blocks / s->stats->ts_tid);
-+ seq_printf(seq, " %lu logged blocks per transaction\n",
-+ s->stats->u.run.rs_blocks_logged / s->stats->ts_tid);
-+ return 0;
-+}
++#define PCM990_IDE_PLD_P2V(x) ((x) - PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_BASE)
++#define PCM990_IDE_PLD_V2P(x) ((x) - PCM990_IDE_PLD_BASE + PCM990_IDE_PLD_PHYS)
+
-+static void jbd2_seq_info_stop(struct seq_file *seq, void *v)
-+{
-+}
++#ifndef __ASSEMBLY__
++# define __PCM990_IDE_PLD_REG(x) \
++ (*((volatile unsigned char *)PCM990_IDE_PLD_P2V(x)))
++#else
++# define __PCM990_IDE_PLD_REG(x) PCM990_IDE_PLD_P2V(x)
++#endif
+
-+static struct seq_operations jbd2_seq_info_ops = {
-+ .start = jbd2_seq_info_start,
-+ .next = jbd2_seq_info_next,
-+ .stop = jbd2_seq_info_stop,
-+ .show = jbd2_seq_info_show,
-+};
++#define PCM990_IDE0 \
++ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG0)
++#define PCM990_IDE1 \
++ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG1)
++#define PCM990_IDE2 \
++ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG2)
++#define PCM990_IDE3 \
++ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG3)
++#define PCM990_IDE4 \
++ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG4)
+
-+static int jbd2_seq_info_open(struct inode *inode, struct file *file)
-+{
-+ journal_t *journal = PDE(inode)->data;
-+ struct jbd2_stats_proc_session *s;
-+ int rc, size;
++/*
++ * Compact Flash
++ */
++#define PCM990_CF_IRQ_GPIO 11
++#define PCM990_CF_IRQ IRQ_GPIO(PCM990_CF_IRQ_GPIO)
++#define PCM990_CF_IRQ_EDGE IRQT_RISING
+
-+ s = kmalloc(sizeof(*s), GFP_KERNEL);
-+ if (s == NULL)
-+ return -ENOMEM;
-+ size = sizeof(struct transaction_stats_s);
-+ s->stats = kmalloc(size, GFP_KERNEL);
-+ if (s->stats == NULL) {
-+ kfree(s);
-+ return -ENOMEM;
-+ }
-+ spin_lock(&journal->j_history_lock);
-+ memcpy(s->stats, &journal->j_stats, size);
-+ s->journal = journal;
-+ spin_unlock(&journal->j_history_lock);
++#define PCM990_CF_CD_GPIO 12
++#define PCM990_CF_CD IRQ_GPIO(PCM990_CF_CD_GPIO)
++#define PCM990_CF_CD_EDGE IRQT_RISING
+
-+ rc = seq_open(file, &jbd2_seq_info_ops);
-+ if (rc == 0) {
-+ struct seq_file *m = file->private_data;
-+ m->private = s;
-+ } else {
-+ kfree(s->stats);
-+ kfree(s);
-+ }
-+ return rc;
++#define PCM990_CF_PLD_PHYS 0x30000000 /* 16 bit wide */
++#define PCM990_CF_PLD_BASE 0xef000000
++#define PCM990_CF_PLD_SIZE (1*1024*1024)
++#define PCM990_CF_PLD_P2V(x) ((x) - PCM990_CF_PLD_PHYS + PCM990_CF_PLD_BASE)
++#define PCM990_CF_PLD_V2P(x) ((x) - PCM990_CF_PLD_BASE + PCM990_CF_PLD_PHYS)
+
-+}
++/* visible CPLD (U6) registers */
++#define PCM990_CF_PLD_REG0 0x1000 /* OFFSET CF REGISTER 0 */
++#define PCM990_CF_REG0_LED 0x0001 /* RW LED on */
++#define PCM990_CF_REG0_BLK 0x0002 /* RW LED flash when access */
++#define PCM990_CF_REG0_PM5V 0x0004 /* R System VCC_5V enable */
++#define PCM990_CF_REG0_STBY 0x0008 /* R System StandBy */
+
-+static int jbd2_seq_info_release(struct inode *inode, struct file *file)
-+{
-+ struct seq_file *seq = file->private_data;
-+ struct jbd2_stats_proc_session *s = seq->private;
-+ kfree(s->stats);
-+ kfree(s);
-+ return seq_release(inode, file);
-+}
++#define PCM990_CF_PLD_REG1 0x1002 /* OFFSET CF REGISTER 1 */
++#define PCM990_CF_REG1_IDEMODE 0x0001 /* RW CF card run as TrueIDE */
++#define PCM990_CF_REG1_CF0 0x0002 /* RW CF card at ADDR 0x28000000 */
+
-+static struct file_operations jbd2_seq_info_fops = {
-+ .owner = THIS_MODULE,
-+ .open = jbd2_seq_info_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = jbd2_seq_info_release,
-+};
++#define PCM990_CF_PLD_REG2 0x1004 /* OFFSET CF REGISTER 2 */
++#define PCM990_CF_REG2_RES 0x0002 /* RW CF RESET BIT */
++#define PCM990_CF_REG2_RDYENA 0x0004 /* RW Enable CF_RDY */
++#define PCM990_CF_REG2_RDY 0x0008 /* R CF_RDY auf PWAIT */
+
-+static struct proc_dir_entry *proc_jbd2_stats;
++#define PCM990_CF_PLD_REG3 0x1006 /* OFFSET CF REGISTER 3 */
++#define PCM990_CF_REG3_CFOE 0x0001 /* RW Latch on Databus */
++#define PCM990_CF_REG3_CFON 0x0002 /* RW Latch on Control Address */
++#define PCM990_CF_REG3_CFIN 0x0004 /* RW Latch on Interrupt usw. */
++#define PCM990_CF_REG3_CFCD 0x0008 /* RW Latch on CD1/2 VS1/2 usw */
+
-+static void jbd2_stats_proc_init(journal_t *journal)
-+{
-+ char name[BDEVNAME_SIZE];
++#define PCM990_CF_PLD_REG4 0x1008 /* OFFSET CF REGISTER 4 */
++#define PCM990_CF_REG4_PWRENA 0x0001 /* RW CF Power on (CD1/2 = "00") */
++#define PCM990_CF_REG4_5_3V 0x0002 /* RW 1 = 5V CF_VCC 0 = 3 V CF_VCC */
++#define PCM990_CF_REG4_3B 0x0004 /* RW 3.0V Backup from VCC (5_3V=0) */
++#define PCM990_CF_REG4_PWG 0x0008 /* R CF-Power is on */
+
-+ snprintf(name, sizeof(name) - 1, "%s", bdevname(journal->j_dev, name));
-+ journal->j_proc_entry = proc_mkdir(name, proc_jbd2_stats);
-+ if (journal->j_proc_entry) {
-+ struct proc_dir_entry *p;
-+ p = create_proc_entry("history", S_IRUGO,
-+ journal->j_proc_entry);
-+ if (p) {
-+ p->proc_fops = &jbd2_seq_history_fops;
-+ p->data = journal;
-+ p = create_proc_entry("info", S_IRUGO,
-+ journal->j_proc_entry);
-+ if (p) {
-+ p->proc_fops = &jbd2_seq_info_fops;
-+ p->data = journal;
-+ }
-+ }
-+ }
-+}
++#define PCM990_CF_PLD_REG5 0x100A /* OFFSET CF REGISTER 5 */
++#define PCM990_CF_REG5_BVD1 0x0001 /* R CF /BVD1 */
++#define PCM990_CF_REG5_BVD2 0x0002 /* R CF /BVD2 */
++#define PCM990_CF_REG5_VS1 0x0004 /* R CF /VS1 */
++#define PCM990_CF_REG5_VS2 0x0008 /* R CF /VS2 */
+
-+static void jbd2_stats_proc_exit(journal_t *journal)
-+{
-+ char name[BDEVNAME_SIZE];
++#define PCM990_CF_PLD_REG6 0x100C /* OFFSET CF REGISTER 6 */
++#define PCM990_CF_REG6_CD1 0x0001 /* R CF Card_Detect1 */
++#define PCM990_CF_REG6_CD2 0x0002 /* R CF Card_Detect2 */
+
-+ snprintf(name, sizeof(name) - 1, "%s", bdevname(journal->j_dev, name));
-+ remove_proc_entry("info", journal->j_proc_entry);
-+ remove_proc_entry("history", journal->j_proc_entry);
-+ remove_proc_entry(name, proc_jbd2_stats);
-+}
++#ifndef __ASSEMBLY__
++# define __PCM990_CF_PLD_REG(x) \
++ (*((volatile unsigned char *)PCM990_CF_PLD_P2V(x)))
++#else
++# define __PCM990_CF_PLD_REG(x) PCM990_CF_PLD_P2V(x)
++#endif
+
-+static void journal_init_stats(journal_t *journal)
-+{
-+ int size;
++#define PCM990_CF0 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG0)
++#define PCM990_CF1 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG1)
++#define PCM990_CF2 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG2)
++#define PCM990_CF3 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG3)
++#define PCM990_CF4 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG4)
++#define PCM990_CF5 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG5)
++#define PCM990_CF6 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG6)
+
-+ if (!proc_jbd2_stats)
-+ return;
++/*
++ * Wolfson AC97 Touch
++ */
++#define PCM990_AC97_IRQ_GPIO 10
++#define PCM990_AC97_IRQ IRQ_GPIO(PCM990_AC97_IRQ_GPIO)
++#define PCM990_AC97_IRQ_EDGE IRQT_RISING
+
-+ journal->j_history_max = 100;
-+ size = sizeof(struct transaction_stats_s) * journal->j_history_max;
-+ journal->j_history = kzalloc(size, GFP_KERNEL);
-+ if (!journal->j_history) {
-+ journal->j_history_max = 0;
-+ return;
-+ }
-+ spin_lock_init(&journal->j_history_lock);
-+}
++/*
++ * MMC phyCORE
++ */
++#define PCM990_MMC0_IRQ_GPIO 9
++#define PCM990_MMC0_IRQ IRQ_GPIO(PCM990_MMC0_IRQ_GPIO)
++#define PCM990_MMC0_IRQ_EDGE IRQT_FALLING
+
++/*
++ * USB phyCore
++ */
++#define PCM990_USB_OVERCURRENT (88 | GPIO_ALT_FN_1_IN)
++#define PCM990_USB_PWR_EN (89 | GPIO_ALT_FN_2_OUT)
+diff --git a/include/asm-arm/arch-pxa/pxa-regs.h b/include/asm-arm/arch-pxa/pxa-regs.h
+index 1bd398d..442494d 100644
+--- a/include/asm-arm/arch-pxa/pxa-regs.h
++++ b/include/asm-arm/arch-pxa/pxa-regs.h
+@@ -1597,176 +1597,10 @@
+ #define PWER_GPIO15 PWER_GPIO (15) /* GPIO [15] wake-up enable */
+ #define PWER_RTC 0x80000000 /* RTC alarm wake-up enable */
+
+-
/*
- * Management for journal control blocks: functions to create and
- * destroy journal_t structures, and to initialise and read existing
-@@ -681,6 +988,9 @@ static journal_t * journal_init_common (void)
- kfree(journal);
- goto fail;
- }
-+
-+ journal_init_stats(journal);
-+
- return journal;
- fail:
- return NULL;
-@@ -735,6 +1045,7 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev,
- journal->j_fs_dev = fs_dev;
- journal->j_blk_offset = start;
- journal->j_maxlen = len;
-+ jbd2_stats_proc_init(journal);
+- * SSP Serial Port Registers
+- * PXA250, PXA255, PXA26x and PXA27x SSP controllers are all slightly different.
+- * PXA255, PXA26x and PXA27x have extra ports, registers and bits.
++ * SSP Serial Port Registers - see include/asm-arm/arch-pxa/regs-ssp.h
+ */
- bh = __getblk(journal->j_dev, start, journal->j_blocksize);
- J_ASSERT(bh != NULL);
-@@ -773,6 +1084,7 @@ journal_t * jbd2_journal_init_inode (struct inode *inode)
+- /* Common PXA2xx bits first */
+-#define SSCR0_DSS (0x0000000f) /* Data Size Select (mask) */
+-#define SSCR0_DataSize(x) ((x) - 1) /* Data Size Select [4..16] */
+-#define SSCR0_FRF (0x00000030) /* FRame Format (mask) */
+-#define SSCR0_Motorola (0x0 << 4) /* Motorola's Serial Peripheral Interface (SPI) */
+-#define SSCR0_TI (0x1 << 4) /* Texas Instruments' Synchronous Serial Protocol (SSP) */
+-#define SSCR0_National (0x2 << 4) /* National Microwire */
+-#define SSCR0_ECS (1 << 6) /* External clock select */
+-#define SSCR0_SSE (1 << 7) /* Synchronous Serial Port Enable */
+-#if defined(CONFIG_PXA25x)
+-#define SSCR0_SCR (0x0000ff00) /* Serial Clock Rate (mask) */
+-#define SSCR0_SerClkDiv(x) ((((x) - 2)/2) << 8) /* Divisor [2..512] */
+-#elif defined(CONFIG_PXA27x)
+-#define SSCR0_SCR (0x000fff00) /* Serial Clock Rate (mask) */
+-#define SSCR0_SerClkDiv(x) (((x) - 1) << 8) /* Divisor [1..4096] */
+-#define SSCR0_EDSS (1 << 20) /* Extended data size select */
+-#define SSCR0_NCS (1 << 21) /* Network clock select */
+-#define SSCR0_RIM (1 << 22) /* Receive FIFO overrrun interrupt mask */
+-#define SSCR0_TUM (1 << 23) /* Transmit FIFO underrun interrupt mask */
+-#define SSCR0_FRDC (0x07000000) /* Frame rate divider control (mask) */
+-#define SSCR0_SlotsPerFrm(x) (((x) - 1) << 24) /* Time slots per frame [1..8] */
+-#define SSCR0_ADC (1 << 30) /* Audio clock select */
+-#define SSCR0_MOD (1 << 31) /* Mode (normal or network) */
+-#endif
+-
+-#define SSCR1_RIE (1 << 0) /* Receive FIFO Interrupt Enable */
+-#define SSCR1_TIE (1 << 1) /* Transmit FIFO Interrupt Enable */
+-#define SSCR1_LBM (1 << 2) /* Loop-Back Mode */
+-#define SSCR1_SPO (1 << 3) /* Motorola SPI SSPSCLK polarity setting */
+-#define SSCR1_SPH (1 << 4) /* Motorola SPI SSPSCLK phase setting */
+-#define SSCR1_MWDS (1 << 5) /* Microwire Transmit Data Size */
+-#define SSCR1_TFT (0x000003c0) /* Transmit FIFO Threshold (mask) */
+-#define SSCR1_TxTresh(x) (((x) - 1) << 6) /* level [1..16] */
+-#define SSCR1_RFT (0x00003c00) /* Receive FIFO Threshold (mask) */
+-#define SSCR1_RxTresh(x) (((x) - 1) << 10) /* level [1..16] */
+-
+-#define SSSR_TNF (1 << 2) /* Transmit FIFO Not Full */
+-#define SSSR_RNE (1 << 3) /* Receive FIFO Not Empty */
+-#define SSSR_BSY (1 << 4) /* SSP Busy */
+-#define SSSR_TFS (1 << 5) /* Transmit FIFO Service Request */
+-#define SSSR_RFS (1 << 6) /* Receive FIFO Service Request */
+-#define SSSR_ROR (1 << 7) /* Receive FIFO Overrun */
+-
+-#define SSCR0_TIM (1 << 23) /* Transmit FIFO Under Run Interrupt Mask */
+-#define SSCR0_RIM (1 << 22) /* Receive FIFO Over Run interrupt Mask */
+-#define SSCR0_NCS (1 << 21) /* Network Clock Select */
+-#define SSCR0_EDSS (1 << 20) /* Extended Data Size Select */
+-
+-/* extra bits in PXA255, PXA26x and PXA27x SSP ports */
+-#define SSCR0_TISSP (1 << 4) /* TI Sync Serial Protocol */
+-#define SSCR0_PSP (3 << 4) /* PSP - Programmable Serial Protocol */
+-#define SSCR1_TTELP (1 << 31) /* TXD Tristate Enable Last Phase */
+-#define SSCR1_TTE (1 << 30) /* TXD Tristate Enable */
+-#define SSCR1_EBCEI (1 << 29) /* Enable Bit Count Error interrupt */
+-#define SSCR1_SCFR (1 << 28) /* Slave Clock free Running */
+-#define SSCR1_ECRA (1 << 27) /* Enable Clock Request A */
+-#define SSCR1_ECRB (1 << 26) /* Enable Clock request B */
+-#define SSCR1_SCLKDIR (1 << 25) /* Serial Bit Rate Clock Direction */
+-#define SSCR1_SFRMDIR (1 << 24) /* Frame Direction */
+-#define SSCR1_RWOT (1 << 23) /* Receive Without Transmit */
+-#define SSCR1_TRAIL (1 << 22) /* Trailing Byte */
+-#define SSCR1_TSRE (1 << 21) /* Transmit Service Request Enable */
+-#define SSCR1_RSRE (1 << 20) /* Receive Service Request Enable */
+-#define SSCR1_TINTE (1 << 19) /* Receiver Time-out Interrupt enable */
+-#define SSCR1_PINTE (1 << 18) /* Peripheral Trailing Byte Interupt Enable */
+-#define SSCR1_STRF (1 << 15) /* Select FIFO or EFWR */
+-#define SSCR1_EFWR (1 << 14) /* Enable FIFO Write/Read */
+-
+-#define SSSR_BCE (1 << 23) /* Bit Count Error */
+-#define SSSR_CSS (1 << 22) /* Clock Synchronisation Status */
+-#define SSSR_TUR (1 << 21) /* Transmit FIFO Under Run */
+-#define SSSR_EOC (1 << 20) /* End Of Chain */
+-#define SSSR_TINT (1 << 19) /* Receiver Time-out Interrupt */
+-#define SSSR_PINT (1 << 18) /* Peripheral Trailing Byte Interrupt */
+-
+-#define SSPSP_FSRT (1 << 25) /* Frame Sync Relative Timing */
+-#define SSPSP_DMYSTOP(x) ((x) << 23) /* Dummy Stop */
+-#define SSPSP_SFRMWDTH(x) ((x) << 16) /* Serial Frame Width */
+-#define SSPSP_SFRMDLY(x) ((x) << 9) /* Serial Frame Delay */
+-#define SSPSP_DMYSTRT(x) ((x) << 7) /* Dummy Start */
+-#define SSPSP_STRTDLY(x) ((x) << 4) /* Start Delay */
+-#define SSPSP_ETDS (1 << 3) /* End of Transfer data State */
+-#define SSPSP_SFRMP (1 << 2) /* Serial Frame Polarity */
+-#define SSPSP_SCMODE(x) ((x) << 0) /* Serial Bit Rate Clock Mode */
+-
+-#define SSACD_SCDB (1 << 3) /* SSPSYSCLK Divider Bypass */
+-#define SSACD_ACPS(x) ((x) << 4) /* Audio clock PLL select */
+-#define SSACD_ACDS(x) ((x) << 0) /* Audio clock divider select */
+-
+-#define SSCR0_P1 __REG(0x41000000) /* SSP Port 1 Control Register 0 */
+-#define SSCR1_P1 __REG(0x41000004) /* SSP Port 1 Control Register 1 */
+-#define SSSR_P1 __REG(0x41000008) /* SSP Port 1 Status Register */
+-#define SSITR_P1 __REG(0x4100000C) /* SSP Port 1 Interrupt Test Register */
+-#define SSDR_P1 __REG(0x41000010) /* (Write / Read) SSP Port 1 Data Write Register/SSP Data Read Register */
+-
+-/* Support existing PXA25x drivers */
+-#define SSCR0 SSCR0_P1 /* SSP Control Register 0 */
+-#define SSCR1 SSCR1_P1 /* SSP Control Register 1 */
+-#define SSSR SSSR_P1 /* SSP Status Register */
+-#define SSITR SSITR_P1 /* SSP Interrupt Test Register */
+-#define SSDR SSDR_P1 /* (Write / Read) SSP Data Write Register/SSP Data Read Register */
+-
+-/* PXA27x ports */
+-#if defined (CONFIG_PXA27x)
+-#define SSTO_P1 __REG(0x41000028) /* SSP Port 1 Time Out Register */
+-#define SSPSP_P1 __REG(0x4100002C) /* SSP Port 1 Programmable Serial Protocol */
+-#define SSTSA_P1 __REG(0x41000030) /* SSP Port 1 Tx Timeslot Active */
+-#define SSRSA_P1 __REG(0x41000034) /* SSP Port 1 Rx Timeslot Active */
+-#define SSTSS_P1 __REG(0x41000038) /* SSP Port 1 Timeslot Status */
+-#define SSACD_P1 __REG(0x4100003C) /* SSP Port 1 Audio Clock Divider */
+-#define SSCR0_P2 __REG(0x41700000) /* SSP Port 2 Control Register 0 */
+-#define SSCR1_P2 __REG(0x41700004) /* SSP Port 2 Control Register 1 */
+-#define SSSR_P2 __REG(0x41700008) /* SSP Port 2 Status Register */
+-#define SSITR_P2 __REG(0x4170000C) /* SSP Port 2 Interrupt Test Register */
+-#define SSDR_P2 __REG(0x41700010) /* (Write / Read) SSP Port 2 Data Write Register/SSP Data Read Register */
+-#define SSTO_P2 __REG(0x41700028) /* SSP Port 2 Time Out Register */
+-#define SSPSP_P2 __REG(0x4170002C) /* SSP Port 2 Programmable Serial Protocol */
+-#define SSTSA_P2 __REG(0x41700030) /* SSP Port 2 Tx Timeslot Active */
+-#define SSRSA_P2 __REG(0x41700034) /* SSP Port 2 Rx Timeslot Active */
+-#define SSTSS_P2 __REG(0x41700038) /* SSP Port 2 Timeslot Status */
+-#define SSACD_P2 __REG(0x4170003C) /* SSP Port 2 Audio Clock Divider */
+-#define SSCR0_P3 __REG(0x41900000) /* SSP Port 3 Control Register 0 */
+-#define SSCR1_P3 __REG(0x41900004) /* SSP Port 3 Control Register 1 */
+-#define SSSR_P3 __REG(0x41900008) /* SSP Port 3 Status Register */
+-#define SSITR_P3 __REG(0x4190000C) /* SSP Port 3 Interrupt Test Register */
+-#define SSDR_P3 __REG(0x41900010) /* (Write / Read) SSP Port 3 Data Write Register/SSP Data Read Register */
+-#define SSTO_P3 __REG(0x41900028) /* SSP Port 3 Time Out Register */
+-#define SSPSP_P3 __REG(0x4190002C) /* SSP Port 3 Programmable Serial Protocol */
+-#define SSTSA_P3 __REG(0x41900030) /* SSP Port 3 Tx Timeslot Active */
+-#define SSRSA_P3 __REG(0x41900034) /* SSP Port 3 Rx Timeslot Active */
+-#define SSTSS_P3 __REG(0x41900038) /* SSP Port 3 Timeslot Status */
+-#define SSACD_P3 __REG(0x4190003C) /* SSP Port 3 Audio Clock Divider */
+-#else /* PXA255 (only port 2) and PXA26x ports*/
+-#define SSTO_P1 __REG(0x41000028) /* SSP Port 1 Time Out Register */
+-#define SSPSP_P1 __REG(0x4100002C) /* SSP Port 1 Programmable Serial Protocol */
+-#define SSCR0_P2 __REG(0x41400000) /* SSP Port 2 Control Register 0 */
+-#define SSCR1_P2 __REG(0x41400004) /* SSP Port 2 Control Register 1 */
+-#define SSSR_P2 __REG(0x41400008) /* SSP Port 2 Status Register */
+-#define SSITR_P2 __REG(0x4140000C) /* SSP Port 2 Interrupt Test Register */
+-#define SSDR_P2 __REG(0x41400010) /* (Write / Read) SSP Port 2 Data Write Register/SSP Data Read Register */
+-#define SSTO_P2 __REG(0x41400028) /* SSP Port 2 Time Out Register */
+-#define SSPSP_P2 __REG(0x4140002C) /* SSP Port 2 Programmable Serial Protocol */
+-#define SSCR0_P3 __REG(0x41500000) /* SSP Port 3 Control Register 0 */
+-#define SSCR1_P3 __REG(0x41500004) /* SSP Port 3 Control Register 1 */
+-#define SSSR_P3 __REG(0x41500008) /* SSP Port 3 Status Register */
+-#define SSITR_P3 __REG(0x4150000C) /* SSP Port 3 Interrupt Test Register */
+-#define SSDR_P3 __REG(0x41500010) /* (Write / Read) SSP Port 3 Data Write Register/SSP Data Read Register */
+-#define SSTO_P3 __REG(0x41500028) /* SSP Port 3 Time Out Register */
+-#define SSPSP_P3 __REG(0x4150002C) /* SSP Port 3 Programmable Serial Protocol */
+-#endif
+-
+-#define SSCR0_P(x) (*(((x) == 1) ? &SSCR0_P1 : ((x) == 2) ? &SSCR0_P2 : ((x) == 3) ? &SSCR0_P3 : NULL))
+-#define SSCR1_P(x) (*(((x) == 1) ? &SSCR1_P1 : ((x) == 2) ? &SSCR1_P2 : ((x) == 3) ? &SSCR1_P3 : NULL))
+-#define SSSR_P(x) (*(((x) == 1) ? &SSSR_P1 : ((x) == 2) ? &SSSR_P2 : ((x) == 3) ? &SSSR_P3 : NULL))
+-#define SSITR_P(x) (*(((x) == 1) ? &SSITR_P1 : ((x) == 2) ? &SSITR_P2 : ((x) == 3) ? &SSITR_P3 : NULL))
+-#define SSDR_P(x) (*(((x) == 1) ? &SSDR_P1 : ((x) == 2) ? &SSDR_P2 : ((x) == 3) ? &SSDR_P3 : NULL))
+-#define SSTO_P(x) (*(((x) == 1) ? &SSTO_P1 : ((x) == 2) ? &SSTO_P2 : ((x) == 3) ? &SSTO_P3 : NULL))
+-#define SSPSP_P(x) (*(((x) == 1) ? &SSPSP_P1 : ((x) == 2) ? &SSPSP_P2 : ((x) == 3) ? &SSPSP_P3 : NULL))
+-#define SSTSA_P(x) (*(((x) == 1) ? &SSTSA_P1 : ((x) == 2) ? &SSTSA_P2 : ((x) == 3) ? &SSTSA_P3 : NULL))
+-#define SSRSA_P(x) (*(((x) == 1) ? &SSRSA_P1 : ((x) == 2) ? &SSRSA_P2 : ((x) == 3) ? &SSRSA_P3 : NULL))
+-#define SSTSS_P(x) (*(((x) == 1) ? &SSTSS_P1 : ((x) == 2) ? &SSTSS_P2 : ((x) == 3) ? &SSTSS_P3 : NULL))
+-#define SSACD_P(x) (*(((x) == 1) ? &SSACD_P1 : ((x) == 2) ? &SSACD_P2 : ((x) == 3) ? &SSACD_P3 : NULL))
+-
+ /*
+ * MultiMediaCard (MMC) controller - see drivers/mmc/host/pxamci.h
+ */
+@@ -2014,71 +1848,8 @@
- journal->j_maxlen = inode->i_size >> inode->i_sb->s_blocksize_bits;
- journal->j_blocksize = inode->i_sb->s_blocksize;
-+ jbd2_stats_proc_init(journal);
+ #define LDCMD_PAL (1 << 26) /* instructs DMA to load palette buffer */
- /* journal descriptor can store up to n blocks -bzzz */
- n = journal->j_blocksize / sizeof(journal_block_tag_t);
-@@ -1153,6 +1465,8 @@ void jbd2_journal_destroy(journal_t *journal)
- brelse(journal->j_sb_buffer);
- }
+-/*
+- * Memory controller
+- */
+-
+-#define MDCNFG __REG(0x48000000) /* SDRAM Configuration Register 0 */
+-#define MDREFR __REG(0x48000004) /* SDRAM Refresh Control Register */
+-#define MSC0 __REG(0x48000008) /* Static Memory Control Register 0 */
+-#define MSC1 __REG(0x4800000C) /* Static Memory Control Register 1 */
+-#define MSC2 __REG(0x48000010) /* Static Memory Control Register 2 */
+-#define MECR __REG(0x48000014) /* Expansion Memory (PCMCIA/Compact Flash) Bus Configuration */
+-#define SXLCR __REG(0x48000018) /* LCR value to be written to SDRAM-Timing Synchronous Flash */
+-#define SXCNFG __REG(0x4800001C) /* Synchronous Static Memory Control Register */
+-#define SXMRS __REG(0x48000024) /* MRS value to be written to Synchronous Flash or SMROM */
+-#define MCMEM0 __REG(0x48000028) /* Card interface Common Memory Space Socket 0 Timing */
+-#define MCMEM1 __REG(0x4800002C) /* Card interface Common Memory Space Socket 1 Timing */
+-#define MCATT0 __REG(0x48000030) /* Card interface Attribute Space Socket 0 Timing Configuration */
+-#define MCATT1 __REG(0x48000034) /* Card interface Attribute Space Socket 1 Timing Configuration */
+-#define MCIO0 __REG(0x48000038) /* Card interface I/O Space Socket 0 Timing Configuration */
+-#define MCIO1 __REG(0x4800003C) /* Card interface I/O Space Socket 1 Timing Configuration */
+-#define MDMRS __REG(0x48000040) /* MRS value to be written to SDRAM */
+-#define BOOT_DEF __REG(0x48000044) /* Read-Only Boot-Time Register. Contains BOOT_SEL and PKG_SEL */
+-
+-/*
+- * More handy macros for PCMCIA
+- *
+- * Arg is socket number
+- */
+-#define MCMEM(s) __REG2(0x48000028, (s)<<2 ) /* Card interface Common Memory Space Socket s Timing */
+-#define MCATT(s) __REG2(0x48000030, (s)<<2 ) /* Card interface Attribute Space Socket s Timing Configuration */
+-#define MCIO(s) __REG2(0x48000038, (s)<<2 ) /* Card interface I/O Space Socket s Timing Configuration */
+-
+-/* MECR register defines */
+-#define MECR_NOS (1 << 0) /* Number Of Sockets: 0 -> 1 sock, 1 -> 2 sock */
+-#define MECR_CIT (1 << 1) /* Card Is There: 0 -> no card, 1 -> card inserted */
+-
+-#define MDREFR_K0DB4 (1 << 29) /* SDCLK0 Divide by 4 Control/Status */
+-#define MDREFR_K2FREE (1 << 25) /* SDRAM Free-Running Control */
+-#define MDREFR_K1FREE (1 << 24) /* SDRAM Free-Running Control */
+-#define MDREFR_K0FREE (1 << 23) /* SDRAM Free-Running Control */
+-#define MDREFR_SLFRSH (1 << 22) /* SDRAM Self-Refresh Control/Status */
+-#define MDREFR_APD (1 << 20) /* SDRAM/SSRAM Auto-Power-Down Enable */
+-#define MDREFR_K2DB2 (1 << 19) /* SDCLK2 Divide by 2 Control/Status */
+-#define MDREFR_K2RUN (1 << 18) /* SDCLK2 Run Control/Status */
+-#define MDREFR_K1DB2 (1 << 17) /* SDCLK1 Divide by 2 Control/Status */
+-#define MDREFR_K1RUN (1 << 16) /* SDCLK1 Run Control/Status */
+-#define MDREFR_E1PIN (1 << 15) /* SDCKE1 Level Control/Status */
+-#define MDREFR_K0DB2 (1 << 14) /* SDCLK0 Divide by 2 Control/Status */
+-#define MDREFR_K0RUN (1 << 13) /* SDCLK0 Run Control/Status */
+-#define MDREFR_E0PIN (1 << 12) /* SDCKE0 Level Control/Status */
+-
+-
+ #ifdef CONFIG_PXA27x
-+ if (journal->j_proc_entry)
-+ jbd2_stats_proc_exit(journal);
- if (journal->j_inode)
- iput(journal->j_inode);
- if (journal->j_revoke)
-@@ -1264,6 +1578,32 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
- return 1;
- }
+-#define ARB_CNTRL __REG(0x48000048) /* Arbiter Control Register */
+-
+-#define ARB_DMA_SLV_PARK (1<<31) /* Be parked with DMA slave when idle */
+-#define ARB_CI_PARK (1<<30) /* Be parked with Camera Interface when idle */
+-#define ARB_EX_MEM_PARK (1<<29) /* Be parked with external MEMC when idle */
+-#define ARB_INT_MEM_PARK (1<<28) /* Be parked with internal MEMC when idle */
+-#define ARB_USB_PARK (1<<27) /* Be parked with USB when idle */
+-#define ARB_LCD_PARK (1<<26) /* Be parked with LCD when idle */
+-#define ARB_DMA_PARK (1<<25) /* Be parked with DMA when idle */
+-#define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */
+-#define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */
+-
+ /*
+ * Keypad
+ */
+@@ -2135,74 +1906,6 @@
+ #define KPAS_SO (0x1 << 31)
+ #define KPASMKPx_SO (0x1 << 31)
+
+-/*
+- * UHC: USB Host Controller (OHCI-like) register definitions
+- */
+-#define UHC_BASE_PHYS (0x4C000000)
+-#define UHCREV __REG(0x4C000000) /* UHC HCI Spec Revision */
+-#define UHCHCON __REG(0x4C000004) /* UHC Host Control Register */
+-#define UHCCOMS __REG(0x4C000008) /* UHC Command Status Register */
+-#define UHCINTS __REG(0x4C00000C) /* UHC Interrupt Status Register */
+-#define UHCINTE __REG(0x4C000010) /* UHC Interrupt Enable */
+-#define UHCINTD __REG(0x4C000014) /* UHC Interrupt Disable */
+-#define UHCHCCA __REG(0x4C000018) /* UHC Host Controller Comm. Area */
+-#define UHCPCED __REG(0x4C00001C) /* UHC Period Current Endpt Descr */
+-#define UHCCHED __REG(0x4C000020) /* UHC Control Head Endpt Descr */
+-#define UHCCCED __REG(0x4C000024) /* UHC Control Current Endpt Descr */
+-#define UHCBHED __REG(0x4C000028) /* UHC Bulk Head Endpt Descr */
+-#define UHCBCED __REG(0x4C00002C) /* UHC Bulk Current Endpt Descr */
+-#define UHCDHEAD __REG(0x4C000030) /* UHC Done Head */
+-#define UHCFMI __REG(0x4C000034) /* UHC Frame Interval */
+-#define UHCFMR __REG(0x4C000038) /* UHC Frame Remaining */
+-#define UHCFMN __REG(0x4C00003C) /* UHC Frame Number */
+-#define UHCPERS __REG(0x4C000040) /* UHC Periodic Start */
+-#define UHCLS __REG(0x4C000044) /* UHC Low Speed Threshold */
+-
+-#define UHCRHDA __REG(0x4C000048) /* UHC Root Hub Descriptor A */
+-#define UHCRHDA_NOCP (1 << 12) /* No over current protection */
+-
+-#define UHCRHDB __REG(0x4C00004C) /* UHC Root Hub Descriptor B */
+-#define UHCRHS __REG(0x4C000050) /* UHC Root Hub Status */
+-#define UHCRHPS1 __REG(0x4C000054) /* UHC Root Hub Port 1 Status */
+-#define UHCRHPS2 __REG(0x4C000058) /* UHC Root Hub Port 2 Status */
+-#define UHCRHPS3 __REG(0x4C00005C) /* UHC Root Hub Port 3 Status */
+-
+-#define UHCSTAT __REG(0x4C000060) /* UHC Status Register */
+-#define UHCSTAT_UPS3 (1 << 16) /* USB Power Sense Port3 */
+-#define UHCSTAT_SBMAI (1 << 15) /* System Bus Master Abort Interrupt*/
+-#define UHCSTAT_SBTAI (1 << 14) /* System Bus Target Abort Interrupt*/
+-#define UHCSTAT_UPRI (1 << 13) /* USB Port Resume Interrupt */
+-#define UHCSTAT_UPS2 (1 << 12) /* USB Power Sense Port 2 */
+-#define UHCSTAT_UPS1 (1 << 11) /* USB Power Sense Port 1 */
+-#define UHCSTAT_HTA (1 << 10) /* HCI Target Abort */
+-#define UHCSTAT_HBA (1 << 8) /* HCI Buffer Active */
+-#define UHCSTAT_RWUE (1 << 7) /* HCI Remote Wake Up Event */
+-
+-#define UHCHR __REG(0x4C000064) /* UHC Reset Register */
+-#define UHCHR_SSEP3 (1 << 11) /* Sleep Standby Enable for Port3 */
+-#define UHCHR_SSEP2 (1 << 10) /* Sleep Standby Enable for Port2 */
+-#define UHCHR_SSEP1 (1 << 9) /* Sleep Standby Enable for Port1 */
+-#define UHCHR_PCPL (1 << 7) /* Power control polarity low */
+-#define UHCHR_PSPL (1 << 6) /* Power sense polarity low */
+-#define UHCHR_SSE (1 << 5) /* Sleep Standby Enable */
+-#define UHCHR_UIT (1 << 4) /* USB Interrupt Test */
+-#define UHCHR_SSDC (1 << 3) /* Simulation Scale Down Clock */
+-#define UHCHR_CGR (1 << 2) /* Clock Generation Reset */
+-#define UHCHR_FHR (1 << 1) /* Force Host Controller Reset */
+-#define UHCHR_FSBIR (1 << 0) /* Force System Bus Iface Reset */
+-
+-#define UHCHIE __REG(0x4C000068) /* UHC Interrupt Enable Register*/
+-#define UHCHIE_UPS3IE (1 << 14) /* Power Sense Port3 IntEn */
+-#define UHCHIE_UPRIE (1 << 13) /* Port Resume IntEn */
+-#define UHCHIE_UPS2IE (1 << 12) /* Power Sense Port2 IntEn */
+-#define UHCHIE_UPS1IE (1 << 11) /* Power Sense Port1 IntEn */
+-#define UHCHIE_TAIE (1 << 10) /* HCI Interface Transfer Abort
+- Interrupt Enable*/
+-#define UHCHIE_HBAIE (1 << 8) /* HCI Buffer Active IntEn */
+-#define UHCHIE_RWIE (1 << 7) /* Remote Wake-up IntEn */
+-
+-#define UHCHIT __REG(0x4C00006C) /* UHC Interrupt Test register */
+-
+ /* Camera Interface */
+ #define CICR0 __REG(0x50000000)
+ #define CICR1 __REG(0x50000004)
+@@ -2350,6 +2053,77 @@
+
+ #endif
++#if defined(CONFIG_PXA27x) || defined(CONFIG_PXA3xx)
+/*
-+ * jbd2_journal_clear_features () - Clear a given journal feature in the
-+ * superblock
-+ * @journal: Journal to act on.
-+ * @compat: bitmask of compatible features
-+ * @ro: bitmask of features that force read-only mount
-+ * @incompat: bitmask of incompatible features
-+ *
-+ * Clear a given journal feature as present on the
-+ * superblock.
++ * UHC: USB Host Controller (OHCI-like) register definitions
+ */
-+void jbd2_journal_clear_features(journal_t *journal, unsigned long compat,
-+ unsigned long ro, unsigned long incompat)
-+{
-+ journal_superblock_t *sb;
++#define UHC_BASE_PHYS (0x4C000000)
++#define UHCREV __REG(0x4C000000) /* UHC HCI Spec Revision */
++#define UHCHCON __REG(0x4C000004) /* UHC Host Control Register */
++#define UHCCOMS __REG(0x4C000008) /* UHC Command Status Register */
++#define UHCINTS __REG(0x4C00000C) /* UHC Interrupt Status Register */
++#define UHCINTE __REG(0x4C000010) /* UHC Interrupt Enable */
++#define UHCINTD __REG(0x4C000014) /* UHC Interrupt Disable */
++#define UHCHCCA __REG(0x4C000018) /* UHC Host Controller Comm. Area */
++#define UHCPCED __REG(0x4C00001C) /* UHC Period Current Endpt Descr */
++#define UHCCHED __REG(0x4C000020) /* UHC Control Head Endpt Descr */
++#define UHCCCED __REG(0x4C000024) /* UHC Control Current Endpt Descr */
++#define UHCBHED __REG(0x4C000028) /* UHC Bulk Head Endpt Descr */
++#define UHCBCED __REG(0x4C00002C) /* UHC Bulk Current Endpt Descr */
++#define UHCDHEAD __REG(0x4C000030) /* UHC Done Head */
++#define UHCFMI __REG(0x4C000034) /* UHC Frame Interval */
++#define UHCFMR __REG(0x4C000038) /* UHC Frame Remaining */
++#define UHCFMN __REG(0x4C00003C) /* UHC Frame Number */
++#define UHCPERS __REG(0x4C000040) /* UHC Periodic Start */
++#define UHCLS __REG(0x4C000044) /* UHC Low Speed Threshold */
+
-+ jbd_debug(1, "Clear features 0x%lx/0x%lx/0x%lx\n",
-+ compat, ro, incompat);
++#define UHCRHDA __REG(0x4C000048) /* UHC Root Hub Descriptor A */
++#define UHCRHDA_NOCP (1 << 12) /* No over current protection */
+
-+ sb = journal->j_superblock;
++#define UHCRHDB __REG(0x4C00004C) /* UHC Root Hub Descriptor B */
++#define UHCRHS __REG(0x4C000050) /* UHC Root Hub Status */
++#define UHCRHPS1 __REG(0x4C000054) /* UHC Root Hub Port 1 Status */
++#define UHCRHPS2 __REG(0x4C000058) /* UHC Root Hub Port 2 Status */
++#define UHCRHPS3 __REG(0x4C00005C) /* UHC Root Hub Port 3 Status */
+
-+ sb->s_feature_compat &= ~cpu_to_be32(compat);
-+ sb->s_feature_ro_compat &= ~cpu_to_be32(ro);
-+ sb->s_feature_incompat &= ~cpu_to_be32(incompat);
-+}
-+EXPORT_SYMBOL(jbd2_journal_clear_features);
-
- /**
- * int jbd2_journal_update_format () - Update on-disk journal structure.
-@@ -1633,7 +1973,7 @@ static int journal_init_jbd2_journal_head_cache(void)
- jbd2_journal_head_cache = kmem_cache_create("jbd2_journal_head",
- sizeof(struct journal_head),
- 0, /* offset */
-- 0, /* flags */
-+ SLAB_TEMPORARY, /* flags */
- NULL); /* ctor */
- retval = 0;
- if (jbd2_journal_head_cache == 0) {
-@@ -1900,6 +2240,28 @@ static void __exit jbd2_remove_debugfs_entry(void)
-
- #endif
-
-+#ifdef CONFIG_PROC_FS
++#define UHCSTAT __REG(0x4C000060) /* UHC Status Register */
++#define UHCSTAT_UPS3 (1 << 16) /* USB Power Sense Port3 */
++#define UHCSTAT_SBMAI (1 << 15) /* System Bus Master Abort Interrupt*/
++#define UHCSTAT_SBTAI (1 << 14) /* System Bus Target Abort Interrupt*/
++#define UHCSTAT_UPRI (1 << 13) /* USB Port Resume Interrupt */
++#define UHCSTAT_UPS2 (1 << 12) /* USB Power Sense Port 2 */
++#define UHCSTAT_UPS1 (1 << 11) /* USB Power Sense Port 1 */
++#define UHCSTAT_HTA (1 << 10) /* HCI Target Abort */
++#define UHCSTAT_HBA (1 << 8) /* HCI Buffer Active */
++#define UHCSTAT_RWUE (1 << 7) /* HCI Remote Wake Up Event */
+
-+#define JBD2_STATS_PROC_NAME "fs/jbd2"
++#define UHCHR __REG(0x4C000064) /* UHC Reset Register */
++#define UHCHR_SSEP3 (1 << 11) /* Sleep Standby Enable for Port3 */
++#define UHCHR_SSEP2 (1 << 10) /* Sleep Standby Enable for Port2 */
++#define UHCHR_SSEP1 (1 << 9) /* Sleep Standby Enable for Port1 */
++#define UHCHR_PCPL (1 << 7) /* Power control polarity low */
++#define UHCHR_PSPL (1 << 6) /* Power sense polarity low */
++#define UHCHR_SSE (1 << 5) /* Sleep Standby Enable */
++#define UHCHR_UIT (1 << 4) /* USB Interrupt Test */
++#define UHCHR_SSDC (1 << 3) /* Simulation Scale Down Clock */
++#define UHCHR_CGR (1 << 2) /* Clock Generation Reset */
++#define UHCHR_FHR (1 << 1) /* Force Host Controller Reset */
++#define UHCHR_FSBIR (1 << 0) /* Force System Bus Iface Reset */
+
-+static void __init jbd2_create_jbd_stats_proc_entry(void)
-+{
-+ proc_jbd2_stats = proc_mkdir(JBD2_STATS_PROC_NAME, NULL);
-+}
++#define UHCHIE __REG(0x4C000068) /* UHC Interrupt Enable Register*/
++#define UHCHIE_UPS3IE (1 << 14) /* Power Sense Port3 IntEn */
++#define UHCHIE_UPRIE (1 << 13) /* Port Resume IntEn */
++#define UHCHIE_UPS2IE (1 << 12) /* Power Sense Port2 IntEn */
++#define UHCHIE_UPS1IE (1 << 11) /* Power Sense Port1 IntEn */
++#define UHCHIE_TAIE (1 << 10) /* HCI Interface Transfer Abort
++ Interrupt Enable*/
++#define UHCHIE_HBAIE (1 << 8) /* HCI Buffer Active IntEn */
++#define UHCHIE_RWIE (1 << 7) /* Remote Wake-up IntEn */
+
-+static void __exit jbd2_remove_jbd_stats_proc_entry(void)
-+{
-+ if (proc_jbd2_stats)
-+ remove_proc_entry(JBD2_STATS_PROC_NAME, NULL);
-+}
++#define UHCHIT __REG(0x4C00006C) /* UHC Interrupt Test register */
+
-+#else
++#endif /* CONFIG_PXA27x || CONFIG_PXA3xx */
+
-+#define jbd2_create_jbd_stats_proc_entry() do {} while (0)
-+#define jbd2_remove_jbd_stats_proc_entry() do {} while (0)
+ /* PWRMODE register M field values */
+
+ #define PWRMODE_IDLE 0x1
+diff --git a/include/asm-arm/arch-pxa/pxa2xx-regs.h b/include/asm-arm/arch-pxa/pxa2xx-regs.h
+new file mode 100644
+index 0000000..9553b54
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/pxa2xx-regs.h
+@@ -0,0 +1,84 @@
++/*
++ * linux/include/asm-arm/arch-pxa/pxa2xx-regs.h
++ *
++ * Taken from pxa-regs.h by Russell King
++ *
++ * Author: Nicolas Pitre
++ * Copyright: MontaVista Software Inc.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
+
-+#endif
++#ifndef __PXA2XX_REGS_H
++#define __PXA2XX_REGS_H
+
- struct kmem_cache *jbd2_handle_cache;
-
- static int __init journal_init_handle_cache(void)
-@@ -1907,7 +2269,7 @@ static int __init journal_init_handle_cache(void)
- jbd2_handle_cache = kmem_cache_create("jbd2_journal_handle",
- sizeof(handle_t),
- 0, /* offset */
-- 0, /* flags */
-+ SLAB_TEMPORARY, /* flags */
- NULL); /* ctor */
- if (jbd2_handle_cache == NULL) {
- printk(KERN_EMERG "JBD: failed to create handle cache\n");
-@@ -1955,6 +2317,7 @@ static int __init journal_init(void)
- if (ret != 0)
- jbd2_journal_destroy_caches();
- jbd2_create_debugfs_entry();
-+ jbd2_create_jbd_stats_proc_entry();
- return ret;
- }
-
-@@ -1966,6 +2329,7 @@ static void __exit journal_exit(void)
- printk(KERN_EMERG "JBD: leaked %d journal_heads!\n", n);
- #endif
- jbd2_remove_debugfs_entry();
-+ jbd2_remove_jbd_stats_proc_entry();
- jbd2_journal_destroy_caches();
- }
-
-diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
-index d0ce627..9216806 100644
---- a/fs/jbd2/recovery.c
-+++ b/fs/jbd2/recovery.c
-@@ -21,6 +21,7 @@
- #include <linux/jbd2.h>
- #include <linux/errno.h>
- #include <linux/slab.h>
-+#include <linux/crc32.h>
- #endif
-
- /*
-@@ -316,6 +317,37 @@ static inline unsigned long long read_tag_block(int tag_bytes, journal_block_tag
- return block;
- }
-
+/*
-+ * calc_chksums calculates the checksums for the blocks described in the
-+ * descriptor block.
++ * Memory controller
+ */
-+static int calc_chksums(journal_t *journal, struct buffer_head *bh,
-+ unsigned long *next_log_block, __u32 *crc32_sum)
-+{
-+ int i, num_blks, err;
-+ unsigned long io_block;
-+ struct buffer_head *obh;
+
-+ num_blks = count_tags(journal, bh);
-+ /* Calculate checksum of the descriptor block. */
-+ *crc32_sum = crc32_be(*crc32_sum, (void *)bh->b_data, bh->b_size);
++#define MDCNFG __REG(0x48000000) /* SDRAM Configuration Register 0 */
++#define MDREFR __REG(0x48000004) /* SDRAM Refresh Control Register */
++#define MSC0 __REG(0x48000008) /* Static Memory Control Register 0 */
++#define MSC1 __REG(0x4800000C) /* Static Memory Control Register 1 */
++#define MSC2 __REG(0x48000010) /* Static Memory Control Register 2 */
++#define MECR __REG(0x48000014) /* Expansion Memory (PCMCIA/Compact Flash) Bus Configuration */
++#define SXLCR __REG(0x48000018) /* LCR value to be written to SDRAM-Timing Synchronous Flash */
++#define SXCNFG __REG(0x4800001C) /* Synchronous Static Memory Control Register */
++#define SXMRS __REG(0x48000024) /* MRS value to be written to Synchronous Flash or SMROM */
++#define MCMEM0 __REG(0x48000028) /* Card interface Common Memory Space Socket 0 Timing */
++#define MCMEM1 __REG(0x4800002C) /* Card interface Common Memory Space Socket 1 Timing */
++#define MCATT0 __REG(0x48000030) /* Card interface Attribute Space Socket 0 Timing Configuration */
++#define MCATT1 __REG(0x48000034) /* Card interface Attribute Space Socket 1 Timing Configuration */
++#define MCIO0 __REG(0x48000038) /* Card interface I/O Space Socket 0 Timing Configuration */
++#define MCIO1 __REG(0x4800003C) /* Card interface I/O Space Socket 1 Timing Configuration */
++#define MDMRS __REG(0x48000040) /* MRS value to be written to SDRAM */
++#define BOOT_DEF __REG(0x48000044) /* Read-Only Boot-Time Register. Contains BOOT_SEL and PKG_SEL */
+
-+ for (i = 0; i < num_blks; i++) {
-+ io_block = (*next_log_block)++;
-+ wrap(journal, *next_log_block);
-+ err = jread(&obh, journal, io_block);
-+ if (err) {
-+ printk(KERN_ERR "JBD: IO error %d recovering block "
-+ "%lu in log\n", err, io_block);
-+ return 1;
-+ } else {
-+ *crc32_sum = crc32_be(*crc32_sum, (void *)obh->b_data,
-+ obh->b_size);
-+ }
-+ }
-+ return 0;
-+}
++/*
++ * More handy macros for PCMCIA
++ *
++ * Arg is socket number
++ */
++#define MCMEM(s) __REG2(0x48000028, (s)<<2 ) /* Card interface Common Memory Space Socket s Timing */
++#define MCATT(s) __REG2(0x48000030, (s)<<2 ) /* Card interface Attribute Space Socket s Timing Configuration */
++#define MCIO(s) __REG2(0x48000038, (s)<<2 ) /* Card interface I/O Space Socket s Timing Configuration */
+
- static int do_one_pass(journal_t *journal,
- struct recovery_info *info, enum passtype pass)
- {
-@@ -328,6 +360,7 @@ static int do_one_pass(journal_t *journal,
- unsigned int sequence;
- int blocktype;
- int tag_bytes = journal_tag_bytes(journal);
-+ __u32 crc32_sum = ~0; /* Transactional Checksums */
-
- /* Precompute the maximum metadata descriptors in a descriptor block */
- int MAX_BLOCKS_PER_DESC;
-@@ -419,12 +452,26 @@ static int do_one_pass(journal_t *journal,
- switch(blocktype) {
- case JBD2_DESCRIPTOR_BLOCK:
- /* If it is a valid descriptor block, replay it
-- * in pass REPLAY; otherwise, just skip over the
-- * blocks it describes. */
-+ * in pass REPLAY; if journal_checksums enabled, then
-+ * calculate checksums in PASS_SCAN, otherwise,
-+ * just skip over the blocks it describes. */
- if (pass != PASS_REPLAY) {
-+ if (pass == PASS_SCAN &&
-+ JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM) &&
-+ !info->end_transaction) {
-+ if (calc_chksums(journal, bh,
-+ &next_log_block,
-+ &crc32_sum)) {
-+ put_bh(bh);
-+ break;
-+ }
-+ put_bh(bh);
-+ continue;
-+ }
- next_log_block += count_tags(journal, bh);
- wrap(journal, next_log_block);
-- brelse(bh);
-+ put_bh(bh);
- continue;
- }
-
-@@ -516,9 +563,96 @@ static int do_one_pass(journal_t *journal,
- continue;
-
- case JBD2_COMMIT_BLOCK:
-- /* Found an expected commit block: not much to
-- * do other than move on to the next sequence
-+ /* How to differentiate between interrupted commit
-+ * and journal corruption ?
-+ *
-+ * {nth transaction}
-+ * Checksum Verification Failed
-+ * |
-+ * ____________________
-+ * | |
-+ * async_commit sync_commit
-+ * | |
-+ * | GO TO NEXT "Journal Corruption"
-+ * | TRANSACTION
-+ * |
-+ * {(n+1)th transaction}
-+ * |
-+ * _______|______________
-+ * | |
-+ * Commit block found Commit block not found
-+ * | |
-+ * "Journal Corruption" |
-+ * _____________|_________
-+ * | |
-+ * nth trans corrupt OR nth trans
-+ * and (n+1)th interrupted interrupted
-+ * before commit block
-+ * could reach the disk.
-+ * (Cannot find the difference in above
-+ * mentioned conditions. Hence assume
-+ * "Interrupted Commit".)
-+ */
++/* MECR register defines */
++#define MECR_NOS (1 << 0) /* Number Of Sockets: 0 -> 1 sock, 1 -> 2 sock */
++#define MECR_CIT (1 << 1) /* Card Is There: 0 -> no card, 1 -> card inserted */
+
-+ /* Found an expected commit block: if checksums
-+ * are present verify them in PASS_SCAN; else not
-+ * much to do other than move on to the next sequence
- * number. */
-+ if (pass == PASS_SCAN &&
-+ JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_COMPAT_CHECKSUM)) {
-+ int chksum_err, chksum_seen;
-+ struct commit_header *cbh =
-+ (struct commit_header *)bh->b_data;
-+ unsigned found_chksum =
-+ be32_to_cpu(cbh->h_chksum[0]);
++#define MDREFR_K0DB4 (1 << 29) /* SDCLK0 Divide by 4 Control/Status */
++#define MDREFR_K2FREE (1 << 25) /* SDRAM Free-Running Control */
++#define MDREFR_K1FREE (1 << 24) /* SDRAM Free-Running Control */
++#define MDREFR_K0FREE (1 << 23) /* SDRAM Free-Running Control */
++#define MDREFR_SLFRSH (1 << 22) /* SDRAM Self-Refresh Control/Status */
++#define MDREFR_APD (1 << 20) /* SDRAM/SSRAM Auto-Power-Down Enable */
++#define MDREFR_K2DB2 (1 << 19) /* SDCLK2 Divide by 2 Control/Status */
++#define MDREFR_K2RUN (1 << 18) /* SDCLK2 Run Control/Status */
++#define MDREFR_K1DB2 (1 << 17) /* SDCLK1 Divide by 2 Control/Status */
++#define MDREFR_K1RUN (1 << 16) /* SDCLK1 Run Control/Status */
++#define MDREFR_E1PIN (1 << 15) /* SDCKE1 Level Control/Status */
++#define MDREFR_K0DB2 (1 << 14) /* SDCLK0 Divide by 2 Control/Status */
++#define MDREFR_K0RUN (1 << 13) /* SDCLK0 Run Control/Status */
++#define MDREFR_E0PIN (1 << 12) /* SDCKE0 Level Control/Status */
+
-+ chksum_err = chksum_seen = 0;
+
-+ if (info->end_transaction) {
-+ printk(KERN_ERR "JBD: Transaction %u "
-+ "found to be corrupt.\n",
-+ next_commit_ID - 1);
-+ brelse(bh);
-+ break;
-+ }
++#ifdef CONFIG_PXA27x
+
-+ if (crc32_sum == found_chksum &&
-+ cbh->h_chksum_type == JBD2_CRC32_CHKSUM &&
-+ cbh->h_chksum_size ==
-+ JBD2_CRC32_CHKSUM_SIZE)
-+ chksum_seen = 1;
-+ else if (!(cbh->h_chksum_type == 0 &&
-+ cbh->h_chksum_size == 0 &&
-+ found_chksum == 0 &&
-+ !chksum_seen))
-+ /*
-+ * If fs is mounted using an old kernel and then
-+ * kernel with journal_chksum is used then we
-+ * get a situation where the journal flag has
-+ * checksum flag set but checksums are not
-+ * present i.e chksum = 0, in the individual
-+ * commit blocks.
-+ * Hence to avoid checksum failures, in this
-+ * situation, this extra check is added.
-+ */
-+ chksum_err = 1;
++#define ARB_CNTRL __REG(0x48000048) /* Arbiter Control Register */
+
-+ if (chksum_err) {
-+ info->end_transaction = next_commit_ID;
++#define ARB_DMA_SLV_PARK (1<<31) /* Be parked with DMA slave when idle */
++#define ARB_CI_PARK (1<<30) /* Be parked with Camera Interface when idle */
++#define ARB_EX_MEM_PARK (1<<29) /* Be parked with external MEMC when idle */
++#define ARB_INT_MEM_PARK (1<<28) /* Be parked with internal MEMC when idle */
++#define ARB_USB_PARK (1<<27) /* Be parked with USB when idle */
++#define ARB_LCD_PARK (1<<26) /* Be parked with LCD when idle */
++#define ARB_DMA_PARK (1<<25) /* Be parked with DMA when idle */
++#define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */
++#define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */
+
-+ if (!JBD2_HAS_COMPAT_FEATURE(journal,
-+ JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)){
-+ printk(KERN_ERR
-+ "JBD: Transaction %u "
-+ "found to be corrupt.\n",
-+ next_commit_ID);
-+ brelse(bh);
-+ break;
-+ }
-+ }
-+ crc32_sum = ~0;
-+ }
- brelse(bh);
- next_commit_ID++;
- continue;
-@@ -554,9 +688,10 @@ static int do_one_pass(journal_t *journal,
- * transaction marks the end of the valid log.
- */
-
-- if (pass == PASS_SCAN)
-- info->end_transaction = next_commit_ID;
-- else {
-+ if (pass == PASS_SCAN) {
-+ if (!info->end_transaction)
-+ info->end_transaction = next_commit_ID;
-+ } else {
- /* It's really bad news if different passes end up at
- * different places (but possible due to IO errors). */
- if (info->end_transaction != next_commit_ID) {
-diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
-index 3595fd4..df36f42 100644
---- a/fs/jbd2/revoke.c
-+++ b/fs/jbd2/revoke.c
-@@ -171,13 +171,15 @@ int __init jbd2_journal_init_revoke_caches(void)
- {
- jbd2_revoke_record_cache = kmem_cache_create("jbd2_revoke_record",
- sizeof(struct jbd2_revoke_record_s),
-- 0, SLAB_HWCACHE_ALIGN, NULL);
-+ 0,
-+ SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY,
-+ NULL);
- if (jbd2_revoke_record_cache == 0)
- return -ENOMEM;
-
- jbd2_revoke_table_cache = kmem_cache_create("jbd2_revoke_table",
- sizeof(struct jbd2_revoke_table_s),
-- 0, 0, NULL);
-+ 0, SLAB_TEMPORARY, NULL);
- if (jbd2_revoke_table_cache == 0) {
- kmem_cache_destroy(jbd2_revoke_record_cache);
- jbd2_revoke_record_cache = NULL;
-diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
-index b1fcf2b..b9b0b6f 100644
---- a/fs/jbd2/transaction.c
-+++ b/fs/jbd2/transaction.c
-@@ -54,11 +54,13 @@ jbd2_get_transaction(journal_t *journal, transaction_t *transaction)
- spin_lock_init(&transaction->t_handle_lock);
-
- /* Set up the commit timer for the new transaction. */
-- journal->j_commit_timer.expires = transaction->t_expires;
-+ journal->j_commit_timer.expires = round_jiffies(transaction->t_expires);
- add_timer(&journal->j_commit_timer);
-
- J_ASSERT(journal->j_running_transaction == NULL);
- journal->j_running_transaction = transaction;
-+ transaction->t_max_wait = 0;
-+ transaction->t_start = jiffies;
-
- return transaction;
- }
-@@ -85,6 +87,7 @@ static int start_this_handle(journal_t *journal, handle_t *handle)
- int nblocks = handle->h_buffer_credits;
- transaction_t *new_transaction = NULL;
- int ret = 0;
-+ unsigned long ts = jiffies;
++#endif
++
++#endif
+diff --git a/include/asm-arm/arch-pxa/pxa2xx_spi.h b/include/asm-arm/arch-pxa/pxa2xx_spi.h
+index acc7ec7..3459fb2 100644
+--- a/include/asm-arm/arch-pxa/pxa2xx_spi.h
++++ b/include/asm-arm/arch-pxa/pxa2xx_spi.h
+@@ -22,32 +22,8 @@
+ #define PXA2XX_CS_ASSERT (0x01)
+ #define PXA2XX_CS_DEASSERT (0x02)
- if (nblocks > journal->j_max_transaction_buffers) {
- printk(KERN_ERR "JBD: %s wants too many credits (%d > %d)\n",
-@@ -217,6 +220,12 @@ repeat_locked:
- /* OK, account for the buffers that this operation expects to
- * use and add the handle to the running transaction. */
+-#if defined(CONFIG_PXA25x)
+-#define CLOCK_SPEED_HZ 3686400
+-#define SSP1_SerClkDiv(x) (((CLOCK_SPEED_HZ/2/(x+1))<<8)&0x0000ff00)
+-#define SSP2_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
+-#define SSP3_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
+-#elif defined(CONFIG_PXA27x)
+-#define CLOCK_SPEED_HZ 13000000
+-#define SSP1_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
+-#define SSP2_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
+-#define SSP3_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
+-#endif
+-
+-#define SSP1_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(1)))))
+-#define SSP2_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(2)))))
+-#define SSP3_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(3)))))
+-
+-enum pxa_ssp_type {
+- SSP_UNDEFINED = 0,
+- PXA25x_SSP, /* pxa 210, 250, 255, 26x */
+- PXA25x_NSSP, /* pxa 255, 26x (including ASSP) */
+- PXA27x_SSP,
+-};
+-
+ /* device.platform_data for SSP controller devices */
+ struct pxa2xx_spi_master {
+- enum pxa_ssp_type ssp_type;
+ u32 clock_enable;
+ u16 num_chipselect;
+ u8 enable_dma;
+diff --git a/include/asm-arm/arch-pxa/pxa3xx-regs.h b/include/asm-arm/arch-pxa/pxa3xx-regs.h
+index 3900a0c..66d5411 100644
+--- a/include/asm-arm/arch-pxa/pxa3xx-regs.h
++++ b/include/asm-arm/arch-pxa/pxa3xx-regs.h
+@@ -14,6 +14,92 @@
+ #define __ASM_ARCH_PXA3XX_REGS_H
-+ if (time_after(transaction->t_start, ts)) {
-+ ts = jbd2_time_diff(ts, transaction->t_start);
-+ if (ts > transaction->t_max_wait)
-+ transaction->t_max_wait = ts;
-+ }
+ /*
++ * Slave Power Management Unit

++ */
++#define ASCR __REG(0x40f40000) /* Application Subsystem Power Status/Configuration */
++#define ARSR __REG(0x40f40004) /* Application Subsystem Reset Status */
++#define AD3ER __REG(0x40f40008) /* Application Subsystem Wake-Up from D3 Enable */
++#define AD3SR __REG(0x40f4000c) /* Application Subsystem Wake-Up from D3 Status */
++#define AD2D0ER __REG(0x40f40010) /* Application Subsystem Wake-Up from D2 to D0 Enable */
++#define AD2D0SR __REG(0x40f40014) /* Application Subsystem Wake-Up from D2 to D0 Status */
++#define AD2D1ER __REG(0x40f40018) /* Application Subsystem Wake-Up from D2 to D1 Enable */
++#define AD2D1SR __REG(0x40f4001c) /* Application Subsystem Wake-Up from D2 to D1 Status */
++#define AD1D0ER __REG(0x40f40020) /* Application Subsystem Wake-Up from D1 to D0 Enable */
++#define AD1D0SR __REG(0x40f40024) /* Application Subsystem Wake-Up from D1 to D0 Status */
++#define AGENP __REG(0x40f4002c) /* Application Subsystem General Purpose */
++#define AD3R __REG(0x40f40030) /* Application Subsystem D3 Configuration */
++#define AD2R __REG(0x40f40034) /* Application Subsystem D2 Configuration */
++#define AD1R __REG(0x40f40038) /* Application Subsystem D1 Configuration */
+
- handle->h_transaction = transaction;
- transaction->t_outstanding_credits += nblocks;
- transaction->t_updates++;
-@@ -232,6 +241,8 @@ out:
- return ret;
- }
-
-+static struct lock_class_key jbd2_handle_key;
++/*
++ * Application Subsystem Configuration bits.
++ */
++#define ASCR_RDH (1 << 31)
++#define ASCR_D1S (1 << 2)
++#define ASCR_D2S (1 << 1)
++#define ASCR_D3S (1 << 0)
+
- /* Allocate a new handle. This should probably be in a slab... */
- static handle_t *new_handle(int nblocks)
- {
-@@ -242,6 +253,9 @@ static handle_t *new_handle(int nblocks)
- handle->h_buffer_credits = nblocks;
- handle->h_ref = 1;
-
-+ lockdep_init_map(&handle->h_lockdep_map, "jbd2_handle",
-+ &jbd2_handle_key, 0);
++/*
++ * Application Reset Status bits.
++ */
++#define ARSR_GPR (1 << 3)
++#define ARSR_LPMR (1 << 2)
++#define ARSR_WDT (1 << 1)
++#define ARSR_HWR (1 << 0)
+
- return handle;
- }
-
-@@ -284,7 +298,11 @@ handle_t *jbd2_journal_start(journal_t *journal, int nblocks)
- jbd2_free_handle(handle);
- current->journal_info = NULL;
- handle = ERR_PTR(err);
-+ goto out;
- }
++/*
++ * Application Subsystem Wake-Up bits.
++ */
++#define ADXER_WRTC (1 << 31) /* RTC */
++#define ADXER_WOST (1 << 30) /* OS Timer */
++#define ADXER_WTSI (1 << 29) /* Touchscreen */
++#define ADXER_WUSBH (1 << 28) /* USB host */
++#define ADXER_WUSB2 (1 << 26) /* USB client 2.0 */
++#define ADXER_WMSL0 (1 << 24) /* MSL port 0*/
++#define ADXER_WDMUX3 (1 << 23) /* USB EDMUX3 */
++#define ADXER_WDMUX2 (1 << 22) /* USB EDMUX2 */
++#define ADXER_WKP (1 << 21) /* Keypad */
++#define ADXER_WUSIM1 (1 << 20) /* USIM Port 1 */
++#define ADXER_WUSIM0 (1 << 19) /* USIM Port 0 */
++#define ADXER_WOTG (1 << 16) /* USBOTG input */
++#define ADXER_MFP_WFLASH (1 << 15) /* MFP: Data flash busy */
++#define ADXER_MFP_GEN12 (1 << 14) /* MFP: MMC3/GPIO/OST inputs */
++#define ADXER_MFP_WMMC2 (1 << 13) /* MFP: MMC2 */
++#define ADXER_MFP_WMMC1 (1 << 12) /* MFP: MMC1 */
++#define ADXER_MFP_WI2C (1 << 11) /* MFP: I2C */
++#define ADXER_MFP_WSSP4 (1 << 10) /* MFP: SSP4 */
++#define ADXER_MFP_WSSP3 (1 << 9) /* MFP: SSP3 */
++#define ADXER_MFP_WMAXTRIX (1 << 8) /* MFP: matrix keypad */
++#define ADXER_MFP_WUART3 (1 << 7) /* MFP: UART3 */
++#define ADXER_MFP_WUART2 (1 << 6) /* MFP: UART2 */
++#define ADXER_MFP_WUART1 (1 << 5) /* MFP: UART1 */
++#define ADXER_MFP_WSSP2 (1 << 4) /* MFP: SSP2 */
++#define ADXER_MFP_WSSP1 (1 << 3) /* MFP: SSP1 */
++#define ADXER_MFP_WAC97 (1 << 2) /* MFP: AC97 */
++#define ADXER_WEXTWAKE1 (1 << 1) /* External Wake 1 */
++#define ADXER_WEXTWAKE0 (1 << 0) /* External Wake 0 */
+
-+ lock_acquire(&handle->h_lockdep_map, 0, 0, 0, 2, _THIS_IP_);
-+out:
- return handle;
- }
-
-@@ -1164,7 +1182,7 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
- }
-
- /* That test should have eliminated the following case: */
-- J_ASSERT_JH(jh, jh->b_frozen_data == 0);
-+ J_ASSERT_JH(jh, jh->b_frozen_data == NULL);
-
- JBUFFER_TRACE(jh, "file as BJ_Metadata");
- spin_lock(&journal->j_list_lock);
-@@ -1410,6 +1428,8 @@ int jbd2_journal_stop(handle_t *handle)
- spin_unlock(&journal->j_state_lock);
- }
-
-+ lock_release(&handle->h_lockdep_map, 1, _THIS_IP_);
++/*
++ * AD3R/AD2R/AD1R bits. R2-R5 are only defined for PXA320.
++ */
++#define ADXR_L2 (1 << 8)
++#define ADXR_R5 (1 << 5)
++#define ADXR_R4 (1 << 4)
++#define ADXR_R3 (1 << 3)
++#define ADXR_R2 (1 << 2)
++#define ADXR_R1 (1 << 1)
++#define ADXR_R0 (1 << 0)
+
- jbd2_free_handle(handle);
- return err;
- }
-@@ -1512,7 +1532,7 @@ void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh)
-
- J_ASSERT_JH(jh, jh->b_jlist < BJ_Types);
- if (jh->b_jlist != BJ_None)
-- J_ASSERT_JH(jh, transaction != 0);
-+ J_ASSERT_JH(jh, transaction != NULL);
-
- switch (jh->b_jlist) {
- case BJ_None:
-@@ -1581,11 +1601,11 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
- if (buffer_locked(bh) || buffer_dirty(bh))
- goto out;
-
-- if (jh->b_next_transaction != 0)
-+ if (jh->b_next_transaction != NULL)
- goto out;
-
- spin_lock(&journal->j_list_lock);
-- if (jh->b_transaction != 0 && jh->b_cp_transaction == 0) {
-+ if (jh->b_transaction != NULL && jh->b_cp_transaction == NULL) {
- if (jh->b_jlist == BJ_SyncData || jh->b_jlist == BJ_Locked) {
- /* A written-back ordered data buffer */
- JBUFFER_TRACE(jh, "release data");
-@@ -1593,7 +1613,7 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
- jbd2_journal_remove_journal_head(bh);
- __brelse(bh);
- }
-- } else if (jh->b_cp_transaction != 0 && jh->b_transaction == 0) {
-+ } else if (jh->b_cp_transaction != NULL && jh->b_transaction == NULL) {
- /* written-back checkpointed metadata buffer */
- if (jh->b_jlist == BJ_None) {
- JBUFFER_TRACE(jh, "remove from checkpoint list");
-@@ -1953,7 +1973,7 @@ void __jbd2_journal_file_buffer(struct journal_head *jh,
-
- J_ASSERT_JH(jh, jh->b_jlist < BJ_Types);
- J_ASSERT_JH(jh, jh->b_transaction == transaction ||
-- jh->b_transaction == 0);
-+ jh->b_transaction == NULL);
-
- if (jh->b_transaction && jh->b_jlist == jlist)
- return;
-diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
-index df25ecc..4dcc058 100644
---- a/fs/jfs/jfs_dtree.c
-+++ b/fs/jfs/jfs_dtree.c
-@@ -284,11 +284,11 @@ static struct dir_table_slot *find_index(struct inode *ip, u32 index,
- release_metapage(*mp);
- *mp = NULL;
- }
-- if (*mp == 0) {
-+ if (!(*mp)) {
- *lblock = blkno;
- *mp = read_index_page(ip, blkno);
- }
-- if (*mp == 0) {
-+ if (!(*mp)) {
- jfs_err("free_index: error reading directory table");
- return NULL;
- }
-@@ -413,7 +413,8 @@ static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
- }
- ip->i_size = PSIZE;
-
-- if ((mp = get_index_page(ip, 0)) == 0) {
-+ mp = get_index_page(ip, 0);
-+ if (!mp) {
- jfs_err("add_index: get_metapage failed!");
- xtTruncate(tid, ip, 0, COMMIT_PWMAP);
- memcpy(&jfs_ip->i_dirtable, temp_table,
-@@ -461,7 +462,7 @@ static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
- } else
- mp = read_index_page(ip, blkno);
-
-- if (mp == 0) {
-+ if (!mp) {
- jfs_err("add_index: get/read_metapage failed!");
- goto clean_up;
- }
-@@ -499,7 +500,7 @@ static void free_index(tid_t tid, struct inode *ip, u32 index, u32 next)
-
- dirtab_slot = find_index(ip, index, &mp, &lblock);
-
-- if (dirtab_slot == 0)
-+ if (!dirtab_slot)
- return;
-
- dirtab_slot->flag = DIR_INDEX_FREE;
-@@ -526,7 +527,7 @@ static void modify_index(tid_t tid, struct inode *ip, u32 index, s64 bn,
-
- dirtab_slot = find_index(ip, index, mp, lblock);
-
-- if (dirtab_slot == 0)
-+ if (!dirtab_slot)
- return;
-
- DTSaddress(dirtab_slot, bn);
-@@ -552,7 +553,7 @@ static int read_index(struct inode *ip, u32 index,
- struct dir_table_slot *slot;
-
- slot = find_index(ip, index, &mp, &lblock);
-- if (slot == 0) {
-+ if (!slot) {
- return -EIO;
- }
-
-@@ -592,10 +593,8 @@ int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
- struct component_name ciKey;
- struct super_block *sb = ip->i_sb;
-
-- ciKey.name =
-- (wchar_t *) kmalloc((JFS_NAME_MAX + 1) * sizeof(wchar_t),
-- GFP_NOFS);
-- if (ciKey.name == 0) {
-+ ciKey.name = kmalloc((JFS_NAME_MAX + 1) * sizeof(wchar_t), GFP_NOFS);
-+ if (!ciKey.name) {
- rc = -ENOMEM;
- goto dtSearch_Exit2;
- }
-@@ -957,10 +956,8 @@ static int dtSplitUp(tid_t tid,
- smp = split->mp;
- sp = DT_PAGE(ip, smp);
-
-- key.name =
-- (wchar_t *) kmalloc((JFS_NAME_MAX + 2) * sizeof(wchar_t),
-- GFP_NOFS);
-- if (key.name == 0) {
-+ key.name = kmalloc((JFS_NAME_MAX + 2) * sizeof(wchar_t), GFP_NOFS);
-+ if (!key.name) {
- DT_PUTPAGE(smp);
- rc = -ENOMEM;
- goto dtSplitUp_Exit;
-diff --git a/fs/jfs/jfs_dtree.h b/fs/jfs/jfs_dtree.h
-index 8561c6e..cdac2d5 100644
---- a/fs/jfs/jfs_dtree.h
-+++ b/fs/jfs/jfs_dtree.h
-@@ -74,7 +74,7 @@ struct idtentry {
- #define DTIHDRDATALEN 11
++/*
++ * Values for PWRMODE CP15 register
++ */
++#define PXA3xx_PM_S3D4C4 0x07 /* aka deep sleep */
++#define PXA3xx_PM_S2D3C4 0x06 /* aka sleep */
++#define PXA3xx_PM_S0D2C2 0x03 /* aka standby */
++#define PXA3xx_PM_S0D1C2 0x02 /* aka LCD refresh */
++#define PXA3xx_PM_S0D0C1 0x01
++
++/*
+ * Application Subsystem Clock
+ */
+ #define ACCR __REG(0x41340000) /* Application Subsystem Clock Configuration Register */
+diff --git a/include/asm-arm/arch-pxa/regs-ssp.h b/include/asm-arm/arch-pxa/regs-ssp.h
+new file mode 100644
+index 0000000..991cb68
+--- /dev/null
++++ b/include/asm-arm/arch-pxa/regs-ssp.h
+@@ -0,0 +1,112 @@
++#ifndef __ASM_ARCH_REGS_SSP_H
++#define __ASM_ARCH_REGS_SSP_H
++
++/*
++ * SSP Serial Port Registers
++ * PXA250, PXA255, PXA26x and PXA27x SSP controllers are all slightly different.
++ * PXA255, PXA26x and PXA27x have extra ports, registers and bits.
++ */
++
++#define SSCR0 (0x00) /* SSP Control Register 0 */
++#define SSCR1 (0x04) /* SSP Control Register 1 */
++#define SSSR (0x08) /* SSP Status Register */
++#define SSITR (0x0C) /* SSP Interrupt Test Register */
++#define SSDR (0x10) /* SSP Data Write/Data Read Register */
++
++#define SSTO (0x28) /* SSP Time Out Register */
++#define SSPSP (0x2C) /* SSP Programmable Serial Protocol */
++#define SSTSA (0x30) /* SSP Tx Timeslot Active */
++#define SSRSA (0x34) /* SSP Rx Timeslot Active */
++#define SSTSS (0x38) /* SSP Timeslot Status */
++#define SSACD (0x3C) /* SSP Audio Clock Divider */
++
++/* Common PXA2xx bits first */
++#define SSCR0_DSS (0x0000000f) /* Data Size Select (mask) */
++#define SSCR0_DataSize(x) ((x) - 1) /* Data Size Select [4..16] */
++#define SSCR0_FRF (0x00000030) /* FRame Format (mask) */
++#define SSCR0_Motorola (0x0 << 4) /* Motorola's Serial Peripheral Interface (SPI) */
++#define SSCR0_TI (0x1 << 4) /* Texas Instruments' Synchronous Serial Protocol (SSP) */
++#define SSCR0_National (0x2 << 4) /* National Microwire */
++#define SSCR0_ECS (1 << 6) /* External clock select */
++#define SSCR0_SSE (1 << 7) /* Synchronous Serial Port Enable */
++#if defined(CONFIG_PXA25x)
++#define SSCR0_SCR (0x0000ff00) /* Serial Clock Rate (mask) */
++#define SSCR0_SerClkDiv(x) ((((x) - 2)/2) << 8) /* Divisor [2..512] */
++#elif defined(CONFIG_PXA27x)
++#define SSCR0_SCR (0x000fff00) /* Serial Clock Rate (mask) */
++#define SSCR0_SerClkDiv(x) (((x) - 1) << 8) /* Divisor [1..4096] */
++#define SSCR0_EDSS (1 << 20) /* Extended data size select */
++#define SSCR0_NCS (1 << 21) /* Network clock select */
++#define SSCR0_RIM (1 << 22) /* Receive FIFO overrun interrupt mask */
++#define SSCR0_TUM (1 << 23) /* Transmit FIFO underrun interrupt mask */
++#define SSCR0_FRDC (0x07000000) /* Frame rate divider control (mask) */
++#define SSCR0_SlotsPerFrm(x) (((x) - 1) << 24) /* Time slots per frame [1..8] */
++#define SSCR0_ADC (1 << 30) /* Audio clock select */
++#define SSCR0_MOD (1 << 31) /* Mode (normal or network) */
++#endif
++
++#define SSCR1_RIE (1 << 0) /* Receive FIFO Interrupt Enable */
++#define SSCR1_TIE (1 << 1) /* Transmit FIFO Interrupt Enable */
++#define SSCR1_LBM (1 << 2) /* Loop-Back Mode */
++#define SSCR1_SPO (1 << 3) /* Motorola SPI SSPSCLK polarity setting */
++#define SSCR1_SPH (1 << 4) /* Motorola SPI SSPSCLK phase setting */
++#define SSCR1_MWDS (1 << 5) /* Microwire Transmit Data Size */
++#define SSCR1_TFT (0x000003c0) /* Transmit FIFO Threshold (mask) */
++#define SSCR1_TxTresh(x) (((x) - 1) << 6) /* level [1..16] */
++#define SSCR1_RFT (0x00003c00) /* Receive FIFO Threshold (mask) */
++#define SSCR1_RxTresh(x) (((x) - 1) << 10) /* level [1..16] */
++
++#define SSSR_TNF (1 << 2) /* Transmit FIFO Not Full */
++#define SSSR_RNE (1 << 3) /* Receive FIFO Not Empty */
++#define SSSR_BSY (1 << 4) /* SSP Busy */
++#define SSSR_TFS (1 << 5) /* Transmit FIFO Service Request */
++#define SSSR_RFS (1 << 6) /* Receive FIFO Service Request */
++#define SSSR_ROR (1 << 7) /* Receive FIFO Overrun */
++
++#define SSCR0_TIM (1 << 23) /* Transmit FIFO Under Run Interrupt Mask */
++#define SSCR0_RIM (1 << 22) /* Receive FIFO Over Run interrupt Mask */
++#define SSCR0_NCS (1 << 21) /* Network Clock Select */
++#define SSCR0_EDSS (1 << 20) /* Extended Data Size Select */
++
++/* extra bits in PXA255, PXA26x and PXA27x SSP ports */
++#define SSCR0_TISSP (1 << 4) /* TI Sync Serial Protocol */
++#define SSCR0_PSP (3 << 4) /* PSP - Programmable Serial Protocol */
++#define SSCR1_TTELP (1 << 31) /* TXD Tristate Enable Last Phase */
++#define SSCR1_TTE (1 << 30) /* TXD Tristate Enable */
++#define SSCR1_EBCEI (1 << 29) /* Enable Bit Count Error interrupt */
++#define SSCR1_SCFR (1 << 28) /* Slave Clock free Running */
++#define SSCR1_ECRA (1 << 27) /* Enable Clock Request A */
++#define SSCR1_ECRB (1 << 26) /* Enable Clock request B */
++#define SSCR1_SCLKDIR (1 << 25) /* Serial Bit Rate Clock Direction */
++#define SSCR1_SFRMDIR (1 << 24) /* Frame Direction */
++#define SSCR1_RWOT (1 << 23) /* Receive Without Transmit */
++#define SSCR1_TRAIL (1 << 22) /* Trailing Byte */
++#define SSCR1_TSRE (1 << 21) /* Transmit Service Request Enable */
++#define SSCR1_RSRE (1 << 20) /* Receive Service Request Enable */
++#define SSCR1_TINTE (1 << 19) /* Receiver Time-out Interrupt enable */
++#define SSCR1_PINTE (1 << 18) /* Peripheral Trailing Byte Interrupt Enable */
++#define SSCR1_STRF (1 << 15) /* Select FIFO or EFWR */
++#define SSCR1_EFWR (1 << 14) /* Enable FIFO Write/Read */
++
++#define SSSR_BCE (1 << 23) /* Bit Count Error */
++#define SSSR_CSS (1 << 22) /* Clock Synchronisation Status */
++#define SSSR_TUR (1 << 21) /* Transmit FIFO Under Run */
++#define SSSR_EOC (1 << 20) /* End Of Chain */
++#define SSSR_TINT (1 << 19) /* Receiver Time-out Interrupt */
++#define SSSR_PINT (1 << 18) /* Peripheral Trailing Byte Interrupt */
++
++#define SSPSP_FSRT (1 << 25) /* Frame Sync Relative Timing */
++#define SSPSP_DMYSTOP(x) ((x) << 23) /* Dummy Stop */
++#define SSPSP_SFRMWDTH(x) ((x) << 16) /* Serial Frame Width */
++#define SSPSP_SFRMDLY(x) ((x) << 9) /* Serial Frame Delay */
++#define SSPSP_DMYSTRT(x) ((x) << 7) /* Dummy Start */
++#define SSPSP_STRTDLY(x) ((x) << 4) /* Start Delay */
++#define SSPSP_ETDS (1 << 3) /* End of Transfer data State */
++#define SSPSP_SFRMP (1 << 2) /* Serial Frame Polarity */
++#define SSPSP_SCMODE(x) ((x) << 0) /* Serial Bit Rate Clock Mode */
++
++#define SSACD_SCDB (1 << 3) /* SSPSYSCLK Divider Bypass */
++#define SSACD_ACPS(x) ((x) << 4) /* Audio clock PLL select */
++#define SSACD_ACDS(x) ((x) << 0) /* Audio clock divider select */
++
++#endif /* __ASM_ARCH_REGS_SSP_H */
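As a rough illustration of how the SSCR0 bit definitions added above compose into a register value, the sketch below builds a control word for 16-bit SPI frames on a PXA27x-style port. The macro values are copied from the hunk; the helper function and the chosen configuration are illustrative assumptions, not kernel code.

```c
/* Standalone sketch: composing an SSCR0 value from the bit
 * definitions in the hunk above (PXA27x divisor encoding assumed). */
#define SSCR0_DSS          (0x0000000f)        /* Data Size Select (mask) */
#define SSCR0_DataSize(x)  ((x) - 1)           /* data size 4..16 bits */
#define SSCR0_Motorola     (0x0 << 4)          /* SPI frame format */
#define SSCR0_SSE          (1 << 7)            /* port enable */
#define SSCR0_SerClkDiv(x) (((x) - 1) << 8)    /* PXA27x divisor 1..4096 */

/* 16-bit SPI frames, serial clock divided by 4, port enabled */
static unsigned long sscr0_spi16_div4(void)
{
        return SSCR0_DataSize(16) | SSCR0_Motorola
             | SSCR0_SerClkDiv(4) | SSCR0_SSE;
}
```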
+diff --git a/include/asm-arm/arch-pxa/sharpsl.h b/include/asm-arm/arch-pxa/sharpsl.h
+index 2b0fe77..3b1d4a7 100644
+--- a/include/asm-arm/arch-pxa/sharpsl.h
++++ b/include/asm-arm/arch-pxa/sharpsl.h
+@@ -16,7 +16,7 @@ int corgi_ssp_max1111_get(unsigned long data);
+ */
- /* compute number of slots for entry */
--#define NDTINTERNAL(klen) ( ((4 + (klen)) + (15 - 1)) / 15 )
-+#define NDTINTERNAL(klen) (DIV_ROUND_UP((4 + (klen)), 15))
+ struct corgits_machinfo {
+- unsigned long (*get_hsync_len)(void);
++ unsigned long (*get_hsync_invperiod)(void);
+ void (*put_hsync)(void);
+ void (*wait_hsync)(void);
+ };
+diff --git a/include/asm-arm/arch-pxa/spitz.h b/include/asm-arm/arch-pxa/spitz.h
+index 4953dd3..bd14365 100644
+--- a/include/asm-arm/arch-pxa/spitz.h
++++ b/include/asm-arm/arch-pxa/spitz.h
+@@ -156,5 +156,3 @@ extern struct platform_device spitzscoop_device;
+ extern struct platform_device spitzscoop2_device;
+ extern struct platform_device spitzssp_device;
+ extern struct sharpsl_charger_machinfo spitz_pm_machinfo;
+-
+-extern void spitz_lcd_power(int on, struct fb_var_screeninfo *var);
+diff --git a/include/asm-arm/arch-pxa/ssp.h b/include/asm-arm/arch-pxa/ssp.h
+index ea20055..a012882 100644
+--- a/include/asm-arm/arch-pxa/ssp.h
++++ b/include/asm-arm/arch-pxa/ssp.h
+@@ -13,10 +13,37 @@
+ * PXA255 SSP, NSSP
+ * PXA26x SSP, NSSP, ASSP
+ * PXA27x SSP1, SSP2, SSP3
++ * PXA3xx SSP1, SSP2, SSP3, SSP4
+ */
+-#ifndef SSP_H
+-#define SSP_H
++#ifndef __ASM_ARCH_SSP_H
++#define __ASM_ARCH_SSP_H
++
++#include <linux/list.h>
++
++enum pxa_ssp_type {
++ SSP_UNDEFINED = 0,
++ PXA25x_SSP, /* pxa 210, 250, 255, 26x */
++ PXA25x_NSSP, /* pxa 255, 26x (including ASSP) */
++ PXA27x_SSP,
++};
++
++struct ssp_device {
++ struct platform_device *pdev;
++ struct list_head node;
++
++ struct clk *clk;
++ void __iomem *mmio_base;
++ unsigned long phys_base;
++
++ const char *label;
++ int port_id;
++ int type;
++ int use_count;
++ int irq;
++ int drcmr_rx;
++ int drcmr_tx;
++};
/*
-@@ -133,7 +133,7 @@ struct dir_table_slot {
- ( ((s64)((dts)->addr1)) << 32 | __le32_to_cpu((dts)->addr2) )
-
- /* compute number of slots for entry */
--#define NDTLEAF_LEGACY(klen) ( ((2 + (klen)) + (15 - 1)) / 15 )
-+#define NDTLEAF_LEGACY(klen) (DIV_ROUND_UP((2 + (klen)), 15))
- #define NDTLEAF NDTINTERNAL
-
-
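The NDTINTERNAL/NDTLEAF_LEGACY changes above replace open-coded round-up division with DIV_ROUND_UP. The two forms are arithmetically identical, as this small check shows; DIV_ROUND_UP is defined locally here exactly as in linux/kernel.h so the snippet is self-contained.

```c
/* DIV_ROUND_UP as defined in linux/kernel.h */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* old open-coded form used by NDTINTERNAL */
static int ndt_slots_old(int klen)
{
        return ((4 + klen) + (15 - 1)) / 15;
}

/* new form after the conversion in the hunk above */
static int ndt_slots_new(int klen)
{
        return DIV_ROUND_UP(4 + klen, 15);
}
```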
-diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
-index 3870ba8..9bf29f7 100644
---- a/fs/jfs/jfs_imap.c
-+++ b/fs/jfs/jfs_imap.c
-@@ -381,7 +381,7 @@ int diRead(struct inode *ip)
+ * SSP initialisation flags
+@@ -31,6 +58,7 @@ struct ssp_state {
+ };
- /* read the page of disk inode */
- mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
-- if (mp == 0) {
-+ if (!mp) {
- jfs_err("diRead: read_metapage failed");
- return -EIO;
- }
-@@ -654,7 +654,7 @@ int diWrite(tid_t tid, struct inode *ip)
- /* read the page of disk inode */
- retry:
- mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
-- if (mp == 0)
-+ if (!mp)
- return -EIO;
+ struct ssp_dev {
++ struct ssp_device *ssp;
+ u32 port;
+ u32 mode;
+ u32 flags;
+@@ -50,4 +78,6 @@ int ssp_init(struct ssp_dev *dev, u32 port, u32 init_flags);
+ int ssp_config(struct ssp_dev *dev, u32 mode, u32 flags, u32 psp_flags, u32 speed);
+ void ssp_exit(struct ssp_dev *dev);
- /* get the pointer to the disk inode */
-diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
-index 15a3974..325a967 100644
---- a/fs/jfs/jfs_logmgr.c
-+++ b/fs/jfs/jfs_logmgr.c
-@@ -208,6 +208,17 @@ static struct lmStat {
- } lmStat;
- #endif
+-#endif
++struct ssp_device *ssp_request(int port, const char *label);
++void ssp_free(struct ssp_device *);
++#endif /* __ASM_ARCH_SSP_H */
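The new ssp_request()/ssp_free() interface declared above hands out a port to one owner at a time and tracks use_count. The following is a deliberately simplified host-side model of that contract (static table, no locking, no clocks); it is an illustration of the intended semantics, not the kernel implementation.

```c
#include <stddef.h>

/* Simplified model of ssp_request()/ssp_free(): look up a port,
 * refuse a second owner while in use, track use_count. */
struct ssp_device {
        const char *label;
        int port_id;
        int use_count;
};

static struct ssp_device ssp_ports[] = {
        { NULL, 1, 0 },
        { NULL, 2, 0 },
};

static struct ssp_device *ssp_request(int port, const char *label)
{
        size_t i;

        for (i = 0; i < sizeof(ssp_ports) / sizeof(ssp_ports[0]); i++) {
                struct ssp_device *ssp = &ssp_ports[i];

                if (ssp->port_id == port && ssp->use_count == 0) {
                        ssp->use_count++;
                        ssp->label = label;
                        return ssp;
                }
        }
        return NULL;    /* busy, or no such port */
}

static void ssp_free(struct ssp_device *ssp)
{
        ssp->use_count--;
        ssp->label = NULL;
}
```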
+diff --git a/include/asm-arm/arch-pxa/uncompress.h b/include/asm-arm/arch-pxa/uncompress.h
+index 178aa2e..dadf4c2 100644
+--- a/include/asm-arm/arch-pxa/uncompress.h
++++ b/include/asm-arm/arch-pxa/uncompress.h
+@@ -9,19 +9,21 @@
+ * published by the Free Software Foundation.
+ */
-+static void write_special_inodes(struct jfs_log *log,
-+ int (*writer)(struct address_space *))
-+{
-+ struct jfs_sb_info *sbi;
+-#define FFUART ((volatile unsigned long *)0x40100000)
+-#define BTUART ((volatile unsigned long *)0x40200000)
+-#define STUART ((volatile unsigned long *)0x40700000)
+-#define HWUART ((volatile unsigned long *)0x41600000)
++#include <linux/serial_reg.h>
++#include <asm/arch/pxa-regs.h>
+
-+ list_for_each_entry(sbi, &log->sb_list, log_list) {
-+ writer(sbi->ipbmap->i_mapping);
-+ writer(sbi->ipimap->i_mapping);
-+ writer(sbi->direct_inode->i_mapping);
-+ }
-+}
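The write_special_inodes() helper above folds three copies of the same three-line loop into one function that takes the writer as a function pointer (filemap_fdatawrite for hard sync, filemap_flush otherwise). A minimal standalone model of the pattern, with stand-in types instead of the JFS structures:

```c
/* Stand-in model of the write_special_inodes() refactor: one
 * helper walks the set and applies whichever writer the caller
 * selects, replacing a duplicated loop at each call site. */
struct mapping { int flushed; };

static int fake_fdatawrite(struct mapping *m) { m->flushed = 2; return 0; }
static int fake_flush(struct mapping *m)      { m->flushed = 1; return 0; }

static void write_all(struct mapping *maps, int n,
                      int (*writer)(struct mapping *))
{
        int i;

        for (i = 0; i < n; i++)
                writer(&maps[i]);
}

/* mimics the hard_sync branch in lmLogSync() */
static int demo(int hard_sync)
{
        struct mapping m[3] = { {0}, {0}, {0} };

        write_all(m, 3, hard_sync ? fake_fdatawrite : fake_flush);
        return m[0].flushed + m[1].flushed + m[2].flushed;
}
```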
++#define __REG(x) ((volatile unsigned long *)x)
- /*
- * NAME: lmLog()
-@@ -935,22 +946,13 @@ static int lmLogSync(struct jfs_log * log, int hard_sync)
- struct lrd lrd;
- int lsn;
- struct logsyncblk *lp;
-- struct jfs_sb_info *sbi;
- unsigned long flags;
+ #define UART FFUART
- /* push dirty metapages out to disk */
- if (hard_sync)
-- list_for_each_entry(sbi, &log->sb_list, log_list) {
-- filemap_fdatawrite(sbi->ipbmap->i_mapping);
-- filemap_fdatawrite(sbi->ipimap->i_mapping);
-- filemap_fdatawrite(sbi->direct_inode->i_mapping);
-- }
-+ write_special_inodes(log, filemap_fdatawrite);
- else
-- list_for_each_entry(sbi, &log->sb_list, log_list) {
-- filemap_flush(sbi->ipbmap->i_mapping);
-- filemap_flush(sbi->ipimap->i_mapping);
-- filemap_flush(sbi->direct_inode->i_mapping);
-- }
-+ write_special_inodes(log, filemap_flush);
- /*
- * forward syncpt
-@@ -1536,7 +1538,6 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
+ static inline void putc(char c)
{
- int i;
- struct tblock *target = NULL;
-- struct jfs_sb_info *sbi;
+- while (!(UART[5] & 0x20))
++ if (!(UART[UART_IER] & IER_UUE))
++ return;
++ while (!(UART[UART_LSR] & LSR_TDRQ))
+ barrier();
+- UART[0] = c;
++ UART[UART_TX] = c;
+ }
- /* jfs_write_inode may call us during read-only mount */
- if (!log)
-@@ -1598,11 +1599,7 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
- if (wait < 2)
- return;
+ /*
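The reworked putc() above first checks whether the UART is enabled (IER unit-enable bit) and bails out instead of spinning forever, then polls LSR for transmit readiness before writing the character. Below is a host-side model of that flow; the register offsets and bit values mirror the PXA definitions (an assumption here), and the "UART" is just an array so the logic can run anywhere.

```c
/* Host-side model of the reworked putc(): skip output when the
 * port is disabled (IER_UUE clear), otherwise poll UART_LSR for
 * LSR_TDRQ before writing UART_TX. */
#define UART_TX   0
#define UART_IER  1
#define UART_LSR  5
#define IER_UUE   (1 << 6)      /* PXA UART unit enable (assumed value) */
#define LSR_TDRQ  (1 << 5)      /* Tx data request, as in old UART[5] & 0x20 */

static unsigned long fake_uart[8];

static void model_putc(char c)
{
        if (!(fake_uart[UART_IER] & IER_UUE))
                return;                         /* port disabled: drop */
        while (!(fake_uart[UART_LSR] & LSR_TDRQ))
                ;                               /* wait for Tx ready */
        fake_uart[UART_TX] = (unsigned long)c;
}

/* drive the model with the port enabled or disabled */
static unsigned long demo_putc(int enabled, char c)
{
        fake_uart[UART_IER] = enabled ? IER_UUE : 0;
        fake_uart[UART_LSR] = LSR_TDRQ;
        fake_uart[UART_TX]  = 0;
        model_putc(c);
        return fake_uart[UART_TX];
}
```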
+diff --git a/include/asm-arm/arch-pxa/zylonite.h b/include/asm-arm/arch-pxa/zylonite.h
+index f58b591..5f717d6 100644
+--- a/include/asm-arm/arch-pxa/zylonite.h
++++ b/include/asm-arm/arch-pxa/zylonite.h
+@@ -3,9 +3,18 @@
-- list_for_each_entry(sbi, &log->sb_list, log_list) {
-- filemap_fdatawrite(sbi->ipbmap->i_mapping);
-- filemap_fdatawrite(sbi->ipimap->i_mapping);
-- filemap_fdatawrite(sbi->direct_inode->i_mapping);
-- }
-+ write_special_inodes(log, filemap_fdatawrite);
+ #define ZYLONITE_ETH_PHYS 0x14000000
- /*
- * If there was recent activity, we may need to wait
-@@ -1611,6 +1608,7 @@ void jfs_flush_journal(struct jfs_log *log, int wait)
- if ((!list_empty(&log->cqueue)) || !list_empty(&log->synclist)) {
- for (i = 0; i < 200; i++) { /* Too much? */
- msleep(250);
-+ write_special_inodes(log, filemap_fdatawrite);
- if (list_empty(&log->cqueue) &&
- list_empty(&log->synclist))
- break;
-@@ -2347,7 +2345,7 @@ int jfsIOWait(void *arg)
++#define EXT_GPIO(x) (128 + (x))
++
+ /* the following variables are processor specific and initialized
+ * by the corresponding zylonite_pxa3xx_init()
+ */
++struct platform_mmc_slot {
++ int gpio_cd;
++ int gpio_wp;
++};
++
++extern struct platform_mmc_slot zylonite_mmc_slot[];
++
+ extern int gpio_backlight;
+ extern int gpio_eth_irq;
- do {
- spin_lock_irq(&log_redrive_lock);
-- while ((bp = log_redrive_list) != 0) {
-+ while ((bp = log_redrive_list)) {
- log_redrive_list = bp->l_redrive_next;
- bp->l_redrive_next = NULL;
- spin_unlock_irq(&log_redrive_lock);
-diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
-index f5cd8d3..d1e64f2 100644
---- a/fs/jfs/jfs_metapage.c
-+++ b/fs/jfs/jfs_metapage.c
-@@ -39,11 +39,11 @@ static struct {
+diff --git a/include/asm-arm/arch-s3c2410/debug-macro.S b/include/asm-arm/arch-s3c2410/debug-macro.S
+index 9c8cd9a..89076c3 100644
+--- a/include/asm-arm/arch-s3c2410/debug-macro.S
++++ b/include/asm-arm/arch-s3c2410/debug-macro.S
+@@ -92,11 +92,9 @@
+ #if defined(CONFIG_CPU_LLSERIAL_S3C2410_ONLY)
+ #define fifo_full fifo_full_s3c2410
+ #define fifo_level fifo_level_s3c2410
+-#warning 2410only
+ #elif !defined(CONFIG_CPU_LLSERIAL_S3C2440_ONLY)
+ #define fifo_full fifo_full_s3c24xx
+ #define fifo_level fifo_level_s3c24xx
+-#warning generic
#endif
- #define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
--#define trylock_metapage(mp) test_and_set_bit(META_locked, &(mp)->flag)
-+#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
-
- static inline void unlock_metapage(struct metapage *mp)
- {
-- clear_bit(META_locked, &mp->flag);
-+ clear_bit_unlock(META_locked, &mp->flag);
- wake_up(&mp->wait);
- }
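The metapage hunk above swaps plain test_and_set_bit()/clear_bit() for the _lock/_unlock variants, which add acquire semantics on lock and release semantics on unlock. The sketch below models that pairing with C11 atomics; it is an illustration of the ordering contract, not the kernel bitops themselves.

```c
#include <stdatomic.h>

/* Model of trylock_metapage()/unlock_metapage(): an acquire on
 * lock paired with a release on unlock, which the plain bit
 * operations being replaced above do not guarantee. */
#define META_LOCKED (1u << 0)

static _Atomic unsigned int meta_flags;

static int trylock_meta(void)
{
        /* nonzero if the bit was already set (lock is busy) */
        return atomic_fetch_or_explicit(&meta_flags, META_LOCKED,
                                        memory_order_acquire) & META_LOCKED;
}

static void unlock_meta(void)
{
        atomic_fetch_and_explicit(&meta_flags, ~META_LOCKED,
                                  memory_order_release);
}
```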
+ /* include the rest of the code which will do the work */
+diff --git a/include/asm-arm/arch-s3c2410/dma.h b/include/asm-arm/arch-s3c2410/dma.h
+index c6e8d8f..4f291d9 100644
+--- a/include/asm-arm/arch-s3c2410/dma.h
++++ b/include/asm-arm/arch-s3c2410/dma.h
+@@ -214,6 +214,7 @@ struct s3c2410_dma_chan {
+ unsigned long dev_addr;
+ unsigned long load_timeout;
+ unsigned int flags; /* channel flags */
++ unsigned int hw_cfg; /* last hw config */
-@@ -88,7 +88,7 @@ struct meta_anchor {
- };
- #define mp_anchor(page) ((struct meta_anchor *)page_private(page))
+ struct s3c24xx_dma_map *map; /* channel hw maps */
--static inline struct metapage *page_to_mp(struct page *page, uint offset)
-+static inline struct metapage *page_to_mp(struct page *page, int offset)
- {
- if (!PagePrivate(page))
- return NULL;
-@@ -153,7 +153,7 @@ static inline void dec_io(struct page *page, void (*handler) (struct page *))
- }
+diff --git a/include/asm-arm/arch-s3c2410/hardware.h b/include/asm-arm/arch-s3c2410/hardware.h
+index 6dadf58..29592c3 100644
+--- a/include/asm-arm/arch-s3c2410/hardware.h
++++ b/include/asm-arm/arch-s3c2410/hardware.h
+@@ -50,6 +50,17 @@ extern unsigned int s3c2410_gpio_getcfg(unsigned int pin);
- #else
--static inline struct metapage *page_to_mp(struct page *page, uint offset)
-+static inline struct metapage *page_to_mp(struct page *page, int offset)
- {
- return PagePrivate(page) ? (struct metapage *)page_private(page) : NULL;
- }
-@@ -249,7 +249,7 @@ static inline void drop_metapage(struct page *page, struct metapage *mp)
- */
+ extern int s3c2410_gpio_getirq(unsigned int pin);
- static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
-- unsigned int *len)
-+ int *len)
- {
- int rc = 0;
- int xflag;
-@@ -352,25 +352,27 @@ static void metapage_write_end_io(struct bio *bio, int err)
- static int metapage_writepage(struct page *page, struct writeback_control *wbc)
- {
- struct bio *bio = NULL;
-- unsigned int block_offset; /* block offset of mp within page */
-+ int block_offset; /* block offset of mp within page */
- struct inode *inode = page->mapping->host;
-- unsigned int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
-- unsigned int len;
-- unsigned int xlen;
-+ int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
-+ int len;
-+ int xlen;
- struct metapage *mp;
- int redirty = 0;
- sector_t lblock;
-+ int nr_underway = 0;
- sector_t pblock;
- sector_t next_block = 0;
- sector_t page_start;
- unsigned long bio_bytes = 0;
- unsigned long bio_offset = 0;
-- unsigned int offset;
-+ int offset;
++/* s3c2410_gpio_irq2pin
++ *
++ * turn the given irq number into the corresponding GPIO number
++ *
++ * returns:
++ * < 0 = no pin
++ * >=0 = gpio pin number
++*/
++
++extern int s3c2410_gpio_irq2pin(unsigned int irq);
++
+ #ifdef CONFIG_CPU_S3C2400
- page_start = (sector_t)page->index <<
- (PAGE_CACHE_SHIFT - inode->i_blkbits);
- BUG_ON(!PageLocked(page));
- BUG_ON(PageWriteback(page));
-+ set_page_writeback(page);
+ extern int s3c2400_gpio_getirq(unsigned int pin);
+@@ -87,6 +98,18 @@ extern int s3c2410_gpio_irqfilter(unsigned int pin, unsigned int on,
- for (offset = 0; offset < PAGE_CACHE_SIZE; offset += PSIZE) {
- mp = page_to_mp(page, offset);
-@@ -413,11 +415,10 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
- if (!bio->bi_size)
- goto dump_bio;
- submit_bio(WRITE, bio);
-+ nr_underway++;
- bio = NULL;
-- } else {
-- set_page_writeback(page);
-+ } else
- inc_io(page);
-- }
- xlen = (PAGE_CACHE_SIZE - offset) >> inode->i_blkbits;
- pblock = metapage_get_blocks(inode, lblock, &xlen);
- if (!pblock) {
-@@ -427,7 +428,7 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
- continue;
- }
- set_bit(META_io, &mp->flag);
-- len = min(xlen, (uint) JFS_SBI(inode->i_sb)->nbperpage);
-+ len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
+ extern void s3c2410_gpio_pullup(unsigned int pin, unsigned int to);
- bio = bio_alloc(GFP_NOFS, 1);
- bio->bi_bdev = inode->i_sb->s_bdev;
-@@ -449,12 +450,16 @@ static int metapage_writepage(struct page *page, struct writeback_control *wbc)
- goto dump_bio;
++/* s3c2410_gpio_getpull
++ *
++ * Read the state of the pull-up on a given pin
++ *
++ * return:
++ * < 0 => error code
++ * 0 => enabled
++ * 1 => disabled
++*/
++
++extern int s3c2410_gpio_getpull(unsigned int pin);
++
+ extern void s3c2410_gpio_setpin(unsigned int pin, unsigned int to);
- submit_bio(WRITE, bio);
-+ nr_underway++;
- }
- if (redirty)
- redirty_page_for_writepage(wbc, page);
+ extern unsigned int s3c2410_gpio_getpin(unsigned int pin);
+@@ -99,6 +122,11 @@ extern int s3c2440_set_dsc(unsigned int pin, unsigned int value);
- unlock_page(page);
+ #endif /* CONFIG_CPU_S3C2440 */
-+ if (nr_underway == 0)
-+ end_page_writeback(page);
++#ifdef CONFIG_CPU_S3C2412
+
- return 0;
- add_failed:
- /* We should never reach here, since we're only adding one vec */
-@@ -475,13 +480,13 @@ static int metapage_readpage(struct file *fp, struct page *page)
- {
- struct inode *inode = page->mapping->host;
- struct bio *bio = NULL;
-- unsigned int block_offset;
-- unsigned int blocks_per_page = PAGE_CACHE_SIZE >> inode->i_blkbits;
-+ int block_offset;
-+ int blocks_per_page = PAGE_CACHE_SIZE >> inode->i_blkbits;
- sector_t page_start; /* address of page in fs blocks */
- sector_t pblock;
-- unsigned int xlen;
-+ int xlen;
- unsigned int len;
-- unsigned int offset;
-+ int offset;
-
- BUG_ON(!PageLocked(page));
- page_start = (sector_t)page->index <<
-@@ -530,7 +535,7 @@ static int metapage_releasepage(struct page *page, gfp_t gfp_mask)
- {
- struct metapage *mp;
- int ret = 1;
-- unsigned int offset;
-+ int offset;
++extern int s3c2412_gpio_set_sleepcfg(unsigned int pin, unsigned int state);
++
++#endif /* CONFIG_CPU_S3C2412 */
- for (offset = 0; offset < PAGE_CACHE_SIZE; offset += PSIZE) {
- mp = page_to_mp(page, offset);
-diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
-index 644429a..7b698f2 100644
---- a/fs/jfs/jfs_mount.c
-+++ b/fs/jfs/jfs_mount.c
-@@ -147,7 +147,7 @@ int jfs_mount(struct super_block *sb)
- */
- if ((sbi->mntflag & JFS_BAD_SAIT) == 0) {
- ipaimap2 = diReadSpecial(sb, AGGREGATE_I, 1);
-- if (ipaimap2 == 0) {
-+ if (!ipaimap2) {
- jfs_err("jfs_mount: Faild to read AGGREGATE_I");
- rc = -EIO;
- goto errout35;
-diff --git a/fs/jfs/jfs_umount.c b/fs/jfs/jfs_umount.c
-index 7971f37..adcf92d 100644
---- a/fs/jfs/jfs_umount.c
-+++ b/fs/jfs/jfs_umount.c
-@@ -68,7 +68,7 @@ int jfs_umount(struct super_block *sb)
- /*
- * Wait for outstanding transactions to be written to log:
- */
-- jfs_flush_journal(log, 2);
-+ jfs_flush_journal(log, 1);
+ #endif /* __ASSEMBLY__ */
- /*
- * close fileset inode allocation map (aka fileset inode)
-@@ -146,7 +146,7 @@ int jfs_umount_rw(struct super_block *sb)
- *
- * remove file system from log active file system list.
- */
-- jfs_flush_journal(log, 2);
-+ jfs_flush_journal(log, 1);
+diff --git a/include/asm-arm/arch-s3c2410/irqs.h b/include/asm-arm/arch-s3c2410/irqs.h
+index 996f654..d858b3e 100644
+--- a/include/asm-arm/arch-s3c2410/irqs.h
++++ b/include/asm-arm/arch-s3c2410/irqs.h
+@@ -160,4 +160,7 @@
+ #define NR_IRQS (IRQ_S3C2440_AC97+1)
+ #endif
- /*
- * Make sure all metadata makes it to disk
-diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
-index 4e0a849..f8718de 100644
---- a/fs/jfs/namei.c
-+++ b/fs/jfs/namei.c
-@@ -1103,8 +1103,8 @@ static int jfs_rename(struct inode *old_dir, struct dentry *old_dentry,
- * Make sure dest inode number (if any) is what we think it is
- */
- rc = dtSearch(new_dir, &new_dname, &ino, &btstack, JFS_LOOKUP);
-- if (rc == 0) {
-- if ((new_ip == 0) || (ino != new_ip->i_ino)) {
-+ if (!rc) {
-+ if ((!new_ip) || (ino != new_ip->i_ino)) {
- rc = -ESTALE;
- goto out3;
- }
-diff --git a/fs/jfs/resize.c b/fs/jfs/resize.c
-index 71984ee..7f24a0b 100644
---- a/fs/jfs/resize.c
-+++ b/fs/jfs/resize.c
-@@ -172,7 +172,7 @@ int jfs_extendfs(struct super_block *sb, s64 newLVSize, int newLogSize)
- */
- t64 = ((newLVSize - newLogSize + BPERDMAP - 1) >> L2BPERDMAP)
- << L2BPERDMAP;
-- t32 = ((t64 + (BITSPERPAGE - 1)) / BITSPERPAGE) + 1 + 50;
-+ t32 = DIV_ROUND_UP(t64, BITSPERPAGE) + 1 + 50;
- newFSCKSize = t32 << sbi->l2nbperpage;
- newFSCKAddress = newLogAddress - newFSCKSize;
++/* Our FIQs are routable from IRQ_EINT0 to IRQ_ADCPARENT */
++#define FIQ_START IRQ_EINT0
++
+ #endif /* __ASM_ARCH_IRQ_H */
+diff --git a/include/asm-arm/arch-s3c2410/regs-clock.h b/include/asm-arm/arch-s3c2410/regs-clock.h
+index e39656b..dba9df9 100644
+--- a/include/asm-arm/arch-s3c2410/regs-clock.h
++++ b/include/asm-arm/arch-s3c2410/regs-clock.h
+@@ -138,6 +138,8 @@ s3c2410_get_pll(unsigned int pllval, unsigned int baseclk)
+ #define S3C2412_CLKDIVN_PDIVN (1<<2)
+ #define S3C2412_CLKDIVN_HDIVN_MASK (3<<0)
+ #define S3C2421_CLKDIVN_ARMDIVN (1<<3)
++#define S3C2412_CLKDIVN_DVSEN (1<<4)
++#define S3C2412_CLKDIVN_HALFHCLK (1<<5)
+ #define S3C2412_CLKDIVN_USB48DIV (1<<6)
+ #define S3C2412_CLKDIVN_UARTDIV_MASK (15<<8)
+ #define S3C2412_CLKDIVN_UARTDIV_SHIFT (8)
+diff --git a/include/asm-arm/arch-s3c2410/regs-dsc.h b/include/asm-arm/arch-s3c2410/regs-dsc.h
+index c074851..1235df7 100644
+--- a/include/asm-arm/arch-s3c2410/regs-dsc.h
++++ b/include/asm-arm/arch-s3c2410/regs-dsc.h
+@@ -19,7 +19,7 @@
+ #define S3C2412_DSC1 S3C2410_GPIOREG(0xe0)
+ #endif
-diff --git a/fs/jfs/super.c b/fs/jfs/super.c
-index 314bb4f..70a1400 100644
---- a/fs/jfs/super.c
-+++ b/fs/jfs/super.c
-@@ -598,6 +598,12 @@ static int jfs_show_options(struct seq_file *seq, struct vfsmount *vfs)
- seq_printf(seq, ",umask=%03o", sbi->umask);
- if (sbi->flag & JFS_NOINTEGRITY)
- seq_puts(seq, ",nointegrity");
-+ if (sbi->nls_tab)
-+ seq_printf(seq, ",iocharset=%s", sbi->nls_tab->charset);
-+ if (sbi->flag & JFS_ERR_CONTINUE)
-+ seq_printf(seq, ",errors=continue");
-+ if (sbi->flag & JFS_ERR_PANIC)
-+ seq_printf(seq, ",errors=panic");
+-#if defined(CONFIG_CPU_S3C2440)
++#if defined(CONFIG_CPU_S3C244X)
- #ifdef CONFIG_QUOTA
- if (sbi->flag & JFS_USRQUOTA)
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 0608388..61bf376 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -41,8 +41,8 @@ static struct kmem_cache *mnt_cache __read_mostly;
- static struct rw_semaphore namespace_sem;
+ #define S3C2440_DSC0 S3C2410_GPIOREG(0xc4)
+ #define S3C2440_DSC1 S3C2410_GPIOREG(0xc8)
+diff --git a/include/asm-arm/arch-s3c2410/regs-gpio.h b/include/asm-arm/arch-s3c2410/regs-gpio.h
+index b693158..0ad75d7 100644
+--- a/include/asm-arm/arch-s3c2410/regs-gpio.h
++++ b/include/asm-arm/arch-s3c2410/regs-gpio.h
+@@ -1133,12 +1133,16 @@
+ #define S3C2412_GPBSLPCON S3C2410_GPIOREG(0x1C)
+ #define S3C2412_GPCSLPCON S3C2410_GPIOREG(0x2C)
+ #define S3C2412_GPDSLPCON S3C2410_GPIOREG(0x3C)
+-#define S3C2412_GPESLPCON S3C2410_GPIOREG(0x4C)
+ #define S3C2412_GPFSLPCON S3C2410_GPIOREG(0x5C)
+ #define S3C2412_GPGSLPCON S3C2410_GPIOREG(0x6C)
+ #define S3C2412_GPHSLPCON S3C2410_GPIOREG(0x7C)
- /* /sys/fs */
--decl_subsys(fs, NULL, NULL);
--EXPORT_SYMBOL_GPL(fs_subsys);
-+struct kobject *fs_kobj;
-+EXPORT_SYMBOL_GPL(fs_kobj);
+ /* definitions for each pin bit */
++#define S3C2412_GPIO_SLPCON_LOW ( 0x00 )
++#define S3C2412_GPIO_SLPCON_HIGH ( 0x01 )
++#define S3C2412_GPIO_SLPCON_IN ( 0x02 )
++#define S3C2412_GPIO_SLPCON_PULL ( 0x03 )
++
+ #define S3C2412_SLPCON_LOW(x) ( 0x00 << ((x) * 2))
+ #define S3C2412_SLPCON_HIGH(x) ( 0x01 << ((x) * 2))
+ #define S3C2412_SLPCON_IN(x) ( 0x02 << ((x) * 2))
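The S3C2412 sleep-configuration macros above pack a 2-bit state per pin, so one 32-bit register covers 16 pins with pin x occupying bits [2x+1:2x]. A small sketch of packing and unpacking those fields (the GET helper is an illustrative addition, not part of the header):

```c
/* Sketch of the 2-bit-per-pin S3C2412 sleep configuration layout:
 * pin x occupies bits [2x+1:2x] of the SLPCON register. */
#define SLPCON_LOW   0x00
#define SLPCON_HIGH  0x01
#define SLPCON_IN    0x02
#define SLPCON_PULL  0x03

#define SLPCON_SET(x, state)  ((unsigned long)(state) << ((x) * 2))
#define SLPCON_GET(reg, x)    (((reg) >> ((x) * 2)) & 0x3)
```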
+diff --git a/include/asm-arm/arch-s3c2410/regs-mem.h b/include/asm-arm/arch-s3c2410/regs-mem.h
+index e4d8234..312ff93 100644
+--- a/include/asm-arm/arch-s3c2410/regs-mem.h
++++ b/include/asm-arm/arch-s3c2410/regs-mem.h
+@@ -98,16 +98,19 @@
+ #define S3C2410_BANKCON_Tacp3 (0x1 << 2)
+ #define S3C2410_BANKCON_Tacp4 (0x2 << 2)
+ #define S3C2410_BANKCON_Tacp6 (0x3 << 2)
++#define S3C2410_BANKCON_Tacp_SHIFT (2)
- static inline unsigned long hash(struct vfsmount *mnt, struct dentry *dentry)
- {
-@@ -1861,10 +1861,9 @@ void __init mnt_init(void)
- if (err)
- printk(KERN_WARNING "%s: sysfs_init error: %d\n",
- __FUNCTION__, err);
-- err = subsystem_register(&fs_subsys);
-- if (err)
-- printk(KERN_WARNING "%s: subsystem_register error: %d\n",
-- __FUNCTION__, err);
-+ fs_kobj = kobject_create_and_add("fs", NULL);
-+ if (!fs_kobj)
-+ printk(KERN_WARNING "%s: kobj create error\n", __FUNCTION__);
- init_rootfs();
- init_mount_tree();
- }
-diff --git a/fs/ocfs2/Makefile b/fs/ocfs2/Makefile
-index 9fb8132..4d4ce48 100644
---- a/fs/ocfs2/Makefile
-+++ b/fs/ocfs2/Makefile
-@@ -19,16 +19,17 @@ ocfs2-objs := \
- ioctl.o \
- journal.o \
- localalloc.o \
-+ locks.o \
- mmap.o \
- namei.o \
-+ resize.o \
- slot_map.o \
- suballoc.o \
- super.o \
- symlink.o \
- sysfile.o \
- uptodate.o \
-- ver.o \
-- vote.o
-+ ver.o
+ #define S3C2410_BANKCON_Tcah0 (0x0 << 4)
+ #define S3C2410_BANKCON_Tcah1 (0x1 << 4)
+ #define S3C2410_BANKCON_Tcah2 (0x2 << 4)
+ #define S3C2410_BANKCON_Tcah4 (0x3 << 4)
++#define S3C2410_BANKCON_Tcah_SHIFT (4)
- obj-$(CONFIG_OCFS2_FS) += cluster/
- obj-$(CONFIG_OCFS2_FS) += dlm/
-diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
-index 23c8cda..e6df06a 100644
---- a/fs/ocfs2/alloc.c
-+++ b/fs/ocfs2/alloc.c
-@@ -4731,7 +4731,7 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb)
+ #define S3C2410_BANKCON_Tcoh0 (0x0 << 6)
+ #define S3C2410_BANKCON_Tcoh1 (0x1 << 6)
+ #define S3C2410_BANKCON_Tcoh2 (0x2 << 6)
+ #define S3C2410_BANKCON_Tcoh4 (0x3 << 6)
++#define S3C2410_BANKCON_Tcoh_SHIFT (6)
- mutex_lock(&data_alloc_inode->i_mutex);
+ #define S3C2410_BANKCON_Tacc1 (0x0 << 8)
+ #define S3C2410_BANKCON_Tacc2 (0x1 << 8)
+@@ -117,16 +120,19 @@
+ #define S3C2410_BANKCON_Tacc8 (0x5 << 8)
+ #define S3C2410_BANKCON_Tacc10 (0x6 << 8)
+ #define S3C2410_BANKCON_Tacc14 (0x7 << 8)
++#define S3C2410_BANKCON_Tacc_SHIFT (8)
-- status = ocfs2_meta_lock(data_alloc_inode, &data_alloc_bh, 1);
-+ status = ocfs2_inode_lock(data_alloc_inode, &data_alloc_bh, 1);
- if (status < 0) {
- mlog_errno(status);
- goto out_mutex;
-@@ -4753,7 +4753,7 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb)
+ #define S3C2410_BANKCON_Tcos0 (0x0 << 11)
+ #define S3C2410_BANKCON_Tcos1 (0x1 << 11)
+ #define S3C2410_BANKCON_Tcos2 (0x2 << 11)
+ #define S3C2410_BANKCON_Tcos4 (0x3 << 11)
++#define S3C2410_BANKCON_Tcos_SHIFT (11)
- out_unlock:
- brelse(data_alloc_bh);
-- ocfs2_meta_unlock(data_alloc_inode, 1);
-+ ocfs2_inode_unlock(data_alloc_inode, 1);
+ #define S3C2410_BANKCON_Tacs0 (0x0 << 13)
+ #define S3C2410_BANKCON_Tacs1 (0x1 << 13)
+ #define S3C2410_BANKCON_Tacs2 (0x2 << 13)
+ #define S3C2410_BANKCON_Tacs4 (0x3 << 13)
++#define S3C2410_BANKCON_Tacs_SHIFT (13)
- out_mutex:
- mutex_unlock(&data_alloc_inode->i_mutex);
-@@ -5077,7 +5077,7 @@ static int ocfs2_free_cached_items(struct ocfs2_super *osb,
+ #define S3C2410_BANKCON_SRAM (0x0 << 15)
+ #define S3C2400_BANKCON_EDODRAM (0x2 << 15)
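The new *_SHIFT definitions added to regs-mem.h above make it possible to decode a timing field out of a BANKCON value instead of comparing it against every enumerated constant. A minimal sketch of that use (the 3-bit mask for Tacc is an assumption drawn from the Tacc1..Tacc14 encodings shown):

```c
/* Sketch: extracting the Tacc timing field from a BANKCON value
 * using the new shift constant. */
#define S3C2410_BANKCON_Tacc_SHIFT (8)
#define BANKCON_TACC_MASK          (0x7)    /* 3-bit field, assumed */

static unsigned int bankcon_get_tacc(unsigned long bankcon)
{
        return (bankcon >> S3C2410_BANKCON_Tacc_SHIFT) & BANKCON_TACC_MASK;
}
```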
+diff --git a/include/asm-arm/arch-s3c2410/regs-power.h b/include/asm-arm/arch-s3c2410/regs-power.h
+index f79987b..13d13b7 100644
+--- a/include/asm-arm/arch-s3c2410/regs-power.h
++++ b/include/asm-arm/arch-s3c2410/regs-power.h
+@@ -23,7 +23,8 @@
+ #define S3C2412_INFORM2 S3C24XX_PWRREG(0x78)
+ #define S3C2412_INFORM3 S3C24XX_PWRREG(0x7C)
- mutex_lock(&inode->i_mutex);
+-#define S3C2412_PWRCFG_BATF_IGNORE (0<<0)
++#define S3C2412_PWRCFG_BATF_IRQ (1<<0)
++#define S3C2412_PWRCFG_BATF_IGNORE (2<<0)
+ #define S3C2412_PWRCFG_BATF_SLEEP (3<<0)
+ #define S3C2412_PWRCFG_BATF_MASK (3<<0)
-- ret = ocfs2_meta_lock(inode, &di_bh, 1);
-+ ret = ocfs2_inode_lock(inode, &di_bh, 1);
- if (ret) {
- mlog_errno(ret);
- goto out_mutex;
-@@ -5118,7 +5118,7 @@ out_journal:
- ocfs2_commit_trans(osb, handle);
+diff --git a/include/asm-arm/arch-s3c2410/system.h b/include/asm-arm/arch-s3c2410/system.h
+index 6389178..14de4e5 100644
+--- a/include/asm-arm/arch-s3c2410/system.h
++++ b/include/asm-arm/arch-s3c2410/system.h
+@@ -20,6 +20,9 @@
+ #include <asm/plat-s3c/regs-watchdog.h>
+ #include <asm/arch/regs-clock.h>
- out_unlock:
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- brelse(di_bh);
- out_mutex:
- mutex_unlock(&inode->i_mutex);
-diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
-index 56f7790..bc7b4cb 100644
---- a/fs/ocfs2/aops.c
-+++ b/fs/ocfs2/aops.c
-@@ -26,6 +26,7 @@
- #include <asm/byteorder.h>
- #include <linux/swap.h>
- #include <linux/pipe_fs_i.h>
-+#include <linux/mpage.h>
++#include <linux/clk.h>
++#include <linux/err.h>
++
+ void (*s3c24xx_idle)(void);
+ void (*s3c24xx_reset_hook)(void);
- #define MLOG_MASK_PREFIX ML_FILE_IO
- #include <cluster/masklog.h>
-@@ -139,7 +140,8 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
+@@ -59,6 +62,8 @@ static void arch_idle(void)
+ static void
+ arch_reset(char mode)
{
- int err = 0;
- unsigned int ext_flags;
-- u64 p_blkno, past_eof;
-+ u64 max_blocks = bh_result->b_size >> inode->i_blkbits;
-+ u64 p_blkno, count, past_eof;
- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-
- mlog_entry("(0x%p, %llu, 0x%p, %d)\n", inode,
-@@ -155,7 +157,7 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
- goto bail;
- }
-
-- err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, NULL,
-+ err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, &count,
- &ext_flags);
- if (err) {
- mlog(ML_ERROR, "Error %d from get_blocks(0x%p, %llu, 1, "
-@@ -164,6 +166,9 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
- goto bail;
++ struct clk *wdtclk;
++
+ if (mode == 's') {
+ cpu_reset(0);
}
+@@ -70,19 +75,28 @@ arch_reset(char mode)
-+ if (max_blocks < count)
-+ count = max_blocks;
-+
- /*
- * ocfs2 never allocates in this function - the only time we
- * need to use BH_New is when we're extending i_size on a file
-@@ -178,6 +183,8 @@ static int ocfs2_get_block(struct inode *inode, sector_t iblock,
- if (p_blkno && !(ext_flags & OCFS2_EXT_UNWRITTEN))
- map_bh(bh_result, inode->i_sb, p_blkno);
+ __raw_writel(0, S3C2410_WTCON); /* disable watchdog, to be safe */
-+ bh_result->b_size = count << inode->i_blkbits;
++ wdtclk = clk_get(NULL, "watchdog");
++ if (!IS_ERR(wdtclk)) {
++ clk_enable(wdtclk);
++ } else
++ printk(KERN_WARNING "%s: warning: cannot get watchdog clock\n", __func__);
+
- if (!ocfs2_sparse_alloc(osb)) {
- if (p_blkno == 0) {
- err = -EIO;
-@@ -210,7 +217,7 @@ int ocfs2_read_inline_data(struct inode *inode, struct page *page,
- struct buffer_head *di_bh)
- {
- void *kaddr;
-- unsigned int size;
-+ loff_t size;
- struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
-
- if (!(le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL)) {
-@@ -224,8 +231,9 @@ int ocfs2_read_inline_data(struct inode *inode, struct page *page,
- if (size > PAGE_CACHE_SIZE ||
- size > ocfs2_max_inline_data(inode->i_sb)) {
- ocfs2_error(inode->i_sb,
-- "Inode %llu has with inline data has bad size: %u",
-- (unsigned long long)OCFS2_I(inode)->ip_blkno, size);
-+ "Inode %llu has with inline data has bad size: %Lu",
-+ (unsigned long long)OCFS2_I(inode)->ip_blkno,
-+ (unsigned long long)size);
- return -EROFS;
- }
-
-@@ -275,7 +283,7 @@ static int ocfs2_readpage(struct file *file, struct page *page)
+ /* put initial values into count and data */
+- __raw_writel(0x100, S3C2410_WTCNT);
+- __raw_writel(0x100, S3C2410_WTDAT);
++ __raw_writel(0x80, S3C2410_WTCNT);
++ __raw_writel(0x80, S3C2410_WTDAT);
- mlog_entry("(0x%p, %lu)\n", file, (page ? page->index : 0));
+ /* set the watchdog to go and reset... */
+ __raw_writel(S3C2410_WTCON_ENABLE|S3C2410_WTCON_DIV16|S3C2410_WTCON_RSTEN |
+ S3C2410_WTCON_PRESCALE(0x20), S3C2410_WTCON);
-- ret = ocfs2_meta_lock_with_page(inode, NULL, 0, page);
-+ ret = ocfs2_inode_lock_with_page(inode, NULL, 0, page);
- if (ret != 0) {
- if (ret == AOP_TRUNCATED_PAGE)
- unlock = 0;
-@@ -285,7 +293,7 @@ static int ocfs2_readpage(struct file *file, struct page *page)
+ /* wait for reset to assert... */
+- mdelay(5000);
++ mdelay(500);
- if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
- ret = AOP_TRUNCATED_PAGE;
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
+ printk(KERN_ERR "Watchdog reset failed to assert reset\n");
- /*
-@@ -305,25 +313,16 @@ static int ocfs2_readpage(struct file *file, struct page *page)
- goto out_alloc;
- }
++ /* delay to allow the serial port to show the message */
++ mdelay(50);
++
+ /* we'll take a jump through zero as a poor second */
+ cpu_reset(0);
+ }
+diff --git a/include/asm-arm/bitops.h b/include/asm-arm/bitops.h
+index 47a6b08..5c60bfc 100644
+--- a/include/asm-arm/bitops.h
++++ b/include/asm-arm/bitops.h
+@@ -310,6 +310,8 @@ static inline int constant_fls(int x)
+ _find_first_zero_bit_le(p,sz)
+ #define ext2_find_next_zero_bit(p,sz,off) \
+ _find_next_zero_bit_le(p,sz,off)
++#define ext2_find_next_bit(p, sz, off) \
++ _find_next_bit_le(p, sz, off)
-- ret = ocfs2_data_lock_with_page(inode, 0, page);
-- if (ret != 0) {
-- if (ret == AOP_TRUNCATED_PAGE)
-- unlock = 0;
-- mlog_errno(ret);
-- goto out_alloc;
-- }
--
- if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL)
- ret = ocfs2_readpage_inline(inode, page);
- else
- ret = block_read_full_page(page, ocfs2_get_block);
- unlock = 0;
+ /*
+ * Minix is defined to use little-endian byte ordering.
+diff --git a/include/asm-arm/cacheflush.h b/include/asm-arm/cacheflush.h
+index 6c1c968..759a97b 100644
+--- a/include/asm-arm/cacheflush.h
++++ b/include/asm-arm/cacheflush.h
+@@ -94,6 +94,14 @@
+ # endif
+ #endif
-- ocfs2_data_unlock(inode, 0);
- out_alloc:
- up_read(&OCFS2_I(inode)->ip_alloc_sem);
--out_meta_unlock:
-- ocfs2_meta_unlock(inode, 0);
-+out_inode_unlock:
-+ ocfs2_inode_unlock(inode, 0);
- out:
- if (unlock)
- unlock_page(page);
-@@ -331,6 +330,62 @@ out:
- return ret;
- }
++#if defined(CONFIG_CPU_FEROCEON)
++# ifdef _CACHE
++# define MULTI_CACHE 1
++# else
++# define _CACHE feroceon
++# endif
++#endif
++
+ #if defined(CONFIG_CPU_V6)
+ //# ifdef _CACHE
+ # define MULTI_CACHE 1
+diff --git a/include/asm-arm/fpstate.h b/include/asm-arm/fpstate.h
+index f31cda5..392eb53 100644
+--- a/include/asm-arm/fpstate.h
++++ b/include/asm-arm/fpstate.h
+@@ -17,14 +17,18 @@
+ /*
+ * VFP storage area has:
+ * - FPEXC, FPSCR, FPINST and FPINST2.
+- * - 16 double precision data registers
+- * - an implementation-dependant word of state for FLDMX/FSTMX
++ * - 16 or 32 double precision data registers
++ * - an implementation-dependant word of state for FLDMX/FSTMX (pre-ARMv6)
+ *
+ * FPEXC will always be non-zero once the VFP has been used in this process.
+ */
+ struct vfp_hard_struct {
++#ifdef CONFIG_VFPv3
++ __u64 fpregs[32];
++#else
+ __u64 fpregs[16];
++#endif
+ #if __LINUX_ARM_ARCH__ < 6
+ __u32 fpmx_state;
+ #endif
+@@ -35,6 +39,7 @@ struct vfp_hard_struct {
+ */
+ __u32 fpinst;
+ __u32 fpinst2;
++
+ #ifdef CONFIG_SMP
+ __u32 cpu;
+ #endif
+diff --git a/include/asm-arm/kprobes.h b/include/asm-arm/kprobes.h
+new file mode 100644
+index 0000000..4e7bd32
+--- /dev/null
++++ b/include/asm-arm/kprobes.h
+@@ -0,0 +1,79 @@
+/*
-+ * This is used only for read-ahead. Failures or difficult to handle
-+ * situations are safe to ignore.
++ * include/asm-arm/kprobes.h
+ *
-+ * Right now, we don't bother with BH_Boundary - in-inode extent lists
-+ * are quite large (243 extents on 4k blocks), so most inodes don't
-+ * grow out to a tree. If need be, detecting boundary extents could
-+ * trivially be added in a future version of ocfs2_get_block().
++ * Copyright (C) 2006, 2007 Motorola Inc.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++ * General Public License for more details.
+ */
-+static int ocfs2_readpages(struct file *filp, struct address_space *mapping,
-+ struct list_head *pages, unsigned nr_pages)
-+{
-+ int ret, err = -EIO;
-+ struct inode *inode = mapping->host;
-+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
-+ loff_t start;
-+ struct page *last;
+
-+ /*
-+ * Use the nonblocking flag for the dlm code to avoid page
-+ * lock inversion, but don't bother with retrying.
-+ */
-+ ret = ocfs2_inode_lock_full(inode, NULL, 0, OCFS2_LOCK_NONBLOCK);
-+ if (ret)
-+ return err;
++#ifndef _ARM_KPROBES_H
++#define _ARM_KPROBES_H
+
-+ if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
-+ ocfs2_inode_unlock(inode, 0);
-+ return err;
-+ }
++#include <linux/types.h>
++#include <linux/ptrace.h>
++#include <linux/percpu.h>
+
-+ /*
-+ * Don't bother with inline-data. There isn't anything
-+ * to read-ahead in that case anyway...
-+ */
-+ if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL)
-+ goto out_unlock;
++#define ARCH_SUPPORTS_KRETPROBES
++#define __ARCH_WANT_KPROBES_INSN_SLOT
++#define MAX_INSN_SIZE 2
++#define MAX_STACK_SIZE 64 /* 32 would probably be OK */
+
-+ /*
-+ * Check whether a remote node truncated this file - we just
-+ * drop out in that case as it's not worth handling here.
-+ */
-+ last = list_entry(pages->prev, struct page, lru);
-+ start = (loff_t)last->index << PAGE_CACHE_SHIFT;
-+ if (start >= i_size_read(inode))
-+ goto out_unlock;
++/*
++ * This undefined instruction must be unique and
++ * reserved solely for kprobes' use.
++ */
++#define KPROBE_BREAKPOINT_INSTRUCTION 0xe7f001f8
+
-+ err = mpage_readpages(mapping, pages, nr_pages, ocfs2_get_block);
++#define regs_return_value(regs) ((regs)->ARM_r0)
++#define flush_insn_slot(p) do { } while (0)
++#define kretprobe_blacklist_size 0
+
-+out_unlock:
-+ up_read(&oi->ip_alloc_sem);
-+ ocfs2_inode_unlock(inode, 0);
++typedef u32 kprobe_opcode_t;
+
-+ return err;
-+}
++struct kprobe;
++typedef void (kprobe_insn_handler_t)(struct kprobe *, struct pt_regs *);
+
- /* Note: Because we don't support holes, our allocation has
- * already happened (allocation writes zeros to the file data)
- * so we don't have to worry about ordered writes in
-@@ -452,7 +507,7 @@ static sector_t ocfs2_bmap(struct address_space *mapping, sector_t block)
- * accessed concurrently from multiple nodes.
- */
- if (!INODE_JOURNAL(inode)) {
-- err = ocfs2_meta_lock(inode, NULL, 0);
-+ err = ocfs2_inode_lock(inode, NULL, 0);
- if (err) {
- if (err != -ENOENT)
- mlog_errno(err);
-@@ -467,7 +522,7 @@ static sector_t ocfs2_bmap(struct address_space *mapping, sector_t block)
-
- if (!INODE_JOURNAL(inode)) {
- up_read(&OCFS2_I(inode)->ip_alloc_sem);
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
- }
-
- if (err) {
-@@ -638,34 +693,12 @@ static ssize_t ocfs2_direct_IO(int rw,
- if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL)
- return 0;
-
-- if (!ocfs2_sparse_alloc(OCFS2_SB(inode->i_sb))) {
-- /*
-- * We get PR data locks even for O_DIRECT. This
-- * allows concurrent O_DIRECT I/O but doesn't let
-- * O_DIRECT with extending and buffered zeroing writes
-- * race. If they did race then the buffered zeroing
-- * could be written back after the O_DIRECT I/O. It's
-- * one thing to tell people not to mix buffered and
-- * O_DIRECT writes, but expecting them to understand
-- * that file extension is also an implicit buffered
-- * write is too much. By getting the PR we force
-- * writeback of the buffered zeroing before
-- * proceeding.
-- */
-- ret = ocfs2_data_lock(inode, 0);
-- if (ret < 0) {
-- mlog_errno(ret);
-- goto out;
-- }
-- ocfs2_data_unlock(inode, 0);
-- }
--
- ret = blockdev_direct_IO_no_locking(rw, iocb, inode,
- inode->i_sb->s_bdev, iov, offset,
- nr_segs,
- ocfs2_direct_IO_get_blocks,
- ocfs2_dio_end_io);
--out:
++/* Architecture specific copy of original instruction. */
++struct arch_specific_insn {
++ kprobe_opcode_t *insn;
++ kprobe_insn_handler_t *insn_handler;
++};
+
- mlog_exit(ret);
- return ret;
- }
-@@ -1754,7 +1787,7 @@ static int ocfs2_write_begin(struct file *file, struct address_space *mapping,
- struct buffer_head *di_bh = NULL;
- struct inode *inode = mapping->host;
-
-- ret = ocfs2_meta_lock(inode, &di_bh, 1);
-+ ret = ocfs2_inode_lock(inode, &di_bh, 1);
- if (ret) {
- mlog_errno(ret);
- return ret;
-@@ -1769,30 +1802,22 @@ static int ocfs2_write_begin(struct file *file, struct address_space *mapping,
- */
- down_write(&OCFS2_I(inode)->ip_alloc_sem);
-
-- ret = ocfs2_data_lock(inode, 1);
-- if (ret) {
-- mlog_errno(ret);
-- goto out_fail;
-- }
--
- ret = ocfs2_write_begin_nolock(mapping, pos, len, flags, pagep,
- fsdata, di_bh, NULL);
- if (ret) {
- mlog_errno(ret);
-- goto out_fail_data;
-+ goto out_fail;
- }
-
- brelse(di_bh);
-
- return 0;
-
--out_fail_data:
-- ocfs2_data_unlock(inode, 1);
- out_fail:
- up_write(&OCFS2_I(inode)->ip_alloc_sem);
++struct prev_kprobe {
++ struct kprobe *kp;
++ unsigned int status;
++};
++
++/* per-cpu kprobe control block */
++struct kprobe_ctlblk {
++ unsigned int kprobe_status;
++ struct prev_kprobe prev_kprobe;
++ struct pt_regs jprobe_saved_regs;
++ char jprobes_stack[MAX_STACK_SIZE];
++};
++
++void arch_remove_kprobe(struct kprobe *);
++
++int kprobe_trap_handler(struct pt_regs *regs, unsigned int instr);
++int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
++int kprobe_exceptions_notify(struct notifier_block *self,
++ unsigned long val, void *data);
++
++enum kprobe_insn {
++ INSN_REJECTED,
++ INSN_GOOD,
++ INSN_GOOD_NO_SLOT
++};
++
++enum kprobe_insn arm_kprobe_decode_insn(kprobe_opcode_t,
++ struct arch_specific_insn *);
++void __init arm_kprobe_decode_init(void);
++
++#endif /* _ARM_KPROBES_H */
+diff --git a/include/asm-arm/plat-s3c24xx/dma.h b/include/asm-arm/plat-s3c24xx/dma.h
+index 2c59406..c78efe3 100644
+--- a/include/asm-arm/plat-s3c24xx/dma.h
++++ b/include/asm-arm/plat-s3c24xx/dma.h
+@@ -32,6 +32,7 @@ struct s3c24xx_dma_map {
+ struct s3c24xx_dma_addr hw_addr;
- brelse(di_bh);
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+ unsigned long channels[S3C2410_DMA_CHANNELS];
++ unsigned long channels_rx[S3C2410_DMA_CHANNELS];
+ };
- return ret;
- }
-@@ -1908,15 +1933,15 @@ static int ocfs2_write_end(struct file *file, struct address_space *mapping,
+ struct s3c24xx_dma_selection {
+@@ -41,6 +42,10 @@ struct s3c24xx_dma_selection {
- ret = ocfs2_write_end_nolock(mapping, pos, len, copied, page, fsdata);
+ void (*select)(struct s3c2410_dma_chan *chan,
+ struct s3c24xx_dma_map *map);
++
++ void (*direction)(struct s3c2410_dma_chan *chan,
++ struct s3c24xx_dma_map *map,
++ enum s3c2410_dmasrc dir);
+ };
-- ocfs2_data_unlock(inode, 1);
- up_write(&OCFS2_I(inode)->ip_alloc_sem);
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+ extern int s3c24xx_dma_init_map(struct s3c24xx_dma_selection *sel);
+diff --git a/include/asm-arm/plat-s3c24xx/irq.h b/include/asm-arm/plat-s3c24xx/irq.h
+index 8af6d95..45746a9 100644
+--- a/include/asm-arm/plat-s3c24xx/irq.h
++++ b/include/asm-arm/plat-s3c24xx/irq.h
+@@ -15,7 +15,9 @@
- return ret;
- }
+ #define EXTINT_OFF (IRQ_EINT4 - 4)
- const struct address_space_operations ocfs2_aops = {
- .readpage = ocfs2_readpage,
-+ .readpages = ocfs2_readpages,
- .writepage = ocfs2_writepage,
- .write_begin = ocfs2_write_begin,
- .write_end = ocfs2_write_end,
-diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
-index c903741..f136639 100644
---- a/fs/ocfs2/buffer_head_io.c
-+++ b/fs/ocfs2/buffer_head_io.c
-@@ -79,7 +79,7 @@ int ocfs2_write_block(struct ocfs2_super *osb, struct buffer_head *bh,
- * information for this bh as it's not marked locally
- * uptodate. */
- ret = -EIO;
-- brelse(bh);
-+ put_bh(bh);
- }
++/* these are exported for arch/arm/mach-* usage */
+ extern struct irq_chip s3c_irq_level_chip;
++extern struct irq_chip s3c_irq_chip;
- mutex_unlock(&OCFS2_I(inode)->ip_io_mutex);
-@@ -256,7 +256,7 @@ int ocfs2_read_blocks(struct ocfs2_super *osb, u64 block, int nr,
- * for this bh as it's not marked locally
- * uptodate. */
- status = -EIO;
-- brelse(bh);
-+ put_bh(bh);
- bhs[i] = NULL;
- continue;
- }
-@@ -280,3 +280,64 @@ bail:
- mlog_exit(status);
- return status;
- }
-+
-+/* Check whether the blkno is the super block or one of the backups. */
-+static void ocfs2_check_super_or_backup(struct super_block *sb,
-+ sector_t blkno)
-+{
-+ int i;
-+ u64 backup_blkno;
+ static inline void
+ s3c_irqsub_mask(unsigned int irqno, unsigned int parentbit,
+diff --git a/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h b/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
+new file mode 100644
+index 0000000..25d4058
+--- /dev/null
++++ b/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
+@@ -0,0 +1,72 @@
++/* linux/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
++ *
++ * Copyright 2007 Simtec Electronics <linux at simtec.co.uk>
++ * http://armlinux.simtec.co.uk/
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * S3C2412 IIS register definition
++*/
+
-+ if (blkno == OCFS2_SUPER_BLOCK_BLKNO)
-+ return;
++#ifndef __ASM_ARCH_REGS_S3C2412_IIS_H
++#define __ASM_ARCH_REGS_S3C2412_IIS_H
+
-+ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
-+ backup_blkno = ocfs2_backup_super_blkno(sb, i);
-+ if (backup_blkno == blkno)
-+ return;
-+ }
++#define S3C2412_IISCON (0x00)
++#define S3C2412_IISMOD (0x04)
++#define S3C2412_IISFIC (0x08)
++#define S3C2412_IISPSR (0x0C)
++#define S3C2412_IISTXD (0x10)
++#define S3C2412_IISRXD (0x14)
+
-+ BUG();
-+}
++#define S3C2412_IISCON_LRINDEX (1 << 11)
++#define S3C2412_IISCON_TXFIFO_EMPTY (1 << 10)
++#define S3C2412_IISCON_RXFIFO_EMPTY (1 << 9)
++#define S3C2412_IISCON_TXFIFO_FULL (1 << 8)
++#define S3C2412_IISCON_RXFIFO_FULL (1 << 7)
++#define S3C2412_IISCON_TXDMA_PAUSE (1 << 6)
++#define S3C2412_IISCON_RXDMA_PAUSE (1 << 5)
++#define S3C2412_IISCON_TXCH_PAUSE (1 << 4)
++#define S3C2412_IISCON_RXCH_PAUSE (1 << 3)
++#define S3C2412_IISCON_TXDMA_ACTIVE (1 << 2)
++#define S3C2412_IISCON_RXDMA_ACTIVE (1 << 1)
++#define S3C2412_IISCON_IIS_ACTIVE (1 << 0)
+
-+/*
-+ * Write super block and backups doesn't need to collaborate with journal,
-+ * so we don't need to lock ip_io_mutex and inode doesn't need to bea passed
-+ * into this function.
-+ */
-+int ocfs2_write_super_or_backup(struct ocfs2_super *osb,
-+ struct buffer_head *bh)
-+{
-+ int ret = 0;
++#define S3C2412_IISMOD_MASTER_INTERNAL (0 << 10)
++#define S3C2412_IISMOD_MASTER_EXTERNAL (1 << 10)
++#define S3C2412_IISMOD_SLAVE (2 << 10)
++#define S3C2412_IISMOD_MASTER_MASK (3 << 10)
++#define S3C2412_IISMOD_MODE_TXONLY (0 << 8)
++#define S3C2412_IISMOD_MODE_RXONLY (1 << 8)
++#define S3C2412_IISMOD_MODE_TXRX (2 << 8)
++#define S3C2412_IISMOD_MODE_MASK (3 << 8)
++#define S3C2412_IISMOD_LR_LLOW (0 << 7)
++#define S3C2412_IISMOD_LR_RLOW (1 << 7)
++#define S3C2412_IISMOD_SDF_IIS (0 << 5)
++#define S3C2412_IISMOD_SDF_MSB (0 << 5)
++#define S3C2412_IISMOD_SDF_LSB (0 << 5)
++#define S3C2412_IISMOD_SDF_MASK (3 << 5)
++#define S3C2412_IISMOD_RCLK_256FS (0 << 3)
++#define S3C2412_IISMOD_RCLK_512FS (1 << 3)
++#define S3C2412_IISMOD_RCLK_384FS (2 << 3)
++#define S3C2412_IISMOD_RCLK_768FS (3 << 3)
++#define S3C2412_IISMOD_RCLK_MASK (3 << 3)
++#define S3C2412_IISMOD_BCLK_32FS (0 << 1)
++#define S3C2412_IISMOD_BCLK_48FS (1 << 1)
++#define S3C2412_IISMOD_BCLK_16FS (2 << 1)
++#define S3C2412_IISMOD_BCLK_24FS (3 << 1)
++#define S3C2412_IISMOD_BCLK_MASK (3 << 1)
++#define S3C2412_IISMOD_8BIT (1 << 0)
+
-+ mlog_entry_void();
++#define S3C2412_IISPSR_PSREN (1 << 15)
+
-+ BUG_ON(buffer_jbd(bh));
-+ ocfs2_check_super_or_backup(osb->sb, bh->b_blocknr);
++#define S3C2412_IISFIC_TXFLUSH (1 << 15)
++#define S3C2412_IISFIC_RXFLUSH (1 << 7)
++#define S3C2412_IISFIC_TXCOUNT(x) (((x) >> 8) & 0xf)
++#define S3C2412_IISFIC_RXCOUNT(x) (((x) >> 0) & 0xf)
+
-+ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb)) {
-+ ret = -EROFS;
-+ goto out;
-+ }
+
-+ lock_buffer(bh);
-+ set_buffer_uptodate(bh);
+
-+ /* remove from dirty list before I/O. */
-+ clear_buffer_dirty(bh);
++#endif /* __ASM_ARCH_REGS_S3C2412_IIS_H */
+
-+ get_bh(bh); /* for end_buffer_write_sync() */
-+ bh->b_end_io = end_buffer_write_sync;
-+ submit_bh(WRITE, bh);
+diff --git a/include/asm-arm/plat-s3c24xx/regs-spi.h b/include/asm-arm/plat-s3c24xx/regs-spi.h
+index 4a499a1..ea565b0 100644
+--- a/include/asm-arm/plat-s3c24xx/regs-spi.h
++++ b/include/asm-arm/plat-s3c24xx/regs-spi.h
+@@ -17,6 +17,21 @@
+
+ #define S3C2410_SPCON (0x00)
+
++#define S3C2412_SPCON_RXFIFO_RB2 (0<<14)
++#define S3C2412_SPCON_RXFIFO_RB4 (1<<14)
++#define S3C2412_SPCON_RXFIFO_RB12 (2<<14)
++#define S3C2412_SPCON_RXFIFO_RB14 (3<<14)
++#define S3C2412_SPCON_TXFIFO_RB2 (0<<12)
++#define S3C2412_SPCON_TXFIFO_RB4 (1<<12)
++#define S3C2412_SPCON_TXFIFO_RB12 (2<<12)
++#define S3C2412_SPCON_TXFIFO_RB14 (3<<12)
++#define S3C2412_SPCON_RXFIFO_RESET (1<<11) /* RxFIFO reset */
++#define S3C2412_SPCON_TXFIFO_RESET (1<<10) /* TxFIFO reset */
++#define S3C2412_SPCON_RXFIFO_EN (1<<9) /* RxFIFO Enable */
++#define S3C2412_SPCON_TXFIFO_EN (1<<8) /* TxFIFO Enable */
+
-+ wait_on_buffer(bh);
++#define S3C2412_SPCON_DIRC_RX (1<<7)
+
-+ if (!buffer_uptodate(bh)) {
-+ ret = -EIO;
-+ put_bh(bh);
-+ }
+ #define S3C2410_SPCON_SMOD_DMA (2<<5) /* DMA mode */
+ #define S3C2410_SPCON_SMOD_INT (1<<5) /* interrupt mode */
+ #define S3C2410_SPCON_SMOD_POLL (0<<5) /* polling mode */
+@@ -34,10 +49,19 @@
+
+ #define S3C2410_SPSTA (0x04)
+
++#define S3C2412_SPSTA_RXFIFO_AE (1<<11)
++#define S3C2412_SPSTA_TXFIFO_AE (1<<10)
++#define S3C2412_SPSTA_RXFIFO_ERROR (1<<9)
++#define S3C2412_SPSTA_TXFIFO_ERROR (1<<8)
++#define S3C2412_SPSTA_RXFIFO_FIFO (1<<7)
++#define S3C2412_SPSTA_RXFIFO_EMPTY (1<<6)
++#define S3C2412_SPSTA_TXFIFO_NFULL (1<<5)
++#define S3C2412_SPSTA_TXFIFO_EMPTY (1<<4)
+
-+out:
-+ mlog_exit(ret);
-+ return ret;
-+}
-diff --git a/fs/ocfs2/buffer_head_io.h b/fs/ocfs2/buffer_head_io.h
-index 6cc2093..c2e7861 100644
---- a/fs/ocfs2/buffer_head_io.h
-+++ b/fs/ocfs2/buffer_head_io.h
-@@ -47,6 +47,8 @@ int ocfs2_read_blocks(struct ocfs2_super *osb,
- int flags,
- struct inode *inode);
+ #define S3C2410_SPSTA_DCOL (1<<2) /* Data Collision Error */
+ #define S3C2410_SPSTA_MULD (1<<1) /* Multi Master Error */
+ #define S3C2410_SPSTA_READY (1<<0) /* Data Tx/Rx ready */
+-
++#define S3C2412_SPSTA_READY_ORG (1<<3)
-+int ocfs2_write_super_or_backup(struct ocfs2_super *osb,
-+ struct buffer_head *bh);
+ #define S3C2410_SPPIN (0x08)
- #define OCFS2_BH_CACHED 1
- #define OCFS2_BH_READAHEAD 8
-diff --git a/fs/ocfs2/cluster/heartbeat.h b/fs/ocfs2/cluster/heartbeat.h
-index 35397dd..e511339 100644
---- a/fs/ocfs2/cluster/heartbeat.h
-+++ b/fs/ocfs2/cluster/heartbeat.h
-@@ -35,7 +35,7 @@
- #define O2HB_LIVE_THRESHOLD 2
- /* number of equal samples to be seen as dead */
- extern unsigned int o2hb_dead_threshold;
--#define O2HB_DEFAULT_DEAD_THRESHOLD 7
-+#define O2HB_DEFAULT_DEAD_THRESHOLD 31
- /* Otherwise MAX_WRITE_TIMEOUT will be zero... */
- #define O2HB_MIN_DEAD_THRESHOLD 2
- #define O2HB_MAX_WRITE_TIMEOUT_MS (O2HB_REGION_TIMEOUT_MS * (o2hb_dead_threshold - 1))
-diff --git a/fs/ocfs2/cluster/masklog.c b/fs/ocfs2/cluster/masklog.c
-index a4882c8..23c732f 100644
---- a/fs/ocfs2/cluster/masklog.c
-+++ b/fs/ocfs2/cluster/masklog.c
-@@ -146,7 +146,7 @@ static struct kset mlog_kset = {
- .kobj = {.ktype = &mlog_ktype},
- };
+@@ -46,9 +70,13 @@
+ #define S3C2400_SPPIN_nCS (1<<1) /* SPI Card Select */
+ #define S3C2410_SPPIN_KEEP (1<<0) /* Master Out keep */
--int mlog_sys_init(struct kset *o2cb_subsys)
-+int mlog_sys_init(struct kset *o2cb_kset)
- {
- int i = 0;
+-
+ #define S3C2410_SPPRE (0x0C)
+ #define S3C2410_SPTDAT (0x10)
+ #define S3C2410_SPRDAT (0x14)
-@@ -157,7 +157,7 @@ int mlog_sys_init(struct kset *o2cb_subsys)
- mlog_attr_ptrs[i] = NULL;
++#define S3C2412_TXFIFO (0x18)
++#define S3C2412_RXFIFO (0x18)
++#define S3C2412_SPFIC (0x24)
++
++
+ #endif /* __ASM_ARCH_REGS_SPI_H */
+diff --git a/include/asm-arm/proc-fns.h b/include/asm-arm/proc-fns.h
+index 5599d4e..a4ce457 100644
+--- a/include/asm-arm/proc-fns.h
++++ b/include/asm-arm/proc-fns.h
+@@ -185,6 +185,14 @@
+ # define CPU_NAME cpu_xsc3
+ # endif
+ # endif
++# ifdef CONFIG_CPU_FEROCEON
++# ifdef CPU_NAME
++# undef MULTI_CPU
++# define MULTI_CPU
++# else
++# define CPU_NAME cpu_feroceon
++# endif
++# endif
+ # ifdef CONFIG_CPU_V6
+ # ifdef CPU_NAME
+ # undef MULTI_CPU
+diff --git a/include/asm-arm/traps.h b/include/asm-arm/traps.h
+index d4f34dc..f1541af 100644
+--- a/include/asm-arm/traps.h
++++ b/include/asm-arm/traps.h
+@@ -15,4 +15,13 @@ struct undef_hook {
+ void register_undef_hook(struct undef_hook *hook);
+ void unregister_undef_hook(struct undef_hook *hook);
- kobject_set_name(&mlog_kset.kobj, "logmask");
-- kobj_set_kset_s(&mlog_kset, *o2cb_subsys);
-+ mlog_kset.kobj.kset = o2cb_kset;
- return kset_register(&mlog_kset);
- }
++static inline int in_exception_text(unsigned long ptr)
++{
++ extern char __exception_text_start[];
++ extern char __exception_text_end[];
++
++ return ptr >= (unsigned long)&__exception_text_start &&
++ ptr < (unsigned long)&__exception_text_end;
++}
++
+ #endif
+diff --git a/include/asm-arm/vfp.h b/include/asm-arm/vfp.h
+index bd6be9d..5f9a2cb 100644
+--- a/include/asm-arm/vfp.h
++++ b/include/asm-arm/vfp.h
+@@ -7,7 +7,11 @@
-diff --git a/fs/ocfs2/cluster/sys.c b/fs/ocfs2/cluster/sys.c
-index 64f6f37..0c095ce 100644
---- a/fs/ocfs2/cluster/sys.c
-+++ b/fs/ocfs2/cluster/sys.c
-@@ -28,96 +28,55 @@
- #include <linux/module.h>
- #include <linux/kobject.h>
- #include <linux/sysfs.h>
-+#include <linux/fs.h>
+ #define FPSID cr0
+ #define FPSCR cr1
++#define MVFR1 cr6
++#define MVFR0 cr7
+ #define FPEXC cr8
++#define FPINST cr9
++#define FPINST2 cr10
- #include "ocfs2_nodemanager.h"
- #include "masklog.h"
- #include "sys.h"
+ /* FPSID bits */
+ #define FPSID_IMPLEMENTER_BIT (24)
+@@ -28,6 +32,19 @@
+ /* FPEXC bits */
+ #define FPEXC_EX (1 << 31)
+ #define FPEXC_EN (1 << 30)
++#define FPEXC_DEX (1 << 29)
++#define FPEXC_FP2V (1 << 28)
++#define FPEXC_VV (1 << 27)
++#define FPEXC_TFV (1 << 26)
++#define FPEXC_LENGTH_BIT (8)
++#define FPEXC_LENGTH_MASK (7 << FPEXC_LENGTH_BIT)
++#define FPEXC_IDF (1 << 7)
++#define FPEXC_IXF (1 << 4)
++#define FPEXC_UFF (1 << 3)
++#define FPEXC_OFF (1 << 2)
++#define FPEXC_DZF (1 << 1)
++#define FPEXC_IOF (1 << 0)
++#define FPEXC_TRAP_MASK (FPEXC_IDF|FPEXC_IXF|FPEXC_UFF|FPEXC_OFF|FPEXC_DZF|FPEXC_IOF)
--struct o2cb_attribute {
-- struct attribute attr;
-- ssize_t (*show)(char *buf);
-- ssize_t (*store)(const char *buf, size_t count);
--};
--
--#define O2CB_ATTR(_name, _mode, _show, _store) \
--struct o2cb_attribute o2cb_attr_##_name = __ATTR(_name, _mode, _show, _store)
--
--#define to_o2cb_attr(_attr) container_of(_attr, struct o2cb_attribute, attr)
+ /* FPSCR bits */
+ #define FPSCR_DEFAULT_NAN (1<<25)
+@@ -55,20 +72,9 @@
+ #define FPSCR_IXC (1<<4)
+ #define FPSCR_IDC (1<<7)
--static ssize_t o2cb_interface_revision_show(char *buf)
-+static ssize_t version_show(struct kobject *kobj, struct kobj_attribute *attr,
-+ char *buf)
- {
- return snprintf(buf, PAGE_SIZE, "%u\n", O2NM_API_VERSION);
- }
+-/*
+- * VFP9-S specific.
+- */
+-#define FPINST cr9
+-#define FPINST2 cr10
-
--static O2CB_ATTR(interface_revision, S_IFREG | S_IRUGO, o2cb_interface_revision_show, NULL);
-+static struct kobj_attribute attr_version =
-+ __ATTR(interface_revision, S_IFREG | S_IRUGO, version_show, NULL);
+-/* FPEXC bits */
+-#define FPEXC_FPV2 (1<<28)
+-#define FPEXC_LENGTH_BIT (8)
+-#define FPEXC_LENGTH_MASK (7 << FPEXC_LENGTH_BIT)
+-#define FPEXC_INV (1 << 7)
+-#define FPEXC_UFC (1 << 3)
+-#define FPEXC_OFC (1 << 2)
+-#define FPEXC_IOC (1 << 0)
++/* MVFR0 bits */
++#define MVFR0_A_SIMD_BIT (0)
++#define MVFR0_A_SIMD_MASK (0xf << MVFR0_A_SIMD_BIT)
- static struct attribute *o2cb_attrs[] = {
-- &o2cb_attr_interface_revision.attr,
-+ &attr_version.attr,
- NULL,
- };
+ /* Bit patterns for decoding the packaged operation descriptors */
+ #define VFPOPDESC_LENGTH_BIT (9)
+diff --git a/include/asm-arm/vfpmacros.h b/include/asm-arm/vfpmacros.h
+index 27fe028..cccb389 100644
+--- a/include/asm-arm/vfpmacros.h
++++ b/include/asm-arm/vfpmacros.h
+@@ -15,19 +15,33 @@
+ .endm
--static ssize_t
--o2cb_show(struct kobject * kobj, struct attribute * attr, char * buffer);
--static ssize_t
--o2cb_store(struct kobject * kobj, struct attribute * attr,
-- const char * buffer, size_t count);
--static struct sysfs_ops o2cb_sysfs_ops = {
-- .show = o2cb_show,
-- .store = o2cb_store,
-+static struct attribute_group o2cb_attr_group = {
-+ .attrs = o2cb_attrs,
- };
+ @ read all the working registers back into the VFP
+- .macro VFPFLDMIA, base
++ .macro VFPFLDMIA, base, tmp
+ #if __LINUX_ARM_ARCH__ < 6
+ LDC p11, cr0, [\base],#33*4 @ FLDMIAX \base!, {d0-d15}
+ #else
+ LDC p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d0-d15}
+ #endif
++#ifdef CONFIG_VFPv3
++ VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
++ and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
++ cmp \tmp, #2 @ 32 x 64bit registers?
++ ldceql p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
++ addne \base, \base, #32*4 @ step over unused register space
++#endif
+ .endm
--static struct kobj_type o2cb_subsys_type = {
-- .default_attrs = o2cb_attrs,
-- .sysfs_ops = &o2cb_sysfs_ops,
--};
--
--/* gives us o2cb_subsys */
--static decl_subsys(o2cb, NULL, NULL);
--
--static ssize_t
--o2cb_show(struct kobject * kobj, struct attribute * attr, char * buffer)
--{
-- struct o2cb_attribute *o2cb_attr = to_o2cb_attr(attr);
-- struct kset *sbs = to_kset(kobj);
+ @ write all the working registers out of the VFP
+- .macro VFPFSTMIA, base
++ .macro VFPFSTMIA, base, tmp
+ #if __LINUX_ARM_ARCH__ < 6
+ STC p11, cr0, [\base],#33*4 @ FSTMIAX \base!, {d0-d15}
+ #else
+ STC p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d0-d15}
+ #endif
++#ifdef CONFIG_VFPv3
++ VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
++ and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
++ cmp \tmp, #2 @ 32 x 64bit registers?
++ stceql p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
++ addne \base, \base, #32*4 @ step over unused register space
++#endif
+ .endm
+diff --git a/include/asm-avr32/arch-at32ap/at32ap7000.h b/include/asm-avr32/arch-at32ap/at32ap7000.h
+deleted file mode 100644
+index 3914d7b..0000000
+--- a/include/asm-avr32/arch-at32ap/at32ap7000.h
++++ /dev/null
+@@ -1,35 +0,0 @@
+-/*
+- * Pin definitions for AT32AP7000.
+- *
+- * Copyright (C) 2006 Atmel Corporation
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License version 2 as
+- * published by the Free Software Foundation.
+- */
+-#ifndef __ASM_ARCH_AT32AP7000_H__
+-#define __ASM_ARCH_AT32AP7000_H__
-
-- BUG_ON(sbs != &o2cb_subsys);
+-#define GPIO_PERIPH_A 0
+-#define GPIO_PERIPH_B 1
-
-- if (o2cb_attr->show)
-- return o2cb_attr->show(buffer);
-- return -EIO;
--}
+-#define NR_GPIO_CONTROLLERS 4
-
--static ssize_t
--o2cb_store(struct kobject * kobj, struct attribute * attr,
-- const char * buffer, size_t count)
--{
-- struct o2cb_attribute *o2cb_attr = to_o2cb_attr(attr);
-- struct kset *sbs = to_kset(kobj);
+-/*
+- * Pin numbers identifying specific GPIO pins on the chip. They can
+- * also be converted to IRQ numbers by passing them through
+- * gpio_to_irq().
+- */
+-#define GPIO_PIOA_BASE (0)
+-#define GPIO_PIOB_BASE (GPIO_PIOA_BASE + 32)
+-#define GPIO_PIOC_BASE (GPIO_PIOB_BASE + 32)
+-#define GPIO_PIOD_BASE (GPIO_PIOC_BASE + 32)
+-#define GPIO_PIOE_BASE (GPIO_PIOD_BASE + 32)
-
-- BUG_ON(sbs != &o2cb_subsys);
+-#define GPIO_PIN_PA(N) (GPIO_PIOA_BASE + (N))
+-#define GPIO_PIN_PB(N) (GPIO_PIOB_BASE + (N))
+-#define GPIO_PIN_PC(N) (GPIO_PIOC_BASE + (N))
+-#define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
+-#define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))
-
-- if (o2cb_attr->store)
-- return o2cb_attr->store(buffer, count);
-- return -EIO;
--}
-+static struct kset *o2cb_kset;
-
- void o2cb_sys_shutdown(void)
- {
- mlog_sys_shutdown();
-- subsystem_unregister(&o2cb_subsys);
-+ kset_unregister(o2cb_kset);
- }
-
- int o2cb_sys_init(void)
- {
- int ret;
-
-- o2cb_subsys.kobj.ktype = &o2cb_subsys_type;
-- ret = subsystem_register(&o2cb_subsys);
-+ o2cb_kset = kset_create_and_add("o2cb", NULL, NULL);
-+ if (!o2cb_kset)
-+ return -ENOMEM;
-+
-+ ret = sysfs_create_group(&o2cb_kset->kobj, &o2cb_attr_group);
- if (ret)
-- return ret;
-+ goto error;
-
-- ret = mlog_sys_init(&o2cb_subsys);
-+ ret = mlog_sys_init(o2cb_kset);
- if (ret)
-- subsystem_unregister(&o2cb_subsys);
-+ goto error;
-+ return 0;
-+error:
-+ kset_unregister(o2cb_kset);
- return ret;
- }
-diff --git a/fs/ocfs2/cluster/tcp.h b/fs/ocfs2/cluster/tcp.h
-index da880fc..f36f66a 100644
---- a/fs/ocfs2/cluster/tcp.h
-+++ b/fs/ocfs2/cluster/tcp.h
-@@ -60,8 +60,8 @@ typedef void (o2net_post_msg_handler_func)(int status, void *data,
- /* same as hb delay, we're waiting for another node to recognize our hb */
- #define O2NET_RECONNECT_DELAY_MS_DEFAULT 2000
-
--#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 5000
--#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 10000
-+#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 2000
-+#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 30000
-
-
- /* TODO: figure this out.... */
-diff --git a/fs/ocfs2/cluster/tcp_internal.h b/fs/ocfs2/cluster/tcp_internal.h
-index 9606111..b2e832a 100644
---- a/fs/ocfs2/cluster/tcp_internal.h
-+++ b/fs/ocfs2/cluster/tcp_internal.h
-@@ -38,6 +38,12 @@
- * locking semantics of the file system using the protocol. It should
- * be somewhere else, I'm sure, but right now it isn't.
- *
-+ * New in version 10:
-+ * - Meta/data locks combined
+-#endif /* __ASM_ARCH_AT32AP7000_H__ */
+diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h b/include/asm-avr32/arch-at32ap/at32ap700x.h
+new file mode 100644
+index 0000000..99684d6
+--- /dev/null
++++ b/include/asm-avr32/arch-at32ap/at32ap700x.h
+@@ -0,0 +1,35 @@
++/*
++ * Pin definitions for AT32AP7000.
+ *
-+ * New in version 9:
-+ * - All votes removed
++ * Copyright (C) 2006 Atmel Corporation
+ *
- * New in version 8:
- * - Replace delete inode votes with a cluster lock
- *
-@@ -60,7 +66,7 @@
- * - full 64 bit i_size in the metadata lock lvbs
- * - introduction of "rw" lock and pushing meta/data locking down
- */
--#define O2NET_PROTOCOL_VERSION 8ULL
-+#define O2NET_PROTOCOL_VERSION 10ULL
- struct o2net_handshake {
- __be64 protocol_version;
- __be64 connector_id;
-diff --git a/fs/ocfs2/cluster/ver.c b/fs/ocfs2/cluster/ver.c
-index 7286c48..a56eee6 100644
---- a/fs/ocfs2/cluster/ver.c
-+++ b/fs/ocfs2/cluster/ver.c
-@@ -28,7 +28,7 @@
-
- #include "ver.h"
-
--#define CLUSTER_BUILD_VERSION "1.3.3"
-+#define CLUSTER_BUILD_VERSION "1.5.0"
-
- #define VERSION_STR "OCFS2 Node Manager " CLUSTER_BUILD_VERSION
-
-diff --git a/fs/ocfs2/dcache.c b/fs/ocfs2/dcache.c
-index 9923278..b1cc7c3 100644
---- a/fs/ocfs2/dcache.c
-+++ b/fs/ocfs2/dcache.c
-@@ -128,9 +128,9 @@ static int ocfs2_match_dentry(struct dentry *dentry,
- /*
- * Walk the inode alias list, and find a dentry which has a given
- * parent. ocfs2_dentry_attach_lock() wants to find _any_ alias as it
-- * is looking for a dentry_lock reference. The vote thread is looking
-- * to unhash aliases, so we allow it to skip any that already have
-- * that property.
-+ * is looking for a dentry_lock reference. The downconvert thread is
-+ * looking to unhash aliases, so we allow it to skip any that already
-+ * have that property.
- */
- struct dentry *ocfs2_find_local_alias(struct inode *inode,
- u64 parent_blkno,
-@@ -266,7 +266,7 @@ int ocfs2_dentry_attach_lock(struct dentry *dentry,
- dl->dl_count = 0;
- /*
- * Does this have to happen below, for all attaches, in case
-- * the struct inode gets blown away by votes?
-+ * the struct inode gets blown away by the downconvert thread?
- */
- dl->dl_inode = igrab(inode);
- dl->dl_parent_blkno = parent_blkno;
-diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
-index 63b28fd..6b0107f 100644
---- a/fs/ocfs2/dir.c
-+++ b/fs/ocfs2/dir.c
-@@ -846,14 +846,14 @@ int ocfs2_readdir(struct file * filp, void * dirent, filldir_t filldir)
- mlog_entry("dirino=%llu\n",
- (unsigned long long)OCFS2_I(inode)->ip_blkno);
-
-- error = ocfs2_meta_lock_atime(inode, filp->f_vfsmnt, &lock_level);
-+ error = ocfs2_inode_lock_atime(inode, filp->f_vfsmnt, &lock_level);
- if (lock_level && error >= 0) {
- /* We release EX lock which used to update atime
- * and get PR lock again to reduce contention
- * on commonly accessed directories. */
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- lock_level = 0;
-- error = ocfs2_meta_lock(inode, NULL, 0);
-+ error = ocfs2_inode_lock(inode, NULL, 0);
- }
- if (error < 0) {
- if (error != -ENOENT)
-@@ -865,7 +865,7 @@ int ocfs2_readdir(struct file * filp, void * dirent, filldir_t filldir)
- error = ocfs2_dir_foreach_blk(inode, &filp->f_version, &filp->f_pos,
- dirent, filldir, NULL);
-
-- ocfs2_meta_unlock(inode, lock_level);
-+ ocfs2_inode_unlock(inode, lock_level);
-
- bail_nolock:
- mlog_exit(error);
-diff --git a/fs/ocfs2/dlm/dlmfsver.c b/fs/ocfs2/dlm/dlmfsver.c
-index d2be3ad..a733b33 100644
---- a/fs/ocfs2/dlm/dlmfsver.c
-+++ b/fs/ocfs2/dlm/dlmfsver.c
-@@ -28,7 +28,7 @@
-
- #include "dlmfsver.h"
-
--#define DLM_BUILD_VERSION "1.3.3"
-+#define DLM_BUILD_VERSION "1.5.0"
-
- #define VERSION_STR "OCFS2 DLMFS " DLM_BUILD_VERSION
-
-diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
-index 2fde7bf..91f747b 100644
---- a/fs/ocfs2/dlm/dlmrecovery.c
-+++ b/fs/ocfs2/dlm/dlmrecovery.c
-@@ -2270,6 +2270,12 @@ static void __dlm_hb_node_down(struct dlm_ctxt *dlm, int idx)
- }
- }
-
-+ /* Clean up join state on node death. */
-+ if (dlm->joining_node == idx) {
-+ mlog(0, "Clearing join state for node %u\n", idx);
-+ __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
-+ }
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++#ifndef __ASM_ARCH_AT32AP700X_H__
++#define __ASM_ARCH_AT32AP700X_H__
+
- /* check to see if the node is already considered dead */
- if (!test_bit(idx, dlm->live_nodes_map)) {
- mlog(0, "for domain %s, node %d is already dead. "
-@@ -2288,12 +2294,6 @@ static void __dlm_hb_node_down(struct dlm_ctxt *dlm, int idx)
-
- clear_bit(idx, dlm->live_nodes_map);
-
-- /* Clean up join state on node death. */
-- if (dlm->joining_node == idx) {
-- mlog(0, "Clearing join state for node %u\n", idx);
-- __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
-- }
--
- /* make sure local cleanup occurs before the heartbeat events */
- if (!test_bit(idx, dlm->recovery_map))
- dlm_do_local_recovery_cleanup(dlm, idx);
-@@ -2321,6 +2321,13 @@ void dlm_hb_node_down_cb(struct o2nm_node *node, int idx, void *data)
- if (!dlm_grab(dlm))
- return;
-
-+ /*
-+ * This will notify any dlm users that a node in our domain
-+ * went away without notifying us first.
-+ */
-+ if (test_bit(idx, dlm->domain_map))
-+ dlm_fire_domain_eviction_callbacks(dlm, idx);
++#define GPIO_PERIPH_A 0
++#define GPIO_PERIPH_B 1
+
- spin_lock(&dlm->spinlock);
- __dlm_hb_node_down(dlm, idx);
- spin_unlock(&dlm->spinlock);
-diff --git a/fs/ocfs2/dlm/dlmver.c b/fs/ocfs2/dlm/dlmver.c
-index 7ef2653..dfc0da4 100644
---- a/fs/ocfs2/dlm/dlmver.c
-+++ b/fs/ocfs2/dlm/dlmver.c
-@@ -28,7 +28,7 @@
-
- #include "dlmver.h"
-
--#define DLM_BUILD_VERSION "1.3.3"
-+#define DLM_BUILD_VERSION "1.5.0"
-
- #define VERSION_STR "OCFS2 DLM " DLM_BUILD_VERSION
-
-diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
-index 4e97dcc..3867244 100644
---- a/fs/ocfs2/dlmglue.c
-+++ b/fs/ocfs2/dlmglue.c
-@@ -55,7 +55,6 @@
- #include "slot_map.h"
- #include "super.h"
- #include "uptodate.h"
--#include "vote.h"
-
- #include "buffer_head_io.h"
-
-@@ -69,6 +68,7 @@ struct ocfs2_mask_waiter {
++#define NR_GPIO_CONTROLLERS 4
++
++/*
++ * Pin numbers identifying specific GPIO pins on the chip. They can
++ * also be converted to IRQ numbers by passing them through
++ * gpio_to_irq().
++ */
++#define GPIO_PIOA_BASE (0)
++#define GPIO_PIOB_BASE (GPIO_PIOA_BASE + 32)
++#define GPIO_PIOC_BASE (GPIO_PIOB_BASE + 32)
++#define GPIO_PIOD_BASE (GPIO_PIOC_BASE + 32)
++#define GPIO_PIOE_BASE (GPIO_PIOD_BASE + 32)
++
++#define GPIO_PIN_PA(N) (GPIO_PIOA_BASE + (N))
++#define GPIO_PIN_PB(N) (GPIO_PIOB_BASE + (N))
++#define GPIO_PIN_PC(N) (GPIO_PIOC_BASE + (N))
++#define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
++#define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))
++
++#endif /* __ASM_ARCH_AT32AP700X_H__ */
+diff --git a/include/asm-avr32/arch-at32ap/cpu.h b/include/asm-avr32/arch-at32ap/cpu.h
+index a762f42..44d0bfa 100644
+--- a/include/asm-avr32/arch-at32ap/cpu.h
++++ b/include/asm-avr32/arch-at32ap/cpu.h
+@@ -14,7 +14,7 @@
+ * Only AT32AP7000 is defined for now. We can identify the specific
+ * chip at runtime, but I'm not sure if it's really worth it.
+ */
+-#ifdef CONFIG_CPU_AT32AP7000
++#ifdef CONFIG_CPU_AT32AP700X
+ # define cpu_is_at32ap7000() (1)
+ #else
+ # define cpu_is_at32ap7000() (0)
+@@ -30,5 +30,6 @@
+ #define cpu_is_at91sam9261() (0)
+ #define cpu_is_at91sam9263() (0)
+ #define cpu_is_at91sam9rl() (0)
++#define cpu_is_at91cap9() (0)
- static struct ocfs2_super *ocfs2_get_dentry_osb(struct ocfs2_lock_res *lockres);
- static struct ocfs2_super *ocfs2_get_inode_osb(struct ocfs2_lock_res *lockres);
-+static struct ocfs2_super *ocfs2_get_file_osb(struct ocfs2_lock_res *lockres);
+ #endif /* __ASM_ARCH_CPU_H */
+diff --git a/include/asm-avr32/arch-at32ap/io.h b/include/asm-avr32/arch-at32ap/io.h
+index ee59e40..4ec6abc 100644
+--- a/include/asm-avr32/arch-at32ap/io.h
++++ b/include/asm-avr32/arch-at32ap/io.h
+@@ -4,7 +4,7 @@
+ /* For "bizarre" halfword swapping */
+ #include <linux/byteorder/swabb.h>
- /*
- * Return value from ->downconvert_worker functions.
-@@ -153,10 +153,10 @@ struct ocfs2_lock_res_ops {
- struct ocfs2_super * (*get_osb)(struct ocfs2_lock_res *);
+-#if defined(CONFIG_AP7000_32_BIT_SMC)
++#if defined(CONFIG_AP700X_32_BIT_SMC)
+ # define __swizzle_addr_b(addr) (addr ^ 3UL)
+ # define __swizzle_addr_w(addr) (addr ^ 2UL)
+ # define __swizzle_addr_l(addr) (addr)
+@@ -14,7 +14,7 @@
+ # define __mem_ioswabb(a, x) (x)
+ # define __mem_ioswabw(a, x) swab16(x)
+ # define __mem_ioswabl(a, x) swab32(x)
+-#elif defined(CONFIG_AP7000_16_BIT_SMC)
++#elif defined(CONFIG_AP700X_16_BIT_SMC)
+ # define __swizzle_addr_b(addr) (addr ^ 1UL)
+ # define __swizzle_addr_w(addr) (addr)
+ # define __swizzle_addr_l(addr) (addr)
+diff --git a/include/asm-avr32/irq.h b/include/asm-avr32/irq.h
+index 83e6549..9315724 100644
+--- a/include/asm-avr32/irq.h
++++ b/include/asm-avr32/irq.h
+@@ -11,4 +11,9 @@
- /*
-- * Optionally called in the downconvert (or "vote") thread
-- * after a successful downconvert. The lockres will not be
-- * referenced after this callback is called, so it is safe to
-- * free memory, etc.
-+ * Optionally called in the downconvert thread after a
-+ * successful downconvert. The lockres will not be referenced
-+ * after this callback is called, so it is safe to free
-+ * memory, etc.
- *
- * The exact semantics of when this is called are controlled
- * by ->downconvert_worker()
-@@ -225,17 +225,12 @@ static struct ocfs2_lock_res_ops ocfs2_inode_rw_lops = {
- .flags = 0,
- };
+ #define irq_canonicalize(i) (i)
--static struct ocfs2_lock_res_ops ocfs2_inode_meta_lops = {
-+static struct ocfs2_lock_res_ops ocfs2_inode_inode_lops = {
- .get_osb = ocfs2_get_inode_osb,
- .check_downconvert = ocfs2_check_meta_downconvert,
- .set_lvb = ocfs2_set_meta_lvb,
-- .flags = LOCK_TYPE_REQUIRES_REFRESH|LOCK_TYPE_USES_LVB,
--};
--
--static struct ocfs2_lock_res_ops ocfs2_inode_data_lops = {
-- .get_osb = ocfs2_get_inode_osb,
- .downconvert_worker = ocfs2_data_convert_worker,
-- .flags = 0,
-+ .flags = LOCK_TYPE_REQUIRES_REFRESH|LOCK_TYPE_USES_LVB,
++#ifndef __ASSEMBLER__
++int nmi_enable(void);
++void nmi_disable(void);
++#endif
++
+ #endif /* __ASM_AVR32_IOCTLS_H */
+diff --git a/include/asm-avr32/kdebug.h b/include/asm-avr32/kdebug.h
+index fd7e990..ca4f954 100644
+--- a/include/asm-avr32/kdebug.h
++++ b/include/asm-avr32/kdebug.h
+@@ -5,6 +5,7 @@
+ enum die_val {
+ DIE_BREAKPOINT,
+ DIE_SSTEP,
++ DIE_NMI,
};
- static struct ocfs2_lock_res_ops ocfs2_super_lops = {
-@@ -258,10 +253,14 @@ static struct ocfs2_lock_res_ops ocfs2_inode_open_lops = {
- .flags = 0,
- };
+ #endif /* __ASM_AVR32_KDEBUG_H */
+diff --git a/include/asm-avr32/ocd.h b/include/asm-avr32/ocd.h
+index 996405e..6bef094 100644
+--- a/include/asm-avr32/ocd.h
++++ b/include/asm-avr32/ocd.h
+@@ -533,6 +533,11 @@ static inline void __ocd_write(unsigned int reg, unsigned long value)
+ #define ocd_read(reg) __ocd_read(OCD_##reg)
+ #define ocd_write(reg, value) __ocd_write(OCD_##reg, value)
-+static struct ocfs2_lock_res_ops ocfs2_flock_lops = {
-+ .get_osb = ocfs2_get_file_osb,
-+ .flags = 0,
-+};
++struct task_struct;
+
- static inline int ocfs2_is_inode_lock(struct ocfs2_lock_res *lockres)
- {
- return lockres->l_type == OCFS2_LOCK_TYPE_META ||
-- lockres->l_type == OCFS2_LOCK_TYPE_DATA ||
- lockres->l_type == OCFS2_LOCK_TYPE_RW ||
- lockres->l_type == OCFS2_LOCK_TYPE_OPEN;
- }
-@@ -310,12 +309,24 @@ static inline void ocfs2_recover_from_dlm_error(struct ocfs2_lock_res *lockres,
- "resource %s: %s\n", dlm_errname(_stat), _func, \
- _lockres->l_name, dlm_errmsg(_stat)); \
- } while (0)
--static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
-- struct ocfs2_lock_res *lockres);
--static int ocfs2_meta_lock_update(struct inode *inode,
-+static int ocfs2_downconvert_thread(void *arg);
-+static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
-+ struct ocfs2_lock_res *lockres);
-+static int ocfs2_inode_lock_update(struct inode *inode,
- struct buffer_head **bh);
- static void ocfs2_drop_osb_locks(struct ocfs2_super *osb);
- static inline int ocfs2_highest_compat_lock_level(int level);
-+static void ocfs2_prepare_downconvert(struct ocfs2_lock_res *lockres,
-+ int new_level);
-+static int ocfs2_downconvert_lock(struct ocfs2_super *osb,
-+ struct ocfs2_lock_res *lockres,
-+ int new_level,
-+ int lvb);
-+static int ocfs2_prepare_cancel_convert(struct ocfs2_super *osb,
-+ struct ocfs2_lock_res *lockres);
-+static int ocfs2_cancel_convert(struct ocfs2_super *osb,
-+ struct ocfs2_lock_res *lockres);
++void ocd_enable(struct task_struct *child);
++void ocd_disable(struct task_struct *child);
+
+ #endif /* !__ASSEMBLER__ */
- static void ocfs2_build_lock_name(enum ocfs2_lock_type type,
- u64 blkno,
-@@ -402,10 +413,7 @@ void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
- ops = &ocfs2_inode_rw_lops;
- break;
- case OCFS2_LOCK_TYPE_META:
-- ops = &ocfs2_inode_meta_lops;
-- break;
-- case OCFS2_LOCK_TYPE_DATA:
-- ops = &ocfs2_inode_data_lops;
-+ ops = &ocfs2_inode_inode_lops;
- break;
- case OCFS2_LOCK_TYPE_OPEN:
- ops = &ocfs2_inode_open_lops;
-@@ -428,6 +436,13 @@ static struct ocfs2_super *ocfs2_get_inode_osb(struct ocfs2_lock_res *lockres)
- return OCFS2_SB(inode->i_sb);
- }
+ #endif /* __ASM_AVR32_OCD_H */
+diff --git a/include/asm-avr32/processor.h b/include/asm-avr32/processor.h
+index a52576b..4212551 100644
+--- a/include/asm-avr32/processor.h
++++ b/include/asm-avr32/processor.h
+@@ -57,11 +57,25 @@ struct avr32_cpuinfo {
+ unsigned short cpu_revision;
+ enum tlb_config tlb_config;
+ unsigned long features;
++ u32 device_id;
-+static struct ocfs2_super *ocfs2_get_file_osb(struct ocfs2_lock_res *lockres)
+ struct cache_info icache;
+ struct cache_info dcache;
+ };
+
++static inline unsigned int avr32_get_manufacturer_id(struct avr32_cpuinfo *cpu)
+{
-+ struct ocfs2_file_private *fp = lockres->l_priv;
-+
-+ return OCFS2_SB(fp->fp_file->f_mapping->host->i_sb);
++ return (cpu->device_id >> 1) & 0x7f;
+}
-+
- static __u64 ocfs2_get_dentry_lock_ino(struct ocfs2_lock_res *lockres)
- {
- __be64 inode_blkno_be;
-@@ -508,6 +523,21 @@ static void ocfs2_rename_lock_res_init(struct ocfs2_lock_res *res,
- &ocfs2_rename_lops, osb);
- }
-
-+void ocfs2_file_lock_res_init(struct ocfs2_lock_res *lockres,
-+ struct ocfs2_file_private *fp)
++static inline unsigned int avr32_get_product_number(struct avr32_cpuinfo *cpu)
+{
-+ struct inode *inode = fp->fp_file->f_mapping->host;
-+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
-+
-+ ocfs2_lock_res_init_once(lockres);
-+ ocfs2_build_lock_name(OCFS2_LOCK_TYPE_FLOCK, oi->ip_blkno,
-+ inode->i_generation, lockres->l_name);
-+ ocfs2_lock_res_init_common(OCFS2_SB(inode->i_sb), lockres,
-+ OCFS2_LOCK_TYPE_FLOCK, &ocfs2_flock_lops,
-+ fp);
-+ lockres->l_flags |= OCFS2_LOCK_NOCACHE;
++ return (cpu->device_id >> 12) & 0xffff;
++}
++static inline unsigned int avr32_get_chip_revision(struct avr32_cpuinfo *cpu)
++{
++ return (cpu->device_id >> 28) & 0x0f;
+}
+
- void ocfs2_lock_res_free(struct ocfs2_lock_res *res)
- {
- mlog_entry_void();
-@@ -724,6 +754,13 @@ static void ocfs2_blocking_ast(void *opaque, int level)
- lockres->l_name, level, lockres->l_level,
- ocfs2_lock_type_string(lockres->l_type));
+ extern struct avr32_cpuinfo boot_cpu_data;
-+ /*
-+ * We can skip the bast for locks which don't enable caching -
-+ * they'll be dropped at the earliest possible time anyway.
-+ */
-+ if (lockres->l_flags & OCFS2_LOCK_NOCACHE)
-+ return;
-+
- spin_lock_irqsave(&lockres->l_lock, flags);
- needs_downconvert = ocfs2_generic_handle_bast(lockres, level);
- if (needs_downconvert)
-@@ -732,7 +769,7 @@ static void ocfs2_blocking_ast(void *opaque, int level)
+ #ifdef CONFIG_SMP
+diff --git a/include/asm-avr32/ptrace.h b/include/asm-avr32/ptrace.h
+index 8c5dba5..9e2d44f 100644
+--- a/include/asm-avr32/ptrace.h
++++ b/include/asm-avr32/ptrace.h
+@@ -121,7 +121,15 @@ struct pt_regs {
+ };
- wake_up(&lockres->l_event);
+ #ifdef __KERNEL__
+-# define user_mode(regs) (((regs)->sr & MODE_MASK) == MODE_USER)
++
++#include <asm/ocd.h>
++
++#define arch_ptrace_attach(child) ocd_enable(child)
++
++#define user_mode(regs) (((regs)->sr & MODE_MASK) == MODE_USER)
++#define instruction_pointer(regs) ((regs)->pc)
++#define profile_pc(regs) instruction_pointer(regs)
++
+ extern void show_regs (struct pt_regs *);
-- ocfs2_kick_vote_thread(osb);
-+ ocfs2_wake_downconvert_thread(osb);
+ static __inline__ int valid_user_regs(struct pt_regs *regs)
+@@ -141,9 +149,6 @@ static __inline__ int valid_user_regs(struct pt_regs *regs)
+ return 0;
}
- static void ocfs2_locking_ast(void *opaque)
-@@ -935,6 +972,21 @@ static int lockres_remove_mask_waiter(struct ocfs2_lock_res *lockres,
+-#define instruction_pointer(regs) ((regs)->pc)
+-
+-#define profile_pc(regs) instruction_pointer(regs)
- }
+ #endif /* __KERNEL__ */
-+static int ocfs2_wait_for_mask_interruptible(struct ocfs2_mask_waiter *mw,
-+ struct ocfs2_lock_res *lockres)
-+{
-+ int ret;
-+
-+ ret = wait_for_completion_interruptible(&mw->mw_complete);
-+ if (ret)
-+ lockres_remove_mask_waiter(lockres, mw);
-+ else
-+ ret = mw->mw_status;
-+ /* Re-arm the completion in case we want to wait on it again */
-+ INIT_COMPLETION(mw->mw_complete);
-+ return ret;
-+}
-+
- static int ocfs2_cluster_lock(struct ocfs2_super *osb,
- struct ocfs2_lock_res *lockres,
- int level,
-@@ -1089,7 +1141,7 @@ static void ocfs2_cluster_unlock(struct ocfs2_super *osb,
- mlog_entry_void();
- spin_lock_irqsave(&lockres->l_lock, flags);
- ocfs2_dec_holders(lockres, level);
-- ocfs2_vote_on_unlock(osb, lockres);
-+ ocfs2_downconvert_on_unlock(osb, lockres);
- spin_unlock_irqrestore(&lockres->l_lock, flags);
- mlog_exit_void();
- }
-@@ -1147,13 +1199,7 @@ int ocfs2_create_new_inode_locks(struct inode *inode)
- * We don't want to use LKM_LOCAL on a meta data lock as they
- * don't use a generation in their lock names.
- */
-- ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_meta_lockres, 1, 0);
-- if (ret) {
-- mlog_errno(ret);
-- goto bail;
-- }
--
-- ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_data_lockres, 1, 1);
-+ ret = ocfs2_create_new_lock(osb, &OCFS2_I(inode)->ip_inode_lockres, 1, 0);
- if (ret) {
- mlog_errno(ret);
- goto bail;
-@@ -1311,76 +1357,221 @@ out:
- mlog_exit_void();
- }
+diff --git a/include/asm-avr32/setup.h b/include/asm-avr32/setup.h
+index b0828d4..ea3070f 100644
+--- a/include/asm-avr32/setup.h
++++ b/include/asm-avr32/setup.h
+@@ -110,7 +110,7 @@ struct tagtable {
+ int (*parse)(struct tag *);
+ };
--int ocfs2_data_lock_full(struct inode *inode,
-- int write,
-- int arg_flags)
-+static int ocfs2_flock_handle_signal(struct ocfs2_lock_res *lockres,
-+ int level)
- {
-- int status = 0, level;
-- struct ocfs2_lock_res *lockres;
-- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+ int ret;
-+ struct ocfs2_super *osb = ocfs2_get_lockres_osb(lockres);
-+ unsigned long flags;
-+ struct ocfs2_mask_waiter mw;
+-#define __tag __attribute_used__ __attribute__((__section__(".taglist.init")))
++#define __tag __used __attribute__((__section__(".taglist.init")))
+ #define __tagtable(tag, fn) \
+ static struct tagtable __tagtable_##fn __tag = { tag, fn }
-- BUG_ON(!inode);
-+ ocfs2_init_mask_waiter(&mw);
+diff --git a/include/asm-avr32/thread_info.h b/include/asm-avr32/thread_info.h
+index 184b574..07049f6 100644
+--- a/include/asm-avr32/thread_info.h
++++ b/include/asm-avr32/thread_info.h
+@@ -88,6 +88,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_MEMDIE 6
+ #define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */
+ #define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */
++#define TIF_DEBUG 30 /* debugging enabled */
+ #define TIF_USERSPACE 31 /* true if FS sets userspace */
-- mlog_entry_void();
-+retry_cancel:
-+ spin_lock_irqsave(&lockres->l_lock, flags);
-+ if (lockres->l_flags & OCFS2_LOCK_BUSY) {
-+ ret = ocfs2_prepare_cancel_convert(osb, lockres);
-+ if (ret) {
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
-+ ret = ocfs2_cancel_convert(osb, lockres);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+ goto retry_cancel;
-+ }
-+ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
+ #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
+diff --git a/include/asm-blackfin/bfin-global.h b/include/asm-blackfin/bfin-global.h
+index 39bdd86..6ae0619 100644
+--- a/include/asm-blackfin/bfin-global.h
++++ b/include/asm-blackfin/bfin-global.h
+@@ -51,7 +51,7 @@ extern unsigned long sclk_to_usecs(unsigned long sclk);
+ extern unsigned long usecs_to_sclk(unsigned long usecs);
-- mlog(0, "inode %llu take %s DATA lock\n",
-- (unsigned long long)OCFS2_I(inode)->ip_blkno,
-- write ? "EXMODE" : "PRMODE");
-+ ocfs2_wait_for_mask(&mw);
-+ goto retry_cancel;
-+ }
+ extern void dump_bfin_process(struct pt_regs *regs);
+-extern void dump_bfin_mem(void *retaddr);
++extern void dump_bfin_mem(struct pt_regs *regs);
+ extern void dump_bfin_trace_buffer(void);
-- /* We'll allow faking a readonly data lock for
-- * rodevices. */
-- if (ocfs2_is_hard_readonly(OCFS2_SB(inode->i_sb))) {
-- if (write) {
-- status = -EROFS;
-- mlog_errno(status);
-+ ret = -ERESTARTSYS;
-+ /*
-+ * We may still have gotten the lock, in which case there's no
-+ * point to restarting the syscall.
-+ */
-+ if (lockres->l_level == level)
-+ ret = 0;
-+
-+ mlog(0, "Cancel returning %d. flags: 0x%lx, level: %d, act: %d\n", ret,
-+ lockres->l_flags, lockres->l_level, lockres->l_action);
-+
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
-+
-+out:
-+ return ret;
-+}
-+
+ extern int init_arch_irq(void);
+diff --git a/include/asm-blackfin/cplb-mpu.h b/include/asm-blackfin/cplb-mpu.h
+new file mode 100644
+index 0000000..75c67b9
+--- /dev/null
++++ b/include/asm-blackfin/cplb-mpu.h
+@@ -0,0 +1,61 @@
+/*
-+ * ocfs2_file_lock() and ocfs2_file_unlock() map to a single pair of
-+ * flock() calls. The locking approach this requires is sufficiently
-+ * different from all other cluster lock types that we implement a
-+ * separate path to the "low-level" dlm calls. In particular:
++ * File: include/asm-blackfin/cplbinit.h
++ * Based on:
++ * Author:
+ *
-+ * - No optimization of lock levels is done - we take at exactly
-+ * what's been requested.
++ * Created:
++ * Description:
+ *
-+ * - No lock caching is employed. We immediately downconvert to
-+ * no-lock at unlock time. This also means flock locks never go on
-+ * the blocking list.
++ * Modified:
++ * Copyright 2004-2006 Analog Devices Inc.
+ *
-+ * - Since userspace can trivially deadlock itself with flock, we make
-+ * sure to allow cancellation of a misbehaving applications flock()
-+ * request.
++ * Bugs: Enter bugs at http://blackfin.uclinux.org/
+ *
-+ * - Access to any flock lockres doesn't require concurrency, so we
-+ * can simplify the code by requiring the caller to guarantee
-+ * serialization of dlmglue flock calls.
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, see the file COPYING, or write
++ * to the Free Software Foundation, Inc.,
++ * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
-+int ocfs2_file_lock(struct file *file, int ex, int trylock)
-+{
-+ int ret, level = ex ? LKM_EXMODE : LKM_PRMODE;
-+ unsigned int lkm_flags = trylock ? LKM_NOQUEUE : 0;
-+ unsigned long flags;
-+ struct ocfs2_file_private *fp = file->private_data;
-+ struct ocfs2_lock_res *lockres = &fp->fp_flock;
-+ struct ocfs2_super *osb = OCFS2_SB(file->f_mapping->host->i_sb);
-+ struct ocfs2_mask_waiter mw;
++#ifndef __ASM_BFIN_CPLB_MPU_H
++#define __ASM_BFIN_CPLB_MPU_H
+
-+ ocfs2_init_mask_waiter(&mw);
++struct cplb_entry {
++ unsigned long data, addr;
++};
+
-+ if ((lockres->l_flags & OCFS2_LOCK_BUSY) ||
-+ (lockres->l_level > LKM_NLMODE)) {
-+ mlog(ML_ERROR,
-+ "File lock \"%s\" has busy or locked state: flags: 0x%lx, "
-+ "level: %u\n", lockres->l_name, lockres->l_flags,
-+ lockres->l_level);
-+ return -EINVAL;
-+ }
++struct mem_region {
++ unsigned long start, end;
++ unsigned long dcplb_data;
++ unsigned long icplb_data;
++};
+
-+ spin_lock_irqsave(&lockres->l_lock, flags);
-+ if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED)) {
-+ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
++extern struct cplb_entry dcplb_tbl[MAX_CPLBS];
++extern struct cplb_entry icplb_tbl[MAX_CPLBS];
++extern int first_switched_icplb;
++extern int first_mask_dcplb;
++extern int first_switched_dcplb;
+
-+ /*
-+ * Get the lock at NLMODE to start - that way we
-+ * can cancel the upconvert request if need be.
-+ */
-+ ret = ocfs2_lock_create(osb, lockres, LKM_NLMODE, 0);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out;
- }
-- goto out;
++extern int nr_dcplb_miss, nr_icplb_miss, nr_icplb_supv_miss, nr_dcplb_prot;
++extern int nr_cplb_flush;
+
-+ ret = ocfs2_wait_for_mask(&mw);
-+ if (ret) {
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+ spin_lock_irqsave(&lockres->l_lock, flags);
- }
-
-- if (ocfs2_mount_local(osb))
-- goto out;
-+ lockres->l_action = OCFS2_AST_CONVERT;
-+ lkm_flags |= LKM_CONVERT;
-+ lockres->l_requested = level;
-+ lockres_or_flags(lockres, OCFS2_LOCK_BUSY);
-
-- lockres = &OCFS2_I(inode)->ip_data_lockres;
-+ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
-
-- level = write ? LKM_EXMODE : LKM_PRMODE;
-+ ret = dlmlock(osb->dlm, level, &lockres->l_lksb, lkm_flags,
-+ lockres->l_name, OCFS2_LOCK_ID_MAX_LEN - 1,
-+ ocfs2_locking_ast, lockres, ocfs2_blocking_ast);
-+ if (ret != DLM_NORMAL) {
-+ if (trylock && ret == DLM_NOTQUEUED)
-+ ret = -EAGAIN;
-+ else {
-+ ocfs2_log_dlm_error("dlmlock", ret, lockres);
-+ ret = -EINVAL;
-+ }
-
-- status = ocfs2_cluster_lock(OCFS2_SB(inode->i_sb), lockres, level,
-- 0, arg_flags);
-- if (status < 0 && status != -EAGAIN)
-- mlog_errno(status);
-+ ocfs2_recover_from_dlm_error(lockres, 1);
-+ lockres_remove_mask_waiter(lockres, &mw);
-+ goto out;
-+ }
++extern int page_mask_order;
++extern int page_mask_nelts;
+
-+ ret = ocfs2_wait_for_mask_interruptible(&mw, lockres);
-+ if (ret == -ERESTARTSYS) {
-+ /*
-+ * Userspace can cause deadlock itself with
-+ * flock(). Current behavior locally is to allow the
-+ * deadlock, but abort the system call if a signal is
-+ * received. We follow this example, otherwise a
-+ * poorly written program could sit in kernel until
-+ * reboot.
-+ *
-+ * Handling this is a bit more complicated for Ocfs2
-+ * though. We can't exit this function with an
-+ * outstanding lock request, so a cancel convert is
-+ * required. We intentionally overwrite 'ret' - if the
-+ * cancel fails and the lock was granted, it's easier
-+ * to just bubble success back up to the user.
-+ */
-+ ret = ocfs2_flock_handle_signal(lockres, level);
-+ }
-
- out:
-- mlog_exit(status);
-- return status;
++extern unsigned long *current_rwx_mask;
+
-+ mlog(0, "Lock: \"%s\" ex: %d, trylock: %d, returns: %d\n",
-+ lockres->l_name, ex, trylock, ret);
-+ return ret;
- }
-
--/* see ocfs2_meta_lock_with_page() */
--int ocfs2_data_lock_with_page(struct inode *inode,
-- int write,
-- struct page *page)
-+void ocfs2_file_unlock(struct file *file)
- {
- int ret;
-+ unsigned long flags;
-+ struct ocfs2_file_private *fp = file->private_data;
-+ struct ocfs2_lock_res *lockres = &fp->fp_flock;
-+ struct ocfs2_super *osb = OCFS2_SB(file->f_mapping->host->i_sb);
-+ struct ocfs2_mask_waiter mw;
-
-- ret = ocfs2_data_lock_full(inode, write, OCFS2_LOCK_NONBLOCK);
-- if (ret == -EAGAIN) {
-- unlock_page(page);
-- if (ocfs2_data_lock(inode, write) == 0)
-- ocfs2_data_unlock(inode, write);
-- ret = AOP_TRUNCATED_PAGE;
-+ ocfs2_init_mask_waiter(&mw);
++extern void flush_switched_cplbs(void);
++extern void set_mask_dcplbs(unsigned long *);
+
-+ if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED))
-+ return;
++extern void __noreturn panic_cplb_error(int seqstat, struct pt_regs *);
+
-+ if (lockres->l_level == LKM_NLMODE)
-+ return;
++#endif /* __ASM_BFIN_CPLB_MPU_H */
+diff --git a/include/asm-blackfin/cplb.h b/include/asm-blackfin/cplb.h
+index 06828d7..654375c 100644
+--- a/include/asm-blackfin/cplb.h
++++ b/include/asm-blackfin/cplb.h
+@@ -65,7 +65,11 @@
+ #define SIZE_1M 0x00100000 /* 1M */
+ #define SIZE_4M 0x00400000 /* 4M */
+
++#ifdef CONFIG_MPU
++#define MAX_CPLBS 16
++#else
+ #define MAX_CPLBS (16 * 2)
++#endif
+
+ #define ASYNC_MEMORY_CPLB_COVERAGE ((ASYNC_BANK0_SIZE + ASYNC_BANK1_SIZE + \
+ ASYNC_BANK2_SIZE + ASYNC_BANK3_SIZE) / SIZE_4M)
+diff --git a/include/asm-blackfin/cplbinit.h b/include/asm-blackfin/cplbinit.h
+index c4d0596..0eb1c1b 100644
+--- a/include/asm-blackfin/cplbinit.h
++++ b/include/asm-blackfin/cplbinit.h
+@@ -33,6 +33,12 @@
+ #include <asm/blackfin.h>
+ #include <asm/cplb.h>
+
++#ifdef CONFIG_MPU
+
-+ mlog(0, "Unlock: \"%s\" flags: 0x%lx, level: %d, act: %d\n",
-+ lockres->l_name, lockres->l_flags, lockres->l_level,
-+ lockres->l_action);
++#include <asm/cplb-mpu.h>
+
-+ spin_lock_irqsave(&lockres->l_lock, flags);
-+ /*
-+ * Fake a blocking ast for the downconvert code.
-+ */
-+ lockres_or_flags(lockres, OCFS2_LOCK_BLOCKED);
-+ lockres->l_blocking = LKM_EXMODE;
++#else
+
-+ ocfs2_prepare_downconvert(lockres, LKM_NLMODE);
-+ lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
-+ spin_unlock_irqrestore(&lockres->l_lock, flags);
+ #define INITIAL_T 0x1
+ #define SWITCH_T 0x2
+ #define I_CPLB 0x4
+@@ -79,6 +85,8 @@ extern u_long ipdt_swapcount_table[];
+ extern u_long dpdt_swapcount_table[];
+ #endif
+
++#endif /* CONFIG_MPU */
+
-+ ret = ocfs2_downconvert_lock(osb, lockres, LKM_NLMODE, 0);
-+ if (ret) {
-+ mlog_errno(ret);
-+ return;
- }
+ extern unsigned long reserved_mem_dcache_on;
+ extern unsigned long reserved_mem_icache_on;
-- return ret;
-+ ret = ocfs2_wait_for_mask(&mw);
-+ if (ret)
-+ mlog_errno(ret);
- }
+diff --git a/include/asm-blackfin/dma.h b/include/asm-blackfin/dma.h
+index b469505..5abaa2c 100644
+--- a/include/asm-blackfin/dma.h
++++ b/include/asm-blackfin/dma.h
+@@ -76,6 +76,9 @@ enum dma_chan_status {
+ #define INTR_ON_BUF 2
+ #define INTR_ON_ROW 3
--static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
-- struct ocfs2_lock_res *lockres)
-+static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
-+ struct ocfs2_lock_res *lockres)
- {
- int kick = 0;
++#define DMA_NOSYNC_KEEP_DMA_BUF 0
++#define DMA_SYNC_RESTART 1
++
+ struct dmasg {
+ unsigned long next_desc_addr;
+ unsigned long start_addr;
+@@ -157,7 +160,8 @@ void set_dma_y_count(unsigned int channel, unsigned short y_count);
+ void set_dma_y_modify(unsigned int channel, short y_modify);
+ void set_dma_config(unsigned int channel, unsigned short config);
+ unsigned short set_bfin_dma_config(char direction, char flow_mode,
+- char intr_mode, char dma_mode, char width);
++ char intr_mode, char dma_mode, char width,
++ char syncmode);
+ void set_dma_curr_addr(unsigned int channel, unsigned long addr);
- mlog_entry_void();
+ /* get curr status for polling */
+diff --git a/include/asm-blackfin/gpio.h b/include/asm-blackfin/gpio.h
+index 33ce98e..d0426c1 100644
+--- a/include/asm-blackfin/gpio.h
++++ b/include/asm-blackfin/gpio.h
+@@ -7,7 +7,7 @@
+ * Description:
+ *
+ * Modified:
+- * Copyright 2004-2006 Analog Devices Inc.
++ * Copyright 2004-2008 Analog Devices Inc.
+ *
+ * Bugs: Enter bugs at http://blackfin.uclinux.org/
+ *
+@@ -304,39 +304,39 @@
+ **************************************************************/
- /* If we know that another node is waiting on our lock, kick
-- * the vote thread * pre-emptively when we reach a release
-+ * the downconvert thread * pre-emptively when we reach a release
- * condition. */
- if (lockres->l_flags & OCFS2_LOCK_BLOCKED) {
- switch(lockres->l_blocking) {
-@@ -1398,27 +1589,7 @@ static void ocfs2_vote_on_unlock(struct ocfs2_super *osb,
- }
+ #ifndef BF548_FAMILY
+-void set_gpio_dir(unsigned short, unsigned short);
+-void set_gpio_inen(unsigned short, unsigned short);
+-void set_gpio_polar(unsigned short, unsigned short);
+-void set_gpio_edge(unsigned short, unsigned short);
+-void set_gpio_both(unsigned short, unsigned short);
+-void set_gpio_data(unsigned short, unsigned short);
+-void set_gpio_maska(unsigned short, unsigned short);
+-void set_gpio_maskb(unsigned short, unsigned short);
+-void set_gpio_toggle(unsigned short);
+-void set_gpiop_dir(unsigned short, unsigned short);
+-void set_gpiop_inen(unsigned short, unsigned short);
+-void set_gpiop_polar(unsigned short, unsigned short);
+-void set_gpiop_edge(unsigned short, unsigned short);
+-void set_gpiop_both(unsigned short, unsigned short);
+-void set_gpiop_data(unsigned short, unsigned short);
+-void set_gpiop_maska(unsigned short, unsigned short);
+-void set_gpiop_maskb(unsigned short, unsigned short);
+-unsigned short get_gpio_dir(unsigned short);
+-unsigned short get_gpio_inen(unsigned short);
+-unsigned short get_gpio_polar(unsigned short);
+-unsigned short get_gpio_edge(unsigned short);
+-unsigned short get_gpio_both(unsigned short);
+-unsigned short get_gpio_maska(unsigned short);
+-unsigned short get_gpio_maskb(unsigned short);
+-unsigned short get_gpio_data(unsigned short);
+-unsigned short get_gpiop_dir(unsigned short);
+-unsigned short get_gpiop_inen(unsigned short);
+-unsigned short get_gpiop_polar(unsigned short);
+-unsigned short get_gpiop_edge(unsigned short);
+-unsigned short get_gpiop_both(unsigned short);
+-unsigned short get_gpiop_maska(unsigned short);
+-unsigned short get_gpiop_maskb(unsigned short);
+-unsigned short get_gpiop_data(unsigned short);
++void set_gpio_dir(unsigned, unsigned short);
++void set_gpio_inen(unsigned, unsigned short);
++void set_gpio_polar(unsigned, unsigned short);
++void set_gpio_edge(unsigned, unsigned short);
++void set_gpio_both(unsigned, unsigned short);
++void set_gpio_data(unsigned, unsigned short);
++void set_gpio_maska(unsigned, unsigned short);
++void set_gpio_maskb(unsigned, unsigned short);
++void set_gpio_toggle(unsigned);
++void set_gpiop_dir(unsigned, unsigned short);
++void set_gpiop_inen(unsigned, unsigned short);
++void set_gpiop_polar(unsigned, unsigned short);
++void set_gpiop_edge(unsigned, unsigned short);
++void set_gpiop_both(unsigned, unsigned short);
++void set_gpiop_data(unsigned, unsigned short);
++void set_gpiop_maska(unsigned, unsigned short);
++void set_gpiop_maskb(unsigned, unsigned short);
++unsigned short get_gpio_dir(unsigned);
++unsigned short get_gpio_inen(unsigned);
++unsigned short get_gpio_polar(unsigned);
++unsigned short get_gpio_edge(unsigned);
++unsigned short get_gpio_both(unsigned);
++unsigned short get_gpio_maska(unsigned);
++unsigned short get_gpio_maskb(unsigned);
++unsigned short get_gpio_data(unsigned);
++unsigned short get_gpiop_dir(unsigned);
++unsigned short get_gpiop_inen(unsigned);
++unsigned short get_gpiop_polar(unsigned);
++unsigned short get_gpiop_edge(unsigned);
++unsigned short get_gpiop_both(unsigned);
++unsigned short get_gpiop_maska(unsigned);
++unsigned short get_gpiop_maskb(unsigned);
++unsigned short get_gpiop_data(unsigned);
- if (kick)
-- ocfs2_kick_vote_thread(osb);
--
-- mlog_exit_void();
--}
--
--void ocfs2_data_unlock(struct inode *inode,
-- int write)
--{
-- int level = write ? LKM_EXMODE : LKM_PRMODE;
-- struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_data_lockres;
-- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
--
-- mlog_entry_void();
--
-- mlog(0, "inode %llu drop %s DATA lock\n",
-- (unsigned long long)OCFS2_I(inode)->ip_blkno,
-- write ? "EXMODE" : "PRMODE");
--
-- if (!ocfs2_is_hard_readonly(OCFS2_SB(inode->i_sb)) &&
-- !ocfs2_mount_local(osb))
-- ocfs2_cluster_unlock(OCFS2_SB(inode->i_sb), lockres, level);
-+ ocfs2_wake_downconvert_thread(osb);
+ struct gpio_port_t {
+ unsigned short data;
+@@ -382,8 +382,8 @@ struct gpio_port_t {
+ #define PM_WAKE_LOW 0x8
+ #define PM_WAKE_BOTH_EDGES (PM_WAKE_RISING | PM_WAKE_FALLING)
- mlog_exit_void();
- }
-@@ -1442,11 +1613,11 @@ static u64 ocfs2_pack_timespec(struct timespec *spec)
+-int gpio_pm_wakeup_request(unsigned short gpio, unsigned char type);
+-void gpio_pm_wakeup_free(unsigned short gpio);
++int gpio_pm_wakeup_request(unsigned gpio, unsigned char type);
++void gpio_pm_wakeup_free(unsigned gpio);
+ unsigned int gpio_pm_setup(void);
+ void gpio_pm_restore(void);
- /* Call this with the lockres locked. I am reasonably sure we don't
- * need ip_lock in this function as anyone who would be changing those
-- * values is supposed to be blocked in ocfs2_meta_lock right now. */
-+ * values is supposed to be blocked in ocfs2_inode_lock right now. */
- static void __ocfs2_stuff_meta_lvb(struct inode *inode)
- {
- struct ocfs2_inode_info *oi = OCFS2_I(inode);
-- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
-+ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
- struct ocfs2_meta_lvb *lvb;
+@@ -426,19 +426,19 @@ struct gpio_port_s {
+ * MODIFICATION HISTORY :
+ **************************************************************/
- mlog_entry_void();
-@@ -1496,7 +1667,7 @@ static void ocfs2_unpack_timespec(struct timespec *spec,
- static void ocfs2_refresh_inode_from_lvb(struct inode *inode)
- {
- struct ocfs2_inode_info *oi = OCFS2_I(inode);
-- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
-+ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
- struct ocfs2_meta_lvb *lvb;
+-int gpio_request(unsigned short, const char *);
+-void gpio_free(unsigned short);
++int gpio_request(unsigned, const char *);
++void gpio_free(unsigned);
- mlog_entry_void();
-@@ -1604,12 +1775,12 @@ static inline void ocfs2_complete_lock_res_refresh(struct ocfs2_lock_res *lockre
- }
+-void gpio_set_value(unsigned short gpio, unsigned short arg);
+-unsigned short gpio_get_value(unsigned short gpio);
++void gpio_set_value(unsigned gpio, int arg);
++int gpio_get_value(unsigned gpio);
- /* may or may not return a bh if it went to disk. */
--static int ocfs2_meta_lock_update(struct inode *inode,
-+static int ocfs2_inode_lock_update(struct inode *inode,
- struct buffer_head **bh)
- {
- int status = 0;
- struct ocfs2_inode_info *oi = OCFS2_I(inode);
-- struct ocfs2_lock_res *lockres = &oi->ip_meta_lockres;
-+ struct ocfs2_lock_res *lockres = &oi->ip_inode_lockres;
- struct ocfs2_dinode *fe;
- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ #ifndef BF548_FAMILY
+ #define gpio_get_value(gpio) get_gpio_data(gpio)
+ #define gpio_set_value(gpio, value) set_gpio_data(gpio, value)
+ #endif
-@@ -1721,7 +1892,7 @@ static int ocfs2_assign_bh(struct inode *inode,
- * returns < 0 error if the callback will never be called, otherwise
- * the result of the lock will be communicated via the callback.
- */
--int ocfs2_meta_lock_full(struct inode *inode,
-+int ocfs2_inode_lock_full(struct inode *inode,
- struct buffer_head **ret_bh,
- int ex,
- int arg_flags)
-@@ -1756,7 +1927,7 @@ int ocfs2_meta_lock_full(struct inode *inode,
- wait_event(osb->recovery_event,
- ocfs2_node_map_is_empty(osb, &osb->recovery_map));
+-void gpio_direction_input(unsigned short gpio);
+-void gpio_direction_output(unsigned short gpio);
++int gpio_direction_input(unsigned gpio);
++int gpio_direction_output(unsigned gpio, int value);
-- lockres = &OCFS2_I(inode)->ip_meta_lockres;
-+ lockres = &OCFS2_I(inode)->ip_inode_lockres;
- level = ex ? LKM_EXMODE : LKM_PRMODE;
- dlm_flags = 0;
- if (arg_flags & OCFS2_META_LOCK_NOQUEUE)
-@@ -1795,11 +1966,11 @@ local:
- }
+ #include <asm-generic/gpio.h> /* cansleep wrappers */
+ #include <asm/irq.h>
+diff --git a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
+index 0b867e6..15dbc21 100644
+--- a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
+@@ -146,7 +146,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
- /* This is fun. The caller may want a bh back, or it may
-- * not. ocfs2_meta_lock_update definitely wants one in, but
-+ * not. ocfs2_inode_lock_update definitely wants one in, but
- * may or may not read one, depending on what's in the
- * LVB. The result of all of this is that we've *only* gone to
- * disk if we have to, so the complexity is worthwhile. */
-- status = ocfs2_meta_lock_update(inode, &local_bh);
-+ status = ocfs2_inode_lock_update(inode, &local_bh);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -1821,7 +1992,7 @@ bail:
- *ret_bh = NULL;
- }
- if (acquired)
-- ocfs2_meta_unlock(inode, ex);
-+ ocfs2_inode_unlock(inode, ex);
+ if (uart->rts_pin >= 0) {
+ gpio_request(uart->rts_pin, DRIVER_NAME);
+- gpio_direction_output(uart->rts_pin);
++ gpio_direction_output(uart->rts_pin, 0);
}
-
- if (local_bh)
-@@ -1832,19 +2003,20 @@ bail:
+ #endif
}
+diff --git a/include/asm-blackfin/mach-bf527/portmux.h b/include/asm-blackfin/mach-bf527/portmux.h
+index dcf001a..ae4d205 100644
+--- a/include/asm-blackfin/mach-bf527/portmux.h
++++ b/include/asm-blackfin/mach-bf527/portmux.h
+@@ -1,6 +1,8 @@
+ #ifndef _MACH_PORTMUX_H_
+ #define _MACH_PORTMUX_H_
- /*
-- * This is working around a lock inversion between tasks acquiring DLM locks
-- * while holding a page lock and the vote thread which blocks dlm lock acquiry
-- * while acquiring page locks.
-+ * This is working around a lock inversion between tasks acquiring DLM
-+ * locks while holding a page lock and the downconvert thread which
-+ * blocks dlm lock acquiry while acquiring page locks.
- *
- * ** These _with_page variantes are only intended to be called from aop
- * methods that hold page locks and return a very specific *positive* error
- * code that aop methods pass up to the VFS -- test for errors with != 0. **
- *
-- * The DLM is called such that it returns -EAGAIN if it would have blocked
-- * waiting for the vote thread. In that case we unlock our page so the vote
-- * thread can make progress. Once we've done this we have to return
-- * AOP_TRUNCATED_PAGE so the aop method that called us can bubble that back up
-- * into the VFS who will then immediately retry the aop call.
-+ * The DLM is called such that it returns -EAGAIN if it would have
-+ * blocked waiting for the downconvert thread. In that case we unlock
-+ * our page so the downconvert thread can make progress. Once we've
-+ * done this we have to return AOP_TRUNCATED_PAGE so the aop method
-+ * that called us can bubble that back up into the VFS who will then
-+ * immediately retry the aop call.
- *
- * We do a blocking lock and immediate unlock before returning, though, so that
- * the lock has a great chance of being cached on this node by the time the VFS
-@@ -1852,32 +2024,32 @@ bail:
- * ping locks back and forth, but that's a risk we're willing to take to avoid
- * the lock inversion simply.
++#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++
+ #define P_PPI0_D0 (P_DEFINED | P_IDENT(GPIO_PF0) | P_FUNCT(0))
+ #define P_PPI0_D1 (P_DEFINED | P_IDENT(GPIO_PF1) | P_FUNCT(0))
+ #define P_PPI0_D2 (P_DEFINED | P_IDENT(GPIO_PF2) | P_FUNCT(0))
+diff --git a/include/asm-blackfin/mach-bf533/anomaly.h b/include/asm-blackfin/mach-bf533/anomaly.h
+index f36ff5a..98209d4 100644
+--- a/include/asm-blackfin/mach-bf533/anomaly.h
++++ b/include/asm-blackfin/mach-bf533/anomaly.h
+@@ -7,9 +7,7 @@
*/
--int ocfs2_meta_lock_with_page(struct inode *inode,
-+int ocfs2_inode_lock_with_page(struct inode *inode,
- struct buffer_head **ret_bh,
- int ex,
- struct page *page)
- {
- int ret;
-
-- ret = ocfs2_meta_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
-+ ret = ocfs2_inode_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
- if (ret == -EAGAIN) {
- unlock_page(page);
-- if (ocfs2_meta_lock(inode, ret_bh, ex) == 0)
-- ocfs2_meta_unlock(inode, ex);
-+ if (ocfs2_inode_lock(inode, ret_bh, ex) == 0)
-+ ocfs2_inode_unlock(inode, ex);
- ret = AOP_TRUNCATED_PAGE;
- }
-
- return ret;
- }
-
--int ocfs2_meta_lock_atime(struct inode *inode,
-+int ocfs2_inode_lock_atime(struct inode *inode,
- struct vfsmount *vfsmnt,
- int *level)
- {
- int ret;
- mlog_entry_void();
-- ret = ocfs2_meta_lock(inode, NULL, 0);
-+ ret = ocfs2_inode_lock(inode, NULL, 0);
- if (ret < 0) {
- mlog_errno(ret);
- return ret;
-@@ -1890,8 +2062,8 @@ int ocfs2_meta_lock_atime(struct inode *inode,
- if (ocfs2_should_update_atime(inode, vfsmnt)) {
- struct buffer_head *bh = NULL;
+ /* This file shoule be up to date with:
+- * - Revision X, March 23, 2007; ADSP-BF533 Blackfin Processor Anomaly List
+- * - Revision AB, March 23, 2007; ADSP-BF532 Blackfin Processor Anomaly List
+- * - Revision W, March 23, 2007; ADSP-BF531 Blackfin Processor Anomaly List
++ * - Revision B, 12/10/2007; ADSP-BF531/BF532/BF533 Blackfin Processor Anomaly List
+ */
-- ocfs2_meta_unlock(inode, 0);
-- ret = ocfs2_meta_lock(inode, &bh, 1);
-+ ocfs2_inode_unlock(inode, 0);
-+ ret = ocfs2_inode_lock(inode, &bh, 1);
- if (ret < 0) {
- mlog_errno(ret);
- return ret;
-@@ -1908,11 +2080,11 @@ int ocfs2_meta_lock_atime(struct inode *inode,
- return ret;
- }
+ #ifndef _MACH_ANOMALY_H_
+@@ -17,7 +15,7 @@
--void ocfs2_meta_unlock(struct inode *inode,
-+void ocfs2_inode_unlock(struct inode *inode,
- int ex)
- {
- int level = ex ? LKM_EXMODE : LKM_PRMODE;
-- struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_meta_lockres;
-+ struct ocfs2_lock_res *lockres = &OCFS2_I(inode)->ip_inode_lockres;
- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ /* We do not support 0.1 or 0.2 silicon - sorry */
+ #if __SILICON_REVISION__ < 3
+-# error Kernel will not work on BF533 silicon version 0.0, 0.1, or 0.2
++# error will not work on BF533 silicon version 0.0, 0.1, or 0.2
+ #endif
- mlog_entry_void();
-@@ -2320,11 +2492,11 @@ int ocfs2_dlm_init(struct ocfs2_super *osb)
- goto bail;
- }
+ #if defined(__ADSPBF531__)
+@@ -251,6 +249,12 @@
+ #define ANOMALY_05000192 (__SILICON_REVISION__ < 3)
+ /* Internal Voltage Regulator may not start up */
+ #define ANOMALY_05000206 (__SILICON_REVISION__ < 3)
++/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
++#define ANOMALY_05000357 (1)
++/* PPI Underflow Error Goes Undetected in ITU-R 656 Mode */
++#define ANOMALY_05000366 (1)
++/* Possible RETS Register Corruption when Subroutine Is under 5 Cycles in Duration */
++#define ANOMALY_05000371 (1)
-- /* launch vote thread */
-- osb->vote_task = kthread_run(ocfs2_vote_thread, osb, "ocfs2vote");
-- if (IS_ERR(osb->vote_task)) {
-- status = PTR_ERR(osb->vote_task);
-- osb->vote_task = NULL;
-+ /* launch downconvert thread */
-+ osb->dc_task = kthread_run(ocfs2_downconvert_thread, osb, "ocfs2dc");
-+ if (IS_ERR(osb->dc_task)) {
-+ status = PTR_ERR(osb->dc_task);
-+ osb->dc_task = NULL;
- mlog_errno(status);
- goto bail;
+ /* Anomalies that don't exist on this proc */
+ #define ANOMALY_05000266 (0)
+diff --git a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
+index 69b9f8e..7871d43 100644
+--- a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
+@@ -111,7 +111,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
}
-@@ -2353,8 +2525,8 @@ local:
- bail:
- if (status < 0) {
- ocfs2_dlm_shutdown_debug(osb);
-- if (osb->vote_task)
-- kthread_stop(osb->vote_task);
-+ if (osb->dc_task)
-+ kthread_stop(osb->dc_task);
+ if (uart->rts_pin >= 0) {
+ gpio_request(uart->rts_pin, DRIVER_NAME);
+- gpio_direction_input(uart->rts_pin);
++ gpio_direction_input(uart->rts_pin, 0);
}
+ #endif
+ }
+diff --git a/include/asm-blackfin/mach-bf533/portmux.h b/include/asm-blackfin/mach-bf533/portmux.h
+index 137f488..685a265 100644
+--- a/include/asm-blackfin/mach-bf533/portmux.h
++++ b/include/asm-blackfin/mach-bf533/portmux.h
+@@ -1,6 +1,8 @@
+ #ifndef _MACH_PORTMUX_H_
+ #define _MACH_PORTMUX_H_
- mlog_exit(status);
-@@ -2369,9 +2541,9 @@ void ocfs2_dlm_shutdown(struct ocfs2_super *osb)
++#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++
+ #define P_PPI0_CLK (P_DONTCARE)
+ #define P_PPI0_FS1 (P_DONTCARE)
+ #define P_PPI0_FS2 (P_DONTCARE)
+diff --git a/include/asm-blackfin/mach-bf537/anomaly.h b/include/asm-blackfin/mach-bf537/anomaly.h
+index 2b66ecf..746a794 100644
+--- a/include/asm-blackfin/mach-bf537/anomaly.h
++++ b/include/asm-blackfin/mach-bf537/anomaly.h
+@@ -7,9 +7,7 @@
+ */
- ocfs2_drop_osb_locks(osb);
+ /* This file shoule be up to date with:
+- * - Revision M, March 13, 2007; ADSP-BF537 Blackfin Processor Anomaly List
+- * - Revision L, March 13, 2007; ADSP-BF536 Blackfin Processor Anomaly List
+- * - Revision M, March 13, 2007; ADSP-BF534 Blackfin Processor Anomaly List
++ * - Revision A, 09/04/2007; ADSP-BF534/ADSP-BF536/ADSP-BF537 Blackfin Processor Anomaly List
+ */
-- if (osb->vote_task) {
-- kthread_stop(osb->vote_task);
-- osb->vote_task = NULL;
-+ if (osb->dc_task) {
-+ kthread_stop(osb->dc_task);
-+ osb->dc_task = NULL;
- }
+ #ifndef _MACH_ANOMALY_H_
+@@ -17,7 +15,7 @@
- ocfs2_lock_res_free(&osb->osb_super_lockres);
-@@ -2527,7 +2699,7 @@ out:
+ /* We do not support 0.1 silicon - sorry */
+ #if __SILICON_REVISION__ < 2
+-# error Kernel will not work on BF537 silicon version 0.0 or 0.1
++# error will not work on BF537 silicon version 0.0 or 0.1
+ #endif
- /* Mark the lockres as being dropped. It will no longer be
- * queued if blocking, but we still may have to wait on it
-- * being dequeued from the vote thread before we can consider
-+ * being dequeued from the downconvert thread before we can consider
- * it safe to drop.
- *
- * You can *not* attempt to call cluster_lock on this lockres anymore. */
-@@ -2590,14 +2762,7 @@ int ocfs2_drop_inode_locks(struct inode *inode)
- status = err;
+ #if defined(__ADSPBF534__)
+@@ -44,6 +42,8 @@
+ #define ANOMALY_05000122 (1)
+ /* Killed 32-bit MMR write leads to next system MMR access thinking it should be 32-bit */
+ #define ANOMALY_05000157 (__SILICON_REVISION__ < 2)
++/* Turning SPORTs on while External Frame Sync Is Active May Corrupt Data */
++#define ANOMALY_05000167 (1)
+ /* PPI_DELAY not functional in PPI modes with 0 frame syncs */
+ #define ANOMALY_05000180 (1)
+ /* Instruction Cache Is Not Functional */
+@@ -130,6 +130,12 @@
+ #define ANOMALY_05000321 (__SILICON_REVISION__ < 3)
+ /* EMAC RMII mode at 10-Base-T speed: RX frames not received properly */
+ #define ANOMALY_05000322 (1)
++/* Ethernet MAC MDIO Reads Do Not Meet IEEE Specification */
++#define ANOMALY_05000341 (__SILICON_REVISION__ >= 3)
++/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
++#define ANOMALY_05000357 (1)
++/* DMAs that Go Urgent during Tight Core Writes to External Memory Are Blocked */
++#define ANOMALY_05000359 (1)
- err = ocfs2_drop_lock(OCFS2_SB(inode->i_sb),
-- &OCFS2_I(inode)->ip_data_lockres);
-- if (err < 0)
-- mlog_errno(err);
-- if (err < 0 && !status)
-- status = err;
--
-- err = ocfs2_drop_lock(OCFS2_SB(inode->i_sb),
-- &OCFS2_I(inode)->ip_meta_lockres);
-+ &OCFS2_I(inode)->ip_inode_lockres);
- if (err < 0)
- mlog_errno(err);
- if (err < 0 && !status)
-@@ -2850,6 +3015,9 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
- inode = ocfs2_lock_res_inode(lockres);
- mapping = inode->i_mapping;
+ /* Anomalies that don't exist on this proc */
+ #define ANOMALY_05000125 (0)
+diff --git a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
+index 6fb328f..86e45c3 100644
+--- a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
+@@ -146,7 +146,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
-+ if (S_ISREG(inode->i_mode))
-+ goto out;
-+
- /*
- * We need this before the filemap_fdatawrite() so that it can
- * transfer the dirty bit from the PTE to the
-@@ -2875,6 +3043,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
- filemap_fdatawait(mapping);
+ if (uart->rts_pin >= 0) {
+ gpio_request(uart->rts_pin, DRIVER_NAME);
+- gpio_direction_output(uart->rts_pin);
++ gpio_direction_output(uart->rts_pin, 0);
}
-
-+out:
- return UNBLOCK_CONTINUE;
+ #endif
}
+diff --git a/include/asm-blackfin/mach-bf537/portmux.h b/include/asm-blackfin/mach-bf537/portmux.h
+index 5a3f7d3..78fee6e 100644
+--- a/include/asm-blackfin/mach-bf537/portmux.h
++++ b/include/asm-blackfin/mach-bf537/portmux.h
+@@ -1,6 +1,8 @@
+ #ifndef _MACH_PORTMUX_H_
+ #define _MACH_PORTMUX_H_
-@@ -2903,7 +3072,7 @@ static void ocfs2_set_meta_lvb(struct ocfs2_lock_res *lockres)
++#define MAX_RESOURCES (MAX_BLACKFIN_GPIOS + GPIO_BANKSIZE) /* We additionally handle PORTJ */
++
+ #define P_UART0_TX (P_DEFINED | P_IDENT(GPIO_PF0) | P_FUNCT(0))
+ #define P_UART0_RX (P_DEFINED | P_IDENT(GPIO_PF1) | P_FUNCT(0))
+ #define P_UART1_TX (P_DEFINED | P_IDENT(GPIO_PF2) | P_FUNCT(0))
+diff --git a/include/asm-blackfin/mach-bf548/anomaly.h b/include/asm-blackfin/mach-bf548/anomaly.h
+index c5b6375..850dc12 100644
+--- a/include/asm-blackfin/mach-bf548/anomaly.h
++++ b/include/asm-blackfin/mach-bf548/anomaly.h
+@@ -7,7 +7,7 @@
+ */
- /*
- * Does the final reference drop on our dentry lock. Right now this
-- * happens in the vote thread, but we could choose to simplify the
-+ * happens in the downconvert thread, but we could choose to simplify the
- * dlmglue API and push these off to the ocfs2_wq in the future.
+ /* This file shoule be up to date with:
+- * - Revision C, July 16, 2007; ADSP-BF549 Silicon Anomaly List
++ * - Revision E, 11/28/2007; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List
*/
- static void ocfs2_dentry_post_unlock(struct ocfs2_super *osb,
-@@ -3042,7 +3211,7 @@ void ocfs2_process_blocked_lock(struct ocfs2_super *osb,
- mlog(0, "lockres %s blocked.\n", lockres->l_name);
- /* Detect whether a lock has been marked as going away while
-- * the vote thread was processing other things. A lock can
-+ * the downconvert thread was processing other things. A lock can
- * still be marked with OCFS2_LOCK_FREEING after this check,
- * but short circuiting here will still save us some
- * performance. */
-@@ -3091,13 +3260,104 @@ static void ocfs2_schedule_blocked_lock(struct ocfs2_super *osb,
+ #ifndef _MACH_ANOMALY_H_
+@@ -26,47 +26,59 @@
+ /* Certain Data Cache Writethrough Modes Fail for Vddint <= 0.9V */
+ #define ANOMALY_05000272 (1)
+ /* False Hardware Error Exception when ISR context is not restored */
+-#define ANOMALY_05000281 (1)
++#define ANOMALY_05000281 (__SILICON_REVISION__ < 1)
+ /* SSYNCs After Writes To CAN/DMA MMR Registers Are Not Always Handled Correctly */
+-#define ANOMALY_05000304 (1)
++#define ANOMALY_05000304 (__SILICON_REVISION__ < 1)
+ /* False Hardware Errors Caused by Fetches at the Boundary of Reserved Memory */
+ #define ANOMALY_05000310 (1)
+ /* Errors When SSYNC, CSYNC, or Loads to LT, LB and LC Registers Are Interrupted */
+-#define ANOMALY_05000312 (1)
++#define ANOMALY_05000312 (__SILICON_REVISION__ < 1)
+ /* TWI Slave Boot Mode Is Not Functional */
+-#define ANOMALY_05000324 (1)
++#define ANOMALY_05000324 (__SILICON_REVISION__ < 1)
+ /* External FIFO Boot Mode Is Not Functional */
+-#define ANOMALY_05000325 (1)
++#define ANOMALY_05000325 (__SILICON_REVISION__ < 1)
+ /* Data Lost When Core and DMA Accesses Are Made to the USB FIFO Simultaneously */
+-#define ANOMALY_05000327 (1)
++#define ANOMALY_05000327 (__SILICON_REVISION__ < 1)
+ /* Incorrect Access of OTP_STATUS During otp_write() Function */
+-#define ANOMALY_05000328 (1)
++#define ANOMALY_05000328 (__SILICON_REVISION__ < 1)
+ /* Synchronous Burst Flash Boot Mode Is Not Functional */
+-#define ANOMALY_05000329 (1)
++#define ANOMALY_05000329 (__SILICON_REVISION__ < 1)
+ /* Host DMA Boot Mode Is Not Functional */
+-#define ANOMALY_05000330 (1)
++#define ANOMALY_05000330 (__SILICON_REVISION__ < 1)
+ /* Inadequate Timing Margins on DDR DQS to DQ and DQM Skew */
+-#define ANOMALY_05000334 (1)
++#define ANOMALY_05000334 (__SILICON_REVISION__ < 1)
+ /* Inadequate Rotary Debounce Logic Duration */
+-#define ANOMALY_05000335 (1)
++#define ANOMALY_05000335 (__SILICON_REVISION__ < 1)
+ /* Phantom Interrupt Occurs After First Configuration of Host DMA Port */
+-#define ANOMALY_05000336 (1)
++#define ANOMALY_05000336 (__SILICON_REVISION__ < 1)
+ /* Disallowed Configuration Prevents Subsequent Allowed Configuration on Host DMA Port */
+-#define ANOMALY_05000337 (1)
++#define ANOMALY_05000337 (__SILICON_REVISION__ < 1)
+ /* Slave-Mode SPI0 MISO Failure With CPHA = 0 */
+-#define ANOMALY_05000338 (1)
++#define ANOMALY_05000338 (__SILICON_REVISION__ < 1)
+ /* If Memory Reads Are Enabled on SDH or HOSTDP, Other DMAC1 Peripherals Cannot Read */
+-#define ANOMALY_05000340 (1)
++#define ANOMALY_05000340 (__SILICON_REVISION__ < 1)
+ /* Boot Host Wait (HWAIT) and Boot Host Wait Alternate (HWAITA) Signals Are Swapped */
+-#define ANOMALY_05000344 (1)
++#define ANOMALY_05000344 (__SILICON_REVISION__ < 1)
+ /* USB Calibration Value Is Not Intialized */
+-#define ANOMALY_05000346 (1)
++#define ANOMALY_05000346 (__SILICON_REVISION__ < 1)
+ /* Boot ROM Kernel Incorrectly Alters Reset Value of USB Register */
+-#define ANOMALY_05000347 (1)
++#define ANOMALY_05000347 (__SILICON_REVISION__ < 1)
+ /* Data Lost when Core Reads SDH Data FIFO */
+-#define ANOMALY_05000349 (1)
++#define ANOMALY_05000349 (__SILICON_REVISION__ < 1)
+ /* PLL Status Register Is Inaccurate */
+-#define ANOMALY_05000351 (1)
++#define ANOMALY_05000351 (__SILICON_REVISION__ < 1)
++/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
++#define ANOMALY_05000357 (1)
++/* External Memory Read Access Hangs Core With PLL Bypass */
++#define ANOMALY_05000360 (1)
++/* DMAs that Go Urgent during Tight Core Writes to External Memory Are Blocked */
++#define ANOMALY_05000365 (1)
++/* Addressing Conflict between Boot ROM and Asynchronous Memory */
++#define ANOMALY_05000369 (1)
++/* Mobile DDR Operation Not Functional */
++#define ANOMALY_05000377 (1)
++/* Security/Authentication Speedpath Causes Authentication To Fail To Initiate */
++#define ANOMALY_05000378 (1)
- lockres_or_flags(lockres, OCFS2_LOCK_QUEUED);
+ /* Anomalies that don't exist on this proc */
+ #define ANOMALY_05000125 (0)
+diff --git a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
+index f21a162..3770aa3 100644
+--- a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
+@@ -186,7 +186,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
-- spin_lock(&osb->vote_task_lock);
-+ spin_lock(&osb->dc_task_lock);
- if (list_empty(&lockres->l_blocked_list)) {
- list_add_tail(&lockres->l_blocked_list,
- &osb->blocked_lock_list);
- osb->blocked_lock_count++;
+ if (uart->rts_pin >= 0) {
+ gpio_request(uart->rts_pin, DRIVER_NAME);
+- gpio_direction_output(uart->rts_pin);
++ gpio_direction_output(uart->rts_pin, 0);
}
-- spin_unlock(&osb->vote_task_lock);
-+ spin_unlock(&osb->dc_task_lock);
-+
-+ mlog_exit_void();
-+}
-+
-+static void ocfs2_downconvert_thread_do_work(struct ocfs2_super *osb)
-+{
-+ unsigned long processed;
-+ struct ocfs2_lock_res *lockres;
-+
-+ mlog_entry_void();
-+
-+ spin_lock(&osb->dc_task_lock);
-+ /* grab this early so we know to try again if a state change and
-+ * wake happens part-way through our work */
-+ osb->dc_work_sequence = osb->dc_wake_sequence;
-+
-+ processed = osb->blocked_lock_count;
-+ while (processed) {
-+ BUG_ON(list_empty(&osb->blocked_lock_list));
-+
-+ lockres = list_entry(osb->blocked_lock_list.next,
-+ struct ocfs2_lock_res, l_blocked_list);
-+ list_del_init(&lockres->l_blocked_list);
-+ osb->blocked_lock_count--;
-+ spin_unlock(&osb->dc_task_lock);
-+
-+ BUG_ON(!processed);
-+ processed--;
-+
-+ ocfs2_process_blocked_lock(osb, lockres);
-+
-+ spin_lock(&osb->dc_task_lock);
-+ }
-+ spin_unlock(&osb->dc_task_lock);
-
- mlog_exit_void();
+ #endif
}
-+
-+static int ocfs2_downconvert_thread_lists_empty(struct ocfs2_super *osb)
-+{
-+ int empty = 0;
-+
-+ spin_lock(&osb->dc_task_lock);
-+ if (list_empty(&osb->blocked_lock_list))
-+ empty = 1;
-+
-+ spin_unlock(&osb->dc_task_lock);
-+ return empty;
-+}
-+
-+static int ocfs2_downconvert_thread_should_wake(struct ocfs2_super *osb)
-+{
-+ int should_wake = 0;
-+
-+ spin_lock(&osb->dc_task_lock);
-+ if (osb->dc_work_sequence != osb->dc_wake_sequence)
-+ should_wake = 1;
-+ spin_unlock(&osb->dc_task_lock);
-+
-+ return should_wake;
-+}
-+
-+int ocfs2_downconvert_thread(void *arg)
-+{
-+ int status = 0;
-+ struct ocfs2_super *osb = arg;
-+
-+ /* only quit once we've been asked to stop and there is no more
-+ * work available */
-+ while (!(kthread_should_stop() &&
-+ ocfs2_downconvert_thread_lists_empty(osb))) {
-+
-+ wait_event_interruptible(osb->dc_event,
-+ ocfs2_downconvert_thread_should_wake(osb) ||
-+ kthread_should_stop());
-+
-+ mlog(0, "downconvert_thread: awoken\n");
-+
-+ ocfs2_downconvert_thread_do_work(osb);
-+ }
-+
-+ osb->dc_task = NULL;
-+ return status;
-+}
-+
-+void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb)
-+{
-+ spin_lock(&osb->dc_task_lock);
-+ /* make sure the voting thread gets a swipe at whatever changes
-+ * the caller may have made to the voting state */
-+ osb->dc_wake_sequence++;
-+ spin_unlock(&osb->dc_task_lock);
-+ wake_up(&osb->dc_event);
-+}
-diff --git a/fs/ocfs2/dlmglue.h b/fs/ocfs2/dlmglue.h
-index 87a785e..5f17243 100644
---- a/fs/ocfs2/dlmglue.h
-+++ b/fs/ocfs2/dlmglue.h
-@@ -49,12 +49,12 @@ struct ocfs2_meta_lvb {
- __be32 lvb_reserved2;
- };
+diff --git a/include/asm-blackfin/mach-bf548/cdefBF54x_base.h b/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
+index aefab3f..19ddcd8 100644
+--- a/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
++++ b/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
+@@ -244,39 +244,6 @@ static __inline__ void bfin_write_VR_CTL(unsigned int val)
+ #define bfin_read_TWI0_RCV_DATA16() bfin_read16(TWI0_RCV_DATA16)
+ #define bfin_write_TWI0_RCV_DATA16(val) bfin_write16(TWI0_RCV_DATA16, val)
--/* ocfs2_meta_lock_full() and ocfs2_data_lock_full() 'arg_flags' flags */
-+/* ocfs2_inode_lock_full() 'arg_flags' flags */
- /* don't wait on recovery. */
- #define OCFS2_META_LOCK_RECOVERY (0x01)
- /* Instruct the dlm not to queue ourselves on the other node. */
- #define OCFS2_META_LOCK_NOQUEUE (0x02)
--/* don't block waiting for the vote thread, instead return -EAGAIN */
-+/* don't block waiting for the downconvert thread, instead return -EAGAIN */
- #define OCFS2_LOCK_NONBLOCK (0x04)
+-#define bfin_read_TWI_CLKDIV() bfin_read16(TWI0_CLKDIV)
+-#define bfin_write_TWI_CLKDIV(val) bfin_write16(TWI0_CLKDIV, val)
+-#define bfin_read_TWI_CONTROL() bfin_read16(TWI0_CONTROL)
+-#define bfin_write_TWI_CONTROL(val) bfin_write16(TWI0_CONTROL, val)
+-#define bfin_read_TWI_SLAVE_CTRL() bfin_read16(TWI0_SLAVE_CTRL)
+-#define bfin_write_TWI_SLAVE_CTRL(val) bfin_write16(TWI0_SLAVE_CTRL, val)
+-#define bfin_read_TWI_SLAVE_STAT() bfin_read16(TWI0_SLAVE_STAT)
+-#define bfin_write_TWI_SLAVE_STAT(val) bfin_write16(TWI0_SLAVE_STAT, val)
+-#define bfin_read_TWI_SLAVE_ADDR() bfin_read16(TWI0_SLAVE_ADDR)
+-#define bfin_write_TWI_SLAVE_ADDR(val) bfin_write16(TWI0_SLAVE_ADDR, val)
+-#define bfin_read_TWI_MASTER_CTL() bfin_read16(TWI0_MASTER_CTRL)
+-#define bfin_write_TWI_MASTER_CTL(val) bfin_write16(TWI0_MASTER_CTRL, val)
+-#define bfin_read_TWI_MASTER_STAT() bfin_read16(TWI0_MASTER_STAT)
+-#define bfin_write_TWI_MASTER_STAT(val) bfin_write16(TWI0_MASTER_STAT, val)
+-#define bfin_read_TWI_MASTER_ADDR() bfin_read16(TWI0_MASTER_ADDR)
+-#define bfin_write_TWI_MASTER_ADDR(val) bfin_write16(TWI0_MASTER_ADDR, val)
+-#define bfin_read_TWI_INT_STAT() bfin_read16(TWI0_INT_STAT)
+-#define bfin_write_TWI_INT_STAT(val) bfin_write16(TWI0_INT_STAT, val)
+-#define bfin_read_TWI_INT_MASK() bfin_read16(TWI0_INT_MASK)
+-#define bfin_write_TWI_INT_MASK(val) bfin_write16(TWI0_INT_MASK, val)
+-#define bfin_read_TWI_FIFO_CTL() bfin_read16(TWI0_FIFO_CTRL)
+-#define bfin_write_TWI_FIFO_CTL(val) bfin_write16(TWI0_FIFO_CTRL, val)
+-#define bfin_read_TWI_FIFO_STAT() bfin_read16(TWI0_FIFO_STAT)
+-#define bfin_write_TWI_FIFO_STAT(val) bfin_write16(TWI0_FIFO_STAT, val)
+-#define bfin_read_TWI_XMT_DATA8() bfin_read16(TWI0_XMT_DATA8)
+-#define bfin_write_TWI_XMT_DATA8(val) bfin_write16(TWI0_XMT_DATA8, val)
+-#define bfin_read_TWI_XMT_DATA16() bfin_read16(TWI0_XMT_DATA16)
+-#define bfin_write_TWI_XMT_DATA16(val) bfin_write16(TWI0_XMT_DATA16, val)
+-#define bfin_read_TWI_RCV_DATA8() bfin_read16(TWI0_RCV_DATA8)
+-#define bfin_write_TWI_RCV_DATA8(val) bfin_write16(TWI0_RCV_DATA8, val)
+-#define bfin_read_TWI_RCV_DATA16() bfin_read16(TWI0_RCV_DATA16)
+-#define bfin_write_TWI_RCV_DATA16(val) bfin_write16(TWI0_RCV_DATA16, val)
+-
+ /* SPORT0 is not defined in the shared file because it is not available on the ADSP-BF542 and ADSP-BF544 bfin_read_()rocessors */
- int ocfs2_dlm_init(struct ocfs2_super *osb);
-@@ -66,38 +66,32 @@ void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
- struct inode *inode);
- void ocfs2_dentry_lock_res_init(struct ocfs2_dentry_lock *dl,
- u64 parent, struct inode *inode);
-+struct ocfs2_file_private;
-+void ocfs2_file_lock_res_init(struct ocfs2_lock_res *lockres,
-+ struct ocfs2_file_private *fp);
- void ocfs2_lock_res_free(struct ocfs2_lock_res *res);
- int ocfs2_create_new_inode_locks(struct inode *inode);
- int ocfs2_drop_inode_locks(struct inode *inode);
--int ocfs2_data_lock_full(struct inode *inode,
-- int write,
-- int arg_flags);
--#define ocfs2_data_lock(inode, write) ocfs2_data_lock_full(inode, write, 0)
--int ocfs2_data_lock_with_page(struct inode *inode,
-- int write,
-- struct page *page);
--void ocfs2_data_unlock(struct inode *inode,
-- int write);
- int ocfs2_rw_lock(struct inode *inode, int write);
- void ocfs2_rw_unlock(struct inode *inode, int write);
- int ocfs2_open_lock(struct inode *inode);
- int ocfs2_try_open_lock(struct inode *inode, int write);
- void ocfs2_open_unlock(struct inode *inode);
--int ocfs2_meta_lock_atime(struct inode *inode,
-+int ocfs2_inode_lock_atime(struct inode *inode,
- struct vfsmount *vfsmnt,
- int *level);
--int ocfs2_meta_lock_full(struct inode *inode,
-+int ocfs2_inode_lock_full(struct inode *inode,
- struct buffer_head **ret_bh,
- int ex,
- int arg_flags);
--int ocfs2_meta_lock_with_page(struct inode *inode,
-+int ocfs2_inode_lock_with_page(struct inode *inode,
- struct buffer_head **ret_bh,
- int ex,
- struct page *page);
- /* 99% of the time we don't want to supply any additional flags --
- * those are for very specific cases only. */
--#define ocfs2_meta_lock(i, b, e) ocfs2_meta_lock_full(i, b, e, 0)
--void ocfs2_meta_unlock(struct inode *inode,
-+#define ocfs2_inode_lock(i, b, e) ocfs2_inode_lock_full(i, b, e, 0)
-+void ocfs2_inode_unlock(struct inode *inode,
- int ex);
- int ocfs2_super_lock(struct ocfs2_super *osb,
- int ex);
-@@ -107,14 +101,17 @@ int ocfs2_rename_lock(struct ocfs2_super *osb);
- void ocfs2_rename_unlock(struct ocfs2_super *osb);
- int ocfs2_dentry_lock(struct dentry *dentry, int ex);
- void ocfs2_dentry_unlock(struct dentry *dentry, int ex);
-+int ocfs2_file_lock(struct file *file, int ex, int trylock);
-+void ocfs2_file_unlock(struct file *file);
+ /* SPORT1 Registers */
+diff --git a/include/asm-blackfin/mach-bf548/defBF542.h b/include/asm-blackfin/mach-bf548/defBF542.h
+index 32d0713..a7c809f 100644
+--- a/include/asm-blackfin/mach-bf548/defBF542.h
++++ b/include/asm-blackfin/mach-bf548/defBF542.h
+@@ -432,8 +432,8 @@
- void ocfs2_mark_lockres_freeing(struct ocfs2_lock_res *lockres);
- void ocfs2_simple_drop_lockres(struct ocfs2_super *osb,
- struct ocfs2_lock_res *lockres);
+ #define CMD_CRC_FAIL 0x1 /* CMD CRC Fail */
+ #define DAT_CRC_FAIL 0x2 /* Data CRC Fail */
+-#define CMD_TIMEOUT 0x4 /* CMD Time Out */
+-#define DAT_TIMEOUT 0x8 /* Data Time Out */
++#define CMD_TIME_OUT 0x4 /* CMD Time Out */
++#define DAT_TIME_OUT 0x8 /* Data Time Out */
+ #define TX_UNDERRUN 0x10 /* Transmit Underrun */
+ #define RX_OVERRUN 0x20 /* Receive Overrun */
+ #define CMD_RESP_END 0x40 /* CMD Response End */
+diff --git a/include/asm-blackfin/mach-bf548/defBF548.h b/include/asm-blackfin/mach-bf548/defBF548.h
+index ecbca95..e46f568 100644
+--- a/include/asm-blackfin/mach-bf548/defBF548.h
++++ b/include/asm-blackfin/mach-bf548/defBF548.h
+@@ -1095,8 +1095,8 @@
--/* for the vote thread */
-+/* for the downconvert thread */
- void ocfs2_process_blocked_lock(struct ocfs2_super *osb,
- struct ocfs2_lock_res *lockres);
-+void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb);
+ #define CMD_CRC_FAIL 0x1 /* CMD CRC Fail */
+ #define DAT_CRC_FAIL 0x2 /* Data CRC Fail */
+-#define CMD_TIMEOUT 0x4 /* CMD Time Out */
+-#define DAT_TIMEOUT 0x8 /* Data Time Out */
++#define CMD_TIME_OUT 0x4 /* CMD Time Out */
++#define DAT_TIME_OUT 0x8 /* Data Time Out */
+ #define TX_UNDERRUN 0x10 /* Transmit Underrun */
+ #define RX_OVERRUN 0x20 /* Receive Overrun */
+ #define CMD_RESP_END 0x40 /* CMD Response End */
+diff --git a/include/asm-blackfin/mach-bf548/defBF54x_base.h b/include/asm-blackfin/mach-bf548/defBF54x_base.h
+index 319a485..08f90c2 100644
+--- a/include/asm-blackfin/mach-bf548/defBF54x_base.h
++++ b/include/asm-blackfin/mach-bf548/defBF54x_base.h
+@@ -1772,17 +1772,36 @@
+ #define TRP 0x3c0000 /* Pre charge-to-active command period */
+ #define TRAS 0x3c00000 /* Min Active-to-pre charge time */
+ #define TRC 0x3c000000 /* Active-to-active time */
++#define DDR_TRAS(x) ((x<<22)&TRAS) /* DDR tRAS = (1~15) cycles */
++#define DDR_TRP(x) ((x<<18)&TRP) /* DDR tRP = (1~15) cycles */
++#define DDR_TRC(x) ((x<<26)&TRC) /* DDR tRC = (1~15) cycles */
++#define DDR_TRFC(x) ((x<<14)&TRFC) /* DDR tRFC = (1~15) cycles */
++#define DDR_TREFI(x) (x&TREFI) /* DDR tREFI refresh interval = (1~15) cycles */
- struct ocfs2_dlm_debug *ocfs2_new_dlm_debug(void);
- void ocfs2_put_dlm_debug(struct ocfs2_dlm_debug *dlm_debug);
-diff --git a/fs/ocfs2/endian.h b/fs/ocfs2/endian.h
-index ff25762..1942e09 100644
---- a/fs/ocfs2/endian.h
-+++ b/fs/ocfs2/endian.h
-@@ -37,11 +37,6 @@ static inline void le64_add_cpu(__le64 *var, u64 val)
- *var = cpu_to_le64(le64_to_cpu(*var) + val);
- }
+ /* Bit masks for EBIU_DDRCTL1 */
--static inline void le32_and_cpu(__le32 *var, u32 val)
--{
-- *var = cpu_to_le32(le32_to_cpu(*var) & val);
--}
--
- static inline void be32_add_cpu(__be32 *var, u32 val)
- {
- *var = cpu_to_be32(be32_to_cpu(*var) + val);
-diff --git a/fs/ocfs2/export.c b/fs/ocfs2/export.c
-index 535bfa9..67527ce 100644
---- a/fs/ocfs2/export.c
-+++ b/fs/ocfs2/export.c
-@@ -58,7 +58,7 @@ static struct dentry *ocfs2_get_dentry(struct super_block *sb,
- return ERR_PTR(-ESTALE);
- }
+ #define TRCD 0xf /* Active-to-Read/write delay */
+-#define MRD 0xf0 /* Mode register set to active */
++#define TMRD 0xf0 /* Mode register set to active */
+ #define TWR 0x300 /* Write Recovery time */
+ #define DDRDATWIDTH 0x3000 /* DDR data width */
+ #define EXTBANKS 0xc000 /* External banks */
+ #define DDRDEVWIDTH 0x30000 /* DDR device width */
+ #define DDRDEVSIZE 0xc0000 /* DDR device size */
+-#define TWWTR 0xf0000000 /* Write-to-read delay */
++#define TWTR 0xf0000000 /* Write-to-read delay */
++#define DDR_TWTR(x) ((x<<28)&TWTR) /* DDR tWTR = (1~15) cycles */
++#define DDR_TMRD(x) ((x<<4)&TMRD) /* DDR tMRD = (1~15) cycles */
++#define DDR_TWR(x) ((x<<8)&TWR) /* DDR tWR = (1~15) cycles */
++#define DDR_TRCD(x) (x&TRCD) /* DDR tRCD = (1~15) cycles */
++#define DDR_DATWIDTH 0x2000 /* DDR data width */
++#define EXTBANK_1 0 /* 1 external bank */
++#define EXTBANK_2 0x4000 /* 2 external banks */
++#define DEVSZ_64 0x40000 /* DDR External Bank Size = 64MB */
++#define DEVSZ_128 0x80000 /* DDR External Bank Size = 128MB */
++#define DEVSZ_256 0xc0000 /* DDR External Bank Size = 256MB */
++#define DEVSZ_512 0 /* DDR External Bank Size = 512MB */
++#define DEVWD_4 0 /* DDR Device Width = 4 Bits */
++#define DEVWD_8 0x10000 /* DDR Device Width = 8 Bits */
++#define DEVWD_16 0x20000 /* DDR Device Width = 16 Bits */
-- inode = ocfs2_iget(OCFS2_SB(sb), handle->ih_blkno, 0);
-+ inode = ocfs2_iget(OCFS2_SB(sb), handle->ih_blkno, 0, 0);
+ /* Bit masks for EBIU_DDRCTL2 */
- if (IS_ERR(inode))
- return (void *)inode;
-@@ -95,7 +95,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
- mlog(0, "find parent of directory %llu\n",
- (unsigned long long)OCFS2_I(dir)->ip_blkno);
+@@ -1790,6 +1809,10 @@
+ #define CASLATENCY 0x70 /* CAS latency */
+ #define DLLRESET 0x100 /* DLL Reset */
+ #define REGE 0x1000 /* Register mode enable */
++#define CL_1_5 0x50 /* DDR CAS Latency = 1.5 cycles */
++#define CL_2 0x20 /* DDR CAS Latency = 2 cycles */
++#define CL_2_5 0x60 /* DDR CAS Latency = 2.5 cycles */
++#define CL_3 0x30 /* DDR CAS Latency = 3 cycles */
-- status = ocfs2_meta_lock(dir, NULL, 0);
-+ status = ocfs2_inode_lock(dir, NULL, 0);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -109,7 +109,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
- goto bail_unlock;
- }
+ /* Bit masks for EBIU_DDRCTL3 */
-- inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0);
-+ inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0);
- if (IS_ERR(inode)) {
- mlog(ML_ERROR, "Unable to create inode %llu\n",
- (unsigned long long)blkno);
-@@ -126,7 +126,7 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
- parent->d_op = &ocfs2_dentry_ops;
+@@ -2257,6 +2280,10 @@
- bail_unlock:
-- ocfs2_meta_unlock(dir, 0);
-+ ocfs2_inode_unlock(dir, 0);
+ #define CSEL 0x30 /* Core Select */
+ #define SSEL 0xf /* System Select */
++#define CSEL_DIV1 0x0000 /* CCLK = VCO / 1 */
++#define CSEL_DIV2 0x0010 /* CCLK = VCO / 2 */
++#define CSEL_DIV4 0x0020 /* CCLK = VCO / 4 */
++#define CSEL_DIV8 0x0030 /* CCLK = VCO / 8 */
- bail:
- mlog_exit_ptr(parent);
-diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
-index b75b2e1..ed5d523 100644
---- a/fs/ocfs2/file.c
-+++ b/fs/ocfs2/file.c
-@@ -51,6 +51,7 @@
- #include "inode.h"
- #include "ioctl.h"
- #include "journal.h"
-+#include "locks.h"
- #include "mmap.h"
- #include "suballoc.h"
- #include "super.h"
-@@ -63,6 +64,35 @@ static int ocfs2_sync_inode(struct inode *inode)
- return sync_mapping_buffers(inode->i_mapping);
- }
+ /* Bit masks for PLL_CTL */
-+static int ocfs2_init_file_private(struct inode *inode, struct file *file)
-+{
-+ struct ocfs2_file_private *fp;
-+
-+ fp = kzalloc(sizeof(struct ocfs2_file_private), GFP_KERNEL);
-+ if (!fp)
-+ return -ENOMEM;
+diff --git a/include/asm-blackfin/mach-bf548/irq.h b/include/asm-blackfin/mach-bf548/irq.h
+index 9fb7bc5..c34507a 100644
+--- a/include/asm-blackfin/mach-bf548/irq.h
++++ b/include/asm-blackfin/mach-bf548/irq.h
+@@ -88,7 +88,7 @@ Events (highest priority) EMU 0
+ #define IRQ_PINT1 BFIN_IRQ(20) /* PINT1 Interrupt */
+ #define IRQ_MDMAS0 BFIN_IRQ(21) /* MDMA Stream 0 Interrupt */
+ #define IRQ_MDMAS1 BFIN_IRQ(22) /* MDMA Stream 1 Interrupt */
+-#define IRQ_WATCHDOG BFIN_IRQ(23) /* Watchdog Interrupt */
++#define IRQ_WATCH BFIN_IRQ(23) /* Watchdog Interrupt */
+ #define IRQ_DMAC1_ERROR BFIN_IRQ(24) /* DMAC1 Status (Error) Interrupt */
+ #define IRQ_SPORT2_ERROR BFIN_IRQ(25) /* SPORT2 Error Interrupt */
+ #define IRQ_SPORT3_ERROR BFIN_IRQ(26) /* SPORT3 Error Interrupt */
+@@ -406,7 +406,7 @@ Events (highest priority) EMU 0
+ #define IRQ_PINT1_POS 16
+ #define IRQ_MDMAS0_POS 20
+ #define IRQ_MDMAS1_POS 24
+-#define IRQ_WATCHDOG_POS 28
++#define IRQ_WATCH_POS 28
+
+ /* IAR3 BIT FIELDS */
+ #define IRQ_DMAC1_ERR_POS 0
+diff --git a/include/asm-blackfin/mach-bf548/mem_init.h b/include/asm-blackfin/mach-bf548/mem_init.h
+index 0cb279e..befc290 100644
+--- a/include/asm-blackfin/mach-bf548/mem_init.h
++++ b/include/asm-blackfin/mach-bf548/mem_init.h
+@@ -28,8 +28,68 @@
+ * If not, write to the Free Software Foundation,
+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
++#define MIN_DDR_SCLK(x) (x*(CONFIG_SCLK_HZ/1000/1000)/1000 + 1)
+
-+ fp->fp_file = file;
-+ mutex_init(&fp->fp_mutex);
-+ ocfs2_file_lock_res_init(&fp->fp_flock, fp);
-+ file->private_data = fp;
++#if (CONFIG_MEM_MT46V32M16_6T)
++#define DDR_SIZE DEVSZ_512
++#define DDR_WIDTH DEVWD_16
+
-+ return 0;
-+}
++#define DDR_tRC DDR_TRC(MIN_DDR_SCLK(60))
++#define DDR_tRAS DDR_TRAS(MIN_DDR_SCLK(42))
++#define DDR_tRP DDR_TRP(MIN_DDR_SCLK(15))
++#define DDR_tRFC DDR_TRFC(MIN_DDR_SCLK(72))
++#define DDR_tREFI DDR_TREFI(MIN_DDR_SCLK(7800))
+
-+static void ocfs2_free_file_private(struct inode *inode, struct file *file)
-+{
-+ struct ocfs2_file_private *fp = file->private_data;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
++#define DDR_tRCD DDR_TRCD(MIN_DDR_SCLK(15))
++#define DDR_tWTR DDR_TWTR(1)
++#define DDR_tMRD DDR_TMRD(MIN_DDR_SCLK(12))
++#define DDR_tWR DDR_TWR(MIN_DDR_SCLK(15))
++#endif
+
-+ if (fp) {
-+ ocfs2_simple_drop_lockres(osb, &fp->fp_flock);
-+ ocfs2_lock_res_free(&fp->fp_flock);
-+ kfree(fp);
-+ file->private_data = NULL;
-+ }
-+}
++#if (CONFIG_MEM_MT46V32M16_5B)
++#define DDR_SIZE DEVSZ_512
++#define DDR_WIDTH DEVWD_16
+
- static int ocfs2_file_open(struct inode *inode, struct file *file)
- {
- int status;
-@@ -89,7 +119,18 @@ static int ocfs2_file_open(struct inode *inode, struct file *file)
-
- oi->ip_open_count++;
- spin_unlock(&oi->ip_lock);
-- status = 0;
++#define DDR_tRC DDR_TRC(MIN_DDR_SCLK(55))
++#define DDR_tRAS DDR_TRAS(MIN_DDR_SCLK(40))
++#define DDR_tRP DDR_TRP(MIN_DDR_SCLK(15))
++#define DDR_tRFC DDR_TRFC(MIN_DDR_SCLK(70))
++#define DDR_tREFI DDR_TREFI(MIN_DDR_SCLK(7800))
+
-+ status = ocfs2_init_file_private(inode, file);
-+ if (status) {
-+ /*
-+ * We want to set open count back if we're failing the
-+ * open.
-+ */
-+ spin_lock(&oi->ip_lock);
-+ oi->ip_open_count--;
-+ spin_unlock(&oi->ip_lock);
-+ }
++#define DDR_tRCD DDR_TRCD(MIN_DDR_SCLK(15))
++#define DDR_tWTR DDR_TWTR(2)
++#define DDR_tMRD DDR_TMRD(MIN_DDR_SCLK(10))
++#define DDR_tWR DDR_TWR(MIN_DDR_SCLK(15))
++#endif
+
- leave:
- mlog_exit(status);
- return status;
-@@ -108,11 +149,24 @@ static int ocfs2_file_release(struct inode *inode, struct file *file)
- oi->ip_flags &= ~OCFS2_INODE_OPEN_DIRECT;
- spin_unlock(&oi->ip_lock);
-
-+ ocfs2_free_file_private(inode, file);
++#if (CONFIG_MEM_GENERIC_BOARD)
++#define DDR_SIZE DEVSZ_512
++#define DDR_WIDTH DEVWD_16
+
- mlog_exit(0);
-
- return 0;
- }
-
-+static int ocfs2_dir_open(struct inode *inode, struct file *file)
-+{
-+ return ocfs2_init_file_private(inode, file);
-+}
++#define DDR_tRCD DDR_TRCD(3)
++#define DDR_tWTR DDR_TWTR(2)
++#define DDR_tWR DDR_TWR(2)
++#define DDR_tMRD DDR_TMRD(2)
++#define DDR_tRP DDR_TRP(3)
++#define DDR_tRAS DDR_TRAS(7)
++#define DDR_tRC DDR_TRC(10)
++#define DDR_tRFC DDR_TRFC(12)
++#define DDR_tREFI DDR_TREFI(1288)
++#endif
+
-+static int ocfs2_dir_release(struct inode *inode, struct file *file)
-+{
-+ ocfs2_free_file_private(inode, file);
-+ return 0;
-+}
++#if (CONFIG_SCLK_HZ <= 133333333)
++#define DDR_CL CL_2
++#elif (CONFIG_SCLK_HZ <= 166666666)
++#define DDR_CL CL_2_5
++#else
++#define DDR_CL CL_3
++#endif
+
- static int ocfs2_sync_file(struct file *file,
- struct dentry *dentry,
- int datasync)
-@@ -382,18 +436,13 @@ static int ocfs2_truncate_file(struct inode *inode,
-
- down_write(&OCFS2_I(inode)->ip_alloc_sem);
-
-- /* This forces other nodes to sync and drop their pages. Do
-- * this even if we have a truncate without allocation change -
-- * ocfs2 cluster sizes can be much greater than page size, so
-- * we have to truncate them anyway. */
-- status = ocfs2_data_lock(inode, 1);
-- if (status < 0) {
-- up_write(&OCFS2_I(inode)->ip_alloc_sem);
--
-- mlog_errno(status);
-- goto bail;
-- }
--
-+ /*
-+ * The inode lock forced other nodes to sync and drop their
-+ * pages, which (correctly) happens even if we have a truncate
-+ * without allocation change - ocfs2 cluster sizes can be much
-+ * greater than page size, so we have to truncate them
-+ * anyway.
-+ */
- unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
- truncate_inode_pages(inode->i_mapping, new_i_size);
-
-@@ -403,7 +452,7 @@ static int ocfs2_truncate_file(struct inode *inode,
- if (status)
- mlog_errno(status);
-
-- goto bail_unlock_data;
-+ goto bail_unlock_sem;
- }
-
- /* alright, we're going to need to do a full blown alloc size
-@@ -413,25 +462,23 @@ static int ocfs2_truncate_file(struct inode *inode,
- status = ocfs2_orphan_for_truncate(osb, inode, di_bh, new_i_size);
- if (status < 0) {
- mlog_errno(status);
-- goto bail_unlock_data;
-+ goto bail_unlock_sem;
- }
-
- status = ocfs2_prepare_truncate(osb, inode, di_bh, &tc);
- if (status < 0) {
- mlog_errno(status);
-- goto bail_unlock_data;
-+ goto bail_unlock_sem;
- }
-
- status = ocfs2_commit_truncate(osb, inode, di_bh, tc);
- if (status < 0) {
- mlog_errno(status);
-- goto bail_unlock_data;
-+ goto bail_unlock_sem;
- }
-
- /* TODO: orphan dir cleanup here. */
--bail_unlock_data:
-- ocfs2_data_unlock(inode, 1);
--
-+bail_unlock_sem:
- up_write(&OCFS2_I(inode)->ip_alloc_sem);
-
- bail:
-@@ -579,7 +626,7 @@ int ocfs2_lock_allocators(struct inode *inode, struct ocfs2_dinode *di,
-
- mlog(0, "extend inode %llu, i_size = %lld, di->i_clusters = %u, "
- "clusters_to_add = %u, extents_to_split = %u\n",
-- (unsigned long long)OCFS2_I(inode)->ip_blkno, i_size_read(inode),
-+ (unsigned long long)OCFS2_I(inode)->ip_blkno, (long long)i_size_read(inode),
- le32_to_cpu(di->i_clusters), clusters_to_add, extents_to_split);
-
- num_free_extents = ocfs2_num_free_extents(osb, inode, di);
-@@ -760,7 +807,7 @@ restarted_transaction:
- le32_to_cpu(fe->i_clusters),
- (unsigned long long)le64_to_cpu(fe->i_size));
- mlog(0, "inode: ip_clusters=%u, i_size=%lld\n",
-- OCFS2_I(inode)->ip_clusters, i_size_read(inode));
-+ OCFS2_I(inode)->ip_clusters, (long long)i_size_read(inode));
-
- leave:
- if (handle) {
-@@ -917,7 +964,7 @@ static int ocfs2_extend_file(struct inode *inode,
- struct buffer_head *di_bh,
- u64 new_i_size)
- {
-- int ret = 0, data_locked = 0;
-+ int ret = 0;
- struct ocfs2_inode_info *oi = OCFS2_I(inode);
-
- BUG_ON(!di_bh);
-@@ -943,20 +990,6 @@ static int ocfs2_extend_file(struct inode *inode,
- && ocfs2_sparse_alloc(OCFS2_SB(inode->i_sb)))
- goto out_update_size;
-
-- /*
-- * protect the pages that ocfs2_zero_extend is going to be
-- * pulling into the page cache.. we do this before the
-- * metadata extend so that we don't get into the situation
-- * where we've extended the metadata but can't get the data
-- * lock to zero.
-- */
-- ret = ocfs2_data_lock(inode, 1);
-- if (ret < 0) {
-- mlog_errno(ret);
-- goto out;
-- }
-- data_locked = 1;
--
- /*
- * The alloc sem blocks people in read/write from reading our
- * allocation until we're done changing it. We depend on
-@@ -980,7 +1013,7 @@ static int ocfs2_extend_file(struct inode *inode,
- up_write(&oi->ip_alloc_sem);
-
- mlog_errno(ret);
-- goto out_unlock;
-+ goto out;
- }
- }
-
-@@ -991,7 +1024,7 @@ static int ocfs2_extend_file(struct inode *inode,
-
- if (ret < 0) {
- mlog_errno(ret);
-- goto out_unlock;
-+ goto out;
- }
-
- out_update_size:
-@@ -999,10 +1032,6 @@ out_update_size:
- if (ret < 0)
- mlog_errno(ret);
-
--out_unlock:
-- if (data_locked)
-- ocfs2_data_unlock(inode, 1);
--
- out:
- return ret;
- }
-@@ -1050,7 +1079,7 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
- }
- }
-
-- status = ocfs2_meta_lock(inode, &bh, 1);
-+ status = ocfs2_inode_lock(inode, &bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -1102,7 +1131,7 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
- bail_commit:
- ocfs2_commit_trans(osb, handle);
- bail_unlock:
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- bail_unlock_rw:
- if (size_change)
- ocfs2_rw_unlock(inode, 1);
-@@ -1149,7 +1178,7 @@ int ocfs2_permission(struct inode *inode, int mask, struct nameidata *nd)
-
- mlog_entry_void();
-
-- ret = ocfs2_meta_lock(inode, NULL, 0);
-+ ret = ocfs2_inode_lock(inode, NULL, 0);
- if (ret) {
- if (ret != -ENOENT)
- mlog_errno(ret);
-@@ -1158,7 +1187,7 @@ int ocfs2_permission(struct inode *inode, int mask, struct nameidata *nd)
-
- ret = generic_permission(inode, mask, NULL);
++#define mem_DDRCTL0 (DDR_tRP | DDR_tRAS | DDR_tRC | DDR_tRFC | DDR_tREFI)
++#define mem_DDRCTL1 (DDR_DATWIDTH | EXTBANK_1 | DDR_SIZE | DDR_WIDTH | DDR_tWTR \
++ | DDR_tMRD | DDR_tWR | DDR_tRCD)
++#define mem_DDRCTL2 DDR_CL
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
- out:
- mlog_exit(ret);
- return ret;
-@@ -1630,7 +1659,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- goto out;
- }
+-#if (CONFIG_MEM_MT46V32M16)
-- ret = ocfs2_meta_lock(inode, &di_bh, 1);
-+ ret = ocfs2_inode_lock(inode, &di_bh, 1);
- if (ret) {
- mlog_errno(ret);
- goto out_rw_unlock;
-@@ -1638,7 +1667,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ #if defined CONFIG_CLKIN_HALF
+ #define CLKIN_HALF 1
+diff --git a/include/asm-blackfin/mach-bf548/portmux.h b/include/asm-blackfin/mach-bf548/portmux.h
+index 6b48512..8177a56 100644
+--- a/include/asm-blackfin/mach-bf548/portmux.h
++++ b/include/asm-blackfin/mach-bf548/portmux.h
+@@ -1,6 +1,8 @@
+ #ifndef _MACH_PORTMUX_H_
+ #define _MACH_PORTMUX_H_
- if (inode->i_flags & (S_IMMUTABLE|S_APPEND)) {
- ret = -EPERM;
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
++#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++
+ #define P_SPORT2_TFS (P_DEFINED | P_IDENT(GPIO_PA0) | P_FUNCT(0))
+ #define P_SPORT2_DTSEC (P_DEFINED | P_IDENT(GPIO_PA1) | P_FUNCT(0))
+ #define P_SPORT2_DTPRI (P_DEFINED | P_IDENT(GPIO_PA2) | P_FUNCT(0))
+diff --git a/include/asm-blackfin/mach-bf561/anomaly.h b/include/asm-blackfin/mach-bf561/anomaly.h
+index bed9564..0c1d461 100644
+--- a/include/asm-blackfin/mach-bf561/anomaly.h
++++ b/include/asm-blackfin/mach-bf561/anomaly.h
+@@ -7,7 +7,7 @@
+ */
- switch (sr->l_whence) {
-@@ -1652,7 +1681,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- break;
- default:
- ret = -EINVAL;
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
- sr->l_whence = 0;
+ /* This file should be up to date with:
+- * - Revision N, March 28, 2007; ADSP-BF561 Silicon Anomaly List
++ * - Revision O, 11/15/2007; ADSP-BF561 Blackfin Processor Anomaly List
+ */
-@@ -1663,14 +1692,14 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- || (sr->l_start + llen) < 0
- || (sr->l_start + llen) > max_off) {
- ret = -EINVAL;
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
- size = sr->l_start + sr->l_len;
+ #ifndef _MACH_ANOMALY_H_
+@@ -15,7 +15,7 @@
- if (cmd == OCFS2_IOC_RESVSP || cmd == OCFS2_IOC_RESVSP64) {
- if (sr->l_len <= 0) {
- ret = -EINVAL;
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
- }
+ /* We do not support 0.1, 0.2, or 0.4 silicon - sorry */
+ #if __SILICON_REVISION__ < 3 || __SILICON_REVISION__ == 4
+-# error Kernel will not work on BF561 silicon version 0.0, 0.1, 0.2, or 0.4
++# error will not work on BF561 silicon version 0.0, 0.1, 0.2, or 0.4
+ #endif
-@@ -1678,7 +1707,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- ret = __ocfs2_write_remove_suid(inode, di_bh);
- if (ret) {
- mlog_errno(ret);
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
- }
- }
+ /* Multi-Issue Instruction with dsp32shiftimm in slot1 and P-reg Store in slot 2 Not Supported */
+@@ -208,6 +208,8 @@
+ #define ANOMALY_05000275 (__SILICON_REVISION__ > 2)
+ /* Timing Requirements Change for External Frame Sync PPI Modes with Non-Zero PPI_DELAY */
+ #define ANOMALY_05000276 (__SILICON_REVISION__ < 5)
++/* Writes to an I/O data register one SCLK cycle after an edge is detected may clear interrupt */
++#define ANOMALY_05000277 (__SILICON_REVISION__ < 3)
+ /* Disabling Peripherals with DMA Running May Cause DMA System Instability */
+ #define ANOMALY_05000278 (__SILICON_REVISION__ < 5)
+ /* False Hardware Error Exception When ISR Context Is Not Restored */
+@@ -246,6 +248,18 @@
+ #define ANOMALY_05000332 (__SILICON_REVISION__ < 5)
+ /* Flag Data Register Writes One SCLK Cycle After Edge Is Detected May Clear Interrupt Status */
+ #define ANOMALY_05000333 (__SILICON_REVISION__ < 5)
++/* New Feature: Additional PPI Frame Sync Sampling Options (Not Available on Older Silicon) */
++#define ANOMALY_05000339 (__SILICON_REVISION__ < 5)
++/* Memory DMA FIFO Causes Throughput Degradation on Writes to External Memory */
++#define ANOMALY_05000343 (__SILICON_REVISION__ < 5)
++/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
++#define ANOMALY_05000357 (1)
++/* Conflicting Column Address Widths Causes SDRAM Errors */
++#define ANOMALY_05000362 (1)
++/* PPI Underflow Error Goes Undetected in ITU-R 656 Mode */
++#define ANOMALY_05000366 (1)
++/* Possible RETS Register Corruption when Subroutine Is under 5 Cycles in Duration */
++#define ANOMALY_05000371 (1)
-@@ -1704,7 +1733,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- up_write(&OCFS2_I(inode)->ip_alloc_sem);
- if (ret) {
- mlog_errno(ret);
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
+ /* Anomalies that don't exist on this proc */
+ #define ANOMALY_05000158 (0)
+diff --git a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
+index 69b9f8e..7871d43 100644
+--- a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
+@@ -111,7 +111,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
}
-
- /*
-@@ -1714,7 +1743,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
- if (IS_ERR(handle)) {
- ret = PTR_ERR(handle);
- mlog_errno(ret);
-- goto out_meta_unlock;
-+ goto out_inode_unlock;
+ if (uart->rts_pin >= 0) {
+ gpio_request(uart->rts_pin, DRIVER_NAME);
+- gpio_direction_input(uart->rts_pin);
++ gpio_direction_input(uart->rts_pin, 0);
}
+ #endif
+ }
+diff --git a/include/asm-blackfin/mach-bf561/portmux.h b/include/asm-blackfin/mach-bf561/portmux.h
+index 132ad31..a6ee820 100644
+--- a/include/asm-blackfin/mach-bf561/portmux.h
++++ b/include/asm-blackfin/mach-bf561/portmux.h
+@@ -1,6 +1,8 @@
+ #ifndef _MACH_PORTMUX_H_
+ #define _MACH_PORTMUX_H_
- if (change_size && i_size_read(inode) < size)
-@@ -1727,9 +1756,9 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
-
- ocfs2_commit_trans(osb, handle);
++#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++
+ #define P_PPI0_CLK (P_DONTCARE)
+ #define P_PPI0_FS1 (P_DONTCARE)
+ #define P_PPI0_FS2 (P_DONTCARE)
+diff --git a/include/asm-blackfin/mmu.h b/include/asm-blackfin/mmu.h
+index 11d52f1..757e439 100644
+--- a/include/asm-blackfin/mmu.h
++++ b/include/asm-blackfin/mmu.h
+@@ -24,7 +24,9 @@ typedef struct {
+ unsigned long exec_fdpic_loadmap;
+ unsigned long interp_fdpic_loadmap;
+ #endif
+-
++#ifdef CONFIG_MPU
++ unsigned long *page_rwx_mask;
++#endif
+ } mm_context_t;
--out_meta_unlock:
-+out_inode_unlock:
- brelse(di_bh);
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- out_rw_unlock:
- ocfs2_rw_unlock(inode, 1);
+ #endif
+diff --git a/include/asm-blackfin/mmu_context.h b/include/asm-blackfin/mmu_context.h
+index c5c71a6..b5eb675 100644
+--- a/include/asm-blackfin/mmu_context.h
++++ b/include/asm-blackfin/mmu_context.h
+@@ -30,9 +30,12 @@
+ #ifndef __BLACKFIN_MMU_CONTEXT_H__
+ #define __BLACKFIN_MMU_CONTEXT_H__
-@@ -1799,7 +1828,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
- * if we need to make modifications here.
- */
- for(;;) {
-- ret = ocfs2_meta_lock(inode, NULL, meta_level);
-+ ret = ocfs2_inode_lock(inode, NULL, meta_level);
- if (ret < 0) {
- meta_level = -1;
- mlog_errno(ret);
-@@ -1817,7 +1846,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
- * set inode->i_size at the end of a write. */
- if (should_remove_suid(dentry)) {
- if (meta_level == 0) {
-- ocfs2_meta_unlock(inode, meta_level);
-+ ocfs2_inode_unlock(inode, meta_level);
- meta_level = 1;
- continue;
- }
-@@ -1886,7 +1915,7 @@ static int ocfs2_prepare_inode_for_write(struct dentry *dentry,
- *ppos = saved_pos;
++#include <linux/gfp.h>
++#include <linux/sched.h>
+ #include <asm/setup.h>
+ #include <asm/page.h>
+ #include <asm/pgalloc.h>
++#include <asm/cplbinit.h>
- out_unlock:
-- ocfs2_meta_unlock(inode, meta_level);
-+ ocfs2_inode_unlock(inode, meta_level);
+ extern void *current_l1_stack_save;
+ extern int nr_l1stack_tasks;
+@@ -50,6 +53,12 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+ static inline int
+ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ {
++#ifdef CONFIG_MPU
++ unsigned long p = __get_free_pages(GFP_KERNEL, page_mask_order);
++ mm->context.page_rwx_mask = (unsigned long *)p;
++ memset(mm->context.page_rwx_mask, 0,
++ page_mask_nelts * 3 * sizeof(long));
++#endif
+ return 0;
+ }
- out:
- return ret;
-@@ -2099,12 +2128,12 @@ static ssize_t ocfs2_file_splice_read(struct file *in,
- /*
- * See the comment in ocfs2_file_aio_read()
- */
-- ret = ocfs2_meta_lock(inode, NULL, 0);
-+ ret = ocfs2_inode_lock(inode, NULL, 0);
- if (ret < 0) {
- mlog_errno(ret);
- goto bail;
+@@ -73,6 +82,11 @@ static inline void destroy_context(struct mm_struct *mm)
+ sram_free(tmp->addr);
+ kfree(tmp);
}
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
++#ifdef CONFIG_MPU
++ if (current_rwx_mask == mm->context.page_rwx_mask)
++ current_rwx_mask = NULL;
++ free_pages((unsigned long)mm->context.page_rwx_mask, page_mask_order);
++#endif
+ }
- ret = generic_file_splice_read(in, ppos, pipe, len, flags);
+ static inline unsigned long
+@@ -106,9 +120,21 @@ activate_l1stack(struct mm_struct *mm, unsigned long sp_base)
-@@ -2160,12 +2189,12 @@ static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
- * like i_size. This allows the checks down below
- * generic_file_aio_read() a chance of actually working.
- */
-- ret = ocfs2_meta_lock_atime(inode, filp->f_vfsmnt, &lock_level);
-+ ret = ocfs2_inode_lock_atime(inode, filp->f_vfsmnt, &lock_level);
- if (ret < 0) {
- mlog_errno(ret);
- goto bail;
- }
-- ocfs2_meta_unlock(inode, lock_level);
-+ ocfs2_inode_unlock(inode, lock_level);
+ #define deactivate_mm(tsk,mm) do { } while (0)
- ret = generic_file_aio_read(iocb, iov, nr_segs, iocb->ki_pos);
- if (ret == -EINVAL)
-@@ -2204,6 +2233,7 @@ const struct inode_operations ocfs2_special_file_iops = {
- };
+-static inline void activate_mm(struct mm_struct *prev_mm,
+- struct mm_struct *next_mm)
++#define activate_mm(prev, next) switch_mm(prev, next, NULL)
++
++static inline void switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm,
++ struct task_struct *tsk)
+ {
++ if (prev_mm == next_mm)
++ return;
++#ifdef CONFIG_MPU
++ if (prev_mm->context.page_rwx_mask == current_rwx_mask) {
++ flush_switched_cplbs();
++ set_mask_dcplbs(next_mm->context.page_rwx_mask);
++ }
++#endif
++
++ /* L1 stack switching. */
+ if (!next_mm->context.l1_stack_save)
+ return;
+ if (next_mm->context.l1_stack_save == current_l1_stack_save)
+@@ -120,10 +146,36 @@ static inline void activate_mm(struct mm_struct *prev_mm,
+ memcpy(l1_stack_base, current_l1_stack_save, l1_stack_len);
+ }
- const struct file_operations ocfs2_fops = {
-+ .llseek = generic_file_llseek,
- .read = do_sync_read,
- .write = do_sync_write,
- .mmap = ocfs2_mmap,
-@@ -2216,16 +2246,21 @@ const struct file_operations ocfs2_fops = {
- #ifdef CONFIG_COMPAT
- .compat_ioctl = ocfs2_compat_ioctl,
- #endif
-+ .flock = ocfs2_flock,
- .splice_read = ocfs2_file_splice_read,
- .splice_write = ocfs2_file_splice_write,
- };
+-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+- struct task_struct *tsk)
++#ifdef CONFIG_MPU
++static inline void protect_page(struct mm_struct *mm, unsigned long addr,
++ unsigned long flags)
++{
++ unsigned long *mask = mm->context.page_rwx_mask;
++ unsigned long page = addr >> 12;
++ unsigned long idx = page >> 5;
++ unsigned long bit = 1 << (page & 31);
++
++ if (flags & VM_MAYREAD)
++ mask[idx] |= bit;
++ else
++ mask[idx] &= ~bit;
++ mask += page_mask_nelts;
++ if (flags & VM_MAYWRITE)
++ mask[idx] |= bit;
++ else
++ mask[idx] &= ~bit;
++ mask += page_mask_nelts;
++ if (flags & VM_MAYEXEC)
++ mask[idx] |= bit;
++ else
++ mask[idx] &= ~bit;
++}
++
++static inline void update_protections(struct mm_struct *mm)
+ {
+- activate_mm(prev, next);
++ flush_switched_cplbs();
++ set_mask_dcplbs(mm->context.page_rwx_mask);
+ }
++#endif
- const struct file_operations ocfs2_dops = {
-+ .llseek = generic_file_llseek,
- .read = generic_read_dir,
- .readdir = ocfs2_readdir,
- .fsync = ocfs2_sync_file,
-+ .release = ocfs2_dir_release,
-+ .open = ocfs2_dir_open,
- .ioctl = ocfs2_ioctl,
- #ifdef CONFIG_COMPAT
- .compat_ioctl = ocfs2_compat_ioctl,
#endif
-+ .flock = ocfs2_flock,
- };
-diff --git a/fs/ocfs2/file.h b/fs/ocfs2/file.h
-index 066f14a..048ddca 100644
---- a/fs/ocfs2/file.h
-+++ b/fs/ocfs2/file.h
-@@ -32,6 +32,12 @@ extern const struct inode_operations ocfs2_file_iops;
- extern const struct inode_operations ocfs2_special_file_iops;
- struct ocfs2_alloc_context;
-
-+struct ocfs2_file_private {
-+ struct file *fp_file;
-+ struct mutex fp_mutex;
-+ struct ocfs2_lock_res fp_flock;
-+};
-+
- enum ocfs2_alloc_restarted {
- RESTART_NONE = 0,
- RESTART_TRANS,
-diff --git a/fs/ocfs2/heartbeat.c b/fs/ocfs2/heartbeat.c
-index c4c3617..c0efd94 100644
---- a/fs/ocfs2/heartbeat.c
-+++ b/fs/ocfs2/heartbeat.c
-@@ -30,9 +30,6 @@
- #include <linux/highmem.h>
- #include <linux/kmod.h>
+diff --git a/include/asm-blackfin/traps.h b/include/asm-blackfin/traps.h
+index ee1cbf7..f0e5f94 100644
+--- a/include/asm-blackfin/traps.h
++++ b/include/asm-blackfin/traps.h
+@@ -45,6 +45,10 @@
+ #define VEC_CPLB_I_M (44)
+ #define VEC_CPLB_I_MHIT (45)
+ #define VEC_ILL_RES (46) /* including invalid supervisor mode insn */
++/* The hardware reserves (63) for future use - we use it to tell our
++ * normal exception handling code we have a hardware error
++ */
++#define VEC_HWERR (63)
--#include <cluster/heartbeat.h>
--#include <cluster/nodemanager.h>
--
- #include <dlm/dlmapi.h>
+ #ifndef __ASSEMBLY__
- #define MLOG_MASK_PREFIX ML_SUPER
-@@ -44,13 +41,9 @@
- #include "heartbeat.h"
- #include "inode.h"
- #include "journal.h"
--#include "vote.h"
+diff --git a/include/asm-blackfin/uaccess.h b/include/asm-blackfin/uaccess.h
+index 2233f8f..22a410b 100644
+--- a/include/asm-blackfin/uaccess.h
++++ b/include/asm-blackfin/uaccess.h
+@@ -31,7 +31,7 @@ static inline void set_fs(mm_segment_t fs)
+ #define VERIFY_READ 0
+ #define VERIFY_WRITE 1
- #include "buffer_head_io.h"
+-#define access_ok(type,addr,size) _access_ok((unsigned long)(addr),(size))
++#define access_ok(type, addr, size) _access_ok((unsigned long)(addr), (size))
--#define OCFS2_HB_NODE_DOWN_PRI (0x0000002)
--#define OCFS2_HB_NODE_UP_PRI OCFS2_HB_NODE_DOWN_PRI
--
- static inline void __ocfs2_node_map_set_bit(struct ocfs2_node_map *map,
- int bit);
- static inline void __ocfs2_node_map_clear_bit(struct ocfs2_node_map *map,
-@@ -64,9 +57,7 @@ static void __ocfs2_node_map_set(struct ocfs2_node_map *target,
- void ocfs2_init_node_maps(struct ocfs2_super *osb)
+ static inline int is_in_rom(unsigned long addr)
{
- spin_lock_init(&osb->node_map_lock);
-- ocfs2_node_map_init(&osb->mounted_map);
- ocfs2_node_map_init(&osb->recovery_map);
-- ocfs2_node_map_init(&osb->umount_map);
- ocfs2_node_map_init(&osb->osb_recovering_orphan_dirs);
- }
+diff --git a/include/asm-blackfin/unistd.h b/include/asm-blackfin/unistd.h
+index 07ffe8b..e981673 100644
+--- a/include/asm-blackfin/unistd.h
++++ b/include/asm-blackfin/unistd.h
+@@ -369,8 +369,9 @@
+ #define __NR_set_robust_list 354
+ #define __NR_get_robust_list 355
+ #define __NR_fallocate 356
++#define __NR_semtimedop 357
-@@ -87,24 +78,7 @@ static void ocfs2_do_node_down(int node_num,
- return;
+-#define __NR_syscall 357
++#define __NR_syscall 358
+ #define NR_syscalls __NR_syscall
+
+ /* Old optional stuff no one actually uses */
+diff --git a/include/asm-cris/arch-v10/ide.h b/include/asm-cris/arch-v10/ide.h
+index 78b301e..ea34e0d 100644
+--- a/include/asm-cris/arch-v10/ide.h
++++ b/include/asm-cris/arch-v10/ide.h
+@@ -89,11 +89,6 @@ static inline void ide_init_default_hwifs(void)
}
+ }
-- if (ocfs2_node_map_test_bit(osb, &osb->umount_map, node_num)) {
-- /* If a node is in the umount map, then we've been
-- * expecting him to go down and we know ahead of time
-- * that recovery is not necessary. */
-- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
-- return;
-- }
--
- ocfs2_recovery_thread(osb, node_num);
+-/* some configuration options we don't need */
-
-- ocfs2_remove_node_from_vote_queues(osb, node_num);
--}
+-#undef SUPPORT_VLB_SYNC
+-#define SUPPORT_VLB_SYNC 0
-
--static void ocfs2_hb_node_down_cb(struct o2nm_node *node,
-- int node_num,
-- void *data)
--{
-- ocfs2_do_node_down(node_num, (struct ocfs2_super *) data);
- }
+ #endif /* __KERNEL__ */
- /* Called from the dlm when it's about to evict a node. We may also
-@@ -121,27 +95,8 @@ static void ocfs2_dlm_eviction_cb(int node_num,
- ocfs2_do_node_down(node_num, osb);
+ #endif /* __ASMCRIS_IDE_H */
+diff --git a/include/asm-cris/arch-v32/ide.h b/include/asm-cris/arch-v32/ide.h
+index 1129617..fb9c362 100644
+--- a/include/asm-cris/arch-v32/ide.h
++++ b/include/asm-cris/arch-v32/ide.h
+@@ -48,11 +48,6 @@ static inline unsigned long ide_default_io_base(int index)
+ return REG_TYPE_CONV(unsigned long, reg_ata_rw_ctrl2, ctrl2);
}
--static void ocfs2_hb_node_up_cb(struct o2nm_node *node,
-- int node_num,
-- void *data)
--{
-- struct ocfs2_super *osb = data;
--
-- BUG_ON(osb->node_num == node_num);
--
-- mlog(0, "node up event for %d\n", node_num);
-- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
--}
--
- void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb)
- {
-- o2hb_setup_callback(&osb->osb_hb_down, O2HB_NODE_DOWN_CB,
-- ocfs2_hb_node_down_cb, osb,
-- OCFS2_HB_NODE_DOWN_PRI);
+-/* some configuration options we don't need */
-
-- o2hb_setup_callback(&osb->osb_hb_up, O2HB_NODE_UP_CB,
-- ocfs2_hb_node_up_cb, osb, OCFS2_HB_NODE_UP_PRI);
+-#undef SUPPORT_VLB_SYNC
+-#define SUPPORT_VLB_SYNC 0
-
- /* Not exactly a heartbeat callback, but leads to essentially
- * the same path so we set it up here. */
- dlm_setup_eviction_cb(&osb->osb_eviction_cb,
-@@ -149,39 +104,6 @@ void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb)
- osb);
- }
+ #define IDE_ARCH_ACK_INTR
+ #define ide_ack_intr(hwif) ((hwif)->ack_intr(hwif))
--/* Most functions here are just stubs for now... */
--int ocfs2_register_hb_callbacks(struct ocfs2_super *osb)
--{
-- int status;
--
-- if (ocfs2_mount_local(osb))
-- return 0;
--
-- status = o2hb_register_callback(osb->uuid_str, &osb->osb_hb_down);
-- if (status < 0) {
-- mlog_errno(status);
-- goto bail;
-- }
--
-- status = o2hb_register_callback(osb->uuid_str, &osb->osb_hb_up);
-- if (status < 0) {
-- mlog_errno(status);
-- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_down);
-- }
--
--bail:
-- return status;
--}
--
--void ocfs2_clear_hb_callbacks(struct ocfs2_super *osb)
--{
-- if (ocfs2_mount_local(osb))
-- return;
+diff --git a/include/asm-frv/ide.h b/include/asm-frv/ide.h
+index f0bd2cb..8c9a540 100644
+--- a/include/asm-frv/ide.h
++++ b/include/asm-frv/ide.h
+@@ -18,12 +18,6 @@
+ #include <asm/io.h>
+ #include <asm/irq.h>
+
+-#undef SUPPORT_SLOW_DATA_PORTS
+-#define SUPPORT_SLOW_DATA_PORTS 0
-
-- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_down);
-- o2hb_unregister_callback(osb->uuid_str, &osb->osb_hb_up);
--}
+-#undef SUPPORT_VLB_SYNC
+-#define SUPPORT_VLB_SYNC 0
-
- void ocfs2_stop_heartbeat(struct ocfs2_super *osb)
- {
- int ret;
-@@ -341,8 +263,6 @@ int ocfs2_recovery_map_set(struct ocfs2_super *osb,
+ #ifndef MAX_HWIFS
+ #define MAX_HWIFS 8
+ #endif
+diff --git a/include/asm-generic/bitops/ext2-non-atomic.h b/include/asm-generic/bitops/ext2-non-atomic.h
+index 1697404..63cf822 100644
+--- a/include/asm-generic/bitops/ext2-non-atomic.h
++++ b/include/asm-generic/bitops/ext2-non-atomic.h
+@@ -14,5 +14,7 @@
+ generic_find_first_zero_le_bit((unsigned long *)(addr), (size))
+ #define ext2_find_next_zero_bit(addr, size, off) \
+ generic_find_next_zero_le_bit((unsigned long *)(addr), (size), (off))
++#define ext2_find_next_bit(addr, size, off) \
++ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
- spin_lock(&osb->node_map_lock);
+ #endif /* _ASM_GENERIC_BITOPS_EXT2_NON_ATOMIC_H_ */
+diff --git a/include/asm-generic/bitops/le.h b/include/asm-generic/bitops/le.h
+index b9c7e5d..80e3bf1 100644
+--- a/include/asm-generic/bitops/le.h
++++ b/include/asm-generic/bitops/le.h
+@@ -20,6 +20,8 @@
+ #define generic___test_and_clear_le_bit(nr, addr) __test_and_clear_bit(nr, addr)
-- __ocfs2_node_map_clear_bit(&osb->mounted_map, num);
--
- if (!test_bit(num, osb->recovery_map.map)) {
- __ocfs2_node_map_set_bit(&osb->recovery_map, num);
- set = 1;
-diff --git a/fs/ocfs2/heartbeat.h b/fs/ocfs2/heartbeat.h
-index e8fb079..5685921 100644
---- a/fs/ocfs2/heartbeat.h
-+++ b/fs/ocfs2/heartbeat.h
-@@ -29,8 +29,6 @@
- void ocfs2_init_node_maps(struct ocfs2_super *osb);
+ #define generic_find_next_zero_le_bit(addr, size, offset) find_next_zero_bit(addr, size, offset)
++#define generic_find_next_le_bit(addr, size, offset) \
++ find_next_bit(addr, size, offset)
- void ocfs2_setup_hb_callbacks(struct ocfs2_super *osb);
--int ocfs2_register_hb_callbacks(struct ocfs2_super *osb);
--void ocfs2_clear_hb_callbacks(struct ocfs2_super *osb);
- void ocfs2_stop_heartbeat(struct ocfs2_super *osb);
+ #elif defined(__BIG_ENDIAN)
- /* node map functions - used to keep track of mounted and in-recovery
-diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c
-index ebb2bbe..7e9e4c7 100644
---- a/fs/ocfs2/inode.c
-+++ b/fs/ocfs2/inode.c
-@@ -49,7 +49,6 @@
- #include "symlink.h"
- #include "sysfile.h"
- #include "uptodate.h"
--#include "vote.h"
+@@ -42,6 +44,8 @@
- #include "buffer_head_io.h"
+ extern unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset);
++extern unsigned long generic_find_next_le_bit(const unsigned long *addr,
++ unsigned long size, unsigned long offset);
-@@ -58,8 +57,11 @@ struct ocfs2_find_inode_args
- u64 fi_blkno;
- unsigned long fi_ino;
- unsigned int fi_flags;
-+ unsigned int fi_sysfile_type;
- };
+ #else
+ #error "Please fix <asm/byteorder.h>"
+diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
+index d56fedb..2632328 100644
+--- a/include/asm-generic/bug.h
++++ b/include/asm-generic/bug.h
+@@ -31,14 +31,19 @@ struct bug_entry {
+ #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while(0)
+ #endif
-+static struct lock_class_key ocfs2_sysfile_lock_key[NUM_SYSTEM_INODES];
+-#ifndef HAVE_ARCH_WARN_ON
++#ifndef __WARN
++#ifndef __ASSEMBLY__
++extern void warn_on_slowpath(const char *file, const int line);
++#define WANT_WARN_ON_SLOWPATH
++#endif
++#define __WARN() warn_on_slowpath(__FILE__, __LINE__)
++#endif
+
- static int ocfs2_read_locked_inode(struct inode *inode,
- struct ocfs2_find_inode_args *args);
- static int ocfs2_init_locked_inode(struct inode *inode, void *opaque);
-@@ -107,7 +109,8 @@ void ocfs2_get_inode_flags(struct ocfs2_inode_info *oi)
- oi->ip_attr |= OCFS2_DIRSYNC_FL;
- }
++#ifndef WARN_ON
+ #define WARN_ON(condition) ({ \
+ int __ret_warn_on = !!(condition); \
+- if (unlikely(__ret_warn_on)) { \
+- printk("WARNING: at %s:%d %s()\n", __FILE__, \
+- __LINE__, __FUNCTION__); \
+- dump_stack(); \
+- } \
++ if (unlikely(__ret_warn_on)) \
++ __WARN(); \
+ unlikely(__ret_warn_on); \
+ })
+ #endif
+diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h
+index d85172e..4b8d31c 100644
+--- a/include/asm-generic/percpu.h
++++ b/include/asm-generic/percpu.h
+@@ -3,54 +3,79 @@
+ #include <linux/compiler.h>
+ #include <linux/threads.h>
--struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, int flags)
-+struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
-+ int sysfile_type)
- {
- struct inode *inode = NULL;
- struct super_block *sb = osb->sb;
-@@ -127,6 +130,7 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, int flags)
- args.fi_blkno = blkno;
- args.fi_flags = flags;
- args.fi_ino = ino_from_blkno(sb, blkno);
-+ args.fi_sysfile_type = sysfile_type;
+-#define __GENERIC_PER_CPU
++/*
++ * Determine the real variable name from the name visible in the
++ * kernel sources.
++ */
++#define per_cpu_var(var) per_cpu__##var
++
+ #ifdef CONFIG_SMP
- inode = iget5_locked(sb, args.fi_ino, ocfs2_find_actor,
- ocfs2_init_locked_inode, &args);
-@@ -201,6 +205,9 @@ static int ocfs2_init_locked_inode(struct inode *inode, void *opaque)
++/*
++ * per_cpu_offset() is the offset that has to be added to a
++ * percpu variable to get to the instance for a certain processor.
++ *
++ * Most arches use the __per_cpu_offset array for those offsets but
++ * some arches have their own ways of determining the offset (x86_64, s390).
++ */
++#ifndef __per_cpu_offset
+ extern unsigned long __per_cpu_offset[NR_CPUS];
- inode->i_ino = args->fi_ino;
- OCFS2_I(inode)->ip_blkno = args->fi_blkno;
-+ if (args->fi_sysfile_type != 0)
-+ lockdep_set_class(&inode->i_mutex,
-+ &ocfs2_sysfile_lock_key[args->fi_sysfile_type]);
+ #define per_cpu_offset(x) (__per_cpu_offset[x])
++#endif
- mlog_exit(0);
- return 0;
-@@ -322,7 +329,7 @@ int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
- */
- BUG_ON(le32_to_cpu(fe->i_flags) & OCFS2_SYSTEM_FL);
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
+-
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
+-
+-/* var is in discarded region: offset to particular copy we want */
+-#define per_cpu(var, cpu) (*({ \
+- extern int simple_identifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]); }))
+-#define __get_cpu_var(var) per_cpu(var, smp_processor_id())
+-#define __raw_get_cpu_var(var) per_cpu(var, raw_smp_processor_id())
+-
+-/* A macro to avoid #include hell... */
+-#define percpu_modcopy(pcpudst, src, size) \
+-do { \
+- unsigned int __i; \
+- for_each_possible_cpu(__i) \
+- memcpy((pcpudst)+__per_cpu_offset[__i], \
+- (src), (size)); \
+-} while (0)
+-#else /* ! SMP */
++/*
++ * Determine the offset for the currently active processor.
++ * An arch may define __my_cpu_offset to provide a more effective
++ * means of obtaining the offset to the per cpu variables of the
++ * current processor.
++ */
++#ifndef __my_cpu_offset
++#define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
++#define my_cpu_offset per_cpu_offset(smp_processor_id())
++#else
++#define my_cpu_offset __my_cpu_offset
++#endif
++
++/*
++ * Add a offset to a pointer but keep the pointer as is.
++ *
++ * Only S390 provides its own means of moving the pointer.
++ */
++#ifndef SHIFT_PERCPU_PTR
++#define SHIFT_PERCPU_PTR(__p, __offset) RELOC_HIDE((__p), (__offset))
++#endif
-- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_meta_lockres,
-+ ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_inode_lockres,
- OCFS2_LOCK_TYPE_META, 0, inode);
+-#define DEFINE_PER_CPU(type, name) \
+- __typeof__(type) per_cpu__##name
++/*
++ * A percpu variable may point to a discarded regions. The following are
++ * established ways to produce a usable pointer from the percpu variable
++ * offset.
++ */
++#define per_cpu(var, cpu) \
++ (*SHIFT_PERCPU_PTR(&per_cpu_var(var), per_cpu_offset(cpu)))
++#define __get_cpu_var(var) \
++ (*SHIFT_PERCPU_PTR(&per_cpu_var(var), my_cpu_offset))
++#define __raw_get_cpu_var(var) \
++ (*SHIFT_PERCPU_PTR(&per_cpu_var(var), __my_cpu_offset))
- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_open_lockres,
-@@ -333,10 +340,6 @@ int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
- OCFS2_LOCK_TYPE_RW, inode->i_generation,
- inode);
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
-- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_data_lockres,
-- OCFS2_LOCK_TYPE_DATA, inode->i_generation,
-- inode);
--
- ocfs2_set_inode_flags(inode);
+-#define per_cpu(var, cpu) (*((void)(cpu), &per_cpu__##var))
+-#define __get_cpu_var(var) per_cpu__##var
+-#define __raw_get_cpu_var(var) per_cpu__##var
++#ifdef CONFIG_HAVE_SETUP_PER_CPU_AREA
++extern void setup_per_cpu_areas(void);
++#endif
++
++#else /* ! SMP */
++
++#define per_cpu(var, cpu) (*((void)(cpu), &per_cpu_var(var)))
++#define __get_cpu_var(var) per_cpu_var(var)
++#define __raw_get_cpu_var(var) per_cpu_var(var)
- status = 0;
-@@ -414,7 +417,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
- if (args->fi_flags & OCFS2_FI_FLAG_SYSFILE)
- generation = osb->fs_generation;
+ #endif /* SMP */
-- ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_meta_lockres,
-+ ocfs2_inode_lock_res_init(&OCFS2_I(inode)->ip_inode_lockres,
- OCFS2_LOCK_TYPE_META,
- generation, inode);
+-#define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
++#ifndef PER_CPU_ATTRIBUTES
++#define PER_CPU_ATTRIBUTES
++#endif
-@@ -429,7 +432,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
- mlog_errno(status);
- return status;
- }
-- status = ocfs2_meta_lock(inode, NULL, 0);
-+ status = ocfs2_inode_lock(inode, NULL, 0);
- if (status) {
- make_bad_inode(inode);
- mlog_errno(status);
-@@ -484,7 +487,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
++#define DECLARE_PER_CPU(type, name) extern PER_CPU_ATTRIBUTES \
++ __typeof__(type) per_cpu_var(name)
- bail:
- if (can_lock)
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
+ #endif /* _ASM_GENERIC_PERCPU_H_ */
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index a4a22cc..587566f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -44,8 +44,8 @@
+ #define RLIMIT_NICE 13 /* max nice prio allowed to raise to
+ 0-39 for nice level 19 .. -20 */
+ #define RLIMIT_RTPRIO 14 /* maximum realtime priority */
+-
+-#define RLIM_NLIMITS 15
++#define RLIMIT_RTTIME 15 /* timeout for RT tasks in us */
++#define RLIM_NLIMITS 16
- if (status < 0)
- make_bad_inode(inode);
-@@ -586,7 +589,7 @@ static int ocfs2_remove_inode(struct inode *inode,
- }
+ /*
+ * SuS says limits have to be unsigned.
+@@ -86,6 +86,7 @@
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+ [RLIMIT_NICE] = { 0, 0 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
++ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
- mutex_lock(&inode_alloc_inode->i_mutex);
-- status = ocfs2_meta_lock(inode_alloc_inode, &inode_alloc_bh, 1);
-+ status = ocfs2_inode_lock(inode_alloc_inode, &inode_alloc_bh, 1);
- if (status < 0) {
- mutex_unlock(&inode_alloc_inode->i_mutex);
+ #endif /* __KERNEL__ */
+diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
+index 75f2bfa..6ce9f3a 100644
+--- a/include/asm-generic/tlb.h
++++ b/include/asm-generic/tlb.h
+@@ -15,7 +15,6 @@
-@@ -617,7 +620,7 @@ static int ocfs2_remove_inode(struct inode *inode,
- }
+ #include <linux/swap.h>
+ #include <linux/quicklist.h>
+-#include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
- di->i_dtime = cpu_to_le64(CURRENT_TIME.tv_sec);
-- le32_and_cpu(&di->i_flags, ~(OCFS2_VALID_FL | OCFS2_ORPHANED_FL));
-+ di->i_flags &= cpu_to_le32(~(OCFS2_VALID_FL | OCFS2_ORPHANED_FL));
+ /*
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 9f584cc..f784d2f 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -9,10 +9,46 @@
+ /* Align . to a 8 byte boundary equals to maximum function alignment. */
+ #define ALIGN_FUNCTION() . = ALIGN(8)
- status = ocfs2_journal_dirty(handle, di_bh);
- if (status < 0) {
-@@ -635,7 +638,7 @@ static int ocfs2_remove_inode(struct inode *inode,
- bail_commit:
- ocfs2_commit_trans(osb, handle);
- bail_unlock:
-- ocfs2_meta_unlock(inode_alloc_inode, 1);
-+ ocfs2_inode_unlock(inode_alloc_inode, 1);
- mutex_unlock(&inode_alloc_inode->i_mutex);
- brelse(inode_alloc_bh);
- bail:
-@@ -709,7 +712,7 @@ static int ocfs2_wipe_inode(struct inode *inode,
- * delete_inode operation. We do this now to avoid races with
- * recovery completion on other nodes. */
- mutex_lock(&orphan_dir_inode->i_mutex);
-- status = ocfs2_meta_lock(orphan_dir_inode, &orphan_dir_bh, 1);
-+ status = ocfs2_inode_lock(orphan_dir_inode, &orphan_dir_bh, 1);
- if (status < 0) {
- mutex_unlock(&orphan_dir_inode->i_mutex);
++/* The actual configuration determine if the init/exit sections
++ * are handled as text/data or they can be discarded (which
++ * often happens at runtime)
++ */
++#ifdef CONFIG_HOTPLUG
++#define DEV_KEEP(sec) *(.dev##sec)
++#define DEV_DISCARD(sec)
++#else
++#define DEV_KEEP(sec)
++#define DEV_DISCARD(sec) *(.dev##sec)
++#endif
++
++#ifdef CONFIG_HOTPLUG_CPU
++#define CPU_KEEP(sec) *(.cpu##sec)
++#define CPU_DISCARD(sec)
++#else
++#define CPU_KEEP(sec)
++#define CPU_DISCARD(sec) *(.cpu##sec)
++#endif
++
++#if defined(CONFIG_MEMORY_HOTPLUG)
++#define MEM_KEEP(sec) *(.mem##sec)
++#define MEM_DISCARD(sec)
++#else
++#define MEM_KEEP(sec)
++#define MEM_DISCARD(sec) *(.mem##sec)
++#endif
++
++
+ /* .data section */
+ #define DATA_DATA \
+ *(.data) \
+ *(.data.init.refok) \
++ *(.ref.data) \
++ DEV_KEEP(init.data) \
++ DEV_KEEP(exit.data) \
++ CPU_KEEP(init.data) \
++ CPU_KEEP(exit.data) \
++ MEM_KEEP(init.data) \
++ MEM_KEEP(exit.data) \
+ . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start___markers) = .; \
+ *(__markers) \
+@@ -132,14 +168,25 @@
+ *(__ksymtab_strings) \
+ } \
+ \
++ /* __*init sections */ \
++ __init_rodata : AT(ADDR(__init_rodata) - LOAD_OFFSET) { \
++ *(.ref.rodata) \
++ DEV_KEEP(init.rodata) \
++ DEV_KEEP(exit.rodata) \
++ CPU_KEEP(init.rodata) \
++ CPU_KEEP(exit.rodata) \
++ MEM_KEEP(init.rodata) \
++ MEM_KEEP(exit.rodata) \
++ } \
++ \
+ /* Built-in module parameters. */ \
+ __param : AT(ADDR(__param) - LOAD_OFFSET) { \
+ VMLINUX_SYMBOL(__start___param) = .; \
+ *(__param) \
+ VMLINUX_SYMBOL(__stop___param) = .; \
++ . = ALIGN((align)); \
+ VMLINUX_SYMBOL(__end_rodata) = .; \
+ } \
+- \
+ . = ALIGN((align));
-@@ -718,8 +721,8 @@ static int ocfs2_wipe_inode(struct inode *inode,
- }
+ /* RODATA provided for backward compatibility.
+@@ -158,8 +205,16 @@
+ #define TEXT_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text) \
++ *(.ref.text) \
+ *(.text.init.refok) \
+- *(.exit.text.refok)
++ *(.exit.text.refok) \
++ DEV_KEEP(init.text) \
++ DEV_KEEP(exit.text) \
++ CPU_KEEP(init.text) \
++ CPU_KEEP(exit.text) \
++ MEM_KEEP(init.text) \
++ MEM_KEEP(exit.text)
++
- /* we do this while holding the orphan dir lock because we
-- * don't want recovery being run from another node to vote for
-- * an inode delete on us -- this will result in two nodes
-+ * don't want recovery being run from another node to try an
-+ * inode delete underneath us -- this will result in two nodes
- * truncating the same file! */
- status = ocfs2_truncate_for_delete(osb, inode, di_bh);
- if (status < 0) {
-@@ -733,7 +736,7 @@ static int ocfs2_wipe_inode(struct inode *inode,
- mlog_errno(status);
+ /* sched.text is aling to function alignment to secure we have same
+ * address even at second ld pass when generating System.map */
+@@ -183,6 +238,37 @@
+ *(.kprobes.text) \
+ VMLINUX_SYMBOL(__kprobes_text_end) = .;
- bail_unlock_dir:
-- ocfs2_meta_unlock(orphan_dir_inode, 1);
-+ ocfs2_inode_unlock(orphan_dir_inode, 1);
- mutex_unlock(&orphan_dir_inode->i_mutex);
- brelse(orphan_dir_bh);
- bail:
-@@ -744,7 +747,7 @@ bail:
- }
++/* init and exit section handling */
++#define INIT_DATA \
++ *(.init.data) \
++ DEV_DISCARD(init.data) \
++ DEV_DISCARD(init.rodata) \
++ CPU_DISCARD(init.data) \
++ CPU_DISCARD(init.rodata) \
++ MEM_DISCARD(init.data) \
++ MEM_DISCARD(init.rodata)
++
++#define INIT_TEXT \
++ *(.init.text) \
++ DEV_DISCARD(init.text) \
++ CPU_DISCARD(init.text) \
++ MEM_DISCARD(init.text)
++
++#define EXIT_DATA \
++ *(.exit.data) \
++ DEV_DISCARD(exit.data) \
++ DEV_DISCARD(exit.rodata) \
++ CPU_DISCARD(exit.data) \
++ CPU_DISCARD(exit.rodata) \
++ MEM_DISCARD(exit.data) \
++ MEM_DISCARD(exit.rodata)
++
++#define EXIT_TEXT \
++ *(.exit.text) \
++ DEV_DISCARD(exit.text) \
++ CPU_DISCARD(exit.text) \
++ MEM_DISCARD(exit.text)
++
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging sections are relative to
+ the beginning of the section so we begin them at 0. */
+diff --git a/include/asm-ia64/acpi.h b/include/asm-ia64/acpi.h
+index 81bcd5e..cd1cc39 100644
+--- a/include/asm-ia64/acpi.h
++++ b/include/asm-ia64/acpi.h
+@@ -127,6 +127,8 @@ extern int __devinitdata pxm_to_nid_map[MAX_PXM_DOMAINS];
+ extern int __initdata nid_to_pxm_map[MAX_NUMNODES];
+ #endif
- /* There is a series of simple checks that should be done before a
-- * vote is even considered. Encapsulate those in this function. */
-+ * trylock is even considered. Encapsulate those in this function. */
- static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
- {
- int ret = 0;
-@@ -758,14 +761,14 @@ static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
- goto bail;
- }
++#define acpi_unlazy_tlb(x)
++
+ #endif /*__KERNEL__*/
-- /* If we're coming from process_vote we can't go into our own
-+ /* If we're coming from downconvert_thread we can't go into our own
- * voting [hello, deadlock city!], so unforuntately we just
- * have to skip deleting this guy. That's OK though because
- * the node who's doing the actual deleting should handle it
- * anyway. */
-- if (current == osb->vote_task) {
-+ if (current == osb->dc_task) {
- mlog(0, "Skipping delete of %lu because we're currently "
-- "in process_vote\n", inode->i_ino);
-+ "in downconvert\n", inode->i_ino);
- goto bail;
- }
+ #endif /*_ASM_ACPI_H*/
+diff --git a/include/asm-ia64/agp.h b/include/asm-ia64/agp.h
+index 4e517f0..c11fdd8 100644
+--- a/include/asm-ia64/agp.h
++++ b/include/asm-ia64/agp.h
+@@ -15,7 +15,6 @@
+ */
+ #define map_page_into_agp(page) /* nothing */
+ #define unmap_page_from_agp(page) /* nothing */
+-#define flush_agp_mappings() /* nothing */
+ #define flush_agp_cache() mb()
-@@ -779,10 +782,9 @@ static int ocfs2_inode_is_valid_to_delete(struct inode *inode)
- goto bail_unlock;
- }
+ /* Convert a physical address to an address suitable for the GART. */
+diff --git a/include/asm-ia64/gcc_intrin.h b/include/asm-ia64/gcc_intrin.h
+index e58d329..5b6665c 100644
+--- a/include/asm-ia64/gcc_intrin.h
++++ b/include/asm-ia64/gcc_intrin.h
+@@ -24,7 +24,7 @@
+ extern void ia64_bad_param_for_setreg (void);
+ extern void ia64_bad_param_for_getreg (void);
-- /* If we have voted "yes" on the wipe of this inode for
-- * another node, it will be marked here so we can safely skip
-- * it. Recovery will cleanup any inodes we might inadvertantly
-- * skip here. */
-+ /* If we have allowd wipe of this inode for another node, it
-+ * will be marked here so we can safely skip it. Recovery will
-+ * cleanup any inodes we might inadvertantly skip here. */
- if (oi->ip_flags & OCFS2_INODE_SKIP_DELETE) {
- mlog(0, "Skipping delete of %lu because another node "
- "has done this for us.\n", inode->i_ino);
-@@ -929,13 +931,13 @@ void ocfs2_delete_inode(struct inode *inode)
+-register unsigned long ia64_r13 asm ("r13") __attribute_used__;
++register unsigned long ia64_r13 asm ("r13") __used;
- /* Lock down the inode. This gives us an up to date view of
- * it's metadata (for verification), and allows us to
-- * serialize delete_inode votes.
-+ * serialize delete_inode on multiple nodes.
- *
- * Even though we might be doing a truncate, we don't take the
- * allocation lock here as it won't be needed - nobody will
- * have the file open.
- */
-- status = ocfs2_meta_lock(inode, &di_bh, 1);
-+ status = ocfs2_inode_lock(inode, &di_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -947,15 +949,15 @@ void ocfs2_delete_inode(struct inode *inode)
- * before we go ahead and wipe the inode. */
- status = ocfs2_query_inode_wipe(inode, di_bh, &wipe);
- if (!wipe || status < 0) {
-- /* Error and inode busy vote both mean we won't be
-+ /* Error and remote inode busy both mean we won't be
- * removing the inode, so they take almost the same
- * path. */
- if (status < 0)
- mlog_errno(status);
+ #define ia64_setreg(regnum, val) \
+ ({ \
+diff --git a/include/asm-ia64/percpu.h b/include/asm-ia64/percpu.h
+index c4f1e32..0095bcf 100644
+--- a/include/asm-ia64/percpu.h
++++ b/include/asm-ia64/percpu.h
+@@ -16,28 +16,11 @@
+ #include <linux/threads.h>
-- /* Someone in the cluster has voted to not wipe this
-- * inode, or it was never completely orphaned. Write
-- * out the pages and exit now. */
-+ /* Someone in the cluster has disallowed a wipe of
-+ * this inode, or it was never completely
-+ * orphaned. Write out the pages and exit now. */
- ocfs2_cleanup_delete_inode(inode, 1);
- goto bail_unlock_inode;
- }
-@@ -981,7 +983,7 @@ void ocfs2_delete_inode(struct inode *inode)
- OCFS2_I(inode)->ip_flags |= OCFS2_INODE_DELETED;
+ #ifdef HAVE_MODEL_SMALL_ATTRIBUTE
+-# define __SMALL_ADDR_AREA __attribute__((__model__ (__small__)))
+-#else
+-# define __SMALL_ADDR_AREA
++# define PER_CPU_ATTRIBUTES __attribute__((__model__ (__small__)))
+ #endif
- bail_unlock_inode:
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- brelse(di_bh);
- bail_unblock:
- status = sigprocmask(SIG_SETMASK, &oldset, NULL);
-@@ -1008,15 +1010,14 @@ void ocfs2_clear_inode(struct inode *inode)
- mlog_bug_on_msg(OCFS2_SB(inode->i_sb) == NULL,
- "Inode=%lu\n", inode->i_ino);
+ #define DECLARE_PER_CPU(type, name) \
+- extern __SMALL_ADDR_AREA __typeof__(type) per_cpu__##name
+-
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) \
+- __SMALL_ADDR_AREA __typeof__(type) per_cpu__##name
+-
+-#ifdef CONFIG_SMP
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __SMALL_ADDR_AREA __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
+-#else
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
+-#endif
++ extern PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
-- /* For remove delete_inode vote, we hold open lock before,
-- * now it is time to unlock PR and EX open locks. */
-+ /* To preven remote deletes we hold open lock before, now it
-+ * is time to unlock PR and EX open locks. */
- ocfs2_open_unlock(inode);
+ /*
+ * Pretty much a literal copy of asm-generic/percpu.h, except that percpu_modcopy() is an
+@@ -68,9 +51,6 @@ extern void *per_cpu_init(void);
- /* Do these before all the other work so that we don't bounce
-- * the vote thread while waiting to destroy the locks. */
-+ * the downconvert thread while waiting to destroy the locks. */
- ocfs2_mark_lockres_freeing(&oi->ip_rw_lockres);
-- ocfs2_mark_lockres_freeing(&oi->ip_meta_lockres);
-- ocfs2_mark_lockres_freeing(&oi->ip_data_lockres);
-+ ocfs2_mark_lockres_freeing(&oi->ip_inode_lockres);
- ocfs2_mark_lockres_freeing(&oi->ip_open_lockres);
+ #endif /* SMP */
- /* We very well may get a clear_inode before all an inodes
-@@ -1039,8 +1040,7 @@ void ocfs2_clear_inode(struct inode *inode)
- mlog_errno(status);
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
+-
+ /*
+ * Be extremely careful when taking the address of this variable! Due to virtual
+ * remapping, it is different from the canonical address returned by __get_cpu_var(var)!
+diff --git a/include/asm-m32r/signal.h b/include/asm-m32r/signal.h
+index 9372586..1a60706 100644
+--- a/include/asm-m32r/signal.h
++++ b/include/asm-m32r/signal.h
+@@ -157,7 +157,7 @@ typedef struct sigaltstack {
+ #undef __HAVE_ARCH_SIG_BITOPS
- ocfs2_lock_res_free(&oi->ip_rw_lockres);
-- ocfs2_lock_res_free(&oi->ip_meta_lockres);
-- ocfs2_lock_res_free(&oi->ip_data_lockres);
-+ ocfs2_lock_res_free(&oi->ip_inode_lockres);
- ocfs2_lock_res_free(&oi->ip_open_lockres);
+ struct pt_regs;
+-extern int FASTCALL(do_signal(struct pt_regs *regs, sigset_t *oldset));
++extern int do_signal(struct pt_regs *regs, sigset_t *oldset);
- ocfs2_metadata_cache_purge(inode);
-@@ -1184,15 +1184,15 @@ int ocfs2_inode_revalidate(struct dentry *dentry)
- }
- spin_unlock(&OCFS2_I(inode)->ip_lock);
+ #define ptrace_signal_deliver(regs, cookie) do { } while (0)
-- /* Let ocfs2_meta_lock do the work of updating our struct
-+ /* Let ocfs2_inode_lock do the work of updating our struct
- * inode for us. */
-- status = ocfs2_meta_lock(inode, NULL, 0);
-+ status = ocfs2_inode_lock(inode, NULL, 0);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
- goto bail;
- }
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
- bail:
- mlog_exit(status);
+diff --git a/include/asm-m68k/bitops.h b/include/asm-m68k/bitops.h
+index 2976b5d..83d1f28 100644
+--- a/include/asm-m68k/bitops.h
++++ b/include/asm-m68k/bitops.h
+@@ -410,6 +410,8 @@ static inline int ext2_find_next_zero_bit(const void *vaddr, unsigned size,
+ res = ext2_find_first_zero_bit (p, size - 32 * (p - addr));
+ return (p - addr) * 32 + res;
+ }
++#define ext2_find_next_bit(addr, size, off) \
++ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
-diff --git a/fs/ocfs2/inode.h b/fs/ocfs2/inode.h
-index 70e881c..390a855 100644
---- a/fs/ocfs2/inode.h
-+++ b/fs/ocfs2/inode.h
-@@ -34,8 +34,7 @@ struct ocfs2_inode_info
- u64 ip_blkno;
+ #endif /* __KERNEL__ */
- struct ocfs2_lock_res ip_rw_lockres;
-- struct ocfs2_lock_res ip_meta_lockres;
-- struct ocfs2_lock_res ip_data_lockres;
-+ struct ocfs2_lock_res ip_inode_lockres;
- struct ocfs2_lock_res ip_open_lockres;
+diff --git a/include/asm-m68knommu/bitops.h b/include/asm-m68knommu/bitops.h
+index f8dfb7b..f43afe1 100644
+--- a/include/asm-m68knommu/bitops.h
++++ b/include/asm-m68knommu/bitops.h
+@@ -294,6 +294,8 @@ found_middle:
+ return result + ffz(__swab32(tmp));
+ }
- /* protects allocation changes on this inode. */
-@@ -121,9 +120,10 @@ void ocfs2_delete_inode(struct inode *inode);
- void ocfs2_drop_inode(struct inode *inode);
++#define ext2_find_next_bit(addr, size, off) \
++ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
+ #include <asm-generic/bitops/minix.h>
- /* Flags for ocfs2_iget() */
--#define OCFS2_FI_FLAG_SYSFILE 0x4
--#define OCFS2_FI_FLAG_ORPHAN_RECOVERY 0x8
--struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 feoff, int flags);
-+#define OCFS2_FI_FLAG_SYSFILE 0x1
-+#define OCFS2_FI_FLAG_ORPHAN_RECOVERY 0x2
-+struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 feoff, unsigned flags,
-+ int sysfile_type);
- int ocfs2_inode_init_private(struct inode *inode);
- int ocfs2_inode_revalidate(struct dentry *dentry);
- int ocfs2_populate_inode(struct inode *inode, struct ocfs2_dinode *fe,
-diff --git a/fs/ocfs2/ioctl.c b/fs/ocfs2/ioctl.c
-index 87dcece..5177fba 100644
---- a/fs/ocfs2/ioctl.c
-+++ b/fs/ocfs2/ioctl.c
-@@ -20,6 +20,7 @@
+ #endif /* __KERNEL__ */
+diff --git a/include/asm-mips/addrspace.h b/include/asm-mips/addrspace.h
+index 0bb7a93..569f80a 100644
+--- a/include/asm-mips/addrspace.h
++++ b/include/asm-mips/addrspace.h
+@@ -127,7 +127,7 @@
+ #define PHYS_TO_XKSEG_CACHED(p) PHYS_TO_XKPHYS(K_CALG_COH_SHAREABLE, (p))
+ #define XKPHYS_TO_PHYS(p) ((p) & TO_PHYS_MASK)
+ #define PHYS_TO_XKPHYS(cm, a) (_CONST64_(0x8000000000000000) | \
+- ((cm)<<59) | (a))
++ (_CONST64_(cm) << 59) | (a))
- #include "ocfs2_fs.h"
- #include "ioctl.h"
-+#include "resize.h"
+ /*
+ * The ultimate limited of the 64-bit MIPS architecture: 2 bits for selecting
+diff --git a/include/asm-mips/asm.h b/include/asm-mips/asm.h
+index 12e1758..608cfcf 100644
+--- a/include/asm-mips/asm.h
++++ b/include/asm-mips/asm.h
+@@ -398,4 +398,12 @@ symbol = value
- #include <linux/ext2_fs.h>
+ #define SSNOP sll zero, zero, 1
-@@ -27,14 +28,14 @@ static int ocfs2_get_inode_attr(struct inode *inode, unsigned *flags)
- {
- int status;
++#ifdef CONFIG_SGI_IP28
++/* Inhibit speculative stores to volatile (e.g. DMA) or invalid addresses. */
++#include <asm/cacheops.h>
++#define R10KCBARRIER(addr) cache Cache_Barrier, addr;
++#else
++#define R10KCBARRIER(addr)
++#endif
++
+ #endif /* __ASM_ASM_H */
+diff --git a/include/asm-mips/bootinfo.h b/include/asm-mips/bootinfo.h
+index b2dd9b3..e031bdf 100644
+--- a/include/asm-mips/bootinfo.h
++++ b/include/asm-mips/bootinfo.h
+@@ -48,22 +48,11 @@
+ #define MACH_DS5900 10 /* DECsystem 5900 */
-- status = ocfs2_meta_lock(inode, NULL, 0);
-+ status = ocfs2_inode_lock(inode, NULL, 0);
- if (status < 0) {
- mlog_errno(status);
- return status;
- }
- ocfs2_get_inode_flags(OCFS2_I(inode));
- *flags = OCFS2_I(inode)->ip_attr;
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
+ /*
+- * Valid machtype for group ARC
+- */
+-#define MACH_DESKSTATION_RPC44 0 /* Deskstation rPC44 */
+-#define MACH_DESKSTATION_TYNE 1 /* Deskstation Tyne */
+-
+-/*
+ * Valid machtype for group SNI_RM
+ */
+ #define MACH_SNI_RM200_PCI 0 /* RM200/RM300/RM400 PCI series */
- mlog_exit(status);
- return status;
-@@ -52,7 +53,7 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
+ /*
+- * Valid machtype for group ACN
+- */
+-#define MACH_ACN_MIPS_BOARD 0 /* ACN MIPS single board */
+-
+-/*
+ * Valid machtype for group SGI
+ */
+ #define MACH_SGI_IP22 0 /* Indy, Indigo2, Challenge S */
+@@ -73,44 +62,6 @@
+ #define MACH_SGI_IP30 4 /* Octane, Octane2 */
- mutex_lock(&inode->i_mutex);
+ /*
+- * Valid machtype for group COBALT
+- */
+-#define MACH_COBALT_27 0 /* Proto "27" hardware */
+-
+-/*
+- * Valid machtype for group BAGET
+- */
+-#define MACH_BAGET201 0 /* BT23-201 */
+-#define MACH_BAGET202 1 /* BT23-202 */
+-
+-/*
+- * Cosine boards.
+- */
+-#define MACH_COSINE_ORION 0
+-
+-/*
+- * Valid machtype for group MOMENCO
+- */
+-#define MACH_MOMENCO_OCELOT 0
+-#define MACH_MOMENCO_OCELOT_G 1 /* no more supported (may 2007) */
+-#define MACH_MOMENCO_OCELOT_C 2 /* no more supported (jun 2007) */
+-#define MACH_MOMENCO_JAGUAR_ATX 3 /* no more supported (may 2007) */
+-#define MACH_MOMENCO_OCELOT_3 4
+-
+-/*
+- * Valid machtype for group PHILIPS
+- */
+-#define MACH_PHILIPS_NINO 0 /* Nino */
+-#define MACH_PHILIPS_VELO 1 /* Velo */
+-#define MACH_PHILIPS_JBS 2 /* JBS */
+-#define MACH_PHILIPS_STB810 3 /* STB810 */
+-
+-/*
+- * Valid machtype for group SIBYTE
+- */
+-#define MACH_SWARM 0
+-
+-/*
+ * Valid machtypes for group Toshiba
+ */
+ #define MACH_PALLAS 0
+@@ -122,64 +73,17 @@
+ #define MACH_TOSHIBA_RBTX4938 6
-- status = ocfs2_meta_lock(inode, &bh, 1);
-+ status = ocfs2_inode_lock(inode, &bh, 1);
- if (status < 0) {
- mlog_errno(status);
- goto bail;
-@@ -100,7 +101,7 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
+ /*
+- * Valid machtype for group Alchemy
+- */
+-#define MACH_PB1000 0 /* Au1000-based eval board */
+-#define MACH_PB1100 1 /* Au1100-based eval board */
+-#define MACH_PB1500 2 /* Au1500-based eval board */
+-#define MACH_DB1000 3 /* Au1000-based eval board */
+-#define MACH_DB1100 4 /* Au1100-based eval board */
+-#define MACH_DB1500 5 /* Au1500-based eval board */
+-#define MACH_XXS1500 6 /* Au1500-based eval board */
+-#define MACH_MTX1 7 /* 4G MTX-1 Au1500-based board */
+-#define MACH_PB1550 8 /* Au1550-based eval board */
+-#define MACH_DB1550 9 /* Au1550-based eval board */
+-#define MACH_PB1200 10 /* Au1200-based eval board */
+-#define MACH_DB1200 11 /* Au1200-based eval board */
+-
+-/*
+- * Valid machtype for group NEC_VR41XX
+- *
+- * Various NEC-based devices.
+- *
+- * FIXME: MACH_GROUPs should be by _MANUFACTURER_ of * the device, not by
+- * technical properties, so no new additions to this group.
+- */
+-#define MACH_NEC_OSPREY 0 /* Osprey eval board */
+-#define MACH_NEC_EAGLE 1 /* NEC Eagle/Hawk board */
+-#define MACH_ZAO_CAPCELLA 2 /* ZAO Networks Capcella */
+-#define MACH_VICTOR_MPC30X 3 /* Victor MP-C303/304 */
+-#define MACH_IBM_WORKPAD 4 /* IBM WorkPad z50 */
+-#define MACH_CASIO_E55 5 /* CASIO CASSIOPEIA E-10/15/55/65 */
+-#define MACH_TANBAC_TB0226 6 /* TANBAC TB0226 (Mbase) */
+-#define MACH_TANBAC_TB0229 7 /* TANBAC TB0229 (VR4131DIMM) */
+-#define MACH_NEC_CMBVR4133 8 /* CMB VR4133 Board */
+-
+-#define MACH_HP_LASERJET 1
+-
+-/*
+ * Valid machtype for group LASAT
+ */
+ #define MACH_LASAT_100 0 /* Masquerade II/SP100/SP50/SP25 */
+ #define MACH_LASAT_200 1 /* Masquerade PRO/SP200 */
- ocfs2_commit_trans(osb, handle);
- bail_unlock:
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- bail:
- mutex_unlock(&inode->i_mutex);
+ /*
+- * Valid machtype for group TITAN
+- */
+-#define MACH_TITAN_YOSEMITE 1 /* PMC-Sierra Yosemite */
+-#define MACH_TITAN_EXCITE 2 /* Basler eXcite */
+-
+-/*
+ * Valid machtype for group NEC EMMA2RH
+ */
+ #define MACH_NEC_MARKEINS 0 /* NEC EMMA2RH Mark-eins */
-@@ -115,8 +116,10 @@ int ocfs2_ioctl(struct inode * inode, struct file * filp,
- unsigned int cmd, unsigned long arg)
- {
- unsigned int flags;
-+ int new_clusters;
- int status;
- struct ocfs2_space_resv sr;
-+ struct ocfs2_new_group_input input;
+ /*
+- * Valid machtype for group LEMOTE
+- */
+-#define MACH_LEMOTE_FULONG 0
+-
+-/*
+ * Valid machtype for group PMC-MSP
+ */
+ #define MACH_MSP4200_EVAL 0 /* PMC-Sierra MSP4200 Evaluation */
+@@ -190,16 +94,9 @@
+ #define MACH_MSP7120_FPGA 5 /* PMC-Sierra MSP7120 Emulation */
+ #define MACH_MSP_OTHER 255 /* PMC-Sierra unknown board type */
- switch (cmd) {
- case OCFS2_IOC_GETFLAGS:
-@@ -140,6 +143,23 @@ int ocfs2_ioctl(struct inode * inode, struct file * filp,
- return -EFAULT;
+-#define MACH_WRPPMC 1
+-
+-/*
+- * Valid machtype for group Broadcom
+- */
+-#define MACH_GROUP_BRCM 23 /* Broadcom */
+-#define MACH_BCM47XX 1 /* Broadcom BCM47XX */
+-
+ #define CL_SIZE COMMAND_LINE_SIZE
- return ocfs2_change_file_space(filp, cmd, &sr);
-+ case OCFS2_IOC_GROUP_EXTEND:
-+ if (!capable(CAP_SYS_RESOURCE))
-+ return -EPERM;
-+
-+ if (get_user(new_clusters, (int __user *)arg))
-+ return -EFAULT;
++extern char *system_type;
+ const char *get_system_type(void);
+
+ extern unsigned long mips_machtype;
+diff --git a/include/asm-mips/bugs.h b/include/asm-mips/bugs.h
+index 0d7f9c1..9dc10df 100644
+--- a/include/asm-mips/bugs.h
++++ b/include/asm-mips/bugs.h
+@@ -1,19 +1,34 @@
+ /*
+ * This is included by init/main.c to check for architecture-dependent bugs.
+ *
++ * Copyright (C) 2007 Maciej W. Rozycki
++ *
+ * Needs:
+ * void check_bugs(void);
+ */
+ #ifndef _ASM_BUGS_H
+ #define _ASM_BUGS_H
+
++#include <linux/bug.h>
+ #include <linux/delay.h>
+
-+ return ocfs2_group_extend(inode, new_clusters);
-+ case OCFS2_IOC_GROUP_ADD:
-+ case OCFS2_IOC_GROUP_ADD64:
-+ if (!capable(CAP_SYS_RESOURCE))
-+ return -EPERM;
+ #include <asm/cpu.h>
+ #include <asm/cpu-info.h>
+
++extern int daddiu_bug;
+
-+ if (copy_from_user(&input, (int __user *) arg, sizeof(input)))
-+ return -EFAULT;
++extern void check_bugs64_early(void);
+
-+ return ocfs2_group_add(inode, &input);
- default:
- return -ENOTTY;
- }
-@@ -162,6 +182,9 @@ long ocfs2_compat_ioctl(struct file *file, unsigned cmd, unsigned long arg)
- case OCFS2_IOC_RESVSP64:
- case OCFS2_IOC_UNRESVSP:
- case OCFS2_IOC_UNRESVSP64:
-+ case OCFS2_IOC_GROUP_EXTEND:
-+ case OCFS2_IOC_GROUP_ADD:
-+ case OCFS2_IOC_GROUP_ADD64:
- break;
- default:
- return -ENOIOCTLCMD;
-diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
-index 8d81f6c..f31c7e8 100644
---- a/fs/ocfs2/journal.c
-+++ b/fs/ocfs2/journal.c
-@@ -44,7 +44,6 @@
- #include "localalloc.h"
- #include "slot_map.h"
- #include "super.h"
--#include "vote.h"
- #include "sysfile.h"
-
- #include "buffer_head_io.h"
-@@ -103,7 +102,7 @@ static int ocfs2_commit_cache(struct ocfs2_super *osb)
- mlog(0, "commit_thread: flushed transaction %lu (%u handles)\n",
- journal->j_trans_id, flushed);
+ extern void check_bugs32(void);
+ extern void check_bugs64(void);
-- ocfs2_kick_vote_thread(osb);
-+ ocfs2_wake_downconvert_thread(osb);
- wake_up(&journal->j_checkpointed);
- finally:
- mlog_exit(status);
-@@ -314,14 +313,18 @@ int ocfs2_journal_dirty_data(handle_t *handle,
- return err;
++static inline void check_bugs_early(void)
++{
++#ifdef CONFIG_64BIT
++ check_bugs64_early();
++#endif
++}
++
+ static inline void check_bugs(void)
+ {
+ unsigned int cpu = smp_processor_id();
+@@ -25,4 +40,14 @@ static inline void check_bugs(void)
+ #endif
}
--#define OCFS2_DEFAULT_COMMIT_INTERVAL (HZ * 5)
-+#define OCFS2_DEFAULT_COMMIT_INTERVAL (HZ * JBD_DEFAULT_MAX_COMMIT_AGE)
-
- void ocfs2_set_journal_params(struct ocfs2_super *osb)
- {
- journal_t *journal = osb->journal->j_journal;
-+ unsigned long commit_interval = OCFS2_DEFAULT_COMMIT_INTERVAL;
++static inline int r4k_daddiu_bug(void)
++{
++#ifdef CONFIG_64BIT
++ WARN_ON(daddiu_bug < 0);
++ return daddiu_bug != 0;
++#else
++ return 0;
++#endif
++}
+
-+ if (osb->osb_commit_interval)
-+ commit_interval = osb->osb_commit_interval;
-
- spin_lock(&journal->j_state_lock);
-- journal->j_commit_interval = OCFS2_DEFAULT_COMMIT_INTERVAL;
-+ journal->j_commit_interval = commit_interval;
- if (osb->s_mount_opt & OCFS2_MOUNT_BARRIER)
- journal->j_flags |= JFS_BARRIER;
- else
-@@ -337,7 +340,7 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
- struct ocfs2_dinode *di = NULL;
- struct buffer_head *bh = NULL;
- struct ocfs2_super *osb;
-- int meta_lock = 0;
-+ int inode_lock = 0;
-
- mlog_entry_void();
-
-@@ -367,14 +370,14 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
- /* Skip recovery waits here - journal inode metadata never
- * changes in a live cluster so it can be considered an
- * exception to the rule. */
-- status = ocfs2_meta_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
-+ status = ocfs2_inode_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
- if (status < 0) {
- if (status != -ERESTARTSYS)
- mlog(ML_ERROR, "Could not get lock on journal!\n");
- goto done;
- }
-
-- meta_lock = 1;
-+ inode_lock = 1;
- di = (struct ocfs2_dinode *)bh->b_data;
+ #endif /* _ASM_BUGS_H */
+diff --git a/include/asm-mips/cpu-info.h b/include/asm-mips/cpu-info.h
+index ed5c02c..0c5a358 100644
+--- a/include/asm-mips/cpu-info.h
++++ b/include/asm-mips/cpu-info.h
+@@ -55,6 +55,7 @@ struct cpuinfo_mips {
+ struct cache_desc scache; /* Secondary cache */
+ struct cache_desc tcache; /* Tertiary/split secondary cache */
+ int srsets; /* Shadow register sets */
++ int core; /* physical core number */
+ #if defined(CONFIG_MIPS_MT_SMTC)
+ /*
+ * In the MIPS MT "SMTC" model, each TC is considered
+@@ -63,8 +64,10 @@ struct cpuinfo_mips {
+ * to all TCs within the same VPE.
+ */
+ int vpe_id; /* Virtual Processor number */
+- int tc_id; /* Thread Context number */
+ #endif /* CONFIG_MIPS_MT */
++#ifdef CONFIG_MIPS_MT_SMTC
++ int tc_id; /* Thread Context number */
++#endif
+ void *data; /* Additional data */
+ } __attribute__((aligned(SMP_CACHE_BYTES)));
- if (inode->i_size < OCFS2_MIN_JOURNAL_SIZE) {
-@@ -414,8 +417,8 @@ int ocfs2_journal_init(struct ocfs2_journal *journal, int *dirty)
- status = 0;
- done:
- if (status < 0) {
-- if (meta_lock)
-- ocfs2_meta_unlock(inode, 1);
-+ if (inode_lock)
-+ ocfs2_inode_unlock(inode, 1);
- if (bh != NULL)
- brelse(bh);
- if (inode) {
-@@ -544,7 +547,7 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
- OCFS2_I(inode)->ip_open_count--;
+diff --git a/include/asm-mips/cpu.h b/include/asm-mips/cpu.h
+index 54fc18a..bf5bbc7 100644
+--- a/include/asm-mips/cpu.h
++++ b/include/asm-mips/cpu.h
+@@ -195,8 +195,8 @@ enum cpu_type_enum {
+ * MIPS32 class processors
+ */
+ CPU_4KC, CPU_4KEC, CPU_4KSC, CPU_24K, CPU_34K, CPU_74K, CPU_AU1000,
+- CPU_AU1100, CPU_AU1200, CPU_AU1500, CPU_AU1550, CPU_PR4450,
+- CPU_BCM3302, CPU_BCM4710,
++ CPU_AU1100, CPU_AU1200, CPU_AU1210, CPU_AU1250, CPU_AU1500, CPU_AU1550,
++ CPU_PR4450, CPU_BCM3302, CPU_BCM4710,
- /* unlock our journal */
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+ /*
+ * MIPS64 class processors
+diff --git a/include/asm-mips/delay.h b/include/asm-mips/delay.h
+index fab3213..b0bccd2 100644
+--- a/include/asm-mips/delay.h
++++ b/include/asm-mips/delay.h
+@@ -6,13 +6,16 @@
+ * Copyright (C) 1994 by Waldorf Electronics
+ * Copyright (C) 1995 - 2000, 01, 03 by Ralf Baechle
+ * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
++ * Copyright (C) 2007 Maciej W. Rozycki
+ */
+ #ifndef _ASM_DELAY_H
+ #define _ASM_DELAY_H
- brelse(journal->j_bh);
- journal->j_bh = NULL;
-@@ -883,8 +886,8 @@ restart:
- ocfs2_super_unlock(osb, 1);
+ #include <linux/param.h>
+ #include <linux/smp.h>
++
+ #include <asm/compiler.h>
++#include <asm/war.h>
- /* We always run recovery on our own orphan dir - the dead
-- * node(s) may have voted "no" on an inode delete earlier. A
-- * revote is therefore required. */
-+ * node(s) may have disallowed a previous inode delete. Re-processing
-+ * is therefore required. */
- ocfs2_queue_recovery_completion(osb->journal, osb->slot_num, NULL,
- NULL);
+ static inline void __delay(unsigned long loops)
+ {
+@@ -25,7 +28,7 @@ static inline void __delay(unsigned long loops)
+ " .set reorder \n"
+ : "=r" (loops)
+ : "0" (loops));
+- else if (sizeof(long) == 8)
++ else if (sizeof(long) == 8 && !DADDI_WAR)
+ __asm__ __volatile__ (
+ " .set noreorder \n"
+ " .align 3 \n"
+@@ -34,6 +37,15 @@ static inline void __delay(unsigned long loops)
+ " .set reorder \n"
+ : "=r" (loops)
+ : "0" (loops));
++ else if (sizeof(long) == 8 && DADDI_WAR)
++ __asm__ __volatile__ (
++ " .set noreorder \n"
++ " .align 3 \n"
++ "1: bnez %0, 1b \n"
++ " dsubu %0, %2 \n"
++ " .set reorder \n"
++ : "=r" (loops)
++ : "0" (loops), "r" (1));
+ }
-@@ -973,9 +976,9 @@ static int ocfs2_replay_journal(struct ocfs2_super *osb,
- }
- SET_INODE_JOURNAL(inode);
-- status = ocfs2_meta_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
-+ status = ocfs2_inode_lock_full(inode, &bh, 1, OCFS2_META_LOCK_RECOVERY);
- if (status < 0) {
-- mlog(0, "status returned from ocfs2_meta_lock=%d\n", status);
-+ mlog(0, "status returned from ocfs2_inode_lock=%d\n", status);
- if (status != -ERESTARTSYS)
- mlog(ML_ERROR, "Could not lock journal!\n");
- goto done;
-@@ -1047,7 +1050,7 @@ static int ocfs2_replay_journal(struct ocfs2_super *osb,
- done:
- /* drop the lock on this nodes journal */
- if (got_lock)
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+@@ -50,7 +62,7 @@ static inline void __delay(unsigned long loops)
- if (inode)
- iput(inode);
-@@ -1162,14 +1165,14 @@ static int ocfs2_trylock_journal(struct ocfs2_super *osb,
- SET_INODE_JOURNAL(inode);
+ static inline void __udelay(unsigned long usecs, unsigned long lpj)
+ {
+- unsigned long lo;
++ unsigned long hi, lo;
- flags = OCFS2_META_LOCK_RECOVERY | OCFS2_META_LOCK_NOQUEUE;
-- status = ocfs2_meta_lock_full(inode, NULL, 1, flags);
-+ status = ocfs2_inode_lock_full(inode, NULL, 1, flags);
- if (status < 0) {
- if (status != -EAGAIN)
- mlog_errno(status);
- goto bail;
- }
+ /*
+ * The rates of 128 is rounded wrongly by the catchall case
+@@ -70,11 +82,16 @@ static inline void __udelay(unsigned long usecs, unsigned long lpj)
+ : "=h" (usecs), "=l" (lo)
+ : "r" (usecs), "r" (lpj)
+ : GCC_REG_ACCUM);
+- else if (sizeof(long) == 8)
++ else if (sizeof(long) == 8 && !R4000_WAR)
+ __asm__("dmultu\t%2, %3"
+ : "=h" (usecs), "=l" (lo)
+ : "r" (usecs), "r" (lpj)
+ : GCC_REG_ACCUM);
++ else if (sizeof(long) == 8 && R4000_WAR)
++ __asm__("dmultu\t%3, %4\n\tmfhi\t%0"
++ : "=r" (usecs), "=h" (hi), "=l" (lo)
++ : "r" (usecs), "r" (lpj)
++ : GCC_REG_ACCUM);
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
- bail:
- if (inode)
- iput(inode);
-@@ -1241,7 +1244,7 @@ static int ocfs2_orphan_filldir(void *priv, const char *name, int name_len,
+ __delay(usecs);
+ }
+diff --git a/include/asm-mips/dma.h b/include/asm-mips/dma.h
+index d6a6c21..1353c81 100644
+--- a/include/asm-mips/dma.h
++++ b/include/asm-mips/dma.h
+@@ -84,10 +84,9 @@
+ * Deskstations or Acer PICA but not the much more versatile DMA logic used
+ * for the local devices on Acer PICA or Magnums.
+ */
+-#ifdef CONFIG_SGI_IP22
+-/* Horrible hack to have a correct DMA window on IP22 */
+-#include <asm/sgi/mc.h>
+-#define MAX_DMA_ADDRESS (PAGE_OFFSET + SGIMC_SEG0_BADDR + 0x01000000)
++#if defined(CONFIG_SGI_IP22) || defined(CONFIG_SGI_IP28)
++/* don't care; ISA bus master won't work, ISA slave DMA supports 32bit addr */
++#define MAX_DMA_ADDRESS PAGE_OFFSET
+ #else
+ #define MAX_DMA_ADDRESS (PAGE_OFFSET + 0x01000000)
+ #endif
+diff --git a/include/asm-mips/fixmap.h b/include/asm-mips/fixmap.h
+index f27b96c..9cc8522 100644
+--- a/include/asm-mips/fixmap.h
++++ b/include/asm-mips/fixmap.h
+@@ -60,16 +60,6 @@ enum fixed_addresses {
+ __end_of_fixed_addresses
+ };
- /* Skip bad inodes so that recovery can continue */
- iter = ocfs2_iget(p->osb, ino,
-- OCFS2_FI_FLAG_ORPHAN_RECOVERY);
-+ OCFS2_FI_FLAG_ORPHAN_RECOVERY, 0);
- if (IS_ERR(iter))
- return 0;
+-extern void __set_fixmap(enum fixed_addresses idx,
+- unsigned long phys, pgprot_t flags);
+-
+-#define set_fixmap(idx, phys) \
+- __set_fixmap(idx, phys, PAGE_KERNEL)
+-/*
+- * Some hardware wants to get fixmapped without caching.
+- */
+-#define set_fixmap_nocache(idx, phys) \
+- __set_fixmap(idx, phys, PAGE_KERNEL_NOCACHE)
+ /*
+ * used by vmalloc.c.
+ *
+diff --git a/include/asm-mips/fw/cfe/cfe_api.h b/include/asm-mips/fw/cfe/cfe_api.h
+index 1003e71..0995575 100644
+--- a/include/asm-mips/fw/cfe/cfe_api.h
++++ b/include/asm-mips/fw/cfe/cfe_api.h
+@@ -15,49 +15,27 @@
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+-
+-/* *********************************************************************
+- *
+- * Broadcom Common Firmware Environment (CFE)
+- *
+- * Device function prototypes File: cfe_api.h
+- *
+- * This file contains declarations for doing callbacks to
+- * cfe from an application. It should be the only header
+- * needed by the application to use this library
+- *
+- * Authors: Mitch Lichtenberg, Chris Demetriou
+- *
+- ********************************************************************* */
+-
++/*
++ * Broadcom Common Firmware Environment (CFE)
++ *
++ * This file contains declarations for doing callbacks to
++ * cfe from an application. It should be the only header
++ * needed by the application to use this library
++ *
++ * Authors: Mitch Lichtenberg, Chris Demetriou
++ */
+ #ifndef CFE_API_H
+ #define CFE_API_H
-@@ -1277,7 +1280,7 @@ static int ocfs2_queue_orphans(struct ocfs2_super *osb,
- }
+-/*
+- * Apply customizations here for different OSes. These need to:
+- * * typedef uint64_t, int64_t, intptr_t, uintptr_t.
+- * * define cfe_strlen() if use of an existing function is desired.
+- * * define CFE_API_IMPL_NAMESPACE if API functions are to use
+- * names in the implementation namespace.
+- * Also, optionally, if the build environment does not do so automatically,
+- * CFE_API_* can be defined here as desired.
+- */
+-/* Begin customization. */
+ #include <linux/types.h>
+ #include <linux/string.h>
- mutex_lock(&orphan_dir_inode->i_mutex);
-- status = ocfs2_meta_lock(orphan_dir_inode, NULL, 0);
-+ status = ocfs2_inode_lock(orphan_dir_inode, NULL, 0);
- if (status < 0) {
- mlog_errno(status);
- goto out;
-@@ -1293,7 +1296,7 @@ static int ocfs2_queue_orphans(struct ocfs2_super *osb,
- *head = priv.head;
+ typedef long intptr_t;
- out_cluster:
-- ocfs2_meta_unlock(orphan_dir_inode, 0);
-+ ocfs2_inode_unlock(orphan_dir_inode, 0);
- out:
- mutex_unlock(&orphan_dir_inode->i_mutex);
- iput(orphan_dir_inode);
-@@ -1380,10 +1383,10 @@ static int ocfs2_recover_orphans(struct ocfs2_super *osb,
- iter = oi->ip_next_orphan;
+-#define cfe_strlen strlen
- spin_lock(&oi->ip_lock);
-- /* Delete voting may have set these on the assumption
-- * that the other node would wipe them successfully.
-- * If they are still in the node's orphan dir, we need
-- * to reset that state. */
-+ /* The remote delete code may have set these on the
-+ * assumption that the other node would wipe them
-+ * successfully. If they are still in the node's
-+ * orphan dir, we need to reset that state. */
- oi->ip_flags &= ~(OCFS2_INODE_DELETED|OCFS2_INODE_SKIP_DELETE);
+-#define CFE_API_ALL
+-#define CFE_API_STRLEN_CUSTOM
+-/* End customization. */
+-
+-
+-/* *********************************************************************
+- * Constants
+- ********************************************************************* */
++/*
++ * Constants
++ */
- /* Set the proper information to get us going into
-diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
-index 4b32e09..220f3e8 100644
---- a/fs/ocfs2/journal.h
-+++ b/fs/ocfs2/journal.h
-@@ -278,6 +278,12 @@ int ocfs2_journal_dirty_data(handle_t *handle,
- /* simple file updates like chmod, etc. */
- #define OCFS2_INODE_UPDATE_CREDITS 1
+ /* Seal indicating CFE's presence, passed to user program. */
+ #define CFE_EPTSEAL 0x43464531
+@@ -109,54 +87,13 @@ typedef struct {
-+/* group extend. inode update and last group update. */
-+#define OCFS2_GROUP_EXTEND_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1)
-+
-+/* group add. inode update and the new group update. */
-+#define OCFS2_GROUP_ADD_CREDITS (OCFS2_INODE_UPDATE_CREDITS + 1)
-+
- /* get one bit out of a suballocator: dinode + group descriptor +
- * prev. group desc. if we relink. */
- #define OCFS2_SUBALLOC_ALLOC (3)
-diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
-index 58ea88b..add1ffd 100644
---- a/fs/ocfs2/localalloc.c
-+++ b/fs/ocfs2/localalloc.c
-@@ -75,18 +75,12 @@ static int ocfs2_local_alloc_new_window(struct ocfs2_super *osb,
- static int ocfs2_local_alloc_slide_window(struct ocfs2_super *osb,
- struct inode *local_alloc_inode);
--/*
-- * Determine how large our local alloc window should be, in bits.
-- *
-- * These values (and the behavior in ocfs2_alloc_should_use_local) have
-- * been chosen so that most allocations, including new block groups go
-- * through local alloc.
+ /*
+- * cfe_strlen is handled specially: If already defined, it has been
+- * overridden in this environment with a standard strlen-like function.
- */
- static inline int ocfs2_local_alloc_window_bits(struct ocfs2_super *osb)
- {
-- BUG_ON(osb->s_clustersize_bits < 12);
-+ BUG_ON(osb->s_clustersize_bits > 20);
-
-- return 2048 >> (osb->s_clustersize_bits - 12);
-+ /* Size local alloc windows by the megabyte */
-+ return osb->local_alloc_size << (20 - osb->s_clustersize_bits);
- }
+-#ifdef cfe_strlen
+-# define CFE_API_STRLEN_CUSTOM
+-#else
+-# ifdef CFE_API_IMPL_NAMESPACE
+-# define cfe_strlen(a) __cfe_strlen(a)
+-# endif
+-int cfe_strlen(char *name);
+-#endif
+-
+-/*
+ * Defines and prototypes for functions which take no arguments.
+ */
+-#ifdef CFE_API_IMPL_NAMESPACE
+-int64_t __cfe_getticks(void);
+-#define cfe_getticks() __cfe_getticks()
+-#else
+ int64_t cfe_getticks(void);
+-#endif
/*
-@@ -96,18 +90,23 @@ static inline int ocfs2_local_alloc_window_bits(struct ocfs2_super *osb)
- int ocfs2_alloc_should_use_local(struct ocfs2_super *osb, u64 bits)
- {
- int la_bits = ocfs2_local_alloc_window_bits(osb);
-+ int ret = 0;
-
- if (osb->local_alloc_state != OCFS2_LA_ENABLED)
-- return 0;
-+ goto bail;
+ * Defines and prototypes for the rest of the functions.
+ */
+-#ifdef CFE_API_IMPL_NAMESPACE
+-#define cfe_close(a) __cfe_close(a)
+-#define cfe_cpu_start(a, b, c, d, e) __cfe_cpu_start(a, b, c, d, e)
+-#define cfe_cpu_stop(a) __cfe_cpu_stop(a)
+-#define cfe_enumenv(a, b, d, e, f) __cfe_enumenv(a, b, d, e, f)
+-#define cfe_enummem(a, b, c, d, e) __cfe_enummem(a, b, c, d, e)
+-#define cfe_exit(a, b) __cfe_exit(a, b)
+-#define cfe_flushcache(a) __cfe_cacheflush(a)
+-#define cfe_getdevinfo(a) __cfe_getdevinfo(a)
+-#define cfe_getenv(a, b, c) __cfe_getenv(a, b, c)
+-#define cfe_getfwinfo(a) __cfe_getfwinfo(a)
+-#define cfe_getstdhandle(a) __cfe_getstdhandle(a)
+-#define cfe_init(a, b) __cfe_init(a, b)
+-#define cfe_inpstat(a) __cfe_inpstat(a)
+-#define cfe_ioctl(a, b, c, d, e, f) __cfe_ioctl(a, b, c, d, e, f)
+-#define cfe_open(a) __cfe_open(a)
+-#define cfe_read(a, b, c) __cfe_read(a, b, c)
+-#define cfe_readblk(a, b, c, d) __cfe_readblk(a, b, c, d)
+-#define cfe_setenv(a, b) __cfe_setenv(a, b)
+-#define cfe_write(a, b, c) __cfe_write(a, b, c)
+-#define cfe_writeblk(a, b, c, d) __cfe_writeblk(a, b, c, d)
+-#endif /* CFE_API_IMPL_NAMESPACE */
+-
+ int cfe_close(int handle);
+ int cfe_cpu_start(int cpu, void (*fn) (void), long sp, long gp, long a1);
+ int cfe_cpu_stop(int cpu);
+diff --git a/include/asm-mips/fw/cfe/cfe_error.h b/include/asm-mips/fw/cfe/cfe_error.h
+index 975f000..b803746 100644
+--- a/include/asm-mips/fw/cfe/cfe_error.h
++++ b/include/asm-mips/fw/cfe/cfe_error.h
+@@ -16,18 +16,13 @@
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
- /* la_bits should be at least twice the size (in clusters) of
- * a new block group. We want to be sure block group
- * allocations go through the local alloc, so allow an
- * allocation to take up to half the bitmap. */
- if (bits > (la_bits / 2))
-- return 0;
-+ goto bail;
+-/* *********************************************************************
+- *
+- * Broadcom Common Firmware Environment (CFE)
+- *
+- * Error codes File: cfe_error.h
+- *
+- * CFE's global error code list is here.
+- *
+- * Author: Mitch Lichtenberg
+- *
+- ********************************************************************* */
+-
++/*
++ * Broadcom Common Firmware Environment (CFE)
++ *
++ * CFE's global error code list is here.
++ *
++ * Author: Mitch Lichtenberg
++ */
-- return 1;
-+ ret = 1;
-+bail:
-+ mlog(0, "state=%d, bits=%llu, la_bits=%d, ret=%d\n",
-+ osb->local_alloc_state, (unsigned long long)bits, la_bits, ret);
-+ return ret;
- }
+ #define CFE_OK 0
+ #define CFE_ERR -1 /* generic error */
+diff --git a/include/asm-mips/mach-cobalt/cobalt.h b/include/asm-mips/mach-cobalt/cobalt.h
+index a79e7ca..5b9fce7 100644
+--- a/include/asm-mips/mach-cobalt/cobalt.h
++++ b/include/asm-mips/mach-cobalt/cobalt.h
+@@ -1,5 +1,5 @@
+ /*
+- * Lowlevel hardware stuff for the MIPS based Cobalt microservers.
++ * The Cobalt board ID information.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+@@ -12,9 +12,6 @@
+ #ifndef __ASM_COBALT_H
+ #define __ASM_COBALT_H
- int ocfs2_load_local_alloc(struct ocfs2_super *osb)
-@@ -121,6 +120,19 @@ int ocfs2_load_local_alloc(struct ocfs2_super *osb)
+-/*
+- * The Cobalt board ID information.
+- */
+ extern int cobalt_board_id;
- mlog_entry_void();
+ #define COBALT_BRD_ID_QUBE1 0x3
+@@ -22,14 +19,4 @@ extern int cobalt_board_id;
+ #define COBALT_BRD_ID_QUBE2 0x5
+ #define COBALT_BRD_ID_RAQ2 0x6
-+ if (ocfs2_mount_local(osb))
-+ goto bail;
+-#define COBALT_KEY_PORT ((~*(volatile unsigned int *) CKSEG1ADDR(0x1d000000) >> 24) & COBALT_KEY_MASK)
+-# define COBALT_KEY_CLEAR (1 << 1)
+-# define COBALT_KEY_LEFT (1 << 2)
+-# define COBALT_KEY_UP (1 << 3)
+-# define COBALT_KEY_DOWN (1 << 4)
+-# define COBALT_KEY_RIGHT (1 << 5)
+-# define COBALT_KEY_ENTER (1 << 6)
+-# define COBALT_KEY_SELECT (1 << 7)
+-# define COBALT_KEY_MASK 0xfe
+-
+ #endif /* __ASM_COBALT_H */
+diff --git a/include/asm-mips/mach-ip28/cpu-feature-overrides.h b/include/asm-mips/mach-ip28/cpu-feature-overrides.h
+new file mode 100644
+index 0000000..9a53b32
+--- /dev/null
++++ b/include/asm-mips/mach-ip28/cpu-feature-overrides.h
+@@ -0,0 +1,50 @@
++/*
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ *
++ * Copyright (C) 2003 Ralf Baechle
++ * 6/2004 pf
++ */
++#ifndef __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
++#define __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
+
-+ if (osb->local_alloc_size == 0)
-+ goto bail;
++/*
++ * IP28 only comes with R10000 family processors all using the same config
++ */
++#define cpu_has_watch 1
++#define cpu_has_mips16 0
++#define cpu_has_divec 0
++#define cpu_has_vce 0
++#define cpu_has_cache_cdex_p 0
++#define cpu_has_cache_cdex_s 0
++#define cpu_has_prefetch 1
++#define cpu_has_mcheck 0
++#define cpu_has_ejtag 0
+
-+ if (ocfs2_local_alloc_window_bits(osb) >= osb->bitmap_cpg) {
-+ mlog(ML_NOTICE, "Requested local alloc window %d is larger "
-+ "than max possible %u. Using defaults.\n",
-+ ocfs2_local_alloc_window_bits(osb), (osb->bitmap_cpg - 1));
-+ osb->local_alloc_size = OCFS2_DEFAULT_LOCAL_ALLOC_SIZE;
-+ }
++#define cpu_has_llsc 1
++#define cpu_has_vtag_icache 0
++#define cpu_has_dc_aliases 0 /* see probe_pcache() */
++#define cpu_has_ic_fills_f_dc 0
++#define cpu_has_dsp 0
++#define cpu_icache_snoops_remote_store 1
++#define cpu_has_mipsmt 0
++#define cpu_has_userlocal 0
+
- /* read the alloc off disk */
- inode = ocfs2_get_system_file_inode(osb, LOCAL_ALLOC_SYSTEM_INODE,
- osb->slot_num);
-@@ -181,6 +193,9 @@ bail:
- if (inode)
- iput(inode);
++#define cpu_has_nofpuex 0
++#define cpu_has_64bits 1
++
++#define cpu_has_4kex 1
++#define cpu_has_4k_cache 1
++
++#define cpu_has_inclusive_pcaches 1
++
++#define cpu_dcache_line_size() 32
++#define cpu_icache_line_size() 64
++
++#define cpu_has_mips32r1 0
++#define cpu_has_mips32r2 0
++#define cpu_has_mips64r1 0
++#define cpu_has_mips64r2 0
++
++#endif /* __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H */
+diff --git a/include/asm-mips/mach-ip28/ds1286.h b/include/asm-mips/mach-ip28/ds1286.h
+new file mode 100644
+index 0000000..471bb9a
+--- /dev/null
++++ b/include/asm-mips/mach-ip28/ds1286.h
+@@ -0,0 +1,4 @@
++#ifndef __ASM_MACH_IP28_DS1286_H
++#define __ASM_MACH_IP28_DS1286_H
++#include <asm/mach-ip22/ds1286.h>
++#endif /* __ASM_MACH_IP28_DS1286_H */
+diff --git a/include/asm-mips/mach-ip28/spaces.h b/include/asm-mips/mach-ip28/spaces.h
+new file mode 100644
+index 0000000..05aabb2
+--- /dev/null
++++ b/include/asm-mips/mach-ip28/spaces.h
+@@ -0,0 +1,22 @@
++/*
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ *
++ * Copyright (C) 1994 - 1999, 2000, 03, 04 Ralf Baechle
++ * Copyright (C) 2000, 2002 Maciej W. Rozycki
++ * Copyright (C) 1990, 1999, 2000 Silicon Graphics, Inc.
++ * 2004 pf
++ */
++#ifndef _ASM_MACH_IP28_SPACES_H
++#define _ASM_MACH_IP28_SPACES_H
++
++#define CAC_BASE 0xa800000000000000
++
++#define HIGHMEM_START (~0UL)
++
++#define PHYS_OFFSET _AC(0x20000000, UL)
++
++#include <asm/mach-generic/spaces.h>
++
++#endif /* _ASM_MACH_IP28_SPACES_H */
+diff --git a/include/asm-mips/mach-ip28/war.h b/include/asm-mips/mach-ip28/war.h
+new file mode 100644
+index 0000000..a1baafa
+--- /dev/null
++++ b/include/asm-mips/mach-ip28/war.h
+@@ -0,0 +1,25 @@
++/*
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ *
++ * Copyright (C) 2002, 2004, 2007 by Ralf Baechle <ralf at linux-mips.org>
++ */
++#ifndef __ASM_MIPS_MACH_IP28_WAR_H
++#define __ASM_MIPS_MACH_IP28_WAR_H
++
++#define R4600_V1_INDEX_ICACHEOP_WAR 0
++#define R4600_V1_HIT_CACHEOP_WAR 0
++#define R4600_V2_HIT_CACHEOP_WAR 0
++#define R5432_CP0_INTERRUPT_WAR 0
++#define BCM1250_M3_WAR 0
++#define SIBYTE_1956_WAR 0
++#define MIPS4K_ICACHE_REFILL_WAR 0
++#define MIPS_CACHE_SYNC_WAR 0
++#define TX49XX_ICACHE_INDEX_INV_WAR 0
++#define RM9000_CDEX_SMP_WAR 0
++#define ICACHE_REFILLS_WORKAROUND_WAR 0
++#define R10000_LLSC_WAR 1
++#define MIPS34K_MISSED_ITLB_WAR 0
++
++#endif /* __ASM_MIPS_MACH_IP28_WAR_H */
+diff --git a/include/asm-mips/mach-qemu/cpu-feature-overrides.h b/include/asm-mips/mach-qemu/cpu-feature-overrides.h
+deleted file mode 100644
+index d2daaed..0000000
+--- a/include/asm-mips/mach-qemu/cpu-feature-overrides.h
++++ /dev/null
+@@ -1,32 +0,0 @@
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * Copyright (C) 2003, 07 Ralf Baechle
+- */
+-#ifndef __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H
+-#define __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H
+-
+-/*
+- * QEMU only comes with a hazard-free MIPS32 processor, so things are easy.
+- */
+-#define cpu_has_mips16 0
+-#define cpu_has_divec 0
+-#define cpu_has_cache_cdex_p 0
+-#define cpu_has_prefetch 0
+-#define cpu_has_mcheck 0
+-#define cpu_has_ejtag 0
+-
+-#define cpu_has_llsc 1
+-#define cpu_has_vtag_icache 0
+-#define cpu_has_dc_aliases 0
+-#define cpu_has_ic_fills_f_dc 0
+-
+-#define cpu_has_dsp 0
+-#define cpu_has_mipsmt 0
+-
+-#define cpu_has_nofpuex 0
+-#define cpu_has_64bits 0
+-
+-#endif /* __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H */
+diff --git a/include/asm-mips/mach-qemu/war.h b/include/asm-mips/mach-qemu/war.h
+deleted file mode 100644
+index 0eaf0c5..0000000
+--- a/include/asm-mips/mach-qemu/war.h
++++ /dev/null
+@@ -1,25 +0,0 @@
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * Copyright (C) 2002, 2004, 2007 by Ralf Baechle <ralf at linux-mips.org>
+- */
+-#ifndef __ASM_MIPS_MACH_QEMU_WAR_H
+-#define __ASM_MIPS_MACH_QEMU_WAR_H
+-
+-#define R4600_V1_INDEX_ICACHEOP_WAR 0
+-#define R4600_V1_HIT_CACHEOP_WAR 0
+-#define R4600_V2_HIT_CACHEOP_WAR 0
+-#define R5432_CP0_INTERRUPT_WAR 0
+-#define BCM1250_M3_WAR 0
+-#define SIBYTE_1956_WAR 0
+-#define MIPS4K_ICACHE_REFILL_WAR 0
+-#define MIPS_CACHE_SYNC_WAR 0
+-#define TX49XX_ICACHE_INDEX_INV_WAR 0
+-#define RM9000_CDEX_SMP_WAR 0
+-#define ICACHE_REFILLS_WORKAROUND_WAR 0
+-#define R10000_LLSC_WAR 0
+-#define MIPS34K_MISSED_ITLB_WAR 0
+-
+-#endif /* __ASM_MIPS_MACH_QEMU_WAR_H */
+diff --git a/include/asm-mips/mips-boards/generic.h b/include/asm-mips/mips-boards/generic.h
+index d589774..1c39d33 100644
+--- a/include/asm-mips/mips-boards/generic.h
++++ b/include/asm-mips/mips-boards/generic.h
+@@ -97,10 +97,16 @@ extern int mips_revision_corid;
-+ mlog(0, "Local alloc window bits = %d\n",
-+ ocfs2_local_alloc_window_bits(osb));
-+
- mlog_exit(status);
- return status;
- }
-@@ -231,7 +246,7 @@ void ocfs2_shutdown_local_alloc(struct ocfs2_super *osb)
+ extern int mips_revision_sconid;
- mutex_lock(&main_bm_inode->i_mutex);
++extern void mips_reboot_setup(void);
++
+ #ifdef CONFIG_PCI
+ extern void mips_pcibios_init(void);
+ #else
+ #define mips_pcibios_init() do { } while (0)
+ #endif
-- status = ocfs2_meta_lock(main_bm_inode, &main_bm_bh, 1);
-+ status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
- if (status < 0) {
- mlog_errno(status);
- goto out_mutex;
-@@ -286,7 +301,7 @@ out_unlock:
- if (main_bm_bh)
- brelse(main_bm_bh);
++#ifdef CONFIG_KGDB
++extern void kgdb_config(void);
++#endif
++
+ #endif /* __ASM_MIPS_BOARDS_GENERIC_H */
+diff --git a/include/asm-mips/mipsprom.h b/include/asm-mips/mipsprom.h
+index ce7cff7..146d41b 100644
+--- a/include/asm-mips/mipsprom.h
++++ b/include/asm-mips/mipsprom.h
+@@ -71,4 +71,6 @@
+ #define PROM_NV_GET 53 /* XXX */
+ #define PROM_NV_SET 54 /* XXX */
-- ocfs2_meta_unlock(main_bm_inode, 1);
-+ ocfs2_inode_unlock(main_bm_inode, 1);
++extern char *prom_getenv(char *);
++
+ #endif /* __ASM_MIPS_PROM_H */
+diff --git a/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h b/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
+index 0b56f55..603eb73 100644
+--- a/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
++++ b/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
+@@ -585,11 +585,7 @@
+ * UART defines *
+ ***************************************************************************
+ */
+-#ifndef CONFIG_MSP_FPGA
+ #define MSP_BASE_BAUD 25000000
+-#else
+-#define MSP_BASE_BAUD 6000000
+-#endif
+ #define MSP_UART_REG_LEN 0x20
- out_mutex:
- mutex_unlock(&main_bm_inode->i_mutex);
-@@ -399,7 +414,7 @@ int ocfs2_complete_local_alloc_recovery(struct ocfs2_super *osb,
+ /*
+diff --git a/include/asm-mips/r4kcache.h b/include/asm-mips/r4kcache.h
+index 2b8466f..4c140db 100644
+--- a/include/asm-mips/r4kcache.h
++++ b/include/asm-mips/r4kcache.h
+@@ -403,6 +403,13 @@ __BUILD_BLAST_CACHE(i, icache, Index_Invalidate_I, Hit_Invalidate_I, 64)
+ __BUILD_BLAST_CACHE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 64)
+ __BUILD_BLAST_CACHE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 128)
- mutex_lock(&main_bm_inode->i_mutex);
++__BUILD_BLAST_CACHE(inv_d, dcache, Index_Writeback_Inv_D, Hit_Invalidate_D, 16)
++__BUILD_BLAST_CACHE(inv_d, dcache, Index_Writeback_Inv_D, Hit_Invalidate_D, 32)
++__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 16)
++__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 32)
++__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 64)
++__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 128)
++
+ /* build blast_xxx_range, protected_blast_xxx_range */
+ #define __BUILD_BLAST_CACHE_RANGE(pfx, desc, hitop, prot) \
+ static inline void prot##blast_##pfx##cache##_range(unsigned long start, \
+diff --git a/include/asm-mips/sgi/ioc.h b/include/asm-mips/sgi/ioc.h
+index f3e3dc9..343ed15 100644
+--- a/include/asm-mips/sgi/ioc.h
++++ b/include/asm-mips/sgi/ioc.h
+@@ -138,8 +138,8 @@ struct sgioc_regs {
+ u8 _sysid[3];
+ volatile u8 sysid;
+ #define SGIOC_SYSID_FULLHOUSE 0x01
+-#define SGIOC_SYSID_BOARDREV(x) ((x & 0xe0) > 5)
+-#define SGIOC_SYSID_CHIPREV(x) ((x & 0x1e) > 1)
++#define SGIOC_SYSID_BOARDREV(x) (((x) & 0x1e) >> 1)
++#define SGIOC_SYSID_CHIPREV(x) (((x) & 0xe0) >> 5)
+ u32 _unused2;
+ u8 _read[3];
+ volatile u8 read;
+diff --git a/include/asm-mips/sibyte/board.h b/include/asm-mips/sibyte/board.h
+index da198a1..25372ae 100644
+--- a/include/asm-mips/sibyte/board.h
++++ b/include/asm-mips/sibyte/board.h
+@@ -19,10 +19,8 @@
+ #ifndef _SIBYTE_BOARD_H
+ #define _SIBYTE_BOARD_H
-- status = ocfs2_meta_lock(main_bm_inode, &main_bm_bh, 1);
-+ status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
- if (status < 0) {
- mlog_errno(status);
- goto out_mutex;
-@@ -424,7 +439,7 @@ int ocfs2_complete_local_alloc_recovery(struct ocfs2_super *osb,
- ocfs2_commit_trans(osb, handle);
+-#if defined(CONFIG_SIBYTE_SWARM) || defined(CONFIG_SIBYTE_PTSWARM) || \
+- defined(CONFIG_SIBYTE_PT1120) || defined(CONFIG_SIBYTE_PT1125) || \
+- defined(CONFIG_SIBYTE_CRHONE) || defined(CONFIG_SIBYTE_CRHINE) || \
+- defined(CONFIG_SIBYTE_LITTLESUR)
++#if defined(CONFIG_SIBYTE_SWARM) || defined(CONFIG_SIBYTE_CRHONE) || \
++ defined(CONFIG_SIBYTE_CRHINE) || defined(CONFIG_SIBYTE_LITTLESUR)
+ #include <asm/sibyte/swarm.h>
+ #endif
- out_unlock:
-- ocfs2_meta_unlock(main_bm_inode, 1);
-+ ocfs2_inode_unlock(main_bm_inode, 1);
+diff --git a/include/asm-mips/sibyte/sb1250.h b/include/asm-mips/sibyte/sb1250.h
+index 0dad844..80c1a05 100644
+--- a/include/asm-mips/sibyte/sb1250.h
++++ b/include/asm-mips/sibyte/sb1250.h
+@@ -48,12 +48,10 @@ extern unsigned int zbbus_mhz;
+ extern void sb1250_time_init(void);
+ extern void sb1250_mask_irq(int cpu, int irq);
+ extern void sb1250_unmask_irq(int cpu, int irq);
+-extern void sb1250_smp_finish(void);
- out_mutex:
- mutex_unlock(&main_bm_inode->i_mutex);
-@@ -521,6 +536,9 @@ bail:
- iput(local_alloc_inode);
- }
+ extern void bcm1480_time_init(void);
+ extern void bcm1480_mask_irq(int cpu, int irq);
+ extern void bcm1480_unmask_irq(int cpu, int irq);
+-extern void bcm1480_smp_finish(void);
-+ mlog(0, "bits=%d, slot=%d, ret=%d\n", bits_wanted, osb->slot_num,
-+ status);
-+
- mlog_exit(status);
- return status;
- }
-diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
+ #define AT_spin \
+ __asm__ __volatile__ ( \
+diff --git a/include/asm-mips/sibyte/swarm.h b/include/asm-mips/sibyte/swarm.h
+index 540865f..114d9d2 100644
+--- a/include/asm-mips/sibyte/swarm.h
++++ b/include/asm-mips/sibyte/swarm.h
+@@ -26,24 +26,6 @@
+ #define SIBYTE_HAVE_PCMCIA 1
+ #define SIBYTE_HAVE_IDE 1
+ #endif
+-#ifdef CONFIG_SIBYTE_PTSWARM
+-#define SIBYTE_BOARD_NAME "PTSWARM"
+-#define SIBYTE_HAVE_PCMCIA 1
+-#define SIBYTE_HAVE_IDE 1
+-#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
+-#endif
+-#ifdef CONFIG_SIBYTE_PT1120
+-#define SIBYTE_BOARD_NAME "PT1120"
+-#define SIBYTE_HAVE_PCMCIA 1
+-#define SIBYTE_HAVE_IDE 1
+-#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
+-#endif
+-#ifdef CONFIG_SIBYTE_PT1125
+-#define SIBYTE_BOARD_NAME "PT1125"
+-#define SIBYTE_HAVE_PCMCIA 1
+-#define SIBYTE_HAVE_IDE 1
+-#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
+-#endif
+ #ifdef CONFIG_SIBYTE_LITTLESUR
+ #define SIBYTE_BOARD_NAME "BCM91250C2 (LittleSur)"
+ #define SIBYTE_HAVE_PCMCIA 0
+diff --git a/include/asm-mips/smp-ops.h b/include/asm-mips/smp-ops.h
new file mode 100644
-index 0000000..203f871
+index 0000000..b17fdfb
--- /dev/null
-+++ b/fs/ocfs2/locks.c
-@@ -0,0 +1,125 @@
-+/* -*- mode: c; c-basic-offset: 8; -*-
-+ * vim: noexpandtab sw=8 ts=8 sts=0:
-+ *
-+ * locks.c
-+ *
-+ * Userspace file locking support
-+ *
-+ * Copyright (C) 2007 Oracle. All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
++++ b/include/asm-mips/smp-ops.h
+@@ -0,0 +1,56 @@
++/*
++ * This file is subject to the terms and conditions of the GNU General
++ * Public License. See the file "COPYING" in the main directory of this
++ * archive for more details.
+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this program; if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 021110-1307, USA.
++ * Copyright (C) 2000 - 2001 by Kanoj Sarcar (kanoj at sgi.com)
++ * Copyright (C) 2000 - 2001 by Silicon Graphics, Inc.
++ * Copyright (C) 2000, 2001, 2002 Ralf Baechle
++ * Copyright (C) 2000, 2001 Broadcom Corporation
+ */
++#ifndef __ASM_SMP_OPS_H
++#define __ASM_SMP_OPS_H
+
-+#include <linux/fs.h>
++#ifdef CONFIG_SMP
+
-+#define MLOG_MASK_PREFIX ML_INODE
-+#include <cluster/masklog.h>
++#include <linux/cpumask.h>
+
-+#include "ocfs2.h"
++struct plat_smp_ops {
++ void (*send_ipi_single)(int cpu, unsigned int action);
++ void (*send_ipi_mask)(cpumask_t mask, unsigned int action);
++ void (*init_secondary)(void);
++ void (*smp_finish)(void);
++ void (*cpus_done)(void);
++ void (*boot_secondary)(int cpu, struct task_struct *idle);
++ void (*smp_setup)(void);
++ void (*prepare_cpus)(unsigned int max_cpus);
++};
+
-+#include "dlmglue.h"
-+#include "file.h"
-+#include "locks.h"
++extern void register_smp_ops(struct plat_smp_ops *ops);
+
-+static int ocfs2_do_flock(struct file *file, struct inode *inode,
-+ int cmd, struct file_lock *fl)
++static inline void plat_smp_setup(void)
+{
-+ int ret = 0, level = 0, trylock = 0;
-+ struct ocfs2_file_private *fp = file->private_data;
-+ struct ocfs2_lock_res *lockres = &fp->fp_flock;
-+
-+ if (fl->fl_type == F_WRLCK)
-+ level = 1;
-+ if (!IS_SETLKW(cmd))
-+ trylock = 1;
-+
-+ mutex_lock(&fp->fp_mutex);
-+
-+ if (lockres->l_flags & OCFS2_LOCK_ATTACHED &&
-+ lockres->l_level > LKM_NLMODE) {
-+ int old_level = 0;
-+
-+ if (lockres->l_level == LKM_EXMODE)
-+ old_level = 1;
-+
-+ if (level == old_level)
-+ goto out;
-+
-+ /*
-+ * Converting an existing lock is not guaranteed to be
-+ * atomic, so we can get away with simply unlocking
-+ * here and allowing the lock code to try at the new
-+ * level.
-+ */
-+
-+ flock_lock_file_wait(file,
-+ &(struct file_lock){.fl_type = F_UNLCK});
-+
-+ ocfs2_file_unlock(file);
-+ }
-+
-+ ret = ocfs2_file_lock(file, level, trylock);
-+ if (ret) {
-+ if (ret == -EAGAIN && trylock)
-+ ret = -EWOULDBLOCK;
-+ else
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+
-+ ret = flock_lock_file_wait(file, fl);
-+
-+out:
-+ mutex_unlock(&fp->fp_mutex);
++ extern struct plat_smp_ops *mp_ops; /* private */
+
-+ return ret;
++ mp_ops->smp_setup();
+}
+
-+static int ocfs2_do_funlock(struct file *file, int cmd, struct file_lock *fl)
-+{
-+ int ret;
-+ struct ocfs2_file_private *fp = file->private_data;
++#else /* !CONFIG_SMP */
+
-+ mutex_lock(&fp->fp_mutex);
-+ ocfs2_file_unlock(file);
-+ ret = flock_lock_file_wait(file, fl);
-+ mutex_unlock(&fp->fp_mutex);
++struct plat_smp_ops;
+
-+ return ret;
++static inline void plat_smp_setup(void)
++{
++ /* UP, nothing to do ... */
+}
+
-+/*
-+ * Overall flow of ocfs2_flock() was influenced by gfs2_flock().
-+ */
-+int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl)
++static inline void register_smp_ops(struct plat_smp_ops *ops)
+{
-+ struct inode *inode = file->f_mapping->host;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+
-+ if (!(fl->fl_flags & FL_FLOCK))
-+ return -ENOLCK;
-+ if (__mandatory_lock(inode))
-+ return -ENOLCK;
-+
-+ if ((osb->s_mount_opt & OCFS2_MOUNT_LOCALFLOCKS) ||
-+ ocfs2_mount_local(osb))
-+ return flock_lock_file_wait(file, fl);
-+
-+ if (fl->fl_type == F_UNLCK)
-+ return ocfs2_do_funlock(file, cmd, fl);
-+ else
-+ return ocfs2_do_flock(file, inode, cmd, fl);
+}
-diff --git a/fs/ocfs2/locks.h b/fs/ocfs2/locks.h
-new file mode 100644
-index 0000000..9743ef2
---- /dev/null
-+++ b/fs/ocfs2/locks.h
-@@ -0,0 +1,31 @@
-+/* -*- mode: c; c-basic-offset: 8; -*-
-+ * vim: noexpandtab sw=8 ts=8 sts=0:
-+ *
-+ * locks.h
-+ *
-+ * Function prototypes for Userspace file locking support
-+ *
-+ * Copyright (C) 2002, 2004 Oracle. All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this program; if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 021110-1307, USA.
-+ */
+
-+#ifndef OCFS2_LOCKS_H
-+#define OCFS2_LOCKS_H
++#endif /* !CONFIG_SMP */
+
-+int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl);
++extern struct plat_smp_ops up_smp_ops;
++extern struct plat_smp_ops vsmp_smp_ops;
+
-+#endif /* OCFS2_LOCKS_H */
-diff --git a/fs/ocfs2/mmap.c b/fs/ocfs2/mmap.c
-index 9875615..3dc18d6 100644
---- a/fs/ocfs2/mmap.c
-+++ b/fs/ocfs2/mmap.c
-@@ -168,7 +168,7 @@ static int ocfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
- * node. Taking the data lock will also ensure that we don't
- * attempt page truncation as part of a downconvert.
- */
-- ret = ocfs2_meta_lock(inode, &di_bh, 1);
-+ ret = ocfs2_inode_lock(inode, &di_bh, 1);
- if (ret < 0) {
- mlog_errno(ret);
- goto out;
-@@ -181,21 +181,12 @@ static int ocfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
- */
- down_write(&OCFS2_I(inode)->ip_alloc_sem);
++#endif /* __ASM_SMP_OPS_H */
+diff --git a/include/asm-mips/smp.h b/include/asm-mips/smp.h
+index dc77002..84fef1a 100644
+--- a/include/asm-mips/smp.h
++++ b/include/asm-mips/smp.h
+@@ -11,14 +11,16 @@
+ #ifndef __ASM_SMP_H
+ #define __ASM_SMP_H
-- ret = ocfs2_data_lock(inode, 1);
-- if (ret < 0) {
-- mlog_errno(ret);
-- goto out_meta_unlock;
-- }
-
- ret = __ocfs2_page_mkwrite(inode, di_bh, page);
-
-- ocfs2_data_unlock(inode, 1);
+-#ifdef CONFIG_SMP
-
--out_meta_unlock:
- up_write(&OCFS2_I(inode)->ip_alloc_sem);
-
- brelse(di_bh);
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
-
- out:
- ret2 = ocfs2_vm_op_unblock_sigs(&oldset);
-@@ -214,13 +205,13 @@ int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)
- {
- int ret = 0, lock_level = 0;
-
-- ret = ocfs2_meta_lock_atime(file->f_dentry->d_inode,
-+ ret = ocfs2_inode_lock_atime(file->f_dentry->d_inode,
- file->f_vfsmnt, &lock_level);
- if (ret < 0) {
- mlog_errno(ret);
- goto out;
- }
-- ocfs2_meta_unlock(file->f_dentry->d_inode, lock_level);
-+ ocfs2_inode_unlock(file->f_dentry->d_inode, lock_level);
- out:
- vma->vm_ops = &ocfs2_file_vm_ops;
- vma->vm_flags |= VM_CAN_NONLINEAR;
-diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
-index 989ac27..ae9ad95 100644
---- a/fs/ocfs2/namei.c
-+++ b/fs/ocfs2/namei.c
-@@ -60,7 +60,6 @@
- #include "symlink.h"
- #include "sysfile.h"
- #include "uptodate.h"
--#include "vote.h"
-
- #include "buffer_head_io.h"
-
-@@ -116,7 +115,7 @@ static struct dentry *ocfs2_lookup(struct inode *dir, struct dentry *dentry,
- mlog(0, "find name %.*s in directory %llu\n", dentry->d_name.len,
- dentry->d_name.name, (unsigned long long)OCFS2_I(dir)->ip_blkno);
-
-- status = ocfs2_meta_lock(dir, NULL, 0);
-+ status = ocfs2_inode_lock(dir, NULL, 0);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -129,7 +128,7 @@ static struct dentry *ocfs2_lookup(struct inode *dir, struct dentry *dentry,
- if (status < 0)
- goto bail_add;
+ #include <linux/bitops.h>
+ #include <linux/linkage.h>
+ #include <linux/threads.h>
+ #include <linux/cpumask.h>
++
+ #include <asm/atomic.h>
++#include <asm/smp-ops.h>
++
++extern int smp_num_siblings;
++extern cpumask_t cpu_sibling_map[];
-- inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0);
-+ inode = ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0);
- if (IS_ERR(inode)) {
- ret = ERR_PTR(-EACCES);
- goto bail_unlock;
-@@ -176,8 +175,8 @@ bail_unlock:
- /* Don't drop the cluster lock until *after* the d_add --
- * unlink on another node will message us to remove that
- * dentry under this lock so otherwise we can race this with
-- * the vote thread and have a stale dentry. */
-- ocfs2_meta_unlock(dir, 0);
-+ * the downconvert thread and have a stale dentry. */
-+ ocfs2_inode_unlock(dir, 0);
+ #define raw_smp_processor_id() (current_thread_info()->cpu)
- bail:
+@@ -49,56 +51,6 @@ extern struct call_data_struct *call_data;
+ extern cpumask_t phys_cpu_present_map;
+ #define cpu_possible_map phys_cpu_present_map
-@@ -209,7 +208,7 @@ static int ocfs2_mknod(struct inode *dir,
- /* get our super block */
- osb = OCFS2_SB(dir->i_sb);
+-/*
+- * These are defined by the board-specific code.
+- */
+-
+-/*
+- * Cause the function described by call_data to be executed on the passed
+- * cpu. When the function has finished, increment the finished field of
+- * call_data.
+- */
+-extern void core_send_ipi(int cpu, unsigned int action);
+-
+-static inline void core_send_ipi_mask(cpumask_t mask, unsigned int action)
+-{
+- unsigned int i;
+-
+- for_each_cpu_mask(i, mask)
+- core_send_ipi(i, action);
+-}
+-
+-
+-/*
+- * Firmware CPU startup hook
+- */
+-extern void prom_boot_secondary(int cpu, struct task_struct *idle);
+-
+-/*
+- * After we've done initial boot, this function is called to allow the
+- * board code to clean up state, if needed
+- */
+-extern void prom_init_secondary(void);
+-
+-/*
+- * Populate cpu_possible_map before smp_init, called from setup_arch.
+- */
+-extern void plat_smp_setup(void);
+-
+-/*
+- * Called in smp_prepare_cpus.
+- */
+-extern void plat_prepare_cpus(unsigned int max_cpus);
+-
+-/*
+- * Last chance for the board code to finish SMP initialization before
+- * the CPU is "online".
+- */
+-extern void prom_smp_finish(void);
+-
+-/* Hook for after all CPUs are online */
+-extern void prom_cpus_done(void);
+-
+ extern void asmlinkage smp_bootstrap(void);
-- status = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
-+ status = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -323,7 +322,7 @@ leave:
- if (handle)
- ocfs2_commit_trans(osb, handle);
+ /*
+@@ -108,11 +60,11 @@ extern void asmlinkage smp_bootstrap(void);
+ */
+ static inline void smp_send_reschedule(int cpu)
+ {
+- core_send_ipi(cpu, SMP_RESCHEDULE_YOURSELF);
++ extern struct plat_smp_ops *mp_ops; /* private */
++
++ mp_ops->send_ipi_single(cpu, SMP_RESCHEDULE_YOURSELF);
+ }
-- ocfs2_meta_unlock(dir, 1);
-+ ocfs2_inode_unlock(dir, 1);
+ extern asmlinkage void smp_call_function_interrupt(void);
- if (status == -ENOSPC)
- mlog(0, "Disk is full\n");
-@@ -553,7 +552,7 @@ static int ocfs2_link(struct dentry *old_dentry,
- if (S_ISDIR(inode->i_mode))
- return -EPERM;
+-#endif /* CONFIG_SMP */
+-
+ #endif /* __ASM_SMP_H */
+diff --git a/include/asm-mips/sni.h b/include/asm-mips/sni.h
+index af08145..e716447 100644
+--- a/include/asm-mips/sni.h
++++ b/include/asm-mips/sni.h
+@@ -35,23 +35,23 @@ extern unsigned int sni_brd_type;
+ #define SNI_CPU_M8050 0x0b
+ #define SNI_CPU_M8053 0x0d
-- err = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
-+ err = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
- if (err < 0) {
- if (err != -ENOENT)
- mlog_errno(err);
-@@ -578,7 +577,7 @@ static int ocfs2_link(struct dentry *old_dentry,
- goto out;
- }
+-#define SNI_PORT_BASE 0xb4000000
++#define SNI_PORT_BASE CKSEG1ADDR(0xb4000000)
-- err = ocfs2_meta_lock(inode, &fe_bh, 1);
-+ err = ocfs2_inode_lock(inode, &fe_bh, 1);
- if (err < 0) {
- if (err != -ENOENT)
- mlog_errno(err);
-@@ -643,10 +642,10 @@ static int ocfs2_link(struct dentry *old_dentry,
- out_commit:
- ocfs2_commit_trans(osb, handle);
- out_unlock_inode:
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+ #ifndef __MIPSEL__
+ /*
+ * ASIC PCI registers for big endian configuration.
+ */
+-#define PCIMT_UCONF 0xbfff0004
+-#define PCIMT_IOADTIMEOUT2 0xbfff000c
+-#define PCIMT_IOMEMCONF 0xbfff0014
+-#define PCIMT_IOMMU 0xbfff001c
+-#define PCIMT_IOADTIMEOUT1 0xbfff0024
+-#define PCIMT_DMAACCESS 0xbfff002c
+-#define PCIMT_DMAHIT 0xbfff0034
+-#define PCIMT_ERRSTATUS 0xbfff003c
+-#define PCIMT_ERRADDR 0xbfff0044
+-#define PCIMT_SYNDROME 0xbfff004c
+-#define PCIMT_ITPEND 0xbfff0054
++#define PCIMT_UCONF CKSEG1ADDR(0xbfff0004)
++#define PCIMT_IOADTIMEOUT2 CKSEG1ADDR(0xbfff000c)
++#define PCIMT_IOMEMCONF CKSEG1ADDR(0xbfff0014)
++#define PCIMT_IOMMU CKSEG1ADDR(0xbfff001c)
++#define PCIMT_IOADTIMEOUT1 CKSEG1ADDR(0xbfff0024)
++#define PCIMT_DMAACCESS CKSEG1ADDR(0xbfff002c)
++#define PCIMT_DMAHIT CKSEG1ADDR(0xbfff0034)
++#define PCIMT_ERRSTATUS CKSEG1ADDR(0xbfff003c)
++#define PCIMT_ERRADDR CKSEG1ADDR(0xbfff0044)
++#define PCIMT_SYNDROME CKSEG1ADDR(0xbfff004c)
++#define PCIMT_ITPEND CKSEG1ADDR(0xbfff0054)
+ #define IT_INT2 0x01
+ #define IT_INTD 0x02
+ #define IT_INTC 0x04
+@@ -60,32 +60,32 @@ extern unsigned int sni_brd_type;
+ #define IT_EISA 0x20
+ #define IT_SCSI 0x40
+ #define IT_ETH 0x80
+-#define PCIMT_IRQSEL 0xbfff005c
+-#define PCIMT_TESTMEM 0xbfff0064
+-#define PCIMT_ECCREG 0xbfff006c
+-#define PCIMT_CONFIG_ADDRESS 0xbfff0074
+-#define PCIMT_ASIC_ID 0xbfff007c /* read */
+-#define PCIMT_SOFT_RESET 0xbfff007c /* write */
+-#define PCIMT_PIA_OE 0xbfff0084
+-#define PCIMT_PIA_DATAOUT 0xbfff008c
+-#define PCIMT_PIA_DATAIN 0xbfff0094
+-#define PCIMT_CACHECONF 0xbfff009c
+-#define PCIMT_INVSPACE 0xbfff00a4
++#define PCIMT_IRQSEL CKSEG1ADDR(0xbfff005c)
++#define PCIMT_TESTMEM CKSEG1ADDR(0xbfff0064)
++#define PCIMT_ECCREG CKSEG1ADDR(0xbfff006c)
++#define PCIMT_CONFIG_ADDRESS CKSEG1ADDR(0xbfff0074)
++#define PCIMT_ASIC_ID CKSEG1ADDR(0xbfff007c) /* read */
++#define PCIMT_SOFT_RESET CKSEG1ADDR(0xbfff007c) /* write */
++#define PCIMT_PIA_OE CKSEG1ADDR(0xbfff0084)
++#define PCIMT_PIA_DATAOUT CKSEG1ADDR(0xbfff008c)
++#define PCIMT_PIA_DATAIN CKSEG1ADDR(0xbfff0094)
++#define PCIMT_CACHECONF CKSEG1ADDR(0xbfff009c)
++#define PCIMT_INVSPACE CKSEG1ADDR(0xbfff00a4)
+ #else
+ /*
+ * ASIC PCI registers for little endian configuration.
+ */
+-#define PCIMT_UCONF 0xbfff0000
+-#define PCIMT_IOADTIMEOUT2 0xbfff0008
+-#define PCIMT_IOMEMCONF 0xbfff0010
+-#define PCIMT_IOMMU 0xbfff0018
+-#define PCIMT_IOADTIMEOUT1 0xbfff0020
+-#define PCIMT_DMAACCESS 0xbfff0028
+-#define PCIMT_DMAHIT 0xbfff0030
+-#define PCIMT_ERRSTATUS 0xbfff0038
+-#define PCIMT_ERRADDR 0xbfff0040
+-#define PCIMT_SYNDROME 0xbfff0048
+-#define PCIMT_ITPEND 0xbfff0050
++#define PCIMT_UCONF CKSEG1ADDR(0xbfff0000)
++#define PCIMT_IOADTIMEOUT2 CKSEG1ADDR(0xbfff0008)
++#define PCIMT_IOMEMCONF CKSEG1ADDR(0xbfff0010)
++#define PCIMT_IOMMU CKSEG1ADDR(0xbfff0018)
++#define PCIMT_IOADTIMEOUT1 CKSEG1ADDR(0xbfff0020)
++#define PCIMT_DMAACCESS CKSEG1ADDR(0xbfff0028)
++#define PCIMT_DMAHIT CKSEG1ADDR(0xbfff0030)
++#define PCIMT_ERRSTATUS CKSEG1ADDR(0xbfff0038)
++#define PCIMT_ERRADDR CKSEG1ADDR(0xbfff0040)
++#define PCIMT_SYNDROME CKSEG1ADDR(0xbfff0048)
++#define PCIMT_ITPEND CKSEG1ADDR(0xbfff0050)
+ #define IT_INT2 0x01
+ #define IT_INTD 0x02
+ #define IT_INTC 0x04
+@@ -94,20 +94,20 @@ extern unsigned int sni_brd_type;
+ #define IT_EISA 0x20
+ #define IT_SCSI 0x40
+ #define IT_ETH 0x80
+-#define PCIMT_IRQSEL 0xbfff0058
+-#define PCIMT_TESTMEM 0xbfff0060
+-#define PCIMT_ECCREG 0xbfff0068
+-#define PCIMT_CONFIG_ADDRESS 0xbfff0070
+-#define PCIMT_ASIC_ID 0xbfff0078 /* read */
+-#define PCIMT_SOFT_RESET 0xbfff0078 /* write */
+-#define PCIMT_PIA_OE 0xbfff0080
+-#define PCIMT_PIA_DATAOUT 0xbfff0088
+-#define PCIMT_PIA_DATAIN 0xbfff0090
+-#define PCIMT_CACHECONF 0xbfff0098
+-#define PCIMT_INVSPACE 0xbfff00a0
++#define PCIMT_IRQSEL CKSEG1ADDR(0xbfff0058)
++#define PCIMT_TESTMEM CKSEG1ADDR(0xbfff0060)
++#define PCIMT_ECCREG CKSEG1ADDR(0xbfff0068)
++#define PCIMT_CONFIG_ADDRESS CKSEG1ADDR(0xbfff0070)
++#define PCIMT_ASIC_ID CKSEG1ADDR(0xbfff0078) /* read */
++#define PCIMT_SOFT_RESET CKSEG1ADDR(0xbfff0078) /* write */
++#define PCIMT_PIA_OE CKSEG1ADDR(0xbfff0080)
++#define PCIMT_PIA_DATAOUT CKSEG1ADDR(0xbfff0088)
++#define PCIMT_PIA_DATAIN CKSEG1ADDR(0xbfff0090)
++#define PCIMT_CACHECONF CKSEG1ADDR(0xbfff0098)
++#define PCIMT_INVSPACE CKSEG1ADDR(0xbfff00a0)
+ #endif
- out:
-- ocfs2_meta_unlock(dir, 1);
-+ ocfs2_inode_unlock(dir, 1);
+-#define PCIMT_PCI_CONF 0xbfff0100
++#define PCIMT_PCI_CONF CKSEG1ADDR(0xbfff0100)
- if (de_bh)
- brelse(de_bh);
-@@ -720,7 +719,7 @@ static int ocfs2_unlink(struct inode *dir,
- return -EPERM;
- }
+ /*
+ * Data port for the PCI bus in IO space
+@@ -117,34 +117,34 @@ extern unsigned int sni_brd_type;
+ /*
+ * Board specific registers
+ */
+-#define PCIMT_CSMSR 0xbfd00000
+-#define PCIMT_CSSWITCH 0xbfd10000
+-#define PCIMT_CSITPEND 0xbfd20000
+-#define PCIMT_AUTO_PO_EN 0xbfd30000
+-#define PCIMT_CLR_TEMP 0xbfd40000
+-#define PCIMT_AUTO_PO_DIS 0xbfd50000
+-#define PCIMT_EXMSR 0xbfd60000
+-#define PCIMT_UNUSED1 0xbfd70000
+-#define PCIMT_CSWCSM 0xbfd80000
+-#define PCIMT_UNUSED2 0xbfd90000
+-#define PCIMT_CSLED 0xbfda0000
+-#define PCIMT_CSMAPISA 0xbfdb0000
+-#define PCIMT_CSRSTBP 0xbfdc0000
+-#define PCIMT_CLRPOFF 0xbfdd0000
+-#define PCIMT_CSTIMER 0xbfde0000
+-#define PCIMT_PWDN 0xbfdf0000
++#define PCIMT_CSMSR CKSEG1ADDR(0xbfd00000)
++#define PCIMT_CSSWITCH CKSEG1ADDR(0xbfd10000)
++#define PCIMT_CSITPEND CKSEG1ADDR(0xbfd20000)
++#define PCIMT_AUTO_PO_EN CKSEG1ADDR(0xbfd30000)
++#define PCIMT_CLR_TEMP CKSEG1ADDR(0xbfd40000)
++#define PCIMT_AUTO_PO_DIS CKSEG1ADDR(0xbfd50000)
++#define PCIMT_EXMSR CKSEG1ADDR(0xbfd60000)
++#define PCIMT_UNUSED1 CKSEG1ADDR(0xbfd70000)
++#define PCIMT_CSWCSM CKSEG1ADDR(0xbfd80000)
++#define PCIMT_UNUSED2 CKSEG1ADDR(0xbfd90000)
++#define PCIMT_CSLED CKSEG1ADDR(0xbfda0000)
++#define PCIMT_CSMAPISA CKSEG1ADDR(0xbfdb0000)
++#define PCIMT_CSRSTBP CKSEG1ADDR(0xbfdc0000)
++#define PCIMT_CLRPOFF CKSEG1ADDR(0xbfdd0000)
++#define PCIMT_CSTIMER CKSEG1ADDR(0xbfde0000)
++#define PCIMT_PWDN CKSEG1ADDR(0xbfdf0000)
-- status = ocfs2_meta_lock(dir, &parent_node_bh, 1);
-+ status = ocfs2_inode_lock(dir, &parent_node_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -745,7 +744,7 @@ static int ocfs2_unlink(struct inode *dir,
- goto leave;
- }
+ /*
+ * A20R based boards
+ */
+-#define A20R_PT_CLOCK_BASE 0xbc040000
+-#define A20R_PT_TIM0_ACK 0xbc050000
+-#define A20R_PT_TIM1_ACK 0xbc060000
++#define A20R_PT_CLOCK_BASE CKSEG1ADDR(0xbc040000)
++#define A20R_PT_TIM0_ACK CKSEG1ADDR(0xbc050000)
++#define A20R_PT_TIM1_ACK CKSEG1ADDR(0xbc060000)
-- status = ocfs2_meta_lock(inode, &fe_bh, 1);
-+ status = ocfs2_inode_lock(inode, &fe_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -765,7 +764,7 @@ static int ocfs2_unlink(struct inode *dir,
+ #define SNI_A20R_IRQ_BASE MIPS_CPU_IRQ_BASE
+ #define SNI_A20R_IRQ_TIMER (SNI_A20R_IRQ_BASE+5)
- status = ocfs2_remote_dentry_delete(dentry);
- if (status < 0) {
-- /* This vote should succeed under all normal
-+ /* This remote delete should succeed under all normal
- * circumstances. */
- mlog_errno(status);
- goto leave;
-@@ -841,13 +840,13 @@ leave:
- ocfs2_commit_trans(osb, handle);
+-#define SNI_PCIT_INT_REG 0xbfff000c
++#define SNI_PCIT_INT_REG CKSEG1ADDR(0xbfff000c)
- if (child_locked)
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
+ #define SNI_PCIT_INT_START 24
+ #define SNI_PCIT_INT_END 30
+@@ -186,10 +186,30 @@ extern unsigned int sni_brd_type;
+ /*
+ * Base address for the mapped 16mb EISA bus segment.
+ */
+-#define PCIMT_EISA_BASE 0xb0000000
++#define PCIMT_EISA_BASE CKSEG1ADDR(0xb0000000)
-- ocfs2_meta_unlock(dir, 1);
-+ ocfs2_inode_unlock(dir, 1);
+ /* PCI EISA Interrupt acknowledge */
+-#define PCIMT_INT_ACKNOWLEDGE 0xba000000
++#define PCIMT_INT_ACKNOWLEDGE CKSEG1ADDR(0xba000000)
++
++/*
++ * SNI ID PROM
++ *
++ * SNI_IDPROM_MEMSIZE Memsize in 16MB quantities
++ * SNI_IDPROM_BRDTYPE Board Type
++ * SNI_IDPROM_CPUTYPE CPU Type on RM400
++ */
++#ifdef CONFIG_CPU_BIG_ENDIAN
++#define __SNI_END 0
++#endif
++#ifdef CONFIG_CPU_LITTLE_ENDIAN
++#define __SNI_END 3
++#endif
++#define SNI_IDPROM_BASE CKSEG1ADDR(0x1ff00000)
++#define SNI_IDPROM_MEMSIZE (SNI_IDPROM_BASE + (0x28 ^ __SNI_END))
++#define SNI_IDPROM_BRDTYPE (SNI_IDPROM_BASE + (0x29 ^ __SNI_END))
++#define SNI_IDPROM_CPUTYPE (SNI_IDPROM_BASE + (0x30 ^ __SNI_END))
++
++#define SNI_IDPROM_SIZE 0x1000
- if (orphan_dir) {
- /* This was locked for us in ocfs2_prepare_orphan_dir() */
-- ocfs2_meta_unlock(orphan_dir, 1);
-+ ocfs2_inode_unlock(orphan_dir, 1);
- mutex_unlock(&orphan_dir->i_mutex);
- iput(orphan_dir);
- }
-@@ -908,7 +907,7 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
- inode1 = tmpinode;
- }
- /* lock id2 */
-- status = ocfs2_meta_lock(inode2, bh2, 1);
-+ status = ocfs2_inode_lock(inode2, bh2, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -917,14 +916,14 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
- }
+ /* board specific init functions */
+ extern void sni_a20r_init(void);
+@@ -207,6 +227,9 @@ extern void sni_pcimt_irq_init(void);
+ /* timer inits */
+ extern void sni_cpu_time_init(void);
- /* lock id1 */
-- status = ocfs2_meta_lock(inode1, bh1, 1);
-+ status = ocfs2_inode_lock(inode1, bh1, 1);
- if (status < 0) {
++/* eisa init for RM200/400 */
++extern int sni_eisa_root_init(void);
++
+ /* common irq stuff */
+ extern void (*sni_hwint)(void);
+ extern struct irqaction sni_isa_irq;
+diff --git a/include/asm-mips/stackframe.h b/include/asm-mips/stackframe.h
+index fb41a8d..051e1af 100644
+--- a/include/asm-mips/stackframe.h
++++ b/include/asm-mips/stackframe.h
+@@ -6,6 +6,7 @@
+ * Copyright (C) 1994, 95, 96, 99, 2001 Ralf Baechle
+ * Copyright (C) 1994, 1995, 1996 Paul M. Antoine.
+ * Copyright (C) 1999 Silicon Graphics, Inc.
++ * Copyright (C) 2007 Maciej W. Rozycki
+ */
+ #ifndef _ASM_STACKFRAME_H
+ #define _ASM_STACKFRAME_H
+@@ -145,8 +146,16 @@
+ .set reorder
+ /* Called from user mode, new stack. */
+ get_saved_sp
++#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+ 8: move k0, sp
+ PTR_SUBU sp, k1, PT_SIZE
++#else
++ .set at=k0
++8: PTR_SUBU k1, PT_SIZE
++ .set noat
++ move k0, sp
++ move sp, k1
++#endif
+ LONG_S k0, PT_R29(sp)
+ LONG_S $3, PT_R3(sp)
/*
- * An error return must mean that no cluster locks
- * were held on function exit.
- */
- if (oi1->ip_blkno != oi2->ip_blkno)
-- ocfs2_meta_unlock(inode2, 1);
-+ ocfs2_inode_unlock(inode2, 1);
-
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -937,10 +936,10 @@ bail:
-
- static void ocfs2_double_unlock(struct inode *inode1, struct inode *inode2)
- {
-- ocfs2_meta_unlock(inode1, 1);
-+ ocfs2_inode_unlock(inode1, 1);
-
- if (inode1 != inode2)
-- ocfs2_meta_unlock(inode2, 1);
-+ ocfs2_inode_unlock(inode2, 1);
- }
-
- static int ocfs2_rename(struct inode *old_dir,
-@@ -1031,10 +1030,11 @@ static int ocfs2_rename(struct inode *old_dir,
-
- /*
- * Aside from allowing a meta data update, the locking here
-- * also ensures that the vote thread on other nodes won't have
-- * to concurrently downconvert the inode and the dentry locks.
-+ * also ensures that the downconvert thread on other nodes
-+ * won't have to concurrently downconvert the inode and the
-+ * dentry locks.
- */
-- status = ocfs2_meta_lock(old_inode, &old_inode_bh, 1);
-+ status = ocfs2_inode_lock(old_inode, &old_inode_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -1143,7 +1143,7 @@ static int ocfs2_rename(struct inode *old_dir,
- goto bail;
- }
+diff --git a/include/asm-mips/time.h b/include/asm-mips/time.h
+index 7717934..a8fd16e 100644
+--- a/include/asm-mips/time.h
++++ b/include/asm-mips/time.h
+@@ -31,20 +31,13 @@ extern int rtc_mips_set_time(unsigned long);
+ extern int rtc_mips_set_mmss(unsigned long);
-- status = ocfs2_meta_lock(new_inode, &newfe_bh, 1);
-+ status = ocfs2_inode_lock(new_inode, &newfe_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -1355,14 +1355,14 @@ bail:
- ocfs2_double_unlock(old_dir, new_dir);
+ /*
+- * Timer interrupt functions.
+- * mips_timer_state is needed for high precision timer calibration.
+- */
+-extern int (*mips_timer_state)(void);
+-
+-/*
+ * board specific routines required by time_init().
+ */
+ extern void plat_time_init(void);
- if (old_child_locked)
-- ocfs2_meta_unlock(old_inode, 1);
-+ ocfs2_inode_unlock(old_inode, 1);
+ /*
+ * mips_hpt_frequency - must be set if you intend to use an R4k-compatible
+- * counter as a timer interrupt source; otherwise it can be set up
+- * automagically with an aid of mips_timer_state.
++ * counter as a timer interrupt source.
+ */
+ extern unsigned int mips_hpt_frequency;
- if (new_child_locked)
-- ocfs2_meta_unlock(new_inode, 1);
-+ ocfs2_inode_unlock(new_inode, 1);
+diff --git a/include/asm-mips/topology.h b/include/asm-mips/topology.h
+index 0440fb9..259145e 100644
+--- a/include/asm-mips/topology.h
++++ b/include/asm-mips/topology.h
+@@ -1 +1,17 @@
++/*
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ *
++ * Copyright (C) 2007 by Ralf Baechle
++ */
++#ifndef __ASM_TOPOLOGY_H
++#define __ASM_TOPOLOGY_H
++
+ #include <topology.h>
++
++#ifdef CONFIG_SMP
++#define smt_capable() (smp_num_siblings > 1)
++#endif
++
++#endif /* __ASM_TOPOLOGY_H */
+diff --git a/include/asm-mips/tx4927/tx4927_pci.h b/include/asm-mips/tx4927/tx4927_pci.h
+index 3f1e470..0be77df 100644
+--- a/include/asm-mips/tx4927/tx4927_pci.h
++++ b/include/asm-mips/tx4927/tx4927_pci.h
+@@ -9,6 +9,7 @@
+ #define __ASM_TX4927_TX4927_PCI_H
- if (orphan_dir) {
- /* This was locked for us in ocfs2_prepare_orphan_dir() */
-- ocfs2_meta_unlock(orphan_dir, 1);
-+ ocfs2_inode_unlock(orphan_dir, 1);
- mutex_unlock(&orphan_dir->i_mutex);
- iput(orphan_dir);
- }
-@@ -1530,7 +1530,7 @@ static int ocfs2_symlink(struct inode *dir,
- credits = ocfs2_calc_symlink_credits(sb);
+ #define TX4927_CCFG_TOE 0x00004000
++#define TX4927_CCFG_WR 0x00008000
+ #define TX4927_CCFG_TINTDIS 0x01000000
- /* lock the parent directory */
-- status = ocfs2_meta_lock(dir, &parent_fe_bh, 1);
-+ status = ocfs2_inode_lock(dir, &parent_fe_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
-@@ -1657,7 +1657,7 @@ bail:
- if (handle)
- ocfs2_commit_trans(osb, handle);
+ #define TX4927_PCIMEM 0x08000000
+diff --git a/include/asm-mips/uaccess.h b/include/asm-mips/uaccess.h
+index c30c718..66523d6 100644
+--- a/include/asm-mips/uaccess.h
++++ b/include/asm-mips/uaccess.h
+@@ -5,6 +5,7 @@
+ *
+ * Copyright (C) 1996, 1997, 1998, 1999, 2000, 03, 04 by Ralf Baechle
+ * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
++ * Copyright (C) 2007 Maciej W. Rozycki
+ */
+ #ifndef _ASM_UACCESS_H
+ #define _ASM_UACCESS_H
+@@ -387,6 +388,12 @@ extern void __put_user_unknown(void);
+ "jal\t" #destination "\n\t"
+ #endif
-- ocfs2_meta_unlock(dir, 1);
-+ ocfs2_inode_unlock(dir, 1);
++#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
++#define DADDI_SCRATCH "$0"
++#else
++#define DADDI_SCRATCH "$3"
++#endif
++
+ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
- if (new_fe_bh)
- brelse(new_fe_bh);
-@@ -1735,7 +1735,7 @@ static int ocfs2_prepare_orphan_dir(struct ocfs2_super *osb,
+ #define __invoke_copy_to_user(to, from, n) \
+@@ -403,7 +410,7 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
+ : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
+ : \
+ : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
+- "memory"); \
++ DADDI_SCRATCH, "memory"); \
+ __cu_len_r; \
+ })
- mutex_lock(&orphan_dir_inode->i_mutex);
+@@ -512,7 +519,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
+ : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
+ : \
+ : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
+- "memory"); \
++ DADDI_SCRATCH, "memory"); \
+ __cu_len_r; \
+ })
-- status = ocfs2_meta_lock(orphan_dir_inode, &orphan_dir_bh, 1);
-+ status = ocfs2_inode_lock(orphan_dir_inode, &orphan_dir_bh, 1);
- if (status < 0) {
- mlog_errno(status);
- goto leave;
-@@ -1745,7 +1745,7 @@ static int ocfs2_prepare_orphan_dir(struct ocfs2_super *osb,
- orphan_dir_bh, name,
- OCFS2_ORPHAN_NAMELEN, de_bh);
- if (status < 0) {
-- ocfs2_meta_unlock(orphan_dir_inode, 1);
-+ ocfs2_inode_unlock(orphan_dir_inode, 1);
+@@ -535,7 +542,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
+ : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
+ : \
+ : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
+- "memory"); \
++ DADDI_SCRATCH, "memory"); \
+ __cu_len_r; \
+ })
- mlog_errno(status);
- goto leave;
-diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
-index 60a23e1..d084805 100644
---- a/fs/ocfs2/ocfs2.h
-+++ b/fs/ocfs2/ocfs2.h
-@@ -101,6 +101,7 @@ enum ocfs2_unlock_action {
- * about to be
- * dropped. */
- #define OCFS2_LOCK_QUEUED (0x00000100) /* queued for downconvert */
-+#define OCFS2_LOCK_NOCACHE (0x00000200) /* don't use a holder count */
+diff --git a/include/asm-mips/war.h b/include/asm-mips/war.h
+index d2808ed..22361d5 100644
+--- a/include/asm-mips/war.h
++++ b/include/asm-mips/war.h
+@@ -4,6 +4,7 @@
+ * for more details.
+ *
+ * Copyright (C) 2002, 2004, 2007 by Ralf Baechle
++ * Copyright (C) 2007 Maciej W. Rozycki
+ */
+ #ifndef _ASM_WAR_H
+ #define _ASM_WAR_H
+@@ -11,6 +12,67 @@
+ #include <war.h>
- struct ocfs2_lock_res_ops;
+ /*
++ * Work around certain R4000 CPU errata (as implemented by GCC):
++ *
++ * - A double-word or a variable shift may give an incorrect result
++ * if executed immediately after starting an integer division:
++ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
++ * erratum #28
++ * "MIPS R4000MC Errata, Processor Revision 2.2 and 3.0", erratum
++ * #19
++ *
++ * - A double-word or a variable shift may give an incorrect result
++ * if executed while an integer multiplication is in progress:
++ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
++ * errata #16 & #28
++ *
++ * - An integer division may give an incorrect result if started in
++ * a delay slot of a taken branch or a jump:
++ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
++ * erratum #52
++ */
++#ifdef CONFIG_CPU_R4000_WORKAROUNDS
++#define R4000_WAR 1
++#else
++#define R4000_WAR 0
++#endif
++
++/*
++ * Work around certain R4400 CPU errata (as implemented by GCC):
++ *
++ * - A double-word or a variable shift may give an incorrect result
++ * if executed immediately after starting an integer division:
++ * "MIPS R4400MC Errata, Processor Revision 1.0", erratum #10
++ * "MIPS R4400MC Errata, Processor Revision 2.0 & 3.0", erratum #4
++ */
++#ifdef CONFIG_CPU_R4400_WORKAROUNDS
++#define R4400_WAR 1
++#else
++#define R4400_WAR 0
++#endif
++
++/*
++ * Work around the "daddi" and "daddiu" CPU errata:
++ *
++ * - The `daddi' instruction fails to trap on overflow.
++ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
++ * erratum #23
++ *
++ * - The `daddiu' instruction can produce an incorrect result.
++ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
++ * erratum #41
++ * "MIPS R4000MC Errata, Processor Revision 2.2 and 3.0", erratum
++ * #15
++ * "MIPS R4400PC/SC Errata, Processor Revision 1.0", erratum #7
++ * "MIPS R4400MC Errata, Processor Revision 1.0", erratum #5
++ */
++#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
++#define DADDI_WAR 1
++#else
++#define DADDI_WAR 0
++#endif
++
++/*
+ * Another R4600 erratum. Due to the lack of errata information the exact
+ * technical details aren't known. I've experimentally found that disabling
+ * interrupts during indexed I-cache flushes seems to be sufficient to deal
+diff --git a/include/asm-parisc/agp.h b/include/asm-parisc/agp.h
+index 9f61d4e..9651660 100644
+--- a/include/asm-parisc/agp.h
++++ b/include/asm-parisc/agp.h
+@@ -9,7 +9,6 @@
-@@ -170,6 +171,7 @@ enum ocfs2_mount_options
- OCFS2_MOUNT_NOINTR = 1 << 2, /* Don't catch signals */
- OCFS2_MOUNT_ERRORS_PANIC = 1 << 3, /* Panic on errors */
- OCFS2_MOUNT_DATA_WRITEBACK = 1 << 4, /* No data ordering */
-+ OCFS2_MOUNT_LOCALFLOCKS = 1 << 5, /* No cluster aware user file locks */
- };
+ #define map_page_into_agp(page) /* nothing */
+ #define unmap_page_from_agp(page) /* nothing */
+-#define flush_agp_mappings() /* nothing */
+ #define flush_agp_cache() mb()
+
+ /* Convert a physical address to an address suitable for the GART. */
+diff --git a/include/asm-powerpc/agp.h b/include/asm-powerpc/agp.h
+index e5ccaca..86455c4 100644
+--- a/include/asm-powerpc/agp.h
++++ b/include/asm-powerpc/agp.h
+@@ -6,7 +6,6 @@
- #define OCFS2_OSB_SOFT_RO 0x0001
-@@ -189,9 +191,7 @@ struct ocfs2_super
- struct ocfs2_slot_info *slot_info;
+ #define map_page_into_agp(page)
+ #define unmap_page_from_agp(page)
+-#define flush_agp_mappings()
+ #define flush_agp_cache() mb()
- spinlock_t node_map_lock;
-- struct ocfs2_node_map mounted_map;
- struct ocfs2_node_map recovery_map;
-- struct ocfs2_node_map umount_map;
+ /* Convert a physical address to an address suitable for the GART. */
+diff --git a/include/asm-powerpc/bitops.h b/include/asm-powerpc/bitops.h
+index 733b4af..220d9a7 100644
+--- a/include/asm-powerpc/bitops.h
++++ b/include/asm-powerpc/bitops.h
+@@ -359,6 +359,8 @@ static __inline__ int test_le_bit(unsigned long nr,
+ unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset);
- u64 root_blkno;
- u64 system_dir_blkno;
-@@ -231,7 +231,9 @@ struct ocfs2_super
- wait_queue_head_t checkpoint_event;
- atomic_t needs_checkpoint;
- struct ocfs2_journal *journal;
-+ unsigned long osb_commit_interval;
++unsigned long generic_find_next_le_bit(const unsigned long *addr,
++ unsigned long size, unsigned long offset);
+ /* Bitmap functions for the ext2 filesystem */
-+ int local_alloc_size;
- enum ocfs2_local_alloc_state local_alloc_state;
- struct buffer_head *local_alloc_bh;
- u64 la_last_gd;
-@@ -254,28 +256,21 @@ struct ocfs2_super
+ #define ext2_set_bit(nr,addr) \
+@@ -378,6 +380,8 @@ unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+ #define ext2_find_next_zero_bit(addr, size, off) \
+ generic_find_next_zero_le_bit((unsigned long*)addr, size, off)
- wait_queue_head_t recovery_event;
++#define ext2_find_next_bit(addr, size, off) \
++ generic_find_next_le_bit((unsigned long *)addr, size, off)
+ /* Bitmap functions for the minix filesystem. */
-- spinlock_t vote_task_lock;
-- struct task_struct *vote_task;
-- wait_queue_head_t vote_event;
-- unsigned long vote_wake_sequence;
-- unsigned long vote_work_sequence;
-+ spinlock_t dc_task_lock;
-+ struct task_struct *dc_task;
-+ wait_queue_head_t dc_event;
-+ unsigned long dc_wake_sequence;
-+ unsigned long dc_work_sequence;
+ #define minix_test_and_set_bit(nr,addr) \
+diff --git a/include/asm-powerpc/ide.h b/include/asm-powerpc/ide.h
+index fd7f5a4..6d50310 100644
+--- a/include/asm-powerpc/ide.h
++++ b/include/asm-powerpc/ide.h
+@@ -42,9 +42,6 @@ struct ide_machdep_calls {
-+ /*
-+ * Any thread can add locks to the list, but the downconvert
-+ * thread is the only one allowed to remove locks. Any change
-+ * to this rule requires updating
-+ * ocfs2_downconvert_thread_do_work().
-+ */
- struct list_head blocked_lock_list;
- unsigned long blocked_lock_count;
+ extern struct ide_machdep_calls ppc_ide_md;
-- struct list_head vote_list;
-- int vote_count;
--
-- u32 net_key;
-- spinlock_t net_response_lock;
-- unsigned int net_response_ids;
-- struct list_head net_response_list;
--
-- struct o2hb_callback_func osb_hb_up;
-- struct o2hb_callback_func osb_hb_down;
--
-- struct list_head osb_net_handlers;
+-#undef SUPPORT_SLOW_DATA_PORTS
+-#define SUPPORT_SLOW_DATA_PORTS 0
-
- wait_queue_head_t osb_mount_event;
-
- /* Truncate log info */
-diff --git a/fs/ocfs2/ocfs2_fs.h b/fs/ocfs2/ocfs2_fs.h
-index 6ef8767..3633edd 100644
---- a/fs/ocfs2/ocfs2_fs.h
-+++ b/fs/ocfs2/ocfs2_fs.h
-@@ -231,6 +231,20 @@ struct ocfs2_space_resv {
- #define OCFS2_IOC_RESVSP64 _IOW ('X', 42, struct ocfs2_space_resv)
- #define OCFS2_IOC_UNRESVSP64 _IOW ('X', 43, struct ocfs2_space_resv)
-
-+/* Used to pass group descriptor data when online resize is done */
-+struct ocfs2_new_group_input {
-+ __u64 group; /* Group descriptor's blkno. */
-+ __u32 clusters; /* Total number of clusters in this group */
-+ __u32 frees; /* Total free clusters in this group */
-+ __u16 chain; /* Chain for this group */
-+ __u16 reserved1;
-+ __u32 reserved2;
-+};
-+
-+#define OCFS2_IOC_GROUP_EXTEND _IOW('o', 1, int)
-+#define OCFS2_IOC_GROUP_ADD _IOW('o', 2,struct ocfs2_new_group_input)
-+#define OCFS2_IOC_GROUP_ADD64 _IOW('o', 3,struct ocfs2_new_group_input)
-+
- /*
- * Journal Flags (ocfs2_dinode.id1.journal1.i_flags)
- */
-@@ -256,6 +270,14 @@ struct ocfs2_space_resv {
- /* Journal limits (in bytes) */
- #define OCFS2_MIN_JOURNAL_SIZE (4 * 1024 * 1024)
-
-+/*
-+ * Default local alloc size (in megabytes)
-+ *
-+ * The value chosen should be such that most allocations, including new
-+ * block groups, use local alloc.
-+ */
-+#define OCFS2_DEFAULT_LOCAL_ALLOC_SIZE 8
-+
- struct ocfs2_system_inode_info {
- char *si_name;
- int si_iflags;
-diff --git a/fs/ocfs2/ocfs2_lockid.h b/fs/ocfs2/ocfs2_lockid.h
-index 4ca02b1..86f3e37 100644
---- a/fs/ocfs2/ocfs2_lockid.h
-+++ b/fs/ocfs2/ocfs2_lockid.h
-@@ -45,6 +45,7 @@ enum ocfs2_lock_type {
- OCFS2_LOCK_TYPE_RW,
- OCFS2_LOCK_TYPE_DENTRY,
- OCFS2_LOCK_TYPE_OPEN,
-+ OCFS2_LOCK_TYPE_FLOCK,
- OCFS2_NUM_LOCK_TYPES
- };
-
-@@ -73,6 +74,9 @@ static inline char ocfs2_lock_type_char(enum ocfs2_lock_type type)
- case OCFS2_LOCK_TYPE_OPEN:
- c = 'O';
- break;
-+ case OCFS2_LOCK_TYPE_FLOCK:
-+ c = 'F';
-+ break;
- default:
- c = '\0';
- }
-@@ -90,6 +94,7 @@ static char *ocfs2_lock_type_strings[] = {
- [OCFS2_LOCK_TYPE_RW] = "Write/Read",
- [OCFS2_LOCK_TYPE_DENTRY] = "Dentry",
- [OCFS2_LOCK_TYPE_OPEN] = "Open",
-+ [OCFS2_LOCK_TYPE_FLOCK] = "Flock",
- };
+ #define IDE_ARCH_OBSOLETE_DEFAULTS
- static inline const char *ocfs2_lock_type_string(enum ocfs2_lock_type type)
-diff --git a/fs/ocfs2/resize.c b/fs/ocfs2/resize.c
+ static __inline__ int ide_default_irq(unsigned long base)
+diff --git a/include/asm-powerpc/pasemi_dma.h b/include/asm-powerpc/pasemi_dma.h
new file mode 100644
-index 0000000..37835ff
+index 0000000..b4526ff
--- /dev/null
-+++ b/fs/ocfs2/resize.c
-@@ -0,0 +1,634 @@
-+/* -*- mode: c; c-basic-offset: 8; -*-
-+ * vim: noexpandtab sw=8 ts=8 sts=0:
-+ *
-+ * resize.c
-+ *
-+ * volume resize.
-+ * Inspired by ext3/resize.c.
++++ b/include/asm-powerpc/pasemi_dma.h
+@@ -0,0 +1,467 @@
++/*
++ * Copyright (C) 2006 PA Semi, Inc
+ *
-+ * Copyright (C) 2007 Oracle. All rights reserved.
++ * Hardware register layout and descriptor formats for the on-board
++ * DMA engine on PA Semi PWRficient. Used by ethernet, function and security
++ * drivers.
+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this program; if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 021110-1307, USA.
-+ */
-+
-+#include <linux/fs.h>
-+#include <linux/types.h>
-+
-+#define MLOG_MASK_PREFIX ML_DISK_ALLOC
-+#include <cluster/masklog.h>
-+
-+#include "ocfs2.h"
-+
-+#include "alloc.h"
-+#include "dlmglue.h"
-+#include "inode.h"
-+#include "journal.h"
-+#include "super.h"
-+#include "sysfile.h"
-+#include "uptodate.h"
-+
-+#include "buffer_head_io.h"
-+#include "suballoc.h"
-+#include "resize.h"
-+
-+/*
-+ * Check whether there are new backup superblocks exist
-+ * in the last group. If there are some, mark them or clear
-+ * them in the bitmap.
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
+ *
-+ * Return how many backups we find in the last group.
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
-+static u16 ocfs2_calc_new_backup_super(struct inode *inode,
-+ struct ocfs2_group_desc *gd,
-+ int new_clusters,
-+ u32 first_new_cluster,
-+ u16 cl_cpg,
-+ int set)
-+{
-+ int i;
-+ u16 backups = 0;
-+ u32 cluster;
-+ u64 blkno, gd_blkno, lgd_blkno = le64_to_cpu(gd->bg_blkno);
-+
-+ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
-+ blkno = ocfs2_backup_super_blkno(inode->i_sb, i);
-+ cluster = ocfs2_blocks_to_clusters(inode->i_sb, blkno);
-+
-+ gd_blkno = ocfs2_which_cluster_group(inode, cluster);
-+ if (gd_blkno < lgd_blkno)
-+ continue;
-+ else if (gd_blkno > lgd_blkno)
-+ break;
-+
-+ if (set)
-+ ocfs2_set_bit(cluster % cl_cpg,
-+ (unsigned long *)gd->bg_bitmap);
-+ else
-+ ocfs2_clear_bit(cluster % cl_cpg,
-+ (unsigned long *)gd->bg_bitmap);
-+ backups++;
-+ }
-+
-+ mlog_exit_void();
-+ return backups;
-+}
-+
-+static int ocfs2_update_last_group_and_inode(handle_t *handle,
-+ struct inode *bm_inode,
-+ struct buffer_head *bm_bh,
-+ struct buffer_head *group_bh,
-+ u32 first_new_cluster,
-+ int new_clusters)
-+{
-+ int ret = 0;
-+ struct ocfs2_super *osb = OCFS2_SB(bm_inode->i_sb);
-+ struct ocfs2_dinode *fe = (struct ocfs2_dinode *) bm_bh->b_data;
-+ struct ocfs2_chain_list *cl = &fe->id2.i_chain;
-+ struct ocfs2_chain_rec *cr;
-+ struct ocfs2_group_desc *group;
-+ u16 chain, num_bits, backups = 0;
-+ u16 cl_bpc = le16_to_cpu(cl->cl_bpc);
-+ u16 cl_cpg = le16_to_cpu(cl->cl_cpg);
-+
-+ mlog_entry("(new_clusters=%d, first_new_cluster = %u)\n",
-+ new_clusters, first_new_cluster);
-+
-+ ret = ocfs2_journal_access(handle, bm_inode, group_bh,
-+ OCFS2_JOURNAL_ACCESS_WRITE);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+
-+ group = (struct ocfs2_group_desc *)group_bh->b_data;
-+
-+ /* update the group first. */
-+ num_bits = new_clusters * cl_bpc;
-+ le16_add_cpu(&group->bg_bits, num_bits);
-+ le16_add_cpu(&group->bg_free_bits_count, num_bits);
-+
-+ /*
-+ * check whether there are some new backup superblocks exist in
-+ * this group and update the group bitmap accordingly.
-+ */
-+ if (OCFS2_HAS_COMPAT_FEATURE(osb->sb,
-+ OCFS2_FEATURE_COMPAT_BACKUP_SB)) {
-+ backups = ocfs2_calc_new_backup_super(bm_inode,
-+ group,
-+ new_clusters,
-+ first_new_cluster,
-+ cl_cpg, 1);
-+ le16_add_cpu(&group->bg_free_bits_count, -1 * backups);
-+ }
-+
-+ ret = ocfs2_journal_dirty(handle, group_bh);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_rollback;
-+ }
-+
-+ /* update the inode accordingly. */
-+ ret = ocfs2_journal_access(handle, bm_inode, bm_bh,
-+ OCFS2_JOURNAL_ACCESS_WRITE);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_rollback;
-+ }
-+
-+ chain = le16_to_cpu(group->bg_chain);
-+ cr = (&cl->cl_recs[chain]);
-+ le32_add_cpu(&cr->c_total, num_bits);
-+ le32_add_cpu(&cr->c_free, num_bits);
-+ le32_add_cpu(&fe->id1.bitmap1.i_total, num_bits);
-+ le32_add_cpu(&fe->i_clusters, new_clusters);
-+
-+ if (backups) {
-+ le32_add_cpu(&cr->c_free, -1 * backups);
-+ le32_add_cpu(&fe->id1.bitmap1.i_used, backups);
-+ }
-+
-+ spin_lock(&OCFS2_I(bm_inode)->ip_lock);
-+ OCFS2_I(bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
-+ le64_add_cpu(&fe->i_size, new_clusters << osb->s_clustersize_bits);
-+ spin_unlock(&OCFS2_I(bm_inode)->ip_lock);
-+ i_size_write(bm_inode, le64_to_cpu(fe->i_size));
-+
-+ ocfs2_journal_dirty(handle, bm_bh);
-+
-+out_rollback:
-+ if (ret < 0) {
-+ ocfs2_calc_new_backup_super(bm_inode,
-+ group,
-+ new_clusters,
-+ first_new_cluster,
-+ cl_cpg, 0);
-+ le16_add_cpu(&group->bg_free_bits_count, backups);
-+ le16_add_cpu(&group->bg_bits, -1 * num_bits);
-+ le16_add_cpu(&group->bg_free_bits_count, -1 * num_bits);
-+ }
-+out:
-+ mlog_exit(ret);
-+ return ret;
-+}
-+
-+static int update_backups(struct inode * inode, u32 clusters, char *data)
-+{
-+ int i, ret = 0;
-+ u32 cluster;
-+ u64 blkno;
-+ struct buffer_head *backup = NULL;
-+ struct ocfs2_dinode *backup_di = NULL;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+
-+ /* calculate the real backups we need to update. */
-+ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
-+ blkno = ocfs2_backup_super_blkno(inode->i_sb, i);
-+ cluster = ocfs2_blocks_to_clusters(inode->i_sb, blkno);
-+ if (cluster > clusters)
-+ break;
-+
-+ ret = ocfs2_read_block(osb, blkno, &backup, 0, NULL);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ break;
-+ }
-+
-+ memcpy(backup->b_data, data, inode->i_sb->s_blocksize);
-+
-+ backup_di = (struct ocfs2_dinode *)backup->b_data;
-+ backup_di->i_blkno = cpu_to_le64(blkno);
-+
-+ ret = ocfs2_write_super_or_backup(osb, backup);
-+ brelse(backup);
-+ backup = NULL;
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ break;
-+ }
-+ }
-+
-+ return ret;
-+}
-+
-+static void ocfs2_update_super_and_backups(struct inode *inode,
-+ int new_clusters)
-+{
-+ int ret;
-+ u32 clusters = 0;
-+ struct buffer_head *super_bh = NULL;
-+ struct ocfs2_dinode *super_di = NULL;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+
-+ /*
-+ * update the superblock last.
-+ * It doesn't matter if the write failed.
-+ */
-+ ret = ocfs2_read_block(osb, OCFS2_SUPER_BLOCK_BLKNO,
-+ &super_bh, 0, NULL);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+
-+ super_di = (struct ocfs2_dinode *)super_bh->b_data;
-+ le32_add_cpu(&super_di->i_clusters, new_clusters);
-+ clusters = le32_to_cpu(super_di->i_clusters);
+
-+ ret = ocfs2_write_super_or_backup(osb, super_bh);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out;
-+ }
++#ifndef ASM_PASEMI_DMA_H
++#define ASM_PASEMI_DMA_H
+
-+ if (OCFS2_HAS_COMPAT_FEATURE(osb->sb, OCFS2_FEATURE_COMPAT_BACKUP_SB))
-+ ret = update_backups(inode, clusters, super_bh->b_data);
++/* status register layout in IOB region, at 0xfb800000 */
++struct pasdma_status {
++ u64 rx_sta[64]; /* RX channel status */
++ u64 tx_sta[20]; /* TX channel status */
++};
+
-+out:
-+ brelse(super_bh);
-+ if (ret)
-+ printk(KERN_WARNING "ocfs2: Failed to update super blocks on %s"
-+ " during fs resize. This condition is not fatal,"
-+ " but fsck.ocfs2 should be run to fix it\n",
-+ osb->dev_str);
-+ return;
-+}
+
-+/*
-+ * Extend the filesystem to the new number of clusters specified. This entry
-+ * point is only used to extend the current filesystem to the end of the last
-+ * existing group.
++/* All these registers live in the PCI configuration space for the DMA PCI
++ * device. Use the normal PCI config access functions for them.
+ */
-+int ocfs2_group_extend(struct inode * inode, int new_clusters)
-+{
-+ int ret;
-+ handle_t *handle;
-+ struct buffer_head *main_bm_bh = NULL;
-+ struct buffer_head *group_bh = NULL;
-+ struct inode *main_bm_inode = NULL;
-+ struct ocfs2_dinode *fe = NULL;
-+ struct ocfs2_group_desc *group = NULL;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+ u16 cl_bpc;
-+ u32 first_new_cluster;
-+ u64 lgd_blkno;
-+
-+ mlog_entry_void();
-+
-+ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb))
-+ return -EROFS;
-+
-+ if (new_clusters < 0)
-+ return -EINVAL;
-+ else if (new_clusters == 0)
-+ return 0;
-+
-+ main_bm_inode = ocfs2_get_system_file_inode(osb,
-+ GLOBAL_BITMAP_SYSTEM_INODE,
-+ OCFS2_INVALID_SLOT);
-+ if (!main_bm_inode) {
-+ ret = -EINVAL;
-+ mlog_errno(ret);
-+ goto out;
-+ }
-+
-+ mutex_lock(&main_bm_inode->i_mutex);
-+
-+ ret = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_mutex;
-+ }
-+
-+ fe = (struct ocfs2_dinode *)main_bm_bh->b_data;
-+
-+ if (le16_to_cpu(fe->id2.i_chain.cl_cpg) !=
-+ ocfs2_group_bitmap_size(osb->sb) * 8) {
-+ mlog(ML_ERROR, "The disk is too old and small. "
-+ "Force to do offline resize.");
-+ ret = -EINVAL;
-+ goto out_unlock;
-+ }
-+
-+ if (!OCFS2_IS_VALID_DINODE(fe)) {
-+ OCFS2_RO_ON_INVALID_DINODE(main_bm_inode->i_sb, fe);
-+ ret = -EIO;
-+ goto out_unlock;
-+ }
-+
-+ first_new_cluster = le32_to_cpu(fe->i_clusters);
-+ lgd_blkno = ocfs2_which_cluster_group(main_bm_inode,
-+ first_new_cluster - 1);
-+
-+ ret = ocfs2_read_block(osb, lgd_blkno, &group_bh, OCFS2_BH_CACHED,
-+ main_bm_inode);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_unlock;
-+ }
-+
-+ group = (struct ocfs2_group_desc *)group_bh->b_data;
-+
-+ ret = ocfs2_check_group_descriptor(inode->i_sb, fe, group);
-+ if (ret) {
-+ mlog_errno(ret);
-+ goto out_unlock;
-+ }
-+
-+ cl_bpc = le16_to_cpu(fe->id2.i_chain.cl_bpc);
-+ if (le16_to_cpu(group->bg_bits) / cl_bpc + new_clusters >
-+ le16_to_cpu(fe->id2.i_chain.cl_cpg)) {
-+ ret = -EINVAL;
-+ goto out_unlock;
-+ }
-+
-+ mlog(0, "extend the last group at %llu, new clusters = %d\n",
-+ (unsigned long long)le64_to_cpu(group->bg_blkno), new_clusters);
-+
-+ handle = ocfs2_start_trans(osb, OCFS2_GROUP_EXTEND_CREDITS);
-+ if (IS_ERR(handle)) {
-+ mlog_errno(PTR_ERR(handle));
-+ ret = -EINVAL;
-+ goto out_unlock;
-+ }
-+
-+ /* update the last group descriptor and inode. */
-+ ret = ocfs2_update_last_group_and_inode(handle, main_bm_inode,
-+ main_bm_bh, group_bh,
-+ first_new_cluster,
-+ new_clusters);
-+ if (ret) {
-+ mlog_errno(ret);
-+ goto out_commit;
-+ }
-+
-+ ocfs2_update_super_and_backups(main_bm_inode, new_clusters);
++enum {
++ PAS_DMA_CAP_TXCH = 0x44, /* Transmit Channel Info */
++ PAS_DMA_CAP_RXCH = 0x48, /* Receive Channel Info */
++ PAS_DMA_CAP_IFI = 0x4c, /* Interface Info */
++ PAS_DMA_COM_TXCMD = 0x100, /* Transmit Command Register */
++ PAS_DMA_COM_TXSTA = 0x104, /* Transmit Status Register */
++ PAS_DMA_COM_RXCMD = 0x108, /* Receive Command Register */
++ PAS_DMA_COM_RXSTA = 0x10c, /* Receive Status Register */
++};
+
-+out_commit:
-+ ocfs2_commit_trans(osb, handle);
-+out_unlock:
-+ brelse(group_bh);
-+ brelse(main_bm_bh);
+
-+ ocfs2_inode_unlock(main_bm_inode, 1);
++#define PAS_DMA_CAP_TXCH_TCHN_M 0x00ff0000 /* # of TX channels */
++#define PAS_DMA_CAP_TXCH_TCHN_S 16
+
-+out_mutex:
-+ mutex_unlock(&main_bm_inode->i_mutex);
-+ iput(main_bm_inode);
++#define PAS_DMA_CAP_RXCH_RCHN_M 0x00ff0000 /* # of RX channels */
++#define PAS_DMA_CAP_RXCH_RCHN_S 16
+
-+out:
-+ mlog_exit_void();
-+ return ret;
-+}
++#define PAS_DMA_CAP_IFI_IOFF_M 0xff000000 /* Cfg reg for intf pointers */
++#define PAS_DMA_CAP_IFI_IOFF_S 24
++#define PAS_DMA_CAP_IFI_NIN_M 0x00ff0000 /* # of interfaces */
++#define PAS_DMA_CAP_IFI_NIN_S 16
+
-+static int ocfs2_check_new_group(struct inode *inode,
-+ struct ocfs2_dinode *di,
-+ struct ocfs2_new_group_input *input,
-+ struct buffer_head *group_bh)
-+{
-+ int ret;
-+ struct ocfs2_group_desc *gd;
-+ u16 cl_bpc = le16_to_cpu(di->id2.i_chain.cl_bpc);
-+ unsigned int max_bits = le16_to_cpu(di->id2.i_chain.cl_cpg) *
-+ le16_to_cpu(di->id2.i_chain.cl_bpc);
++#define PAS_DMA_COM_TXCMD_EN 0x00000001 /* enable */
++#define PAS_DMA_COM_TXSTA_ACT 0x00000001 /* active */
++#define PAS_DMA_COM_RXCMD_EN 0x00000001 /* enable */
++#define PAS_DMA_COM_RXSTA_ACT 0x00000001 /* active */
+
+
-+ gd = (struct ocfs2_group_desc *)group_bh->b_data;
++/* Per-interface and per-channel registers */
++#define _PAS_DMA_RXINT_STRIDE 0x20
++#define PAS_DMA_RXINT_RCMDSTA(i) (0x200+(i)*_PAS_DMA_RXINT_STRIDE)
++#define PAS_DMA_RXINT_RCMDSTA_EN 0x00000001
++#define PAS_DMA_RXINT_RCMDSTA_ST 0x00000002
++#define PAS_DMA_RXINT_RCMDSTA_MBT 0x00000008
++#define PAS_DMA_RXINT_RCMDSTA_MDR 0x00000010
++#define PAS_DMA_RXINT_RCMDSTA_MOO 0x00000020
++#define PAS_DMA_RXINT_RCMDSTA_MBP 0x00000040
++#define PAS_DMA_RXINT_RCMDSTA_BT 0x00000800
++#define PAS_DMA_RXINT_RCMDSTA_DR 0x00001000
++#define PAS_DMA_RXINT_RCMDSTA_OO 0x00002000
++#define PAS_DMA_RXINT_RCMDSTA_BP 0x00004000
++#define PAS_DMA_RXINT_RCMDSTA_TB 0x00008000
++#define PAS_DMA_RXINT_RCMDSTA_ACT 0x00010000
++#define PAS_DMA_RXINT_RCMDSTA_DROPS_M 0xfffe0000
++#define PAS_DMA_RXINT_RCMDSTA_DROPS_S 17
++#define PAS_DMA_RXINT_CFG(i) (0x204+(i)*_PAS_DMA_RXINT_STRIDE)
++#define PAS_DMA_RXINT_CFG_RBP 0x80000000
++#define PAS_DMA_RXINT_CFG_ITRR 0x40000000
++#define PAS_DMA_RXINT_CFG_DHL_M 0x07000000
++#define PAS_DMA_RXINT_CFG_DHL_S 24
++#define PAS_DMA_RXINT_CFG_DHL(x) (((x) << PAS_DMA_RXINT_CFG_DHL_S) & \
++ PAS_DMA_RXINT_CFG_DHL_M)
++#define PAS_DMA_RXINT_CFG_ITR 0x00400000
++#define PAS_DMA_RXINT_CFG_LW 0x00200000
++#define PAS_DMA_RXINT_CFG_L2 0x00100000
++#define PAS_DMA_RXINT_CFG_HEN 0x00080000
++#define PAS_DMA_RXINT_CFG_WIF 0x00000002
++#define PAS_DMA_RXINT_CFG_WIL 0x00000001
+
-+ ret = -EIO;
-+ if (!OCFS2_IS_VALID_GROUP_DESC(gd))
-+ mlog(ML_ERROR, "Group descriptor # %llu isn't valid.\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno));
-+ else if (di->i_blkno != gd->bg_parent_dinode)
-+ mlog(ML_ERROR, "Group descriptor # %llu has bad parent "
-+ "pointer (%llu, expected %llu)\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ (unsigned long long)le64_to_cpu(gd->bg_parent_dinode),
-+ (unsigned long long)le64_to_cpu(di->i_blkno));
-+ else if (le16_to_cpu(gd->bg_bits) > max_bits)
-+ mlog(ML_ERROR, "Group descriptor # %llu has bit count of %u\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_bits));
-+ else if (le16_to_cpu(gd->bg_free_bits_count) > le16_to_cpu(gd->bg_bits))
-+ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
-+ "claims that %u are free\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_bits),
-+ le16_to_cpu(gd->bg_free_bits_count));
-+ else if (le16_to_cpu(gd->bg_bits) > (8 * le16_to_cpu(gd->bg_size)))
-+ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
-+ "max bitmap bits of %u\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_bits),
-+ 8 * le16_to_cpu(gd->bg_size));
-+ else if (le16_to_cpu(gd->bg_chain) != input->chain)
-+ mlog(ML_ERROR, "Group descriptor # %llu has bad chain %u "
-+ "while input has %u set.\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_chain), input->chain);
-+ else if (le16_to_cpu(gd->bg_bits) != input->clusters * cl_bpc)
-+ mlog(ML_ERROR, "Group descriptor # %llu has bit count %u but "
-+ "input has %u clusters set\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_bits), input->clusters);
-+ else if (le16_to_cpu(gd->bg_free_bits_count) != input->frees * cl_bpc)
-+ mlog(ML_ERROR, "Group descriptor # %llu has free bit count %u "
-+ "but it should have %u set\n",
-+ (unsigned long long)le64_to_cpu(gd->bg_blkno),
-+ le16_to_cpu(gd->bg_bits),
-+ input->frees * cl_bpc);
-+ else
-+ ret = 0;
++#define PAS_DMA_RXINT_INCR(i) (0x210+(i)*_PAS_DMA_RXINT_STRIDE)
++#define PAS_DMA_RXINT_INCR_INCR_M 0x0000ffff
++#define PAS_DMA_RXINT_INCR_INCR_S 0
++#define PAS_DMA_RXINT_INCR_INCR(x) ((x) & 0x0000ffff)
++#define PAS_DMA_RXINT_BASEL(i) (0x218+(i)*_PAS_DMA_RXINT_STRIDE)
++#define PAS_DMA_RXINT_BASEL_BRBL(x) ((x) & ~0x3f)
++#define PAS_DMA_RXINT_BASEU(i) (0x21c+(i)*_PAS_DMA_RXINT_STRIDE)
++#define PAS_DMA_RXINT_BASEU_BRBH(x) ((x) & 0xfff)
++#define PAS_DMA_RXINT_BASEU_SIZ_M 0x3fff0000 /* # of cache lines worth of buffer ring */
++#define PAS_DMA_RXINT_BASEU_SIZ_S 16 /* 0 = 16K */
++#define PAS_DMA_RXINT_BASEU_SIZ(x) (((x) << PAS_DMA_RXINT_BASEU_SIZ_S) & \
++ PAS_DMA_RXINT_BASEU_SIZ_M)
+
-+ return ret;
-+}
+
-+static int ocfs2_verify_group_and_input(struct inode *inode,
-+ struct ocfs2_dinode *di,
-+ struct ocfs2_new_group_input *input,
-+ struct buffer_head *group_bh)
-+{
-+ u16 cl_count = le16_to_cpu(di->id2.i_chain.cl_count);
-+ u16 cl_cpg = le16_to_cpu(di->id2.i_chain.cl_cpg);
-+ u16 next_free = le16_to_cpu(di->id2.i_chain.cl_next_free_rec);
-+ u32 cluster = ocfs2_blocks_to_clusters(inode->i_sb, input->group);
-+ u32 total_clusters = le32_to_cpu(di->i_clusters);
-+ int ret = -EINVAL;
++#define _PAS_DMA_TXCHAN_STRIDE 0x20 /* Size per channel */
++#define _PAS_DMA_TXCHAN_TCMDSTA 0x300 /* Command / Status */
++#define _PAS_DMA_TXCHAN_CFG 0x304 /* Configuration */
++#define _PAS_DMA_TXCHAN_DSCRBU 0x308 /* Descriptor BU Allocation */
++#define _PAS_DMA_TXCHAN_INCR 0x310 /* Descriptor increment */
++#define _PAS_DMA_TXCHAN_CNT 0x314 /* Descriptor count/offset */
++#define _PAS_DMA_TXCHAN_BASEL 0x318 /* Descriptor ring base (low) */
++#define _PAS_DMA_TXCHAN_BASEU 0x31c /* (high) */
++#define PAS_DMA_TXCHAN_TCMDSTA(c) (0x300+(c)*_PAS_DMA_TXCHAN_STRIDE)
++#define PAS_DMA_TXCHAN_TCMDSTA_EN 0x00000001 /* Enabled */
++#define PAS_DMA_TXCHAN_TCMDSTA_ST 0x00000002 /* Stop interface */
++#define PAS_DMA_TXCHAN_TCMDSTA_ACT 0x00010000 /* Active */
++#define PAS_DMA_TXCHAN_TCMDSTA_SZ 0x00000800
++#define PAS_DMA_TXCHAN_TCMDSTA_DB 0x00000400
++#define PAS_DMA_TXCHAN_TCMDSTA_DE 0x00000200
++#define PAS_DMA_TXCHAN_TCMDSTA_DA 0x00000100
++#define PAS_DMA_TXCHAN_CFG(c) (0x304+(c)*_PAS_DMA_TXCHAN_STRIDE)
++#define PAS_DMA_TXCHAN_CFG_TY_IFACE 0x00000000 /* Type = interface */
++#define PAS_DMA_TXCHAN_CFG_TATTR_M 0x0000003c
++#define PAS_DMA_TXCHAN_CFG_TATTR_S 2
++#define PAS_DMA_TXCHAN_CFG_TATTR(x) (((x) << PAS_DMA_TXCHAN_CFG_TATTR_S) & \
++ PAS_DMA_TXCHAN_CFG_TATTR_M)
++#define PAS_DMA_TXCHAN_CFG_WT_M 0x000001c0
++#define PAS_DMA_TXCHAN_CFG_WT_S 6
++#define PAS_DMA_TXCHAN_CFG_WT(x) (((x) << PAS_DMA_TXCHAN_CFG_WT_S) & \
++ PAS_DMA_TXCHAN_CFG_WT_M)
++#define PAS_DMA_TXCHAN_CFG_TRD 0x00010000 /* translate data */
++#define PAS_DMA_TXCHAN_CFG_TRR 0x00008000 /* translate rings */
++#define PAS_DMA_TXCHAN_CFG_UP 0x00004000 /* update tx descr when sent */
++#define PAS_DMA_TXCHAN_CFG_CL 0x00002000 /* Clean last line */
++#define PAS_DMA_TXCHAN_CFG_CF 0x00001000 /* Clean first line */
++#define PAS_DMA_TXCHAN_INCR(c) (0x310+(c)*_PAS_DMA_TXCHAN_STRIDE)
++#define PAS_DMA_TXCHAN_BASEL(c) (0x318+(c)*_PAS_DMA_TXCHAN_STRIDE)
++#define PAS_DMA_TXCHAN_BASEL_BRBL_M 0xffffffc0
++#define PAS_DMA_TXCHAN_BASEL_BRBL_S 0
++#define PAS_DMA_TXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_TXCHAN_BASEL_BRBL_S) & \
++ PAS_DMA_TXCHAN_BASEL_BRBL_M)
++#define PAS_DMA_TXCHAN_BASEU(c) (0x31c+(c)*_PAS_DMA_TXCHAN_STRIDE)
++#define PAS_DMA_TXCHAN_BASEU_BRBH_M 0x00000fff
++#define PAS_DMA_TXCHAN_BASEU_BRBH_S 0
++#define PAS_DMA_TXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_TXCHAN_BASEU_BRBH_S) & \
++ PAS_DMA_TXCHAN_BASEU_BRBH_M)
++/* # of cache lines worth of buffer ring */
++#define PAS_DMA_TXCHAN_BASEU_SIZ_M 0x3fff0000
++#define PAS_DMA_TXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
++#define PAS_DMA_TXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_TXCHAN_BASEU_SIZ_S) & \
++ PAS_DMA_TXCHAN_BASEU_SIZ_M)
+
-+ if (cluster < total_clusters)
-+ mlog(ML_ERROR, "add a group which is in the current volume.\n");
-+ else if (input->chain >= cl_count)
-+ mlog(ML_ERROR, "input chain exceeds the limit.\n");
-+ else if (next_free != cl_count && next_free != input->chain)
-+ mlog(ML_ERROR,
-+ "the add group should be in chain %u\n", next_free);
-+ else if (total_clusters + input->clusters < total_clusters)
-+ mlog(ML_ERROR, "add group's clusters overflow.\n");
-+ else if (input->clusters > cl_cpg)
-+ mlog(ML_ERROR, "the cluster exceeds the maximum of a group\n");
-+ else if (input->frees > input->clusters)
-+ mlog(ML_ERROR, "the free cluster exceeds the total clusters\n");
-+ else if (total_clusters % cl_cpg != 0)
-+ mlog(ML_ERROR,
-+ "the last group isn't full. Use group extend first.\n");
-+ else if (input->group != ocfs2_which_cluster_group(inode, cluster))
-+ mlog(ML_ERROR, "group blkno is invalid\n");
-+ else if ((ret = ocfs2_check_new_group(inode, di, input, group_bh)))
-+ mlog(ML_ERROR, "group descriptor check failed.\n");
-+ else
-+ ret = 0;
++#define _PAS_DMA_RXCHAN_STRIDE 0x20 /* Size per channel */
++#define _PAS_DMA_RXCHAN_CCMDSTA 0x800 /* Command / Status */
++#define _PAS_DMA_RXCHAN_CFG 0x804 /* Configuration */
++#define _PAS_DMA_RXCHAN_INCR 0x810 /* Descriptor increment */
++#define _PAS_DMA_RXCHAN_CNT 0x814 /* Descriptor count/offset */
++#define _PAS_DMA_RXCHAN_BASEL 0x818 /* Descriptor ring base (low) */
++#define _PAS_DMA_RXCHAN_BASEU 0x81c /* (high) */
++#define PAS_DMA_RXCHAN_CCMDSTA(c) (0x800+(c)*_PAS_DMA_RXCHAN_STRIDE)
++#define PAS_DMA_RXCHAN_CCMDSTA_EN 0x00000001 /* Enabled */
++#define PAS_DMA_RXCHAN_CCMDSTA_ST 0x00000002 /* Stop interface */
++#define PAS_DMA_RXCHAN_CCMDSTA_ACT 0x00010000 /* Active */
++#define PAS_DMA_RXCHAN_CCMDSTA_DU 0x00020000
++#define PAS_DMA_RXCHAN_CCMDSTA_OD 0x00002000
++#define PAS_DMA_RXCHAN_CCMDSTA_FD 0x00001000
++#define PAS_DMA_RXCHAN_CCMDSTA_DT 0x00000800
++#define PAS_DMA_RXCHAN_CFG(c) (0x804+(c)*_PAS_DMA_RXCHAN_STRIDE)
++#define PAS_DMA_RXCHAN_CFG_CTR 0x00000400
++#define PAS_DMA_RXCHAN_CFG_HBU_M 0x00000380
++#define PAS_DMA_RXCHAN_CFG_HBU_S 7
++#define PAS_DMA_RXCHAN_CFG_HBU(x) (((x) << PAS_DMA_RXCHAN_CFG_HBU_S) & \
++ PAS_DMA_RXCHAN_CFG_HBU_M)
++#define PAS_DMA_RXCHAN_INCR(c) (0x810+(c)*_PAS_DMA_RXCHAN_STRIDE)
++#define PAS_DMA_RXCHAN_BASEL(c) (0x818+(c)*_PAS_DMA_RXCHAN_STRIDE)
++#define PAS_DMA_RXCHAN_BASEL_BRBL_M 0xffffffc0
++#define PAS_DMA_RXCHAN_BASEL_BRBL_S 0
++#define PAS_DMA_RXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_RXCHAN_BASEL_BRBL_S) & \
++ PAS_DMA_RXCHAN_BASEL_BRBL_M)
++#define PAS_DMA_RXCHAN_BASEU(c) (0x81c+(c)*_PAS_DMA_RXCHAN_STRIDE)
++#define PAS_DMA_RXCHAN_BASEU_BRBH_M 0x00000fff
++#define PAS_DMA_RXCHAN_BASEU_BRBH_S 0
++#define PAS_DMA_RXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_RXCHAN_BASEU_BRBH_S) & \
++ PAS_DMA_RXCHAN_BASEU_BRBH_M)
++/* # of cache lines worth of buffer ring */
++#define PAS_DMA_RXCHAN_BASEU_SIZ_M 0x3fff0000
++#define PAS_DMA_RXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
++#define PAS_DMA_RXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_RXCHAN_BASEU_SIZ_S) & \
++ PAS_DMA_RXCHAN_BASEU_SIZ_M)
+
-+ return ret;
-+}
++#define PAS_STATUS_PCNT_M 0x000000000000ffffull
++#define PAS_STATUS_PCNT_S 0
++#define PAS_STATUS_DCNT_M 0x00000000ffff0000ull
++#define PAS_STATUS_DCNT_S 16
++#define PAS_STATUS_BPCNT_M 0x0000ffff00000000ull
++#define PAS_STATUS_BPCNT_S 32
++#define PAS_STATUS_CAUSE_M 0xf000000000000000ull
++#define PAS_STATUS_TIMER 0x1000000000000000ull
++#define PAS_STATUS_ERROR 0x2000000000000000ull
++#define PAS_STATUS_SOFT 0x4000000000000000ull
++#define PAS_STATUS_INT 0x8000000000000000ull
+
-+/* Add a new group descriptor to global_bitmap. */
-+int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input)
-+{
-+ int ret;
-+ handle_t *handle;
-+ struct buffer_head *main_bm_bh = NULL;
-+ struct inode *main_bm_inode = NULL;
-+ struct ocfs2_dinode *fe = NULL;
-+ struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-+ struct buffer_head *group_bh = NULL;
-+ struct ocfs2_group_desc *group = NULL;
-+ struct ocfs2_chain_list *cl;
-+ struct ocfs2_chain_rec *cr;
-+ u16 cl_bpc;
++#define PAS_IOB_COM_PKTHDRCNT 0x120
++#define PAS_IOB_COM_PKTHDRCNT_PKTHDR1_M 0x0fff0000
++#define PAS_IOB_COM_PKTHDRCNT_PKTHDR1_S 16
++#define PAS_IOB_COM_PKTHDRCNT_PKTHDR0_M 0x00000fff
++#define PAS_IOB_COM_PKTHDRCNT_PKTHDR0_S 0
+
-+ mlog_entry_void();
++#define PAS_IOB_DMA_RXCH_CFG(i) (0x1100 + (i)*4)
++#define PAS_IOB_DMA_RXCH_CFG_CNTTH_M 0x00000fff
++#define PAS_IOB_DMA_RXCH_CFG_CNTTH_S 0
++#define PAS_IOB_DMA_RXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_RXCH_CFG_CNTTH_S) & \
++ PAS_IOB_DMA_RXCH_CFG_CNTTH_M)
++#define PAS_IOB_DMA_TXCH_CFG(i) (0x1200 + (i)*4)
++#define PAS_IOB_DMA_TXCH_CFG_CNTTH_M 0x00000fff
++#define PAS_IOB_DMA_TXCH_CFG_CNTTH_S 0
++#define PAS_IOB_DMA_TXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_TXCH_CFG_CNTTH_S) & \
++ PAS_IOB_DMA_TXCH_CFG_CNTTH_M)
++#define PAS_IOB_DMA_RXCH_STAT(i) (0x1300 + (i)*4)
++#define PAS_IOB_DMA_RXCH_STAT_INTGEN 0x00001000
++#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_M 0x00000fff
++#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_S 0
++#define PAS_IOB_DMA_RXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_RXCH_STAT_CNTDEL_S) &\
++ PAS_IOB_DMA_RXCH_STAT_CNTDEL_M)
++#define PAS_IOB_DMA_TXCH_STAT(i) (0x1400 + (i)*4)
++#define PAS_IOB_DMA_TXCH_STAT_INTGEN 0x00001000
++#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_M 0x00000fff
++#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_S 0
++#define PAS_IOB_DMA_TXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_TXCH_STAT_CNTDEL_S) &\
++ PAS_IOB_DMA_TXCH_STAT_CNTDEL_M)
++#define PAS_IOB_DMA_RXCH_RESET(i) (0x1500 + (i)*4)
++#define PAS_IOB_DMA_RXCH_RESET_PCNT_M 0xffff0000
++#define PAS_IOB_DMA_RXCH_RESET_PCNT_S 16
++#define PAS_IOB_DMA_RXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_RXCH_RESET_PCNT_S) & \
++ PAS_IOB_DMA_RXCH_RESET_PCNT_M)
++#define PAS_IOB_DMA_RXCH_RESET_PCNTRST 0x00000020
++#define PAS_IOB_DMA_RXCH_RESET_DCNTRST 0x00000010
++#define PAS_IOB_DMA_RXCH_RESET_TINTC 0x00000008
++#define PAS_IOB_DMA_RXCH_RESET_DINTC 0x00000004
++#define PAS_IOB_DMA_RXCH_RESET_SINTC 0x00000002
++#define PAS_IOB_DMA_RXCH_RESET_PINTC 0x00000001
++#define PAS_IOB_DMA_TXCH_RESET(i) (0x1600 + (i)*4)
++#define PAS_IOB_DMA_TXCH_RESET_PCNT_M 0xffff0000
++#define PAS_IOB_DMA_TXCH_RESET_PCNT_S 16
++#define PAS_IOB_DMA_TXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_TXCH_RESET_PCNT_S) & \
++ PAS_IOB_DMA_TXCH_RESET_PCNT_M)
++#define PAS_IOB_DMA_TXCH_RESET_PCNTRST 0x00000020
++#define PAS_IOB_DMA_TXCH_RESET_DCNTRST 0x00000010
++#define PAS_IOB_DMA_TXCH_RESET_TINTC 0x00000008
++#define PAS_IOB_DMA_TXCH_RESET_DINTC 0x00000004
++#define PAS_IOB_DMA_TXCH_RESET_SINTC 0x00000002
++#define PAS_IOB_DMA_TXCH_RESET_PINTC 0x00000001
+
-+ if (ocfs2_is_hard_readonly(osb) || ocfs2_is_soft_readonly(osb))
-+ return -EROFS;
++#define PAS_IOB_DMA_COM_TIMEOUTCFG 0x1700
++#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M 0x00ffffff
++#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S 0
++#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT(x) (((x) << PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S) & \
++ PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M)
+
-+ main_bm_inode = ocfs2_get_system_file_inode(osb,
-+ GLOBAL_BITMAP_SYSTEM_INODE,
-+ OCFS2_INVALID_SLOT);
-+ if (!main_bm_inode) {
-+ ret = -EINVAL;
-+ mlog_errno(ret);
-+ goto out;
-+ }
++/* Transmit descriptor fields */
++#define XCT_MACTX_T 0x8000000000000000ull
++#define XCT_MACTX_ST 0x4000000000000000ull
++#define XCT_MACTX_NORES 0x0000000000000000ull
++#define XCT_MACTX_8BRES 0x1000000000000000ull
++#define XCT_MACTX_24BRES 0x2000000000000000ull
++#define XCT_MACTX_40BRES 0x3000000000000000ull
++#define XCT_MACTX_I 0x0800000000000000ull
++#define XCT_MACTX_O 0x0400000000000000ull
++#define XCT_MACTX_E 0x0200000000000000ull
++#define XCT_MACTX_VLAN_M 0x0180000000000000ull
++#define XCT_MACTX_VLAN_NOP 0x0000000000000000ull
++#define XCT_MACTX_VLAN_REMOVE 0x0080000000000000ull
++#define XCT_MACTX_VLAN_INSERT 0x0100000000000000ull
++#define XCT_MACTX_VLAN_REPLACE 0x0180000000000000ull
++#define XCT_MACTX_CRC_M 0x0060000000000000ull
++#define XCT_MACTX_CRC_NOP 0x0000000000000000ull
++#define XCT_MACTX_CRC_INSERT 0x0020000000000000ull
++#define XCT_MACTX_CRC_PAD 0x0040000000000000ull
++#define XCT_MACTX_CRC_REPLACE 0x0060000000000000ull
++#define XCT_MACTX_SS 0x0010000000000000ull
++#define XCT_MACTX_LLEN_M 0x00007fff00000000ull
++#define XCT_MACTX_LLEN_S 32ull
++#define XCT_MACTX_LLEN(x) ((((long)(x)) << XCT_MACTX_LLEN_S) & \
++ XCT_MACTX_LLEN_M)
++#define XCT_MACTX_IPH_M 0x00000000f8000000ull
++#define XCT_MACTX_IPH_S 27ull
++#define XCT_MACTX_IPH(x) ((((long)(x)) << XCT_MACTX_IPH_S) & \
++ XCT_MACTX_IPH_M)
++#define XCT_MACTX_IPO_M 0x0000000007c00000ull
++#define XCT_MACTX_IPO_S 22ull
++#define XCT_MACTX_IPO(x) ((((long)(x)) << XCT_MACTX_IPO_S) & \
++ XCT_MACTX_IPO_M)
++#define XCT_MACTX_CSUM_M 0x0000000000000060ull
++#define XCT_MACTX_CSUM_NOP 0x0000000000000000ull
++#define XCT_MACTX_CSUM_TCP 0x0000000000000040ull
++#define XCT_MACTX_CSUM_UDP 0x0000000000000060ull
++#define XCT_MACTX_V6 0x0000000000000010ull
++#define XCT_MACTX_C 0x0000000000000004ull
++#define XCT_MACTX_AL2 0x0000000000000002ull
+
-+ mutex_lock(&main_bm_inode->i_mutex);
++/* Receive descriptor fields */
++#define XCT_MACRX_T 0x8000000000000000ull
++#define XCT_MACRX_ST 0x4000000000000000ull
++#define XCT_MACRX_RR_M 0x3000000000000000ull
++#define XCT_MACRX_RR_NORES 0x0000000000000000ull
++#define XCT_MACRX_RR_8BRES 0x1000000000000000ull
++#define XCT_MACRX_O 0x0400000000000000ull
++#define XCT_MACRX_E 0x0200000000000000ull
++#define XCT_MACRX_FF 0x0100000000000000ull
++#define XCT_MACRX_PF 0x0080000000000000ull
++#define XCT_MACRX_OB 0x0040000000000000ull
++#define XCT_MACRX_OD 0x0020000000000000ull
++#define XCT_MACRX_FS 0x0010000000000000ull
++#define XCT_MACRX_NB_M 0x000fc00000000000ull
++#define XCT_MACRX_NB_S 46ULL
++#define XCT_MACRX_NB(x) ((((long)(x)) << XCT_MACRX_NB_S) & \
++ XCT_MACRX_NB_M)
++#define XCT_MACRX_LLEN_M 0x00003fff00000000ull
++#define XCT_MACRX_LLEN_S 32ULL
++#define XCT_MACRX_LLEN(x) ((((long)(x)) << XCT_MACRX_LLEN_S) & \
++ XCT_MACRX_LLEN_M)
++#define XCT_MACRX_CRC 0x0000000080000000ull
++#define XCT_MACRX_LEN_M 0x0000000060000000ull
++#define XCT_MACRX_LEN_TOOSHORT 0x0000000020000000ull
++#define XCT_MACRX_LEN_BELOWMIN 0x0000000040000000ull
++#define XCT_MACRX_LEN_TRUNC 0x0000000060000000ull
++#define XCT_MACRX_CAST_M 0x0000000018000000ull
++#define XCT_MACRX_CAST_UNI 0x0000000000000000ull
++#define XCT_MACRX_CAST_MULTI 0x0000000008000000ull
++#define XCT_MACRX_CAST_BROAD 0x0000000010000000ull
++#define XCT_MACRX_CAST_PAUSE 0x0000000018000000ull
++#define XCT_MACRX_VLC_M 0x0000000006000000ull
++#define XCT_MACRX_FM 0x0000000001000000ull
++#define XCT_MACRX_HTY_M 0x0000000000c00000ull
++#define XCT_MACRX_HTY_IPV4_OK 0x0000000000000000ull
++#define XCT_MACRX_HTY_IPV6 0x0000000000400000ull
++#define XCT_MACRX_HTY_IPV4_BAD 0x0000000000800000ull
++#define XCT_MACRX_HTY_NONIP 0x0000000000c00000ull
++#define XCT_MACRX_IPP_M 0x00000000003f0000ull
++#define XCT_MACRX_IPP_S 16
++#define XCT_MACRX_CSUM_M 0x000000000000ffffull
++#define XCT_MACRX_CSUM_S 0
+
-+ ret = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_mutex;
-+ }
++#define XCT_PTR_T 0x8000000000000000ull
++#define XCT_PTR_LEN_M 0x7ffff00000000000ull
++#define XCT_PTR_LEN_S 44
++#define XCT_PTR_LEN(x) ((((long)(x)) << XCT_PTR_LEN_S) & \
++ XCT_PTR_LEN_M)
++#define XCT_PTR_ADDR_M 0x00000fffffffffffull
++#define XCT_PTR_ADDR_S 0
++#define XCT_PTR_ADDR(x) ((((long)(x)) << XCT_PTR_ADDR_S) & \
++ XCT_PTR_ADDR_M)
+
-+ fe = (struct ocfs2_dinode *)main_bm_bh->b_data;
++/* Receive interface 8byte result fields */
++#define XCT_RXRES_8B_L4O_M 0xff00000000000000ull
++#define XCT_RXRES_8B_L4O_S 56
++#define XCT_RXRES_8B_RULE_M 0x00ffff0000000000ull
++#define XCT_RXRES_8B_RULE_S 40
++#define XCT_RXRES_8B_EVAL_M 0x000000ffff000000ull
++#define XCT_RXRES_8B_EVAL_S 24
++#define XCT_RXRES_8B_HTYPE_M 0x0000000000f00000ull
++#define XCT_RXRES_8B_HASH_M 0x00000000000fffffull
++#define XCT_RXRES_8B_HASH_S 0
+
-+ if (le16_to_cpu(fe->id2.i_chain.cl_cpg) !=
-+ ocfs2_group_bitmap_size(osb->sb) * 8) {
-+ mlog(ML_ERROR, "The disk is too old and small."
-+ " Force to do offline resize.");
-+ ret = -EINVAL;
-+ goto out_unlock;
-+ }
++/* Receive interface buffer fields */
++#define XCT_RXB_LEN_M 0x0ffff00000000000ull
++#define XCT_RXB_LEN_S 44
++#define XCT_RXB_LEN(x) ((((long)(x)) << XCT_RXB_LEN_S) & \
++ XCT_RXB_LEN_M)
++#define XCT_RXB_ADDR_M 0x00000fffffffffffull
++#define XCT_RXB_ADDR_S 0
++#define XCT_RXB_ADDR(x) ((((long)(x)) << XCT_RXB_ADDR_S) & \
++ XCT_RXB_ADDR_M)
+
-+ ret = ocfs2_read_block(osb, input->group, &group_bh, 0, NULL);
-+ if (ret < 0) {
-+ mlog(ML_ERROR, "Can't read the group descriptor # %llu "
-+ "from the device.", (unsigned long long)input->group);
-+ goto out_unlock;
-+ }
++/* Copy descriptor fields */
++#define XCT_COPY_T 0x8000000000000000ull
++#define XCT_COPY_ST 0x4000000000000000ull
++#define XCT_COPY_RR_M 0x3000000000000000ull
++#define XCT_COPY_RR_NORES 0x0000000000000000ull
++#define XCT_COPY_RR_8BRES 0x1000000000000000ull
++#define XCT_COPY_RR_24BRES 0x2000000000000000ull
++#define XCT_COPY_RR_40BRES 0x3000000000000000ull
++#define XCT_COPY_I 0x0800000000000000ull
++#define XCT_COPY_O 0x0400000000000000ull
++#define XCT_COPY_E 0x0200000000000000ull
++#define XCT_COPY_STY_ZERO 0x01c0000000000000ull
++#define XCT_COPY_DTY_PREF 0x0038000000000000ull
++#define XCT_COPY_LLEN_M 0x0007ffff00000000ull
++#define XCT_COPY_LLEN_S 32
++#define XCT_COPY_LLEN(x) ((((long)(x)) << XCT_COPY_LLEN_S) & \
++ XCT_COPY_LLEN_M)
++#define XCT_COPY_SE 0x0000000000000001ull
+
-+ ocfs2_set_new_buffer_uptodate(inode, group_bh);
++/* Control descriptor fields */
++#define CTRL_CMD_T 0x8000000000000000ull
++#define CTRL_CMD_META_EVT 0x2000000000000000ull
++#define CTRL_CMD_O 0x0400000000000000ull
++#define CTRL_CMD_REG_M 0x000000000000000full
++#define CTRL_CMD_REG_S 0
++#define CTRL_CMD_REG(x) ((((long)(x)) << CTRL_CMD_REG_S) & \
++ CTRL_CMD_REG_M)
+
-+ ret = ocfs2_verify_group_and_input(main_bm_inode, fe, input, group_bh);
-+ if (ret) {
-+ mlog_errno(ret);
-+ goto out_unlock;
-+ }
+
-+ mlog(0, "Add a new group %llu in chain = %u, length = %u\n",
-+ (unsigned long long)input->group, input->chain, input->clusters);
+
-+ handle = ocfs2_start_trans(osb, OCFS2_GROUP_ADD_CREDITS);
-+ if (IS_ERR(handle)) {
-+ mlog_errno(PTR_ERR(handle));
-+ ret = -EINVAL;
-+ goto out_unlock;
-+ }
++/* Prototypes for the shared DMA functions in the platform code. */
+
-+ cl_bpc = le16_to_cpu(fe->id2.i_chain.cl_bpc);
-+ cl = &fe->id2.i_chain;
-+ cr = &cl->cl_recs[input->chain];
++/* DMA TX Channel type. Right now only limitations used are event types 0/1,
++ * for event-triggered DMA transactions.
++ */
+
-+ ret = ocfs2_journal_access(handle, main_bm_inode, group_bh,
-+ OCFS2_JOURNAL_ACCESS_WRITE);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_commit;
-+ }
++enum pasemi_dmachan_type {
++ RXCHAN = 0, /* Any RX chan */
++ TXCHAN = 1, /* Any TX chan */
++ TXCHAN_EVT0 = 0x1001, /* TX chan in event class 0 (chan 0-9) */
++ TXCHAN_EVT1 = 0x2001, /* TX chan in event class 1 (chan 10-19) */
++};
+
-+ group = (struct ocfs2_group_desc *)group_bh->b_data;
-+ group->bg_next_group = cr->c_blkno;
++struct pasemi_dmachan {
++ int chno; /* Channel number */
++ enum pasemi_dmachan_type chan_type; /* TX / RX */
++ u64 *status; /* Ptr to cacheable status */
++ int irq; /* IRQ used by channel */
++ unsigned int ring_size; /* size of allocated ring */
++ dma_addr_t ring_dma; /* DMA address for ring */
++ u64 *ring_virt; /* Virt address for ring */
++ void *priv; /* Ptr to start of client struct */
++};
+
-+ ret = ocfs2_journal_dirty(handle, group_bh);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_commit;
-+ }
++/* Read/write the different registers in the I/O Bridge, Ethernet
++ * and DMA Controller
++ */
++extern unsigned int pasemi_read_iob_reg(unsigned int reg);
++extern void pasemi_write_iob_reg(unsigned int reg, unsigned int val);
+
-+ ret = ocfs2_journal_access(handle, main_bm_inode, main_bm_bh,
-+ OCFS2_JOURNAL_ACCESS_WRITE);
-+ if (ret < 0) {
-+ mlog_errno(ret);
-+ goto out_commit;
-+ }
++extern unsigned int pasemi_read_mac_reg(int intf, unsigned int reg);
++extern void pasemi_write_mac_reg(int intf, unsigned int reg, unsigned int val);
+
-+ if (input->chain == le16_to_cpu(cl->cl_next_free_rec)) {
-+ le16_add_cpu(&cl->cl_next_free_rec, 1);
-+ memset(cr, 0, sizeof(struct ocfs2_chain_rec));
-+ }
++extern unsigned int pasemi_read_dma_reg(unsigned int reg);
++extern void pasemi_write_dma_reg(unsigned int reg, unsigned int val);
+
-+ cr->c_blkno = le64_to_cpu(input->group);
-+ le32_add_cpu(&cr->c_total, input->clusters * cl_bpc);
-+ le32_add_cpu(&cr->c_free, input->frees * cl_bpc);
++/* Channel management routines */
+
-+ le32_add_cpu(&fe->id1.bitmap1.i_total, input->clusters *cl_bpc);
-+ le32_add_cpu(&fe->id1.bitmap1.i_used,
-+ (input->clusters - input->frees) * cl_bpc);
-+ le32_add_cpu(&fe->i_clusters, input->clusters);
++extern void *pasemi_dma_alloc_chan(enum pasemi_dmachan_type type,
++ int total_size, int offset);
++extern void pasemi_dma_free_chan(struct pasemi_dmachan *chan);
+
-+ ocfs2_journal_dirty(handle, main_bm_bh);
++extern void pasemi_dma_start_chan(const struct pasemi_dmachan *chan,
++ const u32 cmdsta);
++extern int pasemi_dma_stop_chan(const struct pasemi_dmachan *chan);
+
-+ spin_lock(&OCFS2_I(main_bm_inode)->ip_lock);
-+ OCFS2_I(main_bm_inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
-+ le64_add_cpu(&fe->i_size, input->clusters << osb->s_clustersize_bits);
-+ spin_unlock(&OCFS2_I(main_bm_inode)->ip_lock);
-+ i_size_write(main_bm_inode, le64_to_cpu(fe->i_size));
++/* Common routines to allocate rings and buffers */
+
-+ ocfs2_update_super_and_backups(main_bm_inode, input->clusters);
++extern int pasemi_dma_alloc_ring(struct pasemi_dmachan *chan, int ring_size);
++extern void pasemi_dma_free_ring(struct pasemi_dmachan *chan);
+
-+out_commit:
-+ ocfs2_commit_trans(osb, handle);
-+out_unlock:
-+ brelse(group_bh);
-+ brelse(main_bm_bh);
++extern void *pasemi_dma_alloc_buf(struct pasemi_dmachan *chan, int size,
++ dma_addr_t *handle);
++extern void pasemi_dma_free_buf(struct pasemi_dmachan *chan, int size,
++ dma_addr_t *handle);
+
-+ ocfs2_inode_unlock(main_bm_inode, 1);
++/* Initialize the library, must be called before any other functions */
++extern int pasemi_dma_init(void);
+
-+out_mutex:
-+ mutex_unlock(&main_bm_inode->i_mutex);
-+ iput(main_bm_inode);
++#endif /* ASM_PASEMI_DMA_H */
+diff --git a/include/asm-powerpc/percpu.h b/include/asm-powerpc/percpu.h
+index 6b22962..cc1cbf6 100644
+--- a/include/asm-powerpc/percpu.h
++++ b/include/asm-powerpc/percpu.h
+@@ -16,15 +16,6 @@
+ #define __my_cpu_offset() get_paca()->data_offset
+ #define per_cpu_offset(x) (__per_cpu_offset(x))
+
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
+-
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
+-
+ /* var is in discarded region: offset to particular copy we want */
+ #define per_cpu(var, cpu) (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset(cpu)))
+ #define __get_cpu_var(var) (*RELOC_HIDE(&per_cpu__##var, __my_cpu_offset()))
+@@ -43,11 +34,6 @@ extern void setup_per_cpu_areas(void);
+
+ #else /* ! SMP */
+
+-#define DEFINE_PER_CPU(type, name) \
+- __typeof__(type) per_cpu__##name
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
+-
+ #define per_cpu(var, cpu) (*((void)(cpu), &per_cpu__##var))
+ #define __get_cpu_var(var) per_cpu__##var
+ #define __raw_get_cpu_var(var) per_cpu__##var
+@@ -56,9 +42,6 @@ extern void setup_per_cpu_areas(void);
+
+ #define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
+
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
+-
+ #else
+ #include <asm-generic/percpu.h>
+ #endif
+diff --git a/include/asm-powerpc/ptrace.h b/include/asm-powerpc/ptrace.h
+index 13fccc5..3063363 100644
+--- a/include/asm-powerpc/ptrace.h
++++ b/include/asm-powerpc/ptrace.h
+@@ -119,6 +119,13 @@ do { \
+ } while (0)
+ #endif /* __powerpc64__ */
+
++/*
++ * These are defined as per linux/ptrace.h, which see.
++ */
++#define arch_has_single_step() (1)
++extern void user_enable_single_step(struct task_struct *);
++extern void user_disable_single_step(struct task_struct *);
+
-+out:
-+ mlog_exit_void();
-+ return ret;
-+}
-diff --git a/fs/ocfs2/resize.h b/fs/ocfs2/resize.h
+ #endif /* __ASSEMBLY__ */
+
+ #endif /* __KERNEL__ */
+diff --git a/include/asm-s390/airq.h b/include/asm-s390/airq.h
new file mode 100644
-index 0000000..f38841a
+index 0000000..41d028c
--- /dev/null
-+++ b/fs/ocfs2/resize.h
-@@ -0,0 +1,32 @@
-+/* -*- mode: c; c-basic-offset: 8; -*-
-+ * vim: noexpandtab sw=8 ts=8 sts=0:
-+ *
-+ * resize.h
-+ *
-+ * Function prototypes
-+ *
-+ * Copyright (C) 2007 Oracle. All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
++++ b/include/asm-s390/airq.h
+@@ -0,0 +1,19 @@
++/*
++ * include/asm-s390/airq.h
+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this program; if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 021110-1307, USA.
++ * Copyright IBM Corp. 2002,2007
++ * Author(s): Ingo Adlung <adlung@de.ibm.com>
++ * Cornelia Huck <cornelia.huck@de.ibm.com>
++ * Arnd Bergmann <arndb@de.ibm.com>
++ * Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
+ */
+
-+#ifndef OCFS2_RESIZE_H
-+#define OCFS2_RESIZE_H
++#ifndef _ASM_S390_AIRQ_H
++#define _ASM_S390_AIRQ_H
+
-+int ocfs2_group_extend(struct inode * inode, int new_clusters);
-+int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input);
++typedef void (*adapter_int_handler_t)(void *, void *);
+
-+#endif /* OCFS2_RESIZE_H */
-diff --git a/fs/ocfs2/slot_map.c b/fs/ocfs2/slot_map.c
-index af4882b..3a50ce5 100644
---- a/fs/ocfs2/slot_map.c
-+++ b/fs/ocfs2/slot_map.c
-@@ -48,25 +48,6 @@ static void __ocfs2_fill_slot(struct ocfs2_slot_info *si,
- s16 slot_num,
- s16 node_num);
-
--/* Use the slot information we've collected to create a map of mounted
-- * nodes. Should be holding an EX on super block. assumes slot info is
-- * up to date. Note that we call this *after* we find a slot, so our
-- * own node should be set in the map too... */
--void ocfs2_populate_mounted_map(struct ocfs2_super *osb)
--{
-- int i;
-- struct ocfs2_slot_info *si = osb->slot_info;
--
-- spin_lock(&si->si_lock);
--
-- for (i = 0; i < si->si_size; i++)
-- if (si->si_global_node_nums[i] != OCFS2_INVALID_SLOT)
-- ocfs2_node_map_set_bit(osb, &osb->mounted_map,
-- si->si_global_node_nums[i]);
--
-- spin_unlock(&si->si_lock);
--}
--
- /* post the slot information on disk into our slot_info struct. */
- void ocfs2_update_slot_info(struct ocfs2_slot_info *si)
- {
-diff --git a/fs/ocfs2/slot_map.h b/fs/ocfs2/slot_map.h
-index d8c8cee..1025872 100644
---- a/fs/ocfs2/slot_map.h
-+++ b/fs/ocfs2/slot_map.h
-@@ -52,8 +52,6 @@ s16 ocfs2_node_num_to_slot(struct ocfs2_slot_info *si,
- void ocfs2_clear_slot(struct ocfs2_slot_info *si,
- s16 slot_num);
-
--void ocfs2_populate_mounted_map(struct ocfs2_super *osb);
--
- static inline int ocfs2_is_empty_slot(struct ocfs2_slot_info *si,
- int slot_num)
- {
-diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
-index 8f09f52..7e397e2 100644
---- a/fs/ocfs2/suballoc.c
-+++ b/fs/ocfs2/suballoc.c
-@@ -101,8 +101,6 @@ static inline int ocfs2_block_group_reasonably_empty(struct ocfs2_group_desc *bg
- static inline u32 ocfs2_desc_bitmap_to_cluster_off(struct inode *inode,
- u64 bg_blkno,
- u16 bg_bit_off);
--static inline u64 ocfs2_which_cluster_group(struct inode *inode,
-- u32 cluster);
- static inline void ocfs2_block_to_cluster_group(struct inode *inode,
- u64 data_blkno,
- u64 *bg_blkno,
-@@ -114,7 +112,7 @@ void ocfs2_free_alloc_context(struct ocfs2_alloc_context *ac)
-
- if (inode) {
- if (ac->ac_which != OCFS2_AC_USE_LOCAL)
-- ocfs2_meta_unlock(inode, 1);
-+ ocfs2_inode_unlock(inode, 1);
-
- mutex_unlock(&inode->i_mutex);
-
-@@ -131,9 +129,9 @@ static u32 ocfs2_bits_per_group(struct ocfs2_chain_list *cl)
- }
-
- /* somewhat more expensive than our other checks, so use sparingly. */
--static int ocfs2_check_group_descriptor(struct super_block *sb,
-- struct ocfs2_dinode *di,
-- struct ocfs2_group_desc *gd)
-+int ocfs2_check_group_descriptor(struct super_block *sb,
-+ struct ocfs2_dinode *di,
-+ struct ocfs2_group_desc *gd)
- {
- unsigned int max_bits;
-
-@@ -412,7 +410,7 @@ static int ocfs2_reserve_suballoc_bits(struct ocfs2_super *osb,
-
- mutex_lock(&alloc_inode->i_mutex);
-
-- status = ocfs2_meta_lock(alloc_inode, &bh, 1);
-+ status = ocfs2_inode_lock(alloc_inode, &bh, 1);
- if (status < 0) {
- mutex_unlock(&alloc_inode->i_mutex);
- iput(alloc_inode);
-@@ -1443,8 +1441,7 @@ static inline u32 ocfs2_desc_bitmap_to_cluster_off(struct inode *inode,
-
- /* given a cluster offset, calculate which block group it belongs to
- * and return that block offset. */
--static inline u64 ocfs2_which_cluster_group(struct inode *inode,
-- u32 cluster)
-+u64 ocfs2_which_cluster_group(struct inode *inode, u32 cluster)
- {
- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
- u32 group_no;
-@@ -1519,8 +1516,9 @@ int __ocfs2_claim_clusters(struct ocfs2_super *osb,
- if (min_clusters > (osb->bitmap_cpg - 1)) {
- /* The only paths asking for contiguousness
- * should know about this already. */
-- mlog(ML_ERROR, "minimum allocation requested exceeds "
-- "group bitmap size!");
-+ mlog(ML_ERROR, "minimum allocation requested %u exceeds "
-+ "group bitmap size %u!\n", min_clusters,
-+ osb->bitmap_cpg);
- status = -ENOSPC;
- goto bail;
- }
-diff --git a/fs/ocfs2/suballoc.h b/fs/ocfs2/suballoc.h
-index cafe937..8799033 100644
---- a/fs/ocfs2/suballoc.h
-+++ b/fs/ocfs2/suballoc.h
-@@ -147,4 +147,12 @@ static inline int ocfs2_is_cluster_bitmap(struct inode *inode)
- int ocfs2_reserve_cluster_bitmap_bits(struct ocfs2_super *osb,
- struct ocfs2_alloc_context *ac);
-
-+/* given a cluster offset, calculate which block group it belongs to
-+ * and return that block offset. */
-+u64 ocfs2_which_cluster_group(struct inode *inode, u32 cluster);
++void *s390_register_adapter_interrupt(adapter_int_handler_t, void *);
++void s390_unregister_adapter_interrupt(void *);
+
-+/* somewhat more expensive than our other checks, so use sparingly. */
-+int ocfs2_check_group_descriptor(struct super_block *sb,
-+ struct ocfs2_dinode *di,
-+ struct ocfs2_group_desc *gd);
- #endif /* _CHAINALLOC_H_ */
-diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
-index 5ee7754..01fe40e 100644
---- a/fs/ocfs2/super.c
-+++ b/fs/ocfs2/super.c
-@@ -65,7 +65,6 @@
- #include "sysfile.h"
- #include "uptodate.h"
- #include "ver.h"
--#include "vote.h"
++#endif /* _ASM_S390_AIRQ_H */
+diff --git a/include/asm-s390/bitops.h b/include/asm-s390/bitops.h
+index 34d9a63..dba6fec 100644
+--- a/include/asm-s390/bitops.h
++++ b/include/asm-s390/bitops.h
+@@ -772,6 +772,8 @@ static inline int sched_find_first_bit(unsigned long *b)
+ test_and_clear_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
+ #define ext2_test_bit(nr, addr) \
+ test_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
++#define ext2_find_next_bit(addr, size, off) \
++ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
- #include "buffer_head_io.h"
+ #ifndef __s390x__
-@@ -84,9 +83,11 @@ MODULE_LICENSE("GPL");
+diff --git a/include/asm-s390/cio.h b/include/asm-s390/cio.h
+index 2f08c16..123b557 100644
+--- a/include/asm-s390/cio.h
++++ b/include/asm-s390/cio.h
+@@ -24,8 +24,8 @@
+ * @fmt: format
+ * @pfch: prefetch
+ * @isic: initial-status interruption control
+- * @alcc: adress-limit checking control
+- * @ssi: supress-suspended interruption
++ * @alcc: address-limit checking control
++ * @ssi: suppress-suspended interruption
+ * @zcc: zero condition code
+ * @ectl: extended control
+ * @pno: path not operational
+diff --git a/include/asm-s390/dasd.h b/include/asm-s390/dasd.h
+index 604f68f..3f002e1 100644
+--- a/include/asm-s390/dasd.h
++++ b/include/asm-s390/dasd.h
+@@ -105,7 +105,7 @@ typedef struct dasd_information_t {
+ } dasd_information_t;
- struct mount_options
- {
-+ unsigned long commit_interval;
- unsigned long mount_opt;
- unsigned int atime_quantum;
- signed short slot;
-+ unsigned int localalloc_opt;
- };
+ /*
+- * Read Subsystem Data - Perfomance Statistics
++ * Read Subsystem Data - Performance Statistics
+ */
+ typedef struct dasd_rssd_perf_stats_t {
+ unsigned char invalid:1;
+diff --git a/include/asm-s390/ipl.h b/include/asm-s390/ipl.h
+index 2c40fd3..c1b2e50 100644
+--- a/include/asm-s390/ipl.h
++++ b/include/asm-s390/ipl.h
+@@ -83,6 +83,8 @@ extern u32 dump_prefix_page;
+ extern unsigned int zfcpdump_prefix_array[];
- static int ocfs2_parse_options(struct super_block *sb, char *options,
-@@ -150,6 +151,9 @@ enum {
- Opt_data_writeback,
- Opt_atime_quantum,
- Opt_slot,
-+ Opt_commit,
-+ Opt_localalloc,
-+ Opt_localflocks,
- Opt_err,
- };
+ extern void do_reipl(void);
++extern void do_halt(void);
++extern void do_poff(void);
+ extern void ipl_save_parameters(void);
-@@ -165,6 +169,9 @@ static match_table_t tokens = {
- {Opt_data_writeback, "data=writeback"},
- {Opt_atime_quantum, "atime_quantum=%u"},
- {Opt_slot, "preferred_slot=%u"},
-+ {Opt_commit, "commit=%u"},
-+ {Opt_localalloc, "localalloc=%d"},
-+ {Opt_localflocks, "localflocks"},
- {Opt_err, NULL}
+ enum {
+@@ -118,7 +120,7 @@ struct ipl_info
};
-@@ -213,7 +220,7 @@ static int ocfs2_init_global_system_inodes(struct ocfs2_super *osb)
-
- mlog_entry_void();
-
-- new = ocfs2_iget(osb, osb->root_blkno, OCFS2_FI_FLAG_SYSFILE);
-+ new = ocfs2_iget(osb, osb->root_blkno, OCFS2_FI_FLAG_SYSFILE, 0);
- if (IS_ERR(new)) {
- status = PTR_ERR(new);
- mlog_errno(status);
-@@ -221,7 +228,7 @@ static int ocfs2_init_global_system_inodes(struct ocfs2_super *osb)
- }
- osb->root_inode = new;
-
-- new = ocfs2_iget(osb, osb->system_dir_blkno, OCFS2_FI_FLAG_SYSFILE);
-+ new = ocfs2_iget(osb, osb->system_dir_blkno, OCFS2_FI_FLAG_SYSFILE, 0);
- if (IS_ERR(new)) {
- status = PTR_ERR(new);
- mlog_errno(status);
-@@ -443,6 +450,8 @@ unlock_osb:
- osb->s_mount_opt = parsed_options.mount_opt;
- osb->s_atime_quantum = parsed_options.atime_quantum;
- osb->preferred_slot = parsed_options.slot;
-+ if (parsed_options.commit_interval)
-+ osb->osb_commit_interval = parsed_options.commit_interval;
-
- if (!ocfs2_is_hard_readonly(osb))
- ocfs2_set_journal_params(osb);
-@@ -597,6 +606,8 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
- osb->s_mount_opt = parsed_options.mount_opt;
- osb->s_atime_quantum = parsed_options.atime_quantum;
- osb->preferred_slot = parsed_options.slot;
-+ osb->osb_commit_interval = parsed_options.commit_interval;
-+ osb->local_alloc_size = parsed_options.localalloc_opt;
-
- sb->s_magic = OCFS2_SUPER_MAGIC;
-
-@@ -747,9 +758,11 @@ static int ocfs2_parse_options(struct super_block *sb,
- mlog_entry("remount: %d, options: \"%s\"\n", is_remount,
- options ? options : "(none)");
-
-+ mopt->commit_interval = 0;
- mopt->mount_opt = 0;
- mopt->atime_quantum = OCFS2_DEFAULT_ATIME_QUANTUM;
- mopt->slot = OCFS2_INVALID_SLOT;
-+ mopt->localalloc_opt = OCFS2_DEFAULT_LOCAL_ALLOC_SIZE;
+ extern struct ipl_info ipl_info;
+-extern void setup_ipl_info(void);
++extern void setup_ipl(void);
- if (!options) {
- status = 1;
-@@ -816,6 +829,41 @@ static int ocfs2_parse_options(struct super_block *sb,
- if (option)
- mopt->slot = (s16)option;
- break;
-+ case Opt_commit:
-+ option = 0;
-+ if (match_int(&args[0], &option)) {
-+ status = 0;
-+ goto bail;
-+ }
-+ if (option < 0)
-+ return 0;
-+ if (option == 0)
-+ option = JBD_DEFAULT_MAX_COMMIT_AGE;
-+ mopt->commit_interval = HZ * option;
-+ break;
-+ case Opt_localalloc:
-+ option = 0;
-+ if (match_int(&args[0], &option)) {
-+ status = 0;
-+ goto bail;
-+ }
-+ if (option >= 0 && (option <= ocfs2_local_alloc_size(sb) * 8))
-+ mopt->localalloc_opt = option;
-+ break;
-+ case Opt_localflocks:
-+ /*
-+ * Changing this during remount could race
-+ * flock() requests, or "unbalance" existing
-+ * ones (e.g., a lock is taken in one mode but
-+ * dropped in the other). If users care enough
-+ * to flip locking modes during remount, we
-+ * could add a "local" flag to individual
-+ * flock structures for proper tracking of
-+ * state.
-+ */
-+ if (!is_remount)
-+ mopt->mount_opt |= OCFS2_MOUNT_LOCALFLOCKS;
-+ break;
- default:
- mlog(ML_ERROR,
- "Unrecognized mount option \"%s\" "
-@@ -864,6 +912,16 @@ static int ocfs2_show_options(struct seq_file *s, struct vfsmount *mnt)
- if (osb->s_atime_quantum != OCFS2_DEFAULT_ATIME_QUANTUM)
- seq_printf(s, ",atime_quantum=%u", osb->s_atime_quantum);
+ /*
+ * DIAG 308 support
+@@ -141,6 +143,10 @@ enum diag308_opt {
+ DIAG308_IPL_OPT_DUMP = 0x20,
+ };
-+ if (osb->osb_commit_interval)
-+ seq_printf(s, ",commit=%u",
-+ (unsigned) (osb->osb_commit_interval / HZ));
-+
-+ if (osb->local_alloc_size != OCFS2_DEFAULT_LOCAL_ALLOC_SIZE)
-+ seq_printf(s, ",localalloc=%d", osb->local_alloc_size);
-+
-+ if (opts & OCFS2_MOUNT_LOCALFLOCKS)
-+ seq_printf(s, ",localflocks,");
++enum diag308_flags {
++ DIAG308_FLAGS_LP_VALID = 0x80,
++};
+
- return 0;
- }
-
-@@ -965,7 +1023,7 @@ static int ocfs2_statfs(struct dentry *dentry, struct kstatfs *buf)
- goto bail;
- }
-
-- status = ocfs2_meta_lock(inode, &bh, 0);
-+ status = ocfs2_inode_lock(inode, &bh, 0);
- if (status < 0) {
- mlog_errno(status);
- goto bail;
-@@ -989,7 +1047,7 @@ static int ocfs2_statfs(struct dentry *dentry, struct kstatfs *buf)
-
- brelse(bh);
-
-- ocfs2_meta_unlock(inode, 0);
-+ ocfs2_inode_unlock(inode, 0);
- status = 0;
- bail:
- if (inode)
-@@ -1020,8 +1078,7 @@ static void ocfs2_inode_init_once(struct kmem_cache *cachep, void *data)
- oi->ip_clusters = 0;
-
- ocfs2_lock_res_init_once(&oi->ip_rw_lockres);
-- ocfs2_lock_res_init_once(&oi->ip_meta_lockres);
-- ocfs2_lock_res_init_once(&oi->ip_data_lockres);
-+ ocfs2_lock_res_init_once(&oi->ip_inode_lockres);
- ocfs2_lock_res_init_once(&oi->ip_open_lockres);
+ enum diag308_rc {
+ DIAG308_RC_OK = 1,
+ };
+diff --git a/include/asm-s390/mmu_context.h b/include/asm-s390/mmu_context.h
+index 05b8421..a77d4ba 100644
+--- a/include/asm-s390/mmu_context.h
++++ b/include/asm-s390/mmu_context.h
+@@ -12,10 +12,15 @@
+ #include <asm/pgalloc.h>
+ #include <asm-generic/mm_hooks.h>
- ocfs2_metadata_cache_init(&oi->vfs_inode);
-@@ -1117,25 +1174,12 @@ static int ocfs2_mount_volume(struct super_block *sb)
- goto leave;
- }
+-/*
+- * get a new mmu context.. S390 don't know about contexts.
+- */
+-#define init_new_context(tsk,mm) 0
++static inline int init_new_context(struct task_struct *tsk,
++ struct mm_struct *mm)
++{
++ mm->context = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS;
++#ifdef CONFIG_64BIT
++ mm->context |= _ASCE_TYPE_REGION3;
++#endif
++ return 0;
++}
-- status = ocfs2_register_hb_callbacks(osb);
-- if (status < 0) {
-- mlog_errno(status);
-- goto leave;
-- }
--
- status = ocfs2_dlm_init(osb);
- if (status < 0) {
- mlog_errno(status);
- goto leave;
- }
+ #define destroy_context(mm) do { } while (0)
-- /* requires vote_thread to be running. */
-- status = ocfs2_register_net_handlers(osb);
-- if (status < 0) {
-- mlog_errno(status);
-- goto leave;
-- }
--
- status = ocfs2_super_lock(osb, 1);
- if (status < 0) {
- mlog_errno(status);
-@@ -1150,8 +1194,6 @@ static int ocfs2_mount_volume(struct super_block *sb)
- goto leave;
- }
+@@ -27,19 +32,11 @@
-- ocfs2_populate_mounted_map(osb);
+ static inline void update_mm(struct mm_struct *mm, struct task_struct *tsk)
+ {
+- pgd_t *pgd = mm->pgd;
+- unsigned long asce_bits;
-
- /* load all node-local system inodes */
- status = ocfs2_init_local_system_inodes(osb);
- if (status < 0) {
-@@ -1174,15 +1216,6 @@ static int ocfs2_mount_volume(struct super_block *sb)
- if (ocfs2_mount_local(osb))
- goto leave;
+- /* Calculate asce bits from the first pgd table entry. */
+- asce_bits = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS;
+-#ifdef CONFIG_64BIT
+- asce_bits |= _ASCE_TYPE_REGION3;
+-#endif
+- S390_lowcore.user_asce = asce_bits | __pa(pgd);
++ S390_lowcore.user_asce = mm->context | __pa(mm->pgd);
+ if (switch_amode) {
+ /* Load primary space page table origin. */
+- pgd_t *shadow_pgd = get_shadow_table(pgd) ? : pgd;
+- S390_lowcore.user_exec_asce = asce_bits | __pa(shadow_pgd);
++ pgd_t *shadow_pgd = get_shadow_table(mm->pgd) ? : mm->pgd;
++ S390_lowcore.user_exec_asce = mm->context | __pa(shadow_pgd);
+ asm volatile(LCTL_OPCODE" 1,1,%0\n"
+ : : "m" (S390_lowcore.user_exec_asce) );
+ } else
+diff --git a/include/asm-s390/percpu.h b/include/asm-s390/percpu.h
+index 545857e..2d676a8 100644
+--- a/include/asm-s390/percpu.h
++++ b/include/asm-s390/percpu.h
+@@ -4,8 +4,6 @@
+ #include <linux/compiler.h>
+ #include <asm/lowcore.h>
-- /* This should be sent *after* we recovered our journal as it
-- * will cause other nodes to unmark us as needing
-- * recovery. However, we need to send it *before* dropping the
-- * super block lock as otherwise their recovery threads might
-- * try to clean us up while we're live! */
-- status = ocfs2_request_mount_vote(osb);
-- if (status < 0)
-- mlog_errno(status);
--
- leave:
- if (unlock_super)
- ocfs2_super_unlock(osb, 1);
-@@ -1240,10 +1273,6 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
- mlog_errno(tmp);
- return;
- }
+-#define __GENERIC_PER_CPU
-
-- tmp = ocfs2_request_umount_vote(osb);
-- if (tmp < 0)
-- mlog_errno(tmp);
- }
-
- if (osb->slot_num != OCFS2_INVALID_SLOT)
-@@ -1254,13 +1283,8 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
+ /*
+ * s390 uses its own implementation for per cpu data, the offset of
+ * the cpu local data area is cached in the cpu's lowcore memory.
+@@ -36,16 +34,6 @@
- ocfs2_release_system_inodes(osb);
+ extern unsigned long __per_cpu_offset[NR_CPUS];
-- if (osb->dlm) {
-- ocfs2_unregister_net_handlers(osb);
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) \
+- __typeof__(type) per_cpu__##name
-
-+ if (osb->dlm)
- ocfs2_dlm_shutdown(osb);
-- }
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
-
-- ocfs2_clear_hb_callbacks(osb);
+ #define __get_cpu_var(var) __reloc_hide(var,S390_lowcore.percpu_offset)
+ #define __raw_get_cpu_var(var) __reloc_hide(var,S390_lowcore.percpu_offset)
+ #define per_cpu(var,cpu) __reloc_hide(var,__per_cpu_offset[cpu])
+@@ -62,11 +50,6 @@ do { \
- debugfs_remove(osb->osb_debug_root);
-
-@@ -1315,7 +1339,6 @@ static int ocfs2_initialize_super(struct super_block *sb,
- int i, cbits, bbits;
- struct ocfs2_dinode *di = (struct ocfs2_dinode *)bh->b_data;
- struct inode *inode = NULL;
-- struct buffer_head *bitmap_bh = NULL;
- struct ocfs2_journal *journal;
- __le32 uuid_net_key;
- struct ocfs2_super *osb;
-@@ -1344,19 +1367,13 @@ static int ocfs2_initialize_super(struct super_block *sb,
- osb->s_sectsize_bits = blksize_bits(sector_size);
- BUG_ON(!osb->s_sectsize_bits);
+ #else /* ! SMP */
-- osb->net_response_ids = 0;
-- spin_lock_init(&osb->net_response_lock);
-- INIT_LIST_HEAD(&osb->net_response_list);
+-#define DEFINE_PER_CPU(type, name) \
+- __typeof__(type) per_cpu__##name
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
-
-- INIT_LIST_HEAD(&osb->osb_net_handlers);
- init_waitqueue_head(&osb->recovery_event);
-- spin_lock_init(&osb->vote_task_lock);
-- init_waitqueue_head(&osb->vote_event);
-- osb->vote_work_sequence = 0;
-- osb->vote_wake_sequence = 0;
-+ spin_lock_init(&osb->dc_task_lock);
-+ init_waitqueue_head(&osb->dc_event);
-+ osb->dc_work_sequence = 0;
-+ osb->dc_wake_sequence = 0;
- INIT_LIST_HEAD(&osb->blocked_lock_list);
- osb->blocked_lock_count = 0;
-- INIT_LIST_HEAD(&osb->vote_list);
- spin_lock_init(&osb->osb_lock);
-
- atomic_set(&osb->alloc_stats.moves, 0);
-@@ -1496,7 +1513,6 @@ static int ocfs2_initialize_super(struct super_block *sb,
- }
-
- memcpy(&uuid_net_key, di->id2.i_super.s_uuid, sizeof(uuid_net_key));
-- osb->net_key = le32_to_cpu(uuid_net_key);
+ #define __get_cpu_var(var) __reloc_hide(var,0)
+ #define __raw_get_cpu_var(var) __reloc_hide(var,0)
+ #define per_cpu(var,cpu) __reloc_hide(var,0)
+@@ -75,7 +58,4 @@ do { \
- strncpy(osb->vol_label, di->id2.i_super.s_label, 63);
- osb->vol_label[63] = '\0';
-@@ -1539,25 +1555,9 @@ static int ocfs2_initialize_super(struct super_block *sb,
- }
+ #define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
- osb->bitmap_blkno = OCFS2_I(inode)->ip_blkno;
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
-
-- /* We don't have a cluster lock on the bitmap here because
-- * we're only interested in static information and the extra
-- * complexity at mount time isn't worht it. Don't pass the
-- * inode in to the read function though as we don't want it to
-- * be put in the cache. */
-- status = ocfs2_read_block(osb, osb->bitmap_blkno, &bitmap_bh, 0,
-- NULL);
- iput(inode);
-- if (status < 0) {
-- mlog_errno(status);
-- goto bail;
-- }
-
-- di = (struct ocfs2_dinode *) bitmap_bh->b_data;
-- osb->bitmap_cpg = le16_to_cpu(di->id2.i_chain.cl_cpg);
-- brelse(bitmap_bh);
-- mlog(0, "cluster bitmap inode: %llu, clusters per group: %u\n",
-- (unsigned long long)osb->bitmap_blkno, osb->bitmap_cpg);
-+ osb->bitmap_cpg = ocfs2_group_bitmap_size(sb) * 8;
-
- status = ocfs2_init_slot_info(osb);
- if (status < 0) {
-diff --git a/fs/ocfs2/sysfile.c b/fs/ocfs2/sysfile.c
-index fd2e846..ab713eb 100644
---- a/fs/ocfs2/sysfile.c
-+++ b/fs/ocfs2/sysfile.c
-@@ -112,7 +112,7 @@ static struct inode * _ocfs2_get_system_file_inode(struct ocfs2_super *osb,
- goto bail;
- }
-
-- inode = ocfs2_iget(osb, blkno, OCFS2_FI_FLAG_SYSFILE);
-+ inode = ocfs2_iget(osb, blkno, OCFS2_FI_FLAG_SYSFILE, type);
- if (IS_ERR(inode)) {
- mlog_errno(PTR_ERR(inode));
- inode = NULL;
-diff --git a/fs/ocfs2/ver.c b/fs/ocfs2/ver.c
-index 5405ce1..e2488f4 100644
---- a/fs/ocfs2/ver.c
-+++ b/fs/ocfs2/ver.c
-@@ -29,7 +29,7 @@
-
- #include "ver.h"
-
--#define OCFS2_BUILD_VERSION "1.3.3"
-+#define OCFS2_BUILD_VERSION "1.5.0"
-
- #define VERSION_STR "OCFS2 " OCFS2_BUILD_VERSION
+ #endif /* __ARCH_S390_PERCPU__ */
+diff --git a/include/asm-s390/pgtable.h b/include/asm-s390/pgtable.h
+index 1f530f8..79b9eab 100644
+--- a/include/asm-s390/pgtable.h
++++ b/include/asm-s390/pgtable.h
+@@ -104,41 +104,27 @@ extern char empty_zero_page[PAGE_SIZE];
-diff --git a/fs/ocfs2/vote.c b/fs/ocfs2/vote.c
-deleted file mode 100644
-index c053585..0000000
---- a/fs/ocfs2/vote.c
-+++ /dev/null
-@@ -1,756 +0,0 @@
--/* -*- mode: c; c-basic-offset: 8; -*-
-- * vim: noexpandtab sw=8 ts=8 sts=0:
-- *
-- * vote.c
-- *
-- * description here
-- *
-- * Copyright (C) 2003, 2004 Oracle. All rights reserved.
-- *
-- * This program is free software; you can redistribute it and/or
-- * modify it under the terms of the GNU General Public
-- * License as published by the Free Software Foundation; either
-- * version 2 of the License, or (at your option) any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- * General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public
-- * License along with this program; if not, write to the
-- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-- * Boston, MA 021110-1307, USA.
-- */
--
--#include <linux/types.h>
--#include <linux/slab.h>
--#include <linux/highmem.h>
--#include <linux/kthread.h>
--
--#include <cluster/heartbeat.h>
--#include <cluster/nodemanager.h>
--#include <cluster/tcp.h>
--
--#include <dlm/dlmapi.h>
--
--#define MLOG_MASK_PREFIX ML_VOTE
--#include <cluster/masklog.h>
--
--#include "ocfs2.h"
--
--#include "alloc.h"
--#include "dlmglue.h"
--#include "extent_map.h"
--#include "heartbeat.h"
--#include "inode.h"
--#include "journal.h"
--#include "slot_map.h"
--#include "vote.h"
--
--#include "buffer_head_io.h"
--
--#define OCFS2_MESSAGE_TYPE_VOTE (0x1)
--#define OCFS2_MESSAGE_TYPE_RESPONSE (0x2)
--struct ocfs2_msg_hdr
--{
-- __be32 h_response_id; /* used to lookup message handle on sending
-- * node. */
-- __be32 h_request;
-- __be64 h_blkno;
-- __be32 h_generation;
-- __be32 h_node_num; /* node sending this particular message. */
--};
--
--struct ocfs2_vote_msg
--{
-- struct ocfs2_msg_hdr v_hdr;
-- __be32 v_reserved1;
--} __attribute__ ((packed));
--
--/* Responses are given these values to maintain backwards
-- * compatibility with older ocfs2 versions */
--#define OCFS2_RESPONSE_OK (0)
--#define OCFS2_RESPONSE_BUSY (-16)
--#define OCFS2_RESPONSE_BAD_MSG (-22)
--
--struct ocfs2_response_msg
--{
-- struct ocfs2_msg_hdr r_hdr;
-- __be32 r_response;
--} __attribute__ ((packed));
--
--struct ocfs2_vote_work {
-- struct list_head w_list;
-- struct ocfs2_vote_msg w_msg;
--};
--
--enum ocfs2_vote_request {
-- OCFS2_VOTE_REQ_INVALID = 0,
-- OCFS2_VOTE_REQ_MOUNT,
-- OCFS2_VOTE_REQ_UMOUNT,
-- OCFS2_VOTE_REQ_LAST
--};
--
--static inline int ocfs2_is_valid_vote_request(int request)
--{
-- return OCFS2_VOTE_REQ_INVALID < request &&
-- request < OCFS2_VOTE_REQ_LAST;
--}
--
--typedef void (*ocfs2_net_response_callback)(void *priv,
-- struct ocfs2_response_msg *resp);
--struct ocfs2_net_response_cb {
-- ocfs2_net_response_callback rc_cb;
-- void *rc_priv;
--};
--
--struct ocfs2_net_wait_ctxt {
-- struct list_head n_list;
-- u32 n_response_id;
-- wait_queue_head_t n_event;
-- struct ocfs2_node_map n_node_map;
-- int n_response; /* an agreggate response. 0 if
-- * all nodes are go, < 0 on any
-- * negative response from any
-- * node or network error. */
-- struct ocfs2_net_response_cb *n_callback;
--};
--
--static void ocfs2_process_mount_request(struct ocfs2_super *osb,
-- unsigned int node_num)
--{
-- mlog(0, "MOUNT vote from node %u\n", node_num);
-- /* The other node only sends us this message when he has an EX
-- * on the superblock, so our recovery threads (if having been
-- * launched) are waiting on it.*/
-- ocfs2_recovery_map_clear(osb, node_num);
-- ocfs2_node_map_set_bit(osb, &osb->mounted_map, node_num);
--
-- /* We clear the umount map here because a node may have been
-- * previously mounted, safely unmounted but never stopped
-- * heartbeating - in which case we'd have a stale entry. */
-- ocfs2_node_map_clear_bit(osb, &osb->umount_map, node_num);
--}
--
--static void ocfs2_process_umount_request(struct ocfs2_super *osb,
-- unsigned int node_num)
--{
-- mlog(0, "UMOUNT vote from node %u\n", node_num);
-- ocfs2_node_map_clear_bit(osb, &osb->mounted_map, node_num);
-- ocfs2_node_map_set_bit(osb, &osb->umount_map, node_num);
--}
--
--static void ocfs2_process_vote(struct ocfs2_super *osb,
-- struct ocfs2_vote_msg *msg)
--{
-- int net_status, vote_response;
-- unsigned int node_num;
-- u64 blkno;
-- enum ocfs2_vote_request request;
-- struct ocfs2_msg_hdr *hdr = &msg->v_hdr;
-- struct ocfs2_response_msg response;
--
-- /* decode the network mumbo jumbo into local variables. */
-- request = be32_to_cpu(hdr->h_request);
-- blkno = be64_to_cpu(hdr->h_blkno);
-- node_num = be32_to_cpu(hdr->h_node_num);
--
-- mlog(0, "processing vote: request = %u, blkno = %llu, node_num = %u\n",
-- request, (unsigned long long)blkno, node_num);
--
-- if (!ocfs2_is_valid_vote_request(request)) {
-- mlog(ML_ERROR, "Invalid vote request %d from node %u\n",
-- request, node_num);
-- vote_response = OCFS2_RESPONSE_BAD_MSG;
-- goto respond;
-- }
--
-- vote_response = OCFS2_RESPONSE_OK;
--
-- switch (request) {
-- case OCFS2_VOTE_REQ_UMOUNT:
-- ocfs2_process_umount_request(osb, node_num);
-- goto respond;
-- case OCFS2_VOTE_REQ_MOUNT:
-- ocfs2_process_mount_request(osb, node_num);
-- goto respond;
-- default:
-- /* avoids a gcc warning */
-- break;
-- }
--
--respond:
-- /* Response struture is small so we just put it on the stack
-- * and stuff it inline. */
-- memset(&response, 0, sizeof(struct ocfs2_response_msg));
-- response.r_hdr.h_response_id = hdr->h_response_id;
-- response.r_hdr.h_blkno = hdr->h_blkno;
-- response.r_hdr.h_generation = hdr->h_generation;
-- response.r_hdr.h_node_num = cpu_to_be32(osb->node_num);
-- response.r_response = cpu_to_be32(vote_response);
--
-- net_status = o2net_send_message(OCFS2_MESSAGE_TYPE_RESPONSE,
-- osb->net_key,
-- &response,
-- sizeof(struct ocfs2_response_msg),
-- node_num,
-- NULL);
-- /* We still want to error print for ENOPROTOOPT here. The
-- * sending node shouldn't have unregistered his net handler
-- * without sending an unmount vote 1st */
-- if (net_status < 0
-- && net_status != -ETIMEDOUT
-- && net_status != -ENOTCONN)
-- mlog(ML_ERROR, "message to node %u fails with error %d!\n",
-- node_num, net_status);
--}
--
--static void ocfs2_vote_thread_do_work(struct ocfs2_super *osb)
--{
-- unsigned long processed;
-- struct ocfs2_lock_res *lockres;
-- struct ocfs2_vote_work *work;
--
-- mlog_entry_void();
--
-- spin_lock(&osb->vote_task_lock);
-- /* grab this early so we know to try again if a state change and
-- * wake happens part-way through our work */
-- osb->vote_work_sequence = osb->vote_wake_sequence;
--
-- processed = osb->blocked_lock_count;
-- while (processed) {
-- BUG_ON(list_empty(&osb->blocked_lock_list));
--
-- lockres = list_entry(osb->blocked_lock_list.next,
-- struct ocfs2_lock_res, l_blocked_list);
-- list_del_init(&lockres->l_blocked_list);
-- osb->blocked_lock_count--;
-- spin_unlock(&osb->vote_task_lock);
--
-- BUG_ON(!processed);
-- processed--;
--
-- ocfs2_process_blocked_lock(osb, lockres);
--
-- spin_lock(&osb->vote_task_lock);
-- }
--
-- while (osb->vote_count) {
-- BUG_ON(list_empty(&osb->vote_list));
-- work = list_entry(osb->vote_list.next,
-- struct ocfs2_vote_work, w_list);
-- list_del(&work->w_list);
-- osb->vote_count--;
-- spin_unlock(&osb->vote_task_lock);
--
-- ocfs2_process_vote(osb, &work->w_msg);
-- kfree(work);
--
-- spin_lock(&osb->vote_task_lock);
-- }
-- spin_unlock(&osb->vote_task_lock);
--
-- mlog_exit_void();
--}
--
--static int ocfs2_vote_thread_lists_empty(struct ocfs2_super *osb)
--{
-- int empty = 0;
--
-- spin_lock(&osb->vote_task_lock);
-- if (list_empty(&osb->blocked_lock_list) &&
-- list_empty(&osb->vote_list))
-- empty = 1;
--
-- spin_unlock(&osb->vote_task_lock);
-- return empty;
--}
--
--static int ocfs2_vote_thread_should_wake(struct ocfs2_super *osb)
--{
-- int should_wake = 0;
--
-- spin_lock(&osb->vote_task_lock);
-- if (osb->vote_work_sequence != osb->vote_wake_sequence)
-- should_wake = 1;
-- spin_unlock(&osb->vote_task_lock);
--
-- return should_wake;
--}
--
--int ocfs2_vote_thread(void *arg)
--{
-- int status = 0;
-- struct ocfs2_super *osb = arg;
--
-- /* only quit once we've been asked to stop and there is no more
-- * work available */
-- while (!(kthread_should_stop() &&
-- ocfs2_vote_thread_lists_empty(osb))) {
--
-- wait_event_interruptible(osb->vote_event,
-- ocfs2_vote_thread_should_wake(osb) ||
-- kthread_should_stop());
--
-- mlog(0, "vote_thread: awoken\n");
--
-- ocfs2_vote_thread_do_work(osb);
-- }
--
-- osb->vote_task = NULL;
-- return status;
--}
--
--static struct ocfs2_net_wait_ctxt *ocfs2_new_net_wait_ctxt(unsigned int response_id)
--{
-- struct ocfs2_net_wait_ctxt *w;
--
-- w = kzalloc(sizeof(*w), GFP_NOFS);
-- if (!w) {
-- mlog_errno(-ENOMEM);
-- goto bail;
-- }
--
-- INIT_LIST_HEAD(&w->n_list);
-- init_waitqueue_head(&w->n_event);
-- ocfs2_node_map_init(&w->n_node_map);
-- w->n_response_id = response_id;
-- w->n_callback = NULL;
--bail:
-- return w;
--}
--
--static unsigned int ocfs2_new_response_id(struct ocfs2_super *osb)
--{
-- unsigned int ret;
--
-- spin_lock(&osb->net_response_lock);
-- ret = ++osb->net_response_ids;
-- spin_unlock(&osb->net_response_lock);
--
-- return ret;
--}
--
--static void ocfs2_dequeue_net_wait_ctxt(struct ocfs2_super *osb,
-- struct ocfs2_net_wait_ctxt *w)
--{
-- spin_lock(&osb->net_response_lock);
-- list_del(&w->n_list);
-- spin_unlock(&osb->net_response_lock);
--}
--
--static void ocfs2_queue_net_wait_ctxt(struct ocfs2_super *osb,
-- struct ocfs2_net_wait_ctxt *w)
--{
-- spin_lock(&osb->net_response_lock);
-- list_add_tail(&w->n_list,
-- &osb->net_response_list);
-- spin_unlock(&osb->net_response_lock);
--}
--
--static void __ocfs2_mark_node_responded(struct ocfs2_super *osb,
-- struct ocfs2_net_wait_ctxt *w,
-- int node_num)
--{
-- assert_spin_locked(&osb->net_response_lock);
--
-- ocfs2_node_map_clear_bit(osb, &w->n_node_map, node_num);
-- if (ocfs2_node_map_is_empty(osb, &w->n_node_map))
-- wake_up(&w->n_event);
--}
--
--/* Intended to be called from the node down callback, we fake remove
-- * the node from all our response contexts */
--void ocfs2_remove_node_from_vote_queues(struct ocfs2_super *osb,
-- int node_num)
--{
-- struct list_head *p;
-- struct ocfs2_net_wait_ctxt *w = NULL;
--
-- spin_lock(&osb->net_response_lock);
--
-- list_for_each(p, &osb->net_response_list) {
-- w = list_entry(p, struct ocfs2_net_wait_ctxt, n_list);
--
-- __ocfs2_mark_node_responded(osb, w, node_num);
-- }
--
-- spin_unlock(&osb->net_response_lock);
--}
--
--static int ocfs2_broadcast_vote(struct ocfs2_super *osb,
-- struct ocfs2_vote_msg *request,
-- unsigned int response_id,
-- int *response,
-- struct ocfs2_net_response_cb *callback)
--{
-- int status, i, remote_err;
-- struct ocfs2_net_wait_ctxt *w = NULL;
-- int dequeued = 0;
--
-- mlog_entry_void();
--
-- w = ocfs2_new_net_wait_ctxt(response_id);
-- if (!w) {
-- status = -ENOMEM;
-- mlog_errno(status);
-- goto bail;
-- }
-- w->n_callback = callback;
--
-- /* we're pretty much ready to go at this point, and this fills
-- * in n_response which we need anyway... */
-- ocfs2_queue_net_wait_ctxt(osb, w);
--
-- i = ocfs2_node_map_iterate(osb, &osb->mounted_map, 0);
--
-- while (i != O2NM_INVALID_NODE_NUM) {
-- if (i != osb->node_num) {
-- mlog(0, "trying to send request to node %i\n", i);
-- ocfs2_node_map_set_bit(osb, &w->n_node_map, i);
--
-- remote_err = 0;
-- status = o2net_send_message(OCFS2_MESSAGE_TYPE_VOTE,
-- osb->net_key,
-- request,
-- sizeof(*request),
-- i,
-- &remote_err);
-- if (status == -ETIMEDOUT) {
-- mlog(0, "remote node %d timed out!\n", i);
-- status = -EAGAIN;
-- goto bail;
-- }
-- if (remote_err < 0) {
-- status = remote_err;
-- mlog(0, "remote error %d on node %d!\n",
-- remote_err, i);
-- mlog_errno(status);
-- goto bail;
-- }
-- if (status < 0) {
-- mlog_errno(status);
-- goto bail;
-- }
-- }
-- i++;
-- i = ocfs2_node_map_iterate(osb, &osb->mounted_map, i);
-- mlog(0, "next is %d, i am %d\n", i, osb->node_num);
-- }
-- mlog(0, "done sending, now waiting on responses...\n");
--
-- wait_event(w->n_event, ocfs2_node_map_is_empty(osb, &w->n_node_map));
--
-- ocfs2_dequeue_net_wait_ctxt(osb, w);
-- dequeued = 1;
--
-- *response = w->n_response;
-- status = 0;
--bail:
-- if (w) {
-- if (!dequeued)
-- ocfs2_dequeue_net_wait_ctxt(osb, w);
-- kfree(w);
-- }
--
-- mlog_exit(status);
-- return status;
--}
--
--static struct ocfs2_vote_msg * ocfs2_new_vote_request(struct ocfs2_super *osb,
-- u64 blkno,
-- unsigned int generation,
-- enum ocfs2_vote_request type)
--{
-- struct ocfs2_vote_msg *request;
-- struct ocfs2_msg_hdr *hdr;
--
-- BUG_ON(!ocfs2_is_valid_vote_request(type));
--
-- request = kzalloc(sizeof(*request), GFP_NOFS);
-- if (!request) {
-- mlog_errno(-ENOMEM);
-- } else {
-- hdr = &request->v_hdr;
-- hdr->h_node_num = cpu_to_be32(osb->node_num);
-- hdr->h_request = cpu_to_be32(type);
-- hdr->h_blkno = cpu_to_be64(blkno);
-- hdr->h_generation = cpu_to_be32(generation);
-- }
--
-- return request;
--}
--
--/* Complete the buildup of a new vote request and process the
-- * broadcast return value. */
--static int ocfs2_do_request_vote(struct ocfs2_super *osb,
-- struct ocfs2_vote_msg *request,
-- struct ocfs2_net_response_cb *callback)
--{
-- int status, response = -EBUSY;
-- unsigned int response_id;
-- struct ocfs2_msg_hdr *hdr;
--
-- response_id = ocfs2_new_response_id(osb);
--
-- hdr = &request->v_hdr;
-- hdr->h_response_id = cpu_to_be32(response_id);
--
-- status = ocfs2_broadcast_vote(osb, request, response_id, &response,
-- callback);
-- if (status < 0) {
-- mlog_errno(status);
-- goto bail;
-- }
--
-- status = response;
--bail:
--
-- return status;
--}
--
--int ocfs2_request_mount_vote(struct ocfs2_super *osb)
--{
-- int status;
-- struct ocfs2_vote_msg *request = NULL;
--
-- request = ocfs2_new_vote_request(osb, 0ULL, 0, OCFS2_VOTE_REQ_MOUNT);
-- if (!request) {
-- status = -ENOMEM;
-- goto bail;
-- }
--
-- status = -EAGAIN;
-- while (status == -EAGAIN) {
-- if (!(osb->s_mount_opt & OCFS2_MOUNT_NOINTR) &&
-- signal_pending(current)) {
-- status = -ERESTARTSYS;
-- goto bail;
-- }
--
-- if (ocfs2_node_map_is_only(osb, &osb->mounted_map,
-- osb->node_num)) {
-- status = 0;
-- goto bail;
-- }
--
-- status = ocfs2_do_request_vote(osb, request, NULL);
-- }
--
--bail:
-- kfree(request);
-- return status;
--}
--
--int ocfs2_request_umount_vote(struct ocfs2_super *osb)
--{
-- int status;
-- struct ocfs2_vote_msg *request = NULL;
--
-- request = ocfs2_new_vote_request(osb, 0ULL, 0, OCFS2_VOTE_REQ_UMOUNT);
-- if (!request) {
-- status = -ENOMEM;
-- goto bail;
-- }
--
-- status = -EAGAIN;
-- while (status == -EAGAIN) {
-- /* Do not check signals on this vote... We really want
-- * this one to go all the way through. */
--
-- if (ocfs2_node_map_is_only(osb, &osb->mounted_map,
-- osb->node_num)) {
-- status = 0;
-- goto bail;
-- }
--
-- status = ocfs2_do_request_vote(osb, request, NULL);
-- }
--
--bail:
-- kfree(request);
-- return status;
--}
--
--/* TODO: This should eventually be a hash table! */
--static struct ocfs2_net_wait_ctxt * __ocfs2_find_net_wait_ctxt(struct ocfs2_super *osb,
-- u32 response_id)
--{
-- struct list_head *p;
-- struct ocfs2_net_wait_ctxt *w = NULL;
--
-- list_for_each(p, &osb->net_response_list) {
-- w = list_entry(p, struct ocfs2_net_wait_ctxt, n_list);
-- if (response_id == w->n_response_id)
-- break;
-- w = NULL;
-- }
--
-- return w;
--}
--
--/* Translate response codes into local node errno values */
--static inline int ocfs2_translate_response(int response)
--{
-- int ret;
--
-- switch (response) {
-- case OCFS2_RESPONSE_OK:
-- ret = 0;
-- break;
--
-- case OCFS2_RESPONSE_BUSY:
-- ret = -EBUSY;
-- break;
--
-- default:
-- ret = -EINVAL;
-- }
--
-- return ret;
--}
--
--static int ocfs2_handle_response_message(struct o2net_msg *msg,
-- u32 len,
-- void *data, void **ret_data)
--{
-- unsigned int response_id, node_num;
-- int response_status;
-- struct ocfs2_super *osb = data;
-- struct ocfs2_response_msg *resp;
-- struct ocfs2_net_wait_ctxt * w;
-- struct ocfs2_net_response_cb *resp_cb;
--
-- resp = (struct ocfs2_response_msg *) msg->buf;
--
-- response_id = be32_to_cpu(resp->r_hdr.h_response_id);
-- node_num = be32_to_cpu(resp->r_hdr.h_node_num);
-- response_status =
-- ocfs2_translate_response(be32_to_cpu(resp->r_response));
--
-- mlog(0, "received response message:\n");
-- mlog(0, "h_response_id = %u\n", response_id);
-- mlog(0, "h_request = %u\n", be32_to_cpu(resp->r_hdr.h_request));
-- mlog(0, "h_blkno = %llu\n",
-- (unsigned long long)be64_to_cpu(resp->r_hdr.h_blkno));
-- mlog(0, "h_generation = %u\n", be32_to_cpu(resp->r_hdr.h_generation));
-- mlog(0, "h_node_num = %u\n", node_num);
-- mlog(0, "r_response = %d\n", response_status);
--
-- spin_lock(&osb->net_response_lock);
-- w = __ocfs2_find_net_wait_ctxt(osb, response_id);
-- if (!w) {
-- mlog(0, "request not found!\n");
-- goto bail;
-- }
-- resp_cb = w->n_callback;
--
-- if (response_status && (!w->n_response)) {
-- /* we only really need one negative response so don't
-- * set it twice. */
-- w->n_response = response_status;
-- }
--
-- if (resp_cb) {
-- spin_unlock(&osb->net_response_lock);
--
-- resp_cb->rc_cb(resp_cb->rc_priv, resp);
--
-- spin_lock(&osb->net_response_lock);
-- }
--
-- __ocfs2_mark_node_responded(osb, w, node_num);
--bail:
-- spin_unlock(&osb->net_response_lock);
--
-- return 0;
--}
--
--static int ocfs2_handle_vote_message(struct o2net_msg *msg,
-- u32 len,
-- void *data, void **ret_data)
--{
-- int status;
-- struct ocfs2_super *osb = data;
-- struct ocfs2_vote_work *work;
--
-- work = kmalloc(sizeof(struct ocfs2_vote_work), GFP_NOFS);
-- if (!work) {
-- status = -ENOMEM;
-- mlog_errno(status);
-- goto bail;
-- }
--
-- INIT_LIST_HEAD(&work->w_list);
-- memcpy(&work->w_msg, msg->buf, sizeof(struct ocfs2_vote_msg));
--
-- mlog(0, "scheduling vote request:\n");
-- mlog(0, "h_response_id = %u\n",
-- be32_to_cpu(work->w_msg.v_hdr.h_response_id));
-- mlog(0, "h_request = %u\n", be32_to_cpu(work->w_msg.v_hdr.h_request));
-- mlog(0, "h_blkno = %llu\n",
-- (unsigned long long)be64_to_cpu(work->w_msg.v_hdr.h_blkno));
-- mlog(0, "h_generation = %u\n",
-- be32_to_cpu(work->w_msg.v_hdr.h_generation));
-- mlog(0, "h_node_num = %u\n",
-- be32_to_cpu(work->w_msg.v_hdr.h_node_num));
--
-- spin_lock(&osb->vote_task_lock);
-- list_add_tail(&work->w_list, &osb->vote_list);
-- osb->vote_count++;
-- spin_unlock(&osb->vote_task_lock);
--
-- ocfs2_kick_vote_thread(osb);
--
-- status = 0;
--bail:
-- return status;
--}
--
--void ocfs2_unregister_net_handlers(struct ocfs2_super *osb)
--{
-- if (!osb->net_key)
-- return;
--
-- o2net_unregister_handler_list(&osb->osb_net_handlers);
--
-- if (!list_empty(&osb->net_response_list))
-- mlog(ML_ERROR, "net response list not empty!\n");
--
-- osb->net_key = 0;
--}
--
--int ocfs2_register_net_handlers(struct ocfs2_super *osb)
--{
-- int status = 0;
--
-- if (ocfs2_mount_local(osb))
-- return 0;
--
-- status = o2net_register_handler(OCFS2_MESSAGE_TYPE_RESPONSE,
-- osb->net_key,
-- sizeof(struct ocfs2_response_msg),
-- ocfs2_handle_response_message,
-- osb, NULL, &osb->osb_net_handlers);
-- if (status) {
-- mlog_errno(status);
-- goto bail;
-- }
--
-- status = o2net_register_handler(OCFS2_MESSAGE_TYPE_VOTE,
-- osb->net_key,
-- sizeof(struct ocfs2_vote_msg),
-- ocfs2_handle_vote_message,
-- osb, NULL, &osb->osb_net_handlers);
-- if (status) {
-- mlog_errno(status);
-- goto bail;
-- }
--bail:
-- if (status < 0)
-- ocfs2_unregister_net_handlers(osb);
--
-- return status;
--}
-diff --git a/fs/ocfs2/vote.h b/fs/ocfs2/vote.h
-deleted file mode 100644
-index 9ea46f6..0000000
---- a/fs/ocfs2/vote.h
-+++ /dev/null
-@@ -1,48 +0,0 @@
--/* -*- mode: c; c-basic-offset: 8; -*-
-- * vim: noexpandtab sw=8 ts=8 sts=0:
-- *
-- * vote.h
-- *
-- * description here
-- *
-- * Copyright (C) 2002, 2004 Oracle. All rights reserved.
-- *
-- * This program is free software; you can redistribute it and/or
-- * modify it under the terms of the GNU General Public
-- * License as published by the Free Software Foundation; either
-- * version 2 of the License, or (at your option) any later version.
-- *
-- * This program is distributed in the hope that it will be useful,
-- * but WITHOUT ANY WARRANTY; without even the implied warranty of
-- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- * General Public License for more details.
-- *
-- * You should have received a copy of the GNU General Public
-- * License along with this program; if not, write to the
-- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-- * Boston, MA 021110-1307, USA.
+ #ifndef __ASSEMBLY__
+ /*
+- * Just any arbitrary offset to the start of the vmalloc VM area: the
+- * current 8MB value just means that there will be a 8MB "hole" after the
+- * physical memory until the kernel virtual memory starts. That means that
+- * any out-of-bounds memory accesses will hopefully be caught.
+- * The vmalloc() routines leaves a hole of 4kB between each vmalloced
+- * area for the same reason. ;)
+- * vmalloc area starts at 4GB to prevent syscall table entry exchanging
+- * from modules.
- */
+-extern unsigned long vmalloc_end;
-
--
--#ifndef VOTE_H
--#define VOTE_H
--
--int ocfs2_vote_thread(void *arg);
--static inline void ocfs2_kick_vote_thread(struct ocfs2_super *osb)
--{
-- spin_lock(&osb->vote_task_lock);
-- /* make sure the voting thread gets a swipe at whatever changes
-- * the caller may have made to the voting state */
-- osb->vote_wake_sequence++;
-- spin_unlock(&osb->vote_task_lock);
-- wake_up(&osb->vote_event);
--}
--
--int ocfs2_request_mount_vote(struct ocfs2_super *osb);
--int ocfs2_request_umount_vote(struct ocfs2_super *osb);
--int ocfs2_register_net_handlers(struct ocfs2_super *osb);
--void ocfs2_unregister_net_handlers(struct ocfs2_super *osb);
--
--void ocfs2_remove_node_from_vote_queues(struct ocfs2_super *osb,
-- int node_num);
+-#ifdef CONFIG_64BIT
+-#define VMALLOC_ADDR (max(0x100000000UL, (unsigned long) high_memory))
+-#else
+-#define VMALLOC_ADDR ((unsigned long) high_memory)
-#endif
-diff --git a/fs/partitions/check.c b/fs/partitions/check.c
-index 722e12e..739da70 100644
---- a/fs/partitions/check.c
-+++ b/fs/partitions/check.c
-@@ -195,96 +195,45 @@ check_partition(struct gendisk *hd, struct block_device *bdev)
- return ERR_PTR(res);
- }
-
--/*
-- * sysfs bindings for partitions
-- */
--
--struct part_attribute {
-- struct attribute attr;
-- ssize_t (*show)(struct hd_struct *,char *);
-- ssize_t (*store)(struct hd_struct *,const char *, size_t);
--};
+-#define VMALLOC_OFFSET (8*1024*1024)
+-#define VMALLOC_START ((VMALLOC_ADDR + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
+-#define VMALLOC_END vmalloc_end
-
--static ssize_t
--part_attr_show(struct kobject * kobj, struct attribute * attr, char * page)
-+static ssize_t part_start_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
-- struct part_attribute * part_attr = container_of(attr,struct part_attribute,attr);
-- ssize_t ret = 0;
-- if (part_attr->show)
-- ret = part_attr->show(p, page);
-- return ret;
--}
--static ssize_t
--part_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char *page, size_t count)
--{
-- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
-- struct part_attribute * part_attr = container_of(attr,struct part_attribute,attr);
-- ssize_t ret = 0;
-+ struct hd_struct *p = dev_to_part(dev);
+-/*
+- * We need some free virtual space to be able to do vmalloc.
+- * VMALLOC_MIN_SIZE defines the minimum size of the vmalloc
+- * area. On a machine with 2GB memory we make sure that we
+- * have at least 128MB free space for vmalloc. On a machine
+- * with 4TB we make sure we have at least 128GB.
++ * The vmalloc area will always be on the topmost area of the kernel
++ * mapping. We reserve 96MB (31bit) / 1GB (64bit) for vmalloc,
++ * which should be enough for any sane case.
++ * By putting vmalloc at the top, we maximise the gap between physical
++ * memory and vmalloc to catch misplaced memory accesses. As a side
++ * effect, this also makes sure that 64 bit module code cannot be used
++ * as system call address.
+ */
+ #ifndef __s390x__
+-#define VMALLOC_MIN_SIZE 0x8000000UL
+-#define VMALLOC_END_INIT 0x80000000UL
++#define VMALLOC_START 0x78000000UL
++#define VMALLOC_END 0x7e000000UL
++#define VMEM_MAP_MAX 0x80000000UL
+ #else /* __s390x__ */
+-#define VMALLOC_MIN_SIZE 0x2000000000UL
+-#define VMALLOC_END_INIT 0x40000000000UL
++#define VMALLOC_START 0x3e000000000UL
++#define VMALLOC_END 0x3e040000000UL
++#define VMEM_MAP_MAX 0x40000000000UL
+ #endif /* __s390x__ */
-- if (part_attr->store)
-- ret = part_attr->store(p, page, count);
-- return ret;
-+ return sprintf(buf, "%llu\n",(unsigned long long)p->start_sect);
- }
++#define VMEM_MAP ((struct page *) VMALLOC_END)
++#define VMEM_MAP_SIZE ((VMALLOC_START / PAGE_SIZE) * sizeof(struct page))
++
+ /*
+ * A 31 bit pagetable entry of S390 has following format:
+ * | PFRA | | OS |
+diff --git a/include/asm-s390/processor.h b/include/asm-s390/processor.h
+index 21d40a1..c86b982 100644
+--- a/include/asm-s390/processor.h
++++ b/include/asm-s390/processor.h
+@@ -59,9 +59,6 @@ extern void s390_adjust_jiffies(void);
+ extern void print_cpu_info(struct cpuinfo_S390 *);
+ extern int get_cpu_capability(unsigned int *);
--static struct sysfs_ops part_sysfs_ops = {
-- .show = part_attr_show,
-- .store = part_attr_store,
--};
+-/* Lazy FPU handling on uni-processor */
+-extern struct task_struct *last_task_used_math;
-
--static ssize_t part_uevent_store(struct hd_struct * p,
-- const char *page, size_t count)
-+static ssize_t part_size_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-- kobject_uevent(&p->kobj, KOBJ_ADD);
-- return count;
-+ struct hd_struct *p = dev_to_part(dev);
-+ return sprintf(buf, "%llu\n",(unsigned long long)p->nr_sects);
- }
--static ssize_t part_dev_read(struct hd_struct * p, char *page)
--{
-- struct gendisk *disk = container_of(p->kobj.parent,struct gendisk,kobj);
-- dev_t dev = MKDEV(disk->major, disk->first_minor + p->partno);
-- return print_dev_t(page, dev);
--}
--static ssize_t part_start_read(struct hd_struct * p, char *page)
--{
-- return sprintf(page, "%llu\n",(unsigned long long)p->start_sect);
--}
--static ssize_t part_size_read(struct hd_struct * p, char *page)
--{
-- return sprintf(page, "%llu\n",(unsigned long long)p->nr_sects);
--}
--static ssize_t part_stat_read(struct hd_struct * p, char *page)
-+
-+static ssize_t part_stat_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
- {
-- return sprintf(page, "%8u %8llu %8u %8llu\n",
-+ struct hd_struct *p = dev_to_part(dev);
+ /*
+ * User space process size: 2GB for 31 bit, 4TB for 64 bit.
+ */
+@@ -95,7 +92,6 @@ struct thread_struct {
+ unsigned long ksp; /* kernel stack pointer */
+ mm_segment_t mm_segment;
+ unsigned long prot_addr; /* address of protection-excep. */
+- unsigned int error_code; /* error-code of last prog-excep. */
+ unsigned int trap_no;
+ per_struct per_info;
+ /* Used to give failing instruction back to user for ieee exceptions */
+diff --git a/include/asm-s390/ptrace.h b/include/asm-s390/ptrace.h
+index 332ee73..61f6952 100644
+--- a/include/asm-s390/ptrace.h
++++ b/include/asm-s390/ptrace.h
+@@ -465,6 +465,14 @@ struct user_regs_struct
+ #ifdef __KERNEL__
+ #define __ARCH_SYS_PTRACE 1
+
++/*
++ * These are defined as per linux/ptrace.h, which see.
++ */
++#define arch_has_single_step() (1)
++struct task_struct;
++extern void user_enable_single_step(struct task_struct *);
++extern void user_disable_single_step(struct task_struct *);
+
-+ return sprintf(buf, "%8u %8llu %8u %8llu\n",
- p->ios[0], (unsigned long long)p->sectors[0],
- p->ios[1], (unsigned long long)p->sectors[1]);
- }
--static struct part_attribute part_attr_uevent = {
-- .attr = {.name = "uevent", .mode = S_IWUSR },
-- .store = part_uevent_store
--};
--static struct part_attribute part_attr_dev = {
-- .attr = {.name = "dev", .mode = S_IRUGO },
-- .show = part_dev_read
--};
--static struct part_attribute part_attr_start = {
-- .attr = {.name = "start", .mode = S_IRUGO },
-- .show = part_start_read
--};
--static struct part_attribute part_attr_size = {
-- .attr = {.name = "size", .mode = S_IRUGO },
-- .show = part_size_read
--};
--static struct part_attribute part_attr_stat = {
-- .attr = {.name = "stat", .mode = S_IRUGO },
-- .show = part_stat_read
--};
+ #define user_mode(regs) (((regs)->psw.mask & PSW_MASK_PSTATE) != 0)
+ #define instruction_pointer(regs) ((regs)->psw.addr & PSW_ADDR_INSN)
+ #define regs_return_value(regs)((regs)->gprs[2])
+diff --git a/include/asm-s390/qdio.h b/include/asm-s390/qdio.h
+index 74db1dc..4b8ff55 100644
+--- a/include/asm-s390/qdio.h
++++ b/include/asm-s390/qdio.h
+@@ -184,7 +184,7 @@ struct qdr {
+ #endif /* QDIO_32_BIT */
+ unsigned long qiba; /* queue-information-block address */
+ unsigned int res8; /* reserved */
+- unsigned int qkey : 4; /* queue-informatio-block key */
++ unsigned int qkey : 4; /* queue-information-block key */
+ unsigned int res9 : 28; /* reserved */
+ /* union _qd {*/ /* why this? */
+ struct qdesfmt0 qdf0[126];
+diff --git a/include/asm-s390/rwsem.h b/include/asm-s390/rwsem.h
+index 90f4ecc..9d2a179 100644
+--- a/include/asm-s390/rwsem.h
++++ b/include/asm-s390/rwsem.h
+@@ -91,8 +91,8 @@ struct rw_semaphore {
+ #endif
- #ifdef CONFIG_FAIL_MAKE_REQUEST
-+static ssize_t part_fail_show(struct device *dev,
-+ struct device_attribute *attr, char *buf)
-+{
-+ struct hd_struct *p = dev_to_part(dev);
+ #define __RWSEM_INITIALIZER(name) \
+-{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) \
+- __RWSEM_DEP_MAP_INIT(name) }
++ { RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait.lock), \
++ LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) }
--static ssize_t part_fail_store(struct hd_struct * p,
-+ return sprintf(buf, "%d\n", p->make_it_fail);
-+}
+ #define DECLARE_RWSEM(name) \
+ struct rw_semaphore name = __RWSEM_INITIALIZER(name)
+diff --git a/include/asm-s390/sclp.h b/include/asm-s390/sclp.h
+index cb9faf1..b5f2843 100644
+--- a/include/asm-s390/sclp.h
++++ b/include/asm-s390/sclp.h
+@@ -27,7 +27,25 @@ struct sclp_ipl_info {
+ char loadparm[LOADPARM_LEN];
+ };
+
+-void sclp_readinfo_early(void);
++struct sclp_cpu_entry {
++ u8 address;
++ u8 reserved0[13];
++ u8 type;
++ u8 reserved1;
++} __attribute__((packed));
+
-+static ssize_t part_fail_store(struct device *dev,
-+ struct device_attribute *attr,
- const char *buf, size_t count)
- {
-+ struct hd_struct *p = dev_to_part(dev);
- int i;
++struct sclp_cpu_info {
++ unsigned int configured;
++ unsigned int standby;
++ unsigned int combined;
++ int has_cpu_type;
++ struct sclp_cpu_entry cpu[255];
++};
++
++int sclp_get_cpu_info(struct sclp_cpu_info *info);
++int sclp_cpu_configure(u8 cpu);
++int sclp_cpu_deconfigure(u8 cpu);
++void sclp_read_info_early(void);
+ void sclp_facilities_detect(void);
+ unsigned long long sclp_memory_detect(void);
+ int sclp_sdias_blk_count(void);
+diff --git a/include/asm-s390/smp.h b/include/asm-s390/smp.h
+index 07708c0..c7b7432 100644
+--- a/include/asm-s390/smp.h
++++ b/include/asm-s390/smp.h
+@@ -35,8 +35,6 @@ extern void machine_restart_smp(char *);
+ extern void machine_halt_smp(void);
+ extern void machine_power_off_smp(void);
- if (count > 0 && sscanf(buf, "%d", &i) > 0)
-@@ -292,50 +241,53 @@ static ssize_t part_fail_store(struct hd_struct * p,
+-extern void smp_setup_cpu_possible_map(void);
+-
+ #define NO_PROC_ID 0xFF /* No processor magic marker */
- return count;
- }
--static ssize_t part_fail_read(struct hd_struct * p, char *page)
--{
-- return sprintf(page, "%d\n", p->make_it_fail);
--}
--static struct part_attribute part_attr_fail = {
-- .attr = {.name = "make-it-fail", .mode = S_IRUGO | S_IWUSR },
-- .store = part_fail_store,
-- .show = part_fail_read
--};
-+#endif
+ /*
+@@ -92,6 +90,8 @@ extern void __cpu_die (unsigned int cpu);
+ extern void cpu_die (void) __attribute__ ((noreturn));
+ extern int __cpu_up (unsigned int cpu);
-+static DEVICE_ATTR(start, S_IRUGO, part_start_show, NULL);
-+static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL);
-+static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL);
-+#ifdef CONFIG_FAIL_MAKE_REQUEST
-+static struct device_attribute dev_attr_fail =
-+ __ATTR(make-it-fail, S_IRUGO|S_IWUSR, part_fail_show, part_fail_store);
++extern int smp_call_function_mask(cpumask_t mask, void (*func)(void *),
++ void *info, int wait);
#endif
--static struct attribute * default_attrs[] = {
-- &part_attr_uevent.attr,
-- &part_attr_dev.attr,
-- &part_attr_start.attr,
-- &part_attr_size.attr,
-- &part_attr_stat.attr,
-+static struct attribute *part_attrs[] = {
-+ &dev_attr_start.attr,
-+ &dev_attr_size.attr,
-+ &dev_attr_stat.attr,
- #ifdef CONFIG_FAIL_MAKE_REQUEST
-- &part_attr_fail.attr,
-+ &dev_attr_fail.attr,
+ #ifndef CONFIG_SMP
+@@ -103,7 +103,6 @@ static inline void smp_send_stop(void)
+
+ #define hard_smp_processor_id() 0
+ #define smp_cpu_not_running(cpu) 1
+-#define smp_setup_cpu_possible_map() do { } while (0)
#endif
-- NULL,
-+ NULL
- };
--extern struct kset block_subsys;
-+static struct attribute_group part_attr_group = {
-+ .attrs = part_attrs,
-+};
+ extern union save_area *zfcpdump_save_areas[NR_CPUS + 1];
+diff --git a/include/asm-s390/spinlock.h b/include/asm-s390/spinlock.h
+index 3fd4382..df84ae9 100644
+--- a/include/asm-s390/spinlock.h
++++ b/include/asm-s390/spinlock.h
+@@ -53,44 +53,48 @@ _raw_compare_and_swap(volatile unsigned int *lock,
+ */
--static void part_release(struct kobject *kobj)
-+static struct attribute_group *part_attr_groups[] = {
-+ &part_attr_group,
-+ NULL
-+};
-+
-+static void part_release(struct device *dev)
- {
-- struct hd_struct * p = container_of(kobj,struct hd_struct,kobj);
-+ struct hd_struct *p = dev_to_part(dev);
- kfree(p);
- }
+ #define __raw_spin_is_locked(x) ((x)->owner_cpu != 0)
+-#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
+ #define __raw_spin_unlock_wait(lock) \
+ do { while (__raw_spin_is_locked(lock)) \
+ _raw_spin_relax(lock); } while (0)
--struct kobj_type ktype_part = {
-+struct device_type part_type = {
-+ .name = "partition",
-+ .groups = part_attr_groups,
- .release = part_release,
-- .default_attrs = default_attrs,
-- .sysfs_ops = &part_sysfs_ops,
- };
+-extern void _raw_spin_lock_wait(raw_spinlock_t *, unsigned int pc);
+-extern int _raw_spin_trylock_retry(raw_spinlock_t *, unsigned int pc);
++extern void _raw_spin_lock_wait(raw_spinlock_t *);
++extern void _raw_spin_lock_wait_flags(raw_spinlock_t *, unsigned long flags);
++extern int _raw_spin_trylock_retry(raw_spinlock_t *);
+ extern void _raw_spin_relax(raw_spinlock_t *lock);
- static inline void partition_sysfs_add_subdir(struct hd_struct *p)
+ static inline void __raw_spin_lock(raw_spinlock_t *lp)
{
- struct kobject *k;
+- unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
+ int old;
-- k = kobject_get(&p->kobj);
-- p->holder_dir = kobject_add_dir(k, "holders");
-+ k = kobject_get(&p->dev.kobj);
-+ p->holder_dir = kobject_create_and_add("holders", k);
- kobject_put(k);
+ old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
+- if (likely(old == 0)) {
+- lp->owner_pc = pc;
++ if (likely(old == 0))
+ return;
+- }
+- _raw_spin_lock_wait(lp, pc);
++ _raw_spin_lock_wait(lp);
++}
++
++static inline void __raw_spin_lock_flags(raw_spinlock_t *lp,
++ unsigned long flags)
++{
++ int old;
++
++ old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
++ if (likely(old == 0))
++ return;
++ _raw_spin_lock_wait_flags(lp, flags);
}
-@@ -343,15 +295,16 @@ static inline void disk_sysfs_add_subdirs(struct gendisk *disk)
+ static inline int __raw_spin_trylock(raw_spinlock_t *lp)
{
- struct kobject *k;
+- unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
+ int old;
-- k = kobject_get(&disk->kobj);
-- disk->holder_dir = kobject_add_dir(k, "holders");
-- disk->slave_dir = kobject_add_dir(k, "slaves");
-+ k = kobject_get(&disk->dev.kobj);
-+ disk->holder_dir = kobject_create_and_add("holders", k);
-+ disk->slave_dir = kobject_create_and_add("slaves", k);
- kobject_put(k);
+ old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
+- if (likely(old == 0)) {
+- lp->owner_pc = pc;
++ if (likely(old == 0))
+ return 1;
+- }
+- return _raw_spin_trylock_retry(lp, pc);
++ return _raw_spin_trylock_retry(lp);
}
- void delete_partition(struct gendisk *disk, int part)
+ static inline void __raw_spin_unlock(raw_spinlock_t *lp)
{
- struct hd_struct *p = disk->part[part-1];
-+
- if (!p)
- return;
- if (!p->nr_sects)
-@@ -361,113 +314,55 @@ void delete_partition(struct gendisk *disk, int part)
- p->nr_sects = 0;
- p->ios[0] = p->ios[1] = 0;
- p->sectors[0] = p->sectors[1] = 0;
-- sysfs_remove_link(&p->kobj, "subsystem");
-- kobject_unregister(p->holder_dir);
-- kobject_uevent(&p->kobj, KOBJ_REMOVE);
-- kobject_del(&p->kobj);
-- kobject_put(&p->kobj);
-+ kobject_put(p->holder_dir);
-+ device_del(&p->dev);
-+ put_device(&p->dev);
+- lp->owner_pc = 0;
+ _raw_compare_and_swap(&lp->owner_cpu, lp->owner_cpu, 0);
}
+
+diff --git a/include/asm-s390/spinlock_types.h b/include/asm-s390/spinlock_types.h
+index b7ac13f..654abc4 100644
+--- a/include/asm-s390/spinlock_types.h
++++ b/include/asm-s390/spinlock_types.h
+@@ -7,7 +7,6 @@
- void add_partition(struct gendisk *disk, int part, sector_t start, sector_t len, int flags)
+ typedef struct {
+ volatile unsigned int owner_cpu;
+- volatile unsigned int owner_pc;
+ } __attribute__ ((aligned (4))) raw_spinlock_t;
+
+ #define __RAW_SPIN_LOCK_UNLOCKED { 0 }
+diff --git a/include/asm-s390/tlbflush.h b/include/asm-s390/tlbflush.h
+index a69bd24..70fa5ae 100644
+--- a/include/asm-s390/tlbflush.h
++++ b/include/asm-s390/tlbflush.h
+@@ -42,11 +42,11 @@ static inline void __tlb_flush_global(void)
+ /*
+ * Flush all tlb entries of a page table on all cpus.
+ */
+-static inline void __tlb_flush_idte(pgd_t *pgd)
++static inline void __tlb_flush_idte(unsigned long asce)
{
- struct hd_struct *p;
-+ int err;
+ asm volatile(
+ " .insn rrf,0xb98e0000,0,%0,%1,0"
+- : : "a" (2048), "a" (__pa(pgd) & PAGE_MASK) : "cc" );
++ : : "a" (2048), "a" (asce) : "cc" );
+ }
- p = kzalloc(sizeof(*p), GFP_KERNEL);
- if (!p)
+ static inline void __tlb_flush_mm(struct mm_struct * mm)
+@@ -61,11 +61,11 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
+ * only ran on the local cpu.
+ */
+ if (MACHINE_HAS_IDTE) {
+- pgd_t *shadow_pgd = get_shadow_table(mm->pgd);
++ pgd_t *shadow = get_shadow_table(mm->pgd);
+
+- if (shadow_pgd)
+- __tlb_flush_idte(shadow_pgd);
+- __tlb_flush_idte(mm->pgd);
++ if (shadow)
++ __tlb_flush_idte((unsigned long) shadow | mm->context);
++ __tlb_flush_idte((unsigned long) mm->pgd | mm->context);
return;
--
+ }
+ preempt_disable();
+@@ -106,9 +106,23 @@ static inline void __tlb_flush_mm_cond(struct mm_struct * mm)
+ */
+ #define flush_tlb() do { } while (0)
+ #define flush_tlb_all() do { } while (0)
+-#define flush_tlb_mm(mm) __tlb_flush_mm_cond(mm)
+ #define flush_tlb_page(vma, addr) do { } while (0)
+-#define flush_tlb_range(vma, start, end) __tlb_flush_mm_cond(mm)
+-#define flush_tlb_kernel_range(start, end) __tlb_flush_mm(&init_mm)
+
- p->start_sect = start;
- p->nr_sects = len;
- p->partno = part;
- p->policy = disk->policy;
-
-- if (isdigit(disk->kobj.k_name[strlen(disk->kobj.k_name)-1]))
-- kobject_set_name(&p->kobj, "%sp%d",
-- kobject_name(&disk->kobj), part);
-+ if (isdigit(disk->dev.bus_id[strlen(disk->dev.bus_id)-1]))
-+ snprintf(p->dev.bus_id, BUS_ID_SIZE,
-+ "%sp%d", disk->dev.bus_id, part);
- else
-- kobject_set_name(&p->kobj, "%s%d",
-- kobject_name(&disk->kobj),part);
-- p->kobj.parent = &disk->kobj;
-- p->kobj.ktype = &ktype_part;
-- kobject_init(&p->kobj);
-- kobject_add(&p->kobj);
-- if (!disk->part_uevent_suppress)
-- kobject_uevent(&p->kobj, KOBJ_ADD);
-- sysfs_create_link(&p->kobj, &block_subsys.kobj, "subsystem");
-+ snprintf(p->dev.bus_id, BUS_ID_SIZE,
-+ "%s%d", disk->dev.bus_id, part);
++static inline void flush_tlb_mm(struct mm_struct *mm)
++{
++ __tlb_flush_mm_cond(mm);
++}
+
-+ device_initialize(&p->dev);
-+ p->dev.devt = MKDEV(disk->major, disk->first_minor + part);
-+ p->dev.class = &block_class;
-+ p->dev.type = &part_type;
-+ p->dev.parent = &disk->dev;
-+ disk->part[part-1] = p;
++static inline void flush_tlb_range(struct vm_area_struct *vma,
++ unsigned long start, unsigned long end)
++{
++ __tlb_flush_mm_cond(vma->vm_mm);
++}
+
-+ /* delay uevent until 'holders' subdir is created */
-+ p->dev.uevent_suppress = 1;
-+ device_add(&p->dev);
-+ partition_sysfs_add_subdir(p);
-+ p->dev.uevent_suppress = 0;
- if (flags & ADDPART_FLAG_WHOLEDISK) {
- static struct attribute addpartattr = {
- .name = "whole_disk",
- .mode = S_IRUSR | S_IRGRP | S_IROTH,
- };
--
-- sysfs_create_file(&p->kobj, &addpartattr);
-+ err = sysfs_create_file(&p->dev.kobj, &addpartattr);
- }
-- partition_sysfs_add_subdir(p);
-- disk->part[part-1] = p;
--}
-
--static char *make_block_name(struct gendisk *disk)
--{
-- char *name;
-- static char *block_str = "block:";
-- int size;
-- char *s;
--
-- size = strlen(block_str) + strlen(disk->disk_name) + 1;
-- name = kmalloc(size, GFP_KERNEL);
-- if (!name)
-- return NULL;
-- strcpy(name, block_str);
-- strcat(name, disk->disk_name);
-- /* ewww... some of these buggers have / in name... */
-- s = strchr(name, '/');
-- if (s)
-- *s = '!';
-- return name;
--}
--
--static int disk_sysfs_symlinks(struct gendisk *disk)
--{
-- struct device *target = get_device(disk->driverfs_dev);
-- int err;
-- char *disk_name = NULL;
--
-- if (target) {
-- disk_name = make_block_name(disk);
-- if (!disk_name) {
-- err = -ENOMEM;
-- goto err_out;
-- }
--
-- err = sysfs_create_link(&disk->kobj, &target->kobj, "device");
-- if (err)
-- goto err_out_disk_name;
--
-- err = sysfs_create_link(&target->kobj, &disk->kobj, disk_name);
-- if (err)
-- goto err_out_dev_link;
-- }
--
-- err = sysfs_create_link(&disk->kobj, &block_subsys.kobj,
-- "subsystem");
-- if (err)
-- goto err_out_disk_name_lnk;
--
-- kfree(disk_name);
--
-- return 0;
--
--err_out_disk_name_lnk:
-- if (target) {
-- sysfs_remove_link(&target->kobj, disk_name);
--err_out_dev_link:
-- sysfs_remove_link(&disk->kobj, "device");
--err_out_disk_name:
-- kfree(disk_name);
--err_out:
-- put_device(target);
-- }
-- return err;
-+ /* suppress uevent if the disk supresses it */
-+ if (!disk->dev.uevent_suppress)
-+ kobject_uevent(&p->dev.kobj, KOBJ_ADD);
- }
++static inline void flush_tlb_kernel_range(unsigned long start,
++ unsigned long end)
++{
++ __tlb_flush_mm(&init_mm);
++}
- /* Not exported, helper to add_disk(). */
-@@ -479,19 +374,29 @@ void register_disk(struct gendisk *disk)
- struct hd_struct *p;
- int err;
+ #endif /* _S390_TLBFLUSH_H */
+diff --git a/include/asm-s390/zcrypt.h b/include/asm-s390/zcrypt.h
+index a5dada6..f228f1b 100644
+--- a/include/asm-s390/zcrypt.h
++++ b/include/asm-s390/zcrypt.h
+@@ -117,7 +117,7 @@ struct CPRBX {
+ unsigned char padx004[16 - sizeof (char *)];
+ unsigned char * req_extb; /* request extension block 'addr'*/
+ unsigned char padx005[16 - sizeof (char *)];
+- unsigned char * rpl_extb; /* reply extension block 'addres'*/
++ unsigned char * rpl_extb; /* reply extension block 'address'*/
+ unsigned short ccp_rtcode; /* server return code */
+ unsigned short ccp_rscode; /* server reason code */
+ unsigned int mac_data_len; /* Mac Data Length */
+diff --git a/include/asm-sh/Kbuild b/include/asm-sh/Kbuild
+index 76a8ccf..43910cd 100644
+--- a/include/asm-sh/Kbuild
++++ b/include/asm-sh/Kbuild
+@@ -1,3 +1,8 @@
+ include include/asm-generic/Kbuild.asm
-- kobject_set_name(&disk->kobj, "%s", disk->disk_name);
-- /* ewww... some of these buggers have / in name... */
-- s = strchr(disk->kobj.k_name, '/');
-+ disk->dev.parent = disk->driverfs_dev;
-+ disk->dev.devt = MKDEV(disk->major, disk->first_minor);
-+
-+ strlcpy(disk->dev.bus_id, disk->disk_name, KOBJ_NAME_LEN);
-+ /* ewww... some of these buggers have / in the name... */
-+ s = strchr(disk->dev.bus_id, '/');
- if (s)
- *s = '!';
-- if ((err = kobject_add(&disk->kobj)))
+ header-y += cpu-features.h
+
-+ /* delay uevents, until we scanned partition table */
-+ disk->dev.uevent_suppress = 1;
++unifdef-y += unistd_32.h
++unifdef-y += unistd_64.h
++unifdef-y += posix_types_32.h
++unifdef-y += posix_types_64.h
+diff --git a/include/asm-sh/addrspace.h b/include/asm-sh/addrspace.h
+index b860218..fa544fc 100644
+--- a/include/asm-sh/addrspace.h
++++ b/include/asm-sh/addrspace.h
+@@ -9,24 +9,21 @@
+ */
+ #ifndef __ASM_SH_ADDRSPACE_H
+ #define __ASM_SH_ADDRSPACE_H
+
-+ if (device_add(&disk->dev))
- return;
-- err = disk_sysfs_symlinks(disk);
-+#ifndef CONFIG_SYSFS_DEPRECATED
-+ err = sysfs_create_link(block_depr, &disk->dev.kobj,
-+ kobject_name(&disk->dev.kobj));
- if (err) {
-- kobject_del(&disk->kobj);
-+ device_del(&disk->dev);
- return;
- }
-- disk_sysfs_add_subdirs(disk);
-+#endif
-+ disk_sysfs_add_subdirs(disk);
-
- /* No minors to use for partitions */
- if (disk->minors == 1)
-@@ -505,25 +410,23 @@ void register_disk(struct gendisk *disk)
- if (!bdev)
- goto exit;
-
-- /* scan partition table, but suppress uevents */
- bdev->bd_invalidated = 1;
-- disk->part_uevent_suppress = 1;
- err = blkdev_get(bdev, FMODE_READ, 0);
-- disk->part_uevent_suppress = 0;
- if (err < 0)
- goto exit;
- blkdev_put(bdev);
+ #ifdef __KERNEL__
- exit:
-- /* announce disk after possible partitions are already created */
-- kobject_uevent(&disk->kobj, KOBJ_ADD);
-+ /* announce disk after possible partitions are created */
-+ disk->dev.uevent_suppress = 0;
-+ kobject_uevent(&disk->dev.kobj, KOBJ_ADD);
+ #include <asm/cpu/addrspace.h>
- /* announce possible partitions */
- for (i = 1; i < disk->minors; i++) {
- p = disk->part[i-1];
- if (!p || !p->nr_sects)
- continue;
-- kobject_uevent(&p->kobj, KOBJ_ADD);
-+ kobject_uevent(&p->dev.kobj, KOBJ_ADD);
- }
- }
+-/* Memory segments (32bit Privileged mode addresses) */
+-#ifndef CONFIG_CPU_SH2A
+-#define P0SEG 0x00000000
+-#define P1SEG 0x80000000
+-#define P2SEG 0xa0000000
+-#define P3SEG 0xc0000000
+-#define P4SEG 0xe0000000
+-#else
+-#define P0SEG 0x00000000
+-#define P1SEG 0x00000000
+-#define P2SEG 0x20000000
+-#define P3SEG 0x00000000
+-#define P4SEG 0x80000000
+-#endif
++/* If this CPU supports segmentation, hook up the helpers */
++#ifdef P1SEG
++
++/*
++ [ P0/U0 (virtual) ] 0x00000000 <------ User space
++ [ P1 (fixed) cached ] 0x80000000 <------ Kernel space
++ [ P2 (fixed) non-cacheable] 0xA0000000 <------ Physical access
++ [ P3 (virtual) cached] 0xC0000000 <------ vmalloced area
++ [ P4 control ] 0xE0000000
++ */
-@@ -602,19 +505,11 @@ void del_gendisk(struct gendisk *disk)
- disk_stat_set_all(disk, 0);
- disk->stamp = 0;
+ /* Returns the privileged segment base of a given address */
+ #define PXSEG(a) (((unsigned long)(a)) & 0xe0000000)
+@@ -34,13 +31,23 @@
+ /* Returns the physical address of a PnSEG (n=1,2) address */
+ #define PHYSADDR(a) (((unsigned long)(a)) & 0x1fffffff)
-- kobject_uevent(&disk->kobj, KOBJ_REMOVE);
-- kobject_unregister(disk->holder_dir);
-- kobject_unregister(disk->slave_dir);
-- if (disk->driverfs_dev) {
-- char *disk_name = make_block_name(disk);
-- sysfs_remove_link(&disk->kobj, "device");
-- if (disk_name) {
-- sysfs_remove_link(&disk->driverfs_dev->kobj, disk_name);
-- kfree(disk_name);
-- }
-- put_device(disk->driverfs_dev);
-- disk->driverfs_dev = NULL;
-- }
-- sysfs_remove_link(&disk->kobj, "subsystem");
-- kobject_del(&disk->kobj);
-+ kobject_put(disk->holder_dir);
-+ kobject_put(disk->slave_dir);
-+ disk->driverfs_dev = NULL;
-+#ifndef CONFIG_SYSFS_DEPRECATED
-+ sysfs_remove_link(block_depr, disk->dev.bus_id);
-+#endif
-+ device_del(&disk->dev);
- }
-diff --git a/fs/proc/base.c b/fs/proc/base.c
-index 7411bfb..91fa8e6 100644
---- a/fs/proc/base.c
-+++ b/fs/proc/base.c
-@@ -310,6 +310,77 @@ static int proc_pid_schedstat(struct task_struct *task, char *buffer)
- }
- #endif
++#ifdef CONFIG_29BIT
+ /*
+ * Map an address to a certain privileged segment
+ */
+-#define P1SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P1SEG))
+-#define P2SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P2SEG))
+-#define P3SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P3SEG))
+-#define P4SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P4SEG))
++#define P1SEGADDR(a) \
++ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P1SEG))
++#define P2SEGADDR(a) \
++ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P2SEG))
++#define P3SEGADDR(a) \
++ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P3SEG))
++#define P4SEGADDR(a) \
++ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P4SEG))
++#endif /* 29BIT */
++#endif /* P1SEG */
++
++/* Check if an address can be reached in 29 bits */
++#define IS_29BIT(a) (((unsigned long)(a)) < 0x20000000)
-+#ifdef CONFIG_LATENCYTOP
-+static int lstats_show_proc(struct seq_file *m, void *v)
-+{
-+ int i;
-+ struct task_struct *task = m->private;
-+ seq_puts(m, "Latency Top version : v0.1\n");
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_SH_ADDRSPACE_H */
+diff --git a/include/asm-sh/atomic-grb.h b/include/asm-sh/atomic-grb.h
+new file mode 100644
+index 0000000..4c5b7db
+--- /dev/null
++++ b/include/asm-sh/atomic-grb.h
+@@ -0,0 +1,169 @@
++#ifndef __ASM_SH_ATOMIC_GRB_H
++#define __ASM_SH_ATOMIC_GRB_H
+
-+ for (i = 0; i < 32; i++) {
-+ if (task->latency_record[i].backtrace[0]) {
-+ int q;
-+ seq_printf(m, "%i %li %li ",
-+ task->latency_record[i].count,
-+ task->latency_record[i].time,
-+ task->latency_record[i].max);
-+ for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
-+ char sym[KSYM_NAME_LEN];
-+ char *c;
-+ if (!task->latency_record[i].backtrace[q])
-+ break;
-+ if (task->latency_record[i].backtrace[q] == ULONG_MAX)
-+ break;
-+ sprint_symbol(sym, task->latency_record[i].backtrace[q]);
-+ c = strchr(sym, '+');
-+ if (c)
-+ *c = 0;
-+ seq_printf(m, "%s ", sym);
-+ }
-+ seq_printf(m, "\n");
-+ }
++static inline void atomic_add(int i, atomic_t *v)
++{
++ int tmp;
+
-+ }
-+ return 0;
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " add %2, %0 \n\t" /* add */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (i)
++ : "memory" , "r0", "r1");
+}
+
-+static int lstats_open(struct inode *inode, struct file *file)
++static inline void atomic_sub(int i, atomic_t *v)
+{
-+ int ret;
-+ struct seq_file *m;
-+ struct task_struct *task = get_proc_task(inode);
++ int tmp;
+
-+ ret = single_open(file, lstats_show_proc, NULL);
-+ if (!ret) {
-+ m = file->private_data;
-+ m->private = task;
-+ }
-+ return ret;
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " sub %2, %0 \n\t" /* sub */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (i)
++ : "memory" , "r0", "r1");
+}
+
-+static ssize_t lstats_write(struct file *file, const char __user *buf,
-+ size_t count, loff_t *offs)
++static inline int atomic_add_return(int i, atomic_t *v)
+{
-+ struct seq_file *m;
-+ struct task_struct *task;
++ int tmp;
+
-+ m = file->private_data;
-+ task = m->private;
-+ clear_all_latency_tracing(task);
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " add %2, %0 \n\t" /* add */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (i)
++ : "memory" , "r0", "r1");
+
-+ return count;
++ return tmp;
+}
+
-+static const struct file_operations proc_lstats_operations = {
-+ .open = lstats_open,
-+ .read = seq_read,
-+ .write = lstats_write,
-+ .llseek = seq_lseek,
-+ .release = single_release,
-+};
++static inline int atomic_sub_return(int i, atomic_t *v)
++{
++ int tmp;
+
-+#endif
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " sub %2, %0 \n\t" /* sub */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (i)
++ : "memory", "r0", "r1");
+
- /* The badness from the OOM killer */
- unsigned long badness(struct task_struct *p, unsigned long uptime);
- static int proc_oom_score(struct task_struct *task, char *buffer)
-@@ -1020,6 +1091,7 @@ static const struct file_operations proc_fault_inject_operations = {
- };
- #endif
-
++ return tmp;
++}
+
- #ifdef CONFIG_SCHED_DEBUG
- /*
- * Print out various scheduling related per-task fields:
-@@ -2230,6 +2302,9 @@ static const struct pid_entry tgid_base_stuff[] = {
- #ifdef CONFIG_SCHEDSTATS
- INF("schedstat", S_IRUGO, pid_schedstat),
- #endif
-+#ifdef CONFIG_LATENCYTOP
-+ REG("latency", S_IRUGO, lstats),
-+#endif
- #ifdef CONFIG_PROC_PID_CPUSET
- REG("cpuset", S_IRUGO, cpuset),
- #endif
-@@ -2555,6 +2630,9 @@ static const struct pid_entry tid_base_stuff[] = {
- #ifdef CONFIG_SCHEDSTATS
- INF("schedstat", S_IRUGO, pid_schedstat),
- #endif
-+#ifdef CONFIG_LATENCYTOP
-+ REG("latency", S_IRUGO, lstats),
-+#endif
- #ifdef CONFIG_PROC_PID_CPUSET
- REG("cpuset", S_IRUGO, cpuset),
- #endif
-diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
-index 0afe21e..4823c96 100644
---- a/fs/proc/proc_net.c
-+++ b/fs/proc/proc_net.c
-@@ -22,10 +22,48 @@
- #include <linux/mount.h>
- #include <linux/nsproxy.h>
- #include <net/net_namespace.h>
-+#include <linux/seq_file.h>
-
- #include "internal.h"
-
-
-+int seq_open_net(struct inode *ino, struct file *f,
-+ const struct seq_operations *ops, int size)
++static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
-+ struct net *net;
-+ struct seq_net_private *p;
++ int tmp;
++ unsigned int _mask = ~mask;
+
-+ BUG_ON(size < sizeof(*p));
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " and %2, %0 \n\t" /* and */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (_mask)
++ : "memory" , "r0", "r1");
++}
+
-+ net = get_proc_net(ino);
-+ if (net == NULL)
-+ return -ENXIO;
++static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
++{
++ int tmp;
+
-+ p = __seq_open_private(f, ops, size);
-+ if (p == NULL) {
-+ put_net(net);
-+ return -ENOMEM;
-+ }
-+ p->net = net;
-+ return 0;
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " or %2, %0 \n\t" /* or */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (v)
++ : "r" (mask)
++ : "memory" , "r0", "r1");
+}
-+EXPORT_SYMBOL_GPL(seq_open_net);
+
-+int seq_release_net(struct inode *ino, struct file *f)
++static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
-+ struct seq_file *seq;
-+ struct seq_net_private *p;
++ int ret;
+
-+ seq = f->private_data;
-+ p = seq->private;
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t"
++ " nop \n\t"
++ " mov r15, r1 \n\t"
++ " mov #-8, r15 \n\t"
++ " mov.l @%1, %0 \n\t"
++ " cmp/eq %2, %0 \n\t"
++ " bf 1f \n\t"
++ " mov.l %3, @%1 \n\t"
++ "1: mov r1, r15 \n\t"
++ : "=&r" (ret)
++ : "r" (v), "r" (old), "r" (new)
++ : "memory" , "r0", "r1" , "t");
+
-+ put_net(p->net);
-+ seq_release_private(ino, f);
-+ return 0;
++ return ret;
+}
-+EXPORT_SYMBOL_GPL(seq_release_net);
+
-+
- struct proc_dir_entry *proc_net_fops_create(struct net *net,
- const char *name, mode_t mode, const struct file_operations *fops)
- {
-@@ -58,6 +96,17 @@ static struct proc_dir_entry *proc_net_shadow(struct task_struct *task,
- return task->nsproxy->net_ns->proc_net;
- }
-
-+struct proc_dir_entry *proc_net_mkdir(struct net *net, const char *name,
-+ struct proc_dir_entry *parent)
++static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
-+ struct proc_dir_entry *pde;
-+ pde = proc_mkdir_mode(name, S_IRUGO | S_IXUGO, parent);
-+ if (pde != NULL)
-+ pde->data = net;
-+ return pde;
-+}
-+EXPORT_SYMBOL_GPL(proc_net_mkdir);
++ int ret;
++ unsigned long tmp;
+
- static __net_init int proc_net_ns_init(struct net *net)
- {
- struct proc_dir_entry *root, *netd, *net_statd;
-@@ -69,18 +118,16 @@ static __net_init int proc_net_ns_init(struct net *net)
- goto out;
-
- err = -EEXIST;
-- netd = proc_mkdir("net", root);
-+ netd = proc_net_mkdir(net, "net", root);
- if (!netd)
- goto free_root;
-
- err = -EEXIST;
-- net_statd = proc_mkdir("stat", netd);
-+ net_statd = proc_net_mkdir(net, "stat", netd);
- if (!net_statd)
- goto free_net;
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t"
++ " nop \n\t"
++ " mov r15, r1 \n\t"
++ " mov #-12, r15 \n\t"
++ " mov.l @%2, %1 \n\t"
++ " mov %1, %0 \n\t"
++ " cmp/eq %4, %0 \n\t"
++ " bt/s 1f \n\t"
++ " add %3, %1 \n\t"
++ " mov.l %1, @%2 \n\t"
++ "1: mov r1, r15 \n\t"
++ : "=&r" (ret), "=&r" (tmp)
++ : "r" (v), "r" (a), "r" (u)
++ : "memory" , "r0", "r1" , "t");
++
++ return ret != u;
++}
++#endif /* __ASM_SH_ATOMIC_GRB_H */
+diff --git a/include/asm-sh/atomic.h b/include/asm-sh/atomic.h
+index e12570b..c043ef0 100644
+--- a/include/asm-sh/atomic.h
++++ b/include/asm-sh/atomic.h
+@@ -17,7 +17,9 @@ typedef struct { volatile int counter; } atomic_t;
+ #include <linux/compiler.h>
+ #include <asm/system.h>
- root->data = net;
-- netd->data = net;
-- net_statd->data = net;
+-#ifdef CONFIG_CPU_SH4A
++#if defined(CONFIG_GUSA_RB)
++#include <asm/atomic-grb.h>
++#elif defined(CONFIG_CPU_SH4A)
+ #include <asm/atomic-llsc.h>
+ #else
+ #include <asm/atomic-irq.h>
+@@ -44,6 +46,7 @@ typedef struct { volatile int counter; } atomic_t;
+ #define atomic_inc(v) atomic_add(1,(v))
+ #define atomic_dec(v) atomic_sub(1,(v))
- net->proc_net_root = root;
- net->proc_net = netd;
-diff --git a/fs/read_write.c b/fs/read_write.c
-index ea1f94c..1c177f2 100644
---- a/fs/read_write.c
-+++ b/fs/read_write.c
-@@ -197,25 +197,27 @@ int rw_verify_area(int read_write, struct file *file, loff_t *ppos, size_t count
++#ifndef CONFIG_GUSA_RB
+ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
- struct inode *inode;
- loff_t pos;
-+ int retval = -EINVAL;
-
- inode = file->f_path.dentry->d_inode;
- if (unlikely((ssize_t) count < 0))
-- goto Einval;
-+ return retval;
- pos = *ppos;
- if (unlikely((pos < 0) || (loff_t) (pos + count) < 0))
-- goto Einval;
-+ return retval;
-
- if (unlikely(inode->i_flock && mandatory_lock(inode))) {
-- int retval = locks_mandatory_area(
-+ retval = locks_mandatory_area(
- read_write == READ ? FLOCK_VERIFY_READ : FLOCK_VERIFY_WRITE,
- inode, file, pos, count);
- if (retval < 0)
- return retval;
- }
-+ retval = security_file_permission(file,
-+ read_write == READ ? MAY_READ : MAY_WRITE);
-+ if (retval)
-+ return retval;
- return count > MAX_RW_COUNT ? MAX_RW_COUNT : count;
--
--Einval:
-- return -EINVAL;
- }
-
- static void wait_on_retry_sync_kiocb(struct kiocb *iocb)
-@@ -267,18 +269,15 @@ ssize_t vfs_read(struct file *file, char __user *buf, size_t count, loff_t *pos)
- ret = rw_verify_area(READ, file, pos, count);
- if (ret >= 0) {
- count = ret;
-- ret = security_file_permission (file, MAY_READ);
-- if (!ret) {
-- if (file->f_op->read)
-- ret = file->f_op->read(file, buf, count, pos);
-- else
-- ret = do_sync_read(file, buf, count, pos);
-- if (ret > 0) {
-- fsnotify_access(file->f_path.dentry);
-- add_rchar(current, ret);
-- }
-- inc_syscr(current);
-+ if (file->f_op->read)
-+ ret = file->f_op->read(file, buf, count, pos);
-+ else
-+ ret = do_sync_read(file, buf, count, pos);
-+ if (ret > 0) {
-+ fsnotify_access(file->f_path.dentry);
-+ add_rchar(current, ret);
- }
-+ inc_syscr(current);
- }
-
- return ret;
-@@ -325,18 +324,15 @@ ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_
- ret = rw_verify_area(WRITE, file, pos, count);
- if (ret >= 0) {
- count = ret;
-- ret = security_file_permission (file, MAY_WRITE);
-- if (!ret) {
-- if (file->f_op->write)
-- ret = file->f_op->write(file, buf, count, pos);
-- else
-- ret = do_sync_write(file, buf, count, pos);
-- if (ret > 0) {
-- fsnotify_modify(file->f_path.dentry);
-- add_wchar(current, ret);
-- }
-- inc_syscw(current);
-+ if (file->f_op->write)
-+ ret = file->f_op->write(file, buf, count, pos);
-+ else
-+ ret = do_sync_write(file, buf, count, pos);
-+ if (ret > 0) {
-+ fsnotify_modify(file->f_path.dentry);
-+ add_wchar(current, ret);
- }
-+ inc_syscw(current);
- }
-
+ int ret;
+@@ -58,8 +61,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
return ret;
-@@ -450,6 +446,7 @@ unsigned long iov_shorten(struct iovec *iov, unsigned long nr_segs, size_t to)
- }
- return seg;
}
-+EXPORT_SYMBOL(iov_shorten);
-
- ssize_t do_sync_readv_writev(struct file *filp, const struct iovec *iov,
- unsigned long nr_segs, size_t len, loff_t *ppos, iov_fn_t fn)
-@@ -603,9 +600,6 @@ static ssize_t do_readv_writev(int type, struct file *file,
- ret = rw_verify_area(type, file, pos, tot_len);
- if (ret < 0)
- goto out;
-- ret = security_file_permission(file, type == READ ? MAY_READ : MAY_WRITE);
-- if (ret)
-- goto out;
-
- fnv = NULL;
- if (type == READ) {
-@@ -737,10 +731,6 @@ static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
- goto fput_in;
- count = retval;
-
-- retval = security_file_permission (in_file, MAY_READ);
-- if (retval)
-- goto fput_in;
--
- /*
- * Get output file, and verify that it is ok..
- */
-@@ -759,10 +749,6 @@ static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
- goto fput_out;
- count = retval;
-- retval = security_file_permission (out_file, MAY_WRITE);
-- if (retval)
-- goto fput_out;
+-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
-
- if (!max)
- max = min(in_inode->i_sb->s_maxbytes, out_inode->i_sb->s_maxbytes);
-
-diff --git a/fs/smbfs/Makefile b/fs/smbfs/Makefile
-index 6673ee8..4faf8c4 100644
---- a/fs/smbfs/Makefile
-+++ b/fs/smbfs/Makefile
-@@ -16,23 +16,3 @@ EXTRA_CFLAGS += -DSMBFS_PARANOIA
- #EXTRA_CFLAGS += -DDEBUG_SMB_TIMESTAMP
- #EXTRA_CFLAGS += -Werror
+ static inline int atomic_add_unless(atomic_t *v, int a, int u)
+ {
+ int ret;
+@@ -73,6 +74,9 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
--#
--# Maintainer rules
--#
--
--# getopt.c not included. It is intentionally separate
--SRC = proc.c dir.c cache.c sock.c inode.c file.c ioctl.c smbiod.c request.c \
-- symlink.c
--
--proto:
-- -rm -f proto.h
-- @echo > proto2.h "/*"
-- @echo >> proto2.h " * Autogenerated with cproto on: " `date`
-- @echo >> proto2.h " */"
-- @echo >> proto2.h ""
-- @echo >> proto2.h "struct smb_request;"
-- @echo >> proto2.h "struct sock;"
-- @echo >> proto2.h "struct statfs;"
-- @echo >> proto2.h ""
-- cproto -E "gcc -E" -e -v -I $(TOPDIR)/include -DMAKING_PROTO -D__KERNEL__ $(SRC) >> proto2.h
-- mv proto2.h proto.h
-diff --git a/fs/splice.c b/fs/splice.c
-index 6bdcb61..1577a73 100644
---- a/fs/splice.c
-+++ b/fs/splice.c
-@@ -254,11 +254,16 @@ ssize_t splice_to_pipe(struct pipe_inode_info *pipe,
- }
+ return ret != u;
+ }
++#endif
++
++#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+ #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
- while (page_nr < spd_pages)
-- page_cache_release(spd->pages[page_nr++]);
-+ spd->spd_release(spd, page_nr++);
+ /* Atomic operations are already serializing on SH */
+diff --git a/include/asm-sh/auxvec.h b/include/asm-sh/auxvec.h
+index 1b6916e..a6b9d4f 100644
+--- a/include/asm-sh/auxvec.h
++++ b/include/asm-sh/auxvec.h
+@@ -6,6 +6,12 @@
+ * for more of them.
+ */
- return ret;
- }
++/*
++ * This entry gives some information about the FPU initialization
++ * performed by the kernel.
++ */
++#define AT_FPUCW 18 /* Used FPU control word. */
++
+ #ifdef CONFIG_VSYSCALL
+ /*
+ * Only define this in the vsyscall case, the entry point to
+@@ -15,4 +21,16 @@
+ #define AT_SYSINFO_EHDR 33
+ #endif
-+static void spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
++/*
++ * More complete cache descriptions than AT_[DIU]CACHEBSIZE. If the
++ * value is -1, then the cache doesn't exist. Otherwise:
++ *
++ * bit 0-3: Cache set-associativity; 0 means fully associative.
++ * bit 4-7: Log2 of cacheline size.
++ * bit 8-31: Size of the entire cache >> 8.
++ */
++#define AT_L1I_CACHESHAPE 34
++#define AT_L1D_CACHESHAPE 35
++#define AT_L2_CACHESHAPE 36
++
+ #endif /* __ASM_SH_AUXVEC_H */
+diff --git a/include/asm-sh/bitops-grb.h b/include/asm-sh/bitops-grb.h
+new file mode 100644
+index 0000000..a5907b9
+--- /dev/null
++++ b/include/asm-sh/bitops-grb.h
+@@ -0,0 +1,169 @@
++#ifndef __ASM_SH_BITOPS_GRB_H
++#define __ASM_SH_BITOPS_GRB_H
++
++static inline void set_bit(int nr, volatile void * addr)
+{
-+ page_cache_release(spd->pages[i]);
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " or %2, %0 \n\t" /* or */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (a)
++ : "r" (mask)
++ : "memory" , "r0", "r1");
+}
+
- static int
- __generic_file_splice_read(struct file *in, loff_t *ppos,
- struct pipe_inode_info *pipe, size_t len,
-@@ -277,6 +282,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
- .partial = partial,
- .flags = flags,
- .ops = &page_cache_pipe_buf_ops,
-+ .spd_release = spd_release_page,
- };
-
- index = *ppos >> PAGE_CACHE_SHIFT;
-@@ -908,10 +914,6 @@ static long do_splice_from(struct pipe_inode_info *pipe, struct file *out,
- if (unlikely(ret < 0))
- return ret;
-
-- ret = security_file_permission(out, MAY_WRITE);
-- if (unlikely(ret < 0))
-- return ret;
--
- return out->f_op->splice_write(pipe, out, ppos, len, flags);
- }
-
-@@ -934,10 +936,6 @@ static long do_splice_to(struct file *in, loff_t *ppos,
- if (unlikely(ret < 0))
- return ret;
++static inline void clear_bit(int nr, volatile void * addr)
++{
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = ~(1 << (nr & 0x1f));
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " and %2, %0 \n\t" /* and */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (a)
++ : "r" (mask)
++ : "memory" , "r0", "r1");
++}
++
++static inline void change_bit(int nr, volatile void * addr)
++{
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " xor %2, %0 \n\t" /* xor */
++ " mov.l %0, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "+r" (a)
++ : "r" (mask)
++ : "memory" , "r0", "r1");
++}
++
++static inline int test_and_set_bit(int nr, volatile void * addr)
++{
++ int mask, retval;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-14, r15 \n\t" /* LOGIN: r15 = size */
++ " mov.l @%2, %0 \n\t" /* load old value */
++ " mov %0, %1 \n\t"
++ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
++ " mov #-1, %1 \n\t" /* retval = -1 */
++ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
++ " or %3, %0 \n\t"
++ " mov.l %0, @%2 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "=&r" (retval),
++ "+r" (a)
++ : "r" (mask)
++ : "memory" , "r0", "r1" ,"t");
++
++ return retval;
++}
++
++static inline int test_and_clear_bit(int nr, volatile void * addr)
++{
++ int mask, retval, not_mask;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++
++ not_mask = ~mask;
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-14, r15 \n\t" /* LOGIN */
++ " mov.l @%2, %0 \n\t" /* load old value */
++ " mov %0, %1 \n\t" /* %1 = *a */
++ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
++ " mov #-1, %1 \n\t" /* retval = -1 */
++ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
++ " and %4, %0 \n\t"
++ " mov.l %0, @%2 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "=&r" (retval),
++ "+r" (a)
++ : "r" (mask),
++ "r" (not_mask)
++ : "memory" , "r0", "r1", "t");
++
++ return retval;
++}
++
++static inline int test_and_change_bit(int nr, volatile void * addr)
++{
++ int mask, retval;
++ volatile unsigned int *a = addr;
++ unsigned long tmp;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-14, r15 \n\t" /* LOGIN */
++ " mov.l @%2, %0 \n\t" /* load old value */
++ " mov %0, %1 \n\t" /* %1 = *a */
++ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
++ " mov #-1, %1 \n\t" /* retval = -1 */
++ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
++ " xor %3, %0 \n\t"
++ " mov.l %0, @%2 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (tmp),
++ "=&r" (retval),
++ "+r" (a)
++ : "r" (mask)
++ : "memory" , "r0", "r1", "t");
++
++ return retval;
++}
++#endif /* __ASM_SH_BITOPS_GRB_H */
+diff --git a/include/asm-sh/bitops-irq.h b/include/asm-sh/bitops-irq.h
+new file mode 100644
+index 0000000..653a127
+--- /dev/null
++++ b/include/asm-sh/bitops-irq.h
+@@ -0,0 +1,91 @@
++#ifndef __ASM_SH_BITOPS_IRQ_H
++#define __ASM_SH_BITOPS_IRQ_H
++
++static inline void set_bit(int nr, volatile void *addr)
++{
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ *a |= mask;
++ local_irq_restore(flags);
++}
++
++static inline void clear_bit(int nr, volatile void *addr)
++{
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ *a &= ~mask;
++ local_irq_restore(flags);
++}
++
++static inline void change_bit(int nr, volatile void *addr)
++{
++ int mask;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ *a ^= mask;
++ local_irq_restore(flags);
++}
++
++static inline int test_and_set_bit(int nr, volatile void *addr)
++{
++ int mask, retval;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ retval = (mask & *a) != 0;
++ *a |= mask;
++ local_irq_restore(flags);
++
++ return retval;
++}
++
++static inline int test_and_clear_bit(int nr, volatile void *addr)
++{
++ int mask, retval;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ retval = (mask & *a) != 0;
++ *a &= ~mask;
++ local_irq_restore(flags);
++
++ return retval;
++}
++
++static inline int test_and_change_bit(int nr, volatile void *addr)
++{
++ int mask, retval;
++ volatile unsigned int *a = addr;
++ unsigned long flags;
++
++ a += nr >> 5;
++ mask = 1 << (nr & 0x1f);
++ local_irq_save(flags);
++ retval = (mask & *a) != 0;
++ *a ^= mask;
++ local_irq_restore(flags);
++
++ return retval;
++}
++
++#endif /* __ASM_SH_BITOPS_IRQ_H */
+diff --git a/include/asm-sh/bitops.h b/include/asm-sh/bitops.h
+index df805f2..b6ba5a6 100644
+--- a/include/asm-sh/bitops.h
++++ b/include/asm-sh/bitops.h
+@@ -11,100 +11,22 @@
+ /* For __swab32 */
+ #include <asm/byteorder.h>
-- ret = security_file_permission(in, MAY_READ);
-- if (unlikely(ret < 0))
-- return ret;
+-static inline void set_bit(int nr, volatile void * addr)
+-{
+- int mask;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
-
- return in->f_op->splice_read(in, ppos, pipe, len, flags);
- }
-
-@@ -1033,7 +1031,11 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
- goto out_release;
- }
-
-+done:
- pipe->nrbufs = pipe->curbuf = 0;
-+ if (bytes > 0)
-+ file_accessed(in);
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- *a |= mask;
+- local_irq_restore(flags);
+-}
++#ifdef CONFIG_GUSA_RB
++#include <asm/bitops-grb.h>
++#else
++#include <asm/bitops-irq.h>
++#endif
+
- return bytes;
- out_release:
-@@ -1049,16 +1051,11 @@ out_release:
- buf->ops = NULL;
- }
- }
-- pipe->nrbufs = pipe->curbuf = 0;
+ /*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+ #define smp_mb__before_clear_bit() barrier()
+ #define smp_mb__after_clear_bit() barrier()
+-static inline void clear_bit(int nr, volatile void * addr)
+-{
+- int mask;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
-
-- /*
-- * If we transferred some data, return the number of bytes:
-- */
-- if (bytes > 0)
-- return bytes;
-
-- return ret;
-+ if (!bytes)
-+ bytes = ret;
-
-+ goto done;
- }
- EXPORT_SYMBOL(splice_direct_to_actor);
-
-@@ -1440,6 +1437,7 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
- .partial = partial,
- .flags = flags,
- .ops = &user_page_pipe_buf_ops,
-+ .spd_release = spd_release_page,
- };
-
- pipe = pipe_info(file->f_path.dentry->d_inode);
-diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
-index f281cc6..4948d9b 100644
---- a/fs/sysfs/dir.c
-+++ b/fs/sysfs/dir.c
-@@ -440,7 +440,7 @@ int sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd)
- /**
- * sysfs_remove_one - remove sysfs_dirent from parent
- * @acxt: addrm context to use
-- * @sd: sysfs_dirent to be added
-+ * @sd: sysfs_dirent to be removed
- *
- * Mark @sd removed and drop nlink of parent inode if @sd is a
- * directory. @sd is unlinked from the children list.
-diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
-index 4045bdc..a271c87 100644
---- a/fs/sysfs/file.c
-+++ b/fs/sysfs/file.c
-@@ -20,43 +20,6 @@
-
- #include "sysfs.h"
-
--#define to_sattr(a) container_of(a,struct subsys_attribute, attr)
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- *a &= ~mask;
+- local_irq_restore(flags);
+-}
-
--/*
-- * Subsystem file operations.
-- * These operations allow subsystems to have files that can be
-- * read/written.
-- */
--static ssize_t
--subsys_attr_show(struct kobject * kobj, struct attribute * attr, char * page)
+-static inline void change_bit(int nr, volatile void * addr)
-{
-- struct kset *kset = to_kset(kobj);
-- struct subsys_attribute * sattr = to_sattr(attr);
-- ssize_t ret = -EIO;
+- int mask;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
-
-- if (sattr->show)
-- ret = sattr->show(kset, page);
-- return ret;
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- *a ^= mask;
+- local_irq_restore(flags);
-}
-
--static ssize_t
--subsys_attr_store(struct kobject * kobj, struct attribute * attr,
-- const char * page, size_t count)
+-static inline int test_and_set_bit(int nr, volatile void * addr)
-{
-- struct kset *kset = to_kset(kobj);
-- struct subsys_attribute * sattr = to_sattr(attr);
-- ssize_t ret = -EIO;
+- int mask, retval;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
-
-- if (sattr->store)
-- ret = sattr->store(kset, page, count);
-- return ret;
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- retval = (mask & *a) != 0;
+- *a |= mask;
+- local_irq_restore(flags);
+-
+- return retval;
-}
-
--static struct sysfs_ops subsys_sysfs_ops = {
-- .show = subsys_attr_show,
-- .store = subsys_attr_store,
--};
+-static inline int test_and_clear_bit(int nr, volatile void * addr)
+-{
+- int mask, retval;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
-
- /*
- * There's one sysfs_buffer for each open file and one
- * sysfs_open_dirent for each sysfs_dirent with one or more open
-@@ -66,7 +29,7 @@ static struct sysfs_ops subsys_sysfs_ops = {
- * sysfs_dirent->s_attr.open points to sysfs_open_dirent. s_attr.open
- * is protected by sysfs_open_dirent_lock.
- */
--static spinlock_t sysfs_open_dirent_lock = SPIN_LOCK_UNLOCKED;
-+static DEFINE_SPINLOCK(sysfs_open_dirent_lock);
-
- struct sysfs_open_dirent {
- atomic_t refcnt;
-@@ -354,31 +317,23 @@ static int sysfs_open_file(struct inode *inode, struct file *file)
- {
- struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata;
- struct kobject *kobj = attr_sd->s_parent->s_dir.kobj;
-- struct sysfs_buffer * buffer;
-- struct sysfs_ops * ops = NULL;
-- int error;
-+ struct sysfs_buffer *buffer;
-+ struct sysfs_ops *ops;
-+ int error = -EACCES;
-
- /* need attr_sd for attr and ops, its parent for kobj */
- if (!sysfs_get_active_two(attr_sd))
- return -ENODEV;
-
-- /* if the kobject has no ktype, then we assume that it is a subsystem
-- * itself, and use ops for it.
-- */
-- if (kobj->kset && kobj->kset->ktype)
-- ops = kobj->kset->ktype->sysfs_ops;
-- else if (kobj->ktype)
-+ /* every kobject with an attribute needs a ktype assigned */
-+ if (kobj->ktype && kobj->ktype->sysfs_ops)
- ops = kobj->ktype->sysfs_ops;
-- else
-- ops = &subsys_sysfs_ops;
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- retval = (mask & *a) != 0;
+- *a &= ~mask;
+- local_irq_restore(flags);
-
-- error = -EACCES;
+- return retval;
+-}
-
-- /* No sysfs operations, either from having no subsystem,
-- * or the subsystem have no operations.
-- */
-- if (!ops)
-+ else {
-+ printk(KERN_ERR "missing sysfs attribute operations for "
-+ "kobject: %s\n", kobject_name(kobj));
-+ WARN_ON(1);
- goto err_out;
-+ }
-
- /* File needs write support.
- * The inode's perms must say it's ok,
-@@ -568,7 +523,11 @@ int sysfs_add_file_to_group(struct kobject *kobj,
- struct sysfs_dirent *dir_sd;
- int error;
+-static inline int test_and_change_bit(int nr, volatile void * addr)
+-{
+- int mask, retval;
+- volatile unsigned int *a = addr;
+- unsigned long flags;
+-
+- a += nr >> 5;
+- mask = 1 << (nr & 0x1f);
+- local_irq_save(flags);
+- retval = (mask & *a) != 0;
+- *a ^= mask;
+- local_irq_restore(flags);
+-
+- return retval;
+-}
-- dir_sd = sysfs_get_dirent(kobj->sd, group);
-+ if (group)
-+ dir_sd = sysfs_get_dirent(kobj->sd, group);
-+ else
-+ dir_sd = sysfs_get(kobj->sd);
-+
- if (!dir_sd)
- return -ENOENT;
+ #include <asm-generic/bitops/non-atomic.h>
-@@ -656,7 +615,10 @@ void sysfs_remove_file_from_group(struct kobject *kobj,
++#ifdef CONFIG_SUPERH32
+ static inline unsigned long ffz(unsigned long word)
{
- struct sysfs_dirent *dir_sd;
-
-- dir_sd = sysfs_get_dirent(kobj->sd, group);
-+ if (group)
-+ dir_sd = sysfs_get_dirent(kobj->sd, group);
-+ else
-+ dir_sd = sysfs_get(kobj->sd);
- if (dir_sd) {
- sysfs_hash_and_remove(dir_sd, attr->name);
- sysfs_put(dir_sd);
-diff --git a/fs/sysfs/group.c b/fs/sysfs/group.c
-index d197237..0871c3d 100644
---- a/fs/sysfs/group.c
-+++ b/fs/sysfs/group.c
-@@ -16,25 +16,31 @@
- #include "sysfs.h"
-
+ unsigned long result;
+@@ -138,6 +60,31 @@ static inline unsigned long __ffs(unsigned long word)
+ : "t");
+ return result;
+ }
++#else
++static inline unsigned long ffz(unsigned long word)
++{
++ unsigned long result, __d2, __d3;
++
++ __asm__("gettr tr0, %2\n\t"
++ "pta $+32, tr0\n\t"
++ "andi %1, 1, %3\n\t"
++ "beq %3, r63, tr0\n\t"
++ "pta $+4, tr0\n"
++ "0:\n\t"
++ "shlri.l %1, 1, %1\n\t"
++ "addi %0, 1, %0\n\t"
++ "andi %1, 1, %3\n\t"
++ "beqi %3, 1, tr0\n"
++ "1:\n\t"
++ "ptabs %2, tr0\n\t"
++ : "=r" (result), "=r" (word), "=r" (__d2), "=r" (__d3)
++ : "0" (0L), "1" (word));
++
++ return result;
++}
++
++#include <asm-generic/bitops/__ffs.h>
++#endif
--static void remove_files(struct sysfs_dirent *dir_sd,
-+static void remove_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
- const struct attribute_group *grp)
- {
- struct attribute *const* attr;
-+ int i;
+ #include <asm-generic/bitops/find.h>
+ #include <asm-generic/bitops/ffs.h>
+diff --git a/include/asm-sh/bug.h b/include/asm-sh/bug.h
+index a78d482..c017180 100644
+--- a/include/asm-sh/bug.h
++++ b/include/asm-sh/bug.h
+@@ -3,7 +3,7 @@
-- for (attr = grp->attrs; *attr; attr++)
-- sysfs_hash_and_remove(dir_sd, (*attr)->name);
-+ for (i = 0, attr = grp->attrs; *attr; i++, attr++)
-+ if (!grp->is_visible ||
-+ grp->is_visible(kobj, *attr, i))
-+ sysfs_hash_and_remove(dir_sd, (*attr)->name);
- }
+ #define TRAPA_BUG_OPCODE 0xc33e /* trapa #0x3e */
--static int create_files(struct sysfs_dirent *dir_sd,
-+static int create_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
- const struct attribute_group *grp)
- {
- struct attribute *const* attr;
-- int error = 0;
-+ int error = 0, i;
+-#ifdef CONFIG_BUG
++#ifdef CONFIG_GENERIC_BUG
+ #define HAVE_ARCH_BUG
+ #define HAVE_ARCH_WARN_ON
-- for (attr = grp->attrs; *attr && !error; attr++)
-- error = sysfs_add_file(dir_sd, *attr, SYSFS_KOBJ_ATTR);
-+ for (i = 0, attr = grp->attrs; *attr && !error; i++, attr++)
-+ if (!grp->is_visible ||
-+ grp->is_visible(kobj, *attr, i))
-+ error |=
-+ sysfs_add_file(dir_sd, *attr, SYSFS_KOBJ_ATTR);
- if (error)
-- remove_files(dir_sd, grp);
-+ remove_files(dir_sd, kobj, grp);
- return error;
- }
+@@ -72,12 +72,7 @@ do { \
+ unlikely(__ret_warn_on); \
+ })
-@@ -54,7 +60,7 @@ int sysfs_create_group(struct kobject * kobj,
- } else
- sd = kobj->sd;
- sysfs_get(sd);
-- error = create_files(sd, grp);
-+ error = create_files(sd, kobj, grp);
- if (error) {
- if (grp->name)
- sysfs_remove_subdir(sd);
-@@ -75,7 +81,7 @@ void sysfs_remove_group(struct kobject * kobj,
- } else
- sd = sysfs_get(dir_sd);
+-struct pt_regs;
+-
+-/* arch/sh/kernel/traps.c */
+-void handle_BUG(struct pt_regs *);
+-
+-#endif /* CONFIG_BUG */
++#endif /* CONFIG_GENERIC_BUG */
-- remove_files(sd, grp);
-+ remove_files(sd, kobj, grp);
- if (grp->name)
- sysfs_remove_subdir(sd);
+ #include <asm-generic/bug.h>
-diff --git a/fs/sysfs/symlink.c b/fs/sysfs/symlink.c
-index 3eac20c..5f66c44 100644
---- a/fs/sysfs/symlink.c
-+++ b/fs/sysfs/symlink.c
-@@ -19,39 +19,6 @@
+diff --git a/include/asm-sh/bugs.h b/include/asm-sh/bugs.h
+index b66139f..def8128 100644
+--- a/include/asm-sh/bugs.h
++++ b/include/asm-sh/bugs.h
+@@ -25,7 +25,7 @@ static void __init check_bugs(void)
+ case CPU_SH7619:
+ *p++ = '2';
+ break;
+- case CPU_SH7206:
++ case CPU_SH7203 ... CPU_SH7263:
+ *p++ = '2';
+ *p++ = 'a';
+ break;
+@@ -35,7 +35,7 @@ static void __init check_bugs(void)
+ case CPU_SH7750 ... CPU_SH4_501:
+ *p++ = '4';
+ break;
+- case CPU_SH7770 ... CPU_SHX3:
++ case CPU_SH7763 ... CPU_SHX3:
+ *p++ = '4';
+ *p++ = 'a';
+ break;
+@@ -48,9 +48,16 @@ static void __init check_bugs(void)
+ *p++ = 's';
+ *p++ = 'p';
+ break;
+- default:
+- *p++ = '?';
+- *p++ = '!';
++ case CPU_SH5_101 ... CPU_SH5_103:
++ *p++ = '6';
++ *p++ = '4';
++ break;
++ case CPU_SH_NONE:
++ /*
++ * Specifically use CPU_SH_NONE rather than default:,
++ * so we're able to have the compiler whine about
++ * unhandled enumerations.
++ */
+ break;
+ }
- #include "sysfs.h"
+diff --git a/include/asm-sh/byteorder.h b/include/asm-sh/byteorder.h
+index bff2b13..0eb9904 100644
+--- a/include/asm-sh/byteorder.h
++++ b/include/asm-sh/byteorder.h
+@@ -3,40 +3,55 @@
--static int object_depth(struct sysfs_dirent *sd)
--{
-- int depth = 0;
--
-- for (; sd->s_parent; sd = sd->s_parent)
-- depth++;
--
-- return depth;
--}
--
--static int object_path_length(struct sysfs_dirent * sd)
--{
-- int length = 1;
--
-- for (; sd->s_parent; sd = sd->s_parent)
-- length += strlen(sd->s_name) + 1;
--
-- return length;
--}
--
--static void fill_object_path(struct sysfs_dirent *sd, char *buffer, int length)
--{
-- --length;
-- for (; sd->s_parent; sd = sd->s_parent) {
-- int cur = strlen(sd->s_name);
--
-- /* back up enough to print this bus id with '/' */
-- length -= cur;
-- strncpy(buffer + length, sd->s_name, cur);
-- *(buffer + --length) = '/';
-- }
--}
+ /*
+ * Copyright (C) 1999 Niibe Yutaka
++ * Copyright (C) 2000, 2001 Paolo Alberelli
+ */
-
- /**
- * sysfs_create_link - create symlink between two objects.
- * @kobj: object whose directory we're creating the link in.
-@@ -112,7 +79,6 @@ int sysfs_create_link(struct kobject * kobj, struct kobject * target, const char
- return error;
- }
+-#include <asm/types.h>
+ #include <linux/compiler.h>
++#include <linux/types.h>
--
- /**
- * sysfs_remove_link - remove symlink in object's directory.
- * @kobj: object we're acting for.
-@@ -124,24 +90,54 @@ void sysfs_remove_link(struct kobject * kobj, const char * name)
- sysfs_hash_and_remove(kobj->sd, name);
+-static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 x)
++static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
+ {
+- __asm__("swap.b %0, %0\n\t"
+- "swap.w %0, %0\n\t"
+- "swap.b %0, %0"
++ __asm__(
++#ifdef CONFIG_SUPERH32
++ "swap.b %0, %0\n\t"
++ "swap.w %0, %0\n\t"
++ "swap.b %0, %0"
++#else
++ "byterev %0, %0\n\t"
++ "shari %0, 32, %0"
++#endif
+ : "=r" (x)
+ : "0" (x));
++
+ return x;
}
--static int sysfs_get_target_path(struct sysfs_dirent * parent_sd,
-- struct sysfs_dirent * target_sd, char *path)
-+static int sysfs_get_target_path(struct sysfs_dirent *parent_sd,
-+ struct sysfs_dirent *target_sd, char *path)
+-static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 x)
++static inline __attribute_const__ __u16 ___arch__swab16(__u16 x)
{
-- char * s;
-- int depth, size;
-+ struct sysfs_dirent *base, *sd;
-+ char *s = path;
-+ int len = 0;
-+
-+ /* go up to the root, stop at the base */
-+ base = parent_sd;
-+ while (base->s_parent) {
-+ sd = target_sd->s_parent;
-+ while (sd->s_parent && base != sd)
-+ sd = sd->s_parent;
-+
-+ if (base == sd)
-+ break;
+- __asm__("swap.b %0, %0"
++ __asm__(
++#ifdef CONFIG_SUPERH32
++ "swap.b %0, %0"
++#else
++ "byterev %0, %0\n\t"
++ "shari %0, 32, %0"
+
-+ strcpy(s, "../");
-+ s += 3;
-+ base = base->s_parent;
-+ }
++#endif
+ : "=r" (x)
+ : "0" (x));
+
-+ /* determine end of target string for reverse fillup */
-+ sd = target_sd;
-+ while (sd->s_parent && sd != base) {
-+ len += strlen(sd->s_name) + 1;
-+ sd = sd->s_parent;
-+ }
-
-- depth = object_depth(parent_sd);
-- size = object_path_length(target_sd) + depth * 3 - 1;
-- if (size > PATH_MAX)
-+ /* check limits */
-+ if (len < 2)
-+ return -EINVAL;
-+ len--;
-+ if ((s - path) + len > PATH_MAX)
- return -ENAMETOOLONG;
-
-- pr_debug("%s: depth = %d, size = %d\n", __FUNCTION__, depth, size);
-+ /* reverse fillup of target string from target to base */
-+ sd = target_sd;
-+ while (sd->s_parent && sd != base) {
-+ int slen = strlen(sd->s_name);
+ return x;
+ }
-- for (s = path; depth--; s += 3)
-- strcpy(s,"../");
-+ len -= slen;
-+ strncpy(s + len, sd->s_name, slen);
-+ if (len)
-+ s[--len] = '/';
+-static inline __u64 ___arch__swab64(__u64 val)
+-{
+- union {
++static inline __u64 ___arch__swab64(__u64 val)
++{
++ union {
+ struct { __u32 a,b; } s;
+ __u64 u;
+ } v, w;
+ v.u = val;
+- w.s.b = ___arch__swab32(v.s.a);
+- w.s.a = ___arch__swab32(v.s.b);
+- return w.u;
+-}
++ w.s.b = ___arch__swab32(v.s.a);
++ w.s.a = ___arch__swab32(v.s.b);
++ return w.u;
++}
-- fill_object_path(target_sd, path, size);
-- pr_debug("%s: path = '%s'\n", __FUNCTION__, path);
-+ sd = sd->s_parent;
-+ }
+ #define __arch__swab64(x) ___arch__swab64(x)
+ #define __arch__swab32(x) ___arch__swab32(x)
+diff --git a/include/asm-sh/cache.h b/include/asm-sh/cache.h
+index 01e5cf5..083419f 100644
+--- a/include/asm-sh/cache.h
++++ b/include/asm-sh/cache.h
+@@ -12,11 +12,6 @@
+ #include <linux/init.h>
+ #include <asm/cpu/cache.h>
- return 0;
- }
-diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
-index 7b74b60..fb7171b 100644
---- a/include/acpi/acpi_bus.h
-+++ b/include/acpi/acpi_bus.h
-@@ -319,7 +319,7 @@ struct acpi_bus_event {
- u32 data;
- };
+-#define SH_CACHE_VALID 1
+-#define SH_CACHE_UPDATED 2
+-#define SH_CACHE_COMBINED 4
+-#define SH_CACHE_ASSOC 8
+-
+ #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
--extern struct kset acpi_subsys;
-+extern struct kobject *acpi_kobj;
- extern int acpi_bus_generate_netlink_event(const char*, const char*, u8, int);
- /*
- * External Functions
-diff --git a/include/asm-arm/arch-at91/at91_lcdc.h b/include/asm-arm/arch-at91/at91_lcdc.h
-deleted file mode 100644
-index ab040a4..0000000
---- a/include/asm-arm/arch-at91/at91_lcdc.h
-+++ /dev/null
-@@ -1,148 +0,0 @@
+ #define __read_mostly __attribute__((__section__(".data.read_mostly")))
+diff --git a/include/asm-sh/checksum.h b/include/asm-sh/checksum.h
+index 4bc8357..67496ab 100644
+--- a/include/asm-sh/checksum.h
++++ b/include/asm-sh/checksum.h
+@@ -1,215 +1,5 @@
+-#ifndef __ASM_SH_CHECKSUM_H
+-#define __ASM_SH_CHECKSUM_H
+-
-/*
-- * include/asm-arm/arch-at91/at91_lcdc.h
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
- *
-- * LCD Controller (LCDC).
-- * Based on AT91SAM9261 datasheet revision E.
+- * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka
+- */
+-
+-#include <linux/in6.h>
+-
+-/*
+- * computes the checksum of a memory block at buff, length len,
+- * and adds in "sum" (32-bit)
- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License as published by
-- * the Free Software Foundation; either version 2 of the License, or
-- * (at your option) any later version.
+- * returns a 32-bit number suitable for feeding into itself
+- * or csum_tcpudp_magic
+- *
+- * this function must be called with even lengths, except
+- * for the last fragment, which may be odd
+- *
+- * it's best to have buff aligned on a 32-bit boundary
- */
+-asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
-
--#ifndef AT91_LCDC_H
--#define AT91_LCDC_H
+-/*
+- * the same as csum_partial, but copies from src while it
+- * checksums, and handles user-space pointer exceptions correctly, when needed.
+- *
+- * here even more important to align src and dst on a 32-bit (or even
+- * better 64-bit) boundary
+- */
-
--#define AT91_LCDC_DMABADDR1 0x00 /* DMA Base Address Register 1 */
--#define AT91_LCDC_DMABADDR2 0x04 /* DMA Base Address Register 2 */
--#define AT91_LCDC_DMAFRMPT1 0x08 /* DMA Frame Pointer Register 1 */
--#define AT91_LCDC_DMAFRMPT2 0x0c /* DMA Frame Pointer Register 2 */
--#define AT91_LCDC_DMAFRMADD1 0x10 /* DMA Frame Address Register 1 */
--#define AT91_LCDC_DMAFRMADD2 0x14 /* DMA Frame Address Register 2 */
+-asmlinkage __wsum csum_partial_copy_generic(const void *src, void *dst,
+- int len, __wsum sum,
+- int *src_err_ptr, int *dst_err_ptr);
-
--#define AT91_LCDC_DMAFRMCFG 0x18 /* DMA Frame Configuration Register */
--#define AT91_LCDC_FRSIZE (0x7fffff << 0) /* Frame Size */
--#define AT91_LCDC_BLENGTH (0x7f << 24) /* Burst Length */
+-/*
+- * Note: when you get a NULL pointer exception here this means someone
+- * passed in an incorrect kernel address to one of these functions.
+- *
+- * If you use these functions directly please don't forget the
+- * access_ok().
+- */
+-static inline
+-__wsum csum_partial_copy_nocheck(const void *src, void *dst,
+- int len, __wsum sum)
+-{
+- return csum_partial_copy_generic(src, dst, len, sum, NULL, NULL);
+-}
-
--#define AT91_LCDC_DMACON 0x1c /* DMA Control Register */
--#define AT91_LCDC_DMAEN (0x1 << 0) /* DMA Enable */
--#define AT91_LCDC_DMARST (0x1 << 1) /* DMA Reset */
--#define AT91_LCDC_DMABUSY (0x1 << 2) /* DMA Busy */
+-static inline
+-__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
+- int len, __wsum sum, int *err_ptr)
+-{
+- return csum_partial_copy_generic((__force const void *)src, dst,
+- len, sum, err_ptr, NULL);
+-}
-
--#define AT91_LCDC_LCDCON1 0x0800 /* LCD Control Register 1 */
--#define AT91_LCDC_BYPASS (1 << 0) /* Bypass lcd_dotck divider */
--#define AT91_LCDC_CLKVAL (0x1ff << 12) /* Clock Divider */
--#define AT91_LCDC_LINCNT (0x7ff << 21) /* Line Counter */
+-/*
+- * Fold a partial checksum
+- */
-
--#define AT91_LCDC_LCDCON2 0x0804 /* LCD Control Register 2 */
--#define AT91_LCDC_DISTYPE (3 << 0) /* Display Type */
--#define AT91_LCDC_DISTYPE_STNMONO (0 << 0)
--#define AT91_LCDC_DISTYPE_STNCOLOR (1 << 0)
--#define AT91_LCDC_DISTYPE_TFT (2 << 0)
--#define AT91_LCDC_SCANMOD (1 << 2) /* Scan Mode */
--#define AT91_LCDC_SCANMOD_SINGLE (0 << 2)
--#define AT91_LCDC_SCANMOD_DUAL (1 << 2)
--#define AT91_LCDC_IFWIDTH (3 << 3) /*Interface Width */
--#define AT91_LCDC_IFWIDTH_4 (0 << 3)
--#define AT91_LCDC_IFWIDTH_8 (1 << 3)
--#define AT91_LCDC_IFWIDTH_16 (2 << 3)
--#define AT91_LCDC_PIXELSIZE (7 << 5) /* Bits per pixel */
--#define AT91_LCDC_PIXELSIZE_1 (0 << 5)
--#define AT91_LCDC_PIXELSIZE_2 (1 << 5)
--#define AT91_LCDC_PIXELSIZE_4 (2 << 5)
--#define AT91_LCDC_PIXELSIZE_8 (3 << 5)
--#define AT91_LCDC_PIXELSIZE_16 (4 << 5)
--#define AT91_LCDC_PIXELSIZE_24 (5 << 5)
--#define AT91_LCDC_INVVD (1 << 8) /* LCD Data polarity */
--#define AT91_LCDC_INVVD_NORMAL (0 << 8)
--#define AT91_LCDC_INVVD_INVERTED (1 << 8)
--#define AT91_LCDC_INVFRAME (1 << 9 ) /* LCD VSync polarity */
--#define AT91_LCDC_INVFRAME_NORMAL (0 << 9)
--#define AT91_LCDC_INVFRAME_INVERTED (1 << 9)
--#define AT91_LCDC_INVLINE (1 << 10) /* LCD HSync polarity */
--#define AT91_LCDC_INVLINE_NORMAL (0 << 10)
--#define AT91_LCDC_INVLINE_INVERTED (1 << 10)
--#define AT91_LCDC_INVCLK (1 << 11) /* LCD dotclk polarity */
--#define AT91_LCDC_INVCLK_NORMAL (0 << 11)
--#define AT91_LCDC_INVCLK_INVERTED (1 << 11)
--#define AT91_LCDC_INVDVAL (1 << 12) /* LCD dval polarity */
--#define AT91_LCDC_INVDVAL_NORMAL (0 << 12)
--#define AT91_LCDC_INVDVAL_INVERTED (1 << 12)
--#define AT91_LCDC_CLKMOD (1 << 15) /* LCD dotclk mode */
--#define AT91_LCDC_CLKMOD_ACTIVEDISPLAY (0 << 15)
--#define AT91_LCDC_CLKMOD_ALWAYSACTIVE (1 << 15)
--#define AT91_LCDC_MEMOR (1 << 31) /* Memory Ordering Format */
--#define AT91_LCDC_MEMOR_BIG (0 << 31)
--#define AT91_LCDC_MEMOR_LITTLE (1 << 31)
+-static inline __sum16 csum_fold(__wsum sum)
+-{
+- unsigned int __dummy;
+- __asm__("swap.w %0, %1\n\t"
+- "extu.w %0, %0\n\t"
+- "extu.w %1, %1\n\t"
+- "add %1, %0\n\t"
+- "swap.w %0, %1\n\t"
+- "add %1, %0\n\t"
+- "not %0, %0\n\t"
+- : "=r" (sum), "=&r" (__dummy)
+- : "0" (sum)
+- : "t");
+- return (__force __sum16)sum;
+-}
-
--#define AT91_LCDC_TIM1 0x0808 /* LCD Timing Register 1 */
--#define AT91_LCDC_VFP (0xff << 0) /* Vertical Front Porch */
--#define AT91_LCDC_VBP (0xff << 8) /* Vertical Back Porch */
--#define AT91_LCDC_VPW (0x3f << 16) /* Vertical Synchronization Pulse Width */
--#define AT91_LCDC_VHDLY (0xf << 24) /* Vertical to Horizontal Delay */
+-/*
+- * This is a version of ip_compute_csum() optimized for IP headers,
+- * which always checksum on 4 octet boundaries.
+- *
+- * i386 version by Jorge Cwik <jorge at laser.satlink.net>, adapted
+- * for linux by * Arnt Gulbrandsen.
+- */
+-static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
+-{
+- unsigned int sum, __dummy0, __dummy1;
-
--#define AT91_LCDC_TIM2 0x080c /* LCD Timing Register 2 */
--#define AT91_LCDC_HBP (0xff << 0) /* Horizontal Back Porch */
--#define AT91_LCDC_HPW (0x3f << 8) /* Horizontal Synchronization Pulse Width */
--#define AT91_LCDC_HFP (0x7ff << 21) /* Horizontal Front Porch */
+- __asm__ __volatile__(
+- "mov.l @%1+, %0\n\t"
+- "mov.l @%1+, %3\n\t"
+- "add #-2, %2\n\t"
+- "clrt\n\t"
+- "1:\t"
+- "addc %3, %0\n\t"
+- "movt %4\n\t"
+- "mov.l @%1+, %3\n\t"
+- "dt %2\n\t"
+- "bf/s 1b\n\t"
+- " cmp/eq #1, %4\n\t"
+- "addc %3, %0\n\t"
+- "addc %2, %0" /* Here %2 is 0, add carry-bit */
+- /* Since the input registers which are loaded with iph and ihl
+- are modified, we must also specify them as outputs, or gcc
+- will assume they contain their original values. */
+- : "=r" (sum), "=r" (iph), "=r" (ihl), "=&r" (__dummy0), "=&z" (__dummy1)
+- : "1" (iph), "2" (ihl)
+- : "t");
-
--#define AT91_LCDC_LCDFRMCFG 0x0810 /* LCD Frame Configuration Register */
--#define AT91_LCDC_LINEVAL (0x7ff << 0) /* Vertical Size of LCD Module */
--#define AT91_LCDC_HOZVAL (0x7ff << 21) /* Horizontal Size of LCD Module */
+- return csum_fold(sum);
+-}
-
--#define AT91_LCDC_FIFO 0x0814 /* LCD FIFO Register */
--#define AT91_LCDC_FIFOTH (0xffff) /* FIFO Threshold */
+-static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+- unsigned short len,
+- unsigned short proto,
+- __wsum sum)
+-{
+-#ifdef __LITTLE_ENDIAN__
+- unsigned long len_proto = (proto + len) << 8;
++#ifdef CONFIG_SUPERH32
++# include "checksum_32.h"
+ #else
+- unsigned long len_proto = proto + len;
++# include "checksum_64.h"
+ #endif
+- __asm__("clrt\n\t"
+- "addc %0, %1\n\t"
+- "addc %2, %1\n\t"
+- "addc %3, %1\n\t"
+- "movt %0\n\t"
+- "add %1, %0"
+- : "=r" (sum), "=r" (len_proto)
+- : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum)
+- : "t");
-
--#define AT91_LCDC_DP1_2 0x081c /* Dithering Pattern DP1_2 Register */
--#define AT91_LCDC_DP4_7 0x0820 /* Dithering Pattern DP4_7 Register */
--#define AT91_LCDC_DP3_5 0x0824 /* Dithering Pattern DP3_5 Register */
--#define AT91_LCDC_DP2_3 0x0828 /* Dithering Pattern DP2_3 Register */
--#define AT91_LCDC_DP5_7 0x082c /* Dithering Pattern DP5_7 Register */
--#define AT91_LCDC_DP3_4 0x0830 /* Dithering Pattern DP3_4 Register */
--#define AT91_LCDC_DP4_5 0x0834 /* Dithering Pattern DP4_5 Register */
--#define AT91_LCDC_DP6_7 0x0838 /* Dithering Pattern DP6_7 Register */
--#define AT91_LCDC_DP1_2_VAL (0xff)
--#define AT91_LCDC_DP4_7_VAL (0xfffffff)
--#define AT91_LCDC_DP3_5_VAL (0xfffff)
--#define AT91_LCDC_DP2_3_VAL (0xfff)
--#define AT91_LCDC_DP5_7_VAL (0xfffffff)
--#define AT91_LCDC_DP3_4_VAL (0xffff)
--#define AT91_LCDC_DP4_5_VAL (0xfffff)
--#define AT91_LCDC_DP6_7_VAL (0xfffffff)
+- return sum;
+-}
-
--#define AT91_LCDC_PWRCON 0x083c /* Power Control Register */
--#define AT91_LCDC_PWR (1 << 0) /* LCD Module Power Control */
--#define AT91_LCDC_GUARDT (0x7f << 1) /* Delay in Frame Period */
--#define AT91_LCDC_BUSY (1 << 31) /* LCD Busy */
+-/*
+- * computes the checksum of the TCP/UDP pseudo-header
+- * returns a 16-bit checksum, already complemented
+- */
+-static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
+- unsigned short len,
+- unsigned short proto,
+- __wsum sum)
+-{
+- return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
+-}
-
--#define AT91_LCDC_CONTRAST_CTR 0x0840 /* Contrast Control Register */
--#define AT91_LCDC_PS (3 << 0) /* Contrast Counter Prescaler */
--#define AT91_LCDC_PS_DIV1 (0 << 0)
--#define AT91_LCDC_PS_DIV2 (1 << 0)
--#define AT91_LCDC_PS_DIV4 (2 << 0)
--#define AT91_LCDC_PS_DIV8 (3 << 0)
--#define AT91_LCDC_POL (1 << 2) /* Polarity of output Pulse */
--#define AT91_LCDC_POL_NEGATIVE (0 << 2)
--#define AT91_LCDC_POL_POSITIVE (1 << 2)
--#define AT91_LCDC_ENA (1 << 3) /* PWM generator Control */
--#define AT91_LCDC_ENA_PWMDISABLE (0 << 3)
--#define AT91_LCDC_ENA_PWMENABLE (1 << 3)
+-/*
+- * this routine is used for miscellaneous IP-like checksums, mainly
+- * in icmp.c
+- */
+-static inline __sum16 ip_compute_csum(const void *buff, int len)
+-{
+- return csum_fold(csum_partial(buff, len, 0));
+-}
-
--#define AT91_LCDC_CONTRAST_VAL 0x0844 /* Contrast Value Register */
--#define AT91_LCDC_CVAL (0xff) /* PWM compare value */
+-#define _HAVE_ARCH_IPV6_CSUM
+-static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+- const struct in6_addr *daddr,
+- __u32 len, unsigned short proto,
+- __wsum sum)
+-{
+- unsigned int __dummy;
+- __asm__("clrt\n\t"
+- "mov.l @(0,%2), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(4,%2), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(8,%2), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(12,%2), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(0,%3), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(4,%3), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(8,%3), %1\n\t"
+- "addc %1, %0\n\t"
+- "mov.l @(12,%3), %1\n\t"
+- "addc %1, %0\n\t"
+- "addc %4, %0\n\t"
+- "addc %5, %0\n\t"
+- "movt %1\n\t"
+- "add %1, %0\n"
+- : "=r" (sum), "=&r" (__dummy)
+- : "r" (saddr), "r" (daddr),
+- "r" (htonl(len)), "r" (htonl(proto)), "0" (sum)
+- : "t");
-
--#define AT91_LCDC_IER 0x0848 /* Interrupt Enable Register */
--#define AT91_LCDC_IDR 0x084c /* Interrupt Disable Register */
--#define AT91_LCDC_IMR 0x0850 /* Interrupt Mask Register */
--#define AT91_LCDC_ISR 0x0854 /* Interrupt Enable Register */
--#define AT91_LCDC_ICR 0x0858 /* Interrupt Clear Register */
--#define AT91_LCDC_LNI (1 << 0) /* Line Interrupt */
--#define AT91_LCDC_LSTLNI (1 << 1) /* Last Line Interrupt */
--#define AT91_LCDC_EOFI (1 << 2) /* DMA End Of Frame Interrupt */
--#define AT91_LCDC_UFLWI (1 << 4) /* FIFO Underflow Interrupt */
--#define AT91_LCDC_OWRI (1 << 5) /* FIFO Overwrite Interrupt */
--#define AT91_LCDC_MERI (1 << 6) /* DMA Memory Error Interrupt */
+- return csum_fold(sum);
+-}
-
--#define AT91_LCDC_LUT_(n) (0x0c00 + ((n)*4)) /* Palette Entry 0..255 */
+-/*
+- * Copy and checksum to user
+- */
+-#define HAVE_CSUM_COPY_USER
+-static inline __wsum csum_and_copy_to_user(const void *src,
+- void __user *dst,
+- int len, __wsum sum,
+- int *err_ptr)
+-{
+- if (access_ok(VERIFY_WRITE, dst, len))
+- return csum_partial_copy_generic((__force const void *)src,
+- dst, len, sum, NULL, err_ptr);
-
--#endif
-diff --git a/include/asm-arm/arch-at91/at91_pmc.h b/include/asm-arm/arch-at91/at91_pmc.h
-index 33ff5b6..52cd8e5 100644
---- a/include/asm-arm/arch-at91/at91_pmc.h
-+++ b/include/asm-arm/arch-at91/at91_pmc.h
-@@ -25,6 +25,7 @@
- #define AT91RM9200_PMC_MCKUDP (1 << 2) /* USB Device Port Master Clock Automatic Disable on Suspend [AT91RM9200 only] */
- #define AT91RM9200_PMC_UHP (1 << 4) /* USB Host Port Clock [AT91RM9200 only] */
- #define AT91SAM926x_PMC_UHP (1 << 6) /* USB Host Port Clock [AT91SAM926x only] */
-+#define AT91CAP9_PMC_UHP (1 << 6) /* USB Host Port Clock [AT91CAP9 only] */
- #define AT91SAM926x_PMC_UDP (1 << 7) /* USB Devcice Port Clock [AT91SAM926x only] */
- #define AT91_PMC_PCK0 (1 << 8) /* Programmable Clock 0 */
- #define AT91_PMC_PCK1 (1 << 9) /* Programmable Clock 1 */
-@@ -37,7 +38,9 @@
- #define AT91_PMC_PCDR (AT91_PMC + 0x14) /* Peripheral Clock Disable Register */
- #define AT91_PMC_PCSR (AT91_PMC + 0x18) /* Peripheral Clock Status Register */
-
--#define AT91_CKGR_MOR (AT91_PMC + 0x20) /* Main Oscillator Register */
-+#define AT91_CKGR_UCKR (AT91_PMC + 0x1C) /* UTMI Clock Register [SAM9RL, CAP9] */
-+
-+#define AT91_CKGR_MOR (AT91_PMC + 0x20) /* Main Oscillator Register [not on SAM9RL] */
- #define AT91_PMC_MOSCEN (1 << 0) /* Main Oscillator Enable */
- #define AT91_PMC_OSCBYPASS (1 << 1) /* Oscillator Bypass [AT91SAM926x only] */
- #define AT91_PMC_OSCOUNT (0xff << 8) /* Main Oscillator Start-up Time */
-@@ -52,6 +55,10 @@
- #define AT91_PMC_PLLCOUNT (0x3f << 8) /* PLL Counter */
- #define AT91_PMC_OUT (3 << 14) /* PLL Clock Frequency Range */
- #define AT91_PMC_MUL (0x7ff << 16) /* PLL Multiplier */
-+#define AT91_PMC_USBDIV (3 << 28) /* USB Divisor (PLLB only) */
-+#define AT91_PMC_USBDIV_1 (0 << 28)
-+#define AT91_PMC_USBDIV_2 (1 << 28)
-+#define AT91_PMC_USBDIV_4 (2 << 28)
- #define AT91_PMC_USB96M (1 << 28) /* Divider by 2 Enable (PLLB only) */
-
- #define AT91_PMC_MCKR (AT91_PMC + 0x30) /* Master Clock Register */
-diff --git a/include/asm-arm/arch-at91/at91_rtt.h b/include/asm-arm/arch-at91/at91_rtt.h
-index bae1103..39a3263 100644
---- a/include/asm-arm/arch-at91/at91_rtt.h
-+++ b/include/asm-arm/arch-at91/at91_rtt.h
-@@ -13,19 +13,19 @@
- #ifndef AT91_RTT_H
- #define AT91_RTT_H
-
--#define AT91_RTT_MR (AT91_RTT + 0x00) /* Real-time Mode Register */
-+#define AT91_RTT_MR 0x00 /* Real-time Mode Register */
- #define AT91_RTT_RTPRES (0xffff << 0) /* Real-time Timer Prescaler Value */
- #define AT91_RTT_ALMIEN (1 << 16) /* Alarm Interrupt Enable */
- #define AT91_RTT_RTTINCIEN (1 << 17) /* Real Time Timer Increment Interrupt Enable */
- #define AT91_RTT_RTTRST (1 << 18) /* Real Time Timer Restart */
-
--#define AT91_RTT_AR (AT91_RTT + 0x04) /* Real-time Alarm Register */
-+#define AT91_RTT_AR 0x04 /* Real-time Alarm Register */
- #define AT91_RTT_ALMV (0xffffffff) /* Alarm Value */
-
--#define AT91_RTT_VR (AT91_RTT + 0x08) /* Real-time Value Register */
-+#define AT91_RTT_VR 0x08 /* Real-time Value Register */
- #define AT91_RTT_CRTV (0xffffffff) /* Current Real-time Value */
-
--#define AT91_RTT_SR (AT91_RTT + 0x0c) /* Real-time Status Register */
-+#define AT91_RTT_SR 0x0c /* Real-time Status Register */
- #define AT91_RTT_ALMS (1 << 0) /* Real-time Alarm Status */
- #define AT91_RTT_RTTINC (1 << 1) /* Real-time Timer Increment */
-
-diff --git a/include/asm-arm/arch-at91/at91_twi.h b/include/asm-arm/arch-at91/at91_twi.h
-index ca9a907..f9f2e3c 100644
---- a/include/asm-arm/arch-at91/at91_twi.h
-+++ b/include/asm-arm/arch-at91/at91_twi.h
-@@ -21,6 +21,8 @@
- #define AT91_TWI_STOP (1 << 1) /* Send a Stop Condition */
- #define AT91_TWI_MSEN (1 << 2) /* Master Transfer Enable */
- #define AT91_TWI_MSDIS (1 << 3) /* Master Transfer Disable */
-+#define AT91_TWI_SVEN (1 << 4) /* Slave Transfer Enable [SAM9260 only] */
-+#define AT91_TWI_SVDIS (1 << 5) /* Slave Transfer Disable [SAM9260 only] */
- #define AT91_TWI_SWRST (1 << 7) /* Software Reset */
-
- #define AT91_TWI_MMR 0x04 /* Master Mode Register */
-@@ -32,6 +34,9 @@
- #define AT91_TWI_MREAD (1 << 12) /* Master Read Direction */
- #define AT91_TWI_DADR (0x7f << 16) /* Device Address */
-
-+#define AT91_TWI_SMR 0x08 /* Slave Mode Register [SAM9260 only] */
-+#define AT91_TWI_SADR (0x7f << 16) /* Slave Address */
-+
- #define AT91_TWI_IADR 0x0c /* Internal Address Register */
-
- #define AT91_TWI_CWGR 0x10 /* Clock Waveform Generator Register */
-@@ -43,9 +48,15 @@
- #define AT91_TWI_TXCOMP (1 << 0) /* Transmission Complete */
- #define AT91_TWI_RXRDY (1 << 1) /* Receive Holding Register Ready */
- #define AT91_TWI_TXRDY (1 << 2) /* Transmit Holding Register Ready */
-+#define AT91_TWI_SVREAD (1 << 3) /* Slave Read [SAM9260 only] */
-+#define AT91_TWI_SVACC (1 << 4) /* Slave Access [SAM9260 only] */
-+#define AT91_TWI_GACC (1 << 5) /* General Call Access [SAM9260 only] */
- #define AT91_TWI_OVRE (1 << 6) /* Overrun Error [AT91RM9200 only] */
- #define AT91_TWI_UNRE (1 << 7) /* Underrun Error [AT91RM9200 only] */
- #define AT91_TWI_NACK (1 << 8) /* Not Acknowledged */
-+#define AT91_TWI_ARBLST (1 << 9) /* Arbitration Lost [SAM9260 only] */
-+#define AT91_TWI_SCLWS (1 << 10) /* Clock Wait State [SAM9260 only] */
-+#define AT91_TWI_EOSACC (1 << 11) /* End of Slave Address [SAM9260 only] */
-
- #define AT91_TWI_IER 0x24 /* Interrupt Enable Register */
- #define AT91_TWI_IDR 0x28 /* Interrupt Disable Register */
-diff --git a/include/asm-arm/arch-at91/at91cap9.h b/include/asm-arm/arch-at91/at91cap9.h
+- if (len)
+- *err_ptr = -EFAULT;
+-
+- return (__force __wsum)-1; /* invalid checksum */
+-}
+-#endif /* __ASM_SH_CHECKSUM_H */
+diff --git a/include/asm-sh/checksum_32.h b/include/asm-sh/checksum_32.h
new file mode 100644
-index 0000000..73e1fcf
+index 0000000..4bc8357
--- /dev/null
-+++ b/include/asm-arm/arch-at91/at91cap9.h
-@@ -0,0 +1,121 @@
++++ b/include/asm-sh/checksum_32.h
+@@ -0,0 +1,215 @@
++#ifndef __ASM_SH_CHECKSUM_H
++#define __ASM_SH_CHECKSUM_H
++
+/*
-+ * include/asm-arm/arch-at91/at91cap9.h
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ *
-+ * Copyright (C) 2007 Stelian Pop <stelian.pop at leadtechdesign.com>
-+ * Copyright (C) 2007 Lead Tech Design <www.leadtechdesign.com>
-+ * Copyright (C) 2007 Atmel Corporation.
++ * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka
++ */
++
++#include <linux/in6.h>
++
++/*
++ * computes the checksum of a memory block at buff, length len,
++ * and adds in "sum" (32-bit)
+ *
-+ * Common definitions.
-+ * Based on AT91CAP9 datasheet revision B (Preliminary).
++ * returns a 32-bit number suitable for feeding into itself
++ * or csum_tcpudp_magic
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
++ * this function must be called with even lengths, except
++ * for the last fragment, which may be odd
++ *
++ * it's best to have buff aligned on a 32-bit boundary
+ */
++asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
+
-+#ifndef AT91CAP9_H
-+#define AT91CAP9_H
++/*
++ * the same as csum_partial, but copies from src while it
++ * checksums, and handles user-space pointer exceptions correctly, when needed.
++ *
++ * here even more important to align src and dst on a 32-bit (or even
++ * better 64-bit) boundary
++ */
++
++asmlinkage __wsum csum_partial_copy_generic(const void *src, void *dst,
++ int len, __wsum sum,
++ int *src_err_ptr, int *dst_err_ptr);
+
+/*
-+ * Peripheral identifiers/interrupts.
++ * Note: when you get a NULL pointer exception here this means someone
++ * passed in an incorrect kernel address to one of these functions.
++ *
++ * If you use these functions directly please don't forget the
++ * access_ok().
+ */
-+#define AT91_ID_FIQ 0 /* Advanced Interrupt Controller (FIQ) */
-+#define AT91_ID_SYS 1 /* System Peripherals */
-+#define AT91CAP9_ID_PIOABCD 2 /* Parallel IO Controller A, B, C and D */
-+#define AT91CAP9_ID_MPB0 3 /* MP Block Peripheral 0 */
-+#define AT91CAP9_ID_MPB1 4 /* MP Block Peripheral 1 */
-+#define AT91CAP9_ID_MPB2 5 /* MP Block Peripheral 2 */
-+#define AT91CAP9_ID_MPB3 6 /* MP Block Peripheral 3 */
-+#define AT91CAP9_ID_MPB4 7 /* MP Block Peripheral 4 */
-+#define AT91CAP9_ID_US0 8 /* USART 0 */
-+#define AT91CAP9_ID_US1 9 /* USART 1 */
-+#define AT91CAP9_ID_US2 10 /* USART 2 */
-+#define AT91CAP9_ID_MCI0 11 /* Multimedia Card Interface 0 */
-+#define AT91CAP9_ID_MCI1 12 /* Multimedia Card Interface 1 */
-+#define AT91CAP9_ID_CAN 13 /* CAN */
-+#define AT91CAP9_ID_TWI 14 /* Two-Wire Interface */
-+#define AT91CAP9_ID_SPI0 15 /* Serial Peripheral Interface 0 */
-+#define AT91CAP9_ID_SPI1	16	/* Serial Peripheral Interface 1 */
-+#define AT91CAP9_ID_SSC0 17 /* Serial Synchronous Controller 0 */
-+#define AT91CAP9_ID_SSC1 18 /* Serial Synchronous Controller 1 */
-+#define AT91CAP9_ID_AC97C 19 /* AC97 Controller */
-+#define AT91CAP9_ID_TCB 20 /* Timer Counter 0, 1 and 2 */
-+#define AT91CAP9_ID_PWMC 21 /* Pulse Width Modulation Controller */
-+#define AT91CAP9_ID_EMAC 22 /* Ethernet */
-+#define AT91CAP9_ID_AESTDES 23 /* Advanced Encryption Standard, Triple DES */
-+#define AT91CAP9_ID_ADC 24 /* Analog-to-Digital Converter */
-+#define AT91CAP9_ID_ISI 25 /* Image Sensor Interface */
-+#define AT91CAP9_ID_LCDC 26 /* LCD Controller */
-+#define AT91CAP9_ID_DMA 27 /* DMA Controller */
-+#define AT91CAP9_ID_UDPHS 28 /* USB High Speed Device Port */
-+#define AT91CAP9_ID_UHP 29 /* USB Host Port */
-+#define AT91CAP9_ID_IRQ0 30 /* Advanced Interrupt Controller (IRQ0) */
-+#define AT91CAP9_ID_IRQ1 31 /* Advanced Interrupt Controller (IRQ1) */
++static inline
++__wsum csum_partial_copy_nocheck(const void *src, void *dst,
++ int len, __wsum sum)
++{
++ return csum_partial_copy_generic(src, dst, len, sum, NULL, NULL);
++}
++
++static inline
++__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
++ int len, __wsum sum, int *err_ptr)
++{
++ return csum_partial_copy_generic((__force const void *)src, dst,
++ len, sum, err_ptr, NULL);
++}
+
+/*
-+ * User Peripheral physical base addresses.
++ * Fold a partial checksum
+ */
-+#define AT91CAP9_BASE_UDPHS 0xfff78000
-+#define AT91CAP9_BASE_TCB0 0xfff7c000
-+#define AT91CAP9_BASE_TC0 0xfff7c000
-+#define AT91CAP9_BASE_TC1 0xfff7c040
-+#define AT91CAP9_BASE_TC2 0xfff7c080
-+#define AT91CAP9_BASE_MCI0 0xfff80000
-+#define AT91CAP9_BASE_MCI1 0xfff84000
-+#define AT91CAP9_BASE_TWI 0xfff88000
-+#define AT91CAP9_BASE_US0 0xfff8c000
-+#define AT91CAP9_BASE_US1 0xfff90000
-+#define AT91CAP9_BASE_US2 0xfff94000
-+#define AT91CAP9_BASE_SSC0 0xfff98000
-+#define AT91CAP9_BASE_SSC1 0xfff9c000
-+#define AT91CAP9_BASE_AC97C 0xfffa0000
-+#define AT91CAP9_BASE_SPI0 0xfffa4000
-+#define AT91CAP9_BASE_SPI1 0xfffa8000
-+#define AT91CAP9_BASE_CAN 0xfffac000
-+#define AT91CAP9_BASE_PWMC 0xfffb8000
-+#define AT91CAP9_BASE_EMAC 0xfffbc000
-+#define AT91CAP9_BASE_ADC 0xfffc0000
-+#define AT91CAP9_BASE_ISI 0xfffc4000
-+#define AT91_BASE_SYS 0xffffe200
++
++static inline __sum16 csum_fold(__wsum sum)
++{
++ unsigned int __dummy;
++ __asm__("swap.w %0, %1\n\t"
++ "extu.w %0, %0\n\t"
++ "extu.w %1, %1\n\t"
++ "add %1, %0\n\t"
++ "swap.w %0, %1\n\t"
++ "add %1, %0\n\t"
++ "not %0, %0\n\t"
++ : "=r" (sum), "=&r" (__dummy)
++ : "0" (sum)
++ : "t");
++ return (__force __sum16)sum;
++}
+
+/*
-+ * System Peripherals (offset from AT91_BASE_SYS)
++ * This is a version of ip_compute_csum() optimized for IP headers,
++ * which always checksum on 4 octet boundaries.
++ *
++ * i386 version by Jorge Cwik <jorge at laser.satlink.net>, adapted
++ * for linux by * Arnt Gulbrandsen.
+ */
-+#define AT91_ECC (0xffffe200 - AT91_BASE_SYS)
-+#define AT91_BCRAMC (0xffffe400 - AT91_BASE_SYS)
-+#define AT91_DDRSDRC (0xffffe600 - AT91_BASE_SYS)
-+#define AT91_SMC (0xffffe800 - AT91_BASE_SYS)
-+#define AT91_MATRIX (0xffffea00 - AT91_BASE_SYS)
-+#define AT91_CCFG (0xffffeb10 - AT91_BASE_SYS)
-+#define AT91_DMA (0xffffec00 - AT91_BASE_SYS)
-+#define AT91_DBGU (0xffffee00 - AT91_BASE_SYS)
-+#define AT91_AIC (0xfffff000 - AT91_BASE_SYS)
-+#define AT91_PIOA (0xfffff200 - AT91_BASE_SYS)
-+#define AT91_PIOB (0xfffff400 - AT91_BASE_SYS)
-+#define AT91_PIOC (0xfffff600 - AT91_BASE_SYS)
-+#define AT91_PIOD (0xfffff800 - AT91_BASE_SYS)
-+#define AT91_PMC (0xfffffc00 - AT91_BASE_SYS)
-+#define AT91_RSTC (0xfffffd00 - AT91_BASE_SYS)
-+#define AT91_SHDC (0xfffffd10 - AT91_BASE_SYS)
-+#define AT91_RTT (0xfffffd20 - AT91_BASE_SYS)
-+#define AT91_PIT (0xfffffd30 - AT91_BASE_SYS)
-+#define AT91_WDT (0xfffffd40 - AT91_BASE_SYS)
-+#define AT91_GPBR (0xfffffd50 - AT91_BASE_SYS)
++static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
++{
++ unsigned int sum, __dummy0, __dummy1;
++
++ __asm__ __volatile__(
++ "mov.l @%1+, %0\n\t"
++ "mov.l @%1+, %3\n\t"
++ "add #-2, %2\n\t"
++ "clrt\n\t"
++ "1:\t"
++ "addc %3, %0\n\t"
++ "movt %4\n\t"
++ "mov.l @%1+, %3\n\t"
++ "dt %2\n\t"
++ "bf/s 1b\n\t"
++ " cmp/eq #1, %4\n\t"
++ "addc %3, %0\n\t"
++ "addc %2, %0" /* Here %2 is 0, add carry-bit */
++ /* Since the input registers which are loaded with iph and ihl
++ are modified, we must also specify them as outputs, or gcc
++ will assume they contain their original values. */
++ : "=r" (sum), "=r" (iph), "=r" (ihl), "=&r" (__dummy0), "=&z" (__dummy1)
++ : "1" (iph), "2" (ihl)
++ : "t");
++
++ return csum_fold(sum);
++}
++
++static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
++ unsigned short len,
++ unsigned short proto,
++ __wsum sum)
++{
++#ifdef __LITTLE_ENDIAN__
++ unsigned long len_proto = (proto + len) << 8;
++#else
++ unsigned long len_proto = proto + len;
++#endif
++ __asm__("clrt\n\t"
++ "addc %0, %1\n\t"
++ "addc %2, %1\n\t"
++ "addc %3, %1\n\t"
++ "movt %0\n\t"
++ "add %1, %0"
++ : "=r" (sum), "=r" (len_proto)
++ : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum)
++ : "t");
++
++ return sum;
++}
+
+/*
-+ * Internal Memory.
++ * computes the checksum of the TCP/UDP pseudo-header
++ * returns a 16-bit checksum, already complemented
+ */
-+#define AT91CAP9_SRAM_BASE 0x00100000 /* Internal SRAM base address */
-+#define AT91CAP9_SRAM_SIZE (32 * SZ_1K) /* Internal SRAM size (32Kb) */
++static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
++ unsigned short len,
++ unsigned short proto,
++ __wsum sum)
++{
++ return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
++}
+
-+#define AT91CAP9_ROM_BASE 0x00400000 /* Internal ROM base address */
-+#define AT91CAP9_ROM_SIZE (32 * SZ_1K) /* Internal ROM size (32Kb) */
++/*
++ * this routine is used for miscellaneous IP-like checksums, mainly
++ * in icmp.c
++ */
++static inline __sum16 ip_compute_csum(const void *buff, int len)
++{
++ return csum_fold(csum_partial(buff, len, 0));
++}
+
-+#define AT91CAP9_LCDC_BASE 0x00500000 /* LCD Controller */
-+#define AT91CAP9_UDPHS_BASE 0x00600000 /* USB High Speed Device Port */
-+#define AT91CAP9_UHP_BASE 0x00700000 /* USB Host controller */
++#define _HAVE_ARCH_IPV6_CSUM
++static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
++ const struct in6_addr *daddr,
++ __u32 len, unsigned short proto,
++ __wsum sum)
++{
++ unsigned int __dummy;
++ __asm__("clrt\n\t"
++ "mov.l @(0,%2), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(4,%2), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(8,%2), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(12,%2), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(0,%3), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(4,%3), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(8,%3), %1\n\t"
++ "addc %1, %0\n\t"
++ "mov.l @(12,%3), %1\n\t"
++ "addc %1, %0\n\t"
++ "addc %4, %0\n\t"
++ "addc %5, %0\n\t"
++ "movt %1\n\t"
++ "add %1, %0\n"
++ : "=r" (sum), "=&r" (__dummy)
++ : "r" (saddr), "r" (daddr),
++ "r" (htonl(len)), "r" (htonl(proto)), "0" (sum)
++ : "t");
+
-+#define CONFIG_DRAM_BASE AT91_CHIPSELECT_6
++ return csum_fold(sum);
++}
+
-+#endif
-diff --git a/include/asm-arm/arch-at91/at91cap9_matrix.h b/include/asm-arm/arch-at91/at91cap9_matrix.h
++/*
++ * Copy and checksum to user
++ */
++#define HAVE_CSUM_COPY_USER
++static inline __wsum csum_and_copy_to_user(const void *src,
++ void __user *dst,
++ int len, __wsum sum,
++ int *err_ptr)
++{
++ if (access_ok(VERIFY_WRITE, dst, len))
++ return csum_partial_copy_generic((__force const void *)src,
++ dst, len, sum, NULL, err_ptr);
++
++ if (len)
++ *err_ptr = -EFAULT;
++
++ return (__force __wsum)-1; /* invalid checksum */
++}
++#endif /* __ASM_SH_CHECKSUM_H */
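For readers skimming the new header: the swap.w/extu.w/add sequence in csum_fold above is just 16-bit end-around-carry folding done in registers. A portable C model of that step (illustrative names, not kernel API — this is the same folding the checksum_64.h variant later in this patch writes out in plain C):

```c
#include <stdint.h>

/* Fold a 32-bit one's-complement accumulator to 16 bits and complement,
 * mirroring what the swap.w/extu.w/add sequence computes in registers. */
uint16_t csum_fold_ref(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);	/* add high half into low half */
	sum = (sum & 0xffff) + (sum >> 16);	/* absorb the possible carry */
	return (uint16_t)~sum;			/* one's-complement the result */
}
```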
+diff --git a/include/asm-sh/checksum_64.h b/include/asm-sh/checksum_64.h
new file mode 100644
-index 0000000..a641686
+index 0000000..9c62a03
--- /dev/null
-+++ b/include/asm-arm/arch-at91/at91cap9_matrix.h
-@@ -0,0 +1,132 @@
++++ b/include/asm-sh/checksum_64.h
+@@ -0,0 +1,78 @@
++#ifndef __ASM_SH_CHECKSUM_64_H
++#define __ASM_SH_CHECKSUM_64_H
++
+/*
-+ * include/asm-arm/arch-at91/at91cap9_matrix.h
++ * include/asm-sh/checksum_64.h
+ *
-+ * Copyright (C) 2007 Stelian Pop <stelian.pop at leadtechdesign.com>
-+ * Copyright (C) 2007 Lead Tech Design <www.leadtechdesign.com>
-+ * Copyright (C) 2006 Atmel Corporation.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
-+ * Memory Controllers (MATRIX, EBI) - System peripherals registers.
-+ * Based on AT91CAP9 datasheet revision B (Preliminary).
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ */
++
++/*
++ * computes the checksum of a memory block at buff, length len,
++ * and adds in "sum" (32-bit)
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
++ * returns a 32-bit number suitable for feeding into itself
++ * or csum_tcpudp_magic
++ *
++ * this function must be called with even lengths, except
++ * for the last fragment, which may be odd
++ *
++ * it's best to have buff aligned on a 32-bit boundary
+ */
++asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
+
-+#ifndef AT91CAP9_MATRIX_H
-+#define AT91CAP9_MATRIX_H
++/*
++ * Note: when you get a NULL pointer exception here this means someone
++ * passed in an incorrect kernel address to one of these functions.
++ *
++ * If you use these functions directly please don't forget the
++ * access_ok().
++ */
+
-+#define AT91_MATRIX_MCFG0 (AT91_MATRIX + 0x00) /* Master Configuration Register 0 */
-+#define AT91_MATRIX_MCFG1 (AT91_MATRIX + 0x04) /* Master Configuration Register 1 */
-+#define AT91_MATRIX_MCFG2 (AT91_MATRIX + 0x08) /* Master Configuration Register 2 */
-+#define AT91_MATRIX_MCFG3 (AT91_MATRIX + 0x0C) /* Master Configuration Register 3 */
-+#define AT91_MATRIX_MCFG4 (AT91_MATRIX + 0x10) /* Master Configuration Register 4 */
-+#define AT91_MATRIX_MCFG5 (AT91_MATRIX + 0x14) /* Master Configuration Register 5 */
-+#define AT91_MATRIX_MCFG6 (AT91_MATRIX + 0x18) /* Master Configuration Register 6 */
-+#define AT91_MATRIX_MCFG7 (AT91_MATRIX + 0x1C) /* Master Configuration Register 7 */
-+#define AT91_MATRIX_MCFG8 (AT91_MATRIX + 0x20) /* Master Configuration Register 8 */
-+#define AT91_MATRIX_MCFG9 (AT91_MATRIX + 0x24) /* Master Configuration Register 9 */
-+#define AT91_MATRIX_MCFG10 (AT91_MATRIX + 0x28) /* Master Configuration Register 10 */
-+#define AT91_MATRIX_MCFG11 (AT91_MATRIX + 0x2C) /* Master Configuration Register 11 */
-+#define AT91_MATRIX_ULBT (7 << 0) /* Undefined Length Burst Type */
-+#define AT91_MATRIX_ULBT_INFINITE (0 << 0)
-+#define AT91_MATRIX_ULBT_SINGLE (1 << 0)
-+#define AT91_MATRIX_ULBT_FOUR (2 << 0)
-+#define AT91_MATRIX_ULBT_EIGHT (3 << 0)
-+#define AT91_MATRIX_ULBT_SIXTEEN (4 << 0)
+
-+#define AT91_MATRIX_SCFG0 (AT91_MATRIX + 0x40) /* Slave Configuration Register 0 */
-+#define AT91_MATRIX_SCFG1 (AT91_MATRIX + 0x44) /* Slave Configuration Register 1 */
-+#define AT91_MATRIX_SCFG2 (AT91_MATRIX + 0x48) /* Slave Configuration Register 2 */
-+#define AT91_MATRIX_SCFG3 (AT91_MATRIX + 0x4C) /* Slave Configuration Register 3 */
-+#define AT91_MATRIX_SCFG4 (AT91_MATRIX + 0x50) /* Slave Configuration Register 4 */
-+#define AT91_MATRIX_SCFG5 (AT91_MATRIX + 0x54) /* Slave Configuration Register 5 */
-+#define AT91_MATRIX_SCFG6 (AT91_MATRIX + 0x58) /* Slave Configuration Register 6 */
-+#define AT91_MATRIX_SCFG7 (AT91_MATRIX + 0x5C) /* Slave Configuration Register 7 */
-+#define AT91_MATRIX_SCFG8 (AT91_MATRIX + 0x60) /* Slave Configuration Register 8 */
-+#define AT91_MATRIX_SCFG9 (AT91_MATRIX + 0x64) /* Slave Configuration Register 9 */
-+#define AT91_MATRIX_SLOT_CYCLE (0xff << 0) /* Maximum Number of Allowed Cycles for a Burst */
-+#define AT91_MATRIX_DEFMSTR_TYPE (3 << 16) /* Default Master Type */
-+#define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
-+#define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
-+#define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
-+#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
-+#define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
-+#define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
-+#define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
++__wsum csum_partial_copy_nocheck(const void *src, void *dst, int len,
++ __wsum sum);
+
-+#define AT91_MATRIX_PRAS0 (AT91_MATRIX + 0x80) /* Priority Register A for Slave 0 */
-+#define AT91_MATRIX_PRBS0 (AT91_MATRIX + 0x84) /* Priority Register B for Slave 0 */
-+#define AT91_MATRIX_PRAS1 (AT91_MATRIX + 0x88) /* Priority Register A for Slave 1 */
-+#define AT91_MATRIX_PRBS1 (AT91_MATRIX + 0x8C) /* Priority Register B for Slave 1 */
-+#define AT91_MATRIX_PRAS2 (AT91_MATRIX + 0x90) /* Priority Register A for Slave 2 */
-+#define AT91_MATRIX_PRBS2 (AT91_MATRIX + 0x94) /* Priority Register B for Slave 2 */
-+#define AT91_MATRIX_PRAS3 (AT91_MATRIX + 0x98) /* Priority Register A for Slave 3 */
-+#define AT91_MATRIX_PRBS3 (AT91_MATRIX + 0x9C) /* Priority Register B for Slave 3 */
-+#define AT91_MATRIX_PRAS4 (AT91_MATRIX + 0xA0) /* Priority Register A for Slave 4 */
-+#define AT91_MATRIX_PRBS4 (AT91_MATRIX + 0xA4) /* Priority Register B for Slave 4 */
-+#define AT91_MATRIX_PRAS5 (AT91_MATRIX + 0xA8) /* Priority Register A for Slave 5 */
-+#define AT91_MATRIX_PRBS5 (AT91_MATRIX + 0xAC) /* Priority Register B for Slave 5 */
-+#define AT91_MATRIX_PRAS6 (AT91_MATRIX + 0xB0) /* Priority Register A for Slave 6 */
-+#define AT91_MATRIX_PRBS6 (AT91_MATRIX + 0xB4) /* Priority Register B for Slave 6 */
-+#define AT91_MATRIX_PRAS7 (AT91_MATRIX + 0xB8) /* Priority Register A for Slave 7 */
-+#define AT91_MATRIX_PRBS7 (AT91_MATRIX + 0xBC) /* Priority Register B for Slave 7 */
-+#define AT91_MATRIX_PRAS8 (AT91_MATRIX + 0xC0) /* Priority Register A for Slave 8 */
-+#define AT91_MATRIX_PRBS8 (AT91_MATRIX + 0xC4) /* Priority Register B for Slave 8 */
-+#define AT91_MATRIX_PRAS9 (AT91_MATRIX + 0xC8) /* Priority Register A for Slave 9 */
-+#define AT91_MATRIX_PRBS9 (AT91_MATRIX + 0xCC) /* Priority Register B for Slave 9 */
-+#define AT91_MATRIX_M0PR (3 << 0) /* Master 0 Priority */
-+#define AT91_MATRIX_M1PR (3 << 4) /* Master 1 Priority */
-+#define AT91_MATRIX_M2PR (3 << 8) /* Master 2 Priority */
-+#define AT91_MATRIX_M3PR (3 << 12) /* Master 3 Priority */
-+#define AT91_MATRIX_M4PR (3 << 16) /* Master 4 Priority */
-+#define AT91_MATRIX_M5PR (3 << 20) /* Master 5 Priority */
-+#define AT91_MATRIX_M6PR (3 << 24) /* Master 6 Priority */
-+#define AT91_MATRIX_M7PR (3 << 28) /* Master 7 Priority */
-+#define AT91_MATRIX_M8PR (3 << 0) /* Master 8 Priority (in Register B) */
-+#define AT91_MATRIX_M9PR (3 << 4) /* Master 9 Priority (in Register B) */
-+#define AT91_MATRIX_M10PR (3 << 8) /* Master 10 Priority (in Register B) */
-+#define AT91_MATRIX_M11PR (3 << 12) /* Master 11 Priority (in Register B) */
++__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
++ int len, __wsum sum, int *err_ptr);
+
-+#define AT91_MATRIX_MRCR (AT91_MATRIX + 0x100) /* Master Remap Control Register */
-+#define AT91_MATRIX_RCB0 (1 << 0) /* Remap Command for AHB Master 0 (ARM926EJ-S Instruction Master) */
-+#define AT91_MATRIX_RCB1 (1 << 1) /* Remap Command for AHB Master 1 (ARM926EJ-S Data Master) */
-+#define AT91_MATRIX_RCB2 (1 << 2)
-+#define AT91_MATRIX_RCB3 (1 << 3)
-+#define AT91_MATRIX_RCB4 (1 << 4)
-+#define AT91_MATRIX_RCB5 (1 << 5)
-+#define AT91_MATRIX_RCB6 (1 << 6)
-+#define AT91_MATRIX_RCB7 (1 << 7)
-+#define AT91_MATRIX_RCB8 (1 << 8)
-+#define AT91_MATRIX_RCB9 (1 << 9)
-+#define AT91_MATRIX_RCB10 (1 << 10)
-+#define AT91_MATRIX_RCB11 (1 << 11)
++static inline __sum16 csum_fold(__wsum csum)
++{
++ u32 sum = (__force u32)csum;
++ sum = (sum & 0xffff) + (sum >> 16);
++ sum = (sum & 0xffff) + (sum >> 16);
++ return (__force __sum16)~sum;
++}
+
-+#define AT91_MPBS0_SFR (AT91_MATRIX + 0x114) /* MPBlock Slave 0 Special Function Register */
-+#define AT91_MPBS1_SFR (AT91_MATRIX + 0x11C) /* MPBlock Slave 1 Special Function Register */
++__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
+
-+#define AT91_MATRIX_EBICSA (AT91_MATRIX + 0x120) /* EBI Chip Select Assignment Register */
-+#define AT91_MATRIX_EBI_CS1A (1 << 1) /* Chip Select 1 Assignment */
-+#define AT91_MATRIX_EBI_CS1A_SMC (0 << 1)
-+#define AT91_MATRIX_EBI_CS1A_BCRAMC (1 << 1)
-+#define AT91_MATRIX_EBI_CS3A (1 << 3) /* Chip Select 3 Assignment */
-+#define AT91_MATRIX_EBI_CS3A_SMC (0 << 3)
-+#define AT91_MATRIX_EBI_CS3A_SMC_SMARTMEDIA (1 << 3)
-+#define AT91_MATRIX_EBI_CS4A (1 << 4) /* Chip Select 4 Assignment */
-+#define AT91_MATRIX_EBI_CS4A_SMC (0 << 4)
-+#define AT91_MATRIX_EBI_CS4A_SMC_CF1 (1 << 4)
-+#define AT91_MATRIX_EBI_CS5A (1 << 5) /* Chip Select 5 Assignment */
-+#define AT91_MATRIX_EBI_CS5A_SMC (0 << 5)
-+#define AT91_MATRIX_EBI_CS5A_SMC_CF2 (1 << 5)
-+#define AT91_MATRIX_EBI_DBPUC (1 << 8) /* Data Bus Pull-up Configuration */
-+#define AT91_MATRIX_EBI_DQSPDC (1 << 9) /* Data Qualifier Strobe Pull-Down Configuration */
-+#define AT91_MATRIX_EBI_VDDIOMSEL (1 << 16) /* Memory voltage selection */
-+#define AT91_MATRIX_EBI_VDDIOMSEL_1_8V (0 << 16)
-+#define AT91_MATRIX_EBI_VDDIOMSEL_3_3V (1 << 16)
++__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
++ unsigned short len, unsigned short proto,
++ __wsum sum);
+
-+#define AT91_MPBS2_SFR (AT91_MATRIX + 0x12C) /* MPBlock Slave 2 Special Function Register */
-+#define AT91_MPBS3_SFR (AT91_MATRIX + 0x130) /* MPBlock Slave 3 Special Function Register */
-+#define AT91_APB_SFR (AT91_MATRIX + 0x134) /* APB Bridge Special Function Register */
++/*
++ * computes the checksum of the TCP/UDP pseudo-header
++ * returns a 16-bit checksum, already complemented
++ */
++static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
++ unsigned short len,
++ unsigned short proto,
++ __wsum sum)
++{
++ return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
++}
+
-+#endif
-diff --git a/include/asm-arm/arch-at91/at91sam9260_matrix.h b/include/asm-arm/arch-at91/at91sam9260_matrix.h
-index aacb1e9..a8e9fec 100644
---- a/include/asm-arm/arch-at91/at91sam9260_matrix.h
-+++ b/include/asm-arm/arch-at91/at91sam9260_matrix.h
-@@ -67,7 +67,7 @@
- #define AT91_MATRIX_CS4A (1 << 4) /* Chip Select 4 Assignment */
- #define AT91_MATRIX_CS4A_SMC (0 << 4)
- #define AT91_MATRIX_CS4A_SMC_CF1 (1 << 4)
--#define AT91_MATRIX_CS5A (1 << 5 ) /* Chip Select 5 Assignment */
-+#define AT91_MATRIX_CS5A (1 << 5) /* Chip Select 5 Assignment */
- #define AT91_MATRIX_CS5A_SMC (0 << 5)
- #define AT91_MATRIX_CS5A_SMC_CF2 (1 << 5)
- #define AT91_MATRIX_DBPUC (1 << 8) /* Data Bus Pull-up Configuration */
-diff --git a/include/asm-arm/arch-at91/at91sam9263_matrix.h b/include/asm-arm/arch-at91/at91sam9263_matrix.h
-index 6fc6e4b..72f6e66 100644
---- a/include/asm-arm/arch-at91/at91sam9263_matrix.h
-+++ b/include/asm-arm/arch-at91/at91sam9263_matrix.h
-@@ -44,7 +44,7 @@
- #define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
- #define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
- #define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
--#define AT91_MATRIX_FIXED_DEFMSTR (7 << 18) /* Fixed Index of Default Master */
-+#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
- #define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
- #define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
- #define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
-diff --git a/include/asm-arm/arch-at91/at91sam9rl_matrix.h b/include/asm-arm/arch-at91/at91sam9rl_matrix.h
-index b15f11b..8422417 100644
---- a/include/asm-arm/arch-at91/at91sam9rl_matrix.h
-+++ b/include/asm-arm/arch-at91/at91sam9rl_matrix.h
-@@ -38,7 +38,7 @@
- #define AT91_MATRIX_DEFMSTR_TYPE_NONE (0 << 16)
- #define AT91_MATRIX_DEFMSTR_TYPE_LAST (1 << 16)
- #define AT91_MATRIX_DEFMSTR_TYPE_FIXED (2 << 16)
--#define AT91_MATRIX_FIXED_DEFMSTR (7 << 18) /* Fixed Index of Default Master */
-+#define AT91_MATRIX_FIXED_DEFMSTR (0xf << 18) /* Fixed Index of Default Master */
- #define AT91_MATRIX_ARBT (3 << 24) /* Arbitration Type */
- #define AT91_MATRIX_ARBT_ROUND_ROBIN (0 << 24)
- #define AT91_MATRIX_ARBT_FIXED_PRIORITY (1 << 24)
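The FIXED_DEFMSTR widening in the two hunks above (7 << 18 becomes 0xf << 18) is presumably needed because AT91CAP9-class bus matrices carry twelve masters (M0PR..M11PR in the new header), which a 3-bit field can no longer index. A small C sketch of the usual read-modify-write these mask macros support (macro and function names here are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Hypothetical mask/shift in the style of the AT91_MATRIX_* macros:
 * after the patch, FIXED_DEFMSTR occupies bits 18..21. */
#define MATRIX_FIXED_DEFMSTR_SHIFT	18
#define MATRIX_FIXED_DEFMSTR_MASK	(0xfu << MATRIX_FIXED_DEFMSTR_SHIFT)

/* Replace the fixed-default-master field of a slave config value. */
uint32_t set_fixed_defmstr(uint32_t scfg, uint32_t master)
{
	scfg &= ~MATRIX_FIXED_DEFMSTR_MASK;	/* clear the old field */
	scfg |= (master << MATRIX_FIXED_DEFMSTR_SHIFT)
		& MATRIX_FIXED_DEFMSTR_MASK;	/* insert the new index */
	return scfg;
}
```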
-diff --git a/include/asm-arm/arch-at91/board.h b/include/asm-arm/arch-at91/board.h
-index 7905496..55b07bd 100644
---- a/include/asm-arm/arch-at91/board.h
-+++ b/include/asm-arm/arch-at91/board.h
-@@ -34,6 +34,7 @@
- #include <linux/mtd/partitions.h>
- #include <linux/device.h>
- #include <linux/i2c.h>
-+#include <linux/leds.h>
- #include <linux/spi/spi.h>
++/*
++ * this routine is used for miscellaneous IP-like checksums, mainly
++ * in icmp.c
++ */
++static inline __sum16 ip_compute_csum(const void *buff, int len)
++{
++ return csum_fold(csum_partial(buff, len, 0));
++}
++
++#endif /* __ASM_SH_CHECKSUM_64_H */
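Both checksum headers end up feeding csum_tcpudp_nofold into csum_fold. A rough host-order C model of that pseudo-header sum (illustrative only — it ignores the __be32/len_proto byte-order handling the real addc chains do):

```c
#include <stdint.h>

/* 32-bit one's-complement add: fold the 64-bit carry back in. */
static uint32_t add32(uint32_t a, uint32_t b)
{
	uint64_t s = (uint64_t)a + b;
	return (uint32_t)s + (uint32_t)(s >> 32);
}

static uint16_t fold16(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* Model of csum_tcpudp_magic(): checksum the TCP/UDP pseudo-header.
 * Inputs are treated as plain host-order integers for illustration. */
uint16_t pseudo_hdr_csum(uint32_t saddr, uint32_t daddr,
			 uint16_t len, uint16_t proto, uint32_t sum)
{
	sum = add32(sum, saddr);
	sum = add32(sum, daddr);
	sum = add32(sum, ((uint32_t)proto << 16) + len);
	return fold16(sum);
}
```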
+diff --git a/include/asm-sh/cmpxchg-grb.h b/include/asm-sh/cmpxchg-grb.h
+new file mode 100644
+index 0000000..e2681ab
+--- /dev/null
++++ b/include/asm-sh/cmpxchg-grb.h
+@@ -0,0 +1,70 @@
++#ifndef __ASM_SH_CMPXCHG_GRB_H
++#define __ASM_SH_CMPXCHG_GRB_H
++
++static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
++{
++ unsigned long retval;
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " nop \n\t"
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-4, r15 \n\t" /* LOGIN */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " mov.l %2, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (retval),
++ "+r" (m)
++ : "r" (val)
++ : "memory", "r0", "r1");
++
++ return retval;
++}
++
++static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
++{
++ unsigned long retval;
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-6, r15 \n\t" /* LOGIN */
++ " mov.b @%1, %0 \n\t" /* load old value */
++ " extu.b %0, %0 \n\t" /* extend as unsigned */
++ " mov.b %2, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (retval),
++ "+r" (m)
++ : "r" (val)
++ : "memory" , "r0", "r1");
++
++ return retval;
++}
++
++static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
++ unsigned long new)
++{
++ unsigned long retval;
++
++ __asm__ __volatile__ (
++ " .align 2 \n\t"
++ " mova 1f, r0 \n\t" /* r0 = end point */
++ " nop \n\t"
++ " mov r15, r1 \n\t" /* r1 = saved sp */
++ " mov #-8, r15 \n\t" /* LOGIN */
++ " mov.l @%1, %0 \n\t" /* load old value */
++ " cmp/eq %0, %2 \n\t"
++ " bf 1f \n\t" /* if not equal */
++ " mov.l %2, @%1 \n\t" /* store new value */
++ "1: mov r1, r15 \n\t" /* LOGOUT */
++ : "=&r" (retval),
++ "+r" (m)
++ : "r" (new)
++ : "memory" , "r0", "r1", "t");
++
++ return retval;
++}
++
++#endif /* __ASM_SH_CMPXCHG_GRB_H */
+diff --git a/include/asm-sh/cmpxchg-irq.h b/include/asm-sh/cmpxchg-irq.h
+new file mode 100644
+index 0000000..43049ec
+--- /dev/null
++++ b/include/asm-sh/cmpxchg-irq.h
+@@ -0,0 +1,40 @@
++#ifndef __ASM_SH_CMPXCHG_IRQ_H
++#define __ASM_SH_CMPXCHG_IRQ_H
++
++static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
++{
++ unsigned long flags, retval;
++
++ local_irq_save(flags);
++ retval = *m;
++ *m = val;
++ local_irq_restore(flags);
++ return retval;
++}
++
++static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
++{
++ unsigned long flags, retval;
++
++ local_irq_save(flags);
++ retval = *m;
++ *m = val & 0xff;
++ local_irq_restore(flags);
++ return retval;
++}
++
++static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
++ unsigned long new)
++{
++ __u32 retval;
++ unsigned long flags;
++
++ local_irq_save(flags);
++ retval = *m;
++ if (retval == old)
++ *m = new;
++ local_irq_restore(flags); /* implies memory barrier */
++ return retval;
++}
++
++#endif /* __ASM_SH_CMPXCHG_IRQ_H */
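Of the two new cmpxchg backends, cmpxchg-irq is the uniprocessor fallback: atomicity comes from masking local interrupts around a plain read-compare-write, while cmpxchg-grb appears to rely on the r15 rollback trick instead. The compare-and-swap contract itself, minus the IRQ bracketing, in plain C (a userspace sketch, not safe against concurrent access):

```c
#include <stdint.h>

/* Model of __cmpxchg_u32(): store new_val only if *m still equals old,
 * and always return the value that was observed.  The kernel version
 * wraps this in local_irq_save()/local_irq_restore() on UP. */
uint32_t cmpxchg_u32_model(volatile uint32_t *m, uint32_t old,
			   uint32_t new_val)
{
	uint32_t retval = *m;	/* load current value */

	if (retval == old)
		*m = new_val;	/* store only on match */
	return retval;		/* caller compares this against old */
}
```

A caller retries until the returned value matches the expected one, which is exactly how lock-free counters are built on top of this primitive.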
+diff --git a/include/asm-sh/cpu-sh2/addrspace.h b/include/asm-sh/cpu-sh2/addrspace.h
+index 8706c90..2b9ab93 100644
+--- a/include/asm-sh/cpu-sh2/addrspace.h
++++ b/include/asm-sh/cpu-sh2/addrspace.h
+@@ -10,7 +10,10 @@
+ #ifndef __ASM_CPU_SH2_ADDRSPACE_H
+ #define __ASM_CPU_SH2_ADDRSPACE_H
- /* USB Device */
-@@ -71,7 +72,7 @@ struct at91_eth_data {
- };
- extern void __init at91_add_device_eth(struct at91_eth_data *data);
+-/* Should fill here */
++#define P0SEG 0x00000000
++#define P1SEG 0x80000000
++#define P2SEG 0xa0000000
++#define P3SEG 0xc0000000
++#define P4SEG 0xe0000000
--#if defined(CONFIG_ARCH_AT91SAM9260) || defined(CONFIG_ARCH_AT91SAM9263)
-+#if defined(CONFIG_ARCH_AT91SAM9260) || defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91CAP9)
- #define eth_platform_data at91_eth_data
- #endif
+ #endif /* __ASM_CPU_SH2_ADDRSPACE_H */
+-
+diff --git a/include/asm-sh/cpu-sh2/cache.h b/include/asm-sh/cpu-sh2/cache.h
+index f02ba7a..4e0b165 100644
+--- a/include/asm-sh/cpu-sh2/cache.h
++++ b/include/asm-sh/cpu-sh2/cache.h
+@@ -12,9 +12,13 @@
-@@ -101,13 +102,23 @@ extern void __init at91_add_device_i2c(struct i2c_board_info *devices, int nr_de
- extern void __init at91_add_device_spi(struct spi_board_info *devices, int nr_devices);
+ #define L1_CACHE_SHIFT 4
- /* Serial */
-+#define ATMEL_UART_CTS 0x01
-+#define ATMEL_UART_RTS 0x02
-+#define ATMEL_UART_DSR 0x04
-+#define ATMEL_UART_DTR 0x08
-+#define ATMEL_UART_DCD 0x10
-+#define ATMEL_UART_RI 0x20
-+
-+extern void __init at91_register_uart(unsigned id, unsigned portnr, unsigned pins);
-+extern void __init at91_set_serial_console(unsigned portnr);
++#define SH_CACHE_VALID 1
++#define SH_CACHE_UPDATED 2
++#define SH_CACHE_COMBINED 4
++#define SH_CACHE_ASSOC 8
+
- struct at91_uart_config {
- unsigned short console_tty; /* tty number of serial console */
- unsigned short nr_tty; /* number of serial tty's */
- short tty_map[]; /* map UART to tty number */
- };
- extern struct platform_device *atmel_default_console_device;
--extern void __init at91_init_serial(struct at91_uart_config *config);
-+extern void __init __deprecated at91_init_serial(struct at91_uart_config *config);
-
- struct atmel_uart_data {
- short use_dma_tx; /* use transmit DMA? */
-@@ -116,6 +127,23 @@ struct atmel_uart_data {
- };
- extern void __init at91_add_device_serial(void);
+ #if defined(CONFIG_CPU_SUBTYPE_SH7619)
+-#define CCR1 0xffffffec
+-#define CCR CCR1
++#define CCR 0xffffffec
-+/*
-+ * SSC -- accessed through ssc_request(id). Drivers don't bind to SSC
-+ * platform devices. Their SSC ID is part of their configuration data,
-+ * along with information about which SSC signals they should use.
-+ */
-+#define ATMEL_SSC_TK 0x01
-+#define ATMEL_SSC_TF 0x02
-+#define ATMEL_SSC_TD 0x04
-+#define ATMEL_SSC_TX (ATMEL_SSC_TK | ATMEL_SSC_TF | ATMEL_SSC_TD)
+ #define CCR_CACHE_CE 0x01 /* Cache enable */
+ #define CCR_CACHE_WT 0x06 /* CCR[bit1=1,bit2=1] */
+diff --git a/include/asm-sh/cpu-sh2/rtc.h b/include/asm-sh/cpu-sh2/rtc.h
+new file mode 100644
+index 0000000..39e2d6e
+--- /dev/null
++++ b/include/asm-sh/cpu-sh2/rtc.h
+@@ -0,0 +1,8 @@
++#ifndef __ASM_SH_CPU_SH2_RTC_H
++#define __ASM_SH_CPU_SH2_RTC_H
+
-+#define ATMEL_SSC_RK 0x10
-+#define ATMEL_SSC_RF 0x20
-+#define ATMEL_SSC_RD 0x40
-+#define ATMEL_SSC_RX (ATMEL_SSC_RK | ATMEL_SSC_RF | ATMEL_SSC_RD)
++#define rtc_reg_size sizeof(u16)
++#define RTC_BIT_INVERTED 0
++#define RTC_DEF_CAPABILITIES 0UL
+
-+extern void __init at91_add_device_ssc(unsigned id, unsigned pins);
++#endif /* __ASM_SH_CPU_SH2_RTC_H */
+diff --git a/include/asm-sh/cpu-sh2a/addrspace.h b/include/asm-sh/cpu-sh2a/addrspace.h
+index 3d2e9aa..795ddd6 100644
+--- a/include/asm-sh/cpu-sh2a/addrspace.h
++++ b/include/asm-sh/cpu-sh2a/addrspace.h
+@@ -1 +1,10 @@
+-#include <asm/cpu-sh2/addrspace.h>
++#ifndef __ASM_SH_CPU_SH2A_ADDRSPACE_H
++#define __ASM_SH_CPU_SH2A_ADDRSPACE_H
+
- /* LCD Controller */
- struct atmel_lcdfb_info;
- extern void __init at91_add_device_lcdc(struct atmel_lcdfb_info *data);
-@@ -126,10 +154,12 @@ struct atmel_ac97_data {
- };
- extern void __init at91_add_device_ac97(struct atmel_ac97_data *data);
-
-+ /* ISI */
-+extern void __init at91_add_device_isi(void);
++#define P0SEG 0x00000000
++#define P1SEG 0x00000000
++#define P2SEG 0x20000000
++#define P3SEG 0x00000000
++#define P4SEG 0x80000000
+
- /* LEDs */
--extern u8 at91_leds_cpu;
--extern u8 at91_leds_timer;
- extern void __init at91_init_leds(u8 cpu_led, u8 timer_led);
-+extern void __init at91_gpio_leds(struct gpio_led *leds, int nr);
-
- /* FIXME: this needs a better location, but gets stuff building again */
- extern int at91_suspend_entering_slow_clock(void);
-diff --git a/include/asm-arm/arch-at91/cpu.h b/include/asm-arm/arch-at91/cpu.h
-index 080cbb4..7145166 100644
---- a/include/asm-arm/arch-at91/cpu.h
-+++ b/include/asm-arm/arch-at91/cpu.h
-@@ -21,13 +21,13 @@
- #define ARCH_ID_AT91SAM9260 0x019803a0
- #define ARCH_ID_AT91SAM9261 0x019703a0
- #define ARCH_ID_AT91SAM9263 0x019607a0
-+#define ARCH_ID_AT91SAM9RL64 0x019b03a0
-+#define ARCH_ID_AT91CAP9 0x039A03A0
++#endif /* __ASM_SH_CPU_SH2A_ADDRSPACE_H */
+diff --git a/include/asm-sh/cpu-sh2a/cache.h b/include/asm-sh/cpu-sh2a/cache.h
+index 3e4b9e4..afe228b 100644
+--- a/include/asm-sh/cpu-sh2a/cache.h
++++ b/include/asm-sh/cpu-sh2a/cache.h
+@@ -12,11 +12,13 @@
- #define ARCH_ID_AT91SAM9XE128 0x329973a0
- #define ARCH_ID_AT91SAM9XE256 0x329a93a0
- #define ARCH_ID_AT91SAM9XE512 0x329aa3a0
+ #define L1_CACHE_SHIFT 4
--#define ARCH_ID_AT91SAM9RL64 0x019b03a0
--
- #define ARCH_ID_AT91M40800 0x14080044
- #define ARCH_ID_AT91R40807 0x44080746
- #define ARCH_ID_AT91M40807 0x14080745
-@@ -81,6 +81,11 @@ static inline unsigned long at91_arch_identify(void)
- #define cpu_is_at91sam9rl() (0)
- #endif
+-#define CCR1 0xfffc1000
+-#define CCR2 0xfffc1004
++#define SH_CACHE_VALID 1
++#define SH_CACHE_UPDATED 2
++#define SH_CACHE_COMBINED 4
++#define SH_CACHE_ASSOC 8
-+#ifdef CONFIG_ARCH_AT91CAP9
-+#define cpu_is_at91cap9() (at91_cpu_identify() == ARCH_ID_AT91CAP9)
-+#else
-+#define cpu_is_at91cap9() (0)
-+#endif
+-/* CCR1 behaves more like the traditional CCR */
+-#define CCR CCR1
++#define CCR 0xfffc1000 /* CCR1 */
++#define CCR2 0xfffc1004
/*
- * Since this is ARM, we will never run on any AVR32 CPU. But these
-diff --git a/include/asm-arm/arch-at91/entry-macro.S b/include/asm-arm/arch-at91/entry-macro.S
-index cc1d850..1005eee 100644
---- a/include/asm-arm/arch-at91/entry-macro.S
-+++ b/include/asm-arm/arch-at91/entry-macro.S
-@@ -17,13 +17,13 @@
- .endm
+ * Most of the SH-2A CCR1 definitions resemble the SH-4 ones. All others not
+@@ -36,4 +38,3 @@
+ #define CCR_CACHE_INVALIDATE (CCR_CACHE_OCI | CCR_CACHE_ICI)
- .macro get_irqnr_preamble, base, tmp
-+ ldr \base, =(AT91_VA_BASE_SYS + AT91_AIC) @ base virtual address of AIC peripheral
- .endm
+ #endif /* __ASM_CPU_SH2A_CACHE_H */
+-
+diff --git a/include/asm-sh/cpu-sh2a/freq.h b/include/asm-sh/cpu-sh2a/freq.h
+index e518fff..830fd43 100644
+--- a/include/asm-sh/cpu-sh2a/freq.h
++++ b/include/asm-sh/cpu-sh2a/freq.h
+@@ -10,9 +10,7 @@
+ #ifndef __ASM_CPU_SH2A_FREQ_H
+ #define __ASM_CPU_SH2A_FREQ_H
- .macro arch_ret_to_user, tmp1, tmp2
- .endm
+-#if defined(CONFIG_CPU_SUBTYPE_SH7206)
+ #define FREQCR 0xfffe0010
+-#endif
- .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
-- ldr \base, =(AT91_VA_BASE_SYS + AT91_AIC) @ base virtual address of AIC peripheral
- ldr \irqnr, [\base, #(AT91_AIC_IVR - AT91_AIC)] @ read IRQ vector register: de-asserts nIRQ to processor (and clears interrupt)
- ldr \irqstat, [\base, #(AT91_AIC_ISR - AT91_AIC)] @ read interrupt source number
- teq \irqstat, #0 @ ISR is 0 when no current interrupt, or spurious interrupt
-diff --git a/include/asm-arm/arch-at91/hardware.h b/include/asm-arm/arch-at91/hardware.h
-index 8f1cdd3..2c826d8 100644
---- a/include/asm-arm/arch-at91/hardware.h
-+++ b/include/asm-arm/arch-at91/hardware.h
-@@ -26,6 +26,8 @@
- #include <asm/arch/at91sam9263.h>
- #elif defined(CONFIG_ARCH_AT91SAM9RL)
- #include <asm/arch/at91sam9rl.h>
-+#elif defined(CONFIG_ARCH_AT91CAP9)
-+#include <asm/arch/at91cap9.h>
- #elif defined(CONFIG_ARCH_AT91X40)
- #include <asm/arch/at91x40.h>
- #else
-diff --git a/include/asm-arm/arch-at91/timex.h b/include/asm-arm/arch-at91/timex.h
-index a310698..f1933b0 100644
---- a/include/asm-arm/arch-at91/timex.h
-+++ b/include/asm-arm/arch-at91/timex.h
-@@ -42,6 +42,11 @@
- #define AT91SAM9_MASTER_CLOCK 100000000
- #define CLOCK_TICK_RATE (AT91SAM9_MASTER_CLOCK/16)
+ #endif /* __ASM_CPU_SH2A_FREQ_H */
-+#elif defined(CONFIG_ARCH_AT91CAP9)
+diff --git a/include/asm-sh/cpu-sh2a/rtc.h b/include/asm-sh/cpu-sh2a/rtc.h
+new file mode 100644
+index 0000000..afb511e
+--- /dev/null
++++ b/include/asm-sh/cpu-sh2a/rtc.h
+@@ -0,0 +1,8 @@
++#ifndef __ASM_SH_CPU_SH2A_RTC_H
++#define __ASM_SH_CPU_SH2A_RTC_H
+
-+#define AT91CAP9_MASTER_CLOCK 100000000
-+#define CLOCK_TICK_RATE (AT91CAP9_MASTER_CLOCK/16)
++#define rtc_reg_size sizeof(u16)
++#define RTC_BIT_INVERTED 0
++#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
+
- #elif defined(CONFIG_ARCH_AT91X40)
++#endif /* __ASM_SH_CPU_SH2A_RTC_H */
+diff --git a/include/asm-sh/cpu-sh3/addrspace.h b/include/asm-sh/cpu-sh3/addrspace.h
+index 872e9e1..0f94726 100644
+--- a/include/asm-sh/cpu-sh3/addrspace.h
++++ b/include/asm-sh/cpu-sh3/addrspace.h
+@@ -10,7 +10,10 @@
+ #ifndef __ASM_CPU_SH3_ADDRSPACE_H
+ #define __ASM_CPU_SH3_ADDRSPACE_H
- #define AT91X40_MASTER_CLOCK 40000000
-diff --git a/include/asm-arm/arch-ep93xx/gpio.h b/include/asm-arm/arch-ep93xx/gpio.h
-index 1ee14a1..9b1864b 100644
---- a/include/asm-arm/arch-ep93xx/gpio.h
-+++ b/include/asm-arm/arch-ep93xx/gpio.h
-@@ -5,16 +5,6 @@
- #ifndef __ASM_ARCH_GPIO_H
- #define __ASM_ARCH_GPIO_H
+-/* Should fill here */
++#define P0SEG 0x00000000
++#define P1SEG 0x80000000
++#define P2SEG 0xa0000000
++#define P3SEG 0xc0000000
++#define P4SEG 0xe0000000
--#define GPIO_IN 0
--#define GPIO_OUT 1
--
--#define EP93XX_GPIO_LOW 0
--#define EP93XX_GPIO_HIGH 1
--
--extern void gpio_line_config(int line, int direction);
--extern int gpio_line_get(int line);
--extern void gpio_line_set(int line, int value);
+ #endif /* __ASM_CPU_SH3_ADDRSPACE_H */
-
- /* GPIO port A. */
- #define EP93XX_GPIO_LINE_A(x) ((x) + 0)
- #define EP93XX_GPIO_LINE_EGPIO0 EP93XX_GPIO_LINE_A(0)
-@@ -38,7 +28,7 @@ extern void gpio_line_set(int line, int value);
- #define EP93XX_GPIO_LINE_EGPIO15 EP93XX_GPIO_LINE_B(7)
-
- /* GPIO port C. */
--#define EP93XX_GPIO_LINE_C(x) ((x) + 16)
-+#define EP93XX_GPIO_LINE_C(x) ((x) + 40)
- #define EP93XX_GPIO_LINE_ROW0 EP93XX_GPIO_LINE_C(0)
- #define EP93XX_GPIO_LINE_ROW1 EP93XX_GPIO_LINE_C(1)
- #define EP93XX_GPIO_LINE_ROW2 EP93XX_GPIO_LINE_C(2)
-@@ -71,7 +61,7 @@ extern void gpio_line_set(int line, int value);
- #define EP93XX_GPIO_LINE_IDEDA2 EP93XX_GPIO_LINE_E(7)
+diff --git a/include/asm-sh/cpu-sh3/cache.h b/include/asm-sh/cpu-sh3/cache.h
+index 255016f..56bd838 100644
+--- a/include/asm-sh/cpu-sh3/cache.h
++++ b/include/asm-sh/cpu-sh3/cache.h
+@@ -12,6 +12,11 @@
- /* GPIO port F. */
--#define EP93XX_GPIO_LINE_F(x) ((x) + 40)
-+#define EP93XX_GPIO_LINE_F(x) ((x) + 16)
- #define EP93XX_GPIO_LINE_WP EP93XX_GPIO_LINE_F(0)
- #define EP93XX_GPIO_LINE_MCCD1 EP93XX_GPIO_LINE_F(1)
- #define EP93XX_GPIO_LINE_MCCD2 EP93XX_GPIO_LINE_F(2)
-@@ -103,5 +93,49 @@ extern void gpio_line_set(int line, int value);
- #define EP93XX_GPIO_LINE_DD6 EP93XX_GPIO_LINE_H(6)
- #define EP93XX_GPIO_LINE_DD7 EP93XX_GPIO_LINE_H(7)
+ #define L1_CACHE_SHIFT 4
-+/* maximum value for gpio line identifiers */
-+#define EP93XX_GPIO_LINE_MAX EP93XX_GPIO_LINE_H(7)
-+
-+/* maximum value for irq capable line identifiers */
-+#define EP93XX_GPIO_LINE_MAX_IRQ EP93XX_GPIO_LINE_F(7)
-+
-+/* new generic GPIO API - see Documentation/gpio.txt */
-+
-+static inline int gpio_request(unsigned gpio, const char *label)
-+{
-+ if (gpio > EP93XX_GPIO_LINE_MAX)
-+ return -EINVAL;
-+ return 0;
-+}
-+
-+static inline void gpio_free(unsigned gpio)
-+{
-+}
-+
-+int gpio_direction_input(unsigned gpio);
-+int gpio_direction_output(unsigned gpio, int value);
-+int gpio_get_value(unsigned gpio);
-+void gpio_set_value(unsigned gpio, int value);
-+
-+#include <asm-generic/gpio.h> /* cansleep wrappers */
-+
-+/*
-+ * Map GPIO A0..A7 (0..7) to irq 64..71,
-+ * B0..B7 (8..15) to irq 72..79, and
-+ * F0..F7 (16..23) to irq 80..87.
-+ */
-+
-+static inline int gpio_to_irq(unsigned gpio)
-+{
-+ if (gpio <= EP93XX_GPIO_LINE_MAX_IRQ)
-+ return 64 + gpio;
-+
-+ return -EINVAL;
-+}
++#define SH_CACHE_VALID 1
++#define SH_CACHE_UPDATED 2
++#define SH_CACHE_COMBINED 4
++#define SH_CACHE_ASSOC 8
+
-+static inline int irq_to_gpio(unsigned irq)
-+{
-+ return irq - gpio_to_irq(0);
-+}
+ #define CCR 0xffffffec /* Address of Cache Control Register */
- #endif
-diff --git a/include/asm-arm/arch-ep93xx/irqs.h b/include/asm-arm/arch-ep93xx/irqs.h
-index 2a8c636..53d4a68 100644
---- a/include/asm-arm/arch-ep93xx/irqs.h
-+++ b/include/asm-arm/arch-ep93xx/irqs.h
-@@ -67,12 +67,6 @@
- #define IRQ_EP93XX_SAI 60
- #define EP93XX_VIC2_VALID_IRQ_MASK 0x1fffffff
+ #define CCR_CACHE_CE 0x01 /* Cache Enable */
+@@ -28,7 +33,8 @@
--/*
-- * Map GPIO A0..A7 to irq 64..71, B0..B7 to 72..79, and
-- * F0..F7 to 80..87.
-- */
--#define IRQ_EP93XX_GPIO(x) (64 + (((x) + (((x) >> 2) & 8)) & 0x1f))
--
- #define NR_EP93XX_IRQS (64 + 24)
+ #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7710) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define CCR3 0xa40000b4
+ #define CCR_CACHE_16KB 0x00010000
+ #define CCR_CACHE_32KB 0x00020000
+diff --git a/include/asm-sh/cpu-sh3/dma.h b/include/asm-sh/cpu-sh3/dma.h
+index 54bfece..092ff9d 100644
+--- a/include/asm-sh/cpu-sh3/dma.h
++++ b/include/asm-sh/cpu-sh3/dma.h
+@@ -2,7 +2,9 @@
+ #define __ASM_CPU_SH3_DMA_H
- #define EP93XX_BOARD_IRQ(x) (NR_EP93XX_IRQS + (x))
-diff --git a/include/asm-arm/arch-ixp4xx/io.h b/include/asm-arm/arch-ixp4xx/io.h
-index eeeea90..9c5d235 100644
---- a/include/asm-arm/arch-ixp4xx/io.h
-+++ b/include/asm-arm/arch-ixp4xx/io.h
-@@ -61,13 +61,13 @@ __ixp4xx_ioremap(unsigned long addr, size_t size, unsigned int mtype)
- if((addr < PCIBIOS_MIN_MEM) || (addr > 0x4fffffff))
- return __arm_ioremap(addr, size, mtype);
-- return (void *)addr;
-+ return (void __iomem *)addr;
- }
+-#if defined(CONFIG_CPU_SUBTYPE_SH7720) || defined(CONFIG_CPU_SUBTYPE_SH7709)
++#if defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7709)
+ #define SH_DMAC_BASE 0xa4010020
- static inline void
- __ixp4xx_iounmap(void __iomem *addr)
- {
-- if ((u32)addr >= VMALLOC_START)
-+ if ((__force u32)addr >= VMALLOC_START)
- __iounmap(addr);
- }
+ #define DMTE0_IRQ 48
+diff --git a/include/asm-sh/cpu-sh3/freq.h b/include/asm-sh/cpu-sh3/freq.h
+index 0a054b5..53c6230 100644
+--- a/include/asm-sh/cpu-sh3/freq.h
++++ b/include/asm-sh/cpu-sh3/freq.h
+@@ -10,7 +10,12 @@
+ #ifndef __ASM_CPU_SH3_FREQ_H
+ #define __ASM_CPU_SH3_FREQ_H
-@@ -141,9 +141,9 @@ __ixp4xx_writesw(volatile void __iomem *bus_addr, const u16 *vaddr, int count)
- static inline void
- __ixp4xx_writel(u32 value, volatile void __iomem *p)
- {
-- u32 addr = (u32)p;
-+ u32 addr = (__force u32)p;
- if (addr >= VMALLOC_START) {
-- __raw_writel(value, addr);
-+ __raw_writel(value, p);
- return;
- }
++#ifdef CONFIG_CPU_SUBTYPE_SH7712
++#define FRQCR 0xA415FF80
++#else
+ #define FRQCR 0xffffff80
++#endif
++
+ #define MIN_DIVISOR_NR 0
+ #define MAX_DIVISOR_NR 4
-@@ -208,11 +208,11 @@ __ixp4xx_readsw(const volatile void __iomem *bus_addr, u16 *vaddr, u32 count)
- static inline unsigned long
- __ixp4xx_readl(const volatile void __iomem *p)
- {
-- u32 addr = (u32)p;
-+ u32 addr = (__force u32)p;
- u32 data;
+diff --git a/include/asm-sh/cpu-sh3/gpio.h b/include/asm-sh/cpu-sh3/gpio.h
+index 48770c1..4e53eb3 100644
+--- a/include/asm-sh/cpu-sh3/gpio.h
++++ b/include/asm-sh/cpu-sh3/gpio.h
+@@ -12,7 +12,8 @@
+ #ifndef _CPU_SH3_GPIO_H
+ #define _CPU_SH3_GPIO_H
- if (addr >= VMALLOC_START)
-- return __raw_readl(addr);
-+ return __raw_readl(p);
+-#if defined(CONFIG_CPU_SUBTYPE_SH7720)
++#if defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
- if (ixp4xx_pci_read(addr, NP_CMD_MEMREAD, &data))
- return 0xffffffff;
-@@ -438,7 +438,7 @@ __ixp4xx_ioread32(const void __iomem *addr)
- return (unsigned int)__ixp4xx_inl(port & PIO_MASK);
- else {
- #ifndef CONFIG_IXP4XX_INDIRECT_PCI
-- return le32_to_cpu(__raw_readl((u32)port));
-+ return le32_to_cpu((__force __le32)__raw_readl(addr));
+ /* Control registers */
+ #define PORT_PACR 0xA4050100UL
+diff --git a/include/asm-sh/cpu-sh3/mmu_context.h b/include/asm-sh/cpu-sh3/mmu_context.h
+index 16c2d63..ab09da7 100644
+--- a/include/asm-sh/cpu-sh3/mmu_context.h
++++ b/include/asm-sh/cpu-sh3/mmu_context.h
+@@ -33,7 +33,8 @@
+ defined(CONFIG_CPU_SUBTYPE_SH7709) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7710) || \
+ defined(CONFIG_CPU_SUBTYPE_SH7712) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define INTEVT 0xa4000000 /* INTEVTE2(0xa4000000) */
#else
- return (unsigned int)__ixp4xx_readl(addr);
+ #define INTEVT 0xffffffd8
+diff --git a/include/asm-sh/cpu-sh3/rtc.h b/include/asm-sh/cpu-sh3/rtc.h
+new file mode 100644
+index 0000000..319404a
+--- /dev/null
++++ b/include/asm-sh/cpu-sh3/rtc.h
+@@ -0,0 +1,8 @@
++#ifndef __ASM_SH_CPU_SH3_RTC_H
++#define __ASM_SH_CPU_SH3_RTC_H
++
++#define rtc_reg_size sizeof(u16)
++#define RTC_BIT_INVERTED 0 /* No bug on SH7708, SH7709A */
++#define RTC_DEF_CAPABILITIES 0UL
++
++#endif /* __ASM_SH_CPU_SH3_RTC_H */
+diff --git a/include/asm-sh/cpu-sh3/timer.h b/include/asm-sh/cpu-sh3/timer.h
+index 7b795ac..793acf1 100644
+--- a/include/asm-sh/cpu-sh3/timer.h
++++ b/include/asm-sh/cpu-sh3/timer.h
+@@ -23,12 +23,13 @@
+ * ---------------------------------------------------------------------------
+ */
+
+-#if !defined(CONFIG_CPU_SUBTYPE_SH7720)
++#if !defined(CONFIG_CPU_SUBTYPE_SH7720) && !defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define TMU_TOCR 0xfffffe90 /* Byte access */
#endif
-@@ -523,7 +523,7 @@ __ixp4xx_iowrite32(u32 value, void __iomem *addr)
- __ixp4xx_outl(value, port & PIO_MASK);
- else
- #ifndef CONFIG_IXP4XX_INDIRECT_PCI
-- __raw_writel(cpu_to_le32(value), port);
-+ __raw_writel((u32 __force)cpu_to_le32(value), addr);
- #else
- __ixp4xx_writel(value, addr);
+
+ #if defined(CONFIG_CPU_SUBTYPE_SH7710) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define TMU_012_TSTR 0xa412fe92 /* Byte access */
+
+ #define TMU0_TCOR 0xa412fe94 /* Long access */
+@@ -57,7 +58,7 @@
+ #define TMU2_TCOR 0xfffffeac /* Long access */
+ #define TMU2_TCNT 0xfffffeb0 /* Long access */
+ #define TMU2_TCR 0xfffffeb4 /* Word access */
+-#if !defined(CONFIG_CPU_SUBTYPE_SH7720)
++#if !defined(CONFIG_CPU_SUBTYPE_SH7720) && !defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define TMU2_TCPR2 0xfffffeb8 /* Long access */
#endif
-diff --git a/include/asm-arm/arch-ixp4xx/platform.h b/include/asm-arm/arch-ixp4xx/platform.h
-index 2a44d3d..2ce28e3 100644
---- a/include/asm-arm/arch-ixp4xx/platform.h
-+++ b/include/asm-arm/arch-ixp4xx/platform.h
-@@ -76,17 +76,6 @@ extern unsigned long ixp4xx_exp_bus_size;
- #define IXP4XX_UART_XTAL 14745600
+ #endif
+diff --git a/include/asm-sh/cpu-sh3/ubc.h b/include/asm-sh/cpu-sh3/ubc.h
+index 18467c5..4e6381d 100644
+--- a/include/asm-sh/cpu-sh3/ubc.h
++++ b/include/asm-sh/cpu-sh3/ubc.h
+@@ -12,7 +12,8 @@
+ #define __ASM_CPU_SH3_UBC_H
- /*
-- * The IXP4xx chips do not have an I2C unit, so GPIO lines are just
-- * used to
-- * Used as platform_data to provide GPIO pin information to the ixp42x
-- * I2C driver.
-- */
--struct ixp4xx_i2c_pins {
-- unsigned long sda_pin;
-- unsigned long scl_pin;
--};
--
--/*
- * This structure provides a means for the board setup code
- * to give information to the pata_ixp4xx driver. It is
- * passed as platform_data.
-diff --git a/include/asm-arm/arch-ks8695/regs-gpio.h b/include/asm-arm/arch-ks8695/regs-gpio.h
-index 57fcf9f..6b95d77 100644
---- a/include/asm-arm/arch-ks8695/regs-gpio.h
-+++ b/include/asm-arm/arch-ks8695/regs-gpio.h
-@@ -49,5 +49,7 @@
- #define IOPC_TM_FALLING (4) /* Falling Edge Detection */
- #define IOPC_TM_EDGE (6) /* Both Edge Detection */
+ #if defined(CONFIG_CPU_SUBTYPE_SH7710) || \
+- defined(CONFIG_CPU_SUBTYPE_SH7720)
++ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7721)
+ #define UBC_BARA 0xa4ffffb0
+ #define UBC_BAMRA 0xa4ffffb4
+ #define UBC_BBRA 0xa4ffffb8
+diff --git a/include/asm-sh/cpu-sh4/addrspace.h b/include/asm-sh/cpu-sh4/addrspace.h
+index bb2e1b0..a3fa733 100644
+--- a/include/asm-sh/cpu-sh4/addrspace.h
++++ b/include/asm-sh/cpu-sh4/addrspace.h
+@@ -10,6 +10,12 @@
+ #ifndef __ASM_CPU_SH4_ADDRSPACE_H
+ #define __ASM_CPU_SH4_ADDRSPACE_H
-+/* Port Data Register */
-+#define IOPD_(x) (1 << (x)) /* Signal Level of GPIO Pin x */
++#define P0SEG 0x00000000
++#define P1SEG 0x80000000
++#define P2SEG 0xa0000000
++#define P3SEG 0xc0000000
++#define P4SEG 0xe0000000
++
+ /* Detailed P4SEG */
+ #define P4SEG_STORE_QUE (P4SEG)
+ #define P4SEG_IC_ADDR 0xf0000000
+diff --git a/include/asm-sh/cpu-sh4/cache.h b/include/asm-sh/cpu-sh4/cache.h
+index f92b20a..1c61ebf 100644
+--- a/include/asm-sh/cpu-sh4/cache.h
++++ b/include/asm-sh/cpu-sh4/cache.h
+@@ -12,6 +12,11 @@
- #endif
-diff --git a/include/asm-arm/arch-msm/board.h b/include/asm-arm/arch-msm/board.h
+ #define L1_CACHE_SHIFT 5
+
++#define SH_CACHE_VALID 1
++#define SH_CACHE_UPDATED 2
++#define SH_CACHE_COMBINED 4
++#define SH_CACHE_ASSOC 8
++
+ #define CCR 0xff00001c /* Address of Cache Control Register */
+ #define CCR_CACHE_OCE 0x0001 /* Operand Cache Enable */
+ #define CCR_CACHE_WT 0x0002 /* Write-Through (for P0,U0,P3) (else writeback)*/
+diff --git a/include/asm-sh/cpu-sh4/fpu.h b/include/asm-sh/cpu-sh4/fpu.h
new file mode 100644
-index 0000000..763051f
+index 0000000..febef73
--- /dev/null
-+++ b/include/asm-arm/arch-msm/board.h
-@@ -0,0 +1,37 @@
-+/* linux/include/asm-arm/arch-msm/board.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ * Author: Brian Swetland <swetland at google.com>
++++ b/include/asm-sh/cpu-sh4/fpu.h
+@@ -0,0 +1,32 @@
++/*
++ * linux/arch/sh/kernel/cpu/sh4/sh4_fpu.h
+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
++ * Copyright (C) 2006 STMicroelectronics Limited
++ * Author: Carl Shaw <carl.shaw at st.com>
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * May be copied or modified under the terms of the GNU General Public
++ * License Version 2. See linux/COPYING for more information.
+ *
++ * Definitions for SH4 FPU operations
+ */
+
-+#ifndef __ASM_ARCH_MSM_BOARD_H
-+#define __ASM_ARCH_MSM_BOARD_H
++#ifndef __CPU_SH4_FPU_H
++#define __CPU_SH4_FPU_H
+
-+#include <linux/types.h>
++#define FPSCR_ENABLE_MASK 0x00000f80UL
+
-+/* platform device data structures */
++#define FPSCR_FMOV_DOUBLE (1<<1)
+
-+struct msm_mddi_platform_data
-+{
-+ void (*panel_power)(int on);
-+ unsigned has_vsync_irq:1;
-+};
++#define FPSCR_CAUSE_INEXACT (1<<12)
++#define FPSCR_CAUSE_UNDERFLOW (1<<13)
++#define FPSCR_CAUSE_OVERFLOW (1<<14)
++#define FPSCR_CAUSE_DIVZERO (1<<15)
++#define FPSCR_CAUSE_INVALID (1<<16)
++#define FPSCR_CAUSE_ERROR (1<<17)
+
-+/* common init routines for use by arch/arm/mach-msm/board-*.c */
++#define FPSCR_DBL_PRECISION (1<<19)
++#define FPSCR_ROUNDING_MODE(x) ((x >> 20) & 3)
++#define FPSCR_RM_NEAREST (0)
++#define FPSCR_RM_ZERO (1)
+
-+void __init msm_add_devices(void);
-+void __init msm_map_common_io(void);
-+void __init msm_init_irq(void);
-+void __init msm_init_gpio(void);
++#endif
+diff --git a/include/asm-sh/cpu-sh4/freq.h b/include/asm-sh/cpu-sh4/freq.h
+index dc1d32a..1ac10b9 100644
+--- a/include/asm-sh/cpu-sh4/freq.h
++++ b/include/asm-sh/cpu-sh4/freq.h
+@@ -16,7 +16,8 @@
+ #define SCLKACR 0xa4150008
+ #define SCLKBCR 0xa415000c
+ #define IrDACLKCR 0xa4150010
+-#elif defined(CONFIG_CPU_SUBTYPE_SH7780)
++#elif defined(CONFIG_CPU_SUBTYPE_SH7763) || \
++ defined(CONFIG_CPU_SUBTYPE_SH7780)
+ #define FRQCR 0xffc80000
+ #elif defined(CONFIG_CPU_SUBTYPE_SH7785)
+ #define FRQCR0 0xffc80000
+diff --git a/include/asm-sh/cpu-sh4/mmu_context.h b/include/asm-sh/cpu-sh4/mmu_context.h
+index 979acdd..9ea8eb2 100644
+--- a/include/asm-sh/cpu-sh4/mmu_context.h
++++ b/include/asm-sh/cpu-sh4/mmu_context.h
+@@ -22,12 +22,20 @@
+ #define MMU_UTLB_ADDRESS_ARRAY 0xF6000000
+ #define MMU_PAGE_ASSOC_BIT 0x80
+
++#define MMUCR_TI (1<<2)
+
+ #ifdef CONFIG_X2TLB
+ #define MMUCR_ME (1 << 7)
+ #else
+ #define MMUCR_ME (0)
+ #endif
+
++#if defined(CONFIG_32BIT) && defined(CONFIG_CPU_SUBTYPE_ST40)
++#define MMUCR_SE (1 << 4)
++#else
++#define MMUCR_SE (0)
+#endif
-diff --git a/include/asm-arm/arch-msm/debug-macro.S b/include/asm-arm/arch-msm/debug-macro.S
++
+ #ifdef CONFIG_SH_STORE_QUEUES
+ #define MMUCR_SQMD (1 << 9)
+ #else
+@@ -35,7 +43,7 @@
+ #endif
+
+ #define MMU_NTLB_ENTRIES 64
+-#define MMU_CONTROL_INIT (0x05|MMUCR_SQMD|MMUCR_ME)
++#define MMU_CONTROL_INIT (0x05|MMUCR_SQMD|MMUCR_ME|MMUCR_SE)
+
+ #define MMU_ITLB_DATA_ARRAY 0xF3000000
+ #define MMU_UTLB_DATA_ARRAY 0xF7000000
+diff --git a/include/asm-sh/cpu-sh4/rtc.h b/include/asm-sh/cpu-sh4/rtc.h
new file mode 100644
-index 0000000..393d527
+index 0000000..f3d0f53
--- /dev/null
-+++ b/include/asm-arm/arch-msm/debug-macro.S
-@@ -0,0 +1,40 @@
-+/* include/asm-arm/arch-msm7200/debug-macro.S
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ * Author: Brian Swetland <swetland at google.com>
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
++++ b/include/asm-sh/cpu-sh4/rtc.h
+@@ -0,0 +1,8 @@
++#ifndef __ASM_SH_CPU_SH4_RTC_H
++#define __ASM_SH_CPU_SH4_RTC_H
+
-+#include <asm/hardware.h>
-+#include <asm/arch/msm_iomap.h>
++#define rtc_reg_size sizeof(u32)
++#define RTC_BIT_INVERTED 0x40 /* bug on SH7750, SH7750S */
++#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
+
-+ .macro addruart,rx
-+ @ see if the MMU is enabled and select appropriate base address
-+ mrc p15, 0, \rx, c1, c0
-+ tst \rx, #1
-+ ldreq \rx, =MSM_UART1_PHYS
-+ ldrne \rx, =MSM_UART1_BASE
-+ .endm
++#endif /* __ASM_SH_CPU_SH4_RTC_H */
+diff --git a/include/asm-sh/cpu-sh5/addrspace.h b/include/asm-sh/cpu-sh5/addrspace.h
+new file mode 100644
+index 0000000..dc36b9a
+--- /dev/null
++++ b/include/asm-sh/cpu-sh5/addrspace.h
+@@ -0,0 +1,11 @@
++#ifndef __ASM_SH_CPU_SH5_ADDRSPACE_H
++#define __ASM_SH_CPU_SH5_ADDRSPACE_H
+
-+ .macro senduart,rd,rx
-+ str \rd, [\rx, #0x0C]
-+ .endm
++#define PHYS_PERIPHERAL_BLOCK 0x09000000
++#define PHYS_DMAC_BLOCK 0x0e000000
++#define PHYS_PCI_BLOCK 0x60000000
++#define PHYS_EMI_BLOCK 0xff000000
+
-+ .macro waituart,rd,rx
-+ @ wait for TX_READY
-+1: ldr \rd, [\rx, #0x08]
-+ tst \rd, #0x04
-+ beq 1b
-+ .endm
++/* No segmentation.. */
+
-+ .macro busyuart,rd,rx
-+ .endm
-diff --git a/include/asm-arm/arch-msm/dma.h b/include/asm-arm/arch-msm/dma.h
++#endif /* __ASM_SH_CPU_SH5_ADDRSPACE_H */
+diff --git a/include/asm-sh/cpu-sh5/cache.h b/include/asm-sh/cpu-sh5/cache.h
new file mode 100644
-index 0000000..e4b565b
+index 0000000..ed050ab
--- /dev/null
-+++ b/include/asm-arm/arch-msm/dma.h
-@@ -0,0 +1,151 @@
-+/* linux/include/asm-arm/arch-msm/dma.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
++++ b/include/asm-sh/cpu-sh5/cache.h
+@@ -0,0 +1,97 @@
++#ifndef __ASM_SH_CPU_SH5_CACHE_H
++#define __ASM_SH_CPU_SH5_CACHE_H
++
++/*
++ * include/asm-sh/cpu-sh5/cache.h
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003, 2004 Paul Mundt
+ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
+
-+#ifndef __ASM_ARCH_MSM_DMA_H
-+
-+#include <linux/list.h>
-+#include <asm/arch/msm_iomap.h>
-+
-+struct msm_dmov_cmd {
-+ struct list_head list;
-+ unsigned int cmdptr;
-+ void (*complete_func)(struct msm_dmov_cmd *cmd, unsigned int result);
-+/* void (*user_result_func)(struct msm_dmov_cmd *cmd); */
-+};
-+
-+void msm_dmov_enqueue_cmd(unsigned id, struct msm_dmov_cmd *cmd);
-+void msm_dmov_stop_cmd(unsigned id, struct msm_dmov_cmd *cmd);
-+int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr);
-+/* int msm_dmov_exec_cmd_etc(unsigned id, unsigned int cmdptr, int timeout, int interruptible); */
++#define L1_CACHE_SHIFT 5
+
++/* Valid and Dirty bits */
++#define SH_CACHE_VALID (1LL<<0)
++#define SH_CACHE_UPDATED (1LL<<57)
+
++/* Unimplemented compat bits.. */
++#define SH_CACHE_COMBINED 0
++#define SH_CACHE_ASSOC 0
+
-+#define DMOV_SD0(off, ch) (MSM_DMOV_BASE + 0x0000 + (off) + ((ch) << 2))
-+#define DMOV_SD1(off, ch) (MSM_DMOV_BASE + 0x0400 + (off) + ((ch) << 2))
-+#define DMOV_SD2(off, ch) (MSM_DMOV_BASE + 0x0800 + (off) + ((ch) << 2))
-+#define DMOV_SD3(off, ch) (MSM_DMOV_BASE + 0x0C00 + (off) + ((ch) << 2))
++/* Cache flags */
++#define SH_CACHE_MODE_WT (1LL<<0)
++#define SH_CACHE_MODE_WB (1LL<<1)
+
-+/* only security domain 3 is available to the ARM11
-+ * SD0 -> mARM trusted, SD1 -> mARM nontrusted, SD2 -> aDSP, SD3 -> aARM
++/*
++ * Control Registers.
+ */
++#define ICCR_BASE 0x01600000 /* Instruction Cache Control Register */
++#define ICCR_REG0 0 /* Register 0 offset */
++#define ICCR_REG1 1 /* Register 1 offset */
++#define ICCR0 ICCR_BASE+ICCR_REG0
++#define ICCR1 ICCR_BASE+ICCR_REG1
+
-+#define DMOV_CMD_PTR(ch) DMOV_SD3(0x000, ch)
-+#define DMOV_CMD_LIST (0 << 29) /* does not work */
-+#define DMOV_CMD_PTR_LIST (1 << 29) /* works */
-+#define DMOV_CMD_INPUT_CFG (2 << 29) /* untested */
-+#define DMOV_CMD_OUTPUT_CFG (3 << 29) /* untested */
-+#define DMOV_CMD_ADDR(addr) ((addr) >> 3)
-+
-+#define DMOV_RSLT(ch) DMOV_SD3(0x040, ch)
-+#define DMOV_RSLT_VALID (1 << 31) /* 0 == host has empties result fifo */
-+#define DMOV_RSLT_ERROR (1 << 3)
-+#define DMOV_RSLT_FLUSH (1 << 2)
-+#define DMOV_RSLT_DONE (1 << 1) /* top pointer done */
-+#define DMOV_RSLT_USER (1 << 0) /* command with FR force result */
-+
-+#define DMOV_FLUSH0(ch) DMOV_SD3(0x080, ch)
-+#define DMOV_FLUSH1(ch) DMOV_SD3(0x0C0, ch)
-+#define DMOV_FLUSH2(ch) DMOV_SD3(0x100, ch)
-+#define DMOV_FLUSH3(ch) DMOV_SD3(0x140, ch)
-+#define DMOV_FLUSH4(ch) DMOV_SD3(0x180, ch)
-+#define DMOV_FLUSH5(ch) DMOV_SD3(0x1C0, ch)
++#define ICCR0_OFF 0x0 /* Set ICACHE off */
++#define ICCR0_ON 0x1 /* Set ICACHE on */
++#define ICCR0_ICI 0x2 /* Invalidate all in IC */
+
-+#define DMOV_STATUS(ch) DMOV_SD3(0x200, ch)
-+#define DMOV_STATUS_RSLT_COUNT(n) (((n) >> 29))
-+#define DMOV_STATUS_CMD_COUNT(n) (((n) >> 27) & 3)
-+#define DMOV_STATUS_RSLT_VALID (1 << 1)
-+#define DMOV_STATUS_CMD_PTR_RDY (1 << 0)
++#define ICCR1_NOLOCK 0x0 /* Set No Locking */
+
-+#define DMOV_ISR DMOV_SD3(0x380, 0)
++#define OCCR_BASE 0x01E00000 /* Operand Cache Control Register */
++#define OCCR_REG0 0 /* Register 0 offset */
++#define OCCR_REG1 1 /* Register 1 offset */
++#define OCCR0 OCCR_BASE+OCCR_REG0
++#define OCCR1 OCCR_BASE+OCCR_REG1
+
-+#define DMOV_CONFIG(ch) DMOV_SD3(0x300, ch)
-+#define DMOV_CONFIG_FORCE_TOP_PTR_RSLT (1 << 2)
-+#define DMOV_CONFIG_FORCE_FLUSH_RSLT (1 << 1)
-+#define DMOV_CONFIG_IRQ_EN (1 << 0)
++#define OCCR0_OFF 0x0 /* Set OCACHE off */
++#define OCCR0_ON 0x1 /* Set OCACHE on */
++#define OCCR0_OCI 0x2 /* Invalidate all in OC */
++#define OCCR0_WT 0x4 /* Set OCACHE in WT Mode */
++#define OCCR0_WB 0x0 /* Set OCACHE in WB Mode */
+
-+/* channel assignments */
++#define OCCR1_NOLOCK 0x0 /* Set No Locking */
+
-+#define DMOV_NAND_CHAN 7
-+#define DMOV_NAND_CRCI_CMD 5
-+#define DMOV_NAND_CRCI_DATA 4
++/*
++ * SH-5
++ * A bit of description here, for neff=32.
++ *
++ * |<--- tag (19 bits) --->|
++ * +-----------------------------+-----------------+------+----------+------+
++ * | | | ways |set index |offset|
++ * +-----------------------------+-----------------+------+----------+------+
++ * ^ 2 bits 8 bits 5 bits
++ * +- Bit 31
++ *
++ * Cacheline size is based on offset: 5 bits = 32 bytes per line
++ * A cache line is identified by a tag + set but OCACHETAG/ICACHETAG
++ * have a broader space for registers. These are outlined by
++ * CACHE_?C_*_STEP below.
++ *
++ */
+
-+#define DMOV_SDC1_CHAN 8
-+#define DMOV_SDC1_CRCI 6
++/* Instruction cache */
++#define CACHE_IC_ADDRESS_ARRAY 0x01000000
+
-+#define DMOV_SDC2_CHAN 8
-+#define DMOV_SDC2_CRCI 7
++/* Operand Cache */
++#define CACHE_OC_ADDRESS_ARRAY 0x01800000
+
-+#define DMOV_TSIF_CHAN 10
-+#define DMOV_TSIF_CRCI 10
++/* These declarations relate to cache 'synonyms' in the operand cache. A
++ 'synonym' occurs where effective address bits overlap between those used for
++ indexing the cache sets and those passed to the MMU for translation. In the
++ case of SH5-101 & SH5-103, only bit 12 is affected for 4k pages. */
+
-+#define DMOV_USB_CHAN 11
++#define CACHE_OC_N_SYNBITS 1 /* Number of synonym bits */
++#define CACHE_OC_SYN_SHIFT 12
++/* Mask to select synonym bit(s) */
++#define CACHE_OC_SYN_MASK (((1UL<<CACHE_OC_N_SYNBITS)-1)<<CACHE_OC_SYN_SHIFT)
+
-+/* no client rate control ifc (eg, ram) */
-+#define DMOV_NONE_CRCI 0
++/*
++ * Instruction cache can't be invalidated based on physical addresses.
++ * No Instruction Cache defines required, then.
++ */
+
++#endif /* __ASM_SH_CPU_SH5_CACHE_H */
+diff --git a/include/asm-sh/cpu-sh5/cacheflush.h b/include/asm-sh/cpu-sh5/cacheflush.h
+new file mode 100644
+index 0000000..98edb5b
+--- /dev/null
++++ b/include/asm-sh/cpu-sh5/cacheflush.h
+@@ -0,0 +1,35 @@
++#ifndef __ASM_SH_CPU_SH5_CACHEFLUSH_H
++#define __ASM_SH_CPU_SH5_CACHEFLUSH_H
+
-+/* If the CMD_PTR register has CMD_PTR_LIST selected, the data mover
-+ * is going to walk a list of 32bit pointers as described below. Each
-+ * pointer points to a *array* of dmov_s, etc structs. The last pointer
-+ * in the list is marked with CMD_PTR_LP. The last struct in each array
-+ * is marked with CMD_LC (see below).
-+ */
-+#define CMD_PTR_ADDR(addr) ((addr) >> 3)
-+#define CMD_PTR_LP (1 << 31) /* last pointer */
-+#define CMD_PTR_PT (3 << 29) /* ? */
++#ifndef __ASSEMBLY__
+
-+/* Single Item Mode */
-+typedef struct {
-+ unsigned cmd;
-+ unsigned src;
-+ unsigned dst;
-+ unsigned len;
-+} dmov_s;
++#include <asm/page.h>
+
-+/* Scatter/Gather Mode */
-+typedef struct {
-+ unsigned cmd;
-+ unsigned src_dscr;
-+ unsigned dst_dscr;
-+ unsigned _reserved;
-+} dmov_sg;
++struct vm_area_struct;
++struct page;
++struct mm_struct;
+
-+/* bits for the cmd field of the above structures */
++extern void flush_cache_all(void);
++extern void flush_cache_mm(struct mm_struct *mm);
++extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
++extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
++ unsigned long end);
++extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
++extern void flush_dcache_page(struct page *pg);
++extern void flush_icache_range(unsigned long start, unsigned long end);
++extern void flush_icache_user_range(struct vm_area_struct *vma,
++ struct page *page, unsigned long addr,
++ int len);
+
-+#define CMD_LC (1 << 31) /* last command */
-+#define CMD_FR (1 << 22) /* force result -- does not work? */
-+#define CMD_OCU (1 << 21) /* other channel unblock */
-+#define CMD_OCB (1 << 20) /* other channel block */
-+#define CMD_TCB (1 << 19) /* ? */
-+#define CMD_DAH (1 << 18) /* destination address hold -- does not work?*/
-+#define CMD_SAH (1 << 17) /* source address hold -- does not work? */
++#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+
-+#define CMD_MODE_SINGLE (0 << 0) /* dmov_s structure used */
-+#define CMD_MODE_SG (1 << 0) /* untested */
-+#define CMD_MODE_IND_SG (2 << 0) /* untested */
-+#define CMD_MODE_BOX (3 << 0) /* untested */
++#define flush_dcache_mmap_lock(mapping) do { } while (0)
++#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+
-+#define CMD_DST_SWAP_BYTES (1 << 14) /* exchange each byte n with byte n+1 */
-+#define CMD_DST_SWAP_SHORTS (1 << 15) /* exchange each short n with short n+1 */
-+#define CMD_DST_SWAP_WORDS (1 << 16) /* exchange each word n with word n+1 */
++#define flush_icache_page(vma, page) do { } while (0)
++#define p3_cache_init() do { } while (0)
+
-+#define CMD_SRC_SWAP_BYTES (1 << 11) /* exchange each byte n with byte n+1 */
-+#define CMD_SRC_SWAP_SHORTS (1 << 12) /* exchange each short n with short n+1 */
-+#define CMD_SRC_SWAP_WORDS (1 << 13) /* exchange each word n with word n+1 */
++#endif /* __ASSEMBLY__ */
+
-+#define CMD_DST_CRCI(n) (((n) & 15) << 7)
-+#define CMD_SRC_CRCI(n) (((n) & 15) << 3)
++#endif /* __ASM_SH_CPU_SH5_CACHEFLUSH_H */
+
-+#endif
-diff --git a/include/asm-arm/arch-msm/entry-macro.S b/include/asm-arm/arch-msm/entry-macro.S
+diff --git a/include/asm-sh/cpu-sh5/dma.h b/include/asm-sh/cpu-sh5/dma.h
new file mode 100644
-index 0000000..ee24aec
+index 0000000..7bf6bb3
--- /dev/null
-+++ b/include/asm-arm/arch-msm/entry-macro.S
-@@ -0,0 +1,38 @@
-+/* include/asm-arm/arch-msm7200/entry-macro.S
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ * Author: Brian Swetland <swetland at google.com>
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
-+
-+#include <asm/arch/msm_iomap.h>
-+
-+ .macro disable_fiq
-+ .endm
-+
-+ .macro get_irqnr_preamble, base, tmp
-+ @ enable imprecise aborts
-+ cpsie a
-+ mov \base, #MSM_VIC_BASE
-+ .endm
++++ b/include/asm-sh/cpu-sh5/dma.h
+@@ -0,0 +1,6 @@
++#ifndef __ASM_SH_CPU_SH5_DMA_H
++#define __ASM_SH_CPU_SH5_DMA_H
+
-+ .macro arch_ret_to_user, tmp1, tmp2
-+ .endm
++/* Nothing yet */
+
-+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
-+ @ 0xD0 has irq# or old irq# if the irq has been handled
-+ @ 0xD4 has irq# or -1 if none pending *but* if you just
-+ @ read 0xD4 you never get the first irq for some reason
-+ ldr \irqnr, [\base, #0xD0]
-+ ldr \irqnr, [\base, #0xD4]
-+ cmp \irqnr, #0xffffffff
-+ .endm
-diff --git a/include/asm-arm/arch-msm/hardware.h b/include/asm-arm/arch-msm/hardware.h
++#endif /* __ASM_SH_CPU_SH5_DMA_H */
+diff --git a/include/asm-sh/cpu-sh5/irq.h b/include/asm-sh/cpu-sh5/irq.h
new file mode 100644
-index 0000000..89af2b7
+index 0000000..f0f0756
--- /dev/null
-+++ b/include/asm-arm/arch-msm/hardware.h
-@@ -0,0 +1,18 @@
-+/* linux/include/asm-arm/arch-msm/hardware.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
++++ b/include/asm-sh/cpu-sh5/irq.h
+@@ -0,0 +1,117 @@
++#ifndef __ASM_SH_CPU_SH5_IRQ_H
++#define __ASM_SH_CPU_SH5_IRQ_H
++
++/*
++ * include/asm-sh/cpu-sh5/irq.h
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
+
-+#ifndef __ASM_ARCH_MSM_HARDWARE_H
+
-+#endif
-diff --git a/include/asm-arm/arch-msm/io.h b/include/asm-arm/arch-msm/io.h
-new file mode 100644
-index 0000000..4645ae2
---- /dev/null
-+++ b/include/asm-arm/arch-msm/io.h
-@@ -0,0 +1,33 @@
-+/* include/asm-arm/arch-msm/io.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
++/*
++ * Encoded IRQs are not considered worth to be supported.
++ * Main reason is that there's no per-encoded-interrupt
++ * enable/disable mechanism (as there was in SH3/4).
++ * An all enabled/all disabled is worth only if there's
++ * a cascaded IC to disable/enable/ack on. Until such
++ * IC is available there's no such support.
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * Presumably Encoded IRQs may use extra IRQs beyond 64,
++ * below. Some logic must be added to cope with IRQ_IRL?
++ * in an exclusive way.
+ *
++ * Priorities are set at Platform level, when IRQ_IRL0-3
++ * are set to 0 Encoding is allowed. Otherwise it's not
++ * allowed.
+ */
+
-+#ifndef __ASM_ARM_ARCH_IO_H
-+#define __ASM_ARM_ARCH_IO_H
-+
-+#define IO_SPACE_LIMIT 0xffffffff
++/* Independent IRQs */
++#define IRQ_IRL0 0
++#define IRQ_IRL1 1
++#define IRQ_IRL2 2
++#define IRQ_IRL3 3
+
-+#define __arch_ioremap __msm_ioremap
-+#define __arch_iounmap __iounmap
++#define IRQ_INTA 4
++#define IRQ_INTB 5
++#define IRQ_INTC 6
++#define IRQ_INTD 7
+
-+void __iomem *__msm_ioremap(unsigned long phys_addr, size_t size, unsigned int mtype);
++#define IRQ_SERR 12
++#define IRQ_ERR 13
++#define IRQ_PWR3 14
++#define IRQ_PWR2 15
++#define IRQ_PWR1 16
++#define IRQ_PWR0 17
+
-+static inline void __iomem *__io(unsigned long addr)
-+{
-+ return (void __iomem *)addr;
-+}
-+#define __io(a) __io(a)
-+#define __mem_pci(a) (a)
++#define IRQ_DMTE0 18
++#define IRQ_DMTE1 19
++#define IRQ_DMTE2 20
++#define IRQ_DMTE3 21
++#define IRQ_DAERR 22
+
-+#endif
-diff --git a/include/asm-arm/arch-msm/irqs.h b/include/asm-arm/arch-msm/irqs.h
-new file mode 100644
-index 0000000..565430c
---- /dev/null
-+++ b/include/asm-arm/arch-msm/irqs.h
-@@ -0,0 +1,89 @@
-+/* linux/include/asm-arm/arch-msm/irqs.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ * Author: Brian Swetland <swetland at google.com>
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
++#define IRQ_TUNI0 32
++#define IRQ_TUNI1 33
++#define IRQ_TUNI2 34
++#define IRQ_TICPI2 35
+
-+#ifndef __ASM_ARCH_MSM_IRQS_H
++#define IRQ_ATI 36
++#define IRQ_PRI 37
++#define IRQ_CUI 38
+
-+/* MSM ARM11 Interrupt Numbers */
-+/* See 80-VE113-1 A, pp219-221 */
++#define IRQ_ERI 39
++#define IRQ_RXI 40
++#define IRQ_BRI 41
++#define IRQ_TXI 42
+
-+#define INT_A9_M2A_0 0
-+#define INT_A9_M2A_1 1
-+#define INT_A9_M2A_2 2
-+#define INT_A9_M2A_3 3
-+#define INT_A9_M2A_4 4
-+#define INT_A9_M2A_5 5
-+#define INT_A9_M2A_6 6
-+#define INT_GP_TIMER_EXP 7
-+#define INT_DEBUG_TIMER_EXP 8
-+#define INT_UART1 9
-+#define INT_UART2 10
-+#define INT_UART3 11
-+#define INT_UART1_RX 12
-+#define INT_UART2_RX 13
-+#define INT_UART3_RX 14
-+#define INT_USB_OTG 15
-+#define INT_MDDI_PRI 16
-+#define INT_MDDI_EXT 17
-+#define INT_MDDI_CLIENT 18
-+#define INT_MDP 19
-+#define INT_GRAPHICS 20
-+#define INT_ADM_AARM 21
-+#define INT_ADSP_A11 22
-+#define INT_ADSP_A9_A11 23
-+#define INT_SDC1_0 24
-+#define INT_SDC1_1 25
-+#define INT_SDC2_0 26
-+#define INT_SDC2_1 27
-+#define INT_KEYSENSE 28
-+#define INT_TCHSCRN_SSBI 29
-+#define INT_TCHSCRN1 30
-+#define INT_TCHSCRN2 31
++#define IRQ_ITI 63
+
-+#define INT_GPIO_GROUP1 (32 + 0)
-+#define INT_GPIO_GROUP2 (32 + 1)
-+#define INT_PWB_I2C (32 + 2)
-+#define INT_SOFTRESET (32 + 3)
-+#define INT_NAND_WR_ER_DONE (32 + 4)
-+#define INT_NAND_OP_DONE (32 + 5)
-+#define INT_PBUS_ARM11 (32 + 6)
-+#define INT_AXI_MPU_SMI (32 + 7)
-+#define INT_AXI_MPU_EBI1 (32 + 8)
-+#define INT_AD_HSSD (32 + 9)
-+#define INT_ARM11_PMU (32 + 10)
-+#define INT_ARM11_DMA (32 + 11)
-+#define INT_TSIF_IRQ (32 + 12)
-+#define INT_UART1DM_IRQ (32 + 13)
-+#define INT_UART1DM_RX (32 + 14)
-+#define INT_USB_HS (32 + 15)
-+#define INT_SDC3_0 (32 + 16)
-+#define INT_SDC3_1 (32 + 17)
-+#define INT_SDC4_0 (32 + 18)
-+#define INT_SDC4_1 (32 + 19)
-+#define INT_UART2DM_RX (32 + 20)
-+#define INT_UART2DM_IRQ (32 + 21)
++#define NR_INTC_IRQS 64
+
-+/* 22-31 are reserved */
++#ifdef CONFIG_SH_CAYMAN
++#define NR_EXT_IRQS 32
++#define START_EXT_IRQS 64
+
-+#define MSM_IRQ_BIT(irq) (1 << ((irq) & 31))
++/* PCI bus 2 uses encoded external interrupts on the Cayman board */
++#define IRQ_P2INTA (START_EXT_IRQS + (3*8) + 0)
++#define IRQ_P2INTB (START_EXT_IRQS + (3*8) + 1)
++#define IRQ_P2INTC (START_EXT_IRQS + (3*8) + 2)
++#define IRQ_P2INTD (START_EXT_IRQS + (3*8) + 3)
+
-+#define NR_MSM_IRQS 64
-+#define NR_GPIO_IRQS 122
-+#define NR_BOARD_IRQS 64
-+#define NR_IRQS (NR_MSM_IRQS + NR_GPIO_IRQS + NR_BOARD_IRQS)
++#define I8042_KBD_IRQ (START_EXT_IRQS + 2)
++#define I8042_AUX_IRQ (START_EXT_IRQS + 6)
+
-+#define MSM_GPIO_TO_INT(n) (NR_MSM_IRQS + (n))
++#define IRQ_CFCARD (START_EXT_IRQS + 7)
++#define IRQ_PCMCIA (0)
+
++#else
++#define NR_EXT_IRQS 0
+#endif
-diff --git a/include/asm-arm/arch-msm/memory.h b/include/asm-arm/arch-msm/memory.h
++
++/* Default IRQs, fixed */
++#define TIMER_IRQ IRQ_TUNI0
++#define RTC_IRQ IRQ_CUI
++
++/* Default Priorities, Platform may choose differently */
++#define NO_PRIORITY 0 /* Disabled */
++#define TIMER_PRIORITY 2
++#define RTC_PRIORITY TIMER_PRIORITY
++#define SCIF_PRIORITY 3
++#define INTD_PRIORITY 3
++#define IRL3_PRIORITY 4
++#define INTC_PRIORITY 6
++#define IRL2_PRIORITY 7
++#define INTB_PRIORITY 9
++#define IRL1_PRIORITY 10
++#define INTA_PRIORITY 12
++#define IRL0_PRIORITY 13
++#define TOP_PRIORITY 15
++
++extern int intc_evt_to_irq[(0xE20/0x20)+1];
++int intc_irq_describe(char* p, int irq);
++extern int platform_int_priority[NR_INTC_IRQS];
++
++#endif /* __ASM_SH_CPU_SH5_IRQ_H */
+diff --git a/include/asm-sh/cpu-sh5/mmu_context.h b/include/asm-sh/cpu-sh5/mmu_context.h
new file mode 100644
-index 0000000..b5ce0e9
+index 0000000..df857fc
--- /dev/null
-+++ b/include/asm-arm/arch-msm/memory.h
++++ b/include/asm-sh/cpu-sh5/mmu_context.h
@@ -0,0 +1,27 @@
-+/* linux/include/asm-arm/arch-msm/memory.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
++#ifndef __ASM_SH_CPU_SH5_MMU_CONTEXT_H
++#define __ASM_SH_CPU_SH5_MMU_CONTEXT_H
+
-+#ifndef __ASM_ARCH_MEMORY_H
-+#define __ASM_ARCH_MEMORY_H
++/* Common defines */
++#define TLB_STEP 0x00000010
++#define TLB_PTEH 0x00000000
++#define TLB_PTEL 0x00000008
+
-+/* physical offset of RAM */
-+#define PHYS_OFFSET UL(0x10000000)
++/* PTEH defines */
++#define PTEH_ASID_SHIFT 2
++#define PTEH_VALID 0x0000000000000001
++#define PTEH_SHARED 0x0000000000000002
++#define PTEH_MATCH_ASID 0x00000000000003ff
+
-+/* bus address and physical addresses are identical */
-+#define __virt_to_bus(x) __virt_to_phys(x)
-+#define __bus_to_virt(x) __phys_to_virt(x)
++#ifndef __ASSEMBLY__
++/* This has to be a common function because the next location to fill
++ * information is shared. */
++extern void __do_tlb_refill(unsigned long address, unsigned long long is_text_not_data, pte_t *pte);
+
++/* Profiling counter. */
++#ifdef CONFIG_SH64_PROC_TLB
++extern unsigned long long calls_to_do_fast_page_fault;
+#endif
+
-diff --git a/include/asm-arm/arch-msm/msm_iomap.h b/include/asm-arm/arch-msm/msm_iomap.h
++#endif /* __ASSEMBLY__ */
++
++#endif /* __ASM_SH_CPU_SH5_MMU_CONTEXT_H */
+diff --git a/include/asm-sh/cpu-sh5/registers.h b/include/asm-sh/cpu-sh5/registers.h
new file mode 100644
-index 0000000..b8955cc
+index 0000000..6664ea6
--- /dev/null
-+++ b/include/asm-arm/arch-msm/msm_iomap.h
-@@ -0,0 +1,104 @@
-+/* linux/include/asm-arm/arch-msm/msm_iomap.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ * Author: Brian Swetland <swetland at google.com>
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
++++ b/include/asm-sh/cpu-sh5/registers.h
+@@ -0,0 +1,106 @@
++#ifndef __ASM_SH_CPU_SH5_REGISTERS_H
++#define __ASM_SH_CPU_SH5_REGISTERS_H
++
++/*
++ * include/asm-sh/cpu-sh5/registers.h
+ *
-+ * The MSM peripherals are spread all over across 768MB of physical
-+ * space, which makes just having a simple IO_ADDRESS macro to slide
-+ * them into the right virtual location rough. Instead, we will
-+ * provide a master phys->virt mapping for peripherals here.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2004 Richard Curnow
+ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
+
-+#ifndef __ASM_ARCH_MSM_IOMAP_H
-+#define __ASM_ARCH_MSM_IOMAP_H
-+
-+#include <asm/sizes.h>
++#ifdef __ASSEMBLY__
++/* =====================================================================
++**
++** Section 1: acts on assembly sources pre-processed by GPP ( <source.S>).
++** Assigns symbolic names to control & target registers.
++*/
+
-+/* Physical base address and size of peripherals.
-+ * Ordered by the virtual base addresses they will be mapped at.
-+ *
-+ * MSM_VIC_BASE must be an value that can be loaded via a "mov"
-+ * instruction, otherwise entry-macro.S will not compile.
-+ *
-+ * If you add or remove entries here, you'll want to edit the
-+ * msm_io_desc array in arch/arm/mach-msm/io.c to reflect your
-+ * changes.
-+ *
++/*
++ * Define some useful aliases for control registers.
+ */
++#define SR cr0
++#define SSR cr1
++#define PSSR cr2
++ /* cr3 UNDEFINED */
++#define INTEVT cr4
++#define EXPEVT cr5
++#define PEXPEVT cr6
++#define TRA cr7
++#define SPC cr8
++#define PSPC cr9
++#define RESVEC cr10
++#define VBR cr11
++ /* cr12 UNDEFINED */
++#define TEA cr13
++ /* cr14-cr15 UNDEFINED */
++#define DCR cr16
++#define KCR0 cr17
++#define KCR1 cr18
++ /* cr19-cr31 UNDEFINED */
++ /* cr32-cr61 RESERVED */
++#define CTC cr62
++#define USR cr63
+
-+#define MSM_VIC_BASE 0xE0000000
-+#define MSM_VIC_PHYS 0xC0000000
-+#define MSM_VIC_SIZE SZ_4K
++/*
++ * ABI dependent registers (general purpose set)
++ */
++#define RET r2
++#define ARG1 r2
++#define ARG2 r3
++#define ARG3 r4
++#define ARG4 r5
++#define ARG5 r6
++#define ARG6 r7
++#define SP r15
++#define LINK r18
++#define ZERO r63
+
-+#define MSM_CSR_BASE 0xE0001000
-+#define MSM_CSR_PHYS 0xC0100000
-+#define MSM_CSR_SIZE SZ_4K
++/*
++ * Status register defines: used only by assembly sources (and
++ * syntax independednt)
++ */
++#define SR_RESET_VAL 0x0000000050008000
++#define SR_HARMLESS 0x00000000500080f0 /* Write ignores for most */
++#define SR_ENABLE_FPU 0xffffffffffff7fff /* AND with this */
+
-+#define MSM_GPT_PHYS MSM_CSR_PHYS
-+#define MSM_GPT_BASE MSM_CSR_BASE
-+#define MSM_GPT_SIZE SZ_4K
++#if defined (CONFIG_SH64_SR_WATCH)
++#define SR_ENABLE_MMU 0x0000000084000000 /* OR with this */
++#else
++#define SR_ENABLE_MMU 0x0000000080000000 /* OR with this */
++#endif
+
-+#define MSM_DMOV_BASE 0xE0002000
-+#define MSM_DMOV_PHYS 0xA9700000
-+#define MSM_DMOV_SIZE SZ_4K
++#define SR_UNBLOCK_EXC 0xffffffffefffffff /* AND with this */
++#define SR_BLOCK_EXC 0x0000000010000000 /* OR with this */
+
-+#define MSM_UART1_BASE 0xE0003000
-+#define MSM_UART1_PHYS 0xA9A00000
-+#define MSM_UART1_SIZE SZ_4K
++#else /* Not __ASSEMBLY__ syntax */
+
-+#define MSM_UART2_BASE 0xE0004000
-+#define MSM_UART2_PHYS 0xA9B00000
-+#define MSM_UART2_SIZE SZ_4K
++/*
++** Stringify reg. name
++*/
++#define __str(x) #x
+
-+#define MSM_UART3_BASE 0xE0005000
-+#define MSM_UART3_PHYS 0xA9C00000
-+#define MSM_UART3_SIZE SZ_4K
++/* Stringify control register names for use in inline assembly */
++#define __SR __str(SR)
++#define __SSR __str(SSR)
++#define __PSSR __str(PSSR)
++#define __INTEVT __str(INTEVT)
++#define __EXPEVT __str(EXPEVT)
++#define __PEXPEVT __str(PEXPEVT)
++#define __TRA __str(TRA)
++#define __SPC __str(SPC)
++#define __PSPC __str(PSPC)
++#define __RESVEC __str(RESVEC)
++#define __VBR __str(VBR)
++#define __TEA __str(TEA)
++#define __DCR __str(DCR)
++#define __KCR0 __str(KCR0)
++#define __KCR1 __str(KCR1)
++#define __CTC __str(CTC)
++#define __USR __str(USR)
+
-+#define MSM_I2C_BASE 0xE0006000
-+#define MSM_I2C_PHYS 0xA9900000
-+#define MSM_I2C_SIZE SZ_4K
++#endif /* __ASSEMBLY__ */
++#endif /* __ASM_SH_CPU_SH5_REGISTERS_H */
+diff --git a/include/asm-sh/cpu-sh5/rtc.h b/include/asm-sh/cpu-sh5/rtc.h
+new file mode 100644
+index 0000000..12ea0ed
+--- /dev/null
++++ b/include/asm-sh/cpu-sh5/rtc.h
+@@ -0,0 +1,8 @@
++#ifndef __ASM_SH_CPU_SH5_RTC_H
++#define __ASM_SH_CPU_SH5_RTC_H
+
-+#define MSM_GPIO1_BASE 0xE0007000
-+#define MSM_GPIO1_PHYS 0xA9200000
-+#define MSM_GPIO1_SIZE SZ_4K
++#define rtc_reg_size sizeof(u32)
++#define RTC_BIT_INVERTED 0 /* The SH-5 RTC is surprisingly sane! */
++#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
+
-+#define MSM_GPIO2_BASE 0xE0008000
-+#define MSM_GPIO2_PHYS 0xA9300000
-+#define MSM_GPIO2_SIZE SZ_4K
++#endif /* __ASM_SH_CPU_SH5_RTC_H */
+diff --git a/include/asm-sh/cpu-sh5/timer.h b/include/asm-sh/cpu-sh5/timer.h
+new file mode 100644
+index 0000000..88da9b3
+--- /dev/null
++++ b/include/asm-sh/cpu-sh5/timer.h
+@@ -0,0 +1,4 @@
++#ifndef __ASM_SH_CPU_SH5_TIMER_H
++#define __ASM_SH_CPU_SH5_TIMER_H
+
-+#define MSM_HSUSB_BASE 0xE0009000
-+#define MSM_HSUSB_PHYS 0xA0800000
-+#define MSM_HSUSB_SIZE SZ_4K
++#endif /* __ASM_SH_CPU_SH5_TIMER_H */
+diff --git a/include/asm-sh/delay.h b/include/asm-sh/delay.h
+index db599b2..031db84 100644
+--- a/include/asm-sh/delay.h
++++ b/include/asm-sh/delay.h
+@@ -6,7 +6,7 @@
+ *
+ * Delay routines calling functions in arch/sh/lib/delay.c
+ */
+-
+
-+#define MSM_CLK_CTL_BASE 0xE000A000
-+#define MSM_CLK_CTL_PHYS 0xA8600000
-+#define MSM_CLK_CTL_SIZE SZ_4K
+ extern void __bad_udelay(void);
+ extern void __bad_ndelay(void);
+
+@@ -15,13 +15,17 @@ extern void __ndelay(unsigned long nsecs);
+ extern void __const_udelay(unsigned long usecs);
+ extern void __delay(unsigned long loops);
+
++#ifdef CONFIG_SUPERH32
+ #define udelay(n) (__builtin_constant_p(n) ? \
+ ((n) > 20000 ? __bad_udelay() : __const_udelay((n) * 0x10c6ul)) : \
+ __udelay(n))
+
+-
+ #define ndelay(n) (__builtin_constant_p(n) ? \
+ ((n) > 20000 ? __bad_ndelay() : __const_udelay((n) * 5ul)) : \
+ __ndelay(n))
++#else
++extern void udelay(unsigned long usecs);
++extern void ndelay(unsigned long nsecs);
++#endif
+
+ #endif /* __ASM_SH_DELAY_H */
+diff --git a/include/asm-sh/dma-mapping.h b/include/asm-sh/dma-mapping.h
+index fcea067..22cc419 100644
+--- a/include/asm-sh/dma-mapping.h
++++ b/include/asm-sh/dma-mapping.h
+@@ -8,11 +8,6 @@
+
+ extern struct bus_type pci_bus_type;
+
+-/* arch/sh/mm/consistent.c */
+-extern void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *handle);
+-extern void consistent_free(void *vaddr, size_t size);
+-extern void consistent_sync(void *vaddr, size_t size, int direction);
+-
+ #define dma_supported(dev, mask) (1)
+
+ static inline int dma_set_mask(struct device *dev, u64 mask)
+@@ -25,44 +20,19 @@ static inline int dma_set_mask(struct device *dev, u64 mask)
+ return 0;
+ }
+
+-static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+- dma_addr_t *dma_handle, gfp_t flag)
+-{
+- if (sh_mv.mv_consistent_alloc) {
+- void *ret;
++void *dma_alloc_coherent(struct device *dev, size_t size,
++ dma_addr_t *dma_handle, gfp_t flag);
+
+- ret = sh_mv.mv_consistent_alloc(dev, size, dma_handle, flag);
+- if (ret != NULL)
+- return ret;
+- }
+-
+- return consistent_alloc(flag, size, dma_handle);
+-}
+-
+-static inline void dma_free_coherent(struct device *dev, size_t size,
+- void *vaddr, dma_addr_t dma_handle)
+-{
+- if (sh_mv.mv_consistent_free) {
+- int ret;
+-
+- ret = sh_mv.mv_consistent_free(dev, size, vaddr, dma_handle);
+- if (ret == 0)
+- return;
+- }
++void dma_free_coherent(struct device *dev, size_t size,
++ void *vaddr, dma_addr_t dma_handle);
+
+- consistent_free(vaddr, size);
+-}
++void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
++ enum dma_data_direction dir);
+
+ #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
+ #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
+ #define dma_is_consistent(d, h) (1)
+
+-static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+- enum dma_data_direction dir)
+-{
+- consistent_sync(vaddr, size, (int)dir);
+-}
+-
+ static inline dma_addr_t dma_map_single(struct device *dev,
+ void *ptr, size_t size,
+ enum dma_data_direction dir)
+@@ -205,4 +175,18 @@ static inline int dma_mapping_error(dma_addr_t dma_addr)
+ {
+ return dma_addr == 0;
+ }
+
-+#define MSM_PMDH_BASE 0xE000B000
-+#define MSM_PMDH_PHYS 0xAA600000
-+#define MSM_PMDH_SIZE SZ_4K
++#define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
+
-+#define MSM_EMDH_BASE 0xE000C000
-+#define MSM_EMDH_PHYS 0xAA700000
-+#define MSM_EMDH_SIZE SZ_4K
++extern int
++dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
++ dma_addr_t device_addr, size_t size, int flags);
+
-+#define MSM_MDP_BASE 0xE0010000
-+#define MSM_MDP_PHYS 0xAA200000
-+#define MSM_MDP_SIZE 0x000F0000
++extern void
++dma_release_declared_memory(struct device *dev);
+
-+#define MSM_SHARED_RAM_BASE 0xE0100000
-+#define MSM_SHARED_RAM_PHYS 0x01F00000
-+#define MSM_SHARED_RAM_SIZE SZ_1M
++extern void *
++dma_mark_declared_memory_occupied(struct device *dev,
++ dma_addr_t device_addr, size_t size);
+
-+#endif
-diff --git a/include/asm-arm/arch-msm/system.h b/include/asm-arm/arch-msm/system.h
-new file mode 100644
-index 0000000..7c5544b
---- /dev/null
-+++ b/include/asm-arm/arch-msm/system.h
-@@ -0,0 +1,23 @@
-+/* linux/include/asm-arm/arch-msm/system.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
+ #endif /* __ASM_SH_DMA_MAPPING_H */
+diff --git a/include/asm-sh/elf.h b/include/asm-sh/elf.h
+index 12cc4b3..05092da 100644
+--- a/include/asm-sh/elf.h
++++ b/include/asm-sh/elf.h
+@@ -5,7 +5,7 @@
+ #include <asm/ptrace.h>
+ #include <asm/user.h>
+
+-/* SH relocation types */
++/* SH (particularly SHcompact) relocation types */
+ #define R_SH_NONE 0
+ #define R_SH_DIR32 1
+ #define R_SH_REL32 2
+@@ -43,6 +43,11 @@
+ #define R_SH_RELATIVE 165
+ #define R_SH_GOTOFF 166
+ #define R_SH_GOTPC 167
++/* SHmedia relocs */
++#define R_SH_IMM_LOW16 246
++#define R_SH_IMM_LOW16_PCREL 247
++#define R_SH_IMM_MEDLOW16 248
++#define R_SH_IMM_MEDLOW16_PCREL 249
+ /* Keep this the last entry. */
+ #define R_SH_NUM 256
+
+@@ -58,11 +63,6 @@ typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+ typedef struct user_fpu_struct elf_fpregset_t;
+
+ /*
+- * This is used to ensure we don't load something for the wrong architecture.
+- */
+-#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+-
+-/*
+ * These are used to set parameters in the core dumps.
+ */
+ #define ELF_CLASS ELFCLASS32
+@@ -73,6 +73,12 @@ typedef struct user_fpu_struct elf_fpregset_t;
+ #endif
+ #define ELF_ARCH EM_SH
+
++#ifdef __KERNEL__
++/*
++ * This is used to ensure we don't load something for the wrong architecture.
+ */
++#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+
-+#include <asm/hardware.h>
-+
-+void arch_idle(void);
-+
-+static inline void arch_reset(char mode)
-+{
-+ for (;;) ; /* depends on IPC w/ other core */
-+}
-diff --git a/include/asm-arm/arch-msm/timex.h b/include/asm-arm/arch-msm/timex.h
-new file mode 100644
-index 0000000..154b23f
---- /dev/null
-+++ b/include/asm-arm/arch-msm/timex.h
-@@ -0,0 +1,20 @@
-+/* linux/include/asm-arm/arch-msm/timex.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
+ #define USE_ELF_CORE_DUMP
+ #define ELF_EXEC_PAGESIZE PAGE_SIZE
+
+@@ -83,7 +89,6 @@ typedef struct user_fpu_struct elf_fpregset_t;
+
+ #define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+
+-
+ #define ELF_CORE_COPY_REGS(_dest,_regs) \
+ memcpy((char *) &_dest, (char *) _regs, \
+ sizeof(struct pt_regs));
+@@ -101,16 +106,38 @@ typedef struct user_fpu_struct elf_fpregset_t;
+ For the moment, we have only optimizations for the Intel generations,
+ but that could change... */
+
+-#define ELF_PLATFORM (NULL)
++#define ELF_PLATFORM (utsname()->machine)
+
++#ifdef __SH5__
++#define ELF_PLAT_INIT(_r, load_addr) \
++ do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
++ _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
++ _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
++ _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; _r->regs[15]=0; \
++ _r->regs[16]=0; _r->regs[17]=0; _r->regs[18]=0; _r->regs[19]=0; \
++ _r->regs[20]=0; _r->regs[21]=0; _r->regs[22]=0; _r->regs[23]=0; \
++ _r->regs[24]=0; _r->regs[25]=0; _r->regs[26]=0; _r->regs[27]=0; \
++ _r->regs[28]=0; _r->regs[29]=0; _r->regs[30]=0; _r->regs[31]=0; \
++ _r->regs[32]=0; _r->regs[33]=0; _r->regs[34]=0; _r->regs[35]=0; \
++ _r->regs[36]=0; _r->regs[37]=0; _r->regs[38]=0; _r->regs[39]=0; \
++ _r->regs[40]=0; _r->regs[41]=0; _r->regs[42]=0; _r->regs[43]=0; \
++ _r->regs[44]=0; _r->regs[45]=0; _r->regs[46]=0; _r->regs[47]=0; \
++ _r->regs[48]=0; _r->regs[49]=0; _r->regs[50]=0; _r->regs[51]=0; \
++ _r->regs[52]=0; _r->regs[53]=0; _r->regs[54]=0; _r->regs[55]=0; \
++ _r->regs[56]=0; _r->regs[57]=0; _r->regs[58]=0; _r->regs[59]=0; \
++ _r->regs[60]=0; _r->regs[61]=0; _r->regs[62]=0; \
++ _r->tregs[0]=0; _r->tregs[1]=0; _r->tregs[2]=0; _r->tregs[3]=0; \
++ _r->tregs[4]=0; _r->tregs[5]=0; _r->tregs[6]=0; _r->tregs[7]=0; \
++ _r->sr = SR_FD | SR_MMU; } while (0)
++#else
+ #define ELF_PLAT_INIT(_r, load_addr) \
+ do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
+ _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
+ _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
+ _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; \
+ _r->sr = SR_FD; } while (0)
++#endif
+
+-#ifdef __KERNEL__
+ #define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX_32BIT)
+ struct task_struct;
+ extern int dump_task_regs (struct task_struct *, elf_gregset_t *);
+@@ -118,7 +145,6 @@ extern int dump_task_fpu (struct task_struct *, elf_fpregset_t *);
+
+ #define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)
+ #define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) dump_task_fpu(tsk, elf_fpregs)
+-#endif
+
+ #ifdef CONFIG_VSYSCALL
+ /* vDSO has arch_setup_additional_pages */
+@@ -133,12 +159,35 @@ extern void __kernel_vsyscall;
+ #define VDSO_BASE ((unsigned long)current->mm->context.vdso)
+ #define VDSO_SYM(x) (VDSO_BASE + (unsigned long)(x))
+
++#define VSYSCALL_AUX_ENT \
++ if (vdso_enabled) \
++ NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE);
++#else
++#define VSYSCALL_AUX_ENT
++#endif /* CONFIG_VSYSCALL */
+
-+#ifndef __ASM_ARCH_MSM_TIMEX_H
++#ifdef CONFIG_SH_FPU
++#define FPU_AUX_ENT NEW_AUX_ENT(AT_FPUCW, FPSCR_INIT)
++#else
++#define FPU_AUX_ENT
++#endif
+
-+#define CLOCK_TICK_RATE 1000000
++extern int l1i_cache_shape, l1d_cache_shape, l2_cache_shape;
+
+ /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
+ #define ARCH_DLINFO \
+ do { \
+- if (vdso_enabled) \
+- NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE); \
++ /* Optional FPU initialization */ \
++ FPU_AUX_ENT; \
++ \
++ /* Optional vsyscall entry */ \
++ VSYSCALL_AUX_ENT; \
++ \
++ /* Cache desc */ \
++ NEW_AUX_ENT(AT_L1I_CACHESHAPE, l1i_cache_shape); \
++ NEW_AUX_ENT(AT_L1D_CACHESHAPE, l1d_cache_shape); \
++ NEW_AUX_ENT(AT_L2_CACHESHAPE, l2_cache_shape); \
+ } while (0)
+-#endif /* CONFIG_VSYSCALL */
+
++#endif /* __KERNEL__ */
+ #endif /* __ASM_SH_ELF_H */
+diff --git a/include/asm-sh/fixmap.h b/include/asm-sh/fixmap.h
+index 8a56617..721fcc4 100644
+--- a/include/asm-sh/fixmap.h
++++ b/include/asm-sh/fixmap.h
+@@ -49,6 +49,7 @@ enum fixed_addresses {
+ #define FIX_N_COLOURS 16
+ FIX_CMAP_BEGIN,
+ FIX_CMAP_END = FIX_CMAP_BEGIN + FIX_N_COLOURS,
++ FIX_UNCACHED,
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+@@ -73,7 +74,11 @@ extern void __set_fixmap(enum fixed_addresses idx,
+ * the start of the fixmap, and leave one page empty
+ * at the top of mem..
+ */
++#ifdef CONFIG_SUPERH32
+ #define FIXADDR_TOP (P4SEG - PAGE_SIZE)
++#else
++#define FIXADDR_TOP (0xff000000 - PAGE_SIZE)
+#endif
-diff --git a/include/asm-arm/arch-msm/uncompress.h b/include/asm-arm/arch-msm/uncompress.h
+ #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+ #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+
+diff --git a/include/asm-sh/flat.h b/include/asm-sh/flat.h
+index dc4f595..0cc8002 100644
+--- a/include/asm-sh/flat.h
++++ b/include/asm-sh/flat.h
+@@ -19,6 +19,6 @@
+ #define flat_get_addr_from_rp(rp, relval, flags, p) get_unaligned(rp)
+ #define flat_put_addr_at_rp(rp, val, relval) put_unaligned(val,rp)
+ #define flat_get_relocate_addr(rel) (rel)
+-#define flat_set_persistent(relval, p) 0
++#define flat_set_persistent(relval, p) ({ (void)p; 0; })
+
+ #endif /* __ASM_SH_FLAT_H */
+diff --git a/include/asm-sh/fpu.h b/include/asm-sh/fpu.h
new file mode 100644
-index 0000000..e91ed78
+index 0000000..f842988
--- /dev/null
-+++ b/include/asm-arm/arch-msm/uncompress.h
-@@ -0,0 +1,36 @@
-+/* linux/include/asm-arm/arch-msm/uncompress.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
-+
-+#ifndef __ASM_ARCH_MSM_UNCOMPRESS_H
++++ b/include/asm-sh/fpu.h
+@@ -0,0 +1,46 @@
++#ifndef __ASM_SH_FPU_H
++#define __ASM_SH_FPU_H
+
-+#include "hardware.h"
++#define SR_FD 0x00008000
+
-+static void putc(int c)
-+{
-+}
++#ifndef __ASSEMBLY__
++#include <asm/ptrace.h>
+
-+static inline void flush(void)
++#ifdef CONFIG_SH_FPU
++static inline void release_fpu(struct pt_regs *regs)
+{
++ regs->sr |= SR_FD;
+}
+
-+static inline void arch_decomp_setup(void)
++static inline void grab_fpu(struct pt_regs *regs)
+{
++ regs->sr &= ~SR_FD;
+}
+
-+static inline void arch_decomp_wdog(void)
-+{
-+}
++struct task_struct;
+
++extern void save_fpu(struct task_struct *__tsk, struct pt_regs *regs);
++#else
++#define release_fpu(regs) do { } while (0)
++#define grab_fpu(regs) do { } while (0)
++#define save_fpu(tsk, regs) do { } while (0)
+#endif
-diff --git a/include/asm-arm/arch-msm/vmalloc.h b/include/asm-arm/arch-msm/vmalloc.h
-new file mode 100644
-index 0000000..60f8d91
---- /dev/null
-+++ b/include/asm-arm/arch-msm/vmalloc.h
-@@ -0,0 +1,22 @@
-+/* linux/include/asm-arm/arch-msm/vmalloc.h
-+ *
-+ * Copyright (C) 2007 Google, Inc.
-+ *
-+ * This software is licensed under the terms of the GNU General Public
-+ * License version 2, as published by the Free Software Foundation, and
-+ * may be copied, distributed, and modified under those terms.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ */
+
-+#ifndef __ASM_ARCH_MSM_VMALLOC_H
-+#define __ASM_ARCH_MSM_VMALLOC_H
++extern int do_fpu_inst(unsigned short, struct pt_regs *);
+
-+#define VMALLOC_END (PAGE_OFFSET + 0x10000000)
++#define unlazy_fpu(tsk, regs) do { \
++ if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
++ save_fpu(tsk, regs); \
++ } \
++} while (0)
+
-+#endif
++#define clear_fpu(tsk, regs) do { \
++ if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
++ clear_tsk_thread_flag(tsk, TIF_USEDFPU); \
++ release_fpu(regs); \
++ } \
++} while (0)
+
-diff --git a/include/asm-arm/arch-omap/tps65010.h b/include/asm-arm/arch-omap/tps65010.h
++#endif /* __ASSEMBLY__ */
++
++#endif /* __ASM_SH_FPU_H */
+diff --git a/include/asm-sh/hd64461.h b/include/asm-sh/hd64461.h
+index 342ca55..8c1353b 100644
+--- a/include/asm-sh/hd64461.h
++++ b/include/asm-sh/hd64461.h
+@@ -46,10 +46,10 @@
+ /* CPU Data Bus Control Register */
+ #define HD64461_SCPUCR (CONFIG_HD64461_IOBASE + 0x04)
+
+-/* Base Adress Register */
++/* Base Address Register */
+ #define HD64461_LCDCBAR (CONFIG_HD64461_IOBASE + 0x1000)
+
+-/* Line increment adress */
++/* Line increment address */
+ #define HD64461_LCDCLOR (CONFIG_HD64461_IOBASE + 0x1002)
+
+ /* Controls LCD controller */
+@@ -80,9 +80,9 @@
+ #define HD64461_LDR3 (CONFIG_HD64461_IOBASE + 0x101e)
+
+ /* Palette Registers */
+-#define HD64461_CPTWAR (CONFIG_HD64461_IOBASE + 0x1030) /* Color Palette Write Adress Register */
++#define HD64461_CPTWAR (CONFIG_HD64461_IOBASE + 0x1030) /* Color Palette Write Address Register */
+ #define HD64461_CPTWDR (CONFIG_HD64461_IOBASE + 0x1032) /* Color Palette Write Data Register */
+-#define HD64461_CPTRAR (CONFIG_HD64461_IOBASE + 0x1034) /* Color Palette Read Adress Register */
++#define HD64461_CPTRAR (CONFIG_HD64461_IOBASE + 0x1034) /* Color Palette Read Address Register */
+ #define HD64461_CPTRDR (CONFIG_HD64461_IOBASE + 0x1036) /* Color Palette Read Data Register */
+
+ #define HD64461_GRDOR (CONFIG_HD64461_IOBASE + 0x1040) /* Display Resolution Offset Register */
+@@ -97,8 +97,8 @@
+ #define HD64461_GRCFGR_COLORDEPTH8 0x01 /* Sets Colordepth 8 for Accelerator */
+
+ /* Line Drawing Registers */
+-#define HD64461_LNSARH (CONFIG_HD64461_IOBASE + 0x1046) /* Line Start Adress Register (H) */
+-#define HD64461_LNSARL (CONFIG_HD64461_IOBASE + 0x1048) /* Line Start Adress Register (L) */
++#define HD64461_LNSARH (CONFIG_HD64461_IOBASE + 0x1046) /* Line Start Address Register (H) */
++#define HD64461_LNSARL (CONFIG_HD64461_IOBASE + 0x1048) /* Line Start Address Register (L) */
+ #define HD64461_LNAXLR (CONFIG_HD64461_IOBASE + 0x104a) /* Axis Pixel Length Register */
+ #define HD64461_LNDGR (CONFIG_HD64461_IOBASE + 0x104c) /* Diagonal Register */
+ #define HD64461_LNAXR (CONFIG_HD64461_IOBASE + 0x104e) /* Axial Register */
+@@ -106,16 +106,16 @@
+ #define HD64461_LNMDR (CONFIG_HD64461_IOBASE + 0x1052) /* Line Mode Register */
+
+ /* BitBLT Registers */
+-#define HD64461_BBTSSARH (CONFIG_HD64461_IOBASE + 0x1054) /* Source Start Adress Register (H) */
+-#define HD64461_BBTSSARL (CONFIG_HD64461_IOBASE + 0x1056) /* Source Start Adress Register (L) */
+-#define HD64461_BBTDSARH (CONFIG_HD64461_IOBASE + 0x1058) /* Destination Start Adress Register (H) */
+-#define HD64461_BBTDSARL (CONFIG_HD64461_IOBASE + 0x105a) /* Destination Start Adress Register (L) */
++#define HD64461_BBTSSARH (CONFIG_HD64461_IOBASE + 0x1054) /* Source Start Address Register (H) */
++#define HD64461_BBTSSARL (CONFIG_HD64461_IOBASE + 0x1056) /* Source Start Address Register (L) */
++#define HD64461_BBTDSARH (CONFIG_HD64461_IOBASE + 0x1058) /* Destination Start Address Register (H) */
++#define HD64461_BBTDSARL (CONFIG_HD64461_IOBASE + 0x105a) /* Destination Start Address Register (L) */
+ #define HD64461_BBTDWR (CONFIG_HD64461_IOBASE + 0x105c) /* Destination Block Width Register */
+ #define HD64461_BBTDHR (CONFIG_HD64461_IOBASE + 0x105e) /* Destination Block Height Register */
+-#define HD64461_BBTPARH (CONFIG_HD64461_IOBASE + 0x1060) /* Pattern Start Adress Register (H) */
+-#define HD64461_BBTPARL (CONFIG_HD64461_IOBASE + 0x1062) /* Pattern Start Adress Register (L) */
+-#define HD64461_BBTMARH (CONFIG_HD64461_IOBASE + 0x1064) /* Mask Start Adress Register (H) */
+-#define HD64461_BBTMARL (CONFIG_HD64461_IOBASE + 0x1066) /* Mask Start Adress Register (L) */
++#define HD64461_BBTPARH (CONFIG_HD64461_IOBASE + 0x1060) /* Pattern Start Address Register (H) */
++#define HD64461_BBTPARL (CONFIG_HD64461_IOBASE + 0x1062) /* Pattern Start Address Register (L) */
++#define HD64461_BBTMARH (CONFIG_HD64461_IOBASE + 0x1064) /* Mask Start Address Register (H) */
++#define HD64461_BBTMARL (CONFIG_HD64461_IOBASE + 0x1066) /* Mask Start Address Register (L) */
+ #define HD64461_BBTROPR (CONFIG_HD64461_IOBASE + 0x1068) /* ROP Register */
+ #define HD64461_BBTMDR (CONFIG_HD64461_IOBASE + 0x106a) /* BitBLT Mode Register */
+
+diff --git a/include/asm-sh/hs7751rvoip.h b/include/asm-sh/hs7751rvoip.h
deleted file mode 100644
-index b9aa2b3..0000000
---- a/include/asm-arm/arch-omap/tps65010.h
+index c4cff9d..0000000
+--- a/include/asm-sh/hs7751rvoip.h
+++ /dev/null
-@@ -1,156 +0,0 @@
--/* linux/include/asm-arm/arch-omap/tps65010.h
-- *
-- * Functions to access TPS65010 power management device.
-- *
-- * Copyright (C) 2004 Dirk Behme <dirk.behme at de.bosch.com>
-- *
-- * This program is free software; you can redistribute it and/or modify it
-- * under the terms of the GNU General Public License as published by the
-- * Free Software Foundation; either version 2 of the License, or (at your
-- * option) any later version.
+@@ -1,54 +0,0 @@
+-#ifndef __ASM_SH_RENESAS_HS7751RVOIP_H
+-#define __ASM_SH_RENESAS_HS7751RVOIP_H
+-
+-/*
+- * linux/include/asm-sh/hs7751rvoip/hs7751rvoip.h
- *
-- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
-- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-- * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
-- * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
-- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
-- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
-- * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-- * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
-- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+- * Copyright (C) 2000 Atom Create Engineering Co., Ltd.
- *
-- * You should have received a copy of the GNU General Public License along
-- * with this program; if not, write to the Free Software Foundation, Inc.,
-- * 675 Mass Ave, Cambridge, MA 02139, USA.
+- * Renesas Technology Sales HS7751RVoIP support
- */
-
--#ifndef __ASM_ARCH_TPS65010_H
--#define __ASM_ARCH_TPS65010_H
+-/* Box specific addresses. */
-
--/*
-- * ----------------------------------------------------------------------------
-- * Registers, all 8 bits
-- * ----------------------------------------------------------------------------
-- */
+-#define PA_BCR 0xa4000000 /* FPGA */
+-#define PA_SLICCNTR1 0xa4000006 /* SLIC PIO Control 1 */
+-#define PA_SLICCNTR2 0xa4000008 /* SLIC PIO Control 2 */
+-#define PA_DMACNTR 0xa400000a /* USB DMA Control */
+-#define PA_INPORTR 0xa400000c /* Input Port Register */
+-#define PA_OUTPORTR 0xa400000e /* Output Port Reguster */
+-#define PA_VERREG 0xa4000014 /* FPGA Version Register */
-
--#define TPS_CHGSTATUS 0x01
--# define TPS_CHG_USB (1 << 7)
--# define TPS_CHG_AC (1 << 6)
--# define TPS_CHG_THERM (1 << 5)
--# define TPS_CHG_TERM (1 << 4)
--# define TPS_CHG_TAPER_TMO (1 << 3)
--# define TPS_CHG_CHG_TMO (1 << 2)
--# define TPS_CHG_PRECHG_TMO (1 << 1)
--# define TPS_CHG_TEMP_ERR (1 << 0)
--#define TPS_REGSTATUS 0x02
--# define TPS_REG_ONOFF (1 << 7)
--# define TPS_REG_COVER (1 << 6)
--# define TPS_REG_UVLO (1 << 5)
--# define TPS_REG_NO_CHG (1 << 4) /* tps65013 */
--# define TPS_REG_PG_LD02 (1 << 3)
--# define TPS_REG_PG_LD01 (1 << 2)
--# define TPS_REG_PG_MAIN (1 << 1)
--# define TPS_REG_PG_CORE (1 << 0)
--#define TPS_MASK1 0x03
--#define TPS_MASK2 0x04
--#define TPS_ACKINT1 0x05
--#define TPS_ACKINT2 0x06
--#define TPS_CHGCONFIG 0x07
--# define TPS_CHARGE_POR (1 << 7) /* 65010/65012 */
--# define TPS65013_AUA (1 << 7) /* 65011/65013 */
--# define TPS_CHARGE_RESET (1 << 6)
--# define TPS_CHARGE_FAST (1 << 5)
--# define TPS_CHARGE_CURRENT (3 << 3)
--# define TPS_VBUS_500MA (1 << 2)
--# define TPS_VBUS_CHARGING (1 << 1)
--# define TPS_CHARGE_ENABLE (1 << 0)
--#define TPS_LED1_ON 0x08
--#define TPS_LED1_PER 0x09
--#define TPS_LED2_ON 0x0a
--#define TPS_LED2_PER 0x0b
--#define TPS_VDCDC1 0x0c
--# define TPS_ENABLE_LP (1 << 3)
--#define TPS_VDCDC2 0x0d
--#define TPS_VREGS1 0x0e
--# define TPS_LDO2_ENABLE (1 << 7)
--# define TPS_LDO2_OFF (1 << 6)
--# define TPS_VLDO2_3_0V (3 << 4)
--# define TPS_VLDO2_2_75V (2 << 4)
--# define TPS_VLDO2_2_5V (1 << 4)
--# define TPS_VLDO2_1_8V (0 << 4)
--# define TPS_LDO1_ENABLE (1 << 3)
--# define TPS_LDO1_OFF (1 << 2)
--# define TPS_VLDO1_3_0V (3 << 0)
--# define TPS_VLDO1_2_75V (2 << 0)
--# define TPS_VLDO1_2_5V (1 << 0)
--# define TPS_VLDO1_ADJ (0 << 0)
--#define TPS_MASK3 0x0f
--#define TPS_DEFGPIO 0x10
+-#define PA_IDE_OFFSET 0x1f0 /* CF IDE Offset */
-
--/*
-- * ----------------------------------------------------------------------------
-- * Macros used by exported functions
-- * ----------------------------------------------------------------------------
-- */
+-#define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
+-#define IRLCNTR2 (PA_BCR + 2) /* Interrupt Control Register2 */
+-#define IRLCNTR3 (PA_BCR + 4) /* Interrupt Control Register3 */
+-#define IRLCNTR4 (PA_BCR + 16) /* Interrupt Control Register4 */
+-#define IRLCNTR5 (PA_BCR + 18) /* Interrupt Control Register5 */
-
--#define LED1 1
--#define LED2 2
--#define OFF 0
--#define ON 1
--#define BLINK 2
--#define GPIO1 1
--#define GPIO2 2
--#define GPIO3 3
--#define GPIO4 4
--#define LOW 0
--#define HIGH 1
+-#define IRQ_PCIETH 6 /* PCI Ethernet IRQ */
+-#define IRQ_PCIHUB 7 /* PCI Ethernet Hub IRQ */
+-#define IRQ_USBCOM 8 /* USB Comunication IRQ */
+-#define IRQ_USBCON 9 /* USB Connect IRQ */
+-#define IRQ_USBDMA 10 /* USB DMA IRQ */
+-#define IRQ_CFCARD 11 /* CF Card IRQ */
+-#define IRQ_PCMCIA 12 /* PCMCIA IRQ */
+-#define IRQ_PCISLOT 13 /* PCI Slot #1 IRQ */
+-#define IRQ_ONHOOK1 0 /* ON HOOK1 IRQ */
+-#define IRQ_OFFHOOK1 1 /* OFF HOOK1 IRQ */
+-#define IRQ_ONHOOK2 2 /* ON HOOK2 IRQ */
+-#define IRQ_OFFHOOK2 3 /* OFF HOOK2 IRQ */
+-#define IRQ_RINGING 4 /* Ringing IRQ */
+-#define IRQ_CODEC 5 /* CODEC IRQ */
-
+-#define __IO_PREFIX hs7751rvoip
+-#include <asm/io_generic.h>
+-
+-/* arch/sh/boards/renesas/hs7751rvoip/irq.c */
+-void init_hs7751rvoip_IRQ(void);
+-
+-/* arch/sh/boards/renesas/hs7751rvoip/io.c */
+-void *hs7751rvoip_ioremap(unsigned long, unsigned long);
+-
+-#endif /* __ASM_SH_RENESAS_HS7751RVOIP */
+diff --git a/include/asm-sh/hw_irq.h b/include/asm-sh/hw_irq.h
+index cb0b6c9..c958fda 100644
+--- a/include/asm-sh/hw_irq.h
++++ b/include/asm-sh/hw_irq.h
+@@ -33,13 +33,6 @@ struct intc_vect {
+ #define INTC_VECT(enum_id, vect) { enum_id, vect }
+ #define INTC_IRQ(enum_id, irq) INTC_VECT(enum_id, irq2evt(irq))
+
+-struct intc_prio {
+- intc_enum enum_id;
+- unsigned char priority;
+-};
+-
+-#define INTC_PRIO(enum_id, prio) { enum_id, prio }
+-
+ struct intc_group {
+ intc_enum enum_id;
+ intc_enum enum_ids[32];
+@@ -79,8 +72,6 @@ struct intc_desc {
+ unsigned int nr_vectors;
+ struct intc_group *groups;
+ unsigned int nr_groups;
+- struct intc_prio *priorities;
+- unsigned int nr_priorities;
+ struct intc_mask_reg *mask_regs;
+ unsigned int nr_mask_regs;
+ struct intc_prio_reg *prio_regs;
+@@ -92,10 +83,9 @@ struct intc_desc {
+
+ #define _INTC_ARRAY(a) a, sizeof(a)/sizeof(*a)
+ #define DECLARE_INTC_DESC(symbol, chipname, vectors, groups, \
+- priorities, mask_regs, prio_regs, sense_regs) \
++ mask_regs, prio_regs, sense_regs) \
+ struct intc_desc symbol __initdata = { \
+ _INTC_ARRAY(vectors), _INTC_ARRAY(groups), \
+- _INTC_ARRAY(priorities), \
+ _INTC_ARRAY(mask_regs), _INTC_ARRAY(prio_regs), \
+ _INTC_ARRAY(sense_regs), \
+ chipname, \
+diff --git a/include/asm-sh/io.h b/include/asm-sh/io.h
+index 6ed34d8..94900c0 100644
+--- a/include/asm-sh/io.h
++++ b/include/asm-sh/io.h
+@@ -191,6 +191,8 @@ __BUILD_MEMORY_STRING(w, u16)
+
+ #define mmiowb() wmb() /* synco on SH-4A, otherwise a nop */
+
++#define IO_SPACE_LIMIT 0xffffffff
++
+ /*
+ * This function provides a method for the generic case where a board-specific
+ * ioport_map simply needs to return the port + some arbitrary port base.
+@@ -226,6 +228,11 @@ static inline unsigned int ctrl_inl(unsigned long addr)
+ return *(volatile unsigned long*)addr;
+ }
+
++static inline unsigned long long ctrl_inq(unsigned long addr)
++{
++ return *(volatile unsigned long long*)addr;
++}
++
+ static inline void ctrl_outb(unsigned char b, unsigned long addr)
+ {
+ *(volatile unsigned char*)addr = b;
+@@ -241,49 +248,52 @@ static inline void ctrl_outl(unsigned int b, unsigned long addr)
+ *(volatile unsigned long*)addr = b;
+ }
+
++static inline void ctrl_outq(unsigned long long b, unsigned long addr)
++{
++ *(volatile unsigned long long*)addr = b;
++}
++
+ static inline void ctrl_delay(void)
+ {
++#ifdef P2SEG
+ ctrl_inw(P2SEG);
++#endif
+ }
+
+-#define IO_SPACE_LIMIT 0xffffffff
++/* Quad-word real-mode I/O, don't ask.. */
++unsigned long long peek_real_address_q(unsigned long long addr);
++unsigned long long poke_real_address_q(unsigned long long addr,
++ unsigned long long val);
+
+-#ifdef CONFIG_MMU
-/*
-- * ----------------------------------------------------------------------------
-- * Exported functions
-- * ----------------------------------------------------------------------------
+- * Change virtual addresses to physical addresses and vv.
+- * These are trivial on the 1:1 Linux/SuperH mapping
- */
+-static inline unsigned long virt_to_phys(volatile void *address)
+-{
+- return PHYSADDR(address);
+-}
++/* arch/sh/mm/ioremap_64.c */
++unsigned long onchip_remap(unsigned long addr, unsigned long size,
++ const char *name);
++extern void onchip_unmap(unsigned long vaddr);
+
+-static inline void *phys_to_virt(unsigned long address)
+-{
+- return (void *)P1SEGADDR(address);
+-}
+-#else
+-#define phys_to_virt(address) ((void *)(address))
++#if !defined(CONFIG_MMU)
+ #define virt_to_phys(address) ((unsigned long)(address))
++#define phys_to_virt(address) ((void *)(address))
++#else
++#define virt_to_phys(address) (__pa(address))
++#define phys_to_virt(address) (__va(address))
+ #endif
+
+ /*
+- * readX/writeX() are used to access memory mapped devices. On some
+- * architectures the memory mapped IO stuff needs to be accessed
+- * differently. On the x86 architecture, we just read/write the
+- * memory location directly.
++ * On 32-bit SH, we traditionally have the whole physical address space
++ * mapped at all times (as MIPS does), so "ioremap()" and "iounmap()" do
++ * not need to do anything but place the address in the proper segment.
++ * This is true for P1 and P2 addresses, as well as some P3 ones.
++ * However, most of the P3 addresses and newer cores using extended
++ * addressing need to map through page tables, so the ioremap()
++ * implementation becomes a bit more complicated.
+ *
+- * On SH, we traditionally have the whole physical address space mapped
+- * at all times (as MIPS does), so "ioremap()" and "iounmap()" do not
+- * need to do anything but place the address in the proper segment. This
+- * is true for P1 and P2 addresses, as well as some P3 ones. However,
+- * most of the P3 addresses and newer cores using extended addressing
+- * need to map through page tables, so the ioremap() implementation
+- * becomes a bit more complicated. See arch/sh/mm/ioremap.c for
+- * additional notes on this.
++ * See arch/sh/mm/ioremap.c for additional notes on this.
+ *
+ * We cheat a bit and always return uncachable areas until we've fixed
+ * the drivers to handle caching properly.
++ *
++ * On the SH-5 the concept of segmentation in the 1:1 PXSEG sense simply
++ * doesn't exist, so everything must go through page tables.
+ */
+ #ifdef CONFIG_MMU
+ void __iomem *__ioremap(unsigned long offset, unsigned long size,
+@@ -297,6 +307,7 @@ void __iounmap(void __iomem *addr);
+ static inline void __iomem *
+ __ioremap_mode(unsigned long offset, unsigned long size, unsigned long flags)
+ {
++#ifdef CONFIG_SUPERH32
+ unsigned long last_addr = offset + size - 1;
+
+ /*
+@@ -311,6 +322,7 @@ __ioremap_mode(unsigned long offset, unsigned long size, unsigned long flags)
+
+ return (void __iomem *)P2SEGADDR(offset);
+ }
++#endif
+
+ return __ioremap(offset, size, flags);
+ }
+diff --git a/include/asm-sh/irqflags.h b/include/asm-sh/irqflags.h
+index 9dedc1b..46e71da 100644
+--- a/include/asm-sh/irqflags.h
++++ b/include/asm-sh/irqflags.h
+@@ -1,81 +1,11 @@
+ #ifndef __ASM_SH_IRQFLAGS_H
+ #define __ASM_SH_IRQFLAGS_H
+
+-static inline void raw_local_irq_enable(void)
+-{
+- unsigned long __dummy0, __dummy1;
-
--/* Draw from VBUS:
-- * 0 mA -- DON'T DRAW (might supply power instead)
-- * 100 mA -- usb unit load (slowest charge rate)
-- * 500 mA -- usb high power (fast battery charge)
-- */
--extern int tps65010_set_vbus_draw(unsigned mA);
+- __asm__ __volatile__ (
+- "stc sr, %0\n\t"
+- "and %1, %0\n\t"
+-#ifdef CONFIG_CPU_HAS_SR_RB
+- "stc r6_bank, %1\n\t"
+- "or %1, %0\n\t"
++#ifdef CONFIG_SUPERH32
++#include "irqflags_32.h"
++#else
++#include "irqflags_64.h"
+ #endif
+- "ldc %0, sr\n\t"
+- : "=&r" (__dummy0), "=r" (__dummy1)
+- : "1" (~0x000000f0)
+- : "memory"
+- );
+-}
-
--/* tps65010_set_gpio_out_value parameter:
-- * gpio: GPIO1, GPIO2, GPIO3 or GPIO4
-- * value: LOW or HIGH
-- */
--extern int tps65010_set_gpio_out_value(unsigned gpio, unsigned value);
+-static inline void raw_local_irq_disable(void)
+-{
+- unsigned long flags;
-
--/* tps65010_set_led parameter:
-- * led: LED1 or LED2
-- * mode: ON, OFF or BLINK
-- */
--extern int tps65010_set_led(unsigned led, unsigned mode);
+- __asm__ __volatile__ (
+- "stc sr, %0\n\t"
+- "or #0xf0, %0\n\t"
+- "ldc %0, sr\n\t"
+- : "=&z" (flags)
+- : /* no inputs */
+- : "memory"
+- );
+-}
-
--/* tps65010_set_vib parameter:
-- * value: ON or OFF
-- */
--extern int tps65010_set_vib(unsigned value);
+-static inline void set_bl_bit(void)
+-{
+- unsigned long __dummy0, __dummy1;
-
--/* tps65010_set_low_pwr parameter:
-- * mode: ON or OFF
-- */
--extern int tps65010_set_low_pwr(unsigned mode);
+- __asm__ __volatile__ (
+- "stc sr, %0\n\t"
+- "or %2, %0\n\t"
+- "and %3, %0\n\t"
+- "ldc %0, sr\n\t"
+- : "=&r" (__dummy0), "=r" (__dummy1)
+- : "r" (0x10000000), "r" (0xffffff0f)
+- : "memory"
+- );
+-}
-
--/* tps65010_config_vregs1 parameter:
-- * value to be written to VREGS1 register
-- * Note: The complete register is written, set all bits you need
-- */
--extern int tps65010_config_vregs1(unsigned value);
+-static inline void clear_bl_bit(void)
+-{
+- unsigned long __dummy0, __dummy1;
-
--/* tps65013_set_low_pwr parameter:
-- * mode: ON or OFF
-- */
--extern int tps65013_set_low_pwr(unsigned mode);
+- __asm__ __volatile__ (
+- "stc sr, %0\n\t"
+- "and %2, %0\n\t"
+- "ldc %0, sr\n\t"
+- : "=&r" (__dummy0), "=r" (__dummy1)
+- : "1" (~0x10000000)
+- : "memory"
+- );
+-}
-
--#endif /* __ASM_ARCH_TPS65010_H */
+-static inline unsigned long __raw_local_save_flags(void)
+-{
+- unsigned long flags;
-
-diff --git a/include/asm-arm/arch-orion/debug-macro.S b/include/asm-arm/arch-orion/debug-macro.S
-new file mode 100644
-index 0000000..e2a8064
---- /dev/null
-+++ b/include/asm-arm/arch-orion/debug-macro.S
-@@ -0,0 +1,17 @@
-+/*
-+ * linux/include/asm-arm/arch-orion/debug-macro.S
-+ *
-+ * Debugging macro include header
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+*/
-+
-+ .macro addruart,rx
-+ mov \rx, #0xf1000000
-+ orr \rx, \rx, #0x00012000
-+ .endm
-+
-+#define UART_SHIFT 2
-+#include <asm/hardware/debug-8250.S>
-diff --git a/include/asm-arm/arch-orion/dma.h b/include/asm-arm/arch-orion/dma.h
-new file mode 100644
-index 0000000..40a8c17
---- /dev/null
-+++ b/include/asm-arm/arch-orion/dma.h
-@@ -0,0 +1 @@
-+/* empty */
-diff --git a/include/asm-arm/arch-orion/entry-macro.S b/include/asm-arm/arch-orion/entry-macro.S
-new file mode 100644
-index 0000000..b76075a
---- /dev/null
-+++ b/include/asm-arm/arch-orion/entry-macro.S
-@@ -0,0 +1,31 @@
-+/*
-+ * include/asm-arm/arch-orion/entry-macro.S
-+ *
-+ * Low-level IRQ helper macros for Orion platforms
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
-+
-+#include <asm/arch/orion.h>
-+
-+ .macro disable_fiq
-+ .endm
-+
-+ .macro arch_ret_to_user, tmp1, tmp2
-+ .endm
-+
-+ .macro get_irqnr_preamble, base, tmp
-+ ldr \base, =MAIN_IRQ_CAUSE
-+ .endm
-+
-+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
-+ ldr \irqstat, [\base, #0] @ main cause
-+ ldr \tmp, [\base, #(MAIN_IRQ_MASK - MAIN_IRQ_CAUSE)] @ main mask
-+ mov \irqnr, #0 @ default irqnr
-+ @ find cause bits that are unmasked
-+ ands \irqstat, \irqstat, \tmp @ clear Z flag if any
-+ clzne \irqnr, \irqstat @ calc irqnr
-+ rsbne \irqnr, \irqnr, #31
-+ .endm
-diff --git a/include/asm-arm/arch-orion/gpio.h b/include/asm-arm/arch-orion/gpio.h
+- __asm__ __volatile__ (
+- "stc sr, %0\n\t"
+- "and #0xf0, %0\n\t"
+- : "=&z" (flags)
+- : /* no inputs */
+- : "memory"
+- );
+-
+- return flags;
+-}
+
+ #define raw_local_save_flags(flags) \
+ do { (flags) = __raw_local_save_flags(); } while (0)
+@@ -92,25 +22,6 @@ static inline int raw_irqs_disabled(void)
+ return raw_irqs_disabled_flags(flags);
+ }
+
+-static inline unsigned long __raw_local_irq_save(void)
+-{
+- unsigned long flags, __dummy;
+-
+- __asm__ __volatile__ (
+- "stc sr, %1\n\t"
+- "mov %1, %0\n\t"
+- "or #0xf0, %0\n\t"
+- "ldc %0, sr\n\t"
+- "mov %1, %0\n\t"
+- "and #0xf0, %0\n\t"
+- : "=&z" (flags), "=&r" (__dummy)
+- : /* no inputs */
+- : "memory"
+- );
+-
+- return flags;
+-}
+-
+ #define raw_local_irq_save(flags) \
+ do { (flags) = __raw_local_irq_save(); } while (0)
+
+diff --git a/include/asm-sh/irqflags_32.h b/include/asm-sh/irqflags_32.h
new file mode 100644
-index 0000000..d66284f
+index 0000000..60218f5
--- /dev/null
-+++ b/include/asm-arm/arch-orion/gpio.h
-@@ -0,0 +1,28 @@
-+/*
-+ * include/asm-arm/arch-orion/gpio.h
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
-+
-+extern int gpio_request(unsigned pin, const char *label);
-+extern void gpio_free(unsigned pin);
-+extern int gpio_direction_input(unsigned pin);
-+extern int gpio_direction_output(unsigned pin, int value);
-+extern int gpio_get_value(unsigned pin);
-+extern void gpio_set_value(unsigned pin, int value);
-+extern void orion_gpio_set_blink(unsigned pin, int blink);
-+extern void gpio_display(void); /* debug */
++++ b/include/asm-sh/irqflags_32.h
+@@ -0,0 +1,99 @@
++#ifndef __ASM_SH_IRQFLAGS_32_H
++#define __ASM_SH_IRQFLAGS_32_H
+
-+static inline int gpio_to_irq(int pin)
++static inline void raw_local_irq_enable(void)
+{
-+ return pin + IRQ_ORION_GPIO_START;
++ unsigned long __dummy0, __dummy1;
++
++ __asm__ __volatile__ (
++ "stc sr, %0\n\t"
++ "and %1, %0\n\t"
++#ifdef CONFIG_CPU_HAS_SR_RB
++ "stc r6_bank, %1\n\t"
++ "or %1, %0\n\t"
++#endif
++ "ldc %0, sr\n\t"
++ : "=&r" (__dummy0), "=r" (__dummy1)
++ : "1" (~0x000000f0)
++ : "memory"
++ );
+}
+
-+static inline int irq_to_gpio(int irq)
++static inline void raw_local_irq_disable(void)
+{
-+ return irq - IRQ_ORION_GPIO_START;
-+}
++ unsigned long flags;
+
-+#include <asm-generic/gpio.h> /* cansleep wrappers */
-diff --git a/include/asm-arm/arch-orion/hardware.h b/include/asm-arm/arch-orion/hardware.h
-new file mode 100644
-index 0000000..8a12d21
---- /dev/null
-+++ b/include/asm-arm/arch-orion/hardware.h
-@@ -0,0 +1,24 @@
-+/*
-+ * include/asm-arm/arch-orion/hardware.h
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ *
-+ */
++ __asm__ __volatile__ (
++ "stc sr, %0\n\t"
++ "or #0xf0, %0\n\t"
++ "ldc %0, sr\n\t"
++ : "=&z" (flags)
++ : /* no inputs */
++ : "memory"
++ );
++}
+
-+#ifndef __ASM_ARCH_HARDWARE_H__
-+#define __ASM_ARCH_HARDWARE_H__
++static inline void set_bl_bit(void)
++{
++ unsigned long __dummy0, __dummy1;
+
-+#include "orion.h"
++ __asm__ __volatile__ (
++ "stc sr, %0\n\t"
++ "or %2, %0\n\t"
++ "and %3, %0\n\t"
++ "ldc %0, sr\n\t"
++ : "=&r" (__dummy0), "=r" (__dummy1)
++ : "r" (0x10000000), "r" (0xffffff0f)
++ : "memory"
++ );
++}
+
-+#define PCI_MEMORY_VADDR ORION_PCI_SYS_MEM_BASE
-+#define PCI_IO_VADDR ORION_PCI_SYS_IO_BASE
++static inline void clear_bl_bit(void)
++{
++ unsigned long __dummy0, __dummy1;
+
-+#define pcibios_assign_all_busses() 1
++ __asm__ __volatile__ (
++ "stc sr, %0\n\t"
++ "and %2, %0\n\t"
++ "ldc %0, sr\n\t"
++ : "=&r" (__dummy0), "=r" (__dummy1)
++ : "1" (~0x10000000)
++ : "memory"
++ );
++}
+
-+#define PCIBIOS_MIN_IO 0x1000
-+#define PCIBIOS_MIN_MEM 0x01000000
-+#define PCIMEM_BASE PCI_MEMORY_VADDR /* mem base for VGA */
++static inline unsigned long __raw_local_save_flags(void)
++{
++ unsigned long flags;
+
-+#endif /* _ASM_ARCH_HARDWARE_H */
-diff --git a/include/asm-arm/arch-orion/io.h b/include/asm-arm/arch-orion/io.h
-new file mode 100644
-index 0000000..e0b8c39
---- /dev/null
-+++ b/include/asm-arm/arch-orion/io.h
-@@ -0,0 +1,27 @@
-+/*
-+ * include/asm-arm/arch-orion/io.h
-+ *
-+ * Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
++ __asm__ __volatile__ (
++ "stc sr, %0\n\t"
++ "and #0xf0, %0\n\t"
++ : "=&z" (flags)
++ : /* no inputs */
++ : "memory"
++ );
+
-+#ifndef __ASM_ARM_ARCH_IO_H
-+#define __ASM_ARM_ARCH_IO_H
++ return flags;
++}
+
-+#include "orion.h"
++static inline unsigned long __raw_local_irq_save(void)
++{
++ unsigned long flags, __dummy;
+
-+#define IO_SPACE_LIMIT 0xffffffff
-+#define IO_SPACE_REMAP ORION_PCI_SYS_IO_BASE
++ __asm__ __volatile__ (
++ "stc sr, %1\n\t"
++ "mov %1, %0\n\t"
++ "or #0xf0, %0\n\t"
++ "ldc %0, sr\n\t"
++ "mov %1, %0\n\t"
++ "and #0xf0, %0\n\t"
++ : "=&z" (flags), "=&r" (__dummy)
++ : /* no inputs */
++ : "memory"
++ );
+
-+static inline void __iomem *__io(unsigned long addr)
-+{
-+ return (void __iomem *)addr;
++ return flags;
+}
+
-+#define __io(a) __io(a)
-+#define __mem_pci(a) (a)
-+
-+#endif
-diff --git a/include/asm-arm/arch-orion/irqs.h b/include/asm-arm/arch-orion/irqs.h
++#endif /* __ASM_SH_IRQFLAGS_32_H */
+diff --git a/include/asm-sh/irqflags_64.h b/include/asm-sh/irqflags_64.h
new file mode 100644
-index 0000000..eea65ca
+index 0000000..4f6b8a5
--- /dev/null
-+++ b/include/asm-arm/arch-orion/irqs.h
-@@ -0,0 +1,61 @@
-+/*
-+ * include/asm-arm/arch-orion/irqs.h
-+ *
-+ * IRQ definitions for Orion SoC
-+ *
-+ * Maintainer: Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
-+
-+#ifndef __ASM_ARCH_IRQS_H__
-+#define __ASM_ARCH_IRQS_H__
-+
-+#include "orion.h" /* need GPIO_MAX */
-+
-+/*
-+ * Orion Main Interrupt Controller
-+ */
-+#define IRQ_ORION_BRIDGE 0
-+#define IRQ_ORION_DOORBELL_H2C 1
-+#define IRQ_ORION_DOORBELL_C2H 2
-+#define IRQ_ORION_UART0 3
-+#define IRQ_ORION_UART1 4
-+#define IRQ_ORION_I2C 5
-+#define IRQ_ORION_GPIO_0_7 6
-+#define IRQ_ORION_GPIO_8_15 7
-+#define IRQ_ORION_GPIO_16_23 8
-+#define IRQ_ORION_GPIO_24_31 9
-+#define IRQ_ORION_PCIE0_ERR 10
-+#define IRQ_ORION_PCIE0_INT 11
-+#define IRQ_ORION_USB1_CTRL 12
-+#define IRQ_ORION_DEV_BUS_ERR 14
-+#define IRQ_ORION_PCI_ERR 15
-+#define IRQ_ORION_USB_BR_ERR 16
-+#define IRQ_ORION_USB0_CTRL 17
-+#define IRQ_ORION_ETH_RX 18
-+#define IRQ_ORION_ETH_TX 19
-+#define IRQ_ORION_ETH_MISC 20
-+#define IRQ_ORION_ETH_SUM 21
-+#define IRQ_ORION_ETH_ERR 22
-+#define IRQ_ORION_IDMA_ERR 23
-+#define IRQ_ORION_IDMA_0 24
-+#define IRQ_ORION_IDMA_1 25
-+#define IRQ_ORION_IDMA_2 26
-+#define IRQ_ORION_IDMA_3 27
-+#define IRQ_ORION_CESA 28
-+#define IRQ_ORION_SATA 29
-+#define IRQ_ORION_XOR0 30
-+#define IRQ_ORION_XOR1 31
++++ b/include/asm-sh/irqflags_64.h
+@@ -0,0 +1,85 @@
++#ifndef __ASM_SH_IRQFLAGS_64_H
++#define __ASM_SH_IRQFLAGS_64_H
+
-+/*
-+ * Orion General Purpose Pins
-+ */
-+#define IRQ_ORION_GPIO_START 32
-+#define NR_GPIO_IRQS GPIO_MAX
++#include <asm/cpu/registers.h>
+
-+#define NR_IRQS (IRQ_ORION_GPIO_START + NR_GPIO_IRQS)
++#define SR_MASK_LL 0x00000000000000f0LL
++#define SR_BL_LL 0x0000000010000000LL
+
-+#endif /* __ASM_ARCH_IRQS_H__ */
-diff --git a/include/asm-arm/arch-orion/memory.h b/include/asm-arm/arch-orion/memory.h
-new file mode 100644
-index 0000000..d954dba
---- /dev/null
-+++ b/include/asm-arm/arch-orion/memory.h
-@@ -0,0 +1,15 @@
-+/*
-+ * include/asm-arm/arch-orion/memory.h
-+ *
-+ * Marvell Orion memory definitions
-+ */
++static inline void raw_local_irq_enable(void)
++{
++ unsigned long long __dummy0, __dummy1 = ~SR_MASK_LL;
+
-+#ifndef __ASM_ARCH_MMU_H
-+#define __ASM_ARCH_MMU_H
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "and %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy0)
++ : "r" (__dummy1));
++}
+
-+#define PHYS_OFFSET UL(0x00000000)
++static inline void raw_local_irq_disable(void)
++{
++ unsigned long long __dummy0, __dummy1 = SR_MASK_LL;
+
-+#define __virt_to_bus(x) __virt_to_phys(x)
-+#define __bus_to_virt(x) __phys_to_virt(x)
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "or %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy0)
++ : "r" (__dummy1));
++}
+
-+#endif
-diff --git a/include/asm-arm/arch-orion/orion.h b/include/asm-arm/arch-orion/orion.h
-new file mode 100644
-index 0000000..f787f75
---- /dev/null
-+++ b/include/asm-arm/arch-orion/orion.h
-@@ -0,0 +1,143 @@
-+/*
-+ * include/asm-arm/arch-orion/orion.h
-+ *
-+ * Generic definitions of Orion SoC flavors:
-+ * Orion-1, Orion-NAS, Orion-VoIP, and Orion-2.
-+ *
-+ * Maintainer: Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
++static inline void set_bl_bit(void)
++{
++ unsigned long long __dummy0, __dummy1 = SR_BL_LL;
+
-+#ifndef __ASM_ARCH_ORION_H__
-+#define __ASM_ARCH_ORION_H__
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "or %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy0)
++ : "r" (__dummy1));
+
-+/*******************************************************************************
-+ * Orion Address Map
-+ * Use the same mapping (1:1 virtual:physical) of internal registers and
-+ * PCI system (PCI+PCIE) for all machines.
-+ * Each machine defines the rest of its mapping (e.g. device bus flashes)
-+ ******************************************************************************/
-+#define ORION_REGS_BASE 0xf1000000
-+#define ORION_REGS_SIZE SZ_1M
++}
+
-+#define ORION_PCI_SYS_MEM_BASE 0xe0000000
-+#define ORION_PCIE_MEM_BASE ORION_PCI_SYS_MEM_BASE
-+#define ORION_PCIE_MEM_SIZE SZ_128M
-+#define ORION_PCI_MEM_BASE (ORION_PCIE_MEM_BASE + ORION_PCIE_MEM_SIZE)
-+#define ORION_PCI_MEM_SIZE SZ_128M
++static inline void clear_bl_bit(void)
++{
++ unsigned long long __dummy0, __dummy1 = ~SR_BL_LL;
+
-+#define ORION_PCI_SYS_IO_BASE 0xf2000000
-+#define ORION_PCIE_IO_BASE ORION_PCI_SYS_IO_BASE
-+#define ORION_PCIE_IO_SIZE SZ_1M
-+#define ORION_PCIE_IO_REMAP (ORION_PCIE_IO_BASE - ORION_PCI_SYS_IO_BASE)
-+#define ORION_PCI_IO_BASE (ORION_PCIE_IO_BASE + ORION_PCIE_IO_SIZE)
-+#define ORION_PCI_IO_SIZE SZ_1M
-+#define ORION_PCI_IO_REMAP (ORION_PCI_IO_BASE - ORION_PCI_SYS_IO_BASE)
-+/* Relevant only for Orion-NAS */
-+#define ORION_PCIE_WA_BASE 0xf0000000
-+#define ORION_PCIE_WA_SIZE SZ_16M
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "and %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy0)
++ : "r" (__dummy1));
++}
+
-+/*******************************************************************************
-+ * Supported Devices & Revisions
-+ ******************************************************************************/
-+/* Orion-1 (88F5181) */
-+#define MV88F5181_DEV_ID 0x5181
-+#define MV88F5181_REV_B1 3
-+/* Orion-NAS (88F5182) */
-+#define MV88F5182_DEV_ID 0x5182
-+#define MV88F5182_REV_A2 2
-+/* Orion-2 (88F5281) */
-+#define MV88F5281_DEV_ID 0x5281
-+#define MV88F5281_REV_D1 5
-+#define MV88F5281_REV_D2 6
++static inline unsigned long __raw_local_save_flags(void)
++{
++ unsigned long long __dummy = SR_MASK_LL;
++ unsigned long flags;
+
-+/*******************************************************************************
-+ * Orion Registers Map
-+ ******************************************************************************/
-+#define ORION_DDR_REG_BASE (ORION_REGS_BASE | 0x00000)
-+#define ORION_DEV_BUS_REG_BASE (ORION_REGS_BASE | 0x10000)
-+#define ORION_BRIDGE_REG_BASE (ORION_REGS_BASE | 0x20000)
-+#define ORION_PCI_REG_BASE (ORION_REGS_BASE | 0x30000)
-+#define ORION_PCIE_REG_BASE (ORION_REGS_BASE | 0x40000)
-+#define ORION_USB0_REG_BASE (ORION_REGS_BASE | 0x50000)
-+#define ORION_ETH_REG_BASE (ORION_REGS_BASE | 0x70000)
-+#define ORION_SATA_REG_BASE (ORION_REGS_BASE | 0x80000)
-+#define ORION_USB1_REG_BASE (ORION_REGS_BASE | 0xa0000)
++ __asm__ __volatile__ (
++ "getcon " __SR ", %0\n\t"
++ "and %0, %1, %0"
++ : "=&r" (flags)
++ : "r" (__dummy));
+
-+#define ORION_DDR_REG(x) (ORION_DDR_REG_BASE | (x))
-+#define ORION_DEV_BUS_REG(x) (ORION_DEV_BUS_REG_BASE | (x))
-+#define ORION_BRIDGE_REG(x) (ORION_BRIDGE_REG_BASE | (x))
-+#define ORION_PCI_REG(x) (ORION_PCI_REG_BASE | (x))
-+#define ORION_PCIE_REG(x) (ORION_PCIE_REG_BASE | (x))
-+#define ORION_USB0_REG(x) (ORION_USB0_REG_BASE | (x))
-+#define ORION_USB1_REG(x) (ORION_USB1_REG_BASE | (x))
-+#define ORION_ETH_REG(x) (ORION_ETH_REG_BASE | (x))
-+#define ORION_SATA_REG(x) (ORION_SATA_REG_BASE | (x))
++ return flags;
++}
+
-+/*******************************************************************************
-+ * Device Bus Registers
-+ ******************************************************************************/
-+#define MPP_0_7_CTRL ORION_DEV_BUS_REG(0x000)
-+#define MPP_8_15_CTRL ORION_DEV_BUS_REG(0x004)
-+#define MPP_16_19_CTRL ORION_DEV_BUS_REG(0x050)
-+#define MPP_DEV_CTRL ORION_DEV_BUS_REG(0x008)
-+#define MPP_RESET_SAMPLE ORION_DEV_BUS_REG(0x010)
-+#define GPIO_OUT ORION_DEV_BUS_REG(0x100)
-+#define GPIO_IO_CONF ORION_DEV_BUS_REG(0x104)
-+#define GPIO_BLINK_EN ORION_DEV_BUS_REG(0x108)
-+#define GPIO_IN_POL ORION_DEV_BUS_REG(0x10c)
-+#define GPIO_DATA_IN ORION_DEV_BUS_REG(0x110)
-+#define GPIO_EDGE_CAUSE ORION_DEV_BUS_REG(0x114)
-+#define GPIO_EDGE_MASK ORION_DEV_BUS_REG(0x118)
-+#define GPIO_LEVEL_MASK ORION_DEV_BUS_REG(0x11c)
-+#define DEV_BANK_0_PARAM ORION_DEV_BUS_REG(0x45c)
-+#define DEV_BANK_1_PARAM ORION_DEV_BUS_REG(0x460)
-+#define DEV_BANK_2_PARAM ORION_DEV_BUS_REG(0x464)
-+#define DEV_BANK_BOOT_PARAM ORION_DEV_BUS_REG(0x46c)
-+#define DEV_BUS_CTRL ORION_DEV_BUS_REG(0x4c0)
-+#define DEV_BUS_INT_CAUSE ORION_DEV_BUS_REG(0x4d0)
-+#define DEV_BUS_INT_MASK ORION_DEV_BUS_REG(0x4d4)
-+#define I2C_BASE ORION_DEV_BUS_REG(0x1000)
-+#define UART0_BASE ORION_DEV_BUS_REG(0x2000)
-+#define UART1_BASE ORION_DEV_BUS_REG(0x2100)
-+#define GPIO_MAX 32
++static inline unsigned long __raw_local_irq_save(void)
++{
++ unsigned long long __dummy0, __dummy1 = SR_MASK_LL;
++ unsigned long flags;
+
-+/***************************************************************************
-+ * Orion CPU Bridge Registers
-+ **************************************************************************/
-+#define CPU_CONF ORION_BRIDGE_REG(0x100)
-+#define CPU_CTRL ORION_BRIDGE_REG(0x104)
-+#define CPU_RESET_MASK ORION_BRIDGE_REG(0x108)
-+#define CPU_SOFT_RESET ORION_BRIDGE_REG(0x10c)
-+#define POWER_MNG_CTRL_REG ORION_BRIDGE_REG(0x11C)
-+#define BRIDGE_CAUSE ORION_BRIDGE_REG(0x110)
-+#define BRIDGE_MASK ORION_BRIDGE_REG(0x114)
-+#define MAIN_IRQ_CAUSE ORION_BRIDGE_REG(0x200)
-+#define MAIN_IRQ_MASK ORION_BRIDGE_REG(0x204)
-+#define TIMER_CTRL ORION_BRIDGE_REG(0x300)
-+#define TIMER_VAL(x) ORION_BRIDGE_REG(0x314 + ((x) * 8))
-+#define TIMER_VAL_RELOAD(x) ORION_BRIDGE_REG(0x310 + ((x) * 8))
++ __asm__ __volatile__ (
++ "getcon " __SR ", %1\n\t"
++ "or %1, r63, %0\n\t"
++ "or %1, %2, %1\n\t"
++ "putcon %1, " __SR "\n\t"
++ "and %0, %2, %0"
++ : "=&r" (flags), "=&r" (__dummy0)
++ : "r" (__dummy1));
+
-+#ifndef __ASSEMBLY__
++ return flags;
++}
+
-+/*******************************************************************************
-+ * Helpers to access Orion registers
-+ ******************************************************************************/
-+#include <asm/types.h>
-+#include <asm/io.h>
++#endif /* __ASM_SH_IRQFLAGS_64_H */
+diff --git a/include/asm-sh/machvec.h b/include/asm-sh/machvec.h
+index 088698b..b2e4124 100644
+--- a/include/asm-sh/machvec.h
++++ b/include/asm-sh/machvec.h
+@@ -56,9 +56,6 @@ struct sh_machine_vector {
+
+ void (*mv_heartbeat)(void);
+
+- void *(*mv_consistent_alloc)(struct device *, size_t, dma_addr_t *, gfp_t);
+- int (*mv_consistent_free)(struct device *, size_t, void *, dma_addr_t);
+-
+ void __iomem *(*mv_ioport_map)(unsigned long port, unsigned int size);
+ void (*mv_ioport_unmap)(void __iomem *);
+ };
+@@ -68,6 +65,6 @@ extern struct sh_machine_vector sh_mv;
+ #define get_system_type() sh_mv.mv_name
+
+ #define __initmv \
+- __attribute_used__ __attribute__((__section__ (".machvec.init")))
++ __used __section(.machvec.init)
+
+ #endif /* _ASM_SH_MACHVEC_H */
+diff --git a/include/asm-sh/microdev.h b/include/asm-sh/microdev.h
+index 018332a..1aed158 100644
+--- a/include/asm-sh/microdev.h
++++ b/include/asm-sh/microdev.h
+@@ -17,7 +17,7 @@ extern void microdev_print_fpga_intc_status(void);
+ /*
+ * The following are useful macros for manipulating the interrupt
+ * controller (INTC) on the CPU-board FPGA. should be noted that there
+- * is an INTC on the FPGA, and a seperate INTC on the SH4-202 core -
++ * is an INTC on the FPGA, and a separate INTC on the SH4-202 core -
+ * these are two different things, both of which need to be programmed to
+ * correctly route - unfortunately, they have the same name and
+ * abbreviations!
+@@ -25,7 +25,7 @@ extern void microdev_print_fpga_intc_status(void);
+ #define MICRODEV_FPGA_INTC_BASE 0xa6110000ul /* INTC base address on CPU-board FPGA */
+ #define MICRODEV_FPGA_INTENB_REG (MICRODEV_FPGA_INTC_BASE+0ul) /* Interrupt Enable Register on INTC on CPU-board FPGA */
+ #define MICRODEV_FPGA_INTDSB_REG (MICRODEV_FPGA_INTC_BASE+8ul) /* Interrupt Disable Register on INTC on CPU-board FPGA */
+-#define MICRODEV_FPGA_INTC_MASK(n) (1ul<<(n)) /* Interupt mask to enable/disable INTC in CPU-board FPGA */
++#define MICRODEV_FPGA_INTC_MASK(n) (1ul<<(n)) /* Interrupt mask to enable/disable INTC in CPU-board FPGA */
+ #define MICRODEV_FPGA_INTPRI_REG(n) (MICRODEV_FPGA_INTC_BASE+0x10+((n)/8)*8)/* Interrupt Priority Register on INTC on CPU-board FPGA */
+ #define MICRODEV_FPGA_INTPRI_LEVEL(n,x) ((x)<<(((n)%8)*4)) /* MICRODEV_FPGA_INTPRI_LEVEL(int_number, int_level) */
+ #define MICRODEV_FPGA_INTPRI_MASK(n) (MICRODEV_FPGA_INTPRI_LEVEL((n),0xful)) /* Interrupt Priority Mask on INTC on CPU-board FPGA */
+diff --git a/include/asm-sh/mmu_context.h b/include/asm-sh/mmu_context.h
+index 199662b..fe58d00 100644
+--- a/include/asm-sh/mmu_context.h
++++ b/include/asm-sh/mmu_context.h
+@@ -1,13 +1,13 @@
+ /*
+ * Copyright (C) 1999 Niibe Yutaka
+- * Copyright (C) 2003 - 2006 Paul Mundt
++ * Copyright (C) 2003 - 2007 Paul Mundt
+ *
+ * ASID handling idea taken from MIPS implementation.
+ */
+ #ifndef __ASM_SH_MMU_CONTEXT_H
+ #define __ASM_SH_MMU_CONTEXT_H
+-#ifdef __KERNEL__
+
++#ifdef __KERNEL__
+ #include <asm/cpu/mmu_context.h>
+ #include <asm/tlbflush.h>
+ #include <asm/uaccess.h>
+@@ -19,7 +19,6 @@
+ * (a) TLB cache version (or round, cycle whatever expression you like)
+ * (b) ASID (Address Space IDentifier)
+ */
+-
+ #define MMU_CONTEXT_ASID_MASK 0x000000ff
+ #define MMU_CONTEXT_VERSION_MASK 0xffffff00
+ #define MMU_CONTEXT_FIRST_VERSION 0x00000100
+@@ -28,10 +27,11 @@
+ /* ASID is 8-bit value, so it can't be 0x100 */
+ #define MMU_NO_ASID 0x100
+
+-#define cpu_context(cpu, mm) ((mm)->context.id[cpu])
+-#define cpu_asid(cpu, mm) (cpu_context((cpu), (mm)) & \
+- MMU_CONTEXT_ASID_MASK)
+ #define asid_cache(cpu) (cpu_data[cpu].asid_cache)
++#define cpu_context(cpu, mm) ((mm)->context.id[cpu])
+
-+#define orion_read(r) __raw_readl(r)
-+#define orion_write(r, val) __raw_writel(val, r)
++#define cpu_asid(cpu, mm) \
++ (cpu_context((cpu), (mm)) & MMU_CONTEXT_ASID_MASK)
+
+ /*
+ * Virtual Page Number mask
+@@ -39,6 +39,12 @@
+ #define MMU_VPN_MASK 0xfffff000
+
+ #ifdef CONFIG_MMU
++#if defined(CONFIG_SUPERH32)
++#include "mmu_context_32.h"
++#else
++#include "mmu_context_64.h"
++#endif
+
-+/*
-+ * These are not preempt safe. Locks, if needed, must be taken care by caller.
-+ */
-+#define orion_setbits(r, mask) orion_write((r), orion_read(r) | (mask))
-+#define orion_clrbits(r, mask) orion_write((r), orion_read(r) & ~(mask))
+ /*
+ * Get MMU context if needed.
+ */
+@@ -59,6 +65,14 @@ static inline void get_mmu_context(struct mm_struct *mm, unsigned int cpu)
+ */
+ flush_tlb_all();
+
++#ifdef CONFIG_SUPERH64
++ /*
++ * The SH-5 cache uses the ASIDs, requiring both the I and D
++ * cache to be flushed when the ASID is exhausted. Weak.
++ */
++ flush_cache_all();
++#endif
+
-+#endif /* __ASSEMBLY__ */
+ /*
+ * Fix version; Note that we avoid version #0
+ * to distinguish NO_CONTEXT.
+@@ -86,39 +100,6 @@ static inline int init_new_context(struct task_struct *tsk,
+ }
+
+ /*
+- * Destroy context related info for an mm_struct that is about
+- * to be put to rest.
+- */
+-static inline void destroy_context(struct mm_struct *mm)
+-{
+- /* Do nothing */
+-}
+-
+-static inline void set_asid(unsigned long asid)
+-{
+- unsigned long __dummy;
+-
+- __asm__ __volatile__ ("mov.l %2, %0\n\t"
+- "and %3, %0\n\t"
+- "or %1, %0\n\t"
+- "mov.l %0, %2"
+- : "=&r" (__dummy)
+- : "r" (asid), "m" (__m(MMU_PTEH)),
+- "r" (0xffffff00));
+-}
+-
+-static inline unsigned long get_asid(void)
+-{
+- unsigned long asid;
+-
+- __asm__ __volatile__ ("mov.l %1, %0"
+- : "=r" (asid)
+- : "m" (__m(MMU_PTEH)));
+- asid &= MMU_CONTEXT_ASID_MASK;
+- return asid;
+-}
+-
+-/*
+ * After we have set current->mm to a new value, this activates
+ * the context for the new mm so we see the new mappings.
+ */
+@@ -128,17 +109,6 @@ static inline void activate_context(struct mm_struct *mm, unsigned int cpu)
+ set_asid(cpu_asid(cpu, mm));
+ }
+
+-/* MMU_TTB is used for optimizing the fault handling. */
+-static inline void set_TTB(pgd_t *pgd)
+-{
+- ctrl_outl((unsigned long)pgd, MMU_TTB);
+-}
+-
+-static inline pgd_t *get_TTB(void)
+-{
+- return (pgd_t *)ctrl_inl(MMU_TTB);
+-}
+-
+ static inline void switch_mm(struct mm_struct *prev,
+ struct mm_struct *next,
+ struct task_struct *tsk)
+@@ -153,17 +123,7 @@ static inline void switch_mm(struct mm_struct *prev,
+ if (!cpu_test_and_set(cpu, next->cpu_vm_mask))
+ activate_context(next, cpu);
+ }
+-
+-#define deactivate_mm(tsk,mm) do { } while (0)
+-
+-#define activate_mm(prev, next) \
+- switch_mm((prev),(next),NULL)
+-
+-static inline void
+-enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+-{
+-}
+-#else /* !CONFIG_MMU */
++#else
+ #define get_mmu_context(mm) do { } while (0)
+ #define init_new_context(tsk,mm) (0)
+ #define destroy_context(mm) do { } while (0)
+@@ -173,10 +133,11 @@ enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+ #define get_TTB() (0)
+ #define activate_context(mm,cpu) do { } while (0)
+ #define switch_mm(prev,next,tsk) do { } while (0)
++#endif /* CONFIG_MMU */
+
-+#endif /* __ASM_ARCH_ORION_H__ */
-diff --git a/include/asm-arm/arch-orion/platform.h b/include/asm-arm/arch-orion/platform.h
++#define activate_mm(prev, next) switch_mm((prev),(next),NULL)
+ #define deactivate_mm(tsk,mm) do { } while (0)
+-#define activate_mm(prev,next) do { } while (0)
+ #define enter_lazy_tlb(mm,tsk) do { } while (0)
+-#endif /* CONFIG_MMU */
+
+ #if defined(CONFIG_CPU_SH3) || defined(CONFIG_CPU_SH4)
+ /*
+diff --git a/include/asm-sh/mmu_context_32.h b/include/asm-sh/mmu_context_32.h
new file mode 100644
-index 0000000..143c38e
+index 0000000..f4f9aeb
--- /dev/null
-+++ b/include/asm-arm/arch-orion/platform.h
-@@ -0,0 +1,25 @@
-+/*
-+ * asm-arm/arch-orion/platform.h
-+ *
-+ * Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
-+
-+#ifndef __ASM_ARCH_PLATFORM_H__
-+#define __ASM_ARCH_PLATFORM_H__
-+
-+/*
-+ * Device bus NAND private data
-+ */
-+struct orion_nand_data {
-+ struct mtd_partition *parts;
-+ u32 nr_parts;
-+ u8 ale; /* address line number connected to ALE */
-+ u8 cle; /* address line number connected to CLE */
-+ u8 width; /* buswidth */
-+};
++++ b/include/asm-sh/mmu_context_32.h
+@@ -0,0 +1,47 @@
++#ifndef __ASM_SH_MMU_CONTEXT_32_H
++#define __ASM_SH_MMU_CONTEXT_32_H
+
-+#endif
-diff --git a/include/asm-arm/arch-orion/system.h b/include/asm-arm/arch-orion/system.h
-new file mode 100644
-index 0000000..17704c6
---- /dev/null
-+++ b/include/asm-arm/arch-orion/system.h
-@@ -0,0 +1,31 @@
+/*
-+ * include/asm-arm/arch-orion/system.h
-+ *
-+ * Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
++ * Destroy context related info for an mm_struct that is about
++ * to be put to rest.
+ */
++static inline void destroy_context(struct mm_struct *mm)
++{
++ /* Do nothing */
++}
+
-+#ifndef __ASM_ARCH_SYSTEM_H
-+#define __ASM_ARCH_SYSTEM_H
++static inline void set_asid(unsigned long asid)
++{
++ unsigned long __dummy;
+
-+#include <asm/arch/hardware.h>
-+#include <asm/arch/orion.h>
++ __asm__ __volatile__ ("mov.l %2, %0\n\t"
++ "and %3, %0\n\t"
++ "or %1, %0\n\t"
++ "mov.l %0, %2"
++ : "=&r" (__dummy)
++ : "r" (asid), "m" (__m(MMU_PTEH)),
++ "r" (0xffffff00));
++}
+
-+static inline void arch_idle(void)
++static inline unsigned long get_asid(void)
+{
-+ cpu_do_idle();
++ unsigned long asid;
++
++ __asm__ __volatile__ ("mov.l %1, %0"
++ : "=r" (asid)
++ : "m" (__m(MMU_PTEH)));
++ asid &= MMU_CONTEXT_ASID_MASK;
++ return asid;
+}
+
-+static inline void arch_reset(char mode)
++/* MMU_TTB is used for optimizing the fault handling. */
++static inline void set_TTB(pgd_t *pgd)
+{
-+ /*
-+ * Enable and issue soft reset
-+ */
-+ orion_setbits(CPU_RESET_MASK, (1 << 2));
-+ orion_setbits(CPU_SOFT_RESET, 1);
++ ctrl_outl((unsigned long)pgd, MMU_TTB);
+}
+
-+#endif
-diff --git a/include/asm-arm/arch-orion/timex.h b/include/asm-arm/arch-orion/timex.h
++static inline pgd_t *get_TTB(void)
++{
++ return (pgd_t *)ctrl_inl(MMU_TTB);
++}
++#endif /* __ASM_SH_MMU_CONTEXT_32_H */
+diff --git a/include/asm-sh/mmu_context_64.h b/include/asm-sh/mmu_context_64.h
new file mode 100644
-index 0000000..26c2c91
+index 0000000..020be74
--- /dev/null
-+++ b/include/asm-arm/arch-orion/timex.h
-@@ -0,0 +1,12 @@
-+/*
-+ * include/asm-arm/arch-orion/timex.h
-+ *
-+ * Tzachi Perelstein <tzachi at marvell.com>
-+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
-+ */
++++ b/include/asm-sh/mmu_context_64.h
+@@ -0,0 +1,75 @@
++#ifndef __ASM_SH_MMU_CONTEXT_64_H
++#define __ASM_SH_MMU_CONTEXT_64_H
+
-+#define ORION_TCLK 166666667
-+#define CLOCK_TICK_RATE ORION_TCLK
-diff --git a/include/asm-arm/arch-orion/uncompress.h b/include/asm-arm/arch-orion/uncompress.h
-new file mode 100644
-index 0000000..a1a222f
---- /dev/null
-+++ b/include/asm-arm/arch-orion/uncompress.h
-@@ -0,0 +1,44 @@
+/*
-+ * include/asm-arm/arch-orion/uncompress.h
++ * sh64-specific mmu_context interface.
+ *
-+ * Tzachi Perelstein <tzachi at marvell.com>
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003 - 2007 Paul Mundt
+ *
-+ * This file is licensed under the terms of the GNU General Public
-+ * License version 2. This program is licensed "as is" without any
-+ * warranty of any kind, whether express or implied.
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
++#include <asm/cpu/registers.h>
++#include <asm/cacheflush.h>
+
-+#include <asm/arch/orion.h>
-+
-+#define MV_UART_LSR ((volatile unsigned char *)(UART0_BASE + 0x14))
-+#define MV_UART_THR ((volatile unsigned char *)(UART0_BASE + 0x0))
-+
-+#define LSR_THRE 0x20
++#define SR_ASID_MASK 0xffffffffff00ffffULL
++#define SR_ASID_SHIFT 16
+
-+static void putc(const char c)
++/*
++ * Destroy context related info for an mm_struct that is about
++ * to be put to rest.
++ */
++static inline void destroy_context(struct mm_struct *mm)
+{
-+ int j = 0x1000;
-+ while (--j && !(*MV_UART_LSR & LSR_THRE))
-+ barrier();
-+ *MV_UART_THR = c;
++ /* Well, at least free TLB entries */
++ flush_tlb_mm(mm);
+}
+
-+static void flush(void)
++static inline unsigned long get_asid(void)
+{
-+}
++ unsigned long long sr;
+
-+static void orion_early_putstr(const char *ptr)
-+{
-+ char c;
-+ while ((c = *ptr++) != '\0') {
-+ if (c == '\n')
-+ putc('\r');
-+ putc(c);
-+ }
++ asm volatile ("getcon " __SR ", %0\n\t"
++ : "=r" (sr));
++
++ sr = (sr >> SR_ASID_SHIFT) & MMU_CONTEXT_ASID_MASK;
++ return (unsigned long) sr;
+}
+
-+/*
-+ * nothing to do
-+ */
-+#define arch_decomp_setup()
-+#define arch_decomp_wdog()
-diff --git a/include/asm-arm/arch-orion/vmalloc.h b/include/asm-arm/arch-orion/vmalloc.h
-new file mode 100644
-index 0000000..23e2a10
---- /dev/null
-+++ b/include/asm-arm/arch-orion/vmalloc.h
-@@ -0,0 +1,5 @@
-+/*
-+ * include/asm-arm/arch-orion/vmalloc.h
-+ */
++/* Set ASID into SR */
++static inline void set_asid(unsigned long asid)
++{
++ unsigned long long sr, pc;
+
-+#define VMALLOC_END 0xf0000000
-diff --git a/include/asm-arm/arch-pxa/colibri.h b/include/asm-arm/arch-pxa/colibri.h
-new file mode 100644
-index 0000000..2ae373f
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/colibri.h
-@@ -0,0 +1,19 @@
-+#ifndef _COLIBRI_H_
-+#define _COLIBRI_H_
++ asm volatile ("getcon " __SR ", %0" : "=r" (sr));
+
-+/* physical memory regions */
-+#define COLIBRI_FLASH_PHYS (PXA_CS0_PHYS) /* Flash region */
-+#define COLIBRI_ETH_PHYS (PXA_CS2_PHYS) /* Ethernet DM9000 region */
-+#define COLIBRI_SDRAM_BASE 0xa0000000 /* SDRAM region */
++ sr = (sr & SR_ASID_MASK) | (asid << SR_ASID_SHIFT);
+
-+/* virtual memory regions */
-+#define COLIBRI_DISK_VIRT 0xF0000000 /* Disk On Chip region */
++ /*
++ * It is possible that this function may be inlined and so to avoid
++ * the assembler reporting duplicate symbols we make use of the
++ * gas trick of generating symbols using numerics and forward
++ * reference.
++ */
++ asm volatile ("movi 1, %1\n\t"
++ "shlli %1, 28, %1\n\t"
++ "or %0, %1, %1\n\t"
++ "putcon %1, " __SR "\n\t"
++ "putcon %0, " __SSR "\n\t"
++ "movi 1f, %1\n\t"
++ "ori %1, 1 , %1\n\t"
++ "putcon %1, " __SPC "\n\t"
++ "rte\n"
++ "1:\n\t"
++ : "=r" (sr), "=r" (pc) : "0" (sr));
++}
+
-+/* size of flash */
-+#define COLIBRI_FLASH_SIZE 0x02000000 /* Flash size 32 MB */
++/* No spare register to twiddle, so use a software cache */
++extern pgd_t *mmu_pdtp_cache;
+
-+/* Ethernet Controller Davicom DM9000 */
-+#define GPIO_DM9000 114
-+#define COLIBRI_ETH_IRQ IRQ_GPIO(GPIO_DM9000)
++#define set_TTB(pgd) (mmu_pdtp_cache = (pgd))
++#define get_TTB() (mmu_pdtp_cache)
+
-+#endif /* _COLIBRI_H_ */
-diff --git a/include/asm-arm/arch-pxa/corgi.h b/include/asm-arm/arch-pxa/corgi.h
-index e554caa..bf85650 100644
---- a/include/asm-arm/arch-pxa/corgi.h
-+++ b/include/asm-arm/arch-pxa/corgi.h
-@@ -104,7 +104,6 @@
++#endif /* __ASM_SH_MMU_CONTEXT_64_H */
+diff --git a/include/asm-sh/module.h b/include/asm-sh/module.h
+index 118d5a2..46eccd3 100644
+--- a/include/asm-sh/module.h
++++ b/include/asm-sh/module.h
+@@ -20,6 +20,8 @@ struct mod_arch_specific {
+ # define MODULE_PROC_FAMILY "SH3LE "
+ # elif defined CONFIG_CPU_SH4
+ # define MODULE_PROC_FAMILY "SH4LE "
++# elif defined CONFIG_CPU_SH5
++# define MODULE_PROC_FAMILY "SH5LE "
+ # else
+ # error unknown processor family
+ # endif
+@@ -30,6 +32,8 @@ struct mod_arch_specific {
+ # define MODULE_PROC_FAMILY "SH3BE "
+ # elif defined CONFIG_CPU_SH4
+ # define MODULE_PROC_FAMILY "SH4BE "
++# elif defined CONFIG_CPU_SH5
++# define MODULE_PROC_FAMILY "SH5BE "
+ # else
+ # error unknown processor family
+ # endif
+diff --git a/include/asm-sh/page.h b/include/asm-sh/page.h
+index d00a8fd..002e64a 100644
+--- a/include/asm-sh/page.h
++++ b/include/asm-sh/page.h
+@@ -5,13 +5,7 @@
+ * Copyright (C) 1999 Niibe Yutaka
*/
- extern struct platform_device corgiscoop_device;
- extern struct platform_device corgissp_device;
--extern struct platform_device corgifb_device;
- #endif /* __ASM_ARCH_CORGI_H */
+-/*
+- [ P0/U0 (virtual) ] 0x00000000 <------ User space
+- [ P1 (fixed) cached ] 0x80000000 <------ Kernel space
+- [ P2 (fixed) non-cachable] 0xA0000000 <------ Physical access
+- [ P3 (virtual) cached] 0xC0000000 <------ vmalloced area
+- [ P4 control ] 0xE0000000
+- */
++#include <linux/const.h>
-diff --git a/include/asm-arm/arch-pxa/i2c.h b/include/asm-arm/arch-pxa/i2c.h
-index e404b23..80596b0 100644
---- a/include/asm-arm/arch-pxa/i2c.h
-+++ b/include/asm-arm/arch-pxa/i2c.h
-@@ -65,7 +65,13 @@ struct i2c_pxa_platform_data {
- unsigned int slave_addr;
- struct i2c_slave_client *slave;
- unsigned int class;
-+ int use_pio;
- };
+ #ifdef __KERNEL__
- extern void pxa_set_i2c_info(struct i2c_pxa_platform_data *info);
-+
-+#ifdef CONFIG_PXA27x
-+extern void pxa_set_i2c_power_info(struct i2c_pxa_platform_data *info);
-+#endif
-+
+@@ -26,15 +20,13 @@
+ # error "Bogus kernel page size?"
#endif
-diff --git a/include/asm-arm/arch-pxa/irqs.h b/include/asm-arm/arch-pxa/irqs.h
-index b76ee6d..c562b97 100644
---- a/include/asm-arm/arch-pxa/irqs.h
-+++ b/include/asm-arm/arch-pxa/irqs.h
-@@ -180,7 +180,8 @@
- #define NR_IRQS (IRQ_LOCOMO_SPI_TEND + 1)
- #elif defined(CONFIG_ARCH_LUBBOCK) || \
- defined(CONFIG_MACH_LOGICPD_PXA270) || \
-- defined(CONFIG_MACH_MAINSTONE)
-+ defined(CONFIG_MACH_MAINSTONE) || \
-+ defined(CONFIG_MACH_PCM027)
- #define NR_IRQS (IRQ_BOARD_END)
- #else
- #define NR_IRQS (IRQ_BOARD_START)
-@@ -227,6 +228,13 @@
- #define IRQ_LOCOMO_LT_BASE (IRQ_BOARD_START + 2)
- #define IRQ_LOCOMO_SPI_BASE (IRQ_BOARD_START + 3)
-+/* phyCORE-PXA270 (PCM027) Interrupts */
-+#define PCM027_IRQ(x) (IRQ_BOARD_START + (x))
-+#define PCM027_BTDET_IRQ PCM027_IRQ(0)
-+#define PCM027_FF_RI_IRQ PCM027_IRQ(1)
-+#define PCM027_MMCDET_IRQ PCM027_IRQ(2)
-+#define PCM027_PM_5V_IRQ PCM027_IRQ(3)
-+
- /* ITE8152 irqs */
- /* add IT8152 IRQs beyond BOARD_END */
- #ifdef CONFIG_PCI_HOST_ITE8152
-diff --git a/include/asm-arm/arch-pxa/littleton.h b/include/asm-arm/arch-pxa/littleton.h
-new file mode 100644
-index 0000000..79d209b
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/littleton.h
-@@ -0,0 +1,6 @@
-+#ifndef __ASM_ARCH_ZYLONITE_H
-+#define __ASM_ARCH_ZYLONITE_H
-+
-+#define LITTLETON_ETH_PHYS 0x30000000
-+
-+#endif /* __ASM_ARCH_ZYLONITE_H */
-diff --git a/include/asm-arm/arch-pxa/magician.h b/include/asm-arm/arch-pxa/magician.h
-new file mode 100644
-index 0000000..337f51f
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/magician.h
-@@ -0,0 +1,111 @@
-+/*
-+ * GPIO and IRQ definitions for HTC Magician PDA phones
-+ *
-+ * Copyright (c) 2007 Philipp Zabel
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ *
-+ */
-+
-+#ifndef _MAGICIAN_H_
-+#define _MAGICIAN_H_
-+
-+#include <asm/arch/pxa-regs.h>
-+
-+/*
-+ * PXA GPIOs
-+ */
-+
-+#define GPIO0_MAGICIAN_KEY_POWER 0
-+#define GPIO9_MAGICIAN_UNKNOWN 9
-+#define GPIO10_MAGICIAN_GSM_IRQ 10
-+#define GPIO11_MAGICIAN_GSM_OUT1 11
-+#define GPIO13_MAGICIAN_CPLD_IRQ 13
-+#define GPIO18_MAGICIAN_UNKNOWN 18
-+#define GPIO22_MAGICIAN_VIBRA_EN 22
-+#define GPIO26_MAGICIAN_GSM_POWER 26
-+#define GPIO27_MAGICIAN_USBC_PUEN 27
-+#define GPIO30_MAGICIAN_nCHARGE_EN 30
-+#define GPIO37_MAGICIAN_KEY_HANGUP 37
-+#define GPIO38_MAGICIAN_KEY_CONTACTS 38
-+#define GPIO40_MAGICIAN_GSM_OUT2 40
-+#define GPIO48_MAGICIAN_UNKNOWN 48
-+#define GPIO56_MAGICIAN_UNKNOWN 56
-+#define GPIO57_MAGICIAN_CAM_RESET 57
-+#define GPIO83_MAGICIAN_nIR_EN 83
-+#define GPIO86_MAGICIAN_GSM_RESET 86
-+#define GPIO87_MAGICIAN_GSM_SELECT 87
-+#define GPIO90_MAGICIAN_KEY_CALENDAR 90
-+#define GPIO91_MAGICIAN_KEY_CAMERA 91
-+#define GPIO93_MAGICIAN_KEY_UP 93
-+#define GPIO94_MAGICIAN_KEY_DOWN 94
-+#define GPIO95_MAGICIAN_KEY_LEFT 95
-+#define GPIO96_MAGICIAN_KEY_RIGHT 96
-+#define GPIO97_MAGICIAN_KEY_ENTER 97
-+#define GPIO98_MAGICIAN_KEY_RECORD 98
-+#define GPIO99_MAGICIAN_HEADPHONE_IN 99
-+#define GPIO100_MAGICIAN_KEY_VOL_UP 100
-+#define GPIO101_MAGICIAN_KEY_VOL_DOWN 101
-+#define GPIO102_MAGICIAN_KEY_PHONE 102
-+#define GPIO103_MAGICIAN_LED_KP 103
-+#define GPIO104_MAGICIAN_LCD_POWER_1 104
-+#define GPIO105_MAGICIAN_LCD_POWER_2 105
-+#define GPIO106_MAGICIAN_LCD_POWER_3 106
-+#define GPIO107_MAGICIAN_DS1WM_IRQ 107
-+#define GPIO108_MAGICIAN_GSM_READY 108
-+#define GPIO114_MAGICIAN_UNKNOWN 114
-+#define GPIO115_MAGICIAN_nPEN_IRQ 115
-+#define GPIO116_MAGICIAN_nCAM_EN 116
-+#define GPIO119_MAGICIAN_UNKNOWN 119
-+#define GPIO120_MAGICIAN_UNKNOWN 120
-+
-+/*
-+ * PXA GPIO alternate function mode & direction
-+ */
-+
-+#define GPIO0_MAGICIAN_KEY_POWER_MD (0 | GPIO_IN)
-+#define GPIO9_MAGICIAN_UNKNOWN_MD (9 | GPIO_IN)
-+#define GPIO10_MAGICIAN_GSM_IRQ_MD (10 | GPIO_IN)
-+#define GPIO11_MAGICIAN_GSM_OUT1_MD (11 | GPIO_OUT)
-+#define GPIO13_MAGICIAN_CPLD_IRQ_MD (13 | GPIO_IN)
-+#define GPIO18_MAGICIAN_UNKNOWN_MD (18 | GPIO_OUT)
-+#define GPIO22_MAGICIAN_VIBRA_EN_MD (22 | GPIO_OUT)
-+#define GPIO26_MAGICIAN_GSM_POWER_MD (26 | GPIO_OUT)
-+#define GPIO27_MAGICIAN_USBC_PUEN_MD (27 | GPIO_OUT)
-+#define GPIO30_MAGICIAN_nCHARGE_EN_MD (30 | GPIO_OUT)
-+#define GPIO37_MAGICIAN_KEY_HANGUP_MD (37 | GPIO_OUT)
-+#define GPIO38_MAGICIAN_KEY_CONTACTS_MD (38 | GPIO_OUT)
-+#define GPIO40_MAGICIAN_GSM_OUT2_MD (40 | GPIO_OUT)
-+#define GPIO48_MAGICIAN_UNKNOWN_MD (48 | GPIO_OUT)
-+#define GPIO56_MAGICIAN_UNKNOWN_MD (56 | GPIO_OUT)
-+#define GPIO57_MAGICIAN_CAM_RESET_MD (57 | GPIO_OUT)
-+#define GPIO83_MAGICIAN_nIR_EN_MD (83 | GPIO_OUT)
-+#define GPIO86_MAGICIAN_GSM_RESET_MD (86 | GPIO_OUT)
-+#define GPIO87_MAGICIAN_GSM_SELECT_MD (87 | GPIO_OUT)
-+#define GPIO90_MAGICIAN_KEY_CALENDAR_MD (90 | GPIO_OUT)
-+#define GPIO91_MAGICIAN_KEY_CAMERA_MD (91 | GPIO_OUT)
-+#define GPIO93_MAGICIAN_KEY_UP_MD (93 | GPIO_IN)
-+#define GPIO94_MAGICIAN_KEY_DOWN_MD (94 | GPIO_IN)
-+#define GPIO95_MAGICIAN_KEY_LEFT_MD (95 | GPIO_IN)
-+#define GPIO96_MAGICIAN_KEY_RIGHT_MD (96 | GPIO_IN)
-+#define GPIO97_MAGICIAN_KEY_ENTER_MD (97 | GPIO_IN)
-+#define GPIO98_MAGICIAN_KEY_RECORD_MD (98 | GPIO_IN)
-+#define GPIO99_MAGICIAN_HEADPHONE_IN_MD (99 | GPIO_IN)
-+#define GPIO100_MAGICIAN_KEY_VOL_UP_MD (100 | GPIO_IN)
-+#define GPIO101_MAGICIAN_KEY_VOL_DOWN_MD (101 | GPIO_IN)
-+#define GPIO102_MAGICIAN_KEY_PHONE_MD (102 | GPIO_IN)
-+#define GPIO103_MAGICIAN_LED_KP_MD (103 | GPIO_OUT)
-+#define GPIO104_MAGICIAN_LCD_POWER_1_MD (104 | GPIO_OUT)
-+#define GPIO105_MAGICIAN_LCD_POWER_2_MD (105 | GPIO_OUT)
-+#define GPIO106_MAGICIAN_LCD_POWER_3_MD (106 | GPIO_OUT)
-+#define GPIO107_MAGICIAN_DS1WM_IRQ_MD (107 | GPIO_IN)
-+#define GPIO108_MAGICIAN_GSM_READY_MD (108 | GPIO_IN)
-+#define GPIO114_MAGICIAN_UNKNOWN_MD (114 | GPIO_OUT)
-+#define GPIO115_MAGICIAN_nPEN_IRQ_MD (115 | GPIO_IN)
-+#define GPIO116_MAGICIAN_nCAM_EN_MD (116 | GPIO_OUT)
-+#define GPIO119_MAGICIAN_UNKNOWN_MD (119 | GPIO_OUT)
-+#define GPIO120_MAGICIAN_UNKNOWN_MD (120 | GPIO_OUT)
+-#ifdef __ASSEMBLY__
+-#define PAGE_SIZE (1 << PAGE_SHIFT)
+-#else
+-#define PAGE_SIZE (1UL << PAGE_SHIFT)
+-#endif
+-
++#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
+ #define PAGE_MASK (~(PAGE_SIZE-1))
+ #define PTE_MASK PAGE_MASK
+
++/* to align the pointer to the (next) page boundary */
++#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+
-+#endif /* _MAGICIAN_H_ */
-diff --git a/include/asm-arm/arch-pxa/mfp-pxa300.h b/include/asm-arm/arch-pxa/mfp-pxa300.h
-index a209966..bb41031 100644
---- a/include/asm-arm/arch-pxa/mfp-pxa300.h
-+++ b/include/asm-arm/arch-pxa/mfp-pxa300.h
-@@ -16,6 +16,7 @@
- #define __ASM_ARCH_MFP_PXA300_H
+ #if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+ #define HPAGE_SHIFT 16
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
+@@ -45,6 +37,8 @@
+ #define HPAGE_SHIFT 22
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
+ #define HPAGE_SHIFT 26
++#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
++#define HPAGE_SHIFT 29
+ #endif
- #include <asm/arch/mfp.h>
-+#include <asm/arch/mfp-pxa3xx.h>
+ #ifdef CONFIG_HUGETLB_PAGE
+@@ -55,20 +49,12 @@
- /* GPIO */
- #define GPIO46_GPIO MFP_CFG(GPIO46, AF1)
-diff --git a/include/asm-arm/arch-pxa/mfp-pxa320.h b/include/asm-arm/arch-pxa/mfp-pxa320.h
-index 52deedc..576aa46 100644
---- a/include/asm-arm/arch-pxa/mfp-pxa320.h
-+++ b/include/asm-arm/arch-pxa/mfp-pxa320.h
-@@ -16,6 +16,7 @@
- #define __ASM_ARCH_MFP_PXA320_H
+ #ifndef __ASSEMBLY__
- #include <asm/arch/mfp.h>
-+#include <asm/arch/mfp-pxa3xx.h>
+-extern void (*clear_page)(void *to);
+-extern void (*copy_page)(void *to, void *from);
+-
+ extern unsigned long shm_align_mask;
+ extern unsigned long max_low_pfn, min_low_pfn;
+ extern unsigned long memory_start, memory_end;
+
+-#ifdef CONFIG_MMU
+-extern void clear_page_slow(void *to);
+-extern void copy_page_slow(void *to, void *from);
+-#else
+-extern void clear_page_nommu(void *to);
+-extern void copy_page_nommu(void *to, void *from);
+-#endif
++extern void clear_page(void *to);
++extern void copy_page(void *to, void *from);
+
+ #if !defined(CONFIG_CACHE_OFF) && defined(CONFIG_MMU) && \
+ (defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB))
+@@ -96,12 +82,18 @@ typedef struct { unsigned long long pgd; } pgd_t;
+ ((x).pte_low | ((unsigned long long)(x).pte_high << 32))
+ #define __pte(x) \
+ ({ pte_t __pte = {(x), ((unsigned long long)(x)) >> 32}; __pte; })
+-#else
++#elif defined(CONFIG_SUPERH32)
+ typedef struct { unsigned long pte_low; } pte_t;
+ typedef struct { unsigned long pgprot; } pgprot_t;
+ typedef struct { unsigned long pgd; } pgd_t;
+ #define pte_val(x) ((x).pte_low)
+-#define __pte(x) ((pte_t) { (x) } )
++#define __pte(x) ((pte_t) { (x) } )
++#else
++typedef struct { unsigned long long pte_low; } pte_t;
++typedef struct { unsigned long pgprot; } pgprot_t;
++typedef struct { unsigned long pgd; } pgd_t;
++#define pte_val(x) ((x).pte_low)
++#define __pte(x) ((pte_t) { (x) } )
+ #endif
+
+ #define pgd_val(x) ((x).pgd)
+@@ -112,28 +104,44 @@ typedef struct { unsigned long pgd; } pgd_t;
+
+ #endif /* !__ASSEMBLY__ */
+
+-/* to align the pointer to the (next) page boundary */
+-#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+-
+ /*
+- * IF YOU CHANGE THIS, PLEASE ALSO CHANGE
+- *
+- * arch/sh/kernel/vmlinux.lds.S
+- *
+- * which has the same constant encoded..
++ * __MEMORY_START and SIZE are the physical addresses and size of RAM.
+ */
+-
+ #define __MEMORY_START CONFIG_MEMORY_START
+ #define __MEMORY_SIZE CONFIG_MEMORY_SIZE
- /* GPIO */
- #define GPIO46_GPIO MFP_CFG(GPIO46, AF0)
-diff --git a/include/asm-arm/arch-pxa/mfp-pxa3xx.h b/include/asm-arm/arch-pxa/mfp-pxa3xx.h
-new file mode 100644
-index 0000000..1f6b35c
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/mfp-pxa3xx.h
-@@ -0,0 +1,252 @@
-+#ifndef __ASM_ARCH_MFP_PXA3XX_H
-+#define __ASM_ARCH_MFP_PXA3XX_H
-+
-+#define MFPR_BASE (0x40e10000)
-+#define MFPR_SIZE (PAGE_SIZE)
-+
-+/* MFPR register bit definitions */
-+#define MFPR_PULL_SEL (0x1 << 15)
-+#define MFPR_PULLUP_EN (0x1 << 14)
-+#define MFPR_PULLDOWN_EN (0x1 << 13)
-+#define MFPR_SLEEP_SEL (0x1 << 9)
-+#define MFPR_SLEEP_OE_N (0x1 << 7)
-+#define MFPR_EDGE_CLEAR (0x1 << 6)
-+#define MFPR_EDGE_FALL_EN (0x1 << 5)
-+#define MFPR_EDGE_RISE_EN (0x1 << 4)
-+
-+#define MFPR_SLEEP_DATA(x) ((x) << 8)
-+#define MFPR_DRIVE(x) (((x) & 0x7) << 10)
-+#define MFPR_AF_SEL(x) (((x) & 0x7) << 0)
-+
-+#define MFPR_EDGE_NONE (0)
-+#define MFPR_EDGE_RISE (MFPR_EDGE_RISE_EN)
-+#define MFPR_EDGE_FALL (MFPR_EDGE_FALL_EN)
-+#define MFPR_EDGE_BOTH (MFPR_EDGE_RISE | MFPR_EDGE_FALL)
-+
+/*
-+ * Table that determines the low power modes outputs, with actual settings
-+ * used in parentheses for don't-care values. Except for the float output,
-+ * the configured driven and pulled levels match, so if there is a need for
-+ * non-LPM pulled output, the same configuration could probably be used.
-+ *
-+ * Output value sleep_oe_n sleep_data pullup_en pulldown_en pull_sel
-+ * (bit 7) (bit 8) (bit 14) (bit 13) (bit 15)
-+ *
-+ * Input 0 X(0) X(0) X(0) 0
-+ * Drive 0 0 0 0 X(1) 0
-+ * Drive 1 0 1 X(1) 0 0
-+ * Pull hi (1) 1 X(1) 1 0 0
-+ * Pull lo (0) 1 X(0) 0 1 0
-+ * Z (float) 1 X(0) 0 0 0
++ * PAGE_OFFSET is the virtual address of the start of kernel address
++ * space.
+ */
-+#define MFPR_LPM_INPUT (0)
-+#define MFPR_LPM_DRIVE_LOW (MFPR_SLEEP_DATA(0) | MFPR_PULLDOWN_EN)
-+#define MFPR_LPM_DRIVE_HIGH (MFPR_SLEEP_DATA(1) | MFPR_PULLUP_EN)
-+#define MFPR_LPM_PULL_LOW (MFPR_LPM_DRIVE_LOW | MFPR_SLEEP_OE_N)
-+#define MFPR_LPM_PULL_HIGH (MFPR_LPM_DRIVE_HIGH | MFPR_SLEEP_OE_N)
-+#define MFPR_LPM_FLOAT (MFPR_SLEEP_OE_N)
-+#define MFPR_LPM_MASK (0xe080)
-+
+ #define PAGE_OFFSET CONFIG_PAGE_OFFSET
+-#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
+-#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
+-#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
+
+/*
-+ * The pullup and pulldown state of the MFP pin at run mode is by default
-+ * determined by the selected alternate function. In case that some buggy
-+ * devices need to override this default behavior, the definitions below
-+ * indicates the setting of corresponding MFPR bits
++ * Virtual to physical RAM address translation.
+ *
-+ * Definition pull_sel pullup_en pulldown_en
-+ * MFPR_PULL_NONE 0 0 0
-+ * MFPR_PULL_LOW 1 0 1
-+ * MFPR_PULL_HIGH 1 1 0
-+ * MFPR_PULL_BOTH 1 1 1
-+ */
-+#define MFPR_PULL_NONE (0)
-+#define MFPR_PULL_LOW (MFPR_PULL_SEL | MFPR_PULLDOWN_EN)
-+#define MFPR_PULL_BOTH (MFPR_PULL_LOW | MFPR_PULLUP_EN)
-+#define MFPR_PULL_HIGH (MFPR_PULL_SEL | MFPR_PULLUP_EN)
-+
-+/* PXA3xx common MFP configurations - processor specific ones defined
-+ * in mfp-pxa300.h and mfp-pxa320.h
++ * In 29 bit mode, the physical offset of RAM from address 0 is visible in
++ * the kernel virtual address space, and thus we don't have to take
++ * this into account when translating. However in 32 bit mode this offset
++ * is not visible (it is part of the PMB mapping) and so needs to be
++ * added or subtracted as required.
+ */
-+#define GPIO0_GPIO MFP_CFG(GPIO0, AF0)
-+#define GPIO1_GPIO MFP_CFG(GPIO1, AF0)
-+#define GPIO2_GPIO MFP_CFG(GPIO2, AF0)
-+#define GPIO3_GPIO MFP_CFG(GPIO3, AF0)
-+#define GPIO4_GPIO MFP_CFG(GPIO4, AF0)
-+#define GPIO5_GPIO MFP_CFG(GPIO5, AF0)
-+#define GPIO6_GPIO MFP_CFG(GPIO6, AF0)
-+#define GPIO7_GPIO MFP_CFG(GPIO7, AF0)
-+#define GPIO8_GPIO MFP_CFG(GPIO8, AF0)
-+#define GPIO9_GPIO MFP_CFG(GPIO9, AF0)
-+#define GPIO10_GPIO MFP_CFG(GPIO10, AF0)
-+#define GPIO11_GPIO MFP_CFG(GPIO11, AF0)
-+#define GPIO12_GPIO MFP_CFG(GPIO12, AF0)
-+#define GPIO13_GPIO MFP_CFG(GPIO13, AF0)
-+#define GPIO14_GPIO MFP_CFG(GPIO14, AF0)
-+#define GPIO15_GPIO MFP_CFG(GPIO15, AF0)
-+#define GPIO16_GPIO MFP_CFG(GPIO16, AF0)
-+#define GPIO17_GPIO MFP_CFG(GPIO17, AF0)
-+#define GPIO18_GPIO MFP_CFG(GPIO18, AF0)
-+#define GPIO19_GPIO MFP_CFG(GPIO19, AF0)
-+#define GPIO20_GPIO MFP_CFG(GPIO20, AF0)
-+#define GPIO21_GPIO MFP_CFG(GPIO21, AF0)
-+#define GPIO22_GPIO MFP_CFG(GPIO22, AF0)
-+#define GPIO23_GPIO MFP_CFG(GPIO23, AF0)
-+#define GPIO24_GPIO MFP_CFG(GPIO24, AF0)
-+#define GPIO25_GPIO MFP_CFG(GPIO25, AF0)
-+#define GPIO26_GPIO MFP_CFG(GPIO26, AF0)
-+#define GPIO27_GPIO MFP_CFG(GPIO27, AF0)
-+#define GPIO28_GPIO MFP_CFG(GPIO28, AF0)
-+#define GPIO29_GPIO MFP_CFG(GPIO29, AF0)
-+#define GPIO30_GPIO MFP_CFG(GPIO30, AF0)
-+#define GPIO31_GPIO MFP_CFG(GPIO31, AF0)
-+#define GPIO32_GPIO MFP_CFG(GPIO32, AF0)
-+#define GPIO33_GPIO MFP_CFG(GPIO33, AF0)
-+#define GPIO34_GPIO MFP_CFG(GPIO34, AF0)
-+#define GPIO35_GPIO MFP_CFG(GPIO35, AF0)
-+#define GPIO36_GPIO MFP_CFG(GPIO36, AF0)
-+#define GPIO37_GPIO MFP_CFG(GPIO37, AF0)
-+#define GPIO38_GPIO MFP_CFG(GPIO38, AF0)
-+#define GPIO39_GPIO MFP_CFG(GPIO39, AF0)
-+#define GPIO40_GPIO MFP_CFG(GPIO40, AF0)
-+#define GPIO41_GPIO MFP_CFG(GPIO41, AF0)
-+#define GPIO42_GPIO MFP_CFG(GPIO42, AF0)
-+#define GPIO43_GPIO MFP_CFG(GPIO43, AF0)
-+#define GPIO44_GPIO MFP_CFG(GPIO44, AF0)
-+#define GPIO45_GPIO MFP_CFG(GPIO45, AF0)
-+
-+#define GPIO47_GPIO MFP_CFG(GPIO47, AF0)
-+#define GPIO48_GPIO MFP_CFG(GPIO48, AF0)
-+
-+#define GPIO53_GPIO MFP_CFG(GPIO53, AF0)
-+#define GPIO54_GPIO MFP_CFG(GPIO54, AF0)
-+#define GPIO55_GPIO MFP_CFG(GPIO55, AF0)
-+
-+#define GPIO57_GPIO MFP_CFG(GPIO57, AF0)
-+
-+#define GPIO63_GPIO MFP_CFG(GPIO63, AF0)
-+#define GPIO64_GPIO MFP_CFG(GPIO64, AF0)
-+#define GPIO65_GPIO MFP_CFG(GPIO65, AF0)
-+#define GPIO66_GPIO MFP_CFG(GPIO66, AF0)
-+#define GPIO67_GPIO MFP_CFG(GPIO67, AF0)
-+#define GPIO68_GPIO MFP_CFG(GPIO68, AF0)
-+#define GPIO69_GPIO MFP_CFG(GPIO69, AF0)
-+#define GPIO70_GPIO MFP_CFG(GPIO70, AF0)
-+#define GPIO71_GPIO MFP_CFG(GPIO71, AF0)
-+#define GPIO72_GPIO MFP_CFG(GPIO72, AF0)
-+#define GPIO73_GPIO MFP_CFG(GPIO73, AF0)
-+#define GPIO74_GPIO MFP_CFG(GPIO74, AF0)
-+#define GPIO75_GPIO MFP_CFG(GPIO75, AF0)
-+#define GPIO76_GPIO MFP_CFG(GPIO76, AF0)
-+#define GPIO77_GPIO MFP_CFG(GPIO77, AF0)
-+#define GPIO78_GPIO MFP_CFG(GPIO78, AF0)
-+#define GPIO79_GPIO MFP_CFG(GPIO79, AF0)
-+#define GPIO80_GPIO MFP_CFG(GPIO80, AF0)
-+#define GPIO81_GPIO MFP_CFG(GPIO81, AF0)
-+#define GPIO82_GPIO MFP_CFG(GPIO82, AF0)
-+#define GPIO83_GPIO MFP_CFG(GPIO83, AF0)
-+#define GPIO84_GPIO MFP_CFG(GPIO84, AF0)
-+#define GPIO85_GPIO MFP_CFG(GPIO85, AF0)
-+#define GPIO86_GPIO MFP_CFG(GPIO86, AF0)
-+#define GPIO87_GPIO MFP_CFG(GPIO87, AF0)
-+#define GPIO88_GPIO MFP_CFG(GPIO88, AF0)
-+#define GPIO89_GPIO MFP_CFG(GPIO89, AF0)
-+#define GPIO90_GPIO MFP_CFG(GPIO90, AF0)
-+#define GPIO91_GPIO MFP_CFG(GPIO91, AF0)
-+#define GPIO92_GPIO MFP_CFG(GPIO92, AF0)
-+#define GPIO93_GPIO MFP_CFG(GPIO93, AF0)
-+#define GPIO94_GPIO MFP_CFG(GPIO94, AF0)
-+#define GPIO95_GPIO MFP_CFG(GPIO95, AF0)
-+#define GPIO96_GPIO MFP_CFG(GPIO96, AF0)
-+#define GPIO97_GPIO MFP_CFG(GPIO97, AF0)
-+#define GPIO98_GPIO MFP_CFG(GPIO98, AF0)
-+#define GPIO99_GPIO MFP_CFG(GPIO99, AF0)
-+#define GPIO100_GPIO MFP_CFG(GPIO100, AF0)
-+#define GPIO101_GPIO MFP_CFG(GPIO101, AF0)
-+#define GPIO102_GPIO MFP_CFG(GPIO102, AF0)
-+#define GPIO103_GPIO MFP_CFG(GPIO103, AF0)
-+#define GPIO104_GPIO MFP_CFG(GPIO104, AF0)
-+#define GPIO105_GPIO MFP_CFG(GPIO105, AF0)
-+#define GPIO106_GPIO MFP_CFG(GPIO106, AF0)
-+#define GPIO107_GPIO MFP_CFG(GPIO107, AF0)
-+#define GPIO108_GPIO MFP_CFG(GPIO108, AF0)
-+#define GPIO109_GPIO MFP_CFG(GPIO109, AF0)
-+#define GPIO110_GPIO MFP_CFG(GPIO110, AF0)
-+#define GPIO111_GPIO MFP_CFG(GPIO111, AF0)
-+#define GPIO112_GPIO MFP_CFG(GPIO112, AF0)
-+#define GPIO113_GPIO MFP_CFG(GPIO113, AF0)
-+#define GPIO114_GPIO MFP_CFG(GPIO114, AF0)
-+#define GPIO115_GPIO MFP_CFG(GPIO115, AF0)
-+#define GPIO116_GPIO MFP_CFG(GPIO116, AF0)
-+#define GPIO117_GPIO MFP_CFG(GPIO117, AF0)
-+#define GPIO118_GPIO MFP_CFG(GPIO118, AF0)
-+#define GPIO119_GPIO MFP_CFG(GPIO119, AF0)
-+#define GPIO120_GPIO MFP_CFG(GPIO120, AF0)
-+#define GPIO121_GPIO MFP_CFG(GPIO121, AF0)
-+#define GPIO122_GPIO MFP_CFG(GPIO122, AF0)
-+#define GPIO123_GPIO MFP_CFG(GPIO123, AF0)
-+#define GPIO124_GPIO MFP_CFG(GPIO124, AF0)
-+#define GPIO125_GPIO MFP_CFG(GPIO125, AF0)
-+#define GPIO126_GPIO MFP_CFG(GPIO126, AF0)
-+#define GPIO127_GPIO MFP_CFG(GPIO127, AF0)
-+
-+#define GPIO0_2_GPIO MFP_CFG(GPIO0_2, AF0)
-+#define GPIO1_2_GPIO MFP_CFG(GPIO1_2, AF0)
-+#define GPIO2_2_GPIO MFP_CFG(GPIO2_2, AF0)
-+#define GPIO3_2_GPIO MFP_CFG(GPIO3_2, AF0)
-+#define GPIO4_2_GPIO MFP_CFG(GPIO4_2, AF0)
-+#define GPIO5_2_GPIO MFP_CFG(GPIO5_2, AF0)
-+#define GPIO6_2_GPIO MFP_CFG(GPIO6_2, AF0)
++#ifdef CONFIG_32BIT
++#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET+__MEMORY_START)
++#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET-__MEMORY_START))
++#else
++#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
++#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
++#endif
+
++#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
+ #define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
+
+-/* PFN start number, because of __MEMORY_START */
+/*
-+ * each MFP pin will have a MFPR register, since the offset of the
-+ * register varies between processors, the processor specific code
-+ * should initialize the pin offsets by pxa3xx_mfp_init_addr()
-+ *
-+ * pxa3xx_mfp_init_addr - accepts a table of "pxa3xx_mfp_addr_map"
-+ * structure, which represents a range of MFP pins from "start" to
-+ * "end", with the offset begining at "offset", to define a single
-+ * pin, let "end" = -1
-+ *
-+ * use
-+ *
-+ * MFP_ADDR_X() to define a range of pins
-+ * MFP_ADDR() to define a single pin
-+ * MFP_ADDR_END to signal the end of pin offset definitions
++ * PFN = physical frame number (ie PFN 0 == physical address 0)
++ * PFN_START is the PFN of the first page of RAM. By defining this we
++ * don't have struct page entries for the portion of address space
++ * between physical address 0 and the start of RAM.
+ */
-+struct pxa3xx_mfp_addr_map {
-+ unsigned int start;
-+ unsigned int end;
-+ unsigned long offset;
-+};
-+
-+#define MFP_ADDR_X(start, end, offset) \
-+ { MFP_PIN_##start, MFP_PIN_##end, offset }
-+
-+#define MFP_ADDR(pin, offset) \
-+ { MFP_PIN_##pin, -1, offset }
-+
-+#define MFP_ADDR_END { MFP_PIN_INVALID, 0 }
+ #define PFN_START (__MEMORY_START >> PAGE_SHIFT)
+ #define ARCH_PFN_OFFSET (PFN_START)
+ #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+@@ -154,11 +162,21 @@ typedef struct { unsigned long pgd; } pgd_t;
+ #endif
+
+ /*
+- * Slub defaults to 8-byte alignment, we're only interested in 4.
+- * Slab defaults to BYTES_PER_WORD, which ends up being the same anyways.
++ * Some drivers need to perform DMA into kmalloc'ed buffers
++ * and so we have to increase the kmalloc minalign for this.
+ */
+-#define ARCH_KMALLOC_MINALIGN 4
+-#define ARCH_SLAB_MINALIGN 4
++#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
+
++#ifdef CONFIG_SUPERH64
+/*
-+ * pxa3xx_mfp_read()/pxa3xx_mfp_write() - for direct read/write access
-+ * to the MFPR register
++ * While BYTES_PER_WORD == 4 on the current sh64 ABI, GCC will still
++ * happily generate {ld/st}.q pairs, requiring us to have 8-byte
++ * alignment to avoid traps. The kmalloc alignment is guaranteed by
++ * virtue of L1_CACHE_BYTES, requiring this to only be special cased
++ * for slab caches.
+ */
-+unsigned long pxa3xx_mfp_read(int mfp);
-+void pxa3xx_mfp_write(int mfp, unsigned long mfpr_val);
-+
-+/*
-+ * pxa3xx_mfp_config - configure the MFPR registers
-+ *
-+ * used by board specific initialization code
++#define ARCH_SLAB_MINALIGN 8
++#endif
+
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_SH_PAGE_H */
+diff --git a/include/asm-sh/param.h b/include/asm-sh/param.h
+index 1012296..ae245af 100644
+--- a/include/asm-sh/param.h
++++ b/include/asm-sh/param.h
+@@ -2,11 +2,7 @@
+ #define __ASM_SH_PARAM_H
+
+ #ifdef __KERNEL__
+-# ifdef CONFIG_SH_WDT
+-# define HZ 1000 /* Needed for high-res WOVF */
+-# else
+-# define HZ CONFIG_HZ
+-# endif
++# define HZ CONFIG_HZ
+ # define USER_HZ 100 /* User interfaces are in "ticks" */
+ # define CLOCKS_PER_SEC (USER_HZ) /* frequency at which times() counts */
+ #endif
+diff --git a/include/asm-sh/pci.h b/include/asm-sh/pci.h
+index 2757ce0..df1d383 100644
+--- a/include/asm-sh/pci.h
++++ b/include/asm-sh/pci.h
+@@ -38,9 +38,12 @@ extern struct pci_channel board_pci_channels[];
+ #if defined(CONFIG_CPU_SUBTYPE_SH7780) || defined(CONFIG_CPU_SUBTYPE_SH7785)
+ #define PCI_IO_AREA 0xFE400000
+ #define PCI_IO_SIZE 0x00400000
++#elif defined(CONFIG_CPU_SH5)
++extern unsigned long PCI_IO_AREA;
++#define PCI_IO_SIZE 0x00010000
+ #else
+ #define PCI_IO_AREA 0xFE240000
+-#define PCI_IO_SIZE 0X00040000
++#define PCI_IO_SIZE 0x00040000
+ #endif
+
+ #define PCI_MEM_SIZE 0x01000000
+diff --git a/include/asm-sh/pgtable.h b/include/asm-sh/pgtable.h
+index 8f1e8be..a4a8f8b 100644
+--- a/include/asm-sh/pgtable.h
++++ b/include/asm-sh/pgtable.h
+@@ -3,7 +3,7 @@
+ * use the SuperH page table tree.
+ *
+ * Copyright (C) 1999 Niibe Yutaka
+- * Copyright (C) 2002 - 2005 Paul Mundt
++ * Copyright (C) 2002 - 2007 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General
+ * Public License. See the file "COPYING" in the main directory of this
+@@ -29,10 +29,27 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #endif /* !__ASSEMBLY__ */
+
+ /*
++ * Effective and physical address definitions, to aid with sign
++ * extension.
+ */
-+void pxa3xx_mfp_config(unsigned long *mfp_cfgs, int num);
++#define NEFF 32
++#define NEFF_SIGN (1LL << (NEFF - 1))
++#define NEFF_MASK (-1LL << NEFF)
++
++#ifdef CONFIG_29BIT
++#define NPHYS 29
++#else
++#define NPHYS 32
++#endif
++
++#define NPHYS_SIGN (1LL << (NPHYS - 1))
++#define NPHYS_MASK (-1LL << NPHYS)
+
+/*
-+ * pxa3xx_mfp_init_addr() - initialize the mapping between mfp pin
-+ * index and MFPR register offset
-+ *
-+ * used by processor specific code
-+ */
-+void __init pxa3xx_mfp_init_addr(struct pxa3xx_mfp_addr_map *);
-+void __init pxa3xx_init_mfp(void);
-+#endif /* __ASM_ARCH_MFP_PXA3XX_H */
-diff --git a/include/asm-arm/arch-pxa/mfp.h b/include/asm-arm/arch-pxa/mfp.h
-index 03c508d..02f6157 100644
---- a/include/asm-arm/arch-pxa/mfp.h
-+++ b/include/asm-arm/arch-pxa/mfp.h
-@@ -16,9 +16,6 @@
- #ifndef __ASM_ARCH_MFP_H
- #define __ASM_ARCH_MFP_H
+ * traditional two-level paging structure
+ */
+ /* PTE bits */
+-#ifdef CONFIG_X2TLB
++#if defined(CONFIG_X2TLB) || defined(CONFIG_SUPERH64)
+ # define PTE_MAGNITUDE 3 /* 64-bit PTEs on extended mode SH-X2 TLB */
+ #else
+ # define PTE_MAGNITUDE 2 /* 32-bit PTEs */
+@@ -52,283 +69,27 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+ #define FIRST_USER_ADDRESS 0
--#define MFPR_BASE (0x40e10000)
--#define MFPR_SIZE (PAGE_SIZE)
+-#define PTE_PHYS_MASK (0x20000000 - PAGE_SIZE)
-
- #define mfp_to_gpio(m) ((m) % 128)
-
- /* list of all the configurable MFP pins */
-@@ -217,114 +214,21 @@ enum {
- };
-
- /*
-- * Table that determines the low power modes outputs, with actual settings
-- * used in parentheses for don't-care values. Except for the float output,
-- * the configured driven and pulled levels match, so if there is a need for
-- * non-LPM pulled output, the same configuration could probably be used.
-- *
-- * Output value sleep_oe_n sleep_data pullup_en pulldown_en pull_sel
-- * (bit 7) (bit 8) (bit 14d) (bit 13d)
-- *
-- * Drive 0 0 0 0 X (1) 0
-- * Drive 1 0 1 X (1) 0 0
-- * Pull hi (1) 1 X(1) 1 0 0
-- * Pull lo (0) 1 X(0) 0 1 0
-- * Z (float) 1 X(0) 0 0 0
-- */
--#define MFP_LPM_DRIVE_LOW 0x8
--#define MFP_LPM_DRIVE_HIGH 0x6
--#define MFP_LPM_PULL_HIGH 0x7
--#define MFP_LPM_PULL_LOW 0x9
--#define MFP_LPM_FLOAT 0x1
--#define MFP_LPM_PULL_NEITHER 0x0
+-#define VMALLOC_START (P3SEG)
+-#define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
-
-/*
-- * The pullup and pulldown state of the MFP pin is by default determined by
-- * selected alternate function. In case some buggy devices need to override
-- * this default behavior, pxa3xx_mfp_set_pull() can be invoked with one of
-- * the following definition as the parameter.
+- * Linux PTEL encoding.
- *
-- * Definition pull_sel pullup_en pulldown_en
-- * MFP_PULL_HIGH 1 1 0
-- * MFP_PULL_LOW 1 0 1
-- * MFP_PULL_BOTH 1 1 1
-- * MFP_PULL_NONE 1 0 0
-- * MFP_PULL_DEFAULT 0 X X
+- * Hardware and software bit definitions for the PTEL value (see below for
+- * notes on SH-X2 MMUs and 64-bit PTEs):
- *
-- * NOTE: pxa3xx_mfp_set_pull() will modify the PULLUP_EN and PULLDOWN_EN
-- * bits, which will cause potential conflicts with the low power mode
-- * setting, device drivers should take care of this
+- * - Bits 0 and 7 are reserved on SH-3 (_PAGE_WT and _PAGE_SZ1 on SH-4).
+- *
+- * - Bit 1 is the SH-bit, but is unused on SH-3 due to an MMU bug (the
+- * hardware PTEL value can't have the SH-bit set when MMUCR.IX is set,
+- * which is the default in cpu-sh3/mmu_context.h:MMU_CONTROL_INIT).
+- *
+- * In order to keep this relatively clean, do not use these for defining
+- * SH-3 specific flags until all of the other unused bits have been
+- * exhausted.
+- *
+- * - Bit 9 is reserved by everyone and used by _PAGE_PROTNONE.
+- *
+- * - Bits 10 and 11 are low bits of the PPN that are reserved on >= 4K pages.
+- * Bit 10 is used for _PAGE_ACCESSED, bit 11 remains unused.
+- *
+- * - Bits 31, 30, and 29 remain unused by everyone and can be used for future
+- * software flags, although care must be taken to update _PAGE_CLEAR_FLAGS.
+- *
+- * XXX: Leave the _PAGE_FILE and _PAGE_WT overhaul for a rainy day.
+- *
+- * SH-X2 MMUs and extended PTEs
+- *
+- * SH-X2 supports an extended mode TLB with split data arrays due to the
+- * number of bits needed for PR and SZ (now EPR and ESZ) encodings. The PR and
+- * SZ bit placeholders still exist in data array 1, but are implemented as
+- * reserved bits, with the real logic existing in data array 2.
+- *
+- * The downside to this is that we can no longer fit everything in to a 32-bit
+- * PTE encoding, so a 64-bit pte_t is necessary for these parts. On the plus
+- * side, this gives us quite a few spare bits to play with for future usage.
- */
--#define MFP_PULL_BOTH (0x7u)
--#define MFP_PULL_HIGH (0x6u)
--#define MFP_PULL_LOW (0x5u)
--#define MFP_PULL_NONE (0x4u)
--#define MFP_PULL_DEFAULT (0x0u)
+-/* Legacy and compat mode bits */
+-#define _PAGE_WT 0x001 /* WT-bit on SH-4, 0 on SH-3 */
+-#define _PAGE_HW_SHARED 0x002 /* SH-bit : shared among processes */
+-#define _PAGE_DIRTY 0x004 /* D-bit : page changed */
+-#define _PAGE_CACHABLE 0x008 /* C-bit : cachable */
+-#define _PAGE_SZ0 0x010 /* SZ0-bit : Size of page */
+-#define _PAGE_RW 0x020 /* PR0-bit : write access allowed */
+-#define _PAGE_USER 0x040 /* PR1-bit : user space access allowed*/
+-#define _PAGE_SZ1 0x080 /* SZ1-bit : Size of page (on SH-4) */
+-#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
+-#define _PAGE_PROTNONE 0x200 /* software: if not present */
+-#define _PAGE_ACCESSED 0x400 /* software: page referenced */
+-#define _PAGE_FILE _PAGE_WT /* software: pagecache or swap? */
-
--#define MFP_AF0 (0)
--#define MFP_AF1 (1)
--#define MFP_AF2 (2)
--#define MFP_AF3 (3)
--#define MFP_AF4 (4)
--#define MFP_AF5 (5)
--#define MFP_AF6 (6)
--#define MFP_AF7 (7)
+-#define _PAGE_SZ_MASK (_PAGE_SZ0 | _PAGE_SZ1)
+-#define _PAGE_PR_MASK (_PAGE_RW | _PAGE_USER)
-
--#define MFP_DS01X (0)
--#define MFP_DS02X (1)
--#define MFP_DS03X (2)
--#define MFP_DS04X (3)
--#define MFP_DS06X (4)
--#define MFP_DS08X (5)
--#define MFP_DS10X (6)
--#define MFP_DS12X (7)
+-/* Extended mode bits */
+-#define _PAGE_EXT_ESZ0 0x0010 /* ESZ0-bit: Size of page */
+-#define _PAGE_EXT_ESZ1 0x0020 /* ESZ1-bit: Size of page */
+-#define _PAGE_EXT_ESZ2 0x0040 /* ESZ2-bit: Size of page */
+-#define _PAGE_EXT_ESZ3 0x0080 /* ESZ3-bit: Size of page */
-
--#define MFP_EDGE_BOTH 0x3
--#define MFP_EDGE_RISE 0x2
--#define MFP_EDGE_FALL 0x1
--#define MFP_EDGE_NONE 0x0
+-#define _PAGE_EXT_USER_EXEC 0x0100 /* EPR0-bit: User space executable */
+-#define _PAGE_EXT_USER_WRITE 0x0200 /* EPR1-bit: User space writable */
+-#define _PAGE_EXT_USER_READ 0x0400 /* EPR2-bit: User space readable */
-
--#define MFPR_AF_MASK 0x0007
--#define MFPR_DRV_MASK 0x1c00
--#define MFPR_RDH_MASK 0x0200
--#define MFPR_LPM_MASK 0xe180
--#define MFPR_PULL_MASK 0xe000
--#define MFPR_EDGE_MASK 0x0070
+-#define _PAGE_EXT_KERN_EXEC 0x0800 /* EPR3-bit: Kernel space executable */
+-#define _PAGE_EXT_KERN_WRITE 0x1000 /* EPR4-bit: Kernel space writable */
+-#define _PAGE_EXT_KERN_READ 0x2000 /* EPR5-bit: Kernel space readable */
-
--#define MFPR_ALT_OFFSET 0
--#define MFPR_ERE_OFFSET 4
--#define MFPR_EFE_OFFSET 5
--#define MFPR_EC_OFFSET 6
--#define MFPR_SON_OFFSET 7
--#define MFPR_SD_OFFSET 8
--#define MFPR_SS_OFFSET 9
--#define MFPR_DRV_OFFSET 10
--#define MFPR_PD_OFFSET 13
--#define MFPR_PU_OFFSET 14
--#define MFPR_PS_OFFSET 15
+-/* Wrapper for extended mode pgprot twiddling */
+-#define _PAGE_EXT(x) ((unsigned long long)(x) << 32)
-
--#define MFPR(af, drv, rdh, lpm, edge) \
-- (((af) & 0x7) | (((drv) & 0x7) << 10) |\
-- (((rdh) & 0x1) << 9) |\
-- (((lpm) & 0x3) << 7) |\
-- (((lpm) & 0x4) << 12)|\
-- (((lpm) & 0x8) << 10)|\
-- ((!(edge)) << 6) |\
-- (((edge) & 0x1) << 5) |\
-- (((edge) & 0x2) << 3))
+-/* software: moves to PTEA.TC (Timing Control) */
+-#define _PAGE_PCC_AREA5 0x00000000 /* use BSC registers for area5 */
+-#define _PAGE_PCC_AREA6 0x80000000 /* use BSC registers for area6 */
-
--/*
- * a possible MFP configuration is represented by a 32-bit integer
-- * bit 0..15 - MFPR value (16-bit)
-- * bit 16..31 - mfp pin index (used to obtain the MFPR offset)
-+ *
-+ * bit 0.. 9 - MFP Pin Number (1024 Pins Maximum)
-+ * bit 10..12 - Alternate Function Selection
-+ * bit 13..15 - Drive Strength
-+ * bit 16..18 - Low Power Mode State
-+ * bit 19..20 - Low Power Mode Edge Detection
-+ * bit 21..22 - Run Mode Pull State
- *
- * to facilitate the definition, the following macros are provided
- *
-- * MFPR_DEFAULT - default MFPR value, with
-+ * MFP_CFG_DEFAULT - default MFP configuration value, with
- * alternate function = 0,
-- * drive strength = fast 1mA (MFP_DS01X)
-+ * drive strength = fast 3mA (MFP_DS03X)
- * low power mode = default
-- * release dalay hold = false (RDH bit)
- * edge detection = none
- *
- * MFP_CFG - default MFPR value with alternate function
-@@ -334,251 +238,74 @@ enum {
- * low power mode
- * MFP_CFG_X - default MFPR value with alternate function,
- * pin drive strength and low power mode
-- *
-- * use
-- *
-- * MFP_CFG_PIN - to get the MFP pin index
-- * MFP_CFG_VAL - to get the corresponding MFPR value
- */
-
--typedef uint32_t mfp_cfg_t;
+-/* software: moves to PTEA.SA[2:0] (Space Attributes) */
+-#define _PAGE_PCC_IODYN 0x00000001 /* IO space, dynamically sized bus */
+-#define _PAGE_PCC_IO8 0x20000000 /* IO space, 8 bit bus */
+-#define _PAGE_PCC_IO16 0x20000001 /* IO space, 16 bit bus */
+-#define _PAGE_PCC_COM8 0x40000000 /* Common Memory space, 8 bit bus */
+-#define _PAGE_PCC_COM16 0x40000001 /* Common Memory space, 16 bit bus */
+-#define _PAGE_PCC_ATR8 0x60000000 /* Attribute Memory space, 8 bit bus */
+-#define _PAGE_PCC_ATR16 0x60000001 /* Attribute Memory space, 6 bit bus */
-
--#define MFP_CFG_PIN(mfp_cfg) (((mfp_cfg) >> 16) & 0xffff)
--#define MFP_CFG_VAL(mfp_cfg) ((mfp_cfg) & 0xffff)
+-/* Mask which drops unused bits from the PTEL value */
+-#if defined(CONFIG_CPU_SH3)
+-#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED| \
+- _PAGE_FILE | _PAGE_SZ1 | \
+- _PAGE_HW_SHARED)
+-#elif defined(CONFIG_X2TLB)
+-/* Get rid of the legacy PR/SZ bits when using extended mode */
+-#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | \
+- _PAGE_FILE | _PAGE_PR_MASK | _PAGE_SZ_MASK)
++#ifdef CONFIG_32BIT
++#define PHYS_ADDR_MASK 0xffffffff
+ #else
+-#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | _PAGE_FILE)
++#define PHYS_ADDR_MASK 0x1fffffff
+ #endif
+
+-#define _PAGE_FLAGS_HARDWARE_MASK (0x1fffffff & ~(_PAGE_CLEAR_FLAGS))
++#define PTE_PHYS_MASK (PHYS_ADDR_MASK & PAGE_MASK)
+
+-/* Hardware flags, page size encoding */
+-#if defined(CONFIG_X2TLB)
+-# if defined(CONFIG_PAGE_SIZE_4KB)
+-# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ0)
+-# elif defined(CONFIG_PAGE_SIZE_8KB)
+-# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ1)
+-# elif defined(CONFIG_PAGE_SIZE_64KB)
+-# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ2)
+-# endif
++#ifdef CONFIG_SUPERH32
++#define VMALLOC_START (P3SEG)
+ #else
+-# if defined(CONFIG_PAGE_SIZE_4KB)
+-# define _PAGE_FLAGS_HARD _PAGE_SZ0
+-# elif defined(CONFIG_PAGE_SIZE_64KB)
+-# define _PAGE_FLAGS_HARD _PAGE_SZ1
+-# endif
++#define VMALLOC_START (0xf0000000)
+ #endif
++#define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
+
+-#if defined(CONFIG_X2TLB)
+-# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+-# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2)
+-# elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
+-# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ2)
+-# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+-# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ1 | _PAGE_EXT_ESZ2)
+-# elif defined(CONFIG_HUGETLB_PAGE_SIZE_4MB)
+-# define _PAGE_SZHUGE (_PAGE_EXT_ESZ3)
+-# elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
+-# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2 | _PAGE_EXT_ESZ3)
+-# endif
++#if defined(CONFIG_SUPERH32)
++#include <asm/pgtable_32.h>
+ #else
+-# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+-# define _PAGE_SZHUGE (_PAGE_SZ1)
+-# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+-# define _PAGE_SZHUGE (_PAGE_SZ0 | _PAGE_SZ1)
+-# endif
+-#endif
-
-/*
-- * MFP register defaults to
-- * drive strength fast 3mA (010'b)
-- * edge detection logic disabled
-- * alternate function 0
+- * Stub out _PAGE_SZHUGE if we don't have a good definition for it,
+- * to make pte_mkhuge() happy.
- */
--#define MFPR_DEFAULT (0x0840)
-+typedef unsigned long mfp_cfg_t;
-+
-+#define MFP_PIN(x) ((x) & 0x3ff)
-+
-+#define MFP_AF0 (0x0 << 10)
-+#define MFP_AF1 (0x1 << 10)
-+#define MFP_AF2 (0x2 << 10)
-+#define MFP_AF3 (0x3 << 10)
-+#define MFP_AF4 (0x4 << 10)
-+#define MFP_AF5 (0x5 << 10)
-+#define MFP_AF6 (0x6 << 10)
-+#define MFP_AF7 (0x7 << 10)
-+#define MFP_AF_MASK (0x7 << 10)
-+#define MFP_AF(x) (((x) >> 10) & 0x7)
-+
-+#define MFP_DS01X (0x0 << 13)
-+#define MFP_DS02X (0x1 << 13)
-+#define MFP_DS03X (0x2 << 13)
-+#define MFP_DS04X (0x3 << 13)
-+#define MFP_DS06X (0x4 << 13)
-+#define MFP_DS08X (0x5 << 13)
-+#define MFP_DS10X (0x6 << 13)
-+#define MFP_DS13X (0x7 << 13)
-+#define MFP_DS_MASK (0x7 << 13)
-+#define MFP_DS(x) (((x) >> 13) & 0x7)
-+
-+#define MFP_LPM_INPUT (0x0 << 16)
-+#define MFP_LPM_DRIVE_LOW (0x1 << 16)
-+#define MFP_LPM_DRIVE_HIGH (0x2 << 16)
-+#define MFP_LPM_PULL_LOW (0x3 << 16)
-+#define MFP_LPM_PULL_HIGH (0x4 << 16)
-+#define MFP_LPM_FLOAT (0x5 << 16)
-+#define MFP_LPM_STATE_MASK (0x7 << 16)
-+#define MFP_LPM_STATE(x) (((x) >> 16) & 0x7)
-+
-+#define MFP_LPM_EDGE_NONE (0x0 << 19)
-+#define MFP_LPM_EDGE_RISE (0x1 << 19)
-+#define MFP_LPM_EDGE_FALL (0x2 << 19)
-+#define MFP_LPM_EDGE_BOTH (0x3 << 19)
-+#define MFP_LPM_EDGE_MASK (0x3 << 19)
-+#define MFP_LPM_EDGE(x) (((x) >> 19) & 0x3)
-+
-+#define MFP_PULL_NONE (0x0 << 21)
-+#define MFP_PULL_LOW (0x1 << 21)
-+#define MFP_PULL_HIGH (0x2 << 21)
-+#define MFP_PULL_BOTH (0x3 << 21)
-+#define MFP_PULL_MASK (0x3 << 21)
-+#define MFP_PULL(x) (((x) >> 21) & 0x3)
-+
-+#define MFP_CFG_DEFAULT (MFP_AF0 | MFP_DS03X | MFP_LPM_INPUT |\
-+ MFP_LPM_EDGE_NONE | MFP_PULL_NONE)
-
- #define MFP_CFG(pin, af) \
-- ((MFP_PIN_##pin << 16) | MFPR_DEFAULT | (MFP_##af))
-+ ((MFP_CFG_DEFAULT & ~MFP_AF_MASK) |\
-+ (MFP_PIN(MFP_PIN_##pin) | MFP_##af))
-
- #define MFP_CFG_DRV(pin, af, drv) \
-- ((MFP_PIN_##pin << 16) | (MFPR_DEFAULT & ~MFPR_DRV_MASK) |\
-- ((MFP_##drv) << 10) | (MFP_##af))
-+ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_DS_MASK)) |\
-+ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_##drv))
+-#ifndef _PAGE_SZHUGE
+-# define _PAGE_SZHUGE (_PAGE_FLAGS_HARD)
+-#endif
+-
+-#define _PAGE_CHG_MASK \
+- (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY)
+-
+-#ifndef __ASSEMBLY__
+-
+-#if defined(CONFIG_X2TLB) /* SH-X2 TLB */
+-#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
+- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_USER_READ | \
+- _PAGE_EXT_USER_WRITE))
+-
+-#define PAGE_EXECREAD __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
+- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_EXEC | \
+- _PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_USER_EXEC | \
+- _PAGE_EXT_USER_READ))
+-
+-#define PAGE_COPY PAGE_EXECREAD
+-
+-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
+- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_USER_READ))
+-
+-#define PAGE_WRITEONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
+- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_USER_WRITE))
+-
+-#define PAGE_RWX __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
+- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_EXEC | \
+- _PAGE_EXT_USER_WRITE | \
+- _PAGE_EXT_USER_READ | \
+- _PAGE_EXT_USER_EXEC))
+-
+-#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
+- _PAGE_DIRTY | _PAGE_ACCESSED | \
+- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_KERN_EXEC))
+-
+-#define PAGE_KERNEL_NOCACHE \
+- __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
+- _PAGE_ACCESSED | _PAGE_HW_SHARED | \
+- _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_KERN_EXEC))
+-
+-#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
+- _PAGE_DIRTY | _PAGE_ACCESSED | \
+- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_EXEC))
+-
+-#define PAGE_KERNEL_PCC(slot, type) \
+- __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
+- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
+- _PAGE_EXT_KERN_WRITE | \
+- _PAGE_EXT_KERN_EXEC) \
+- (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
+- (type))
+-
+-#elif defined(CONFIG_MMU) /* SH-X TLB */
+-#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
+- _PAGE_CACHABLE | _PAGE_ACCESSED | \
+- _PAGE_FLAGS_HARD)
+-
+-#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_EXECREAD PAGE_READONLY
+-#define PAGE_RWX PAGE_SHARED
+-#define PAGE_WRITEONLY PAGE_SHARED
+-
+-#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | \
+- _PAGE_DIRTY | _PAGE_ACCESSED | \
+- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_KERNEL_NOCACHE \
+- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
+- _PAGE_ACCESSED | _PAGE_HW_SHARED | \
+- _PAGE_FLAGS_HARD)
+-
+-#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
+- _PAGE_DIRTY | _PAGE_ACCESSED | \
+- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+-
+-#define PAGE_KERNEL_PCC(slot, type) \
+- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
+- _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
+- (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
+- (type))
+-#else /* no mmu */
+-#define PAGE_NONE __pgprot(0)
+-#define PAGE_SHARED __pgprot(0)
+-#define PAGE_COPY __pgprot(0)
+-#define PAGE_EXECREAD __pgprot(0)
+-#define PAGE_RWX __pgprot(0)
+-#define PAGE_READONLY __pgprot(0)
+-#define PAGE_WRITEONLY __pgprot(0)
+-#define PAGE_KERNEL __pgprot(0)
+-#define PAGE_KERNEL_NOCACHE __pgprot(0)
+-#define PAGE_KERNEL_RO __pgprot(0)
+-
+-#define PAGE_KERNEL_PCC(slot, type) \
+- __pgprot(0)
++#include <asm/pgtable_64.h>
+ #endif
- #define MFP_CFG_LPM(pin, af, lpm) \
-- ((MFP_PIN_##pin << 16) | (MFPR_DEFAULT & ~MFPR_LPM_MASK) |\
-- (((MFP_LPM_##lpm) & 0x3) << 7) |\
-- (((MFP_LPM_##lpm) & 0x4) << 12) |\
-- (((MFP_LPM_##lpm) & 0x8) << 10) |\
-- (MFP_##af))
-+ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_LPM_STATE_MASK)) |\
-+ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_LPM_##lpm))
+-#endif /* __ASSEMBLY__ */
+-
+ /*
+ * SH-X and lower (legacy) SuperH parts (SH-3, SH-4, some SH-4A) can't do page
+ * protection for execute, and considers it the same as a read. Also, write
+@@ -357,208 +118,6 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #define __S110 PAGE_RWX
+ #define __S111 PAGE_RWX
- #define MFP_CFG_X(pin, af, drv, lpm) \
-- ((MFP_PIN_##pin << 16) |\
-- (MFPR_DEFAULT & ~(MFPR_DRV_MASK | MFPR_LPM_MASK)) |\
-- ((MFP_##drv) << 10) | (MFP_##af) |\
-- (((MFP_LPM_##lpm) & 0x3) << 7) |\
-- (((MFP_LPM_##lpm) & 0x4) << 12) |\
-- (((MFP_LPM_##lpm) & 0x8) << 10))
+-#ifndef __ASSEMBLY__
-
--/* common MFP configurations - processor specific ones defined
-- * in mfp-pxa3xx.h
+-/*
+- * Certain architectures need to do special things when PTEs
+- * within a page table are directly modified. Thus, the following
+- * hook is made available.
- */
--#define GPIO0_GPIO MFP_CFG(GPIO0, AF0)
--#define GPIO1_GPIO MFP_CFG(GPIO1, AF0)
--#define GPIO2_GPIO MFP_CFG(GPIO2, AF0)
--#define GPIO3_GPIO MFP_CFG(GPIO3, AF0)
--#define GPIO4_GPIO MFP_CFG(GPIO4, AF0)
--#define GPIO5_GPIO MFP_CFG(GPIO5, AF0)
--#define GPIO6_GPIO MFP_CFG(GPIO6, AF0)
--#define GPIO7_GPIO MFP_CFG(GPIO7, AF0)
--#define GPIO8_GPIO MFP_CFG(GPIO8, AF0)
--#define GPIO9_GPIO MFP_CFG(GPIO9, AF0)
--#define GPIO10_GPIO MFP_CFG(GPIO10, AF0)
--#define GPIO11_GPIO MFP_CFG(GPIO11, AF0)
--#define GPIO12_GPIO MFP_CFG(GPIO12, AF0)
--#define GPIO13_GPIO MFP_CFG(GPIO13, AF0)
--#define GPIO14_GPIO MFP_CFG(GPIO14, AF0)
--#define GPIO15_GPIO MFP_CFG(GPIO15, AF0)
--#define GPIO16_GPIO MFP_CFG(GPIO16, AF0)
--#define GPIO17_GPIO MFP_CFG(GPIO17, AF0)
--#define GPIO18_GPIO MFP_CFG(GPIO18, AF0)
--#define GPIO19_GPIO MFP_CFG(GPIO19, AF0)
--#define GPIO20_GPIO MFP_CFG(GPIO20, AF0)
--#define GPIO21_GPIO MFP_CFG(GPIO21, AF0)
--#define GPIO22_GPIO MFP_CFG(GPIO22, AF0)
--#define GPIO23_GPIO MFP_CFG(GPIO23, AF0)
--#define GPIO24_GPIO MFP_CFG(GPIO24, AF0)
--#define GPIO25_GPIO MFP_CFG(GPIO25, AF0)
--#define GPIO26_GPIO MFP_CFG(GPIO26, AF0)
--#define GPIO27_GPIO MFP_CFG(GPIO27, AF0)
--#define GPIO28_GPIO MFP_CFG(GPIO28, AF0)
--#define GPIO29_GPIO MFP_CFG(GPIO29, AF0)
--#define GPIO30_GPIO MFP_CFG(GPIO30, AF0)
--#define GPIO31_GPIO MFP_CFG(GPIO31, AF0)
--#define GPIO32_GPIO MFP_CFG(GPIO32, AF0)
--#define GPIO33_GPIO MFP_CFG(GPIO33, AF0)
--#define GPIO34_GPIO MFP_CFG(GPIO34, AF0)
--#define GPIO35_GPIO MFP_CFG(GPIO35, AF0)
--#define GPIO36_GPIO MFP_CFG(GPIO36, AF0)
--#define GPIO37_GPIO MFP_CFG(GPIO37, AF0)
--#define GPIO38_GPIO MFP_CFG(GPIO38, AF0)
--#define GPIO39_GPIO MFP_CFG(GPIO39, AF0)
--#define GPIO40_GPIO MFP_CFG(GPIO40, AF0)
--#define GPIO41_GPIO MFP_CFG(GPIO41, AF0)
--#define GPIO42_GPIO MFP_CFG(GPIO42, AF0)
--#define GPIO43_GPIO MFP_CFG(GPIO43, AF0)
--#define GPIO44_GPIO MFP_CFG(GPIO44, AF0)
--#define GPIO45_GPIO MFP_CFG(GPIO45, AF0)
+-#ifdef CONFIG_X2TLB
+-static inline void set_pte(pte_t *ptep, pte_t pte)
+-{
+- ptep->pte_high = pte.pte_high;
+- smp_wmb();
+- ptep->pte_low = pte.pte_low;
+-}
+-#else
+-#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
+-#endif
-
--#define GPIO47_GPIO MFP_CFG(GPIO47, AF0)
--#define GPIO48_GPIO MFP_CFG(GPIO48, AF0)
+-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
--#define GPIO53_GPIO MFP_CFG(GPIO53, AF0)
--#define GPIO54_GPIO MFP_CFG(GPIO54, AF0)
--#define GPIO55_GPIO MFP_CFG(GPIO55, AF0)
+-/*
+- * (pmds are folded into pgds so this doesn't get actually called,
+- * but the define is needed for a generic inline function.)
+- */
+-#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
-
--#define GPIO57_GPIO MFP_CFG(GPIO57, AF0)
+-#define pte_pfn(x) ((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
-
--#define GPIO63_GPIO MFP_CFG(GPIO63, AF0)
--#define GPIO64_GPIO MFP_CFG(GPIO64, AF0)
--#define GPIO65_GPIO MFP_CFG(GPIO65, AF0)
--#define GPIO66_GPIO MFP_CFG(GPIO66, AF0)
--#define GPIO67_GPIO MFP_CFG(GPIO67, AF0)
--#define GPIO68_GPIO MFP_CFG(GPIO68, AF0)
--#define GPIO69_GPIO MFP_CFG(GPIO69, AF0)
--#define GPIO70_GPIO MFP_CFG(GPIO70, AF0)
--#define GPIO71_GPIO MFP_CFG(GPIO71, AF0)
--#define GPIO72_GPIO MFP_CFG(GPIO72, AF0)
--#define GPIO73_GPIO MFP_CFG(GPIO73, AF0)
--#define GPIO74_GPIO MFP_CFG(GPIO74, AF0)
--#define GPIO75_GPIO MFP_CFG(GPIO75, AF0)
--#define GPIO76_GPIO MFP_CFG(GPIO76, AF0)
--#define GPIO77_GPIO MFP_CFG(GPIO77, AF0)
--#define GPIO78_GPIO MFP_CFG(GPIO78, AF0)
--#define GPIO79_GPIO MFP_CFG(GPIO79, AF0)
--#define GPIO80_GPIO MFP_CFG(GPIO80, AF0)
--#define GPIO81_GPIO MFP_CFG(GPIO81, AF0)
--#define GPIO82_GPIO MFP_CFG(GPIO82, AF0)
--#define GPIO83_GPIO MFP_CFG(GPIO83, AF0)
--#define GPIO84_GPIO MFP_CFG(GPIO84, AF0)
--#define GPIO85_GPIO MFP_CFG(GPIO85, AF0)
--#define GPIO86_GPIO MFP_CFG(GPIO86, AF0)
--#define GPIO87_GPIO MFP_CFG(GPIO87, AF0)
--#define GPIO88_GPIO MFP_CFG(GPIO88, AF0)
--#define GPIO89_GPIO MFP_CFG(GPIO89, AF0)
--#define GPIO90_GPIO MFP_CFG(GPIO90, AF0)
--#define GPIO91_GPIO MFP_CFG(GPIO91, AF0)
--#define GPIO92_GPIO MFP_CFG(GPIO92, AF0)
--#define GPIO93_GPIO MFP_CFG(GPIO93, AF0)
--#define GPIO94_GPIO MFP_CFG(GPIO94, AF0)
--#define GPIO95_GPIO MFP_CFG(GPIO95, AF0)
--#define GPIO96_GPIO MFP_CFG(GPIO96, AF0)
--#define GPIO97_GPIO MFP_CFG(GPIO97, AF0)
--#define GPIO98_GPIO MFP_CFG(GPIO98, AF0)
--#define GPIO99_GPIO MFP_CFG(GPIO99, AF0)
--#define GPIO100_GPIO MFP_CFG(GPIO100, AF0)
--#define GPIO101_GPIO MFP_CFG(GPIO101, AF0)
--#define GPIO102_GPIO MFP_CFG(GPIO102, AF0)
--#define GPIO103_GPIO MFP_CFG(GPIO103, AF0)
--#define GPIO104_GPIO MFP_CFG(GPIO104, AF0)
--#define GPIO105_GPIO MFP_CFG(GPIO105, AF0)
--#define GPIO106_GPIO MFP_CFG(GPIO106, AF0)
--#define GPIO107_GPIO MFP_CFG(GPIO107, AF0)
--#define GPIO108_GPIO MFP_CFG(GPIO108, AF0)
--#define GPIO109_GPIO MFP_CFG(GPIO109, AF0)
--#define GPIO110_GPIO MFP_CFG(GPIO110, AF0)
--#define GPIO111_GPIO MFP_CFG(GPIO111, AF0)
--#define GPIO112_GPIO MFP_CFG(GPIO112, AF0)
--#define GPIO113_GPIO MFP_CFG(GPIO113, AF0)
--#define GPIO114_GPIO MFP_CFG(GPIO114, AF0)
--#define GPIO115_GPIO MFP_CFG(GPIO115, AF0)
--#define GPIO116_GPIO MFP_CFG(GPIO116, AF0)
--#define GPIO117_GPIO MFP_CFG(GPIO117, AF0)
--#define GPIO118_GPIO MFP_CFG(GPIO118, AF0)
--#define GPIO119_GPIO MFP_CFG(GPIO119, AF0)
--#define GPIO120_GPIO MFP_CFG(GPIO120, AF0)
--#define GPIO121_GPIO MFP_CFG(GPIO121, AF0)
--#define GPIO122_GPIO MFP_CFG(GPIO122, AF0)
--#define GPIO123_GPIO MFP_CFG(GPIO123, AF0)
--#define GPIO124_GPIO MFP_CFG(GPIO124, AF0)
--#define GPIO125_GPIO MFP_CFG(GPIO125, AF0)
--#define GPIO126_GPIO MFP_CFG(GPIO126, AF0)
--#define GPIO127_GPIO MFP_CFG(GPIO127, AF0)
+-#define pfn_pte(pfn, prot) \
+- __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-#define pfn_pmd(pfn, prot) \
+- __pmd(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
-
--#define GPIO0_2_GPIO MFP_CFG(GPIO0_2, AF0)
--#define GPIO1_2_GPIO MFP_CFG(GPIO1_2, AF0)
--#define GPIO2_2_GPIO MFP_CFG(GPIO2_2, AF0)
--#define GPIO3_2_GPIO MFP_CFG(GPIO3_2, AF0)
--#define GPIO4_2_GPIO MFP_CFG(GPIO4_2, AF0)
--#define GPIO5_2_GPIO MFP_CFG(GPIO5_2, AF0)
--#define GPIO6_2_GPIO MFP_CFG(GPIO6_2, AF0)
+-#define pte_none(x) (!pte_val(x))
+-#define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
+-
+-#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+-
+-#define pmd_none(x) (!pmd_val(x))
+-#define pmd_present(x) (pmd_val(x))
+-#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
+-#define pmd_bad(x) (pmd_val(x) & ~PAGE_MASK)
+-
+-#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+-#define pte_page(x) pfn_to_page(pte_pfn(x))
-
-/*
-- * each MFP pin will have a MFPR register, since the offset of the
-- * register varies between processors, the processor specific code
-- * should initialize the pin offsets by pxa3xx_mfp_init_addr()
-- *
-- * pxa3xx_mfp_init_addr - accepts a table of "pxa3xx_mfp_addr_map"
-- * structure, which represents a range of MFP pins from "start" to
-- * "end", with the offset begining at "offset", to define a single
-- * pin, let "end" = -1
-- *
-- * use
-- *
-- * MFP_ADDR_X() to define a range of pins
-- * MFP_ADDR() to define a single pin
-- * MFP_ADDR_END to signal the end of pin offset definitions
+- * The following only work if pte_present() is true.
+- * Undefined behaviour if not..
- */
--struct pxa3xx_mfp_addr_map {
-- unsigned int start;
-- unsigned int end;
-- unsigned long offset;
--};
+-#define pte_not_present(pte) (!((pte).pte_low & _PAGE_PRESENT))
+-#define pte_dirty(pte) ((pte).pte_low & _PAGE_DIRTY)
+-#define pte_young(pte) ((pte).pte_low & _PAGE_ACCESSED)
+-#define pte_file(pte) ((pte).pte_low & _PAGE_FILE)
-
--#define MFP_ADDR_X(start, end, offset) \
-- { MFP_PIN_##start, MFP_PIN_##end, offset }
+-#ifdef CONFIG_X2TLB
+-#define pte_write(pte) ((pte).pte_high & _PAGE_EXT_USER_WRITE)
+-#else
+-#define pte_write(pte) ((pte).pte_low & _PAGE_RW)
+-#endif
-
--#define MFP_ADDR(pin, offset) \
-- { MFP_PIN_##pin, -1, offset }
+-#define PTE_BIT_FUNC(h,fn,op) \
+-static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; }
-
--#define MFP_ADDR_END { MFP_PIN_INVALID, 0 }
+-#ifdef CONFIG_X2TLB
+-/*
+- * We cheat a bit in the SH-X2 TLB case. As the permission bits are
+- * individually toggled (and user permissions are entirely decoupled from
+- * kernel permissions), we attempt to couple them a bit more sanely here.
+- */
+-PTE_BIT_FUNC(high, wrprotect, &= ~_PAGE_EXT_USER_WRITE);
+-PTE_BIT_FUNC(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE);
+-PTE_BIT_FUNC(high, mkhuge, |= _PAGE_SZHUGE);
+-#else
+-PTE_BIT_FUNC(low, wrprotect, &= ~_PAGE_RW);
+-PTE_BIT_FUNC(low, mkwrite, |= _PAGE_RW);
+-PTE_BIT_FUNC(low, mkhuge, |= _PAGE_SZHUGE);
+-#endif
-
--struct pxa3xx_mfp_pin {
-- unsigned long mfpr_off; /* MFPRxx register offset */
-- unsigned long mfpr_val; /* MFPRxx register value */
--};
+-PTE_BIT_FUNC(low, mkclean, &= ~_PAGE_DIRTY);
+-PTE_BIT_FUNC(low, mkdirty, |= _PAGE_DIRTY);
+-PTE_BIT_FUNC(low, mkold, &= ~_PAGE_ACCESSED);
+-PTE_BIT_FUNC(low, mkyoung, |= _PAGE_ACCESSED);
-
-/*
-- * pxa3xx_mfp_read()/pxa3xx_mfp_write() - for direct read/write access
-- * to the MFPR register
+- * Macro and implementation to make a page protection as uncachable.
- */
--unsigned long pxa3xx_mfp_read(int mfp);
--void pxa3xx_mfp_write(int mfp, unsigned long mfpr_val);
+-#define pgprot_writecombine(prot) \
+- __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+-
+-#define pgprot_noncached pgprot_writecombine
-
-/*
-- * pxa3xx_mfp_set_afds - set MFP alternate function and drive strength
-- * pxa3xx_mfp_set_rdh - set MFP release delay hold on/off
-- * pxa3xx_mfp_set_lpm - set MFP low power mode state
-- * pxa3xx_mfp_set_edge - set MFP edge detection in low power mode
+- * Conversion functions: convert a page and protection to a page entry,
+- * and a page entry and page directory to the page they refer to.
- *
-- * use these functions to override/change the default configuration
-- * done by pxa3xx_mfp_set_config(s)
+- * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
- */
--void pxa3xx_mfp_set_afds(int mfp, int af, int ds);
--void pxa3xx_mfp_set_rdh(int mfp, int rdh);
--void pxa3xx_mfp_set_lpm(int mfp, int lpm);
--void pxa3xx_mfp_set_edge(int mfp, int edge);
+-#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+-
+-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+-{
+- pte.pte_low &= _PAGE_CHG_MASK;
+- pte.pte_low |= pgprot_val(newprot);
+-
+-#ifdef CONFIG_X2TLB
+- pte.pte_high |= pgprot_val(newprot) >> 32;
+-#endif
+-
+- return pte;
+-}
+-
+-#define pmd_page_vaddr(pmd) ((unsigned long)pmd_val(pmd))
+-#define pmd_page(pmd) (virt_to_page(pmd_val(pmd)))
+-
+-/* to find an entry in a page-table-directory. */
+-#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
+-#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+-
+-/* to find an entry in a kernel page-table-directory */
+-#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-
+-/* Find an entry in the third-level page table.. */
+-#define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+-#define pte_offset_kernel(dir, address) \
+- ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address))
+-#define pte_offset_map(dir, address) pte_offset_kernel(dir, address)
+-#define pte_offset_map_nested(dir, address) pte_offset_kernel(dir, address)
+-
+-#define pte_unmap(pte) do { } while (0)
+-#define pte_unmap_nested(pte) do { } while (0)
+-
+-#ifdef CONFIG_X2TLB
+-#define pte_ERROR(e) \
+- printk("%s:%d: bad pte %p(%08lx%08lx).\n", __FILE__, __LINE__, \
+- &(e), (e).pte_high, (e).pte_low)
+-#define pgd_ERROR(e) \
+- printk("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
+-#else
+-#define pte_ERROR(e) \
+- printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
+-#define pgd_ERROR(e) \
+- printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+-#endif
+-
+-struct vm_area_struct;
+-extern void update_mmu_cache(struct vm_area_struct * vma,
+- unsigned long address, pte_t pte);
-
-/*
-- * pxa3xx_mfp_config - configure the MFPR registers
+- * Encode and de-code a swap entry
- *
-- * used by board specific initialization code
+- * Constraints:
+- * _PAGE_FILE at bit 0
+- * _PAGE_PRESENT at bit 8
+- * _PAGE_PROTNONE at bit 9
+- *
+- * For the normal case, we encode the swap type into bits 0:7 and the
+- * swap offset into bits 10:30. For the 64-bit PTE case, we keep the
+- * preserved bits in the low 32-bits and use the upper 32 as the swap
+- * offset (along with a 5-bit type), following the same approach as x86
+- * PAE. This keeps the logic quite simple, and allows for a full 32
+- * PTE_FILE_MAX_BITS, as opposed to the 29-bits we're constrained with
+- * in the pte_low case.
+- *
+- * As is evident by the Alpha code, if we ever get a 64-bit unsigned
+- * long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
+- * much cleaner..
+- *
+- * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
+- * and _PAGE_PROTNONE bits
- */
--void pxa3xx_mfp_config(mfp_cfg_t *mfp_cfgs, int num);
+-#ifdef CONFIG_X2TLB
+-#define __swp_type(x) ((x).val & 0x1f)
+-#define __swp_offset(x) ((x).val >> 5)
+-#define __swp_entry(type, offset) ((swp_entry_t){ (type) | (offset) << 5})
+-#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
-
-/*
-- * pxa3xx_mfp_init_addr() - initialize the mapping between mfp pin
-- * index and MFPR register offset
-- *
-- * used by processor specific code
+- * Encode and decode a nonlinear file mapping entry
- */
--void __init pxa3xx_mfp_init_addr(struct pxa3xx_mfp_addr_map *);
--void __init pxa3xx_init_mfp(void);
-+ ((MFP_CFG_DEFAULT & ~(MFP_AF_MASK | MFP_DS_MASK | MFP_LPM_STATE_MASK)) |\
-+ (MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_##drv | MFP_LPM_##lpm))
+-#define pte_to_pgoff(pte) ((pte).pte_high)
+-#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
+-
+-#define PTE_FILE_MAX_BITS 32
+-#else
+-#define __swp_type(x) ((x).val & 0xff)
+-#define __swp_offset(x) ((x).val >> 10)
+-#define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) <<10})
+-
+-#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 1 })
+-#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 1 })
+-
+-/*
+- * Encode and decode a nonlinear file mapping entry
+- */
+-#define PTE_FILE_MAX_BITS 29
+-#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
+-#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
+-#endif
+-
+ typedef pte_t *pte_addr_t;
- #endif /* __ASM_ARCH_MFP_H */
-diff --git a/include/asm-arm/arch-pxa/mmc.h b/include/asm-arm/arch-pxa/mmc.h
-index ef4f570..6d1304c 100644
---- a/include/asm-arm/arch-pxa/mmc.h
-+++ b/include/asm-arm/arch-pxa/mmc.h
-@@ -17,5 +17,7 @@ struct pxamci_platform_data {
- };
+ #define kern_addr_valid(addr) (1)
+@@ -566,27 +125,28 @@ typedef pte_t *pte_addr_t;
+ #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+ remap_pfn_range(vma, vaddr, pfn, size, prot)
- extern void pxa_set_mci_info(struct pxamci_platform_data *info);
-+extern void pxa3xx_set_mci2_info(struct pxamci_platform_data *info);
-+extern void pxa3xx_set_mci3_info(struct pxamci_platform_data *info);
+-struct mm_struct;
++#define pte_pfn(x) ((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
+
+ /*
+ * No page table caches to initialise
+ */
+ #define pgtable_cache_init() do { } while (0)
+-#ifndef CONFIG_MMU
+-extern unsigned int kobjsize(const void *objp);
+-#endif /* !CONFIG_MMU */
+-
+ #if !defined(CONFIG_CACHE_OFF) && (defined(CONFIG_CPU_SH4) || \
+ defined(CONFIG_SH7705_CACHE_32KB))
++struct mm_struct;
+ #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+-extern pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
++pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
#endif
-diff --git a/include/asm-arm/arch-pxa/pcm027.h b/include/asm-arm/arch-pxa/pcm027.h
+
++struct vm_area_struct;
++extern void update_mmu_cache(struct vm_area_struct * vma,
++ unsigned long address, pte_t pte);
+ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+ extern void paging_init(void);
++extern void page_table_range_init(unsigned long start, unsigned long end,
++ pgd_t *pgd);
+
+ #include <asm-generic/pgtable.h>
+
+-#endif /* !__ASSEMBLY__ */
+-#endif /* __ASM_SH_PAGE_H */
++#endif /* __ASM_SH_PGTABLE_H */
+diff --git a/include/asm-sh/pgtable_32.h b/include/asm-sh/pgtable_32.h
new file mode 100644
-index 0000000..7beae14
+index 0000000..3e3557c
--- /dev/null
-+++ b/include/asm-arm/arch-pxa/pcm027.h
-@@ -0,0 +1,75 @@
++++ b/include/asm-sh/pgtable_32.h
+@@ -0,0 +1,474 @@
++#ifndef __ASM_SH_PGTABLE_32_H
++#define __ASM_SH_PGTABLE_32_H
++
+/*
-+ * linux/include/asm-arm/arch-pxa/pcm027.h
++ * Linux PTEL encoding.
+ *
-+ * (c) 2003 Phytec Messtechnik GmbH <armlinux at phytec.de>
-+ * (c) 2007 Juergen Beisert <j.beisert at pengutronix.de>
++ * Hardware and software bit definitions for the PTEL value (see below for
++ * notes on SH-X2 MMUs and 64-bit PTEs):
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
++ * - Bits 0 and 7 are reserved on SH-3 (_PAGE_WT and _PAGE_SZ1 on SH-4).
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * - Bit 1 is the SH-bit, but is unused on SH-3 due to an MMU bug (the
++ * hardware PTEL value can't have the SH-bit set when MMUCR.IX is set,
++ * which is the default in cpu-sh3/mmu_context.h:MMU_CONTROL_INIT).
+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ * In order to keep this relatively clean, do not use these for defining
++ * SH-3 specific flags until all of the other unused bits have been
++ * exhausted.
++ *
++ * - Bit 9 is reserved by everyone and used by _PAGE_PROTNONE.
++ *
++ * - Bits 10 and 11 are low bits of the PPN that are reserved on >= 4K pages.
++ * Bit 10 is used for _PAGE_ACCESSED, bit 11 remains unused.
++ *
++ * - On 29 bit platforms, bits 31 to 29 are used for the space attributes
++ * and timing control which (together with bit 0) are moved into the
++ * old-style PTEA on the parts that support it.
++ *
++ * XXX: Leave the _PAGE_FILE and _PAGE_WT overhaul for a rainy day.
++ *
++ * SH-X2 MMUs and extended PTEs
++ *
++ * SH-X2 supports an extended mode TLB with split data arrays due to the
++ * number of bits needed for PR and SZ (now EPR and ESZ) encodings. The PR and
++ * SZ bit placeholders still exist in data array 1, but are implemented as
++ * reserved bits, with the real logic existing in data array 2.
++ *
++ * The downside to this is that we can no longer fit everything in to a 32-bit
++ * PTE encoding, so a 64-bit pte_t is necessary for these parts. On the plus
++ * side, this gives us quite a few spare bits to play with for future usage.
+ */
++/* Legacy and compat mode bits */
++#define _PAGE_WT 0x001 /* WT-bit on SH-4, 0 on SH-3 */
++#define _PAGE_HW_SHARED 0x002 /* SH-bit : shared among processes */
++#define _PAGE_DIRTY 0x004 /* D-bit : page changed */
++#define _PAGE_CACHABLE 0x008 /* C-bit : cachable */
++#define _PAGE_SZ0 0x010 /* SZ0-bit : Size of page */
++#define _PAGE_RW 0x020 /* PR0-bit : write access allowed */
++#define _PAGE_USER 0x040 /* PR1-bit : user space access allowed*/
++#define _PAGE_SZ1 0x080 /* SZ1-bit : Size of page (on SH-4) */
++#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
++#define _PAGE_PROTNONE 0x200 /* software: if not present */
++#define _PAGE_ACCESSED 0x400 /* software: page referenced */
++#define _PAGE_FILE _PAGE_WT /* software: pagecache or swap? */
+
-+/*
-+ * Definitions of CPU card resources only
-+ */
++#define _PAGE_SZ_MASK (_PAGE_SZ0 | _PAGE_SZ1)
++#define _PAGE_PR_MASK (_PAGE_RW | _PAGE_USER)
+
-+/* I2C RTC */
-+#define PCM027_RTC_IRQ_GPIO 0
-+#define PCM027_RTC_IRQ IRQ_GPIO(PCM027_RTC_IRQ_GPIO)
-+#define PCM027_RTC_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
-+#define ADR_PCM027_RTC 0x51 /* I2C address */
++/* Extended mode bits */
++#define _PAGE_EXT_ESZ0 0x0010 /* ESZ0-bit: Size of page */
++#define _PAGE_EXT_ESZ1 0x0020 /* ESZ1-bit: Size of page */
++#define _PAGE_EXT_ESZ2 0x0040 /* ESZ2-bit: Size of page */
++#define _PAGE_EXT_ESZ3 0x0080 /* ESZ3-bit: Size of page */
+
-+/* I2C EEPROM */
-+#define ADR_PCM027_EEPROM 0x54 /* I2C address */
++#define _PAGE_EXT_USER_EXEC 0x0100 /* EPR0-bit: User space executable */
++#define _PAGE_EXT_USER_WRITE 0x0200 /* EPR1-bit: User space writable */
++#define _PAGE_EXT_USER_READ 0x0400 /* EPR2-bit: User space readable */
+
-+/* Ethernet chip (SMSC91C111) */
-+#define PCM027_ETH_IRQ_GPIO 52
-+#define PCM027_ETH_IRQ IRQ_GPIO(PCM027_ETH_IRQ_GPIO)
-+#define PCM027_ETH_IRQ_EDGE IRQ_TYPE_EDGE_RISING
-+#define PCM027_ETH_PHYS PXA_CS5_PHYS
-+#define PCM027_ETH_SIZE (1*1024*1024)
++#define _PAGE_EXT_KERN_EXEC 0x0800 /* EPR3-bit: Kernel space executable */
++#define _PAGE_EXT_KERN_WRITE 0x1000 /* EPR4-bit: Kernel space writable */
++#define _PAGE_EXT_KERN_READ 0x2000 /* EPR5-bit: Kernel space readable */
+
-+/* CAN controller SJA1000 (unsupported yet) */
-+#define PCM027_CAN_IRQ_GPIO 114
-+#define PCM027_CAN_IRQ IRQ_GPIO(PCM027_CAN_IRQ_GPIO)
-+#define PCM027_CAN_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
-+#define PCM027_CAN_PHYS 0x22000000
-+#define PCM027_CAN_SIZE 0x100
++/* Wrapper for extended mode pgprot twiddling */
++#define _PAGE_EXT(x) ((unsigned long long)(x) << 32)
+
-+/* SPI GPIO expander (unsupported yet) */
-+#define PCM027_EGPIO_IRQ_GPIO 27
-+#define PCM027_EGPIO_IRQ IRQ_GPIO(PCM027_EGPIO_IRQ_GPIO)
-+#define PCM027_EGPIO_IRQ_EDGE IRQ_TYPE_EDGE_FALLING
-+#define PCM027_EGPIO_CS 24
-+/*
-+ * TODO: Switch this pin from dedicated usage to GPIO if
-+ * more than the MAX7301 device is connected to this SPI bus
-+ */
-+#define PCM027_EGPIO_CS_MODE GPIO24_SFRM_MD
++/* software: moves to PTEA.TC (Timing Control) */
++#define _PAGE_PCC_AREA5 0x00000000 /* use BSC registers for area5 */
++#define _PAGE_PCC_AREA6 0x80000000 /* use BSC registers for area6 */
+
-+/* Flash memory */
-+#define PCM027_FLASH_PHYS 0x00000000
-+#define PCM027_FLASH_SIZE 0x02000000
++/* software: moves to PTEA.SA[2:0] (Space Attributes) */
++#define _PAGE_PCC_IODYN 0x00000001 /* IO space, dynamically sized bus */
++#define _PAGE_PCC_IO8 0x20000000 /* IO space, 8 bit bus */
++#define _PAGE_PCC_IO16 0x20000001 /* IO space, 16 bit bus */
++#define _PAGE_PCC_COM8 0x40000000 /* Common Memory space, 8 bit bus */
++#define _PAGE_PCC_COM16 0x40000001 /* Common Memory space, 16 bit bus */
++#define _PAGE_PCC_ATR8 0x60000000 /* Attribute Memory space, 8 bit bus */
++#define _PAGE_PCC_ATR16	0x60000001	/* Attribute Memory space, 16 bit bus */
+
-+/* onboard LEDs connected to GPIO */
-+#define PCM027_LED_CPU 90
-+#define PCM027_LED_HEARD_BEAT 91
++/* Mask which drops unused bits from the PTEL value */
++#if defined(CONFIG_CPU_SH3)
++#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED| \
++ _PAGE_FILE | _PAGE_SZ1 | \
++ _PAGE_HW_SHARED)
++#elif defined(CONFIG_X2TLB)
++/* Get rid of the legacy PR/SZ bits when using extended mode */
++#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | \
++ _PAGE_FILE | _PAGE_PR_MASK | _PAGE_SZ_MASK)
++#else
++#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | _PAGE_FILE)
++#endif
+
-+/*
-+ * This CPU module needs a baseboard to work. After basic initializing
-+ * its own devices, it calls baseboard's init function.
-+ * TODO: Add your own basebaord init function and call it from
-+ * inside pcm027_init(). This example here is for the developmen board.
-+ * Refer pcm990-baseboard.c
-+ */
-+extern void pcm990_baseboard_init(void);
-diff --git a/include/asm-arm/arch-pxa/pcm990_baseboard.h b/include/asm-arm/arch-pxa/pcm990_baseboard.h
-new file mode 100644
-index 0000000..b699d0d
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/pcm990_baseboard.h
-@@ -0,0 +1,275 @@
-+/*
-+ * include/asm-arm/arch-pxa/pcm990_baseboard.h
-+ *
-+ * (c) 2003 Phytec Messtechnik GmbH <armlinux at phytec.de>
-+ * (c) 2007 Juergen Beisert <j.beisert at pengutronix.de>
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-+ */
++#define _PAGE_FLAGS_HARDWARE_MASK (PHYS_ADDR_MASK & ~(_PAGE_CLEAR_FLAGS))
++
++/* Hardware flags, page size encoding */
++#if defined(CONFIG_X2TLB)
++# if defined(CONFIG_PAGE_SIZE_4KB)
++# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ0)
++# elif defined(CONFIG_PAGE_SIZE_8KB)
++# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ1)
++# elif defined(CONFIG_PAGE_SIZE_64KB)
++# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ2)
++# endif
++#else
++# if defined(CONFIG_PAGE_SIZE_4KB)
++# define _PAGE_FLAGS_HARD _PAGE_SZ0
++# elif defined(CONFIG_PAGE_SIZE_64KB)
++# define _PAGE_FLAGS_HARD _PAGE_SZ1
++# endif
++#endif
+
-+#include <asm/arch/pcm027.h>
++#if defined(CONFIG_X2TLB)
++# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
++# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2)
++# elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
++# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ2)
++# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
++# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ1 | _PAGE_EXT_ESZ2)
++# elif defined(CONFIG_HUGETLB_PAGE_SIZE_4MB)
++# define _PAGE_SZHUGE (_PAGE_EXT_ESZ3)
++# elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
++# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2 | _PAGE_EXT_ESZ3)
++# endif
++#else
++# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
++# define _PAGE_SZHUGE (_PAGE_SZ1)
++# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
++# define _PAGE_SZHUGE (_PAGE_SZ0 | _PAGE_SZ1)
++# endif
++#endif
+
+/*
-+ * definitions relevant only when the PCM-990
-+ * development base board is in use
++ * Stub out _PAGE_SZHUGE if we don't have a good definition for it,
++ * to make pte_mkhuge() happy.
+ */
++#ifndef _PAGE_SZHUGE
++# define _PAGE_SZHUGE (_PAGE_FLAGS_HARD)
++#endif
+
-+/* CPLD's interrupt controller is connected to PCM-027 GPIO 9 */
-+#define PCM990_CTRL_INT_IRQ_GPIO 9
-+#define PCM990_CTRL_INT_IRQ IRQ_GPIO(PCM990_CTRL_INT_IRQ_GPIO)
-+#define PCM990_CTRL_INT_IRQ_EDGE IRQT_RISING
-+#define PCM990_CTRL_PHYS PXA_CS1_PHYS /* 16-Bit */
-+#define PCM990_CTRL_BASE 0xea000000
-+#define PCM990_CTRL_SIZE (1*1024*1024)
-+
-+#define PCM990_CTRL_PWR_IRQ_GPIO 14
-+#define PCM990_CTRL_PWR_IRQ IRQ_GPIO(PCM990_CTRL_PWR_IRQ_GPIO)
-+#define PCM990_CTRL_PWR_IRQ_EDGE IRQT_RISING
++#define _PAGE_CHG_MASK \
++ (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY)
+
-+/* visible CPLD (U7) registers */
-+#define PCM990_CTRL_REG0 0x0000 /* RESET REGISTER */
-+#define PCM990_CTRL_SYSRES 0x0001 /* System RESET REGISTER */
-+#define PCM990_CTRL_RESOUT 0x0002 /* RESETOUT Enable REGISTER */
-+#define PCM990_CTRL_RESGPIO 0x0004 /* RESETGPIO Enable REGISTER */
++#ifndef __ASSEMBLY__
+
-+#define PCM990_CTRL_REG1 0x0002 /* Power REGISTER */
-+#define PCM990_CTRL_5VOFF 0x0001 /* Disable 5V Regulators */
-+#define PCM990_CTRL_CANPWR 0x0004 /* Enable CANPWR ADUM */
-+#define PCM990_CTRL_PM_5V 0x0008 /* Read 5V OK */
++#if defined(CONFIG_X2TLB) /* SH-X2 TLB */
++#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+
-+#define PCM990_CTRL_REG2 0x0004 /* LED REGISTER */
-+#define PCM990_CTRL_LEDPWR 0x0001 /* POWER LED enable */
-+#define PCM990_CTRL_LEDBAS 0x0002 /* BASIS LED enable */
-+#define PCM990_CTRL_LEDUSR 0x0004 /* USER LED enable */
++#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
++ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_USER_READ | \
++ _PAGE_EXT_USER_WRITE))
+
-+#define PCM990_CTRL_REG3 0x0006 /* LCD CTRL REGISTER 3 */
-+#define PCM990_CTRL_LCDPWR 0x0001 /* RW LCD Power on */
-+#define PCM990_CTRL_LCDON 0x0002 /* RW LCD Latch on */
-+#define PCM990_CTRL_LCDPOS1 0x0004 /* RW POS 1 */
-+#define PCM990_CTRL_LCDPOS2 0x0008 /* RW POS 2 */
++#define PAGE_EXECREAD __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
++ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_EXEC | \
++ _PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_USER_EXEC | \
++ _PAGE_EXT_USER_READ))
+
-+#define PCM990_CTRL_REG4 0x0008 /* MMC1 CTRL REGISTER 4 */
-+#define PCM990_CTRL_MMC1PWR 0x0001 /* RW MMC1 Power on */
++#define PAGE_COPY PAGE_EXECREAD
+
-+#define PCM990_CTRL_REG5 0x000A /* MMC2 CTRL REGISTER 5 */
-+#define PCM990_CTRL_MMC2PWR 0x0001 /* RW MMC2 Power on */
-+#define PCM990_CTRL_MMC2LED 0x0002 /* RW MMC2 LED */
-+#define PCM990_CTRL_MMC2DE 0x0004 /* R MMC2 Card detect */
-+#define PCM990_CTRL_MMC2WP 0x0008 /* R MMC2 Card write protect */
++#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
++ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_USER_READ))
+
-+#define PCM990_CTRL_REG6 0x000C /* Interrupt Clear REGISTER */
-+#define PCM990_CTRL_INTC0 0x0001 /* Clear Reg BT Detect */
-+#define PCM990_CTRL_INTC1 0x0002 /* Clear Reg FR RI */
-+#define PCM990_CTRL_INTC2 0x0004 /* Clear Reg MMC1 Detect */
-+#define PCM990_CTRL_INTC3 0x0008 /* Clear Reg PM_5V off */
++#define PAGE_WRITEONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
++ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_USER_WRITE))
+
-+#define PCM990_CTRL_REG7 0x000E /* Interrupt Enable REGISTER */
-+#define PCM990_CTRL_ENAINT0 0x0001 /* Enable Int BT Detect */
-+#define PCM990_CTRL_ENAINT1 0x0002 /* Enable Int FR RI */
-+#define PCM990_CTRL_ENAINT2 0x0004 /* Enable Int MMC1 Detect */
-+#define PCM990_CTRL_ENAINT3 0x0008 /* Enable Int PM_5V off */
++#define PAGE_RWX __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
++ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_EXEC | \
++ _PAGE_EXT_USER_WRITE | \
++ _PAGE_EXT_USER_READ | \
++ _PAGE_EXT_USER_EXEC))
+
-+#define PCM990_CTRL_REG8 0x0014 /* Uart REGISTER */
-+#define PCM990_CTRL_FFSD 0x0001 /* BT Uart Enable */
-+#define PCM990_CTRL_BTSD 0x0002 /* FF Uart Enable */
-+#define PCM990_CTRL_FFRI 0x0004 /* FF Uart RI detect */
-+#define PCM990_CTRL_BTRX 0x0008 /* BT Uart Rx detect */
++#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
++ _PAGE_DIRTY | _PAGE_ACCESSED | \
++ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_KERN_EXEC))
+
-+#define PCM990_CTRL_REG9 0x0010 /* AC97 Flash REGISTER */
-+#define PCM990_CTRL_FLWP 0x0001 /* pC Flash Write Protect */
-+#define PCM990_CTRL_FLDIS 0x0002 /* pC Flash Disable */
-+#define PCM990_CTRL_AC97ENA 0x0004 /* Enable AC97 Expansion */
++#define PAGE_KERNEL_NOCACHE \
++ __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
++ _PAGE_ACCESSED | _PAGE_HW_SHARED | \
++ _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_KERN_EXEC))
+
-+#define PCM990_CTRL_REG10 0x0012 /* GPS-REGISTER */
-+#define PCM990_CTRL_GPSPWR 0x0004 /* GPS-Modul Power on */
-+#define PCM990_CTRL_GPSENA 0x0008 /* GPS-Modul Enable */
++#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
++ _PAGE_DIRTY | _PAGE_ACCESSED | \
++ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_EXEC))
+
-+#define PCM990_CTRL_REG11 0x0014 /* Accu REGISTER */
-+#define PCM990_CTRL_ACENA 0x0001 /* Charge Enable */
-+#define PCM990_CTRL_ACSEL 0x0002 /* Charge Akku -> DC Enable */
-+#define PCM990_CTRL_ACPRES 0x0004 /* DC Present */
-+#define PCM990_CTRL_ACALARM 0x0008 /* Error Akku */
++#define PAGE_KERNEL_PCC(slot, type) \
++ __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
++ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
++ _PAGE_EXT_KERN_WRITE | \
++ _PAGE_EXT_KERN_EXEC) \
++ (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
++ (type))
+
-+#define PCM990_CTRL_P2V(x) ((x) - PCM990_CTRL_PHYS + PCM990_CTRL_BASE)
-+#define PCM990_CTRL_V2P(x) ((x) - PCM990_CTRL_BASE + PCM990_CTRL_PHYS)
++#elif defined(CONFIG_MMU) /* SH-X TLB */
++#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+
-+#ifndef __ASSEMBLY__
-+# define __PCM990_CTRL_REG(x) \
-+ (*((volatile unsigned char *)PCM990_CTRL_P2V(x)))
-+#else
-+# define __PCM990_CTRL_REG(x) PCM990_CTRL_P2V(x)
-+#endif
++#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
++ _PAGE_CACHABLE | _PAGE_ACCESSED | \
++ _PAGE_FLAGS_HARD)
+
-+#define PCM990_INTMSKENA __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG7)
-+#define PCM990_INTSETCLR __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG6)
-+#define PCM990_CTRL0 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG0)
-+#define PCM990_CTRL1 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG1)
-+#define PCM990_CTRL2 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG2)
-+#define PCM990_CTRL3 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG3)
-+#define PCM990_CTRL4 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG4)
-+#define PCM990_CTRL5 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG5)
-+#define PCM990_CTRL6 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG6)
-+#define PCM990_CTRL7 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG7)
-+#define PCM990_CTRL8 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG8)
-+#define PCM990_CTRL9 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG9)
-+#define PCM990_CTRL10 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG10)
-+#define PCM990_CTRL11 __PCM990_CTRL_REG(PCM990_CTRL_PHYS + PCM990_CTRL_REG11)
++#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+
++#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+
-+/*
-+ * IDE
-+ */
-+#define PCM990_IDE_IRQ_GPIO 13
-+#define PCM990_IDE_IRQ IRQ_GPIO(PCM990_IDE_IRQ_GPIO)
-+#define PCM990_IDE_IRQ_EDGE IRQT_RISING
-+#define PCM990_IDE_PLD_PHYS 0x20000000 /* 16 bit wide */
-+#define PCM990_IDE_PLD_BASE 0xee000000
-+#define PCM990_IDE_PLD_SIZE (1*1024*1024)
++#define PAGE_EXECREAD PAGE_READONLY
++#define PAGE_RWX PAGE_SHARED
++#define PAGE_WRITEONLY PAGE_SHARED
+
-+/* visible CPLD (U6) registers */
-+#define PCM990_IDE_PLD_REG0 0x1000 /* OFFSET IDE REGISTER 0 */
-+#define PCM990_IDE_PM5V 0x0004 /* R System VCC_5V */
-+#define PCM990_IDE_STBY 0x0008 /* R System StandBy */
++#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | \
++ _PAGE_DIRTY | _PAGE_ACCESSED | \
++ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+
-+#define PCM990_IDE_PLD_REG1 0x1002 /* OFFSET IDE REGISTER 1 */
-+#define PCM990_IDE_IDEMODE 0x0001 /* R TrueIDE Mode */
-+#define PCM990_IDE_DMAENA 0x0004 /* RW DMA Enable */
-+#define PCM990_IDE_DMA1_0 0x0008 /* RW 1=DREQ1 0=DREQ0 */
++#define PAGE_KERNEL_NOCACHE \
++ __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
++ _PAGE_ACCESSED | _PAGE_HW_SHARED | \
++ _PAGE_FLAGS_HARD)
+
-+#define PCM990_IDE_PLD_REG2 0x1004 /* OFFSET IDE REGISTER 2 */
-+#define PCM990_IDE_RESENA 0x0001 /* RW IDE Reset Bit enable */
-+#define PCM990_IDE_RES 0x0002 /* RW IDE Reset Bit */
-+#define PCM990_IDE_RDY 0x0008 /* RDY */
++#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
++ _PAGE_DIRTY | _PAGE_ACCESSED | \
++ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+
-+#define PCM990_IDE_PLD_REG3 0x1006 /* OFFSET IDE REGISTER 3 */
-+#define PCM990_IDE_IDEOE 0x0001 /* RW Latch on Databus */
-+#define PCM990_IDE_IDEON 0x0002 /* RW Latch on Control Address */
-+#define PCM990_IDE_IDEIN 0x0004 /* RW Latch on Interrupt usw. */
++#define PAGE_KERNEL_PCC(slot, type) \
++ __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
++ _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
++ (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
++ (type))
++#else /* no mmu */
++#define PAGE_NONE __pgprot(0)
++#define PAGE_SHARED __pgprot(0)
++#define PAGE_COPY __pgprot(0)
++#define PAGE_EXECREAD __pgprot(0)
++#define PAGE_RWX __pgprot(0)
++#define PAGE_READONLY __pgprot(0)
++#define PAGE_WRITEONLY __pgprot(0)
++#define PAGE_KERNEL __pgprot(0)
++#define PAGE_KERNEL_NOCACHE __pgprot(0)
++#define PAGE_KERNEL_RO __pgprot(0)
+
-+#define PCM990_IDE_PLD_REG4 0x1008 /* OFFSET IDE REGISTER 4 */
-+#define PCM990_IDE_PWRENA 0x0001 /* RW IDE Power enable */
-+#define PCM990_IDE_5V 0x0002 /* R IDE Power 5V */
-+#define PCM990_IDE_PWG 0x0008 /* R IDE Power is on */
++#define PAGE_KERNEL_PCC(slot, type) \
++ __pgprot(0)
++#endif
+
-+#define PCM990_IDE_PLD_P2V(x) ((x) - PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_BASE)
-+#define PCM990_IDE_PLD_V2P(x) ((x) - PCM990_IDE_PLD_BASE + PCM990_IDE_PLD_PHYS)
++#endif /* __ASSEMBLY__ */
+
+#ifndef __ASSEMBLY__
-+# define __PCM990_IDE_PLD_REG(x) \
-+ (*((volatile unsigned char *)PCM990_IDE_PLD_P2V(x)))
++
++/*
++ * Certain architectures need to do special things when PTEs
++ * within a page table are directly modified. Thus, the following
++ * hook is made available.
++ */
++#ifdef CONFIG_X2TLB
++static inline void set_pte(pte_t *ptep, pte_t pte)
++{
++ ptep->pte_high = pte.pte_high;
++ smp_wmb();
++ ptep->pte_low = pte.pte_low;
++}
+#else
-+# define __PCM990_IDE_PLD_REG(x) PCM990_IDE_PLD_P2V(x)
++#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
+#endif
+
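For readers following the X2TLB branch just above: on SH-X2 each PTE is two 32-bit words, and the entry becomes live once the low word (which carries the present bit) is written, so the high word must be stored first with a write barrier in between. A minimal user-space sketch of the same ordering idea (all names and bit values here are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Illustrative two-word PTE, mirroring the pte_low/pte_high split above. */
typedef struct {
    uint32_t pte_low;   /* carries the present bit the hardware checks */
    uint32_t pte_high;  /* extended permission bits */
} pte_demo_t;

#define DEMO_PRESENT 0x1u

/* Stand-in for smp_wmb(): a compiler barrier suffices single-threaded. */
#define demo_wmb() __asm__ __volatile__("" ::: "memory")

/* Store the high word first, then publish the entry via the low word. */
static void demo_set_pte(pte_demo_t *ptep, pte_demo_t pte)
{
    ptep->pte_high = pte.pte_high;
    demo_wmb();
    ptep->pte_low = pte.pte_low;
}
```

The barrier prevents the stores from being reordered so that a concurrent walker never sees a present low word paired with a stale high word.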
-+#define PCM990_IDE0 \
-+ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG0)
-+#define PCM990_IDE1 \
-+ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG1)
-+#define PCM990_IDE2 \
-+ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG2)
-+#define PCM990_IDE3 \
-+ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG3)
-+#define PCM990_IDE4 \
-+ __PCM990_IDE_PLD_REG(PCM990_IDE_PLD_PHYS + PCM990_IDE_PLD_REG4)
++#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+
+/*
-+ * Compact Flash
++ * (pmds are folded into pgds so this doesn't get actually called,
++ * but the define is needed for a generic inline function.)
+ */
-+#define PCM990_CF_IRQ_GPIO 11
-+#define PCM990_CF_IRQ IRQ_GPIO(PCM990_CF_IRQ_GPIO)
-+#define PCM990_CF_IRQ_EDGE IRQT_RISING
-+
-+#define PCM990_CF_CD_GPIO 12
-+#define PCM990_CF_CD IRQ_GPIO(PCM990_CF_CD_GPIO)
-+#define PCM990_CF_CD_EDGE IRQT_RISING
-+
-+#define PCM990_CF_PLD_PHYS 0x30000000 /* 16 bit wide */
-+#define PCM990_CF_PLD_BASE 0xef000000
-+#define PCM990_CF_PLD_SIZE (1*1024*1024)
-+#define PCM990_CF_PLD_P2V(x) ((x) - PCM990_CF_PLD_PHYS + PCM990_CF_PLD_BASE)
-+#define PCM990_CF_PLD_V2P(x) ((x) - PCM990_CF_PLD_BASE + PCM990_CF_PLD_PHYS)
-+
-+/* visible CPLD (U6) registers */
-+#define PCM990_CF_PLD_REG0 0x1000 /* OFFSET CF REGISTER 0 */
-+#define PCM990_CF_REG0_LED 0x0001 /* RW LED on */
-+#define PCM990_CF_REG0_BLK 0x0002 /* RW LED flash when access */
-+#define PCM990_CF_REG0_PM5V 0x0004 /* R System VCC_5V enable */
-+#define PCM990_CF_REG0_STBY 0x0008 /* R System StandBy */
++#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+
-+#define PCM990_CF_PLD_REG1 0x1002 /* OFFSET CF REGISTER 1 */
-+#define PCM990_CF_REG1_IDEMODE 0x0001 /* RW CF card run as TrueIDE */
-+#define PCM990_CF_REG1_CF0 0x0002 /* RW CF card at ADDR 0x28000000 */
++#define pfn_pte(pfn, prot) \
++ __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
++#define pfn_pmd(pfn, prot) \
++ __pmd(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
+
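The pfn_pte()/pfn_pmd() definitions above pack a page frame number and protection bits into a single value by shifting the PFN up by PAGE_SHIFT and OR-ing in the protection. A stand-alone illustration of the packing, with the shift value invented for the demo:

```c
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12  /* 4 KiB pages, for illustration only */

/* Pack a page frame number and low protection bits, as pfn_pte() does. */
static uint64_t demo_pfn_pte(uint64_t pfn, uint64_t prot)
{
    return (pfn << DEMO_PAGE_SHIFT) | prot;
}

/* Recover the PFN by shifting the protection bits back out. */
static uint64_t demo_pte_pfn(uint64_t pte)
{
    return pte >> DEMO_PAGE_SHIFT;
}
```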
-+#define PCM990_CF_PLD_REG2 0x1004 /* OFFSET CF REGISTER 2 */
-+#define PCM990_CF_REG2_RES 0x0002 /* RW CF RESET BIT */
-+#define PCM990_CF_REG2_RDYENA 0x0004 /* RW Enable CF_RDY */
-+#define PCM990_CF_REG2_RDY 0x0008 /* R CF_RDY auf PWAIT */
++#define pte_none(x) (!pte_val(x))
++#define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
+
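One subtlety in the pte_present() definition above: it tests _PAGE_PROTNONE as well as _PAGE_PRESENT, so a PROT_NONE mapping still counts as present to software even though the hardware present bit is clear. A sketch with invented bit values:

```c
#include <stdint.h>

#define DEMO_PRESENT  0x001u  /* hardware-visible present bit */
#define DEMO_PROTNONE 0x200u  /* software-only marker for PROT_NONE */

/* Present to software if either bit is set, as in pte_present() above. */
static int demo_pte_present(uint32_t pte_low)
{
    return (pte_low & (DEMO_PRESENT | DEMO_PROTNONE)) != 0;
}
```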
-+#define PCM990_CF_PLD_REG3 0x1006 /* OFFSET CF REGISTER 3 */
-+#define PCM990_CF_REG3_CFOE 0x0001 /* RW Latch on Databus */
-+#define PCM990_CF_REG3_CFON 0x0002 /* RW Latch on Control Address */
-+#define PCM990_CF_REG3_CFIN 0x0004 /* RW Latch on Interrupt usw. */
-+#define PCM990_CF_REG3_CFCD 0x0008 /* RW Latch on CD1/2 VS1/2 usw */
++#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+
-+#define PCM990_CF_PLD_REG4 0x1008 /* OFFSET CF REGISTER 4 */
-+#define PCM990_CF_REG4_PWRENA 0x0001 /* RW CF Power on (CD1/2 = "00") */
-+#define PCM990_CF_REG4_5_3V 0x0002 /* RW 1 = 5V CF_VCC 0 = 3 V CF_VCC */
-+#define PCM990_CF_REG4_3B 0x0004 /* RW 3.0V Backup from VCC (5_3V=0) */
-+#define PCM990_CF_REG4_PWG 0x0008 /* R CF-Power is on */
++#define pmd_none(x) (!pmd_val(x))
++#define pmd_present(x) (pmd_val(x))
++#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
++#define pmd_bad(x) (pmd_val(x) & ~PAGE_MASK)
+
-+#define PCM990_CF_PLD_REG5 0x100A /* OFFSET CF REGISTER 5 */
-+#define PCM990_CF_REG5_BVD1 0x0001 /* R CF /BVD1 */
-+#define PCM990_CF_REG5_BVD2 0x0002 /* R CF /BVD2 */
-+#define PCM990_CF_REG5_VS1 0x0004 /* R CF /VS1 */
-+#define PCM990_CF_REG5_VS2 0x0008 /* R CF /VS2 */
++#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
++#define pte_page(x) pfn_to_page(pte_pfn(x))
+
-+#define PCM990_CF_PLD_REG6 0x100C /* OFFSET CF REGISTER 6 */
-+#define PCM990_CF_REG6_CD1 0x0001 /* R CF Card_Detect1 */
-+#define PCM990_CF_REG6_CD2 0x0002 /* R CF Card_Detect2 */
++/*
++ * The following only work if pte_present() is true.
++ * Undefined behaviour if not..
++ */
++#define pte_not_present(pte) (!((pte).pte_low & _PAGE_PRESENT))
++#define pte_dirty(pte) ((pte).pte_low & _PAGE_DIRTY)
++#define pte_young(pte) ((pte).pte_low & _PAGE_ACCESSED)
++#define pte_file(pte) ((pte).pte_low & _PAGE_FILE)
+
-+#ifndef __ASSEMBLY__
-+# define __PCM990_CF_PLD_REG(x) \
-+ (*((volatile unsigned char *)PCM990_CF_PLD_P2V(x)))
++#ifdef CONFIG_X2TLB
++#define pte_write(pte) ((pte).pte_high & _PAGE_EXT_USER_WRITE)
+#else
-+# define __PCM990_CF_PLD_REG(x) PCM990_CF_PLD_P2V(x)
++#define pte_write(pte) ((pte).pte_low & _PAGE_RW)
+#endif
+
-+#define PCM990_CF0 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG0)
-+#define PCM990_CF1 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG1)
-+#define PCM990_CF2 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG2)
-+#define PCM990_CF3 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG3)
-+#define PCM990_CF4 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG4)
-+#define PCM990_CF5 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG5)
-+#define PCM990_CF6 __PCM990_CF_PLD_REG(PCM990_CF_PLD_PHYS + PCM990_CF_PLD_REG6)
++#define PTE_BIT_FUNC(h,fn,op) \
++static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; }
+
++#ifdef CONFIG_X2TLB
+/*
-+ * Wolfson AC97 Touch
++ * We cheat a bit in the SH-X2 TLB case. As the permission bits are
++ * individually toggled (and user permissions are entirely decoupled from
++ * kernel permissions), we attempt to couple them a bit more sanely here.
+ */
-+#define PCM990_AC97_IRQ_GPIO 10
-+#define PCM990_AC97_IRQ IRQ_GPIO(PCM990_AC97_IRQ_GPIO)
-+#define PCM990_AC97_IRQ_EDGE IRQT_RISING
++PTE_BIT_FUNC(high, wrprotect, &= ~_PAGE_EXT_USER_WRITE);
++PTE_BIT_FUNC(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE);
++PTE_BIT_FUNC(high, mkhuge, |= _PAGE_SZHUGE);
++#else
++PTE_BIT_FUNC(low, wrprotect, &= ~_PAGE_RW);
++PTE_BIT_FUNC(low, mkwrite, |= _PAGE_RW);
++PTE_BIT_FUNC(low, mkhuge, |= _PAGE_SZHUGE);
++#endif
+
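The PTE_BIT_FUNC macro above stamps out one-line accessor functions via token pasting, selecting the high or low word per configuration. The same pattern can be tried in isolation (demo names and bit value only):

```c
#include <stdint.h>

typedef struct { uint32_t low; uint32_t high; } demo_pte_t;

#define DEMO_RW 0x4u

/* Generate demo_pte_<fn>() applying <op> to word <h>, like PTE_BIT_FUNC. */
#define DEMO_PTE_BIT_FUNC(h, fn, op) \
static demo_pte_t demo_pte_##fn(demo_pte_t pte) { pte.h op; return pte; }

DEMO_PTE_BIT_FUNC(low, mkwrite, |= DEMO_RW)
DEMO_PTE_BIT_FUNC(low, wrprotect, &= ~DEMO_RW)
```

Each invocation expands to a complete static function, which is why the kernel can swap the whole set between the high-word (X2TLB) and low-word variants with a single #ifdef.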
-+/*
-+ * MMC phyCORE
-+ */
-+#define PCM990_MMC0_IRQ_GPIO 9
-+#define PCM990_MMC0_IRQ IRQ_GPIO(PCM990_MMC0_IRQ_GPIO)
-+#define PCM990_MMC0_IRQ_EDGE IRQT_FALLING
++PTE_BIT_FUNC(low, mkclean, &= ~_PAGE_DIRTY);
++PTE_BIT_FUNC(low, mkdirty, |= _PAGE_DIRTY);
++PTE_BIT_FUNC(low, mkold, &= ~_PAGE_ACCESSED);
++PTE_BIT_FUNC(low, mkyoung, |= _PAGE_ACCESSED);
+
+/*
-+ * USB phyCore
++ * Macro and implementation to make a page protection as uncachable.
+ */
-+#define PCM990_USB_OVERCURRENT (88 | GPIO_ALT_FN_1_IN)
-+#define PCM990_USB_PWR_EN (89 | GPIO_ALT_FN_2_OUT)
-diff --git a/include/asm-arm/arch-pxa/pxa-regs.h b/include/asm-arm/arch-pxa/pxa-regs.h
-index 1bd398d..442494d 100644
---- a/include/asm-arm/arch-pxa/pxa-regs.h
-+++ b/include/asm-arm/arch-pxa/pxa-regs.h
-@@ -1597,176 +1597,10 @@
- #define PWER_GPIO15 PWER_GPIO (15) /* GPIO [15] wake-up enable */
- #define PWER_RTC 0x80000000 /* RTC alarm wake-up enable */
-
--
- /*
-- * SSP Serial Port Registers
-- * PXA250, PXA255, PXA26x and PXA27x SSP controllers are all slightly different.
-- * PXA255, PXA26x and PXA27x have extra ports, registers and bits.
-+ * SSP Serial Port Registers - see include/asm-arm/arch-pxa/regs-ssp.h
- */
-
-- /* Common PXA2xx bits first */
--#define SSCR0_DSS (0x0000000f) /* Data Size Select (mask) */
--#define SSCR0_DataSize(x) ((x) - 1) /* Data Size Select [4..16] */
--#define SSCR0_FRF (0x00000030) /* FRame Format (mask) */
--#define SSCR0_Motorola (0x0 << 4) /* Motorola's Serial Peripheral Interface (SPI) */
--#define SSCR0_TI (0x1 << 4) /* Texas Instruments' Synchronous Serial Protocol (SSP) */
--#define SSCR0_National (0x2 << 4) /* National Microwire */
--#define SSCR0_ECS (1 << 6) /* External clock select */
--#define SSCR0_SSE (1 << 7) /* Synchronous Serial Port Enable */
--#if defined(CONFIG_PXA25x)
--#define SSCR0_SCR (0x0000ff00) /* Serial Clock Rate (mask) */
--#define SSCR0_SerClkDiv(x) ((((x) - 2)/2) << 8) /* Divisor [2..512] */
--#elif defined(CONFIG_PXA27x)
--#define SSCR0_SCR (0x000fff00) /* Serial Clock Rate (mask) */
--#define SSCR0_SerClkDiv(x) (((x) - 1) << 8) /* Divisor [1..4096] */
--#define SSCR0_EDSS (1 << 20) /* Extended data size select */
--#define SSCR0_NCS (1 << 21) /* Network clock select */
--#define SSCR0_RIM (1 << 22) /* Receive FIFO overrrun interrupt mask */
--#define SSCR0_TUM (1 << 23) /* Transmit FIFO underrun interrupt mask */
--#define SSCR0_FRDC (0x07000000) /* Frame rate divider control (mask) */
--#define SSCR0_SlotsPerFrm(x) (((x) - 1) << 24) /* Time slots per frame [1..8] */
--#define SSCR0_ADC (1 << 30) /* Audio clock select */
--#define SSCR0_MOD (1 << 31) /* Mode (normal or network) */
--#endif
--
--#define SSCR1_RIE (1 << 0) /* Receive FIFO Interrupt Enable */
--#define SSCR1_TIE (1 << 1) /* Transmit FIFO Interrupt Enable */
--#define SSCR1_LBM (1 << 2) /* Loop-Back Mode */
--#define SSCR1_SPO (1 << 3) /* Motorola SPI SSPSCLK polarity setting */
--#define SSCR1_SPH (1 << 4) /* Motorola SPI SSPSCLK phase setting */
--#define SSCR1_MWDS (1 << 5) /* Microwire Transmit Data Size */
--#define SSCR1_TFT (0x000003c0) /* Transmit FIFO Threshold (mask) */
--#define SSCR1_TxTresh(x) (((x) - 1) << 6) /* level [1..16] */
--#define SSCR1_RFT (0x00003c00) /* Receive FIFO Threshold (mask) */
--#define SSCR1_RxTresh(x) (((x) - 1) << 10) /* level [1..16] */
--
--#define SSSR_TNF (1 << 2) /* Transmit FIFO Not Full */
--#define SSSR_RNE (1 << 3) /* Receive FIFO Not Empty */
--#define SSSR_BSY (1 << 4) /* SSP Busy */
--#define SSSR_TFS (1 << 5) /* Transmit FIFO Service Request */
--#define SSSR_RFS (1 << 6) /* Receive FIFO Service Request */
--#define SSSR_ROR (1 << 7) /* Receive FIFO Overrun */
--
--#define SSCR0_TIM (1 << 23) /* Transmit FIFO Under Run Interrupt Mask */
--#define SSCR0_RIM (1 << 22) /* Receive FIFO Over Run interrupt Mask */
--#define SSCR0_NCS (1 << 21) /* Network Clock Select */
--#define SSCR0_EDSS (1 << 20) /* Extended Data Size Select */
--
--/* extra bits in PXA255, PXA26x and PXA27x SSP ports */
--#define SSCR0_TISSP (1 << 4) /* TI Sync Serial Protocol */
--#define SSCR0_PSP (3 << 4) /* PSP - Programmable Serial Protocol */
--#define SSCR1_TTELP (1 << 31) /* TXD Tristate Enable Last Phase */
--#define SSCR1_TTE (1 << 30) /* TXD Tristate Enable */
--#define SSCR1_EBCEI (1 << 29) /* Enable Bit Count Error interrupt */
--#define SSCR1_SCFR (1 << 28) /* Slave Clock free Running */
--#define SSCR1_ECRA (1 << 27) /* Enable Clock Request A */
--#define SSCR1_ECRB (1 << 26) /* Enable Clock request B */
--#define SSCR1_SCLKDIR (1 << 25) /* Serial Bit Rate Clock Direction */
--#define SSCR1_SFRMDIR (1 << 24) /* Frame Direction */
--#define SSCR1_RWOT (1 << 23) /* Receive Without Transmit */
--#define SSCR1_TRAIL (1 << 22) /* Trailing Byte */
--#define SSCR1_TSRE (1 << 21) /* Transmit Service Request Enable */
--#define SSCR1_RSRE (1 << 20) /* Receive Service Request Enable */
--#define SSCR1_TINTE (1 << 19) /* Receiver Time-out Interrupt enable */
--#define SSCR1_PINTE (1 << 18) /* Peripheral Trailing Byte Interupt Enable */
--#define SSCR1_STRF (1 << 15) /* Select FIFO or EFWR */
--#define SSCR1_EFWR (1 << 14) /* Enable FIFO Write/Read */
--
--#define SSSR_BCE (1 << 23) /* Bit Count Error */
--#define SSSR_CSS (1 << 22) /* Clock Synchronisation Status */
--#define SSSR_TUR (1 << 21) /* Transmit FIFO Under Run */
--#define SSSR_EOC (1 << 20) /* End Of Chain */
--#define SSSR_TINT (1 << 19) /* Receiver Time-out Interrupt */
--#define SSSR_PINT (1 << 18) /* Peripheral Trailing Byte Interrupt */
--
--#define SSPSP_FSRT (1 << 25) /* Frame Sync Relative Timing */
--#define SSPSP_DMYSTOP(x) ((x) << 23) /* Dummy Stop */
--#define SSPSP_SFRMWDTH(x) ((x) << 16) /* Serial Frame Width */
--#define SSPSP_SFRMDLY(x) ((x) << 9) /* Serial Frame Delay */
--#define SSPSP_DMYSTRT(x) ((x) << 7) /* Dummy Start */
--#define SSPSP_STRTDLY(x) ((x) << 4) /* Start Delay */
--#define SSPSP_ETDS (1 << 3) /* End of Transfer data State */
--#define SSPSP_SFRMP (1 << 2) /* Serial Frame Polarity */
--#define SSPSP_SCMODE(x) ((x) << 0) /* Serial Bit Rate Clock Mode */
--
--#define SSACD_SCDB (1 << 3) /* SSPSYSCLK Divider Bypass */
--#define SSACD_ACPS(x) ((x) << 4) /* Audio clock PLL select */
--#define SSACD_ACDS(x) ((x) << 0) /* Audio clock divider select */
--
--#define SSCR0_P1 __REG(0x41000000) /* SSP Port 1 Control Register 0 */
--#define SSCR1_P1 __REG(0x41000004) /* SSP Port 1 Control Register 1 */
--#define SSSR_P1 __REG(0x41000008) /* SSP Port 1 Status Register */
--#define SSITR_P1 __REG(0x4100000C) /* SSP Port 1 Interrupt Test Register */
--#define SSDR_P1 __REG(0x41000010) /* (Write / Read) SSP Port 1 Data Write Register/SSP Data Read Register */
--
--/* Support existing PXA25x drivers */
--#define SSCR0 SSCR0_P1 /* SSP Control Register 0 */
--#define SSCR1 SSCR1_P1 /* SSP Control Register 1 */
--#define SSSR SSSR_P1 /* SSP Status Register */
--#define SSITR SSITR_P1 /* SSP Interrupt Test Register */
--#define SSDR SSDR_P1 /* (Write / Read) SSP Data Write Register/SSP Data Read Register */
--
--/* PXA27x ports */
--#if defined (CONFIG_PXA27x)
--#define SSTO_P1 __REG(0x41000028) /* SSP Port 1 Time Out Register */
--#define SSPSP_P1 __REG(0x4100002C) /* SSP Port 1 Programmable Serial Protocol */
--#define SSTSA_P1 __REG(0x41000030) /* SSP Port 1 Tx Timeslot Active */
--#define SSRSA_P1 __REG(0x41000034) /* SSP Port 1 Rx Timeslot Active */
--#define SSTSS_P1 __REG(0x41000038) /* SSP Port 1 Timeslot Status */
--#define SSACD_P1 __REG(0x4100003C) /* SSP Port 1 Audio Clock Divider */
--#define SSCR0_P2 __REG(0x41700000) /* SSP Port 2 Control Register 0 */
--#define SSCR1_P2 __REG(0x41700004) /* SSP Port 2 Control Register 1 */
--#define SSSR_P2 __REG(0x41700008) /* SSP Port 2 Status Register */
--#define SSITR_P2 __REG(0x4170000C) /* SSP Port 2 Interrupt Test Register */
--#define SSDR_P2 __REG(0x41700010) /* (Write / Read) SSP Port 2 Data Write Register/SSP Data Read Register */
--#define SSTO_P2 __REG(0x41700028) /* SSP Port 2 Time Out Register */
--#define SSPSP_P2 __REG(0x4170002C) /* SSP Port 2 Programmable Serial Protocol */
--#define SSTSA_P2 __REG(0x41700030) /* SSP Port 2 Tx Timeslot Active */
--#define SSRSA_P2 __REG(0x41700034) /* SSP Port 2 Rx Timeslot Active */
--#define SSTSS_P2 __REG(0x41700038) /* SSP Port 2 Timeslot Status */
--#define SSACD_P2 __REG(0x4170003C) /* SSP Port 2 Audio Clock Divider */
--#define SSCR0_P3 __REG(0x41900000) /* SSP Port 3 Control Register 0 */
--#define SSCR1_P3 __REG(0x41900004) /* SSP Port 3 Control Register 1 */
--#define SSSR_P3 __REG(0x41900008) /* SSP Port 3 Status Register */
--#define SSITR_P3 __REG(0x4190000C) /* SSP Port 3 Interrupt Test Register */
--#define SSDR_P3 __REG(0x41900010) /* (Write / Read) SSP Port 3 Data Write Register/SSP Data Read Register */
--#define SSTO_P3 __REG(0x41900028) /* SSP Port 3 Time Out Register */
--#define SSPSP_P3 __REG(0x4190002C) /* SSP Port 3 Programmable Serial Protocol */
--#define SSTSA_P3 __REG(0x41900030) /* SSP Port 3 Tx Timeslot Active */
--#define SSRSA_P3 __REG(0x41900034) /* SSP Port 3 Rx Timeslot Active */
--#define SSTSS_P3 __REG(0x41900038) /* SSP Port 3 Timeslot Status */
--#define SSACD_P3 __REG(0x4190003C) /* SSP Port 3 Audio Clock Divider */
--#else /* PXA255 (only port 2) and PXA26x ports*/
--#define SSTO_P1 __REG(0x41000028) /* SSP Port 1 Time Out Register */
--#define SSPSP_P1 __REG(0x4100002C) /* SSP Port 1 Programmable Serial Protocol */
--#define SSCR0_P2 __REG(0x41400000) /* SSP Port 2 Control Register 0 */
--#define SSCR1_P2 __REG(0x41400004) /* SSP Port 2 Control Register 1 */
--#define SSSR_P2 __REG(0x41400008) /* SSP Port 2 Status Register */
--#define SSITR_P2 __REG(0x4140000C) /* SSP Port 2 Interrupt Test Register */
--#define SSDR_P2 __REG(0x41400010) /* (Write / Read) SSP Port 2 Data Write Register/SSP Data Read Register */
--#define SSTO_P2 __REG(0x41400028) /* SSP Port 2 Time Out Register */
--#define SSPSP_P2 __REG(0x4140002C) /* SSP Port 2 Programmable Serial Protocol */
--#define SSCR0_P3 __REG(0x41500000) /* SSP Port 3 Control Register 0 */
--#define SSCR1_P3 __REG(0x41500004) /* SSP Port 3 Control Register 1 */
--#define SSSR_P3 __REG(0x41500008) /* SSP Port 3 Status Register */
--#define SSITR_P3 __REG(0x4150000C) /* SSP Port 3 Interrupt Test Register */
--#define SSDR_P3 __REG(0x41500010) /* (Write / Read) SSP Port 3 Data Write Register/SSP Data Read Register */
--#define SSTO_P3 __REG(0x41500028) /* SSP Port 3 Time Out Register */
--#define SSPSP_P3 __REG(0x4150002C) /* SSP Port 3 Programmable Serial Protocol */
--#endif
--
--#define SSCR0_P(x) (*(((x) == 1) ? &SSCR0_P1 : ((x) == 2) ? &SSCR0_P2 : ((x) == 3) ? &SSCR0_P3 : NULL))
--#define SSCR1_P(x) (*(((x) == 1) ? &SSCR1_P1 : ((x) == 2) ? &SSCR1_P2 : ((x) == 3) ? &SSCR1_P3 : NULL))
--#define SSSR_P(x) (*(((x) == 1) ? &SSSR_P1 : ((x) == 2) ? &SSSR_P2 : ((x) == 3) ? &SSSR_P3 : NULL))
--#define SSITR_P(x) (*(((x) == 1) ? &SSITR_P1 : ((x) == 2) ? &SSITR_P2 : ((x) == 3) ? &SSITR_P3 : NULL))
--#define SSDR_P(x) (*(((x) == 1) ? &SSDR_P1 : ((x) == 2) ? &SSDR_P2 : ((x) == 3) ? &SSDR_P3 : NULL))
--#define SSTO_P(x) (*(((x) == 1) ? &SSTO_P1 : ((x) == 2) ? &SSTO_P2 : ((x) == 3) ? &SSTO_P3 : NULL))
--#define SSPSP_P(x) (*(((x) == 1) ? &SSPSP_P1 : ((x) == 2) ? &SSPSP_P2 : ((x) == 3) ? &SSPSP_P3 : NULL))
--#define SSTSA_P(x) (*(((x) == 1) ? &SSTSA_P1 : ((x) == 2) ? &SSTSA_P2 : ((x) == 3) ? &SSTSA_P3 : NULL))
--#define SSRSA_P(x) (*(((x) == 1) ? &SSRSA_P1 : ((x) == 2) ? &SSRSA_P2 : ((x) == 3) ? &SSRSA_P3 : NULL))
--#define SSTSS_P(x) (*(((x) == 1) ? &SSTSS_P1 : ((x) == 2) ? &SSTSS_P2 : ((x) == 3) ? &SSTSS_P3 : NULL))
--#define SSACD_P(x) (*(((x) == 1) ? &SSACD_P1 : ((x) == 2) ? &SSACD_P2 : ((x) == 3) ? &SSACD_P3 : NULL))
--
- /*
- * MultiMediaCard (MMC) controller - see drivers/mmc/host/pxamci.h
- */
-@@ -2014,71 +1848,8 @@
-
- #define LDCMD_PAL (1 << 26) /* instructs DMA to load palette buffer */
-
--/*
-- * Memory controller
-- */
--
--#define MDCNFG __REG(0x48000000) /* SDRAM Configuration Register 0 */
--#define MDREFR __REG(0x48000004) /* SDRAM Refresh Control Register */
--#define MSC0 __REG(0x48000008) /* Static Memory Control Register 0 */
--#define MSC1 __REG(0x4800000C) /* Static Memory Control Register 1 */
--#define MSC2 __REG(0x48000010) /* Static Memory Control Register 2 */
--#define MECR __REG(0x48000014) /* Expansion Memory (PCMCIA/Compact Flash) Bus Configuration */
--#define SXLCR __REG(0x48000018) /* LCR value to be written to SDRAM-Timing Synchronous Flash */
--#define SXCNFG __REG(0x4800001C) /* Synchronous Static Memory Control Register */
--#define SXMRS __REG(0x48000024) /* MRS value to be written to Synchronous Flash or SMROM */
--#define MCMEM0 __REG(0x48000028) /* Card interface Common Memory Space Socket 0 Timing */
--#define MCMEM1 __REG(0x4800002C) /* Card interface Common Memory Space Socket 1 Timing */
--#define MCATT0 __REG(0x48000030) /* Card interface Attribute Space Socket 0 Timing Configuration */
--#define MCATT1 __REG(0x48000034) /* Card interface Attribute Space Socket 1 Timing Configuration */
--#define MCIO0 __REG(0x48000038) /* Card interface I/O Space Socket 0 Timing Configuration */
--#define MCIO1 __REG(0x4800003C) /* Card interface I/O Space Socket 1 Timing Configuration */
--#define MDMRS __REG(0x48000040) /* MRS value to be written to SDRAM */
--#define BOOT_DEF __REG(0x48000044) /* Read-Only Boot-Time Register. Contains BOOT_SEL and PKG_SEL */
--
--/*
-- * More handy macros for PCMCIA
-- *
-- * Arg is socket number
-- */
--#define MCMEM(s) __REG2(0x48000028, (s)<<2 ) /* Card interface Common Memory Space Socket s Timing */
--#define MCATT(s) __REG2(0x48000030, (s)<<2 ) /* Card interface Attribute Space Socket s Timing Configuration */
--#define MCIO(s) __REG2(0x48000038, (s)<<2 ) /* Card interface I/O Space Socket s Timing Configuration */
--
--/* MECR register defines */
--#define MECR_NOS (1 << 0) /* Number Of Sockets: 0 -> 1 sock, 1 -> 2 sock */
--#define MECR_CIT (1 << 1) /* Card Is There: 0 -> no card, 1 -> card inserted */
--
--#define MDREFR_K0DB4 (1 << 29) /* SDCLK0 Divide by 4 Control/Status */
--#define MDREFR_K2FREE (1 << 25) /* SDRAM Free-Running Control */
--#define MDREFR_K1FREE (1 << 24) /* SDRAM Free-Running Control */
--#define MDREFR_K0FREE (1 << 23) /* SDRAM Free-Running Control */
--#define MDREFR_SLFRSH (1 << 22) /* SDRAM Self-Refresh Control/Status */
--#define MDREFR_APD (1 << 20) /* SDRAM/SSRAM Auto-Power-Down Enable */
--#define MDREFR_K2DB2 (1 << 19) /* SDCLK2 Divide by 2 Control/Status */
--#define MDREFR_K2RUN (1 << 18) /* SDCLK2 Run Control/Status */
--#define MDREFR_K1DB2 (1 << 17) /* SDCLK1 Divide by 2 Control/Status */
--#define MDREFR_K1RUN (1 << 16) /* SDCLK1 Run Control/Status */
--#define MDREFR_E1PIN (1 << 15) /* SDCKE1 Level Control/Status */
--#define MDREFR_K0DB2 (1 << 14) /* SDCLK0 Divide by 2 Control/Status */
--#define MDREFR_K0RUN (1 << 13) /* SDCLK0 Run Control/Status */
--#define MDREFR_E0PIN (1 << 12) /* SDCKE0 Level Control/Status */
--
--
- #ifdef CONFIG_PXA27x
-
--#define ARB_CNTRL __REG(0x48000048) /* Arbiter Control Register */
--
--#define ARB_DMA_SLV_PARK (1<<31) /* Be parked with DMA slave when idle */
--#define ARB_CI_PARK (1<<30) /* Be parked with Camera Interface when idle */
--#define ARB_EX_MEM_PARK (1<<29) /* Be parked with external MEMC when idle */
--#define ARB_INT_MEM_PARK (1<<28) /* Be parked with internal MEMC when idle */
--#define ARB_USB_PARK (1<<27) /* Be parked with USB when idle */
--#define ARB_LCD_PARK (1<<26) /* Be parked with LCD when idle */
--#define ARB_DMA_PARK (1<<25) /* Be parked with DMA when idle */
--#define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */
--#define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */
--
- /*
- * Keypad
- */
-@@ -2135,74 +1906,6 @@
- #define KPAS_SO (0x1 << 31)
- #define KPASMKPx_SO (0x1 << 31)
-
--/*
-- * UHC: USB Host Controller (OHCI-like) register definitions
-- */
--#define UHC_BASE_PHYS (0x4C000000)
--#define UHCREV __REG(0x4C000000) /* UHC HCI Spec Revision */
--#define UHCHCON __REG(0x4C000004) /* UHC Host Control Register */
--#define UHCCOMS __REG(0x4C000008) /* UHC Command Status Register */
--#define UHCINTS __REG(0x4C00000C) /* UHC Interrupt Status Register */
--#define UHCINTE __REG(0x4C000010) /* UHC Interrupt Enable */
--#define UHCINTD __REG(0x4C000014) /* UHC Interrupt Disable */
--#define UHCHCCA __REG(0x4C000018) /* UHC Host Controller Comm. Area */
--#define UHCPCED __REG(0x4C00001C) /* UHC Period Current Endpt Descr */
--#define UHCCHED __REG(0x4C000020) /* UHC Control Head Endpt Descr */
--#define UHCCCED __REG(0x4C000024) /* UHC Control Current Endpt Descr */
--#define UHCBHED __REG(0x4C000028) /* UHC Bulk Head Endpt Descr */
--#define UHCBCED __REG(0x4C00002C) /* UHC Bulk Current Endpt Descr */
--#define UHCDHEAD __REG(0x4C000030) /* UHC Done Head */
--#define UHCFMI __REG(0x4C000034) /* UHC Frame Interval */
--#define UHCFMR __REG(0x4C000038) /* UHC Frame Remaining */
--#define UHCFMN __REG(0x4C00003C) /* UHC Frame Number */
--#define UHCPERS __REG(0x4C000040) /* UHC Periodic Start */
--#define UHCLS __REG(0x4C000044) /* UHC Low Speed Threshold */
--
--#define UHCRHDA __REG(0x4C000048) /* UHC Root Hub Descriptor A */
--#define UHCRHDA_NOCP (1 << 12) /* No over current protection */
--
--#define UHCRHDB __REG(0x4C00004C) /* UHC Root Hub Descriptor B */
--#define UHCRHS __REG(0x4C000050) /* UHC Root Hub Status */
--#define UHCRHPS1 __REG(0x4C000054) /* UHC Root Hub Port 1 Status */
--#define UHCRHPS2 __REG(0x4C000058) /* UHC Root Hub Port 2 Status */
--#define UHCRHPS3 __REG(0x4C00005C) /* UHC Root Hub Port 3 Status */
--
--#define UHCSTAT __REG(0x4C000060) /* UHC Status Register */
--#define UHCSTAT_UPS3 (1 << 16) /* USB Power Sense Port3 */
--#define UHCSTAT_SBMAI (1 << 15) /* System Bus Master Abort Interrupt*/
--#define UHCSTAT_SBTAI (1 << 14) /* System Bus Target Abort Interrupt*/
--#define UHCSTAT_UPRI (1 << 13) /* USB Port Resume Interrupt */
--#define UHCSTAT_UPS2 (1 << 12) /* USB Power Sense Port 2 */
--#define UHCSTAT_UPS1 (1 << 11) /* USB Power Sense Port 1 */
--#define UHCSTAT_HTA (1 << 10) /* HCI Target Abort */
--#define UHCSTAT_HBA (1 << 8) /* HCI Buffer Active */
--#define UHCSTAT_RWUE (1 << 7) /* HCI Remote Wake Up Event */
--
--#define UHCHR __REG(0x4C000064) /* UHC Reset Register */
--#define UHCHR_SSEP3 (1 << 11) /* Sleep Standby Enable for Port3 */
--#define UHCHR_SSEP2 (1 << 10) /* Sleep Standby Enable for Port2 */
--#define UHCHR_SSEP1 (1 << 9) /* Sleep Standby Enable for Port1 */
--#define UHCHR_PCPL (1 << 7) /* Power control polarity low */
--#define UHCHR_PSPL (1 << 6) /* Power sense polarity low */
--#define UHCHR_SSE (1 << 5) /* Sleep Standby Enable */
--#define UHCHR_UIT (1 << 4) /* USB Interrupt Test */
--#define UHCHR_SSDC (1 << 3) /* Simulation Scale Down Clock */
--#define UHCHR_CGR (1 << 2) /* Clock Generation Reset */
--#define UHCHR_FHR (1 << 1) /* Force Host Controller Reset */
--#define UHCHR_FSBIR (1 << 0) /* Force System Bus Iface Reset */
--
--#define UHCHIE __REG(0x4C000068) /* UHC Interrupt Enable Register*/
--#define UHCHIE_UPS3IE (1 << 14) /* Power Sense Port3 IntEn */
--#define UHCHIE_UPRIE (1 << 13) /* Port Resume IntEn */
--#define UHCHIE_UPS2IE (1 << 12) /* Power Sense Port2 IntEn */
--#define UHCHIE_UPS1IE (1 << 11) /* Power Sense Port1 IntEn */
--#define UHCHIE_TAIE (1 << 10) /* HCI Interface Transfer Abort
-- Interrupt Enable*/
--#define UHCHIE_HBAIE (1 << 8) /* HCI Buffer Active IntEn */
--#define UHCHIE_RWIE (1 << 7) /* Remote Wake-up IntEn */
--
--#define UHCHIT __REG(0x4C00006C) /* UHC Interrupt Test register */
--
- /* Camera Interface */
- #define CICR0 __REG(0x50000000)
- #define CICR1 __REG(0x50000004)
-@@ -2350,6 +2053,77 @@
-
- #endif
-
-+#if defined(CONFIG_PXA27x) || defined(CONFIG_PXA3xx)
++#define pgprot_writecombine(prot) \
++ __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
++
++#define pgprot_noncached pgprot_writecombine
++
+/*
-+ * UHC: USB Host Controller (OHCI-like) register definitions
++ * Conversion functions: convert a page and protection to a page entry,
++ * and a page entry and page directory to the page they refer to.
++ *
++ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
+ */
-+#define UHC_BASE_PHYS (0x4C000000)
-+#define UHCREV __REG(0x4C000000) /* UHC HCI Spec Revision */
-+#define UHCHCON __REG(0x4C000004) /* UHC Host Control Register */
-+#define UHCCOMS __REG(0x4C000008) /* UHC Command Status Register */
-+#define UHCINTS __REG(0x4C00000C) /* UHC Interrupt Status Register */
-+#define UHCINTE __REG(0x4C000010) /* UHC Interrupt Enable */
-+#define UHCINTD __REG(0x4C000014) /* UHC Interrupt Disable */
-+#define UHCHCCA __REG(0x4C000018) /* UHC Host Controller Comm. Area */
-+#define UHCPCED __REG(0x4C00001C) /* UHC Period Current Endpt Descr */
-+#define UHCCHED __REG(0x4C000020) /* UHC Control Head Endpt Descr */
-+#define UHCCCED __REG(0x4C000024) /* UHC Control Current Endpt Descr */
-+#define UHCBHED __REG(0x4C000028) /* UHC Bulk Head Endpt Descr */
-+#define UHCBCED __REG(0x4C00002C) /* UHC Bulk Current Endpt Descr */
-+#define UHCDHEAD __REG(0x4C000030) /* UHC Done Head */
-+#define UHCFMI __REG(0x4C000034) /* UHC Frame Interval */
-+#define UHCFMR __REG(0x4C000038) /* UHC Frame Remaining */
-+#define UHCFMN __REG(0x4C00003C) /* UHC Frame Number */
-+#define UHCPERS __REG(0x4C000040) /* UHC Periodic Start */
-+#define UHCLS __REG(0x4C000044) /* UHC Low Speed Threshold */
++#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+
-+#define UHCRHDA __REG(0x4C000048) /* UHC Root Hub Descriptor A */
-+#define UHCRHDA_NOCP (1 << 12) /* No over current protection */
++static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
++{
++ pte.pte_low &= _PAGE_CHG_MASK;
++ pte.pte_low |= pgprot_val(newprot);
+
-+#define UHCRHDB __REG(0x4C00004C) /* UHC Root Hub Descriptor B */
-+#define UHCRHS __REG(0x4C000050) /* UHC Root Hub Status */
-+#define UHCRHPS1 __REG(0x4C000054) /* UHC Root Hub Port 1 Status */
-+#define UHCRHPS2 __REG(0x4C000058) /* UHC Root Hub Port 2 Status */
-+#define UHCRHPS3 __REG(0x4C00005C) /* UHC Root Hub Port 3 Status */
++#ifdef CONFIG_X2TLB
++ pte.pte_high |= pgprot_val(newprot) >> 32;
++#endif
+
-+#define UHCSTAT __REG(0x4C000060) /* UHC Status Register */
-+#define UHCSTAT_UPS3 (1 << 16) /* USB Power Sense Port3 */
-+#define UHCSTAT_SBMAI (1 << 15) /* System Bus Master Abort Interrupt*/
-+#define UHCSTAT_SBTAI (1 << 14) /* System Bus Target Abort Interrupt*/
-+#define UHCSTAT_UPRI (1 << 13) /* USB Port Resume Interrupt */
-+#define UHCSTAT_UPS2 (1 << 12) /* USB Power Sense Port 2 */
-+#define UHCSTAT_UPS1 (1 << 11) /* USB Power Sense Port 1 */
-+#define UHCSTAT_HTA (1 << 10) /* HCI Target Abort */
-+#define UHCSTAT_HBA (1 << 8) /* HCI Buffer Active */
-+#define UHCSTAT_RWUE (1 << 7) /* HCI Remote Wake Up Event */
++ return pte;
++}
+
-+#define UHCHR __REG(0x4C000064) /* UHC Reset Register */
-+#define UHCHR_SSEP3 (1 << 11) /* Sleep Standby Enable for Port3 */
-+#define UHCHR_SSEP2 (1 << 10) /* Sleep Standby Enable for Port2 */
-+#define UHCHR_SSEP1 (1 << 9) /* Sleep Standby Enable for Port1 */
-+#define UHCHR_PCPL (1 << 7) /* Power control polarity low */
-+#define UHCHR_PSPL (1 << 6) /* Power sense polarity low */
-+#define UHCHR_SSE (1 << 5) /* Sleep Standby Enable */
-+#define UHCHR_UIT (1 << 4) /* USB Interrupt Test */
-+#define UHCHR_SSDC (1 << 3) /* Simulation Scale Down Clock */
-+#define UHCHR_CGR (1 << 2) /* Clock Generation Reset */
-+#define UHCHR_FHR (1 << 1) /* Force Host Controller Reset */
-+#define UHCHR_FSBIR (1 << 0) /* Force System Bus Iface Reset */
++#define pmd_page_vaddr(pmd) ((unsigned long)pmd_val(pmd))
++#define pmd_page(pmd) (virt_to_page(pmd_val(pmd)))
+
-+#define UHCHIE __REG(0x4C000068) /* UHC Interrupt Enable Register*/
-+#define UHCHIE_UPS3IE (1 << 14) /* Power Sense Port3 IntEn */
-+#define UHCHIE_UPRIE (1 << 13) /* Port Resume IntEn */
-+#define UHCHIE_UPS2IE (1 << 12) /* Power Sense Port2 IntEn */
-+#define UHCHIE_UPS1IE (1 << 11) /* Power Sense Port1 IntEn */
-+#define UHCHIE_TAIE (1 << 10) /* HCI Interface Transfer Abort
-+ Interrupt Enable*/
-+#define UHCHIE_HBAIE (1 << 8) /* HCI Buffer Active IntEn */
-+#define UHCHIE_RWIE (1 << 7) /* Remote Wake-up IntEn */
++/* to find an entry in a page-table-directory. */
++#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
++#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+
-+#define UHCHIT __REG(0x4C00006C) /* UHC Interrupt Test register */
++/* to find an entry in a kernel page-table-directory */
++#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
-+#endif /* CONFIG_PXA27x || CONFIG_PXA3xx */
++/* Find an entry in the third-level page table.. */
++#define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
++#define pte_offset_kernel(dir, address) \
++ ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address))
++#define pte_offset_map(dir, address) pte_offset_kernel(dir, address)
++#define pte_offset_map_nested(dir, address) pte_offset_kernel(dir, address)
++
++#define pte_unmap(pte) do { } while (0)
++#define pte_unmap_nested(pte) do { } while (0)
++
++#ifdef CONFIG_X2TLB
++#define pte_ERROR(e) \
++ printk("%s:%d: bad pte %p(%08lx%08lx).\n", __FILE__, __LINE__, \
++ &(e), (e).pte_high, (e).pte_low)
++#define pgd_ERROR(e) \
++ printk("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
++#else
++#define pte_ERROR(e) \
++ printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
++#define pgd_ERROR(e) \
++ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
++#endif
+
- /* PWRMODE register M field values */
-
- #define PWRMODE_IDLE 0x1
-diff --git a/include/asm-arm/arch-pxa/pxa2xx-regs.h b/include/asm-arm/arch-pxa/pxa2xx-regs.h
-new file mode 100644
-index 0000000..9553b54
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/pxa2xx-regs.h
-@@ -0,0 +1,84 @@
+/*
-+ * linux/include/asm-arm/arch-pxa/pxa2xx-regs.h
++ * Encode and de-code a swap entry
+ *
-+ * Taken from pxa-regs.h by Russell King
++ * Constraints:
++ * _PAGE_FILE at bit 0
++ * _PAGE_PRESENT at bit 8
++ * _PAGE_PROTNONE at bit 9
+ *
-+ * Author: Nicolas Pitre
-+ * Copyright: MontaVista Software Inc.
++ * For the normal case, we encode the swap type into bits 0:7 and the
++ * swap offset into bits 10:30. For the 64-bit PTE case, we keep the
++ * preserved bits in the low 32-bits and use the upper 32 as the swap
++ * offset (along with a 5-bit type), following the same approach as x86
++ * PAE. This keeps the logic quite simple, and allows for a full 32
++ * PTE_FILE_MAX_BITS, as opposed to the 29-bits we're constrained with
++ * in the pte_low case.
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * As is evident by the Alpha code, if we ever get a 64-bit unsigned
++ * long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
++ * much cleaner..
++ *
++ * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
++ * and _PAGE_PROTNONE bits
+ */
-+
-+#ifndef __PXA2XX_REGS_H
-+#define __PXA2XX_REGS_H
++#ifdef CONFIG_X2TLB
++#define __swp_type(x) ((x).val & 0x1f)
++#define __swp_offset(x) ((x).val >> 5)
++#define __swp_entry(type, offset) ((swp_entry_t){ (type) | (offset) << 5})
++#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
++#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
+
+/*
-+ * Memory controller
++ * Encode and decode a nonlinear file mapping entry
+ */
++#define pte_to_pgoff(pte) ((pte).pte_high)
++#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
+
-+#define MDCNFG __REG(0x48000000) /* SDRAM Configuration Register 0 */
-+#define MDREFR __REG(0x48000004) /* SDRAM Refresh Control Register */
-+#define MSC0 __REG(0x48000008) /* Static Memory Control Register 0 */
-+#define MSC1 __REG(0x4800000C) /* Static Memory Control Register 1 */
-+#define MSC2 __REG(0x48000010) /* Static Memory Control Register 2 */
-+#define MECR __REG(0x48000014) /* Expansion Memory (PCMCIA/Compact Flash) Bus Configuration */
-+#define SXLCR __REG(0x48000018) /* LCR value to be written to SDRAM-Timing Synchronous Flash */
-+#define SXCNFG __REG(0x4800001C) /* Synchronous Static Memory Control Register */
-+#define SXMRS __REG(0x48000024) /* MRS value to be written to Synchronous Flash or SMROM */
-+#define MCMEM0 __REG(0x48000028) /* Card interface Common Memory Space Socket 0 Timing */
-+#define MCMEM1 __REG(0x4800002C) /* Card interface Common Memory Space Socket 1 Timing */
-+#define MCATT0 __REG(0x48000030) /* Card interface Attribute Space Socket 0 Timing Configuration */
-+#define MCATT1 __REG(0x48000034) /* Card interface Attribute Space Socket 1 Timing Configuration */
-+#define MCIO0 __REG(0x48000038) /* Card interface I/O Space Socket 0 Timing Configuration */
-+#define MCIO1 __REG(0x4800003C) /* Card interface I/O Space Socket 1 Timing Configuration */
-+#define MDMRS __REG(0x48000040) /* MRS value to be written to SDRAM */
-+#define BOOT_DEF __REG(0x48000044) /* Read-Only Boot-Time Register. Contains BOOT_SEL and PKG_SEL */
++#define PTE_FILE_MAX_BITS 32
++#else
++#define __swp_type(x) ((x).val & 0xff)
++#define __swp_offset(x) ((x).val >> 10)
++#define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) <<10})
++
++#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 1 })
++#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 1 })
+
+/*
-+ * More handy macros for PCMCIA
-+ *
-+ * Arg is socket number
++ * Encode and decode a nonlinear file mapping entry
+ */
-+#define MCMEM(s) __REG2(0x48000028, (s)<<2 ) /* Card interface Common Memory Space Socket s Timing */
-+#define MCATT(s) __REG2(0x48000030, (s)<<2 ) /* Card interface Attribute Space Socket s Timing Configuration */
-+#define MCIO(s) __REG2(0x48000038, (s)<<2 ) /* Card interface I/O Space Socket s Timing Configuration */
-+
-+/* MECR register defines */
-+#define MECR_NOS (1 << 0) /* Number Of Sockets: 0 -> 1 sock, 1 -> 2 sock */
-+#define MECR_CIT (1 << 1) /* Card Is There: 0 -> no card, 1 -> card inserted */
-+
-+#define MDREFR_K0DB4 (1 << 29) /* SDCLK0 Divide by 4 Control/Status */
-+#define MDREFR_K2FREE (1 << 25) /* SDRAM Free-Running Control */
-+#define MDREFR_K1FREE (1 << 24) /* SDRAM Free-Running Control */
-+#define MDREFR_K0FREE (1 << 23) /* SDRAM Free-Running Control */
-+#define MDREFR_SLFRSH (1 << 22) /* SDRAM Self-Refresh Control/Status */
-+#define MDREFR_APD (1 << 20) /* SDRAM/SSRAM Auto-Power-Down Enable */
-+#define MDREFR_K2DB2 (1 << 19) /* SDCLK2 Divide by 2 Control/Status */
-+#define MDREFR_K2RUN (1 << 18) /* SDCLK2 Run Control/Status */
-+#define MDREFR_K1DB2 (1 << 17) /* SDCLK1 Divide by 2 Control/Status */
-+#define MDREFR_K1RUN (1 << 16) /* SDCLK1 Run Control/Status */
-+#define MDREFR_E1PIN (1 << 15) /* SDCKE1 Level Control/Status */
-+#define MDREFR_K0DB2 (1 << 14) /* SDCLK0 Divide by 2 Control/Status */
-+#define MDREFR_K0RUN (1 << 13) /* SDCLK0 Run Control/Status */
-+#define MDREFR_E0PIN (1 << 12) /* SDCKE0 Level Control/Status */
-+
-+
-+#ifdef CONFIG_PXA27x
-+
-+#define ARB_CNTRL __REG(0x48000048) /* Arbiter Control Register */
-+
-+#define ARB_DMA_SLV_PARK (1<<31) /* Be parked with DMA slave when idle */
-+#define ARB_CI_PARK (1<<30) /* Be parked with Camera Interface when idle */
-+#define ARB_EX_MEM_PARK (1<<29) /* Be parked with external MEMC when idle */
-+#define ARB_INT_MEM_PARK (1<<28) /* Be parked with internal MEMC when idle */
-+#define ARB_USB_PARK (1<<27) /* Be parked with USB when idle */
-+#define ARB_LCD_PARK (1<<26) /* Be parked with LCD when idle */
-+#define ARB_DMA_PARK (1<<25) /* Be parked with DMA when idle */
-+#define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */
-+#define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */
-+
++#define PTE_FILE_MAX_BITS 29
++#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
++#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
+#endif
+
-+#endif
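The per-socket PCMCIA macros in the pxa2xx-regs.h hunk above (`MCMEM(s)`, `MCATT(s)`, `MCIO(s)`) compute a register address as the socket-0 base plus `s << 2`. A stand-alone sketch of that address arithmetic, using only the physical addresses quoted in the hunk (the `__REG2` physical-to-virtual mapping itself is omitted; these helper names are illustrative, not kernel API):

```c
#include <stdint.h>

/* Illustrative model of the patch's per-socket timing macros:
 * __REG2(base, off) accesses the register at physical address base + off,
 * and each socket's register sits 4 bytes (s << 2) after socket 0's. */
uint32_t mcmem_phys(unsigned s) { return 0x48000028u + (s << 2); }
uint32_t mcatt_phys(unsigned s) { return 0x48000030u + (s << 2); }
uint32_t mcio_phys(unsigned s)  { return 0x48000038u + (s << 2); }
```

For s = 1 these reproduce the fixed MCMEM1/MCATT1/MCIO1 addresses (0x4800002C, 0x48000034, 0x4800003C) defined a few lines earlier in the same hunk.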
-diff --git a/include/asm-arm/arch-pxa/pxa2xx_spi.h b/include/asm-arm/arch-pxa/pxa2xx_spi.h
-index acc7ec7..3459fb2 100644
---- a/include/asm-arm/arch-pxa/pxa2xx_spi.h
-+++ b/include/asm-arm/arch-pxa/pxa2xx_spi.h
-@@ -22,32 +22,8 @@
- #define PXA2XX_CS_ASSERT (0x01)
- #define PXA2XX_CS_DEASSERT (0x02)
-
--#if defined(CONFIG_PXA25x)
--#define CLOCK_SPEED_HZ 3686400
--#define SSP1_SerClkDiv(x) (((CLOCK_SPEED_HZ/2/(x+1))<<8)&0x0000ff00)
--#define SSP2_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
--#define SSP3_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
--#elif defined(CONFIG_PXA27x)
--#define CLOCK_SPEED_HZ 13000000
--#define SSP1_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
--#define SSP2_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
--#define SSP3_SerClkDiv(x) (((CLOCK_SPEED_HZ/(x+1))<<8)&0x000fff00)
--#endif
--
--#define SSP1_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(1)))))
--#define SSP2_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(2)))))
--#define SSP3_VIRT ((void *)(io_p2v(__PREG(SSCR0_P(3)))))
--
--enum pxa_ssp_type {
-- SSP_UNDEFINED = 0,
-- PXA25x_SSP, /* pxa 210, 250, 255, 26x */
-- PXA25x_NSSP, /* pxa 255, 26x (including ASSP) */
-- PXA27x_SSP,
--};
--
- /* device.platform_data for SSP controller devices */
- struct pxa2xx_spi_master {
-- enum pxa_ssp_type ssp_type;
- u32 clock_enable;
- u16 num_chipselect;
- u8 enable_dma;
-diff --git a/include/asm-arm/arch-pxa/pxa3xx-regs.h b/include/asm-arm/arch-pxa/pxa3xx-regs.h
-index 3900a0c..66d5411 100644
---- a/include/asm-arm/arch-pxa/pxa3xx-regs.h
-+++ b/include/asm-arm/arch-pxa/pxa3xx-regs.h
-@@ -14,6 +14,92 @@
- #define __ASM_ARCH_PXA3XX_REGS_H
-
- /*
-+ * Slave Power Managment Unit
-+ */
-+#define ASCR __REG(0x40f40000) /* Application Subsystem Power Status/Configuration */
-+#define ARSR __REG(0x40f40004) /* Application Subsystem Reset Status */
-+#define AD3ER __REG(0x40f40008) /* Application Subsystem Wake-Up from D3 Enable */
-+#define AD3SR __REG(0x40f4000c) /* Application Subsystem Wake-Up from D3 Status */
-+#define AD2D0ER __REG(0x40f40010) /* Application Subsystem Wake-Up from D2 to D0 Enable */
-+#define AD2D0SR __REG(0x40f40014) /* Application Subsystem Wake-Up from D2 to D0 Status */
-+#define AD2D1ER __REG(0x40f40018) /* Application Subsystem Wake-Up from D2 to D1 Enable */
-+#define AD2D1SR __REG(0x40f4001c) /* Application Subsystem Wake-Up from D2 to D1 Status */
-+#define AD1D0ER __REG(0x40f40020) /* Application Subsystem Wake-Up from D1 to D0 Enable */
-+#define AD1D0SR __REG(0x40f40024) /* Application Subsystem Wake-Up from D1 to D0 Status */
-+#define AGENP __REG(0x40f4002c) /* Application Subsystem General Purpose */
-+#define AD3R __REG(0x40f40030) /* Application Subsystem D3 Configuration */
-+#define AD2R __REG(0x40f40034) /* Application Subsystem D2 Configuration */
-+#define AD1R __REG(0x40f40038) /* Application Subsystem D1 Configuration */
++#endif /* __ASSEMBLY__ */
++#endif /* __ASM_SH_PGTABLE_32_H */
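The comment block in the pgtable_32.h hunk above specifies the 32-bit (non-X2TLB) swap-entry layout: type in bits 0:7, offset from bit 10 up, and a one-bit shift when packing into a PTE so that bit 0 (`_PAGE_FILE`) stays clear. A minimal sketch of that encoding, with the kernel's `pte_t`/`swp_entry_t` reduced to plain `uint32_t` for illustration (function names are hypothetical):

```c
#include <stdint.h>

/* Simplified model of the non-X2TLB swap-entry macros quoted above:
 * type occupies bits 0:7, offset starts at bit 10. */
uint32_t swp_entry32(uint32_t type, uint32_t offset)
{
    return (type & 0xff) | (offset << 10);
}
uint32_t swp_type32(uint32_t v)   { return v & 0xff; }
uint32_t swp_offset32(uint32_t v) { return v >> 10; }

/* Packing into / out of a PTE shifts the whole value by one bit,
 * mirroring __swp_entry_to_pte / __pte_to_swp_entry. */
uint32_t swp_to_pte32(uint32_t v) { return v << 1; }
uint32_t pte_to_swp32(uint32_t p) { return p >> 1; }
```

A round trip through these helpers preserves both fields, and the packed PTE value always has its low bit clear.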
+diff --git a/include/asm-sh/pgtable_64.h b/include/asm-sh/pgtable_64.h
+new file mode 100644
+index 0000000..9722116
+--- /dev/null
++++ b/include/asm-sh/pgtable_64.h
+@@ -0,0 +1,299 @@
++#ifndef __ASM_SH_PGTABLE_64_H
++#define __ASM_SH_PGTABLE_64_H
+
+/*
-+ * Application Subsystem Configuration bits.
++ * include/asm-sh/pgtable_64.h
++ *
++ * This file contains the functions and defines necessary to modify and use
++ * the SuperH page table tree.
++ *
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003, 2004 Paul Mundt
++ * Copyright (C) 2003, 2004 Richard Curnow
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
-+#define ASCR_RDH (1 << 31)
-+#define ASCR_D1S (1 << 2)
-+#define ASCR_D2S (1 << 1)
-+#define ASCR_D3S (1 << 0)
++#include <linux/threads.h>
++#include <asm/processor.h>
++#include <asm/page.h>
+
+/*
-+ * Application Reset Status bits.
++ * Error outputs.
+ */
-+#define ARSR_GPR (1 << 3)
-+#define ARSR_LPMR (1 << 2)
-+#define ARSR_WDT (1 << 1)
-+#define ARSR_HWR (1 << 0)
++#define pte_ERROR(e) \
++ printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
++#define pgd_ERROR(e) \
++ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/*
-+ * Application Subsystem Wake-Up bits.
++ * Table setting routines. Used within arch/mm only.
+ */
-+#define ADXER_WRTC (1 << 31) /* RTC */
-+#define ADXER_WOST (1 << 30) /* OS Timer */
-+#define ADXER_WTSI (1 << 29) /* Touchscreen */
-+#define ADXER_WUSBH (1 << 28) /* USB host */
-+#define ADXER_WUSB2 (1 << 26) /* USB client 2.0 */
-+#define ADXER_WMSL0 (1 << 24) /* MSL port 0*/
-+#define ADXER_WDMUX3 (1 << 23) /* USB EDMUX3 */
-+#define ADXER_WDMUX2 (1 << 22) /* USB EDMUX2 */
-+#define ADXER_WKP (1 << 21) /* Keypad */
-+#define ADXER_WUSIM1 (1 << 20) /* USIM Port 1 */
-+#define ADXER_WUSIM0 (1 << 19) /* USIM Port 0 */
-+#define ADXER_WOTG (1 << 16) /* USBOTG input */
-+#define ADXER_MFP_WFLASH (1 << 15) /* MFP: Data flash busy */
-+#define ADXER_MFP_GEN12 (1 << 14) /* MFP: MMC3/GPIO/OST inputs */
-+#define ADXER_MFP_WMMC2 (1 << 13) /* MFP: MMC2 */
-+#define ADXER_MFP_WMMC1 (1 << 12) /* MFP: MMC1 */
-+#define ADXER_MFP_WI2C (1 << 11) /* MFP: I2C */
-+#define ADXER_MFP_WSSP4 (1 << 10) /* MFP: SSP4 */
-+#define ADXER_MFP_WSSP3 (1 << 9) /* MFP: SSP3 */
-+#define ADXER_MFP_WMAXTRIX (1 << 8) /* MFP: matrix keypad */
-+#define ADXER_MFP_WUART3 (1 << 7) /* MFP: UART3 */
-+#define ADXER_MFP_WUART2 (1 << 6) /* MFP: UART2 */
-+#define ADXER_MFP_WUART1 (1 << 5) /* MFP: UART1 */
-+#define ADXER_MFP_WSSP2 (1 << 4) /* MFP: SSP2 */
-+#define ADXER_MFP_WSSP1 (1 << 3) /* MFP: SSP1 */
-+#define ADXER_MFP_WAC97 (1 << 2) /* MFP: AC97 */
-+#define ADXER_WEXTWAKE1 (1 << 1) /* External Wake 1 */
-+#define ADXER_WEXTWAKE0 (1 << 0) /* External Wake 0 */
++#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+
-+/*
-+ * AD3R/AD2R/AD1R bits. R2-R5 are only defined for PXA320.
-+ */
-+#define ADXR_L2 (1 << 8)
-+#define ADXR_R5 (1 << 5)
-+#define ADXR_R4 (1 << 4)
-+#define ADXR_R3 (1 << 3)
-+#define ADXR_R2 (1 << 2)
-+#define ADXR_R1 (1 << 1)
-+#define ADXR_R0 (1 << 0)
++static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
++{
++ unsigned long long x = ((unsigned long long) pteval.pte_low);
++ unsigned long long *xp = (unsigned long long *) pteptr;
++ /*
++ * Sign-extend based on NPHYS.
++ */
++ *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
++}
++#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
++
++static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
++{
++ pmd_val(*pmdp) = (unsigned long) ptep;
++}
+
+/*
-+ * Values for PWRMODE CP15 register
++ * PGD defines. Top level.
+ */
-+#define PXA3xx_PM_S3D4C4 0x07 /* aka deep sleep */
-+#define PXA3xx_PM_S2D3C4 0x06 /* aka sleep */
-+#define PXA3xx_PM_S0D2C2 0x03 /* aka standby */
-+#define PXA3xx_PM_S0D1C2 0x02 /* aka LCD refresh */
-+#define PXA3xx_PM_S0D0C1 0x01
+
-+/*
- * Application Subsystem Clock
- */
- #define ACCR __REG(0x41340000) /* Application Subsystem Clock Configuration Register */
-diff --git a/include/asm-arm/arch-pxa/regs-ssp.h b/include/asm-arm/arch-pxa/regs-ssp.h
-new file mode 100644
-index 0000000..991cb68
---- /dev/null
-+++ b/include/asm-arm/arch-pxa/regs-ssp.h
-@@ -0,0 +1,112 @@
-+#ifndef __ASM_ARCH_REGS_SSP_H
-+#define __ASM_ARCH_REGS_SSP_H
++/* To find an entry in a generic PGD. */
++#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
++#define __pgd_offset(address) pgd_index(address)
++#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
++
++/* To find an entry in a kernel PGD. */
++#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/*
-+ * SSP Serial Port Registers
-+ * PXA250, PXA255, PXA26x and PXA27x SSP controllers are all slightly different.
-+ * PXA255, PXA26x and PXA27x have extra ports, registers and bits.
++ * PMD level access routines. Same notes as above.
+ */
++#define _PMD_EMPTY 0x0
++/* Either the PMD is empty or present, it's not paged out */
++#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
++#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
++#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
++#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+
-+#define SSCR0 (0x00) /* SSP Control Register 0 */
-+#define SSCR1 (0x04) /* SSP Control Register 1 */
-+#define SSSR (0x08) /* SSP Status Register */
-+#define SSITR (0x0C) /* SSP Interrupt Test Register */
-+#define SSDR (0x10) /* SSP Data Write/Data Read Register */
++#define pmd_page_vaddr(pmd_entry) \
++ ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
+
-+#define SSTO (0x28) /* SSP Time Out Register */
-+#define SSPSP (0x2C) /* SSP Programmable Serial Protocol */
-+#define SSTSA (0x30) /* SSP Tx Timeslot Active */
-+#define SSRSA (0x34) /* SSP Rx Timeslot Active */
-+#define SSTSS (0x38) /* SSP Timeslot Status */
-+#define SSACD (0x3C) /* SSP Audio Clock Divider */
++#define pmd_page(pmd) \
++ (virt_to_page(pmd_val(pmd)))
+
-+/* Common PXA2xx bits first */
-+#define SSCR0_DSS (0x0000000f) /* Data Size Select (mask) */
-+#define SSCR0_DataSize(x) ((x) - 1) /* Data Size Select [4..16] */
-+#define SSCR0_FRF (0x00000030) /* FRame Format (mask) */
-+#define SSCR0_Motorola (0x0 << 4) /* Motorola's Serial Peripheral Interface (SPI) */
-+#define SSCR0_TI (0x1 << 4) /* Texas Instruments' Synchronous Serial Protocol (SSP) */
-+#define SSCR0_National (0x2 << 4) /* National Microwire */
-+#define SSCR0_ECS (1 << 6) /* External clock select */
-+#define SSCR0_SSE (1 << 7) /* Synchronous Serial Port Enable */
-+#if defined(CONFIG_PXA25x)
-+#define SSCR0_SCR (0x0000ff00) /* Serial Clock Rate (mask) */
-+#define SSCR0_SerClkDiv(x) ((((x) - 2)/2) << 8) /* Divisor [2..512] */
-+#elif defined(CONFIG_PXA27x)
-+#define SSCR0_SCR (0x000fff00) /* Serial Clock Rate (mask) */
-+#define SSCR0_SerClkDiv(x) (((x) - 1) << 8) /* Divisor [1..4096] */
-+#define SSCR0_EDSS (1 << 20) /* Extended data size select */
-+#define SSCR0_NCS (1 << 21) /* Network clock select */
-+#define SSCR0_RIM (1 << 22) /* Receive FIFO overrrun interrupt mask */
-+#define SSCR0_TUM (1 << 23) /* Transmit FIFO underrun interrupt mask */
-+#define SSCR0_FRDC (0x07000000) /* Frame rate divider control (mask) */
-+#define SSCR0_SlotsPerFrm(x) (((x) - 1) << 24) /* Time slots per frame [1..8] */
-+#define SSCR0_ADC (1 << 30) /* Audio clock select */
-+#define SSCR0_MOD (1 << 31) /* Mode (normal or network) */
-+#endif
++/* PMD to PTE dereferencing */
++#define pte_index(address) \
++ ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
-+#define SSCR1_RIE (1 << 0) /* Receive FIFO Interrupt Enable */
-+#define SSCR1_TIE (1 << 1) /* Transmit FIFO Interrupt Enable */
-+#define SSCR1_LBM (1 << 2) /* Loop-Back Mode */
-+#define SSCR1_SPO (1 << 3) /* Motorola SPI SSPSCLK polarity setting */
-+#define SSCR1_SPH (1 << 4) /* Motorola SPI SSPSCLK phase setting */
-+#define SSCR1_MWDS (1 << 5) /* Microwire Transmit Data Size */
-+#define SSCR1_TFT (0x000003c0) /* Transmit FIFO Threshold (mask) */
-+#define SSCR1_TxTresh(x) (((x) - 1) << 6) /* level [1..16] */
-+#define SSCR1_RFT (0x00003c00) /* Receive FIFO Threshold (mask) */
-+#define SSCR1_RxTresh(x) (((x) - 1) << 10) /* level [1..16] */
++#define pte_offset_kernel(dir, addr) \
++ ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
+
-+#define SSSR_TNF (1 << 2) /* Transmit FIFO Not Full */
-+#define SSSR_RNE (1 << 3) /* Receive FIFO Not Empty */
-+#define SSSR_BSY (1 << 4) /* SSP Busy */
-+#define SSSR_TFS (1 << 5) /* Transmit FIFO Service Request */
-+#define SSSR_RFS (1 << 6) /* Receive FIFO Service Request */
-+#define SSSR_ROR (1 << 7) /* Receive FIFO Overrun */
++#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
++#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
++#define pte_unmap(pte) do { } while (0)
++#define pte_unmap_nested(pte) do { } while (0)
+
-+#define SSCR0_TIM (1 << 23) /* Transmit FIFO Under Run Interrupt Mask */
-+#define SSCR0_RIM (1 << 22) /* Receive FIFO Over Run interrupt Mask */
-+#define SSCR0_NCS (1 << 21) /* Network Clock Select */
-+#define SSCR0_EDSS (1 << 20) /* Extended Data Size Select */
++#ifndef __ASSEMBLY__
++#define IOBASE_VADDR 0xff000000
++#define IOBASE_END 0xffffffff
+
-+/* extra bits in PXA255, PXA26x and PXA27x SSP ports */
-+#define SSCR0_TISSP (1 << 4) /* TI Sync Serial Protocol */
-+#define SSCR0_PSP (3 << 4) /* PSP - Programmable Serial Protocol */
-+#define SSCR1_TTELP (1 << 31) /* TXD Tristate Enable Last Phase */
-+#define SSCR1_TTE (1 << 30) /* TXD Tristate Enable */
-+#define SSCR1_EBCEI (1 << 29) /* Enable Bit Count Error interrupt */
-+#define SSCR1_SCFR (1 << 28) /* Slave Clock free Running */
-+#define SSCR1_ECRA (1 << 27) /* Enable Clock Request A */
-+#define SSCR1_ECRB (1 << 26) /* Enable Clock request B */
-+#define SSCR1_SCLKDIR (1 << 25) /* Serial Bit Rate Clock Direction */
-+#define SSCR1_SFRMDIR (1 << 24) /* Frame Direction */
-+#define SSCR1_RWOT (1 << 23) /* Receive Without Transmit */
-+#define SSCR1_TRAIL (1 << 22) /* Trailing Byte */
-+#define SSCR1_TSRE (1 << 21) /* Transmit Service Request Enable */
-+#define SSCR1_RSRE (1 << 20) /* Receive Service Request Enable */
-+#define SSCR1_TINTE (1 << 19) /* Receiver Time-out Interrupt enable */
-+#define SSCR1_PINTE (1 << 18) /* Peripheral Trailing Byte Interupt Enable */
-+#define SSCR1_STRF (1 << 15) /* Select FIFO or EFWR */
-+#define SSCR1_EFWR (1 << 14) /* Enable FIFO Write/Read */
++/*
++ * PTEL coherent flags.
++ * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
++ */
++/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
++ positions, to avoid expensive bit shuffling on every refill. The remaining
++ bits are used for s/w purposes and masked out on each refill.
+
-+#define SSSR_BCE (1 << 23) /* Bit Count Error */
-+#define SSSR_CSS (1 << 22) /* Clock Synchronisation Status */
-+#define SSSR_TUR (1 << 21) /* Transmit FIFO Under Run */
-+#define SSSR_EOC (1 << 20) /* End Of Chain */
-+#define SSSR_TINT (1 << 19) /* Receiver Time-out Interrupt */
-+#define SSSR_PINT (1 << 18) /* Peripheral Trailing Byte Interrupt */
++ Note, the PTE slots are used to hold data of type swp_entry_t when a page is
++ swapped out. Only the _PAGE_PRESENT flag is significant when the page is
++ swapped out, and it must be placed so that it doesn't overlap either the
++ type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
++ at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
++ scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
++ [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
++ into 2 pieces. That is handled by SWP_ENTRY and SWP_TYPE below. */
++#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
++#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
++#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
++#define _PAGE_PRESENT 0x004 /* software: page referenced */
++#define _PAGE_FILE 0x004 /* software: only when !present */
++#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
++#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
++#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
++#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
++#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
++#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
++#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
++#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
++#define _PAGE_ACCESSED 0x800 /* software: page referenced */
+
-+#define SSPSP_FSRT (1 << 25) /* Frame Sync Relative Timing */
-+#define SSPSP_DMYSTOP(x) ((x) << 23) /* Dummy Stop */
-+#define SSPSP_SFRMWDTH(x) ((x) << 16) /* Serial Frame Width */
-+#define SSPSP_SFRMDLY(x) ((x) << 9) /* Serial Frame Delay */
-+#define SSPSP_DMYSTRT(x) ((x) << 7) /* Dummy Start */
-+#define SSPSP_STRTDLY(x) ((x) << 4) /* Start Delay */
-+#define SSPSP_ETDS (1 << 3) /* End of Transfer data State */
-+#define SSPSP_SFRMP (1 << 2) /* Serial Frame Polarity */
-+#define SSPSP_SCMODE(x) ((x) << 0) /* Serial Bit Rate Clock Mode */
++/* Mask which drops software flags */
++#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
+
-+#define SSACD_SCDB (1 << 3) /* SSPSYSCLK Divider Bypass */
-+#define SSACD_ACPS(x) ((x) << 4) /* Audio clock PLL select */
-+#define SSACD_ACDS(x) ((x) << 0) /* Audio clock divider select */
++/*
++ * HugeTLB support
++ */
++#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
++#define _PAGE_SZHUGE (_PAGE_SIZE0)
++#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
++#define _PAGE_SZHUGE (_PAGE_SIZE1)
++#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
++#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
++#endif
+
-+#endif /* __ASM_ARCH_REGS_SSP_H */
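The regs-ssp.h hunk above defines `SSCR0_SerClkDiv` differently per SoC: PXA25x stores `(x - 2) / 2` in the 8-bit SCR field (divisor 2..512), while PXA27x stores `x - 1` in a 12-bit field (divisor 1..4096). A sketch of the two encodings as functions, masked with the matching `SSCR0_SCR` values from the hunk (function names are illustrative only):

```c
#include <stdint.h>

/* The two SSCR0_SerClkDiv encodings from the header above, written out
 * as functions. Masks are the per-variant SSCR0_SCR field masks. */
uint32_t pxa25x_serclkdiv(uint32_t x) { return (((x - 2) / 2) << 8) & 0x0000ff00; }
uint32_t pxa27x_serclkdiv(uint32_t x) { return ((x - 1) << 8) & 0x000fff00; }
```

The minimum divisor encodes to 0 in both schemes, and the maximum fills the respective SCR field exactly.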
-diff --git a/include/asm-arm/arch-pxa/sharpsl.h b/include/asm-arm/arch-pxa/sharpsl.h
-index 2b0fe77..3b1d4a7 100644
---- a/include/asm-arm/arch-pxa/sharpsl.h
-+++ b/include/asm-arm/arch-pxa/sharpsl.h
-@@ -16,7 +16,7 @@ int corgi_ssp_max1111_get(unsigned long data);
- */
-
- struct corgits_machinfo {
-- unsigned long (*get_hsync_len)(void);
-+ unsigned long (*get_hsync_invperiod)(void);
- void (*put_hsync)(void);
- void (*wait_hsync)(void);
- };
-diff --git a/include/asm-arm/arch-pxa/spitz.h b/include/asm-arm/arch-pxa/spitz.h
-index 4953dd3..bd14365 100644
---- a/include/asm-arm/arch-pxa/spitz.h
-+++ b/include/asm-arm/arch-pxa/spitz.h
-@@ -156,5 +156,3 @@ extern struct platform_device spitzscoop_device;
- extern struct platform_device spitzscoop2_device;
- extern struct platform_device spitzssp_device;
- extern struct sharpsl_charger_machinfo spitz_pm_machinfo;
--
--extern void spitz_lcd_power(int on, struct fb_var_screeninfo *var);
-diff --git a/include/asm-arm/arch-pxa/ssp.h b/include/asm-arm/arch-pxa/ssp.h
-index ea20055..a012882 100644
---- a/include/asm-arm/arch-pxa/ssp.h
-+++ b/include/asm-arm/arch-pxa/ssp.h
-@@ -13,10 +13,37 @@
- * PXA255 SSP, NSSP
- * PXA26x SSP, NSSP, ASSP
- * PXA27x SSP1, SSP2, SSP3
-+ * PXA3xx SSP1, SSP2, SSP3, SSP4
- */
-
--#ifndef SSP_H
--#define SSP_H
-+#ifndef __ASM_ARCH_SSP_H
-+#define __ASM_ARCH_SSP_H
++/*
++ * Default flags for a Kernel page.
++ * This is fundamentally also SHARED because the main use of this define
++ * (other than for PGD/PMD entries) is for the VMALLOC pool which is
++ * contextless.
++ *
++ * _PAGE_EXECUTE is required for modules
++ *
++ */
++#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
++ _PAGE_EXECUTE | \
++ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
++ _PAGE_SHARED)
+
-+#include <linux/list.h>
++/* Default flags for a User page */
++#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
+
-+enum pxa_ssp_type {
-+ SSP_UNDEFINED = 0,
-+ PXA25x_SSP, /* pxa 210, 250, 255, 26x */
-+ PXA25x_NSSP, /* pxa 255, 26x (including ASSP) */
-+ PXA27x_SSP,
-+};
++#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
-+struct ssp_device {
-+ struct platform_device *pdev;
-+ struct list_head node;
++/*
++ * We have full permissions (Read/Write/Execute/Shared).
++ */
++#define _PAGE_COMMON (_PAGE_PRESENT | _PAGE_USER | \
++ _PAGE_CACHABLE | _PAGE_ACCESSED)
+
-+ struct clk *clk;
-+ void __iomem *mmio_base;
-+ unsigned long phys_base;
++#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
++#define PAGE_SHARED __pgprot(_PAGE_COMMON | _PAGE_READ | _PAGE_WRITE | \
++ _PAGE_SHARED)
++#define PAGE_EXECREAD __pgprot(_PAGE_COMMON | _PAGE_READ | _PAGE_EXECUTE)
+
-+ const char *label;
-+ int port_id;
-+ int type;
-+ int use_count;
-+ int irq;
-+ int drcmr_rx;
-+ int drcmr_tx;
-+};
-
- /*
- * SSP initialisation flags
-@@ -31,6 +58,7 @@ struct ssp_state {
- };
-
- struct ssp_dev {
-+ struct ssp_device *ssp;
- u32 port;
- u32 mode;
- u32 flags;
-@@ -50,4 +78,6 @@ int ssp_init(struct ssp_dev *dev, u32 port, u32 init_flags);
- int ssp_config(struct ssp_dev *dev, u32 mode, u32 flags, u32 psp_flags, u32 speed);
- void ssp_exit(struct ssp_dev *dev);
-
--#endif
-+struct ssp_device *ssp_request(int port, const char *label);
-+void ssp_free(struct ssp_device *);
-+#endif /* __ASM_ARCH_SSP_H */
-diff --git a/include/asm-arm/arch-pxa/uncompress.h b/include/asm-arm/arch-pxa/uncompress.h
-index 178aa2e..dadf4c2 100644
---- a/include/asm-arm/arch-pxa/uncompress.h
-+++ b/include/asm-arm/arch-pxa/uncompress.h
-@@ -9,19 +9,21 @@
- * published by the Free Software Foundation.
- */
-
--#define FFUART ((volatile unsigned long *)0x40100000)
--#define BTUART ((volatile unsigned long *)0x40200000)
--#define STUART ((volatile unsigned long *)0x40700000)
--#define HWUART ((volatile unsigned long *)0x41600000)
-+#include <linux/serial_reg.h>
-+#include <asm/arch/pxa-regs.h>
++/*
++ * We need to include PAGE_EXECUTE in PAGE_COPY because it is the default
++ * protection mode for the stack.
++ */
++#define PAGE_COPY PAGE_EXECREAD
+
-+#define __REG(x) ((volatile unsigned long *)x)
-
- #define UART FFUART
-
-
- static inline void putc(char c)
- {
-- while (!(UART[5] & 0x20))
-+ if (!(UART[UART_IER] & IER_UUE))
-+ return;
-+ while (!(UART[UART_LSR] & LSR_TDRQ))
- barrier();
-- UART[0] = c;
-+ UART[UART_TX] = c;
- }
-
- /*
-diff --git a/include/asm-arm/arch-pxa/zylonite.h b/include/asm-arm/arch-pxa/zylonite.h
-index f58b591..5f717d6 100644
---- a/include/asm-arm/arch-pxa/zylonite.h
-+++ b/include/asm-arm/arch-pxa/zylonite.h
-@@ -3,9 +3,18 @@
-
- #define ZYLONITE_ETH_PHYS 0x14000000
-
-+#define EXT_GPIO(x) (128 + (x))
++#define PAGE_READONLY __pgprot(_PAGE_COMMON | _PAGE_READ)
++#define PAGE_WRITEONLY __pgprot(_PAGE_COMMON | _PAGE_WRITE)
++#define PAGE_RWX __pgprot(_PAGE_COMMON | _PAGE_READ | \
++ _PAGE_WRITE | _PAGE_EXECUTE)
++#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
+
- /* the following variables are processor specific and initialized
- * by the corresponding zylonite_pxa3xx_init()
- */
-+struct platform_mmc_slot {
-+ int gpio_cd;
-+ int gpio_wp;
-+};
++/* Make it a device mapping for maximum safety (e.g. for mapping device
++ registers into user-space via /dev/map). */
++#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
++#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+
-+extern struct platform_mmc_slot zylonite_mmc_slot[];
++/*
++ * Handling allocation failures during page table setup.
++ */
++extern void __handle_bad_pmd_kernel(pmd_t * pmd);
++#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
+
- extern int gpio_backlight;
- extern int gpio_eth_irq;
-
-diff --git a/include/asm-arm/arch-s3c2410/debug-macro.S b/include/asm-arm/arch-s3c2410/debug-macro.S
-index 9c8cd9a..89076c3 100644
---- a/include/asm-arm/arch-s3c2410/debug-macro.S
-+++ b/include/asm-arm/arch-s3c2410/debug-macro.S
-@@ -92,11 +92,9 @@
- #if defined(CONFIG_CPU_LLSERIAL_S3C2410_ONLY)
- #define fifo_full fifo_full_s3c2410
- #define fifo_level fifo_level_s3c2410
--#warning 2410only
- #elif !defined(CONFIG_CPU_LLSERIAL_S3C2440_ONLY)
- #define fifo_full fifo_full_s3c24xx
- #define fifo_level fifo_level_s3c24xx
--#warning generic
- #endif
-
- /* include the reset of the code which will do the work */
-diff --git a/include/asm-arm/arch-s3c2410/dma.h b/include/asm-arm/arch-s3c2410/dma.h
-index c6e8d8f..4f291d9 100644
---- a/include/asm-arm/arch-s3c2410/dma.h
-+++ b/include/asm-arm/arch-s3c2410/dma.h
-@@ -214,6 +214,7 @@ struct s3c2410_dma_chan {
- unsigned long dev_addr;
- unsigned long load_timeout;
- unsigned int flags; /* channel flags */
-+ unsigned int hw_cfg; /* last hw config */
-
- struct s3c24xx_dma_map *map; /* channel hw maps */
-
-diff --git a/include/asm-arm/arch-s3c2410/hardware.h b/include/asm-arm/arch-s3c2410/hardware.h
-index 6dadf58..29592c3 100644
---- a/include/asm-arm/arch-s3c2410/hardware.h
-+++ b/include/asm-arm/arch-s3c2410/hardware.h
-@@ -50,6 +50,17 @@ extern unsigned int s3c2410_gpio_getcfg(unsigned int pin);
-
- extern int s3c2410_gpio_getirq(unsigned int pin);
-
-+/* s3c2410_gpio_irq2pin
++/*
++ * PTE level access routines.
+ *
-+ * turn the given irq number into the corresponding GPIO number
++ * Note1:
++ * It's the tree walk leaf. This is the physical address to be stored.
+ *
-+ * returns:
-+ * < 0 = no pin
-+ * >=0 = gpio pin number
-+*/
++ * Note 2:
++ * Regarding the choice of _PTE_EMPTY:
+
-+extern int s3c2410_gpio_irq2pin(unsigned int irq);
++ We must choose a bit pattern that cannot be valid, whether or not the page
++ is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
++ out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
++ left for us to select. If we force bit[7]==0 when swapped out, we could use
++ the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
++ we force bit[7]==1 when swapped out, we can use all zeroes to indicate
++ empty. This is convenient, because the page tables get cleared to zero
++ when they are allocated.
+
- #ifdef CONFIG_CPU_S3C2400
-
- extern int s3c2400_gpio_getirq(unsigned int pin);
-@@ -87,6 +98,18 @@ extern int s3c2410_gpio_irqfilter(unsigned int pin, unsigned int on,
-
- extern void s3c2410_gpio_pullup(unsigned int pin, unsigned int to);
-
-+/* s3c2410_gpio_getpull
-+ *
-+ * Read the state of the pull-up on a given pin
-+ *
-+ * return:
-+ * < 0 => error code
-+ * 0 => enabled
-+ * 1 => disabled
-+*/
++ */
++#define _PTE_EMPTY 0x0
++#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
++#define pte_clear(mm,addr,xp) (set_pte_at(mm, addr, xp, __pte(_PTE_EMPTY)))
++#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
+
-+extern int s3c2410_gpio_getpull(unsigned int pin);
++/*
++ * Some definitions to translate between mem_map, PTEs, and page
++ * addresses:
++ */
+
- extern void s3c2410_gpio_setpin(unsigned int pin, unsigned int to);
-
- extern unsigned int s3c2410_gpio_getpin(unsigned int pin);
-@@ -99,6 +122,11 @@ extern int s3c2440_set_dsc(unsigned int pin, unsigned int value);
-
- #endif /* CONFIG_CPU_S3C2440 */
-
-+#ifdef CONFIG_CPU_S3C2412
++/*
++ * Given a PTE, return the index of the mem_map[] entry corresponding
++ * to the page frame the PTE refers to. Take the absolute physical
++ * address, make it relative, and translate it to an index.
++ */
++#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
++ __MEMORY_START) >> PAGE_SHIFT)
+
-+extern int s3c2412_gpio_set_sleepcfg(unsigned int pin, unsigned int state);
++/*
++ * Given a PTE, return the "struct page *".
++ */
++#define pte_page(x) (mem_map + pte_pagenr(x))
+
-+#endif /* CONFIG_CPU_S3C2412 */
-
- #endif /* __ASSEMBLY__ */
-
-diff --git a/include/asm-arm/arch-s3c2410/irqs.h b/include/asm-arm/arch-s3c2410/irqs.h
-index 996f654..d858b3e 100644
---- a/include/asm-arm/arch-s3c2410/irqs.h
-+++ b/include/asm-arm/arch-s3c2410/irqs.h
-@@ -160,4 +160,7 @@
- #define NR_IRQS (IRQ_S3C2440_AC97+1)
- #endif
-
-+/* Our FIQs are routable from IRQ_EINT0 to IRQ_ADCPARENT */
-+#define FIQ_START IRQ_EINT0
++/*
++ * Return the number of (rounded down) MB corresponding to x pages.
++ */
++#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
- #endif /* __ASM_ARCH_IRQ_H */
-diff --git a/include/asm-arm/arch-s3c2410/regs-clock.h b/include/asm-arm/arch-s3c2410/regs-clock.h
-index e39656b..dba9df9 100644
---- a/include/asm-arm/arch-s3c2410/regs-clock.h
-+++ b/include/asm-arm/arch-s3c2410/regs-clock.h
-@@ -138,6 +138,8 @@ s3c2410_get_pll(unsigned int pllval, unsigned int baseclk)
- #define S3C2412_CLKDIVN_PDIVN (1<<2)
- #define S3C2412_CLKDIVN_HDIVN_MASK (3<<0)
- #define S3C2421_CLKDIVN_ARMDIVN (1<<3)
-+#define S3C2412_CLKDIVN_DVSEN (1<<4)
-+#define S3C2412_CLKDIVN_HALFHCLK (1<<5)
- #define S3C2412_CLKDIVN_USB48DIV (1<<6)
- #define S3C2412_CLKDIVN_UARTDIV_MASK (15<<8)
- #define S3C2412_CLKDIVN_UARTDIV_SHIFT (8)
-diff --git a/include/asm-arm/arch-s3c2410/regs-dsc.h b/include/asm-arm/arch-s3c2410/regs-dsc.h
-index c074851..1235df7 100644
---- a/include/asm-arm/arch-s3c2410/regs-dsc.h
-+++ b/include/asm-arm/arch-s3c2410/regs-dsc.h
-@@ -19,7 +19,7 @@
- #define S3C2412_DSC1 S3C2410_GPIOREG(0xe0)
- #endif
-
--#if defined(CONFIG_CPU_S3C2440)
-+#if defined(CONFIG_CPU_S3C244X)
-
- #define S3C2440_DSC0 S3C2410_GPIOREG(0xc4)
- #define S3C2440_DSC1 S3C2410_GPIOREG(0xc8)
-diff --git a/include/asm-arm/arch-s3c2410/regs-gpio.h b/include/asm-arm/arch-s3c2410/regs-gpio.h
-index b693158..0ad75d7 100644
---- a/include/asm-arm/arch-s3c2410/regs-gpio.h
-+++ b/include/asm-arm/arch-s3c2410/regs-gpio.h
-@@ -1133,12 +1133,16 @@
- #define S3C2412_GPBSLPCON S3C2410_GPIOREG(0x1C)
- #define S3C2412_GPCSLPCON S3C2410_GPIOREG(0x2C)
- #define S3C2412_GPDSLPCON S3C2410_GPIOREG(0x3C)
--#define S3C2412_GPESLPCON S3C2410_GPIOREG(0x4C)
- #define S3C2412_GPFSLPCON S3C2410_GPIOREG(0x5C)
- #define S3C2412_GPGSLPCON S3C2410_GPIOREG(0x6C)
- #define S3C2412_GPHSLPCON S3C2410_GPIOREG(0x7C)
-
- /* definitions for each pin bit */
-+#define S3C2412_GPIO_SLPCON_LOW ( 0x00 )
-+#define S3C2412_GPIO_SLPCON_HIGH ( 0x01 )
-+#define S3C2412_GPIO_SLPCON_IN ( 0x02 )
-+#define S3C2412_GPIO_SLPCON_PULL ( 0x03 )
+
- #define S3C2412_SLPCON_LOW(x) ( 0x00 << ((x) * 2))
- #define S3C2412_SLPCON_HIGH(x) ( 0x01 << ((x) * 2))
- #define S3C2412_SLPCON_IN(x) ( 0x02 << ((x) * 2))
-diff --git a/include/asm-arm/arch-s3c2410/regs-mem.h b/include/asm-arm/arch-s3c2410/regs-mem.h
-index e4d8234..312ff93 100644
---- a/include/asm-arm/arch-s3c2410/regs-mem.h
-+++ b/include/asm-arm/arch-s3c2410/regs-mem.h
-@@ -98,16 +98,19 @@
- #define S3C2410_BANKCON_Tacp3 (0x1 << 2)
- #define S3C2410_BANKCON_Tacp4 (0x2 << 2)
- #define S3C2410_BANKCON_Tacp6 (0x3 << 2)
-+#define S3C2410_BANKCON_Tacp_SHIFT (2)
-
- #define S3C2410_BANKCON_Tcah0 (0x0 << 4)
- #define S3C2410_BANKCON_Tcah1 (0x1 << 4)
- #define S3C2410_BANKCON_Tcah2 (0x2 << 4)
- #define S3C2410_BANKCON_Tcah4 (0x3 << 4)
-+#define S3C2410_BANKCON_Tcah_SHIFT (4)
-
- #define S3C2410_BANKCON_Tcoh0 (0x0 << 6)
- #define S3C2410_BANKCON_Tcoh1 (0x1 << 6)
- #define S3C2410_BANKCON_Tcoh2 (0x2 << 6)
- #define S3C2410_BANKCON_Tcoh4 (0x3 << 6)
-+#define S3C2410_BANKCON_Tcoh_SHIFT (6)
-
- #define S3C2410_BANKCON_Tacc1 (0x0 << 8)
- #define S3C2410_BANKCON_Tacc2 (0x1 << 8)
-@@ -117,16 +120,19 @@
- #define S3C2410_BANKCON_Tacc8 (0x5 << 8)
- #define S3C2410_BANKCON_Tacc10 (0x6 << 8)
- #define S3C2410_BANKCON_Tacc14 (0x7 << 8)
-+#define S3C2410_BANKCON_Tacc_SHIFT (8)
-
- #define S3C2410_BANKCON_Tcos0 (0x0 << 11)
- #define S3C2410_BANKCON_Tcos1 (0x1 << 11)
- #define S3C2410_BANKCON_Tcos2 (0x2 << 11)
- #define S3C2410_BANKCON_Tcos4 (0x3 << 11)
-+#define S3C2410_BANKCON_Tcos_SHIFT (11)
-
- #define S3C2410_BANKCON_Tacs0 (0x0 << 13)
- #define S3C2410_BANKCON_Tacs1 (0x1 << 13)
- #define S3C2410_BANKCON_Tacs2 (0x2 << 13)
- #define S3C2410_BANKCON_Tacs4 (0x3 << 13)
-+#define S3C2410_BANKCON_Tacs_SHIFT (13)
-
- #define S3C2410_BANKCON_SRAM (0x0 << 15)
- #define S3C2400_BANKCON_EDODRAM (0x2 << 15)
-diff --git a/include/asm-arm/arch-s3c2410/regs-power.h b/include/asm-arm/arch-s3c2410/regs-power.h
-index f79987b..13d13b7 100644
---- a/include/asm-arm/arch-s3c2410/regs-power.h
-+++ b/include/asm-arm/arch-s3c2410/regs-power.h
-@@ -23,7 +23,8 @@
- #define S3C2412_INFORM2 S3C24XX_PWRREG(0x78)
- #define S3C2412_INFORM3 S3C24XX_PWRREG(0x7C)
-
--#define S3C2412_PWRCFG_BATF_IGNORE (0<<0)
-+#define S3C2412_PWRCFG_BATF_IRQ (1<<0)
-+#define S3C2412_PWRCFG_BATF_IGNORE (2<<0)
- #define S3C2412_PWRCFG_BATF_SLEEP (3<<0)
- #define S3C2412_PWRCFG_BATF_MASK (3<<0)
-
-diff --git a/include/asm-arm/arch-s3c2410/system.h b/include/asm-arm/arch-s3c2410/system.h
-index 6389178..14de4e5 100644
---- a/include/asm-arm/arch-s3c2410/system.h
-+++ b/include/asm-arm/arch-s3c2410/system.h
-@@ -20,6 +20,9 @@
- #include <asm/plat-s3c/regs-watchdog.h>
- #include <asm/arch/regs-clock.h>
-
-+#include <linux/clk.h>
-+#include <linux/err.h>
++/*
++ * The following have defined behavior only if pte_present() is true.
++ */
++static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
++static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
++static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
++static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+
- void (*s3c24xx_idle)(void);
- void (*s3c24xx_reset_hook)(void);
-
-@@ -59,6 +62,8 @@ static void arch_idle(void)
- static void
- arch_reset(char mode)
- {
-+ struct clk *wdtclk;
++static inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
++static inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
++static inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
++static inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
++static inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
++static inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
++static inline pte_t pte_mkhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_SZHUGE)); return pte; }
+
- if (mode == 's') {
- cpu_reset(0);
- }
-@@ -70,19 +75,28 @@ arch_reset(char mode)
-
- __raw_writel(0, S3C2410_WTCON); /* disable watchdog, to be safe */
-
-+ wdtclk = clk_get(NULL, "watchdog");
-+ if (!IS_ERR(wdtclk)) {
-+ clk_enable(wdtclk);
-+ } else
-+ printk(KERN_WARNING "%s: warning: cannot get watchdog clock\n", __func__);
+
- /* put initial values into count and data */
-- __raw_writel(0x100, S3C2410_WTCNT);
-- __raw_writel(0x100, S3C2410_WTDAT);
-+ __raw_writel(0x80, S3C2410_WTCNT);
-+ __raw_writel(0x80, S3C2410_WTDAT);
-
- /* set the watchdog to go and reset... */
- __raw_writel(S3C2410_WTCON_ENABLE|S3C2410_WTCON_DIV16|S3C2410_WTCON_RSTEN |
- S3C2410_WTCON_PRESCALE(0x20), S3C2410_WTCON);
-
- /* wait for reset to assert... */
-- mdelay(5000);
-+ mdelay(500);
-
- printk(KERN_ERR "Watchdog reset failed to assert reset\n");
-
-+ /* delay to allow the serial port to show the message */
-+ mdelay(50);
++/*
++ * Conversion functions: convert a page and protection to a page entry.
++ *
++ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
++ */
++#define mk_pte(page,pgprot) \
++({ \
++ pte_t __pte; \
++ \
++ set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
++ __MEMORY_START | pgprot_val((pgprot)))); \
++ __pte; \
++})
+
- /* we'll take a jump through zero as a poor second */
- cpu_reset(0);
- }
-diff --git a/include/asm-arm/bitops.h b/include/asm-arm/bitops.h
-index 47a6b08..5c60bfc 100644
---- a/include/asm-arm/bitops.h
-+++ b/include/asm-arm/bitops.h
-@@ -310,6 +310,8 @@ static inline int constant_fls(int x)
- _find_first_zero_bit_le(p,sz)
- #define ext2_find_next_zero_bit(p,sz,off) \
- _find_next_zero_bit_le(p,sz,off)
-+#define ext2_find_next_bit(p, sz, off) \
-+ _find_next_bit_le(p, sz, off)
-
- /*
- * Minix is defined to use little-endian byte ordering.
-diff --git a/include/asm-arm/cacheflush.h b/include/asm-arm/cacheflush.h
-index 6c1c968..759a97b 100644
---- a/include/asm-arm/cacheflush.h
-+++ b/include/asm-arm/cacheflush.h
-@@ -94,6 +94,14 @@
- # endif
- #endif
-
-+#if defined(CONFIG_CPU_FEROCEON)
-+# ifdef _CACHE
-+# define MULTI_CACHE 1
-+# else
-+# define _CACHE feroceon
-+# endif
-+#endif
++/*
++ * This takes an (absolute) physical page address that is used
++ * by the remapping functions.
++ */
++#define mk_pte_phys(physpage, pgprot) \
++({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
+
- #if defined(CONFIG_CPU_V6)
- //# ifdef _CACHE
- # define MULTI_CACHE 1
-diff --git a/include/asm-arm/fpstate.h b/include/asm-arm/fpstate.h
-index f31cda5..392eb53 100644
---- a/include/asm-arm/fpstate.h
-+++ b/include/asm-arm/fpstate.h
-@@ -17,14 +17,18 @@
- /*
- * VFP storage area has:
- * - FPEXC, FPSCR, FPINST and FPINST2.
-- * - 16 double precision data registers
-- * - an implementation-dependant word of state for FLDMX/FSTMX
-+ * - 16 or 32 double precision data registers
-+ * - an implementation-dependant word of state for FLDMX/FSTMX (pre-ARMv6)
- *
- * FPEXC will always be non-zero once the VFP has been used in this process.
- */
-
- struct vfp_hard_struct {
-+#ifdef CONFIG_VFPv3
-+ __u64 fpregs[32];
-+#else
- __u64 fpregs[16];
-+#endif
- #if __LINUX_ARM_ARCH__ < 6
- __u32 fpmx_state;
- #endif
-@@ -35,6 +39,7 @@ struct vfp_hard_struct {
- */
- __u32 fpinst;
- __u32 fpinst2;
++static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
++{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
+
- #ifdef CONFIG_SMP
- __u32 cpu;
- #endif
-diff --git a/include/asm-arm/kprobes.h b/include/asm-arm/kprobes.h
++/* Encode and decode a swap entry */
++#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
++#define __swp_offset(x) ((x).val >> 8)
++#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
++#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
++#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
++
++/* Encode and decode a nonlinear file mapping entry */
++#define PTE_FILE_MAX_BITS 29
++#define pte_to_pgoff(pte) (pte_val(pte))
++#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
++
++#endif /* !__ASSEMBLY__ */
++
++#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
++#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
++
++#endif /* __ASM_SH_PGTABLE_64_H */
+diff --git a/include/asm-sh/posix_types.h b/include/asm-sh/posix_types.h
+index 0a3d2f5..4b9d11c 100644
+--- a/include/asm-sh/posix_types.h
++++ b/include/asm-sh/posix_types.h
+@@ -1,122 +1,7 @@
+-#ifndef __ASM_SH_POSIX_TYPES_H
+-#define __ASM_SH_POSIX_TYPES_H
+-
+-/*
+- * This file is generally used by user-level software, so you need to
+- * be a little careful about namespace pollution etc. Also, we cannot
+- * assume GCC is being used.
+- */
+-
+-typedef unsigned long __kernel_ino_t;
+-typedef unsigned short __kernel_mode_t;
+-typedef unsigned short __kernel_nlink_t;
+-typedef long __kernel_off_t;
+-typedef int __kernel_pid_t;
+-typedef unsigned short __kernel_ipc_pid_t;
+-typedef unsigned short __kernel_uid_t;
+-typedef unsigned short __kernel_gid_t;
+-typedef unsigned int __kernel_size_t;
+-typedef int __kernel_ssize_t;
+-typedef int __kernel_ptrdiff_t;
+-typedef long __kernel_time_t;
+-typedef long __kernel_suseconds_t;
+-typedef long __kernel_clock_t;
+-typedef int __kernel_timer_t;
+-typedef int __kernel_clockid_t;
+-typedef int __kernel_daddr_t;
+-typedef char * __kernel_caddr_t;
+-typedef unsigned short __kernel_uid16_t;
+-typedef unsigned short __kernel_gid16_t;
+-typedef unsigned int __kernel_uid32_t;
+-typedef unsigned int __kernel_gid32_t;
+-
+-typedef unsigned short __kernel_old_uid_t;
+-typedef unsigned short __kernel_old_gid_t;
+-typedef unsigned short __kernel_old_dev_t;
+-
+-#ifdef __GNUC__
+-typedef long long __kernel_loff_t;
+-#endif
+-
+-typedef struct {
+-#if defined(__KERNEL__) || defined(__USE_ALL)
+- int val[2];
+-#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
+- int __val[2];
+-#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
+-} __kernel_fsid_t;
+-
+-#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
+-
+-#undef __FD_SET
+-static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
+-{
+- unsigned long __tmp = __fd / __NFDBITS;
+- unsigned long __rem = __fd % __NFDBITS;
+- __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
+-}
+-
+-#undef __FD_CLR
+-static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
+-{
+- unsigned long __tmp = __fd / __NFDBITS;
+- unsigned long __rem = __fd % __NFDBITS;
+- __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
+-}
+-
+-
+-#undef __FD_ISSET
+-static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
+-{
+- unsigned long __tmp = __fd / __NFDBITS;
+- unsigned long __rem = __fd % __NFDBITS;
+- return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
+-}
+-
+-/*
+- * This will unroll the loop for the normal constant case (8 ints,
+- * for a 256-bit fd_set)
+- */
+-#undef __FD_ZERO
+-static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
+-{
+- unsigned long *__tmp = __p->fds_bits;
+- int __i;
+-
+- if (__builtin_constant_p(__FDSET_LONGS)) {
+- switch (__FDSET_LONGS) {
+- case 16:
+- __tmp[ 0] = 0; __tmp[ 1] = 0;
+- __tmp[ 2] = 0; __tmp[ 3] = 0;
+- __tmp[ 4] = 0; __tmp[ 5] = 0;
+- __tmp[ 6] = 0; __tmp[ 7] = 0;
+- __tmp[ 8] = 0; __tmp[ 9] = 0;
+- __tmp[10] = 0; __tmp[11] = 0;
+- __tmp[12] = 0; __tmp[13] = 0;
+- __tmp[14] = 0; __tmp[15] = 0;
+- return;
+-
+- case 8:
+- __tmp[ 0] = 0; __tmp[ 1] = 0;
+- __tmp[ 2] = 0; __tmp[ 3] = 0;
+- __tmp[ 4] = 0; __tmp[ 5] = 0;
+- __tmp[ 6] = 0; __tmp[ 7] = 0;
+- return;
+-
+- case 4:
+- __tmp[ 0] = 0; __tmp[ 1] = 0;
+- __tmp[ 2] = 0; __tmp[ 3] = 0;
+- return;
+- }
+- }
+- __i = __FDSET_LONGS;
+- while (__i) {
+- __i--;
+- *__tmp = 0;
+- __tmp++;
+- }
+-}
+-
+-#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
+-
+-#endif /* __ASM_SH_POSIX_TYPES_H */
++#ifdef __KERNEL__
++# ifdef CONFIG_SUPERH32
++# include "posix_types_32.h"
++# else
++# include "posix_types_64.h"
++# endif
++#endif /* __KERNEL__ */
+diff --git a/include/asm-sh/posix_types_32.h b/include/asm-sh/posix_types_32.h
new file mode 100644
-index 0000000..4e7bd32
+index 0000000..0a3d2f5
--- /dev/null
-+++ b/include/asm-arm/kprobes.h
-@@ -0,0 +1,79 @@
++++ b/include/asm-sh/posix_types_32.h
+@@ -0,0 +1,122 @@
++#ifndef __ASM_SH_POSIX_TYPES_H
++#define __ASM_SH_POSIX_TYPES_H
++
+/*
-+ * include/asm-arm/kprobes.h
-+ *
-+ * Copyright (C) 2006, 2007 Motorola Inc.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
-+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+ * General Public License for more details.
++ * This file is generally used by user-level software, so you need to
++ * be a little careful about namespace pollution etc. Also, we cannot
++ * assume GCC is being used.
+ */
+
-+#ifndef _ARM_KPROBES_H
-+#define _ARM_KPROBES_H
-+
-+#include <linux/types.h>
-+#include <linux/ptrace.h>
-+#include <linux/percpu.h>
++typedef unsigned long __kernel_ino_t;
++typedef unsigned short __kernel_mode_t;
++typedef unsigned short __kernel_nlink_t;
++typedef long __kernel_off_t;
++typedef int __kernel_pid_t;
++typedef unsigned short __kernel_ipc_pid_t;
++typedef unsigned short __kernel_uid_t;
++typedef unsigned short __kernel_gid_t;
++typedef unsigned int __kernel_size_t;
++typedef int __kernel_ssize_t;
++typedef int __kernel_ptrdiff_t;
++typedef long __kernel_time_t;
++typedef long __kernel_suseconds_t;
++typedef long __kernel_clock_t;
++typedef int __kernel_timer_t;
++typedef int __kernel_clockid_t;
++typedef int __kernel_daddr_t;
++typedef char * __kernel_caddr_t;
++typedef unsigned short __kernel_uid16_t;
++typedef unsigned short __kernel_gid16_t;
++typedef unsigned int __kernel_uid32_t;
++typedef unsigned int __kernel_gid32_t;
+
-+#define ARCH_SUPPORTS_KRETPROBES
-+#define __ARCH_WANT_KPROBES_INSN_SLOT
-+#define MAX_INSN_SIZE 2
-+#define MAX_STACK_SIZE 64 /* 32 would probably be OK */
++typedef unsigned short __kernel_old_uid_t;
++typedef unsigned short __kernel_old_gid_t;
++typedef unsigned short __kernel_old_dev_t;
+
-+/*
-+ * This undefined instruction must be unique and
-+ * reserved solely for kprobes' use.
-+ */
-+#define KPROBE_BREAKPOINT_INSTRUCTION 0xe7f001f8
++#ifdef __GNUC__
++typedef long long __kernel_loff_t;
++#endif
+
-+#define regs_return_value(regs) ((regs)->ARM_r0)
-+#define flush_insn_slot(p) do { } while (0)
-+#define kretprobe_blacklist_size 0
++typedef struct {
++#if defined(__KERNEL__) || defined(__USE_ALL)
++ int val[2];
++#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
++ int __val[2];
++#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
++} __kernel_fsid_t;
+
-+typedef u32 kprobe_opcode_t;
++#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
+
-+struct kprobe;
-+typedef void (kprobe_insn_handler_t)(struct kprobe *, struct pt_regs *);
++#undef __FD_SET
++static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
++}
+
-+/* Architecture specific copy of original instruction. */
-+struct arch_specific_insn {
-+ kprobe_opcode_t *insn;
-+ kprobe_insn_handler_t *insn_handler;
-+};
++#undef __FD_CLR
++static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
++}
+
-+struct prev_kprobe {
-+ struct kprobe *kp;
-+ unsigned int status;
-+};
+
-+/* per-cpu kprobe control block */
-+struct kprobe_ctlblk {
-+ unsigned int kprobe_status;
-+ struct prev_kprobe prev_kprobe;
-+ struct pt_regs jprobe_saved_regs;
-+ char jprobes_stack[MAX_STACK_SIZE];
-+};
++#undef __FD_ISSET
++static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
++}
+
-+void arch_remove_kprobe(struct kprobe *);
++/*
++ * This will unroll the loop for the normal constant case (8 ints,
++ * for a 256-bit fd_set)
++ */
++#undef __FD_ZERO
++static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
++{
++ unsigned long *__tmp = __p->fds_bits;
++ int __i;
+
-+int kprobe_trap_handler(struct pt_regs *regs, unsigned int instr);
-+int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
-+int kprobe_exceptions_notify(struct notifier_block *self,
-+ unsigned long val, void *data);
++ if (__builtin_constant_p(__FDSET_LONGS)) {
++ switch (__FDSET_LONGS) {
++ case 16:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ __tmp[ 4] = 0; __tmp[ 5] = 0;
++ __tmp[ 6] = 0; __tmp[ 7] = 0;
++ __tmp[ 8] = 0; __tmp[ 9] = 0;
++ __tmp[10] = 0; __tmp[11] = 0;
++ __tmp[12] = 0; __tmp[13] = 0;
++ __tmp[14] = 0; __tmp[15] = 0;
++ return;
+
-+enum kprobe_insn {
-+ INSN_REJECTED,
-+ INSN_GOOD,
-+ INSN_GOOD_NO_SLOT
-+};
++ case 8:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ __tmp[ 4] = 0; __tmp[ 5] = 0;
++ __tmp[ 6] = 0; __tmp[ 7] = 0;
++ return;
+
-+enum kprobe_insn arm_kprobe_decode_insn(kprobe_opcode_t,
-+ struct arch_specific_insn *);
-+void __init arm_kprobe_decode_init(void);
++ case 4:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ return;
++ }
++ }
++ __i = __FDSET_LONGS;
++ while (__i) {
++ __i--;
++ *__tmp = 0;
++ __tmp++;
++ }
++}
+
-+#endif /* _ARM_KPROBES_H */
-diff --git a/include/asm-arm/plat-s3c24xx/dma.h b/include/asm-arm/plat-s3c24xx/dma.h
-index 2c59406..c78efe3 100644
---- a/include/asm-arm/plat-s3c24xx/dma.h
-+++ b/include/asm-arm/plat-s3c24xx/dma.h
-@@ -32,6 +32,7 @@ struct s3c24xx_dma_map {
- struct s3c24xx_dma_addr hw_addr;
-
- unsigned long channels[S3C2410_DMA_CHANNELS];
-+ unsigned long channels_rx[S3C2410_DMA_CHANNELS];
- };
-
- struct s3c24xx_dma_selection {
-@@ -41,6 +42,10 @@ struct s3c24xx_dma_selection {
-
- void (*select)(struct s3c2410_dma_chan *chan,
- struct s3c24xx_dma_map *map);
++#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
+
-+ void (*direction)(struct s3c2410_dma_chan *chan,
-+ struct s3c24xx_dma_map *map,
-+ enum s3c2410_dmasrc dir);
- };
-
- extern int s3c24xx_dma_init_map(struct s3c24xx_dma_selection *sel);
-diff --git a/include/asm-arm/plat-s3c24xx/irq.h b/include/asm-arm/plat-s3c24xx/irq.h
-index 8af6d95..45746a9 100644
---- a/include/asm-arm/plat-s3c24xx/irq.h
-+++ b/include/asm-arm/plat-s3c24xx/irq.h
-@@ -15,7 +15,9 @@
-
- #define EXTINT_OFF (IRQ_EINT4 - 4)
-
-+/* these are exported for arch/arm/mach-* usage */
- extern struct irq_chip s3c_irq_level_chip;
-+extern struct irq_chip s3c_irq_chip;
-
- static inline void
- s3c_irqsub_mask(unsigned int irqno, unsigned int parentbit,
-diff --git a/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h b/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
++#endif /* __ASM_SH_POSIX_TYPES_H */
+diff --git a/include/asm-sh/posix_types_64.h b/include/asm-sh/posix_types_64.h
new file mode 100644
-index 0000000..25d4058
+index 0000000..0620317
--- /dev/null
-+++ b/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
-@@ -0,0 +1,72 @@
-+/* linux/include/asm-arm/plat-s3c24xx/regs-s3c2412-iis.h
++++ b/include/asm-sh/posix_types_64.h
+@@ -0,0 +1,131 @@
++#ifndef __ASM_SH64_POSIX_TYPES_H
++#define __ASM_SH64_POSIX_TYPES_H
++
++/*
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ *
-+ * Copyright 2007 Simtec Electronics <linux at simtec.co.uk>
-+ * http://armlinux.simtec.co.uk/
++ * include/asm-sh64/posix_types.h
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003 Paul Mundt
+ *
-+ * S3C2412 IIS register definition
-+*/
++ * This file is generally used by user-level software, so you need to
++ * be a little careful about namespace pollution etc. Also, we cannot
++ * assume GCC is being used.
++ */
+
-+#ifndef __ASM_ARCH_REGS_S3C2412_IIS_H
-+#define __ASM_ARCH_REGS_S3C2412_IIS_H
++typedef unsigned long __kernel_ino_t;
++typedef unsigned short __kernel_mode_t;
++typedef unsigned short __kernel_nlink_t;
++typedef long __kernel_off_t;
++typedef int __kernel_pid_t;
++typedef unsigned short __kernel_ipc_pid_t;
++typedef unsigned short __kernel_uid_t;
++typedef unsigned short __kernel_gid_t;
++typedef long unsigned int __kernel_size_t;
++typedef int __kernel_ssize_t;
++typedef int __kernel_ptrdiff_t;
++typedef long __kernel_time_t;
++typedef long __kernel_suseconds_t;
++typedef long __kernel_clock_t;
++typedef int __kernel_timer_t;
++typedef int __kernel_clockid_t;
++typedef int __kernel_daddr_t;
++typedef char * __kernel_caddr_t;
++typedef unsigned short __kernel_uid16_t;
++typedef unsigned short __kernel_gid16_t;
++typedef unsigned int __kernel_uid32_t;
++typedef unsigned int __kernel_gid32_t;
+
-+#define S3C2412_IISCON (0x00)
-+#define S3C2412_IISMOD (0x04)
-+#define S3C2412_IISFIC (0x08)
-+#define S3C2412_IISPSR (0x0C)
-+#define S3C2412_IISTXD (0x10)
-+#define S3C2412_IISRXD (0x14)
++typedef unsigned short __kernel_old_uid_t;
++typedef unsigned short __kernel_old_gid_t;
++typedef unsigned short __kernel_old_dev_t;
+
-+#define S3C2412_IISCON_LRINDEX (1 << 11)
-+#define S3C2412_IISCON_TXFIFO_EMPTY (1 << 10)
-+#define S3C2412_IISCON_RXFIFO_EMPTY (1 << 9)
-+#define S3C2412_IISCON_TXFIFO_FULL (1 << 8)
-+#define S3C2412_IISCON_RXFIFO_FULL (1 << 7)
-+#define S3C2412_IISCON_TXDMA_PAUSE (1 << 6)
-+#define S3C2412_IISCON_RXDMA_PAUSE (1 << 5)
-+#define S3C2412_IISCON_TXCH_PAUSE (1 << 4)
-+#define S3C2412_IISCON_RXCH_PAUSE (1 << 3)
-+#define S3C2412_IISCON_TXDMA_ACTIVE (1 << 2)
-+#define S3C2412_IISCON_RXDMA_ACTIVE (1 << 1)
-+#define S3C2412_IISCON_IIS_ACTIVE (1 << 0)
++#ifdef __GNUC__
++typedef long long __kernel_loff_t;
++#endif
+
-+#define S3C2412_IISMOD_MASTER_INTERNAL (0 << 10)
-+#define S3C2412_IISMOD_MASTER_EXTERNAL (1 << 10)
-+#define S3C2412_IISMOD_SLAVE (2 << 10)
-+#define S3C2412_IISMOD_MASTER_MASK (3 << 10)
-+#define S3C2412_IISMOD_MODE_TXONLY (0 << 8)
-+#define S3C2412_IISMOD_MODE_RXONLY (1 << 8)
-+#define S3C2412_IISMOD_MODE_TXRX (2 << 8)
-+#define S3C2412_IISMOD_MODE_MASK (3 << 8)
-+#define S3C2412_IISMOD_LR_LLOW (0 << 7)
-+#define S3C2412_IISMOD_LR_RLOW (1 << 7)
-+#define S3C2412_IISMOD_SDF_IIS (0 << 5)
-+#define S3C2412_IISMOD_SDF_MSB (0 << 5)
-+#define S3C2412_IISMOD_SDF_LSB (0 << 5)
-+#define S3C2412_IISMOD_SDF_MASK (3 << 5)
-+#define S3C2412_IISMOD_RCLK_256FS (0 << 3)
-+#define S3C2412_IISMOD_RCLK_512FS (1 << 3)
-+#define S3C2412_IISMOD_RCLK_384FS (2 << 3)
-+#define S3C2412_IISMOD_RCLK_768FS (3 << 3)
-+#define S3C2412_IISMOD_RCLK_MASK (3 << 3)
-+#define S3C2412_IISMOD_BCLK_32FS (0 << 1)
-+#define S3C2412_IISMOD_BCLK_48FS (1 << 1)
-+#define S3C2412_IISMOD_BCLK_16FS (2 << 1)
-+#define S3C2412_IISMOD_BCLK_24FS (3 << 1)
-+#define S3C2412_IISMOD_BCLK_MASK (3 << 1)
-+#define S3C2412_IISMOD_8BIT (1 << 0)
++typedef struct {
++#if defined(__KERNEL__) || defined(__USE_ALL)
++ int val[2];
++#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
++ int __val[2];
++#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
++} __kernel_fsid_t;
+
-+#define S3C2412_IISPSR_PSREN (1 << 15)
++#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
+
-+#define S3C2412_IISFIC_TXFLUSH (1 << 15)
-+#define S3C2412_IISFIC_RXFLUSH (1 << 7)
-+#define S3C2412_IISFIC_TXCOUNT(x) (((x) >> 8) & 0xf)
-+#define S3C2412_IISFIC_RXCOUNT(x) (((x) >> 0) & 0xf)
++#undef __FD_SET
++static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
++}
+
++#undef __FD_CLR
++static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
++}
+
+
-+#endif /* __ASM_ARCH_REGS_S3C2412_IIS_H */
++#undef __FD_ISSET
++static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
++{
++ unsigned long __tmp = __fd / __NFDBITS;
++ unsigned long __rem = __fd % __NFDBITS;
++ return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
++}
+
-diff --git a/include/asm-arm/plat-s3c24xx/regs-spi.h b/include/asm-arm/plat-s3c24xx/regs-spi.h
-index 4a499a1..ea565b0 100644
---- a/include/asm-arm/plat-s3c24xx/regs-spi.h
-+++ b/include/asm-arm/plat-s3c24xx/regs-spi.h
-@@ -17,6 +17,21 @@
-
- #define S3C2410_SPCON (0x00)
-
-+#define S3C2412_SPCON_RXFIFO_RB2 (0<<14)
-+#define S3C2412_SPCON_RXFIFO_RB4 (1<<14)
-+#define S3C2412_SPCON_RXFIFO_RB12 (2<<14)
-+#define S3C2412_SPCON_RXFIFO_RB14 (3<<14)
-+#define S3C2412_SPCON_TXFIFO_RB2 (0<<12)
-+#define S3C2412_SPCON_TXFIFO_RB4 (1<<12)
-+#define S3C2412_SPCON_TXFIFO_RB12 (2<<12)
-+#define S3C2412_SPCON_TXFIFO_RB14 (3<<12)
-+#define S3C2412_SPCON_RXFIFO_RESET (1<<11) /* RxFIFO reset */
-+#define S3C2412_SPCON_TXFIFO_RESET (1<<10) /* TxFIFO reset */
-+#define S3C2412_SPCON_RXFIFO_EN (1<<9) /* RxFIFO Enable */
-+#define S3C2412_SPCON_TXFIFO_EN (1<<8) /* TxFIFO Enable */
++/*
++ * This will unroll the loop for the normal constant case (8 ints,
++ * for a 256-bit fd_set)
++ */
++#undef __FD_ZERO
++static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
++{
++ unsigned long *__tmp = __p->fds_bits;
++ int __i;
+
-+#define S3C2412_SPCON_DIRC_RX (1<<7)
++ if (__builtin_constant_p(__FDSET_LONGS)) {
++ switch (__FDSET_LONGS) {
++ case 16:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ __tmp[ 4] = 0; __tmp[ 5] = 0;
++ __tmp[ 6] = 0; __tmp[ 7] = 0;
++ __tmp[ 8] = 0; __tmp[ 9] = 0;
++ __tmp[10] = 0; __tmp[11] = 0;
++ __tmp[12] = 0; __tmp[13] = 0;
++ __tmp[14] = 0; __tmp[15] = 0;
++ return;
+
- #define S3C2410_SPCON_SMOD_DMA (2<<5) /* DMA mode */
- #define S3C2410_SPCON_SMOD_INT (1<<5) /* interrupt mode */
- #define S3C2410_SPCON_SMOD_POLL (0<<5) /* polling mode */
-@@ -34,10 +49,19 @@
-
- #define S3C2410_SPSTA (0x04)
-
-+#define S3C2412_SPSTA_RXFIFO_AE (1<<11)
-+#define S3C2412_SPSTA_TXFIFO_AE (1<<10)
-+#define S3C2412_SPSTA_RXFIFO_ERROR (1<<9)
-+#define S3C2412_SPSTA_TXFIFO_ERROR (1<<8)
-+#define S3C2412_SPSTA_RXFIFO_FIFO (1<<7)
-+#define S3C2412_SPSTA_RXFIFO_EMPTY (1<<6)
-+#define S3C2412_SPSTA_TXFIFO_NFULL (1<<5)
-+#define S3C2412_SPSTA_TXFIFO_EMPTY (1<<4)
++ case 8:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ __tmp[ 4] = 0; __tmp[ 5] = 0;
++ __tmp[ 6] = 0; __tmp[ 7] = 0;
++ return;
+
- #define S3C2410_SPSTA_DCOL (1<<2) /* Data Collision Error */
- #define S3C2410_SPSTA_MULD (1<<1) /* Multi Master Error */
- #define S3C2410_SPSTA_READY (1<<0) /* Data Tx/Rx ready */
++ case 4:
++ __tmp[ 0] = 0; __tmp[ 1] = 0;
++ __tmp[ 2] = 0; __tmp[ 3] = 0;
++ return;
++ }
++ }
++ __i = __FDSET_LONGS;
++ while (__i) {
++ __i--;
++ *__tmp = 0;
++ __tmp++;
++ }
++}
++
++#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
++
++#endif /* __ASM_SH64_POSIX_TYPES_H */
+diff --git a/include/asm-sh/processor.h b/include/asm-sh/processor.h
+index fda6848..c9b1416 100644
+--- a/include/asm-sh/processor.h
++++ b/include/asm-sh/processor.h
+@@ -1,32 +1,10 @@
+-/*
+- * include/asm-sh/processor.h
+- *
+- * Copyright (C) 1999, 2000 Niibe Yutaka
+- * Copyright (C) 2002, 2003 Paul Mundt
+- */
-
-+#define S3C2412_SPSTA_READY_ORG (1<<3)
-
- #define S3C2410_SPPIN (0x08)
+ #ifndef __ASM_SH_PROCESSOR_H
+ #define __ASM_SH_PROCESSOR_H
+-#ifdef __KERNEL__
-@@ -46,9 +70,13 @@
- #define S3C2400_SPPIN_nCS (1<<1) /* SPI Card Select */
- #define S3C2410_SPPIN_KEEP (1<<0) /* Master Out keep */
+-#include <linux/compiler.h>
+-#include <asm/page.h>
+-#include <asm/types.h>
+-#include <asm/cache.h>
+-#include <asm/ptrace.h>
+ #include <asm/cpu-features.h>
++#include <asm/fpu.h>
+-/*
+- * Default implementation of macro that returns current
+- * instruction pointer ("program counter").
+- */
+-#define current_text_addr() ({ void *pc; __asm__("mova 1f, %0\n1:":"=z" (pc)); pc; })
-
- #define S3C2410_SPPRE (0x0C)
- #define S3C2410_SPTDAT (0x10)
- #define S3C2410_SPRDAT (0x14)
+-/* Core Processor Version Register */
+-#define CCN_PVR 0xff000030
+-#define CCN_CVR 0xff000040
+-#define CCN_PRR 0xff000044
+-
++#ifndef __ASSEMBLY__
+ /*
+ * CPU type and hardware bug flags. Kept separately for each CPU.
+ *
+@@ -39,247 +17,49 @@ enum cpu_type {
+ CPU_SH7619,
-+#define S3C2412_TXFIFO (0x18)
-+#define S3C2412_RXFIFO (0x18)
-+#define S3C2412_SPFIC (0x24)
-+
-+
- #endif /* __ASM_ARCH_REGS_SPI_H */
-diff --git a/include/asm-arm/proc-fns.h b/include/asm-arm/proc-fns.h
-index 5599d4e..a4ce457 100644
---- a/include/asm-arm/proc-fns.h
-+++ b/include/asm-arm/proc-fns.h
-@@ -185,6 +185,14 @@
- # define CPU_NAME cpu_xsc3
- # endif
- # endif
-+# ifdef CONFIG_CPU_FEROCEON
-+# ifdef CPU_NAME
-+# undef MULTI_CPU
-+# define MULTI_CPU
-+# else
-+# define CPU_NAME cpu_feroceon
-+# endif
-+# endif
- # ifdef CONFIG_CPU_V6
- # ifdef CPU_NAME
- # undef MULTI_CPU
-diff --git a/include/asm-arm/traps.h b/include/asm-arm/traps.h
-index d4f34dc..f1541af 100644
---- a/include/asm-arm/traps.h
-+++ b/include/asm-arm/traps.h
-@@ -15,4 +15,13 @@ struct undef_hook {
- void register_undef_hook(struct undef_hook *hook);
- void unregister_undef_hook(struct undef_hook *hook);
+ /* SH-2A types */
+- CPU_SH7206,
++ CPU_SH7203, CPU_SH7206, CPU_SH7263,
-+static inline int in_exception_text(unsigned long ptr)
-+{
-+ extern char __exception_text_start[];
-+ extern char __exception_text_end[];
-+
-+ return ptr >= (unsigned long)&__exception_text_start &&
-+ ptr < (unsigned long)&__exception_text_end;
-+}
-+
- #endif
-diff --git a/include/asm-arm/vfp.h b/include/asm-arm/vfp.h
-index bd6be9d..5f9a2cb 100644
---- a/include/asm-arm/vfp.h
-+++ b/include/asm-arm/vfp.h
-@@ -7,7 +7,11 @@
+ /* SH-3 types */
+ CPU_SH7705, CPU_SH7706, CPU_SH7707,
+ CPU_SH7708, CPU_SH7708S, CPU_SH7708R,
+ CPU_SH7709, CPU_SH7709A, CPU_SH7710, CPU_SH7712,
+- CPU_SH7720, CPU_SH7729,
++ CPU_SH7720, CPU_SH7721, CPU_SH7729,
- #define FPSID cr0
- #define FPSCR cr1
-+#define MVFR1 cr6
-+#define MVFR0 cr7
- #define FPEXC cr8
-+#define FPINST cr9
-+#define FPINST2 cr10
+ /* SH-4 types */
+ CPU_SH7750, CPU_SH7750S, CPU_SH7750R, CPU_SH7751, CPU_SH7751R,
+ CPU_SH7760, CPU_SH4_202, CPU_SH4_501,
- /* FPSID bits */
- #define FPSID_IMPLEMENTER_BIT (24)
-@@ -28,6 +32,19 @@
- /* FPEXC bits */
- #define FPEXC_EX (1 << 31)
- #define FPEXC_EN (1 << 30)
-+#define FPEXC_DEX (1 << 29)
-+#define FPEXC_FP2V (1 << 28)
-+#define FPEXC_VV (1 << 27)
-+#define FPEXC_TFV (1 << 26)
-+#define FPEXC_LENGTH_BIT (8)
-+#define FPEXC_LENGTH_MASK (7 << FPEXC_LENGTH_BIT)
-+#define FPEXC_IDF (1 << 7)
-+#define FPEXC_IXF (1 << 4)
-+#define FPEXC_UFF (1 << 3)
-+#define FPEXC_OFF (1 << 2)
-+#define FPEXC_DZF (1 << 1)
-+#define FPEXC_IOF (1 << 0)
-+#define FPEXC_TRAP_MASK (FPEXC_IDF|FPEXC_IXF|FPEXC_UFF|FPEXC_OFF|FPEXC_DZF|FPEXC_IOF)
+ /* SH-4A types */
+- CPU_SH7770, CPU_SH7780, CPU_SH7781, CPU_SH7785, CPU_SHX3,
++ CPU_SH7763, CPU_SH7770, CPU_SH7780, CPU_SH7781, CPU_SH7785, CPU_SHX3,
- /* FPSCR bits */
- #define FPSCR_DEFAULT_NAN (1<<25)
-@@ -55,20 +72,9 @@
- #define FPSCR_IXC (1<<4)
- #define FPSCR_IDC (1<<7)
+ /* SH4AL-DSP types */
+ CPU_SH7343, CPU_SH7722,
+
++ /* SH-5 types */
++ CPU_SH5_101, CPU_SH5_103,
++
+ /* Unknown subtype */
+ CPU_SH_NONE
+ };
+-struct sh_cpuinfo {
+- unsigned int type;
+- unsigned long loops_per_jiffy;
+- unsigned long asid_cache;
+-
+- struct cache_info icache; /* Primary I-cache */
+- struct cache_info dcache; /* Primary D-cache */
+- struct cache_info scache; /* Secondary cache */
+-
+- unsigned long flags;
+-} __attribute__ ((aligned(L1_CACHE_BYTES)));
+-
+-extern struct sh_cpuinfo cpu_data[];
+-#define boot_cpu_data cpu_data[0]
+-#define current_cpu_data cpu_data[smp_processor_id()]
+-#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
+-
-/*
-- * VFP9-S specific.
+- * User space process size: 2GB.
+- *
+- * Since SH7709 and SH7750 have "area 7", we can't use 0x7c000000--0x7fffffff
- */
--#define FPINST cr9
--#define FPINST2 cr10
+-#define TASK_SIZE 0x7c000000UL
+-
+-/* This decides where the kernel will search for a free chunk of vm
+- * space during mmap's.
+- */
+-#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
-
--/* FPEXC bits */
--#define FPEXC_FPV2 (1<<28)
--#define FPEXC_LENGTH_BIT (8)
--#define FPEXC_LENGTH_MASK (7 << FPEXC_LENGTH_BIT)
--#define FPEXC_INV (1 << 7)
--#define FPEXC_UFC (1 << 3)
--#define FPEXC_OFC (1 << 2)
--#define FPEXC_IOC (1 << 0)
-+/* MVFR0 bits */
-+#define MVFR0_A_SIMD_BIT (0)
-+#define MVFR0_A_SIMD_MASK (0xf << MVFR0_A_SIMD_BIT)
-
- /* Bit patterns for decoding the packaged operation descriptors */
- #define VFPOPDESC_LENGTH_BIT (9)
-diff --git a/include/asm-arm/vfpmacros.h b/include/asm-arm/vfpmacros.h
-index 27fe028..cccb389 100644
---- a/include/asm-arm/vfpmacros.h
-+++ b/include/asm-arm/vfpmacros.h
-@@ -15,19 +15,33 @@
- .endm
-
- @ read all the working registers back into the VFP
-- .macro VFPFLDMIA, base
-+ .macro VFPFLDMIA, base, tmp
- #if __LINUX_ARM_ARCH__ < 6
- LDC p11, cr0, [\base],#33*4 @ FLDMIAX \base!, {d0-d15}
- #else
- LDC p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d0-d15}
- #endif
-+#ifdef CONFIG_VFPv3
-+ VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
-+ and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
-+ cmp \tmp, #2 @ 32 x 64bit registers?
-+ ldceql p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
-+ addne \base, \base, #32*4 @ step over unused register space
-+#endif
- .endm
-
- @ write all the working registers out of the VFP
-- .macro VFPFSTMIA, base
-+ .macro VFPFSTMIA, base, tmp
- #if __LINUX_ARM_ARCH__ < 6
- STC p11, cr0, [\base],#33*4 @ FSTMIAX \base!, {d0-d15}
- #else
- STC p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d0-d15}
- #endif
-+#ifdef CONFIG_VFPv3
-+ VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
-+ and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
-+ cmp \tmp, #2 @ 32 x 64bit registers?
-+ stceql p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
-+ addne \base, \base, #32*4 @ step over unused register space
-+#endif
- .endm
-diff --git a/include/asm-avr32/arch-at32ap/at32ap7000.h b/include/asm-avr32/arch-at32ap/at32ap7000.h
-deleted file mode 100644
-index 3914d7b..0000000
---- a/include/asm-avr32/arch-at32ap/at32ap7000.h
-+++ /dev/null
-@@ -1,35 +0,0 @@
-/*
-- * Pin definitions for AT32AP7000.
+- * Bit of SR register
- *
-- * Copyright (C) 2006 Atmel Corporation
+- * FD-bit:
+- * When it's set, it means the processor doesn't have right to use FPU,
+- * and it results exception when the floating operation is executed.
- *
-- * This program is free software; you can redistribute it and/or modify
-- * it under the terms of the GNU General Public License version 2 as
-- * published by the Free Software Foundation.
+- * IMASK-bit:
+- * Interrupt level mask
- */
--#ifndef __ASM_ARCH_AT32AP7000_H__
--#define __ASM_ARCH_AT32AP7000_H__
+-#define SR_FD 0x00008000
+-#define SR_DSP 0x00001000
+-#define SR_IMASK 0x000000f0
-
--#define GPIO_PERIPH_A 0
--#define GPIO_PERIPH_B 1
+-/*
+- * FPU structure and data
+- */
-
--#define NR_GPIO_CONTROLLERS 4
+-struct sh_fpu_hard_struct {
+- unsigned long fp_regs[16];
+- unsigned long xfp_regs[16];
+- unsigned long fpscr;
+- unsigned long fpul;
+-
+- long status; /* software status information */
+-};
+-
+-/* Dummy fpu emulator */
+-struct sh_fpu_soft_struct {
+- unsigned long fp_regs[16];
+- unsigned long xfp_regs[16];
+- unsigned long fpscr;
+- unsigned long fpul;
+-
+- unsigned char lookahead;
+- unsigned long entry_pc;
+-};
+-
+-union sh_fpu_union {
+- struct sh_fpu_hard_struct hard;
+- struct sh_fpu_soft_struct soft;
+-};
+-
+-struct thread_struct {
+- /* Saved registers when thread is descheduled */
+- unsigned long sp;
+- unsigned long pc;
+-
+- /* Hardware debugging registers */
+- unsigned long ubc_pc;
+-
+- /* floating point info */
+- union sh_fpu_union fpu;
+-};
+-
+-typedef struct {
+- unsigned long seg;
+-} mm_segment_t;
+-
+-/* Count of active tasks with UBC settings */
+-extern int ubc_usercnt;
++/* Forward decl */
++struct sh_cpuinfo;
+
+-#define INIT_THREAD { \
+- .sp = sizeof(init_stack) + (long) &init_stack, \
+-}
-
-/*
-- * Pin numbers identifying specific GPIO pins on the chip. They can
-- * also be converted to IRQ numbers by passing them through
-- * gpio_to_irq().
+- * Do necessary setup to start up a newly executed thread.
- */
--#define GPIO_PIOA_BASE (0)
--#define GPIO_PIOB_BASE (GPIO_PIOA_BASE + 32)
--#define GPIO_PIOC_BASE (GPIO_PIOB_BASE + 32)
--#define GPIO_PIOD_BASE (GPIO_PIOC_BASE + 32)
--#define GPIO_PIOE_BASE (GPIO_PIOD_BASE + 32)
+-#define start_thread(regs, new_pc, new_sp) \
+- set_fs(USER_DS); \
+- regs->pr = 0; \
+- regs->sr = SR_FD; /* User mode. */ \
+- regs->pc = new_pc; \
+- regs->regs[15] = new_sp
-
--#define GPIO_PIN_PA(N) (GPIO_PIOA_BASE + (N))
--#define GPIO_PIN_PB(N) (GPIO_PIOB_BASE + (N))
--#define GPIO_PIN_PC(N) (GPIO_PIOC_BASE + (N))
--#define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
--#define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))
+-/* Forward declaration, a strange C thing */
+-struct task_struct;
+-struct mm_struct;
-
--#endif /* __ASM_ARCH_AT32AP7000_H__ */
-diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h b/include/asm-avr32/arch-at32ap/at32ap700x.h
+-/* Free all resources held by a thread. */
+-extern void release_thread(struct task_struct *);
+-
+-/* Prepare to copy thread state - unlazy all lazy status */
+-#define prepare_to_copy(tsk) do { } while (0)
+-
+-/*
+- * create a kernel thread without removing it from tasklists
+- */
+-extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
+-
+-/* Copy and release all segment info associated with a VM */
+-#define copy_segments(p, mm) do { } while(0)
+-#define release_segments(mm) do { } while(0)
+-
+-/*
+- * FPU lazy state save handling.
+- */
+-
+-static __inline__ void disable_fpu(void)
+-{
+- unsigned long __dummy;
+-
+- /* Set FD flag in SR */
+- __asm__ __volatile__("stc sr, %0\n\t"
+- "or %1, %0\n\t"
+- "ldc %0, sr"
+- : "=&r" (__dummy)
+- : "r" (SR_FD));
+-}
+-
+-static __inline__ void enable_fpu(void)
+-{
+- unsigned long __dummy;
+-
+- /* Clear out FD flag in SR */
+- __asm__ __volatile__("stc sr, %0\n\t"
+- "and %1, %0\n\t"
+- "ldc %0, sr"
+- : "=&r" (__dummy)
+- : "r" (~SR_FD));
+-}
+-
+-static __inline__ void release_fpu(struct pt_regs *regs)
+-{
+- regs->sr |= SR_FD;
+-}
+-
+-static __inline__ void grab_fpu(struct pt_regs *regs)
+-{
+- regs->sr &= ~SR_FD;
+-}
+-
+-extern void save_fpu(struct task_struct *__tsk, struct pt_regs *regs);
+-
+-#define unlazy_fpu(tsk, regs) do { \
+- if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
+- save_fpu(tsk, regs); \
+- } \
+-} while (0)
+-
+-#define clear_fpu(tsk, regs) do { \
+- if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
+- clear_tsk_thread_flag(tsk, TIF_USEDFPU); \
+- release_fpu(regs); \
+- } \
+-} while (0)
+-
+-/* Double presision, NANS as NANS, rounding to nearest, no exceptions */
+-#define FPSCR_INIT 0x00080000
+-
+-#define FPSCR_CAUSE_MASK 0x0001f000 /* Cause bits */
+-#define FPSCR_FLAG_MASK 0x0000007c /* Flag bits */
+-
+-/*
+- * Return saved PC of a blocked thread.
+- */
+-#define thread_saved_pc(tsk) (tsk->thread.pc)
+-
+-void show_trace(struct task_struct *tsk, unsigned long *sp,
+- struct pt_regs *regs);
+-extern unsigned long get_wchan(struct task_struct *p);
+-
+-#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
+-#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[15])
+-
+-#define cpu_sleep() __asm__ __volatile__ ("sleep" : : : "memory")
+-#define cpu_relax() barrier()
+-
+-#if defined(CONFIG_CPU_SH2A) || defined(CONFIG_CPU_SH3) || \
+- defined(CONFIG_CPU_SH4)
+-#define PREFETCH_STRIDE L1_CACHE_BYTES
+-#define ARCH_HAS_PREFETCH
+-#define ARCH_HAS_PREFETCHW
+-static inline void prefetch(void *x)
+-{
+- __asm__ __volatile__ ("pref @%0\n\t" : : "r" (x) : "memory");
+-}
+-
+-#define prefetchw(x) prefetch(x)
+-#endif
++/* arch/sh/kernel/setup.c */
++const char *get_cpu_subtype(struct sh_cpuinfo *c);
+
+ #ifdef CONFIG_VSYSCALL
+-extern int vsyscall_init(void);
++int vsyscall_init(void);
+ #else
+ #define vsyscall_init() do { } while (0)
+ #endif
+
+-/* arch/sh/kernel/setup.c */
+-const char *get_cpu_subtype(struct sh_cpuinfo *c);
++#endif /* __ASSEMBLY__ */
++
++#ifdef CONFIG_SUPERH32
++# include "processor_32.h"
++#else
++# include "processor_64.h"
++#endif
+
+-#endif /* __KERNEL__ */
+ #endif /* __ASM_SH_PROCESSOR_H */
+diff --git a/include/asm-sh/processor_32.h b/include/asm-sh/processor_32.h
new file mode 100644
-index 0000000..99684d6
+index 0000000..a7edaa1
--- /dev/null
-+++ b/include/asm-avr32/arch-at32ap/at32ap700x.h
-@@ -0,0 +1,35 @@
++++ b/include/asm-sh/processor_32.h
+@@ -0,0 +1,215 @@
+/*
-+ * Pin definitions for AT32AP7000.
++ * include/asm-sh/processor.h
+ *
-+ * Copyright (C) 2006 Atmel Corporation
++ * Copyright (C) 1999, 2000 Niibe Yutaka
++ * Copyright (C) 2002, 2003 Paul Mundt
++ */
++
++#ifndef __ASM_SH_PROCESSOR_32_H
++#define __ASM_SH_PROCESSOR_32_H
++#ifdef __KERNEL__
++
++#include <linux/compiler.h>
++#include <asm/page.h>
++#include <asm/types.h>
++#include <asm/cache.h>
++#include <asm/ptrace.h>
++
++/*
++ * Default implementation of macro that returns current
++ * instruction pointer ("program counter").
++ */
++#define current_text_addr() ({ void *pc; __asm__("mova 1f, %0\n1:":"=z" (pc)); pc; })
++
++/* Core Processor Version Register */
++#define CCN_PVR 0xff000030
++#define CCN_CVR 0xff000040
++#define CCN_PRR 0xff000044
++
++struct sh_cpuinfo {
++ unsigned int type;
++ unsigned long loops_per_jiffy;
++ unsigned long asid_cache;
++
++ struct cache_info icache; /* Primary I-cache */
++ struct cache_info dcache; /* Primary D-cache */
++ struct cache_info scache; /* Secondary cache */
++
++ unsigned long flags;
++} __attribute__ ((aligned(L1_CACHE_BYTES)));
++
++extern struct sh_cpuinfo cpu_data[];
++#define boot_cpu_data cpu_data[0]
++#define current_cpu_data cpu_data[smp_processor_id()]
++#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
++
++/*
++ * User space process size: 2GB.
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * Since SH7709 and SH7750 have "area 7", we can't use 0x7c000000--0x7fffffff
+ */
-+#ifndef __ASM_ARCH_AT32AP700X_H__
-+#define __ASM_ARCH_AT32AP700X_H__
++#define TASK_SIZE 0x7c000000UL
+
-+#define GPIO_PERIPH_A 0
-+#define GPIO_PERIPH_B 1
++/* This decides where the kernel will search for a free chunk of vm
++ * space during mmap's.
++ */
++#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
-+#define NR_GPIO_CONTROLLERS 4
++/*
++ * Bit of SR register
++ *
++ * FD-bit:
++ * When it's set, it means the processor doesn't have right to use FPU,
++ * and it results exception when the floating operation is executed.
++ *
++ * IMASK-bit:
++ * Interrupt level mask
++ */
++#define SR_DSP 0x00001000
++#define SR_IMASK 0x000000f0
+
+/*
-+ * Pin numbers identifying specific GPIO pins on the chip. They can
-+ * also be converted to IRQ numbers by passing them through
-+ * gpio_to_irq().
++ * FPU structure and data
+ */
-+#define GPIO_PIOA_BASE (0)
-+#define GPIO_PIOB_BASE (GPIO_PIOA_BASE + 32)
-+#define GPIO_PIOC_BASE (GPIO_PIOB_BASE + 32)
-+#define GPIO_PIOD_BASE (GPIO_PIOC_BASE + 32)
-+#define GPIO_PIOE_BASE (GPIO_PIOD_BASE + 32)
+
-+#define GPIO_PIN_PA(N) (GPIO_PIOA_BASE + (N))
-+#define GPIO_PIN_PB(N) (GPIO_PIOB_BASE + (N))
-+#define GPIO_PIN_PC(N) (GPIO_PIOC_BASE + (N))
-+#define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
-+#define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))
++struct sh_fpu_hard_struct {
++ unsigned long fp_regs[16];
++ unsigned long xfp_regs[16];
++ unsigned long fpscr;
++ unsigned long fpul;
+
-+#endif /* __ASM_ARCH_AT32AP700X_H__ */
-diff --git a/include/asm-avr32/arch-at32ap/cpu.h b/include/asm-avr32/arch-at32ap/cpu.h
-index a762f42..44d0bfa 100644
---- a/include/asm-avr32/arch-at32ap/cpu.h
-+++ b/include/asm-avr32/arch-at32ap/cpu.h
-@@ -14,7 +14,7 @@
- * Only AT32AP7000 is defined for now. We can identify the specific
- * chip at runtime, but I'm not sure if it's really worth it.
- */
--#ifdef CONFIG_CPU_AT32AP7000
-+#ifdef CONFIG_CPU_AT32AP700X
- # define cpu_is_at32ap7000() (1)
- #else
- # define cpu_is_at32ap7000() (0)
-@@ -30,5 +30,6 @@
- #define cpu_is_at91sam9261() (0)
- #define cpu_is_at91sam9263() (0)
- #define cpu_is_at91sam9rl() (0)
-+#define cpu_is_at91cap9() (0)
-
- #endif /* __ASM_ARCH_CPU_H */
-diff --git a/include/asm-avr32/arch-at32ap/io.h b/include/asm-avr32/arch-at32ap/io.h
-index ee59e40..4ec6abc 100644
---- a/include/asm-avr32/arch-at32ap/io.h
-+++ b/include/asm-avr32/arch-at32ap/io.h
-@@ -4,7 +4,7 @@
- /* For "bizarre" halfword swapping */
- #include <linux/byteorder/swabb.h>
-
--#if defined(CONFIG_AP7000_32_BIT_SMC)
-+#if defined(CONFIG_AP700X_32_BIT_SMC)
- # define __swizzle_addr_b(addr) (addr ^ 3UL)
- # define __swizzle_addr_w(addr) (addr ^ 2UL)
- # define __swizzle_addr_l(addr) (addr)
-@@ -14,7 +14,7 @@
- # define __mem_ioswabb(a, x) (x)
- # define __mem_ioswabw(a, x) swab16(x)
- # define __mem_ioswabl(a, x) swab32(x)
--#elif defined(CONFIG_AP7000_16_BIT_SMC)
-+#elif defined(CONFIG_AP700X_16_BIT_SMC)
- # define __swizzle_addr_b(addr) (addr ^ 1UL)
- # define __swizzle_addr_w(addr) (addr)
- # define __swizzle_addr_l(addr) (addr)
-diff --git a/include/asm-avr32/irq.h b/include/asm-avr32/irq.h
-index 83e6549..9315724 100644
---- a/include/asm-avr32/irq.h
-+++ b/include/asm-avr32/irq.h
-@@ -11,4 +11,9 @@
-
- #define irq_canonicalize(i) (i)
-
-+#ifndef __ASSEMBLER__
-+int nmi_enable(void);
-+void nmi_disable(void);
-+#endif
++ long status; /* software status information */
++};
+
- #endif /* __ASM_AVR32_IOCTLS_H */
-diff --git a/include/asm-avr32/kdebug.h b/include/asm-avr32/kdebug.h
-index fd7e990..ca4f954 100644
---- a/include/asm-avr32/kdebug.h
-+++ b/include/asm-avr32/kdebug.h
-@@ -5,6 +5,7 @@
- enum die_val {
- DIE_BREAKPOINT,
- DIE_SSTEP,
-+ DIE_NMI,
- };
-
- #endif /* __ASM_AVR32_KDEBUG_H */
-diff --git a/include/asm-avr32/ocd.h b/include/asm-avr32/ocd.h
-index 996405e..6bef094 100644
---- a/include/asm-avr32/ocd.h
-+++ b/include/asm-avr32/ocd.h
-@@ -533,6 +533,11 @@ static inline void __ocd_write(unsigned int reg, unsigned long value)
- #define ocd_read(reg) __ocd_read(OCD_##reg)
- #define ocd_write(reg, value) __ocd_write(OCD_##reg, value)
-
-+struct task_struct;
++/* Dummy fpu emulator */
++struct sh_fpu_soft_struct {
++ unsigned long fp_regs[16];
++ unsigned long xfp_regs[16];
++ unsigned long fpscr;
++ unsigned long fpul;
+
-+void ocd_enable(struct task_struct *child);
-+void ocd_disable(struct task_struct *child);
++ unsigned char lookahead;
++ unsigned long entry_pc;
++};
+
- #endif /* !__ASSEMBLER__ */
-
- #endif /* __ASM_AVR32_OCD_H */
-diff --git a/include/asm-avr32/processor.h b/include/asm-avr32/processor.h
-index a52576b..4212551 100644
---- a/include/asm-avr32/processor.h
-+++ b/include/asm-avr32/processor.h
-@@ -57,11 +57,25 @@ struct avr32_cpuinfo {
- unsigned short cpu_revision;
- enum tlb_config tlb_config;
- unsigned long features;
-+ u32 device_id;
-
- struct cache_info icache;
- struct cache_info dcache;
- };
-
-+static inline unsigned int avr32_get_manufacturer_id(struct avr32_cpuinfo *cpu)
-+{
-+ return (cpu->device_id >> 1) & 0x7f;
++union sh_fpu_union {
++ struct sh_fpu_hard_struct hard;
++ struct sh_fpu_soft_struct soft;
++};
++
++struct thread_struct {
++ /* Saved registers when thread is descheduled */
++ unsigned long sp;
++ unsigned long pc;
++
++ /* Hardware debugging registers */
++ unsigned long ubc_pc;
++
++ /* floating point info */
++ union sh_fpu_union fpu;
++};
++
++typedef struct {
++ unsigned long seg;
++} mm_segment_t;
++
++/* Count of active tasks with UBC settings */
++extern int ubc_usercnt;
++
++#define INIT_THREAD { \
++ .sp = sizeof(init_stack) + (long) &init_stack, \
+}
-+static inline unsigned int avr32_get_product_number(struct avr32_cpuinfo *cpu)
++
++/*
++ * Do necessary setup to start up a newly executed thread.
++ */
++#define start_thread(regs, new_pc, new_sp) \
++ set_fs(USER_DS); \
++ regs->pr = 0; \
++ regs->sr = SR_FD; /* User mode. */ \
++ regs->pc = new_pc; \
++ regs->regs[15] = new_sp
++
++/* Forward declaration, a strange C thing */
++struct task_struct;
++struct mm_struct;
++
++/* Free all resources held by a thread. */
++extern void release_thread(struct task_struct *);
++
++/* Prepare to copy thread state - unlazy all lazy status */
++#define prepare_to_copy(tsk) do { } while (0)
++
++/*
++ * create a kernel thread without removing it from tasklists
++ */
++extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
++
++/* Copy and release all segment info associated with a VM */
++#define copy_segments(p, mm) do { } while(0)
++#define release_segments(mm) do { } while(0)
++
++/*
++ * FPU lazy state save handling.
++ */
++
++static __inline__ void disable_fpu(void)
+{
-+ return (cpu->device_id >> 12) & 0xffff;
++ unsigned long __dummy;
++
++ /* Set FD flag in SR */
++ __asm__ __volatile__("stc sr, %0\n\t"
++ "or %1, %0\n\t"
++ "ldc %0, sr"
++ : "=&r" (__dummy)
++ : "r" (SR_FD));
+}
-+static inline unsigned int avr32_get_chip_revision(struct avr32_cpuinfo *cpu)
++
++static __inline__ void enable_fpu(void)
+{
-+ return (cpu->device_id >> 28) & 0x0f;
++ unsigned long __dummy;
++
++ /* Clear out FD flag in SR */
++ __asm__ __volatile__("stc sr, %0\n\t"
++ "and %1, %0\n\t"
++ "ldc %0, sr"
++ : "=&r" (__dummy)
++ : "r" (~SR_FD));
+}
+
- extern struct avr32_cpuinfo boot_cpu_data;
-
- #ifdef CONFIG_SMP
-diff --git a/include/asm-avr32/ptrace.h b/include/asm-avr32/ptrace.h
-index 8c5dba5..9e2d44f 100644
---- a/include/asm-avr32/ptrace.h
-+++ b/include/asm-avr32/ptrace.h
-@@ -121,7 +121,15 @@ struct pt_regs {
- };
-
- #ifdef __KERNEL__
--# define user_mode(regs) (((regs)->sr & MODE_MASK) == MODE_USER)
++/* Double presision, NANS as NANS, rounding to nearest, no exceptions */
++#define FPSCR_INIT 0x00080000
+
-+#include <asm/ocd.h>
++#define FPSCR_CAUSE_MASK 0x0001f000 /* Cause bits */
++#define FPSCR_FLAG_MASK 0x0000007c /* Flag bits */
+
-+#define arch_ptrace_attach(child) ocd_enable(child)
++/*
++ * Return saved PC of a blocked thread.
++ */
++#define thread_saved_pc(tsk) (tsk->thread.pc)
+
-+#define user_mode(regs) (((regs)->sr & MODE_MASK) == MODE_USER)
-+#define instruction_pointer(regs) ((regs)->pc)
-+#define profile_pc(regs) instruction_pointer(regs)
++void show_trace(struct task_struct *tsk, unsigned long *sp,
++ struct pt_regs *regs);
++extern unsigned long get_wchan(struct task_struct *p);
+
- extern void show_regs (struct pt_regs *);
-
- static __inline__ int valid_user_regs(struct pt_regs *regs)
-@@ -141,9 +149,6 @@ static __inline__ int valid_user_regs(struct pt_regs *regs)
- return 0;
- }
-
--#define instruction_pointer(regs) ((regs)->pc)
--
--#define profile_pc(regs) instruction_pointer(regs)
-
- #endif /* __KERNEL__ */
-
-diff --git a/include/asm-avr32/setup.h b/include/asm-avr32/setup.h
-index b0828d4..ea3070f 100644
---- a/include/asm-avr32/setup.h
-+++ b/include/asm-avr32/setup.h
-@@ -110,7 +110,7 @@ struct tagtable {
- int (*parse)(struct tag *);
- };
-
--#define __tag __attribute_used__ __attribute__((__section__(".taglist.init")))
-+#define __tag __used __attribute__((__section__(".taglist.init")))
- #define __tagtable(tag, fn) \
- static struct tagtable __tagtable_##fn __tag = { tag, fn }
-
-diff --git a/include/asm-avr32/thread_info.h b/include/asm-avr32/thread_info.h
-index 184b574..07049f6 100644
---- a/include/asm-avr32/thread_info.h
-+++ b/include/asm-avr32/thread_info.h
-@@ -88,6 +88,7 @@ static inline struct thread_info *current_thread_info(void)
- #define TIF_MEMDIE 6
- #define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */
- #define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */
-+#define TIF_DEBUG 30 /* debugging enabled */
- #define TIF_USERSPACE 31 /* true if FS sets userspace */
-
- #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
-diff --git a/include/asm-blackfin/bfin-global.h b/include/asm-blackfin/bfin-global.h
-index 39bdd86..6ae0619 100644
---- a/include/asm-blackfin/bfin-global.h
-+++ b/include/asm-blackfin/bfin-global.h
-@@ -51,7 +51,7 @@ extern unsigned long sclk_to_usecs(unsigned long sclk);
- extern unsigned long usecs_to_sclk(unsigned long usecs);
-
- extern void dump_bfin_process(struct pt_regs *regs);
--extern void dump_bfin_mem(void *retaddr);
-+extern void dump_bfin_mem(struct pt_regs *regs);
- extern void dump_bfin_trace_buffer(void);
-
- extern int init_arch_irq(void);
-diff --git a/include/asm-blackfin/cplb-mpu.h b/include/asm-blackfin/cplb-mpu.h
++#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
++#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[15])
++
++#define cpu_sleep() __asm__ __volatile__ ("sleep" : : : "memory")
++#define cpu_relax() barrier()
++
++#if defined(CONFIG_CPU_SH2A) || defined(CONFIG_CPU_SH3) || \
++ defined(CONFIG_CPU_SH4)
++#define PREFETCH_STRIDE L1_CACHE_BYTES
++#define ARCH_HAS_PREFETCH
++#define ARCH_HAS_PREFETCHW
++static inline void prefetch(void *x)
++{
++ __asm__ __volatile__ ("pref @%0\n\t" : : "r" (x) : "memory");
++}
++
++#define prefetchw(x) prefetch(x)
++#endif
++
++#endif /* __KERNEL__ */
++#endif /* __ASM_SH_PROCESSOR_32_H */
+diff --git a/include/asm-sh/processor_64.h b/include/asm-sh/processor_64.h
new file mode 100644
-index 0000000..75c67b9
+index 0000000..99c22b1
--- /dev/null
-+++ b/include/asm-blackfin/cplb-mpu.h
-@@ -0,0 +1,61 @@
++++ b/include/asm-sh/processor_64.h
+@@ -0,0 +1,275 @@
++#ifndef __ASM_SH_PROCESSOR_64_H
++#define __ASM_SH_PROCESSOR_64_H
++
+/*
-+ * File: include/asm-blackfin/cplbinit.h
-+ * Based on:
-+ * Author:
-+ *
-+ * Created:
-+ * Description:
-+ *
-+ * Modified:
-+ * Copyright 2004-2006 Analog Devices Inc.
-+ *
-+ * Bugs: Enter bugs at http://blackfin.uclinux.org/
++ * include/asm-sh/processor_64.h
+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003 Paul Mundt
++ * Copyright (C) 2004 Richard Curnow
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ */
++#ifndef __ASSEMBLY__
++
++#include <linux/compiler.h>
++#include <asm/page.h>
++#include <asm/types.h>
++#include <asm/cache.h>
++#include <asm/ptrace.h>
++#include <asm/cpu/registers.h>
++
++/*
++ * Default implementation of macro that returns current
++ * instruction pointer ("program counter").
++ */
++#define current_text_addr() ({ \
++void *pc; \
++unsigned long long __dummy = 0; \
++__asm__("gettr tr0, %1\n\t" \
++ "pta 4, tr0\n\t" \
++ "gettr tr0, %0\n\t" \
++ "ptabs %1, tr0\n\t" \
++ :"=r" (pc), "=r" (__dummy) \
++ : "1" (__dummy)); \
++pc; })
++
++/*
++ * TLB information structure
+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, see the file COPYING, or write
-+ * to the Free Software Foundation, Inc.,
-+ * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ * Defined for both I and D tlb, per-processor.
+ */
-+#ifndef __ASM_BFIN_CPLB_MPU_H
-+#define __ASM_BFIN_CPLB_MPU_H
++struct tlb_info {
++ unsigned long long next;
++ unsigned long long first;
++ unsigned long long last;
+
-+struct cplb_entry {
-+ unsigned long data, addr;
-+};
++ unsigned int entries;
++ unsigned int step;
+
-+struct mem_region {
-+ unsigned long start, end;
-+ unsigned long dcplb_data;
-+ unsigned long icplb_data;
++ unsigned long flags;
+};
+
-+extern struct cplb_entry dcplb_tbl[MAX_CPLBS];
-+extern struct cplb_entry icplb_tbl[MAX_CPLBS];
-+extern int first_switched_icplb;
-+extern int first_mask_dcplb;
-+extern int first_switched_dcplb;
++struct sh_cpuinfo {
++ enum cpu_type type;
++ unsigned long loops_per_jiffy;
++ unsigned long asid_cache;
+
-+extern int nr_dcplb_miss, nr_icplb_miss, nr_icplb_supv_miss, nr_dcplb_prot;
-+extern int nr_cplb_flush;
++ unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+
-+extern int page_mask_order;
-+extern int page_mask_nelts;
++ /* Cache info */
++ struct cache_info icache;
++ struct cache_info dcache;
++ struct cache_info scache;
+
-+extern unsigned long *current_rwx_mask;
++ /* TLB info */
++ struct tlb_info itlb;
++ struct tlb_info dtlb;
+
-+extern void flush_switched_cplbs(void);
-+extern void set_mask_dcplbs(unsigned long *);
++ unsigned long flags;
++};
+
-+extern void __noreturn panic_cplb_error(int seqstat, struct pt_regs *);
++extern struct sh_cpuinfo cpu_data[];
++#define boot_cpu_data cpu_data[0]
++#define current_cpu_data cpu_data[smp_processor_id()]
++#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
+
-+#endif /* __ASM_BFIN_CPLB_MPU_H */
-diff --git a/include/asm-blackfin/cplb.h b/include/asm-blackfin/cplb.h
-index 06828d7..654375c 100644
---- a/include/asm-blackfin/cplb.h
-+++ b/include/asm-blackfin/cplb.h
-@@ -65,7 +65,11 @@
- #define SIZE_1M 0x00100000 /* 1M */
- #define SIZE_4M 0x00400000 /* 4M */
-
-+#ifdef CONFIG_MPU
-+#define MAX_CPLBS 16
-+#else
- #define MAX_CPLBS (16 * 2)
+#endif
-
- #define ASYNC_MEMORY_CPLB_COVERAGE ((ASYNC_BANK0_SIZE + ASYNC_BANK1_SIZE + \
- ASYNC_BANK2_SIZE + ASYNC_BANK3_SIZE) / SIZE_4M)
-diff --git a/include/asm-blackfin/cplbinit.h b/include/asm-blackfin/cplbinit.h
-index c4d0596..0eb1c1b 100644
---- a/include/asm-blackfin/cplbinit.h
-+++ b/include/asm-blackfin/cplbinit.h
-@@ -33,6 +33,12 @@
- #include <asm/blackfin.h>
- #include <asm/cplb.h>
-
-+#ifdef CONFIG_MPU
+
-+#include <asm/cplb-mpu.h>
++/*
++ * User space process size: 2GB - 4k.
++ */
++#define TASK_SIZE 0x7ffff000UL
++
++/* This decides where the kernel will search for a free chunk of vm
++ * space during mmap's.
++ */
++#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
++/*
++ * Bit of SR register
++ *
++ * FD-bit:
++ * When it's set, it means the processor doesn't have right to use FPU,
++ * and it results exception when the floating operation is executed.
++ *
++ * IMASK-bit:
++ * Interrupt level mask
++ *
++ * STEP-bit:
++ * Single step bit
++ *
++ */
++#if defined(CONFIG_SH64_SR_WATCH)
++#define SR_MMU 0x84000000
+#else
++#define SR_MMU 0x80000000
++#endif
+
- #define INITIAL_T 0x1
- #define SWITCH_T 0x2
- #define I_CPLB 0x4
-@@ -79,6 +85,8 @@ extern u_long ipdt_swapcount_table[];
- extern u_long dpdt_swapcount_table[];
- #endif
-
-+#endif /* CONFIG_MPU */
++#define SR_IMASK 0x000000f0
++#define SR_SSTEP 0x08000000
+
- extern unsigned long reserved_mem_dcache_on;
- extern unsigned long reserved_mem_icache_on;
-
-diff --git a/include/asm-blackfin/dma.h b/include/asm-blackfin/dma.h
-index b469505..5abaa2c 100644
---- a/include/asm-blackfin/dma.h
-+++ b/include/asm-blackfin/dma.h
-@@ -76,6 +76,9 @@ enum dma_chan_status {
- #define INTR_ON_BUF 2
- #define INTR_ON_ROW 3
-
-+#define DMA_NOSYNC_KEEP_DMA_BUF 0
-+#define DMA_SYNC_RESTART 1
++#ifndef __ASSEMBLY__
+
- struct dmasg {
- unsigned long next_desc_addr;
- unsigned long start_addr;
-@@ -157,7 +160,8 @@ void set_dma_y_count(unsigned int channel, unsigned short y_count);
- void set_dma_y_modify(unsigned int channel, short y_modify);
- void set_dma_config(unsigned int channel, unsigned short config);
- unsigned short set_bfin_dma_config(char direction, char flow_mode,
-- char intr_mode, char dma_mode, char width);
-+ char intr_mode, char dma_mode, char width,
-+ char syncmode);
- void set_dma_curr_addr(unsigned int channel, unsigned long addr);
-
- /* get curr status for polling */
-diff --git a/include/asm-blackfin/gpio.h b/include/asm-blackfin/gpio.h
-index 33ce98e..d0426c1 100644
---- a/include/asm-blackfin/gpio.h
-+++ b/include/asm-blackfin/gpio.h
-@@ -7,7 +7,7 @@
- * Description:
- *
- * Modified:
-- * Copyright 2004-2006 Analog Devices Inc.
-+ * Copyright 2004-2008 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
-@@ -304,39 +304,39 @@
- **************************************************************/
-
- #ifndef BF548_FAMILY
--void set_gpio_dir(unsigned short, unsigned short);
--void set_gpio_inen(unsigned short, unsigned short);
--void set_gpio_polar(unsigned short, unsigned short);
--void set_gpio_edge(unsigned short, unsigned short);
--void set_gpio_both(unsigned short, unsigned short);
--void set_gpio_data(unsigned short, unsigned short);
--void set_gpio_maska(unsigned short, unsigned short);
--void set_gpio_maskb(unsigned short, unsigned short);
--void set_gpio_toggle(unsigned short);
--void set_gpiop_dir(unsigned short, unsigned short);
--void set_gpiop_inen(unsigned short, unsigned short);
--void set_gpiop_polar(unsigned short, unsigned short);
--void set_gpiop_edge(unsigned short, unsigned short);
--void set_gpiop_both(unsigned short, unsigned short);
--void set_gpiop_data(unsigned short, unsigned short);
--void set_gpiop_maska(unsigned short, unsigned short);
--void set_gpiop_maskb(unsigned short, unsigned short);
--unsigned short get_gpio_dir(unsigned short);
--unsigned short get_gpio_inen(unsigned short);
--unsigned short get_gpio_polar(unsigned short);
--unsigned short get_gpio_edge(unsigned short);
--unsigned short get_gpio_both(unsigned short);
--unsigned short get_gpio_maska(unsigned short);
--unsigned short get_gpio_maskb(unsigned short);
--unsigned short get_gpio_data(unsigned short);
--unsigned short get_gpiop_dir(unsigned short);
--unsigned short get_gpiop_inen(unsigned short);
--unsigned short get_gpiop_polar(unsigned short);
--unsigned short get_gpiop_edge(unsigned short);
--unsigned short get_gpiop_both(unsigned short);
--unsigned short get_gpiop_maska(unsigned short);
--unsigned short get_gpiop_maskb(unsigned short);
--unsigned short get_gpiop_data(unsigned short);
-+void set_gpio_dir(unsigned, unsigned short);
-+void set_gpio_inen(unsigned, unsigned short);
-+void set_gpio_polar(unsigned, unsigned short);
-+void set_gpio_edge(unsigned, unsigned short);
-+void set_gpio_both(unsigned, unsigned short);
-+void set_gpio_data(unsigned, unsigned short);
-+void set_gpio_maska(unsigned, unsigned short);
-+void set_gpio_maskb(unsigned, unsigned short);
-+void set_gpio_toggle(unsigned);
-+void set_gpiop_dir(unsigned, unsigned short);
-+void set_gpiop_inen(unsigned, unsigned short);
-+void set_gpiop_polar(unsigned, unsigned short);
-+void set_gpiop_edge(unsigned, unsigned short);
-+void set_gpiop_both(unsigned, unsigned short);
-+void set_gpiop_data(unsigned, unsigned short);
-+void set_gpiop_maska(unsigned, unsigned short);
-+void set_gpiop_maskb(unsigned, unsigned short);
-+unsigned short get_gpio_dir(unsigned);
-+unsigned short get_gpio_inen(unsigned);
-+unsigned short get_gpio_polar(unsigned);
-+unsigned short get_gpio_edge(unsigned);
-+unsigned short get_gpio_both(unsigned);
-+unsigned short get_gpio_maska(unsigned);
-+unsigned short get_gpio_maskb(unsigned);
-+unsigned short get_gpio_data(unsigned);
-+unsigned short get_gpiop_dir(unsigned);
-+unsigned short get_gpiop_inen(unsigned);
-+unsigned short get_gpiop_polar(unsigned);
-+unsigned short get_gpiop_edge(unsigned);
-+unsigned short get_gpiop_both(unsigned);
-+unsigned short get_gpiop_maska(unsigned);
-+unsigned short get_gpiop_maskb(unsigned);
-+unsigned short get_gpiop_data(unsigned);
-
- struct gpio_port_t {
- unsigned short data;
-@@ -382,8 +382,8 @@ struct gpio_port_t {
- #define PM_WAKE_LOW 0x8
- #define PM_WAKE_BOTH_EDGES (PM_WAKE_RISING | PM_WAKE_FALLING)
-
--int gpio_pm_wakeup_request(unsigned short gpio, unsigned char type);
--void gpio_pm_wakeup_free(unsigned short gpio);
-+int gpio_pm_wakeup_request(unsigned gpio, unsigned char type);
-+void gpio_pm_wakeup_free(unsigned gpio);
- unsigned int gpio_pm_setup(void);
- void gpio_pm_restore(void);
-
-@@ -426,19 +426,19 @@ struct gpio_port_s {
- * MODIFICATION HISTORY :
- **************************************************************/
-
--int gpio_request(unsigned short, const char *);
--void gpio_free(unsigned short);
-+int gpio_request(unsigned, const char *);
-+void gpio_free(unsigned);
-
--void gpio_set_value(unsigned short gpio, unsigned short arg);
--unsigned short gpio_get_value(unsigned short gpio);
-+void gpio_set_value(unsigned gpio, int arg);
-+int gpio_get_value(unsigned gpio);
-
- #ifndef BF548_FAMILY
- #define gpio_get_value(gpio) get_gpio_data(gpio)
- #define gpio_set_value(gpio, value) set_gpio_data(gpio, value)
- #endif
-
--void gpio_direction_input(unsigned short gpio);
--void gpio_direction_output(unsigned short gpio);
-+int gpio_direction_input(unsigned gpio);
-+int gpio_direction_output(unsigned gpio, int value);
-
- #include <asm-generic/gpio.h> /* cansleep wrappers */
- #include <asm/irq.h>
-diff --git a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
-index 0b867e6..15dbc21 100644
---- a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
-+++ b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
-@@ -146,7 +146,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
-
- if (uart->rts_pin >= 0) {
- gpio_request(uart->rts_pin, DRIVER_NAME);
-- gpio_direction_output(uart->rts_pin);
-+ gpio_direction_output(uart->rts_pin, 0);
- }
- #endif
- }
-diff --git a/include/asm-blackfin/mach-bf527/portmux.h b/include/asm-blackfin/mach-bf527/portmux.h
-index dcf001a..ae4d205 100644
---- a/include/asm-blackfin/mach-bf527/portmux.h
-+++ b/include/asm-blackfin/mach-bf527/portmux.h
-@@ -1,6 +1,8 @@
- #ifndef _MACH_PORTMUX_H_
- #define _MACH_PORTMUX_H_
-
-+#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++/*
++ * FPU structure and data : require 8-byte alignment as we need to access it
++ with fld.p, fst.p
++ */
+
- #define P_PPI0_D0 (P_DEFINED | P_IDENT(GPIO_PF0) | P_FUNCT(0))
- #define P_PPI0_D1 (P_DEFINED | P_IDENT(GPIO_PF1) | P_FUNCT(0))
- #define P_PPI0_D2 (P_DEFINED | P_IDENT(GPIO_PF2) | P_FUNCT(0))
-diff --git a/include/asm-blackfin/mach-bf533/anomaly.h b/include/asm-blackfin/mach-bf533/anomaly.h
-index f36ff5a..98209d4 100644
---- a/include/asm-blackfin/mach-bf533/anomaly.h
-+++ b/include/asm-blackfin/mach-bf533/anomaly.h
-@@ -7,9 +7,7 @@
- */
-
- /* This file shoule be up to date with:
-- * - Revision X, March 23, 2007; ADSP-BF533 Blackfin Processor Anomaly List
-- * - Revision AB, March 23, 2007; ADSP-BF532 Blackfin Processor Anomaly List
-- * - Revision W, March 23, 2007; ADSP-BF531 Blackfin Processor Anomaly List
-+ * - Revision B, 12/10/2007; ADSP-BF531/BF532/BF533 Blackfin Processor Anomaly List
- */
-
- #ifndef _MACH_ANOMALY_H_
-@@ -17,7 +15,7 @@
-
- /* We do not support 0.1 or 0.2 silicon - sorry */
- #if __SILICON_REVISION__ < 3
--# error Kernel will not work on BF533 silicon version 0.0, 0.1, or 0.2
-+# error will not work on BF533 silicon version 0.0, 0.1, or 0.2
- #endif
-
- #if defined(__ADSPBF531__)
-@@ -251,6 +249,12 @@
- #define ANOMALY_05000192 (__SILICON_REVISION__ < 3)
- /* Internal Voltage Regulator may not start up */
- #define ANOMALY_05000206 (__SILICON_REVISION__ < 3)
-+/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
-+#define ANOMALY_05000357 (1)
-+/* PPI Underflow Error Goes Undetected in ITU-R 656 Mode */
-+#define ANOMALY_05000366 (1)
-+/* Possible RETS Register Corruption when Subroutine Is under 5 Cycles in Duration */
-+#define ANOMALY_05000371 (1)
-
- /* Anomalies that don't exist on this proc */
- #define ANOMALY_05000266 (0)
-diff --git a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
-index 69b9f8e..7871d43 100644
---- a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
-+++ b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
-@@ -111,7 +111,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
- }
- if (uart->rts_pin >= 0) {
- gpio_request(uart->rts_pin, DRIVER_NAME);
-- gpio_direction_input(uart->rts_pin);
-+ gpio_direction_input(uart->rts_pin, 0);
- }
- #endif
- }
-diff --git a/include/asm-blackfin/mach-bf533/portmux.h b/include/asm-blackfin/mach-bf533/portmux.h
-index 137f488..685a265 100644
---- a/include/asm-blackfin/mach-bf533/portmux.h
-+++ b/include/asm-blackfin/mach-bf533/portmux.h
-@@ -1,6 +1,8 @@
- #ifndef _MACH_PORTMUX_H_
- #define _MACH_PORTMUX_H_
-
-+#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++struct sh_fpu_hard_struct {
++ unsigned long fp_regs[64];
++ unsigned int fpscr;
++ /* long status; * software status information */
++};
+
- #define P_PPI0_CLK (P_DONTCARE)
- #define P_PPI0_FS1 (P_DONTCARE)
- #define P_PPI0_FS2 (P_DONTCARE)
-diff --git a/include/asm-blackfin/mach-bf537/anomaly.h b/include/asm-blackfin/mach-bf537/anomaly.h
-index 2b66ecf..746a794 100644
---- a/include/asm-blackfin/mach-bf537/anomaly.h
-+++ b/include/asm-blackfin/mach-bf537/anomaly.h
-@@ -7,9 +7,7 @@
- */
-
- /* This file shoule be up to date with:
-- * - Revision M, March 13, 2007; ADSP-BF537 Blackfin Processor Anomaly List
-- * - Revision L, March 13, 2007; ADSP-BF536 Blackfin Processor Anomaly List
-- * - Revision M, March 13, 2007; ADSP-BF534 Blackfin Processor Anomaly List
-+ * - Revision A, 09/04/2007; ADSP-BF534/ADSP-BF536/ADSP-BF537 Blackfin Processor Anomaly List
- */
-
- #ifndef _MACH_ANOMALY_H_
-@@ -17,7 +15,7 @@
-
- /* We do not support 0.1 silicon - sorry */
- #if __SILICON_REVISION__ < 2
--# error Kernel will not work on BF537 silicon version 0.0 or 0.1
-+# error will not work on BF537 silicon version 0.0 or 0.1
- #endif
-
- #if defined(__ADSPBF534__)
-@@ -44,6 +42,8 @@
- #define ANOMALY_05000122 (1)
- /* Killed 32-bit MMR write leads to next system MMR access thinking it should be 32-bit */
- #define ANOMALY_05000157 (__SILICON_REVISION__ < 2)
-+/* Turning SPORTs on while External Frame Sync Is Active May Corrupt Data */
-+#define ANOMALY_05000167 (1)
- /* PPI_DELAY not functional in PPI modes with 0 frame syncs */
- #define ANOMALY_05000180 (1)
- /* Instruction Cache Is Not Functional */
-@@ -130,6 +130,12 @@
- #define ANOMALY_05000321 (__SILICON_REVISION__ < 3)
- /* EMAC RMII mode at 10-Base-T speed: RX frames not received properly */
- #define ANOMALY_05000322 (1)
-+/* Ethernet MAC MDIO Reads Do Not Meet IEEE Specification */
-+#define ANOMALY_05000341 (__SILICON_REVISION__ >= 3)
-+/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
-+#define ANOMALY_05000357 (1)
-+/* DMAs that Go Urgent during Tight Core Writes to External Memory Are Blocked */
-+#define ANOMALY_05000359 (1)
-
- /* Anomalies that don't exist on this proc */
- #define ANOMALY_05000125 (0)
-diff --git a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
-index 6fb328f..86e45c3 100644
---- a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
-+++ b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
-@@ -146,7 +146,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
-
- if (uart->rts_pin >= 0) {
- gpio_request(uart->rts_pin, DRIVER_NAME);
-- gpio_direction_output(uart->rts_pin);
-+ gpio_direction_output(uart->rts_pin, 0);
- }
- #endif
- }
-diff --git a/include/asm-blackfin/mach-bf537/portmux.h b/include/asm-blackfin/mach-bf537/portmux.h
-index 5a3f7d3..78fee6e 100644
---- a/include/asm-blackfin/mach-bf537/portmux.h
-+++ b/include/asm-blackfin/mach-bf537/portmux.h
-@@ -1,6 +1,8 @@
- #ifndef _MACH_PORTMUX_H_
- #define _MACH_PORTMUX_H_
-
-+#define MAX_RESOURCES (MAX_BLACKFIN_GPIOS + GPIO_BANKSIZE) /* We additionally handle PORTJ */
++#if 0
++/* Dummy fpu emulator */
++struct sh_fpu_soft_struct {
++ unsigned long long fp_regs[32];
++ unsigned int fpscr;
++ unsigned char lookahead;
++ unsigned long entry_pc;
++};
++#endif
+
- #define P_UART0_TX (P_DEFINED | P_IDENT(GPIO_PF0) | P_FUNCT(0))
- #define P_UART0_RX (P_DEFINED | P_IDENT(GPIO_PF1) | P_FUNCT(0))
- #define P_UART1_TX (P_DEFINED | P_IDENT(GPIO_PF2) | P_FUNCT(0))
-diff --git a/include/asm-blackfin/mach-bf548/anomaly.h b/include/asm-blackfin/mach-bf548/anomaly.h
-index c5b6375..850dc12 100644
---- a/include/asm-blackfin/mach-bf548/anomaly.h
-+++ b/include/asm-blackfin/mach-bf548/anomaly.h
-@@ -7,7 +7,7 @@
- */
-
- /* This file shoule be up to date with:
-- * - Revision C, July 16, 2007; ADSP-BF549 Silicon Anomaly List
-+ * - Revision E, 11/28/2007; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List
- */
-
- #ifndef _MACH_ANOMALY_H_
-@@ -26,47 +26,59 @@
- /* Certain Data Cache Writethrough Modes Fail for Vddint <= 0.9V */
- #define ANOMALY_05000272 (1)
- /* False Hardware Error Exception when ISR context is not restored */
--#define ANOMALY_05000281 (1)
-+#define ANOMALY_05000281 (__SILICON_REVISION__ < 1)
- /* SSYNCs After Writes To CAN/DMA MMR Registers Are Not Always Handled Correctly */
--#define ANOMALY_05000304 (1)
-+#define ANOMALY_05000304 (__SILICON_REVISION__ < 1)
- /* False Hardware Errors Caused by Fetches at the Boundary of Reserved Memory */
- #define ANOMALY_05000310 (1)
- /* Errors When SSYNC, CSYNC, or Loads to LT, LB and LC Registers Are Interrupted */
--#define ANOMALY_05000312 (1)
-+#define ANOMALY_05000312 (__SILICON_REVISION__ < 1)
- /* TWI Slave Boot Mode Is Not Functional */
--#define ANOMALY_05000324 (1)
-+#define ANOMALY_05000324 (__SILICON_REVISION__ < 1)
- /* External FIFO Boot Mode Is Not Functional */
--#define ANOMALY_05000325 (1)
-+#define ANOMALY_05000325 (__SILICON_REVISION__ < 1)
- /* Data Lost When Core and DMA Accesses Are Made to the USB FIFO Simultaneously */
--#define ANOMALY_05000327 (1)
-+#define ANOMALY_05000327 (__SILICON_REVISION__ < 1)
- /* Incorrect Access of OTP_STATUS During otp_write() Function */
--#define ANOMALY_05000328 (1)
-+#define ANOMALY_05000328 (__SILICON_REVISION__ < 1)
- /* Synchronous Burst Flash Boot Mode Is Not Functional */
--#define ANOMALY_05000329 (1)
-+#define ANOMALY_05000329 (__SILICON_REVISION__ < 1)
- /* Host DMA Boot Mode Is Not Functional */
--#define ANOMALY_05000330 (1)
-+#define ANOMALY_05000330 (__SILICON_REVISION__ < 1)
- /* Inadequate Timing Margins on DDR DQS to DQ and DQM Skew */
--#define ANOMALY_05000334 (1)
-+#define ANOMALY_05000334 (__SILICON_REVISION__ < 1)
- /* Inadequate Rotary Debounce Logic Duration */
--#define ANOMALY_05000335 (1)
-+#define ANOMALY_05000335 (__SILICON_REVISION__ < 1)
- /* Phantom Interrupt Occurs After First Configuration of Host DMA Port */
--#define ANOMALY_05000336 (1)
-+#define ANOMALY_05000336 (__SILICON_REVISION__ < 1)
- /* Disallowed Configuration Prevents Subsequent Allowed Configuration on Host DMA Port */
--#define ANOMALY_05000337 (1)
-+#define ANOMALY_05000337 (__SILICON_REVISION__ < 1)
- /* Slave-Mode SPI0 MISO Failure With CPHA = 0 */
--#define ANOMALY_05000338 (1)
-+#define ANOMALY_05000338 (__SILICON_REVISION__ < 1)
- /* If Memory Reads Are Enabled on SDH or HOSTDP, Other DMAC1 Peripherals Cannot Read */
--#define ANOMALY_05000340 (1)
-+#define ANOMALY_05000340 (__SILICON_REVISION__ < 1)
- /* Boot Host Wait (HWAIT) and Boot Host Wait Alternate (HWAITA) Signals Are Swapped */
--#define ANOMALY_05000344 (1)
-+#define ANOMALY_05000344 (__SILICON_REVISION__ < 1)
- /* USB Calibration Value Is Not Intialized */
--#define ANOMALY_05000346 (1)
-+#define ANOMALY_05000346 (__SILICON_REVISION__ < 1)
- /* Boot ROM Kernel Incorrectly Alters Reset Value of USB Register */
--#define ANOMALY_05000347 (1)
-+#define ANOMALY_05000347 (__SILICON_REVISION__ < 1)
- /* Data Lost when Core Reads SDH Data FIFO */
--#define ANOMALY_05000349 (1)
-+#define ANOMALY_05000349 (__SILICON_REVISION__ < 1)
- /* PLL Status Register Is Inaccurate */
--#define ANOMALY_05000351 (1)
-+#define ANOMALY_05000351 (__SILICON_REVISION__ < 1)
-+/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
-+#define ANOMALY_05000357 (1)
-+/* External Memory Read Access Hangs Core With PLL Bypass */
-+#define ANOMALY_05000360 (1)
-+/* DMAs that Go Urgent during Tight Core Writes to External Memory Are Blocked */
-+#define ANOMALY_05000365 (1)
-+/* Addressing Conflict between Boot ROM and Asynchronous Memory */
-+#define ANOMALY_05000369 (1)
-+/* Mobile DDR Operation Not Functional */
-+#define ANOMALY_05000377 (1)
-+/* Security/Authentication Speedpath Causes Authentication To Fail To Initiate */
-+#define ANOMALY_05000378 (1)
-
- /* Anomalies that don't exist on this proc */
- #define ANOMALY_05000125 (0)
-diff --git a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
-index f21a162..3770aa3 100644
---- a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
-+++ b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
-@@ -186,7 +186,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
-
- if (uart->rts_pin >= 0) {
- gpio_request(uart->rts_pin, DRIVER_NAME);
-- gpio_direction_output(uart->rts_pin);
-+ gpio_direction_output(uart->rts_pin, 0);
- }
- #endif
- }
-diff --git a/include/asm-blackfin/mach-bf548/cdefBF54x_base.h b/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
-index aefab3f..19ddcd8 100644
---- a/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
-+++ b/include/asm-blackfin/mach-bf548/cdefBF54x_base.h
-@@ -244,39 +244,6 @@ static __inline__ void bfin_write_VR_CTL(unsigned int val)
- #define bfin_read_TWI0_RCV_DATA16() bfin_read16(TWI0_RCV_DATA16)
- #define bfin_write_TWI0_RCV_DATA16(val) bfin_write16(TWI0_RCV_DATA16, val)
-
--#define bfin_read_TWI_CLKDIV() bfin_read16(TWI0_CLKDIV)
--#define bfin_write_TWI_CLKDIV(val) bfin_write16(TWI0_CLKDIV, val)
--#define bfin_read_TWI_CONTROL() bfin_read16(TWI0_CONTROL)
--#define bfin_write_TWI_CONTROL(val) bfin_write16(TWI0_CONTROL, val)
--#define bfin_read_TWI_SLAVE_CTRL() bfin_read16(TWI0_SLAVE_CTRL)
--#define bfin_write_TWI_SLAVE_CTRL(val) bfin_write16(TWI0_SLAVE_CTRL, val)
--#define bfin_read_TWI_SLAVE_STAT() bfin_read16(TWI0_SLAVE_STAT)
--#define bfin_write_TWI_SLAVE_STAT(val) bfin_write16(TWI0_SLAVE_STAT, val)
--#define bfin_read_TWI_SLAVE_ADDR() bfin_read16(TWI0_SLAVE_ADDR)
--#define bfin_write_TWI_SLAVE_ADDR(val) bfin_write16(TWI0_SLAVE_ADDR, val)
--#define bfin_read_TWI_MASTER_CTL() bfin_read16(TWI0_MASTER_CTRL)
--#define bfin_write_TWI_MASTER_CTL(val) bfin_write16(TWI0_MASTER_CTRL, val)
--#define bfin_read_TWI_MASTER_STAT() bfin_read16(TWI0_MASTER_STAT)
--#define bfin_write_TWI_MASTER_STAT(val) bfin_write16(TWI0_MASTER_STAT, val)
--#define bfin_read_TWI_MASTER_ADDR() bfin_read16(TWI0_MASTER_ADDR)
--#define bfin_write_TWI_MASTER_ADDR(val) bfin_write16(TWI0_MASTER_ADDR, val)
--#define bfin_read_TWI_INT_STAT() bfin_read16(TWI0_INT_STAT)
--#define bfin_write_TWI_INT_STAT(val) bfin_write16(TWI0_INT_STAT, val)
--#define bfin_read_TWI_INT_MASK() bfin_read16(TWI0_INT_MASK)
--#define bfin_write_TWI_INT_MASK(val) bfin_write16(TWI0_INT_MASK, val)
--#define bfin_read_TWI_FIFO_CTL() bfin_read16(TWI0_FIFO_CTRL)
--#define bfin_write_TWI_FIFO_CTL(val) bfin_write16(TWI0_FIFO_CTRL, val)
--#define bfin_read_TWI_FIFO_STAT() bfin_read16(TWI0_FIFO_STAT)
--#define bfin_write_TWI_FIFO_STAT(val) bfin_write16(TWI0_FIFO_STAT, val)
--#define bfin_read_TWI_XMT_DATA8() bfin_read16(TWI0_XMT_DATA8)
--#define bfin_write_TWI_XMT_DATA8(val) bfin_write16(TWI0_XMT_DATA8, val)
--#define bfin_read_TWI_XMT_DATA16() bfin_read16(TWI0_XMT_DATA16)
--#define bfin_write_TWI_XMT_DATA16(val) bfin_write16(TWI0_XMT_DATA16, val)
--#define bfin_read_TWI_RCV_DATA8() bfin_read16(TWI0_RCV_DATA8)
--#define bfin_write_TWI_RCV_DATA8(val) bfin_write16(TWI0_RCV_DATA8, val)
--#define bfin_read_TWI_RCV_DATA16() bfin_read16(TWI0_RCV_DATA16)
--#define bfin_write_TWI_RCV_DATA16(val) bfin_write16(TWI0_RCV_DATA16, val)
--
- /* SPORT0 is not defined in the shared file because it is not available on the ADSP-BF542 and ADSP-BF544 bfin_read_()rocessors */
-
- /* SPORT1 Registers */
-diff --git a/include/asm-blackfin/mach-bf548/defBF542.h b/include/asm-blackfin/mach-bf548/defBF542.h
-index 32d0713..a7c809f 100644
---- a/include/asm-blackfin/mach-bf548/defBF542.h
-+++ b/include/asm-blackfin/mach-bf548/defBF542.h
-@@ -432,8 +432,8 @@
-
- #define CMD_CRC_FAIL 0x1 /* CMD CRC Fail */
- #define DAT_CRC_FAIL 0x2 /* Data CRC Fail */
--#define CMD_TIMEOUT 0x4 /* CMD Time Out */
--#define DAT_TIMEOUT 0x8 /* Data Time Out */
-+#define CMD_TIME_OUT 0x4 /* CMD Time Out */
-+#define DAT_TIME_OUT 0x8 /* Data Time Out */
- #define TX_UNDERRUN 0x10 /* Transmit Underrun */
- #define RX_OVERRUN 0x20 /* Receive Overrun */
- #define CMD_RESP_END 0x40 /* CMD Response End */
-diff --git a/include/asm-blackfin/mach-bf548/defBF548.h b/include/asm-blackfin/mach-bf548/defBF548.h
-index ecbca95..e46f568 100644
---- a/include/asm-blackfin/mach-bf548/defBF548.h
-+++ b/include/asm-blackfin/mach-bf548/defBF548.h
-@@ -1095,8 +1095,8 @@
-
- #define CMD_CRC_FAIL 0x1 /* CMD CRC Fail */
- #define DAT_CRC_FAIL 0x2 /* Data CRC Fail */
--#define CMD_TIMEOUT 0x4 /* CMD Time Out */
--#define DAT_TIMEOUT 0x8 /* Data Time Out */
-+#define CMD_TIME_OUT 0x4 /* CMD Time Out */
-+#define DAT_TIME_OUT 0x8 /* Data Time Out */
- #define TX_UNDERRUN 0x10 /* Transmit Underrun */
- #define RX_OVERRUN 0x20 /* Receive Overrun */
- #define CMD_RESP_END 0x40 /* CMD Response End */
-diff --git a/include/asm-blackfin/mach-bf548/defBF54x_base.h b/include/asm-blackfin/mach-bf548/defBF54x_base.h
-index 319a485..08f90c2 100644
---- a/include/asm-blackfin/mach-bf548/defBF54x_base.h
-+++ b/include/asm-blackfin/mach-bf548/defBF54x_base.h
-@@ -1772,17 +1772,36 @@
- #define TRP 0x3c0000 /* Pre charge-to-active command period */
- #define TRAS 0x3c00000 /* Min Active-to-pre charge time */
- #define TRC 0x3c000000 /* Active-to-active time */
-+#define DDR_TRAS(x) ((x<<22)&TRAS) /* DDR tRAS = (1~15) cycles */
-+#define DDR_TRP(x) ((x<<18)&TRP) /* DDR tRP = (1~15) cycles */
-+#define DDR_TRC(x) ((x<<26)&TRC) /* DDR tRC = (1~15) cycles */
-+#define DDR_TRFC(x) ((x<<14)&TRFC) /* DDR tRFC = (1~15) cycles */
-+#define DDR_TREFI(x) (x&TREFI) /* DDR tRFC = (1~15) cycles */
-
- /* Bit masks for EBIU_DDRCTL1 */
-
- #define TRCD 0xf /* Active-to-Read/write delay */
--#define MRD 0xf0 /* Mode register set to active */
-+#define TMRD 0xf0 /* Mode register set to active */
- #define TWR 0x300 /* Write Recovery time */
- #define DDRDATWIDTH 0x3000 /* DDR data width */
- #define EXTBANKS 0xc000 /* External banks */
- #define DDRDEVWIDTH 0x30000 /* DDR device width */
- #define DDRDEVSIZE 0xc0000 /* DDR device size */
--#define TWWTR 0xf0000000 /* Write-to-read delay */
-+#define TWTR 0xf0000000 /* Write-to-read delay */
-+#define DDR_TWTR(x) ((x<<28)&TWTR) /* DDR tWTR = (1~15) cycles */
-+#define DDR_TMRD(x) ((x<<4)&TMRD) /* DDR tMRD = (1~15) cycles */
-+#define DDR_TWR(x) ((x<<8)&TWR) /* DDR tWR = (1~15) cycles */
-+#define DDR_TRCD(x) (x&TRCD) /* DDR tRCD = (1~15) cycles */
-+#define DDR_DATWIDTH 0x2000 /* DDR data width */
-+#define EXTBANK_1 0 /* 1 external bank */
-+#define EXTBANK_2 0x4000 /* 2 external banks */
-+#define DEVSZ_64 0x40000 /* DDR External Bank Size = 64MB */
-+#define DEVSZ_128 0x80000 /* DDR External Bank Size = 128MB */
-+#define DEVSZ_256 0xc0000 /* DDR External Bank Size = 256MB */
-+#define DEVSZ_512 0 /* DDR External Bank Size = 512MB */
-+#define DEVWD_4 0 /* DDR Device Width = 4 Bits */
-+#define DEVWD_8 0x10000 /* DDR Device Width = 8 Bits */
-+#define DEVWD_16 0x20000 /* DDR Device Width = 16 Bits */
-
- /* Bit masks for EBIU_DDRCTL2 */
-
-@@ -1790,6 +1809,10 @@
- #define CASLATENCY 0x70 /* CAS latency */
- #define DLLRESET 0x100 /* DLL Reset */
- #define REGE 0x1000 /* Register mode enable */
-+#define CL_1_5 0x50 /* DDR CAS Latency = 1.5 cycles */
-+#define CL_2 0x20 /* DDR CAS Latency = 2 cycles */
-+#define CL_2_5 0x60 /* DDR CAS Latency = 2.5 cycles */
-+#define CL_3 0x30 /* DDR CAS Latency = 3 cycles */
-
- /* Bit masks for EBIU_DDRCTL3 */
-
-@@ -2257,6 +2280,10 @@
-
- #define CSEL 0x30 /* Core Select */
- #define SSEL 0xf /* System Select */
-+#define CSEL_DIV1 0x0000 /* CCLK = VCO / 1 */
-+#define CSEL_DIV2 0x0010 /* CCLK = VCO / 2 */
-+#define CSEL_DIV4 0x0020 /* CCLK = VCO / 4 */
-+#define CSEL_DIV8 0x0030 /* CCLK = VCO / 8 */
-
- /* Bit masks for PLL_CTL */
-
-diff --git a/include/asm-blackfin/mach-bf548/irq.h b/include/asm-blackfin/mach-bf548/irq.h
-index 9fb7bc5..c34507a 100644
---- a/include/asm-blackfin/mach-bf548/irq.h
-+++ b/include/asm-blackfin/mach-bf548/irq.h
-@@ -88,7 +88,7 @@ Events (highest priority) EMU 0
- #define IRQ_PINT1 BFIN_IRQ(20) /* PINT1 Interrupt */
- #define IRQ_MDMAS0 BFIN_IRQ(21) /* MDMA Stream 0 Interrupt */
- #define IRQ_MDMAS1 BFIN_IRQ(22) /* MDMA Stream 1 Interrupt */
--#define IRQ_WATCHDOG BFIN_IRQ(23) /* Watchdog Interrupt */
-+#define IRQ_WATCH BFIN_IRQ(23) /* Watchdog Interrupt */
- #define IRQ_DMAC1_ERROR BFIN_IRQ(24) /* DMAC1 Status (Error) Interrupt */
- #define IRQ_SPORT2_ERROR BFIN_IRQ(25) /* SPORT2 Error Interrupt */
- #define IRQ_SPORT3_ERROR BFIN_IRQ(26) /* SPORT3 Error Interrupt */
-@@ -406,7 +406,7 @@ Events (highest priority) EMU 0
- #define IRQ_PINT1_POS 16
- #define IRQ_MDMAS0_POS 20
- #define IRQ_MDMAS1_POS 24
--#define IRQ_WATCHDOG_POS 28
-+#define IRQ_WATCH_POS 28
-
- /* IAR3 BIT FIELDS */
- #define IRQ_DMAC1_ERR_POS 0
-diff --git a/include/asm-blackfin/mach-bf548/mem_init.h b/include/asm-blackfin/mach-bf548/mem_init.h
-index 0cb279e..befc290 100644
---- a/include/asm-blackfin/mach-bf548/mem_init.h
-+++ b/include/asm-blackfin/mach-bf548/mem_init.h
-@@ -28,8 +28,68 @@
- * If not, write to the Free Software Foundation,
- * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
- */
-+#define MIN_DDR_SCLK(x) (x*(CONFIG_SCLK_HZ/1000/1000)/1000 + 1)
++union sh_fpu_union {
++ struct sh_fpu_hard_struct hard;
++ /* 'hard' itself only produces 32 bit alignment, yet we need
++ to access it using 64 bit load/store as well. */
++ unsigned long long alignment_dummy;
++};
++
++struct thread_struct {
++ unsigned long sp;
++ unsigned long pc;
++ /* This stores the address of the pt_regs built during a context
++ switch, or of the register save area built for a kernel mode
++ exception. It is used for backtracing the stack of a sleeping task
++ or one that traps in kernel mode. */
++ struct pt_regs *kregs;
++ /* This stores the address of the pt_regs constructed on entry from
++ user mode. It is a fixed value over the lifetime of a process, or
++ NULL for a kernel thread. */
++ struct pt_regs *uregs;
++
++ unsigned long trap_no, error_code;
++ unsigned long address;
++ /* Hardware debugging registers may come here */
++
++ /* floating point info */
++ union sh_fpu_union fpu;
++};
+
-+#if (CONFIG_MEM_MT46V32M16_6T)
-+#define DDR_SIZE DEVSZ_512
-+#define DDR_WIDTH DEVWD_16
++typedef struct {
++ unsigned long seg;
++} mm_segment_t;
+
-+#define DDR_tRC DDR_TRC(MIN_DDR_SCLK(60))
-+#define DDR_tRAS DDR_TRAS(MIN_DDR_SCLK(42))
-+#define DDR_tRP DDR_TRP(MIN_DDR_SCLK(15))
-+#define DDR_tRFC DDR_TRFC(MIN_DDR_SCLK(72))
-+#define DDR_tREFI DDR_TREFI(MIN_DDR_SCLK(7800))
++#define INIT_MMAP \
++{ &init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL }
+
-+#define DDR_tRCD DDR_TRCD(MIN_DDR_SCLK(15))
-+#define DDR_tWTR DDR_TWTR(1)
-+#define DDR_tMRD DDR_TMRD(MIN_DDR_SCLK(12))
-+#define DDR_tWR DDR_TWR(MIN_DDR_SCLK(15))
-+#endif
++extern struct pt_regs fake_swapper_regs;
+
-+#if (CONFIG_MEM_MT46V32M16_5B)
-+#define DDR_SIZE DEVSZ_512
-+#define DDR_WIDTH DEVWD_16
++#define INIT_THREAD { \
++ .sp = sizeof(init_stack) + \
++ (long) &init_stack, \
++ .pc = 0, \
++ .kregs = &fake_swapper_regs, \
++ .uregs = NULL, \
++ .trap_no = 0, \
++ .error_code = 0, \
++ .address = 0, \
++ .fpu = { { { 0, } }, } \
++}
+
-+#define DDR_tRC DDR_TRC(MIN_DDR_SCLK(55))
-+#define DDR_tRAS DDR_TRAS(MIN_DDR_SCLK(40))
-+#define DDR_tRP DDR_TRP(MIN_DDR_SCLK(15))
-+#define DDR_tRFC DDR_TRFC(MIN_DDR_SCLK(70))
-+#define DDR_tREFI DDR_TREFI(MIN_DDR_SCLK(7800))
++/*
++ * Do necessary setup to start up a newly executed thread.
++ */
++#define SR_USER (SR_MMU | SR_FD)
+
-+#define DDR_tRCD DDR_TRCD(MIN_DDR_SCLK(15))
-+#define DDR_tWTR DDR_TWTR(2)
-+#define DDR_tMRD DDR_TMRD(MIN_DDR_SCLK(10))
-+#define DDR_tWR DDR_TWR(MIN_DDR_SCLK(15))
-+#endif
++#define start_thread(regs, new_pc, new_sp) \
++ set_fs(USER_DS); \
++ regs->sr = SR_USER; /* User mode. */ \
++ regs->pc = new_pc - 4; /* Compensate syscall exit */ \
++ regs->pc |= 1; /* Set SHmedia ! */ \
++ regs->regs[18] = 0; \
++ regs->regs[15] = new_sp
+
-+#if (CONFIG_MEM_GENERIC_BOARD)
-+#define DDR_SIZE DEVSZ_512
-+#define DDR_WIDTH DEVWD_16
++/* Forward declaration, a strange C thing */
++struct task_struct;
++struct mm_struct;
+
-+#define DDR_tRCD DDR_TRCD(3)
-+#define DDR_tWTR DDR_TWTR(2)
-+#define DDR_tWR DDR_TWR(2)
-+#define DDR_tMRD DDR_TMRD(2)
-+#define DDR_tRP DDR_TRP(3)
-+#define DDR_tRAS DDR_TRAS(7)
-+#define DDR_tRC DDR_TRC(10)
-+#define DDR_tRFC DDR_TRFC(12)
-+#define DDR_tREFI DDR_TREFI(1288)
++/* Free all resources held by a thread. */
++extern void release_thread(struct task_struct *);
++/*
++ * create a kernel thread without removing it from tasklists
++ */
++extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
++
++
++/* Copy and release all segment info associated with a VM */
++#define copy_segments(p, mm) do { } while (0)
++#define release_segments(mm) do { } while (0)
++#define forget_segments() do { } while (0)
++#define prepare_to_copy(tsk) do { } while (0)
++/*
++ * FPU lazy state save handling.
++ */
++
++static inline void disable_fpu(void)
++{
++ unsigned long long __dummy;
++
++ /* Set FD flag in SR */
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "or %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy)
++ : "r" (SR_FD));
++}
++
++static inline void enable_fpu(void)
++{
++ unsigned long long __dummy;
++
++ /* Clear out FD flag in SR */
++ __asm__ __volatile__("getcon " __SR ", %0\n\t"
++ "and %0, %1, %0\n\t"
++ "putcon %0, " __SR "\n\t"
++ : "=&r" (__dummy)
++ : "r" (~SR_FD));
++}
++
++/* Round to nearest, no exceptions on inexact, overflow, underflow,
++ zero-divide, invalid. Configure option for whether to flush denorms to
++ zero, or except if a denorm is encountered. */
++#if defined(CONFIG_SH64_FPU_DENORM_FLUSH)
++#define FPSCR_INIT 0x00040000
++#else
++#define FPSCR_INIT 0x00000000
+#endif
+
-+#if (CONFIG_SCLK_HZ <= 133333333)
-+#define DDR_CL CL_2
-+#elif (CONFIG_SCLK_HZ <= 166666666)
-+#define DDR_CL CL_2_5
++#ifdef CONFIG_SH_FPU
++/* Initialise the FP state of a task */
++void fpinit(struct sh_fpu_hard_struct *fpregs);
+#else
-+#define DDR_CL CL_3
++#define fpinit(fpregs) do { } while (0)
+#endif
+
-+#define mem_DDRCTL0 (DDR_tRP | DDR_tRAS | DDR_tRC | DDR_tRFC | DDR_tREFI)
-+#define mem_DDRCTL1 (DDR_DATWIDTH | EXTBANK_1 | DDR_SIZE | DDR_WIDTH | DDR_tWTR \
-+ | DDR_tMRD | DDR_tWR | DDR_tRCD)
-+#define mem_DDRCTL2 DDR_CL
-
--#if (CONFIG_MEM_MT46V32M16)
-
- #if defined CONFIG_CLKIN_HALF
- #define CLKIN_HALF 1
-diff --git a/include/asm-blackfin/mach-bf548/portmux.h b/include/asm-blackfin/mach-bf548/portmux.h
-index 6b48512..8177a56 100644
---- a/include/asm-blackfin/mach-bf548/portmux.h
-+++ b/include/asm-blackfin/mach-bf548/portmux.h
-@@ -1,6 +1,8 @@
- #ifndef _MACH_PORTMUX_H_
- #define _MACH_PORTMUX_H_
-
-+#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++extern struct task_struct *last_task_used_math;
+
- #define P_SPORT2_TFS (P_DEFINED | P_IDENT(GPIO_PA0) | P_FUNCT(0))
- #define P_SPORT2_DTSEC (P_DEFINED | P_IDENT(GPIO_PA1) | P_FUNCT(0))
- #define P_SPORT2_DTPRI (P_DEFINED | P_IDENT(GPIO_PA2) | P_FUNCT(0))
-diff --git a/include/asm-blackfin/mach-bf561/anomaly.h b/include/asm-blackfin/mach-bf561/anomaly.h
-index bed9564..0c1d461 100644
---- a/include/asm-blackfin/mach-bf561/anomaly.h
-+++ b/include/asm-blackfin/mach-bf561/anomaly.h
-@@ -7,7 +7,7 @@
- */
-
- /* This file shoule be up to date with:
-- * - Revision N, March 28, 2007; ADSP-BF561 Silicon Anomaly List
-+ * - Revision O, 11/15/2007; ADSP-BF561 Blackfin Processor Anomaly List
- */
-
- #ifndef _MACH_ANOMALY_H_
-@@ -15,7 +15,7 @@
-
- /* We do not support 0.1, 0.2, or 0.4 silicon - sorry */
- #if __SILICON_REVISION__ < 3 || __SILICON_REVISION__ == 4
--# error Kernel will not work on BF561 silicon version 0.0, 0.1, 0.2, or 0.4
-+# error will not work on BF561 silicon version 0.0, 0.1, 0.2, or 0.4
- #endif
-
- /* Multi-Issue Instruction with dsp32shiftimm in slot1 and P-reg Store in slot 2 Not Supported */
-@@ -208,6 +208,8 @@
- #define ANOMALY_05000275 (__SILICON_REVISION__ > 2)
- /* Timing Requirements Change for External Frame Sync PPI Modes with Non-Zero PPI_DELAY */
- #define ANOMALY_05000276 (__SILICON_REVISION__ < 5)
-+/* Writes to an I/O data register one SCLK cycle after an edge is detected may clear interrupt */
-+#define ANOMALY_05000277 (__SILICON_REVISION__ < 3)
- /* Disabling Peripherals with DMA Running May Cause DMA System Instability */
- #define ANOMALY_05000278 (__SILICON_REVISION__ < 5)
- /* False Hardware Error Exception When ISR Context Is Not Restored */
-@@ -246,6 +248,18 @@
- #define ANOMALY_05000332 (__SILICON_REVISION__ < 5)
- /* Flag Data Register Writes One SCLK Cycle After Edge Is Detected May Clear Interrupt Status */
- #define ANOMALY_05000333 (__SILICON_REVISION__ < 5)
-+/* New Feature: Additional PPI Frame Sync Sampling Options (Not Available on Older Silicon) */
-+#define ANOMALY_05000339 (__SILICON_REVISION__ < 5)
-+/* Memory DMA FIFO Causes Throughput Degradation on Writes to External Memory */
-+#define ANOMALY_05000343 (__SILICON_REVISION__ < 5)
-+/* Serial Port (SPORT) Multichannel Transmit Failure when Channel 0 Is Disabled */
-+#define ANOMALY_05000357 (1)
-+/* Conflicting Column Address Widths Causes SDRAM Errors */
-+#define ANOMALY_05000362 (1)
-+/* PPI Underflow Error Goes Undetected in ITU-R 656 Mode */
-+#define ANOMALY_05000366 (1)
-+/* Possible RETS Register Corruption when Subroutine Is under 5 Cycles in Duration */
-+#define ANOMALY_05000371 (1)
-
- /* Anomalies that don't exist on this proc */
- #define ANOMALY_05000158 (0)
-diff --git a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
-index 69b9f8e..7871d43 100644
---- a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
-+++ b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
-@@ -111,7 +111,7 @@ static void bfin_serial_hw_init(struct bfin_serial_port *uart)
- }
- if (uart->rts_pin >= 0) {
- gpio_request(uart->rts_pin, DRIVER_NAME);
-- gpio_direction_input(uart->rts_pin);
-+ gpio_direction_input(uart->rts_pin, 0);
- }
- #endif
- }
-diff --git a/include/asm-blackfin/mach-bf561/portmux.h b/include/asm-blackfin/mach-bf561/portmux.h
-index 132ad31..a6ee820 100644
---- a/include/asm-blackfin/mach-bf561/portmux.h
-+++ b/include/asm-blackfin/mach-bf561/portmux.h
-@@ -1,6 +1,8 @@
- #ifndef _MACH_PORTMUX_H_
- #define _MACH_PORTMUX_H_
-
-+#define MAX_RESOURCES MAX_BLACKFIN_GPIOS
++/*
++ * Return saved PC of a blocked thread.
++ */
++#define thread_saved_pc(tsk) (tsk->thread.pc)
+
- #define P_PPI0_CLK (P_DONTCARE)
- #define P_PPI0_FS1 (P_DONTCARE)
- #define P_PPI0_FS2 (P_DONTCARE)
-diff --git a/include/asm-blackfin/mmu.h b/include/asm-blackfin/mmu.h
-index 11d52f1..757e439 100644
---- a/include/asm-blackfin/mmu.h
-+++ b/include/asm-blackfin/mmu.h
-@@ -24,7 +24,9 @@ typedef struct {
- unsigned long exec_fdpic_loadmap;
- unsigned long interp_fdpic_loadmap;
- #endif
++extern unsigned long get_wchan(struct task_struct *p);
++
++#define KSTK_EIP(tsk) ((tsk)->thread.pc)
++#define KSTK_ESP(tsk) ((tsk)->thread.sp)
++
++#define cpu_relax() barrier()
++
++#endif /* __ASSEMBLY__ */
++#endif /* __ASM_SH_PROCESSOR_64_H */
+diff --git a/include/asm-sh/ptrace.h b/include/asm-sh/ptrace.h
+index b9789c8..8d6c92b 100644
+--- a/include/asm-sh/ptrace.h
++++ b/include/asm-sh/ptrace.h
+@@ -5,7 +5,16 @@
+ * Copyright (C) 1999, 2000 Niibe Yutaka
+ *
+ */
-
-+#ifdef CONFIG_MPU
-+ unsigned long *page_rwx_mask;
-+#endif
- } mm_context_t;
-
- #endif
-diff --git a/include/asm-blackfin/mmu_context.h b/include/asm-blackfin/mmu_context.h
-index c5c71a6..b5eb675 100644
---- a/include/asm-blackfin/mmu_context.h
-+++ b/include/asm-blackfin/mmu_context.h
-@@ -30,9 +30,12 @@
- #ifndef __BLACKFIN_MMU_CONTEXT_H__
- #define __BLACKFIN_MMU_CONTEXT_H__
-
-+#include <linux/gfp.h>
-+#include <linux/sched.h>
- #include <asm/setup.h>
- #include <asm/page.h>
- #include <asm/pgalloc.h>
-+#include <asm/cplbinit.h>
-
- extern void *current_l1_stack_save;
- extern int nr_l1stack_tasks;
-@@ -50,6 +53,12 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
- static inline int
- init_new_context(struct task_struct *tsk, struct mm_struct *mm)
- {
-+#ifdef CONFIG_MPU
-+ unsigned long p = __get_free_pages(GFP_KERNEL, page_mask_order);
-+ mm->context.page_rwx_mask = (unsigned long *)p;
-+ memset(mm->context.page_rwx_mask, 0,
-+ page_mask_nelts * 3 * sizeof(long));
-+#endif
- return 0;
- }
-
-@@ -73,6 +82,11 @@ static inline void destroy_context(struct mm_struct *mm)
- sram_free(tmp->addr);
- kfree(tmp);
- }
-+#ifdef CONFIG_MPU
-+ if (current_rwx_mask == mm->context.page_rwx_mask)
-+ current_rwx_mask = NULL;
-+ free_pages((unsigned long)mm->context.page_rwx_mask, page_mask_order);
-+#endif
- }
++#if defined(__SH5__) || defined(CONFIG_SUPERH64)
++struct pt_regs {
++ unsigned long long pc;
++ unsigned long long sr;
++ unsigned long long syscall_nr;
++ unsigned long long regs[63];
++ unsigned long long tregs[8];
++ unsigned long long pad[2];
++};
++#else
+ /*
+ * GCC defines register number like this:
+ * -----------------------------
+@@ -28,7 +37,7 @@
- static inline unsigned long
-@@ -106,9 +120,21 @@ activate_l1stack(struct mm_struct *mm, unsigned long sp_base)
+ #define REG_PR 17
+ #define REG_SR 18
+-#define REG_GBR 19
++#define REG_GBR 19
+ #define REG_MACH 20
+ #define REG_MACL 21
- #define deactivate_mm(tsk,mm) do { } while (0)
+@@ -80,10 +89,14 @@ struct pt_dspregs {
--static inline void activate_mm(struct mm_struct *prev_mm,
-- struct mm_struct *next_mm)
-+#define activate_mm(prev, next) switch_mm(prev, next, NULL)
-+
-+static inline void switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm,
-+ struct task_struct *tsk)
- {
-+ if (prev_mm == next_mm)
-+ return;
-+#ifdef CONFIG_MPU
-+ if (prev_mm->context.page_rwx_mask == current_rwx_mask) {
-+ flush_switched_cplbs();
-+ set_mask_dcplbs(next_mm->context.page_rwx_mask);
-+ }
+ #define PTRACE_GETDSPREGS 55
+ #define PTRACE_SETDSPREGS 56
+#endif
-+
-+ /* L1 stack switching. */
- if (!next_mm->context.l1_stack_save)
- return;
- if (next_mm->context.l1_stack_save == current_l1_stack_save)
-@@ -120,10 +146,36 @@ static inline void activate_mm(struct mm_struct *prev_mm,
- memcpy(l1_stack_base, current_l1_stack_save, l1_stack_len);
- }
--static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
-- struct task_struct *tsk)
-+#ifdef CONFIG_MPU
-+static inline void protect_page(struct mm_struct *mm, unsigned long addr,
-+ unsigned long flags)
-+{
-+ unsigned long *mask = mm->context.page_rwx_mask;
-+ unsigned long page = addr >> 12;
-+ unsigned long idx = page >> 5;
-+ unsigned long bit = 1 << (page & 31);
+ #ifdef __KERNEL__
+-#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
+-#define instruction_pointer(regs) ((regs)->pc)
++#include <asm/addrspace.h>
+
-+ if (flags & VM_MAYREAD)
-+ mask[idx] |= bit;
-+ else
-+ mask[idx] &= ~bit;
-+ mask += page_mask_nelts;
-+ if (flags & VM_MAYWRITE)
-+ mask[idx] |= bit;
-+ else
-+ mask[idx] &= ~bit;
-+ mask += page_mask_nelts;
-+ if (flags & VM_MAYEXEC)
-+ mask[idx] |= bit;
-+ else
-+ mask[idx] &= ~bit;
-+}
++#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
++#define instruction_pointer(regs) ((unsigned long)(regs)->pc)
+
-+static inline void update_protections(struct mm_struct *mm)
- {
-- activate_mm(prev, next);
-+ flush_switched_cplbs();
-+ set_mask_dcplbs(mm->context.page_rwx_mask);
- }
-+#endif
-
- #endif
-diff --git a/include/asm-blackfin/traps.h b/include/asm-blackfin/traps.h
-index ee1cbf7..f0e5f94 100644
---- a/include/asm-blackfin/traps.h
-+++ b/include/asm-blackfin/traps.h
-@@ -45,6 +45,10 @@
- #define VEC_CPLB_I_M (44)
- #define VEC_CPLB_I_MHIT (45)
- #define VEC_ILL_RES (46) /* including unvalid supervisor mode insn */
-+/* The hardware reserves (63) for future use - we use it to tell our
-+ * normal exception handling code we have a hardware error
-+ */
-+#define VEC_HWERR (63)
-
- #ifndef __ASSEMBLY__
-
-diff --git a/include/asm-blackfin/uaccess.h b/include/asm-blackfin/uaccess.h
-index 2233f8f..22a410b 100644
---- a/include/asm-blackfin/uaccess.h
-+++ b/include/asm-blackfin/uaccess.h
-@@ -31,7 +31,7 @@ static inline void set_fs(mm_segment_t fs)
- #define VERIFY_READ 0
- #define VERIFY_WRITE 1
-
--#define access_ok(type,addr,size) _access_ok((unsigned long)(addr),(size))
-+#define access_ok(type, addr, size) _access_ok((unsigned long)(addr), (size))
+ extern void show_regs(struct pt_regs *);
- static inline int is_in_rom(unsigned long addr)
+ #ifdef CONFIG_SH_DSP
+@@ -100,10 +113,13 @@ static inline unsigned long profile_pc(struct pt_regs *regs)
{
-diff --git a/include/asm-blackfin/unistd.h b/include/asm-blackfin/unistd.h
-index 07ffe8b..e981673 100644
---- a/include/asm-blackfin/unistd.h
-+++ b/include/asm-blackfin/unistd.h
-@@ -369,8 +369,9 @@
- #define __NR_set_robust_list 354
- #define __NR_get_robust_list 355
- #define __NR_fallocate 356
-+#define __NR_semtimedop 357
-
--#define __NR_syscall 357
-+#define __NR_syscall 358
- #define NR_syscalls __NR_syscall
+ unsigned long pc = instruction_pointer(regs);
- /* Old optional stuff no one actually uses */
-diff --git a/include/asm-cris/arch-v10/ide.h b/include/asm-cris/arch-v10/ide.h
-index 78b301e..ea34e0d 100644
---- a/include/asm-cris/arch-v10/ide.h
-+++ b/include/asm-cris/arch-v10/ide.h
-@@ -89,11 +89,6 @@ static inline void ide_init_default_hwifs(void)
- }
+- if (pc >= 0xa0000000UL && pc < 0xc0000000UL)
++#ifdef P2SEG
++ if (pc >= P2SEG && pc < P3SEG)
+ pc -= 0x20000000;
++#endif
++
+ return pc;
}
+-#endif
++#endif /* __KERNEL__ */
--/* some configuration options we don't need */
--
--#undef SUPPORT_VLB_SYNC
--#define SUPPORT_VLB_SYNC 0
--
- #endif /* __KERNEL__ */
+ #endif /* __ASM_SH_PTRACE_H */
+diff --git a/include/asm-sh/r7780rp.h b/include/asm-sh/r7780rp.h
+index de37f93..bdecea0 100644
+--- a/include/asm-sh/r7780rp.h
++++ b/include/asm-sh/r7780rp.h
+@@ -121,21 +121,6 @@
- #endif /* __ASMCRIS_IDE_H */
-diff --git a/include/asm-cris/arch-v32/ide.h b/include/asm-cris/arch-v32/ide.h
-index 1129617..fb9c362 100644
---- a/include/asm-cris/arch-v32/ide.h
-+++ b/include/asm-cris/arch-v32/ide.h
-@@ -48,11 +48,6 @@ static inline unsigned long ide_default_io_base(int index)
- return REG_TYPE_CONV(unsigned long, reg_ata_rw_ctrl2, ctrl2);
- }
+ #define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
--/* some configuration options we don't need */
--
--#undef SUPPORT_VLB_SYNC
--#define SUPPORT_VLB_SYNC 0
+-#define IRQ_PCISLOT1 0 /* PCI Slot #1 IRQ */
+-#define IRQ_PCISLOT2 1 /* PCI Slot #2 IRQ */
+-#define IRQ_PCISLOT3 2 /* PCI Slot #3 IRQ */
+-#define IRQ_PCISLOT4 3 /* PCI Slot #4 IRQ */
+-#define IRQ_CFINST 5 /* CF Card Insert IRQ */
+-#define IRQ_M66596 6 /* M66596 IRQ */
+-#define IRQ_SDCARD 7 /* SD Card IRQ */
+-#define IRQ_TUCHPANEL 8 /* Touch Panel IRQ */
+-#define IRQ_SCI 9 /* SCI IRQ */
+-#define IRQ_2SERIAL 10 /* Serial IRQ */
+-#define IRQ_EXTENTION 11 /* EXTn IRQ */
+-#define IRQ_ONETH 12 /* On board Ethernet IRQ */
+-#define IRQ_PSW 13 /* Push Switch IRQ */
+-#define IRQ_ZIGBEE 14 /* Ziggbee IO IRQ */
-
- #define IDE_ARCH_ACK_INTR
- #define ide_ack_intr(hwif) ((hwif)->ack_intr(hwif))
+ #define IVDR_CK_ON 8 /* iVDR Clock ON */
-diff --git a/include/asm-frv/ide.h b/include/asm-frv/ide.h
-index f0bd2cb..8c9a540 100644
---- a/include/asm-frv/ide.h
-+++ b/include/asm-frv/ide.h
-@@ -18,12 +18,6 @@
- #include <asm/io.h>
- #include <asm/irq.h>
+ #elif defined(CONFIG_SH_R7785RP)
+@@ -192,13 +177,19 @@
--#undef SUPPORT_SLOW_DATA_PORTS
--#define SUPPORT_SLOW_DATA_PORTS 0
--
--#undef SUPPORT_VLB_SYNC
--#define SUPPORT_VLB_SYNC 0
+ #define IRQ_AX88796 (HL_FPGA_IRQ_BASE + 0)
+ #define IRQ_CF (HL_FPGA_IRQ_BASE + 1)
+-#ifndef IRQ_PSW
+ #define IRQ_PSW (HL_FPGA_IRQ_BASE + 2)
+-#endif
+-#define IRQ_EXT1 (HL_FPGA_IRQ_BASE + 3)
+-#define IRQ_EXT4 (HL_FPGA_IRQ_BASE + 4)
-
- #ifndef MAX_HWIFS
- #define MAX_HWIFS 8
- #endif
-diff --git a/include/asm-generic/bitops/ext2-non-atomic.h b/include/asm-generic/bitops/ext2-non-atomic.h
-index 1697404..63cf822 100644
---- a/include/asm-generic/bitops/ext2-non-atomic.h
-+++ b/include/asm-generic/bitops/ext2-non-atomic.h
-@@ -14,5 +14,7 @@
- generic_find_first_zero_le_bit((unsigned long *)(addr), (size))
- #define ext2_find_next_zero_bit(addr, size, off) \
- generic_find_next_zero_le_bit((unsigned long *)(addr), (size), (off))
-+#define ext2_find_next_bit(addr, size, off) \
-+ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
-
- #endif /* _ASM_GENERIC_BITOPS_EXT2_NON_ATOMIC_H_ */
-diff --git a/include/asm-generic/bitops/le.h b/include/asm-generic/bitops/le.h
-index b9c7e5d..80e3bf1 100644
---- a/include/asm-generic/bitops/le.h
-+++ b/include/asm-generic/bitops/le.h
-@@ -20,6 +20,8 @@
- #define generic___test_and_clear_le_bit(nr, addr) __test_and_clear_bit(nr, addr)
-
- #define generic_find_next_zero_le_bit(addr, size, offset) find_next_zero_bit(addr, size, offset)
-+#define generic_find_next_le_bit(addr, size, offset) \
-+ find_next_bit(addr, size, offset)
-
- #elif defined(__BIG_ENDIAN)
-
-@@ -42,6 +44,8 @@
-
- extern unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
- unsigned long size, unsigned long offset);
-+extern unsigned long generic_find_next_le_bit(const unsigned long *addr,
-+ unsigned long size, unsigned long offset);
+-void make_r7780rp_irq(unsigned int irq);
++#define IRQ_EXT0 (HL_FPGA_IRQ_BASE + 3)
++#define IRQ_EXT1 (HL_FPGA_IRQ_BASE + 4)
++#define IRQ_EXT2 (HL_FPGA_IRQ_BASE + 5)
++#define IRQ_EXT3 (HL_FPGA_IRQ_BASE + 6)
++#define IRQ_EXT4 (HL_FPGA_IRQ_BASE + 7)
++#define IRQ_EXT5 (HL_FPGA_IRQ_BASE + 8)
++#define IRQ_EXT6 (HL_FPGA_IRQ_BASE + 9)
++#define IRQ_EXT7 (HL_FPGA_IRQ_BASE + 10)
++#define IRQ_SMBUS (HL_FPGA_IRQ_BASE + 11)
++#define IRQ_TP (HL_FPGA_IRQ_BASE + 12)
++#define IRQ_RTC (HL_FPGA_IRQ_BASE + 13)
++#define IRQ_TH_ALERT (HL_FPGA_IRQ_BASE + 14)
- #else
- #error "Please fix <asm/byteorder.h>"
-diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
-index a4a22cc..587566f 100644
---- a/include/asm-generic/resource.h
-+++ b/include/asm-generic/resource.h
-@@ -44,8 +44,8 @@
- #define RLIMIT_NICE 13 /* max nice prio allowed to raise to
- 0-39 for nice level 19 .. -20 */
- #define RLIMIT_RTPRIO 14 /* maximum realtime priority */
--
--#define RLIM_NLIMITS 15
-+#define RLIMIT_RTTIME 15 /* timeout for RT tasks in us */
-+#define RLIM_NLIMITS 16
+ unsigned char *highlander_init_irq_r7780mp(void);
+ unsigned char *highlander_init_irq_r7780rp(void);
+diff --git a/include/asm-sh/rtc.h b/include/asm-sh/rtc.h
+index 858da99..ec45ba8 100644
+--- a/include/asm-sh/rtc.h
++++ b/include/asm-sh/rtc.h
+@@ -11,4 +11,6 @@ struct sh_rtc_platform_info {
+ unsigned long capabilities;
+ };
- /*
- * SuS says limits have to be unsigned.
-@@ -86,6 +86,7 @@
- [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
- [RLIMIT_NICE] = { 0, 0 }, \
- [RLIMIT_RTPRIO] = { 0, 0 }, \
-+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
- }
++#include <asm/cpu/rtc.h>
++
+ #endif /* _ASM_RTC_H */
+diff --git a/include/asm-sh/scatterlist.h b/include/asm-sh/scatterlist.h
+index a7d0d18..2084d03 100644
+--- a/include/asm-sh/scatterlist.h
++++ b/include/asm-sh/scatterlist.h
+@@ -13,7 +13,7 @@ struct scatterlist {
+ unsigned int length;
+ };
- #endif /* __KERNEL__ */
-diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
-index 9f584cc..76df771 100644
---- a/include/asm-generic/vmlinux.lds.h
-+++ b/include/asm-generic/vmlinux.lds.h
-@@ -9,10 +9,46 @@
- /* Align . to a 8 byte boundary equals to maximum function alignment. */
- #define ALIGN_FUNCTION() . = ALIGN(8)
+-#define ISA_DMA_THRESHOLD (0x1fffffff)
++#define ISA_DMA_THRESHOLD PHYS_ADDR_MASK
-+/* The actual configuration determine if the init/exit sections
-+ * are handled as text/data or they can be discarded (which
-+ * often happens at runtime)
+ /* These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+diff --git a/include/asm-sh/sdk7780.h b/include/asm-sh/sdk7780.h
+new file mode 100644
+index 0000000..697dc86
+--- /dev/null
++++ b/include/asm-sh/sdk7780.h
+@@ -0,0 +1,81 @@
++#ifndef __ASM_SH_RENESAS_SDK7780_H
++#define __ASM_SH_RENESAS_SDK7780_H
++
++/*
++ * linux/include/asm-sh/sdk7780.h
++ *
++ * Renesas Solutions SH7780 SDK Support
++ * Copyright (C) 2008 Nicholas Beck <nbeck at mpc-data.co.uk>
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
-+#ifdef CONFIG_HOTPLUG
-+#define DEV_KEEP(sec) *(.dev##sec)
-+#define DEV_DISCARD(sec)
-+#else
-+#define DEV_KEEP(sec)
-+#define DEV_DISCARD(sec) *(.dev##sec)
-+#endif
++#include <asm/addrspace.h>
+
-+#ifdef CONFIG_HOTPLUG_CPU
-+#define CPU_KEEP(sec) *(.cpu##sec)
-+#define CPU_DISCARD(sec)
-+#else
-+#define CPU_KEEP(sec)
-+#define CPU_DISCARD(sec) *(.cpu##sec)
-+#endif
++/* Box specific addresses. */
++#define SE_AREA0_WIDTH 4 /* Area0: 32bit */
++#define PA_ROM 0xa0000000 /* EPROM */
++#define PA_ROM_SIZE 0x00400000 /* EPROM size 4M byte */
++#define PA_FROM 0xa0800000 /* Flash-ROM */
++#define PA_FROM_SIZE 0x00400000 /* Flash-ROM size 4M byte */
++#define PA_EXT1 0xa4000000
++#define PA_EXT1_SIZE 0x04000000
++#define PA_SDRAM 0xa8000000 /* DDR-SDRAM(Area2/3) 128MB */
++#define PA_SDRAM_SIZE 0x08000000
+
-+#if defined(CONFIG_MEMORY_HOTPLUG)
-+#define MEM_KEEP(sec) *(.mem##sec)
-+#define MEM_DISCARD(sec)
-+#else
-+#define MEM_KEEP(sec)
-+#define MEM_DISCARD(sec) *(.mem##sec)
-+#endif
++#define PA_EXT4 0xb0000000
++#define PA_EXT4_SIZE 0x04000000
++#define PA_EXT_USER PA_EXT4 /* User Expansion Space */
+
++#define PA_PERIPHERAL PA_AREA5_IO
+
- /* .data section */
- #define DATA_DATA \
- *(.data) \
- *(.data.init.refok) \
-+ *(.ref.data) \
-+ DEV_KEEP(init.data) \
-+ DEV_KEEP(exit.data) \
-+ CPU_KEEP(init.data) \
-+ CPU_KEEP(exit.data) \
-+ MEM_KEEP(init.data) \
-+ MEM_KEEP(exit.data) \
- . = ALIGN(8); \
- VMLINUX_SYMBOL(__start___markers) = .; \
- *(__markers) \
-@@ -132,6 +168,17 @@
- *(__ksymtab_strings) \
- } \
- \
-+ /* __*init sections */ \
-+ __init_rodata : AT(ADDR(__init_rodata) - LOAD_OFFSET) { \
-+ *(.ref.rodata) \
-+ DEV_KEEP(init.rodata) \
-+ DEV_KEEP(exit.rodata) \
-+ CPU_KEEP(init.rodata) \
-+ CPU_KEEP(exit.rodata) \
-+ MEM_KEEP(init.rodata) \
-+ MEM_KEEP(exit.rodata) \
-+ } \
-+ \
- /* Built-in module parameters. */ \
- __param : AT(ADDR(__param) - LOAD_OFFSET) { \
- VMLINUX_SYMBOL(__start___param) = .; \
-@@ -139,7 +186,6 @@
- VMLINUX_SYMBOL(__stop___param) = .; \
- VMLINUX_SYMBOL(__end_rodata) = .; \
- } \
-- \
- . = ALIGN((align));
-
- /* RODATA provided for backward compatibility.
-@@ -158,8 +204,16 @@
- #define TEXT_TEXT \
- ALIGN_FUNCTION(); \
- *(.text) \
-+ *(.ref.text) \
- *(.text.init.refok) \
-- *(.exit.text.refok)
-+ *(.exit.text.refok) \
-+ DEV_KEEP(init.text) \
-+ DEV_KEEP(exit.text) \
-+ CPU_KEEP(init.text) \
-+ CPU_KEEP(exit.text) \
-+ MEM_KEEP(init.text) \
-+ MEM_KEEP(exit.text)
++/* SRAM/Reserved */
++#define PA_RESERVED (PA_PERIPHERAL + 0)
++/* FPGA base address */
++#define PA_FPGA (PA_PERIPHERAL + 0x01000000)
++/* SMC LAN91C111 */
++#define PA_LAN (PA_PERIPHERAL + 0x01800000)
+
-
- /* sched.text is aling to function alignment to secure we have same
- * address even at second ld pass when generating System.map */
-@@ -183,6 +237,37 @@
- *(.kprobes.text) \
- VMLINUX_SYMBOL(__kprobes_text_end) = .;
-
-+/* init and exit section handling */
-+#define INIT_DATA \
-+ *(.init.data) \
-+ DEV_DISCARD(init.data) \
-+ DEV_DISCARD(init.rodata) \
-+ CPU_DISCARD(init.data) \
-+ CPU_DISCARD(init.rodata) \
-+ MEM_DISCARD(init.data) \
-+ MEM_DISCARD(init.rodata)
+
-+#define INIT_TEXT \
-+ *(.init.text) \
-+ DEV_DISCARD(init.text) \
-+ CPU_DISCARD(init.text) \
-+ MEM_DISCARD(init.text)
++#define FPGA_SRSTR (PA_FPGA + 0x000) /* System reset */
++#define FPGA_IRQ0SR (PA_FPGA + 0x010) /* IRQ0 status */
++#define FPGA_IRQ0MR (PA_FPGA + 0x020) /* IRQ0 mask */
++#define FPGA_BDMR (PA_FPGA + 0x030) /* Board operating mode */
++#define FPGA_INTT0PRTR (PA_FPGA + 0x040) /* Interrupt test mode0 port */
++#define FPGA_INTT0SELR (PA_FPGA + 0x050) /* Int. test mode0 select */
++#define FPGA_INTT1POLR (PA_FPGA + 0x060) /* Int. test mode0 polarity */
++#define FPGA_NMIR (PA_FPGA + 0x070) /* NMI source */
++#define FPGA_NMIMR (PA_FPGA + 0x080) /* NMI mask */
++#define FPGA_IRQR (PA_FPGA + 0x090) /* IRQX source */
++#define FPGA_IRQMR (PA_FPGA + 0x0A0) /* IRQX mask */
++#define FPGA_SLEDR (PA_FPGA + 0x0B0) /* LED control */
++#define PA_LED FPGA_SLEDR
++#define FPGA_MAPSWR (PA_FPGA + 0x0C0) /* Map switch */
++#define FPGA_FPVERR (PA_FPGA + 0x0D0) /* FPGA version */
++#define FPGA_FPDATER (PA_FPGA + 0x0E0) /* FPGA date */
++#define FPGA_RSE (PA_FPGA + 0x100) /* Reset source */
++#define FPGA_EASR (PA_FPGA + 0x110) /* External area select */
++#define FPGA_SPER (PA_FPGA + 0x120) /* Serial port enable */
++#define FPGA_IMSR (PA_FPGA + 0x130) /* Interrupt mode select */
++#define FPGA_PCIMR (PA_FPGA + 0x140) /* PCI Mode */
++#define FPGA_DIPSWMR (PA_FPGA + 0x150) /* DIPSW monitor */
++#define FPGA_FPODR (PA_FPGA + 0x160) /* Output port data */
++#define FPGA_ATAESR (PA_FPGA + 0x170) /* ATA extended bus status */
++#define FPGA_IRQPOLR (PA_FPGA + 0x180) /* IRQx polarity */
+
-+#define EXIT_DATA \
-+ *(.exit.data) \
-+ DEV_DISCARD(exit.data) \
-+ DEV_DISCARD(exit.rodata) \
-+ CPU_DISCARD(exit.data) \
-+ CPU_DISCARD(exit.rodata) \
-+ MEM_DISCARD(exit.data) \
-+ MEM_DISCARD(exit.rodata)
+
-+#define EXIT_TEXT \
-+ *(.exit.text) \
-+ DEV_DISCARD(exit.text) \
-+ CPU_DISCARD(exit.text) \
-+ MEM_DISCARD(exit.text)
++#define SDK7780_NR_IRL 15
++/* IDE/ATA interrupt */
++#define IRQ_CFCARD 14
++/* SMC interrupt */
++#define IRQ_ETHERNET 6
+
- /* DWARF debug sections.
- Symbols in the DWARF debugging sections are relative to
- the beginning of the section so we begin them at 0. */
-diff --git a/include/asm-ia64/gcc_intrin.h b/include/asm-ia64/gcc_intrin.h
-index e58d329..5b6665c 100644
---- a/include/asm-ia64/gcc_intrin.h
-+++ b/include/asm-ia64/gcc_intrin.h
-@@ -24,7 +24,7 @@
- extern void ia64_bad_param_for_setreg (void);
- extern void ia64_bad_param_for_getreg (void);
++
++/* arch/sh/boards/renesas/sdk7780/irq.c */
++void init_sdk7780_IRQ(void);
++
++#define __IO_PREFIX sdk7780
++#include <asm/io_generic.h>
++
++#endif /* __ASM_SH_RENESAS_SDK7780_H */
+diff --git a/include/asm-sh/sections.h b/include/asm-sh/sections.h
+index bd9cbc9..8f8f4ad 100644
+--- a/include/asm-sh/sections.h
++++ b/include/asm-sh/sections.h
+@@ -4,6 +4,7 @@
+ #include <asm-generic/sections.h>
--register unsigned long ia64_r13 asm ("r13") __attribute_used__;
-+register unsigned long ia64_r13 asm ("r13") __used;
+ extern long __machvec_start, __machvec_end;
++extern char __uncached_start, __uncached_end;
+ extern char _ebss[];
- #define ia64_setreg(regnum, val) \
- ({ \
-diff --git a/include/asm-m68k/bitops.h b/include/asm-m68k/bitops.h
-index 2976b5d..83d1f28 100644
---- a/include/asm-m68k/bitops.h
-+++ b/include/asm-m68k/bitops.h
-@@ -410,6 +410,8 @@ static inline int ext2_find_next_zero_bit(const void *vaddr, unsigned size,
- res = ext2_find_first_zero_bit (p, size - 32 * (p - addr));
- return (p - addr) * 32 + res;
- }
-+#define ext2_find_next_bit(addr, size, off) \
-+ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
+ #endif /* __ASM_SH_SECTIONS_H */
+diff --git a/include/asm-sh/sigcontext.h b/include/asm-sh/sigcontext.h
+index eb8effb..8ce1435 100644
+--- a/include/asm-sh/sigcontext.h
++++ b/include/asm-sh/sigcontext.h
+@@ -4,6 +4,18 @@
+ struct sigcontext {
+ unsigned long oldmask;
- #endif /* __KERNEL__ */
++#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
++ /* CPU registers */
++ unsigned long long sc_regs[63];
++ unsigned long long sc_tregs[8];
++ unsigned long long sc_pc;
++ unsigned long long sc_sr;
++
++ /* FPU registers */
++ unsigned long long sc_fpregs[32];
++ unsigned int sc_fpscr;
++ unsigned int sc_fpvalid;
++#else
+ /* CPU registers */
+ unsigned long sc_regs[16];
+ unsigned long sc_pc;
+@@ -13,7 +25,8 @@ struct sigcontext {
+ unsigned long sc_mach;
+ unsigned long sc_macl;
-diff --git a/include/asm-m68knommu/bitops.h b/include/asm-m68knommu/bitops.h
-index f8dfb7b..f43afe1 100644
---- a/include/asm-m68knommu/bitops.h
-+++ b/include/asm-m68knommu/bitops.h
-@@ -294,6 +294,8 @@ found_middle:
- return result + ffz(__swab32(tmp));
- }
+-#if defined(__SH4__) || defined(CONFIG_CPU_SH4)
++#if defined(__SH4__) || defined(CONFIG_CPU_SH4) || \
++ defined(__SH2A__) || defined(CONFIG_CPU_SH2A)
+ /* FPU registers */
+ unsigned long sc_fpregs[16];
+ unsigned long sc_xfpregs[16];
+@@ -21,6 +34,7 @@ struct sigcontext {
+ unsigned int sc_fpul;
+ unsigned int sc_ownedfp;
+ #endif
++#endif
+ };
-+#define ext2_find_next_bit(addr, size, off) \
-+ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
- #include <asm-generic/bitops/minix.h>
+ #endif /* __ASM_SH_SIGCONTEXT_H */
+diff --git a/include/asm-sh/spi.h b/include/asm-sh/spi.h
+new file mode 100644
+index 0000000..e96f5b0
+--- /dev/null
++++ b/include/asm-sh/spi.h
+@@ -0,0 +1,13 @@
++#ifndef __ASM_SPI_H__
++#define __ASM_SPI_H__
++
++struct sh_spi_info;
++
++struct sh_spi_info {
++ int bus_num;
++ int num_chipselect;
++
++ void (*chip_select)(struct sh_spi_info *spi, int cs, int state);
++};
++
++#endif /* __ASM_SPI_H__ */
+diff --git a/include/asm-sh/stat.h b/include/asm-sh/stat.h
+index 6d6ad26..e1810cc 100644
+--- a/include/asm-sh/stat.h
++++ b/include/asm-sh/stat.h
+@@ -15,6 +15,66 @@ struct __old_kernel_stat {
+ unsigned long st_ctime;
+ };
- #endif /* __KERNEL__ */
-diff --git a/include/asm-mips/addrspace.h b/include/asm-mips/addrspace.h
-index 0bb7a93..569f80a 100644
---- a/include/asm-mips/addrspace.h
-+++ b/include/asm-mips/addrspace.h
-@@ -127,7 +127,7 @@
- #define PHYS_TO_XKSEG_CACHED(p) PHYS_TO_XKPHYS(K_CALG_COH_SHAREABLE, (p))
- #define XKPHYS_TO_PHYS(p) ((p) & TO_PHYS_MASK)
- #define PHYS_TO_XKPHYS(cm, a) (_CONST64_(0x8000000000000000) | \
-- ((cm)<<59) | (a))
-+ (_CONST64_(cm) << 59) | (a))
++#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
++struct stat {
++ unsigned short st_dev;
++ unsigned short __pad1;
++ unsigned long st_ino;
++ unsigned short st_mode;
++ unsigned short st_nlink;
++ unsigned short st_uid;
++ unsigned short st_gid;
++ unsigned short st_rdev;
++ unsigned short __pad2;
++ unsigned long st_size;
++ unsigned long st_blksize;
++ unsigned long st_blocks;
++ unsigned long st_atime;
++ unsigned long st_atime_nsec;
++ unsigned long st_mtime;
++ unsigned long st_mtime_nsec;
++ unsigned long st_ctime;
++ unsigned long st_ctime_nsec;
++ unsigned long __unused4;
++ unsigned long __unused5;
++};
++
++/* This matches struct stat64 in glibc2.1, hence the absolutely
++ * insane amounts of padding around dev_t's.
++ */
++struct stat64 {
++ unsigned short st_dev;
++ unsigned char __pad0[10];
++
++ unsigned long st_ino;
++ unsigned int st_mode;
++ unsigned int st_nlink;
++
++ unsigned long st_uid;
++ unsigned long st_gid;
++
++ unsigned short st_rdev;
++ unsigned char __pad3[10];
++
++ long long st_size;
++ unsigned long st_blksize;
++
++ unsigned long st_blocks; /* Number 512-byte blocks allocated. */
++ unsigned long __pad4; /* future possible st_blocks high bits */
++
++ unsigned long st_atime;
++ unsigned long st_atime_nsec;
++
++ unsigned long st_mtime;
++ unsigned long st_mtime_nsec;
++
++ unsigned long st_ctime;
++ unsigned long st_ctime_nsec; /* will be high 32 bits of ctime someday */
++
++ unsigned long __unused1;
++ unsigned long __unused2;
++};
++#else
+ struct stat {
+ unsigned long st_dev;
+ unsigned long st_ino;
+@@ -67,11 +127,12 @@ struct stat64 {
+ unsigned long st_mtime_nsec;
- /*
- * The ultimate limited of the 64-bit MIPS architecture: 2 bits for selecting
-diff --git a/include/asm-mips/asm.h b/include/asm-mips/asm.h
-index 12e1758..608cfcf 100644
---- a/include/asm-mips/asm.h
-+++ b/include/asm-mips/asm.h
-@@ -398,4 +398,12 @@ symbol = value
+ unsigned long st_ctime;
+- unsigned long st_ctime_nsec;
++ unsigned long st_ctime_nsec;
- #define SSNOP sll zero, zero, 1
+ unsigned long long st_ino;
+ };
-+#ifdef CONFIG_SGI_IP28
-+/* Inhibit speculative stores to volatile (e.g.DMA) or invalid addresses. */
-+#include <asm/cacheops.h>
-+#define R10KCBARRIER(addr) cache Cache_Barrier, addr;
-+#else
-+#define R10KCBARRIER(addr)
+ #define STAT_HAVE_NSEC 1
+#endif
-+
- #endif /* __ASM_ASM_H */
-diff --git a/include/asm-mips/bootinfo.h b/include/asm-mips/bootinfo.h
-index b2dd9b3..e031bdf 100644
---- a/include/asm-mips/bootinfo.h
-+++ b/include/asm-mips/bootinfo.h
-@@ -48,22 +48,11 @@
- #define MACH_DS5900 10 /* DECsystem 5900 */
- /*
-- * Valid machtype for group ARC
-- */
--#define MACH_DESKSTATION_RPC44 0 /* Deskstation rPC44 */
--#define MACH_DESKSTATION_TYNE 1 /* Deskstation Tyne */
+ #endif /* __ASM_SH_STAT_H */
+diff --git a/include/asm-sh/string.h b/include/asm-sh/string.h
+index 55f8db6..8c1ea21 100644
+--- a/include/asm-sh/string.h
++++ b/include/asm-sh/string.h
+@@ -1,131 +1,5 @@
+-#ifndef __ASM_SH_STRING_H
+-#define __ASM_SH_STRING_H
-
--/*
- * Valid machtype for group SNI_RM
- */
- #define MACH_SNI_RM200_PCI 0 /* RM200/RM300/RM400 PCI series */
-
- /*
-- * Valid machtype for group ACN
-- */
--#define MACH_ACN_MIPS_BOARD 0 /* ACN MIPS single board */
+-#ifdef __KERNEL__
-
-/*
- * Valid machtype for group SGI
- */
- #define MACH_SGI_IP22 0 /* Indy, Indigo2, Challenge S */
-@@ -73,44 +62,6 @@
- #define MACH_SGI_IP30 4 /* Octane, Octane2 */
-
- /*
-- * Valid machtype for group COBALT
+- * Copyright (C) 1999 Niibe Yutaka
+- * But consider these trivial functions to be public domain.
- */
--#define MACH_COBALT_27 0 /* Proto "27" hardware */
-
--/*
-- * Valid machtype for group BAGET
-- */
--#define MACH_BAGET201 0 /* BT23-201 */
--#define MACH_BAGET202 1 /* BT23-202 */
+-#define __HAVE_ARCH_STRCPY
+-static inline char *strcpy(char *__dest, const char *__src)
+-{
+- register char *__xdest = __dest;
+- unsigned long __dummy;
-
--/*
-- * Cosine boards.
-- */
--#define MACH_COSINE_ORION 0
+- __asm__ __volatile__("1:\n\t"
+- "mov.b @%1+, %2\n\t"
+- "mov.b %2, @%0\n\t"
+- "cmp/eq #0, %2\n\t"
+- "bf/s 1b\n\t"
+- " add #1, %0\n\t"
+- : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
+- : "0" (__dest), "1" (__src)
+- : "memory", "t");
-
--/*
-- * Valid machtype for group MOMENCO
-- */
--#define MACH_MOMENCO_OCELOT 0
--#define MACH_MOMENCO_OCELOT_G 1 /* no more supported (may 2007) */
--#define MACH_MOMENCO_OCELOT_C 2 /* no more supported (jun 2007) */
--#define MACH_MOMENCO_JAGUAR_ATX 3 /* no more supported (may 2007) */
--#define MACH_MOMENCO_OCELOT_3 4
+- return __xdest;
+-}
-
--/*
-- * Valid machtype for group PHILIPS
-- */
--#define MACH_PHILIPS_NINO 0 /* Nino */
--#define MACH_PHILIPS_VELO 1 /* Velo */
--#define MACH_PHILIPS_JBS 2 /* JBS */
--#define MACH_PHILIPS_STB810 3 /* STB810 */
+-#define __HAVE_ARCH_STRNCPY
+-static inline char *strncpy(char *__dest, const char *__src, size_t __n)
+-{
+- register char *__xdest = __dest;
+- unsigned long __dummy;
-
--/*
-- * Valid machtype for group SIBYTE
-- */
--#define MACH_SWARM 0
+- if (__n == 0)
+- return __xdest;
-
--/*
- * Valid machtypes for group Toshiba
- */
- #define MACH_PALLAS 0
-@@ -122,64 +73,17 @@
- #define MACH_TOSHIBA_RBTX4938 6
-
- /*
-- * Valid machtype for group Alchemy
-- */
--#define MACH_PB1000 0 /* Au1000-based eval board */
--#define MACH_PB1100 1 /* Au1100-based eval board */
--#define MACH_PB1500 2 /* Au1500-based eval board */
--#define MACH_DB1000 3 /* Au1000-based eval board */
--#define MACH_DB1100 4 /* Au1100-based eval board */
--#define MACH_DB1500 5 /* Au1500-based eval board */
--#define MACH_XXS1500 6 /* Au1500-based eval board */
--#define MACH_MTX1 7 /* 4G MTX-1 Au1500-based board */
--#define MACH_PB1550 8 /* Au1550-based eval board */
--#define MACH_DB1550 9 /* Au1550-based eval board */
--#define MACH_PB1200 10 /* Au1200-based eval board */
--#define MACH_DB1200 11 /* Au1200-based eval board */
+- __asm__ __volatile__(
+- "1:\n"
+- "mov.b @%1+, %2\n\t"
+- "mov.b %2, @%0\n\t"
+- "cmp/eq #0, %2\n\t"
+- "bt/s 2f\n\t"
+- " cmp/eq %5,%1\n\t"
+- "bf/s 1b\n\t"
+- " add #1, %0\n"
+- "2:"
+- : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
+- : "0" (__dest), "1" (__src), "r" (__src+__n)
+- : "memory", "t");
-
--/*
-- * Valid machtype for group NEC_VR41XX
-- *
-- * Various NEC-based devices.
-- *
-- * FIXME: MACH_GROUPs should be by _MANUFACTURER_ of * the device, not by
-- * technical properties, so no new additions to this group.
-- */
--#define MACH_NEC_OSPREY 0 /* Osprey eval board */
--#define MACH_NEC_EAGLE 1 /* NEC Eagle/Hawk board */
--#define MACH_ZAO_CAPCELLA 2 /* ZAO Networks Capcella */
--#define MACH_VICTOR_MPC30X 3 /* Victor MP-C303/304 */
--#define MACH_IBM_WORKPAD 4 /* IBM WorkPad z50 */
--#define MACH_CASIO_E55 5 /* CASIO CASSIOPEIA E-10/15/55/65 */
--#define MACH_TANBAC_TB0226 6 /* TANBAC TB0226 (Mbase) */
--#define MACH_TANBAC_TB0229 7 /* TANBAC TB0229 (VR4131DIMM) */
--#define MACH_NEC_CMBVR4133 8 /* CMB VR4133 Board */
+- return __xdest;
+-}
-
--#define MACH_HP_LASERJET 1
+-#define __HAVE_ARCH_STRCMP
+-static inline int strcmp(const char *__cs, const char *__ct)
+-{
+- register int __res;
+- unsigned long __dummy;
-
--/*
- * Valid machtype for group LASAT
- */
- #define MACH_LASAT_100 0 /* Masquerade II/SP100/SP50/SP25 */
- #define MACH_LASAT_200 1 /* Masquerade PRO/SP200 */
-
- /*
-- * Valid machtype for group TITAN
-- */
--#define MACH_TITAN_YOSEMITE 1 /* PMC-Sierra Yosemite */
--#define MACH_TITAN_EXCITE 2 /* Basler eXcite */
+- __asm__ __volatile__(
+- "mov.b @%1+, %3\n"
+- "1:\n\t"
+- "mov.b @%0+, %2\n\t"
+- "cmp/eq #0, %3\n\t"
+- "bt 2f\n\t"
+- "cmp/eq %2, %3\n\t"
+- "bt/s 1b\n\t"
+- " mov.b @%1+, %3\n\t"
+- "add #-2, %1\n\t"
+- "mov.b @%1, %3\n\t"
+- "sub %3, %2\n"
+- "2:"
+- : "=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
+- : "0" (__cs), "1" (__ct)
+- : "t");
-
--/*
- * Valid machtype for group NEC EMMA2RH
- */
- #define MACH_NEC_MARKEINS 0 /* NEC EMMA2RH Mark-eins */
-
- /*
-- * Valid machtype for group LEMOTE
-- */
--#define MACH_LEMOTE_FULONG 0
+- return __res;
+-}
-
--/*
- * Valid machtype for group PMC-MSP
- */
- #define MACH_MSP4200_EVAL 0 /* PMC-Sierra MSP4200 Evaluation */
-@@ -190,16 +94,9 @@
- #define MACH_MSP7120_FPGA 5 /* PMC-Sierra MSP7120 Emulation */
- #define MACH_MSP_OTHER 255 /* PMC-Sierra unknown board type */
-
--#define MACH_WRPPMC 1
+-#define __HAVE_ARCH_STRNCMP
+-static inline int strncmp(const char *__cs, const char *__ct, size_t __n)
+-{
+- register int __res;
+- unsigned long __dummy;
-
--/*
-- * Valid machtype for group Broadcom
-- */
--#define MACH_GROUP_BRCM 23 /* Broadcom */
--#define MACH_BCM47XX 1 /* Broadcom BCM47XX */
+- if (__n == 0)
+- return 0;
-
- #define CL_SIZE COMMAND_LINE_SIZE
-
-+extern char *system_type;
- const char *get_system_type(void);
-
- extern unsigned long mips_machtype;
-diff --git a/include/asm-mips/bugs.h b/include/asm-mips/bugs.h
-index 0d7f9c1..9dc10df 100644
---- a/include/asm-mips/bugs.h
-+++ b/include/asm-mips/bugs.h
-@@ -1,19 +1,34 @@
- /*
- * This is included by init/main.c to check for architecture-dependent bugs.
- *
-+ * Copyright (C) 2007 Maciej W. Rozycki
-+ *
- * Needs:
- * void check_bugs(void);
- */
- #ifndef _ASM_BUGS_H
- #define _ASM_BUGS_H
-
-+#include <linux/bug.h>
- #include <linux/delay.h>
+- __asm__ __volatile__(
+- "mov.b @%1+, %3\n"
+- "1:\n\t"
+- "mov.b @%0+, %2\n\t"
+- "cmp/eq %6, %0\n\t"
+- "bt/s 2f\n\t"
+- " cmp/eq #0, %3\n\t"
+- "bt/s 3f\n\t"
+- " cmp/eq %3, %2\n\t"
+- "bt/s 1b\n\t"
+- " mov.b @%1+, %3\n\t"
+- "add #-2, %1\n\t"
+- "mov.b @%1, %3\n"
+- "2:\n\t"
+- "sub %3, %2\n"
+- "3:"
+- :"=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
+- : "0" (__cs), "1" (__ct), "r" (__cs+__n)
+- : "t");
+-
+- return __res;
+-}
+-
+-#define __HAVE_ARCH_MEMSET
+-extern void *memset(void *__s, int __c, size_t __count);
+-
+-#define __HAVE_ARCH_MEMCPY
+-extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
+-
+-#define __HAVE_ARCH_MEMMOVE
+-extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
+-
+-#define __HAVE_ARCH_MEMCHR
+-extern void *memchr(const void *__s, int __c, size_t __n);
+-
+-#define __HAVE_ARCH_STRLEN
+-extern size_t strlen(const char *);
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH_STRING_H */
++#ifdef CONFIG_SUPERH32
++# include "string_32.h"
++#else
++# include "string_64.h"
++#endif
+diff --git a/include/asm-sh/string_32.h b/include/asm-sh/string_32.h
+new file mode 100644
+index 0000000..55f8db6
+--- /dev/null
++++ b/include/asm-sh/string_32.h
+@@ -0,0 +1,131 @@
++#ifndef __ASM_SH_STRING_H
++#define __ASM_SH_STRING_H
+
- #include <asm/cpu.h>
- #include <asm/cpu-info.h>
-
-+extern int daddiu_bug;
++#ifdef __KERNEL__
+
-+extern void check_bugs64_early(void);
++/*
++ * Copyright (C) 1999 Niibe Yutaka
++ * But consider these trivial functions to be public domain.
++ */
+
- extern void check_bugs32(void);
- extern void check_bugs64(void);
-
-+static inline void check_bugs_early(void)
++#define __HAVE_ARCH_STRCPY
++static inline char *strcpy(char *__dest, const char *__src)
+{
-+#ifdef CONFIG_64BIT
-+ check_bugs64_early();
-+#endif
++ register char *__xdest = __dest;
++ unsigned long __dummy;
++
++ __asm__ __volatile__("1:\n\t"
++ "mov.b @%1+, %2\n\t"
++ "mov.b %2, @%0\n\t"
++ "cmp/eq #0, %2\n\t"
++ "bf/s 1b\n\t"
++ " add #1, %0\n\t"
++ : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
++ : "0" (__dest), "1" (__src)
++ : "memory", "t");
++
++ return __xdest;
+}
+
- static inline void check_bugs(void)
- {
- unsigned int cpu = smp_processor_id();
-@@ -25,4 +40,14 @@ static inline void check_bugs(void)
- #endif
- }
-
-+static inline int r4k_daddiu_bug(void)
++#define __HAVE_ARCH_STRNCPY
++static inline char *strncpy(char *__dest, const char *__src, size_t __n)
+{
-+#ifdef CONFIG_64BIT
-+ WARN_ON(daddiu_bug < 0);
-+ return daddiu_bug != 0;
-+#else
-+ return 0;
-+#endif
++ register char *__xdest = __dest;
++ unsigned long __dummy;
++
++ if (__n == 0)
++ return __xdest;
++
++ __asm__ __volatile__(
++ "1:\n"
++ "mov.b @%1+, %2\n\t"
++ "mov.b %2, @%0\n\t"
++ "cmp/eq #0, %2\n\t"
++ "bt/s 2f\n\t"
++ " cmp/eq %5,%1\n\t"
++ "bf/s 1b\n\t"
++ " add #1, %0\n"
++ "2:"
++ : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
++ : "0" (__dest), "1" (__src), "r" (__src+__n)
++ : "memory", "t");
++
++ return __xdest;
+}
+
- #endif /* _ASM_BUGS_H */
-diff --git a/include/asm-mips/cpu-info.h b/include/asm-mips/cpu-info.h
-index ed5c02c..0c5a358 100644
---- a/include/asm-mips/cpu-info.h
-+++ b/include/asm-mips/cpu-info.h
-@@ -55,6 +55,7 @@ struct cpuinfo_mips {
- struct cache_desc scache; /* Secondary cache */
- struct cache_desc tcache; /* Tertiary/split secondary cache */
- int srsets; /* Shadow register sets */
-+ int core; /* physical core number */
- #if defined(CONFIG_MIPS_MT_SMTC)
- /*
- * In the MIPS MT "SMTC" model, each TC is considered
-@@ -63,8 +64,10 @@ struct cpuinfo_mips {
- * to all TCs within the same VPE.
- */
- int vpe_id; /* Virtual Processor number */
-- int tc_id; /* Thread Context number */
- #endif /* CONFIG_MIPS_MT */
-+#ifdef CONFIG_MIPS_MT_SMTC
-+ int tc_id; /* Thread Context number */
-+#endif
- void *data; /* Additional data */
- } __attribute__((aligned(SMP_CACHE_BYTES)));
-
-diff --git a/include/asm-mips/cpu.h b/include/asm-mips/cpu.h
-index 54fc18a..bf5bbc7 100644
---- a/include/asm-mips/cpu.h
-+++ b/include/asm-mips/cpu.h
-@@ -195,8 +195,8 @@ enum cpu_type_enum {
- * MIPS32 class processors
- */
- CPU_4KC, CPU_4KEC, CPU_4KSC, CPU_24K, CPU_34K, CPU_74K, CPU_AU1000,
-- CPU_AU1100, CPU_AU1200, CPU_AU1500, CPU_AU1550, CPU_PR4450,
-- CPU_BCM3302, CPU_BCM4710,
-+ CPU_AU1100, CPU_AU1200, CPU_AU1210, CPU_AU1250, CPU_AU1500, CPU_AU1550,
-+ CPU_PR4450, CPU_BCM3302, CPU_BCM4710,
-
- /*
- * MIPS64 class processors
-diff --git a/include/asm-mips/delay.h b/include/asm-mips/delay.h
-index fab3213..b0bccd2 100644
---- a/include/asm-mips/delay.h
-+++ b/include/asm-mips/delay.h
-@@ -6,13 +6,16 @@
- * Copyright (C) 1994 by Waldorf Electronics
- * Copyright (C) 1995 - 2000, 01, 03 by Ralf Baechle
- * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
-+ * Copyright (C) 2007 Maciej W. Rozycki
- */
- #ifndef _ASM_DELAY_H
- #define _ASM_DELAY_H
-
- #include <linux/param.h>
- #include <linux/smp.h>
++#define __HAVE_ARCH_STRCMP
++static inline int strcmp(const char *__cs, const char *__ct)
++{
++ register int __res;
++ unsigned long __dummy;
++
++ __asm__ __volatile__(
++ "mov.b @%1+, %3\n"
++ "1:\n\t"
++ "mov.b @%0+, %2\n\t"
++ "cmp/eq #0, %3\n\t"
++ "bt 2f\n\t"
++ "cmp/eq %2, %3\n\t"
++ "bt/s 1b\n\t"
++ " mov.b @%1+, %3\n\t"
++ "add #-2, %1\n\t"
++ "mov.b @%1, %3\n\t"
++ "sub %3, %2\n"
++ "2:"
++ : "=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
++ : "0" (__cs), "1" (__ct)
++ : "t");
++
++ return __res;
++}
++
++#define __HAVE_ARCH_STRNCMP
++static inline int strncmp(const char *__cs, const char *__ct, size_t __n)
++{
++ register int __res;
++ unsigned long __dummy;
++
++ if (__n == 0)
++ return 0;
++
++ __asm__ __volatile__(
++ "mov.b @%1+, %3\n"
++ "1:\n\t"
++ "mov.b @%0+, %2\n\t"
++ "cmp/eq %6, %0\n\t"
++ "bt/s 2f\n\t"
++ " cmp/eq #0, %3\n\t"
++ "bt/s 3f\n\t"
++ " cmp/eq %3, %2\n\t"
++ "bt/s 1b\n\t"
++ " mov.b @%1+, %3\n\t"
++ "add #-2, %1\n\t"
++ "mov.b @%1, %3\n"
++ "2:\n\t"
++ "sub %3, %2\n"
++ "3:"
++ :"=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
++ : "0" (__cs), "1" (__ct), "r" (__cs+__n)
++ : "t");
++
++ return __res;
++}
++
++#define __HAVE_ARCH_MEMSET
++extern void *memset(void *__s, int __c, size_t __count);
++
++#define __HAVE_ARCH_MEMCPY
++extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
++
++#define __HAVE_ARCH_MEMMOVE
++extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
++
++#define __HAVE_ARCH_MEMCHR
++extern void *memchr(const void *__s, int __c, size_t __n);
++
++#define __HAVE_ARCH_STRLEN
++extern size_t strlen(const char *);
++
++#endif /* __KERNEL__ */
++
++#endif /* __ASM_SH_STRING_H */
+diff --git a/include/asm-sh/string_64.h b/include/asm-sh/string_64.h
+new file mode 100644
+index 0000000..aa1fef2
+--- /dev/null
++++ b/include/asm-sh/string_64.h
+@@ -0,0 +1,17 @@
++#ifndef __ASM_SH_STRING_64_H
++#define __ASM_SH_STRING_64_H
+
- #include <asm/compiler.h>
-+#include <asm/war.h>
-
- static inline void __delay(unsigned long loops)
- {
-@@ -25,7 +28,7 @@ static inline void __delay(unsigned long loops)
- " .set reorder \n"
- : "=r" (loops)
- : "0" (loops));
-- else if (sizeof(long) == 8)
-+ else if (sizeof(long) == 8 && !DADDI_WAR)
- __asm__ __volatile__ (
- " .set noreorder \n"
- " .align 3 \n"
-@@ -34,6 +37,15 @@ static inline void __delay(unsigned long loops)
- " .set reorder \n"
- : "=r" (loops)
- : "0" (loops));
-+ else if (sizeof(long) == 8 && DADDI_WAR)
-+ __asm__ __volatile__ (
-+ " .set noreorder \n"
-+ " .align 3 \n"
-+ "1: bnez %0, 1b \n"
-+ " dsubu %0, %2 \n"
-+ " .set reorder \n"
-+ : "=r" (loops)
-+ : "0" (loops), "r" (1));
- }
-
-
-@@ -50,7 +62,7 @@ static inline void __delay(unsigned long loops)
-
- static inline void __udelay(unsigned long usecs, unsigned long lpj)
- {
-- unsigned long lo;
-+ unsigned long hi, lo;
-
- /*
- * The rates of 128 is rounded wrongly by the catchall case
-@@ -70,11 +82,16 @@ static inline void __udelay(unsigned long usecs, unsigned long lpj)
- : "=h" (usecs), "=l" (lo)
- : "r" (usecs), "r" (lpj)
- : GCC_REG_ACCUM);
-- else if (sizeof(long) == 8)
-+ else if (sizeof(long) == 8 && !R4000_WAR)
- __asm__("dmultu\t%2, %3"
- : "=h" (usecs), "=l" (lo)
- : "r" (usecs), "r" (lpj)
- : GCC_REG_ACCUM);
-+ else if (sizeof(long) == 8 && R4000_WAR)
-+ __asm__("dmultu\t%3, %4\n\tmfhi\t%0"
-+ : "=r" (usecs), "=h" (hi), "=l" (lo)
-+ : "r" (usecs), "r" (lpj)
-+ : GCC_REG_ACCUM);
-
- __delay(usecs);
- }
-diff --git a/include/asm-mips/dma.h b/include/asm-mips/dma.h
-index d6a6c21..1353c81 100644
---- a/include/asm-mips/dma.h
-+++ b/include/asm-mips/dma.h
-@@ -84,10 +84,9 @@
- * Deskstations or Acer PICA but not the much more versatile DMA logic used
- * for the local devices on Acer PICA or Magnums.
- */
--#ifdef CONFIG_SGI_IP22
--/* Horrible hack to have a correct DMA window on IP22 */
--#include <asm/sgi/mc.h>
--#define MAX_DMA_ADDRESS (PAGE_OFFSET + SGIMC_SEG0_BADDR + 0x01000000)
-+#if defined(CONFIG_SGI_IP22) || defined(CONFIG_SGI_IP28)
-+/* don't care; ISA bus master won't work, ISA slave DMA supports 32bit addr */
-+#define MAX_DMA_ADDRESS PAGE_OFFSET
- #else
- #define MAX_DMA_ADDRESS (PAGE_OFFSET + 0x01000000)
- #endif
-diff --git a/include/asm-mips/fixmap.h b/include/asm-mips/fixmap.h
-index f27b96c..9cc8522 100644
---- a/include/asm-mips/fixmap.h
-+++ b/include/asm-mips/fixmap.h
-@@ -60,16 +60,6 @@ enum fixed_addresses {
- __end_of_fixed_addresses
- };
-
--extern void __set_fixmap(enum fixed_addresses idx,
-- unsigned long phys, pgprot_t flags);
--
--#define set_fixmap(idx, phys) \
-- __set_fixmap(idx, phys, PAGE_KERNEL)
--/*
-- * Some hardware wants to get fixmapped without caching.
-- */
--#define set_fixmap_nocache(idx, phys) \
-- __set_fixmap(idx, phys, PAGE_KERNEL_NOCACHE)
- /*
- * used by vmalloc.c.
- *
-diff --git a/include/asm-mips/fw/cfe/cfe_api.h b/include/asm-mips/fw/cfe/cfe_api.h
-index 1003e71..0995575 100644
---- a/include/asm-mips/fw/cfe/cfe_api.h
-+++ b/include/asm-mips/fw/cfe/cfe_api.h
-@@ -15,49 +15,27 @@
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
- */
--
--/* *********************************************************************
-- *
-- * Broadcom Common Firmware Environment (CFE)
-- *
-- * Device function prototypes File: cfe_api.h
-- *
-- * This file contains declarations for doing callbacks to
-- * cfe from an application. It should be the only header
-- * needed by the application to use this library
-- *
-- * Authors: Mitch Lichtenberg, Chris Demetriou
-- *
-- ********************************************************************* */
--
+/*
-+ * Broadcom Common Firmware Environment (CFE)
++ * include/asm-sh/string_64.h
+ *
-+ * This file contains declarations for doing callbacks to
-+ * cfe from an application. It should be the only header
-+ * needed by the application to use this library
++ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
-+ * Authors: Mitch Lichtenberg, Chris Demetriou
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
- #ifndef CFE_API_H
- #define CFE_API_H
++
++#define __HAVE_ARCH_MEMCPY
++extern void *memcpy(void *dest, const void *src, size_t count);
++
++#endif /* __ASM_SH_STRING_64_H */
+diff --git a/include/asm-sh/system.h b/include/asm-sh/system.h
+index 4faa2fb..772cd1a 100644
+--- a/include/asm-sh/system.h
++++ b/include/asm-sh/system.h
+@@ -12,60 +12,9 @@
+ #include <asm/types.h>
+ #include <asm/ptrace.h>
+
+-struct task_struct *__switch_to(struct task_struct *prev,
+- struct task_struct *next);
++#define AT_VECTOR_SIZE_ARCH 5 /* entries in ARCH_DLINFO */
+-#define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */
-/*
-- * Apply customizations here for different OSes. These need to:
-- * * typedef uint64_t, int64_t, intptr_t, uintptr_t.
-- * * define cfe_strlen() if use of an existing function is desired.
-- * * define CFE_API_IMPL_NAMESPACE if API functions are to use
-- * names in the implementation namespace.
-- * Also, optionally, if the build environment does not do so automatically,
-- * CFE_API_* can be defined here as desired.
+- * switch_to() should switch tasks to task nr n, first
- */
--/* Begin customization. */
- #include <linux/types.h>
- #include <linux/string.h>
-
- typedef long intptr_t;
+-
+-#define switch_to(prev, next, last) do { \
+- struct task_struct *__last; \
+- register unsigned long *__ts1 __asm__ ("r1") = &prev->thread.sp; \
+- register unsigned long *__ts2 __asm__ ("r2") = &prev->thread.pc; \
+- register unsigned long *__ts4 __asm__ ("r4") = (unsigned long *)prev; \
+- register unsigned long *__ts5 __asm__ ("r5") = (unsigned long *)next; \
+- register unsigned long *__ts6 __asm__ ("r6") = &next->thread.sp; \
+- register unsigned long __ts7 __asm__ ("r7") = next->thread.pc; \
+- __asm__ __volatile__ (".balign 4\n\t" \
+- "stc.l gbr, @-r15\n\t" \
+- "sts.l pr, @-r15\n\t" \
+- "mov.l r8, @-r15\n\t" \
+- "mov.l r9, @-r15\n\t" \
+- "mov.l r10, @-r15\n\t" \
+- "mov.l r11, @-r15\n\t" \
+- "mov.l r12, @-r15\n\t" \
+- "mov.l r13, @-r15\n\t" \
+- "mov.l r14, @-r15\n\t" \
+- "mov.l r15, @r1 ! save SP\n\t" \
+- "mov.l @r6, r15 ! change to new stack\n\t" \
+- "mova 1f, %0\n\t" \
+- "mov.l %0, @r2 ! save PC\n\t" \
+- "mov.l 2f, %0\n\t" \
+- "jmp @%0 ! call __switch_to\n\t" \
+- " lds r7, pr ! with return to new PC\n\t" \
+- ".balign 4\n" \
+- "2:\n\t" \
+- ".long __switch_to\n" \
+- "1:\n\t" \
+- "mov.l @r15+, r14\n\t" \
+- "mov.l @r15+, r13\n\t" \
+- "mov.l @r15+, r12\n\t" \
+- "mov.l @r15+, r11\n\t" \
+- "mov.l @r15+, r10\n\t" \
+- "mov.l @r15+, r9\n\t" \
+- "mov.l @r15+, r8\n\t" \
+- "lds.l @r15+, pr\n\t" \
+- "ldc.l @r15+, gbr\n\t" \
+- : "=z" (__last) \
+- : "r" (__ts1), "r" (__ts2), "r" (__ts4), \
+- "r" (__ts5), "r" (__ts6), "r" (__ts7) \
+- : "r3", "t"); \
+- last = __last; \
+-} while (0)
+-
+-#ifdef CONFIG_CPU_SH4A
++#if defined(CONFIG_CPU_SH4A) || defined(CONFIG_CPU_SH5)
+ #define __icbi() \
+ { \
+ unsigned long __addr; \
+@@ -91,7 +40,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ * Historically we have only done this type of barrier for the MMUCR, but
+ * it's also necessary for the CCR, so we make it generic here instead.
+ */
+-#ifdef CONFIG_CPU_SH4A
++#if defined(CONFIG_CPU_SH4A) || defined(CONFIG_CPU_SH5)
+ #define mb() __asm__ __volatile__ ("synco": : :"memory")
+ #define rmb() mb()
+ #define wmb() __asm__ __volatile__ ("synco": : :"memory")
+@@ -119,63 +68,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
--#define cfe_strlen strlen
+ #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
--#define CFE_API_ALL
--#define CFE_API_STRLEN_CUSTOM
--/* End customization. */
+-/*
+- * Jump to P2 area.
+- * When handling TLB or caches, we need to do it from P2 area.
+- */
+-#define jump_to_P2() \
+-do { \
+- unsigned long __dummy; \
+- __asm__ __volatile__( \
+- "mov.l 1f, %0\n\t" \
+- "or %1, %0\n\t" \
+- "jmp @%0\n\t" \
+- " nop\n\t" \
+- ".balign 4\n" \
+- "1: .long 2f\n" \
+- "2:" \
+- : "=&r" (__dummy) \
+- : "r" (0x20000000)); \
+-} while (0)
-
+-/*
+- * Back to P1 area.
+- */
+-#define back_to_P1() \
+-do { \
+- unsigned long __dummy; \
+- ctrl_barrier(); \
+- __asm__ __volatile__( \
+- "mov.l 1f, %0\n\t" \
+- "jmp @%0\n\t" \
+- " nop\n\t" \
+- ".balign 4\n" \
+- "1: .long 2f\n" \
+- "2:" \
+- : "=&r" (__dummy)); \
+-} while (0)
-
--/* *********************************************************************
-- * Constants
-- ********************************************************************* */
-+/*
-+ * Constants
-+ */
+-static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
+-{
+- unsigned long flags, retval;
+-
+- local_irq_save(flags);
+- retval = *m;
+- *m = val;
+- local_irq_restore(flags);
+- return retval;
+-}
+-
+-static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
+-{
+- unsigned long flags, retval;
+-
+- local_irq_save(flags);
+- retval = *m;
+- *m = val & 0xff;
+- local_irq_restore(flags);
+- return retval;
+-}
++#ifdef CONFIG_GUSA_RB
++#include <asm/cmpxchg-grb.h>
++#else
++#include <asm/cmpxchg-irq.h>
++#endif
- /* Seal indicating CFE's presence, passed to user program. */
- #define CFE_EPTSEAL 0x43464531
-@@ -109,54 +87,13 @@ typedef struct {
+ extern void __xchg_called_with_bad_pointer(void);
+@@ -202,20 +99,6 @@ extern void __xchg_called_with_bad_pointer(void);
+ #define xchg(ptr,x) \
+ ((__typeof__(*(ptr)))__xchg((ptr),(unsigned long)(x), sizeof(*(ptr))))
- /*
-- * cfe_strlen is handled specially: If already defined, it has been
-- * overridden in this environment with a standard strlen-like function.
-- */
--#ifdef cfe_strlen
--# define CFE_API_STRLEN_CUSTOM
--#else
--# ifdef CFE_API_IMPL_NAMESPACE
--# define cfe_strlen(a) __cfe_strlen(a)
--# endif
--int cfe_strlen(char *name);
--#endif
+-static inline unsigned long __cmpxchg_u32(volatile int * m, unsigned long old,
+- unsigned long new)
+-{
+- __u32 retval;
+- unsigned long flags;
-
--/*
- * Defines and prototypes for functions which take no arguments.
+- local_irq_save(flags);
+- retval = *m;
+- if (retval == old)
+- *m = new;
+- local_irq_restore(flags); /* implies memory barrier */
+- return retval;
+-}
+-
+ /* This function doesn't exist, so you'll get a linker error
+ * if something tries to do an invalid cmpxchg(). */
+ extern void __cmpxchg_called_with_bad_pointer(void);
+@@ -255,10 +138,14 @@ static inline void *set_exception_table_evt(unsigned int evt, void *handler)
*/
--#ifdef CFE_API_IMPL_NAMESPACE
--int64_t __cfe_getticks(void);
--#define cfe_getticks() __cfe_getticks()
+ #ifdef CONFIG_CPU_SH2A
+ extern unsigned int instruction_size(unsigned int insn);
-#else
- int64_t cfe_getticks(void);
--#endif
++#elif defined(CONFIG_SUPERH32)
+ #define instruction_size(insn) (2)
++#else
++#define instruction_size(insn) (4)
+ #endif
- /*
- * Defines and prototypes for the rest of the functions.
- */
--#ifdef CFE_API_IMPL_NAMESPACE
--#define cfe_close(a) __cfe_close(a)
--#define cfe_cpu_start(a, b, c, d, e) __cfe_cpu_start(a, b, c, d, e)
--#define cfe_cpu_stop(a) __cfe_cpu_stop(a)
--#define cfe_enumenv(a, b, d, e, f) __cfe_enumenv(a, b, d, e, f)
--#define cfe_enummem(a, b, c, d, e) __cfe_enummem(a, b, c, d, e)
--#define cfe_exit(a, b) __cfe_exit(a, b)
--#define cfe_flushcache(a) __cfe_cacheflush(a)
--#define cfe_getdevinfo(a) __cfe_getdevinfo(a)
--#define cfe_getenv(a, b, c) __cfe_getenv(a, b, c)
--#define cfe_getfwinfo(a) __cfe_getfwinfo(a)
--#define cfe_getstdhandle(a) __cfe_getstdhandle(a)
--#define cfe_init(a, b) __cfe_init(a, b)
--#define cfe_inpstat(a) __cfe_inpstat(a)
--#define cfe_ioctl(a, b, c, d, e, f) __cfe_ioctl(a, b, c, d, e, f)
--#define cfe_open(a) __cfe_open(a)
--#define cfe_read(a, b, c) __cfe_read(a, b, c)
--#define cfe_readblk(a, b, c, d) __cfe_readblk(a, b, c, d)
--#define cfe_setenv(a, b) __cfe_setenv(a, b)
--#define cfe_write(a, b, c) __cfe_write(a, b, c)
--#define cfe_writeblk(a, b, c, d) __cfe_writeblk(a, b, c, d)
--#endif /* CFE_API_IMPL_NAMESPACE */
--
- int cfe_close(int handle);
- int cfe_cpu_start(int cpu, void (*fn) (void), long sp, long gp, long a1);
- int cfe_cpu_stop(int cpu);
-diff --git a/include/asm-mips/fw/cfe/cfe_error.h b/include/asm-mips/fw/cfe/cfe_error.h
-index 975f000..b803746 100644
---- a/include/asm-mips/fw/cfe/cfe_error.h
-+++ b/include/asm-mips/fw/cfe/cfe_error.h
-@@ -16,18 +16,13 @@
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++extern unsigned long cached_to_uncached;
++
+ /* XXX
+ * disable hlt during certain critical i/o operations
*/
+@@ -270,13 +157,35 @@ void default_idle(void);
+ void per_cpu_trap_init(void);
--/* *********************************************************************
-- *
-- * Broadcom Common Firmware Environment (CFE)
-- *
-- * Error codes File: cfe_error.h
-- *
-- * CFE's global error code list is here.
-- *
-- * Author: Mitch Lichtenberg
-- *
-- ********************************************************************* */
--
-+/*
-+ * Broadcom Common Firmware Environment (CFE)
-+ *
-+ * CFE's global error code list is here.
-+ *
-+ * Author: Mitch Lichtenberg
-+ */
-
- #define CFE_OK 0
- #define CFE_ERR -1 /* generic error */
-diff --git a/include/asm-mips/mach-cobalt/cobalt.h b/include/asm-mips/mach-cobalt/cobalt.h
-index a79e7ca..5b9fce7 100644
---- a/include/asm-mips/mach-cobalt/cobalt.h
-+++ b/include/asm-mips/mach-cobalt/cobalt.h
-@@ -1,5 +1,5 @@
- /*
-- * Lowlevel hardware stuff for the MIPS based Cobalt microservers.
-+ * The Cobalt board ID information.
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
-@@ -12,9 +12,6 @@
- #ifndef __ASM_COBALT_H
- #define __ASM_COBALT_H
-
--/*
-- * The Cobalt board ID information.
-- */
- extern int cobalt_board_id;
+ asmlinkage void break_point_trap(void);
+-asmlinkage void debug_trap_handler(unsigned long r4, unsigned long r5,
+- unsigned long r6, unsigned long r7,
+- struct pt_regs __regs);
+-asmlinkage void bug_trap_handler(unsigned long r4, unsigned long r5,
+- unsigned long r6, unsigned long r7,
+- struct pt_regs __regs);
++
++#ifdef CONFIG_SUPERH32
++#define BUILD_TRAP_HANDLER(name) \
++asmlinkage void name##_trap_handler(unsigned long r4, unsigned long r5, \
++ unsigned long r6, unsigned long r7, \
++ struct pt_regs __regs)
++
++#define TRAP_HANDLER_DECL \
++ struct pt_regs *regs = RELOC_HIDE(&__regs, 0); \
++ unsigned int vec = regs->tra; \
++ (void)vec;
++#else
++#define BUILD_TRAP_HANDLER(name) \
++asmlinkage void name##_trap_handler(unsigned int vec, struct pt_regs *regs)
++#define TRAP_HANDLER_DECL
++#endif
++
++BUILD_TRAP_HANDLER(address_error);
++BUILD_TRAP_HANDLER(debug);
++BUILD_TRAP_HANDLER(bug);
++BUILD_TRAP_HANDLER(fpu_error);
++BUILD_TRAP_HANDLER(fpu_state_restore);
- #define COBALT_BRD_ID_QUBE1 0x3
-@@ -22,14 +19,4 @@ extern int cobalt_board_id;
- #define COBALT_BRD_ID_QUBE2 0x5
- #define COBALT_BRD_ID_RAQ2 0x6
+ #define arch_align_stack(x) (x)
--#define COBALT_KEY_PORT ((~*(volatile unsigned int *) CKSEG1ADDR(0x1d000000) >> 24) & COBALT_KEY_MASK)
--# define COBALT_KEY_CLEAR (1 << 1)
--# define COBALT_KEY_LEFT (1 << 2)
--# define COBALT_KEY_UP (1 << 3)
--# define COBALT_KEY_DOWN (1 << 4)
--# define COBALT_KEY_RIGHT (1 << 5)
--# define COBALT_KEY_ENTER (1 << 6)
--# define COBALT_KEY_SELECT (1 << 7)
--# define COBALT_KEY_MASK 0xfe
--
- #endif /* __ASM_COBALT_H */
-diff --git a/include/asm-mips/mach-ip28/cpu-feature-overrides.h b/include/asm-mips/mach-ip28/cpu-feature-overrides.h
++#ifdef CONFIG_SUPERH32
++# include "system_32.h"
++#else
++# include "system_64.h"
++#endif
++
+ #endif
+diff --git a/include/asm-sh/system_32.h b/include/asm-sh/system_32.h
new file mode 100644
-index 0000000..9a53b32
+index 0000000..7ff08d9
--- /dev/null
-+++ b/include/asm-mips/mach-ip28/cpu-feature-overrides.h
-@@ -0,0 +1,50 @@
++++ b/include/asm-sh/system_32.h
+@@ -0,0 +1,99 @@
++#ifndef __ASM_SH_SYSTEM_32_H
++#define __ASM_SH_SYSTEM_32_H
++
++#include <linux/types.h>
++
++struct task_struct *__switch_to(struct task_struct *prev,
++ struct task_struct *next);
++
+/*
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ *
-+ * Copyright (C) 2003 Ralf Baechle
-+ * 6/2004 pf
++ * switch_to() should switch tasks to task nr n, first
+ */
-+#ifndef __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
-+#define __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
++#define switch_to(prev, next, last) \
++do { \
++ register u32 *__ts1 __asm__ ("r1") = (u32 *)&prev->thread.sp; \
++ register u32 *__ts2 __asm__ ("r2") = (u32 *)&prev->thread.pc; \
++ register u32 *__ts4 __asm__ ("r4") = (u32 *)prev; \
++ register u32 *__ts5 __asm__ ("r5") = (u32 *)next; \
++ register u32 *__ts6 __asm__ ("r6") = (u32 *)&next->thread.sp; \
++ register u32 __ts7 __asm__ ("r7") = next->thread.pc; \
++ struct task_struct *__last; \
++ \
++ __asm__ __volatile__ ( \
++ ".balign 4\n\t" \
++ "stc.l gbr, @-r15\n\t" \
++ "sts.l pr, @-r15\n\t" \
++ "mov.l r8, @-r15\n\t" \
++ "mov.l r9, @-r15\n\t" \
++ "mov.l r10, @-r15\n\t" \
++ "mov.l r11, @-r15\n\t" \
++ "mov.l r12, @-r15\n\t" \
++ "mov.l r13, @-r15\n\t" \
++ "mov.l r14, @-r15\n\t" \
++ "mov.l r15, @r1\t! save SP\n\t" \
++ "mov.l @r6, r15\t! change to new stack\n\t" \
++ "mova 1f, %0\n\t" \
++ "mov.l %0, @r2\t! save PC\n\t" \
++ "mov.l 2f, %0\n\t" \
++ "jmp @%0\t! call __switch_to\n\t" \
++ " lds r7, pr\t! with return to new PC\n\t" \
++ ".balign 4\n" \
++ "2:\n\t" \
++ ".long __switch_to\n" \
++ "1:\n\t" \
++ "mov.l @r15+, r14\n\t" \
++ "mov.l @r15+, r13\n\t" \
++ "mov.l @r15+, r12\n\t" \
++ "mov.l @r15+, r11\n\t" \
++ "mov.l @r15+, r10\n\t" \
++ "mov.l @r15+, r9\n\t" \
++ "mov.l @r15+, r8\n\t" \
++ "lds.l @r15+, pr\n\t" \
++ "ldc.l @r15+, gbr\n\t" \
++ : "=z" (__last) \
++ : "r" (__ts1), "r" (__ts2), "r" (__ts4), \
++ "r" (__ts5), "r" (__ts6), "r" (__ts7) \
++ : "r3", "t"); \
++ \
++ last = __last; \
++} while (0)
++
++#define __uses_jump_to_uncached __attribute__ ((__section__ (".uncached.text")))
+
+/*
-+ * IP28 only comes with R10000 family processors all using the same config
++ * Jump to uncached area.
++ * When handling TLB or caches, we need to do it from an uncached area.
+ */
-+#define cpu_has_watch 1
-+#define cpu_has_mips16 0
-+#define cpu_has_divec 0
-+#define cpu_has_vce 0
-+#define cpu_has_cache_cdex_p 0
-+#define cpu_has_cache_cdex_s 0
-+#define cpu_has_prefetch 1
-+#define cpu_has_mcheck 0
-+#define cpu_has_ejtag 0
-+
-+#define cpu_has_llsc 1
-+#define cpu_has_vtag_icache 0
-+#define cpu_has_dc_aliases 0 /* see probe_pcache() */
-+#define cpu_has_ic_fills_f_dc 0
-+#define cpu_has_dsp 0
-+#define cpu_icache_snoops_remote_store 1
-+#define cpu_has_mipsmt 0
-+#define cpu_has_userlocal 0
-+
-+#define cpu_has_nofpuex 0
-+#define cpu_has_64bits 1
-+
-+#define cpu_has_4kex 1
-+#define cpu_has_4k_cache 1
-+
-+#define cpu_has_inclusive_pcaches 1
-+
-+#define cpu_dcache_line_size() 32
-+#define cpu_icache_line_size() 64
++#define jump_to_uncached() \
++do { \
++ unsigned long __dummy; \
++ \
++ __asm__ __volatile__( \
++ "mova 1f, %0\n\t" \
++ "add %1, %0\n\t" \
++ "jmp @%0\n\t" \
++ " nop\n\t" \
++ ".balign 4\n" \
++ "1:" \
++ : "=&z" (__dummy) \
++ : "r" (cached_to_uncached)); \
++} while (0)
+
-+#define cpu_has_mips32r1 0
-+#define cpu_has_mips32r2 0
-+#define cpu_has_mips64r1 0
-+#define cpu_has_mips64r2 0
++/*
++ * Back to cached area.
++ */
++#define back_to_cached() \
++do { \
++ unsigned long __dummy; \
++ ctrl_barrier(); \
++ __asm__ __volatile__( \
++ "mov.l 1f, %0\n\t" \
++ "jmp @%0\n\t" \
++ " nop\n\t" \
++ ".balign 4\n" \
++ "1: .long 2f\n" \
++ "2:" \
++ : "=&r" (__dummy)); \
++} while (0)
+
-+#endif /* __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H */
-diff --git a/include/asm-mips/mach-ip28/ds1286.h b/include/asm-mips/mach-ip28/ds1286.h
-new file mode 100644
-index 0000000..471bb9a
---- /dev/null
-+++ b/include/asm-mips/mach-ip28/ds1286.h
-@@ -0,0 +1,4 @@
-+#ifndef __ASM_MACH_IP28_DS1286_H
-+#define __ASM_MACH_IP28_DS1286_H
-+#include <asm/mach-ip22/ds1286.h>
-+#endif /* __ASM_MACH_IP28_DS1286_H */
-diff --git a/include/asm-mips/mach-ip28/spaces.h b/include/asm-mips/mach-ip28/spaces.h
++#endif /* __ASM_SH_SYSTEM_32_H */
+diff --git a/include/asm-sh/system_64.h b/include/asm-sh/system_64.h
new file mode 100644
-index 0000000..05aabb2
+index 0000000..943acf5
--- /dev/null
-+++ b/include/asm-mips/mach-ip28/spaces.h
-@@ -0,0 +1,22 @@
++++ b/include/asm-sh/system_64.h
+@@ -0,0 +1,40 @@
++#ifndef __ASM_SH_SYSTEM_64_H
++#define __ASM_SH_SYSTEM_64_H
++
+/*
++ * include/asm-sh/system_64.h
++ *
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003 Paul Mundt
++ * Copyright (C) 2004 Richard Curnow
++ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
-+ *
-+ * Copyright (C) 1994 - 1999, 2000, 03, 04 Ralf Baechle
-+ * Copyright (C) 2000, 2002 Maciej W. Rozycki
-+ * Copyright (C) 1990, 1999, 2000 Silicon Graphics, Inc.
-+ * 2004 pf
+ */
-+#ifndef _ASM_MACH_IP28_SPACES_H
-+#define _ASM_MACH_IP28_SPACES_H
-+
-+#define CAC_BASE 0xa800000000000000
-+
-+#define HIGHMEM_START (~0UL)
-+
-+#define PHYS_OFFSET _AC(0x20000000, UL)
-+
-+#include <asm/mach-generic/spaces.h>
++#include <asm/processor.h>
+
-+#endif /* _ASM_MACH_IP28_SPACES_H */
-diff --git a/include/asm-mips/mach-ip28/war.h b/include/asm-mips/mach-ip28/war.h
-new file mode 100644
-index 0000000..a1baafa
---- /dev/null
-+++ b/include/asm-mips/mach-ip28/war.h
-@@ -0,0 +1,25 @@
+/*
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ *
-+ * Copyright (C) 2002, 2004, 2007 by Ralf Baechle <ralf at linux-mips.org>
++ * switch_to() should switch tasks to task nr n, first
+ */
-+#ifndef __ASM_MIPS_MACH_IP28_WAR_H
-+#define __ASM_MIPS_MACH_IP28_WAR_H
++struct task_struct *sh64_switch_to(struct task_struct *prev,
++ struct thread_struct *prev_thread,
++ struct task_struct *next,
++ struct thread_struct *next_thread);
+
-+#define R4600_V1_INDEX_ICACHEOP_WAR 0
-+#define R4600_V1_HIT_CACHEOP_WAR 0
-+#define R4600_V2_HIT_CACHEOP_WAR 0
-+#define R5432_CP0_INTERRUPT_WAR 0
-+#define BCM1250_M3_WAR 0
-+#define SIBYTE_1956_WAR 0
-+#define MIPS4K_ICACHE_REFILL_WAR 0
-+#define MIPS_CACHE_SYNC_WAR 0
-+#define TX49XX_ICACHE_INDEX_INV_WAR 0
-+#define RM9000_CDEX_SMP_WAR 0
-+#define ICACHE_REFILLS_WORKAROUND_WAR 0
-+#define R10000_LLSC_WAR 1
-+#define MIPS34K_MISSED_ITLB_WAR 0
++#define switch_to(prev,next,last) \
++do { \
++ if (last_task_used_math != next) { \
++ struct pt_regs *regs = next->thread.uregs; \
++ if (regs) regs->sr |= SR_FD; \
++ } \
++ last = sh64_switch_to(prev, &prev->thread, next, \
++ &next->thread); \
++} while (0)
+
-+#endif /* __ASM_MIPS_MACH_IP28_WAR_H */
-diff --git a/include/asm-mips/mach-qemu/cpu-feature-overrides.h b/include/asm-mips/mach-qemu/cpu-feature-overrides.h
-deleted file mode 100644
-index d2daaed..0000000
---- a/include/asm-mips/mach-qemu/cpu-feature-overrides.h
-+++ /dev/null
-@@ -1,32 +0,0 @@
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * Copyright (C) 2003, 07 Ralf Baechle
-- */
--#ifndef __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H
--#define __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H
--
--/*
-- * QEMU only comes with a hazard-free MIPS32 processor, so things are easy.
-- */
--#define cpu_has_mips16 0
--#define cpu_has_divec 0
--#define cpu_has_cache_cdex_p 0
--#define cpu_has_prefetch 0
--#define cpu_has_mcheck 0
--#define cpu_has_ejtag 0
--
--#define cpu_has_llsc 1
--#define cpu_has_vtag_icache 0
--#define cpu_has_dc_aliases 0
--#define cpu_has_ic_fills_f_dc 0
--
--#define cpu_has_dsp 0
--#define cpu_has_mipsmt 0
--
--#define cpu_has_nofpuex 0
--#define cpu_has_64bits 0
--
--#endif /* __ASM_MACH_QEMU_CPU_FEATURE_OVERRIDES_H */
-diff --git a/include/asm-mips/mach-qemu/war.h b/include/asm-mips/mach-qemu/war.h
-deleted file mode 100644
-index 0eaf0c5..0000000
---- a/include/asm-mips/mach-qemu/war.h
-+++ /dev/null
-@@ -1,25 +0,0 @@
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * Copyright (C) 2002, 2004, 2007 by Ralf Baechle <ralf at linux-mips.org>
-- */
--#ifndef __ASM_MIPS_MACH_QEMU_WAR_H
--#define __ASM_MIPS_MACH_QEMU_WAR_H
--
--#define R4600_V1_INDEX_ICACHEOP_WAR 0
--#define R4600_V1_HIT_CACHEOP_WAR 0
--#define R4600_V2_HIT_CACHEOP_WAR 0
--#define R5432_CP0_INTERRUPT_WAR 0
--#define BCM1250_M3_WAR 0
--#define SIBYTE_1956_WAR 0
--#define MIPS4K_ICACHE_REFILL_WAR 0
--#define MIPS_CACHE_SYNC_WAR 0
--#define TX49XX_ICACHE_INDEX_INV_WAR 0
--#define RM9000_CDEX_SMP_WAR 0
--#define ICACHE_REFILLS_WORKAROUND_WAR 0
--#define R10000_LLSC_WAR 0
--#define MIPS34K_MISSED_ITLB_WAR 0
--
--#endif /* __ASM_MIPS_MACH_QEMU_WAR_H */
-diff --git a/include/asm-mips/mips-boards/generic.h b/include/asm-mips/mips-boards/generic.h
-index d589774..1c39d33 100644
---- a/include/asm-mips/mips-boards/generic.h
-+++ b/include/asm-mips/mips-boards/generic.h
-@@ -97,10 +97,16 @@ extern int mips_revision_corid;
++#define __uses_jump_to_uncached
++
++#define jump_to_uncached() do { } while (0)
++#define back_to_cached() do { } while (0)
++
++#endif /* __ASM_SH_SYSTEM_64_H */
+diff --git a/include/asm-sh/thread_info.h b/include/asm-sh/thread_info.h
+index 1f7e1de..c50e5d3 100644
+--- a/include/asm-sh/thread_info.h
++++ b/include/asm-sh/thread_info.h
+@@ -68,14 +68,16 @@ struct thread_info {
+ #define init_stack (init_thread_union.stack)
- extern int mips_revision_sconid;
+ /* how to get the current stack pointer from C */
+-register unsigned long current_stack_pointer asm("r15") __attribute_used__;
++register unsigned long current_stack_pointer asm("r15") __used;
-+extern void mips_reboot_setup(void);
-+
- #ifdef CONFIG_PCI
- extern void mips_pcibios_init(void);
+ /* how to get the thread information struct from C */
+ static inline struct thread_info *current_thread_info(void)
+ {
+ struct thread_info *ti;
+-#ifdef CONFIG_CPU_HAS_SR_RB
+- __asm__("stc r7_bank, %0" : "=r" (ti));
++#if defined(CONFIG_SUPERH64)
++ __asm__ __volatile__ ("getcon cr17, %0" : "=r" (ti));
++#elif defined(CONFIG_CPU_HAS_SR_RB)
++ __asm__ __volatile__ ("stc r7_bank, %0" : "=r" (ti));
#else
- #define mips_pcibios_init() do { } while (0)
- #endif
+ unsigned long __dummy;
-+#ifdef CONFIG_KGDB
-+extern void kgdb_config(void);
-+#endif
-+
- #endif /* __ASM_MIPS_BOARDS_GENERIC_H */
-diff --git a/include/asm-mips/mipsprom.h b/include/asm-mips/mipsprom.h
-index ce7cff7..146d41b 100644
---- a/include/asm-mips/mipsprom.h
-+++ b/include/asm-mips/mipsprom.h
-@@ -71,4 +71,6 @@
- #define PROM_NV_GET 53 /* XXX */
- #define PROM_NV_SET 54 /* XXX */
+@@ -111,6 +113,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_NEED_RESCHED 2 /* rescheduling necessary */
+ #define TIF_RESTORE_SIGMASK 3 /* restore signal mask in do_signal() */
+ #define TIF_SINGLESTEP 4 /* singlestepping active */
++#define TIF_SYSCALL_AUDIT 5
+ #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
+ #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
+ #define TIF_MEMDIE 18
+@@ -121,6 +124,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
+ #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
+ #define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
++#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
+ #define _TIF_USEDFPU (1<<TIF_USEDFPU)
+ #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
+ #define _TIF_FREEZE (1<<TIF_FREEZE)
+diff --git a/include/asm-sh/tlb.h b/include/asm-sh/tlb.h
+index 53d185b..56ad1fb 100644
+--- a/include/asm-sh/tlb.h
++++ b/include/asm-sh/tlb.h
+@@ -1,6 +1,12 @@
+ #ifndef __ASM_SH_TLB_H
+ #define __ASM_SH_TLB_H
-+extern char *prom_getenv(char *);
++#ifdef CONFIG_SUPERH64
++# include "tlb_64.h"
++#endif
+
- #endif /* __ASM_MIPS_PROM_H */
-diff --git a/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h b/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
-index 0b56f55..603eb73 100644
---- a/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
-+++ b/include/asm-mips/pmc-sierra/msp71xx/msp_regs.h
-@@ -585,11 +585,7 @@
- * UART defines *
- ***************************************************************************
- */
--#ifndef CONFIG_MSP_FPGA
- #define MSP_BASE_BAUD 25000000
--#else
--#define MSP_BASE_BAUD 6000000
--#endif
- #define MSP_UART_REG_LEN 0x20
-
- /*
-diff --git a/include/asm-mips/r4kcache.h b/include/asm-mips/r4kcache.h
-index 2b8466f..4c140db 100644
---- a/include/asm-mips/r4kcache.h
-+++ b/include/asm-mips/r4kcache.h
-@@ -403,6 +403,13 @@ __BUILD_BLAST_CACHE(i, icache, Index_Invalidate_I, Hit_Invalidate_I, 64)
- __BUILD_BLAST_CACHE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 64)
- __BUILD_BLAST_CACHE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 128)
-
-+__BUILD_BLAST_CACHE(inv_d, dcache, Index_Writeback_Inv_D, Hit_Invalidate_D, 16)
-+__BUILD_BLAST_CACHE(inv_d, dcache, Index_Writeback_Inv_D, Hit_Invalidate_D, 32)
-+__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 16)
-+__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 32)
-+__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 64)
-+__BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 128)
++#ifndef __ASSEMBLY__
+
- /* build blast_xxx_range, protected_blast_xxx_range */
- #define __BUILD_BLAST_CACHE_RANGE(pfx, desc, hitop, prot) \
- static inline void prot##blast_##pfx##cache##_range(unsigned long start, \
-diff --git a/include/asm-mips/sgi/ioc.h b/include/asm-mips/sgi/ioc.h
-index f3e3dc9..343ed15 100644
---- a/include/asm-mips/sgi/ioc.h
-+++ b/include/asm-mips/sgi/ioc.h
-@@ -138,8 +138,8 @@ struct sgioc_regs {
- u8 _sysid[3];
- volatile u8 sysid;
- #define SGIOC_SYSID_FULLHOUSE 0x01
--#define SGIOC_SYSID_BOARDREV(x) ((x & 0xe0) > 5)
--#define SGIOC_SYSID_CHIPREV(x) ((x & 0x1e) > 1)
-+#define SGIOC_SYSID_BOARDREV(x) (((x) & 0x1e) >> 1)
-+#define SGIOC_SYSID_CHIPREV(x) (((x) & 0xe0) >> 5)
- u32 _unused2;
- u8 _read[3];
- volatile u8 read;
-diff --git a/include/asm-mips/sibyte/board.h b/include/asm-mips/sibyte/board.h
-index da198a1..25372ae 100644
---- a/include/asm-mips/sibyte/board.h
-+++ b/include/asm-mips/sibyte/board.h
-@@ -19,10 +19,8 @@
- #ifndef _SIBYTE_BOARD_H
- #define _SIBYTE_BOARD_H
-
--#if defined(CONFIG_SIBYTE_SWARM) || defined(CONFIG_SIBYTE_PTSWARM) || \
-- defined(CONFIG_SIBYTE_PT1120) || defined(CONFIG_SIBYTE_PT1125) || \
-- defined(CONFIG_SIBYTE_CRHONE) || defined(CONFIG_SIBYTE_CRHINE) || \
-- defined(CONFIG_SIBYTE_LITTLESUR)
-+#if defined(CONFIG_SIBYTE_SWARM) || defined(CONFIG_SIBYTE_CRHONE) || \
-+ defined(CONFIG_SIBYTE_CRHINE) || defined(CONFIG_SIBYTE_LITTLESUR)
- #include <asm/sibyte/swarm.h>
- #endif
-
-diff --git a/include/asm-mips/sibyte/sb1250.h b/include/asm-mips/sibyte/sb1250.h
-index 0dad844..80c1a05 100644
---- a/include/asm-mips/sibyte/sb1250.h
-+++ b/include/asm-mips/sibyte/sb1250.h
-@@ -48,12 +48,10 @@ extern unsigned int zbbus_mhz;
- extern void sb1250_time_init(void);
- extern void sb1250_mask_irq(int cpu, int irq);
- extern void sb1250_unmask_irq(int cpu, int irq);
--extern void sb1250_smp_finish(void);
+ #define tlb_start_vma(tlb, vma) \
+ flush_cache_range(vma, vma->vm_start, vma->vm_end)
- extern void bcm1480_time_init(void);
- extern void bcm1480_mask_irq(int cpu, int irq);
- extern void bcm1480_unmask_irq(int cpu, int irq);
--extern void bcm1480_smp_finish(void);
+@@ -15,4 +21,6 @@
+ #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
- #define AT_spin \
- __asm__ __volatile__ ( \
-diff --git a/include/asm-mips/sibyte/swarm.h b/include/asm-mips/sibyte/swarm.h
-index 540865f..114d9d2 100644
---- a/include/asm-mips/sibyte/swarm.h
-+++ b/include/asm-mips/sibyte/swarm.h
-@@ -26,24 +26,6 @@
- #define SIBYTE_HAVE_PCMCIA 1
- #define SIBYTE_HAVE_IDE 1
- #endif
--#ifdef CONFIG_SIBYTE_PTSWARM
--#define SIBYTE_BOARD_NAME "PTSWARM"
--#define SIBYTE_HAVE_PCMCIA 1
--#define SIBYTE_HAVE_IDE 1
--#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
--#endif
--#ifdef CONFIG_SIBYTE_PT1120
--#define SIBYTE_BOARD_NAME "PT1120"
--#define SIBYTE_HAVE_PCMCIA 1
--#define SIBYTE_HAVE_IDE 1
--#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
--#endif
--#ifdef CONFIG_SIBYTE_PT1125
--#define SIBYTE_BOARD_NAME "PT1125"
--#define SIBYTE_HAVE_PCMCIA 1
--#define SIBYTE_HAVE_IDE 1
--#define SIBYTE_DEFAULT_CONSOLE "ttyS0,115200"
+ #include <asm-generic/tlb.h>
-#endif
- #ifdef CONFIG_SIBYTE_LITTLESUR
- #define SIBYTE_BOARD_NAME "BCM91250C2 (LittleSur)"
- #define SIBYTE_HAVE_PCMCIA 0
-diff --git a/include/asm-mips/smp-ops.h b/include/asm-mips/smp-ops.h
++
++#endif /* __ASSEMBLY__ */
++#endif /* __ASM_SH_TLB_H */
+diff --git a/include/asm-sh/tlb_64.h b/include/asm-sh/tlb_64.h
new file mode 100644
-index 0000000..b17fdfb
+index 0000000..0308e05
--- /dev/null
-+++ b/include/asm-mips/smp-ops.h
-@@ -0,0 +1,56 @@
++++ b/include/asm-sh/tlb_64.h
+@@ -0,0 +1,69 @@
+/*
-+ * This file is subject to the terms and conditions of the GNU General
-+ * Public License. See the file "COPYING" in the main directory of this
-+ * archive for more details.
++ * include/asm-sh/tlb_64.h
+ *
-+ * Copyright (C) 2000 - 2001 by Kanoj Sarcar (kanoj at sgi.com)
-+ * Copyright (C) 2000 - 2001 by Silicon Graphics, Inc.
-+ * Copyright (C) 2000, 2001, 2002 Ralf Baechle
-+ * Copyright (C) 2000, 2001 Broadcom Corporation
++ * Copyright (C) 2003 Paul Mundt
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
-+#ifndef __ASM_SMP_OPS_H
-+#define __ASM_SMP_OPS_H
-+
-+#ifdef CONFIG_SMP
-+
-+#include <linux/cpumask.h>
-+
-+struct plat_smp_ops {
-+ void (*send_ipi_single)(int cpu, unsigned int action);
-+ void (*send_ipi_mask)(cpumask_t mask, unsigned int action);
-+ void (*init_secondary)(void);
-+ void (*smp_finish)(void);
-+ void (*cpus_done)(void);
-+ void (*boot_secondary)(int cpu, struct task_struct *idle);
-+ void (*smp_setup)(void);
-+ void (*prepare_cpus)(unsigned int max_cpus);
-+};
-+
-+extern void register_smp_ops(struct plat_smp_ops *ops);
++#ifndef __ASM_SH_TLB_64_H
++#define __ASM_SH_TLB_64_H
+
-+static inline void plat_smp_setup(void)
-+{
-+ extern struct plat_smp_ops *mp_ops; /* private */
++/* ITLB defines */
++#define ITLB_FIXED 0x00000000 /* First fixed ITLB, see head.S */
++#define ITLB_LAST_VAR_UNRESTRICTED 0x000003F0 /* Last ITLB */
+
-+ mp_ops->smp_setup();
-+}
++/* DTLB defines */
++#define DTLB_FIXED 0x00800000 /* First fixed DTLB, see head.S */
++#define DTLB_LAST_VAR_UNRESTRICTED 0x008003F0 /* Last DTLB */
+
-+#else /* !CONFIG_SMP */
++#ifndef __ASSEMBLY__
+
-+struct plat_smp_ops;
++/**
++ * for_each_dtlb_entry
++ *
++ * @tlb: TLB entry
++ *
++ * Iterate over free (non-wired) DTLB entries
++ */
++#define for_each_dtlb_entry(tlb) \
++ for (tlb = cpu_data->dtlb.first; \
++ tlb <= cpu_data->dtlb.last; \
++ tlb += cpu_data->dtlb.step)
+
-+static inline void plat_smp_setup(void)
-+{
-+ /* UP, nothing to do ... */
-+}
++/**
++ * for_each_itlb_entry
++ *
++ * @tlb: TLB entry
++ *
++ * Iterate over free (non-wired) ITLB entries
++ */
++#define for_each_itlb_entry(tlb) \
++ for (tlb = cpu_data->itlb.first; \
++ tlb <= cpu_data->itlb.last; \
++ tlb += cpu_data->itlb.step)
+
-+static inline void register_smp_ops(struct plat_smp_ops *ops)
++/**
++ * __flush_tlb_slot
++ *
++ * @slot: Address of TLB slot.
++ *
++ * Flushes TLB slot @slot.
++ */
++static inline void __flush_tlb_slot(unsigned long long slot)
+{
++ __asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
+}
+
-+#endif /* !CONFIG_SMP */
++/* arch/sh64/mm/tlb.c */
++int sh64_tlb_init(void);
++unsigned long long sh64_next_free_dtlb_entry(void);
++unsigned long long sh64_get_wired_dtlb_entry(void);
++int sh64_put_wired_dtlb_entry(unsigned long long entry);
++void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr,
++ unsigned long asid, unsigned long paddr);
++void sh64_teardown_tlb_slot(unsigned long long config_addr);
+
-+extern struct plat_smp_ops up_smp_ops;
-+extern struct plat_smp_ops vsmp_smp_ops;
++#endif /* __ASSEMBLY__ */
++#endif /* __ASM_SH_TLB_64_H */
+diff --git a/include/asm-sh/types.h b/include/asm-sh/types.h
+index 7ba69d9..a6e1d41 100644
+--- a/include/asm-sh/types.h
++++ b/include/asm-sh/types.h
+@@ -52,6 +52,12 @@ typedef unsigned long long u64;
+
+ typedef u32 dma_addr_t;
+
++#ifdef CONFIG_SUPERH32
++typedef u16 opcode_t;
++#else
++typedef u32 opcode_t;
++#endif
+
-+#endif /* __ASM_SMP_OPS_H */
-diff --git a/include/asm-mips/smp.h b/include/asm-mips/smp.h
-index dc77002..84fef1a 100644
---- a/include/asm-mips/smp.h
-+++ b/include/asm-mips/smp.h
-@@ -11,14 +11,16 @@
- #ifndef __ASM_SMP_H
- #define __ASM_SMP_H
+ #endif /* __ASSEMBLY__ */
+ #endif /* __KERNEL__ */
+diff --git a/include/asm-sh/uaccess.h b/include/asm-sh/uaccess.h
+index 77c391f..ff24ce9 100644
+--- a/include/asm-sh/uaccess.h
++++ b/include/asm-sh/uaccess.h
+@@ -1,563 +1,5 @@
+-/* $Id: uaccess.h,v 1.11 2003/10/13 07:21:20 lethal Exp $
+- *
+- * User space memory access functions
+- *
+- * Copyright (C) 1999, 2002 Niibe Yutaka
+- * Copyright (C) 2003 Paul Mundt
+- *
+- * Based on:
+- * MIPS implementation version 1.15 by
+- * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
+- * and i386 version.
+- */
+-#ifndef __ASM_SH_UACCESS_H
+-#define __ASM_SH_UACCESS_H
-
--#ifdef CONFIG_SMP
+-#include <linux/errno.h>
+-#include <linux/sched.h>
+-
+-#define VERIFY_READ 0
+-#define VERIFY_WRITE 1
-
- #include <linux/bitops.h>
- #include <linux/linkage.h>
- #include <linux/threads.h>
- #include <linux/cpumask.h>
-+
- #include <asm/atomic.h>
-+#include <asm/smp-ops.h>
-+
-+extern int smp_num_siblings;
-+extern cpumask_t cpu_sibling_map[];
-
- #define raw_smp_processor_id() (current_thread_info()->cpu)
-
-@@ -49,56 +51,6 @@ extern struct call_data_struct *call_data;
- extern cpumask_t phys_cpu_present_map;
- #define cpu_possible_map phys_cpu_present_map
-
-/*
-- * These are defined by the board-specific code.
+- * The fs value determines whether argument validity checking should be
+- * performed or not. If get_fs() == USER_DS, checking is performed, with
+- * get_fs() == KERNEL_DS, checking is bypassed.
+- *
+- * For historical reasons (Data Segment Register?), these macros are misnamed.
- */
-
+-#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
+-
+-#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFFUL)
+-#define USER_DS MAKE_MM_SEG(PAGE_OFFSET)
+-
+-#define segment_eq(a,b) ((a).seg == (b).seg)
+-
+-#define get_ds() (KERNEL_DS)
+-
+-#if !defined(CONFIG_MMU)
+-/* NOMMU is always true */
+-#define __addr_ok(addr) (1)
+-
+-static inline mm_segment_t get_fs(void)
+-{
+- return USER_DS;
+-}
+-
+-static inline void set_fs(mm_segment_t s)
+-{
+-}
+-
-/*
-- * Cause the function described by call_data to be executed on the passed
-- * cpu. When the function has finished, increment the finished field of
-- * call_data.
+- * __access_ok: Check if address with size is OK or not.
+- *
+- * If we don't have an MMU (or if its disabled) the only thing we really have
+- * to look out for is if the address resides somewhere outside of what
+- * available RAM we have.
+- *
+- * TODO: This check could probably also stand to be restricted somewhat more..
+- * though it still does the Right Thing(tm) for the time being.
- */
--extern void core_send_ipi(int cpu, unsigned int action);
+-static inline int __access_ok(unsigned long addr, unsigned long size)
+-{
+- return ((addr >= memory_start) && ((addr + size) < memory_end));
+-}
+-#else /* CONFIG_MMU */
+-#define __addr_ok(addr) \
+- ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
-
--static inline void core_send_ipi_mask(cpumask_t mask, unsigned int action)
+-#define get_fs() (current_thread_info()->addr_limit)
+-#define set_fs(x) (current_thread_info()->addr_limit = (x))
+-
+-/*
+- * __access_ok: Check if address with size is OK or not.
+- *
+- * Uhhuh, this needs 33-bit arithmetic. We have a carry..
+- *
+- * sum := addr + size; carry? --> flag = true;
+- * if (sum >= addr_limit) flag = true;
+- */
+-static inline int __access_ok(unsigned long addr, unsigned long size)
-{
-- unsigned int i;
+- unsigned long flag, sum;
+-
+- __asm__("clrt\n\t"
+- "addc %3, %1\n\t"
+- "movt %0\n\t"
+- "cmp/hi %4, %1\n\t"
+- "rotcl %0"
+- :"=&r" (flag), "=r" (sum)
+- :"1" (addr), "r" (size),
+- "r" (current_thread_info()->addr_limit.seg)
+- :"t");
+- return flag == 0;
-
-- for_each_cpu_mask(i, mask)
-- core_send_ipi(i, action);
-}
+-#endif /* CONFIG_MMU */
-
+-static inline int access_ok(int type, const void __user *p, unsigned long size)
+-{
+- unsigned long addr = (unsigned long)p;
+- return __access_ok(addr, size);
+-}
-
-/*
-- * Firmware CPU startup hook
+- * Uh, these should become the main single-value transfer routines ...
+- * They automatically use the right size if we just have the right
+- * pointer type ...
+- *
+- * As SuperH uses the same address space for kernel and user data, we
+- * can just do these as direct assignments.
+- *
+- * Careful to not
+- * (a) re-use the arguments for side effects (sizeof is ok)
+- * (b) require any knowledge of processes at this stage
- */
--extern void prom_boot_secondary(int cpu, struct task_struct *idle);
+-#define put_user(x,ptr) __put_user_check((x),(ptr),sizeof(*(ptr)))
+-#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)))
-
-/*
-- * After we've done initial boot, this function is called to allow the
-- * board code to clean up state, if needed
+- * The "__xxx" versions do not do address space checking, useful when
+- * doing multiple accesses to the same area (the user has to do the
+- * checks by hand with "access_ok()")
- */
--extern void prom_init_secondary(void);
+-#define __put_user(x,ptr) \
+- __put_user_nocheck((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
+-#define __get_user(x,ptr) \
+- __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+-
+-struct __large_struct { unsigned long buf[100]; };
+-#define __m(x) (*(struct __large_struct __user *)(x))
+-
+-#define __get_user_size(x,ptr,size,retval) \
+-do { \
+- retval = 0; \
+- __chk_user_ptr(ptr); \
+- switch (size) { \
+- case 1: \
+- __get_user_asm(x, ptr, retval, "b"); \
+- break; \
+- case 2: \
+- __get_user_asm(x, ptr, retval, "w"); \
+- break; \
+- case 4: \
+- __get_user_asm(x, ptr, retval, "l"); \
+- break; \
+- default: \
+- __get_user_unknown(); \
+- break; \
+- } \
+-} while (0)
+-
+-#define __get_user_nocheck(x,ptr,size) \
+-({ \
+- long __gu_err, __gu_val; \
+- __get_user_size(__gu_val, (ptr), (size), __gu_err); \
+- (x) = (__typeof__(*(ptr)))__gu_val; \
+- __gu_err; \
+-})
+-
+-#ifdef CONFIG_MMU
+-#define __get_user_check(x,ptr,size) \
+-({ \
+- long __gu_err, __gu_val; \
+- __chk_user_ptr(ptr); \
+- switch (size) { \
+- case 1: \
+- __get_user_1(__gu_val, (ptr), __gu_err); \
+- break; \
+- case 2: \
+- __get_user_2(__gu_val, (ptr), __gu_err); \
+- break; \
+- case 4: \
+- __get_user_4(__gu_val, (ptr), __gu_err); \
+- break; \
+- default: \
+- __get_user_unknown(); \
+- break; \
+- } \
+- \
+- (x) = (__typeof__(*(ptr)))__gu_val; \
+- __gu_err; \
+-})
+-
+-#define __get_user_1(x,addr,err) ({ \
+-__asm__("stc r7_bank, %1\n\t" \
+- "mov.l @(8,%1), %1\n\t" \
+- "and %2, %1\n\t" \
+- "cmp/pz %1\n\t" \
+- "bt/s 1f\n\t" \
+- " mov #0, %0\n\t" \
+- "0:\n" \
+- "mov #-14, %0\n\t" \
+- "bra 2f\n\t" \
+- " mov #0, %1\n" \
+- "1:\n\t" \
+- "mov.b @%2, %1\n\t" \
+- "extu.b %1, %1\n" \
+- "2:\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 0b\n\t" \
+- ".previous" \
+- : "=&r" (err), "=&r" (x) \
+- : "r" (addr) \
+- : "t"); \
+-})
+-
+-#define __get_user_2(x,addr,err) ({ \
+-__asm__("stc r7_bank, %1\n\t" \
+- "mov.l @(8,%1), %1\n\t" \
+- "and %2, %1\n\t" \
+- "cmp/pz %1\n\t" \
+- "bt/s 1f\n\t" \
+- " mov #0, %0\n\t" \
+- "0:\n" \
+- "mov #-14, %0\n\t" \
+- "bra 2f\n\t" \
+- " mov #0, %1\n" \
+- "1:\n\t" \
+- "mov.w @%2, %1\n\t" \
+- "extu.w %1, %1\n" \
+- "2:\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 0b\n\t" \
+- ".previous" \
+- : "=&r" (err), "=&r" (x) \
+- : "r" (addr) \
+- : "t"); \
+-})
+-
+-#define __get_user_4(x,addr,err) ({ \
+-__asm__("stc r7_bank, %1\n\t" \
+- "mov.l @(8,%1), %1\n\t" \
+- "and %2, %1\n\t" \
+- "cmp/pz %1\n\t" \
+- "bt/s 1f\n\t" \
+- " mov #0, %0\n\t" \
+- "0:\n" \
+- "mov #-14, %0\n\t" \
+- "bra 2f\n\t" \
+- " mov #0, %1\n" \
+- "1:\n\t" \
+- "mov.l @%2, %1\n\t" \
+- "2:\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 0b\n\t" \
+- ".previous" \
+- : "=&r" (err), "=&r" (x) \
+- : "r" (addr) \
+- : "t"); \
+-})
+-#else /* CONFIG_MMU */
+-#define __get_user_check(x,ptr,size) \
+-({ \
+- long __gu_err, __gu_val; \
+- if (__access_ok((unsigned long)(ptr), (size))) { \
+- __get_user_size(__gu_val, (ptr), (size), __gu_err); \
+- (x) = (__typeof__(*(ptr)))__gu_val; \
+- } else \
+- __gu_err = -EFAULT; \
+- __gu_err; \
+-})
+-#endif
+-
+-#define __get_user_asm(x, addr, err, insn) \
+-({ \
+-__asm__ __volatile__( \
+- "1:\n\t" \
+- "mov." insn " %2, %1\n\t" \
+- "mov #0, %0\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\n\t" \
+- "mov #0, %1\n\t" \
+- "mov.l 4f, %0\n\t" \
+- "jmp @%0\n\t" \
+- " mov %3, %0\n" \
+- "4: .long 2b\n\t" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 3b\n\t" \
+- ".previous" \
+- :"=&r" (err), "=&r" (x) \
+- :"m" (__m(addr)), "i" (-EFAULT)); })
+-
+-extern void __get_user_unknown(void);
+-
+-#define __put_user_size(x,ptr,size,retval) \
+-do { \
+- retval = 0; \
+- __chk_user_ptr(ptr); \
+- switch (size) { \
+- case 1: \
+- __put_user_asm(x, ptr, retval, "b"); \
+- break; \
+- case 2: \
+- __put_user_asm(x, ptr, retval, "w"); \
+- break; \
+- case 4: \
+- __put_user_asm(x, ptr, retval, "l"); \
+- break; \
+- case 8: \
+- __put_user_u64(x, ptr, retval); \
+- break; \
+- default: \
+- __put_user_unknown(); \
+- } \
+-} while (0)
+-
+-#define __put_user_nocheck(x,ptr,size) \
+-({ \
+- long __pu_err; \
+- __put_user_size((x),(ptr),(size),__pu_err); \
+- __pu_err; \
+-})
+-
+-#define __put_user_check(x,ptr,size) \
+-({ \
+- long __pu_err = -EFAULT; \
+- __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
+- \
+- if (__access_ok((unsigned long)__pu_addr,size)) \
+- __put_user_size((x),__pu_addr,(size),__pu_err); \
+- __pu_err; \
+-})
+-
+-#define __put_user_asm(x, addr, err, insn) \
+-({ \
+-__asm__ __volatile__( \
+- "1:\n\t" \
+- "mov." insn " %1, %2\n\t" \
+- "mov #0, %0\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\n\t" \
+- "nop\n\t" \
+- "mov.l 4f, %0\n\t" \
+- "jmp @%0\n\t" \
+- "mov %3, %0\n" \
+- "4: .long 2b\n\t" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 3b\n\t" \
+- ".previous" \
+- :"=&r" (err) \
+- :"r" (x), "m" (__m(addr)), "i" (-EFAULT) \
+- :"memory"); })
+-
+-#if defined(__LITTLE_ENDIAN__)
+-#define __put_user_u64(val,addr,retval) \
+-({ \
+-__asm__ __volatile__( \
+- "1:\n\t" \
+- "mov.l %R1,%2\n\t" \
+- "mov.l %S1,%T2\n\t" \
+- "mov #0,%0\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\n\t" \
+- "nop\n\t" \
+- "mov.l 4f,%0\n\t" \
+- "jmp @%0\n\t" \
+- " mov %3,%0\n" \
+- "4: .long 2b\n\t" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 3b\n\t" \
+- ".previous" \
+- : "=r" (retval) \
+- : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
+- : "memory"); })
++#ifdef CONFIG_SUPERH32
++# include "uaccess_32.h"
+ #else
+-#define __put_user_u64(val,addr,retval) \
+-({ \
+-__asm__ __volatile__( \
+- "1:\n\t" \
+- "mov.l %S1,%2\n\t" \
+- "mov.l %R1,%T2\n\t" \
+- "mov #0,%0\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\n\t" \
+- "nop\n\t" \
+- "mov.l 4f,%0\n\t" \
+- "jmp @%0\n\t" \
+- " mov %3,%0\n" \
+- "4: .long 2b\n\t" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".long 1b, 3b\n\t" \
+- ".previous" \
+- : "=r" (retval) \
+- : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
+- : "memory"); })
++# include "uaccess_64.h"
+ #endif
+-
+-extern void __put_user_unknown(void);
+-
+-/* Generic arbitrary sized copy. */
+-/* Return the number of bytes NOT copied */
+-__kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
+-
+-#define copy_to_user(to,from,n) ({ \
+-void *__copy_to = (void *) (to); \
+-__kernel_size_t __copy_size = (__kernel_size_t) (n); \
+-__kernel_size_t __copy_res; \
+-if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
+-__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
+-} else __copy_res = __copy_size; \
+-__copy_res; })
+-
+-#define copy_from_user(to,from,n) ({ \
+-void *__copy_to = (void *) (to); \
+-void *__copy_from = (void *) (from); \
+-__kernel_size_t __copy_size = (__kernel_size_t) (n); \
+-__kernel_size_t __copy_res; \
+-if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
+-__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
+-} else __copy_res = __copy_size; \
+-__copy_res; })
+-
+-static __always_inline unsigned long
+-__copy_from_user(void *to, const void __user *from, unsigned long n)
+-{
+- return __copy_user(to, (__force void *)from, n);
+-}
+-
+-static __always_inline unsigned long __must_check
+-__copy_to_user(void __user *to, const void *from, unsigned long n)
+-{
+- return __copy_user((__force void *)to, from, n);
+-}
+-
+-#define __copy_to_user_inatomic __copy_to_user
+-#define __copy_from_user_inatomic __copy_from_user
-
-/*
-- * Populate cpu_possible_map before smp_init, called from setup_arch.
+- * Clear the area and return remaining number of bytes
+- * (on failure. Usually it's 0.)
- */
--extern void plat_smp_setup(void);
+-extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
+-
+-#define clear_user(addr,n) ({ \
+-void * __cl_addr = (addr); \
+-unsigned long __cl_size = (n); \
+-if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
+-__cl_size = __clear_user(__cl_addr, __cl_size); \
+-__cl_size; })
+-
+-static __inline__ int
+-__strncpy_from_user(unsigned long __dest, unsigned long __user __src, int __count)
+-{
+- __kernel_size_t res;
+- unsigned long __dummy, _d, _s;
+-
+- __asm__ __volatile__(
+- "9:\n"
+- "mov.b @%2+, %1\n\t"
+- "cmp/eq #0, %1\n\t"
+- "bt/s 2f\n"
+- "1:\n"
+- "mov.b %1, @%3\n\t"
+- "dt %7\n\t"
+- "bf/s 9b\n\t"
+- " add #1, %3\n\t"
+- "2:\n\t"
+- "sub %7, %0\n"
+- "3:\n"
+- ".section .fixup,\"ax\"\n"
+- "4:\n\t"
+- "mov.l 5f, %1\n\t"
+- "jmp @%1\n\t"
+- " mov %8, %0\n\t"
+- ".balign 4\n"
+- "5: .long 3b\n"
+- ".previous\n"
+- ".section __ex_table,\"a\"\n"
+- " .balign 4\n"
+- " .long 9b,4b\n"
+- ".previous"
+- : "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d)
+- : "0" (__count), "2" (__src), "3" (__dest), "r" (__count),
+- "i" (-EFAULT)
+- : "memory", "t");
+-
+- return res;
+-}
+-
+-#define strncpy_from_user(dest,src,count) ({ \
+-unsigned long __sfu_src = (unsigned long) (src); \
+-int __sfu_count = (int) (count); \
+-long __sfu_res = -EFAULT; \
+-if(__access_ok(__sfu_src, __sfu_count)) { \
+-__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
+-} __sfu_res; })
-
-/*
-- * Called in smp_prepare_cpus.
+- * Return the size of a string (including the ending 0!)
- */
--extern void plat_prepare_cpus(unsigned int max_cpus);
+-static __inline__ long __strnlen_user(const char __user *__s, long __n)
+-{
+- unsigned long res;
+- unsigned long __dummy;
+-
+- __asm__ __volatile__(
+- "9:\n"
+- "cmp/eq %4, %0\n\t"
+- "bt 2f\n"
+- "1:\t"
+- "mov.b @(%0,%3), %1\n\t"
+- "tst %1, %1\n\t"
+- "bf/s 9b\n\t"
+- " add #1, %0\n"
+- "2:\n"
+- ".section .fixup,\"ax\"\n"
+- "3:\n\t"
+- "mov.l 4f, %1\n\t"
+- "jmp @%1\n\t"
+- " mov #0, %0\n"
+- ".balign 4\n"
+- "4: .long 2b\n"
+- ".previous\n"
+- ".section __ex_table,\"a\"\n"
+- " .balign 4\n"
+- " .long 1b,3b\n"
+- ".previous"
+- : "=z" (res), "=&r" (__dummy)
+- : "0" (0), "r" (__s), "r" (__n)
+- : "t");
+- return res;
+-}
+-
+-static __inline__ long strnlen_user(const char __user *s, long n)
+-{
+- if (!__addr_ok(s))
+- return 0;
+- else
+- return __strnlen_user(s, n);
+-}
+-
+-#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
-
-/*
-- * Last chance for the board code to finish SMP initialization before
-- * the CPU is "online".
+- * The exception table consists of pairs of addresses: the first is the
+- * address of an instruction that is allowed to fault, and the second is
+- * the address at which the program should continue. No registers are
+- * modified, so it is entirely up to the continuation code to figure out
+- * what to do.
+- *
+- * All the routines below use bits of fixup code that are out of line
+- * with the main instruction path. This means when everything is well,
+- * we don't even have to jump over them. Further, they do not intrude
+- * on our cache or tlb entries.
- */
--extern void prom_smp_finish(void);
-
--/* Hook for after all CPUs are online */
--extern void prom_cpus_done(void);
+-struct exception_table_entry
+-{
+- unsigned long insn, fixup;
+-};
-
- extern void asmlinkage smp_bootstrap(void);
-
- /*
-@@ -108,11 +60,11 @@ extern void asmlinkage smp_bootstrap(void);
- */
- static inline void smp_send_reschedule(int cpu)
- {
-- core_send_ipi(cpu, SMP_RESCHEDULE_YOURSELF);
-+ extern struct plat_smp_ops *mp_ops; /* private */
-+
-+ mp_ops->send_ipi_single(cpu, SMP_RESCHEDULE_YOURSELF);
- }
-
- extern asmlinkage void smp_call_function_interrupt(void);
-
--#endif /* CONFIG_SMP */
+-extern int fixup_exception(struct pt_regs *regs);
-
- #endif /* __ASM_SMP_H */
-diff --git a/include/asm-mips/sni.h b/include/asm-mips/sni.h
-index af08145..e716447 100644
---- a/include/asm-mips/sni.h
-+++ b/include/asm-mips/sni.h
-@@ -35,23 +35,23 @@ extern unsigned int sni_brd_type;
- #define SNI_CPU_M8050 0x0b
- #define SNI_CPU_M8053 0x0d
-
--#define SNI_PORT_BASE 0xb4000000
-+#define SNI_PORT_BASE CKSEG1ADDR(0xb4000000)
-
- #ifndef __MIPSEL__
- /*
- * ASIC PCI registers for big endian configuration.
- */
--#define PCIMT_UCONF 0xbfff0004
--#define PCIMT_IOADTIMEOUT2 0xbfff000c
--#define PCIMT_IOMEMCONF 0xbfff0014
--#define PCIMT_IOMMU 0xbfff001c
--#define PCIMT_IOADTIMEOUT1 0xbfff0024
--#define PCIMT_DMAACCESS 0xbfff002c
--#define PCIMT_DMAHIT 0xbfff0034
--#define PCIMT_ERRSTATUS 0xbfff003c
--#define PCIMT_ERRADDR 0xbfff0044
--#define PCIMT_SYNDROME 0xbfff004c
--#define PCIMT_ITPEND 0xbfff0054
-+#define PCIMT_UCONF CKSEG1ADDR(0xbfff0004)
-+#define PCIMT_IOADTIMEOUT2 CKSEG1ADDR(0xbfff000c)
-+#define PCIMT_IOMEMCONF CKSEG1ADDR(0xbfff0014)
-+#define PCIMT_IOMMU CKSEG1ADDR(0xbfff001c)
-+#define PCIMT_IOADTIMEOUT1 CKSEG1ADDR(0xbfff0024)
-+#define PCIMT_DMAACCESS CKSEG1ADDR(0xbfff002c)
-+#define PCIMT_DMAHIT CKSEG1ADDR(0xbfff0034)
-+#define PCIMT_ERRSTATUS CKSEG1ADDR(0xbfff003c)
-+#define PCIMT_ERRADDR CKSEG1ADDR(0xbfff0044)
-+#define PCIMT_SYNDROME CKSEG1ADDR(0xbfff004c)
-+#define PCIMT_ITPEND CKSEG1ADDR(0xbfff0054)
- #define IT_INT2 0x01
- #define IT_INTD 0x02
- #define IT_INTC 0x04
-@@ -60,32 +60,32 @@ extern unsigned int sni_brd_type;
- #define IT_EISA 0x20
- #define IT_SCSI 0x40
- #define IT_ETH 0x80
--#define PCIMT_IRQSEL 0xbfff005c
--#define PCIMT_TESTMEM 0xbfff0064
--#define PCIMT_ECCREG 0xbfff006c
--#define PCIMT_CONFIG_ADDRESS 0xbfff0074
--#define PCIMT_ASIC_ID 0xbfff007c /* read */
--#define PCIMT_SOFT_RESET 0xbfff007c /* write */
--#define PCIMT_PIA_OE 0xbfff0084
--#define PCIMT_PIA_DATAOUT 0xbfff008c
--#define PCIMT_PIA_DATAIN 0xbfff0094
--#define PCIMT_CACHECONF 0xbfff009c
--#define PCIMT_INVSPACE 0xbfff00a4
-+#define PCIMT_IRQSEL CKSEG1ADDR(0xbfff005c)
-+#define PCIMT_TESTMEM CKSEG1ADDR(0xbfff0064)
-+#define PCIMT_ECCREG CKSEG1ADDR(0xbfff006c)
-+#define PCIMT_CONFIG_ADDRESS CKSEG1ADDR(0xbfff0074)
-+#define PCIMT_ASIC_ID CKSEG1ADDR(0xbfff007c) /* read */
-+#define PCIMT_SOFT_RESET CKSEG1ADDR(0xbfff007c) /* write */
-+#define PCIMT_PIA_OE CKSEG1ADDR(0xbfff0084)
-+#define PCIMT_PIA_DATAOUT CKSEG1ADDR(0xbfff008c)
-+#define PCIMT_PIA_DATAIN CKSEG1ADDR(0xbfff0094)
-+#define PCIMT_CACHECONF CKSEG1ADDR(0xbfff009c)
-+#define PCIMT_INVSPACE CKSEG1ADDR(0xbfff00a4)
- #else
- /*
- * ASIC PCI registers for little endian configuration.
- */
--#define PCIMT_UCONF 0xbfff0000
--#define PCIMT_IOADTIMEOUT2 0xbfff0008
--#define PCIMT_IOMEMCONF 0xbfff0010
--#define PCIMT_IOMMU 0xbfff0018
--#define PCIMT_IOADTIMEOUT1 0xbfff0020
--#define PCIMT_DMAACCESS 0xbfff0028
--#define PCIMT_DMAHIT 0xbfff0030
--#define PCIMT_ERRSTATUS 0xbfff0038
--#define PCIMT_ERRADDR 0xbfff0040
--#define PCIMT_SYNDROME 0xbfff0048
--#define PCIMT_ITPEND 0xbfff0050
-+#define PCIMT_UCONF CKSEG1ADDR(0xbfff0000)
-+#define PCIMT_IOADTIMEOUT2 CKSEG1ADDR(0xbfff0008)
-+#define PCIMT_IOMEMCONF CKSEG1ADDR(0xbfff0010)
-+#define PCIMT_IOMMU CKSEG1ADDR(0xbfff0018)
-+#define PCIMT_IOADTIMEOUT1 CKSEG1ADDR(0xbfff0020)
-+#define PCIMT_DMAACCESS CKSEG1ADDR(0xbfff0028)
-+#define PCIMT_DMAHIT CKSEG1ADDR(0xbfff0030)
-+#define PCIMT_ERRSTATUS CKSEG1ADDR(0xbfff0038)
-+#define PCIMT_ERRADDR CKSEG1ADDR(0xbfff0040)
-+#define PCIMT_SYNDROME CKSEG1ADDR(0xbfff0048)
-+#define PCIMT_ITPEND CKSEG1ADDR(0xbfff0050)
- #define IT_INT2 0x01
- #define IT_INTD 0x02
- #define IT_INTC 0x04
-@@ -94,20 +94,20 @@ extern unsigned int sni_brd_type;
- #define IT_EISA 0x20
- #define IT_SCSI 0x40
- #define IT_ETH 0x80
--#define PCIMT_IRQSEL 0xbfff0058
--#define PCIMT_TESTMEM 0xbfff0060
--#define PCIMT_ECCREG 0xbfff0068
--#define PCIMT_CONFIG_ADDRESS 0xbfff0070
--#define PCIMT_ASIC_ID 0xbfff0078 /* read */
--#define PCIMT_SOFT_RESET 0xbfff0078 /* write */
--#define PCIMT_PIA_OE 0xbfff0080
--#define PCIMT_PIA_DATAOUT 0xbfff0088
--#define PCIMT_PIA_DATAIN 0xbfff0090
--#define PCIMT_CACHECONF 0xbfff0098
--#define PCIMT_INVSPACE 0xbfff00a0
-+#define PCIMT_IRQSEL CKSEG1ADDR(0xbfff0058)
-+#define PCIMT_TESTMEM CKSEG1ADDR(0xbfff0060)
-+#define PCIMT_ECCREG CKSEG1ADDR(0xbfff0068)
-+#define PCIMT_CONFIG_ADDRESS CKSEG1ADDR(0xbfff0070)
-+#define PCIMT_ASIC_ID CKSEG1ADDR(0xbfff0078) /* read */
-+#define PCIMT_SOFT_RESET CKSEG1ADDR(0xbfff0078) /* write */
-+#define PCIMT_PIA_OE CKSEG1ADDR(0xbfff0080)
-+#define PCIMT_PIA_DATAOUT CKSEG1ADDR(0xbfff0088)
-+#define PCIMT_PIA_DATAIN CKSEG1ADDR(0xbfff0090)
-+#define PCIMT_CACHECONF CKSEG1ADDR(0xbfff0098)
-+#define PCIMT_INVSPACE CKSEG1ADDR(0xbfff00a0)
- #endif
-
--#define PCIMT_PCI_CONF 0xbfff0100
-+#define PCIMT_PCI_CONF CKSEG1ADDR(0xbfff0100)
-
- /*
- * Data port for the PCI bus in IO space
-@@ -117,34 +117,34 @@ extern unsigned int sni_brd_type;
- /*
- * Board specific registers
- */
--#define PCIMT_CSMSR 0xbfd00000
--#define PCIMT_CSSWITCH 0xbfd10000
--#define PCIMT_CSITPEND 0xbfd20000
--#define PCIMT_AUTO_PO_EN 0xbfd30000
--#define PCIMT_CLR_TEMP 0xbfd40000
--#define PCIMT_AUTO_PO_DIS 0xbfd50000
--#define PCIMT_EXMSR 0xbfd60000
--#define PCIMT_UNUSED1 0xbfd70000
--#define PCIMT_CSWCSM 0xbfd80000
--#define PCIMT_UNUSED2 0xbfd90000
--#define PCIMT_CSLED 0xbfda0000
--#define PCIMT_CSMAPISA 0xbfdb0000
--#define PCIMT_CSRSTBP 0xbfdc0000
--#define PCIMT_CLRPOFF 0xbfdd0000
--#define PCIMT_CSTIMER 0xbfde0000
--#define PCIMT_PWDN 0xbfdf0000
-+#define PCIMT_CSMSR CKSEG1ADDR(0xbfd00000)
-+#define PCIMT_CSSWITCH CKSEG1ADDR(0xbfd10000)
-+#define PCIMT_CSITPEND CKSEG1ADDR(0xbfd20000)
-+#define PCIMT_AUTO_PO_EN CKSEG1ADDR(0xbfd30000)
-+#define PCIMT_CLR_TEMP CKSEG1ADDR(0xbfd40000)
-+#define PCIMT_AUTO_PO_DIS CKSEG1ADDR(0xbfd50000)
-+#define PCIMT_EXMSR CKSEG1ADDR(0xbfd60000)
-+#define PCIMT_UNUSED1 CKSEG1ADDR(0xbfd70000)
-+#define PCIMT_CSWCSM CKSEG1ADDR(0xbfd80000)
-+#define PCIMT_UNUSED2 CKSEG1ADDR(0xbfd90000)
-+#define PCIMT_CSLED CKSEG1ADDR(0xbfda0000)
-+#define PCIMT_CSMAPISA CKSEG1ADDR(0xbfdb0000)
-+#define PCIMT_CSRSTBP CKSEG1ADDR(0xbfdc0000)
-+#define PCIMT_CLRPOFF CKSEG1ADDR(0xbfdd0000)
-+#define PCIMT_CSTIMER CKSEG1ADDR(0xbfde0000)
-+#define PCIMT_PWDN CKSEG1ADDR(0xbfdf0000)
-
- /*
- * A20R based boards
- */
--#define A20R_PT_CLOCK_BASE 0xbc040000
--#define A20R_PT_TIM0_ACK 0xbc050000
--#define A20R_PT_TIM1_ACK 0xbc060000
-+#define A20R_PT_CLOCK_BASE CKSEG1ADDR(0xbc040000)
-+#define A20R_PT_TIM0_ACK CKSEG1ADDR(0xbc050000)
-+#define A20R_PT_TIM1_ACK CKSEG1ADDR(0xbc060000)
-
- #define SNI_A20R_IRQ_BASE MIPS_CPU_IRQ_BASE
- #define SNI_A20R_IRQ_TIMER (SNI_A20R_IRQ_BASE+5)
-
--#define SNI_PCIT_INT_REG 0xbfff000c
-+#define SNI_PCIT_INT_REG CKSEG1ADDR(0xbfff000c)
-
- #define SNI_PCIT_INT_START 24
- #define SNI_PCIT_INT_END 30
-@@ -186,10 +186,30 @@ extern unsigned int sni_brd_type;
- /*
- * Base address for the mapped 16mb EISA bus segment.
- */
--#define PCIMT_EISA_BASE 0xb0000000
-+#define PCIMT_EISA_BASE CKSEG1ADDR(0xb0000000)
-
- /* PCI EISA Interrupt acknowledge */
--#define PCIMT_INT_ACKNOWLEDGE 0xba000000
-+#define PCIMT_INT_ACKNOWLEDGE CKSEG1ADDR(0xba000000)
-+
-+/*
-+ * SNI ID PROM
+-#endif /* __ASM_SH_UACCESS_H */
+diff --git a/include/asm-sh/uaccess_32.h b/include/asm-sh/uaccess_32.h
+new file mode 100644
+index 0000000..b6082f3
+--- /dev/null
++++ b/include/asm-sh/uaccess_32.h
+@@ -0,0 +1,510 @@
++/* $Id: uaccess.h,v 1.11 2003/10/13 07:21:20 lethal Exp $
+ *
-+ * SNI_IDPROM_MEMSIZE Memsize in 16MB quantities
-+ * SNI_IDPROM_BRDTYPE Board Type
-+ * SNI_IDPROM_CPUTYPE CPU Type on RM400
++ * User space memory access functions
++ *
++ * Copyright (C) 1999, 2002 Niibe Yutaka
++ * Copyright (C) 2003 Paul Mundt
++ *
++ * Based on:
++ * MIPS implementation version 1.15 by
++ * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
++ * and i386 version.
+ */
-+#ifdef CONFIG_CPU_BIG_ENDIAN
-+#define __SNI_END 0
-+#endif
-+#ifdef CONFIG_CPU_LITTLE_ENDIAN
-+#define __SNI_END 3
-+#endif
-+#define SNI_IDPROM_BASE CKSEG1ADDR(0x1ff00000)
-+#define SNI_IDPROM_MEMSIZE (SNI_IDPROM_BASE + (0x28 ^ __SNI_END))
-+#define SNI_IDPROM_BRDTYPE (SNI_IDPROM_BASE + (0x29 ^ __SNI_END))
-+#define SNI_IDPROM_CPUTYPE (SNI_IDPROM_BASE + (0x30 ^ __SNI_END))
++#ifndef __ASM_SH_UACCESS_H
++#define __ASM_SH_UACCESS_H
+
-+#define SNI_IDPROM_SIZE 0x1000
-
- /* board specific init functions */
- extern void sni_a20r_init(void);
-@@ -207,6 +227,9 @@ extern void sni_pcimt_irq_init(void);
- /* timer inits */
- extern void sni_cpu_time_init(void);
-
-+/* eisa init for RM200/400 */
-+extern int sni_eisa_root_init(void);
++#include <linux/errno.h>
++#include <linux/sched.h>
++
++#define VERIFY_READ 0
++#define VERIFY_WRITE 1
+
- /* common irq stuff */
- extern void (*sni_hwint)(void);
- extern struct irqaction sni_isa_irq;
-diff --git a/include/asm-mips/stackframe.h b/include/asm-mips/stackframe.h
-index fb41a8d..051e1af 100644
---- a/include/asm-mips/stackframe.h
-+++ b/include/asm-mips/stackframe.h
-@@ -6,6 +6,7 @@
- * Copyright (C) 1994, 95, 96, 99, 2001 Ralf Baechle
- * Copyright (C) 1994, 1995, 1996 Paul M. Antoine.
- * Copyright (C) 1999 Silicon Graphics, Inc.
-+ * Copyright (C) 2007 Maciej W. Rozycki
- */
- #ifndef _ASM_STACKFRAME_H
- #define _ASM_STACKFRAME_H
-@@ -145,8 +146,16 @@
- .set reorder
- /* Called from user mode, new stack. */
- get_saved_sp
-+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
- 8: move k0, sp
- PTR_SUBU sp, k1, PT_SIZE
-+#else
-+ .set at=k0
-+8: PTR_SUBU k1, PT_SIZE
-+ .set noat
-+ move k0, sp
-+ move sp, k1
-+#endif
- LONG_S k0, PT_R29(sp)
- LONG_S $3, PT_R3(sp)
- /*
-diff --git a/include/asm-mips/time.h b/include/asm-mips/time.h
-index 7717934..a8fd16e 100644
---- a/include/asm-mips/time.h
-+++ b/include/asm-mips/time.h
-@@ -31,20 +31,13 @@ extern int rtc_mips_set_time(unsigned long);
- extern int rtc_mips_set_mmss(unsigned long);
-
- /*
-- * Timer interrupt functions.
-- * mips_timer_state is needed for high precision timer calibration.
-- */
--extern int (*mips_timer_state)(void);
--
--/*
- * board specific routines required by time_init().
- */
- extern void plat_time_init(void);
-
- /*
- * mips_hpt_frequency - must be set if you intend to use an R4k-compatible
-- * counter as a timer interrupt source; otherwise it can be set up
-- * automagically with an aid of mips_timer_state.
-+ * counter as a timer interrupt source.
- */
- extern unsigned int mips_hpt_frequency;
-
-diff --git a/include/asm-mips/topology.h b/include/asm-mips/topology.h
-index 0440fb9..259145e 100644
---- a/include/asm-mips/topology.h
-+++ b/include/asm-mips/topology.h
-@@ -1 +1,17 @@
+/*
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
++ * The fs value determines whether argument validity checking should be
++ * performed or not. If get_fs() == USER_DS, checking is performed, with
++ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
-+ * Copyright (C) 2007 by Ralf Baechle
++ * For historical reasons (Data Segment Register?), these macros are misnamed.
+ */
-+#ifndef __ASM_TOPOLOGY_H
-+#define __ASM_TOPOLOGY_H
+
- #include <topology.h>
++#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
+
-+#ifdef CONFIG_SMP
-+#define smt_capable() (smp_num_siblings > 1)
-+#endif
++#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFFUL)
++#define USER_DS MAKE_MM_SEG(PAGE_OFFSET)
+
-+#endif /* __ASM_TOPOLOGY_H */
-diff --git a/include/asm-mips/tx4927/tx4927_pci.h b/include/asm-mips/tx4927/tx4927_pci.h
-index 3f1e470..0be77df 100644
---- a/include/asm-mips/tx4927/tx4927_pci.h
-+++ b/include/asm-mips/tx4927/tx4927_pci.h
-@@ -9,6 +9,7 @@
- #define __ASM_TX4927_TX4927_PCI_H
-
- #define TX4927_CCFG_TOE 0x00004000
-+#define TX4927_CCFG_WR 0x00008000
- #define TX4927_CCFG_TINTDIS 0x01000000
-
- #define TX4927_PCIMEM 0x08000000
-diff --git a/include/asm-mips/uaccess.h b/include/asm-mips/uaccess.h
-index c30c718..66523d6 100644
---- a/include/asm-mips/uaccess.h
-+++ b/include/asm-mips/uaccess.h
-@@ -5,6 +5,7 @@
- *
- * Copyright (C) 1996, 1997, 1998, 1999, 2000, 03, 04 by Ralf Baechle
- * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
-+ * Copyright (C) 2007 Maciej W. Rozycki
- */
- #ifndef _ASM_UACCESS_H
- #define _ASM_UACCESS_H
-@@ -387,6 +388,12 @@ extern void __put_user_unknown(void);
- "jal\t" #destination "\n\t"
- #endif
-
-+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
-+#define DADDI_SCRATCH "$0"
-+#else
-+#define DADDI_SCRATCH "$3"
-+#endif
++#define segment_eq(a,b) ((a).seg == (b).seg)
+
- extern size_t __copy_user(void *__to, const void *__from, size_t __n);
-
- #define __invoke_copy_to_user(to, from, n) \
-@@ -403,7 +410,7 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
- : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
- : \
- : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
-- "memory"); \
-+ DADDI_SCRATCH, "memory"); \
- __cu_len_r; \
- })
-
-@@ -512,7 +519,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
- : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
- : \
- : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
-- "memory"); \
-+ DADDI_SCRATCH, "memory"); \
- __cu_len_r; \
- })
-
-@@ -535,7 +542,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
- : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r) \
- : \
- : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31", \
-- "memory"); \
-+ DADDI_SCRATCH, "memory"); \
- __cu_len_r; \
- })
-
-diff --git a/include/asm-mips/war.h b/include/asm-mips/war.h
-index d2808ed..22361d5 100644
---- a/include/asm-mips/war.h
-+++ b/include/asm-mips/war.h
-@@ -4,6 +4,7 @@
- * for more details.
- *
- * Copyright (C) 2002, 2004, 2007 by Ralf Baechle
-+ * Copyright (C) 2007 Maciej W. Rozycki
- */
- #ifndef _ASM_WAR_H
- #define _ASM_WAR_H
-@@ -11,6 +12,67 @@
- #include <war.h>
-
- /*
-+ * Work around certain R4000 CPU errata (as implemented by GCC):
-+ *
-+ * - A double-word or a variable shift may give an incorrect result
-+ * if executed immediately after starting an integer division:
-+ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
-+ * erratum #28
-+ * "MIPS R4000MC Errata, Processor Revision 2.2 and 3.0", erratum
-+ * #19
-+ *
-+ * - A double-word or a variable shift may give an incorrect result
-+ * if executed while an integer multiplication is in progress:
-+ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
-+ * errata #16 & #28
-+ *
-+ * - An integer division may give an incorrect result if started in
-+ * a delay slot of a taken branch or a jump:
-+ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
-+ * erratum #52
-+ */
-+#ifdef CONFIG_CPU_R4000_WORKAROUNDS
-+#define R4000_WAR 1
-+#else
-+#define R4000_WAR 0
-+#endif
++#define get_ds() (KERNEL_DS)
+
-+/*
-+ * Work around certain R4400 CPU errata (as implemented by GCC):
-+ *
-+ * - A double-word or a variable shift may give an incorrect result
-+ * if executed immediately after starting an integer division:
-+ * "MIPS R4400MC Errata, Processor Revision 1.0", erratum #10
-+ * "MIPS R4400MC Errata, Processor Revision 2.0 & 3.0", erratum #4
-+ */
-+#ifdef CONFIG_CPU_R4400_WORKAROUNDS
-+#define R4400_WAR 1
-+#else
-+#define R4400_WAR 0
-+#endif
++#if !defined(CONFIG_MMU)
++/* NOMMU is always true */
++#define __addr_ok(addr) (1)
++
++static inline mm_segment_t get_fs(void)
++{
++ return USER_DS;
++}
++
++static inline void set_fs(mm_segment_t s)
++{
++}
+
+/*
-+ * Work around the "daddi" and "daddiu" CPU errata:
++ * __access_ok: Check if address with size is OK or not.
+ *
-+ * - The `daddi' instruction fails to trap on overflow.
-+ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
-+ * erratum #23
++ * If we don't have an MMU (or if its disabled) the only thing we really have
++ * to look out for is if the address resides somewhere outside of what
++ * available RAM we have.
+ *
-+ * - The `daddiu' instruction can produce an incorrect result.
-+ * "MIPS R4000PC/SC Errata, Processor Revision 2.2 and 3.0",
-+ * erratum #41
-+ * "MIPS R4000MC Errata, Processor Revision 2.2 and 3.0", erratum
-+ * #15
-+ * "MIPS R4400PC/SC Errata, Processor Revision 1.0", erratum #7
-+ * "MIPS R4400MC Errata, Processor Revision 1.0", erratum #5
++ * TODO: This check could probably also stand to be restricted somewhat more..
++ * though it still does the Right Thing(tm) for the time being.
+ */
-+#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
-+#define DADDI_WAR 1
-+#else
-+#define DADDI_WAR 0
-+#endif
++static inline int __access_ok(unsigned long addr, unsigned long size)
++{
++ return ((addr >= memory_start) && ((addr + size) < memory_end));
++}
++#else /* CONFIG_MMU */
++#define __addr_ok(addr) \
++ ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
++
++#define get_fs() (current_thread_info()->addr_limit)
++#define set_fs(x) (current_thread_info()->addr_limit = (x))
+
+/*
- * Another R4600 erratum. Due to the lack of errata information the exact
- * technical details aren't known. I've experimentally found that disabling
- * interrupts during indexed I-cache flushes seems to be sufficient to deal
-diff --git a/include/asm-powerpc/bitops.h b/include/asm-powerpc/bitops.h
-index 733b4af..220d9a7 100644
---- a/include/asm-powerpc/bitops.h
-+++ b/include/asm-powerpc/bitops.h
-@@ -359,6 +359,8 @@ static __inline__ int test_le_bit(unsigned long nr,
- unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
- unsigned long size, unsigned long offset);
-
-+unsigned long generic_find_next_le_bit(const unsigned long *addr,
-+ unsigned long size, unsigned long offset);
- /* Bitmap functions for the ext2 filesystem */
-
- #define ext2_set_bit(nr,addr) \
-@@ -378,6 +380,8 @@ unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
- #define ext2_find_next_zero_bit(addr, size, off) \
- generic_find_next_zero_le_bit((unsigned long*)addr, size, off)
-
-+#define ext2_find_next_bit(addr, size, off) \
-+ generic_find_next_le_bit((unsigned long *)addr, size, off)
- /* Bitmap functions for the minix filesystem. */
-
- #define minix_test_and_set_bit(nr,addr) \
-diff --git a/include/asm-powerpc/ide.h b/include/asm-powerpc/ide.h
-index fd7f5a4..6d50310 100644
---- a/include/asm-powerpc/ide.h
-+++ b/include/asm-powerpc/ide.h
-@@ -42,9 +42,6 @@ struct ide_machdep_calls {
-
- extern struct ide_machdep_calls ppc_ide_md;
-
--#undef SUPPORT_SLOW_DATA_PORTS
--#define SUPPORT_SLOW_DATA_PORTS 0
--
- #define IDE_ARCH_OBSOLETE_DEFAULTS
-
- static __inline__ int ide_default_irq(unsigned long base)
-diff --git a/include/asm-powerpc/pasemi_dma.h b/include/asm-powerpc/pasemi_dma.h
-new file mode 100644
-index 0000000..b4526ff
---- /dev/null
-+++ b/include/asm-powerpc/pasemi_dma.h
-@@ -0,0 +1,467 @@
-+/*
-+ * Copyright (C) 2006 PA Semi, Inc
-+ *
-+ * Hardware register layout and descriptor formats for the on-board
-+ * DMA engine on PA Semi PWRficient. Used by ethernet, function and security
-+ * drivers.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License version 2 as
-+ * published by the Free Software Foundation.
++ * __access_ok: Check if address with size is OK or not.
+ *
-+ * This program is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-+ * GNU General Public License for more details.
++ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
+ *
-+ * You should have received a copy of the GNU General Public License
-+ * along with this program; if not, write to the Free Software
-+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ * sum := addr + size; carry? --> flag = true;
++ * if (sum >= addr_limit) flag = true;
+ */
++static inline int __access_ok(unsigned long addr, unsigned long size)
++{
++ unsigned long flag, sum;
+
-+#ifndef ASM_PASEMI_DMA_H
-+#define ASM_PASEMI_DMA_H
++ __asm__("clrt\n\t"
++ "addc %3, %1\n\t"
++ "movt %0\n\t"
++ "cmp/hi %4, %1\n\t"
++ "rotcl %0"
++ :"=&r" (flag), "=r" (sum)
++ :"1" (addr), "r" (size),
++ "r" (current_thread_info()->addr_limit.seg)
++ :"t");
++ return flag == 0;
++}
++#endif /* CONFIG_MMU */
+
-+/* status register layout in IOB region, at 0xfb800000 */
-+struct pasdma_status {
-+ u64 rx_sta[64]; /* RX channel status */
-+ u64 tx_sta[20]; /* TX channel status */
-+};
++#define access_ok(type, addr, size) \
++ (__chk_user_ptr(addr), \
++ __access_ok((unsigned long __force)(addr), (size)))
+
++/*
++ * Uh, these should become the main single-value transfer routines ...
++ * They automatically use the right size if we just have the right
++ * pointer type ...
++ *
++ * As SuperH uses the same address space for kernel and user data, we
++ * can just do these as direct assignments.
++ *
++ * Careful to not
++ * (a) re-use the arguments for side effects (sizeof is ok)
++ * (b) require any knowledge of processes at this stage
++ */
++#define put_user(x,ptr) __put_user_check((x), (ptr), sizeof(*(ptr)))
++#define get_user(x,ptr) __get_user_check((x), (ptr), sizeof(*(ptr)))
+
-+/* All these registers live in the PCI configuration space for the DMA PCI
-+ * device. Use the normal PCI config access functions for them.
++/*
++ * The "__xxx" versions do not do address space checking, useful when
++ * doing multiple accesses to the same area (the user has to do the
++ * checks by hand with "access_ok()")
+ */
-+enum {
-+ PAS_DMA_CAP_TXCH = 0x44, /* Transmit Channel Info */
-+ PAS_DMA_CAP_RXCH = 0x48, /* Transmit Channel Info */
-+ PAS_DMA_CAP_IFI = 0x4c, /* Interface Info */
-+ PAS_DMA_COM_TXCMD = 0x100, /* Transmit Command Register */
-+ PAS_DMA_COM_TXSTA = 0x104, /* Transmit Status Register */
-+ PAS_DMA_COM_RXCMD = 0x108, /* Receive Command Register */
-+ PAS_DMA_COM_RXSTA = 0x10c, /* Receive Status Register */
-+};
++#define __put_user(x,ptr) __put_user_nocheck((x), (ptr), sizeof(*(ptr)))
++#define __get_user(x,ptr) __get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+
++struct __large_struct { unsigned long buf[100]; };
++#define __m(x) (*(struct __large_struct __user *)(x))
+
-+#define PAS_DMA_CAP_TXCH_TCHN_M 0x00ff0000 /* # of TX channels */
-+#define PAS_DMA_CAP_TXCH_TCHN_S 16
++#define __get_user_size(x,ptr,size,retval) \
++do { \
++ retval = 0; \
++ switch (size) { \
++ case 1: \
++ __get_user_asm(x, ptr, retval, "b"); \
++ break; \
++ case 2: \
++ __get_user_asm(x, ptr, retval, "w"); \
++ break; \
++ case 4: \
++ __get_user_asm(x, ptr, retval, "l"); \
++ break; \
++ default: \
++ __get_user_unknown(); \
++ break; \
++ } \
++} while (0)
+
-+#define PAS_DMA_CAP_RXCH_RCHN_M 0x00ff0000 /* # of RX channels */
-+#define PAS_DMA_CAP_RXCH_RCHN_S 16
++#define __get_user_nocheck(x,ptr,size) \
++({ \
++ long __gu_err; \
++ unsigned long __gu_val; \
++ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
++ __chk_user_ptr(ptr); \
++ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
++ (x) = (__typeof__(*(ptr)))__gu_val; \
++ __gu_err; \
++})
+
-+#define PAS_DMA_CAP_IFI_IOFF_M 0xff000000 /* Cfg reg for intf pointers */
-+#define PAS_DMA_CAP_IFI_IOFF_S 24
-+#define PAS_DMA_CAP_IFI_NIN_M 0x00ff0000 /* # of interfaces */
-+#define PAS_DMA_CAP_IFI_NIN_S 16
++#define __get_user_check(x,ptr,size) \
++({ \
++ long __gu_err = -EFAULT; \
++ unsigned long __gu_val = 0; \
++ const __typeof__(*(ptr)) *__gu_addr = (ptr); \
++ if (likely(access_ok(VERIFY_READ, __gu_addr, (size)))) \
++ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
++ (x) = (__typeof__(*(ptr)))__gu_val; \
++ __gu_err; \
++})
+
-+#define PAS_DMA_COM_TXCMD_EN 0x00000001 /* enable */
-+#define PAS_DMA_COM_TXSTA_ACT 0x00000001 /* active */
-+#define PAS_DMA_COM_RXCMD_EN 0x00000001 /* enable */
-+#define PAS_DMA_COM_RXSTA_ACT 0x00000001 /* active */
++#define __get_user_asm(x, addr, err, insn) \
++({ \
++__asm__ __volatile__( \
++ "1:\n\t" \
++ "mov." insn " %2, %1\n\t" \
++ "2:\n" \
++ ".section .fixup,\"ax\"\n" \
++ "3:\n\t" \
++ "mov #0, %1\n\t" \
++ "mov.l 4f, %0\n\t" \
++ "jmp @%0\n\t" \
++ " mov %3, %0\n\t" \
++ ".balign 4\n" \
++ "4: .long 2b\n\t" \
++ ".previous\n" \
++ ".section __ex_table,\"a\"\n\t" \
++ ".long 1b, 3b\n\t" \
++ ".previous" \
++ :"=&r" (err), "=&r" (x) \
++ :"m" (__m(addr)), "i" (-EFAULT), "0" (err)); })
+
++extern void __get_user_unknown(void);
+
-+/* Per-interface and per-channel registers */
-+#define _PAS_DMA_RXINT_STRIDE 0x20
-+#define PAS_DMA_RXINT_RCMDSTA(i) (0x200+(i)*_PAS_DMA_RXINT_STRIDE)
-+#define PAS_DMA_RXINT_RCMDSTA_EN 0x00000001
-+#define PAS_DMA_RXINT_RCMDSTA_ST 0x00000002
-+#define PAS_DMA_RXINT_RCMDSTA_MBT 0x00000008
-+#define PAS_DMA_RXINT_RCMDSTA_MDR 0x00000010
-+#define PAS_DMA_RXINT_RCMDSTA_MOO 0x00000020
-+#define PAS_DMA_RXINT_RCMDSTA_MBP 0x00000040
-+#define PAS_DMA_RXINT_RCMDSTA_BT 0x00000800
-+#define PAS_DMA_RXINT_RCMDSTA_DR 0x00001000
-+#define PAS_DMA_RXINT_RCMDSTA_OO 0x00002000
-+#define PAS_DMA_RXINT_RCMDSTA_BP 0x00004000
-+#define PAS_DMA_RXINT_RCMDSTA_TB 0x00008000
-+#define PAS_DMA_RXINT_RCMDSTA_ACT 0x00010000
-+#define PAS_DMA_RXINT_RCMDSTA_DROPS_M 0xfffe0000
-+#define PAS_DMA_RXINT_RCMDSTA_DROPS_S 17
-+#define PAS_DMA_RXINT_CFG(i) (0x204+(i)*_PAS_DMA_RXINT_STRIDE)
-+#define PAS_DMA_RXINT_CFG_RBP 0x80000000
-+#define PAS_DMA_RXINT_CFG_ITRR 0x40000000
-+#define PAS_DMA_RXINT_CFG_DHL_M 0x07000000
-+#define PAS_DMA_RXINT_CFG_DHL_S 24
-+#define PAS_DMA_RXINT_CFG_DHL(x) (((x) << PAS_DMA_RXINT_CFG_DHL_S) & \
-+ PAS_DMA_RXINT_CFG_DHL_M)
-+#define PAS_DMA_RXINT_CFG_ITR 0x00400000
-+#define PAS_DMA_RXINT_CFG_LW 0x00200000
-+#define PAS_DMA_RXINT_CFG_L2 0x00100000
-+#define PAS_DMA_RXINT_CFG_HEN 0x00080000
-+#define PAS_DMA_RXINT_CFG_WIF 0x00000002
-+#define PAS_DMA_RXINT_CFG_WIL 0x00000001
++#define __put_user_size(x,ptr,size,retval) \
++do { \
++ retval = 0; \
++ switch (size) { \
++ case 1: \
++ __put_user_asm(x, ptr, retval, "b"); \
++ break; \
++ case 2: \
++ __put_user_asm(x, ptr, retval, "w"); \
++ break; \
++ case 4: \
++ __put_user_asm(x, ptr, retval, "l"); \
++ break; \
++ case 8: \
++ __put_user_u64(x, ptr, retval); \
++ break; \
++ default: \
++ __put_user_unknown(); \
++ } \
++} while (0)
+
-+#define PAS_DMA_RXINT_INCR(i) (0x210+(i)*_PAS_DMA_RXINT_STRIDE)
-+#define PAS_DMA_RXINT_INCR_INCR_M 0x0000ffff
-+#define PAS_DMA_RXINT_INCR_INCR_S 0
-+#define PAS_DMA_RXINT_INCR_INCR(x) ((x) & 0x0000ffff)
-+#define PAS_DMA_RXINT_BASEL(i) (0x218+(i)*_PAS_DMA_RXINT_STRIDE)
-+#define PAS_DMA_RXINT_BASEL_BRBL(x) ((x) & ~0x3f)
-+#define PAS_DMA_RXINT_BASEU(i) (0x21c+(i)*_PAS_DMA_RXINT_STRIDE)
-+#define PAS_DMA_RXINT_BASEU_BRBH(x) ((x) & 0xfff)
-+#define PAS_DMA_RXINT_BASEU_SIZ_M 0x3fff0000 /* # of cache lines worth of buffer ring */
-+#define PAS_DMA_RXINT_BASEU_SIZ_S 16 /* 0 = 16K */
-+#define PAS_DMA_RXINT_BASEU_SIZ(x) (((x) << PAS_DMA_RXINT_BASEU_SIZ_S) & \
-+ PAS_DMA_RXINT_BASEU_SIZ_M)
++#define __put_user_nocheck(x,ptr,size) \
++({ \
++ long __pu_err; \
++ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
++ __chk_user_ptr(ptr); \
++ __put_user_size((x), __pu_addr, (size), __pu_err); \
++ __pu_err; \
++})
+
++#define __put_user_check(x,ptr,size) \
++({ \
++ long __pu_err = -EFAULT; \
++ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
++ if (likely(access_ok(VERIFY_WRITE, __pu_addr, size))) \
++ __put_user_size((x), __pu_addr, (size), \
++ __pu_err); \
++ __pu_err; \
++})
+
-+#define _PAS_DMA_TXCHAN_STRIDE 0x20 /* Size per channel */
-+#define _PAS_DMA_TXCHAN_TCMDSTA 0x300 /* Command / Status */
-+#define _PAS_DMA_TXCHAN_CFG 0x304 /* Configuration */
-+#define _PAS_DMA_TXCHAN_DSCRBU 0x308 /* Descriptor BU Allocation */
-+#define _PAS_DMA_TXCHAN_INCR 0x310 /* Descriptor increment */
-+#define _PAS_DMA_TXCHAN_CNT 0x314 /* Descriptor count/offset */
-+#define _PAS_DMA_TXCHAN_BASEL 0x318 /* Descriptor ring base (low) */
-+#define _PAS_DMA_TXCHAN_BASEU 0x31c /* (high) */
-+#define PAS_DMA_TXCHAN_TCMDSTA(c) (0x300+(c)*_PAS_DMA_TXCHAN_STRIDE)
-+#define PAS_DMA_TXCHAN_TCMDSTA_EN 0x00000001 /* Enabled */
-+#define PAS_DMA_TXCHAN_TCMDSTA_ST 0x00000002 /* Stop interface */
-+#define PAS_DMA_TXCHAN_TCMDSTA_ACT 0x00010000 /* Active */
-+#define PAS_DMA_TXCHAN_TCMDSTA_SZ 0x00000800
-+#define PAS_DMA_TXCHAN_TCMDSTA_DB 0x00000400
-+#define PAS_DMA_TXCHAN_TCMDSTA_DE 0x00000200
-+#define PAS_DMA_TXCHAN_TCMDSTA_DA 0x00000100
-+#define PAS_DMA_TXCHAN_CFG(c) (0x304+(c)*_PAS_DMA_TXCHAN_STRIDE)
-+#define PAS_DMA_TXCHAN_CFG_TY_IFACE 0x00000000 /* Type = interface */
-+#define PAS_DMA_TXCHAN_CFG_TATTR_M 0x0000003c
-+#define PAS_DMA_TXCHAN_CFG_TATTR_S 2
-+#define PAS_DMA_TXCHAN_CFG_TATTR(x) (((x) << PAS_DMA_TXCHAN_CFG_TATTR_S) & \
-+ PAS_DMA_TXCHAN_CFG_TATTR_M)
-+#define PAS_DMA_TXCHAN_CFG_WT_M 0x000001c0
-+#define PAS_DMA_TXCHAN_CFG_WT_S 6
-+#define PAS_DMA_TXCHAN_CFG_WT(x) (((x) << PAS_DMA_TXCHAN_CFG_WT_S) & \
-+ PAS_DMA_TXCHAN_CFG_WT_M)
-+#define PAS_DMA_TXCHAN_CFG_TRD 0x00010000 /* translate data */
-+#define PAS_DMA_TXCHAN_CFG_TRR 0x00008000 /* translate rings */
-+#define PAS_DMA_TXCHAN_CFG_UP 0x00004000 /* update tx descr when sent */
-+#define PAS_DMA_TXCHAN_CFG_CL 0x00002000 /* Clean last line */
-+#define PAS_DMA_TXCHAN_CFG_CF 0x00001000 /* Clean first line */
-+#define PAS_DMA_TXCHAN_INCR(c) (0x310+(c)*_PAS_DMA_TXCHAN_STRIDE)
-+#define PAS_DMA_TXCHAN_BASEL(c) (0x318+(c)*_PAS_DMA_TXCHAN_STRIDE)
-+#define PAS_DMA_TXCHAN_BASEL_BRBL_M 0xffffffc0
-+#define PAS_DMA_TXCHAN_BASEL_BRBL_S 0
-+#define PAS_DMA_TXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_TXCHAN_BASEL_BRBL_S) & \
-+ PAS_DMA_TXCHAN_BASEL_BRBL_M)
-+#define PAS_DMA_TXCHAN_BASEU(c) (0x31c+(c)*_PAS_DMA_TXCHAN_STRIDE)
-+#define PAS_DMA_TXCHAN_BASEU_BRBH_M 0x00000fff
-+#define PAS_DMA_TXCHAN_BASEU_BRBH_S 0
-+#define PAS_DMA_TXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_TXCHAN_BASEU_BRBH_S) & \
-+ PAS_DMA_TXCHAN_BASEU_BRBH_M)
-+/* # of cache lines worth of buffer ring */
-+#define PAS_DMA_TXCHAN_BASEU_SIZ_M 0x3fff0000
-+#define PAS_DMA_TXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
-+#define PAS_DMA_TXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_TXCHAN_BASEU_SIZ_S) & \
-+ PAS_DMA_TXCHAN_BASEU_SIZ_M)
++#define __put_user_asm(x, addr, err, insn) \
++({ \
++__asm__ __volatile__( \
++ "1:\n\t" \
++ "mov." insn " %1, %2\n\t" \
++ "2:\n" \
++ ".section .fixup,\"ax\"\n" \
++ "3:\n\t" \
++ "mov.l 4f, %0\n\t" \
++ "jmp @%0\n\t" \
++ " mov %3, %0\n\t" \
++ ".balign 4\n" \
++ "4: .long 2b\n\t" \
++ ".previous\n" \
++ ".section __ex_table,\"a\"\n\t" \
++ ".long 1b, 3b\n\t" \
++ ".previous" \
++ :"=&r" (err) \
++ :"r" (x), "m" (__m(addr)), "i" (-EFAULT), "0" (err) \
++ :"memory"); })
+
-+#define _PAS_DMA_RXCHAN_STRIDE 0x20 /* Size per channel */
-+#define _PAS_DMA_RXCHAN_CCMDSTA 0x800 /* Command / Status */
-+#define _PAS_DMA_RXCHAN_CFG 0x804 /* Configuration */
-+#define _PAS_DMA_RXCHAN_INCR 0x810 /* Descriptor increment */
-+#define _PAS_DMA_RXCHAN_CNT 0x814 /* Descriptor count/offset */
-+#define _PAS_DMA_RXCHAN_BASEL 0x818 /* Descriptor ring base (low) */
-+#define _PAS_DMA_RXCHAN_BASEU 0x81c /* (high) */
-+#define PAS_DMA_RXCHAN_CCMDSTA(c) (0x800+(c)*_PAS_DMA_RXCHAN_STRIDE)
-+#define PAS_DMA_RXCHAN_CCMDSTA_EN 0x00000001 /* Enabled */
-+#define PAS_DMA_RXCHAN_CCMDSTA_ST 0x00000002 /* Stop interface */
-+#define PAS_DMA_RXCHAN_CCMDSTA_ACT 0x00010000 /* Active */
-+#define PAS_DMA_RXCHAN_CCMDSTA_DU 0x00020000
-+#define PAS_DMA_RXCHAN_CCMDSTA_OD 0x00002000
-+#define PAS_DMA_RXCHAN_CCMDSTA_FD 0x00001000
-+#define PAS_DMA_RXCHAN_CCMDSTA_DT 0x00000800
-+#define PAS_DMA_RXCHAN_CFG(c) (0x804+(c)*_PAS_DMA_RXCHAN_STRIDE)
-+#define PAS_DMA_RXCHAN_CFG_CTR 0x00000400
-+#define PAS_DMA_RXCHAN_CFG_HBU_M 0x00000380
-+#define PAS_DMA_RXCHAN_CFG_HBU_S 7
-+#define PAS_DMA_RXCHAN_CFG_HBU(x) (((x) << PAS_DMA_RXCHAN_CFG_HBU_S) & \
-+ PAS_DMA_RXCHAN_CFG_HBU_M)
-+#define PAS_DMA_RXCHAN_INCR(c) (0x810+(c)*_PAS_DMA_RXCHAN_STRIDE)
-+#define PAS_DMA_RXCHAN_BASEL(c) (0x818+(c)*_PAS_DMA_RXCHAN_STRIDE)
-+#define PAS_DMA_RXCHAN_BASEL_BRBL_M 0xffffffc0
-+#define PAS_DMA_RXCHAN_BASEL_BRBL_S 0
-+#define PAS_DMA_RXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_RXCHAN_BASEL_BRBL_S) & \
-+ PAS_DMA_RXCHAN_BASEL_BRBL_M)
-+#define PAS_DMA_RXCHAN_BASEU(c) (0x81c+(c)*_PAS_DMA_RXCHAN_STRIDE)
-+#define PAS_DMA_RXCHAN_BASEU_BRBH_M 0x00000fff
-+#define PAS_DMA_RXCHAN_BASEU_BRBH_S 0
-+#define PAS_DMA_RXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_RXCHAN_BASEU_BRBH_S) & \
-+ PAS_DMA_RXCHAN_BASEU_BRBH_M)
-+/* # of cache lines worth of buffer ring */
-+#define PAS_DMA_RXCHAN_BASEU_SIZ_M 0x3fff0000
-+#define PAS_DMA_RXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
-+#define PAS_DMA_RXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_RXCHAN_BASEU_SIZ_S) & \
-+ PAS_DMA_RXCHAN_BASEU_SIZ_M)
++#if defined(CONFIG_CPU_LITTLE_ENDIAN)
++#define __put_user_u64(val,addr,retval) \
++({ \
++__asm__ __volatile__( \
++ "1:\n\t" \
++ "mov.l %R1,%2\n\t" \
++ "mov.l %S1,%T2\n\t" \
++ "2:\n" \
++ ".section .fixup,\"ax\"\n" \
++ "3:\n\t" \
++ "mov.l 4f,%0\n\t" \
++ "jmp @%0\n\t" \
++ " mov %3,%0\n\t" \
++ ".balign 4\n" \
++ "4: .long 2b\n\t" \
++ ".previous\n" \
++ ".section __ex_table,\"a\"\n\t" \
++ ".long 1b, 3b\n\t" \
++ ".previous" \
++ : "=r" (retval) \
++ : "r" (val), "m" (__m(addr)), "i" (-EFAULT), "0" (retval) \
++ : "memory"); })
++#else
++#define __put_user_u64(val,addr,retval) \
++({ \
++__asm__ __volatile__( \
++ "1:\n\t" \
++ "mov.l %S1,%2\n\t" \
++ "mov.l %R1,%T2\n\t" \
++ "2:\n" \
++ ".section .fixup,\"ax\"\n" \
++ "3:\n\t" \
++ "mov.l 4f,%0\n\t" \
++ "jmp @%0\n\t" \
++ " mov %3,%0\n\t" \
++ ".balign 4\n" \
++ "4: .long 2b\n\t" \
++ ".previous\n" \
++ ".section __ex_table,\"a\"\n\t" \
++ ".long 1b, 3b\n\t" \
++ ".previous" \
++ : "=r" (retval) \
++ : "r" (val), "m" (__m(addr)), "i" (-EFAULT), "0" (retval) \
++ : "memory"); })
++#endif
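The two `__put_user_u64` variants above exist because a 64-bit store is issued as two 32-bit `mov.l`s, and which half lands at the lower address depends on CPU endianness (the `%R`/`%S` operand modifiers name the halves of the register pair). A hedged host-side sketch of just the half-ordering — the helper name is hypothetical, and `memcpy` of host-order halves only illustrates ordering, not a true byte-for-byte big-endian store:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Store a 64-bit value as two 32-bit halves, low-half-first for
 * little endian, high-half-first otherwise — mirroring the swapped
 * mov.l pair in the two __put_user_u64 variants. */
static void put_u64_halves(uint8_t *dst, uint64_t v, int little_endian)
{
    uint32_t lo = (uint32_t)v, hi = (uint32_t)(v >> 32);
    if (little_endian) {
        memcpy(dst,     &lo, 4);
        memcpy(dst + 4, &hi, 4);
    } else {
        memcpy(dst,     &hi, 4);
        memcpy(dst + 4, &lo, 4);
    }
}
```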
+
-+#define PAS_STATUS_PCNT_M 0x000000000000ffffull
-+#define PAS_STATUS_PCNT_S 0
-+#define PAS_STATUS_DCNT_M 0x00000000ffff0000ull
-+#define PAS_STATUS_DCNT_S 16
-+#define PAS_STATUS_BPCNT_M 0x0000ffff00000000ull
-+#define PAS_STATUS_BPCNT_S 32
-+#define PAS_STATUS_CAUSE_M 0xf000000000000000ull
-+#define PAS_STATUS_TIMER 0x1000000000000000ull
-+#define PAS_STATUS_ERROR 0x2000000000000000ull
-+#define PAS_STATUS_SOFT 0x4000000000000000ull
-+#define PAS_STATUS_INT 0x8000000000000000ull
++extern void __put_user_unknown(void);
+
-+#define PAS_IOB_COM_PKTHDRCNT 0x120
-+#define PAS_IOB_COM_PKTHDRCNT_PKTHDR1_M 0x0fff0000
-+#define PAS_IOB_COM_PKTHDRCNT_PKTHDR1_S 16
-+#define PAS_IOB_COM_PKTHDRCNT_PKTHDR0_M 0x00000fff
-+#define PAS_IOB_COM_PKTHDRCNT_PKTHDR0_S 0
++/* Generic arbitrary sized copy. */
++/* Return the number of bytes NOT copied */
++__kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
+
-+#define PAS_IOB_DMA_RXCH_CFG(i) (0x1100 + (i)*4)
-+#define PAS_IOB_DMA_RXCH_CFG_CNTTH_M 0x00000fff
-+#define PAS_IOB_DMA_RXCH_CFG_CNTTH_S 0
-+#define PAS_IOB_DMA_RXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_RXCH_CFG_CNTTH_S) & \
-+ PAS_IOB_DMA_RXCH_CFG_CNTTH_M)
-+#define PAS_IOB_DMA_TXCH_CFG(i) (0x1200 + (i)*4)
-+#define PAS_IOB_DMA_TXCH_CFG_CNTTH_M 0x00000fff
-+#define PAS_IOB_DMA_TXCH_CFG_CNTTH_S 0
-+#define PAS_IOB_DMA_TXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_TXCH_CFG_CNTTH_S) & \
-+ PAS_IOB_DMA_TXCH_CFG_CNTTH_M)
-+#define PAS_IOB_DMA_RXCH_STAT(i) (0x1300 + (i)*4)
-+#define PAS_IOB_DMA_RXCH_STAT_INTGEN 0x00001000
-+#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_M 0x00000fff
-+#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_S 0
-+#define PAS_IOB_DMA_RXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_RXCH_STAT_CNTDEL_S) &\
-+ PAS_IOB_DMA_RXCH_STAT_CNTDEL_M)
-+#define PAS_IOB_DMA_TXCH_STAT(i) (0x1400 + (i)*4)
-+#define PAS_IOB_DMA_TXCH_STAT_INTGEN 0x00001000
-+#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_M 0x00000fff
-+#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_S 0
-+#define PAS_IOB_DMA_TXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_TXCH_STAT_CNTDEL_S) &\
-+ PAS_IOB_DMA_TXCH_STAT_CNTDEL_M)
-+#define PAS_IOB_DMA_RXCH_RESET(i) (0x1500 + (i)*4)
-+#define PAS_IOB_DMA_RXCH_RESET_PCNT_M 0xffff0000
-+#define PAS_IOB_DMA_RXCH_RESET_PCNT_S 16
-+#define PAS_IOB_DMA_RXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_RXCH_RESET_PCNT_S) & \
-+ PAS_IOB_DMA_RXCH_RESET_PCNT_M)
-+#define PAS_IOB_DMA_RXCH_RESET_PCNTRST 0x00000020
-+#define PAS_IOB_DMA_RXCH_RESET_DCNTRST 0x00000010
-+#define PAS_IOB_DMA_RXCH_RESET_TINTC 0x00000008
-+#define PAS_IOB_DMA_RXCH_RESET_DINTC 0x00000004
-+#define PAS_IOB_DMA_RXCH_RESET_SINTC 0x00000002
-+#define PAS_IOB_DMA_RXCH_RESET_PINTC 0x00000001
-+#define PAS_IOB_DMA_TXCH_RESET(i) (0x1600 + (i)*4)
-+#define PAS_IOB_DMA_TXCH_RESET_PCNT_M 0xffff0000
-+#define PAS_IOB_DMA_TXCH_RESET_PCNT_S 16
-+#define PAS_IOB_DMA_TXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_TXCH_RESET_PCNT_S) & \
-+ PAS_IOB_DMA_TXCH_RESET_PCNT_M)
-+#define PAS_IOB_DMA_TXCH_RESET_PCNTRST 0x00000020
-+#define PAS_IOB_DMA_TXCH_RESET_DCNTRST 0x00000010
-+#define PAS_IOB_DMA_TXCH_RESET_TINTC 0x00000008
-+#define PAS_IOB_DMA_TXCH_RESET_DINTC 0x00000004
-+#define PAS_IOB_DMA_TXCH_RESET_SINTC 0x00000002
-+#define PAS_IOB_DMA_TXCH_RESET_PINTC 0x00000001
++#define copy_to_user(to,from,n) ({ \
++void *__copy_to = (void *) (to); \
++__kernel_size_t __copy_size = (__kernel_size_t) (n); \
++__kernel_size_t __copy_res; \
++if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
++__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
++} else __copy_res = __copy_size; \
++__copy_res; })
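As the comment above notes, `__copy_user` (and hence `copy_to_user`/`copy_from_user`) returns the number of bytes NOT copied, with 0 meaning complete success; when `__access_ok()` rejects the range outright, the `else` branch reports the whole length as uncopied. A minimal user-space model of that return convention — names and the `access_ok` flag are illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Model of the copy_to_user return convention: bytes remaining, not
 * a status code. A refused copy leaves all n bytes uncopied. */
static size_t copy_model(void *to, const void *from, size_t n,
                         int access_ok)
{
    if (n == 0 || !access_ok)
        return n;            /* nothing copied: n bytes remain */
    memcpy(to, from, n);     /* stand-in for __copy_user */
    return 0;                /* everything copied */
}
```

Callers therefore test for nonzero (partial or total failure), not for a negative errno.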
+
-+#define PAS_IOB_DMA_COM_TIMEOUTCFG 0x1700
-+#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M 0x00ffffff
-+#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S 0
-+#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT(x) (((x) << PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S) & \
-+ PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M)
++#define copy_from_user(to,from,n) ({ \
++void *__copy_to = (void *) (to); \
++void *__copy_from = (void *) (from); \
++__kernel_size_t __copy_size = (__kernel_size_t) (n); \
++__kernel_size_t __copy_res; \
++if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
++__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
++} else __copy_res = __copy_size; \
++__copy_res; })
+
-+/* Transmit descriptor fields */
-+#define XCT_MACTX_T 0x8000000000000000ull
-+#define XCT_MACTX_ST 0x4000000000000000ull
-+#define XCT_MACTX_NORES 0x0000000000000000ull
-+#define XCT_MACTX_8BRES 0x1000000000000000ull
-+#define XCT_MACTX_24BRES 0x2000000000000000ull
-+#define XCT_MACTX_40BRES 0x3000000000000000ull
-+#define XCT_MACTX_I 0x0800000000000000ull
-+#define XCT_MACTX_O 0x0400000000000000ull
-+#define XCT_MACTX_E 0x0200000000000000ull
-+#define XCT_MACTX_VLAN_M 0x0180000000000000ull
-+#define XCT_MACTX_VLAN_NOP 0x0000000000000000ull
-+#define XCT_MACTX_VLAN_REMOVE 0x0080000000000000ull
-+#define XCT_MACTX_VLAN_INSERT 0x0100000000000000ull
-+#define XCT_MACTX_VLAN_REPLACE 0x0180000000000000ull
-+#define XCT_MACTX_CRC_M 0x0060000000000000ull
-+#define XCT_MACTX_CRC_NOP 0x0000000000000000ull
-+#define XCT_MACTX_CRC_INSERT 0x0020000000000000ull
-+#define XCT_MACTX_CRC_PAD 0x0040000000000000ull
-+#define XCT_MACTX_CRC_REPLACE 0x0060000000000000ull
-+#define XCT_MACTX_SS 0x0010000000000000ull
-+#define XCT_MACTX_LLEN_M 0x00007fff00000000ull
-+#define XCT_MACTX_LLEN_S 32ull
-+#define XCT_MACTX_LLEN(x) ((((long)(x)) << XCT_MACTX_LLEN_S) & \
-+ XCT_MACTX_LLEN_M)
-+#define XCT_MACTX_IPH_M 0x00000000f8000000ull
-+#define XCT_MACTX_IPH_S 27ull
-+#define XCT_MACTX_IPH(x) ((((long)(x)) << XCT_MACTX_IPH_S) & \
-+ XCT_MACTX_IPH_M)
-+#define XCT_MACTX_IPO_M 0x0000000007c00000ull
-+#define XCT_MACTX_IPO_S 22ull
-+#define XCT_MACTX_IPO(x) ((((long)(x)) << XCT_MACTX_IPO_S) & \
-+ XCT_MACTX_IPO_M)
-+#define XCT_MACTX_CSUM_M 0x0000000000000060ull
-+#define XCT_MACTX_CSUM_NOP 0x0000000000000000ull
-+#define XCT_MACTX_CSUM_TCP 0x0000000000000040ull
-+#define XCT_MACTX_CSUM_UDP 0x0000000000000060ull
-+#define XCT_MACTX_V6 0x0000000000000010ull
-+#define XCT_MACTX_C 0x0000000000000004ull
-+#define XCT_MACTX_AL2 0x0000000000000002ull
++static __always_inline unsigned long
++__copy_from_user(void *to, const void __user *from, unsigned long n)
++{
++ return __copy_user(to, (__force void *)from, n);
++}
+
-+/* Receive descriptor fields */
-+#define XCT_MACRX_T 0x8000000000000000ull
-+#define XCT_MACRX_ST 0x4000000000000000ull
-+#define XCT_MACRX_RR_M 0x3000000000000000ull
-+#define XCT_MACRX_RR_NORES 0x0000000000000000ull
-+#define XCT_MACRX_RR_8BRES 0x1000000000000000ull
-+#define XCT_MACRX_O 0x0400000000000000ull
-+#define XCT_MACRX_E 0x0200000000000000ull
-+#define XCT_MACRX_FF 0x0100000000000000ull
-+#define XCT_MACRX_PF 0x0080000000000000ull
-+#define XCT_MACRX_OB 0x0040000000000000ull
-+#define XCT_MACRX_OD 0x0020000000000000ull
-+#define XCT_MACRX_FS 0x0010000000000000ull
-+#define XCT_MACRX_NB_M 0x000fc00000000000ull
-+#define XCT_MACRX_NB_S 46ULL
-+#define XCT_MACRX_NB(x) ((((long)(x)) << XCT_MACRX_NB_S) & \
-+ XCT_MACRX_NB_M)
-+#define XCT_MACRX_LLEN_M 0x00003fff00000000ull
-+#define XCT_MACRX_LLEN_S 32ULL
-+#define XCT_MACRX_LLEN(x) ((((long)(x)) << XCT_MACRX_LLEN_S) & \
-+ XCT_MACRX_LLEN_M)
-+#define XCT_MACRX_CRC 0x0000000080000000ull
-+#define XCT_MACRX_LEN_M 0x0000000060000000ull
-+#define XCT_MACRX_LEN_TOOSHORT 0x0000000020000000ull
-+#define XCT_MACRX_LEN_BELOWMIN 0x0000000040000000ull
-+#define XCT_MACRX_LEN_TRUNC 0x0000000060000000ull
-+#define XCT_MACRX_CAST_M 0x0000000018000000ull
-+#define XCT_MACRX_CAST_UNI 0x0000000000000000ull
-+#define XCT_MACRX_CAST_MULTI 0x0000000008000000ull
-+#define XCT_MACRX_CAST_BROAD 0x0000000010000000ull
-+#define XCT_MACRX_CAST_PAUSE 0x0000000018000000ull
-+#define XCT_MACRX_VLC_M 0x0000000006000000ull
-+#define XCT_MACRX_FM 0x0000000001000000ull
-+#define XCT_MACRX_HTY_M 0x0000000000c00000ull
-+#define XCT_MACRX_HTY_IPV4_OK 0x0000000000000000ull
-+#define XCT_MACRX_HTY_IPV6 0x0000000000400000ull
-+#define XCT_MACRX_HTY_IPV4_BAD 0x0000000000800000ull
-+#define XCT_MACRX_HTY_NONIP 0x0000000000c00000ull
-+#define XCT_MACRX_IPP_M 0x00000000003f0000ull
-+#define XCT_MACRX_IPP_S 16
-+#define XCT_MACRX_CSUM_M 0x000000000000ffffull
-+#define XCT_MACRX_CSUM_S 0
++static __always_inline unsigned long __must_check
++__copy_to_user(void __user *to, const void *from, unsigned long n)
++{
++ return __copy_user((__force void *)to, from, n);
++}
+
-+#define XCT_PTR_T 0x8000000000000000ull
-+#define XCT_PTR_LEN_M 0x7ffff00000000000ull
-+#define XCT_PTR_LEN_S 44
-+#define XCT_PTR_LEN(x) ((((long)(x)) << XCT_PTR_LEN_S) & \
-+ XCT_PTR_LEN_M)
-+#define XCT_PTR_ADDR_M 0x00000fffffffffffull
-+#define XCT_PTR_ADDR_S 0
-+#define XCT_PTR_ADDR(x) ((((long)(x)) << XCT_PTR_ADDR_S) & \
-+ XCT_PTR_ADDR_M)
++#define __copy_to_user_inatomic __copy_to_user
++#define __copy_from_user_inatomic __copy_from_user
+
-+/* Receive interface 8byte result fields */
-+#define XCT_RXRES_8B_L4O_M 0xff00000000000000ull
-+#define XCT_RXRES_8B_L4O_S 56
-+#define XCT_RXRES_8B_RULE_M 0x00ffff0000000000ull
-+#define XCT_RXRES_8B_RULE_S 40
-+#define XCT_RXRES_8B_EVAL_M 0x000000ffff000000ull
-+#define XCT_RXRES_8B_EVAL_S 24
-+#define XCT_RXRES_8B_HTYPE_M 0x0000000000f00000ull
-+#define XCT_RXRES_8B_HASH_M 0x00000000000fffffull
-+#define XCT_RXRES_8B_HASH_S 0
++/*
++ * Clear the area and return remaining number of bytes
++ * (on failure. Usually it's 0.)
++ */
++extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
+
-+/* Receive interface buffer fields */
-+#define XCT_RXB_LEN_M 0x0ffff00000000000ull
-+#define XCT_RXB_LEN_S 44
-+#define XCT_RXB_LEN(x) ((((long)(x)) << XCT_RXB_LEN_S) & \
-+ XCT_RXB_LEN_M)
-+#define XCT_RXB_ADDR_M 0x00000fffffffffffull
-+#define XCT_RXB_ADDR_S 0
-+#define XCT_RXB_ADDR(x) ((((long)(x)) << XCT_RXB_ADDR_S) & \
-+ XCT_RXB_ADDR_M)
++#define clear_user(addr,n) ({ \
++void * __cl_addr = (addr); \
++unsigned long __cl_size = (n); \
++if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
++__cl_size = __clear_user(__cl_addr, __cl_size); \
++__cl_size; })
+
-+/* Copy descriptor fields */
-+#define XCT_COPY_T 0x8000000000000000ull
-+#define XCT_COPY_ST 0x4000000000000000ull
-+#define XCT_COPY_RR_M 0x3000000000000000ull
-+#define XCT_COPY_RR_NORES 0x0000000000000000ull
-+#define XCT_COPY_RR_8BRES 0x1000000000000000ull
-+#define XCT_COPY_RR_24BRES 0x2000000000000000ull
-+#define XCT_COPY_RR_40BRES 0x3000000000000000ull
-+#define XCT_COPY_I 0x0800000000000000ull
-+#define XCT_COPY_O 0x0400000000000000ull
-+#define XCT_COPY_E 0x0200000000000000ull
-+#define XCT_COPY_STY_ZERO 0x01c0000000000000ull
-+#define XCT_COPY_DTY_PREF 0x0038000000000000ull
-+#define XCT_COPY_LLEN_M 0x0007ffff00000000ull
-+#define XCT_COPY_LLEN_S 32
-+#define XCT_COPY_LLEN(x) ((((long)(x)) << XCT_COPY_LLEN_S) & \
-+ XCT_COPY_LLEN_M)
-+#define XCT_COPY_SE 0x0000000000000001ull
++static __inline__ int
++__strncpy_from_user(unsigned long __dest, unsigned long __user __src, int __count)
++{
++ __kernel_size_t res;
++ unsigned long __dummy, _d, _s, _c;
+
-+/* Control descriptor fields */
-+#define CTRL_CMD_T 0x8000000000000000ull
-+#define CTRL_CMD_META_EVT 0x2000000000000000ull
-+#define CTRL_CMD_O 0x0400000000000000ull
-+#define CTRL_CMD_REG_M 0x000000000000000full
-+#define CTRL_CMD_REG_S 0
-+#define CTRL_CMD_REG(x) ((((long)(x)) << CTRL_CMD_REG_S) & \
-+ CTRL_CMD_REG_M)
++ __asm__ __volatile__(
++ "9:\n"
++ "mov.b @%2+, %1\n\t"
++ "cmp/eq #0, %1\n\t"
++ "bt/s 2f\n"
++ "1:\n"
++ "mov.b %1, @%3\n\t"
++ "dt %4\n\t"
++ "bf/s 9b\n\t"
++ " add #1, %3\n\t"
++ "2:\n\t"
++ "sub %4, %0\n"
++ "3:\n"
++ ".section .fixup,\"ax\"\n"
++ "4:\n\t"
++ "mov.l 5f, %1\n\t"
++ "jmp @%1\n\t"
++ " mov %9, %0\n\t"
++ ".balign 4\n"
++ "5: .long 3b\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .balign 4\n"
++ " .long 9b,4b\n"
++ ".previous"
++ : "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d), "=r"(_c)
++ : "0" (__count), "2" (__src), "3" (__dest), "4" (__count),
++ "i" (-EFAULT)
++ : "memory", "t");
+
++ return res;
++}
+
++/**
++ * strncpy_from_user: - Copy a NUL terminated string from userspace.
++ * @dst: Destination address, in kernel space. This buffer must be at
++ * least @count bytes long.
++ * @src: Source address, in user space.
++ * @count: Maximum number of bytes to copy, including the trailing NUL.
++ *
++ * Copies a NUL-terminated string from userspace to kernel space.
++ *
++ * On success, returns the length of the string (not including the trailing
++ * NUL).
++ *
++ * If access to userspace fails, returns -EFAULT (some data may have been
++ * copied).
++ *
++ * If @count is smaller than the length of the string, copies @count bytes
++ * and returns @count.
++ */
++#define strncpy_from_user(dest,src,count) ({ \
++unsigned long __sfu_src = (unsigned long) (src); \
++int __sfu_count = (int) (count); \
++long __sfu_res = -EFAULT; \
++if(__access_ok(__sfu_src, __sfu_count)) { \
++__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
++} __sfu_res; })
+
-+/* Prototypes for the shared DMA functions in the platform code. */
++/*
++ * Return the size of a string (including the ending 0 even when we have
++ * exceeded the maximum string length).
++ */
++static __inline__ long __strnlen_user(const char __user *__s, long __n)
++{
++ unsigned long res;
++ unsigned long __dummy;
+
-+/* DMA TX Channel type. Right now only limitations used are event types 0/1,
-+ * for event-triggered DMA transactions.
++ __asm__ __volatile__(
++ "1:\t"
++ "mov.b @(%0,%3), %1\n\t"
++ "cmp/eq %4, %0\n\t"
++ "bt/s 2f\n\t"
++ " add #1, %0\n\t"
++ "tst %1, %1\n\t"
++ "bf 1b\n\t"
++ "2:\n"
++ ".section .fixup,\"ax\"\n"
++ "3:\n\t"
++ "mov.l 4f, %1\n\t"
++ "jmp @%1\n\t"
++ " mov #0, %0\n"
++ ".balign 4\n"
++ "4: .long 2b\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .balign 4\n"
++ " .long 1b,3b\n"
++ ".previous"
++ : "=z" (res), "=&r" (__dummy)
++ : "0" (0), "r" (__s), "r" (__n)
++ : "t");
++ return res;
++}
++
++/**
++ * strnlen_user: - Get the size of a string in user space.
++ * @s: The string to measure.
++ * @n: The maximum valid length
++ *
++ * Context: User context only. This function may sleep.
++ *
++ * Get the size of a NUL-terminated string in user space.
++ *
++ * Returns the size of the string INCLUDING the terminating NUL.
++ * On exception, returns 0.
++ * If the string is too long, returns a value greater than @n.
+ */
++static __inline__ long strnlen_user(const char __user *s, long n)
++{
++ if (!__addr_ok(s))
++ return 0;
++ else
++ return __strnlen_user(s, n);
++}
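The `strnlen_user` contract documented above is easy to misread: the result INCLUDES the terminating NUL, is 0 on fault, and exceeds `@n` when no NUL is found in range. A user-space sketch of the non-faulting cases (the fault-returns-0 case has no user-space analogue, and the function name is illustrative):

```c
#include <assert.h>

/* Non-faulting model of strnlen_user semantics: length including the
 * trailing NUL, or a value strictly greater than n if unterminated
 * within n bytes. */
static long strnlen_sketch(const char *s, long n)
{
    long i;
    for (i = 0; i < n; i++)
        if (s[i] == '\0')
            return i + 1;    /* count includes the NUL */
    return n + 1;            /* too long: strictly greater than n */
}
```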
+
-+enum pasemi_dmachan_type {
-+ RXCHAN = 0, /* Any RX chan */
-+ TXCHAN = 1, /* Any TX chan */
-+ TXCHAN_EVT0 = 0x1001, /* TX chan in event class 0 (chan 0-9) */
-+ TXCHAN_EVT1 = 0x2001, /* TX chan in event class 1 (chan 10-19) */
-+};
++/**
++ * strlen_user: - Get the size of a string in user space.
++ * @str: The string to measure.
++ *
++ * Context: User context only. This function may sleep.
++ *
++ * Get the size of a NUL-terminated string in user space.
++ *
++ * Returns the size of the string INCLUDING the terminating NUL.
++ * On exception, returns 0.
++ *
++ * If there is a limit on the length of a valid string, you may wish to
++ * consider using strnlen_user() instead.
++ */
++#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+
-+struct pasemi_dmachan {
-+ int chno; /* Channel number */
-+ enum pasemi_dmachan_type chan_type; /* TX / RX */
-+ u64 *status; /* Ptr to cacheable status */
-+ int irq; /* IRQ used by channel */
-+ unsigned int ring_size; /* size of allocated ring */
-+ dma_addr_t ring_dma; /* DMA address for ring */
-+ u64 *ring_virt; /* Virt address for ring */
-+ void *priv; /* Ptr to start of client struct */
++/*
++ * The exception table consists of pairs of addresses: the first is the
++ * address of an instruction that is allowed to fault, and the second is
++ * the address at which the program should continue. No registers are
++ * modified, so it is entirely up to the continuation code to figure out
++ * what to do.
++ *
++ * All the routines below use bits of fixup code that are out of line
++ * with the main instruction path. This means when everything is well,
++ * we don't even have to jump over them. Further, they do not intrude
++ * on our cache or tlb entries.
++ */
++
++struct exception_table_entry
++{
++ unsigned long insn, fixup;
+};
+
-+/* Read/write the different registers in the I/O Bridge, Ethernet
-+ * and DMA Controller
++extern int fixup_exception(struct pt_regs *regs);
++
++#endif /* __ASM_SH_UACCESS_H */
+diff --git a/include/asm-sh/uaccess_64.h b/include/asm-sh/uaccess_64.h
+new file mode 100644
+index 0000000..d54ec08
+--- /dev/null
++++ b/include/asm-sh/uaccess_64.h
+@@ -0,0 +1,302 @@
++#ifndef __ASM_SH_UACCESS_64_H
++#define __ASM_SH_UACCESS_64_H
++
++/*
++ * include/asm-sh/uaccess_64.h
++ *
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003, 2004 Paul Mundt
++ *
++ * User space memory access functions
++ *
++ * Copyright (C) 1999 Niibe Yutaka
++ *
++ * Based on:
++ * MIPS implementation version 1.15 by
++ * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
++ * and i386 version.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
+ */
-+extern unsigned int pasemi_read_iob_reg(unsigned int reg);
-+extern void pasemi_write_iob_reg(unsigned int reg, unsigned int val);
++#include <linux/errno.h>
++#include <linux/sched.h>
+
-+extern unsigned int pasemi_read_mac_reg(int intf, unsigned int reg);
-+extern void pasemi_write_mac_reg(int intf, unsigned int reg, unsigned int val);
++#define VERIFY_READ 0
++#define VERIFY_WRITE 1
+
-+extern unsigned int pasemi_read_dma_reg(unsigned int reg);
-+extern void pasemi_write_dma_reg(unsigned int reg, unsigned int val);
++/*
++ * The fs value determines whether argument validity checking should be
++ * performed or not. If get_fs() == USER_DS, checking is performed, with
++ * get_fs() == KERNEL_DS, checking is bypassed.
++ *
++ * For historical reasons (Data Segment Register?), these macros are misnamed.
++ */
+
-+/* Channel management routines */
++#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
+
-+extern void *pasemi_dma_alloc_chan(enum pasemi_dmachan_type type,
-+ int total_size, int offset);
-+extern void pasemi_dma_free_chan(struct pasemi_dmachan *chan);
++#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
++#define USER_DS MAKE_MM_SEG(0x80000000)
+
-+extern void pasemi_dma_start_chan(const struct pasemi_dmachan *chan,
-+ const u32 cmdsta);
-+extern int pasemi_dma_stop_chan(const struct pasemi_dmachan *chan);
++#define get_ds() (KERNEL_DS)
++#define get_fs() (current_thread_info()->addr_limit)
++#define set_fs(x) (current_thread_info()->addr_limit=(x))
+
-+/* Common routines to allocate rings and buffers */
++#define segment_eq(a,b) ((a).seg == (b).seg)
+
-+extern int pasemi_dma_alloc_ring(struct pasemi_dmachan *chan, int ring_size);
-+extern void pasemi_dma_free_ring(struct pasemi_dmachan *chan);
++#define __addr_ok(addr) ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
+
-+extern void *pasemi_dma_alloc_buf(struct pasemi_dmachan *chan, int size,
-+ dma_addr_t *handle);
-+extern void pasemi_dma_free_buf(struct pasemi_dmachan *chan, int size,
-+ dma_addr_t *handle);
++/*
++ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
++ *
++ * sum := addr + size; carry? --> flag = true;
++ * if (sum >= addr_limit) flag = true;
++ */
++#define __range_ok(addr,size) (((unsigned long) (addr) + (size) < (current_thread_info()->addr_limit.seg)) ? 0 : 1)
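The comment above `__range_ok` flags the subtlety: `addr + size` needs 33-bit arithmetic, because the 32-bit sum can carry and wrap back below `addr_limit`. A portable restatement of that check with explicit 32-bit types, detecting the carry as unsigned wraparound (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* true iff [addr, addr+size) lies below limit with no 32-bit carry */
static bool range_ok_sketch(uint32_t addr, uint32_t size, uint32_t limit)
{
    uint32_t sum = addr + size;
    if (sum < addr)          /* carried out of 32 bits: wrapped */
        return false;
    return sum < limit;
}
```

Without the wraparound test, a huge `size` could make the sum wrap to a small value and pass the limit comparison, defeating the access check.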
+
-+/* Initialize the library, must be called before any other functions */
-+extern int pasemi_dma_init(void);
++#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
++#define __access_ok(addr,size) (__range_ok(addr,size) == 0)
+
-+#endif /* ASM_PASEMI_DMA_H */
-diff --git a/include/asm-s390/airq.h b/include/asm-s390/airq.h
-new file mode 100644
-index 0000000..41d028c
---- /dev/null
-+++ b/include/asm-s390/airq.h
-@@ -0,0 +1,19 @@
+/*
-+ * include/asm-s390/airq.h
++ * Uh, these should become the main single-value transfer routines ...
++ * They automatically use the right size if we just have the right
++ * pointer type ...
+ *
-+ * Copyright IBM Corp. 2002,2007
-+ * Author(s): Ingo Adlung <adlung at de.ibm.com>
-+ * Cornelia Huck <cornelia.huck at de.ibm.com>
-+ * Arnd Bergmann <arndb at de.ibm.com>
-+ * Peter Oberparleiter <peter.oberparleiter at de.ibm.com>
++ * As MIPS uses the same address space for kernel and user data, we
++ * can just do these as direct assignments.
++ *
++ * Careful to not
++ * (a) re-use the arguments for side effects (sizeof is ok)
++ * (b) require any knowledge of processes at this stage
+ */
++#define put_user(x,ptr) __put_user_check((x),(ptr),sizeof(*(ptr)))
++#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)))
+
-+#ifndef _ASM_S390_AIRQ_H
-+#define _ASM_S390_AIRQ_H
++/*
++ * The "__xxx" versions do not do address space checking, useful when
++ * doing multiple accesses to the same area (the user has to do the
++ * checks by hand with "access_ok()")
++ */
++#define __put_user(x,ptr) __put_user_nocheck((x),(ptr),sizeof(*(ptr)))
++#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+
-+typedef void (*adapter_int_handler_t)(void *, void *);
++/*
++ * The "xxx_ret" versions return constant specified in third argument, if
++ * something bad happens. These macros can be optimized for the
++ * case of just returning from the function xxx_ret is used.
++ */
+
-+void *s390_register_adapter_interrupt(adapter_int_handler_t, void *);
-+void s390_unregister_adapter_interrupt(void *);
++#define put_user_ret(x,ptr,ret) ({ \
++if (put_user(x,ptr)) return ret; })
+
-+#endif /* _ASM_S390_AIRQ_H */
-diff --git a/include/asm-s390/bitops.h b/include/asm-s390/bitops.h
-index 34d9a63..dba6fec 100644
---- a/include/asm-s390/bitops.h
-+++ b/include/asm-s390/bitops.h
-@@ -772,6 +772,8 @@ static inline int sched_find_first_bit(unsigned long *b)
- test_and_clear_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
- #define ext2_test_bit(nr, addr) \
- test_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
-+#define ext2_find_next_bit(addr, size, off) \
-+ generic_find_next_le_bit((unsigned long *)(addr), (size), (off))
-
- #ifndef __s390x__
-
-diff --git a/include/asm-s390/cio.h b/include/asm-s390/cio.h
-index 2f08c16..123b557 100644
---- a/include/asm-s390/cio.h
-+++ b/include/asm-s390/cio.h
-@@ -24,8 +24,8 @@
- * @fmt: format
- * @pfch: prefetch
- * @isic: initial-status interruption control
-- * @alcc: adress-limit checking control
-- * @ssi: supress-suspended interruption
-+ * @alcc: address-limit checking control
-+ * @ssi: suppress-suspended interruption
- * @zcc: zero condition code
- * @ectl: extended control
- * @pno: path not operational
-diff --git a/include/asm-s390/dasd.h b/include/asm-s390/dasd.h
-index 604f68f..3f002e1 100644
---- a/include/asm-s390/dasd.h
-+++ b/include/asm-s390/dasd.h
-@@ -105,7 +105,7 @@ typedef struct dasd_information_t {
- } dasd_information_t;
-
- /*
-- * Read Subsystem Data - Perfomance Statistics
-+ * Read Subsystem Data - Performance Statistics
- */
- typedef struct dasd_rssd_perf_stats_t {
- unsigned char invalid:1;
-diff --git a/include/asm-s390/ipl.h b/include/asm-s390/ipl.h
-index 2c40fd3..c1b2e50 100644
---- a/include/asm-s390/ipl.h
-+++ b/include/asm-s390/ipl.h
-@@ -83,6 +83,8 @@ extern u32 dump_prefix_page;
- extern unsigned int zfcpdump_prefix_array[];
-
- extern void do_reipl(void);
-+extern void do_halt(void);
-+extern void do_poff(void);
- extern void ipl_save_parameters(void);
-
- enum {
-@@ -118,7 +120,7 @@ struct ipl_info
- };
-
- extern struct ipl_info ipl_info;
--extern void setup_ipl_info(void);
-+extern void setup_ipl(void);
-
- /*
- * DIAG 308 support
-@@ -141,6 +143,10 @@ enum diag308_opt {
- DIAG308_IPL_OPT_DUMP = 0x20,
- };
-
-+enum diag308_flags {
-+ DIAG308_FLAGS_LP_VALID = 0x80,
-+};
++#define get_user_ret(x,ptr,ret) ({ \
++if (get_user(x,ptr)) return ret; })
+
- enum diag308_rc {
- DIAG308_RC_OK = 1,
- };
-diff --git a/include/asm-s390/mmu_context.h b/include/asm-s390/mmu_context.h
-index 05b8421..a77d4ba 100644
---- a/include/asm-s390/mmu_context.h
-+++ b/include/asm-s390/mmu_context.h
-@@ -12,10 +12,15 @@
- #include <asm/pgalloc.h>
- #include <asm-generic/mm_hooks.h>
-
--/*
-- * get a new mmu context.. S390 don't know about contexts.
-- */
--#define init_new_context(tsk,mm) 0
-+static inline int init_new_context(struct task_struct *tsk,
-+ struct mm_struct *mm)
-+{
-+ mm->context = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS;
-+#ifdef CONFIG_64BIT
-+ mm->context |= _ASCE_TYPE_REGION3;
-+#endif
-+ return 0;
-+}
-
- #define destroy_context(mm) do { } while (0)
-
-@@ -27,19 +32,11 @@
-
- static inline void update_mm(struct mm_struct *mm, struct task_struct *tsk)
- {
-- pgd_t *pgd = mm->pgd;
-- unsigned long asce_bits;
--
-- /* Calculate asce bits from the first pgd table entry. */
-- asce_bits = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS;
--#ifdef CONFIG_64BIT
-- asce_bits |= _ASCE_TYPE_REGION3;
--#endif
-- S390_lowcore.user_asce = asce_bits | __pa(pgd);
-+ S390_lowcore.user_asce = mm->context | __pa(mm->pgd);
- if (switch_amode) {
- /* Load primary space page table origin. */
-- pgd_t *shadow_pgd = get_shadow_table(pgd) ? : pgd;
-- S390_lowcore.user_exec_asce = asce_bits | __pa(shadow_pgd);
-+ pgd_t *shadow_pgd = get_shadow_table(mm->pgd) ? : mm->pgd;
-+ S390_lowcore.user_exec_asce = mm->context | __pa(shadow_pgd);
- asm volatile(LCTL_OPCODE" 1,1,%0\n"
- : : "m" (S390_lowcore.user_exec_asce) );
- } else
-diff --git a/include/asm-s390/pgtable.h b/include/asm-s390/pgtable.h
-index 1f530f8..79b9eab 100644
---- a/include/asm-s390/pgtable.h
-+++ b/include/asm-s390/pgtable.h
-@@ -104,41 +104,27 @@ extern char empty_zero_page[PAGE_SIZE];
-
- #ifndef __ASSEMBLY__
- /*
-- * Just any arbitrary offset to the start of the vmalloc VM area: the
-- * current 8MB value just means that there will be a 8MB "hole" after the
-- * physical memory until the kernel virtual memory starts. That means that
-- * any out-of-bounds memory accesses will hopefully be caught.
-- * The vmalloc() routines leaves a hole of 4kB between each vmalloced
-- * area for the same reason. ;)
-- * vmalloc area starts at 4GB to prevent syscall table entry exchanging
-- * from modules.
-- */
--extern unsigned long vmalloc_end;
--
--#ifdef CONFIG_64BIT
--#define VMALLOC_ADDR (max(0x100000000UL, (unsigned long) high_memory))
--#else
--#define VMALLOC_ADDR ((unsigned long) high_memory)
--#endif
--#define VMALLOC_OFFSET (8*1024*1024)
--#define VMALLOC_START ((VMALLOC_ADDR + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
--#define VMALLOC_END vmalloc_end
--
--/*
-- * We need some free virtual space to be able to do vmalloc.
-- * VMALLOC_MIN_SIZE defines the minimum size of the vmalloc
-- * area. On a machine with 2GB memory we make sure that we
-- * have at least 128MB free space for vmalloc. On a machine
-- * with 4TB we make sure we have at least 128GB.
-+ * The vmalloc area will always be on the topmost area of the kernel
-+ * mapping. We reserve 96MB (31bit) / 1GB (64bit) for vmalloc,
-+ * which should be enough for any sane case.
-+ * By putting vmalloc at the top, we maximise the gap between physical
-+ * memory and vmalloc to catch misplaced memory accesses. As a side
-+ * effect, this also makes sure that 64 bit module code cannot be used
-+ * as system call address.
- */
- #ifndef __s390x__
--#define VMALLOC_MIN_SIZE 0x8000000UL
--#define VMALLOC_END_INIT 0x80000000UL
-+#define VMALLOC_START 0x78000000UL
-+#define VMALLOC_END 0x7e000000UL
-+#define VMEM_MAP_MAX 0x80000000UL
- #else /* __s390x__ */
--#define VMALLOC_MIN_SIZE 0x2000000000UL
--#define VMALLOC_END_INIT 0x40000000000UL
-+#define VMALLOC_START 0x3e000000000UL
-+#define VMALLOC_END 0x3e040000000UL
-+#define VMEM_MAP_MAX 0x40000000000UL
- #endif /* __s390x__ */
-
-+#define VMEM_MAP ((struct page *) VMALLOC_END)
-+#define VMEM_MAP_SIZE ((VMALLOC_START / PAGE_SIZE) * sizeof(struct page))
++#define __put_user_ret(x,ptr,ret) ({ \
++if (__put_user(x,ptr)) return ret; })
+
- /*
- * A 31 bit pagetable entry of S390 has following format:
- * | PFRA | | OS |
-diff --git a/include/asm-s390/processor.h b/include/asm-s390/processor.h
-index 21d40a1..c86b982 100644
---- a/include/asm-s390/processor.h
-+++ b/include/asm-s390/processor.h
-@@ -59,9 +59,6 @@ extern void s390_adjust_jiffies(void);
- extern void print_cpu_info(struct cpuinfo_S390 *);
- extern int get_cpu_capability(unsigned int *);
-
--/* Lazy FPU handling on uni-processor */
--extern struct task_struct *last_task_used_math;
--
- /*
- * User space process size: 2GB for 31 bit, 4TB for 64 bit.
- */
-@@ -95,7 +92,6 @@ struct thread_struct {
- unsigned long ksp; /* kernel stack pointer */
- mm_segment_t mm_segment;
- unsigned long prot_addr; /* address of protection-excep. */
-- unsigned int error_code; /* error-code of last prog-excep. */
- unsigned int trap_no;
- per_struct per_info;
- /* Used to give failing instruction back to user for ieee exceptions */
-diff --git a/include/asm-s390/ptrace.h b/include/asm-s390/ptrace.h
-index 332ee73..61f6952 100644
---- a/include/asm-s390/ptrace.h
-+++ b/include/asm-s390/ptrace.h
-@@ -465,6 +465,14 @@ struct user_regs_struct
- #ifdef __KERNEL__
- #define __ARCH_SYS_PTRACE 1
-
-+/*
-+ * These are defined as per linux/ptrace.h, which see.
-+ */
-+#define arch_has_single_step() (1)
-+struct task_struct;
-+extern void user_enable_single_step(struct task_struct *);
-+extern void user_disable_single_step(struct task_struct *);
++#define __get_user_ret(x,ptr,ret) ({ \
++if (__get_user(x,ptr)) return ret; })
+
- #define user_mode(regs) (((regs)->psw.mask & PSW_MASK_PSTATE) != 0)
- #define instruction_pointer(regs) ((regs)->psw.addr & PSW_ADDR_INSN)
- #define regs_return_value(regs)((regs)->gprs[2])
-diff --git a/include/asm-s390/qdio.h b/include/asm-s390/qdio.h
-index 74db1dc..4b8ff55 100644
---- a/include/asm-s390/qdio.h
-+++ b/include/asm-s390/qdio.h
-@@ -184,7 +184,7 @@ struct qdr {
- #endif /* QDIO_32_BIT */
- unsigned long qiba; /* queue-information-block address */
- unsigned int res8; /* reserved */
-- unsigned int qkey : 4; /* queue-informatio-block key */
-+ unsigned int qkey : 4; /* queue-information-block key */
- unsigned int res9 : 28; /* reserved */
- /* union _qd {*/ /* why this? */
- struct qdesfmt0 qdf0[126];
-diff --git a/include/asm-s390/rwsem.h b/include/asm-s390/rwsem.h
-index 90f4ecc..9d2a179 100644
---- a/include/asm-s390/rwsem.h
-+++ b/include/asm-s390/rwsem.h
-@@ -91,8 +91,8 @@ struct rw_semaphore {
- #endif
-
- #define __RWSEM_INITIALIZER(name) \
--{ RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) \
-- __RWSEM_DEP_MAP_INIT(name) }
-+ { RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait.lock), \
-+ LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) }
-
- #define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-diff --git a/include/asm-s390/sclp.h b/include/asm-s390/sclp.h
-index cb9faf1..b5f2843 100644
---- a/include/asm-s390/sclp.h
-+++ b/include/asm-s390/sclp.h
-@@ -27,7 +27,25 @@ struct sclp_ipl_info {
- char loadparm[LOADPARM_LEN];
- };
-
--void sclp_readinfo_early(void);
-+struct sclp_cpu_entry {
-+ u8 address;
-+ u8 reserved0[13];
-+ u8 type;
-+ u8 reserved1;
-+} __attribute__((packed));
++struct __large_struct { unsigned long buf[100]; };
++#define __m(x) (*(struct __large_struct *)(x))
+
-+struct sclp_cpu_info {
-+ unsigned int configured;
-+ unsigned int standby;
-+ unsigned int combined;
-+ int has_cpu_type;
-+ struct sclp_cpu_entry cpu[255];
-+};
++#define __get_user_size(x,ptr,size,retval) \
++do { \
++ retval = 0; \
++ switch (size) { \
++ case 1: \
++ retval = __get_user_asm_b(x, ptr); \
++ break; \
++ case 2: \
++ retval = __get_user_asm_w(x, ptr); \
++ break; \
++ case 4: \
++ retval = __get_user_asm_l(x, ptr); \
++ break; \
++ case 8: \
++ retval = __get_user_asm_q(x, ptr); \
++ break; \
++ default: \
++ __get_user_unknown(); \
++ break; \
++ } \
++} while (0)
+
-+int sclp_get_cpu_info(struct sclp_cpu_info *info);
-+int sclp_cpu_configure(u8 cpu);
-+int sclp_cpu_deconfigure(u8 cpu);
-+void sclp_read_info_early(void);
- void sclp_facilities_detect(void);
- unsigned long long sclp_memory_detect(void);
- int sclp_sdias_blk_count(void);
-diff --git a/include/asm-s390/smp.h b/include/asm-s390/smp.h
-index 07708c0..c7b7432 100644
---- a/include/asm-s390/smp.h
-+++ b/include/asm-s390/smp.h
-@@ -35,8 +35,6 @@ extern void machine_restart_smp(char *);
- extern void machine_halt_smp(void);
- extern void machine_power_off_smp(void);
-
--extern void smp_setup_cpu_possible_map(void);
--
- #define NO_PROC_ID 0xFF /* No processor magic marker */
-
- /*
-@@ -92,6 +90,8 @@ extern void __cpu_die (unsigned int cpu);
- extern void cpu_die (void) __attribute__ ((noreturn));
- extern int __cpu_up (unsigned int cpu);
-
-+extern int smp_call_function_mask(cpumask_t mask, void (*func)(void *),
-+ void *info, int wait);
- #endif
-
- #ifndef CONFIG_SMP
-@@ -103,7 +103,6 @@ static inline void smp_send_stop(void)
-
- #define hard_smp_processor_id() 0
- #define smp_cpu_not_running(cpu) 1
--#define smp_setup_cpu_possible_map() do { } while (0)
- #endif
-
- extern union save_area *zfcpdump_save_areas[NR_CPUS + 1];
-diff --git a/include/asm-s390/spinlock.h b/include/asm-s390/spinlock.h
-index 3fd4382..df84ae9 100644
---- a/include/asm-s390/spinlock.h
-+++ b/include/asm-s390/spinlock.h
-@@ -53,44 +53,48 @@ _raw_compare_and_swap(volatile unsigned int *lock,
- */
-
- #define __raw_spin_is_locked(x) ((x)->owner_cpu != 0)
--#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
- #define __raw_spin_unlock_wait(lock) \
- do { while (__raw_spin_is_locked(lock)) \
- _raw_spin_relax(lock); } while (0)
-
--extern void _raw_spin_lock_wait(raw_spinlock_t *, unsigned int pc);
--extern int _raw_spin_trylock_retry(raw_spinlock_t *, unsigned int pc);
-+extern void _raw_spin_lock_wait(raw_spinlock_t *);
-+extern void _raw_spin_lock_wait_flags(raw_spinlock_t *, unsigned long flags);
-+extern int _raw_spin_trylock_retry(raw_spinlock_t *);
- extern void _raw_spin_relax(raw_spinlock_t *lock);
-
- static inline void __raw_spin_lock(raw_spinlock_t *lp)
- {
-- unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
- int old;
-
- old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
-- if (likely(old == 0)) {
-- lp->owner_pc = pc;
-+ if (likely(old == 0))
- return;
-- }
-- _raw_spin_lock_wait(lp, pc);
-+ _raw_spin_lock_wait(lp);
-+}
++#define __get_user_nocheck(x,ptr,size) \
++({ \
++ long __gu_err, __gu_val; \
++ __get_user_size((void *)&__gu_val, (long)(ptr), \
++ (size), __gu_err); \
++ (x) = (__typeof__(*(ptr)))__gu_val; \
++ __gu_err; \
++})
+
-+static inline void __raw_spin_lock_flags(raw_spinlock_t *lp,
-+ unsigned long flags)
-+{
-+ int old;
++#define __get_user_check(x,ptr,size) \
++({ \
++ long __gu_addr = (long)(ptr); \
++ long __gu_err = -EFAULT, __gu_val; \
++ if (__access_ok(__gu_addr, (size))) \
++ __get_user_size((void *)&__gu_val, __gu_addr, \
++ (size), __gu_err); \
++ (x) = (__typeof__(*(ptr))) __gu_val; \
++ __gu_err; \
++})
+
-+ old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
-+ if (likely(old == 0))
-+ return;
-+ _raw_spin_lock_wait_flags(lp, flags);
- }
-
- static inline int __raw_spin_trylock(raw_spinlock_t *lp)
- {
-- unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
- int old;
-
- old = _raw_compare_and_swap(&lp->owner_cpu, 0, ~smp_processor_id());
-- if (likely(old == 0)) {
-- lp->owner_pc = pc;
-+ if (likely(old == 0))
- return 1;
-- }
-- return _raw_spin_trylock_retry(lp, pc);
-+ return _raw_spin_trylock_retry(lp);
- }
-
- static inline void __raw_spin_unlock(raw_spinlock_t *lp)
- {
-- lp->owner_pc = 0;
- _raw_compare_and_swap(&lp->owner_cpu, lp->owner_cpu, 0);
- }
-
-diff --git a/include/asm-s390/spinlock_types.h b/include/asm-s390/spinlock_types.h
-index b7ac13f..654abc4 100644
---- a/include/asm-s390/spinlock_types.h
-+++ b/include/asm-s390/spinlock_types.h
-@@ -7,7 +7,6 @@
-
- typedef struct {
- volatile unsigned int owner_cpu;
-- volatile unsigned int owner_pc;
- } __attribute__ ((aligned (4))) raw_spinlock_t;
-
- #define __RAW_SPIN_LOCK_UNLOCKED { 0 }
-diff --git a/include/asm-s390/tlbflush.h b/include/asm-s390/tlbflush.h
-index a69bd24..70fa5ae 100644
---- a/include/asm-s390/tlbflush.h
-+++ b/include/asm-s390/tlbflush.h
-@@ -42,11 +42,11 @@ static inline void __tlb_flush_global(void)
- /*
- * Flush all tlb entries of a page table on all cpus.
- */
--static inline void __tlb_flush_idte(pgd_t *pgd)
-+static inline void __tlb_flush_idte(unsigned long asce)
- {
- asm volatile(
- " .insn rrf,0xb98e0000,0,%0,%1,0"
-- : : "a" (2048), "a" (__pa(pgd) & PAGE_MASK) : "cc" );
-+ : : "a" (2048), "a" (asce) : "cc" );
- }
-
- static inline void __tlb_flush_mm(struct mm_struct * mm)
-@@ -61,11 +61,11 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
- * only ran on the local cpu.
- */
- if (MACHINE_HAS_IDTE) {
-- pgd_t *shadow_pgd = get_shadow_table(mm->pgd);
-+ pgd_t *shadow = get_shadow_table(mm->pgd);
-
-- if (shadow_pgd)
-- __tlb_flush_idte(shadow_pgd);
-- __tlb_flush_idte(mm->pgd);
-+ if (shadow)
-+ __tlb_flush_idte((unsigned long) shadow | mm->context);
-+ __tlb_flush_idte((unsigned long) mm->pgd | mm->context);
- return;
- }
- preempt_disable();
-@@ -106,9 +106,23 @@ static inline void __tlb_flush_mm_cond(struct mm_struct * mm)
- */
- #define flush_tlb() do { } while (0)
- #define flush_tlb_all() do { } while (0)
--#define flush_tlb_mm(mm) __tlb_flush_mm_cond(mm)
- #define flush_tlb_page(vma, addr) do { } while (0)
--#define flush_tlb_range(vma, start, end) __tlb_flush_mm_cond(mm)
--#define flush_tlb_kernel_range(start, end) __tlb_flush_mm(&init_mm)
++extern long __get_user_asm_b(void *, long);
++extern long __get_user_asm_w(void *, long);
++extern long __get_user_asm_l(void *, long);
++extern long __get_user_asm_q(void *, long);
++extern void __get_user_unknown(void);
+
-+static inline void flush_tlb_mm(struct mm_struct *mm)
-+{
-+ __tlb_flush_mm_cond(mm);
-+}
++#define __put_user_size(x,ptr,size,retval) \
++do { \
++ retval = 0; \
++ switch (size) { \
++ case 1: \
++ retval = __put_user_asm_b(x, ptr); \
++ break; \
++ case 2: \
++ retval = __put_user_asm_w(x, ptr); \
++ break; \
++ case 4: \
++ retval = __put_user_asm_l(x, ptr); \
++ break; \
++ case 8: \
++ retval = __put_user_asm_q(x, ptr); \
++ break; \
++ default: \
++ __put_user_unknown(); \
++ } \
++} while (0)
+
-+static inline void flush_tlb_range(struct vm_area_struct *vma,
-+ unsigned long start, unsigned long end)
-+{
-+ __tlb_flush_mm_cond(vma->vm_mm);
-+}
++#define __put_user_nocheck(x,ptr,size) \
++({ \
++ long __pu_err; \
++ __typeof__(*(ptr)) __pu_val = (x); \
++ __put_user_size((void *)&__pu_val, (long)(ptr), (size), __pu_err); \
++ __pu_err; \
++})
+
-+static inline void flush_tlb_kernel_range(unsigned long start,
-+ unsigned long end)
-+{
-+ __tlb_flush_mm(&init_mm);
-+}
-
- #endif /* _S390_TLBFLUSH_H */
-diff --git a/include/asm-s390/zcrypt.h b/include/asm-s390/zcrypt.h
-index a5dada6..f228f1b 100644
---- a/include/asm-s390/zcrypt.h
-+++ b/include/asm-s390/zcrypt.h
-@@ -117,7 +117,7 @@ struct CPRBX {
- unsigned char padx004[16 - sizeof (char *)];
- unsigned char * req_extb; /* request extension block 'addr'*/
- unsigned char padx005[16 - sizeof (char *)];
-- unsigned char * rpl_extb; /* reply extension block 'addres'*/
-+ unsigned char * rpl_extb; /* reply extension block 'address'*/
- unsigned short ccp_rtcode; /* server return code */
- unsigned short ccp_rscode; /* server reason code */
- unsigned int mac_data_len; /* Mac Data Length */
-diff --git a/include/asm-sh/Kbuild b/include/asm-sh/Kbuild
-index 76a8ccf..43910cd 100644
---- a/include/asm-sh/Kbuild
-+++ b/include/asm-sh/Kbuild
-@@ -1,3 +1,8 @@
- include include/asm-generic/Kbuild.asm
-
- header-y += cpu-features.h
++#define __put_user_check(x,ptr,size) \
++({ \
++ long __pu_err = -EFAULT; \
++ long __pu_addr = (long)(ptr); \
++ __typeof__(*(ptr)) __pu_val = (x); \
++ \
++ if (__access_ok(__pu_addr, (size))) \
++ __put_user_size((void *)&__pu_val, __pu_addr, (size), __pu_err);\
++ __pu_err; \
++})
+
-+unifdef-y += unistd_32.h
-+unifdef-y += unistd_64.h
-+unifdef-y += posix_types_32.h
-+unifdef-y += posix_types_64.h
-diff --git a/include/asm-sh/addrspace.h b/include/asm-sh/addrspace.h
-index b860218..fa544fc 100644
---- a/include/asm-sh/addrspace.h
-+++ b/include/asm-sh/addrspace.h
-@@ -9,24 +9,21 @@
- */
- #ifndef __ASM_SH_ADDRSPACE_H
- #define __ASM_SH_ADDRSPACE_H
++extern long __put_user_asm_b(void *, long);
++extern long __put_user_asm_w(void *, long);
++extern long __put_user_asm_l(void *, long);
++extern long __put_user_asm_q(void *, long);
++extern void __put_user_unknown(void);
+
- #ifdef __KERNEL__
-
- #include <asm/cpu/addrspace.h>
-
--/* Memory segments (32bit Privileged mode addresses) */
--#ifndef CONFIG_CPU_SH2A
--#define P0SEG 0x00000000
--#define P1SEG 0x80000000
--#define P2SEG 0xa0000000
--#define P3SEG 0xc0000000
--#define P4SEG 0xe0000000
--#else
--#define P0SEG 0x00000000
--#define P1SEG 0x00000000
--#define P2SEG 0x20000000
--#define P3SEG 0x00000000
--#define P4SEG 0x80000000
--#endif
-+/* If this CPU supports segmentation, hook up the helpers */
-+#ifdef P1SEG
++
++/* Generic arbitrary sized copy. */
++/* Return the number of bytes NOT copied */
++/* XXX: should be such that: 4byte and the rest. */
++extern __kernel_size_t __copy_user(void *__to, const void *__from, __kernel_size_t __n);
+
-+/*
-+ [ P0/U0 (virtual) ] 0x00000000 <------ User space
-+ [ P1 (fixed) cached ] 0x80000000 <------ Kernel space
-+ [ P2 (fixed) non-cachable] 0xA0000000 <------ Physical access
-+ [ P3 (virtual) cached] 0xC0000000 <------ vmalloced area
-+ [ P4 control ] 0xE0000000
-+ */
-
- /* Returns the privileged segment base of a given address */
- #define PXSEG(a) (((unsigned long)(a)) & 0xe0000000)
-@@ -34,13 +31,23 @@
- /* Returns the physical address of a PnSEG (n=1,2) address */
- #define PHYSADDR(a) (((unsigned long)(a)) & 0x1fffffff)
-
-+#ifdef CONFIG_29BIT
- /*
- * Map an address to a certain privileged segment
- */
--#define P1SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P1SEG))
--#define P2SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P2SEG))
--#define P3SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P3SEG))
--#define P4SEGADDR(a) ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P4SEG))
-+#define P1SEGADDR(a) \
-+ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P1SEG))
-+#define P2SEGADDR(a) \
-+ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P2SEG))
-+#define P3SEGADDR(a) \
-+ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P3SEG))
-+#define P4SEGADDR(a) \
-+ ((__typeof__(a))(((unsigned long)(a) & 0x1fffffff) | P4SEG))
-+#endif /* 29BIT */
-+#endif /* P1SEG */
++#define copy_to_user(to,from,n) ({ \
++void *__copy_to = (void *) (to); \
++__kernel_size_t __copy_size = (__kernel_size_t) (n); \
++__kernel_size_t __copy_res; \
++if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
++__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
++} else __copy_res = __copy_size; \
++__copy_res; })
+
-+/* Check if an address can be reached in 29 bits */
-+#define IS_29BIT(a) (((unsigned long)(a)) < 0x20000000)
-
- #endif /* __KERNEL__ */
- #endif /* __ASM_SH_ADDRSPACE_H */
-diff --git a/include/asm-sh/atomic-grb.h b/include/asm-sh/atomic-grb.h
-new file mode 100644
-index 0000000..4c5b7db
---- /dev/null
-+++ b/include/asm-sh/atomic-grb.h
-@@ -0,0 +1,169 @@
-+#ifndef __ASM_SH_ATOMIC_GRB_H
-+#define __ASM_SH_ATOMIC_GRB_H
++#define copy_to_user_ret(to,from,n,retval) ({ \
++if (copy_to_user(to,from,n)) \
++ return retval; \
++})
+
-+static inline void atomic_add(int i, atomic_t *v)
-+{
-+ int tmp;
++#define __copy_to_user(to,from,n) \
++ __copy_user((void *)(to), \
++ (void *)(from), n)
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " add %2, %0 \n\t" /* add */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (i)
-+ : "memory" , "r0", "r1");
-+}
++#define __copy_to_user_ret(to,from,n,retval) ({ \
++if (__copy_to_user(to,from,n)) \
++ return retval; \
++})
+
-+static inline void atomic_sub(int i, atomic_t *v)
-+{
-+ int tmp;
++#define copy_from_user(to,from,n) ({ \
++void *__copy_to = (void *) (to); \
++void *__copy_from = (void *) (from); \
++__kernel_size_t __copy_size = (__kernel_size_t) (n); \
++__kernel_size_t __copy_res; \
++if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
++__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
++} else __copy_res = __copy_size; \
++__copy_res; })
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " sub %2, %0 \n\t" /* sub */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (i)
-+ : "memory" , "r0", "r1");
-+}
++#define copy_from_user_ret(to,from,n,retval) ({ \
++if (copy_from_user(to,from,n)) \
++ return retval; \
++})
+
-+static inline int atomic_add_return(int i, atomic_t *v)
-+{
-+ int tmp;
++#define __copy_from_user(to,from,n) \
++ __copy_user((void *)(to), \
++ (void *)(from), n)
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " add %2, %0 \n\t" /* add */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (i)
-+ : "memory" , "r0", "r1");
++#define __copy_from_user_ret(to,from,n,retval) ({ \
++if (__copy_from_user(to,from,n)) \
++ return retval; \
++})
+
-+ return tmp;
-+}
++#define __copy_to_user_inatomic __copy_to_user
++#define __copy_from_user_inatomic __copy_from_user
+
-+static inline int atomic_sub_return(int i, atomic_t *v)
-+{
-+ int tmp;
++/* XXX: Not sure it works well..
++ should be such that: 4byte clear and the rest. */
++extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " sub %2, %0 \n\t" /* sub */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (i)
-+ : "memory", "r0", "r1");
++#define clear_user(addr,n) ({ \
++void * __cl_addr = (addr); \
++unsigned long __cl_size = (n); \
++if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
++__cl_size = __clear_user(__cl_addr, __cl_size); \
++__cl_size; })
+
-+ return tmp;
-+}
++extern int __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count);
+
-+static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-+{
-+ int tmp;
-+ unsigned int _mask = ~mask;
++#define strncpy_from_user(dest,src,count) ({ \
++unsigned long __sfu_src = (unsigned long) (src); \
++int __sfu_count = (int) (count); \
++long __sfu_res = -EFAULT; \
++if(__access_ok(__sfu_src, __sfu_count)) { \
++__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
++} __sfu_res; })
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " and %2, %0 \n\t" /* add */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (_mask)
-+ : "memory" , "r0", "r1");
-+}
++#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+
-+static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-+{
-+ int tmp;
++/*
++ * Return the size of a string (including the ending 0!)
++ */
++extern long __strnlen_user(const char *__s, long __n);
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " or %2, %0 \n\t" /* or */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (v)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1");
++static inline long strnlen_user(const char *s, long n)
++{
++ if (!__addr_ok(s))
++ return 0;
++ else
++ return __strnlen_user(s, n);
+}
+
-+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
++struct exception_table_entry
+{
-+ int ret;
++ unsigned long insn, fixup;
++};
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t"
-+ " nop \n\t"
-+ " mov r15, r1 \n\t"
-+ " mov #-8, r15 \n\t"
-+ " mov.l @%1, %0 \n\t"
-+ " cmp/eq %2, %0 \n\t"
-+ " bf 1f \n\t"
-+ " mov.l %3, @%1 \n\t"
-+ "1: mov r1, r15 \n\t"
-+ : "=&r" (ret)
-+ : "r" (v), "r" (old), "r" (new)
-+ : "memory" , "r0", "r1" , "t");
++#define ARCH_HAS_SEARCH_EXTABLE
+
-+ return ret;
-+}
++/* Returns 0 if exception not found and fixup.unit otherwise. */
++extern unsigned long search_exception_table(unsigned long addr);
++extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
+
-+static inline int atomic_add_unless(atomic_t *v, int a, int u)
-+{
-+ int ret;
-+ unsigned long tmp;
++#endif /* __ASM_SH_UACCESS_64_H */
+diff --git a/include/asm-sh/unistd.h b/include/asm-sh/unistd.h
+index b182b1c..4b21f36 100644
+--- a/include/asm-sh/unistd.h
++++ b/include/asm-sh/unistd.h
+@@ -1,376 +1,5 @@
+-#ifndef __ASM_SH_UNISTD_H
+-#define __ASM_SH_UNISTD_H
+-
+-/*
+- * Copyright (C) 1999 Niibe Yutaka
+- */
+-
+-/*
+- * This file contains the system call numbers.
+- */
+-
+-#define __NR_restart_syscall 0
+-#define __NR_exit 1
+-#define __NR_fork 2
+-#define __NR_read 3
+-#define __NR_write 4
+-#define __NR_open 5
+-#define __NR_close 6
+-#define __NR_waitpid 7
+-#define __NR_creat 8
+-#define __NR_link 9
+-#define __NR_unlink 10
+-#define __NR_execve 11
+-#define __NR_chdir 12
+-#define __NR_time 13
+-#define __NR_mknod 14
+-#define __NR_chmod 15
+-#define __NR_lchown 16
+-#define __NR_break 17
+-#define __NR_oldstat 18
+-#define __NR_lseek 19
+-#define __NR_getpid 20
+-#define __NR_mount 21
+-#define __NR_umount 22
+-#define __NR_setuid 23
+-#define __NR_getuid 24
+-#define __NR_stime 25
+-#define __NR_ptrace 26
+-#define __NR_alarm 27
+-#define __NR_oldfstat 28
+-#define __NR_pause 29
+-#define __NR_utime 30
+-#define __NR_stty 31
+-#define __NR_gtty 32
+-#define __NR_access 33
+-#define __NR_nice 34
+-#define __NR_ftime 35
+-#define __NR_sync 36
+-#define __NR_kill 37
+-#define __NR_rename 38
+-#define __NR_mkdir 39
+-#define __NR_rmdir 40
+-#define __NR_dup 41
+-#define __NR_pipe 42
+-#define __NR_times 43
+-#define __NR_prof 44
+-#define __NR_brk 45
+-#define __NR_setgid 46
+-#define __NR_getgid 47
+-#define __NR_signal 48
+-#define __NR_geteuid 49
+-#define __NR_getegid 50
+-#define __NR_acct 51
+-#define __NR_umount2 52
+-#define __NR_lock 53
+-#define __NR_ioctl 54
+-#define __NR_fcntl 55
+-#define __NR_mpx 56
+-#define __NR_setpgid 57
+-#define __NR_ulimit 58
+-#define __NR_oldolduname 59
+-#define __NR_umask 60
+-#define __NR_chroot 61
+-#define __NR_ustat 62
+-#define __NR_dup2 63
+-#define __NR_getppid 64
+-#define __NR_getpgrp 65
+-#define __NR_setsid 66
+-#define __NR_sigaction 67
+-#define __NR_sgetmask 68
+-#define __NR_ssetmask 69
+-#define __NR_setreuid 70
+-#define __NR_setregid 71
+-#define __NR_sigsuspend 72
+-#define __NR_sigpending 73
+-#define __NR_sethostname 74
+-#define __NR_setrlimit 75
+-#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
+-#define __NR_getrusage 77
+-#define __NR_gettimeofday 78
+-#define __NR_settimeofday 79
+-#define __NR_getgroups 80
+-#define __NR_setgroups 81
+-#define __NR_select 82
+-#define __NR_symlink 83
+-#define __NR_oldlstat 84
+-#define __NR_readlink 85
+-#define __NR_uselib 86
+-#define __NR_swapon 87
+-#define __NR_reboot 88
+-#define __NR_readdir 89
+-#define __NR_mmap 90
+-#define __NR_munmap 91
+-#define __NR_truncate 92
+-#define __NR_ftruncate 93
+-#define __NR_fchmod 94
+-#define __NR_fchown 95
+-#define __NR_getpriority 96
+-#define __NR_setpriority 97
+-#define __NR_profil 98
+-#define __NR_statfs 99
+-#define __NR_fstatfs 100
+-#define __NR_ioperm 101
+-#define __NR_socketcall 102
+-#define __NR_syslog 103
+-#define __NR_setitimer 104
+-#define __NR_getitimer 105
+-#define __NR_stat 106
+-#define __NR_lstat 107
+-#define __NR_fstat 108
+-#define __NR_olduname 109
+-#define __NR_iopl 110
+-#define __NR_vhangup 111
+-#define __NR_idle 112
+-#define __NR_vm86old 113
+-#define __NR_wait4 114
+-#define __NR_swapoff 115
+-#define __NR_sysinfo 116
+-#define __NR_ipc 117
+-#define __NR_fsync 118
+-#define __NR_sigreturn 119
+-#define __NR_clone 120
+-#define __NR_setdomainname 121
+-#define __NR_uname 122
+-#define __NR_modify_ldt 123
+-#define __NR_adjtimex 124
+-#define __NR_mprotect 125
+-#define __NR_sigprocmask 126
+-#define __NR_create_module 127
+-#define __NR_init_module 128
+-#define __NR_delete_module 129
+-#define __NR_get_kernel_syms 130
+-#define __NR_quotactl 131
+-#define __NR_getpgid 132
+-#define __NR_fchdir 133
+-#define __NR_bdflush 134
+-#define __NR_sysfs 135
+-#define __NR_personality 136
+-#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
+-#define __NR_setfsuid 138
+-#define __NR_setfsgid 139
+-#define __NR__llseek 140
+-#define __NR_getdents 141
+-#define __NR__newselect 142
+-#define __NR_flock 143
+-#define __NR_msync 144
+-#define __NR_readv 145
+-#define __NR_writev 146
+-#define __NR_getsid 147
+-#define __NR_fdatasync 148
+-#define __NR__sysctl 149
+-#define __NR_mlock 150
+-#define __NR_munlock 151
+-#define __NR_mlockall 152
+-#define __NR_munlockall 153
+-#define __NR_sched_setparam 154
+-#define __NR_sched_getparam 155
+-#define __NR_sched_setscheduler 156
+-#define __NR_sched_getscheduler 157
+-#define __NR_sched_yield 158
+-#define __NR_sched_get_priority_max 159
+-#define __NR_sched_get_priority_min 160
+-#define __NR_sched_rr_get_interval 161
+-#define __NR_nanosleep 162
+-#define __NR_mremap 163
+-#define __NR_setresuid 164
+-#define __NR_getresuid 165
+-#define __NR_vm86 166
+-#define __NR_query_module 167
+-#define __NR_poll 168
+-#define __NR_nfsservctl 169
+-#define __NR_setresgid 170
+-#define __NR_getresgid 171
+-#define __NR_prctl 172
+-#define __NR_rt_sigreturn 173
+-#define __NR_rt_sigaction 174
+-#define __NR_rt_sigprocmask 175
+-#define __NR_rt_sigpending 176
+-#define __NR_rt_sigtimedwait 177
+-#define __NR_rt_sigqueueinfo 178
+-#define __NR_rt_sigsuspend 179
+-#define __NR_pread64 180
+-#define __NR_pwrite64 181
+-#define __NR_chown 182
+-#define __NR_getcwd 183
+-#define __NR_capget 184
+-#define __NR_capset 185
+-#define __NR_sigaltstack 186
+-#define __NR_sendfile 187
+-#define __NR_streams1 188 /* some people actually want it */
+-#define __NR_streams2 189 /* some people actually want it */
+-#define __NR_vfork 190
+-#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
+-#define __NR_mmap2 192
+-#define __NR_truncate64 193
+-#define __NR_ftruncate64 194
+-#define __NR_stat64 195
+-#define __NR_lstat64 196
+-#define __NR_fstat64 197
+-#define __NR_lchown32 198
+-#define __NR_getuid32 199
+-#define __NR_getgid32 200
+-#define __NR_geteuid32 201
+-#define __NR_getegid32 202
+-#define __NR_setreuid32 203
+-#define __NR_setregid32 204
+-#define __NR_getgroups32 205
+-#define __NR_setgroups32 206
+-#define __NR_fchown32 207
+-#define __NR_setresuid32 208
+-#define __NR_getresuid32 209
+-#define __NR_setresgid32 210
+-#define __NR_getresgid32 211
+-#define __NR_chown32 212
+-#define __NR_setuid32 213
+-#define __NR_setgid32 214
+-#define __NR_setfsuid32 215
+-#define __NR_setfsgid32 216
+-#define __NR_pivot_root 217
+-#define __NR_mincore 218
+-#define __NR_madvise 219
+-#define __NR_getdents64 220
+-#define __NR_fcntl64 221
+-/* 223 is unused */
+-#define __NR_gettid 224
+-#define __NR_readahead 225
+-#define __NR_setxattr 226
+-#define __NR_lsetxattr 227
+-#define __NR_fsetxattr 228
+-#define __NR_getxattr 229
+-#define __NR_lgetxattr 230
+-#define __NR_fgetxattr 231
+-#define __NR_listxattr 232
+-#define __NR_llistxattr 233
+-#define __NR_flistxattr 234
+-#define __NR_removexattr 235
+-#define __NR_lremovexattr 236
+-#define __NR_fremovexattr 237
+-#define __NR_tkill 238
+-#define __NR_sendfile64 239
+-#define __NR_futex 240
+-#define __NR_sched_setaffinity 241
+-#define __NR_sched_getaffinity 242
+-#define __NR_set_thread_area 243
+-#define __NR_get_thread_area 244
+-#define __NR_io_setup 245
+-#define __NR_io_destroy 246
+-#define __NR_io_getevents 247
+-#define __NR_io_submit 248
+-#define __NR_io_cancel 249
+-#define __NR_fadvise64 250
+-
+-#define __NR_exit_group 252
+-#define __NR_lookup_dcookie 253
+-#define __NR_epoll_create 254
+-#define __NR_epoll_ctl 255
+-#define __NR_epoll_wait 256
+-#define __NR_remap_file_pages 257
+-#define __NR_set_tid_address 258
+-#define __NR_timer_create 259
+-#define __NR_timer_settime (__NR_timer_create+1)
+-#define __NR_timer_gettime (__NR_timer_create+2)
+-#define __NR_timer_getoverrun (__NR_timer_create+3)
+-#define __NR_timer_delete (__NR_timer_create+4)
+-#define __NR_clock_settime (__NR_timer_create+5)
+-#define __NR_clock_gettime (__NR_timer_create+6)
+-#define __NR_clock_getres (__NR_timer_create+7)
+-#define __NR_clock_nanosleep (__NR_timer_create+8)
+-#define __NR_statfs64 268
+-#define __NR_fstatfs64 269
+-#define __NR_tgkill 270
+-#define __NR_utimes 271
+-#define __NR_fadvise64_64 272
+-#define __NR_vserver 273
+-#define __NR_mbind 274
+-#define __NR_get_mempolicy 275
+-#define __NR_set_mempolicy 276
+-#define __NR_mq_open 277
+-#define __NR_mq_unlink (__NR_mq_open+1)
+-#define __NR_mq_timedsend (__NR_mq_open+2)
+-#define __NR_mq_timedreceive (__NR_mq_open+3)
+-#define __NR_mq_notify (__NR_mq_open+4)
+-#define __NR_mq_getsetattr (__NR_mq_open+5)
+-#define __NR_kexec_load 283
+-#define __NR_waitid 284
+-#define __NR_add_key 285
+-#define __NR_request_key 286
+-#define __NR_keyctl 287
+-#define __NR_ioprio_set 288
+-#define __NR_ioprio_get 289
+-#define __NR_inotify_init 290
+-#define __NR_inotify_add_watch 291
+-#define __NR_inotify_rm_watch 292
+-/* 293 is unused */
+-#define __NR_migrate_pages 294
+-#define __NR_openat 295
+-#define __NR_mkdirat 296
+-#define __NR_mknodat 297
+-#define __NR_fchownat 298
+-#define __NR_futimesat 299
+-#define __NR_fstatat64 300
+-#define __NR_unlinkat 301
+-#define __NR_renameat 302
+-#define __NR_linkat 303
+-#define __NR_symlinkat 304
+-#define __NR_readlinkat 305
+-#define __NR_fchmodat 306
+-#define __NR_faccessat 307
+-#define __NR_pselect6 308
+-#define __NR_ppoll 309
+-#define __NR_unshare 310
+-#define __NR_set_robust_list 311
+-#define __NR_get_robust_list 312
+-#define __NR_splice 313
+-#define __NR_sync_file_range 314
+-#define __NR_tee 315
+-#define __NR_vmsplice 316
+-#define __NR_move_pages 317
+-#define __NR_getcpu 318
+-#define __NR_epoll_pwait 319
+-#define __NR_utimensat 320
+-#define __NR_signalfd 321
+-#define __NR_timerfd 322
+-#define __NR_eventfd 323
+-#define __NR_fallocate 324
+-
+-#define NR_syscalls 325
+-
+-#ifdef __KERNEL__
+-
+-#define __ARCH_WANT_IPC_PARSE_VERSION
+-#define __ARCH_WANT_OLD_READDIR
+-#define __ARCH_WANT_OLD_STAT
+-#define __ARCH_WANT_STAT64
+-#define __ARCH_WANT_SYS_ALARM
+-#define __ARCH_WANT_SYS_GETHOSTNAME
+-#define __ARCH_WANT_SYS_PAUSE
+-#define __ARCH_WANT_SYS_SGETMASK
+-#define __ARCH_WANT_SYS_SIGNAL
+-#define __ARCH_WANT_SYS_TIME
+-#define __ARCH_WANT_SYS_UTIME
+-#define __ARCH_WANT_SYS_WAITPID
+-#define __ARCH_WANT_SYS_SOCKETCALL
+-#define __ARCH_WANT_SYS_FADVISE64
+-#define __ARCH_WANT_SYS_GETPGRP
+-#define __ARCH_WANT_SYS_LLSEEK
+-#define __ARCH_WANT_SYS_NICE
+-#define __ARCH_WANT_SYS_OLD_GETRLIMIT
+-#define __ARCH_WANT_SYS_OLDUMOUNT
+-#define __ARCH_WANT_SYS_SIGPENDING
+-#define __ARCH_WANT_SYS_SIGPROCMASK
+-#define __ARCH_WANT_SYS_RT_SIGACTION
+-#define __ARCH_WANT_SYS_RT_SIGSUSPEND
+-
+-/*
+- * "Conditional" syscalls
+- *
+- * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
+- * but it doesn't work on all toolchains, so we just do it by hand
+- */
+-#ifndef cond_syscall
+-#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
++#ifdef CONFIG_SUPERH32
++# include "unistd_32.h"
++#else
++# include "unistd_64.h"
+ #endif
+-
+-#endif /* __KERNEL__ */
+-#endif /* __ASM_SH_UNISTD_H */
+diff --git a/include/asm-sh/unistd_32.h b/include/asm-sh/unistd_32.h
+new file mode 100644
+index 0000000..b182b1c
+--- /dev/null
++++ b/include/asm-sh/unistd_32.h
+@@ -0,0 +1,376 @@
++#ifndef __ASM_SH_UNISTD_H
++#define __ASM_SH_UNISTD_H
++
++/*
++ * Copyright (C) 1999 Niibe Yutaka
++ */
++
++/*
++ * This file contains the system call numbers.
++ */
++
++#define __NR_restart_syscall 0
++#define __NR_exit 1
++#define __NR_fork 2
++#define __NR_read 3
++#define __NR_write 4
++#define __NR_open 5
++#define __NR_close 6
++#define __NR_waitpid 7
++#define __NR_creat 8
++#define __NR_link 9
++#define __NR_unlink 10
++#define __NR_execve 11
++#define __NR_chdir 12
++#define __NR_time 13
++#define __NR_mknod 14
++#define __NR_chmod 15
++#define __NR_lchown 16
++#define __NR_break 17
++#define __NR_oldstat 18
++#define __NR_lseek 19
++#define __NR_getpid 20
++#define __NR_mount 21
++#define __NR_umount 22
++#define __NR_setuid 23
++#define __NR_getuid 24
++#define __NR_stime 25
++#define __NR_ptrace 26
++#define __NR_alarm 27
++#define __NR_oldfstat 28
++#define __NR_pause 29
++#define __NR_utime 30
++#define __NR_stty 31
++#define __NR_gtty 32
++#define __NR_access 33
++#define __NR_nice 34
++#define __NR_ftime 35
++#define __NR_sync 36
++#define __NR_kill 37
++#define __NR_rename 38
++#define __NR_mkdir 39
++#define __NR_rmdir 40
++#define __NR_dup 41
++#define __NR_pipe 42
++#define __NR_times 43
++#define __NR_prof 44
++#define __NR_brk 45
++#define __NR_setgid 46
++#define __NR_getgid 47
++#define __NR_signal 48
++#define __NR_geteuid 49
++#define __NR_getegid 50
++#define __NR_acct 51
++#define __NR_umount2 52
++#define __NR_lock 53
++#define __NR_ioctl 54
++#define __NR_fcntl 55
++#define __NR_mpx 56
++#define __NR_setpgid 57
++#define __NR_ulimit 58
++#define __NR_oldolduname 59
++#define __NR_umask 60
++#define __NR_chroot 61
++#define __NR_ustat 62
++#define __NR_dup2 63
++#define __NR_getppid 64
++#define __NR_getpgrp 65
++#define __NR_setsid 66
++#define __NR_sigaction 67
++#define __NR_sgetmask 68
++#define __NR_ssetmask 69
++#define __NR_setreuid 70
++#define __NR_setregid 71
++#define __NR_sigsuspend 72
++#define __NR_sigpending 73
++#define __NR_sethostname 74
++#define __NR_setrlimit 75
++#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
++#define __NR_getrusage 77
++#define __NR_gettimeofday 78
++#define __NR_settimeofday 79
++#define __NR_getgroups 80
++#define __NR_setgroups 81
++#define __NR_select 82
++#define __NR_symlink 83
++#define __NR_oldlstat 84
++#define __NR_readlink 85
++#define __NR_uselib 86
++#define __NR_swapon 87
++#define __NR_reboot 88
++#define __NR_readdir 89
++#define __NR_mmap 90
++#define __NR_munmap 91
++#define __NR_truncate 92
++#define __NR_ftruncate 93
++#define __NR_fchmod 94
++#define __NR_fchown 95
++#define __NR_getpriority 96
++#define __NR_setpriority 97
++#define __NR_profil 98
++#define __NR_statfs 99
++#define __NR_fstatfs 100
++#define __NR_ioperm 101
++#define __NR_socketcall 102
++#define __NR_syslog 103
++#define __NR_setitimer 104
++#define __NR_getitimer 105
++#define __NR_stat 106
++#define __NR_lstat 107
++#define __NR_fstat 108
++#define __NR_olduname 109
++#define __NR_iopl 110
++#define __NR_vhangup 111
++#define __NR_idle 112
++#define __NR_vm86old 113
++#define __NR_wait4 114
++#define __NR_swapoff 115
++#define __NR_sysinfo 116
++#define __NR_ipc 117
++#define __NR_fsync 118
++#define __NR_sigreturn 119
++#define __NR_clone 120
++#define __NR_setdomainname 121
++#define __NR_uname 122
++#define __NR_modify_ldt 123
++#define __NR_adjtimex 124
++#define __NR_mprotect 125
++#define __NR_sigprocmask 126
++#define __NR_create_module 127
++#define __NR_init_module 128
++#define __NR_delete_module 129
++#define __NR_get_kernel_syms 130
++#define __NR_quotactl 131
++#define __NR_getpgid 132
++#define __NR_fchdir 133
++#define __NR_bdflush 134
++#define __NR_sysfs 135
++#define __NR_personality 136
++#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
++#define __NR_setfsuid 138
++#define __NR_setfsgid 139
++#define __NR__llseek 140
++#define __NR_getdents 141
++#define __NR__newselect 142
++#define __NR_flock 143
++#define __NR_msync 144
++#define __NR_readv 145
++#define __NR_writev 146
++#define __NR_getsid 147
++#define __NR_fdatasync 148
++#define __NR__sysctl 149
++#define __NR_mlock 150
++#define __NR_munlock 151
++#define __NR_mlockall 152
++#define __NR_munlockall 153
++#define __NR_sched_setparam 154
++#define __NR_sched_getparam 155
++#define __NR_sched_setscheduler 156
++#define __NR_sched_getscheduler 157
++#define __NR_sched_yield 158
++#define __NR_sched_get_priority_max 159
++#define __NR_sched_get_priority_min 160
++#define __NR_sched_rr_get_interval 161
++#define __NR_nanosleep 162
++#define __NR_mremap 163
++#define __NR_setresuid 164
++#define __NR_getresuid 165
++#define __NR_vm86 166
++#define __NR_query_module 167
++#define __NR_poll 168
++#define __NR_nfsservctl 169
++#define __NR_setresgid 170
++#define __NR_getresgid 171
++#define __NR_prctl 172
++#define __NR_rt_sigreturn 173
++#define __NR_rt_sigaction 174
++#define __NR_rt_sigprocmask 175
++#define __NR_rt_sigpending 176
++#define __NR_rt_sigtimedwait 177
++#define __NR_rt_sigqueueinfo 178
++#define __NR_rt_sigsuspend 179
++#define __NR_pread64 180
++#define __NR_pwrite64 181
++#define __NR_chown 182
++#define __NR_getcwd 183
++#define __NR_capget 184
++#define __NR_capset 185
++#define __NR_sigaltstack 186
++#define __NR_sendfile 187
++#define __NR_streams1 188 /* some people actually want it */
++#define __NR_streams2 189 /* some people actually want it */
++#define __NR_vfork 190
++#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
++#define __NR_mmap2 192
++#define __NR_truncate64 193
++#define __NR_ftruncate64 194
++#define __NR_stat64 195
++#define __NR_lstat64 196
++#define __NR_fstat64 197
++#define __NR_lchown32 198
++#define __NR_getuid32 199
++#define __NR_getgid32 200
++#define __NR_geteuid32 201
++#define __NR_getegid32 202
++#define __NR_setreuid32 203
++#define __NR_setregid32 204
++#define __NR_getgroups32 205
++#define __NR_setgroups32 206
++#define __NR_fchown32 207
++#define __NR_setresuid32 208
++#define __NR_getresuid32 209
++#define __NR_setresgid32 210
++#define __NR_getresgid32 211
++#define __NR_chown32 212
++#define __NR_setuid32 213
++#define __NR_setgid32 214
++#define __NR_setfsuid32 215
++#define __NR_setfsgid32 216
++#define __NR_pivot_root 217
++#define __NR_mincore 218
++#define __NR_madvise 219
++#define __NR_getdents64 220
++#define __NR_fcntl64 221
++/* 223 is unused */
++#define __NR_gettid 224
++#define __NR_readahead 225
++#define __NR_setxattr 226
++#define __NR_lsetxattr 227
++#define __NR_fsetxattr 228
++#define __NR_getxattr 229
++#define __NR_lgetxattr 230
++#define __NR_fgetxattr 231
++#define __NR_listxattr 232
++#define __NR_llistxattr 233
++#define __NR_flistxattr 234
++#define __NR_removexattr 235
++#define __NR_lremovexattr 236
++#define __NR_fremovexattr 237
++#define __NR_tkill 238
++#define __NR_sendfile64 239
++#define __NR_futex 240
++#define __NR_sched_setaffinity 241
++#define __NR_sched_getaffinity 242
++#define __NR_set_thread_area 243
++#define __NR_get_thread_area 244
++#define __NR_io_setup 245
++#define __NR_io_destroy 246
++#define __NR_io_getevents 247
++#define __NR_io_submit 248
++#define __NR_io_cancel 249
++#define __NR_fadvise64 250
++
++#define __NR_exit_group 252
++#define __NR_lookup_dcookie 253
++#define __NR_epoll_create 254
++#define __NR_epoll_ctl 255
++#define __NR_epoll_wait 256
++#define __NR_remap_file_pages 257
++#define __NR_set_tid_address 258
++#define __NR_timer_create 259
++#define __NR_timer_settime (__NR_timer_create+1)
++#define __NR_timer_gettime (__NR_timer_create+2)
++#define __NR_timer_getoverrun (__NR_timer_create+3)
++#define __NR_timer_delete (__NR_timer_create+4)
++#define __NR_clock_settime (__NR_timer_create+5)
++#define __NR_clock_gettime (__NR_timer_create+6)
++#define __NR_clock_getres (__NR_timer_create+7)
++#define __NR_clock_nanosleep (__NR_timer_create+8)
++#define __NR_statfs64 268
++#define __NR_fstatfs64 269
++#define __NR_tgkill 270
++#define __NR_utimes 271
++#define __NR_fadvise64_64 272
++#define __NR_vserver 273
++#define __NR_mbind 274
++#define __NR_get_mempolicy 275
++#define __NR_set_mempolicy 276
++#define __NR_mq_open 277
++#define __NR_mq_unlink (__NR_mq_open+1)
++#define __NR_mq_timedsend (__NR_mq_open+2)
++#define __NR_mq_timedreceive (__NR_mq_open+3)
++#define __NR_mq_notify (__NR_mq_open+4)
++#define __NR_mq_getsetattr (__NR_mq_open+5)
++#define __NR_kexec_load 283
++#define __NR_waitid 284
++#define __NR_add_key 285
++#define __NR_request_key 286
++#define __NR_keyctl 287
++#define __NR_ioprio_set 288
++#define __NR_ioprio_get 289
++#define __NR_inotify_init 290
++#define __NR_inotify_add_watch 291
++#define __NR_inotify_rm_watch 292
++/* 293 is unused */
++#define __NR_migrate_pages 294
++#define __NR_openat 295
++#define __NR_mkdirat 296
++#define __NR_mknodat 297
++#define __NR_fchownat 298
++#define __NR_futimesat 299
++#define __NR_fstatat64 300
++#define __NR_unlinkat 301
++#define __NR_renameat 302
++#define __NR_linkat 303
++#define __NR_symlinkat 304
++#define __NR_readlinkat 305
++#define __NR_fchmodat 306
++#define __NR_faccessat 307
++#define __NR_pselect6 308
++#define __NR_ppoll 309
++#define __NR_unshare 310
++#define __NR_set_robust_list 311
++#define __NR_get_robust_list 312
++#define __NR_splice 313
++#define __NR_sync_file_range 314
++#define __NR_tee 315
++#define __NR_vmsplice 316
++#define __NR_move_pages 317
++#define __NR_getcpu 318
++#define __NR_epoll_pwait 319
++#define __NR_utimensat 320
++#define __NR_signalfd 321
++#define __NR_timerfd 322
++#define __NR_eventfd 323
++#define __NR_fallocate 324
+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t"
-+ " nop \n\t"
-+ " mov r15, r1 \n\t"
-+ " mov #-12, r15 \n\t"
-+ " mov.l @%2, %1 \n\t"
-+ " mov %1, %0 \n\t"
-+ " cmp/eq %4, %0 \n\t"
-+ " bt/s 1f \n\t"
-+ " add %3, %1 \n\t"
-+ " mov.l %1, @%2 \n\t"
-+ "1: mov r1, r15 \n\t"
-+ : "=&r" (ret), "=&r" (tmp)
-+ : "r" (v), "r" (a), "r" (u)
-+ : "memory" , "r0", "r1" , "t");
++#define NR_syscalls 325
+
-+ return ret != u;
-+}
-+#endif /* __ASM_SH_ATOMIC_GRB_H */
-diff --git a/include/asm-sh/atomic.h b/include/asm-sh/atomic.h
-index e12570b..c043ef0 100644
---- a/include/asm-sh/atomic.h
-+++ b/include/asm-sh/atomic.h
-@@ -17,7 +17,9 @@ typedef struct { volatile int counter; } atomic_t;
- #include <linux/compiler.h>
- #include <asm/system.h>
-
--#ifdef CONFIG_CPU_SH4A
-+#if defined(CONFIG_GUSA_RB)
-+#include <asm/atomic-grb.h>
-+#elif defined(CONFIG_CPU_SH4A)
- #include <asm/atomic-llsc.h>
- #else
- #include <asm/atomic-irq.h>
-@@ -44,6 +46,7 @@ typedef struct { volatile int counter; } atomic_t;
- #define atomic_inc(v) atomic_add(1,(v))
- #define atomic_dec(v) atomic_sub(1,(v))
-
-+#ifndef CONFIG_GUSA_RB
- static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
- {
- int ret;
-@@ -58,8 +61,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
- return ret;
- }
-
--#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
--
- static inline int atomic_add_unless(atomic_t *v, int a, int u)
- {
- int ret;
-@@ -73,6 +74,9 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
-
- return ret != u;
- }
-+#endif
++#ifdef __KERNEL__
+
-+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
- #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
-
- /* Atomic operations are already serializing on SH */
-diff --git a/include/asm-sh/auxvec.h b/include/asm-sh/auxvec.h
-index 1b6916e..a6b9d4f 100644
---- a/include/asm-sh/auxvec.h
-+++ b/include/asm-sh/auxvec.h
-@@ -6,6 +6,12 @@
- * for more of them.
- */
-
-+/*
-+ * This entry gives some information about the FPU initialization
-+ * performed by the kernel.
-+ */
-+#define AT_FPUCW 18 /* Used FPU control word. */
++#define __ARCH_WANT_IPC_PARSE_VERSION
++#define __ARCH_WANT_OLD_READDIR
++#define __ARCH_WANT_OLD_STAT
++#define __ARCH_WANT_STAT64
++#define __ARCH_WANT_SYS_ALARM
++#define __ARCH_WANT_SYS_GETHOSTNAME
++#define __ARCH_WANT_SYS_PAUSE
++#define __ARCH_WANT_SYS_SGETMASK
++#define __ARCH_WANT_SYS_SIGNAL
++#define __ARCH_WANT_SYS_TIME
++#define __ARCH_WANT_SYS_UTIME
++#define __ARCH_WANT_SYS_WAITPID
++#define __ARCH_WANT_SYS_SOCKETCALL
++#define __ARCH_WANT_SYS_FADVISE64
++#define __ARCH_WANT_SYS_GETPGRP
++#define __ARCH_WANT_SYS_LLSEEK
++#define __ARCH_WANT_SYS_NICE
++#define __ARCH_WANT_SYS_OLD_GETRLIMIT
++#define __ARCH_WANT_SYS_OLDUMOUNT
++#define __ARCH_WANT_SYS_SIGPENDING
++#define __ARCH_WANT_SYS_SIGPROCMASK
++#define __ARCH_WANT_SYS_RT_SIGACTION
++#define __ARCH_WANT_SYS_RT_SIGSUSPEND
+
- #ifdef CONFIG_VSYSCALL
- /*
- * Only define this in the vsyscall case, the entry point to
-@@ -15,4 +21,16 @@
- #define AT_SYSINFO_EHDR 33
- #endif
-
+/*
-+ * More complete cache descriptions than AT_[DIU]CACHEBSIZE. If the
-+ * value is -1, then the cache doesn't exist. Otherwise:
++ * "Conditional" syscalls
+ *
-+ * bit 0-3: Cache set-associativity; 0 means fully associative.
-+ * bit 4-7: Log2 of cacheline size.
-+ * bit 8-31: Size of the entire cache >> 8.
++ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
++ * but it doesn't work on all toolchains, so we just do it by hand
+ */
-+#define AT_L1I_CACHESHAPE 34
-+#define AT_L1D_CACHESHAPE 35
-+#define AT_L2_CACHESHAPE 36
-+
- #endif /* __ASM_SH_AUXVEC_H */
-diff --git a/include/asm-sh/bitops-grb.h b/include/asm-sh/bitops-grb.h
-new file mode 100644
-index 0000000..a5907b9
---- /dev/null
-+++ b/include/asm-sh/bitops-grb.h
-@@ -0,0 +1,169 @@
-+#ifndef __ASM_SH_BITOPS_GRB_H
-+#define __ASM_SH_BITOPS_GRB_H
-+
-+static inline void set_bit(int nr, volatile void * addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " or %2, %0 \n\t" /* or */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (a)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1");
-+}
-+
-+static inline void clear_bit(int nr, volatile void * addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = ~(1 << (nr & 0x1f));
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " and %2, %0 \n\t" /* and */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (a)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1");
-+}
-+
-+static inline void change_bit(int nr, volatile void * addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " xor %2, %0 \n\t" /* xor */
-+ " mov.l %0, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "+r" (a)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1");
-+}
-+
-+static inline int test_and_set_bit(int nr, volatile void * addr)
-+{
-+ int mask, retval;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-14, r15 \n\t" /* LOGIN: r15 = size */
-+ " mov.l @%2, %0 \n\t" /* load old value */
-+ " mov %0, %1 \n\t"
-+ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
-+ " mov #-1, %1 \n\t" /* retvat = -1 */
-+ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
-+ " or %3, %0 \n\t"
-+ " mov.l %0, @%2 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "=&r" (retval),
-+ "+r" (a)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1" ,"t");
-+
-+ return retval;
-+}
-+
-+static inline int test_and_clear_bit(int nr, volatile void * addr)
-+{
-+ int mask, retval,not_mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+
-+ not_mask = ~mask;
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-14, r15 \n\t" /* LOGIN */
-+ " mov.l @%2, %0 \n\t" /* load old value */
-+ " mov %0, %1 \n\t" /* %1 = *a */
-+ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
-+ " mov #-1, %1 \n\t" /* retvat = -1 */
-+ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
-+ " and %4, %0 \n\t"
-+ " mov.l %0, @%2 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "=&r" (retval),
-+ "+r" (a)
-+ : "r" (mask),
-+ "r" (not_mask)
-+ : "memory" , "r0", "r1", "t");
-+
-+ return retval;
-+}
-+
-+static inline int test_and_change_bit(int nr, volatile void * addr)
-+{
-+ int mask, retval;
-+ volatile unsigned int *a = addr;
-+ unsigned long tmp;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-14, r15 \n\t" /* LOGIN */
-+ " mov.l @%2, %0 \n\t" /* load old value */
-+ " mov %0, %1 \n\t" /* %1 = *a */
-+ " tst %1, %3 \n\t" /* T = ((*a & mask) == 0) */
-+ " mov #-1, %1 \n\t" /* retvat = -1 */
-+ " negc %1, %1 \n\t" /* retval = (mask & *a) != 0 */
-+ " xor %3, %0 \n\t"
-+ " mov.l %0, @%2 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (tmp),
-+ "=&r" (retval),
-+ "+r" (a)
-+ : "r" (mask)
-+ : "memory" , "r0", "r1", "t");
++#ifndef cond_syscall
++#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
++#endif
+
-+ return retval;
-+}
-+#endif /* __ASM_SH_BITOPS_GRB_H */
-diff --git a/include/asm-sh/bitops-irq.h b/include/asm-sh/bitops-irq.h
++#endif /* __KERNEL__ */
++#endif /* __ASM_SH_UNISTD_H */
+diff --git a/include/asm-sh/unistd_64.h b/include/asm-sh/unistd_64.h
new file mode 100644
-index 0000000..653a127
+index 0000000..9445118
--- /dev/null
-+++ b/include/asm-sh/bitops-irq.h
-@@ -0,0 +1,91 @@
-+#ifndef __ASM_SH_BITOPS_IRQ_H
-+#define __ASM_SH_BITOPS_IRQ_H
-+
-+static inline void set_bit(int nr, volatile void *addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ *a |= mask;
-+ local_irq_restore(flags);
-+}
-+
-+static inline void clear_bit(int nr, volatile void *addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ *a &= ~mask;
-+ local_irq_restore(flags);
-+}
-+
-+static inline void change_bit(int nr, volatile void *addr)
-+{
-+ int mask;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
-+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ *a ^= mask;
-+ local_irq_restore(flags);
-+}
++++ b/include/asm-sh/unistd_64.h
+@@ -0,0 +1,415 @@
++#ifndef __ASM_SH_UNISTD_64_H
++#define __ASM_SH_UNISTD_64_H
+
-+static inline int test_and_set_bit(int nr, volatile void *addr)
-+{
-+ int mask, retval;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
++/*
++ * include/asm-sh/unistd_64.h
++ *
++ * This file contains the system call numbers.
++ *
++ * Copyright (C) 2000, 2001 Paolo Alberelli
++ * Copyright (C) 2003 - 2007 Paul Mundt
++ * Copyright (C) 2004 Sean McGoogan
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file "COPYING" in the main directory of this archive
++ * for more details.
++ */
++#define __NR_restart_syscall 0
++#define __NR_exit 1
++#define __NR_fork 2
++#define __NR_read 3
++#define __NR_write 4
++#define __NR_open 5
++#define __NR_close 6
++#define __NR_waitpid 7
++#define __NR_creat 8
++#define __NR_link 9
++#define __NR_unlink 10
++#define __NR_execve 11
++#define __NR_chdir 12
++#define __NR_time 13
++#define __NR_mknod 14
++#define __NR_chmod 15
++#define __NR_lchown 16
++#define __NR_break 17
++#define __NR_oldstat 18
++#define __NR_lseek 19
++#define __NR_getpid 20
++#define __NR_mount 21
++#define __NR_umount 22
++#define __NR_setuid 23
++#define __NR_getuid 24
++#define __NR_stime 25
++#define __NR_ptrace 26
++#define __NR_alarm 27
++#define __NR_oldfstat 28
++#define __NR_pause 29
++#define __NR_utime 30
++#define __NR_stty 31
++#define __NR_gtty 32
++#define __NR_access 33
++#define __NR_nice 34
++#define __NR_ftime 35
++#define __NR_sync 36
++#define __NR_kill 37
++#define __NR_rename 38
++#define __NR_mkdir 39
++#define __NR_rmdir 40
++#define __NR_dup 41
++#define __NR_pipe 42
++#define __NR_times 43
++#define __NR_prof 44
++#define __NR_brk 45
++#define __NR_setgid 46
++#define __NR_getgid 47
++#define __NR_signal 48
++#define __NR_geteuid 49
++#define __NR_getegid 50
++#define __NR_acct 51
++#define __NR_umount2 52
++#define __NR_lock 53
++#define __NR_ioctl 54
++#define __NR_fcntl 55
++#define __NR_mpx 56
++#define __NR_setpgid 57
++#define __NR_ulimit 58
++#define __NR_oldolduname 59
++#define __NR_umask 60
++#define __NR_chroot 61
++#define __NR_ustat 62
++#define __NR_dup2 63
++#define __NR_getppid 64
++#define __NR_getpgrp 65
++#define __NR_setsid 66
++#define __NR_sigaction 67
++#define __NR_sgetmask 68
++#define __NR_ssetmask 69
++#define __NR_setreuid 70
++#define __NR_setregid 71
++#define __NR_sigsuspend 72
++#define __NR_sigpending 73
++#define __NR_sethostname 74
++#define __NR_setrlimit 75
++#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
++#define __NR_getrusage 77
++#define __NR_gettimeofday 78
++#define __NR_settimeofday 79
++#define __NR_getgroups 80
++#define __NR_setgroups 81
++#define __NR_select 82
++#define __NR_symlink 83
++#define __NR_oldlstat 84
++#define __NR_readlink 85
++#define __NR_uselib 86
++#define __NR_swapon 87
++#define __NR_reboot 88
++#define __NR_readdir 89
++#define __NR_mmap 90
++#define __NR_munmap 91
++#define __NR_truncate 92
++#define __NR_ftruncate 93
++#define __NR_fchmod 94
++#define __NR_fchown 95
++#define __NR_getpriority 96
++#define __NR_setpriority 97
++#define __NR_profil 98
++#define __NR_statfs 99
++#define __NR_fstatfs 100
++#define __NR_ioperm 101
++#define __NR_socketcall 102 /* old implementation of socket systemcall */
++#define __NR_syslog 103
++#define __NR_setitimer 104
++#define __NR_getitimer 105
++#define __NR_stat 106
++#define __NR_lstat 107
++#define __NR_fstat 108
++#define __NR_olduname 109
++#define __NR_iopl 110
++#define __NR_vhangup 111
++#define __NR_idle 112
++#define __NR_vm86old 113
++#define __NR_wait4 114
++#define __NR_swapoff 115
++#define __NR_sysinfo 116
++#define __NR_ipc 117
++#define __NR_fsync 118
++#define __NR_sigreturn 119
++#define __NR_clone 120
++#define __NR_setdomainname 121
++#define __NR_uname 122
++#define __NR_modify_ldt 123
++#define __NR_adjtimex 124
++#define __NR_mprotect 125
++#define __NR_sigprocmask 126
++#define __NR_create_module 127
++#define __NR_init_module 128
++#define __NR_delete_module 129
++#define __NR_get_kernel_syms 130
++#define __NR_quotactl 131
++#define __NR_getpgid 132
++#define __NR_fchdir 133
++#define __NR_bdflush 134
++#define __NR_sysfs 135
++#define __NR_personality 136
++#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
++#define __NR_setfsuid 138
++#define __NR_setfsgid 139
++#define __NR__llseek 140
++#define __NR_getdents 141
++#define __NR__newselect 142
++#define __NR_flock 143
++#define __NR_msync 144
++#define __NR_readv 145
++#define __NR_writev 146
++#define __NR_getsid 147
++#define __NR_fdatasync 148
++#define __NR__sysctl 149
++#define __NR_mlock 150
++#define __NR_munlock 151
++#define __NR_mlockall 152
++#define __NR_munlockall 153
++#define __NR_sched_setparam 154
++#define __NR_sched_getparam 155
++#define __NR_sched_setscheduler 156
++#define __NR_sched_getscheduler 157
++#define __NR_sched_yield 158
++#define __NR_sched_get_priority_max 159
++#define __NR_sched_get_priority_min 160
++#define __NR_sched_rr_get_interval 161
++#define __NR_nanosleep 162
++#define __NR_mremap 163
++#define __NR_setresuid 164
++#define __NR_getresuid 165
++#define __NR_vm86 166
++#define __NR_query_module 167
++#define __NR_poll 168
++#define __NR_nfsservctl 169
++#define __NR_setresgid 170
++#define __NR_getresgid 171
++#define __NR_prctl 172
++#define __NR_rt_sigreturn 173
++#define __NR_rt_sigaction 174
++#define __NR_rt_sigprocmask 175
++#define __NR_rt_sigpending 176
++#define __NR_rt_sigtimedwait 177
++#define __NR_rt_sigqueueinfo 178
++#define __NR_rt_sigsuspend 179
++#define __NR_pread64 180
++#define __NR_pwrite64 181
++#define __NR_chown 182
++#define __NR_getcwd 183
++#define __NR_capget 184
++#define __NR_capset 185
++#define __NR_sigaltstack 186
++#define __NR_sendfile 187
++#define __NR_streams1 188 /* some people actually want it */
++#define __NR_streams2 189 /* some people actually want it */
++#define __NR_vfork 190
++#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
++#define __NR_mmap2 192
++#define __NR_truncate64 193
++#define __NR_ftruncate64 194
++#define __NR_stat64 195
++#define __NR_lstat64 196
++#define __NR_fstat64 197
++#define __NR_lchown32 198
++#define __NR_getuid32 199
++#define __NR_getgid32 200
++#define __NR_geteuid32 201
++#define __NR_getegid32 202
++#define __NR_setreuid32 203
++#define __NR_setregid32 204
++#define __NR_getgroups32 205
++#define __NR_setgroups32 206
++#define __NR_fchown32 207
++#define __NR_setresuid32 208
++#define __NR_getresuid32 209
++#define __NR_setresgid32 210
++#define __NR_getresgid32 211
++#define __NR_chown32 212
++#define __NR_setuid32 213
++#define __NR_setgid32 214
++#define __NR_setfsuid32 215
++#define __NR_setfsgid32 216
++#define __NR_pivot_root 217
++#define __NR_mincore 218
++#define __NR_madvise 219
+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ retval = (mask & *a) != 0;
-+ *a |= mask;
-+ local_irq_restore(flags);
++/* Non-multiplexed socket family */
++#define __NR_socket 220
++#define __NR_bind 221
++#define __NR_connect 222
++#define __NR_listen 223
++#define __NR_accept 224
++#define __NR_getsockname 225
++#define __NR_getpeername 226
++#define __NR_socketpair 227
++#define __NR_send 228
++#define __NR_sendto 229
++#define __NR_recv 230
++#define __NR_recvfrom 231
++#define __NR_shutdown 232
++#define __NR_setsockopt 233
++#define __NR_getsockopt 234
++#define __NR_sendmsg 235
++#define __NR_recvmsg 236
+
-+ return retval;
-+}
++/* Non-multiplexed IPC family */
++#define __NR_semop 237
++#define __NR_semget 238
++#define __NR_semctl 239
++#define __NR_msgsnd 240
++#define __NR_msgrcv 241
++#define __NR_msgget 242
++#define __NR_msgctl 243
++#if 0
++#define __NR_shmatcall 244
++#endif
++#define __NR_shmdt 245
++#define __NR_shmget 246
++#define __NR_shmctl 247
+
-+static inline int test_and_clear_bit(int nr, volatile void *addr)
-+{
-+ int mask, retval;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
++#define __NR_getdents64 248
++#define __NR_fcntl64 249
++/* 223 is unused */
++#define __NR_gettid 252
++#define __NR_readahead 253
++#define __NR_setxattr 254
++#define __NR_lsetxattr 255
++#define __NR_fsetxattr 256
++#define __NR_getxattr 257
++#define __NR_lgetxattr 258
++#define __NR_fgetxattr 269
++#define __NR_listxattr 260
++#define __NR_llistxattr 261
++#define __NR_flistxattr 262
++#define __NR_removexattr 263
++#define __NR_lremovexattr 264
++#define __NR_fremovexattr 265
++#define __NR_tkill 266
++#define __NR_sendfile64 267
++#define __NR_futex 268
++#define __NR_sched_setaffinity 269
++#define __NR_sched_getaffinity 270
++#define __NR_set_thread_area 271
++#define __NR_get_thread_area 272
++#define __NR_io_setup 273
++#define __NR_io_destroy 274
++#define __NR_io_getevents 275
++#define __NR_io_submit 276
++#define __NR_io_cancel 277
++#define __NR_fadvise64 278
++#define __NR_exit_group 280
+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ retval = (mask & *a) != 0;
-+ *a &= ~mask;
-+ local_irq_restore(flags);
++#define __NR_lookup_dcookie 281
++#define __NR_epoll_create 282
++#define __NR_epoll_ctl 283
++#define __NR_epoll_wait 284
++#define __NR_remap_file_pages 285
++#define __NR_set_tid_address 286
++#define __NR_timer_create 287
++#define __NR_timer_settime (__NR_timer_create+1)
++#define __NR_timer_gettime (__NR_timer_create+2)
++#define __NR_timer_getoverrun (__NR_timer_create+3)
++#define __NR_timer_delete (__NR_timer_create+4)
++#define __NR_clock_settime (__NR_timer_create+5)
++#define __NR_clock_gettime (__NR_timer_create+6)
++#define __NR_clock_getres (__NR_timer_create+7)
++#define __NR_clock_nanosleep (__NR_timer_create+8)
++#define __NR_statfs64 296
++#define __NR_fstatfs64 297
++#define __NR_tgkill 298
++#define __NR_utimes 299
++#define __NR_fadvise64_64 300
++#define __NR_vserver 301
++#define __NR_mbind 302
++#define __NR_get_mempolicy 303
++#define __NR_set_mempolicy 304
++#define __NR_mq_open 305
++#define __NR_mq_unlink (__NR_mq_open+1)
++#define __NR_mq_timedsend (__NR_mq_open+2)
++#define __NR_mq_timedreceive (__NR_mq_open+3)
++#define __NR_mq_notify (__NR_mq_open+4)
++#define __NR_mq_getsetattr (__NR_mq_open+5)
++#define __NR_kexec_load 311
++#define __NR_waitid 312
++#define __NR_add_key 313
++#define __NR_request_key 314
++#define __NR_keyctl 315
++#define __NR_ioprio_set 316
++#define __NR_ioprio_get 317
++#define __NR_inotify_init 318
++#define __NR_inotify_add_watch 319
++#define __NR_inotify_rm_watch 320
++/* 321 is unused */
++#define __NR_migrate_pages 322
++#define __NR_openat 323
++#define __NR_mkdirat 324
++#define __NR_mknodat 325
++#define __NR_fchownat 326
++#define __NR_futimesat 327
++#define __NR_fstatat64 328
++#define __NR_unlinkat 329
++#define __NR_renameat 330
++#define __NR_linkat 331
++#define __NR_symlinkat 332
++#define __NR_readlinkat 333
++#define __NR_fchmodat 334
++#define __NR_faccessat 335
++#define __NR_pselect6 336
++#define __NR_ppoll 337
++#define __NR_unshare 338
++#define __NR_set_robust_list 339
++#define __NR_get_robust_list 340
++#define __NR_splice 341
++#define __NR_sync_file_range 342
++#define __NR_tee 343
++#define __NR_vmsplice 344
++#define __NR_move_pages 345
++#define __NR_getcpu 346
++#define __NR_epoll_pwait 347
++#define __NR_utimensat 348
++#define __NR_signalfd 349
++#define __NR_timerfd 350
++#define __NR_eventfd 351
++#define __NR_fallocate 352
+
-+ return retval;
-+}
++#ifdef __KERNEL__
+
-+static inline int test_and_change_bit(int nr, volatile void *addr)
-+{
-+ int mask, retval;
-+ volatile unsigned int *a = addr;
-+ unsigned long flags;
++#define NR_syscalls 353
+
-+ a += nr >> 5;
-+ mask = 1 << (nr & 0x1f);
-+ local_irq_save(flags);
-+ retval = (mask & *a) != 0;
-+ *a ^= mask;
-+ local_irq_restore(flags);
++#define __ARCH_WANT_IPC_PARSE_VERSION
++#define __ARCH_WANT_OLD_READDIR
++#define __ARCH_WANT_OLD_STAT
++#define __ARCH_WANT_STAT64
++#define __ARCH_WANT_SYS_ALARM
++#define __ARCH_WANT_SYS_GETHOSTNAME
++#define __ARCH_WANT_SYS_PAUSE
++#define __ARCH_WANT_SYS_SGETMASK
++#define __ARCH_WANT_SYS_SIGNAL
++#define __ARCH_WANT_SYS_TIME
++#define __ARCH_WANT_SYS_UTIME
++#define __ARCH_WANT_SYS_WAITPID
++#define __ARCH_WANT_SYS_SOCKETCALL
++#define __ARCH_WANT_SYS_FADVISE64
++#define __ARCH_WANT_SYS_GETPGRP
++#define __ARCH_WANT_SYS_LLSEEK
++#define __ARCH_WANT_SYS_NICE
++#define __ARCH_WANT_SYS_OLD_GETRLIMIT
++#define __ARCH_WANT_SYS_OLDUMOUNT
++#define __ARCH_WANT_SYS_SIGPENDING
++#define __ARCH_WANT_SYS_SIGPROCMASK
++#define __ARCH_WANT_SYS_RT_SIGACTION
+
-+ return retval;
-+}
++/*
++ * "Conditional" syscalls
++ *
++ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
++ * but it doesn't work on all toolchains, so we just do it by hand
++ */
++#ifndef cond_syscall
++#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
++#endif
+
-+#endif /* __ASM_SH_BITOPS_IRQ_H */
-diff --git a/include/asm-sh/bitops.h b/include/asm-sh/bitops.h
-index df805f2..b6ba5a6 100644
---- a/include/asm-sh/bitops.h
-+++ b/include/asm-sh/bitops.h
-@@ -11,100 +11,22 @@
- /* For __swab32 */
- #include <asm/byteorder.h>
++#endif /* __KERNEL__ */
++#endif /* __ASM_SH_UNISTD_64_H */
+diff --git a/include/asm-sh/user.h b/include/asm-sh/user.h
+index d1b8511..1a4f43c 100644
+--- a/include/asm-sh/user.h
++++ b/include/asm-sh/user.h
+@@ -27,12 +27,19 @@
+ * to write an integer number of pages.
+ */
--static inline void set_bit(int nr, volatile void * addr)
++#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
++struct user_fpu_struct {
++ unsigned long fp_regs[32];
++ unsigned int fpscr;
++};
++#else
+ struct user_fpu_struct {
+ unsigned long fp_regs[16];
+ unsigned long xfp_regs[16];
+ unsigned long fpscr;
+ unsigned long fpul;
+ };
++#endif
+
+ struct user {
+ struct pt_regs regs; /* entire machine state */
+diff --git a/include/asm-sh/voyagergx.h b/include/asm-sh/voyagergx.h
+deleted file mode 100644
+index d825596..0000000
+--- a/include/asm-sh/voyagergx.h
++++ /dev/null
+@@ -1,341 +0,0 @@
+-/* -------------------------------------------------------------------- */
+-/* voyagergx.h */
+-/* -------------------------------------------------------------------- */
+-/* This program is free software; you can redistribute it and/or modify
+- it under the terms of the GNU General Public License as published by
+- the Free Software Foundation; either version 2 of the License, or
+- (at your option) any later version.
+-
+- This program is distributed in the hope that it will be useful,
+- but WITHOUT ANY WARRANTY; without even the implied warranty of
+- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- GNU General Public License for more details.
+-
+- You should have received a copy of the GNU General Public License
+- along with this program; if not, write to the Free Software
+- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+-
+- Copyright 2003 (c) Lineo uSolutions,Inc.
+-*/
+-/* -------------------------------------------------------------------- */
+-
+-#ifndef _VOYAGER_GX_REG_H
+-#define _VOYAGER_GX_REG_H
+-
+-#define VOYAGER_BASE 0xb3e00000
+-#define VOYAGER_USBH_BASE (0x40000 + VOYAGER_BASE)
+-#define VOYAGER_UART_BASE (0x30000 + VOYAGER_BASE)
+-#define VOYAGER_AC97_BASE (0xa0000 + VOYAGER_BASE)
+-
+-#define VOYAGER_IRQ_NUM 26
+-#define VOYAGER_IRQ_BASE 200
+-
+-#define IRQ_SM501_UP (VOYAGER_IRQ_BASE + 0)
+-#define IRQ_SM501_G54 (VOYAGER_IRQ_BASE + 1)
+-#define IRQ_SM501_G53 (VOYAGER_IRQ_BASE + 2)
+-#define IRQ_SM501_G52 (VOYAGER_IRQ_BASE + 3)
+-#define IRQ_SM501_G51 (VOYAGER_IRQ_BASE + 4)
+-#define IRQ_SM501_G50 (VOYAGER_IRQ_BASE + 5)
+-#define IRQ_SM501_G49 (VOYAGER_IRQ_BASE + 6)
+-#define IRQ_SM501_G48 (VOYAGER_IRQ_BASE + 7)
+-#define IRQ_SM501_I2C (VOYAGER_IRQ_BASE + 8)
+-#define IRQ_SM501_PW (VOYAGER_IRQ_BASE + 9)
+-#define IRQ_SM501_DMA (VOYAGER_IRQ_BASE + 10)
+-#define IRQ_SM501_PCI (VOYAGER_IRQ_BASE + 11)
+-#define IRQ_SM501_I2S (VOYAGER_IRQ_BASE + 12)
+-#define IRQ_SM501_AC (VOYAGER_IRQ_BASE + 13)
+-#define IRQ_SM501_US (VOYAGER_IRQ_BASE + 14)
+-#define IRQ_SM501_U1 (VOYAGER_IRQ_BASE + 15)
+-#define IRQ_SM501_U0 (VOYAGER_IRQ_BASE + 16)
+-#define IRQ_SM501_CV (VOYAGER_IRQ_BASE + 17)
+-#define IRQ_SM501_MC (VOYAGER_IRQ_BASE + 18)
+-#define IRQ_SM501_S1 (VOYAGER_IRQ_BASE + 19)
+-#define IRQ_SM501_S0 (VOYAGER_IRQ_BASE + 20)
+-#define IRQ_SM501_UH (VOYAGER_IRQ_BASE + 21)
+-#define IRQ_SM501_2D (VOYAGER_IRQ_BASE + 22)
+-#define IRQ_SM501_ZD (VOYAGER_IRQ_BASE + 23)
+-#define IRQ_SM501_PV (VOYAGER_IRQ_BASE + 24)
+-#define IRQ_SM501_CI (VOYAGER_IRQ_BASE + 25)
+-
+-/* ----- MISC controle register ------------------------------ */
+-#define MISC_CTRL (0x000004 + VOYAGER_BASE)
+-#define MISC_CTRL_USBCLK_48 (3 << 28)
+-#define MISC_CTRL_USBCLK_96 (2 << 28)
+-#define MISC_CTRL_USBCLK_CRYSTAL (1 << 28)
+-
+-/* ----- GPIO[31:0] register --------------------------------- */
+-#define GPIO_MUX_LOW (0x000008 + VOYAGER_BASE)
+-#define GPIO_MUX_LOW_AC97 0x1F000000
+-#define GPIO_MUX_LOW_8051 0x0000ffff
+-#define GPIO_MUX_LOW_PWM (1 << 29)
+-
+-/* ----- GPIO[63:32] register --------------------------------- */
+-#define GPIO_MUX_HIGH (0x00000C + VOYAGER_BASE)
+-
+-/* ----- DRAM controle register ------------------------------- */
+-#define DRAM_CTRL (0x000010 + VOYAGER_BASE)
+-#define DRAM_CTRL_EMBEDDED (1 << 31)
+-#define DRAM_CTRL_CPU_BURST_1 (0 << 28)
+-#define DRAM_CTRL_CPU_BURST_2 (1 << 28)
+-#define DRAM_CTRL_CPU_BURST_4 (2 << 28)
+-#define DRAM_CTRL_CPU_BURST_8 (3 << 28)
+-#define DRAM_CTRL_CPU_CAS_LATENCY (1 << 27)
+-#define DRAM_CTRL_CPU_SIZE_2 (0 << 24)
+-#define DRAM_CTRL_CPU_SIZE_4 (1 << 24)
+-#define DRAM_CTRL_CPU_SIZE_64 (4 << 24)
+-#define DRAM_CTRL_CPU_SIZE_32 (5 << 24)
+-#define DRAM_CTRL_CPU_SIZE_16 (6 << 24)
+-#define DRAM_CTRL_CPU_SIZE_8 (7 << 24)
+-#define DRAM_CTRL_CPU_COLUMN_SIZE_1024 (0 << 22)
+-#define DRAM_CTRL_CPU_COLUMN_SIZE_512 (2 << 22)
+-#define DRAM_CTRL_CPU_COLUMN_SIZE_256 (3 << 22)
+-#define DRAM_CTRL_CPU_ACTIVE_PRECHARGE (1 << 21)
+-#define DRAM_CTRL_CPU_RESET (1 << 20)
+-#define DRAM_CTRL_CPU_BANKS (1 << 19)
+-#define DRAM_CTRL_CPU_WRITE_PRECHARGE (1 << 18)
+-#define DRAM_CTRL_BLOCK_WRITE (1 << 17)
+-#define DRAM_CTRL_REFRESH_COMMAND (1 << 16)
+-#define DRAM_CTRL_SIZE_4 (0 << 13)
+-#define DRAM_CTRL_SIZE_8 (1 << 13)
+-#define DRAM_CTRL_SIZE_16 (2 << 13)
+-#define DRAM_CTRL_SIZE_32 (3 << 13)
+-#define DRAM_CTRL_SIZE_64 (4 << 13)
+-#define DRAM_CTRL_SIZE_2 (5 << 13)
+-#define DRAM_CTRL_COLUMN_SIZE_256 (0 << 11)
+-#define DRAM_CTRL_COLUMN_SIZE_512 (2 << 11)
+-#define DRAM_CTRL_COLUMN_SIZE_1024 (3 << 11)
+-#define DRAM_CTRL_BLOCK_WRITE_TIME (1 << 10)
+-#define DRAM_CTRL_BLOCK_WRITE_PRECHARGE (1 << 9)
+-#define DRAM_CTRL_ACTIVE_PRECHARGE (1 << 8)
+-#define DRAM_CTRL_RESET (1 << 7)
+-#define DRAM_CTRL_REMAIN_ACTIVE (1 << 6)
+-#define DRAM_CTRL_BANKS (1 << 1)
+-#define DRAM_CTRL_WRITE_PRECHARGE (1 << 0)
+-
+-/* ----- Arvitration control register -------------------------- */
+-#define ARBITRATION_CTRL (0x000014 + VOYAGER_BASE)
+-#define ARBITRATION_CTRL_CPUMEM (1 << 29)
+-#define ARBITRATION_CTRL_INTMEM (1 << 28)
+-#define ARBITRATION_CTRL_USB_OFF (0 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_1 (1 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_2 (2 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_3 (3 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_4 (4 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_5 (5 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_6 (6 << 24)
+-#define ARBITRATION_CTRL_USB_PRIORITY_7 (7 << 24)
+-#define ARBITRATION_CTRL_PANEL_OFF (0 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_1 (1 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_2 (2 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_3 (3 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_4 (4 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_5 (5 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_6 (6 << 20)
+-#define ARBITRATION_CTRL_PANEL_PRIORITY_7 (7 << 20)
+-#define ARBITRATION_CTRL_ZVPORT_OFF (0 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_1 (1 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_2 (2 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_3 (3 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_4 (4 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_5 (5 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_6 (6 << 16)
+-#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_7 (7 << 16)
+-#define ARBITRATION_CTRL_CMD_INTPR_OFF (0 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_1 (1 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_2 (2 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_3 (3 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_4 (4 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_5 (5 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_6 (6 << 12)
+-#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_7 (7 << 12)
+-#define ARBITRATION_CTRL_DMA_OFF (0 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_1 (1 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_2 (2 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_3 (3 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_4 (4 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_5 (5 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_6 (6 << 8)
+-#define ARBITRATION_CTRL_DMA_PRIORITY_7 (7 << 8)
+-#define ARBITRATION_CTRL_VIDEO_OFF (0 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_1 (1 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_2 (2 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_3 (3 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_4 (4 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_5 (5 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_6 (6 << 4)
+-#define ARBITRATION_CTRL_VIDEO_PRIORITY_7 (7 << 4)
+-#define ARBITRATION_CTRL_CRT_OFF (0 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_1 (1 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_2 (2 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_3 (3 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_4 (4 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_5 (5 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_6 (6 << 0)
+-#define ARBITRATION_CTRL_CRT_PRIORITY_7 (7 << 0)
+-
+-/* ----- Command list status register -------------------------- */
+-#define CMD_INTPR_STATUS (0x000024 + VOYAGER_BASE)
+-
+-/* ----- Interrupt status register ----------------------------- */
+-#define INT_STATUS (0x00002c + VOYAGER_BASE)
+-#define INT_STATUS_UH (1 << 6)
+-#define INT_STATUS_MC (1 << 10)
+-#define INT_STATUS_U0 (1 << 12)
+-#define INT_STATUS_U1 (1 << 13)
+-#define INT_STATUS_AC (1 << 17)
+-
+-/* ----- Interrupt mask register ------------------------------ */
+-#define VOYAGER_INT_MASK (0x000030 + VOYAGER_BASE)
+-#define VOYAGER_INT_MASK_AC (1 << 17)
+-
+-/* ----- Current Gate register ---------------------------------*/
+-#define CURRENT_GATE (0x000038 + VOYAGER_BASE)
+-
+-/* ----- Power mode 0 gate register --------------------------- */
+-#define POWER_MODE0_GATE (0x000040 + VOYAGER_BASE)
+-#define POWER_MODE0_GATE_G (1 << 6)
+-#define POWER_MODE0_GATE_U0 (1 << 7)
+-#define POWER_MODE0_GATE_U1 (1 << 8)
+-#define POWER_MODE0_GATE_UH (1 << 11)
+-#define POWER_MODE0_GATE_AC (1 << 18)
+-
+-/* ----- Power mode 1 gate register --------------------------- */
+-#define POWER_MODE1_GATE (0x000048 + VOYAGER_BASE)
+-#define POWER_MODE1_GATE_G (1 << 6)
+-#define POWER_MODE1_GATE_U0 (1 << 7)
+-#define POWER_MODE1_GATE_U1 (1 << 8)
+-#define POWER_MODE1_GATE_UH (1 << 11)
+-#define POWER_MODE1_GATE_AC (1 << 18)
+-
+-/* ----- Power mode 0 clock register -------------------------- */
+-#define POWER_MODE0_CLOCK (0x000044 + VOYAGER_BASE)
+-
+-/* ----- Power mode 1 clock register -------------------------- */
+-#define POWER_MODE1_CLOCK (0x00004C + VOYAGER_BASE)
+-
+-/* ----- Power mode controll register ------------------------- */
+-#define POWER_MODE_CTRL (0x000054 + VOYAGER_BASE)
+-
+-/* ----- Miscellaneous Timing register ------------------------ */
+-#define SYSTEM_DRAM_CTRL (0x000068 + VOYAGER_BASE)
+-
+-/* ----- PWM register ------------------------------------------*/
+-#define PWM_0 (0x010020 + VOYAGER_BASE)
+-#define PWM_0_HC(x) (((x)&0x0fff)<<20)
+-#define PWM_0_LC(x) (((x)&0x0fff)<<8 )
+-#define PWM_0_CLK_DEV(x) (((x)&0x000f)<<4 )
+-#define PWM_0_EN (1<<0)
+-
+-/* ----- I2C register ----------------------------------------- */
+-#define I2C_BYTECOUNT (0x010040 + VOYAGER_BASE)
+-#define I2C_CONTROL (0x010041 + VOYAGER_BASE)
+-#define I2C_STATUS (0x010042 + VOYAGER_BASE)
+-#define I2C_RESET (0x010042 + VOYAGER_BASE)
+-#define I2C_SADDRESS (0x010043 + VOYAGER_BASE)
+-#define I2C_DATA (0x010044 + VOYAGER_BASE)
+-
+-/* ----- Controle register bits ----------------------------------------- */
+-#define I2C_CONTROL_E (1 << 0)
+-#define I2C_CONTROL_MODE (1 << 1)
+-#define I2C_CONTROL_STATUS (1 << 2)
+-#define I2C_CONTROL_INT (1 << 4)
+-#define I2C_CONTROL_INTACK (1 << 5)
+-#define I2C_CONTROL_REPEAT (1 << 6)
+-
+-/* ----- Status register bits ----------------------------------------- */
+-#define I2C_STATUS_BUSY (1 << 0)
+-#define I2C_STATUS_ACK (1 << 1)
+-#define I2C_STATUS_ERROR (1 << 2)
+-#define I2C_STATUS_COMPLETE (1 << 3)
+-
+-/* ----- Reset register ---------------------------------------------- */
+-#define I2C_RESET_ERROR (1 << 2)
+-
+-/* ----- transmission frequencies ------------------------------------- */
+-#define I2C_SADDRESS_SELECT (1 << 0)
+-
+-/* ----- Display Controll register ----------------------------------------- */
+-#define PANEL_DISPLAY_CTRL (0x080000 + VOYAGER_BASE)
+-#define PANEL_DISPLAY_CTRL_BIAS (1<<26)
+-#define PANEL_PAN_CTRL (0x080004 + VOYAGER_BASE)
+-#define PANEL_COLOR_KEY (0x080008 + VOYAGER_BASE)
+-#define PANEL_FB_ADDRESS (0x08000C + VOYAGER_BASE)
+-#define PANEL_FB_WIDTH (0x080010 + VOYAGER_BASE)
+-#define PANEL_WINDOW_WIDTH (0x080014 + VOYAGER_BASE)
+-#define PANEL_WINDOW_HEIGHT (0x080018 + VOYAGER_BASE)
+-#define PANEL_PLANE_TL (0x08001C + VOYAGER_BASE)
+-#define PANEL_PLANE_BR (0x080020 + VOYAGER_BASE)
+-#define PANEL_HORIZONTAL_TOTAL (0x080024 + VOYAGER_BASE)
+-#define PANEL_HORIZONTAL_SYNC (0x080028 + VOYAGER_BASE)
+-#define PANEL_VERTICAL_TOTAL (0x08002C + VOYAGER_BASE)
+-#define PANEL_VERTICAL_SYNC (0x080030 + VOYAGER_BASE)
+-#define PANEL_CURRENT_LINE (0x080034 + VOYAGER_BASE)
+-#define VIDEO_DISPLAY_CTRL (0x080040 + VOYAGER_BASE)
+-#define VIDEO_FB_0_ADDRESS (0x080044 + VOYAGER_BASE)
+-#define VIDEO_FB_WIDTH (0x080048 + VOYAGER_BASE)
+-#define VIDEO_FB_0_LAST_ADDRESS (0x08004C + VOYAGER_BASE)
+-#define VIDEO_PLANE_TL (0x080050 + VOYAGER_BASE)
+-#define VIDEO_PLANE_BR (0x080054 + VOYAGER_BASE)
+-#define VIDEO_SCALE (0x080058 + VOYAGER_BASE)
+-#define VIDEO_INITIAL_SCALE (0x08005C + VOYAGER_BASE)
+-#define VIDEO_YUV_CONSTANTS (0x080060 + VOYAGER_BASE)
+-#define VIDEO_FB_1_ADDRESS (0x080064 + VOYAGER_BASE)
+-#define VIDEO_FB_1_LAST_ADDRESS (0x080068 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_DISPLAY_CTRL (0x080080 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_FB_ADDRESS (0x080084 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_FB_WIDTH (0x080088 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_FB_LAST_ADDRESS (0x08008C + VOYAGER_BASE)
+-#define VIDEO_ALPHA_PLANE_TL (0x080090 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_PLANE_BR (0x080094 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_SCALE (0x080098 + VOYAGER_BASE)
+-#define VIDEO_ALPHA_INITIAL_SCALE (0x08009C + VOYAGER_BASE)
+-#define VIDEO_ALPHA_CHROMA_KEY (0x0800A0 + VOYAGER_BASE)
+-#define PANEL_HWC_ADDRESS (0x0800F0 + VOYAGER_BASE)
+-#define PANEL_HWC_LOCATION (0x0800F4 + VOYAGER_BASE)
+-#define PANEL_HWC_COLOR_12 (0x0800F8 + VOYAGER_BASE)
+-#define PANEL_HWC_COLOR_3 (0x0800FC + VOYAGER_BASE)
+-#define ALPHA_DISPLAY_CTRL (0x080100 + VOYAGER_BASE)
+-#define ALPHA_FB_ADDRESS (0x080104 + VOYAGER_BASE)
+-#define ALPHA_FB_WIDTH (0x080108 + VOYAGER_BASE)
+-#define ALPHA_PLANE_TL (0x08010C + VOYAGER_BASE)
+-#define ALPHA_PLANE_BR (0x080110 + VOYAGER_BASE)
+-#define ALPHA_CHROMA_KEY (0x080114 + VOYAGER_BASE)
+-#define CRT_DISPLAY_CTRL (0x080200 + VOYAGER_BASE)
+-#define CRT_FB_ADDRESS (0x080204 + VOYAGER_BASE)
+-#define CRT_FB_WIDTH (0x080208 + VOYAGER_BASE)
+-#define CRT_HORIZONTAL_TOTAL (0x08020C + VOYAGER_BASE)
+-#define CRT_HORIZONTAL_SYNC (0x080210 + VOYAGER_BASE)
+-#define CRT_VERTICAL_TOTAL (0x080214 + VOYAGER_BASE)
+-#define CRT_VERTICAL_SYNC (0x080218 + VOYAGER_BASE)
+-#define CRT_SIGNATURE_ANALYZER (0x08021C + VOYAGER_BASE)
+-#define CRT_CURRENT_LINE (0x080220 + VOYAGER_BASE)
+-#define CRT_MONITOR_DETECT (0x080224 + VOYAGER_BASE)
+-#define CRT_HWC_ADDRESS (0x080230 + VOYAGER_BASE)
+-#define CRT_HWC_LOCATION (0x080234 + VOYAGER_BASE)
+-#define CRT_HWC_COLOR_12 (0x080238 + VOYAGER_BASE)
+-#define CRT_HWC_COLOR_3 (0x08023C + VOYAGER_BASE)
+-#define CRT_PALETTE_RAM (0x080400 + VOYAGER_BASE)
+-#define PANEL_PALETTE_RAM (0x080800 + VOYAGER_BASE)
+-#define VIDEO_PALETTE_RAM (0x080C00 + VOYAGER_BASE)
+-
+-/* ----- 8051 Controle register ----------------------------------------- */
+-#define VOYAGER_8051_BASE (0x000c0000 + VOYAGER_BASE)
+-#define VOYAGER_8051_RESET (0x000b0000 + VOYAGER_BASE)
+-#define VOYAGER_8051_SELECT (0x000b0004 + VOYAGER_BASE)
+-#define VOYAGER_8051_CPU_INT (0x000b000c + VOYAGER_BASE)
+-
+-/* ----- AC97 Controle register ----------------------------------------- */
+-#define AC97_TX_SLOT0 (0x00000000 + VOYAGER_AC97_BASE)
+-#define AC97_CONTROL_STATUS (0x00000080 + VOYAGER_AC97_BASE)
+-#define AC97C_READ (1 << 19)
+-#define AC97C_WD_BIT (1 << 2)
+-#define AC97C_INDEX_MASK 0x7f
+-
+-/* arch/sh/cchips/voyagergx/consistent.c */
+-void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, gfp_t);
+-int voyagergx_consistent_free(struct device *, size_t, void *, dma_addr_t);
+-
+-/* arch/sh/cchips/voyagergx/irq.c */
+-void setup_voyagergx_irq(void);
+-
+-#endif /* _VOYAGER_GX_REG_H */
+diff --git a/include/asm-sh64/Kbuild b/include/asm-sh64/Kbuild
+deleted file mode 100644
+index c68e168..0000000
+--- a/include/asm-sh64/Kbuild
++++ /dev/null
+@@ -1 +0,0 @@
+-include include/asm-generic/Kbuild.asm
+diff --git a/include/asm-sh64/a.out.h b/include/asm-sh64/a.out.h
+deleted file mode 100644
+index 237ee4e..0000000
+--- a/include/asm-sh64/a.out.h
++++ /dev/null
+@@ -1,38 +0,0 @@
+-#ifndef __ASM_SH64_A_OUT_H
+-#define __ASM_SH64_A_OUT_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/a.out.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-struct exec
+-{
+- unsigned long a_info; /* Use macros N_MAGIC, etc for access */
+- unsigned a_text; /* length of text, in bytes */
+- unsigned a_data; /* length of data, in bytes */
+- unsigned a_bss; /* length of uninitialized data area for file, in bytes */
+- unsigned a_syms; /* length of symbol table data in file, in bytes */
+- unsigned a_entry; /* start address */
+- unsigned a_trsize; /* length of relocation info for text, in bytes */
+- unsigned a_drsize; /* length of relocation info for data, in bytes */
+-};
+-
+-#define N_TRSIZE(a) ((a).a_trsize)
+-#define N_DRSIZE(a) ((a).a_drsize)
+-#define N_SYMSIZE(a) ((a).a_syms)
+-
+-#ifdef __KERNEL__
+-
+-#define STACK_TOP TASK_SIZE
+-#define STACK_TOP_MAX STACK_TOP
+-
+-#endif
+-
+-#endif /* __ASM_SH64_A_OUT_H */
+diff --git a/include/asm-sh64/atomic.h b/include/asm-sh64/atomic.h
+deleted file mode 100644
+index 28f2ea9..0000000
+--- a/include/asm-sh64/atomic.h
++++ /dev/null
+@@ -1,158 +0,0 @@
+-#ifndef __ASM_SH64_ATOMIC_H
+-#define __ASM_SH64_ATOMIC_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/atomic.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
+-
+-/*
+- * Atomic operations that C can't guarantee us. Useful for
+- * resource counting etc..
+- *
+- */
+-
+-typedef struct { volatile int counter; } atomic_t;
+-
+-#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
+-
+-#define atomic_read(v) ((v)->counter)
+-#define atomic_set(v,i) ((v)->counter = (i))
+-
+-#include <asm/system.h>
+-
+-/*
+- * To get proper branch prediction for the main line, we must branch
+- * forward to code at the end of this object's .text section, then
+- * branch back to restart the operation.
+- */
+-
+-static __inline__ void atomic_add(int i, atomic_t * v)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- *(long *)v += i;
+- local_irq_restore(flags);
+-}
+-
+-static __inline__ void atomic_sub(int i, atomic_t *v)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- *(long *)v -= i;
+- local_irq_restore(flags);
+-}
+-
+-static __inline__ int atomic_add_return(int i, atomic_t * v)
+-{
+- unsigned long temp, flags;
+-
+- local_irq_save(flags);
+- temp = *(long *)v;
+- temp += i;
+- *(long *)v = temp;
+- local_irq_restore(flags);
+-
+- return temp;
+-}
+-
+-#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
+-
+-static __inline__ int atomic_sub_return(int i, atomic_t * v)
+-{
+- unsigned long temp, flags;
+-
+- local_irq_save(flags);
+- temp = *(long *)v;
+- temp -= i;
+- *(long *)v = temp;
+- local_irq_restore(flags);
+-
+- return temp;
+-}
+-
+-#define atomic_dec_return(v) atomic_sub_return(1,(v))
+-#define atomic_inc_return(v) atomic_add_return(1,(v))
+-
+-/*
+- * atomic_inc_and_test - increment and test
+- * @v: pointer of type atomic_t
+- *
+- * Atomically increments @v by 1
+- * and returns true if the result is zero, or false for all
+- * other cases.
+- */
+-#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0)
+-
+-#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
+-#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+-
+-#define atomic_inc(v) atomic_add(1,(v))
+-#define atomic_dec(v) atomic_sub(1,(v))
+-
+-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+-{
+- int ret;
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- ret = v->counter;
+- if (likely(ret == old))
+- v->counter = new;
+- local_irq_restore(flags);
+-
+- return ret;
+-}
+-
+-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+-
+-static inline int atomic_add_unless(atomic_t *v, int a, int u)
+-{
+- int ret;
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- ret = v->counter;
+- if (ret != u)
+- v->counter += a;
+- local_irq_restore(flags);
+-
+- return ret != u;
+-}
+-#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
+-
+-static __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- *(long *)v &= ~mask;
+- local_irq_restore(flags);
+-}
+-
+-static __inline__ void atomic_set_mask(unsigned int mask, atomic_t *v)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- *(long *)v |= mask;
+- local_irq_restore(flags);
+-}
+-
+-/* Atomic operations are already serializing on SH */
+-#define smp_mb__before_atomic_dec() barrier()
+-#define smp_mb__after_atomic_dec() barrier()
+-#define smp_mb__before_atomic_inc() barrier()
+-#define smp_mb__after_atomic_inc() barrier()
+-
+-#include <asm-generic/atomic.h>
+-#endif /* __ASM_SH64_ATOMIC_H */
+diff --git a/include/asm-sh64/auxvec.h b/include/asm-sh64/auxvec.h
+deleted file mode 100644
+index 1ad5a44..0000000
+--- a/include/asm-sh64/auxvec.h
++++ /dev/null
+@@ -1,4 +0,0 @@
+-#ifndef __ASM_SH64_AUXVEC_H
+-#define __ASM_SH64_AUXVEC_H
+-
+-#endif /* __ASM_SH64_AUXVEC_H */
+diff --git a/include/asm-sh64/bitops.h b/include/asm-sh64/bitops.h
+deleted file mode 100644
+index 600c59e..0000000
+--- a/include/asm-sh64/bitops.h
++++ /dev/null
+@@ -1,155 +0,0 @@
+-#ifndef __ASM_SH64_BITOPS_H
+-#define __ASM_SH64_BITOPS_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/bitops.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- */
+-
+-#ifdef __KERNEL__
+-
+-#ifndef _LINUX_BITOPS_H
+-#error only <linux/bitops.h> can be included directly
+-#endif
+-
+-#include <linux/compiler.h>
+-#include <asm/system.h>
+-/* For __swab32 */
+-#include <asm/byteorder.h>
+-
+-static __inline__ void set_bit(int nr, volatile void * addr)
-{
- int mask;
- volatile unsigned int *a = addr;
@@ -520464,22 +609596,15 @@
- *a |= mask;
- local_irq_restore(flags);
-}
-+#ifdef CONFIG_GUSA_RB
-+#include <asm/bitops-grb.h>
-+#else
-+#include <asm/bitops-irq.h>
-+#endif
-+
-
- /*
- * clear_bit() doesn't provide any barrier for the compiler.
- */
- #define smp_mb__before_clear_bit() barrier()
- #define smp_mb__after_clear_bit() barrier()
--static inline void clear_bit(int nr, volatile void * addr)
+-
+-/*
+- * clear_bit() doesn't provide any barrier for the compiler.
+- */
+-#define smp_mb__before_clear_bit() barrier()
+-#define smp_mb__after_clear_bit() barrier()
+-static inline void clear_bit(int nr, volatile unsigned long *a)
-{
- int mask;
-- volatile unsigned int *a = addr;
- unsigned long flags;
-
- a += nr >> 5;
@@ -520489,7 +609614,7 @@
- local_irq_restore(flags);
-}
-
--static inline void change_bit(int nr, volatile void * addr)
+-static __inline__ void change_bit(int nr, volatile void * addr)
-{
- int mask;
- volatile unsigned int *a = addr;
@@ -520502,7 +609627,7 @@
- local_irq_restore(flags);
-}
-
--static inline int test_and_set_bit(int nr, volatile void * addr)
+-static __inline__ int test_and_set_bit(int nr, volatile void * addr)
-{
- int mask, retval;
- volatile unsigned int *a = addr;
@@ -520518,7 +609643,7 @@
- return retval;
-}
-
--static inline int test_and_clear_bit(int nr, volatile void * addr)
+-static __inline__ int test_and_clear_bit(int nr, volatile void * addr)
-{
- int mask, retval;
- volatile unsigned int *a = addr;
@@ -520534,7 +609659,7 @@
- return retval;
-}
-
--static inline int test_and_change_bit(int nr, volatile void * addr)
+-static __inline__ int test_and_change_bit(int nr, volatile void * addr)
-{
- int mask, retval;
- volatile unsigned int *a = addr;
@@ -520549,222 +609674,418 @@
-
- return retval;
-}
-
- #include <asm-generic/bitops/non-atomic.h>
-
-+#ifdef CONFIG_SUPERH32
- static inline unsigned long ffz(unsigned long word)
- {
- unsigned long result;
-@@ -138,6 +60,31 @@ static inline unsigned long __ffs(unsigned long word)
- : "t");
- return result;
- }
-+#else
-+static inline unsigned long ffz(unsigned long word)
-+{
-+ unsigned long result, __d2, __d3;
-+
-+ __asm__("gettr tr0, %2\n\t"
-+ "pta $+32, tr0\n\t"
-+ "andi %1, 1, %3\n\t"
-+ "beq %3, r63, tr0\n\t"
-+ "pta $+4, tr0\n"
-+ "0:\n\t"
-+ "shlri.l %1, 1, %1\n\t"
-+ "addi %0, 1, %0\n\t"
-+ "andi %1, 1, %3\n\t"
-+ "beqi %3, 1, tr0\n"
-+ "1:\n\t"
-+ "ptabs %2, tr0\n\t"
-+ : "=r" (result), "=r" (word), "=r" (__d2), "=r" (__d3)
-+ : "0" (0L), "1" (word));
-+
-+ return result;
-+}
-+
-+#include <asm-generic/bitops/__ffs.h>
-+#endif
-
- #include <asm-generic/bitops/find.h>
- #include <asm-generic/bitops/ffs.h>
-diff --git a/include/asm-sh/bug.h b/include/asm-sh/bug.h
-index a78d482..c017180 100644
---- a/include/asm-sh/bug.h
-+++ b/include/asm-sh/bug.h
-@@ -3,7 +3,7 @@
-
- #define TRAPA_BUG_OPCODE 0xc33e /* trapa #0x3e */
-
+-
+-#include <asm-generic/bitops/non-atomic.h>
+-
+-static __inline__ unsigned long ffz(unsigned long word)
+-{
+- unsigned long result, __d2, __d3;
+-
+- __asm__("gettr tr0, %2\n\t"
+- "pta $+32, tr0\n\t"
+- "andi %1, 1, %3\n\t"
+- "beq %3, r63, tr0\n\t"
+- "pta $+4, tr0\n"
+- "0:\n\t"
+- "shlri.l %1, 1, %1\n\t"
+- "addi %0, 1, %0\n\t"
+- "andi %1, 1, %3\n\t"
+- "beqi %3, 1, tr0\n"
+- "1:\n\t"
+- "ptabs %2, tr0\n\t"
+- : "=r" (result), "=r" (word), "=r" (__d2), "=r" (__d3)
+- : "0" (0L), "1" (word));
+-
+- return result;
+-}
+-
+-#include <asm-generic/bitops/__ffs.h>
+-#include <asm-generic/bitops/find.h>
+-#include <asm-generic/bitops/hweight.h>
+-#include <asm-generic/bitops/lock.h>
+-#include <asm-generic/bitops/sched.h>
+-#include <asm-generic/bitops/ffs.h>
+-#include <asm-generic/bitops/ext2-non-atomic.h>
+-#include <asm-generic/bitops/ext2-atomic.h>
+-#include <asm-generic/bitops/minix.h>
+-#include <asm-generic/bitops/fls.h>
+-#include <asm-generic/bitops/fls64.h>
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_BITOPS_H */
+diff --git a/include/asm-sh64/bug.h b/include/asm-sh64/bug.h
+deleted file mode 100644
+index f3a9c92..0000000
+--- a/include/asm-sh64/bug.h
++++ /dev/null
+@@ -1,19 +0,0 @@
+-#ifndef __ASM_SH64_BUG_H
+-#define __ASM_SH64_BUG_H
+-
-#ifdef CONFIG_BUG
-+#ifdef CONFIG_GENERIC_BUG
- #define HAVE_ARCH_BUG
- #define HAVE_ARCH_WARN_ON
-
-@@ -72,12 +72,7 @@ do { \
- unlikely(__ret_warn_on); \
- })
-
--struct pt_regs;
+-/*
+- * Tell the user there is some problem, then force a segfault (in process
+- * context) or a panic (interrupt context).
+- */
+-#define BUG() do { \
+- printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
+- *(volatile int *)0 = 0; \
+-} while (0)
-
--/* arch/sh/kernel/traps.c */
--void handle_BUG(struct pt_regs *);
+-#define HAVE_ARCH_BUG
+-#endif
-
--#endif /* CONFIG_BUG */
-+#endif /* CONFIG_GENERIC_BUG */
-
- #include <asm-generic/bug.h>
-
-diff --git a/include/asm-sh/bugs.h b/include/asm-sh/bugs.h
-index b66139f..def8128 100644
---- a/include/asm-sh/bugs.h
-+++ b/include/asm-sh/bugs.h
-@@ -25,7 +25,7 @@ static void __init check_bugs(void)
- case CPU_SH7619:
- *p++ = '2';
- break;
-- case CPU_SH7206:
-+ case CPU_SH7203 ... CPU_SH7263:
- *p++ = '2';
- *p++ = 'a';
- break;
-@@ -35,7 +35,7 @@ static void __init check_bugs(void)
- case CPU_SH7750 ... CPU_SH4_501:
- *p++ = '4';
- break;
-- case CPU_SH7770 ... CPU_SHX3:
-+ case CPU_SH7763 ... CPU_SHX3:
- *p++ = '4';
- *p++ = 'a';
- break;
-@@ -48,9 +48,16 @@ static void __init check_bugs(void)
- *p++ = 's';
- *p++ = 'p';
- break;
-- default:
-- *p++ = '?';
-- *p++ = '!';
-+ case CPU_SH5_101 ... CPU_SH5_103:
-+ *p++ = '6';
-+ *p++ = '4';
-+ break;
-+ case CPU_SH_NONE:
-+ /*
-+ * Specifically use CPU_SH_NONE rather than default:,
-+ * so we're able to have the compiler whine about
-+ * unhandled enumerations.
-+ */
- break;
- }
-
-diff --git a/include/asm-sh/byteorder.h b/include/asm-sh/byteorder.h
-index bff2b13..0eb9904 100644
---- a/include/asm-sh/byteorder.h
-+++ b/include/asm-sh/byteorder.h
-@@ -3,40 +3,55 @@
-
- /*
- * Copyright (C) 1999 Niibe Yutaka
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
- */
+-#include <asm-generic/bug.h>
+-
+-#endif /* __ASM_SH64_BUG_H */
+diff --git a/include/asm-sh64/bugs.h b/include/asm-sh64/bugs.h
+deleted file mode 100644
+index 05554aa..0000000
+--- a/include/asm-sh64/bugs.h
++++ /dev/null
+@@ -1,38 +0,0 @@
+-#ifndef __ASM_SH64_BUGS_H
+-#define __ASM_SH64_BUGS_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/bugs.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- * void check_bugs(void);
+- */
+-
+-/*
+- * I don't know of any Super-H bugs yet.
+- */
+-
+-#include <asm/processor.h>
+-
+-static void __init check_bugs(void)
+-{
+- extern char *get_cpu_subtype(void);
+- extern unsigned long loops_per_jiffy;
+-
+- cpu_data->loops_per_jiffy = loops_per_jiffy;
+-
+- printk("CPU: %s\n", get_cpu_subtype());
+-}
+-#endif /* __ASM_SH64_BUGS_H */
+diff --git a/include/asm-sh64/byteorder.h b/include/asm-sh64/byteorder.h
+deleted file mode 100644
+index 7419d78..0000000
+--- a/include/asm-sh64/byteorder.h
++++ /dev/null
+@@ -1,49 +0,0 @@
+-#ifndef __ASM_SH64_BYTEORDER_H
+-#define __ASM_SH64_BYTEORDER_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/byteorder.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
-
-#include <asm/types.h>
- #include <linux/compiler.h>
-+#include <linux/types.h>
-
--static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 x)
-+static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
- {
-- __asm__("swap.b %0, %0\n\t"
-- "swap.w %0, %0\n\t"
-- "swap.b %0, %0"
-+ __asm__(
-+#ifdef CONFIG_SUPERH32
-+ "swap.b %0, %0\n\t"
-+ "swap.w %0, %0\n\t"
-+ "swap.b %0, %0"
-+#else
-+ "byterev %0, %0\n\t"
-+ "shari %0, 32, %0"
-+#endif
- : "=r" (x)
- : "0" (x));
-+
- return x;
- }
-
--static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 x)
-+static inline __attribute_const__ __u16 ___arch__swab16(__u16 x)
- {
-- __asm__("swap.b %0, %0"
-+ __asm__(
-+#ifdef CONFIG_SUPERH32
-+ "swap.b %0, %0"
-+#else
-+ "byterev %0, %0\n\t"
-+ "shari %0, 32, %0"
-+
-+#endif
- : "=r" (x)
- : "0" (x));
-+
- return x;
- }
-
--static inline __u64 ___arch__swab64(__u64 val)
--{
-- union {
-+static inline __u64 ___arch__swab64(__u64 val)
-+{
-+ union {
- struct { __u32 a,b; } s;
- __u64 u;
- } v, w;
- v.u = val;
-- w.s.b = ___arch__swab32(v.s.a);
-- w.s.a = ___arch__swab32(v.s.b);
-- return w.u;
--}
-+ w.s.b = ___arch__swab32(v.s.a);
-+ w.s.a = ___arch__swab32(v.s.b);
-+ return w.u;
-+}
-
- #define __arch__swab64(x) ___arch__swab64(x)
- #define __arch__swab32(x) ___arch__swab32(x)
-diff --git a/include/asm-sh/cache.h b/include/asm-sh/cache.h
-index 01e5cf5..083419f 100644
---- a/include/asm-sh/cache.h
-+++ b/include/asm-sh/cache.h
-@@ -12,11 +12,6 @@
- #include <linux/init.h>
- #include <asm/cpu/cache.h>
-
--#define SH_CACHE_VALID 1
--#define SH_CACHE_UPDATED 2
--#define SH_CACHE_COMBINED 4
--#define SH_CACHE_ASSOC 8
-
- #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
-
- #define __read_mostly __attribute__((__section__(".data.read_mostly")))
-diff --git a/include/asm-sh/checksum.h b/include/asm-sh/checksum.h
-index 4bc8357..67496ab 100644
---- a/include/asm-sh/checksum.h
-+++ b/include/asm-sh/checksum.h
-@@ -1,215 +1,5 @@
--#ifndef __ASM_SH_CHECKSUM_H
--#define __ASM_SH_CHECKSUM_H
+-static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
+-{
+- __asm__("byterev %0, %0\n\t"
+- "shari %0, 32, %0"
+- : "=r" (x)
+- : "0" (x));
+- return x;
+-}
+-
+-static inline __attribute_const__ __u16 ___arch__swab16(__u16 x)
+-{
+- __asm__("byterev %0, %0\n\t"
+- "shari %0, 48, %0"
+- : "=r" (x)
+- : "0" (x));
+- return x;
+-}
+-
+-#define __arch__swab32(x) ___arch__swab32(x)
+-#define __arch__swab16(x) ___arch__swab16(x)
+-
+-#if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
+-# define __BYTEORDER_HAS_U64__
+-# define __SWAB_64_THRU_32__
+-#endif
+-
+-#ifdef __LITTLE_ENDIAN__
+-#include <linux/byteorder/little_endian.h>
+-#else
+-#include <linux/byteorder/big_endian.h>
+-#endif
+-
+-#endif /* __ASM_SH64_BYTEORDER_H */
+diff --git a/include/asm-sh64/cache.h b/include/asm-sh64/cache.h
+deleted file mode 100644
+index a4f36f0..0000000
+--- a/include/asm-sh64/cache.h
++++ /dev/null
+@@ -1,139 +0,0 @@
+-#ifndef __ASM_SH64_CACHE_H
+-#define __ASM_SH64_CACHE_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/cache.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003, 2004 Paul Mundt
+- *
+- */
+-#include <asm/cacheflush.h>
+-
+-#define L1_CACHE_SHIFT 5
+-/* bytes per L1 cache line */
+-#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+-#define L1_CACHE_ALIGN_MASK (~(L1_CACHE_BYTES - 1))
+-#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES - 1)) & L1_CACHE_ALIGN_MASK)
+-#define L1_CACHE_SIZE_BYTES (L1_CACHE_BYTES << 10)
+-
+-#ifdef MODULE
+-#define __cacheline_aligned __attribute__((__aligned__(L1_CACHE_BYTES)))
+-#else
+-#define __cacheline_aligned \
+- __attribute__((__aligned__(L1_CACHE_BYTES), \
+- __section__(".data.cacheline_aligned")))
+-#endif
+-
+-/*
+- * Control Registers.
+- */
+-#define ICCR_BASE 0x01600000 /* Instruction Cache Control Register */
+-#define ICCR_REG0 0 /* Register 0 offset */
+-#define ICCR_REG1 1 /* Register 1 offset */
+-#define ICCR0 ICCR_BASE+ICCR_REG0
+-#define ICCR1 ICCR_BASE+ICCR_REG1
+-
+-#define ICCR0_OFF 0x0 /* Set ICACHE off */
+-#define ICCR0_ON 0x1 /* Set ICACHE on */
+-#define ICCR0_ICI 0x2 /* Invalidate all in IC */
+-
+-#define ICCR1_NOLOCK 0x0 /* Set No Locking */
+-
+-#define OCCR_BASE 0x01E00000 /* Operand Cache Control Register */
+-#define OCCR_REG0 0 /* Register 0 offset */
+-#define OCCR_REG1 1 /* Register 1 offset */
+-#define OCCR0 OCCR_BASE+OCCR_REG0
+-#define OCCR1 OCCR_BASE+OCCR_REG1
+-
+-#define OCCR0_OFF 0x0 /* Set OCACHE off */
+-#define OCCR0_ON 0x1 /* Set OCACHE on */
+-#define OCCR0_OCI 0x2 /* Invalidate all in OC */
+-#define OCCR0_WT 0x4 /* Set OCACHE in WT Mode */
+-#define OCCR0_WB 0x0 /* Set OCACHE in WB Mode */
+-
+-#define OCCR1_NOLOCK 0x0 /* Set No Locking */
+-
+-
+-/*
+- * SH-5
+- * A bit of description here, for neff=32.
+- *
+- * |<--- tag (19 bits) --->|
+- * +-----------------------------+-----------------+------+----------+------+
+- * | | | ways |set index |offset|
+- * +-----------------------------+-----------------+------+----------+------+
+- * ^ 2 bits 8 bits 5 bits
+- * +- Bit 31
+- *
+- * Cacheline size is based on offset: 5 bits = 32 bytes per line
+- * A cache line is identified by a tag + set but OCACHETAG/ICACHETAG
+- * have a broader space for registers. These are outlined by
+- * CACHE_?C_*_STEP below.
+- *
+- */
+-
+-/* Valid and Dirty bits */
+-#define SH_CACHE_VALID (1LL<<0)
+-#define SH_CACHE_UPDATED (1LL<<57)
+-
+-/* Cache flags */
+-#define SH_CACHE_MODE_WT (1LL<<0)
+-#define SH_CACHE_MODE_WB (1LL<<1)
+-
+-#ifndef __ASSEMBLY__
+-
+-/*
+- * Cache information structure.
+- *
+- * Defined for both I and D cache, per-processor.
+- */
+-struct cache_info {
+- unsigned int ways;
+- unsigned int sets;
+- unsigned int linesz;
+-
+- unsigned int way_shift;
+- unsigned int entry_shift;
+- unsigned int set_shift;
+- unsigned int way_step_shift;
+- unsigned int asid_shift;
+-
+- unsigned int way_ofs;
+-
+- unsigned int asid_mask;
+- unsigned int idx_mask;
+- unsigned int epn_mask;
+-
+- unsigned long flags;
+-};
+-
+-#endif /* __ASSEMBLY__ */
+-
+-/* Instruction cache */
+-#define CACHE_IC_ADDRESS_ARRAY 0x01000000
+-
+-/* Operand Cache */
+-#define CACHE_OC_ADDRESS_ARRAY 0x01800000
+-
+-/* These declarations relate to cache 'synonyms' in the operand cache. A
+- 'synonym' occurs where effective address bits overlap between those used for
+- indexing the cache sets and those passed to the MMU for translation. In the
+- case of SH5-101 & SH5-103, only bit 12 is affected for 4k pages. */
+-
+-#define CACHE_OC_N_SYNBITS 1 /* Number of synonym bits */
+-#define CACHE_OC_SYN_SHIFT 12
+-/* Mask to select synonym bit(s) */
+-#define CACHE_OC_SYN_MASK (((1UL<<CACHE_OC_N_SYNBITS)-1)<<CACHE_OC_SYN_SHIFT)
+-
+-
+-/*
+- * Instruction cache can't be invalidated based on physical addresses.
+- * No Instruction Cache defines required, then.
+- */
+-
+-#endif /* __ASM_SH64_CACHE_H */
+diff --git a/include/asm-sh64/cacheflush.h b/include/asm-sh64/cacheflush.h
+deleted file mode 100644
+index 1e53a47..0000000
+--- a/include/asm-sh64/cacheflush.h
++++ /dev/null
+@@ -1,50 +0,0 @@
+-#ifndef __ASM_SH64_CACHEFLUSH_H
+-#define __ASM_SH64_CACHEFLUSH_H
+-
+-#ifndef __ASSEMBLY__
+-
+-#include <asm/page.h>
+-
+-struct vm_area_struct;
+-struct page;
+-struct mm_struct;
+-
+-extern void flush_cache_all(void);
+-extern void flush_cache_mm(struct mm_struct *mm);
+-extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
+-extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+- unsigned long end);
+-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
+-extern void flush_dcache_page(struct page *pg);
+-extern void flush_icache_range(unsigned long start, unsigned long end);
+-extern void flush_icache_user_range(struct vm_area_struct *vma,
+- struct page *page, unsigned long addr,
+- int len);
+-
+-#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+-
+-#define flush_dcache_mmap_lock(mapping) do { } while (0)
+-#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+-
+-#define flush_cache_vmap(start, end) flush_cache_all()
+-#define flush_cache_vunmap(start, end) flush_cache_all()
+-
+-#define flush_icache_page(vma, page) do { } while (0)
+-
+-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+- do { \
+- flush_cache_page(vma, vaddr, page_to_pfn(page));\
+- memcpy(dst, src, len); \
+- flush_icache_user_range(vma, page, vaddr, len); \
+- } while (0)
+-
+-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+- do { \
+- flush_cache_page(vma, vaddr, page_to_pfn(page));\
+- memcpy(dst, src, len); \
+- } while (0)
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#endif /* __ASM_SH64_CACHEFLUSH_H */
-
+diff --git a/include/asm-sh64/cayman.h b/include/asm-sh64/cayman.h
+deleted file mode 100644
+index 7b6b968..0000000
+--- a/include/asm-sh64/cayman.h
++++ /dev/null
+@@ -1,20 +0,0 @@
-/*
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
-- * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka
+- * include/asm-sh64/cayman.h
+- *
+- * Cayman definitions
+- *
+- * Global defintions for the SH5 Cayman board
+- *
+- * Copyright (C) 2002 Stuart Menefy
- */
-
--#include <linux/in6.h>
+-
+-/* Setup for the SMSC FDC37C935 / LAN91C100FD */
+-#define SMSC_IRQ IRQ_IRL1
+-
+-/* Setup for PCI Bus 2, which transmits interrupts via the EPLD */
+-#define PCI2_IRQ IRQ_IRL3
+diff --git a/include/asm-sh64/checksum.h b/include/asm-sh64/checksum.h
+deleted file mode 100644
+index ba594cc..0000000
+--- a/include/asm-sh64/checksum.h
++++ /dev/null
+@@ -1,82 +0,0 @@
+-#ifndef __ASM_SH64_CHECKSUM_H
+-#define __ASM_SH64_CHECKSUM_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/checksum.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-#include <asm/registers.h>
-
-/*
- * computes the checksum of a memory block at buff, length len,
@@ -520781,4403 +610102,2763 @@
-asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
-
-/*
-- * the same as csum_partial, but copies from src while it
-- * checksums, and handles user-space pointer exceptions correctly, when needed.
-- *
-- * here even more important to align src and dst on a 32-bit (or even
-- * better 64-bit) boundary
-- */
--
--asmlinkage __wsum csum_partial_copy_generic(const void *src, void *dst,
-- int len, __wsum sum,
-- int *src_err_ptr, int *dst_err_ptr);
--
--/*
- * Note: when you get a NULL pointer exception here this means someone
- * passed in an incorrect kernel address to one of these functions.
- *
- * If you use these functions directly please don't forget the
- * access_ok().
- */
--static inline
--__wsum csum_partial_copy_nocheck(const void *src, void *dst,
-- int len, __wsum sum)
+-
+-
+-__wsum csum_partial_copy_nocheck(const void *src, void *dst, int len,
+- __wsum sum);
+-
+-__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
+- int len, __wsum sum, int *err_ptr);
+-
+-static inline __sum16 csum_fold(__wsum csum)
-{
-- return csum_partial_copy_generic(src, dst, len, sum, NULL, NULL);
+- u32 sum = (__force u32)csum;
+- sum = (sum & 0xffff) + (sum >> 16);
+- sum = (sum & 0xffff) + (sum >> 16);
+- return (__force __sum16)~sum;
-}
-
--static inline
--__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
-- int len, __wsum sum, int *err_ptr)
+-__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
+-
+-__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+- unsigned short len, unsigned short proto,
+- __wsum sum);
+-
+-/*
+- * computes the checksum of the TCP/UDP pseudo-header
+- * returns a 16-bit checksum, already complemented
+- */
+-static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
+- unsigned short len,
+- unsigned short proto,
+- __wsum sum)
-{
-- return csum_partial_copy_generic((__force const void *)src, dst,
-- len, sum, err_ptr, NULL);
+- return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
-}
-
-/*
-- * Fold a partial checksum
+- * this routine is used for miscellaneous IP-like checksums, mainly
+- * in icmp.c
- */
+-static inline __sum16 ip_compute_csum(const void *buff, int len)
+-{
+- return csum_fold(csum_partial(buff, len, 0));
+-}
-
--static inline __sum16 csum_fold(__wsum sum)
+-#endif /* __ASM_SH64_CHECKSUM_H */
+-
+diff --git a/include/asm-sh64/cpumask.h b/include/asm-sh64/cpumask.h
+deleted file mode 100644
+index b7b105d..0000000
+--- a/include/asm-sh64/cpumask.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_CPUMASK_H
+-#define __ASM_SH64_CPUMASK_H
+-
+-#include <asm-generic/cpumask.h>
+-
+-#endif /* __ASM_SH64_CPUMASK_H */
+diff --git a/include/asm-sh64/cputime.h b/include/asm-sh64/cputime.h
+deleted file mode 100644
+index 0fd89da..0000000
+--- a/include/asm-sh64/cputime.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __SH64_CPUTIME_H
+-#define __SH64_CPUTIME_H
+-
+-#include <asm-generic/cputime.h>
+-
+-#endif /* __SH64_CPUTIME_H */
+diff --git a/include/asm-sh64/current.h b/include/asm-sh64/current.h
+deleted file mode 100644
+index 2612243..0000000
+--- a/include/asm-sh64/current.h
++++ /dev/null
+@@ -1,28 +0,0 @@
+-#ifndef __ASM_SH64_CURRENT_H
+-#define __ASM_SH64_CURRENT_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/current.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
+-
+-#include <linux/thread_info.h>
+-
+-struct task_struct;
+-
+-static __inline__ struct task_struct * get_current(void)
-{
-- unsigned int __dummy;
-- __asm__("swap.w %0, %1\n\t"
-- "extu.w %0, %0\n\t"
-- "extu.w %1, %1\n\t"
-- "add %1, %0\n\t"
-- "swap.w %0, %1\n\t"
-- "add %1, %0\n\t"
-- "not %0, %0\n\t"
-- : "=r" (sum), "=&r" (__dummy)
-- : "0" (sum)
-- : "t");
-- return (__force __sum16)sum;
+- return current_thread_info()->task;
-}
-
+-#define current get_current()
+-
+-#endif /* __ASM_SH64_CURRENT_H */
+-
+diff --git a/include/asm-sh64/delay.h b/include/asm-sh64/delay.h
+deleted file mode 100644
+index 6ae3130..0000000
+--- a/include/asm-sh64/delay.h
++++ /dev/null
+@@ -1,11 +0,0 @@
+-#ifndef __ASM_SH64_DELAY_H
+-#define __ASM_SH64_DELAY_H
+-
+-extern void __delay(int loops);
+-extern void __udelay(unsigned long long usecs, unsigned long lpj);
+-extern void __ndelay(unsigned long long nsecs, unsigned long lpj);
+-extern void udelay(unsigned long usecs);
+-extern void ndelay(unsigned long nsecs);
+-
+-#endif /* __ASM_SH64_DELAY_H */
+-
+diff --git a/include/asm-sh64/device.h b/include/asm-sh64/device.h
+deleted file mode 100644
+index d8f9872..0000000
+--- a/include/asm-sh64/device.h
++++ /dev/null
+@@ -1,7 +0,0 @@
-/*
-- * This is a version of ip_compute_csum() optimized for IP headers,
-- * which always checksum on 4 octet boundaries.
+- * Arch specific extensions to struct device
- *
-- * i386 version by Jorge Cwik <jorge at laser.satlink.net>, adapted
-- * for linux by * Arnt Gulbrandsen.
+- * This file is released under the GPLv2
- */
--static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
+-#include <asm-generic/device.h>
+-
+diff --git a/include/asm-sh64/div64.h b/include/asm-sh64/div64.h
+deleted file mode 100644
+index f758695..0000000
+--- a/include/asm-sh64/div64.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_DIV64_H
+-#define __ASM_SH64_DIV64_H
+-
+-#include <asm-generic/div64.h>
+-
+-#endif /* __ASM_SH64_DIV64_H */
+diff --git a/include/asm-sh64/dma-mapping.h b/include/asm-sh64/dma-mapping.h
+deleted file mode 100644
+index 18f8dd6..0000000
+--- a/include/asm-sh64/dma-mapping.h
++++ /dev/null
+@@ -1,194 +0,0 @@
+-#ifndef __ASM_SH_DMA_MAPPING_H
+-#define __ASM_SH_DMA_MAPPING_H
+-
+-#include <linux/mm.h>
+-#include <linux/scatterlist.h>
+-#include <asm/io.h>
+-
+-struct pci_dev;
+-extern void *consistent_alloc(struct pci_dev *hwdev, size_t size,
+- dma_addr_t *dma_handle);
+-extern void consistent_free(struct pci_dev *hwdev, size_t size,
+- void *vaddr, dma_addr_t dma_handle);
+-
+-#define dma_supported(dev, mask) (1)
+-
+-static inline int dma_set_mask(struct device *dev, u64 mask)
-{
-- unsigned int sum, __dummy0, __dummy1;
+- if (!dev->dma_mask || !dma_supported(dev, mask))
+- return -EIO;
-
-- __asm__ __volatile__(
-- "mov.l @%1+, %0\n\t"
-- "mov.l @%1+, %3\n\t"
-- "add #-2, %2\n\t"
-- "clrt\n\t"
-- "1:\t"
-- "addc %3, %0\n\t"
-- "movt %4\n\t"
-- "mov.l @%1+, %3\n\t"
-- "dt %2\n\t"
-- "bf/s 1b\n\t"
-- " cmp/eq #1, %4\n\t"
-- "addc %3, %0\n\t"
-- "addc %2, %0" /* Here %2 is 0, add carry-bit */
-- /* Since the input registers which are loaded with iph and ihl
-- are modified, we must also specify them as outputs, or gcc
-- will assume they contain their original values. */
-- : "=r" (sum), "=r" (iph), "=r" (ihl), "=&r" (__dummy0), "=&z" (__dummy1)
-- : "1" (iph), "2" (ihl)
-- : "t");
+- *dev->dma_mask = mask;
-
-- return csum_fold(sum);
+- return 0;
-}
-
--static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
-- unsigned short len,
-- unsigned short proto,
-- __wsum sum)
+-static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+- dma_addr_t *dma_handle, gfp_t flag)
-{
--#ifdef __LITTLE_ENDIAN__
-- unsigned long len_proto = (proto + len) << 8;
-+#ifdef CONFIG_SUPERH32
-+# include "checksum_32.h"
- #else
-- unsigned long len_proto = proto + len;
-+# include "checksum_64.h"
- #endif
-- __asm__("clrt\n\t"
-- "addc %0, %1\n\t"
-- "addc %2, %1\n\t"
-- "addc %3, %1\n\t"
-- "movt %0\n\t"
-- "add %1, %0"
-- : "=r" (sum), "=r" (len_proto)
-- : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum)
-- : "t");
+- return consistent_alloc(NULL, size, dma_handle);
+-}
-
-- return sum;
+-static inline void dma_free_coherent(struct device *dev, size_t size,
+- void *vaddr, dma_addr_t dma_handle)
+-{
+- consistent_free(NULL, size, vaddr, dma_handle);
-}
-
--/*
-- * computes the checksum of the TCP/UDP pseudo-header
-- * returns a 16-bit checksum, already complemented
-- */
--static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-- unsigned short len,
-- unsigned short proto,
-- __wsum sum)
+-#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
+-#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
+-#define dma_is_consistent(d, h) (1)
+-
+-static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+- enum dma_data_direction dir)
-{
-- return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
+- unsigned long start = (unsigned long) vaddr;
+- unsigned long s = start & L1_CACHE_ALIGN_MASK;
+- unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+-
+- for (; s <= e; s += L1_CACHE_BYTES)
+- asm volatile ("ocbp %0, 0" : : "r" (s));
-}
-
--/*
-- * this routine is used for miscellaneous IP-like checksums, mainly
-- * in icmp.c
-- */
--static inline __sum16 ip_compute_csum(const void *buff, int len)
+-static inline dma_addr_t dma_map_single(struct device *dev,
+- void *ptr, size_t size,
+- enum dma_data_direction dir)
-{
-- return csum_fold(csum_partial(buff, len, 0));
+-#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+- if (dev->bus == &pci_bus_type)
+- return virt_to_phys(ptr);
+-#endif
+- dma_cache_sync(dev, ptr, size, dir);
+-
+- return virt_to_phys(ptr);
-}
-
--#define _HAVE_ARCH_IPV6_CSUM
--static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
-- const struct in6_addr *daddr,
-- __u32 len, unsigned short proto,
-- __wsum sum)
+-#define dma_unmap_single(dev, addr, size, dir) do { } while (0)
+-
+-static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
+- int nents, enum dma_data_direction dir)
-{
-- unsigned int __dummy;
-- __asm__("clrt\n\t"
-- "mov.l @(0,%2), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(4,%2), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(8,%2), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(12,%2), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(0,%3), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(4,%3), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(8,%3), %1\n\t"
-- "addc %1, %0\n\t"
-- "mov.l @(12,%3), %1\n\t"
-- "addc %1, %0\n\t"
-- "addc %4, %0\n\t"
-- "addc %5, %0\n\t"
-- "movt %1\n\t"
-- "add %1, %0\n"
-- : "=r" (sum), "=&r" (__dummy)
-- : "r" (saddr), "r" (daddr),
-- "r" (htonl(len)), "r" (htonl(proto)), "0" (sum)
-- : "t");
+- int i;
-
-- return csum_fold(sum);
+- for (i = 0; i < nents; i++) {
+-#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+- dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
+-#endif
+- sg[i].dma_address = sg_phys(&sg[i]);
+- }
+-
+- return nents;
-}
-
--/*
-- * Copy and checksum to user
-- */
--#define HAVE_CSUM_COPY_USER
--static inline __wsum csum_and_copy_to_user(const void *src,
-- void __user *dst,
-- int len, __wsum sum,
-- int *err_ptr)
+-#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
+-
+-static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
+- unsigned long offset, size_t size,
+- enum dma_data_direction dir)
-{
-- if (access_ok(VERIFY_WRITE, dst, len))
-- return csum_partial_copy_generic((__force const void *)src,
-- dst, len, sum, NULL, err_ptr);
+- return dma_map_single(dev, page_address(page) + offset, size, dir);
+-}
-
-- if (len)
-- *err_ptr = -EFAULT;
+-static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
+- size_t size, enum dma_data_direction dir)
+-{
+- dma_unmap_single(dev, dma_address, size, dir);
+-}
-
-- return (__force __wsum)-1; /* invalid checksum */
+-static inline void dma_sync_single(struct device *dev, dma_addr_t dma_handle,
+- size_t size, enum dma_data_direction dir)
+-{
+-#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+- if (dev->bus == &pci_bus_type)
+- return;
+-#endif
+- dma_cache_sync(dev, phys_to_virt(dma_handle), size, dir);
-}
--#endif /* __ASM_SH_CHECKSUM_H */
-diff --git a/include/asm-sh/checksum_32.h b/include/asm-sh/checksum_32.h
-new file mode 100644
-index 0000000..4bc8357
---- /dev/null
-+++ b/include/asm-sh/checksum_32.h
-@@ -0,0 +1,215 @@
-+#ifndef __ASM_SH_CHECKSUM_H
-+#define __ASM_SH_CHECKSUM_H
-+
-+/*
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ *
-+ * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka
-+ */
-+
-+#include <linux/in6.h>
-+
-+/*
-+ * computes the checksum of a memory block at buff, length len,
-+ * and adds in "sum" (32-bit)
-+ *
-+ * returns a 32-bit number suitable for feeding into itself
-+ * or csum_tcpudp_magic
-+ *
-+ * this function must be called with even lengths, except
-+ * for the last fragment, which may be odd
-+ *
-+ * it's best to have buff aligned on a 32-bit boundary
-+ */
-+asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
-+
-+/*
-+ * the same as csum_partial, but copies from src while it
-+ * checksums, and handles user-space pointer exceptions correctly, when needed.
-+ *
-+ * here even more important to align src and dst on a 32-bit (or even
-+ * better 64-bit) boundary
-+ */
-+
-+asmlinkage __wsum csum_partial_copy_generic(const void *src, void *dst,
-+ int len, __wsum sum,
-+ int *src_err_ptr, int *dst_err_ptr);
-+
-+/*
-+ * Note: when you get a NULL pointer exception here this means someone
-+ * passed in an incorrect kernel address to one of these functions.
-+ *
-+ * If you use these functions directly please don't forget the
-+ * access_ok().
-+ */
-+static inline
-+__wsum csum_partial_copy_nocheck(const void *src, void *dst,
-+ int len, __wsum sum)
-+{
-+ return csum_partial_copy_generic(src, dst, len, sum, NULL, NULL);
-+}
-+
-+static inline
-+__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
-+ int len, __wsum sum, int *err_ptr)
-+{
-+ return csum_partial_copy_generic((__force const void *)src, dst,
-+ len, sum, err_ptr, NULL);
-+}
-+
-+/*
-+ * Fold a partial checksum
-+ */
-+
-+static inline __sum16 csum_fold(__wsum sum)
-+{
-+ unsigned int __dummy;
-+ __asm__("swap.w %0, %1\n\t"
-+ "extu.w %0, %0\n\t"
-+ "extu.w %1, %1\n\t"
-+ "add %1, %0\n\t"
-+ "swap.w %0, %1\n\t"
-+ "add %1, %0\n\t"
-+ "not %0, %0\n\t"
-+ : "=r" (sum), "=&r" (__dummy)
-+ : "0" (sum)
-+ : "t");
-+ return (__force __sum16)sum;
-+}
-+
-+/*
-+ * This is a version of ip_compute_csum() optimized for IP headers,
-+ * which always checksum on 4 octet boundaries.
-+ *
-+ * i386 version by Jorge Cwik <jorge at laser.satlink.net>, adapted
-+ * for linux by * Arnt Gulbrandsen.
-+ */
-+static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
-+{
-+ unsigned int sum, __dummy0, __dummy1;
-+
-+ __asm__ __volatile__(
-+ "mov.l @%1+, %0\n\t"
-+ "mov.l @%1+, %3\n\t"
-+ "add #-2, %2\n\t"
-+ "clrt\n\t"
-+ "1:\t"
-+ "addc %3, %0\n\t"
-+ "movt %4\n\t"
-+ "mov.l @%1+, %3\n\t"
-+ "dt %2\n\t"
-+ "bf/s 1b\n\t"
-+ " cmp/eq #1, %4\n\t"
-+ "addc %3, %0\n\t"
-+ "addc %2, %0" /* Here %2 is 0, add carry-bit */
-+ /* Since the input registers which are loaded with iph and ihl
-+ are modified, we must also specify them as outputs, or gcc
-+ will assume they contain their original values. */
-+ : "=r" (sum), "=r" (iph), "=r" (ihl), "=&r" (__dummy0), "=&z" (__dummy1)
-+ : "1" (iph), "2" (ihl)
-+ : "t");
-+
-+ return csum_fold(sum);
-+}
-+
-+static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
-+ unsigned short len,
-+ unsigned short proto,
-+ __wsum sum)
-+{
-+#ifdef __LITTLE_ENDIAN__
-+ unsigned long len_proto = (proto + len) << 8;
-+#else
-+ unsigned long len_proto = proto + len;
-+#endif
-+ __asm__("clrt\n\t"
-+ "addc %0, %1\n\t"
-+ "addc %2, %1\n\t"
-+ "addc %3, %1\n\t"
-+ "movt %0\n\t"
-+ "add %1, %0"
-+ : "=r" (sum), "=r" (len_proto)
-+ : "r" (daddr), "r" (saddr), "1" (len_proto), "0" (sum)
-+ : "t");
-+
-+ return sum;
-+}
-+
-+/*
-+ * computes the checksum of the TCP/UDP pseudo-header
-+ * returns a 16-bit checksum, already complemented
-+ */
-+static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-+ unsigned short len,
-+ unsigned short proto,
-+ __wsum sum)
-+{
-+ return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
-+}
-+
-+/*
-+ * this routine is used for miscellaneous IP-like checksums, mainly
-+ * in icmp.c
-+ */
-+static inline __sum16 ip_compute_csum(const void *buff, int len)
-+{
-+ return csum_fold(csum_partial(buff, len, 0));
-+}
-+
-+#define _HAVE_ARCH_IPV6_CSUM
-+static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
-+ const struct in6_addr *daddr,
-+ __u32 len, unsigned short proto,
-+ __wsum sum)
-+{
-+ unsigned int __dummy;
-+ __asm__("clrt\n\t"
-+ "mov.l @(0,%2), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(4,%2), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(8,%2), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(12,%2), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(0,%3), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(4,%3), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(8,%3), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "mov.l @(12,%3), %1\n\t"
-+ "addc %1, %0\n\t"
-+ "addc %4, %0\n\t"
-+ "addc %5, %0\n\t"
-+ "movt %1\n\t"
-+ "add %1, %0\n"
-+ : "=r" (sum), "=&r" (__dummy)
-+ : "r" (saddr), "r" (daddr),
-+ "r" (htonl(len)), "r" (htonl(proto)), "0" (sum)
-+ : "t");
-+
-+ return csum_fold(sum);
-+}
-+
-+/*
-+ * Copy and checksum to user
-+ */
-+#define HAVE_CSUM_COPY_USER
-+static inline __wsum csum_and_copy_to_user(const void *src,
-+ void __user *dst,
-+ int len, __wsum sum,
-+ int *err_ptr)
-+{
-+ if (access_ok(VERIFY_WRITE, dst, len))
-+ return csum_partial_copy_generic((__force const void *)src,
-+ dst, len, sum, NULL, err_ptr);
-+
-+ if (len)
-+ *err_ptr = -EFAULT;
-+
-+ return (__force __wsum)-1; /* invalid checksum */
-+}
-+#endif /* __ASM_SH_CHECKSUM_H */
-diff --git a/include/asm-sh/checksum_64.h b/include/asm-sh/checksum_64.h
-new file mode 100644
-index 0000000..9c62a03
---- /dev/null
-+++ b/include/asm-sh/checksum_64.h
-@@ -0,0 +1,78 @@
-+#ifndef __ASM_SH_CHECKSUM_64_H
-+#define __ASM_SH_CHECKSUM_64_H
-+
-+/*
-+ * include/asm-sh/checksum_64.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+
-+/*
-+ * computes the checksum of a memory block at buff, length len,
-+ * and adds in "sum" (32-bit)
-+ *
-+ * returns a 32-bit number suitable for feeding into itself
-+ * or csum_tcpudp_magic
-+ *
-+ * this function must be called with even lengths, except
-+ * for the last fragment, which may be odd
-+ *
-+ * it's best to have buff aligned on a 32-bit boundary
-+ */
-+asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
-+
-+/*
-+ * Note: when you get a NULL pointer exception here this means someone
-+ * passed in an incorrect kernel address to one of these functions.
-+ *
-+ * If you use these functions directly please don't forget the
-+ * access_ok().
-+ */
-+
-+
-+__wsum csum_partial_copy_nocheck(const void *src, void *dst, int len,
-+ __wsum sum);
-+
-+__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
-+ int len, __wsum sum, int *err_ptr);
-+
-+static inline __sum16 csum_fold(__wsum csum)
-+{
-+ u32 sum = (__force u32)csum;
-+ sum = (sum & 0xffff) + (sum >> 16);
-+ sum = (sum & 0xffff) + (sum >> 16);
-+ return (__force __sum16)~sum;
-+}
-+
-+__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
-+
-+__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
-+ unsigned short len, unsigned short proto,
-+ __wsum sum);
-+
-+/*
-+ * computes the checksum of the TCP/UDP pseudo-header
-+ * returns a 16-bit checksum, already complemented
-+ */
-+static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-+ unsigned short len,
-+ unsigned short proto,
-+ __wsum sum)
-+{
-+ return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
-+}
-+
-+/*
-+ * this routine is used for miscellaneous IP-like checksums, mainly
-+ * in icmp.c
-+ */
-+static inline __sum16 ip_compute_csum(const void *buff, int len)
-+{
-+ return csum_fold(csum_partial(buff, len, 0));
-+}
-+
-+#endif /* __ASM_SH_CHECKSUM_64_H */
-diff --git a/include/asm-sh/cmpxchg-grb.h b/include/asm-sh/cmpxchg-grb.h
-new file mode 100644
-index 0000000..e2681ab
---- /dev/null
-+++ b/include/asm-sh/cmpxchg-grb.h
-@@ -0,0 +1,70 @@
-+#ifndef __ASM_SH_CMPXCHG_GRB_H
-+#define __ASM_SH_CMPXCHG_GRB_H
-+
-+static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
-+{
-+ unsigned long retval;
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " nop \n\t"
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-4, r15 \n\t" /* LOGIN */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " mov.l %2, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (retval),
-+ "+r" (m)
-+ : "r" (val)
-+ : "memory", "r0", "r1");
-+
-+ return retval;
-+}
-+
-+static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
-+{
-+ unsigned long retval;
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-6, r15 \n\t" /* LOGIN */
-+ " mov.b @%1, %0 \n\t" /* load old value */
-+ " extu.b %0, %0 \n\t" /* extend as unsigned */
-+ " mov.b %2, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (retval),
-+ "+r" (m)
-+ : "r" (val)
-+ : "memory" , "r0", "r1");
-+
-+ return retval;
-+}
-+
-+static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
-+ unsigned long new)
-+{
-+ unsigned long retval;
-+
-+ __asm__ __volatile__ (
-+ " .align 2 \n\t"
-+ " mova 1f, r0 \n\t" /* r0 = end point */
-+ " nop \n\t"
-+ " mov r15, r1 \n\t" /* r1 = saved sp */
-+ " mov #-8, r15 \n\t" /* LOGIN */
-+ " mov.l @%1, %0 \n\t" /* load old value */
-+ " cmp/eq %0, %2 \n\t"
-+ " bf 1f \n\t" /* if not equal */
-+ " mov.l %2, @%1 \n\t" /* store new value */
-+ "1: mov r1, r15 \n\t" /* LOGOUT */
-+ : "=&r" (retval),
-+ "+r" (m)
-+ : "r" (new)
-+ : "memory" , "r0", "r1", "t");
-+
-+ return retval;
-+}
-+
-+#endif /* __ASM_SH_CMPXCHG_GRB_H */
-diff --git a/include/asm-sh/cmpxchg-irq.h b/include/asm-sh/cmpxchg-irq.h
-new file mode 100644
-index 0000000..43049ec
---- /dev/null
-+++ b/include/asm-sh/cmpxchg-irq.h
-@@ -0,0 +1,40 @@
-+#ifndef __ASM_SH_CMPXCHG_IRQ_H
-+#define __ASM_SH_CMPXCHG_IRQ_H
-+
-+static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
-+{
-+ unsigned long flags, retval;
-+
-+ local_irq_save(flags);
-+ retval = *m;
-+ *m = val;
-+ local_irq_restore(flags);
-+ return retval;
-+}
-+
-+static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
-+{
-+ unsigned long flags, retval;
-+
-+ local_irq_save(flags);
-+ retval = *m;
-+ *m = val & 0xff;
-+ local_irq_restore(flags);
-+ return retval;
-+}
-+
-+static inline unsigned long __cmpxchg_u32(volatile int *m, unsigned long old,
-+ unsigned long new)
-+{
-+ __u32 retval;
-+ unsigned long flags;
-+
-+ local_irq_save(flags);
-+ retval = *m;
-+ if (retval == old)
-+ *m = new;
-+ local_irq_restore(flags); /* implies memory barrier */
-+ return retval;
-+}
-+
-+#endif /* __ASM_SH_CMPXCHG_IRQ_H */
-diff --git a/include/asm-sh/cpu-sh2/addrspace.h b/include/asm-sh/cpu-sh2/addrspace.h
-index 8706c90..2b9ab93 100644
---- a/include/asm-sh/cpu-sh2/addrspace.h
-+++ b/include/asm-sh/cpu-sh2/addrspace.h
-@@ -10,7 +10,10 @@
- #ifndef __ASM_CPU_SH2_ADDRSPACE_H
- #define __ASM_CPU_SH2_ADDRSPACE_H
-
--/* Should fill here */
-+#define P0SEG 0x00000000
-+#define P1SEG 0x80000000
-+#define P2SEG 0xa0000000
-+#define P3SEG 0xc0000000
-+#define P4SEG 0xe0000000
-
- #endif /* __ASM_CPU_SH2_ADDRSPACE_H */
-
-diff --git a/include/asm-sh/cpu-sh2/cache.h b/include/asm-sh/cpu-sh2/cache.h
-index f02ba7a..4e0b165 100644
---- a/include/asm-sh/cpu-sh2/cache.h
-+++ b/include/asm-sh/cpu-sh2/cache.h
-@@ -12,9 +12,13 @@
-
- #define L1_CACHE_SHIFT 4
-
-+#define SH_CACHE_VALID 1
-+#define SH_CACHE_UPDATED 2
-+#define SH_CACHE_COMBINED 4
-+#define SH_CACHE_ASSOC 8
-+
- #if defined(CONFIG_CPU_SUBTYPE_SH7619)
--#define CCR1 0xffffffec
--#define CCR CCR1
-+#define CCR 0xffffffec
-
- #define CCR_CACHE_CE 0x01 /* Cache enable */
- #define CCR_CACHE_WT 0x06 /* CCR[bit1=1,bit2=1] */
-diff --git a/include/asm-sh/cpu-sh2/rtc.h b/include/asm-sh/cpu-sh2/rtc.h
-new file mode 100644
-index 0000000..39e2d6e
---- /dev/null
-+++ b/include/asm-sh/cpu-sh2/rtc.h
-@@ -0,0 +1,8 @@
-+#ifndef __ASM_SH_CPU_SH2_RTC_H
-+#define __ASM_SH_CPU_SH2_RTC_H
-+
-+#define rtc_reg_size sizeof(u16)
-+#define RTC_BIT_INVERTED 0
-+#define RTC_DEF_CAPABILITIES 0UL
-+
-+#endif /* __ASM_SH_CPU_SH2_RTC_H */
-diff --git a/include/asm-sh/cpu-sh2a/addrspace.h b/include/asm-sh/cpu-sh2a/addrspace.h
-index 3d2e9aa..795ddd6 100644
---- a/include/asm-sh/cpu-sh2a/addrspace.h
-+++ b/include/asm-sh/cpu-sh2a/addrspace.h
-@@ -1 +1,10 @@
--#include <asm/cpu-sh2/addrspace.h>
-+#ifndef __ASM_SH_CPU_SH2A_ADDRSPACE_H
-+#define __ASM_SH_CPU_SH2A_ADDRSPACE_H
-+
-+#define P0SEG 0x00000000
-+#define P1SEG 0x00000000
-+#define P2SEG 0x20000000
-+#define P3SEG 0x00000000
-+#define P4SEG 0x80000000
-+
-+#endif /* __ASM_SH_CPU_SH2A_ADDRSPACE_H */
-diff --git a/include/asm-sh/cpu-sh2a/cache.h b/include/asm-sh/cpu-sh2a/cache.h
-index 3e4b9e4..afe228b 100644
---- a/include/asm-sh/cpu-sh2a/cache.h
-+++ b/include/asm-sh/cpu-sh2a/cache.h
-@@ -12,11 +12,13 @@
-
- #define L1_CACHE_SHIFT 4
-
--#define CCR1 0xfffc1000
--#define CCR2 0xfffc1004
-+#define SH_CACHE_VALID 1
-+#define SH_CACHE_UPDATED 2
-+#define SH_CACHE_COMBINED 4
-+#define SH_CACHE_ASSOC 8
-
--/* CCR1 behaves more like the traditional CCR */
--#define CCR CCR1
-+#define CCR 0xfffc1000 /* CCR1 */
-+#define CCR2 0xfffc1004
-
- /*
- * Most of the SH-2A CCR1 definitions resemble the SH-4 ones. All others not
-@@ -36,4 +38,3 @@
- #define CCR_CACHE_INVALIDATE (CCR_CACHE_OCI | CCR_CACHE_ICI)
-
- #endif /* __ASM_CPU_SH2A_CACHE_H */
+-static inline void dma_sync_single_range(struct device *dev,
+- dma_addr_t dma_handle,
+- unsigned long offset, size_t size,
+- enum dma_data_direction dir)
+-{
+-#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+- if (dev->bus == &pci_bus_type)
+- return;
+-#endif
+- dma_cache_sync(dev, phys_to_virt(dma_handle) + offset, size, dir);
+-}
-
-diff --git a/include/asm-sh/cpu-sh2a/freq.h b/include/asm-sh/cpu-sh2a/freq.h
-index e518fff..830fd43 100644
---- a/include/asm-sh/cpu-sh2a/freq.h
-+++ b/include/asm-sh/cpu-sh2a/freq.h
-@@ -10,9 +10,7 @@
- #ifndef __ASM_CPU_SH2A_FREQ_H
- #define __ASM_CPU_SH2A_FREQ_H
-
--#if defined(CONFIG_CPU_SUBTYPE_SH7206)
- #define FREQCR 0xfffe0010
+-static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg,
+- int nelems, enum dma_data_direction dir)
+-{
+- int i;
+-
+- for (i = 0; i < nelems; i++) {
+-#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+- dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
-#endif
-
- #endif /* __ASM_CPU_SH2A_FREQ_H */
-
-diff --git a/include/asm-sh/cpu-sh2a/rtc.h b/include/asm-sh/cpu-sh2a/rtc.h
-new file mode 100644
-index 0000000..afb511e
---- /dev/null
-+++ b/include/asm-sh/cpu-sh2a/rtc.h
-@@ -0,0 +1,8 @@
-+#ifndef __ASM_SH_CPU_SH2A_RTC_H
-+#define __ASM_SH_CPU_SH2A_RTC_H
-+
-+#define rtc_reg_size sizeof(u16)
-+#define RTC_BIT_INVERTED 0
-+#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
-+
-+#endif /* __ASM_SH_CPU_SH2A_RTC_H */
-diff --git a/include/asm-sh/cpu-sh3/addrspace.h b/include/asm-sh/cpu-sh3/addrspace.h
-index 872e9e1..0f94726 100644
---- a/include/asm-sh/cpu-sh3/addrspace.h
-+++ b/include/asm-sh/cpu-sh3/addrspace.h
-@@ -10,7 +10,10 @@
- #ifndef __ASM_CPU_SH3_ADDRSPACE_H
- #define __ASM_CPU_SH3_ADDRSPACE_H
-
--/* Should fill here */
-+#define P0SEG 0x00000000
-+#define P1SEG 0x80000000
-+#define P2SEG 0xa0000000
-+#define P3SEG 0xc0000000
-+#define P4SEG 0xe0000000
-
- #endif /* __ASM_CPU_SH3_ADDRSPACE_H */
+- sg[i].dma_address = sg_phys(&sg[i]);
+- }
+-}
-
-diff --git a/include/asm-sh/cpu-sh3/cache.h b/include/asm-sh/cpu-sh3/cache.h
-index 255016f..56bd838 100644
---- a/include/asm-sh/cpu-sh3/cache.h
-+++ b/include/asm-sh/cpu-sh3/cache.h
-@@ -12,6 +12,11 @@
-
- #define L1_CACHE_SHIFT 4
-
-+#define SH_CACHE_VALID 1
-+#define SH_CACHE_UPDATED 2
-+#define SH_CACHE_COMBINED 4
-+#define SH_CACHE_ASSOC 8
-+
- #define CCR 0xffffffec /* Address of Cache Control Register */
-
- #define CCR_CACHE_CE 0x01 /* Cache Enable */
-@@ -28,7 +33,8 @@
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
- defined(CONFIG_CPU_SUBTYPE_SH7710) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define CCR3 0xa40000b4
- #define CCR_CACHE_16KB 0x00010000
- #define CCR_CACHE_32KB 0x00020000
-diff --git a/include/asm-sh/cpu-sh3/dma.h b/include/asm-sh/cpu-sh3/dma.h
-index 54bfece..092ff9d 100644
---- a/include/asm-sh/cpu-sh3/dma.h
-+++ b/include/asm-sh/cpu-sh3/dma.h
-@@ -2,7 +2,9 @@
- #define __ASM_CPU_SH3_DMA_H
-
-
--#if defined(CONFIG_CPU_SUBTYPE_SH7720) || defined(CONFIG_CPU_SUBTYPE_SH7709)
-+#if defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7709)
- #define SH_DMAC_BASE 0xa4010020
-
- #define DMTE0_IRQ 48
-diff --git a/include/asm-sh/cpu-sh3/freq.h b/include/asm-sh/cpu-sh3/freq.h
-index 0a054b5..53c6230 100644
---- a/include/asm-sh/cpu-sh3/freq.h
-+++ b/include/asm-sh/cpu-sh3/freq.h
-@@ -10,7 +10,12 @@
- #ifndef __ASM_CPU_SH3_FREQ_H
- #define __ASM_CPU_SH3_FREQ_H
-
-+#ifdef CONFIG_CPU_SUBTYPE_SH7712
-+#define FRQCR 0xA415FF80
-+#else
- #define FRQCR 0xffffff80
-+#endif
-+
- #define MIN_DIVISOR_NR 0
- #define MAX_DIVISOR_NR 4
-
-diff --git a/include/asm-sh/cpu-sh3/gpio.h b/include/asm-sh/cpu-sh3/gpio.h
-index 48770c1..4e53eb3 100644
---- a/include/asm-sh/cpu-sh3/gpio.h
-+++ b/include/asm-sh/cpu-sh3/gpio.h
-@@ -12,7 +12,8 @@
- #ifndef _CPU_SH3_GPIO_H
- #define _CPU_SH3_GPIO_H
-
--#if defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#if defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
-
- /* Control registers */
- #define PORT_PACR 0xA4050100UL
-diff --git a/include/asm-sh/cpu-sh3/mmu_context.h b/include/asm-sh/cpu-sh3/mmu_context.h
-index 16c2d63..ab09da7 100644
---- a/include/asm-sh/cpu-sh3/mmu_context.h
-+++ b/include/asm-sh/cpu-sh3/mmu_context.h
-@@ -33,7 +33,8 @@
- defined(CONFIG_CPU_SUBTYPE_SH7709) || \
- defined(CONFIG_CPU_SUBTYPE_SH7710) || \
- defined(CONFIG_CPU_SUBTYPE_SH7712) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define INTEVT 0xa4000000 /* INTEVTE2(0xa4000000) */
- #else
- #define INTEVT 0xffffffd8
-diff --git a/include/asm-sh/cpu-sh3/rtc.h b/include/asm-sh/cpu-sh3/rtc.h
-new file mode 100644
-index 0000000..319404a
---- /dev/null
-+++ b/include/asm-sh/cpu-sh3/rtc.h
-@@ -0,0 +1,8 @@
-+#ifndef __ASM_SH_CPU_SH3_RTC_H
-+#define __ASM_SH_CPU_SH3_RTC_H
-+
-+#define rtc_reg_size sizeof(u16)
-+#define RTC_BIT_INVERTED 0 /* No bug on SH7708, SH7709A */
-+#define RTC_DEF_CAPABILITIES 0UL
-+
-+#endif /* __ASM_SH_CPU_SH3_RTC_H */
-diff --git a/include/asm-sh/cpu-sh3/timer.h b/include/asm-sh/cpu-sh3/timer.h
-index 7b795ac..793acf1 100644
---- a/include/asm-sh/cpu-sh3/timer.h
-+++ b/include/asm-sh/cpu-sh3/timer.h
-@@ -23,12 +23,13 @@
- * ---------------------------------------------------------------------------
- */
-
--#if !defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#if !defined(CONFIG_CPU_SUBTYPE_SH7720) && !defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define TMU_TOCR 0xfffffe90 /* Byte access */
- #endif
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7710) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define TMU_012_TSTR 0xa412fe92 /* Byte access */
-
- #define TMU0_TCOR 0xa412fe94 /* Long access */
-@@ -57,7 +58,7 @@
- #define TMU2_TCOR 0xfffffeac /* Long access */
- #define TMU2_TCNT 0xfffffeb0 /* Long access */
- #define TMU2_TCR 0xfffffeb4 /* Word access */
--#if !defined(CONFIG_CPU_SUBTYPE_SH7720)
-+#if !defined(CONFIG_CPU_SUBTYPE_SH7720) && !defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define TMU2_TCPR2 0xfffffeb8 /* Long access */
- #endif
- #endif
-diff --git a/include/asm-sh/cpu-sh3/ubc.h b/include/asm-sh/cpu-sh3/ubc.h
-index 18467c5..4e6381d 100644
---- a/include/asm-sh/cpu-sh3/ubc.h
-+++ b/include/asm-sh/cpu-sh3/ubc.h
-@@ -12,7 +12,8 @@
- #define __ASM_CPU_SH3_UBC_H
-
- #if defined(CONFIG_CPU_SUBTYPE_SH7710) || \
-- defined(CONFIG_CPU_SUBTYPE_SH7720)
-+ defined(CONFIG_CPU_SUBTYPE_SH7720) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7721)
- #define UBC_BARA 0xa4ffffb0
- #define UBC_BAMRA 0xa4ffffb4
- #define UBC_BBRA 0xa4ffffb8
-diff --git a/include/asm-sh/cpu-sh4/addrspace.h b/include/asm-sh/cpu-sh4/addrspace.h
-index bb2e1b0..a3fa733 100644
---- a/include/asm-sh/cpu-sh4/addrspace.h
-+++ b/include/asm-sh/cpu-sh4/addrspace.h
-@@ -10,6 +10,12 @@
- #ifndef __ASM_CPU_SH4_ADDRSPACE_H
- #define __ASM_CPU_SH4_ADDRSPACE_H
-
-+#define P0SEG 0x00000000
-+#define P1SEG 0x80000000
-+#define P2SEG 0xa0000000
-+#define P3SEG 0xc0000000
-+#define P4SEG 0xe0000000
-+
- /* Detailed P4SEG */
- #define P4SEG_STORE_QUE (P4SEG)
- #define P4SEG_IC_ADDR 0xf0000000
-diff --git a/include/asm-sh/cpu-sh4/cache.h b/include/asm-sh/cpu-sh4/cache.h
-index f92b20a..1c61ebf 100644
---- a/include/asm-sh/cpu-sh4/cache.h
-+++ b/include/asm-sh/cpu-sh4/cache.h
-@@ -12,6 +12,11 @@
-
- #define L1_CACHE_SHIFT 5
-
-+#define SH_CACHE_VALID 1
-+#define SH_CACHE_UPDATED 2
-+#define SH_CACHE_COMBINED 4
-+#define SH_CACHE_ASSOC 8
-+
- #define CCR 0xff00001c /* Address of Cache Control Register */
- #define CCR_CACHE_OCE 0x0001 /* Operand Cache Enable */
- #define CCR_CACHE_WT 0x0002 /* Write-Through (for P0,U0,P3) (else writeback)*/
-diff --git a/include/asm-sh/cpu-sh4/fpu.h b/include/asm-sh/cpu-sh4/fpu.h
-new file mode 100644
-index 0000000..febef73
---- /dev/null
-+++ b/include/asm-sh/cpu-sh4/fpu.h
-@@ -0,0 +1,32 @@
-+/*
-+ * linux/arch/sh/kernel/cpu/sh4/sh4_fpu.h
-+ *
-+ * Copyright (C) 2006 STMicroelectronics Limited
-+ * Author: Carl Shaw <carl.shaw at st.com>
-+ *
-+ * May be copied or modified under the terms of the GNU General Public
-+ * License Version 2. See linux/COPYING for more information.
-+ *
-+ * Definitions for SH4 FPU operations
-+ */
-+
-+#ifndef __CPU_SH4_FPU_H
-+#define __CPU_SH4_FPU_H
-+
-+#define FPSCR_ENABLE_MASK 0x00000f80UL
-+
-+#define FPSCR_FMOV_DOUBLE (1<<1)
-+
-+#define FPSCR_CAUSE_INEXACT (1<<12)
-+#define FPSCR_CAUSE_UNDERFLOW (1<<13)
-+#define FPSCR_CAUSE_OVERFLOW (1<<14)
-+#define FPSCR_CAUSE_DIVZERO (1<<15)
-+#define FPSCR_CAUSE_INVALID (1<<16)
-+#define FPSCR_CAUSE_ERROR (1<<17)
-+
-+#define FPSCR_DBL_PRECISION (1<<19)
-+#define FPSCR_ROUNDING_MODE(x) ((x >> 20) & 3)
-+#define FPSCR_RM_NEAREST (0)
-+#define FPSCR_RM_ZERO (1)
-+
-+#endif
-diff --git a/include/asm-sh/cpu-sh4/freq.h b/include/asm-sh/cpu-sh4/freq.h
-index dc1d32a..1ac10b9 100644
---- a/include/asm-sh/cpu-sh4/freq.h
-+++ b/include/asm-sh/cpu-sh4/freq.h
-@@ -16,7 +16,8 @@
- #define SCLKACR 0xa4150008
- #define SCLKBCR 0xa415000c
- #define IrDACLKCR 0xa4150010
--#elif defined(CONFIG_CPU_SUBTYPE_SH7780)
-+#elif defined(CONFIG_CPU_SUBTYPE_SH7763) || \
-+ defined(CONFIG_CPU_SUBTYPE_SH7780)
- #define FRQCR 0xffc80000
- #elif defined(CONFIG_CPU_SUBTYPE_SH7785)
- #define FRQCR0 0xffc80000
-diff --git a/include/asm-sh/cpu-sh4/mmu_context.h b/include/asm-sh/cpu-sh4/mmu_context.h
-index 979acdd..9ea8eb2 100644
---- a/include/asm-sh/cpu-sh4/mmu_context.h
-+++ b/include/asm-sh/cpu-sh4/mmu_context.h
-@@ -22,12 +22,20 @@
- #define MMU_UTLB_ADDRESS_ARRAY 0xF6000000
- #define MMU_PAGE_ASSOC_BIT 0x80
-
-+#define MMUCR_TI (1<<2)
-+
- #ifdef CONFIG_X2TLB
- #define MMUCR_ME (1 << 7)
- #else
- #define MMUCR_ME (0)
- #endif
-
-+#if defined(CONFIG_32BIT) && defined(CONFIG_CPU_SUBTYPE_ST40)
-+#define MMUCR_SE (1 << 4)
-+#else
-+#define MMUCR_SE (0)
-+#endif
-+
- #ifdef CONFIG_SH_STORE_QUEUES
- #define MMUCR_SQMD (1 << 9)
- #else
-@@ -35,7 +43,7 @@
- #endif
-
- #define MMU_NTLB_ENTRIES 64
--#define MMU_CONTROL_INIT (0x05|MMUCR_SQMD|MMUCR_ME)
-+#define MMU_CONTROL_INIT (0x05|MMUCR_SQMD|MMUCR_ME|MMUCR_SE)
-
- #define MMU_ITLB_DATA_ARRAY 0xF3000000
- #define MMU_UTLB_DATA_ARRAY 0xF7000000
-diff --git a/include/asm-sh/cpu-sh4/rtc.h b/include/asm-sh/cpu-sh4/rtc.h
-new file mode 100644
-index 0000000..f3d0f53
---- /dev/null
-+++ b/include/asm-sh/cpu-sh4/rtc.h
-@@ -0,0 +1,8 @@
-+#ifndef __ASM_SH_CPU_SH4_RTC_H
-+#define __ASM_SH_CPU_SH4_RTC_H
-+
-+#define rtc_reg_size sizeof(u32)
-+#define RTC_BIT_INVERTED 0x40 /* bug on SH7750, SH7750S */
-+#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
-+
-+#endif /* __ASM_SH_CPU_SH4_RTC_H */
-diff --git a/include/asm-sh/cpu-sh5/addrspace.h b/include/asm-sh/cpu-sh5/addrspace.h
-new file mode 100644
-index 0000000..dc36b9a
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/addrspace.h
-@@ -0,0 +1,11 @@
-+#ifndef __ASM_SH_CPU_SH5_ADDRSPACE_H
-+#define __ASM_SH_CPU_SH5_ADDRSPACE_H
-+
-+#define PHYS_PERIPHERAL_BLOCK 0x09000000
-+#define PHYS_DMAC_BLOCK 0x0e000000
-+#define PHYS_PCI_BLOCK 0x60000000
-+#define PHYS_EMI_BLOCK 0xff000000
-+
-+/* No segmentation.. */
-+
-+#endif /* __ASM_SH_CPU_SH5_ADDRSPACE_H */
-diff --git a/include/asm-sh/cpu-sh5/cache.h b/include/asm-sh/cpu-sh5/cache.h
-new file mode 100644
-index 0000000..ed050ab
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/cache.h
-@@ -0,0 +1,97 @@
-+#ifndef __ASM_SH_CPU_SH5_CACHE_H
-+#define __ASM_SH_CPU_SH5_CACHE_H
-+
-+/*
-+ * include/asm-sh/cpu-sh5/cache.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003, 2004 Paul Mundt
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+
-+#define L1_CACHE_SHIFT 5
-+
-+/* Valid and Dirty bits */
-+#define SH_CACHE_VALID (1LL<<0)
-+#define SH_CACHE_UPDATED (1LL<<57)
-+
-+/* Unimplemented compat bits.. */
-+#define SH_CACHE_COMBINED 0
-+#define SH_CACHE_ASSOC 0
-+
-+/* Cache flags */
-+#define SH_CACHE_MODE_WT (1LL<<0)
-+#define SH_CACHE_MODE_WB (1LL<<1)
-+
-+/*
-+ * Control Registers.
-+ */
-+#define ICCR_BASE 0x01600000 /* Instruction Cache Control Register */
-+#define ICCR_REG0 0 /* Register 0 offset */
-+#define ICCR_REG1 1 /* Register 1 offset */
-+#define ICCR0 ICCR_BASE+ICCR_REG0
-+#define ICCR1 ICCR_BASE+ICCR_REG1
-+
-+#define ICCR0_OFF 0x0 /* Set ICACHE off */
-+#define ICCR0_ON 0x1 /* Set ICACHE on */
-+#define ICCR0_ICI 0x2 /* Invalidate all in IC */
-+
-+#define ICCR1_NOLOCK 0x0 /* Set No Locking */
-+
-+#define OCCR_BASE 0x01E00000 /* Operand Cache Control Register */
-+#define OCCR_REG0 0 /* Register 0 offset */
-+#define OCCR_REG1 1 /* Register 1 offset */
-+#define OCCR0 OCCR_BASE+OCCR_REG0
-+#define OCCR1 OCCR_BASE+OCCR_REG1
-+
-+#define OCCR0_OFF 0x0 /* Set OCACHE off */
-+#define OCCR0_ON 0x1 /* Set OCACHE on */
-+#define OCCR0_OCI 0x2 /* Invalidate all in OC */
-+#define OCCR0_WT 0x4 /* Set OCACHE in WT Mode */
-+#define OCCR0_WB 0x0 /* Set OCACHE in WB Mode */
-+
-+#define OCCR1_NOLOCK 0x0 /* Set No Locking */
-+
-+/*
-+ * SH-5
-+ * A bit of description here, for neff=32.
-+ *
-+ * |<--- tag (19 bits) --->|
-+ * +-----------------------------+-----------------+------+----------+------+
-+ * | | | ways |set index |offset|
-+ * +-----------------------------+-----------------+------+----------+------+
-+ * ^ 2 bits 8 bits 5 bits
-+ * +- Bit 31
-+ *
-+ * Cacheline size is based on offset: 5 bits = 32 bytes per line
-+ * A cache line is identified by a tag + set but OCACHETAG/ICACHETAG
-+ * have a broader space for registers. These are outlined by
-+ * CACHE_?C_*_STEP below.
-+ *
-+ */
-+
-+/* Instruction cache */
-+#define CACHE_IC_ADDRESS_ARRAY 0x01000000
-+
-+/* Operand Cache */
-+#define CACHE_OC_ADDRESS_ARRAY 0x01800000
-+
-+/* These declarations relate to cache 'synonyms' in the operand cache. A
-+ 'synonym' occurs where effective address bits overlap between those used for
-+ indexing the cache sets and those passed to the MMU for translation. In the
-+ case of SH5-101 & SH5-103, only bit 12 is affected for 4k pages. */
-+
-+#define CACHE_OC_N_SYNBITS 1 /* Number of synonym bits */
-+#define CACHE_OC_SYN_SHIFT 12
-+/* Mask to select synonym bit(s) */
-+#define CACHE_OC_SYN_MASK (((1UL<<CACHE_OC_N_SYNBITS)-1)<<CACHE_OC_SYN_SHIFT)
-+
-+/*
-+ * Instruction cache can't be invalidated based on physical addresses.
-+ * No Instruction Cache defines required, then.
-+ */
-+
-+#endif /* __ASM_SH_CPU_SH5_CACHE_H */
-diff --git a/include/asm-sh/cpu-sh5/cacheflush.h b/include/asm-sh/cpu-sh5/cacheflush.h
-new file mode 100644
-index 0000000..98edb5b
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/cacheflush.h
-@@ -0,0 +1,35 @@
-+#ifndef __ASM_SH_CPU_SH5_CACHEFLUSH_H
-+#define __ASM_SH_CPU_SH5_CACHEFLUSH_H
-+
-+#ifndef __ASSEMBLY__
-+
-+#include <asm/page.h>
-+
-+struct vm_area_struct;
-+struct page;
-+struct mm_struct;
-+
-+extern void flush_cache_all(void);
-+extern void flush_cache_mm(struct mm_struct *mm);
-+extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
-+extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
-+ unsigned long end);
-+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
-+extern void flush_dcache_page(struct page *pg);
-+extern void flush_icache_range(unsigned long start, unsigned long end);
-+extern void flush_icache_user_range(struct vm_area_struct *vma,
-+ struct page *page, unsigned long addr,
-+ int len);
-+
-+#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
-+
-+#define flush_dcache_mmap_lock(mapping) do { } while (0)
-+#define flush_dcache_mmap_unlock(mapping) do { } while (0)
-+
-+#define flush_icache_page(vma, page) do { } while (0)
-+#define p3_cache_init() do { } while (0)
-+
-+#endif /* __ASSEMBLY__ */
-+
-+#endif /* __ASM_SH_CPU_SH5_CACHEFLUSH_H */
-+
-diff --git a/include/asm-sh/cpu-sh5/dma.h b/include/asm-sh/cpu-sh5/dma.h
-new file mode 100644
-index 0000000..7bf6bb3
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/dma.h
-@@ -0,0 +1,6 @@
-+#ifndef __ASM_SH_CPU_SH5_DMA_H
-+#define __ASM_SH_CPU_SH5_DMA_H
-+
-+/* Nothing yet */
-+
-+#endif /* __ASM_SH_CPU_SH5_DMA_H */
-diff --git a/include/asm-sh/cpu-sh5/irq.h b/include/asm-sh/cpu-sh5/irq.h
-new file mode 100644
-index 0000000..f0f0756
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/irq.h
-@@ -0,0 +1,117 @@
-+#ifndef __ASM_SH_CPU_SH5_IRQ_H
-+#define __ASM_SH_CPU_SH5_IRQ_H
-+
-+/*
-+ * include/asm-sh/cpu-sh5/irq.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+
-+
-+/*
-+ * Encoded IRQs are not considered worth to be supported.
-+ * Main reason is that there's no per-encoded-interrupt
-+ * enable/disable mechanism (as there was in SH3/4).
-+ * An all enabled/all disabled is worth only if there's
-+ * a cascaded IC to disable/enable/ack on. Until such
-+ * IC is available there's no such support.
-+ *
-+ * Presumably Encoded IRQs may use extra IRQs beyond 64,
-+ * below. Some logic must be added to cope with IRQ_IRL?
-+ * in an exclusive way.
-+ *
-+ * Priorities are set at Platform level, when IRQ_IRL0-3
-+ * are set to 0 Encoding is allowed. Otherwise it's not
-+ * allowed.
-+ */
-+
-+/* Independent IRQs */
-+#define IRQ_IRL0 0
-+#define IRQ_IRL1 1
-+#define IRQ_IRL2 2
-+#define IRQ_IRL3 3
-+
-+#define IRQ_INTA 4
-+#define IRQ_INTB 5
-+#define IRQ_INTC 6
-+#define IRQ_INTD 7
-+
-+#define IRQ_SERR 12
-+#define IRQ_ERR 13
-+#define IRQ_PWR3 14
-+#define IRQ_PWR2 15
-+#define IRQ_PWR1 16
-+#define IRQ_PWR0 17
-+
-+#define IRQ_DMTE0 18
-+#define IRQ_DMTE1 19
-+#define IRQ_DMTE2 20
-+#define IRQ_DMTE3 21
-+#define IRQ_DAERR 22
-+
-+#define IRQ_TUNI0 32
-+#define IRQ_TUNI1 33
-+#define IRQ_TUNI2 34
-+#define IRQ_TICPI2 35
-+
-+#define IRQ_ATI 36
-+#define IRQ_PRI 37
-+#define IRQ_CUI 38
-+
-+#define IRQ_ERI 39
-+#define IRQ_RXI 40
-+#define IRQ_BRI 41
-+#define IRQ_TXI 42
-+
-+#define IRQ_ITI 63
-+
-+#define NR_INTC_IRQS 64
-+
-+#ifdef CONFIG_SH_CAYMAN
-+#define NR_EXT_IRQS 32
-+#define START_EXT_IRQS 64
-+
-+/* PCI bus 2 uses encoded external interrupts on the Cayman board */
-+#define IRQ_P2INTA (START_EXT_IRQS + (3*8) + 0)
-+#define IRQ_P2INTB (START_EXT_IRQS + (3*8) + 1)
-+#define IRQ_P2INTC (START_EXT_IRQS + (3*8) + 2)
-+#define IRQ_P2INTD (START_EXT_IRQS + (3*8) + 3)
-+
-+#define I8042_KBD_IRQ (START_EXT_IRQS + 2)
-+#define I8042_AUX_IRQ (START_EXT_IRQS + 6)
-+
-+#define IRQ_CFCARD (START_EXT_IRQS + 7)
-+#define IRQ_PCMCIA (0)
-+
-+#else
-+#define NR_EXT_IRQS 0
-+#endif
-+
-+/* Default IRQs, fixed */
-+#define TIMER_IRQ IRQ_TUNI0
-+#define RTC_IRQ IRQ_CUI
-+
-+/* Default Priorities, Platform may choose differently */
-+#define NO_PRIORITY 0 /* Disabled */
-+#define TIMER_PRIORITY 2
-+#define RTC_PRIORITY TIMER_PRIORITY
-+#define SCIF_PRIORITY 3
-+#define INTD_PRIORITY 3
-+#define IRL3_PRIORITY 4
-+#define INTC_PRIORITY 6
-+#define IRL2_PRIORITY 7
-+#define INTB_PRIORITY 9
-+#define IRL1_PRIORITY 10
-+#define INTA_PRIORITY 12
-+#define IRL0_PRIORITY 13
-+#define TOP_PRIORITY 15
-+
-+extern int intc_evt_to_irq[(0xE20/0x20)+1];
-+int intc_irq_describe(char* p, int irq);
-+extern int platform_int_priority[NR_INTC_IRQS];
-+
-+#endif /* __ASM_SH_CPU_SH5_IRQ_H */
-diff --git a/include/asm-sh/cpu-sh5/mmu_context.h b/include/asm-sh/cpu-sh5/mmu_context.h
-new file mode 100644
-index 0000000..df857fc
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/mmu_context.h
-@@ -0,0 +1,27 @@
-+#ifndef __ASM_SH_CPU_SH5_MMU_CONTEXT_H
-+#define __ASM_SH_CPU_SH5_MMU_CONTEXT_H
-+
-+/* Common defines */
-+#define TLB_STEP 0x00000010
-+#define TLB_PTEH 0x00000000
-+#define TLB_PTEL 0x00000008
-+
-+/* PTEH defines */
-+#define PTEH_ASID_SHIFT 2
-+#define PTEH_VALID 0x0000000000000001
-+#define PTEH_SHARED 0x0000000000000002
-+#define PTEH_MATCH_ASID 0x00000000000003ff
-+
-+#ifndef __ASSEMBLY__
-+/* This has to be a common function because the next location to fill
-+ * information is shared. */
-+extern void __do_tlb_refill(unsigned long address, unsigned long long is_text_not_data, pte_t *pte);
-+
-+/* Profiling counter. */
-+#ifdef CONFIG_SH64_PROC_TLB
-+extern unsigned long long calls_to_do_fast_page_fault;
-+#endif
-+
-+#endif /* __ASSEMBLY__ */
-+
-+#endif /* __ASM_SH_CPU_SH5_MMU_CONTEXT_H */
-diff --git a/include/asm-sh/cpu-sh5/registers.h b/include/asm-sh/cpu-sh5/registers.h
-new file mode 100644
-index 0000000..6664ea6
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/registers.h
-@@ -0,0 +1,106 @@
-+#ifndef __ASM_SH_CPU_SH5_REGISTERS_H
-+#define __ASM_SH_CPU_SH5_REGISTERS_H
-+
-+/*
-+ * include/asm-sh/cpu-sh5/registers.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2004 Richard Curnow
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+
-+#ifdef __ASSEMBLY__
-+/* =====================================================================
-+**
-+** Section 1: acts on assembly sources pre-processed by GPP ( <source.S>).
-+** Assigns symbolic names to control & target registers.
-+*/
-+
-+/*
-+ * Define some useful aliases for control registers.
-+ */
-+#define SR cr0
-+#define SSR cr1
-+#define PSSR cr2
-+ /* cr3 UNDEFINED */
-+#define INTEVT cr4
-+#define EXPEVT cr5
-+#define PEXPEVT cr6
-+#define TRA cr7
-+#define SPC cr8
-+#define PSPC cr9
-+#define RESVEC cr10
-+#define VBR cr11
-+ /* cr12 UNDEFINED */
-+#define TEA cr13
-+ /* cr14-cr15 UNDEFINED */
-+#define DCR cr16
-+#define KCR0 cr17
-+#define KCR1 cr18
-+ /* cr19-cr31 UNDEFINED */
-+ /* cr32-cr61 RESERVED */
-+#define CTC cr62
-+#define USR cr63
-+
-+/*
-+ * ABI dependent registers (general purpose set)
-+ */
-+#define RET r2
-+#define ARG1 r2
-+#define ARG2 r3
-+#define ARG3 r4
-+#define ARG4 r5
-+#define ARG5 r6
-+#define ARG6 r7
-+#define SP r15
-+#define LINK r18
-+#define ZERO r63
-+
-+/*
-+ * Status register defines: used only by assembly sources (and
-+ * syntax independednt)
-+ */
-+#define SR_RESET_VAL 0x0000000050008000
-+#define SR_HARMLESS 0x00000000500080f0 /* Write ignores for most */
-+#define SR_ENABLE_FPU 0xffffffffffff7fff /* AND with this */
-+
-+#if defined (CONFIG_SH64_SR_WATCH)
-+#define SR_ENABLE_MMU 0x0000000084000000 /* OR with this */
-+#else
-+#define SR_ENABLE_MMU 0x0000000080000000 /* OR with this */
-+#endif
-+
-+#define SR_UNBLOCK_EXC 0xffffffffefffffff /* AND with this */
-+#define SR_BLOCK_EXC 0x0000000010000000 /* OR with this */
-+
-+#else /* Not __ASSEMBLY__ syntax */
-+
-+/*
-+** Stringify reg. name
-+*/
-+#define __str(x) #x
-+
-+/* Stringify control register names for use in inline assembly */
-+#define __SR __str(SR)
-+#define __SSR __str(SSR)
-+#define __PSSR __str(PSSR)
-+#define __INTEVT __str(INTEVT)
-+#define __EXPEVT __str(EXPEVT)
-+#define __PEXPEVT __str(PEXPEVT)
-+#define __TRA __str(TRA)
-+#define __SPC __str(SPC)
-+#define __PSPC __str(PSPC)
-+#define __RESVEC __str(RESVEC)
-+#define __VBR __str(VBR)
-+#define __TEA __str(TEA)
-+#define __DCR __str(DCR)
-+#define __KCR0 __str(KCR0)
-+#define __KCR1 __str(KCR1)
-+#define __CTC __str(CTC)
-+#define __USR __str(USR)
-+
-+#endif /* __ASSEMBLY__ */
-+#endif /* __ASM_SH_CPU_SH5_REGISTERS_H */
-diff --git a/include/asm-sh/cpu-sh5/rtc.h b/include/asm-sh/cpu-sh5/rtc.h
-new file mode 100644
-index 0000000..12ea0ed
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/rtc.h
-@@ -0,0 +1,8 @@
-+#ifndef __ASM_SH_CPU_SH5_RTC_H
-+#define __ASM_SH_CPU_SH5_RTC_H
-+
-+#define rtc_reg_size sizeof(u32)
-+#define RTC_BIT_INVERTED 0 /* The SH-5 RTC is surprisingly sane! */
-+#define RTC_DEF_CAPABILITIES RTC_CAP_4_DIGIT_YEAR
-+
-+#endif /* __ASM_SH_CPU_SH5_RTC_H */
-diff --git a/include/asm-sh/cpu-sh5/timer.h b/include/asm-sh/cpu-sh5/timer.h
-new file mode 100644
-index 0000000..88da9b3
---- /dev/null
-+++ b/include/asm-sh/cpu-sh5/timer.h
-@@ -0,0 +1,4 @@
-+#ifndef __ASM_SH_CPU_SH5_TIMER_H
-+#define __ASM_SH_CPU_SH5_TIMER_H
-+
-+#endif /* __ASM_SH_CPU_SH5_TIMER_H */
-diff --git a/include/asm-sh/delay.h b/include/asm-sh/delay.h
-index db599b2..031db84 100644
---- a/include/asm-sh/delay.h
-+++ b/include/asm-sh/delay.h
-@@ -6,7 +6,7 @@
- *
- * Delay routines calling functions in arch/sh/lib/delay.c
- */
--
-+
- extern void __bad_udelay(void);
- extern void __bad_ndelay(void);
-
-@@ -15,13 +15,17 @@ extern void __ndelay(unsigned long nsecs);
- extern void __const_udelay(unsigned long usecs);
- extern void __delay(unsigned long loops);
-
-+#ifdef CONFIG_SUPERH32
- #define udelay(n) (__builtin_constant_p(n) ? \
- ((n) > 20000 ? __bad_udelay() : __const_udelay((n) * 0x10c6ul)) : \
- __udelay(n))
-
+-static inline void dma_sync_single_for_cpu(struct device *dev,
+- dma_addr_t dma_handle, size_t size,
+- enum dma_data_direction dir)
+-{
+- dma_sync_single(dev, dma_handle, size, dir);
+-}
-
- #define ndelay(n) (__builtin_constant_p(n) ? \
- ((n) > 20000 ? __bad_ndelay() : __const_udelay((n) * 5ul)) : \
- __ndelay(n))
-+#else
-+extern void udelay(unsigned long usecs);
-+extern void ndelay(unsigned long nsecs);
-+#endif
-
- #endif /* __ASM_SH_DELAY_H */
-diff --git a/include/asm-sh/dma-mapping.h b/include/asm-sh/dma-mapping.h
-index fcea067..22cc419 100644
---- a/include/asm-sh/dma-mapping.h
-+++ b/include/asm-sh/dma-mapping.h
-@@ -8,11 +8,6 @@
-
- extern struct bus_type pci_bus_type;
-
--/* arch/sh/mm/consistent.c */
--extern void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *handle);
--extern void consistent_free(void *vaddr, size_t size);
--extern void consistent_sync(void *vaddr, size_t size, int direction);
+-static inline void dma_sync_single_for_device(struct device *dev,
+- dma_addr_t dma_handle, size_t size,
+- enum dma_data_direction dir)
+-{
+- dma_sync_single(dev, dma_handle, size, dir);
+-}
-
- #define dma_supported(dev, mask) (1)
-
- static inline int dma_set_mask(struct device *dev, u64 mask)
-@@ -25,44 +20,19 @@ static inline int dma_set_mask(struct device *dev, u64 mask)
- return 0;
- }
-
--static inline void *dma_alloc_coherent(struct device *dev, size_t size,
-- dma_addr_t *dma_handle, gfp_t flag)
+-static inline void dma_sync_single_range_for_cpu(struct device *dev,
+- dma_addr_t dma_handle,
+- unsigned long offset,
+- size_t size,
+- enum dma_data_direction direction)
-{
-- if (sh_mv.mv_consistent_alloc) {
-- void *ret;
-+void *dma_alloc_coherent(struct device *dev, size_t size,
-+ dma_addr_t *dma_handle, gfp_t flag);
-
-- ret = sh_mv.mv_consistent_alloc(dev, size, dma_handle, flag);
-- if (ret != NULL)
-- return ret;
-- }
+- dma_sync_single_for_cpu(dev, dma_handle+offset, size, direction);
+-}
+-
+-static inline void dma_sync_single_range_for_device(struct device *dev,
+- dma_addr_t dma_handle,
+- unsigned long offset,
+- size_t size,
+- enum dma_data_direction direction)
+-{
+- dma_sync_single_for_device(dev, dma_handle+offset, size, direction);
+-}
+-
+-static inline void dma_sync_sg_for_cpu(struct device *dev,
+- struct scatterlist *sg, int nelems,
+- enum dma_data_direction dir)
+-{
+- dma_sync_sg(dev, sg, nelems, dir);
+-}
+-
+-static inline void dma_sync_sg_for_device(struct device *dev,
+- struct scatterlist *sg, int nelems,
+- enum dma_data_direction dir)
+-{
+- dma_sync_sg(dev, sg, nelems, dir);
+-}
+-
+-static inline int dma_get_cache_alignment(void)
+-{
+- /*
+- * Each processor family will define its own L1_CACHE_SHIFT,
+- * L1_CACHE_BYTES wraps to this, so this is always safe.
+- */
+- return L1_CACHE_BYTES;
+-}
+-
+-static inline int dma_mapping_error(dma_addr_t dma_addr)
+-{
+- return dma_addr == 0;
+-}
+-
+-#endif /* __ASM_SH_DMA_MAPPING_H */
+-
+diff --git a/include/asm-sh64/dma.h b/include/asm-sh64/dma.h
+deleted file mode 100644
+index e701f39..0000000
+--- a/include/asm-sh64/dma.h
++++ /dev/null
+@@ -1,41 +0,0 @@
+-#ifndef __ASM_SH64_DMA_H
+-#define __ASM_SH64_DMA_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/dma.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
+-
+-#include <linux/mm.h>
+-#include <asm/io.h>
+-#include <asm/pgtable.h>
+-
+-#define MAX_DMA_CHANNELS 4
+-
+-/*
+- * SH5 can DMA in any memory area.
+- *
+- * The static definition is dodgy because it should limit
+- * the highest DMA-able address based on the actual
+- * Physical memory available. This is actually performed
+- * at run time in defining the memory allowed to DMA_ZONE.
+- */
+-#define MAX_DMA_ADDRESS ~(NPHYS_MASK)
+-
+-#define DMA_MODE_READ 0
+-#define DMA_MODE_WRITE 1
+-
+-#ifdef CONFIG_PCI
+-extern int isa_dma_bridge_buggy;
+-#else
+-#define isa_dma_bridge_buggy (0)
+-#endif
+-
+-#endif /* __ASM_SH64_DMA_H */
+diff --git a/include/asm-sh64/elf.h b/include/asm-sh64/elf.h
+deleted file mode 100644
+index f994286..0000000
+--- a/include/asm-sh64/elf.h
++++ /dev/null
+@@ -1,107 +0,0 @@
+-#ifndef __ASM_SH64_ELF_H
+-#define __ASM_SH64_ELF_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/elf.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-/*
+- * ELF register definitions..
+- */
+-
+-#include <asm/ptrace.h>
+-#include <asm/user.h>
+-#include <asm/byteorder.h>
+-
+-typedef unsigned long elf_greg_t;
+-
+-#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
+-typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+-
+-typedef struct user_fpu_struct elf_fpregset_t;
+-
+-/*
+- * This is used to ensure we don't load something for the wrong architecture.
+- */
+-#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+-
+-/*
+- * These are used to set parameters in the core dumps.
+- */
+-#define ELF_CLASS ELFCLASS32
+-#ifdef __LITTLE_ENDIAN__
+-#define ELF_DATA ELFDATA2LSB
+-#else
+-#define ELF_DATA ELFDATA2MSB
+-#endif
+-#define ELF_ARCH EM_SH
+-
+-#define USE_ELF_CORE_DUMP
+-#define ELF_EXEC_PAGESIZE 4096
+-
+-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
+- use of this is to invoke "./ld.so someprog" to test out a new version of
+- the loader. We need to make sure that it is out of the way of the program
+- that it will "exec", and that there is sufficient room for the brk. */
+-
+-#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+-
+-#define R_SH_DIR32 1
+-#define R_SH_REL32 2
+-#define R_SH_IMM_LOW16 246
+-#define R_SH_IMM_LOW16_PCREL 247
+-#define R_SH_IMM_MEDLOW16 248
+-#define R_SH_IMM_MEDLOW16_PCREL 249
+-
+-#define ELF_CORE_COPY_REGS(_dest,_regs) \
+- memcpy((char *) &_dest, (char *) _regs, \
+- sizeof(struct pt_regs));
+-
+-/* This yields a mask that user programs can use to figure out what
+- instruction set this CPU supports. This could be done in user space,
+- but it's not easy, and we've already done it here. */
+-
+-#define ELF_HWCAP (0)
+-
+-/* This yields a string that ld.so will use to load implementation
+- specific libraries for optimization. This is more specific in
+- intent than poking at uname or /proc/cpuinfo.
+-
+- For the moment, we have only optimizations for the Intel generations,
+- but that could change... */
+-
+-#define ELF_PLATFORM (NULL)
+-
+-#define ELF_PLAT_INIT(_r, load_addr) \
+- do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
+- _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
+- _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
+- _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; _r->regs[15]=0; \
+- _r->regs[16]=0; _r->regs[17]=0; _r->regs[18]=0; _r->regs[19]=0; \
+- _r->regs[20]=0; _r->regs[21]=0; _r->regs[22]=0; _r->regs[23]=0; \
+- _r->regs[24]=0; _r->regs[25]=0; _r->regs[26]=0; _r->regs[27]=0; \
+- _r->regs[28]=0; _r->regs[29]=0; _r->regs[30]=0; _r->regs[31]=0; \
+- _r->regs[32]=0; _r->regs[33]=0; _r->regs[34]=0; _r->regs[35]=0; \
+- _r->regs[36]=0; _r->regs[37]=0; _r->regs[38]=0; _r->regs[39]=0; \
+- _r->regs[40]=0; _r->regs[41]=0; _r->regs[42]=0; _r->regs[43]=0; \
+- _r->regs[44]=0; _r->regs[45]=0; _r->regs[46]=0; _r->regs[47]=0; \
+- _r->regs[48]=0; _r->regs[49]=0; _r->regs[50]=0; _r->regs[51]=0; \
+- _r->regs[52]=0; _r->regs[53]=0; _r->regs[54]=0; _r->regs[55]=0; \
+- _r->regs[56]=0; _r->regs[57]=0; _r->regs[58]=0; _r->regs[59]=0; \
+- _r->regs[60]=0; _r->regs[61]=0; _r->regs[62]=0; \
+- _r->tregs[0]=0; _r->tregs[1]=0; _r->tregs[2]=0; _r->tregs[3]=0; \
+- _r->tregs[4]=0; _r->tregs[5]=0; _r->tregs[6]=0; _r->tregs[7]=0; \
+- _r->sr = SR_FD | SR_MMU; } while (0)
+-
+-#ifdef __KERNEL__
+-#define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX_32BIT)
+-#endif
+-
+-#endif /* __ASM_SH64_ELF_H */
+diff --git a/include/asm-sh64/emergency-restart.h b/include/asm-sh64/emergency-restart.h
+deleted file mode 100644
+index 108d8c4..0000000
+--- a/include/asm-sh64/emergency-restart.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef _ASM_EMERGENCY_RESTART_H
+-#define _ASM_EMERGENCY_RESTART_H
+-
+-#include <asm-generic/emergency-restart.h>
+-
+-#endif /* _ASM_EMERGENCY_RESTART_H */
+diff --git a/include/asm-sh64/errno.h b/include/asm-sh64/errno.h
+deleted file mode 100644
+index 57b46d4..0000000
+--- a/include/asm-sh64/errno.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_ERRNO_H
+-#define __ASM_SH64_ERRNO_H
-
-- return consistent_alloc(flag, size, dma_handle);
--}
+-#include <asm-generic/errno.h>
-
--static inline void dma_free_coherent(struct device *dev, size_t size,
-- void *vaddr, dma_addr_t dma_handle)
--{
-- if (sh_mv.mv_consistent_free) {
-- int ret;
+-#endif /* __ASM_SH64_ERRNO_H */
+diff --git a/include/asm-sh64/fb.h b/include/asm-sh64/fb.h
+deleted file mode 100644
+index d92e99c..0000000
+--- a/include/asm-sh64/fb.h
++++ /dev/null
+@@ -1,19 +0,0 @@
+-#ifndef _ASM_FB_H_
+-#define _ASM_FB_H_
-
-- ret = sh_mv.mv_consistent_free(dev, size, vaddr, dma_handle);
-- if (ret == 0)
-- return;
-- }
-+void dma_free_coherent(struct device *dev, size_t size,
-+ void *vaddr, dma_addr_t dma_handle);
-
-- consistent_free(vaddr, size);
+-#include <linux/fb.h>
+-#include <linux/fs.h>
+-#include <asm/page.h>
+-
+-static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
+- unsigned long off)
+-{
+- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-}
-+void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-+ enum dma_data_direction dir);
-
- #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
- #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
- #define dma_is_consistent(d, h) (1)
-
--static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-- enum dma_data_direction dir)
+-
+-static inline int fb_is_primary_device(struct fb_info *info)
-{
-- consistent_sync(vaddr, size, (int)dir);
+- return 0;
-}
-
- static inline dma_addr_t dma_map_single(struct device *dev,
- void *ptr, size_t size,
- enum dma_data_direction dir)
-@@ -205,4 +175,18 @@ static inline int dma_mapping_error(dma_addr_t dma_addr)
- {
- return dma_addr == 0;
- }
-+
-+#define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-+
-+extern int
-+dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
-+ dma_addr_t device_addr, size_t size, int flags);
-+
-+extern void
-+dma_release_declared_memory(struct device *dev);
-+
-+extern void *
-+dma_mark_declared_memory_occupied(struct device *dev,
-+ dma_addr_t device_addr, size_t size);
-+
- #endif /* __ASM_SH_DMA_MAPPING_H */
-diff --git a/include/asm-sh/elf.h b/include/asm-sh/elf.h
-index 12cc4b3..05092da 100644
---- a/include/asm-sh/elf.h
-+++ b/include/asm-sh/elf.h
-@@ -5,7 +5,7 @@
- #include <asm/ptrace.h>
- #include <asm/user.h>
-
--/* SH relocation types */
-+/* SH (particularly SHcompact) relocation types */
- #define R_SH_NONE 0
- #define R_SH_DIR32 1
- #define R_SH_REL32 2
-@@ -43,6 +43,11 @@
- #define R_SH_RELATIVE 165
- #define R_SH_GOTOFF 166
- #define R_SH_GOTPC 167
-+/* SHmedia relocs */
-+#define R_SH_IMM_LOW16 246
-+#define R_SH_IMM_LOW16_PCREL 247
-+#define R_SH_IMM_MEDLOW16 248
-+#define R_SH_IMM_MEDLOW16_PCREL 249
- /* Keep this the last entry. */
- #define R_SH_NUM 256
-
-@@ -58,11 +63,6 @@ typedef elf_greg_t elf_gregset_t[ELF_NGREG];
- typedef struct user_fpu_struct elf_fpregset_t;
-
- /*
-- * This is used to ensure we don't load something for the wrong architecture.
+-#endif /* _ASM_FB_H_ */
+diff --git a/include/asm-sh64/fcntl.h b/include/asm-sh64/fcntl.h
+deleted file mode 100644
+index 744dd79..0000000
+--- a/include/asm-sh64/fcntl.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-sh/fcntl.h>
+diff --git a/include/asm-sh64/futex.h b/include/asm-sh64/futex.h
+deleted file mode 100644
+index 6a332a9..0000000
+--- a/include/asm-sh64/futex.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef _ASM_FUTEX_H
+-#define _ASM_FUTEX_H
+-
+-#include <asm-generic/futex.h>
+-
+-#endif
+diff --git a/include/asm-sh64/gpio.h b/include/asm-sh64/gpio.h
+deleted file mode 100644
+index 6bc5a13..0000000
+--- a/include/asm-sh64/gpio.h
++++ /dev/null
+@@ -1,8 +0,0 @@
+-#ifndef __ASM_SH64_GPIO_H
+-#define __ASM_SH64_GPIO_H
+-
+-/*
+- * This is just a stub, so that every arch using sh-sci has a gpio.h
- */
--#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+-
+-#endif /* __ASM_SH64_GPIO_H */
+diff --git a/include/asm-sh64/hardirq.h b/include/asm-sh64/hardirq.h
+deleted file mode 100644
+index 555fd7a..0000000
+--- a/include/asm-sh64/hardirq.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-#ifndef __ASM_SH64_HARDIRQ_H
+-#define __ASM_SH64_HARDIRQ_H
+-
+-#include <linux/threads.h>
+-#include <linux/irq.h>
+-
+-/* entry.S is sensitive to the offsets of these fields */
+-typedef struct {
+- unsigned int __softirq_pending;
+-} ____cacheline_aligned irq_cpustat_t;
+-
+-#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
+-
+-/* arch/sh64/kernel/irq.c */
+-extern void ack_bad_irq(unsigned int irq);
+-
+-#endif /* __ASM_SH64_HARDIRQ_H */
+-
+diff --git a/include/asm-sh64/hardware.h b/include/asm-sh64/hardware.h
+deleted file mode 100644
+index 931c1ad..0000000
+--- a/include/asm-sh64/hardware.h
++++ /dev/null
+@@ -1,22 +0,0 @@
+-#ifndef __ASM_SH64_HARDWARE_H
+-#define __ASM_SH64_HARDWARE_H
-
-/*
- * These are used to set parameters in the core dumps.
- */
- #define ELF_CLASS ELFCLASS32
-@@ -73,6 +73,12 @@ typedef struct user_fpu_struct elf_fpregset_t;
- #endif
- #define ELF_ARCH EM_SH
-
-+#ifdef __KERNEL__
-+/*
-+ * This is used to ensure we don't load something for the wrong architecture.
-+ */
-+#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
-+
- #define USE_ELF_CORE_DUMP
- #define ELF_EXEC_PAGESIZE PAGE_SIZE
-
-@@ -83,7 +89,6 @@ typedef struct user_fpu_struct elf_fpregset_t;
-
- #define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
-
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/hardware.h
+- *
+- * Copyright (C) 2002 Stuart Menefy
+- * Copyright (C) 2003 Paul Mundt
+- *
+- * Defitions of the locations of registers in the physical address space.
+- */
-
- #define ELF_CORE_COPY_REGS(_dest,_regs) \
- memcpy((char *) &_dest, (char *) _regs, \
- sizeof(struct pt_regs));
-@@ -101,16 +106,38 @@ typedef struct user_fpu_struct elf_fpregset_t;
- For the moment, we have only optimizations for the Intel generations,
- but that could change... */
-
--#define ELF_PLATFORM (NULL)
-+#define ELF_PLATFORM (utsname()->machine)
-
-+#ifdef __SH5__
-+#define ELF_PLAT_INIT(_r, load_addr) \
-+ do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
-+ _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
-+ _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
-+ _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; _r->regs[15]=0; \
-+ _r->regs[16]=0; _r->regs[17]=0; _r->regs[18]=0; _r->regs[19]=0; \
-+ _r->regs[20]=0; _r->regs[21]=0; _r->regs[22]=0; _r->regs[23]=0; \
-+ _r->regs[24]=0; _r->regs[25]=0; _r->regs[26]=0; _r->regs[27]=0; \
-+ _r->regs[28]=0; _r->regs[29]=0; _r->regs[30]=0; _r->regs[31]=0; \
-+ _r->regs[32]=0; _r->regs[33]=0; _r->regs[34]=0; _r->regs[35]=0; \
-+ _r->regs[36]=0; _r->regs[37]=0; _r->regs[38]=0; _r->regs[39]=0; \
-+ _r->regs[40]=0; _r->regs[41]=0; _r->regs[42]=0; _r->regs[43]=0; \
-+ _r->regs[44]=0; _r->regs[45]=0; _r->regs[46]=0; _r->regs[47]=0; \
-+ _r->regs[48]=0; _r->regs[49]=0; _r->regs[50]=0; _r->regs[51]=0; \
-+ _r->regs[52]=0; _r->regs[53]=0; _r->regs[54]=0; _r->regs[55]=0; \
-+ _r->regs[56]=0; _r->regs[57]=0; _r->regs[58]=0; _r->regs[59]=0; \
-+ _r->regs[60]=0; _r->regs[61]=0; _r->regs[62]=0; \
-+ _r->tregs[0]=0; _r->tregs[1]=0; _r->tregs[2]=0; _r->tregs[3]=0; \
-+ _r->tregs[4]=0; _r->tregs[5]=0; _r->tregs[6]=0; _r->tregs[7]=0; \
-+ _r->sr = SR_FD | SR_MMU; } while (0)
-+#else
- #define ELF_PLAT_INIT(_r, load_addr) \
- do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
- _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
- _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
- _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; \
- _r->sr = SR_FD; } while (0)
-+#endif
-
--#ifdef __KERNEL__
- #define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX_32BIT)
- struct task_struct;
- extern int dump_task_regs (struct task_struct *, elf_gregset_t *);
-@@ -118,7 +145,6 @@ extern int dump_task_fpu (struct task_struct *, elf_fpregset_t *);
-
- #define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)
- #define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) dump_task_fpu(tsk, elf_fpregs)
--#endif
-
- #ifdef CONFIG_VSYSCALL
- /* vDSO has arch_setup_additional_pages */
-@@ -133,12 +159,35 @@ extern void __kernel_vsyscall;
- #define VDSO_BASE ((unsigned long)current->mm->context.vdso)
- #define VDSO_SYM(x) (VDSO_BASE + (unsigned long)(x))
-
-+#define VSYSCALL_AUX_ENT \
-+ if (vdso_enabled) \
-+ NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE);
-+#else
-+#define VSYSCALL_AUX_ENT
-+#endif /* CONFIG_VSYSCALL */
-+
-+#ifdef CONFIG_SH_FPU
-+#define FPU_AUX_ENT NEW_AUX_ENT(AT_FPUCW, FPSCR_INIT)
-+#else
-+#define FPU_AUX_ENT
-+#endif
-+
-+extern int l1i_cache_shape, l1d_cache_shape, l2_cache_shape;
-+
- /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
- #define ARCH_DLINFO \
- do { \
-- if (vdso_enabled) \
-- NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE); \
-+ /* Optional FPU initialization */ \
-+ FPU_AUX_ENT; \
-+ \
-+ /* Optional vsyscall entry */ \
-+ VSYSCALL_AUX_ENT; \
-+ \
-+ /* Cache desc */ \
-+ NEW_AUX_ENT(AT_L1I_CACHESHAPE, l1i_cache_shape); \
-+ NEW_AUX_ENT(AT_L1D_CACHESHAPE, l1d_cache_shape); \
-+ NEW_AUX_ENT(AT_L2_CACHESHAPE, l2_cache_shape); \
- } while (0)
--#endif /* CONFIG_VSYSCALL */
-
-+#endif /* __KERNEL__ */
- #endif /* __ASM_SH_ELF_H */
-diff --git a/include/asm-sh/fixmap.h b/include/asm-sh/fixmap.h
-index 8a56617..721fcc4 100644
---- a/include/asm-sh/fixmap.h
-+++ b/include/asm-sh/fixmap.h
-@@ -49,6 +49,7 @@ enum fixed_addresses {
- #define FIX_N_COLOURS 16
- FIX_CMAP_BEGIN,
- FIX_CMAP_END = FIX_CMAP_BEGIN + FIX_N_COLOURS,
-+ FIX_UNCACHED,
- #ifdef CONFIG_HIGHMEM
- FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
- FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
-@@ -73,7 +74,11 @@ extern void __set_fixmap(enum fixed_addresses idx,
- * the start of the fixmap, and leave one page empty
- * at the top of mem..
- */
-+#ifdef CONFIG_SUPERH32
- #define FIXADDR_TOP (P4SEG - PAGE_SIZE)
-+#else
-+#define FIXADDR_TOP (0xff000000 - PAGE_SIZE)
-+#endif
- #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
- #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
-
-diff --git a/include/asm-sh/flat.h b/include/asm-sh/flat.h
-index dc4f595..0cc8002 100644
---- a/include/asm-sh/flat.h
-+++ b/include/asm-sh/flat.h
-@@ -19,6 +19,6 @@
- #define flat_get_addr_from_rp(rp, relval, flags, p) get_unaligned(rp)
- #define flat_put_addr_at_rp(rp, val, relval) put_unaligned(val,rp)
- #define flat_get_relocate_addr(rel) (rel)
--#define flat_set_persistent(relval, p) 0
-+#define flat_set_persistent(relval, p) ({ (void)p; 0; })
-
- #endif /* __ASM_SH_FLAT_H */
-diff --git a/include/asm-sh/fpu.h b/include/asm-sh/fpu.h
-new file mode 100644
-index 0000000..f842988
---- /dev/null
-+++ b/include/asm-sh/fpu.h
-@@ -0,0 +1,46 @@
-+#ifndef __ASM_SH_FPU_H
-+#define __ASM_SH_FPU_H
-+
-+#define SR_FD 0x00008000
-+
-+#ifndef __ASSEMBLY__
-+#include <asm/ptrace.h>
-+
-+#ifdef CONFIG_SH_FPU
-+static inline void release_fpu(struct pt_regs *regs)
-+{
-+ regs->sr |= SR_FD;
-+}
-+
-+static inline void grab_fpu(struct pt_regs *regs)
-+{
-+ regs->sr &= ~SR_FD;
-+}
-+
-+struct task_struct;
-+
-+extern void save_fpu(struct task_struct *__tsk, struct pt_regs *regs);
-+#else
-+#define release_fpu(regs) do { } while (0)
-+#define grab_fpu(regs) do { } while (0)
-+#define save_fpu(tsk, regs) do { } while (0)
-+#endif
-+
-+extern int do_fpu_inst(unsigned short, struct pt_regs *);
-+
-+#define unlazy_fpu(tsk, regs) do { \
-+ if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
-+ save_fpu(tsk, regs); \
-+ } \
-+} while (0)
-+
-+#define clear_fpu(tsk, regs) do { \
-+ if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
-+ clear_tsk_thread_flag(tsk, TIF_USEDFPU); \
-+ release_fpu(regs); \
-+ } \
-+} while (0)
-+
-+#endif /* __ASSEMBLY__ */
-+
-+#endif /* __ASM_SH_FPU_H */
-diff --git a/include/asm-sh/hd64461.h b/include/asm-sh/hd64461.h
-index 342ca55..8c1353b 100644
---- a/include/asm-sh/hd64461.h
-+++ b/include/asm-sh/hd64461.h
-@@ -46,10 +46,10 @@
- /* CPU Data Bus Control Register */
- #define HD64461_SCPUCR (CONFIG_HD64461_IOBASE + 0x04)
-
--/* Base Adress Register */
-+/* Base Address Register */
- #define HD64461_LCDCBAR (CONFIG_HD64461_IOBASE + 0x1000)
-
--/* Line increment adress */
-+/* Line increment address */
- #define HD64461_LCDCLOR (CONFIG_HD64461_IOBASE + 0x1002)
-
- /* Controls LCD controller */
-@@ -80,9 +80,9 @@
- #define HD64461_LDR3 (CONFIG_HD64461_IOBASE + 0x101e)
-
- /* Palette Registers */
--#define HD64461_CPTWAR (CONFIG_HD64461_IOBASE + 0x1030) /* Color Palette Write Adress Register */
-+#define HD64461_CPTWAR (CONFIG_HD64461_IOBASE + 0x1030) /* Color Palette Write Address Register */
- #define HD64461_CPTWDR (CONFIG_HD64461_IOBASE + 0x1032) /* Color Palette Write Data Register */
--#define HD64461_CPTRAR (CONFIG_HD64461_IOBASE + 0x1034) /* Color Palette Read Adress Register */
-+#define HD64461_CPTRAR (CONFIG_HD64461_IOBASE + 0x1034) /* Color Palette Read Address Register */
- #define HD64461_CPTRDR (CONFIG_HD64461_IOBASE + 0x1036) /* Color Palette Read Data Register */
-
- #define HD64461_GRDOR (CONFIG_HD64461_IOBASE + 0x1040) /* Display Resolution Offset Register */
-@@ -97,8 +97,8 @@
- #define HD64461_GRCFGR_COLORDEPTH8 0x01 /* Sets Colordepth 8 for Accelerator */
-
- /* Line Drawing Registers */
--#define HD64461_LNSARH (CONFIG_HD64461_IOBASE + 0x1046) /* Line Start Adress Register (H) */
--#define HD64461_LNSARL (CONFIG_HD64461_IOBASE + 0x1048) /* Line Start Adress Register (L) */
-+#define HD64461_LNSARH (CONFIG_HD64461_IOBASE + 0x1046) /* Line Start Address Register (H) */
-+#define HD64461_LNSARL (CONFIG_HD64461_IOBASE + 0x1048) /* Line Start Address Register (L) */
- #define HD64461_LNAXLR (CONFIG_HD64461_IOBASE + 0x104a) /* Axis Pixel Length Register */
- #define HD64461_LNDGR (CONFIG_HD64461_IOBASE + 0x104c) /* Diagonal Register */
- #define HD64461_LNAXR (CONFIG_HD64461_IOBASE + 0x104e) /* Axial Register */
-@@ -106,16 +106,16 @@
- #define HD64461_LNMDR (CONFIG_HD64461_IOBASE + 0x1052) /* Line Mode Register */
-
- /* BitBLT Registers */
--#define HD64461_BBTSSARH (CONFIG_HD64461_IOBASE + 0x1054) /* Source Start Adress Register (H) */
--#define HD64461_BBTSSARL (CONFIG_HD64461_IOBASE + 0x1056) /* Source Start Adress Register (L) */
--#define HD64461_BBTDSARH (CONFIG_HD64461_IOBASE + 0x1058) /* Destination Start Adress Register (H) */
--#define HD64461_BBTDSARL (CONFIG_HD64461_IOBASE + 0x105a) /* Destination Start Adress Register (L) */
-+#define HD64461_BBTSSARH (CONFIG_HD64461_IOBASE + 0x1054) /* Source Start Address Register (H) */
-+#define HD64461_BBTSSARL (CONFIG_HD64461_IOBASE + 0x1056) /* Source Start Address Register (L) */
-+#define HD64461_BBTDSARH (CONFIG_HD64461_IOBASE + 0x1058) /* Destination Start Address Register (H) */
-+#define HD64461_BBTDSARL (CONFIG_HD64461_IOBASE + 0x105a) /* Destination Start Address Register (L) */
- #define HD64461_BBTDWR (CONFIG_HD64461_IOBASE + 0x105c) /* Destination Block Width Register */
- #define HD64461_BBTDHR (CONFIG_HD64461_IOBASE + 0x105e) /* Destination Block Height Register */
--#define HD64461_BBTPARH (CONFIG_HD64461_IOBASE + 0x1060) /* Pattern Start Adress Register (H) */
--#define HD64461_BBTPARL (CONFIG_HD64461_IOBASE + 0x1062) /* Pattern Start Adress Register (L) */
--#define HD64461_BBTMARH (CONFIG_HD64461_IOBASE + 0x1064) /* Mask Start Adress Register (H) */
--#define HD64461_BBTMARL (CONFIG_HD64461_IOBASE + 0x1066) /* Mask Start Adress Register (L) */
-+#define HD64461_BBTPARH (CONFIG_HD64461_IOBASE + 0x1060) /* Pattern Start Address Register (H) */
-+#define HD64461_BBTPARL (CONFIG_HD64461_IOBASE + 0x1062) /* Pattern Start Address Register (L) */
-+#define HD64461_BBTMARH (CONFIG_HD64461_IOBASE + 0x1064) /* Mask Start Address Register (H) */
-+#define HD64461_BBTMARL (CONFIG_HD64461_IOBASE + 0x1066) /* Mask Start Address Register (L) */
- #define HD64461_BBTROPR (CONFIG_HD64461_IOBASE + 0x1068) /* ROP Register */
- #define HD64461_BBTMDR (CONFIG_HD64461_IOBASE + 0x106a) /* BitBLT Mode Register */
-
-diff --git a/include/asm-sh/hs7751rvoip.h b/include/asm-sh/hs7751rvoip.h
+-#define PHYS_PERIPHERAL_BLOCK 0x09000000
+-#define PHYS_DMAC_BLOCK 0x0e000000
+-#define PHYS_PCI_BLOCK 0x60000000
+-#define PHYS_EMI_BLOCK 0xff000000
+-
+-#endif /* __ASM_SH64_HARDWARE_H */
+diff --git a/include/asm-sh64/hw_irq.h b/include/asm-sh64/hw_irq.h
deleted file mode 100644
-index c4cff9d..0000000
---- a/include/asm-sh/hs7751rvoip.h
+index ebb3908..0000000
+--- a/include/asm-sh64/hw_irq.h
+++ /dev/null
-@@ -1,54 +0,0 @@
--#ifndef __ASM_SH_RENESAS_HS7751RVOIP_H
--#define __ASM_SH_RENESAS_HS7751RVOIP_H
+@@ -1,15 +0,0 @@
+-#ifndef __ASM_SH64_HW_IRQ_H
+-#define __ASM_SH64_HW_IRQ_H
-
-/*
-- * linux/include/asm-sh/hs7751rvoip/hs7751rvoip.h
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
- *
-- * Copyright (C) 2000 Atom Create Engineering Co., Ltd.
+- * include/asm-sh64/hw_irq.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
- *
-- * Renesas Technology Sales HS7751RVoIP support
- */
-
--/* Box specific addresses. */
+-#endif /* __ASM_SH64_HW_IRQ_H */
+diff --git a/include/asm-sh64/ide.h b/include/asm-sh64/ide.h
+deleted file mode 100644
+index b6e31e8..0000000
+--- a/include/asm-sh64/ide.h
++++ /dev/null
+@@ -1,29 +0,0 @@
+-/*
+- * linux/include/asm-sh64/ide.h
+- *
+- * Copyright (C) 1994-1996 Linus Torvalds & authors
+- *
+- * sh64 version by Richard Curnow & Paul Mundt
+- */
-
--#define PA_BCR 0xa4000000 /* FPGA */
--#define PA_SLICCNTR1 0xa4000006 /* SLIC PIO Control 1 */
--#define PA_SLICCNTR2 0xa4000008 /* SLIC PIO Control 2 */
--#define PA_DMACNTR 0xa400000a /* USB DMA Control */
--#define PA_INPORTR 0xa400000c /* Input Port Register */
--#define PA_OUTPORTR 0xa400000e /* Output Port Reguster */
--#define PA_VERREG 0xa4000014 /* FPGA Version Register */
+-/*
+- * This file contains the sh64 architecture specific IDE code.
+- */
-
--#define PA_IDE_OFFSET 0x1f0 /* CF IDE Offset */
+-#ifndef __ASM_SH64_IDE_H
+-#define __ASM_SH64_IDE_H
-
--#define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
--#define IRLCNTR2 (PA_BCR + 2) /* Interrupt Control Register2 */
--#define IRLCNTR3 (PA_BCR + 4) /* Interrupt Control Register3 */
--#define IRLCNTR4 (PA_BCR + 16) /* Interrupt Control Register4 */
--#define IRLCNTR5 (PA_BCR + 18) /* Interrupt Control Register5 */
+-#ifdef __KERNEL__
-
--#define IRQ_PCIETH 6 /* PCI Ethernet IRQ */
--#define IRQ_PCIHUB 7 /* PCI Ethernet Hub IRQ */
--#define IRQ_USBCOM 8 /* USB Comunication IRQ */
--#define IRQ_USBCON 9 /* USB Connect IRQ */
--#define IRQ_USBDMA 10 /* USB DMA IRQ */
--#define IRQ_CFCARD 11 /* CF Card IRQ */
--#define IRQ_PCMCIA 12 /* PCMCIA IRQ */
--#define IRQ_PCISLOT 13 /* PCI Slot #1 IRQ */
--#define IRQ_ONHOOK1 0 /* ON HOOK1 IRQ */
--#define IRQ_OFFHOOK1 1 /* OFF HOOK1 IRQ */
--#define IRQ_ONHOOK2 2 /* ON HOOK2 IRQ */
--#define IRQ_OFFHOOK2 3 /* OFF HOOK2 IRQ */
--#define IRQ_RINGING 4 /* Ringing IRQ */
--#define IRQ_CODEC 5 /* CODEC IRQ */
-
--#define __IO_PREFIX hs7751rvoip
--#include <asm/io_generic.h>
+-/* Without this, the initialisation of PCI IDE cards end up calling
+- * ide_init_hwif_ports, which won't work. */
+-#ifdef CONFIG_BLK_DEV_IDEPCI
+-#define ide_default_io_ctl(base) (0)
+-#endif
-
--/* arch/sh/boards/renesas/hs7751rvoip/irq.c */
--void init_hs7751rvoip_IRQ(void);
+-#include <asm-generic/ide_iops.h>
-
--/* arch/sh/boards/renesas/hs7751rvoip/io.c */
--void *hs7751rvoip_ioremap(unsigned long, unsigned long);
+-#endif /* __KERNEL__ */
-
--#endif /* __ASM_SH_RENESAS_HS7751RVOIP */
-diff --git a/include/asm-sh/hw_irq.h b/include/asm-sh/hw_irq.h
-index cb0b6c9..c958fda 100644
---- a/include/asm-sh/hw_irq.h
-+++ b/include/asm-sh/hw_irq.h
-@@ -33,13 +33,6 @@ struct intc_vect {
- #define INTC_VECT(enum_id, vect) { enum_id, vect }
- #define INTC_IRQ(enum_id, irq) INTC_VECT(enum_id, irq2evt(irq))
-
--struct intc_prio {
-- intc_enum enum_id;
-- unsigned char priority;
--};
+-#endif /* __ASM_SH64_IDE_H */
+diff --git a/include/asm-sh64/io.h b/include/asm-sh64/io.h
+deleted file mode 100644
+index 7bd7314..0000000
+--- a/include/asm-sh64/io.h
++++ /dev/null
+@@ -1,196 +0,0 @@
+-#ifndef __ASM_SH64_IO_H
+-#define __ASM_SH64_IO_H
-
--#define INTC_PRIO(enum_id, prio) { enum_id, prio }
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/io.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
-
- struct intc_group {
- intc_enum enum_id;
- intc_enum enum_ids[32];
-@@ -79,8 +72,6 @@ struct intc_desc {
- unsigned int nr_vectors;
- struct intc_group *groups;
- unsigned int nr_groups;
-- struct intc_prio *priorities;
-- unsigned int nr_priorities;
- struct intc_mask_reg *mask_regs;
- unsigned int nr_mask_regs;
- struct intc_prio_reg *prio_regs;
-@@ -92,10 +83,9 @@ struct intc_desc {
-
- #define _INTC_ARRAY(a) a, sizeof(a)/sizeof(*a)
- #define DECLARE_INTC_DESC(symbol, chipname, vectors, groups, \
-- priorities, mask_regs, prio_regs, sense_regs) \
-+ mask_regs, prio_regs, sense_regs) \
- struct intc_desc symbol __initdata = { \
- _INTC_ARRAY(vectors), _INTC_ARRAY(groups), \
-- _INTC_ARRAY(priorities), \
- _INTC_ARRAY(mask_regs), _INTC_ARRAY(prio_regs), \
- _INTC_ARRAY(sense_regs), \
- chipname, \
-diff --git a/include/asm-sh/io.h b/include/asm-sh/io.h
-index 6ed34d8..94900c0 100644
---- a/include/asm-sh/io.h
-+++ b/include/asm-sh/io.h
-@@ -191,6 +191,8 @@ __BUILD_MEMORY_STRING(w, u16)
-
- #define mmiowb() wmb() /* synco on SH-4A, otherwise a nop */
-
-+#define IO_SPACE_LIMIT 0xffffffff
-+
- /*
- * This function provides a method for the generic case where a board-specific
- * ioport_map simply needs to return the port + some arbitrary port base.
-@@ -226,6 +228,11 @@ static inline unsigned int ctrl_inl(unsigned long addr)
- return *(volatile unsigned long*)addr;
- }
-
-+static inline unsigned long long ctrl_inq(unsigned long addr)
-+{
-+ return *(volatile unsigned long long*)addr;
-+}
-+
- static inline void ctrl_outb(unsigned char b, unsigned long addr)
- {
- *(volatile unsigned char*)addr = b;
-@@ -241,49 +248,52 @@ static inline void ctrl_outl(unsigned int b, unsigned long addr)
- *(volatile unsigned long*)addr = b;
- }
-
-+static inline void ctrl_outq(unsigned long long b, unsigned long addr)
-+{
-+ *(volatile unsigned long long*)addr = b;
-+}
-+
- static inline void ctrl_delay(void)
- {
-+#ifdef P2SEG
- ctrl_inw(P2SEG);
-+#endif
- }
-
--#define IO_SPACE_LIMIT 0xffffffff
-+/* Quad-word real-mode I/O, don't ask.. */
-+unsigned long long peek_real_address_q(unsigned long long addr);
-+unsigned long long poke_real_address_q(unsigned long long addr,
-+ unsigned long long val);
-
--#ifdef CONFIG_MMU
-/*
-- * Change virtual addresses to physical addresses and vv.
-- * These are trivial on the 1:1 Linux/SuperH mapping
+- * Convention:
+- * read{b,w,l}/write{b,w,l} are for PCI,
+- * while in{b,w,l}/out{b,w,l} are for ISA
+- * These may (will) be platform specific functions.
+- *
+- * In addition, we have
+- * ctrl_in{b,w,l}/ctrl_out{b,w,l} for SuperH specific I/O.
+- * which are processor specific. Address should be the result of
+- * onchip_remap();
- */
--static inline unsigned long virt_to_phys(volatile void *address)
+-
+-#include <linux/compiler.h>
+-#include <asm/cache.h>
+-#include <asm/system.h>
+-#include <asm/page.h>
+-#include <asm-generic/iomap.h>
+-
+-/*
+- * Nothing overly special here.. instead of doing the same thing
+- * over and over again, we just define a set of sh64_in/out functions
+- * with an implicit size. The traditional read{b,w,l}/write{b,w,l}
+- * mess is wrapped to this, as are the SH-specific ctrl_in/out routines.
+- */
+-static inline unsigned char sh64_in8(const volatile void __iomem *addr)
-{
-- return PHYSADDR(address);
+- return *(volatile unsigned char __force *)addr;
-}
-+/* arch/sh/mm/ioremap_64.c */
-+unsigned long onchip_remap(unsigned long addr, unsigned long size,
-+ const char *name);
-+extern void onchip_unmap(unsigned long vaddr);
-
--static inline void *phys_to_virt(unsigned long address)
+-
+-static inline unsigned short sh64_in16(const volatile void __iomem *addr)
-{
-- return (void *)P1SEGADDR(address);
+- return *(volatile unsigned short __force *)addr;
-}
--#else
--#define phys_to_virt(address) ((void *)(address))
-+#if !defined(CONFIG_MMU)
- #define virt_to_phys(address) ((unsigned long)(address))
-+#define phys_to_virt(address) ((void *)(address))
-+#else
-+#define virt_to_phys(address) (__pa(address))
-+#define phys_to_virt(address) (__va(address))
- #endif
-
- /*
-- * readX/writeX() are used to access memory mapped devices. On some
-- * architectures the memory mapped IO stuff needs to be accessed
-- * differently. On the x86 architecture, we just read/write the
-- * memory location directly.
-+ * On 32-bit SH, we traditionally have the whole physical address space
-+ * mapped at all times (as MIPS does), so "ioremap()" and "iounmap()" do
-+ * not need to do anything but place the address in the proper segment.
-+ * This is true for P1 and P2 addresses, as well as some P3 ones.
-+ * However, most of the P3 addresses and newer cores using extended
-+ * addressing need to map through page tables, so the ioremap()
-+ * implementation becomes a bit more complicated.
- *
-- * On SH, we traditionally have the whole physical address space mapped
-- * at all times (as MIPS does), so "ioremap()" and "iounmap()" do not
-- * need to do anything but place the address in the proper segment. This
-- * is true for P1 and P2 addresses, as well as some P3 ones. However,
-- * most of the P3 addresses and newer cores using extended addressing
-- * need to map through page tables, so the ioremap() implementation
-- * becomes a bit more complicated. See arch/sh/mm/ioremap.c for
-- * additional notes on this.
-+ * See arch/sh/mm/ioremap.c for additional notes on this.
- *
- * We cheat a bit and always return uncachable areas until we've fixed
- * the drivers to handle caching properly.
-+ *
-+ * On the SH-5 the concept of segmentation in the 1:1 PXSEG sense simply
-+ * doesn't exist, so everything must go through page tables.
- */
- #ifdef CONFIG_MMU
- void __iomem *__ioremap(unsigned long offset, unsigned long size,
-@@ -297,6 +307,7 @@ void __iounmap(void __iomem *addr);
- static inline void __iomem *
- __ioremap_mode(unsigned long offset, unsigned long size, unsigned long flags)
- {
-+#ifdef CONFIG_SUPERH32
- unsigned long last_addr = offset + size - 1;
-
- /*
-@@ -311,6 +322,7 @@ __ioremap_mode(unsigned long offset, unsigned long size, unsigned long flags)
-
- return (void __iomem *)P2SEGADDR(offset);
- }
-+#endif
-
- return __ioremap(offset, size, flags);
- }
-diff --git a/include/asm-sh/irqflags.h b/include/asm-sh/irqflags.h
-index 9dedc1b..46e71da 100644
---- a/include/asm-sh/irqflags.h
-+++ b/include/asm-sh/irqflags.h
-@@ -1,81 +1,11 @@
- #ifndef __ASM_SH_IRQFLAGS_H
- #define __ASM_SH_IRQFLAGS_H
-
--static inline void raw_local_irq_enable(void)
--{
-- unsigned long __dummy0, __dummy1;
-
-- __asm__ __volatile__ (
-- "stc sr, %0\n\t"
-- "and %1, %0\n\t"
--#ifdef CONFIG_CPU_HAS_SR_RB
-- "stc r6_bank, %1\n\t"
-- "or %1, %0\n\t"
-+#ifdef CONFIG_SUPERH32
-+#include "irqflags_32.h"
-+#else
-+#include "irqflags_64.h"
- #endif
-- "ldc %0, sr\n\t"
-- : "=&r" (__dummy0), "=r" (__dummy1)
-- : "1" (~0x000000f0)
-- : "memory"
-- );
+-static inline unsigned int sh64_in32(const volatile void __iomem *addr)
+-{
+- return *(volatile unsigned int __force *)addr;
-}
-
--static inline void raw_local_irq_disable(void)
+-static inline unsigned long long sh64_in64(const volatile void __iomem *addr)
-{
-- unsigned long flags;
--
-- __asm__ __volatile__ (
-- "stc sr, %0\n\t"
-- "or #0xf0, %0\n\t"
-- "ldc %0, sr\n\t"
-- : "=&z" (flags)
-- : /* no inputs */
-- : "memory"
-- );
+- return *(volatile unsigned long long __force *)addr;
-}
-
--static inline void set_bl_bit(void)
+-static inline void sh64_out8(unsigned char b, volatile void __iomem *addr)
-{
-- unsigned long __dummy0, __dummy1;
--
-- __asm__ __volatile__ (
-- "stc sr, %0\n\t"
-- "or %2, %0\n\t"
-- "and %3, %0\n\t"
-- "ldc %0, sr\n\t"
-- : "=&r" (__dummy0), "=r" (__dummy1)
-- : "r" (0x10000000), "r" (0xffffff0f)
-- : "memory"
-- );
+- *(volatile unsigned char __force *)addr = b;
+- wmb();
-}
-
--static inline void clear_bl_bit(void)
+-static inline void sh64_out16(unsigned short b, volatile void __iomem *addr)
-{
-- unsigned long __dummy0, __dummy1;
+- *(volatile unsigned short __force *)addr = b;
+- wmb();
+-}
-
-- __asm__ __volatile__ (
-- "stc sr, %0\n\t"
-- "and %2, %0\n\t"
-- "ldc %0, sr\n\t"
-- : "=&r" (__dummy0), "=r" (__dummy1)
-- : "1" (~0x10000000)
-- : "memory"
-- );
+-static inline void sh64_out32(unsigned int b, volatile void __iomem *addr)
+-{
+- *(volatile unsigned int __force *)addr = b;
+- wmb();
-}
-
--static inline unsigned long __raw_local_save_flags(void)
+-static inline void sh64_out64(unsigned long long b, volatile void __iomem *addr)
-{
-- unsigned long flags;
+- *(volatile unsigned long long __force *)addr = b;
+- wmb();
+-}
-
-- __asm__ __volatile__ (
-- "stc sr, %0\n\t"
-- "and #0xf0, %0\n\t"
-- : "=&z" (flags)
-- : /* no inputs */
-- : "memory"
-- );
+-#define readb(addr) sh64_in8(addr)
+-#define readw(addr) sh64_in16(addr)
+-#define readl(addr) sh64_in32(addr)
+-#define readb_relaxed(addr) sh64_in8(addr)
+-#define readw_relaxed(addr) sh64_in16(addr)
+-#define readl_relaxed(addr) sh64_in32(addr)
-
-- return flags;
--}
-
- #define raw_local_save_flags(flags) \
- do { (flags) = __raw_local_save_flags(); } while (0)
-@@ -92,25 +22,6 @@ static inline int raw_irqs_disabled(void)
- return raw_irqs_disabled_flags(flags);
- }
-
--static inline unsigned long __raw_local_irq_save(void)
--{
-- unsigned long flags, __dummy;
+-#define writeb(b, addr) sh64_out8(b, addr)
+-#define writew(b, addr) sh64_out16(b, addr)
+-#define writel(b, addr) sh64_out32(b, addr)
-
-- __asm__ __volatile__ (
-- "stc sr, %1\n\t"
-- "mov %1, %0\n\t"
-- "or #0xf0, %0\n\t"
-- "ldc %0, sr\n\t"
-- "mov %1, %0\n\t"
-- "and #0xf0, %0\n\t"
-- : "=&z" (flags), "=&r" (__dummy)
-- : /* no inputs */
-- : "memory"
-- );
+-#define ctrl_inb(addr) sh64_in8(ioport_map(addr, 1))
+-#define ctrl_inw(addr) sh64_in16(ioport_map(addr, 2))
+-#define ctrl_inl(addr) sh64_in32(ioport_map(addr, 4))
-
-- return flags;
--}
+-#define ctrl_outb(b, addr) sh64_out8(b, ioport_map(addr, 1))
+-#define ctrl_outw(b, addr) sh64_out16(b, ioport_map(addr, 2))
+-#define ctrl_outl(b, addr) sh64_out32(b, ioport_map(addr, 4))
-
- #define raw_local_irq_save(flags) \
- do { (flags) = __raw_local_irq_save(); } while (0)
-
-diff --git a/include/asm-sh/irqflags_32.h b/include/asm-sh/irqflags_32.h
-new file mode 100644
-index 0000000..60218f5
---- /dev/null
-+++ b/include/asm-sh/irqflags_32.h
-@@ -0,0 +1,99 @@
-+#ifndef __ASM_SH_IRQFLAGS_32_H
-+#define __ASM_SH_IRQFLAGS_32_H
-+
-+static inline void raw_local_irq_enable(void)
-+{
-+ unsigned long __dummy0, __dummy1;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %0\n\t"
-+ "and %1, %0\n\t"
-+#ifdef CONFIG_CPU_HAS_SR_RB
-+ "stc r6_bank, %1\n\t"
-+ "or %1, %0\n\t"
-+#endif
-+ "ldc %0, sr\n\t"
-+ : "=&r" (__dummy0), "=r" (__dummy1)
-+ : "1" (~0x000000f0)
-+ : "memory"
-+ );
-+}
-+
-+static inline void raw_local_irq_disable(void)
-+{
-+ unsigned long flags;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %0\n\t"
-+ "or #0xf0, %0\n\t"
-+ "ldc %0, sr\n\t"
-+ : "=&z" (flags)
-+ : /* no inputs */
-+ : "memory"
-+ );
-+}
-+
-+static inline void set_bl_bit(void)
-+{
-+ unsigned long __dummy0, __dummy1;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %0\n\t"
-+ "or %2, %0\n\t"
-+ "and %3, %0\n\t"
-+ "ldc %0, sr\n\t"
-+ : "=&r" (__dummy0), "=r" (__dummy1)
-+ : "r" (0x10000000), "r" (0xffffff0f)
-+ : "memory"
-+ );
-+}
-+
-+static inline void clear_bl_bit(void)
-+{
-+ unsigned long __dummy0, __dummy1;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %0\n\t"
-+ "and %2, %0\n\t"
-+ "ldc %0, sr\n\t"
-+ : "=&r" (__dummy0), "=r" (__dummy1)
-+ : "1" (~0x10000000)
-+ : "memory"
-+ );
-+}
-+
-+static inline unsigned long __raw_local_save_flags(void)
-+{
-+ unsigned long flags;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %0\n\t"
-+ "and #0xf0, %0\n\t"
-+ : "=&z" (flags)
-+ : /* no inputs */
-+ : "memory"
-+ );
-+
-+ return flags;
-+}
-+
-+static inline unsigned long __raw_local_irq_save(void)
-+{
-+ unsigned long flags, __dummy;
-+
-+ __asm__ __volatile__ (
-+ "stc sr, %1\n\t"
-+ "mov %1, %0\n\t"
-+ "or #0xf0, %0\n\t"
-+ "ldc %0, sr\n\t"
-+ "mov %1, %0\n\t"
-+ "and #0xf0, %0\n\t"
-+ : "=&z" (flags), "=&r" (__dummy)
-+ : /* no inputs */
-+ : "memory"
-+ );
-+
-+ return flags;
-+}
-+
-+#endif /* __ASM_SH_IRQFLAGS_32_H */
-diff --git a/include/asm-sh/irqflags_64.h b/include/asm-sh/irqflags_64.h
-new file mode 100644
-index 0000000..4f6b8a5
---- /dev/null
-+++ b/include/asm-sh/irqflags_64.h
-@@ -0,0 +1,85 @@
-+#ifndef __ASM_SH_IRQFLAGS_64_H
-+#define __ASM_SH_IRQFLAGS_64_H
-+
-+#include <asm/cpu/registers.h>
-+
-+#define SR_MASK_LL 0x00000000000000f0LL
-+#define SR_BL_LL 0x0000000010000000LL
-+
-+static inline void raw_local_irq_enable(void)
-+{
-+ unsigned long long __dummy0, __dummy1 = ~SR_MASK_LL;
-+
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "and %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy0)
-+ : "r" (__dummy1));
-+}
-+
-+static inline void raw_local_irq_disable(void)
-+{
-+ unsigned long long __dummy0, __dummy1 = SR_MASK_LL;
-+
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "or %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy0)
-+ : "r" (__dummy1));
-+}
-+
-+static inline void set_bl_bit(void)
-+{
-+ unsigned long long __dummy0, __dummy1 = SR_BL_LL;
-+
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "or %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy0)
-+ : "r" (__dummy1));
-+
-+}
-+
-+static inline void clear_bl_bit(void)
-+{
-+ unsigned long long __dummy0, __dummy1 = ~SR_BL_LL;
-+
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "and %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy0)
-+ : "r" (__dummy1));
-+}
-+
-+static inline unsigned long __raw_local_save_flags(void)
-+{
-+ unsigned long long __dummy = SR_MASK_LL;
-+ unsigned long flags;
-+
-+ __asm__ __volatile__ (
-+ "getcon " __SR ", %0\n\t"
-+ "and %0, %1, %0"
-+ : "=&r" (flags)
-+ : "r" (__dummy));
-+
-+ return flags;
-+}
-+
-+static inline unsigned long __raw_local_irq_save(void)
-+{
-+ unsigned long long __dummy0, __dummy1 = SR_MASK_LL;
-+ unsigned long flags;
-+
-+ __asm__ __volatile__ (
-+ "getcon " __SR ", %1\n\t"
-+ "or %1, r63, %0\n\t"
-+ "or %1, %2, %1\n\t"
-+ "putcon %1, " __SR "\n\t"
-+ "and %0, %2, %0"
-+ : "=&r" (flags), "=&r" (__dummy0)
-+ : "r" (__dummy1));
-+
-+ return flags;
-+}
-+
-+#endif /* __ASM_SH_IRQFLAGS_64_H */
-diff --git a/include/asm-sh/machvec.h b/include/asm-sh/machvec.h
-index 088698b..b2e4124 100644
---- a/include/asm-sh/machvec.h
-+++ b/include/asm-sh/machvec.h
-@@ -56,9 +56,6 @@ struct sh_machine_vector {
-
- void (*mv_heartbeat)(void);
-
-- void *(*mv_consistent_alloc)(struct device *, size_t, dma_addr_t *, gfp_t);
-- int (*mv_consistent_free)(struct device *, size_t, void *, dma_addr_t);
+-#define ioread8(addr) sh64_in8(addr)
+-#define ioread16(addr) sh64_in16(addr)
+-#define ioread32(addr) sh64_in32(addr)
+-#define iowrite8(b, addr) sh64_out8(b, addr)
+-#define iowrite16(b, addr) sh64_out16(b, addr)
+-#define iowrite32(b, addr) sh64_out32(b, addr)
+-
+-#define inb(addr) ctrl_inb(addr)
+-#define inw(addr) ctrl_inw(addr)
+-#define inl(addr) ctrl_inl(addr)
+-#define outb(b, addr) ctrl_outb(b, addr)
+-#define outw(b, addr) ctrl_outw(b, addr)
+-#define outl(b, addr) ctrl_outl(b, addr)
+-
+-void outsw(unsigned long port, const void *addr, unsigned long count);
+-void insw(unsigned long port, void *addr, unsigned long count);
+-void outsl(unsigned long port, const void *addr, unsigned long count);
+-void insl(unsigned long port, void *addr, unsigned long count);
+-
+-#define inb_p(addr) inb(addr)
+-#define inw_p(addr) inw(addr)
+-#define inl_p(addr) inl(addr)
+-#define outb_p(x,addr) outb(x,addr)
+-#define outw_p(x,addr) outw(x,addr)
+-#define outl_p(x,addr) outl(x,addr)
+-
+-#define __raw_readb readb
+-#define __raw_readw readw
+-#define __raw_readl readl
+-#define __raw_writeb writeb
+-#define __raw_writew writew
+-#define __raw_writel writel
+-
+-void memcpy_toio(void __iomem *to, const void *from, long count);
+-void memcpy_fromio(void *to, void __iomem *from, long count);
+-
+-#define mmiowb()
-
- void __iomem *(*mv_ioport_map)(unsigned long port, unsigned int size);
- void (*mv_ioport_unmap)(void __iomem *);
- };
-@@ -68,6 +65,6 @@ extern struct sh_machine_vector sh_mv;
- #define get_system_type() sh_mv.mv_name
-
- #define __initmv \
-- __attribute_used__ __attribute__((__section__ (".machvec.init")))
-+ __used __section(.machvec.init)
-
- #endif /* _ASM_SH_MACHVEC_H */
-diff --git a/include/asm-sh/microdev.h b/include/asm-sh/microdev.h
-index 018332a..1aed158 100644
---- a/include/asm-sh/microdev.h
-+++ b/include/asm-sh/microdev.h
-@@ -17,7 +17,7 @@ extern void microdev_print_fpga_intc_status(void);
- /*
- * The following are useful macros for manipulating the interrupt
- * controller (INTC) on the CPU-board FPGA. should be noted that there
-- * is an INTC on the FPGA, and a seperate INTC on the SH4-202 core -
-+ * is an INTC on the FPGA, and a separate INTC on the SH4-202 core -
- * these are two different things, both of which need to be programmed to
- * correctly route - unfortunately, they have the same name and
- * abbreviations!
-@@ -25,7 +25,7 @@ extern void microdev_print_fpga_intc_status(void);
- #define MICRODEV_FPGA_INTC_BASE 0xa6110000ul /* INTC base address on CPU-board FPGA */
- #define MICRODEV_FPGA_INTENB_REG (MICRODEV_FPGA_INTC_BASE+0ul) /* Interrupt Enable Register on INTC on CPU-board FPGA */
- #define MICRODEV_FPGA_INTDSB_REG (MICRODEV_FPGA_INTC_BASE+8ul) /* Interrupt Disable Register on INTC on CPU-board FPGA */
--#define MICRODEV_FPGA_INTC_MASK(n) (1ul<<(n)) /* Interupt mask to enable/disable INTC in CPU-board FPGA */
-+#define MICRODEV_FPGA_INTC_MASK(n) (1ul<<(n)) /* Interrupt mask to enable/disable INTC in CPU-board FPGA */
- #define MICRODEV_FPGA_INTPRI_REG(n) (MICRODEV_FPGA_INTC_BASE+0x10+((n)/8)*8)/* Interrupt Priority Register on INTC on CPU-board FPGA */
- #define MICRODEV_FPGA_INTPRI_LEVEL(n,x) ((x)<<(((n)%8)*4)) /* MICRODEV_FPGA_INTPRI_LEVEL(int_number, int_level) */
- #define MICRODEV_FPGA_INTPRI_MASK(n) (MICRODEV_FPGA_INTPRI_LEVEL((n),0xful)) /* Interrupt Priority Mask on INTC on CPU-board FPGA */
-diff --git a/include/asm-sh/mmu_context.h b/include/asm-sh/mmu_context.h
-index 199662b..fe58d00 100644
---- a/include/asm-sh/mmu_context.h
-+++ b/include/asm-sh/mmu_context.h
-@@ -1,13 +1,13 @@
- /*
- * Copyright (C) 1999 Niibe Yutaka
-- * Copyright (C) 2003 - 2006 Paul Mundt
-+ * Copyright (C) 2003 - 2007 Paul Mundt
- *
- * ASID handling idea taken from MIPS implementation.
- */
- #ifndef __ASM_SH_MMU_CONTEXT_H
- #define __ASM_SH_MMU_CONTEXT_H
-#ifdef __KERNEL__
-
-+#ifdef __KERNEL__
- #include <asm/cpu/mmu_context.h>
- #include <asm/tlbflush.h>
- #include <asm/uaccess.h>
-@@ -19,7 +19,6 @@
- * (a) TLB cache version (or round, cycle whatever expression you like)
- * (b) ASID (Address Space IDentifier)
- */
-
- #define MMU_CONTEXT_ASID_MASK 0x000000ff
- #define MMU_CONTEXT_VERSION_MASK 0xffffff00
- #define MMU_CONTEXT_FIRST_VERSION 0x00000100
-@@ -28,10 +27,11 @@
- /* ASID is 8-bit value, so it can't be 0x100 */
- #define MMU_NO_ASID 0x100
-
--#define cpu_context(cpu, mm) ((mm)->context.id[cpu])
--#define cpu_asid(cpu, mm) (cpu_context((cpu), (mm)) & \
-- MMU_CONTEXT_ASID_MASK)
- #define asid_cache(cpu) (cpu_data[cpu].asid_cache)
-+#define cpu_context(cpu, mm) ((mm)->context.id[cpu])
-+
-+#define cpu_asid(cpu, mm) \
-+ (cpu_context((cpu), (mm)) & MMU_CONTEXT_ASID_MASK)
-
- /*
- * Virtual Page Number mask
-@@ -39,6 +39,12 @@
- #define MMU_VPN_MASK 0xfffff000
-
- #ifdef CONFIG_MMU
-+#if defined(CONFIG_SUPERH32)
-+#include "mmu_context_32.h"
-+#else
-+#include "mmu_context_64.h"
-+#endif
-+
- /*
- * Get MMU context if needed.
- */
-@@ -59,6 +65,14 @@ static inline void get_mmu_context(struct mm_struct *mm, unsigned int cpu)
- */
- flush_tlb_all();
-
-+#ifdef CONFIG_SUPERH64
-+ /*
-+ * The SH-5 cache uses the ASIDs, requiring both the I and D
-+ * cache to be flushed when the ASID is exhausted. Weak.
-+ */
-+ flush_cache_all();
-+#endif
-+
- /*
- * Fix version; Note that we avoid version #0
- * to distinguish NO_CONTEXT.
-@@ -86,39 +100,6 @@ static inline int init_new_context(struct task_struct *tsk,
- }
-
- /*
-- * Destroy context related info for an mm_struct that is about
-- * to be put to rest.
+-#ifdef CONFIG_SH_CAYMAN
+-extern unsigned long smsc_superio_virt;
+-#endif
+-#ifdef CONFIG_PCI
+-extern unsigned long pciio_virt;
+-#endif
+-
+-#define IO_SPACE_LIMIT 0xffffffff
+-
+-/*
+- * Change virtual addresses to physical addresses and vv.
+- * These are trivial on the 1:1 Linux/SuperH mapping
- */
--static inline void destroy_context(struct mm_struct *mm)
+-static inline unsigned long virt_to_phys(volatile void * address)
-{
-- /* Do nothing */
+- return __pa(address);
-}
-
--static inline void set_asid(unsigned long asid)
+-static inline void * phys_to_virt(unsigned long address)
-{
-- unsigned long __dummy;
--
-- __asm__ __volatile__ ("mov.l %2, %0\n\t"
-- "and %3, %0\n\t"
-- "or %1, %0\n\t"
-- "mov.l %0, %2"
-- : "=&r" (__dummy)
-- : "r" (asid), "m" (__m(MMU_PTEH)),
-- "r" (0xffffff00));
+- return __va(address);
-}
-
--static inline unsigned long get_asid(void)
--{
-- unsigned long asid;
--
-- __asm__ __volatile__ ("mov.l %1, %0"
-- : "=r" (asid)
-- : "m" (__m(MMU_PTEH)));
-- asid &= MMU_CONTEXT_ASID_MASK;
-- return asid;
--}
+-extern void * __ioremap(unsigned long phys_addr, unsigned long size,
+- unsigned long flags);
-
--/*
- * After we have set current->mm to a new value, this activates
- * the context for the new mm so we see the new mappings.
- */
-@@ -128,17 +109,6 @@ static inline void activate_context(struct mm_struct *mm, unsigned int cpu)
- set_asid(cpu_asid(cpu, mm));
- }
-
--/* MMU_TTB is used for optimizing the fault handling. */
--static inline void set_TTB(pgd_t *pgd)
+-static inline void * ioremap(unsigned long phys_addr, unsigned long size)
-{
-- ctrl_outl((unsigned long)pgd, MMU_TTB);
+- return __ioremap(phys_addr, size, 1);
-}
-
--static inline pgd_t *get_TTB(void)
+-static inline void * ioremap_nocache (unsigned long phys_addr, unsigned long size)
-{
-- return (pgd_t *)ctrl_inl(MMU_TTB);
+- return __ioremap(phys_addr, size, 0);
-}
-
- static inline void switch_mm(struct mm_struct *prev,
- struct mm_struct *next,
- struct task_struct *tsk)
-@@ -153,17 +123,7 @@ static inline void switch_mm(struct mm_struct *prev,
- if (!cpu_test_and_set(cpu, next->cpu_vm_mask))
- activate_context(next, cpu);
- }
+-extern void iounmap(void *addr);
-
--#define deactivate_mm(tsk,mm) do { } while (0)
+-unsigned long onchip_remap(unsigned long addr, unsigned long size, const char* name);
+-extern void onchip_unmap(unsigned long vaddr);
-
--#define activate_mm(prev, next) \
-- switch_mm((prev),(next),NULL)
+-/*
+- * Convert a physical pointer to a virtual kernel pointer for /dev/mem
+- * access
+- */
+-#define xlate_dev_mem_ptr(p) __va(p)
-
--static inline void
--enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+-/*
+- * Convert a virtual cached pointer to an uncached pointer
+- */
+-#define xlate_dev_kmem_ptr(p) p
+-
+-#endif /* __KERNEL__ */
+-#endif /* __ASM_SH64_IO_H */
+diff --git a/include/asm-sh64/ioctl.h b/include/asm-sh64/ioctl.h
+deleted file mode 100644
+index b279fe0..0000000
+--- a/include/asm-sh64/ioctl.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/ioctl.h>
+diff --git a/include/asm-sh64/ioctls.h b/include/asm-sh64/ioctls.h
+deleted file mode 100644
+index 6b0c04f..0000000
+--- a/include/asm-sh64/ioctls.h
++++ /dev/null
+@@ -1,116 +0,0 @@
+-#ifndef __ASM_SH64_IOCTLS_H
+-#define __ASM_SH64_IOCTLS_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/ioctls.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2004 Richard Curnow
+- *
+- */
+-
+-#include <asm/ioctl.h>
+-
+-#define FIOCLEX 0x6601 /* _IO('f', 1) */
+-#define FIONCLEX 0x6602 /* _IO('f', 2) */
+-#define FIOASYNC 0x4004667d /* _IOW('f', 125, int) */
+-#define FIONBIO 0x4004667e /* _IOW('f', 126, int) */
+-#define FIONREAD 0x8004667f /* _IOW('f', 127, int) */
+-#define TIOCINQ FIONREAD
+-#define FIOQSIZE 0x80086680 /* _IOR('f', 128, loff_t) */
+-
+-#define TCGETS 0x5401
+-#define TCSETS 0x5402
+-#define TCSETSW 0x5403
+-#define TCSETSF 0x5404
+-
+-#define TCGETA 0x80127417 /* _IOR('t', 23, struct termio) */
+-#define TCSETA 0x40127418 /* _IOW('t', 24, struct termio) */
+-#define TCSETAW 0x40127419 /* _IOW('t', 25, struct termio) */
+-#define TCSETAF 0x4012741c /* _IOW('t', 28, struct termio) */
+-
+-#define TCSBRK 0x741d /* _IO('t', 29) */
+-#define TCXONC 0x741e /* _IO('t', 30) */
+-#define TCFLSH 0x741f /* _IO('t', 31) */
+-
+-#define TIOCSWINSZ 0x40087467 /* _IOW('t', 103, struct winsize) */
+-#define TIOCGWINSZ 0x80087468 /* _IOR('t', 104, struct winsize) */
+-#define TIOCSTART 0x746e /* _IO('t', 110) start output, like ^Q */
+-#define TIOCSTOP 0x746f /* _IO('t', 111) stop output, like ^S */
+-#define TIOCOUTQ 0x80047473 /* _IOR('t', 115, int) output queue size */
+-
+-#define TIOCSPGRP 0x40047476 /* _IOW('t', 118, int) */
+-#define TIOCGPGRP 0x80047477 /* _IOR('t', 119, int) */
+-
+-#define TIOCEXCL 0x540c /* _IO('T', 12) */
+-#define TIOCNXCL 0x540d /* _IO('T', 13) */
+-#define TIOCSCTTY 0x540e /* _IO('T', 14) */
+-
+-#define TIOCSTI 0x40015412 /* _IOW('T', 18, char) 0x5412 */
+-#define TIOCMGET 0x80045415 /* _IOR('T', 21, unsigned int) 0x5415 */
+-#define TIOCMBIS 0x40045416 /* _IOW('T', 22, unsigned int) 0x5416 */
+-#define TIOCMBIC 0x40045417 /* _IOW('T', 23, unsigned int) 0x5417 */
+-#define TIOCMSET 0x40045418 /* _IOW('T', 24, unsigned int) 0x5418 */
+-
+-#define TIOCM_LE 0x001
+-#define TIOCM_DTR 0x002
+-#define TIOCM_RTS 0x004
+-#define TIOCM_ST 0x008
+-#define TIOCM_SR 0x010
+-#define TIOCM_CTS 0x020
+-#define TIOCM_CAR 0x040
+-#define TIOCM_RNG 0x080
+-#define TIOCM_DSR 0x100
+-#define TIOCM_CD TIOCM_CAR
+-#define TIOCM_RI TIOCM_RNG
+-
+-#define TIOCGSOFTCAR 0x80045419 /* _IOR('T', 25, unsigned int) 0x5419 */
+-#define TIOCSSOFTCAR 0x4004541a /* _IOW('T', 26, unsigned int) 0x541A */
+-#define TIOCLINUX 0x4004541c /* _IOW('T', 28, char) 0x541C */
+-#define TIOCCONS 0x541d /* _IO('T', 29) */
+-#define TIOCGSERIAL 0x803c541e /* _IOR('T', 30, struct serial_struct) 0x541E */
+-#define TIOCSSERIAL 0x403c541f /* _IOW('T', 31, struct serial_struct) 0x541F */
+-#define TIOCPKT 0x40045420 /* _IOW('T', 32, int) 0x5420 */
+-
+-#define TIOCPKT_DATA 0
+-#define TIOCPKT_FLUSHREAD 1
+-#define TIOCPKT_FLUSHWRITE 2
+-#define TIOCPKT_STOP 4
+-#define TIOCPKT_START 8
+-#define TIOCPKT_NOSTOP 16
+-#define TIOCPKT_DOSTOP 32
+-
+-
+-#define TIOCNOTTY 0x5422 /* _IO('T', 34) */
+-#define TIOCSETD 0x40045423 /* _IOW('T', 35, int) 0x5423 */
+-#define TIOCGETD 0x80045424 /* _IOR('T', 36, int) 0x5424 */
+-#define TCSBRKP 0x40045424 /* _IOW('T', 37, int) 0x5425 */ /* Needed for POSIX tcsendbreak() */
+-#define TIOCTTYGSTRUCT 0x8c105426 /* _IOR('T', 38, struct tty_struct) 0x5426 */ /* For debugging only */
+-#define TIOCSBRK 0x5427 /* _IO('T', 39) */ /* BSD compatibility */
+-#define TIOCCBRK 0x5428 /* _IO('T', 40) */ /* BSD compatibility */
+-#define TIOCGSID 0x80045429 /* _IOR('T', 41, pid_t) 0x5429 */ /* Return the session ID of FD */
+-#define TIOCGPTN 0x80045430 /* _IOR('T',0x30, unsigned int) 0x5430 Get Pty Number (of pty-mux device) */
+-#define TIOCSPTLCK 0x40045431 /* _IOW('T',0x31, int) Lock/unlock Pty */
+-
+-#define TIOCSERCONFIG 0x5453 /* _IO('T', 83) */
+-#define TIOCSERGWILD 0x80045454 /* _IOR('T', 84, int) 0x5454 */
+-#define TIOCSERSWILD 0x40045455 /* _IOW('T', 85, int) 0x5455 */
+-#define TIOCGLCKTRMIOS 0x5456
+-#define TIOCSLCKTRMIOS 0x5457
+-#define TIOCSERGSTRUCT 0x80d85458 /* _IOR('T', 88, struct async_struct) 0x5458 */ /* For debugging only */
+-#define TIOCSERGETLSR 0x80045459 /* _IOR('T', 89, unsigned int) 0x5459 */ /* Get line status register */
+-
+-/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+-#define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+-
+-#define TIOCSERGETMULTI 0x80a8545a /* _IOR('T', 90, struct serial_multiport_struct) 0x545A */ /* Get multiport config */
+-#define TIOCSERSETMULTI 0x40a8545b /* _IOW('T', 91, struct serial_multiport_struct) 0x545B */ /* Set multiport config */
+-
+-#define TIOCMIWAIT 0x545c /* _IO('T', 92) wait for a change on serial input line(s) */
+-#define TIOCGICOUNT 0x545d /* read serial port inline interrupt counts */
+-
+-#endif /* __ASM_SH64_IOCTLS_H */
+diff --git a/include/asm-sh64/ipcbuf.h b/include/asm-sh64/ipcbuf.h
+deleted file mode 100644
+index c441e35..0000000
+--- a/include/asm-sh64/ipcbuf.h
++++ /dev/null
+@@ -1,40 +0,0 @@
+-#ifndef __ASM_SH64_IPCBUF_H__
+-#define __ASM_SH64_IPCBUF_H__
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/ipcbuf.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-/*
+- * The ipc64_perm structure for i386 architecture.
+- * Note extra padding because this structure is passed back and forth
+- * between kernel and user space.
+- *
+- * Pad space is left for:
+- * - 32-bit mode_t and seq
+- * - 2 miscellaneous 32-bit values
+- */
+-
+-struct ipc64_perm
-{
--}
--#else /* !CONFIG_MMU */
-+#else
- #define get_mmu_context(mm) do { } while (0)
- #define init_new_context(tsk,mm) (0)
- #define destroy_context(mm) do { } while (0)
-@@ -173,10 +133,11 @@ enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
- #define get_TTB() (0)
- #define activate_context(mm,cpu) do { } while (0)
- #define switch_mm(prev,next,tsk) do { } while (0)
-+#endif /* CONFIG_MMU */
-+
-+#define activate_mm(prev, next) switch_mm((prev),(next),NULL)
- #define deactivate_mm(tsk,mm) do { } while (0)
--#define activate_mm(prev,next) do { } while (0)
- #define enter_lazy_tlb(mm,tsk) do { } while (0)
--#endif /* CONFIG_MMU */
-
- #if defined(CONFIG_CPU_SH3) || defined(CONFIG_CPU_SH4)
- /*
-diff --git a/include/asm-sh/mmu_context_32.h b/include/asm-sh/mmu_context_32.h
-new file mode 100644
-index 0000000..f4f9aeb
---- /dev/null
-+++ b/include/asm-sh/mmu_context_32.h
-@@ -0,0 +1,47 @@
-+#ifndef __ASM_SH_MMU_CONTEXT_32_H
-+#define __ASM_SH_MMU_CONTEXT_32_H
-+
-+/*
-+ * Destroy context related info for an mm_struct that is about
-+ * to be put to rest.
-+ */
-+static inline void destroy_context(struct mm_struct *mm)
-+{
-+ /* Do nothing */
-+}
-+
-+static inline void set_asid(unsigned long asid)
-+{
-+ unsigned long __dummy;
-+
-+ __asm__ __volatile__ ("mov.l %2, %0\n\t"
-+ "and %3, %0\n\t"
-+ "or %1, %0\n\t"
-+ "mov.l %0, %2"
-+ : "=&r" (__dummy)
-+ : "r" (asid), "m" (__m(MMU_PTEH)),
-+ "r" (0xffffff00));
-+}
-+
-+static inline unsigned long get_asid(void)
-+{
-+ unsigned long asid;
-+
-+ __asm__ __volatile__ ("mov.l %1, %0"
-+ : "=r" (asid)
-+ : "m" (__m(MMU_PTEH)));
-+ asid &= MMU_CONTEXT_ASID_MASK;
-+ return asid;
-+}
-+
-+/* MMU_TTB is used for optimizing the fault handling. */
-+static inline void set_TTB(pgd_t *pgd)
-+{
-+ ctrl_outl((unsigned long)pgd, MMU_TTB);
-+}
-+
-+static inline pgd_t *get_TTB(void)
-+{
-+ return (pgd_t *)ctrl_inl(MMU_TTB);
-+}
-+#endif /* __ASM_SH_MMU_CONTEXT_32_H */
-diff --git a/include/asm-sh/mmu_context_64.h b/include/asm-sh/mmu_context_64.h
-new file mode 100644
-index 0000000..020be74
---- /dev/null
-+++ b/include/asm-sh/mmu_context_64.h
-@@ -0,0 +1,75 @@
-+#ifndef __ASM_SH_MMU_CONTEXT_64_H
-+#define __ASM_SH_MMU_CONTEXT_64_H
-+
-+/*
-+ * sh64-specific mmu_context interface.
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003 - 2007 Paul Mundt
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#include <asm/cpu/registers.h>
-+#include <asm/cacheflush.h>
-+
-+#define SR_ASID_MASK 0xffffffffff00ffffULL
-+#define SR_ASID_SHIFT 16
-+
-+/*
-+ * Destroy context related info for an mm_struct that is about
-+ * to be put to rest.
-+ */
-+static inline void destroy_context(struct mm_struct *mm)
-+{
-+ /* Well, at least free TLB entries */
-+ flush_tlb_mm(mm);
-+}
-+
-+static inline unsigned long get_asid(void)
-+{
-+ unsigned long long sr;
-+
-+ asm volatile ("getcon " __SR ", %0\n\t"
-+ : "=r" (sr));
-+
-+ sr = (sr >> SR_ASID_SHIFT) & MMU_CONTEXT_ASID_MASK;
-+ return (unsigned long) sr;
-+}
-+
-+/* Set ASID into SR */
-+static inline void set_asid(unsigned long asid)
-+{
-+ unsigned long long sr, pc;
-+
-+ asm volatile ("getcon " __SR ", %0" : "=r" (sr));
-+
-+ sr = (sr & SR_ASID_MASK) | (asid << SR_ASID_SHIFT);
-+
-+ /*
-+ * It is possible that this function may be inlined and so to avoid
-+ * the assembler reporting duplicate symbols we make use of the
-+ * gas trick of generating symbols using numerics and forward
-+ * reference.
-+ */
-+ asm volatile ("movi 1, %1\n\t"
-+ "shlli %1, 28, %1\n\t"
-+ "or %0, %1, %1\n\t"
-+ "putcon %1, " __SR "\n\t"
-+ "putcon %0, " __SSR "\n\t"
-+ "movi 1f, %1\n\t"
-+ "ori %1, 1 , %1\n\t"
-+ "putcon %1, " __SPC "\n\t"
-+ "rte\n"
-+ "1:\n\t"
-+ : "=r" (sr), "=r" (pc) : "0" (sr));
-+}
-+
-+/* No spare register to twiddle, so use a software cache */
-+extern pgd_t *mmu_pdtp_cache;
-+
-+#define set_TTB(pgd) (mmu_pdtp_cache = (pgd))
-+#define get_TTB() (mmu_pdtp_cache)
-+
-+#endif /* __ASM_SH_MMU_CONTEXT_64_H */
-diff --git a/include/asm-sh/module.h b/include/asm-sh/module.h
-index 118d5a2..46eccd3 100644
---- a/include/asm-sh/module.h
-+++ b/include/asm-sh/module.h
-@@ -20,6 +20,8 @@ struct mod_arch_specific {
- # define MODULE_PROC_FAMILY "SH3LE "
- # elif defined CONFIG_CPU_SH4
- # define MODULE_PROC_FAMILY "SH4LE "
-+# elif defined CONFIG_CPU_SH5
-+# define MODULE_PROC_FAMILY "SH5LE "
- # else
- # error unknown processor family
- # endif
-@@ -30,6 +32,8 @@ struct mod_arch_specific {
- # define MODULE_PROC_FAMILY "SH3BE "
- # elif defined CONFIG_CPU_SH4
- # define MODULE_PROC_FAMILY "SH4BE "
-+# elif defined CONFIG_CPU_SH5
-+# define MODULE_PROC_FAMILY "SH5BE "
- # else
- # error unknown processor family
- # endif
-diff --git a/include/asm-sh/page.h b/include/asm-sh/page.h
-index d00a8fd..002e64a 100644
---- a/include/asm-sh/page.h
-+++ b/include/asm-sh/page.h
-@@ -5,13 +5,7 @@
- * Copyright (C) 1999 Niibe Yutaka
- */
-
+- __kernel_key_t key;
+- __kernel_uid32_t uid;
+- __kernel_gid32_t gid;
+- __kernel_uid32_t cuid;
+- __kernel_gid32_t cgid;
+- __kernel_mode_t mode;
+- unsigned short __pad1;
+- unsigned short seq;
+- unsigned short __pad2;
+- unsigned long __unused1;
+- unsigned long __unused2;
+-};
+-
+-#endif /* __ASM_SH64_IPCBUF_H__ */
+diff --git a/include/asm-sh64/irq.h b/include/asm-sh64/irq.h
+deleted file mode 100644
+index 5c9e6a8..0000000
+--- a/include/asm-sh64/irq.h
++++ /dev/null
+@@ -1,144 +0,0 @@
+-#ifndef __ASM_SH64_IRQ_H
+-#define __ASM_SH64_IRQ_H
+-
-/*
-- [ P0/U0 (virtual) ] 0x00000000 <------ User space
-- [ P1 (fixed) cached ] 0x80000000 <------ Kernel space
-- [ P2 (fixed) non-cachable] 0xA0000000 <------ Physical access
-- [ P3 (virtual) cached] 0xC0000000 <------ vmalloced area
-- [ P4 control ] 0xE0000000
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/irq.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
- */
-+#include <linux/const.h>
-
- #ifdef __KERNEL__
-
-@@ -26,15 +20,13 @@
- # error "Bogus kernel page size?"
- #endif
-
--#ifdef __ASSEMBLY__
--#define PAGE_SIZE (1 << PAGE_SHIFT)
+-
+-
+-/*
+- * Encoded IRQs are not considered worth to be supported.
+- * Main reason is that there's no per-encoded-interrupt
+- * enable/disable mechanism (as there was in SH3/4).
+- * An all enabled/all disabled is worth only if there's
+- * a cascaded IC to disable/enable/ack on. Until such
+- * IC is available there's no such support.
+- *
+- * Presumably Encoded IRQs may use extra IRQs beyond 64,
+- * below. Some logic must be added to cope with IRQ_IRL?
+- * in an exclusive way.
+- *
+- * Priorities are set at Platform level, when IRQ_IRL0-3
+- * are set to 0 Encoding is allowed. Otherwise it's not
+- * allowed.
+- */
+-
+-/* Independent IRQs */
+-#define IRQ_IRL0 0
+-#define IRQ_IRL1 1
+-#define IRQ_IRL2 2
+-#define IRQ_IRL3 3
+-
+-#define IRQ_INTA 4
+-#define IRQ_INTB 5
+-#define IRQ_INTC 6
+-#define IRQ_INTD 7
+-
+-#define IRQ_SERR 12
+-#define IRQ_ERR 13
+-#define IRQ_PWR3 14
+-#define IRQ_PWR2 15
+-#define IRQ_PWR1 16
+-#define IRQ_PWR0 17
+-
+-#define IRQ_DMTE0 18
+-#define IRQ_DMTE1 19
+-#define IRQ_DMTE2 20
+-#define IRQ_DMTE3 21
+-#define IRQ_DAERR 22
+-
+-#define IRQ_TUNI0 32
+-#define IRQ_TUNI1 33
+-#define IRQ_TUNI2 34
+-#define IRQ_TICPI2 35
+-
+-#define IRQ_ATI 36
+-#define IRQ_PRI 37
+-#define IRQ_CUI 38
+-
+-#define IRQ_ERI 39
+-#define IRQ_RXI 40
+-#define IRQ_BRI 41
+-#define IRQ_TXI 42
+-
+-#define IRQ_ITI 63
+-
+-#define NR_INTC_IRQS 64
+-
+-#ifdef CONFIG_SH_CAYMAN
+-#define NR_EXT_IRQS 32
+-#define START_EXT_IRQS 64
+-
+-/* PCI bus 2 uses encoded external interrupts on the Cayman board */
+-#define IRQ_P2INTA (START_EXT_IRQS + (3*8) + 0)
+-#define IRQ_P2INTB (START_EXT_IRQS + (3*8) + 1)
+-#define IRQ_P2INTC (START_EXT_IRQS + (3*8) + 2)
+-#define IRQ_P2INTD (START_EXT_IRQS + (3*8) + 3)
+-
+-#define I8042_KBD_IRQ (START_EXT_IRQS + 2)
+-#define I8042_AUX_IRQ (START_EXT_IRQS + 6)
+-
+-#define IRQ_CFCARD (START_EXT_IRQS + 7)
+-#define IRQ_PCMCIA (0)
+-
-#else
--#define PAGE_SIZE (1UL << PAGE_SHIFT)
+-#define NR_EXT_IRQS 0
-#endif
-
-+#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
- #define PAGE_MASK (~(PAGE_SIZE-1))
- #define PTE_MASK PAGE_MASK
-
-+/* to align the pointer to the (next) page boundary */
-+#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
-+
- #if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
- #define HPAGE_SHIFT 16
- #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
-@@ -45,6 +37,8 @@
- #define HPAGE_SHIFT 22
- #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
- #define HPAGE_SHIFT 26
-+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
-+#define HPAGE_SHIFT 29
- #endif
-
- #ifdef CONFIG_HUGETLB_PAGE
-@@ -55,20 +49,12 @@
-
- #ifndef __ASSEMBLY__
-
--extern void (*clear_page)(void *to);
--extern void (*copy_page)(void *to, void *from);
+-#define NR_IRQS (NR_INTC_IRQS+NR_EXT_IRQS)
-
- extern unsigned long shm_align_mask;
- extern unsigned long max_low_pfn, min_low_pfn;
- extern unsigned long memory_start, memory_end;
-
--#ifdef CONFIG_MMU
--extern void clear_page_slow(void *to);
--extern void copy_page_slow(void *to, void *from);
+-
+-/* Default IRQs, fixed */
+-#define TIMER_IRQ IRQ_TUNI0
+-#define RTC_IRQ IRQ_CUI
+-
+-/* Default Priorities, Platform may choose differently */
+-#define NO_PRIORITY 0 /* Disabled */
+-#define TIMER_PRIORITY 2
+-#define RTC_PRIORITY TIMER_PRIORITY
+-#define SCIF_PRIORITY 3
+-#define INTD_PRIORITY 3
+-#define IRL3_PRIORITY 4
+-#define INTC_PRIORITY 6
+-#define IRL2_PRIORITY 7
+-#define INTB_PRIORITY 9
+-#define IRL1_PRIORITY 10
+-#define INTA_PRIORITY 12
+-#define IRL0_PRIORITY 13
+-#define TOP_PRIORITY 15
+-
+-extern int intc_evt_to_irq[(0xE20/0x20)+1];
+-int intc_irq_describe(char* p, int irq);
+-
+-#define irq_canonicalize(irq) (irq)
+-
+-#ifdef CONFIG_SH_CAYMAN
+-int cayman_irq_demux(int evt);
+-int cayman_irq_describe(char* p, int irq);
+-#define irq_demux(x) cayman_irq_demux(x)
+-#define irq_describe(p, x) cayman_irq_describe(p, x)
-#else
--extern void clear_page_nommu(void *to);
--extern void copy_page_nommu(void *to, void *from);
+-#define irq_demux(x) (intc_evt_to_irq[x])
+-#define irq_describe(p, x) intc_irq_describe(p, x)
-#endif
-+extern void clear_page(void *to);
-+extern void copy_page(void *to, void *from);
-
- #if !defined(CONFIG_CACHE_OFF) && defined(CONFIG_MMU) && \
- (defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB))
-@@ -96,12 +82,18 @@ typedef struct { unsigned long long pgd; } pgd_t;
- ((x).pte_low | ((unsigned long long)(x).pte_high << 32))
- #define __pte(x) \
- ({ pte_t __pte = {(x), ((unsigned long long)(x)) >> 32}; __pte; })
--#else
-+#elif defined(CONFIG_SUPERH32)
- typedef struct { unsigned long pte_low; } pte_t;
- typedef struct { unsigned long pgprot; } pgprot_t;
- typedef struct { unsigned long pgd; } pgd_t;
- #define pte_val(x) ((x).pte_low)
--#define __pte(x) ((pte_t) { (x) } )
-+#define __pte(x) ((pte_t) { (x) } )
-+#else
-+typedef struct { unsigned long long pte_low; } pte_t;
-+typedef struct { unsigned long pgprot; } pgprot_t;
-+typedef struct { unsigned long pgd; } pgd_t;
-+#define pte_val(x) ((x).pte_low)
-+#define __pte(x) ((pte_t) { (x) } )
- #endif
-
- #define pgd_val(x) ((x).pgd)
-@@ -112,28 +104,44 @@ typedef struct { unsigned long pgd; } pgd_t;
-
- #endif /* !__ASSEMBLY__ */
-
--/* to align the pointer to the (next) page boundary */
--#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
-
- /*
-- * IF YOU CHANGE THIS, PLEASE ALSO CHANGE
-- *
-- * arch/sh/kernel/vmlinux.lds.S
-- *
-- * which has the same constant encoded..
-+ * __MEMORY_START and SIZE are the physical addresses and size of RAM.
- */
+-/*
+- * Function for "on chip support modules".
+- */
-
- #define __MEMORY_START CONFIG_MEMORY_START
- #define __MEMORY_SIZE CONFIG_MEMORY_SIZE
-
-+/*
-+ * PAGE_OFFSET is the virtual address of the start of kernel address
-+ * space.
-+ */
- #define PAGE_OFFSET CONFIG_PAGE_OFFSET
--#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
--#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
--#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
-
-+/*
-+ * Virtual to physical RAM address translation.
-+ *
-+ * In 29 bit mode, the physical offset of RAM from address 0 is visible in
-+ * the kernel virtual address space, and thus we don't have to take
-+ * this into account when translating. However in 32 bit mode this offset
-+ * is not visible (it is part of the PMB mapping) and so needs to be
-+ * added or subtracted as required.
-+ */
-+#ifdef CONFIG_32BIT
-+#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET+__MEMORY_START)
-+#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET-__MEMORY_START))
-+#else
-+#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
-+#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
-+#endif
-+
-+#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
- #define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
-
--/* PFN start number, because of __MEMORY_START */
-+/*
-+ * PFN = physical frame number (ie PFN 0 == physical address 0)
-+ * PFN_START is the PFN of the first page of RAM. By defining this we
-+ * don't have struct page entries for the portion of address space
-+ * between physical address 0 and the start of RAM.
-+ */
- #define PFN_START (__MEMORY_START >> PAGE_SHIFT)
- #define ARCH_PFN_OFFSET (PFN_START)
- #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-@@ -154,11 +162,21 @@ typedef struct { unsigned long pgd; } pgd_t;
- #endif
-
- /*
-- * Slub defaults to 8-byte alignment, we're only interested in 4.
-- * Slab defaults to BYTES_PER_WORD, which ends up being the same anyways.
-+ * Some drivers need to perform DMA into kmalloc'ed buffers
-+ * and so we have to increase the kmalloc minalign for this.
- */
--#define ARCH_KMALLOC_MINALIGN 4
--#define ARCH_SLAB_MINALIGN 4
-+#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
-+
-+#ifdef CONFIG_SUPERH64
-+/*
-+ * While BYTES_PER_WORD == 4 on the current sh64 ABI, GCC will still
-+ * happily generate {ld/st}.q pairs, requiring us to have 8-byte
-+ * alignment to avoid traps. The kmalloc alignment is gauranteed by
-+ * virtue of L1_CACHE_BYTES, requiring this to only be special cased
-+ * for slab caches.
-+ */
-+#define ARCH_SLAB_MINALIGN 8
-+#endif
-
- #endif /* __KERNEL__ */
- #endif /* __ASM_SH_PAGE_H */
-diff --git a/include/asm-sh/param.h b/include/asm-sh/param.h
-index 1012296..ae245af 100644
---- a/include/asm-sh/param.h
-+++ b/include/asm-sh/param.h
-@@ -2,11 +2,7 @@
- #define __ASM_SH_PARAM_H
-
- #ifdef __KERNEL__
--# ifdef CONFIG_SH_WDT
--# define HZ 1000 /* Needed for high-res WOVF */
--# else
--# define HZ CONFIG_HZ
--# endif
-+# define HZ CONFIG_HZ
- # define USER_HZ 100 /* User interfaces are in "ticks" */
- # define CLOCKS_PER_SEC (USER_HZ) /* frequency at which times() counts */
- #endif
-diff --git a/include/asm-sh/pci.h b/include/asm-sh/pci.h
-index 2757ce0..df1d383 100644
---- a/include/asm-sh/pci.h
-+++ b/include/asm-sh/pci.h
-@@ -38,9 +38,12 @@ extern struct pci_channel board_pci_channels[];
- #if defined(CONFIG_CPU_SUBTYPE_SH7780) || defined(CONFIG_CPU_SUBTYPE_SH7785)
- #define PCI_IO_AREA 0xFE400000
- #define PCI_IO_SIZE 0x00400000
-+#elif defined(CONFIG_CPU_SH5)
-+extern unsigned long PCI_IO_AREA;
-+#define PCI_IO_SIZE 0x00010000
- #else
- #define PCI_IO_AREA 0xFE240000
--#define PCI_IO_SIZE 0X00040000
-+#define PCI_IO_SIZE 0x00040000
- #endif
-
- #define PCI_MEM_SIZE 0x01000000
-diff --git a/include/asm-sh/pgtable.h b/include/asm-sh/pgtable.h
-index 8f1e8be..a4a8f8b 100644
---- a/include/asm-sh/pgtable.h
-+++ b/include/asm-sh/pgtable.h
-@@ -3,7 +3,7 @@
- * use the SuperH page table tree.
- *
- * Copyright (C) 1999 Niibe Yutaka
-- * Copyright (C) 2002 - 2005 Paul Mundt
-+ * Copyright (C) 2002 - 2007 Paul Mundt
- *
- * This file is subject to the terms and conditions of the GNU General
- * Public License. See the file "COPYING" in the main directory of this
-@@ -29,10 +29,27 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
- #endif /* !__ASSEMBLY__ */
-
- /*
-+ * Effective and physical address definitions, to aid with sign
-+ * extension.
-+ */
-+#define NEFF 32
-+#define NEFF_SIGN (1LL << (NEFF - 1))
-+#define NEFF_MASK (-1LL << NEFF)
-+
-+#ifdef CONFIG_29BIT
-+#define NPHYS 29
-+#else
-+#define NPHYS 32
-+#endif
-+
-+#define NPHYS_SIGN (1LL << (NPHYS - 1))
-+#define NPHYS_MASK (-1LL << NPHYS)
-+
-+/*
- * traditional two-level paging structure
- */
- /* PTE bits */
--#ifdef CONFIG_X2TLB
-+#if defined(CONFIG_X2TLB) || defined(CONFIG_SUPERH64)
- # define PTE_MAGNITUDE 3 /* 64-bit PTEs on extended mode SH-X2 TLB */
- #else
- # define PTE_MAGNITUDE 2 /* 32-bit PTEs */
-@@ -52,283 +69,27 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
- #define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
- #define FIRST_USER_ADDRESS 0
-
--#define PTE_PHYS_MASK (0x20000000 - PAGE_SIZE)
+-/*
+- * SH-5 supports Priority based interrupts only.
+- * Interrupt priorities are defined at platform level.
+- */
+-#define set_ipr_data(a, b, c, d)
+-#define make_ipr_irq(a)
+-#define make_imask_irq(a)
+-
+-#endif /* __ASM_SH64_IRQ_H */
+diff --git a/include/asm-sh64/irq_regs.h b/include/asm-sh64/irq_regs.h
+deleted file mode 100644
+index 3dd9c0b..0000000
+--- a/include/asm-sh64/irq_regs.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/irq_regs.h>
+diff --git a/include/asm-sh64/kdebug.h b/include/asm-sh64/kdebug.h
+deleted file mode 100644
+index 6ece1b0..0000000
+--- a/include/asm-sh64/kdebug.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-generic/kdebug.h>
+diff --git a/include/asm-sh64/keyboard.h b/include/asm-sh64/keyboard.h
+deleted file mode 100644
+index 0b01c3b..0000000
+--- a/include/asm-sh64/keyboard.h
++++ /dev/null
+@@ -1,70 +0,0 @@
+-/*
+- * linux/include/asm-shmedia/keyboard.h
+- *
+- * Copied from i386 version:
+- * Created 3 Nov 1996 by Geert Uytterhoeven
+- */
-
--#define VMALLOC_START (P3SEG)
--#define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
+-/*
+- * This file contains the i386 architecture specific keyboard definitions
+- */
+-
+-#ifndef __ASM_SH64_KEYBOARD_H
+-#define __ASM_SH64_KEYBOARD_H
+-
+-#ifdef __KERNEL__
+-
+-#include <linux/kernel.h>
+-#include <linux/ioport.h>
+-#include <asm/io.h>
+-
+-#ifdef CONFIG_SH_CAYMAN
+-#define KEYBOARD_IRQ (START_EXT_IRQS + 2) /* SMSC SuperIO IRQ 1 */
+-#endif
+-#define DISABLE_KBD_DURING_INTERRUPTS 0
+-
+-extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
+-extern int pckbd_getkeycode(unsigned int scancode);
+-extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
+- char raw_mode);
+-extern char pckbd_unexpected_up(unsigned char keycode);
+-extern void pckbd_leds(unsigned char leds);
+-extern void pckbd_init_hw(void);
+-
+-#define kbd_setkeycode pckbd_setkeycode
+-#define kbd_getkeycode pckbd_getkeycode
+-#define kbd_translate pckbd_translate
+-#define kbd_unexpected_up pckbd_unexpected_up
+-#define kbd_leds pckbd_leds
+-#define kbd_init_hw pckbd_init_hw
+-
+-/* resource allocation */
+-#define kbd_request_region()
+-#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, \
+- "keyboard", NULL)
+-
+-/* How to access the keyboard macros on this platform. */
+-#define kbd_read_input() inb(KBD_DATA_REG)
+-#define kbd_read_status() inb(KBD_STATUS_REG)
+-#define kbd_write_output(val) outb(val, KBD_DATA_REG)
+-#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
+-
+-/* Some stoneage hardware needs delays after some operations. */
+-#define kbd_pause() do { } while(0)
-
-/*
-- * Linux PTEL encoding.
-- *
-- * Hardware and software bit definitions for the PTEL value (see below for
-- * notes on SH-X2 MMUs and 64-bit PTEs):
-- *
-- * - Bits 0 and 7 are reserved on SH-3 (_PAGE_WT and _PAGE_SZ1 on SH-4).
-- *
-- * - Bit 1 is the SH-bit, but is unused on SH-3 due to an MMU bug (the
-- * hardware PTEL value can't have the SH-bit set when MMUCR.IX is set,
-- * which is the default in cpu-sh3/mmu_context.h:MMU_CONTROL_INIT).
-- *
-- * In order to keep this relatively clean, do not use these for defining
-- * SH-3 specific flags until all of the other unused bits have been
-- * exhausted.
-- *
-- * - Bit 9 is reserved by everyone and used by _PAGE_PROTNONE.
+- * Machine specific bits for the PS/2 driver
+- */
+-
+-#ifdef CONFIG_SH_CAYMAN
+-#define AUX_IRQ (START_EXT_IRQS + 6) /* SMSC SuperIO IRQ12 */
+-#endif
+-
+-#define aux_request_irq(hand, dev_id) \
+- request_irq(AUX_IRQ, hand, IRQF_SHARED, "PS2 Mouse", dev_id)
+-
+-#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
+-
+-#endif /* __KERNEL__ */
+-#endif /* __ASM_SH64_KEYBOARD_H */
+-
+diff --git a/include/asm-sh64/kmap_types.h b/include/asm-sh64/kmap_types.h
+deleted file mode 100644
+index 2ae7c75..0000000
+--- a/include/asm-sh64/kmap_types.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-#ifndef __ASM_SH64_KMAP_TYPES_H
+-#define __ASM_SH64_KMAP_TYPES_H
+-
+-#include <asm-sh/kmap_types.h>
+-
+-#endif /* __ASM_SH64_KMAP_TYPES_H */
+-
+diff --git a/include/asm-sh64/linkage.h b/include/asm-sh64/linkage.h
+deleted file mode 100644
+index 1dd0e84..0000000
+--- a/include/asm-sh64/linkage.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-#ifndef __ASM_SH64_LINKAGE_H
+-#define __ASM_SH64_LINKAGE_H
+-
+-#include <asm-sh/linkage.h>
+-
+-#endif /* __ASM_SH64_LINKAGE_H */
+-
+diff --git a/include/asm-sh64/local.h b/include/asm-sh64/local.h
+deleted file mode 100644
+index d9bd95d..0000000
+--- a/include/asm-sh64/local.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-#ifndef __ASM_SH64_LOCAL_H
+-#define __ASM_SH64_LOCAL_H
+-
+-#include <asm-generic/local.h>
+-
+-#endif /* __ASM_SH64_LOCAL_H */
+-
+diff --git a/include/asm-sh64/mc146818rtc.h b/include/asm-sh64/mc146818rtc.h
+deleted file mode 100644
+index 6cd3aec..0000000
+--- a/include/asm-sh64/mc146818rtc.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-/*
+- * linux/include/asm-sh64/mc146818rtc.h
- *
-- * - Bits 10 and 11 are low bits of the PPN that are reserved on >= 4K pages.
-- * Bit 10 is used for _PAGE_ACCESSED, bit 11 remains unused.
+-*/
+-
+-/* For now, an empty place-holder to get IDE to compile. */
+-
+diff --git a/include/asm-sh64/mman.h b/include/asm-sh64/mman.h
+deleted file mode 100644
+index a9be6d8..0000000
+--- a/include/asm-sh64/mman.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_MMAN_H
+-#define __ASM_SH64_MMAN_H
+-
+-#include <asm-sh/mman.h>
+-
+-#endif /* __ASM_SH64_MMAN_H */
+diff --git a/include/asm-sh64/mmu.h b/include/asm-sh64/mmu.h
+deleted file mode 100644
+index ccd36d2..0000000
+--- a/include/asm-sh64/mmu.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-#ifndef __MMU_H
+-#define __MMU_H
+-
+-/* Default "unsigned long" context */
+-typedef unsigned long mm_context_t;
+-
+-#endif
+diff --git a/include/asm-sh64/mmu_context.h b/include/asm-sh64/mmu_context.h
+deleted file mode 100644
+index 507bf72..0000000
+--- a/include/asm-sh64/mmu_context.h
++++ /dev/null
+@@ -1,208 +0,0 @@
+-#ifndef __ASM_SH64_MMU_CONTEXT_H
+-#define __ASM_SH64_MMU_CONTEXT_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
- *
-- * - Bits 31, 30, and 29 remain unused by everyone and can be used for future
-- * software flags, although care must be taken to update _PAGE_CLEAR_FLAGS.
+- * include/asm-sh64/mmu_context.h
- *
-- * XXX: Leave the _PAGE_FILE and _PAGE_WT overhaul for a rainy day.
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
- *
-- * SH-X2 MMUs and extended PTEs
+- * ASID handling idea taken from MIPS implementation.
- *
-- * SH-X2 supports an extended mode TLB with split data arrays due to the
-- * number of bits needed for PR and SZ (now EPR and ESZ) encodings. The PR and
-- * SZ bit placeholders still exist in data array 1, but are implemented as
-- * reserved bits, with the real logic existing in data array 2.
+- */
+-
+-#ifndef __ASSEMBLY__
+-
+-/*
+- * Cache of MMU context last used.
- *
-- * The downside to this is that we can no longer fit everything in to a 32-bit
-- * PTE encoding, so a 64-bit pte_t is necessary for these parts. On the plus
-- * side, this gives us quite a few spare bits to play with for future usage.
+- * The MMU "context" consists of two things:
+- * (a) TLB cache version (or cycle, top 24 bits of mmu_context_cache)
+- * (b) ASID (Address Space IDentifier, bottom 8 bits of mmu_context_cache)
- */
--/* Legacy and compat mode bits */
--#define _PAGE_WT 0x001 /* WT-bit on SH-4, 0 on SH-3 */
--#define _PAGE_HW_SHARED 0x002 /* SH-bit : shared among processes */
--#define _PAGE_DIRTY 0x004 /* D-bit : page changed */
--#define _PAGE_CACHABLE 0x008 /* C-bit : cachable */
--#define _PAGE_SZ0 0x010 /* SZ0-bit : Size of page */
--#define _PAGE_RW 0x020 /* PR0-bit : write access allowed */
--#define _PAGE_USER 0x040 /* PR1-bit : user space access allowed*/
--#define _PAGE_SZ1 0x080 /* SZ1-bit : Size of page (on SH-4) */
--#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
--#define _PAGE_PROTNONE 0x200 /* software: if not present */
--#define _PAGE_ACCESSED 0x400 /* software: page referenced */
--#define _PAGE_FILE _PAGE_WT /* software: pagecache or swap? */
+-extern unsigned long mmu_context_cache;
-
--#define _PAGE_SZ_MASK (_PAGE_SZ0 | _PAGE_SZ1)
--#define _PAGE_PR_MASK (_PAGE_RW | _PAGE_USER)
+-#include <asm/page.h>
+-#include <asm-generic/mm_hooks.h>
-
--/* Extended mode bits */
--#define _PAGE_EXT_ESZ0 0x0010 /* ESZ0-bit: Size of page */
--#define _PAGE_EXT_ESZ1 0x0020 /* ESZ1-bit: Size of page */
--#define _PAGE_EXT_ESZ2 0x0040 /* ESZ2-bit: Size of page */
--#define _PAGE_EXT_ESZ3 0x0080 /* ESZ3-bit: Size of page */
+-/* Current mm's pgd */
+-extern pgd_t *mmu_pdtp_cache;
-
--#define _PAGE_EXT_USER_EXEC 0x0100 /* EPR0-bit: User space executable */
--#define _PAGE_EXT_USER_WRITE 0x0200 /* EPR1-bit: User space writable */
--#define _PAGE_EXT_USER_READ 0x0400 /* EPR2-bit: User space readable */
+-#define SR_ASID_MASK 0xffffffffff00ffffULL
+-#define SR_ASID_SHIFT 16
-
--#define _PAGE_EXT_KERN_EXEC 0x0800 /* EPR3-bit: Kernel space executable */
--#define _PAGE_EXT_KERN_WRITE 0x1000 /* EPR4-bit: Kernel space writable */
--#define _PAGE_EXT_KERN_READ 0x2000 /* EPR5-bit: Kernel space readable */
+-#define MMU_CONTEXT_ASID_MASK 0x000000ff
+-#define MMU_CONTEXT_VERSION_MASK 0xffffff00
+-#define MMU_CONTEXT_FIRST_VERSION 0x00000100
+-#define NO_CONTEXT 0
-
--/* Wrapper for extended mode pgprot twiddling */
--#define _PAGE_EXT(x) ((unsigned long long)(x) << 32)
+-/* ASID is 8-bit value, so it can't be 0x100 */
+-#define MMU_NO_ASID 0x100
-
--/* software: moves to PTEA.TC (Timing Control) */
--#define _PAGE_PCC_AREA5 0x00000000 /* use BSC registers for area5 */
--#define _PAGE_PCC_AREA6 0x80000000 /* use BSC registers for area6 */
-
--/* software: moves to PTEA.SA[2:0] (Space Attributes) */
--#define _PAGE_PCC_IODYN 0x00000001 /* IO space, dynamically sized bus */
--#define _PAGE_PCC_IO8 0x20000000 /* IO space, 8 bit bus */
--#define _PAGE_PCC_IO16 0x20000001 /* IO space, 16 bit bus */
--#define _PAGE_PCC_COM8 0x40000000 /* Common Memory space, 8 bit bus */
--#define _PAGE_PCC_COM16 0x40000001 /* Common Memory space, 16 bit bus */
--#define _PAGE_PCC_ATR8 0x60000000 /* Attribute Memory space, 8 bit bus */
--#define _PAGE_PCC_ATR16 0x60000001 /* Attribute Memory space, 6 bit bus */
+-/*
+- * Virtual Page Number mask
+- */
+-#define MMU_VPN_MASK 0xfffff000
-
--/* Mask which drops unused bits from the PTEL value */
--#if defined(CONFIG_CPU_SH3)
--#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED| \
-- _PAGE_FILE | _PAGE_SZ1 | \
-- _PAGE_HW_SHARED)
--#elif defined(CONFIG_X2TLB)
--/* Get rid of the legacy PR/SZ bits when using extended mode */
--#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | \
-- _PAGE_FILE | _PAGE_PR_MASK | _PAGE_SZ_MASK)
-+#ifdef CONFIG_32BIT
-+#define PHYS_ADDR_MASK 0xffffffff
- #else
--#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | _PAGE_FILE)
-+#define PHYS_ADDR_MASK 0x1fffffff
- #endif
-
--#define _PAGE_FLAGS_HARDWARE_MASK (0x1fffffff & ~(_PAGE_CLEAR_FLAGS))
-+#define PTE_PHYS_MASK (PHYS_ADDR_MASK & PAGE_MASK)
-
--/* Hardware flags, page size encoding */
--#if defined(CONFIG_X2TLB)
--# if defined(CONFIG_PAGE_SIZE_4KB)
--# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ0)
--# elif defined(CONFIG_PAGE_SIZE_8KB)
--# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ1)
--# elif defined(CONFIG_PAGE_SIZE_64KB)
--# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ2)
--# endif
-+#ifdef CONFIG_SUPERH32
-+#define VMALLOC_START (P3SEG)
- #else
--# if defined(CONFIG_PAGE_SIZE_4KB)
--# define _PAGE_FLAGS_HARD _PAGE_SZ0
--# elif defined(CONFIG_PAGE_SIZE_64KB)
--# define _PAGE_FLAGS_HARD _PAGE_SZ1
--# endif
-+#define VMALLOC_START (0xf0000000)
- #endif
-+#define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
-
--#if defined(CONFIG_X2TLB)
--# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
--# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2)
--# elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
--# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ2)
--# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
--# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ1 | _PAGE_EXT_ESZ2)
--# elif defined(CONFIG_HUGETLB_PAGE_SIZE_4MB)
--# define _PAGE_SZHUGE (_PAGE_EXT_ESZ3)
--# elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
--# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2 | _PAGE_EXT_ESZ3)
--# endif
-+#if defined(CONFIG_SUPERH32)
-+#include <asm/pgtable_32.h>
- #else
--# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
--# define _PAGE_SZHUGE (_PAGE_SZ1)
--# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
--# define _PAGE_SZHUGE (_PAGE_SZ0 | _PAGE_SZ1)
--# endif
--#endif
+-static inline void
+-get_new_mmu_context(struct mm_struct *mm)
+-{
+- extern void flush_tlb_all(void);
+- extern void flush_cache_all(void);
+-
+- unsigned long mc = ++mmu_context_cache;
+-
+- if (!(mc & MMU_CONTEXT_ASID_MASK)) {
+- /* We exhaust ASID of this version.
+- Flush all TLB and start new cycle. */
+- flush_tlb_all();
+- /* We have to flush all caches as ASIDs are
+- used in cache */
+- flush_cache_all();
+- /* Fix version if needed.
+- Note that we avoid version #0/asid #0 to distingush NO_CONTEXT. */
+- if (!mc)
+- mmu_context_cache = mc = MMU_CONTEXT_FIRST_VERSION;
+- }
+- mm->context = mc;
+-}
-
-/*
-- * Stub out _PAGE_SZHUGE if we don't have a good definition for it,
-- * to make pte_mkhuge() happy.
+- * Get MMU context if needed.
- */
--#ifndef _PAGE_SZHUGE
--# define _PAGE_SZHUGE (_PAGE_FLAGS_HARD)
--#endif
+-static __inline__ void
+-get_mmu_context(struct mm_struct *mm)
+-{
+- if (mm) {
+- unsigned long mc = mmu_context_cache;
+- /* Check if we have old version of context.
+- If it's old, we need to get new context with new version. */
+- if ((mm->context ^ mc) & MMU_CONTEXT_VERSION_MASK)
+- get_new_mmu_context(mm);
+- }
+-}
-
--#define _PAGE_CHG_MASK \
-- (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY)
+-/*
+- * Initialize the context related info for a new mm_struct
+- * instance.
+- */
+-static inline int init_new_context(struct task_struct *tsk,
+- struct mm_struct *mm)
+-{
+- mm->context = NO_CONTEXT;
+-
+- return 0;
+-}
+-
+-/*
+- * Destroy context related info for an mm_struct that is about
+- * to be put to rest.
+- */
+-static inline void destroy_context(struct mm_struct *mm)
+-{
+- extern void flush_tlb_mm(struct mm_struct *mm);
+-
+- /* Well, at least free TLB entries */
+- flush_tlb_mm(mm);
+-}
+-
+-#endif /* __ASSEMBLY__ */
+-
+-/* Common defines */
+-#define TLB_STEP 0x00000010
+-#define TLB_PTEH 0x00000000
+-#define TLB_PTEL 0x00000008
+-
+-/* PTEH defines */
+-#define PTEH_ASID_SHIFT 2
+-#define PTEH_VALID 0x0000000000000001
+-#define PTEH_SHARED 0x0000000000000002
+-#define PTEH_MATCH_ASID 0x00000000000003ff
-
-#ifndef __ASSEMBLY__
+-/* This has to be a common function because the next location to fill
+- * information is shared. */
+-extern void __do_tlb_refill(unsigned long address, unsigned long long is_text_not_data, pte_t *pte);
-
--#if defined(CONFIG_X2TLB) /* SH-X2 TLB */
--#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-/* Profiling counter. */
+-#ifdef CONFIG_SH64_PROC_TLB
+-extern unsigned long long calls_to_do_fast_page_fault;
+-#endif
-
--#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_USER_READ | \
-- _PAGE_EXT_USER_WRITE))
+-static inline unsigned long get_asid(void)
+-{
+- unsigned long long sr;
-
--#define PAGE_EXECREAD __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_EXEC | \
-- _PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_USER_EXEC | \
-- _PAGE_EXT_USER_READ))
+- asm volatile ("getcon " __SR ", %0\n\t"
+- : "=r" (sr));
-
--#define PAGE_COPY PAGE_EXECREAD
+- sr = (sr >> SR_ASID_SHIFT) & MMU_CONTEXT_ASID_MASK;
+- return (unsigned long) sr;
+-}
-
--#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_USER_READ))
+-/* Set ASID into SR */
+-static inline void set_asid(unsigned long asid)
+-{
+- unsigned long long sr, pc;
-
--#define PAGE_WRITEONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_USER_WRITE))
+- asm volatile ("getcon " __SR ", %0" : "=r" (sr));
-
--#define PAGE_RWX __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-- _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_EXEC | \
-- _PAGE_EXT_USER_WRITE | \
-- _PAGE_EXT_USER_READ | \
-- _PAGE_EXT_USER_EXEC))
+- sr = (sr & SR_ASID_MASK) | (asid << SR_ASID_SHIFT);
-
--#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-- _PAGE_DIRTY | _PAGE_ACCESSED | \
-- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_KERN_EXEC))
+- /*
+- * It is possible that this function may be inlined and so to avoid
+- * the assembler reporting duplicate symbols we make use of the gas trick
+- * of generating symbols using numerics and forward reference.
+- */
+- asm volatile ("movi 1, %1\n\t"
+- "shlli %1, 28, %1\n\t"
+- "or %0, %1, %1\n\t"
+- "putcon %1, " __SR "\n\t"
+- "putcon %0, " __SSR "\n\t"
+- "movi 1f, %1\n\t"
+- "ori %1, 1 , %1\n\t"
+- "putcon %1, " __SPC "\n\t"
+- "rte\n"
+- "1:\n\t"
+- : "=r" (sr), "=r" (pc) : "0" (sr));
+-}
-
--#define PAGE_KERNEL_NOCACHE \
-- __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
-- _PAGE_ACCESSED | _PAGE_HW_SHARED | \
-- _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_KERN_EXEC))
+-/*
+- * After we have set current->mm to a new value, this activates
+- * the context for the new mm so we see the new mappings.
+- */
+-static __inline__ void activate_context(struct mm_struct *mm)
+-{
+- get_mmu_context(mm);
+- set_asid(mm->context & MMU_CONTEXT_ASID_MASK);
+-}
-
--#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-- _PAGE_DIRTY | _PAGE_ACCESSED | \
-- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_EXEC))
-
--#define PAGE_KERNEL_PCC(slot, type) \
-- __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
-- _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-- _PAGE_EXT_KERN_WRITE | \
-- _PAGE_EXT_KERN_EXEC) \
-- (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
-- (type))
+-static __inline__ void switch_mm(struct mm_struct *prev,
+- struct mm_struct *next,
+- struct task_struct *tsk)
+-{
+- if (prev != next) {
+- mmu_pdtp_cache = next->pgd;
+- activate_context(next);
+- }
+-}
-
--#elif defined(CONFIG_MMU) /* SH-X TLB */
--#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-#define deactivate_mm(tsk,mm) do { } while (0)
-
--#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
-- _PAGE_CACHABLE | _PAGE_ACCESSED | \
-- _PAGE_FLAGS_HARD)
+-#define activate_mm(prev, next) \
+- switch_mm((prev),(next),NULL)
-
--#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-static inline void
+-enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+-{
+-}
-
--#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
+-#endif /* __ASSEMBLY__ */
-
--#define PAGE_EXECREAD PAGE_READONLY
--#define PAGE_RWX PAGE_SHARED
--#define PAGE_WRITEONLY PAGE_SHARED
+-#endif /* __ASM_SH64_MMU_CONTEXT_H */
+diff --git a/include/asm-sh64/module.h b/include/asm-sh64/module.h
+deleted file mode 100644
+index c313650..0000000
+--- a/include/asm-sh64/module.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-#ifndef __ASM_SH64_MODULE_H
+-#define __ASM_SH64_MODULE_H
+-/*
+- * This file contains the SH architecture specific module code.
+- */
-
--#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | \
-- _PAGE_DIRTY | _PAGE_ACCESSED | \
-- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+-struct mod_arch_specific {
+- /* empty */
+-};
-
--#define PAGE_KERNEL_NOCACHE \
-- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
-- _PAGE_ACCESSED | _PAGE_HW_SHARED | \
-- _PAGE_FLAGS_HARD)
+-#define Elf_Shdr Elf32_Shdr
+-#define Elf_Sym Elf32_Sym
+-#define Elf_Ehdr Elf32_Ehdr
-
--#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-- _PAGE_DIRTY | _PAGE_ACCESSED | \
-- _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
+-#define module_map(x) vmalloc(x)
+-#define module_unmap(x) vfree(x)
+-#define module_arch_init(x) (0)
+-#define arch_init_modules(x) do { } while (0)
-
--#define PAGE_KERNEL_PCC(slot, type) \
-- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
-- _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
-- (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
-- (type))
--#else /* no mmu */
--#define PAGE_NONE __pgprot(0)
--#define PAGE_SHARED __pgprot(0)
--#define PAGE_COPY __pgprot(0)
--#define PAGE_EXECREAD __pgprot(0)
--#define PAGE_RWX __pgprot(0)
--#define PAGE_READONLY __pgprot(0)
--#define PAGE_WRITEONLY __pgprot(0)
--#define PAGE_KERNEL __pgprot(0)
--#define PAGE_KERNEL_NOCACHE __pgprot(0)
--#define PAGE_KERNEL_RO __pgprot(0)
+-#endif /* __ASM_SH64_MODULE_H */
+diff --git a/include/asm-sh64/msgbuf.h b/include/asm-sh64/msgbuf.h
+deleted file mode 100644
+index cf0494c..0000000
+--- a/include/asm-sh64/msgbuf.h
++++ /dev/null
+@@ -1,42 +0,0 @@
+-#ifndef __ASM_SH64_MSGBUF_H
+-#define __ASM_SH64_MSGBUF_H
-
--#define PAGE_KERNEL_PCC(slot, type) \
-- __pgprot(0)
-+#include <asm/pgtable_64.h>
- #endif
-
--#endif /* __ASSEMBLY__ */
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/msgbuf.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
-
- /*
- * SH-X and lower (legacy) SuperH parts (SH-3, SH-4, some SH-4A) can't do page
- * protection for execute, and considers it the same as a read. Also, write
-@@ -357,208 +118,6 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
- #define __S110 PAGE_RWX
- #define __S111 PAGE_RWX
-
--#ifndef __ASSEMBLY__
+-/*
+- * The msqid64_ds structure for i386 architecture.
+- * Note extra padding because this structure is passed back and forth
+- * between kernel and user space.
+- *
+- * Pad space is left for:
+- * - 64-bit time_t to solve y2038 problem
+- * - 2 miscellaneous 32-bit values
+- */
+-
+-struct msqid64_ds {
+- struct ipc64_perm msg_perm;
+- __kernel_time_t msg_stime; /* last msgsnd time */
+- unsigned long __unused1;
+- __kernel_time_t msg_rtime; /* last msgrcv time */
+- unsigned long __unused2;
+- __kernel_time_t msg_ctime; /* last change time */
+- unsigned long __unused3;
+- unsigned long msg_cbytes; /* current number of bytes on queue */
+- unsigned long msg_qnum; /* number of messages in queue */
+- unsigned long msg_qbytes; /* max number of bytes on queue */
+- __kernel_pid_t msg_lspid; /* pid of last msgsnd */
+- __kernel_pid_t msg_lrpid; /* last receive pid */
+- unsigned long __unused4;
+- unsigned long __unused5;
+-};
-
+-#endif /* __ASM_SH64_MSGBUF_H */
+diff --git a/include/asm-sh64/mutex.h b/include/asm-sh64/mutex.h
+deleted file mode 100644
+index 458c1f7..0000000
+--- a/include/asm-sh64/mutex.h
++++ /dev/null
+@@ -1,9 +0,0 @@
-/*
-- * Certain architectures need to do special things when PTEs
-- * within a page table are directly modified. Thus, the following
-- * hook is made available.
+- * Pull in the generic implementation for the mutex fastpath.
+- *
+- * TODO: implement optimized primitives instead, or leave the generic
+- * implementation in place, or pick the atomic_xchg() based generic
+- * implementation. (see asm-generic/mutex-xchg.h for details)
- */
--#ifdef CONFIG_X2TLB
--static inline void set_pte(pte_t *ptep, pte_t pte)
--{
-- ptep->pte_high = pte.pte_high;
-- smp_wmb();
-- ptep->pte_low = pte.pte_low;
--}
+-
+-#include <asm-generic/mutex-dec.h>
+diff --git a/include/asm-sh64/namei.h b/include/asm-sh64/namei.h
+deleted file mode 100644
+index 99d759a..0000000
+--- a/include/asm-sh64/namei.h
++++ /dev/null
+@@ -1,24 +0,0 @@
+-#ifndef __ASM_SH64_NAMEI_H
+-#define __ASM_SH64_NAMEI_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/namei.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- * Included from linux/fs/namei.c
+- *
+- */
+-
+-/* This dummy routine maybe changed to something useful
+- * for /usr/gnemul/ emulation stuff.
+- * Look at asm-sparc/namei.h for details.
+- */
+-
+-#define __emul_prefix() NULL
+-
+-#endif /* __ASM_SH64_NAMEI_H */
+diff --git a/include/asm-sh64/page.h b/include/asm-sh64/page.h
+deleted file mode 100644
+index 472089a..0000000
+--- a/include/asm-sh64/page.h
++++ /dev/null
+@@ -1,119 +0,0 @@
+-#ifndef __ASM_SH64_PAGE_H
+-#define __ASM_SH64_PAGE_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/page.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003, 2004 Paul Mundt
+- *
+- * benedict.gaster at superh.com 19th, 24th July 2002.
+- *
+- * Modified to take account of enabling for D-CACHE support.
+- *
+- */
+-
+-
+-/* PAGE_SHIFT determines the page size */
+-#define PAGE_SHIFT 12
+-#ifdef __ASSEMBLY__
+-#define PAGE_SIZE 4096
-#else
--#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
+-#define PAGE_SIZE (1UL << PAGE_SHIFT)
-#endif
+-#define PAGE_MASK (~(PAGE_SIZE-1))
+-#define PTE_MASK PAGE_MASK
-
--#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+-#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+-#define HPAGE_SHIFT 16
+-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+-#define HPAGE_SHIFT 20
+-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
+-#define HPAGE_SHIFT 29
+-#endif
+-
+-#ifdef CONFIG_HUGETLB_PAGE
+-#define HPAGE_SIZE (1UL << HPAGE_SHIFT)
+-#define HPAGE_MASK (~(HPAGE_SIZE-1))
+-#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT-PAGE_SHIFT)
+-#define ARCH_HAS_SETCLEAR_HUGE_PTE
+-#endif
+-
+-#ifdef __KERNEL__
+-#ifndef __ASSEMBLY__
+-
+-extern struct page *mem_map;
+-extern void sh64_page_clear(void *page);
+-extern void sh64_page_copy(void *from, void *to);
+-
+-#define clear_page(page) sh64_page_clear(page)
+-#define copy_page(to,from) sh64_page_copy(from, to)
+-
+-#if defined(CONFIG_DCACHE_DISABLED)
+-
+-#define clear_user_page(page, vaddr, pg) clear_page(page)
+-#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
+-
+-#else
+-
+-extern void clear_user_page(void *to, unsigned long address, struct page *pg);
+-extern void copy_user_page(void *to, void *from, unsigned long address, struct page *pg);
+-
+-#endif /* defined(CONFIG_DCACHE_DISABLED) */
-
-/*
-- * (pmds are folded into pgds so this doesn't get actually called,
-- * but the define is needed for a generic inline function.)
+- * These are used to make use of C type-checking..
- */
--#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+-typedef struct { unsigned long long pte; } pte_t;
+-typedef struct { unsigned long pmd; } pmd_t;
+-typedef struct { unsigned long pgd; } pgd_t;
+-typedef struct { unsigned long pgprot; } pgprot_t;
-
--#define pte_pfn(x) ((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
+-#define pte_val(x) ((x).pte)
+-#define pmd_val(x) ((x).pmd)
+-#define pgd_val(x) ((x).pgd)
+-#define pgprot_val(x) ((x).pgprot)
-
--#define pfn_pte(pfn, prot) \
-- __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
--#define pfn_pmd(pfn, prot) \
-- __pmd(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-#define __pte(x) ((pte_t) { (x) } )
+-#define __pmd(x) ((pmd_t) { (x) } )
+-#define __pgd(x) ((pgd_t) { (x) } )
+-#define __pgprot(x) ((pgprot_t) { (x) } )
-
--#define pte_none(x) (!pte_val(x))
--#define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
+-#endif /* !__ASSEMBLY__ */
-
--#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+-/* to align the pointer to the (next) page boundary */
+-#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
-
--#define pmd_none(x) (!pmd_val(x))
--#define pmd_present(x) (pmd_val(x))
--#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
--#define pmd_bad(x) (pmd_val(x) & ~PAGE_MASK)
+-/*
+- * Kconfig defined.
+- */
+-#define __MEMORY_START (CONFIG_MEMORY_START)
+-#define PAGE_OFFSET (CONFIG_CACHED_MEMORY_OFFSET)
-
--#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
--#define pte_page(x) pfn_to_page(pte_pfn(x))
+-#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
+-#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
+-#define MAP_NR(addr) ((__pa(addr)-__MEMORY_START) >> PAGE_SHIFT)
+-#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
+-
+-#define phys_to_page(phys) (mem_map + (((phys) - __MEMORY_START) >> PAGE_SHIFT))
+-#define page_to_phys(page) (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
+-
+-/* PFN start number, because of __MEMORY_START */
+-#define PFN_START (__MEMORY_START >> PAGE_SHIFT)
+-#define ARCH_PFN_OFFSET (PFN_START)
+-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+-#define pfn_valid(pfn) (((pfn) - PFN_START) < max_mapnr)
+-#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+-
+-#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
+- VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+-
+-#include <asm-generic/memory_model.h>
+-#include <asm-generic/page.h>
-
+-#endif /* __KERNEL__ */
+-#endif /* __ASM_SH64_PAGE_H */
+diff --git a/include/asm-sh64/param.h b/include/asm-sh64/param.h
+deleted file mode 100644
+index f409adb..0000000
+--- a/include/asm-sh64/param.h
++++ /dev/null
+@@ -1,42 +0,0 @@
-/*
-- * The following only work if pte_present() is true.
-- * Undefined behaviour if not..
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/param.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
- */
--#define pte_not_present(pte) (!((pte).pte_low & _PAGE_PRESENT))
--#define pte_dirty(pte) ((pte).pte_low & _PAGE_DIRTY)
--#define pte_young(pte) ((pte).pte_low & _PAGE_ACCESSED)
--#define pte_file(pte) ((pte).pte_low & _PAGE_FILE)
+-#ifndef __ASM_SH64_PARAM_H
+-#define __ASM_SH64_PARAM_H
-
--#ifdef CONFIG_X2TLB
--#define pte_write(pte) ((pte).pte_high & _PAGE_EXT_USER_WRITE)
--#else
--#define pte_write(pte) ((pte).pte_low & _PAGE_RW)
+-
+-#ifdef __KERNEL__
+-# ifdef CONFIG_SH_WDT
+-# define HZ 1000 /* Needed for high-res WOVF */
+-# else
+-# define HZ 100
+-# endif
+-# define USER_HZ 100 /* User interfaces are in "ticks" */
+-# define CLOCKS_PER_SEC (USER_HZ) /* frequency at which times() counts */
-#endif
-
--#define PTE_BIT_FUNC(h,fn,op) \
--static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; }
+-#ifndef HZ
+-#define HZ 100
+-#endif
+-
+-#define EXEC_PAGESIZE 4096
+-
+-#ifndef NGROUPS
+-#define NGROUPS 32
+-#endif
+-
+-#ifndef NOGROUP
+-#define NOGROUP (-1)
+-#endif
+-
+-#define MAXHOSTNAMELEN 64 /* max length of hostname */
+-
+-#endif /* __ASM_SH64_PARAM_H */
+diff --git a/include/asm-sh64/pci.h b/include/asm-sh64/pci.h
+deleted file mode 100644
+index 18055db..0000000
+--- a/include/asm-sh64/pci.h
++++ /dev/null
+@@ -1,102 +0,0 @@
+-#ifndef __ASM_SH64_PCI_H
+-#define __ASM_SH64_PCI_H
+-
+-#ifdef __KERNEL__
+-
+-#include <linux/dma-mapping.h>
+-
+-/* Can be used to override the logic in pci_scan_bus for skipping
+- already-configured bus numbers - to be used for buggy BIOSes
+- or architectures with incomplete PCI setup by the loader */
+-
+-#define pcibios_assign_all_busses() 1
-
--#ifdef CONFIG_X2TLB
-/*
-- * We cheat a bit in the SH-X2 TLB case. As the permission bits are
-- * individually toggled (and user permissions are entirely decoupled from
-- * kernel permissions), we attempt to couple them a bit more sanely here.
+- * These are currently the correct values for the STM overdrive board
+- * We need some way of setting this on a board specific way, it will
+- * not be the same on other boards I think
- */
--PTE_BIT_FUNC(high, wrprotect, &= ~_PAGE_EXT_USER_WRITE);
--PTE_BIT_FUNC(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE);
--PTE_BIT_FUNC(high, mkhuge, |= _PAGE_SZHUGE);
--#else
--PTE_BIT_FUNC(low, wrprotect, &= ~_PAGE_RW);
--PTE_BIT_FUNC(low, mkwrite, |= _PAGE_RW);
--PTE_BIT_FUNC(low, mkhuge, |= _PAGE_SZHUGE);
+-#if defined(CONFIG_CPU_SUBTYPE_SH5_101) || defined(CONFIG_CPU_SUBTYPE_SH5_103)
+-#define PCIBIOS_MIN_IO 0x2000
+-#define PCIBIOS_MIN_MEM 0x40000000
-#endif
-
--PTE_BIT_FUNC(low, mkclean, &= ~_PAGE_DIRTY);
--PTE_BIT_FUNC(low, mkdirty, |= _PAGE_DIRTY);
--PTE_BIT_FUNC(low, mkold, &= ~_PAGE_ACCESSED);
--PTE_BIT_FUNC(low, mkyoung, |= _PAGE_ACCESSED);
+-extern void pcibios_set_master(struct pci_dev *dev);
-
-/*
-- * Macro and implementation to make a page protection as uncachable.
+- * Set penalize isa irq function
- */
--#define pgprot_writecombine(prot) \
-- __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+-static inline void pcibios_penalize_isa_irq(int irq, int active)
+-{
+- /* We don't do dynamic PCI IRQ allocation */
+-}
-
--#define pgprot_noncached pgprot_writecombine
+-/* Dynamic DMA mapping stuff.
+- * SuperH has everything mapped statically like x86.
+- */
-
--/*
-- * Conversion functions: convert a page and protection to a page entry,
-- * and a page entry and page directory to the page they refer to.
-- *
-- * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
+-/* The PCI address space does equal the physical memory
+- * address space. The networking and block device layers use
+- * this boolean for bounce buffer decisions.
- */
--#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+-#define PCI_DMA_BUS_IS_PHYS (1)
-
--static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
--{
-- pte.pte_low &= _PAGE_CHG_MASK;
-- pte.pte_low |= pgprot_val(newprot);
+-#include <linux/types.h>
+-#include <linux/slab.h>
+-#include <asm/scatterlist.h>
+-#include <linux/string.h>
+-#include <asm/io.h>
-
--#ifdef CONFIG_X2TLB
-- pte.pte_high |= pgprot_val(newprot) >> 32;
+-/* pci_unmap_{single,page} being a nop depends upon the
+- * configuration.
+- */
+-#ifdef CONFIG_SH_PCIDMA_NONCOHERENT
+-#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) \
+- dma_addr_t ADDR_NAME;
+-#define DECLARE_PCI_UNMAP_LEN(LEN_NAME) \
+- __u32 LEN_NAME;
+-#define pci_unmap_addr(PTR, ADDR_NAME) \
+- ((PTR)->ADDR_NAME)
+-#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) \
+- (((PTR)->ADDR_NAME) = (VAL))
+-#define pci_unmap_len(PTR, LEN_NAME) \
+- ((PTR)->LEN_NAME)
+-#define pci_unmap_len_set(PTR, LEN_NAME, VAL) \
+- (((PTR)->LEN_NAME) = (VAL))
+-#else
+-#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)
+-#define DECLARE_PCI_UNMAP_LEN(LEN_NAME)
+-#define pci_unmap_addr(PTR, ADDR_NAME) (0)
+-#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)
+-#define pci_unmap_len(PTR, LEN_NAME) (0)
+-#define pci_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0)
-#endif
-
-- return pte;
+-#ifdef CONFIG_PCI
+-static inline void pci_dma_burst_advice(struct pci_dev *pdev,
+- enum pci_dma_burst_strategy *strat,
+- unsigned long *strategy_parameter)
+-{
+- *strat = PCI_DMA_BURST_INFINITY;
+- *strategy_parameter = ~0UL;
-}
+-#endif
-
--#define pmd_page_vaddr(pmd) ((unsigned long)pmd_val(pmd))
--#define pmd_page(pmd) (virt_to_page(pmd_val(pmd)))
+-/* Board-specific fixup routines. */
+-extern void pcibios_fixup(void);
+-extern void pcibios_fixup_irqs(void);
-
--/* to find an entry in a page-table-directory. */
--#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
--#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+-#ifdef CONFIG_PCI_AUTO
+-extern int pciauto_assign_resources(int busno, struct pci_channel *hose);
+-#endif
-
--/* to find an entry in a kernel page-table-directory */
--#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-#endif /* __KERNEL__ */
-
--/* Find an entry in the third-level page table.. */
--#define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
--#define pte_offset_kernel(dir, address) \
-- ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address))
--#define pte_offset_map(dir, address) pte_offset_kernel(dir, address)
--#define pte_offset_map_nested(dir, address) pte_offset_kernel(dir, address)
+-/* generic pci stuff */
+-#include <asm-generic/pci.h>
-
--#define pte_unmap(pte) do { } while (0)
--#define pte_unmap_nested(pte) do { } while (0)
+-/* generic DMA-mapping stuff */
+-#include <asm-generic/pci-dma-compat.h>
-
--#ifdef CONFIG_X2TLB
--#define pte_ERROR(e) \
-- printk("%s:%d: bad pte %p(%08lx%08lx).\n", __FILE__, __LINE__, \
-- &(e), (e).pte_high, (e).pte_low)
--#define pgd_ERROR(e) \
-- printk("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
--#else
--#define pte_ERROR(e) \
-- printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
--#define pgd_ERROR(e) \
-- printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
--#endif
+-#endif /* __ASM_SH64_PCI_H */
-
--struct vm_area_struct;
--extern void update_mmu_cache(struct vm_area_struct * vma,
-- unsigned long address, pte_t pte);
+diff --git a/include/asm-sh64/percpu.h b/include/asm-sh64/percpu.h
+deleted file mode 100644
+index a01d16c..0000000
+--- a/include/asm-sh64/percpu.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_PERCPU
+-#define __ASM_SH64_PERCPU
+-
+-#include <asm-generic/percpu.h>
+-
+-#endif /* __ASM_SH64_PERCPU */
+diff --git a/include/asm-sh64/pgalloc.h b/include/asm-sh64/pgalloc.h
+deleted file mode 100644
+index 6eccab7..0000000
+--- a/include/asm-sh64/pgalloc.h
++++ /dev/null
+@@ -1,125 +0,0 @@
+-#ifndef __ASM_SH64_PGALLOC_H
+-#define __ASM_SH64_PGALLOC_H
-
-/*
-- * Encode and de-code a swap entry
-- *
-- * Constraints:
-- * _PAGE_FILE at bit 0
-- * _PAGE_PRESENT at bit 8
-- * _PAGE_PROTNONE at bit 9
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
- *
-- * For the normal case, we encode the swap type into bits 0:7 and the
-- * swap offset into bits 10:30. For the 64-bit PTE case, we keep the
-- * preserved bits in the low 32-bits and use the upper 32 as the swap
-- * offset (along with a 5-bit type), following the same approach as x86
-- * PAE. This keeps the logic quite simple, and allows for a full 32
-- * PTE_FILE_MAX_BITS, as opposed to the 29-bits we're constrained with
-- * in the pte_low case.
+- * include/asm-sh64/pgalloc.h
- *
-- * As is evident by the Alpha code, if we ever get a 64-bit unsigned
-- * long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
-- * much cleaner..
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003, 2004 Paul Mundt
+- * Copyright (C) 2003, 2004 Richard Curnow
- *
-- * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
-- * and _PAGE_PROTNONE bits
- */
--#ifdef CONFIG_X2TLB
--#define __swp_type(x) ((x).val & 0x1f)
--#define __swp_offset(x) ((x).val >> 5)
--#define __swp_entry(type, offset) ((swp_entry_t){ (type) | (offset) << 5})
--#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
--#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
+-
+-#include <linux/mm.h>
+-#include <linux/quicklist.h>
+-#include <asm/page.h>
+-
+-static inline void pgd_init(unsigned long page)
+-{
+- unsigned long *pgd = (unsigned long *)page;
+- extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
+- int i;
+-
+- for (i = 0; i < USER_PTRS_PER_PGD; i++)
+- pgd[i] = (unsigned long)empty_bad_pte_table;
+-}
-
-/*
-- * Encode and decode a nonlinear file mapping entry
+- * Allocate and free page tables. The xxx_kernel() versions are
+- * used to allocate a kernel page table - this turns on ASN bits
+- * if any.
- */
--#define pte_to_pgoff(pte) ((pte).pte_high)
--#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
-
--#define PTE_FILE_MAX_BITS 32
--#else
--#define __swp_type(x) ((x).val & 0xff)
--#define __swp_offset(x) ((x).val >> 10)
--#define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) <<10})
+-static inline pgd_t *get_pgd_slow(void)
+-{
+- unsigned int pgd_size = (USER_PTRS_PER_PGD * sizeof(pgd_t));
+- pgd_t *ret = kmalloc(pgd_size, GFP_KERNEL);
+- return ret;
+-}
-
--#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 1 })
--#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 1 })
+-static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+-{
+- return quicklist_alloc(0, GFP_KERNEL, NULL);
+-}
+-
+-static inline void pgd_free(pgd_t *pgd)
+-{
+- quicklist_free(0, NULL, pgd);
+-}
+-
+-static inline struct page *pte_alloc_one(struct mm_struct *mm,
+- unsigned long address)
+-{
+- void *pg = quicklist_alloc(0, GFP_KERNEL, NULL);
+- return pg ? virt_to_page(pg) : NULL;
+-}
+-
+-static inline void pte_free_kernel(pte_t *pte)
+-{
+- quicklist_free(0, NULL, pte);
+-}
+-
+-static inline void pte_free(struct page *pte)
+-{
+- quicklist_free_page(0, NULL, pte);
+-}
+-
+-static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+- unsigned long address)
+-{
+- return quicklist_alloc(0, GFP_KERNEL, NULL);
+-}
+-
+-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
-
-/*
-- * Encode and decode a nonlinear file mapping entry
+- * allocating and freeing a pmd is trivial: the 1-entry pmd is
+- * inside the pgd, so has no extra memory associated with it.
- */
--#define PTE_FILE_MAX_BITS 29
--#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
--#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
+-
+-#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+-
+-#define pmd_alloc_one(mm, addr) ({ BUG(); ((pmd_t *)2); })
+-#define pmd_free(x) do { } while (0)
+-#define pgd_populate(mm, pmd, pte) BUG()
+-#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+-#define __pmd_free_tlb(tlb,pmd) do { } while (0)
+-
+-#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+-
+-static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+-{
+- return quicklist_alloc(0, GFP_KERNEL, NULL);
+-}
+-
+-static inline void pmd_free(pmd_t *pmd)
+-{
+- quicklist_free(0, NULL, pmd);
+-}
+-
+-#define pgd_populate(mm, pgd, pmd) pgd_set(pgd, pmd)
+-#define __pmd_free_tlb(tlb,pmd) pmd_free(pmd)
+-
+-#else
+-#error "No defined page table size"
-#endif
-
- typedef pte_t *pte_addr_t;
-
- #define kern_addr_valid(addr) (1)
-@@ -566,27 +125,28 @@ typedef pte_t *pte_addr_t;
- #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
- remap_pfn_range(vma, vaddr, pfn, size, prot)
-
--struct mm_struct;
-+#define pte_pfn(x) ((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
-
- /*
- * No page table caches to initialise
- */
- #define pgtable_cache_init() do { } while (0)
-
--#ifndef CONFIG_MMU
--extern unsigned int kobjsize(const void *objp);
--#endif /* !CONFIG_MMU */
+-#define pmd_populate_kernel(mm, pmd, pte) \
+- set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) (pte)))
-
- #if !defined(CONFIG_CACHE_OFF) && (defined(CONFIG_CPU_SH4) || \
- defined(CONFIG_SH7705_CACHE_32KB))
-+struct mm_struct;
- #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
--extern pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
-+pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
- #endif
-
-+struct vm_area_struct;
-+extern void update_mmu_cache(struct vm_area_struct * vma,
-+ unsigned long address, pte_t pte);
- extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
- extern void paging_init(void);
-+extern void page_table_range_init(unsigned long start, unsigned long end,
-+ pgd_t *pgd);
-
- #include <asm-generic/pgtable.h>
-
--#endif /* !__ASSEMBLY__ */
--#endif /* __ASM_SH_PAGE_H */
-+#endif /* __ASM_SH_PGTABLE_H */
-diff --git a/include/asm-sh/pgtable_32.h b/include/asm-sh/pgtable_32.h
-new file mode 100644
-index 0000000..3e3557c
---- /dev/null
-+++ b/include/asm-sh/pgtable_32.h
-@@ -0,0 +1,474 @@
-+#ifndef __ASM_SH_PGTABLE_32_H
-+#define __ASM_SH_PGTABLE_32_H
-+
-+/*
-+ * Linux PTEL encoding.
-+ *
-+ * Hardware and software bit definitions for the PTEL value (see below for
-+ * notes on SH-X2 MMUs and 64-bit PTEs):
-+ *
-+ * - Bits 0 and 7 are reserved on SH-3 (_PAGE_WT and _PAGE_SZ1 on SH-4).
-+ *
-+ * - Bit 1 is the SH-bit, but is unused on SH-3 due to an MMU bug (the
-+ * hardware PTEL value can't have the SH-bit set when MMUCR.IX is set,
-+ * which is the default in cpu-sh3/mmu_context.h:MMU_CONTROL_INIT).
-+ *
-+ * In order to keep this relatively clean, do not use these for defining
-+ * SH-3 specific flags until all of the other unused bits have been
-+ * exhausted.
-+ *
-+ * - Bit 9 is reserved by everyone and used by _PAGE_PROTNONE.
-+ *
-+ * - Bits 10 and 11 are low bits of the PPN that are reserved on >= 4K pages.
-+ * Bit 10 is used for _PAGE_ACCESSED, bit 11 remains unused.
-+ *
-+ * - On 29 bit platforms, bits 31 to 29 are used for the space attributes
-+ * and timing control which (together with bit 0) are moved into the
-+ * old-style PTEA on the parts that support it.
-+ *
-+ * XXX: Leave the _PAGE_FILE and _PAGE_WT overhaul for a rainy day.
-+ *
-+ * SH-X2 MMUs and extended PTEs
-+ *
-+ * SH-X2 supports an extended mode TLB with split data arrays due to the
-+ * number of bits needed for PR and SZ (now EPR and ESZ) encodings. The PR and
-+ * SZ bit placeholders still exist in data array 1, but are implemented as
-+ * reserved bits, with the real logic existing in data array 2.
-+ *
-+ * The downside to this is that we can no longer fit everything in to a 32-bit
-+ * PTE encoding, so a 64-bit pte_t is necessary for these parts. On the plus
-+ * side, this gives us quite a few spare bits to play with for future usage.
-+ */
-+/* Legacy and compat mode bits */
-+#define _PAGE_WT 0x001 /* WT-bit on SH-4, 0 on SH-3 */
-+#define _PAGE_HW_SHARED 0x002 /* SH-bit : shared among processes */
-+#define _PAGE_DIRTY 0x004 /* D-bit : page changed */
-+#define _PAGE_CACHABLE 0x008 /* C-bit : cachable */
-+#define _PAGE_SZ0 0x010 /* SZ0-bit : Size of page */
-+#define _PAGE_RW 0x020 /* PR0-bit : write access allowed */
-+#define _PAGE_USER 0x040 /* PR1-bit : user space access allowed*/
-+#define _PAGE_SZ1 0x080 /* SZ1-bit : Size of page (on SH-4) */
-+#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
-+#define _PAGE_PROTNONE 0x200 /* software: if not present */
-+#define _PAGE_ACCESSED 0x400 /* software: page referenced */
-+#define _PAGE_FILE _PAGE_WT /* software: pagecache or swap? */
-+
-+#define _PAGE_SZ_MASK (_PAGE_SZ0 | _PAGE_SZ1)
-+#define _PAGE_PR_MASK (_PAGE_RW | _PAGE_USER)
-+
-+/* Extended mode bits */
-+#define _PAGE_EXT_ESZ0 0x0010 /* ESZ0-bit: Size of page */
-+#define _PAGE_EXT_ESZ1 0x0020 /* ESZ1-bit: Size of page */
-+#define _PAGE_EXT_ESZ2 0x0040 /* ESZ2-bit: Size of page */
-+#define _PAGE_EXT_ESZ3 0x0080 /* ESZ3-bit: Size of page */
-+
-+#define _PAGE_EXT_USER_EXEC 0x0100 /* EPR0-bit: User space executable */
-+#define _PAGE_EXT_USER_WRITE 0x0200 /* EPR1-bit: User space writable */
-+#define _PAGE_EXT_USER_READ 0x0400 /* EPR2-bit: User space readable */
-+
-+#define _PAGE_EXT_KERN_EXEC 0x0800 /* EPR3-bit: Kernel space executable */
-+#define _PAGE_EXT_KERN_WRITE 0x1000 /* EPR4-bit: Kernel space writable */
-+#define _PAGE_EXT_KERN_READ 0x2000 /* EPR5-bit: Kernel space readable */
-+
-+/* Wrapper for extended mode pgprot twiddling */
-+#define _PAGE_EXT(x) ((unsigned long long)(x) << 32)
-+
-+/* software: moves to PTEA.TC (Timing Control) */
-+#define _PAGE_PCC_AREA5 0x00000000 /* use BSC registers for area5 */
-+#define _PAGE_PCC_AREA6 0x80000000 /* use BSC registers for area6 */
-+
-+/* software: moves to PTEA.SA[2:0] (Space Attributes) */
-+#define _PAGE_PCC_IODYN 0x00000001 /* IO space, dynamically sized bus */
-+#define _PAGE_PCC_IO8 0x20000000 /* IO space, 8 bit bus */
-+#define _PAGE_PCC_IO16 0x20000001 /* IO space, 16 bit bus */
-+#define _PAGE_PCC_COM8 0x40000000 /* Common Memory space, 8 bit bus */
-+#define _PAGE_PCC_COM16 0x40000001 /* Common Memory space, 16 bit bus */
-+#define _PAGE_PCC_ATR8 0x60000000 /* Attribute Memory space, 8 bit bus */
-+#define _PAGE_PCC_ATR16 0x60000001 /* Attribute Memory space, 16 bit bus */
-+
-+/* Mask which drops unused bits from the PTEL value */
-+#if defined(CONFIG_CPU_SH3)
-+#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED| \
-+ _PAGE_FILE | _PAGE_SZ1 | \
-+ _PAGE_HW_SHARED)
-+#elif defined(CONFIG_X2TLB)
-+/* Get rid of the legacy PR/SZ bits when using extended mode */
-+#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | \
-+ _PAGE_FILE | _PAGE_PR_MASK | _PAGE_SZ_MASK)
-+#else
-+#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | _PAGE_FILE)
-+#endif
-+
-+#define _PAGE_FLAGS_HARDWARE_MASK (PHYS_ADDR_MASK & ~(_PAGE_CLEAR_FLAGS))
-+
-+/* Hardware flags, page size encoding */
-+#if defined(CONFIG_X2TLB)
-+# if defined(CONFIG_PAGE_SIZE_4KB)
-+# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ0)
-+# elif defined(CONFIG_PAGE_SIZE_8KB)
-+# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ1)
-+# elif defined(CONFIG_PAGE_SIZE_64KB)
-+# define _PAGE_FLAGS_HARD _PAGE_EXT(_PAGE_EXT_ESZ2)
-+# endif
-+#else
-+# if defined(CONFIG_PAGE_SIZE_4KB)
-+# define _PAGE_FLAGS_HARD _PAGE_SZ0
-+# elif defined(CONFIG_PAGE_SIZE_64KB)
-+# define _PAGE_FLAGS_HARD _PAGE_SZ1
-+# endif
-+#endif
-+
-+#if defined(CONFIG_X2TLB)
-+# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
-+# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2)
-+# elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
-+# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ2)
-+# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
-+# define _PAGE_SZHUGE (_PAGE_EXT_ESZ0 | _PAGE_EXT_ESZ1 | _PAGE_EXT_ESZ2)
-+# elif defined(CONFIG_HUGETLB_PAGE_SIZE_4MB)
-+# define _PAGE_SZHUGE (_PAGE_EXT_ESZ3)
-+# elif defined(CONFIG_HUGETLB_PAGE_SIZE_64MB)
-+# define _PAGE_SZHUGE (_PAGE_EXT_ESZ2 | _PAGE_EXT_ESZ3)
-+# endif
-+#else
-+# if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
-+# define _PAGE_SZHUGE (_PAGE_SZ1)
-+# elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
-+# define _PAGE_SZHUGE (_PAGE_SZ0 | _PAGE_SZ1)
-+# endif
-+#endif
-+
-+/*
-+ * Stub out _PAGE_SZHUGE if we don't have a good definition for it,
-+ * to make pte_mkhuge() happy.
-+ */
-+#ifndef _PAGE_SZHUGE
-+# define _PAGE_SZHUGE (_PAGE_FLAGS_HARD)
-+#endif
-+
-+#define _PAGE_CHG_MASK \
-+ (PTE_MASK | _PAGE_ACCESSED | _PAGE_CACHABLE | _PAGE_DIRTY)
-+
-+#ifndef __ASSEMBLY__
-+
-+#if defined(CONFIG_X2TLB) /* SH-X2 TLB */
-+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-+ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_USER_READ | \
-+ _PAGE_EXT_USER_WRITE))
-+
-+#define PAGE_EXECREAD __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-+ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_EXEC | \
-+ _PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_USER_EXEC | \
-+ _PAGE_EXT_USER_READ))
-+
-+#define PAGE_COPY PAGE_EXECREAD
-+
-+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-+ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_USER_READ))
-+
-+#define PAGE_WRITEONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-+ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_USER_WRITE))
-+
-+#define PAGE_RWX __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-+ _PAGE_CACHABLE | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_EXEC | \
-+ _PAGE_EXT_USER_WRITE | \
-+ _PAGE_EXT_USER_READ | \
-+ _PAGE_EXT_USER_EXEC))
-+
-+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-+ _PAGE_DIRTY | _PAGE_ACCESSED | \
-+ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_KERN_EXEC))
-+
-+#define PAGE_KERNEL_NOCACHE \
-+ __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
-+ _PAGE_ACCESSED | _PAGE_HW_SHARED | \
-+ _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_KERN_EXEC))
-+
-+#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-+ _PAGE_DIRTY | _PAGE_ACCESSED | \
-+ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_EXEC))
-+
-+#define PAGE_KERNEL_PCC(slot, type) \
-+ __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
-+ _PAGE_EXT(_PAGE_EXT_KERN_READ | \
-+ _PAGE_EXT_KERN_WRITE | \
-+ _PAGE_EXT_KERN_EXEC) | \
-+ (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
-+ (type))
-+
-+#elif defined(CONFIG_MMU) /* SH-X TLB */
-+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_CACHABLE | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
-+ _PAGE_CACHABLE | _PAGE_ACCESSED | \
-+ _PAGE_FLAGS_HARD)
-+
-+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_EXECREAD PAGE_READONLY
-+#define PAGE_RWX PAGE_SHARED
-+#define PAGE_WRITEONLY PAGE_SHARED
-+
-+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_CACHABLE | \
-+ _PAGE_DIRTY | _PAGE_ACCESSED | \
-+ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_KERNEL_NOCACHE \
-+ __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
-+ _PAGE_ACCESSED | _PAGE_HW_SHARED | \
-+ _PAGE_FLAGS_HARD)
-+
-+#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_CACHABLE | \
-+ _PAGE_DIRTY | _PAGE_ACCESSED | \
-+ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD)
-+
-+#define PAGE_KERNEL_PCC(slot, type) \
-+ __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
-+ _PAGE_ACCESSED | _PAGE_FLAGS_HARD | \
-+ (slot ? _PAGE_PCC_AREA5 : _PAGE_PCC_AREA6) | \
-+ (type))
-+#else /* no mmu */
-+#define PAGE_NONE __pgprot(0)
-+#define PAGE_SHARED __pgprot(0)
-+#define PAGE_COPY __pgprot(0)
-+#define PAGE_EXECREAD __pgprot(0)
-+#define PAGE_RWX __pgprot(0)
-+#define PAGE_READONLY __pgprot(0)
-+#define PAGE_WRITEONLY __pgprot(0)
-+#define PAGE_KERNEL __pgprot(0)
-+#define PAGE_KERNEL_NOCACHE __pgprot(0)
-+#define PAGE_KERNEL_RO __pgprot(0)
-+
-+#define PAGE_KERNEL_PCC(slot, type) \
-+ __pgprot(0)
-+#endif
-+
-+#endif /* __ASSEMBLY__ */
-+
-+#ifndef __ASSEMBLY__
-+
-+/*
-+ * Certain architectures need to do special things when PTEs
-+ * within a page table are directly modified. Thus, the following
-+ * hook is made available.
-+ */
-+#ifdef CONFIG_X2TLB
-+static inline void set_pte(pte_t *ptep, pte_t pte)
-+{
-+ ptep->pte_high = pte.pte_high;
-+ smp_wmb();
-+ ptep->pte_low = pte.pte_low;
-+}
-+#else
-+#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
-+#endif
-+
-+#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-+
-+/*
-+ * (pmds are folded into pgds so this doesn't get actually called,
-+ * but the define is needed for a generic inline function.)
-+ */
-+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
-+
-+#define pfn_pte(pfn, prot) \
-+ __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
-+#define pfn_pmd(pfn, prot) \
-+ __pmd(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
-+
-+#define pte_none(x) (!pte_val(x))
-+#define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
-+
-+#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
-+
-+#define pmd_none(x) (!pmd_val(x))
-+#define pmd_present(x) (pmd_val(x))
-+#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
-+#define pmd_bad(x) (pmd_val(x) & ~PAGE_MASK)
-+
-+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
-+#define pte_page(x) pfn_to_page(pte_pfn(x))
-+
-+/*
-+ * The following only work if pte_present() is true.
-+ * Undefined behaviour if not..
-+ */
-+#define pte_not_present(pte) (!((pte).pte_low & _PAGE_PRESENT))
-+#define pte_dirty(pte) ((pte).pte_low & _PAGE_DIRTY)
-+#define pte_young(pte) ((pte).pte_low & _PAGE_ACCESSED)
-+#define pte_file(pte) ((pte).pte_low & _PAGE_FILE)
-+
-+#ifdef CONFIG_X2TLB
-+#define pte_write(pte) ((pte).pte_high & _PAGE_EXT_USER_WRITE)
-+#else
-+#define pte_write(pte) ((pte).pte_low & _PAGE_RW)
-+#endif
-+
-+#define PTE_BIT_FUNC(h,fn,op) \
-+static inline pte_t pte_##fn(pte_t pte) { pte.pte_##h op; return pte; }
-+
-+#ifdef CONFIG_X2TLB
-+/*
-+ * We cheat a bit in the SH-X2 TLB case. As the permission bits are
-+ * individually toggled (and user permissions are entirely decoupled from
-+ * kernel permissions), we attempt to couple them a bit more sanely here.
-+ */
-+PTE_BIT_FUNC(high, wrprotect, &= ~_PAGE_EXT_USER_WRITE);
-+PTE_BIT_FUNC(high, mkwrite, |= _PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE);
-+PTE_BIT_FUNC(high, mkhuge, |= _PAGE_SZHUGE);
-+#else
-+PTE_BIT_FUNC(low, wrprotect, &= ~_PAGE_RW);
-+PTE_BIT_FUNC(low, mkwrite, |= _PAGE_RW);
-+PTE_BIT_FUNC(low, mkhuge, |= _PAGE_SZHUGE);
-+#endif
-+
-+PTE_BIT_FUNC(low, mkclean, &= ~_PAGE_DIRTY);
-+PTE_BIT_FUNC(low, mkdirty, |= _PAGE_DIRTY);
-+PTE_BIT_FUNC(low, mkold, &= ~_PAGE_ACCESSED);
-+PTE_BIT_FUNC(low, mkyoung, |= _PAGE_ACCESSED);
-+
-+/*
-+ * Macro and implementation to mark a page protection as uncacheable.
-+ */
-+#define pgprot_writecombine(prot) \
-+ __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
-+
-+#define pgprot_noncached pgprot_writecombine
-+
-+/*
-+ * Conversion functions: convert a page and protection to a page entry,
-+ * and a page entry and page directory to the page they refer to.
-+ *
-+ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
-+ */
-+#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
-+
-+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
-+{
-+ pte.pte_low &= _PAGE_CHG_MASK;
-+ pte.pte_low |= pgprot_val(newprot);
-+
-+#ifdef CONFIG_X2TLB
-+ pte.pte_high |= pgprot_val(newprot) >> 32;
-+#endif
-+
-+ return pte;
-+}
-+
-+#define pmd_page_vaddr(pmd) ((unsigned long)pmd_val(pmd))
-+#define pmd_page(pmd) (virt_to_page(pmd_val(pmd)))
-+
-+/* to find an entry in a page-table-directory. */
-+#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
-+#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
-+
-+/* to find an entry in a kernel page-table-directory */
-+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
-+
-+/* Find an entry in the third-level page table.. */
-+#define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-+#define pte_offset_kernel(dir, address) \
-+ ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address))
-+#define pte_offset_map(dir, address) pte_offset_kernel(dir, address)
-+#define pte_offset_map_nested(dir, address) pte_offset_kernel(dir, address)
-+
-+#define pte_unmap(pte) do { } while (0)
-+#define pte_unmap_nested(pte) do { } while (0)
-+
-+#ifdef CONFIG_X2TLB
-+#define pte_ERROR(e) \
-+ printk("%s:%d: bad pte %p(%08lx%08lx).\n", __FILE__, __LINE__, \
-+ &(e), (e).pte_high, (e).pte_low)
-+#define pgd_ERROR(e) \
-+ printk("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
-+#else
-+#define pte_ERROR(e) \
-+ printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
-+#define pgd_ERROR(e) \
-+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
-+#endif
-+
-+/*
-+ * Encode and de-code a swap entry
-+ *
-+ * Constraints:
-+ * _PAGE_FILE at bit 0
-+ * _PAGE_PRESENT at bit 8
-+ * _PAGE_PROTNONE at bit 9
-+ *
-+ * For the normal case, we encode the swap type into bits 0:7 and the
-+ * swap offset into bits 10:30. For the 64-bit PTE case, we keep the
-+ * preserved bits in the low 32-bits and use the upper 32 as the swap
-+ * offset (along with a 5-bit type), following the same approach as x86
-+ * PAE. This keeps the logic quite simple, and allows for a full 32
-+ * PTE_FILE_MAX_BITS, as opposed to the 29-bits we're constrained with
-+ * in the pte_low case.
-+ *
-+ * As is evident from the Alpha code, if we ever get a 64-bit unsigned
-+ * long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
-+ * much cleaner..
-+ *
-+ * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
-+ * and _PAGE_PROTNONE bits
-+ */
-+#ifdef CONFIG_X2TLB
-+#define __swp_type(x) ((x).val & 0x1f)
-+#define __swp_offset(x) ((x).val >> 5)
-+#define __swp_entry(type, offset) ((swp_entry_t){ (type) | (offset) << 5})
-+#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
-+#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
-+
-+/*
-+ * Encode and decode a nonlinear file mapping entry
-+ */
-+#define pte_to_pgoff(pte) ((pte).pte_high)
-+#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
-+
-+#define PTE_FILE_MAX_BITS 32
-+#else
-+#define __swp_type(x) ((x).val & 0xff)
-+#define __swp_offset(x) ((x).val >> 10)
-+#define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) <<10})
-+
-+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 1 })
-+#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 1 })
-+
-+/*
-+ * Encode and decode a nonlinear file mapping entry
-+ */
-+#define PTE_FILE_MAX_BITS 29
-+#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
-+#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
-+#endif
-+
-+#endif /* __ASSEMBLY__ */
-+#endif /* __ASM_SH_PGTABLE_32_H */
-diff --git a/include/asm-sh/pgtable_64.h b/include/asm-sh/pgtable_64.h
-new file mode 100644
-index 0000000..9722116
---- /dev/null
-+++ b/include/asm-sh/pgtable_64.h
-@@ -0,0 +1,299 @@
-+#ifndef __ASM_SH_PGTABLE_64_H
-+#define __ASM_SH_PGTABLE_64_H
-+
-+/*
-+ * include/asm-sh/pgtable_64.h
-+ *
-+ * This file contains the functions and defines necessary to modify and use
-+ * the SuperH page table tree.
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003, 2004 Paul Mundt
-+ * Copyright (C) 2003, 2004 Richard Curnow
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#include <linux/threads.h>
-+#include <asm/processor.h>
-+#include <asm/page.h>
-+
-+/*
-+ * Error outputs.
-+ */
-+#define pte_ERROR(e) \
-+ printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
-+#define pgd_ERROR(e) \
-+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
-+
-+/*
-+ * Table setting routines. Used within arch/mm only.
-+ */
-+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
-+
-+static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
-+{
-+ unsigned long long x = ((unsigned long long) pteval.pte_low);
-+ unsigned long long *xp = (unsigned long long *) pteptr;
-+ /*
-+ * Sign-extend based on NPHYS.
-+ */
-+ *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
-+}
-+#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-+
-+static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
-+{
-+ pmd_val(*pmdp) = (unsigned long) ptep;
-+}
-+
-+/*
-+ * PGD defines. Top level.
-+ */
-+
-+/* To find an entry in a generic PGD. */
-+#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
-+#define __pgd_offset(address) pgd_index(address)
-+#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
-+
-+/* To find an entry in a kernel PGD. */
-+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
-+
-+/*
-+ * PMD level access routines. Same notes as above.
-+ */
-+#define _PMD_EMPTY 0x0
-+/* Either the PMD is empty or present, it's not paged out */
-+#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
-+#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
-+#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
-+#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
-+
-+#define pmd_page_vaddr(pmd_entry) \
-+ ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
-+
-+#define pmd_page(pmd) \
-+ (virt_to_page(pmd_val(pmd)))
-+
-+/* PMD to PTE dereferencing */
-+#define pte_index(address) \
-+ ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-+
-+#define pte_offset_kernel(dir, addr) \
-+ ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
-+
-+#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
-+#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
-+#define pte_unmap(pte) do { } while (0)
-+#define pte_unmap_nested(pte) do { } while (0)
-+
-+#ifndef __ASSEMBLY__
-+#define IOBASE_VADDR 0xff000000
-+#define IOBASE_END 0xffffffff
-+
-+/*
-+ * PTEL coherent flags.
-+ * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
-+ */
-+/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
-+ positions, to avoid expensive bit shuffling on every refill. The remaining
-+ bits are used for s/w purposes and masked out on each refill.
-+
-+ Note, the PTE slots are used to hold data of type swp_entry_t when a page is
-+ swapped out. Only the _PAGE_PRESENT flag is significant when the page is
-+ swapped out, and it must be placed so that it doesn't overlap either the
-+ type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
-+ at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
-+ scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
-+ [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
-+ into 2 pieces. That is handled by SWP_ENTRY and SWP_TYPE below. */
-+#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
-+#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
-+#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
-+#define _PAGE_PRESENT 0x004 /* software: page referenced */
-+#define _PAGE_FILE 0x004 /* software: only when !present */
-+#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
-+#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
-+#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
-+#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
-+#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
-+#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
-+#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
-+#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
-+#define _PAGE_ACCESSED 0x800 /* software: page referenced */
-+
-+/* Mask which drops software flags */
-+#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
-+
-+/*
-+ * HugeTLB support
-+ */
-+#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
-+#define _PAGE_SZHUGE (_PAGE_SIZE0)
-+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
-+#define _PAGE_SZHUGE (_PAGE_SIZE1)
-+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
-+#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
-+#endif
-+
-+/*
-+ * Default flags for a Kernel page.
-+ * This is fundamentally also SHARED because the main use of this define
-+ * (other than for PGD/PMD entries) is for the VMALLOC pool which is
-+ * contextless.
-+ *
-+ * _PAGE_EXECUTE is required for modules
-+ *
-+ */
-+#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
-+ _PAGE_EXECUTE | \
-+ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
-+ _PAGE_SHARED)
-+
-+/* Default flags for a User page */
-+#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
-+
-+#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
-+
-+/*
-+ * We have full permissions (Read/Write/Execute/Shared).
-+ */
-+#define _PAGE_COMMON (_PAGE_PRESENT | _PAGE_USER | \
-+ _PAGE_CACHABLE | _PAGE_ACCESSED)
-+
-+#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
-+#define PAGE_SHARED __pgprot(_PAGE_COMMON | _PAGE_READ | _PAGE_WRITE | \
-+ _PAGE_SHARED)
-+#define PAGE_EXECREAD __pgprot(_PAGE_COMMON | _PAGE_READ | _PAGE_EXECUTE)
-+
-+/*
-+ * We need to include PAGE_EXECUTE in PAGE_COPY because it is the default
-+ * protection mode for the stack.
-+ */
-+#define PAGE_COPY PAGE_EXECREAD
-+
-+#define PAGE_READONLY __pgprot(_PAGE_COMMON | _PAGE_READ)
-+#define PAGE_WRITEONLY __pgprot(_PAGE_COMMON | _PAGE_WRITE)
-+#define PAGE_RWX __pgprot(_PAGE_COMMON | _PAGE_READ | \
-+ _PAGE_WRITE | _PAGE_EXECUTE)
-+#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
-+
-+/* Make it a device mapping for maximum safety (e.g. for mapping device
-+ registers into user-space via /dev/map). */
-+#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
-+#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
-+
-+/*
-+ * Handling allocation failures during page table setup.
-+ */
-+extern void __handle_bad_pmd_kernel(pmd_t * pmd);
-+#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
-+
-+/*
-+ * PTE level access routines.
-+ *
-+ * Note1:
-+ * It's the tree walk leaf. This is physical address to be stored.
-+ *
-+ * Note 2:
-+ * Regarding the choice of _PTE_EMPTY:
-+
-+ We must choose a bit pattern that cannot be valid, whether or not the page
-+ is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
-+ out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
-+ left for us to select. If we force bit[7]==0 when swapped out, we could use
-+ the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
-+ we force bit[7]==1 when swapped out, we can use all zeroes to indicate
-+ empty. This is convenient, because the page tables get cleared to zero
-+ when they are allocated.
-+
-+ */
-+#define _PTE_EMPTY 0x0
-+#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
-+#define pte_clear(mm,addr,xp) (set_pte_at(mm, addr, xp, __pte(_PTE_EMPTY)))
-+#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
-+
-+/*
-+ * Some definitions to translate between mem_map, PTEs, and page
-+ * addresses:
-+ */
-+
-+/*
-+ * Given a PTE, return the index of the mem_map[] entry corresponding
-+ * to the page frame the PTE refers to: take the absolute physical
-+ * address, make it relative to __MEMORY_START and translate it to an
-+ * index.
-+ */
-+#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
-+ __MEMORY_START) >> PAGE_SHIFT)
-+
-+/*
-+ * Given a PTE, return the "struct page *".
-+ */
-+#define pte_page(x) (mem_map + pte_pagenr(x))
-+
-+/*
-+ * Return number of (down rounded) MB corresponding to x pages.
-+ */
-+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
-+
-+
-+/*
-+ * The following have defined behaviour only if pte_present() is true.
-+ */
-+static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
-+static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
-+static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
-+static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
-+
-+static inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
-+static inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
-+static inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
-+static inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
-+static inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
-+static inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
-+static inline pte_t pte_mkhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_SZHUGE)); return pte; }
-+
-+
-+/*
-+ * Conversion functions: convert a page and protection to a page entry.
-+ *
-+ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
-+ */
-+#define mk_pte(page,pgprot) \
-+({ \
-+ pte_t __pte; \
-+ \
-+ set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
-+ __MEMORY_START | pgprot_val((pgprot)))); \
-+ __pte; \
-+})
-+
-+/*
-+ * This takes an (absolute) physical page address that is used
-+ * by the remapping functions.
-+ */
-+#define mk_pte_phys(physpage, pgprot) \
-+({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
-+
-+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
-+{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
-+
-+/* Encode and decode a swap entry */
-+#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
-+#define __swp_offset(x) ((x).val >> 8)
-+#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
-+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
-+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-+
-+/* Encode and decode a nonlinear file mapping entry */
-+#define PTE_FILE_MAX_BITS 29
-+#define pte_to_pgoff(pte) (pte_val(pte))
-+#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
-+
-+#endif /* !__ASSEMBLY__ */
-+
-+#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
-+#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
-+
-+#endif /* __ASM_SH_PGTABLE_64_H */
-diff --git a/include/asm-sh/posix_types.h b/include/asm-sh/posix_types.h
-index 0a3d2f5..4b9d11c 100644
---- a/include/asm-sh/posix_types.h
-+++ b/include/asm-sh/posix_types.h
-@@ -1,122 +1,7 @@
--#ifndef __ASM_SH_POSIX_TYPES_H
--#define __ASM_SH_POSIX_TYPES_H
+-static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+- struct page *pte)
+-{
+- set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) page_address (pte)));
+-}
+-
+-static inline void check_pgt_cache(void)
+-{
+- quicklist_trim(0, NULL, 25, 16);
+-}
+-
+-#endif /* __ASM_SH64_PGALLOC_H */
+diff --git a/include/asm-sh64/pgtable.h b/include/asm-sh64/pgtable.h
+deleted file mode 100644
+index 3488fe3..0000000
+--- a/include/asm-sh64/pgtable.h
++++ /dev/null
+@@ -1,496 +0,0 @@
+-#ifndef __ASM_SH64_PGTABLE_H
+-#define __ASM_SH64_PGTABLE_H
+-
+-#include <asm-generic/4level-fixup.h>
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/pgtable.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003, 2004 Paul Mundt
+- * Copyright (C) 2003, 2004 Richard Curnow
+- *
+- * This file contains the functions and defines necessary to modify and use
+- * the SuperH page table tree.
+- */
+-
+-#ifndef __ASSEMBLY__
+-#include <asm/processor.h>
+-#include <asm/page.h>
+-#include <linux/threads.h>
+-
+-struct vm_area_struct;
+-
+-extern void paging_init(void);
+-
+-/* We provide our own get_unmapped_area to avoid cache synonym issue */
+-#define HAVE_ARCH_UNMAPPED_AREA
+-
+-/*
+- * Basically we have the same two-level (which is the logical three level
+- * Linux page table layout folded) page tables as the i386.
+- */
+-
+-/*
+- * ZERO_PAGE is a global shared page that is always zero: used
+- * for zero-mapped memory areas etc..
+- */
+-extern unsigned char empty_zero_page[PAGE_SIZE];
+-#define ZERO_PAGE(vaddr) (mem_map + MAP_NR(empty_zero_page))
+-
+-#endif /* !__ASSEMBLY__ */
+-
+-/*
+- * NEFF and NPHYS related defines.
+- * FIXME : These need to be model-dependent. For now this is OK, SH5-101 and SH5-103
+- * implement 32 bits effective and 32 bits physical. But future implementations may
+- * extend beyond this.
+- */
+-#define NEFF 32
+-#define NEFF_SIGN (1LL << (NEFF - 1))
+-#define NEFF_MASK (-1LL << NEFF)
+-
+-#define NPHYS 32
+-#define NPHYS_SIGN (1LL << (NPHYS - 1))
+-#define NPHYS_MASK (-1LL << NPHYS)
+-
+-/* Typically 2-level is sufficient up to 32 bits of virtual address space, beyond
+- that 3-level would be appropriate. */
+-#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+-/* For 4k pages, this contains 512 entries, i.e. 9 bits worth of address. */
+-#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+-#define PTE_MAGNITUDE 3 /* sizeof(unsigned long long) magnit. */
+-#define PTE_SHIFT PAGE_SHIFT
+-#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+-
+-/* top level: PGD. */
+-#define PGDIR_SHIFT (PTE_SHIFT + PTE_BITS)
+-#define PGD_BITS (NEFF - PGDIR_SHIFT)
+-#define PTRS_PER_PGD (1<<PGD_BITS)
+-
+-/* middle level: PMD. This doesn't do anything for the 2-level case. */
+-#define PTRS_PER_PMD (1)
+-
+-#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+-#define PGDIR_MASK (~(PGDIR_SIZE-1))
+-#define PMD_SHIFT PGDIR_SHIFT
+-#define PMD_SIZE PGDIR_SIZE
+-#define PMD_MASK PGDIR_MASK
+-
+-#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+-/*
+- * three-level asymmetric paging structure: PGD is top level.
+- * The asymmetry comes from 32-bit pointers and 64-bit PTEs.
+- */
+-/* bottom level: PTE. It's 9 bits = 512 pointers */
+-#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+-#define PTE_MAGNITUDE 3 /* sizeof(unsigned long long) magnit. */
+-#define PTE_SHIFT PAGE_SHIFT
+-#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+-
+-/* middle level: PMD. It's 10 bits = 1024 pointers */
+-#define PTRS_PER_PMD ((1<<PAGE_SHIFT)/sizeof(unsigned long long *))
+-#define PMD_MAGNITUDE 2 /* sizeof(unsigned long long *) magnit. */
+-#define PMD_SHIFT (PTE_SHIFT + PTE_BITS)
+-#define PMD_BITS (PAGE_SHIFT - PMD_MAGNITUDE)
+-
+-/* top level: PGD. It's 1 bit = 2 pointers */
+-#define PGDIR_SHIFT (PMD_SHIFT + PMD_BITS)
+-#define PGD_BITS (NEFF - PGDIR_SHIFT)
+-#define PTRS_PER_PGD (1<<PGD_BITS)
+-
+-#define PMD_SIZE (1UL << PMD_SHIFT)
+-#define PMD_MASK (~(PMD_SIZE-1))
+-#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+-#define PGDIR_MASK (~(PGDIR_SIZE-1))
+-
+-#else
+-#error "No defined number of page table levels"
+-#endif
+-
+-/*
+- * Error outputs.
+- */
+-#define pte_ERROR(e) \
+- printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
+-#define pmd_ERROR(e) \
+- printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+-#define pgd_ERROR(e) \
+- printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+-
+-/*
+- * Table setting routines. Used within arch/mm only.
+- */
+-#define set_pgd(pgdptr, pgdval) (*(pgdptr) = pgdval)
+-#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+-
+-static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
+-{
+- unsigned long long x = ((unsigned long long) pteval.pte);
+- unsigned long long *xp = (unsigned long long *) pteptr;
+- /*
+- * Sign-extend based on NPHYS.
+- */
+- *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
+-}
+-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+-
+-static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
+-{
+- pmd_val(*pmdp) = (unsigned long) ptep;
+-}
+-
+-/*
+- * PGD defines. Top level.
+- */
+-
+-/* To find an entry in a generic PGD. */
+-#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
+-#define __pgd_offset(address) pgd_index(address)
+-#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+-
+-/* To find an entry in a kernel PGD. */
+-#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-
+-/*
+- * PGD level access routines.
+- *
+- * Note1:
+- * There's no need to use physical addresses since the tree walk is
+- * performed entirely in software, until the PTE translation.
+- *
+- * Note 2:
+- * A PGD entry can be uninitialized (_PGD_UNUSED), generically bad,
+- * clear (_PGD_EMPTY), or present. When present, the lower 3 nibbles
+- * contain _KERNPG_TABLE; being a kernel virtual pointer, bit 31 must
+- * also be 1. Assuming a clear value with bit 31 set to 0 and the
+- * lower 3 nibbles set to 0xFFF (_PGD_EMPTY), any other value is a
+- * bad pgd that must be reported via printk().
+- *
+- */
+-#define _PGD_EMPTY 0x0
+-
+-#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+-static inline int pgd_none(pgd_t pgd) { return 0; }
+-static inline int pgd_bad(pgd_t pgd) { return 0; }
+-#define pgd_present(pgd) ((pgd_val(pgd) & _PAGE_PRESENT) ? 1 : 0)
+-#define pgd_clear(xx) do { } while(0)
+-
+-#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+-#define pgd_present(pgd_entry) (1)
+-#define pgd_none(pgd_entry) (pgd_val((pgd_entry)) == _PGD_EMPTY)
+-/* TODO: Think later about what a useful definition of 'bad' would be now. */
+-#define pgd_bad(pgd_entry) (0)
+-#define pgd_clear(pgd_entry_p) (set_pgd((pgd_entry_p), __pgd(_PGD_EMPTY)))
+-
+-#endif
+-
+-
+-#define pgd_page_vaddr(pgd_entry) ((unsigned long) (pgd_val(pgd_entry) & PAGE_MASK))
+-#define pgd_page(pgd) (virt_to_page(pgd_val(pgd)))
+-
+-
+-/*
+- * PMD defines. Middle level.
+- */
+-
+-/* PGD to PMD dereferencing */
+-#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+-static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
+-{
+- return (pmd_t *) dir;
+-}
+-#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+-#define __pmd_offset(address) \
+- (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+-#define pmd_offset(dir, addr) \
+- ((pmd_t *) ((pgd_val(*(dir))) & PAGE_MASK) + __pmd_offset((addr)))
+-#endif
+-
+-/*
+- * PMD level access routines. Same notes as above.
+- */
+-#define _PMD_EMPTY 0x0
+-/* Either the PMD is empty or present, it's not paged out */
+-#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
+-#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
+-#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
+-#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+-
+-#define pmd_page_vaddr(pmd_entry) \
+- ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
+-
+-#define pmd_page(pmd) \
+- (virt_to_page(pmd_val(pmd)))
+-
+-/* PMD to PTE dereferencing */
+-#define pte_index(address) \
+- ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+-
+-#define pte_offset_kernel(dir, addr) \
+- ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
+-
+-#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
+-#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
+-#define pte_unmap(pte) do { } while (0)
+-#define pte_unmap_nested(pte) do { } while (0)
+-
+-/* Round it up ! */
+-#define USER_PTRS_PER_PGD ((TASK_SIZE+PGDIR_SIZE-1)/PGDIR_SIZE)
+-#define FIRST_USER_ADDRESS 0
+-
+-#ifndef __ASSEMBLY__
+-#define VMALLOC_END 0xff000000
+-#define VMALLOC_START 0xf0000000
+-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+-
+-#define IOBASE_VADDR 0xff000000
+-#define IOBASE_END 0xffffffff
+-
+-/*
+- * PTEL coherent flags.
+- * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
+- */
+-/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
+- positions, to avoid expensive bit shuffling on every refill. The remaining
+- bits are used for s/w purposes and masked out on each refill.
+-
+- Note, the PTE slots are used to hold data of type swp_entry_t when a page is
+- swapped out. Only the _PAGE_PRESENT flag is significant when the page is
+- swapped out, and it must be placed so that it doesn't overlap either the
+- type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
+- at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
+- scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
+- [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
+- into 2 pieces. That is handled by SWP_ENTRY and SWP_TYPE below. */
+-#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
+-#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
+-#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
+-#define _PAGE_PRESENT 0x004 /* software: page referenced */
+-#define _PAGE_FILE 0x004 /* software: only when !present */
+-#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
+-#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
+-#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
+-#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
+-#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
+-#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
+-#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
+-#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
+-#define _PAGE_ACCESSED 0x800 /* software: page referenced */
+-
+-/* Mask which drops software flags */
+-#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
+-
+-/*
+- * HugeTLB support
+- */
+-#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+-#define _PAGE_SZHUGE (_PAGE_SIZE0)
+-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+-#define _PAGE_SZHUGE (_PAGE_SIZE1)
+-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
+-#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
+-#endif
+-
+-/*
+- * Default flags for a Kernel page.
+- * This is fundamentally also SHARED because the main use of this define
+- * (other than for PGD/PMD entries) is for the VMALLOC pool which is
+- * contextless.
+- *
+- * _PAGE_EXECUTE is required for modules
+- *
+- */
+-#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+- _PAGE_EXECUTE | \
+- _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
+- _PAGE_SHARED)
+-
+-/* Default flags for a User page */
+-#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
+-
+-#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+-
+-#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
+-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+- _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_USER | \
+- _PAGE_SHARED)
+-/* We need to include PAGE_EXECUTE in PAGE_COPY because it is the default
+- * protection mode for the stack. */
+-#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_USER | _PAGE_EXECUTE)
+-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+- _PAGE_ACCESSED | _PAGE_USER)
+-#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
+-
+-
+-/*
+- * In ST50 we have full permissions (Read/Write/Execute/Shared).
+- * Just match'em all. These are for mmap(), therefore all at least
+- * User/Cachable/Present/Accessed. No point in making Fault on Write.
+- */
+-#define __MMAP_COMMON (_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
+- /* sxwr */
+-#define __P000 __pgprot(__MMAP_COMMON)
+-#define __P001 __pgprot(__MMAP_COMMON | _PAGE_READ)
+-#define __P010 __pgprot(__MMAP_COMMON)
+-#define __P011 __pgprot(__MMAP_COMMON | _PAGE_READ)
+-#define __P100 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+-#define __P101 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+-#define __P110 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+-#define __P111 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+-
+-#define __S000 __pgprot(__MMAP_COMMON | _PAGE_SHARED)
+-#define __S001 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ)
+-#define __S010 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_WRITE)
+-#define __S011 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ | _PAGE_WRITE)
+-#define __S100 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE)
+-#define __S101 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ)
+-#define __S110 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_WRITE)
+-#define __S111 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ | _PAGE_WRITE)
+-
+-/* Make it a device mapping for maximum safety (e.g. for mapping device
+- registers into user-space via /dev/map). */
+-#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
+-#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+-
+-/*
+- * Handling allocation failures during page table setup.
+- */
+-extern void __handle_bad_pmd_kernel(pmd_t * pmd);
+-#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
+-
+-/*
+- * PTE level access routines.
+- *
+- * Note 1:
+- * It's the tree walk leaf. This is the physical address to be stored.
+- *
+- * Note 2:
+- * Regarding the choice of _PTE_EMPTY:
+-
+- We must choose a bit pattern that cannot be valid, whether or not the page
+- is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
+- out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
+- left for us to select. If we force bit[7]==0 when swapped out, we could use
+- the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
+- we force bit[7]==1 when swapped out, we can use all zeroes to indicate
+- empty. This is convenient, because the page tables get cleared to zero
+- when they are allocated.
+-
+- */
+-#define _PTE_EMPTY 0x0
+-#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
+-#define pte_clear(mm,addr,xp) (set_pte_at(mm, addr, xp, __pte(_PTE_EMPTY)))
+-#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
+-
+-/*
+- * Some definitions to translate between mem_map, PTEs, and page
+- * addresses:
+- */
+-
+-/*
+- * Given a PTE, return the index of the mem_map[] entry corresponding
+- * to the page frame referenced by the PTE. Take the absolute physical
+- * address, make it relative, and translate it to an index.
+- */
+-#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
+- __MEMORY_START) >> PAGE_SHIFT)
+-
+-/*
+- * Given a PTE, return the "struct page *".
+- */
+-#define pte_page(x) (mem_map + pte_pagenr(x))
+-
+-/*
+- * Return number of (down rounded) MB corresponding to x pages.
+- */
+-#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+-
+-
+-/*
+- * The following only have defined behavior if pte_present() is true.
+- */
+-static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
+-static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
+-static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
+-static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+-
+-static inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
+-static inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
+-static inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
+-static inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
+-static inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
+-static inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
+-static inline pte_t pte_mkhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_SZHUGE)); return pte; }
+-
+-
+-/*
+- * Conversion functions: convert a page and protection to a page entry.
+- *
+- * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
+- */
+-#define mk_pte(page,pgprot) \
+-({ \
+- pte_t __pte; \
+- \
+- set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
+- __MEMORY_START | pgprot_val((pgprot)))); \
+- __pte; \
+-})
+-
+-/*
+- * This takes an (absolute) physical page address that is used
+- * by the remapping functions
+- */
+-#define mk_pte_phys(physpage, pgprot) \
+-({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
+-
+-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+-{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
+-
+-typedef pte_t *pte_addr_t;
+-#define pgtable_cache_init() do { } while (0)
+-
+-extern void update_mmu_cache(struct vm_area_struct * vma,
+- unsigned long address, pte_t pte);
+-
+-/* Encode and decode a swap entry */
+-#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
+-#define __swp_offset(x) ((x).val >> 8)
+-#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
+-#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+-#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
+-
+-/* Encode and decode a nonlinear file mapping entry */
+-#define PTE_FILE_MAX_BITS 29
+-#define pte_to_pgoff(pte) (pte_val(pte))
+-#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
+-
+-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+-#define PageSkip(page) (0)
+-#define kern_addr_valid(addr) (1)
+-
+-#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+- remap_pfn_range(vma, vaddr, pfn, size, prot)
+-
+-#endif /* !__ASSEMBLY__ */
+-
+-/*
+- * No page table caches to initialise
+- */
+-#define pgtable_cache_init() do { } while (0)
+-
+-#define pte_pfn(x) (((unsigned long)((x).pte)) >> PAGE_SHIFT)
+-#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-
+-extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+-
+-#include <asm-generic/pgtable.h>
+-
+-#endif /* __ASM_SH64_PGTABLE_H */
+diff --git a/include/asm-sh64/platform.h b/include/asm-sh64/platform.h
+deleted file mode 100644
+index bd0d9c4..0000000
+--- a/include/asm-sh64/platform.h
++++ /dev/null
+@@ -1,64 +0,0 @@
+-#ifndef __ASM_SH64_PLATFORM_H
+-#define __ASM_SH64_PLATFORM_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/platform.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- * benedict.gaster at superh.com: 3rd May 2002
+- * Added support for ramdisk, removing statically linked romfs at the same time.
+- */
+-
+-#include <linux/ioport.h>
+-#include <asm/irq.h>
+-
+-
+-/*
+- * Platform definition structure.
+- */
+-struct sh64_platform {
+- unsigned int readonly_rootfs;
+- unsigned int ramdisk_flags;
+- unsigned int initial_root_dev;
+- unsigned int loader_type;
+- unsigned int initrd_start;
+- unsigned int initrd_size;
+- unsigned int fpu_flags;
+- unsigned int io_res_count;
+- unsigned int kram_res_count;
+- unsigned int xram_res_count;
+- unsigned int rom_res_count;
+- struct resource *io_res_p;
+- struct resource *kram_res_p;
+- struct resource *xram_res_p;
+- struct resource *rom_res_p;
+-};
+-
+-extern struct sh64_platform platform_parms;
+-
+-extern unsigned long long memory_start, memory_end;
+-
+-extern unsigned long long fpu_in_use;
+-
+-extern int platform_int_priority[NR_INTC_IRQS];
+-
+-#define FPU_FLAGS (platform_parms.fpu_flags)
+-#define STANDARD_IO_RESOURCES (platform_parms.io_res_count)
+-#define STANDARD_KRAM_RESOURCES (platform_parms.kram_res_count)
+-#define STANDARD_XRAM_RESOURCES (platform_parms.xram_res_count)
+-#define STANDARD_ROM_RESOURCES (platform_parms.rom_res_count)
+-
+-/*
+- * Kernel Memory description, Respectively:
+- * code = last but one memory descriptor
+- * data = last memory descriptor
+- */
+-#define code_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 2])
+-#define data_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 1])
+-
+-#endif /* __ASM_SH64_PLATFORM_H */
+diff --git a/include/asm-sh64/poll.h b/include/asm-sh64/poll.h
+deleted file mode 100644
+index ca29502..0000000
+--- a/include/asm-sh64/poll.h
++++ /dev/null
+@@ -1,8 +0,0 @@
+-#ifndef __ASM_SH64_POLL_H
+-#define __ASM_SH64_POLL_H
+-
+-#include <asm-generic/poll.h>
+-
+-#undef POLLREMOVE
+-
+-#endif /* __ASM_SH64_POLL_H */
+diff --git a/include/asm-sh64/posix_types.h b/include/asm-sh64/posix_types.h
+deleted file mode 100644
+index 0620317..0000000
+--- a/include/asm-sh64/posix_types.h
++++ /dev/null
+@@ -1,131 +0,0 @@
+-#ifndef __ASM_SH64_POSIX_TYPES_H
+-#define __ASM_SH64_POSIX_TYPES_H
-
-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/posix_types.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
- * This file is generally used by user-level software, so you need to
- * be a little careful about namespace pollution etc. Also, we cannot
- * assume GCC is being used.
@@ -525191,7 +612872,7 @@
-typedef unsigned short __kernel_ipc_pid_t;
-typedef unsigned short __kernel_uid_t;
-typedef unsigned short __kernel_gid_t;
--typedef unsigned int __kernel_size_t;
+-typedef long unsigned int __kernel_size_t;
-typedef int __kernel_ssize_t;
-typedef int __kernel_ptrdiff_t;
-typedef long __kernel_time_t;
@@ -525243,7 +612924,7 @@
-
-#undef __FD_ISSET
-static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
--{
+-{
- unsigned long __tmp = __fd / __NFDBITS;
- unsigned long __rem = __fd % __NFDBITS;
- return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
@@ -525295,2322 +612976,2088 @@
-
-#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
-
--#endif /* __ASM_SH_POSIX_TYPES_H */
-+#ifdef __KERNEL__
-+# ifdef CONFIG_SUPERH32
-+# include "posix_types_32.h"
-+# else
-+# include "posix_types_64.h"
-+# endif
-+#endif /* __KERNEL__ */
-diff --git a/include/asm-sh/posix_types_32.h b/include/asm-sh/posix_types_32.h
-new file mode 100644
-index 0000000..0a3d2f5
---- /dev/null
-+++ b/include/asm-sh/posix_types_32.h
-@@ -0,0 +1,122 @@
-+#ifndef __ASM_SH_POSIX_TYPES_H
-+#define __ASM_SH_POSIX_TYPES_H
-+
-+/*
-+ * This file is generally used by user-level software, so you need to
-+ * be a little careful about namespace pollution etc. Also, we cannot
-+ * assume GCC is being used.
-+ */
-+
-+typedef unsigned long __kernel_ino_t;
-+typedef unsigned short __kernel_mode_t;
-+typedef unsigned short __kernel_nlink_t;
-+typedef long __kernel_off_t;
-+typedef int __kernel_pid_t;
-+typedef unsigned short __kernel_ipc_pid_t;
-+typedef unsigned short __kernel_uid_t;
-+typedef unsigned short __kernel_gid_t;
-+typedef unsigned int __kernel_size_t;
-+typedef int __kernel_ssize_t;
-+typedef int __kernel_ptrdiff_t;
-+typedef long __kernel_time_t;
-+typedef long __kernel_suseconds_t;
-+typedef long __kernel_clock_t;
-+typedef int __kernel_timer_t;
-+typedef int __kernel_clockid_t;
-+typedef int __kernel_daddr_t;
-+typedef char * __kernel_caddr_t;
-+typedef unsigned short __kernel_uid16_t;
-+typedef unsigned short __kernel_gid16_t;
-+typedef unsigned int __kernel_uid32_t;
-+typedef unsigned int __kernel_gid32_t;
-+
-+typedef unsigned short __kernel_old_uid_t;
-+typedef unsigned short __kernel_old_gid_t;
-+typedef unsigned short __kernel_old_dev_t;
-+
-+#ifdef __GNUC__
-+typedef long long __kernel_loff_t;
-+#endif
-+
-+typedef struct {
-+#if defined(__KERNEL__) || defined(__USE_ALL)
-+ int val[2];
-+#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
-+ int __val[2];
-+#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
-+} __kernel_fsid_t;
-+
-+#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
-+
-+#undef __FD_SET
-+static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
-+}
-+
-+#undef __FD_CLR
-+static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
-+}
-+
-+
-+#undef __FD_ISSET
-+static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
-+}
-+
-+/*
-+ * This will unroll the loop for the normal constant case (8 ints,
-+ * for a 256-bit fd_set)
-+ */
-+#undef __FD_ZERO
-+static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
-+{
-+ unsigned long *__tmp = __p->fds_bits;
-+ int __i;
-+
-+ if (__builtin_constant_p(__FDSET_LONGS)) {
-+ switch (__FDSET_LONGS) {
-+ case 16:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ __tmp[ 4] = 0; __tmp[ 5] = 0;
-+ __tmp[ 6] = 0; __tmp[ 7] = 0;
-+ __tmp[ 8] = 0; __tmp[ 9] = 0;
-+ __tmp[10] = 0; __tmp[11] = 0;
-+ __tmp[12] = 0; __tmp[13] = 0;
-+ __tmp[14] = 0; __tmp[15] = 0;
-+ return;
-+
-+ case 8:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ __tmp[ 4] = 0; __tmp[ 5] = 0;
-+ __tmp[ 6] = 0; __tmp[ 7] = 0;
-+ return;
-+
-+ case 4:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ return;
-+ }
-+ }
-+ __i = __FDSET_LONGS;
-+ while (__i) {
-+ __i--;
-+ *__tmp = 0;
-+ __tmp++;
-+ }
-+}
-+
-+#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
-+
-+#endif /* __ASM_SH_POSIX_TYPES_H */
-diff --git a/include/asm-sh/posix_types_64.h b/include/asm-sh/posix_types_64.h
-new file mode 100644
-index 0000000..0620317
---- /dev/null
-+++ b/include/asm-sh/posix_types_64.h
-@@ -0,0 +1,131 @@
-+#ifndef __ASM_SH64_POSIX_TYPES_H
-+#define __ASM_SH64_POSIX_TYPES_H
-+
-+/*
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ *
-+ * include/asm-sh64/posix_types.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003 Paul Mundt
-+ *
-+ * This file is generally used by user-level software, so you need to
-+ * be a little careful about namespace pollution etc. Also, we cannot
-+ * assume GCC is being used.
-+ */
-+
-+typedef unsigned long __kernel_ino_t;
-+typedef unsigned short __kernel_mode_t;
-+typedef unsigned short __kernel_nlink_t;
-+typedef long __kernel_off_t;
-+typedef int __kernel_pid_t;
-+typedef unsigned short __kernel_ipc_pid_t;
-+typedef unsigned short __kernel_uid_t;
-+typedef unsigned short __kernel_gid_t;
-+typedef long unsigned int __kernel_size_t;
-+typedef int __kernel_ssize_t;
-+typedef int __kernel_ptrdiff_t;
-+typedef long __kernel_time_t;
-+typedef long __kernel_suseconds_t;
-+typedef long __kernel_clock_t;
-+typedef int __kernel_timer_t;
-+typedef int __kernel_clockid_t;
-+typedef int __kernel_daddr_t;
-+typedef char * __kernel_caddr_t;
-+typedef unsigned short __kernel_uid16_t;
-+typedef unsigned short __kernel_gid16_t;
-+typedef unsigned int __kernel_uid32_t;
-+typedef unsigned int __kernel_gid32_t;
-+
-+typedef unsigned short __kernel_old_uid_t;
-+typedef unsigned short __kernel_old_gid_t;
-+typedef unsigned short __kernel_old_dev_t;
-+
-+#ifdef __GNUC__
-+typedef long long __kernel_loff_t;
-+#endif
-+
-+typedef struct {
-+#if defined(__KERNEL__) || defined(__USE_ALL)
-+ int val[2];
-+#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
-+ int __val[2];
-+#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
-+} __kernel_fsid_t;
-+
-+#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
-+
-+#undef __FD_SET
-+static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
-+}
-+
-+#undef __FD_CLR
-+static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
-+}
-+
-+
-+#undef __FD_ISSET
-+static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
-+{
-+ unsigned long __tmp = __fd / __NFDBITS;
-+ unsigned long __rem = __fd % __NFDBITS;
-+ return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
-+}
-+
-+/*
-+ * This will unroll the loop for the normal constant case (8 ints,
-+ * for a 256-bit fd_set)
-+ */
-+#undef __FD_ZERO
-+static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
-+{
-+ unsigned long *__tmp = __p->fds_bits;
-+ int __i;
-+
-+ if (__builtin_constant_p(__FDSET_LONGS)) {
-+ switch (__FDSET_LONGS) {
-+ case 16:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ __tmp[ 4] = 0; __tmp[ 5] = 0;
-+ __tmp[ 6] = 0; __tmp[ 7] = 0;
-+ __tmp[ 8] = 0; __tmp[ 9] = 0;
-+ __tmp[10] = 0; __tmp[11] = 0;
-+ __tmp[12] = 0; __tmp[13] = 0;
-+ __tmp[14] = 0; __tmp[15] = 0;
-+ return;
-+
-+ case 8:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ __tmp[ 4] = 0; __tmp[ 5] = 0;
-+ __tmp[ 6] = 0; __tmp[ 7] = 0;
-+ return;
-+
-+ case 4:
-+ __tmp[ 0] = 0; __tmp[ 1] = 0;
-+ __tmp[ 2] = 0; __tmp[ 3] = 0;
-+ return;
-+ }
-+ }
-+ __i = __FDSET_LONGS;
-+ while (__i) {
-+ __i--;
-+ *__tmp = 0;
-+ __tmp++;
-+ }
-+}
-+
-+#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
-+
-+#endif /* __ASM_SH64_POSIX_TYPES_H */
-diff --git a/include/asm-sh/processor.h b/include/asm-sh/processor.h
-index fda6848..c9b1416 100644
---- a/include/asm-sh/processor.h
-+++ b/include/asm-sh/processor.h
-@@ -1,32 +1,10 @@
+-#endif /* __ASM_SH64_POSIX_TYPES_H */
+diff --git a/include/asm-sh64/processor.h b/include/asm-sh64/processor.h
+deleted file mode 100644
+index eb2bee4..0000000
+--- a/include/asm-sh64/processor.h
++++ /dev/null
+@@ -1,287 +0,0 @@
+-#ifndef __ASM_SH64_PROCESSOR_H
+-#define __ASM_SH64_PROCESSOR_H
+-
-/*
-- * include/asm-sh/processor.h
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/processor.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- * Copyright (C) 2004 Richard Curnow
+- *
+- */
+-
+-#include <asm/page.h>
+-
+-#ifndef __ASSEMBLY__
+-
+-#include <asm/types.h>
+-#include <asm/cache.h>
+-#include <asm/registers.h>
+-#include <linux/threads.h>
+-#include <linux/compiler.h>
+-
+-/*
+- * Default implementation of macro that returns current
+- * instruction pointer ("program counter").
+- */
+-#define current_text_addr() ({ \
+-void *pc; \
+-unsigned long long __dummy = 0; \
+-__asm__("gettr tr0, %1\n\t" \
+- "pta 4, tr0\n\t" \
+- "gettr tr0, %0\n\t" \
+- "ptabs %1, tr0\n\t" \
+- :"=r" (pc), "=r" (__dummy) \
+- : "1" (__dummy)); \
+-pc; })
+-
+-/*
+- * CPU type and hardware bug flags. Kept separately for each CPU.
+- */
+-enum cpu_type {
+- CPU_SH5_101,
+- CPU_SH5_103,
+- CPU_SH_NONE
+-};
+-
+-/*
+- * TLB information structure
+- *
+- * Defined for both I and D tlb, per-processor.
+- */
+-struct tlb_info {
+- unsigned long long next;
+- unsigned long long first;
+- unsigned long long last;
+-
+- unsigned int entries;
+- unsigned int step;
+-
+- unsigned long flags;
+-};
+-
+-struct sh_cpuinfo {
+- enum cpu_type type;
+- unsigned long loops_per_jiffy;
+-
+- char hard_math;
+-
+- unsigned long *pgd_quick;
+- unsigned long *pmd_quick;
+- unsigned long *pte_quick;
+- unsigned long pgtable_cache_sz;
+- unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+-
+- /* Cache info */
+- struct cache_info icache;
+- struct cache_info dcache;
+-
+- /* TLB info */
+- struct tlb_info itlb;
+- struct tlb_info dtlb;
+-};
+-
+-extern struct sh_cpuinfo boot_cpu_data;
+-
+-#define cpu_data (&boot_cpu_data)
+-#define current_cpu_data boot_cpu_data
+-
+-#endif
+-
+-/*
+- * User space process size: 2GB - 4k.
+- */
+-#define TASK_SIZE 0x7ffff000UL
+-
+-/* This decides where the kernel will search for a free chunk of vm
+- * space during mmap's.
+- */
+-#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+-
+-/*
+- * Bit of SR register
+- *
+- * FD-bit:
+- * When it's set, the processor doesn't have the right to use the FPU,
+- * and executing a floating-point operation raises an exception.
+- *
+- * IMASK-bit:
+- * Interrupt level mask
+- *
+- * STEP-bit:
+- * Single step bit
+- *
+- */
+-#define SR_FD 0x00008000
+-
+-#if defined(CONFIG_SH64_SR_WATCH)
+-#define SR_MMU 0x84000000
+-#else
+-#define SR_MMU 0x80000000
+-#endif
+-
+-#define SR_IMASK 0x000000f0
+-#define SR_SSTEP 0x08000000
+-
+-#ifndef __ASSEMBLY__
+-
+-/*
+- * FPU structure and data : require 8-byte alignment as we need to access it
+- with fld.p, fst.p
+- */
+-
+-struct sh_fpu_hard_struct {
+- unsigned long fp_regs[64];
+- unsigned int fpscr;
+- /* long status; * software status information */
+-};
+-
+-#if 0
+-/* Dummy fpu emulator */
+-struct sh_fpu_soft_struct {
+- unsigned long long fp_regs[32];
+- unsigned int fpscr;
+- unsigned char lookahead;
+- unsigned long entry_pc;
+-};
+-#endif
+-
+-union sh_fpu_union {
+- struct sh_fpu_hard_struct hard;
+- /* 'hard' itself only produces 32 bit alignment, yet we need
+- to access it using 64 bit load/store as well. */
+- unsigned long long alignment_dummy;
+-};
+-
+-struct thread_struct {
+- unsigned long sp;
+- unsigned long pc;
+- /* This stores the address of the pt_regs built during a context
+- switch, or of the register save area built for a kernel mode
+- exception. It is used for backtracing the stack of a sleeping task
+- or one that traps in kernel mode. */
+- struct pt_regs *kregs;
+- /* This stores the address of the pt_regs constructed on entry from
+- user mode. It is a fixed value over the lifetime of a process, or
+- NULL for a kernel thread. */
+- struct pt_regs *uregs;
+-
+- unsigned long trap_no, error_code;
+- unsigned long address;
+- /* Hardware debugging registers may come here */
+-
+- /* floating point info */
+- union sh_fpu_union fpu;
+-};
+-
+-#define INIT_MMAP \
+-{ &init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL }
+-
+-extern struct pt_regs fake_swapper_regs;
+-
+-#define INIT_THREAD { \
+- .sp = sizeof(init_stack) + \
+- (long) &init_stack, \
+- .pc = 0, \
+- .kregs = &fake_swapper_regs, \
+- .uregs = NULL, \
+- .trap_no = 0, \
+- .error_code = 0, \
+- .address = 0, \
+- .fpu = { { { 0, } }, } \
+-}
+-
+-/*
+- * Do necessary setup to start up a newly executed thread.
+- */
+-#define SR_USER (SR_MMU | SR_FD)
+-
+-#define start_thread(regs, new_pc, new_sp) \
+- set_fs(USER_DS); \
+- regs->sr = SR_USER; /* User mode. */ \
+- regs->pc = new_pc - 4; /* Compensate syscall exit */ \
+- regs->pc |= 1; /* Set SHmedia ! */ \
+- regs->regs[18] = 0; \
+- regs->regs[15] = new_sp
+-
+-/* Forward declaration, a strange C thing */
+-struct task_struct;
+-struct mm_struct;
+-
+-/* Free all resources held by a thread. */
+-extern void release_thread(struct task_struct *);
+-/*
+- * create a kernel thread without removing it from tasklists
+- */
+-extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
+-
+-
+-/* Copy and release all segment info associated with a VM */
+-#define copy_segments(p, mm) do { } while (0)
+-#define release_segments(mm) do { } while (0)
+-#define forget_segments() do { } while (0)
+-#define prepare_to_copy(tsk) do { } while (0)
+-/*
+- * FPU lazy state save handling.
+- */
+-
+-static inline void release_fpu(void)
+-{
+- unsigned long long __dummy;
+-
+- /* Set FD flag in SR */
+- __asm__ __volatile__("getcon " __SR ", %0\n\t"
+- "or %0, %1, %0\n\t"
+- "putcon %0, " __SR "\n\t"
+- : "=&r" (__dummy)
+- : "r" (SR_FD));
+-}
+-
+-static inline void grab_fpu(void)
+-{
+- unsigned long long __dummy;
+-
+- /* Clear out FD flag in SR */
+- __asm__ __volatile__("getcon " __SR ", %0\n\t"
+- "and %0, %1, %0\n\t"
+- "putcon %0, " __SR "\n\t"
+- : "=&r" (__dummy)
+- : "r" (~SR_FD));
+-}
+-
+-/* Round to nearest, no exceptions on inexact, overflow, underflow,
+- zero-divide, invalid. Configure option for whether to flush denorms to
+- zero, or except if a denorm is encountered. */
+-#if defined(CONFIG_SH64_FPU_DENORM_FLUSH)
+-#define FPSCR_INIT 0x00040000
+-#else
+-#define FPSCR_INIT 0x00000000
+-#endif
+-
+-/* Save the current FP regs */
+-void fpsave(struct sh_fpu_hard_struct *fpregs);
+-
+-/* Initialise the FP state of a task */
+-void fpinit(struct sh_fpu_hard_struct *fpregs);
+-
+-extern struct task_struct *last_task_used_math;
+-
+-/*
+- * Return saved PC of a blocked thread.
+- */
+-#define thread_saved_pc(tsk) (tsk->thread.pc)
+-
+-extern unsigned long get_wchan(struct task_struct *p);
+-
+-#define KSTK_EIP(tsk) ((tsk)->thread.pc)
+-#define KSTK_ESP(tsk) ((tsk)->thread.sp)
+-
+-#define cpu_relax() barrier()
+-
+-#endif /* __ASSEMBLY__ */
+-#endif /* __ASM_SH64_PROCESSOR_H */
+-
+diff --git a/include/asm-sh64/ptrace.h b/include/asm-sh64/ptrace.h
+deleted file mode 100644
+index c424f80..0000000
+--- a/include/asm-sh64/ptrace.h
++++ /dev/null
+@@ -1,35 +0,0 @@
+-#ifndef __ASM_SH64_PTRACE_H
+-#define __ASM_SH64_PTRACE_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/ptrace.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-/*
+- * This struct defines the way the registers are stored on the
+- * kernel stack during a system call or other kernel entry.
+- */
+-struct pt_regs {
+- unsigned long long pc;
+- unsigned long long sr;
+- unsigned long long syscall_nr;
+- unsigned long long regs[63];
+- unsigned long long tregs[8];
+- unsigned long long pad[2];
+-};
+-
+-#ifdef __KERNEL__
+-#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
+-#define instruction_pointer(regs) ((regs)->pc)
+-#define profile_pc(regs) ((unsigned long)instruction_pointer(regs))
+-extern void show_regs(struct pt_regs *);
+-#endif
+-
+-#endif /* __ASM_SH64_PTRACE_H */
+diff --git a/include/asm-sh64/registers.h b/include/asm-sh64/registers.h
+deleted file mode 100644
+index 7eec666..0000000
+--- a/include/asm-sh64/registers.h
++++ /dev/null
+@@ -1,106 +0,0 @@
+-#ifndef __ASM_SH64_REGISTERS_H
+-#define __ASM_SH64_REGISTERS_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/registers.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2004 Richard Curnow
+- */
+-
+-#ifdef __ASSEMBLY__
+-/* =====================================================================
+-**
+-** Section 1: acts on assembly sources pre-processed by GPP ( <source.S>).
+-** Assigns symbolic names to control & target registers.
+-*/
+-
+-/*
+- * Define some useful aliases for control registers.
+- */
+-#define SR cr0
+-#define SSR cr1
+-#define PSSR cr2
+- /* cr3 UNDEFINED */
+-#define INTEVT cr4
+-#define EXPEVT cr5
+-#define PEXPEVT cr6
+-#define TRA cr7
+-#define SPC cr8
+-#define PSPC cr9
+-#define RESVEC cr10
+-#define VBR cr11
+- /* cr12 UNDEFINED */
+-#define TEA cr13
+- /* cr14-cr15 UNDEFINED */
+-#define DCR cr16
+-#define KCR0 cr17
+-#define KCR1 cr18
+- /* cr19-cr31 UNDEFINED */
+- /* cr32-cr61 RESERVED */
+-#define CTC cr62
+-#define USR cr63
+-
+-/*
+- * ABI dependent registers (general purpose set)
+- */
+-#define RET r2
+-#define ARG1 r2
+-#define ARG2 r3
+-#define ARG3 r4
+-#define ARG4 r5
+-#define ARG5 r6
+-#define ARG6 r7
+-#define SP r15
+-#define LINK r18
+-#define ZERO r63
+-
+-/*
+- * Status register defines: used only by assembly sources (and
+- * syntax independednt)
+- */
+-#define SR_RESET_VAL 0x0000000050008000
+-#define SR_HARMLESS 0x00000000500080f0 /* Write ignores for most */
+-#define SR_ENABLE_FPU 0xffffffffffff7fff /* AND with this */
+-
+-#if defined (CONFIG_SH64_SR_WATCH)
+-#define SR_ENABLE_MMU 0x0000000084000000 /* OR with this */
+-#else
+-#define SR_ENABLE_MMU 0x0000000080000000 /* OR with this */
+-#endif
+-
+-#define SR_UNBLOCK_EXC 0xffffffffefffffff /* AND with this */
+-#define SR_BLOCK_EXC 0x0000000010000000 /* OR with this */
+-
+-#else /* Not __ASSEMBLY__ syntax */
+-
+-/*
+-** Stringify reg. name
+-*/
+-#define __str(x) #x
+-
+-/* Stringify control register names for use in inline assembly */
+-#define __SR __str(SR)
+-#define __SSR __str(SSR)
+-#define __PSSR __str(PSSR)
+-#define __INTEVT __str(INTEVT)
+-#define __EXPEVT __str(EXPEVT)
+-#define __PEXPEVT __str(PEXPEVT)
+-#define __TRA __str(TRA)
+-#define __SPC __str(SPC)
+-#define __PSPC __str(PSPC)
+-#define __RESVEC __str(RESVEC)
+-#define __VBR __str(VBR)
+-#define __TEA __str(TEA)
+-#define __DCR __str(DCR)
+-#define __KCR0 __str(KCR0)
+-#define __KCR1 __str(KCR1)
+-#define __CTC __str(CTC)
+-#define __USR __str(USR)
+-
+-#endif /* __ASSEMBLY__ */
+-#endif /* __ASM_SH64_REGISTERS_H */
+diff --git a/include/asm-sh64/resource.h b/include/asm-sh64/resource.h
+deleted file mode 100644
+index 8ff9394..0000000
+--- a/include/asm-sh64/resource.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_RESOURCE_H
+-#define __ASM_SH64_RESOURCE_H
+-
+-#include <asm-sh/resource.h>
+-
+-#endif /* __ASM_SH64_RESOURCE_H */
+diff --git a/include/asm-sh64/scatterlist.h b/include/asm-sh64/scatterlist.h
+deleted file mode 100644
+index 7f729bb..0000000
+--- a/include/asm-sh64/scatterlist.h
++++ /dev/null
+@@ -1,37 +0,0 @@
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/scatterlist.h
+- *
+- * Copyright (C) 2003 Paul Mundt
+- *
+- */
+-#ifndef __ASM_SH64_SCATTERLIST_H
+-#define __ASM_SH64_SCATTERLIST_H
+-
+-#include <asm/types.h>
+-
+-struct scatterlist {
+-#ifdef CONFIG_DEBUG_SG
+- unsigned long sg_magic;
+-#endif
+- unsigned long page_link;
+- unsigned int offset;/* for highmem, page offset */
+- dma_addr_t dma_address;
+- unsigned int length;
+-};
+-
+-/* These macros should be used after a pci_map_sg call has been done
+- * to get bus addresses of each of the SG entries and their lengths.
+- * You should only work with the number of sg entries pci_map_sg
+- * returns, or alternatively stop on the first sg_dma_len(sg) which
+- * is 0.
+- */
+-#define sg_dma_address(sg) ((sg)->dma_address)
+-#define sg_dma_len(sg) ((sg)->length)
+-
+-#define ISA_DMA_THRESHOLD (0xffffffff)
+-
+-#endif /* !__ASM_SH64_SCATTERLIST_H */
+diff --git a/include/asm-sh64/sci.h b/include/asm-sh64/sci.h
+deleted file mode 100644
+index 793c568..0000000
+--- a/include/asm-sh64/sci.h
++++ /dev/null
+@@ -1 +0,0 @@
+-#include <asm-sh/sci.h>
+diff --git a/include/asm-sh64/sections.h b/include/asm-sh64/sections.h
+deleted file mode 100644
+index 897f36b..0000000
+--- a/include/asm-sh64/sections.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-#ifndef __ASM_SH64_SECTIONS_H
+-#define __ASM_SH64_SECTIONS_H
+-
+-#include <asm-sh/sections.h>
+-
+-#endif /* __ASM_SH64_SECTIONS_H */
+-
+diff --git a/include/asm-sh64/segment.h b/include/asm-sh64/segment.h
+deleted file mode 100644
+index 92ac001..0000000
+--- a/include/asm-sh64/segment.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef _ASM_SEGMENT_H
+-#define _ASM_SEGMENT_H
+-
+-/* Only here because we have some old header files that expect it.. */
+-
+-#endif /* _ASM_SEGMENT_H */
+diff --git a/include/asm-sh64/semaphore-helper.h b/include/asm-sh64/semaphore-helper.h
+deleted file mode 100644
+index fcfafe2..0000000
+--- a/include/asm-sh64/semaphore-helper.h
++++ /dev/null
+@@ -1,101 +0,0 @@
+-#ifndef __ASM_SH64_SEMAPHORE_HELPER_H
+-#define __ASM_SH64_SEMAPHORE_HELPER_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/semaphore-helper.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-#include <asm/errno.h>
+-
+-/*
+- * SMP- and interrupt-safe semaphores helper functions.
+- *
+- * (C) Copyright 1996 Linus Torvalds
+- * (C) Copyright 1999 Andrea Arcangeli
+- */
+-
+-/*
+- * These two _must_ execute atomically wrt each other.
+- *
+- * This is trivially done with load_locked/store_cond,
+- * which we have. Let the rest of the losers suck eggs.
+- */
+-static __inline__ void wake_one_more(struct semaphore * sem)
+-{
+- atomic_inc((atomic_t *)&sem->sleepers);
+-}
+-
+-static __inline__ int waking_non_zero(struct semaphore *sem)
+-{
+- unsigned long flags;
+- int ret = 0;
+-
+- spin_lock_irqsave(&semaphore_wake_lock, flags);
+- if (sem->sleepers > 0) {
+- sem->sleepers--;
+- ret = 1;
+- }
+- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+- return ret;
+-}
+-
+-/*
+- * waking_non_zero_interruptible:
+- * 1 got the lock
+- * 0 go to sleep
+- * -EINTR interrupted
+- *
+- * We must undo the sem->count down_interruptible() increment while we are
+- * protected by the spinlock in order to make atomic this atomic_inc() with the
+- * atomic_read() in wake_one_more(), otherwise we can race. -arca
+- */
+-static __inline__ int waking_non_zero_interruptible(struct semaphore *sem,
+- struct task_struct *tsk)
+-{
+- unsigned long flags;
+- int ret = 0;
+-
+- spin_lock_irqsave(&semaphore_wake_lock, flags);
+- if (sem->sleepers > 0) {
+- sem->sleepers--;
+- ret = 1;
+- } else if (signal_pending(tsk)) {
+- atomic_inc(&sem->count);
+- ret = -EINTR;
+- }
+- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+- return ret;
+-}
+-
+-/*
+- * waking_non_zero_trylock:
+- * 1 failed to lock
+- * 0 got the lock
+- *
+- * We must undo the sem->count down_trylock() increment while we are
+- * protected by the spinlock in order to make atomic this atomic_inc() with the
+- * atomic_read() in wake_one_more(), otherwise we can race. -arca
+- */
+-static __inline__ int waking_non_zero_trylock(struct semaphore *sem)
+-{
+- unsigned long flags;
+- int ret = 1;
+-
+- spin_lock_irqsave(&semaphore_wake_lock, flags);
+- if (sem->sleepers <= 0)
+- atomic_inc(&sem->count);
+- else {
+- sem->sleepers--;
+- ret = 0;
+- }
+- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+- return ret;
+-}
+-
+-#endif /* __ASM_SH64_SEMAPHORE_HELPER_H */
+diff --git a/include/asm-sh64/semaphore.h b/include/asm-sh64/semaphore.h
+deleted file mode 100644
+index f027cc1..0000000
+--- a/include/asm-sh64/semaphore.h
++++ /dev/null
+@@ -1,119 +0,0 @@
+-#ifndef __ASM_SH64_SEMAPHORE_H
+-#define __ASM_SH64_SEMAPHORE_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/semaphore.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- * SMP- and interrupt-safe semaphores.
+- *
+- * (C) Copyright 1996 Linus Torvalds
+- *
+- * SuperH verison by Niibe Yutaka
+- * (Currently no asm implementation but generic C code...)
+- *
+- */
+-
+-#include <linux/linkage.h>
+-#include <linux/spinlock.h>
+-#include <linux/wait.h>
+-#include <linux/rwsem.h>
+-
+-#include <asm/system.h>
+-#include <asm/atomic.h>
+-
+-struct semaphore {
+- atomic_t count;
+- int sleepers;
+- wait_queue_head_t wait;
+-};
+-
+-#define __SEMAPHORE_INITIALIZER(name, n) \
+-{ \
+- .count = ATOMIC_INIT(n), \
+- .sleepers = 0, \
+- .wait = __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
+-}
+-
+-#define __DECLARE_SEMAPHORE_GENERIC(name,count) \
+- struct semaphore name = __SEMAPHORE_INITIALIZER(name,count)
+-
+-#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name,1)
+-
+-static inline void sema_init (struct semaphore *sem, int val)
+-{
+-/*
+- * *sem = (struct semaphore)__SEMAPHORE_INITIALIZER((*sem),val);
+- *
+- * i'd rather use the more flexible initialization above, but sadly
+- * GCC 2.7.2.3 emits a bogus warning. EGCS doesnt. Oh well.
+- */
+- atomic_set(&sem->count, val);
+- sem->sleepers = 0;
+- init_waitqueue_head(&sem->wait);
+-}
+-
+-static inline void init_MUTEX (struct semaphore *sem)
+-{
+- sema_init(sem, 1);
+-}
+-
+-static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+-{
+- sema_init(sem, 0);
+-}
+-
+-#if 0
+-asmlinkage void __down_failed(void /* special register calling convention */);
+-asmlinkage int __down_failed_interruptible(void /* params in registers */);
+-asmlinkage int __down_failed_trylock(void /* params in registers */);
+-asmlinkage void __up_wakeup(void /* special register calling convention */);
+-#endif
+-
+-asmlinkage void __down(struct semaphore * sem);
+-asmlinkage int __down_interruptible(struct semaphore * sem);
+-asmlinkage int __down_trylock(struct semaphore * sem);
+-asmlinkage void __up(struct semaphore * sem);
+-
+-extern spinlock_t semaphore_wake_lock;
+-
+-static inline void down(struct semaphore * sem)
+-{
+- if (atomic_dec_return(&sem->count) < 0)
+- __down(sem);
+-}
+-
+-static inline int down_interruptible(struct semaphore * sem)
+-{
+- int ret = 0;
+-
+- if (atomic_dec_return(&sem->count) < 0)
+- ret = __down_interruptible(sem);
+- return ret;
+-}
+-
+-static inline int down_trylock(struct semaphore * sem)
+-{
+- int ret = 0;
+-
+- if (atomic_dec_return(&sem->count) < 0)
+- ret = __down_trylock(sem);
+- return ret;
+-}
+-
+-/*
+- * Note! This is subtle. We jump to wake people up only if
+- * the semaphore was negative (== somebody was waiting on it).
+- */
+-static inline void up(struct semaphore * sem)
+-{
+- if (atomic_inc_return(&sem->count) <= 0)
+- __up(sem);
+-}
+-
+-#endif /* __ASM_SH64_SEMAPHORE_H */
+diff --git a/include/asm-sh64/sembuf.h b/include/asm-sh64/sembuf.h
+deleted file mode 100644
+index ec4d9f1..0000000
+--- a/include/asm-sh64/sembuf.h
++++ /dev/null
+@@ -1,36 +0,0 @@
+-#ifndef __ASM_SH64_SEMBUF_H
+-#define __ASM_SH64_SEMBUF_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/sembuf.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-/*
+- * The semid64_ds structure for i386 architecture.
+- * Note extra padding because this structure is passed back and forth
+- * between kernel and user space.
+- *
+- * Pad space is left for:
+- * - 64-bit time_t to solve y2038 problem
+- * - 2 miscellaneous 32-bit values
+- */
+-
+-struct semid64_ds {
+- struct ipc64_perm sem_perm; /* permissions .. see ipc.h */
+- __kernel_time_t sem_otime; /* last semop time */
+- unsigned long __unused1;
+- __kernel_time_t sem_ctime; /* last change time */
+- unsigned long __unused2;
+- unsigned long sem_nsems; /* no. of semaphores in array */
+- unsigned long __unused3;
+- unsigned long __unused4;
+-};
+-
+-#endif /* __ASM_SH64_SEMBUF_H */
+diff --git a/include/asm-sh64/serial.h b/include/asm-sh64/serial.h
+deleted file mode 100644
+index e8d7b3f..0000000
+--- a/include/asm-sh64/serial.h
++++ /dev/null
+@@ -1,31 +0,0 @@
+-/*
+- * include/asm-sh64/serial.h
+- *
+- * Configuration details for 8250, 16450, 16550, etc. serial ports
+- */
+-
+-#ifndef _ASM_SERIAL_H
+-#define _ASM_SERIAL_H
+-
+-/*
+- * This assumes you have a 1.8432 MHz clock for your UART.
+- *
+- * It'd be nice if someone built a serial card with a 24.576 MHz
+- * clock, since the 16550A is capable of handling a top speed of 1.5
+- * megabits/second; but this requires the faster clock.
+- */
+-#define BASE_BAUD ( 1843200 / 16 )
+-
+-#define RS_TABLE_SIZE 2
+-
+-#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+-
+-#define SERIAL_PORT_DFNS \
+- /* UART CLK PORT IRQ FLAGS */ \
+- { 0, BASE_BAUD, 0x3F8, 4, STD_COM_FLAGS }, /* ttyS0 */ \
+- { 0, BASE_BAUD, 0x2F8, 3, STD_COM_FLAGS } /* ttyS1 */
+-
+-/* XXX: This should be moved ino irq.h */
+-#define irq_cannonicalize(x) (x)
+-
+-#endif /* _ASM_SERIAL_H */
+diff --git a/include/asm-sh64/setup.h b/include/asm-sh64/setup.h
+deleted file mode 100644
+index 5b07b14..0000000
+--- a/include/asm-sh64/setup.h
++++ /dev/null
+@@ -1,22 +0,0 @@
+-#ifndef __ASM_SH64_SETUP_H
+-#define __ASM_SH64_SETUP_H
+-
+-#define COMMAND_LINE_SIZE 256
+-
+-#ifdef __KERNEL__
+-
+-#define PARAM ((unsigned char *)empty_zero_page)
+-#define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))
+-#define RAMDISK_FLAGS (*(unsigned long *) (PARAM+0x004))
+-#define ORIG_ROOT_DEV (*(unsigned long *) (PARAM+0x008))
+-#define LOADER_TYPE (*(unsigned long *) (PARAM+0x00c))
+-#define INITRD_START (*(unsigned long *) (PARAM+0x010))
+-#define INITRD_SIZE (*(unsigned long *) (PARAM+0x014))
+-
+-#define COMMAND_LINE ((char *) (PARAM+256))
+-#define COMMAND_LINE_SIZE 256
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_SETUP_H */
+-
+diff --git a/include/asm-sh64/shmbuf.h b/include/asm-sh64/shmbuf.h
+deleted file mode 100644
+index 022f349..0000000
+--- a/include/asm-sh64/shmbuf.h
++++ /dev/null
+@@ -1,53 +0,0 @@
+-#ifndef __ASM_SH64_SHMBUF_H
+-#define __ASM_SH64_SHMBUF_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/shmbuf.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-/*
+- * The shmid64_ds structure for i386 architecture.
+- * Note extra padding because this structure is passed back and forth
+- * between kernel and user space.
+- *
+- * Pad space is left for:
+- * - 64-bit time_t to solve y2038 problem
+- * - 2 miscellaneous 32-bit values
+- */
+-
+-struct shmid64_ds {
+- struct ipc64_perm shm_perm; /* operation perms */
+- size_t shm_segsz; /* size of segment (bytes) */
+- __kernel_time_t shm_atime; /* last attach time */
+- unsigned long __unused1;
+- __kernel_time_t shm_dtime; /* last detach time */
+- unsigned long __unused2;
+- __kernel_time_t shm_ctime; /* last change time */
+- unsigned long __unused3;
+- __kernel_pid_t shm_cpid; /* pid of creator */
+- __kernel_pid_t shm_lpid; /* pid of last operator */
+- unsigned long shm_nattch; /* no. of current attaches */
+- unsigned long __unused4;
+- unsigned long __unused5;
+-};
+-
+-struct shminfo64 {
+- unsigned long shmmax;
+- unsigned long shmmin;
+- unsigned long shmmni;
+- unsigned long shmseg;
+- unsigned long shmall;
+- unsigned long __unused1;
+- unsigned long __unused2;
+- unsigned long __unused3;
+- unsigned long __unused4;
+-};
+-
+-#endif /* __ASM_SH64_SHMBUF_H */
+diff --git a/include/asm-sh64/shmparam.h b/include/asm-sh64/shmparam.h
+deleted file mode 100644
+index 1bb820c..0000000
+--- a/include/asm-sh64/shmparam.h
++++ /dev/null
+@@ -1,12 +0,0 @@
+-#ifndef __ASM_SH64_SHMPARAM_H
+-#define __ASM_SH64_SHMPARAM_H
+-
+-/*
+- * Set this to a sensible safe default, we'll work out the specifics for the
+- * align mask from the cache descriptor at run-time.
+- */
+-#define SHMLBA 0x4000
+-
+-#define __ARCH_FORCE_SHMLBA
+-
+-#endif /* __ASM_SH64_SHMPARAM_H */
+diff --git a/include/asm-sh64/sigcontext.h b/include/asm-sh64/sigcontext.h
+deleted file mode 100644
+index 6293509..0000000
+--- a/include/asm-sh64/sigcontext.h
++++ /dev/null
+@@ -1,30 +0,0 @@
+-#ifndef __ASM_SH64_SIGCONTEXT_H
+-#define __ASM_SH64_SIGCONTEXT_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/sigcontext.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-struct sigcontext {
+- unsigned long oldmask;
+-
+- /* CPU registers */
+- unsigned long long sc_regs[63];
+- unsigned long long sc_tregs[8];
+- unsigned long long sc_pc;
+- unsigned long long sc_sr;
+-
+- /* FPU registers */
+- unsigned long long sc_fpregs[32];
+- unsigned int sc_fpscr;
+- unsigned int sc_fpvalid;
+-};
+-
+-#endif /* __ASM_SH64_SIGCONTEXT_H */
+diff --git a/include/asm-sh64/siginfo.h b/include/asm-sh64/siginfo.h
+deleted file mode 100644
+index 56ef1da..0000000
+--- a/include/asm-sh64/siginfo.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_SIGINFO_H
+-#define __ASM_SH64_SIGINFO_H
+-
+-#include <asm-generic/siginfo.h>
+-
+-#endif /* __ASM_SH64_SIGINFO_H */
+diff --git a/include/asm-sh64/signal.h b/include/asm-sh64/signal.h
+deleted file mode 100644
+index 244e134..0000000
+--- a/include/asm-sh64/signal.h
++++ /dev/null
+@@ -1,159 +0,0 @@
+-#ifndef __ASM_SH64_SIGNAL_H
+-#define __ASM_SH64_SIGNAL_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/signal.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-#include <linux/types.h>
+-
+-/* Avoid too many header ordering problems. */
+-struct siginfo;
+-
+-#define _NSIG 64
+-#define _NSIG_BPW 32
+-#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+-
+-typedef unsigned long old_sigset_t; /* at least 32 bits */
+-
+-typedef struct {
+- unsigned long sig[_NSIG_WORDS];
+-} sigset_t;
+-
+-#define SIGHUP 1
+-#define SIGINT 2
+-#define SIGQUIT 3
+-#define SIGILL 4
+-#define SIGTRAP 5
+-#define SIGABRT 6
+-#define SIGIOT 6
+-#define SIGBUS 7
+-#define SIGFPE 8
+-#define SIGKILL 9
+-#define SIGUSR1 10
+-#define SIGSEGV 11
+-#define SIGUSR2 12
+-#define SIGPIPE 13
+-#define SIGALRM 14
+-#define SIGTERM 15
+-#define SIGSTKFLT 16
+-#define SIGCHLD 17
+-#define SIGCONT 18
+-#define SIGSTOP 19
+-#define SIGTSTP 20
+-#define SIGTTIN 21
+-#define SIGTTOU 22
+-#define SIGURG 23
+-#define SIGXCPU 24
+-#define SIGXFSZ 25
+-#define SIGVTALRM 26
+-#define SIGPROF 27
+-#define SIGWINCH 28
+-#define SIGIO 29
+-#define SIGPOLL SIGIO
+-/*
+-#define SIGLOST 29
+-*/
+-#define SIGPWR 30
+-#define SIGSYS 31
+-#define SIGUNUSED 31
+-
+-/* These should not be considered constants from userland. */
+-#define SIGRTMIN 32
+-#define SIGRTMAX (_NSIG-1)
+-
+-/*
+- * SA_FLAGS values:
+- *
+- * SA_ONSTACK indicates that a registered stack_t will be used.
+- * SA_RESTART flag to get restarting signals (which were the default long ago)
+- * SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
+- * SA_RESETHAND clears the handler when the signal is delivered.
+- * SA_NOCLDWAIT flag on SIGCHLD to inhibit zombies.
+- * SA_NODEFER prevents the current signal from being masked in the handler.
+- *
+- * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
+- * Unix names RESETHAND and NODEFER respectively.
+- */
+-#define SA_NOCLDSTOP 0x00000001
+-#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
+-#define SA_SIGINFO 0x00000004
+-#define SA_ONSTACK 0x08000000
+-#define SA_RESTART 0x10000000
+-#define SA_NODEFER 0x40000000
+-#define SA_RESETHAND 0x80000000
+-
+-#define SA_NOMASK SA_NODEFER
+-#define SA_ONESHOT SA_RESETHAND
+-
+-#define SA_RESTORER 0x04000000
+-
+-/*
+- * sigaltstack controls
+- */
+-#define SS_ONSTACK 1
+-#define SS_DISABLE 2
+-
+-#define MINSIGSTKSZ 2048
+-#define SIGSTKSZ THREAD_SIZE
+-
+-#include <asm-generic/signal.h>
+-
+-#ifdef __KERNEL__
+-struct old_sigaction {
+- __sighandler_t sa_handler;
+- old_sigset_t sa_mask;
+- unsigned long sa_flags;
+- void (*sa_restorer)(void);
+-};
+-
+-struct sigaction {
+- __sighandler_t sa_handler;
+- unsigned long sa_flags;
+- void (*sa_restorer)(void);
+- sigset_t sa_mask; /* mask last for extensibility */
+-};
+-
+-struct k_sigaction {
+- struct sigaction sa;
+-};
+-#else
+-/* Here we must cater to libcs that poke about in kernel headers. */
+-
+-struct sigaction {
+- union {
+- __sighandler_t _sa_handler;
+- void (*_sa_sigaction)(int, struct siginfo *, void *);
+- } _u;
+- sigset_t sa_mask;
+- unsigned long sa_flags;
+- void (*sa_restorer)(void);
+-};
+-
+-#define sa_handler _u._sa_handler
+-#define sa_sigaction _u._sa_sigaction
+-
+-#endif /* __KERNEL__ */
+-
+-typedef struct sigaltstack {
+- void *ss_sp;
+- int ss_flags;
+- size_t ss_size;
+-} stack_t;
+-
+-#ifdef __KERNEL__
+-#include <asm/sigcontext.h>
+-
+-#define sigmask(sig) (1UL << ((sig) - 1))
+-#define ptrace_signal_deliver(regs, cookie) do { } while (0)
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_SIGNAL_H */
+diff --git a/include/asm-sh64/smp.h b/include/asm-sh64/smp.h
+deleted file mode 100644
+index 4a4d0da..0000000
+--- a/include/asm-sh64/smp.h
++++ /dev/null
+@@ -1,15 +0,0 @@
+-#ifndef __ASM_SH64_SMP_H
+-#define __ASM_SH64_SMP_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/smp.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
- *
-- * Copyright (C) 1999, 2000 Niibe Yutaka
-- * Copyright (C) 2002, 2003 Paul Mundt
- */
-
- #ifndef __ASM_SH_PROCESSOR_H
- #define __ASM_SH_PROCESSOR_H
--#ifdef __KERNEL__
-
--#include <linux/compiler.h>
--#include <asm/page.h>
--#include <asm/types.h>
--#include <asm/cache.h>
--#include <asm/ptrace.h>
- #include <asm/cpu-features.h>
-+#include <asm/fpu.h>
-
--/*
-- * Default implementation of macro that returns current
-- * instruction pointer ("program counter").
-- */
--#define current_text_addr() ({ void *pc; __asm__("mova 1f, %0\n1:":"=z" (pc)); pc; })
+-#endif /* __ASM_SH64_SMP_H */
+diff --git a/include/asm-sh64/socket.h b/include/asm-sh64/socket.h
+deleted file mode 100644
+index 1853f72..0000000
+--- a/include/asm-sh64/socket.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_SOCKET_H
+-#define __ASM_SH64_SOCKET_H
-
--/* Core Processor Version Register */
--#define CCN_PVR 0xff000030
--#define CCN_CVR 0xff000040
--#define CCN_PRR 0xff000044
+-#include <asm-sh/socket.h>
-
-+#ifndef __ASSEMBLY__
- /*
- * CPU type and hardware bug flags. Kept separately for each CPU.
- *
-@@ -39,247 +17,49 @@ enum cpu_type {
- CPU_SH7619,
-
- /* SH-2A types */
-- CPU_SH7206,
-+ CPU_SH7203, CPU_SH7206, CPU_SH7263,
-
- /* SH-3 types */
- CPU_SH7705, CPU_SH7706, CPU_SH7707,
- CPU_SH7708, CPU_SH7708S, CPU_SH7708R,
- CPU_SH7709, CPU_SH7709A, CPU_SH7710, CPU_SH7712,
-- CPU_SH7720, CPU_SH7729,
-+ CPU_SH7720, CPU_SH7721, CPU_SH7729,
-
- /* SH-4 types */
- CPU_SH7750, CPU_SH7750S, CPU_SH7750R, CPU_SH7751, CPU_SH7751R,
- CPU_SH7760, CPU_SH4_202, CPU_SH4_501,
-
- /* SH-4A types */
-- CPU_SH7770, CPU_SH7780, CPU_SH7781, CPU_SH7785, CPU_SHX3,
-+ CPU_SH7763, CPU_SH7770, CPU_SH7780, CPU_SH7781, CPU_SH7785, CPU_SHX3,
-
- /* SH4AL-DSP types */
- CPU_SH7343, CPU_SH7722,
-
-+ /* SH-5 types */
-+ CPU_SH5_101, CPU_SH5_103,
-+
- /* Unknown subtype */
- CPU_SH_NONE
- };
-
--struct sh_cpuinfo {
-- unsigned int type;
-- unsigned long loops_per_jiffy;
-- unsigned long asid_cache;
+-#endif /* __ASM_SH64_SOCKET_H */
+diff --git a/include/asm-sh64/sockios.h b/include/asm-sh64/sockios.h
+deleted file mode 100644
+index 419e76f..0000000
+--- a/include/asm-sh64/sockios.h
++++ /dev/null
+@@ -1,25 +0,0 @@
+-#ifndef __ASM_SH64_SOCKIOS_H
+-#define __ASM_SH64_SOCKIOS_H
-
-- struct cache_info icache; /* Primary I-cache */
-- struct cache_info dcache; /* Primary D-cache */
-- struct cache_info scache; /* Secondary cache */
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/sockios.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
-
-- unsigned long flags;
--} __attribute__ ((aligned(L1_CACHE_BYTES)));
+-/* Socket-level I/O control calls. */
+-#define FIOGETOWN _IOR('f', 123, int)
+-#define FIOSETOWN _IOW('f', 124, int)
-
--extern struct sh_cpuinfo cpu_data[];
--#define boot_cpu_data cpu_data[0]
--#define current_cpu_data cpu_data[smp_processor_id()]
--#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
+-#define SIOCATMARK _IOR('s', 7, int)
+-#define SIOCSPGRP _IOW('s', 8, pid_t)
+-#define SIOCGPGRP _IOR('s', 9, pid_t)
+-
+-#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp (timeval) */
+-#define SIOCGSTAMPNS _IOR('s', 101, struct timespec) /* Get stamp (timespec) */
+-#endif /* __ASM_SH64_SOCKIOS_H */
+diff --git a/include/asm-sh64/spinlock.h b/include/asm-sh64/spinlock.h
+deleted file mode 100644
+index 296b0c9..0000000
+--- a/include/asm-sh64/spinlock.h
++++ /dev/null
+@@ -1,17 +0,0 @@
+-#ifndef __ASM_SH64_SPINLOCK_H
+-#define __ASM_SH64_SPINLOCK_H
-
-/*
-- * User space process size: 2GB.
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/spinlock.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
- *
-- * Since SH7709 and SH7750 have "area 7", we can't use 0x7c000000--0x7fffffff
- */
--#define TASK_SIZE 0x7c000000UL
-
--/* This decides where the kernel will search for a free chunk of vm
-- * space during mmap's.
-- */
--#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+-#error "No SMP on SH64"
+-
+-#endif /* __ASM_SH64_SPINLOCK_H */
+diff --git a/include/asm-sh64/stat.h b/include/asm-sh64/stat.h
+deleted file mode 100644
+index 86f551b..0000000
+--- a/include/asm-sh64/stat.h
++++ /dev/null
+@@ -1,88 +0,0 @@
+-#ifndef __ASM_SH64_STAT_H
+-#define __ASM_SH64_STAT_H
-
-/*
-- * Bit of SR register
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
- *
-- * FD-bit:
-- * When it's set, it means the processor doesn't have right to use FPU,
-- * and it results exception when the floating operation is executed.
+- * include/asm-sh64/stat.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
- *
-- * IMASK-bit:
-- * Interrupt level mask
- */
--#define SR_FD 0x00008000
--#define SR_DSP 0x00001000
--#define SR_IMASK 0x000000f0
-
--/*
-- * FPU structure and data
+-struct __old_kernel_stat {
+- unsigned short st_dev;
+- unsigned short st_ino;
+- unsigned short st_mode;
+- unsigned short st_nlink;
+- unsigned short st_uid;
+- unsigned short st_gid;
+- unsigned short st_rdev;
+- unsigned long st_size;
+- unsigned long st_atime;
+- unsigned long st_mtime;
+- unsigned long st_ctime;
+-};
+-
+-struct stat {
+- unsigned short st_dev;
+- unsigned short __pad1;
+- unsigned long st_ino;
+- unsigned short st_mode;
+- unsigned short st_nlink;
+- unsigned short st_uid;
+- unsigned short st_gid;
+- unsigned short st_rdev;
+- unsigned short __pad2;
+- unsigned long st_size;
+- unsigned long st_blksize;
+- unsigned long st_blocks;
+- unsigned long st_atime;
+- unsigned long st_atime_nsec;
+- unsigned long st_mtime;
+- unsigned long st_mtime_nsec;
+- unsigned long st_ctime;
+- unsigned long st_ctime_nsec;
+- unsigned long __unused4;
+- unsigned long __unused5;
+-};
+-
+-/* This matches struct stat64 in glibc2.1, hence the absolutely
+- * insane amounts of padding around dev_t's.
- */
+-struct stat64 {
+- unsigned short st_dev;
+- unsigned char __pad0[10];
-
--struct sh_fpu_hard_struct {
-- unsigned long fp_regs[16];
-- unsigned long xfp_regs[16];
-- unsigned long fpscr;
-- unsigned long fpul;
+- unsigned long st_ino;
+- unsigned int st_mode;
+- unsigned int st_nlink;
-
-- long status; /* software status information */
--};
+- unsigned long st_uid;
+- unsigned long st_gid;
-
--/* Dummy fpu emulator */
--struct sh_fpu_soft_struct {
-- unsigned long fp_regs[16];
-- unsigned long xfp_regs[16];
-- unsigned long fpscr;
-- unsigned long fpul;
+- unsigned short st_rdev;
+- unsigned char __pad3[10];
-
-- unsigned char lookahead;
-- unsigned long entry_pc;
--};
+- long long st_size;
+- unsigned long st_blksize;
-
--union sh_fpu_union {
-- struct sh_fpu_hard_struct hard;
-- struct sh_fpu_soft_struct soft;
--};
+- unsigned long st_blocks; /* Number 512-byte blocks allocated. */
+- unsigned long __pad4; /* future possible st_blocks high bits */
-
--struct thread_struct {
-- /* Saved registers when thread is descheduled */
-- unsigned long sp;
-- unsigned long pc;
+- unsigned long st_atime;
+- unsigned long st_atime_nsec;
-
-- /* Hardware debugging registers */
-- unsigned long ubc_pc;
+- unsigned long st_mtime;
+- unsigned long st_mtime_nsec;
-
-- /* floating point info */
-- union sh_fpu_union fpu;
+- unsigned long st_ctime;
+- unsigned long st_ctime_nsec; /* will be high 32 bits of ctime someday */
+-
+- unsigned long __unused1;
+- unsigned long __unused2;
-};
-
--typedef struct {
-- unsigned long seg;
--} mm_segment_t;
+-#endif /* __ASM_SH64_STAT_H */
+diff --git a/include/asm-sh64/statfs.h b/include/asm-sh64/statfs.h
+deleted file mode 100644
+index 083fd79..0000000
+--- a/include/asm-sh64/statfs.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_STATFS_H
+-#define __ASM_SH64_STATFS_H
-
--/* Count of active tasks with UBC settings */
--extern int ubc_usercnt;
-+/* Forward decl */
-+struct sh_cpuinfo;
-
--#define INIT_THREAD { \
-- .sp = sizeof(init_stack) + (long) &init_stack, \
--}
+-#include <asm-generic/statfs.h>
+-
+-#endif /* __ASM_SH64_STATFS_H */
+diff --git a/include/asm-sh64/string.h b/include/asm-sh64/string.h
+deleted file mode 100644
+index 8a73573..0000000
+--- a/include/asm-sh64/string.h
++++ /dev/null
+@@ -1,21 +0,0 @@
+-#ifndef __ASM_SH64_STRING_H
+-#define __ASM_SH64_STRING_H
-
-/*
-- * Do necessary setup to start up a newly executed thread.
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/string.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- * Empty on purpose. ARCH SH64 ASM libs are out of the current project scope.
+- *
- */
--#define start_thread(regs, new_pc, new_sp) \
-- set_fs(USER_DS); \
-- regs->pr = 0; \
-- regs->sr = SR_FD; /* User mode. */ \
-- regs->pc = new_pc; \
-- regs->regs[15] = new_sp
-
--/* Forward declaration, a strange C thing */
--struct task_struct;
--struct mm_struct;
+-#define __HAVE_ARCH_MEMCPY
-
--/* Free all resources held by a thread. */
--extern void release_thread(struct task_struct *);
+-extern void *memcpy(void *dest, const void *src, size_t count);
-
--/* Prepare to copy thread state - unlazy all lazy status */
--#define prepare_to_copy(tsk) do { } while (0)
+-#endif
+diff --git a/include/asm-sh64/system.h b/include/asm-sh64/system.h
+deleted file mode 100644
+index be2a15f..0000000
+--- a/include/asm-sh64/system.h
++++ /dev/null
+@@ -1,190 +0,0 @@
+-#ifndef __ASM_SH64_SYSTEM_H
+-#define __ASM_SH64_SYSTEM_H
-
-/*
-- * create a kernel thread without removing it from tasklists
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/system.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- * Copyright (C) 2004 Richard Curnow
+- *
- */
--extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-
--/* Copy and release all segment info associated with a VM */
--#define copy_segments(p, mm) do { } while(0)
--#define release_segments(mm) do { } while(0)
+-#include <asm/registers.h>
+-#include <asm/processor.h>
-
-/*
-- * FPU lazy state save handling.
+- * switch_to() should switch tasks to task nr n, first
- */
-
--static __inline__ void disable_fpu(void)
+-typedef struct {
+- unsigned long seg;
+-} mm_segment_t;
+-
+-extern struct task_struct *sh64_switch_to(struct task_struct *prev,
+- struct thread_struct *prev_thread,
+- struct task_struct *next,
+- struct thread_struct *next_thread);
+-
+-#define switch_to(prev,next,last) \
+- do {\
+- if (last_task_used_math != next) {\
+- struct pt_regs *regs = next->thread.uregs;\
+- if (regs) regs->sr |= SR_FD;\
+- }\
+- last = sh64_switch_to(prev, &prev->thread, next, &next->thread);\
+- } while(0)
+-
+-#define nop() __asm__ __volatile__ ("nop")
+-
+-#define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+-
+-extern void __xchg_called_with_bad_pointer(void);
+-
+-#define mb() __asm__ __volatile__ ("synco": : :"memory")
+-#define rmb() mb()
+-#define wmb() __asm__ __volatile__ ("synco": : :"memory")
+-#define read_barrier_depends() do { } while (0)
+-
+-#ifdef CONFIG_SMP
+-#define smp_mb() mb()
+-#define smp_rmb() rmb()
+-#define smp_wmb() wmb()
+-#define smp_read_barrier_depends() read_barrier_depends()
+-#else
+-#define smp_mb() barrier()
+-#define smp_rmb() barrier()
+-#define smp_wmb() barrier()
+-#define smp_read_barrier_depends() do { } while (0)
+-#endif /* CONFIG_SMP */
+-
+-#define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
+-
+-/* Interrupt Control */
+-#ifndef HARD_CLI
+-#define SR_MASK_L 0x000000f0L
+-#define SR_MASK_LL 0x00000000000000f0LL
+-#else
+-#define SR_MASK_L 0x10000000L
+-#define SR_MASK_LL 0x0000000010000000LL
+-#endif
+-
+-static __inline__ void local_irq_enable(void)
-{
-- unsigned long __dummy;
+- /* cli/sti based on SR.BL */
+- unsigned long long __dummy0, __dummy1=~SR_MASK_LL;
-
-- /* Set FD flag in SR */
-- __asm__ __volatile__("stc sr, %0\n\t"
-- "or %1, %0\n\t"
-- "ldc %0, sr"
-- : "=&r" (__dummy)
-- : "r" (SR_FD));
+- __asm__ __volatile__("getcon " __SR ", %0\n\t"
+- "and %0, %1, %0\n\t"
+- "putcon %0, " __SR "\n\t"
+- : "=&r" (__dummy0)
+- : "r" (__dummy1));
-}
-
--static __inline__ void enable_fpu(void)
+-static __inline__ void local_irq_disable(void)
-{
-- unsigned long __dummy;
+- /* cli/sti based on SR.BL */
+- unsigned long long __dummy0, __dummy1=SR_MASK_LL;
+- __asm__ __volatile__("getcon " __SR ", %0\n\t"
+- "or %0, %1, %0\n\t"
+- "putcon %0, " __SR "\n\t"
+- : "=&r" (__dummy0)
+- : "r" (__dummy1));
+-}
-
-- /* Clear out FD flag in SR */
-- __asm__ __volatile__("stc sr, %0\n\t"
-- "and %1, %0\n\t"
-- "ldc %0, sr"
-- : "=&r" (__dummy)
-- : "r" (~SR_FD));
+-#define local_save_flags(x) \
+-(__extension__ ({ unsigned long long __dummy=SR_MASK_LL; \
+- __asm__ __volatile__( \
+- "getcon " __SR ", %0\n\t" \
+- "and %0, %1, %0" \
+- : "=&r" (x) \
+- : "r" (__dummy));}))
+-
+-#define local_irq_save(x) \
+-(__extension__ ({ unsigned long long __d2=SR_MASK_LL, __d1; \
+- __asm__ __volatile__( \
+- "getcon " __SR ", %1\n\t" \
+- "or %1, r63, %0\n\t" \
+- "or %1, %2, %1\n\t" \
+- "putcon %1, " __SR "\n\t" \
+- "and %0, %2, %0" \
+- : "=&r" (x), "=&r" (__d1) \
+- : "r" (__d2));}));
+-
+-#define local_irq_restore(x) do { \
+- if ( ((x) & SR_MASK_L) == 0 ) /* dropping to 0 ? */ \
+- local_irq_enable(); /* yes...re-enable */ \
+-} while (0)
+-
+-#define irqs_disabled() \
+-({ \
+- unsigned long flags; \
+- local_save_flags(flags); \
+- (flags != 0); \
+-})
+-
+-static inline unsigned long xchg_u32(volatile int * m, unsigned long val)
+-{
+- unsigned long flags, retval;
+-
+- local_irq_save(flags);
+- retval = *m;
+- *m = val;
+- local_irq_restore(flags);
+- return retval;
-}
-
--static __inline__ void release_fpu(struct pt_regs *regs)
+-static inline unsigned long xchg_u8(volatile unsigned char * m, unsigned long val)
-{
-- regs->sr |= SR_FD;
+- unsigned long flags, retval;
+-
+- local_irq_save(flags);
+- retval = *m;
+- *m = val & 0xff;
+- local_irq_restore(flags);
+- return retval;
-}
-
--static __inline__ void grab_fpu(struct pt_regs *regs)
+-static __inline__ unsigned long __xchg(unsigned long x, volatile void * ptr, int size)
-{
-- regs->sr &= ~SR_FD;
+- switch (size) {
+- case 4:
+- return xchg_u32(ptr, x);
+- break;
+- case 1:
+- return xchg_u8(ptr, x);
+- break;
+- }
+- __xchg_called_with_bad_pointer();
+- return x;
-}
-
--extern void save_fpu(struct task_struct *__tsk, struct pt_regs *regs);
+-/* XXX
+- * disable hlt during certain critical i/o operations
+- */
+-#define HAVE_DISABLE_HLT
+-void disable_hlt(void);
+-void enable_hlt(void);
-
--#define unlazy_fpu(tsk, regs) do { \
-- if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
-- save_fpu(tsk, regs); \
-- } \
--} while (0)
-
--#define clear_fpu(tsk, regs) do { \
-- if (test_tsk_thread_flag(tsk, TIF_USEDFPU)) { \
-- clear_tsk_thread_flag(tsk, TIF_USEDFPU); \
-- release_fpu(regs); \
-- } \
--} while (0)
+-#define smp_mb() barrier()
+-#define smp_rmb() barrier()
+-#define smp_wmb() barrier()
-
--/* Double presision, NANS as NANS, rounding to nearest, no exceptions */
--#define FPSCR_INIT 0x00080000
+-#ifdef CONFIG_SH_ALPHANUMERIC
+-/* This is only used for debugging. */
+-extern void print_seg(char *file,int line);
+-#define PLS() print_seg(__FILE__,__LINE__)
+-#else /* CONFIG_SH_ALPHANUMERIC */
+-#define PLS()
+-#endif /* CONFIG_SH_ALPHANUMERIC */
-
--#define FPSCR_CAUSE_MASK 0x0001f000 /* Cause bits */
--#define FPSCR_FLAG_MASK 0x0000007c /* Flag bits */
+-#define PL() printk("@ <%s,%s:%d>\n",__FILE__,__FUNCTION__,__LINE__)
+-
+-#define arch_align_stack(x) (x)
+-
+-#endif /* __ASM_SH64_SYSTEM_H */
+diff --git a/include/asm-sh64/termbits.h b/include/asm-sh64/termbits.h
+deleted file mode 100644
+index 86bde5e..0000000
+--- a/include/asm-sh64/termbits.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_TERMBITS_H
+-#define __ASM_SH64_TERMBITS_H
+-
+-#include <asm-sh/termbits.h>
+-
+-#endif /* __ASM_SH64_TERMBITS_H */
+diff --git a/include/asm-sh64/termios.h b/include/asm-sh64/termios.h
+deleted file mode 100644
+index dc44e6e..0000000
+--- a/include/asm-sh64/termios.h
++++ /dev/null
+@@ -1,99 +0,0 @@
+-#ifndef __ASM_SH64_TERMIOS_H
+-#define __ASM_SH64_TERMIOS_H
-
-/*
-- * Return saved PC of a blocked thread.
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/termios.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
- */
--#define thread_saved_pc(tsk) (tsk->thread.pc)
-
--void show_trace(struct task_struct *tsk, unsigned long *sp,
-- struct pt_regs *regs);
--extern unsigned long get_wchan(struct task_struct *p);
+-#include <asm/termbits.h>
+-#include <asm/ioctls.h>
-
--#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
--#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[15])
+-struct winsize {
+- unsigned short ws_row;
+- unsigned short ws_col;
+- unsigned short ws_xpixel;
+- unsigned short ws_ypixel;
+-};
-
--#define cpu_sleep() __asm__ __volatile__ ("sleep" : : : "memory")
--#define cpu_relax() barrier()
+-#define NCC 8
+-struct termio {
+- unsigned short c_iflag; /* input mode flags */
+- unsigned short c_oflag; /* output mode flags */
+- unsigned short c_cflag; /* control mode flags */
+- unsigned short c_lflag; /* local mode flags */
+- unsigned char c_line; /* line discipline */
+- unsigned char c_cc[NCC]; /* control characters */
+-};
-
--#if defined(CONFIG_CPU_SH2A) || defined(CONFIG_CPU_SH3) || \
-- defined(CONFIG_CPU_SH4)
--#define PREFETCH_STRIDE L1_CACHE_BYTES
--#define ARCH_HAS_PREFETCH
--#define ARCH_HAS_PREFETCHW
--static inline void prefetch(void *x)
--{
-- __asm__ __volatile__ ("pref @%0\n\t" : : "r" (x) : "memory");
+-/* modem lines */
+-#define TIOCM_LE 0x001
+-#define TIOCM_DTR 0x002
+-#define TIOCM_RTS 0x004
+-#define TIOCM_ST 0x008
+-#define TIOCM_SR 0x010
+-#define TIOCM_CTS 0x020
+-#define TIOCM_CAR 0x040
+-#define TIOCM_RNG 0x080
+-#define TIOCM_DSR 0x100
+-#define TIOCM_CD TIOCM_CAR
+-#define TIOCM_RI TIOCM_RNG
+-#define TIOCM_OUT1 0x2000
+-#define TIOCM_OUT2 0x4000
+-#define TIOCM_LOOP 0x8000
+-
+-/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+-
+-#ifdef __KERNEL__
+-
+-/* intr=^C quit=^\ erase=del kill=^U
+- eof=^D vtime=\0 vmin=\1 sxtc=\0
+- start=^Q stop=^S susp=^Z eol=\0
+- reprint=^R discard=^U werase=^W lnext=^V
+- eol2=\0
+-*/
+-#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+-
+-/*
+- * Translate a "termio" structure into a "termios". Ugh.
+- */
+-#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
+- unsigned short __tmp; \
+- get_user(__tmp,&(termio)->x); \
+- *(unsigned short *) &(termios)->x = __tmp; \
-}
-
--#define prefetchw(x) prefetch(x)
--#endif
-+/* arch/sh/kernel/setup.c */
-+const char *get_cpu_subtype(struct sh_cpuinfo *c);
-
- #ifdef CONFIG_VSYSCALL
--extern int vsyscall_init(void);
-+int vsyscall_init(void);
- #else
- #define vsyscall_init() do { } while (0)
- #endif
-
--/* arch/sh/kernel/setup.c */
--const char *get_cpu_subtype(struct sh_cpuinfo *c);
-+#endif /* __ASSEMBLY__ */
-+
-+#ifdef CONFIG_SUPERH32
-+# include "processor_32.h"
-+#else
-+# include "processor_64.h"
-+#endif
-
--#endif /* __KERNEL__ */
- #endif /* __ASM_SH_PROCESSOR_H */
-diff --git a/include/asm-sh/processor_32.h b/include/asm-sh/processor_32.h
-new file mode 100644
-index 0000000..a7edaa1
---- /dev/null
-+++ b/include/asm-sh/processor_32.h
-@@ -0,0 +1,215 @@
-+/*
-+ * include/asm-sh/processor.h
-+ *
-+ * Copyright (C) 1999, 2000 Niibe Yutaka
-+ * Copyright (C) 2002, 2003 Paul Mundt
-+ */
-+
-+#ifndef __ASM_SH_PROCESSOR_32_H
-+#define __ASM_SH_PROCESSOR_32_H
-+#ifdef __KERNEL__
-+
-+#include <linux/compiler.h>
-+#include <asm/page.h>
-+#include <asm/types.h>
-+#include <asm/cache.h>
-+#include <asm/ptrace.h>
-+
-+/*
-+ * Default implementation of macro that returns current
-+ * instruction pointer ("program counter").
-+ */
-+#define current_text_addr() ({ void *pc; __asm__("mova 1f, %0\n1:":"=z" (pc)); pc; })
-+
-+/* Core Processor Version Register */
-+#define CCN_PVR 0xff000030
-+#define CCN_CVR 0xff000040
-+#define CCN_PRR 0xff000044
-+
-+struct sh_cpuinfo {
-+ unsigned int type;
-+ unsigned long loops_per_jiffy;
-+ unsigned long asid_cache;
-+
-+ struct cache_info icache; /* Primary I-cache */
-+ struct cache_info dcache; /* Primary D-cache */
-+ struct cache_info scache; /* Secondary cache */
-+
-+ unsigned long flags;
-+} __attribute__ ((aligned(L1_CACHE_BYTES)));
-+
-+extern struct sh_cpuinfo cpu_data[];
-+#define boot_cpu_data cpu_data[0]
-+#define current_cpu_data cpu_data[smp_processor_id()]
-+#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
-+
-+/*
-+ * User space process size: 2GB.
-+ *
-+ * Since SH7709 and SH7750 have "area 7", we can't use 0x7c000000--0x7fffffff
-+ */
-+#define TASK_SIZE 0x7c000000UL
-+
-+/* This decides where the kernel will search for a free chunk of vm
-+ * space during mmap's.
-+ */
-+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
-+
-+/*
-+ * Bit of SR register
-+ *
-+ * FD-bit:
-+ * When it's set, it means the processor doesn't have right to use FPU,
-+ * and it results exception when the floating operation is executed.
-+ *
-+ * IMASK-bit:
-+ * Interrupt level mask
-+ */
-+#define SR_DSP 0x00001000
-+#define SR_IMASK 0x000000f0
-+
-+/*
-+ * FPU structure and data
-+ */
-+
-+struct sh_fpu_hard_struct {
-+ unsigned long fp_regs[16];
-+ unsigned long xfp_regs[16];
-+ unsigned long fpscr;
-+ unsigned long fpul;
-+
-+ long status; /* software status information */
-+};
-+
-+/* Dummy fpu emulator */
-+struct sh_fpu_soft_struct {
-+ unsigned long fp_regs[16];
-+ unsigned long xfp_regs[16];
-+ unsigned long fpscr;
-+ unsigned long fpul;
-+
-+ unsigned char lookahead;
-+ unsigned long entry_pc;
-+};
-+
-+union sh_fpu_union {
-+ struct sh_fpu_hard_struct hard;
-+ struct sh_fpu_soft_struct soft;
-+};
-+
-+struct thread_struct {
-+ /* Saved registers when thread is descheduled */
-+ unsigned long sp;
-+ unsigned long pc;
-+
-+ /* Hardware debugging registers */
-+ unsigned long ubc_pc;
-+
-+ /* floating point info */
-+ union sh_fpu_union fpu;
-+};
-+
-+typedef struct {
-+ unsigned long seg;
-+} mm_segment_t;
-+
-+/* Count of active tasks with UBC settings */
-+extern int ubc_usercnt;
-+
-+#define INIT_THREAD { \
-+ .sp = sizeof(init_stack) + (long) &init_stack, \
-+}
-+
-+/*
-+ * Do necessary setup to start up a newly executed thread.
-+ */
-+#define start_thread(regs, new_pc, new_sp) \
-+ set_fs(USER_DS); \
-+ regs->pr = 0; \
-+ regs->sr = SR_FD; /* User mode. */ \
-+ regs->pc = new_pc; \
-+ regs->regs[15] = new_sp
-+
-+/* Forward declaration, a strange C thing */
-+struct task_struct;
-+struct mm_struct;
-+
-+/* Free all resources held by a thread. */
-+extern void release_thread(struct task_struct *);
-+
-+/* Prepare to copy thread state - unlazy all lazy status */
-+#define prepare_to_copy(tsk) do { } while (0)
-+
-+/*
-+ * create a kernel thread without removing it from tasklists
-+ */
-+extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-+
-+/* Copy and release all segment info associated with a VM */
-+#define copy_segments(p, mm) do { } while(0)
-+#define release_segments(mm) do { } while(0)
-+
-+/*
-+ * FPU lazy state save handling.
-+ */
-+
-+static __inline__ void disable_fpu(void)
-+{
-+ unsigned long __dummy;
-+
-+ /* Set FD flag in SR */
-+ __asm__ __volatile__("stc sr, %0\n\t"
-+ "or %1, %0\n\t"
-+ "ldc %0, sr"
-+ : "=&r" (__dummy)
-+ : "r" (SR_FD));
-+}
-+
-+static __inline__ void enable_fpu(void)
-+{
-+ unsigned long __dummy;
-+
-+ /* Clear out FD flag in SR */
-+ __asm__ __volatile__("stc sr, %0\n\t"
-+ "and %1, %0\n\t"
-+ "ldc %0, sr"
-+ : "=&r" (__dummy)
-+ : "r" (~SR_FD));
-+}
-+
-+/* Double presision, NANS as NANS, rounding to nearest, no exceptions */
-+#define FPSCR_INIT 0x00080000
-+
-+#define FPSCR_CAUSE_MASK 0x0001f000 /* Cause bits */
-+#define FPSCR_FLAG_MASK 0x0000007c /* Flag bits */
-+
-+/*
-+ * Return saved PC of a blocked thread.
-+ */
-+#define thread_saved_pc(tsk) (tsk->thread.pc)
-+
-+void show_trace(struct task_struct *tsk, unsigned long *sp,
-+ struct pt_regs *regs);
-+extern unsigned long get_wchan(struct task_struct *p);
-+
-+#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
-+#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[15])
-+
-+#define cpu_sleep() __asm__ __volatile__ ("sleep" : : : "memory")
-+#define cpu_relax() barrier()
-+
-+#if defined(CONFIG_CPU_SH2A) || defined(CONFIG_CPU_SH3) || \
-+ defined(CONFIG_CPU_SH4)
-+#define PREFETCH_STRIDE L1_CACHE_BYTES
-+#define ARCH_HAS_PREFETCH
-+#define ARCH_HAS_PREFETCHW
-+static inline void prefetch(void *x)
-+{
-+ __asm__ __volatile__ ("pref @%0\n\t" : : "r" (x) : "memory");
-+}
-+
-+#define prefetchw(x) prefetch(x)
-+#endif
-+
-+#endif /* __KERNEL__ */
-+#endif /* __ASM_SH_PROCESSOR_32_H */
-diff --git a/include/asm-sh/processor_64.h b/include/asm-sh/processor_64.h
-new file mode 100644
-index 0000000..99c22b1
---- /dev/null
-+++ b/include/asm-sh/processor_64.h
-@@ -0,0 +1,275 @@
-+#ifndef __ASM_SH_PROCESSOR_64_H
-+#define __ASM_SH_PROCESSOR_64_H
-+
-+/*
-+ * include/asm-sh/processor_64.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003 Paul Mundt
-+ * Copyright (C) 2004 Richard Curnow
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#ifndef __ASSEMBLY__
-+
-+#include <linux/compiler.h>
-+#include <asm/page.h>
-+#include <asm/types.h>
-+#include <asm/cache.h>
-+#include <asm/ptrace.h>
-+#include <asm/cpu/registers.h>
-+
-+/*
-+ * Default implementation of macro that returns current
-+ * instruction pointer ("program counter").
-+ */
-+#define current_text_addr() ({ \
-+void *pc; \
-+unsigned long long __dummy = 0; \
-+__asm__("gettr tr0, %1\n\t" \
-+ "pta 4, tr0\n\t" \
-+ "gettr tr0, %0\n\t" \
-+ "ptabs %1, tr0\n\t" \
-+ :"=r" (pc), "=r" (__dummy) \
-+ : "1" (__dummy)); \
-+pc; })
-+
-+/*
-+ * TLB information structure
-+ *
-+ * Defined for both I and D tlb, per-processor.
-+ */
-+struct tlb_info {
-+ unsigned long long next;
-+ unsigned long long first;
-+ unsigned long long last;
-+
-+ unsigned int entries;
-+ unsigned int step;
-+
-+ unsigned long flags;
-+};
-+
-+struct sh_cpuinfo {
-+ enum cpu_type type;
-+ unsigned long loops_per_jiffy;
-+ unsigned long asid_cache;
-+
-+ unsigned int cpu_clock, master_clock, bus_clock, module_clock;
-+
-+ /* Cache info */
-+ struct cache_info icache;
-+ struct cache_info dcache;
-+ struct cache_info scache;
-+
-+ /* TLB info */
-+ struct tlb_info itlb;
-+ struct tlb_info dtlb;
-+
-+ unsigned long flags;
-+};
-+
-+extern struct sh_cpuinfo cpu_data[];
-+#define boot_cpu_data cpu_data[0]
-+#define current_cpu_data cpu_data[smp_processor_id()]
-+#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
-+
-+#endif
-+
-+/*
-+ * User space process size: 2GB - 4k.
-+ */
-+#define TASK_SIZE 0x7ffff000UL
-+
-+/* This decides where the kernel will search for a free chunk of vm
-+ * space during mmap's.
-+ */
-+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
-+
-+/*
-+ * Bit of SR register
-+ *
-+ * FD-bit:
-+ * When it's set, it means the processor doesn't have right to use FPU,
-+ * and it results exception when the floating operation is executed.
-+ *
-+ * IMASK-bit:
-+ * Interrupt level mask
-+ *
-+ * STEP-bit:
-+ * Single step bit
-+ *
-+ */
-+#if defined(CONFIG_SH64_SR_WATCH)
-+#define SR_MMU 0x84000000
-+#else
-+#define SR_MMU 0x80000000
-+#endif
-+
-+#define SR_IMASK 0x000000f0
-+#define SR_SSTEP 0x08000000
-+
-+#ifndef __ASSEMBLY__
-+
-+/*
-+ * FPU structure and data : require 8-byte alignment as we need to access it
-+ with fld.p, fst.p
-+ */
-+
-+struct sh_fpu_hard_struct {
-+ unsigned long fp_regs[64];
-+ unsigned int fpscr;
-+ /* long status; * software status information */
-+};
-+
-+#if 0
-+/* Dummy fpu emulator */
-+struct sh_fpu_soft_struct {
-+ unsigned long long fp_regs[32];
-+ unsigned int fpscr;
-+ unsigned char lookahead;
-+ unsigned long entry_pc;
-+};
-+#endif
-+
-+union sh_fpu_union {
-+ struct sh_fpu_hard_struct hard;
-+ /* 'hard' itself only produces 32 bit alignment, yet we need
-+ to access it using 64 bit load/store as well. */
-+ unsigned long long alignment_dummy;
-+};
-+
-+struct thread_struct {
-+ unsigned long sp;
-+ unsigned long pc;
-+ /* This stores the address of the pt_regs built during a context
-+ switch, or of the register save area built for a kernel mode
-+ exception. It is used for backtracing the stack of a sleeping task
-+ or one that traps in kernel mode. */
-+ struct pt_regs *kregs;
-+ /* This stores the address of the pt_regs constructed on entry from
-+ user mode. It is a fixed value over the lifetime of a process, or
-+ NULL for a kernel thread. */
-+ struct pt_regs *uregs;
-+
-+ unsigned long trap_no, error_code;
-+ unsigned long address;
-+ /* Hardware debugging registers may come here */
-+
-+ /* floating point info */
-+ union sh_fpu_union fpu;
-+};
-+
-+typedef struct {
-+ unsigned long seg;
-+} mm_segment_t;
-+
-+#define INIT_MMAP \
-+{ &init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL }
-+
-+extern struct pt_regs fake_swapper_regs;
-+
-+#define INIT_THREAD { \
-+ .sp = sizeof(init_stack) + \
-+ (long) &init_stack, \
-+ .pc = 0, \
-+ .kregs = &fake_swapper_regs, \
-+ .uregs = NULL, \
-+ .trap_no = 0, \
-+ .error_code = 0, \
-+ .address = 0, \
-+ .fpu = { { { 0, } }, } \
-+}
-+
-+/*
-+ * Do necessary setup to start up a newly executed thread.
-+ */
-+#define SR_USER (SR_MMU | SR_FD)
-+
-+#define start_thread(regs, new_pc, new_sp) \
-+ set_fs(USER_DS); \
-+ regs->sr = SR_USER; /* User mode. */ \
-+ regs->pc = new_pc - 4; /* Compensate syscall exit */ \
-+ regs->pc |= 1; /* Set SHmedia ! */ \
-+ regs->regs[18] = 0; \
-+ regs->regs[15] = new_sp
-+
-+/* Forward declaration, a strange C thing */
-+struct task_struct;
-+struct mm_struct;
-+
-+/* Free all resources held by a thread. */
-+extern void release_thread(struct task_struct *);
-+/*
-+ * create a kernel thread without removing it from tasklists
-+ */
-+extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-+
-+
-+/* Copy and release all segment info associated with a VM */
-+#define copy_segments(p, mm) do { } while (0)
-+#define release_segments(mm) do { } while (0)
-+#define forget_segments() do { } while (0)
-+#define prepare_to_copy(tsk) do { } while (0)
-+/*
-+ * FPU lazy state save handling.
-+ */
-+
-+static inline void disable_fpu(void)
-+{
-+ unsigned long long __dummy;
-+
-+ /* Set FD flag in SR */
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "or %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy)
-+ : "r" (SR_FD));
-+}
-+
-+static inline void enable_fpu(void)
-+{
-+ unsigned long long __dummy;
-+
-+ /* Clear out FD flag in SR */
-+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
-+ "and %0, %1, %0\n\t"
-+ "putcon %0, " __SR "\n\t"
-+ : "=&r" (__dummy)
-+ : "r" (~SR_FD));
-+}
-+
-+/* Round to nearest, no exceptions on inexact, overflow, underflow,
-+ zero-divide, invalid. Configure option for whether to flush denorms to
-+ zero, or except if a denorm is encountered. */
-+#if defined(CONFIG_SH64_FPU_DENORM_FLUSH)
-+#define FPSCR_INIT 0x00040000
-+#else
-+#define FPSCR_INIT 0x00000000
-+#endif
-+
-+#ifdef CONFIG_SH_FPU
-+/* Initialise the FP state of a task */
-+void fpinit(struct sh_fpu_hard_struct *fpregs);
-+#else
-+#define fpinit(fpregs) do { } while (0)
-+#endif
-+
-+extern struct task_struct *last_task_used_math;
-+
-+/*
-+ * Return saved PC of a blocked thread.
-+ */
-+#define thread_saved_pc(tsk) (tsk->thread.pc)
-+
-+extern unsigned long get_wchan(struct task_struct *p);
-+
-+#define KSTK_EIP(tsk) ((tsk)->thread.pc)
-+#define KSTK_ESP(tsk) ((tsk)->thread.sp)
-+
-+#define cpu_relax() barrier()
-+
-+#endif /* __ASSEMBLY__ */
-+#endif /* __ASM_SH_PROCESSOR_64_H */
-diff --git a/include/asm-sh/ptrace.h b/include/asm-sh/ptrace.h
-index b9789c8..8d6c92b 100644
---- a/include/asm-sh/ptrace.h
-+++ b/include/asm-sh/ptrace.h
-@@ -5,7 +5,16 @@
- * Copyright (C) 1999, 2000 Niibe Yutaka
- *
- */
+-#define user_termio_to_kernel_termios(termios, termio) \
+-({ \
+- SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
+- SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
+- SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
+- SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
+- copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
+-})
-
-+#if defined(__SH5__) || defined(CONFIG_SUPERH64)
-+struct pt_regs {
-+ unsigned long long pc;
-+ unsigned long long sr;
-+ unsigned long long syscall_nr;
-+ unsigned long long regs[63];
-+ unsigned long long tregs[8];
-+ unsigned long long pad[2];
-+};
-+#else
- /*
- * GCC defines register number like this:
- * -----------------------------
-@@ -28,7 +37,7 @@
-
- #define REG_PR 17
- #define REG_SR 18
--#define REG_GBR 19
-+#define REG_GBR 19
- #define REG_MACH 20
- #define REG_MACL 21
-
-@@ -80,10 +89,14 @@ struct pt_dspregs {
-
- #define PTRACE_GETDSPREGS 55
- #define PTRACE_SETDSPREGS 56
-+#endif
-
- #ifdef __KERNEL__
--#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
--#define instruction_pointer(regs) ((regs)->pc)
-+#include <asm/addrspace.h>
-+
-+#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
-+#define instruction_pointer(regs) ((unsigned long)(regs)->pc)
-+
- extern void show_regs(struct pt_regs *);
-
- #ifdef CONFIG_SH_DSP
-@@ -100,10 +113,13 @@ static inline unsigned long profile_pc(struct pt_regs *regs)
- {
- unsigned long pc = instruction_pointer(regs);
-
-- if (pc >= 0xa0000000UL && pc < 0xc0000000UL)
-+#ifdef P2SEG
-+ if (pc >= P2SEG && pc < P3SEG)
- pc -= 0x20000000;
-+#endif
-+
- return pc;
- }
--#endif
-+#endif /* __KERNEL__ */
-
- #endif /* __ASM_SH_PTRACE_H */
-diff --git a/include/asm-sh/r7780rp.h b/include/asm-sh/r7780rp.h
-index de37f93..bdecea0 100644
---- a/include/asm-sh/r7780rp.h
-+++ b/include/asm-sh/r7780rp.h
-@@ -121,21 +121,6 @@
-
- #define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
-
--#define IRQ_PCISLOT1 0 /* PCI Slot #1 IRQ */
--#define IRQ_PCISLOT2 1 /* PCI Slot #2 IRQ */
--#define IRQ_PCISLOT3 2 /* PCI Slot #3 IRQ */
--#define IRQ_PCISLOT4 3 /* PCI Slot #4 IRQ */
--#define IRQ_CFINST 5 /* CF Card Insert IRQ */
--#define IRQ_M66596 6 /* M66596 IRQ */
--#define IRQ_SDCARD 7 /* SD Card IRQ */
--#define IRQ_TUCHPANEL 8 /* Touch Panel IRQ */
--#define IRQ_SCI 9 /* SCI IRQ */
--#define IRQ_2SERIAL 10 /* Serial IRQ */
--#define IRQ_EXTENTION 11 /* EXTn IRQ */
--#define IRQ_ONETH 12 /* On board Ethernet IRQ */
--#define IRQ_PSW 13 /* Push Switch IRQ */
--#define IRQ_ZIGBEE 14 /* Ziggbee IO IRQ */
+-/*
+- * Translate a "termios" structure into a "termio". Ugh.
+- */
+-#define kernel_termios_to_user_termio(termio, termios) \
+-({ \
+- put_user((termios)->c_iflag, &(termio)->c_iflag); \
+- put_user((termios)->c_oflag, &(termio)->c_oflag); \
+- put_user((termios)->c_cflag, &(termio)->c_cflag); \
+- put_user((termios)->c_lflag, &(termio)->c_lflag); \
+- put_user((termios)->c_line, &(termio)->c_line); \
+- copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \
+-})
-
- #define IVDR_CK_ON 8 /* iVDR Clock ON */
-
- #elif defined(CONFIG_SH_R7785RP)
-@@ -192,13 +177,19 @@
-
- #define IRQ_AX88796 (HL_FPGA_IRQ_BASE + 0)
- #define IRQ_CF (HL_FPGA_IRQ_BASE + 1)
--#ifndef IRQ_PSW
- #define IRQ_PSW (HL_FPGA_IRQ_BASE + 2)
--#endif
--#define IRQ_EXT1 (HL_FPGA_IRQ_BASE + 3)
--#define IRQ_EXT4 (HL_FPGA_IRQ_BASE + 4)
+-#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
+-#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
-
--void make_r7780rp_irq(unsigned int irq);
-+#define IRQ_EXT0 (HL_FPGA_IRQ_BASE + 3)
-+#define IRQ_EXT1 (HL_FPGA_IRQ_BASE + 4)
-+#define IRQ_EXT2 (HL_FPGA_IRQ_BASE + 5)
-+#define IRQ_EXT3 (HL_FPGA_IRQ_BASE + 6)
-+#define IRQ_EXT4 (HL_FPGA_IRQ_BASE + 7)
-+#define IRQ_EXT5 (HL_FPGA_IRQ_BASE + 8)
-+#define IRQ_EXT6 (HL_FPGA_IRQ_BASE + 9)
-+#define IRQ_EXT7 (HL_FPGA_IRQ_BASE + 10)
-+#define IRQ_SMBUS (HL_FPGA_IRQ_BASE + 11)
-+#define IRQ_TP (HL_FPGA_IRQ_BASE + 12)
-+#define IRQ_RTC (HL_FPGA_IRQ_BASE + 13)
-+#define IRQ_TH_ALERT (HL_FPGA_IRQ_BASE + 14)
-
- unsigned char *highlander_init_irq_r7780mp(void);
- unsigned char *highlander_init_irq_r7780rp(void);
-diff --git a/include/asm-sh/rtc.h b/include/asm-sh/rtc.h
-index 858da99..ec45ba8 100644
---- a/include/asm-sh/rtc.h
-+++ b/include/asm-sh/rtc.h
-@@ -11,4 +11,6 @@ struct sh_rtc_platform_info {
- unsigned long capabilities;
- };
-
-+#include <asm/cpu/rtc.h>
-+
- #endif /* _ASM_RTC_H */
-diff --git a/include/asm-sh/scatterlist.h b/include/asm-sh/scatterlist.h
-index a7d0d18..2084d03 100644
---- a/include/asm-sh/scatterlist.h
-+++ b/include/asm-sh/scatterlist.h
-@@ -13,7 +13,7 @@ struct scatterlist {
- unsigned int length;
- };
-
--#define ISA_DMA_THRESHOLD (0x1fffffff)
-+#define ISA_DMA_THRESHOLD PHYS_ADDR_MASK
-
- /* These macros should be used after a pci_map_sg call has been done
- * to get bus addresses of each of the SG entries and their lengths.
-diff --git a/include/asm-sh/sdk7780.h b/include/asm-sh/sdk7780.h
-new file mode 100644
-index 0000000..697dc86
---- /dev/null
-+++ b/include/asm-sh/sdk7780.h
-@@ -0,0 +1,81 @@
-+#ifndef __ASM_SH_RENESAS_SDK7780_H
-+#define __ASM_SH_RENESAS_SDK7780_H
-+
-+/*
-+ * linux/include/asm-sh/sdk7780.h
-+ *
-+ * Renesas Solutions SH7780 SDK Support
-+ * Copyright (C) 2008 Nicholas Beck <nbeck at mpc-data.co.uk>
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#include <asm/addrspace.h>
-+
-+/* Box specific addresses. */
-+#define SE_AREA0_WIDTH 4 /* Area0: 32bit */
-+#define PA_ROM 0xa0000000 /* EPROM */
-+#define PA_ROM_SIZE 0x00400000 /* EPROM size 4M byte */
-+#define PA_FROM 0xa0800000 /* Flash-ROM */
-+#define PA_FROM_SIZE 0x00400000 /* Flash-ROM size 4M byte */
-+#define PA_EXT1 0xa4000000
-+#define PA_EXT1_SIZE 0x04000000
-+#define PA_SDRAM 0xa8000000 /* DDR-SDRAM(Area2/3) 128MB */
-+#define PA_SDRAM_SIZE 0x08000000
-+
-+#define PA_EXT4 0xb0000000
-+#define PA_EXT4_SIZE 0x04000000
-+#define PA_EXT_USER PA_EXT4 /* User Expansion Space */
-+
-+#define PA_PERIPHERAL PA_AREA5_IO
-+
-+/* SRAM/Reserved */
-+#define PA_RESERVED (PA_PERIPHERAL + 0)
-+/* FPGA base address */
-+#define PA_FPGA (PA_PERIPHERAL + 0x01000000)
-+/* SMC LAN91C111 */
-+#define PA_LAN (PA_PERIPHERAL + 0x01800000)
-+
-+
-+#define FPGA_SRSTR (PA_FPGA + 0x000) /* System reset */
-+#define FPGA_IRQ0SR (PA_FPGA + 0x010) /* IRQ0 status */
-+#define FPGA_IRQ0MR (PA_FPGA + 0x020) /* IRQ0 mask */
-+#define FPGA_BDMR (PA_FPGA + 0x030) /* Board operating mode */
-+#define FPGA_INTT0PRTR (PA_FPGA + 0x040) /* Interrupt test mode0 port */
-+#define FPGA_INTT0SELR (PA_FPGA + 0x050) /* Int. test mode0 select */
-+#define FPGA_INTT1POLR (PA_FPGA + 0x060) /* Int. test mode0 polarity */
-+#define FPGA_NMIR (PA_FPGA + 0x070) /* NMI source */
-+#define FPGA_NMIMR (PA_FPGA + 0x080) /* NMI mask */
-+#define FPGA_IRQR (PA_FPGA + 0x090) /* IRQX source */
-+#define FPGA_IRQMR (PA_FPGA + 0x0A0) /* IRQX mask */
-+#define FPGA_SLEDR (PA_FPGA + 0x0B0) /* LED control */
-+#define PA_LED FPGA_SLEDR
-+#define FPGA_MAPSWR (PA_FPGA + 0x0C0) /* Map switch */
-+#define FPGA_FPVERR (PA_FPGA + 0x0D0) /* FPGA version */
-+#define FPGA_FPDATER (PA_FPGA + 0x0E0) /* FPGA date */
-+#define FPGA_RSE (PA_FPGA + 0x100) /* Reset source */
-+#define FPGA_EASR (PA_FPGA + 0x110) /* External area select */
-+#define FPGA_SPER (PA_FPGA + 0x120) /* Serial port enable */
-+#define FPGA_IMSR (PA_FPGA + 0x130) /* Interrupt mode select */
-+#define FPGA_PCIMR (PA_FPGA + 0x140) /* PCI Mode */
-+#define FPGA_DIPSWMR (PA_FPGA + 0x150) /* DIPSW monitor */
-+#define FPGA_FPODR (PA_FPGA + 0x160) /* Output port data */
-+#define FPGA_ATAESR (PA_FPGA + 0x170) /* ATA extended bus status */
-+#define FPGA_IRQPOLR (PA_FPGA + 0x180) /* IRQx polarity */
-+
-+
-+#define SDK7780_NR_IRL 15
-+/* IDE/ATA interrupt */
-+#define IRQ_CFCARD 14
-+/* SMC interrupt */
-+#define IRQ_ETHERNET 6
-+
-+
-+/* arch/sh/boards/renesas/sdk7780/irq.c */
-+void init_sdk7780_IRQ(void);
-+
-+#define __IO_PREFIX sdk7780
-+#include <asm/io_generic.h>
-+
-+#endif /* __ASM_SH_RENESAS_SDK7780_H */
-diff --git a/include/asm-sh/sections.h b/include/asm-sh/sections.h
-index bd9cbc9..8f8f4ad 100644
---- a/include/asm-sh/sections.h
-+++ b/include/asm-sh/sections.h
-@@ -4,6 +4,7 @@
- #include <asm-generic/sections.h>
-
- extern long __machvec_start, __machvec_end;
-+extern char __uncached_start, __uncached_end;
- extern char _ebss[];
-
- #endif /* __ASM_SH_SECTIONS_H */
-diff --git a/include/asm-sh/sigcontext.h b/include/asm-sh/sigcontext.h
-index eb8effb..8ce1435 100644
---- a/include/asm-sh/sigcontext.h
-+++ b/include/asm-sh/sigcontext.h
-@@ -4,6 +4,18 @@
- struct sigcontext {
- unsigned long oldmask;
-
-+#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
-+ /* CPU registers */
-+ unsigned long long sc_regs[63];
-+ unsigned long long sc_tregs[8];
-+ unsigned long long sc_pc;
-+ unsigned long long sc_sr;
-+
-+ /* FPU registers */
-+ unsigned long long sc_fpregs[32];
-+ unsigned int sc_fpscr;
-+ unsigned int sc_fpvalid;
-+#else
- /* CPU registers */
- unsigned long sc_regs[16];
- unsigned long sc_pc;
-@@ -13,7 +25,8 @@ struct sigcontext {
- unsigned long sc_mach;
- unsigned long sc_macl;
-
--#if defined(__SH4__) || defined(CONFIG_CPU_SH4)
-+#if defined(__SH4__) || defined(CONFIG_CPU_SH4) || \
-+ defined(__SH2A__) || defined(CONFIG_CPU_SH2A)
- /* FPU registers */
- unsigned long sc_fpregs[16];
- unsigned long sc_xfpregs[16];
-@@ -21,6 +34,7 @@ struct sigcontext {
- unsigned int sc_fpul;
- unsigned int sc_ownedfp;
- #endif
-+#endif
- };
-
- #endif /* __ASM_SH_SIGCONTEXT_H */
-diff --git a/include/asm-sh/spi.h b/include/asm-sh/spi.h
-new file mode 100644
-index 0000000..e96f5b0
---- /dev/null
-+++ b/include/asm-sh/spi.h
-@@ -0,0 +1,13 @@
-+#ifndef __ASM_SPI_H__
-+#define __ASM_SPI_H__
-+
-+struct sh_spi_info;
-+
-+struct sh_spi_info {
-+ int bus_num;
-+ int num_chipselect;
-+
-+ void (*chip_select)(struct sh_spi_info *spi, int cs, int state);
-+};
-+
-+#endif /* __ASM_SPI_H__ */
-diff --git a/include/asm-sh/stat.h b/include/asm-sh/stat.h
-index 6d6ad26..e1810cc 100644
---- a/include/asm-sh/stat.h
-+++ b/include/asm-sh/stat.h
-@@ -15,6 +15,66 @@ struct __old_kernel_stat {
- unsigned long st_ctime;
- };
-
-+#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
-+struct stat {
-+ unsigned short st_dev;
-+ unsigned short __pad1;
-+ unsigned long st_ino;
-+ unsigned short st_mode;
-+ unsigned short st_nlink;
-+ unsigned short st_uid;
-+ unsigned short st_gid;
-+ unsigned short st_rdev;
-+ unsigned short __pad2;
-+ unsigned long st_size;
-+ unsigned long st_blksize;
-+ unsigned long st_blocks;
-+ unsigned long st_atime;
-+ unsigned long st_atime_nsec;
-+ unsigned long st_mtime;
-+ unsigned long st_mtime_nsec;
-+ unsigned long st_ctime;
-+ unsigned long st_ctime_nsec;
-+ unsigned long __unused4;
-+ unsigned long __unused5;
-+};
-+
-+/* This matches struct stat64 in glibc2.1, hence the absolutely
-+ * insane amounts of padding around dev_t's.
-+ */
-+struct stat64 {
-+ unsigned short st_dev;
-+ unsigned char __pad0[10];
-+
-+ unsigned long st_ino;
-+ unsigned int st_mode;
-+ unsigned int st_nlink;
-+
-+ unsigned long st_uid;
-+ unsigned long st_gid;
-+
-+ unsigned short st_rdev;
-+ unsigned char __pad3[10];
-+
-+ long long st_size;
-+ unsigned long st_blksize;
-+
-+ unsigned long st_blocks; /* Number 512-byte blocks allocated. */
-+ unsigned long __pad4; /* future possible st_blocks high bits */
-+
-+ unsigned long st_atime;
-+ unsigned long st_atime_nsec;
-+
-+ unsigned long st_mtime;
-+ unsigned long st_mtime_nsec;
-+
-+ unsigned long st_ctime;
-+ unsigned long st_ctime_nsec; /* will be high 32 bits of ctime someday */
-+
-+ unsigned long __unused1;
-+ unsigned long __unused2;
-+};
-+#else
- struct stat {
- unsigned long st_dev;
- unsigned long st_ino;
-@@ -67,11 +127,12 @@ struct stat64 {
- unsigned long st_mtime_nsec;
-
- unsigned long st_ctime;
-- unsigned long st_ctime_nsec;
-+ unsigned long st_ctime_nsec;
-
- unsigned long long st_ino;
- };
-
- #define STAT_HAVE_NSEC 1
-+#endif
-
- #endif /* __ASM_SH_STAT_H */
-diff --git a/include/asm-sh/string.h b/include/asm-sh/string.h
-index 55f8db6..8c1ea21 100644
---- a/include/asm-sh/string.h
-+++ b/include/asm-sh/string.h
-@@ -1,131 +1,5 @@
--#ifndef __ASM_SH_STRING_H
--#define __ASM_SH_STRING_H
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_TERMIOS_H */
+diff --git a/include/asm-sh64/thread_info.h b/include/asm-sh64/thread_info.h
+deleted file mode 100644
+index f6d5117..0000000
+--- a/include/asm-sh64/thread_info.h
++++ /dev/null
+@@ -1,91 +0,0 @@
+-#ifndef __ASM_SH64_THREAD_INFO_H
+-#define __ASM_SH64_THREAD_INFO_H
+-
+-/*
+- * SuperH 5 version
+- * Copyright (C) 2003 Paul Mundt
+- */
-
-#ifdef __KERNEL__
-
+-#ifndef __ASSEMBLY__
+-#include <asm/registers.h>
+-
-/*
-- * Copyright (C) 1999 Niibe Yutaka
-- * But consider these trivial functions to be public domain.
+- * low level task data that entry.S needs immediate access to
+- * - this struct should fit entirely inside of one cache line
+- * - this struct shares the supervisor stack pages
+- * - if the contents of this structure are changed, the assembly constants must also be changed
- */
+-struct thread_info {
+- struct task_struct *task; /* main task structure */
+- struct exec_domain *exec_domain; /* execution domain */
+- unsigned long flags; /* low level flags */
+- /* Put the 4 32-bit fields together to make asm offsetting easier. */
+- int preempt_count; /* 0 => preemptable, <0 => BUG */
+- __u16 cpu;
-
--#define __HAVE_ARCH_STRCPY
--static inline char *strcpy(char *__dest, const char *__src)
--{
-- register char *__xdest = __dest;
-- unsigned long __dummy;
+- mm_segment_t addr_limit;
+- struct restart_block restart_block;
-
-- __asm__ __volatile__("1:\n\t"
-- "mov.b @%1+, %2\n\t"
-- "mov.b %2, @%0\n\t"
-- "cmp/eq #0, %2\n\t"
-- "bf/s 1b\n\t"
-- " add #1, %0\n\t"
-- : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
-- : "0" (__dest), "1" (__src)
-- : "memory", "t");
+- __u8 supervisor_stack[0];
+-};
-
-- return __xdest;
+-/*
+- * macros/functions for gaining access to the thread information structure
+- */
+-#define INIT_THREAD_INFO(tsk) \
+-{ \
+- .task = &tsk, \
+- .exec_domain = &default_exec_domain, \
+- .flags = 0, \
+- .cpu = 0, \
+- .preempt_count = 1, \
+- .addr_limit = KERNEL_DS, \
+- .restart_block = { \
+- .fn = do_no_restart_syscall, \
+- }, \
-}
-
--#define __HAVE_ARCH_STRNCPY
--static inline char *strncpy(char *__dest, const char *__src, size_t __n)
--{
-- register char *__xdest = __dest;
-- unsigned long __dummy;
+-#define init_thread_info (init_thread_union.thread_info)
+-#define init_stack (init_thread_union.stack)
-
-- if (__n == 0)
-- return __xdest;
+-/* how to get the thread information struct from C */
+-static inline struct thread_info *current_thread_info(void)
+-{
+- struct thread_info *ti;
-
-- __asm__ __volatile__(
-- "1:\n"
-- "mov.b @%1+, %2\n\t"
-- "mov.b %2, @%0\n\t"
-- "cmp/eq #0, %2\n\t"
-- "bt/s 2f\n\t"
-- " cmp/eq %5,%1\n\t"
-- "bf/s 1b\n\t"
-- " add #1, %0\n"
-- "2:"
-- : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
-- : "0" (__dest), "1" (__src), "r" (__src+__n)
-- : "memory", "t");
+- __asm__ __volatile__ ("getcon " __KCR0 ", %0\n\t" : "=r" (ti));
-
-- return __xdest;
+- return ti;
-}
-
--#define __HAVE_ARCH_STRCMP
--static inline int strcmp(const char *__cs, const char *__ct)
--{
-- register int __res;
-- unsigned long __dummy;
+-/* thread information allocation */
-
-- __asm__ __volatile__(
-- "mov.b @%1+, %3\n"
-- "1:\n\t"
-- "mov.b @%0+, %2\n\t"
-- "cmp/eq #0, %3\n\t"
-- "bt 2f\n\t"
-- "cmp/eq %2, %3\n\t"
-- "bt/s 1b\n\t"
-- " mov.b @%1+, %3\n\t"
-- "add #-2, %1\n\t"
-- "mov.b @%1, %3\n\t"
-- "sub %3, %2\n"
-- "2:"
-- : "=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
-- : "0" (__cs), "1" (__ct)
-- : "t");
-
-- return __res;
--}
-
--#define __HAVE_ARCH_STRNCMP
--static inline int strncmp(const char *__cs, const char *__ct, size_t __n)
+-#define alloc_thread_info(ti) ((struct thread_info *) __get_free_pages(GFP_KERNEL,1))
+-#define free_thread_info(ti) free_pages((unsigned long) (ti), 1)
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#define THREAD_SIZE 8192
+-
+-#define PREEMPT_ACTIVE 0x10000000
+-
+-/* thread information flags */
+-#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
+-#define TIF_SIGPENDING 2 /* signal pending */
+-#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
+-#define TIF_MEMDIE 4
+-#define TIF_RESTORE_SIGMASK 5 /* Restore signal mask in do_signal */
+-
+-#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
+-#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
+-#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
+-#define _TIF_MEMDIE (1 << TIF_MEMDIE)
+-#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_THREAD_INFO_H */
+diff --git a/include/asm-sh64/timex.h b/include/asm-sh64/timex.h
+deleted file mode 100644
+index 163e2b6..0000000
+--- a/include/asm-sh64/timex.h
++++ /dev/null
+@@ -1,31 +0,0 @@
+-#ifndef __ASM_SH64_TIMEX_H
+-#define __ASM_SH64_TIMEX_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/timex.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 Paul Mundt
+- *
+- * sh-5 architecture timex specifications
+- *
+- */
+-
+-#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+-#define CLOCK_TICK_FACTOR 20 /* Factor of both 1000000 and CLOCK_TICK_RATE */
+-
+-typedef unsigned long cycles_t;
+-
+-static __inline__ cycles_t get_cycles (void)
-{
-- register int __res;
-- unsigned long __dummy;
+- return 0;
+-}
-
-- if (__n == 0)
-- return 0;
+-#define vxtime_lock() do {} while (0)
+-#define vxtime_unlock() do {} while (0)
-
-- __asm__ __volatile__(
-- "mov.b @%1+, %3\n"
-- "1:\n\t"
-- "mov.b @%0+, %2\n\t"
-- "cmp/eq %6, %0\n\t"
-- "bt/s 2f\n\t"
-- " cmp/eq #0, %3\n\t"
-- "bt/s 3f\n\t"
-- " cmp/eq %3, %2\n\t"
-- "bt/s 1b\n\t"
-- " mov.b @%1+, %3\n\t"
-- "add #-2, %1\n\t"
-- "mov.b @%1, %3\n"
-- "2:\n\t"
-- "sub %3, %2\n"
-- "3:"
-- :"=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
-- : "0" (__cs), "1" (__ct), "r" (__cs+__n)
-- : "t");
+-#endif /* __ASM_SH64_TIMEX_H */
+diff --git a/include/asm-sh64/tlb.h b/include/asm-sh64/tlb.h
+deleted file mode 100644
+index 4979408..0000000
+--- a/include/asm-sh64/tlb.h
++++ /dev/null
+@@ -1,92 +0,0 @@
+-/*
+- * include/asm-sh64/tlb.h
+- *
+- * Copyright (C) 2003 Paul Mundt
+- *
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- */
+-#ifndef __ASM_SH64_TLB_H
+-#define __ASM_SH64_TLB_H
-
-- return __res;
+-/*
+- * Note! These are mostly unused, we just need the xTLB_LAST_VAR_UNRESTRICTED
+- * for head.S! Once this limitation is gone, we can clean the rest of this up.
+- */
+-
+-/* ITLB defines */
+-#define ITLB_FIXED 0x00000000 /* First fixed ITLB, see head.S */
+-#define ITLB_LAST_VAR_UNRESTRICTED 0x000003F0 /* Last ITLB */
+-
+-/* DTLB defines */
+-#define DTLB_FIXED 0x00800000 /* First fixed DTLB, see head.S */
+-#define DTLB_LAST_VAR_UNRESTRICTED 0x008003F0 /* Last DTLB */
+-
+-#ifndef __ASSEMBLY__
+-
+-/**
+- * for_each_dtlb_entry
+- *
+- * @tlb: TLB entry
+- *
+- * Iterate over free (non-wired) DTLB entries
+- */
+-#define for_each_dtlb_entry(tlb) \
+- for (tlb = cpu_data->dtlb.first; \
+- tlb <= cpu_data->dtlb.last; \
+- tlb += cpu_data->dtlb.step)
+-
+-/**
+- * for_each_itlb_entry
+- *
+- * @tlb: TLB entry
+- *
+- * Iterate over free (non-wired) ITLB entries
+- */
+-#define for_each_itlb_entry(tlb) \
+- for (tlb = cpu_data->itlb.first; \
+- tlb <= cpu_data->itlb.last; \
+- tlb += cpu_data->itlb.step)
+-
+-/**
+- * __flush_tlb_slot
+- *
+- * @slot: Address of TLB slot.
+- *
+- * Flushes TLB slot @slot.
+- */
+-static inline void __flush_tlb_slot(unsigned long long slot)
+-{
+- __asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
-}
-
--#define __HAVE_ARCH_MEMSET
--extern void *memset(void *__s, int __c, size_t __count);
+-/* arch/sh64/mm/tlb.c */
+-extern int sh64_tlb_init(void);
+-extern unsigned long long sh64_next_free_dtlb_entry(void);
+-extern unsigned long long sh64_get_wired_dtlb_entry(void);
+-extern int sh64_put_wired_dtlb_entry(unsigned long long entry);
-
--#define __HAVE_ARCH_MEMCPY
--extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
+-extern void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr, unsigned long asid, unsigned long paddr);
+-extern void sh64_teardown_tlb_slot(unsigned long long config_addr);
-
--#define __HAVE_ARCH_MEMMOVE
--extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
+-#define tlb_start_vma(tlb, vma) \
+- flush_cache_range(vma, vma->vm_start, vma->vm_end)
-
--#define __HAVE_ARCH_MEMCHR
--extern void *memchr(const void *__s, int __c, size_t __n);
+-#define tlb_end_vma(tlb, vma) \
+- flush_tlb_range(vma, vma->vm_start, vma->vm_end)
-
--#define __HAVE_ARCH_STRLEN
--extern size_t strlen(const char *);
+-#define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
-
--#endif /* __KERNEL__ */
+-/*
+- * Flush whole TLBs for MM
+- */
+-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+-
+-#include <asm-generic/tlb.h>
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#endif /* __ASM_SH64_TLB_H */
+-
+diff --git a/include/asm-sh64/tlbflush.h b/include/asm-sh64/tlbflush.h
+deleted file mode 100644
+index 16a164a..0000000
+--- a/include/asm-sh64/tlbflush.h
++++ /dev/null
+@@ -1,27 +0,0 @@
+-#ifndef __ASM_SH64_TLBFLUSH_H
+-#define __ASM_SH64_TLBFLUSH_H
+-
+-#include <asm/pgalloc.h>
-
--#endif /* __ASM_SH_STRING_H */
-+#ifdef CONFIG_SUPERH32
-+# include "string_32.h"
-+#else
-+# include "string_64.h"
-+#endif
-diff --git a/include/asm-sh/string_32.h b/include/asm-sh/string_32.h
-new file mode 100644
-index 0000000..55f8db6
---- /dev/null
-+++ b/include/asm-sh/string_32.h
-@@ -0,0 +1,131 @@
-+#ifndef __ASM_SH_STRING_H
-+#define __ASM_SH_STRING_H
-+
-+#ifdef __KERNEL__
-+
-+/*
-+ * Copyright (C) 1999 Niibe Yutaka
-+ * But consider these trivial functions to be public domain.
-+ */
-+
-+#define __HAVE_ARCH_STRCPY
-+static inline char *strcpy(char *__dest, const char *__src)
-+{
-+ register char *__xdest = __dest;
-+ unsigned long __dummy;
-+
-+ __asm__ __volatile__("1:\n\t"
-+ "mov.b @%1+, %2\n\t"
-+ "mov.b %2, @%0\n\t"
-+ "cmp/eq #0, %2\n\t"
-+ "bf/s 1b\n\t"
-+ " add #1, %0\n\t"
-+ : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
-+ : "0" (__dest), "1" (__src)
-+ : "memory", "t");
-+
-+ return __xdest;
-+}
-+
-+#define __HAVE_ARCH_STRNCPY
-+static inline char *strncpy(char *__dest, const char *__src, size_t __n)
-+{
-+ register char *__xdest = __dest;
-+ unsigned long __dummy;
-+
-+ if (__n == 0)
-+ return __xdest;
-+
-+ __asm__ __volatile__(
-+ "1:\n"
-+ "mov.b @%1+, %2\n\t"
-+ "mov.b %2, @%0\n\t"
-+ "cmp/eq #0, %2\n\t"
-+ "bt/s 2f\n\t"
-+ " cmp/eq %5,%1\n\t"
-+ "bf/s 1b\n\t"
-+ " add #1, %0\n"
-+ "2:"
-+ : "=r" (__dest), "=r" (__src), "=&z" (__dummy)
-+ : "0" (__dest), "1" (__src), "r" (__src+__n)
-+ : "memory", "t");
-+
-+ return __xdest;
-+}
-+
-+#define __HAVE_ARCH_STRCMP
-+static inline int strcmp(const char *__cs, const char *__ct)
-+{
-+ register int __res;
-+ unsigned long __dummy;
-+
-+ __asm__ __volatile__(
-+ "mov.b @%1+, %3\n"
-+ "1:\n\t"
-+ "mov.b @%0+, %2\n\t"
-+ "cmp/eq #0, %3\n\t"
-+ "bt 2f\n\t"
-+ "cmp/eq %2, %3\n\t"
-+ "bt/s 1b\n\t"
-+ " mov.b @%1+, %3\n\t"
-+ "add #-2, %1\n\t"
-+ "mov.b @%1, %3\n\t"
-+ "sub %3, %2\n"
-+ "2:"
-+ : "=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
-+ : "0" (__cs), "1" (__ct)
-+ : "t");
-+
-+ return __res;
-+}
-+
-+#define __HAVE_ARCH_STRNCMP
-+static inline int strncmp(const char *__cs, const char *__ct, size_t __n)
-+{
-+ register int __res;
-+ unsigned long __dummy;
-+
-+ if (__n == 0)
-+ return 0;
-+
-+ __asm__ __volatile__(
-+ "mov.b @%1+, %3\n"
-+ "1:\n\t"
-+ "mov.b @%0+, %2\n\t"
-+ "cmp/eq %6, %0\n\t"
-+ "bt/s 2f\n\t"
-+ " cmp/eq #0, %3\n\t"
-+ "bt/s 3f\n\t"
-+ " cmp/eq %3, %2\n\t"
-+ "bt/s 1b\n\t"
-+ " mov.b @%1+, %3\n\t"
-+ "add #-2, %1\n\t"
-+ "mov.b @%1, %3\n"
-+ "2:\n\t"
-+ "sub %3, %2\n"
-+ "3:"
-+ :"=r" (__cs), "=r" (__ct), "=&r" (__res), "=&z" (__dummy)
-+ : "0" (__cs), "1" (__ct), "r" (__cs+__n)
-+ : "t");
-+
-+ return __res;
-+}
-+
-+#define __HAVE_ARCH_MEMSET
-+extern void *memset(void *__s, int __c, size_t __count);
-+
-+#define __HAVE_ARCH_MEMCPY
-+extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
-+
-+#define __HAVE_ARCH_MEMMOVE
-+extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
-+
-+#define __HAVE_ARCH_MEMCHR
-+extern void *memchr(const void *__s, int __c, size_t __n);
-+
-+#define __HAVE_ARCH_STRLEN
-+extern size_t strlen(const char *);
-+
-+#endif /* __KERNEL__ */
-+
-+#endif /* __ASM_SH_STRING_H */
-diff --git a/include/asm-sh/string_64.h b/include/asm-sh/string_64.h
-new file mode 100644
-index 0000000..aa1fef2
---- /dev/null
-+++ b/include/asm-sh/string_64.h
-@@ -0,0 +1,17 @@
-+#ifndef __ASM_SH_STRING_64_H
-+#define __ASM_SH_STRING_64_H
-+
-+/*
-+ * include/asm-sh/string_64.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+
-+#define __HAVE_ARCH_MEMCPY
-+extern void *memcpy(void *dest, const void *src, size_t count);
-+
-+#endif /* __ASM_SH_STRING_64_H */
-diff --git a/include/asm-sh/system.h b/include/asm-sh/system.h
-index 4faa2fb..772cd1a 100644
---- a/include/asm-sh/system.h
-+++ b/include/asm-sh/system.h
-@@ -12,60 +12,9 @@
- #include <asm/types.h>
- #include <asm/ptrace.h>
-
--struct task_struct *__switch_to(struct task_struct *prev,
-- struct task_struct *next);
-+#define AT_VECTOR_SIZE_ARCH 5 /* entries in ARCH_DLINFO */
-
--#define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */
-/*
-- * switch_to() should switch tasks to task nr n, first
+- * TLB flushing:
+- *
+- * - flush_tlb() flushes the current mm struct TLBs
+- * - flush_tlb_all() flushes all processes TLBs
+- * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+- * - flush_tlb_page(vma, vmaddr) flushes one page
+- * - flush_tlb_range(mm, start, end) flushes a range of pages
+- *
- */
-
--#define switch_to(prev, next, last) do { \
-- struct task_struct *__last; \
-- register unsigned long *__ts1 __asm__ ("r1") = &prev->thread.sp; \
-- register unsigned long *__ts2 __asm__ ("r2") = &prev->thread.pc; \
-- register unsigned long *__ts4 __asm__ ("r4") = (unsigned long *)prev; \
-- register unsigned long *__ts5 __asm__ ("r5") = (unsigned long *)next; \
-- register unsigned long *__ts6 __asm__ ("r6") = &next->thread.sp; \
-- register unsigned long __ts7 __asm__ ("r7") = next->thread.pc; \
-- __asm__ __volatile__ (".balign 4\n\t" \
-- "stc.l gbr, @-r15\n\t" \
-- "sts.l pr, @-r15\n\t" \
-- "mov.l r8, @-r15\n\t" \
-- "mov.l r9, @-r15\n\t" \
-- "mov.l r10, @-r15\n\t" \
-- "mov.l r11, @-r15\n\t" \
-- "mov.l r12, @-r15\n\t" \
-- "mov.l r13, @-r15\n\t" \
-- "mov.l r14, @-r15\n\t" \
-- "mov.l r15, @r1 ! save SP\n\t" \
-- "mov.l @r6, r15 ! change to new stack\n\t" \
-- "mova 1f, %0\n\t" \
-- "mov.l %0, @r2 ! save PC\n\t" \
-- "mov.l 2f, %0\n\t" \
-- "jmp @%0 ! call __switch_to\n\t" \
-- " lds r7, pr ! with return to new PC\n\t" \
-- ".balign 4\n" \
-- "2:\n\t" \
-- ".long __switch_to\n" \
-- "1:\n\t" \
-- "mov.l @r15+, r14\n\t" \
-- "mov.l @r15+, r13\n\t" \
-- "mov.l @r15+, r12\n\t" \
-- "mov.l @r15+, r11\n\t" \
-- "mov.l @r15+, r10\n\t" \
-- "mov.l @r15+, r9\n\t" \
-- "mov.l @r15+, r8\n\t" \
-- "lds.l @r15+, pr\n\t" \
-- "ldc.l @r15+, gbr\n\t" \
-- : "=z" (__last) \
-- : "r" (__ts1), "r" (__ts2), "r" (__ts4), \
-- "r" (__ts5), "r" (__ts6), "r" (__ts7) \
-- : "r3", "t"); \
-- last = __last; \
--} while (0)
+-extern void flush_tlb(void);
+-extern void flush_tlb_all(void);
+-extern void flush_tlb_mm(struct mm_struct *mm);
+-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+- unsigned long end);
+-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+-
+-extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+-
+-#endif /* __ASM_SH64_TLBFLUSH_H */
+-
+diff --git a/include/asm-sh64/topology.h b/include/asm-sh64/topology.h
+deleted file mode 100644
+index 3421178..0000000
+--- a/include/asm-sh64/topology.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_SH64_TOPOLOGY_H
+-#define __ASM_SH64_TOPOLOGY_H
+-
+-#include <asm-generic/topology.h>
+-
+-#endif /* __ASM_SH64_TOPOLOGY_H */
+diff --git a/include/asm-sh64/types.h b/include/asm-sh64/types.h
+deleted file mode 100644
+index 2c7ad73..0000000
+--- a/include/asm-sh64/types.h
++++ /dev/null
+@@ -1,74 +0,0 @@
+-#ifndef __ASM_SH64_TYPES_H
+-#define __ASM_SH64_TYPES_H
-
--#ifdef CONFIG_CPU_SH4A
-+#if defined(CONFIG_CPU_SH4A) || defined(CONFIG_CPU_SH5)
- #define __icbi() \
- { \
- unsigned long __addr; \
-@@ -91,7 +40,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
- * Historically we have only done this type of barrier for the MMUCR, but
- * it's also necessary for the CCR, so we make it generic here instead.
- */
--#ifdef CONFIG_CPU_SH4A
-+#if defined(CONFIG_CPU_SH4A) || defined(CONFIG_CPU_SH5)
- #define mb() __asm__ __volatile__ ("synco": : :"memory")
- #define rmb() mb()
- #define wmb() __asm__ __volatile__ ("synco": : :"memory")
-@@ -119,63 +68,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
-
- #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
-
-/*
-- * Jump to P2 area.
-- * When handling TLB or caches, we need to do it from P2 area.
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/types.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
- */
--#define jump_to_P2() \
--do { \
-- unsigned long __dummy; \
-- __asm__ __volatile__( \
-- "mov.l 1f, %0\n\t" \
-- "or %1, %0\n\t" \
-- "jmp @%0\n\t" \
-- " nop\n\t" \
-- ".balign 4\n" \
-- "1: .long 2f\n" \
-- "2:" \
-- : "=&r" (__dummy) \
-- : "r" (0x20000000)); \
--} while (0)
+-
+-#ifndef __ASSEMBLY__
+-
+-typedef unsigned short umode_t;
-
-/*
-- * Back to P1 area.
+- * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
+- * header files exported to user space
- */
--#define back_to_P1() \
--do { \
-- unsigned long __dummy; \
-- ctrl_barrier(); \
-- __asm__ __volatile__( \
-- "mov.l 1f, %0\n\t" \
-- "jmp @%0\n\t" \
-- " nop\n\t" \
-- ".balign 4\n" \
-- "1: .long 2f\n" \
-- "2:" \
-- : "=&r" (__dummy)); \
--} while (0)
-
--static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
--{
-- unsigned long flags, retval;
+-typedef __signed__ char __s8;
+-typedef unsigned char __u8;
-
-- local_irq_save(flags);
-- retval = *m;
-- *m = val;
-- local_irq_restore(flags);
-- return retval;
--}
+-typedef __signed__ short __s16;
+-typedef unsigned short __u16;
-
--static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
--{
-- unsigned long flags, retval;
+-typedef __signed__ int __s32;
+-typedef unsigned int __u32;
-
-- local_irq_save(flags);
-- retval = *m;
-- *m = val & 0xff;
-- local_irq_restore(flags);
-- return retval;
--}
-+#ifdef CONFIG_GUSA_RB
-+#include <asm/cmpxchg-grb.h>
-+#else
-+#include <asm/cmpxchg-irq.h>
-+#endif
-
- extern void __xchg_called_with_bad_pointer(void);
-
-@@ -202,20 +99,6 @@ extern void __xchg_called_with_bad_pointer(void);
- #define xchg(ptr,x) \
- ((__typeof__(*(ptr)))__xchg((ptr),(unsigned long)(x), sizeof(*(ptr))))
-
--static inline unsigned long __cmpxchg_u32(volatile int * m, unsigned long old,
-- unsigned long new)
--{
-- __u32 retval;
-- unsigned long flags;
+-#if defined(__GNUC__)
+-__extension__ typedef __signed__ long long __s64;
+-__extension__ typedef unsigned long long __u64;
+-#endif
-
-- local_irq_save(flags);
-- retval = *m;
-- if (retval == old)
-- *m = new;
-- local_irq_restore(flags); /* implies memory barrier */
-- return retval;
--}
+-#endif /* __ASSEMBLY__ */
-
- /* This function doesn't exist, so you'll get a linker error
- * if something tries to do an invalid cmpxchg(). */
- extern void __cmpxchg_called_with_bad_pointer(void);
-@@ -255,10 +138,14 @@ static inline void *set_exception_table_evt(unsigned int evt, void *handler)
- */
- #ifdef CONFIG_CPU_SH2A
- extern unsigned int instruction_size(unsigned int insn);
+-/*
+- * These aren't exported outside the kernel to avoid name space clashes
+- */
+-#ifdef __KERNEL__
+-
+-#ifndef __ASSEMBLY__
+-
+-typedef __signed__ char s8;
+-typedef unsigned char u8;
+-
+-typedef __signed__ short s16;
+-typedef unsigned short u16;
+-
+-typedef __signed__ int s32;
+-typedef unsigned int u32;
+-
+-typedef __signed__ long long s64;
+-typedef unsigned long long u64;
+-
+-/* DMA addresses come in generic and 64-bit flavours. */
+-
+-#ifdef CONFIG_HIGHMEM64G
+-typedef u64 dma_addr_t;
-#else
-+#elif defined(CONFIG_SUPERH32)
- #define instruction_size(insn) (2)
-+#else
-+#define instruction_size(insn) (4)
- #endif
-
-+extern unsigned long cached_to_uncached;
-+
- /* XXX
- * disable hlt during certain critical i/o operations
- */
-@@ -270,13 +157,35 @@ void default_idle(void);
- void per_cpu_trap_init(void);
-
- asmlinkage void break_point_trap(void);
--asmlinkage void debug_trap_handler(unsigned long r4, unsigned long r5,
-- unsigned long r6, unsigned long r7,
-- struct pt_regs __regs);
--asmlinkage void bug_trap_handler(unsigned long r4, unsigned long r5,
-- unsigned long r6, unsigned long r7,
-- struct pt_regs __regs);
-+
-+#ifdef CONFIG_SUPERH32
-+#define BUILD_TRAP_HANDLER(name) \
-+asmlinkage void name##_trap_handler(unsigned long r4, unsigned long r5, \
-+ unsigned long r6, unsigned long r7, \
-+ struct pt_regs __regs)
-+
-+#define TRAP_HANDLER_DECL \
-+ struct pt_regs *regs = RELOC_HIDE(&__regs, 0); \
-+ unsigned int vec = regs->tra; \
-+ (void)vec;
-+#else
-+#define BUILD_TRAP_HANDLER(name) \
-+asmlinkage void name##_trap_handler(unsigned int vec, struct pt_regs *regs)
-+#define TRAP_HANDLER_DECL
-+#endif
-+
-+BUILD_TRAP_HANDLER(address_error);
-+BUILD_TRAP_HANDLER(debug);
-+BUILD_TRAP_HANDLER(bug);
-+BUILD_TRAP_HANDLER(fpu_error);
-+BUILD_TRAP_HANDLER(fpu_state_restore);
-
- #define arch_align_stack(x) (x)
-
-+#ifdef CONFIG_SUPERH32
-+# include "system_32.h"
-+#else
-+# include "system_64.h"
-+#endif
-+
- #endif
-diff --git a/include/asm-sh/system_32.h b/include/asm-sh/system_32.h
-new file mode 100644
-index 0000000..7ff08d9
---- /dev/null
-+++ b/include/asm-sh/system_32.h
-@@ -0,0 +1,99 @@
-+#ifndef __ASM_SH_SYSTEM_32_H
-+#define __ASM_SH_SYSTEM_32_H
-+
-+#include <linux/types.h>
-+
-+struct task_struct *__switch_to(struct task_struct *prev,
-+ struct task_struct *next);
-+
-+/*
-+ * switch_to() should switch tasks to task nr n, first
-+ */
-+#define switch_to(prev, next, last) \
-+do { \
-+ register u32 *__ts1 __asm__ ("r1") = (u32 *)&prev->thread.sp; \
-+ register u32 *__ts2 __asm__ ("r2") = (u32 *)&prev->thread.pc; \
-+ register u32 *__ts4 __asm__ ("r4") = (u32 *)prev; \
-+ register u32 *__ts5 __asm__ ("r5") = (u32 *)next; \
-+ register u32 *__ts6 __asm__ ("r6") = (u32 *)&next->thread.sp; \
-+ register u32 __ts7 __asm__ ("r7") = next->thread.pc; \
-+ struct task_struct *__last; \
-+ \
-+ __asm__ __volatile__ ( \
-+ ".balign 4\n\t" \
-+ "stc.l gbr, @-r15\n\t" \
-+ "sts.l pr, @-r15\n\t" \
-+ "mov.l r8, @-r15\n\t" \
-+ "mov.l r9, @-r15\n\t" \
-+ "mov.l r10, @-r15\n\t" \
-+ "mov.l r11, @-r15\n\t" \
-+ "mov.l r12, @-r15\n\t" \
-+ "mov.l r13, @-r15\n\t" \
-+ "mov.l r14, @-r15\n\t" \
-+ "mov.l r15, @r1\t! save SP\n\t" \
-+ "mov.l @r6, r15\t! change to new stack\n\t" \
-+ "mova 1f, %0\n\t" \
-+ "mov.l %0, @r2\t! save PC\n\t" \
-+ "mov.l 2f, %0\n\t" \
-+ "jmp @%0\t! call __switch_to\n\t" \
-+ " lds r7, pr\t! with return to new PC\n\t" \
-+ ".balign 4\n" \
-+ "2:\n\t" \
-+ ".long __switch_to\n" \
-+ "1:\n\t" \
-+ "mov.l @r15+, r14\n\t" \
-+ "mov.l @r15+, r13\n\t" \
-+ "mov.l @r15+, r12\n\t" \
-+ "mov.l @r15+, r11\n\t" \
-+ "mov.l @r15+, r10\n\t" \
-+ "mov.l @r15+, r9\n\t" \
-+ "mov.l @r15+, r8\n\t" \
-+ "lds.l @r15+, pr\n\t" \
-+ "ldc.l @r15+, gbr\n\t" \
-+ : "=z" (__last) \
-+ : "r" (__ts1), "r" (__ts2), "r" (__ts4), \
-+ "r" (__ts5), "r" (__ts6), "r" (__ts7) \
-+ : "r3", "t"); \
-+ \
-+ last = __last; \
-+} while (0)
-+
-+#define __uses_jump_to_uncached __attribute__ ((__section__ (".uncached.text")))
-+
-+/*
-+ * Jump to uncached area.
-+ * When handling TLB or caches, we need to do it from an uncached area.
-+ */
-+#define jump_to_uncached() \
-+do { \
-+ unsigned long __dummy; \
-+ \
-+ __asm__ __volatile__( \
-+ "mova 1f, %0\n\t" \
-+ "add %1, %0\n\t" \
-+ "jmp @%0\n\t" \
-+ " nop\n\t" \
-+ ".balign 4\n" \
-+ "1:" \
-+ : "=&z" (__dummy) \
-+ : "r" (cached_to_uncached)); \
-+} while (0)
-+
-+/*
-+ * Back to cached area.
-+ */
-+#define back_to_cached() \
-+do { \
-+ unsigned long __dummy; \
-+ ctrl_barrier(); \
-+ __asm__ __volatile__( \
-+ "mov.l 1f, %0\n\t" \
-+ "jmp @%0\n\t" \
-+ " nop\n\t" \
-+ ".balign 4\n" \
-+ "1: .long 2f\n" \
-+ "2:" \
-+ : "=&r" (__dummy)); \
-+} while (0)
-+
-+#endif /* __ASM_SH_SYSTEM_32_H */
-diff --git a/include/asm-sh/system_64.h b/include/asm-sh/system_64.h
-new file mode 100644
-index 0000000..943acf5
---- /dev/null
-+++ b/include/asm-sh/system_64.h
-@@ -0,0 +1,40 @@
-+#ifndef __ASM_SH_SYSTEM_64_H
-+#define __ASM_SH_SYSTEM_64_H
-+
-+/*
-+ * include/asm-sh/system_64.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003 Paul Mundt
-+ * Copyright (C) 2004 Richard Curnow
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#include <asm/processor.h>
-+
-+/*
-+ * switch_to() should switch tasks to task nr n, first
-+ */
-+struct task_struct *sh64_switch_to(struct task_struct *prev,
-+ struct thread_struct *prev_thread,
-+ struct task_struct *next,
-+ struct thread_struct *next_thread);
-+
-+#define switch_to(prev,next,last) \
-+do { \
-+ if (last_task_used_math != next) { \
-+ struct pt_regs *regs = next->thread.uregs; \
-+ if (regs) regs->sr |= SR_FD; \
-+ } \
-+ last = sh64_switch_to(prev, &prev->thread, next, \
-+ &next->thread); \
-+} while (0)
-+
-+#define __uses_jump_to_uncached
-+
-+#define jump_to_uncached() do { } while (0)
-+#define back_to_cached() do { } while (0)
-+
-+#endif /* __ASM_SH_SYSTEM_64_H */
-diff --git a/include/asm-sh/thread_info.h b/include/asm-sh/thread_info.h
-index 1f7e1de..c50e5d3 100644
---- a/include/asm-sh/thread_info.h
-+++ b/include/asm-sh/thread_info.h
-@@ -68,14 +68,16 @@ struct thread_info {
- #define init_stack (init_thread_union.stack)
-
- /* how to get the current stack pointer from C */
--register unsigned long current_stack_pointer asm("r15") __attribute_used__;
-+register unsigned long current_stack_pointer asm("r15") __used;
-
- /* how to get the thread information struct from C */
- static inline struct thread_info *current_thread_info(void)
- {
- struct thread_info *ti;
--#ifdef CONFIG_CPU_HAS_SR_RB
-- __asm__("stc r7_bank, %0" : "=r" (ti));
-+#if defined(CONFIG_SUPERH64)
-+ __asm__ __volatile__ ("getcon cr17, %0" : "=r" (ti));
-+#elif defined(CONFIG_CPU_HAS_SR_RB)
-+ __asm__ __volatile__ ("stc r7_bank, %0" : "=r" (ti));
- #else
- unsigned long __dummy;
-
-@@ -111,6 +113,7 @@ static inline struct thread_info *current_thread_info(void)
- #define TIF_NEED_RESCHED 2 /* rescheduling necessary */
- #define TIF_RESTORE_SIGMASK 3 /* restore signal mask in do_signal() */
- #define TIF_SINGLESTEP 4 /* singlestepping active */
-+#define TIF_SYSCALL_AUDIT 5
- #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
- #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
- #define TIF_MEMDIE 18
-@@ -121,6 +124,7 @@ static inline struct thread_info *current_thread_info(void)
- #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
- #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
- #define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
-+#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
- #define _TIF_USEDFPU (1<<TIF_USEDFPU)
- #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
- #define _TIF_FREEZE (1<<TIF_FREEZE)
-diff --git a/include/asm-sh/tlb.h b/include/asm-sh/tlb.h
-index 53d185b..56ad1fb 100644
---- a/include/asm-sh/tlb.h
-+++ b/include/asm-sh/tlb.h
-@@ -1,6 +1,12 @@
- #ifndef __ASM_SH_TLB_H
- #define __ASM_SH_TLB_H
-
-+#ifdef CONFIG_SUPERH64
-+# include "tlb_64.h"
-+#endif
-+
-+#ifndef __ASSEMBLY__
-+
- #define tlb_start_vma(tlb, vma) \
- flush_cache_range(vma, vma->vm_start, vma->vm_end)
-
-@@ -15,4 +21,6 @@
- #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
- #include <asm-generic/tlb.h>
+-typedef u32 dma_addr_t;
-#endif
-+
-+#endif /* __ASSEMBLY__ */
-+#endif /* __ASM_SH_TLB_H */
-diff --git a/include/asm-sh/tlb_64.h b/include/asm-sh/tlb_64.h
-new file mode 100644
-index 0000000..0308e05
---- /dev/null
-+++ b/include/asm-sh/tlb_64.h
-@@ -0,0 +1,69 @@
-+/*
-+ * include/asm-sh/tlb_64.h
-+ *
-+ * Copyright (C) 2003 Paul Mundt
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#ifndef __ASM_SH_TLB_64_H
-+#define __ASM_SH_TLB_64_H
-+
-+/* ITLB defines */
-+#define ITLB_FIXED 0x00000000 /* First fixed ITLB, see head.S */
-+#define ITLB_LAST_VAR_UNRESTRICTED 0x000003F0 /* Last ITLB */
-+
-+/* DTLB defines */
-+#define DTLB_FIXED 0x00800000 /* First fixed DTLB, see head.S */
-+#define DTLB_LAST_VAR_UNRESTRICTED 0x008003F0 /* Last DTLB */
-+
-+#ifndef __ASSEMBLY__
-+
-+/**
-+ * for_each_dtlb_entry
-+ *
-+ * @tlb: TLB entry
-+ *
-+ * Iterate over free (non-wired) DTLB entries
-+ */
-+#define for_each_dtlb_entry(tlb) \
-+ for (tlb = cpu_data->dtlb.first; \
-+ tlb <= cpu_data->dtlb.last; \
-+ tlb += cpu_data->dtlb.step)
-+
-+/**
-+ * for_each_itlb_entry
-+ *
-+ * @tlb: TLB entry
-+ *
-+ * Iterate over free (non-wired) ITLB entries
-+ */
-+#define for_each_itlb_entry(tlb) \
-+ for (tlb = cpu_data->itlb.first; \
-+ tlb <= cpu_data->itlb.last; \
-+ tlb += cpu_data->itlb.step)
-+
-+/**
-+ * __flush_tlb_slot
-+ *
-+ * @slot: Address of TLB slot.
-+ *
-+ * Flushes TLB slot @slot.
-+ */
-+static inline void __flush_tlb_slot(unsigned long long slot)
-+{
-+ __asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
-+}
-+
-+/* arch/sh64/mm/tlb.c */
-+int sh64_tlb_init(void);
-+unsigned long long sh64_next_free_dtlb_entry(void);
-+unsigned long long sh64_get_wired_dtlb_entry(void);
-+int sh64_put_wired_dtlb_entry(unsigned long long entry);
-+void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr,
-+ unsigned long asid, unsigned long paddr);
-+void sh64_teardown_tlb_slot(unsigned long long config_addr);
-+
-+#endif /* __ASSEMBLY__ */
-+#endif /* __ASM_SH_TLB_64_H */
-diff --git a/include/asm-sh/types.h b/include/asm-sh/types.h
-index 7ba69d9..a6e1d41 100644
---- a/include/asm-sh/types.h
-+++ b/include/asm-sh/types.h
-@@ -52,6 +52,12 @@ typedef unsigned long long u64;
-
- typedef u32 dma_addr_t;
-
-+#ifdef CONFIG_SUPERH32
-+typedef u16 opcode_t;
-+#else
-+typedef u32 opcode_t;
-+#endif
-+
- #endif /* __ASSEMBLY__ */
-
- #endif /* __KERNEL__ */
-diff --git a/include/asm-sh/uaccess.h b/include/asm-sh/uaccess.h
-index 77c391f..ff24ce9 100644
---- a/include/asm-sh/uaccess.h
-+++ b/include/asm-sh/uaccess.h
-@@ -1,563 +1,5 @@
--/* $Id: uaccess.h,v 1.11 2003/10/13 07:21:20 lethal Exp $
+-typedef u64 dma64_addr_t;
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#define BITS_PER_LONG 32
+-
+-#endif /* __KERNEL__ */
+-
+-#endif /* __ASM_SH64_TYPES_H */
+diff --git a/include/asm-sh64/uaccess.h b/include/asm-sh64/uaccess.h
+deleted file mode 100644
+index 644c67b..0000000
+--- a/include/asm-sh64/uaccess.h
++++ /dev/null
+@@ -1,316 +0,0 @@
+-#ifndef __ASM_SH64_UACCESS_H
+-#define __ASM_SH64_UACCESS_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/uaccess.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003, 2004 Paul Mundt
- *
- * User space memory access functions
- *
-- * Copyright (C) 1999, 2002 Niibe Yutaka
-- * Copyright (C) 2003 Paul Mundt
+- * Copyright (C) 1999 Niibe Yutaka
- *
- * Based on:
- * MIPS implementation version 1.15 by
- * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
- * and i386 version.
+- *
- */
--#ifndef __ASM_SH_UACCESS_H
--#define __ASM_SH_UACCESS_H
-
-#include <linux/errno.h>
-#include <linux/sched.h>
@@ -527628,85 +615075,34 @@
-
-#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
-
--#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFFUL)
--#define USER_DS MAKE_MM_SEG(PAGE_OFFSET)
--
--#define segment_eq(a,b) ((a).seg == (b).seg)
+-#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
+-#define USER_DS MAKE_MM_SEG(0x80000000)
-
-#define get_ds() (KERNEL_DS)
+-#define get_fs() (current_thread_info()->addr_limit)
+-#define set_fs(x) (current_thread_info()->addr_limit=(x))
-
--#if !defined(CONFIG_MMU)
--/* NOMMU is always true */
--#define __addr_ok(addr) (1)
--
--static inline mm_segment_t get_fs(void)
--{
-- return USER_DS;
--}
--
--static inline void set_fs(mm_segment_t s)
--{
--}
--
--/*
-- * __access_ok: Check if address with size is OK or not.
-- *
-- * If we don't have an MMU (or if its disabled) the only thing we really have
-- * to look out for is if the address resides somewhere outside of what
-- * available RAM we have.
-- *
-- * TODO: This check could probably also stand to be restricted somewhat more..
-- * though it still does the Right Thing(tm) for the time being.
-- */
--static inline int __access_ok(unsigned long addr, unsigned long size)
--{
-- return ((addr >= memory_start) && ((addr + size) < memory_end));
--}
--#else /* CONFIG_MMU */
--#define __addr_ok(addr) \
-- ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
+-#define segment_eq(a,b) ((a).seg == (b).seg)
-
--#define get_fs() (current_thread_info()->addr_limit)
--#define set_fs(x) (current_thread_info()->addr_limit = (x))
+-#define __addr_ok(addr) ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
-
-/*
-- * __access_ok: Check if address with size is OK or not.
-- *
- * Uhhuh, this needs 33-bit arithmetic. We have a carry..
- *
- * sum := addr + size; carry? --> flag = true;
- * if (sum >= addr_limit) flag = true;
- */
--static inline int __access_ok(unsigned long addr, unsigned long size)
--{
-- unsigned long flag, sum;
--
-- __asm__("clrt\n\t"
-- "addc %3, %1\n\t"
-- "movt %0\n\t"
-- "cmp/hi %4, %1\n\t"
-- "rotcl %0"
-- :"=&r" (flag), "=r" (sum)
-- :"1" (addr), "r" (size),
-- "r" (current_thread_info()->addr_limit.seg)
-- :"t");
-- return flag == 0;
--
--}
--#endif /* CONFIG_MMU */
+-#define __range_ok(addr,size) (((unsigned long) (addr) + (size) < (current_thread_info()->addr_limit.seg)) ? 0 : 1)
-
--static inline int access_ok(int type, const void __user *p, unsigned long size)
--{
-- unsigned long addr = (unsigned long)p;
-- return __access_ok(addr, size);
--}
+-#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
+-#define __access_ok(addr,size) (__range_ok(addr,size) == 0)
-
-/*
- * Uh, these should become the main single-value transfer routines ...
- * They automatically use the right size if we just have the right
- * pointer type ...
- *
-- * As SuperH uses the same address space for kernel and user data, we
+- * As MIPS uses the same address space for kernel and user data, we
- * can just do these as direct assignments.
- *
- * Careful to not
@@ -527721,27 +615117,45 @@
- * doing multiple accesses to the same area (the user has to do the
- * checks by hand with "access_ok()")
- */
--#define __put_user(x,ptr) \
-- __put_user_nocheck((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
--#define __get_user(x,ptr) \
-- __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+-#define __put_user(x,ptr) __put_user_nocheck((x),(ptr),sizeof(*(ptr)))
+-#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+-
+-/*
+- * The "xxx_ret" versions return constant specified in third argument, if
+- * something bad happens. These macros can be optimized for the
+- * case of just returning from the function xxx_ret is used.
+- */
+-
+-#define put_user_ret(x,ptr,ret) ({ \
+-if (put_user(x,ptr)) return ret; })
+-
+-#define get_user_ret(x,ptr,ret) ({ \
+-if (get_user(x,ptr)) return ret; })
+-
+-#define __put_user_ret(x,ptr,ret) ({ \
+-if (__put_user(x,ptr)) return ret; })
+-
+-#define __get_user_ret(x,ptr,ret) ({ \
+-if (__get_user(x,ptr)) return ret; })
-
-struct __large_struct { unsigned long buf[100]; };
--#define __m(x) (*(struct __large_struct __user *)(x))
+-#define __m(x) (*(struct __large_struct *)(x))
-
-#define __get_user_size(x,ptr,size,retval) \
-do { \
- retval = 0; \
-- __chk_user_ptr(ptr); \
- switch (size) { \
- case 1: \
-- __get_user_asm(x, ptr, retval, "b"); \
+- retval = __get_user_asm_b(x, ptr); \
- break; \
- case 2: \
-- __get_user_asm(x, ptr, retval, "w"); \
+- retval = __get_user_asm_w(x, ptr); \
- break; \
- case 4: \
-- __get_user_asm(x, ptr, retval, "l"); \
+- retval = __get_user_asm_l(x, ptr); \
+- break; \
+- case 8: \
+- retval = __get_user_asm_q(x, ptr); \
- break; \
- default: \
- __get_user_unknown(); \
@@ -527752,255 +615166,80 @@
-#define __get_user_nocheck(x,ptr,size) \
-({ \
- long __gu_err, __gu_val; \
-- __get_user_size(__gu_val, (ptr), (size), __gu_err); \
+- __get_user_size((void *)&__gu_val, (long)(ptr), \
+- (size), __gu_err); \
- (x) = (__typeof__(*(ptr)))__gu_val; \
- __gu_err; \
-})
-
--#ifdef CONFIG_MMU
-#define __get_user_check(x,ptr,size) \
-({ \
-- long __gu_err, __gu_val; \
-- __chk_user_ptr(ptr); \
+- long __gu_addr = (long)(ptr); \
+- long __gu_err = -EFAULT, __gu_val; \
+- if (__access_ok(__gu_addr, (size))) \
+- __get_user_size((void *)&__gu_val, __gu_addr, \
+- (size), __gu_err); \
+- (x) = (__typeof__(*(ptr))) __gu_val; \
+- __gu_err; \
+-})
+-
+-extern long __get_user_asm_b(void *, long);
+-extern long __get_user_asm_w(void *, long);
+-extern long __get_user_asm_l(void *, long);
+-extern long __get_user_asm_q(void *, long);
+-extern void __get_user_unknown(void);
+-
+-#define __put_user_size(x,ptr,size,retval) \
+-do { \
+- retval = 0; \
- switch (size) { \
- case 1: \
-- __get_user_1(__gu_val, (ptr), __gu_err); \
+- retval = __put_user_asm_b(x, ptr); \
- break; \
- case 2: \
-- __get_user_2(__gu_val, (ptr), __gu_err); \
+- retval = __put_user_asm_w(x, ptr); \
- break; \
- case 4: \
-- __get_user_4(__gu_val, (ptr), __gu_err); \
+- retval = __put_user_asm_l(x, ptr); \
- break; \
-- default: \
-- __get_user_unknown(); \
+- case 8: \
+- retval = __put_user_asm_q(x, ptr); \
- break; \
+- default: \
+- __put_user_unknown(); \
- } \
-- \
-- (x) = (__typeof__(*(ptr)))__gu_val; \
-- __gu_err; \
--})
--
--#define __get_user_1(x,addr,err) ({ \
--__asm__("stc r7_bank, %1\n\t" \
-- "mov.l @(8,%1), %1\n\t" \
-- "and %2, %1\n\t" \
-- "cmp/pz %1\n\t" \
-- "bt/s 1f\n\t" \
-- " mov #0, %0\n\t" \
-- "0:\n" \
-- "mov #-14, %0\n\t" \
-- "bra 2f\n\t" \
-- " mov #0, %1\n" \
-- "1:\n\t" \
-- "mov.b @%2, %1\n\t" \
-- "extu.b %1, %1\n" \
-- "2:\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 0b\n\t" \
-- ".previous" \
-- : "=&r" (err), "=&r" (x) \
-- : "r" (addr) \
-- : "t"); \
--})
--
--#define __get_user_2(x,addr,err) ({ \
--__asm__("stc r7_bank, %1\n\t" \
-- "mov.l @(8,%1), %1\n\t" \
-- "and %2, %1\n\t" \
-- "cmp/pz %1\n\t" \
-- "bt/s 1f\n\t" \
-- " mov #0, %0\n\t" \
-- "0:\n" \
-- "mov #-14, %0\n\t" \
-- "bra 2f\n\t" \
-- " mov #0, %1\n" \
-- "1:\n\t" \
-- "mov.w @%2, %1\n\t" \
-- "extu.w %1, %1\n" \
-- "2:\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 0b\n\t" \
-- ".previous" \
-- : "=&r" (err), "=&r" (x) \
-- : "r" (addr) \
-- : "t"); \
--})
--
--#define __get_user_4(x,addr,err) ({ \
--__asm__("stc r7_bank, %1\n\t" \
-- "mov.l @(8,%1), %1\n\t" \
-- "and %2, %1\n\t" \
-- "cmp/pz %1\n\t" \
-- "bt/s 1f\n\t" \
-- " mov #0, %0\n\t" \
-- "0:\n" \
-- "mov #-14, %0\n\t" \
-- "bra 2f\n\t" \
-- " mov #0, %1\n" \
-- "1:\n\t" \
-- "mov.l @%2, %1\n\t" \
-- "2:\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 0b\n\t" \
-- ".previous" \
-- : "=&r" (err), "=&r" (x) \
-- : "r" (addr) \
-- : "t"); \
--})
--#else /* CONFIG_MMU */
--#define __get_user_check(x,ptr,size) \
--({ \
-- long __gu_err, __gu_val; \
-- if (__access_ok((unsigned long)(ptr), (size))) { \
-- __get_user_size(__gu_val, (ptr), (size), __gu_err); \
-- (x) = (__typeof__(*(ptr)))__gu_val; \
-- } else \
-- __gu_err = -EFAULT; \
-- __gu_err; \
--})
--#endif
--
--#define __get_user_asm(x, addr, err, insn) \
--({ \
--__asm__ __volatile__( \
-- "1:\n\t" \
-- "mov." insn " %2, %1\n\t" \
-- "mov #0, %0\n" \
-- "2:\n" \
-- ".section .fixup,\"ax\"\n" \
-- "3:\n\t" \
-- "mov #0, %1\n\t" \
-- "mov.l 4f, %0\n\t" \
-- "jmp @%0\n\t" \
-- " mov %3, %0\n" \
-- "4: .long 2b\n\t" \
-- ".previous\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 3b\n\t" \
-- ".previous" \
-- :"=&r" (err), "=&r" (x) \
-- :"m" (__m(addr)), "i" (-EFAULT)); })
--
--extern void __get_user_unknown(void);
--
--#define __put_user_size(x,ptr,size,retval) \
--do { \
-- retval = 0; \
-- __chk_user_ptr(ptr); \
-- switch (size) { \
-- case 1: \
-- __put_user_asm(x, ptr, retval, "b"); \
-- break; \
-- case 2: \
-- __put_user_asm(x, ptr, retval, "w"); \
-- break; \
-- case 4: \
-- __put_user_asm(x, ptr, retval, "l"); \
-- break; \
-- case 8: \
-- __put_user_u64(x, ptr, retval); \
-- break; \
-- default: \
-- __put_user_unknown(); \
-- } \
-} while (0)
-
--#define __put_user_nocheck(x,ptr,size) \
--({ \
-- long __pu_err; \
-- __put_user_size((x),(ptr),(size),__pu_err); \
-- __pu_err; \
+-#define __put_user_nocheck(x,ptr,size) \
+-({ \
+- long __pu_err; \
+- __typeof__(*(ptr)) __pu_val = (x); \
+- __put_user_size((void *)&__pu_val, (long)(ptr), (size), __pu_err); \
+- __pu_err; \
-})
-
-#define __put_user_check(x,ptr,size) \
-({ \
- long __pu_err = -EFAULT; \
-- __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
+- long __pu_addr = (long)(ptr); \
+- __typeof__(*(ptr)) __pu_val = (x); \
- \
-- if (__access_ok((unsigned long)__pu_addr,size)) \
-- __put_user_size((x),__pu_addr,(size),__pu_err); \
+- if (__access_ok(__pu_addr, (size))) \
+- __put_user_size((void *)&__pu_val, __pu_addr, (size), __pu_err);\
- __pu_err; \
-})
-
--#define __put_user_asm(x, addr, err, insn) \
--({ \
--__asm__ __volatile__( \
-- "1:\n\t" \
-- "mov." insn " %1, %2\n\t" \
-- "mov #0, %0\n" \
-- "2:\n" \
-- ".section .fixup,\"ax\"\n" \
-- "3:\n\t" \
-- "nop\n\t" \
-- "mov.l 4f, %0\n\t" \
-- "jmp @%0\n\t" \
-- "mov %3, %0\n" \
-- "4: .long 2b\n\t" \
-- ".previous\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 3b\n\t" \
-- ".previous" \
-- :"=&r" (err) \
-- :"r" (x), "m" (__m(addr)), "i" (-EFAULT) \
-- :"memory"); })
--
--#if defined(__LITTLE_ENDIAN__)
--#define __put_user_u64(val,addr,retval) \
--({ \
--__asm__ __volatile__( \
-- "1:\n\t" \
-- "mov.l %R1,%2\n\t" \
-- "mov.l %S1,%T2\n\t" \
-- "mov #0,%0\n" \
-- "2:\n" \
-- ".section .fixup,\"ax\"\n" \
-- "3:\n\t" \
-- "nop\n\t" \
-- "mov.l 4f,%0\n\t" \
-- "jmp @%0\n\t" \
-- " mov %3,%0\n" \
-- "4: .long 2b\n\t" \
-- ".previous\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 3b\n\t" \
-- ".previous" \
-- : "=r" (retval) \
-- : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
-- : "memory"); })
-+#ifdef CONFIG_SUPERH32
-+# include "uaccess_32.h"
- #else
--#define __put_user_u64(val,addr,retval) \
--({ \
--__asm__ __volatile__( \
-- "1:\n\t" \
-- "mov.l %S1,%2\n\t" \
-- "mov.l %R1,%T2\n\t" \
-- "mov #0,%0\n" \
-- "2:\n" \
-- ".section .fixup,\"ax\"\n" \
-- "3:\n\t" \
-- "nop\n\t" \
-- "mov.l 4f,%0\n\t" \
-- "jmp @%0\n\t" \
-- " mov %3,%0\n" \
-- "4: .long 2b\n\t" \
-- ".previous\n" \
-- ".section __ex_table,\"a\"\n\t" \
-- ".long 1b, 3b\n\t" \
-- ".previous" \
-- : "=r" (retval) \
-- : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
-- : "memory"); })
-+# include "uaccess_64.h"
- #endif
--
+-extern long __put_user_asm_b(void *, long);
+-extern long __put_user_asm_w(void *, long);
+-extern long __put_user_asm_l(void *, long);
+-extern long __put_user_asm_q(void *, long);
-extern void __put_user_unknown(void);
-
+-
-/* Generic arbitrary sized copy. */
-/* Return the number of bytes NOT copied */
--__kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
+-/* XXX: should be such that: 4byte and the rest. */
+-extern __kernel_size_t __copy_user(void *__to, const void *__from, __kernel_size_t __n);
-
-#define copy_to_user(to,from,n) ({ \
-void *__copy_to = (void *) (to); \
@@ -528011,6 +615250,20 @@
-} else __copy_res = __copy_size; \
-__copy_res; })
-
+-#define copy_to_user_ret(to,from,n,retval) ({ \
+-if (copy_to_user(to,from,n)) \
+- return retval; \
+-})
+-
+-#define __copy_to_user(to,from,n) \
+- __copy_user((void *)(to), \
+- (void *)(from), n)
+-
+-#define __copy_to_user_ret(to,from,n,retval) ({ \
+-if (__copy_to_user(to,from,n)) \
+- return retval; \
+-})
+-
-#define copy_from_user(to,from,n) ({ \
-void *__copy_to = (void *) (to); \
-void *__copy_from = (void *) (from); \
@@ -528021,25 +615274,25 @@
-} else __copy_res = __copy_size; \
-__copy_res; })
-
--static __always_inline unsigned long
--__copy_from_user(void *to, const void __user *from, unsigned long n)
--{
-- return __copy_user(to, (__force void *)from, n);
--}
+-#define copy_from_user_ret(to,from,n,retval) ({ \
+-if (copy_from_user(to,from,n)) \
+- return retval; \
+-})
-
--static __always_inline unsigned long __must_check
--__copy_to_user(void __user *to, const void *from, unsigned long n)
--{
-- return __copy_user((__force void *)to, from, n);
--}
+-#define __copy_from_user(to,from,n) \
+- __copy_user((void *)(to), \
+- (void *)(from), n)
+-
+-#define __copy_from_user_ret(to,from,n,retval) ({ \
+-if (__copy_from_user(to,from,n)) \
+- return retval; \
+-})
-
-#define __copy_to_user_inatomic __copy_to_user
-#define __copy_from_user_inatomic __copy_from_user
-
--/*
-- * Clear the area and return remaining number of bytes
-- * (on failure. Usually it's 0.)
-- */
+-/* XXX: Not sure it works well..
+- should be such that: 4byte clear and the rest. */
-extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
-
-#define clear_user(addr,n) ({ \
@@ -528049,44 +615302,7 @@
-__cl_size = __clear_user(__cl_addr, __cl_size); \
-__cl_size; })
-
--static __inline__ int
--__strncpy_from_user(unsigned long __dest, unsigned long __user __src, int __count)
--{
-- __kernel_size_t res;
-- unsigned long __dummy, _d, _s;
--
-- __asm__ __volatile__(
-- "9:\n"
-- "mov.b @%2+, %1\n\t"
-- "cmp/eq #0, %1\n\t"
-- "bt/s 2f\n"
-- "1:\n"
-- "mov.b %1, @%3\n\t"
-- "dt %7\n\t"
-- "bf/s 9b\n\t"
-- " add #1, %3\n\t"
-- "2:\n\t"
-- "sub %7, %0\n"
-- "3:\n"
-- ".section .fixup,\"ax\"\n"
-- "4:\n\t"
-- "mov.l 5f, %1\n\t"
-- "jmp @%1\n\t"
-- " mov %8, %0\n\t"
-- ".balign 4\n"
-- "5: .long 3b\n"
-- ".previous\n"
-- ".section __ex_table,\"a\"\n"
-- " .balign 4\n"
-- " .long 9b,4b\n"
-- ".previous"
-- : "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d)
-- : "0" (__count), "2" (__src), "3" (__dest), "r" (__count),
-- "i" (-EFAULT)
-- : "memory", "t");
--
-- return res;
--}
+-extern int __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count);
-
-#define strncpy_from_user(dest,src,count) ({ \
-unsigned long __sfu_src = (unsigned long) (src); \
@@ -528096,43 +615312,14 @@
-__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
-} __sfu_res; })
-
+-#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+-
-/*
- * Return the size of a string (including the ending 0!)
- */
--static __inline__ long __strnlen_user(const char __user *__s, long __n)
--{
-- unsigned long res;
-- unsigned long __dummy;
--
-- __asm__ __volatile__(
-- "9:\n"
-- "cmp/eq %4, %0\n\t"
-- "bt 2f\n"
-- "1:\t"
-- "mov.b @(%0,%3), %1\n\t"
-- "tst %1, %1\n\t"
-- "bf/s 9b\n\t"
-- " add #1, %0\n"
-- "2:\n"
-- ".section .fixup,\"ax\"\n"
-- "3:\n\t"
-- "mov.l 4f, %1\n\t"
-- "jmp @%1\n\t"
-- " mov #0, %0\n"
-- ".balign 4\n"
-- "4: .long 2b\n"
-- ".previous\n"
-- ".section __ex_table,\"a\"\n"
-- " .balign 4\n"
-- " .long 1b,3b\n"
-- ".previous"
-- : "=z" (res), "=&r" (__dummy)
-- : "0" (0), "r" (__s), "r" (__n)
-- : "t");
-- return res;
--}
+-extern long __strnlen_user(const char *__s, long __n);
-
--static __inline__ long strnlen_user(const char __user *s, long n)
+-static inline long strnlen_user(const char *s, long n)
-{
- if (!__addr_ok(s))
- return 0;
@@ -528140,867 +615327,104 @@
- return __strnlen_user(s, n);
-}
-
--#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+-struct exception_table_entry
+-{
+- unsigned long insn, fixup;
+-};
+-
+-#define ARCH_HAS_SEARCH_EXTABLE
+-
+-/* If gcc inlines memset, it will use st.q instructions. Therefore, we need
+- kmalloc allocations to be 8-byte aligned. Without this, the alignment
+- becomes BYTE_PER_WORD i.e. only 4 (since sizeof(long)==sizeof(void*)==4 on
+- sh64 at the moment). */
+-#define ARCH_KMALLOC_MINALIGN 8
-
-/*
-- * The exception table consists of pairs of addresses: the first is the
-- * address of an instruction that is allowed to fault, and the second is
-- * the address at which the program should continue. No registers are
-- * modified, so it is entirely up to the continuation code to figure out
-- * what to do.
+- * We want 8-byte alignment for the slab caches as well, otherwise we have
+- * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
+- */
+-#define ARCH_SLAB_MINALIGN 8
+-
+-/* Returns 0 if exception not found and fixup.unit otherwise. */
+-extern unsigned long search_exception_table(unsigned long addr);
+-extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
+-
+-#endif /* __ASM_SH64_UACCESS_H */
+diff --git a/include/asm-sh64/ucontext.h b/include/asm-sh64/ucontext.h
+deleted file mode 100644
+index cf77a08..0000000
+--- a/include/asm-sh64/ucontext.h
++++ /dev/null
+@@ -1,23 +0,0 @@
+-#ifndef __ASM_SH64_UCONTEXT_H
+-#define __ASM_SH64_UCONTEXT_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/ucontext.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
- *
-- * All the routines below use bits of fixup code that are out of line
-- * with the main instruction path. This means when everything is well,
-- * we don't even have to jump over them. Further, they do not intrude
-- * on our cache or tlb entries.
- */
-
--struct exception_table_entry
--{
-- unsigned long insn, fixup;
+-struct ucontext {
+- unsigned long uc_flags;
+- struct ucontext *uc_link;
+- stack_t uc_stack;
+- struct sigcontext uc_mcontext;
+- sigset_t uc_sigmask; /* mask last for extensibility */
-};
-
--extern int fixup_exception(struct pt_regs *regs);
--
--#endif /* __ASM_SH_UACCESS_H */
-diff --git a/include/asm-sh/uaccess_32.h b/include/asm-sh/uaccess_32.h
-new file mode 100644
-index 0000000..b6082f3
---- /dev/null
-+++ b/include/asm-sh/uaccess_32.h
-@@ -0,0 +1,510 @@
-+/* $Id: uaccess.h,v 1.11 2003/10/13 07:21:20 lethal Exp $
-+ *
-+ * User space memory access functions
-+ *
-+ * Copyright (C) 1999, 2002 Niibe Yutaka
-+ * Copyright (C) 2003 Paul Mundt
-+ *
-+ * Based on:
-+ * MIPS implementation version 1.15 by
-+ * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
-+ * and i386 version.
-+ */
-+#ifndef __ASM_SH_UACCESS_H
-+#define __ASM_SH_UACCESS_H
-+
-+#include <linux/errno.h>
-+#include <linux/sched.h>
-+
-+#define VERIFY_READ 0
-+#define VERIFY_WRITE 1
-+
-+/*
-+ * The fs value determines whether argument validity checking should be
-+ * performed or not. If get_fs() == USER_DS, checking is performed, with
-+ * get_fs() == KERNEL_DS, checking is bypassed.
-+ *
-+ * For historical reasons (Data Segment Register?), these macros are misnamed.
-+ */
-+
-+#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
-+
-+#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFFUL)
-+#define USER_DS MAKE_MM_SEG(PAGE_OFFSET)
-+
-+#define segment_eq(a,b) ((a).seg == (b).seg)
-+
-+#define get_ds() (KERNEL_DS)
-+
-+#if !defined(CONFIG_MMU)
-+/* NOMMU is always true */
-+#define __addr_ok(addr) (1)
-+
-+static inline mm_segment_t get_fs(void)
-+{
-+ return USER_DS;
-+}
-+
-+static inline void set_fs(mm_segment_t s)
-+{
-+}
-+
-+/*
-+ * __access_ok: Check if address with size is OK or not.
-+ *
-+ * If we don't have an MMU (or if its disabled) the only thing we really have
-+ * to look out for is if the address resides somewhere outside of what
-+ * available RAM we have.
-+ *
-+ * TODO: This check could probably also stand to be restricted somewhat more..
-+ * though it still does the Right Thing(tm) for the time being.
-+ */
-+static inline int __access_ok(unsigned long addr, unsigned long size)
-+{
-+ return ((addr >= memory_start) && ((addr + size) < memory_end));
-+}
-+#else /* CONFIG_MMU */
-+#define __addr_ok(addr) \
-+ ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
-+
-+#define get_fs() (current_thread_info()->addr_limit)
-+#define set_fs(x) (current_thread_info()->addr_limit = (x))
-+
-+/*
-+ * __access_ok: Check if address with size is OK or not.
-+ *
-+ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
-+ *
-+ * sum := addr + size; carry? --> flag = true;
-+ * if (sum >= addr_limit) flag = true;
-+ */
-+static inline int __access_ok(unsigned long addr, unsigned long size)
-+{
-+ unsigned long flag, sum;
-+
-+ __asm__("clrt\n\t"
-+ "addc %3, %1\n\t"
-+ "movt %0\n\t"
-+ "cmp/hi %4, %1\n\t"
-+ "rotcl %0"
-+ :"=&r" (flag), "=r" (sum)
-+ :"1" (addr), "r" (size),
-+ "r" (current_thread_info()->addr_limit.seg)
-+ :"t");
-+ return flag == 0;
-+}
-+#endif /* CONFIG_MMU */
-+
-+#define access_ok(type, addr, size) \
-+ (__chk_user_ptr(addr), \
-+ __access_ok((unsigned long __force)(addr), (size)))
-+
-+/*
-+ * Uh, these should become the main single-value transfer routines ...
-+ * They automatically use the right size if we just have the right
-+ * pointer type ...
-+ *
-+ * As SuperH uses the same address space for kernel and user data, we
-+ * can just do these as direct assignments.
-+ *
-+ * Careful to not
-+ * (a) re-use the arguments for side effects (sizeof is ok)
-+ * (b) require any knowledge of processes at this stage
-+ */
-+#define put_user(x,ptr) __put_user_check((x), (ptr), sizeof(*(ptr)))
-+#define get_user(x,ptr) __get_user_check((x), (ptr), sizeof(*(ptr)))
-+
-+/*
-+ * The "__xxx" versions do not do address space checking, useful when
-+ * doing multiple accesses to the same area (the user has to do the
-+ * checks by hand with "access_ok()")
-+ */
-+#define __put_user(x,ptr) __put_user_nocheck((x), (ptr), sizeof(*(ptr)))
-+#define __get_user(x,ptr) __get_user_nocheck((x), (ptr), sizeof(*(ptr)))
-+
-+struct __large_struct { unsigned long buf[100]; };
-+#define __m(x) (*(struct __large_struct __user *)(x))
-+
-+#define __get_user_size(x,ptr,size,retval) \
-+do { \
-+ retval = 0; \
-+ switch (size) { \
-+ case 1: \
-+ __get_user_asm(x, ptr, retval, "b"); \
-+ break; \
-+ case 2: \
-+ __get_user_asm(x, ptr, retval, "w"); \
-+ break; \
-+ case 4: \
-+ __get_user_asm(x, ptr, retval, "l"); \
-+ break; \
-+ default: \
-+ __get_user_unknown(); \
-+ break; \
-+ } \
-+} while (0)
-+
-+#define __get_user_nocheck(x,ptr,size) \
-+({ \
-+ long __gu_err; \
-+ unsigned long __gu_val; \
-+ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
-+ __chk_user_ptr(ptr); \
-+ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
-+ (x) = (__typeof__(*(ptr)))__gu_val; \
-+ __gu_err; \
-+})
-+
-+#define __get_user_check(x,ptr,size) \
-+({ \
-+ long __gu_err = -EFAULT; \
-+ unsigned long __gu_val = 0; \
-+ const __typeof__(*(ptr)) *__gu_addr = (ptr); \
-+ if (likely(access_ok(VERIFY_READ, __gu_addr, (size)))) \
-+ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
-+ (x) = (__typeof__(*(ptr)))__gu_val; \
-+ __gu_err; \
-+})
-+
-+#define __get_user_asm(x, addr, err, insn) \
-+({ \
-+__asm__ __volatile__( \
-+ "1:\n\t" \
-+ "mov." insn " %2, %1\n\t" \
-+ "2:\n" \
-+ ".section .fixup,\"ax\"\n" \
-+ "3:\n\t" \
-+ "mov #0, %1\n\t" \
-+ "mov.l 4f, %0\n\t" \
-+ "jmp @%0\n\t" \
-+ " mov %3, %0\n\t" \
-+ ".balign 4\n" \
-+ "4: .long 2b\n\t" \
-+ ".previous\n" \
-+ ".section __ex_table,\"a\"\n\t" \
-+ ".long 1b, 3b\n\t" \
-+ ".previous" \
-+ :"=&r" (err), "=&r" (x) \
-+ :"m" (__m(addr)), "i" (-EFAULT), "0" (err)); })
-+
-+extern void __get_user_unknown(void);
-+
-+#define __put_user_size(x,ptr,size,retval) \
-+do { \
-+ retval = 0; \
-+ switch (size) { \
-+ case 1: \
-+ __put_user_asm(x, ptr, retval, "b"); \
-+ break; \
-+ case 2: \
-+ __put_user_asm(x, ptr, retval, "w"); \
-+ break; \
-+ case 4: \
-+ __put_user_asm(x, ptr, retval, "l"); \
-+ break; \
-+ case 8: \
-+ __put_user_u64(x, ptr, retval); \
-+ break; \
-+ default: \
-+ __put_user_unknown(); \
-+ } \
-+} while (0)
-+
-+#define __put_user_nocheck(x,ptr,size) \
-+({ \
-+ long __pu_err; \
-+ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
-+ __chk_user_ptr(ptr); \
-+ __put_user_size((x), __pu_addr, (size), __pu_err); \
-+ __pu_err; \
-+})
-+
-+#define __put_user_check(x,ptr,size) \
-+({ \
-+ long __pu_err = -EFAULT; \
-+ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
-+ if (likely(access_ok(VERIFY_WRITE, __pu_addr, size))) \
-+ __put_user_size((x), __pu_addr, (size), \
-+ __pu_err); \
-+ __pu_err; \
-+})
-+
-+#define __put_user_asm(x, addr, err, insn) \
-+({ \
-+__asm__ __volatile__( \
-+ "1:\n\t" \
-+ "mov." insn " %1, %2\n\t" \
-+ "2:\n" \
-+ ".section .fixup,\"ax\"\n" \
-+ "3:\n\t" \
-+ "mov.l 4f, %0\n\t" \
-+ "jmp @%0\n\t" \
-+ " mov %3, %0\n\t" \
-+ ".balign 4\n" \
-+ "4: .long 2b\n\t" \
-+ ".previous\n" \
-+ ".section __ex_table,\"a\"\n\t" \
-+ ".long 1b, 3b\n\t" \
-+ ".previous" \
-+ :"=&r" (err) \
-+ :"r" (x), "m" (__m(addr)), "i" (-EFAULT), "0" (err) \
-+ :"memory"); })
-+
-+#if defined(CONFIG_CPU_LITTLE_ENDIAN)
-+#define __put_user_u64(val,addr,retval) \
-+({ \
-+__asm__ __volatile__( \
-+ "1:\n\t" \
-+ "mov.l %R1,%2\n\t" \
-+ "mov.l %S1,%T2\n\t" \
-+ "2:\n" \
-+ ".section .fixup,\"ax\"\n" \
-+ "3:\n\t" \
-+ "mov.l 4f,%0\n\t" \
-+ "jmp @%0\n\t" \
-+ " mov %3,%0\n\t" \
-+ ".balign 4\n" \
-+ "4: .long 2b\n\t" \
-+ ".previous\n" \
-+ ".section __ex_table,\"a\"\n\t" \
-+ ".long 1b, 3b\n\t" \
-+ ".previous" \
-+ : "=r" (retval) \
-+ : "r" (val), "m" (__m(addr)), "i" (-EFAULT), "0" (retval) \
-+ : "memory"); })
-+#else
-+#define __put_user_u64(val,addr,retval) \
-+({ \
-+__asm__ __volatile__( \
-+ "1:\n\t" \
-+ "mov.l %S1,%2\n\t" \
-+ "mov.l %R1,%T2\n\t" \
-+ "2:\n" \
-+ ".section .fixup,\"ax\"\n" \
-+ "3:\n\t" \
-+ "mov.l 4f,%0\n\t" \
-+ "jmp @%0\n\t" \
-+ " mov %3,%0\n\t" \
-+ ".balign 4\n" \
-+ "4: .long 2b\n\t" \
-+ ".previous\n" \
-+ ".section __ex_table,\"a\"\n\t" \
-+ ".long 1b, 3b\n\t" \
-+ ".previous" \
-+ : "=r" (retval) \
-+ : "r" (val), "m" (__m(addr)), "i" (-EFAULT), "0" (retval) \
-+ : "memory"); })
-+#endif
-+
-+extern void __put_user_unknown(void);
-+
-+/* Generic arbitrary sized copy. */
-+/* Return the number of bytes NOT copied */
-+__kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
-+
-+#define copy_to_user(to,from,n) ({ \
-+void *__copy_to = (void *) (to); \
-+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
-+__kernel_size_t __copy_res; \
-+if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
-+__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
-+} else __copy_res = __copy_size; \
-+__copy_res; })
-+
-+#define copy_from_user(to,from,n) ({ \
-+void *__copy_to = (void *) (to); \
-+void *__copy_from = (void *) (from); \
-+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
-+__kernel_size_t __copy_res; \
-+if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
-+__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
-+} else __copy_res = __copy_size; \
-+__copy_res; })
-+
-+static __always_inline unsigned long
-+__copy_from_user(void *to, const void __user *from, unsigned long n)
-+{
-+ return __copy_user(to, (__force void *)from, n);
-+}
-+
-+static __always_inline unsigned long __must_check
-+__copy_to_user(void __user *to, const void *from, unsigned long n)
-+{
-+ return __copy_user((__force void *)to, from, n);
-+}
-+
-+#define __copy_to_user_inatomic __copy_to_user
-+#define __copy_from_user_inatomic __copy_from_user
-+
-+/*
-+ * Clear the area and return remaining number of bytes
-+ * (on failure. Usually it's 0.)
-+ */
-+extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
-+
-+#define clear_user(addr,n) ({ \
-+void * __cl_addr = (addr); \
-+unsigned long __cl_size = (n); \
-+if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
-+__cl_size = __clear_user(__cl_addr, __cl_size); \
-+__cl_size; })
-+
-+static __inline__ int
-+__strncpy_from_user(unsigned long __dest, unsigned long __user __src, int __count)
-+{
-+ __kernel_size_t res;
-+ unsigned long __dummy, _d, _s, _c;
-+
-+ __asm__ __volatile__(
-+ "9:\n"
-+ "mov.b @%2+, %1\n\t"
-+ "cmp/eq #0, %1\n\t"
-+ "bt/s 2f\n"
-+ "1:\n"
-+ "mov.b %1, @%3\n\t"
-+ "dt %4\n\t"
-+ "bf/s 9b\n\t"
-+ " add #1, %3\n\t"
-+ "2:\n\t"
-+ "sub %4, %0\n"
-+ "3:\n"
-+ ".section .fixup,\"ax\"\n"
-+ "4:\n\t"
-+ "mov.l 5f, %1\n\t"
-+ "jmp @%1\n\t"
-+ " mov %9, %0\n\t"
-+ ".balign 4\n"
-+ "5: .long 3b\n"
-+ ".previous\n"
-+ ".section __ex_table,\"a\"\n"
-+ " .balign 4\n"
-+ " .long 9b,4b\n"
-+ ".previous"
-+ : "=r" (res), "=&z" (__dummy), "=r" (_s), "=r" (_d), "=r"(_c)
-+ : "0" (__count), "2" (__src), "3" (__dest), "4" (__count),
-+ "i" (-EFAULT)
-+ : "memory", "t");
-+
-+ return res;
-+}
-+
-+/**
-+ * strncpy_from_user: - Copy a NUL terminated string from userspace.
-+ * @dst: Destination address, in kernel space. This buffer must be at
-+ * least @count bytes long.
-+ * @src: Source address, in user space.
-+ * @count: Maximum number of bytes to copy, including the trailing NUL.
-+ *
-+ * Copies a NUL-terminated string from userspace to kernel space.
-+ *
-+ * On success, returns the length of the string (not including the trailing
-+ * NUL).
-+ *
-+ * If access to userspace fails, returns -EFAULT (some data may have been
-+ * copied).
-+ *
-+ * If @count is smaller than the length of the string, copies @count bytes
-+ * and returns @count.
-+ */
-+#define strncpy_from_user(dest,src,count) ({ \
-+unsigned long __sfu_src = (unsigned long) (src); \
-+int __sfu_count = (int) (count); \
-+long __sfu_res = -EFAULT; \
-+if(__access_ok(__sfu_src, __sfu_count)) { \
-+__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
-+} __sfu_res; })
-+
-+/*
-+ * Return the size of a string (including the ending 0 even when we have
-+ * exceeded the maximum string length).
-+ */
-+static __inline__ long __strnlen_user(const char __user *__s, long __n)
-+{
-+ unsigned long res;
-+ unsigned long __dummy;
-+
-+ __asm__ __volatile__(
-+ "1:\t"
-+ "mov.b @(%0,%3), %1\n\t"
-+ "cmp/eq %4, %0\n\t"
-+ "bt/s 2f\n\t"
-+ " add #1, %0\n\t"
-+ "tst %1, %1\n\t"
-+ "bf 1b\n\t"
-+ "2:\n"
-+ ".section .fixup,\"ax\"\n"
-+ "3:\n\t"
-+ "mov.l 4f, %1\n\t"
-+ "jmp @%1\n\t"
-+ " mov #0, %0\n"
-+ ".balign 4\n"
-+ "4: .long 2b\n"
-+ ".previous\n"
-+ ".section __ex_table,\"a\"\n"
-+ " .balign 4\n"
-+ " .long 1b,3b\n"
-+ ".previous"
-+ : "=z" (res), "=&r" (__dummy)
-+ : "0" (0), "r" (__s), "r" (__n)
-+ : "t");
-+ return res;
-+}
-+
-+/**
-+ * strnlen_user: - Get the size of a string in user space.
-+ * @s: The string to measure.
-+ * @n: The maximum valid length
-+ *
-+ * Context: User context only. This function may sleep.
-+ *
-+ * Get the size of a NUL-terminated string in user space.
-+ *
-+ * Returns the size of the string INCLUDING the terminating NUL.
-+ * On exception, returns 0.
-+ * If the string is too long, returns a value greater than @n.
-+ */
-+static __inline__ long strnlen_user(const char __user *s, long n)
-+{
-+ if (!__addr_ok(s))
-+ return 0;
-+ else
-+ return __strnlen_user(s, n);
-+}
-+
-+/**
-+ * strlen_user: - Get the size of a string in user space.
-+ * @str: The string to measure.
-+ *
-+ * Context: User context only. This function may sleep.
-+ *
-+ * Get the size of a NUL-terminated string in user space.
-+ *
-+ * Returns the size of the string INCLUDING the terminating NUL.
-+ * On exception, returns 0.
-+ *
-+ * If there is a limit on the length of a valid string, you may wish to
-+ * consider using strnlen_user() instead.
-+ */
-+#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
-+
-+/*
-+ * The exception table consists of pairs of addresses: the first is the
-+ * address of an instruction that is allowed to fault, and the second is
-+ * the address at which the program should continue. No registers are
-+ * modified, so it is entirely up to the continuation code to figure out
-+ * what to do.
-+ *
-+ * All the routines below use bits of fixup code that are out of line
-+ * with the main instruction path. This means when everything is well,
-+ * we don't even have to jump over them. Further, they do not intrude
-+ * on our cache or tlb entries.
-+ */
-+
-+struct exception_table_entry
-+{
-+ unsigned long insn, fixup;
-+};
-+
-+extern int fixup_exception(struct pt_regs *regs);
-+
-+#endif /* __ASM_SH_UACCESS_H */
-diff --git a/include/asm-sh/uaccess_64.h b/include/asm-sh/uaccess_64.h
-new file mode 100644
-index 0000000..d54ec08
---- /dev/null
-+++ b/include/asm-sh/uaccess_64.h
-@@ -0,0 +1,302 @@
-+#ifndef __ASM_SH_UACCESS_64_H
-+#define __ASM_SH_UACCESS_64_H
-+
-+/*
-+ * include/asm-sh/uaccess_64.h
-+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003, 2004 Paul Mundt
-+ *
-+ * User space memory access functions
-+ *
-+ * Copyright (C) 1999 Niibe Yutaka
-+ *
-+ * Based on:
-+ * MIPS implementation version 1.15 by
-+ * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
-+ * and i386 version.
-+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
-+ */
-+#include <linux/errno.h>
-+#include <linux/sched.h>
-+
-+#define VERIFY_READ 0
-+#define VERIFY_WRITE 1
-+
-+/*
-+ * The fs value determines whether argument validity checking should be
-+ * performed or not. If get_fs() == USER_DS, checking is performed, with
-+ * get_fs() == KERNEL_DS, checking is bypassed.
-+ *
-+ * For historical reasons (Data Segment Register?), these macros are misnamed.
-+ */
-+
-+#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
-+
-+#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
-+#define USER_DS MAKE_MM_SEG(0x80000000)
-+
-+#define get_ds() (KERNEL_DS)
-+#define get_fs() (current_thread_info()->addr_limit)
-+#define set_fs(x) (current_thread_info()->addr_limit=(x))
-+
-+#define segment_eq(a,b) ((a).seg == (b).seg)
-+
-+#define __addr_ok(addr) ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
-+
-+/*
-+ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
-+ *
-+ * sum := addr + size; carry? --> flag = true;
-+ * if (sum >= addr_limit) flag = true;
-+ */
-+#define __range_ok(addr,size) (((unsigned long) (addr) + (size) < (current_thread_info()->addr_limit.seg)) ? 0 : 1)
-+
-+#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
-+#define __access_ok(addr,size) (__range_ok(addr,size) == 0)
-+
-+/*
-+ * Uh, these should become the main single-value transfer routines ...
-+ * They automatically use the right size if we just have the right
-+ * pointer type ...
-+ *
-+ * As MIPS uses the same address space for kernel and user data, we
-+ * can just do these as direct assignments.
-+ *
-+ * Careful to not
-+ * (a) re-use the arguments for side effects (sizeof is ok)
-+ * (b) require any knowledge of processes at this stage
-+ */
-+#define put_user(x,ptr) __put_user_check((x),(ptr),sizeof(*(ptr)))
-+#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)))
-+
-+/*
-+ * The "__xxx" versions do not do address space checking, useful when
-+ * doing multiple accesses to the same area (the user has to do the
-+ * checks by hand with "access_ok()")
-+ */
-+#define __put_user(x,ptr) __put_user_nocheck((x),(ptr),sizeof(*(ptr)))
-+#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
-+
-+/*
-+ * The "xxx_ret" versions return constant specified in third argument, if
-+ * something bad happens. These macros can be optimized for the
-+ * case of just returning from the function xxx_ret is used.
-+ */
-+
-+#define put_user_ret(x,ptr,ret) ({ \
-+if (put_user(x,ptr)) return ret; })
-+
-+#define get_user_ret(x,ptr,ret) ({ \
-+if (get_user(x,ptr)) return ret; })
-+
-+#define __put_user_ret(x,ptr,ret) ({ \
-+if (__put_user(x,ptr)) return ret; })
-+
-+#define __get_user_ret(x,ptr,ret) ({ \
-+if (__get_user(x,ptr)) return ret; })
-+
-+struct __large_struct { unsigned long buf[100]; };
-+#define __m(x) (*(struct __large_struct *)(x))
-+
-+#define __get_user_size(x,ptr,size,retval) \
-+do { \
-+ retval = 0; \
-+ switch (size) { \
-+ case 1: \
-+ retval = __get_user_asm_b(x, ptr); \
-+ break; \
-+ case 2: \
-+ retval = __get_user_asm_w(x, ptr); \
-+ break; \
-+ case 4: \
-+ retval = __get_user_asm_l(x, ptr); \
-+ break; \
-+ case 8: \
-+ retval = __get_user_asm_q(x, ptr); \
-+ break; \
-+ default: \
-+ __get_user_unknown(); \
-+ break; \
-+ } \
-+} while (0)
-+
-+#define __get_user_nocheck(x,ptr,size) \
-+({ \
-+ long __gu_err, __gu_val; \
-+ __get_user_size((void *)&__gu_val, (long)(ptr), \
-+ (size), __gu_err); \
-+ (x) = (__typeof__(*(ptr)))__gu_val; \
-+ __gu_err; \
-+})
-+
-+#define __get_user_check(x,ptr,size) \
-+({ \
-+ long __gu_addr = (long)(ptr); \
-+ long __gu_err = -EFAULT, __gu_val; \
-+ if (__access_ok(__gu_addr, (size))) \
-+ __get_user_size((void *)&__gu_val, __gu_addr, \
-+ (size), __gu_err); \
-+ (x) = (__typeof__(*(ptr))) __gu_val; \
-+ __gu_err; \
-+})
-+
-+extern long __get_user_asm_b(void *, long);
-+extern long __get_user_asm_w(void *, long);
-+extern long __get_user_asm_l(void *, long);
-+extern long __get_user_asm_q(void *, long);
-+extern void __get_user_unknown(void);
-+
-+#define __put_user_size(x,ptr,size,retval) \
-+do { \
-+ retval = 0; \
-+ switch (size) { \
-+ case 1: \
-+ retval = __put_user_asm_b(x, ptr); \
-+ break; \
-+ case 2: \
-+ retval = __put_user_asm_w(x, ptr); \
-+ break; \
-+ case 4: \
-+ retval = __put_user_asm_l(x, ptr); \
-+ break; \
-+ case 8: \
-+ retval = __put_user_asm_q(x, ptr); \
-+ break; \
-+ default: \
-+ __put_user_unknown(); \
-+ } \
-+} while (0)
-+
-+#define __put_user_nocheck(x,ptr,size) \
-+({ \
-+ long __pu_err; \
-+ __typeof__(*(ptr)) __pu_val = (x); \
-+ __put_user_size((void *)&__pu_val, (long)(ptr), (size), __pu_err); \
-+ __pu_err; \
-+})
-+
-+#define __put_user_check(x,ptr,size) \
-+({ \
-+ long __pu_err = -EFAULT; \
-+ long __pu_addr = (long)(ptr); \
-+ __typeof__(*(ptr)) __pu_val = (x); \
-+ \
-+ if (__access_ok(__pu_addr, (size))) \
-+ __put_user_size((void *)&__pu_val, __pu_addr, (size), __pu_err);\
-+ __pu_err; \
-+})
-+
-+extern long __put_user_asm_b(void *, long);
-+extern long __put_user_asm_w(void *, long);
-+extern long __put_user_asm_l(void *, long);
-+extern long __put_user_asm_q(void *, long);
-+extern void __put_user_unknown(void);
-+
-+
-+/* Generic arbitrary sized copy. */
-+/* Return the number of bytes NOT copied */
-+/* XXX: should be such that: 4byte and the rest. */
-+extern __kernel_size_t __copy_user(void *__to, const void *__from, __kernel_size_t __n);
-+
-+#define copy_to_user(to,from,n) ({ \
-+void *__copy_to = (void *) (to); \
-+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
-+__kernel_size_t __copy_res; \
-+if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
-+__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
-+} else __copy_res = __copy_size; \
-+__copy_res; })
-+
-+#define copy_to_user_ret(to,from,n,retval) ({ \
-+if (copy_to_user(to,from,n)) \
-+ return retval; \
-+})
-+
-+#define __copy_to_user(to,from,n) \
-+ __copy_user((void *)(to), \
-+ (void *)(from), n)
-+
-+#define __copy_to_user_ret(to,from,n,retval) ({ \
-+if (__copy_to_user(to,from,n)) \
-+ return retval; \
-+})
-+
-+#define copy_from_user(to,from,n) ({ \
-+void *__copy_to = (void *) (to); \
-+void *__copy_from = (void *) (from); \
-+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
-+__kernel_size_t __copy_res; \
-+if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
-+__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
-+} else __copy_res = __copy_size; \
-+__copy_res; })
-+
-+#define copy_from_user_ret(to,from,n,retval) ({ \
-+if (copy_from_user(to,from,n)) \
-+ return retval; \
-+})
-+
-+#define __copy_from_user(to,from,n) \
-+ __copy_user((void *)(to), \
-+ (void *)(from), n)
-+
-+#define __copy_from_user_ret(to,from,n,retval) ({ \
-+if (__copy_from_user(to,from,n)) \
-+ return retval; \
-+})
-+
-+#define __copy_to_user_inatomic __copy_to_user
-+#define __copy_from_user_inatomic __copy_from_user
-+
-+/* XXX: Not sure it works well..
-+ should be such that: 4byte clear and the rest. */
-+extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
-+
-+#define clear_user(addr,n) ({ \
-+void * __cl_addr = (addr); \
-+unsigned long __cl_size = (n); \
-+if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
-+__cl_size = __clear_user(__cl_addr, __cl_size); \
-+__cl_size; })
-+
-+extern int __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count);
-+
-+#define strncpy_from_user(dest,src,count) ({ \
-+unsigned long __sfu_src = (unsigned long) (src); \
-+int __sfu_count = (int) (count); \
-+long __sfu_res = -EFAULT; \
-+if(__access_ok(__sfu_src, __sfu_count)) { \
-+__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
-+} __sfu_res; })
-+
-+#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
-+
-+/*
-+ * Return the size of a string (including the ending 0!)
-+ */
-+extern long __strnlen_user(const char *__s, long __n);
-+
-+static inline long strnlen_user(const char *s, long n)
-+{
-+ if (!__addr_ok(s))
-+ return 0;
-+ else
-+ return __strnlen_user(s, n);
-+}
-+
-+struct exception_table_entry
-+{
-+ unsigned long insn, fixup;
-+};
-+
-+#define ARCH_HAS_SEARCH_EXTABLE
-+
-+/* Returns 0 if exception not found and fixup.unit otherwise. */
-+extern unsigned long search_exception_table(unsigned long addr);
-+extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
-+
-+#endif /* __ASM_SH_UACCESS_64_H */
-diff --git a/include/asm-sh/unistd.h b/include/asm-sh/unistd.h
-index b182b1c..4b21f36 100644
---- a/include/asm-sh/unistd.h
-+++ b/include/asm-sh/unistd.h
-@@ -1,376 +1,5 @@
--#ifndef __ASM_SH_UNISTD_H
--#define __ASM_SH_UNISTD_H
+-#endif /* __ASM_SH64_UCONTEXT_H */
+diff --git a/include/asm-sh64/unaligned.h b/include/asm-sh64/unaligned.h
+deleted file mode 100644
+index 74481b1..0000000
+--- a/include/asm-sh64/unaligned.h
++++ /dev/null
+@@ -1,17 +0,0 @@
+-#ifndef __ASM_SH64_UNALIGNED_H
+-#define __ASM_SH64_UNALIGNED_H
-
-/*
-- * Copyright (C) 1999 Niibe Yutaka
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/unaligned.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
- */
-
+-#include <asm-generic/unaligned.h>
+-
+-#endif /* __ASM_SH64_UNALIGNED_H */
+diff --git a/include/asm-sh64/unistd.h b/include/asm-sh64/unistd.h
+deleted file mode 100644
+index 1a5197f..0000000
+--- a/include/asm-sh64/unistd.h
++++ /dev/null
+@@ -1,417 +0,0 @@
+-#ifndef __ASM_SH64_UNISTD_H
+-#define __ASM_SH64_UNISTD_H
+-
-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/unistd.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Copyright (C) 2003 - 2007 Paul Mundt
+- * Copyright (C) 2004 Sean McGoogan
+- *
- * This file contains the system call numbers.
+- *
- */
-
-#define __NR_restart_syscall 0
@@ -529079,7 +615503,7 @@
-#define __NR_sigpending 73
-#define __NR_sethostname 74
-#define __NR_setrlimit 75
--#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
+-#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
-#define __NR_getrusage 77
-#define __NR_gettimeofday 78
-#define __NR_settimeofday 79
@@ -529105,7 +615529,7 @@
-#define __NR_statfs 99
-#define __NR_fstatfs 100
-#define __NR_ioperm 101
--#define __NR_socketcall 102
+-#define __NR_socketcall 102 /* old implementation of socket systemcall */
-#define __NR_syslog 103
-#define __NR_setitimer 104
-#define __NR_getitimer 105
@@ -529223,45 +615647,80 @@
-#define __NR_pivot_root 217
-#define __NR_mincore 218
-#define __NR_madvise 219
--#define __NR_getdents64 220
--#define __NR_fcntl64 221
+-
+-/* Non-multiplexed socket family */
+-#define __NR_socket 220
+-#define __NR_bind 221
+-#define __NR_connect 222
+-#define __NR_listen 223
+-#define __NR_accept 224
+-#define __NR_getsockname 225
+-#define __NR_getpeername 226
+-#define __NR_socketpair 227
+-#define __NR_send 228
+-#define __NR_sendto 229
+-#define __NR_recv 230
+-#define __NR_recvfrom 231
+-#define __NR_shutdown 232
+-#define __NR_setsockopt 233
+-#define __NR_getsockopt 234
+-#define __NR_sendmsg 235
+-#define __NR_recvmsg 236
+-
+-/* Non-multiplexed IPC family */
+-#define __NR_semop 237
+-#define __NR_semget 238
+-#define __NR_semctl 239
+-#define __NR_msgsnd 240
+-#define __NR_msgrcv 241
+-#define __NR_msgget 242
+-#define __NR_msgctl 243
+-#if 0
+-#define __NR_shmatcall 244
+-#endif
+-#define __NR_shmdt 245
+-#define __NR_shmget 246
+-#define __NR_shmctl 247
+-
+-#define __NR_getdents64 248
+-#define __NR_fcntl64 249
-/* 223 is unused */
--#define __NR_gettid 224
--#define __NR_readahead 225
--#define __NR_setxattr 226
--#define __NR_lsetxattr 227
--#define __NR_fsetxattr 228
--#define __NR_getxattr 229
--#define __NR_lgetxattr 230
--#define __NR_fgetxattr 231
--#define __NR_listxattr 232
--#define __NR_llistxattr 233
--#define __NR_flistxattr 234
--#define __NR_removexattr 235
--#define __NR_lremovexattr 236
--#define __NR_fremovexattr 237
--#define __NR_tkill 238
--#define __NR_sendfile64 239
--#define __NR_futex 240
--#define __NR_sched_setaffinity 241
--#define __NR_sched_getaffinity 242
--#define __NR_set_thread_area 243
--#define __NR_get_thread_area 244
--#define __NR_io_setup 245
--#define __NR_io_destroy 246
--#define __NR_io_getevents 247
--#define __NR_io_submit 248
--#define __NR_io_cancel 249
--#define __NR_fadvise64 250
+-#define __NR_gettid 252
+-#define __NR_readahead 253
+-#define __NR_setxattr 254
+-#define __NR_lsetxattr 255
+-#define __NR_fsetxattr 256
+-#define __NR_getxattr 257
+-#define __NR_lgetxattr 258
+-#define __NR_fgetxattr 269
+-#define __NR_listxattr 260
+-#define __NR_llistxattr 261
+-#define __NR_flistxattr 262
+-#define __NR_removexattr 263
+-#define __NR_lremovexattr 264
+-#define __NR_fremovexattr 265
+-#define __NR_tkill 266
+-#define __NR_sendfile64 267
+-#define __NR_futex 268
+-#define __NR_sched_setaffinity 269
+-#define __NR_sched_getaffinity 270
+-#define __NR_set_thread_area 271
+-#define __NR_get_thread_area 272
+-#define __NR_io_setup 273
+-#define __NR_io_destroy 274
+-#define __NR_io_getevents 275
+-#define __NR_io_submit 276
+-#define __NR_io_cancel 277
+-#define __NR_fadvise64 278
+-#define __NR_exit_group 280
-
--#define __NR_exit_group 252
--#define __NR_lookup_dcookie 253
--#define __NR_epoll_create 254
--#define __NR_epoll_ctl 255
--#define __NR_epoll_wait 256
--#define __NR_remap_file_pages 257
--#define __NR_set_tid_address 258
--#define __NR_timer_create 259
+-#define __NR_lookup_dcookie 281
+-#define __NR_epoll_create 282
+-#define __NR_epoll_ctl 283
+-#define __NR_epoll_wait 284
+-#define __NR_remap_file_pages 285
+-#define __NR_set_tid_address 286
+-#define __NR_timer_create 287
-#define __NR_timer_settime (__NR_timer_create+1)
-#define __NR_timer_gettime (__NR_timer_create+2)
-#define __NR_timer_getoverrun (__NR_timer_create+3)
@@ -529270,68 +615729,68 @@
-#define __NR_clock_gettime (__NR_timer_create+6)
-#define __NR_clock_getres (__NR_timer_create+7)
-#define __NR_clock_nanosleep (__NR_timer_create+8)
--#define __NR_statfs64 268
--#define __NR_fstatfs64 269
--#define __NR_tgkill 270
--#define __NR_utimes 271
--#define __NR_fadvise64_64 272
--#define __NR_vserver 273
--#define __NR_mbind 274
--#define __NR_get_mempolicy 275
--#define __NR_set_mempolicy 276
--#define __NR_mq_open 277
+-#define __NR_statfs64 296
+-#define __NR_fstatfs64 297
+-#define __NR_tgkill 298
+-#define __NR_utimes 299
+-#define __NR_fadvise64_64 300
+-#define __NR_vserver 301
+-#define __NR_mbind 302
+-#define __NR_get_mempolicy 303
+-#define __NR_set_mempolicy 304
+-#define __NR_mq_open 305
-#define __NR_mq_unlink (__NR_mq_open+1)
-#define __NR_mq_timedsend (__NR_mq_open+2)
-#define __NR_mq_timedreceive (__NR_mq_open+3)
-#define __NR_mq_notify (__NR_mq_open+4)
-#define __NR_mq_getsetattr (__NR_mq_open+5)
--#define __NR_kexec_load 283
--#define __NR_waitid 284
--#define __NR_add_key 285
--#define __NR_request_key 286
--#define __NR_keyctl 287
--#define __NR_ioprio_set 288
--#define __NR_ioprio_get 289
--#define __NR_inotify_init 290
--#define __NR_inotify_add_watch 291
--#define __NR_inotify_rm_watch 292
--/* 293 is unused */
--#define __NR_migrate_pages 294
--#define __NR_openat 295
--#define __NR_mkdirat 296
--#define __NR_mknodat 297
--#define __NR_fchownat 298
--#define __NR_futimesat 299
--#define __NR_fstatat64 300
--#define __NR_unlinkat 301
--#define __NR_renameat 302
--#define __NR_linkat 303
--#define __NR_symlinkat 304
--#define __NR_readlinkat 305
--#define __NR_fchmodat 306
--#define __NR_faccessat 307
--#define __NR_pselect6 308
--#define __NR_ppoll 309
--#define __NR_unshare 310
--#define __NR_set_robust_list 311
--#define __NR_get_robust_list 312
--#define __NR_splice 313
--#define __NR_sync_file_range 314
--#define __NR_tee 315
--#define __NR_vmsplice 316
--#define __NR_move_pages 317
--#define __NR_getcpu 318
--#define __NR_epoll_pwait 319
--#define __NR_utimensat 320
--#define __NR_signalfd 321
--#define __NR_timerfd 322
--#define __NR_eventfd 323
--#define __NR_fallocate 324
--
--#define NR_syscalls 325
+-#define __NR_kexec_load 311
+-#define __NR_waitid 312
+-#define __NR_add_key 313
+-#define __NR_request_key 314
+-#define __NR_keyctl 315
+-#define __NR_ioprio_set 316
+-#define __NR_ioprio_get 317
+-#define __NR_inotify_init 318
+-#define __NR_inotify_add_watch 319
+-#define __NR_inotify_rm_watch 320
+-/* 321 is unused */
+-#define __NR_migrate_pages 322
+-#define __NR_openat 323
+-#define __NR_mkdirat 324
+-#define __NR_mknodat 325
+-#define __NR_fchownat 326
+-#define __NR_futimesat 327
+-#define __NR_fstatat64 328
+-#define __NR_unlinkat 329
+-#define __NR_renameat 330
+-#define __NR_linkat 331
+-#define __NR_symlinkat 332
+-#define __NR_readlinkat 333
+-#define __NR_fchmodat 334
+-#define __NR_faccessat 335
+-#define __NR_pselect6 336
+-#define __NR_ppoll 337
+-#define __NR_unshare 338
+-#define __NR_set_robust_list 339
+-#define __NR_get_robust_list 340
+-#define __NR_splice 341
+-#define __NR_sync_file_range 342
+-#define __NR_tee 343
+-#define __NR_vmsplice 344
+-#define __NR_move_pages 345
+-#define __NR_getcpu 346
+-#define __NR_epoll_pwait 347
+-#define __NR_utimensat 348
+-#define __NR_signalfd 349
+-#define __NR_timerfd 350
+-#define __NR_eventfd 351
+-#define __NR_fallocate 352
-
-#ifdef __KERNEL__
-
+-#define NR_syscalls 353
+-
-#define __ARCH_WANT_IPC_PARSE_VERSION
-#define __ARCH_WANT_OLD_READDIR
-#define __ARCH_WANT_OLD_STAT
@@ -529354,7 +615813,6 @@
-#define __ARCH_WANT_SYS_SIGPENDING
-#define __ARCH_WANT_SYS_SIGPROCMASK
-#define __ARCH_WANT_SYS_RT_SIGACTION
--#define __ARCH_WANT_SYS_RT_SIGSUSPEND
-
-/*
- * "Conditional" syscalls
@@ -529364,7821 +615822,24745 @@
- */
-#ifndef cond_syscall
-#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
-+#ifdef CONFIG_SUPERH32
-+# include "unistd_32.h"
-+#else
-+# include "unistd_64.h"
- #endif
+-#endif
-
-#endif /* __KERNEL__ */
--#endif /* __ASM_SH_UNISTD_H */
-diff --git a/include/asm-sh/unistd_32.h b/include/asm-sh/unistd_32.h
+-#endif /* __ASM_SH64_UNISTD_H */
+diff --git a/include/asm-sh64/user.h b/include/asm-sh64/user.h
+deleted file mode 100644
+index eb3b33e..0000000
+--- a/include/asm-sh64/user.h
++++ /dev/null
+@@ -1,70 +0,0 @@
+-#ifndef __ASM_SH64_USER_H
+-#define __ASM_SH64_USER_H
+-
+-/*
+- * This file is subject to the terms and conditions of the GNU General Public
+- * License. See the file "COPYING" in the main directory of this archive
+- * for more details.
+- *
+- * include/asm-sh64/user.h
+- *
+- * Copyright (C) 2000, 2001 Paolo Alberelli
+- *
+- */
+-
+-#include <linux/types.h>
+-#include <asm/ptrace.h>
+-#include <asm/page.h>
+-
+-/*
+- * Core file format: The core file is written in such a way that gdb
+- * can understand it and provide useful information to the user (under
+- * linux we use the `trad-core' bfd). The file contents are as follows:
+- *
+- * upage: 1 page consisting of a user struct that tells gdb
+- * what is present in the file. Directly after this is a
+- * copy of the task_struct, which is currently not used by gdb,
+- * but it may come in handy at some point. All of the registers
+- * are stored as part of the upage. The upage should always be
+- * only one page long.
+- * data: The data segment follows next. We use current->end_text to
+- * current->brk to pick up all of the user variables, plus any memory
+- * that may have been sbrk'ed. No attempt is made to determine if a
+- * page is demand-zero or if a page is totally unused, we just cover
+- * the entire range. All of the addresses are rounded in such a way
+- * that an integral number of pages is written.
+- * stack: We need the stack information in order to get a meaningful
+- * backtrace. We need to write the data from usp to
+- * current->start_stack, so we round each of these in order to be able
+- * to write an integer number of pages.
+- */
+-
+-struct user_fpu_struct {
+- unsigned long long fp_regs[32];
+- unsigned int fpscr;
+-};
+-
+-struct user {
+- struct pt_regs regs; /* entire machine state */
+- struct user_fpu_struct fpu; /* Math Co-processor registers */
+- int u_fpvalid; /* True if math co-processor being used */
+- size_t u_tsize; /* text size (pages) */
+- size_t u_dsize; /* data size (pages) */
+- size_t u_ssize; /* stack size (pages) */
+- unsigned long start_code; /* text starting address */
+- unsigned long start_data; /* data starting address */
+- unsigned long start_stack; /* stack starting address */
+- long int signal; /* signal causing core dump */
+- struct regs * u_ar0; /* help gdb find registers */
+- struct user_fpu_struct* u_fpstate; /* Math Co-processor pointer */
+- unsigned long magic; /* identifies a core file */
+- char u_comm[32]; /* user command name */
+-};
+-
+-#define NBPG PAGE_SIZE
+-#define UPAGES 1
+-#define HOST_TEXT_START_ADDR (u.start_code)
+-#define HOST_DATA_START_ADDR (u.start_data)
+-#define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
+-
+-#endif /* __ASM_SH64_USER_H */
+diff --git a/include/asm-sparc64/agp.h b/include/asm-sparc64/agp.h
+index 58f8cb6..e9fcf0e 100644
+--- a/include/asm-sparc64/agp.h
++++ b/include/asm-sparc64/agp.h
+@@ -5,7 +5,6 @@
+
+ #define map_page_into_agp(page)
+ #define unmap_page_from_agp(page)
+-#define flush_agp_mappings()
+ #define flush_agp_cache() mb()
+
+ /* Convert a physical address to an address suitable for the GART. */
+diff --git a/include/asm-sparc64/percpu.h b/include/asm-sparc64/percpu.h
+index a1f53a4..c7e52de 100644
+--- a/include/asm-sparc64/percpu.h
++++ b/include/asm-sparc64/percpu.h
+@@ -16,15 +16,6 @@ extern unsigned long __per_cpu_shift;
+ (__per_cpu_base + ((unsigned long)(__cpu) << __per_cpu_shift))
+ #define per_cpu_offset(x) (__per_cpu_offset(x))
+
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
+-
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
+-
+ /* var is in discarded region: offset to particular copy we want */
+ #define per_cpu(var, cpu) (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset(cpu)))
+ #define __get_cpu_var(var) (*RELOC_HIDE(&per_cpu__##var, __local_per_cpu_offset))
+@@ -41,10 +32,6 @@ do { \
+ #else /* ! SMP */
+
+ #define real_setup_per_cpu_areas() do { } while (0)
+-#define DEFINE_PER_CPU(type, name) \
+- __typeof__(type) per_cpu__##name
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
+
+ #define per_cpu(var, cpu) (*((void)cpu, &per_cpu__##var))
+ #define __get_cpu_var(var) per_cpu__##var
+@@ -54,7 +41,4 @@ do { \
+
+ #define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
+
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
+-
+ #endif /* __ARCH_SPARC64_PERCPU__ */
+diff --git a/include/asm-um/asm.h b/include/asm-um/asm.h
new file mode 100644
-index 0000000..b182b1c
+index 0000000..af1269a
--- /dev/null
-+++ b/include/asm-sh/unistd_32.h
-@@ -0,0 +1,376 @@
-+#ifndef __ASM_SH_UNISTD_H
-+#define __ASM_SH_UNISTD_H
++++ b/include/asm-um/asm.h
+@@ -0,0 +1,6 @@
++#ifndef __UM_ASM_H
++#define __UM_ASM_H
++
++#include "asm/arch/asm.h"
++
++#endif
+diff --git a/include/asm-um/linkage.h b/include/asm-um/linkage.h
+index 78b8624..cdb3024 100644
+--- a/include/asm-um/linkage.h
++++ b/include/asm-um/linkage.h
+@@ -6,7 +6,6 @@
+
+ /* <linux/linkage.h> will pick sane defaults */
+ #ifdef CONFIG_GPROF
+-#undef FASTCALL
+ #undef fastcall
+ #endif
+
+diff --git a/include/asm-um/nops.h b/include/asm-um/nops.h
+new file mode 100644
+index 0000000..814e9bf
+--- /dev/null
++++ b/include/asm-um/nops.h
+@@ -0,0 +1,6 @@
++#ifndef __UM_NOPS_H
++#define __UM_NOPS_H
+
++#include "asm/arch/nops.h"
++
++#endif
+diff --git a/include/asm-x86/Kbuild b/include/asm-x86/Kbuild
+index 12db5a1..e6189b2 100644
+--- a/include/asm-x86/Kbuild
++++ b/include/asm-x86/Kbuild
+@@ -9,15 +9,13 @@ header-y += prctl.h
+ header-y += ptrace-abi.h
+ header-y += sigcontext32.h
+ header-y += ucontext.h
+-header-y += vsyscall32.h
+
+ unifdef-y += e820.h
+ unifdef-y += ist.h
+ unifdef-y += mce.h
+ unifdef-y += msr.h
+ unifdef-y += mtrr.h
+-unifdef-y += page_32.h
+-unifdef-y += page_64.h
++unifdef-y += page.h
+ unifdef-y += posix_types_32.h
+ unifdef-y += posix_types_64.h
+ unifdef-y += ptrace.h
+diff --git a/include/asm-x86/acpi.h b/include/asm-x86/acpi.h
+index f8a8979..98a9ca2 100644
+--- a/include/asm-x86/acpi.h
++++ b/include/asm-x86/acpi.h
+@@ -1,13 +1,123 @@
+ #ifndef _ASM_X86_ACPI_H
+ #define _ASM_X86_ACPI_H
+
+-#ifdef CONFIG_X86_32
+-# include "acpi_32.h"
+-#else
+-# include "acpi_64.h"
+-#endif
+/*
-+ * Copyright (C) 1999 Niibe Yutaka
++ * Copyright (C) 2001 Paul Diefenbaugh <paul.s.diefenbaugh at intel.com>
++ * Copyright (C) 2001 Patrick Mochel <mochel at osdl.org>
++ *
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
++ *
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
++#include <acpi/pdc_intel.h>
+
++#include <asm/numa.h>
+ #include <asm/processor.h>
++#include <asm/mmu.h>
++
++#define COMPILER_DEPENDENT_INT64 long long
++#define COMPILER_DEPENDENT_UINT64 unsigned long long
+
+/*
-+ * This file contains the system call numbers.
++ * Calling conventions:
++ *
++ * ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
++ * ACPI_EXTERNAL_XFACE - External ACPI interfaces
++ * ACPI_INTERNAL_XFACE - Internal ACPI interfaces
++ * ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
+ */
++#define ACPI_SYSTEM_XFACE
++#define ACPI_EXTERNAL_XFACE
++#define ACPI_INTERNAL_XFACE
++#define ACPI_INTERNAL_VAR_XFACE
+
-+#define __NR_restart_syscall 0
-+#define __NR_exit 1
-+#define __NR_fork 2
-+#define __NR_read 3
-+#define __NR_write 4
-+#define __NR_open 5
-+#define __NR_close 6
-+#define __NR_waitpid 7
-+#define __NR_creat 8
-+#define __NR_link 9
-+#define __NR_unlink 10
-+#define __NR_execve 11
-+#define __NR_chdir 12
-+#define __NR_time 13
-+#define __NR_mknod 14
-+#define __NR_chmod 15
-+#define __NR_lchown 16
-+#define __NR_break 17
-+#define __NR_oldstat 18
-+#define __NR_lseek 19
-+#define __NR_getpid 20
-+#define __NR_mount 21
-+#define __NR_umount 22
-+#define __NR_setuid 23
-+#define __NR_getuid 24
-+#define __NR_stime 25
-+#define __NR_ptrace 26
-+#define __NR_alarm 27
-+#define __NR_oldfstat 28
-+#define __NR_pause 29
-+#define __NR_utime 30
-+#define __NR_stty 31
-+#define __NR_gtty 32
-+#define __NR_access 33
-+#define __NR_nice 34
-+#define __NR_ftime 35
-+#define __NR_sync 36
-+#define __NR_kill 37
-+#define __NR_rename 38
-+#define __NR_mkdir 39
-+#define __NR_rmdir 40
-+#define __NR_dup 41
-+#define __NR_pipe 42
-+#define __NR_times 43
-+#define __NR_prof 44
-+#define __NR_brk 45
-+#define __NR_setgid 46
-+#define __NR_getgid 47
-+#define __NR_signal 48
-+#define __NR_geteuid 49
-+#define __NR_getegid 50
-+#define __NR_acct 51
-+#define __NR_umount2 52
-+#define __NR_lock 53
-+#define __NR_ioctl 54
-+#define __NR_fcntl 55
-+#define __NR_mpx 56
-+#define __NR_setpgid 57
-+#define __NR_ulimit 58
-+#define __NR_oldolduname 59
-+#define __NR_umask 60
-+#define __NR_chroot 61
-+#define __NR_ustat 62
-+#define __NR_dup2 63
-+#define __NR_getppid 64
-+#define __NR_getpgrp 65
-+#define __NR_setsid 66
-+#define __NR_sigaction 67
-+#define __NR_sgetmask 68
-+#define __NR_ssetmask 69
-+#define __NR_setreuid 70
-+#define __NR_setregid 71
-+#define __NR_sigsuspend 72
-+#define __NR_sigpending 73
-+#define __NR_sethostname 74
-+#define __NR_setrlimit 75
-+#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
-+#define __NR_getrusage 77
-+#define __NR_gettimeofday 78
-+#define __NR_settimeofday 79
-+#define __NR_getgroups 80
-+#define __NR_setgroups 81
-+#define __NR_select 82
-+#define __NR_symlink 83
-+#define __NR_oldlstat 84
-+#define __NR_readlink 85
-+#define __NR_uselib 86
-+#define __NR_swapon 87
-+#define __NR_reboot 88
-+#define __NR_readdir 89
-+#define __NR_mmap 90
-+#define __NR_munmap 91
-+#define __NR_truncate 92
-+#define __NR_ftruncate 93
-+#define __NR_fchmod 94
-+#define __NR_fchown 95
-+#define __NR_getpriority 96
-+#define __NR_setpriority 97
-+#define __NR_profil 98
-+#define __NR_statfs 99
-+#define __NR_fstatfs 100
-+#define __NR_ioperm 101
-+#define __NR_socketcall 102
-+#define __NR_syslog 103
-+#define __NR_setitimer 104
-+#define __NR_getitimer 105
-+#define __NR_stat 106
-+#define __NR_lstat 107
-+#define __NR_fstat 108
-+#define __NR_olduname 109
-+#define __NR_iopl 110
-+#define __NR_vhangup 111
-+#define __NR_idle 112
-+#define __NR_vm86old 113
-+#define __NR_wait4 114
-+#define __NR_swapoff 115
-+#define __NR_sysinfo 116
-+#define __NR_ipc 117
-+#define __NR_fsync 118
-+#define __NR_sigreturn 119
-+#define __NR_clone 120
-+#define __NR_setdomainname 121
-+#define __NR_uname 122
-+#define __NR_modify_ldt 123
-+#define __NR_adjtimex 124
-+#define __NR_mprotect 125
-+#define __NR_sigprocmask 126
-+#define __NR_create_module 127
-+#define __NR_init_module 128
-+#define __NR_delete_module 129
-+#define __NR_get_kernel_syms 130
-+#define __NR_quotactl 131
-+#define __NR_getpgid 132
-+#define __NR_fchdir 133
-+#define __NR_bdflush 134
-+#define __NR_sysfs 135
-+#define __NR_personality 136
-+#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
-+#define __NR_setfsuid 138
-+#define __NR_setfsgid 139
-+#define __NR__llseek 140
-+#define __NR_getdents 141
-+#define __NR__newselect 142
-+#define __NR_flock 143
-+#define __NR_msync 144
-+#define __NR_readv 145
-+#define __NR_writev 146
-+#define __NR_getsid 147
-+#define __NR_fdatasync 148
-+#define __NR__sysctl 149
-+#define __NR_mlock 150
-+#define __NR_munlock 151
-+#define __NR_mlockall 152
-+#define __NR_munlockall 153
-+#define __NR_sched_setparam 154
-+#define __NR_sched_getparam 155
-+#define __NR_sched_setscheduler 156
-+#define __NR_sched_getscheduler 157
-+#define __NR_sched_yield 158
-+#define __NR_sched_get_priority_max 159
-+#define __NR_sched_get_priority_min 160
-+#define __NR_sched_rr_get_interval 161
-+#define __NR_nanosleep 162
-+#define __NR_mremap 163
-+#define __NR_setresuid 164
-+#define __NR_getresuid 165
-+#define __NR_vm86 166
-+#define __NR_query_module 167
-+#define __NR_poll 168
-+#define __NR_nfsservctl 169
-+#define __NR_setresgid 170
-+#define __NR_getresgid 171
-+#define __NR_prctl 172
-+#define __NR_rt_sigreturn 173
-+#define __NR_rt_sigaction 174
-+#define __NR_rt_sigprocmask 175
-+#define __NR_rt_sigpending 176
-+#define __NR_rt_sigtimedwait 177
-+#define __NR_rt_sigqueueinfo 178
-+#define __NR_rt_sigsuspend 179
-+#define __NR_pread64 180
-+#define __NR_pwrite64 181
-+#define __NR_chown 182
-+#define __NR_getcwd 183
-+#define __NR_capget 184
-+#define __NR_capset 185
-+#define __NR_sigaltstack 186
-+#define __NR_sendfile 187
-+#define __NR_streams1 188 /* some people actually want it */
-+#define __NR_streams2 189 /* some people actually want it */
-+#define __NR_vfork 190
-+#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
-+#define __NR_mmap2 192
-+#define __NR_truncate64 193
-+#define __NR_ftruncate64 194
-+#define __NR_stat64 195
-+#define __NR_lstat64 196
-+#define __NR_fstat64 197
-+#define __NR_lchown32 198
-+#define __NR_getuid32 199
-+#define __NR_getgid32 200
-+#define __NR_geteuid32 201
-+#define __NR_getegid32 202
-+#define __NR_setreuid32 203
-+#define __NR_setregid32 204
-+#define __NR_getgroups32 205
-+#define __NR_setgroups32 206
-+#define __NR_fchown32 207
-+#define __NR_setresuid32 208
-+#define __NR_getresuid32 209
-+#define __NR_setresgid32 210
-+#define __NR_getresgid32 211
-+#define __NR_chown32 212
-+#define __NR_setuid32 213
-+#define __NR_setgid32 214
-+#define __NR_setfsuid32 215
-+#define __NR_setfsgid32 216
-+#define __NR_pivot_root 217
-+#define __NR_mincore 218
-+#define __NR_madvise 219
-+#define __NR_getdents64 220
-+#define __NR_fcntl64 221
-+/* 223 is unused */
-+#define __NR_gettid 224
-+#define __NR_readahead 225
-+#define __NR_setxattr 226
-+#define __NR_lsetxattr 227
-+#define __NR_fsetxattr 228
-+#define __NR_getxattr 229
-+#define __NR_lgetxattr 230
-+#define __NR_fgetxattr 231
-+#define __NR_listxattr 232
-+#define __NR_llistxattr 233
-+#define __NR_flistxattr 234
-+#define __NR_removexattr 235
-+#define __NR_lremovexattr 236
-+#define __NR_fremovexattr 237
-+#define __NR_tkill 238
-+#define __NR_sendfile64 239
-+#define __NR_futex 240
-+#define __NR_sched_setaffinity 241
-+#define __NR_sched_getaffinity 242
-+#define __NR_set_thread_area 243
-+#define __NR_get_thread_area 244
-+#define __NR_io_setup 245
-+#define __NR_io_destroy 246
-+#define __NR_io_getevents 247
-+#define __NR_io_submit 248
-+#define __NR_io_cancel 249
-+#define __NR_fadvise64 250
++/* Asm macros */
+
-+#define __NR_exit_group 252
-+#define __NR_lookup_dcookie 253
-+#define __NR_epoll_create 254
-+#define __NR_epoll_ctl 255
-+#define __NR_epoll_wait 256
-+#define __NR_remap_file_pages 257
-+#define __NR_set_tid_address 258
-+#define __NR_timer_create 259
-+#define __NR_timer_settime (__NR_timer_create+1)
-+#define __NR_timer_gettime (__NR_timer_create+2)
-+#define __NR_timer_getoverrun (__NR_timer_create+3)
-+#define __NR_timer_delete (__NR_timer_create+4)
-+#define __NR_clock_settime (__NR_timer_create+5)
-+#define __NR_clock_gettime (__NR_timer_create+6)
-+#define __NR_clock_getres (__NR_timer_create+7)
-+#define __NR_clock_nanosleep (__NR_timer_create+8)
-+#define __NR_statfs64 268
-+#define __NR_fstatfs64 269
-+#define __NR_tgkill 270
-+#define __NR_utimes 271
-+#define __NR_fadvise64_64 272
-+#define __NR_vserver 273
-+#define __NR_mbind 274
-+#define __NR_get_mempolicy 275
-+#define __NR_set_mempolicy 276
-+#define __NR_mq_open 277
-+#define __NR_mq_unlink (__NR_mq_open+1)
-+#define __NR_mq_timedsend (__NR_mq_open+2)
-+#define __NR_mq_timedreceive (__NR_mq_open+3)
-+#define __NR_mq_notify (__NR_mq_open+4)
-+#define __NR_mq_getsetattr (__NR_mq_open+5)
-+#define __NR_kexec_load 283
-+#define __NR_waitid 284
-+#define __NR_add_key 285
-+#define __NR_request_key 286
-+#define __NR_keyctl 287
-+#define __NR_ioprio_set 288
-+#define __NR_ioprio_get 289
-+#define __NR_inotify_init 290
-+#define __NR_inotify_add_watch 291
-+#define __NR_inotify_rm_watch 292
-+/* 293 is unused */
-+#define __NR_migrate_pages 294
-+#define __NR_openat 295
-+#define __NR_mkdirat 296
-+#define __NR_mknodat 297
-+#define __NR_fchownat 298
-+#define __NR_futimesat 299
-+#define __NR_fstatat64 300
-+#define __NR_unlinkat 301
-+#define __NR_renameat 302
-+#define __NR_linkat 303
-+#define __NR_symlinkat 304
-+#define __NR_readlinkat 305
-+#define __NR_fchmodat 306
-+#define __NR_faccessat 307
-+#define __NR_pselect6 308
-+#define __NR_ppoll 309
-+#define __NR_unshare 310
-+#define __NR_set_robust_list 311
-+#define __NR_get_robust_list 312
-+#define __NR_splice 313
-+#define __NR_sync_file_range 314
-+#define __NR_tee 315
-+#define __NR_vmsplice 316
-+#define __NR_move_pages 317
-+#define __NR_getcpu 318
-+#define __NR_epoll_pwait 319
-+#define __NR_utimensat 320
-+#define __NR_signalfd 321
-+#define __NR_timerfd 322
-+#define __NR_eventfd 323
-+#define __NR_fallocate 324
++#define ACPI_ASM_MACROS
++#define BREAKPOINT3
++#define ACPI_DISABLE_IRQS() local_irq_disable()
++#define ACPI_ENABLE_IRQS() local_irq_enable()
++#define ACPI_FLUSH_CPU_CACHE() wbinvd()
+
-+#define NR_syscalls 325
++int __acpi_acquire_global_lock(unsigned int *lock);
++int __acpi_release_global_lock(unsigned int *lock);
+
-+#ifdef __KERNEL__
++#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
++ ((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
+
-+#define __ARCH_WANT_IPC_PARSE_VERSION
-+#define __ARCH_WANT_OLD_READDIR
-+#define __ARCH_WANT_OLD_STAT
-+#define __ARCH_WANT_STAT64
-+#define __ARCH_WANT_SYS_ALARM
-+#define __ARCH_WANT_SYS_GETHOSTNAME
-+#define __ARCH_WANT_SYS_PAUSE
-+#define __ARCH_WANT_SYS_SGETMASK
-+#define __ARCH_WANT_SYS_SIGNAL
-+#define __ARCH_WANT_SYS_TIME
-+#define __ARCH_WANT_SYS_UTIME
-+#define __ARCH_WANT_SYS_WAITPID
-+#define __ARCH_WANT_SYS_SOCKETCALL
-+#define __ARCH_WANT_SYS_FADVISE64
-+#define __ARCH_WANT_SYS_GETPGRP
-+#define __ARCH_WANT_SYS_LLSEEK
-+#define __ARCH_WANT_SYS_NICE
-+#define __ARCH_WANT_SYS_OLD_GETRLIMIT
-+#define __ARCH_WANT_SYS_OLDUMOUNT
-+#define __ARCH_WANT_SYS_SIGPENDING
-+#define __ARCH_WANT_SYS_SIGPROCMASK
-+#define __ARCH_WANT_SYS_RT_SIGACTION
-+#define __ARCH_WANT_SYS_RT_SIGSUSPEND
++#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
++ ((Acq) = __acpi_release_global_lock(&facs->global_lock))
+
+/*
-+ * "Conditional" syscalls
++ * Math helper asm macros
++ */
++#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
++ asm("divl %2;" \
++ :"=a"(q32), "=d"(r32) \
++ :"r"(d32), \
++ "0"(n_lo), "1"(n_hi))
++
++
++#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
++ asm("shrl $1,%2 ;" \
++ "rcrl $1,%3;" \
++ :"=r"(n_hi), "=r"(n_lo) \
++ :"0"(n_hi), "1"(n_lo))
++
++#ifdef CONFIG_ACPI
++extern int acpi_lapic;
++extern int acpi_ioapic;
++extern int acpi_noirq;
++extern int acpi_strict;
++extern int acpi_disabled;
++extern int acpi_ht;
++extern int acpi_pci_disabled;
++extern int acpi_skip_timer_override;
++extern int acpi_use_timer_override;
++
++static inline void disable_acpi(void)
++{
++ acpi_disabled = 1;
++ acpi_ht = 0;
++ acpi_pci_disabled = 1;
++ acpi_noirq = 1;
++}
++
++/* Fixmap pages to reserve for ACPI boot-time tables (see fixmap.h) */
++#define FIX_ACPI_PAGES 4
++
++extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq);
++
++static inline void acpi_noirq_set(void) { acpi_noirq = 1; }
++static inline void acpi_disable_pci(void)
++{
++ acpi_pci_disabled = 1;
++ acpi_noirq_set();
++}
++extern int acpi_irq_balance_set(char *str);
++
++/* routines for saving/restoring kernel state */
++extern int acpi_save_state_mem(void);
++extern void acpi_restore_state_mem(void);
++
++extern unsigned long acpi_wakeup_address;
++
++/* early initialization routine */
++extern void acpi_reserve_bootmem(void);
+
+ /*
+ * Check if the CPU can handle C2 and deeper
+@@ -29,4 +139,35 @@ static inline unsigned int acpi_processor_cstate_check(unsigned int max_cstate)
+ return max_cstate;
+ }
+
++#else /* !CONFIG_ACPI */
++
++#define acpi_lapic 0
++#define acpi_ioapic 0
++static inline void acpi_noirq_set(void) { }
++static inline void acpi_disable_pci(void) { }
++static inline void disable_acpi(void) { }
++
++#endif /* !CONFIG_ACPI */
++
++#define ARCH_HAS_POWER_INIT 1
++
++struct bootnode;
++
++#ifdef CONFIG_ACPI_NUMA
++extern int acpi_numa;
++extern int acpi_scan_nodes(unsigned long start, unsigned long end);
++#ifdef CONFIG_X86_64
++# define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
++#endif
++extern void acpi_fake_nodes(const struct bootnode *fake_nodes,
++ int num_nodes);
++#else
++static inline void acpi_fake_nodes(const struct bootnode *fake_nodes,
++ int num_nodes)
++{
++}
+ #endif
++
++#define acpi_unlazy_tlb(x) leave_mm(x)
++
++#endif /*__X86_ASM_ACPI_H*/
+diff --git a/include/asm-x86/acpi_32.h b/include/asm-x86/acpi_32.h
+deleted file mode 100644
+index 723493e..0000000
+--- a/include/asm-x86/acpi_32.h
++++ /dev/null
+@@ -1,143 +0,0 @@
+-/*
+- * asm-i386/acpi.h
+- *
+- * Copyright (C) 2001 Paul Diefenbaugh <paul.s.diefenbaugh at intel.com>
+- * Copyright (C) 2001 Patrick Mochel <mochel at osdl.org>
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- */
+-
+-#ifndef _ASM_ACPI_H
+-#define _ASM_ACPI_H
+-
+-#ifdef __KERNEL__
+-
+-#include <acpi/pdc_intel.h>
+-
+-#include <asm/system.h> /* defines cmpxchg */
+-
+-#define COMPILER_DEPENDENT_INT64 long long
+-#define COMPILER_DEPENDENT_UINT64 unsigned long long
+-
+-/*
+- * Calling conventions:
+- *
+- * ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
+- * ACPI_EXTERNAL_XFACE - External ACPI interfaces
+- * ACPI_INTERNAL_XFACE - Internal ACPI interfaces
+- * ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
+- */
+-#define ACPI_SYSTEM_XFACE
+-#define ACPI_EXTERNAL_XFACE
+-#define ACPI_INTERNAL_XFACE
+-#define ACPI_INTERNAL_VAR_XFACE
+-
+-/* Asm macros */
+-
+-#define ACPI_ASM_MACROS
+-#define BREAKPOINT3
+-#define ACPI_DISABLE_IRQS() local_irq_disable()
+-#define ACPI_ENABLE_IRQS() local_irq_enable()
+-#define ACPI_FLUSH_CPU_CACHE() wbinvd()
+-
+-int __acpi_acquire_global_lock(unsigned int *lock);
+-int __acpi_release_global_lock(unsigned int *lock);
+-
+-#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
+- ((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
+-
+-#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
+- ((Acq) = __acpi_release_global_lock(&facs->global_lock))
+-
+-/*
+- * Math helper asm macros
+- */
+-#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
+- asm("divl %2;" \
+- :"=a"(q32), "=d"(r32) \
+- :"r"(d32), \
+- "0"(n_lo), "1"(n_hi))
+-
+-
+-#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
+- asm("shrl $1,%2;" \
+- "rcrl $1,%3;" \
+- :"=r"(n_hi), "=r"(n_lo) \
+- :"0"(n_hi), "1"(n_lo))
+-
+-extern void early_quirks(void);
+-
+-#ifdef CONFIG_ACPI
+-extern int acpi_lapic;
+-extern int acpi_ioapic;
+-extern int acpi_noirq;
+-extern int acpi_strict;
+-extern int acpi_disabled;
+-extern int acpi_ht;
+-extern int acpi_pci_disabled;
+-static inline void disable_acpi(void)
+-{
+- acpi_disabled = 1;
+- acpi_ht = 0;
+- acpi_pci_disabled = 1;
+- acpi_noirq = 1;
+-}
+-
+-/* Fixmap pages to reserve for ACPI boot-time tables (see fixmap.h) */
+-#define FIX_ACPI_PAGES 4
+-
+-extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq);
+-
+-#ifdef CONFIG_X86_IO_APIC
+-extern int acpi_skip_timer_override;
+-extern int acpi_use_timer_override;
+-#endif
+-
+-static inline void acpi_noirq_set(void) { acpi_noirq = 1; }
+-static inline void acpi_disable_pci(void)
+-{
+- acpi_pci_disabled = 1;
+- acpi_noirq_set();
+-}
+-extern int acpi_irq_balance_set(char *str);
+-
+-/* routines for saving/restoring kernel state */
+-extern int acpi_save_state_mem(void);
+-extern void acpi_restore_state_mem(void);
+-
+-extern unsigned long acpi_wakeup_address;
+-
+-/* early initialization routine */
+-extern void acpi_reserve_bootmem(void);
+-
+-#else /* !CONFIG_ACPI */
+-
+-#define acpi_lapic 0
+-#define acpi_ioapic 0
+-static inline void acpi_noirq_set(void) { }
+-static inline void acpi_disable_pci(void) { }
+-static inline void disable_acpi(void) { }
+-
+-#endif /* !CONFIG_ACPI */
+-
+-#define ARCH_HAS_POWER_INIT 1
+-
+-#endif /*__KERNEL__*/
+-
+-#endif /*_ASM_ACPI_H*/
+diff --git a/include/asm-x86/acpi_64.h b/include/asm-x86/acpi_64.h
+deleted file mode 100644
+index 9817335..0000000
+--- a/include/asm-x86/acpi_64.h
++++ /dev/null
+@@ -1,153 +0,0 @@
+-/*
+- * asm-x86_64/acpi.h
+- *
+- * Copyright (C) 2001 Paul Diefenbaugh <paul.s.diefenbaugh at intel.com>
+- * Copyright (C) 2001 Patrick Mochel <mochel at osdl.org>
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+- *
+- * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- */
+-
+-#ifndef _ASM_ACPI_H
+-#define _ASM_ACPI_H
+-
+-#ifdef __KERNEL__
+-
+-#include <acpi/pdc_intel.h>
+-#include <asm/numa.h>
+-
+-#define COMPILER_DEPENDENT_INT64 long long
+-#define COMPILER_DEPENDENT_UINT64 unsigned long long
+-
+-/*
+- * Calling conventions:
+- *
+- * ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
+- * ACPI_EXTERNAL_XFACE - External ACPI interfaces
+- * ACPI_INTERNAL_XFACE - Internal ACPI interfaces
+- * ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
+- */
+-#define ACPI_SYSTEM_XFACE
+-#define ACPI_EXTERNAL_XFACE
+-#define ACPI_INTERNAL_XFACE
+-#define ACPI_INTERNAL_VAR_XFACE
+-
+-/* Asm macros */
+-
+-#define ACPI_ASM_MACROS
+-#define BREAKPOINT3
+-#define ACPI_DISABLE_IRQS() local_irq_disable()
+-#define ACPI_ENABLE_IRQS() local_irq_enable()
+-#define ACPI_FLUSH_CPU_CACHE() wbinvd()
+-
+-int __acpi_acquire_global_lock(unsigned int *lock);
+-int __acpi_release_global_lock(unsigned int *lock);
+-
+-#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
+- ((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
+-
+-#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
+- ((Acq) = __acpi_release_global_lock(&facs->global_lock))
+-
+-/*
+- * Math helper asm macros
+- */
+-#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
+- asm("divl %2;" \
+- :"=a"(q32), "=d"(r32) \
+- :"r"(d32), \
+- "0"(n_lo), "1"(n_hi))
+-
+-
+-#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
+- asm("shrl $1,%2;" \
+- "rcrl $1,%3;" \
+- :"=r"(n_hi), "=r"(n_lo) \
+- :"0"(n_hi), "1"(n_lo))
+-
+-#ifdef CONFIG_ACPI
+-extern int acpi_lapic;
+-extern int acpi_ioapic;
+-extern int acpi_noirq;
+-extern int acpi_strict;
+-extern int acpi_disabled;
+-extern int acpi_pci_disabled;
+-extern int acpi_ht;
+-static inline void disable_acpi(void)
+-{
+- acpi_disabled = 1;
+- acpi_ht = 0;
+- acpi_pci_disabled = 1;
+- acpi_noirq = 1;
+-}
+-
+-/* Fixmap pages to reserve for ACPI boot-time tables (see fixmap.h) */
+-#define FIX_ACPI_PAGES 4
+-
+-extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq);
+-static inline void acpi_noirq_set(void) { acpi_noirq = 1; }
+-static inline void acpi_disable_pci(void)
+-{
+- acpi_pci_disabled = 1;
+- acpi_noirq_set();
+-}
+-extern int acpi_irq_balance_set(char *str);
+-
+-/* routines for saving/restoring kernel state */
+-extern int acpi_save_state_mem(void);
+-extern void acpi_restore_state_mem(void);
+-
+-extern unsigned long acpi_wakeup_address;
+-
+-/* early initialization routine */
+-extern void acpi_reserve_bootmem(void);
+-
+-#else /* !CONFIG_ACPI */
+-
+-#define acpi_lapic 0
+-#define acpi_ioapic 0
+-static inline void acpi_noirq_set(void) { }
+-static inline void acpi_disable_pci(void) { }
+-
+-#endif /* !CONFIG_ACPI */
+-
+-extern int acpi_numa;
+-extern int acpi_scan_nodes(unsigned long start, unsigned long end);
+-#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
+-
+-extern int acpi_disabled;
+-extern int acpi_pci_disabled;
+-
+-#define ARCH_HAS_POWER_INIT 1
+-
+-extern int acpi_skip_timer_override;
+-extern int acpi_use_timer_override;
+-
+-#ifdef CONFIG_ACPI_NUMA
+-extern void __init acpi_fake_nodes(const struct bootnode *fake_nodes,
+- int num_nodes);
+-#else
+-static inline void acpi_fake_nodes(const struct bootnode *fake_nodes,
+- int num_nodes)
+-{
+-}
+-#endif
+-
+-#endif /*__KERNEL__*/
+-
+-#endif /*_ASM_ACPI_H*/
+diff --git a/include/asm-x86/agp.h b/include/asm-x86/agp.h
+index 62df2a9..e4004a9 100644
+--- a/include/asm-x86/agp.h
++++ b/include/asm-x86/agp.h
+@@ -12,13 +12,8 @@
+ * page. This avoids data corruption on some CPUs.
+ */
+
+-/*
+- * Caller's responsibility to call global_flush_tlb() for performance
+- * reasons
+- */
+-#define map_page_into_agp(page) change_page_attr(page, 1, PAGE_KERNEL_NOCACHE)
+-#define unmap_page_from_agp(page) change_page_attr(page, 1, PAGE_KERNEL)
+-#define flush_agp_mappings() global_flush_tlb()
++#define map_page_into_agp(page) set_pages_uc(page, 1)
++#define unmap_page_from_agp(page) set_pages_wb(page, 1)
+
+ /*
+ * Could use CLFLUSH here if the cpu supports it. But then it would
+diff --git a/include/asm-x86/alternative.h b/include/asm-x86/alternative.h
+index 9eef6a3..d8bacf3 100644
+--- a/include/asm-x86/alternative.h
++++ b/include/asm-x86/alternative.h
+@@ -1,5 +1,161 @@
+-#ifdef CONFIG_X86_32
+-# include "alternative_32.h"
++#ifndef _ASM_X86_ALTERNATIVE_H
++#define _ASM_X86_ALTERNATIVE_H
++
++#include <linux/types.h>
++#include <linux/stddef.h>
++#include <asm/asm.h>
++
++/*
++ * Alternative inline assembly for SMP.
+ *
-+ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
-+ * but it doesn't work on all toolchains, so we just do it by hand
++ * The LOCK_PREFIX macro defined here replaces the LOCK and
++ * LOCK_PREFIX macros used everywhere in the source tree.
++ *
++ * SMP alternatives use the same data structures as the other
++ * alternatives and the X86_FEATURE_UP flag to indicate the case of a
++ * UP system running a SMP kernel. The existing apply_alternatives()
++ * works fine for patching a SMP kernel for UP.
++ *
++ * The SMP alternative tables can be kept after boot and contain both
++ * UP and SMP versions of the instructions to allow switching back to
++ * SMP at runtime, when hotplugging in a new CPU, which is especially
++ * useful in virtualized environments.
++ *
++ * The very common lock prefix is handled as special case in a
++ * separate table which is a pure address list without replacement ptr
++ * and size information. That keeps the table sizes small.
+ */
-+#ifndef cond_syscall
-+#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
++
++#ifdef CONFIG_SMP
++#define LOCK_PREFIX \
++ ".section .smp_locks,\"a\"\n" \
++ _ASM_ALIGN "\n" \
++ _ASM_PTR "661f\n" /* address */ \
++ ".previous\n" \
++ "661:\n\tlock; "
++
++#else /* ! CONFIG_SMP */
++#define LOCK_PREFIX ""
++#endif
++
++/* This must be included *after* the definition of LOCK_PREFIX */
++#include <asm/cpufeature.h>
++
++struct alt_instr {
++ u8 *instr; /* original instruction */
++ u8 *replacement;
++ u8 cpuid; /* cpuid bit set for replacement */
++ u8 instrlen; /* length of original instruction */
++ u8 replacementlen; /* length of new instruction, <= instrlen */
++ u8 pad1;
++#ifdef CONFIG_X86_64
++ u32 pad2;
+#endif
++};
+
-+#endif /* __KERNEL__ */
-+#endif /* __ASM_SH_UNISTD_H */
-diff --git a/include/asm-sh/unistd_64.h b/include/asm-sh/unistd_64.h
++extern void alternative_instructions(void);
++extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
++
++struct module;
++
++#ifdef CONFIG_SMP
++extern void alternatives_smp_module_add(struct module *mod, char *name,
++ void *locks, void *locks_end,
++ void *text, void *text_end);
++extern void alternatives_smp_module_del(struct module *mod);
++extern void alternatives_smp_switch(int smp);
++#else
++static inline void alternatives_smp_module_add(struct module *mod, char *name,
++ void *locks, void *locks_end,
++ void *text, void *text_end) {}
++static inline void alternatives_smp_module_del(struct module *mod) {}
++static inline void alternatives_smp_switch(int smp) {}
++#endif /* CONFIG_SMP */
++
++/*
++ * Alternative instructions for different CPU types or capabilities.
++ *
++ * This allows to use optimized instructions even on generic binary
++ * kernels.
++ *
++ * length of oldinstr must be longer or equal the length of newinstr
++ * It can be padded with nops as needed.
++ *
++ * For non barrier like inlines please define new variants
++ * without volatile and memory clobber.
++ */
++#define alternative(oldinstr, newinstr, feature) \
++ asm volatile ("661:\n\t" oldinstr "\n662:\n" \
++ ".section .altinstructions,\"a\"\n" \
++ _ASM_ALIGN "\n" \
++ _ASM_PTR "661b\n" /* label */ \
++ _ASM_PTR "663f\n" /* new instruction */ \
++ " .byte %c0\n" /* feature bit */ \
++ " .byte 662b-661b\n" /* sourcelen */ \
++ " .byte 664f-663f\n" /* replacementlen */ \
++ ".previous\n" \
++ ".section .altinstr_replacement,\"ax\"\n" \
++ "663:\n\t" newinstr "\n664:\n" /* replacement */ \
++ ".previous" :: "i" (feature) : "memory")
++
++/*
++ * Alternative inline assembly with input.
++ *
++ * Pecularities:
++ * No memory clobber here.
++ * Argument numbers start with 1.
++ * Best is to use constraints that are fixed size (like (%1) ... "r")
++ * If you use variable sized constraints like "m" or "g" in the
++ * replacement make sure to pad to the worst case length.
++ */
++#define alternative_input(oldinstr, newinstr, feature, input...) \
++ asm volatile ("661:\n\t" oldinstr "\n662:\n" \
++ ".section .altinstructions,\"a\"\n" \
++ _ASM_ALIGN "\n" \
++ _ASM_PTR "661b\n" /* label */ \
++ _ASM_PTR "663f\n" /* new instruction */ \
++ " .byte %c0\n" /* feature bit */ \
++ " .byte 662b-661b\n" /* sourcelen */ \
++ " .byte 664f-663f\n" /* replacementlen */ \
++ ".previous\n" \
++ ".section .altinstr_replacement,\"ax\"\n" \
++ "663:\n\t" newinstr "\n664:\n" /* replacement */ \
++ ".previous" :: "i" (feature), ##input)
++
++/* Like alternative_input, but with a single output argument */
++#define alternative_io(oldinstr, newinstr, feature, output, input...) \
++ asm volatile ("661:\n\t" oldinstr "\n662:\n" \
++ ".section .altinstructions,\"a\"\n" \
++ _ASM_ALIGN "\n" \
++ _ASM_PTR "661b\n" /* label */ \
++ _ASM_PTR "663f\n" /* new instruction */ \
++ " .byte %c[feat]\n" /* feature bit */ \
++ " .byte 662b-661b\n" /* sourcelen */ \
++ " .byte 664f-663f\n" /* replacementlen */ \
++ ".previous\n" \
++ ".section .altinstr_replacement,\"ax\"\n" \
++ "663:\n\t" newinstr "\n664:\n" /* replacement */ \
++ ".previous" : output : [feat] "i" (feature), ##input)
++
++/*
++ * use this macro(s) if you need more than one output parameter
++ * in alternative_io
++ */
++#define ASM_OUTPUT2(a, b) a, b
++
++struct paravirt_patch_site;
++#ifdef CONFIG_PARAVIRT
++void apply_paravirt(struct paravirt_patch_site *start,
++ struct paravirt_patch_site *end);
+ #else
+-# include "alternative_64.h"
++static inline void
++apply_paravirt(struct paravirt_patch_site *start,
++ struct paravirt_patch_site *end)
++{}
++#define __parainstructions NULL
++#define __parainstructions_end NULL
+ #endif
++
++extern void text_poke(void *addr, unsigned char *opcode, int len);
++
++#endif /* _ASM_X86_ALTERNATIVE_H */
+diff --git a/include/asm-x86/alternative_32.h b/include/asm-x86/alternative_32.h
+deleted file mode 100644
+index bda6c81..0000000
+--- a/include/asm-x86/alternative_32.h
++++ /dev/null
+@@ -1,154 +0,0 @@
+-#ifndef _I386_ALTERNATIVE_H
+-#define _I386_ALTERNATIVE_H
+-
+-#include <asm/types.h>
+-#include <linux/stddef.h>
+-#include <linux/types.h>
+-
+-struct alt_instr {
+- u8 *instr; /* original instruction */
+- u8 *replacement;
+- u8 cpuid; /* cpuid bit set for replacement */
+- u8 instrlen; /* length of original instruction */
+- u8 replacementlen; /* length of new instruction, <= instrlen */
+- u8 pad;
+-};
+-
+-extern void alternative_instructions(void);
+-extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
+-
+-struct module;
+-#ifdef CONFIG_SMP
+-extern void alternatives_smp_module_add(struct module *mod, char *name,
+- void *locks, void *locks_end,
+- void *text, void *text_end);
+-extern void alternatives_smp_module_del(struct module *mod);
+-extern void alternatives_smp_switch(int smp);
+-#else
+-static inline void alternatives_smp_module_add(struct module *mod, char *name,
+- void *locks, void *locks_end,
+- void *text, void *text_end) {}
+-static inline void alternatives_smp_module_del(struct module *mod) {}
+-static inline void alternatives_smp_switch(int smp) {}
+-#endif /* CONFIG_SMP */
+-
+-/*
+- * Alternative instructions for different CPU types or capabilities.
+- *
+- * This allows to use optimized instructions even on generic binary
+- * kernels.
+- *
+- * length of oldinstr must be longer or equal the length of newinstr
+- * It can be padded with nops as needed.
+- *
+- * For non barrier like inlines please define new variants
+- * without volatile and memory clobber.
+- */
+-#define alternative(oldinstr, newinstr, feature) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 4\n" \
+- " .long 661b\n" /* label */ \
+- " .long 663f\n" /* new instruction */ \
+- " .byte %c0\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */\
+- ".previous" :: "i" (feature) : "memory")
+-
+-/*
+- * Alternative inline assembly with input.
+- *
+- * Pecularities:
+- * No memory clobber here.
+- * Argument numbers start with 1.
+- * Best is to use constraints that are fixed size (like (%1) ... "r")
+- * If you use variable sized constraints like "m" or "g" in the
+- * replacement maake sure to pad to the worst case length.
+- */
+-#define alternative_input(oldinstr, newinstr, feature, input...) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 4\n" \
+- " .long 661b\n" /* label */ \
+- " .long 663f\n" /* new instruction */ \
+- " .byte %c0\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */\
+- ".previous" :: "i" (feature), ##input)
+-
+-/* Like alternative_input, but with a single output argument */
+-#define alternative_io(oldinstr, newinstr, feature, output, input...) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 4\n" \
+- " .long 661b\n" /* label */ \
+- " .long 663f\n" /* new instruction */ \
+- " .byte %c[feat]\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */ \
+- ".previous" : output : [feat] "i" (feature), ##input)
+-
+-/*
+- * use this macro(s) if you need more than one output parameter
+- * in alternative_io
+- */
+-#define ASM_OUTPUT2(a, b) a, b
+-
+-/*
+- * Alternative inline assembly for SMP.
+- *
+- * The LOCK_PREFIX macro defined here replaces the LOCK and
+- * LOCK_PREFIX macros used everywhere in the source tree.
+- *
+- * SMP alternatives use the same data structures as the other
+- * alternatives and the X86_FEATURE_UP flag to indicate the case of a
+- * UP system running a SMP kernel. The existing apply_alternatives()
+- * works fine for patching a SMP kernel for UP.
+- *
+- * The SMP alternative tables can be kept after boot and contain both
+- * UP and SMP versions of the instructions to allow switching back to
+- * SMP at runtime, when hotplugging in a new CPU, which is especially
+- * useful in virtualized environments.
+- *
+- * The very common lock prefix is handled as special case in a
+- * separate table which is a pure address list without replacement ptr
+- * and size information. That keeps the table sizes small.
+- */
+-
+-#ifdef CONFIG_SMP
+-#define LOCK_PREFIX \
+- ".section .smp_locks,\"a\"\n" \
+- " .align 4\n" \
+- " .long 661f\n" /* address */ \
+- ".previous\n" \
+- "661:\n\tlock; "
+-
+-#else /* ! CONFIG_SMP */
+-#define LOCK_PREFIX ""
+-#endif
+-
+-struct paravirt_patch_site;
+-#ifdef CONFIG_PARAVIRT
+-void apply_paravirt(struct paravirt_patch_site *start,
+- struct paravirt_patch_site *end);
+-#else
+-static inline void
+-apply_paravirt(struct paravirt_patch_site *start,
+- struct paravirt_patch_site *end)
+-{}
+-#define __parainstructions NULL
+-#define __parainstructions_end NULL
+-#endif
+-
+-extern void text_poke(void *addr, unsigned char *opcode, int len);
+-
+-#endif /* _I386_ALTERNATIVE_H */
+diff --git a/include/asm-x86/alternative_64.h b/include/asm-x86/alternative_64.h
+deleted file mode 100644
+index ab161e8..0000000
+--- a/include/asm-x86/alternative_64.h
++++ /dev/null
+@@ -1,159 +0,0 @@
+-#ifndef _X86_64_ALTERNATIVE_H
+-#define _X86_64_ALTERNATIVE_H
+-
+-#ifdef __KERNEL__
+-
+-#include <linux/types.h>
+-#include <linux/stddef.h>
+-
+-/*
+- * Alternative inline assembly for SMP.
+- *
+- * The LOCK_PREFIX macro defined here replaces the LOCK and
+- * LOCK_PREFIX macros used everywhere in the source tree.
+- *
+- * SMP alternatives use the same data structures as the other
+- * alternatives and the X86_FEATURE_UP flag to indicate the case of a
+- * UP system running a SMP kernel. The existing apply_alternatives()
+- * works fine for patching a SMP kernel for UP.
+- *
+- * The SMP alternative tables can be kept after boot and contain both
+- * UP and SMP versions of the instructions to allow switching back to
+- * SMP at runtime, when hotplugging in a new CPU, which is especially
+- * useful in virtualized environments.
+- *
+- * The very common lock prefix is handled as special case in a
+- * separate table which is a pure address list without replacement ptr
+- * and size information. That keeps the table sizes small.
+- */
+-
+-#ifdef CONFIG_SMP
+-#define LOCK_PREFIX \
+- ".section .smp_locks,\"a\"\n" \
+- " .align 8\n" \
+- " .quad 661f\n" /* address */ \
+- ".previous\n" \
+- "661:\n\tlock; "
+-
+-#else /* ! CONFIG_SMP */
+-#define LOCK_PREFIX ""
+-#endif
+-
+-/* This must be included *after* the definition of LOCK_PREFIX */
+-#include <asm/cpufeature.h>
+-
+-struct alt_instr {
+- u8 *instr; /* original instruction */
+- u8 *replacement;
+- u8 cpuid; /* cpuid bit set for replacement */
+- u8 instrlen; /* length of original instruction */
+- u8 replacementlen; /* length of new instruction, <= instrlen */
+- u8 pad[5];
+-};
+-
+-extern void alternative_instructions(void);
+-extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
+-
+-struct module;
+-
+-#ifdef CONFIG_SMP
+-extern void alternatives_smp_module_add(struct module *mod, char *name,
+- void *locks, void *locks_end,
+- void *text, void *text_end);
+-extern void alternatives_smp_module_del(struct module *mod);
+-extern void alternatives_smp_switch(int smp);
+-#else
+-static inline void alternatives_smp_module_add(struct module *mod, char *name,
+- void *locks, void *locks_end,
+- void *text, void *text_end) {}
+-static inline void alternatives_smp_module_del(struct module *mod) {}
+-static inline void alternatives_smp_switch(int smp) {}
+-#endif
+-
+-#endif
+-
+-/*
+- * Alternative instructions for different CPU types or capabilities.
+- *
+- * This allows to use optimized instructions even on generic binary
+- * kernels.
+- *
+- * length of oldinstr must be longer or equal the length of newinstr
+- * It can be padded with nops as needed.
+- *
+- * For non barrier like inlines please define new variants
+- * without volatile and memory clobber.
+- */
+-#define alternative(oldinstr, newinstr, feature) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 8\n" \
+- " .quad 661b\n" /* label */ \
+- " .quad 663f\n" /* new instruction */ \
+- " .byte %c0\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */ \
+- ".previous" :: "i" (feature) : "memory")
+-
+-/*
+- * Alternative inline assembly with input.
+- *
+- * Pecularities:
+- * No memory clobber here.
+- * Argument numbers start with 1.
+- * Best is to use constraints that are fixed size (like (%1) ... "r")
+- * If you use variable sized constraints like "m" or "g" in the
+- * replacement make sure to pad to the worst case length.
+- */
+-#define alternative_input(oldinstr, newinstr, feature, input...) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 8\n" \
+- " .quad 661b\n" /* label */ \
+- " .quad 663f\n" /* new instruction */ \
+- " .byte %c0\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */ \
+- ".previous" :: "i" (feature), ##input)
+-
+-/* Like alternative_input, but with a single output argument */
+-#define alternative_io(oldinstr, newinstr, feature, output, input...) \
+- asm volatile ("661:\n\t" oldinstr "\n662:\n" \
+- ".section .altinstructions,\"a\"\n" \
+- " .align 8\n" \
+- " .quad 661b\n" /* label */ \
+- " .quad 663f\n" /* new instruction */ \
+- " .byte %c[feat]\n" /* feature bit */ \
+- " .byte 662b-661b\n" /* sourcelen */ \
+- " .byte 664f-663f\n" /* replacementlen */ \
+- ".previous\n" \
+- ".section .altinstr_replacement,\"ax\"\n" \
+- "663:\n\t" newinstr "\n664:\n" /* replacement */ \
+- ".previous" : output : [feat] "i" (feature), ##input)
+-
+-/*
+- * use this macro(s) if you need more than one output parameter
+- * in alternative_io
+- */
+-#define ASM_OUTPUT2(a, b) a, b
+-
+-struct paravirt_patch;
+-#ifdef CONFIG_PARAVIRT
+-void apply_paravirt(struct paravirt_patch *start, struct paravirt_patch *end);
+-#else
+-static inline void
+-apply_paravirt(struct paravirt_patch *start, struct paravirt_patch *end)
+-{}
+-#define __parainstructions NULL
+-#define __parainstructions_end NULL
+-#endif
+-
+-extern void text_poke(void *addr, unsigned char *opcode, int len);
+-
+-#endif /* _X86_64_ALTERNATIVE_H */
+diff --git a/include/asm-x86/apic.h b/include/asm-x86/apic.h
+index 9fbcc0b..bcfc07f 100644
+--- a/include/asm-x86/apic.h
++++ b/include/asm-x86/apic.h
+@@ -1,5 +1,140 @@
+-#ifdef CONFIG_X86_32
+-# include "apic_32.h"
++#ifndef _ASM_X86_APIC_H
++#define _ASM_X86_APIC_H
++
++#include <linux/pm.h>
++#include <linux/delay.h>
++#include <asm/fixmap.h>
++#include <asm/apicdef.h>
++#include <asm/processor.h>
++#include <asm/system.h>
++
++#define ARCH_APICTIMER_STOPS_ON_C3 1
++
++#define Dprintk(x...)
++
++/*
++ * Debugging macros
++ */
++#define APIC_QUIET 0
++#define APIC_VERBOSE 1
++#define APIC_DEBUG 2
++
++/*
++ * Define the default level of output to be very little
++ * This can be turned up by using apic=verbose for more
++ * information and apic=debug for _lots_ of information.
++ * apic_verbosity is defined in apic.c
++ */
++#define apic_printk(v, s, a...) do { \
++ if ((v) <= apic_verbosity) \
++ printk(s, ##a); \
++ } while (0)
++
++
++extern void generic_apic_probe(void);
++
++#ifdef CONFIG_X86_LOCAL_APIC
++
++extern int apic_verbosity;
++extern int timer_over_8254;
++extern int local_apic_timer_c2_ok;
++extern int local_apic_timer_disabled;
++
++extern int apic_runs_main_timer;
++extern int ioapic_force;
++extern int disable_apic;
++extern int disable_apic_timer;
++extern unsigned boot_cpu_id;
++
++/*
++ * Basic functions accessing APICs.
++ */
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
+ #else
+-# include "apic_64.h"
++#define apic_write native_apic_write
++#define apic_write_atomic native_apic_write_atomic
++#define apic_read native_apic_read
++#define setup_boot_clock setup_boot_APIC_clock
++#define setup_secondary_clock setup_secondary_APIC_clock
+ #endif
++
++static inline void native_apic_write(unsigned long reg, u32 v)
++{
++ *((volatile u32 *)(APIC_BASE + reg)) = v;
++}
++
++static inline void native_apic_write_atomic(unsigned long reg, u32 v)
++{
++ (void) xchg((u32*)(APIC_BASE + reg), v);
++}
++
++static inline u32 native_apic_read(unsigned long reg)
++{
++ return *((volatile u32 *)(APIC_BASE + reg));
++}
++
++extern void apic_wait_icr_idle(void);
++extern u32 safe_apic_wait_icr_idle(void);
++extern int get_physical_broadcast(void);
++
++#ifdef CONFIG_X86_GOOD_APIC
++# define FORCE_READ_AROUND_WRITE 0
++# define apic_read_around(x)
++# define apic_write_around(x, y) apic_write((x), (y))
++#else
++# define FORCE_READ_AROUND_WRITE 1
++# define apic_read_around(x) apic_read(x)
++# define apic_write_around(x, y) apic_write_atomic((x), (y))
++#endif
++
++static inline void ack_APIC_irq(void)
++{
++ /*
++ * ack_APIC_irq() actually gets compiled as a single instruction:
++ * - a single rmw on Pentium/82489DX
++ * - a single write on P6+ cores (CONFIG_X86_GOOD_APIC)
++ * ... yummie.
++ */
++
++ /* Docs say use 0 for future compatibility */
++ apic_write_around(APIC_EOI, 0);
++}
++
++extern int lapic_get_maxlvt(void);
++extern void clear_local_APIC(void);
++extern void connect_bsp_APIC(void);
++extern void disconnect_bsp_APIC(int virt_wire_setup);
++extern void disable_local_APIC(void);
++extern void lapic_shutdown(void);
++extern int verify_local_APIC(void);
++extern void cache_APIC_registers(void);
++extern void sync_Arb_IDs(void);
++extern void init_bsp_APIC(void);
++extern void setup_local_APIC(void);
++extern void end_local_APIC_setup(void);
++extern void init_apic_mappings(void);
++extern void setup_boot_APIC_clock(void);
++extern void setup_secondary_APIC_clock(void);
++extern int APIC_init_uniprocessor(void);
++extern void enable_NMI_through_LVT0(void);
++
++/*
++ * On 32bit this is mach-xxx local
++ */
++#ifdef CONFIG_X86_64
++extern void setup_apic_routing(void);
++#endif
++
++extern u8 setup_APIC_eilvt_mce(u8 vector, u8 msg_type, u8 mask);
++extern u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask);
++
++extern int apic_is_clustered_box(void);
++
++#else /* !CONFIG_X86_LOCAL_APIC */
++static inline void lapic_shutdown(void) { }
++#define local_apic_timer_c2_ok 1
++
++#endif /* !CONFIG_X86_LOCAL_APIC */
++
++#endif /* __ASM_APIC_H */
+diff --git a/include/asm-x86/apic_32.h b/include/asm-x86/apic_32.h
+deleted file mode 100644
+index be158b2..0000000
+--- a/include/asm-x86/apic_32.h
++++ /dev/null
+@@ -1,127 +0,0 @@
+-#ifndef __ASM_APIC_H
+-#define __ASM_APIC_H
+-
+-#include <linux/pm.h>
+-#include <linux/delay.h>
+-#include <asm/fixmap.h>
+-#include <asm/apicdef.h>
+-#include <asm/processor.h>
+-#include <asm/system.h>
+-
+-#define Dprintk(x...)
+-
+-/*
+- * Debugging macros
+- */
+-#define APIC_QUIET 0
+-#define APIC_VERBOSE 1
+-#define APIC_DEBUG 2
+-
+-extern int apic_verbosity;
+-
+-/*
+- * Define the default level of output to be very little
+- * This can be turned up by using apic=verbose for more
+- * information and apic=debug for _lots_ of information.
+- * apic_verbosity is defined in apic.c
+- */
+-#define apic_printk(v, s, a...) do { \
+- if ((v) <= apic_verbosity) \
+- printk(s, ##a); \
+- } while (0)
+-
+-
+-extern void generic_apic_probe(void);
+-
+-#ifdef CONFIG_X86_LOCAL_APIC
+-
+-/*
+- * Basic functions accessing APICs.
+- */
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define apic_write native_apic_write
+-#define apic_write_atomic native_apic_write_atomic
+-#define apic_read native_apic_read
+-#define setup_boot_clock setup_boot_APIC_clock
+-#define setup_secondary_clock setup_secondary_APIC_clock
+-#endif
+-
+-static __inline fastcall void native_apic_write(unsigned long reg,
+- unsigned long v)
+-{
+- *((volatile unsigned long *)(APIC_BASE+reg)) = v;
+-}
+-
+-static __inline fastcall void native_apic_write_atomic(unsigned long reg,
+- unsigned long v)
+-{
+- xchg((volatile unsigned long *)(APIC_BASE+reg), v);
+-}
+-
+-static __inline fastcall unsigned long native_apic_read(unsigned long reg)
+-{
+- return *((volatile unsigned long *)(APIC_BASE+reg));
+-}
+-
+-void apic_wait_icr_idle(void);
+-unsigned long safe_apic_wait_icr_idle(void);
+-int get_physical_broadcast(void);
+-
+-#ifdef CONFIG_X86_GOOD_APIC
+-# define FORCE_READ_AROUND_WRITE 0
+-# define apic_read_around(x)
+-# define apic_write_around(x,y) apic_write((x),(y))
+-#else
+-# define FORCE_READ_AROUND_WRITE 1
+-# define apic_read_around(x) apic_read(x)
+-# define apic_write_around(x,y) apic_write_atomic((x),(y))
+-#endif
+-
+-static inline void ack_APIC_irq(void)
+-{
+- /*
+- * ack_APIC_irq() actually gets compiled as a single instruction:
+- * - a single rmw on Pentium/82489DX
+- * - a single write on P6+ cores (CONFIG_X86_GOOD_APIC)
+- * ... yummie.
+- */
+-
+- /* Docs say use 0 for future compatibility */
+- apic_write_around(APIC_EOI, 0);
+-}
+-
+-extern int lapic_get_maxlvt(void);
+-extern void clear_local_APIC(void);
+-extern void connect_bsp_APIC (void);
+-extern void disconnect_bsp_APIC (int virt_wire_setup);
+-extern void disable_local_APIC (void);
+-extern void lapic_shutdown (void);
+-extern int verify_local_APIC (void);
+-extern void cache_APIC_registers (void);
+-extern void sync_Arb_IDs (void);
+-extern void init_bsp_APIC (void);
+-extern void setup_local_APIC (void);
+-extern void init_apic_mappings (void);
+-extern void smp_local_timer_interrupt (void);
+-extern void setup_boot_APIC_clock (void);
+-extern void setup_secondary_APIC_clock (void);
+-extern int APIC_init_uniprocessor (void);
+-
+-extern void enable_NMI_through_LVT0 (void * dummy);
+-
+-#define ARCH_APICTIMER_STOPS_ON_C3 1
+-
+-extern int timer_over_8254;
+-extern int local_apic_timer_c2_ok;
+-
+-extern int local_apic_timer_disabled;
+-
+-#else /* !CONFIG_X86_LOCAL_APIC */
+-static inline void lapic_shutdown(void) { }
+-#define local_apic_timer_c2_ok 1
+-
+-#endif /* !CONFIG_X86_LOCAL_APIC */
+-
+-#endif /* __ASM_APIC_H */
+diff --git a/include/asm-x86/apic_64.h b/include/asm-x86/apic_64.h
+deleted file mode 100644
+index 2747a11..0000000
+--- a/include/asm-x86/apic_64.h
++++ /dev/null
+@@ -1,102 +0,0 @@
+-#ifndef __ASM_APIC_H
+-#define __ASM_APIC_H
+-
+-#include <linux/pm.h>
+-#include <linux/delay.h>
+-#include <asm/fixmap.h>
+-#include <asm/apicdef.h>
+-#include <asm/system.h>
+-
+-#define Dprintk(x...)
+-
+-/*
+- * Debugging macros
+- */
+-#define APIC_QUIET 0
+-#define APIC_VERBOSE 1
+-#define APIC_DEBUG 2
+-
+-extern int apic_verbosity;
+-extern int apic_runs_main_timer;
+-extern int ioapic_force;
+-extern int disable_apic_timer;
+-
+-/*
+- * Define the default level of output to be very little
+- * This can be turned up by using apic=verbose for more
+- * information and apic=debug for _lots_ of information.
+- * apic_verbosity is defined in apic.c
+- */
+-#define apic_printk(v, s, a...) do { \
+- if ((v) <= apic_verbosity) \
+- printk(s, ##a); \
+- } while (0)
+-
+-struct pt_regs;
+-
+-/*
+- * Basic functions accessing APICs.
+- */
+-
+-static __inline void apic_write(unsigned long reg, unsigned int v)
+-{
+- *((volatile unsigned int *)(APIC_BASE+reg)) = v;
+-}
+-
+-static __inline unsigned int apic_read(unsigned long reg)
+-{
+- return *((volatile unsigned int *)(APIC_BASE+reg));
+-}
+-
+-extern void apic_wait_icr_idle(void);
+-extern unsigned int safe_apic_wait_icr_idle(void);
+-
+-static inline void ack_APIC_irq(void)
+-{
+- /*
+- * ack_APIC_irq() actually gets compiled as a single instruction:
+- * - a single rmw on Pentium/82489DX
+- * - a single write on P6+ cores (CONFIG_X86_GOOD_APIC)
+- * ... yummie.
+- */
+-
+- /* Docs say use 0 for future compatibility */
+- apic_write(APIC_EOI, 0);
+-}
+-
+-extern int get_maxlvt (void);
+-extern void clear_local_APIC (void);
+-extern void connect_bsp_APIC (void);
+-extern void disconnect_bsp_APIC (int virt_wire_setup);
+-extern void disable_local_APIC (void);
+-extern void lapic_shutdown (void);
+-extern int verify_local_APIC (void);
+-extern void cache_APIC_registers (void);
+-extern void sync_Arb_IDs (void);
+-extern void init_bsp_APIC (void);
+-extern void setup_local_APIC (void);
+-extern void init_apic_mappings (void);
+-extern void smp_local_timer_interrupt (void);
+-extern void setup_boot_APIC_clock (void);
+-extern void setup_secondary_APIC_clock (void);
+-extern int APIC_init_uniprocessor (void);
+-extern void setup_apic_routing(void);
+-
+-extern void setup_APIC_extended_lvt(unsigned char lvt_off, unsigned char vector,
+- unsigned char msg_type, unsigned char mask);
+-
+-extern int apic_is_clustered_box(void);
+-
+-#define K8_APIC_EXT_LVT_BASE 0x500
+-#define K8_APIC_EXT_INT_MSG_FIX 0x0
+-#define K8_APIC_EXT_INT_MSG_SMI 0x2
+-#define K8_APIC_EXT_INT_MSG_NMI 0x4
+-#define K8_APIC_EXT_INT_MSG_EXT 0x7
+-#define K8_APIC_EXT_LVT_ENTRY_THRESHOLD 0
+-
+-#define ARCH_APICTIMER_STOPS_ON_C3 1
+-
+-extern unsigned boot_cpu_id;
+-extern int local_apic_timer_c2_ok;
+-
+-#endif /* __ASM_APIC_H */
+diff --git a/include/asm-x86/apicdef.h b/include/asm-x86/apicdef.h
+index 4542c22..550af7a 100644
+--- a/include/asm-x86/apicdef.h
++++ b/include/asm-x86/apicdef.h
+@@ -1,5 +1,413 @@
++#ifndef _ASM_X86_APICDEF_H
++#define _ASM_X86_APICDEF_H
++
++/*
++ * Constants for various Intel APICs. (local APIC, IOAPIC, etc.)
++ *
++ * Alan Cox <Alan.Cox at linux.org>, 1995.
++ * Ingo Molnar <mingo at redhat.com>, 1999, 2000
++ */
++
++#define APIC_DEFAULT_PHYS_BASE 0xfee00000
++
++#define APIC_ID 0x20
++
++#ifdef CONFIG_X86_64
++# define APIC_ID_MASK (0xFFu<<24)
++# define GET_APIC_ID(x) (((x)>>24)&0xFFu)
++# define SET_APIC_ID(x) (((x)<<24))
++#endif
++
++#define APIC_LVR 0x30
++#define APIC_LVR_MASK 0xFF00FF
++#define GET_APIC_VERSION(x) ((x)&0xFFu)
++#define GET_APIC_MAXLVT(x) (((x)>>16)&0xFFu)
++#define APIC_INTEGRATED(x) ((x)&0xF0u)
++#define APIC_XAPIC(x) ((x) >= 0x14)
++#define APIC_TASKPRI 0x80
++#define APIC_TPRI_MASK 0xFFu
++#define APIC_ARBPRI 0x90
++#define APIC_ARBPRI_MASK 0xFFu
++#define APIC_PROCPRI 0xA0
++#define APIC_EOI 0xB0
++#define APIC_EIO_ACK 0x0
++#define APIC_RRR 0xC0
++#define APIC_LDR 0xD0
++#define APIC_LDR_MASK (0xFFu<<24)
++#define GET_APIC_LOGICAL_ID(x) (((x)>>24)&0xFFu)
++#define SET_APIC_LOGICAL_ID(x) (((x)<<24))
++#define APIC_ALL_CPUS 0xFFu
++#define APIC_DFR 0xE0
++#define APIC_DFR_CLUSTER 0x0FFFFFFFul
++#define APIC_DFR_FLAT 0xFFFFFFFFul
++#define APIC_SPIV 0xF0
++#define APIC_SPIV_FOCUS_DISABLED (1<<9)
++#define APIC_SPIV_APIC_ENABLED (1<<8)
++#define APIC_ISR 0x100
++#define APIC_ISR_NR 0x8 /* Number of 32 bit ISR registers. */
++#define APIC_TMR 0x180
++#define APIC_IRR 0x200
++#define APIC_ESR 0x280
++#define APIC_ESR_SEND_CS 0x00001
++#define APIC_ESR_RECV_CS 0x00002
++#define APIC_ESR_SEND_ACC 0x00004
++#define APIC_ESR_RECV_ACC 0x00008
++#define APIC_ESR_SENDILL 0x00020
++#define APIC_ESR_RECVILL 0x00040
++#define APIC_ESR_ILLREGA 0x00080
++#define APIC_ICR 0x300
++#define APIC_DEST_SELF 0x40000
++#define APIC_DEST_ALLINC 0x80000
++#define APIC_DEST_ALLBUT 0xC0000
++#define APIC_ICR_RR_MASK 0x30000
++#define APIC_ICR_RR_INVALID 0x00000
++#define APIC_ICR_RR_INPROG 0x10000
++#define APIC_ICR_RR_VALID 0x20000
++#define APIC_INT_LEVELTRIG 0x08000
++#define APIC_INT_ASSERT 0x04000
++#define APIC_ICR_BUSY 0x01000
++#define APIC_DEST_LOGICAL 0x00800
++#define APIC_DEST_PHYSICAL 0x00000
++#define APIC_DM_FIXED 0x00000
++#define APIC_DM_LOWEST 0x00100
++#define APIC_DM_SMI 0x00200
++#define APIC_DM_REMRD 0x00300
++#define APIC_DM_NMI 0x00400
++#define APIC_DM_INIT 0x00500
++#define APIC_DM_STARTUP 0x00600
++#define APIC_DM_EXTINT 0x00700
++#define APIC_VECTOR_MASK 0x000FF
++#define APIC_ICR2 0x310
++#define GET_APIC_DEST_FIELD(x) (((x)>>24)&0xFF)
++#define SET_APIC_DEST_FIELD(x) ((x)<<24)
++#define APIC_LVTT 0x320
++#define APIC_LVTTHMR 0x330
++#define APIC_LVTPC 0x340
++#define APIC_LVT0 0x350
++#define APIC_LVT_TIMER_BASE_MASK (0x3<<18)
++#define GET_APIC_TIMER_BASE(x) (((x)>>18)&0x3)
++#define SET_APIC_TIMER_BASE(x) (((x)<<18))
++#define APIC_TIMER_BASE_CLKIN 0x0
++#define APIC_TIMER_BASE_TMBASE 0x1
++#define APIC_TIMER_BASE_DIV 0x2
++#define APIC_LVT_TIMER_PERIODIC (1<<17)
++#define APIC_LVT_MASKED (1<<16)
++#define APIC_LVT_LEVEL_TRIGGER (1<<15)
++#define APIC_LVT_REMOTE_IRR (1<<14)
++#define APIC_INPUT_POLARITY (1<<13)
++#define APIC_SEND_PENDING (1<<12)
++#define APIC_MODE_MASK 0x700
++#define GET_APIC_DELIVERY_MODE(x) (((x)>>8)&0x7)
++#define SET_APIC_DELIVERY_MODE(x, y) (((x)&~0x700)|((y)<<8))
++#define APIC_MODE_FIXED 0x0
++#define APIC_MODE_NMI 0x4
++#define APIC_MODE_EXTINT 0x7
++#define APIC_LVT1 0x360
++#define APIC_LVTERR 0x370
++#define APIC_TMICT 0x380
++#define APIC_TMCCT 0x390
++#define APIC_TDCR 0x3E0
++#define APIC_TDR_DIV_TMBASE (1<<2)
++#define APIC_TDR_DIV_1 0xB
++#define APIC_TDR_DIV_2 0x0
++#define APIC_TDR_DIV_4 0x1
++#define APIC_TDR_DIV_8 0x2
++#define APIC_TDR_DIV_16 0x3
++#define APIC_TDR_DIV_32 0x8
++#define APIC_TDR_DIV_64 0x9
++#define APIC_TDR_DIV_128 0xA
++#define APIC_EILVT0 0x500
++#define APIC_EILVT_NR_AMD_K8 1 /* Number of extended interrupts */
++#define APIC_EILVT_NR_AMD_10H 4
++#define APIC_EILVT_LVTOFF(x) (((x)>>4)&0xF)
++#define APIC_EILVT_MSG_FIX 0x0
++#define APIC_EILVT_MSG_SMI 0x2
++#define APIC_EILVT_MSG_NMI 0x4
++#define APIC_EILVT_MSG_EXT 0x7
++#define APIC_EILVT_MASKED (1<<16)
++#define APIC_EILVT1 0x510
++#define APIC_EILVT2 0x520
++#define APIC_EILVT3 0x530
++
++#define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
++
+ #ifdef CONFIG_X86_32
+-# include "apicdef_32.h"
++# define MAX_IO_APICS 64
+ #else
+-# include "apicdef_64.h"
++# define MAX_IO_APICS 128
++# define MAX_LOCAL_APIC 256
++#endif
++
++/*
++ * All x86-64 systems are xAPIC compatible.
++ * In the following, "apicid" is a physical APIC ID.
++ */
++#define XAPIC_DEST_CPUS_SHIFT 4
++#define XAPIC_DEST_CPUS_MASK ((1u << XAPIC_DEST_CPUS_SHIFT) - 1)
++#define XAPIC_DEST_CLUSTER_MASK (XAPIC_DEST_CPUS_MASK << XAPIC_DEST_CPUS_SHIFT)
++#define APIC_CLUSTER(apicid) ((apicid) & XAPIC_DEST_CLUSTER_MASK)
++#define APIC_CLUSTERID(apicid) (APIC_CLUSTER(apicid) >> XAPIC_DEST_CPUS_SHIFT)
++#define APIC_CPUID(apicid) ((apicid) & XAPIC_DEST_CPUS_MASK)
++#define NUM_APIC_CLUSTERS ((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
++
++/*
++ * the local APIC register structure, memory mapped. Not terribly well
++ * tested, but we might eventually use this one in the future - the
++ * problem why we cannot use it right now is the P5 APIC, it has an
++ * errata which cannot take 8-bit reads and writes, only 32-bit ones ...
++ */
++#define u32 unsigned int
++
++struct local_apic {
++
++/*000*/ struct { u32 __reserved[4]; } __reserved_01;
++
++/*010*/ struct { u32 __reserved[4]; } __reserved_02;
++
++/*020*/ struct { /* APIC ID Register */
++ u32 __reserved_1 : 24,
++ phys_apic_id : 4,
++ __reserved_2 : 4;
++ u32 __reserved[3];
++ } id;
++
++/*030*/ const
++ struct { /* APIC Version Register */
++ u32 version : 8,
++ __reserved_1 : 8,
++ max_lvt : 8,
++ __reserved_2 : 8;
++ u32 __reserved[3];
++ } version;
++
++/*040*/ struct { u32 __reserved[4]; } __reserved_03;
++
++/*050*/ struct { u32 __reserved[4]; } __reserved_04;
++
++/*060*/ struct { u32 __reserved[4]; } __reserved_05;
++
++/*070*/ struct { u32 __reserved[4]; } __reserved_06;
++
++/*080*/ struct { /* Task Priority Register */
++ u32 priority : 8,
++ __reserved_1 : 24;
++ u32 __reserved_2[3];
++ } tpr;
++
++/*090*/ const
++ struct { /* Arbitration Priority Register */
++ u32 priority : 8,
++ __reserved_1 : 24;
++ u32 __reserved_2[3];
++ } apr;
++
++/*0A0*/ const
++ struct { /* Processor Priority Register */
++ u32 priority : 8,
++ __reserved_1 : 24;
++ u32 __reserved_2[3];
++ } ppr;
++
++/*0B0*/ struct { /* End Of Interrupt Register */
++ u32 eoi;
++ u32 __reserved[3];
++ } eoi;
++
++/*0C0*/ struct { u32 __reserved[4]; } __reserved_07;
++
++/*0D0*/ struct { /* Logical Destination Register */
++ u32 __reserved_1 : 24,
++ logical_dest : 8;
++ u32 __reserved_2[3];
++ } ldr;
++
++/*0E0*/ struct { /* Destination Format Register */
++ u32 __reserved_1 : 28,
++ model : 4;
++ u32 __reserved_2[3];
++ } dfr;
++
++/*0F0*/ struct { /* Spurious Interrupt Vector Register */
++ u32 spurious_vector : 8,
++ apic_enabled : 1,
++ focus_cpu : 1,
++ __reserved_2 : 22;
++ u32 __reserved_3[3];
++ } svr;
++
++/*100*/ struct { /* In Service Register */
++/*170*/ u32 bitfield;
++ u32 __reserved[3];
++ } isr [8];
++
++/*180*/ struct { /* Trigger Mode Register */
++/*1F0*/ u32 bitfield;
++ u32 __reserved[3];
++ } tmr [8];
++
++/*200*/ struct { /* Interrupt Request Register */
++/*270*/ u32 bitfield;
++ u32 __reserved[3];
++ } irr [8];
++
++/*280*/ union { /* Error Status Register */
++ struct {
++ u32 send_cs_error : 1,
++ receive_cs_error : 1,
++ send_accept_error : 1,
++ receive_accept_error : 1,
++ __reserved_1 : 1,
++ send_illegal_vector : 1,
++ receive_illegal_vector : 1,
++ illegal_register_address : 1,
++ __reserved_2 : 24;
++ u32 __reserved_3[3];
++ } error_bits;
++ struct {
++ u32 errors;
++ u32 __reserved_3[3];
++ } all_errors;
++ } esr;
++
++/*290*/ struct { u32 __reserved[4]; } __reserved_08;
++
++/*2A0*/ struct { u32 __reserved[4]; } __reserved_09;
++
++/*2B0*/ struct { u32 __reserved[4]; } __reserved_10;
++
++/*2C0*/ struct { u32 __reserved[4]; } __reserved_11;
++
++/*2D0*/ struct { u32 __reserved[4]; } __reserved_12;
++
++/*2E0*/ struct { u32 __reserved[4]; } __reserved_13;
++
++/*2F0*/ struct { u32 __reserved[4]; } __reserved_14;
++
++/*300*/ struct { /* Interrupt Command Register 1 */
++ u32 vector : 8,
++ delivery_mode : 3,
++ destination_mode : 1,
++ delivery_status : 1,
++ __reserved_1 : 1,
++ level : 1,
++ trigger : 1,
++ __reserved_2 : 2,
++ shorthand : 2,
++ __reserved_3 : 12;
++ u32 __reserved_4[3];
++ } icr1;
++
++/*310*/ struct { /* Interrupt Command Register 2 */
++ union {
++ u32 __reserved_1 : 24,
++ phys_dest : 4,
++ __reserved_2 : 4;
++ u32 __reserved_3 : 24,
++ logical_dest : 8;
++ } dest;
++ u32 __reserved_4[3];
++ } icr2;
++
++/*320*/ struct { /* LVT - Timer */
++ u32 vector : 8,
++ __reserved_1 : 4,
++ delivery_status : 1,
++ __reserved_2 : 3,
++ mask : 1,
++ timer_mode : 1,
++ __reserved_3 : 14;
++ u32 __reserved_4[3];
++ } lvt_timer;
++
++/*330*/ struct { /* LVT - Thermal Sensor */
++ u32 vector : 8,
++ delivery_mode : 3,
++ __reserved_1 : 1,
++ delivery_status : 1,
++ __reserved_2 : 3,
++ mask : 1,
++ __reserved_3 : 15;
++ u32 __reserved_4[3];
++ } lvt_thermal;
++
++/*340*/ struct { /* LVT - Performance Counter */
++ u32 vector : 8,
++ delivery_mode : 3,
++ __reserved_1 : 1,
++ delivery_status : 1,
++ __reserved_2 : 3,
++ mask : 1,
++ __reserved_3 : 15;
++ u32 __reserved_4[3];
++ } lvt_pc;
++
++/*350*/ struct { /* LVT - LINT0 */
++ u32 vector : 8,
++ delivery_mode : 3,
++ __reserved_1 : 1,
++ delivery_status : 1,
++ polarity : 1,
++ remote_irr : 1,
++ trigger : 1,
++ mask : 1,
++ __reserved_2 : 15;
++ u32 __reserved_3[3];
++ } lvt_lint0;
++
++/*360*/ struct { /* LVT - LINT1 */
++ u32 vector : 8,
++ delivery_mode : 3,
++ __reserved_1 : 1,
++ delivery_status : 1,
++ polarity : 1,
++ remote_irr : 1,
++ trigger : 1,
++ mask : 1,
++ __reserved_2 : 15;
++ u32 __reserved_3[3];
++ } lvt_lint1;
++
++/*370*/ struct { /* LVT - Error */
++ u32 vector : 8,
++ __reserved_1 : 4,
++ delivery_status : 1,
++ __reserved_2 : 3,
++ mask : 1,
++ __reserved_3 : 15;
++ u32 __reserved_4[3];
++ } lvt_error;
++
++/*380*/ struct { /* Timer Initial Count Register */
++ u32 initial_count;
++ u32 __reserved_2[3];
++ } timer_icr;
++
++/*390*/ const
++ struct { /* Timer Current Count Register */
++ u32 curr_count;
++ u32 __reserved_2[3];
++ } timer_ccr;
++
++/*3A0*/ struct { u32 __reserved[4]; } __reserved_16;
++
++/*3B0*/ struct { u32 __reserved[4]; } __reserved_17;
++
++/*3C0*/ struct { u32 __reserved[4]; } __reserved_18;
++
++/*3D0*/ struct { u32 __reserved[4]; } __reserved_19;
++
++/*3E0*/ struct { /* Timer Divide Configuration Register */
++ u32 divisor : 4,
++ __reserved_1 : 28;
++ u32 __reserved_2[3];
++ } timer_dcr;
++
++/*3F0*/ struct { u32 __reserved[4]; } __reserved_20;
++
++} __attribute__ ((packed));
++
++#undef u32
++
++#define BAD_APICID 0xFFu
++
+ #endif
+diff --git a/include/asm-x86/apicdef_32.h b/include/asm-x86/apicdef_32.h
+deleted file mode 100644
+index 9f69953..0000000
+--- a/include/asm-x86/apicdef_32.h
++++ /dev/null
+@@ -1,375 +0,0 @@
+-#ifndef __ASM_APICDEF_H
+-#define __ASM_APICDEF_H
+-
+-/*
+- * Constants for various Intel APICs. (local APIC, IOAPIC, etc.)
+- *
+- * Alan Cox <Alan.Cox at linux.org>, 1995.
+- * Ingo Molnar <mingo at redhat.com>, 1999, 2000
+- */
+-
+-#define APIC_DEFAULT_PHYS_BASE 0xfee00000
+-
+-#define APIC_ID 0x20
+-#define APIC_LVR 0x30
+-#define APIC_LVR_MASK 0xFF00FF
+-#define GET_APIC_VERSION(x) ((x)&0xFF)
+-#define GET_APIC_MAXLVT(x) (((x)>>16)&0xFF)
+-#define APIC_INTEGRATED(x) ((x)&0xF0)
+-#define APIC_XAPIC(x) ((x) >= 0x14)
+-#define APIC_TASKPRI 0x80
+-#define APIC_TPRI_MASK 0xFF
+-#define APIC_ARBPRI 0x90
+-#define APIC_ARBPRI_MASK 0xFF
+-#define APIC_PROCPRI 0xA0
+-#define APIC_EOI 0xB0
+-#define APIC_EIO_ACK 0x0 /* Write this to the EOI register */
+-#define APIC_RRR 0xC0
+-#define APIC_LDR 0xD0
+-#define APIC_LDR_MASK (0xFF<<24)
+-#define GET_APIC_LOGICAL_ID(x) (((x)>>24)&0xFF)
+-#define SET_APIC_LOGICAL_ID(x) (((x)<<24))
+-#define APIC_ALL_CPUS 0xFF
+-#define APIC_DFR 0xE0
+-#define APIC_DFR_CLUSTER 0x0FFFFFFFul
+-#define APIC_DFR_FLAT 0xFFFFFFFFul
+-#define APIC_SPIV 0xF0
+-#define APIC_SPIV_FOCUS_DISABLED (1<<9)
+-#define APIC_SPIV_APIC_ENABLED (1<<8)
+-#define APIC_ISR 0x100
+-#define APIC_ISR_NR 0x8 /* Number of 32 bit ISR registers. */
+-#define APIC_TMR 0x180
+-#define APIC_IRR 0x200
+-#define APIC_ESR 0x280
+-#define APIC_ESR_SEND_CS 0x00001
+-#define APIC_ESR_RECV_CS 0x00002
+-#define APIC_ESR_SEND_ACC 0x00004
+-#define APIC_ESR_RECV_ACC 0x00008
+-#define APIC_ESR_SENDILL 0x00020
+-#define APIC_ESR_RECVILL 0x00040
+-#define APIC_ESR_ILLREGA 0x00080
+-#define APIC_ICR 0x300
+-#define APIC_DEST_SELF 0x40000
+-#define APIC_DEST_ALLINC 0x80000
+-#define APIC_DEST_ALLBUT 0xC0000
+-#define APIC_ICR_RR_MASK 0x30000
+-#define APIC_ICR_RR_INVALID 0x00000
+-#define APIC_ICR_RR_INPROG 0x10000
+-#define APIC_ICR_RR_VALID 0x20000
+-#define APIC_INT_LEVELTRIG 0x08000
+-#define APIC_INT_ASSERT 0x04000
+-#define APIC_ICR_BUSY 0x01000
+-#define APIC_DEST_LOGICAL 0x00800
+-#define APIC_DM_FIXED 0x00000
+-#define APIC_DM_LOWEST 0x00100
+-#define APIC_DM_SMI 0x00200
+-#define APIC_DM_REMRD 0x00300
+-#define APIC_DM_NMI 0x00400
+-#define APIC_DM_INIT 0x00500
+-#define APIC_DM_STARTUP 0x00600
+-#define APIC_DM_EXTINT 0x00700
+-#define APIC_VECTOR_MASK 0x000FF
+-#define APIC_ICR2 0x310
+-#define GET_APIC_DEST_FIELD(x) (((x)>>24)&0xFF)
+-#define SET_APIC_DEST_FIELD(x) ((x)<<24)
+-#define APIC_LVTT 0x320
+-#define APIC_LVTTHMR 0x330
+-#define APIC_LVTPC 0x340
+-#define APIC_LVT0 0x350
+-#define APIC_LVT_TIMER_BASE_MASK (0x3<<18)
+-#define GET_APIC_TIMER_BASE(x) (((x)>>18)&0x3)
+-#define SET_APIC_TIMER_BASE(x) (((x)<<18))
+-#define APIC_TIMER_BASE_CLKIN 0x0
+-#define APIC_TIMER_BASE_TMBASE 0x1
+-#define APIC_TIMER_BASE_DIV 0x2
+-#define APIC_LVT_TIMER_PERIODIC (1<<17)
+-#define APIC_LVT_MASKED (1<<16)
+-#define APIC_LVT_LEVEL_TRIGGER (1<<15)
+-#define APIC_LVT_REMOTE_IRR (1<<14)
+-#define APIC_INPUT_POLARITY (1<<13)
+-#define APIC_SEND_PENDING (1<<12)
+-#define APIC_MODE_MASK 0x700
+-#define GET_APIC_DELIVERY_MODE(x) (((x)>>8)&0x7)
+-#define SET_APIC_DELIVERY_MODE(x,y) (((x)&~0x700)|((y)<<8))
+-#define APIC_MODE_FIXED 0x0
+-#define APIC_MODE_NMI 0x4
+-#define APIC_MODE_EXTINT 0x7
+-#define APIC_LVT1 0x360
+-#define APIC_LVTERR 0x370
+-#define APIC_TMICT 0x380
+-#define APIC_TMCCT 0x390
+-#define APIC_TDCR 0x3E0
+-#define APIC_TDR_DIV_TMBASE (1<<2)
+-#define APIC_TDR_DIV_1 0xB
+-#define APIC_TDR_DIV_2 0x0
+-#define APIC_TDR_DIV_4 0x1
+-#define APIC_TDR_DIV_8 0x2
+-#define APIC_TDR_DIV_16 0x3
+-#define APIC_TDR_DIV_32 0x8
+-#define APIC_TDR_DIV_64 0x9
+-#define APIC_TDR_DIV_128 0xA
+-
+-#define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
+-
+-#define MAX_IO_APICS 64
+-
+-/*
+- * the local APIC register structure, memory mapped. Not terribly well
+- * tested, but we might eventually use this one in the future - the
+- * problem why we cannot use it right now is the P5 APIC, it has an
+- * errata which cannot take 8-bit reads and writes, only 32-bit ones ...
+- */
+-#define u32 unsigned int
+-
+-
+-struct local_apic {
+-
+-/*000*/ struct { u32 __reserved[4]; } __reserved_01;
+-
+-/*010*/ struct { u32 __reserved[4]; } __reserved_02;
+-
+-/*020*/ struct { /* APIC ID Register */
+- u32 __reserved_1 : 24,
+- phys_apic_id : 4,
+- __reserved_2 : 4;
+- u32 __reserved[3];
+- } id;
+-
+-/*030*/ const
+- struct { /* APIC Version Register */
+- u32 version : 8,
+- __reserved_1 : 8,
+- max_lvt : 8,
+- __reserved_2 : 8;
+- u32 __reserved[3];
+- } version;
+-
+-/*040*/ struct { u32 __reserved[4]; } __reserved_03;
+-
+-/*050*/ struct { u32 __reserved[4]; } __reserved_04;
+-
+-/*060*/ struct { u32 __reserved[4]; } __reserved_05;
+-
+-/*070*/ struct { u32 __reserved[4]; } __reserved_06;
+-
+-/*080*/ struct { /* Task Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } tpr;
+-
+-/*090*/ const
+- struct { /* Arbitration Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } apr;
+-
+-/*0A0*/ const
+- struct { /* Processor Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } ppr;
+-
+-/*0B0*/ struct { /* End Of Interrupt Register */
+- u32 eoi;
+- u32 __reserved[3];
+- } eoi;
+-
+-/*0C0*/ struct { u32 __reserved[4]; } __reserved_07;
+-
+-/*0D0*/ struct { /* Logical Destination Register */
+- u32 __reserved_1 : 24,
+- logical_dest : 8;
+- u32 __reserved_2[3];
+- } ldr;
+-
+-/*0E0*/ struct { /* Destination Format Register */
+- u32 __reserved_1 : 28,
+- model : 4;
+- u32 __reserved_2[3];
+- } dfr;
+-
+-/*0F0*/ struct { /* Spurious Interrupt Vector Register */
+- u32 spurious_vector : 8,
+- apic_enabled : 1,
+- focus_cpu : 1,
+- __reserved_2 : 22;
+- u32 __reserved_3[3];
+- } svr;
+-
+-/*100*/ struct { /* In Service Register */
+-/*170*/ u32 bitfield;
+- u32 __reserved[3];
+- } isr [8];
+-
+-/*180*/ struct { /* Trigger Mode Register */
+-/*1F0*/ u32 bitfield;
+- u32 __reserved[3];
+- } tmr [8];
+-
+-/*200*/ struct { /* Interrupt Request Register */
+-/*270*/ u32 bitfield;
+- u32 __reserved[3];
+- } irr [8];
+-
+-/*280*/ union { /* Error Status Register */
+- struct {
+- u32 send_cs_error : 1,
+- receive_cs_error : 1,
+- send_accept_error : 1,
+- receive_accept_error : 1,
+- __reserved_1 : 1,
+- send_illegal_vector : 1,
+- receive_illegal_vector : 1,
+- illegal_register_address : 1,
+- __reserved_2 : 24;
+- u32 __reserved_3[3];
+- } error_bits;
+- struct {
+- u32 errors;
+- u32 __reserved_3[3];
+- } all_errors;
+- } esr;
+-
+-/*290*/ struct { u32 __reserved[4]; } __reserved_08;
+-
+-/*2A0*/ struct { u32 __reserved[4]; } __reserved_09;
+-
+-/*2B0*/ struct { u32 __reserved[4]; } __reserved_10;
+-
+-/*2C0*/ struct { u32 __reserved[4]; } __reserved_11;
+-
+-/*2D0*/ struct { u32 __reserved[4]; } __reserved_12;
+-
+-/*2E0*/ struct { u32 __reserved[4]; } __reserved_13;
+-
+-/*2F0*/ struct { u32 __reserved[4]; } __reserved_14;
+-
+-/*300*/ struct { /* Interrupt Command Register 1 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- destination_mode : 1,
+- delivery_status : 1,
+- __reserved_1 : 1,
+- level : 1,
+- trigger : 1,
+- __reserved_2 : 2,
+- shorthand : 2,
+- __reserved_3 : 12;
+- u32 __reserved_4[3];
+- } icr1;
+-
+-/*310*/ struct { /* Interrupt Command Register 2 */
+- union {
+- u32 __reserved_1 : 24,
+- phys_dest : 4,
+- __reserved_2 : 4;
+- u32 __reserved_3 : 24,
+- logical_dest : 8;
+- } dest;
+- u32 __reserved_4[3];
+- } icr2;
+-
+-/*320*/ struct { /* LVT - Timer */
+- u32 vector : 8,
+- __reserved_1 : 4,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- timer_mode : 1,
+- __reserved_3 : 14;
+- u32 __reserved_4[3];
+- } lvt_timer;
+-
+-/*330*/ struct { /* LVT - Thermal Sensor */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_thermal;
+-
+-/*340*/ struct { /* LVT - Performance Counter */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_pc;
+-
+-/*350*/ struct { /* LVT - LINT0 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- polarity : 1,
+- remote_irr : 1,
+- trigger : 1,
+- mask : 1,
+- __reserved_2 : 15;
+- u32 __reserved_3[3];
+- } lvt_lint0;
+-
+-/*360*/ struct { /* LVT - LINT1 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- polarity : 1,
+- remote_irr : 1,
+- trigger : 1,
+- mask : 1,
+- __reserved_2 : 15;
+- u32 __reserved_3[3];
+- } lvt_lint1;
+-
+-/*370*/ struct { /* LVT - Error */
+- u32 vector : 8,
+- __reserved_1 : 4,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_error;
+-
+-/*380*/ struct { /* Timer Initial Count Register */
+- u32 initial_count;
+- u32 __reserved_2[3];
+- } timer_icr;
+-
+-/*390*/ const
+- struct { /* Timer Current Count Register */
+- u32 curr_count;
+- u32 __reserved_2[3];
+- } timer_ccr;
+-
+-/*3A0*/ struct { u32 __reserved[4]; } __reserved_16;
+-
+-/*3B0*/ struct { u32 __reserved[4]; } __reserved_17;
+-
+-/*3C0*/ struct { u32 __reserved[4]; } __reserved_18;
+-
+-/*3D0*/ struct { u32 __reserved[4]; } __reserved_19;
+-
+-/*3E0*/ struct { /* Timer Divide Configuration Register */
+- u32 divisor : 4,
+- __reserved_1 : 28;
+- u32 __reserved_2[3];
+- } timer_dcr;
+-
+-/*3F0*/ struct { u32 __reserved[4]; } __reserved_20;
+-
+-} __attribute__ ((packed));
+-
+-#undef u32
+-
+-#endif
+diff --git a/include/asm-x86/apicdef_64.h b/include/asm-x86/apicdef_64.h
+deleted file mode 100644
+index 1dd4006..0000000
+--- a/include/asm-x86/apicdef_64.h
++++ /dev/null
+@@ -1,392 +0,0 @@
+-#ifndef __ASM_APICDEF_H
+-#define __ASM_APICDEF_H
+-
+-/*
+- * Constants for various Intel APICs. (local APIC, IOAPIC, etc.)
+- *
+- * Alan Cox <Alan.Cox at linux.org>, 1995.
+- * Ingo Molnar <mingo at redhat.com>, 1999, 2000
+- */
+-
+-#define APIC_DEFAULT_PHYS_BASE 0xfee00000
+-
+-#define APIC_ID 0x20
+-#define APIC_ID_MASK (0xFFu<<24)
+-#define GET_APIC_ID(x) (((x)>>24)&0xFFu)
+-#define SET_APIC_ID(x) (((x)<<24))
+-#define APIC_LVR 0x30
+-#define APIC_LVR_MASK 0xFF00FF
+-#define GET_APIC_VERSION(x) ((x)&0xFFu)
+-#define GET_APIC_MAXLVT(x) (((x)>>16)&0xFFu)
+-#define APIC_INTEGRATED(x) ((x)&0xF0u)
+-#define APIC_TASKPRI 0x80
+-#define APIC_TPRI_MASK 0xFFu
+-#define APIC_ARBPRI 0x90
+-#define APIC_ARBPRI_MASK 0xFFu
+-#define APIC_PROCPRI 0xA0
+-#define APIC_EOI 0xB0
+-#define APIC_EIO_ACK 0x0 /* Write this to the EOI register */
+-#define APIC_RRR 0xC0
+-#define APIC_LDR 0xD0
+-#define APIC_LDR_MASK (0xFFu<<24)
+-#define GET_APIC_LOGICAL_ID(x) (((x)>>24)&0xFFu)
+-#define SET_APIC_LOGICAL_ID(x) (((x)<<24))
+-#define APIC_ALL_CPUS 0xFFu
+-#define APIC_DFR 0xE0
+-#define APIC_DFR_CLUSTER 0x0FFFFFFFul
+-#define APIC_DFR_FLAT 0xFFFFFFFFul
+-#define APIC_SPIV 0xF0
+-#define APIC_SPIV_FOCUS_DISABLED (1<<9)
+-#define APIC_SPIV_APIC_ENABLED (1<<8)
+-#define APIC_ISR 0x100
+-#define APIC_ISR_NR 0x8 /* Number of 32 bit ISR registers. */
+-#define APIC_TMR 0x180
+-#define APIC_IRR 0x200
+-#define APIC_ESR 0x280
+-#define APIC_ESR_SEND_CS 0x00001
+-#define APIC_ESR_RECV_CS 0x00002
+-#define APIC_ESR_SEND_ACC 0x00004
+-#define APIC_ESR_RECV_ACC 0x00008
+-#define APIC_ESR_SENDILL 0x00020
+-#define APIC_ESR_RECVILL 0x00040
+-#define APIC_ESR_ILLREGA 0x00080
+-#define APIC_ICR 0x300
+-#define APIC_DEST_SELF 0x40000
+-#define APIC_DEST_ALLINC 0x80000
+-#define APIC_DEST_ALLBUT 0xC0000
+-#define APIC_ICR_RR_MASK 0x30000
+-#define APIC_ICR_RR_INVALID 0x00000
+-#define APIC_ICR_RR_INPROG 0x10000
+-#define APIC_ICR_RR_VALID 0x20000
+-#define APIC_INT_LEVELTRIG 0x08000
+-#define APIC_INT_ASSERT 0x04000
+-#define APIC_ICR_BUSY 0x01000
+-#define APIC_DEST_LOGICAL 0x00800
+-#define APIC_DEST_PHYSICAL 0x00000
+-#define APIC_DM_FIXED 0x00000
+-#define APIC_DM_LOWEST 0x00100
+-#define APIC_DM_SMI 0x00200
+-#define APIC_DM_REMRD 0x00300
+-#define APIC_DM_NMI 0x00400
+-#define APIC_DM_INIT 0x00500
+-#define APIC_DM_STARTUP 0x00600
+-#define APIC_DM_EXTINT 0x00700
+-#define APIC_VECTOR_MASK 0x000FF
+-#define APIC_ICR2 0x310
+-#define GET_APIC_DEST_FIELD(x) (((x)>>24)&0xFF)
+-#define SET_APIC_DEST_FIELD(x) ((x)<<24)
+-#define APIC_LVTT 0x320
+-#define APIC_LVTTHMR 0x330
+-#define APIC_LVTPC 0x340
+-#define APIC_LVT0 0x350
+-#define APIC_LVT_TIMER_BASE_MASK (0x3<<18)
+-#define GET_APIC_TIMER_BASE(x) (((x)>>18)&0x3)
+-#define SET_APIC_TIMER_BASE(x) (((x)<<18))
+-#define APIC_TIMER_BASE_CLKIN 0x0
+-#define APIC_TIMER_BASE_TMBASE 0x1
+-#define APIC_TIMER_BASE_DIV 0x2
+-#define APIC_LVT_TIMER_PERIODIC (1<<17)
+-#define APIC_LVT_MASKED (1<<16)
+-#define APIC_LVT_LEVEL_TRIGGER (1<<15)
+-#define APIC_LVT_REMOTE_IRR (1<<14)
+-#define APIC_INPUT_POLARITY (1<<13)
+-#define APIC_SEND_PENDING (1<<12)
+-#define APIC_MODE_MASK 0x700
+-#define GET_APIC_DELIVERY_MODE(x) (((x)>>8)&0x7)
+-#define SET_APIC_DELIVERY_MODE(x,y) (((x)&~0x700)|((y)<<8))
+-#define APIC_MODE_FIXED 0x0
+-#define APIC_MODE_NMI 0x4
+-#define APIC_MODE_EXTINT 0x7
+-#define APIC_LVT1 0x360
+-#define APIC_LVTERR 0x370
+-#define APIC_TMICT 0x380
+-#define APIC_TMCCT 0x390
+-#define APIC_TDCR 0x3E0
+-#define APIC_TDR_DIV_TMBASE (1<<2)
+-#define APIC_TDR_DIV_1 0xB
+-#define APIC_TDR_DIV_2 0x0
+-#define APIC_TDR_DIV_4 0x1
+-#define APIC_TDR_DIV_8 0x2
+-#define APIC_TDR_DIV_16 0x3
+-#define APIC_TDR_DIV_32 0x8
+-#define APIC_TDR_DIV_64 0x9
+-#define APIC_TDR_DIV_128 0xA
+-
+-#define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
+-
+-#define MAX_IO_APICS 128
+-#define MAX_LOCAL_APIC 256
+-
+-/*
+- * All x86-64 systems are xAPIC compatible.
+- * In the following, "apicid" is a physical APIC ID.
+- */
+-#define XAPIC_DEST_CPUS_SHIFT 4
+-#define XAPIC_DEST_CPUS_MASK ((1u << XAPIC_DEST_CPUS_SHIFT) - 1)
+-#define XAPIC_DEST_CLUSTER_MASK (XAPIC_DEST_CPUS_MASK << XAPIC_DEST_CPUS_SHIFT)
+-#define APIC_CLUSTER(apicid) ((apicid) & XAPIC_DEST_CLUSTER_MASK)
+-#define APIC_CLUSTERID(apicid) (APIC_CLUSTER(apicid) >> XAPIC_DEST_CPUS_SHIFT)
+-#define APIC_CPUID(apicid) ((apicid) & XAPIC_DEST_CPUS_MASK)
+-#define NUM_APIC_CLUSTERS ((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
+-
+-/*
+- * the local APIC register structure, memory mapped. Not terribly well
+- * tested, but we might eventually use this one in the future - the
+- * problem why we cannot use it right now is the P5 APIC, it has an
+- * errata which cannot take 8-bit reads and writes, only 32-bit ones ...
+- */
+-#define u32 unsigned int
+-
+-struct local_apic {
+-
+-/*000*/ struct { u32 __reserved[4]; } __reserved_01;
+-
+-/*010*/ struct { u32 __reserved[4]; } __reserved_02;
+-
+-/*020*/ struct { /* APIC ID Register */
+- u32 __reserved_1 : 24,
+- phys_apic_id : 4,
+- __reserved_2 : 4;
+- u32 __reserved[3];
+- } id;
+-
+-/*030*/ const
+- struct { /* APIC Version Register */
+- u32 version : 8,
+- __reserved_1 : 8,
+- max_lvt : 8,
+- __reserved_2 : 8;
+- u32 __reserved[3];
+- } version;
+-
+-/*040*/ struct { u32 __reserved[4]; } __reserved_03;
+-
+-/*050*/ struct { u32 __reserved[4]; } __reserved_04;
+-
+-/*060*/ struct { u32 __reserved[4]; } __reserved_05;
+-
+-/*070*/ struct { u32 __reserved[4]; } __reserved_06;
+-
+-/*080*/ struct { /* Task Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } tpr;
+-
+-/*090*/ const
+- struct { /* Arbitration Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } apr;
+-
+-/*0A0*/ const
+- struct { /* Processor Priority Register */
+- u32 priority : 8,
+- __reserved_1 : 24;
+- u32 __reserved_2[3];
+- } ppr;
+-
+-/*0B0*/ struct { /* End Of Interrupt Register */
+- u32 eoi;
+- u32 __reserved[3];
+- } eoi;
+-
+-/*0C0*/ struct { u32 __reserved[4]; } __reserved_07;
+-
+-/*0D0*/ struct { /* Logical Destination Register */
+- u32 __reserved_1 : 24,
+- logical_dest : 8;
+- u32 __reserved_2[3];
+- } ldr;
+-
+-/*0E0*/ struct { /* Destination Format Register */
+- u32 __reserved_1 : 28,
+- model : 4;
+- u32 __reserved_2[3];
+- } dfr;
+-
+-/*0F0*/ struct { /* Spurious Interrupt Vector Register */
+- u32 spurious_vector : 8,
+- apic_enabled : 1,
+- focus_cpu : 1,
+- __reserved_2 : 22;
+- u32 __reserved_3[3];
+- } svr;
+-
+-/*100*/ struct { /* In Service Register */
+-/*170*/ u32 bitfield;
+- u32 __reserved[3];
+- } isr [8];
+-
+-/*180*/ struct { /* Trigger Mode Register */
+-/*1F0*/ u32 bitfield;
+- u32 __reserved[3];
+- } tmr [8];
+-
+-/*200*/ struct { /* Interrupt Request Register */
+-/*270*/ u32 bitfield;
+- u32 __reserved[3];
+- } irr [8];
+-
+-/*280*/ union { /* Error Status Register */
+- struct {
+- u32 send_cs_error : 1,
+- receive_cs_error : 1,
+- send_accept_error : 1,
+- receive_accept_error : 1,
+- __reserved_1 : 1,
+- send_illegal_vector : 1,
+- receive_illegal_vector : 1,
+- illegal_register_address : 1,
+- __reserved_2 : 24;
+- u32 __reserved_3[3];
+- } error_bits;
+- struct {
+- u32 errors;
+- u32 __reserved_3[3];
+- } all_errors;
+- } esr;
+-
+-/*290*/ struct { u32 __reserved[4]; } __reserved_08;
+-
+-/*2A0*/ struct { u32 __reserved[4]; } __reserved_09;
+-
+-/*2B0*/ struct { u32 __reserved[4]; } __reserved_10;
+-
+-/*2C0*/ struct { u32 __reserved[4]; } __reserved_11;
+-
+-/*2D0*/ struct { u32 __reserved[4]; } __reserved_12;
+-
+-/*2E0*/ struct { u32 __reserved[4]; } __reserved_13;
+-
+-/*2F0*/ struct { u32 __reserved[4]; } __reserved_14;
+-
+-/*300*/ struct { /* Interrupt Command Register 1 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- destination_mode : 1,
+- delivery_status : 1,
+- __reserved_1 : 1,
+- level : 1,
+- trigger : 1,
+- __reserved_2 : 2,
+- shorthand : 2,
+- __reserved_3 : 12;
+- u32 __reserved_4[3];
+- } icr1;
+-
+-/*310*/ struct { /* Interrupt Command Register 2 */
+- union {
+- u32 __reserved_1 : 24,
+- phys_dest : 4,
+- __reserved_2 : 4;
+- u32 __reserved_3 : 24,
+- logical_dest : 8;
+- } dest;
+- u32 __reserved_4[3];
+- } icr2;
+-
+-/*320*/ struct { /* LVT - Timer */
+- u32 vector : 8,
+- __reserved_1 : 4,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- timer_mode : 1,
+- __reserved_3 : 14;
+- u32 __reserved_4[3];
+- } lvt_timer;
+-
+-/*330*/ struct { /* LVT - Thermal Sensor */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_thermal;
+-
+-/*340*/ struct { /* LVT - Performance Counter */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_pc;
+-
+-/*350*/ struct { /* LVT - LINT0 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- polarity : 1,
+- remote_irr : 1,
+- trigger : 1,
+- mask : 1,
+- __reserved_2 : 15;
+- u32 __reserved_3[3];
+- } lvt_lint0;
+-
+-/*360*/ struct { /* LVT - LINT1 */
+- u32 vector : 8,
+- delivery_mode : 3,
+- __reserved_1 : 1,
+- delivery_status : 1,
+- polarity : 1,
+- remote_irr : 1,
+- trigger : 1,
+- mask : 1,
+- __reserved_2 : 15;
+- u32 __reserved_3[3];
+- } lvt_lint1;
+-
+-/*370*/ struct { /* LVT - Error */
+- u32 vector : 8,
+- __reserved_1 : 4,
+- delivery_status : 1,
+- __reserved_2 : 3,
+- mask : 1,
+- __reserved_3 : 15;
+- u32 __reserved_4[3];
+- } lvt_error;
+-
+-/*380*/ struct { /* Timer Initial Count Register */
+- u32 initial_count;
+- u32 __reserved_2[3];
+- } timer_icr;
+-
+-/*390*/ const
+- struct { /* Timer Current Count Register */
+- u32 curr_count;
+- u32 __reserved_2[3];
+- } timer_ccr;
+-
+-/*3A0*/ struct { u32 __reserved[4]; } __reserved_16;
+-
+-/*3B0*/ struct { u32 __reserved[4]; } __reserved_17;
+-
+-/*3C0*/ struct { u32 __reserved[4]; } __reserved_18;
+-
+-/*3D0*/ struct { u32 __reserved[4]; } __reserved_19;
+-
+-/*3E0*/ struct { /* Timer Divide Configuration Register */
+- u32 divisor : 4,
+- __reserved_1 : 28;
+- u32 __reserved_2[3];
+- } timer_dcr;
+-
+-/*3F0*/ struct { u32 __reserved[4]; } __reserved_20;
+-
+-} __attribute__ ((packed));
+-
+-#undef u32
+-
+-#define BAD_APICID 0xFFu
+-
+-#endif
+diff --git a/include/asm-x86/arch_hooks.h b/include/asm-x86/arch_hooks.h
+index a8c1fca..768aee8 100644
+--- a/include/asm-x86/arch_hooks.h
++++ b/include/asm-x86/arch_hooks.h
+@@ -6,7 +6,7 @@
+ /*
+ * linux/include/asm/arch_hooks.h
+ *
+- * define the architecture specific hooks
++ * define the architecture specific hooks
+ */
+
+ /* these aren't arch hooks, they are generic routines
+@@ -24,7 +24,4 @@ extern void trap_init_hook(void);
+ extern void time_init_hook(void);
+ extern void mca_nmi_hook(void);
+
+-extern int setup_early_printk(char *);
+-extern void early_printk(const char *fmt, ...) __attribute__((format(printf,1,2)));
+-
+ #endif
+diff --git a/include/asm-x86/asm.h b/include/asm-x86/asm.h
new file mode 100644
-index 0000000..9445118
+index 0000000..1a6980a
--- /dev/null
-+++ b/include/asm-sh/unistd_64.h
-@@ -0,0 +1,415 @@
-+#ifndef __ASM_SH_UNISTD_64_H
-+#define __ASM_SH_UNISTD_64_H
++++ b/include/asm-x86/asm.h
+@@ -0,0 +1,32 @@
++#ifndef _ASM_X86_ASM_H
++#define _ASM_X86_ASM_H
++
++#ifdef CONFIG_X86_32
++/* 32 bits */
++
++# define _ASM_PTR " .long "
++# define _ASM_ALIGN " .balign 4 "
++# define _ASM_MOV_UL " movl "
++
++# define _ASM_INC " incl "
++# define _ASM_DEC " decl "
++# define _ASM_ADD " addl "
++# define _ASM_SUB " subl "
++# define _ASM_XADD " xaddl "
++
++#else
++/* 64 bits */
++
++# define _ASM_PTR " .quad "
++# define _ASM_ALIGN " .balign 8 "
++# define _ASM_MOV_UL " movq "
++
++# define _ASM_INC " incq "
++# define _ASM_DEC " decq "
++# define _ASM_ADD " addq "
++# define _ASM_SUB " subq "
++# define _ASM_XADD " xaddq "
++
++#endif /* CONFIG_X86_32 */
++
++#endif /* _ASM_X86_ASM_H */
+diff --git a/include/asm-x86/bitops.h b/include/asm-x86/bitops.h
+index 07e3f6d..1a23ce1 100644
+--- a/include/asm-x86/bitops.h
++++ b/include/asm-x86/bitops.h
+@@ -1,5 +1,321 @@
++#ifndef _ASM_X86_BITOPS_H
++#define _ASM_X86_BITOPS_H
+
+/*
-+ * include/asm-sh/unistd_64.h
++ * Copyright 1992, Linus Torvalds.
++ */
++
++#ifndef _LINUX_BITOPS_H
++#error only <linux/bitops.h> can be included directly
++#endif
++
++#include <linux/compiler.h>
++#include <asm/alternative.h>
++
++/*
++ * These have to be done with inline assembly: that way the bit-setting
++ * is guaranteed to be atomic. All bit operations return 0 if the bit
++ * was cleared before the operation and != 0 if it was not.
+ *
-+ * This file contains the system call numbers.
++ * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
++ */
++
++#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
++/* Technically wrong, but this avoids compilation errors on some gcc
++ versions. */
++#define ADDR "=m" (*(volatile long *) addr)
++#else
++#define ADDR "+m" (*(volatile long *) addr)
++#endif
++
++/**
++ * set_bit - Atomically set a bit in memory
++ * @nr: the bit to set
++ * @addr: the address to start counting from
+ *
-+ * Copyright (C) 2000, 2001 Paolo Alberelli
-+ * Copyright (C) 2003 - 2007 Paul Mundt
-+ * Copyright (C) 2004 Sean McGoogan
++ * This function is atomic and may not be reordered. See __set_bit()
++ * if you do not require the atomic guarantees.
+ *
-+ * This file is subject to the terms and conditions of the GNU General Public
-+ * License. See the file "COPYING" in the main directory of this archive
-+ * for more details.
++ * Note: there are no guarantees that this function will not be reordered
++ * on non x86 architectures, so if you are writing portable code,
++ * make sure not to rely on its reordering guarantees.
++ *
++ * Note that @nr may be almost arbitrarily large; this function is not
++ * restricted to acting on a single-word quantity.
+ */
-+#define __NR_restart_syscall 0
-+#define __NR_exit 1
-+#define __NR_fork 2
-+#define __NR_read 3
-+#define __NR_write 4
-+#define __NR_open 5
-+#define __NR_close 6
-+#define __NR_waitpid 7
-+#define __NR_creat 8
-+#define __NR_link 9
-+#define __NR_unlink 10
-+#define __NR_execve 11
-+#define __NR_chdir 12
-+#define __NR_time 13
-+#define __NR_mknod 14
-+#define __NR_chmod 15
-+#define __NR_lchown 16
-+#define __NR_break 17
-+#define __NR_oldstat 18
-+#define __NR_lseek 19
-+#define __NR_getpid 20
-+#define __NR_mount 21
-+#define __NR_umount 22
-+#define __NR_setuid 23
-+#define __NR_getuid 24
-+#define __NR_stime 25
-+#define __NR_ptrace 26
-+#define __NR_alarm 27
-+#define __NR_oldfstat 28
-+#define __NR_pause 29
-+#define __NR_utime 30
-+#define __NR_stty 31
-+#define __NR_gtty 32
-+#define __NR_access 33
-+#define __NR_nice 34
-+#define __NR_ftime 35
-+#define __NR_sync 36
-+#define __NR_kill 37
-+#define __NR_rename 38
-+#define __NR_mkdir 39
-+#define __NR_rmdir 40
-+#define __NR_dup 41
-+#define __NR_pipe 42
-+#define __NR_times 43
-+#define __NR_prof 44
-+#define __NR_brk 45
-+#define __NR_setgid 46
-+#define __NR_getgid 47
-+#define __NR_signal 48
-+#define __NR_geteuid 49
-+#define __NR_getegid 50
-+#define __NR_acct 51
-+#define __NR_umount2 52
-+#define __NR_lock 53
-+#define __NR_ioctl 54
-+#define __NR_fcntl 55
-+#define __NR_mpx 56
-+#define __NR_setpgid 57
-+#define __NR_ulimit 58
-+#define __NR_oldolduname 59
-+#define __NR_umask 60
-+#define __NR_chroot 61
-+#define __NR_ustat 62
-+#define __NR_dup2 63
-+#define __NR_getppid 64
-+#define __NR_getpgrp 65
-+#define __NR_setsid 66
-+#define __NR_sigaction 67
-+#define __NR_sgetmask 68
-+#define __NR_ssetmask 69
-+#define __NR_setreuid 70
-+#define __NR_setregid 71
-+#define __NR_sigsuspend 72
-+#define __NR_sigpending 73
-+#define __NR_sethostname 74
-+#define __NR_setrlimit 75
-+#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
-+#define __NR_getrusage 77
-+#define __NR_gettimeofday 78
-+#define __NR_settimeofday 79
-+#define __NR_getgroups 80
-+#define __NR_setgroups 81
-+#define __NR_select 82
-+#define __NR_symlink 83
-+#define __NR_oldlstat 84
-+#define __NR_readlink 85
-+#define __NR_uselib 86
-+#define __NR_swapon 87
-+#define __NR_reboot 88
-+#define __NR_readdir 89
-+#define __NR_mmap 90
-+#define __NR_munmap 91
-+#define __NR_truncate 92
-+#define __NR_ftruncate 93
-+#define __NR_fchmod 94
-+#define __NR_fchown 95
-+#define __NR_getpriority 96
-+#define __NR_setpriority 97
-+#define __NR_profil 98
-+#define __NR_statfs 99
-+#define __NR_fstatfs 100
-+#define __NR_ioperm 101
-+#define __NR_socketcall 102 /* old implementation of socket systemcall */
-+#define __NR_syslog 103
-+#define __NR_setitimer 104
-+#define __NR_getitimer 105
-+#define __NR_stat 106
-+#define __NR_lstat 107
-+#define __NR_fstat 108
-+#define __NR_olduname 109
-+#define __NR_iopl 110
-+#define __NR_vhangup 111
-+#define __NR_idle 112
-+#define __NR_vm86old 113
-+#define __NR_wait4 114
-+#define __NR_swapoff 115
-+#define __NR_sysinfo 116
-+#define __NR_ipc 117
-+#define __NR_fsync 118
-+#define __NR_sigreturn 119
-+#define __NR_clone 120
-+#define __NR_setdomainname 121
-+#define __NR_uname 122
-+#define __NR_modify_ldt 123
-+#define __NR_adjtimex 124
-+#define __NR_mprotect 125
-+#define __NR_sigprocmask 126
-+#define __NR_create_module 127
-+#define __NR_init_module 128
-+#define __NR_delete_module 129
-+#define __NR_get_kernel_syms 130
-+#define __NR_quotactl 131
-+#define __NR_getpgid 132
-+#define __NR_fchdir 133
-+#define __NR_bdflush 134
-+#define __NR_sysfs 135
-+#define __NR_personality 136
-+#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
-+#define __NR_setfsuid 138
-+#define __NR_setfsgid 139
-+#define __NR__llseek 140
-+#define __NR_getdents 141
-+#define __NR__newselect 142
-+#define __NR_flock 143
-+#define __NR_msync 144
-+#define __NR_readv 145
-+#define __NR_writev 146
-+#define __NR_getsid 147
-+#define __NR_fdatasync 148
-+#define __NR__sysctl 149
-+#define __NR_mlock 150
-+#define __NR_munlock 151
-+#define __NR_mlockall 152
-+#define __NR_munlockall 153
-+#define __NR_sched_setparam 154
-+#define __NR_sched_getparam 155
-+#define __NR_sched_setscheduler 156
-+#define __NR_sched_getscheduler 157
-+#define __NR_sched_yield 158
-+#define __NR_sched_get_priority_max 159
-+#define __NR_sched_get_priority_min 160
-+#define __NR_sched_rr_get_interval 161
-+#define __NR_nanosleep 162
-+#define __NR_mremap 163
-+#define __NR_setresuid 164
-+#define __NR_getresuid 165
-+#define __NR_vm86 166
-+#define __NR_query_module 167
-+#define __NR_poll 168
-+#define __NR_nfsservctl 169
-+#define __NR_setresgid 170
-+#define __NR_getresgid 171
-+#define __NR_prctl 172
-+#define __NR_rt_sigreturn 173
-+#define __NR_rt_sigaction 174
-+#define __NR_rt_sigprocmask 175
-+#define __NR_rt_sigpending 176
-+#define __NR_rt_sigtimedwait 177
-+#define __NR_rt_sigqueueinfo 178
-+#define __NR_rt_sigsuspend 179
-+#define __NR_pread64 180
-+#define __NR_pwrite64 181
-+#define __NR_chown 182
-+#define __NR_getcwd 183
-+#define __NR_capget 184
-+#define __NR_capset 185
-+#define __NR_sigaltstack 186
-+#define __NR_sendfile 187
-+#define __NR_streams1 188 /* some people actually want it */
-+#define __NR_streams2 189 /* some people actually want it */
-+#define __NR_vfork 190
-+#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
-+#define __NR_mmap2 192
-+#define __NR_truncate64 193
-+#define __NR_ftruncate64 194
-+#define __NR_stat64 195
-+#define __NR_lstat64 196
-+#define __NR_fstat64 197
-+#define __NR_lchown32 198
-+#define __NR_getuid32 199
-+#define __NR_getgid32 200
-+#define __NR_geteuid32 201
-+#define __NR_getegid32 202
-+#define __NR_setreuid32 203
-+#define __NR_setregid32 204
-+#define __NR_getgroups32 205
-+#define __NR_setgroups32 206
-+#define __NR_fchown32 207
-+#define __NR_setresuid32 208
-+#define __NR_getresuid32 209
-+#define __NR_setresgid32 210
-+#define __NR_getresgid32 211
-+#define __NR_chown32 212
-+#define __NR_setuid32 213
-+#define __NR_setgid32 214
-+#define __NR_setfsuid32 215
-+#define __NR_setfsgid32 216
-+#define __NR_pivot_root 217
-+#define __NR_mincore 218
-+#define __NR_madvise 219
++static inline void set_bit(int nr, volatile void *addr)
++{
++ asm volatile(LOCK_PREFIX "bts %1,%0"
++ : ADDR
++ : "Ir" (nr) : "memory");
++}
+
-+/* Non-multiplexed socket family */
-+#define __NR_socket 220
-+#define __NR_bind 221
-+#define __NR_connect 222
-+#define __NR_listen 223
-+#define __NR_accept 224
-+#define __NR_getsockname 225
-+#define __NR_getpeername 226
-+#define __NR_socketpair 227
-+#define __NR_send 228
-+#define __NR_sendto 229
-+#define __NR_recv 230
-+#define __NR_recvfrom 231
-+#define __NR_shutdown 232
-+#define __NR_setsockopt 233
-+#define __NR_getsockopt 234
-+#define __NR_sendmsg 235
-+#define __NR_recvmsg 236
++/**
++ * __set_bit - Set a bit in memory
++ * @nr: the bit to set
++ * @addr: the address to start counting from
++ *
++ * Unlike set_bit(), this function is non-atomic and may be reordered.
++ * If it's called on the same region of memory simultaneously, the effect
++ * may be that only one operation succeeds.
++ */
++static inline void __set_bit(int nr, volatile void *addr)
++{
++ asm volatile("bts %1,%0"
++ : ADDR
++ : "Ir" (nr) : "memory");
++}
+
-+/* Non-multiplexed IPC family */
-+#define __NR_semop 237
-+#define __NR_semget 238
-+#define __NR_semctl 239
-+#define __NR_msgsnd 240
-+#define __NR_msgrcv 241
-+#define __NR_msgget 242
-+#define __NR_msgctl 243
-+#if 0
-+#define __NR_shmatcall 244
++
++/**
++ * clear_bit - Clears a bit in memory
++ * @nr: Bit to clear
++ * @addr: Address to start counting from
++ *
++ * clear_bit() is atomic and may not be reordered. However, it does
++ * not contain a memory barrier, so if it is used for locking purposes,
++ * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
++ * in order to ensure changes are visible on other processors.
++ */
++static inline void clear_bit(int nr, volatile void *addr)
++{
++ asm volatile(LOCK_PREFIX "btr %1,%0"
++ : ADDR
++ : "Ir" (nr));
++}
++
++/*
++ * clear_bit_unlock - Clears a bit in memory
++ * @nr: Bit to clear
++ * @addr: Address to start counting from
++ *
++ * clear_bit() is atomic and implies release semantics before the memory
++ * operation. It can be used for an unlock.
++ */
++static inline void clear_bit_unlock(unsigned nr, volatile void *addr)
++{
++ barrier();
++ clear_bit(nr, addr);
++}
++
++static inline void __clear_bit(int nr, volatile void *addr)
++{
++ asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
++}
++
++/*
++ * __clear_bit_unlock - Clears a bit in memory
++ * @nr: Bit to clear
++ * @addr: Address to start counting from
++ *
++ * __clear_bit() is non-atomic and implies release semantics before the memory
++ * operation. It can be used for an unlock if no other CPUs can concurrently
++ * modify other bits in the word.
++ *
++ * No memory barrier is required here, because x86 cannot reorder stores past
++ * older loads. Same principle as spin_unlock.
++ */
++static inline void __clear_bit_unlock(unsigned nr, volatile void *addr)
++{
++ barrier();
++ __clear_bit(nr, addr);
++}
++
++#define smp_mb__before_clear_bit() barrier()
++#define smp_mb__after_clear_bit() barrier()
++
++/**
++ * __change_bit - Toggle a bit in memory
++ * @nr: the bit to change
++ * @addr: the address to start counting from
++ *
++ * Unlike change_bit(), this function is non-atomic and may be reordered.
++ * If it's called on the same region of memory simultaneously, the effect
++ * may be that only one operation succeeds.
++ */
++static inline void __change_bit(int nr, volatile void *addr)
++{
++ asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
++}
++
++/**
++ * change_bit - Toggle a bit in memory
++ * @nr: Bit to change
++ * @addr: Address to start counting from
++ *
++ * change_bit() is atomic and may not be reordered.
++ * Note that @nr may be almost arbitrarily large; this function is not
++ * restricted to acting on a single-word quantity.
++ */
++static inline void change_bit(int nr, volatile void *addr)
++{
++ asm volatile(LOCK_PREFIX "btc %1,%0"
++ : ADDR : "Ir" (nr));
++}
++
++/**
++ * test_and_set_bit - Set a bit and return its old value
++ * @nr: Bit to set
++ * @addr: Address to count from
++ *
++ * This operation is atomic and cannot be reordered.
++ * It also implies a memory barrier.
++ */
++static inline int test_and_set_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm volatile(LOCK_PREFIX "bts %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr) : "memory");
++
++ return oldbit;
++}
++
++/**
++ * test_and_set_bit_lock - Set a bit and return its old value for lock
++ * @nr: Bit to set
++ * @addr: Address to count from
++ *
++ * This is the same as test_and_set_bit on x86.
++ */
++static inline int test_and_set_bit_lock(int nr, volatile void *addr)
++{
++ return test_and_set_bit(nr, addr);
++}
++
++/**
++ * __test_and_set_bit - Set a bit and return its old value
++ * @nr: Bit to set
++ * @addr: Address to count from
++ *
++ * This operation is non-atomic and can be reordered.
++ * If two examples of this operation race, one can appear to succeed
++ * but actually fail. You must protect multiple accesses with a lock.
++ */
++static inline int __test_and_set_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm("bts %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr));
++ return oldbit;
++}
++
++/**
++ * test_and_clear_bit - Clear a bit and return its old value
++ * @nr: Bit to clear
++ * @addr: Address to count from
++ *
++ * This operation is atomic and cannot be reordered.
++ * It also implies a memory barrier.
++ */
++static inline int test_and_clear_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm volatile(LOCK_PREFIX "btr %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr) : "memory");
++
++ return oldbit;
++}
++
++/**
++ * __test_and_clear_bit - Clear a bit and return its old value
++ * @nr: Bit to clear
++ * @addr: Address to count from
++ *
++ * This operation is non-atomic and can be reordered.
++ * If two examples of this operation race, one can appear to succeed
++ * but actually fail. You must protect multiple accesses with a lock.
++ */
++static inline int __test_and_clear_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm volatile("btr %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr));
++ return oldbit;
++}
++
++/* WARNING: non-atomic and it can be reordered! */
++static inline int __test_and_change_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm volatile("btc %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr) : "memory");
++
++ return oldbit;
++}
++
++/**
++ * test_and_change_bit - Change a bit and return its old value
++ * @nr: Bit to change
++ * @addr: Address to count from
++ *
++ * This operation is atomic and cannot be reordered.
++ * It also implies a memory barrier.
++ */
++static inline int test_and_change_bit(int nr, volatile void *addr)
++{
++ int oldbit;
++
++ asm volatile(LOCK_PREFIX "btc %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit), ADDR
++ : "Ir" (nr) : "memory");
++
++ return oldbit;
++}
++
++static inline int constant_test_bit(int nr, const volatile void *addr)
++{
++ return ((1UL << (nr % BITS_PER_LONG)) &
++ (((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
++}
++
++static inline int variable_test_bit(int nr, volatile const void *addr)
++{
++ int oldbit;
++
++ asm volatile("bt %2,%1\n\t"
++ "sbb %0,%0"
++ : "=r" (oldbit)
++ : "m" (*(unsigned long *)addr), "Ir" (nr));
++
++ return oldbit;
++}
++
++#if 0 /* Fool kernel-doc since it doesn't do macros yet */
++/**
++ * test_bit - Determine whether a bit is set
++ * @nr: bit number to test
++ * @addr: Address to start counting from
++ */
++static int test_bit(int nr, const volatile unsigned long *addr);
+#endif
-+#define __NR_shmdt 245
-+#define __NR_shmget 246
-+#define __NR_shmctl 247
+
-+#define __NR_getdents64 248
-+#define __NR_fcntl64 249
-+/* 223 is unused */
-+#define __NR_gettid 252
-+#define __NR_readahead 253
-+#define __NR_setxattr 254
-+#define __NR_lsetxattr 255
-+#define __NR_fsetxattr 256
-+#define __NR_getxattr 257
-+#define __NR_lgetxattr 258
-+#define __NR_fgetxattr 269
-+#define __NR_listxattr 260
-+#define __NR_llistxattr 261
-+#define __NR_flistxattr 262
-+#define __NR_removexattr 263
-+#define __NR_lremovexattr 264
-+#define __NR_fremovexattr 265
-+#define __NR_tkill 266
-+#define __NR_sendfile64 267
-+#define __NR_futex 268
-+#define __NR_sched_setaffinity 269
-+#define __NR_sched_getaffinity 270
-+#define __NR_set_thread_area 271
-+#define __NR_get_thread_area 272
-+#define __NR_io_setup 273
-+#define __NR_io_destroy 274
-+#define __NR_io_getevents 275
-+#define __NR_io_submit 276
-+#define __NR_io_cancel 277
-+#define __NR_fadvise64 278
-+#define __NR_exit_group 280
++#define test_bit(nr,addr) \
++ (__builtin_constant_p(nr) ? \
++ constant_test_bit((nr),(addr)) : \
++ variable_test_bit((nr),(addr)))
+
-+#define __NR_lookup_dcookie 281
-+#define __NR_epoll_create 282
-+#define __NR_epoll_ctl 283
-+#define __NR_epoll_wait 284
-+#define __NR_remap_file_pages 285
-+#define __NR_set_tid_address 286
-+#define __NR_timer_create 287
-+#define __NR_timer_settime (__NR_timer_create+1)
-+#define __NR_timer_gettime (__NR_timer_create+2)
-+#define __NR_timer_getoverrun (__NR_timer_create+3)
-+#define __NR_timer_delete (__NR_timer_create+4)
-+#define __NR_clock_settime (__NR_timer_create+5)
-+#define __NR_clock_gettime (__NR_timer_create+6)
-+#define __NR_clock_getres (__NR_timer_create+7)
-+#define __NR_clock_nanosleep (__NR_timer_create+8)
-+#define __NR_statfs64 296
-+#define __NR_fstatfs64 297
-+#define __NR_tgkill 298
-+#define __NR_utimes 299
-+#define __NR_fadvise64_64 300
-+#define __NR_vserver 301
-+#define __NR_mbind 302
-+#define __NR_get_mempolicy 303
-+#define __NR_set_mempolicy 304
-+#define __NR_mq_open 305
-+#define __NR_mq_unlink (__NR_mq_open+1)
-+#define __NR_mq_timedsend (__NR_mq_open+2)
-+#define __NR_mq_timedreceive (__NR_mq_open+3)
-+#define __NR_mq_notify (__NR_mq_open+4)
-+#define __NR_mq_getsetattr (__NR_mq_open+5)
-+#define __NR_kexec_load 311
-+#define __NR_waitid 312
-+#define __NR_add_key 313
-+#define __NR_request_key 314
-+#define __NR_keyctl 315
-+#define __NR_ioprio_set 316
-+#define __NR_ioprio_get 317
-+#define __NR_inotify_init 318
-+#define __NR_inotify_add_watch 319
-+#define __NR_inotify_rm_watch 320
-+/* 321 is unused */
-+#define __NR_migrate_pages 322
-+#define __NR_openat 323
-+#define __NR_mkdirat 324
-+#define __NR_mknodat 325
-+#define __NR_fchownat 326
-+#define __NR_futimesat 327
-+#define __NR_fstatat64 328
-+#define __NR_unlinkat 329
-+#define __NR_renameat 330
-+#define __NR_linkat 331
-+#define __NR_symlinkat 332
-+#define __NR_readlinkat 333
-+#define __NR_fchmodat 334
-+#define __NR_faccessat 335
-+#define __NR_pselect6 336
-+#define __NR_ppoll 337
-+#define __NR_unshare 338
-+#define __NR_set_robust_list 339
-+#define __NR_get_robust_list 340
-+#define __NR_splice 341
-+#define __NR_sync_file_range 342
-+#define __NR_tee 343
-+#define __NR_vmsplice 344
-+#define __NR_move_pages 345
-+#define __NR_getcpu 346
-+#define __NR_epoll_pwait 347
-+#define __NR_utimensat 348
-+#define __NR_signalfd 349
-+#define __NR_timerfd 350
-+#define __NR_eventfd 351
-+#define __NR_fallocate 352
++#undef ADDR
++
+ #ifdef CONFIG_X86_32
+ # include "bitops_32.h"
+ #else
+ # include "bitops_64.h"
+ #endif
++
++#endif /* _ASM_X86_BITOPS_H */
+diff --git a/include/asm-x86/bitops_32.h b/include/asm-x86/bitops_32.h
+index 0b40f6d..e4d75fc 100644
+--- a/include/asm-x86/bitops_32.h
++++ b/include/asm-x86/bitops_32.h
+@@ -5,320 +5,12 @@
+ * Copyright 1992, Linus Torvalds.
+ */
+
+-#ifndef _LINUX_BITOPS_H
+-#error only <linux/bitops.h> can be included directly
+-#endif
+-
+-#include <linux/compiler.h>
+-#include <asm/alternative.h>
+-
+-/*
+- * These have to be done with inline assembly: that way the bit-setting
+- * is guaranteed to be atomic. All bit operations return 0 if the bit
+- * was cleared before the operation and != 0 if it was not.
+- *
+- * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+- */
+-
+-#define ADDR (*(volatile long *) addr)
+-
+-/**
+- * set_bit - Atomically set a bit in memory
+- * @nr: the bit to set
+- * @addr: the address to start counting from
+- *
+- * This function is atomic and may not be reordered. See __set_bit()
+- * if you do not require the atomic guarantees.
+- *
+- * Note: there are no guarantees that this function will not be reordered
+- * on non x86 architectures, so if you are writing portable code,
+- * make sure not to rely on its reordering guarantees.
+- *
+- * Note that @nr may be almost arbitrarily large; this function is not
+- * restricted to acting on a single-word quantity.
+- */
+-static inline void set_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btsl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/**
+- * __set_bit - Set a bit in memory
+- * @nr: the bit to set
+- * @addr: the address to start counting from
+- *
+- * Unlike set_bit(), this function is non-atomic and may be reordered.
+- * If it's called on the same region of memory simultaneously, the effect
+- * may be that only one operation succeeds.
+- */
+-static inline void __set_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__(
+- "btsl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/**
+- * clear_bit - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * clear_bit() is atomic and may not be reordered. However, it does
+- * not contain a memory barrier, so if it is used for locking purposes,
+- * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+- * in order to ensure changes are visible on other processors.
+- */
+-static inline void clear_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btrl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/*
+- * clear_bit_unlock - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * clear_bit() is atomic and implies release semantics before the memory
+- * operation. It can be used for an unlock.
+- */
+-static inline void clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
+-{
+- barrier();
+- clear_bit(nr, addr);
+-}
+-
+-static inline void __clear_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__ __volatile__(
+- "btrl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/*
+- * __clear_bit_unlock - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * __clear_bit() is non-atomic and implies release semantics before the memory
+- * operation. It can be used for an unlock if no other CPUs can concurrently
+- * modify other bits in the word.
+- *
+- * No memory barrier is required here, because x86 cannot reorder stores past
+- * older loads. Same principle as spin_unlock.
+- */
+-static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
+-{
+- barrier();
+- __clear_bit(nr, addr);
+-}
+-
+-#define smp_mb__before_clear_bit() barrier()
+-#define smp_mb__after_clear_bit() barrier()
+-
+-/**
+- * __change_bit - Toggle a bit in memory
+- * @nr: the bit to change
+- * @addr: the address to start counting from
+- *
+- * Unlike change_bit(), this function is non-atomic and may be reordered.
+- * If it's called on the same region of memory simultaneously, the effect
+- * may be that only one operation succeeds.
+- */
+-static inline void __change_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__ __volatile__(
+- "btcl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/**
+- * change_bit - Toggle a bit in memory
+- * @nr: Bit to change
+- * @addr: Address to start counting from
+- *
+- * change_bit() is atomic and may not be reordered. It may be
+- * reordered on other architectures than x86.
+- * Note that @nr may be almost arbitrarily large; this function is not
+- * restricted to acting on a single-word quantity.
+- */
+-static inline void change_bit(int nr, volatile unsigned long * addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btcl %1,%0"
+- :"+m" (ADDR)
+- :"Ir" (nr));
+-}
+-
+-/**
+- * test_and_set_bit - Set a bit and return its old value
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It may be reordered on other architectures than x86.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_set_bit(int nr, volatile unsigned long * addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btsl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * test_and_set_bit_lock - Set a bit and return its old value for lock
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This is the same as test_and_set_bit on x86.
+- */
+-static inline int test_and_set_bit_lock(int nr, volatile unsigned long *addr)
+-{
+- return test_and_set_bit(nr, addr);
+-}
+-
+-/**
+- * __test_and_set_bit - Set a bit and return its old value
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This operation is non-atomic and can be reordered.
+- * If two examples of this operation race, one can appear to succeed
+- * but actually fail. You must protect multiple accesses with a lock.
+- */
+-static inline int __test_and_set_bit(int nr, volatile unsigned long * addr)
+-{
+- int oldbit;
+-
+- __asm__(
+- "btsl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr));
+- return oldbit;
+-}
+-
+-/**
+- * test_and_clear_bit - Clear a bit and return its old value
+- * @nr: Bit to clear
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It can be reordered on architectures other than x86.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_clear_bit(int nr, volatile unsigned long * addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btrl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * __test_and_clear_bit - Clear a bit and return its old value
+- * @nr: Bit to clear
+- * @addr: Address to count from
+- *
+- * This operation is non-atomic and can be reordered.
+- * If two examples of this operation race, one can appear to succeed
+- * but actually fail. You must protect multiple accesses with a lock.
+- */
+-static inline int __test_and_clear_bit(int nr, volatile unsigned long *addr)
+-{
+- int oldbit;
+-
+- __asm__(
+- "btrl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr));
+- return oldbit;
+-}
+-
+-/* WARNING: non atomic and it can be reordered! */
+-static inline int __test_and_change_bit(int nr, volatile unsigned long *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__(
+- "btcl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * test_and_change_bit - Change a bit and return its old value
+- * @nr: Bit to change
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_change_bit(int nr, volatile unsigned long* addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btcl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),"+m" (ADDR)
+- :"Ir" (nr) : "memory");
+- return oldbit;
+-}
+-
+-#if 0 /* Fool kernel-doc since it doesn't do macros yet */
+-/**
+- * test_bit - Determine whether a bit is set
+- * @nr: bit number to test
+- * @addr: Address to start counting from
+- */
+-static int test_bit(int nr, const volatile void * addr);
+-#endif
+-
+-static __always_inline int constant_test_bit(int nr, const volatile unsigned long *addr)
+-{
+- return ((1UL << (nr & 31)) & (addr[nr >> 5])) != 0;
+-}
+-
+-static inline int variable_test_bit(int nr, const volatile unsigned long * addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__(
+- "btl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit)
+- :"m" (ADDR),"Ir" (nr));
+- return oldbit;
+-}
+-
+-#define test_bit(nr,addr) \
+-(__builtin_constant_p(nr) ? \
+- constant_test_bit((nr),(addr)) : \
+- variable_test_bit((nr),(addr)))
+-
+-#undef ADDR
+-
+ /**
+ * find_first_zero_bit - find the first zero bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+- * Returns the bit-number of the first zero bit, not the number of the byte
++ * Returns the bit number of the first zero bit, not the number of the byte
+ * containing a bit.
+ */
+ static inline int find_first_zero_bit(const unsigned long *addr, unsigned size)
+@@ -348,7 +40,7 @@ static inline int find_first_zero_bit(const unsigned long *addr, unsigned size)
+ /**
+ * find_next_zero_bit - find the first zero bit in a memory region
+ * @addr: The address to base the search on
+- * @offset: The bitnumber to start searching at
++ * @offset: The bit number to start searching at
+ * @size: The maximum size to search
+ */
+ int find_next_zero_bit(const unsigned long *addr, int size, int offset);
+@@ -372,7 +64,7 @@ static inline unsigned long __ffs(unsigned long word)
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+- * Returns the bit-number of the first set bit, not the number of the byte
++ * Returns the bit number of the first set bit, not the number of the byte
+ * containing a bit.
+ */
+ static inline unsigned find_first_bit(const unsigned long *addr, unsigned size)
+@@ -391,7 +83,7 @@ static inline unsigned find_first_bit(const unsigned long *addr, unsigned size)
+ /**
+ * find_next_bit - find the first set bit in a memory region
+ * @addr: The address to base the search on
+- * @offset: The bitnumber to start searching at
++ * @offset: The bit number to start searching at
+ * @size: The maximum size to search
+ */
+ int find_next_bit(const unsigned long *addr, int size, int offset);
+@@ -460,10 +152,10 @@ static inline int fls(int x)
+
+ #include <asm-generic/bitops/ext2-non-atomic.h>
+
+-#define ext2_set_bit_atomic(lock,nr,addr) \
+- test_and_set_bit((nr),(unsigned long*)addr)
+-#define ext2_clear_bit_atomic(lock,nr, addr) \
+- test_and_clear_bit((nr),(unsigned long*)addr)
++#define ext2_set_bit_atomic(lock, nr, addr) \
++ test_and_set_bit((nr), (unsigned long *)addr)
++#define ext2_clear_bit_atomic(lock, nr, addr) \
++ test_and_clear_bit((nr), (unsigned long *)addr)
+
+ #include <asm-generic/bitops/minix.h>
+
+diff --git a/include/asm-x86/bitops_64.h b/include/asm-x86/bitops_64.h
+index 766bcc0..48adbf5 100644
+--- a/include/asm-x86/bitops_64.h
++++ b/include/asm-x86/bitops_64.h
+@@ -5,303 +5,6 @@
+ * Copyright 1992, Linus Torvalds.
+ */
+
+-#ifndef _LINUX_BITOPS_H
+-#error only <linux/bitops.h> can be included directly
+-#endif
+-
+-#include <asm/alternative.h>
+-
+-#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
+-/* Technically wrong, but this avoids compilation errors on some gcc
+- versions. */
+-#define ADDR "=m" (*(volatile long *) addr)
+-#else
+-#define ADDR "+m" (*(volatile long *) addr)
+-#endif
+-
+-/**
+- * set_bit - Atomically set a bit in memory
+- * @nr: the bit to set
+- * @addr: the address to start counting from
+- *
+- * This function is atomic and may not be reordered. See __set_bit()
+- * if you do not require the atomic guarantees.
+- * Note that @nr may be almost arbitrarily large; this function is not
+- * restricted to acting on a single-word quantity.
+- */
+-static inline void set_bit(int nr, volatile void *addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btsl %1,%0"
+- :ADDR
+- :"dIr" (nr) : "memory");
+-}
+-
+-/**
+- * __set_bit - Set a bit in memory
+- * @nr: the bit to set
+- * @addr: the address to start counting from
+- *
+- * Unlike set_bit(), this function is non-atomic and may be reordered.
+- * If it's called on the same region of memory simultaneously, the effect
+- * may be that only one operation succeeds.
+- */
+-static inline void __set_bit(int nr, volatile void *addr)
+-{
+- __asm__ volatile(
+- "btsl %1,%0"
+- :ADDR
+- :"dIr" (nr) : "memory");
+-}
+-
+-/**
+- * clear_bit - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * clear_bit() is atomic and may not be reordered. However, it does
+- * not contain a memory barrier, so if it is used for locking purposes,
+- * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+- * in order to ensure changes are visible on other processors.
+- */
+-static inline void clear_bit(int nr, volatile void *addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btrl %1,%0"
+- :ADDR
+- :"dIr" (nr));
+-}
+-
+-/*
+- * clear_bit_unlock - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * clear_bit() is atomic and implies release semantics before the memory
+- * operation. It can be used for an unlock.
+- */
+-static inline void clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
+-{
+- barrier();
+- clear_bit(nr, addr);
+-}
+-
+-static inline void __clear_bit(int nr, volatile void *addr)
+-{
+- __asm__ __volatile__(
+- "btrl %1,%0"
+- :ADDR
+- :"dIr" (nr));
+-}
+-
+-/*
+- * __clear_bit_unlock - Clears a bit in memory
+- * @nr: Bit to clear
+- * @addr: Address to start counting from
+- *
+- * __clear_bit() is non-atomic and implies release semantics before the memory
+- * operation. It can be used for an unlock if no other CPUs can concurrently
+- * modify other bits in the word.
+- *
+- * No memory barrier is required here, because x86 cannot reorder stores past
+- * older loads. Same principle as spin_unlock.
+- */
+-static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
+-{
+- barrier();
+- __clear_bit(nr, addr);
+-}
+-
+-#define smp_mb__before_clear_bit() barrier()
+-#define smp_mb__after_clear_bit() barrier()
+-
+-/**
+- * __change_bit - Toggle a bit in memory
+- * @nr: the bit to change
+- * @addr: the address to start counting from
+- *
+- * Unlike change_bit(), this function is non-atomic and may be reordered.
+- * If it's called on the same region of memory simultaneously, the effect
+- * may be that only one operation succeeds.
+- */
+-static inline void __change_bit(int nr, volatile void *addr)
+-{
+- __asm__ __volatile__(
+- "btcl %1,%0"
+- :ADDR
+- :"dIr" (nr));
+-}
+-
+-/**
+- * change_bit - Toggle a bit in memory
+- * @nr: Bit to change
+- * @addr: Address to start counting from
+- *
+- * change_bit() is atomic and may not be reordered.
+- * Note that @nr may be almost arbitrarily large; this function is not
+- * restricted to acting on a single-word quantity.
+- */
+-static inline void change_bit(int nr, volatile void *addr)
+-{
+- __asm__ __volatile__( LOCK_PREFIX
+- "btcl %1,%0"
+- :ADDR
+- :"dIr" (nr));
+-}
+-
+-/**
+- * test_and_set_bit - Set a bit and return its old value
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_set_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btsl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * test_and_set_bit_lock - Set a bit and return its old value for lock
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This is the same as test_and_set_bit on x86.
+- */
+-static inline int test_and_set_bit_lock(int nr, volatile void *addr)
+-{
+- return test_and_set_bit(nr, addr);
+-}
+-
+-/**
+- * __test_and_set_bit - Set a bit and return its old value
+- * @nr: Bit to set
+- * @addr: Address to count from
+- *
+- * This operation is non-atomic and can be reordered.
+- * If two examples of this operation race, one can appear to succeed
+- * but actually fail. You must protect multiple accesses with a lock.
+- */
+-static inline int __test_and_set_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__(
+- "btsl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr));
+- return oldbit;
+-}
+-
+-/**
+- * test_and_clear_bit - Clear a bit and return its old value
+- * @nr: Bit to clear
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_clear_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btrl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * __test_and_clear_bit - Clear a bit and return its old value
+- * @nr: Bit to clear
+- * @addr: Address to count from
+- *
+- * This operation is non-atomic and can be reordered.
+- * If two examples of this operation race, one can appear to succeed
+- * but actually fail. You must protect multiple accesses with a lock.
+- */
+-static inline int __test_and_clear_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__(
+- "btrl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr));
+- return oldbit;
+-}
+-
+-/* WARNING: non atomic and it can be reordered! */
+-static inline int __test_and_change_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__(
+- "btcl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr) : "memory");
+- return oldbit;
+-}
+-
+-/**
+- * test_and_change_bit - Change a bit and return its old value
+- * @nr: Bit to change
+- * @addr: Address to count from
+- *
+- * This operation is atomic and cannot be reordered.
+- * It also implies a memory barrier.
+- */
+-static inline int test_and_change_bit(int nr, volatile void *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__( LOCK_PREFIX
+- "btcl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit),ADDR
+- :"dIr" (nr) : "memory");
+- return oldbit;
+-}
+-
+-#if 0 /* Fool kernel-doc since it doesn't do macros yet */
+-/**
+- * test_bit - Determine whether a bit is set
+- * @nr: bit number to test
+- * @addr: Address to start counting from
+- */
+-static int test_bit(int nr, const volatile void *addr);
+-#endif
+-
+-static inline int constant_test_bit(int nr, const volatile void *addr)
+-{
+- return ((1UL << (nr & 31)) & (((const volatile unsigned int *) addr)[nr >> 5])) != 0;
+-}
+-
+-static inline int variable_test_bit(int nr, volatile const void *addr)
+-{
+- int oldbit;
+-
+- __asm__ __volatile__(
+- "btl %2,%1\n\tsbbl %0,%0"
+- :"=r" (oldbit)
+- :"m" (*(volatile long *)addr),"dIr" (nr));
+- return oldbit;
+-}
+-
+-#define test_bit(nr,addr) \
+-(__builtin_constant_p(nr) ? \
+- constant_test_bit((nr),(addr)) : \
+- variable_test_bit((nr),(addr)))
+-
+-#undef ADDR
+-
+ extern long find_first_zero_bit(const unsigned long *addr, unsigned long size);
+ extern long find_next_zero_bit(const unsigned long *addr, long size, long offset);
+ extern long find_first_bit(const unsigned long *addr, unsigned long size);
+diff --git a/include/asm-x86/bootparam.h b/include/asm-x86/bootparam.h
+index 19f3ddf..5115135 100644
+--- a/include/asm-x86/bootparam.h
++++ b/include/asm-x86/bootparam.h
+@@ -54,13 +54,14 @@ struct sys_desc_table {
+ };
+
+ struct efi_info {
+- __u32 _pad1;
++ __u32 efi_loader_signature;
+ __u32 efi_systab;
+ __u32 efi_memdesc_size;
+ __u32 efi_memdesc_version;
+ __u32 efi_memmap;
+ __u32 efi_memmap_size;
+- __u32 _pad2[2];
++ __u32 efi_systab_hi;
++ __u32 efi_memmap_hi;
+ };
+
+ /* The so-called "zeropage" */
+diff --git a/include/asm-x86/bug.h b/include/asm-x86/bug.h
+index fd8bdc6..8d477a2 100644
+--- a/include/asm-x86/bug.h
++++ b/include/asm-x86/bug.h
+@@ -33,9 +33,6 @@
+ } while(0)
+ #endif
+
+-void out_of_line_bug(void);
+-#else /* CONFIG_BUG */
+-static inline void out_of_line_bug(void) { }
+ #endif /* !CONFIG_BUG */
+
+ #include <asm-generic/bug.h>
+diff --git a/include/asm-x86/bugs.h b/include/asm-x86/bugs.h
+index aac8317..3fcc30d 100644
+--- a/include/asm-x86/bugs.h
++++ b/include/asm-x86/bugs.h
+@@ -1,6 +1,7 @@
+ #ifndef _ASM_X86_BUGS_H
+ #define _ASM_X86_BUGS_H
+
+-void check_bugs(void);
++extern void check_bugs(void);
++extern int ppro_with_ram_bug(void);
+
+ #endif /* _ASM_X86_BUGS_H */
+diff --git a/include/asm-x86/cacheflush.h b/include/asm-x86/cacheflush.h
+index 9411a2d..8dd8c5e 100644
+--- a/include/asm-x86/cacheflush.h
++++ b/include/asm-x86/cacheflush.h
+@@ -24,18 +24,35 @@
+ #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+ memcpy(dst, src, len)
+
+-void global_flush_tlb(void);
+-int change_page_attr(struct page *page, int numpages, pgprot_t prot);
+-int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot);
+-void clflush_cache_range(void *addr, int size);
+-
+-#ifdef CONFIG_DEBUG_PAGEALLOC
+-/* internal debugging function */
+-void kernel_map_pages(struct page *page, int numpages, int enable);
+-#endif
++int __deprecated_for_modules change_page_attr(struct page *page, int numpages,
++ pgprot_t prot);
++
++int set_pages_uc(struct page *page, int numpages);
++int set_pages_wb(struct page *page, int numpages);
++int set_pages_x(struct page *page, int numpages);
++int set_pages_nx(struct page *page, int numpages);
++int set_pages_ro(struct page *page, int numpages);
++int set_pages_rw(struct page *page, int numpages);
++
++int set_memory_uc(unsigned long addr, int numpages);
++int set_memory_wb(unsigned long addr, int numpages);
++int set_memory_x(unsigned long addr, int numpages);
++int set_memory_nx(unsigned long addr, int numpages);
++int set_memory_ro(unsigned long addr, int numpages);
++int set_memory_rw(unsigned long addr, int numpages);
++int set_memory_np(unsigned long addr, int numpages);
++
++void clflush_cache_range(void *addr, unsigned int size);
+
+ #ifdef CONFIG_DEBUG_RODATA
+ void mark_rodata_ro(void);
+ #endif
++#ifdef CONFIG_DEBUG_RODATA_TEST
++void rodata_test(void);
++#else
++static inline void rodata_test(void)
++{
++}
++#endif
+
+ #endif
+diff --git a/include/asm-x86/calling.h b/include/asm-x86/calling.h
+index 6f4f63a..f13e62e 100644
+--- a/include/asm-x86/calling.h
++++ b/include/asm-x86/calling.h
+@@ -1,162 +1,168 @@
+-/*
++/*
+ * Some macros to handle stack frames in assembly.
+- */
++ */
+
++#define R15 0
++#define R14 8
++#define R13 16
++#define R12 24
++#define RBP 32
++#define RBX 40
+
+-#define R15 0
+-#define R14 8
+-#define R13 16
+-#define R12 24
+-#define RBP 32
+-#define RBX 40
+ /* arguments: interrupts/non tracing syscalls only save upto here*/
+-#define R11 48
+-#define R10 56
+-#define R9 64
+-#define R8 72
+-#define RAX 80
+-#define RCX 88
+-#define RDX 96
+-#define RSI 104
+-#define RDI 112
+-#define ORIG_RAX 120 /* + error_code */
+-/* end of arguments */
++#define R11 48
++#define R10 56
++#define R9 64
++#define R8 72
++#define RAX 80
++#define RCX 88
++#define RDX 96
++#define RSI 104
++#define RDI 112
++#define ORIG_RAX 120 /* + error_code */
++/* end of arguments */
++
+ /* cpu exception frame or undefined in case of fast syscall. */
+-#define RIP 128
+-#define CS 136
+-#define EFLAGS 144
+-#define RSP 152
+-#define SS 160
+-#define ARGOFFSET R11
+-#define SWFRAME ORIG_RAX
++#define RIP 128
++#define CS 136
++#define EFLAGS 144
++#define RSP 152
++#define SS 160
++
++#define ARGOFFSET R11
++#define SWFRAME ORIG_RAX
+
+- .macro SAVE_ARGS addskip=0,norcx=0,nor891011=0
+- subq $9*8+\addskip,%rsp
++ .macro SAVE_ARGS addskip=0, norcx=0, nor891011=0
++ subq $9*8+\addskip, %rsp
+ CFI_ADJUST_CFA_OFFSET 9*8+\addskip
+- movq %rdi,8*8(%rsp)
+- CFI_REL_OFFSET rdi,8*8
+- movq %rsi,7*8(%rsp)
+- CFI_REL_OFFSET rsi,7*8
+- movq %rdx,6*8(%rsp)
+- CFI_REL_OFFSET rdx,6*8
++ movq %rdi, 8*8(%rsp)
++ CFI_REL_OFFSET rdi, 8*8
++ movq %rsi, 7*8(%rsp)
++ CFI_REL_OFFSET rsi, 7*8
++ movq %rdx, 6*8(%rsp)
++ CFI_REL_OFFSET rdx, 6*8
+ .if \norcx
+ .else
+- movq %rcx,5*8(%rsp)
+- CFI_REL_OFFSET rcx,5*8
++ movq %rcx, 5*8(%rsp)
++ CFI_REL_OFFSET rcx, 5*8
+ .endif
+- movq %rax,4*8(%rsp)
+- CFI_REL_OFFSET rax,4*8
++ movq %rax, 4*8(%rsp)
++ CFI_REL_OFFSET rax, 4*8
+ .if \nor891011
+ .else
+- movq %r8,3*8(%rsp)
+- CFI_REL_OFFSET r8,3*8
+- movq %r9,2*8(%rsp)
+- CFI_REL_OFFSET r9,2*8
+- movq %r10,1*8(%rsp)
+- CFI_REL_OFFSET r10,1*8
+- movq %r11,(%rsp)
+- CFI_REL_OFFSET r11,0*8
++ movq %r8, 3*8(%rsp)
++ CFI_REL_OFFSET r8, 3*8
++ movq %r9, 2*8(%rsp)
++ CFI_REL_OFFSET r9, 2*8
++ movq %r10, 1*8(%rsp)
++ CFI_REL_OFFSET r10, 1*8
++ movq %r11, (%rsp)
++ CFI_REL_OFFSET r11, 0*8
+ .endif
+ .endm
+
+-#define ARG_SKIP 9*8
+- .macro RESTORE_ARGS skiprax=0,addskip=0,skiprcx=0,skipr11=0,skipr8910=0,skiprdx=0
++#define ARG_SKIP 9*8
++
++ .macro RESTORE_ARGS skiprax=0, addskip=0, skiprcx=0, skipr11=0, \
++ skipr8910=0, skiprdx=0
+ .if \skipr11
+ .else
+- movq (%rsp),%r11
++ movq (%rsp), %r11
+ CFI_RESTORE r11
+ .endif
+ .if \skipr8910
+ .else
+- movq 1*8(%rsp),%r10
++ movq 1*8(%rsp), %r10
+ CFI_RESTORE r10
+- movq 2*8(%rsp),%r9
++ movq 2*8(%rsp), %r9
+ CFI_RESTORE r9
+- movq 3*8(%rsp),%r8
++ movq 3*8(%rsp), %r8
+ CFI_RESTORE r8
+ .endif
+ .if \skiprax
+ .else
+- movq 4*8(%rsp),%rax
++ movq 4*8(%rsp), %rax
+ CFI_RESTORE rax
+ .endif
+ .if \skiprcx
+ .else
+- movq 5*8(%rsp),%rcx
++ movq 5*8(%rsp), %rcx
+ CFI_RESTORE rcx
+ .endif
+ .if \skiprdx
+ .else
+- movq 6*8(%rsp),%rdx
++ movq 6*8(%rsp), %rdx
+ CFI_RESTORE rdx
+ .endif
+- movq 7*8(%rsp),%rsi
++ movq 7*8(%rsp), %rsi
+ CFI_RESTORE rsi
+- movq 8*8(%rsp),%rdi
++ movq 8*8(%rsp), %rdi
+ CFI_RESTORE rdi
+ .if ARG_SKIP+\addskip > 0
+- addq $ARG_SKIP+\addskip,%rsp
++ addq $ARG_SKIP+\addskip, %rsp
+ CFI_ADJUST_CFA_OFFSET -(ARG_SKIP+\addskip)
+ .endif
+- .endm
++ .endm
+
+ .macro LOAD_ARGS offset
+- movq \offset(%rsp),%r11
+- movq \offset+8(%rsp),%r10
+- movq \offset+16(%rsp),%r9
+- movq \offset+24(%rsp),%r8
+- movq \offset+40(%rsp),%rcx
+- movq \offset+48(%rsp),%rdx
+- movq \offset+56(%rsp),%rsi
+- movq \offset+64(%rsp),%rdi
+- movq \offset+72(%rsp),%rax
++ movq \offset(%rsp), %r11
++ movq \offset+8(%rsp), %r10
++ movq \offset+16(%rsp), %r9
++ movq \offset+24(%rsp), %r8
++ movq \offset+40(%rsp), %rcx
++ movq \offset+48(%rsp), %rdx
++ movq \offset+56(%rsp), %rsi
++ movq \offset+64(%rsp), %rdi
++ movq \offset+72(%rsp), %rax
+ .endm
+-
+-#define REST_SKIP 6*8
++
++#define REST_SKIP 6*8
++
+ .macro SAVE_REST
+- subq $REST_SKIP,%rsp
++ subq $REST_SKIP, %rsp
+ CFI_ADJUST_CFA_OFFSET REST_SKIP
+- movq %rbx,5*8(%rsp)
+- CFI_REL_OFFSET rbx,5*8
+- movq %rbp,4*8(%rsp)
+- CFI_REL_OFFSET rbp,4*8
+- movq %r12,3*8(%rsp)
+- CFI_REL_OFFSET r12,3*8
+- movq %r13,2*8(%rsp)
+- CFI_REL_OFFSET r13,2*8
+- movq %r14,1*8(%rsp)
+- CFI_REL_OFFSET r14,1*8
+- movq %r15,(%rsp)
+- CFI_REL_OFFSET r15,0*8
+- .endm
++ movq %rbx, 5*8(%rsp)
++ CFI_REL_OFFSET rbx, 5*8
++ movq %rbp, 4*8(%rsp)
++ CFI_REL_OFFSET rbp, 4*8
++ movq %r12, 3*8(%rsp)
++ CFI_REL_OFFSET r12, 3*8
++ movq %r13, 2*8(%rsp)
++ CFI_REL_OFFSET r13, 2*8
++ movq %r14, 1*8(%rsp)
++ CFI_REL_OFFSET r14, 1*8
++ movq %r15, (%rsp)
++ CFI_REL_OFFSET r15, 0*8
++ .endm
+
+ .macro RESTORE_REST
+- movq (%rsp),%r15
++ movq (%rsp), %r15
+ CFI_RESTORE r15
+- movq 1*8(%rsp),%r14
++ movq 1*8(%rsp), %r14
+ CFI_RESTORE r14
+- movq 2*8(%rsp),%r13
++ movq 2*8(%rsp), %r13
+ CFI_RESTORE r13
+- movq 3*8(%rsp),%r12
++ movq 3*8(%rsp), %r12
+ CFI_RESTORE r12
+- movq 4*8(%rsp),%rbp
++ movq 4*8(%rsp), %rbp
+ CFI_RESTORE rbp
+- movq 5*8(%rsp),%rbx
++ movq 5*8(%rsp), %rbx
+ CFI_RESTORE rbx
+- addq $REST_SKIP,%rsp
++ addq $REST_SKIP, %rsp
+ CFI_ADJUST_CFA_OFFSET -(REST_SKIP)
+ .endm
+-
++
+ .macro SAVE_ALL
+ SAVE_ARGS
+ SAVE_REST
+ .endm
+-
++
+ .macro RESTORE_ALL addskip=0
+ RESTORE_REST
+- RESTORE_ARGS 0,\addskip
++ RESTORE_ARGS 0, \addskip
+ .endm
+
+ .macro icebp
+ .byte 0xf1
+ .endm
++
+diff --git a/include/asm-x86/checksum_64.h b/include/asm-x86/checksum_64.h
+index 419fe88..e5f7999 100644
+--- a/include/asm-x86/checksum_64.h
++++ b/include/asm-x86/checksum_64.h
+@@ -4,7 +4,7 @@
+ /*
+ * Checksums for x86-64
+ * Copyright 2002 by Andi Kleen, SuSE Labs
+- * with some code from asm-i386/checksum.h
++ * with some code from asm-x86/checksum.h
+ */
+
+ #include <linux/compiler.h>
+diff --git a/include/asm-x86/cmpxchg_32.h b/include/asm-x86/cmpxchg_32.h
+index f86ede2..cea1dae 100644
+--- a/include/asm-x86/cmpxchg_32.h
++++ b/include/asm-x86/cmpxchg_32.h
+@@ -105,15 +105,24 @@ static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int siz
+
+ #ifdef CONFIG_X86_CMPXCHG
+ #define __HAVE_ARCH_CMPXCHG 1
+-#define cmpxchg(ptr,o,n)\
+- ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o),\
+- (unsigned long)(n),sizeof(*(ptr))))
+-#define sync_cmpxchg(ptr,o,n)\
+- ((__typeof__(*(ptr)))__sync_cmpxchg((ptr),(unsigned long)(o),\
+- (unsigned long)(n),sizeof(*(ptr))))
+-#define cmpxchg_local(ptr,o,n)\
+- ((__typeof__(*(ptr)))__cmpxchg_local((ptr),(unsigned long)(o),\
+- (unsigned long)(n),sizeof(*(ptr))))
++#define cmpxchg(ptr, o, n) \
++ ((__typeof__(*(ptr)))__cmpxchg((ptr), (unsigned long)(o), \
++ (unsigned long)(n), sizeof(*(ptr))))
++#define sync_cmpxchg(ptr, o, n) \
++ ((__typeof__(*(ptr)))__sync_cmpxchg((ptr), (unsigned long)(o), \
++ (unsigned long)(n), sizeof(*(ptr))))
++#define cmpxchg_local(ptr, o, n) \
++ ((__typeof__(*(ptr)))__cmpxchg_local((ptr), (unsigned long)(o), \
++ (unsigned long)(n), sizeof(*(ptr))))
++#endif
++
++#ifdef CONFIG_X86_CMPXCHG64
++#define cmpxchg64(ptr, o, n) \
++ ((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
++ (unsigned long long)(n)))
++#define cmpxchg64_local(ptr, o, n) \
++ ((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o),\
++ (unsigned long long)(n)))
+ #endif
+
+ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+@@ -203,6 +212,34 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
+ return old;
+ }
+
++static inline unsigned long long __cmpxchg64(volatile void *ptr,
++ unsigned long long old, unsigned long long new)
++{
++ unsigned long long prev;
++ __asm__ __volatile__(LOCK_PREFIX "cmpxchg8b %3"
++ : "=A"(prev)
++ : "b"((unsigned long)new),
++ "c"((unsigned long)(new >> 32)),
++ "m"(*__xg(ptr)),
++ "0"(old)
++ : "memory");
++ return prev;
++}
++
++static inline unsigned long long __cmpxchg64_local(volatile void *ptr,
++ unsigned long long old, unsigned long long new)
++{
++ unsigned long long prev;
++ __asm__ __volatile__("cmpxchg8b %3"
++ : "=A"(prev)
++ : "b"((unsigned long)new),
++ "c"((unsigned long)(new >> 32)),
++ "m"(*__xg(ptr)),
++ "0"(old)
++ : "memory");
++ return prev;
++}
++
+ #ifndef CONFIG_X86_CMPXCHG
+ /*
+ * Building a kernel capable running on 80386. It may be necessary to
+@@ -228,7 +265,7 @@ static inline unsigned long cmpxchg_386(volatile void *ptr, unsigned long old,
+ return old;
+ }
+
+-#define cmpxchg(ptr,o,n) \
++#define cmpxchg(ptr, o, n) \
+ ({ \
+ __typeof__(*(ptr)) __ret; \
+ if (likely(boot_cpu_data.x86 > 3)) \
+@@ -239,7 +276,7 @@ static inline unsigned long cmpxchg_386(volatile void *ptr, unsigned long old,
+ (unsigned long)(n), sizeof(*(ptr))); \
+ __ret; \
+ })
+-#define cmpxchg_local(ptr,o,n) \
++#define cmpxchg_local(ptr, o, n) \
+ ({ \
+ __typeof__(*(ptr)) __ret; \
+ if (likely(boot_cpu_data.x86 > 3)) \
+@@ -252,38 +289,37 @@ static inline unsigned long cmpxchg_386(volatile void *ptr, unsigned long old,
+ })
+ #endif
+
+-static inline unsigned long long __cmpxchg64(volatile void *ptr, unsigned long long old,
+- unsigned long long new)
+-{
+- unsigned long long prev;
+- __asm__ __volatile__(LOCK_PREFIX "cmpxchg8b %3"
+- : "=A"(prev)
+- : "b"((unsigned long)new),
+- "c"((unsigned long)(new >> 32)),
+- "m"(*__xg(ptr)),
+- "0"(old)
+- : "memory");
+- return prev;
+-}
++#ifndef CONFIG_X86_CMPXCHG64
++/*
++ * Building a kernel capable running on 80386 and 80486. It may be necessary
++ * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
++ */
+
+-static inline unsigned long long __cmpxchg64_local(volatile void *ptr,
+- unsigned long long old, unsigned long long new)
+-{
+- unsigned long long prev;
+- __asm__ __volatile__("cmpxchg8b %3"
+- : "=A"(prev)
+- : "b"((unsigned long)new),
+- "c"((unsigned long)(new >> 32)),
+- "m"(*__xg(ptr)),
+- "0"(old)
+- : "memory");
+- return prev;
+-}
++extern unsigned long long cmpxchg_486_u64(volatile void *, u64, u64);
++
++#define cmpxchg64(ptr, o, n) \
++({ \
++ __typeof__(*(ptr)) __ret; \
++ if (likely(boot_cpu_data.x86 > 4)) \
++ __ret = __cmpxchg64((ptr), (unsigned long long)(o), \
++ (unsigned long long)(n)); \
++ else \
++ __ret = cmpxchg_486_u64((ptr), (unsigned long long)(o), \
++ (unsigned long long)(n)); \
++ __ret; \
++})
++#define cmpxchg64_local(ptr, o, n) \
++({ \
++ __typeof__(*(ptr)) __ret; \
++ if (likely(boot_cpu_data.x86 > 4)) \
++ __ret = __cmpxchg64_local((ptr), (unsigned long long)(o), \
++ (unsigned long long)(n)); \
++ else \
++ __ret = cmpxchg_486_u64((ptr), (unsigned long long)(o), \
++ (unsigned long long)(n)); \
++ __ret; \
++})
++
++#endif
+
+-#define cmpxchg64(ptr,o,n)\
+- ((__typeof__(*(ptr)))__cmpxchg64((ptr),(unsigned long long)(o),\
+- (unsigned long long)(n)))
+-#define cmpxchg64_local(ptr,o,n)\
+- ((__typeof__(*(ptr)))__cmpxchg64_local((ptr),(unsigned long long)(o),\
+- (unsigned long long)(n)))
+ #endif
+diff --git a/include/asm-x86/compat.h b/include/asm-x86/compat.h
+index 66ba798..b270ee0 100644
+--- a/include/asm-x86/compat.h
++++ b/include/asm-x86/compat.h
+@@ -207,7 +207,7 @@ static inline compat_uptr_t ptr_to_compat(void __user *uptr)
+ static __inline__ void __user *compat_alloc_user_space(long len)
+ {
+ struct pt_regs *regs = task_pt_regs(current);
+- return (void __user *)regs->rsp - len;
++ return (void __user *)regs->sp - len;
+ }
+
+ static inline int is_compat_task(void)
+diff --git a/include/asm-x86/cpu.h b/include/asm-x86/cpu.h
+index b1bc7b1..85ece5f 100644
+--- a/include/asm-x86/cpu.h
++++ b/include/asm-x86/cpu.h
+@@ -7,7 +7,7 @@
+ #include <linux/nodemask.h>
+ #include <linux/percpu.h>
+
+-struct i386_cpu {
++struct x86_cpu {
+ struct cpu cpu;
+ };
+ extern int arch_register_cpu(int num);
+diff --git a/include/asm-x86/cpufeature.h b/include/asm-x86/cpufeature.h
+index b7160a4..3fb7dfa 100644
+--- a/include/asm-x86/cpufeature.h
++++ b/include/asm-x86/cpufeature.h
+@@ -1,5 +1,207 @@
+-#ifdef CONFIG_X86_32
+-# include "cpufeature_32.h"
++/*
++ * Defines x86 CPU feature bits
++ */
++#ifndef _ASM_X86_CPUFEATURE_H
++#define _ASM_X86_CPUFEATURE_H
++
++#ifndef __ASSEMBLY__
++#include <linux/bitops.h>
++#endif
++#include <asm/required-features.h>
++
++#define NCAPINTS 8 /* N 32-bit words worth of info */
++
++/* Intel-defined CPU features, CPUID level 0x00000001 (edx), word 0 */
++#define X86_FEATURE_FPU (0*32+ 0) /* Onboard FPU */
++#define X86_FEATURE_VME (0*32+ 1) /* Virtual Mode Extensions */
++#define X86_FEATURE_DE (0*32+ 2) /* Debugging Extensions */
++#define X86_FEATURE_PSE (0*32+ 3) /* Page Size Extensions */
++#define X86_FEATURE_TSC (0*32+ 4) /* Time Stamp Counter */
++#define X86_FEATURE_MSR (0*32+ 5) /* Model-Specific Registers, RDMSR, WRMSR */
++#define X86_FEATURE_PAE (0*32+ 6) /* Physical Address Extensions */
++#define X86_FEATURE_MCE (0*32+ 7) /* Machine Check Architecture */
++#define X86_FEATURE_CX8 (0*32+ 8) /* CMPXCHG8 instruction */
++#define X86_FEATURE_APIC (0*32+ 9) /* Onboard APIC */
++#define X86_FEATURE_SEP (0*32+11) /* SYSENTER/SYSEXIT */
++#define X86_FEATURE_MTRR (0*32+12) /* Memory Type Range Registers */
++#define X86_FEATURE_PGE (0*32+13) /* Page Global Enable */
++#define X86_FEATURE_MCA (0*32+14) /* Machine Check Architecture */
++#define X86_FEATURE_CMOV (0*32+15) /* CMOV instruction (FCMOVCC and FCOMI too if FPU present) */
++#define X86_FEATURE_PAT (0*32+16) /* Page Attribute Table */
++#define X86_FEATURE_PSE36 (0*32+17) /* 36-bit PSEs */
++#define X86_FEATURE_PN (0*32+18) /* Processor serial number */
++#define X86_FEATURE_CLFLSH (0*32+19) /* Supports the CLFLUSH instruction */
++#define X86_FEATURE_DS (0*32+21) /* Debug Store */
++#define X86_FEATURE_ACPI (0*32+22) /* ACPI via MSR */
++#define X86_FEATURE_MMX (0*32+23) /* Multimedia Extensions */
++#define X86_FEATURE_FXSR (0*32+24) /* FXSAVE and FXRSTOR instructions (fast save and restore */
++ /* of FPU context), and CR4.OSFXSR available */
++#define X86_FEATURE_XMM (0*32+25) /* Streaming SIMD Extensions */
++#define X86_FEATURE_XMM2 (0*32+26) /* Streaming SIMD Extensions-2 */
++#define X86_FEATURE_SELFSNOOP (0*32+27) /* CPU self snoop */
++#define X86_FEATURE_HT (0*32+28) /* Hyper-Threading */
++#define X86_FEATURE_ACC (0*32+29) /* Automatic clock control */
++#define X86_FEATURE_IA64 (0*32+30) /* IA-64 processor */
++
++/* AMD-defined CPU features, CPUID level 0x80000001, word 1 */
++/* Don't duplicate feature flags which are redundant with Intel! */
++#define X86_FEATURE_SYSCALL (1*32+11) /* SYSCALL/SYSRET */
++#define X86_FEATURE_MP (1*32+19) /* MP Capable. */
++#define X86_FEATURE_NX (1*32+20) /* Execute Disable */
++#define X86_FEATURE_MMXEXT (1*32+22) /* AMD MMX extensions */
++#define X86_FEATURE_RDTSCP (1*32+27) /* RDTSCP */
++#define X86_FEATURE_LM (1*32+29) /* Long Mode (x86-64) */
++#define X86_FEATURE_3DNOWEXT (1*32+30) /* AMD 3DNow! extensions */
++#define X86_FEATURE_3DNOW (1*32+31) /* 3DNow! */
++
++/* Transmeta-defined CPU features, CPUID level 0x80860001, word 2 */
++#define X86_FEATURE_RECOVERY (2*32+ 0) /* CPU in recovery mode */
++#define X86_FEATURE_LONGRUN (2*32+ 1) /* Longrun power control */
++#define X86_FEATURE_LRTI (2*32+ 3) /* LongRun table interface */
++
++/* Other features, Linux-defined mapping, word 3 */
++/* This range is used for feature bits which conflict or are synthesized */
++#define X86_FEATURE_CXMMX (3*32+ 0) /* Cyrix MMX extensions */
++#define X86_FEATURE_K6_MTRR (3*32+ 1) /* AMD K6 nonstandard MTRRs */
++#define X86_FEATURE_CYRIX_ARR (3*32+ 2) /* Cyrix ARRs (= MTRRs) */
++#define X86_FEATURE_CENTAUR_MCR (3*32+ 3) /* Centaur MCRs (= MTRRs) */
++/* cpu types for specific tunings: */
++#define X86_FEATURE_K8 (3*32+ 4) /* Opteron, Athlon64 */
++#define X86_FEATURE_K7 (3*32+ 5) /* Athlon */
++#define X86_FEATURE_P3 (3*32+ 6) /* P3 */
++#define X86_FEATURE_P4 (3*32+ 7) /* P4 */
++#define X86_FEATURE_CONSTANT_TSC (3*32+ 8) /* TSC ticks at a constant rate */
++#define X86_FEATURE_UP (3*32+ 9) /* smp kernel running on up */
++#define X86_FEATURE_FXSAVE_LEAK (3*32+10) /* FXSAVE leaks FOP/FIP/FOP */
++#define X86_FEATURE_ARCH_PERFMON (3*32+11) /* Intel Architectural PerfMon */
++#define X86_FEATURE_PEBS (3*32+12) /* Precise-Event Based Sampling */
++#define X86_FEATURE_BTS (3*32+13) /* Branch Trace Store */
++/* 14 free */
++/* 15 free */
++#define X86_FEATURE_REP_GOOD (3*32+16) /* rep microcode works well on this CPU */
++#define X86_FEATURE_MFENCE_RDTSC (3*32+17) /* Mfence synchronizes RDTSC */
++#define X86_FEATURE_LFENCE_RDTSC (3*32+18) /* Lfence synchronizes RDTSC */
++
++/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
++#define X86_FEATURE_XMM3 (4*32+ 0) /* Streaming SIMD Extensions-3 */
++#define X86_FEATURE_MWAIT (4*32+ 3) /* Monitor/Mwait support */
++#define X86_FEATURE_DSCPL (4*32+ 4) /* CPL Qualified Debug Store */
++#define X86_FEATURE_EST (4*32+ 7) /* Enhanced SpeedStep */
++#define X86_FEATURE_TM2 (4*32+ 8) /* Thermal Monitor 2 */
++#define X86_FEATURE_CID (4*32+10) /* Context ID */
++#define X86_FEATURE_CX16 (4*32+13) /* CMPXCHG16B */
++#define X86_FEATURE_XTPR (4*32+14) /* Send Task Priority Messages */
++#define X86_FEATURE_DCA (4*32+18) /* Direct Cache Access */
++
++/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
++#define X86_FEATURE_XSTORE (5*32+ 2) /* on-CPU RNG present (xstore insn) */
++#define X86_FEATURE_XSTORE_EN (5*32+ 3) /* on-CPU RNG enabled */
++#define X86_FEATURE_XCRYPT (5*32+ 6) /* on-CPU crypto (xcrypt insn) */
++#define X86_FEATURE_XCRYPT_EN (5*32+ 7) /* on-CPU crypto enabled */
++#define X86_FEATURE_ACE2 (5*32+ 8) /* Advanced Cryptography Engine v2 */
++#define X86_FEATURE_ACE2_EN (5*32+ 9) /* ACE v2 enabled */
++#define X86_FEATURE_PHE (5*32+ 10) /* PadLock Hash Engine */
++#define X86_FEATURE_PHE_EN (5*32+ 11) /* PHE enabled */
++#define X86_FEATURE_PMM (5*32+ 12) /* PadLock Montgomery Multiplier */
++#define X86_FEATURE_PMM_EN (5*32+ 13) /* PMM enabled */
++
++/* More extended AMD flags: CPUID level 0x80000001, ecx, word 6 */
++#define X86_FEATURE_LAHF_LM (6*32+ 0) /* LAHF/SAHF in long mode */
++#define X86_FEATURE_CMP_LEGACY (6*32+ 1) /* If yes HyperThreading not valid */
++
++/*
++ * Auxiliary flags: Linux defined - For features scattered in various
++ * CPUID levels like 0x6, 0xA etc
++ */
++#define X86_FEATURE_IDA (7*32+ 0) /* Intel Dynamic Acceleration */
++
++#define cpu_has(c, bit) \
++ (__builtin_constant_p(bit) && \
++ ( (((bit)>>5)==0 && (1UL<<((bit)&31) & REQUIRED_MASK0)) || \
++ (((bit)>>5)==1 && (1UL<<((bit)&31) & REQUIRED_MASK1)) || \
++ (((bit)>>5)==2 && (1UL<<((bit)&31) & REQUIRED_MASK2)) || \
++ (((bit)>>5)==3 && (1UL<<((bit)&31) & REQUIRED_MASK3)) || \
++ (((bit)>>5)==4 && (1UL<<((bit)&31) & REQUIRED_MASK4)) || \
++ (((bit)>>5)==5 && (1UL<<((bit)&31) & REQUIRED_MASK5)) || \
++ (((bit)>>5)==6 && (1UL<<((bit)&31) & REQUIRED_MASK6)) || \
++ (((bit)>>5)==7 && (1UL<<((bit)&31) & REQUIRED_MASK7)) ) \
++ ? 1 : \
++ test_bit(bit, (unsigned long *)((c)->x86_capability)))
++#define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
++
++#define set_cpu_cap(c, bit) set_bit(bit, (unsigned long *)((c)->x86_capability))
++#define clear_cpu_cap(c, bit) clear_bit(bit, (unsigned long *)((c)->x86_capability))
++#define setup_clear_cpu_cap(bit) do { \
++ clear_cpu_cap(&boot_cpu_data, bit); \
++ set_bit(bit, cleared_cpu_caps); \
++} while (0)
++#define setup_force_cpu_cap(bit) do { \
++ set_cpu_cap(&boot_cpu_data, bit); \
++ clear_bit(bit, cleared_cpu_caps); \
++} while (0)
++
++#define cpu_has_fpu boot_cpu_has(X86_FEATURE_FPU)
++#define cpu_has_vme boot_cpu_has(X86_FEATURE_VME)
++#define cpu_has_de boot_cpu_has(X86_FEATURE_DE)
++#define cpu_has_pse boot_cpu_has(X86_FEATURE_PSE)
++#define cpu_has_tsc boot_cpu_has(X86_FEATURE_TSC)
++#define cpu_has_pae boot_cpu_has(X86_FEATURE_PAE)
++#define cpu_has_pge boot_cpu_has(X86_FEATURE_PGE)
++#define cpu_has_apic boot_cpu_has(X86_FEATURE_APIC)
++#define cpu_has_sep boot_cpu_has(X86_FEATURE_SEP)
++#define cpu_has_mtrr boot_cpu_has(X86_FEATURE_MTRR)
++#define cpu_has_mmx boot_cpu_has(X86_FEATURE_MMX)
++#define cpu_has_fxsr boot_cpu_has(X86_FEATURE_FXSR)
++#define cpu_has_xmm boot_cpu_has(X86_FEATURE_XMM)
++#define cpu_has_xmm2 boot_cpu_has(X86_FEATURE_XMM2)
++#define cpu_has_xmm3 boot_cpu_has(X86_FEATURE_XMM3)
++#define cpu_has_ht boot_cpu_has(X86_FEATURE_HT)
++#define cpu_has_mp boot_cpu_has(X86_FEATURE_MP)
++#define cpu_has_nx boot_cpu_has(X86_FEATURE_NX)
++#define cpu_has_k6_mtrr boot_cpu_has(X86_FEATURE_K6_MTRR)
++#define cpu_has_cyrix_arr boot_cpu_has(X86_FEATURE_CYRIX_ARR)
++#define cpu_has_centaur_mcr boot_cpu_has(X86_FEATURE_CENTAUR_MCR)
++#define cpu_has_xstore boot_cpu_has(X86_FEATURE_XSTORE)
++#define cpu_has_xstore_enabled boot_cpu_has(X86_FEATURE_XSTORE_EN)
++#define cpu_has_xcrypt boot_cpu_has(X86_FEATURE_XCRYPT)
++#define cpu_has_xcrypt_enabled boot_cpu_has(X86_FEATURE_XCRYPT_EN)
++#define cpu_has_ace2 boot_cpu_has(X86_FEATURE_ACE2)
++#define cpu_has_ace2_enabled boot_cpu_has(X86_FEATURE_ACE2_EN)
++#define cpu_has_phe boot_cpu_has(X86_FEATURE_PHE)
++#define cpu_has_phe_enabled boot_cpu_has(X86_FEATURE_PHE_EN)
++#define cpu_has_pmm boot_cpu_has(X86_FEATURE_PMM)
++#define cpu_has_pmm_enabled boot_cpu_has(X86_FEATURE_PMM_EN)
++#define cpu_has_ds boot_cpu_has(X86_FEATURE_DS)
++#define cpu_has_pebs boot_cpu_has(X86_FEATURE_PEBS)
++#define cpu_has_clflush boot_cpu_has(X86_FEATURE_CLFLSH)
++#define cpu_has_bts boot_cpu_has(X86_FEATURE_BTS)
++
++#if defined(CONFIG_X86_INVLPG) || defined(CONFIG_X86_64)
++# define cpu_has_invlpg 1
+ #else
+-# include "cpufeature_64.h"
++# define cpu_has_invlpg (boot_cpu_data.x86 > 3)
+ #endif
++
++#ifdef CONFIG_X86_64
++
++#undef cpu_has_vme
++#define cpu_has_vme 0
++
++#undef cpu_has_pae
++#define cpu_has_pae ___BUG___
++
++#undef cpu_has_mp
++#define cpu_has_mp 1
++
++#undef cpu_has_k6_mtrr
++#define cpu_has_k6_mtrr 0
++
++#undef cpu_has_cyrix_arr
++#define cpu_has_cyrix_arr 0
++
++#undef cpu_has_centaur_mcr
++#define cpu_has_centaur_mcr 0
++
++#endif /* CONFIG_X86_64 */
++
++#endif /* _ASM_X86_CPUFEATURE_H */
+diff --git a/include/asm-x86/cpufeature_32.h b/include/asm-x86/cpufeature_32.h
+deleted file mode 100644
+index f17e688..0000000
+--- a/include/asm-x86/cpufeature_32.h
++++ /dev/null
+@@ -1,176 +0,0 @@
+-/*
+- * cpufeature.h
+- *
+- * Defines x86 CPU feature bits
+- */
+-
+-#ifndef __ASM_I386_CPUFEATURE_H
+-#define __ASM_I386_CPUFEATURE_H
+-
+-#ifndef __ASSEMBLY__
+-#include <linux/bitops.h>
+-#endif
+-#include <asm/required-features.h>
+-
+-#define NCAPINTS 8 /* N 32-bit words worth of info */
+-
+-/* Intel-defined CPU features, CPUID level 0x00000001 (edx), word 0 */
+-#define X86_FEATURE_FPU (0*32+ 0) /* Onboard FPU */
+-#define X86_FEATURE_VME (0*32+ 1) /* Virtual Mode Extensions */
+-#define X86_FEATURE_DE (0*32+ 2) /* Debugging Extensions */
+-#define X86_FEATURE_PSE (0*32+ 3) /* Page Size Extensions */
+-#define X86_FEATURE_TSC (0*32+ 4) /* Time Stamp Counter */
+-#define X86_FEATURE_MSR (0*32+ 5) /* Model-Specific Registers, RDMSR, WRMSR */
+-#define X86_FEATURE_PAE (0*32+ 6) /* Physical Address Extensions */
+-#define X86_FEATURE_MCE (0*32+ 7) /* Machine Check Architecture */
+-#define X86_FEATURE_CX8 (0*32+ 8) /* CMPXCHG8 instruction */
+-#define X86_FEATURE_APIC (0*32+ 9) /* Onboard APIC */
+-#define X86_FEATURE_SEP (0*32+11) /* SYSENTER/SYSEXIT */
+-#define X86_FEATURE_MTRR (0*32+12) /* Memory Type Range Registers */
+-#define X86_FEATURE_PGE (0*32+13) /* Page Global Enable */
+-#define X86_FEATURE_MCA (0*32+14) /* Machine Check Architecture */
+-#define X86_FEATURE_CMOV (0*32+15) /* CMOV instruction (FCMOVCC and FCOMI too if FPU present) */
+-#define X86_FEATURE_PAT (0*32+16) /* Page Attribute Table */
+-#define X86_FEATURE_PSE36 (0*32+17) /* 36-bit PSEs */
+-#define X86_FEATURE_PN (0*32+18) /* Processor serial number */
+-#define X86_FEATURE_CLFLSH (0*32+19) /* Supports the CLFLUSH instruction */
+-#define X86_FEATURE_DS (0*32+21) /* Debug Store */
+-#define X86_FEATURE_ACPI (0*32+22) /* ACPI via MSR */
+-#define X86_FEATURE_MMX (0*32+23) /* Multimedia Extensions */
+-#define X86_FEATURE_FXSR (0*32+24) /* FXSAVE and FXRSTOR instructions (fast save and restore */
+- /* of FPU context), and CR4.OSFXSR available */
+-#define X86_FEATURE_XMM (0*32+25) /* Streaming SIMD Extensions */
+-#define X86_FEATURE_XMM2 (0*32+26) /* Streaming SIMD Extensions-2 */
+-#define X86_FEATURE_SELFSNOOP (0*32+27) /* CPU self snoop */
+-#define X86_FEATURE_HT (0*32+28) /* Hyper-Threading */
+-#define X86_FEATURE_ACC (0*32+29) /* Automatic clock control */
+-#define X86_FEATURE_IA64 (0*32+30) /* IA-64 processor */
+-
+-/* AMD-defined CPU features, CPUID level 0x80000001, word 1 */
+-/* Don't duplicate feature flags which are redundant with Intel! */
+-#define X86_FEATURE_SYSCALL (1*32+11) /* SYSCALL/SYSRET */
+-#define X86_FEATURE_MP (1*32+19) /* MP Capable. */
+-#define X86_FEATURE_NX (1*32+20) /* Execute Disable */
+-#define X86_FEATURE_MMXEXT (1*32+22) /* AMD MMX extensions */
+-#define X86_FEATURE_RDTSCP (1*32+27) /* RDTSCP */
+-#define X86_FEATURE_LM (1*32+29) /* Long Mode (x86-64) */
+-#define X86_FEATURE_3DNOWEXT (1*32+30) /* AMD 3DNow! extensions */
+-#define X86_FEATURE_3DNOW (1*32+31) /* 3DNow! */
+-
+-/* Transmeta-defined CPU features, CPUID level 0x80860001, word 2 */
+-#define X86_FEATURE_RECOVERY (2*32+ 0) /* CPU in recovery mode */
+-#define X86_FEATURE_LONGRUN (2*32+ 1) /* Longrun power control */
+-#define X86_FEATURE_LRTI (2*32+ 3) /* LongRun table interface */
+-
+-/* Other features, Linux-defined mapping, word 3 */
+-/* This range is used for feature bits which conflict or are synthesized */
+-#define X86_FEATURE_CXMMX (3*32+ 0) /* Cyrix MMX extensions */
+-#define X86_FEATURE_K6_MTRR (3*32+ 1) /* AMD K6 nonstandard MTRRs */
+-#define X86_FEATURE_CYRIX_ARR (3*32+ 2) /* Cyrix ARRs (= MTRRs) */
+-#define X86_FEATURE_CENTAUR_MCR (3*32+ 3) /* Centaur MCRs (= MTRRs) */
+-/* cpu types for specific tunings: */
+-#define X86_FEATURE_K8 (3*32+ 4) /* Opteron, Athlon64 */
+-#define X86_FEATURE_K7 (3*32+ 5) /* Athlon */
+-#define X86_FEATURE_P3 (3*32+ 6) /* P3 */
+-#define X86_FEATURE_P4 (3*32+ 7) /* P4 */
+-#define X86_FEATURE_CONSTANT_TSC (3*32+ 8) /* TSC ticks at a constant rate */
+-#define X86_FEATURE_UP (3*32+ 9) /* smp kernel running on up */
+-#define X86_FEATURE_FXSAVE_LEAK (3*32+10) /* FXSAVE leaks FOP/FIP/FOP */
+-#define X86_FEATURE_ARCH_PERFMON (3*32+11) /* Intel Architectural PerfMon */
+-#define X86_FEATURE_PEBS (3*32+12) /* Precise-Event Based Sampling */
+-#define X86_FEATURE_BTS (3*32+13) /* Branch Trace Store */
+-/* 14 free */
+-#define X86_FEATURE_SYNC_RDTSC (3*32+15) /* RDTSC synchronizes the CPU */
+-#define X86_FEATURE_REP_GOOD (3*32+16) /* rep microcode works well on this CPU */
+-
+-/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
+-#define X86_FEATURE_XMM3 (4*32+ 0) /* Streaming SIMD Extensions-3 */
+-#define X86_FEATURE_MWAIT (4*32+ 3) /* Monitor/Mwait support */
+-#define X86_FEATURE_DSCPL (4*32+ 4) /* CPL Qualified Debug Store */
+-#define X86_FEATURE_EST (4*32+ 7) /* Enhanced SpeedStep */
+-#define X86_FEATURE_TM2 (4*32+ 8) /* Thermal Monitor 2 */
+-#define X86_FEATURE_CID (4*32+10) /* Context ID */
+-#define X86_FEATURE_CX16 (4*32+13) /* CMPXCHG16B */
+-#define X86_FEATURE_XTPR (4*32+14) /* Send Task Priority Messages */
+-#define X86_FEATURE_DCA (4*32+18) /* Direct Cache Access */
+-
+-/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
+-#define X86_FEATURE_XSTORE (5*32+ 2) /* on-CPU RNG present (xstore insn) */
+-#define X86_FEATURE_XSTORE_EN (5*32+ 3) /* on-CPU RNG enabled */
+-#define X86_FEATURE_XCRYPT (5*32+ 6) /* on-CPU crypto (xcrypt insn) */
+-#define X86_FEATURE_XCRYPT_EN (5*32+ 7) /* on-CPU crypto enabled */
+-#define X86_FEATURE_ACE2 (5*32+ 8) /* Advanced Cryptography Engine v2 */
+-#define X86_FEATURE_ACE2_EN (5*32+ 9) /* ACE v2 enabled */
+-#define X86_FEATURE_PHE (5*32+ 10) /* PadLock Hash Engine */
+-#define X86_FEATURE_PHE_EN (5*32+ 11) /* PHE enabled */
+-#define X86_FEATURE_PMM (5*32+ 12) /* PadLock Montgomery Multiplier */
+-#define X86_FEATURE_PMM_EN (5*32+ 13) /* PMM enabled */
+-
+-/* More extended AMD flags: CPUID level 0x80000001, ecx, word 6 */
+-#define X86_FEATURE_LAHF_LM (6*32+ 0) /* LAHF/SAHF in long mode */
+-#define X86_FEATURE_CMP_LEGACY (6*32+ 1) /* If yes HyperThreading not valid */
+-
+-/*
+- * Auxiliary flags: Linux defined - For features scattered in various
+- * CPUID levels like 0x6, 0xA etc
+- */
+-#define X86_FEATURE_IDA (7*32+ 0) /* Intel Dynamic Acceleration */
+-
+-#define cpu_has(c, bit) \
+- (__builtin_constant_p(bit) && \
+- ( (((bit)>>5)==0 && (1UL<<((bit)&31) & REQUIRED_MASK0)) || \
+- (((bit)>>5)==1 && (1UL<<((bit)&31) & REQUIRED_MASK1)) || \
+- (((bit)>>5)==2 && (1UL<<((bit)&31) & REQUIRED_MASK2)) || \
+- (((bit)>>5)==3 && (1UL<<((bit)&31) & REQUIRED_MASK3)) || \
+- (((bit)>>5)==4 && (1UL<<((bit)&31) & REQUIRED_MASK4)) || \
+- (((bit)>>5)==5 && (1UL<<((bit)&31) & REQUIRED_MASK5)) || \
+- (((bit)>>5)==6 && (1UL<<((bit)&31) & REQUIRED_MASK6)) || \
+- (((bit)>>5)==7 && (1UL<<((bit)&31) & REQUIRED_MASK7)) ) \
+- ? 1 : \
+- test_bit(bit, (c)->x86_capability))
+-#define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
+-
+-#define cpu_has_fpu boot_cpu_has(X86_FEATURE_FPU)
+-#define cpu_has_vme boot_cpu_has(X86_FEATURE_VME)
+-#define cpu_has_de boot_cpu_has(X86_FEATURE_DE)
+-#define cpu_has_pse boot_cpu_has(X86_FEATURE_PSE)
+-#define cpu_has_tsc boot_cpu_has(X86_FEATURE_TSC)
+-#define cpu_has_pae boot_cpu_has(X86_FEATURE_PAE)
+-#define cpu_has_pge boot_cpu_has(X86_FEATURE_PGE)
+-#define cpu_has_apic boot_cpu_has(X86_FEATURE_APIC)
+-#define cpu_has_sep boot_cpu_has(X86_FEATURE_SEP)
+-#define cpu_has_mtrr boot_cpu_has(X86_FEATURE_MTRR)
+-#define cpu_has_mmx boot_cpu_has(X86_FEATURE_MMX)
+-#define cpu_has_fxsr boot_cpu_has(X86_FEATURE_FXSR)
+-#define cpu_has_xmm boot_cpu_has(X86_FEATURE_XMM)
+-#define cpu_has_xmm2 boot_cpu_has(X86_FEATURE_XMM2)
+-#define cpu_has_xmm3 boot_cpu_has(X86_FEATURE_XMM3)
+-#define cpu_has_ht boot_cpu_has(X86_FEATURE_HT)
+-#define cpu_has_mp boot_cpu_has(X86_FEATURE_MP)
+-#define cpu_has_nx boot_cpu_has(X86_FEATURE_NX)
+-#define cpu_has_k6_mtrr boot_cpu_has(X86_FEATURE_K6_MTRR)
+-#define cpu_has_cyrix_arr boot_cpu_has(X86_FEATURE_CYRIX_ARR)
+-#define cpu_has_centaur_mcr boot_cpu_has(X86_FEATURE_CENTAUR_MCR)
+-#define cpu_has_xstore boot_cpu_has(X86_FEATURE_XSTORE)
+-#define cpu_has_xstore_enabled boot_cpu_has(X86_FEATURE_XSTORE_EN)
+-#define cpu_has_xcrypt boot_cpu_has(X86_FEATURE_XCRYPT)
+-#define cpu_has_xcrypt_enabled boot_cpu_has(X86_FEATURE_XCRYPT_EN)
+-#define cpu_has_ace2 boot_cpu_has(X86_FEATURE_ACE2)
+-#define cpu_has_ace2_enabled boot_cpu_has(X86_FEATURE_ACE2_EN)
+-#define cpu_has_phe boot_cpu_has(X86_FEATURE_PHE)
+-#define cpu_has_phe_enabled boot_cpu_has(X86_FEATURE_PHE_EN)
+-#define cpu_has_pmm boot_cpu_has(X86_FEATURE_PMM)
+-#define cpu_has_pmm_enabled boot_cpu_has(X86_FEATURE_PMM_EN)
+-#define cpu_has_ds boot_cpu_has(X86_FEATURE_DS)
+-#define cpu_has_pebs boot_cpu_has(X86_FEATURE_PEBS)
+-#define cpu_has_clflush boot_cpu_has(X86_FEATURE_CLFLSH)
+-#define cpu_has_bts boot_cpu_has(X86_FEATURE_BTS)
+-
+-#endif /* __ASM_I386_CPUFEATURE_H */
+-
+-/*
+- * Local Variables:
+- * mode:c
+- * comment-column:42
+- * End:
+- */
+diff --git a/include/asm-x86/cpufeature_64.h b/include/asm-x86/cpufeature_64.h
+deleted file mode 100644
+index e18496b..0000000
+--- a/include/asm-x86/cpufeature_64.h
++++ /dev/null
+@@ -1,30 +0,0 @@
+-/*
+- * cpufeature_32.h
+- *
+- * Defines x86 CPU feature bits
+- */
+-
+-#ifndef __ASM_X8664_CPUFEATURE_H
+-#define __ASM_X8664_CPUFEATURE_H
+-
+-#include "cpufeature_32.h"
+-
+-#undef cpu_has_vme
+-#define cpu_has_vme 0
+-
+-#undef cpu_has_pae
+-#define cpu_has_pae ___BUG___
+-
+-#undef cpu_has_mp
+-#define cpu_has_mp 1 /* XXX */
+-
+-#undef cpu_has_k6_mtrr
+-#define cpu_has_k6_mtrr 0
+-
+-#undef cpu_has_cyrix_arr
+-#define cpu_has_cyrix_arr 0
+-
+-#undef cpu_has_centaur_mcr
+-#define cpu_has_centaur_mcr 0
+-
+-#endif /* __ASM_X8664_CPUFEATURE_H */
+diff --git a/include/asm-x86/desc.h b/include/asm-x86/desc.h
+index 6065c50..5b6a05d 100644
+--- a/include/asm-x86/desc.h
++++ b/include/asm-x86/desc.h
+@@ -1,5 +1,381 @@
++#ifndef _ASM_DESC_H_
++#define _ASM_DESC_H_
++
++#ifndef __ASSEMBLY__
++#include <asm/desc_defs.h>
++#include <asm/ldt.h>
++#include <asm/mmu.h>
++#include <linux/smp.h>
++
++static inline void fill_ldt(struct desc_struct *desc,
++ const struct user_desc *info)
++{
++ desc->limit0 = info->limit & 0x0ffff;
++ desc->base0 = info->base_addr & 0x0000ffff;
++
++ desc->base1 = (info->base_addr & 0x00ff0000) >> 16;
++ desc->type = (info->read_exec_only ^ 1) << 1;
++ desc->type |= info->contents << 2;
++ desc->s = 1;
++ desc->dpl = 0x3;
++ desc->p = info->seg_not_present ^ 1;
++ desc->limit = (info->limit & 0xf0000) >> 16;
++ desc->avl = info->useable;
++ desc->d = info->seg_32bit;
++ desc->g = info->limit_in_pages;
++ desc->base2 = (info->base_addr & 0xff000000) >> 24;
++}
++
++extern struct desc_ptr idt_descr;
++extern gate_desc idt_table[];
++
++#ifdef CONFIG_X86_64
++extern struct desc_struct cpu_gdt_table[GDT_ENTRIES];
++extern struct desc_ptr cpu_gdt_descr[];
++/* the cpu gdt accessor */
++#define get_cpu_gdt_table(x) ((struct desc_struct *)cpu_gdt_descr[x].address)
++
++static inline void pack_gate(gate_desc *gate, unsigned type, unsigned long func,
++ unsigned dpl, unsigned ist, unsigned seg)
++{
++ gate->offset_low = PTR_LOW(func);
++ gate->segment = __KERNEL_CS;
++ gate->ist = ist;
++ gate->p = 1;
++ gate->dpl = dpl;
++ gate->zero0 = 0;
++ gate->zero1 = 0;
++ gate->type = type;
++ gate->offset_middle = PTR_MIDDLE(func);
++ gate->offset_high = PTR_HIGH(func);
++}
++
++#else
++struct gdt_page {
++ struct desc_struct gdt[GDT_ENTRIES];
++} __attribute__((aligned(PAGE_SIZE)));
++DECLARE_PER_CPU(struct gdt_page, gdt_page);
++
++static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
++{
++ return per_cpu(gdt_page, cpu).gdt;
++}
++
++static inline void pack_gate(gate_desc *gate, unsigned char type,
++ unsigned long base, unsigned dpl, unsigned flags, unsigned short seg)
++
++{
++ gate->a = (seg << 16) | (base & 0xffff);
++ gate->b = (base & 0xffff0000) |
++ (((0x80 | type | (dpl << 5)) & 0xff) << 8);
++}
++
++#endif
++
++static inline int desc_empty(const void *ptr)
++{
++ const u32 *desc = ptr;
++ return !(desc[0] | desc[1]);
++}
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#define load_TR_desc() native_load_tr_desc()
++#define load_gdt(dtr) native_load_gdt(dtr)
++#define load_idt(dtr) native_load_idt(dtr)
++#define load_tr(tr) __asm__ __volatile("ltr %0"::"m" (tr))
++#define load_ldt(ldt) __asm__ __volatile("lldt %0"::"m" (ldt))
++
++#define store_gdt(dtr) native_store_gdt(dtr)
++#define store_idt(dtr) native_store_idt(dtr)
++#define store_tr(tr) (tr = native_store_tr())
++#define store_ldt(ldt) __asm__ ("sldt %0":"=m" (ldt))
++
++#define load_TLS(t, cpu) native_load_tls(t, cpu)
++#define set_ldt native_set_ldt
++
++#define write_ldt_entry(dt, entry, desc) \
++ native_write_ldt_entry(dt, entry, desc)
++#define write_gdt_entry(dt, entry, desc, type) \
++ native_write_gdt_entry(dt, entry, desc, type)
++#define write_idt_entry(dt, entry, g) native_write_idt_entry(dt, entry, g)
++#endif
++
++static inline void native_write_idt_entry(gate_desc *idt, int entry,
++ const gate_desc *gate)
++{
++ memcpy(&idt[entry], gate, sizeof(*gate));
++}
++
++static inline void native_write_ldt_entry(struct desc_struct *ldt, int entry,
++ const void *desc)
++{
++ memcpy(&ldt[entry], desc, 8);
++}
++
++static inline void native_write_gdt_entry(struct desc_struct *gdt, int entry,
++ const void *desc, int type)
++{
++ unsigned int size;
++ switch (type) {
++ case DESC_TSS:
++ size = sizeof(tss_desc);
++ break;
++ case DESC_LDT:
++ size = sizeof(ldt_desc);
++ break;
++ default:
++ size = sizeof(struct desc_struct);
++ break;
++ }
++ memcpy(&gdt[entry], desc, size);
++}
++
++static inline void pack_descriptor(struct desc_struct *desc, unsigned long base,
++ unsigned long limit, unsigned char type,
++ unsigned char flags)
++{
++ desc->a = ((base & 0xffff) << 16) | (limit & 0xffff);
++ desc->b = (base & 0xff000000) | ((base & 0xff0000) >> 16) |
++ (limit & 0x000f0000) | ((type & 0xff) << 8) |
++ ((flags & 0xf) << 20);
++ desc->p = 1;
++}
++
++
++static inline void set_tssldt_descriptor(void *d, unsigned long addr,
++ unsigned type, unsigned size)
++{
++#ifdef CONFIG_X86_64
++ struct ldttss_desc64 *desc = d;
++ memset(desc, 0, sizeof(*desc));
++ desc->limit0 = size & 0xFFFF;
++ desc->base0 = PTR_LOW(addr);
++ desc->base1 = PTR_MIDDLE(addr) & 0xFF;
++ desc->type = type;
++ desc->p = 1;
++ desc->limit1 = (size >> 16) & 0xF;
++ desc->base2 = (PTR_MIDDLE(addr) >> 8) & 0xFF;
++ desc->base3 = PTR_HIGH(addr);
++#else
+
-+#ifdef __KERNEL__
++ pack_descriptor((struct desc_struct *)d, addr, size, 0x80 | type, 0);
++#endif
++}
+
-+#define NR_syscalls 353
++static inline void __set_tss_desc(unsigned cpu, unsigned int entry, void *addr)
++{
++ struct desc_struct *d = get_cpu_gdt_table(cpu);
++ tss_desc tss;
+
-+#define __ARCH_WANT_IPC_PARSE_VERSION
-+#define __ARCH_WANT_OLD_READDIR
-+#define __ARCH_WANT_OLD_STAT
-+#define __ARCH_WANT_STAT64
-+#define __ARCH_WANT_SYS_ALARM
-+#define __ARCH_WANT_SYS_GETHOSTNAME
-+#define __ARCH_WANT_SYS_PAUSE
-+#define __ARCH_WANT_SYS_SGETMASK
-+#define __ARCH_WANT_SYS_SIGNAL
-+#define __ARCH_WANT_SYS_TIME
-+#define __ARCH_WANT_SYS_UTIME
-+#define __ARCH_WANT_SYS_WAITPID
-+#define __ARCH_WANT_SYS_SOCKETCALL
-+#define __ARCH_WANT_SYS_FADVISE64
-+#define __ARCH_WANT_SYS_GETPGRP
-+#define __ARCH_WANT_SYS_LLSEEK
-+#define __ARCH_WANT_SYS_NICE
-+#define __ARCH_WANT_SYS_OLD_GETRLIMIT
-+#define __ARCH_WANT_SYS_OLDUMOUNT
-+#define __ARCH_WANT_SYS_SIGPENDING
-+#define __ARCH_WANT_SYS_SIGPROCMASK
-+#define __ARCH_WANT_SYS_RT_SIGACTION
++ /*
++ * sizeof(unsigned long) coming from an extra "long" at the end
++ * of the iobitmap. See tss_struct definition in processor.h
++ *
++ * -1? seg base+limit should be pointing to the address of the
++ * last valid byte
++ */
++ set_tssldt_descriptor(&tss, (unsigned long)addr, DESC_TSS,
++ IO_BITMAP_OFFSET + IO_BITMAP_BYTES + sizeof(unsigned long) - 1);
++ write_gdt_entry(d, entry, &tss, DESC_TSS);
++}
++
++#define set_tss_desc(cpu, addr) __set_tss_desc(cpu, GDT_ENTRY_TSS, addr)
++
++static inline void native_set_ldt(const void *addr, unsigned int entries)
++{
++ if (likely(entries == 0))
++ __asm__ __volatile__("lldt %w0"::"q" (0));
++ else {
++ unsigned cpu = smp_processor_id();
++ ldt_desc ldt;
++
++ set_tssldt_descriptor(&ldt, (unsigned long)addr,
++ DESC_LDT, entries * sizeof(ldt) - 1);
++ write_gdt_entry(get_cpu_gdt_table(cpu), GDT_ENTRY_LDT,
++ &ldt, DESC_LDT);
++ __asm__ __volatile__("lldt %w0"::"q" (GDT_ENTRY_LDT*8));
++ }
++}
++
++static inline void native_load_tr_desc(void)
++{
++ asm volatile("ltr %w0"::"q" (GDT_ENTRY_TSS*8));
++}
++
++static inline void native_load_gdt(const struct desc_ptr *dtr)
++{
++ asm volatile("lgdt %0"::"m" (*dtr));
++}
++
++static inline void native_load_idt(const struct desc_ptr *dtr)
++{
++ asm volatile("lidt %0"::"m" (*dtr));
++}
++
++static inline void native_store_gdt(struct desc_ptr *dtr)
++{
++ asm volatile("sgdt %0":"=m" (*dtr));
++}
++
++static inline void native_store_idt(struct desc_ptr *dtr)
++{
++ asm volatile("sidt %0":"=m" (*dtr));
++}
++
++static inline unsigned long native_store_tr(void)
++{
++ unsigned long tr;
++ asm volatile("str %0":"=r" (tr));
++ return tr;
++}
++
++static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
++{
++ unsigned int i;
++ struct desc_struct *gdt = get_cpu_gdt_table(cpu);
++
++ for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
++ gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
++}
++
++#define _LDT_empty(info) (\
++ (info)->base_addr == 0 && \
++ (info)->limit == 0 && \
++ (info)->contents == 0 && \
++ (info)->read_exec_only == 1 && \
++ (info)->seg_32bit == 0 && \
++ (info)->limit_in_pages == 0 && \
++ (info)->seg_not_present == 1 && \
++ (info)->useable == 0)
++
++#ifdef CONFIG_X86_64
++#define LDT_empty(info) (_LDT_empty(info) && ((info)->lm == 0))
++#else
++#define LDT_empty(info) (_LDT_empty(info))
++#endif
++
++static inline void clear_LDT(void)
++{
++ set_ldt(NULL, 0);
++}
+
+/*
-+ * "Conditional" syscalls
-+ *
-+ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
-+ * but it doesn't work on all toolchains, so we just do it by hand
++ * load one particular LDT into the current CPU
+ */
-+#ifndef cond_syscall
-+#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
-+#endif
++static inline void load_LDT_nolock(mm_context_t *pc)
++{
++ set_ldt(pc->ldt, pc->size);
++}
+
-+#endif /* __KERNEL__ */
-+#endif /* __ASM_SH_UNISTD_64_H */
-diff --git a/include/asm-sh/user.h b/include/asm-sh/user.h
-index d1b8511..1a4f43c 100644
---- a/include/asm-sh/user.h
-+++ b/include/asm-sh/user.h
-@@ -27,12 +27,19 @@
- * to write an integer number of pages.
- */
-
-+#if defined(__SH5__) || defined(CONFIG_CPU_SH5)
-+struct user_fpu_struct {
-+ unsigned long fp_regs[32];
-+ unsigned int fpscr;
-+};
++static inline void load_LDT(mm_context_t *pc)
++{
++ preempt_disable();
++ load_LDT_nolock(pc);
++ preempt_enable();
++}
++
++static inline unsigned long get_desc_base(const struct desc_struct *desc)
++{
++ return desc->base0 | ((desc->base1) << 16) | ((desc->base2) << 24);
++}
++
++static inline unsigned long get_desc_limit(const struct desc_struct *desc)
++{
++ return desc->limit0 | (desc->limit << 16);
++}
++
++static inline void _set_gate(int gate, unsigned type, void *addr,
++ unsigned dpl, unsigned ist, unsigned seg)
++{
++ gate_desc s;
++ pack_gate(&s, type, (unsigned long)addr, dpl, ist, seg);
++ /*
++ * does not need to be atomic because it is only done once at
++ * setup time
++ */
++ write_idt_entry(idt_table, gate, &s);
++}
++
++/*
++ * This needs to use 'idt_table' rather than 'idt', and
++ * thus use the _nonmapped_ version of the IDT, as the
++ * Pentium F0 0F bugfix can have resulted in the mapped
++ * IDT being write-protected.
++ */
++static inline void set_intr_gate(unsigned int n, void *addr)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_INTERRUPT, addr, 0, 0, __KERNEL_CS);
++}
++
++/*
++ * This routine sets up an interrupt gate at directory privilege level 3.
++ */
++static inline void set_system_intr_gate(unsigned int n, void *addr)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_INTERRUPT, addr, 0x3, 0, __KERNEL_CS);
++}
++
++static inline void set_trap_gate(unsigned int n, void *addr)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_TRAP, addr, 0, 0, __KERNEL_CS);
++}
++
++static inline void set_system_gate(unsigned int n, void *addr)
++{
++ BUG_ON((unsigned)n > 0xFF);
+ #ifdef CONFIG_X86_32
+-# include "desc_32.h"
++ _set_gate(n, GATE_TRAP, addr, 0x3, 0, __KERNEL_CS);
+#else
- struct user_fpu_struct {
- unsigned long fp_regs[16];
- unsigned long xfp_regs[16];
- unsigned long fpscr;
- unsigned long fpul;
- };
++ _set_gate(n, GATE_INTERRUPT, addr, 0x3, 0, __KERNEL_CS);
+#endif
-
- struct user {
- struct pt_regs regs; /* entire machine state */
-diff --git a/include/asm-sh/voyagergx.h b/include/asm-sh/voyagergx.h
++}
++
++static inline void set_task_gate(unsigned int n, unsigned int gdt_entry)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_TASK, (void *)0, 0, 0, (gdt_entry<<3));
++}
++
++static inline void set_intr_gate_ist(int n, void *addr, unsigned ist)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_INTERRUPT, addr, 0, ist, __KERNEL_CS);
++}
++
++static inline void set_system_gate_ist(int n, void *addr, unsigned ist)
++{
++ BUG_ON((unsigned)n > 0xFF);
++ _set_gate(n, GATE_INTERRUPT, addr, 0x3, ist, __KERNEL_CS);
++}
++
+ #else
+-# include "desc_64.h"
++/*
++ * GET_DESC_BASE reads the descriptor base of the specified segment.
++ *
++ * Args:
++ * idx - descriptor index
++ * gdt - GDT pointer
++ * base - 32bit register to which the base will be written
++ * lo_w - lo word of the "base" register
++ * lo_b - lo byte of the "base" register
++ * hi_b - hi byte of the low word of the "base" register
++ *
++ * Example:
++ * GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah)
++ * Will read the base address of GDT_ENTRY_ESPFIX_SS and put it into %eax.
++ */
++#define GET_DESC_BASE(idx, gdt, base, lo_w, lo_b, hi_b) \
++ movb idx*8+4(gdt), lo_b; \
++ movb idx*8+7(gdt), hi_b; \
++ shll $16, base; \
++ movw idx*8+2(gdt), lo_w;
++
++
++#endif /* __ASSEMBLY__ */
++
+ #endif
+diff --git a/include/asm-x86/desc_32.h b/include/asm-x86/desc_32.h
deleted file mode 100644
-index d825596..0000000
---- a/include/asm-sh/voyagergx.h
+index c547403..0000000
+--- a/include/asm-x86/desc_32.h
+++ /dev/null
-@@ -1,341 +0,0 @@
--/* -------------------------------------------------------------------- */
--/* voyagergx.h */
--/* -------------------------------------------------------------------- */
--/* This program is free software; you can redistribute it and/or modify
-- it under the terms of the GNU General Public License as published by
-- the Free Software Foundation; either version 2 of the License, or
-- (at your option) any later version.
+@@ -1,244 +0,0 @@
+-#ifndef __ARCH_DESC_H
+-#define __ARCH_DESC_H
-
-- This program is distributed in the hope that it will be useful,
-- but WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-- GNU General Public License for more details.
+-#include <asm/ldt.h>
+-#include <asm/segment.h>
-
-- You should have received a copy of the GNU General Public License
-- along with this program; if not, write to the Free Software
-- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+-#ifndef __ASSEMBLY__
-
-- Copyright 2003 (c) Lineo uSolutions,Inc.
--*/
--/* -------------------------------------------------------------------- */
+-#include <linux/preempt.h>
+-#include <linux/smp.h>
+-#include <linux/percpu.h>
-
--#ifndef _VOYAGER_GX_REG_H
--#define _VOYAGER_GX_REG_H
+-#include <asm/mmu.h>
-
--#define VOYAGER_BASE 0xb3e00000
--#define VOYAGER_USBH_BASE (0x40000 + VOYAGER_BASE)
--#define VOYAGER_UART_BASE (0x30000 + VOYAGER_BASE)
--#define VOYAGER_AC97_BASE (0xa0000 + VOYAGER_BASE)
+-struct Xgt_desc_struct {
+- unsigned short size;
+- unsigned long address __attribute__((packed));
+- unsigned short pad;
+-} __attribute__ ((packed));
-
--#define VOYAGER_IRQ_NUM 26
--#define VOYAGER_IRQ_BASE 200
+-struct gdt_page
+-{
+- struct desc_struct gdt[GDT_ENTRIES];
+-} __attribute__((aligned(PAGE_SIZE)));
+-DECLARE_PER_CPU(struct gdt_page, gdt_page);
-
--#define IRQ_SM501_UP (VOYAGER_IRQ_BASE + 0)
--#define IRQ_SM501_G54 (VOYAGER_IRQ_BASE + 1)
--#define IRQ_SM501_G53 (VOYAGER_IRQ_BASE + 2)
--#define IRQ_SM501_G52 (VOYAGER_IRQ_BASE + 3)
--#define IRQ_SM501_G51 (VOYAGER_IRQ_BASE + 4)
--#define IRQ_SM501_G50 (VOYAGER_IRQ_BASE + 5)
--#define IRQ_SM501_G49 (VOYAGER_IRQ_BASE + 6)
--#define IRQ_SM501_G48 (VOYAGER_IRQ_BASE + 7)
--#define IRQ_SM501_I2C (VOYAGER_IRQ_BASE + 8)
--#define IRQ_SM501_PW (VOYAGER_IRQ_BASE + 9)
--#define IRQ_SM501_DMA (VOYAGER_IRQ_BASE + 10)
--#define IRQ_SM501_PCI (VOYAGER_IRQ_BASE + 11)
--#define IRQ_SM501_I2S (VOYAGER_IRQ_BASE + 12)
--#define IRQ_SM501_AC (VOYAGER_IRQ_BASE + 13)
--#define IRQ_SM501_US (VOYAGER_IRQ_BASE + 14)
--#define IRQ_SM501_U1 (VOYAGER_IRQ_BASE + 15)
--#define IRQ_SM501_U0 (VOYAGER_IRQ_BASE + 16)
--#define IRQ_SM501_CV (VOYAGER_IRQ_BASE + 17)
--#define IRQ_SM501_MC (VOYAGER_IRQ_BASE + 18)
--#define IRQ_SM501_S1 (VOYAGER_IRQ_BASE + 19)
--#define IRQ_SM501_S0 (VOYAGER_IRQ_BASE + 20)
--#define IRQ_SM501_UH (VOYAGER_IRQ_BASE + 21)
--#define IRQ_SM501_2D (VOYAGER_IRQ_BASE + 22)
--#define IRQ_SM501_ZD (VOYAGER_IRQ_BASE + 23)
--#define IRQ_SM501_PV (VOYAGER_IRQ_BASE + 24)
--#define IRQ_SM501_CI (VOYAGER_IRQ_BASE + 25)
+-static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
+-{
+- return per_cpu(gdt_page, cpu).gdt;
+-}
-
--/* ----- MISC controle register ------------------------------ */
--#define MISC_CTRL (0x000004 + VOYAGER_BASE)
--#define MISC_CTRL_USBCLK_48 (3 << 28)
--#define MISC_CTRL_USBCLK_96 (2 << 28)
--#define MISC_CTRL_USBCLK_CRYSTAL (1 << 28)
+-extern struct Xgt_desc_struct idt_descr;
+-extern struct desc_struct idt_table[];
+-extern void set_intr_gate(unsigned int irq, void * addr);
-
--/* ----- GPIO[31:0] register --------------------------------- */
--#define GPIO_MUX_LOW (0x000008 + VOYAGER_BASE)
--#define GPIO_MUX_LOW_AC97 0x1F000000
--#define GPIO_MUX_LOW_8051 0x0000ffff
--#define GPIO_MUX_LOW_PWM (1 << 29)
+-static inline void pack_descriptor(__u32 *a, __u32 *b,
+- unsigned long base, unsigned long limit, unsigned char type, unsigned char flags)
+-{
+- *a = ((base & 0xffff) << 16) | (limit & 0xffff);
+- *b = (base & 0xff000000) | ((base & 0xff0000) >> 16) |
+- (limit & 0x000f0000) | ((type & 0xff) << 8) | ((flags & 0xf) << 20);
+-}
-
--/* ----- GPIO[63:32] register --------------------------------- */
--#define GPIO_MUX_HIGH (0x00000C + VOYAGER_BASE)
+-static inline void pack_gate(__u32 *a, __u32 *b,
+- unsigned long base, unsigned short seg, unsigned char type, unsigned char flags)
+-{
+- *a = (seg << 16) | (base & 0xffff);
+- *b = (base & 0xffff0000) | ((type & 0xff) << 8) | (flags & 0xff);
+-}
-
--/* ----- DRAM controle register ------------------------------- */
--#define DRAM_CTRL (0x000010 + VOYAGER_BASE)
--#define DRAM_CTRL_EMBEDDED (1 << 31)
--#define DRAM_CTRL_CPU_BURST_1 (0 << 28)
--#define DRAM_CTRL_CPU_BURST_2 (1 << 28)
--#define DRAM_CTRL_CPU_BURST_4 (2 << 28)
--#define DRAM_CTRL_CPU_BURST_8 (3 << 28)
--#define DRAM_CTRL_CPU_CAS_LATENCY (1 << 27)
--#define DRAM_CTRL_CPU_SIZE_2 (0 << 24)
--#define DRAM_CTRL_CPU_SIZE_4 (1 << 24)
--#define DRAM_CTRL_CPU_SIZE_64 (4 << 24)
--#define DRAM_CTRL_CPU_SIZE_32 (5 << 24)
--#define DRAM_CTRL_CPU_SIZE_16 (6 << 24)
--#define DRAM_CTRL_CPU_SIZE_8 (7 << 24)
--#define DRAM_CTRL_CPU_COLUMN_SIZE_1024 (0 << 22)
--#define DRAM_CTRL_CPU_COLUMN_SIZE_512 (2 << 22)
--#define DRAM_CTRL_CPU_COLUMN_SIZE_256 (3 << 22)
--#define DRAM_CTRL_CPU_ACTIVE_PRECHARGE (1 << 21)
--#define DRAM_CTRL_CPU_RESET (1 << 20)
--#define DRAM_CTRL_CPU_BANKS (1 << 19)
--#define DRAM_CTRL_CPU_WRITE_PRECHARGE (1 << 18)
--#define DRAM_CTRL_BLOCK_WRITE (1 << 17)
--#define DRAM_CTRL_REFRESH_COMMAND (1 << 16)
--#define DRAM_CTRL_SIZE_4 (0 << 13)
--#define DRAM_CTRL_SIZE_8 (1 << 13)
--#define DRAM_CTRL_SIZE_16 (2 << 13)
--#define DRAM_CTRL_SIZE_32 (3 << 13)
--#define DRAM_CTRL_SIZE_64 (4 << 13)
--#define DRAM_CTRL_SIZE_2 (5 << 13)
--#define DRAM_CTRL_COLUMN_SIZE_256 (0 << 11)
--#define DRAM_CTRL_COLUMN_SIZE_512 (2 << 11)
--#define DRAM_CTRL_COLUMN_SIZE_1024 (3 << 11)
--#define DRAM_CTRL_BLOCK_WRITE_TIME (1 << 10)
--#define DRAM_CTRL_BLOCK_WRITE_PRECHARGE (1 << 9)
--#define DRAM_CTRL_ACTIVE_PRECHARGE (1 << 8)
--#define DRAM_CTRL_RESET (1 << 7)
--#define DRAM_CTRL_REMAIN_ACTIVE (1 << 6)
--#define DRAM_CTRL_BANKS (1 << 1)
--#define DRAM_CTRL_WRITE_PRECHARGE (1 << 0)
+-#define DESCTYPE_LDT 0x82 /* present, system, DPL-0, LDT */
+-#define DESCTYPE_TSS 0x89 /* present, system, DPL-0, 32-bit TSS */
+-#define DESCTYPE_TASK 0x85 /* present, system, DPL-0, task gate */
+-#define DESCTYPE_INT 0x8e /* present, system, DPL-0, interrupt gate */
+-#define DESCTYPE_TRAP 0x8f /* present, system, DPL-0, trap gate */
+-#define DESCTYPE_DPL3 0x60 /* DPL-3 */
+-#define DESCTYPE_S 0x10 /* !system */
-
--/* ----- Arvitration control register -------------------------- */
--#define ARBITRATION_CTRL (0x000014 + VOYAGER_BASE)
--#define ARBITRATION_CTRL_CPUMEM (1 << 29)
--#define ARBITRATION_CTRL_INTMEM (1 << 28)
--#define ARBITRATION_CTRL_USB_OFF (0 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_1 (1 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_2 (2 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_3 (3 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_4 (4 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_5 (5 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_6 (6 << 24)
--#define ARBITRATION_CTRL_USB_PRIORITY_7 (7 << 24)
--#define ARBITRATION_CTRL_PANEL_OFF (0 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_1 (1 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_2 (2 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_3 (3 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_4 (4 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_5 (5 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_6 (6 << 20)
--#define ARBITRATION_CTRL_PANEL_PRIORITY_7 (7 << 20)
--#define ARBITRATION_CTRL_ZVPORT_OFF (0 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_1 (1 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_2 (2 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_3 (3 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_4 (4 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_5 (5 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_6 (6 << 16)
--#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_7 (7 << 16)
--#define ARBITRATION_CTRL_CMD_INTPR_OFF (0 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_1 (1 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_2 (2 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_3 (3 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_4 (4 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_5 (5 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_6 (6 << 12)
--#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_7 (7 << 12)
--#define ARBITRATION_CTRL_DMA_OFF (0 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_1 (1 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_2 (2 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_3 (3 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_4 (4 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_5 (5 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_6 (6 << 8)
--#define ARBITRATION_CTRL_DMA_PRIORITY_7 (7 << 8)
--#define ARBITRATION_CTRL_VIDEO_OFF (0 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_1 (1 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_2 (2 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_3 (3 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_4 (4 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_5 (5 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_6 (6 << 4)
--#define ARBITRATION_CTRL_VIDEO_PRIORITY_7 (7 << 4)
--#define ARBITRATION_CTRL_CRT_OFF (0 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_1 (1 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_2 (2 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_3 (3 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_4 (4 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_5 (5 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_6 (6 << 0)
--#define ARBITRATION_CTRL_CRT_PRIORITY_7 (7 << 0)
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define load_TR_desc() native_load_tr_desc()
+-#define load_gdt(dtr) native_load_gdt(dtr)
+-#define load_idt(dtr) native_load_idt(dtr)
+-#define load_tr(tr) __asm__ __volatile("ltr %0"::"m" (tr))
+-#define load_ldt(ldt) __asm__ __volatile("lldt %0"::"m" (ldt))
-
--/* ----- Command list status register -------------------------- */
--#define CMD_INTPR_STATUS (0x000024 + VOYAGER_BASE)
+-#define store_gdt(dtr) native_store_gdt(dtr)
+-#define store_idt(dtr) native_store_idt(dtr)
+-#define store_tr(tr) (tr = native_store_tr())
+-#define store_ldt(ldt) __asm__ ("sldt %0":"=m" (ldt))
-
--/* ----- Interrupt status register ----------------------------- */
--#define INT_STATUS (0x00002c + VOYAGER_BASE)
--#define INT_STATUS_UH (1 << 6)
--#define INT_STATUS_MC (1 << 10)
--#define INT_STATUS_U0 (1 << 12)
--#define INT_STATUS_U1 (1 << 13)
--#define INT_STATUS_AC (1 << 17)
+-#define load_TLS(t, cpu) native_load_tls(t, cpu)
+-#define set_ldt native_set_ldt
-
--/* ----- Interrupt mask register ------------------------------ */
--#define VOYAGER_INT_MASK (0x000030 + VOYAGER_BASE)
--#define VOYAGER_INT_MASK_AC (1 << 17)
+-#define write_ldt_entry(dt, entry, a, b) write_dt_entry(dt, entry, a, b)
+-#define write_gdt_entry(dt, entry, a, b) write_dt_entry(dt, entry, a, b)
+-#define write_idt_entry(dt, entry, a, b) write_dt_entry(dt, entry, a, b)
+-#endif
-
--/* ----- Current Gate register ---------------------------------*/
--#define CURRENT_GATE (0x000038 + VOYAGER_BASE)
+-static inline void write_dt_entry(struct desc_struct *dt,
+- int entry, u32 entry_low, u32 entry_high)
+-{
+- dt[entry].a = entry_low;
+- dt[entry].b = entry_high;
+-}
-
--/* ----- Power mode 0 gate register --------------------------- */
--#define POWER_MODE0_GATE (0x000040 + VOYAGER_BASE)
--#define POWER_MODE0_GATE_G (1 << 6)
--#define POWER_MODE0_GATE_U0 (1 << 7)
--#define POWER_MODE0_GATE_U1 (1 << 8)
--#define POWER_MODE0_GATE_UH (1 << 11)
--#define POWER_MODE0_GATE_AC (1 << 18)
+-static inline void native_set_ldt(const void *addr, unsigned int entries)
+-{
+- if (likely(entries == 0))
+- __asm__ __volatile__("lldt %w0"::"q" (0));
+- else {
+- unsigned cpu = smp_processor_id();
+- __u32 a, b;
-
--/* ----- Power mode 1 gate register --------------------------- */
--#define POWER_MODE1_GATE (0x000048 + VOYAGER_BASE)
--#define POWER_MODE1_GATE_G (1 << 6)
--#define POWER_MODE1_GATE_U0 (1 << 7)
--#define POWER_MODE1_GATE_U1 (1 << 8)
--#define POWER_MODE1_GATE_UH (1 << 11)
--#define POWER_MODE1_GATE_AC (1 << 18)
+- pack_descriptor(&a, &b, (unsigned long)addr,
+- entries * sizeof(struct desc_struct) - 1,
+- DESCTYPE_LDT, 0);
+- write_gdt_entry(get_cpu_gdt_table(cpu), GDT_ENTRY_LDT, a, b);
+- __asm__ __volatile__("lldt %w0"::"q" (GDT_ENTRY_LDT*8));
+- }
+-}
-
--/* ----- Power mode 0 clock register -------------------------- */
--#define POWER_MODE0_CLOCK (0x000044 + VOYAGER_BASE)
-
--/* ----- Power mode 1 clock register -------------------------- */
--#define POWER_MODE1_CLOCK (0x00004C + VOYAGER_BASE)
+-static inline void native_load_tr_desc(void)
+-{
+- asm volatile("ltr %w0"::"q" (GDT_ENTRY_TSS*8));
+-}
-
--/* ----- Power mode controll register ------------------------- */
--#define POWER_MODE_CTRL (0x000054 + VOYAGER_BASE)
+-static inline void native_load_gdt(const struct Xgt_desc_struct *dtr)
+-{
+- asm volatile("lgdt %0"::"m" (*dtr));
+-}
-
--/* ----- Miscellaneous Timing register ------------------------ */
--#define SYSTEM_DRAM_CTRL (0x000068 + VOYAGER_BASE)
+-static inline void native_load_idt(const struct Xgt_desc_struct *dtr)
+-{
+- asm volatile("lidt %0"::"m" (*dtr));
+-}
-
--/* ----- PWM register ------------------------------------------*/
--#define PWM_0 (0x010020 + VOYAGER_BASE)
--#define PWM_0_HC(x) (((x)&0x0fff)<<20)
--#define PWM_0_LC(x) (((x)&0x0fff)<<8 )
--#define PWM_0_CLK_DEV(x) (((x)&0x000f)<<4 )
--#define PWM_0_EN (1<<0)
+-static inline void native_store_gdt(struct Xgt_desc_struct *dtr)
+-{
+- asm ("sgdt %0":"=m" (*dtr));
+-}
-
--/* ----- I2C register ----------------------------------------- */
--#define I2C_BYTECOUNT (0x010040 + VOYAGER_BASE)
--#define I2C_CONTROL (0x010041 + VOYAGER_BASE)
--#define I2C_STATUS (0x010042 + VOYAGER_BASE)
--#define I2C_RESET (0x010042 + VOYAGER_BASE)
--#define I2C_SADDRESS (0x010043 + VOYAGER_BASE)
--#define I2C_DATA (0x010044 + VOYAGER_BASE)
+-static inline void native_store_idt(struct Xgt_desc_struct *dtr)
+-{
+- asm ("sidt %0":"=m" (*dtr));
+-}
-
--/* ----- Controle register bits ----------------------------------------- */
--#define I2C_CONTROL_E (1 << 0)
--#define I2C_CONTROL_MODE (1 << 1)
--#define I2C_CONTROL_STATUS (1 << 2)
--#define I2C_CONTROL_INT (1 << 4)
--#define I2C_CONTROL_INTACK (1 << 5)
--#define I2C_CONTROL_REPEAT (1 << 6)
+-static inline unsigned long native_store_tr(void)
+-{
+- unsigned long tr;
+- asm ("str %0":"=r" (tr));
+- return tr;
+-}
-
--/* ----- Status register bits ----------------------------------------- */
--#define I2C_STATUS_BUSY (1 << 0)
--#define I2C_STATUS_ACK (1 << 1)
--#define I2C_STATUS_ERROR (1 << 2)
--#define I2C_STATUS_COMPLETE (1 << 3)
+-static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
+-{
+- unsigned int i;
+- struct desc_struct *gdt = get_cpu_gdt_table(cpu);
-
--/* ----- Reset register ---------------------------------------------- */
--#define I2C_RESET_ERROR (1 << 2)
+- for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
+- gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
+-}
-
--/* ----- transmission frequencies ------------------------------------- */
--#define I2C_SADDRESS_SELECT (1 << 0)
+-static inline void _set_gate(int gate, unsigned int type, void *addr, unsigned short seg)
+-{
+- __u32 a, b;
+- pack_gate(&a, &b, (unsigned long)addr, seg, type, 0);
+- write_idt_entry(idt_table, gate, a, b);
+-}
-
--/* ----- Display Controll register ----------------------------------------- */
--#define PANEL_DISPLAY_CTRL (0x080000 + VOYAGER_BASE)
--#define PANEL_DISPLAY_CTRL_BIAS (1<<26)
--#define PANEL_PAN_CTRL (0x080004 + VOYAGER_BASE)
--#define PANEL_COLOR_KEY (0x080008 + VOYAGER_BASE)
--#define PANEL_FB_ADDRESS (0x08000C + VOYAGER_BASE)
--#define PANEL_FB_WIDTH (0x080010 + VOYAGER_BASE)
--#define PANEL_WINDOW_WIDTH (0x080014 + VOYAGER_BASE)
--#define PANEL_WINDOW_HEIGHT (0x080018 + VOYAGER_BASE)
--#define PANEL_PLANE_TL (0x08001C + VOYAGER_BASE)
--#define PANEL_PLANE_BR (0x080020 + VOYAGER_BASE)
--#define PANEL_HORIZONTAL_TOTAL (0x080024 + VOYAGER_BASE)
--#define PANEL_HORIZONTAL_SYNC (0x080028 + VOYAGER_BASE)
--#define PANEL_VERTICAL_TOTAL (0x08002C + VOYAGER_BASE)
--#define PANEL_VERTICAL_SYNC (0x080030 + VOYAGER_BASE)
--#define PANEL_CURRENT_LINE (0x080034 + VOYAGER_BASE)
--#define VIDEO_DISPLAY_CTRL (0x080040 + VOYAGER_BASE)
--#define VIDEO_FB_0_ADDRESS (0x080044 + VOYAGER_BASE)
--#define VIDEO_FB_WIDTH (0x080048 + VOYAGER_BASE)
--#define VIDEO_FB_0_LAST_ADDRESS (0x08004C + VOYAGER_BASE)
--#define VIDEO_PLANE_TL (0x080050 + VOYAGER_BASE)
--#define VIDEO_PLANE_BR (0x080054 + VOYAGER_BASE)
--#define VIDEO_SCALE (0x080058 + VOYAGER_BASE)
--#define VIDEO_INITIAL_SCALE (0x08005C + VOYAGER_BASE)
--#define VIDEO_YUV_CONSTANTS (0x080060 + VOYAGER_BASE)
--#define VIDEO_FB_1_ADDRESS (0x080064 + VOYAGER_BASE)
--#define VIDEO_FB_1_LAST_ADDRESS (0x080068 + VOYAGER_BASE)
--#define VIDEO_ALPHA_DISPLAY_CTRL (0x080080 + VOYAGER_BASE)
--#define VIDEO_ALPHA_FB_ADDRESS (0x080084 + VOYAGER_BASE)
--#define VIDEO_ALPHA_FB_WIDTH (0x080088 + VOYAGER_BASE)
--#define VIDEO_ALPHA_FB_LAST_ADDRESS (0x08008C + VOYAGER_BASE)
--#define VIDEO_ALPHA_PLANE_TL (0x080090 + VOYAGER_BASE)
--#define VIDEO_ALPHA_PLANE_BR (0x080094 + VOYAGER_BASE)
--#define VIDEO_ALPHA_SCALE (0x080098 + VOYAGER_BASE)
--#define VIDEO_ALPHA_INITIAL_SCALE (0x08009C + VOYAGER_BASE)
--#define VIDEO_ALPHA_CHROMA_KEY (0x0800A0 + VOYAGER_BASE)
--#define PANEL_HWC_ADDRESS (0x0800F0 + VOYAGER_BASE)
--#define PANEL_HWC_LOCATION (0x0800F4 + VOYAGER_BASE)
--#define PANEL_HWC_COLOR_12 (0x0800F8 + VOYAGER_BASE)
--#define PANEL_HWC_COLOR_3 (0x0800FC + VOYAGER_BASE)
--#define ALPHA_DISPLAY_CTRL (0x080100 + VOYAGER_BASE)
--#define ALPHA_FB_ADDRESS (0x080104 + VOYAGER_BASE)
--#define ALPHA_FB_WIDTH (0x080108 + VOYAGER_BASE)
--#define ALPHA_PLANE_TL (0x08010C + VOYAGER_BASE)
--#define ALPHA_PLANE_BR (0x080110 + VOYAGER_BASE)
--#define ALPHA_CHROMA_KEY (0x080114 + VOYAGER_BASE)
--#define CRT_DISPLAY_CTRL (0x080200 + VOYAGER_BASE)
--#define CRT_FB_ADDRESS (0x080204 + VOYAGER_BASE)
--#define CRT_FB_WIDTH (0x080208 + VOYAGER_BASE)
--#define CRT_HORIZONTAL_TOTAL (0x08020C + VOYAGER_BASE)
--#define CRT_HORIZONTAL_SYNC (0x080210 + VOYAGER_BASE)
--#define CRT_VERTICAL_TOTAL (0x080214 + VOYAGER_BASE)
--#define CRT_VERTICAL_SYNC (0x080218 + VOYAGER_BASE)
--#define CRT_SIGNATURE_ANALYZER (0x08021C + VOYAGER_BASE)
--#define CRT_CURRENT_LINE (0x080220 + VOYAGER_BASE)
--#define CRT_MONITOR_DETECT (0x080224 + VOYAGER_BASE)
--#define CRT_HWC_ADDRESS (0x080230 + VOYAGER_BASE)
--#define CRT_HWC_LOCATION (0x080234 + VOYAGER_BASE)
--#define CRT_HWC_COLOR_12 (0x080238 + VOYAGER_BASE)
--#define CRT_HWC_COLOR_3 (0x08023C + VOYAGER_BASE)
--#define CRT_PALETTE_RAM (0x080400 + VOYAGER_BASE)
--#define PANEL_PALETTE_RAM (0x080800 + VOYAGER_BASE)
--#define VIDEO_PALETTE_RAM (0x080C00 + VOYAGER_BASE)
+-static inline void __set_tss_desc(unsigned int cpu, unsigned int entry, const void *addr)
+-{
+- __u32 a, b;
+- pack_descriptor(&a, &b, (unsigned long)addr,
+- offsetof(struct tss_struct, __cacheline_filler) - 1,
+- DESCTYPE_TSS, 0);
+- write_gdt_entry(get_cpu_gdt_table(cpu), entry, a, b);
+-}
-
--/* ----- 8051 Controle register ----------------------------------------- */
--#define VOYAGER_8051_BASE (0x000c0000 + VOYAGER_BASE)
--#define VOYAGER_8051_RESET (0x000b0000 + VOYAGER_BASE)
--#define VOYAGER_8051_SELECT (0x000b0004 + VOYAGER_BASE)
--#define VOYAGER_8051_CPU_INT (0x000b000c + VOYAGER_BASE)
-
--/* ----- AC97 Controle register ----------------------------------------- */
--#define AC97_TX_SLOT0 (0x00000000 + VOYAGER_AC97_BASE)
--#define AC97_CONTROL_STATUS (0x00000080 + VOYAGER_AC97_BASE)
--#define AC97C_READ (1 << 19)
--#define AC97C_WD_BIT (1 << 2)
--#define AC97C_INDEX_MASK 0x7f
+-#define set_tss_desc(cpu,addr) __set_tss_desc(cpu, GDT_ENTRY_TSS, addr)
-
--/* arch/sh/cchips/voyagergx/consistent.c */
--void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, gfp_t);
--int voyagergx_consistent_free(struct device *, size_t, void *, dma_addr_t);
+-#define LDT_entry_a(info) \
+- ((((info)->base_addr & 0x0000ffff) << 16) | ((info)->limit & 0x0ffff))
-
--/* arch/sh/cchips/voyagergx/irq.c */
--void setup_voyagergx_irq(void);
+-#define LDT_entry_b(info) \
+- (((info)->base_addr & 0xff000000) | \
+- (((info)->base_addr & 0x00ff0000) >> 16) | \
+- ((info)->limit & 0xf0000) | \
+- (((info)->read_exec_only ^ 1) << 9) | \
+- ((info)->contents << 10) | \
+- (((info)->seg_not_present ^ 1) << 15) | \
+- ((info)->seg_32bit << 22) | \
+- ((info)->limit_in_pages << 23) | \
+- ((info)->useable << 20) | \
+- 0x7000)
-
--#endif /* _VOYAGER_GX_REG_H */
-diff --git a/include/asm-sh64/Kbuild b/include/asm-sh64/Kbuild
-deleted file mode 100644
-index c68e168..0000000
---- a/include/asm-sh64/Kbuild
-+++ /dev/null
-@@ -1 +0,0 @@
--include include/asm-generic/Kbuild.asm
-diff --git a/include/asm-sh64/a.out.h b/include/asm-sh64/a.out.h
-deleted file mode 100644
-index 237ee4e..0000000
---- a/include/asm-sh64/a.out.h
-+++ /dev/null
-@@ -1,38 +0,0 @@
--#ifndef __ASM_SH64_A_OUT_H
--#define __ASM_SH64_A_OUT_H
+-#define LDT_empty(info) (\
+- (info)->base_addr == 0 && \
+- (info)->limit == 0 && \
+- (info)->contents == 0 && \
+- (info)->read_exec_only == 1 && \
+- (info)->seg_32bit == 0 && \
+- (info)->limit_in_pages == 0 && \
+- (info)->seg_not_present == 1 && \
+- (info)->useable == 0 )
+-
+-static inline void clear_LDT(void)
+-{
+- set_ldt(NULL, 0);
+-}
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/a.out.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * load one particular LDT into the current CPU
- */
--
--struct exec
+-static inline void load_LDT_nolock(mm_context_t *pc)
-{
-- unsigned long a_info; /* Use macros N_MAGIC, etc for access */
-- unsigned a_text; /* length of text, in bytes */
-- unsigned a_data; /* length of data, in bytes */
-- unsigned a_bss; /* length of uninitialized data area for file, in bytes */
-- unsigned a_syms; /* length of symbol table data in file, in bytes */
-- unsigned a_entry; /* start address */
-- unsigned a_trsize; /* length of relocation info for text, in bytes */
-- unsigned a_drsize; /* length of relocation info for data, in bytes */
--};
--
--#define N_TRSIZE(a) ((a).a_trsize)
--#define N_DRSIZE(a) ((a).a_drsize)
--#define N_SYMSIZE(a) ((a).a_syms)
--
--#ifdef __KERNEL__
+- set_ldt(pc->ldt, pc->size);
+-}
-
--#define STACK_TOP TASK_SIZE
--#define STACK_TOP_MAX STACK_TOP
+-static inline void load_LDT(mm_context_t *pc)
+-{
+- preempt_disable();
+- load_LDT_nolock(pc);
+- preempt_enable();
+-}
-
--#endif
+-static inline unsigned long get_desc_base(unsigned long *desc)
+-{
+- unsigned long base;
+- base = ((desc[0] >> 16) & 0x0000ffff) |
+- ((desc[1] << 16) & 0x00ff0000) |
+- (desc[1] & 0xff000000);
+- return base;
+-}
-
--#endif /* __ASM_SH64_A_OUT_H */
-diff --git a/include/asm-sh64/atomic.h b/include/asm-sh64/atomic.h
-deleted file mode 100644
-index 28f2ea9..0000000
---- a/include/asm-sh64/atomic.h
-+++ /dev/null
-@@ -1,158 +0,0 @@
--#ifndef __ASM_SH64_ATOMIC_H
--#define __ASM_SH64_ATOMIC_H
+-#else /* __ASSEMBLY__ */
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/atomic.h
+- * GET_DESC_BASE reads the descriptor base of the specified segment.
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * Args:
+- * idx - descriptor index
+- * gdt - GDT pointer
+- * base - 32bit register to which the base will be written
+- * lo_w - lo word of the "base" register
+- * lo_b - lo byte of the "base" register
+- * hi_b - hi byte of the low word of the "base" register
- *
+- * Example:
+- * GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah)
+- * Will read the base address of GDT_ENTRY_ESPFIX_SS and put it into %eax.
- */
+-#define GET_DESC_BASE(idx, gdt, base, lo_w, lo_b, hi_b) \
+- movb idx*8+4(gdt), lo_b; \
+- movb idx*8+7(gdt), hi_b; \
+- shll $16, base; \
+- movw idx*8+2(gdt), lo_w;
-
--/*
-- * Atomic operations that C can't guarantee us. Useful for
-- * resource counting etc..
-- *
-- */
+-#endif /* !__ASSEMBLY__ */
-
--typedef struct { volatile int counter; } atomic_t;
+-#endif
+diff --git a/include/asm-x86/desc_64.h b/include/asm-x86/desc_64.h
+index 7d9c938..8b13789 100644
+--- a/include/asm-x86/desc_64.h
++++ b/include/asm-x86/desc_64.h
+@@ -1,204 +1 @@
+-/* Written 2000 by Andi Kleen */
+-#ifndef __ARCH_DESC_H
+-#define __ARCH_DESC_H
+
+-#include <linux/threads.h>
+-#include <asm/ldt.h>
-
--#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
+-#ifndef __ASSEMBLY__
-
--#define atomic_read(v) ((v)->counter)
--#define atomic_set(v,i) ((v)->counter = (i))
+-#include <linux/string.h>
+-#include <linux/smp.h>
+-#include <asm/desc_defs.h>
-
--#include <asm/system.h>
+-#include <asm/segment.h>
+-#include <asm/mmu.h>
+-
+-extern struct desc_struct cpu_gdt_table[GDT_ENTRIES];
+-
+-#define load_TR_desc() asm volatile("ltr %w0"::"r" (GDT_ENTRY_TSS*8))
+-#define load_LDT_desc() asm volatile("lldt %w0"::"r" (GDT_ENTRY_LDT*8))
+-#define clear_LDT() asm volatile("lldt %w0"::"r" (0))
+-
+-static inline unsigned long __store_tr(void)
+-{
+- unsigned long tr;
+-
+- asm volatile ("str %w0":"=r" (tr));
+- return tr;
+-}
+-
+-#define store_tr(tr) (tr) = __store_tr()
-
-/*
-- * To get proper branch prediction for the main line, we must branch
-- * forward to code at the end of this object's .text section, then
-- * branch back to restart the operation.
+- * This is the ldt that every process will get unless we need
+- * something other than this.
- */
+-extern struct desc_struct default_ldt[];
+-extern struct gate_struct idt_table[];
+-extern struct desc_ptr cpu_gdt_descr[];
-
--static __inline__ void atomic_add(int i, atomic_t * v)
+-/* the cpu gdt accessor */
+-#define cpu_gdt(_cpu) ((struct desc_struct *)cpu_gdt_descr[_cpu].address)
+-
+-static inline void load_gdt(const struct desc_ptr *ptr)
-{
-- unsigned long flags;
+- asm volatile("lgdt %w0"::"m" (*ptr));
+-}
-
-- local_irq_save(flags);
-- *(long *)v += i;
-- local_irq_restore(flags);
+-static inline void store_gdt(struct desc_ptr *ptr)
+-{
+- asm("sgdt %w0":"=m" (*ptr));
-}
-
--static __inline__ void atomic_sub(int i, atomic_t *v)
+-static inline void _set_gate(void *adr, unsigned type, unsigned long func, unsigned dpl, unsigned ist)
-{
-- unsigned long flags;
+- struct gate_struct s;
+- s.offset_low = PTR_LOW(func);
+- s.segment = __KERNEL_CS;
+- s.ist = ist;
+- s.p = 1;
+- s.dpl = dpl;
+- s.zero0 = 0;
+- s.zero1 = 0;
+- s.type = type;
+- s.offset_middle = PTR_MIDDLE(func);
+- s.offset_high = PTR_HIGH(func);
+- /* does not need to be atomic because it is only done once at setup time */
+- memcpy(adr, &s, 16);
+-}
-
-- local_irq_save(flags);
-- *(long *)v -= i;
-- local_irq_restore(flags);
+-static inline void set_intr_gate(int nr, void *func)
+-{
+- BUG_ON((unsigned)nr > 0xFF);
+- _set_gate(&idt_table[nr], GATE_INTERRUPT, (unsigned long) func, 0, 0);
+-}
+-
+-static inline void set_intr_gate_ist(int nr, void *func, unsigned ist)
+-{
+- BUG_ON((unsigned)nr > 0xFF);
+- _set_gate(&idt_table[nr], GATE_INTERRUPT, (unsigned long) func, 0, ist);
+-}
+-
+-static inline void set_system_gate(int nr, void *func)
+-{
+- BUG_ON((unsigned)nr > 0xFF);
+- _set_gate(&idt_table[nr], GATE_INTERRUPT, (unsigned long) func, 3, 0);
+-}
+-
+-static inline void set_system_gate_ist(int nr, void *func, unsigned ist)
+-{
+- _set_gate(&idt_table[nr], GATE_INTERRUPT, (unsigned long) func, 3, ist);
-}
-
--static __inline__ int atomic_add_return(int i, atomic_t * v)
+-static inline void load_idt(const struct desc_ptr *ptr)
-{
-- unsigned long temp, flags;
+- asm volatile("lidt %w0"::"m" (*ptr));
+-}
-
-- local_irq_save(flags);
-- temp = *(long *)v;
-- temp += i;
-- *(long *)v = temp;
-- local_irq_restore(flags);
+-static inline void store_idt(struct desc_ptr *dtr)
+-{
+- asm("sidt %w0":"=m" (*dtr));
+-}
-
-- return temp;
+-static inline void set_tssldt_descriptor(void *ptr, unsigned long tss, unsigned type,
+- unsigned size)
+-{
+- struct ldttss_desc d;
+- memset(&d,0,sizeof(d));
+- d.limit0 = size & 0xFFFF;
+- d.base0 = PTR_LOW(tss);
+- d.base1 = PTR_MIDDLE(tss) & 0xFF;
+- d.type = type;
+- d.p = 1;
+- d.limit1 = (size >> 16) & 0xF;
+- d.base2 = (PTR_MIDDLE(tss) >> 8) & 0xFF;
+- d.base3 = PTR_HIGH(tss);
+- memcpy(ptr, &d, 16);
-}
-
--#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
+-static inline void set_tss_desc(unsigned cpu, void *addr)
+-{
+- /*
+- * sizeof(unsigned long) coming from an extra "long" at the end
+- * of the iobitmap. See tss_struct definition in processor.h
+- *
+- * -1? seg base+limit should be pointing to the address of the
+- * last valid byte
+- */
+- set_tssldt_descriptor(&cpu_gdt(cpu)[GDT_ENTRY_TSS],
+- (unsigned long)addr, DESC_TSS,
+- IO_BITMAP_OFFSET + IO_BITMAP_BYTES + sizeof(unsigned long) - 1);
+-}
-
--static __inline__ int atomic_sub_return(int i, atomic_t * v)
+-static inline void set_ldt_desc(unsigned cpu, void *addr, int size)
+-{
+- set_tssldt_descriptor(&cpu_gdt(cpu)[GDT_ENTRY_LDT], (unsigned long)addr,
+- DESC_LDT, size * 8 - 1);
+-}
+-
+-#define LDT_entry_a(info) \
+- ((((info)->base_addr & 0x0000ffff) << 16) | ((info)->limit & 0x0ffff))
+-/* Don't allow setting of the lm bit. It is useless anyways because
+- 64bit system calls require __USER_CS. */
+-#define LDT_entry_b(info) \
+- (((info)->base_addr & 0xff000000) | \
+- (((info)->base_addr & 0x00ff0000) >> 16) | \
+- ((info)->limit & 0xf0000) | \
+- (((info)->read_exec_only ^ 1) << 9) | \
+- ((info)->contents << 10) | \
+- (((info)->seg_not_present ^ 1) << 15) | \
+- ((info)->seg_32bit << 22) | \
+- ((info)->limit_in_pages << 23) | \
+- ((info)->useable << 20) | \
+- /* ((info)->lm << 21) | */ \
+- 0x7000)
+-
+-#define LDT_empty(info) (\
+- (info)->base_addr == 0 && \
+- (info)->limit == 0 && \
+- (info)->contents == 0 && \
+- (info)->read_exec_only == 1 && \
+- (info)->seg_32bit == 0 && \
+- (info)->limit_in_pages == 0 && \
+- (info)->seg_not_present == 1 && \
+- (info)->useable == 0 && \
+- (info)->lm == 0)
+-
+-static inline void load_TLS(struct thread_struct *t, unsigned int cpu)
-{
-- unsigned long temp, flags;
+- unsigned int i;
+- u64 *gdt = (u64 *)(cpu_gdt(cpu) + GDT_ENTRY_TLS_MIN);
-
-- local_irq_save(flags);
-- temp = *(long *)v;
-- temp -= i;
-- *(long *)v = temp;
-- local_irq_restore(flags);
+- for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
+- gdt[i] = t->tls_array[i];
+-}
-
-- return temp;
+-/*
+- * load one particular LDT into the current CPU
+- */
+-static inline void load_LDT_nolock (mm_context_t *pc, int cpu)
+-{
+- int count = pc->size;
+-
+- if (likely(!count)) {
+- clear_LDT();
+- return;
+- }
+-
+- set_ldt_desc(cpu, pc->ldt, count);
+- load_LDT_desc();
-}
-
--#define atomic_dec_return(v) atomic_sub_return(1,(v))
--#define atomic_inc_return(v) atomic_add_return(1,(v))
+-static inline void load_LDT(mm_context_t *pc)
+-{
+- int cpu = get_cpu();
+- load_LDT_nolock(pc, cpu);
+- put_cpu();
+-}
+-
+-extern struct desc_ptr idt_descr;
+-
+-#endif /* !__ASSEMBLY__ */
+-
+-#endif
+diff --git a/include/asm-x86/desc_defs.h b/include/asm-x86/desc_defs.h
+index 0890040..e33f078 100644
+--- a/include/asm-x86/desc_defs.h
++++ b/include/asm-x86/desc_defs.h
+@@ -11,26 +11,36 @@
+
+ #include <linux/types.h>
+
++/*
++ * FIXME: Acessing the desc_struct through its fields is more elegant,
++ * and should be the one valid thing to do. However, a lot of open code
++ * still touches the a and b acessors, and doing this allow us to do it
++ * incrementally. We keep the signature as a struct, rather than an union,
++ * so we can get rid of it transparently in the future -- glommer
++ */
+ // 8 byte segment descriptor
+ struct desc_struct {
+- u16 limit0;
+- u16 base0;
+- unsigned base1 : 8, type : 4, s : 1, dpl : 2, p : 1;
+- unsigned limit : 4, avl : 1, l : 1, d : 1, g : 1, base2 : 8;
+-} __attribute__((packed));
++ union {
++ struct { unsigned int a, b; };
++ struct {
++ u16 limit0;
++ u16 base0;
++ unsigned base1: 8, type: 4, s: 1, dpl: 2, p: 1;
++ unsigned limit: 4, avl: 1, l: 1, d: 1, g: 1, base2: 8;
++ };
+
+-struct n_desc_struct {
+- unsigned int a,b;
+-};
++ };
++} __attribute__((packed));
+
+ enum {
+ GATE_INTERRUPT = 0xE,
+ GATE_TRAP = 0xF,
+ GATE_CALL = 0xC,
++ GATE_TASK = 0x5,
+ };
+
+ // 16byte gate
+-struct gate_struct {
++struct gate_struct64 {
+ u16 offset_low;
+ u16 segment;
+ unsigned ist : 3, zero0 : 5, type : 5, dpl : 2, p : 1;
+@@ -39,17 +49,18 @@ struct gate_struct {
+ u32 zero1;
+ } __attribute__((packed));
+
+-#define PTR_LOW(x) ((unsigned long)(x) & 0xFFFF)
+-#define PTR_MIDDLE(x) (((unsigned long)(x) >> 16) & 0xFFFF)
+-#define PTR_HIGH(x) ((unsigned long)(x) >> 32)
++#define PTR_LOW(x) ((unsigned long long)(x) & 0xFFFF)
++#define PTR_MIDDLE(x) (((unsigned long long)(x) >> 16) & 0xFFFF)
++#define PTR_HIGH(x) ((unsigned long long)(x) >> 32)
+
+ enum {
+ DESC_TSS = 0x9,
+ DESC_LDT = 0x2,
++ DESCTYPE_S = 0x10, /* !system */
+ };
+
+ // LDT or TSS descriptor in the GDT. 16 bytes.
+-struct ldttss_desc {
++struct ldttss_desc64 {
+ u16 limit0;
+ u16 base0;
+ unsigned base1 : 8, type : 5, dpl : 2, p : 1;
+@@ -58,6 +69,16 @@ struct ldttss_desc {
+ u32 zero1;
+ } __attribute__((packed));
+
++#ifdef CONFIG_X86_64
++typedef struct gate_struct64 gate_desc;
++typedef struct ldttss_desc64 ldt_desc;
++typedef struct ldttss_desc64 tss_desc;
++#else
++typedef struct desc_struct gate_desc;
++typedef struct desc_struct ldt_desc;
++typedef struct desc_struct tss_desc;
++#endif
++
+ struct desc_ptr {
+ unsigned short size;
+ unsigned long address;
+diff --git a/include/asm-x86/dma.h b/include/asm-x86/dma.h
+index 9f936c6..e9733ce 100644
+--- a/include/asm-x86/dma.h
++++ b/include/asm-x86/dma.h
+@@ -1,5 +1,319 @@
++/*
++ * linux/include/asm/dma.h: Defines for using and allocating dma channels.
++ * Written by Hennus Bergman, 1992.
++ * High DMA channel support & info by Hannu Savolainen
++ * and John Boyd, Nov. 1992.
++ */
++
++#ifndef _ASM_X86_DMA_H
++#define _ASM_X86_DMA_H
++
++#include <linux/spinlock.h> /* And spinlocks */
++#include <asm/io.h> /* need byte IO */
++#include <linux/delay.h>
++
++
++#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
++#define dma_outb outb_p
++#else
++#define dma_outb outb
++#endif
++
++#define dma_inb inb
++
++/*
++ * NOTES about DMA transfers:
++ *
++ * controller 1: channels 0-3, byte operations, ports 00-1F
++ * controller 2: channels 4-7, word operations, ports C0-DF
++ *
++ * - ALL registers are 8 bits only, regardless of transfer size
++ * - channel 4 is not used - cascades 1 into 2.
++ * - channels 0-3 are byte - addresses/counts are for physical bytes
++ * - channels 5-7 are word - addresses/counts are for physical words
++ * - transfers must not cross physical 64K (0-3) or 128K (5-7) boundaries
++ * - transfer count loaded to registers is 1 less than actual count
++ * - controller 2 offsets are all even (2x offsets for controller 1)
++ * - page registers for 5-7 don't use data bit 0, represent 128K pages
++ * - page registers for 0-3 use bit 0, represent 64K pages
++ *
++ * DMA transfers are limited to the lower 16MB of _physical_ memory.
++ * Note that addresses loaded into registers must be _physical_ addresses,
++ * not logical addresses (which may differ if paging is active).
++ *
++ * Address mapping for channels 0-3:
++ *
++ * A23 ... A16 A15 ... A8 A7 ... A0 (Physical addresses)
++ * | ... | | ... | | ... |
++ * | ... | | ... | | ... |
++ * | ... | | ... | | ... |
++ * P7 ... P0 A7 ... A0 A7 ... A0
++ * | Page | Addr MSB | Addr LSB | (DMA registers)
++ *
++ * Address mapping for channels 5-7:
++ *
++ * A23 ... A17 A16 A15 ... A9 A8 A7 ... A1 A0 (Physical addresses)
++ * | ... | \ \ ... \ \ \ ... \ \
++ * | ... | \ \ ... \ \ \ ... \ (not used)
++ * | ... | \ \ ... \ \ \ ... \
++ * P7 ... P1 (0) A7 A6 ... A0 A7 A6 ... A0
++ * | Page | Addr MSB | Addr LSB | (DMA registers)
++ *
++ * Again, channels 5-7 transfer _physical_ words (16 bits), so addresses
++ * and counts _must_ be word-aligned (the lowest address bit is _ignored_ at
++ * the hardware level, so odd-byte transfers aren't possible).
++ *
++ * Transfer count (_not # bytes_) is limited to 64K, represented as actual
++ * count - 1 : 64K => 0xFFFF, 1 => 0x0000. Thus, count is always 1 or more,
++ * and up to 128K bytes may be transferred on channels 5-7 in one operation.
++ *
++ */
++
++#define MAX_DMA_CHANNELS 8
++
+ #ifdef CONFIG_X86_32
+-# include "dma_32.h"
++
++/* The maximum address that we can perform a DMA transfer to on this platform */
++#define MAX_DMA_ADDRESS (PAGE_OFFSET+0x1000000)
++
++#else
++
++/* 16MB ISA DMA zone */
++#define MAX_DMA_PFN ((16*1024*1024) >> PAGE_SHIFT)
++
++/* 4GB broken PCI/AGP hardware bus master zone */
++#define MAX_DMA32_PFN ((4UL*1024*1024*1024) >> PAGE_SHIFT)
++
++/* Compat define for old dma zone */
++#define MAX_DMA_ADDRESS ((unsigned long)__va(MAX_DMA_PFN << PAGE_SHIFT))
++
++#endif
++
++/* 8237 DMA controllers */
++#define IO_DMA1_BASE 0x00 /* 8 bit slave DMA, channels 0..3 */
++#define IO_DMA2_BASE 0xC0 /* 16 bit master DMA, ch 4(=slave input)..7 */
++
++/* DMA controller registers */
++#define DMA1_CMD_REG 0x08 /* command register (w) */
++#define DMA1_STAT_REG 0x08 /* status register (r) */
++#define DMA1_REQ_REG 0x09 /* request register (w) */
++#define DMA1_MASK_REG 0x0A /* single-channel mask (w) */
++#define DMA1_MODE_REG 0x0B /* mode register (w) */
++#define DMA1_CLEAR_FF_REG 0x0C /* clear pointer flip-flop (w) */
++#define DMA1_TEMP_REG 0x0D /* Temporary Register (r) */
++#define DMA1_RESET_REG 0x0D /* Master Clear (w) */
++#define DMA1_CLR_MASK_REG 0x0E /* Clear Mask */
++#define DMA1_MASK_ALL_REG 0x0F /* all-channels mask (w) */
++
++#define DMA2_CMD_REG 0xD0 /* command register (w) */
++#define DMA2_STAT_REG 0xD0 /* status register (r) */
++#define DMA2_REQ_REG 0xD2 /* request register (w) */
++#define DMA2_MASK_REG 0xD4 /* single-channel mask (w) */
++#define DMA2_MODE_REG 0xD6 /* mode register (w) */
++#define DMA2_CLEAR_FF_REG 0xD8 /* clear pointer flip-flop (w) */
++#define DMA2_TEMP_REG 0xDA /* Temporary Register (r) */
++#define DMA2_RESET_REG 0xDA /* Master Clear (w) */
++#define DMA2_CLR_MASK_REG 0xDC /* Clear Mask */
++#define DMA2_MASK_ALL_REG 0xDE /* all-channels mask (w) */
++
++#define DMA_ADDR_0 0x00 /* DMA address registers */
++#define DMA_ADDR_1 0x02
++#define DMA_ADDR_2 0x04
++#define DMA_ADDR_3 0x06
++#define DMA_ADDR_4 0xC0
++#define DMA_ADDR_5 0xC4
++#define DMA_ADDR_6 0xC8
++#define DMA_ADDR_7 0xCC
++
++#define DMA_CNT_0 0x01 /* DMA count registers */
++#define DMA_CNT_1 0x03
++#define DMA_CNT_2 0x05
++#define DMA_CNT_3 0x07
++#define DMA_CNT_4 0xC2
++#define DMA_CNT_5 0xC6
++#define DMA_CNT_6 0xCA
++#define DMA_CNT_7 0xCE
++
++#define DMA_PAGE_0 0x87 /* DMA page registers */
++#define DMA_PAGE_1 0x83
++#define DMA_PAGE_2 0x81
++#define DMA_PAGE_3 0x82
++#define DMA_PAGE_5 0x8B
++#define DMA_PAGE_6 0x89
++#define DMA_PAGE_7 0x8A
++
++/* I/O to memory, no autoinit, increment, single mode */
++#define DMA_MODE_READ 0x44
++/* memory to I/O, no autoinit, increment, single mode */
++#define DMA_MODE_WRITE 0x48
++/* pass thru DREQ->HRQ, DACK<-HLDA only */
++#define DMA_MODE_CASCADE 0xC0
++
++#define DMA_AUTOINIT 0x10
++
++
++extern spinlock_t dma_spin_lock;
++
++static __inline__ unsigned long claim_dma_lock(void)
++{
++ unsigned long flags;
++ spin_lock_irqsave(&dma_spin_lock, flags);
++ return flags;
++}
++
++static __inline__ void release_dma_lock(unsigned long flags)
++{
++ spin_unlock_irqrestore(&dma_spin_lock, flags);
++}
++
++/* enable/disable a specific DMA channel */
++static __inline__ void enable_dma(unsigned int dmanr)
++{
++ if (dmanr <= 3)
++ dma_outb(dmanr, DMA1_MASK_REG);
++ else
++ dma_outb(dmanr & 3, DMA2_MASK_REG);
++}
++
++static __inline__ void disable_dma(unsigned int dmanr)
++{
++ if (dmanr <= 3)
++ dma_outb(dmanr | 4, DMA1_MASK_REG);
++ else
++ dma_outb((dmanr & 3) | 4, DMA2_MASK_REG);
++}
++
++/* Clear the 'DMA Pointer Flip Flop'.
++ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
++ * Use this once to initialize the FF to a known state.
++ * After that, keep track of it. :-)
++ * --- In order to do that, the DMA routines below should ---
++ * --- only be used while holding the DMA lock ! ---
++ */
++static __inline__ void clear_dma_ff(unsigned int dmanr)
++{
++ if (dmanr <= 3)
++ dma_outb(0, DMA1_CLEAR_FF_REG);
++ else
++ dma_outb(0, DMA2_CLEAR_FF_REG);
++}
++
++/* set mode (above) for a specific DMA channel */
++static __inline__ void set_dma_mode(unsigned int dmanr, char mode)
++{
++ if (dmanr <= 3)
++ dma_outb(mode | dmanr, DMA1_MODE_REG);
++ else
++ dma_outb(mode | (dmanr & 3), DMA2_MODE_REG);
++}
++
++/* Set only the page register bits of the transfer address.
++ * This is used for successive transfers when we know the contents of
++ * the lower 16 bits of the DMA current address register, but a 64k boundary
++ * may have been crossed.
++ */
++static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
++{
++ switch (dmanr) {
++ case 0:
++ dma_outb(pagenr, DMA_PAGE_0);
++ break;
++ case 1:
++ dma_outb(pagenr, DMA_PAGE_1);
++ break;
++ case 2:
++ dma_outb(pagenr, DMA_PAGE_2);
++ break;
++ case 3:
++ dma_outb(pagenr, DMA_PAGE_3);
++ break;
++ case 5:
++ dma_outb(pagenr & 0xfe, DMA_PAGE_5);
++ break;
++ case 6:
++ dma_outb(pagenr & 0xfe, DMA_PAGE_6);
++ break;
++ case 7:
++ dma_outb(pagenr & 0xfe, DMA_PAGE_7);
++ break;
++ }
++}
++
++
++/* Set transfer address & page bits for specific DMA channel.
++ * Assumes dma flipflop is clear.
++ */
++static __inline__ void set_dma_addr(unsigned int dmanr, unsigned int a)
++{
++ set_dma_page(dmanr, a>>16);
++ if (dmanr <= 3) {
++ dma_outb(a & 0xff, ((dmanr & 3) << 1) + IO_DMA1_BASE);
++ dma_outb((a >> 8) & 0xff, ((dmanr & 3) << 1) + IO_DMA1_BASE);
++ } else {
++ dma_outb((a >> 1) & 0xff, ((dmanr & 3) << 2) + IO_DMA2_BASE);
++ dma_outb((a >> 9) & 0xff, ((dmanr & 3) << 2) + IO_DMA2_BASE);
++ }
++}
++
++
++/* Set transfer size (max 64k for DMA0..3, 128k for DMA5..7) for
++ * a specific DMA channel.
++ * You must ensure the parameters are valid.
++ * NOTE: from a manual: "the number of transfers is one more
++ * than the initial word count"! This is taken into account.
++ * Assumes dma flip-flop is clear.
++ * NOTE 2: "count" represents _bytes_ and must be even for channels 5-7.
++ */
++static __inline__ void set_dma_count(unsigned int dmanr, unsigned int count)
++{
++ count--;
++ if (dmanr <= 3) {
++ dma_outb(count & 0xff, ((dmanr & 3) << 1) + 1 + IO_DMA1_BASE);
++ dma_outb((count >> 8) & 0xff,
++ ((dmanr & 3) << 1) + 1 + IO_DMA1_BASE);
++ } else {
++ dma_outb((count >> 1) & 0xff,
++ ((dmanr & 3) << 2) + 2 + IO_DMA2_BASE);
++ dma_outb((count >> 9) & 0xff,
++ ((dmanr & 3) << 2) + 2 + IO_DMA2_BASE);
++ }
++}
++
++
++/* Get DMA residue count. After a DMA transfer, this
++ * should return zero. Reading this while a DMA transfer is
++ * still in progress will return unpredictable results.
++ * If called before the channel has been used, it may return 1.
++ * Otherwise, it returns the number of _bytes_ left to transfer.
++ *
++ * Assumes DMA flip-flop is clear.
++ */
++static __inline__ int get_dma_residue(unsigned int dmanr)
++{
++ unsigned int io_port;
++ /* using short to get 16-bit wrap around */
++ unsigned short count;
++
++ io_port = (dmanr <= 3) ? ((dmanr & 3) << 1) + 1 + IO_DMA1_BASE
++ : ((dmanr & 3) << 2) + 2 + IO_DMA2_BASE;
++
++ count = 1 + dma_inb(io_port);
++ count += dma_inb(io_port) << 8;
++
++ return (dmanr <= 3) ? count : (count << 1);
++}
++
++
++/* These are in kernel/dma.c: */
++extern int request_dma(unsigned int dmanr, const char *device_id);
++extern void free_dma(unsigned int dmanr);
++
++/* From PCI */
++
++#ifdef CONFIG_PCI
++extern int isa_dma_bridge_buggy;
+ #else
+-# include "dma_64.h"
++#define isa_dma_bridge_buggy (0)
+ #endif
++
++#endif /* _ASM_X86_DMA_H */
+diff --git a/include/asm-x86/dma_32.h b/include/asm-x86/dma_32.h
+deleted file mode 100644
+index d23aac8..0000000
+--- a/include/asm-x86/dma_32.h
++++ /dev/null
+@@ -1,297 +0,0 @@
+-/* $Id: dma.h,v 1.7 1992/12/14 00:29:34 root Exp root $
+- * linux/include/asm/dma.h: Defines for using and allocating dma channels.
+- * Written by Hennus Bergman, 1992.
+- * High DMA channel support & info by Hannu Savolainen
+- * and John Boyd, Nov. 1992.
+- */
+-
+-#ifndef _ASM_DMA_H
+-#define _ASM_DMA_H
+-
+-#include <linux/spinlock.h> /* And spinlocks */
+-#include <asm/io.h> /* need byte IO */
+-#include <linux/delay.h>
+-
+-
+-#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
+-#define dma_outb outb_p
+-#else
+-#define dma_outb outb
+-#endif
+-
+-#define dma_inb inb
-
-/*
-- * atomic_inc_and_test - increment and test
-- * @v: pointer of type atomic_t
+- * NOTES about DMA transfers:
- *
-- * Atomically increments @v by 1
-- * and returns true if the result is zero, or false for all
-- * other cases.
-- */
--#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0)
+- * controller 1: channels 0-3, byte operations, ports 00-1F
+- * controller 2: channels 4-7, word operations, ports C0-DF
+- *
+- * - ALL registers are 8 bits only, regardless of transfer size
+- * - channel 4 is not used - cascades 1 into 2.
+- * - channels 0-3 are byte - addresses/counts are for physical bytes
+- * - channels 5-7 are word - addresses/counts are for physical words
+- * - transfers must not cross physical 64K (0-3) or 128K (5-7) boundaries
+- * - transfer count loaded to registers is 1 less than actual count
+- * - controller 2 offsets are all even (2x offsets for controller 1)
+- * - page registers for 5-7 don't use data bit 0, represent 128K pages
+- * - page registers for 0-3 use bit 0, represent 64K pages
+- *
+- * DMA transfers are limited to the lower 16MB of _physical_ memory.
+- * Note that addresses loaded into registers must be _physical_ addresses,
+- * not logical addresses (which may differ if paging is active).
+- *
+- * Address mapping for channels 0-3:
+- *
+- * A23 ... A16 A15 ... A8 A7 ... A0 (Physical addresses)
+- * | ... | | ... | | ... |
+- * | ... | | ... | | ... |
+- * | ... | | ... | | ... |
+- * P7 ... P0 A7 ... A0 A7 ... A0
+- * | Page | Addr MSB | Addr LSB | (DMA registers)
+- *
+- * Address mapping for channels 5-7:
+- *
+- * A23 ... A17 A16 A15 ... A9 A8 A7 ... A1 A0 (Physical addresses)
+- * | ... | \ \ ... \ \ \ ... \ \
+- * | ... | \ \ ... \ \ \ ... \ (not used)
+- * | ... | \ \ ... \ \ \ ... \
+- * P7 ... P1 (0) A7 A6 ... A0 A7 A6 ... A0
+- * | Page | Addr MSB | Addr LSB | (DMA registers)
+- *
+- * Again, channels 5-7 transfer _physical_ words (16 bits), so addresses
+- * and counts _must_ be word-aligned (the lowest address bit is _ignored_ at
+- * the hardware level, so odd-byte transfers aren't possible).
+- *
+- * Transfer count (_not # bytes_) is limited to 64K, represented as actual
+- * count - 1 : 64K => 0xFFFF, 1 => 0x0000. Thus, count is always 1 or more,
+- * and up to 128K bytes may be transferred on channels 5-7 in one operation.
+- *
+- */
+-
+-#define MAX_DMA_CHANNELS 8
+-
+-/* The maximum address that we can perform a DMA transfer to on this platform */
+-#define MAX_DMA_ADDRESS (PAGE_OFFSET+0x1000000)
+-
+-/* 8237 DMA controllers */
+-#define IO_DMA1_BASE 0x00 /* 8 bit slave DMA, channels 0..3 */
+-#define IO_DMA2_BASE 0xC0 /* 16 bit master DMA, ch 4(=slave input)..7 */
+-
+-/* DMA controller registers */
+-#define DMA1_CMD_REG 0x08 /* command register (w) */
+-#define DMA1_STAT_REG 0x08 /* status register (r) */
+-#define DMA1_REQ_REG 0x09 /* request register (w) */
+-#define DMA1_MASK_REG 0x0A /* single-channel mask (w) */
+-#define DMA1_MODE_REG 0x0B /* mode register (w) */
+-#define DMA1_CLEAR_FF_REG 0x0C /* clear pointer flip-flop (w) */
+-#define DMA1_TEMP_REG 0x0D /* Temporary Register (r) */
+-#define DMA1_RESET_REG 0x0D /* Master Clear (w) */
+-#define DMA1_CLR_MASK_REG 0x0E /* Clear Mask */
+-#define DMA1_MASK_ALL_REG 0x0F /* all-channels mask (w) */
+-
+-#define DMA2_CMD_REG 0xD0 /* command register (w) */
+-#define DMA2_STAT_REG 0xD0 /* status register (r) */
+-#define DMA2_REQ_REG 0xD2 /* request register (w) */
+-#define DMA2_MASK_REG 0xD4 /* single-channel mask (w) */
+-#define DMA2_MODE_REG 0xD6 /* mode register (w) */
+-#define DMA2_CLEAR_FF_REG 0xD8 /* clear pointer flip-flop (w) */
+-#define DMA2_TEMP_REG 0xDA /* Temporary Register (r) */
+-#define DMA2_RESET_REG 0xDA /* Master Clear (w) */
+-#define DMA2_CLR_MASK_REG 0xDC /* Clear Mask */
+-#define DMA2_MASK_ALL_REG 0xDE /* all-channels mask (w) */
+-
+-#define DMA_ADDR_0 0x00 /* DMA address registers */
+-#define DMA_ADDR_1 0x02
+-#define DMA_ADDR_2 0x04
+-#define DMA_ADDR_3 0x06
+-#define DMA_ADDR_4 0xC0
+-#define DMA_ADDR_5 0xC4
+-#define DMA_ADDR_6 0xC8
+-#define DMA_ADDR_7 0xCC
+-
+-#define DMA_CNT_0 0x01 /* DMA count registers */
+-#define DMA_CNT_1 0x03
+-#define DMA_CNT_2 0x05
+-#define DMA_CNT_3 0x07
+-#define DMA_CNT_4 0xC2
+-#define DMA_CNT_5 0xC6
+-#define DMA_CNT_6 0xCA
+-#define DMA_CNT_7 0xCE
+-
+-#define DMA_PAGE_0 0x87 /* DMA page registers */
+-#define DMA_PAGE_1 0x83
+-#define DMA_PAGE_2 0x81
+-#define DMA_PAGE_3 0x82
+-#define DMA_PAGE_5 0x8B
+-#define DMA_PAGE_6 0x89
+-#define DMA_PAGE_7 0x8A
+-
+-#define DMA_MODE_READ 0x44 /* I/O to memory, no autoinit, increment, single mode */
+-#define DMA_MODE_WRITE 0x48 /* memory to I/O, no autoinit, increment, single mode */
+-#define DMA_MODE_CASCADE 0xC0 /* pass thru DREQ->HRQ, DACK<-HLDA only */
-
--#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
--#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+-#define DMA_AUTOINIT 0x10
-
--#define atomic_inc(v) atomic_add(1,(v))
--#define atomic_dec(v) atomic_sub(1,(v))
-
--static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+-extern spinlock_t dma_spin_lock;
+-
+-static __inline__ unsigned long claim_dma_lock(void)
-{
-- int ret;
- unsigned long flags;
+- spin_lock_irqsave(&dma_spin_lock, flags);
+- return flags;
+-}
-
-- local_irq_save(flags);
-- ret = v->counter;
-- if (likely(ret == old))
-- v->counter = new;
-- local_irq_restore(flags);
+-static __inline__ void release_dma_lock(unsigned long flags)
+-{
+- spin_unlock_irqrestore(&dma_spin_lock, flags);
+-}
-
-- return ret;
+-/* enable/disable a specific DMA channel */
+-static __inline__ void enable_dma(unsigned int dmanr)
+-{
+- if (dmanr<=3)
+- dma_outb(dmanr, DMA1_MASK_REG);
+- else
+- dma_outb(dmanr & 3, DMA2_MASK_REG);
-}
-
--#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+-static __inline__ void disable_dma(unsigned int dmanr)
+-{
+- if (dmanr<=3)
+- dma_outb(dmanr | 4, DMA1_MASK_REG);
+- else
+- dma_outb((dmanr & 3) | 4, DMA2_MASK_REG);
+-}
-
--static inline int atomic_add_unless(atomic_t *v, int a, int u)
+-/* Clear the 'DMA Pointer Flip Flop'.
+- * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+- * Use this once to initialize the FF to a known state.
+- * After that, keep track of it. :-)
+- * --- In order to do that, the DMA routines below should ---
+- * --- only be used while holding the DMA lock ! ---
+- */
+-static __inline__ void clear_dma_ff(unsigned int dmanr)
-{
-- int ret;
-- unsigned long flags;
+- if (dmanr<=3)
+- dma_outb(0, DMA1_CLEAR_FF_REG);
+- else
+- dma_outb(0, DMA2_CLEAR_FF_REG);
+-}
-
-- local_irq_save(flags);
-- ret = v->counter;
-- if (ret != u)
-- v->counter += a;
-- local_irq_restore(flags);
+-/* set mode (above) for a specific DMA channel */
+-static __inline__ void set_dma_mode(unsigned int dmanr, char mode)
+-{
+- if (dmanr<=3)
+- dma_outb(mode | dmanr, DMA1_MODE_REG);
+- else
+- dma_outb(mode | (dmanr&3), DMA2_MODE_REG);
+-}
-
-- return ret != u;
+-/* Set only the page register bits of the transfer address.
+- * This is used for successive transfers when we know the contents of
+- * the lower 16 bits of the DMA current address register, but a 64k boundary
+- * may have been crossed.
+- */
+-static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+-{
+- switch(dmanr) {
+- case 0:
+- dma_outb(pagenr, DMA_PAGE_0);
+- break;
+- case 1:
+- dma_outb(pagenr, DMA_PAGE_1);
+- break;
+- case 2:
+- dma_outb(pagenr, DMA_PAGE_2);
+- break;
+- case 3:
+- dma_outb(pagenr, DMA_PAGE_3);
+- break;
+- case 5:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_5);
+- break;
+- case 6:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_6);
+- break;
+- case 7:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_7);
+- break;
+- }
-}
--#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
-
--static __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
+-
+-/* Set transfer address & page bits for specific DMA channel.
+- * Assumes dma flipflop is clear.
+- */
+-static __inline__ void set_dma_addr(unsigned int dmanr, unsigned int a)
-{
-- unsigned long flags;
+- set_dma_page(dmanr, a>>16);
+- if (dmanr <= 3) {
+- dma_outb( a & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
+- dma_outb( (a>>8) & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
+- } else {
+- dma_outb( (a>>1) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
+- dma_outb( (a>>9) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
+- }
+-}
-
-- local_irq_save(flags);
-- *(long *)v &= ~mask;
-- local_irq_restore(flags);
+-
+-/* Set transfer size (max 64k for DMA0..3, 128k for DMA5..7) for
+- * a specific DMA channel.
+- * You must ensure the parameters are valid.
+- * NOTE: from a manual: "the number of transfers is one more
+- * than the initial word count"! This is taken into account.
+- * Assumes dma flip-flop is clear.
+- * NOTE 2: "count" represents _bytes_ and must be even for channels 5-7.
+- */
+-static __inline__ void set_dma_count(unsigned int dmanr, unsigned int count)
+-{
+- count--;
+- if (dmanr <= 3) {
+- dma_outb( count & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
+- dma_outb( (count>>8) & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
+- } else {
+- dma_outb( (count>>1) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
+- dma_outb( (count>>9) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
+- }
-}
-
--static __inline__ void atomic_set_mask(unsigned int mask, atomic_t *v)
+-
+-/* Get DMA residue count. After a DMA transfer, this
+- * should return zero. Reading this while a DMA transfer is
+- * still in progress will return unpredictable results.
+- * If called before the channel has been used, it may return 1.
+- * Otherwise, it returns the number of _bytes_ left to transfer.
+- *
+- * Assumes DMA flip-flop is clear.
+- */
+-static __inline__ int get_dma_residue(unsigned int dmanr)
-{
-- unsigned long flags;
+- unsigned int io_port = (dmanr<=3)? ((dmanr&3)<<1) + 1 + IO_DMA1_BASE
+- : ((dmanr&3)<<2) + 2 + IO_DMA2_BASE;
-
-- local_irq_save(flags);
-- *(long *)v |= mask;
-- local_irq_restore(flags);
+- /* using short to get 16-bit wrap around */
+- unsigned short count;
+-
+- count = 1 + dma_inb(io_port);
+- count += dma_inb(io_port) << 8;
+-
+- return (dmanr<=3)? count : (count<<1);
-}
-
--/* Atomic operations are already serializing on SH */
--#define smp_mb__before_atomic_dec() barrier()
--#define smp_mb__after_atomic_dec() barrier()
--#define smp_mb__before_atomic_inc() barrier()
--#define smp_mb__after_atomic_inc() barrier()
-
--#include <asm-generic/atomic.h>
--#endif /* __ASM_SH64_ATOMIC_H */
-diff --git a/include/asm-sh64/auxvec.h b/include/asm-sh64/auxvec.h
-deleted file mode 100644
-index 1ad5a44..0000000
---- a/include/asm-sh64/auxvec.h
-+++ /dev/null
-@@ -1,4 +0,0 @@
--#ifndef __ASM_SH64_AUXVEC_H
--#define __ASM_SH64_AUXVEC_H
+-/* These are in kernel/dma.c: */
+-extern int request_dma(unsigned int dmanr, const char * device_id); /* reserve a DMA channel */
+-extern void free_dma(unsigned int dmanr); /* release it again */
-
--#endif /* __ASM_SH64_AUXVEC_H */
-diff --git a/include/asm-sh64/bitops.h b/include/asm-sh64/bitops.h
+-/* From PCI */
+-
+-#ifdef CONFIG_PCI
+-extern int isa_dma_bridge_buggy;
+-#else
+-#define isa_dma_bridge_buggy (0)
+-#endif
+-
+-#endif /* _ASM_DMA_H */
+diff --git a/include/asm-x86/dma_64.h b/include/asm-x86/dma_64.h
deleted file mode 100644
-index 600c59e..0000000
---- a/include/asm-sh64/bitops.h
+index a37c16f..0000000
+--- a/include/asm-x86/dma_64.h
+++ /dev/null
-@@ -1,155 +0,0 @@
--#ifndef __ASM_SH64_BITOPS_H
--#define __ASM_SH64_BITOPS_H
--
+@@ -1,304 +0,0 @@
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/bitops.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * linux/include/asm/dma.h: Defines for using and allocating dma channels.
+- * Written by Hennus Bergman, 1992.
+- * High DMA channel support & info by Hannu Savolainen
+- * and John Boyd, Nov. 1992.
- */
-
--#ifdef __KERNEL__
+-#ifndef _ASM_DMA_H
+-#define _ASM_DMA_H
-
--#ifndef _LINUX_BITOPS_H
--#error only <linux/bitops.h> can be included directly
--#endif
+-#include <linux/spinlock.h> /* And spinlocks */
+-#include <asm/io.h> /* need byte IO */
+-#include <linux/delay.h>
-
--#include <linux/compiler.h>
--#include <asm/system.h>
--/* For __swab32 */
--#include <asm/byteorder.h>
-
--static __inline__ void set_bit(int nr, volatile void * addr)
--{
-- int mask;
-- volatile unsigned int *a = addr;
-- unsigned long flags;
+-#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
+-#define dma_outb outb_p
+-#else
+-#define dma_outb outb
+-#endif
-
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- *a |= mask;
-- local_irq_restore(flags);
--}
+-#define dma_inb inb
-
-/*
-- * clear_bit() doesn't provide any barrier for the compiler.
-- */
--#define smp_mb__before_clear_bit() barrier()
--#define smp_mb__after_clear_bit() barrier()
--static inline void clear_bit(int nr, volatile unsigned long *a)
--{
-- int mask;
-- unsigned long flags;
+- * NOTES about DMA transfers:
+- *
+- * controller 1: channels 0-3, byte operations, ports 00-1F
+- * controller 2: channels 4-7, word operations, ports C0-DF
+- *
+- * - ALL registers are 8 bits only, regardless of transfer size
+- * - channel 4 is not used - cascades 1 into 2.
+- * - channels 0-3 are byte - addresses/counts are for physical bytes
+- * - channels 5-7 are word - addresses/counts are for physical words
+- * - transfers must not cross physical 64K (0-3) or 128K (5-7) boundaries
+- * - transfer count loaded to registers is 1 less than actual count
+- * - controller 2 offsets are all even (2x offsets for controller 1)
+- * - page registers for 5-7 don't use data bit 0, represent 128K pages
+- * - page registers for 0-3 use bit 0, represent 64K pages
+- *
+- * DMA transfers are limited to the lower 16MB of _physical_ memory.
+- * Note that addresses loaded into registers must be _physical_ addresses,
+- * not logical addresses (which may differ if paging is active).
+- *
+- * Address mapping for channels 0-3:
+- *
+- * A23 ... A16 A15 ... A8 A7 ... A0 (Physical addresses)
+- * | ... | | ... | | ... |
+- * | ... | | ... | | ... |
+- * | ... | | ... | | ... |
+- * P7 ... P0 A7 ... A0 A7 ... A0
+- * | Page | Addr MSB | Addr LSB | (DMA registers)
+- *
+- * Address mapping for channels 5-7:
+- *
+- * A23 ... A17 A16 A15 ... A9 A8 A7 ... A1 A0 (Physical addresses)
+- * | ... | \ \ ... \ \ \ ... \ \
+- * | ... | \ \ ... \ \ \ ... \ (not used)
+- * | ... | \ \ ... \ \ \ ... \
+- * P7 ... P1 (0) A7 A6 ... A0 A7 A6 ... A0
+- * | Page | Addr MSB | Addr LSB | (DMA registers)
+- *
+- * Again, channels 5-7 transfer _physical_ words (16 bits), so addresses
+- * and counts _must_ be word-aligned (the lowest address bit is _ignored_ at
+- * the hardware level, so odd-byte transfers aren't possible).
+- *
+- * Transfer count (_not # bytes_) is limited to 64K, represented as actual
+- * count - 1 : 64K => 0xFFFF, 1 => 0x0000. Thus, count is always 1 or more,
+- * and up to 128K bytes may be transferred on channels 5-7 in one operation.
+- *
+- */
+-
+-#define MAX_DMA_CHANNELS 8
+-
+-
+-/* 16MB ISA DMA zone */
+-#define MAX_DMA_PFN ((16*1024*1024) >> PAGE_SHIFT)
+-
+-/* 4GB broken PCI/AGP hardware bus master zone */
+-#define MAX_DMA32_PFN ((4UL*1024*1024*1024) >> PAGE_SHIFT)
+-
+-/* Compat define for old dma zone */
+-#define MAX_DMA_ADDRESS ((unsigned long)__va(MAX_DMA_PFN << PAGE_SHIFT))
+-
+-/* 8237 DMA controllers */
+-#define IO_DMA1_BASE 0x00 /* 8 bit slave DMA, channels 0..3 */
+-#define IO_DMA2_BASE 0xC0 /* 16 bit master DMA, ch 4(=slave input)..7 */
+-
+-/* DMA controller registers */
+-#define DMA1_CMD_REG 0x08 /* command register (w) */
+-#define DMA1_STAT_REG 0x08 /* status register (r) */
+-#define DMA1_REQ_REG 0x09 /* request register (w) */
+-#define DMA1_MASK_REG 0x0A /* single-channel mask (w) */
+-#define DMA1_MODE_REG 0x0B /* mode register (w) */
+-#define DMA1_CLEAR_FF_REG 0x0C /* clear pointer flip-flop (w) */
+-#define DMA1_TEMP_REG 0x0D /* Temporary Register (r) */
+-#define DMA1_RESET_REG 0x0D /* Master Clear (w) */
+-#define DMA1_CLR_MASK_REG 0x0E /* Clear Mask */
+-#define DMA1_MASK_ALL_REG 0x0F /* all-channels mask (w) */
+-
+-#define DMA2_CMD_REG 0xD0 /* command register (w) */
+-#define DMA2_STAT_REG 0xD0 /* status register (r) */
+-#define DMA2_REQ_REG 0xD2 /* request register (w) */
+-#define DMA2_MASK_REG 0xD4 /* single-channel mask (w) */
+-#define DMA2_MODE_REG 0xD6 /* mode register (w) */
+-#define DMA2_CLEAR_FF_REG 0xD8 /* clear pointer flip-flop (w) */
+-#define DMA2_TEMP_REG 0xDA /* Temporary Register (r) */
+-#define DMA2_RESET_REG 0xDA /* Master Clear (w) */
+-#define DMA2_CLR_MASK_REG 0xDC /* Clear Mask */
+-#define DMA2_MASK_ALL_REG 0xDE /* all-channels mask (w) */
+-
+-#define DMA_ADDR_0 0x00 /* DMA address registers */
+-#define DMA_ADDR_1 0x02
+-#define DMA_ADDR_2 0x04
+-#define DMA_ADDR_3 0x06
+-#define DMA_ADDR_4 0xC0
+-#define DMA_ADDR_5 0xC4
+-#define DMA_ADDR_6 0xC8
+-#define DMA_ADDR_7 0xCC
+-
+-#define DMA_CNT_0 0x01 /* DMA count registers */
+-#define DMA_CNT_1 0x03
+-#define DMA_CNT_2 0x05
+-#define DMA_CNT_3 0x07
+-#define DMA_CNT_4 0xC2
+-#define DMA_CNT_5 0xC6
+-#define DMA_CNT_6 0xCA
+-#define DMA_CNT_7 0xCE
+-
+-#define DMA_PAGE_0 0x87 /* DMA page registers */
+-#define DMA_PAGE_1 0x83
+-#define DMA_PAGE_2 0x81
+-#define DMA_PAGE_3 0x82
+-#define DMA_PAGE_5 0x8B
+-#define DMA_PAGE_6 0x89
+-#define DMA_PAGE_7 0x8A
+-
+-#define DMA_MODE_READ 0x44 /* I/O to memory, no autoinit, increment, single mode */
+-#define DMA_MODE_WRITE 0x48 /* memory to I/O, no autoinit, increment, single mode */
+-#define DMA_MODE_CASCADE 0xC0 /* pass thru DREQ->HRQ, DACK<-HLDA only */
-
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- *a &= ~mask;
-- local_irq_restore(flags);
--}
+-#define DMA_AUTOINIT 0x10
-
--static __inline__ void change_bit(int nr, volatile void * addr)
+-
+-extern spinlock_t dma_spin_lock;
+-
+-static __inline__ unsigned long claim_dma_lock(void)
-{
-- int mask;
-- volatile unsigned int *a = addr;
- unsigned long flags;
--
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- *a ^= mask;
-- local_irq_restore(flags);
+- spin_lock_irqsave(&dma_spin_lock, flags);
+- return flags;
-}
-
--static __inline__ int test_and_set_bit(int nr, volatile void * addr)
+-static __inline__ void release_dma_lock(unsigned long flags)
-{
-- int mask, retval;
-- volatile unsigned int *a = addr;
-- unsigned long flags;
+- spin_unlock_irqrestore(&dma_spin_lock, flags);
+-}
-
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- retval = (mask & *a) != 0;
-- *a |= mask;
-- local_irq_restore(flags);
+-/* enable/disable a specific DMA channel */
+-static __inline__ void enable_dma(unsigned int dmanr)
+-{
+- if (dmanr<=3)
+- dma_outb(dmanr, DMA1_MASK_REG);
+- else
+- dma_outb(dmanr & 3, DMA2_MASK_REG);
+-}
-
-- return retval;
+-static __inline__ void disable_dma(unsigned int dmanr)
+-{
+- if (dmanr<=3)
+- dma_outb(dmanr | 4, DMA1_MASK_REG);
+- else
+- dma_outb((dmanr & 3) | 4, DMA2_MASK_REG);
-}
-
--static __inline__ int test_and_clear_bit(int nr, volatile void * addr)
+-/* Clear the 'DMA Pointer Flip Flop'.
+- * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+- * Use this once to initialize the FF to a known state.
+- * After that, keep track of it. :-)
+- * --- In order to do that, the DMA routines below should ---
+- * --- only be used while holding the DMA lock ! ---
+- */
+-static __inline__ void clear_dma_ff(unsigned int dmanr)
-{
-- int mask, retval;
-- volatile unsigned int *a = addr;
-- unsigned long flags;
+- if (dmanr<=3)
+- dma_outb(0, DMA1_CLEAR_FF_REG);
+- else
+- dma_outb(0, DMA2_CLEAR_FF_REG);
+-}
-
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- retval = (mask & *a) != 0;
-- *a &= ~mask;
-- local_irq_restore(flags);
+-/* set mode (above) for a specific DMA channel */
+-static __inline__ void set_dma_mode(unsigned int dmanr, char mode)
+-{
+- if (dmanr<=3)
+- dma_outb(mode | dmanr, DMA1_MODE_REG);
+- else
+- dma_outb(mode | (dmanr&3), DMA2_MODE_REG);
+-}
-
-- return retval;
+-/* Set only the page register bits of the transfer address.
+- * This is used for successive transfers when we know the contents of
+- * the lower 16 bits of the DMA current address register, but a 64k boundary
+- * may have been crossed.
+- */
+-static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+-{
+- switch(dmanr) {
+- case 0:
+- dma_outb(pagenr, DMA_PAGE_0);
+- break;
+- case 1:
+- dma_outb(pagenr, DMA_PAGE_1);
+- break;
+- case 2:
+- dma_outb(pagenr, DMA_PAGE_2);
+- break;
+- case 3:
+- dma_outb(pagenr, DMA_PAGE_3);
+- break;
+- case 5:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_5);
+- break;
+- case 6:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_6);
+- break;
+- case 7:
+- dma_outb(pagenr & 0xfe, DMA_PAGE_7);
+- break;
+- }
-}
-
--static __inline__ int test_and_change_bit(int nr, volatile void * addr)
+-
+-/* Set transfer address & page bits for specific DMA channel.
+- * Assumes dma flipflop is clear.
+- */
+-static __inline__ void set_dma_addr(unsigned int dmanr, unsigned int a)
-{
-- int mask, retval;
-- volatile unsigned int *a = addr;
-- unsigned long flags;
+- set_dma_page(dmanr, a>>16);
+- if (dmanr <= 3) {
+- dma_outb( a & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
+- dma_outb( (a>>8) & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
+- } else {
+- dma_outb( (a>>1) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
+- dma_outb( (a>>9) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
+- }
+-}
-
-- a += nr >> 5;
-- mask = 1 << (nr & 0x1f);
-- local_irq_save(flags);
-- retval = (mask & *a) != 0;
-- *a ^= mask;
-- local_irq_restore(flags);
-
-- return retval;
+-/* Set transfer size (max 64k for DMA1..3, 128k for DMA5..7) for
+- * a specific DMA channel.
+- * You must ensure the parameters are valid.
+- * NOTE: from a manual: "the number of transfers is one more
+- * than the initial word count"! This is taken into account.
+- * Assumes dma flip-flop is clear.
+- * NOTE 2: "count" represents _bytes_ and must be even for channels 5-7.
+- */
+-static __inline__ void set_dma_count(unsigned int dmanr, unsigned int count)
+-{
+- count--;
+- if (dmanr <= 3) {
+- dma_outb( count & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
+- dma_outb( (count>>8) & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
+- } else {
+- dma_outb( (count>>1) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
+- dma_outb( (count>>9) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
+- }
-}
-
--#include <asm-generic/bitops/non-atomic.h>
-
--static __inline__ unsigned long ffz(unsigned long word)
+-/* Get DMA residue count. After a DMA transfer, this
+- * should return zero. Reading this while a DMA transfer is
+- * still in progress will return unpredictable results.
+- * If called before the channel has been used, it may return 1.
+- * Otherwise, it returns the number of _bytes_ left to transfer.
+- *
+- * Assumes DMA flip-flop is clear.
+- */
+-static __inline__ int get_dma_residue(unsigned int dmanr)
-{
-- unsigned long result, __d2, __d3;
+- unsigned int io_port = (dmanr<=3)? ((dmanr&3)<<1) + 1 + IO_DMA1_BASE
+- : ((dmanr&3)<<2) + 2 + IO_DMA2_BASE;
-
-- __asm__("gettr tr0, %2\n\t"
-- "pta $+32, tr0\n\t"
-- "andi %1, 1, %3\n\t"
-- "beq %3, r63, tr0\n\t"
-- "pta $+4, tr0\n"
-- "0:\n\t"
-- "shlri.l %1, 1, %1\n\t"
-- "addi %0, 1, %0\n\t"
-- "andi %1, 1, %3\n\t"
-- "beqi %3, 1, tr0\n"
-- "1:\n\t"
-- "ptabs %2, tr0\n\t"
-- : "=r" (result), "=r" (word), "=r" (__d2), "=r" (__d3)
-- : "0" (0L), "1" (word));
+- /* using short to get 16-bit wrap around */
+- unsigned short count;
-
-- return result;
+- count = 1 + dma_inb(io_port);
+- count += dma_inb(io_port) << 8;
+-
+- return (dmanr<=3)? count : (count<<1);
-}
-
--#include <asm-generic/bitops/__ffs.h>
--#include <asm-generic/bitops/find.h>
--#include <asm-generic/bitops/hweight.h>
--#include <asm-generic/bitops/lock.h>
--#include <asm-generic/bitops/sched.h>
--#include <asm-generic/bitops/ffs.h>
--#include <asm-generic/bitops/ext2-non-atomic.h>
--#include <asm-generic/bitops/ext2-atomic.h>
--#include <asm-generic/bitops/minix.h>
--#include <asm-generic/bitops/fls.h>
--#include <asm-generic/bitops/fls64.h>
-
--#endif /* __KERNEL__ */
+-/* These are in kernel/dma.c: */
+-extern int request_dma(unsigned int dmanr, const char * device_id); /* reserve a DMA channel */
+-extern void free_dma(unsigned int dmanr); /* release it again */
-
--#endif /* __ASM_SH64_BITOPS_H */
-diff --git a/include/asm-sh64/bug.h b/include/asm-sh64/bug.h
-deleted file mode 100644
-index f3a9c92..0000000
---- a/include/asm-sh64/bug.h
-+++ /dev/null
-@@ -1,19 +0,0 @@
--#ifndef __ASM_SH64_BUG_H
--#define __ASM_SH64_BUG_H
+-/* From PCI */
-
--#ifdef CONFIG_BUG
--/*
-- * Tell the user there is some problem, then force a segfault (in process
-- * context) or a panic (interrupt context).
-- */
--#define BUG() do { \
-- printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
-- *(volatile int *)0 = 0; \
--} while (0)
+-#ifdef CONFIG_PCI
+-extern int isa_dma_bridge_buggy;
+-#else
+-#define isa_dma_bridge_buggy (0)
+-#endif
+-
+-#endif /* _ASM_DMA_H */
+diff --git a/include/asm-x86/dmi.h b/include/asm-x86/dmi.h
+index 8e2b0e6..1241e6a 100644
+--- a/include/asm-x86/dmi.h
++++ b/include/asm-x86/dmi.h
+@@ -5,9 +5,6 @@
+
+ #ifdef CONFIG_X86_32
+
+-/* Use early IO mappings for DMI because it's initialized early */
+-#define dmi_ioremap bt_ioremap
+-#define dmi_iounmap bt_iounmap
+ #define dmi_alloc alloc_bootmem
+
+ #else /* CONFIG_X86_32 */
+@@ -22,14 +19,15 @@ extern char dmi_alloc_data[DMI_MAX_DATA];
+ static inline void *dmi_alloc(unsigned len)
+ {
+ int idx = dmi_alloc_index;
+- if ((dmi_alloc_index += len) > DMI_MAX_DATA)
++ if ((dmi_alloc_index + len) > DMI_MAX_DATA)
+ return NULL;
++ dmi_alloc_index += len;
+ return dmi_alloc_data + idx;
+ }
+
++#endif
++
+ #define dmi_ioremap early_ioremap
+ #define dmi_iounmap early_iounmap
+
+ #endif
-
--#define HAVE_ARCH_BUG
-#endif
+diff --git a/include/asm-x86/ds.h b/include/asm-x86/ds.h
+new file mode 100644
+index 0000000..7881368
+--- /dev/null
++++ b/include/asm-x86/ds.h
+@@ -0,0 +1,72 @@
++/*
++ * Debug Store (DS) support
++ *
++ * This provides a low-level interface to the hardware's Debug Store
++ * feature that is used for last branch recording (LBR) and
++ * precise-event based sampling (PEBS).
++ *
++ * Different architectures use a different DS layout/pointer size.
++ * The below functions therefore work on a void*.
++ *
++ *
++ * Since there is no user for PEBS, yet, only LBR (or branch
++ * trace store, BTS) is supported.
++ *
++ *
++ * Copyright (C) 2007 Intel Corporation.
++ * Markus Metzger <markus.t.metzger at intel.com>, Dec 2007
++ */
++
++#ifndef _ASM_X86_DS_H
++#define _ASM_X86_DS_H
++
++#include <linux/types.h>
++#include <linux/init.h>
++
++struct cpuinfo_x86;
++
++
++/* a branch trace record entry
++ *
++ * In order to unify the interface between various processor versions,
++ * we use the below data structure for all processors.
++ */
++enum bts_qualifier {
++ BTS_INVALID = 0,
++ BTS_BRANCH,
++ BTS_TASK_ARRIVES,
++ BTS_TASK_DEPARTS
++};
++
++struct bts_struct {
++ u64 qualifier;
++ union {
++ /* BTS_BRANCH */
++ struct {
++ u64 from_ip;
++ u64 to_ip;
++ } lbr;
++ /* BTS_TASK_ARRIVES or
++ BTS_TASK_DEPARTS */
++ u64 jiffies;
++ } variant;
++};
++
++/* Overflow handling mechanisms */
++#define DS_O_SIGNAL 1 /* send overflow signal */
++#define DS_O_WRAP 2 /* wrap around */
++
++extern int ds_allocate(void **, size_t);
++extern int ds_free(void **);
++extern int ds_get_bts_size(void *);
++extern int ds_get_bts_end(void *);
++extern int ds_get_bts_index(void *);
++extern int ds_set_overflow(void *, int);
++extern int ds_get_overflow(void *);
++extern int ds_clear(void *);
++extern int ds_read_bts(void *, int, struct bts_struct *);
++extern int ds_write_bts(void *, const struct bts_struct *);
++extern unsigned long ds_debugctl_mask(void);
++extern void __cpuinit ds_init_intel(struct cpuinfo_x86 *c);
++
++#endif /* _ASM_X86_DS_H */
+diff --git a/include/asm-x86/e820.h b/include/asm-x86/e820.h
+index 3e214f3..7004251 100644
+--- a/include/asm-x86/e820.h
++++ b/include/asm-x86/e820.h
+@@ -22,6 +22,12 @@ struct e820map {
+ };
+ #endif /* __ASSEMBLY__ */
+
++#define ISA_START_ADDRESS 0xa0000
++#define ISA_END_ADDRESS 0x100000
++
++#define BIOS_BEGIN 0x000a0000
++#define BIOS_END 0x00100000
++
+ #ifdef __KERNEL__
+ #ifdef CONFIG_X86_32
+ # include "e820_32.h"
+diff --git a/include/asm-x86/e820_32.h b/include/asm-x86/e820_32.h
+index 03f60c6..f1da7eb 100644
+--- a/include/asm-x86/e820_32.h
++++ b/include/asm-x86/e820_32.h
+@@ -12,20 +12,28 @@
+ #ifndef __E820_HEADER
+ #define __E820_HEADER
+
++#include <linux/ioport.h>
++
+ #define HIGH_MEMORY (1024*1024)
+
+ #ifndef __ASSEMBLY__
+
+ extern struct e820map e820;
++extern void update_e820(void);
+
+ extern int e820_all_mapped(unsigned long start, unsigned long end,
+ unsigned type);
+ extern int e820_any_mapped(u64 start, u64 end, unsigned type);
+ extern void find_max_pfn(void);
+ extern void register_bootmem_low_pages(unsigned long max_low_pfn);
++extern void add_memory_region(unsigned long long start,
++ unsigned long long size, int type);
+ extern void e820_register_memory(void);
+ extern void limit_regions(unsigned long long size);
+ extern void print_memory_map(char *who);
++extern void init_iomem_resources(struct resource *code_resource,
++ struct resource *data_resource,
++ struct resource *bss_resource);
+
+ #if defined(CONFIG_PM) && defined(CONFIG_HIBERNATION)
+ extern void e820_mark_nosave_regions(void);
+@@ -35,5 +43,6 @@ static inline void e820_mark_nosave_regions(void)
+ }
+ #endif
+
++
+ #endif/*!__ASSEMBLY__*/
+ #endif/*__E820_HEADER*/
+diff --git a/include/asm-x86/e820_64.h b/include/asm-x86/e820_64.h
+index 0bd4787..51e4170 100644
+--- a/include/asm-x86/e820_64.h
++++ b/include/asm-x86/e820_64.h
+@@ -11,6 +11,8 @@
+ #ifndef __E820_HEADER
+ #define __E820_HEADER
+
++#include <linux/ioport.h>
++
+ #ifndef __ASSEMBLY__
+ extern unsigned long find_e820_area(unsigned long start, unsigned long end,
+ unsigned size);
+@@ -19,11 +21,15 @@ extern void add_memory_region(unsigned long start, unsigned long size,
+ extern void setup_memory_region(void);
+ extern void contig_e820_setup(void);
+ extern unsigned long e820_end_of_ram(void);
+-extern void e820_reserve_resources(void);
++extern void e820_reserve_resources(struct resource *code_resource,
++ struct resource *data_resource, struct resource *bss_resource);
+ extern void e820_mark_nosave_regions(void);
+-extern void e820_print_map(char *who);
+ extern int e820_any_mapped(unsigned long start, unsigned long end, unsigned type);
+ extern int e820_all_mapped(unsigned long start, unsigned long end, unsigned type);
++extern int e820_any_non_reserved(unsigned long start, unsigned long end);
++extern int is_memory_any_valid(unsigned long start, unsigned long end);
++extern int e820_all_non_reserved(unsigned long start, unsigned long end);
++extern int is_memory_all_valid(unsigned long start, unsigned long end);
+ extern unsigned long e820_hole_size(unsigned long start, unsigned long end);
+
+ extern void e820_setup_gap(void);
+@@ -33,9 +39,11 @@ extern void e820_register_active_regions(int nid,
+ extern void finish_e820_parsing(void);
+
+ extern struct e820map e820;
++extern void update_e820(void);
++
++extern void reserve_early(unsigned long start, unsigned long end);
++extern void early_res_to_bootmem(void);
+
+-extern unsigned ebda_addr, ebda_size;
+-extern unsigned long nodemap_addr, nodemap_size;
+ #endif/*!__ASSEMBLY__*/
+
+ #endif/*__E820_HEADER*/
+diff --git a/include/asm-x86/efi.h b/include/asm-x86/efi.h
+new file mode 100644
+index 0000000..9c68a1f
+--- /dev/null
++++ b/include/asm-x86/efi.h
+@@ -0,0 +1,97 @@
++#ifndef _ASM_X86_EFI_H
++#define _ASM_X86_EFI_H
++
++#ifdef CONFIG_X86_32
++
++extern unsigned long asmlinkage efi_call_phys(void *, ...);
++
++#define efi_call_phys0(f) efi_call_phys(f)
++#define efi_call_phys1(f, a1) efi_call_phys(f, a1)
++#define efi_call_phys2(f, a1, a2) efi_call_phys(f, a1, a2)
++#define efi_call_phys3(f, a1, a2, a3) efi_call_phys(f, a1, a2, a3)
++#define efi_call_phys4(f, a1, a2, a3, a4) \
++ efi_call_phys(f, a1, a2, a3, a4)
++#define efi_call_phys5(f, a1, a2, a3, a4, a5) \
++ efi_call_phys(f, a1, a2, a3, a4, a5)
++#define efi_call_phys6(f, a1, a2, a3, a4, a5, a6) \
++ efi_call_phys(f, a1, a2, a3, a4, a5, a6)
++/*
++ * Wrap all the virtual calls in a way that forces the parameters on the stack.
++ */
++
++#define efi_call_virt(f, args...) \
++ ((efi_##f##_t __attribute__((regparm(0)))*)efi.systab->runtime->f)(args)
++
++#define efi_call_virt0(f) efi_call_virt(f)
++#define efi_call_virt1(f, a1) efi_call_virt(f, a1)
++#define efi_call_virt2(f, a1, a2) efi_call_virt(f, a1, a2)
++#define efi_call_virt3(f, a1, a2, a3) efi_call_virt(f, a1, a2, a3)
++#define efi_call_virt4(f, a1, a2, a3, a4) \
++ efi_call_virt(f, a1, a2, a3, a4)
++#define efi_call_virt5(f, a1, a2, a3, a4, a5) \
++ efi_call_virt(f, a1, a2, a3, a4, a5)
++#define efi_call_virt6(f, a1, a2, a3, a4, a5, a6) \
++ efi_call_virt(f, a1, a2, a3, a4, a5, a6)
++
++#define efi_ioremap(addr, size) ioremap(addr, size)
++
++#else /* !CONFIG_X86_32 */
++
++#define MAX_EFI_IO_PAGES 100
++
++extern u64 efi_call0(void *fp);
++extern u64 efi_call1(void *fp, u64 arg1);
++extern u64 efi_call2(void *fp, u64 arg1, u64 arg2);
++extern u64 efi_call3(void *fp, u64 arg1, u64 arg2, u64 arg3);
++extern u64 efi_call4(void *fp, u64 arg1, u64 arg2, u64 arg3, u64 arg4);
++extern u64 efi_call5(void *fp, u64 arg1, u64 arg2, u64 arg3,
++ u64 arg4, u64 arg5);
++extern u64 efi_call6(void *fp, u64 arg1, u64 arg2, u64 arg3,
++ u64 arg4, u64 arg5, u64 arg6);
++
++#define efi_call_phys0(f) \
++ efi_call0((void *)(f))
++#define efi_call_phys1(f, a1) \
++ efi_call1((void *)(f), (u64)(a1))
++#define efi_call_phys2(f, a1, a2) \
++ efi_call2((void *)(f), (u64)(a1), (u64)(a2))
++#define efi_call_phys3(f, a1, a2, a3) \
++ efi_call3((void *)(f), (u64)(a1), (u64)(a2), (u64)(a3))
++#define efi_call_phys4(f, a1, a2, a3, a4) \
++ efi_call4((void *)(f), (u64)(a1), (u64)(a2), (u64)(a3), \
++ (u64)(a4))
++#define efi_call_phys5(f, a1, a2, a3, a4, a5) \
++ efi_call5((void *)(f), (u64)(a1), (u64)(a2), (u64)(a3), \
++ (u64)(a4), (u64)(a5))
++#define efi_call_phys6(f, a1, a2, a3, a4, a5, a6) \
++ efi_call6((void *)(f), (u64)(a1), (u64)(a2), (u64)(a3), \
++ (u64)(a4), (u64)(a5), (u64)(a6))
++
++#define efi_call_virt0(f) \
++ efi_call0((void *)(efi.systab->runtime->f))
++#define efi_call_virt1(f, a1) \
++ efi_call1((void *)(efi.systab->runtime->f), (u64)(a1))
++#define efi_call_virt2(f, a1, a2) \
++ efi_call2((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2))
++#define efi_call_virt3(f, a1, a2, a3) \
++ efi_call3((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2), \
++ (u64)(a3))
++#define efi_call_virt4(f, a1, a2, a3, a4) \
++ efi_call4((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2), \
++ (u64)(a3), (u64)(a4))
++#define efi_call_virt5(f, a1, a2, a3, a4, a5) \
++ efi_call5((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2), \
++ (u64)(a3), (u64)(a4), (u64)(a5))
++#define efi_call_virt6(f, a1, a2, a3, a4, a5, a6) \
++ efi_call6((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2), \
++ (u64)(a3), (u64)(a4), (u64)(a5), (u64)(a6))
++
++extern void *efi_ioremap(unsigned long offset, unsigned long size);
++
++#endif /* CONFIG_X86_32 */
++
++extern void efi_reserve_bootmem(void);
++extern void efi_call_phys_prelog(void);
++extern void efi_call_phys_epilog(void);
++
++#endif
+diff --git a/include/asm-x86/elf.h b/include/asm-x86/elf.h
+index ec42a4d..d9c94e7 100644
+--- a/include/asm-x86/elf.h
++++ b/include/asm-x86/elf.h
+@@ -73,18 +73,23 @@ typedef struct user_fxsr_struct elf_fpxregset_t;
+ #endif
+
+ #ifdef __KERNEL__
++#include <asm/vdso.h>
+
+-#ifdef CONFIG_X86_32
+-#include <asm/processor.h>
+-#include <asm/system.h> /* for savesegment */
+-#include <asm/desc.h>
++extern unsigned int vdso_enabled;
+
+ /*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+-#define elf_check_arch(x) \
++#define elf_check_arch_ia32(x) \
+ (((x)->e_machine == EM_386) || ((x)->e_machine == EM_486))
+
++#ifdef CONFIG_X86_32
++#include <asm/processor.h>
++#include <asm/system.h> /* for savesegment */
++#include <asm/desc.h>
++
++#define elf_check_arch(x) elf_check_arch_ia32(x)
++
+ /* SVR4/i386 ABI (pages 3-31, 3-32) says that when the program starts %edx
+ contains a pointer to a function which might be registered using `atexit'.
+ This provides a mean for the dynamic linker to call DT_FINI functions for
+@@ -96,36 +101,38 @@ typedef struct user_fxsr_struct elf_fpxregset_t;
+ just to make things more deterministic.
+ */
+ #define ELF_PLAT_INIT(_r, load_addr) do { \
+- _r->ebx = 0; _r->ecx = 0; _r->edx = 0; \
+- _r->esi = 0; _r->edi = 0; _r->ebp = 0; \
+- _r->eax = 0; \
++ _r->bx = 0; _r->cx = 0; _r->dx = 0; \
++ _r->si = 0; _r->di = 0; _r->bp = 0; \
++ _r->ax = 0; \
+ } while (0)
+
+-/* regs is struct pt_regs, pr_reg is elf_gregset_t (which is
+- now struct_user_regs, they are different) */
-
--#include <asm-generic/bug.h>
+-#define ELF_CORE_COPY_REGS(pr_reg, regs) \
+- pr_reg[0] = regs->ebx; \
+- pr_reg[1] = regs->ecx; \
+- pr_reg[2] = regs->edx; \
+- pr_reg[3] = regs->esi; \
+- pr_reg[4] = regs->edi; \
+- pr_reg[5] = regs->ebp; \
+- pr_reg[6] = regs->eax; \
+- pr_reg[7] = regs->xds & 0xffff; \
+- pr_reg[8] = regs->xes & 0xffff; \
+- pr_reg[9] = regs->xfs & 0xffff; \
+- savesegment(gs,pr_reg[10]); \
+- pr_reg[11] = regs->orig_eax; \
+- pr_reg[12] = regs->eip; \
+- pr_reg[13] = regs->xcs & 0xffff; \
+- pr_reg[14] = regs->eflags; \
+- pr_reg[15] = regs->esp; \
+- pr_reg[16] = regs->xss & 0xffff;
++/*
++ * regs is struct pt_regs, pr_reg is elf_gregset_t (which is
++ * now struct_user_regs, they are different)
++ */
++
++#define ELF_CORE_COPY_REGS(pr_reg, regs) do { \
++ pr_reg[0] = regs->bx; \
++ pr_reg[1] = regs->cx; \
++ pr_reg[2] = regs->dx; \
++ pr_reg[3] = regs->si; \
++ pr_reg[4] = regs->di; \
++ pr_reg[5] = regs->bp; \
++ pr_reg[6] = regs->ax; \
++ pr_reg[7] = regs->ds & 0xffff; \
++ pr_reg[8] = regs->es & 0xffff; \
++ pr_reg[9] = regs->fs & 0xffff; \
++ savesegment(gs, pr_reg[10]); \
++ pr_reg[11] = regs->orig_ax; \
++ pr_reg[12] = regs->ip; \
++ pr_reg[13] = regs->cs & 0xffff; \
++ pr_reg[14] = regs->flags; \
++ pr_reg[15] = regs->sp; \
++ pr_reg[16] = regs->ss & 0xffff; \
++} while (0);
+
+ #define ELF_PLATFORM (utsname()->machine)
+ #define set_personality_64bit() do { } while (0)
+-extern unsigned int vdso_enabled;
+
+ #else /* CONFIG_X86_32 */
+
+@@ -137,28 +144,57 @@ extern unsigned int vdso_enabled;
+ #define elf_check_arch(x) \
+ ((x)->e_machine == EM_X86_64)
+
++#define compat_elf_check_arch(x) elf_check_arch_ia32(x)
++
++static inline void start_ia32_thread(struct pt_regs *regs, u32 ip, u32 sp)
++{
++ asm volatile("movl %0,%%fs" :: "r" (0));
++ asm volatile("movl %0,%%es; movl %0,%%ds" : : "r" (__USER32_DS));
++ load_gs_index(0);
++ regs->ip = ip;
++ regs->sp = sp;
++ regs->flags = X86_EFLAGS_IF;
++ regs->cs = __USER32_CS;
++ regs->ss = __USER32_DS;
++}
++
++static inline void elf_common_init(struct thread_struct *t,
++ struct pt_regs *regs, const u16 ds)
++{
++ regs->ax = regs->bx = regs->cx = regs->dx = 0;
++ regs->si = regs->di = regs->bp = 0;
++ regs->r8 = regs->r9 = regs->r10 = regs->r11 = 0;
++ regs->r12 = regs->r13 = regs->r14 = regs->r15 = 0;
++ t->fs = t->gs = 0;
++ t->fsindex = t->gsindex = 0;
++ t->ds = t->es = ds;
++}
++
+ #define ELF_PLAT_INIT(_r, load_addr) do { \
+- struct task_struct *cur = current; \
+- (_r)->rbx = 0; (_r)->rcx = 0; (_r)->rdx = 0; \
+- (_r)->rsi = 0; (_r)->rdi = 0; (_r)->rbp = 0; \
+- (_r)->rax = 0; \
+- (_r)->r8 = 0; \
+- (_r)->r9 = 0; \
+- (_r)->r10 = 0; \
+- (_r)->r11 = 0; \
+- (_r)->r12 = 0; \
+- (_r)->r13 = 0; \
+- (_r)->r14 = 0; \
+- (_r)->r15 = 0; \
+- cur->thread.fs = 0; cur->thread.gs = 0; \
+- cur->thread.fsindex = 0; cur->thread.gsindex = 0; \
+- cur->thread.ds = 0; cur->thread.es = 0; \
++ elf_common_init(¤t->thread, _r, 0); \
+ clear_thread_flag(TIF_IA32); \
+ } while (0)
+
+-/* regs is struct pt_regs, pr_reg is elf_gregset_t (which is
+- now struct_user_regs, they are different). Assumes current is the process
+- getting dumped. */
++#define COMPAT_ELF_PLAT_INIT(regs, load_addr) \
++ elf_common_init(¤t->thread, regs, __USER_DS)
++#define compat_start_thread(regs, ip, sp) do { \
++ start_ia32_thread(regs, ip, sp); \
++ set_fs(USER_DS); \
++ } while (0)
++#define COMPAT_SET_PERSONALITY(ex, ibcs2) do { \
++ if (test_thread_flag(TIF_IA32)) \
++ clear_thread_flag(TIF_ABI_PENDING); \
++ else \
++ set_thread_flag(TIF_ABI_PENDING); \
++ current->personality |= force_personality32; \
++ } while (0)
++#define COMPAT_ELF_PLATFORM ("i686")
++
++/*
++ * regs is struct pt_regs, pr_reg is elf_gregset_t (which is
++ * now struct_user_regs, they are different). Assumes current is the process
++ * getting dumped.
++ */
+
+ #define ELF_CORE_COPY_REGS(pr_reg, regs) do { \
+ unsigned v; \
+@@ -166,22 +202,22 @@ extern unsigned int vdso_enabled;
+ (pr_reg)[1] = (regs)->r14; \
+ (pr_reg)[2] = (regs)->r13; \
+ (pr_reg)[3] = (regs)->r12; \
+- (pr_reg)[4] = (regs)->rbp; \
+- (pr_reg)[5] = (regs)->rbx; \
++ (pr_reg)[4] = (regs)->bp; \
++ (pr_reg)[5] = (regs)->bx; \
+ (pr_reg)[6] = (regs)->r11; \
+ (pr_reg)[7] = (regs)->r10; \
+ (pr_reg)[8] = (regs)->r9; \
+ (pr_reg)[9] = (regs)->r8; \
+- (pr_reg)[10] = (regs)->rax; \
+- (pr_reg)[11] = (regs)->rcx; \
+- (pr_reg)[12] = (regs)->rdx; \
+- (pr_reg)[13] = (regs)->rsi; \
+- (pr_reg)[14] = (regs)->rdi; \
+- (pr_reg)[15] = (regs)->orig_rax; \
+- (pr_reg)[16] = (regs)->rip; \
++ (pr_reg)[10] = (regs)->ax; \
++ (pr_reg)[11] = (regs)->cx; \
++ (pr_reg)[12] = (regs)->dx; \
++ (pr_reg)[13] = (regs)->si; \
++ (pr_reg)[14] = (regs)->di; \
++ (pr_reg)[15] = (regs)->orig_ax; \
++ (pr_reg)[16] = (regs)->ip; \
+ (pr_reg)[17] = (regs)->cs; \
+- (pr_reg)[18] = (regs)->eflags; \
+- (pr_reg)[19] = (regs)->rsp; \
++ (pr_reg)[18] = (regs)->flags; \
++ (pr_reg)[19] = (regs)->sp; \
+ (pr_reg)[20] = (regs)->ss; \
+ (pr_reg)[21] = current->thread.fs; \
+ (pr_reg)[22] = current->thread.gs; \
+@@ -189,15 +225,17 @@ extern unsigned int vdso_enabled;
+ asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v; \
+ asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v; \
+ asm("movl %%gs,%0" : "=r" (v)); (pr_reg)[26] = v; \
+-} while(0);
++} while (0);
+
+ /* I'm not sure if we can use '-' here */
+ #define ELF_PLATFORM ("x86_64")
+ extern void set_personality_64bit(void);
+-extern int vdso_enabled;
++extern unsigned int sysctl_vsyscall32;
++extern int force_personality32;
+
+ #endif /* !CONFIG_X86_32 */
+
++#define CORE_DUMP_USE_REGSET
+ #define USE_ELF_CORE_DUMP
+ #define ELF_EXEC_PAGESIZE 4096
+
+@@ -232,43 +270,24 @@ extern int vdso_enabled;
+
+ struct task_struct;
+
+-extern int dump_task_regs (struct task_struct *, elf_gregset_t *);
+-extern int dump_task_fpu (struct task_struct *, elf_fpregset_t *);
-
--#endif /* __ASM_SH64_BUG_H */
-diff --git a/include/asm-sh64/bugs.h b/include/asm-sh64/bugs.h
+-#define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)
+-#define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) dump_task_fpu(tsk, elf_fpregs)
++#define ARCH_DLINFO_IA32(vdso_enabled) \
++do if (vdso_enabled) { \
++ NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY); \
++ NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_CURRENT_BASE); \
++} while (0)
+
+ #ifdef CONFIG_X86_32
+-extern int dump_task_extended_fpu (struct task_struct *,
+- struct user_fxsr_struct *);
+-#define ELF_CORE_COPY_XFPREGS(tsk, elf_xfpregs) \
+- dump_task_extended_fpu(tsk, elf_xfpregs)
+-#define ELF_CORE_XFPREG_TYPE NT_PRXFPREG
+
+ #define VDSO_HIGH_BASE (__fix_to_virt(FIX_VDSO))
+-#define VDSO_CURRENT_BASE ((unsigned long)current->mm->context.vdso)
+-#define VDSO_PRELINK 0
+-
+-#define VDSO_SYM(x) \
+- (VDSO_CURRENT_BASE + (unsigned long)(x) - VDSO_PRELINK)
+-
+-#define VDSO_HIGH_EHDR ((const struct elfhdr *) VDSO_HIGH_BASE)
+-#define VDSO_EHDR ((const struct elfhdr *) VDSO_CURRENT_BASE)
+
+-extern void __kernel_vsyscall;
+-
+-#define VDSO_ENTRY VDSO_SYM(&__kernel_vsyscall)
++#define ARCH_DLINFO ARCH_DLINFO_IA32(vdso_enabled)
+
+ /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
+
+-#define ARCH_DLINFO \
+-do if (vdso_enabled) { \
+- NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY); \
+- NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_CURRENT_BASE); \
+-} while (0)
+-
+ #else /* CONFIG_X86_32 */
+
++#define VDSO_HIGH_BASE 0xffffe000U /* CONFIG_COMPAT_VDSO address */
++
+ /* 1GB for 64bit, 8MB for 32bit */
+ #define STACK_RND_MASK (test_thread_flag(TIF_IA32) ? 0x7ff : 0x3fffff)
+
+@@ -277,14 +296,31 @@ do if (vdso_enabled) { \
+ NEW_AUX_ENT(AT_SYSINFO_EHDR,(unsigned long)current->mm->context.vdso);\
+ } while (0)
+
++#define AT_SYSINFO 32
++
++#define COMPAT_ARCH_DLINFO ARCH_DLINFO_IA32(sysctl_vsyscall32)
++
++#define COMPAT_ELF_ET_DYN_BASE (TASK_UNMAPPED_BASE + 0x1000000)
++
+ #endif /* !CONFIG_X86_32 */
+
++#define VDSO_CURRENT_BASE ((unsigned long)current->mm->context.vdso)
++
++#define VDSO_ENTRY \
++ ((unsigned long) VDSO32_SYMBOL(VDSO_CURRENT_BASE, vsyscall))
++
+ struct linux_binprm;
+
+ #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
+ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+ int executable_stack);
+
++extern int syscall32_setup_pages(struct linux_binprm *, int exstack);
++#define compat_arch_setup_additional_pages syscall32_setup_pages
++
++extern unsigned long arch_randomize_brk(struct mm_struct *mm);
++#define arch_randomize_brk arch_randomize_brk
++
+ #endif /* __KERNEL__ */
+
+ #endif
+diff --git a/include/asm-x86/emergency-restart.h b/include/asm-x86/emergency-restart.h
+index 680c395..8e6aef1 100644
+--- a/include/asm-x86/emergency-restart.h
++++ b/include/asm-x86/emergency-restart.h
+@@ -1,6 +1,18 @@
+ #ifndef _ASM_EMERGENCY_RESTART_H
+ #define _ASM_EMERGENCY_RESTART_H
+
++enum reboot_type {
++ BOOT_TRIPLE = 't',
++ BOOT_KBD = 'k',
++#ifdef CONFIG_X86_32
++ BOOT_BIOS = 'b',
++#endif
++ BOOT_ACPI = 'a',
++ BOOT_EFI = 'e'
++};
++
++extern enum reboot_type reboot_type;
++
+ extern void machine_emergency_restart(void);
+
+ #endif /* _ASM_EMERGENCY_RESTART_H */
+diff --git a/include/asm-x86/fixmap_32.h b/include/asm-x86/fixmap_32.h
+index 249e753..a7404d5 100644
+--- a/include/asm-x86/fixmap_32.h
++++ b/include/asm-x86/fixmap_32.h
+@@ -65,7 +65,7 @@ enum fixed_addresses {
+ #endif
+ #ifdef CONFIG_X86_VISWS_APIC
+ FIX_CO_CPU, /* Cobalt timer */
+- FIX_CO_APIC, /* Cobalt APIC Redirection Table */
++ FIX_CO_APIC, /* Cobalt APIC Redirection Table */
+ FIX_LI_PCIA, /* Lithium PCI Bridge A */
+ FIX_LI_PCIB, /* Lithium PCI Bridge B */
+ #endif
+@@ -74,7 +74,7 @@ enum fixed_addresses {
+ #endif
+ #ifdef CONFIG_X86_CYCLONE_TIMER
+ FIX_CYCLONE_TIMER, /*cyclone timer register*/
+-#endif
++#endif
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+@@ -90,11 +90,23 @@ enum fixed_addresses {
+ FIX_PARAVIRT_BOOTMAP,
+ #endif
+ __end_of_permanent_fixed_addresses,
+- /* temporary boot-time mappings, used before ioremap() is functional */
+-#define NR_FIX_BTMAPS 16
+- FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
+- FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS - 1,
++ /*
++ * 256 temporary boot-time mappings, used by early_ioremap(),
++ * before ioremap() is functional.
++ *
++ * We round it up to the next 512 pages boundary so that we
++ * can have a single pgd entry and a single pte table:
++ */
++#define NR_FIX_BTMAPS 64
++#define FIX_BTMAPS_NESTING 4
++ FIX_BTMAP_END =
++ __end_of_permanent_fixed_addresses + 512 -
++ (__end_of_permanent_fixed_addresses & 511),
++ FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_NESTING - 1,
+ FIX_WP_TEST,
++#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
++ FIX_OHCI1394_BASE,
++#endif
+ __end_of_fixed_addresses
+ };
+
+diff --git a/include/asm-x86/fixmap_64.h b/include/asm-x86/fixmap_64.h
+index cdfbe4a..70ddb21 100644
+--- a/include/asm-x86/fixmap_64.h
++++ b/include/asm-x86/fixmap_64.h
+@@ -15,6 +15,7 @@
+ #include <asm/apicdef.h>
+ #include <asm/page.h>
+ #include <asm/vsyscall.h>
++#include <asm/efi.h>
+
+ /*
+ * Here we define all the compile-time 'special' virtual
+@@ -41,6 +42,11 @@ enum fixed_addresses {
+ FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
+ FIX_IO_APIC_BASE_0,
+ FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS-1,
++ FIX_EFI_IO_MAP_LAST_PAGE,
++ FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE+MAX_EFI_IO_PAGES-1,
++#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
++ FIX_OHCI1394_BASE,
++#endif
+ __end_of_fixed_addresses
+ };
+
+diff --git a/include/asm-x86/fpu32.h b/include/asm-x86/fpu32.h
deleted file mode 100644
-index 05554aa..0000000
---- a/include/asm-sh64/bugs.h
+index 4153db5..0000000
+--- a/include/asm-x86/fpu32.h
+++ /dev/null
-@@ -1,38 +0,0 @@
--#ifndef __ASM_SH64_BUGS_H
--#define __ASM_SH64_BUGS_H
+@@ -1,10 +0,0 @@
+-#ifndef _FPU32_H
+-#define _FPU32_H 1
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/bugs.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- *
-- */
+-struct _fpstate_ia32;
-
--/*
-- * This is included by init/main.c to check for architecture-dependent bugs.
-- *
-- * Needs:
-- * void check_bugs(void);
-- */
+-int restore_i387_ia32(struct task_struct *tsk, struct _fpstate_ia32 __user *buf, int fsave);
+-int save_i387_ia32(struct task_struct *tsk, struct _fpstate_ia32 __user *buf,
+- struct pt_regs *regs, int fsave);
+-
+-#endif
+diff --git a/include/asm-x86/futex.h b/include/asm-x86/futex.h
+index 1f4610e..62828d6 100644
+--- a/include/asm-x86/futex.h
++++ b/include/asm-x86/futex.h
+@@ -1,5 +1,135 @@
+-#ifdef CONFIG_X86_32
+-# include "futex_32.h"
+-#else
+-# include "futex_64.h"
++#ifndef _ASM_X86_FUTEX_H
++#define _ASM_X86_FUTEX_H
++
++#ifdef __KERNEL__
++
++#include <linux/futex.h>
++
++#include <asm/asm.h>
++#include <asm/errno.h>
++#include <asm/processor.h>
++#include <asm/system.h>
++#include <asm/uaccess.h>
++
++#define __futex_atomic_op1(insn, ret, oldval, uaddr, oparg) \
++ __asm__ __volatile( \
++"1: " insn "\n" \
++"2: .section .fixup,\"ax\"\n \
++3: mov %3, %1\n \
++ jmp 2b\n \
++ .previous\n \
++ .section __ex_table,\"a\"\n \
++ .align 8\n" \
++ _ASM_PTR "1b,3b\n \
++ .previous" \
++ : "=r" (oldval), "=r" (ret), "+m" (*uaddr) \
++ : "i" (-EFAULT), "0" (oparg), "1" (0))
++
++#define __futex_atomic_op2(insn, ret, oldval, uaddr, oparg) \
++ __asm__ __volatile( \
++"1: movl %2, %0\n \
++ movl %0, %3\n" \
++ insn "\n" \
++"2: " LOCK_PREFIX "cmpxchgl %3, %2\n \
++ jnz 1b\n \
++3: .section .fixup,\"ax\"\n \
++4: mov %5, %1\n \
++ jmp 3b\n \
++ .previous\n \
++ .section __ex_table,\"a\"\n \
++ .align 8\n" \
++ _ASM_PTR "1b,4b,2b,4b\n \
++ .previous" \
++ : "=&a" (oldval), "=&r" (ret), "+m" (*uaddr), \
++ "=&r" (tem) \
++ : "r" (oparg), "i" (-EFAULT), "1" (0))
++
++static inline int
++futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
++{
++ int op = (encoded_op >> 28) & 7;
++ int cmp = (encoded_op >> 24) & 15;
++ int oparg = (encoded_op << 8) >> 20;
++ int cmparg = (encoded_op << 20) >> 20;
++ int oldval = 0, ret, tem;
++
++ if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
++ oparg = 1 << oparg;
++
++ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
++ return -EFAULT;
++
++#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_BSWAP)
++ /* Real i386 machines can only support FUTEX_OP_SET */
++ if (op != FUTEX_OP_SET && boot_cpu_data.x86 == 3)
++ return -ENOSYS;
++#endif
++
++ pagefault_disable();
++
++ switch (op) {
++ case FUTEX_OP_SET:
++ __futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg);
++ break;
++ case FUTEX_OP_ADD:
++ __futex_atomic_op1(LOCK_PREFIX "xaddl %0, %2", ret, oldval,
++ uaddr, oparg);
++ break;
++ case FUTEX_OP_OR:
++ __futex_atomic_op2("orl %4, %3", ret, oldval, uaddr, oparg);
++ break;
++ case FUTEX_OP_ANDN:
++ __futex_atomic_op2("andl %4, %3", ret, oldval, uaddr, ~oparg);
++ break;
++ case FUTEX_OP_XOR:
++ __futex_atomic_op2("xorl %4, %3", ret, oldval, uaddr, oparg);
++ break;
++ default:
++ ret = -ENOSYS;
++ }
++
++ pagefault_enable();
++
++ if (!ret) {
++ switch (cmp) {
++ case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
++ case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
++ case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
++ case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
++ case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
++ case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
++ default: ret = -ENOSYS;
++ }
++ }
++ return ret;
++}
++
++static inline int
++futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
++{
++ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
++ return -EFAULT;
++
++ __asm__ __volatile__(
++ "1: " LOCK_PREFIX "cmpxchgl %3, %1 \n"
++
++ "2: .section .fixup, \"ax\" \n"
++ "3: mov %2, %0 \n"
++ " jmp 2b \n"
++ " .previous \n"
++
++ " .section __ex_table, \"a\" \n"
++ " .align 8 \n"
++ _ASM_PTR " 1b,3b \n"
++ " .previous \n"
++
++ : "=a" (oldval), "+m" (*uaddr)
++ : "i" (-EFAULT), "r" (newval), "0" (oldval)
++ : "memory"
++ );
++
++ return oldval;
++}
++
++#endif
+ #endif
+diff --git a/include/asm-x86/futex_32.h b/include/asm-x86/futex_32.h
+deleted file mode 100644
+index 438ef0e..0000000
+--- a/include/asm-x86/futex_32.h
++++ /dev/null
+@@ -1,135 +0,0 @@
+-#ifndef _ASM_FUTEX_H
+-#define _ASM_FUTEX_H
-
--/*
-- * I don't know of any Super-H bugs yet.
-- */
+-#ifdef __KERNEL__
-
+-#include <linux/futex.h>
+-#include <asm/errno.h>
+-#include <asm/system.h>
-#include <asm/processor.h>
+-#include <asm/uaccess.h>
-
--static void __init check_bugs(void)
+-#define __futex_atomic_op1(insn, ret, oldval, uaddr, oparg) \
+- __asm__ __volatile ( \
+-"1: " insn "\n" \
+-"2: .section .fixup,\"ax\"\n\
+-3: mov %3, %1\n\
+- jmp 2b\n\
+- .previous\n\
+- .section __ex_table,\"a\"\n\
+- .align 8\n\
+- .long 1b,3b\n\
+- .previous" \
+- : "=r" (oldval), "=r" (ret), "+m" (*uaddr) \
+- : "i" (-EFAULT), "0" (oparg), "1" (0))
+-
+-#define __futex_atomic_op2(insn, ret, oldval, uaddr, oparg) \
+- __asm__ __volatile ( \
+-"1: movl %2, %0\n\
+- movl %0, %3\n" \
+- insn "\n" \
+-"2: " LOCK_PREFIX "cmpxchgl %3, %2\n\
+- jnz 1b\n\
+-3: .section .fixup,\"ax\"\n\
+-4: mov %5, %1\n\
+- jmp 3b\n\
+- .previous\n\
+- .section __ex_table,\"a\"\n\
+- .align 8\n\
+- .long 1b,4b,2b,4b\n\
+- .previous" \
+- : "=&a" (oldval), "=&r" (ret), "+m" (*uaddr), \
+- "=&r" (tem) \
+- : "r" (oparg), "i" (-EFAULT), "1" (0))
+-
+-static inline int
+-futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
-{
-- extern char *get_cpu_subtype(void);
-- extern unsigned long loops_per_jiffy;
+- int op = (encoded_op >> 28) & 7;
+- int cmp = (encoded_op >> 24) & 15;
+- int oparg = (encoded_op << 8) >> 20;
+- int cmparg = (encoded_op << 20) >> 20;
+- int oldval = 0, ret, tem;
+- if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
+- oparg = 1 << oparg;
-
-- cpu_data->loops_per_jiffy = loops_per_jiffy;
+- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+- return -EFAULT;
-
-- printk("CPU: %s\n", get_cpu_subtype());
--}
--#endif /* __ASM_SH64_BUGS_H */
-diff --git a/include/asm-sh64/byteorder.h b/include/asm-sh64/byteorder.h
-deleted file mode 100644
-index 7419d78..0000000
---- a/include/asm-sh64/byteorder.h
-+++ /dev/null
-@@ -1,49 +0,0 @@
--#ifndef __ASM_SH64_BYTEORDER_H
--#define __ASM_SH64_BYTEORDER_H
+- pagefault_disable();
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/byteorder.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+- if (op == FUTEX_OP_SET)
+- __futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg);
+- else {
+-#ifndef CONFIG_X86_BSWAP
+- if (boot_cpu_data.x86 == 3)
+- ret = -ENOSYS;
+- else
+-#endif
+- switch (op) {
+- case FUTEX_OP_ADD:
+- __futex_atomic_op1(LOCK_PREFIX "xaddl %0, %2", ret,
+- oldval, uaddr, oparg);
+- break;
+- case FUTEX_OP_OR:
+- __futex_atomic_op2("orl %4, %3", ret, oldval, uaddr,
+- oparg);
+- break;
+- case FUTEX_OP_ANDN:
+- __futex_atomic_op2("andl %4, %3", ret, oldval, uaddr,
+- ~oparg);
+- break;
+- case FUTEX_OP_XOR:
+- __futex_atomic_op2("xorl %4, %3", ret, oldval, uaddr,
+- oparg);
+- break;
+- default:
+- ret = -ENOSYS;
+- }
+- }
-
--#include <asm/types.h>
+- pagefault_enable();
-
--static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
--{
-- __asm__("byterev %0, %0\n\t"
-- "shari %0, 32, %0"
-- : "=r" (x)
-- : "0" (x));
-- return x;
+- if (!ret) {
+- switch (cmp) {
+- case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
+- case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
+- case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
+- case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
+- case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
+- case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
+- default: ret = -ENOSYS;
+- }
+- }
+- return ret;
-}
-
--static inline __attribute_const__ __u16 ___arch__swab16(__u16 x)
+-static inline int
+-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
-{
-- __asm__("byterev %0, %0\n\t"
-- "shari %0, 48, %0"
-- : "=r" (x)
-- : "0" (x));
-- return x;
--}
+- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+- return -EFAULT;
-
--#define __arch__swab32(x) ___arch__swab32(x)
--#define __arch__swab16(x) ___arch__swab16(x)
+- __asm__ __volatile__(
+- "1: " LOCK_PREFIX "cmpxchgl %3, %1 \n"
-
--#if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
--# define __BYTEORDER_HAS_U64__
--# define __SWAB_64_THRU_32__
--#endif
+- "2: .section .fixup, \"ax\" \n"
+- "3: mov %2, %0 \n"
+- " jmp 2b \n"
+- " .previous \n"
+-
+- " .section __ex_table, \"a\" \n"
+- " .align 8 \n"
+- " .long 1b,3b \n"
+- " .previous \n"
-
--#ifdef __LITTLE_ENDIAN__
--#include <linux/byteorder/little_endian.h>
--#else
--#include <linux/byteorder/big_endian.h>
--#endif
+- : "=a" (oldval), "+m" (*uaddr)
+- : "i" (-EFAULT), "r" (newval), "0" (oldval)
+- : "memory"
+- );
-
--#endif /* __ASM_SH64_BYTEORDER_H */
-diff --git a/include/asm-sh64/cache.h b/include/asm-sh64/cache.h
+- return oldval;
+-}
+-
+-#endif
+-#endif
+diff --git a/include/asm-x86/futex_64.h b/include/asm-x86/futex_64.h
deleted file mode 100644
-index a4f36f0..0000000
---- a/include/asm-sh64/cache.h
+index 5cdfb08..0000000
+--- a/include/asm-x86/futex_64.h
+++ /dev/null
-@@ -1,139 +0,0 @@
--#ifndef __ASM_SH64_CACHE_H
--#define __ASM_SH64_CACHE_H
+@@ -1,125 +0,0 @@
+-#ifndef _ASM_FUTEX_H
+-#define _ASM_FUTEX_H
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/cache.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003, 2004 Paul Mundt
-- *
-- */
--#include <asm/cacheflush.h>
+-#ifdef __KERNEL__
-
--#define L1_CACHE_SHIFT 5
--/* bytes per L1 cache line */
--#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
--#define L1_CACHE_ALIGN_MASK (~(L1_CACHE_BYTES - 1))
--#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES - 1)) & L1_CACHE_ALIGN_MASK)
--#define L1_CACHE_SIZE_BYTES (L1_CACHE_BYTES << 10)
+-#include <linux/futex.h>
+-#include <asm/errno.h>
+-#include <asm/system.h>
+-#include <asm/uaccess.h>
-
--#ifdef MODULE
--#define __cacheline_aligned __attribute__((__aligned__(L1_CACHE_BYTES)))
--#else
--#define __cacheline_aligned \
-- __attribute__((__aligned__(L1_CACHE_BYTES), \
-- __section__(".data.cacheline_aligned")))
--#endif
+-#define __futex_atomic_op1(insn, ret, oldval, uaddr, oparg) \
+- __asm__ __volatile ( \
+-"1: " insn "\n" \
+-"2: .section .fixup,\"ax\"\n\
+-3: mov %3, %1\n\
+- jmp 2b\n\
+- .previous\n\
+- .section __ex_table,\"a\"\n\
+- .align 8\n\
+- .quad 1b,3b\n\
+- .previous" \
+- : "=r" (oldval), "=r" (ret), "=m" (*uaddr) \
+- : "i" (-EFAULT), "m" (*uaddr), "0" (oparg), "1" (0))
+-
+-#define __futex_atomic_op2(insn, ret, oldval, uaddr, oparg) \
+- __asm__ __volatile ( \
+-"1: movl %2, %0\n\
+- movl %0, %3\n" \
+- insn "\n" \
+-"2: " LOCK_PREFIX "cmpxchgl %3, %2\n\
+- jnz 1b\n\
+-3: .section .fixup,\"ax\"\n\
+-4: mov %5, %1\n\
+- jmp 3b\n\
+- .previous\n\
+- .section __ex_table,\"a\"\n\
+- .align 8\n\
+- .quad 1b,4b,2b,4b\n\
+- .previous" \
+- : "=&a" (oldval), "=&r" (ret), "=m" (*uaddr), \
+- "=&r" (tem) \
+- : "r" (oparg), "i" (-EFAULT), "m" (*uaddr), "1" (0))
-
--/*
-- * Control Registers.
-- */
--#define ICCR_BASE 0x01600000 /* Instruction Cache Control Register */
--#define ICCR_REG0 0 /* Register 0 offset */
--#define ICCR_REG1 1 /* Register 1 offset */
--#define ICCR0 ICCR_BASE+ICCR_REG0
--#define ICCR1 ICCR_BASE+ICCR_REG1
+-static inline int
+-futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+-{
+- int op = (encoded_op >> 28) & 7;
+- int cmp = (encoded_op >> 24) & 15;
+- int oparg = (encoded_op << 8) >> 20;
+- int cmparg = (encoded_op << 20) >> 20;
+- int oldval = 0, ret, tem;
+- if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
+- oparg = 1 << oparg;
-
--#define ICCR0_OFF 0x0 /* Set ICACHE off */
--#define ICCR0_ON 0x1 /* Set ICACHE on */
--#define ICCR0_ICI 0x2 /* Invalidate all in IC */
+- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+- return -EFAULT;
-
--#define ICCR1_NOLOCK 0x0 /* Set No Locking */
+- pagefault_disable();
-
--#define OCCR_BASE 0x01E00000 /* Operand Cache Control Register */
--#define OCCR_REG0 0 /* Register 0 offset */
--#define OCCR_REG1 1 /* Register 1 offset */
--#define OCCR0 OCCR_BASE+OCCR_REG0
--#define OCCR1 OCCR_BASE+OCCR_REG1
+- switch (op) {
+- case FUTEX_OP_SET:
+- __futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg);
+- break;
+- case FUTEX_OP_ADD:
+- __futex_atomic_op1(LOCK_PREFIX "xaddl %0, %2", ret, oldval,
+- uaddr, oparg);
+- break;
+- case FUTEX_OP_OR:
+- __futex_atomic_op2("orl %4, %3", ret, oldval, uaddr, oparg);
+- break;
+- case FUTEX_OP_ANDN:
+- __futex_atomic_op2("andl %4, %3", ret, oldval, uaddr, ~oparg);
+- break;
+- case FUTEX_OP_XOR:
+- __futex_atomic_op2("xorl %4, %3", ret, oldval, uaddr, oparg);
+- break;
+- default:
+- ret = -ENOSYS;
+- }
-
--#define OCCR0_OFF 0x0 /* Set OCACHE off */
--#define OCCR0_ON 0x1 /* Set OCACHE on */
--#define OCCR0_OCI 0x2 /* Invalidate all in OC */
--#define OCCR0_WT 0x4 /* Set OCACHE in WT Mode */
--#define OCCR0_WB 0x0 /* Set OCACHE in WB Mode */
+- pagefault_enable();
-
--#define OCCR1_NOLOCK 0x0 /* Set No Locking */
+- if (!ret) {
+- switch (cmp) {
+- case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
+- case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
+- case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
+- case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
+- case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
+- case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
+- default: ret = -ENOSYS;
+- }
+- }
+- return ret;
+-}
+-
+-static inline int
+-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+-{
+- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+- return -EFAULT;
+-
+- __asm__ __volatile__(
+- "1: " LOCK_PREFIX "cmpxchgl %3, %1 \n"
+-
+- "2: .section .fixup, \"ax\" \n"
+- "3: mov %2, %0 \n"
+- " jmp 2b \n"
+- " .previous \n"
+-
+- " .section __ex_table, \"a\" \n"
+- " .align 8 \n"
+- " .quad 1b,3b \n"
+- " .previous \n"
+-
+- : "=a" (oldval), "=m" (*uaddr)
+- : "i" (-EFAULT), "r" (newval), "0" (oldval)
+- : "memory"
+- );
-
+- return oldval;
+-}
-
+-#endif
+-#endif
+diff --git a/include/asm-x86/gart.h b/include/asm-x86/gart.h
+index f704c50..90958ed 100644
+--- a/include/asm-x86/gart.h
++++ b/include/asm-x86/gart.h
+@@ -9,6 +9,7 @@ extern int iommu_detected;
+ extern void gart_iommu_init(void);
+ extern void gart_iommu_shutdown(void);
+ extern void __init gart_parse_options(char *);
++extern void early_gart_iommu_check(void);
+ extern void gart_iommu_hole_init(void);
+ extern int fallback_aper_order;
+ extern int fallback_aper_force;
+@@ -20,6 +21,10 @@ extern int fix_aperture;
+ #define gart_iommu_aperture 0
+ #define gart_iommu_aperture_allowed 0
+
++static inline void early_gart_iommu_check(void)
++{
++}
++
+ static inline void gart_iommu_shutdown(void)
+ {
+ }
+diff --git a/include/asm-x86/geode.h b/include/asm-x86/geode.h
+index 771af33..811fe14 100644
+--- a/include/asm-x86/geode.h
++++ b/include/asm-x86/geode.h
+@@ -121,9 +121,15 @@ extern int geode_get_dev_base(unsigned int dev);
+ #define GPIO_MAP_Z 0xE8
+ #define GPIO_MAP_W 0xEC
+
+-extern void geode_gpio_set(unsigned int, unsigned int);
+-extern void geode_gpio_clear(unsigned int, unsigned int);
+-extern int geode_gpio_isset(unsigned int, unsigned int);
++static inline u32 geode_gpio(unsigned int nr)
++{
++ BUG_ON(nr > 28);
++ return 1 << nr;
++}
++
++extern void geode_gpio_set(u32, unsigned int);
++extern void geode_gpio_clear(u32, unsigned int);
++extern int geode_gpio_isset(u32, unsigned int);
+ extern void geode_gpio_setup_event(unsigned int, int, int);
+ extern void geode_gpio_set_irq(unsigned int, unsigned int);
+
+diff --git a/include/asm-x86/gpio.h b/include/asm-x86/gpio.h
+new file mode 100644
+index 0000000..ff87fca
+--- /dev/null
++++ b/include/asm-x86/gpio.h
+@@ -0,0 +1,6 @@
++#ifndef _ASM_I386_GPIO_H
++#define _ASM_I386_GPIO_H
++
++#include <gpio.h>
++
++#endif /* _ASM_I386_GPIO_H */
+diff --git a/include/asm-x86/hpet.h b/include/asm-x86/hpet.h
+index ad8d6e7..6a9b4ac 100644
+--- a/include/asm-x86/hpet.h
++++ b/include/asm-x86/hpet.h
+@@ -69,6 +69,7 @@ extern void force_hpet_resume(void);
+
+ #include <linux/interrupt.h>
+
++typedef irqreturn_t (*rtc_irq_handler)(int interrupt, void *cookie);
+ extern int hpet_mask_rtc_irq_bit(unsigned long bit_mask);
+ extern int hpet_set_rtc_irq_bit(unsigned long bit_mask);
+ extern int hpet_set_alarm_time(unsigned char hrs, unsigned char min,
+@@ -77,13 +78,16 @@ extern int hpet_set_periodic_freq(unsigned long freq);
+ extern int hpet_rtc_dropped_irq(void);
+ extern int hpet_rtc_timer_init(void);
+ extern irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id);
++extern int hpet_register_irq_handler(rtc_irq_handler handler);
++extern void hpet_unregister_irq_handler(rtc_irq_handler handler);
+
+ #endif /* CONFIG_HPET_EMULATE_RTC */
+
+-#else
++#else /* CONFIG_HPET_TIMER */
+
+ static inline int hpet_enable(void) { return 0; }
+ static inline unsigned long hpet_readl(unsigned long a) { return 0; }
++static inline int is_hpet_enabled(void) { return 0; }
+
+-#endif /* CONFIG_HPET_TIMER */
++#endif
+ #endif /* ASM_X86_HPET_H */
+diff --git a/include/asm-x86/hw_irq_32.h b/include/asm-x86/hw_irq_32.h
+index 0bedbdf..6d65fbb 100644
+--- a/include/asm-x86/hw_irq_32.h
++++ b/include/asm-x86/hw_irq_32.h
+@@ -26,19 +26,19 @@
+ * Interrupt entry/exit code at both C and assembly level
+ */
+
+-extern void (*interrupt[NR_IRQS])(void);
++extern void (*const interrupt[NR_IRQS])(void);
+
+ #ifdef CONFIG_SMP
+-fastcall void reschedule_interrupt(void);
+-fastcall void invalidate_interrupt(void);
+-fastcall void call_function_interrupt(void);
++void reschedule_interrupt(void);
++void invalidate_interrupt(void);
++void call_function_interrupt(void);
+ #endif
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+-fastcall void apic_timer_interrupt(void);
+-fastcall void error_interrupt(void);
+-fastcall void spurious_interrupt(void);
+-fastcall void thermal_interrupt(void);
++void apic_timer_interrupt(void);
++void error_interrupt(void);
++void spurious_interrupt(void);
++void thermal_interrupt(void);
+ #define platform_legacy_irq(irq) ((irq) < 16)
+ #endif
+
+diff --git a/include/asm-x86/hw_irq_64.h b/include/asm-x86/hw_irq_64.h
+index a470d59..312a58d 100644
+--- a/include/asm-x86/hw_irq_64.h
++++ b/include/asm-x86/hw_irq_64.h
+@@ -135,11 +135,13 @@ extern void init_8259A(int aeoi);
+ extern void send_IPI_self(int vector);
+ extern void init_VISWS_APIC_irqs(void);
+ extern void setup_IO_APIC(void);
++extern void enable_IO_APIC(void);
+ extern void disable_IO_APIC(void);
+ extern void print_IO_APIC(void);
+ extern int IO_APIC_get_PCI_irq_vector(int bus, int slot, int fn);
+ extern void send_IPI(int dest, int vector);
+ extern void setup_ioapic_dest(void);
++extern void native_init_IRQ(void);
+
+ extern unsigned long io_apic_irqs;
+
+diff --git a/include/asm-x86/i387.h b/include/asm-x86/i387.h
+index a8bbed3..ba8105c 100644
+--- a/include/asm-x86/i387.h
++++ b/include/asm-x86/i387.h
+@@ -1,5 +1,360 @@
+-#ifdef CONFIG_X86_32
+-# include "i387_32.h"
++/*
++ * Copyright (C) 1994 Linus Torvalds
++ *
++ * Pentium III FXSR, SSE support
++ * General FPU state handling cleanups
++ * Gareth Hughes <gareth at valinux.com>, May 2000
++ * x86-64 work by Andi Kleen 2002
++ */
++
++#ifndef _ASM_X86_I387_H
++#define _ASM_X86_I387_H
++
++#include <linux/sched.h>
++#include <linux/kernel_stat.h>
++#include <linux/regset.h>
++#include <asm/processor.h>
++#include <asm/sigcontext.h>
++#include <asm/user.h>
++#include <asm/uaccess.h>
++
++extern void fpu_init(void);
++extern unsigned int mxcsr_feature_mask;
++extern void mxcsr_feature_mask_init(void);
++extern void init_fpu(struct task_struct *child);
++extern asmlinkage void math_state_restore(void);
++
++extern user_regset_active_fn fpregs_active, xfpregs_active;
++extern user_regset_get_fn fpregs_get, xfpregs_get, fpregs_soft_get;
++extern user_regset_set_fn fpregs_set, xfpregs_set, fpregs_soft_set;
++
++#ifdef CONFIG_IA32_EMULATION
++struct _fpstate_ia32;
++extern int save_i387_ia32(struct _fpstate_ia32 __user *buf);
++extern int restore_i387_ia32(struct _fpstate_ia32 __user *buf);
++#endif
++
++#ifdef CONFIG_X86_64
++
++/* Ignore delayed exceptions from user space */
++static inline void tolerant_fwait(void)
++{
++ asm volatile("1: fwait\n"
++ "2:\n"
++ " .section __ex_table,\"a\"\n"
++ " .align 8\n"
++ " .quad 1b,2b\n"
++ " .previous\n");
++}
++
++static inline int restore_fpu_checking(struct i387_fxsave_struct *fx)
++{
++ int err;
++
++ asm volatile("1: rex64/fxrstor (%[fx])\n\t"
++ "2:\n"
++ ".section .fixup,\"ax\"\n"
++ "3: movl $-1,%[err]\n"
++ " jmp 2b\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .align 8\n"
++ " .quad 1b,3b\n"
++ ".previous"
++ : [err] "=r" (err)
++#if 0 /* See comment in __save_init_fpu() below. */
++ : [fx] "r" (fx), "m" (*fx), "0" (0));
++#else
++ : [fx] "cdaSDb" (fx), "m" (*fx), "0" (0));
++#endif
++ if (unlikely(err))
++ init_fpu(current);
++ return err;
++}
++
++#define X87_FSW_ES (1 << 7) /* Exception Summary */
++
++/* AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
++ is pending. Clear the x87 state here by setting it to fixed
++ values. The kernel data segment can be sometimes 0 and sometimes
++ new user value. Both should be ok.
++ Use the PDA as safe address because it should be already in L1. */
++static inline void clear_fpu_state(struct i387_fxsave_struct *fx)
++{
++ if (unlikely(fx->swd & X87_FSW_ES))
++ asm volatile("fnclex");
++ alternative_input(ASM_NOP8 ASM_NOP2,
++ " emms\n" /* clear stack tags */
++ " fildl %%gs:0", /* load to clear state */
++ X86_FEATURE_FXSAVE_LEAK);
++}
++
++static inline int save_i387_checking(struct i387_fxsave_struct __user *fx)
++{
++ int err;
++
++ asm volatile("1: rex64/fxsave (%[fx])\n\t"
++ "2:\n"
++ ".section .fixup,\"ax\"\n"
++ "3: movl $-1,%[err]\n"
++ " jmp 2b\n"
++ ".previous\n"
++ ".section __ex_table,\"a\"\n"
++ " .align 8\n"
++ " .quad 1b,3b\n"
++ ".previous"
++ : [err] "=r" (err), "=m" (*fx)
++#if 0 /* See comment in __fxsave_clear() below. */
++ : [fx] "r" (fx), "0" (0));
++#else
++ : [fx] "cdaSDb" (fx), "0" (0));
++#endif
++ if (unlikely(err) && __clear_user(fx, sizeof(struct i387_fxsave_struct)))
++ err = -EFAULT;
++ /* No need to clear here because the caller clears USED_MATH */
++ return err;
++}
++
++static inline void __save_init_fpu(struct task_struct *tsk)
++{
++ /* Using "rex64; fxsave %0" is broken because, if the memory operand
++ uses any extended registers for addressing, a second REX prefix
++ will be generated (to the assembler, rex64 followed by semicolon
++ is a separate instruction), and hence the 64-bitness is lost. */
++#if 0
++ /* Using "fxsaveq %0" would be the ideal choice, but is only supported
++ starting with gas 2.16. */
++ __asm__ __volatile__("fxsaveq %0"
++ : "=m" (tsk->thread.i387.fxsave));
++#elif 0
++ /* Using, as a workaround, the properly prefixed form below isn't
++ accepted by any binutils version so far released, complaining that
++ the same type of prefix is used twice if an extended register is
++ needed for addressing (fix submitted to mainline 2005-11-21). */
++ __asm__ __volatile__("rex64/fxsave %0"
++ : "=m" (tsk->thread.i387.fxsave));
++#else
++ /* This, however, we can work around by forcing the compiler to select
++ an addressing mode that doesn't require extended registers. */
++ __asm__ __volatile__("rex64/fxsave %P2(%1)"
++ : "=m" (tsk->thread.i387.fxsave)
++ : "cdaSDb" (tsk),
++ "i" (offsetof(__typeof__(*tsk),
++ thread.i387.fxsave)));
++#endif
++ clear_fpu_state(&tsk->thread.i387.fxsave);
++ task_thread_info(tsk)->status &= ~TS_USEDFPU;
++}
++
++/*
++ * Signal frame handlers.
++ */
++
++static inline int save_i387(struct _fpstate __user *buf)
++{
++ struct task_struct *tsk = current;
++ int err = 0;
++
++ BUILD_BUG_ON(sizeof(struct user_i387_struct) !=
++ sizeof(tsk->thread.i387.fxsave));
++
++ if ((unsigned long)buf % 16)
++ printk("save_i387: bad fpstate %p\n", buf);
++
++ if (!used_math())
++ return 0;
++ clear_used_math(); /* trigger finit */
++ if (task_thread_info(tsk)->status & TS_USEDFPU) {
++ err = save_i387_checking((struct i387_fxsave_struct __user *)buf);
++ if (err) return err;
++ task_thread_info(tsk)->status &= ~TS_USEDFPU;
++ stts();
++ } else {
++ if (__copy_to_user(buf, &tsk->thread.i387.fxsave,
++ sizeof(struct i387_fxsave_struct)))
++ return -1;
++ }
++ return 1;
++}
++
++/*
++ * This restores directly out of user space. Exceptions are handled.
++ */
++static inline int restore_i387(struct _fpstate __user *buf)
++{
++ set_used_math();
++ if (!(task_thread_info(current)->status & TS_USEDFPU)) {
++ clts();
++ task_thread_info(current)->status |= TS_USEDFPU;
++ }
++ return restore_fpu_checking((__force struct i387_fxsave_struct *)buf);
++}
++
++#else /* CONFIG_X86_32 */
++
++static inline void tolerant_fwait(void)
++{
++ asm volatile("fnclex ; fwait");
++}
++
++static inline void restore_fpu(struct task_struct *tsk)
++{
++ /*
++ * The "nop" is needed to make the instructions the same
++ * length.
++ */
++ alternative_input(
++ "nop ; frstor %1",
++ "fxrstor %1",
++ X86_FEATURE_FXSR,
++ "m" ((tsk)->thread.i387.fxsave));
++}
++
++/* We need a safe address that is cheap to find and that is already
++ in L1 during context switch. The best choices are unfortunately
++ different for UP and SMP */
++#ifdef CONFIG_SMP
++#define safe_address (__per_cpu_offset[0])
+ #else
+-# include "i387_64.h"
++#define safe_address (kstat_cpu(0).cpustat.user)
+ #endif
++
++/*
++ * These must be called with preempt disabled
++ */
++static inline void __save_init_fpu(struct task_struct *tsk)
++{
++ /* Use more nops than strictly needed in case the compiler
++ varies code */
++ alternative_input(
++ "fnsave %[fx] ;fwait;" GENERIC_NOP8 GENERIC_NOP4,
++ "fxsave %[fx]\n"
++ "bt $7,%[fsw] ; jnc 1f ; fnclex\n1:",
++ X86_FEATURE_FXSR,
++ [fx] "m" (tsk->thread.i387.fxsave),
++ [fsw] "m" (tsk->thread.i387.fxsave.swd) : "memory");
++ /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
++ is pending. Clear the x87 state here by setting it to fixed
++ values. safe_address is a random variable that should be in L1 */
++ alternative_input(
++ GENERIC_NOP8 GENERIC_NOP2,
++ "emms\n\t" /* clear stack tags */
++ "fildl %[addr]", /* set F?P to defined value */
++ X86_FEATURE_FXSAVE_LEAK,
++ [addr] "m" (safe_address));
++ task_thread_info(tsk)->status &= ~TS_USEDFPU;
++}
++
++/*
++ * Signal frame handlers...
++ */
++extern int save_i387(struct _fpstate __user *buf);
++extern int restore_i387(struct _fpstate __user *buf);
++
++#endif /* CONFIG_X86_64 */
++
++static inline void __unlazy_fpu(struct task_struct *tsk)
++{
++ if (task_thread_info(tsk)->status & TS_USEDFPU) {
++ __save_init_fpu(tsk);
++ stts();
++ } else
++ tsk->fpu_counter = 0;
++}
++
++static inline void __clear_fpu(struct task_struct *tsk)
++{
++ if (task_thread_info(tsk)->status & TS_USEDFPU) {
++ tolerant_fwait();
++ task_thread_info(tsk)->status &= ~TS_USEDFPU;
++ stts();
++ }
++}
++
++static inline void kernel_fpu_begin(void)
++{
++ struct thread_info *me = current_thread_info();
++ preempt_disable();
++ if (me->status & TS_USEDFPU)
++ __save_init_fpu(me->task);
++ else
++ clts();
++}
++
++static inline void kernel_fpu_end(void)
++{
++ stts();
++ preempt_enable();
++}
++
++#ifdef CONFIG_X86_64
++
++static inline void save_init_fpu(struct task_struct *tsk)
++{
++ __save_init_fpu(tsk);
++ stts();
++}
++
++#define unlazy_fpu __unlazy_fpu
++#define clear_fpu __clear_fpu
++
++#else /* CONFIG_X86_32 */
++
++/*
++ * These disable preemption on their own and are safe
++ */
++static inline void save_init_fpu(struct task_struct *tsk)
++{
++ preempt_disable();
++ __save_init_fpu(tsk);
++ stts();
++ preempt_enable();
++}
++
++static inline void unlazy_fpu(struct task_struct *tsk)
++{
++ preempt_disable();
++ __unlazy_fpu(tsk);
++ preempt_enable();
++}
++
++static inline void clear_fpu(struct task_struct *tsk)
++{
++ preempt_disable();
++ __clear_fpu(tsk);
++ preempt_enable();
++}
++
++#endif /* CONFIG_X86_64 */
++
++/*
++ * i387 state interaction
++ */
++static inline unsigned short get_fpu_cwd(struct task_struct *tsk)
++{
++ if (cpu_has_fxsr) {
++ return tsk->thread.i387.fxsave.cwd;
++ } else {
++ return (unsigned short)tsk->thread.i387.fsave.cwd;
++ }
++}
++
++static inline unsigned short get_fpu_swd(struct task_struct *tsk)
++{
++ if (cpu_has_fxsr) {
++ return tsk->thread.i387.fxsave.swd;
++ } else {
++ return (unsigned short)tsk->thread.i387.fsave.swd;
++ }
++}
++
++static inline unsigned short get_fpu_mxcsr(struct task_struct *tsk)
++{
++ if (cpu_has_xmm) {
++ return tsk->thread.i387.fxsave.mxcsr;
++ } else {
++ return MXCSR_DEFAULT;
++ }
++}
++
++#endif /* _ASM_X86_I387_H */
+diff --git a/include/asm-x86/i387_32.h b/include/asm-x86/i387_32.h
+deleted file mode 100644
+index cdd1e24..0000000
+--- a/include/asm-x86/i387_32.h
++++ /dev/null
+@@ -1,151 +0,0 @@
-/*
-- * SH-5
-- * A bit of description here, for neff=32.
-- *
-- * |<--- tag (19 bits) --->|
-- * +-----------------------------+-----------------+------+----------+------+
-- * | | | ways |set index |offset|
-- * +-----------------------------+-----------------+------+----------+------+
-- * ^ 2 bits 8 bits 5 bits
-- * +- Bit 31
+- * include/asm-i386/i387.h
- *
-- * Cacheline size is based on offset: 5 bits = 32 bytes per line
-- * A cache line is identified by a tag + set but OCACHETAG/ICACHETAG
-- * have a broader space for registers. These are outlined by
-- * CACHE_?C_*_STEP below.
+- * Copyright (C) 1994 Linus Torvalds
- *
+- * Pentium III FXSR, SSE support
+- * General FPU state handling cleanups
+- * Gareth Hughes <gareth at valinux.com>, May 2000
- */
-
--/* Valid and Dirty bits */
--#define SH_CACHE_VALID (1LL<<0)
--#define SH_CACHE_UPDATED (1LL<<57)
+-#ifndef __ASM_I386_I387_H
+-#define __ASM_I386_I387_H
-
--/* Cache flags */
--#define SH_CACHE_MODE_WT (1LL<<0)
--#define SH_CACHE_MODE_WB (1LL<<1)
+-#include <linux/sched.h>
+-#include <linux/init.h>
+-#include <linux/kernel_stat.h>
+-#include <asm/processor.h>
+-#include <asm/sigcontext.h>
+-#include <asm/user.h>
-
--#ifndef __ASSEMBLY__
+-extern void mxcsr_feature_mask_init(void);
+-extern void init_fpu(struct task_struct *);
-
-/*
-- * Cache information structure.
-- *
-- * Defined for both I and D cache, per-processor.
+- * FPU lazy state save handling...
- */
--struct cache_info {
-- unsigned int ways;
-- unsigned int sets;
-- unsigned int linesz;
-
-- unsigned int way_shift;
-- unsigned int entry_shift;
-- unsigned int set_shift;
-- unsigned int way_step_shift;
-- unsigned int asid_shift;
+-/*
+- * The "nop" is needed to make the instructions the same
+- * length.
+- */
+-#define restore_fpu(tsk) \
+- alternative_input( \
+- "nop ; frstor %1", \
+- "fxrstor %1", \
+- X86_FEATURE_FXSR, \
+- "m" ((tsk)->thread.i387.fxsave))
-
-- unsigned int way_ofs;
+-extern void kernel_fpu_begin(void);
+-#define kernel_fpu_end() do { stts(); preempt_enable(); } while(0)
-
-- unsigned int asid_mask;
-- unsigned int idx_mask;
-- unsigned int epn_mask;
+-/* We need a safe address that is cheap to find and that is already
+- in L1 during context switch. The best choices are unfortunately
+- different for UP and SMP */
+-#ifdef CONFIG_SMP
+-#define safe_address (__per_cpu_offset[0])
+-#else
+-#define safe_address (kstat_cpu(0).cpustat.user)
+-#endif
-
-- unsigned long flags;
--};
+-/*
+- * These must be called with preempt disabled
+- */
+-static inline void __save_init_fpu( struct task_struct *tsk )
+-{
+- /* Use more nops than strictly needed in case the compiler
+- varies code */
+- alternative_input(
+- "fnsave %[fx] ;fwait;" GENERIC_NOP8 GENERIC_NOP4,
+- "fxsave %[fx]\n"
+- "bt $7,%[fsw] ; jnc 1f ; fnclex\n1:",
+- X86_FEATURE_FXSR,
+- [fx] "m" (tsk->thread.i387.fxsave),
+- [fsw] "m" (tsk->thread.i387.fxsave.swd) : "memory");
+- /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+- is pending. Clear the x87 state here by setting it to fixed
+- values. safe_address is a random variable that should be in L1 */
+- alternative_input(
+- GENERIC_NOP8 GENERIC_NOP2,
+- "emms\n\t" /* clear stack tags */
+- "fildl %[addr]", /* set F?P to defined value */
+- X86_FEATURE_FXSAVE_LEAK,
+- [addr] "m" (safe_address));
+- task_thread_info(tsk)->status &= ~TS_USEDFPU;
+-}
+-
+-#define __unlazy_fpu( tsk ) do { \
+- if (task_thread_info(tsk)->status & TS_USEDFPU) { \
+- __save_init_fpu(tsk); \
+- stts(); \
+- } else \
+- tsk->fpu_counter = 0; \
+-} while (0)
-
--#endif /* __ASSEMBLY__ */
+-#define __clear_fpu( tsk ) \
+-do { \
+- if (task_thread_info(tsk)->status & TS_USEDFPU) { \
+- asm volatile("fnclex ; fwait"); \
+- task_thread_info(tsk)->status &= ~TS_USEDFPU; \
+- stts(); \
+- } \
+-} while (0)
-
--/* Instruction cache */
--#define CACHE_IC_ADDRESS_ARRAY 0x01000000
-
--/* Operand Cache */
--#define CACHE_OC_ADDRESS_ARRAY 0x01800000
+-/*
+- * These disable preemption on their own and are safe
+- */
+-static inline void save_init_fpu( struct task_struct *tsk )
+-{
+- preempt_disable();
+- __save_init_fpu(tsk);
+- stts();
+- preempt_enable();
+-}
-
--/* These declarations relate to cache 'synonyms' in the operand cache. A
-- 'synonym' occurs where effective address bits overlap between those used for
-- indexing the cache sets and those passed to the MMU for translation. In the
-- case of SH5-101 & SH5-103, only bit 12 is affected for 4k pages. */
+-#define unlazy_fpu( tsk ) do { \
+- preempt_disable(); \
+- __unlazy_fpu(tsk); \
+- preempt_enable(); \
+-} while (0)
-
--#define CACHE_OC_N_SYNBITS 1 /* Number of synonym bits */
--#define CACHE_OC_SYN_SHIFT 12
--/* Mask to select synonym bit(s) */
--#define CACHE_OC_SYN_MASK (((1UL<<CACHE_OC_N_SYNBITS)-1)<<CACHE_OC_SYN_SHIFT)
+-#define clear_fpu( tsk ) do { \
+- preempt_disable(); \
+- __clear_fpu( tsk ); \
+- preempt_enable(); \
+-} while (0)
-
+-/*
+- * FPU state interaction...
+- */
+-extern unsigned short get_fpu_cwd( struct task_struct *tsk );
+-extern unsigned short get_fpu_swd( struct task_struct *tsk );
+-extern unsigned short get_fpu_mxcsr( struct task_struct *tsk );
+-extern asmlinkage void math_state_restore(void);
-
-/*
-- * Instruction cache can't be invalidated based on physical addresses.
-- * No Instruction Cache defines required, then.
+- * Signal frame handlers...
- */
+-extern int save_i387( struct _fpstate __user *buf );
+-extern int restore_i387( struct _fpstate __user *buf );
-
--#endif /* __ASM_SH64_CACHE_H */
-diff --git a/include/asm-sh64/cacheflush.h b/include/asm-sh64/cacheflush.h
+-/*
+- * ptrace request handers...
+- */
+-extern int get_fpregs( struct user_i387_struct __user *buf,
+- struct task_struct *tsk );
+-extern int set_fpregs( struct task_struct *tsk,
+- struct user_i387_struct __user *buf );
+-
+-extern int get_fpxregs( struct user_fxsr_struct __user *buf,
+- struct task_struct *tsk );
+-extern int set_fpxregs( struct task_struct *tsk,
+- struct user_fxsr_struct __user *buf );
+-
+-/*
+- * FPU state for core dumps...
+- */
+-extern int dump_fpu( struct pt_regs *regs,
+- struct user_i387_struct *fpu );
+-
+-#endif /* __ASM_I386_I387_H */
+diff --git a/include/asm-x86/i387_64.h b/include/asm-x86/i387_64.h
deleted file mode 100644
-index 1e53a47..0000000
---- a/include/asm-sh64/cacheflush.h
+index 3a4ffba..0000000
+--- a/include/asm-x86/i387_64.h
+++ /dev/null
-@@ -1,50 +0,0 @@
--#ifndef __ASM_SH64_CACHEFLUSH_H
--#define __ASM_SH64_CACHEFLUSH_H
+@@ -1,214 +0,0 @@
+-/*
+- * include/asm-x86_64/i387.h
+- *
+- * Copyright (C) 1994 Linus Torvalds
+- *
+- * Pentium III FXSR, SSE support
+- * General FPU state handling cleanups
+- * Gareth Hughes <gareth at valinux.com>, May 2000
+- * x86-64 work by Andi Kleen 2002
+- */
-
--#ifndef __ASSEMBLY__
+-#ifndef __ASM_X86_64_I387_H
+-#define __ASM_X86_64_I387_H
-
--#include <asm/page.h>
+-#include <linux/sched.h>
+-#include <asm/processor.h>
+-#include <asm/sigcontext.h>
+-#include <asm/user.h>
+-#include <asm/thread_info.h>
+-#include <asm/uaccess.h>
-
--struct vm_area_struct;
--struct page;
--struct mm_struct;
+-extern void fpu_init(void);
+-extern unsigned int mxcsr_feature_mask;
+-extern void mxcsr_feature_mask_init(void);
+-extern void init_fpu(struct task_struct *child);
+-extern int save_i387(struct _fpstate __user *buf);
+-extern asmlinkage void math_state_restore(void);
+-
+-/*
+- * FPU lazy state save handling...
+- */
+-
+-#define unlazy_fpu(tsk) do { \
+- if (task_thread_info(tsk)->status & TS_USEDFPU) \
+- save_init_fpu(tsk); \
+- else \
+- tsk->fpu_counter = 0; \
+-} while (0)
-
--extern void flush_cache_all(void);
--extern void flush_cache_mm(struct mm_struct *mm);
--extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
--extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
-- unsigned long end);
--extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
--extern void flush_dcache_page(struct page *pg);
--extern void flush_icache_range(unsigned long start, unsigned long end);
--extern void flush_icache_user_range(struct vm_area_struct *vma,
-- struct page *page, unsigned long addr,
-- int len);
+-/* Ignore delayed exceptions from user space */
+-static inline void tolerant_fwait(void)
+-{
+- asm volatile("1: fwait\n"
+- "2:\n"
+- " .section __ex_table,\"a\"\n"
+- " .align 8\n"
+- " .quad 1b,2b\n"
+- " .previous\n");
+-}
+-
+-#define clear_fpu(tsk) do { \
+- if (task_thread_info(tsk)->status & TS_USEDFPU) { \
+- tolerant_fwait(); \
+- task_thread_info(tsk)->status &= ~TS_USEDFPU; \
+- stts(); \
+- } \
+-} while (0)
-
--#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
+-/*
+- * ptrace request handers...
+- */
+-extern int get_fpregs(struct user_i387_struct __user *buf,
+- struct task_struct *tsk);
+-extern int set_fpregs(struct task_struct *tsk,
+- struct user_i387_struct __user *buf);
+-
+-/*
+- * i387 state interaction
+- */
+-#define get_fpu_mxcsr(t) ((t)->thread.i387.fxsave.mxcsr)
+-#define get_fpu_cwd(t) ((t)->thread.i387.fxsave.cwd)
+-#define get_fpu_fxsr_twd(t) ((t)->thread.i387.fxsave.twd)
+-#define get_fpu_swd(t) ((t)->thread.i387.fxsave.swd)
+-#define set_fpu_cwd(t,val) ((t)->thread.i387.fxsave.cwd = (val))
+-#define set_fpu_swd(t,val) ((t)->thread.i387.fxsave.swd = (val))
+-#define set_fpu_fxsr_twd(t,val) ((t)->thread.i387.fxsave.twd = (val))
+-
+-#define X87_FSW_ES (1 << 7) /* Exception Summary */
+-
+-/* AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+- is pending. Clear the x87 state here by setting it to fixed
+- values. The kernel data segment can be sometimes 0 and sometimes
+- new user value. Both should be ok.
+- Use the PDA as safe address because it should be already in L1. */
+-static inline void clear_fpu_state(struct i387_fxsave_struct *fx)
+-{
+- if (unlikely(fx->swd & X87_FSW_ES))
+- asm volatile("fnclex");
+- alternative_input(ASM_NOP8 ASM_NOP2,
+- " emms\n" /* clear stack tags */
+- " fildl %%gs:0", /* load to clear state */
+- X86_FEATURE_FXSAVE_LEAK);
+-}
-
--#define flush_dcache_mmap_lock(mapping) do { } while (0)
--#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+-static inline int restore_fpu_checking(struct i387_fxsave_struct *fx)
+-{
+- int err;
-
--#define flush_cache_vmap(start, end) flush_cache_all()
--#define flush_cache_vunmap(start, end) flush_cache_all()
+- asm volatile("1: rex64/fxrstor (%[fx])\n\t"
+- "2:\n"
+- ".section .fixup,\"ax\"\n"
+- "3: movl $-1,%[err]\n"
+- " jmp 2b\n"
+- ".previous\n"
+- ".section __ex_table,\"a\"\n"
+- " .align 8\n"
+- " .quad 1b,3b\n"
+- ".previous"
+- : [err] "=r" (err)
+-#if 0 /* See comment in __fxsave_clear() below. */
+- : [fx] "r" (fx), "m" (*fx), "0" (0));
+-#else
+- : [fx] "cdaSDb" (fx), "m" (*fx), "0" (0));
+-#endif
+- if (unlikely(err))
+- init_fpu(current);
+- return err;
+-}
-
--#define flush_icache_page(vma, page) do { } while (0)
+-static inline int save_i387_checking(struct i387_fxsave_struct __user *fx)
+-{
+- int err;
-
--#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-- do { \
-- flush_cache_page(vma, vaddr, page_to_pfn(page));\
-- memcpy(dst, src, len); \
-- flush_icache_user_range(vma, page, vaddr, len); \
-- } while (0)
+- asm volatile("1: rex64/fxsave (%[fx])\n\t"
+- "2:\n"
+- ".section .fixup,\"ax\"\n"
+- "3: movl $-1,%[err]\n"
+- " jmp 2b\n"
+- ".previous\n"
+- ".section __ex_table,\"a\"\n"
+- " .align 8\n"
+- " .quad 1b,3b\n"
+- ".previous"
+- : [err] "=r" (err), "=m" (*fx)
+-#if 0 /* See comment in __fxsave_clear() below. */
+- : [fx] "r" (fx), "0" (0));
+-#else
+- : [fx] "cdaSDb" (fx), "0" (0));
+-#endif
+- if (unlikely(err) && __clear_user(fx, sizeof(struct i387_fxsave_struct)))
+- err = -EFAULT;
+- /* No need to clear here because the caller clears USED_MATH */
+- return err;
+-}
-
--#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-- do { \
-- flush_cache_page(vma, vaddr, page_to_pfn(page));\
-- memcpy(dst, src, len); \
-- } while (0)
+-static inline void __fxsave_clear(struct task_struct *tsk)
+-{
+- /* Using "rex64; fxsave %0" is broken because, if the memory operand
+- uses any extended registers for addressing, a second REX prefix
+- will be generated (to the assembler, rex64 followed by semicolon
+- is a separate instruction), and hence the 64-bitness is lost. */
+-#if 0
+- /* Using "fxsaveq %0" would be the ideal choice, but is only supported
+- starting with gas 2.16. */
+- __asm__ __volatile__("fxsaveq %0"
+- : "=m" (tsk->thread.i387.fxsave));
+-#elif 0
+- /* Using, as a workaround, the properly prefixed form below isn't
+- accepted by any binutils version so far released, complaining that
+- the same type of prefix is used twice if an extended register is
+- needed for addressing (fix submitted to mainline 2005-11-21). */
+- __asm__ __volatile__("rex64/fxsave %0"
+- : "=m" (tsk->thread.i387.fxsave));
+-#else
+- /* This, however, we can work around by forcing the compiler to select
+- an addressing mode that doesn't require extended registers. */
+- __asm__ __volatile__("rex64/fxsave %P2(%1)"
+- : "=m" (tsk->thread.i387.fxsave)
+- : "cdaSDb" (tsk),
+- "i" (offsetof(__typeof__(*tsk),
+- thread.i387.fxsave)));
+-#endif
+- clear_fpu_state(&tsk->thread.i387.fxsave);
+-}
-
--#endif /* __ASSEMBLY__ */
+-static inline void kernel_fpu_begin(void)
+-{
+- struct thread_info *me = current_thread_info();
+- preempt_disable();
+- if (me->status & TS_USEDFPU) {
+- __fxsave_clear(me->task);
+- me->status &= ~TS_USEDFPU;
+- return;
+- }
+- clts();
+-}
-
--#endif /* __ASM_SH64_CACHEFLUSH_H */
+-static inline void kernel_fpu_end(void)
+-{
+- stts();
+- preempt_enable();
+-}
-
-diff --git a/include/asm-sh64/cayman.h b/include/asm-sh64/cayman.h
-deleted file mode 100644
-index 7b6b968..0000000
---- a/include/asm-sh64/cayman.h
-+++ /dev/null
-@@ -1,20 +0,0 @@
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/cayman.h
-- *
-- * Cayman definitions
-- *
-- * Global definitions for the SH5 Cayman board
-- *
-- * Copyright (C) 2002 Stuart Menefy
+-static inline void save_init_fpu(struct task_struct *tsk)
+-{
+- __fxsave_clear(tsk);
+- task_thread_info(tsk)->status &= ~TS_USEDFPU;
+- stts();
+-}
+-
+-/*
+- * This restores directly out of user space. Exceptions are handled.
- */
+-static inline int restore_i387(struct _fpstate __user *buf)
+-{
+- set_used_math();
+- if (!(task_thread_info(current)->status & TS_USEDFPU)) {
+- clts();
+- task_thread_info(current)->status |= TS_USEDFPU;
+- }
+- return restore_fpu_checking((__force struct i387_fxsave_struct *)buf);
+-}
+-
+-#endif /* __ASM_X86_64_I387_H */
+diff --git a/include/asm-x86/i8253.h b/include/asm-x86/i8253.h
+index 747548e..b51c048 100644
+--- a/include/asm-x86/i8253.h
++++ b/include/asm-x86/i8253.h
+@@ -12,4 +12,7 @@ extern struct clock_event_device *global_clock_event;
+
+ extern void setup_pit_timer(void);
+
++#define inb_pit inb_p
++#define outb_pit outb_p
++
+ #endif /* __ASM_I8253_H__ */
+diff --git a/include/asm-x86/i8259.h b/include/asm-x86/i8259.h
+index 29d8f9a..67c319e 100644
+--- a/include/asm-x86/i8259.h
++++ b/include/asm-x86/i8259.h
+@@ -3,10 +3,25 @@
+
+ extern unsigned int cached_irq_mask;
+
+-#define __byte(x,y) (((unsigned char *) &(y))[x])
++#define __byte(x,y) (((unsigned char *) &(y))[x])
+ #define cached_master_mask (__byte(0, cached_irq_mask))
+ #define cached_slave_mask (__byte(1, cached_irq_mask))
+
++/* i8259A PIC registers */
++#define PIC_MASTER_CMD 0x20
++#define PIC_MASTER_IMR 0x21
++#define PIC_MASTER_ISR PIC_MASTER_CMD
++#define PIC_MASTER_POLL PIC_MASTER_ISR
++#define PIC_MASTER_OCW3 PIC_MASTER_ISR
++#define PIC_SLAVE_CMD 0xa0
++#define PIC_SLAVE_IMR 0xa1
++
++/* i8259A PIC related value */
++#define PIC_CASCADE_IR 2
++#define MASTER_ICW4_DEFAULT 0x01
++#define SLAVE_ICW4_DEFAULT 0x01
++#define PIC_ICW4_AEOI 2
++
+ extern spinlock_t i8259A_lock;
+
+ extern void init_8259A(int auto_eoi);
+@@ -14,4 +29,7 @@ extern void enable_8259A_irq(unsigned int irq);
+ extern void disable_8259A_irq(unsigned int irq);
+ extern unsigned int startup_8259A_irq(unsigned int irq);
+
++#define inb_pic inb_p
++#define outb_pic outb_p
++
+ #endif /* __ASM_I8259_H__ */
+diff --git a/include/asm-x86/ia32.h b/include/asm-x86/ia32.h
+index 0190b7c..aa97332 100644
+--- a/include/asm-x86/ia32.h
++++ b/include/asm-x86/ia32.h
+@@ -159,12 +159,6 @@ struct ustat32 {
+ #define IA32_STACK_TOP IA32_PAGE_OFFSET
+
+ #ifdef __KERNEL__
+-struct user_desc;
+-struct siginfo_t;
+-int do_get_thread_area(struct thread_struct *t, struct user_desc __user *info);
+-int do_set_thread_area(struct thread_struct *t, struct user_desc __user *info);
+-int ia32_child_tls(struct task_struct *p, struct pt_regs *childregs);
+-
+ struct linux_binprm;
+ extern int ia32_setup_arg_pages(struct linux_binprm *bprm,
+ unsigned long stack_top, int exec_stack);
+diff --git a/include/asm-x86/ia32_unistd.h b/include/asm-x86/ia32_unistd.h
+index 5b52ce5..61cea9e 100644
+--- a/include/asm-x86/ia32_unistd.h
++++ b/include/asm-x86/ia32_unistd.h
+@@ -5,7 +5,7 @@
+ * This file contains the system call numbers of the ia32 port,
+ * this is for the kernel only.
+ * Only add syscalls here where some part of the kernel needs to know
+- * the number. This should be otherwise in sync with asm-i386/unistd.h. -AK
++ * the number. This should be otherwise in sync with asm-x86/unistd_32.h. -AK
+ */
+
+ #define __NR_ia32_restart_syscall 0
+diff --git a/include/asm-x86/ide.h b/include/asm-x86/ide.h
+index 42130ad..c2552d8 100644
+--- a/include/asm-x86/ide.h
++++ b/include/asm-x86/ide.h
+@@ -1,6 +1,4 @@
+ /*
+- * linux/include/asm-i386/ide.h
+- *
+ * Copyright (C) 1994-1996 Linus Torvalds & authors
+ */
+
+diff --git a/include/asm-x86/idle.h b/include/asm-x86/idle.h
+index 6bd47dc..d240e5b 100644
+--- a/include/asm-x86/idle.h
++++ b/include/asm-x86/idle.h
+@@ -6,7 +6,6 @@
+
+ struct notifier_block;
+ void idle_notifier_register(struct notifier_block *n);
+-void idle_notifier_unregister(struct notifier_block *n);
+
+ void enter_idle(void);
+ void exit_idle(void);
+diff --git a/include/asm-x86/io_32.h b/include/asm-x86/io_32.h
+index fe881cd..586d7aa 100644
+--- a/include/asm-x86/io_32.h
++++ b/include/asm-x86/io_32.h
+@@ -100,8 +100,6 @@ static inline void * phys_to_virt(unsigned long address)
+ */
+ #define page_to_phys(page) ((dma_addr_t)page_to_pfn(page) << PAGE_SHIFT)
+
+-extern void __iomem * __ioremap(unsigned long offset, unsigned long size, unsigned long flags);
-
+ /**
+ * ioremap - map bus memory into CPU space
+ * @offset: bus address of the memory
+@@ -111,32 +109,39 @@ extern void __iomem * __ioremap(unsigned long offset, unsigned long size, unsign
+ * make bus memory CPU accessible via the readb/readw/readl/writeb/
+ * writew/writel functions and the other mmio helpers. The returned
+ * address is not guaranteed to be usable directly as a virtual
+- * address.
++ * address.
+ *
+ * If the area you are trying to map is a PCI BAR you should have a
+ * look at pci_iomap().
+ */
++extern void __iomem *ioremap_nocache(unsigned long offset, unsigned long size);
++extern void __iomem *ioremap_cache(unsigned long offset, unsigned long size);
+
+-static inline void __iomem * ioremap(unsigned long offset, unsigned long size)
++/*
++ * The default ioremap() behavior is non-cached:
++ */
++static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
+ {
+- return __ioremap(offset, size, 0);
++ return ioremap_nocache(offset, size);
+ }
+
+-extern void __iomem * ioremap_nocache(unsigned long offset, unsigned long size);
+ extern void iounmap(volatile void __iomem *addr);
+
+ /*
+- * bt_ioremap() and bt_iounmap() are for temporary early boot-time
++ * early_ioremap() and early_iounmap() are for temporary early boot-time
+ * mappings, before the real ioremap() is functional.
+ * A boot-time mapping is currently limited to at most 16 pages.
+ */
+-extern void *bt_ioremap(unsigned long offset, unsigned long size);
+-extern void bt_iounmap(void *addr, unsigned long size);
++extern void early_ioremap_init(void);
++extern void early_ioremap_clear(void);
++extern void early_ioremap_reset(void);
++extern void *early_ioremap(unsigned long offset, unsigned long size);
++extern void early_iounmap(void *addr, unsigned long size);
+ extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
+
+ /* Use early IO mappings for DMI because it's initialized early */
+-#define dmi_ioremap bt_ioremap
+-#define dmi_iounmap bt_iounmap
++#define dmi_ioremap early_ioremap
++#define dmi_iounmap early_iounmap
+ #define dmi_alloc alloc_bootmem
+
+ /*
+@@ -250,10 +255,10 @@ static inline void flush_write_buffers(void)
+
+ #endif /* __KERNEL__ */
+
+-static inline void native_io_delay(void)
+-{
+- asm volatile("outb %%al,$0x80" : : : "memory");
+-}
++extern void native_io_delay(void);
++
++extern int io_delay_type;
++extern void io_delay_init(void);
+
+ #if defined(CONFIG_PARAVIRT)
+ #include <asm/paravirt.h>
+diff --git a/include/asm-x86/io_64.h b/include/asm-x86/io_64.h
+index a037b07..f64a59c 100644
+--- a/include/asm-x86/io_64.h
++++ b/include/asm-x86/io_64.h
+@@ -35,12 +35,24 @@
+ * - Arnaldo Carvalho de Melo <acme at conectiva.com.br>
+ */
+
+-#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
++extern void native_io_delay(void);
+
+-#ifdef REALLY_SLOW_IO
+-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
++extern int io_delay_type;
++extern void io_delay_init(void);
++
++#if defined(CONFIG_PARAVIRT)
++#include <asm/paravirt.h>
+ #else
+-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
++
++static inline void slow_down_io(void)
++{
++ native_io_delay();
++#ifdef REALLY_SLOW_IO
++ native_io_delay();
++ native_io_delay();
++ native_io_delay();
++#endif
++}
+ #endif
+
+ /*
+@@ -52,9 +64,15 @@ static inline void out##s(unsigned x value, unsigned short port) {
+ #define __OUT2(s,s1,s2) \
+ __asm__ __volatile__ ("out" #s " %" s1 "0,%" s2 "1"
+
++#ifndef REALLY_SLOW_IO
++#define REALLY_SLOW_IO
++#define UNSET_REALLY_SLOW_IO
++#endif
++
+ #define __OUT(s,s1,x) \
+ __OUT1(s,x) __OUT2(s,s1,"w") : : "a" (value), "Nd" (port)); } \
+-__OUT1(s##_p,x) __OUT2(s,s1,"w") __FULL_SLOW_DOWN_IO : : "a" (value), "Nd" (port));} \
++__OUT1(s##_p, x) __OUT2(s, s1, "w") : : "a" (value), "Nd" (port)); \
++ slow_down_io(); }
+
+ #define __IN1(s) \
+ static inline RETURN_TYPE in##s(unsigned short port) { RETURN_TYPE _v;
+@@ -63,8 +81,13 @@ static inline RETURN_TYPE in##s(unsigned short port) { RETURN_TYPE _v;
+ __asm__ __volatile__ ("in" #s " %" s2 "1,%" s1 "0"
+
+ #define __IN(s,s1,i...) \
+-__IN1(s) __IN2(s,s1,"w") : "=a" (_v) : "Nd" (port) ,##i ); return _v; } \
+-__IN1(s##_p) __IN2(s,s1,"w") __FULL_SLOW_DOWN_IO : "=a" (_v) : "Nd" (port) ,##i ); return _v; } \
++__IN1(s) __IN2(s, s1, "w") : "=a" (_v) : "Nd" (port), ##i); return _v; } \
++__IN1(s##_p) __IN2(s, s1, "w") : "=a" (_v) : "Nd" (port), ##i); \
++ slow_down_io(); return _v; }
++
++#ifdef UNSET_REALLY_SLOW_IO
++#undef REALLY_SLOW_IO
++#endif
+
+ #define __INS(s) \
+ static inline void ins##s(unsigned short port, void * addr, unsigned long count) \
+@@ -127,13 +150,6 @@ static inline void * phys_to_virt(unsigned long address)
+
+ #include <asm-generic/iomap.h>
+
+-extern void __iomem *__ioremap(unsigned long offset, unsigned long size, unsigned long flags);
-
--/* Setup for the SMSC FDC37C935 / LAN91C100FD */
--#define SMSC_IRQ IRQ_IRL1
+-static inline void __iomem * ioremap (unsigned long offset, unsigned long size)
+-{
+- return __ioremap(offset, size, 0);
+-}
-
--/* Setup for PCI Bus 2, which transmits interrupts via the EPLD */
--#define PCI2_IRQ IRQ_IRL3
-diff --git a/include/asm-sh64/checksum.h b/include/asm-sh64/checksum.h
+ extern void *early_ioremap(unsigned long addr, unsigned long size);
+ extern void early_iounmap(void *addr, unsigned long size);
+
+@@ -142,8 +158,19 @@ extern void early_iounmap(void *addr, unsigned long size);
+ * it's useful if some control registers are in such an area and write combining
+ * or read caching is not desirable:
+ */
+-extern void __iomem * ioremap_nocache (unsigned long offset, unsigned long size);
++extern void __iomem *ioremap_nocache(unsigned long offset, unsigned long size);
++extern void __iomem *ioremap_cache(unsigned long offset, unsigned long size);
++
++/*
++ * The default ioremap() behavior is non-cached:
++ */
++static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
++{
++ return ioremap_nocache(offset, size);
++}
++
+ extern void iounmap(volatile void __iomem *addr);
++
+ extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
+
+ /*
+diff --git a/include/asm-x86/io_apic.h b/include/asm-x86/io_apic.h
+index 8849496..0f5b3fe 100644
+--- a/include/asm-x86/io_apic.h
++++ b/include/asm-x86/io_apic.h
+@@ -1,5 +1,159 @@
++#ifndef __ASM_IO_APIC_H
++#define __ASM_IO_APIC_H
++
++#include <asm/types.h>
++#include <asm/mpspec.h>
++#include <asm/apicdef.h>
++
++/*
++ * Intel IO-APIC support for SMP and UP systems.
++ *
++ * Copyright (C) 1997, 1998, 1999, 2000 Ingo Molnar
++ */
++
++/*
++ * The structure of the IO-APIC:
++ */
++union IO_APIC_reg_00 {
++ u32 raw;
++ struct {
++ u32 __reserved_2 : 14,
++ LTS : 1,
++ delivery_type : 1,
++ __reserved_1 : 8,
++ ID : 8;
++ } __attribute__ ((packed)) bits;
++};
++
++union IO_APIC_reg_01 {
++ u32 raw;
++ struct {
++ u32 version : 8,
++ __reserved_2 : 7,
++ PRQ : 1,
++ entries : 8,
++ __reserved_1 : 8;
++ } __attribute__ ((packed)) bits;
++};
++
++union IO_APIC_reg_02 {
++ u32 raw;
++ struct {
++ u32 __reserved_2 : 24,
++ arbitration : 4,
++ __reserved_1 : 4;
++ } __attribute__ ((packed)) bits;
++};
++
++union IO_APIC_reg_03 {
++ u32 raw;
++ struct {
++ u32 boot_DT : 1,
++ __reserved_1 : 31;
++ } __attribute__ ((packed)) bits;
++};
++
++enum ioapic_irq_destination_types {
++ dest_Fixed = 0,
++ dest_LowestPrio = 1,
++ dest_SMI = 2,
++ dest__reserved_1 = 3,
++ dest_NMI = 4,
++ dest_INIT = 5,
++ dest__reserved_2 = 6,
++ dest_ExtINT = 7
++};
++
++struct IO_APIC_route_entry {
++ __u32 vector : 8,
++ delivery_mode : 3, /* 000: FIXED
++ * 001: lowest prio
++ * 111: ExtINT
++ */
++ dest_mode : 1, /* 0: physical, 1: logical */
++ delivery_status : 1,
++ polarity : 1,
++ irr : 1,
++ trigger : 1, /* 0: edge, 1: level */
++ mask : 1, /* 0: enabled, 1: disabled */
++ __reserved_2 : 15;
++
+ #ifdef CONFIG_X86_32
+-# include "io_apic_32.h"
++ union {
++ struct {
++ __u32 __reserved_1 : 24,
++ physical_dest : 4,
++ __reserved_2 : 4;
++ } physical;
++
++ struct {
++ __u32 __reserved_1 : 24,
++ logical_dest : 8;
++ } logical;
++ } dest;
+ #else
+-# include "io_apic_64.h"
++ __u32 __reserved_3 : 24,
++ dest : 8;
++#endif
++
++} __attribute__ ((packed));
++
++#ifdef CONFIG_X86_IO_APIC
++
++/*
++ * # of IO-APICs and # of IRQ routing registers
++ */
++extern int nr_ioapics;
++extern int nr_ioapic_registers[MAX_IO_APICS];
++
++/*
++ * MP-BIOS irq configuration table structures:
++ */
++
++/* I/O APIC entries */
++extern struct mpc_config_ioapic mp_ioapics[MAX_IO_APICS];
++
++/* # of MP IRQ source entries */
++extern int mp_irq_entries;
++
++/* MP IRQ source entries */
++extern struct mpc_config_intsrc mp_irqs[MAX_IRQ_SOURCES];
++
++/* non-0 if default (table-less) MP configuration */
++extern int mpc_default_type;
++
++/* Older SiS APIC requires we rewrite the index register */
++extern int sis_apic_bug;
++
++/* 1 if "noapic" boot option passed */
++extern int skip_ioapic_setup;
++
++static inline void disable_ioapic_setup(void)
++{
++ skip_ioapic_setup = 1;
++}
++
++/*
++ * If we use the IO-APIC for IRQ routing, disable automatic
++ * assignment of PCI IRQ's.
++ */
++#define io_apic_assign_pci_irqs \
++ (mp_irq_entries && !skip_ioapic_setup && io_apic_irqs)
++
++#ifdef CONFIG_ACPI
++extern int io_apic_get_unique_id(int ioapic, int apic_id);
++extern int io_apic_get_version(int ioapic);
++extern int io_apic_get_redir_entries(int ioapic);
++extern int io_apic_set_pci_routing(int ioapic, int pin, int irq,
++ int edge_level, int active_high_low);
++extern int timer_uses_ioapic_pin_0;
++#endif /* CONFIG_ACPI */
++
++extern int (*ioapic_renumber_irq)(int ioapic, int irq);
++extern void ioapic_init_mappings(void);
++
++#else /* !CONFIG_X86_IO_APIC */
++#define io_apic_assign_pci_irqs 0
++#endif
++
+ #endif
+diff --git a/include/asm-x86/io_apic_32.h b/include/asm-x86/io_apic_32.h
deleted file mode 100644
-index ba594cc..0000000
---- a/include/asm-sh64/checksum.h
+index 3f08788..0000000
+--- a/include/asm-x86/io_apic_32.h
+++ /dev/null
-@@ -1,82 +0,0 @@
--#ifndef __ASM_SH64_CHECKSUM_H
--#define __ASM_SH64_CHECKSUM_H
+@@ -1,155 +0,0 @@
+-#ifndef __ASM_IO_APIC_H
+-#define __ASM_IO_APIC_H
+-
+-#include <asm/types.h>
+-#include <asm/mpspec.h>
+-#include <asm/apicdef.h>
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/checksum.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Intel IO-APIC support for SMP and UP systems.
- *
+- * Copyright (C) 1997, 1998, 1999, 2000 Ingo Molnar
+- */
+-
+-/*
+- * The structure of the IO-APIC:
- */
+-union IO_APIC_reg_00 {
+- u32 raw;
+- struct {
+- u32 __reserved_2 : 14,
+- LTS : 1,
+- delivery_type : 1,
+- __reserved_1 : 8,
+- ID : 8;
+- } __attribute__ ((packed)) bits;
+-};
+-
+-union IO_APIC_reg_01 {
+- u32 raw;
+- struct {
+- u32 version : 8,
+- __reserved_2 : 7,
+- PRQ : 1,
+- entries : 8,
+- __reserved_1 : 8;
+- } __attribute__ ((packed)) bits;
+-};
+-
+-union IO_APIC_reg_02 {
+- u32 raw;
+- struct {
+- u32 __reserved_2 : 24,
+- arbitration : 4,
+- __reserved_1 : 4;
+- } __attribute__ ((packed)) bits;
+-};
+-
+-union IO_APIC_reg_03 {
+- u32 raw;
+- struct {
+- u32 boot_DT : 1,
+- __reserved_1 : 31;
+- } __attribute__ ((packed)) bits;
+-};
+-
+-enum ioapic_irq_destination_types {
+- dest_Fixed = 0,
+- dest_LowestPrio = 1,
+- dest_SMI = 2,
+- dest__reserved_1 = 3,
+- dest_NMI = 4,
+- dest_INIT = 5,
+- dest__reserved_2 = 6,
+- dest_ExtINT = 7
+-};
+-
+-struct IO_APIC_route_entry {
+- __u32 vector : 8,
+- delivery_mode : 3, /* 000: FIXED
+- * 001: lowest prio
+- * 111: ExtINT
+- */
+- dest_mode : 1, /* 0: physical, 1: logical */
+- delivery_status : 1,
+- polarity : 1,
+- irr : 1,
+- trigger : 1, /* 0: edge, 1: level */
+- mask : 1, /* 0: enabled, 1: disabled */
+- __reserved_2 : 15;
+-
+- union { struct { __u32
+- __reserved_1 : 24,
+- physical_dest : 4,
+- __reserved_2 : 4;
+- } physical;
+-
+- struct { __u32
+- __reserved_1 : 24,
+- logical_dest : 8;
+- } logical;
+- } dest;
-
--#include <asm/registers.h>
+-} __attribute__ ((packed));
+-
+-#ifdef CONFIG_X86_IO_APIC
-
-/*
-- * computes the checksum of a memory block at buff, length len,
-- * and adds in "sum" (32-bit)
-- *
-- * returns a 32-bit number suitable for feeding into itself
-- * or csum_tcpudp_magic
-- *
-- * this function must be called with even lengths, except
-- * for the last fragment, which may be odd
-- *
-- * it's best to have buff aligned on a 32-bit boundary
+- * # of IO-APICs and # of IRQ routing registers
- */
--asmlinkage __wsum csum_partial(const void *buff, int len, __wsum sum);
+-extern int nr_ioapics;
+-extern int nr_ioapic_registers[MAX_IO_APICS];
-
-/*
-- * Note: when you get a NULL pointer exception here this means someone
-- * passed in an incorrect kernel address to one of these functions.
-- *
-- * If you use these functions directly please don't forget the
-- * access_ok().
+- * MP-BIOS irq configuration table structures:
- */
-
+-/* I/O APIC entries */
+-extern struct mpc_config_ioapic mp_ioapics[MAX_IO_APICS];
-
--__wsum csum_partial_copy_nocheck(const void *src, void *dst, int len,
-- __wsum sum);
+-/* # of MP IRQ source entries */
+-extern int mp_irq_entries;
-
--__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
-- int len, __wsum sum, int *err_ptr);
+-/* MP IRQ source entries */
+-extern struct mpc_config_intsrc mp_irqs[MAX_IRQ_SOURCES];
-
--static inline __sum16 csum_fold(__wsum csum)
--{
-- u32 sum = (__force u32)csum;
-- sum = (sum & 0xffff) + (sum >> 16);
-- sum = (sum & 0xffff) + (sum >> 16);
-- return (__force __sum16)~sum;
--}
+-/* non-0 if default (table-less) MP configuration */
+-extern int mpc_default_type;
-
--__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
+-/* Older SiS APIC requires we rewrite the index register */
+-extern int sis_apic_bug;
-
--__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
-- unsigned short len, unsigned short proto,
-- __wsum sum);
+-/* 1 if "noapic" boot option passed */
+-extern int skip_ioapic_setup;
-
--/*
-- * computes the checksum of the TCP/UDP pseudo-header
-- * returns a 16-bit checksum, already complemented
-- */
--static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-- unsigned short len,
-- unsigned short proto,
-- __wsum sum)
+-static inline void disable_ioapic_setup(void)
-{
-- return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
+- skip_ioapic_setup = 1;
-}
-
--/*
-- * this routine is used for miscellaneous IP-like checksums, mainly
-- * in icmp.c
-- */
--static inline __sum16 ip_compute_csum(const void *buff, int len)
+-static inline int ioapic_setup_disabled(void)
-{
-- return csum_fold(csum_partial(buff, len, 0));
+- return skip_ioapic_setup;
-}
-
--#endif /* __ASM_SH64_CHECKSUM_H */
--
-diff --git a/include/asm-sh64/cpumask.h b/include/asm-sh64/cpumask.h
-deleted file mode 100644
-index b7b105d..0000000
---- a/include/asm-sh64/cpumask.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_CPUMASK_H
--#define __ASM_SH64_CPUMASK_H
+-/*
+- * If we use the IO-APIC for IRQ routing, disable automatic
+- * assignment of PCI IRQ's.
+- */
+-#define io_apic_assign_pci_irqs (mp_irq_entries && !skip_ioapic_setup && io_apic_irqs)
-
--#include <asm-generic/cpumask.h>
+-#ifdef CONFIG_ACPI
+-extern int io_apic_get_unique_id (int ioapic, int apic_id);
+-extern int io_apic_get_version (int ioapic);
+-extern int io_apic_get_redir_entries (int ioapic);
+-extern int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int active_high_low);
+-extern int timer_uses_ioapic_pin_0;
+-#endif /* CONFIG_ACPI */
-
--#endif /* __ASM_SH64_CPUMASK_H */
-diff --git a/include/asm-sh64/cputime.h b/include/asm-sh64/cputime.h
-deleted file mode 100644
-index 0fd89da..0000000
---- a/include/asm-sh64/cputime.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __SH64_CPUTIME_H
--#define __SH64_CPUTIME_H
+-extern int (*ioapic_renumber_irq)(int ioapic, int irq);
-
--#include <asm-generic/cputime.h>
+-#else /* !CONFIG_X86_IO_APIC */
+-#define io_apic_assign_pci_irqs 0
+-#endif
-
--#endif /* __SH64_CPUTIME_H */
-diff --git a/include/asm-sh64/current.h b/include/asm-sh64/current.h
+-#endif
+diff --git a/include/asm-x86/io_apic_64.h b/include/asm-x86/io_apic_64.h
deleted file mode 100644
-index 2612243..0000000
---- a/include/asm-sh64/current.h
+index e2c1367..0000000
+--- a/include/asm-x86/io_apic_64.h
+++ /dev/null
-@@ -1,28 +0,0 @@
--#ifndef __ASM_SH64_CURRENT_H
--#define __ASM_SH64_CURRENT_H
+@@ -1,138 +0,0 @@
+-#ifndef __ASM_IO_APIC_H
+-#define __ASM_IO_APIC_H
+-
+-#include <asm/types.h>
+-#include <asm/mpspec.h>
+-#include <asm/apicdef.h>
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/current.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * Intel IO-APIC support for SMP and UP systems.
- *
+- * Copyright (C) 1997, 1998, 1999, 2000 Ingo Molnar
- */
-
--#include <linux/thread_info.h>
--
--struct task_struct;
--
--static __inline__ struct task_struct * get_current(void)
--{
-- return current_thread_info()->task;
--}
+-#define APIC_MISMATCH_DEBUG
-
--#define current get_current()
--
--#endif /* __ASM_SH64_CURRENT_H */
+-/*
+- * The structure of the IO-APIC:
+- */
+-union IO_APIC_reg_00 {
+- u32 raw;
+- struct {
+- u32 __reserved_2 : 14,
+- LTS : 1,
+- delivery_type : 1,
+- __reserved_1 : 8,
+- ID : 8;
+- } __attribute__ ((packed)) bits;
+-};
-
-diff --git a/include/asm-sh64/delay.h b/include/asm-sh64/delay.h
-deleted file mode 100644
-index 6ae3130..0000000
---- a/include/asm-sh64/delay.h
-+++ /dev/null
-@@ -1,11 +0,0 @@
--#ifndef __ASM_SH64_DELAY_H
--#define __ASM_SH64_DELAY_H
+-union IO_APIC_reg_01 {
+- u32 raw;
+- struct {
+- u32 version : 8,
+- __reserved_2 : 7,
+- PRQ : 1,
+- entries : 8,
+- __reserved_1 : 8;
+- } __attribute__ ((packed)) bits;
+-};
-
--extern void __delay(int loops);
--extern void __udelay(unsigned long long usecs, unsigned long lpj);
--extern void __ndelay(unsigned long long nsecs, unsigned long lpj);
--extern void udelay(unsigned long usecs);
--extern void ndelay(unsigned long nsecs);
+-union IO_APIC_reg_02 {
+- u32 raw;
+- struct {
+- u32 __reserved_2 : 24,
+- arbitration : 4,
+- __reserved_1 : 4;
+- } __attribute__ ((packed)) bits;
+-};
-
--#endif /* __ASM_SH64_DELAY_H */
+-union IO_APIC_reg_03 {
+- u32 raw;
+- struct {
+- u32 boot_DT : 1,
+- __reserved_1 : 31;
+- } __attribute__ ((packed)) bits;
+-};
-
-diff --git a/include/asm-sh64/device.h b/include/asm-sh64/device.h
-deleted file mode 100644
-index d8f9872..0000000
---- a/include/asm-sh64/device.h
-+++ /dev/null
-@@ -1,7 +0,0 @@
-/*
-- * Arch specific extensions to struct device
-- *
-- * This file is released under the GPLv2
+- * # of IO-APICs and # of IRQ routing registers
- */
--#include <asm-generic/device.h>
--
-diff --git a/include/asm-sh64/div64.h b/include/asm-sh64/div64.h
-deleted file mode 100644
-index f758695..0000000
---- a/include/asm-sh64/div64.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_DIV64_H
--#define __ASM_SH64_DIV64_H
+-extern int nr_ioapics;
+-extern int nr_ioapic_registers[MAX_IO_APICS];
-
--#include <asm-generic/div64.h>
+-enum ioapic_irq_destination_types {
+- dest_Fixed = 0,
+- dest_LowestPrio = 1,
+- dest_SMI = 2,
+- dest__reserved_1 = 3,
+- dest_NMI = 4,
+- dest_INIT = 5,
+- dest__reserved_2 = 6,
+- dest_ExtINT = 7
+-};
-
--#endif /* __ASM_SH64_DIV64_H */
-diff --git a/include/asm-sh64/dma-mapping.h b/include/asm-sh64/dma-mapping.h
-deleted file mode 100644
-index 18f8dd6..0000000
---- a/include/asm-sh64/dma-mapping.h
-+++ /dev/null
-@@ -1,194 +0,0 @@
--#ifndef __ASM_SH_DMA_MAPPING_H
--#define __ASM_SH_DMA_MAPPING_H
+-struct IO_APIC_route_entry {
+- __u32 vector : 8,
+- delivery_mode : 3, /* 000: FIXED
+- * 001: lowest prio
+- * 111: ExtINT
+- */
+- dest_mode : 1, /* 0: physical, 1: logical */
+- delivery_status : 1,
+- polarity : 1,
+- irr : 1,
+- trigger : 1, /* 0: edge, 1: level */
+- mask : 1, /* 0: enabled, 1: disabled */
+- __reserved_2 : 15;
-
--#include <linux/mm.h>
--#include <linux/scatterlist.h>
--#include <asm/io.h>
+- __u32 __reserved_3 : 24,
+- dest : 8;
+-} __attribute__ ((packed));
-
--struct pci_dev;
--extern void *consistent_alloc(struct pci_dev *hwdev, size_t size,
-- dma_addr_t *dma_handle);
--extern void consistent_free(struct pci_dev *hwdev, size_t size,
-- void *vaddr, dma_addr_t dma_handle);
+-/*
+- * MP-BIOS irq configuration table structures:
+- */
-
--#define dma_supported(dev, mask) (1)
+-/* I/O APIC entries */
+-extern struct mpc_config_ioapic mp_ioapics[MAX_IO_APICS];
-
--static inline int dma_set_mask(struct device *dev, u64 mask)
--{
-- if (!dev->dma_mask || !dma_supported(dev, mask))
-- return -EIO;
+-/* # of MP IRQ source entries */
+-extern int mp_irq_entries;
-
-- *dev->dma_mask = mask;
+-/* MP IRQ source entries */
+-extern struct mpc_config_intsrc mp_irqs[MAX_IRQ_SOURCES];
-
-- return 0;
--}
+-/* non-0 if default (table-less) MP configuration */
+-extern int mpc_default_type;
-
--static inline void *dma_alloc_coherent(struct device *dev, size_t size,
-- dma_addr_t *dma_handle, gfp_t flag)
--{
-- return consistent_alloc(NULL, size, dma_handle);
--}
+-/* 1 if "noapic" boot option passed */
+-extern int skip_ioapic_setup;
-
--static inline void dma_free_coherent(struct device *dev, size_t size,
-- void *vaddr, dma_addr_t dma_handle)
+-static inline void disable_ioapic_setup(void)
-{
-- consistent_free(NULL, size, vaddr, dma_handle);
+- skip_ioapic_setup = 1;
-}
-
--#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
--#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
--#define dma_is_consistent(d, h) (1)
--
--static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-- enum dma_data_direction dir)
--{
-- unsigned long start = (unsigned long) vaddr;
-- unsigned long s = start & L1_CACHE_ALIGN_MASK;
-- unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
-
-- for (; s <= e; s += L1_CACHE_BYTES)
-- asm volatile ("ocbp %0, 0" : : "r" (s));
--}
+-/*
+- * If we use the IO-APIC for IRQ routing, disable automatic
+- * assignment of PCI IRQ's.
+- */
+-#define io_apic_assign_pci_irqs (mp_irq_entries && !skip_ioapic_setup && io_apic_irqs)
-
--static inline dma_addr_t dma_map_single(struct device *dev,
-- void *ptr, size_t size,
-- enum dma_data_direction dir)
--{
--#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
-- if (dev->bus == &pci_bus_type)
-- return virt_to_phys(ptr);
+-#ifdef CONFIG_ACPI
+-extern int io_apic_get_version (int ioapic);
+-extern int io_apic_get_redir_entries (int ioapic);
+-extern int io_apic_set_pci_routing (int ioapic, int pin, int irq, int, int);
-#endif
-- dma_cache_sync(dev, ptr, size, dir);
-
-- return virt_to_phys(ptr);
--}
+-extern int sis_apic_bug; /* dummy */
-
--#define dma_unmap_single(dev, addr, size, dir) do { } while (0)
+-void enable_NMI_through_LVT0 (void * dummy);
-
--static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
-- int nents, enum dma_data_direction dir)
--{
-- int i;
+-extern spinlock_t i8259A_lock;
+-
+-extern int timer_over_8254;
-
-- for (i = 0; i < nents; i++) {
--#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
-- dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
-#endif
-- sg[i].dma_address = sg_phys(&sg[i]);
-- }
+diff --git a/include/asm-x86/irqflags.h b/include/asm-x86/irqflags.h
+index 1b695ff..92021c1 100644
+--- a/include/asm-x86/irqflags.h
++++ b/include/asm-x86/irqflags.h
+@@ -1,5 +1,245 @@
+-#ifdef CONFIG_X86_32
+-# include "irqflags_32.h"
++#ifndef _X86_IRQFLAGS_H_
++#define _X86_IRQFLAGS_H_
++
++#include <asm/processor-flags.h>
++
++#ifndef __ASSEMBLY__
++/*
++ * Interrupt control:
++ */
++
++static inline unsigned long native_save_fl(void)
++{
++ unsigned long flags;
++
++ __asm__ __volatile__(
++ "# __raw_save_flags\n\t"
++ "pushf ; pop %0"
++ : "=g" (flags)
++ : /* no input */
++ : "memory"
++ );
++
++ return flags;
++}
++
++static inline void native_restore_fl(unsigned long flags)
++{
++ __asm__ __volatile__(
++ "push %0 ; popf"
++ : /* no output */
++ :"g" (flags)
++ :"memory", "cc"
++ );
++}
++
++static inline void native_irq_disable(void)
++{
++ asm volatile("cli": : :"memory");
++}
++
++static inline void native_irq_enable(void)
++{
++ asm volatile("sti": : :"memory");
++}
++
++static inline void native_safe_halt(void)
++{
++ asm volatile("sti; hlt": : :"memory");
++}
++
++static inline void native_halt(void)
++{
++ asm volatile("hlt": : :"memory");
++}
++
++#endif
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#ifndef __ASSEMBLY__
++
++static inline unsigned long __raw_local_save_flags(void)
++{
++ return native_save_fl();
++}
++
++static inline void raw_local_irq_restore(unsigned long flags)
++{
++ native_restore_fl(flags);
++}
++
++static inline void raw_local_irq_disable(void)
++{
++ native_irq_disable();
++}
++
++static inline void raw_local_irq_enable(void)
++{
++ native_irq_enable();
++}
++
++/*
++ * Used in the idle loop; sti takes one instruction cycle
++ * to complete:
++ */
++static inline void raw_safe_halt(void)
++{
++ native_safe_halt();
++}
++
++/*
++ * Used when interrupts are already enabled or to
++ * shutdown the processor:
++ */
++static inline void halt(void)
++{
++ native_halt();
++}
++
++/*
++ * For spinlocks, etc:
++ */
++static inline unsigned long __raw_local_irq_save(void)
++{
++ unsigned long flags = __raw_local_save_flags();
++
++ raw_local_irq_disable();
++
++ return flags;
++}
++#else
++
++#define ENABLE_INTERRUPTS(x) sti
++#define DISABLE_INTERRUPTS(x) cli
++
++#ifdef CONFIG_X86_64
++#define INTERRUPT_RETURN iretq
++#define ENABLE_INTERRUPTS_SYSCALL_RET \
++ movq %gs:pda_oldrsp, %rsp; \
++ swapgs; \
++ sysretq;
++#else
++#define INTERRUPT_RETURN iret
++#define ENABLE_INTERRUPTS_SYSCALL_RET sti; sysexit
++#define GET_CR0_INTO_EAX movl %cr0, %eax
++#endif
++
++
++#endif /* __ASSEMBLY__ */
++#endif /* CONFIG_PARAVIRT */
++
++#ifndef __ASSEMBLY__
++#define raw_local_save_flags(flags) \
++ do { (flags) = __raw_local_save_flags(); } while (0)
++
++#define raw_local_irq_save(flags) \
++ do { (flags) = __raw_local_irq_save(); } while (0)
++
++static inline int raw_irqs_disabled_flags(unsigned long flags)
++{
++ return !(flags & X86_EFLAGS_IF);
++}
++
++static inline int raw_irqs_disabled(void)
++{
++ unsigned long flags = __raw_local_save_flags();
++
++ return raw_irqs_disabled_flags(flags);
++}
++
++/*
++ * makes the traced hardirq state match with the machine state
++ *
++ * should be a rarely used function, only in places where its
++ * otherwise impossible to know the irq state, like in traps.
++ */
++static inline void trace_hardirqs_fixup_flags(unsigned long flags)
++{
++ if (raw_irqs_disabled_flags(flags))
++ trace_hardirqs_off();
++ else
++ trace_hardirqs_on();
++}
++
++static inline void trace_hardirqs_fixup(void)
++{
++ unsigned long flags = __raw_local_save_flags();
++
++ trace_hardirqs_fixup_flags(flags);
++}
++
+ #else
+-# include "irqflags_64.h"
++
++#ifdef CONFIG_X86_64
++/*
++ * Currently paravirt can't handle swapgs nicely when we
++ * don't have a stack we can rely on (such as a user space
++ * stack). So we either find a way around these or just fault
++ * and emulate if a guest tries to call swapgs directly.
++ *
++ * Either way, this is a good way to document that we don't
++ * have a reliable stack. x86_64 only.
++ */
++#define SWAPGS_UNSAFE_STACK swapgs
++#define ARCH_TRACE_IRQS_ON call trace_hardirqs_on_thunk
++#define ARCH_TRACE_IRQS_OFF call trace_hardirqs_off_thunk
++#define ARCH_LOCKDEP_SYS_EXIT call lockdep_sys_exit_thunk
++#define ARCH_LOCKDEP_SYS_EXIT_IRQ \
++ TRACE_IRQS_ON; \
++ sti; \
++ SAVE_REST; \
++ LOCKDEP_SYS_EXIT; \
++ RESTORE_REST; \
++ cli; \
++ TRACE_IRQS_OFF;
++
++#else
++#define ARCH_TRACE_IRQS_ON \
++ pushl %eax; \
++ pushl %ecx; \
++ pushl %edx; \
++ call trace_hardirqs_on; \
++ popl %edx; \
++ popl %ecx; \
++ popl %eax;
++
++#define ARCH_TRACE_IRQS_OFF \
++ pushl %eax; \
++ pushl %ecx; \
++ pushl %edx; \
++ call trace_hardirqs_off; \
++ popl %edx; \
++ popl %ecx; \
++ popl %eax;
++
++#define ARCH_LOCKDEP_SYS_EXIT \
++ pushl %eax; \
++ pushl %ecx; \
++ pushl %edx; \
++ call lockdep_sys_exit; \
++ popl %edx; \
++ popl %ecx; \
++ popl %eax;
++
++#define ARCH_LOCKDEP_SYS_EXIT_IRQ
++#endif
++
++#ifdef CONFIG_TRACE_IRQFLAGS
++# define TRACE_IRQS_ON ARCH_TRACE_IRQS_ON
++# define TRACE_IRQS_OFF ARCH_TRACE_IRQS_OFF
++#else
++# define TRACE_IRQS_ON
++# define TRACE_IRQS_OFF
++#endif
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define LOCKDEP_SYS_EXIT ARCH_LOCKDEP_SYS_EXIT
++# define LOCKDEP_SYS_EXIT_IRQ ARCH_LOCKDEP_SYS_EXIT_IRQ
++# else
++# define LOCKDEP_SYS_EXIT
++# define LOCKDEP_SYS_EXIT_IRQ
++# endif
++
++#endif /* __ASSEMBLY__ */
+ #endif
+diff --git a/include/asm-x86/irqflags_32.h b/include/asm-x86/irqflags_32.h
+deleted file mode 100644
+index 4c77200..0000000
+--- a/include/asm-x86/irqflags_32.h
++++ /dev/null
+@@ -1,197 +0,0 @@
+-/*
+- * include/asm-i386/irqflags.h
+- *
+- * IRQ flags handling
+- *
+- * This file gets included from lowlevel asm headers too, to provide
+- * wrapped versions of the local_irq_*() APIs, based on the
+- * raw_local_irq_*() functions from the lowlevel headers.
+- */
+-#ifndef _ASM_IRQFLAGS_H
+-#define _ASM_IRQFLAGS_H
+-#include <asm/processor-flags.h>
-
-- return nents;
+-#ifndef __ASSEMBLY__
+-static inline unsigned long native_save_fl(void)
+-{
+- unsigned long f;
+- asm volatile("pushfl ; popl %0":"=g" (f): /* no input */);
+- return f;
-}
-
--#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
--
--static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
-- unsigned long offset, size_t size,
-- enum dma_data_direction dir)
+-static inline void native_restore_fl(unsigned long f)
-{
-- return dma_map_single(dev, page_address(page) + offset, size, dir);
+- asm volatile("pushl %0 ; popfl": /* no output */
+- :"g" (f)
+- :"memory", "cc");
-}
-
--static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
-- size_t size, enum dma_data_direction dir)
+-static inline void native_irq_disable(void)
-{
-- dma_unmap_single(dev, dma_address, size, dir);
+- asm volatile("cli": : :"memory");
-}
-
--static inline void dma_sync_single(struct device *dev, dma_addr_t dma_handle,
-- size_t size, enum dma_data_direction dir)
+-static inline void native_irq_enable(void)
-{
--#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
-- if (dev->bus == &pci_bus_type)
-- return;
--#endif
-- dma_cache_sync(dev, phys_to_virt(dma_handle), size, dir);
+- asm volatile("sti": : :"memory");
-}
-
--static inline void dma_sync_single_range(struct device *dev,
-- dma_addr_t dma_handle,
-- unsigned long offset, size_t size,
-- enum dma_data_direction dir)
+-static inline void native_safe_halt(void)
-{
--#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
-- if (dev->bus == &pci_bus_type)
-- return;
--#endif
-- dma_cache_sync(dev, phys_to_virt(dma_handle) + offset, size, dir);
+- asm volatile("sti; hlt": : :"memory");
-}
-
--static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg,
-- int nelems, enum dma_data_direction dir)
+-static inline void native_halt(void)
-{
-- int i;
--
-- for (i = 0; i < nelems; i++) {
--#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
-- dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
--#endif
-- sg[i].dma_address = sg_phys(&sg[i]);
-- }
+- asm volatile("hlt": : :"memory");
-}
+-#endif /* __ASSEMBLY__ */
-
--static inline void dma_sync_single_for_cpu(struct device *dev,
-- dma_addr_t dma_handle, size_t size,
-- enum dma_data_direction dir)
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#ifndef __ASSEMBLY__
+-
+-static inline unsigned long __raw_local_save_flags(void)
-{
-- dma_sync_single(dev, dma_handle, size, dir);
+- return native_save_fl();
-}
-
--static inline void dma_sync_single_for_device(struct device *dev,
-- dma_addr_t dma_handle, size_t size,
-- enum dma_data_direction dir)
+-static inline void raw_local_irq_restore(unsigned long flags)
-{
-- dma_sync_single(dev, dma_handle, size, dir);
+- native_restore_fl(flags);
-}
-
--static inline void dma_sync_single_range_for_cpu(struct device *dev,
-- dma_addr_t dma_handle,
-- unsigned long offset,
-- size_t size,
-- enum dma_data_direction direction)
+-static inline void raw_local_irq_disable(void)
-{
-- dma_sync_single_for_cpu(dev, dma_handle+offset, size, direction);
+- native_irq_disable();
-}
-
--static inline void dma_sync_single_range_for_device(struct device *dev,
-- dma_addr_t dma_handle,
-- unsigned long offset,
-- size_t size,
-- enum dma_data_direction direction)
+-static inline void raw_local_irq_enable(void)
-{
-- dma_sync_single_for_device(dev, dma_handle+offset, size, direction);
+- native_irq_enable();
-}
-
--static inline void dma_sync_sg_for_cpu(struct device *dev,
-- struct scatterlist *sg, int nelems,
-- enum dma_data_direction dir)
+-/*
+- * Used in the idle loop; sti takes one instruction cycle
+- * to complete:
+- */
+-static inline void raw_safe_halt(void)
-{
-- dma_sync_sg(dev, sg, nelems, dir);
+- native_safe_halt();
-}
-
--static inline void dma_sync_sg_for_device(struct device *dev,
-- struct scatterlist *sg, int nelems,
-- enum dma_data_direction dir)
+-/*
+- * Used when interrupts are already enabled or to
+- * shutdown the processor:
+- */
+-static inline void halt(void)
-{
-- dma_sync_sg(dev, sg, nelems, dir);
+- native_halt();
-}
-
--static inline int dma_get_cache_alignment(void)
+-/*
+- * For spinlocks, etc:
+- */
+-static inline unsigned long __raw_local_irq_save(void)
-{
-- /*
-- * Each processor family will define its own L1_CACHE_SHIFT,
-- * L1_CACHE_BYTES wraps to this, so this is always safe.
-- */
-- return L1_CACHE_BYTES;
+- unsigned long flags = __raw_local_save_flags();
+-
+- raw_local_irq_disable();
+-
+- return flags;
-}
-
--static inline int dma_mapping_error(dma_addr_t dma_addr)
+-#else
+-#define DISABLE_INTERRUPTS(clobbers) cli
+-#define ENABLE_INTERRUPTS(clobbers) sti
+-#define ENABLE_INTERRUPTS_SYSEXIT sti; sysexit
+-#define INTERRUPT_RETURN iret
+-#define GET_CR0_INTO_EAX movl %cr0, %eax
+-#endif /* __ASSEMBLY__ */
+-#endif /* CONFIG_PARAVIRT */
+-
+-#ifndef __ASSEMBLY__
+-#define raw_local_save_flags(flags) \
+- do { (flags) = __raw_local_save_flags(); } while (0)
+-
+-#define raw_local_irq_save(flags) \
+- do { (flags) = __raw_local_irq_save(); } while (0)
+-
+-static inline int raw_irqs_disabled_flags(unsigned long flags)
-{
-- return dma_addr == 0;
+- return !(flags & X86_EFLAGS_IF);
-}
-
--#endif /* __ASM_SH_DMA_MAPPING_H */
+-static inline int raw_irqs_disabled(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
-diff --git a/include/asm-sh64/dma.h b/include/asm-sh64/dma.h
-deleted file mode 100644
-index e701f39..0000000
---- a/include/asm-sh64/dma.h
-+++ /dev/null
-@@ -1,41 +0,0 @@
--#ifndef __ASM_SH64_DMA_H
--#define __ASM_SH64_DMA_H
+- return raw_irqs_disabled_flags(flags);
+-}
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/dma.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * makes the traced hardirq state match with the machine state
- *
+- * should be a rarely used function, only in places where its
+- * otherwise impossible to know the irq state, like in traps.
- */
+-static inline void trace_hardirqs_fixup_flags(unsigned long flags)
+-{
+- if (raw_irqs_disabled_flags(flags))
+- trace_hardirqs_off();
+- else
+- trace_hardirqs_on();
+-}
-
--#include <linux/mm.h>
--#include <asm/io.h>
--#include <asm/pgtable.h>
+-static inline void trace_hardirqs_fixup(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
--#define MAX_DMA_CHANNELS 4
+- trace_hardirqs_fixup_flags(flags);
+-}
+-#endif /* __ASSEMBLY__ */
-
-/*
-- * SH5 can DMA in any memory area.
-- *
-- * The static definition is dodgy because it should limit
-- * the highest DMA-able address based on the actual
-- * Physical memory available. This is actually performed
-- * at run time in defining the memory allowed to DMA_ZONE.
+- * Do the CPU's IRQ-state tracing from assembly code. We call a
+- * C function, so save all the C-clobbered registers:
- */
--#define MAX_DMA_ADDRESS ~(NPHYS_MASK)
+-#ifdef CONFIG_TRACE_IRQFLAGS
-
--#define DMA_MODE_READ 0
--#define DMA_MODE_WRITE 1
+-# define TRACE_IRQS_ON \
+- pushl %eax; \
+- pushl %ecx; \
+- pushl %edx; \
+- call trace_hardirqs_on; \
+- popl %edx; \
+- popl %ecx; \
+- popl %eax;
+-
+-# define TRACE_IRQS_OFF \
+- pushl %eax; \
+- pushl %ecx; \
+- pushl %edx; \
+- call trace_hardirqs_off; \
+- popl %edx; \
+- popl %ecx; \
+- popl %eax;
-
--#ifdef CONFIG_PCI
--extern int isa_dma_bridge_buggy;
-#else
--#define isa_dma_bridge_buggy (0)
+-# define TRACE_IRQS_ON
+-# define TRACE_IRQS_OFF
-#endif
-
--#endif /* __ASM_SH64_DMA_H */
-diff --git a/include/asm-sh64/elf.h b/include/asm-sh64/elf.h
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LOCKDEP_SYS_EXIT \
+- pushl %eax; \
+- pushl %ecx; \
+- pushl %edx; \
+- call lockdep_sys_exit; \
+- popl %edx; \
+- popl %ecx; \
+- popl %eax;
+-#else
+-# define LOCKDEP_SYS_EXIT
+-#endif
+-
+-#endif
+diff --git a/include/asm-x86/irqflags_64.h b/include/asm-x86/irqflags_64.h
deleted file mode 100644
-index f994286..0000000
---- a/include/asm-sh64/elf.h
+index bb9163b..0000000
+--- a/include/asm-x86/irqflags_64.h
+++ /dev/null
-@@ -1,107 +0,0 @@
--#ifndef __ASM_SH64_ELF_H
--#define __ASM_SH64_ELF_H
--
+@@ -1,176 +0,0 @@
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/elf.h
+- * include/asm-x86_64/irqflags.h
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * IRQ flags handling
- *
+- * This file gets included from lowlevel asm headers too, to provide
+- * wrapped versions of the local_irq_*() APIs, based on the
+- * raw_local_irq_*() functions from the lowlevel headers.
- */
+-#ifndef _ASM_IRQFLAGS_H
+-#define _ASM_IRQFLAGS_H
+-#include <asm/processor-flags.h>
-
+-#ifndef __ASSEMBLY__
-/*
-- * ELF register definitions..
+- * Interrupt control:
- */
-
--#include <asm/ptrace.h>
--#include <asm/user.h>
--#include <asm/byteorder.h>
+-static inline unsigned long __raw_local_save_flags(void)
+-{
+- unsigned long flags;
-
--typedef unsigned long elf_greg_t;
+- __asm__ __volatile__(
+- "# __raw_save_flags\n\t"
+- "pushfq ; popq %q0"
+- : "=g" (flags)
+- : /* no input */
+- : "memory"
+- );
-
--#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
--typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+- return flags;
+-}
-
--typedef struct user_fpu_struct elf_fpregset_t;
+-#define raw_local_save_flags(flags) \
+- do { (flags) = __raw_local_save_flags(); } while (0)
-
--/*
-- * This is used to ensure we don't load something for the wrong architecture.
-- */
--#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+-static inline void raw_local_irq_restore(unsigned long flags)
+-{
+- __asm__ __volatile__(
+- "pushq %0 ; popfq"
+- : /* no output */
+- :"g" (flags)
+- :"memory", "cc"
+- );
+-}
+-
+-#ifdef CONFIG_X86_VSMP
-
-/*
-- * These are used to set parameters in the core dumps.
+- * Interrupt control for the VSMP architecture:
- */
--#define ELF_CLASS ELFCLASS32
--#ifdef __LITTLE_ENDIAN__
--#define ELF_DATA ELFDATA2LSB
--#else
--#define ELF_DATA ELFDATA2MSB
--#endif
--#define ELF_ARCH EM_SH
--
--#define USE_ELF_CORE_DUMP
--#define ELF_EXEC_PAGESIZE 4096
--
--/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
-- use of this is to invoke "./ld.so someprog" to test out a new version of
-- the loader. We need to make sure that it is out of the way of the program
-- that it will "exec", and that there is sufficient room for the brk. */
-
--#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+-static inline void raw_local_irq_disable(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
--#define R_SH_DIR32 1
--#define R_SH_REL32 2
--#define R_SH_IMM_LOW16 246
--#define R_SH_IMM_LOW16_PCREL 247
--#define R_SH_IMM_MEDLOW16 248
--#define R_SH_IMM_MEDLOW16_PCREL 249
+- raw_local_irq_restore((flags & ~X86_EFLAGS_IF) | X86_EFLAGS_AC);
+-}
-
--#define ELF_CORE_COPY_REGS(_dest,_regs) \
-- memcpy((char *) &_dest, (char *) _regs, \
-- sizeof(struct pt_regs));
+-static inline void raw_local_irq_enable(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
--/* This yields a mask that user programs can use to figure out what
-- instruction set this CPU supports. This could be done in user space,
-- but it's not easy, and we've already done it here. */
+- raw_local_irq_restore((flags | X86_EFLAGS_IF) & (~X86_EFLAGS_AC));
+-}
-
--#define ELF_HWCAP (0)
+-static inline int raw_irqs_disabled_flags(unsigned long flags)
+-{
+- return !(flags & X86_EFLAGS_IF) || (flags & X86_EFLAGS_AC);
+-}
-
--/* This yields a string that ld.so will use to load implementation
-- specific libraries for optimization. This is more specific in
-- intent than poking at uname or /proc/cpuinfo.
+-#else /* CONFIG_X86_VSMP */
-
-- For the moment, we have only optimizations for the Intel generations,
-- but that could change... */
+-static inline void raw_local_irq_disable(void)
+-{
+- __asm__ __volatile__("cli" : : : "memory");
+-}
-
--#define ELF_PLATFORM (NULL)
+-static inline void raw_local_irq_enable(void)
+-{
+- __asm__ __volatile__("sti" : : : "memory");
+-}
-
--#define ELF_PLAT_INIT(_r, load_addr) \
-- do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
-- _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
-- _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
-- _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; _r->regs[15]=0; \
-- _r->regs[16]=0; _r->regs[17]=0; _r->regs[18]=0; _r->regs[19]=0; \
-- _r->regs[20]=0; _r->regs[21]=0; _r->regs[22]=0; _r->regs[23]=0; \
-- _r->regs[24]=0; _r->regs[25]=0; _r->regs[26]=0; _r->regs[27]=0; \
-- _r->regs[28]=0; _r->regs[29]=0; _r->regs[30]=0; _r->regs[31]=0; \
-- _r->regs[32]=0; _r->regs[33]=0; _r->regs[34]=0; _r->regs[35]=0; \
-- _r->regs[36]=0; _r->regs[37]=0; _r->regs[38]=0; _r->regs[39]=0; \
-- _r->regs[40]=0; _r->regs[41]=0; _r->regs[42]=0; _r->regs[43]=0; \
-- _r->regs[44]=0; _r->regs[45]=0; _r->regs[46]=0; _r->regs[47]=0; \
-- _r->regs[48]=0; _r->regs[49]=0; _r->regs[50]=0; _r->regs[51]=0; \
-- _r->regs[52]=0; _r->regs[53]=0; _r->regs[54]=0; _r->regs[55]=0; \
-- _r->regs[56]=0; _r->regs[57]=0; _r->regs[58]=0; _r->regs[59]=0; \
-- _r->regs[60]=0; _r->regs[61]=0; _r->regs[62]=0; \
-- _r->tregs[0]=0; _r->tregs[1]=0; _r->tregs[2]=0; _r->tregs[3]=0; \
-- _r->tregs[4]=0; _r->tregs[5]=0; _r->tregs[6]=0; _r->tregs[7]=0; \
-- _r->sr = SR_FD | SR_MMU; } while (0)
+-static inline int raw_irqs_disabled_flags(unsigned long flags)
+-{
+- return !(flags & X86_EFLAGS_IF);
+-}
-
--#ifdef __KERNEL__
--#define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX_32BIT)
-#endif
-
--#endif /* __ASM_SH64_ELF_H */
-diff --git a/include/asm-sh64/emergency-restart.h b/include/asm-sh64/emergency-restart.h
-deleted file mode 100644
-index 108d8c4..0000000
---- a/include/asm-sh64/emergency-restart.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef _ASM_EMERGENCY_RESTART_H
--#define _ASM_EMERGENCY_RESTART_H
+-/*
+- * For spinlocks, etc.:
+- */
-
--#include <asm-generic/emergency-restart.h>
+-static inline unsigned long __raw_local_irq_save(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
--#endif /* _ASM_EMERGENCY_RESTART_H */
-diff --git a/include/asm-sh64/errno.h b/include/asm-sh64/errno.h
-deleted file mode 100644
-index 57b46d4..0000000
---- a/include/asm-sh64/errno.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_ERRNO_H
--#define __ASM_SH64_ERRNO_H
+- raw_local_irq_disable();
-
--#include <asm-generic/errno.h>
+- return flags;
+-}
-
--#endif /* __ASM_SH64_ERRNO_H */
-diff --git a/include/asm-sh64/fb.h b/include/asm-sh64/fb.h
-deleted file mode 100644
-index d92e99c..0000000
---- a/include/asm-sh64/fb.h
-+++ /dev/null
-@@ -1,19 +0,0 @@
--#ifndef _ASM_FB_H_
--#define _ASM_FB_H_
+-#define raw_local_irq_save(flags) \
+- do { (flags) = __raw_local_irq_save(); } while (0)
-
--#include <linux/fb.h>
--#include <linux/fs.h>
--#include <asm/page.h>
+-static inline int raw_irqs_disabled(void)
+-{
+- unsigned long flags = __raw_local_save_flags();
-
--static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
-- unsigned long off)
+- return raw_irqs_disabled_flags(flags);
+-}
+-
+-/*
+- * makes the traced hardirq state match with the machine state
+- *
+- * should be a rarely used function, only in places where its
+- * otherwise impossible to know the irq state, like in traps.
+- */
+-static inline void trace_hardirqs_fixup_flags(unsigned long flags)
-{
-- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+- if (raw_irqs_disabled_flags(flags))
+- trace_hardirqs_off();
+- else
+- trace_hardirqs_on();
-}
-
--static inline int fb_is_primary_device(struct fb_info *info)
+-static inline void trace_hardirqs_fixup(void)
-{
-- return 0;
+- unsigned long flags = __raw_local_save_flags();
+-
+- trace_hardirqs_fixup_flags(flags);
+-}
+-/*
+- * Used in the idle loop; sti takes one instruction cycle
+- * to complete:
+- */
+-static inline void raw_safe_halt(void)
+-{
+- __asm__ __volatile__("sti; hlt" : : : "memory");
-}
-
--#endif /* _ASM_FB_H_ */
-diff --git a/include/asm-sh64/fcntl.h b/include/asm-sh64/fcntl.h
-deleted file mode 100644
-index 744dd79..0000000
---- a/include/asm-sh64/fcntl.h
-+++ /dev/null
-@@ -1 +0,0 @@
--#include <asm-sh/fcntl.h>
-diff --git a/include/asm-sh64/futex.h b/include/asm-sh64/futex.h
-deleted file mode 100644
-index 6a332a9..0000000
---- a/include/asm-sh64/futex.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef _ASM_FUTEX_H
--#define _ASM_FUTEX_H
+-/*
+- * Used when interrupts are already enabled or to
+- * shutdown the processor:
+- */
+-static inline void halt(void)
+-{
+- __asm__ __volatile__("hlt": : :"memory");
+-}
-
--#include <asm-generic/futex.h>
+-#else /* __ASSEMBLY__: */
+-# ifdef CONFIG_TRACE_IRQFLAGS
+-# define TRACE_IRQS_ON call trace_hardirqs_on_thunk
+-# define TRACE_IRQS_OFF call trace_hardirqs_off_thunk
+-# else
+-# define TRACE_IRQS_ON
+-# define TRACE_IRQS_OFF
+-# endif
+-# ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LOCKDEP_SYS_EXIT call lockdep_sys_exit_thunk
+-# define LOCKDEP_SYS_EXIT_IRQ \
+- TRACE_IRQS_ON; \
+- sti; \
+- SAVE_REST; \
+- LOCKDEP_SYS_EXIT; \
+- RESTORE_REST; \
+- cli; \
+- TRACE_IRQS_OFF;
+-# else
+-# define LOCKDEP_SYS_EXIT
+-# define LOCKDEP_SYS_EXIT_IRQ
+-# endif
+-#endif
-
-#endif
-diff --git a/include/asm-sh64/gpio.h b/include/asm-sh64/gpio.h
+diff --git a/include/asm-x86/k8.h b/include/asm-x86/k8.h
+index 699dd69..452e2b6 100644
+--- a/include/asm-x86/k8.h
++++ b/include/asm-x86/k8.h
+@@ -10,5 +10,6 @@ extern struct pci_dev **k8_northbridges;
+ extern int num_k8_northbridges;
+ extern int cache_k8_northbridges(void);
+ extern void k8_flush_garts(void);
++extern int k8_scan_nodes(unsigned long start, unsigned long end);
+
+ #endif
+diff --git a/include/asm-x86/kdebug.h b/include/asm-x86/kdebug.h
+index e2f9b62..dd442a1 100644
+--- a/include/asm-x86/kdebug.h
++++ b/include/asm-x86/kdebug.h
+@@ -22,12 +22,17 @@ enum die_val {
+ DIE_PAGE_FAULT,
+ };
+
+-extern void printk_address(unsigned long address);
++extern void printk_address(unsigned long address, int reliable);
+ extern void die(const char *,struct pt_regs *,long);
+-extern void __die(const char *,struct pt_regs *,long);
++extern int __must_check __die(const char *, struct pt_regs *, long);
+ extern void show_registers(struct pt_regs *regs);
++extern void __show_registers(struct pt_regs *, int all);
++extern void show_trace(struct task_struct *t, struct pt_regs *regs,
++ unsigned long *sp, unsigned long bp);
++extern void __show_regs(struct pt_regs *regs);
++extern void show_regs(struct pt_regs *regs);
+ extern void dump_pagetable(unsigned long);
+ extern unsigned long oops_begin(void);
+-extern void oops_end(unsigned long);
++extern void oops_end(unsigned long, struct pt_regs *, int signr);
+
+ #endif
+diff --git a/include/asm-x86/kexec.h b/include/asm-x86/kexec.h
+index 718ddbf..c90d3c7 100644
+--- a/include/asm-x86/kexec.h
++++ b/include/asm-x86/kexec.h
+@@ -1,5 +1,170 @@
++#ifndef _KEXEC_H
++#define _KEXEC_H
++
+ #ifdef CONFIG_X86_32
+-# include "kexec_32.h"
++# define PA_CONTROL_PAGE 0
++# define VA_CONTROL_PAGE 1
++# define PA_PGD 2
++# define VA_PGD 3
++# define PA_PTE_0 4
++# define VA_PTE_0 5
++# define PA_PTE_1 6
++# define VA_PTE_1 7
++# ifdef CONFIG_X86_PAE
++# define PA_PMD_0 8
++# define VA_PMD_0 9
++# define PA_PMD_1 10
++# define VA_PMD_1 11
++# define PAGES_NR 12
++# else
++# define PAGES_NR 8
++# endif
+ #else
+-# include "kexec_64.h"
++# define PA_CONTROL_PAGE 0
++# define VA_CONTROL_PAGE 1
++# define PA_PGD 2
++# define VA_PGD 3
++# define PA_PUD_0 4
++# define VA_PUD_0 5
++# define PA_PMD_0 6
++# define VA_PMD_0 7
++# define PA_PTE_0 8
++# define VA_PTE_0 9
++# define PA_PUD_1 10
++# define VA_PUD_1 11
++# define PA_PMD_1 12
++# define VA_PMD_1 13
++# define PA_PTE_1 14
++# define VA_PTE_1 15
++# define PA_TABLE_PAGE 16
++# define PAGES_NR 17
+ #endif
++
++#ifndef __ASSEMBLY__
++
++#include <linux/string.h>
++
++#include <asm/page.h>
++#include <asm/ptrace.h>
++
++/*
++ * KEXEC_SOURCE_MEMORY_LIMIT maximum page get_free_page can return.
++ * I.e. Maximum page that is mapped directly into kernel memory,
++ * and kmap is not required.
++ *
++ * So far x86_64 is limited to 40 physical address bits.
++ */
++#ifdef CONFIG_X86_32
++/* Maximum physical address we can use pages from */
++# define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
++/* Maximum address we can reach in physical address mode */
++# define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
++/* Maximum address we can use for the control code buffer */
++# define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
++
++# define KEXEC_CONTROL_CODE_SIZE 4096
++
++/* The native architecture */
++# define KEXEC_ARCH KEXEC_ARCH_386
++
++/* We can also handle crash dumps from 64 bit kernel. */
++# define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
++#else
++/* Maximum physical address we can use pages from */
++# define KEXEC_SOURCE_MEMORY_LIMIT (0xFFFFFFFFFFUL)
++/* Maximum address we can reach in physical address mode */
++# define KEXEC_DESTINATION_MEMORY_LIMIT (0xFFFFFFFFFFUL)
++/* Maximum address we can use for the control pages */
++# define KEXEC_CONTROL_MEMORY_LIMIT (0xFFFFFFFFFFUL)
++
++/* Allocate one page for the pdp and the second for the code */
++# define KEXEC_CONTROL_CODE_SIZE (4096UL + 4096UL)
++
++/* The native architecture */
++# define KEXEC_ARCH KEXEC_ARCH_X86_64
++#endif
++
++/*
++ * CPU does not save ss and sp on stack if execution is already
++ * running in kernel mode at the time of NMI occurrence. This code
++ * fixes it.
++ */
++static inline void crash_fixup_ss_esp(struct pt_regs *newregs,
++ struct pt_regs *oldregs)
++{
++#ifdef CONFIG_X86_32
++ newregs->sp = (unsigned long)&(oldregs->sp);
++ __asm__ __volatile__(
++ "xorl %%eax, %%eax\n\t"
++ "movw %%ss, %%ax\n\t"
++ :"=a"(newregs->ss));
++#endif
++}
++
++/*
++ * This function is responsible for capturing register states if coming
++ * via panic otherwise just fix up the ss and sp if coming via kernel
++ * mode exception.
++ */
++static inline void crash_setup_regs(struct pt_regs *newregs,
++ struct pt_regs *oldregs)
++{
++ if (oldregs) {
++ memcpy(newregs, oldregs, sizeof(*newregs));
++ crash_fixup_ss_esp(newregs, oldregs);
++ } else {
++#ifdef CONFIG_X86_32
++ __asm__ __volatile__("movl %%ebx,%0" : "=m"(newregs->bx));
++ __asm__ __volatile__("movl %%ecx,%0" : "=m"(newregs->cx));
++ __asm__ __volatile__("movl %%edx,%0" : "=m"(newregs->dx));
++ __asm__ __volatile__("movl %%esi,%0" : "=m"(newregs->si));
++ __asm__ __volatile__("movl %%edi,%0" : "=m"(newregs->di));
++ __asm__ __volatile__("movl %%ebp,%0" : "=m"(newregs->bp));
++ __asm__ __volatile__("movl %%eax,%0" : "=m"(newregs->ax));
++ __asm__ __volatile__("movl %%esp,%0" : "=m"(newregs->sp));
++ __asm__ __volatile__("movl %%ss, %%eax;" :"=a"(newregs->ss));
++ __asm__ __volatile__("movl %%cs, %%eax;" :"=a"(newregs->cs));
++ __asm__ __volatile__("movl %%ds, %%eax;" :"=a"(newregs->ds));
++ __asm__ __volatile__("movl %%es, %%eax;" :"=a"(newregs->es));
++ __asm__ __volatile__("pushfl; popl %0" :"=m"(newregs->flags));
++#else
++ __asm__ __volatile__("movq %%rbx,%0" : "=m"(newregs->bx));
++ __asm__ __volatile__("movq %%rcx,%0" : "=m"(newregs->cx));
++ __asm__ __volatile__("movq %%rdx,%0" : "=m"(newregs->dx));
++ __asm__ __volatile__("movq %%rsi,%0" : "=m"(newregs->si));
++ __asm__ __volatile__("movq %%rdi,%0" : "=m"(newregs->di));
++ __asm__ __volatile__("movq %%rbp,%0" : "=m"(newregs->bp));
++ __asm__ __volatile__("movq %%rax,%0" : "=m"(newregs->ax));
++ __asm__ __volatile__("movq %%rsp,%0" : "=m"(newregs->sp));
++ __asm__ __volatile__("movq %%r8,%0" : "=m"(newregs->r8));
++ __asm__ __volatile__("movq %%r9,%0" : "=m"(newregs->r9));
++ __asm__ __volatile__("movq %%r10,%0" : "=m"(newregs->r10));
++ __asm__ __volatile__("movq %%r11,%0" : "=m"(newregs->r11));
++ __asm__ __volatile__("movq %%r12,%0" : "=m"(newregs->r12));
++ __asm__ __volatile__("movq %%r13,%0" : "=m"(newregs->r13));
++ __asm__ __volatile__("movq %%r14,%0" : "=m"(newregs->r14));
++ __asm__ __volatile__("movq %%r15,%0" : "=m"(newregs->r15));
++ __asm__ __volatile__("movl %%ss, %%eax;" :"=a"(newregs->ss));
++ __asm__ __volatile__("movl %%cs, %%eax;" :"=a"(newregs->cs));
++ __asm__ __volatile__("pushfq; popq %0" :"=m"(newregs->flags));
++#endif
++ newregs->ip = (unsigned long)current_text_addr();
++ }
++}
++
++#ifdef CONFIG_X86_32
++asmlinkage NORET_TYPE void
++relocate_kernel(unsigned long indirection_page,
++ unsigned long control_page,
++ unsigned long start_address,
++ unsigned int has_pae) ATTRIB_NORET;
++#else
++NORET_TYPE void
++relocate_kernel(unsigned long indirection_page,
++ unsigned long page_list,
++ unsigned long start_address) ATTRIB_NORET;
++#endif
++
++#endif /* __ASSEMBLY__ */
++
++#endif /* _KEXEC_H */
+diff --git a/include/asm-x86/kexec_32.h b/include/asm-x86/kexec_32.h
deleted file mode 100644
-index 6bc5a13..0000000
---- a/include/asm-sh64/gpio.h
+index 4b9dc9e..0000000
+--- a/include/asm-x86/kexec_32.h
+++ /dev/null
-@@ -1,8 +0,0 @@
--#ifndef __ASM_SH64_GPIO_H
--#define __ASM_SH64_GPIO_H
+@@ -1,99 +0,0 @@
+-#ifndef _I386_KEXEC_H
+-#define _I386_KEXEC_H
+-
+-#define PA_CONTROL_PAGE 0
+-#define VA_CONTROL_PAGE 1
+-#define PA_PGD 2
+-#define VA_PGD 3
+-#define PA_PTE_0 4
+-#define VA_PTE_0 5
+-#define PA_PTE_1 6
+-#define VA_PTE_1 7
+-#ifdef CONFIG_X86_PAE
+-#define PA_PMD_0 8
+-#define VA_PMD_0 9
+-#define PA_PMD_1 10
+-#define VA_PMD_1 11
+-#define PAGES_NR 12
+-#else
+-#define PAGES_NR 8
+-#endif
+-
+-#ifndef __ASSEMBLY__
+-
+-#include <asm/ptrace.h>
+-#include <asm/string.h>
-
-/*
-- * This is just a stub, so that every arch using sh-sci has a gpio.h
+- * KEXEC_SOURCE_MEMORY_LIMIT maximum page get_free_page can return.
+- * I.e. Maximum page that is mapped directly into kernel memory,
+- * and kmap is not required.
- */
-
--#endif /* __ASM_SH64_GPIO_H */
-diff --git a/include/asm-sh64/hardirq.h b/include/asm-sh64/hardirq.h
-deleted file mode 100644
-index 555fd7a..0000000
---- a/include/asm-sh64/hardirq.h
-+++ /dev/null
-@@ -1,18 +0,0 @@
--#ifndef __ASM_SH64_HARDIRQ_H
--#define __ASM_SH64_HARDIRQ_H
+-/* Maximum physical address we can use pages from */
+-#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
+-/* Maximum address we can reach in physical address mode */
+-#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
+-/* Maximum address we can use for the control code buffer */
+-#define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
-
--#include <linux/threads.h>
--#include <linux/irq.h>
+-#define KEXEC_CONTROL_CODE_SIZE 4096
-
--/* entry.S is sensitive to the offsets of these fields */
--typedef struct {
-- unsigned int __softirq_pending;
--} ____cacheline_aligned irq_cpustat_t;
+-/* The native architecture */
+-#define KEXEC_ARCH KEXEC_ARCH_386
-
--#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
+-/* We can also handle crash dumps from 64 bit kernel. */
+-#define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
-
--/* arch/sh64/kernel/irq.c */
--extern void ack_bad_irq(unsigned int irq);
+-/* CPU does not save ss and esp on stack if execution is already
+- * running in kernel mode at the time of NMI occurrence. This code
+- * fixes it.
+- */
+-static inline void crash_fixup_ss_esp(struct pt_regs *newregs,
+- struct pt_regs *oldregs)
+-{
+- memcpy(newregs, oldregs, sizeof(*newregs));
+- newregs->esp = (unsigned long)&(oldregs->esp);
+- __asm__ __volatile__(
+- "xorl %%eax, %%eax\n\t"
+- "movw %%ss, %%ax\n\t"
+- :"=a"(newregs->xss));
+-}
-
--#endif /* __ASM_SH64_HARDIRQ_H */
+-/*
+- * This function is responsible for capturing register states if coming
+- * via panic otherwise just fix up the ss and esp if coming via kernel
+- * mode exception.
+- */
+-static inline void crash_setup_regs(struct pt_regs *newregs,
+- struct pt_regs *oldregs)
+-{
+- if (oldregs)
+- crash_fixup_ss_esp(newregs, oldregs);
+- else {
+- __asm__ __volatile__("movl %%ebx,%0" : "=m"(newregs->ebx));
+- __asm__ __volatile__("movl %%ecx,%0" : "=m"(newregs->ecx));
+- __asm__ __volatile__("movl %%edx,%0" : "=m"(newregs->edx));
+- __asm__ __volatile__("movl %%esi,%0" : "=m"(newregs->esi));
+- __asm__ __volatile__("movl %%edi,%0" : "=m"(newregs->edi));
+- __asm__ __volatile__("movl %%ebp,%0" : "=m"(newregs->ebp));
+- __asm__ __volatile__("movl %%eax,%0" : "=m"(newregs->eax));
+- __asm__ __volatile__("movl %%esp,%0" : "=m"(newregs->esp));
+- __asm__ __volatile__("movw %%ss, %%ax;" :"=a"(newregs->xss));
+- __asm__ __volatile__("movw %%cs, %%ax;" :"=a"(newregs->xcs));
+- __asm__ __volatile__("movw %%ds, %%ax;" :"=a"(newregs->xds));
+- __asm__ __volatile__("movw %%es, %%ax;" :"=a"(newregs->xes));
+- __asm__ __volatile__("pushfl; popl %0" :"=m"(newregs->eflags));
-
-diff --git a/include/asm-sh64/hardware.h b/include/asm-sh64/hardware.h
+- newregs->eip = (unsigned long)current_text_addr();
+- }
+-}
+-asmlinkage NORET_TYPE void
+-relocate_kernel(unsigned long indirection_page,
+- unsigned long control_page,
+- unsigned long start_address,
+- unsigned int has_pae) ATTRIB_NORET;
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#endif /* _I386_KEXEC_H */
+diff --git a/include/asm-x86/kexec_64.h b/include/asm-x86/kexec_64.h
deleted file mode 100644
-index 931c1ad..0000000
---- a/include/asm-sh64/hardware.h
+index 738e581..0000000
+--- a/include/asm-x86/kexec_64.h
+++ /dev/null
-@@ -1,22 +0,0 @@
--#ifndef __ASM_SH64_HARDWARE_H
--#define __ASM_SH64_HARDWARE_H
+@@ -1,94 +0,0 @@
+-#ifndef _X86_64_KEXEC_H
+-#define _X86_64_KEXEC_H
+-
+-#define PA_CONTROL_PAGE 0
+-#define VA_CONTROL_PAGE 1
+-#define PA_PGD 2
+-#define VA_PGD 3
+-#define PA_PUD_0 4
+-#define VA_PUD_0 5
+-#define PA_PMD_0 6
+-#define VA_PMD_0 7
+-#define PA_PTE_0 8
+-#define VA_PTE_0 9
+-#define PA_PUD_1 10
+-#define VA_PUD_1 11
+-#define PA_PMD_1 12
+-#define VA_PMD_1 13
+-#define PA_PTE_1 14
+-#define VA_PTE_1 15
+-#define PA_TABLE_PAGE 16
+-#define PAGES_NR 17
+-
+-#ifndef __ASSEMBLY__
+-
+-#include <linux/string.h>
+-
+-#include <asm/page.h>
+-#include <asm/ptrace.h>
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/hardware.h
+- * KEXEC_SOURCE_MEMORY_LIMIT maximum page get_free_page can return.
+- * I.e. Maximum page that is mapped directly into kernel memory,
+- * and kmap is not required.
- *
-- * Copyright (C) 2002 Stuart Menefy
-- * Copyright (C) 2003 Paul Mundt
-- *
-- * Defitions of the locations of registers in the physical address space.
+- * So far x86_64 is limited to 40 physical address bits.
- */
-
--#define PHYS_PERIPHERAL_BLOCK 0x09000000
--#define PHYS_DMAC_BLOCK 0x0e000000
--#define PHYS_PCI_BLOCK 0x60000000
--#define PHYS_EMI_BLOCK 0xff000000
+-/* Maximum physical address we can use pages from */
+-#define KEXEC_SOURCE_MEMORY_LIMIT (0xFFFFFFFFFFUL)
+-/* Maximum address we can reach in physical address mode */
+-#define KEXEC_DESTINATION_MEMORY_LIMIT (0xFFFFFFFFFFUL)
+-/* Maximum address we can use for the control pages */
+-#define KEXEC_CONTROL_MEMORY_LIMIT (0xFFFFFFFFFFUL)
-
--#endif /* __ASM_SH64_HARDWARE_H */
-diff --git a/include/asm-sh64/hw_irq.h b/include/asm-sh64/hw_irq.h
-deleted file mode 100644
-index ebb3908..0000000
---- a/include/asm-sh64/hw_irq.h
-+++ /dev/null
-@@ -1,15 +0,0 @@
--#ifndef __ASM_SH64_HW_IRQ_H
--#define __ASM_SH64_HW_IRQ_H
+-/* Allocate one page for the pdp and the second for the code */
+-#define KEXEC_CONTROL_CODE_SIZE (4096UL + 4096UL)
+-
+-/* The native architecture */
+-#define KEXEC_ARCH KEXEC_ARCH_X86_64
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/hw_irq.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * Saving the registers of the cpu on which panic occured in
+- * crash_kexec to save a valid sp. The registers of other cpus
+- * will be saved in machine_crash_shutdown while shooting down them.
- */
-
--#endif /* __ASM_SH64_HW_IRQ_H */
-diff --git a/include/asm-sh64/ide.h b/include/asm-sh64/ide.h
+-static inline void crash_setup_regs(struct pt_regs *newregs,
+- struct pt_regs *oldregs)
+-{
+- if (oldregs)
+- memcpy(newregs, oldregs, sizeof(*newregs));
+- else {
+- __asm__ __volatile__("movq %%rbx,%0" : "=m"(newregs->rbx));
+- __asm__ __volatile__("movq %%rcx,%0" : "=m"(newregs->rcx));
+- __asm__ __volatile__("movq %%rdx,%0" : "=m"(newregs->rdx));
+- __asm__ __volatile__("movq %%rsi,%0" : "=m"(newregs->rsi));
+- __asm__ __volatile__("movq %%rdi,%0" : "=m"(newregs->rdi));
+- __asm__ __volatile__("movq %%rbp,%0" : "=m"(newregs->rbp));
+- __asm__ __volatile__("movq %%rax,%0" : "=m"(newregs->rax));
+- __asm__ __volatile__("movq %%rsp,%0" : "=m"(newregs->rsp));
+- __asm__ __volatile__("movq %%r8,%0" : "=m"(newregs->r8));
+- __asm__ __volatile__("movq %%r9,%0" : "=m"(newregs->r9));
+- __asm__ __volatile__("movq %%r10,%0" : "=m"(newregs->r10));
+- __asm__ __volatile__("movq %%r11,%0" : "=m"(newregs->r11));
+- __asm__ __volatile__("movq %%r12,%0" : "=m"(newregs->r12));
+- __asm__ __volatile__("movq %%r13,%0" : "=m"(newregs->r13));
+- __asm__ __volatile__("movq %%r14,%0" : "=m"(newregs->r14));
+- __asm__ __volatile__("movq %%r15,%0" : "=m"(newregs->r15));
+- __asm__ __volatile__("movl %%ss, %%eax;" :"=a"(newregs->ss));
+- __asm__ __volatile__("movl %%cs, %%eax;" :"=a"(newregs->cs));
+- __asm__ __volatile__("pushfq; popq %0" :"=m"(newregs->eflags));
+-
+- newregs->rip = (unsigned long)current_text_addr();
+- }
+-}
+-
+-NORET_TYPE void
+-relocate_kernel(unsigned long indirection_page,
+- unsigned long page_list,
+- unsigned long start_address) ATTRIB_NORET;
+-
+-#endif /* __ASSEMBLY__ */
+-
+-#endif /* _X86_64_KEXEC_H */
+diff --git a/include/asm-x86/kprobes.h b/include/asm-x86/kprobes.h
+index b7bbd25..143476a 100644
+--- a/include/asm-x86/kprobes.h
++++ b/include/asm-x86/kprobes.h
+@@ -1,5 +1,98 @@
+-#ifdef CONFIG_X86_32
+-# include "kprobes_32.h"
+-#else
+-# include "kprobes_64.h"
+-#endif
++#ifndef _ASM_KPROBES_H
++#define _ASM_KPROBES_H
++/*
++ * Kernel Probes (KProbes)
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++ *
++ * Copyright (C) IBM Corporation, 2002, 2004
++ *
++ * See arch/x86/kernel/kprobes.c for x86 kprobes history.
++ */
++#include <linux/types.h>
++#include <linux/ptrace.h>
++#include <linux/percpu.h>
++
++#define __ARCH_WANT_KPROBES_INSN_SLOT
++
++struct pt_regs;
++struct kprobe;
++
++typedef u8 kprobe_opcode_t;
++#define BREAKPOINT_INSTRUCTION 0xcc
++#define RELATIVEJUMP_INSTRUCTION 0xe9
++#define MAX_INSN_SIZE 16
++#define MAX_STACK_SIZE 64
++#define MIN_STACK_SIZE(ADDR) (((MAX_STACK_SIZE) < \
++ (((unsigned long)current_thread_info()) + THREAD_SIZE \
++ - (unsigned long)(ADDR))) \
++ ? (MAX_STACK_SIZE) \
++ : (((unsigned long)current_thread_info()) + THREAD_SIZE \
++ - (unsigned long)(ADDR)))
++
++#define ARCH_SUPPORTS_KRETPROBES
++#define flush_insn_slot(p) do { } while (0)
++
++extern const int kretprobe_blacklist_size;
++
++void arch_remove_kprobe(struct kprobe *p);
++void kretprobe_trampoline(void);
++
++/* Architecture specific copy of original instruction*/
++struct arch_specific_insn {
++ /* copy of the original instruction */
++ kprobe_opcode_t *insn;
++ /*
++ * boostable = -1: This instruction type is not boostable.
++ * boostable = 0: This instruction type is boostable.
++ * boostable = 1: This instruction has been boosted: we have
++ * added a relative jump after the instruction copy in insn,
++ * so no single-step and fixup are needed (unless there's
++ * a post_handler or break_handler).
++ */
++ int boostable;
++};
++
++struct prev_kprobe {
++ struct kprobe *kp;
++ unsigned long status;
++ unsigned long old_flags;
++ unsigned long saved_flags;
++};
++
++/* per-cpu kprobe control block */
++struct kprobe_ctlblk {
++ unsigned long kprobe_status;
++ unsigned long kprobe_old_flags;
++ unsigned long kprobe_saved_flags;
++ unsigned long *jprobe_saved_sp;
++ struct pt_regs jprobe_saved_regs;
++ kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
++ struct prev_kprobe prev_kprobe;
++};
++
++/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
++ * if necessary, before executing the original int3/1 (trap) handler.
++ */
++static inline void restore_interrupts(struct pt_regs *regs)
++{
++ if (regs->flags & X86_EFLAGS_IF)
++ local_irq_enable();
++}
++
++extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
++extern int kprobe_exceptions_notify(struct notifier_block *self,
++ unsigned long val, void *data);
++#endif /* _ASM_KPROBES_H */
+diff --git a/include/asm-x86/kprobes_32.h b/include/asm-x86/kprobes_32.h
deleted file mode 100644
-index b6e31e8..0000000
---- a/include/asm-sh64/ide.h
+index 9fe8f3b..0000000
+--- a/include/asm-x86/kprobes_32.h
+++ /dev/null
-@@ -1,29 +0,0 @@
+@@ -1,94 +0,0 @@
+-#ifndef _ASM_KPROBES_H
+-#define _ASM_KPROBES_H
-/*
-- * linux/include/asm-sh64/ide.h
+- * Kernel Probes (KProbes)
+- * include/asm-i386/kprobes.h
- *
-- * Copyright (C) 1994-1996 Linus Torvalds & authors
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
- *
-- * sh64 version by Richard Curnow & Paul Mundt
-- */
--
--/*
-- * This file contains the sh64 architecture specific IDE code.
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+- *
+- * Copyright (C) IBM Corporation, 2002, 2004
+- *
+- * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna at in.ibm.com> Kernel
+- * Probes initial implementation ( includes suggestions from
+- * Rusty Russell).
- */
+-#include <linux/types.h>
+-#include <linux/ptrace.h>
-
--#ifndef __ASM_SH64_IDE_H
--#define __ASM_SH64_IDE_H
+-#define __ARCH_WANT_KPROBES_INSN_SLOT
-
--#ifdef __KERNEL__
+-struct kprobe;
+-struct pt_regs;
-
+-typedef u8 kprobe_opcode_t;
+-#define BREAKPOINT_INSTRUCTION 0xcc
+-#define RELATIVEJUMP_INSTRUCTION 0xe9
+-#define MAX_INSN_SIZE 16
+-#define MAX_STACK_SIZE 64
+-#define MIN_STACK_SIZE(ADDR) (((MAX_STACK_SIZE) < \
+- (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR))) \
+- ? (MAX_STACK_SIZE) \
+- : (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR)))
+-
+-#define ARCH_SUPPORTS_KRETPROBES
+-#define flush_insn_slot(p) do { } while (0)
+-
+-extern const int kretprobe_blacklist_size;
+-
+-void arch_remove_kprobe(struct kprobe *p);
+-void kretprobe_trampoline(void);
+-
+-/* Architecture specific copy of original instruction*/
+-struct arch_specific_insn {
+- /* copy of the original instruction */
+- kprobe_opcode_t *insn;
+- /*
+- * If this flag is not 0, this kprobe can be boost when its
+- * post_handler and break_handler is not set.
+- */
+- int boostable;
+-};
-
--/* Without this, the initialisation of PCI IDE cards end up calling
-- * ide_init_hwif_ports, which won't work. */
--#ifdef CONFIG_BLK_DEV_IDEPCI
--#define ide_default_io_ctl(base) (0)
--#endif
+-struct prev_kprobe {
+- struct kprobe *kp;
+- unsigned long status;
+- unsigned long old_eflags;
+- unsigned long saved_eflags;
+-};
-
--#include <asm-generic/ide_iops.h>
+-/* per-cpu kprobe control block */
+-struct kprobe_ctlblk {
+- unsigned long kprobe_status;
+- unsigned long kprobe_old_eflags;
+- unsigned long kprobe_saved_eflags;
+- unsigned long *jprobe_saved_esp;
+- struct pt_regs jprobe_saved_regs;
+- kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
+- struct prev_kprobe prev_kprobe;
+-};
-
--#endif /* __KERNEL__ */
+-/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
+- * if necessary, before executing the original int3/1 (trap) handler.
+- */
+-static inline void restore_interrupts(struct pt_regs *regs)
+-{
+- if (regs->eflags & IF_MASK)
+- local_irq_enable();
+-}
-
--#endif /* __ASM_SH64_IDE_H */
-diff --git a/include/asm-sh64/io.h b/include/asm-sh64/io.h
+-extern int kprobe_exceptions_notify(struct notifier_block *self,
+- unsigned long val, void *data);
+-extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+-#endif /* _ASM_KPROBES_H */
+diff --git a/include/asm-x86/kprobes_64.h b/include/asm-x86/kprobes_64.h
deleted file mode 100644
-index 7bd7314..0000000
---- a/include/asm-sh64/io.h
+index 743d762..0000000
+--- a/include/asm-x86/kprobes_64.h
+++ /dev/null
-@@ -1,196 +0,0 @@
--#ifndef __ASM_SH64_IO_H
--#define __ASM_SH64_IO_H
--
+@@ -1,90 +0,0 @@
+-#ifndef _ASM_KPROBES_H
+-#define _ASM_KPROBES_H
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
+- * Kernel Probes (KProbes)
+- * include/asm-x86_64/kprobes.h
- *
-- * include/asm-sh64/io.h
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
- *
-- */
--
--/*
-- * Convention:
-- * read{b,w,l}/write{b,w,l} are for PCI,
-- * while in{b,w,l}/out{b,w,l} are for ISA
-- * These may (will) be platform specific function.
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
- *
-- * In addition, we have
-- * ctrl_in{b,w,l}/ctrl_out{b,w,l} for SuperH specific I/O.
-- * which are processor specific. Address should be the result of
-- * onchip_remap();
+- * Copyright (C) IBM Corporation, 2002, 2004
+- *
+- * 2004-Oct Prasanna S Panchamukhi <prasanna at in.ibm.com> and Jim Keniston
+- * kenistoj at us.ibm.com adopted from i386.
- */
+-#include <linux/types.h>
+-#include <linux/ptrace.h>
+-#include <linux/percpu.h>
-
--#include <linux/compiler.h>
--#include <asm/cache.h>
--#include <asm/system.h>
--#include <asm/page.h>
--#include <asm-generic/iomap.h>
+-#define __ARCH_WANT_KPROBES_INSN_SLOT
-
--/*
-- * Nothing overly special here.. instead of doing the same thing
-- * over and over again, we just define a set of sh64_in/out functions
-- * with an implicit size. The traditional read{b,w,l}/write{b,w,l}
-- * mess is wrapped to this, as are the SH-specific ctrl_in/out routines.
-- */
--static inline unsigned char sh64_in8(const volatile void __iomem *addr)
--{
-- return *(volatile unsigned char __force *)addr;
--}
+-struct pt_regs;
+-struct kprobe;
-
--static inline unsigned short sh64_in16(const volatile void __iomem *addr)
--{
-- return *(volatile unsigned short __force *)addr;
--}
+-typedef u8 kprobe_opcode_t;
+-#define BREAKPOINT_INSTRUCTION 0xcc
+-#define MAX_INSN_SIZE 15
+-#define MAX_STACK_SIZE 64
+-#define MIN_STACK_SIZE(ADDR) (((MAX_STACK_SIZE) < \
+- (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR))) \
+- ? (MAX_STACK_SIZE) \
+- : (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR)))
+-
+-#define ARCH_SUPPORTS_KRETPROBES
+-extern const int kretprobe_blacklist_size;
+-
+-void kretprobe_trampoline(void);
+-extern void arch_remove_kprobe(struct kprobe *p);
+-#define flush_insn_slot(p) do { } while (0)
+-
+-/* Architecture specific copy of original instruction*/
+-struct arch_specific_insn {
+- /* copy of the original instruction */
+- kprobe_opcode_t *insn;
+-};
-
--static inline unsigned int sh64_in32(const volatile void __iomem *addr)
--{
-- return *(volatile unsigned int __force *)addr;
--}
+-struct prev_kprobe {
+- struct kprobe *kp;
+- unsigned long status;
+- unsigned long old_rflags;
+- unsigned long saved_rflags;
+-};
-
--static inline unsigned long long sh64_in64(const volatile void __iomem *addr)
--{
-- return *(volatile unsigned long long __force *)addr;
--}
+-/* per-cpu kprobe control block */
+-struct kprobe_ctlblk {
+- unsigned long kprobe_status;
+- unsigned long kprobe_old_rflags;
+- unsigned long kprobe_saved_rflags;
+- unsigned long *jprobe_saved_rsp;
+- struct pt_regs jprobe_saved_regs;
+- kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
+- struct prev_kprobe prev_kprobe;
+-};
-
--static inline void sh64_out8(unsigned char b, volatile void __iomem *addr)
+-/* trap3/1 are intr gates for kprobes. So, restore the status of IF,
+- * if necessary, before executing the original int3/1 (trap) handler.
+- */
+-static inline void restore_interrupts(struct pt_regs *regs)
-{
-- *(volatile unsigned char __force *)addr = b;
-- wmb();
+- if (regs->eflags & IF_MASK)
+- local_irq_enable();
-}
-
--static inline void sh64_out16(unsigned short b, volatile void __iomem *addr)
--{
-- *(volatile unsigned short __force *)addr = b;
-- wmb();
--}
+-extern int post_kprobe_handler(struct pt_regs *regs);
+-extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+-extern int kprobe_handler(struct pt_regs *regs);
+-
+-extern int kprobe_exceptions_notify(struct notifier_block *self,
+- unsigned long val, void *data);
+-#endif /* _ASM_KPROBES_H */
+diff --git a/include/asm-x86/lguest.h b/include/asm-x86/lguest.h
+index ccd3384..1c8367a 100644
+--- a/include/asm-x86/lguest.h
++++ b/include/asm-x86/lguest.h
+@@ -44,14 +44,14 @@ struct lguest_ro_state
+ {
+ /* Host information we need to restore when we switch back. */
+ u32 host_cr3;
+- struct Xgt_desc_struct host_idt_desc;
+- struct Xgt_desc_struct host_gdt_desc;
++ struct desc_ptr host_idt_desc;
++ struct desc_ptr host_gdt_desc;
+ u32 host_sp;
+
+ /* Fields which are used when guest is running. */
+- struct Xgt_desc_struct guest_idt_desc;
+- struct Xgt_desc_struct guest_gdt_desc;
+- struct i386_hw_tss guest_tss;
++ struct desc_ptr guest_idt_desc;
++ struct desc_ptr guest_gdt_desc;
++ struct x86_hw_tss guest_tss;
+ struct desc_struct guest_idt[IDT_ENTRIES];
+ struct desc_struct guest_gdt[GDT_ENTRIES];
+ };
+@@ -78,8 +78,8 @@ static inline void lguest_set_ts(void)
+ }
+
+ /* Full 4G segment descriptors, suitable for CS and DS. */
+-#define FULL_EXEC_SEGMENT ((struct desc_struct){0x0000ffff, 0x00cf9b00})
+-#define FULL_SEGMENT ((struct desc_struct){0x0000ffff, 0x00cf9300})
++#define FULL_EXEC_SEGMENT ((struct desc_struct){ { {0x0000ffff, 0x00cf9b00} } })
++#define FULL_SEGMENT ((struct desc_struct){ { {0x0000ffff, 0x00cf9300} } })
+
+ #endif /* __ASSEMBLY__ */
+
+diff --git a/include/asm-x86/linkage.h b/include/asm-x86/linkage.h
+index 94b257f..31739c7 100644
+--- a/include/asm-x86/linkage.h
++++ b/include/asm-x86/linkage.h
+@@ -1,5 +1,25 @@
++#ifndef __ASM_LINKAGE_H
++#define __ASM_LINKAGE_H
++
++#ifdef CONFIG_X86_64
++#define __ALIGN .p2align 4,,15
++#define __ALIGN_STR ".p2align 4,,15"
++#endif
++
+ #ifdef CONFIG_X86_32
+-# include "linkage_32.h"
+-#else
+-# include "linkage_64.h"
++#define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
++#define prevent_tail_call(ret) __asm__ ("" : "=r" (ret) : "0" (ret))
++/*
++ * For 32-bit UML - mark functions implemented in assembly that use
++ * regparm input parameters:
++ */
++#define asmregparm __attribute__((regparm(3)))
++#endif
++
++#ifdef CONFIG_X86_ALIGNMENT_16
++#define __ALIGN .align 16,0x90
++#define __ALIGN_STR ".align 16,0x90"
++#endif
++
+ #endif
++
+diff --git a/include/asm-x86/linkage_32.h b/include/asm-x86/linkage_32.h
+deleted file mode 100644
+index f4a6eba..0000000
+--- a/include/asm-x86/linkage_32.h
++++ /dev/null
+@@ -1,15 +0,0 @@
+-#ifndef __ASM_LINKAGE_H
+-#define __ASM_LINKAGE_H
-
--static inline void sh64_out32(unsigned int b, volatile void __iomem *addr)
--{
-- *(volatile unsigned int __force *)addr = b;
-- wmb();
--}
+-#define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
+-#define FASTCALL(x) x __attribute__((regparm(3)))
+-#define fastcall __attribute__((regparm(3)))
-
--static inline void sh64_out64(unsigned long long b, volatile void __iomem *addr)
--{
-- *(volatile unsigned long long __force *)addr = b;
-- wmb();
--}
+-#define prevent_tail_call(ret) __asm__ ("" : "=r" (ret) : "0" (ret))
-
--#define readb(addr) sh64_in8(addr)
--#define readw(addr) sh64_in16(addr)
--#define readl(addr) sh64_in32(addr)
--#define readb_relaxed(addr) sh64_in8(addr)
--#define readw_relaxed(addr) sh64_in16(addr)
--#define readl_relaxed(addr) sh64_in32(addr)
+-#ifdef CONFIG_X86_ALIGNMENT_16
+-#define __ALIGN .align 16,0x90
+-#define __ALIGN_STR ".align 16,0x90"
+-#endif
-
--#define writeb(b, addr) sh64_out8(b, addr)
--#define writew(b, addr) sh64_out16(b, addr)
--#define writel(b, addr) sh64_out32(b, addr)
+-#endif
+diff --git a/include/asm-x86/linkage_64.h b/include/asm-x86/linkage_64.h
+deleted file mode 100644
+index b5f39d0..0000000
+--- a/include/asm-x86/linkage_64.h
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#ifndef __ASM_LINKAGE_H
+-#define __ASM_LINKAGE_H
-
--#define ctrl_inb(addr) sh64_in8(ioport_map(addr, 1))
--#define ctrl_inw(addr) sh64_in16(ioport_map(addr, 2))
--#define ctrl_inl(addr) sh64_in32(ioport_map(addr, 4))
+-#define __ALIGN .p2align 4,,15
-
--#define ctrl_outb(b, addr) sh64_out8(b, ioport_map(addr, 1))
--#define ctrl_outw(b, addr) sh64_out16(b, ioport_map(addr, 2))
--#define ctrl_outl(b, addr) sh64_out32(b, ioport_map(addr, 4))
+-#endif
+diff --git a/include/asm-x86/local.h b/include/asm-x86/local.h
+index c7a1b1c..f852c62 100644
+--- a/include/asm-x86/local.h
++++ b/include/asm-x86/local.h
+@@ -1,5 +1,240 @@
+-#ifdef CONFIG_X86_32
+-# include "local_32.h"
+-#else
+-# include "local_64.h"
++#ifndef _ARCH_LOCAL_H
++#define _ARCH_LOCAL_H
++
++#include <linux/percpu.h>
++
++#include <asm/system.h>
++#include <asm/atomic.h>
++#include <asm/asm.h>
++
++typedef struct {
++ atomic_long_t a;
++} local_t;
++
++#define LOCAL_INIT(i) { ATOMIC_LONG_INIT(i) }
++
++#define local_read(l) atomic_long_read(&(l)->a)
++#define local_set(l, i) atomic_long_set(&(l)->a, (i))
++
++static inline void local_inc(local_t *l)
++{
++ __asm__ __volatile__(
++ _ASM_INC "%0"
++ :"+m" (l->a.counter));
++}
++
++static inline void local_dec(local_t *l)
++{
++ __asm__ __volatile__(
++ _ASM_DEC "%0"
++ :"+m" (l->a.counter));
++}
++
++static inline void local_add(long i, local_t *l)
++{
++ __asm__ __volatile__(
++ _ASM_ADD "%1,%0"
++ :"+m" (l->a.counter)
++ :"ir" (i));
++}
++
++static inline void local_sub(long i, local_t *l)
++{
++ __asm__ __volatile__(
++ _ASM_SUB "%1,%0"
++ :"+m" (l->a.counter)
++ :"ir" (i));
++}
++
++/**
++ * local_sub_and_test - subtract value from variable and test result
++ * @i: integer value to subtract
++ * @l: pointer to type local_t
++ *
++ * Atomically subtracts @i from @l and returns
++ * true if the result is zero, or false for all
++ * other cases.
++ */
++static inline int local_sub_and_test(long i, local_t *l)
++{
++ unsigned char c;
++
++ __asm__ __volatile__(
++ _ASM_SUB "%2,%0; sete %1"
++ :"+m" (l->a.counter), "=qm" (c)
++ :"ir" (i) : "memory");
++ return c;
++}
++
++/**
++ * local_dec_and_test - decrement and test
++ * @l: pointer to type local_t
++ *
++ * Atomically decrements @l by 1 and
++ * returns true if the result is 0, or false for all other
++ * cases.
++ */
++static inline int local_dec_and_test(local_t *l)
++{
++ unsigned char c;
++
++ __asm__ __volatile__(
++ _ASM_DEC "%0; sete %1"
++ :"+m" (l->a.counter), "=qm" (c)
++ : : "memory");
++ return c != 0;
++}
++
++/**
++ * local_inc_and_test - increment and test
++ * @l: pointer to type local_t
++ *
++ * Atomically increments @l by 1
++ * and returns true if the result is zero, or false for all
++ * other cases.
++ */
++static inline int local_inc_and_test(local_t *l)
++{
++ unsigned char c;
++
++ __asm__ __volatile__(
++ _ASM_INC "%0; sete %1"
++ :"+m" (l->a.counter), "=qm" (c)
++ : : "memory");
++ return c != 0;
++}
++
++/**
++ * local_add_negative - add and test if negative
++ * @i: integer value to add
++ * @l: pointer to type local_t
++ *
++ * Atomically adds @i to @l and returns true
++ * if the result is negative, or false when
++ * result is greater than or equal to zero.
++ */
++static inline int local_add_negative(long i, local_t *l)
++{
++ unsigned char c;
++
++ __asm__ __volatile__(
++ _ASM_ADD "%2,%0; sets %1"
++ :"+m" (l->a.counter), "=qm" (c)
++ :"ir" (i) : "memory");
++ return c;
++}
++
++/**
++ * local_add_return - add and return
++ * @i: integer value to add
++ * @l: pointer to type local_t
++ *
++ * Atomically adds @i to @l and returns @i + @l
++ */
++static inline long local_add_return(long i, local_t *l)
++{
++ long __i;
++#ifdef CONFIG_M386
++ unsigned long flags;
++ if (unlikely(boot_cpu_data.x86 <= 3))
++ goto no_xadd;
+ #endif
++ /* Modern 486+ processor */
++ __i = i;
++ __asm__ __volatile__(
++ _ASM_XADD "%0, %1;"
++ :"+r" (i), "+m" (l->a.counter)
++ : : "memory");
++ return i + __i;
++
++#ifdef CONFIG_M386
++no_xadd: /* Legacy 386 processor */
++ local_irq_save(flags);
++ __i = local_read(l);
++ local_set(l, i + __i);
++ local_irq_restore(flags);
++ return i + __i;
++#endif
++}
++
++static inline long local_sub_return(long i, local_t *l)
++{
++ return local_add_return(-i, l);
++}
++
++#define local_inc_return(l) (local_add_return(1, l))
++#define local_dec_return(l) (local_sub_return(1, l))
++
++#define local_cmpxchg(l, o, n) \
++ (cmpxchg_local(&((l)->a.counter), (o), (n)))
++/* Always has a lock prefix */
++#define local_xchg(l, n) (xchg(&((l)->a.counter), (n)))
++
++/**
++ * local_add_unless - add unless the number is a given value
++ * @l: pointer of type local_t
++ * @a: the amount to add to l...
++ * @u: ...unless l is equal to u.
++ *
++ * Atomically adds @a to @l, so long as it was not @u.
++ * Returns non-zero if @l was not @u, and zero otherwise.
++ */
++#define local_add_unless(l, a, u) \
++({ \
++ long c, old; \
++ c = local_read(l); \
++ for (;;) { \
++ if (unlikely(c == (u))) \
++ break; \
++ old = local_cmpxchg((l), c, c + (a)); \
++ if (likely(old == c)) \
++ break; \
++ c = old; \
++ } \
++ c != (u); \
++})
++#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
++
++/* On x86_32, these are no better than the atomic variants.
++ * On x86-64 these are better than the atomic variants on SMP kernels
++ * because they dont use a lock prefix.
++ */
++#define __local_inc(l) local_inc(l)
++#define __local_dec(l) local_dec(l)
++#define __local_add(i, l) local_add((i), (l))
++#define __local_sub(i, l) local_sub((i), (l))
++
++/* Use these for per-cpu local_t variables: on some archs they are
++ * much more efficient than these naive implementations. Note they take
++ * a variable, not an address.
++ *
++ * X86_64: This could be done better if we moved the per cpu data directly
++ * after GS.
++ */
++
++/* Need to disable preemption for the cpu local counters otherwise we could
++ still access a variable of a previous CPU in a non atomic way. */
++#define cpu_local_wrap_v(l) \
++ ({ local_t res__; \
++ preempt_disable(); \
++ res__ = (l); \
++ preempt_enable(); \
++ res__; })
++#define cpu_local_wrap(l) \
++ ({ preempt_disable(); \
++ l; \
++ preempt_enable(); }) \
++
++#define cpu_local_read(l) cpu_local_wrap_v(local_read(&__get_cpu_var(l)))
++#define cpu_local_set(l, i) cpu_local_wrap(local_set(&__get_cpu_var(l), (i)))
++#define cpu_local_inc(l) cpu_local_wrap(local_inc(&__get_cpu_var(l)))
++#define cpu_local_dec(l) cpu_local_wrap(local_dec(&__get_cpu_var(l)))
++#define cpu_local_add(i, l) cpu_local_wrap(local_add((i), &__get_cpu_var(l)))
++#define cpu_local_sub(i, l) cpu_local_wrap(local_sub((i), &__get_cpu_var(l)))
++
++#define __cpu_local_inc(l) cpu_local_inc(l)
++#define __cpu_local_dec(l) cpu_local_dec(l)
++#define __cpu_local_add(i, l) cpu_local_add((i), (l))
++#define __cpu_local_sub(i, l) cpu_local_sub((i), (l))
++
++#endif /* _ARCH_LOCAL_H */
+diff --git a/include/asm-x86/local_32.h b/include/asm-x86/local_32.h
+deleted file mode 100644
+index 6e85975..0000000
+--- a/include/asm-x86/local_32.h
++++ /dev/null
+@@ -1,233 +0,0 @@
+-#ifndef _ARCH_I386_LOCAL_H
+-#define _ARCH_I386_LOCAL_H
-
--#define ioread8(addr) sh64_in8(addr)
--#define ioread16(addr) sh64_in16(addr)
--#define ioread32(addr) sh64_in32(addr)
--#define iowrite8(b, addr) sh64_out8(b, addr)
--#define iowrite16(b, addr) sh64_out16(b, addr)
--#define iowrite32(b, addr) sh64_out32(b, addr)
+-#include <linux/percpu.h>
+-#include <asm/system.h>
+-#include <asm/atomic.h>
-
--#define inb(addr) ctrl_inb(addr)
--#define inw(addr) ctrl_inw(addr)
--#define inl(addr) ctrl_inl(addr)
--#define outb(b, addr) ctrl_outb(b, addr)
--#define outw(b, addr) ctrl_outw(b, addr)
--#define outl(b, addr) ctrl_outl(b, addr)
+-typedef struct
+-{
+- atomic_long_t a;
+-} local_t;
-
--void outsw(unsigned long port, const void *addr, unsigned long count);
--void insw(unsigned long port, void *addr, unsigned long count);
--void outsl(unsigned long port, const void *addr, unsigned long count);
--void insl(unsigned long port, void *addr, unsigned long count);
+-#define LOCAL_INIT(i) { ATOMIC_LONG_INIT(i) }
-
--#define inb_p(addr) inb(addr)
--#define inw_p(addr) inw(addr)
--#define inl_p(addr) inl(addr)
--#define outb_p(x,addr) outb(x,addr)
--#define outw_p(x,addr) outw(x,addr)
--#define outl_p(x,addr) outl(x,addr)
+-#define local_read(l) atomic_long_read(&(l)->a)
+-#define local_set(l,i) atomic_long_set(&(l)->a, (i))
-
--#define __raw_readb readb
--#define __raw_readw readw
--#define __raw_readl readl
--#define __raw_writeb writeb
--#define __raw_writew writew
--#define __raw_writel writel
+-static __inline__ void local_inc(local_t *l)
+-{
+- __asm__ __volatile__(
+- "incl %0"
+- :"+m" (l->a.counter));
+-}
-
--void memcpy_toio(void __iomem *to, const void *from, long count);
--void memcpy_fromio(void *to, void __iomem *from, long count);
+-static __inline__ void local_dec(local_t *l)
+-{
+- __asm__ __volatile__(
+- "decl %0"
+- :"+m" (l->a.counter));
+-}
-
--#define mmiowb()
+-static __inline__ void local_add(long i, local_t *l)
+-{
+- __asm__ __volatile__(
+- "addl %1,%0"
+- :"+m" (l->a.counter)
+- :"ir" (i));
+-}
-
--#ifdef __KERNEL__
+-static __inline__ void local_sub(long i, local_t *l)
+-{
+- __asm__ __volatile__(
+- "subl %1,%0"
+- :"+m" (l->a.counter)
+- :"ir" (i));
+-}
-
--#ifdef CONFIG_SH_CAYMAN
--extern unsigned long smsc_superio_virt;
--#endif
--#ifdef CONFIG_PCI
--extern unsigned long pciio_virt;
--#endif
+-/**
+- * local_sub_and_test - subtract value from variable and test result
+- * @i: integer value to subtract
+- * @l: pointer of type local_t
+- *
+- * Atomically subtracts @i from @l and returns
+- * true if the result is zero, or false for all
+- * other cases.
+- */
+-static __inline__ int local_sub_and_test(long i, local_t *l)
+-{
+- unsigned char c;
-
--#define IO_SPACE_LIMIT 0xffffffff
+- __asm__ __volatile__(
+- "subl %2,%0; sete %1"
+- :"+m" (l->a.counter), "=qm" (c)
+- :"ir" (i) : "memory");
+- return c;
+-}
-
--/*
-- * Change virtual addresses to physical addresses and vv.
-- * These are trivial on the 1:1 Linux/SuperH mapping
+-/**
+- * local_dec_and_test - decrement and test
+- * @l: pointer of type local_t
+- *
+- * Atomically decrements @l by 1 and
+- * returns true if the result is 0, or false for all other
+- * cases.
- */
--static inline unsigned long virt_to_phys(volatile void * address)
+-static __inline__ int local_dec_and_test(local_t *l)
-{
-- return __pa(address);
+- unsigned char c;
+-
+- __asm__ __volatile__(
+- "decl %0; sete %1"
+- :"+m" (l->a.counter), "=qm" (c)
+- : : "memory");
+- return c != 0;
-}
-
--static inline void * phys_to_virt(unsigned long address)
+-/**
+- * local_inc_and_test - increment and test
+- * @l: pointer of type local_t
+- *
+- * Atomically increments @l by 1
+- * and returns true if the result is zero, or false for all
+- * other cases.
+- */
+-static __inline__ int local_inc_and_test(local_t *l)
-{
-- return __va(address);
+- unsigned char c;
+-
+- __asm__ __volatile__(
+- "incl %0; sete %1"
+- :"+m" (l->a.counter), "=qm" (c)
+- : : "memory");
+- return c != 0;
-}
-
--extern void * __ioremap(unsigned long phys_addr, unsigned long size,
-- unsigned long flags);
+-/**
+- * local_add_negative - add and test if negative
+- * @l: pointer of type local_t
+- * @i: integer value to add
+- *
+- * Atomically adds @i to @l and returns true
+- * if the result is negative, or false when
+- * result is greater than or equal to zero.
+- */
+-static __inline__ int local_add_negative(long i, local_t *l)
+-{
+- unsigned char c;
-
--static inline void * ioremap(unsigned long phys_addr, unsigned long size)
+- __asm__ __volatile__(
+- "addl %2,%0; sets %1"
+- :"+m" (l->a.counter), "=qm" (c)
+- :"ir" (i) : "memory");
+- return c;
+-}
+-
+-/**
+- * local_add_return - add and return
+- * @l: pointer of type local_t
+- * @i: integer value to add
+- *
+- * Atomically adds @i to @l and returns @i + @l
+- */
+-static __inline__ long local_add_return(long i, local_t *l)
-{
-- return __ioremap(phys_addr, size, 1);
+- long __i;
+-#ifdef CONFIG_M386
+- unsigned long flags;
+- if(unlikely(boot_cpu_data.x86 <= 3))
+- goto no_xadd;
+-#endif
+- /* Modern 486+ processor */
+- __i = i;
+- __asm__ __volatile__(
+- "xaddl %0, %1;"
+- :"+r" (i), "+m" (l->a.counter)
+- : : "memory");
+- return i + __i;
+-
+-#ifdef CONFIG_M386
+-no_xadd: /* Legacy 386 processor */
+- local_irq_save(flags);
+- __i = local_read(l);
+- local_set(l, i + __i);
+- local_irq_restore(flags);
+- return i + __i;
+-#endif
-}
-
--static inline void * ioremap_nocache (unsigned long phys_addr, unsigned long size)
+-static __inline__ long local_sub_return(long i, local_t *l)
-{
-- return __ioremap(phys_addr, size, 0);
+- return local_add_return(-i,l);
-}
-
--extern void iounmap(void *addr);
+-#define local_inc_return(l) (local_add_return(1,l))
+-#define local_dec_return(l) (local_sub_return(1,l))
-
--unsigned long onchip_remap(unsigned long addr, unsigned long size, const char* name);
--extern void onchip_unmap(unsigned long vaddr);
+-#define local_cmpxchg(l, o, n) \
+- (cmpxchg_local(&((l)->a.counter), (o), (n)))
+-/* Always has a lock prefix */
+-#define local_xchg(l, n) (xchg(&((l)->a.counter), (n)))
-
--/*
-- * Convert a physical pointer to a virtual kernel pointer for /dev/mem
-- * access
+-/**
+- * local_add_unless - add unless the number is a given value
+- * @l: pointer of type local_t
+- * @a: the amount to add to l...
+- * @u: ...unless l is equal to u.
+- *
+- * Atomically adds @a to @l, so long as it was not @u.
+- * Returns non-zero if @l was not @u, and zero otherwise.
- */
--#define xlate_dev_mem_ptr(p) __va(p)
+-#define local_add_unless(l, a, u) \
+-({ \
+- long c, old; \
+- c = local_read(l); \
+- for (;;) { \
+- if (unlikely(c == (u))) \
+- break; \
+- old = local_cmpxchg((l), c, c + (a)); \
+- if (likely(old == c)) \
+- break; \
+- c = old; \
+- } \
+- c != (u); \
+-})
+-#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
-
--/*
-- * Convert a virtual cached pointer to an uncached pointer
-- */
--#define xlate_dev_kmem_ptr(p) p
+-/* On x86, these are no better than the atomic variants. */
+-#define __local_inc(l) local_inc(l)
+-#define __local_dec(l) local_dec(l)
+-#define __local_add(i,l) local_add((i),(l))
+-#define __local_sub(i,l) local_sub((i),(l))
+-
+-/* Use these for per-cpu local_t variables: on some archs they are
+- * much more efficient than these naive implementations. Note they take
+- * a variable, not an address.
+- */
+-
+-/* Need to disable preemption for the cpu local counters otherwise we could
+- still access a variable of a previous CPU in a non atomic way. */
+-#define cpu_local_wrap_v(l) \
+- ({ local_t res__; \
+- preempt_disable(); \
+- res__ = (l); \
+- preempt_enable(); \
+- res__; })
+-#define cpu_local_wrap(l) \
+- ({ preempt_disable(); \
+- l; \
+- preempt_enable(); }) \
+-
+-#define cpu_local_read(l) cpu_local_wrap_v(local_read(&__get_cpu_var(l)))
+-#define cpu_local_set(l, i) cpu_local_wrap(local_set(&__get_cpu_var(l), (i)))
+-#define cpu_local_inc(l) cpu_local_wrap(local_inc(&__get_cpu_var(l)))
+-#define cpu_local_dec(l) cpu_local_wrap(local_dec(&__get_cpu_var(l)))
+-#define cpu_local_add(i, l) cpu_local_wrap(local_add((i), &__get_cpu_var(l)))
+-#define cpu_local_sub(i, l) cpu_local_wrap(local_sub((i), &__get_cpu_var(l)))
+-
+-#define __cpu_local_inc(l) cpu_local_inc(l)
+-#define __cpu_local_dec(l) cpu_local_dec(l)
+-#define __cpu_local_add(i, l) cpu_local_add((i), (l))
+-#define __cpu_local_sub(i, l) cpu_local_sub((i), (l))
-
--#endif /* __KERNEL__ */
--#endif /* __ASM_SH64_IO_H */
-diff --git a/include/asm-sh64/ioctl.h b/include/asm-sh64/ioctl.h
-deleted file mode 100644
-index b279fe0..0000000
---- a/include/asm-sh64/ioctl.h
-+++ /dev/null
-@@ -1 +0,0 @@
--#include <asm-generic/ioctl.h>
-diff --git a/include/asm-sh64/ioctls.h b/include/asm-sh64/ioctls.h
+-#endif /* _ARCH_I386_LOCAL_H */
+diff --git a/include/asm-x86/local_64.h b/include/asm-x86/local_64.h
deleted file mode 100644
-index 6b0c04f..0000000
---- a/include/asm-sh64/ioctls.h
+index e87492b..0000000
+--- a/include/asm-x86/local_64.h
+++ /dev/null
-@@ -1,116 +0,0 @@
--#ifndef __ASM_SH64_IOCTLS_H
--#define __ASM_SH64_IOCTLS_H
+@@ -1,222 +0,0 @@
+-#ifndef _ARCH_X8664_LOCAL_H
+-#define _ARCH_X8664_LOCAL_H
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/ioctls.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2004 Richard Curnow
-- *
-- */
+-#include <linux/percpu.h>
+-#include <asm/atomic.h>
-
--#include <asm/ioctl.h>
+-typedef struct
+-{
+- atomic_long_t a;
+-} local_t;
-
--#define FIOCLEX 0x6601 /* _IO('f', 1) */
--#define FIONCLEX 0x6602 /* _IO('f', 2) */
--#define FIOASYNC 0x4004667d /* _IOW('f', 125, int) */
--#define FIONBIO 0x4004667e /* _IOW('f', 126, int) */
--#define FIONREAD 0x8004667f /* _IOW('f', 127, int) */
--#define TIOCINQ FIONREAD
--#define FIOQSIZE 0x80086680 /* _IOR('f', 128, loff_t) */
+-#define LOCAL_INIT(i) { ATOMIC_LONG_INIT(i) }
-
--#define TCGETS 0x5401
--#define TCSETS 0x5402
--#define TCSETSW 0x5403
--#define TCSETSF 0x5404
+-#define local_read(l) atomic_long_read(&(l)->a)
+-#define local_set(l,i) atomic_long_set(&(l)->a, (i))
-
--#define TCGETA 0x80127417 /* _IOR('t', 23, struct termio) */
--#define TCSETA 0x40127418 /* _IOW('t', 24, struct termio) */
--#define TCSETAW 0x40127419 /* _IOW('t', 25, struct termio) */
--#define TCSETAF 0x4012741c /* _IOW('t', 28, struct termio) */
+-static inline void local_inc(local_t *l)
+-{
+- __asm__ __volatile__(
+- "incq %0"
+- :"=m" (l->a.counter)
+- :"m" (l->a.counter));
+-}
-
--#define TCSBRK 0x741d /* _IO('t', 29) */
--#define TCXONC 0x741e /* _IO('t', 30) */
--#define TCFLSH 0x741f /* _IO('t', 31) */
+-static inline void local_dec(local_t *l)
+-{
+- __asm__ __volatile__(
+- "decq %0"
+- :"=m" (l->a.counter)
+- :"m" (l->a.counter));
+-}
-
--#define TIOCSWINSZ 0x40087467 /* _IOW('t', 103, struct winsize) */
--#define TIOCGWINSZ 0x80087468 /* _IOR('t', 104, struct winsize) */
--#define TIOCSTART 0x746e /* _IO('t', 110) start output, like ^Q */
--#define TIOCSTOP 0x746f /* _IO('t', 111) stop output, like ^S */
--#define TIOCOUTQ 0x80047473 /* _IOR('t', 115, int) output queue size */
+-static inline void local_add(long i, local_t *l)
+-{
+- __asm__ __volatile__(
+- "addq %1,%0"
+- :"=m" (l->a.counter)
+- :"ir" (i), "m" (l->a.counter));
+-}
-
--#define TIOCSPGRP 0x40047476 /* _IOW('t', 118, int) */
--#define TIOCGPGRP 0x80047477 /* _IOR('t', 119, int) */
+-static inline void local_sub(long i, local_t *l)
+-{
+- __asm__ __volatile__(
+- "subq %1,%0"
+- :"=m" (l->a.counter)
+- :"ir" (i), "m" (l->a.counter));
+-}
-
--#define TIOCEXCL 0x540c /* _IO('T', 12) */
--#define TIOCNXCL 0x540d /* _IO('T', 13) */
--#define TIOCSCTTY 0x540e /* _IO('T', 14) */
+-/**
+- * local_sub_and_test - subtract value from variable and test result
+- * @i: integer value to subtract
+- * @l: pointer to type local_t
+- *
+- * Atomically subtracts @i from @l and returns
+- * true if the result is zero, or false for all
+- * other cases.
+- */
+-static __inline__ int local_sub_and_test(long i, local_t *l)
+-{
+- unsigned char c;
-
--#define TIOCSTI 0x40015412 /* _IOW('T', 18, char) 0x5412 */
--#define TIOCMGET 0x80045415 /* _IOR('T', 21, unsigned int) 0x5415 */
--#define TIOCMBIS 0x40045416 /* _IOW('T', 22, unsigned int) 0x5416 */
--#define TIOCMBIC 0x40045417 /* _IOW('T', 23, unsigned int) 0x5417 */
--#define TIOCMSET 0x40045418 /* _IOW('T', 24, unsigned int) 0x5418 */
+- __asm__ __volatile__(
+- "subq %2,%0; sete %1"
+- :"=m" (l->a.counter), "=qm" (c)
+- :"ir" (i), "m" (l->a.counter) : "memory");
+- return c;
+-}
-
--#define TIOCM_LE 0x001
--#define TIOCM_DTR 0x002
--#define TIOCM_RTS 0x004
--#define TIOCM_ST 0x008
--#define TIOCM_SR 0x010
--#define TIOCM_CTS 0x020
--#define TIOCM_CAR 0x040
--#define TIOCM_RNG 0x080
--#define TIOCM_DSR 0x100
--#define TIOCM_CD TIOCM_CAR
--#define TIOCM_RI TIOCM_RNG
+-/**
+- * local_dec_and_test - decrement and test
+- * @l: pointer to type local_t
+- *
+- * Atomically decrements @l by 1 and
+- * returns true if the result is 0, or false for all other
+- * cases.
+- */
+-static __inline__ int local_dec_and_test(local_t *l)
+-{
+- unsigned char c;
-
--#define TIOCGSOFTCAR 0x80045419 /* _IOR('T', 25, unsigned int) 0x5419 */
--#define TIOCSSOFTCAR 0x4004541a /* _IOW('T', 26, unsigned int) 0x541A */
--#define TIOCLINUX 0x4004541c /* _IOW('T', 28, char) 0x541C */
--#define TIOCCONS 0x541d /* _IO('T', 29) */
--#define TIOCGSERIAL 0x803c541e /* _IOR('T', 30, struct serial_struct) 0x541E */
--#define TIOCSSERIAL 0x403c541f /* _IOW('T', 31, struct serial_struct) 0x541F */
--#define TIOCPKT 0x40045420 /* _IOW('T', 32, int) 0x5420 */
+- __asm__ __volatile__(
+- "decq %0; sete %1"
+- :"=m" (l->a.counter), "=qm" (c)
+- :"m" (l->a.counter) : "memory");
+- return c != 0;
+-}
-
--#define TIOCPKT_DATA 0
--#define TIOCPKT_FLUSHREAD 1
--#define TIOCPKT_FLUSHWRITE 2
--#define TIOCPKT_STOP 4
--#define TIOCPKT_START 8
--#define TIOCPKT_NOSTOP 16
--#define TIOCPKT_DOSTOP 32
+-/**
+- * local_inc_and_test - increment and test
+- * @l: pointer to type local_t
+- *
+- * Atomically increments @l by 1
+- * and returns true if the result is zero, or false for all
+- * other cases.
+- */
+-static __inline__ int local_inc_and_test(local_t *l)
+-{
+- unsigned char c;
-
+- __asm__ __volatile__(
+- "incq %0; sete %1"
+- :"=m" (l->a.counter), "=qm" (c)
+- :"m" (l->a.counter) : "memory");
+- return c != 0;
+-}
-
--#define TIOCNOTTY 0x5422 /* _IO('T', 34) */
--#define TIOCSETD 0x40045423 /* _IOW('T', 35, int) 0x5423 */
--#define TIOCGETD 0x80045424 /* _IOR('T', 36, int) 0x5424 */
--#define TCSBRKP 0x40045424 /* _IOW('T', 37, int) 0x5425 */ /* Needed for POSIX tcsendbreak() */
--#define TIOCTTYGSTRUCT 0x8c105426 /* _IOR('T', 38, struct tty_struct) 0x5426 */ /* For debugging only */
--#define TIOCSBRK 0x5427 /* _IO('T', 39) */ /* BSD compatibility */
--#define TIOCCBRK 0x5428 /* _IO('T', 40) */ /* BSD compatibility */
--#define TIOCGSID 0x80045429 /* _IOR('T', 41, pid_t) 0x5429 */ /* Return the session ID of FD */
--#define TIOCGPTN 0x80045430 /* _IOR('T',0x30, unsigned int) 0x5430 Get Pty Number (of pty-mux device) */
--#define TIOCSPTLCK 0x40045431 /* _IOW('T',0x31, int) Lock/unlock Pty */
+-/**
+- * local_add_negative - add and test if negative
+- * @i: integer value to add
+- * @l: pointer to type local_t
+- *
+- * Atomically adds @i to @l and returns true
+- * if the result is negative, or false when
+- * result is greater than or equal to zero.
+- */
+-static __inline__ int local_add_negative(long i, local_t *l)
+-{
+- unsigned char c;
-
--#define TIOCSERCONFIG 0x5453 /* _IO('T', 83) */
--#define TIOCSERGWILD 0x80045454 /* _IOR('T', 84, int) 0x5454 */
--#define TIOCSERSWILD 0x40045455 /* _IOW('T', 85, int) 0x5455 */
--#define TIOCGLCKTRMIOS 0x5456
--#define TIOCSLCKTRMIOS 0x5457
--#define TIOCSERGSTRUCT 0x80d85458 /* _IOR('T', 88, struct async_struct) 0x5458 */ /* For debugging only */
--#define TIOCSERGETLSR 0x80045459 /* _IOR('T', 89, unsigned int) 0x5459 */ /* Get line status register */
+- __asm__ __volatile__(
+- "addq %2,%0; sets %1"
+- :"=m" (l->a.counter), "=qm" (c)
+- :"ir" (i), "m" (l->a.counter) : "memory");
+- return c;
+-}
-
--/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
--#define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+-/**
+- * local_add_return - add and return
+- * @i: integer value to add
+- * @l: pointer to type local_t
+- *
+- * Atomically adds @i to @l and returns @i + @l
+- */
+-static __inline__ long local_add_return(long i, local_t *l)
+-{
+- long __i = i;
+- __asm__ __volatile__(
+- "xaddq %0, %1;"
+- :"+r" (i), "+m" (l->a.counter)
+- : : "memory");
+- return i + __i;
+-}
-
--#define TIOCSERGETMULTI 0x80a8545a /* _IOR('T', 90, struct serial_multiport_struct) 0x545A */ /* Get multiport config */
--#define TIOCSERSETMULTI 0x40a8545b /* _IOW('T', 91, struct serial_multiport_struct) 0x545B */ /* Set multiport config */
+-static __inline__ long local_sub_return(long i, local_t *l)
+-{
+- return local_add_return(-i,l);
+-}
-
--#define TIOCMIWAIT 0x545c /* _IO('T', 92) wait for a change on serial input line(s) */
--#define TIOCGICOUNT 0x545d /* read serial port inline interrupt counts */
+-#define local_inc_return(l) (local_add_return(1,l))
+-#define local_dec_return(l) (local_sub_return(1,l))
-
--#endif /* __ASM_SH64_IOCTLS_H */
-diff --git a/include/asm-sh64/ipcbuf.h b/include/asm-sh64/ipcbuf.h
-deleted file mode 100644
-index c441e35..0000000
---- a/include/asm-sh64/ipcbuf.h
-+++ /dev/null
-@@ -1,40 +0,0 @@
--#ifndef __ASM_SH64_IPCBUF_H__
--#define __ASM_SH64_IPCBUF_H__
+-#define local_cmpxchg(l, o, n) \
+- (cmpxchg_local(&((l)->a.counter), (o), (n)))
+-/* Always has a lock prefix */
+-#define local_xchg(l, n) (xchg(&((l)->a.counter), (n)))
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/ipcbuf.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+-/**
+- * atomic_up_add_unless - add unless the number is a given value
+- * @l: pointer of type local_t
+- * @a: the amount to add to l...
+- * @u: ...unless l is equal to u.
- *
+- * Atomically adds @a to @l, so long as it was not @u.
+- * Returns non-zero if @l was not @u, and zero otherwise.
- */
+-#define local_add_unless(l, a, u) \
+-({ \
+- long c, old; \
+- c = local_read(l); \
+- for (;;) { \
+- if (unlikely(c == (u))) \
+- break; \
+- old = local_cmpxchg((l), c, c + (a)); \
+- if (likely(old == c)) \
+- break; \
+- c = old; \
+- } \
+- c != (u); \
+-})
+-#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
-
+-/* On x86-64 these are better than the atomic variants on SMP kernels
+- because they dont use a lock prefix. */
+-#define __local_inc(l) local_inc(l)
+-#define __local_dec(l) local_dec(l)
+-#define __local_add(i,l) local_add((i),(l))
+-#define __local_sub(i,l) local_sub((i),(l))
+-
+-/* Use these for per-cpu local_t variables: on some archs they are
+- * much more efficient than these naive implementations. Note they take
+- * a variable, not an address.
+- *
+- * This could be done better if we moved the per cpu data directly
+- * after GS.
+- */
+-
+-/* Need to disable preemption for the cpu local counters otherwise we could
+- still access a variable of a previous CPU in a non atomic way. */
+-#define cpu_local_wrap_v(l) \
+- ({ local_t res__; \
+- preempt_disable(); \
+- res__ = (l); \
+- preempt_enable(); \
+- res__; })
+-#define cpu_local_wrap(l) \
+- ({ preempt_disable(); \
+- l; \
+- preempt_enable(); }) \
+-
+-#define cpu_local_read(l) cpu_local_wrap_v(local_read(&__get_cpu_var(l)))
+-#define cpu_local_set(l, i) cpu_local_wrap(local_set(&__get_cpu_var(l), (i)))
+-#define cpu_local_inc(l) cpu_local_wrap(local_inc(&__get_cpu_var(l)))
+-#define cpu_local_dec(l) cpu_local_wrap(local_dec(&__get_cpu_var(l)))
+-#define cpu_local_add(i, l) cpu_local_wrap(local_add((i), &__get_cpu_var(l)))
+-#define cpu_local_sub(i, l) cpu_local_wrap(local_sub((i), &__get_cpu_var(l)))
+-
+-#define __cpu_local_inc(l) cpu_local_inc(l)
+-#define __cpu_local_dec(l) cpu_local_dec(l)
+-#define __cpu_local_add(i, l) cpu_local_add((i), (l))
+-#define __cpu_local_sub(i, l) cpu_local_sub((i), (l))
+-
+-#endif /* _ARCH_X8664_LOCAL_H */
+diff --git a/include/asm-x86/mach-bigsmp/mach_apic.h b/include/asm-x86/mach-bigsmp/mach_apic.h
+index ebd319f..6df235e 100644
+--- a/include/asm-x86/mach-bigsmp/mach_apic.h
++++ b/include/asm-x86/mach-bigsmp/mach_apic.h
+@@ -110,13 +110,13 @@ static inline int cpu_to_logical_apicid(int cpu)
+ }
+
+ static inline int mpc_apic_id(struct mpc_config_processor *m,
+- struct mpc_config_translation *translation_record)
++ struct mpc_config_translation *translation_record)
+ {
+- printk("Processor #%d %ld:%ld APIC version %d\n",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver);
++ printk("Processor #%d %u:%u APIC version %d\n",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver);
+ return m->mpc_apicid;
+ }
+
+diff --git a/include/asm-x86/mach-default/apm.h b/include/asm-x86/mach-default/apm.h
+index 1f730b8..989f34c 100644
+--- a/include/asm-x86/mach-default/apm.h
++++ b/include/asm-x86/mach-default/apm.h
+@@ -1,6 +1,4 @@
+ /*
+- * include/asm-i386/mach-default/apm.h
+- *
+ * Machine specific APM BIOS functions for generic.
+ * Split out from apm.c by Osamu Tomita <tomita at cinet.co.jp>
+ */
+diff --git a/include/asm-x86/mach-default/io_ports.h b/include/asm-x86/mach-default/io_ports.h
+deleted file mode 100644
+index 48540ba..0000000
+--- a/include/asm-x86/mach-default/io_ports.h
++++ /dev/null
+@@ -1,25 +0,0 @@
-/*
-- * The ipc64_perm structure for i386 architecture.
-- * Note extra padding because this structure is passed back and forth
-- * between kernel and user space.
+- * arch/i386/mach-generic/io_ports.h
- *
-- * Pad space is left for:
-- * - 32-bit mode_t and seq
-- * - 2 miscellaneous 32-bit values
+- * Machine specific IO port address definition for generic.
+- * Written by Osamu Tomita <tomita at cinet.co.jp>
- */
+-#ifndef _MACH_IO_PORTS_H
+-#define _MACH_IO_PORTS_H
-
--struct ipc64_perm
--{
-- __kernel_key_t key;
-- __kernel_uid32_t uid;
-- __kernel_gid32_t gid;
-- __kernel_uid32_t cuid;
-- __kernel_gid32_t cgid;
-- __kernel_mode_t mode;
-- unsigned short __pad1;
-- unsigned short seq;
-- unsigned short __pad2;
-- unsigned long __unused1;
-- unsigned long __unused2;
--};
--
--#endif /* __ASM_SH64_IPCBUF_H__ */
-diff --git a/include/asm-sh64/irq.h b/include/asm-sh64/irq.h
+-/* i8259A PIC registers */
+-#define PIC_MASTER_CMD 0x20
+-#define PIC_MASTER_IMR 0x21
+-#define PIC_MASTER_ISR PIC_MASTER_CMD
+-#define PIC_MASTER_POLL PIC_MASTER_ISR
+-#define PIC_MASTER_OCW3 PIC_MASTER_ISR
+-#define PIC_SLAVE_CMD 0xa0
+-#define PIC_SLAVE_IMR 0xa1
+-
+-/* i8259A PIC related value */
+-#define PIC_CASCADE_IR 2
+-#define MASTER_ICW4_DEFAULT 0x01
+-#define SLAVE_ICW4_DEFAULT 0x01
+-#define PIC_ICW4_AEOI 2
+-
+-#endif /* !_MACH_IO_PORTS_H */
+diff --git a/include/asm-x86/mach-default/mach_apic.h b/include/asm-x86/mach-default/mach_apic.h
+index 6db1c3b..e3c2c10 100644
+--- a/include/asm-x86/mach-default/mach_apic.h
++++ b/include/asm-x86/mach-default/mach_apic.h
+@@ -89,15 +89,15 @@ static inline physid_mask_t apicid_to_cpu_present(int phys_apicid)
+ return physid_mask_of_physid(phys_apicid);
+ }
+
+-static inline int mpc_apic_id(struct mpc_config_processor *m,
+- struct mpc_config_translation *translation_record)
+-{
+- printk("Processor #%d %ld:%ld APIC version %d\n",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver);
+- return (m->mpc_apicid);
++static inline int mpc_apic_id(struct mpc_config_processor *m,
++ struct mpc_config_translation *translation_record)
++{
++ printk("Processor #%d %u:%u APIC version %d\n",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver);
++ return m->mpc_apicid;
+ }
+
+ static inline void setup_portio_remap(void)
+diff --git a/include/asm-x86/mach-default/mach_time.h b/include/asm-x86/mach-default/mach_time.h
deleted file mode 100644
-index 5c9e6a8..0000000
---- a/include/asm-sh64/irq.h
+index 31eb5de..0000000
+--- a/include/asm-x86/mach-default/mach_time.h
+++ /dev/null
-@@ -1,144 +0,0 @@
--#ifndef __ASM_SH64_IRQ_H
--#define __ASM_SH64_IRQ_H
--
+@@ -1,111 +0,0 @@
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/irq.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * include/asm-i386/mach-default/mach_time.h
- *
+- * Machine specific set RTC function for generic.
+- * Split out from time.c by Osamu Tomita <tomita at cinet.co.jp>
- */
+-#ifndef _MACH_TIME_H
+-#define _MACH_TIME_H
+-
+-#include <linux/mc146818rtc.h>
-
+-/* for check timing call set_rtc_mmss() 500ms */
+-/* used in arch/i386/time.c::do_timer_interrupt() */
+-#define USEC_AFTER 500000
+-#define USEC_BEFORE 500000
-
-/*
-- * Encoded IRQs are not considered worth to be supported.
-- * Main reason is that there's no per-encoded-interrupt
-- * enable/disable mechanism (as there was in SH3/4).
-- * An all enabled/all disabled is worth only if there's
-- * a cascaded IC to disable/enable/ack on. Until such
-- * IC is available there's no such support.
-- *
-- * Presumably Encoded IRQs may use extra IRQs beyond 64,
-- * below. Some logic must be added to cope with IRQ_IRL?
-- * in an exclusive way.
+- * In order to set the CMOS clock precisely, set_rtc_mmss has to be
+- * called 500 ms after the second nowtime has started, because when
+- * nowtime is written into the registers of the CMOS clock, it will
+- * jump to the next second precisely 500 ms later. Check the Motorola
+- * MC146818A or Dallas DS12887 data sheet for details.
- *
-- * Priorities are set at Platform level, when IRQ_IRL0-3
-- * are set to 0 Encoding is allowed. Otherwise it's not
-- * allowed.
+- * BUG: This routine does not handle hour overflow properly; it just
+- * sets the minutes. Usually you'll only notice that after reboot!
- */
+-static inline int mach_set_rtc_mmss(unsigned long nowtime)
+-{
+- int retval = 0;
+- int real_seconds, real_minutes, cmos_minutes;
+- unsigned char save_control, save_freq_select;
-
--/* Independent IRQs */
--#define IRQ_IRL0 0
--#define IRQ_IRL1 1
--#define IRQ_IRL2 2
--#define IRQ_IRL3 3
--
--#define IRQ_INTA 4
--#define IRQ_INTB 5
--#define IRQ_INTC 6
--#define IRQ_INTD 7
+- save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
+- CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
-
--#define IRQ_SERR 12
--#define IRQ_ERR 13
--#define IRQ_PWR3 14
--#define IRQ_PWR2 15
--#define IRQ_PWR1 16
--#define IRQ_PWR0 17
+- save_freq_select = CMOS_READ(RTC_FREQ_SELECT); /* stop and reset prescaler */
+- CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
-
--#define IRQ_DMTE0 18
--#define IRQ_DMTE1 19
--#define IRQ_DMTE2 20
--#define IRQ_DMTE3 21
--#define IRQ_DAERR 22
+- cmos_minutes = CMOS_READ(RTC_MINUTES);
+- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+- BCD_TO_BIN(cmos_minutes);
-
--#define IRQ_TUNI0 32
--#define IRQ_TUNI1 33
--#define IRQ_TUNI2 34
--#define IRQ_TICPI2 35
+- /*
+- * since we're only adjusting minutes and seconds,
+- * don't interfere with hour overflow. This avoids
+- * messing with unknown time zones but requires your
+- * RTC not to be off by more than 15 minutes
+- */
+- real_seconds = nowtime % 60;
+- real_minutes = nowtime / 60;
+- if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
+- real_minutes += 30; /* correct for half hour time zone */
+- real_minutes %= 60;
-
--#define IRQ_ATI 36
--#define IRQ_PRI 37
--#define IRQ_CUI 38
+- if (abs(real_minutes - cmos_minutes) < 30) {
+- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+- BIN_TO_BCD(real_seconds);
+- BIN_TO_BCD(real_minutes);
+- }
+- CMOS_WRITE(real_seconds,RTC_SECONDS);
+- CMOS_WRITE(real_minutes,RTC_MINUTES);
+- } else {
+- printk(KERN_WARNING
+- "set_rtc_mmss: can't update from %d to %d\n",
+- cmos_minutes, real_minutes);
+- retval = -1;
+- }
-
--#define IRQ_ERI 39
--#define IRQ_RXI 40
--#define IRQ_BRI 41
--#define IRQ_TXI 42
+- /* The following flags have to be released exactly in this order,
+- * otherwise the DS12887 (popular MC146818A clone with integrated
+- * battery and quartz) will not reset the oscillator and will not
+- * update precisely 500 ms later. You won't find this mentioned in
+- * the Dallas Semiconductor data sheets, but who believes data
+- * sheets anyway ... -- Markus Kuhn
+- */
+- CMOS_WRITE(save_control, RTC_CONTROL);
+- CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
-
--#define IRQ_ITI 63
+- return retval;
+-}
-
--#define NR_INTC_IRQS 64
+-static inline unsigned long mach_get_cmos_time(void)
+-{
+- unsigned int year, mon, day, hour, min, sec;
-
--#ifdef CONFIG_SH_CAYMAN
--#define NR_EXT_IRQS 32
--#define START_EXT_IRQS 64
+- do {
+- sec = CMOS_READ(RTC_SECONDS);
+- min = CMOS_READ(RTC_MINUTES);
+- hour = CMOS_READ(RTC_HOURS);
+- day = CMOS_READ(RTC_DAY_OF_MONTH);
+- mon = CMOS_READ(RTC_MONTH);
+- year = CMOS_READ(RTC_YEAR);
+- } while (sec != CMOS_READ(RTC_SECONDS));
+-
+- if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+- BCD_TO_BIN(sec);
+- BCD_TO_BIN(min);
+- BCD_TO_BIN(hour);
+- BCD_TO_BIN(day);
+- BCD_TO_BIN(mon);
+- BCD_TO_BIN(year);
+- }
-
--/* PCI bus 2 uses encoded external interrupts on the Cayman board */
--#define IRQ_P2INTA (START_EXT_IRQS + (3*8) + 0)
--#define IRQ_P2INTB (START_EXT_IRQS + (3*8) + 1)
--#define IRQ_P2INTC (START_EXT_IRQS + (3*8) + 2)
--#define IRQ_P2INTD (START_EXT_IRQS + (3*8) + 3)
+- year += 1900;
+- if (year < 1970)
+- year += 100;
-
--#define I8042_KBD_IRQ (START_EXT_IRQS + 2)
--#define I8042_AUX_IRQ (START_EXT_IRQS + 6)
+- return mktime(year, mon, day, hour, min, sec);
+-}
-
--#define IRQ_CFCARD (START_EXT_IRQS + 7)
--#define IRQ_PCMCIA (0)
+-#endif /* !_MACH_TIME_H */
+diff --git a/include/asm-x86/mach-default/mach_timer.h b/include/asm-x86/mach-default/mach_timer.h
+index 807992f..4b76e53 100644
+--- a/include/asm-x86/mach-default/mach_timer.h
++++ b/include/asm-x86/mach-default/mach_timer.h
+@@ -1,6 +1,4 @@
+ /*
+- * include/asm-i386/mach-default/mach_timer.h
+- *
+ * Machine specific calibrate_tsc() for generic.
+ * Split out from timer_tsc.c by Osamu Tomita <tomita at cinet.co.jp>
+ */
+diff --git a/include/asm-x86/mach-default/mach_traps.h b/include/asm-x86/mach-default/mach_traps.h
+index 625438b..2fe7705 100644
+--- a/include/asm-x86/mach-default/mach_traps.h
++++ b/include/asm-x86/mach-default/mach_traps.h
+@@ -1,6 +1,4 @@
+ /*
+- * include/asm-i386/mach-default/mach_traps.h
+- *
+ * Machine specific NMI handling for generic.
+ * Split out from traps.c by Osamu Tomita <tomita at cinet.co.jp>
+ */
+diff --git a/include/asm-x86/mach-es7000/mach_apic.h b/include/asm-x86/mach-es7000/mach_apic.h
+index caec64b..d23011f 100644
+--- a/include/asm-x86/mach-es7000/mach_apic.h
++++ b/include/asm-x86/mach-es7000/mach_apic.h
+@@ -131,11 +131,11 @@ static inline int cpu_to_logical_apicid(int cpu)
+
+ static inline int mpc_apic_id(struct mpc_config_processor *m, struct mpc_config_translation *unused)
+ {
+- printk("Processor #%d %ld:%ld APIC version %d\n",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver);
++ printk("Processor #%d %u:%u APIC version %d\n",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver);
+ return (m->mpc_apicid);
+ }
+
+diff --git a/include/asm-x86/mach-generic/gpio.h b/include/asm-x86/mach-generic/gpio.h
+new file mode 100644
+index 0000000..5305dcb
+--- /dev/null
++++ b/include/asm-x86/mach-generic/gpio.h
+@@ -0,0 +1,15 @@
++#ifndef __ASM_MACH_GENERIC_GPIO_H
++#define __ASM_MACH_GENERIC_GPIO_H
++
++int gpio_request(unsigned gpio, const char *label);
++void gpio_free(unsigned gpio);
++int gpio_direction_input(unsigned gpio);
++int gpio_direction_output(unsigned gpio, int value);
++int gpio_get_value(unsigned gpio);
++void gpio_set_value(unsigned gpio, int value);
++int gpio_to_irq(unsigned gpio);
++int irq_to_gpio(unsigned irq);
++
++#include <asm-generic/gpio.h> /* cansleep wrappers */
++
++#endif /* __ASM_MACH_GENERIC_GPIO_H */
+diff --git a/include/asm-x86/mach-numaq/mach_apic.h b/include/asm-x86/mach-numaq/mach_apic.h
+index 5e5e7dd..17e183b 100644
+--- a/include/asm-x86/mach-numaq/mach_apic.h
++++ b/include/asm-x86/mach-numaq/mach_apic.h
+@@ -101,11 +101,11 @@ static inline int mpc_apic_id(struct mpc_config_processor *m,
+ int quad = translation_record->trans_quad;
+ int logical_apicid = generate_logical_apicid(quad, m->mpc_apicid);
+
+- printk("Processor #%d %ld:%ld APIC version %d (quad %d, apic %d)\n",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver, quad, logical_apicid);
++ printk("Processor #%d %u:%u APIC version %d (quad %d, apic %d)\n",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver, quad, logical_apicid);
+ return logical_apicid;
+ }
+
+diff --git a/include/asm-x86/mach-rdc321x/gpio.h b/include/asm-x86/mach-rdc321x/gpio.h
+new file mode 100644
+index 0000000..db31b92
+--- /dev/null
++++ b/include/asm-x86/mach-rdc321x/gpio.h
+@@ -0,0 +1,56 @@
++#ifndef _RDC321X_GPIO_H
++#define _RDC321X_GPIO_H
++
++extern int rdc_gpio_get_value(unsigned gpio);
++extern void rdc_gpio_set_value(unsigned gpio, int value);
++extern int rdc_gpio_direction_input(unsigned gpio);
++extern int rdc_gpio_direction_output(unsigned gpio, int value);
++
++
++/* Wrappers for the arch-neutral GPIO API */
++
++static inline int gpio_request(unsigned gpio, const char *label)
++{
++ /* Not yet implemented */
++ return 0;
++}
++
++static inline void gpio_free(unsigned gpio)
++{
++ /* Not yet implemented */
++}
++
++static inline int gpio_direction_input(unsigned gpio)
++{
++ return rdc_gpio_direction_input(gpio);
++}
++
++static inline int gpio_direction_output(unsigned gpio, int value)
++{
++ return rdc_gpio_direction_output(gpio, value);
++}
++
++static inline int gpio_get_value(unsigned gpio)
++{
++ return rdc_gpio_get_value(gpio);
++}
++
++static inline void gpio_set_value(unsigned gpio, int value)
++{
++ rdc_gpio_set_value(gpio, value);
++}
++
++static inline int gpio_to_irq(unsigned gpio)
++{
++ return gpio;
++}
++
++static inline int irq_to_gpio(unsigned irq)
++{
++ return irq;
++}
++
++/* For cansleep */
++#include <asm-generic/gpio.h>
++
++#endif /* _RDC321X_GPIO_H_ */
+diff --git a/include/asm-x86/mach-rdc321x/rdc321x_defs.h b/include/asm-x86/mach-rdc321x/rdc321x_defs.h
+new file mode 100644
+index 0000000..838ba8f
+--- /dev/null
++++ b/include/asm-x86/mach-rdc321x/rdc321x_defs.h
+@@ -0,0 +1,6 @@
++#define PFX "rdc321x: "
++
++/* General purpose configuration and data registers */
++#define RDC3210_CFGREG_ADDR 0x0CF8
++#define RDC3210_CFGREG_DATA 0x0CFC
++#define RDC_MAX_GPIO 0x3A
+diff --git a/include/asm-x86/mach-summit/mach_apic.h b/include/asm-x86/mach-summit/mach_apic.h
+index 732f776..062c97f 100644
+--- a/include/asm-x86/mach-summit/mach_apic.h
++++ b/include/asm-x86/mach-summit/mach_apic.h
+@@ -126,15 +126,15 @@ static inline physid_mask_t apicid_to_cpu_present(int apicid)
+ return physid_mask_of_physid(0);
+ }
+
+-static inline int mpc_apic_id(struct mpc_config_processor *m,
+- struct mpc_config_translation *translation_record)
+-{
+- printk("Processor #%d %ld:%ld APIC version %d\n",
+- m->mpc_apicid,
+- (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
+- (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
+- m->mpc_apicver);
+- return (m->mpc_apicid);
++static inline int mpc_apic_id(struct mpc_config_processor *m,
++ struct mpc_config_translation *translation_record)
++{
++ printk("Processor #%d %u:%u APIC version %d\n",
++ m->mpc_apicid,
++ (m->mpc_cpufeature & CPU_FAMILY_MASK) >> 8,
++ (m->mpc_cpufeature & CPU_MODEL_MASK) >> 4,
++ m->mpc_apicver);
++ return m->mpc_apicid;
+ }
+
+ static inline void setup_portio_remap(void)
+diff --git a/include/asm-x86/math_emu.h b/include/asm-x86/math_emu.h
+index a4b0aa3..9bf4ae9 100644
+--- a/include/asm-x86/math_emu.h
++++ b/include/asm-x86/math_emu.h
+@@ -1,11 +1,6 @@
+ #ifndef _I386_MATH_EMU_H
+ #define _I386_MATH_EMU_H
+
+-#include <asm/sigcontext.h>
-
--#else
--#define NR_EXT_IRQS 0
--#endif
+-int restore_i387_soft(void *s387, struct _fpstate __user *buf);
+-int save_i387_soft(void *s387, struct _fpstate __user *buf);
-
--#define NR_IRQS (NR_INTC_IRQS+NR_EXT_IRQS)
+ /* This structure matches the layout of the data saved to the stack
+ following a device-not-present interrupt, part of it saved
+ automatically by the 80386/80486.
+diff --git a/include/asm-x86/mc146818rtc.h b/include/asm-x86/mc146818rtc.h
+index 5c2bb66..cdd9f96 100644
+--- a/include/asm-x86/mc146818rtc.h
++++ b/include/asm-x86/mc146818rtc.h
+@@ -1,5 +1,100 @@
+-#ifdef CONFIG_X86_32
+-# include "mc146818rtc_32.h"
++/*
++ * Machine dependent access functions for RTC registers.
++ */
++#ifndef _ASM_MC146818RTC_H
++#define _ASM_MC146818RTC_H
++
++#include <asm/io.h>
++#include <asm/system.h>
++#include <asm/processor.h>
++#include <linux/mc146818rtc.h>
++
++#ifndef RTC_PORT
++#define RTC_PORT(x) (0x70 + (x))
++#define RTC_ALWAYS_BCD 1 /* RTC operates in binary mode */
++#endif
++
++#if defined(CONFIG_X86_32) && defined(__HAVE_ARCH_CMPXCHG)
++/*
++ * This lock provides nmi access to the CMOS/RTC registers. It has some
++ * special properties. It is owned by a CPU and stores the index register
++ * currently being accessed (if owned). The idea here is that it works
++ * like a normal lock (normally). However, in an NMI, the NMI code will
++ * first check to see if its CPU owns the lock, meaning that the NMI
++ * interrupted during the read/write of the device. If it does, it goes ahead
++ * and performs the access and then restores the index register. If it does
++ * not, it locks normally.
++ *
++ * Note that since we are working with NMIs, we need this lock even in
++ * a non-SMP machine just to mark that the lock is owned.
++ *
++ * This only works with compare-and-swap. There is no other way to
++ * atomically claim the lock and set the owner.
++ */
++#include <linux/smp.h>
++extern volatile unsigned long cmos_lock;
++
++/*
++ * All of these below must be called with interrupts off, preempt
++ * disabled, etc.
++ */
++
++static inline void lock_cmos(unsigned char reg)
++{
++ unsigned long new;
++ new = ((smp_processor_id()+1) << 8) | reg;
++ for (;;) {
++ if (cmos_lock) {
++ cpu_relax();
++ continue;
++ }
++ if (__cmpxchg(&cmos_lock, 0, new, sizeof(cmos_lock)) == 0)
++ return;
++ }
++}
++
++static inline void unlock_cmos(void)
++{
++ cmos_lock = 0;
++}
++static inline int do_i_have_lock_cmos(void)
++{
++ return (cmos_lock >> 8) == (smp_processor_id()+1);
++}
++static inline unsigned char current_lock_cmos_reg(void)
++{
++ return cmos_lock & 0xff;
++}
++#define lock_cmos_prefix(reg) \
++ do { \
++ unsigned long cmos_flags; \
++ local_irq_save(cmos_flags); \
++ lock_cmos(reg)
++#define lock_cmos_suffix(reg) \
++ unlock_cmos(); \
++ local_irq_restore(cmos_flags); \
++ } while (0)
+ #else
+-# include "mc146818rtc_64.h"
++#define lock_cmos_prefix(reg) do {} while (0)
++#define lock_cmos_suffix(reg) do {} while (0)
++#define lock_cmos(reg)
++#define unlock_cmos()
++#define do_i_have_lock_cmos() 0
++#define current_lock_cmos_reg() 0
+ #endif
++
++/*
++ * The yet supported machines all access the RTC index register via
++ * an ISA port access but the way to access the date register differs ...
++ */
++#define CMOS_READ(addr) rtc_cmos_read(addr)
++#define CMOS_WRITE(val, addr) rtc_cmos_write(val, addr)
++unsigned char rtc_cmos_read(unsigned char addr);
++void rtc_cmos_write(unsigned char val, unsigned char addr);
++
++extern int mach_set_rtc_mmss(unsigned long nowtime);
++extern unsigned long mach_get_cmos_time(void);
++
++#define RTC_IRQ 8
++
++#endif /* _ASM_MC146818RTC_H */
+diff --git a/include/asm-x86/mc146818rtc_32.h b/include/asm-x86/mc146818rtc_32.h
+deleted file mode 100644
+index 1613b42..0000000
+--- a/include/asm-x86/mc146818rtc_32.h
++++ /dev/null
+@@ -1,97 +0,0 @@
+-/*
+- * Machine dependent access functions for RTC registers.
+- */
+-#ifndef _ASM_MC146818RTC_H
+-#define _ASM_MC146818RTC_H
-
+-#include <asm/io.h>
+-#include <asm/system.h>
+-#include <asm/processor.h>
+-#include <linux/mc146818rtc.h>
-
--/* Default IRQs, fixed */
--#define TIMER_IRQ IRQ_TUNI0
--#define RTC_IRQ IRQ_CUI
+-#ifndef RTC_PORT
+-#define RTC_PORT(x) (0x70 + (x))
+-#define RTC_ALWAYS_BCD 1 /* RTC operates in binary mode */
+-#endif
-
--/* Default Priorities, Platform may choose differently */
--#define NO_PRIORITY 0 /* Disabled */
--#define TIMER_PRIORITY 2
--#define RTC_PRIORITY TIMER_PRIORITY
--#define SCIF_PRIORITY 3
--#define INTD_PRIORITY 3
--#define IRL3_PRIORITY 4
--#define INTC_PRIORITY 6
--#define IRL2_PRIORITY 7
--#define INTB_PRIORITY 9
--#define IRL1_PRIORITY 10
--#define INTA_PRIORITY 12
--#define IRL0_PRIORITY 13
--#define TOP_PRIORITY 15
+-#ifdef __HAVE_ARCH_CMPXCHG
+-/*
+- * This lock provides nmi access to the CMOS/RTC registers. It has some
+- * special properties. It is owned by a CPU and stores the index register
+- * currently being accessed (if owned). The idea here is that it works
+- * like a normal lock (normally). However, in an NMI, the NMI code will
+- * first check to see if its CPU owns the lock, meaning that the NMI
+- * interrupted during the read/write of the device. If it does, it goes ahead
+- * and performs the access and then restores the index register. If it does
+- * not, it locks normally.
+- *
+- * Note that since we are working with NMIs, we need this lock even in
+- * a non-SMP machine just to mark that the lock is owned.
+- *
+- * This only works with compare-and-swap. There is no other way to
+- * atomically claim the lock and set the owner.
+- */
+-#include <linux/smp.h>
+-extern volatile unsigned long cmos_lock;
-
--extern int intc_evt_to_irq[(0xE20/0x20)+1];
--int intc_irq_describe(char* p, int irq);
+-/*
+- * All of these below must be called with interrupts off, preempt
+- * disabled, etc.
+- */
-
--#define irq_canonicalize(irq) (irq)
+-static inline void lock_cmos(unsigned char reg)
+-{
+- unsigned long new;
+- new = ((smp_processor_id()+1) << 8) | reg;
+- for (;;) {
+- if (cmos_lock) {
+- cpu_relax();
+- continue;
+- }
+- if (__cmpxchg(&cmos_lock, 0, new, sizeof(cmos_lock)) == 0)
+- return;
+- }
+-}
-
--#ifdef CONFIG_SH_CAYMAN
--int cayman_irq_demux(int evt);
--int cayman_irq_describe(char* p, int irq);
--#define irq_demux(x) cayman_irq_demux(x)
--#define irq_describe(p, x) cayman_irq_describe(p, x)
+-static inline void unlock_cmos(void)
+-{
+- cmos_lock = 0;
+-}
+-static inline int do_i_have_lock_cmos(void)
+-{
+- return (cmos_lock >> 8) == (smp_processor_id()+1);
+-}
+-static inline unsigned char current_lock_cmos_reg(void)
+-{
+- return cmos_lock & 0xff;
+-}
+-#define lock_cmos_prefix(reg) \
+- do { \
+- unsigned long cmos_flags; \
+- local_irq_save(cmos_flags); \
+- lock_cmos(reg)
+-#define lock_cmos_suffix(reg) \
+- unlock_cmos(); \
+- local_irq_restore(cmos_flags); \
+- } while (0)
-#else
--#define irq_demux(x) (intc_evt_to_irq[x])
--#define irq_describe(p, x) intc_irq_describe(p, x)
+-#define lock_cmos_prefix(reg) do {} while (0)
+-#define lock_cmos_suffix(reg) do {} while (0)
+-#define lock_cmos(reg)
+-#define unlock_cmos()
+-#define do_i_have_lock_cmos() 0
+-#define current_lock_cmos_reg() 0
-#endif
-
-/*
-- * Function for "on chip support modules".
+- * The yet supported machines all access the RTC index register via
+- * an ISA port access but the way to access the date register differs ...
- */
+-#define CMOS_READ(addr) rtc_cmos_read(addr)
+-#define CMOS_WRITE(val, addr) rtc_cmos_write(val, addr)
+-unsigned char rtc_cmos_read(unsigned char addr);
+-void rtc_cmos_write(unsigned char val, unsigned char addr);
-
--/*
-- * SH-5 supports Priority based interrupts only.
-- * Interrupt priorities are defined at platform level.
-- */
--#define set_ipr_data(a, b, c, d)
--#define make_ipr_irq(a)
--#define make_imask_irq(a)
+-#define RTC_IRQ 8
-
--#endif /* __ASM_SH64_IRQ_H */
-diff --git a/include/asm-sh64/irq_regs.h b/include/asm-sh64/irq_regs.h
-deleted file mode 100644
-index 3dd9c0b..0000000
---- a/include/asm-sh64/irq_regs.h
-+++ /dev/null
-@@ -1 +0,0 @@
--#include <asm-generic/irq_regs.h>
-diff --git a/include/asm-sh64/kdebug.h b/include/asm-sh64/kdebug.h
-deleted file mode 100644
-index 6ece1b0..0000000
---- a/include/asm-sh64/kdebug.h
-+++ /dev/null
-@@ -1 +0,0 @@
--#include <asm-generic/kdebug.h>
-diff --git a/include/asm-sh64/keyboard.h b/include/asm-sh64/keyboard.h
+-#endif /* _ASM_MC146818RTC_H */
+diff --git a/include/asm-x86/mc146818rtc_64.h b/include/asm-x86/mc146818rtc_64.h
deleted file mode 100644
-index 0b01c3b..0000000
---- a/include/asm-sh64/keyboard.h
+index d6e3009..0000000
+--- a/include/asm-x86/mc146818rtc_64.h
+++ /dev/null
-@@ -1,70 +0,0 @@
--/*
-- * linux/include/asm-shmedia/keyboard.h
-- *
-- * Copied from i386 version:
-- * Created 3 Nov 1996 by Geert Uytterhoeven
-- */
--
+@@ -1,29 +0,0 @@
-/*
-- * This file contains the i386 architecture specific keyboard definitions
+- * Machine dependent access functions for RTC registers.
- */
+-#ifndef _ASM_MC146818RTC_H
+-#define _ASM_MC146818RTC_H
-
--#ifndef __ASM_SH64_KEYBOARD_H
--#define __ASM_SH64_KEYBOARD_H
--
--#ifdef __KERNEL__
--
--#include <linux/kernel.h>
--#include <linux/ioport.h>
-#include <asm/io.h>
-
--#ifdef CONFIG_SH_CAYMAN
--#define KEYBOARD_IRQ (START_EXT_IRQS + 2) /* SMSC SuperIO IRQ 1 */
+-#ifndef RTC_PORT
+-#define RTC_PORT(x) (0x70 + (x))
+-#define RTC_ALWAYS_BCD 1 /* RTC operates in binary mode */
-#endif
--#define DISABLE_KBD_DURING_INTERRUPTS 0
--
--extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
--extern int pckbd_getkeycode(unsigned int scancode);
--extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
-- char raw_mode);
--extern char pckbd_unexpected_up(unsigned char keycode);
--extern void pckbd_leds(unsigned char leds);
--extern void pckbd_init_hw(void);
--
--#define kbd_setkeycode pckbd_setkeycode
--#define kbd_getkeycode pckbd_getkeycode
--#define kbd_translate pckbd_translate
--#define kbd_unexpected_up pckbd_unexpected_up
--#define kbd_leds pckbd_leds
--#define kbd_init_hw pckbd_init_hw
--
--/* resource allocation */
--#define kbd_request_region()
--#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, \
-- "keyboard", NULL)
--
--/* How to access the keyboard macros on this platform. */
--#define kbd_read_input() inb(KBD_DATA_REG)
--#define kbd_read_status() inb(KBD_STATUS_REG)
--#define kbd_write_output(val) outb(val, KBD_DATA_REG)
--#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
--
--/* Some stoneage hardware needs delays after some operations. */
--#define kbd_pause() do { } while(0)
-
-/*
-- * Machine specific bits for the PS/2 driver
+- * The yet supported machines all access the RTC index register via
+- * an ISA port access but the way to access the date register differs ...
- */
+-#define CMOS_READ(addr) ({ \
+-outb_p((addr),RTC_PORT(0)); \
+-inb_p(RTC_PORT(1)); \
+-})
+-#define CMOS_WRITE(val, addr) ({ \
+-outb_p((addr),RTC_PORT(0)); \
+-outb_p((val),RTC_PORT(1)); \
+-})
-
--#ifdef CONFIG_SH_CAYMAN
--#define AUX_IRQ (START_EXT_IRQS + 6) /* SMSC SuperIO IRQ12 */
--#endif
--
--#define aux_request_irq(hand, dev_id) \
-- request_irq(AUX_IRQ, hand, IRQF_SHARED, "PS2 Mouse", dev_id)
+-#define RTC_IRQ 8
-
--#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
+-#endif /* _ASM_MC146818RTC_H */
+diff --git a/include/asm-x86/mce.h b/include/asm-x86/mce.h
+index df304fd..94f1fd7 100644
+--- a/include/asm-x86/mce.h
++++ b/include/asm-x86/mce.h
+@@ -13,7 +13,7 @@
+ #define MCG_CTL_P (1UL<<8) /* MCG_CAP register available */
+
+ #define MCG_STATUS_RIPV (1UL<<0) /* restart ip valid */
+-#define MCG_STATUS_EIPV (1UL<<1) /* eip points to correct instruction */
++#define MCG_STATUS_EIPV (1UL<<1) /* ip points to correct instruction */
+ #define MCG_STATUS_MCIP (1UL<<2) /* machine check in progress */
+
+ #define MCI_STATUS_VAL (1UL<<63) /* valid error */
+@@ -30,7 +30,7 @@ struct mce {
+ __u64 misc;
+ __u64 addr;
+ __u64 mcgstatus;
+- __u64 rip;
++ __u64 ip;
+ __u64 tsc; /* cpu time stamp counter */
+ __u64 res1; /* for future extension */
+ __u64 res2; /* dito. */
+@@ -85,14 +85,7 @@ struct mce_log {
+ #ifdef __KERNEL__
+
+ #ifdef CONFIG_X86_32
+-#ifdef CONFIG_X86_MCE
+-extern void mcheck_init(struct cpuinfo_x86 *c);
+-#else
+-#define mcheck_init(c) do {} while(0)
+-#endif
-
--#endif /* __KERNEL__ */
--#endif /* __ASM_SH64_KEYBOARD_H */
+ extern int mce_disabled;
-
-diff --git a/include/asm-sh64/kmap_types.h b/include/asm-sh64/kmap_types.h
+ #else /* CONFIG_X86_32 */
+
+ #include <asm/atomic.h>
+@@ -121,6 +114,13 @@ extern int mce_notify_user(void);
+
+ #endif /* !CONFIG_X86_32 */
+
++
++
++#ifdef CONFIG_X86_MCE
++extern void mcheck_init(struct cpuinfo_x86 *c);
++#else
++#define mcheck_init(c) do { } while (0)
++#endif
+ extern void stop_mce(void);
+ extern void restart_mce(void);
+
+diff --git a/include/asm-x86/mmsegment.h b/include/asm-x86/mmsegment.h
deleted file mode 100644
-index 2ae7c75..0000000
---- a/include/asm-sh64/kmap_types.h
+index d3f80c9..0000000
+--- a/include/asm-x86/mmsegment.h
+++ /dev/null
-@@ -1,7 +0,0 @@
--#ifndef __ASM_SH64_KMAP_TYPES_H
--#define __ASM_SH64_KMAP_TYPES_H
--
--#include <asm-sh/kmap_types.h>
+@@ -1,8 +0,0 @@
+-#ifndef _ASM_MMSEGMENT_H
+-#define _ASM_MMSEGMENT_H 1
-
--#endif /* __ASM_SH64_KMAP_TYPES_H */
+-typedef struct {
+- unsigned long seg;
+-} mm_segment_t;
-
-diff --git a/include/asm-sh64/linkage.h b/include/asm-sh64/linkage.h
-deleted file mode 100644
-index 1dd0e84..0000000
---- a/include/asm-sh64/linkage.h
-+++ /dev/null
-@@ -1,7 +0,0 @@
--#ifndef __ASM_SH64_LINKAGE_H
--#define __ASM_SH64_LINKAGE_H
+-#endif
+diff --git a/include/asm-x86/mmu.h b/include/asm-x86/mmu.h
+index 3f922c8..efa962c 100644
+--- a/include/asm-x86/mmu.h
++++ b/include/asm-x86/mmu.h
+@@ -20,4 +20,12 @@ typedef struct {
+ void *vdso;
+ } mm_context_t;
+
++#ifdef CONFIG_SMP
++void leave_mm(int cpu);
++#else
++static inline void leave_mm(int cpu)
++{
++}
++#endif
++
+ #endif /* _ASM_X86_MMU_H */
+diff --git a/include/asm-x86/mmu_context_32.h b/include/asm-x86/mmu_context_32.h
+index 7eb0b0b..8198d1c 100644
+--- a/include/asm-x86/mmu_context_32.h
++++ b/include/asm-x86/mmu_context_32.h
+@@ -32,8 +32,6 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+ #endif
+ }
+
+-void leave_mm(unsigned long cpu);
-
--#include <asm-sh/linkage.h>
+ static inline void switch_mm(struct mm_struct *prev,
+ struct mm_struct *next,
+ struct task_struct *tsk)
+diff --git a/include/asm-x86/mmu_context_64.h b/include/asm-x86/mmu_context_64.h
+index 0cce83a..ad6dc82 100644
+--- a/include/asm-x86/mmu_context_64.h
++++ b/include/asm-x86/mmu_context_64.h
+@@ -7,7 +7,9 @@
+ #include <asm/pda.h>
+ #include <asm/pgtable.h>
+ #include <asm/tlbflush.h>
++#ifndef CONFIG_PARAVIRT
+ #include <asm-generic/mm_hooks.h>
++#endif
+
+ /*
+ * possibly do the LDT unload here?
+@@ -23,11 +25,6 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+ #endif
+ }
+
+-static inline void load_cr3(pgd_t *pgd)
+-{
+- asm volatile("movq %0,%%cr3" :: "r" (__pa(pgd)) : "memory");
+-}
-
--#endif /* __ASM_SH64_LINKAGE_H */
+ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk)
+ {
+@@ -43,20 +40,20 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ load_cr3(next->pgd);
+
+ if (unlikely(next->context.ldt != prev->context.ldt))
+- load_LDT_nolock(&next->context, cpu);
++ load_LDT_nolock(&next->context);
+ }
+ #ifdef CONFIG_SMP
+ else {
+ write_pda(mmu_state, TLBSTATE_OK);
+ if (read_pda(active_mm) != next)
+- out_of_line_bug();
++ BUG();
+ if (!cpu_test_and_set(cpu, next->cpu_vm_mask)) {
+ /* We were in lazy tlb mode and leave_mm disabled
+ * tlb flush IPI delivery. We must reload CR3
+ * to make sure to use no freed page tables.
+ */
+ load_cr3(next->pgd);
+- load_LDT_nolock(&next->context, cpu);
++ load_LDT_nolock(&next->context);
+ }
+ }
+ #endif
+diff --git a/include/asm-x86/mmzone_32.h b/include/asm-x86/mmzone_32.h
+index 118e981..5d6f4ce 100644
+--- a/include/asm-x86/mmzone_32.h
++++ b/include/asm-x86/mmzone_32.h
+@@ -87,9 +87,6 @@ static inline int pfn_to_nid(unsigned long pfn)
+ __pgdat->node_start_pfn + __pgdat->node_spanned_pages; \
+ })
+
+-/* XXX: FIXME -- wli */
+-#define kern_addr_valid(kaddr) (0)
-
-diff --git a/include/asm-sh64/local.h b/include/asm-sh64/local.h
+ #ifdef CONFIG_X86_NUMAQ /* we have contiguous memory on NUMA-Q */
+ #define pfn_valid(pfn) ((pfn) < num_physpages)
+ #else
+diff --git a/include/asm-x86/mmzone_64.h b/include/asm-x86/mmzone_64.h
+index 19a8937..ebaf966 100644
+--- a/include/asm-x86/mmzone_64.h
++++ b/include/asm-x86/mmzone_64.h
+@@ -15,9 +15,9 @@
+ struct memnode {
+ int shift;
+ unsigned int mapsize;
+- u8 *map;
+- u8 embedded_map[64-16];
+-} ____cacheline_aligned; /* total size = 64 bytes */
++ s16 *map;
++ s16 embedded_map[64-8];
++} ____cacheline_aligned; /* total size = 128 bytes */
+ extern struct memnode memnode;
+ #define memnode_shift memnode.shift
+ #define memnodemap memnode.map
+@@ -41,11 +41,7 @@ static inline __attribute__((pure)) int phys_to_nid(unsigned long addr)
+ #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \
+ NODE_DATA(nid)->node_spanned_pages)
+
+-#ifdef CONFIG_DISCONTIGMEM
+-#define pfn_to_nid(pfn) phys_to_nid((unsigned long)(pfn) << PAGE_SHIFT)
+-
+-extern int pfn_valid(unsigned long pfn);
+-#endif
++extern int early_pfn_to_nid(unsigned long pfn);
+
+ #ifdef CONFIG_NUMA_EMU
+ #define FAKE_NODE_MIN_SIZE (64*1024*1024)
+diff --git a/include/asm-x86/module.h b/include/asm-x86/module.h
+index 2b2f18d..bfedb24 100644
+--- a/include/asm-x86/module.h
++++ b/include/asm-x86/module.h
+@@ -1,5 +1,82 @@
++#ifndef _ASM_MODULE_H
++#define _ASM_MODULE_H
++
++/* x86_32/64 are simple */
++struct mod_arch_specific {};
++
+ #ifdef CONFIG_X86_32
+-# include "module_32.h"
++# define Elf_Shdr Elf32_Shdr
++# define Elf_Sym Elf32_Sym
++# define Elf_Ehdr Elf32_Ehdr
+ #else
+-# include "module_64.h"
++# define Elf_Shdr Elf64_Shdr
++# define Elf_Sym Elf64_Sym
++# define Elf_Ehdr Elf64_Ehdr
+ #endif
++
++#ifdef CONFIG_X86_64
++/* X86_64 does not define MODULE_PROC_FAMILY */
++#elif defined CONFIG_M386
++#define MODULE_PROC_FAMILY "386 "
++#elif defined CONFIG_M486
++#define MODULE_PROC_FAMILY "486 "
++#elif defined CONFIG_M586
++#define MODULE_PROC_FAMILY "586 "
++#elif defined CONFIG_M586TSC
++#define MODULE_PROC_FAMILY "586TSC "
++#elif defined CONFIG_M586MMX
++#define MODULE_PROC_FAMILY "586MMX "
++#elif defined CONFIG_MCORE2
++#define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_M686
++#define MODULE_PROC_FAMILY "686 "
++#elif defined CONFIG_MPENTIUMII
++#define MODULE_PROC_FAMILY "PENTIUMII "
++#elif defined CONFIG_MPENTIUMIII
++#define MODULE_PROC_FAMILY "PENTIUMIII "
++#elif defined CONFIG_MPENTIUMM
++#define MODULE_PROC_FAMILY "PENTIUMM "
++#elif defined CONFIG_MPENTIUM4
++#define MODULE_PROC_FAMILY "PENTIUM4 "
++#elif defined CONFIG_MK6
++#define MODULE_PROC_FAMILY "K6 "
++#elif defined CONFIG_MK7
++#define MODULE_PROC_FAMILY "K7 "
++#elif defined CONFIG_MK8
++#define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_X86_ELAN
++#define MODULE_PROC_FAMILY "ELAN "
++#elif defined CONFIG_MCRUSOE
++#define MODULE_PROC_FAMILY "CRUSOE "
++#elif defined CONFIG_MEFFICEON
++#define MODULE_PROC_FAMILY "EFFICEON "
++#elif defined CONFIG_MWINCHIPC6
++#define MODULE_PROC_FAMILY "WINCHIPC6 "
++#elif defined CONFIG_MWINCHIP2
++#define MODULE_PROC_FAMILY "WINCHIP2 "
++#elif defined CONFIG_MWINCHIP3D
++#define MODULE_PROC_FAMILY "WINCHIP3D "
++#elif defined CONFIG_MCYRIXIII
++#define MODULE_PROC_FAMILY "CYRIXIII "
++#elif defined CONFIG_MVIAC3_2
++#define MODULE_PROC_FAMILY "VIAC3-2 "
++#elif defined CONFIG_MVIAC7
++#define MODULE_PROC_FAMILY "VIAC7 "
++#elif defined CONFIG_MGEODEGX1
++#define MODULE_PROC_FAMILY "GEODEGX1 "
++#elif defined CONFIG_MGEODE_LX
++#define MODULE_PROC_FAMILY "GEODE "
++#else
++#error unknown processor family
++#endif
++
++#ifdef CONFIG_X86_32
++# ifdef CONFIG_4KSTACKS
++# define MODULE_STACKSIZE "4KSTACKS "
++# else
++# define MODULE_STACKSIZE ""
++# endif
++# define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY MODULE_STACKSIZE
++#endif
++
++#endif /* _ASM_MODULE_H */
+diff --git a/include/asm-x86/module_32.h b/include/asm-x86/module_32.h
deleted file mode 100644
-index d9bd95d..0000000
---- a/include/asm-sh64/local.h
+index 7e5fda6..0000000
+--- a/include/asm-x86/module_32.h
+++ /dev/null
-@@ -1,7 +0,0 @@
--#ifndef __ASM_SH64_LOCAL_H
--#define __ASM_SH64_LOCAL_H
+@@ -1,75 +0,0 @@
+-#ifndef _ASM_I386_MODULE_H
+-#define _ASM_I386_MODULE_H
+-
+-/* x86 is simple */
+-struct mod_arch_specific
+-{
+-};
-
--#include <asm-generic/local.h>
+-#define Elf_Shdr Elf32_Shdr
+-#define Elf_Sym Elf32_Sym
+-#define Elf_Ehdr Elf32_Ehdr
+-
+-#ifdef CONFIG_M386
+-#define MODULE_PROC_FAMILY "386 "
+-#elif defined CONFIG_M486
+-#define MODULE_PROC_FAMILY "486 "
+-#elif defined CONFIG_M586
+-#define MODULE_PROC_FAMILY "586 "
+-#elif defined CONFIG_M586TSC
+-#define MODULE_PROC_FAMILY "586TSC "
+-#elif defined CONFIG_M586MMX
+-#define MODULE_PROC_FAMILY "586MMX "
+-#elif defined CONFIG_MCORE2
+-#define MODULE_PROC_FAMILY "CORE2 "
+-#elif defined CONFIG_M686
+-#define MODULE_PROC_FAMILY "686 "
+-#elif defined CONFIG_MPENTIUMII
+-#define MODULE_PROC_FAMILY "PENTIUMII "
+-#elif defined CONFIG_MPENTIUMIII
+-#define MODULE_PROC_FAMILY "PENTIUMIII "
+-#elif defined CONFIG_MPENTIUMM
+-#define MODULE_PROC_FAMILY "PENTIUMM "
+-#elif defined CONFIG_MPENTIUM4
+-#define MODULE_PROC_FAMILY "PENTIUM4 "
+-#elif defined CONFIG_MK6
+-#define MODULE_PROC_FAMILY "K6 "
+-#elif defined CONFIG_MK7
+-#define MODULE_PROC_FAMILY "K7 "
+-#elif defined CONFIG_MK8
+-#define MODULE_PROC_FAMILY "K8 "
+-#elif defined CONFIG_X86_ELAN
+-#define MODULE_PROC_FAMILY "ELAN "
+-#elif defined CONFIG_MCRUSOE
+-#define MODULE_PROC_FAMILY "CRUSOE "
+-#elif defined CONFIG_MEFFICEON
+-#define MODULE_PROC_FAMILY "EFFICEON "
+-#elif defined CONFIG_MWINCHIPC6
+-#define MODULE_PROC_FAMILY "WINCHIPC6 "
+-#elif defined CONFIG_MWINCHIP2
+-#define MODULE_PROC_FAMILY "WINCHIP2 "
+-#elif defined CONFIG_MWINCHIP3D
+-#define MODULE_PROC_FAMILY "WINCHIP3D "
+-#elif defined CONFIG_MCYRIXIII
+-#define MODULE_PROC_FAMILY "CYRIXIII "
+-#elif defined CONFIG_MVIAC3_2
+-#define MODULE_PROC_FAMILY "VIAC3-2 "
+-#elif defined CONFIG_MVIAC7
+-#define MODULE_PROC_FAMILY "VIAC7 "
+-#elif defined CONFIG_MGEODEGX1
+-#define MODULE_PROC_FAMILY "GEODEGX1 "
+-#elif defined CONFIG_MGEODE_LX
+-#define MODULE_PROC_FAMILY "GEODE "
+-#else
+-#error unknown processor family
+-#endif
-
--#endif /* __ASM_SH64_LOCAL_H */
+-#ifdef CONFIG_4KSTACKS
+-#define MODULE_STACKSIZE "4KSTACKS "
+-#else
+-#define MODULE_STACKSIZE ""
+-#endif
-
-diff --git a/include/asm-sh64/mc146818rtc.h b/include/asm-sh64/mc146818rtc.h
+-#define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY MODULE_STACKSIZE
+-
+-#endif /* _ASM_I386_MODULE_H */
+diff --git a/include/asm-x86/module_64.h b/include/asm-x86/module_64.h
deleted file mode 100644
-index 6cd3aec..0000000
---- a/include/asm-sh64/mc146818rtc.h
+index 67f8f69..0000000
+--- a/include/asm-x86/module_64.h
+++ /dev/null
-@@ -1,7 +0,0 @@
--/*
-- * linux/include/asm-sh64/mc146818rtc.h
-- *
--*/
+@@ -1,10 +0,0 @@
+-#ifndef _ASM_X8664_MODULE_H
+-#define _ASM_X8664_MODULE_H
-
--/* For now, an empty place-holder to get IDE to compile. */
+-struct mod_arch_specific {};
-
-diff --git a/include/asm-sh64/mman.h b/include/asm-sh64/mman.h
+-#define Elf_Shdr Elf64_Shdr
+-#define Elf_Sym Elf64_Sym
+-#define Elf_Ehdr Elf64_Ehdr
+-
+-#endif
+diff --git a/include/asm-x86/mpspec.h b/include/asm-x86/mpspec.h
+index 8f268e8..781ad74 100644
+--- a/include/asm-x86/mpspec.h
++++ b/include/asm-x86/mpspec.h
+@@ -1,5 +1,117 @@
++#ifndef _AM_X86_MPSPEC_H
++#define _AM_X86_MPSPEC_H
++
++#include <asm/mpspec_def.h>
++
+ #ifdef CONFIG_X86_32
+-# include "mpspec_32.h"
++#include <mach_mpspec.h>
++
++extern int mp_bus_id_to_type[MAX_MP_BUSSES];
++extern int mp_bus_id_to_node[MAX_MP_BUSSES];
++extern int mp_bus_id_to_local[MAX_MP_BUSSES];
++extern int quad_local_to_mp_bus_id[NR_CPUS/4][4];
++
++extern unsigned int def_to_bigsmp;
++extern int apic_version[MAX_APICS];
++extern u8 apicid_2_node[];
++extern int pic_mode;
++
++#define MAX_APICID 256
++
+ #else
+-# include "mpspec_64.h"
++
++#define MAX_MP_BUSSES 256
++/* Each PCI slot may be a combo card with its own bus. 4 IRQ pins per slot. */
++#define MAX_IRQ_SOURCES (MAX_MP_BUSSES * 4)
++
++extern DECLARE_BITMAP(mp_bus_not_pci, MAX_MP_BUSSES);
++
++#endif
++
++extern int mp_bus_id_to_pci_bus[MAX_MP_BUSSES];
++
++extern unsigned int boot_cpu_physical_apicid;
++extern int smp_found_config;
++extern int nr_ioapics;
++extern int mp_irq_entries;
++extern struct mpc_config_intsrc mp_irqs[MAX_IRQ_SOURCES];
++extern int mpc_default_type;
++extern unsigned long mp_lapic_addr;
++
++extern void find_smp_config(void);
++extern void get_smp_config(void);
++
++#ifdef CONFIG_ACPI
++extern void mp_register_lapic(u8 id, u8 enabled);
++extern void mp_register_lapic_address(u64 address);
++extern void mp_register_ioapic(u8 id, u32 address, u32 gsi_base);
++extern void mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,
++ u32 gsi);
++extern void mp_config_acpi_legacy_irqs(void);
++extern int mp_register_gsi(u32 gsi, int edge_level, int active_high_low);
++#endif /* CONFIG_ACPI */
++
++#define PHYSID_ARRAY_SIZE BITS_TO_LONGS(MAX_APICS)
++
++struct physid_mask
++{
++ unsigned long mask[PHYSID_ARRAY_SIZE];
++};
++
++typedef struct physid_mask physid_mask_t;
++
++#define physid_set(physid, map) set_bit(physid, (map).mask)
++#define physid_clear(physid, map) clear_bit(physid, (map).mask)
++#define physid_isset(physid, map) test_bit(physid, (map).mask)
++#define physid_test_and_set(physid, map) \
++ test_and_set_bit(physid, (map).mask)
++
++#define physids_and(dst, src1, src2) \
++ bitmap_and((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
++
++#define physids_or(dst, src1, src2) \
++ bitmap_or((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
++
++#define physids_clear(map) \
++ bitmap_zero((map).mask, MAX_APICS)
++
++#define physids_complement(dst, src) \
++ bitmap_complement((dst).mask, (src).mask, MAX_APICS)
++
++#define physids_empty(map) \
++ bitmap_empty((map).mask, MAX_APICS)
++
++#define physids_equal(map1, map2) \
++ bitmap_equal((map1).mask, (map2).mask, MAX_APICS)
++
++#define physids_weight(map) \
++ bitmap_weight((map).mask, MAX_APICS)
++
++#define physids_shift_right(d, s, n) \
++ bitmap_shift_right((d).mask, (s).mask, n, MAX_APICS)
++
++#define physids_shift_left(d, s, n) \
++ bitmap_shift_left((d).mask, (s).mask, n, MAX_APICS)
++
++#define physids_coerce(map) ((map).mask[0])
++
++#define physids_promote(physids) \
++ ({ \
++ physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
++ __physid_mask.mask[0] = physids; \
++ __physid_mask; \
++ })
++
++#define physid_mask_of_physid(physid) \
++ ({ \
++ physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
++ physid_set(physid, __physid_mask); \
++ __physid_mask; \
++ })
++
++#define PHYSID_MASK_ALL { {[0 ... PHYSID_ARRAY_SIZE-1] = ~0UL} }
++#define PHYSID_MASK_NONE { {[0 ... PHYSID_ARRAY_SIZE-1] = 0UL} }
++
++extern physid_mask_t phys_cpu_present_map;
++
+ #endif
+diff --git a/include/asm-x86/mpspec_32.h b/include/asm-x86/mpspec_32.h
deleted file mode 100644
-index a9be6d8..0000000
---- a/include/asm-sh64/mman.h
+index f213493..0000000
+--- a/include/asm-x86/mpspec_32.h
+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_MMAN_H
--#define __ASM_SH64_MMAN_H
+@@ -1,81 +0,0 @@
+-#ifndef __ASM_MPSPEC_H
+-#define __ASM_MPSPEC_H
-
--#include <asm-sh/mman.h>
+-#include <linux/cpumask.h>
+-#include <asm/mpspec_def.h>
+-#include <mach_mpspec.h>
-
--#endif /* __ASM_SH64_MMAN_H */
-diff --git a/include/asm-sh64/mmu.h b/include/asm-sh64/mmu.h
-deleted file mode 100644
-index ccd36d2..0000000
---- a/include/asm-sh64/mmu.h
-+++ /dev/null
-@@ -1,7 +0,0 @@
--#ifndef __MMU_H
--#define __MMU_H
+-extern int mp_bus_id_to_type [MAX_MP_BUSSES];
+-extern int mp_bus_id_to_node [MAX_MP_BUSSES];
+-extern int mp_bus_id_to_local [MAX_MP_BUSSES];
+-extern int quad_local_to_mp_bus_id [NR_CPUS/4][4];
+-extern int mp_bus_id_to_pci_bus [MAX_MP_BUSSES];
+-
+-extern unsigned int def_to_bigsmp;
+-extern unsigned int boot_cpu_physical_apicid;
+-extern int smp_found_config;
+-extern void find_smp_config (void);
+-extern void get_smp_config (void);
+-extern int nr_ioapics;
+-extern int apic_version [MAX_APICS];
+-extern int mp_irq_entries;
+-extern struct mpc_config_intsrc mp_irqs [MAX_IRQ_SOURCES];
+-extern int mpc_default_type;
+-extern unsigned long mp_lapic_addr;
+-extern int pic_mode;
+-
+-#ifdef CONFIG_ACPI
+-extern void mp_register_lapic (u8 id, u8 enabled);
+-extern void mp_register_lapic_address (u64 address);
+-extern void mp_register_ioapic (u8 id, u32 address, u32 gsi_base);
+-extern void mp_override_legacy_irq (u8 bus_irq, u8 polarity, u8 trigger, u32 gsi);
+-extern void mp_config_acpi_legacy_irqs (void);
+-extern int mp_register_gsi (u32 gsi, int edge_level, int active_high_low);
+-#endif /* CONFIG_ACPI */
+-
+-#define PHYSID_ARRAY_SIZE BITS_TO_LONGS(MAX_APICS)
+-
+-struct physid_mask
+-{
+- unsigned long mask[PHYSID_ARRAY_SIZE];
+-};
+-
+-typedef struct physid_mask physid_mask_t;
+-
+-#define physid_set(physid, map) set_bit(physid, (map).mask)
+-#define physid_clear(physid, map) clear_bit(physid, (map).mask)
+-#define physid_isset(physid, map) test_bit(physid, (map).mask)
+-#define physid_test_and_set(physid, map) test_and_set_bit(physid, (map).mask)
+-
+-#define physids_and(dst, src1, src2) bitmap_and((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
+-#define physids_or(dst, src1, src2) bitmap_or((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
+-#define physids_clear(map) bitmap_zero((map).mask, MAX_APICS)
+-#define physids_complement(dst, src) bitmap_complement((dst).mask,(src).mask, MAX_APICS)
+-#define physids_empty(map) bitmap_empty((map).mask, MAX_APICS)
+-#define physids_equal(map1, map2) bitmap_equal((map1).mask, (map2).mask, MAX_APICS)
+-#define physids_weight(map) bitmap_weight((map).mask, MAX_APICS)
+-#define physids_shift_right(d, s, n) bitmap_shift_right((d).mask, (s).mask, n, MAX_APICS)
+-#define physids_shift_left(d, s, n) bitmap_shift_left((d).mask, (s).mask, n, MAX_APICS)
+-#define physids_coerce(map) ((map).mask[0])
+-
+-#define physids_promote(physids) \
+- ({ \
+- physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
+- __physid_mask.mask[0] = physids; \
+- __physid_mask; \
+- })
+-
+-#define physid_mask_of_physid(physid) \
+- ({ \
+- physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
+- physid_set(physid, __physid_mask); \
+- __physid_mask; \
+- })
-
--/* Default "unsigned long" context */
--typedef unsigned long mm_context_t;
+-#define PHYSID_MASK_ALL { {[0 ... PHYSID_ARRAY_SIZE-1] = ~0UL} }
+-#define PHYSID_MASK_NONE { {[0 ... PHYSID_ARRAY_SIZE-1] = 0UL} }
+-
+-extern physid_mask_t phys_cpu_present_map;
-
-#endif
-diff --git a/include/asm-sh64/mmu_context.h b/include/asm-sh64/mmu_context.h
+-
+diff --git a/include/asm-x86/mpspec_64.h b/include/asm-x86/mpspec_64.h
deleted file mode 100644
-index 507bf72..0000000
---- a/include/asm-sh64/mmu_context.h
+index 017fddb..0000000
+--- a/include/asm-x86/mpspec_64.h
+++ /dev/null
-@@ -1,208 +0,0 @@
--#ifndef __ASM_SH64_MMU_CONTEXT_H
--#define __ASM_SH64_MMU_CONTEXT_H
+@@ -1,233 +0,0 @@
+-#ifndef __ASM_MPSPEC_H
+-#define __ASM_MPSPEC_H
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/mmu_context.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- *
-- * ASID handling idea taken from MIPS implementation.
-- *
+- * Structure definitions for SMP machines following the
+- * Intel Multiprocessing Specification 1.1 and 1.4.
- */
-
--#ifndef __ASSEMBLY__
+-/*
+- * This tag identifies where the SMP configuration
+- * information is.
+- */
+-
+-#define SMP_MAGIC_IDENT (('_'<<24)|('P'<<16)|('M'<<8)|'_')
-
-/*
-- * Cache of MMU context last used.
-- *
-- * The MMU "context" consists of two things:
-- * (a) TLB cache version (or cycle, top 24 bits of mmu_context_cache)
-- * (b) ASID (Address Space IDentifier, bottom 8 bits of mmu_context_cache)
+- * A maximum of 255 APICs with the current APIC ID architecture.
- */
--extern unsigned long mmu_context_cache;
+-#define MAX_APICS 255
-
--#include <asm/page.h>
--#include <asm-generic/mm_hooks.h>
+-struct intel_mp_floating
+-{
+- char mpf_signature[4]; /* "_MP_" */
+- unsigned int mpf_physptr; /* Configuration table address */
+- unsigned char mpf_length; /* Our length (paragraphs) */
+- unsigned char mpf_specification;/* Specification version */
+- unsigned char mpf_checksum; /* Checksum (makes sum 0) */
+- unsigned char mpf_feature1; /* Standard or configuration ? */
+- unsigned char mpf_feature2; /* Bit7 set for IMCR|PIC */
+- unsigned char mpf_feature3; /* Unused (0) */
+- unsigned char mpf_feature4; /* Unused (0) */
+- unsigned char mpf_feature5; /* Unused (0) */
+-};
+-
+-struct mp_config_table
+-{
+- char mpc_signature[4];
+-#define MPC_SIGNATURE "PCMP"
+- unsigned short mpc_length; /* Size of table */
+- char mpc_spec; /* 0x01 */
+- char mpc_checksum;
+- char mpc_oem[8];
+- char mpc_productid[12];
+- unsigned int mpc_oemptr; /* 0 if not present */
+- unsigned short mpc_oemsize; /* 0 if not present */
+- unsigned short mpc_oemcount;
+- unsigned int mpc_lapic; /* APIC address */
+- unsigned int reserved;
+-};
+-
+-/* Followed by entries */
+-
+-#define MP_PROCESSOR 0
+-#define MP_BUS 1
+-#define MP_IOAPIC 2
+-#define MP_INTSRC 3
+-#define MP_LINTSRC 4
+-
+-struct mpc_config_processor
+-{
+- unsigned char mpc_type;
+- unsigned char mpc_apicid; /* Local APIC number */
+- unsigned char mpc_apicver; /* Its versions */
+- unsigned char mpc_cpuflag;
+-#define CPU_ENABLED 1 /* Processor is available */
+-#define CPU_BOOTPROCESSOR 2 /* Processor is the BP */
+- unsigned int mpc_cpufeature;
+-#define CPU_STEPPING_MASK 0x0F
+-#define CPU_MODEL_MASK 0xF0
+-#define CPU_FAMILY_MASK 0xF00
+- unsigned int mpc_featureflag; /* CPUID feature value */
+- unsigned int mpc_reserved[2];
+-};
+-
+-struct mpc_config_bus
+-{
+- unsigned char mpc_type;
+- unsigned char mpc_busid;
+- unsigned char mpc_bustype[6];
+-};
+-
+-/* List of Bus Type string values, Intel MP Spec. */
+-#define BUSTYPE_EISA "EISA"
+-#define BUSTYPE_ISA "ISA"
+-#define BUSTYPE_INTERN "INTERN" /* Internal BUS */
+-#define BUSTYPE_MCA "MCA"
+-#define BUSTYPE_VL "VL" /* Local bus */
+-#define BUSTYPE_PCI "PCI"
+-#define BUSTYPE_PCMCIA "PCMCIA"
+-#define BUSTYPE_CBUS "CBUS"
+-#define BUSTYPE_CBUSII "CBUSII"
+-#define BUSTYPE_FUTURE "FUTURE"
+-#define BUSTYPE_MBI "MBI"
+-#define BUSTYPE_MBII "MBII"
+-#define BUSTYPE_MPI "MPI"
+-#define BUSTYPE_MPSA "MPSA"
+-#define BUSTYPE_NUBUS "NUBUS"
+-#define BUSTYPE_TC "TC"
+-#define BUSTYPE_VME "VME"
+-#define BUSTYPE_XPRESS "XPRESS"
+-
+-struct mpc_config_ioapic
+-{
+- unsigned char mpc_type;
+- unsigned char mpc_apicid;
+- unsigned char mpc_apicver;
+- unsigned char mpc_flags;
+-#define MPC_APIC_USABLE 0x01
+- unsigned int mpc_apicaddr;
+-};
+-
+-struct mpc_config_intsrc
+-{
+- unsigned char mpc_type;
+- unsigned char mpc_irqtype;
+- unsigned short mpc_irqflag;
+- unsigned char mpc_srcbus;
+- unsigned char mpc_srcbusirq;
+- unsigned char mpc_dstapic;
+- unsigned char mpc_dstirq;
+-};
+-
+-enum mp_irq_source_types {
+- mp_INT = 0,
+- mp_NMI = 1,
+- mp_SMI = 2,
+- mp_ExtINT = 3
+-};
+-
+-#define MP_IRQDIR_DEFAULT 0
+-#define MP_IRQDIR_HIGH 1
+-#define MP_IRQDIR_LOW 3
+-
+-
+-struct mpc_config_lintsrc
+-{
+- unsigned char mpc_type;
+- unsigned char mpc_irqtype;
+- unsigned short mpc_irqflag;
+- unsigned char mpc_srcbusid;
+- unsigned char mpc_srcbusirq;
+- unsigned char mpc_destapic;
+-#define MP_APIC_ALL 0xFF
+- unsigned char mpc_destapiclint;
+-};
+-
+-/*
+- * Default configurations
+- *
+- * 1 2 CPU ISA 82489DX
+- * 2 2 CPU EISA 82489DX neither IRQ 0 timer nor IRQ 13 DMA chaining
+- * 3 2 CPU EISA 82489DX
+- * 4 2 CPU MCA 82489DX
+- * 5 2 CPU ISA+PCI
+- * 6 2 CPU EISA+PCI
+- * 7 2 CPU MCA+PCI
+- */
+-
+-#define MAX_MP_BUSSES 256
+-/* Each PCI slot may be a combo card with its own bus. 4 IRQ pins per slot. */
+-#define MAX_IRQ_SOURCES (MAX_MP_BUSSES * 4)
+-extern DECLARE_BITMAP(mp_bus_not_pci, MAX_MP_BUSSES);
+-extern int mp_bus_id_to_pci_bus [MAX_MP_BUSSES];
+-
+-extern unsigned int boot_cpu_physical_apicid;
+-extern int smp_found_config;
+-extern void find_smp_config (void);
+-extern void get_smp_config (void);
+-extern int nr_ioapics;
+-extern unsigned char apic_version [MAX_APICS];
+-extern int mp_irq_entries;
+-extern struct mpc_config_intsrc mp_irqs [MAX_IRQ_SOURCES];
+-extern int mpc_default_type;
+-extern unsigned long mp_lapic_addr;
+-
+-#ifdef CONFIG_ACPI
+-extern void mp_register_lapic (u8 id, u8 enabled);
+-extern void mp_register_lapic_address (u64 address);
+-
+-extern void mp_register_ioapic (u8 id, u32 address, u32 gsi_base);
+-extern void mp_override_legacy_irq (u8 bus_irq, u8 polarity, u8 trigger, u32 gsi);
+-extern void mp_config_acpi_legacy_irqs (void);
+-extern int mp_register_gsi (u32 gsi, int triggering, int polarity);
+-#endif
+-
+-extern int using_apic_timer;
+-
+-#define PHYSID_ARRAY_SIZE BITS_TO_LONGS(MAX_APICS)
+-
+-struct physid_mask
+-{
+- unsigned long mask[PHYSID_ARRAY_SIZE];
+-};
+-
+-typedef struct physid_mask physid_mask_t;
+-
+-#define physid_set(physid, map) set_bit(physid, (map).mask)
+-#define physid_clear(physid, map) clear_bit(physid, (map).mask)
+-#define physid_isset(physid, map) test_bit(physid, (map).mask)
+-#define physid_test_and_set(physid, map) test_and_set_bit(physid, (map).mask)
+-
+-#define physids_and(dst, src1, src2) bitmap_and((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
+-#define physids_or(dst, src1, src2) bitmap_or((dst).mask, (src1).mask, (src2).mask, MAX_APICS)
+-#define physids_clear(map) bitmap_zero((map).mask, MAX_APICS)
+-#define physids_complement(dst, src) bitmap_complement((dst).mask, (src).mask, MAX_APICS)
+-#define physids_empty(map) bitmap_empty((map).mask, MAX_APICS)
+-#define physids_equal(map1, map2) bitmap_equal((map1).mask, (map2).mask, MAX_APICS)
+-#define physids_weight(map) bitmap_weight((map).mask, MAX_APICS)
+-#define physids_shift_right(d, s, n) bitmap_shift_right((d).mask, (s).mask, n, MAX_APICS)
+-#define physids_shift_left(d, s, n) bitmap_shift_left((d).mask, (s).mask, n, MAX_APICS)
+-#define physids_coerce(map) ((map).mask[0])
+-
+-#define physids_promote(physids) \
+- ({ \
+- physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
+- __physid_mask.mask[0] = physids; \
+- __physid_mask; \
+- })
+-
+-#define physid_mask_of_physid(physid) \
+- ({ \
+- physid_mask_t __physid_mask = PHYSID_MASK_NONE; \
+- physid_set(physid, __physid_mask); \
+- __physid_mask; \
+- })
+-
+-#define PHYSID_MASK_ALL { {[0 ... PHYSID_ARRAY_SIZE-1] = ~0UL} }
+-#define PHYSID_MASK_NONE { {[0 ... PHYSID_ARRAY_SIZE-1] = 0UL} }
+-
+-extern physid_mask_t phys_cpu_present_map;
+-
+-#endif
+-
+diff --git a/include/asm-x86/mpspec_def.h b/include/asm-x86/mpspec_def.h
+index 13bafb1..3504617 100644
+--- a/include/asm-x86/mpspec_def.h
++++ b/include/asm-x86/mpspec_def.h
+@@ -8,52 +8,68 @@
+
+ /*
+ * This tag identifies where the SMP configuration
+- * information is.
++ * information is.
+ */
+-
++
+ #define SMP_MAGIC_IDENT (('_'<<24)|('P'<<16)|('M'<<8)|'_')
+
+-#define MAX_MPC_ENTRY 1024
+-#define MAX_APICS 256
++#ifdef CONFIG_X86_32
++# define MAX_MPC_ENTRY 1024
++# define MAX_APICS 256
++#else
++/*
++ * A maximum of 255 APICs with the current APIC ID architecture.
++ */
++# define MAX_APICS 255
++#endif
+
+ struct intel_mp_floating
+ {
+- char mpf_signature[4]; /* "_MP_" */
+- unsigned long mpf_physptr; /* Configuration table address */
++ char mpf_signature[4]; /* "_MP_" */
++ unsigned int mpf_physptr; /* Configuration table address */
+ unsigned char mpf_length; /* Our length (paragraphs) */
+ unsigned char mpf_specification;/* Specification version */
+ unsigned char mpf_checksum; /* Checksum (makes sum 0) */
+- unsigned char mpf_feature1; /* Standard or configuration ? */
++ unsigned char mpf_feature1; /* Standard or configuration ? */
+ unsigned char mpf_feature2; /* Bit7 set for IMCR|PIC */
+ unsigned char mpf_feature3; /* Unused (0) */
+ unsigned char mpf_feature4; /* Unused (0) */
+ unsigned char mpf_feature5; /* Unused (0) */
+ };
+
++#define MPC_SIGNATURE "PCMP"
++
+ struct mp_config_table
+ {
+ char mpc_signature[4];
+-#define MPC_SIGNATURE "PCMP"
+ unsigned short mpc_length; /* Size of table */
+ char mpc_spec; /* 0x01 */
+ char mpc_checksum;
+ char mpc_oem[8];
+ char mpc_productid[12];
+- unsigned long mpc_oemptr; /* 0 if not present */
++ unsigned int mpc_oemptr; /* 0 if not present */
+ unsigned short mpc_oemsize; /* 0 if not present */
+ unsigned short mpc_oemcount;
+- unsigned long mpc_lapic; /* APIC address */
+- unsigned long reserved;
++ unsigned int mpc_lapic; /* APIC address */
++ unsigned int reserved;
+ };
+
+ /* Followed by entries */
+
+-#define MP_PROCESSOR 0
+-#define MP_BUS 1
+-#define MP_IOAPIC 2
+-#define MP_INTSRC 3
+-#define MP_LINTSRC 4
+-#define MP_TRANSLATION 192 /* Used by IBM NUMA-Q to describe node locality */
++#define MP_PROCESSOR 0
++#define MP_BUS 1
++#define MP_IOAPIC 2
++#define MP_INTSRC 3
++#define MP_LINTSRC 4
++/* Used by IBM NUMA-Q to describe node locality */
++#define MP_TRANSLATION 192
++
++#define CPU_ENABLED 1 /* Processor is available */
++#define CPU_BOOTPROCESSOR 2 /* Processor is the BP */
++
++#define CPU_STEPPING_MASK 0x000F
++#define CPU_MODEL_MASK 0x00F0
++#define CPU_FAMILY_MASK 0x0F00
+
+ struct mpc_config_processor
+ {
+@@ -61,14 +77,9 @@ struct mpc_config_processor
+ unsigned char mpc_apicid; /* Local APIC number */
+ unsigned char mpc_apicver; /* Its versions */
+ unsigned char mpc_cpuflag;
+-#define CPU_ENABLED 1 /* Processor is available */
+-#define CPU_BOOTPROCESSOR 2 /* Processor is the BP */
+- unsigned long mpc_cpufeature;
+-#define CPU_STEPPING_MASK 0x0F
+-#define CPU_MODEL_MASK 0xF0
+-#define CPU_FAMILY_MASK 0xF00
+- unsigned long mpc_featureflag; /* CPUID feature value */
+- unsigned long mpc_reserved[2];
++ unsigned int mpc_cpufeature;
++ unsigned int mpc_featureflag; /* CPUID feature value */
++ unsigned int mpc_reserved[2];
+ };
+
+ struct mpc_config_bus
+@@ -98,14 +109,15 @@ struct mpc_config_bus
+ #define BUSTYPE_VME "VME"
+ #define BUSTYPE_XPRESS "XPRESS"
+
++#define MPC_APIC_USABLE 0x01
++
+ struct mpc_config_ioapic
+ {
+ unsigned char mpc_type;
+ unsigned char mpc_apicid;
+ unsigned char mpc_apicver;
+ unsigned char mpc_flags;
+-#define MPC_APIC_USABLE 0x01
+- unsigned long mpc_apicaddr;
++ unsigned int mpc_apicaddr;
+ };
+
+ struct mpc_config_intsrc
+@@ -130,6 +142,7 @@ enum mp_irq_source_types {
+ #define MP_IRQDIR_HIGH 1
+ #define MP_IRQDIR_LOW 3
+
++#define MP_APIC_ALL 0xFF
+
+ struct mpc_config_lintsrc
+ {
+@@ -138,15 +151,15 @@ struct mpc_config_lintsrc
+ unsigned short mpc_irqflag;
+ unsigned char mpc_srcbusid;
+ unsigned char mpc_srcbusirq;
+- unsigned char mpc_destapic;
+-#define MP_APIC_ALL 0xFF
++ unsigned char mpc_destapic;
+ unsigned char mpc_destapiclint;
+ };
+
++#define MPC_OEM_SIGNATURE "_OEM"
++
+ struct mp_config_oemtable
+ {
+ char oem_signature[4];
+-#define MPC_OEM_SIGNATURE "_OEM"
+ unsigned short oem_length; /* Size of table */
+ char oem_rev; /* 0x01 */
+ char oem_checksum;
+@@ -155,13 +168,13 @@ struct mp_config_oemtable
+
+ struct mpc_config_translation
+ {
+- unsigned char mpc_type;
+- unsigned char trans_len;
+- unsigned char trans_type;
+- unsigned char trans_quad;
+- unsigned char trans_global;
+- unsigned char trans_local;
+- unsigned short trans_reserved;
++ unsigned char mpc_type;
++ unsigned char trans_len;
++ unsigned char trans_type;
++ unsigned char trans_quad;
++ unsigned char trans_global;
++ unsigned char trans_local;
++ unsigned short trans_reserved;
+ };
+
+ /*
+diff --git a/include/asm-x86/msr-index.h b/include/asm-x86/msr-index.h
+index a494473..fae118a 100644
+--- a/include/asm-x86/msr-index.h
++++ b/include/asm-x86/msr-index.h
+@@ -63,6 +63,13 @@
+ #define MSR_IA32_LASTINTFROMIP 0x000001dd
+ #define MSR_IA32_LASTINTTOIP 0x000001de
+
++/* DEBUGCTLMSR bits (others vary by model): */
++#define _DEBUGCTLMSR_LBR 0 /* last branch recording */
++#define _DEBUGCTLMSR_BTF 1 /* single-step on branches */
++
++#define DEBUGCTLMSR_LBR (1UL << _DEBUGCTLMSR_LBR)
++#define DEBUGCTLMSR_BTF (1UL << _DEBUGCTLMSR_BTF)
++
+ #define MSR_IA32_MC0_CTL 0x00000400
+ #define MSR_IA32_MC0_STATUS 0x00000401
+ #define MSR_IA32_MC0_ADDR 0x00000402
+@@ -88,6 +95,14 @@
+ #define MSR_AMD64_IBSDCPHYSAD 0xc0011039
+ #define MSR_AMD64_IBSCTL 0xc001103a
+
++/* Fam 10h MSRs */
++#define MSR_FAM10H_MMIO_CONF_BASE 0xc0010058
++#define FAM10H_MMIO_CONF_ENABLE (1<<0)
++#define FAM10H_MMIO_CONF_BUSRANGE_MASK 0xf
++#define FAM10H_MMIO_CONF_BUSRANGE_SHIFT 2
++#define FAM10H_MMIO_CONF_BASE_MASK 0xfffffff
++#define FAM10H_MMIO_CONF_BASE_SHIFT 20
++
+ /* K8 MSRs */
+ #define MSR_K8_TOP_MEM1 0xc001001a
+ #define MSR_K8_TOP_MEM2 0xc001001d
+diff --git a/include/asm-x86/msr.h b/include/asm-x86/msr.h
+index 80b0270..204a8a3 100644
+--- a/include/asm-x86/msr.h
++++ b/include/asm-x86/msr.h
+@@ -7,77 +7,109 @@
+ # include <linux/types.h>
+ #endif
+
+-#ifdef __i386__
-
--/* Current mm's pgd */
--extern pgd_t *mmu_pdtp_cache;
+ #ifdef __KERNEL__
+ #ifndef __ASSEMBLY__
+
++#include <asm/asm.h>
+ #include <asm/errno.h>
+
++static inline unsigned long long native_read_tscp(unsigned int *aux)
++{
++ unsigned long low, high;
++ asm volatile (".byte 0x0f,0x01,0xf9"
++ : "=a" (low), "=d" (high), "=c" (*aux));
++ return low | ((u64)high << 32);
++}
++
++/*
++ * i386 calling convention returns 64-bit value in edx:eax, while
++ * x86_64 returns at rax. Also, the "A" constraint does not really
++ * mean rdx:rax in x86_64, so we need specialized behaviour for each
++ * architecture
++ */
++#ifdef CONFIG_X86_64
++#define DECLARE_ARGS(val, low, high) unsigned low, high
++#define EAX_EDX_VAL(val, low, high) (low | ((u64)(high) << 32))
++#define EAX_EDX_ARGS(val, low, high) "a" (low), "d" (high)
++#define EAX_EDX_RET(val, low, high) "=a" (low), "=d" (high)
++#else
++#define DECLARE_ARGS(val, low, high) unsigned long long val
++#define EAX_EDX_VAL(val, low, high) (val)
++#define EAX_EDX_ARGS(val, low, high) "A" (val)
++#define EAX_EDX_RET(val, low, high) "=A" (val)
++#endif
++
+ static inline unsigned long long native_read_msr(unsigned int msr)
+ {
+- unsigned long long val;
++ DECLARE_ARGS(val, low, high);
+
+- asm volatile("rdmsr" : "=A" (val) : "c" (msr));
+- return val;
++ asm volatile("rdmsr" : EAX_EDX_RET(val, low, high) : "c" (msr));
++ return EAX_EDX_VAL(val, low, high);
+ }
+
+ static inline unsigned long long native_read_msr_safe(unsigned int msr,
+ int *err)
+ {
+- unsigned long long val;
++ DECLARE_ARGS(val, low, high);
+
+- asm volatile("2: rdmsr ; xorl %0,%0\n"
++ asm volatile("2: rdmsr ; xor %0,%0\n"
+ "1:\n\t"
+ ".section .fixup,\"ax\"\n\t"
+- "3: movl %3,%0 ; jmp 1b\n\t"
++ "3: mov %3,%0 ; jmp 1b\n\t"
+ ".previous\n\t"
+ ".section __ex_table,\"a\"\n"
+- " .align 4\n\t"
+- " .long 2b,3b\n\t"
++ _ASM_ALIGN "\n\t"
++ _ASM_PTR " 2b,3b\n\t"
+ ".previous"
+- : "=r" (*err), "=A" (val)
++ : "=r" (*err), EAX_EDX_RET(val, low, high)
+ : "c" (msr), "i" (-EFAULT));
-
--#define SR_ASID_MASK 0xffffffffff00ffffULL
--#define SR_ASID_SHIFT 16
+- return val;
++ return EAX_EDX_VAL(val, low, high);
+ }
+
+-static inline void native_write_msr(unsigned int msr, unsigned long long val)
++static inline void native_write_msr(unsigned int msr,
++ unsigned low, unsigned high)
+ {
+- asm volatile("wrmsr" : : "c" (msr), "A"(val));
++ asm volatile("wrmsr" : : "c" (msr), "a"(low), "d" (high));
+ }
+
+ static inline int native_write_msr_safe(unsigned int msr,
+- unsigned long long val)
++ unsigned low, unsigned high)
+ {
+ int err;
+- asm volatile("2: wrmsr ; xorl %0,%0\n"
++ asm volatile("2: wrmsr ; xor %0,%0\n"
+ "1:\n\t"
+ ".section .fixup,\"ax\"\n\t"
+- "3: movl %4,%0 ; jmp 1b\n\t"
++ "3: mov %4,%0 ; jmp 1b\n\t"
+ ".previous\n\t"
+ ".section __ex_table,\"a\"\n"
+- " .align 4\n\t"
+- " .long 2b,3b\n\t"
++ _ASM_ALIGN "\n\t"
++ _ASM_PTR " 2b,3b\n\t"
+ ".previous"
+ : "=a" (err)
+- : "c" (msr), "0" ((u32)val), "d" ((u32)(val>>32)),
++ : "c" (msr), "0" (low), "d" (high),
+ "i" (-EFAULT));
+ return err;
+ }
+
+-static inline unsigned long long native_read_tsc(void)
++extern unsigned long long native_read_tsc(void);
++
++static __always_inline unsigned long long __native_read_tsc(void)
+ {
+- unsigned long long val;
+- asm volatile("rdtsc" : "=A" (val));
+- return val;
++ DECLARE_ARGS(val, low, high);
++
++ rdtsc_barrier();
++ asm volatile("rdtsc" : EAX_EDX_RET(val, low, high));
++ rdtsc_barrier();
++
++ return EAX_EDX_VAL(val, low, high);
+ }
+
+-static inline unsigned long long native_read_pmc(void)
++static inline unsigned long long native_read_pmc(int counter)
+ {
+- unsigned long long val;
+- asm volatile("rdpmc" : "=A" (val));
+- return val;
++ DECLARE_ARGS(val, low, high);
++
++ asm volatile("rdpmc" : EAX_EDX_RET(val, low, high) : "c" (counter));
++ return EAX_EDX_VAL(val, low, high);
+ }
+
+ #ifdef CONFIG_PARAVIRT
+@@ -97,20 +129,21 @@ static inline unsigned long long native_read_pmc(void)
+ (val2) = (u32)(__val >> 32); \
+ } while(0)
+
+-static inline void wrmsr(u32 __msr, u32 __low, u32 __high)
++static inline void wrmsr(unsigned msr, unsigned low, unsigned high)
+ {
+- native_write_msr(__msr, ((u64)__high << 32) | __low);
++ native_write_msr(msr, low, high);
+ }
+
+ #define rdmsrl(msr,val) \
+ ((val) = native_read_msr(msr))
+
+-#define wrmsrl(msr,val) native_write_msr(msr, val)
++#define wrmsrl(msr, val) \
++ native_write_msr(msr, (u32)((u64)(val)), (u32)((u64)(val) >> 32))
+
+ /* wrmsr with exception handling */
+-static inline int wrmsr_safe(u32 __msr, u32 __low, u32 __high)
++static inline int wrmsr_safe(unsigned msr, unsigned low, unsigned high)
+ {
+- return native_write_msr_safe(__msr, ((u64)__high << 32) | __low);
++ return native_write_msr_safe(msr, low, high);
+ }
+
+ /* rdmsr with exception handling */
+@@ -129,204 +162,31 @@ static inline int wrmsr_safe(u32 __msr, u32 __low, u32 __high)
+ #define rdtscll(val) \
+ ((val) = native_read_tsc())
+
+-#define write_tsc(val1,val2) wrmsr(0x10, val1, val2)
-
--#define MMU_CONTEXT_ASID_MASK 0x000000ff
--#define MMU_CONTEXT_VERSION_MASK 0xffffff00
--#define MMU_CONTEXT_FIRST_VERSION 0x00000100
--#define NO_CONTEXT 0
+ #define rdpmc(counter,low,high) \
+ do { \
+- u64 _l = native_read_pmc(); \
++ u64 _l = native_read_pmc(counter); \
+ (low) = (u32)_l; \
+ (high) = (u32)(_l >> 32); \
+ } while(0)
+-#endif /* !CONFIG_PARAVIRT */
-
--/* ASID is 8-bit value, so it can't be 0x100 */
--#define MMU_NO_ASID 0x100
+-#ifdef CONFIG_SMP
+-void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+-void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+-int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+-int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+-#else /* CONFIG_SMP */
+-static inline void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+-{
+- rdmsr(msr_no, *l, *h);
+-}
+-static inline void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+-{
+- wrmsr(msr_no, l, h);
+-}
+-static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+-{
+- return rdmsr_safe(msr_no, l, h);
+-}
+-static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+-{
+- return wrmsr_safe(msr_no, l, h);
+-}
+-#endif /* CONFIG_SMP */
+-#endif /* ! __ASSEMBLY__ */
+-#endif /* __KERNEL__ */
-
+-#else /* __i386__ */
-
+-#ifndef __ASSEMBLY__
+-#include <linux/errno.h>
-/*
-- * Virtual Page Number mask
-- */
--#define MMU_VPN_MASK 0xfffff000
--
--static inline void
--get_new_mmu_context(struct mm_struct *mm)
--{
-- extern void flush_tlb_all(void);
-- extern void flush_cache_all(void);
+- * Access to machine-specific registers (available on 586 and better only)
+- * Note: the rd* operations modify the parameters directly (without using
+- * pointer indirection), this allows gcc to optimize better
+- */
+-
+-#define rdmsr(msr,val1,val2) \
+- __asm__ __volatile__("rdmsr" \
+- : "=a" (val1), "=d" (val2) \
+- : "c" (msr))
+-
+-
+-#define rdmsrl(msr,val) do { unsigned long a__,b__; \
+- __asm__ __volatile__("rdmsr" \
+- : "=a" (a__), "=d" (b__) \
+- : "c" (msr)); \
+- val = a__ | (b__<<32); \
+-} while(0)
+-
+-#define wrmsr(msr,val1,val2) \
+- __asm__ __volatile__("wrmsr" \
+- : /* no outputs */ \
+- : "c" (msr), "a" (val1), "d" (val2))
+-
+-#define wrmsrl(msr,val) wrmsr(msr,(__u32)((__u64)(val)),((__u64)(val))>>32)
+
+-#define rdtsc(low,high) \
+- __asm__ __volatile__("rdtsc" : "=a" (low), "=d" (high))
++#define rdtscp(low, high, aux) \
++ do { \
++ unsigned long long _val = native_read_tscp(&(aux)); \
++ (low) = (u32)_val; \
++ (high) = (u32)(_val >> 32); \
++ } while (0)
+
+-#define rdtscl(low) \
+- __asm__ __volatile__ ("rdtsc" : "=a" (low) : : "edx")
++#define rdtscpll(val, aux) (val) = native_read_tscp(&(aux))
+
+-#define rdtscp(low,high,aux) \
+- __asm__ __volatile__ (".byte 0x0f,0x01,0xf9" : "=a" (low), "=d" (high), "=c" (aux))
++#endif /* !CONFIG_PARAVIRT */
+
+-#define rdtscll(val) do { \
+- unsigned int __a,__d; \
+- __asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \
+- (val) = ((unsigned long)__a) | (((unsigned long)__d)<<32); \
+-} while(0)
+
+-#define rdtscpll(val, aux) do { \
+- unsigned long __a, __d; \
+- __asm__ __volatile__ (".byte 0x0f,0x01,0xf9" : "=a" (__a), "=d" (__d), "=c" (aux)); \
+- (val) = (__d << 32) | __a; \
+-} while (0)
++#define checking_wrmsrl(msr,val) wrmsr_safe(msr,(u32)(val),(u32)((val)>>32))
+
+ #define write_tsc(val1,val2) wrmsr(0x10, val1, val2)
+
+ #define write_rdtscp_aux(val) wrmsr(0xc0000103, val, 0)
+
+-#define rdpmc(counter,low,high) \
+- __asm__ __volatile__("rdpmc" \
+- : "=a" (low), "=d" (high) \
+- : "c" (counter))
-
-- unsigned long mc = ++mmu_context_cache;
-
-- if (!(mc & MMU_CONTEXT_ASID_MASK)) {
-- /* We exhaust ASID of this version.
-- Flush all TLB and start new cycle. */
-- flush_tlb_all();
-- /* We have to flush all caches as ASIDs are
-- used in cache */
-- flush_cache_all();
-- /* Fix version if needed.
-- Note that we avoid version #0/asid #0 to distingush NO_CONTEXT. */
-- if (!mc)
-- mmu_context_cache = mc = MMU_CONTEXT_FIRST_VERSION;
-- }
-- mm->context = mc;
+-static inline void cpuid(int op, unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
+-{
+- __asm__("cpuid"
+- : "=a" (*eax),
+- "=b" (*ebx),
+- "=c" (*ecx),
+- "=d" (*edx)
+- : "0" (op));
-}
-
--/*
-- * Get MMU context if needed.
-- */
--static __inline__ void
--get_mmu_context(struct mm_struct *mm)
+-/* Some CPUID calls want 'count' to be placed in ecx */
+-static inline void cpuid_count(int op, int count, int *eax, int *ebx, int *ecx,
+- int *edx)
-{
-- if (mm) {
-- unsigned long mc = mmu_context_cache;
-- /* Check if we have old version of context.
-- If it's old, we need to get new context with new version. */
-- if ((mm->context ^ mc) & MMU_CONTEXT_VERSION_MASK)
-- get_new_mmu_context(mm);
-- }
+- __asm__("cpuid"
+- : "=a" (*eax),
+- "=b" (*ebx),
+- "=c" (*ecx),
+- "=d" (*edx)
+- : "0" (op), "c" (count));
-}
-
-/*
-- * Initialize the context related info for a new mm_struct
-- * instance.
+- * CPUID functions returning a single datum
- */
--static inline int init_new_context(struct task_struct *tsk,
-- struct mm_struct *mm)
+-static inline unsigned int cpuid_eax(unsigned int op)
-{
-- mm->context = NO_CONTEXT;
+- unsigned int eax;
-
-- return 0;
+- __asm__("cpuid"
+- : "=a" (eax)
+- : "0" (op)
+- : "bx", "cx", "dx");
+- return eax;
-}
+-static inline unsigned int cpuid_ebx(unsigned int op)
+-{
+- unsigned int eax, ebx;
-
--/*
-- * Destroy context related info for an mm_struct that is about
-- * to be put to rest.
-- */
--static inline void destroy_context(struct mm_struct *mm)
+- __asm__("cpuid"
+- : "=a" (eax), "=b" (ebx)
+- : "0" (op)
+- : "cx", "dx" );
+- return ebx;
+-}
+-static inline unsigned int cpuid_ecx(unsigned int op)
-{
-- extern void flush_tlb_mm(struct mm_struct *mm);
+- unsigned int eax, ecx;
-
-- /* Well, at least free TLB entries */
-- flush_tlb_mm(mm);
+- __asm__("cpuid"
+- : "=a" (eax), "=c" (ecx)
+- : "0" (op)
+- : "bx", "dx" );
+- return ecx;
-}
+-static inline unsigned int cpuid_edx(unsigned int op)
+-{
+- unsigned int eax, edx;
-
--#endif /* __ASSEMBLY__ */
+- __asm__("cpuid"
+- : "=a" (eax), "=d" (edx)
+- : "0" (op)
+- : "bx", "cx");
+- return edx;
+-}
-
--/* Common defines */
--#define TLB_STEP 0x00000010
--#define TLB_PTEH 0x00000000
--#define TLB_PTEL 0x00000008
+-#ifdef __KERNEL__
-
--/* PTEH defines */
--#define PTEH_ASID_SHIFT 2
--#define PTEH_VALID 0x0000000000000001
--#define PTEH_SHARED 0x0000000000000002
--#define PTEH_MATCH_ASID 0x00000000000003ff
+-/* wrmsr with exception handling */
+-#define wrmsr_safe(msr,a,b) ({ int ret__; \
+- asm volatile("2: wrmsr ; xorl %0,%0\n" \
+- "1:\n\t" \
+- ".section .fixup,\"ax\"\n\t" \
+- "3: movl %4,%0 ; jmp 1b\n\t" \
+- ".previous\n\t" \
+- ".section __ex_table,\"a\"\n" \
+- " .align 8\n\t" \
+- " .quad 2b,3b\n\t" \
+- ".previous" \
+- : "=a" (ret__) \
+- : "c" (msr), "0" (a), "d" (b), "i" (-EFAULT)); \
+- ret__; })
+-
+-#define checking_wrmsrl(msr,val) wrmsr_safe(msr,(u32)(val),(u32)((val)>>32))
+-
+-#define rdmsr_safe(msr,a,b) \
+- ({ int ret__; \
+- asm volatile ("1: rdmsr\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3: movl %4,%0\n" \
+- " jmp 2b\n" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n" \
+- " .align 8\n" \
+- " .quad 1b,3b\n" \
+- ".previous":"=&bDS" (ret__), "=a"(*(a)), "=d"(*(b)) \
+- :"c"(msr), "i"(-EIO), "0"(0)); \
+- ret__; })
-
--#ifndef __ASSEMBLY__
--/* This has to be a common function because the next location to fill
-- * information is shared. */
--extern void __do_tlb_refill(unsigned long address, unsigned long long is_text_not_data, pte_t *pte);
+ #ifdef CONFIG_SMP
+ void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+ void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+@@ -350,9 +210,8 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+ return wrmsr_safe(msr_no, l, h);
+ }
+ #endif /* CONFIG_SMP */
+-#endif /* __KERNEL__ */
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLY__ */
++#endif /* __KERNEL__ */
+
+-#endif /* !__i386__ */
+
+ #endif
+diff --git a/include/asm-x86/mtrr.h b/include/asm-x86/mtrr.h
+index e8320e4..319d065 100644
+--- a/include/asm-x86/mtrr.h
++++ b/include/asm-x86/mtrr.h
+@@ -89,24 +89,25 @@ struct mtrr_gentry
+ extern void mtrr_save_fixed_ranges(void *);
+ extern void mtrr_save_state(void);
+ extern int mtrr_add (unsigned long base, unsigned long size,
+- unsigned int type, char increment);
++ unsigned int type, bool increment);
+ extern int mtrr_add_page (unsigned long base, unsigned long size,
+- unsigned int type, char increment);
++ unsigned int type, bool increment);
+ extern int mtrr_del (int reg, unsigned long base, unsigned long size);
+ extern int mtrr_del_page (int reg, unsigned long base, unsigned long size);
+ extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
+ extern void mtrr_ap_init(void);
+ extern void mtrr_bp_init(void);
++extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
+ # else
+ #define mtrr_save_fixed_ranges(arg) do {} while (0)
+ #define mtrr_save_state() do {} while (0)
+ static __inline__ int mtrr_add (unsigned long base, unsigned long size,
+- unsigned int type, char increment)
++ unsigned int type, bool increment)
+ {
+ return -ENODEV;
+ }
+ static __inline__ int mtrr_add_page (unsigned long base, unsigned long size,
+- unsigned int type, char increment)
++ unsigned int type, bool increment)
+ {
+ return -ENODEV;
+ }
+@@ -120,7 +121,10 @@ static __inline__ int mtrr_del_page (int reg, unsigned long base,
+ {
+ return -ENODEV;
+ }
+-
++static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
++{
++ return 0;
++}
+ static __inline__ void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi) {;}
+
+ #define mtrr_ap_init() do {} while (0)
+diff --git a/include/asm-x86/mutex_32.h b/include/asm-x86/mutex_32.h
+index 7a17d9e..bbeefb9 100644
+--- a/include/asm-x86/mutex_32.h
++++ b/include/asm-x86/mutex_32.h
+@@ -26,7 +26,7 @@ do { \
+ unsigned int dummy; \
+ \
+ typecheck(atomic_t *, count); \
+- typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
++ typecheck_fn(void (*)(atomic_t *), fail_fn); \
+ \
+ __asm__ __volatile__( \
+ LOCK_PREFIX " decl (%%eax) \n" \
+@@ -51,8 +51,7 @@ do { \
+ * or anything the slow path function returns
+ */
+ static inline int
+-__mutex_fastpath_lock_retval(atomic_t *count,
+- int fastcall (*fail_fn)(atomic_t *))
++__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+ {
+ if (unlikely(atomic_dec_return(count) < 0))
+ return fail_fn(count);
+@@ -78,7 +77,7 @@ do { \
+ unsigned int dummy; \
+ \
+ typecheck(atomic_t *, count); \
+- typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
++ typecheck_fn(void (*)(atomic_t *), fail_fn); \
+ \
+ __asm__ __volatile__( \
+ LOCK_PREFIX " incl (%%eax) \n" \
+diff --git a/include/asm-x86/nmi_32.h b/include/asm-x86/nmi_32.h
+index 70a958a..7206c7e 100644
+--- a/include/asm-x86/nmi_32.h
++++ b/include/asm-x86/nmi_32.h
+@@ -1,6 +1,3 @@
+-/*
+- * linux/include/asm-i386/nmi.h
+- */
+ #ifndef ASM_NMI_H
+ #define ASM_NMI_H
+
+diff --git a/include/asm-x86/nmi_64.h b/include/asm-x86/nmi_64.h
+index 65b6acf..2eeb74e 100644
+--- a/include/asm-x86/nmi_64.h
++++ b/include/asm-x86/nmi_64.h
+@@ -1,6 +1,3 @@
+-/*
+- * linux/include/asm-i386/nmi.h
+- */
+ #ifndef ASM_NMI_H
+ #define ASM_NMI_H
+
+@@ -41,7 +38,6 @@ extern void die_nmi(char *str, struct pt_regs *regs, int do_panic);
+
+ #define get_nmi_reason() inb(0x61)
+
+-extern int panic_on_timeout;
+ extern int unknown_nmi_panic;
+ extern int nmi_watchdog_enabled;
+
+@@ -60,7 +56,6 @@ extern void enable_timer_nmi_watchdog(void);
+ extern int nmi_watchdog_tick (struct pt_regs * regs, unsigned reason);
+
+ extern void nmi_watchdog_default(void);
+-extern int setup_nmi_watchdog(char *);
+
+ extern atomic_t nmi_active;
+ extern unsigned int nmi_watchdog;
+diff --git a/include/asm-x86/nops.h b/include/asm-x86/nops.h
+new file mode 100644
+index 0000000..fec025c
+--- /dev/null
++++ b/include/asm-x86/nops.h
+@@ -0,0 +1,90 @@
++#ifndef _ASM_NOPS_H
++#define _ASM_NOPS_H 1
++
++/* Define nops for use with alternative() */
++
++/* generic versions from gas */
++#define GENERIC_NOP1 ".byte 0x90\n"
++#define GENERIC_NOP2 ".byte 0x89,0xf6\n"
++#define GENERIC_NOP3 ".byte 0x8d,0x76,0x00\n"
++#define GENERIC_NOP4 ".byte 0x8d,0x74,0x26,0x00\n"
++#define GENERIC_NOP5 GENERIC_NOP1 GENERIC_NOP4
++#define GENERIC_NOP6 ".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
++#define GENERIC_NOP7 ".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
++#define GENERIC_NOP8 GENERIC_NOP1 GENERIC_NOP7
++
++/* Opteron 64bit nops */
++#define K8_NOP1 GENERIC_NOP1
++#define K8_NOP2 ".byte 0x66,0x90\n"
++#define K8_NOP3 ".byte 0x66,0x66,0x90\n"
++#define K8_NOP4 ".byte 0x66,0x66,0x66,0x90\n"
++#define K8_NOP5 K8_NOP3 K8_NOP2
++#define K8_NOP6 K8_NOP3 K8_NOP3
++#define K8_NOP7 K8_NOP4 K8_NOP3
++#define K8_NOP8 K8_NOP4 K8_NOP4
++
++/* K7 nops */
++/* uses eax dependencies (arbitrary choice) */
++#define K7_NOP1 GENERIC_NOP1
++#define K7_NOP2 ".byte 0x8b,0xc0\n"
++#define K7_NOP3 ".byte 0x8d,0x04,0x20\n"
++#define K7_NOP4 ".byte 0x8d,0x44,0x20,0x00\n"
++#define K7_NOP5 K7_NOP4 ASM_NOP1
++#define K7_NOP6 ".byte 0x8d,0x80,0,0,0,0\n"
++#define K7_NOP7 ".byte 0x8D,0x04,0x05,0,0,0,0\n"
++#define K7_NOP8 K7_NOP7 ASM_NOP1
++
++/* P6 nops */
++/* uses eax dependencies (Intel-recommended choice) */
++#define P6_NOP1 GENERIC_NOP1
++#define P6_NOP2 ".byte 0x66,0x90\n"
++#define P6_NOP3 ".byte 0x0f,0x1f,0x00\n"
++#define P6_NOP4 ".byte 0x0f,0x1f,0x40,0\n"
++#define P6_NOP5 ".byte 0x0f,0x1f,0x44,0x00,0\n"
++#define P6_NOP6 ".byte 0x66,0x0f,0x1f,0x44,0x00,0\n"
++#define P6_NOP7 ".byte 0x0f,0x1f,0x80,0,0,0,0\n"
++#define P6_NOP8 ".byte 0x0f,0x1f,0x84,0x00,0,0,0,0\n"
++
++#if defined(CONFIG_MK8)
++#define ASM_NOP1 K8_NOP1
++#define ASM_NOP2 K8_NOP2
++#define ASM_NOP3 K8_NOP3
++#define ASM_NOP4 K8_NOP4
++#define ASM_NOP5 K8_NOP5
++#define ASM_NOP6 K8_NOP6
++#define ASM_NOP7 K8_NOP7
++#define ASM_NOP8 K8_NOP8
++#elif defined(CONFIG_MK7)
++#define ASM_NOP1 K7_NOP1
++#define ASM_NOP2 K7_NOP2
++#define ASM_NOP3 K7_NOP3
++#define ASM_NOP4 K7_NOP4
++#define ASM_NOP5 K7_NOP5
++#define ASM_NOP6 K7_NOP6
++#define ASM_NOP7 K7_NOP7
++#define ASM_NOP8 K7_NOP8
++#elif defined(CONFIG_M686) || defined(CONFIG_MPENTIUMII) || \
++ defined(CONFIG_MPENTIUMIII) || defined(CONFIG_MPENTIUMM) || \
++ defined(CONFIG_MCORE2) || defined(CONFIG_PENTIUM4)
++#define ASM_NOP1 P6_NOP1
++#define ASM_NOP2 P6_NOP2
++#define ASM_NOP3 P6_NOP3
++#define ASM_NOP4 P6_NOP4
++#define ASM_NOP5 P6_NOP5
++#define ASM_NOP6 P6_NOP6
++#define ASM_NOP7 P6_NOP7
++#define ASM_NOP8 P6_NOP8
++#else
++#define ASM_NOP1 GENERIC_NOP1
++#define ASM_NOP2 GENERIC_NOP2
++#define ASM_NOP3 GENERIC_NOP3
++#define ASM_NOP4 GENERIC_NOP4
++#define ASM_NOP5 GENERIC_NOP5
++#define ASM_NOP6 GENERIC_NOP6
++#define ASM_NOP7 GENERIC_NOP7
++#define ASM_NOP8 GENERIC_NOP8
++#endif
++
++#define ASM_NOP_MAX 8
++
++#endif
+diff --git a/include/asm-x86/numa_32.h b/include/asm-x86/numa_32.h
+index 96fcb15..03d0f7a 100644
+--- a/include/asm-x86/numa_32.h
++++ b/include/asm-x86/numa_32.h
+@@ -1,3 +1,15 @@
++#ifndef _ASM_X86_32_NUMA_H
++#define _ASM_X86_32_NUMA_H 1
+
+-int pxm_to_nid(int pxm);
++extern int pxm_to_nid(int pxm);
+
++#ifdef CONFIG_NUMA
++extern void __init remap_numa_kva(void);
++extern void set_highmem_pages_init(int);
++#else
++static inline void remap_numa_kva(void)
++{
++}
++#endif
++
++#endif /* _ASM_X86_32_NUMA_H */
+diff --git a/include/asm-x86/numa_64.h b/include/asm-x86/numa_64.h
+index 0cc5c97..15fe07c 100644
+--- a/include/asm-x86/numa_64.h
++++ b/include/asm-x86/numa_64.h
+@@ -20,13 +20,19 @@ extern void numa_set_node(int cpu, int node);
+ extern void srat_reserve_add_area(int nodeid);
+ extern int hotadd_percent;
+
+-extern unsigned char apicid_to_node[MAX_LOCAL_APIC];
++extern s16 apicid_to_node[MAX_LOCAL_APIC];
++
++extern void numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn);
++extern unsigned long numa_free_all_bootmem(void);
++extern void setup_node_bootmem(int nodeid, unsigned long start,
++ unsigned long end);
++
+ #ifdef CONFIG_NUMA
+ extern void __init init_cpu_to_node(void);
+
+ static inline void clear_node_cpumask(int cpu)
+ {
+- clear_bit(cpu, &node_to_cpumask[cpu_to_node(cpu)]);
++ clear_bit(cpu, (unsigned long *)&node_to_cpumask_map[cpu_to_node(cpu)]);
+ }
+
+ #else
+@@ -34,6 +40,4 @@ static inline void clear_node_cpumask(int cpu)
+ #define clear_node_cpumask(cpu) do {} while (0)
+ #endif
+
+-#define NUMA_NO_NODE 0xff
-
--/* Profiling counter. */
--#ifdef CONFIG_SH64_PROC_TLB
--extern unsigned long long calls_to_do_fast_page_fault;
--#endif
+ #endif
+diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
+index a757eb2..c8b30ef 100644
+--- a/include/asm-x86/page.h
++++ b/include/asm-x86/page.h
+@@ -1,13 +1,183 @@
++#ifndef _ASM_X86_PAGE_H
++#define _ASM_X86_PAGE_H
++
++#include <linux/const.h>
++
++/* PAGE_SHIFT determines the page size */
++#define PAGE_SHIFT 12
++#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
++#define PAGE_MASK (~(PAGE_SIZE-1))
++
+ #ifdef __KERNEL__
+-# ifdef CONFIG_X86_32
+-# include "page_32.h"
+-# else
+-# include "page_64.h"
+-# endif
++
++#define PHYSICAL_PAGE_MASK (PAGE_MASK & __PHYSICAL_MASK)
++#define PTE_MASK (_AT(long, PHYSICAL_PAGE_MASK))
++
++#define LARGE_PAGE_SIZE (_AC(1,UL) << PMD_SHIFT)
++#define LARGE_PAGE_MASK (~(LARGE_PAGE_SIZE-1))
++
++#define HPAGE_SHIFT PMD_SHIFT
++#define HPAGE_SIZE (_AC(1,UL) << HPAGE_SHIFT)
++#define HPAGE_MASK (~(HPAGE_SIZE - 1))
++#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
++
++/* to align the pointer to the (next) page boundary */
++#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
++
++#define __PHYSICAL_MASK _AT(phys_addr_t, (_AC(1,ULL) << __PHYSICAL_MASK_SHIFT) - 1)
++#define __VIRTUAL_MASK ((_AC(1,UL) << __VIRTUAL_MASK_SHIFT) - 1)
++
++#ifndef __ASSEMBLY__
++#include <linux/types.h>
++#endif
++
++#ifdef CONFIG_X86_64
++#include <asm/page_64.h>
++#define max_pfn_mapped end_pfn_map
+ #else
+-# ifdef __i386__
+-# include "page_32.h"
+-# else
+-# include "page_64.h"
+-# endif
++#include <asm/page_32.h>
++#define max_pfn_mapped max_low_pfn
++#endif /* CONFIG_X86_64 */
++
++#define PAGE_OFFSET ((unsigned long)__PAGE_OFFSET)
++
++#define VM_DATA_DEFAULT_FLAGS \
++ (((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0 ) | \
++ VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
++
++
++#ifndef __ASSEMBLY__
++
++extern int page_is_ram(unsigned long pagenr);
++
++struct page;
++
++static void inline clear_user_page(void *page, unsigned long vaddr,
++ struct page *pg)
++{
++ clear_page(page);
++}
++
++static void inline copy_user_page(void *to, void *from, unsigned long vaddr,
++ struct page *topage)
++{
++ copy_page(to, from);
++}
++
++#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
++ alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
++#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
++
++typedef struct { pgdval_t pgd; } pgd_t;
++typedef struct { pgprotval_t pgprot; } pgprot_t;
++
++static inline pgd_t native_make_pgd(pgdval_t val)
++{
++ return (pgd_t) { val };
++}
++
++static inline pgdval_t native_pgd_val(pgd_t pgd)
++{
++ return pgd.pgd;
++}
++
++#if PAGETABLE_LEVELS >= 3
++#if PAGETABLE_LEVELS == 4
++typedef struct { pudval_t pud; } pud_t;
++
++static inline pud_t native_make_pud(pmdval_t val)
++{
++ return (pud_t) { val };
++}
++
++static inline pudval_t native_pud_val(pud_t pud)
++{
++ return pud.pud;
++}
++#else /* PAGETABLE_LEVELS == 3 */
++#include <asm-generic/pgtable-nopud.h>
++
++static inline pudval_t native_pud_val(pud_t pud)
++{
++ return native_pgd_val(pud.pgd);
++}
++#endif /* PAGETABLE_LEVELS == 4 */
++
++typedef struct { pmdval_t pmd; } pmd_t;
++
++static inline pmd_t native_make_pmd(pmdval_t val)
++{
++ return (pmd_t) { val };
++}
++
++static inline pmdval_t native_pmd_val(pmd_t pmd)
++{
++ return pmd.pmd;
++}
++#else /* PAGETABLE_LEVELS == 2 */
++#include <asm-generic/pgtable-nopmd.h>
++
++static inline pmdval_t native_pmd_val(pmd_t pmd)
++{
++ return native_pgd_val(pmd.pud.pgd);
++}
++#endif /* PAGETABLE_LEVELS >= 3 */
++
++static inline pte_t native_make_pte(pteval_t val)
++{
++ return (pte_t) { .pte = val };
++}
++
++static inline pteval_t native_pte_val(pte_t pte)
++{
++ return pte.pte;
++}
++
++#define pgprot_val(x) ((x).pgprot)
++#define __pgprot(x) ((pgprot_t) { (x) } )
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else /* !CONFIG_PARAVIRT */
++
++#define pgd_val(x) native_pgd_val(x)
++#define __pgd(x) native_make_pgd(x)
++
++#ifndef __PAGETABLE_PUD_FOLDED
++#define pud_val(x) native_pud_val(x)
++#define __pud(x) native_make_pud(x)
++#endif
++
++#ifndef __PAGETABLE_PMD_FOLDED
++#define pmd_val(x) native_pmd_val(x)
++#define __pmd(x) native_make_pmd(x)
+ #endif
++
++#define pte_val(x) native_pte_val(x)
++#define __pte(x) native_make_pte(x)
++
++#endif /* CONFIG_PARAVIRT */
++
++#define __pa(x) __phys_addr((unsigned long)(x))
++/* __pa_symbol should be used for C visible symbols.
++ This seems to be the official gcc blessed way to do such arithmetic. */
++#define __pa_symbol(x) __pa(__phys_reloc_hide((unsigned long)(x)))
++
++#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
++
++#define __boot_va(x) __va(x)
++#define __boot_pa(x) __pa(x)
++
++#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
++#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
++#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
++
++#endif /* __ASSEMBLY__ */
++
++#include <asm-generic/memory_model.h>
++#include <asm-generic/page.h>
++
++#define __HAVE_ARCH_GATE_AREA 1
++
++#endif /* __KERNEL__ */
++#endif /* _ASM_X86_PAGE_H */
+diff --git a/include/asm-x86/page_32.h b/include/asm-x86/page_32.h
+index 80ecc66..a6fd10f 100644
+--- a/include/asm-x86/page_32.h
++++ b/include/asm-x86/page_32.h
+@@ -1,206 +1,107 @@
+-#ifndef _I386_PAGE_H
+-#define _I386_PAGE_H
-
--static inline unsigned long get_asid(void)
--{
-- unsigned long long sr;
+-/* PAGE_SHIFT determines the page size */
+-#define PAGE_SHIFT 12
+-#define PAGE_SIZE (1UL << PAGE_SHIFT)
+-#define PAGE_MASK (~(PAGE_SIZE-1))
-
-- asm volatile ("getcon " __SR ", %0\n\t"
-- : "=r" (sr));
+-#define LARGE_PAGE_MASK (~(LARGE_PAGE_SIZE-1))
+-#define LARGE_PAGE_SIZE (1UL << PMD_SHIFT)
-
-- sr = (sr >> SR_ASID_SHIFT) & MMU_CONTEXT_ASID_MASK;
-- return (unsigned long) sr;
--}
+-#ifdef __KERNEL__
+-#ifndef __ASSEMBLY__
-
--/* Set ASID into SR */
--static inline void set_asid(unsigned long asid)
--{
-- unsigned long long sr, pc;
+-#ifdef CONFIG_X86_USE_3DNOW
-
-- asm volatile ("getcon " __SR ", %0" : "=r" (sr));
+-#include <asm/mmx.h>
-
-- sr = (sr & SR_ASID_MASK) | (asid << SR_ASID_SHIFT);
+-#define clear_page(page) mmx_clear_page((void *)(page))
+-#define copy_page(to,from) mmx_copy_page(to,from)
-
-- /*
-- * It is possible that this function may be inlined and so to avoid
-- * the assembler reporting duplicate symbols we make use of the gas trick
-- * of generating symbols using numerics and forward reference.
-- */
-- asm volatile ("movi 1, %1\n\t"
-- "shlli %1, 28, %1\n\t"
-- "or %0, %1, %1\n\t"
-- "putcon %1, " __SR "\n\t"
-- "putcon %0, " __SSR "\n\t"
-- "movi 1f, %1\n\t"
-- "ori %1, 1 , %1\n\t"
-- "putcon %1, " __SPC "\n\t"
-- "rte\n"
-- "1:\n\t"
-- : "=r" (sr), "=r" (pc) : "0" (sr));
--}
+-#else
++#ifndef _ASM_X86_PAGE_32_H
++#define _ASM_X86_PAGE_32_H
+
+ /*
+- * On older X86 processors it's not a win to use MMX here it seems.
+- * Maybe the K6-III ?
+- */
+-
+-#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
+-#define copy_page(to,from) memcpy((void *)(to), (void *)(from), PAGE_SIZE)
+-
+-#endif
+-
+-#define clear_user_page(page, vaddr, pg) clear_page(page)
+-#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
+-
+-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
+- alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
+-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
-
-/*
-- * After we have set current->mm to a new value, this activates
-- * the context for the new mm so we see the new mappings.
-- */
--static __inline__ void activate_context(struct mm_struct *mm)
+- * These are used to make use of C type-checking..
++ * This handles the memory map.
++ *
++ * A __PAGE_OFFSET of 0xC0000000 means that the kernel has
++ * a virtual address space of one gigabyte, which limits the
++ * amount of physical memory you can use to about 950MB.
++ *
++ * If you want more physical memory than this then see the CONFIG_HIGHMEM4G
++ * and CONFIG_HIGHMEM64G options in the kernel configuration.
+ */
+-extern int nx_enabled;
++#define __PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
+
+ #ifdef CONFIG_X86_PAE
+-typedef struct { unsigned long pte_low, pte_high; } pte_t;
+-typedef struct { unsigned long long pmd; } pmd_t;
+-typedef struct { unsigned long long pgd; } pgd_t;
+-typedef struct { unsigned long long pgprot; } pgprot_t;
++#define __PHYSICAL_MASK_SHIFT 36
++#define __VIRTUAL_MASK_SHIFT 32
++#define PAGETABLE_LEVELS 3
+
+-static inline unsigned long long native_pgd_val(pgd_t pgd)
-{
-- get_mmu_context(mm);
-- set_asid(mm->context & MMU_CONTEXT_ASID_MASK);
+- return pgd.pgd;
-}
-
--
--static __inline__ void switch_mm(struct mm_struct *prev,
-- struct mm_struct *next,
-- struct task_struct *tsk)
+-static inline unsigned long long native_pmd_val(pmd_t pmd)
-{
-- if (prev != next) {
-- mmu_pdtp_cache = next->pgd;
-- activate_context(next);
-- }
+- return pmd.pmd;
-}
-
--#define deactivate_mm(tsk,mm) do { } while (0)
--
--#define activate_mm(prev, next) \
-- switch_mm((prev),(next),NULL)
--
--static inline void
--enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+-static inline unsigned long long native_pte_val(pte_t pte)
-{
+- return pte.pte_low | ((unsigned long long)pte.pte_high << 32);
-}
-
--#endif /* __ASSEMBLY__ */
+-static inline pgd_t native_make_pgd(unsigned long long val)
+-{
+- return (pgd_t) { val };
+-}
-
--#endif /* __ASM_SH64_MMU_CONTEXT_H */
-diff --git a/include/asm-sh64/module.h b/include/asm-sh64/module.h
-deleted file mode 100644
-index c313650..0000000
---- a/include/asm-sh64/module.h
-+++ /dev/null
-@@ -1,20 +0,0 @@
--#ifndef __ASM_SH64_MODULE_H
--#define __ASM_SH64_MODULE_H
--/*
-- * This file contains the SH architecture specific module code.
-- */
+-static inline pmd_t native_make_pmd(unsigned long long val)
+-{
+- return (pmd_t) { val };
+-}
-
--struct mod_arch_specific {
-- /* empty */
--};
+-static inline pte_t native_make_pte(unsigned long long val)
+-{
+- return (pte_t) { .pte_low = val, .pte_high = (val >> 32) } ;
+-}
-
--#define Elf_Shdr Elf32_Shdr
--#define Elf_Sym Elf32_Sym
--#define Elf_Ehdr Elf32_Ehdr
+-#ifndef CONFIG_PARAVIRT
+-#define pmd_val(x) native_pmd_val(x)
+-#define __pmd(x) native_make_pmd(x)
+-#endif
-
--#define module_map(x) vmalloc(x)
--#define module_unmap(x) vfree(x)
--#define module_arch_init(x) (0)
--#define arch_init_modules(x) do { } while (0)
+-#define HPAGE_SHIFT 21
+-#include <asm-generic/pgtable-nopud.h>
++#ifndef __ASSEMBLY__
++typedef u64 pteval_t;
++typedef u64 pmdval_t;
++typedef u64 pudval_t;
++typedef u64 pgdval_t;
++typedef u64 pgprotval_t;
++typedef u64 phys_addr_t;
++
++typedef union {
++ struct {
++ unsigned long pte_low, pte_high;
++ };
++ pteval_t pte;
++} pte_t;
++#endif /* __ASSEMBLY__ */
+ #else /* !CONFIG_X86_PAE */
+-typedef struct { unsigned long pte_low; } pte_t;
+-typedef struct { unsigned long pgd; } pgd_t;
+-typedef struct { unsigned long pgprot; } pgprot_t;
+-#define boot_pte_t pte_t /* or would you rather have a typedef */
-
--#endif /* __ASM_SH64_MODULE_H */
-diff --git a/include/asm-sh64/msgbuf.h b/include/asm-sh64/msgbuf.h
-deleted file mode 100644
-index cf0494c..0000000
---- a/include/asm-sh64/msgbuf.h
-+++ /dev/null
-@@ -1,42 +0,0 @@
--#ifndef __ASM_SH64_MSGBUF_H
--#define __ASM_SH64_MSGBUF_H
+-static inline unsigned long native_pgd_val(pgd_t pgd)
+-{
+- return pgd.pgd;
+-}
++#define __PHYSICAL_MASK_SHIFT 32
++#define __VIRTUAL_MASK_SHIFT 32
++#define PAGETABLE_LEVELS 2
+
+-static inline unsigned long native_pte_val(pte_t pte)
+-{
+- return pte.pte_low;
+-}
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/msgbuf.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+-static inline pgd_t native_make_pgd(unsigned long val)
+-{
+- return (pgd_t) { val };
+-}
++#ifndef __ASSEMBLY__
++typedef unsigned long pteval_t;
++typedef unsigned long pmdval_t;
++typedef unsigned long pudval_t;
++typedef unsigned long pgdval_t;
++typedef unsigned long pgprotval_t;
++typedef unsigned long phys_addr_t;
+
+-static inline pte_t native_make_pte(unsigned long val)
+-{
+- return (pte_t) { .pte_low = val };
+-}
++typedef union { pteval_t pte, pte_low; } pte_t;
++typedef pte_t boot_pte_t;
+
+-#define HPAGE_SHIFT 22
+-#include <asm-generic/pgtable-nopmd.h>
++#endif /* __ASSEMBLY__ */
+ #endif /* CONFIG_X86_PAE */
+
+-#define PTE_MASK PAGE_MASK
-
--/*
-- * The msqid64_ds structure for i386 architecture.
-- * Note extra padding because this structure is passed back and forth
-- * between kernel and user space.
-- *
-- * Pad space is left for:
-- * - 64-bit time_t to solve y2038 problem
-- * - 2 miscellaneous 32-bit values
-- */
+ #ifdef CONFIG_HUGETLB_PAGE
+-#define HPAGE_SIZE ((1UL) << HPAGE_SHIFT)
+-#define HPAGE_MASK (~(HPAGE_SIZE - 1))
+-#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
+ #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+ #endif
+
+-#define pgprot_val(x) ((x).pgprot)
+-#define __pgprot(x) ((pgprot_t) { (x) } )
-
--struct msqid64_ds {
-- struct ipc64_perm msg_perm;
-- __kernel_time_t msg_stime; /* last msgsnd time */
-- unsigned long __unused1;
-- __kernel_time_t msg_rtime; /* last msgrcv time */
-- unsigned long __unused2;
-- __kernel_time_t msg_ctime; /* last change time */
-- unsigned long __unused3;
-- unsigned long msg_cbytes; /* current number of bytes on queue */
-- unsigned long msg_qnum; /* number of messages in queue */
-- unsigned long msg_qbytes; /* max number of bytes on queue */
-- __kernel_pid_t msg_lspid; /* pid of last msgsnd */
-- __kernel_pid_t msg_lrpid; /* last receive pid */
-- unsigned long __unused4;
-- unsigned long __unused5;
--};
+-#ifndef CONFIG_PARAVIRT
+-#define pgd_val(x) native_pgd_val(x)
+-#define __pgd(x) native_make_pgd(x)
+-#define pte_val(x) native_pte_val(x)
+-#define __pte(x) native_make_pte(x)
+-#endif
-
--#endif /* __ASM_SH64_MSGBUF_H */
-diff --git a/include/asm-sh64/mutex.h b/include/asm-sh64/mutex.h
-deleted file mode 100644
-index 458c1f7..0000000
---- a/include/asm-sh64/mutex.h
-+++ /dev/null
-@@ -1,9 +0,0 @@
--/*
-- * Pull in the generic implementation for the mutex fastpath.
-- *
-- * TODO: implement optimized primitives instead, or leave the generic
-- * implementation in place, or pick the atomic_xchg() based generic
-- * implementation. (see asm-generic/mutex-xchg.h for details)
-- */
+-#endif /* !__ASSEMBLY__ */
-
--#include <asm-generic/mutex-dec.h>
-diff --git a/include/asm-sh64/namei.h b/include/asm-sh64/namei.h
-deleted file mode 100644
-index 99d759a..0000000
---- a/include/asm-sh64/namei.h
-+++ /dev/null
-@@ -1,24 +0,0 @@
--#ifndef __ASM_SH64_NAMEI_H
--#define __ASM_SH64_NAMEI_H
+-/* to align the pointer to the (next) page boundary */
+-#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/namei.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- * Included from linux/fs/namei.c
+- * This handles the memory map.. We could make this a config
+- * option, but too many people screw it up, and too few need
+- * it.
+- *
+- * A __PAGE_OFFSET of 0xC0000000 means that the kernel has
+- * a virtual address space of one gigabyte, which limits the
+- * amount of physical memory you can use to about 950MB.
- *
+- * If you want more physical memory than this then see the CONFIG_HIGHMEM4G
+- * and CONFIG_HIGHMEM64G options in the kernel configuration.
- */
-
--/* This dummy routine maybe changed to something useful
-- * for /usr/gnemul/ emulation stuff.
-- * Look at asm-sparc/namei.h for details.
-- */
+ #ifndef __ASSEMBLY__
++#define __phys_addr(x) ((x)-PAGE_OFFSET)
++#define __phys_reloc_hide(x) RELOC_HIDE((x), 0)
++
++#ifdef CONFIG_FLATMEM
++#define pfn_valid(pfn) ((pfn) < max_mapnr)
++#endif /* CONFIG_FLATMEM */
+
+-struct vm_area_struct;
++extern int nx_enabled;
+
+ /*
+ * This much address space is reserved for vmalloc() and iomap()
+ * as well as fixmap mappings.
+ */
+ extern unsigned int __VMALLOC_RESERVE;
-
--#define __emul_prefix() NULL
+ extern int sysctl_legacy_va_layout;
+
+-extern int page_is_ram(unsigned long pagenr);
-
--#endif /* __ASM_SH64_NAMEI_H */
-diff --git a/include/asm-sh64/page.h b/include/asm-sh64/page.h
-deleted file mode 100644
-index 472089a..0000000
---- a/include/asm-sh64/page.h
-+++ /dev/null
-@@ -1,119 +0,0 @@
--#ifndef __ASM_SH64_PAGE_H
--#define __ASM_SH64_PAGE_H
+-#endif /* __ASSEMBLY__ */
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/page.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003, 2004 Paul Mundt
-- *
-- * benedict.gaster at superh.com 19th, 24th July 2002.
-- *
-- * Modified to take account of enabling for D-CACHE support.
-- *
-- */
+-#ifdef __ASSEMBLY__
+-#define __PAGE_OFFSET CONFIG_PAGE_OFFSET
+-#else
+-#define __PAGE_OFFSET ((unsigned long)CONFIG_PAGE_OFFSET)
+-#endif
-
-
+-#define PAGE_OFFSET ((unsigned long)__PAGE_OFFSET)
+ #define VMALLOC_RESERVE ((unsigned long)__VMALLOC_RESERVE)
+ #define MAXMEM (-__PAGE_OFFSET-__VMALLOC_RESERVE)
+-#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
+-/* __pa_symbol should be used for C visible symbols.
+- This seems to be the official gcc blessed way to do such arithmetic. */
+-#define __pa_symbol(x) __pa(RELOC_HIDE((unsigned long)(x),0))
+-#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
+-#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
+-#ifdef CONFIG_FLATMEM
+-#define pfn_valid(pfn) ((pfn) < max_mapnr)
+-#endif /* CONFIG_FLATMEM */
+-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+
+-#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
++#ifdef CONFIG_X86_USE_3DNOW
++#include <asm/mmx.h>
++
++static inline void clear_page(void *page)
++{
++ mmx_clear_page(page);
++}
+
+-#define VM_DATA_DEFAULT_FLAGS \
+- (VM_READ | VM_WRITE | \
+- ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0 ) | \
+- VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
++static inline void copy_page(void *to, void *from)
++{
++ mmx_copy_page(to, from);
++}
++#else /* !CONFIG_X86_USE_3DNOW */
++#include <linux/string.h>
+
+-#include <asm-generic/memory_model.h>
+-#include <asm-generic/page.h>
++static inline void clear_page(void *page)
++{
++ memset(page, 0, PAGE_SIZE);
++}
+
+-#define __HAVE_ARCH_GATE_AREA 1
+-#endif /* __KERNEL__ */
++static inline void copy_page(void *to, void *from)
++{
++ memcpy(to, from, PAGE_SIZE);
++}
++#endif /* CONFIG_X86_USE_3DNOW */
++#endif /* !__ASSEMBLY__ */
+
+-#endif /* _I386_PAGE_H */
++#endif /* _ASM_X86_PAGE_32_H */
+diff --git a/include/asm-x86/page_64.h b/include/asm-x86/page_64.h
+index c3b52bc..c1ac42d 100644
+--- a/include/asm-x86/page_64.h
++++ b/include/asm-x86/page_64.h
+@@ -1,15 +1,9 @@
+ #ifndef _X86_64_PAGE_H
+ #define _X86_64_PAGE_H
+
+-#include <linux/const.h>
++#define PAGETABLE_LEVELS 4
+
-/* PAGE_SHIFT determines the page size */
-#define PAGE_SHIFT 12
--#ifdef __ASSEMBLY__
--#define PAGE_SIZE 4096
--#else
--#define PAGE_SIZE (1UL << PAGE_SHIFT)
--#endif
+-#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
-#define PAGE_MASK (~(PAGE_SIZE-1))
--#define PTE_MASK PAGE_MASK
--
--#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
--#define HPAGE_SHIFT 16
--#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
--#define HPAGE_SHIFT 20
--#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
--#define HPAGE_SHIFT 29
--#endif
+-#define PHYSICAL_PAGE_MASK (~(PAGE_SIZE-1) & __PHYSICAL_MASK)
-
--#ifdef CONFIG_HUGETLB_PAGE
--#define HPAGE_SIZE (1UL << HPAGE_SHIFT)
--#define HPAGE_MASK (~(HPAGE_SIZE-1))
--#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT-PAGE_SHIFT)
--#define ARCH_HAS_SETCLEAR_HUGE_PTE
--#endif
+-#define THREAD_ORDER 1
++#define THREAD_ORDER 1
+ #define THREAD_SIZE (PAGE_SIZE << THREAD_ORDER)
+ #define CURRENT_MASK (~(THREAD_SIZE-1))
+
+@@ -29,54 +23,7 @@
+ #define MCE_STACK 5
+ #define N_EXCEPTION_STACKS 5 /* hw limit: 7 */
+
+-#define LARGE_PAGE_MASK (~(LARGE_PAGE_SIZE-1))
+-#define LARGE_PAGE_SIZE (_AC(1,UL) << PMD_SHIFT)
+-
+-#define HPAGE_SHIFT PMD_SHIFT
+-#define HPAGE_SIZE (_AC(1,UL) << HPAGE_SHIFT)
+-#define HPAGE_MASK (~(HPAGE_SIZE - 1))
+-#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
-
-#ifdef __KERNEL__
-#ifndef __ASSEMBLY__
-
--extern struct page *mem_map;
--extern void sh64_page_clear(void *page);
--extern void sh64_page_copy(void *from, void *to);
--
--#define clear_page(page) sh64_page_clear(page)
--#define copy_page(to,from) sh64_page_copy(from, to)
+-extern unsigned long end_pfn;
-
--#if defined(CONFIG_DCACHE_DISABLED)
+-void clear_page(void *);
+-void copy_page(void *, void *);
-
-#define clear_user_page(page, vaddr, pg) clear_page(page)
-#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
-
--#else
--
--extern void clear_user_page(void *to, unsigned long address, struct page *pg);
--extern void copy_user_page(void *to, void *from, unsigned long address, struct page *pg);
--
--#endif /* defined(CONFIG_DCACHE_DISABLED) */
--
+-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
+- alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
+-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
-/*
- * These are used to make use of C type-checking..
- */
--typedef struct { unsigned long long pte; } pte_t;
+-typedef struct { unsigned long pte; } pte_t;
-typedef struct { unsigned long pmd; } pmd_t;
+-typedef struct { unsigned long pud; } pud_t;
-typedef struct { unsigned long pgd; } pgd_t;
+-#define PTE_MASK PHYSICAL_PAGE_MASK
+-
-typedef struct { unsigned long pgprot; } pgprot_t;
-
+-extern unsigned long phys_base;
+-
-#define pte_val(x) ((x).pte)
-#define pmd_val(x) ((x).pmd)
+-#define pud_val(x) ((x).pud)
-#define pgd_val(x) ((x).pgd)
-#define pgprot_val(x) ((x).pgprot)
-
-#define __pte(x) ((pte_t) { (x) } )
-#define __pmd(x) ((pmd_t) { (x) } )
+-#define __pud(x) ((pud_t) { (x) } )
-#define __pgd(x) ((pgd_t) { (x) } )
-#define __pgprot(x) ((pgprot_t) { (x) } )
-
-#endif /* !__ASSEMBLY__ */
++#define __PAGE_OFFSET _AC(0xffff810000000000, UL)
+
+ #define __PHYSICAL_START CONFIG_PHYSICAL_START
+ #define __KERNEL_ALIGN 0x200000
+@@ -92,53 +39,44 @@ extern unsigned long phys_base;
+
+ #define __START_KERNEL (__START_KERNEL_map + __PHYSICAL_START)
+ #define __START_KERNEL_map _AC(0xffffffff80000000, UL)
+-#define __PAGE_OFFSET _AC(0xffff810000000000, UL)
-
-/* to align the pointer to the (next) page boundary */
-#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+
+ /* See Documentation/x86_64/mm.txt for a description of the memory map. */
+ #define __PHYSICAL_MASK_SHIFT 46
+-#define __PHYSICAL_MASK ((_AC(1,UL) << __PHYSICAL_MASK_SHIFT) - 1)
+ #define __VIRTUAL_MASK_SHIFT 48
+-#define __VIRTUAL_MASK ((_AC(1,UL) << __VIRTUAL_MASK_SHIFT) - 1)
+
+ #define KERNEL_TEXT_SIZE (40*1024*1024)
+ #define KERNEL_TEXT_START _AC(0xffffffff80000000, UL)
+-#define PAGE_OFFSET __PAGE_OFFSET
+
+ #ifndef __ASSEMBLY__
++void clear_page(void *page);
++void copy_page(void *to, void *from);
+
+-#include <asm/bug.h>
++extern unsigned long end_pfn;
++extern unsigned long end_pfn_map;
++extern unsigned long phys_base;
+
+ extern unsigned long __phys_addr(unsigned long);
++#define __phys_reloc_hide(x) (x)
+
+-#endif /* __ASSEMBLY__ */
-
--/*
-- * Kconfig defined.
-- */
--#define __MEMORY_START (CONFIG_MEMORY_START)
--#define PAGE_OFFSET (CONFIG_CACHED_MEMORY_OFFSET)
+-#define __pa(x) __phys_addr((unsigned long)(x))
+-#define __pa_symbol(x) __phys_addr((unsigned long)(x))
-
--#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
-#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
--#define MAP_NR(addr) ((__pa(addr)-__MEMORY_START) >> PAGE_SHIFT)
--#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
--
--#define phys_to_page(phys) (mem_map + (((phys) - __MEMORY_START) >> PAGE_SHIFT))
--#define page_to_phys(page) (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
+-#define __boot_va(x) __va(x)
+-#define __boot_pa(x) __pa(x)
+-#ifdef CONFIG_FLATMEM
+-#define pfn_valid(pfn) ((pfn) < end_pfn)
+-#endif
-
--/* PFN start number, because of __MEMORY_START */
--#define PFN_START (__MEMORY_START >> PAGE_SHIFT)
--#define ARCH_PFN_OFFSET (PFN_START)
-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
--#define pfn_valid(pfn) (((pfn) - PFN_START) < max_mapnr)
-#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
--
--#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
-- VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
--
+-#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
++/*
++ * These are used to make use of C type-checking..
++ */
++typedef unsigned long pteval_t;
++typedef unsigned long pmdval_t;
++typedef unsigned long pudval_t;
++typedef unsigned long pgdval_t;
++typedef unsigned long pgprotval_t;
++typedef unsigned long phys_addr_t;
+
+-#define VM_DATA_DEFAULT_FLAGS \
+- (((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0 ) | \
+- VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
++typedef struct { pteval_t pte; } pte_t;
+
+-#define __HAVE_ARCH_GATE_AREA 1
+ #define vmemmap ((struct page *)VMEMMAP_START)
+
-#include <asm-generic/memory_model.h>
-#include <asm-generic/page.h>
--
++#endif /* !__ASSEMBLY__ */
++
++#ifdef CONFIG_FLATMEM
++#define pfn_valid(pfn) ((pfn) < end_pfn)
++#endif
+
-#endif /* __KERNEL__ */
--#endif /* __ASM_SH64_PAGE_H */
-diff --git a/include/asm-sh64/param.h b/include/asm-sh64/param.h
+
+ #endif /* _X86_64_PAGE_H */
+diff --git a/include/asm-x86/paravirt.h b/include/asm-x86/paravirt.h
+index f59d370..d6236eb 100644
+--- a/include/asm-x86/paravirt.h
++++ b/include/asm-x86/paravirt.h
+@@ -5,22 +5,37 @@
+
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/page.h>
++#include <asm/asm.h>
+
+ /* Bitmask of what can be clobbered: usually at least eax. */
+-#define CLBR_NONE 0x0
+-#define CLBR_EAX 0x1
+-#define CLBR_ECX 0x2
+-#define CLBR_EDX 0x4
+-#define CLBR_ANY 0x7
++#define CLBR_NONE 0
++#define CLBR_EAX (1 << 0)
++#define CLBR_ECX (1 << 1)
++#define CLBR_EDX (1 << 2)
++
++#ifdef CONFIG_X86_64
++#define CLBR_RSI (1 << 3)
++#define CLBR_RDI (1 << 4)
++#define CLBR_R8 (1 << 5)
++#define CLBR_R9 (1 << 6)
++#define CLBR_R10 (1 << 7)
++#define CLBR_R11 (1 << 8)
++#define CLBR_ANY ((1 << 9) - 1)
++#include <asm/desc_defs.h>
++#else
++/* CLBR_ANY should match all the regs the platform has. For i386, that's all of them. */
++#define CLBR_ANY ((1 << 3) - 1)
++#endif /* X86_64 */
+
+ #ifndef __ASSEMBLY__
+ #include <linux/types.h>
+ #include <linux/cpumask.h>
+ #include <asm/kmap_types.h>
++#include <asm/desc_defs.h>
+
+ struct page;
+ struct thread_struct;
+-struct Xgt_desc_struct;
++struct desc_ptr;
+ struct tss_struct;
+ struct mm_struct;
+ struct desc_struct;
+@@ -86,22 +101,27 @@ struct pv_cpu_ops {
+ unsigned long (*read_cr4)(void);
+ void (*write_cr4)(unsigned long);
+
++#ifdef CONFIG_X86_64
++ unsigned long (*read_cr8)(void);
++ void (*write_cr8)(unsigned long);
++#endif
++
+ /* Segment descriptor handling */
+ void (*load_tr_desc)(void);
+- void (*load_gdt)(const struct Xgt_desc_struct *);
+- void (*load_idt)(const struct Xgt_desc_struct *);
+- void (*store_gdt)(struct Xgt_desc_struct *);
+- void (*store_idt)(struct Xgt_desc_struct *);
++ void (*load_gdt)(const struct desc_ptr *);
++ void (*load_idt)(const struct desc_ptr *);
++ void (*store_gdt)(struct desc_ptr *);
++ void (*store_idt)(struct desc_ptr *);
+ void (*set_ldt)(const void *desc, unsigned entries);
+ unsigned long (*store_tr)(void);
+ void (*load_tls)(struct thread_struct *t, unsigned int cpu);
+- void (*write_ldt_entry)(struct desc_struct *,
+- int entrynum, u32 low, u32 high);
++ void (*write_ldt_entry)(struct desc_struct *ldt, int entrynum,
++ const void *desc);
+ void (*write_gdt_entry)(struct desc_struct *,
+- int entrynum, u32 low, u32 high);
+- void (*write_idt_entry)(struct desc_struct *,
+- int entrynum, u32 low, u32 high);
+- void (*load_esp0)(struct tss_struct *tss, struct thread_struct *t);
++ int entrynum, const void *desc, int size);
++ void (*write_idt_entry)(gate_desc *,
++ int entrynum, const gate_desc *gate);
++ void (*load_sp0)(struct tss_struct *tss, struct thread_struct *t);
+
+ void (*set_iopl_mask)(unsigned mask);
+
+@@ -115,15 +135,18 @@ struct pv_cpu_ops {
+ /* MSR, PMC and TSR operations.
+ err = 0/-EFAULT. wrmsr returns 0/-EFAULT. */
+ u64 (*read_msr)(unsigned int msr, int *err);
+- int (*write_msr)(unsigned int msr, u64 val);
++ int (*write_msr)(unsigned int msr, unsigned low, unsigned high);
+
+ u64 (*read_tsc)(void);
+- u64 (*read_pmc)(void);
++ u64 (*read_pmc)(int counter);
++ unsigned long long (*read_tscp)(unsigned int *aux);
+
+ /* These two are jmp to, not actually called. */
+- void (*irq_enable_sysexit)(void);
++ void (*irq_enable_syscall_ret)(void);
+ void (*iret)(void);
+
++ void (*swapgs)(void);
++
+ struct pv_lazy_ops lazy_mode;
+ };
+
+@@ -150,9 +173,9 @@ struct pv_apic_ops {
+ * Direct APIC operations, principally for VMI. Ideally
+ * these shouldn't be in this interface.
+ */
+- void (*apic_write)(unsigned long reg, unsigned long v);
+- void (*apic_write_atomic)(unsigned long reg, unsigned long v);
+- unsigned long (*apic_read)(unsigned long reg);
++ void (*apic_write)(unsigned long reg, u32 v);
++ void (*apic_write_atomic)(unsigned long reg, u32 v);
++ u32 (*apic_read)(unsigned long reg);
+ void (*setup_boot_clock)(void);
+ void (*setup_secondary_clock)(void);
+
+@@ -198,7 +221,7 @@ struct pv_mmu_ops {
+
+ /* Hooks for allocating/releasing pagetable pages */
+ void (*alloc_pt)(struct mm_struct *mm, u32 pfn);
+- void (*alloc_pd)(u32 pfn);
++ void (*alloc_pd)(struct mm_struct *mm, u32 pfn);
+ void (*alloc_pd_clone)(u32 pfn, u32 clonepfn, u32 start, u32 count);
+ void (*release_pt)(u32 pfn);
+ void (*release_pd)(u32 pfn);
+@@ -212,28 +235,34 @@ struct pv_mmu_ops {
+ void (*pte_update_defer)(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
+
++ pteval_t (*pte_val)(pte_t);
++ pte_t (*make_pte)(pteval_t pte);
++
++ pgdval_t (*pgd_val)(pgd_t);
++ pgd_t (*make_pgd)(pgdval_t pgd);
++
++#if PAGETABLE_LEVELS >= 3
+ #ifdef CONFIG_X86_PAE
+ void (*set_pte_atomic)(pte_t *ptep, pte_t pteval);
+ void (*set_pte_present)(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte);
+- void (*set_pud)(pud_t *pudp, pud_t pudval);
+ void (*pte_clear)(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
+ void (*pmd_clear)(pmd_t *pmdp);
+
+- unsigned long long (*pte_val)(pte_t);
+- unsigned long long (*pmd_val)(pmd_t);
+- unsigned long long (*pgd_val)(pgd_t);
++#endif /* CONFIG_X86_PAE */
+
+- pte_t (*make_pte)(unsigned long long pte);
+- pmd_t (*make_pmd)(unsigned long long pmd);
+- pgd_t (*make_pgd)(unsigned long long pgd);
+-#else
+- unsigned long (*pte_val)(pte_t);
+- unsigned long (*pgd_val)(pgd_t);
++ void (*set_pud)(pud_t *pudp, pud_t pudval);
+
+- pte_t (*make_pte)(unsigned long pte);
+- pgd_t (*make_pgd)(unsigned long pgd);
+-#endif
++ pmdval_t (*pmd_val)(pmd_t);
++ pmd_t (*make_pmd)(pmdval_t pmd);
++
++#if PAGETABLE_LEVELS == 4
++ pudval_t (*pud_val)(pud_t);
++ pud_t (*make_pud)(pudval_t pud);
++
++ void (*set_pgd)(pgd_t *pudp, pgd_t pgdval);
++#endif /* PAGETABLE_LEVELS == 4 */
++#endif /* PAGETABLE_LEVELS >= 3 */
+
+ #ifdef CONFIG_HIGHPTE
+ void *(*kmap_atomic_pte)(struct page *page, enum km_type type);
+@@ -279,7 +308,8 @@ extern struct pv_mmu_ops pv_mmu_ops;
+ #define _paravirt_alt(insn_string, type, clobber) \
+ "771:\n\t" insn_string "\n" "772:\n" \
+ ".pushsection .parainstructions,\"a\"\n" \
+- " .long 771b\n" \
++ _ASM_ALIGN "\n" \
++ _ASM_PTR " 771b\n" \
+ " .byte " type "\n" \
+ " .byte 772b-771b\n" \
+ " .short " clobber "\n" \
+@@ -289,6 +319,11 @@ extern struct pv_mmu_ops pv_mmu_ops;
+ #define paravirt_alt(insn_string) \
+ _paravirt_alt(insn_string, "%c[paravirt_typenum]", "%c[paravirt_clobber]")
+
++/* Simple instruction patching code. */
++#define DEF_NATIVE(ops, name, code) \
++ extern const char start_##ops##_##name[], end_##ops##_##name[]; \
++ asm("start_" #ops "_" #name ": " code "; end_" #ops "_" #name ":")
++
+ unsigned paravirt_patch_nop(void);
+ unsigned paravirt_patch_ignore(unsigned len);
+ unsigned paravirt_patch_call(void *insnbuf,
+@@ -303,6 +338,9 @@ unsigned paravirt_patch_default(u8 type, u16 clobbers, void *insnbuf,
+ unsigned paravirt_patch_insns(void *insnbuf, unsigned len,
+ const char *start, const char *end);
+
++unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
++ unsigned long addr, unsigned len);
++
+ int paravirt_disable_iospace(void);
+
+ /*
+@@ -319,7 +357,7 @@ int paravirt_disable_iospace(void);
+ * runtime.
+ *
+ * Normally, a call to a pv_op function is a simple indirect call:
+- * (paravirt_ops.operations)(args...).
++ * (pv_op_struct.operations)(args...).
+ *
+ * Unfortunately, this is a relatively slow operation for modern CPUs,
+ * because it cannot necessarily determine what the destination
+@@ -329,11 +367,17 @@ int paravirt_disable_iospace(void);
+ * calls are essentially free, because the call and return addresses
+ * are completely predictable.)
+ *
+- * These macros rely on the standard gcc "regparm(3)" calling
++ * For i386, these macros rely on the standard gcc "regparm(3)" calling
+ * convention, in which the first three arguments are placed in %eax,
+ * %edx, %ecx (in that order), and the remaining arguments are placed
+ * on the stack. All caller-save registers (eax,edx,ecx) are expected
+ * to be modified (either clobbered or used for return values).
++ * X86_64, on the other hand, already specifies a register-based calling
++ * convention, returning in %rax, with parameters going in %rdi, %rsi,
++ * %rdx, and %rcx. Note that for this reason, x86_64 does not need any
++ * special handling for dealing with 4 arguments, unlike i386.
++ * However, x86_64 also has to clobber all caller-saved registers, which
++ * unfortunately are quite a few (r8 - r11).
+ *
+ * The call instruction itself is marked by placing its start address
+ * and size into the .parainstructions section, so that
+@@ -356,10 +400,12 @@ int paravirt_disable_iospace(void);
+ * the return type. The macro then uses sizeof() on that type to
+ * determine whether its a 32 or 64 bit value, and places the return
+ * in the right register(s) (just %eax for 32-bit, and %edx:%eax for
+- * 64-bit).
++ * 64-bit). For x86_64 machines, it just returns in %rax regardless of
++ * the return value size.
+ *
+ * 64-bit arguments are passed as a pair of adjacent 32-bit arguments
+- * in low,high order.
++ * i386 also passes 64-bit arguments as a pair of adjacent 32-bit arguments
++ * in low,high order.
+ *
+ * Small structures are passed and returned in registers. The macro
+ * calling convention can't directly deal with this, so the wrapper
+@@ -369,46 +415,67 @@ int paravirt_disable_iospace(void);
+ * means that all uses must be wrapped in inline functions. This also
+ * makes sure the incoming and outgoing types are always correct.
+ */
++#ifdef CONFIG_X86_32
++#define PVOP_VCALL_ARGS unsigned long __eax, __edx, __ecx
++#define PVOP_CALL_ARGS PVOP_VCALL_ARGS
++#define PVOP_VCALL_CLOBBERS "=a" (__eax), "=d" (__edx), \
++ "=c" (__ecx)
++#define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS
++#define EXTRA_CLOBBERS
++#define VEXTRA_CLOBBERS
++#else
++#define PVOP_VCALL_ARGS unsigned long __edi, __esi, __edx, __ecx
++#define PVOP_CALL_ARGS PVOP_VCALL_ARGS, __eax
++#define PVOP_VCALL_CLOBBERS "=D" (__edi), \
++ "=S" (__esi), "=d" (__edx), \
++ "=c" (__ecx)
++
++#define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS, "=a" (__eax)
++
++#define EXTRA_CLOBBERS , "r8", "r9", "r10", "r11"
++#define VEXTRA_CLOBBERS , "rax", "r8", "r9", "r10", "r11"
++#endif
++
+ #define __PVOP_CALL(rettype, op, pre, post, ...) \
+ ({ \
+ rettype __ret; \
+- unsigned long __eax, __edx, __ecx; \
++ PVOP_CALL_ARGS; \
++ /* This is 32-bit specific, but is okay in 64-bit */ \
++ /* since this condition will never hold */ \
+ if (sizeof(rettype) > sizeof(unsigned long)) { \
+ asm volatile(pre \
+ paravirt_alt(PARAVIRT_CALL) \
+ post \
+- : "=a" (__eax), "=d" (__edx), \
+- "=c" (__ecx) \
++ : PVOP_CALL_CLOBBERS \
+ : paravirt_type(op), \
+ paravirt_clobber(CLBR_ANY), \
+ ##__VA_ARGS__ \
+- : "memory", "cc"); \
++ : "memory", "cc" EXTRA_CLOBBERS); \
+ __ret = (rettype)((((u64)__edx) << 32) | __eax); \
+ } else { \
+ asm volatile(pre \
+ paravirt_alt(PARAVIRT_CALL) \
+ post \
+- : "=a" (__eax), "=d" (__edx), \
+- "=c" (__ecx) \
++ : PVOP_CALL_CLOBBERS \
+ : paravirt_type(op), \
+ paravirt_clobber(CLBR_ANY), \
+ ##__VA_ARGS__ \
+- : "memory", "cc"); \
++ : "memory", "cc" EXTRA_CLOBBERS); \
+ __ret = (rettype)__eax; \
+ } \
+ __ret; \
+ })
+ #define __PVOP_VCALL(op, pre, post, ...) \
+ ({ \
+- unsigned long __eax, __edx, __ecx; \
++ PVOP_VCALL_ARGS; \
+ asm volatile(pre \
+ paravirt_alt(PARAVIRT_CALL) \
+ post \
+- : "=a" (__eax), "=d" (__edx), "=c" (__ecx) \
++ : PVOP_VCALL_CLOBBERS \
+ : paravirt_type(op), \
+ paravirt_clobber(CLBR_ANY), \
+ ##__VA_ARGS__ \
+- : "memory", "cc"); \
++ : "memory", "cc" VEXTRA_CLOBBERS); \
+ })
+
+ #define PVOP_CALL0(rettype, op) \
+@@ -417,22 +484,26 @@ int paravirt_disable_iospace(void);
+ __PVOP_VCALL(op, "", "")
+
+ #define PVOP_CALL1(rettype, op, arg1) \
+- __PVOP_CALL(rettype, op, "", "", "0" ((u32)(arg1)))
++ __PVOP_CALL(rettype, op, "", "", "0" ((unsigned long)(arg1)))
+ #define PVOP_VCALL1(op, arg1) \
+- __PVOP_VCALL(op, "", "", "0" ((u32)(arg1)))
++ __PVOP_VCALL(op, "", "", "0" ((unsigned long)(arg1)))
+
+ #define PVOP_CALL2(rettype, op, arg1, arg2) \
+- __PVOP_CALL(rettype, op, "", "", "0" ((u32)(arg1)), "1" ((u32)(arg2)))
++ __PVOP_CALL(rettype, op, "", "", "0" ((unsigned long)(arg1)), \
++ "1" ((unsigned long)(arg2)))
+ #define PVOP_VCALL2(op, arg1, arg2) \
+- __PVOP_VCALL(op, "", "", "0" ((u32)(arg1)), "1" ((u32)(arg2)))
++ __PVOP_VCALL(op, "", "", "0" ((unsigned long)(arg1)), \
++ "1" ((unsigned long)(arg2)))
+
+ #define PVOP_CALL3(rettype, op, arg1, arg2, arg3) \
+- __PVOP_CALL(rettype, op, "", "", "0" ((u32)(arg1)), \
+- "1"((u32)(arg2)), "2"((u32)(arg3)))
++ __PVOP_CALL(rettype, op, "", "", "0" ((unsigned long)(arg1)), \
++ "1"((unsigned long)(arg2)), "2"((unsigned long)(arg3)))
+ #define PVOP_VCALL3(op, arg1, arg2, arg3) \
+- __PVOP_VCALL(op, "", "", "0" ((u32)(arg1)), "1"((u32)(arg2)), \
+- "2"((u32)(arg3)))
++ __PVOP_VCALL(op, "", "", "0" ((unsigned long)(arg1)), \
++ "1"((unsigned long)(arg2)), "2"((unsigned long)(arg3)))
+
++/* This is the only difference in x86_64. We can make it much simpler */
++#ifdef CONFIG_X86_32
+ #define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4) \
+ __PVOP_CALL(rettype, op, \
+ "push %[_arg4];", "lea 4(%%esp),%%esp;", \
+@@ -443,16 +514,26 @@ int paravirt_disable_iospace(void);
+ "push %[_arg4];", "lea 4(%%esp),%%esp;", \
+ "0" ((u32)(arg1)), "1" ((u32)(arg2)), \
+ "2" ((u32)(arg3)), [_arg4] "mr" ((u32)(arg4)))
++#else
++#define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4) \
++ __PVOP_CALL(rettype, op, "", "", "0" ((unsigned long)(arg1)), \
++ "1"((unsigned long)(arg2)), "2"((unsigned long)(arg3)), \
++ "3"((unsigned long)(arg4)))
++#define PVOP_VCALL4(op, arg1, arg2, arg3, arg4) \
++ __PVOP_VCALL(op, "", "", "0" ((unsigned long)(arg1)), \
++ "1"((unsigned long)(arg2)), "2"((unsigned long)(arg3)), \
++ "3"((unsigned long)(arg4)))
++#endif
+
+ static inline int paravirt_enabled(void)
+ {
+ return pv_info.paravirt_enabled;
+ }
+
+-static inline void load_esp0(struct tss_struct *tss,
++static inline void load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+ {
+- PVOP_VCALL2(pv_cpu_ops.load_esp0, tss, thread);
++ PVOP_VCALL2(pv_cpu_ops.load_sp0, tss, thread);
+ }
+
+ #define ARCH_SETUP pv_init_ops.arch_setup();
+@@ -540,6 +621,18 @@ static inline void write_cr4(unsigned long x)
+ PVOP_VCALL1(pv_cpu_ops.write_cr4, x);
+ }
+
++#ifdef CONFIG_X86_64
++static inline unsigned long read_cr8(void)
++{
++ return PVOP_CALL0(unsigned long, pv_cpu_ops.read_cr8);
++}
++
++static inline void write_cr8(unsigned long x)
++{
++ PVOP_VCALL1(pv_cpu_ops.write_cr8, x);
++}
++#endif
++
+ static inline void raw_safe_halt(void)
+ {
+ PVOP_VCALL0(pv_irq_ops.safe_halt);
+@@ -613,8 +706,6 @@ static inline unsigned long long paravirt_sched_clock(void)
+ }
+ #define calculate_cpu_khz() (pv_time_ops.get_cpu_khz())
+
+-#define write_tsc(val1,val2) wrmsr(0x10, val1, val2)
+-
+ static inline unsigned long long paravirt_read_pmc(int counter)
+ {
+ return PVOP_CALL1(u64, pv_cpu_ops.read_pmc, counter);
+@@ -626,15 +717,36 @@ static inline unsigned long long paravirt_read_pmc(int counter)
+ high = _l >> 32; \
+ } while(0)
+
++static inline unsigned long long paravirt_rdtscp(unsigned int *aux)
++{
++ return PVOP_CALL1(u64, pv_cpu_ops.read_tscp, aux);
++}
++
++#define rdtscp(low, high, aux) \
++do { \
++ int __aux; \
++ unsigned long __val = paravirt_rdtscp(&__aux); \
++ (low) = (u32)__val; \
++ (high) = (u32)(__val >> 32); \
++ (aux) = __aux; \
++} while (0)
++
++#define rdtscpll(val, aux) \
++do { \
++ unsigned long __aux; \
++ val = paravirt_rdtscp(&__aux); \
++ (aux) = __aux; \
++} while (0)
++
+ static inline void load_TR_desc(void)
+ {
+ PVOP_VCALL0(pv_cpu_ops.load_tr_desc);
+ }
+-static inline void load_gdt(const struct Xgt_desc_struct *dtr)
++static inline void load_gdt(const struct desc_ptr *dtr)
+ {
+ PVOP_VCALL1(pv_cpu_ops.load_gdt, dtr);
+ }
+-static inline void load_idt(const struct Xgt_desc_struct *dtr)
++static inline void load_idt(const struct desc_ptr *dtr)
+ {
+ PVOP_VCALL1(pv_cpu_ops.load_idt, dtr);
+ }
+@@ -642,11 +754,11 @@ static inline void set_ldt(const void *addr, unsigned entries)
+ {
+ PVOP_VCALL2(pv_cpu_ops.set_ldt, addr, entries);
+ }
+-static inline void store_gdt(struct Xgt_desc_struct *dtr)
++static inline void store_gdt(struct desc_ptr *dtr)
+ {
+ PVOP_VCALL1(pv_cpu_ops.store_gdt, dtr);
+ }
+-static inline void store_idt(struct Xgt_desc_struct *dtr)
++static inline void store_idt(struct desc_ptr *dtr)
+ {
+ PVOP_VCALL1(pv_cpu_ops.store_idt, dtr);
+ }
+@@ -659,17 +771,22 @@ static inline void load_TLS(struct thread_struct *t, unsigned cpu)
+ {
+ PVOP_VCALL2(pv_cpu_ops.load_tls, t, cpu);
+ }
+-static inline void write_ldt_entry(void *dt, int entry, u32 low, u32 high)
++
++static inline void write_ldt_entry(struct desc_struct *dt, int entry,
++ const void *desc)
+ {
+- PVOP_VCALL4(pv_cpu_ops.write_ldt_entry, dt, entry, low, high);
++ PVOP_VCALL3(pv_cpu_ops.write_ldt_entry, dt, entry, desc);
+ }
+-static inline void write_gdt_entry(void *dt, int entry, u32 low, u32 high)
++
++static inline void write_gdt_entry(struct desc_struct *dt, int entry,
++ void *desc, int type)
+ {
+- PVOP_VCALL4(pv_cpu_ops.write_gdt_entry, dt, entry, low, high);
++ PVOP_VCALL4(pv_cpu_ops.write_gdt_entry, dt, entry, desc, type);
+ }
+-static inline void write_idt_entry(void *dt, int entry, u32 low, u32 high)
++
++static inline void write_idt_entry(gate_desc *dt, int entry, const gate_desc *g)
+ {
+- PVOP_VCALL4(pv_cpu_ops.write_idt_entry, dt, entry, low, high);
++ PVOP_VCALL3(pv_cpu_ops.write_idt_entry, dt, entry, g);
+ }
+ static inline void set_iopl_mask(unsigned mask)
+ {
+@@ -690,17 +807,17 @@ static inline void slow_down_io(void) {
+ /*
+ * Basic functions accessing APICs.
+ */
+-static inline void apic_write(unsigned long reg, unsigned long v)
++static inline void apic_write(unsigned long reg, u32 v)
+ {
+ PVOP_VCALL2(pv_apic_ops.apic_write, reg, v);
+ }
+
+-static inline void apic_write_atomic(unsigned long reg, unsigned long v)
++static inline void apic_write_atomic(unsigned long reg, u32 v)
+ {
+ PVOP_VCALL2(pv_apic_ops.apic_write_atomic, reg, v);
+ }
+
+-static inline unsigned long apic_read(unsigned long reg)
++static inline u32 apic_read(unsigned long reg)
+ {
+ return PVOP_CALL1(unsigned long, pv_apic_ops.apic_read, reg);
+ }
+@@ -786,9 +903,9 @@ static inline void paravirt_release_pt(unsigned pfn)
+ PVOP_VCALL1(pv_mmu_ops.release_pt, pfn);
+ }
+
+-static inline void paravirt_alloc_pd(unsigned pfn)
++static inline void paravirt_alloc_pd(struct mm_struct *mm, unsigned pfn)
+ {
+- PVOP_VCALL1(pv_mmu_ops.alloc_pd, pfn);
++ PVOP_VCALL2(pv_mmu_ops.alloc_pd, mm, pfn);
+ }
+
+ static inline void paravirt_alloc_pd_clone(unsigned pfn, unsigned clonepfn,
+@@ -822,128 +939,236 @@ static inline void pte_update_defer(struct mm_struct *mm, unsigned long addr,
+ PVOP_VCALL3(pv_mmu_ops.pte_update_defer, mm, addr, ptep);
+ }
+
+-#ifdef CONFIG_X86_PAE
+-static inline pte_t __pte(unsigned long long val)
++static inline pte_t __pte(pteval_t val)
+ {
+- unsigned long long ret = PVOP_CALL2(unsigned long long,
+- pv_mmu_ops.make_pte,
+- val, val >> 32);
+- return (pte_t) { ret, ret >> 32 };
++ pteval_t ret;
++
++ if (sizeof(pteval_t) > sizeof(long))
++ ret = PVOP_CALL2(pteval_t,
++ pv_mmu_ops.make_pte,
++ val, (u64)val >> 32);
++ else
++ ret = PVOP_CALL1(pteval_t,
++ pv_mmu_ops.make_pte,
++ val);
++
++ return (pte_t) { .pte = ret };
+ }
+
+-static inline pmd_t __pmd(unsigned long long val)
++static inline pteval_t pte_val(pte_t pte)
+ {
+- return (pmd_t) { PVOP_CALL2(unsigned long long, pv_mmu_ops.make_pmd,
+- val, val >> 32) };
++ pteval_t ret;
++
++ if (sizeof(pteval_t) > sizeof(long))
++ ret = PVOP_CALL2(pteval_t, pv_mmu_ops.pte_val,
++ pte.pte, (u64)pte.pte >> 32);
++ else
++ ret = PVOP_CALL1(pteval_t, pv_mmu_ops.pte_val,
++ pte.pte);
++
++ return ret;
+ }
+
+-static inline pgd_t __pgd(unsigned long long val)
++static inline pgd_t __pgd(pgdval_t val)
+ {
+- return (pgd_t) { PVOP_CALL2(unsigned long long, pv_mmu_ops.make_pgd,
+- val, val >> 32) };
++ pgdval_t ret;
++
++ if (sizeof(pgdval_t) > sizeof(long))
++ ret = PVOP_CALL2(pgdval_t, pv_mmu_ops.make_pgd,
++ val, (u64)val >> 32);
++ else
++ ret = PVOP_CALL1(pgdval_t, pv_mmu_ops.make_pgd,
++ val);
++
++ return (pgd_t) { ret };
+ }
+
+-static inline unsigned long long pte_val(pte_t x)
++static inline pgdval_t pgd_val(pgd_t pgd)
+ {
+- return PVOP_CALL2(unsigned long long, pv_mmu_ops.pte_val,
+- x.pte_low, x.pte_high);
++ pgdval_t ret;
++
++ if (sizeof(pgdval_t) > sizeof(long))
++ ret = PVOP_CALL2(pgdval_t, pv_mmu_ops.pgd_val,
++ pgd.pgd, (u64)pgd.pgd >> 32);
++ else
++ ret = PVOP_CALL1(pgdval_t, pv_mmu_ops.pgd_val,
++ pgd.pgd);
++
++ return ret;
+ }
+
+-static inline unsigned long long pmd_val(pmd_t x)
++static inline void set_pte(pte_t *ptep, pte_t pte)
+ {
+- return PVOP_CALL2(unsigned long long, pv_mmu_ops.pmd_val,
+- x.pmd, x.pmd >> 32);
++ if (sizeof(pteval_t) > sizeof(long))
++ PVOP_VCALL3(pv_mmu_ops.set_pte, ptep,
++ pte.pte, (u64)pte.pte >> 32);
++ else
++ PVOP_VCALL2(pv_mmu_ops.set_pte, ptep,
++ pte.pte);
+ }
+
+-static inline unsigned long long pgd_val(pgd_t x)
++static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, pte_t pte)
+ {
+- return PVOP_CALL2(unsigned long long, pv_mmu_ops.pgd_val,
+- x.pgd, x.pgd >> 32);
++ if (sizeof(pteval_t) > sizeof(long))
++ /* 5 arg words */
++ pv_mmu_ops.set_pte_at(mm, addr, ptep, pte);
++ else
++ PVOP_VCALL4(pv_mmu_ops.set_pte_at, mm, addr, ptep, pte.pte);
+ }
+
+-static inline void set_pte(pte_t *ptep, pte_t pteval)
++static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+ {
+- PVOP_VCALL3(pv_mmu_ops.set_pte, ptep, pteval.pte_low, pteval.pte_high);
++ pmdval_t val = native_pmd_val(pmd);
++
++ if (sizeof(pmdval_t) > sizeof(long))
++ PVOP_VCALL3(pv_mmu_ops.set_pmd, pmdp, val, (u64)val >> 32);
++ else
++ PVOP_VCALL2(pv_mmu_ops.set_pmd, pmdp, val);
+ }
+
+-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep, pte_t pteval)
++#if PAGETABLE_LEVELS >= 3
++static inline pmd_t __pmd(pmdval_t val)
+ {
+- /* 5 arg words */
+- pv_mmu_ops.set_pte_at(mm, addr, ptep, pteval);
++ pmdval_t ret;
++
++ if (sizeof(pmdval_t) > sizeof(long))
++ ret = PVOP_CALL2(pmdval_t, pv_mmu_ops.make_pmd,
++ val, (u64)val >> 32);
++ else
++ ret = PVOP_CALL1(pmdval_t, pv_mmu_ops.make_pmd,
++ val);
++
++ return (pmd_t) { ret };
+ }
+
+-static inline void set_pte_atomic(pte_t *ptep, pte_t pteval)
++static inline pmdval_t pmd_val(pmd_t pmd)
+ {
+- PVOP_VCALL3(pv_mmu_ops.set_pte_atomic, ptep,
+- pteval.pte_low, pteval.pte_high);
++ pmdval_t ret;
++
++ if (sizeof(pmdval_t) > sizeof(long))
++ ret = PVOP_CALL2(pmdval_t, pv_mmu_ops.pmd_val,
++ pmd.pmd, (u64)pmd.pmd >> 32);
++ else
++ ret = PVOP_CALL1(pmdval_t, pv_mmu_ops.pmd_val,
++ pmd.pmd);
++
++ return ret;
+ }
+
+-static inline void set_pte_present(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep, pte_t pte)
++static inline void set_pud(pud_t *pudp, pud_t pud)
+ {
+- /* 5 arg words */
+- pv_mmu_ops.set_pte_present(mm, addr, ptep, pte);
++ pudval_t val = native_pud_val(pud);
++
++ if (sizeof(pudval_t) > sizeof(long))
++ PVOP_VCALL3(pv_mmu_ops.set_pud, pudp,
++ val, (u64)val >> 32);
++ else
++ PVOP_VCALL2(pv_mmu_ops.set_pud, pudp,
++ val);
++}
++#if PAGETABLE_LEVELS == 4
++static inline pud_t __pud(pudval_t val)
++{
++ pudval_t ret;
++
++ if (sizeof(pudval_t) > sizeof(long))
++ ret = PVOP_CALL2(pudval_t, pv_mmu_ops.make_pud,
++ val, (u64)val >> 32);
++ else
++ ret = PVOP_CALL1(pudval_t, pv_mmu_ops.make_pud,
++ val);
++
++ return (pud_t) { ret };
+ }
+
+-static inline void set_pmd(pmd_t *pmdp, pmd_t pmdval)
++static inline pudval_t pud_val(pud_t pud)
+ {
+- PVOP_VCALL3(pv_mmu_ops.set_pmd, pmdp,
+- pmdval.pmd, pmdval.pmd >> 32);
++ pudval_t ret;
++
++ if (sizeof(pudval_t) > sizeof(long))
++ ret = PVOP_CALL2(pudval_t, pv_mmu_ops.pud_val,
++ pud.pud, (u64)pud.pud >> 32);
++ else
++ ret = PVOP_CALL1(pudval_t, pv_mmu_ops.pud_val,
++ pud.pud);
++
++ return ret;
+ }
+
+-static inline void set_pud(pud_t *pudp, pud_t pudval)
++static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
+ {
+- PVOP_VCALL3(pv_mmu_ops.set_pud, pudp,
+- pudval.pgd.pgd, pudval.pgd.pgd >> 32);
++ pgdval_t val = native_pgd_val(pgd);
++
++ if (sizeof(pgdval_t) > sizeof(long))
++ PVOP_VCALL3(pv_mmu_ops.set_pgd, pgdp,
++ val, (u64)val >> 32);
++ else
++ PVOP_VCALL2(pv_mmu_ops.set_pgd, pgdp,
++ val);
+ }
+
+-static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
++static inline void pgd_clear(pgd_t *pgdp)
+ {
+- PVOP_VCALL3(pv_mmu_ops.pte_clear, mm, addr, ptep);
++ set_pgd(pgdp, __pgd(0));
+ }
+
+-static inline void pmd_clear(pmd_t *pmdp)
++static inline void pud_clear(pud_t *pudp)
+ {
+- PVOP_VCALL1(pv_mmu_ops.pmd_clear, pmdp);
++ set_pud(pudp, __pud(0));
+ }
+
+-#else /* !CONFIG_X86_PAE */
++#endif /* PAGETABLE_LEVELS == 4 */
+
+-static inline pte_t __pte(unsigned long val)
++#endif /* PAGETABLE_LEVELS >= 3 */
++
++#ifdef CONFIG_X86_PAE
++/* Special-case pte-setting operations for PAE, which can't update a
++ 64-bit pte atomically */
++static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+- return (pte_t) { PVOP_CALL1(unsigned long, pv_mmu_ops.make_pte, val) };
++ PVOP_VCALL3(pv_mmu_ops.set_pte_atomic, ptep,
++ pte.pte, pte.pte >> 32);
+ }
+
+-static inline pgd_t __pgd(unsigned long val)
++static inline void set_pte_present(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, pte_t pte)
+ {
+- return (pgd_t) { PVOP_CALL1(unsigned long, pv_mmu_ops.make_pgd, val) };
++ /* 5 arg words */
++ pv_mmu_ops.set_pte_present(mm, addr, ptep, pte);
+ }
+
+-static inline unsigned long pte_val(pte_t x)
++static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep)
+ {
+- return PVOP_CALL1(unsigned long, pv_mmu_ops.pte_val, x.pte_low);
++ PVOP_VCALL3(pv_mmu_ops.pte_clear, mm, addr, ptep);
+ }
+
+-static inline unsigned long pgd_val(pgd_t x)
++static inline void pmd_clear(pmd_t *pmdp)
++{
++ PVOP_VCALL1(pv_mmu_ops.pmd_clear, pmdp);
++}
++#else /* !CONFIG_X86_PAE */
++static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+- return PVOP_CALL1(unsigned long, pv_mmu_ops.pgd_val, x.pgd);
++ set_pte(ptep, pte);
+ }
+
+-static inline void set_pte(pte_t *ptep, pte_t pteval)
++static inline void set_pte_present(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, pte_t pte)
+ {
+- PVOP_VCALL2(pv_mmu_ops.set_pte, ptep, pteval.pte_low);
++ set_pte(ptep, pte);
+ }
+
+-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep, pte_t pteval)
++static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep)
+ {
+- PVOP_VCALL4(pv_mmu_ops.set_pte_at, mm, addr, ptep, pteval.pte_low);
++ set_pte_at(mm, addr, ptep, __pte(0));
+ }
+
+-static inline void set_pmd(pmd_t *pmdp, pmd_t pmdval)
++static inline void pmd_clear(pmd_t *pmdp)
+ {
+- PVOP_VCALL2(pv_mmu_ops.set_pmd, pmdp, pmdval.pud.pgd.pgd);
++ set_pmd(pmdp, __pmd(0));
+ }
+ #endif /* CONFIG_X86_PAE */
+
+@@ -1014,52 +1239,68 @@ struct paravirt_patch_site {
+ extern struct paravirt_patch_site __parainstructions[],
+ __parainstructions_end[];
+
++#ifdef CONFIG_X86_32
++#define PV_SAVE_REGS "pushl %%ecx; pushl %%edx;"
++#define PV_RESTORE_REGS "popl %%edx; popl %%ecx"
++#define PV_FLAGS_ARG "0"
++#define PV_EXTRA_CLOBBERS
++#define PV_VEXTRA_CLOBBERS
++#else
++/* We save some registers, but all of them, that's too much. We clobber all
++ * caller saved registers but the argument parameter */
++#define PV_SAVE_REGS "pushq %%rdi;"
++#define PV_RESTORE_REGS "popq %%rdi;"
++#define PV_EXTRA_CLOBBERS EXTRA_CLOBBERS, "rcx" , "rdx"
++#define PV_VEXTRA_CLOBBERS EXTRA_CLOBBERS, "rdi", "rcx" , "rdx"
++#define PV_FLAGS_ARG "D"
++#endif
++
+ static inline unsigned long __raw_local_save_flags(void)
+ {
+ unsigned long f;
+
+- asm volatile(paravirt_alt("pushl %%ecx; pushl %%edx;"
++ asm volatile(paravirt_alt(PV_SAVE_REGS
+ PARAVIRT_CALL
+- "popl %%edx; popl %%ecx")
++ PV_RESTORE_REGS)
+ : "=a"(f)
+ : paravirt_type(pv_irq_ops.save_fl),
+ paravirt_clobber(CLBR_EAX)
+- : "memory", "cc");
++ : "memory", "cc" PV_VEXTRA_CLOBBERS);
+ return f;
+ }
+
+ static inline void raw_local_irq_restore(unsigned long f)
+ {
+- asm volatile(paravirt_alt("pushl %%ecx; pushl %%edx;"
++ asm volatile(paravirt_alt(PV_SAVE_REGS
+ PARAVIRT_CALL
+- "popl %%edx; popl %%ecx")
++ PV_RESTORE_REGS)
+ : "=a"(f)
+- : "0"(f),
++ : PV_FLAGS_ARG(f),
+ paravirt_type(pv_irq_ops.restore_fl),
+ paravirt_clobber(CLBR_EAX)
+- : "memory", "cc");
++ : "memory", "cc" PV_EXTRA_CLOBBERS);
+ }
+
+ static inline void raw_local_irq_disable(void)
+ {
+- asm volatile(paravirt_alt("pushl %%ecx; pushl %%edx;"
++ asm volatile(paravirt_alt(PV_SAVE_REGS
+ PARAVIRT_CALL
+- "popl %%edx; popl %%ecx")
++ PV_RESTORE_REGS)
+ :
+ : paravirt_type(pv_irq_ops.irq_disable),
+ paravirt_clobber(CLBR_EAX)
+- : "memory", "eax", "cc");
++ : "memory", "eax", "cc" PV_EXTRA_CLOBBERS);
+ }
+
+ static inline void raw_local_irq_enable(void)
+ {
+- asm volatile(paravirt_alt("pushl %%ecx; pushl %%edx;"
++ asm volatile(paravirt_alt(PV_SAVE_REGS
+ PARAVIRT_CALL
+- "popl %%edx; popl %%ecx")
++ PV_RESTORE_REGS)
+ :
+ : paravirt_type(pv_irq_ops.irq_enable),
+ paravirt_clobber(CLBR_EAX)
+- : "memory", "eax", "cc");
++ : "memory", "eax", "cc" PV_EXTRA_CLOBBERS);
+ }
+
+ static inline unsigned long __raw_local_irq_save(void)
+@@ -1071,27 +1312,6 @@ static inline unsigned long __raw_local_irq_save(void)
+ return f;
+ }
+
+-#define CLI_STRING \
+- _paravirt_alt("pushl %%ecx; pushl %%edx;" \
+- "call *%[paravirt_cli_opptr];" \
+- "popl %%edx; popl %%ecx", \
+- "%c[paravirt_cli_type]", "%c[paravirt_clobber]")
+-
+-#define STI_STRING \
+- _paravirt_alt("pushl %%ecx; pushl %%edx;" \
+- "call *%[paravirt_sti_opptr];" \
+- "popl %%edx; popl %%ecx", \
+- "%c[paravirt_sti_type]", "%c[paravirt_clobber]")
+-
+-#define CLI_STI_CLOBBERS , "%eax"
+-#define CLI_STI_INPUT_ARGS \
+- , \
+- [paravirt_cli_type] "i" (PARAVIRT_PATCH(pv_irq_ops.irq_disable)), \
+- [paravirt_cli_opptr] "m" (pv_irq_ops.irq_disable), \
+- [paravirt_sti_type] "i" (PARAVIRT_PATCH(pv_irq_ops.irq_enable)), \
+- [paravirt_sti_opptr] "m" (pv_irq_ops.irq_enable), \
+- paravirt_clobber(CLBR_EAX)
+-
+ /* Make sure as little as possible of this mess escapes. */
+ #undef PARAVIRT_CALL
+ #undef __PVOP_CALL
+@@ -1109,43 +1329,72 @@ static inline unsigned long __raw_local_irq_save(void)
+
+ #else /* __ASSEMBLY__ */
+
+-#define PARA_PATCH(struct, off) ((PARAVIRT_PATCH_##struct + (off)) / 4)
+-
+-#define PARA_SITE(ptype, clobbers, ops) \
++#define _PVSITE(ptype, clobbers, ops, word, algn) \
+ 771:; \
+ ops; \
+ 772:; \
+ .pushsection .parainstructions,"a"; \
+- .long 771b; \
++ .align algn; \
++ word 771b; \
+ .byte ptype; \
+ .byte 772b-771b; \
+ .short clobbers; \
+ .popsection
+
++
++#ifdef CONFIG_X86_64
++#define PV_SAVE_REGS pushq %rax; pushq %rdi; pushq %rcx; pushq %rdx
++#define PV_RESTORE_REGS popq %rdx; popq %rcx; popq %rdi; popq %rax
++#define PARA_PATCH(struct, off) ((PARAVIRT_PATCH_##struct + (off)) / 8)
++#define PARA_SITE(ptype, clobbers, ops) _PVSITE(ptype, clobbers, ops, .quad, 8)
++#else
++#define PV_SAVE_REGS pushl %eax; pushl %edi; pushl %ecx; pushl %edx
++#define PV_RESTORE_REGS popl %edx; popl %ecx; popl %edi; popl %eax
++#define PARA_PATCH(struct, off) ((PARAVIRT_PATCH_##struct + (off)) / 4)
++#define PARA_SITE(ptype, clobbers, ops) _PVSITE(ptype, clobbers, ops, .long, 4)
++#endif
++
+ #define INTERRUPT_RETURN \
+ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_iret), CLBR_NONE, \
+ jmp *%cs:pv_cpu_ops+PV_CPU_iret)
+
+ #define DISABLE_INTERRUPTS(clobbers) \
+ PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_irq_disable), clobbers, \
+- pushl %eax; pushl %ecx; pushl %edx; \
++ PV_SAVE_REGS; \
+ call *%cs:pv_irq_ops+PV_IRQ_irq_disable; \
+- popl %edx; popl %ecx; popl %eax) \
++ PV_RESTORE_REGS;) \
+
+ #define ENABLE_INTERRUPTS(clobbers) \
+ PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_irq_enable), clobbers, \
+- pushl %eax; pushl %ecx; pushl %edx; \
++ PV_SAVE_REGS; \
+ call *%cs:pv_irq_ops+PV_IRQ_irq_enable; \
+- popl %edx; popl %ecx; popl %eax)
++ PV_RESTORE_REGS;)
++
++#define ENABLE_INTERRUPTS_SYSCALL_RET \
++ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_irq_enable_syscall_ret),\
++ CLBR_NONE, \
++ jmp *%cs:pv_cpu_ops+PV_CPU_irq_enable_syscall_ret)
+
+-#define ENABLE_INTERRUPTS_SYSEXIT \
+- PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_irq_enable_sysexit), CLBR_NONE,\
+- jmp *%cs:pv_cpu_ops+PV_CPU_irq_enable_sysexit)
+
++#ifdef CONFIG_X86_32
+ #define GET_CR0_INTO_EAX \
+ push %ecx; push %edx; \
+ call *pv_cpu_ops+PV_CPU_read_cr0; \
+ pop %edx; pop %ecx
++#else
++#define SWAPGS \
++ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs), CLBR_NONE, \
++ PV_SAVE_REGS; \
++ call *pv_cpu_ops+PV_CPU_swapgs; \
++ PV_RESTORE_REGS \
++ )
++
++#define GET_CR2_INTO_RCX \
++ call *pv_mmu_ops+PV_MMU_read_cr2; \
++ movq %rax, %rcx; \
++ xorq %rax, %rax;
++
++#endif
+
+ #endif /* __ASSEMBLY__ */
+ #endif /* CONFIG_PARAVIRT */
+diff --git a/include/asm-x86/pci.h b/include/asm-x86/pci.h
+index e883619..c61190c 100644
+--- a/include/asm-x86/pci.h
++++ b/include/asm-x86/pci.h
+@@ -66,6 +66,7 @@ extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
+
+
+ #ifdef CONFIG_PCI
++extern void early_quirks(void);
+ static inline void pci_dma_burst_advice(struct pci_dev *pdev,
+ enum pci_dma_burst_strategy *strat,
+ unsigned long *strategy_parameter)
+@@ -73,9 +74,10 @@ static inline void pci_dma_burst_advice(struct pci_dev *pdev,
+ *strat = PCI_DMA_BURST_INFINITY;
+ *strategy_parameter = ~0UL;
+ }
++#else
++static inline void early_quirks(void) { }
+ #endif
+
+-
+ #endif /* __KERNEL__ */
+
+ #ifdef CONFIG_X86_32
+@@ -90,6 +92,19 @@ static inline void pci_dma_burst_advice(struct pci_dev *pdev,
+ /* generic pci stuff */
+ #include <asm-generic/pci.h>
+
++#ifdef CONFIG_NUMA
++/* Returns the node based on pci bus */
++static inline int __pcibus_to_node(struct pci_bus *bus)
++{
++ struct pci_sysdata *sd = bus->sysdata;
+
++ return sd->node;
++}
++
++static inline cpumask_t __pcibus_to_cpumask(struct pci_bus *bus)
++{
++ return node_to_cpumask(__pcibus_to_node(bus));
++}
++#endif
+
+ #endif
+diff --git a/include/asm-x86/pci_64.h b/include/asm-x86/pci_64.h
+index ef54226..3746903 100644
+--- a/include/asm-x86/pci_64.h
++++ b/include/asm-x86/pci_64.h
+@@ -26,7 +26,6 @@ extern int (*pci_config_write)(int seg, int bus, int dev, int fn, int reg, int l
+
+
+ extern void pci_iommu_alloc(void);
+-extern int iommu_setup(char *opt);
+
+ /* The PCI address space does equal the physical memory
+ * address space. The networking and block device layers use
+diff --git a/include/asm-x86/pda.h b/include/asm-x86/pda.h
+index 35962bb..c0305bf 100644
+--- a/include/asm-x86/pda.h
++++ b/include/asm-x86/pda.h
+@@ -7,22 +7,22 @@
+ #include <linux/cache.h>
+ #include <asm/page.h>
+
+-/* Per processor datastructure. %gs points to it while the kernel runs */
++/* Per processor datastructure. %gs points to it while the kernel runs */
+ struct x8664_pda {
+ struct task_struct *pcurrent; /* 0 Current process */
+ unsigned long data_offset; /* 8 Per cpu data offset from linker
+ address */
+- unsigned long kernelstack; /* 16 top of kernel stack for current */
+- unsigned long oldrsp; /* 24 user rsp for system call */
+- int irqcount; /* 32 Irq nesting counter. Starts with -1 */
+- int cpunumber; /* 36 Logical CPU number */
++ unsigned long kernelstack; /* 16 top of kernel stack for current */
++ unsigned long oldrsp; /* 24 user rsp for system call */
++ int irqcount; /* 32 Irq nesting counter. Starts -1 */
++ unsigned int cpunumber; /* 36 Logical CPU number */
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ unsigned long stack_canary; /* 40 stack canary value */
+ /* gcc-ABI: this canary MUST be at
+ offset 40!!! */
+ #endif
+ char *irqstackptr;
+- int nodenumber; /* number of current node */
++ unsigned int nodenumber; /* number of current node */
+ unsigned int __softirq_pending;
+ unsigned int __nmi_count; /* number of NMI on this CPUs */
+ short mmu_state;
+@@ -40,13 +40,14 @@ struct x8664_pda {
+
+ extern struct x8664_pda *_cpu_pda[];
+ extern struct x8664_pda boot_cpu_pda[];
++extern void pda_init(int);
+
+ #define cpu_pda(i) (_cpu_pda[i])
+
+-/*
++/*
+ * There is no fast way to get the base address of the PDA, all the accesses
+ * have to mention %fs/%gs. So it needs to be done this Torvaldian way.
+- */
++ */
+ extern void __bad_pda_field(void) __attribute__((noreturn));
+
+ /*
+@@ -57,70 +58,70 @@ extern struct x8664_pda _proxy_pda;
+
+ #define pda_offset(field) offsetof(struct x8664_pda, field)
+
+-#define pda_to_op(op,field,val) do { \
++#define pda_to_op(op, field, val) do { \
+ typedef typeof(_proxy_pda.field) T__; \
+ if (0) { T__ tmp__; tmp__ = (val); } /* type checking */ \
+ switch (sizeof(_proxy_pda.field)) { \
+ case 2: \
+- asm(op "w %1,%%gs:%c2" : \
++ asm(op "w %1,%%gs:%c2" : \
+ "+m" (_proxy_pda.field) : \
+ "ri" ((T__)val), \
+- "i"(pda_offset(field))); \
+- break; \
++ "i"(pda_offset(field))); \
++ break; \
+ case 4: \
+- asm(op "l %1,%%gs:%c2" : \
++ asm(op "l %1,%%gs:%c2" : \
+ "+m" (_proxy_pda.field) : \
+ "ri" ((T__)val), \
+- "i" (pda_offset(field))); \
++ "i" (pda_offset(field))); \
+ break; \
+ case 8: \
+- asm(op "q %1,%%gs:%c2": \
++ asm(op "q %1,%%gs:%c2": \
+ "+m" (_proxy_pda.field) : \
+ "ri" ((T__)val), \
+- "i"(pda_offset(field))); \
++ "i"(pda_offset(field))); \
+ break; \
+- default: \
++ default: \
+ __bad_pda_field(); \
+- } \
+- } while (0)
++ } \
++ } while (0)
+
+ #define pda_from_op(op,field) ({ \
+ typeof(_proxy_pda.field) ret__; \
+ switch (sizeof(_proxy_pda.field)) { \
+- case 2: \
+- asm(op "w %%gs:%c1,%0" : \
++ case 2: \
++ asm(op "w %%gs:%c1,%0" : \
+ "=r" (ret__) : \
+- "i" (pda_offset(field)), \
+- "m" (_proxy_pda.field)); \
++ "i" (pda_offset(field)), \
++ "m" (_proxy_pda.field)); \
+ break; \
+ case 4: \
+ asm(op "l %%gs:%c1,%0": \
+ "=r" (ret__): \
+- "i" (pda_offset(field)), \
+- "m" (_proxy_pda.field)); \
++ "i" (pda_offset(field)), \
++ "m" (_proxy_pda.field)); \
+ break; \
+- case 8: \
++ case 8: \
+ asm(op "q %%gs:%c1,%0": \
+ "=r" (ret__) : \
+- "i" (pda_offset(field)), \
+- "m" (_proxy_pda.field)); \
++ "i" (pda_offset(field)), \
++ "m" (_proxy_pda.field)); \
+ break; \
+- default: \
++ default: \
+ __bad_pda_field(); \
+ } \
+ ret__; })
+
+-#define read_pda(field) pda_from_op("mov",field)
+-#define write_pda(field,val) pda_to_op("mov",field,val)
+-#define add_pda(field,val) pda_to_op("add",field,val)
+-#define sub_pda(field,val) pda_to_op("sub",field,val)
+-#define or_pda(field,val) pda_to_op("or",field,val)
++#define read_pda(field) pda_from_op("mov", field)
++#define write_pda(field, val) pda_to_op("mov", field, val)
++#define add_pda(field, val) pda_to_op("add", field, val)
++#define sub_pda(field, val) pda_to_op("sub", field, val)
++#define or_pda(field, val) pda_to_op("or", field, val)
+
+ /* This is not atomic against other CPUs -- CPU preemption needs to be off */
+-#define test_and_clear_bit_pda(bit,field) ({ \
++#define test_and_clear_bit_pda(bit, field) ({ \
+ int old__; \
+ asm volatile("btr %2,%%gs:%c3\n\tsbbl %0,%0" \
+- : "=r" (old__), "+m" (_proxy_pda.field) \
++ : "=r" (old__), "+m" (_proxy_pda.field) \
+ : "dIr" (bit), "i" (pda_offset(field)) : "memory"); \
+ old__; \
+ })
+diff --git a/include/asm-x86/percpu.h b/include/asm-x86/percpu.h
+index a1aaad2..0dec00f 100644
+--- a/include/asm-x86/percpu.h
++++ b/include/asm-x86/percpu.h
+@@ -1,5 +1,142 @@
+-#ifdef CONFIG_X86_32
+-# include "percpu_32.h"
+-#else
+-# include "percpu_64.h"
++#ifndef _ASM_X86_PERCPU_H_
++#define _ASM_X86_PERCPU_H_
++
++#ifdef CONFIG_X86_64
++#include <linux/compiler.h>
++
++/* Same as asm-generic/percpu.h, except that we store the per cpu offset
++ in the PDA. Longer term the PDA and every per cpu variable
++ should be just put into a single section and referenced directly
++ from %gs */
++
++#ifdef CONFIG_SMP
++#include <asm/pda.h>
++
++#define __per_cpu_offset(cpu) (cpu_pda(cpu)->data_offset)
++#define __my_cpu_offset read_pda(data_offset)
++
++#define per_cpu_offset(x) (__per_cpu_offset(x))
++
+ #endif
++#include <asm-generic/percpu.h>
++
++DECLARE_PER_CPU(struct x8664_pda, pda);
++
++#else /* CONFIG_X86_64 */
++
++#ifdef __ASSEMBLY__
++
++/*
++ * PER_CPU finds an address of a per-cpu variable.
++ *
++ * Args:
++ * var - variable name
++ * reg - 32bit register
++ *
++ * The resulting address is stored in the "reg" argument.
++ *
++ * Example:
++ * PER_CPU(cpu_gdt_descr, %ebx)
++ */
++#ifdef CONFIG_SMP
++#define PER_CPU(var, reg) \
++ movl %fs:per_cpu__##this_cpu_off, reg; \
++ lea per_cpu__##var(reg), reg
++#define PER_CPU_VAR(var) %fs:per_cpu__##var
++#else /* ! SMP */
++#define PER_CPU(var, reg) \
++ movl $per_cpu__##var, reg
++#define PER_CPU_VAR(var) per_cpu__##var
++#endif /* SMP */
++
++#else /* ...!ASSEMBLY */
++
++/*
++ * PER_CPU finds an address of a per-cpu variable.
++ *
++ * Args:
++ * var - variable name
++ * cpu - 32bit register containing the current CPU number
++ *
++ * The resulting address is stored in the "cpu" argument.
++ *
++ * Example:
++ * PER_CPU(cpu_gdt_descr, %ebx)
++ */
++#ifdef CONFIG_SMP
++
++#define __my_cpu_offset x86_read_percpu(this_cpu_off)
++
++/* fs segment starts at (positive) offset == __per_cpu_offset[cpu] */
++#define __percpu_seg "%%fs:"
++
++#else /* !SMP */
++
++#define __percpu_seg ""
++
++#endif /* SMP */
++
++#include <asm-generic/percpu.h>
++
++/* We can use this directly for local CPU (faster). */
++DECLARE_PER_CPU(unsigned long, this_cpu_off);
++
++/* For arch-specific code, we can use direct single-insn ops (they
++ * don't give an lvalue though). */
++extern void __bad_percpu_size(void);
++
++#define percpu_to_op(op,var,val) \
++ do { \
++ typedef typeof(var) T__; \
++ if (0) { T__ tmp__; tmp__ = (val); } \
++ switch (sizeof(var)) { \
++ case 1: \
++ asm(op "b %1,"__percpu_seg"%0" \
++ : "+m" (var) \
++ :"ri" ((T__)val)); \
++ break; \
++ case 2: \
++ asm(op "w %1,"__percpu_seg"%0" \
++ : "+m" (var) \
++ :"ri" ((T__)val)); \
++ break; \
++ case 4: \
++ asm(op "l %1,"__percpu_seg"%0" \
++ : "+m" (var) \
++ :"ri" ((T__)val)); \
++ break; \
++ default: __bad_percpu_size(); \
++ } \
++ } while (0)
++
++#define percpu_from_op(op,var) \
++ ({ \
++ typeof(var) ret__; \
++ switch (sizeof(var)) { \
++ case 1: \
++ asm(op "b "__percpu_seg"%1,%0" \
++ : "=r" (ret__) \
++ : "m" (var)); \
++ break; \
++ case 2: \
++ asm(op "w "__percpu_seg"%1,%0" \
++ : "=r" (ret__) \
++ : "m" (var)); \
++ break; \
++ case 4: \
++ asm(op "l "__percpu_seg"%1,%0" \
++ : "=r" (ret__) \
++ : "m" (var)); \
++ break; \
++ default: __bad_percpu_size(); \
++ } \
++ ret__; })
++
++#define x86_read_percpu(var) percpu_from_op("mov", per_cpu__##var)
++#define x86_write_percpu(var,val) percpu_to_op("mov", per_cpu__##var, val)
++#define x86_add_percpu(var,val) percpu_to_op("add", per_cpu__##var, val)
++#define x86_sub_percpu(var,val) percpu_to_op("sub", per_cpu__##var, val)
++#define x86_or_percpu(var,val) percpu_to_op("or", per_cpu__##var, val)
++#endif /* !__ASSEMBLY__ */
++#endif /* !CONFIG_X86_64 */
++#endif /* _ASM_X86_PERCPU_H_ */
+diff --git a/include/asm-x86/percpu_32.h b/include/asm-x86/percpu_32.h
deleted file mode 100644
-index f409adb..0000000
---- a/include/asm-sh64/param.h
+index a7ebd43..0000000
+--- a/include/asm-x86/percpu_32.h
+++ /dev/null
-@@ -1,42 +0,0 @@
+@@ -1,154 +0,0 @@
+-#ifndef __ARCH_I386_PERCPU__
+-#define __ARCH_I386_PERCPU__
+-
+-#ifdef __ASSEMBLY__
+-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
+- * PER_CPU finds an address of a per-cpu variable.
- *
-- * include/asm-sh64/param.h
+- * Args:
+- * var - variable name
+- * reg - 32bit register
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
+- * The resulting address is stored in the "reg" argument.
- *
+- * Example:
+- * PER_CPU(cpu_gdt_descr, %ebx)
- */
--#ifndef __ASM_SH64_PARAM_H
--#define __ASM_SH64_PARAM_H
--
--
--#ifdef __KERNEL__
--# ifdef CONFIG_SH_WDT
--# define HZ 1000 /* Needed for high-res WOVF */
--# else
--# define HZ 100
--# endif
--# define USER_HZ 100 /* User interfaces are in "ticks" */
--# define CLOCKS_PER_SEC (USER_HZ) /* frequency at which times() counts */
--#endif
--
--#ifndef HZ
--#define HZ 100
--#endif
--
--#define EXEC_PAGESIZE 4096
--
--#ifndef NGROUPS
--#define NGROUPS 32
--#endif
--
--#ifndef NOGROUP
--#define NOGROUP (-1)
--#endif
--
--#define MAXHOSTNAMELEN 64 /* max length of hostname */
--
--#endif /* __ASM_SH64_PARAM_H */
-diff --git a/include/asm-sh64/pci.h b/include/asm-sh64/pci.h
-deleted file mode 100644
-index 18055db..0000000
---- a/include/asm-sh64/pci.h
-+++ /dev/null
-@@ -1,102 +0,0 @@
--#ifndef __ASM_SH64_PCI_H
--#define __ASM_SH64_PCI_H
--
--#ifdef __KERNEL__
--
--#include <linux/dma-mapping.h>
--
--/* Can be used to override the logic in pci_scan_bus for skipping
-- already-configured bus numbers - to be used for buggy BIOSes
-- or architectures with incomplete PCI setup by the loader */
+-#ifdef CONFIG_SMP
+-#define PER_CPU(var, reg) \
+- movl %fs:per_cpu__##this_cpu_off, reg; \
+- lea per_cpu__##var(reg), reg
+-#define PER_CPU_VAR(var) %fs:per_cpu__##var
+-#else /* ! SMP */
+-#define PER_CPU(var, reg) \
+- movl $per_cpu__##var, reg
+-#define PER_CPU_VAR(var) per_cpu__##var
+-#endif /* SMP */
-
--#define pcibios_assign_all_busses() 1
+-#else /* ...!ASSEMBLY */
-
-/*
-- * These are currently the correct values for the STM overdrive board
-- * We need some way of setting this on a board specific way, it will
-- * not be the same on other boards I think
+- * PER_CPU finds an address of a per-cpu variable.
+- *
+- * Args:
+- * var - variable name
+- * cpu - 32bit register containing the current CPU number
+- *
+- * The resulting address is stored in the "cpu" argument.
+- *
+- * Example:
+- * PER_CPU(cpu_gdt_descr, %ebx)
- */
--#if defined(CONFIG_CPU_SUBTYPE_SH5_101) || defined(CONFIG_CPU_SUBTYPE_SH5_103)
--#define PCIBIOS_MIN_IO 0x2000
--#define PCIBIOS_MIN_MEM 0x40000000
--#endif
--
--extern void pcibios_set_master(struct pci_dev *dev);
+-#ifdef CONFIG_SMP
+-/* Same as generic implementation except for optimized local access. */
+-#define __GENERIC_PER_CPU
-
--/*
-- * Set penalize isa irq function
-- */
--static inline void pcibios_penalize_isa_irq(int irq, int active)
--{
-- /* We don't do dynamic PCI IRQ allocation */
--}
+-/* This is used for other cpus to find our section. */
+-extern unsigned long __per_cpu_offset[];
-
--/* Dynamic DMA mapping stuff.
-- * SuperH has everything mapped statically like x86.
-- */
+-#define per_cpu_offset(x) (__per_cpu_offset[x])
-
--/* The PCI address space does equal the physical memory
-- * address space. The networking and block device layers use
-- * this boolean for bounce buffer decisions.
-- */
--#define PCI_DMA_BUS_IS_PHYS (1)
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
-
--#include <linux/types.h>
--#include <linux/slab.h>
--#include <asm/scatterlist.h>
--#include <linux/string.h>
--#include <asm/io.h>
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_aligned_in_smp
-
--/* pci_unmap_{single,page} being a nop depends upon the
-- * configuration.
-- */
--#ifdef CONFIG_SH_PCIDMA_NONCOHERENT
--#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) \
-- dma_addr_t ADDR_NAME;
--#define DECLARE_PCI_UNMAP_LEN(LEN_NAME) \
-- __u32 LEN_NAME;
--#define pci_unmap_addr(PTR, ADDR_NAME) \
-- ((PTR)->ADDR_NAME)
--#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) \
-- (((PTR)->ADDR_NAME) = (VAL))
--#define pci_unmap_len(PTR, LEN_NAME) \
-- ((PTR)->LEN_NAME)
--#define pci_unmap_len_set(PTR, LEN_NAME, VAL) \
-- (((PTR)->LEN_NAME) = (VAL))
--#else
--#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)
--#define DECLARE_PCI_UNMAP_LEN(LEN_NAME)
--#define pci_unmap_addr(PTR, ADDR_NAME) (0)
--#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)
--#define pci_unmap_len(PTR, LEN_NAME) (0)
--#define pci_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0)
--#endif
+-/* We can use this directly for local CPU (faster). */
+-DECLARE_PER_CPU(unsigned long, this_cpu_off);
-
--#ifdef CONFIG_PCI
--static inline void pci_dma_burst_advice(struct pci_dev *pdev,
-- enum pci_dma_burst_strategy *strat,
-- unsigned long *strategy_parameter)
--{
-- *strat = PCI_DMA_BURST_INFINITY;
-- *strategy_parameter = ~0UL;
--}
--#endif
+-/* var is in discarded region: offset to particular copy we want */
+-#define per_cpu(var, cpu) (*({ \
+- extern int simple_indentifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]); }))
-
--/* Board-specific fixup routines. */
--extern void pcibios_fixup(void);
--extern void pcibios_fixup_irqs(void);
+-#define __raw_get_cpu_var(var) (*({ \
+- extern int simple_indentifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, x86_read_percpu(this_cpu_off)); \
+-}))
-
--#ifdef CONFIG_PCI_AUTO
--extern int pciauto_assign_resources(int busno, struct pci_channel *hose);
--#endif
+-#define __get_cpu_var(var) __raw_get_cpu_var(var)
-
--#endif /* __KERNEL__ */
+-/* A macro to avoid #include hell... */
+-#define percpu_modcopy(pcpudst, src, size) \
+-do { \
+- unsigned int __i; \
+- for_each_possible_cpu(__i) \
+- memcpy((pcpudst)+__per_cpu_offset[__i], \
+- (src), (size)); \
+-} while (0)
-
--/* generic pci stuff */
--#include <asm-generic/pci.h>
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
-
--/* generic DMA-mapping stuff */
--#include <asm-generic/pci-dma-compat.h>
+-/* fs segment starts at (positive) offset == __per_cpu_offset[cpu] */
+-#define __percpu_seg "%%fs:"
+-#else /* !SMP */
+-#include <asm-generic/percpu.h>
+-#define __percpu_seg ""
+-#endif /* SMP */
-
--#endif /* __ASM_SH64_PCI_H */
+-/* For arch-specific code, we can use direct single-insn ops (they
+- * don't give an lvalue though). */
+-extern void __bad_percpu_size(void);
-
-diff --git a/include/asm-sh64/percpu.h b/include/asm-sh64/percpu.h
-deleted file mode 100644
-index a01d16c..0000000
---- a/include/asm-sh64/percpu.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_PERCPU
--#define __ASM_SH64_PERCPU
+-#define percpu_to_op(op,var,val) \
+- do { \
+- typedef typeof(var) T__; \
+- if (0) { T__ tmp__; tmp__ = (val); } \
+- switch (sizeof(var)) { \
+- case 1: \
+- asm(op "b %1,"__percpu_seg"%0" \
+- : "+m" (var) \
+- :"ri" ((T__)val)); \
+- break; \
+- case 2: \
+- asm(op "w %1,"__percpu_seg"%0" \
+- : "+m" (var) \
+- :"ri" ((T__)val)); \
+- break; \
+- case 4: \
+- asm(op "l %1,"__percpu_seg"%0" \
+- : "+m" (var) \
+- :"ri" ((T__)val)); \
+- break; \
+- default: __bad_percpu_size(); \
+- } \
+- } while (0)
-
--#include <asm-generic/percpu.h>
+-#define percpu_from_op(op,var) \
+- ({ \
+- typeof(var) ret__; \
+- switch (sizeof(var)) { \
+- case 1: \
+- asm(op "b "__percpu_seg"%1,%0" \
+- : "=r" (ret__) \
+- : "m" (var)); \
+- break; \
+- case 2: \
+- asm(op "w "__percpu_seg"%1,%0" \
+- : "=r" (ret__) \
+- : "m" (var)); \
+- break; \
+- case 4: \
+- asm(op "l "__percpu_seg"%1,%0" \
+- : "=r" (ret__) \
+- : "m" (var)); \
+- break; \
+- default: __bad_percpu_size(); \
+- } \
+- ret__; })
+-
+-#define x86_read_percpu(var) percpu_from_op("mov", per_cpu__##var)
+-#define x86_write_percpu(var,val) percpu_to_op("mov", per_cpu__##var, val)
+-#define x86_add_percpu(var,val) percpu_to_op("add", per_cpu__##var, val)
+-#define x86_sub_percpu(var,val) percpu_to_op("sub", per_cpu__##var, val)
+-#define x86_or_percpu(var,val) percpu_to_op("or", per_cpu__##var, val)
+-#endif /* !__ASSEMBLY__ */
-
--#endif /* __ASM_SH64_PERCPU */
-diff --git a/include/asm-sh64/pgalloc.h b/include/asm-sh64/pgalloc.h
+-#endif /* __ARCH_I386_PERCPU__ */
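The `percpu_to_op()`/`percpu_from_op()` macros removed above dispatch on `sizeof(var)` to pick the right-width `mov`/`add` instruction against the `%fs`-relative per-cpu segment. As an aside, the same dispatch pattern can be sketched in plain userspace C — here `cpu_area`, `fake_write_percpu` and `fake_read_percpu` are illustrative names standing in for the segment-relative asm, not kernel API:

```c
#include <assert.h>
#include <string.h>

/* A per-cpu data area stand-in; the kernel reaches its copy via %fs. */
static unsigned char cpu_area[64];

/* Mimics percpu_to_op(): a sizeof() switch selects the access width,
 * with memcpy standing in for the byte/word/long mov instructions. */
#define fake_write_percpu(off, val)                                \
    do {                                                           \
        __typeof__(val) tmp__ = (val);                             \
        switch (sizeof(tmp__)) {                                   \
        case 1: case 2: case 4: case 8:                            \
            memcpy(&cpu_area[off], &tmp__, sizeof(tmp__));         \
            break;                                                 \
        default:                                                   \
            assert(0 && "bad percpu size"); /* __bad_percpu_size */\
        }                                                          \
    } while (0)

/* Mimics percpu_from_op(): statement-expression returning the value. */
#define fake_read_percpu(off, type)                                \
    ({ type ret__;                                                 \
       memcpy(&ret__, &cpu_area[off], sizeof(ret__));              \
       ret__; })
```

The `if (0) { T__ tmp__; tmp__ = (val); }` trick in the real macro exists purely so the compiler type-checks `val` against the variable; the sketch gets the same effect from the `__typeof__` assignment.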
+diff --git a/include/asm-x86/percpu_64.h b/include/asm-x86/percpu_64.h
deleted file mode 100644
-index 6eccab7..0000000
---- a/include/asm-sh64/pgalloc.h
+index 5abd482..0000000
+--- a/include/asm-x86/percpu_64.h
+++ /dev/null
-@@ -1,125 +0,0 @@
--#ifndef __ASM_SH64_PGALLOC_H
--#define __ASM_SH64_PGALLOC_H
--
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/pgalloc.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003, 2004 Paul Mundt
-- * Copyright (C) 2003, 2004 Richard Curnow
-- *
-- */
+@@ -1,68 +0,0 @@
+-#ifndef _ASM_X8664_PERCPU_H_
+-#define _ASM_X8664_PERCPU_H_
+-#include <linux/compiler.h>
-
--#include <linux/mm.h>
--#include <linux/quicklist.h>
--#include <asm/page.h>
+-/* Same as asm-generic/percpu.h, except that we store the per cpu offset
+- in the PDA. Longer term the PDA and every per cpu variable
+- should be just put into a single section and referenced directly
+- from %gs */
-
--static inline void pgd_init(unsigned long page)
--{
-- unsigned long *pgd = (unsigned long *)page;
-- extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
-- int i;
+-#ifdef CONFIG_SMP
-
-- for (i = 0; i < USER_PTRS_PER_PGD; i++)
-- pgd[i] = (unsigned long)empty_bad_pte_table;
--}
+-#include <asm/pda.h>
-
--/*
-- * Allocate and free page tables. The xxx_kernel() versions are
-- * used to allocate a kernel page table - this turns on ASN bits
-- * if any.
-- */
+-#define __per_cpu_offset(cpu) (cpu_pda(cpu)->data_offset)
+-#define __my_cpu_offset() read_pda(data_offset)
-
--static inline pgd_t *get_pgd_slow(void)
--{
-- unsigned int pgd_size = (USER_PTRS_PER_PGD * sizeof(pgd_t));
-- pgd_t *ret = kmalloc(pgd_size, GFP_KERNEL);
-- return ret;
--}
+-#define per_cpu_offset(x) (__per_cpu_offset(x))
-
--static inline pgd_t *pgd_alloc(struct mm_struct *mm)
--{
-- return quicklist_alloc(0, GFP_KERNEL, NULL);
--}
+-/* Separate out the type, so (int[3], foo) works. */
+-#define DEFINE_PER_CPU(type, name) \
+- __attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name
+-
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- __attribute__((__section__(".data.percpu.shared_aligned"))) \
+- __typeof__(type) per_cpu__##name \
+- ____cacheline_internodealigned_in_smp
+-
+-/* var is in discarded region: offset to particular copy we want */
+-#define per_cpu(var, cpu) (*({ \
+- extern int simple_identifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, __per_cpu_offset(cpu)); }))
+-#define __get_cpu_var(var) (*({ \
+- extern int simple_identifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, __my_cpu_offset()); }))
+-#define __raw_get_cpu_var(var) (*({ \
+- extern int simple_identifier_##var(void); \
+- RELOC_HIDE(&per_cpu__##var, __my_cpu_offset()); }))
-
--static inline void pgd_free(pgd_t *pgd)
--{
-- quicklist_free(0, NULL, pgd);
--}
+-/* A macro to avoid #include hell... */
+-#define percpu_modcopy(pcpudst, src, size) \
+-do { \
+- unsigned int __i; \
+- for_each_possible_cpu(__i) \
+- memcpy((pcpudst)+__per_cpu_offset(__i), \
+- (src), (size)); \
+-} while (0)
-
--static inline struct page *pte_alloc_one(struct mm_struct *mm,
-- unsigned long address)
--{
-- void *pg = quicklist_alloc(0, GFP_KERNEL, NULL);
-- return pg ? virt_to_page(pg) : NULL;
--}
+-extern void setup_per_cpu_areas(void);
-
--static inline void pte_free_kernel(pte_t *pte)
--{
-- quicklist_free(0, NULL, pte);
--}
+-#else /* ! SMP */
-
--static inline void pte_free(struct page *pte)
--{
-- quicklist_free_page(0, NULL, pte);
--}
+-#define DEFINE_PER_CPU(type, name) \
+- __typeof__(type) per_cpu__##name
+-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
+- DEFINE_PER_CPU(type, name)
-
--static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
-- unsigned long address)
--{
-- return quicklist_alloc(0, GFP_KERNEL, NULL);
--}
+-#define per_cpu(var, cpu) (*((void)(cpu), &per_cpu__##var))
+-#define __get_cpu_var(var) per_cpu__##var
+-#define __raw_get_cpu_var(var) per_cpu__##var
-
--#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+-#endif /* SMP */
-
--/*
-- * allocating and freeing a pmd is trivial: the 1-entry pmd is
-- * inside the pgd, so has no extra memory associated with it.
-- */
+-#define DECLARE_PER_CPU(type, name) extern __typeof__(type) per_cpu__##name
-
--#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+-#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
+-#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
-
+-#endif /* _ASM_X8664_PERCPU_H_ */
+diff --git a/include/asm-x86/pgalloc_32.h b/include/asm-x86/pgalloc_32.h
+index f2fc33c..10c2b45 100644
+--- a/include/asm-x86/pgalloc_32.h
++++ b/include/asm-x86/pgalloc_32.h
+@@ -3,31 +3,33 @@
+
+ #include <linux/threads.h>
+ #include <linux/mm.h> /* for struct page */
++#include <asm/tlb.h>
++#include <asm-generic/tlb.h>
+
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/paravirt.h>
+ #else
+ #define paravirt_alloc_pt(mm, pfn) do { } while (0)
+-#define paravirt_alloc_pd(pfn) do { } while (0)
+-#define paravirt_alloc_pd(pfn) do { } while (0)
++#define paravirt_alloc_pd(mm, pfn) do { } while (0)
+ #define paravirt_alloc_pd_clone(pfn, clonepfn, start, count) do { } while (0)
+ #define paravirt_release_pt(pfn) do { } while (0)
+ #define paravirt_release_pd(pfn) do { } while (0)
+ #endif
+
+-#define pmd_populate_kernel(mm, pmd, pte) \
+-do { \
+- paravirt_alloc_pt(mm, __pa(pte) >> PAGE_SHIFT); \
+- set_pmd(pmd, __pmd(_PAGE_TABLE + __pa(pte))); \
+-} while (0)
++static inline void pmd_populate_kernel(struct mm_struct *mm,
++ pmd_t *pmd, pte_t *pte)
++{
++ paravirt_alloc_pt(mm, __pa(pte) >> PAGE_SHIFT);
++ set_pmd(pmd, __pmd(__pa(pte) | _PAGE_TABLE));
++}
++
++static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct page *pte)
++{
++ unsigned long pfn = page_to_pfn(pte);
+
+-#define pmd_populate(mm, pmd, pte) \
+-do { \
+- paravirt_alloc_pt(mm, page_to_pfn(pte)); \
+- set_pmd(pmd, __pmd(_PAGE_TABLE + \
+- ((unsigned long long)page_to_pfn(pte) << \
+- (unsigned long long) PAGE_SHIFT))); \
+-} while (0)
++ paravirt_alloc_pt(mm, pfn);
++ set_pmd(pmd, __pmd(((pteval_t)pfn << PAGE_SHIFT) | _PAGE_TABLE));
++}
+
+ /*
+ * Allocate and free page tables.
+@@ -49,20 +51,55 @@ static inline void pte_free(struct page *pte)
+ }
+
+
+-#define __pte_free_tlb(tlb,pte) \
+-do { \
+- paravirt_release_pt(page_to_pfn(pte)); \
+- tlb_remove_page((tlb),(pte)); \
+-} while (0)
++static inline void __pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
++{
++ paravirt_release_pt(page_to_pfn(pte));
++ tlb_remove_page(tlb, pte);
++}
+
+ #ifdef CONFIG_X86_PAE
+ /*
+ * In the PAE case we free the pmds as part of the pgd.
+ */
-#define pmd_alloc_one(mm, addr) ({ BUG(); ((pmd_t *)2); })
-#define pmd_free(x) do { } while (0)
--#define pgd_populate(mm, pmd, pte) BUG()
--#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
--#define __pmd_free_tlb(tlb,pmd) do { } while (0)
--
--#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
--
--static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
--{
-- return quicklist_alloc(0, GFP_KERNEL, NULL);
--}
--
--static inline void pmd_free(pmd_t *pmd)
+-#define __pmd_free_tlb(tlb,x) do { } while (0)
+-#define pud_populate(mm, pmd, pte) BUG()
+-#endif
++static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
++{
++ return (pmd_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT);
++}
++
++static inline void pmd_free(pmd_t *pmd)
++{
++ BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
++ free_page((unsigned long)pmd);
++}
++
++static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
++{
++ /* This is called just after the pmd has been detached from
++ the pgd, which requires a full tlb flush to be recognized
++ by the CPU. Rather than incurring multiple tlb flushes
++ while the address space is being pulled down, make the tlb
++ gathering machinery do a full flush when we're done. */
++ tlb->fullmm = 1;
++
++ paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
++ tlb_remove_page(tlb, virt_to_page(pmd));
++}
++
++static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd)
++{
++ paravirt_alloc_pd(mm, __pa(pmd) >> PAGE_SHIFT);
++
++ /* Note: almost everything apart from _PAGE_PRESENT is
++ reserved at the pmd (PDPT) level. */
++ set_pud(pudp, __pud(__pa(pmd) | _PAGE_PRESENT));
++
++ /*
++ * Pentium-II erratum A13: in PAE mode we explicitly have to flush
++ * the TLB via cr3 if the top-level pgd is changed...
++ */
++ if (mm == current->active_mm)
++ write_cr3(read_cr3());
++}
++#endif /* CONFIG_X86_PAE */
+
+ #endif /* _I386_PGALLOC_H */
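The new inline `pmd_populate()` above packs a page frame number into a pmd entry as `((pteval_t)pfn << PAGE_SHIFT) | _PAGE_TABLE`, replacing the old `long long` shift dance. The arithmetic can be sketched in userspace — constants mirror the x86 values, but `pack_pmd`/`unpack_pfn` are illustrative helpers, not kernel functions:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT  12
/* _PAGE_TABLE = PRESENT|RW|USER|ACCESSED|DIRTY = 1|2|4|0x20|0x40 */
#define _PAGE_TABLE 0x67ULL

typedef uint64_t pteval_t;

/* Shift the pfn up past the low flag bits, then OR in the
 * page-table permission bits, as pmd_populate() does. */
static pteval_t pack_pmd(unsigned long pfn)
{
    return ((pteval_t)pfn << PAGE_SHIFT) | _PAGE_TABLE;
}

/* Recover the pfn by shifting the flag bits back out. */
static unsigned long unpack_pfn(pteval_t pmd)
{
    return (unsigned long)(pmd >> PAGE_SHIFT);
}
```

The widening cast to `pteval_t` before the shift matters on 32-bit PAE, where the pfn can exceed what fits in an `unsigned long` after shifting — which is exactly why the old macro cast to `unsigned long long`.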
+diff --git a/include/asm-x86/pgtable-2level.h b/include/asm-x86/pgtable-2level.h
+index 84b03cf..701404f 100644
+--- a/include/asm-x86/pgtable-2level.h
++++ b/include/asm-x86/pgtable-2level.h
+@@ -15,30 +15,31 @@ static inline void native_set_pte(pte_t *ptep , pte_t pte)
+ {
+ *ptep = pte;
+ }
+-static inline void native_set_pte_at(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep , pte_t pte)
-{
-- quicklist_free(0, NULL, pmd);
+- native_set_pte(ptep, pte);
-}
--
--#define pgd_populate(mm, pgd, pmd) pgd_set(pgd, pmd)
--#define __pmd_free_tlb(tlb,pmd) pmd_free(pmd)
--
--#else
--#error "No defined page table size"
++
+ static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
+ {
+ *pmdp = pmd;
+ }
+-#ifndef CONFIG_PARAVIRT
+-#define set_pte(pteptr, pteval) native_set_pte(pteptr, pteval)
+-#define set_pte_at(mm,addr,ptep,pteval) native_set_pte_at(mm, addr, ptep, pteval)
+-#define set_pmd(pmdptr, pmdval) native_set_pmd(pmdptr, pmdval)
-#endif
+
+-#define set_pte_atomic(pteptr, pteval) set_pte(pteptr,pteval)
+-#define set_pte_present(mm,addr,ptep,pteval) set_pte_at(mm,addr,ptep,pteval)
++static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
++{
++ native_set_pte(ptep, pte);
++}
+
+-#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+-#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
++static inline void native_set_pte_present(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, pte_t pte)
++{
++ native_set_pte(ptep, pte);
++}
++
++static inline void native_pmd_clear(pmd_t *pmdp)
++{
++ native_set_pmd(pmdp, __pmd(0));
++}
+
+ static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *xp)
+ {
+- *xp = __pte(0);
++ *xp = native_make_pte(0);
+ }
+
+ #ifdef CONFIG_SMP
+@@ -53,16 +54,6 @@ static inline pte_t native_ptep_get_and_clear(pte_t *xp)
+ #define pte_page(x) pfn_to_page(pte_pfn(x))
+ #define pte_none(x) (!(x).pte_low)
+ #define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT)
+-#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
-
--#define pmd_populate_kernel(mm, pmd, pte) \
-- set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) (pte)))
--
--static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
-- struct page *pte)
+-/*
+- * All present pages are kernel-executable:
+- */
+-static inline int pte_exec_kernel(pte_t pte)
-{
-- set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) page_address (pte)));
+- return 1;
-}
--
--static inline void check_pgt_cache(void)
+
+ /*
+ * Bits 0, 6 and 7 are taken, split up the 29 bits of offset
+@@ -74,13 +65,13 @@ static inline int pte_exec_kernel(pte_t pte)
+ ((((pte).pte_low >> 1) & 0x1f ) + (((pte).pte_low >> 8) << 5 ))
+
+ #define pgoff_to_pte(off) \
+- ((pte_t) { (((off) & 0x1f) << 1) + (((off) >> 5) << 8) + _PAGE_FILE })
++ ((pte_t) { .pte_low = (((off) & 0x1f) << 1) + (((off) >> 5) << 8) + _PAGE_FILE })
+
+ /* Encode and de-code a swap entry */
+ #define __swp_type(x) (((x).val >> 1) & 0x1f)
+ #define __swp_offset(x) ((x).val >> 8)
+ #define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
+ #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low })
+-#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
++#define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })
+
+ #endif /* _I386_PGTABLE_2LEVEL_H */
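The swap-entry macros at the end of the 2-level header put the swap type in bits 1..5 and the offset from bit 8 up, deliberately leaving bit 0 (present) clear so the hardware never treats a swapped-out pte as mapped. A minimal round-trip sketch of that encoding (plain functions instead of the kernel's macros):

```c
#include <assert.h>

/* Encode: type in bits 1..5, offset starting at bit 8; bit 0 stays 0. */
static unsigned long swp_entry(unsigned long type, unsigned long offset)
{
    return (type << 1) | (offset << 8);
}

/* Decode the 5-bit type field. */
static unsigned long swp_type(unsigned long val)
{
    return (val >> 1) & 0x1f;
}

/* Decode the offset field. */
static unsigned long swp_offset(unsigned long val)
{
    return val >> 8;
}
```

Bits 6 and 7 (`_PAGE_DIRTY`/`_PAGE_FILE` and `_PAGE_PSE`) are skipped by the layout, which is why the offset starts at bit 8 rather than bit 6.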
+diff --git a/include/asm-x86/pgtable-3level.h b/include/asm-x86/pgtable-3level.h
+index 948a334..a195c3e 100644
+--- a/include/asm-x86/pgtable-3level.h
++++ b/include/asm-x86/pgtable-3level.h
+@@ -15,16 +15,18 @@
+ #define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %p(%016Lx).\n", __FILE__, __LINE__, &(e), pgd_val(e))
+
+-#define pud_none(pud) 0
+-#define pud_bad(pud) 0
+-#define pud_present(pud) 1
+
+-/*
+- * All present pages with !NX bit are kernel-executable:
+- */
+-static inline int pte_exec_kernel(pte_t pte)
++static inline int pud_none(pud_t pud)
++{
++ return pud_val(pud) == 0;
++}
++static inline int pud_bad(pud_t pud)
++{
++ return (pud_val(pud) & ~(PTE_MASK | _KERNPG_TABLE | _PAGE_USER)) != 0;
++}
++static inline int pud_present(pud_t pud)
+ {
+- return !(pte_val(pte) & _PAGE_NX);
++ return pud_val(pud) & _PAGE_PRESENT;
+ }
+
+ /* Rules for using set_pte: the pte being assigned *must* be
+@@ -39,11 +41,6 @@ static inline void native_set_pte(pte_t *ptep, pte_t pte)
+ smp_wmb();
+ ptep->pte_low = pte.pte_low;
+ }
+-static inline void native_set_pte_at(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep , pte_t pte)
-{
-- quicklist_trim(0, NULL, 25, 16);
+- native_set_pte(ptep, pte);
-}
--
--#endif /* __ASM_SH64_PGALLOC_H */
-diff --git a/include/asm-sh64/pgtable.h b/include/asm-sh64/pgtable.h
-deleted file mode 100644
-index 3488fe3..0000000
---- a/include/asm-sh64/pgtable.h
-+++ /dev/null
-@@ -1,496 +0,0 @@
--#ifndef __ASM_SH64_PGTABLE_H
--#define __ASM_SH64_PGTABLE_H
--
--#include <asm-generic/4level-fixup.h>
+
+ /*
+ * Since this is only called on user PTEs, and the page fault handler
+@@ -71,7 +68,7 @@ static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
+ }
+ static inline void native_set_pud(pud_t *pudp, pud_t pud)
+ {
+- *pudp = pud;
++ set_64bit((unsigned long long *)(pudp),native_pud_val(pud));
+ }
+
+ /*
+@@ -94,24 +91,29 @@ static inline void native_pmd_clear(pmd_t *pmd)
+ *(tmp + 1) = 0;
+ }
+
+-#ifndef CONFIG_PARAVIRT
+-#define set_pte(ptep, pte) native_set_pte(ptep, pte)
+-#define set_pte_at(mm, addr, ptep, pte) native_set_pte_at(mm, addr, ptep, pte)
+-#define set_pte_present(mm, addr, ptep, pte) native_set_pte_present(mm, addr, ptep, pte)
+-#define set_pte_atomic(ptep, pte) native_set_pte_atomic(ptep, pte)
+-#define set_pmd(pmdp, pmd) native_set_pmd(pmdp, pmd)
+-#define set_pud(pudp, pud) native_set_pud(pudp, pud)
+-#define pte_clear(mm, addr, ptep) native_pte_clear(mm, addr, ptep)
+-#define pmd_clear(pmd) native_pmd_clear(pmd)
+-#endif
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/pgtable.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003, 2004 Paul Mundt
-- * Copyright (C) 2003, 2004 Richard Curnow
-- *
-- * This file contains the functions and defines necessary to modify and use
-- * the SuperH page table tree.
+- * Pentium-II erratum A13: in PAE mode we explicitly have to flush
+- * the TLB via cr3 if the top-level pgd is changed...
+- * We do not let the generic code free and clear pgd entries due to
+- * this erratum.
- */
+-static inline void pud_clear (pud_t * pud) { }
++static inline void pud_clear(pud_t *pudp)
++{
++ set_pud(pudp, __pud(0));
++
++ /*
++ * In principle we need to do a cr3 reload here to make sure
++ * the processor recognizes the changed pgd. In practice, all
++ * the places where pud_clear() gets called are followed by
++ * full tlb flushes anyway, so we can defer the cost here.
++ *
++ * Specifically:
++ *
++ * mm/memory.c:free_pmd_range() - immediately after the
++ * pud_clear() it does a pmd_free_tlb(). We change the
++ * mmu_gather structure to do a full tlb flush (which has the
++ * effect of reloading cr3) when the pagetable free is
++ * complete.
++ *
++ * arch/x86/mm/hugetlbpage.c:huge_pmd_unshare() - the call to
++ * this is followed by a flush_tlb_range, which on x86 does a
++ * full tlb flush.
++ */
++}
+
+ #define pud_page(pud) \
+ ((struct page *) __va(pud_val(pud) & PAGE_MASK))
+@@ -155,21 +157,7 @@ static inline int pte_none(pte_t pte)
+
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+- return pte_val(pte) >> PAGE_SHIFT;
+-}
-
--#ifndef __ASSEMBLY__
--#include <asm/processor.h>
--#include <asm/page.h>
--#include <linux/threads.h>
--
--struct vm_area_struct;
--
--extern void paging_init(void);
+-extern unsigned long long __supported_pte_mask;
-
--/* We provide our own get_unmapped_area to avoid cache synonym issue */
--#define HAVE_ARCH_UNMAPPED_AREA
+-static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+-{
+- return __pte((((unsigned long long)page_nr << PAGE_SHIFT) |
+- pgprot_val(pgprot)) & __supported_pte_mask);
+-}
-
--/*
-- * Basically we have the same two-level (which is the logical three level
-- * Linux page table layout folded) page tables as the i386.
-- */
+-static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+-{
+- return __pmd((((unsigned long long)page_nr << PAGE_SHIFT) |
+- pgprot_val(pgprot)) & __supported_pte_mask);
++ return (pte_val(pte) & ~_PAGE_NX) >> PAGE_SHIFT;
+ }
+
+ /*
+@@ -177,7 +165,7 @@ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ * put the 32 bits of offset into the high part.
+ */
+ #define pte_to_pgoff(pte) ((pte).pte_high)
+-#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
++#define pgoff_to_pte(off) ((pte_t) { { .pte_low = _PAGE_FILE, .pte_high = (off) } })
+ #define PTE_FILE_MAX_BITS 32
+
+ /* Encode and de-code a swap entry */
+@@ -185,8 +173,6 @@ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ #define __swp_offset(x) ((x).val >> 5)
+ #define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) << 5})
+ #define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
-
+-#define __pmd_free_tlb(tlb, x) do { } while (0)
++#define __swp_entry_to_pte(x) ((pte_t){ { .pte_high = (x).val } })
+
+ #endif /* _I386_PGTABLE_3LEVEL_H */
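One of the quieter fixes in the 3-level hunk above is `pte_pfn()`: it now masks `_PAGE_NX` out of the pte value before shifting, since NX is bit 63 and would otherwise leak into the returned frame number on a shift by `PAGE_SHIFT`. A hedged userspace sketch of the before/after behaviour (constants mirror x86; `pte_pfn_fixed` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define _PAGE_NX   (1ULL << 63)

/* The corrected extraction: strip NX first, then shift. */
static uint64_t pte_pfn_fixed(uint64_t pteval)
{
    return (pteval & ~_PAGE_NX) >> PAGE_SHIFT;
}

/* The old, buggy-for-NX extraction, for comparison. */
static uint64_t pte_pfn_naive(uint64_t pteval)
{
    return pteval >> PAGE_SHIFT;
}
```

For a pte with NX set, the naive version returns a pfn with bit 51 spuriously set, while the fixed version returns only the physical-frame bits.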
+diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
+index 1039140..cd2524f 100644
+--- a/include/asm-x86/pgtable.h
++++ b/include/asm-x86/pgtable.h
+@@ -1,5 +1,364 @@
++#ifndef _ASM_X86_PGTABLE_H
++#define _ASM_X86_PGTABLE_H
++
++#define USER_PTRS_PER_PGD ((TASK_SIZE-1)/PGDIR_SIZE+1)
++#define FIRST_USER_ADDRESS 0
++
++#define _PAGE_BIT_PRESENT 0
++#define _PAGE_BIT_RW 1
++#define _PAGE_BIT_USER 2
++#define _PAGE_BIT_PWT 3
++#define _PAGE_BIT_PCD 4
++#define _PAGE_BIT_ACCESSED 5
++#define _PAGE_BIT_DIRTY 6
++#define _PAGE_BIT_FILE 6
++#define _PAGE_BIT_PSE 7 /* 4 MB (or 2MB) page */
++#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */
++#define _PAGE_BIT_UNUSED1 9 /* available for programmer */
++#define _PAGE_BIT_UNUSED2 10
++#define _PAGE_BIT_UNUSED3 11
++#define _PAGE_BIT_NX 63 /* No execute: only valid after cpuid check */
++
++/*
++ * Note: we use _AC(1, L) instead of _AC(1, UL) so that we get a
++ * sign-extended value on 32-bit with all 1's in the upper word,
++ * which preserves the upper pte values on 64-bit ptes:
++ */
++#define _PAGE_PRESENT (_AC(1, L)<<_PAGE_BIT_PRESENT)
++#define _PAGE_RW (_AC(1, L)<<_PAGE_BIT_RW)
++#define _PAGE_USER (_AC(1, L)<<_PAGE_BIT_USER)
++#define _PAGE_PWT (_AC(1, L)<<_PAGE_BIT_PWT)
++#define _PAGE_PCD (_AC(1, L)<<_PAGE_BIT_PCD)
++#define _PAGE_ACCESSED (_AC(1, L)<<_PAGE_BIT_ACCESSED)
++#define _PAGE_DIRTY (_AC(1, L)<<_PAGE_BIT_DIRTY)
++#define _PAGE_PSE (_AC(1, L)<<_PAGE_BIT_PSE) /* 2MB page */
++#define _PAGE_GLOBAL (_AC(1, L)<<_PAGE_BIT_GLOBAL) /* Global TLB entry */
++#define _PAGE_UNUSED1 (_AC(1, L)<<_PAGE_BIT_UNUSED1)
++#define _PAGE_UNUSED2 (_AC(1, L)<<_PAGE_BIT_UNUSED2)
++#define _PAGE_UNUSED3 (_AC(1, L)<<_PAGE_BIT_UNUSED3)
++
++#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
++#define _PAGE_NX (_AC(1, ULL) << _PAGE_BIT_NX)
++#else
++#define _PAGE_NX 0
++#endif
++
++/* If _PAGE_PRESENT is clear, we use these: */
++#define _PAGE_FILE _PAGE_DIRTY /* nonlinear file mapping, saved PTE; unset:swap */
++#define _PAGE_PROTNONE _PAGE_PSE /* if the user mapped it with PROT_NONE;
++ pte_present gives true */
++
++#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
++#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
++
++#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
++
++#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
++#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
++
++#define PAGE_SHARED_EXEC __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
++#define PAGE_COPY_NOEXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
++#define PAGE_COPY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
++#define PAGE_COPY PAGE_COPY_NOEXEC
++#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
++#define PAGE_READONLY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
++
++#ifdef CONFIG_X86_32
++#define _PAGE_KERNEL_EXEC \
++ (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
++#define _PAGE_KERNEL (_PAGE_KERNEL_EXEC | _PAGE_NX)
++
++#ifndef __ASSEMBLY__
++extern pteval_t __PAGE_KERNEL, __PAGE_KERNEL_EXEC;
++#endif /* __ASSEMBLY__ */
++#else
++#define __PAGE_KERNEL_EXEC \
++ (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
++#define __PAGE_KERNEL (__PAGE_KERNEL_EXEC | _PAGE_NX)
++#endif
++
++#define __PAGE_KERNEL_RO (__PAGE_KERNEL & ~_PAGE_RW)
++#define __PAGE_KERNEL_RX (__PAGE_KERNEL_EXEC & ~_PAGE_RW)
++#define __PAGE_KERNEL_EXEC_NOCACHE (__PAGE_KERNEL_EXEC | _PAGE_PCD | _PAGE_PWT)
++#define __PAGE_KERNEL_NOCACHE (__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)
++#define __PAGE_KERNEL_VSYSCALL (__PAGE_KERNEL_RX | _PAGE_USER)
++#define __PAGE_KERNEL_VSYSCALL_NOCACHE (__PAGE_KERNEL_VSYSCALL | _PAGE_PCD | _PAGE_PWT)
++#define __PAGE_KERNEL_LARGE (__PAGE_KERNEL | _PAGE_PSE)
++#define __PAGE_KERNEL_LARGE_EXEC (__PAGE_KERNEL_EXEC | _PAGE_PSE)
++
++#ifdef CONFIG_X86_32
++# define MAKE_GLOBAL(x) __pgprot((x))
++#else
++# define MAKE_GLOBAL(x) __pgprot((x) | _PAGE_GLOBAL)
++#endif
++
++#define PAGE_KERNEL MAKE_GLOBAL(__PAGE_KERNEL)
++#define PAGE_KERNEL_RO MAKE_GLOBAL(__PAGE_KERNEL_RO)
++#define PAGE_KERNEL_EXEC MAKE_GLOBAL(__PAGE_KERNEL_EXEC)
++#define PAGE_KERNEL_RX MAKE_GLOBAL(__PAGE_KERNEL_RX)
++#define PAGE_KERNEL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_NOCACHE)
++#define PAGE_KERNEL_EXEC_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_EXEC_NOCACHE)
++#define PAGE_KERNEL_LARGE MAKE_GLOBAL(__PAGE_KERNEL_LARGE)
++#define PAGE_KERNEL_LARGE_EXEC MAKE_GLOBAL(__PAGE_KERNEL_LARGE_EXEC)
++#define PAGE_KERNEL_VSYSCALL MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL)
++#define PAGE_KERNEL_VSYSCALL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL_NOCACHE)
++
++/* xwr */
++#define __P000 PAGE_NONE
++#define __P001 PAGE_READONLY
++#define __P010 PAGE_COPY
++#define __P011 PAGE_COPY
++#define __P100 PAGE_READONLY_EXEC
++#define __P101 PAGE_READONLY_EXEC
++#define __P110 PAGE_COPY_EXEC
++#define __P111 PAGE_COPY_EXEC
++
++#define __S000 PAGE_NONE
++#define __S001 PAGE_READONLY
++#define __S010 PAGE_SHARED
++#define __S011 PAGE_SHARED
++#define __S100 PAGE_READONLY_EXEC
++#define __S101 PAGE_READONLY_EXEC
++#define __S110 PAGE_SHARED_EXEC
++#define __S111 PAGE_SHARED_EXEC
++
++#ifndef __ASSEMBLY__
++
++/*
++ * ZERO_PAGE is a global shared page that is always zero: used
++ * for zero-mapped memory areas etc..
++ */
++extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
++#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
++
++extern spinlock_t pgd_lock;
++extern struct list_head pgd_list;
++
++/*
++ * The following only work if pte_present() is true.
++ * Undefined behaviour if not..
++ */
++static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
++static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
++static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_RW; }
++static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
++static inline int pte_huge(pte_t pte) { return pte_val(pte) & _PAGE_PSE; }
++static inline int pte_global(pte_t pte) { return pte_val(pte) & _PAGE_GLOBAL; }
++static inline int pte_exec(pte_t pte) { return !(pte_val(pte) & _PAGE_NX); }
++
++static inline int pmd_large(pmd_t pte) {
++ return (pmd_val(pte) & (_PAGE_PSE|_PAGE_PRESENT)) ==
++ (_PAGE_PSE|_PAGE_PRESENT);
++}
++
++static inline pte_t pte_mkclean(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_DIRTY); }
++static inline pte_t pte_mkold(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_ACCESSED); }
++static inline pte_t pte_wrprotect(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_RW); }
++static inline pte_t pte_mkexec(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_NX); }
++static inline pte_t pte_mkdirty(pte_t pte) { return __pte(pte_val(pte) | _PAGE_DIRTY); }
++static inline pte_t pte_mkyoung(pte_t pte) { return __pte(pte_val(pte) | _PAGE_ACCESSED); }
++static inline pte_t pte_mkwrite(pte_t pte) { return __pte(pte_val(pte) | _PAGE_RW); }
++static inline pte_t pte_mkhuge(pte_t pte) { return __pte(pte_val(pte) | _PAGE_PSE); }
++static inline pte_t pte_clrhuge(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_PSE); }
++static inline pte_t pte_mkglobal(pte_t pte) { return __pte(pte_val(pte) | _PAGE_GLOBAL); }
++static inline pte_t pte_clrglobal(pte_t pte) { return __pte(pte_val(pte) & ~(pteval_t)_PAGE_GLOBAL); }
++
++extern pteval_t __supported_pte_mask;
++
++static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
++{
++ return __pte((((phys_addr_t)page_nr << PAGE_SHIFT) |
++ pgprot_val(pgprot)) & __supported_pte_mask);
++}
++
++static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
++{
++ return __pmd((((phys_addr_t)page_nr << PAGE_SHIFT) |
++ pgprot_val(pgprot)) & __supported_pte_mask);
++}
++
++static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
++{
++ pteval_t val = pte_val(pte);
++
++ /*
++ * Chop off the NX bit (if present), and add the NX portion of
++ * the newprot (if present):
++ */
++ val &= _PAGE_CHG_MASK & ~_PAGE_NX;
++ val |= pgprot_val(newprot) & __supported_pte_mask;
++
++ return __pte(val);
++}
++
++#define pte_pgprot(x) __pgprot(pte_val(x) & (0xfff | _PAGE_NX))
++
++#define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else /* !CONFIG_PARAVIRT */
++#define set_pte(ptep, pte) native_set_pte(ptep, pte)
++#define set_pte_at(mm, addr, ptep, pte) native_set_pte_at(mm, addr, ptep, pte)
++
++#define set_pte_present(mm, addr, ptep, pte) \
++ native_set_pte_present(mm, addr, ptep, pte)
++#define set_pte_atomic(ptep, pte) \
++ native_set_pte_atomic(ptep, pte)
++
++#define set_pmd(pmdp, pmd) native_set_pmd(pmdp, pmd)
++
++#ifndef __PAGETABLE_PUD_FOLDED
++#define set_pgd(pgdp, pgd) native_set_pgd(pgdp, pgd)
++#define pgd_clear(pgd) native_pgd_clear(pgd)
++#endif
++
++#ifndef set_pud
++# define set_pud(pudp, pud) native_set_pud(pudp, pud)
++#endif
++
++#ifndef __PAGETABLE_PMD_FOLDED
++#define pud_clear(pud) native_pud_clear(pud)
++#endif
++
++#define pte_clear(mm, addr, ptep) native_pte_clear(mm, addr, ptep)
++#define pmd_clear(pmd) native_pmd_clear(pmd)
++
++#define pte_update(mm, addr, ptep) do { } while (0)
++#define pte_update_defer(mm, addr, ptep) do { } while (0)
++#endif /* CONFIG_PARAVIRT */
++
++#endif /* __ASSEMBLY__ */
++
+ #ifdef CONFIG_X86_32
+ # include "pgtable_32.h"
+ #else
+ # include "pgtable_64.h"
+ #endif
++
++#ifndef __ASSEMBLY__
++
++enum {
++ PG_LEVEL_NONE,
++ PG_LEVEL_4K,
++ PG_LEVEL_2M,
++ PG_LEVEL_1G,
++};
++
++/*
++ * Helper function that returns the kernel pagetable entry controlling
++ * the virtual address 'address'. NULL means no pagetable entry present.
++ * NOTE: the return type is pte_t but if the pmd is PSE then we return it
++ * as a pte too.
++ */
++extern pte_t *lookup_address(unsigned long address, int *level);
++
++/* local pte updates need not use xchg for locking */
++static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
++{
++ pte_t res = *ptep;
++
++ /* Pure native function needs no input for mm, addr */
++ native_pte_clear(NULL, 0, ptep);
++ return res;
++}
++
++static inline void native_set_pte_at(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep , pte_t pte)
++{
++ native_set_pte(ptep, pte);
++}
++
++#ifndef CONFIG_PARAVIRT
++/*
++ * Rules for using pte_update - it must be called after any PTE update which
++ * has not been done using the set_pte / clear_pte interfaces. It is used by
++ * shadow mode hypervisors to resynchronize the shadow page tables. Kernel PTE
++ * updates should either be sets, clears, or set_pte_atomic for P->P
++ * transitions, which means this hook should only be called for user PTEs.
++ * This hook implies a P->P protection or access change has taken place, which
++ * requires a subsequent TLB flush. The notification can optionally be delayed
++ * until the TLB flush event by using the pte_update_defer form of the
++ * interface, but care must be taken to assure that the flush happens while
++ * still holding the same page table lock so that the shadow and primary pages
++ * do not become out of sync on SMP.
++ */
++#define pte_update(mm, addr, ptep) do { } while (0)
++#define pte_update_defer(mm, addr, ptep) do { } while (0)
++#endif
++
++/*
++ * We only update the dirty/accessed state if we set
++ * the dirty bit by hand in the kernel, since the hardware
++ * will do the accessed bit for us, and we don't want to
++ * race with other CPU's that might be updating the dirty
++ * bit at the same time.
++ */
++#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
++#define ptep_set_access_flags(vma, address, ptep, entry, dirty) \
++({ \
++ int __changed = !pte_same(*(ptep), entry); \
++ if (__changed && dirty) { \
++ *ptep = entry; \
++ pte_update_defer((vma)->vm_mm, (address), (ptep)); \
++ flush_tlb_page(vma, address); \
++ } \
++ __changed; \
++})
++
++#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
++#define ptep_test_and_clear_young(vma, addr, ptep) ({ \
++ int __ret = 0; \
++ if (pte_young(*(ptep))) \
++ __ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, \
++ &(ptep)->pte); \
++ if (__ret) \
++ pte_update((vma)->vm_mm, addr, ptep); \
++ __ret; \
++})
++
++#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
++#define ptep_clear_flush_young(vma, address, ptep) \
++({ \
++ int __young; \
++ __young = ptep_test_and_clear_young((vma), (address), (ptep)); \
++ if (__young) \
++ flush_tlb_page(vma, address); \
++ __young; \
++})
++
++#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
++static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
++{
++ pte_t pte = native_ptep_get_and_clear(ptep);
++ pte_update(mm, addr, ptep);
++ return pte;
++}
++
++#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
++static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long addr, pte_t *ptep, int full)
++{
++ pte_t pte;
++ if (full) {
++ /*
++ * Full address destruction in progress; paravirt does not
++ * care about updates and native needs no locking
++ */
++ pte = native_local_ptep_get_and_clear(ptep);
++ } else {
++ pte = ptep_get_and_clear(mm, addr, ptep);
++ }
++ return pte;
++}
++
++#define __HAVE_ARCH_PTEP_SET_WRPROTECT
++static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
++{
++ clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
++ pte_update(mm, addr, ptep);
++}
++
++#include <asm-generic/pgtable.h>
++#endif /* __ASSEMBLY__ */
++
++#endif /* _ASM_X86_PGTABLE_H */
+diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
+index ed3e70d..21e70fb 100644
+--- a/include/asm-x86/pgtable_32.h
++++ b/include/asm-x86/pgtable_32.h
+@@ -25,20 +25,11 @@
+ struct mm_struct;
+ struct vm_area_struct;
+
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
--extern unsigned char empty_zero_page[PAGE_SIZE];
--#define ZERO_PAGE(vaddr) (mem_map + MAP_NR(empty_zero_page))
--
--#endif /* !__ASSEMBLY__ */
--
--/*
-- * NEFF and NPHYS related defines.
-- * FIXME : These need to be model-dependent. For now this is OK, SH5-101 and SH5-103
-- * implement 32 bits effective and 32 bits physical. But future implementations may
-- * extend beyond this.
-- */
--#define NEFF 32
--#define NEFF_SIGN (1LL << (NEFF - 1))
--#define NEFF_MASK (-1LL << NEFF)
--
--#define NPHYS 32
--#define NPHYS_SIGN (1LL << (NPHYS - 1))
--#define NPHYS_MASK (-1LL << NPHYS)
--
--/* Typically 2-level is sufficient up to 32 bits of virtual address space, beyond
-- that 3-level would be appropriate. */
--#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
--/* For 4k pages, this contains 512 entries, i.e. 9 bits worth of address. */
--#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
--#define PTE_MAGNITUDE 3 /* sizeof(unsigned long long) magnit. */
--#define PTE_SHIFT PAGE_SHIFT
--#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
--
--/* top level: PMD. */
--#define PGDIR_SHIFT (PTE_SHIFT + PTE_BITS)
--#define PGD_BITS (NEFF - PGDIR_SHIFT)
--#define PTRS_PER_PGD (1<<PGD_BITS)
--
--/* middle level: PMD. This doesn't do anything for the 2-level case. */
--#define PTRS_PER_PMD (1)
--
--#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
--#define PGDIR_MASK (~(PGDIR_SIZE-1))
--#define PMD_SHIFT PGDIR_SHIFT
--#define PMD_SIZE PGDIR_SIZE
--#define PMD_MASK PGDIR_MASK
--
--#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
--/*
-- * three-level asymmetric paging structure: PGD is top level.
-- * The asymmetry comes from 32-bit pointers and 64-bit PTEs.
-- */
--/* bottom level: PTE. It's 9 bits = 512 pointers */
--#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
--#define PTE_MAGNITUDE 3 /* sizeof(unsigned long long) magnit. */
--#define PTE_SHIFT PAGE_SHIFT
--#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
--
--/* middle level: PMD. It's 10 bits = 1024 pointers */
--#define PTRS_PER_PMD ((1<<PAGE_SHIFT)/sizeof(unsigned long long *))
--#define PMD_MAGNITUDE 2 /* sizeof(unsigned long long *) magnit. */
--#define PMD_SHIFT (PTE_SHIFT + PTE_BITS)
--#define PMD_BITS (PAGE_SHIFT - PMD_MAGNITUDE)
--
--/* top level: PMD. It's 1 bit = 2 pointers */
--#define PGDIR_SHIFT (PMD_SHIFT + PMD_BITS)
--#define PGD_BITS (NEFF - PGDIR_SHIFT)
--#define PTRS_PER_PGD (1<<PGD_BITS)
--
--#define PMD_SIZE (1UL << PMD_SHIFT)
--#define PMD_MASK (~(PMD_SIZE-1))
--#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
--#define PGDIR_MASK (~(PGDIR_SIZE-1))
+-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+-extern unsigned long empty_zero_page[1024];
+ extern pgd_t swapper_pg_dir[1024];
+ extern struct kmem_cache *pmd_cache;
+-extern spinlock_t pgd_lock;
+-extern struct page *pgd_list;
+ void check_pgt_cache(void);
+
+-void pmd_ctor(struct kmem_cache *, void *);
+-void pgtable_cache_init(void);
++static inline void pgtable_cache_init(void) {}
+ void paging_init(void);
+
+
+@@ -58,9 +49,6 @@ void paging_init(void);
+ #define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+ #define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+-#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+-#define FIRST_USER_ADDRESS 0
-
+ #define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
+ #define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
+
+@@ -85,113 +73,6 @@ void paging_init(void);
+ #endif
+
+ /*
+- * _PAGE_PSE set in the page directory entry just means that
+- * the page directory entry points directly to a 4MB-aligned block of
+- * memory.
+- */
+-#define _PAGE_BIT_PRESENT 0
+-#define _PAGE_BIT_RW 1
+-#define _PAGE_BIT_USER 2
+-#define _PAGE_BIT_PWT 3
+-#define _PAGE_BIT_PCD 4
+-#define _PAGE_BIT_ACCESSED 5
+-#define _PAGE_BIT_DIRTY 6
+-#define _PAGE_BIT_PSE 7 /* 4 MB (or 2MB) page, Pentium+, if present.. */
+-#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */
+-#define _PAGE_BIT_UNUSED1 9 /* available for programmer */
+-#define _PAGE_BIT_UNUSED2 10
+-#define _PAGE_BIT_UNUSED3 11
+-#define _PAGE_BIT_NX 63
+-
+-#define _PAGE_PRESENT 0x001
+-#define _PAGE_RW 0x002
+-#define _PAGE_USER 0x004
+-#define _PAGE_PWT 0x008
+-#define _PAGE_PCD 0x010
+-#define _PAGE_ACCESSED 0x020
+-#define _PAGE_DIRTY 0x040
+-#define _PAGE_PSE 0x080 /* 4 MB (or 2MB) page, Pentium+, if present.. */
+-#define _PAGE_GLOBAL 0x100 /* Global TLB entry PPro+ */
+-#define _PAGE_UNUSED1 0x200 /* available for programmer */
+-#define _PAGE_UNUSED2 0x400
+-#define _PAGE_UNUSED3 0x800
+-
+-/* If _PAGE_PRESENT is clear, we use these: */
+-#define _PAGE_FILE 0x040 /* nonlinear file mapping, saved PTE; unset:swap */
+-#define _PAGE_PROTNONE 0x080 /* if the user mapped it with PROT_NONE;
+- pte_present gives true */
+-#ifdef CONFIG_X86_PAE
+-#define _PAGE_NX (1ULL<<_PAGE_BIT_NX)
-#else
--#error "No defined number of page table levels"
+-#define _PAGE_NX 0
-#endif
-
+-#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
+-#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
+-#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+-
+-#define PAGE_NONE \
+- __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
+-#define PAGE_SHARED \
+- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
+-
+-#define PAGE_SHARED_EXEC \
+- __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
+-#define PAGE_COPY_NOEXEC \
+- __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
+-#define PAGE_COPY_EXEC \
+- __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+-#define PAGE_COPY \
+- PAGE_COPY_NOEXEC
+-#define PAGE_READONLY \
+- __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
+-#define PAGE_READONLY_EXEC \
+- __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+-
+-#define _PAGE_KERNEL \
+- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_NX)
+-#define _PAGE_KERNEL_EXEC \
+- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
+-
+-extern unsigned long long __PAGE_KERNEL, __PAGE_KERNEL_EXEC;
+-#define __PAGE_KERNEL_RO (__PAGE_KERNEL & ~_PAGE_RW)
+-#define __PAGE_KERNEL_RX (__PAGE_KERNEL_EXEC & ~_PAGE_RW)
+-#define __PAGE_KERNEL_NOCACHE (__PAGE_KERNEL | _PAGE_PCD)
+-#define __PAGE_KERNEL_LARGE (__PAGE_KERNEL | _PAGE_PSE)
+-#define __PAGE_KERNEL_LARGE_EXEC (__PAGE_KERNEL_EXEC | _PAGE_PSE)
+-
+-#define PAGE_KERNEL __pgprot(__PAGE_KERNEL)
+-#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO)
+-#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC)
+-#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX)
+-#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE)
+-#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE)
+-#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC)
+-
+-/*
+- * The i386 can't do page protection for execute, and considers that
+- * the same are read. Also, write permissions imply read permissions.
+- * This is the closest we can get..
+- */
+-#define __P000 PAGE_NONE
+-#define __P001 PAGE_READONLY
+-#define __P010 PAGE_COPY
+-#define __P011 PAGE_COPY
+-#define __P100 PAGE_READONLY_EXEC
+-#define __P101 PAGE_READONLY_EXEC
+-#define __P110 PAGE_COPY_EXEC
+-#define __P111 PAGE_COPY_EXEC
+-
+-#define __S000 PAGE_NONE
+-#define __S001 PAGE_READONLY
+-#define __S010 PAGE_SHARED
+-#define __S011 PAGE_SHARED
+-#define __S100 PAGE_READONLY_EXEC
+-#define __S101 PAGE_READONLY_EXEC
+-#define __S110 PAGE_SHARED_EXEC
+-#define __S111 PAGE_SHARED_EXEC
+-
+-/*
+ * Define this if things work differently on an i386 and an i486:
+ * it will (on an i486) warn about kernel memory accesses that are
+ * done without a 'access_ok(VERIFY_WRITE,..)'
+@@ -211,133 +92,12 @@ extern unsigned long pg0[];
+
+ #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
-/*
-- * Error outputs.
+- * The following only work if pte_present() is true.
+- * Undefined behaviour if not..
- */
--#define pte_ERROR(e) \
-- printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
--#define pmd_ERROR(e) \
-- printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
--#define pgd_ERROR(e) \
-- printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+-static inline int pte_dirty(pte_t pte) { return (pte).pte_low & _PAGE_DIRTY; }
+-static inline int pte_young(pte_t pte) { return (pte).pte_low & _PAGE_ACCESSED; }
+-static inline int pte_write(pte_t pte) { return (pte).pte_low & _PAGE_RW; }
+-static inline int pte_huge(pte_t pte) { return (pte).pte_low & _PAGE_PSE; }
+-
+-/*
+- * The following only works if pte_present() is not true.
+- */
+-static inline int pte_file(pte_t pte) { return (pte).pte_low & _PAGE_FILE; }
+-
+-static inline pte_t pte_mkclean(pte_t pte) { (pte).pte_low &= ~_PAGE_DIRTY; return pte; }
+-static inline pte_t pte_mkold(pte_t pte) { (pte).pte_low &= ~_PAGE_ACCESSED; return pte; }
+-static inline pte_t pte_wrprotect(pte_t pte) { (pte).pte_low &= ~_PAGE_RW; return pte; }
+-static inline pte_t pte_mkdirty(pte_t pte) { (pte).pte_low |= _PAGE_DIRTY; return pte; }
+-static inline pte_t pte_mkyoung(pte_t pte) { (pte).pte_low |= _PAGE_ACCESSED; return pte; }
+-static inline pte_t pte_mkwrite(pte_t pte) { (pte).pte_low |= _PAGE_RW; return pte; }
+-static inline pte_t pte_mkhuge(pte_t pte) { (pte).pte_low |= _PAGE_PSE; return pte; }
-
+ #ifdef CONFIG_X86_PAE
+ # include <asm/pgtable-3level.h>
+ #else
+ # include <asm/pgtable-2level.h>
+ #endif
+
+-#ifndef CONFIG_PARAVIRT
-/*
-- * Table setting routines. Used within arch/mm only.
+- * Rules for using pte_update - it must be called after any PTE update which
+- * has not been done using the set_pte / clear_pte interfaces. It is used by
+- * shadow mode hypervisors to resynchronize the shadow page tables. Kernel PTE
+- * updates should either be sets, clears, or set_pte_atomic for P->P
+- * transitions, which means this hook should only be called for user PTEs.
+- * This hook implies a P->P protection or access change has taken place, which
+- * requires a subsequent TLB flush. The notification can optionally be delayed
+- * until the TLB flush event by using the pte_update_defer form of the
+- * interface, but care must be taken to assure that the flush happens while
+- * still holding the same page table lock so that the shadow and primary pages
+- * do not become out of sync on SMP.
- */
--#define set_pgd(pgdptr, pgdval) (*(pgdptr) = pgdval)
--#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+-#define pte_update(mm, addr, ptep) do { } while (0)
+-#define pte_update_defer(mm, addr, ptep) do { } while (0)
+-#endif
-
--static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
+-/* local pte updates need not use xchg for locking */
+-static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
-{
-- unsigned long long x = ((unsigned long long) pteval.pte);
-- unsigned long long *xp = (unsigned long long *) pteptr;
-- /*
-- * Sign-extend based on NPHYS.
-- */
-- *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
--}
--#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+- pte_t res = *ptep;
-
--static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
--{
-- pmd_val(*pmdp) = (unsigned long) ptep;
+- /* Pure native function needs no input for mm, addr */
+- native_pte_clear(NULL, 0, ptep);
+- return res;
-}
-
-/*
-- * PGD defines. Top level.
+- * We only update the dirty/accessed state if we set
+- * the dirty bit by hand in the kernel, since the hardware
+- * will do the accessed bit for us, and we don't want to
+- * race with other CPU's that might be updating the dirty
+- * bit at the same time.
- */
+-#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+-#define ptep_set_access_flags(vma, address, ptep, entry, dirty) \
+-({ \
+- int __changed = !pte_same(*(ptep), entry); \
+- if (__changed && dirty) { \
+- (ptep)->pte_low = (entry).pte_low; \
+- pte_update_defer((vma)->vm_mm, (address), (ptep)); \
+- flush_tlb_page(vma, address); \
+- } \
+- __changed; \
+-})
-
--/* To find an entry in a generic PGD. */
--#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
--#define __pgd_offset(address) pgd_index(address)
--#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+-#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+-#define ptep_test_and_clear_young(vma, addr, ptep) ({ \
+- int __ret = 0; \
+- if (pte_young(*(ptep))) \
+- __ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, \
+- &(ptep)->pte_low); \
+- if (__ret) \
+- pte_update((vma)->vm_mm, addr, ptep); \
+- __ret; \
+-})
-
--/* To find an entry in a kernel PGD. */
--#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+-#define ptep_clear_flush_young(vma, address, ptep) \
+-({ \
+- int __young; \
+- __young = ptep_test_and_clear_young((vma), (address), (ptep)); \
+- if (__young) \
+- flush_tlb_page(vma, address); \
+- __young; \
+-})
-
--/*
-- * PGD level access routines.
-- *
-- * Note1:
-- * There's no need to use physical addresses since the tree walk is all
-- * in performed in software, until the PTE translation.
-- *
-- * Note 2:
-- * A PGD entry can be uninitialized (_PGD_UNUSED), generically bad,
-- * clear (_PGD_EMPTY), present. When present, lower 3 nibbles contain
-- * _KERNPG_TABLE. Being a kernel virtual pointer also bit 31 must
-- * be 1. Assuming an arbitrary clear value of bit 31 set to 0 and
-- * lower 3 nibbles set to 0xFFF (_PGD_EMPTY) any other value is a
-- * bad pgd that must be notified via printk().
-- *
-- */
--#define _PGD_EMPTY 0x0
+-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+-static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+-{
+- pte_t pte = native_ptep_get_and_clear(ptep);
+- pte_update(mm, addr, ptep);
+- return pte;
+-}
-
--#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
--static inline int pgd_none(pgd_t pgd) { return 0; }
--static inline int pgd_bad(pgd_t pgd) { return 0; }
--#define pgd_present(pgd) ((pgd_val(pgd) & _PAGE_PRESENT) ? 1 : 0)
--#define pgd_clear(xx) do { } while(0)
+-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
+-static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long addr, pte_t *ptep, int full)
+-{
+- pte_t pte;
+- if (full) {
+- /*
+- * Full address destruction in progress; paravirt does not
+- * care about updates and native needs no locking
+- */
+- pte = native_local_ptep_get_and_clear(ptep);
+- } else {
+- pte = ptep_get_and_clear(mm, addr, ptep);
+- }
+- return pte;
+-}
-
--#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
--#define pgd_present(pgd_entry) (1)
--#define pgd_none(pgd_entry) (pgd_val((pgd_entry)) == _PGD_EMPTY)
--/* TODO: Think later about what a useful definition of 'bad' would be now. */
--#define pgd_bad(pgd_entry) (0)
--#define pgd_clear(pgd_entry_p) (set_pgd((pgd_entry_p), __pgd(_PGD_EMPTY)))
+-#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+-static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+-{
+- clear_bit(_PAGE_BIT_RW, &ptep->pte_low);
+- pte_update(mm, addr, ptep);
+-}
-
+ /*
+ * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
+ *
+@@ -367,25 +127,6 @@ static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
+
+ #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+
+-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+-{
+- pte.pte_low &= _PAGE_CHG_MASK;
+- pte.pte_low |= pgprot_val(newprot);
+-#ifdef CONFIG_X86_PAE
+- /*
+- * Chop off the NX bit (if present), and add the NX portion of
+- * the newprot (if present):
+- */
+- pte.pte_high &= ~(1 << (_PAGE_BIT_NX - 32));
+- pte.pte_high |= (pgprot_val(newprot) >> 32) & \
+- (__supported_pte_mask >> 32);
-#endif
+- return pte;
+-}
-
+-#define pmd_large(pmd) \
+-((pmd_val(pmd) & (_PAGE_PSE|_PAGE_PRESENT)) == (_PAGE_PSE|_PAGE_PRESENT))
-
--#define pgd_page_vaddr(pgd_entry) ((unsigned long) (pgd_val(pgd_entry) & PAGE_MASK))
--#define pgd_page(pgd) (virt_to_page(pgd_val(pgd)))
--
--
+ /*
+ * the pgd page can be thought of an array like this: pgd_t[PTRS_PER_PGD]
+ *
+@@ -432,26 +173,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ #define pmd_page_vaddr(pmd) \
+ ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
+
-/*
-- * PMD defines. Middle level.
+- * Helper function that returns the kernel pagetable entry controlling
+- * the virtual address 'address'. NULL means no pagetable entry present.
+- * NOTE: the return type is pte_t but if the pmd is PSE then we return it
+- * as a pte too.
- */
--
--/* PGD to PMD dereferencing */
--#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
--static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
--{
-- return (pmd_t *) dir;
--}
--#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
--#define __pmd_offset(address) \
-- (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
--#define pmd_offset(dir, addr) \
-- ((pmd_t *) ((pgd_val(*(dir))) & PAGE_MASK) + __pmd_offset((addr)))
--#endif
+-extern pte_t *lookup_address(unsigned long address);
-
-/*
-- * PMD level access routines. Same notes as above.
+- * Make a given kernel text page executable/non-executable.
+- * Returns the previous executability setting of that page (which
+- * is used to restore the previous state). Used by the SMP bootup code.
+- * NOTE: this is an __init function for security reasons.
- */
--#define _PMD_EMPTY 0x0
--/* Either the PMD is empty or present, it's not paged out */
--#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
--#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
--#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
--#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
--
--#define pmd_page_vaddr(pmd_entry) \
-- ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
--
--#define pmd_page(pmd) \
-- (virt_to_page(pmd_val(pmd)))
+-#ifdef CONFIG_X86_PAE
+- extern int set_kernel_exec(unsigned long vaddr, int enable);
+-#else
+- static inline int set_kernel_exec(unsigned long vaddr, int enable) { return 0;}
+-#endif
-
--/* PMD to PTE dereferencing */
--#define pte_index(address) \
-- ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+ #if defined(CONFIG_HIGHPTE)
+ #define pte_offset_map(dir, address) \
+ ((pte_t *)kmap_atomic_pte(pmd_page(*(dir)),KM_PTE0) + pte_index(address))
+@@ -497,13 +218,17 @@ static inline void paravirt_pagetable_setup_done(pgd_t *base)
+
+ #endif /* !__ASSEMBLY__ */
+
++/*
++ * kern_addr_valid() is (1) for FLATMEM and (0) for
++ * SPARSEMEM and DISCONTIGMEM
++ */
+ #ifdef CONFIG_FLATMEM
+ #define kern_addr_valid(addr) (1)
+-#endif /* CONFIG_FLATMEM */
++#else
++#define kern_addr_valid(kaddr) (0)
++#endif
+
+ #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+ remap_pfn_range(vma, vaddr, pfn, size, prot)
+
+-#include <asm-generic/pgtable.h>
-
--#define pte_offset_kernel(dir, addr) \
-- ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
+ #endif /* _I386_PGTABLE_H */
+diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h
+index 9b0ff47..6e615a1 100644
+--- a/include/asm-x86/pgtable_64.h
++++ b/include/asm-x86/pgtable_64.h
+@@ -17,22 +17,16 @@ extern pud_t level3_kernel_pgt[512];
+ extern pud_t level3_ident_pgt[512];
+ extern pmd_t level2_kernel_pgt[512];
+ extern pgd_t init_level4_pgt[];
+-extern unsigned long __supported_pte_mask;
+
+ #define swapper_pg_dir init_level4_pgt
+
+ extern void paging_init(void);
+ extern void clear_kernel_mapping(unsigned long addr, unsigned long size);
+
+-/*
+- * ZERO_PAGE is a global shared page that is always zero: used
+- * for zero-mapped memory areas etc..
+- */
+-extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
+-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
-
--#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
--#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
--#define pte_unmap(pte) do { } while (0)
--#define pte_unmap_nested(pte) do { } while (0)
+ #endif /* !__ASSEMBLY__ */
+
++#define SHARED_KERNEL_PMD 1
++
+ /*
+ * PGDIR_SHIFT determines what a top-level page table entry can map
+ */
+@@ -71,57 +65,68 @@ extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
+ #define pgd_none(x) (!pgd_val(x))
+ #define pud_none(x) (!pud_val(x))
+
+-static inline void set_pte(pte_t *dst, pte_t val)
++struct mm_struct;
++
++static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep)
++{
++ *ptep = native_make_pte(0);
++}
++
++static inline void native_set_pte(pte_t *ptep, pte_t pte)
+ {
+- pte_val(*dst) = pte_val(val);
+-}
+-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
++ *ptep = pte;
++}
+
+-static inline void set_pmd(pmd_t *dst, pmd_t val)
++static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+- pmd_val(*dst) = pmd_val(val);
+-}
++ native_set_pte(ptep, pte);
++}
+
+-static inline void set_pud(pud_t *dst, pud_t val)
++static inline pte_t native_ptep_get_and_clear(pte_t *xp)
+ {
+- pud_val(*dst) = pud_val(val);
++#ifdef CONFIG_SMP
++ return native_make_pte(xchg(&xp->pte, 0));
++#else
++ /* native_local_ptep_get_and_clear, but duplicated because of cyclic dependency */
++ pte_t ret = *xp;
++ native_pte_clear(NULL, 0, xp);
++ return ret;
++#endif
+ }
+
+-static inline void pud_clear (pud_t *pud)
++static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
+ {
+- set_pud(pud, __pud(0));
++ *pmdp = pmd;
+ }
+
+-static inline void set_pgd(pgd_t *dst, pgd_t val)
++static inline void native_pmd_clear(pmd_t *pmd)
+ {
+- pgd_val(*dst) = pgd_val(val);
+-}
++ native_set_pmd(pmd, native_make_pmd(0));
++}
+
+-static inline void pgd_clear (pgd_t * pgd)
++static inline void native_set_pud(pud_t *pudp, pud_t pud)
+ {
+- set_pgd(pgd, __pgd(0));
++ *pudp = pud;
+ }
+
+-#define ptep_get_and_clear(mm,addr,xp) __pte(xchg(&(xp)->pte, 0))
++static inline void native_pud_clear(pud_t *pud)
++{
++ native_set_pud(pud, native_make_pud(0));
++}
+
+-struct mm_struct;
++static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
++{
++ *pgdp = pgd;
++}
+
+-static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long addr, pte_t *ptep, int full)
++static inline void native_pgd_clear(pgd_t * pgd)
+ {
+- pte_t pte;
+- if (full) {
+- pte = *ptep;
+- *ptep = __pte(0);
+- } else {
+- pte = ptep_get_and_clear(mm, addr, ptep);
+- }
+- return pte;
++ native_set_pgd(pgd, native_make_pgd(0));
+ }
+
+ #define pte_same(a, b) ((a).pte == (b).pte)
+
+-#define pte_pgprot(a) (__pgprot((a).pte & ~PHYSICAL_PAGE_MASK))
-
--/* Round it up ! */
--#define USER_PTRS_PER_PGD ((TASK_SIZE+PGDIR_SIZE-1)/PGDIR_SIZE)
+ #endif /* !__ASSEMBLY__ */
+
+ #define PMD_SIZE (_AC(1,UL) << PMD_SHIFT)
+@@ -131,8 +136,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long
+ #define PGDIR_SIZE (_AC(1,UL) << PGDIR_SHIFT)
+ #define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+-#define USER_PTRS_PER_PGD ((TASK_SIZE-1)/PGDIR_SIZE+1)
-#define FIRST_USER_ADDRESS 0
+
+ #define MAXMEM _AC(0x3fffffffffff, UL)
+ #define VMALLOC_START _AC(0xffffc20000000000, UL)
+@@ -142,91 +145,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, unsigned long
+ #define MODULES_END _AC(0xfffffffffff00000, UL)
+ #define MODULES_LEN (MODULES_END - MODULES_VADDR)
+
+-#define _PAGE_BIT_PRESENT 0
+-#define _PAGE_BIT_RW 1
+-#define _PAGE_BIT_USER 2
+-#define _PAGE_BIT_PWT 3
+-#define _PAGE_BIT_PCD 4
+-#define _PAGE_BIT_ACCESSED 5
+-#define _PAGE_BIT_DIRTY 6
+-#define _PAGE_BIT_PSE 7 /* 4 MB (or 2MB) page */
+-#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */
+-#define _PAGE_BIT_NX 63 /* No execute: only valid after cpuid check */
+-
+-#define _PAGE_PRESENT 0x001
+-#define _PAGE_RW 0x002
+-#define _PAGE_USER 0x004
+-#define _PAGE_PWT 0x008
+-#define _PAGE_PCD 0x010
+-#define _PAGE_ACCESSED 0x020
+-#define _PAGE_DIRTY 0x040
+-#define _PAGE_PSE 0x080 /* 2MB page */
+-#define _PAGE_FILE 0x040 /* nonlinear file mapping, saved PTE; unset:swap */
+-#define _PAGE_GLOBAL 0x100 /* Global TLB entry */
-
--#ifndef __ASSEMBLY__
--#define VMALLOC_END 0xff000000
--#define VMALLOC_START 0xf0000000
--#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+-#define _PAGE_PROTNONE 0x080 /* If not present */
+-#define _PAGE_NX (_AC(1,UL)<<_PAGE_BIT_NX)
-
--#define IOBASE_VADDR 0xff000000
--#define IOBASE_END 0xffffffff
+-#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
+-#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
-
--/*
-- * PTEL coherent flags.
-- * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
-- */
--/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
-- positions, to avoid expensive bit shuffling on every refill. The remaining
-- bits are used for s/w purposes and masked out on each refill.
+-#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
-
-- Note, the PTE slots are used to hold data of type swp_entry_t when a page is
-- swapped out. Only the _PAGE_PRESENT flag is significant when the page is
-- swapped out, and it must be placed so that it doesn't overlap either the
-- type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
-- at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
-- scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
-- [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
-- into 2 pieces. That is handled by SWP_ENTRY and SWP_TYPE below. */
--#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
--#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
--#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
--#define _PAGE_PRESENT 0x004 /* software: page referenced */
--#define _PAGE_FILE 0x004 /* software: only when !present */
--#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
--#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
--#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
--#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
--#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
--#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
--#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
--#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
--#define _PAGE_ACCESSED 0x800 /* software: page referenced */
+-#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
+-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
+-#define PAGE_SHARED_EXEC __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
+-#define PAGE_COPY_NOEXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
+-#define PAGE_COPY PAGE_COPY_NOEXEC
+-#define PAGE_COPY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_NX)
+-#define PAGE_READONLY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+-#define __PAGE_KERNEL \
+- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_NX)
+-#define __PAGE_KERNEL_EXEC \
+- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
+-#define __PAGE_KERNEL_NOCACHE \
+- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_PCD | _PAGE_ACCESSED | _PAGE_NX)
+-#define __PAGE_KERNEL_RO \
+- (_PAGE_PRESENT | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_NX)
+-#define __PAGE_KERNEL_VSYSCALL \
+- (_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+-#define __PAGE_KERNEL_VSYSCALL_NOCACHE \
+- (_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED | _PAGE_PCD)
+-#define __PAGE_KERNEL_LARGE \
+- (__PAGE_KERNEL | _PAGE_PSE)
+-#define __PAGE_KERNEL_LARGE_EXEC \
+- (__PAGE_KERNEL_EXEC | _PAGE_PSE)
+-
+-#define MAKE_GLOBAL(x) __pgprot((x) | _PAGE_GLOBAL)
+-
+-#define PAGE_KERNEL MAKE_GLOBAL(__PAGE_KERNEL)
+-#define PAGE_KERNEL_EXEC MAKE_GLOBAL(__PAGE_KERNEL_EXEC)
+-#define PAGE_KERNEL_RO MAKE_GLOBAL(__PAGE_KERNEL_RO)
+-#define PAGE_KERNEL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_NOCACHE)
+-#define PAGE_KERNEL_VSYSCALL32 __pgprot(__PAGE_KERNEL_VSYSCALL)
+-#define PAGE_KERNEL_VSYSCALL MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL)
+-#define PAGE_KERNEL_LARGE MAKE_GLOBAL(__PAGE_KERNEL_LARGE)
+-#define PAGE_KERNEL_VSYSCALL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL_NOCACHE)
+-
+-/* xwr */
+-#define __P000 PAGE_NONE
+-#define __P001 PAGE_READONLY
+-#define __P010 PAGE_COPY
+-#define __P011 PAGE_COPY
+-#define __P100 PAGE_READONLY_EXEC
+-#define __P101 PAGE_READONLY_EXEC
+-#define __P110 PAGE_COPY_EXEC
+-#define __P111 PAGE_COPY_EXEC
+-
+-#define __S000 PAGE_NONE
+-#define __S001 PAGE_READONLY
+-#define __S010 PAGE_SHARED
+-#define __S011 PAGE_SHARED
+-#define __S100 PAGE_READONLY_EXEC
+-#define __S101 PAGE_READONLY_EXEC
+-#define __S110 PAGE_SHARED_EXEC
+-#define __S111 PAGE_SHARED_EXEC
-
--/* Mask which drops software flags */
--#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
+ #ifndef __ASSEMBLY__
+
+ static inline unsigned long pgd_bad(pgd_t pgd)
+@@ -246,66 +164,16 @@ static inline unsigned long pmd_bad(pmd_t pmd)
+
+ #define pte_none(x) (!pte_val(x))
+ #define pte_present(x) (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))
+-#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+
+-#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT)) /* FIXME: is this
+- right? */
++#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT)) /* FIXME: is this right? */
+ #define pte_page(x) pfn_to_page(pte_pfn(x))
+ #define pte_pfn(x) ((pte_val(x) & __PHYSICAL_MASK) >> PAGE_SHIFT)
+
+-static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+-{
+- pte_t pte;
+- pte_val(pte) = (page_nr << PAGE_SHIFT);
+- pte_val(pte) |= pgprot_val(pgprot);
+- pte_val(pte) &= __supported_pte_mask;
+- return pte;
+-}
-
-/*
-- * HugeTLB support
+- * The following only work if pte_present() is true.
+- * Undefined behaviour if not..
- */
--#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
--#define _PAGE_SZHUGE (_PAGE_SIZE0)
--#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
--#define _PAGE_SZHUGE (_PAGE_SIZE1)
--#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
--#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
--#endif
+-#define __LARGE_PTE (_PAGE_PSE|_PAGE_PRESENT)
+-static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
+-static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
+-static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_RW; }
+-static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
+-static inline int pte_huge(pte_t pte) { return pte_val(pte) & _PAGE_PSE; }
-
--/*
-- * Default flags for a Kernel page.
-- * This is fundametally also SHARED because the main use of this define
-- * (other than for PGD/PMD entries) is for the VMALLOC pool which is
-- * contextless.
-- *
-- * _PAGE_EXECUTE is required for modules
-- *
-- */
--#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
-- _PAGE_EXECUTE | \
-- _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
-- _PAGE_SHARED)
+-static inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
+-static inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
+-static inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_RW)); return pte; }
+-static inline pte_t pte_mkexec(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_NX)); return pte; }
+-static inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
+-static inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
+-static inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_RW)); return pte; }
+-static inline pte_t pte_mkhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_PSE)); return pte; }
+-static inline pte_t pte_clrhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_PSE)); return pte; }
-
--/* Default flags for a User page */
--#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
+-struct vm_area_struct;
-
--#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+-{
+- if (!pte_young(*ptep))
+- return 0;
+- return test_and_clear_bit(_PAGE_BIT_ACCESSED, &ptep->pte);
+-}
-
--#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
--#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
-- _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_USER | \
-- _PAGE_SHARED)
--/* We need to include PAGE_EXECUTE in PAGE_COPY because it is the default
-- * protection mode for the stack. */
--#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_USER | _PAGE_EXECUTE)
--#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
-- _PAGE_ACCESSED | _PAGE_USER)
--#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
+-static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+-{
+- clear_bit(_PAGE_BIT_RW, &ptep->pte);
+-}
-
+ /*
+ * Macro to mark a page protection value as "uncacheable".
+ */
+ #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT))
+
+-static inline int pmd_large(pmd_t pte) {
+- return (pmd_val(pte) & __LARGE_PTE) == __LARGE_PTE;
+-}
-
--/*
-- * In ST50 we have full permissions (Read/Write/Execute/Shared).
-- * Just match'em all. These are for mmap(), therefore all at least
-- * User/Cachable/Present/Accessed. No point in making Fault on Write.
-- */
--#define __MMAP_COMMON (_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
-- /* sxwr */
--#define __P000 __pgprot(__MMAP_COMMON)
--#define __P001 __pgprot(__MMAP_COMMON | _PAGE_READ)
--#define __P010 __pgprot(__MMAP_COMMON)
--#define __P011 __pgprot(__MMAP_COMMON | _PAGE_READ)
--#define __P100 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
--#define __P101 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
--#define __P110 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
--#define __P111 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+
+ /*
+ * Conversion functions: convert a page and protection to a page entry,
+@@ -340,29 +208,18 @@ static inline int pmd_large(pmd_t pte) {
+ pmd_index(address))
+ #define pmd_none(x) (!pmd_val(x))
+ #define pmd_present(x) (pmd_val(x) & _PAGE_PRESENT)
+-#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
+ #define pfn_pmd(nr,prot) (__pmd(((nr) << PAGE_SHIFT) | pgprot_val(prot)))
+ #define pmd_pfn(x) ((pmd_val(x) & __PHYSICAL_MASK) >> PAGE_SHIFT)
+
+ #define pte_to_pgoff(pte) ((pte_val(pte) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)
+-#define pgoff_to_pte(off) ((pte_t) { ((off) << PAGE_SHIFT) | _PAGE_FILE })
++#define pgoff_to_pte(off) ((pte_t) { .pte = ((off) << PAGE_SHIFT) | _PAGE_FILE })
+ #define PTE_FILE_MAX_BITS __PHYSICAL_MASK_SHIFT
+
+ /* PTE - Level 1 access. */
+
+ /* page, protection -> pte */
+ #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+-#define mk_pte_huge(entry) (pte_val(entry) |= _PAGE_PRESENT | _PAGE_PSE)
+
+-/* Change flags of a PTE */
+-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+-{
+- pte_val(pte) &= _PAGE_CHG_MASK;
+- pte_val(pte) |= pgprot_val(newprot);
+- pte_val(pte) &= __supported_pte_mask;
+- return pte;
+-}
+-
+ #define pte_index(address) \
+ (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+ #define pte_offset_kernel(dir, address) ((pte_t *) pmd_page_vaddr(*(dir)) + \
+@@ -376,40 +233,20 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+
+ #define update_mmu_cache(vma,address,pte) do { } while (0)
+
+-/* We only update the dirty/accessed state if we set
+- * the dirty bit by hand in the kernel, since the hardware
+- * will do the accessed bit for us, and we don't want to
+- * race with other CPU's that might be updating the dirty
+- * bit at the same time. */
+-#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+-#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
+-({ \
+- int __changed = !pte_same(*(__ptep), __entry); \
+- if (__changed && __dirty) { \
+- set_pte(__ptep, __entry); \
+- flush_tlb_page(__vma, __address); \
+- } \
+- __changed; \
+-})
-
--#define __S000 __pgprot(__MMAP_COMMON | _PAGE_SHARED)
--#define __S001 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ)
--#define __S010 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_WRITE)
--#define __S011 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ | _PAGE_WRITE)
--#define __S100 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE)
--#define __S101 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ)
--#define __S110 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_WRITE)
--#define __S111 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ | _PAGE_WRITE)
+ /* Encode and de-code a swap entry */
+ #define __swp_type(x) (((x).val >> 1) & 0x3f)
+ #define __swp_offset(x) ((x).val >> 8)
+ #define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
+ #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+-#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-
--/* Make it a device mapping for maximum safety (e.g. for mapping device
-- registers into user-space via /dev/map). */
--#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
--#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+-extern spinlock_t pgd_lock;
+-extern struct list_head pgd_list;
++#define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })
+
+ extern int kern_addr_valid(unsigned long addr);
+
+-pte_t *lookup_address(unsigned long addr);
-
+ #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+ remap_pfn_range(vma, vaddr, pfn, size, prot)
+
+ #define HAVE_ARCH_UNMAPPED_AREA
++#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+
+ #define pgtable_cache_init() do { } while (0)
+ #define check_pgt_cache() do { } while (0)
+@@ -422,12 +259,7 @@ pte_t *lookup_address(unsigned long addr);
+ #define kc_offset_to_vaddr(o) \
+ (((o) & (1UL << (__VIRTUAL_MASK_SHIFT-1))) ? ((o) | (~__VIRTUAL_MASK)) : (o))
+
+-#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
+-#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+ #define __HAVE_ARCH_PTE_SAME
+-#include <asm-generic/pgtable.h>
+ #endif /* !__ASSEMBLY__ */
+
+ #endif /* _X86_64_PGTABLE_H */
+diff --git a/include/asm-x86/processor.h b/include/asm-x86/processor.h
+index 46e1c04..ab4d0c2 100644
+--- a/include/asm-x86/processor.h
++++ b/include/asm-x86/processor.h
+@@ -1,5 +1,842 @@
++#ifndef __ASM_X86_PROCESSOR_H
++#define __ASM_X86_PROCESSOR_H
++
++#include <asm/processor-flags.h>
++
++/* migration helpers, for KVM - will be removed in 2.6.25: */
++#include <asm/vm86.h>
++#define Xgt_desc_struct desc_ptr
++
++/* Forward declaration, a strange C thing */
++struct task_struct;
++struct mm_struct;
++
++#include <asm/vm86.h>
++#include <asm/math_emu.h>
++#include <asm/segment.h>
++#include <asm/types.h>
++#include <asm/sigcontext.h>
++#include <asm/current.h>
++#include <asm/cpufeature.h>
++#include <asm/system.h>
++#include <asm/page.h>
++#include <asm/percpu.h>
++#include <asm/msr.h>
++#include <asm/desc_defs.h>
++#include <asm/nops.h>
++#include <linux/personality.h>
++#include <linux/cpumask.h>
++#include <linux/cache.h>
++#include <linux/threads.h>
++#include <linux/init.h>
++
++/*
++ * Default implementation of macro that returns current
++ * instruction pointer ("program counter").
++ */
++static inline void *current_text_addr(void)
++{
++ void *pc;
++ asm volatile("mov $1f,%0\n1:":"=r" (pc));
++ return pc;
++}
++
++#ifdef CONFIG_X86_VSMP
++#define ARCH_MIN_TASKALIGN (1 << INTERNODE_CACHE_SHIFT)
++#define ARCH_MIN_MMSTRUCT_ALIGN (1 << INTERNODE_CACHE_SHIFT)
++#else
++#define ARCH_MIN_TASKALIGN 16
++#define ARCH_MIN_MMSTRUCT_ALIGN 0
++#endif
++
++/*
++ * CPU type and hardware bug flags. Kept separately for each CPU.
++ * Members of this structure are referenced in head.S, so think twice
++ * before touching them. [mj]
++ */
++
++struct cpuinfo_x86 {
++ __u8 x86; /* CPU family */
++ __u8 x86_vendor; /* CPU vendor */
++ __u8 x86_model;
++ __u8 x86_mask;
++#ifdef CONFIG_X86_32
++ char wp_works_ok; /* It doesn't on 386's */
++ char hlt_works_ok; /* Problems on some 486Dx4's and old 386's */
++ char hard_math;
++ char rfu;
++ char fdiv_bug;
++ char f00f_bug;
++ char coma_bug;
++ char pad0;
++#else
++ /* number of 4K pages in DTLB/ITLB combined(in pages)*/
++ int x86_tlbsize;
++ __u8 x86_virt_bits, x86_phys_bits;
++ /* cpuid returned core id bits */
++ __u8 x86_coreid_bits;
++ /* Max extended CPUID function supported */
++ __u32 extended_cpuid_level;
++#endif
++ int cpuid_level; /* Maximum supported CPUID level, -1=no CPUID */
++ __u32 x86_capability[NCAPINTS];
++ char x86_vendor_id[16];
++ char x86_model_id[64];
++ int x86_cache_size; /* in KB - valid for CPUS which support this
++ call */
++ int x86_cache_alignment; /* In bytes */
++ int x86_power;
++ unsigned long loops_per_jiffy;
++#ifdef CONFIG_SMP
++ cpumask_t llc_shared_map; /* cpus sharing the last level cache */
++#endif
++ u16 x86_max_cores; /* cpuid returned max cores value */
++ u16 apicid;
++ u16 x86_clflush_size;
++#ifdef CONFIG_SMP
++ u16 booted_cores; /* number of cores as seen by OS */
++ u16 phys_proc_id; /* Physical processor id. */
++ u16 cpu_core_id; /* Core id */
++ u16 cpu_index; /* index into per_cpu list */
++#endif
++} __attribute__((__aligned__(SMP_CACHE_BYTES)));
++
++#define X86_VENDOR_INTEL 0
++#define X86_VENDOR_CYRIX 1
++#define X86_VENDOR_AMD 2
++#define X86_VENDOR_UMC 3
++#define X86_VENDOR_NEXGEN 4
++#define X86_VENDOR_CENTAUR 5
++#define X86_VENDOR_TRANSMETA 7
++#define X86_VENDOR_NSC 8
++#define X86_VENDOR_NUM 9
++#define X86_VENDOR_UNKNOWN 0xff
++
++/*
++ * capabilities of CPUs
++ */
++extern struct cpuinfo_x86 boot_cpu_data;
++extern struct cpuinfo_x86 new_cpu_data;
++extern struct tss_struct doublefault_tss;
++extern __u32 cleared_cpu_caps[NCAPINTS];
++
++#ifdef CONFIG_SMP
++DECLARE_PER_CPU(struct cpuinfo_x86, cpu_info);
++#define cpu_data(cpu) per_cpu(cpu_info, cpu)
++#define current_cpu_data cpu_data(smp_processor_id())
++#else
++#define cpu_data(cpu) boot_cpu_data
++#define current_cpu_data boot_cpu_data
++#endif
++
++void cpu_detect(struct cpuinfo_x86 *c);
++
++extern void identify_cpu(struct cpuinfo_x86 *);
++extern void identify_boot_cpu(void);
++extern void identify_secondary_cpu(struct cpuinfo_x86 *);
++extern void print_cpu_info(struct cpuinfo_x86 *);
++extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
++extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c);
++extern unsigned short num_cache_leaves;
++
++#if defined(CONFIG_X86_HT) || defined(CONFIG_X86_64)
++extern void detect_ht(struct cpuinfo_x86 *c);
++#else
++static inline void detect_ht(struct cpuinfo_x86 *c) {}
++#endif
++
++static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
++ unsigned int *ecx, unsigned int *edx)
++{
++ /* ecx is often an input as well as an output. */
++ __asm__("cpuid"
++ : "=a" (*eax),
++ "=b" (*ebx),
++ "=c" (*ecx),
++ "=d" (*edx)
++ : "0" (*eax), "2" (*ecx));
++}
++
++static inline void load_cr3(pgd_t *pgdir)
++{
++ write_cr3(__pa(pgdir));
++}
++
++#ifdef CONFIG_X86_32
++/* This is the TSS defined by the hardware. */
++struct x86_hw_tss {
++ unsigned short back_link, __blh;
++ unsigned long sp0;
++ unsigned short ss0, __ss0h;
++ unsigned long sp1;
++ unsigned short ss1, __ss1h; /* ss1 caches MSR_IA32_SYSENTER_CS */
++ unsigned long sp2;
++ unsigned short ss2, __ss2h;
++ unsigned long __cr3;
++ unsigned long ip;
++ unsigned long flags;
++ unsigned long ax, cx, dx, bx;
++ unsigned long sp, bp, si, di;
++ unsigned short es, __esh;
++ unsigned short cs, __csh;
++ unsigned short ss, __ssh;
++ unsigned short ds, __dsh;
++ unsigned short fs, __fsh;
++ unsigned short gs, __gsh;
++ unsigned short ldt, __ldth;
++ unsigned short trace, io_bitmap_base;
++} __attribute__((packed));
++#else
++struct x86_hw_tss {
++ u32 reserved1;
++ u64 sp0;
++ u64 sp1;
++ u64 sp2;
++ u64 reserved2;
++ u64 ist[7];
++ u32 reserved3;
++ u32 reserved4;
++ u16 reserved5;
++ u16 io_bitmap_base;
++} __attribute__((packed)) ____cacheline_aligned;
++#endif
++
++/*
++ * Size of io_bitmap.
++ */
++#define IO_BITMAP_BITS 65536
++#define IO_BITMAP_BYTES (IO_BITMAP_BITS/8)
++#define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long))
++#define IO_BITMAP_OFFSET offsetof(struct tss_struct, io_bitmap)
++#define INVALID_IO_BITMAP_OFFSET 0x8000
++#define INVALID_IO_BITMAP_OFFSET_LAZY 0x9000
++
++struct tss_struct {
++ struct x86_hw_tss x86_tss;
++
++ /*
++ * The extra 1 is there because the CPU will access an
++ * additional byte beyond the end of the IO permission
++ * bitmap. The extra byte must be all 1 bits, and must
++ * be within the limit.
++ */
++ unsigned long io_bitmap[IO_BITMAP_LONGS + 1];
++ /*
++ * Cache the current maximum and the last task that used the bitmap:
++ */
++ unsigned long io_bitmap_max;
++ struct thread_struct *io_bitmap_owner;
++ /*
++ * pads the TSS to be cacheline-aligned (size is 0x100)
++ */
++ unsigned long __cacheline_filler[35];
++ /*
++ * .. and then another 0x100 bytes for emergency kernel stack
++ */
++ unsigned long stack[64];
++} __attribute__((packed));
++
++DECLARE_PER_CPU(struct tss_struct, init_tss);
++
++/* Save the original ist values for checking stack pointers during debugging */
++struct orig_ist {
++ unsigned long ist[7];
++};
++
++#define MXCSR_DEFAULT 0x1f80
++
++struct i387_fsave_struct {
++ u32 cwd;
++ u32 swd;
++ u32 twd;
++ u32 fip;
++ u32 fcs;
++ u32 foo;
++ u32 fos;
++ u32 st_space[20]; /* 8*10 bytes for each FP-reg = 80 bytes */
++ u32 status; /* software status information */
++};
++
++struct i387_fxsave_struct {
++ u16 cwd;
++ u16 swd;
++ u16 twd;
++ u16 fop;
++ union {
++ struct {
++ u64 rip;
++ u64 rdp;
++ };
++ struct {
++ u32 fip;
++ u32 fcs;
++ u32 foo;
++ u32 fos;
++ };
++ };
++ u32 mxcsr;
++ u32 mxcsr_mask;
++ u32 st_space[32]; /* 8*16 bytes for each FP-reg = 128 bytes */
++ u32 xmm_space[64]; /* 16*16 bytes for each XMM-reg = 256 bytes */
++ u32 padding[24];
++} __attribute__((aligned(16)));
++
++struct i387_soft_struct {
++ u32 cwd;
++ u32 swd;
++ u32 twd;
++ u32 fip;
++ u32 fcs;
++ u32 foo;
++ u32 fos;
++ u32 st_space[20]; /* 8*10 bytes for each FP-reg = 80 bytes */
++ u8 ftop, changed, lookahead, no_update, rm, alimit;
++ struct info *info;
++ u32 entry_eip;
++};
++
++union i387_union {
++ struct i387_fsave_struct fsave;
++ struct i387_fxsave_struct fxsave;
++ struct i387_soft_struct soft;
++};
++
++#ifdef CONFIG_X86_32
++/*
++ * the following now lives in the per cpu area:
++ * extern int cpu_llc_id[NR_CPUS];
++ */
++DECLARE_PER_CPU(u8, cpu_llc_id);
++#else
++DECLARE_PER_CPU(struct orig_ist, orig_ist);
++#endif
++
++extern void print_cpu_info(struct cpuinfo_x86 *);
++extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
++extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c);
++extern unsigned short num_cache_leaves;
++
++struct thread_struct {
++/* cached TLS descriptors. */
++ struct desc_struct tls_array[GDT_ENTRY_TLS_ENTRIES];
++ unsigned long sp0;
++ unsigned long sp;
++#ifdef CONFIG_X86_32
++ unsigned long sysenter_cs;
++#else
++ unsigned long usersp; /* Copy from PDA */
++ unsigned short es, ds, fsindex, gsindex;
++#endif
++ unsigned long ip;
++ unsigned long fs;
++ unsigned long gs;
++/* Hardware debugging registers */
++ unsigned long debugreg0;
++ unsigned long debugreg1;
++ unsigned long debugreg2;
++ unsigned long debugreg3;
++ unsigned long debugreg6;
++ unsigned long debugreg7;
++/* fault info */
++ unsigned long cr2, trap_no, error_code;
++/* floating point info */
++ union i387_union i387 __attribute__((aligned(16)));;
++#ifdef CONFIG_X86_32
++/* virtual 86 mode info */
++ struct vm86_struct __user *vm86_info;
++ unsigned long screen_bitmap;
++ unsigned long v86flags, v86mask, saved_sp0;
++ unsigned int saved_fs, saved_gs;
++#endif
++/* IO permissions */
++ unsigned long *io_bitmap_ptr;
++ unsigned long iopl;
++/* max allowed port in the bitmap, in bytes: */
++ unsigned io_bitmap_max;
++/* MSR_IA32_DEBUGCTLMSR value to switch in if TIF_DEBUGCTLMSR is set. */
++ unsigned long debugctlmsr;
++/* Debug Store - if not 0 points to a DS Save Area configuration;
++ * goes into MSR_IA32_DS_AREA */
++ unsigned long ds_area_msr;
++};
++
++static inline unsigned long native_get_debugreg(int regno)
++{
++ unsigned long val = 0; /* Damn you, gcc! */
++
++ switch (regno) {
++ case 0:
++ asm("mov %%db0, %0" :"=r" (val)); break;
++ case 1:
++ asm("mov %%db1, %0" :"=r" (val)); break;
++ case 2:
++ asm("mov %%db2, %0" :"=r" (val)); break;
++ case 3:
++ asm("mov %%db3, %0" :"=r" (val)); break;
++ case 6:
++ asm("mov %%db6, %0" :"=r" (val)); break;
++ case 7:
++ asm("mov %%db7, %0" :"=r" (val)); break;
++ default:
++ BUG();
++ }
++ return val;
++}
++
++static inline void native_set_debugreg(int regno, unsigned long value)
++{
++ switch (regno) {
++ case 0:
++ asm("mov %0,%%db0" : /* no output */ :"r" (value));
++ break;
++ case 1:
++ asm("mov %0,%%db1" : /* no output */ :"r" (value));
++ break;
++ case 2:
++ asm("mov %0,%%db2" : /* no output */ :"r" (value));
++ break;
++ case 3:
++ asm("mov %0,%%db3" : /* no output */ :"r" (value));
++ break;
++ case 6:
++ asm("mov %0,%%db6" : /* no output */ :"r" (value));
++ break;
++ case 7:
++ asm("mov %0,%%db7" : /* no output */ :"r" (value));
++ break;
++ default:
++ BUG();
++ }
++}
++
++/*
++ * Set IOPL bits in EFLAGS from given mask
++ */
++static inline void native_set_iopl_mask(unsigned mask)
++{
++#ifdef CONFIG_X86_32
++ unsigned int reg;
++ __asm__ __volatile__ ("pushfl;"
++ "popl %0;"
++ "andl %1, %0;"
++ "orl %2, %0;"
++ "pushl %0;"
++ "popfl"
++ : "=&r" (reg)
++ : "i" (~X86_EFLAGS_IOPL), "r" (mask));
++#endif
++}
++
++static inline void native_load_sp0(struct tss_struct *tss,
++ struct thread_struct *thread)
++{
++ tss->x86_tss.sp0 = thread->sp0;
++#ifdef CONFIG_X86_32
++ /* Only happens when SEP is enabled, no need to test "SEP"arately */
++ if (unlikely(tss->x86_tss.ss1 != thread->sysenter_cs)) {
++ tss->x86_tss.ss1 = thread->sysenter_cs;
++ wrmsr(MSR_IA32_SYSENTER_CS, thread->sysenter_cs, 0);
++ }
++#endif
++}
++
++static inline void native_swapgs(void)
++{
++#ifdef CONFIG_X86_64
++ asm volatile("swapgs" ::: "memory");
++#endif
++}
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#define __cpuid native_cpuid
++#define paravirt_enabled() 0
++
++/*
++ * These special macros can be used to get or set a debugging register
++ */
++#define get_debugreg(var, register) \
++ (var) = native_get_debugreg(register)
++#define set_debugreg(value, register) \
++ native_set_debugreg(register, value)
++
++static inline void load_sp0(struct tss_struct *tss,
++ struct thread_struct *thread)
++{
++ native_load_sp0(tss, thread);
++}
++
++#define set_iopl_mask native_set_iopl_mask
++#define SWAPGS swapgs
++#endif /* CONFIG_PARAVIRT */
++
++/*
++ * Save the cr4 feature set we're using (ie
++ * Pentium 4MB enable and PPro Global page
++ * enable), so that any CPU's that boot up
++ * after us can get the correct flags.
++ */
++extern unsigned long mmu_cr4_features;
++
++static inline void set_in_cr4(unsigned long mask)
++{
++ unsigned cr4;
++ mmu_cr4_features |= mask;
++ cr4 = read_cr4();
++ cr4 |= mask;
++ write_cr4(cr4);
++}
++
++static inline void clear_in_cr4(unsigned long mask)
++{
++ unsigned cr4;
++ mmu_cr4_features &= ~mask;
++ cr4 = read_cr4();
++ cr4 &= ~mask;
++ write_cr4(cr4);
++}
++
++struct microcode_header {
++ unsigned int hdrver;
++ unsigned int rev;
++ unsigned int date;
++ unsigned int sig;
++ unsigned int cksum;
++ unsigned int ldrver;
++ unsigned int pf;
++ unsigned int datasize;
++ unsigned int totalsize;
++ unsigned int reserved[3];
++};
++
++struct microcode {
++ struct microcode_header hdr;
++ unsigned int bits[0];
++};
++
++typedef struct microcode microcode_t;
++typedef struct microcode_header microcode_header_t;
++
++/* microcode format is extended from prescott processors */
++struct extended_signature {
++ unsigned int sig;
++ unsigned int pf;
++ unsigned int cksum;
++};
++
++struct extended_sigtable {
++ unsigned int count;
++ unsigned int cksum;
++ unsigned int reserved[3];
++ struct extended_signature sigs[0];
++};
++
++typedef struct {
++ unsigned long seg;
++} mm_segment_t;
++
++
++/*
++ * create a kernel thread without removing it from tasklists
++ */
++extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
++
++/* Free all resources held by a thread. */
++extern void release_thread(struct task_struct *);
++
++/* Prepare to copy thread state - unlazy all lazy status */
++extern void prepare_to_copy(struct task_struct *tsk);
++
++unsigned long get_wchan(struct task_struct *p);
++
++/*
++ * Generic CPUID function
++ * clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx
++ * resulting in stale register contents being returned.
++ */
++static inline void cpuid(unsigned int op,
++ unsigned int *eax, unsigned int *ebx,
++ unsigned int *ecx, unsigned int *edx)
++{
++ *eax = op;
++ *ecx = 0;
++ __cpuid(eax, ebx, ecx, edx);
++}
++
++/* Some CPUID calls want 'count' to be placed in ecx */
++static inline void cpuid_count(unsigned int op, int count,
++ unsigned int *eax, unsigned int *ebx,
++ unsigned int *ecx, unsigned int *edx)
++{
++ *eax = op;
++ *ecx = count;
++ __cpuid(eax, ebx, ecx, edx);
++}
++
++/*
++ * CPUID functions returning a single datum
++ */
++static inline unsigned int cpuid_eax(unsigned int op)
++{
++ unsigned int eax, ebx, ecx, edx;
++
++ cpuid(op, &eax, &ebx, &ecx, &edx);
++ return eax;
++}
++static inline unsigned int cpuid_ebx(unsigned int op)
++{
++ unsigned int eax, ebx, ecx, edx;
++
++ cpuid(op, &eax, &ebx, &ecx, &edx);
++ return ebx;
++}
++static inline unsigned int cpuid_ecx(unsigned int op)
++{
++ unsigned int eax, ebx, ecx, edx;
++
++ cpuid(op, &eax, &ebx, &ecx, &edx);
++ return ecx;
++}
++static inline unsigned int cpuid_edx(unsigned int op)
++{
++ unsigned int eax, ebx, ecx, edx;
++
++ cpuid(op, &eax, &ebx, &ecx, &edx);
++ return edx;
++}
++
++/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
++static inline void rep_nop(void)
++{
++ __asm__ __volatile__("rep;nop": : :"memory");
++}
++
++/* Stop speculative execution */
++static inline void sync_core(void)
++{
++ int tmp;
++ asm volatile("cpuid" : "=a" (tmp) : "0" (1)
++ : "ebx", "ecx", "edx", "memory");
++}
++
++#define cpu_relax() rep_nop()
++
++static inline void __monitor(const void *eax, unsigned long ecx,
++ unsigned long edx)
++{
++ /* "monitor %eax,%ecx,%edx;" */
++ asm volatile(
++ ".byte 0x0f,0x01,0xc8;"
++ : :"a" (eax), "c" (ecx), "d"(edx));
++}
++
++static inline void __mwait(unsigned long eax, unsigned long ecx)
++{
++ /* "mwait %eax,%ecx;" */
++ asm volatile(
++ ".byte 0x0f,0x01,0xc9;"
++ : :"a" (eax), "c" (ecx));
++}
++
++static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
++{
++ /* "mwait %eax,%ecx;" */
++ asm volatile(
++ "sti; .byte 0x0f,0x01,0xc9;"
++ : :"a" (eax), "c" (ecx));
++}
++
++extern void mwait_idle_with_hints(unsigned long eax, unsigned long ecx);
++
++extern int force_mwait;
++
++extern void select_idle_routine(const struct cpuinfo_x86 *c);
++
++extern unsigned long boot_option_idle_override;
++
++extern void enable_sep_cpu(void);
++extern int sysenter_setup(void);
++
++/* Defined in head.S */
++extern struct desc_ptr early_gdt_descr;
++
++extern void cpu_set_gdt(int);
++extern void switch_to_new_gdt(void);
++extern void cpu_init(void);
++extern void init_gdt(int cpu);
++
++/* from system description table in BIOS. Mostly for MCA use, but
++ * others may find it useful. */
++extern unsigned int machine_id;
++extern unsigned int machine_submodel_id;
++extern unsigned int BIOS_revision;
++extern unsigned int mca_pentium_flag;
++
++/* Boot loader type from the setup header */
++extern int bootloader_type;
++
++extern char ignore_fpu_irq;
++#define cache_line_size() (boot_cpu_data.x86_cache_alignment)
++
++#define HAVE_ARCH_PICK_MMAP_LAYOUT 1
++#define ARCH_HAS_PREFETCHW
++#define ARCH_HAS_SPINLOCK_PREFETCH
++
++#ifdef CONFIG_X86_32
++#define BASE_PREFETCH ASM_NOP4
++#define ARCH_HAS_PREFETCH
++#else
++#define BASE_PREFETCH "prefetcht0 (%1)"
++#endif
++
++/* Prefetch instructions for Pentium III and AMD Athlon */
++/* It's not worth to care about 3dnow! prefetches for the K6
++ because they are microcoded there and very slow.
++ However we don't do prefetches for pre XP Athlons currently
++ That should be fixed. */
++static inline void prefetch(const void *x)
++{
++ alternative_input(BASE_PREFETCH,
++ "prefetchnta (%1)",
++ X86_FEATURE_XMM,
++ "r" (x));
++}
++
++/* 3dnow! prefetch to get an exclusive cache line. Useful for
++ spinlocks to avoid one state transition in the cache coherency protocol. */
++static inline void prefetchw(const void *x)
++{
++ alternative_input(BASE_PREFETCH,
++ "prefetchw (%1)",
++ X86_FEATURE_3DNOW,
++ "r" (x));
++}
++
++#define spin_lock_prefetch(x) prefetchw(x)
+ #ifdef CONFIG_X86_32
+-# include "processor_32.h"
++/*
++ * User space process size: 3GB (default).
++ */
++#define TASK_SIZE (PAGE_OFFSET)
++
++#define INIT_THREAD { \
++ .sp0 = sizeof(init_stack) + (long)&init_stack, \
++ .vm86_info = NULL, \
++ .sysenter_cs = __KERNEL_CS, \
++ .io_bitmap_ptr = NULL, \
++ .fs = __KERNEL_PERCPU, \
++}
++
++/*
++ * Note that the .io_bitmap member must be extra-big. This is because
++ * the CPU will access an additional byte beyond the end of the IO
++ * permission bitmap. The extra byte must be all 1 bits, and must
++ * be within the limit.
++ */
++#define INIT_TSS { \
++ .x86_tss = { \
++ .sp0 = sizeof(init_stack) + (long)&init_stack, \
++ .ss0 = __KERNEL_DS, \
++ .ss1 = __KERNEL_CS, \
++ .io_bitmap_base = INVALID_IO_BITMAP_OFFSET, \
++ }, \
++ .io_bitmap = { [0 ... IO_BITMAP_LONGS] = ~0 }, \
++}
++
++#define start_thread(regs, new_eip, new_esp) do { \
++ __asm__("movl %0,%%gs": :"r" (0)); \
++ regs->fs = 0; \
++ set_fs(USER_DS); \
++ regs->ds = __USER_DS; \
++ regs->es = __USER_DS; \
++ regs->ss = __USER_DS; \
++ regs->cs = __USER_CS; \
++ regs->ip = new_eip; \
++ regs->sp = new_esp; \
++} while (0)
++
++
++extern unsigned long thread_saved_pc(struct task_struct *tsk);
++
++#define THREAD_SIZE_LONGS (THREAD_SIZE/sizeof(unsigned long))
++#define KSTK_TOP(info) \
++({ \
++ unsigned long *__ptr = (unsigned long *)(info); \
++ (unsigned long)(&__ptr[THREAD_SIZE_LONGS]); \
++})
++
++/*
++ * The below -8 is to reserve 8 bytes on top of the ring0 stack.
++ * This is necessary to guarantee that the entire "struct pt_regs"
++ * is accessable even if the CPU haven't stored the SS/ESP registers
++ * on the stack (interrupt gate does not save these registers
++ * when switching to the same priv ring).
++ * Therefore beware: accessing the ss/esp fields of the
++ * "struct pt_regs" is possible, but they may contain the
++ * completely wrong values.
++ */
++#define task_pt_regs(task) \
++({ \
++ struct pt_regs *__regs__; \
++ __regs__ = (struct pt_regs *)(KSTK_TOP(task_stack_page(task))-8); \
++ __regs__ - 1; \
++})
++
++#define KSTK_ESP(task) (task_pt_regs(task)->sp)
++
+ #else
+-# include "processor_64.h"
++/*
++ * User space process size. 47bits minus one guard page.
++ */
++#define TASK_SIZE64 (0x800000000000UL - 4096)
++
++/* This decides where the kernel will search for a free chunk of vm
++ * space during mmap's.
++ */
++#define IA32_PAGE_OFFSET ((current->personality & ADDR_LIMIT_3GB) ? \
++ 0xc0000000 : 0xFFFFe000)
++
++#define TASK_SIZE (test_thread_flag(TIF_IA32) ? \
++ IA32_PAGE_OFFSET : TASK_SIZE64)
++#define TASK_SIZE_OF(child) ((test_tsk_thread_flag(child, TIF_IA32)) ? \
++ IA32_PAGE_OFFSET : TASK_SIZE64)
++
++#define INIT_THREAD { \
++ .sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
++}
++
++#define INIT_TSS { \
++ .x86_tss.sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
++}
++
++#define start_thread(regs, new_rip, new_rsp) do { \
++ asm volatile("movl %0,%%fs; movl %0,%%es; movl %0,%%ds": :"r" (0)); \
++ load_gs_index(0); \
++ (regs)->ip = (new_rip); \
++ (regs)->sp = (new_rsp); \
++ write_pda(oldrsp, (new_rsp)); \
++ (regs)->cs = __USER_CS; \
++ (regs)->ss = __USER_DS; \
++ (regs)->flags = 0x200; \
++ set_fs(USER_DS); \
++} while (0)
++
++/*
++ * Return saved PC of a blocked thread.
++ * What is this good for? it will be always the scheduler or ret_from_fork.
++ */
++#define thread_saved_pc(t) (*(unsigned long *)((t)->thread.sp - 8))
++
++#define task_pt_regs(tsk) ((struct pt_regs *)(tsk)->thread.sp0 - 1)
++#define KSTK_ESP(tsk) -1 /* sorry. doesn't work for syscall. */
++#endif /* CONFIG_X86_64 */
++
++/* This decides where the kernel will search for a free chunk of vm
++ * space during mmap's.
++ */
++#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 3))
++
++#define KSTK_EIP(task) (task_pt_regs(task)->ip)
++
+ #endif
+diff --git a/include/asm-x86/processor_32.h b/include/asm-x86/processor_32.h
+deleted file mode 100644
+index 13976b0..0000000
+--- a/include/asm-x86/processor_32.h
++++ /dev/null
+@@ -1,786 +0,0 @@
-/*
-- * Handling allocation failures during page table setup.
+- * include/asm-i386/processor.h
+- *
+- * Copyright (C) 1994 Linus Torvalds
- */
--extern void __handle_bad_pmd_kernel(pmd_t * pmd);
--#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
-
--/*
-- * PTE level access routines.
-- *
-- * Note1:
-- * It's the tree walk leaf. This is physical address to be stored.
-- *
-- * Note 2:
-- * Regarding the choice of _PTE_EMPTY:
+-#ifndef __ASM_I386_PROCESSOR_H
+-#define __ASM_I386_PROCESSOR_H
-
-- We must choose a bit pattern that cannot be valid, whether or not the page
-- is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
-- out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
-- left for us to select. If we force bit[7]==0 when swapped out, we could use
-- the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
-- we force bit[7]==1 when swapped out, we can use all zeroes to indicate
-- empty. This is convenient, because the page tables get cleared to zero
-- when they are allocated.
+-#include <asm/vm86.h>
+-#include <asm/math_emu.h>
+-#include <asm/segment.h>
+-#include <asm/page.h>
+-#include <asm/types.h>
+-#include <asm/sigcontext.h>
+-#include <asm/cpufeature.h>
+-#include <asm/msr.h>
+-#include <asm/system.h>
+-#include <linux/cache.h>
+-#include <linux/threads.h>
+-#include <asm/percpu.h>
+-#include <linux/cpumask.h>
+-#include <linux/init.h>
+-#include <asm/processor-flags.h>
-
-- */
--#define _PTE_EMPTY 0x0
--#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
--#define pte_clear(mm,addr,xp) (set_pte_at(mm, addr, xp, __pte(_PTE_EMPTY)))
--#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
+-/* flag for disabling the tsc */
+-extern int tsc_disable;
-
--/*
-- * Some definitions to translate between mem_map, PTEs, and page
-- * addresses:
-- */
+-struct desc_struct {
+- unsigned long a,b;
+-};
-
--/*
-- * Given a PTE, return the index of the mem_map[] entry corresponding
-- * to the page frame the PTE. Get the absolute physical address, make
-- * a relative physical address and translate it to an index.
-- */
--#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
-- __MEMORY_START) >> PAGE_SHIFT)
+-#define desc_empty(desc) \
+- (!((desc)->a | (desc)->b))
-
+-#define desc_equal(desc1, desc2) \
+- (((desc1)->a == (desc2)->a) && ((desc1)->b == (desc2)->b))
-/*
-- * Given a PTE, return the "struct page *".
+- * Default implementation of macro that returns current
+- * instruction pointer ("program counter").
- */
--#define pte_page(x) (mem_map + pte_pagenr(x))
+-#define current_text_addr() ({ void *pc; __asm__("movl $1f,%0\n1:":"=g" (pc)); pc; })
-
-/*
-- * Return number of (down rounded) MB corresponding to x pages.
+- * CPU type and hardware bug flags. Kept separately for each CPU.
+- * Members of this structure are referenced in head.S, so think twice
+- * before touching them. [mj]
- */
--#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
-
+-struct cpuinfo_x86 {
+- __u8 x86; /* CPU family */
+- __u8 x86_vendor; /* CPU vendor */
+- __u8 x86_model;
+- __u8 x86_mask;
+- char wp_works_ok; /* It doesn't on 386's */
+- char hlt_works_ok; /* Problems on some 486Dx4's and old 386's */
+- char hard_math;
+- char rfu;
+- int cpuid_level; /* Maximum supported CPUID level, -1=no CPUID */
+- unsigned long x86_capability[NCAPINTS];
+- char x86_vendor_id[16];
+- char x86_model_id[64];
+- int x86_cache_size; /* in KB - valid for CPUS which support this
+- call */
+- int x86_cache_alignment; /* In bytes */
+- char fdiv_bug;
+- char f00f_bug;
+- char coma_bug;
+- char pad0;
+- int x86_power;
+- unsigned long loops_per_jiffy;
+-#ifdef CONFIG_SMP
+- cpumask_t llc_shared_map; /* cpus sharing the last level cache */
+-#endif
+- unsigned char x86_max_cores; /* cpuid returned max cores value */
+- unsigned char apicid;
+- unsigned short x86_clflush_size;
+-#ifdef CONFIG_SMP
+- unsigned char booted_cores; /* number of cores as seen by OS */
+- __u8 phys_proc_id; /* Physical processor id. */
+- __u8 cpu_core_id; /* Core id */
+- __u8 cpu_index; /* index into per_cpu list */
+-#endif
+-} __attribute__((__aligned__(SMP_CACHE_BYTES)));
+-
+-#define X86_VENDOR_INTEL 0
+-#define X86_VENDOR_CYRIX 1
+-#define X86_VENDOR_AMD 2
+-#define X86_VENDOR_UMC 3
+-#define X86_VENDOR_NEXGEN 4
+-#define X86_VENDOR_CENTAUR 5
+-#define X86_VENDOR_TRANSMETA 7
+-#define X86_VENDOR_NSC 8
+-#define X86_VENDOR_NUM 9
+-#define X86_VENDOR_UNKNOWN 0xff
+-
+-/*
+- * capabilities of CPUs
+- */
+-
+-extern struct cpuinfo_x86 boot_cpu_data;
+-extern struct cpuinfo_x86 new_cpu_data;
+-extern struct tss_struct doublefault_tss;
+-DECLARE_PER_CPU(struct tss_struct, init_tss);
+-
+-#ifdef CONFIG_SMP
+-DECLARE_PER_CPU(struct cpuinfo_x86, cpu_info);
+-#define cpu_data(cpu) per_cpu(cpu_info, cpu)
+-#define current_cpu_data cpu_data(smp_processor_id())
+-#else
+-#define cpu_data(cpu) boot_cpu_data
+-#define current_cpu_data boot_cpu_data
+-#endif
-
-/*
-- * The following have defined behavior only work if pte_present() is true.
+- * the following now lives in the per cpu area:
+- * extern int cpu_llc_id[NR_CPUS];
- */
--static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
--static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
--static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
--static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+-DECLARE_PER_CPU(u8, cpu_llc_id);
+-extern char ignore_fpu_irq;
-
--static inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
--static inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
--static inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
--static inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
--static inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
--static inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
--static inline pte_t pte_mkhuge(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_SZHUGE)); return pte; }
+-void __init cpu_detect(struct cpuinfo_x86 *c);
-
+-extern void identify_boot_cpu(void);
+-extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+-extern void print_cpu_info(struct cpuinfo_x86 *);
+-extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+-extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c);
+-extern unsigned short num_cache_leaves;
-
--/*
-- * Conversion functions: convert a page and protection to a page entry.
-- *
-- * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
-- */
--#define mk_pte(page,pgprot) \
--({ \
-- pte_t __pte; \
-- \
-- set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
-- __MEMORY_START | pgprot_val((pgprot)))); \
-- __pte; \
--})
+-#ifdef CONFIG_X86_HT
+-extern void detect_ht(struct cpuinfo_x86 *c);
+-#else
+-static inline void detect_ht(struct cpuinfo_x86 *c) {}
+-#endif
+-
+-static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
+-{
+- /* ecx is often an input as well as an output. */
+- __asm__("cpuid"
+- : "=a" (*eax),
+- "=b" (*ebx),
+- "=c" (*ecx),
+- "=d" (*edx)
+- : "0" (*eax), "2" (*ecx));
+-}
+-
+-#define load_cr3(pgdir) write_cr3(__pa(pgdir))
-
-/*
-- * This takes a (absolute) physical page address that is used
-- * by the remapping functions
+- * Save the cr4 feature set we're using (ie
+- * Pentium 4MB enable and PPro Global page
+- * enable), so that any CPU's that boot up
+- * after us can get the correct flags.
- */
--#define mk_pte_phys(physpage, pgprot) \
--({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
+-extern unsigned long mmu_cr4_features;
-
--static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
--{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
+-static inline void set_in_cr4 (unsigned long mask)
+-{
+- unsigned cr4;
+- mmu_cr4_features |= mask;
+- cr4 = read_cr4();
+- cr4 |= mask;
+- write_cr4(cr4);
+-}
-
--typedef pte_t *pte_addr_t;
--#define pgtable_cache_init() do { } while (0)
+-static inline void clear_in_cr4 (unsigned long mask)
+-{
+- unsigned cr4;
+- mmu_cr4_features &= ~mask;
+- cr4 = read_cr4();
+- cr4 &= ~mask;
+- write_cr4(cr4);
+-}
-
--extern void update_mmu_cache(struct vm_area_struct * vma,
-- unsigned long address, pte_t pte);
+-/* Stop speculative execution */
+-static inline void sync_core(void)
+-{
+- int tmp;
+- asm volatile("cpuid" : "=a" (tmp) : "0" (1) : "ebx","ecx","edx","memory");
+-}
-
--/* Encode and decode a swap entry */
--#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
--#define __swp_offset(x) ((x).val >> 8)
--#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
--#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
--#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
+-static inline void __monitor(const void *eax, unsigned long ecx,
+- unsigned long edx)
+-{
+- /* "monitor %eax,%ecx,%edx;" */
+- asm volatile(
+- ".byte 0x0f,0x01,0xc8;"
+- : :"a" (eax), "c" (ecx), "d"(edx));
+-}
-
--/* Encode and decode a nonlinear file mapping entry */
--#define PTE_FILE_MAX_BITS 29
--#define pte_to_pgoff(pte) (pte_val(pte))
--#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
+-static inline void __mwait(unsigned long eax, unsigned long ecx)
+-{
+- /* "mwait %eax,%ecx;" */
+- asm volatile(
+- ".byte 0x0f,0x01,0xc9;"
+- : :"a" (eax), "c" (ecx));
+-}
-
--/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
--#define PageSkip(page) (0)
--#define kern_addr_valid(addr) (1)
+-extern void mwait_idle_with_hints(unsigned long eax, unsigned long ecx);
-
--#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
-- remap_pfn_range(vma, vaddr, pfn, size, prot)
+-/* from system description table in BIOS. Mostly for MCA use, but
+-others may find it useful. */
+-extern unsigned int machine_id;
+-extern unsigned int machine_submodel_id;
+-extern unsigned int BIOS_revision;
+-extern unsigned int mca_pentium_flag;
-
--#endif /* !__ASSEMBLY__ */
+-/* Boot loader type from the setup header */
+-extern int bootloader_type;
-
-/*
-- * No page table caches to initialise
+- * User space process size: 3GB (default).
- */
--#define pgtable_cache_init() do { } while (0)
+-#define TASK_SIZE (PAGE_OFFSET)
-
--#define pte_pfn(x) (((unsigned long)((x).pte)) >> PAGE_SHIFT)
--#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
--#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-/* This decides where the kernel will search for a free chunk of vm
+- * space during mmap's.
+- */
+-#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 3))
-
--extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+-#define HAVE_ARCH_PICK_MMAP_LAYOUT
-
--#include <asm-generic/pgtable.h>
+-extern void hard_disable_TSC(void);
+-extern void disable_TSC(void);
+-extern void hard_enable_TSC(void);
+-
+-/*
+- * Size of io_bitmap.
+- */
+-#define IO_BITMAP_BITS 65536
+-#define IO_BITMAP_BYTES (IO_BITMAP_BITS/8)
+-#define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long))
+-#define IO_BITMAP_OFFSET offsetof(struct tss_struct,io_bitmap)
+-#define INVALID_IO_BITMAP_OFFSET 0x8000
+-#define INVALID_IO_BITMAP_OFFSET_LAZY 0x9000
+-
+-struct i387_fsave_struct {
+- long cwd;
+- long swd;
+- long twd;
+- long fip;
+- long fcs;
+- long foo;
+- long fos;
+- long st_space[20]; /* 8*10 bytes for each FP-reg = 80 bytes */
+- long status; /* software status information */
+-};
+-
+-struct i387_fxsave_struct {
+- unsigned short cwd;
+- unsigned short swd;
+- unsigned short twd;
+- unsigned short fop;
+- long fip;
+- long fcs;
+- long foo;
+- long fos;
+- long mxcsr;
+- long mxcsr_mask;
+- long st_space[32]; /* 8*16 bytes for each FP-reg = 128 bytes */
+- long xmm_space[32]; /* 8*16 bytes for each XMM-reg = 128 bytes */
+- long padding[56];
+-} __attribute__ ((aligned (16)));
+-
+-struct i387_soft_struct {
+- long cwd;
+- long swd;
+- long twd;
+- long fip;
+- long fcs;
+- long foo;
+- long fos;
+- long st_space[20]; /* 8*10 bytes for each FP-reg = 80 bytes */
+- unsigned char ftop, changed, lookahead, no_update, rm, alimit;
+- struct info *info;
+- unsigned long entry_eip;
+-};
+-
+-union i387_union {
+- struct i387_fsave_struct fsave;
+- struct i387_fxsave_struct fxsave;
+- struct i387_soft_struct soft;
+-};
-
--#endif /* __ASM_SH64_PGTABLE_H */
-diff --git a/include/asm-sh64/platform.h b/include/asm-sh64/platform.h
-deleted file mode 100644
-index bd0d9c4..0000000
---- a/include/asm-sh64/platform.h
-+++ /dev/null
-@@ -1,64 +0,0 @@
--#ifndef __ASM_SH64_PLATFORM_H
--#define __ASM_SH64_PLATFORM_H
+-typedef struct {
+- unsigned long seg;
+-} mm_segment_t;
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/platform.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- * benedict.gaster at superh.com: 3rd May 2002
-- * Added support for ramdisk, removing statically linked romfs at the same time.
-- */
+-struct thread_struct;
-
--#include <linux/ioport.h>
--#include <asm/irq.h>
+-/* This is the TSS defined by the hardware. */
+-struct i386_hw_tss {
+- unsigned short back_link,__blh;
+- unsigned long esp0;
+- unsigned short ss0,__ss0h;
+- unsigned long esp1;
+- unsigned short ss1,__ss1h; /* ss1 is used to cache MSR_IA32_SYSENTER_CS */
+- unsigned long esp2;
+- unsigned short ss2,__ss2h;
+- unsigned long __cr3;
+- unsigned long eip;
+- unsigned long eflags;
+- unsigned long eax,ecx,edx,ebx;
+- unsigned long esp;
+- unsigned long ebp;
+- unsigned long esi;
+- unsigned long edi;
+- unsigned short es, __esh;
+- unsigned short cs, __csh;
+- unsigned short ss, __ssh;
+- unsigned short ds, __dsh;
+- unsigned short fs, __fsh;
+- unsigned short gs, __gsh;
+- unsigned short ldt, __ldth;
+- unsigned short trace, io_bitmap_base;
+-} __attribute__((packed));
-
+-struct tss_struct {
+- struct i386_hw_tss x86_tss;
-
--/*
-- * Platform definition structure.
-- */
--struct sh64_platform {
-- unsigned int readonly_rootfs;
-- unsigned int ramdisk_flags;
-- unsigned int initial_root_dev;
-- unsigned int loader_type;
-- unsigned int initrd_start;
-- unsigned int initrd_size;
-- unsigned int fpu_flags;
-- unsigned int io_res_count;
-- unsigned int kram_res_count;
-- unsigned int xram_res_count;
-- unsigned int rom_res_count;
-- struct resource *io_res_p;
-- struct resource *kram_res_p;
-- struct resource *xram_res_p;
-- struct resource *rom_res_p;
--};
+- /*
+- * The extra 1 is there because the CPU will access an
+- * additional byte beyond the end of the IO permission
+- * bitmap. The extra byte must be all 1 bits, and must
+- * be within the limit.
+- */
+- unsigned long io_bitmap[IO_BITMAP_LONGS + 1];
+- /*
+- * Cache the current maximum and the last task that used the bitmap:
+- */
+- unsigned long io_bitmap_max;
+- struct thread_struct *io_bitmap_owner;
+- /*
+- * pads the TSS to be cacheline-aligned (size is 0x100)
+- */
+- unsigned long __cacheline_filler[35];
+- /*
+- * .. and then another 0x100 bytes for emergency kernel stack
+- */
+- unsigned long stack[64];
+-} __attribute__((packed));
-
--extern struct sh64_platform platform_parms;
+-#define ARCH_MIN_TASKALIGN 16
-
--extern unsigned long long memory_start, memory_end;
+-struct thread_struct {
+-/* cached TLS descriptors. */
+- struct desc_struct tls_array[GDT_ENTRY_TLS_ENTRIES];
+- unsigned long esp0;
+- unsigned long sysenter_cs;
+- unsigned long eip;
+- unsigned long esp;
+- unsigned long fs;
+- unsigned long gs;
+-/* Hardware debugging registers */
+- unsigned long debugreg[8]; /* %%db0-7 debug registers */
+-/* fault info */
+- unsigned long cr2, trap_no, error_code;
+-/* floating point info */
+- union i387_union i387;
+-/* virtual 86 mode info */
+- struct vm86_struct __user * vm86_info;
+- unsigned long screen_bitmap;
+- unsigned long v86flags, v86mask, saved_esp0;
+- unsigned int saved_fs, saved_gs;
+-/* IO permissions */
+- unsigned long *io_bitmap_ptr;
+- unsigned long iopl;
+-/* max allowed port in the bitmap, in bytes: */
+- unsigned long io_bitmap_max;
+-};
+-
+-#define INIT_THREAD { \
+- .esp0 = sizeof(init_stack) + (long)&init_stack, \
+- .vm86_info = NULL, \
+- .sysenter_cs = __KERNEL_CS, \
+- .io_bitmap_ptr = NULL, \
+- .fs = __KERNEL_PERCPU, \
+-}
-
--extern unsigned long long fpu_in_use;
+-/*
+- * Note that the .io_bitmap member must be extra-big. This is because
+- * the CPU will access an additional byte beyond the end of the IO
+- * permission bitmap. The extra byte must be all 1 bits, and must
+- * be within the limit.
+- */
+-#define INIT_TSS { \
+- .x86_tss = { \
+- .esp0 = sizeof(init_stack) + (long)&init_stack, \
+- .ss0 = __KERNEL_DS, \
+- .ss1 = __KERNEL_CS, \
+- .io_bitmap_base = INVALID_IO_BITMAP_OFFSET, \
+- }, \
+- .io_bitmap = { [ 0 ... IO_BITMAP_LONGS] = ~0 }, \
+-}
+-
+-#define start_thread(regs, new_eip, new_esp) do { \
+- __asm__("movl %0,%%gs": :"r" (0)); \
+- regs->xfs = 0; \
+- set_fs(USER_DS); \
+- regs->xds = __USER_DS; \
+- regs->xes = __USER_DS; \
+- regs->xss = __USER_DS; \
+- regs->xcs = __USER_CS; \
+- regs->eip = new_eip; \
+- regs->esp = new_esp; \
+-} while (0)
-
--extern int platform_int_priority[NR_INTC_IRQS];
+-/* Forward declaration, a strange C thing */
+-struct task_struct;
+-struct mm_struct;
-
--#define FPU_FLAGS (platform_parms.fpu_flags)
--#define STANDARD_IO_RESOURCES (platform_parms.io_res_count)
--#define STANDARD_KRAM_RESOURCES (platform_parms.kram_res_count)
--#define STANDARD_XRAM_RESOURCES (platform_parms.xram_res_count)
--#define STANDARD_ROM_RESOURCES (platform_parms.rom_res_count)
+-/* Free all resources held by a thread. */
+-extern void release_thread(struct task_struct *);
+-
+-/* Prepare to copy thread state - unlazy all lazy status */
+-extern void prepare_to_copy(struct task_struct *tsk);
-
-/*
-- * Kernel Memory description, Respectively:
-- * code = last but one memory descriptor
-- * data = last memory descriptor
+- * create a kernel thread without removing it from tasklists
- */
--#define code_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 2])
--#define data_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 1])
--
--#endif /* __ASM_SH64_PLATFORM_H */
-diff --git a/include/asm-sh64/poll.h b/include/asm-sh64/poll.h
-deleted file mode 100644
-index ca29502..0000000
---- a/include/asm-sh64/poll.h
-+++ /dev/null
-@@ -1,8 +0,0 @@
--#ifndef __ASM_SH64_POLL_H
--#define __ASM_SH64_POLL_H
+-extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-
--#include <asm-generic/poll.h>
+-extern unsigned long thread_saved_pc(struct task_struct *tsk);
+-void show_trace(struct task_struct *task, struct pt_regs *regs, unsigned long *stack);
-
--#undef POLLREMOVE
+-unsigned long get_wchan(struct task_struct *p);
-
--#endif /* __ASM_SH64_POLL_H */
-diff --git a/include/asm-sh64/posix_types.h b/include/asm-sh64/posix_types.h
-deleted file mode 100644
-index 0620317..0000000
---- a/include/asm-sh64/posix_types.h
-+++ /dev/null
-@@ -1,131 +0,0 @@
--#ifndef __ASM_SH64_POSIX_TYPES_H
--#define __ASM_SH64_POSIX_TYPES_H
+-#define THREAD_SIZE_LONGS (THREAD_SIZE/sizeof(unsigned long))
+-#define KSTK_TOP(info) \
+-({ \
+- unsigned long *__ptr = (unsigned long *)(info); \
+- (unsigned long)(&__ptr[THREAD_SIZE_LONGS]); \
+-})
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/posix_types.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- *
-- * This file is generally used by user-level software, so you need to
-- * be a little careful about namespace pollution etc. Also, we cannot
-- * assume GCC is being used.
-- */
+- * The below -8 is to reserve 8 bytes on top of the ring0 stack.
+- * This is necessary to guarantee that the entire "struct pt_regs"
+- * is accessable even if the CPU haven't stored the SS/ESP registers
+- * on the stack (interrupt gate does not save these registers
+- * when switching to the same priv ring).
+- * Therefore beware: accessing the xss/esp fields of the
+- * "struct pt_regs" is possible, but they may contain the
+- * completely wrong values.
+- */
+-#define task_pt_regs(task) \
+-({ \
+- struct pt_regs *__regs__; \
+- __regs__ = (struct pt_regs *)(KSTK_TOP(task_stack_page(task))-8); \
+- __regs__ - 1; \
+-})
-
--typedef unsigned long __kernel_ino_t;
--typedef unsigned short __kernel_mode_t;
--typedef unsigned short __kernel_nlink_t;
--typedef long __kernel_off_t;
--typedef int __kernel_pid_t;
--typedef unsigned short __kernel_ipc_pid_t;
--typedef unsigned short __kernel_uid_t;
--typedef unsigned short __kernel_gid_t;
--typedef long unsigned int __kernel_size_t;
--typedef int __kernel_ssize_t;
--typedef int __kernel_ptrdiff_t;
--typedef long __kernel_time_t;
--typedef long __kernel_suseconds_t;
--typedef long __kernel_clock_t;
--typedef int __kernel_timer_t;
--typedef int __kernel_clockid_t;
--typedef int __kernel_daddr_t;
--typedef char * __kernel_caddr_t;
--typedef unsigned short __kernel_uid16_t;
--typedef unsigned short __kernel_gid16_t;
--typedef unsigned int __kernel_uid32_t;
--typedef unsigned int __kernel_gid32_t;
+-#define KSTK_EIP(task) (task_pt_regs(task)->eip)
+-#define KSTK_ESP(task) (task_pt_regs(task)->esp)
-
--typedef unsigned short __kernel_old_uid_t;
--typedef unsigned short __kernel_old_gid_t;
--typedef unsigned short __kernel_old_dev_t;
-
--#ifdef __GNUC__
--typedef long long __kernel_loff_t;
--#endif
+-struct microcode_header {
+- unsigned int hdrver;
+- unsigned int rev;
+- unsigned int date;
+- unsigned int sig;
+- unsigned int cksum;
+- unsigned int ldrver;
+- unsigned int pf;
+- unsigned int datasize;
+- unsigned int totalsize;
+- unsigned int reserved[3];
+-};
-
--typedef struct {
--#if defined(__KERNEL__) || defined(__USE_ALL)
-- int val[2];
--#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
-- int __val[2];
--#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
--} __kernel_fsid_t;
+-struct microcode {
+- struct microcode_header hdr;
+- unsigned int bits[0];
+-};
-
--#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
+-typedef struct microcode microcode_t;
+-typedef struct microcode_header microcode_header_t;
-
--#undef __FD_SET
--static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
+-/* microcode format is extended from prescott processors */
+-struct extended_signature {
+- unsigned int sig;
+- unsigned int pf;
+- unsigned int cksum;
+-};
+-
+-struct extended_sigtable {
+- unsigned int count;
+- unsigned int cksum;
+- unsigned int reserved[3];
+- struct extended_signature sigs[0];
+-};
+-
+-/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
+-static inline void rep_nop(void)
-{
-- unsigned long __tmp = __fd / __NFDBITS;
-- unsigned long __rem = __fd % __NFDBITS;
-- __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
+- __asm__ __volatile__("rep;nop": : :"memory");
-}
-
--#undef __FD_CLR
--static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
+-#define cpu_relax() rep_nop()
+-
+-static inline void native_load_esp0(struct tss_struct *tss, struct thread_struct *thread)
-{
-- unsigned long __tmp = __fd / __NFDBITS;
-- unsigned long __rem = __fd % __NFDBITS;
-- __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
+- tss->x86_tss.esp0 = thread->esp0;
+- /* This can only happen when SEP is enabled, no need to test "SEP"arately */
+- if (unlikely(tss->x86_tss.ss1 != thread->sysenter_cs)) {
+- tss->x86_tss.ss1 = thread->sysenter_cs;
+- wrmsr(MSR_IA32_SYSENTER_CS, thread->sysenter_cs, 0);
+- }
-}
-
-
--#undef __FD_ISSET
--static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
+-static inline unsigned long native_get_debugreg(int regno)
-{
-- unsigned long __tmp = __fd / __NFDBITS;
-- unsigned long __rem = __fd % __NFDBITS;
-- return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
+- unsigned long val = 0; /* Damn you, gcc! */
+-
+- switch (regno) {
+- case 0:
+- asm("movl %%db0, %0" :"=r" (val)); break;
+- case 1:
+- asm("movl %%db1, %0" :"=r" (val)); break;
+- case 2:
+- asm("movl %%db2, %0" :"=r" (val)); break;
+- case 3:
+- asm("movl %%db3, %0" :"=r" (val)); break;
+- case 6:
+- asm("movl %%db6, %0" :"=r" (val)); break;
+- case 7:
+- asm("movl %%db7, %0" :"=r" (val)); break;
+- default:
+- BUG();
+- }
+- return val;
+-}
+-
+-static inline void native_set_debugreg(int regno, unsigned long value)
+-{
+- switch (regno) {
+- case 0:
+- asm("movl %0,%%db0" : /* no output */ :"r" (value));
+- break;
+- case 1:
+- asm("movl %0,%%db1" : /* no output */ :"r" (value));
+- break;
+- case 2:
+- asm("movl %0,%%db2" : /* no output */ :"r" (value));
+- break;
+- case 3:
+- asm("movl %0,%%db3" : /* no output */ :"r" (value));
+- break;
+- case 6:
+- asm("movl %0,%%db6" : /* no output */ :"r" (value));
+- break;
+- case 7:
+- asm("movl %0,%%db7" : /* no output */ :"r" (value));
+- break;
+- default:
+- BUG();
+- }
-}
-
-/*
-- * This will unroll the loop for the normal constant case (8 ints,
-- * for a 256-bit fd_set)
+- * Set IOPL bits in EFLAGS from given mask
- */
--#undef __FD_ZERO
--static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
+-static inline void native_set_iopl_mask(unsigned mask)
-{
-- unsigned long *__tmp = __p->fds_bits;
-- int __i;
+- unsigned int reg;
+- __asm__ __volatile__ ("pushfl;"
+- "popl %0;"
+- "andl %1, %0;"
+- "orl %2, %0;"
+- "pushl %0;"
+- "popfl"
+- : "=&r" (reg)
+- : "i" (~X86_EFLAGS_IOPL), "r" (mask));
+-}
-
-- if (__builtin_constant_p(__FDSET_LONGS)) {
-- switch (__FDSET_LONGS) {
-- case 16:
-- __tmp[ 0] = 0; __tmp[ 1] = 0;
-- __tmp[ 2] = 0; __tmp[ 3] = 0;
-- __tmp[ 4] = 0; __tmp[ 5] = 0;
-- __tmp[ 6] = 0; __tmp[ 7] = 0;
-- __tmp[ 8] = 0; __tmp[ 9] = 0;
-- __tmp[10] = 0; __tmp[11] = 0;
-- __tmp[12] = 0; __tmp[13] = 0;
-- __tmp[14] = 0; __tmp[15] = 0;
-- return;
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define paravirt_enabled() 0
+-#define __cpuid native_cpuid
-
-- case 8:
-- __tmp[ 0] = 0; __tmp[ 1] = 0;
-- __tmp[ 2] = 0; __tmp[ 3] = 0;
-- __tmp[ 4] = 0; __tmp[ 5] = 0;
-- __tmp[ 6] = 0; __tmp[ 7] = 0;
-- return;
+-static inline void load_esp0(struct tss_struct *tss, struct thread_struct *thread)
+-{
+- native_load_esp0(tss, thread);
+-}
-
-- case 4:
-- __tmp[ 0] = 0; __tmp[ 1] = 0;
-- __tmp[ 2] = 0; __tmp[ 3] = 0;
-- return;
-- }
-- }
-- __i = __FDSET_LONGS;
-- while (__i) {
-- __i--;
-- *__tmp = 0;
-- __tmp++;
-- }
+-/*
+- * These special macros can be used to get or set a debugging register
+- */
+-#define get_debugreg(var, register) \
+- (var) = native_get_debugreg(register)
+-#define set_debugreg(value, register) \
+- native_set_debugreg(register, value)
+-
+-#define set_iopl_mask native_set_iopl_mask
+-#endif /* CONFIG_PARAVIRT */
+-
+-/*
+- * Generic CPUID function
+- * clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx
+- * resulting in stale register contents being returned.
+- */
+-static inline void cpuid(unsigned int op,
+- unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
+-{
+- *eax = op;
+- *ecx = 0;
+- __cpuid(eax, ebx, ecx, edx);
+-}
+-
+-/* Some CPUID calls want 'count' to be placed in ecx */
+-static inline void cpuid_count(unsigned int op, int count,
+- unsigned int *eax, unsigned int *ebx,
+- unsigned int *ecx, unsigned int *edx)
+-{
+- *eax = op;
+- *ecx = count;
+- __cpuid(eax, ebx, ecx, edx);
+-}
+-
+-/*
+- * CPUID functions returning a single datum
+- */
+-static inline unsigned int cpuid_eax(unsigned int op)
+-{
+- unsigned int eax, ebx, ecx, edx;
+-
+- cpuid(op, &eax, &ebx, &ecx, &edx);
+- return eax;
+-}
+-static inline unsigned int cpuid_ebx(unsigned int op)
+-{
+- unsigned int eax, ebx, ecx, edx;
+-
+- cpuid(op, &eax, &ebx, &ecx, &edx);
+- return ebx;
+-}
+-static inline unsigned int cpuid_ecx(unsigned int op)
+-{
+- unsigned int eax, ebx, ecx, edx;
+-
+- cpuid(op, &eax, &ebx, &ecx, &edx);
+- return ecx;
+-}
+-static inline unsigned int cpuid_edx(unsigned int op)
+-{
+- unsigned int eax, ebx, ecx, edx;
+-
+- cpuid(op, &eax, &ebx, &ecx, &edx);
+- return edx;
+-}
+-
+-/* generic versions from gas */
+-#define GENERIC_NOP1 ".byte 0x90\n"
+-#define GENERIC_NOP2 ".byte 0x89,0xf6\n"
+-#define GENERIC_NOP3 ".byte 0x8d,0x76,0x00\n"
+-#define GENERIC_NOP4 ".byte 0x8d,0x74,0x26,0x00\n"
+-#define GENERIC_NOP5 GENERIC_NOP1 GENERIC_NOP4
+-#define GENERIC_NOP6 ".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
+-#define GENERIC_NOP7 ".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
+-#define GENERIC_NOP8 GENERIC_NOP1 GENERIC_NOP7
+-
+-/* Opteron nops */
+-#define K8_NOP1 GENERIC_NOP1
+-#define K8_NOP2 ".byte 0x66,0x90\n"
+-#define K8_NOP3 ".byte 0x66,0x66,0x90\n"
+-#define K8_NOP4 ".byte 0x66,0x66,0x66,0x90\n"
+-#define K8_NOP5 K8_NOP3 K8_NOP2
+-#define K8_NOP6 K8_NOP3 K8_NOP3
+-#define K8_NOP7 K8_NOP4 K8_NOP3
+-#define K8_NOP8 K8_NOP4 K8_NOP4
+-
+-/* K7 nops */
+-/* uses eax dependencies (arbitary choice) */
+-#define K7_NOP1 GENERIC_NOP1
+-#define K7_NOP2 ".byte 0x8b,0xc0\n"
+-#define K7_NOP3 ".byte 0x8d,0x04,0x20\n"
+-#define K7_NOP4 ".byte 0x8d,0x44,0x20,0x00\n"
+-#define K7_NOP5 K7_NOP4 ASM_NOP1
+-#define K7_NOP6 ".byte 0x8d,0x80,0,0,0,0\n"
+-#define K7_NOP7 ".byte 0x8D,0x04,0x05,0,0,0,0\n"
+-#define K7_NOP8 K7_NOP7 ASM_NOP1
+-
+-/* P6 nops */
+-/* uses eax dependencies (Intel-recommended choice) */
+-#define P6_NOP1 GENERIC_NOP1
+-#define P6_NOP2 ".byte 0x66,0x90\n"
+-#define P6_NOP3 ".byte 0x0f,0x1f,0x00\n"
+-#define P6_NOP4 ".byte 0x0f,0x1f,0x40,0\n"
+-#define P6_NOP5 ".byte 0x0f,0x1f,0x44,0x00,0\n"
+-#define P6_NOP6 ".byte 0x66,0x0f,0x1f,0x44,0x00,0\n"
+-#define P6_NOP7 ".byte 0x0f,0x1f,0x80,0,0,0,0\n"
+-#define P6_NOP8 ".byte 0x0f,0x1f,0x84,0x00,0,0,0,0\n"
+-
+-#ifdef CONFIG_MK8
+-#define ASM_NOP1 K8_NOP1
+-#define ASM_NOP2 K8_NOP2
+-#define ASM_NOP3 K8_NOP3
+-#define ASM_NOP4 K8_NOP4
+-#define ASM_NOP5 K8_NOP5
+-#define ASM_NOP6 K8_NOP6
+-#define ASM_NOP7 K8_NOP7
+-#define ASM_NOP8 K8_NOP8
+-#elif defined(CONFIG_MK7)
+-#define ASM_NOP1 K7_NOP1
+-#define ASM_NOP2 K7_NOP2
+-#define ASM_NOP3 K7_NOP3
+-#define ASM_NOP4 K7_NOP4
+-#define ASM_NOP5 K7_NOP5
+-#define ASM_NOP6 K7_NOP6
+-#define ASM_NOP7 K7_NOP7
+-#define ASM_NOP8 K7_NOP8
+-#elif defined(CONFIG_M686) || defined(CONFIG_MPENTIUMII) || \
+- defined(CONFIG_MPENTIUMIII) || defined(CONFIG_MPENTIUMM) || \
+- defined(CONFIG_MCORE2) || defined(CONFIG_PENTIUM4)
+-#define ASM_NOP1 P6_NOP1
+-#define ASM_NOP2 P6_NOP2
+-#define ASM_NOP3 P6_NOP3
+-#define ASM_NOP4 P6_NOP4
+-#define ASM_NOP5 P6_NOP5
+-#define ASM_NOP6 P6_NOP6
+-#define ASM_NOP7 P6_NOP7
+-#define ASM_NOP8 P6_NOP8
+-#else
+-#define ASM_NOP1 GENERIC_NOP1
+-#define ASM_NOP2 GENERIC_NOP2
+-#define ASM_NOP3 GENERIC_NOP3
+-#define ASM_NOP4 GENERIC_NOP4
+-#define ASM_NOP5 GENERIC_NOP5
+-#define ASM_NOP6 GENERIC_NOP6
+-#define ASM_NOP7 GENERIC_NOP7
+-#define ASM_NOP8 GENERIC_NOP8
+-#endif
+-
+-#define ASM_NOP_MAX 8
+-
+-/* Prefetch instructions for Pentium III and AMD Athlon */
+-/* It's not worth to care about 3dnow! prefetches for the K6
+- because they are microcoded there and very slow.
+- However we don't do prefetches for pre XP Athlons currently
+- That should be fixed. */
+-#define ARCH_HAS_PREFETCH
+-static inline void prefetch(const void *x)
+-{
+- alternative_input(ASM_NOP4,
+- "prefetchnta (%1)",
+- X86_FEATURE_XMM,
+- "r" (x));
-}
-
--#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
+-#define ARCH_HAS_PREFETCH
+-#define ARCH_HAS_PREFETCHW
+-#define ARCH_HAS_SPINLOCK_PREFETCH
-
--#endif /* __ASM_SH64_POSIX_TYPES_H */
-diff --git a/include/asm-sh64/processor.h b/include/asm-sh64/processor.h
+-/* 3dnow! prefetch to get an exclusive cache line. Useful for
+- spinlocks to avoid one state transition in the cache coherency protocol. */
+-static inline void prefetchw(const void *x)
+-{
+- alternative_input(ASM_NOP4,
+- "prefetchw (%1)",
+- X86_FEATURE_3DNOW,
+- "r" (x));
+-}
+-#define spin_lock_prefetch(x) prefetchw(x)
+-
+-extern void select_idle_routine(const struct cpuinfo_x86 *c);
+-
+-#define cache_line_size() (boot_cpu_data.x86_cache_alignment)
+-
+-extern unsigned long boot_option_idle_override;
+-extern void enable_sep_cpu(void);
+-extern int sysenter_setup(void);
+-
+-/* Defined in head.S */
+-extern struct Xgt_desc_struct early_gdt_descr;
+-
+-extern void cpu_set_gdt(int);
+-extern void switch_to_new_gdt(void);
+-extern void cpu_init(void);
+-extern void init_gdt(int cpu);
+-
+-extern int force_mwait;
+-
+-#endif /* __ASM_I386_PROCESSOR_H */
+diff --git a/include/asm-x86/processor_64.h b/include/asm-x86/processor_64.h
deleted file mode 100644
-index eb2bee4..0000000
---- a/include/asm-sh64/processor.h
+index e4f1997..0000000
+--- a/include/asm-x86/processor_64.h
+++ /dev/null
-@@ -1,287 +0,0 @@
--#ifndef __ASM_SH64_PROCESSOR_H
--#define __ASM_SH64_PROCESSOR_H
--
+@@ -1,452 +0,0 @@
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/processor.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- * Copyright (C) 2004 Richard Curnow
+- * include/asm-x86_64/processor.h
- *
+- * Copyright (C) 1994 Linus Torvalds
- */
-
--#include <asm/page.h>
--
--#ifndef __ASSEMBLY__
+-#ifndef __ASM_X86_64_PROCESSOR_H
+-#define __ASM_X86_64_PROCESSOR_H
-
+-#include <asm/segment.h>
+-#include <asm/page.h>
-#include <asm/types.h>
--#include <asm/cache.h>
--#include <asm/registers.h>
+-#include <asm/sigcontext.h>
+-#include <asm/cpufeature.h>
-#include <linux/threads.h>
--#include <linux/compiler.h>
+-#include <asm/msr.h>
+-#include <asm/current.h>
+-#include <asm/system.h>
+-#include <asm/mmsegment.h>
+-#include <asm/percpu.h>
+-#include <linux/personality.h>
+-#include <linux/cpumask.h>
+-#include <asm/processor-flags.h>
+-
+-#define TF_MASK 0x00000100
+-#define IF_MASK 0x00000200
+-#define IOPL_MASK 0x00003000
+-#define NT_MASK 0x00004000
+-#define VM_MASK 0x00020000
+-#define AC_MASK 0x00040000
+-#define VIF_MASK 0x00080000 /* virtual interrupt flag */
+-#define VIP_MASK 0x00100000 /* virtual interrupt pending */
+-#define ID_MASK 0x00200000
+-
+-#define desc_empty(desc) \
+- (!((desc)->a | (desc)->b))
+-
+-#define desc_equal(desc1, desc2) \
+- (((desc1)->a == (desc2)->a) && ((desc1)->b == (desc2)->b))
-
-/*
- * Default implementation of macro that returns current
- * instruction pointer ("program counter").
- */
--#define current_text_addr() ({ \
--void *pc; \
--unsigned long long __dummy = 0; \
--__asm__("gettr tr0, %1\n\t" \
-- "pta 4, tr0\n\t" \
-- "gettr tr0, %0\n\t" \
-- "ptabs %1, tr0\n\t" \
-- :"=r" (pc), "=r" (__dummy) \
-- : "1" (__dummy)); \
--pc; })
+-#define current_text_addr() ({ void *pc; asm volatile("leaq 1f(%%rip),%0\n1:":"=r"(pc)); pc; })
-
-/*
- * CPU type and hardware bug flags. Kept separately for each CPU.
- */
--enum cpu_type {
-- CPU_SH5_101,
-- CPU_SH5_103,
-- CPU_SH_NONE
--};
--
--/*
-- * TLB information structure
-- *
-- * Defined for both I and D tlb, per-processor.
-- */
--struct tlb_info {
-- unsigned long long next;
-- unsigned long long first;
-- unsigned long long last;
-
-- unsigned int entries;
-- unsigned int step;
--
-- unsigned long flags;
--};
--
--struct sh_cpuinfo {
-- enum cpu_type type;
+-struct cpuinfo_x86 {
+- __u8 x86; /* CPU family */
+- __u8 x86_vendor; /* CPU vendor */
+- __u8 x86_model;
+- __u8 x86_mask;
+- int cpuid_level; /* Maximum supported CPUID level, -1=no CPUID */
+- __u32 x86_capability[NCAPINTS];
+- char x86_vendor_id[16];
+- char x86_model_id[64];
+- int x86_cache_size; /* in KB */
+- int x86_clflush_size;
+- int x86_cache_alignment;
+- int x86_tlbsize; /* number of 4K pages in DTLB/ITLB combined(in pages)*/
+- __u8 x86_virt_bits, x86_phys_bits;
+- __u8 x86_max_cores; /* cpuid returned max cores value */
+- __u32 x86_power;
+- __u32 extended_cpuid_level; /* Max extended CPUID function supported */
- unsigned long loops_per_jiffy;
+-#ifdef CONFIG_SMP
+- cpumask_t llc_shared_map; /* cpus sharing the last level cache */
+-#endif
+- __u8 apicid;
+-#ifdef CONFIG_SMP
+- __u8 booted_cores; /* number of cores as seen by OS */
+- __u8 phys_proc_id; /* Physical Processor id. */
+- __u8 cpu_core_id; /* Core id. */
+- __u8 cpu_index; /* index into per_cpu list */
+-#endif
+-} ____cacheline_aligned;
+-
+-#define X86_VENDOR_INTEL 0
+-#define X86_VENDOR_CYRIX 1
+-#define X86_VENDOR_AMD 2
+-#define X86_VENDOR_UMC 3
+-#define X86_VENDOR_NEXGEN 4
+-#define X86_VENDOR_CENTAUR 5
+-#define X86_VENDOR_TRANSMETA 7
+-#define X86_VENDOR_NUM 8
+-#define X86_VENDOR_UNKNOWN 0xff
-
-- char hard_math;
+-#ifdef CONFIG_SMP
+-DECLARE_PER_CPU(struct cpuinfo_x86, cpu_info);
+-#define cpu_data(cpu) per_cpu(cpu_info, cpu)
+-#define current_cpu_data cpu_data(smp_processor_id())
+-#else
+-#define cpu_data(cpu) boot_cpu_data
+-#define current_cpu_data boot_cpu_data
+-#endif
-
-- unsigned long *pgd_quick;
-- unsigned long *pmd_quick;
-- unsigned long *pte_quick;
-- unsigned long pgtable_cache_sz;
-- unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+-extern char ignore_irq13;
-
-- /* Cache info */
-- struct cache_info icache;
-- struct cache_info dcache;
+-extern void identify_cpu(struct cpuinfo_x86 *);
+-extern void print_cpu_info(struct cpuinfo_x86 *);
+-extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+-extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c);
+-extern unsigned short num_cache_leaves;
-
-- /* TLB info */
-- struct tlb_info itlb;
-- struct tlb_info dtlb;
--};
+-/*
+- * Save the cr4 feature set we're using (ie
+- * Pentium 4MB enable and PPro Global page
+- * enable), so that any CPU's that boot up
+- * after us can get the correct flags.
+- */
+-extern unsigned long mmu_cr4_features;
-
--extern struct sh_cpuinfo boot_cpu_data;
+-static inline void set_in_cr4 (unsigned long mask)
+-{
+- mmu_cr4_features |= mask;
+- __asm__("movq %%cr4,%%rax\n\t"
+- "orq %0,%%rax\n\t"
+- "movq %%rax,%%cr4\n"
+- : : "irg" (mask)
+- :"ax");
+-}
-
--#define cpu_data (&boot_cpu_data)
--#define current_cpu_data boot_cpu_data
+-static inline void clear_in_cr4 (unsigned long mask)
+-{
+- mmu_cr4_features &= ~mask;
+- __asm__("movq %%cr4,%%rax\n\t"
+- "andq %0,%%rax\n\t"
+- "movq %%rax,%%cr4\n"
+- : : "irg" (~mask)
+- :"ax");
+-}
-
--#endif
-
-/*
-- * User space process size: 2GB - 4k.
+- * User space process size. 47bits minus one guard page.
- */
--#define TASK_SIZE 0x7ffff000UL
+-#define TASK_SIZE64 (0x800000000000UL - 4096)
-
-/* This decides where the kernel will search for a free chunk of vm
- * space during mmap's.
- */
--#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+-#define IA32_PAGE_OFFSET ((current->personality & ADDR_LIMIT_3GB) ? 0xc0000000 : 0xFFFFe000)
+-
+-#define TASK_SIZE (test_thread_flag(TIF_IA32) ? IA32_PAGE_OFFSET : TASK_SIZE64)
+-#define TASK_SIZE_OF(child) ((test_tsk_thread_flag(child, TIF_IA32)) ? IA32_PAGE_OFFSET : TASK_SIZE64)
+-
+-#define TASK_UNMAPPED_BASE PAGE_ALIGN(TASK_SIZE/3)
-
-/*
-- * Bit of SR register
-- *
-- * FD-bit:
-- * When it's set, it means the processor doesn't have right to use FPU,
-- * and it results exception when the floating operation is executed.
-- *
-- * IMASK-bit:
-- * Interrupt level mask
-- *
-- * STEP-bit:
-- * Single step bit
-- *
+- * Size of io_bitmap.
- */
--#define SR_FD 0x00008000
+-#define IO_BITMAP_BITS 65536
+-#define IO_BITMAP_BYTES (IO_BITMAP_BITS/8)
+-#define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long))
+-#define IO_BITMAP_OFFSET offsetof(struct tss_struct,io_bitmap)
+-#define INVALID_IO_BITMAP_OFFSET 0x8000
-
--#if defined(CONFIG_SH64_SR_WATCH)
--#define SR_MMU 0x84000000
--#else
--#define SR_MMU 0x80000000
--#endif
+-struct i387_fxsave_struct {
+- u16 cwd;
+- u16 swd;
+- u16 twd;
+- u16 fop;
+- u64 rip;
+- u64 rdp;
+- u32 mxcsr;
+- u32 mxcsr_mask;
+- u32 st_space[32]; /* 8*16 bytes for each FP-reg = 128 bytes */
+- u32 xmm_space[64]; /* 16*16 bytes for each XMM-reg = 256 bytes */
+- u32 padding[24];
+-} __attribute__ ((aligned (16)));
-
--#define SR_IMASK 0x000000f0
--#define SR_SSTEP 0x08000000
+-union i387_union {
+- struct i387_fxsave_struct fxsave;
+-};
-
--#ifndef __ASSEMBLY__
+-struct tss_struct {
+- u32 reserved1;
+- u64 rsp0;
+- u64 rsp1;
+- u64 rsp2;
+- u64 reserved2;
+- u64 ist[7];
+- u32 reserved3;
+- u32 reserved4;
+- u16 reserved5;
+- u16 io_bitmap_base;
+- /*
+- * The extra 1 is there because the CPU will access an
+- * additional byte beyond the end of the IO permission
+- * bitmap. The extra byte must be all 1 bits, and must
+- * be within the limit. Thus we have:
+- *
+- * 128 bytes, the bitmap itself, for ports 0..0x3ff
+- * 8 bytes, for an extra "long" of ~0UL
+- */
+- unsigned long io_bitmap[IO_BITMAP_LONGS + 1];
+-} __attribute__((packed)) ____cacheline_aligned;
-
--/*
-- * FPU structure and data : require 8-byte alignment as we need to access it
-- with fld.p, fst.p
-- */
-
--struct sh_fpu_hard_struct {
-- unsigned long fp_regs[64];
-- unsigned int fpscr;
-- /* long status; * software status information */
+-extern struct cpuinfo_x86 boot_cpu_data;
+-DECLARE_PER_CPU(struct tss_struct,init_tss);
+-/* Save the original ist values for checking stack pointers during debugging */
+-struct orig_ist {
+- unsigned long ist[7];
-};
+-DECLARE_PER_CPU(struct orig_ist, orig_ist);
-
--#if 0
--/* Dummy fpu emulator */
--struct sh_fpu_soft_struct {
-- unsigned long long fp_regs[32];
-- unsigned int fpscr;
-- unsigned char lookahead;
-- unsigned long entry_pc;
--};
+-#ifdef CONFIG_X86_VSMP
+-#define ARCH_MIN_TASKALIGN (1 << INTERNODE_CACHE_SHIFT)
+-#define ARCH_MIN_MMSTRUCT_ALIGN (1 << INTERNODE_CACHE_SHIFT)
+-#else
+-#define ARCH_MIN_TASKALIGN 16
+-#define ARCH_MIN_MMSTRUCT_ALIGN 0
-#endif
-
--union sh_fpu_union {
-- struct sh_fpu_hard_struct hard;
-- /* 'hard' itself only produces 32 bit alignment, yet we need
-- to access it using 64 bit load/store as well. */
-- unsigned long long alignment_dummy;
--};
--
-struct thread_struct {
-- unsigned long sp;
-- unsigned long pc;
-- /* This stores the address of the pt_regs built during a context
-- switch, or of the register save area built for a kernel mode
-- exception. It is used for backtracing the stack of a sleeping task
-- or one that traps in kernel mode. */
-- struct pt_regs *kregs;
-- /* This stores the address of the pt_regs constructed on entry from
-- user mode. It is a fixed value over the lifetime of a process, or
-- NULL for a kernel thread. */
-- struct pt_regs *uregs;
+- unsigned long rsp0;
+- unsigned long rsp;
+- unsigned long userrsp; /* Copy from PDA */
+- unsigned long fs;
+- unsigned long gs;
+- unsigned short es, ds, fsindex, gsindex;
+-/* Hardware debugging registers */
+- unsigned long debugreg0;
+- unsigned long debugreg1;
+- unsigned long debugreg2;
+- unsigned long debugreg3;
+- unsigned long debugreg6;
+- unsigned long debugreg7;
+-/* fault info */
+- unsigned long cr2, trap_no, error_code;
+-/* floating point info */
+- union i387_union i387 __attribute__((aligned(16)));
+-/* IO permissions. the bitmap could be moved into the GDT, that would make
+- switch faster for a limited number of ioperm using tasks. -AK */
+- int ioperm;
+- unsigned long *io_bitmap_ptr;
+- unsigned io_bitmap_max;
+-/* cached TLS descriptors. */
+- u64 tls_array[GDT_ENTRY_TLS_ENTRIES];
+-} __attribute__((aligned(16)));
-
-- unsigned long trap_no, error_code;
-- unsigned long address;
-- /* Hardware debugging registers may come here */
+-#define INIT_THREAD { \
+- .rsp0 = (unsigned long)&init_stack + sizeof(init_stack) \
+-}
-
-- /* floating point info */
-- union sh_fpu_union fpu;
--};
+-#define INIT_TSS { \
+- .rsp0 = (unsigned long)&init_stack + sizeof(init_stack) \
+-}
-
-#define INIT_MMAP \
-{ &init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL }
-
--extern struct pt_regs fake_swapper_regs;
--
--#define INIT_THREAD { \
-- .sp = sizeof(init_stack) + \
-- (long) &init_stack, \
-- .pc = 0, \
-- .kregs = &fake_swapper_regs, \
-- .uregs = NULL, \
-- .trap_no = 0, \
-- .error_code = 0, \
-- .address = 0, \
-- .fpu = { { { 0, } }, } \
--}
--
--/*
-- * Do necessary setup to start up a newly executed thread.
-- */
--#define SR_USER (SR_MMU | SR_FD)
--
--#define start_thread(regs, new_pc, new_sp) \
-- set_fs(USER_DS); \
-- regs->sr = SR_USER; /* User mode. */ \
-- regs->pc = new_pc - 4; /* Compensate syscall exit */ \
-- regs->pc |= 1; /* Set SHmedia ! */ \
-- regs->regs[18] = 0; \
-- regs->regs[15] = new_sp
+-#define start_thread(regs,new_rip,new_rsp) do { \
+- asm volatile("movl %0,%%fs; movl %0,%%es; movl %0,%%ds": :"r" (0)); \
+- load_gs_index(0); \
+- (regs)->rip = (new_rip); \
+- (regs)->rsp = (new_rsp); \
+- write_pda(oldrsp, (new_rsp)); \
+- (regs)->cs = __USER_CS; \
+- (regs)->ss = __USER_DS; \
+- (regs)->eflags = 0x200; \
+- set_fs(USER_DS); \
+-} while(0)
+-
+-#define get_debugreg(var, register) \
+- __asm__("movq %%db" #register ", %0" \
+- :"=r" (var))
+-#define set_debugreg(value, register) \
+- __asm__("movq %0,%%db" #register \
+- : /* no output */ \
+- :"r" (value))
-
--/* Forward declaration, a strange C thing */
-struct task_struct;
-struct mm_struct;
-
-/* Free all resources held by a thread. */
-extern void release_thread(struct task_struct *);
--/*
-- * create a kernel thread without removing it from tasklists
-- */
--extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-
+-/* Prepare to copy thread state - unlazy all lazy status */
+-extern void prepare_to_copy(struct task_struct *tsk);
-
--/* Copy and release all segment info associated with a VM */
--#define copy_segments(p, mm) do { } while (0)
--#define release_segments(mm) do { } while (0)
--#define forget_segments() do { } while (0)
--#define prepare_to_copy(tsk) do { } while (0)
-/*
-- * FPU lazy state save handling.
+- * create a kernel thread without removing it from tasklists
- */
--
--static inline void release_fpu(void)
--{
-- unsigned long long __dummy;
--
-- /* Set FD flag in SR */
-- __asm__ __volatile__("getcon " __SR ", %0\n\t"
-- "or %0, %1, %0\n\t"
-- "putcon %0, " __SR "\n\t"
-- : "=&r" (__dummy)
-- : "r" (SR_FD));
--}
--
--static inline void grab_fpu(void)
--{
-- unsigned long long __dummy;
--
-- /* Clear out FD flag in SR */
-- __asm__ __volatile__("getcon " __SR ", %0\n\t"
-- "and %0, %1, %0\n\t"
-- "putcon %0, " __SR "\n\t"
-- : "=&r" (__dummy)
-- : "r" (~SR_FD));
--}
--
--/* Round to nearest, no exceptions on inexact, overflow, underflow,
-- zero-divide, invalid. Configure option for whether to flush denorms to
-- zero, or except if a denorm is encountered. */
--#if defined(CONFIG_SH64_FPU_DENORM_FLUSH)
--#define FPSCR_INIT 0x00040000
--#else
--#define FPSCR_INIT 0x00000000
--#endif
--
--/* Save the current FP regs */
--void fpsave(struct sh_fpu_hard_struct *fpregs);
--
--/* Initialise the FP state of a task */
--void fpinit(struct sh_fpu_hard_struct *fpregs);
--
--extern struct task_struct *last_task_used_math;
+-extern long kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-
-/*
- * Return saved PC of a blocked thread.
+- * What is this good for? it will be always the scheduler or ret_from_fork.
- */
--#define thread_saved_pc(tsk) (tsk->thread.pc)
+-#define thread_saved_pc(t) (*(unsigned long *)((t)->thread.rsp - 8))
-
-extern unsigned long get_wchan(struct task_struct *p);
+-#define task_pt_regs(tsk) ((struct pt_regs *)(tsk)->thread.rsp0 - 1)
+-#define KSTK_EIP(tsk) (task_pt_regs(tsk)->rip)
+-#define KSTK_ESP(tsk) -1 /* sorry. doesn't work for syscall. */
-
--#define KSTK_EIP(tsk) ((tsk)->thread.pc)
--#define KSTK_ESP(tsk) ((tsk)->thread.sp)
--
--#define cpu_relax() barrier()
--
--#endif /* __ASSEMBLY__ */
--#endif /* __ASM_SH64_PROCESSOR_H */
--
-diff --git a/include/asm-sh64/ptrace.h b/include/asm-sh64/ptrace.h
-deleted file mode 100644
-index c424f80..0000000
---- a/include/asm-sh64/ptrace.h
-+++ /dev/null
-@@ -1,35 +0,0 @@
--#ifndef __ASM_SH64_PTRACE_H
--#define __ASM_SH64_PTRACE_H
--
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/ptrace.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
-
--/*
-- * This struct defines the way the registers are stored on the
-- * kernel stack during a system call or other kernel entry.
-- */
--struct pt_regs {
-- unsigned long long pc;
-- unsigned long long sr;
-- unsigned long long syscall_nr;
-- unsigned long long regs[63];
-- unsigned long long tregs[8];
-- unsigned long long pad[2];
+-struct microcode_header {
+- unsigned int hdrver;
+- unsigned int rev;
+- unsigned int date;
+- unsigned int sig;
+- unsigned int cksum;
+- unsigned int ldrver;
+- unsigned int pf;
+- unsigned int datasize;
+- unsigned int totalsize;
+- unsigned int reserved[3];
-};
-
--#ifdef __KERNEL__
--#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
--#define instruction_pointer(regs) ((regs)->pc)
--#define profile_pc(regs) ((unsigned long)instruction_pointer(regs))
--extern void show_regs(struct pt_regs *);
--#endif
--
--#endif /* __ASM_SH64_PTRACE_H */
-diff --git a/include/asm-sh64/registers.h b/include/asm-sh64/registers.h
-deleted file mode 100644
-index 7eec666..0000000
---- a/include/asm-sh64/registers.h
-+++ /dev/null
-@@ -1,106 +0,0 @@
--#ifndef __ASM_SH64_REGISTERS_H
--#define __ASM_SH64_REGISTERS_H
--
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/registers.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2004 Richard Curnow
-- */
+-struct microcode {
+- struct microcode_header hdr;
+- unsigned int bits[0];
+-};
-
--#ifdef __ASSEMBLY__
--/* =====================================================================
--**
--** Section 1: acts on assembly sources pre-processed by GPP ( <source.S>).
--** Assigns symbolic names to control & target registers.
--*/
+-typedef struct microcode microcode_t;
+-typedef struct microcode_header microcode_header_t;
-
--/*
-- * Define some useful aliases for control registers.
-- */
--#define SR cr0
--#define SSR cr1
--#define PSSR cr2
-- /* cr3 UNDEFINED */
--#define INTEVT cr4
--#define EXPEVT cr5
--#define PEXPEVT cr6
--#define TRA cr7
--#define SPC cr8
--#define PSPC cr9
--#define RESVEC cr10
--#define VBR cr11
-- /* cr12 UNDEFINED */
--#define TEA cr13
-- /* cr14-cr15 UNDEFINED */
--#define DCR cr16
--#define KCR0 cr17
--#define KCR1 cr18
-- /* cr19-cr31 UNDEFINED */
-- /* cr32-cr61 RESERVED */
--#define CTC cr62
--#define USR cr63
+-/* microcode format is extended from prescott processors */
+-struct extended_signature {
+- unsigned int sig;
+- unsigned int pf;
+- unsigned int cksum;
+-};
-
--/*
-- * ABI dependent registers (general purpose set)
-- */
--#define RET r2
--#define ARG1 r2
--#define ARG2 r3
--#define ARG3 r4
--#define ARG4 r5
--#define ARG5 r6
--#define ARG6 r7
--#define SP r15
--#define LINK r18
--#define ZERO r63
+-struct extended_sigtable {
+- unsigned int count;
+- unsigned int cksum;
+- unsigned int reserved[3];
+- struct extended_signature sigs[0];
+-};
-
--/*
-- * Status register defines: used only by assembly sources (and
-- * syntax independednt)
-- */
--#define SR_RESET_VAL 0x0000000050008000
--#define SR_HARMLESS 0x00000000500080f0 /* Write ignores for most */
--#define SR_ENABLE_FPU 0xffffffffffff7fff /* AND with this */
-
--#if defined (CONFIG_SH64_SR_WATCH)
--#define SR_ENABLE_MMU 0x0000000084000000 /* OR with this */
+-#if defined(CONFIG_MPSC) || defined(CONFIG_MCORE2)
+-#define ASM_NOP1 P6_NOP1
+-#define ASM_NOP2 P6_NOP2
+-#define ASM_NOP3 P6_NOP3
+-#define ASM_NOP4 P6_NOP4
+-#define ASM_NOP5 P6_NOP5
+-#define ASM_NOP6 P6_NOP6
+-#define ASM_NOP7 P6_NOP7
+-#define ASM_NOP8 P6_NOP8
-#else
--#define SR_ENABLE_MMU 0x0000000080000000 /* OR with this */
--#endif
+-#define ASM_NOP1 K8_NOP1
+-#define ASM_NOP2 K8_NOP2
+-#define ASM_NOP3 K8_NOP3
+-#define ASM_NOP4 K8_NOP4
+-#define ASM_NOP5 K8_NOP5
+-#define ASM_NOP6 K8_NOP6
+-#define ASM_NOP7 K8_NOP7
+-#define ASM_NOP8 K8_NOP8
+-#endif
+-
+-/* Opteron nops */
+-#define K8_NOP1 ".byte 0x90\n"
+-#define K8_NOP2 ".byte 0x66,0x90\n"
+-#define K8_NOP3 ".byte 0x66,0x66,0x90\n"
+-#define K8_NOP4 ".byte 0x66,0x66,0x66,0x90\n"
+-#define K8_NOP5 K8_NOP3 K8_NOP2
+-#define K8_NOP6 K8_NOP3 K8_NOP3
+-#define K8_NOP7 K8_NOP4 K8_NOP3
+-#define K8_NOP8 K8_NOP4 K8_NOP4
+-
+-/* P6 nops */
+-/* uses eax dependencies (Intel-recommended choice) */
+-#define P6_NOP1 ".byte 0x90\n"
+-#define P6_NOP2 ".byte 0x66,0x90\n"
+-#define P6_NOP3 ".byte 0x0f,0x1f,0x00\n"
+-#define P6_NOP4 ".byte 0x0f,0x1f,0x40,0\n"
+-#define P6_NOP5 ".byte 0x0f,0x1f,0x44,0x00,0\n"
+-#define P6_NOP6 ".byte 0x66,0x0f,0x1f,0x44,0x00,0\n"
+-#define P6_NOP7 ".byte 0x0f,0x1f,0x80,0,0,0,0\n"
+-#define P6_NOP8 ".byte 0x0f,0x1f,0x84,0x00,0,0,0,0\n"
+-
+-#define ASM_NOP_MAX 8
-
--#define SR_UNBLOCK_EXC 0xffffffffefffffff /* AND with this */
--#define SR_BLOCK_EXC 0x0000000010000000 /* OR with this */
+-/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
+-static inline void rep_nop(void)
+-{
+- __asm__ __volatile__("rep;nop": : :"memory");
+-}
-
--#else /* Not __ASSEMBLY__ syntax */
+-/* Stop speculative execution */
+-static inline void sync_core(void)
+-{
+- int tmp;
+- asm volatile("cpuid" : "=a" (tmp) : "0" (1) : "ebx","ecx","edx","memory");
+-}
-
--/*
--** Stringify reg. name
--*/
--#define __str(x) #x
+-#define ARCH_HAS_PREFETCHW 1
+-static inline void prefetchw(void *x)
+-{
+- alternative_input("prefetcht0 (%1)",
+- "prefetchw (%1)",
+- X86_FEATURE_3DNOW,
+- "r" (x));
+-}
-
--/* Stringify control register names for use in inline assembly */
--#define __SR __str(SR)
--#define __SSR __str(SSR)
--#define __PSSR __str(PSSR)
--#define __INTEVT __str(INTEVT)
--#define __EXPEVT __str(EXPEVT)
--#define __PEXPEVT __str(PEXPEVT)
--#define __TRA __str(TRA)
--#define __SPC __str(SPC)
--#define __PSPC __str(PSPC)
--#define __RESVEC __str(RESVEC)
--#define __VBR __str(VBR)
--#define __TEA __str(TEA)
--#define __DCR __str(DCR)
--#define __KCR0 __str(KCR0)
--#define __KCR1 __str(KCR1)
--#define __CTC __str(CTC)
--#define __USR __str(USR)
+-#define ARCH_HAS_SPINLOCK_PREFETCH 1
-
--#endif /* __ASSEMBLY__ */
--#endif /* __ASM_SH64_REGISTERS_H */
-diff --git a/include/asm-sh64/resource.h b/include/asm-sh64/resource.h
-deleted file mode 100644
-index 8ff9394..0000000
---- a/include/asm-sh64/resource.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_RESOURCE_H
--#define __ASM_SH64_RESOURCE_H
+-#define spin_lock_prefetch(x) prefetchw(x)
-
--#include <asm-sh/resource.h>
+-#define cpu_relax() rep_nop()
-
--#endif /* __ASM_SH64_RESOURCE_H */
-diff --git a/include/asm-sh64/scatterlist.h b/include/asm-sh64/scatterlist.h
-deleted file mode 100644
-index 7f729bb..0000000
---- a/include/asm-sh64/scatterlist.h
-+++ /dev/null
-@@ -1,37 +0,0 @@
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/scatterlist.h
-- *
-- * Copyright (C) 2003 Paul Mundt
-- *
-- */
--#ifndef __ASM_SH64_SCATTERLIST_H
--#define __ASM_SH64_SCATTERLIST_H
+-static inline void __monitor(const void *eax, unsigned long ecx,
+- unsigned long edx)
+-{
+- /* "monitor %eax,%ecx,%edx;" */
+- asm volatile(
+- ".byte 0x0f,0x01,0xc8;"
+- : :"a" (eax), "c" (ecx), "d"(edx));
+-}
-
--#include <asm/types.h>
+-static inline void __mwait(unsigned long eax, unsigned long ecx)
+-{
+- /* "mwait %eax,%ecx;" */
+- asm volatile(
+- ".byte 0x0f,0x01,0xc9;"
+- : :"a" (eax), "c" (ecx));
+-}
-
--struct scatterlist {
--#ifdef CONFIG_DEBUG_SG
-- unsigned long sg_magic;
--#endif
-- unsigned long page_link;
-- unsigned int offset;/* for highmem, page offset */
-- dma_addr_t dma_address;
-- unsigned int length;
--};
+-static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+-{
+- /* "mwait %eax,%ecx;" */
+- asm volatile(
+- "sti; .byte 0x0f,0x01,0xc9;"
+- : :"a" (eax), "c" (ecx));
+-}
-
--/* These macros should be used after a pci_map_sg call has been done
-- * to get bus addresses of each of the SG entries and their lengths.
-- * You should only work with the number of sg entries pci_map_sg
-- * returns, or alternatively stop on the first sg_dma_len(sg) which
-- * is 0.
-- */
--#define sg_dma_address(sg) ((sg)->dma_address)
--#define sg_dma_len(sg) ((sg)->length)
+-extern void mwait_idle_with_hints(unsigned long eax, unsigned long ecx);
-
--#define ISA_DMA_THRESHOLD (0xffffffff)
+-#define stack_current() \
+-({ \
+- struct thread_info *ti; \
+- asm("andq %%rsp,%0; ":"=r" (ti) : "0" (CURRENT_MASK)); \
+- ti->task; \
+-})
-
--#endif /* !__ASM_SH64_SCATTERLIST_H */
-diff --git a/include/asm-sh64/sci.h b/include/asm-sh64/sci.h
-deleted file mode 100644
-index 793c568..0000000
---- a/include/asm-sh64/sci.h
-+++ /dev/null
-@@ -1 +0,0 @@
--#include <asm-sh/sci.h>
-diff --git a/include/asm-sh64/sections.h b/include/asm-sh64/sections.h
-deleted file mode 100644
-index 897f36b..0000000
---- a/include/asm-sh64/sections.h
-+++ /dev/null
-@@ -1,7 +0,0 @@
--#ifndef __ASM_SH64_SECTIONS_H
--#define __ASM_SH64_SECTIONS_H
+-#define cache_line_size() (boot_cpu_data.x86_cache_alignment)
-
--#include <asm-sh/sections.h>
+-extern unsigned long boot_option_idle_override;
+-/* Boot loader type from the setup header */
+-extern int bootloader_type;
-
--#endif /* __ASM_SH64_SECTIONS_H */
+-#define HAVE_ARCH_PICK_MMAP_LAYOUT 1
-
-diff --git a/include/asm-sh64/segment.h b/include/asm-sh64/segment.h
-deleted file mode 100644
-index 92ac001..0000000
---- a/include/asm-sh64/segment.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef _ASM_SEGMENT_H
--#define _ASM_SEGMENT_H
+-#endif /* __ASM_X86_64_PROCESSOR_H */
+diff --git a/include/asm-x86/proto.h b/include/asm-x86/proto.h
+index dabba55..68563c0 100644
+--- a/include/asm-x86/proto.h
++++ b/include/asm-x86/proto.h
+@@ -5,87 +5,24 @@
+
+ /* misc architecture specific prototypes */
+
+-struct cpuinfo_x86;
+-struct pt_regs;
-
--/* Only here because we have some old header files that expect it.. */
+-extern void start_kernel(void);
+-extern void pda_init(int);
-
--#endif /* _ASM_SEGMENT_H */
-diff --git a/include/asm-sh64/semaphore-helper.h b/include/asm-sh64/semaphore-helper.h
-deleted file mode 100644
-index fcfafe2..0000000
---- a/include/asm-sh64/semaphore-helper.h
-+++ /dev/null
-@@ -1,101 +0,0 @@
--#ifndef __ASM_SH64_SEMAPHORE_HELPER_H
--#define __ASM_SH64_SEMAPHORE_HELPER_H
+ extern void early_idt_handler(void);
+
+-extern void mcheck_init(struct cpuinfo_x86 *c);
+ extern void init_memory_mapping(unsigned long start, unsigned long end);
+
+-extern void system_call(void);
+-extern int kernel_syscall(void);
++extern void system_call(void);
+ extern void syscall_init(void);
+
+ extern void ia32_syscall(void);
+-extern void ia32_cstar_target(void);
+-extern void ia32_sysenter_target(void);
+-
+-extern void config_acpi_tables(void);
+-extern void ia32_syscall(void);
+-
+-extern int pmtimer_mark_offset(void);
+-extern void pmtimer_resume(void);
+-extern void pmtimer_wait(unsigned);
+-extern unsigned int do_gettimeoffset_pm(void);
+-#ifdef CONFIG_X86_PM_TIMER
+-extern u32 pmtmr_ioport;
+-#else
+-#define pmtmr_ioport 0
+-#endif
+-extern int nohpet;
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/semaphore-helper.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
--#include <asm/errno.h>
+-extern void early_printk(const char *fmt, ...) __attribute__((format(printf,1,2)));
-
--/*
-- * SMP- and interrupt-safe semaphores helper functions.
-- *
-- * (C) Copyright 1996 Linus Torvalds
-- * (C) Copyright 1999 Andrea Arcangeli
-- */
+-extern void early_identify_cpu(struct cpuinfo_x86 *c);
-
--/*
-- * These two _must_ execute atomically wrt each other.
-- *
-- * This is trivially done with load_locked/store_cond,
-- * which we have. Let the rest of the losers suck eggs.
-- */
--static __inline__ void wake_one_more(struct semaphore * sem)
--{
-- atomic_inc((atomic_t *)&sem->sleepers);
--}
+-extern int k8_scan_nodes(unsigned long start, unsigned long end);
-
--static __inline__ int waking_non_zero(struct semaphore *sem)
--{
-- unsigned long flags;
-- int ret = 0;
+-extern void numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn);
+-extern unsigned long numa_free_all_bootmem(void);
++extern void ia32_cstar_target(void);
++extern void ia32_sysenter_target(void);
+
+ extern void reserve_bootmem_generic(unsigned long phys, unsigned len);
+
+-extern void load_gs_index(unsigned gs);
-
-- spin_lock_irqsave(&semaphore_wake_lock, flags);
-- if (sem->sleepers > 0) {
-- sem->sleepers--;
-- ret = 1;
-- }
-- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
-- return ret;
--}
+-extern unsigned long end_pfn_map;
-
--/*
-- * waking_non_zero_interruptible:
-- * 1 got the lock
-- * 0 go to sleep
-- * -EINTR interrupted
-- *
-- * We must undo the sem->count down_interruptible() increment while we are
-- * protected by the spinlock in order to make atomic this atomic_inc() with the
-- * atomic_read() in wake_one_more(), otherwise we can race. -arca
-- */
--static __inline__ int waking_non_zero_interruptible(struct semaphore *sem,
-- struct task_struct *tsk)
--{
-- unsigned long flags;
-- int ret = 0;
+-extern void show_trace(struct task_struct *, struct pt_regs *, unsigned long * rsp);
+-extern void show_registers(struct pt_regs *regs);
-
-- spin_lock_irqsave(&semaphore_wake_lock, flags);
-- if (sem->sleepers > 0) {
-- sem->sleepers--;
-- ret = 1;
-- } else if (signal_pending(tsk)) {
-- atomic_inc(&sem->count);
-- ret = -EINTR;
-- }
-- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
-- return ret;
--}
+-extern void exception_table_check(void);
-
--/*
-- * waking_non_zero_trylock:
-- * 1 failed to lock
-- * 0 got the lock
-- *
-- * We must undo the sem->count down_trylock() increment while we are
-- * protected by the spinlock in order to make atomic this atomic_inc() with the
-- * atomic_read() in wake_one_more(), otherwise we can race. -arca
-- */
--static __inline__ int waking_non_zero_trylock(struct semaphore *sem)
--{
-- unsigned long flags;
-- int ret = 1;
+-extern void acpi_reserve_bootmem(void);
-
-- spin_lock_irqsave(&semaphore_wake_lock, flags);
-- if (sem->sleepers <= 0)
-- atomic_inc(&sem->count);
-- else {
-- sem->sleepers--;
-- ret = 0;
-- }
-- spin_unlock_irqrestore(&semaphore_wake_lock, flags);
-- return ret;
--}
+-extern void swap_low_mappings(void);
-
--#endif /* __ASM_SH64_SEMAPHORE_HELPER_H */
-diff --git a/include/asm-sh64/semaphore.h b/include/asm-sh64/semaphore.h
-deleted file mode 100644
-index f027cc1..0000000
---- a/include/asm-sh64/semaphore.h
-+++ /dev/null
-@@ -1,119 +0,0 @@
--#ifndef __ASM_SH64_SEMAPHORE_H
--#define __ASM_SH64_SEMAPHORE_H
+-extern void __show_regs(struct pt_regs * regs);
+-extern void show_regs(struct pt_regs * regs);
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/semaphore.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- * SMP- and interrupt-safe semaphores.
-- *
-- * (C) Copyright 1996 Linus Torvalds
-- *
-- * SuperH verison by Niibe Yutaka
-- * (Currently no asm implementation but generic C code...)
-- *
-- */
+ extern void syscall32_cpu_init(void);
+
+-extern void setup_node_bootmem(int nodeid, unsigned long start, unsigned long end);
-
--#include <linux/linkage.h>
--#include <linux/spinlock.h>
--#include <linux/wait.h>
--#include <linux/rwsem.h>
+-extern void early_quirks(void);
+ extern void check_efer(void);
+
+-extern void select_idle_routine(const struct cpuinfo_x86 *c);
-
--#include <asm/system.h>
--#include <asm/atomic.h>
+-extern unsigned long table_start, table_end;
-
--struct semaphore {
-- atomic_t count;
-- int sleepers;
-- wait_queue_head_t wait;
--};
+-extern int exception_trace;
+-extern unsigned cpu_khz;
+-extern unsigned tsc_khz;
-
--#define __SEMAPHORE_INITIALIZER(name, n) \
--{ \
-- .count = ATOMIC_INIT(n), \
-- .sleepers = 0, \
-- .wait = __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
--}
+ extern int reboot_force;
+-extern int notsc_setup(char *);
-
--#define __DECLARE_SEMAPHORE_GENERIC(name,count) \
-- struct semaphore name = __SEMAPHORE_INITIALIZER(name,count)
+-extern int gsi_irq_sharing(int gsi);
-
--#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name,1)
+-extern int force_mwait;
+
+ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr);
+
+diff --git a/include/asm-x86/ptrace-abi.h b/include/asm-x86/ptrace-abi.h
+index 7524e12..81a8ee4 100644
+--- a/include/asm-x86/ptrace-abi.h
++++ b/include/asm-x86/ptrace-abi.h
+@@ -78,4 +78,66 @@
+ # define PTRACE_SYSEMU_SINGLESTEP 32
+ #endif
+
++#define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */
++
++#ifndef __ASSEMBLY__
++
++#include <asm/types.h>
++
++/* configuration/status structure used in PTRACE_BTS_CONFIG and
++ PTRACE_BTS_STATUS commands.
++*/
++struct ptrace_bts_config {
++ /* requested or actual size of BTS buffer in bytes */
++ u32 size;
++ /* bitmask of below flags */
++ u32 flags;
++ /* buffer overflow signal */
++ u32 signal;
++ /* actual size of bts_struct in bytes */
++ u32 bts_size;
++};
++#endif
++
++#define PTRACE_BTS_O_TRACE 0x1 /* branch trace */
++#define PTRACE_BTS_O_SCHED 0x2 /* scheduling events w/ jiffies */
++#define PTRACE_BTS_O_SIGNAL 0x4 /* send SIG<signal> on buffer overflow
++ instead of wrapping around */
++#define PTRACE_BTS_O_CUT_SIZE 0x8 /* cut requested size to max available
++ instead of failing */
++
++#define PTRACE_BTS_CONFIG 40
++/* Configure branch trace recording.
++ ADDR points to a struct ptrace_bts_config.
++ DATA gives the size of that buffer.
++ A new buffer is allocated, iff the size changes.
++ Returns the number of bytes read.
++*/
++#define PTRACE_BTS_STATUS 41
++/* Return the current configuration in a struct ptrace_bts_config
++ pointed to by ADDR; DATA gives the size of that buffer.
++ Returns the number of bytes written.
++*/
++#define PTRACE_BTS_SIZE 42
++/* Return the number of available BTS records.
++ DATA and ADDR are ignored.
++*/
++#define PTRACE_BTS_GET 43
++/* Get a single BTS record.
++ DATA defines the index into the BTS array, where 0 is the newest
++ entry, and higher indices refer to older entries.
++ ADDR is pointing to struct bts_struct (see asm/ds.h).
++*/
++#define PTRACE_BTS_CLEAR 44
++/* Clear the BTS buffer.
++ DATA and ADDR are ignored.
++*/
++#define PTRACE_BTS_DRAIN 45
++/* Read all available BTS records and clear the buffer.
++ ADDR points to an array of struct bts_struct.
++ DATA gives the size of that buffer.
++ BTS records are read from oldest to newest.
++ Returns number of BTS records drained.
++*/
++
+ #endif
+diff --git a/include/asm-x86/ptrace.h b/include/asm-x86/ptrace.h
+index 51ddb25..d9e04b4 100644
+--- a/include/asm-x86/ptrace.h
++++ b/include/asm-x86/ptrace.h
+@@ -4,12 +4,15 @@
+ #include <linux/compiler.h> /* For __user */
+ #include <asm/ptrace-abi.h>
+
++
+ #ifndef __ASSEMBLY__
+
+ #ifdef __i386__
+ /* this struct defines the way the registers are stored on the
+ stack during a system call. */
+
++#ifndef __KERNEL__
++
+ struct pt_regs {
+ long ebx;
+ long ecx;
+@@ -21,7 +24,7 @@ struct pt_regs {
+ int xds;
+ int xes;
+ int xfs;
+- /* int xgs; */
++ /* int gs; */
+ long orig_eax;
+ long eip;
+ int xcs;
+@@ -30,44 +33,37 @@ struct pt_regs {
+ int xss;
+ };
+
+-#ifdef __KERNEL__
++#else /* __KERNEL__ */
++
++struct pt_regs {
++ long bx;
++ long cx;
++ long dx;
++ long si;
++ long di;
++ long bp;
++ long ax;
++ int ds;
++ int es;
++ int fs;
++ /* int gs; */
++ long orig_ax;
++ long ip;
++ int cs;
++ long flags;
++ long sp;
++ int ss;
++};
+
+ #include <asm/vm86.h>
+ #include <asm/segment.h>
+
+-struct task_struct;
+-extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code);
-
--static inline void sema_init (struct semaphore *sem, int val)
--{
-/*
-- * *sem = (struct semaphore)__SEMAPHORE_INITIALIZER((*sem),val);
-- *
-- * i'd rather use the more flexible initialization above, but sadly
-- * GCC 2.7.2.3 emits a bogus warning. EGCS doesnt. Oh well.
+- * user_mode_vm(regs) determines whether a register set came from user mode.
+- * This is true if V8086 mode was enabled OR if the register set was from
+- * protected mode with RPL-3 CS value. This tricky test checks that with
+- * one comparison. Many places in the kernel can bypass this full check
+- * if they have already ruled out V8086 mode, so user_mode(regs) can be used.
- */
-- atomic_set(&sem->count, val);
-- sem->sleepers = 0;
-- init_waitqueue_head(&sem->wait);
--}
--
--static inline void init_MUTEX (struct semaphore *sem)
--{
-- sema_init(sem, 1);
--}
--
--static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+-static inline int user_mode(struct pt_regs *regs)
-{
-- sema_init(sem, 0);
+- return (regs->xcs & SEGMENT_RPL_MASK) == USER_RPL;
-}
--
--#if 0
--asmlinkage void __down_failed(void /* special register calling convention */);
--asmlinkage int __down_failed_interruptible(void /* params in registers */);
--asmlinkage int __down_failed_trylock(void /* params in registers */);
--asmlinkage void __up_wakeup(void /* special register calling convention */);
--#endif
--
--asmlinkage void __down(struct semaphore * sem);
--asmlinkage int __down_interruptible(struct semaphore * sem);
--asmlinkage int __down_trylock(struct semaphore * sem);
--asmlinkage void __up(struct semaphore * sem);
--
--extern spinlock_t semaphore_wake_lock;
--
--static inline void down(struct semaphore * sem)
+-static inline int user_mode_vm(struct pt_regs *regs)
-{
-- if (atomic_dec_return(&sem->count) < 0)
-- __down(sem);
+- return ((regs->xcs & SEGMENT_RPL_MASK) | (regs->eflags & VM_MASK)) >= USER_RPL;
-}
--
--static inline int down_interruptible(struct semaphore * sem)
+-static inline int v8086_mode(struct pt_regs *regs)
-{
-- int ret = 0;
--
-- if (atomic_dec_return(&sem->count) < 0)
-- ret = __down_interruptible(sem);
-- return ret;
+- return (regs->eflags & VM_MASK);
-}
-
--static inline int down_trylock(struct semaphore * sem)
--{
-- int ret = 0;
--
-- if (atomic_dec_return(&sem->count) < 0)
-- ret = __down_trylock(sem);
-- return ret;
--}
+-#define instruction_pointer(regs) ((regs)->eip)
+-#define frame_pointer(regs) ((regs)->ebp)
+-#define stack_pointer(regs) ((unsigned long)(regs))
+-#define regs_return_value(regs) ((regs)->eax)
-
--/*
-- * Note! This is subtle. We jump to wake people up only if
-- * the semaphore was negative (== somebody was waiting on it).
-- */
--static inline void up(struct semaphore * sem)
--{
-- if (atomic_inc_return(&sem->count) <= 0)
-- __up(sem);
--}
+-extern unsigned long profile_pc(struct pt_regs *regs);
+ #endif /* __KERNEL__ */
+
+ #else /* __i386__ */
+
++#ifndef __KERNEL__
++
+ struct pt_regs {
+ unsigned long r15;
+ unsigned long r14;
+@@ -96,47 +92,143 @@ struct pt_regs {
+ /* top of stack page */
+ };
+
++#else /* __KERNEL__ */
++
++struct pt_regs {
++ unsigned long r15;
++ unsigned long r14;
++ unsigned long r13;
++ unsigned long r12;
++ unsigned long bp;
++ unsigned long bx;
++/* arguments: non interrupts/non tracing syscalls only save upto here*/
++ unsigned long r11;
++ unsigned long r10;
++ unsigned long r9;
++ unsigned long r8;
++ unsigned long ax;
++ unsigned long cx;
++ unsigned long dx;
++ unsigned long si;
++ unsigned long di;
++ unsigned long orig_ax;
++/* end of arguments */
++/* cpu exception frame or undefined */
++ unsigned long ip;
++ unsigned long cs;
++ unsigned long flags;
++ unsigned long sp;
++ unsigned long ss;
++/* top of stack page */
++};
++
++#endif /* __KERNEL__ */
++#endif /* !__i386__ */
++
+ #ifdef __KERNEL__
+
+-#define user_mode(regs) (!!((regs)->cs & 3))
+-#define user_mode_vm(regs) user_mode(regs)
+-#define instruction_pointer(regs) ((regs)->rip)
+-#define frame_pointer(regs) ((regs)->rbp)
+-#define stack_pointer(regs) ((regs)->rsp)
+-#define regs_return_value(regs) ((regs)->rax)
++/* the DS BTS struct is used for ptrace as well */
++#include <asm/ds.h>
++
++struct task_struct;
++
++extern void ptrace_bts_take_timestamp(struct task_struct *, enum bts_qualifier);
+
+ extern unsigned long profile_pc(struct pt_regs *regs);
++
++extern unsigned long
++convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs);
++
++#ifdef CONFIG_X86_32
++extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code);
++#else
+ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
++#endif
+
+-struct task_struct;
++#define regs_return_value(regs) ((regs)->ax)
++
++/*
++ * user_mode_vm(regs) determines whether a register set came from user mode.
++ * This is true if V8086 mode was enabled OR if the register set was from
++ * protected mode with RPL-3 CS value. This tricky test checks that with
++ * one comparison. Many places in the kernel can bypass this full check
++ * if they have already ruled out V8086 mode, so user_mode(regs) can be used.
++ */
++static inline int user_mode(struct pt_regs *regs)
++{
++#ifdef CONFIG_X86_32
++ return (regs->cs & SEGMENT_RPL_MASK) == USER_RPL;
++#else
++ return !!(regs->cs & 3);
++#endif
++}
++
++static inline int user_mode_vm(struct pt_regs *regs)
++{
++#ifdef CONFIG_X86_32
++ return ((regs->cs & SEGMENT_RPL_MASK) |
++ (regs->flags & VM_MASK)) >= USER_RPL;
++#else
++ return user_mode(regs);
++#endif
++}
++
++static inline int v8086_mode(struct pt_regs *regs)
++{
++#ifdef CONFIG_X86_32
++ return (regs->flags & VM_MASK);
++#else
++ return 0; /* No V86 mode support in long mode */
++#endif
++}
++
++/*
++ * X86_32 CPUs don't save ss and esp if the CPU is already in kernel mode
++ * when it traps. So regs will be the current sp.
++ *
++ * This is valid only for kernel mode traps.
++ */
++static inline unsigned long kernel_trap_sp(struct pt_regs *regs)
++{
++#ifdef CONFIG_X86_32
++ return (unsigned long)regs;
++#else
++ return regs->sp;
++#endif
++}
++
++static inline unsigned long instruction_pointer(struct pt_regs *regs)
++{
++ return regs->ip;
++}
++
++static inline unsigned long frame_pointer(struct pt_regs *regs)
++{
++ return regs->bp;
++}
++
++/*
++ * These are defined as per linux/ptrace.h, which see.
++ */
++#define arch_has_single_step() (1)
++extern void user_enable_single_step(struct task_struct *);
++extern void user_disable_single_step(struct task_struct *);
++
++extern void user_enable_block_step(struct task_struct *);
++#ifdef CONFIG_X86_DEBUGCTLMSR
++#define arch_has_block_step() (1)
++#else
++#define arch_has_block_step() (boot_cpu_data.x86 >= 6)
++#endif
++
++struct user_desc;
++extern int do_get_thread_area(struct task_struct *p, int idx,
++ struct user_desc __user *info);
++extern int do_set_thread_area(struct task_struct *p, int idx,
++ struct user_desc __user *info, int can_allocate);
+
+-extern unsigned long
+-convert_rip_to_linear(struct task_struct *child, struct pt_regs *regs);
-
--#endif /* __ASM_SH64_SEMAPHORE_H */
-diff --git a/include/asm-sh64/sembuf.h b/include/asm-sh64/sembuf.h
+-enum {
+- EF_CF = 0x00000001,
+- EF_PF = 0x00000004,
+- EF_AF = 0x00000010,
+- EF_ZF = 0x00000040,
+- EF_SF = 0x00000080,
+- EF_TF = 0x00000100,
+- EF_IE = 0x00000200,
+- EF_DF = 0x00000400,
+- EF_OF = 0x00000800,
+- EF_IOPL = 0x00003000,
+- EF_IOPL_RING0 = 0x00000000,
+- EF_IOPL_RING1 = 0x00001000,
+- EF_IOPL_RING2 = 0x00002000,
+- EF_NT = 0x00004000, /* nested task */
+- EF_RF = 0x00010000, /* resume */
+- EF_VM = 0x00020000, /* virtual mode */
+- EF_AC = 0x00040000, /* alignment */
+- EF_VIF = 0x00080000, /* virtual interrupt */
+- EF_VIP = 0x00100000, /* virtual interrupt pending */
+- EF_ID = 0x00200000, /* id */
+-};
+ #endif /* __KERNEL__ */
+-#endif /* !__i386__ */
++
+ #endif /* !__ASSEMBLY__ */
+
+ #endif
+diff --git a/include/asm-x86/resume-trace.h b/include/asm-x86/resume-trace.h
+index 9b6dd09..46f725b 100644
+--- a/include/asm-x86/resume-trace.h
++++ b/include/asm-x86/resume-trace.h
+@@ -1,5 +1,20 @@
+-#ifdef CONFIG_X86_32
+-# include "resume-trace_32.h"
+-#else
+-# include "resume-trace_64.h"
++#ifndef _ASM_X86_RESUME_TRACE_H
++#define _ASM_X86_RESUME_TRACE_H
++
++#include <asm/asm.h>
++
++#define TRACE_RESUME(user) do { \
++ if (pm_trace_enabled) { \
++ void *tracedata; \
++ asm volatile(_ASM_MOV_UL " $1f,%0\n" \
++ ".section .tracedata,\"a\"\n" \
++ "1:\t.word %c1\n\t" \
++ _ASM_PTR " %c2\n" \
++ ".previous" \
++ :"=r" (tracedata) \
++ : "i" (__LINE__), "i" (__FILE__)); \
++ generate_resume_trace(tracedata, user); \
++ } \
++} while (0)
++
+ #endif
+diff --git a/include/asm-x86/resume-trace_32.h b/include/asm-x86/resume-trace_32.h
deleted file mode 100644
-index ec4d9f1..0000000
---- a/include/asm-sh64/sembuf.h
+index ec9cfd6..0000000
+--- a/include/asm-x86/resume-trace_32.h
+++ /dev/null
-@@ -1,36 +0,0 @@
--#ifndef __ASM_SH64_SEMBUF_H
--#define __ASM_SH64_SEMBUF_H
--
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/sembuf.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
--
--/*
-- * The semid64_ds structure for i386 architecture.
-- * Note extra padding because this structure is passed back and forth
-- * between kernel and user space.
-- *
-- * Pad space is left for:
-- * - 64-bit time_t to solve y2038 problem
-- * - 2 miscellaneous 32-bit values
-- */
--
--struct semid64_ds {
-- struct ipc64_perm sem_perm; /* permissions .. see ipc.h */
-- __kernel_time_t sem_otime; /* last semop time */
-- unsigned long __unused1;
-- __kernel_time_t sem_ctime; /* last change time */
-- unsigned long __unused2;
-- unsigned long sem_nsems; /* no. of semaphores in array */
-- unsigned long __unused3;
-- unsigned long __unused4;
--};
--
--#endif /* __ASM_SH64_SEMBUF_H */
-diff --git a/include/asm-sh64/serial.h b/include/asm-sh64/serial.h
+@@ -1,13 +0,0 @@
+-#define TRACE_RESUME(user) do { \
+- if (pm_trace_enabled) { \
+- void *tracedata; \
+- asm volatile("movl $1f,%0\n" \
+- ".section .tracedata,\"a\"\n" \
+- "1:\t.word %c1\n" \
+- "\t.long %c2\n" \
+- ".previous" \
+- :"=r" (tracedata) \
+- : "i" (__LINE__), "i" (__FILE__)); \
+- generate_resume_trace(tracedata, user); \
+- } \
+-} while (0)
+diff --git a/include/asm-x86/resume-trace_64.h b/include/asm-x86/resume-trace_64.h
deleted file mode 100644
-index e8d7b3f..0000000
---- a/include/asm-sh64/serial.h
+index 34bf998..0000000
+--- a/include/asm-x86/resume-trace_64.h
+++ /dev/null
-@@ -1,31 +0,0 @@
--/*
-- * include/asm-sh64/serial.h
-- *
-- * Configuration details for 8250, 16450, 16550, etc. serial ports
-- */
--
--#ifndef _ASM_SERIAL_H
--#define _ASM_SERIAL_H
--
--/*
-- * This assumes you have a 1.8432 MHz clock for your UART.
-- *
-- * It'd be nice if someone built a serial card with a 24.576 MHz
-- * clock, since the 16550A is capable of handling a top speed of 1.5
-- * megabits/second; but this requires the faster clock.
-- */
--#define BASE_BAUD ( 1843200 / 16 )
--
--#define RS_TABLE_SIZE 2
--
--#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
--
--#define SERIAL_PORT_DFNS \
-- /* UART CLK PORT IRQ FLAGS */ \
-- { 0, BASE_BAUD, 0x3F8, 4, STD_COM_FLAGS }, /* ttyS0 */ \
-- { 0, BASE_BAUD, 0x2F8, 3, STD_COM_FLAGS } /* ttyS1 */
--
--/* XXX: This should be moved ino irq.h */
--#define irq_cannonicalize(x) (x)
--
--#endif /* _ASM_SERIAL_H */
-diff --git a/include/asm-sh64/setup.h b/include/asm-sh64/setup.h
+@@ -1,13 +0,0 @@
+-#define TRACE_RESUME(user) do { \
+- if (pm_trace_enabled) { \
+- void *tracedata; \
+- asm volatile("movq $1f,%0\n" \
+- ".section .tracedata,\"a\"\n" \
+- "1:\t.word %c1\n" \
+- "\t.quad %c2\n" \
+- ".previous" \
+- :"=r" (tracedata) \
+- : "i" (__LINE__), "i" (__FILE__)); \
+- generate_resume_trace(tracedata, user); \
+- } \
+-} while (0)
+diff --git a/include/asm-x86/rio.h b/include/asm-x86/rio.h
+index c7350f6..97cdcc9 100644
+--- a/include/asm-x86/rio.h
++++ b/include/asm-x86/rio.h
+@@ -1,6 +1,6 @@
+ /*
+- * Derived from include/asm-i386/mach-summit/mach_mpparse.h
+- * and include/asm-i386/mach-default/bios_ebda.h
++ * Derived from include/asm-x86/mach-summit/mach_mpparse.h
++ * and include/asm-x86/mach-default/bios_ebda.h
+ *
+ * Author: Laurent Vivier <Laurent.Vivier at bull.net>
+ */
+diff --git a/include/asm-x86/rwlock.h b/include/asm-x86/rwlock.h
+index f2b64a4..6a8c0d6 100644
+--- a/include/asm-x86/rwlock.h
++++ b/include/asm-x86/rwlock.h
+@@ -2,7 +2,6 @@
+ #define _ASM_X86_RWLOCK_H
+
+ #define RW_LOCK_BIAS 0x01000000
+-#define RW_LOCK_BIAS_STR "0x01000000"
+
+ /* Actual code is in asm/spinlock.h or in arch/x86/lib/rwlock.S */
+
+diff --git a/include/asm-x86/rwsem.h b/include/asm-x86/rwsem.h
+index 041906f..520a379 100644
+--- a/include/asm-x86/rwsem.h
++++ b/include/asm-x86/rwsem.h
+@@ -2,7 +2,7 @@
+ *
+ * Written by David Howells (dhowells at redhat.com).
+ *
+- * Derived from asm-i386/semaphore.h
++ * Derived from asm-x86/semaphore.h
+ *
+ *
+ * The MSW of the count is the negated number of active writers and waiting
+@@ -44,10 +44,14 @@
+
+ struct rwsem_waiter;
+
+-extern struct rw_semaphore *FASTCALL(rwsem_down_read_failed(struct rw_semaphore *sem));
+-extern struct rw_semaphore *FASTCALL(rwsem_down_write_failed(struct rw_semaphore *sem));
+-extern struct rw_semaphore *FASTCALL(rwsem_wake(struct rw_semaphore *));
+-extern struct rw_semaphore *FASTCALL(rwsem_downgrade_wake(struct rw_semaphore *sem));
++extern asmregparm struct rw_semaphore *
++ rwsem_down_read_failed(struct rw_semaphore *sem);
++extern asmregparm struct rw_semaphore *
++ rwsem_down_write_failed(struct rw_semaphore *sem);
++extern asmregparm struct rw_semaphore *
++ rwsem_wake(struct rw_semaphore *);
++extern asmregparm struct rw_semaphore *
++ rwsem_downgrade_wake(struct rw_semaphore *sem);
+
+ /*
+ * the semaphore definition
+diff --git a/include/asm-x86/scatterlist.h b/include/asm-x86/scatterlist.h
+index 3a1e762..d13c197 100644
+--- a/include/asm-x86/scatterlist.h
++++ b/include/asm-x86/scatterlist.h
+@@ -1,5 +1,35 @@
++#ifndef _ASM_X86_SCATTERLIST_H
++#define _ASM_X86_SCATTERLIST_H
++
++#include <asm/types.h>
++
++struct scatterlist {
++#ifdef CONFIG_DEBUG_SG
++ unsigned long sg_magic;
++#endif
++ unsigned long page_link;
++ unsigned int offset;
++ unsigned int length;
++ dma_addr_t dma_address;
++#ifdef CONFIG_X86_64
++ unsigned int dma_length;
++#endif
++};
++
++#define ARCH_HAS_SG_CHAIN
++#define ISA_DMA_THRESHOLD (0x00ffffff)
++
++/*
++ * These macros should be used after a pci_map_sg call has been done
++ * to get bus addresses of each of the SG entries and their lengths.
++ * You should only work with the number of sg entries pci_map_sg
++ * returns.
++ */
++#define sg_dma_address(sg) ((sg)->dma_address)
+ #ifdef CONFIG_X86_32
+-# include "scatterlist_32.h"
++# define sg_dma_len(sg) ((sg)->length)
+ #else
+-# include "scatterlist_64.h"
++# define sg_dma_len(sg) ((sg)->dma_length)
++#endif
++
+ #endif
+diff --git a/include/asm-x86/scatterlist_32.h b/include/asm-x86/scatterlist_32.h
deleted file mode 100644
-index 5b07b14..0000000
---- a/include/asm-sh64/setup.h
+index 0e7d997..0000000
+--- a/include/asm-x86/scatterlist_32.h
+++ /dev/null
-@@ -1,22 +0,0 @@
--#ifndef __ASM_SH64_SETUP_H
--#define __ASM_SH64_SETUP_H
--
--#define COMMAND_LINE_SIZE 256
+@@ -1,28 +0,0 @@
+-#ifndef _I386_SCATTERLIST_H
+-#define _I386_SCATTERLIST_H
-
--#ifdef __KERNEL__
+-#include <asm/types.h>
-
--#define PARAM ((unsigned char *)empty_zero_page)
--#define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))
--#define RAMDISK_FLAGS (*(unsigned long *) (PARAM+0x004))
--#define ORIG_ROOT_DEV (*(unsigned long *) (PARAM+0x008))
--#define LOADER_TYPE (*(unsigned long *) (PARAM+0x00c))
--#define INITRD_START (*(unsigned long *) (PARAM+0x010))
--#define INITRD_SIZE (*(unsigned long *) (PARAM+0x014))
+-struct scatterlist {
+-#ifdef CONFIG_DEBUG_SG
+- unsigned long sg_magic;
+-#endif
+- unsigned long page_link;
+- unsigned int offset;
+- dma_addr_t dma_address;
+- unsigned int length;
+-};
-
--#define COMMAND_LINE ((char *) (PARAM+256))
--#define COMMAND_LINE_SIZE 256
+-#define ARCH_HAS_SG_CHAIN
-
--#endif /* __KERNEL__ */
+-/* These macros should be used after a pci_map_sg call has been done
+- * to get bus addresses of each of the SG entries and their lengths.
+- * You should only work with the number of sg entries pci_map_sg
+- * returns.
+- */
+-#define sg_dma_address(sg) ((sg)->dma_address)
+-#define sg_dma_len(sg) ((sg)->length)
-
--#endif /* __ASM_SH64_SETUP_H */
+-#define ISA_DMA_THRESHOLD (0x00ffffff)
-
-diff --git a/include/asm-sh64/shmbuf.h b/include/asm-sh64/shmbuf.h
+-#endif /* !(_I386_SCATTERLIST_H) */
+diff --git a/include/asm-x86/scatterlist_64.h b/include/asm-x86/scatterlist_64.h
deleted file mode 100644
-index 022f349..0000000
---- a/include/asm-sh64/shmbuf.h
+index 1847c72..0000000
+--- a/include/asm-x86/scatterlist_64.h
+++ /dev/null
-@@ -1,53 +0,0 @@
--#ifndef __ASM_SH64_SHMBUF_H
--#define __ASM_SH64_SHMBUF_H
--
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/shmbuf.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+@@ -1,29 +0,0 @@
+-#ifndef _X8664_SCATTERLIST_H
+-#define _X8664_SCATTERLIST_H
-
--/*
-- * The shmid64_ds structure for i386 architecture.
-- * Note extra padding because this structure is passed back and forth
-- * between kernel and user space.
-- *
-- * Pad space is left for:
-- * - 64-bit time_t to solve y2038 problem
-- * - 2 miscellaneous 32-bit values
-- */
+-#include <asm/types.h>
-
--struct shmid64_ds {
-- struct ipc64_perm shm_perm; /* operation perms */
-- size_t shm_segsz; /* size of segment (bytes) */
-- __kernel_time_t shm_atime; /* last attach time */
-- unsigned long __unused1;
-- __kernel_time_t shm_dtime; /* last detach time */
-- unsigned long __unused2;
-- __kernel_time_t shm_ctime; /* last change time */
-- unsigned long __unused3;
-- __kernel_pid_t shm_cpid; /* pid of creator */
-- __kernel_pid_t shm_lpid; /* pid of last operator */
-- unsigned long shm_nattch; /* no. of current attaches */
-- unsigned long __unused4;
-- unsigned long __unused5;
+-struct scatterlist {
+-#ifdef CONFIG_DEBUG_SG
+- unsigned long sg_magic;
+-#endif
+- unsigned long page_link;
+- unsigned int offset;
+- unsigned int length;
+- dma_addr_t dma_address;
+- unsigned int dma_length;
-};
-
--struct shminfo64 {
-- unsigned long shmmax;
-- unsigned long shmmin;
-- unsigned long shmmni;
-- unsigned long shmseg;
-- unsigned long shmall;
-- unsigned long __unused1;
-- unsigned long __unused2;
-- unsigned long __unused3;
-- unsigned long __unused4;
--};
+-#define ARCH_HAS_SG_CHAIN
-
--#endif /* __ASM_SH64_SHMBUF_H */
-diff --git a/include/asm-sh64/shmparam.h b/include/asm-sh64/shmparam.h
-deleted file mode 100644
-index 1bb820c..0000000
---- a/include/asm-sh64/shmparam.h
-+++ /dev/null
-@@ -1,12 +0,0 @@
--#ifndef __ASM_SH64_SHMPARAM_H
--#define __ASM_SH64_SHMPARAM_H
+-#define ISA_DMA_THRESHOLD (0x00ffffff)
-
--/*
-- * Set this to a sensible safe default, we'll work out the specifics for the
-- * align mask from the cache descriptor at run-time.
+-/* These macros should be used after a pci_map_sg call has been done
+- * to get bus addresses of each of the SG entries and their lengths.
+- * You should only work with the number of sg entries pci_map_sg
+- * returns.
- */
--#define SHMLBA 0x4000
--
--#define __ARCH_FORCE_SHMLBA
+-#define sg_dma_address(sg) ((sg)->dma_address)
+-#define sg_dma_len(sg) ((sg)->dma_length)
-
--#endif /* __ASM_SH64_SHMPARAM_H */
-diff --git a/include/asm-sh64/sigcontext.h b/include/asm-sh64/sigcontext.h
+-#endif
+diff --git a/include/asm-x86/segment.h b/include/asm-x86/segment.h
+index 6050682..23f0535 100644
+--- a/include/asm-x86/segment.h
++++ b/include/asm-x86/segment.h
+@@ -1,5 +1,204 @@
++#ifndef _ASM_X86_SEGMENT_H_
++#define _ASM_X86_SEGMENT_H_
++
++/* Simple and small GDT entries for booting only */
++
++#define GDT_ENTRY_BOOT_CS 2
++#define __BOOT_CS (GDT_ENTRY_BOOT_CS * 8)
++
++#define GDT_ENTRY_BOOT_DS (GDT_ENTRY_BOOT_CS + 1)
++#define __BOOT_DS (GDT_ENTRY_BOOT_DS * 8)
++
++#define GDT_ENTRY_BOOT_TSS (GDT_ENTRY_BOOT_CS + 2)
++#define __BOOT_TSS (GDT_ENTRY_BOOT_TSS * 8)
++
+ #ifdef CONFIG_X86_32
+-# include "segment_32.h"
++/*
++ * The layout of the per-CPU GDT under Linux:
++ *
++ * 0 - null
++ * 1 - reserved
++ * 2 - reserved
++ * 3 - reserved
++ *
++ * 4 - unused <==== new cacheline
++ * 5 - unused
++ *
++ * ------- start of TLS (Thread-Local Storage) segments:
++ *
++ * 6 - TLS segment #1 [ glibc's TLS segment ]
++ * 7 - TLS segment #2 [ Wine's %fs Win32 segment ]
++ * 8 - TLS segment #3
++ * 9 - reserved
++ * 10 - reserved
++ * 11 - reserved
++ *
++ * ------- start of kernel segments:
++ *
++ * 12 - kernel code segment <==== new cacheline
++ * 13 - kernel data segment
++ * 14 - default user CS
++ * 15 - default user DS
++ * 16 - TSS
++ * 17 - LDT
++ * 18 - PNPBIOS support (16->32 gate)
++ * 19 - PNPBIOS support
++ * 20 - PNPBIOS support
++ * 21 - PNPBIOS support
++ * 22 - PNPBIOS support
++ * 23 - APM BIOS support
++ * 24 - APM BIOS support
++ * 25 - APM BIOS support
++ *
++ * 26 - ESPFIX small SS
++ * 27 - per-cpu [ offset to per-cpu data area ]
++ * 28 - unused
++ * 29 - unused
++ * 30 - unused
++ * 31 - TSS for double fault handler
++ */
++#define GDT_ENTRY_TLS_MIN 6
++#define GDT_ENTRY_TLS_MAX (GDT_ENTRY_TLS_MIN + GDT_ENTRY_TLS_ENTRIES - 1)
++
++#define GDT_ENTRY_DEFAULT_USER_CS 14
++#define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS * 8 + 3)
++
++#define GDT_ENTRY_DEFAULT_USER_DS 15
++#define __USER_DS (GDT_ENTRY_DEFAULT_USER_DS * 8 + 3)
++
++#define GDT_ENTRY_KERNEL_BASE 12
++
++#define GDT_ENTRY_KERNEL_CS (GDT_ENTRY_KERNEL_BASE + 0)
++#define __KERNEL_CS (GDT_ENTRY_KERNEL_CS * 8)
++
++#define GDT_ENTRY_KERNEL_DS (GDT_ENTRY_KERNEL_BASE + 1)
++#define __KERNEL_DS (GDT_ENTRY_KERNEL_DS * 8)
++
++#define GDT_ENTRY_TSS (GDT_ENTRY_KERNEL_BASE + 4)
++#define GDT_ENTRY_LDT (GDT_ENTRY_KERNEL_BASE + 5)
++
++#define GDT_ENTRY_PNPBIOS_BASE (GDT_ENTRY_KERNEL_BASE + 6)
++#define GDT_ENTRY_APMBIOS_BASE (GDT_ENTRY_KERNEL_BASE + 11)
++
++#define GDT_ENTRY_ESPFIX_SS (GDT_ENTRY_KERNEL_BASE + 14)
++#define __ESPFIX_SS (GDT_ENTRY_ESPFIX_SS * 8)
++
++#define GDT_ENTRY_PERCPU (GDT_ENTRY_KERNEL_BASE + 15)
++#ifdef CONFIG_SMP
++#define __KERNEL_PERCPU (GDT_ENTRY_PERCPU * 8)
+ #else
+-# include "segment_64.h"
++#define __KERNEL_PERCPU 0
++#endif
++
++#define GDT_ENTRY_DOUBLEFAULT_TSS 31
++
++/*
++ * The GDT has 32 entries
++ */
++#define GDT_ENTRIES 32
++
++/* The PnP BIOS entries in the GDT */
++#define GDT_ENTRY_PNPBIOS_CS32 (GDT_ENTRY_PNPBIOS_BASE + 0)
++#define GDT_ENTRY_PNPBIOS_CS16 (GDT_ENTRY_PNPBIOS_BASE + 1)
++#define GDT_ENTRY_PNPBIOS_DS (GDT_ENTRY_PNPBIOS_BASE + 2)
++#define GDT_ENTRY_PNPBIOS_TS1 (GDT_ENTRY_PNPBIOS_BASE + 3)
++#define GDT_ENTRY_PNPBIOS_TS2 (GDT_ENTRY_PNPBIOS_BASE + 4)
++
++/* The PnP BIOS selectors */
++#define PNP_CS32 (GDT_ENTRY_PNPBIOS_CS32 * 8) /* segment for calling fn */
++#define PNP_CS16 (GDT_ENTRY_PNPBIOS_CS16 * 8) /* code segment for BIOS */
++#define PNP_DS (GDT_ENTRY_PNPBIOS_DS * 8) /* data segment for BIOS */
++#define PNP_TS1 (GDT_ENTRY_PNPBIOS_TS1 * 8) /* transfer data segment */
++#define PNP_TS2 (GDT_ENTRY_PNPBIOS_TS2 * 8) /* another data segment */
++
++/* Bottom two bits of selector give the ring privilege level */
++#define SEGMENT_RPL_MASK 0x3
++/* Bit 2 is table indicator (LDT/GDT) */
++#define SEGMENT_TI_MASK 0x4
++
++/* User mode is privilege level 3 */
++#define USER_RPL 0x3
++/* LDT segment has TI set, GDT has it cleared */
++#define SEGMENT_LDT 0x4
++#define SEGMENT_GDT 0x0
++
++/*
++ * Matching rules for certain types of segments.
++ */
++
++/* Matches only __KERNEL_CS, ignoring PnP / USER / APM segments */
++#define SEGMENT_IS_KERNEL_CODE(x) (((x) & 0xfc) == GDT_ENTRY_KERNEL_CS * 8)
++
++/* Matches __KERNEL_CS and __USER_CS (they must be 2 entries apart) */
++#define SEGMENT_IS_FLAT_CODE(x) (((x) & 0xec) == GDT_ENTRY_KERNEL_CS * 8)
++
++/* Matches PNP_CS32 and PNP_CS16 (they must be consecutive) */
++#define SEGMENT_IS_PNP_CODE(x) (((x) & 0xf4) == GDT_ENTRY_PNPBIOS_BASE * 8)
++
++
++#else
++#include <asm/cache.h>
++
++#define __KERNEL_CS 0x10
++#define __KERNEL_DS 0x18
++
++#define __KERNEL32_CS 0x08
++
++/*
++ * we cannot use the same code segment descriptor for user and kernel
++ * -- not even in the long flat mode, because of different DPL /kkeil
++ * The segment offset needs to contain a RPL. Grr. -AK
++ * GDT layout to get 64bit syscall right (sysret hardcodes gdt offsets)
++ */
++
++#define __USER32_CS 0x23 /* 4*8+3 */
++#define __USER_DS 0x2b /* 5*8+3 */
++#define __USER_CS 0x33 /* 6*8+3 */
++#define __USER32_DS __USER_DS
++
++#define GDT_ENTRY_TSS 8 /* needs two entries */
++#define GDT_ENTRY_LDT 10 /* needs two entries */
++#define GDT_ENTRY_TLS_MIN 12
++#define GDT_ENTRY_TLS_MAX 14
++
++#define GDT_ENTRY_PER_CPU 15 /* Abused to load per CPU data from limit */
++#define __PER_CPU_SEG (GDT_ENTRY_PER_CPU * 8 + 3)
++
++/* TLS indexes for 64bit - hardcoded in arch_prctl */
++#define FS_TLS 0
++#define GS_TLS 1
++
++#define GS_TLS_SEL ((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
++#define FS_TLS_SEL ((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
++
++#define GDT_ENTRIES 16
++
++#endif
++
++#ifndef CONFIG_PARAVIRT
++#define get_kernel_rpl() 0
++#endif
++
++/* User mode is privilege level 3 */
++#define USER_RPL 0x3
++/* LDT segment has TI set, GDT has it cleared */
++#define SEGMENT_LDT 0x4
++#define SEGMENT_GDT 0x0
++
++/* Bottom two bits of selector give the ring privilege level */
++#define SEGMENT_RPL_MASK 0x3
++/* Bit 2 is table indicator (LDT/GDT) */
++#define SEGMENT_TI_MASK 0x4
++
++#define IDT_ENTRIES 256
++#define GDT_SIZE (GDT_ENTRIES * 8)
++#define GDT_ENTRY_TLS_ENTRIES 3
++#define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)
++
++#ifdef __KERNEL__
++#ifndef __ASSEMBLY__
++extern const char early_idt_handlers[IDT_ENTRIES][10];
++#endif
++#endif
++
+ #endif
+diff --git a/include/asm-x86/segment_32.h b/include/asm-x86/segment_32.h
deleted file mode 100644
-index 6293509..0000000
---- a/include/asm-sh64/sigcontext.h
+index 597a47c..0000000
+--- a/include/asm-x86/segment_32.h
+++ /dev/null
-@@ -1,30 +0,0 @@
--#ifndef __ASM_SH64_SIGCONTEXT_H
--#define __ASM_SH64_SIGCONTEXT_H
+@@ -1,148 +0,0 @@
+-#ifndef _ASM_SEGMENT_H
+-#define _ASM_SEGMENT_H
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/sigcontext.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * The layout of the per-CPU GDT under Linux:
- *
-- */
--
--struct sigcontext {
-- unsigned long oldmask;
--
-- /* CPU registers */
-- unsigned long long sc_regs[63];
-- unsigned long long sc_tregs[8];
-- unsigned long long sc_pc;
-- unsigned long long sc_sr;
+- * 0 - null
+- * 1 - reserved
+- * 2 - reserved
+- * 3 - reserved
+- *
+- * 4 - unused <==== new cacheline
+- * 5 - unused
+- *
+- * ------- start of TLS (Thread-Local Storage) segments:
+- *
+- * 6 - TLS segment #1 [ glibc's TLS segment ]
+- * 7 - TLS segment #2 [ Wine's %fs Win32 segment ]
+- * 8 - TLS segment #3
+- * 9 - reserved
+- * 10 - reserved
+- * 11 - reserved
+- *
+- * ------- start of kernel segments:
+- *
+- * 12 - kernel code segment <==== new cacheline
+- * 13 - kernel data segment
+- * 14 - default user CS
+- * 15 - default user DS
+- * 16 - TSS
+- * 17 - LDT
+- * 18 - PNPBIOS support (16->32 gate)
+- * 19 - PNPBIOS support
+- * 20 - PNPBIOS support
+- * 21 - PNPBIOS support
+- * 22 - PNPBIOS support
+- * 23 - APM BIOS support
+- * 24 - APM BIOS support
+- * 25 - APM BIOS support
+- *
+- * 26 - ESPFIX small SS
+- * 27 - per-cpu [ offset to per-cpu data area ]
+- * 28 - unused
+- * 29 - unused
+- * 30 - unused
+- * 31 - TSS for double fault handler
+- */
+-#define GDT_ENTRY_TLS_ENTRIES 3
+-#define GDT_ENTRY_TLS_MIN 6
+-#define GDT_ENTRY_TLS_MAX (GDT_ENTRY_TLS_MIN + GDT_ENTRY_TLS_ENTRIES - 1)
+-
+-#define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)
+-
+-#define GDT_ENTRY_DEFAULT_USER_CS 14
+-#define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS * 8 + 3)
+-
+-#define GDT_ENTRY_DEFAULT_USER_DS 15
+-#define __USER_DS (GDT_ENTRY_DEFAULT_USER_DS * 8 + 3)
+-
+-#define GDT_ENTRY_KERNEL_BASE 12
+-
+-#define GDT_ENTRY_KERNEL_CS (GDT_ENTRY_KERNEL_BASE + 0)
+-#define __KERNEL_CS (GDT_ENTRY_KERNEL_CS * 8)
+-
+-#define GDT_ENTRY_KERNEL_DS (GDT_ENTRY_KERNEL_BASE + 1)
+-#define __KERNEL_DS (GDT_ENTRY_KERNEL_DS * 8)
+-
+-#define GDT_ENTRY_TSS (GDT_ENTRY_KERNEL_BASE + 4)
+-#define GDT_ENTRY_LDT (GDT_ENTRY_KERNEL_BASE + 5)
-
-- /* FPU registers */
-- unsigned long long sc_fpregs[32];
-- unsigned int sc_fpscr;
-- unsigned int sc_fpvalid;
--};
+-#define GDT_ENTRY_PNPBIOS_BASE (GDT_ENTRY_KERNEL_BASE + 6)
+-#define GDT_ENTRY_APMBIOS_BASE (GDT_ENTRY_KERNEL_BASE + 11)
-
--#endif /* __ASM_SH64_SIGCONTEXT_H */
-diff --git a/include/asm-sh64/siginfo.h b/include/asm-sh64/siginfo.h
-deleted file mode 100644
-index 56ef1da..0000000
---- a/include/asm-sh64/siginfo.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_SIGINFO_H
--#define __ASM_SH64_SIGINFO_H
+-#define GDT_ENTRY_ESPFIX_SS (GDT_ENTRY_KERNEL_BASE + 14)
+-#define __ESPFIX_SS (GDT_ENTRY_ESPFIX_SS * 8)
-
--#include <asm-generic/siginfo.h>
+-#define GDT_ENTRY_PERCPU (GDT_ENTRY_KERNEL_BASE + 15)
+-#ifdef CONFIG_SMP
+-#define __KERNEL_PERCPU (GDT_ENTRY_PERCPU * 8)
+-#else
+-#define __KERNEL_PERCPU 0
+-#endif
-
--#endif /* __ASM_SH64_SIGINFO_H */
-diff --git a/include/asm-sh64/signal.h b/include/asm-sh64/signal.h
-deleted file mode 100644
-index 244e134..0000000
---- a/include/asm-sh64/signal.h
-+++ /dev/null
-@@ -1,159 +0,0 @@
--#ifndef __ASM_SH64_SIGNAL_H
--#define __ASM_SH64_SIGNAL_H
+-#define GDT_ENTRY_DOUBLEFAULT_TSS 31
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/signal.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * The GDT has 32 entries
- */
+-#define GDT_ENTRIES 32
+-#define GDT_SIZE (GDT_ENTRIES * 8)
-
--#include <linux/types.h>
--
--/* Avoid too many header ordering problems. */
--struct siginfo;
--
--#define _NSIG 64
--#define _NSIG_BPW 32
--#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+-/* Simple and small GDT entries for booting only */
-
--typedef unsigned long old_sigset_t; /* at least 32 bits */
+-#define GDT_ENTRY_BOOT_CS 2
+-#define __BOOT_CS (GDT_ENTRY_BOOT_CS * 8)
-
--typedef struct {
-- unsigned long sig[_NSIG_WORDS];
--} sigset_t;
+-#define GDT_ENTRY_BOOT_DS (GDT_ENTRY_BOOT_CS + 1)
+-#define __BOOT_DS (GDT_ENTRY_BOOT_DS * 8)
-
--#define SIGHUP 1
--#define SIGINT 2
--#define SIGQUIT 3
--#define SIGILL 4
--#define SIGTRAP 5
--#define SIGABRT 6
--#define SIGIOT 6
--#define SIGBUS 7
--#define SIGFPE 8
--#define SIGKILL 9
--#define SIGUSR1 10
--#define SIGSEGV 11
--#define SIGUSR2 12
--#define SIGPIPE 13
--#define SIGALRM 14
--#define SIGTERM 15
--#define SIGSTKFLT 16
--#define SIGCHLD 17
--#define SIGCONT 18
--#define SIGSTOP 19
--#define SIGTSTP 20
--#define SIGTTIN 21
--#define SIGTTOU 22
--#define SIGURG 23
--#define SIGXCPU 24
--#define SIGXFSZ 25
--#define SIGVTALRM 26
--#define SIGPROF 27
--#define SIGWINCH 28
--#define SIGIO 29
--#define SIGPOLL SIGIO
--/*
--#define SIGLOST 29
--*/
--#define SIGPWR 30
--#define SIGSYS 31
--#define SIGUNUSED 31
+-/* The PnP BIOS entries in the GDT */
+-#define GDT_ENTRY_PNPBIOS_CS32 (GDT_ENTRY_PNPBIOS_BASE + 0)
+-#define GDT_ENTRY_PNPBIOS_CS16 (GDT_ENTRY_PNPBIOS_BASE + 1)
+-#define GDT_ENTRY_PNPBIOS_DS (GDT_ENTRY_PNPBIOS_BASE + 2)
+-#define GDT_ENTRY_PNPBIOS_TS1 (GDT_ENTRY_PNPBIOS_BASE + 3)
+-#define GDT_ENTRY_PNPBIOS_TS2 (GDT_ENTRY_PNPBIOS_BASE + 4)
-
--/* These should not be considered constants from userland. */
--#define SIGRTMIN 32
--#define SIGRTMAX (_NSIG-1)
+-/* The PnP BIOS selectors */
+-#define PNP_CS32 (GDT_ENTRY_PNPBIOS_CS32 * 8) /* segment for calling fn */
+-#define PNP_CS16 (GDT_ENTRY_PNPBIOS_CS16 * 8) /* code segment for BIOS */
+-#define PNP_DS (GDT_ENTRY_PNPBIOS_DS * 8) /* data segment for BIOS */
+-#define PNP_TS1 (GDT_ENTRY_PNPBIOS_TS1 * 8) /* transfer data segment */
+-#define PNP_TS2 (GDT_ENTRY_PNPBIOS_TS2 * 8) /* another data segment */
-
-/*
-- * SA_FLAGS values:
-- *
-- * SA_ONSTACK indicates that a registered stack_t will be used.
-- * SA_RESTART flag to get restarting signals (which were the default long ago)
-- * SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
-- * SA_RESETHAND clears the handler when the signal is delivered.
-- * SA_NOCLDWAIT flag on SIGCHLD to inhibit zombies.
-- * SA_NODEFER prevents the current signal from being masked in the handler.
-- *
-- * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
-- * Unix names RESETHAND and NODEFER respectively.
+- * The interrupt descriptor table has room for 256 idt's,
+- * the global descriptor table is dependent on the number
+- * of tasks we can have..
- */
--#define SA_NOCLDSTOP 0x00000001
--#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
--#define SA_SIGINFO 0x00000004
--#define SA_ONSTACK 0x08000000
--#define SA_RESTART 0x10000000
--#define SA_NODEFER 0x40000000
--#define SA_RESETHAND 0x80000000
+-#define IDT_ENTRIES 256
-
--#define SA_NOMASK SA_NODEFER
--#define SA_ONESHOT SA_RESETHAND
+-/* Bottom two bits of selector give the ring privilege level */
+-#define SEGMENT_RPL_MASK 0x3
+-/* Bit 2 is table indicator (LDT/GDT) */
+-#define SEGMENT_TI_MASK 0x4
-
--#define SA_RESTORER 0x04000000
+-/* User mode is privilege level 3 */
+-#define USER_RPL 0x3
+-/* LDT segment has TI set, GDT has it cleared */
+-#define SEGMENT_LDT 0x4
+-#define SEGMENT_GDT 0x0
-
+-#ifndef CONFIG_PARAVIRT
+-#define get_kernel_rpl() 0
+-#endif
-/*
-- * sigaltstack controls
+- * Matching rules for certain types of segments.
- */
--#define SS_ONSTACK 1
--#define SS_DISABLE 2
--
--#define MINSIGSTKSZ 2048
--#define SIGSTKSZ THREAD_SIZE
-
--#include <asm-generic/signal.h>
--
--#ifdef __KERNEL__
--struct old_sigaction {
-- __sighandler_t sa_handler;
-- old_sigset_t sa_mask;
-- unsigned long sa_flags;
-- void (*sa_restorer)(void);
--};
+-/* Matches only __KERNEL_CS, ignoring PnP / USER / APM segments */
+-#define SEGMENT_IS_KERNEL_CODE(x) (((x) & 0xfc) == GDT_ENTRY_KERNEL_CS * 8)
-
--struct sigaction {
-- __sighandler_t sa_handler;
-- unsigned long sa_flags;
-- void (*sa_restorer)(void);
-- sigset_t sa_mask; /* mask last for extensibility */
--};
+-/* Matches __KERNEL_CS and __USER_CS (they must be 2 entries apart) */
+-#define SEGMENT_IS_FLAT_CODE(x) (((x) & 0xec) == GDT_ENTRY_KERNEL_CS * 8)
-
--struct k_sigaction {
-- struct sigaction sa;
--};
--#else
--/* Here we must cater to libcs that poke about in kernel headers. */
+-/* Matches PNP_CS32 and PNP_CS16 (they must be consecutive) */
+-#define SEGMENT_IS_PNP_CODE(x) (((x) & 0xf4) == GDT_ENTRY_PNPBIOS_BASE * 8)
-
--struct sigaction {
-- union {
-- __sighandler_t _sa_handler;
-- void (*_sa_sigaction)(int, struct siginfo *, void *);
-- } _u;
-- sigset_t sa_mask;
-- unsigned long sa_flags;
-- void (*sa_restorer)(void);
--};
+-#endif
+diff --git a/include/asm-x86/segment_64.h b/include/asm-x86/segment_64.h
+deleted file mode 100644
+index 04b8ab2..0000000
+--- a/include/asm-x86/segment_64.h
++++ /dev/null
+@@ -1,53 +0,0 @@
+-#ifndef _ASM_SEGMENT_H
+-#define _ASM_SEGMENT_H
-
--#define sa_handler _u._sa_handler
--#define sa_sigaction _u._sa_sigaction
+-#include <asm/cache.h>
-
--#endif /* __KERNEL__ */
+-/* Simple and small GDT entries for booting only */
-
--typedef struct sigaltstack {
-- void *ss_sp;
-- int ss_flags;
-- size_t ss_size;
--} stack_t;
+-#define GDT_ENTRY_BOOT_CS 2
+-#define __BOOT_CS (GDT_ENTRY_BOOT_CS * 8)
-
--#ifdef __KERNEL__
--#include <asm/sigcontext.h>
+-#define GDT_ENTRY_BOOT_DS (GDT_ENTRY_BOOT_CS + 1)
+-#define __BOOT_DS (GDT_ENTRY_BOOT_DS * 8)
-
--#define sigmask(sig) (1UL << ((sig) - 1))
--#define ptrace_signal_deliver(regs, cookie) do { } while (0)
+-#define __KERNEL_CS 0x10
+-#define __KERNEL_DS 0x18
-
--#endif /* __KERNEL__ */
+-#define __KERNEL32_CS 0x08
-
--#endif /* __ASM_SH64_SIGNAL_H */
-diff --git a/include/asm-sh64/smp.h b/include/asm-sh64/smp.h
-deleted file mode 100644
-index 4a4d0da..0000000
---- a/include/asm-sh64/smp.h
-+++ /dev/null
-@@ -1,15 +0,0 @@
--#ifndef __ASM_SH64_SMP_H
--#define __ASM_SH64_SMP_H
+-/*
+- * we cannot use the same code segment descriptor for user and kernel
+- * -- not even in the long flat mode, because of different DPL /kkeil
+- * The segment offset needs to contain a RPL. Grr. -AK
+- * GDT layout to get 64bit syscall right (sysret hardcodes gdt offsets)
+- */
+-
+-#define __USER32_CS 0x23 /* 4*8+3 */
+-#define __USER_DS 0x2b /* 5*8+3 */
+-#define __USER_CS 0x33 /* 6*8+3 */
+-#define __USER32_DS __USER_DS
+-
+-#define GDT_ENTRY_TSS 8 /* needs two entries */
+-#define GDT_ENTRY_LDT 10 /* needs two entries */
+-#define GDT_ENTRY_TLS_MIN 12
+-#define GDT_ENTRY_TLS_MAX 14
+-
+-#define GDT_ENTRY_TLS_ENTRIES 3
+-
+-#define GDT_ENTRY_PER_CPU 15 /* Abused to load per CPU data from limit */
+-#define __PER_CPU_SEG (GDT_ENTRY_PER_CPU * 8 + 3)
+-
+-/* TLS indexes for 64bit - hardcoded in arch_prctl */
+-#define FS_TLS 0
+-#define GS_TLS 1
+-
+-#define GS_TLS_SEL ((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
+-#define FS_TLS_SEL ((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
+-
+-#define IDT_ENTRIES 256
+-#define GDT_ENTRIES 16
+-#define GDT_SIZE (GDT_ENTRIES * 8)
+-#define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)
+-
+-#endif
+diff --git a/include/asm-x86/semaphore_32.h b/include/asm-x86/semaphore_32.h
+index 835c1d7..ac96d38 100644
+--- a/include/asm-x86/semaphore_32.h
++++ b/include/asm-x86/semaphore_32.h
+@@ -83,10 +83,10 @@ static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+ sema_init(sem, 0);
+ }
+
+-fastcall void __down_failed(void /* special register calling convention */);
+-fastcall int __down_failed_interruptible(void /* params in registers */);
+-fastcall int __down_failed_trylock(void /* params in registers */);
+-fastcall void __up_wakeup(void /* special register calling convention */);
++extern asmregparm void __down_failed(atomic_t *count_ptr);
++extern asmregparm int __down_failed_interruptible(atomic_t *count_ptr);
++extern asmregparm int __down_failed_trylock(atomic_t *count_ptr);
++extern asmregparm void __up_wakeup(atomic_t *count_ptr);
+
+ /*
+ * This is ugly, but we want the default case to fall through.
+diff --git a/include/asm-x86/setup.h b/include/asm-x86/setup.h
+index 24d786e..071e054 100644
+--- a/include/asm-x86/setup.h
++++ b/include/asm-x86/setup.h
+@@ -3,6 +3,13 @@
+
+ #define COMMAND_LINE_SIZE 2048
+
++#ifndef __ASSEMBLY__
++char *machine_specific_memory_setup(void);
++#ifndef CONFIG_PARAVIRT
++#define paravirt_post_allocator_init() do {} while (0)
++#endif
++#endif /* __ASSEMBLY__ */
++
+ #ifdef __KERNEL__
+
+ #ifdef __i386__
+@@ -51,9 +58,7 @@ void __init add_memory_region(unsigned long long start,
+
+ extern unsigned long init_pg_tables_end;
+
+-#ifndef CONFIG_PARAVIRT
+-#define paravirt_post_allocator_init() do {} while (0)
+-#endif
++
+
+ #endif /* __i386__ */
+ #endif /* _SETUP */
+diff --git a/include/asm-x86/sigcontext.h b/include/asm-x86/sigcontext.h
+index c047f9d..681dead 100644
+--- a/include/asm-x86/sigcontext.h
++++ b/include/asm-x86/sigcontext.h
+@@ -63,20 +63,20 @@ struct sigcontext {
+ unsigned short fs, __fsh;
+ unsigned short es, __esh;
+ unsigned short ds, __dsh;
+- unsigned long edi;
+- unsigned long esi;
+- unsigned long ebp;
+- unsigned long esp;
+- unsigned long ebx;
+- unsigned long edx;
+- unsigned long ecx;
+- unsigned long eax;
++ unsigned long di;
++ unsigned long si;
++ unsigned long bp;
++ unsigned long sp;
++ unsigned long bx;
++ unsigned long dx;
++ unsigned long cx;
++ unsigned long ax;
+ unsigned long trapno;
+ unsigned long err;
+- unsigned long eip;
++ unsigned long ip;
+ unsigned short cs, __csh;
+- unsigned long eflags;
+- unsigned long esp_at_signal;
++ unsigned long flags;
++ unsigned long sp_at_signal;
+ unsigned short ss, __ssh;
+ struct _fpstate __user * fpstate;
+ unsigned long oldmask;
+@@ -111,16 +111,16 @@ struct sigcontext {
+ unsigned long r13;
+ unsigned long r14;
+ unsigned long r15;
+- unsigned long rdi;
+- unsigned long rsi;
+- unsigned long rbp;
+- unsigned long rbx;
+- unsigned long rdx;
+- unsigned long rax;
+- unsigned long rcx;
+- unsigned long rsp;
+- unsigned long rip;
+- unsigned long eflags; /* RFLAGS */
++ unsigned long di;
++ unsigned long si;
++ unsigned long bp;
++ unsigned long bx;
++ unsigned long dx;
++ unsigned long ax;
++ unsigned long cx;
++ unsigned long sp;
++ unsigned long ip;
++ unsigned long flags;
+ unsigned short cs;
+ unsigned short gs;
+ unsigned short fs;
+diff --git a/include/asm-x86/sigcontext32.h b/include/asm-x86/sigcontext32.h
+index 3d65703..6ffab4f 100644
+--- a/include/asm-x86/sigcontext32.h
++++ b/include/asm-x86/sigcontext32.h
+@@ -48,20 +48,20 @@ struct sigcontext_ia32 {
+ unsigned short fs, __fsh;
+ unsigned short es, __esh;
+ unsigned short ds, __dsh;
+- unsigned int edi;
+- unsigned int esi;
+- unsigned int ebp;
+- unsigned int esp;
+- unsigned int ebx;
+- unsigned int edx;
+- unsigned int ecx;
+- unsigned int eax;
++ unsigned int di;
++ unsigned int si;
++ unsigned int bp;
++ unsigned int sp;
++ unsigned int bx;
++ unsigned int dx;
++ unsigned int cx;
++ unsigned int ax;
+ unsigned int trapno;
+ unsigned int err;
+- unsigned int eip;
++ unsigned int ip;
+ unsigned short cs, __csh;
+- unsigned int eflags;
+- unsigned int esp_at_signal;
++ unsigned int flags;
++ unsigned int sp_at_signal;
+ unsigned short ss, __ssh;
+ unsigned int fpstate; /* really (struct _fpstate_ia32 *) */
+ unsigned int oldmask;
+diff --git a/include/asm-x86/signal.h b/include/asm-x86/signal.h
+index 987a422..aee7eca 100644
+--- a/include/asm-x86/signal.h
++++ b/include/asm-x86/signal.h
+@@ -245,21 +245,14 @@ static __inline__ int sigfindinword(unsigned long word)
+
+ struct pt_regs;
+
+-#define ptrace_signal_deliver(regs, cookie) \
+- do { \
+- if (current->ptrace & PT_DTRACE) { \
+- current->ptrace &= ~PT_DTRACE; \
+- (regs)->eflags &= ~TF_MASK; \
+- } \
+- } while (0)
-
+ #else /* __i386__ */
+
+ #undef __HAVE_ARCH_SIG_BITOPS
+
++#endif /* !__i386__ */
++
+ #define ptrace_signal_deliver(regs, cookie) do { } while (0)
+
+-#endif /* !__i386__ */
+ #endif /* __KERNEL__ */
+ #endif /* __ASSEMBLY__ */
+
+diff --git a/include/asm-x86/smp_32.h b/include/asm-x86/smp_32.h
+index e10b7af..56152e3 100644
+--- a/include/asm-x86/smp_32.h
++++ b/include/asm-x86/smp_32.h
+@@ -1,51 +1,41 @@
+ #ifndef __ASM_SMP_H
+ #define __ASM_SMP_H
+
++#ifndef __ASSEMBLY__
++#include <linux/cpumask.h>
++#include <linux/init.h>
++
+ /*
+ * We need the APIC definitions automatically as part of 'smp.h'
+ */
+-#ifndef __ASSEMBLY__
+-#include <linux/kernel.h>
+-#include <linux/threads.h>
+-#include <linux/cpumask.h>
++#ifdef CONFIG_X86_LOCAL_APIC
++# include <asm/mpspec.h>
++# include <asm/apic.h>
++# ifdef CONFIG_X86_IO_APIC
++# include <asm/io_apic.h>
++# endif
+ #endif
+
+-#if defined(CONFIG_X86_LOCAL_APIC) && !defined(__ASSEMBLY__)
+-#include <linux/bitops.h>
+-#include <asm/mpspec.h>
+-#include <asm/apic.h>
+-#ifdef CONFIG_X86_IO_APIC
+-#include <asm/io_apic.h>
+-#endif
+-#endif
++extern cpumask_t cpu_callout_map;
++extern cpumask_t cpu_callin_map;
+
+-#define BAD_APICID 0xFFu
+-#ifdef CONFIG_SMP
+-#ifndef __ASSEMBLY__
++extern int smp_num_siblings;
++extern unsigned int num_processors;
+
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/smp.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * Private routines/data
- */
+-
+ extern void smp_alloc_memory(void);
+-extern int pic_mode;
+-extern int smp_num_siblings;
+-DECLARE_PER_CPU(cpumask_t, cpu_sibling_map);
+-DECLARE_PER_CPU(cpumask_t, cpu_core_map);
++extern void lock_ipi_call_lock(void);
++extern void unlock_ipi_call_lock(void);
+
+ extern void (*mtrr_hook) (void);
+ extern void zap_low_mappings (void);
+-extern void lock_ipi_call_lock(void);
+-extern void unlock_ipi_call_lock(void);
+
+-#define MAX_APICID 256
+ extern u8 __initdata x86_cpu_to_apicid_init[];
+-extern void *x86_cpu_to_apicid_ptr;
+-DECLARE_PER_CPU(u8, x86_cpu_to_apicid);
+-
+-#define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu)
++extern void *x86_cpu_to_apicid_early_ptr;
+
+-extern void set_cpu_sibling_map(int cpu);
++DECLARE_PER_CPU(cpumask_t, cpu_sibling_map);
++DECLARE_PER_CPU(cpumask_t, cpu_core_map);
++DECLARE_PER_CPU(u8, cpu_llc_id);
++DECLARE_PER_CPU(u8, x86_cpu_to_apicid);
+
+ #ifdef CONFIG_HOTPLUG_CPU
+ extern void cpu_exit_clear(void);
+@@ -53,6 +43,9 @@ extern void cpu_uninit(void);
+ extern void remove_siblinginfo(int cpu);
+ #endif
+
++/* Globals due to paravirt */
++extern void set_cpu_sibling_map(int cpu);
++
+ struct smp_ops
+ {
+ void (*smp_prepare_boot_cpu)(void);
+@@ -67,6 +60,7 @@ struct smp_ops
+ int wait);
+ };
+
++#ifdef CONFIG_SMP
+ extern struct smp_ops smp_ops;
+
+ static inline void smp_prepare_boot_cpu(void)
+@@ -107,10 +101,12 @@ int native_cpu_up(unsigned int cpunum);
+ void native_smp_cpus_done(unsigned int max_cpus);
+
+ #ifndef CONFIG_PARAVIRT
+-#define startup_ipi_hook(phys_apicid, start_eip, start_esp) \
+-do { } while (0)
++#define startup_ipi_hook(phys_apicid, start_eip, start_esp) do { } while (0)
+ #endif
+
++extern int __cpu_disable(void);
++extern void __cpu_die(unsigned int cpu);
++
+ /*
+ * This function is needed by all SMP systems. It must _always_ be valid
+ * from the initial startup. We map APIC_BASE very early in page_setup(),
+@@ -119,9 +115,11 @@ do { } while (0)
+ DECLARE_PER_CPU(int, cpu_number);
+ #define raw_smp_processor_id() (x86_read_percpu(cpu_number))
+
+-extern cpumask_t cpu_callout_map;
+-extern cpumask_t cpu_callin_map;
+-extern cpumask_t cpu_possible_map;
++#define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu)
++
++extern int safe_smp_processor_id(void);
++
++void __cpuinit smp_store_cpu_info(int id);
+
+ /* We don't mark CPUs online until __cpu_up(), so we need another measure */
+ static inline int num_booting_cpus(void)
+@@ -129,56 +127,39 @@ static inline int num_booting_cpus(void)
+ return cpus_weight(cpu_callout_map);
+ }
+
+-extern int safe_smp_processor_id(void);
+-extern int __cpu_disable(void);
+-extern void __cpu_die(unsigned int cpu);
+-extern unsigned int num_processors;
-
--#endif /* __ASM_SH64_SMP_H */
-diff --git a/include/asm-sh64/socket.h b/include/asm-sh64/socket.h
-deleted file mode 100644
-index 1853f72..0000000
---- a/include/asm-sh64/socket.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_SOCKET_H
--#define __ASM_SH64_SOCKET_H
--
--#include <asm-sh/socket.h>
+-void __cpuinit smp_store_cpu_info(int id);
-
--#endif /* __ASM_SH64_SOCKET_H */
-diff --git a/include/asm-sh64/sockios.h b/include/asm-sh64/sockios.h
-deleted file mode 100644
-index 419e76f..0000000
---- a/include/asm-sh64/sockios.h
-+++ /dev/null
-@@ -1,25 +0,0 @@
--#ifndef __ASM_SH64_SOCKIOS_H
--#define __ASM_SH64_SOCKIOS_H
+-#endif /* !__ASSEMBLY__ */
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/sockios.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+ #else /* CONFIG_SMP */
+
+ #define safe_smp_processor_id() 0
+ #define cpu_physical_id(cpu) boot_cpu_physical_apicid
+
+-#define NO_PROC_ID 0xFF /* No processor magic marker */
-
--/* Socket-level I/O control calls. */
--#define FIOGETOWN _IOR('f', 123, int)
--#define FIOSETOWN _IOW('f', 124, int)
+-#endif /* CONFIG_SMP */
-
--#define SIOCATMARK _IOR('s', 7, int)
--#define SIOCSPGRP _IOW('s', 8, pid_t)
--#define SIOCGPGRP _IOR('s', 9, pid_t)
+-#ifndef __ASSEMBLY__
++#endif /* !CONFIG_SMP */
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+
+-#ifdef APIC_DEFINITION
++static __inline int logical_smp_processor_id(void)
++{
++ /* we don't want to mark this access volatile - bad code generation */
++ return GET_APIC_LOGICAL_ID(*(u32 *)(APIC_BASE + APIC_LDR));
++}
++
++# ifdef APIC_DEFINITION
+ extern int hard_smp_processor_id(void);
+-#else
+-#include <mach_apicdef.h>
++# else
++# include <mach_apicdef.h>
+ static inline int hard_smp_processor_id(void)
+ {
+ /* we don't want to mark this access volatile - bad code generation */
+- return GET_APIC_ID(*(unsigned long *)(APIC_BASE+APIC_ID));
++ return GET_APIC_ID(*(u32 *)(APIC_BASE + APIC_ID));
+ }
+-#endif /* APIC_DEFINITION */
++# endif /* APIC_DEFINITION */
+
+ #else /* CONFIG_X86_LOCAL_APIC */
+
+-#ifndef CONFIG_SMP
+-#define hard_smp_processor_id() 0
+-#endif
++# ifndef CONFIG_SMP
++# define hard_smp_processor_id() 0
++# endif
+
+ #endif /* CONFIG_X86_LOCAL_APIC */
+
+-extern u8 apicid_2_node[];
-
--#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp (timeval) */
--#define SIOCGSTAMPNS _IOR('s', 101, struct timespec) /* Get stamp (timespec) */
--#endif /* __ASM_SH64_SOCKIOS_H */
-diff --git a/include/asm-sh64/spinlock.h b/include/asm-sh64/spinlock.h
-deleted file mode 100644
-index 296b0c9..0000000
---- a/include/asm-sh64/spinlock.h
-+++ /dev/null
-@@ -1,17 +0,0 @@
--#ifndef __ASM_SH64_SPINLOCK_H
--#define __ASM_SH64_SPINLOCK_H
+-#ifdef CONFIG_X86_LOCAL_APIC
+-static __inline int logical_smp_processor_id(void)
+-{
+- /* we don't want to mark this access volatile - bad code generation */
+- return GET_APIC_LOGICAL_ID(*(unsigned long *)(APIC_BASE+APIC_LDR));
+-}
+-#endif
+-#endif
-
++#endif /* !ASSEMBLY */
+ #endif
+diff --git a/include/asm-x86/smp_64.h b/include/asm-x86/smp_64.h
+index ab612b0..e0a7551 100644
+--- a/include/asm-x86/smp_64.h
++++ b/include/asm-x86/smp_64.h
+@@ -1,130 +1,101 @@
+ #ifndef __ASM_SMP_H
+ #define __ASM_SMP_H
+
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/spinlock.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * We need the APIC definitions automatically as part of 'smp.h'
- */
+-#include <linux/threads.h>
+ #include <linux/cpumask.h>
+-#include <linux/bitops.h>
+ #include <linux/init.h>
+-extern int disable_apic;
+
+-#include <asm/mpspec.h>
++/*
++ * We need the APIC definitions automatically as part of 'smp.h'
++ */
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+-#include <asm/thread_info.h>
-
--#error "No SMP on SH64"
+-#ifdef CONFIG_SMP
-
--#endif /* __ASM_SH64_SPINLOCK_H */
-diff --git a/include/asm-sh64/stat.h b/include/asm-sh64/stat.h
-deleted file mode 100644
-index 86f551b..0000000
---- a/include/asm-sh64/stat.h
-+++ /dev/null
-@@ -1,88 +0,0 @@
--#ifndef __ASM_SH64_STAT_H
--#define __ASM_SH64_STAT_H
++#include <asm/mpspec.h>
+ #include <asm/pda.h>
++#include <asm/thread_info.h>
+
+-struct pt_regs;
-
+-extern cpumask_t cpu_present_mask;
+-extern cpumask_t cpu_possible_map;
+-extern cpumask_t cpu_online_map;
+ extern cpumask_t cpu_callout_map;
+ extern cpumask_t cpu_initialized;
+
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/stat.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * Private routines/data
+- */
+-
++extern int smp_num_siblings;
++extern unsigned int num_processors;
++
+ extern void smp_alloc_memory(void);
+-extern volatile unsigned long smp_invalidate_needed;
+ extern void lock_ipi_call_lock(void);
+ extern void unlock_ipi_call_lock(void);
+-extern int smp_num_siblings;
+-extern void smp_send_reschedule(int cpu);
++
+ extern int smp_call_function_mask(cpumask_t mask, void (*func)(void *),
+ void *info, int wait);
+
+-/*
+- * cpu_sibling_map and cpu_core_map now live
+- * in the per cpu area
- *
+- * extern cpumask_t cpu_sibling_map[NR_CPUS];
+- * extern cpumask_t cpu_core_map[NR_CPUS];
- */
++extern u16 __initdata x86_cpu_to_apicid_init[];
++extern u16 __initdata x86_bios_cpu_apicid_init[];
++extern void *x86_cpu_to_apicid_early_ptr;
++extern void *x86_bios_cpu_apicid_early_ptr;
++
+ DECLARE_PER_CPU(cpumask_t, cpu_sibling_map);
+ DECLARE_PER_CPU(cpumask_t, cpu_core_map);
+-DECLARE_PER_CPU(u8, cpu_llc_id);
-
--struct __old_kernel_stat {
-- unsigned short st_dev;
-- unsigned short st_ino;
-- unsigned short st_mode;
-- unsigned short st_nlink;
-- unsigned short st_uid;
-- unsigned short st_gid;
-- unsigned short st_rdev;
-- unsigned long st_size;
-- unsigned long st_atime;
-- unsigned long st_mtime;
-- unsigned long st_ctime;
--};
--
--struct stat {
-- unsigned short st_dev;
-- unsigned short __pad1;
-- unsigned long st_ino;
-- unsigned short st_mode;
-- unsigned short st_nlink;
-- unsigned short st_uid;
-- unsigned short st_gid;
-- unsigned short st_rdev;
-- unsigned short __pad2;
-- unsigned long st_size;
-- unsigned long st_blksize;
-- unsigned long st_blocks;
-- unsigned long st_atime;
-- unsigned long st_atime_nsec;
-- unsigned long st_mtime;
-- unsigned long st_mtime_nsec;
-- unsigned long st_ctime;
-- unsigned long st_ctime_nsec;
-- unsigned long __unused4;
-- unsigned long __unused5;
--};
+-#define SMP_TRAMPOLINE_BASE 0x6000
-
--/* This matches struct stat64 in glibc2.1, hence the absolutely
-- * insane amounts of padding around dev_t's.
+-/*
+- * On x86 all CPUs are mapped 1:1 to the APIC space.
+- * This simplifies scheduling and IPI sending and
+- * compresses data structures.
- */
--struct stat64 {
-- unsigned short st_dev;
-- unsigned char __pad0[10];
--
-- unsigned long st_ino;
-- unsigned int st_mode;
-- unsigned int st_nlink;
--
-- unsigned long st_uid;
-- unsigned long st_gid;
--
-- unsigned short st_rdev;
-- unsigned char __pad3[10];
--
-- long long st_size;
-- unsigned long st_blksize;
--
-- unsigned long st_blocks; /* Number 512-byte blocks allocated. */
-- unsigned long __pad4; /* future possible st_blocks high bits */
--
-- unsigned long st_atime;
-- unsigned long st_atime_nsec;
--
-- unsigned long st_mtime;
-- unsigned long st_mtime_nsec;
++DECLARE_PER_CPU(u16, cpu_llc_id);
++DECLARE_PER_CPU(u16, x86_cpu_to_apicid);
++DECLARE_PER_CPU(u16, x86_bios_cpu_apicid);
+
+-static inline int num_booting_cpus(void)
++static inline int cpu_present_to_apicid(int mps_cpu)
+ {
+- return cpus_weight(cpu_callout_map);
++ if (cpu_present(mps_cpu))
++ return (int)per_cpu(x86_bios_cpu_apicid, mps_cpu);
++ else
++ return BAD_APICID;
+ }
+
+-#define raw_smp_processor_id() read_pda(cpunumber)
++#ifdef CONFIG_SMP
++
++#define SMP_TRAMPOLINE_BASE 0x6000
+
+ extern int __cpu_disable(void);
+ extern void __cpu_die(unsigned int cpu);
+ extern void prefill_possible_map(void);
+-extern unsigned num_processors;
+ extern unsigned __cpuinitdata disabled_cpus;
+
+-#define NO_PROC_ID 0xFF /* No processor magic marker */
-
-- unsigned long st_ctime;
-- unsigned long st_ctime_nsec; /* will be high 32 bits of ctime someday */
+-#endif /* CONFIG_SMP */
++#define raw_smp_processor_id() read_pda(cpunumber)
++#define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu)
+
+-#define safe_smp_processor_id() smp_processor_id()
-
-- unsigned long __unused1;
-- unsigned long __unused2;
--};
+-static inline int hard_smp_processor_id(void)
+-{
+- /* we don't want to mark this access volatile - bad code generation */
+- return GET_APIC_ID(*(unsigned int *)(APIC_BASE+APIC_ID));
+-}
++#define stack_smp_processor_id() \
++ ({ \
++ struct thread_info *ti; \
++ __asm__("andq %%rsp,%0; ":"=r" (ti) : "0" (CURRENT_MASK)); \
++ ti->cpu; \
++})
+
+ /*
+- * Some lowlevel functions might want to know about
+- * the real APIC ID <-> CPU # mapping.
++ * On x86 all CPUs are mapped 1:1 to the APIC space. This simplifies
++ * scheduling and IPI sending and compresses data structures.
+ */
+-extern u8 __initdata x86_cpu_to_apicid_init[];
+-extern void *x86_cpu_to_apicid_ptr;
+-DECLARE_PER_CPU(u8, x86_cpu_to_apicid); /* physical ID */
+-extern u8 bios_cpu_apicid[];
-
--#endif /* __ASM_SH64_STAT_H */
-diff --git a/include/asm-sh64/statfs.h b/include/asm-sh64/statfs.h
+-static inline int cpu_present_to_apicid(int mps_cpu)
++static inline int num_booting_cpus(void)
+ {
+- if (mps_cpu < NR_CPUS)
+- return (int)bios_cpu_apicid[mps_cpu];
+- else
+- return BAD_APICID;
++ return cpus_weight(cpu_callout_map);
+ }
+
+-#ifndef CONFIG_SMP
++extern void smp_send_reschedule(int cpu);
++
++#else /* CONFIG_SMP */
++
++extern unsigned int boot_cpu_id;
++#define cpu_physical_id(cpu) boot_cpu_id
+ #define stack_smp_processor_id() 0
+-#define cpu_logical_map(x) (x)
+-#else
+-#include <asm/thread_info.h>
+-#define stack_smp_processor_id() \
+-({ \
+- struct thread_info *ti; \
+- __asm__("andq %%rsp,%0; ":"=r" (ti) : "0" (CURRENT_MASK)); \
+- ti->cpu; \
+-})
+-#endif
++
++#endif /* !CONFIG_SMP */
++
++#define safe_smp_processor_id() smp_processor_id()
+
+ static __inline int logical_smp_processor_id(void)
+ {
+ /* we don't want to mark this access volatile - bad code generation */
+- return GET_APIC_LOGICAL_ID(*(unsigned long *)(APIC_BASE+APIC_LDR));
++ return GET_APIC_LOGICAL_ID(*(u32 *)(APIC_BASE + APIC_LDR));
++}
++
++static inline int hard_smp_processor_id(void)
++{
++ /* we don't want to mark this access volatile - bad code generation */
++ return GET_APIC_ID(*(u32 *)(APIC_BASE + APIC_ID));
+ }
+
+-#ifdef CONFIG_SMP
+-#define cpu_physical_id(cpu) per_cpu(x86_cpu_to_apicid, cpu)
+-#else
+-extern unsigned int boot_cpu_id;
+-#define cpu_physical_id(cpu) boot_cpu_id
+-#endif /* !CONFIG_SMP */
+ #endif
+
+diff --git a/include/asm-x86/sparsemem.h b/include/asm-x86/sparsemem.h
+index 3f203b1..fa58cd5 100644
+--- a/include/asm-x86/sparsemem.h
++++ b/include/asm-x86/sparsemem.h
+@@ -1,5 +1,34 @@
++#ifndef _ASM_X86_SPARSEMEM_H
++#define _ASM_X86_SPARSEMEM_H
++
++#ifdef CONFIG_SPARSEMEM
++/*
++ * generic non-linear memory support:
++ *
++ * 1) we will not split memory into more chunks than will fit into the flags
++ * field of the struct page
++ *
++ * SECTION_SIZE_BITS 2^n: size of each section
++ * MAX_PHYSADDR_BITS 2^n: max size of physical address space
++ * MAX_PHYSMEM_BITS 2^n: how much memory we can have in that space
++ *
++ */
++
+ #ifdef CONFIG_X86_32
+-# include "sparsemem_32.h"
+-#else
+-# include "sparsemem_64.h"
++# ifdef CONFIG_X86_PAE
++# define SECTION_SIZE_BITS 30
++# define MAX_PHYSADDR_BITS 36
++# define MAX_PHYSMEM_BITS 36
++# else
++# define SECTION_SIZE_BITS 26
++# define MAX_PHYSADDR_BITS 32
++# define MAX_PHYSMEM_BITS 32
++# endif
++#else /* CONFIG_X86_32 */
++# define SECTION_SIZE_BITS 27 /* matt - 128 is convenient right now */
++# define MAX_PHYSADDR_BITS 40
++# define MAX_PHYSMEM_BITS 40
++#endif
++
++#endif /* CONFIG_SPARSEMEM */
+ #endif
+diff --git a/include/asm-x86/sparsemem_32.h b/include/asm-x86/sparsemem_32.h
deleted file mode 100644
-index 083fd79..0000000
---- a/include/asm-sh64/statfs.h
+index cfeed99..0000000
+--- a/include/asm-x86/sparsemem_32.h
+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_STATFS_H
--#define __ASM_SH64_STATFS_H
+@@ -1,31 +0,0 @@
+-#ifndef _I386_SPARSEMEM_H
+-#define _I386_SPARSEMEM_H
+-#ifdef CONFIG_SPARSEMEM
-
--#include <asm-generic/statfs.h>
+-/*
+- * generic non-linear memory support:
+- *
+- * 1) we will not split memory into more chunks than will fit into the
+- * flags field of the struct page
+- */
-
--#endif /* __ASM_SH64_STATFS_H */
-diff --git a/include/asm-sh64/string.h b/include/asm-sh64/string.h
+-/*
+- * SECTION_SIZE_BITS 2^N: how big each section will be
+- * MAX_PHYSADDR_BITS 2^N: how much physical address space we have
+- * MAX_PHYSMEM_BITS 2^N: how much memory we can have in that space
+- */
+-#ifdef CONFIG_X86_PAE
+-#define SECTION_SIZE_BITS 30
+-#define MAX_PHYSADDR_BITS 36
+-#define MAX_PHYSMEM_BITS 36
+-#else
+-#define SECTION_SIZE_BITS 26
+-#define MAX_PHYSADDR_BITS 32
+-#define MAX_PHYSMEM_BITS 32
+-#endif
+-
+-/* XXX: FIXME -- wli */
+-#define kern_addr_valid(kaddr) (0)
+-
+-#endif /* CONFIG_SPARSEMEM */
+-#endif /* _I386_SPARSEMEM_H */
+diff --git a/include/asm-x86/sparsemem_64.h b/include/asm-x86/sparsemem_64.h
deleted file mode 100644
-index 8a73573..0000000
---- a/include/asm-sh64/string.h
+index dabb167..0000000
+--- a/include/asm-x86/sparsemem_64.h
+++ /dev/null
-@@ -1,21 +0,0 @@
--#ifndef __ASM_SH64_STRING_H
--#define __ASM_SH64_STRING_H
+@@ -1,26 +0,0 @@
+-#ifndef _ASM_X86_64_SPARSEMEM_H
+-#define _ASM_X86_64_SPARSEMEM_H 1
+-
+-#ifdef CONFIG_SPARSEMEM
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/string.h
+- * generic non-linear memory support:
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * 1) we will not split memory into more chunks than will fit into the flags
+- * field of the struct page
- *
-- * Empty on purpose. ARCH SH64 ASM libs are out of the current project scope.
+- * SECTION_SIZE_BITS 2^n: size of each section
+- * MAX_PHYSADDR_BITS 2^n: max size of physical address space
+- * MAX_PHYSMEM_BITS 2^n: how much memory we can have in that space
- *
- */
-
--#define __HAVE_ARCH_MEMCPY
+-#define SECTION_SIZE_BITS 27 /* matt - 128 is convenient right now */
+-#define MAX_PHYSADDR_BITS 40
+-#define MAX_PHYSMEM_BITS 40
-
--extern void *memcpy(void *dest, const void *src, size_t count);
+-extern int early_pfn_to_nid(unsigned long pfn);
-
--#endif
-diff --git a/include/asm-sh64/system.h b/include/asm-sh64/system.h
+-#endif /* CONFIG_SPARSEMEM */
+-
+-#endif /* _ASM_X86_64_SPARSEMEM_H */
+diff --git a/include/asm-x86/spinlock.h b/include/asm-x86/spinlock.h
+index d74d85e..23804c1 100644
+--- a/include/asm-x86/spinlock.h
++++ b/include/asm-x86/spinlock.h
+@@ -1,5 +1,296 @@
++#ifndef _X86_SPINLOCK_H_
++#define _X86_SPINLOCK_H_
++
++#include <asm/atomic.h>
++#include <asm/rwlock.h>
++#include <asm/page.h>
++#include <asm/processor.h>
++#include <linux/compiler.h>
++
++/*
++ * Your basic SMP spinlocks, allowing only a single CPU anywhere
++ *
++ * Simple spin lock operations. There are two variants, one clears IRQ's
++ * on the local processor, one does not.
++ *
++ * These are fair FIFO ticket locks, which are currently limited to 256
++ * CPUs.
++ *
++ * (the type definitions are in asm/spinlock_types.h)
++ */
++
+ #ifdef CONFIG_X86_32
+-# include "spinlock_32.h"
++typedef char _slock_t;
++# define LOCK_INS_DEC "decb"
++# define LOCK_INS_XCH "xchgb"
++# define LOCK_INS_MOV "movb"
++# define LOCK_INS_CMP "cmpb"
++# define LOCK_PTR_REG "a"
+ #else
+-# include "spinlock_64.h"
++typedef int _slock_t;
++# define LOCK_INS_DEC "decl"
++# define LOCK_INS_XCH "xchgl"
++# define LOCK_INS_MOV "movl"
++# define LOCK_INS_CMP "cmpl"
++# define LOCK_PTR_REG "D"
++#endif
++
++#if defined(CONFIG_X86_32) && \
++ (defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE))
++/*
++ * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
++ * (PPro errata 66, 92)
++ */
++# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
++#else
++# define UNLOCK_LOCK_PREFIX
++#endif
++
++/*
++ * Ticket locks are conceptually two parts, one indicating the current head of
++ * the queue, and the other indicating the current tail. The lock is acquired
++ * by atomically noting the tail and incrementing it by one (thus adding
++ * ourself to the queue and noting our position), then waiting until the head
++ * becomes equal to the initial value of the tail.
++ *
++ * We use an xadd covering *both* parts of the lock, to increment the tail and
++ * also load the position of the head, which takes care of memory ordering
++ * issues and should be optimal for the uncontended case. Note the tail must be
++ * in the high part, because a wide xadd increment of the low part would carry
++ * up and contaminate the high part.
++ *
++ * With fewer than 2^8 possible CPUs, we can use x86's partial registers to
++ * save some instructions and make the code more elegant. There really isn't
++ * much between them in performance though, especially as locks are out of line.
++ */
++#if (NR_CPUS < 256)
++static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
++{
++ int tmp = *(volatile signed int *)(&(lock)->slock);
++
++ return (((tmp >> 8) & 0xff) != (tmp & 0xff));
++}
++
++static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
++{
++ int tmp = *(volatile signed int *)(&(lock)->slock);
++
++ return (((tmp >> 8) & 0xff) - (tmp & 0xff)) > 1;
++}
++
++static inline void __raw_spin_lock(raw_spinlock_t *lock)
++{
++ short inc = 0x0100;
++
++ __asm__ __volatile__ (
++ LOCK_PREFIX "xaddw %w0, %1\n"
++ "1:\t"
++ "cmpb %h0, %b0\n\t"
++ "je 2f\n\t"
++ "rep ; nop\n\t"
++ "movb %1, %b0\n\t"
++ /* don't need lfence here, because loads are in-order */
++ "jmp 1b\n"
++ "2:"
++ :"+Q" (inc), "+m" (lock->slock)
++ :
++ :"memory", "cc");
++}
++
++#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
++
++static inline int __raw_spin_trylock(raw_spinlock_t *lock)
++{
++ int tmp;
++ short new;
++
++ asm volatile(
++ "movw %2,%w0\n\t"
++ "cmpb %h0,%b0\n\t"
++ "jne 1f\n\t"
++ "movw %w0,%w1\n\t"
++ "incb %h1\n\t"
++ "lock ; cmpxchgw %w1,%2\n\t"
++ "1:"
++ "sete %b1\n\t"
++ "movzbl %b1,%0\n\t"
++ :"=&a" (tmp), "=Q" (new), "+m" (lock->slock)
++ :
++ : "memory", "cc");
++
++ return tmp;
++}
++
++static inline void __raw_spin_unlock(raw_spinlock_t *lock)
++{
++ __asm__ __volatile__(
++ UNLOCK_LOCK_PREFIX "incb %0"
++ :"+m" (lock->slock)
++ :
++ :"memory", "cc");
++}
++#else
++static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
++{
++ int tmp = *(volatile signed int *)(&(lock)->slock);
++
++ return (((tmp >> 16) & 0xffff) != (tmp & 0xffff));
++}
++
++static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
++{
++ int tmp = *(volatile signed int *)(&(lock)->slock);
++
++ return (((tmp >> 16) & 0xffff) - (tmp & 0xffff)) > 1;
++}
++
++static inline void __raw_spin_lock(raw_spinlock_t *lock)
++{
++ int inc = 0x00010000;
++ int tmp;
++
++ __asm__ __volatile__ (
++ "lock ; xaddl %0, %1\n"
++ "movzwl %w0, %2\n\t"
++ "shrl $16, %0\n\t"
++ "1:\t"
++ "cmpl %0, %2\n\t"
++ "je 2f\n\t"
++ "rep ; nop\n\t"
++ "movzwl %1, %2\n\t"
++ /* don't need lfence here, because loads are in-order */
++ "jmp 1b\n"
++ "2:"
++ :"+Q" (inc), "+m" (lock->slock), "=r" (tmp)
++ :
++ :"memory", "cc");
++}
++
++#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
++
++static inline int __raw_spin_trylock(raw_spinlock_t *lock)
++{
++ int tmp;
++ int new;
++
++ asm volatile(
++ "movl %2,%0\n\t"
++ "movl %0,%1\n\t"
++ "roll $16, %0\n\t"
++ "cmpl %0,%1\n\t"
++ "jne 1f\n\t"
++ "addl $0x00010000, %1\n\t"
++ "lock ; cmpxchgl %1,%2\n\t"
++ "1:"
++ "sete %b1\n\t"
++ "movzbl %b1,%0\n\t"
++ :"=&a" (tmp), "=r" (new), "+m" (lock->slock)
++ :
++ : "memory", "cc");
++
++ return tmp;
++}
++
++static inline void __raw_spin_unlock(raw_spinlock_t *lock)
++{
++ __asm__ __volatile__(
++ UNLOCK_LOCK_PREFIX "incw %0"
++ :"+m" (lock->slock)
++ :
++ :"memory", "cc");
++}
++#endif
++
++static inline void __raw_spin_unlock_wait(raw_spinlock_t *lock)
++{
++ while (__raw_spin_is_locked(lock))
++ cpu_relax();
++}
++
++/*
++ * Read-write spinlocks, allowing multiple readers
++ * but only one writer.
++ *
++ * NOTE! it is quite common to have readers in interrupts
++ * but no interrupt writers. For those circumstances we
++ * can "mix" irq-safe locks - any writer needs to get a
++ * irq-safe write-lock, but readers can get non-irqsafe
++ * read-locks.
++ *
++ * On x86, we implement read-write locks as a 32-bit counter
++ * with the high bit (sign) being the "contended" bit.
++ */
++
++/**
++ * read_can_lock - would read_trylock() succeed?
++ * @lock: the rwlock in question.
++ */
++static inline int __raw_read_can_lock(raw_rwlock_t *lock)
++{
++ return (int)(lock)->lock > 0;
++}
++
++/**
++ * write_can_lock - would write_trylock() succeed?
++ * @lock: the rwlock in question.
++ */
++static inline int __raw_write_can_lock(raw_rwlock_t *lock)
++{
++ return (lock)->lock == RW_LOCK_BIAS;
++}
++
++static inline void __raw_read_lock(raw_rwlock_t *rw)
++{
++ asm volatile(LOCK_PREFIX " subl $1,(%0)\n\t"
++ "jns 1f\n"
++ "call __read_lock_failed\n\t"
++ "1:\n"
++ ::LOCK_PTR_REG (rw) : "memory");
++}
++
++static inline void __raw_write_lock(raw_rwlock_t *rw)
++{
++ asm volatile(LOCK_PREFIX " subl %1,(%0)\n\t"
++ "jz 1f\n"
++ "call __write_lock_failed\n\t"
++ "1:\n"
++ ::LOCK_PTR_REG (rw), "i" (RW_LOCK_BIAS) : "memory");
++}
++
++static inline int __raw_read_trylock(raw_rwlock_t *lock)
++{
++ atomic_t *count = (atomic_t *)lock;
++
++ atomic_dec(count);
++ if (atomic_read(count) >= 0)
++ return 1;
++ atomic_inc(count);
++ return 0;
++}
++
++static inline int __raw_write_trylock(raw_rwlock_t *lock)
++{
++ atomic_t *count = (atomic_t *)lock;
++
++ if (atomic_sub_and_test(RW_LOCK_BIAS, count))
++ return 1;
++ atomic_add(RW_LOCK_BIAS, count);
++ return 0;
++}
++
++static inline void __raw_read_unlock(raw_rwlock_t *rw)
++{
++ asm volatile(LOCK_PREFIX "incl %0" :"+m" (rw->lock) : : "memory");
++}
++
++static inline void __raw_write_unlock(raw_rwlock_t *rw)
++{
++ asm volatile(LOCK_PREFIX "addl %1, %0"
++ : "+m" (rw->lock) : "i" (RW_LOCK_BIAS) : "memory");
++}
++
++#define _raw_spin_relax(lock) cpu_relax()
++#define _raw_read_relax(lock) cpu_relax()
++#define _raw_write_relax(lock) cpu_relax()
++
+ #endif
+diff --git a/include/asm-x86/spinlock_32.h b/include/asm-x86/spinlock_32.h
deleted file mode 100644
-index be2a15f..0000000
---- a/include/asm-sh64/system.h
+index d3bcebe..0000000
+--- a/include/asm-x86/spinlock_32.h
+++ /dev/null
-@@ -1,190 +0,0 @@
--#ifndef __ASM_SH64_SYSTEM_H
--#define __ASM_SH64_SYSTEM_H
+@@ -1,221 +0,0 @@
+-#ifndef __ASM_SPINLOCK_H
+-#define __ASM_SPINLOCK_H
+-
+-#include <asm/atomic.h>
+-#include <asm/rwlock.h>
+-#include <asm/page.h>
+-#include <asm/processor.h>
+-#include <linux/compiler.h>
+-
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define CLI_STRING "cli"
+-#define STI_STRING "sti"
+-#define CLI_STI_CLOBBERS
+-#define CLI_STI_INPUT_ARGS
+-#endif /* CONFIG_PARAVIRT */
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
+- * Your basic SMP spinlocks, allowing only a single CPU anywhere
- *
-- * include/asm-sh64/system.h
+- * Simple spin lock operations. There are two variants, one clears IRQ's
+- * on the local processor, one does not.
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- * Copyright (C) 2004 Richard Curnow
+- * We make no fairness assumptions. They have a cost.
- *
+- * (the type definitions are in asm/spinlock_types.h)
- */
-
--#include <asm/registers.h>
--#include <asm/processor.h>
+-static inline int __raw_spin_is_locked(raw_spinlock_t *x)
+-{
+- return *(volatile signed char *)(&(x)->slock) <= 0;
+-}
+-
+-static inline void __raw_spin_lock(raw_spinlock_t *lock)
+-{
+- asm volatile("\n1:\t"
+- LOCK_PREFIX " ; decb %0\n\t"
+- "jns 3f\n"
+- "2:\t"
+- "rep;nop\n\t"
+- "cmpb $0,%0\n\t"
+- "jle 2b\n\t"
+- "jmp 1b\n"
+- "3:\n\t"
+- : "+m" (lock->slock) : : "memory");
+-}
-
-/*
-- * switch_to() should switch tasks to task nr n, first
+- * It is easier for the lock validator if interrupts are not re-enabled
+- * in the middle of a lock-acquire. This is a performance feature anyway
+- * so we turn it off:
+- *
+- * NOTE: there's an irqs-on section here, which normally would have to be
+- * irq-traced, but on CONFIG_TRACE_IRQFLAGS we never use this variant.
- */
+-#ifndef CONFIG_PROVE_LOCKING
+-static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
+-{
+- asm volatile(
+- "\n1:\t"
+- LOCK_PREFIX " ; decb %[slock]\n\t"
+- "jns 5f\n"
+- "2:\t"
+- "testl $0x200, %[flags]\n\t"
+- "jz 4f\n\t"
+- STI_STRING "\n"
+- "3:\t"
+- "rep;nop\n\t"
+- "cmpb $0, %[slock]\n\t"
+- "jle 3b\n\t"
+- CLI_STRING "\n\t"
+- "jmp 1b\n"
+- "4:\t"
+- "rep;nop\n\t"
+- "cmpb $0, %[slock]\n\t"
+- "jg 1b\n\t"
+- "jmp 4b\n"
+- "5:\n\t"
+- : [slock] "+m" (lock->slock)
+- : [flags] "r" (flags)
+- CLI_STI_INPUT_ARGS
+- : "memory" CLI_STI_CLOBBERS);
+-}
+-#endif
-
--typedef struct {
-- unsigned long seg;
--} mm_segment_t;
--
--extern struct task_struct *sh64_switch_to(struct task_struct *prev,
-- struct thread_struct *prev_thread,
-- struct task_struct *next,
-- struct thread_struct *next_thread);
--
--#define switch_to(prev,next,last) \
-- do {\
-- if (last_task_used_math != next) {\
-- struct pt_regs *regs = next->thread.uregs;\
-- if (regs) regs->sr |= SR_FD;\
-- }\
-- last = sh64_switch_to(prev, &prev->thread, next, &next->thread);\
-- } while(0)
--
--#define nop() __asm__ __volatile__ ("nop")
--
--#define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
--
--extern void __xchg_called_with_bad_pointer(void);
+-static inline int __raw_spin_trylock(raw_spinlock_t *lock)
+-{
+- char oldval;
+- asm volatile(
+- "xchgb %b0,%1"
+- :"=q" (oldval), "+m" (lock->slock)
+- :"0" (0) : "memory");
+- return oldval > 0;
+-}
-
--#define mb() __asm__ __volatile__ ("synco": : :"memory")
--#define rmb() mb()
--#define wmb() __asm__ __volatile__ ("synco": : :"memory")
--#define read_barrier_depends() do { } while (0)
+-/*
+- * __raw_spin_unlock based on writing $1 to the low byte.
+- * This method works. Despite all the confusion.
+- * (except on PPro SMP or if we are using OOSTORE, so we use xchgb there)
+- * (PPro errata 66, 92)
+- */
-
--#ifdef CONFIG_SMP
--#define smp_mb() mb()
--#define smp_rmb() rmb()
--#define smp_wmb() wmb()
--#define smp_read_barrier_depends() read_barrier_depends()
--#else
--#define smp_mb() barrier()
--#define smp_rmb() barrier()
--#define smp_wmb() barrier()
--#define smp_read_barrier_depends() do { } while (0)
--#endif /* CONFIG_SMP */
+-#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
-
--#define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
+-static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+-{
+- asm volatile("movb $1,%0" : "+m" (lock->slock) :: "memory");
+-}
-
--/* Interrupt Control */
--#ifndef HARD_CLI
--#define SR_MASK_L 0x000000f0L
--#define SR_MASK_LL 0x00000000000000f0LL
-#else
--#define SR_MASK_L 0x10000000L
--#define SR_MASK_LL 0x0000000010000000LL
--#endif
-
--static __inline__ void local_irq_enable(void)
+-static inline void __raw_spin_unlock(raw_spinlock_t *lock)
-{
-- /* cli/sti based on SR.BL */
-- unsigned long long __dummy0, __dummy1=~SR_MASK_LL;
+- char oldval = 1;
-
-- __asm__ __volatile__("getcon " __SR ", %0\n\t"
-- "and %0, %1, %0\n\t"
-- "putcon %0, " __SR "\n\t"
-- : "=&r" (__dummy0)
-- : "r" (__dummy1));
+- asm volatile("xchgb %b0, %1"
+- : "=q" (oldval), "+m" (lock->slock)
+- : "0" (oldval) : "memory");
-}
-
--static __inline__ void local_irq_disable(void)
+-#endif
+-
+-static inline void __raw_spin_unlock_wait(raw_spinlock_t *lock)
-{
-- /* cli/sti based on SR.BL */
-- unsigned long long __dummy0, __dummy1=SR_MASK_LL;
-- __asm__ __volatile__("getcon " __SR ", %0\n\t"
-- "or %0, %1, %0\n\t"
-- "putcon %0, " __SR "\n\t"
-- : "=&r" (__dummy0)
-- : "r" (__dummy1));
+- while (__raw_spin_is_locked(lock))
+- cpu_relax();
-}
-
--#define local_save_flags(x) \
--(__extension__ ({ unsigned long long __dummy=SR_MASK_LL; \
-- __asm__ __volatile__( \
-- "getcon " __SR ", %0\n\t" \
-- "and %0, %1, %0" \
-- : "=&r" (x) \
-- : "r" (__dummy));}))
--
--#define local_irq_save(x) \
--(__extension__ ({ unsigned long long __d2=SR_MASK_LL, __d1; \
-- __asm__ __volatile__( \
-- "getcon " __SR ", %1\n\t" \
-- "or %1, r63, %0\n\t" \
-- "or %1, %2, %1\n\t" \
-- "putcon %1, " __SR "\n\t" \
-- "and %0, %2, %0" \
-- : "=&r" (x), "=&r" (__d1) \
-- : "r" (__d2));}));
--
--#define local_irq_restore(x) do { \
-- if ( ((x) & SR_MASK_L) == 0 ) /* dropping to 0 ? */ \
-- local_irq_enable(); /* yes...re-enable */ \
--} while (0)
--
--#define irqs_disabled() \
--({ \
-- unsigned long flags; \
-- local_save_flags(flags); \
-- (flags != 0); \
--})
+-/*
+- * Read-write spinlocks, allowing multiple readers
+- * but only one writer.
+- *
+- * NOTE! it is quite common to have readers in interrupts
+- * but no interrupt writers. For those circumstances we
+- * can "mix" irq-safe locks - any writer needs to get a
+- * irq-safe write-lock, but readers can get non-irqsafe
+- * read-locks.
+- *
+- * On x86, we implement read-write locks as a 32-bit counter
+- * with the high bit (sign) being the "contended" bit.
+- *
+- * The inline assembly is non-obvious. Think about it.
+- *
+- * Changed to use the same technique as rw semaphores. See
+- * semaphore.h for details. -ben
+- *
+- * the helpers are in arch/i386/kernel/semaphore.c
+- */
-
--static inline unsigned long xchg_u32(volatile int * m, unsigned long val)
+-/**
+- * read_can_lock - would read_trylock() succeed?
+- * @lock: the rwlock in question.
+- */
+-static inline int __raw_read_can_lock(raw_rwlock_t *x)
-{
-- unsigned long flags, retval;
--
-- local_irq_save(flags);
-- retval = *m;
-- *m = val;
-- local_irq_restore(flags);
-- return retval;
+- return (int)(x)->lock > 0;
-}
-
--static inline unsigned long xchg_u8(volatile unsigned char * m, unsigned long val)
+-/**
+- * write_can_lock - would write_trylock() succeed?
+- * @lock: the rwlock in question.
+- */
+-static inline int __raw_write_can_lock(raw_rwlock_t *x)
-{
-- unsigned long flags, retval;
--
-- local_irq_save(flags);
-- retval = *m;
-- *m = val & 0xff;
-- local_irq_restore(flags);
-- return retval;
+- return (x)->lock == RW_LOCK_BIAS;
-}
-
--static __inline__ unsigned long __xchg(unsigned long x, volatile void * ptr, int size)
+-static inline void __raw_read_lock(raw_rwlock_t *rw)
-{
-- switch (size) {
-- case 4:
-- return xchg_u32(ptr, x);
-- break;
-- case 1:
-- return xchg_u8(ptr, x);
-- break;
-- }
-- __xchg_called_with_bad_pointer();
-- return x;
+- asm volatile(LOCK_PREFIX " subl $1,(%0)\n\t"
+- "jns 1f\n"
+- "call __read_lock_failed\n\t"
+- "1:\n"
+- ::"a" (rw) : "memory");
-}
-
--/* XXX
-- * disable hlt during certain critical i/o operations
-- */
--#define HAVE_DISABLE_HLT
--void disable_hlt(void);
--void enable_hlt(void);
+-static inline void __raw_write_lock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX " subl $" RW_LOCK_BIAS_STR ",(%0)\n\t"
+- "jz 1f\n"
+- "call __write_lock_failed\n\t"
+- "1:\n"
+- ::"a" (rw) : "memory");
+-}
-
+-static inline int __raw_read_trylock(raw_rwlock_t *lock)
+-{
+- atomic_t *count = (atomic_t *)lock;
+- atomic_dec(count);
+- if (atomic_read(count) >= 0)
+- return 1;
+- atomic_inc(count);
+- return 0;
+-}
-
--#define smp_mb() barrier()
--#define smp_rmb() barrier()
--#define smp_wmb() barrier()
+-static inline int __raw_write_trylock(raw_rwlock_t *lock)
+-{
+- atomic_t *count = (atomic_t *)lock;
+- if (atomic_sub_and_test(RW_LOCK_BIAS, count))
+- return 1;
+- atomic_add(RW_LOCK_BIAS, count);
+- return 0;
+-}
-
--#ifdef CONFIG_SH_ALPHANUMERIC
--/* This is only used for debugging. */
--extern void print_seg(char *file,int line);
--#define PLS() print_seg(__FILE__,__LINE__)
--#else /* CONFIG_SH_ALPHANUMERIC */
--#define PLS()
--#endif /* CONFIG_SH_ALPHANUMERIC */
+-static inline void __raw_read_unlock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX "incl %0" :"+m" (rw->lock) : : "memory");
+-}
-
--#define PL() printk("@ <%s,%s:%d>\n",__FILE__,__FUNCTION__,__LINE__)
+-static inline void __raw_write_unlock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX "addl $" RW_LOCK_BIAS_STR ", %0"
+- : "+m" (rw->lock) : : "memory");
+-}
-
--#define arch_align_stack(x) (x)
+-#define _raw_spin_relax(lock) cpu_relax()
+-#define _raw_read_relax(lock) cpu_relax()
+-#define _raw_write_relax(lock) cpu_relax()
-
--#endif /* __ASM_SH64_SYSTEM_H */
-diff --git a/include/asm-sh64/termbits.h b/include/asm-sh64/termbits.h
+-#endif /* __ASM_SPINLOCK_H */
+diff --git a/include/asm-x86/spinlock_64.h b/include/asm-x86/spinlock_64.h
deleted file mode 100644
-index 86bde5e..0000000
---- a/include/asm-sh64/termbits.h
+index 88bf981..0000000
+--- a/include/asm-x86/spinlock_64.h
+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_TERMBITS_H
--#define __ASM_SH64_TERMBITS_H
--
--#include <asm-sh/termbits.h>
+@@ -1,167 +0,0 @@
+-#ifndef __ASM_SPINLOCK_H
+-#define __ASM_SPINLOCK_H
-
--#endif /* __ASM_SH64_TERMBITS_H */
-diff --git a/include/asm-sh64/termios.h b/include/asm-sh64/termios.h
-deleted file mode 100644
-index dc44e6e..0000000
---- a/include/asm-sh64/termios.h
-+++ /dev/null
-@@ -1,99 +0,0 @@
--#ifndef __ASM_SH64_TERMIOS_H
--#define __ASM_SH64_TERMIOS_H
+-#include <asm/atomic.h>
+-#include <asm/rwlock.h>
+-#include <asm/page.h>
+-#include <asm/processor.h>
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
+- * Your basic SMP spinlocks, allowing only a single CPU anywhere
- *
-- * include/asm-sh64/termios.h
+- * Simple spin lock operations. There are two variants, one clears IRQ's
+- * on the local processor, one does not.
- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
+- * We make no fairness assumptions. They have a cost.
- *
+- * (the type definitions are in asm/spinlock_types.h)
- */
-
--#include <asm/termbits.h>
--#include <asm/ioctls.h>
+-static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
+-{
+- return *(volatile signed int *)(&(lock)->slock) <= 0;
+-}
-
--struct winsize {
-- unsigned short ws_row;
-- unsigned short ws_col;
-- unsigned short ws_xpixel;
-- unsigned short ws_ypixel;
--};
+-static inline void __raw_spin_lock(raw_spinlock_t *lock)
+-{
+- asm volatile(
+- "\n1:\t"
+- LOCK_PREFIX " ; decl %0\n\t"
+- "jns 2f\n"
+- "3:\n"
+- "rep;nop\n\t"
+- "cmpl $0,%0\n\t"
+- "jle 3b\n\t"
+- "jmp 1b\n"
+- "2:\t" : "=m" (lock->slock) : : "memory");
+-}
-
--#define NCC 8
--struct termio {
-- unsigned short c_iflag; /* input mode flags */
-- unsigned short c_oflag; /* output mode flags */
-- unsigned short c_cflag; /* control mode flags */
-- unsigned short c_lflag; /* local mode flags */
-- unsigned char c_line; /* line discipline */
-- unsigned char c_cc[NCC]; /* control characters */
--};
+-/*
+- * Same as __raw_spin_lock, but reenable interrupts during spinning.
+- */
+-#ifndef CONFIG_PROVE_LOCKING
+-static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
+-{
+- asm volatile(
+- "\n1:\t"
+- LOCK_PREFIX " ; decl %0\n\t"
+- "jns 5f\n"
+- "testl $0x200, %1\n\t" /* interrupts were disabled? */
+- "jz 4f\n\t"
+- "sti\n"
+- "3:\t"
+- "rep;nop\n\t"
+- "cmpl $0, %0\n\t"
+- "jle 3b\n\t"
+- "cli\n\t"
+- "jmp 1b\n"
+- "4:\t"
+- "rep;nop\n\t"
+- "cmpl $0, %0\n\t"
+- "jg 1b\n\t"
+- "jmp 4b\n"
+- "5:\n\t"
+- : "+m" (lock->slock) : "r" ((unsigned)flags) : "memory");
+-}
+-#endif
-
--/* modem lines */
--#define TIOCM_LE 0x001
--#define TIOCM_DTR 0x002
--#define TIOCM_RTS 0x004
--#define TIOCM_ST 0x008
--#define TIOCM_SR 0x010
--#define TIOCM_CTS 0x020
--#define TIOCM_CAR 0x040
--#define TIOCM_RNG 0x080
--#define TIOCM_DSR 0x100
--#define TIOCM_CD TIOCM_CAR
--#define TIOCM_RI TIOCM_RNG
--#define TIOCM_OUT1 0x2000
--#define TIOCM_OUT2 0x4000
--#define TIOCM_LOOP 0x8000
+-static inline int __raw_spin_trylock(raw_spinlock_t *lock)
+-{
+- int oldval;
-
--/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+- asm volatile(
+- "xchgl %0,%1"
+- :"=q" (oldval), "=m" (lock->slock)
+- :"0" (0) : "memory");
-
--#ifdef __KERNEL__
+- return oldval > 0;
+-}
-
--/* intr=^C quit=^\ erase=del kill=^U
-- eof=^D vtime=\0 vmin=\1 sxtc=\0
-- start=^Q stop=^S susp=^Z eol=\0
-- reprint=^R discard=^U werase=^W lnext=^V
-- eol2=\0
--*/
--#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+-static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+-{
+- asm volatile("movl $1,%0" :"=m" (lock->slock) :: "memory");
+-}
+-
+-static inline void __raw_spin_unlock_wait(raw_spinlock_t *lock)
+-{
+- while (__raw_spin_is_locked(lock))
+- cpu_relax();
+-}
-
-/*
-- * Translate a "termio" structure into a "termios". Ugh.
+- * Read-write spinlocks, allowing multiple readers
+- * but only one writer.
+- *
+- * NOTE! it is quite common to have readers in interrupts
+- * but no interrupt writers. For those circumstances we
+- * can "mix" irq-safe locks - any writer needs to get a
+- * irq-safe write-lock, but readers can get non-irqsafe
+- * read-locks.
+- *
+- * On x86, we implement read-write locks as a 32-bit counter
+- * with the high bit (sign) being the "contended" bit.
- */
--#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
-- unsigned short __tmp; \
-- get_user(__tmp,&(termio)->x); \
-- *(unsigned short *) &(termios)->x = __tmp; \
+-
+-static inline int __raw_read_can_lock(raw_rwlock_t *lock)
+-{
+- return (int)(lock)->lock > 0;
-}
-
--#define user_termio_to_kernel_termios(termios, termio) \
--({ \
-- SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
-- SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
-- SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
-- SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
-- copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
--})
+-static inline int __raw_write_can_lock(raw_rwlock_t *lock)
+-{
+- return (lock)->lock == RW_LOCK_BIAS;
+-}
-
--/*
-- * Translate a "termios" structure into a "termio". Ugh.
-- */
--#define kernel_termios_to_user_termio(termio, termios) \
--({ \
-- put_user((termios)->c_iflag, &(termio)->c_iflag); \
-- put_user((termios)->c_oflag, &(termio)->c_oflag); \
-- put_user((termios)->c_cflag, &(termio)->c_cflag); \
-- put_user((termios)->c_lflag, &(termio)->c_lflag); \
-- put_user((termios)->c_line, &(termio)->c_line); \
-- copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \
--})
+-static inline void __raw_read_lock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX "subl $1,(%0)\n\t"
+- "jns 1f\n"
+- "call __read_lock_failed\n"
+- "1:\n"
+- ::"D" (rw), "i" (RW_LOCK_BIAS) : "memory");
+-}
-
--#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
--#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
+-static inline void __raw_write_lock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX "subl %1,(%0)\n\t"
+- "jz 1f\n"
+- "\tcall __write_lock_failed\n\t"
+- "1:\n"
+- ::"D" (rw), "i" (RW_LOCK_BIAS) : "memory");
+-}
-
--#endif /* __KERNEL__ */
+-static inline int __raw_read_trylock(raw_rwlock_t *lock)
+-{
+- atomic_t *count = (atomic_t *)lock;
+- atomic_dec(count);
+- if (atomic_read(count) >= 0)
+- return 1;
+- atomic_inc(count);
+- return 0;
+-}
-
--#endif /* __ASM_SH64_TERMIOS_H */
-diff --git a/include/asm-sh64/thread_info.h b/include/asm-sh64/thread_info.h
+-static inline int __raw_write_trylock(raw_rwlock_t *lock)
+-{
+- atomic_t *count = (atomic_t *)lock;
+- if (atomic_sub_and_test(RW_LOCK_BIAS, count))
+- return 1;
+- atomic_add(RW_LOCK_BIAS, count);
+- return 0;
+-}
+-
+-static inline void __raw_read_unlock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX " ; incl %0" :"=m" (rw->lock) : : "memory");
+-}
+-
+-static inline void __raw_write_unlock(raw_rwlock_t *rw)
+-{
+- asm volatile(LOCK_PREFIX " ; addl $" RW_LOCK_BIAS_STR ",%0"
+- : "=m" (rw->lock) : : "memory");
+-}
+-
+-#define _raw_spin_relax(lock) cpu_relax()
+-#define _raw_read_relax(lock) cpu_relax()
+-#define _raw_write_relax(lock) cpu_relax()
+-
+-#endif /* __ASM_SPINLOCK_H */
+diff --git a/include/asm-x86/spinlock_types.h b/include/asm-x86/spinlock_types.h
+index 4da9345..9029cf7 100644
+--- a/include/asm-x86/spinlock_types.h
++++ b/include/asm-x86/spinlock_types.h
+@@ -9,7 +9,7 @@ typedef struct {
+ unsigned int slock;
+ } raw_spinlock_t;
+
+-#define __RAW_SPIN_LOCK_UNLOCKED { 1 }
++#define __RAW_SPIN_LOCK_UNLOCKED { 0 }
+
+ typedef struct {
+ unsigned int lock;
+diff --git a/include/asm-x86/stacktrace.h b/include/asm-x86/stacktrace.h
+index 70dd5ba..30f8252 100644
+--- a/include/asm-x86/stacktrace.h
++++ b/include/asm-x86/stacktrace.h
+@@ -9,12 +9,13 @@ struct stacktrace_ops {
+ void (*warning)(void *data, char *msg);
+ /* msg must contain %s for the symbol */
+ void (*warning_symbol)(void *data, char *msg, unsigned long symbol);
+- void (*address)(void *data, unsigned long address);
++ void (*address)(void *data, unsigned long address, int reliable);
+ /* On negative return stop dumping */
+ int (*stack)(void *data, char *name);
+ };
+
+-void dump_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack,
++void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
++ unsigned long *stack, unsigned long bp,
+ const struct stacktrace_ops *ops, void *data);
+
+ #endif
+diff --git a/include/asm-x86/suspend_32.h b/include/asm-x86/suspend_32.h
+index a252073..1bbda3a 100644
+--- a/include/asm-x86/suspend_32.h
++++ b/include/asm-x86/suspend_32.h
+@@ -12,8 +12,8 @@ static inline int arch_prepare_suspend(void) { return 0; }
+ struct saved_context {
+ u16 es, fs, gs, ss;
+ unsigned long cr0, cr2, cr3, cr4;
+- struct Xgt_desc_struct gdt;
+- struct Xgt_desc_struct idt;
++ struct desc_ptr gdt;
++ struct desc_ptr idt;
+ u16 ldt;
+ u16 tss;
+ unsigned long tr;
+diff --git a/include/asm-x86/suspend_64.h b/include/asm-x86/suspend_64.h
+index c505a76..2eb92cb 100644
+--- a/include/asm-x86/suspend_64.h
++++ b/include/asm-x86/suspend_64.h
+@@ -15,7 +15,14 @@ arch_prepare_suspend(void)
+ return 0;
+ }
+
+-/* Image of the saved processor state. If you touch this, fix acpi/wakeup.S. */
++/*
++ * Image of the saved processor state, used by the low level ACPI suspend to
++ * RAM code and by the low level hibernation code.
++ *
++ * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that
++ * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c,
++ * still work as required.
++ */
+ struct saved_context {
+ struct pt_regs regs;
+ u16 ds, es, fs, gs, ss;
+@@ -38,8 +45,6 @@ struct saved_context {
+ #define loaddebug(thread,register) \
+ set_debugreg((thread)->debugreg##register, register)
+
+-extern void fix_processor_context(void);
+-
+ /* routines for saving/restoring kernel state */
+ extern int acpi_save_state_mem(void);
+ extern char core_restore_code;
+diff --git a/include/asm-x86/system.h b/include/asm-x86/system.h
+index 692562b..ee32ef9 100644
+--- a/include/asm-x86/system.h
++++ b/include/asm-x86/system.h
+@@ -1,5 +1,414 @@
++#ifndef _ASM_X86_SYSTEM_H_
++#define _ASM_X86_SYSTEM_H_
++
++#include <asm/asm.h>
++#include <asm/segment.h>
++#include <asm/cpufeature.h>
++#include <asm/cmpxchg.h>
++#include <asm/nops.h>
++
++#include <linux/kernel.h>
++#include <linux/irqflags.h>
++
++/* entries in ARCH_DLINFO: */
++#ifdef CONFIG_IA32_EMULATION
++# define AT_VECTOR_SIZE_ARCH 2
++#else
++# define AT_VECTOR_SIZE_ARCH 1
++#endif
++
++#ifdef CONFIG_X86_32
++
++struct task_struct; /* one of the stranger aspects of C forward declarations */
++extern struct task_struct *FASTCALL(__switch_to(struct task_struct *prev,
++ struct task_struct *next));
++
++/*
++ * Saving eflags is important. It switches not only IOPL between tasks,
++ * it also protects other tasks from NT leaking through sysenter etc.
++ */
++#define switch_to(prev, next, last) do { \
++ unsigned long esi, edi; \
++ asm volatile("pushfl\n\t" /* Save flags */ \
++ "pushl %%ebp\n\t" \
++ "movl %%esp,%0\n\t" /* save ESP */ \
++ "movl %5,%%esp\n\t" /* restore ESP */ \
++ "movl $1f,%1\n\t" /* save EIP */ \
++ "pushl %6\n\t" /* restore EIP */ \
++ "jmp __switch_to\n" \
++ "1:\t" \
++ "popl %%ebp\n\t" \
++ "popfl" \
++ :"=m" (prev->thread.sp), "=m" (prev->thread.ip), \
++ "=a" (last), "=S" (esi), "=D" (edi) \
++ :"m" (next->thread.sp), "m" (next->thread.ip), \
++ "2" (prev), "d" (next)); \
++} while (0)
++
++/*
++ * disable hlt during certain critical i/o operations
++ */
++#define HAVE_DISABLE_HLT
++#else
++#define __SAVE(reg, offset) "movq %%" #reg ",(14-" #offset ")*8(%%rsp)\n\t"
++#define __RESTORE(reg, offset) "movq (14-" #offset ")*8(%%rsp),%%" #reg "\n\t"
++
++/* frame pointer must be last for get_wchan */
++#define SAVE_CONTEXT "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
++#define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"
++
++#define __EXTRA_CLOBBER \
++ , "rcx", "rbx", "rdx", "r8", "r9", "r10", "r11", \
++ "r12", "r13", "r14", "r15"
++
++/* Save restore flags to clear handle leaking NT */
++#define switch_to(prev, next, last) \
++ asm volatile(SAVE_CONTEXT \
++ "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
++ "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
++ "call __switch_to\n\t" \
++ ".globl thread_return\n" \
++ "thread_return:\n\t" \
++ "movq %%gs:%P[pda_pcurrent],%%rsi\n\t" \
++ "movq %P[thread_info](%%rsi),%%r8\n\t" \
++ LOCK_PREFIX "btr %[tif_fork],%P[ti_flags](%%r8)\n\t" \
++ "movq %%rax,%%rdi\n\t" \
++ "jc ret_from_fork\n\t" \
++ RESTORE_CONTEXT \
++ : "=a" (last) \
++ : [next] "S" (next), [prev] "D" (prev), \
++ [threadrsp] "i" (offsetof(struct task_struct, thread.sp)), \
++ [ti_flags] "i" (offsetof(struct thread_info, flags)), \
++ [tif_fork] "i" (TIF_FORK), \
++ [thread_info] "i" (offsetof(struct task_struct, stack)), \
++ [pda_pcurrent] "i" (offsetof(struct x8664_pda, pcurrent)) \
++ : "memory", "cc" __EXTRA_CLOBBER)
++#endif
++
++#ifdef __KERNEL__
++#define _set_base(addr, base) do { unsigned long __pr; \
++__asm__ __volatile__ ("movw %%dx,%1\n\t" \
++ "rorl $16,%%edx\n\t" \
++ "movb %%dl,%2\n\t" \
++ "movb %%dh,%3" \
++ :"=&d" (__pr) \
++ :"m" (*((addr)+2)), \
++ "m" (*((addr)+4)), \
++ "m" (*((addr)+7)), \
++ "0" (base) \
++ ); } while (0)
++
++#define _set_limit(addr, limit) do { unsigned long __lr; \
++__asm__ __volatile__ ("movw %%dx,%1\n\t" \
++ "rorl $16,%%edx\n\t" \
++ "movb %2,%%dh\n\t" \
++ "andb $0xf0,%%dh\n\t" \
++ "orb %%dh,%%dl\n\t" \
++ "movb %%dl,%2" \
++ :"=&d" (__lr) \
++ :"m" (*(addr)), \
++ "m" (*((addr)+6)), \
++ "0" (limit) \
++ ); } while (0)
++
++#define set_base(ldt, base) _set_base(((char *)&(ldt)) , (base))
++#define set_limit(ldt, limit) _set_limit(((char *)&(ldt)) , ((limit)-1))
++
++extern void load_gs_index(unsigned);
++
++/*
++ * Load a segment. Fall back on loading the zero
++ * segment if something goes wrong..
++ */
++#define loadsegment(seg, value) \
++ asm volatile("\n" \
++ "1:\t" \
++ "movl %k0,%%" #seg "\n" \
++ "2:\n" \
++ ".section .fixup,\"ax\"\n" \
++ "3:\t" \
++ "movl %k1, %%" #seg "\n\t" \
++ "jmp 2b\n" \
++ ".previous\n" \
++ ".section __ex_table,\"a\"\n\t" \
++ _ASM_ALIGN "\n\t" \
++ _ASM_PTR " 1b,3b\n" \
++ ".previous" \
++ : :"r" (value), "r" (0))
++
++
++/*
++ * Save a segment register away
++ */
++#define savesegment(seg, value) \
++ asm volatile("mov %%" #seg ",%0":"=rm" (value))
++
++static inline unsigned long get_limit(unsigned long segment)
++{
++ unsigned long __limit;
++ __asm__("lsll %1,%0"
++ :"=r" (__limit):"r" (segment));
++ return __limit+1;
++}
++
++static inline void native_clts(void)
++{
++ asm volatile ("clts");
++}
++
++/*
++ * Volatile isn't enough to prevent the compiler from reordering the
++ * read/write functions for the control registers and messing everything up.
++ * A memory clobber would solve the problem, but would prevent reordering of
++ * all loads stores around it, which can hurt performance. Solution is to
++ * use a variable and mimic reads and writes to it to enforce serialization
++ */
++static unsigned long __force_order;
++
++static inline unsigned long native_read_cr0(void)
++{
++ unsigned long val;
++ asm volatile("mov %%cr0,%0\n\t" :"=r" (val), "=m" (__force_order));
++ return val;
++}
++
++static inline void native_write_cr0(unsigned long val)
++{
++ asm volatile("mov %0,%%cr0": :"r" (val), "m" (__force_order));
++}
++
++static inline unsigned long native_read_cr2(void)
++{
++ unsigned long val;
++ asm volatile("mov %%cr2,%0\n\t" :"=r" (val), "=m" (__force_order));
++ return val;
++}
++
++static inline void native_write_cr2(unsigned long val)
++{
++ asm volatile("mov %0,%%cr2": :"r" (val), "m" (__force_order));
++}
++
++static inline unsigned long native_read_cr3(void)
++{
++ unsigned long val;
++ asm volatile("mov %%cr3,%0\n\t" :"=r" (val), "=m" (__force_order));
++ return val;
++}
++
++static inline void native_write_cr3(unsigned long val)
++{
++ asm volatile("mov %0,%%cr3": :"r" (val), "m" (__force_order));
++}
++
++static inline unsigned long native_read_cr4(void)
++{
++ unsigned long val;
++ asm volatile("mov %%cr4,%0\n\t" :"=r" (val), "=m" (__force_order));
++ return val;
++}
++
++static inline unsigned long native_read_cr4_safe(void)
++{
++ unsigned long val;
++ /* This could fault if %cr4 does not exist. In x86_64, a cr4 always
++ * exists, so it will never fail. */
++#ifdef CONFIG_X86_32
++ asm volatile("1: mov %%cr4, %0 \n"
++ "2: \n"
++ ".section __ex_table,\"a\" \n"
++ ".long 1b,2b \n"
++ ".previous \n"
++ : "=r" (val), "=m" (__force_order) : "0" (0));
++#else
++ val = native_read_cr4();
++#endif
++ return val;
++}
++
++static inline void native_write_cr4(unsigned long val)
++{
++ asm volatile("mov %0,%%cr4": :"r" (val), "m" (__force_order));
++}
++
++#ifdef CONFIG_X86_64
++static inline unsigned long native_read_cr8(void)
++{
++ unsigned long cr8;
++ asm volatile("movq %%cr8,%0" : "=r" (cr8));
++ return cr8;
++}
++
++static inline void native_write_cr8(unsigned long val)
++{
++ asm volatile("movq %0,%%cr8" :: "r" (val) : "memory");
++}
++#endif
++
++static inline void native_wbinvd(void)
++{
++ asm volatile("wbinvd": : :"memory");
++}
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#define read_cr0() (native_read_cr0())
++#define write_cr0(x) (native_write_cr0(x))
++#define read_cr2() (native_read_cr2())
++#define write_cr2(x) (native_write_cr2(x))
++#define read_cr3() (native_read_cr3())
++#define write_cr3(x) (native_write_cr3(x))
++#define read_cr4() (native_read_cr4())
++#define read_cr4_safe() (native_read_cr4_safe())
++#define write_cr4(x) (native_write_cr4(x))
++#define wbinvd() (native_wbinvd())
++#ifdef CONFIG_X86_64
++#define read_cr8() (native_read_cr8())
++#define write_cr8(x) (native_write_cr8(x))
++#endif
++
++/* Clear the 'TS' bit */
++#define clts() (native_clts())
++
++#endif/* CONFIG_PARAVIRT */
++
++#define stts() write_cr0(8 | read_cr0())
++
++#endif /* __KERNEL__ */
++
++static inline void clflush(void *__p)
++{
++ asm volatile("clflush %0" : "+m" (*(char __force *)__p));
++}
++
++#define nop() __asm__ __volatile__ ("nop")
++
++void disable_hlt(void);
++void enable_hlt(void);
++
++extern int es7000_plat;
++void cpu_idle_wait(void);
++
++extern unsigned long arch_align_stack(unsigned long sp);
++extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
++
++void default_idle(void);
++
++/*
++ * Force strict CPU ordering.
++ * And yes, this is required on UP too when we're talking
++ * to devices.
++ */
+ #ifdef CONFIG_X86_32
+-# include "system_32.h"
++/*
++ * For now, "wmb()" doesn't actually do anything, as all
++ * Intel CPU's follow what Intel calls a *Processor Order*,
++ * in which all writes are seen in the program order even
++ * outside the CPU.
++ *
++ * I expect future Intel CPU's to have a weaker ordering,
++ * but I'd also expect them to finally get their act together
++ * and add some real memory barriers if so.
++ *
++ * Some non intel clones support out of order store. wmb() ceases to be a
++ * nop for these.
++ */
++#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
++#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
++#define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
+ #else
+-# include "system_64.h"
++#define mb() asm volatile("mfence":::"memory")
++#define rmb() asm volatile("lfence":::"memory")
++#define wmb() asm volatile("sfence" ::: "memory")
++#endif
++
++/**
++ * read_barrier_depends - Flush all pending reads that subsequents reads
++ * depend on.
++ *
++ * No data-dependent reads from memory-like regions are ever reordered
++ * over this barrier. All reads preceding this primitive are guaranteed
++ * to access memory (but not necessarily other CPUs' caches) before any
++ * reads following this primitive that depend on the data return by
++ * any of the preceding reads. This primitive is much lighter weight than
++ * rmb() on most CPUs, and is never heavier weight than is
++ * rmb().
++ *
++ * These ordering constraints are respected by both the local CPU
++ * and the compiler.
++ *
++ * Ordering is not guaranteed by anything other than these primitives,
++ * not even by data dependencies. See the documentation for
++ * memory_barrier() for examples and URLs to more information.
++ *
++ * For example, the following code would force ordering (the initial
++ * value of "a" is zero, "b" is one, and "p" is "&a"):
++ *
++ * <programlisting>
++ * CPU 0 CPU 1
++ *
++ * b = 2;
++ * memory_barrier();
++ * p = &b; q = p;
++ * read_barrier_depends();
++ * d = *q;
++ * </programlisting>
++ *
++ * because the read of "*q" depends on the read of "p" and these
++ * two reads are separated by a read_barrier_depends(). However,
++ * the following code, with the same initial values for "a" and "b":
++ *
++ * <programlisting>
++ * CPU 0 CPU 1
++ *
++ * a = 2;
++ * memory_barrier();
++ * b = 3; y = b;
++ * read_barrier_depends();
++ * x = a;
++ * </programlisting>
++ *
++ * does not enforce ordering, since there is no data dependency between
++ * the read of "a" and the read of "b". Therefore, on some CPUs, such
++ * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
++ * in cases like this where there are no data dependencies.
++ **/
++
++#define read_barrier_depends() do { } while (0)
++
++#ifdef CONFIG_SMP
++#define smp_mb() mb()
++#ifdef CONFIG_X86_PPRO_FENCE
++# define smp_rmb() rmb()
++#else
++# define smp_rmb() barrier()
++#endif
++#ifdef CONFIG_X86_OOSTORE
++# define smp_wmb() wmb()
++#else
++# define smp_wmb() barrier()
++#endif
++#define smp_read_barrier_depends() read_barrier_depends()
++#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
++#else
++#define smp_mb() barrier()
++#define smp_rmb() barrier()
++#define smp_wmb() barrier()
++#define smp_read_barrier_depends() do { } while (0)
++#define set_mb(var, value) do { var = value; barrier(); } while (0)
++#endif
++
++/*
++ * Stop RDTSC speculation. This is needed when you need to use RDTSC
++ * (or get_cycles or vread that possibly accesses the TSC) in a defined
++ * code region.
++ *
++ * (Could use an alternative three way for this if there was one.)
++ */
++static inline void rdtsc_barrier(void)
++{
++ alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
++ alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
++}
++
+ #endif
+diff --git a/include/asm-x86/system_32.h b/include/asm-x86/system_32.h
deleted file mode 100644
-index f6d5117..0000000
---- a/include/asm-sh64/thread_info.h
+index ef84688..0000000
+--- a/include/asm-x86/system_32.h
+++ /dev/null
-@@ -1,91 +0,0 @@
--#ifndef __ASM_SH64_THREAD_INFO_H
--#define __ASM_SH64_THREAD_INFO_H
+@@ -1,320 +0,0 @@
+-#ifndef __ASM_SYSTEM_H
+-#define __ASM_SYSTEM_H
-
--/*
-- * SuperH 5 version
-- * Copyright (C) 2003 Paul Mundt
-- */
+-#include <linux/kernel.h>
+-#include <asm/segment.h>
+-#include <asm/cpufeature.h>
+-#include <asm/cmpxchg.h>
-
-#ifdef __KERNEL__
+-#define AT_VECTOR_SIZE_ARCH 2 /* entries in ARCH_DLINFO */
-
--#ifndef __ASSEMBLY__
--#include <asm/registers.h>
+-struct task_struct; /* one of the stranger aspects of C forward declarations.. */
+-extern struct task_struct * FASTCALL(__switch_to(struct task_struct *prev, struct task_struct *next));
-
-/*
-- * low level task data that entry.S needs immediate access to
-- * - this struct should fit entirely inside of one cache line
-- * - this struct shares the supervisor stack pages
-- * - if the contents of this structure are changed, the assembly constants must also be changed
+- * Saving eflags is important. It switches not only IOPL between tasks,
+- * it also protects other tasks from NT leaking through sysenter etc.
- */
--struct thread_info {
-- struct task_struct *task; /* main task structure */
-- struct exec_domain *exec_domain; /* execution domain */
-- unsigned long flags; /* low level flags */
-- /* Put the 4 32-bit fields together to make asm offsetting easier. */
-- int preempt_count; /* 0 => preemptable, <0 => BUG */
-- __u16 cpu;
+-#define switch_to(prev,next,last) do { \
+- unsigned long esi,edi; \
+- asm volatile("pushfl\n\t" /* Save flags */ \
+- "pushl %%ebp\n\t" \
+- "movl %%esp,%0\n\t" /* save ESP */ \
+- "movl %5,%%esp\n\t" /* restore ESP */ \
+- "movl $1f,%1\n\t" /* save EIP */ \
+- "pushl %6\n\t" /* restore EIP */ \
+- "jmp __switch_to\n" \
+- "1:\t" \
+- "popl %%ebp\n\t" \
+- "popfl" \
+- :"=m" (prev->thread.esp),"=m" (prev->thread.eip), \
+- "=a" (last),"=S" (esi),"=D" (edi) \
+- :"m" (next->thread.esp),"m" (next->thread.eip), \
+- "2" (prev), "d" (next)); \
+-} while (0)
-
-- mm_segment_t addr_limit;
-- struct restart_block restart_block;
+-#define _set_base(addr,base) do { unsigned long __pr; \
+-__asm__ __volatile__ ("movw %%dx,%1\n\t" \
+- "rorl $16,%%edx\n\t" \
+- "movb %%dl,%2\n\t" \
+- "movb %%dh,%3" \
+- :"=&d" (__pr) \
+- :"m" (*((addr)+2)), \
+- "m" (*((addr)+4)), \
+- "m" (*((addr)+7)), \
+- "0" (base) \
+- ); } while(0)
+-
+-#define _set_limit(addr,limit) do { unsigned long __lr; \
+-__asm__ __volatile__ ("movw %%dx,%1\n\t" \
+- "rorl $16,%%edx\n\t" \
+- "movb %2,%%dh\n\t" \
+- "andb $0xf0,%%dh\n\t" \
+- "orb %%dh,%%dl\n\t" \
+- "movb %%dl,%2" \
+- :"=&d" (__lr) \
+- :"m" (*(addr)), \
+- "m" (*((addr)+6)), \
+- "0" (limit) \
+- ); } while(0)
+-
+-#define set_base(ldt,base) _set_base( ((char *)&(ldt)) , (base) )
+-#define set_limit(ldt,limit) _set_limit( ((char *)&(ldt)) , ((limit)-1) )
+-
+-/*
+- * Load a segment. Fall back on loading the zero
+- * segment if something goes wrong..
+- */
+-#define loadsegment(seg,value) \
+- asm volatile("\n" \
+- "1:\t" \
+- "mov %0,%%" #seg "\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\t" \
+- "pushl $0\n\t" \
+- "popl %%" #seg "\n\t" \
+- "jmp 2b\n" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".align 4\n\t" \
+- ".long 1b,3b\n" \
+- ".previous" \
+- : :"rm" (value))
+-
+-/*
+- * Save a segment register away
+- */
+-#define savesegment(seg, value) \
+- asm volatile("mov %%" #seg ",%0":"=rm" (value))
-
-- __u8 supervisor_stack[0];
--};
-
--/*
-- * macros/functions for gaining access to the thread information structure
-- */
--#define INIT_THREAD_INFO(tsk) \
--{ \
-- .task = &tsk, \
-- .exec_domain = &default_exec_domain, \
-- .flags = 0, \
-- .cpu = 0, \
-- .preempt_count = 1, \
-- .addr_limit = KERNEL_DS, \
-- .restart_block = { \
-- .fn = do_no_restart_syscall, \
-- }, \
+-static inline void native_clts(void)
+-{
+- asm volatile ("clts");
-}
-
--#define init_thread_info (init_thread_union.thread_info)
--#define init_stack (init_thread_union.stack)
--
--/* how to get the thread information struct from C */
--static inline struct thread_info *current_thread_info(void)
+-static inline unsigned long native_read_cr0(void)
-{
-- struct thread_info *ti;
--
-- __asm__ __volatile__ ("getcon " __KCR0 ", %0\n\t" : "=r" (ti));
+- unsigned long val;
+- asm volatile("movl %%cr0,%0\n\t" :"=r" (val));
+- return val;
+-}
-
-- return ti;
+-static inline void native_write_cr0(unsigned long val)
+-{
+- asm volatile("movl %0,%%cr0": :"r" (val));
-}
-
--/* thread information allocation */
+-static inline unsigned long native_read_cr2(void)
+-{
+- unsigned long val;
+- asm volatile("movl %%cr2,%0\n\t" :"=r" (val));
+- return val;
+-}
-
+-static inline void native_write_cr2(unsigned long val)
+-{
+- asm volatile("movl %0,%%cr2": :"r" (val));
+-}
-
+-static inline unsigned long native_read_cr3(void)
+-{
+- unsigned long val;
+- asm volatile("movl %%cr3,%0\n\t" :"=r" (val));
+- return val;
+-}
-
--#define alloc_thread_info(ti) ((struct thread_info *) __get_free_pages(GFP_KERNEL,1))
--#define free_thread_info(ti) free_pages((unsigned long) (ti), 1)
+-static inline void native_write_cr3(unsigned long val)
+-{
+- asm volatile("movl %0,%%cr3": :"r" (val));
+-}
-
--#endif /* __ASSEMBLY__ */
+-static inline unsigned long native_read_cr4(void)
+-{
+- unsigned long val;
+- asm volatile("movl %%cr4,%0\n\t" :"=r" (val));
+- return val;
+-}
-
--#define THREAD_SIZE 8192
+-static inline unsigned long native_read_cr4_safe(void)
+-{
+- unsigned long val;
+- /* This could fault if %cr4 does not exist */
+- asm volatile("1: movl %%cr4, %0 \n"
+- "2: \n"
+- ".section __ex_table,\"a\" \n"
+- ".long 1b,2b \n"
+- ".previous \n"
+- : "=r" (val): "0" (0));
+- return val;
+-}
-
--#define PREEMPT_ACTIVE 0x10000000
+-static inline void native_write_cr4(unsigned long val)
+-{
+- asm volatile("movl %0,%%cr4": :"r" (val));
+-}
-
--/* thread information flags */
--#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
--#define TIF_SIGPENDING 2 /* signal pending */
--#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
--#define TIF_MEMDIE 4
--#define TIF_RESTORE_SIGMASK 5 /* Restore signal mask in do_signal */
+-static inline void native_wbinvd(void)
+-{
+- asm volatile("wbinvd": : :"memory");
+-}
-
--#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
--#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
--#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
--#define _TIF_MEMDIE (1 << TIF_MEMDIE)
--#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
+-static inline void clflush(volatile void *__p)
+-{
+- asm volatile("clflush %0" : "+m" (*(char __force *)__p));
+-}
-
--#endif /* __KERNEL__ */
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define read_cr0() (native_read_cr0())
+-#define write_cr0(x) (native_write_cr0(x))
+-#define read_cr2() (native_read_cr2())
+-#define write_cr2(x) (native_write_cr2(x))
+-#define read_cr3() (native_read_cr3())
+-#define write_cr3(x) (native_write_cr3(x))
+-#define read_cr4() (native_read_cr4())
+-#define read_cr4_safe() (native_read_cr4_safe())
+-#define write_cr4(x) (native_write_cr4(x))
+-#define wbinvd() (native_wbinvd())
-
--#endif /* __ASM_SH64_THREAD_INFO_H */
-diff --git a/include/asm-sh64/timex.h b/include/asm-sh64/timex.h
-deleted file mode 100644
-index 163e2b6..0000000
---- a/include/asm-sh64/timex.h
-+++ /dev/null
-@@ -1,31 +0,0 @@
--#ifndef __ASM_SH64_TIMEX_H
--#define __ASM_SH64_TIMEX_H
+-/* Clear the 'TS' bit */
+-#define clts() (native_clts())
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/timex.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 Paul Mundt
-- *
-- * sh-5 architecture timex specifications
-- *
-- */
+-#endif/* CONFIG_PARAVIRT */
-
--#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
--#define CLOCK_TICK_FACTOR 20 /* Factor of both 1000000 and CLOCK_TICK_RATE */
+-/* Set the 'TS' bit */
+-#define stts() write_cr0(8 | read_cr0())
-
--typedef unsigned long cycles_t;
+-#endif /* __KERNEL__ */
-
--static __inline__ cycles_t get_cycles (void)
+-static inline unsigned long get_limit(unsigned long segment)
-{
-- return 0;
+- unsigned long __limit;
+- __asm__("lsll %1,%0"
+- :"=r" (__limit):"r" (segment));
+- return __limit+1;
-}
-
--#define vxtime_lock() do {} while (0)
--#define vxtime_unlock() do {} while (0)
--
--#endif /* __ASM_SH64_TIMEX_H */
-diff --git a/include/asm-sh64/tlb.h b/include/asm-sh64/tlb.h
-deleted file mode 100644
-index 4979408..0000000
---- a/include/asm-sh64/tlb.h
-+++ /dev/null
-@@ -1,92 +0,0 @@
--/*
-- * include/asm-sh64/tlb.h
-- *
-- * Copyright (C) 2003 Paul Mundt
-- *
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- */
--#ifndef __ASM_SH64_TLB_H
--#define __ASM_SH64_TLB_H
+-#define nop() __asm__ __volatile__ ("nop")
-
-/*
-- * Note! These are mostly unused, we just need the xTLB_LAST_VAR_UNRESTRICTED
-- * for head.S! Once this limitation is gone, we can clean the rest of this up.
-- */
--
--/* ITLB defines */
--#define ITLB_FIXED 0x00000000 /* First fixed ITLB, see head.S */
--#define ITLB_LAST_VAR_UNRESTRICTED 0x000003F0 /* Last ITLB */
--
--/* DTLB defines */
--#define DTLB_FIXED 0x00800000 /* First fixed DTLB, see head.S */
--#define DTLB_LAST_VAR_UNRESTRICTED 0x008003F0 /* Last DTLB */
--
--#ifndef __ASSEMBLY__
--
--/**
-- * for_each_dtlb_entry
-- *
-- * @tlb: TLB entry
+- * Force strict CPU ordering.
+- * And yes, this is required on UP too when we're talking
+- * to devices.
+- *
+- * For now, "wmb()" doesn't actually do anything, as all
+- * Intel CPU's follow what Intel calls a *Processor Order*,
+- * in which all writes are seen in the program order even
+- * outside the CPU.
+- *
+- * I expect future Intel CPU's to have a weaker ordering,
+- * but I'd also expect them to finally get their act together
+- * and add some real memory barriers if so.
- *
-- * Iterate over free (non-wired) DTLB entries
+- * Some non intel clones support out of order store. wmb() ceases to be a
+- * nop for these.
- */
--#define for_each_dtlb_entry(tlb) \
-- for (tlb = cpu_data->dtlb.first; \
-- tlb <= cpu_data->dtlb.last; \
-- tlb += cpu_data->dtlb.step)
+-
-
--/**
-- * for_each_itlb_entry
-- *
-- * @tlb: TLB entry
-- *
-- * Iterate over free (non-wired) ITLB entries
-- */
--#define for_each_itlb_entry(tlb) \
-- for (tlb = cpu_data->itlb.first; \
-- tlb <= cpu_data->itlb.last; \
-- tlb += cpu_data->itlb.step)
+-#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
+-#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
+-#define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
-
-/**
-- * __flush_tlb_slot
-- *
-- * @slot: Address of TLB slot.
+- * read_barrier_depends - Flush all pending reads that subsequents reads
+- * depend on.
- *
-- * Flushes TLB slot @slot.
-- */
--static inline void __flush_tlb_slot(unsigned long long slot)
--{
-- __asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
--}
--
--/* arch/sh64/mm/tlb.c */
--extern int sh64_tlb_init(void);
--extern unsigned long long sh64_next_free_dtlb_entry(void);
--extern unsigned long long sh64_get_wired_dtlb_entry(void);
--extern int sh64_put_wired_dtlb_entry(unsigned long long entry);
--
--extern void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr, unsigned long asid, unsigned long paddr);
--extern void sh64_teardown_tlb_slot(unsigned long long config_addr);
+- * No data-dependent reads from memory-like regions are ever reordered
+- * over this barrier. All reads preceding this primitive are guaranteed
+- * to access memory (but not necessarily other CPUs' caches) before any
+- * reads following this primitive that depend on the data return by
+- * any of the preceding reads. This primitive is much lighter weight than
+- * rmb() on most CPUs, and is never heavier weight than is
+- * rmb().
+- *
+- * These ordering constraints are respected by both the local CPU
+- * and the compiler.
+- *
+- * Ordering is not guaranteed by anything other than these primitives,
+- * not even by data dependencies. See the documentation for
+- * memory_barrier() for examples and URLs to more information.
+- *
+- * For example, the following code would force ordering (the initial
+- * value of "a" is zero, "b" is one, and "p" is "&a"):
+- *
+- * <programlisting>
+- * CPU 0 CPU 1
+- *
+- * b = 2;
+- * memory_barrier();
+- * p = &b; q = p;
+- * read_barrier_depends();
+- * d = *q;
+- * </programlisting>
+- *
+- * because the read of "*q" depends on the read of "p" and these
+- * two reads are separated by a read_barrier_depends(). However,
+- * the following code, with the same initial values for "a" and "b":
+- *
+- * <programlisting>
+- * CPU 0 CPU 1
+- *
+- * a = 2;
+- * memory_barrier();
+- * b = 3; y = b;
+- * read_barrier_depends();
+- * x = a;
+- * </programlisting>
+- *
+- * does not enforce ordering, since there is no data dependency between
+- * the read of "a" and the read of "b". Therefore, on some CPUs, such
+- * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
+- * in cases like this where there are no data dependencies.
+- **/
-
--#define tlb_start_vma(tlb, vma) \
-- flush_cache_range(vma, vma->vm_start, vma->vm_end)
+-#define read_barrier_depends() do { } while(0)
-
--#define tlb_end_vma(tlb, vma) \
-- flush_tlb_range(vma, vma->vm_start, vma->vm_end)
+-#ifdef CONFIG_SMP
+-#define smp_mb() mb()
+-#ifdef CONFIG_X86_PPRO_FENCE
+-# define smp_rmb() rmb()
+-#else
+-# define smp_rmb() barrier()
+-#endif
+-#ifdef CONFIG_X86_OOSTORE
+-# define smp_wmb() wmb()
+-#else
+-# define smp_wmb() barrier()
+-#endif
+-#define smp_read_barrier_depends() read_barrier_depends()
+-#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
+-#else
+-#define smp_mb() barrier()
+-#define smp_rmb() barrier()
+-#define smp_wmb() barrier()
+-#define smp_read_barrier_depends() do { } while(0)
+-#define set_mb(var, value) do { var = value; barrier(); } while (0)
+-#endif
-
--#define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
+-#include <linux/irqflags.h>
-
-/*
-- * Flush whole TLBs for MM
+- * disable hlt during certain critical i/o operations
- */
--#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
--
--#include <asm-generic/tlb.h>
--
--#endif /* __ASSEMBLY__ */
--
--#endif /* __ASM_SH64_TLB_H */
+-#define HAVE_DISABLE_HLT
+-void disable_hlt(void);
+-void enable_hlt(void);
-
-diff --git a/include/asm-sh64/tlbflush.h b/include/asm-sh64/tlbflush.h
-deleted file mode 100644
-index 16a164a..0000000
---- a/include/asm-sh64/tlbflush.h
-+++ /dev/null
-@@ -1,27 +0,0 @@
--#ifndef __ASM_SH64_TLBFLUSH_H
--#define __ASM_SH64_TLBFLUSH_H
+-extern int es7000_plat;
+-void cpu_idle_wait(void);
-
--#include <asm/pgalloc.h>
+-extern unsigned long arch_align_stack(unsigned long sp);
+-extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
-
--/*
-- * TLB flushing:
-- *
-- * - flush_tlb() flushes the current mm struct TLBs
-- * - flush_tlb_all() flushes all processes TLBs
-- * - flush_tlb_mm(mm) flushes the specified mm context TLB's
-- * - flush_tlb_page(vma, vmaddr) flushes one page
-- * - flush_tlb_range(mm, start, end) flushes a range of pages
-- *
-- */
+-void default_idle(void);
+-void __show_registers(struct pt_regs *, int all);
-
--extern void flush_tlb(void);
--extern void flush_tlb_all(void);
--extern void flush_tlb_mm(struct mm_struct *mm);
--extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
-- unsigned long end);
--extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+-#endif
+diff --git a/include/asm-x86/system_64.h b/include/asm-x86/system_64.h
+index 6e9e484..97fa251 100644
+--- a/include/asm-x86/system_64.h
++++ b/include/asm-x86/system_64.h
+@@ -1,126 +1,9 @@
+ #ifndef __ASM_SYSTEM_H
+ #define __ASM_SYSTEM_H
+
+-#include <linux/kernel.h>
+ #include <asm/segment.h>
+ #include <asm/cmpxchg.h>
+
+-#ifdef __KERNEL__
-
--extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+-/* entries in ARCH_DLINFO: */
+-#ifdef CONFIG_IA32_EMULATION
+-# define AT_VECTOR_SIZE_ARCH 2
+-#else
+-# define AT_VECTOR_SIZE_ARCH 1
+-#endif
-
--#endif /* __ASM_SH64_TLBFLUSH_H */
+-#define __SAVE(reg,offset) "movq %%" #reg ",(14-" #offset ")*8(%%rsp)\n\t"
+-#define __RESTORE(reg,offset) "movq (14-" #offset ")*8(%%rsp),%%" #reg "\n\t"
-
-diff --git a/include/asm-sh64/topology.h b/include/asm-sh64/topology.h
-deleted file mode 100644
-index 3421178..0000000
---- a/include/asm-sh64/topology.h
-+++ /dev/null
-@@ -1,6 +0,0 @@
--#ifndef __ASM_SH64_TOPOLOGY_H
--#define __ASM_SH64_TOPOLOGY_H
+-/* frame pointer must be last for get_wchan */
+-#define SAVE_CONTEXT "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
+-#define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"
-
--#include <asm-generic/topology.h>
+-#define __EXTRA_CLOBBER \
+- ,"rcx","rbx","rdx","r8","r9","r10","r11","r12","r13","r14","r15"
-
--#endif /* __ASM_SH64_TOPOLOGY_H */
-diff --git a/include/asm-sh64/types.h b/include/asm-sh64/types.h
-deleted file mode 100644
-index 2c7ad73..0000000
---- a/include/asm-sh64/types.h
-+++ /dev/null
-@@ -1,74 +0,0 @@
--#ifndef __ASM_SH64_TYPES_H
--#define __ASM_SH64_TYPES_H
+-/* Save restore flags to clear handle leaking NT */
+-#define switch_to(prev,next,last) \
+- asm volatile(SAVE_CONTEXT \
+- "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
+- "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
+- "call __switch_to\n\t" \
+- ".globl thread_return\n" \
+- "thread_return:\n\t" \
+- "movq %%gs:%P[pda_pcurrent],%%rsi\n\t" \
+- "movq %P[thread_info](%%rsi),%%r8\n\t" \
+- LOCK_PREFIX "btr %[tif_fork],%P[ti_flags](%%r8)\n\t" \
+- "movq %%rax,%%rdi\n\t" \
+- "jc ret_from_fork\n\t" \
+- RESTORE_CONTEXT \
+- : "=a" (last) \
+- : [next] "S" (next), [prev] "D" (prev), \
+- [threadrsp] "i" (offsetof(struct task_struct, thread.rsp)), \
+- [ti_flags] "i" (offsetof(struct thread_info, flags)),\
+- [tif_fork] "i" (TIF_FORK), \
+- [thread_info] "i" (offsetof(struct task_struct, stack)), \
+- [pda_pcurrent] "i" (offsetof(struct x8664_pda, pcurrent)) \
+- : "memory", "cc" __EXTRA_CLOBBER)
+-
+-extern void load_gs_index(unsigned);
-
-/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/types.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
+- * Load a segment. Fall back on loading the zero
+- * segment if something goes wrong..
- */
--
--#ifndef __ASSEMBLY__
--
--typedef unsigned short umode_t;
+-#define loadsegment(seg,value) \
+- asm volatile("\n" \
+- "1:\t" \
+- "movl %k0,%%" #seg "\n" \
+- "2:\n" \
+- ".section .fixup,\"ax\"\n" \
+- "3:\t" \
+- "movl %1,%%" #seg "\n\t" \
+- "jmp 2b\n" \
+- ".previous\n" \
+- ".section __ex_table,\"a\"\n\t" \
+- ".align 8\n\t" \
+- ".quad 1b,3b\n" \
+- ".previous" \
+- : :"r" (value), "r" (0))
-
-/*
-- * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
-- * header files exported to user space
+- * Clear and set 'TS' bit respectively
- */
+-#define clts() __asm__ __volatile__ ("clts")
-
--typedef __signed__ char __s8;
--typedef unsigned char __u8;
+-static inline unsigned long read_cr0(void)
+-{
+- unsigned long cr0;
+- asm volatile("movq %%cr0,%0" : "=r" (cr0));
+- return cr0;
+-}
-
--typedef __signed__ short __s16;
--typedef unsigned short __u16;
+-static inline void write_cr0(unsigned long val)
+-{
+- asm volatile("movq %0,%%cr0" :: "r" (val));
+-}
-
--typedef __signed__ int __s32;
--typedef unsigned int __u32;
+-static inline unsigned long read_cr2(void)
+-{
+- unsigned long cr2;
+- asm volatile("movq %%cr2,%0" : "=r" (cr2));
+- return cr2;
+-}
-
--#if defined(__GNUC__)
--__extension__ typedef __signed__ long long __s64;
--__extension__ typedef unsigned long long __u64;
--#endif
+-static inline void write_cr2(unsigned long val)
+-{
+- asm volatile("movq %0,%%cr2" :: "r" (val));
+-}
-
--#endif /* __ASSEMBLY__ */
+-static inline unsigned long read_cr3(void)
+-{
+- unsigned long cr3;
+- asm volatile("movq %%cr3,%0" : "=r" (cr3));
+- return cr3;
+-}
-
--/*
-- * These aren't exported outside the kernel to avoid name space clashes
-- */
--#ifdef __KERNEL__
+-static inline void write_cr3(unsigned long val)
+-{
+- asm volatile("movq %0,%%cr3" :: "r" (val) : "memory");
+-}
-
--#ifndef __ASSEMBLY__
+-static inline unsigned long read_cr4(void)
+-{
+- unsigned long cr4;
+- asm volatile("movq %%cr4,%0" : "=r" (cr4));
+- return cr4;
+-}
-
--typedef __signed__ char s8;
--typedef unsigned char u8;
+-static inline void write_cr4(unsigned long val)
+-{
+- asm volatile("movq %0,%%cr4" :: "r" (val) : "memory");
+-}
+
+ static inline unsigned long read_cr8(void)
+ {
+@@ -134,52 +17,6 @@ static inline void write_cr8(unsigned long val)
+ asm volatile("movq %0,%%cr8" :: "r" (val) : "memory");
+ }
+
+-#define stts() write_cr0(8 | read_cr0())
-
--typedef __signed__ short s16;
--typedef unsigned short u16;
+-#define wbinvd() \
+- __asm__ __volatile__ ("wbinvd": : :"memory")
-
--typedef __signed__ int s32;
--typedef unsigned int u32;
+-#endif /* __KERNEL__ */
-
--typedef __signed__ long long s64;
--typedef unsigned long long u64;
+-static inline void clflush(volatile void *__p)
+-{
+- asm volatile("clflush %0" : "+m" (*(char __force *)__p));
+-}
-
--/* DMA addresses come in generic and 64-bit flavours. */
+-#define nop() __asm__ __volatile__ ("nop")
-
--#ifdef CONFIG_HIGHMEM64G
--typedef u64 dma_addr_t;
+-#ifdef CONFIG_SMP
+-#define smp_mb() mb()
+-#define smp_rmb() barrier()
+-#define smp_wmb() barrier()
+-#define smp_read_barrier_depends() do {} while(0)
-#else
--typedef u32 dma_addr_t;
+-#define smp_mb() barrier()
+-#define smp_rmb() barrier()
+-#define smp_wmb() barrier()
+-#define smp_read_barrier_depends() do {} while(0)
-#endif
--typedef u64 dma64_addr_t;
-
--#endif /* __ASSEMBLY__ */
+-
+-/*
+- * Force strict CPU ordering.
+- * And yes, this is required on UP too when we're talking
+- * to devices.
+- */
+-#define mb() asm volatile("mfence":::"memory")
+-#define rmb() asm volatile("lfence":::"memory")
+-#define wmb() asm volatile("sfence" ::: "memory")
-
--#define BITS_PER_LONG 32
+-#define read_barrier_depends() do {} while(0)
+-#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
-
--#endif /* __KERNEL__ */
+-#define warn_if_not_ulong(x) do { unsigned long foo; (void) (&(x) == &foo); } while (0)
-
--#endif /* __ASM_SH64_TYPES_H */
-diff --git a/include/asm-sh64/uaccess.h b/include/asm-sh64/uaccess.h
+ #include <linux/irqflags.h>
+
+-void cpu_idle_wait(void);
+-
+-extern unsigned long arch_align_stack(unsigned long sp);
+-extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
+-
+ #endif
+diff --git a/include/asm-x86/thread_info_32.h b/include/asm-x86/thread_info_32.h
+index 22a8cbc..5bd5082 100644
+--- a/include/asm-x86/thread_info_32.h
++++ b/include/asm-x86/thread_info_32.h
+@@ -85,7 +85,7 @@ struct thread_info {
+
+
+ /* how to get the current stack pointer from C */
+-register unsigned long current_stack_pointer asm("esp") __attribute_used__;
++register unsigned long current_stack_pointer asm("esp") __used;
+
+ /* how to get the thread information struct from C */
+ static inline struct thread_info *current_thread_info(void)
+@@ -132,11 +132,16 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
+ #define TIF_SECCOMP 7 /* secure computing */
+ #define TIF_RESTORE_SIGMASK 8 /* restore signal mask in do_signal() */
++#define TIF_HRTICK_RESCHED 9 /* reprogram hrtick timer */
+ #define TIF_MEMDIE 16
+ #define TIF_DEBUG 17 /* uses debug registers */
+ #define TIF_IO_BITMAP 18 /* uses I/O bitmap */
+ #define TIF_FREEZE 19 /* is freezing for suspend */
+ #define TIF_NOTSC 20 /* TSC is not accessible in userland */
++#define TIF_FORCED_TF 21 /* true if TF in eflags artificially */
++#define TIF_DEBUGCTLMSR 22 /* uses thread_struct.debugctlmsr */
++#define TIF_DS_AREA_MSR 23 /* uses thread_struct.ds_area_msr */
++#define TIF_BTS_TRACE_TS 24 /* record scheduling event timestamps */
+
+ #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
+@@ -147,10 +152,15 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP (1<<TIF_SECCOMP)
+ #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
++#define _TIF_HRTICK_RESCHED (1<<TIF_HRTICK_RESCHED)
+ #define _TIF_DEBUG (1<<TIF_DEBUG)
+ #define _TIF_IO_BITMAP (1<<TIF_IO_BITMAP)
+ #define _TIF_FREEZE (1<<TIF_FREEZE)
+ #define _TIF_NOTSC (1<<TIF_NOTSC)
++#define _TIF_FORCED_TF (1<<TIF_FORCED_TF)
++#define _TIF_DEBUGCTLMSR (1<<TIF_DEBUGCTLMSR)
++#define _TIF_DS_AREA_MSR (1<<TIF_DS_AREA_MSR)
++#define _TIF_BTS_TRACE_TS (1<<TIF_BTS_TRACE_TS)
+
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK \
+@@ -160,8 +170,12 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_ALLWORK_MASK (0x0000FFFF & ~_TIF_SECCOMP)
+
+ /* flags to check in __switch_to() */
+-#define _TIF_WORK_CTXSW_NEXT (_TIF_IO_BITMAP | _TIF_NOTSC | _TIF_DEBUG)
+-#define _TIF_WORK_CTXSW_PREV (_TIF_IO_BITMAP | _TIF_NOTSC)
++#define _TIF_WORK_CTXSW \
++ (_TIF_IO_BITMAP | _TIF_NOTSC | _TIF_DEBUGCTLMSR | \
++ _TIF_DS_AREA_MSR | _TIF_BTS_TRACE_TS)
++#define _TIF_WORK_CTXSW_PREV _TIF_WORK_CTXSW
++#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW | _TIF_DEBUG)
++
+
+ /*
+ * Thread-synchronous status.
+diff --git a/include/asm-x86/thread_info_64.h b/include/asm-x86/thread_info_64.h
+index beae2bf..9b531ea 100644
+--- a/include/asm-x86/thread_info_64.h
++++ b/include/asm-x86/thread_info_64.h
+@@ -21,7 +21,7 @@
+ #ifndef __ASSEMBLY__
+ struct task_struct;
+ struct exec_domain;
+-#include <asm/mmsegment.h>
++#include <asm/processor.h>
+
+ struct thread_info {
+ struct task_struct *task; /* main task structure */
+@@ -33,6 +33,9 @@ struct thread_info {
+
+ mm_segment_t addr_limit;
+ struct restart_block restart_block;
++#ifdef CONFIG_IA32_EMULATION
++ void __user *sysenter_return;
++#endif
+ };
+ #endif
+
+@@ -74,20 +77,14 @@ static inline struct thread_info *stack_thread_info(void)
+
+ /* thread information allocation */
+ #ifdef CONFIG_DEBUG_STACK_USAGE
+-#define alloc_thread_info(tsk) \
+- ({ \
+- struct thread_info *ret; \
+- \
+- ret = ((struct thread_info *) __get_free_pages(GFP_KERNEL,THREAD_ORDER)); \
+- if (ret) \
+- memset(ret, 0, THREAD_SIZE); \
+- ret; \
+- })
++#define THREAD_FLAGS (GFP_KERNEL | __GFP_ZERO)
+ #else
+-#define alloc_thread_info(tsk) \
+- ((struct thread_info *) __get_free_pages(GFP_KERNEL,THREAD_ORDER))
++#define THREAD_FLAGS GFP_KERNEL
+ #endif
+
++#define alloc_thread_info(tsk) \
++ ((struct thread_info *) __get_free_pages(THREAD_FLAGS, THREAD_ORDER))
++
+ #define free_thread_info(ti) free_pages((unsigned long) (ti), THREAD_ORDER)
+
+ #else /* !__ASSEMBLY__ */
+@@ -115,6 +112,7 @@ static inline struct thread_info *stack_thread_info(void)
+ #define TIF_SECCOMP 8 /* secure computing */
+ #define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal */
+ #define TIF_MCE_NOTIFY 10 /* notify userspace of an MCE */
++#define TIF_HRTICK_RESCHED 11 /* reprogram hrtick timer */
+ /* 16 free */
+ #define TIF_IA32 17 /* 32bit process */
+ #define TIF_FORK 18 /* ret_from_fork */
+@@ -123,6 +121,10 @@ static inline struct thread_info *stack_thread_info(void)
+ #define TIF_DEBUG 21 /* uses debug registers */
+ #define TIF_IO_BITMAP 22 /* uses I/O bitmap */
+ #define TIF_FREEZE 23 /* is freezing for suspend */
++#define TIF_FORCED_TF 24 /* true if TF in eflags artificially */
++#define TIF_DEBUGCTLMSR 25 /* uses thread_struct.debugctlmsr */
++#define TIF_DS_AREA_MSR 25 /* uses thread_struct.ds_area_msr */
++#define TIF_BTS_TRACE_TS 26 /* record scheduling event timestamps */
+
+ #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
+@@ -133,12 +135,17 @@ static inline struct thread_info *stack_thread_info(void)
+ #define _TIF_SECCOMP (1<<TIF_SECCOMP)
+ #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
+ #define _TIF_MCE_NOTIFY (1<<TIF_MCE_NOTIFY)
++#define _TIF_HRTICK_RESCHED (1<<TIF_HRTICK_RESCHED)
+ #define _TIF_IA32 (1<<TIF_IA32)
+ #define _TIF_FORK (1<<TIF_FORK)
+ #define _TIF_ABI_PENDING (1<<TIF_ABI_PENDING)
+ #define _TIF_DEBUG (1<<TIF_DEBUG)
+ #define _TIF_IO_BITMAP (1<<TIF_IO_BITMAP)
+ #define _TIF_FREEZE (1<<TIF_FREEZE)
++#define _TIF_FORCED_TF (1<<TIF_FORCED_TF)
++#define _TIF_DEBUGCTLMSR (1<<TIF_DEBUGCTLMSR)
++#define _TIF_DS_AREA_MSR (1<<TIF_DS_AREA_MSR)
++#define _TIF_BTS_TRACE_TS (1<<TIF_BTS_TRACE_TS)
+
+ /* work to do on interrupt/exception return */
+ #define _TIF_WORK_MASK \
+@@ -146,8 +153,14 @@ static inline struct thread_info *stack_thread_info(void)
+ /* work to do on any return to user space */
+ #define _TIF_ALLWORK_MASK (0x0000FFFF & ~_TIF_SECCOMP)
+
++#define _TIF_DO_NOTIFY_MASK \
++ (_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY|_TIF_HRTICK_RESCHED)
++
+ /* flags to check in __switch_to() */
+-#define _TIF_WORK_CTXSW (_TIF_DEBUG|_TIF_IO_BITMAP)
++#define _TIF_WORK_CTXSW \
++ (_TIF_IO_BITMAP|_TIF_DEBUGCTLMSR|_TIF_DS_AREA_MSR|_TIF_BTS_TRACE_TS)
++#define _TIF_WORK_CTXSW_PREV _TIF_WORK_CTXSW
++#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW|_TIF_DEBUG)
+
+ #define PREEMPT_ACTIVE 0x10000000
+
+diff --git a/include/asm-x86/time.h b/include/asm-x86/time.h
+index eac0113..68779b0 100644
+--- a/include/asm-x86/time.h
++++ b/include/asm-x86/time.h
+@@ -1,8 +1,12 @@
+-#ifndef _ASMi386_TIME_H
+-#define _ASMi386_TIME_H
++#ifndef _ASMX86_TIME_H
++#define _ASMX86_TIME_H
+
++extern void (*late_time_init)(void);
++extern void hpet_time_init(void);
++
++#include <asm/mc146818rtc.h>
++#ifdef CONFIG_X86_32
+ #include <linux/efi.h>
+-#include "mach_time.h"
+
+ static inline unsigned long native_get_wallclock(void)
+ {
+@@ -28,8 +32,20 @@ static inline int native_set_wallclock(unsigned long nowtime)
+ return retval;
+ }
+
+-extern void (*late_time_init)(void);
+-extern void hpet_time_init(void);
++#else
++extern void native_time_init_hook(void);
++
++static inline unsigned long native_get_wallclock(void)
++{
++ return mach_get_cmos_time();
++}
++
++static inline int native_set_wallclock(unsigned long nowtime)
++{
++ return mach_set_rtc_mmss(nowtime);
++}
++
++#endif
+
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/paravirt.h>
+diff --git a/include/asm-x86/timer.h b/include/asm-x86/timer.h
+index 0db7e99..4f6fcb0 100644
+--- a/include/asm-x86/timer.h
++++ b/include/asm-x86/timer.h
+@@ -2,6 +2,7 @@
+ #define _ASMi386_TIMER_H
+ #include <linux/init.h>
+ #include <linux/pm.h>
++#include <linux/percpu.h>
+
+ #define TICK_SIZE (tick_nsec / 1000)
+
+@@ -16,7 +17,7 @@ extern int recalibrate_cpu_khz(void);
+ #define calculate_cpu_khz() native_calculate_cpu_khz()
+ #endif
+
+-/* Accellerators for sched_clock()
++/* Accelerators for sched_clock()
+ * convert from cycles(64bits) => nanoseconds (64bits)
+ * basic equation:
+ * ns = cycles / (freq / ns_per_sec)
+@@ -31,20 +32,32 @@ extern int recalibrate_cpu_khz(void);
+ * And since SC is a constant power of two, we can convert the div
+ * into a shift.
+ *
+- * We can use khz divisor instead of mhz to keep a better percision, since
++ * We can use khz divisor instead of mhz to keep a better precision, since
+ * cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
+ * (mathieu.desnoyers at polymtl.ca)
+ *
+ * -johnstul at us.ibm.com "math is hard, lets go shopping!"
+ */
+-extern unsigned long cyc2ns_scale __read_mostly;
++
++DECLARE_PER_CPU(unsigned long, cyc2ns);
+
+ #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
+
+-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
++static inline unsigned long long __cycles_2_ns(unsigned long long cyc)
+ {
+- return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;
++ return cyc * per_cpu(cyc2ns, smp_processor_id()) >> CYC2NS_SCALE_FACTOR;
+ }
+
++static inline unsigned long long cycles_2_ns(unsigned long long cyc)
++{
++ unsigned long long ns;
++ unsigned long flags;
++
++ local_irq_save(flags);
++ ns = __cycles_2_ns(cyc);
++ local_irq_restore(flags);
++
++ return ns;
++}
+
+ #endif
+diff --git a/include/asm-x86/timex.h b/include/asm-x86/timex.h
+index 39a21ab..27cfd6c 100644
+--- a/include/asm-x86/timex.h
++++ b/include/asm-x86/timex.h
+@@ -7,6 +7,8 @@
+
+ #ifdef CONFIG_X86_ELAN
+ # define PIT_TICK_RATE 1189200 /* AMD Elan has different frequency! */
++#elif defined(CONFIG_X86_RDC321X)
++# define PIT_TICK_RATE 1041667 /* Underlying HZ for R8610 */
+ #else
+ # define PIT_TICK_RATE 1193182 /* Underlying HZ */
+ #endif
+diff --git a/include/asm-x86/tlbflush.h b/include/asm-x86/tlbflush.h
+index 9af4cc8..3998709 100644
+--- a/include/asm-x86/tlbflush.h
++++ b/include/asm-x86/tlbflush.h
+@@ -1,5 +1,158 @@
++#ifndef _ASM_X86_TLBFLUSH_H
++#define _ASM_X86_TLBFLUSH_H
++
++#include <linux/mm.h>
++#include <linux/sched.h>
++
++#include <asm/processor.h>
++#include <asm/system.h>
++
++#ifdef CONFIG_PARAVIRT
++#include <asm/paravirt.h>
++#else
++#define __flush_tlb() __native_flush_tlb()
++#define __flush_tlb_global() __native_flush_tlb_global()
++#define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
++#endif
++
++static inline void __native_flush_tlb(void)
++{
++ write_cr3(read_cr3());
++}
++
++static inline void __native_flush_tlb_global(void)
++{
++ unsigned long cr4 = read_cr4();
++
++ /* clear PGE */
++ write_cr4(cr4 & ~X86_CR4_PGE);
++ /* write old PGE again and flush TLBs */
++ write_cr4(cr4);
++}
++
++static inline void __native_flush_tlb_single(unsigned long addr)
++{
++ __asm__ __volatile__("invlpg (%0)" ::"r" (addr) : "memory");
++}
++
++static inline void __flush_tlb_all(void)
++{
++ if (cpu_has_pge)
++ __flush_tlb_global();
++ else
++ __flush_tlb();
++}
++
++static inline void __flush_tlb_one(unsigned long addr)
++{
++ if (cpu_has_invlpg)
++ __flush_tlb_single(addr);
++ else
++ __flush_tlb();
++}
++
+ #ifdef CONFIG_X86_32
+-# include "tlbflush_32.h"
++# define TLB_FLUSH_ALL 0xffffffff
+ #else
+-# include "tlbflush_64.h"
++# define TLB_FLUSH_ALL -1ULL
++#endif
++
++/*
++ * TLB flushing:
++ *
++ * - flush_tlb() flushes the current mm struct TLBs
++ * - flush_tlb_all() flushes all processes TLBs
++ * - flush_tlb_mm(mm) flushes the specified mm context TLB's
++ * - flush_tlb_page(vma, vmaddr) flushes one page
++ * - flush_tlb_range(vma, start, end) flushes a range of pages
++ * - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
++ * - flush_tlb_others(cpumask, mm, va) flushes TLBs on other cpus
++ *
++ * ..but the i386 has somewhat limited tlb flushing capabilities,
++ * and page-granular flushes are available only on i486 and up.
++ *
++ * x86-64 can only flush individual pages or full VMs. For a range flush
++ * we always do the full VM. Might be worth trying if for a small
++ * range a few INVLPGs in a row are a win.
++ */
++
++#ifndef CONFIG_SMP
++
++#define flush_tlb() __flush_tlb()
++#define flush_tlb_all() __flush_tlb_all()
++#define local_flush_tlb() __flush_tlb()
++
++static inline void flush_tlb_mm(struct mm_struct *mm)
++{
++ if (mm == current->active_mm)
++ __flush_tlb();
++}
++
++static inline void flush_tlb_page(struct vm_area_struct *vma,
++ unsigned long addr)
++{
++ if (vma->vm_mm == current->active_mm)
++ __flush_tlb_one(addr);
++}
++
++static inline void flush_tlb_range(struct vm_area_struct *vma,
++ unsigned long start, unsigned long end)
++{
++ if (vma->vm_mm == current->active_mm)
++ __flush_tlb();
++}
++
++static inline void native_flush_tlb_others(const cpumask_t *cpumask,
++ struct mm_struct *mm,
++ unsigned long va)
++{
++}
++
++#else /* SMP */
++
++#include <asm/smp.h>
++
++#define local_flush_tlb() __flush_tlb()
++
++extern void flush_tlb_all(void);
++extern void flush_tlb_current_task(void);
++extern void flush_tlb_mm(struct mm_struct *);
++extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
++
++#define flush_tlb() flush_tlb_current_task()
++
++static inline void flush_tlb_range(struct vm_area_struct *vma,
++ unsigned long start, unsigned long end)
++{
++ flush_tlb_mm(vma->vm_mm);
++}
++
++void native_flush_tlb_others(const cpumask_t *cpumask, struct mm_struct *mm,
++ unsigned long va);
++
++#define TLBSTATE_OK 1
++#define TLBSTATE_LAZY 2
++
++#ifdef CONFIG_X86_32
++struct tlb_state
++{
++ struct mm_struct *active_mm;
++ int state;
++ char __cacheline_padding[L1_CACHE_BYTES-8];
++};
++DECLARE_PER_CPU(struct tlb_state, cpu_tlbstate);
++#endif
++
++#endif /* SMP */
++
++#ifndef CONFIG_PARAVIRT
++#define flush_tlb_others(mask, mm, va) native_flush_tlb_others(&mask, mm, va)
+ #endif
++
++static inline void flush_tlb_kernel_range(unsigned long start,
++ unsigned long end)
++{
++ flush_tlb_all();
++}
++
++#endif /* _ASM_X86_TLBFLUSH_H */
+diff --git a/include/asm-x86/tlbflush_32.h b/include/asm-x86/tlbflush_32.h
deleted file mode 100644
-index 644c67b..0000000
---- a/include/asm-sh64/uaccess.h
+index 2bd5b95..0000000
+--- a/include/asm-x86/tlbflush_32.h
+++ /dev/null
-@@ -1,316 +0,0 @@
--#ifndef __ASM_SH64_UACCESS_H
--#define __ASM_SH64_UACCESS_H
+@@ -1,168 +0,0 @@
+-#ifndef _I386_TLBFLUSH_H
+-#define _I386_TLBFLUSH_H
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/uaccess.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003, 2004 Paul Mundt
-- *
-- * User space memory access functions
-- *
-- * Copyright (C) 1999 Niibe Yutaka
-- *
-- * Based on:
-- * MIPS implementation version 1.15 by
-- * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
-- * and i386 version.
-- *
-- */
+-#include <linux/mm.h>
+-#include <asm/processor.h>
-
--#include <linux/errno.h>
--#include <linux/sched.h>
+-#ifdef CONFIG_PARAVIRT
+-#include <asm/paravirt.h>
+-#else
+-#define __flush_tlb() __native_flush_tlb()
+-#define __flush_tlb_global() __native_flush_tlb_global()
+-#define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
+-#endif
-
--#define VERIFY_READ 0
--#define VERIFY_WRITE 1
+-#define __native_flush_tlb() \
+- do { \
+- unsigned int tmpreg; \
+- \
+- __asm__ __volatile__( \
+- "movl %%cr3, %0; \n" \
+- "movl %0, %%cr3; # flush TLB \n" \
+- : "=r" (tmpreg) \
+- :: "memory"); \
+- } while (0)
-
-/*
-- * The fs value determines whether argument validity checking should be
-- * performed or not. If get_fs() == USER_DS, checking is performed, with
-- * get_fs() == KERNEL_DS, checking is bypassed.
-- *
-- * For historical reasons (Data Segment Register?), these macros are misnamed.
+- * Global pages have to be flushed a bit differently. Not a real
+- * performance problem because this does not happen often.
- */
+-#define __native_flush_tlb_global() \
+- do { \
+- unsigned int tmpreg, cr4, cr4_orig; \
+- \
+- __asm__ __volatile__( \
+- "movl %%cr4, %2; # turn off PGE \n" \
+- "movl %2, %1; \n" \
+- "andl %3, %1; \n" \
+- "movl %1, %%cr4; \n" \
+- "movl %%cr3, %0; \n" \
+- "movl %0, %%cr3; # flush TLB \n" \
+- "movl %2, %%cr4; # turn PGE back on \n" \
+- : "=&r" (tmpreg), "=&r" (cr4), "=&r" (cr4_orig) \
+- : "i" (~X86_CR4_PGE) \
+- : "memory"); \
+- } while (0)
-
--#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
--
--#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
--#define USER_DS MAKE_MM_SEG(0x80000000)
--
--#define get_ds() (KERNEL_DS)
--#define get_fs() (current_thread_info()->addr_limit)
--#define set_fs(x) (current_thread_info()->addr_limit=(x))
--
--#define segment_eq(a,b) ((a).seg == (b).seg)
+-#define __native_flush_tlb_single(addr) \
+- __asm__ __volatile__("invlpg (%0)" ::"r" (addr) : "memory")
-
--#define __addr_ok(addr) ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
+-# define __flush_tlb_all() \
+- do { \
+- if (cpu_has_pge) \
+- __flush_tlb_global(); \
+- else \
+- __flush_tlb(); \
+- } while (0)
-
--/*
-- * Uhhuh, this needs 33-bit arithmetic. We have a carry..
-- *
-- * sum := addr + size; carry? --> flag = true;
-- * if (sum >= addr_limit) flag = true;
-- */
--#define __range_ok(addr,size) (((unsigned long) (addr) + (size) < (current_thread_info()->addr_limit.seg)) ? 0 : 1)
+-#define cpu_has_invlpg (boot_cpu_data.x86 > 3)
-
--#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
--#define __access_ok(addr,size) (__range_ok(addr,size) == 0)
+-#ifdef CONFIG_X86_INVLPG
+-# define __flush_tlb_one(addr) __flush_tlb_single(addr)
+-#else
+-# define __flush_tlb_one(addr) \
+- do { \
+- if (cpu_has_invlpg) \
+- __flush_tlb_single(addr); \
+- else \
+- __flush_tlb(); \
+- } while (0)
+-#endif
-
-/*
-- * Uh, these should become the main single-value transfer routines ...
-- * They automatically use the right size if we just have the right
-- * pointer type ...
+- * TLB flushing:
- *
-- * As MIPS uses the same address space for kernel and user data, we
-- * can just do these as direct assignments.
+- * - flush_tlb() flushes the current mm struct TLBs
+- * - flush_tlb_all() flushes all processes TLBs
+- * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+- * - flush_tlb_page(vma, vmaddr) flushes one page
+- * - flush_tlb_range(vma, start, end) flushes a range of pages
+- * - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
+- * - flush_tlb_others(cpumask, mm, va) flushes a TLBs on other cpus
- *
-- * Careful to not
-- * (a) re-use the arguments for side effects (sizeof is ok)
-- * (b) require any knowledge of processes at this stage
-- */
--#define put_user(x,ptr) __put_user_check((x),(ptr),sizeof(*(ptr)))
--#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)))
--
--/*
-- * The "__xxx" versions do not do address space checking, useful when
-- * doing multiple accesses to the same area (the user has to do the
-- * checks by hand with "access_ok()")
-- */
--#define __put_user(x,ptr) __put_user_nocheck((x),(ptr),sizeof(*(ptr)))
--#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
--
--/*
-- * The "xxx_ret" versions return constant specified in third argument, if
-- * something bad happens. These macros can be optimized for the
-- * case of just returning from the function xxx_ret is used.
+- * ..but the i386 has somewhat limited tlb flushing capabilities,
+- * and page-granular flushes are available only on i486 and up.
- */
-
--#define put_user_ret(x,ptr,ret) ({ \
--if (put_user(x,ptr)) return ret; })
--
--#define get_user_ret(x,ptr,ret) ({ \
--if (get_user(x,ptr)) return ret; })
--
--#define __put_user_ret(x,ptr,ret) ({ \
--if (__put_user(x,ptr)) return ret; })
--
--#define __get_user_ret(x,ptr,ret) ({ \
--if (__get_user(x,ptr)) return ret; })
--
--struct __large_struct { unsigned long buf[100]; };
--#define __m(x) (*(struct __large_struct *)(x))
--
--#define __get_user_size(x,ptr,size,retval) \
--do { \
-- retval = 0; \
-- switch (size) { \
-- case 1: \
-- retval = __get_user_asm_b(x, ptr); \
-- break; \
-- case 2: \
-- retval = __get_user_asm_w(x, ptr); \
-- break; \
-- case 4: \
-- retval = __get_user_asm_l(x, ptr); \
-- break; \
-- case 8: \
-- retval = __get_user_asm_q(x, ptr); \
-- break; \
-- default: \
-- __get_user_unknown(); \
-- break; \
-- } \
--} while (0)
--
--#define __get_user_nocheck(x,ptr,size) \
--({ \
-- long __gu_err, __gu_val; \
-- __get_user_size((void *)&__gu_val, (long)(ptr), \
-- (size), __gu_err); \
-- (x) = (__typeof__(*(ptr)))__gu_val; \
-- __gu_err; \
--})
--
--#define __get_user_check(x,ptr,size) \
--({ \
-- long __gu_addr = (long)(ptr); \
-- long __gu_err = -EFAULT, __gu_val; \
-- if (__access_ok(__gu_addr, (size))) \
-- __get_user_size((void *)&__gu_val, __gu_addr, \
-- (size), __gu_err); \
-- (x) = (__typeof__(*(ptr))) __gu_val; \
-- __gu_err; \
--})
+-#define TLB_FLUSH_ALL 0xffffffff
-
--extern long __get_user_asm_b(void *, long);
--extern long __get_user_asm_w(void *, long);
--extern long __get_user_asm_l(void *, long);
--extern long __get_user_asm_q(void *, long);
--extern void __get_user_unknown(void);
-
--#define __put_user_size(x,ptr,size,retval) \
--do { \
-- retval = 0; \
-- switch (size) { \
-- case 1: \
-- retval = __put_user_asm_b(x, ptr); \
-- break; \
-- case 2: \
-- retval = __put_user_asm_w(x, ptr); \
-- break; \
-- case 4: \
-- retval = __put_user_asm_l(x, ptr); \
-- break; \
-- case 8: \
-- retval = __put_user_asm_q(x, ptr); \
-- break; \
-- default: \
-- __put_user_unknown(); \
-- } \
--} while (0)
+-#ifndef CONFIG_SMP
-
--#define __put_user_nocheck(x,ptr,size) \
--({ \
-- long __pu_err; \
-- __typeof__(*(ptr)) __pu_val = (x); \
-- __put_user_size((void *)&__pu_val, (long)(ptr), (size), __pu_err); \
-- __pu_err; \
--})
+-#include <linux/sched.h>
-
--#define __put_user_check(x,ptr,size) \
--({ \
-- long __pu_err = -EFAULT; \
-- long __pu_addr = (long)(ptr); \
-- __typeof__(*(ptr)) __pu_val = (x); \
-- \
-- if (__access_ok(__pu_addr, (size))) \
-- __put_user_size((void *)&__pu_val, __pu_addr, (size), __pu_err);\
-- __pu_err; \
--})
+-#define flush_tlb() __flush_tlb()
+-#define flush_tlb_all() __flush_tlb_all()
+-#define local_flush_tlb() __flush_tlb()
-
--extern long __put_user_asm_b(void *, long);
--extern long __put_user_asm_w(void *, long);
--extern long __put_user_asm_l(void *, long);
--extern long __put_user_asm_q(void *, long);
--extern void __put_user_unknown(void);
+-static inline void flush_tlb_mm(struct mm_struct *mm)
+-{
+- if (mm == current->active_mm)
+- __flush_tlb();
+-}
-
--
--/* Generic arbitrary sized copy. */
--/* Return the number of bytes NOT copied */
--/* XXX: should be such that: 4byte and the rest. */
--extern __kernel_size_t __copy_user(void *__to, const void *__from, __kernel_size_t __n);
+-static inline void flush_tlb_page(struct vm_area_struct *vma,
+- unsigned long addr)
+-{
+- if (vma->vm_mm == current->active_mm)
+- __flush_tlb_one(addr);
+-}
-
--#define copy_to_user(to,from,n) ({ \
--void *__copy_to = (void *) (to); \
--__kernel_size_t __copy_size = (__kernel_size_t) (n); \
--__kernel_size_t __copy_res; \
--if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
--__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
--} else __copy_res = __copy_size; \
--__copy_res; })
+-static inline void flush_tlb_range(struct vm_area_struct *vma,
+- unsigned long start, unsigned long end)
+-{
+- if (vma->vm_mm == current->active_mm)
+- __flush_tlb();
+-}
-
--#define copy_to_user_ret(to,from,n,retval) ({ \
--if (copy_to_user(to,from,n)) \
-- return retval; \
--})
+-static inline void native_flush_tlb_others(const cpumask_t *cpumask,
+- struct mm_struct *mm, unsigned long va)
+-{
+-}
-
--#define __copy_to_user(to,from,n) \
-- __copy_user((void *)(to), \
-- (void *)(from), n)
+-#else /* SMP */
-
--#define __copy_to_user_ret(to,from,n,retval) ({ \
--if (__copy_to_user(to,from,n)) \
-- return retval; \
--})
+-#include <asm/smp.h>
-
--#define copy_from_user(to,from,n) ({ \
--void *__copy_to = (void *) (to); \
--void *__copy_from = (void *) (from); \
--__kernel_size_t __copy_size = (__kernel_size_t) (n); \
--__kernel_size_t __copy_res; \
--if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
--__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
--} else __copy_res = __copy_size; \
--__copy_res; })
+-#define local_flush_tlb() \
+- __flush_tlb()
-
--#define copy_from_user_ret(to,from,n,retval) ({ \
--if (copy_from_user(to,from,n)) \
-- return retval; \
--})
+-extern void flush_tlb_all(void);
+-extern void flush_tlb_current_task(void);
+-extern void flush_tlb_mm(struct mm_struct *);
+-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
-
--#define __copy_from_user(to,from,n) \
-- __copy_user((void *)(to), \
-- (void *)(from), n)
+-#define flush_tlb() flush_tlb_current_task()
-
--#define __copy_from_user_ret(to,from,n,retval) ({ \
--if (__copy_from_user(to,from,n)) \
-- return retval; \
--})
+-static inline void flush_tlb_range(struct vm_area_struct * vma, unsigned long start, unsigned long end)
+-{
+- flush_tlb_mm(vma->vm_mm);
+-}
-
--#define __copy_to_user_inatomic __copy_to_user
--#define __copy_from_user_inatomic __copy_from_user
+-void native_flush_tlb_others(const cpumask_t *cpumask, struct mm_struct *mm,
+- unsigned long va);
-
--/* XXX: Not sure it works well..
-- should be such that: 4byte clear and the rest. */
--extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
+-#define TLBSTATE_OK 1
+-#define TLBSTATE_LAZY 2
-
--#define clear_user(addr,n) ({ \
--void * __cl_addr = (addr); \
--unsigned long __cl_size = (n); \
--if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
--__cl_size = __clear_user(__cl_addr, __cl_size); \
--__cl_size; })
+-struct tlb_state
+-{
+- struct mm_struct *active_mm;
+- int state;
+- char __cacheline_padding[L1_CACHE_BYTES-8];
+-};
+-DECLARE_PER_CPU(struct tlb_state, cpu_tlbstate);
+-#endif /* SMP */
-
--extern int __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count);
+-#ifndef CONFIG_PARAVIRT
+-#define flush_tlb_others(mask, mm, va) \
+- native_flush_tlb_others(&mask, mm, va)
+-#endif
-
--#define strncpy_from_user(dest,src,count) ({ \
--unsigned long __sfu_src = (unsigned long) (src); \
--int __sfu_count = (int) (count); \
--long __sfu_res = -EFAULT; \
--if(__access_ok(__sfu_src, __sfu_count)) { \
--__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
--} __sfu_res; })
+-static inline void flush_tlb_kernel_range(unsigned long start,
+- unsigned long end)
+-{
+- flush_tlb_all();
+-}
-
--#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+-#endif /* _I386_TLBFLUSH_H */
+diff --git a/include/asm-x86/tlbflush_64.h b/include/asm-x86/tlbflush_64.h
+deleted file mode 100644
+index 7731fd2..0000000
+--- a/include/asm-x86/tlbflush_64.h
++++ /dev/null
+@@ -1,100 +0,0 @@
+-#ifndef _X8664_TLBFLUSH_H
+-#define _X8664_TLBFLUSH_H
-
--/*
-- * Return the size of a string (including the ending 0!)
-- */
--extern long __strnlen_user(const char *__s, long __n);
+-#include <linux/mm.h>
+-#include <linux/sched.h>
+-#include <asm/processor.h>
+-#include <asm/system.h>
-
--static inline long strnlen_user(const char *s, long n)
+-static inline void __flush_tlb(void)
-{
-- if (!__addr_ok(s))
-- return 0;
-- else
-- return __strnlen_user(s, n);
+- write_cr3(read_cr3());
-}
-
--struct exception_table_entry
+-static inline void __flush_tlb_all(void)
-{
-- unsigned long insn, fixup;
--};
+- unsigned long cr4 = read_cr4();
+- write_cr4(cr4 & ~X86_CR4_PGE); /* clear PGE */
+- write_cr4(cr4); /* write old PGE again and flush TLBs */
+-}
-
--#define ARCH_HAS_SEARCH_EXTABLE
+-#define __flush_tlb_one(addr) \
+- __asm__ __volatile__("invlpg (%0)" :: "r" (addr) : "memory")
-
--/* If gcc inlines memset, it will use st.q instructions. Therefore, we need
-- kmalloc allocations to be 8-byte aligned. Without this, the alignment
-- becomes BYTE_PER_WORD i.e. only 4 (since sizeof(long)==sizeof(void*)==4 on
-- sh64 at the moment). */
--#define ARCH_KMALLOC_MINALIGN 8
-
-/*
-- * We want 8-byte alignment for the slab caches as well, otherwise we have
-- * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
+- * TLB flushing:
+- *
+- * - flush_tlb() flushes the current mm struct TLBs
+- * - flush_tlb_all() flushes all processes TLBs
+- * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+- * - flush_tlb_page(vma, vmaddr) flushes one page
+- * - flush_tlb_range(vma, start, end) flushes a range of pages
+- * - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
+- *
+- * x86-64 can only flush individual pages or full VMs. For a range flush
+- * we always do the full VM. Might be worth trying if for a small
+- * range a few INVLPGs in a row are a win.
- */
--#define ARCH_SLAB_MINALIGN 8
-
--/* Returns 0 if exception not found and fixup.unit otherwise. */
--extern unsigned long search_exception_table(unsigned long addr);
--extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
+-#ifndef CONFIG_SMP
-
--#endif /* __ASM_SH64_UACCESS_H */
-diff --git a/include/asm-sh64/ucontext.h b/include/asm-sh64/ucontext.h
-deleted file mode 100644
-index cf77a08..0000000
---- a/include/asm-sh64/ucontext.h
-+++ /dev/null
-@@ -1,23 +0,0 @@
--#ifndef __ASM_SH64_UCONTEXT_H
--#define __ASM_SH64_UCONTEXT_H
+-#define flush_tlb() __flush_tlb()
+-#define flush_tlb_all() __flush_tlb_all()
+-#define local_flush_tlb() __flush_tlb()
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/ucontext.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+-static inline void flush_tlb_mm(struct mm_struct *mm)
+-{
+- if (mm == current->active_mm)
+- __flush_tlb();
+-}
-
--struct ucontext {
-- unsigned long uc_flags;
-- struct ucontext *uc_link;
-- stack_t uc_stack;
-- struct sigcontext uc_mcontext;
-- sigset_t uc_sigmask; /* mask last for extensibility */
--};
+-static inline void flush_tlb_page(struct vm_area_struct *vma,
+- unsigned long addr)
+-{
+- if (vma->vm_mm == current->active_mm)
+- __flush_tlb_one(addr);
+-}
-
--#endif /* __ASM_SH64_UCONTEXT_H */
-diff --git a/include/asm-sh64/unaligned.h b/include/asm-sh64/unaligned.h
-deleted file mode 100644
-index 74481b1..0000000
---- a/include/asm-sh64/unaligned.h
-+++ /dev/null
-@@ -1,17 +0,0 @@
--#ifndef __ASM_SH64_UNALIGNED_H
--#define __ASM_SH64_UNALIGNED_H
+-static inline void flush_tlb_range(struct vm_area_struct *vma,
+- unsigned long start, unsigned long end)
+-{
+- if (vma->vm_mm == current->active_mm)
+- __flush_tlb();
+-}
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/unaligned.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
+-#else
-
--#include <asm-generic/unaligned.h>
+-#include <asm/smp.h>
-
--#endif /* __ASM_SH64_UNALIGNED_H */
-diff --git a/include/asm-sh64/unistd.h b/include/asm-sh64/unistd.h
-deleted file mode 100644
-index 1a5197f..0000000
---- a/include/asm-sh64/unistd.h
-+++ /dev/null
-@@ -1,417 +0,0 @@
--#ifndef __ASM_SH64_UNISTD_H
--#define __ASM_SH64_UNISTD_H
+-#define local_flush_tlb() \
+- __flush_tlb()
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/unistd.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- * Copyright (C) 2003 - 2007 Paul Mundt
-- * Copyright (C) 2004 Sean McGoogan
-- *
-- * This file contains the system call numbers.
-- *
-- */
+-extern void flush_tlb_all(void);
+-extern void flush_tlb_current_task(void);
+-extern void flush_tlb_mm(struct mm_struct *);
+-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
-
--#define __NR_restart_syscall 0
--#define __NR_exit 1
--#define __NR_fork 2
--#define __NR_read 3
--#define __NR_write 4
--#define __NR_open 5
--#define __NR_close 6
--#define __NR_waitpid 7
--#define __NR_creat 8
--#define __NR_link 9
--#define __NR_unlink 10
--#define __NR_execve 11
--#define __NR_chdir 12
--#define __NR_time 13
--#define __NR_mknod 14
--#define __NR_chmod 15
--#define __NR_lchown 16
--#define __NR_break 17
--#define __NR_oldstat 18
--#define __NR_lseek 19
--#define __NR_getpid 20
--#define __NR_mount 21
--#define __NR_umount 22
--#define __NR_setuid 23
--#define __NR_getuid 24
--#define __NR_stime 25
--#define __NR_ptrace 26
--#define __NR_alarm 27
--#define __NR_oldfstat 28
--#define __NR_pause 29
--#define __NR_utime 30
--#define __NR_stty 31
--#define __NR_gtty 32
--#define __NR_access 33
--#define __NR_nice 34
--#define __NR_ftime 35
--#define __NR_sync 36
--#define __NR_kill 37
--#define __NR_rename 38
--#define __NR_mkdir 39
--#define __NR_rmdir 40
--#define __NR_dup 41
--#define __NR_pipe 42
--#define __NR_times 43
--#define __NR_prof 44
--#define __NR_brk 45
--#define __NR_setgid 46
--#define __NR_getgid 47
--#define __NR_signal 48
--#define __NR_geteuid 49
--#define __NR_getegid 50
--#define __NR_acct 51
--#define __NR_umount2 52
--#define __NR_lock 53
--#define __NR_ioctl 54
--#define __NR_fcntl 55
--#define __NR_mpx 56
--#define __NR_setpgid 57
--#define __NR_ulimit 58
--#define __NR_oldolduname 59
--#define __NR_umask 60
--#define __NR_chroot 61
--#define __NR_ustat 62
--#define __NR_dup2 63
--#define __NR_getppid 64
--#define __NR_getpgrp 65
--#define __NR_setsid 66
--#define __NR_sigaction 67
--#define __NR_sgetmask 68
--#define __NR_ssetmask 69
--#define __NR_setreuid 70
--#define __NR_setregid 71
--#define __NR_sigsuspend 72
--#define __NR_sigpending 73
--#define __NR_sethostname 74
--#define __NR_setrlimit 75
--#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
--#define __NR_getrusage 77
--#define __NR_gettimeofday 78
--#define __NR_settimeofday 79
--#define __NR_getgroups 80
--#define __NR_setgroups 81
--#define __NR_select 82
--#define __NR_symlink 83
--#define __NR_oldlstat 84
--#define __NR_readlink 85
--#define __NR_uselib 86
--#define __NR_swapon 87
--#define __NR_reboot 88
--#define __NR_readdir 89
--#define __NR_mmap 90
--#define __NR_munmap 91
--#define __NR_truncate 92
--#define __NR_ftruncate 93
--#define __NR_fchmod 94
--#define __NR_fchown 95
--#define __NR_getpriority 96
--#define __NR_setpriority 97
--#define __NR_profil 98
--#define __NR_statfs 99
--#define __NR_fstatfs 100
--#define __NR_ioperm 101
--#define __NR_socketcall 102 /* old implementation of socket systemcall */
--#define __NR_syslog 103
--#define __NR_setitimer 104
--#define __NR_getitimer 105
--#define __NR_stat 106
--#define __NR_lstat 107
--#define __NR_fstat 108
--#define __NR_olduname 109
--#define __NR_iopl 110
--#define __NR_vhangup 111
--#define __NR_idle 112
--#define __NR_vm86old 113
--#define __NR_wait4 114
--#define __NR_swapoff 115
--#define __NR_sysinfo 116
--#define __NR_ipc 117
--#define __NR_fsync 118
--#define __NR_sigreturn 119
--#define __NR_clone 120
--#define __NR_setdomainname 121
--#define __NR_uname 122
--#define __NR_modify_ldt 123
--#define __NR_adjtimex 124
--#define __NR_mprotect 125
--#define __NR_sigprocmask 126
--#define __NR_create_module 127
--#define __NR_init_module 128
--#define __NR_delete_module 129
--#define __NR_get_kernel_syms 130
--#define __NR_quotactl 131
--#define __NR_getpgid 132
--#define __NR_fchdir 133
--#define __NR_bdflush 134
--#define __NR_sysfs 135
--#define __NR_personality 136
--#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
--#define __NR_setfsuid 138
--#define __NR_setfsgid 139
--#define __NR__llseek 140
--#define __NR_getdents 141
--#define __NR__newselect 142
--#define __NR_flock 143
--#define __NR_msync 144
--#define __NR_readv 145
--#define __NR_writev 146
--#define __NR_getsid 147
--#define __NR_fdatasync 148
--#define __NR__sysctl 149
--#define __NR_mlock 150
--#define __NR_munlock 151
--#define __NR_mlockall 152
--#define __NR_munlockall 153
--#define __NR_sched_setparam 154
--#define __NR_sched_getparam 155
--#define __NR_sched_setscheduler 156
--#define __NR_sched_getscheduler 157
--#define __NR_sched_yield 158
--#define __NR_sched_get_priority_max 159
--#define __NR_sched_get_priority_min 160
--#define __NR_sched_rr_get_interval 161
--#define __NR_nanosleep 162
--#define __NR_mremap 163
--#define __NR_setresuid 164
--#define __NR_getresuid 165
--#define __NR_vm86 166
--#define __NR_query_module 167
--#define __NR_poll 168
--#define __NR_nfsservctl 169
--#define __NR_setresgid 170
--#define __NR_getresgid 171
--#define __NR_prctl 172
--#define __NR_rt_sigreturn 173
--#define __NR_rt_sigaction 174
--#define __NR_rt_sigprocmask 175
--#define __NR_rt_sigpending 176
--#define __NR_rt_sigtimedwait 177
--#define __NR_rt_sigqueueinfo 178
--#define __NR_rt_sigsuspend 179
--#define __NR_pread64 180
--#define __NR_pwrite64 181
--#define __NR_chown 182
--#define __NR_getcwd 183
--#define __NR_capget 184
--#define __NR_capset 185
--#define __NR_sigaltstack 186
--#define __NR_sendfile 187
--#define __NR_streams1 188 /* some people actually want it */
--#define __NR_streams2 189 /* some people actually want it */
--#define __NR_vfork 190
--#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
--#define __NR_mmap2 192
--#define __NR_truncate64 193
--#define __NR_ftruncate64 194
--#define __NR_stat64 195
--#define __NR_lstat64 196
--#define __NR_fstat64 197
--#define __NR_lchown32 198
--#define __NR_getuid32 199
--#define __NR_getgid32 200
--#define __NR_geteuid32 201
--#define __NR_getegid32 202
--#define __NR_setreuid32 203
--#define __NR_setregid32 204
--#define __NR_getgroups32 205
--#define __NR_setgroups32 206
--#define __NR_fchown32 207
--#define __NR_setresuid32 208
--#define __NR_getresuid32 209
--#define __NR_setresgid32 210
--#define __NR_getresgid32 211
--#define __NR_chown32 212
--#define __NR_setuid32 213
--#define __NR_setgid32 214
--#define __NR_setfsuid32 215
--#define __NR_setfsgid32 216
--#define __NR_pivot_root 217
--#define __NR_mincore 218
--#define __NR_madvise 219
+-#define flush_tlb() flush_tlb_current_task()
-
--/* Non-multiplexed socket family */
--#define __NR_socket 220
--#define __NR_bind 221
--#define __NR_connect 222
--#define __NR_listen 223
--#define __NR_accept 224
--#define __NR_getsockname 225
--#define __NR_getpeername 226
--#define __NR_socketpair 227
--#define __NR_send 228
--#define __NR_sendto 229
--#define __NR_recv 230
--#define __NR_recvfrom 231
--#define __NR_shutdown 232
--#define __NR_setsockopt 233
--#define __NR_getsockopt 234
--#define __NR_sendmsg 235
--#define __NR_recvmsg 236
+-static inline void flush_tlb_range(struct vm_area_struct * vma, unsigned long start, unsigned long end)
+-{
+- flush_tlb_mm(vma->vm_mm);
+-}
+-
+-#define TLBSTATE_OK 1
+-#define TLBSTATE_LAZY 2
+-
+-/* Roughly an IPI every 20MB with 4k pages for freeing page table
+- ranges. Cost is about 42k of memory for each CPU. */
+-#define ARCH_FREE_PTE_NR 5350
-
--/* Non-multiplexed IPC family */
--#define __NR_semop 237
--#define __NR_semget 238
--#define __NR_semctl 239
--#define __NR_msgsnd 240
--#define __NR_msgrcv 241
--#define __NR_msgget 242
--#define __NR_msgctl 243
--#if 0
--#define __NR_shmatcall 244
-#endif
--#define __NR_shmdt 245
--#define __NR_shmget 246
--#define __NR_shmctl 247
-
--#define __NR_getdents64 248
--#define __NR_fcntl64 249
--/* 223 is unused */
--#define __NR_gettid 252
--#define __NR_readahead 253
--#define __NR_setxattr 254
--#define __NR_lsetxattr 255
--#define __NR_fsetxattr 256
--#define __NR_getxattr 257
--#define __NR_lgetxattr 258
--#define __NR_fgetxattr 269
--#define __NR_listxattr 260
--#define __NR_llistxattr 261
--#define __NR_flistxattr 262
--#define __NR_removexattr 263
--#define __NR_lremovexattr 264
--#define __NR_fremovexattr 265
--#define __NR_tkill 266
--#define __NR_sendfile64 267
--#define __NR_futex 268
--#define __NR_sched_setaffinity 269
--#define __NR_sched_getaffinity 270
--#define __NR_set_thread_area 271
--#define __NR_get_thread_area 272
--#define __NR_io_setup 273
--#define __NR_io_destroy 274
--#define __NR_io_getevents 275
--#define __NR_io_submit 276
--#define __NR_io_cancel 277
--#define __NR_fadvise64 278
--#define __NR_exit_group 280
+-static inline void flush_tlb_kernel_range(unsigned long start,
+- unsigned long end)
+-{
+- flush_tlb_all();
+-}
-
--#define __NR_lookup_dcookie 281
--#define __NR_epoll_create 282
--#define __NR_epoll_ctl 283
--#define __NR_epoll_wait 284
--#define __NR_remap_file_pages 285
--#define __NR_set_tid_address 286
--#define __NR_timer_create 287
--#define __NR_timer_settime (__NR_timer_create+1)
--#define __NR_timer_gettime (__NR_timer_create+2)
--#define __NR_timer_getoverrun (__NR_timer_create+3)
--#define __NR_timer_delete (__NR_timer_create+4)
--#define __NR_clock_settime (__NR_timer_create+5)
--#define __NR_clock_gettime (__NR_timer_create+6)
--#define __NR_clock_getres (__NR_timer_create+7)
--#define __NR_clock_nanosleep (__NR_timer_create+8)
--#define __NR_statfs64 296
--#define __NR_fstatfs64 297
--#define __NR_tgkill 298
--#define __NR_utimes 299
--#define __NR_fadvise64_64 300
--#define __NR_vserver 301
--#define __NR_mbind 302
--#define __NR_get_mempolicy 303
--#define __NR_set_mempolicy 304
--#define __NR_mq_open 305
--#define __NR_mq_unlink (__NR_mq_open+1)
--#define __NR_mq_timedsend (__NR_mq_open+2)
--#define __NR_mq_timedreceive (__NR_mq_open+3)
--#define __NR_mq_notify (__NR_mq_open+4)
--#define __NR_mq_getsetattr (__NR_mq_open+5)
--#define __NR_kexec_load 311
--#define __NR_waitid 312
--#define __NR_add_key 313
--#define __NR_request_key 314
--#define __NR_keyctl 315
--#define __NR_ioprio_set 316
--#define __NR_ioprio_get 317
--#define __NR_inotify_init 318
--#define __NR_inotify_add_watch 319
--#define __NR_inotify_rm_watch 320
--/* 321 is unused */
--#define __NR_migrate_pages 322
--#define __NR_openat 323
--#define __NR_mkdirat 324
--#define __NR_mknodat 325
--#define __NR_fchownat 326
--#define __NR_futimesat 327
--#define __NR_fstatat64 328
--#define __NR_unlinkat 329
--#define __NR_renameat 330
--#define __NR_linkat 331
--#define __NR_symlinkat 332
--#define __NR_readlinkat 333
--#define __NR_fchmodat 334
--#define __NR_faccessat 335
--#define __NR_pselect6 336
--#define __NR_ppoll 337
--#define __NR_unshare 338
--#define __NR_set_robust_list 339
--#define __NR_get_robust_list 340
--#define __NR_splice 341
--#define __NR_sync_file_range 342
--#define __NR_tee 343
--#define __NR_vmsplice 344
--#define __NR_move_pages 345
--#define __NR_getcpu 346
--#define __NR_epoll_pwait 347
--#define __NR_utimensat 348
--#define __NR_signalfd 349
--#define __NR_timerfd 350
--#define __NR_eventfd 351
--#define __NR_fallocate 352
+-#endif /* _X8664_TLBFLUSH_H */
+diff --git a/include/asm-x86/topology.h b/include/asm-x86/topology.h
+index b10fde9..8af05a9 100644
+--- a/include/asm-x86/topology.h
++++ b/include/asm-x86/topology.h
+@@ -1,5 +1,188 @@
++/*
++ * Written by: Matthew Dobson, IBM Corporation
++ *
++ * Copyright (C) 2002, IBM Corp.
++ *
++ * All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
++ * NON INFRINGEMENT. See the GNU General Public License for more
++ * details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
++ *
++ * Send feedback to <colpatch at us.ibm.com>
++ */
++#ifndef _ASM_X86_TOPOLOGY_H
++#define _ASM_X86_TOPOLOGY_H
++
++#ifdef CONFIG_NUMA
++#include <linux/cpumask.h>
++#include <asm/mpspec.h>
++
++/* Mappings between logical cpu number and node number */
+ #ifdef CONFIG_X86_32
+-# include "topology_32.h"
++extern int cpu_to_node_map[];
++
+ #else
+-# include "topology_64.h"
++DECLARE_PER_CPU(int, x86_cpu_to_node_map);
++extern int x86_cpu_to_node_map_init[];
++extern void *x86_cpu_to_node_map_early_ptr;
++/* Returns the number of the current Node. */
++#define numa_node_id() (early_cpu_to_node(raw_smp_processor_id()))
++#endif
++
++extern cpumask_t node_to_cpumask_map[];
++
++#define NUMA_NO_NODE (-1)
++
++/* Returns the number of the node containing CPU 'cpu' */
++#ifdef CONFIG_X86_32
++#define early_cpu_to_node(cpu) cpu_to_node(cpu)
++static inline int cpu_to_node(int cpu)
++{
++ return cpu_to_node_map[cpu];
++}
++
++#else /* CONFIG_X86_64 */
++static inline int early_cpu_to_node(int cpu)
++{
++ int *cpu_to_node_map = x86_cpu_to_node_map_early_ptr;
++
++ if (cpu_to_node_map)
++ return cpu_to_node_map[cpu];
++ else if (per_cpu_offset(cpu))
++ return per_cpu(x86_cpu_to_node_map, cpu);
++ else
++ return NUMA_NO_NODE;
++}
++
++static inline int cpu_to_node(int cpu)
++{
++#ifdef CONFIG_DEBUG_PER_CPU_MAPS
++ if (x86_cpu_to_node_map_early_ptr) {
++ printk("KERN_NOTICE cpu_to_node(%d): usage too early!\n",
++ (int)cpu);
++ dump_stack();
++ return ((int *)x86_cpu_to_node_map_early_ptr)[cpu];
++ }
++#endif
++ if (per_cpu_offset(cpu))
++ return per_cpu(x86_cpu_to_node_map, cpu);
++ else
++ return NUMA_NO_NODE;
++}
++#endif /* CONFIG_X86_64 */
++
++/*
++ * Returns the number of the node containing Node 'node'. This
++ * architecture is flat, so it is a pretty simple function!
++ */
++#define parent_node(node) (node)
++
++/* Returns a bitmask of CPUs on Node 'node'. */
++static inline cpumask_t node_to_cpumask(int node)
++{
++ return node_to_cpumask_map[node];
++}
++
++/* Returns the number of the first CPU on Node 'node'. */
++static inline int node_to_first_cpu(int node)
++{
++ cpumask_t mask = node_to_cpumask(node);
++
++ return first_cpu(mask);
++}
++
++#define pcibus_to_node(bus) __pcibus_to_node(bus)
++#define pcibus_to_cpumask(bus) __pcibus_to_cpumask(bus)
++
++#ifdef CONFIG_X86_32
++extern unsigned long node_start_pfn[];
++extern unsigned long node_end_pfn[];
++extern unsigned long node_remap_size[];
++#define node_has_online_mem(nid) (node_start_pfn[nid] != node_end_pfn[nid])
++
++# ifdef CONFIG_X86_HT
++# define ENABLE_TOPO_DEFINES
++# endif
++
++# define SD_CACHE_NICE_TRIES 1
++# define SD_IDLE_IDX 1
++# define SD_NEWIDLE_IDX 2
++# define SD_FORKEXEC_IDX 0
++
++#else
++
++# ifdef CONFIG_SMP
++# define ENABLE_TOPO_DEFINES
++# endif
++
++# define SD_CACHE_NICE_TRIES 2
++# define SD_IDLE_IDX 2
++# define SD_NEWIDLE_IDX 0
++# define SD_FORKEXEC_IDX 1
++
++#endif
++
++/* sched_domains SD_NODE_INIT for NUMAQ machines */
++#define SD_NODE_INIT (struct sched_domain) { \
++ .span = CPU_MASK_NONE, \
++ .parent = NULL, \
++ .child = NULL, \
++ .groups = NULL, \
++ .min_interval = 8, \
++ .max_interval = 32, \
++ .busy_factor = 32, \
++ .imbalance_pct = 125, \
++ .cache_nice_tries = SD_CACHE_NICE_TRIES, \
++ .busy_idx = 3, \
++ .idle_idx = SD_IDLE_IDX, \
++ .newidle_idx = SD_NEWIDLE_IDX, \
++ .wake_idx = 1, \
++ .forkexec_idx = SD_FORKEXEC_IDX, \
++ .flags = SD_LOAD_BALANCE \
++ | SD_BALANCE_EXEC \
++ | SD_BALANCE_FORK \
++ | SD_SERIALIZE \
++ | SD_WAKE_BALANCE, \
++ .last_balance = jiffies, \
++ .balance_interval = 1, \
++ .nr_balance_failed = 0, \
++}
++
++#ifdef CONFIG_X86_64_ACPI_NUMA
++extern int __node_distance(int, int);
++#define node_distance(a, b) __node_distance(a, b)
++#endif
++
++#else /* CONFIG_NUMA */
++
++#include <asm-generic/topology.h>
++
++#endif
++
++extern cpumask_t cpu_coregroup_map(int cpu);
++
++#ifdef ENABLE_TOPO_DEFINES
++#define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
++#define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id)
++#define topology_core_siblings(cpu) (per_cpu(cpu_core_map, cpu))
++#define topology_thread_siblings(cpu) (per_cpu(cpu_sibling_map, cpu))
++#endif
++
++#ifdef CONFIG_SMP
++#define mc_capable() (boot_cpu_data.x86_max_cores > 1)
++#define smt_capable() (smp_num_siblings > 1)
++#endif
++
+ #endif
+diff --git a/include/asm-x86/topology_32.h b/include/asm-x86/topology_32.h
+deleted file mode 100644
+index 9040f5a..0000000
+--- a/include/asm-x86/topology_32.h
++++ /dev/null
+@@ -1,121 +0,0 @@
+-/*
+- * linux/include/asm-i386/topology.h
+- *
+- * Written by: Matthew Dobson, IBM Corporation
+- *
+- * Copyright (C) 2002, IBM Corp.
+- *
+- * All rights reserved.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful, but
+- * WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+- * NON INFRINGEMENT. See the GNU General Public License for more
+- * details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+- *
+- * Send feedback to <colpatch at us.ibm.com>
+- */
+-#ifndef _ASM_I386_TOPOLOGY_H
+-#define _ASM_I386_TOPOLOGY_H
-
--#ifdef __KERNEL__
+-#ifdef CONFIG_X86_HT
+-#define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
+-#define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id)
+-#define topology_core_siblings(cpu) (per_cpu(cpu_core_map, cpu))
+-#define topology_thread_siblings(cpu) (per_cpu(cpu_sibling_map, cpu))
+-#endif
-
--#define NR_syscalls 353
+-#ifdef CONFIG_NUMA
-
--#define __ARCH_WANT_IPC_PARSE_VERSION
--#define __ARCH_WANT_OLD_READDIR
--#define __ARCH_WANT_OLD_STAT
--#define __ARCH_WANT_STAT64
--#define __ARCH_WANT_SYS_ALARM
--#define __ARCH_WANT_SYS_GETHOSTNAME
--#define __ARCH_WANT_SYS_PAUSE
--#define __ARCH_WANT_SYS_SGETMASK
--#define __ARCH_WANT_SYS_SIGNAL
--#define __ARCH_WANT_SYS_TIME
--#define __ARCH_WANT_SYS_UTIME
--#define __ARCH_WANT_SYS_WAITPID
--#define __ARCH_WANT_SYS_SOCKETCALL
--#define __ARCH_WANT_SYS_FADVISE64
--#define __ARCH_WANT_SYS_GETPGRP
--#define __ARCH_WANT_SYS_LLSEEK
--#define __ARCH_WANT_SYS_NICE
--#define __ARCH_WANT_SYS_OLD_GETRLIMIT
--#define __ARCH_WANT_SYS_OLDUMOUNT
--#define __ARCH_WANT_SYS_SIGPENDING
--#define __ARCH_WANT_SYS_SIGPROCMASK
--#define __ARCH_WANT_SYS_RT_SIGACTION
+-#include <asm/mpspec.h>
+-
+-#include <linux/cpumask.h>
+-
+-/* Mappings between logical cpu number and node number */
+-extern cpumask_t node_2_cpu_mask[];
+-extern int cpu_2_node[];
+-
+-/* Returns the number of the node containing CPU 'cpu' */
+-static inline int cpu_to_node(int cpu)
+-{
+- return cpu_2_node[cpu];
+-}
+-
+-/* Returns the number of the node containing Node 'node'. This architecture is flat,
+- so it is a pretty simple function! */
+-#define parent_node(node) (node)
+-
+-/* Returns a bitmask of CPUs on Node 'node'. */
+-static inline cpumask_t node_to_cpumask(int node)
+-{
+- return node_2_cpu_mask[node];
+-}
+-
+-/* Returns the number of the first CPU on Node 'node'. */
+-static inline int node_to_first_cpu(int node)
+-{
+- cpumask_t mask = node_to_cpumask(node);
+- return first_cpu(mask);
+-}
-
+-#define pcibus_to_node(bus) ((struct pci_sysdata *)((bus)->sysdata))->node
+-#define pcibus_to_cpumask(bus) node_to_cpumask(pcibus_to_node(bus))
+-
+-/* sched_domains SD_NODE_INIT for NUMAQ machines */
+-#define SD_NODE_INIT (struct sched_domain) { \
+- .span = CPU_MASK_NONE, \
+- .parent = NULL, \
+- .child = NULL, \
+- .groups = NULL, \
+- .min_interval = 8, \
+- .max_interval = 32, \
+- .busy_factor = 32, \
+- .imbalance_pct = 125, \
+- .cache_nice_tries = 1, \
+- .busy_idx = 3, \
+- .idle_idx = 1, \
+- .newidle_idx = 2, \
+- .wake_idx = 1, \
+- .flags = SD_LOAD_BALANCE \
+- | SD_BALANCE_EXEC \
+- | SD_BALANCE_FORK \
+- | SD_SERIALIZE \
+- | SD_WAKE_BALANCE, \
+- .last_balance = jiffies, \
+- .balance_interval = 1, \
+- .nr_balance_failed = 0, \
+-}
+-
+-extern unsigned long node_start_pfn[];
+-extern unsigned long node_end_pfn[];
+-extern unsigned long node_remap_size[];
+-
+-#define node_has_online_mem(nid) (node_start_pfn[nid] != node_end_pfn[nid])
+-
+-#else /* !CONFIG_NUMA */
-/*
-- * "Conditional" syscalls
-- *
-- * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
-- * but it doesn't work on all toolchains, so we just do it by hand
+- * Other i386 platforms should define their own version of the
+- * above macros here.
- */
--#ifndef cond_syscall
--#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
+-
+-#include <asm-generic/topology.h>
+-
+-#endif /* CONFIG_NUMA */
+-
+-extern cpumask_t cpu_coregroup_map(int cpu);
+-
+-#ifdef CONFIG_SMP
+-#define mc_capable() (boot_cpu_data.x86_max_cores > 1)
+-#define smt_capable() (smp_num_siblings > 1)
-#endif
-
--#endif /* __KERNEL__ */
--#endif /* __ASM_SH64_UNISTD_H */
-diff --git a/include/asm-sh64/user.h b/include/asm-sh64/user.h
+-#endif /* _ASM_I386_TOPOLOGY_H */
+diff --git a/include/asm-x86/topology_64.h b/include/asm-x86/topology_64.h
deleted file mode 100644
-index eb3b33e..0000000
---- a/include/asm-sh64/user.h
+index a718dda..0000000
+--- a/include/asm-x86/topology_64.h
+++ /dev/null
-@@ -1,70 +0,0 @@
--#ifndef __ASM_SH64_USER_H
--#define __ASM_SH64_USER_H
+@@ -1,71 +0,0 @@
+-#ifndef _ASM_X86_64_TOPOLOGY_H
+-#define _ASM_X86_64_TOPOLOGY_H
-
--/*
-- * This file is subject to the terms and conditions of the GNU General Public
-- * License. See the file "COPYING" in the main directory of this archive
-- * for more details.
-- *
-- * include/asm-sh64/user.h
-- *
-- * Copyright (C) 2000, 2001 Paolo Alberelli
-- *
-- */
-
--#include <linux/types.h>
--#include <asm/ptrace.h>
--#include <asm/page.h>
+-#ifdef CONFIG_NUMA
-
--/*
-- * Core file format: The core file is written in such a way that gdb
-- * can understand it and provide useful information to the user (under
-- * linux we use the `trad-core' bfd). The file contents are as follows:
-- *
-- * upage: 1 page consisting of a user struct that tells gdb
-- * what is present in the file. Directly after this is a
-- * copy of the task_struct, which is currently not used by gdb,
-- * but it may come in handy at some point. All of the registers
-- * are stored as part of the upage. The upage should always be
-- * only one page long.
-- * data: The data segment follows next. We use current->end_text to
-- * current->brk to pick up all of the user variables, plus any memory
-- * that may have been sbrk'ed. No attempt is made to determine if a
-- * page is demand-zero or if a page is totally unused, we just cover
-- * the entire range. All of the addresses are rounded in such a way
-- * that an integral number of pages is written.
-- * stack: We need the stack information in order to get a meaningful
-- * backtrace. We need to write the data from usp to
-- * current->start_stack, so we round each of these in order to be able
-- * to write an integer number of pages.
-- */
+-#include <asm/mpspec.h>
+-#include <linux/bitops.h>
-
--struct user_fpu_struct {
-- unsigned long long fp_regs[32];
-- unsigned int fpscr;
--};
+-extern cpumask_t cpu_online_map;
-
--struct user {
-- struct pt_regs regs; /* entire machine state */
-- struct user_fpu_struct fpu; /* Math Co-processor registers */
-- int u_fpvalid; /* True if math co-processor being used */
-- size_t u_tsize; /* text size (pages) */
-- size_t u_dsize; /* data size (pages) */
-- size_t u_ssize; /* stack size (pages) */
-- unsigned long start_code; /* text starting address */
-- unsigned long start_data; /* data starting address */
-- unsigned long start_stack; /* stack starting address */
-- long int signal; /* signal causing core dump */
-- struct regs * u_ar0; /* help gdb find registers */
-- struct user_fpu_struct* u_fpstate; /* Math Co-processor pointer */
-- unsigned long magic; /* identifies a core file */
-- char u_comm[32]; /* user command name */
--};
+-extern unsigned char cpu_to_node[];
+-extern cpumask_t node_to_cpumask[];
-
--#define NBPG PAGE_SIZE
--#define UPAGES 1
--#define HOST_TEXT_START_ADDR (u.start_code)
--#define HOST_DATA_START_ADDR (u.start_data)
--#define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
+-#ifdef CONFIG_ACPI_NUMA
+-extern int __node_distance(int, int);
+-#define node_distance(a,b) __node_distance(a,b)
+-/* #else fallback version */
+-#endif
+-
+-#define cpu_to_node(cpu) (cpu_to_node[cpu])
+-#define parent_node(node) (node)
+-#define node_to_first_cpu(node) (first_cpu(node_to_cpumask[node]))
+-#define node_to_cpumask(node) (node_to_cpumask[node])
+-#define pcibus_to_node(bus) ((struct pci_sysdata *)((bus)->sysdata))->node
+-#define pcibus_to_cpumask(bus) node_to_cpumask(pcibus_to_node(bus));
+-
+-#define numa_node_id() read_pda(nodenumber)
+-
+-/* sched_domains SD_NODE_INIT for x86_64 machines */
+-#define SD_NODE_INIT (struct sched_domain) { \
+- .span = CPU_MASK_NONE, \
+- .parent = NULL, \
+- .child = NULL, \
+- .groups = NULL, \
+- .min_interval = 8, \
+- .max_interval = 32, \
+- .busy_factor = 32, \
+- .imbalance_pct = 125, \
+- .cache_nice_tries = 2, \
+- .busy_idx = 3, \
+- .idle_idx = 2, \
+- .newidle_idx = 0, \
+- .wake_idx = 1, \
+- .forkexec_idx = 1, \
+- .flags = SD_LOAD_BALANCE \
+- | SD_BALANCE_FORK \
+- | SD_BALANCE_EXEC \
+- | SD_SERIALIZE \
+- | SD_WAKE_BALANCE, \
+- .last_balance = jiffies, \
+- .balance_interval = 1, \
+- .nr_balance_failed = 0, \
+-}
-
--#endif /* __ASM_SH64_USER_H */
-diff --git a/include/asm-x86/thread_info_32.h b/include/asm-x86/thread_info_32.h
-index 22a8cbc..a516e91 100644
---- a/include/asm-x86/thread_info_32.h
-+++ b/include/asm-x86/thread_info_32.h
-@@ -85,7 +85,7 @@ struct thread_info {
+-#endif
+-
+-#ifdef CONFIG_SMP
+-#define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
+-#define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id)
+-#define topology_core_siblings(cpu) (per_cpu(cpu_core_map, cpu))
+-#define topology_thread_siblings(cpu) (per_cpu(cpu_sibling_map, cpu))
+-#define mc_capable() (boot_cpu_data.x86_max_cores > 1)
+-#define smt_capable() (smp_num_siblings > 1)
+-#endif
+-
+-#include <asm-generic/topology.h>
+-
+-extern cpumask_t cpu_coregroup_map(int cpu);
+-
+-#endif
+diff --git a/include/asm-x86/tsc.h b/include/asm-x86/tsc.h
+index 6baab30..7d3e27f 100644
+--- a/include/asm-x86/tsc.h
++++ b/include/asm-x86/tsc.h
+@@ -17,6 +17,8 @@ typedef unsigned long long cycles_t;
+ extern unsigned int cpu_khz;
+ extern unsigned int tsc_khz;
++extern void disable_TSC(void);
++
+ static inline cycles_t get_cycles(void)
+ {
+ unsigned long long ret = 0;
+@@ -25,39 +27,22 @@ static inline cycles_t get_cycles(void)
+ if (!cpu_has_tsc)
+ return 0;
+ #endif
+-
+-#if defined(CONFIG_X86_GENERIC) || defined(CONFIG_X86_TSC)
+ rdtscll(ret);
+-#endif
++
+ return ret;
+ }
- /* how to get the current stack pointer from C */
--register unsigned long current_stack_pointer asm("esp") __attribute_used__;
-+register unsigned long current_stack_pointer asm("esp") __used;
+-/* Like get_cycles, but make sure the CPU is synchronized. */
+-static __always_inline cycles_t get_cycles_sync(void)
++static inline cycles_t vget_cycles(void)
+ {
+- unsigned long long ret;
+- unsigned eax, edx;
+-
+- /*
+- * Use RDTSCP if possible; it is guaranteed to be synchronous
+- * and doesn't cause a VMEXIT on Hypervisors
+- */
+- alternative_io(ASM_NOP3, ".byte 0x0f,0x01,0xf9", X86_FEATURE_RDTSCP,
+- ASM_OUTPUT2("=a" (eax), "=d" (edx)),
+- "a" (0U), "d" (0U) : "ecx", "memory");
+- ret = (((unsigned long long)edx) << 32) | ((unsigned long long)eax);
+- if (ret)
+- return ret;
+-
+ /*
+- * Don't do an additional sync on CPUs where we know
+- * RDTSC is already synchronous:
++ * We only do VDSOs on TSC capable CPUs, so this shouldnt
++ * access boot_cpu_data (which is not VDSO-safe):
+ */
+- alternative_io("cpuid", ASM_NOP2, X86_FEATURE_SYNC_RDTSC,
+- "=a" (eax), "0" (1) : "ebx","ecx","edx","memory");
+- rdtscll(ret);
+-
+- return ret;
++#ifndef CONFIG_X86_TSC
++ if (!cpu_has_tsc)
++ return 0;
++#endif
++ return (cycles_t) __native_read_tsc();
+ }
- /* how to get the thread information struct from C */
- static inline struct thread_info *current_thread_info(void)
-@@ -132,6 +132,7 @@ static inline struct thread_info *current_thread_info(void)
- #define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
- #define TIF_SECCOMP 7 /* secure computing */
- #define TIF_RESTORE_SIGMASK 8 /* restore signal mask in do_signal() */
-+#define TIF_HRTICK_RESCHED 9 /* reprogram hrtick timer */
- #define TIF_MEMDIE 16
- #define TIF_DEBUG 17 /* uses debug registers */
- #define TIF_IO_BITMAP 18 /* uses I/O bitmap */
-@@ -147,6 +148,7 @@ static inline struct thread_info *current_thread_info(void)
- #define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
- #define _TIF_SECCOMP (1<<TIF_SECCOMP)
- #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
-+#define _TIF_HRTICK_RESCHED (1<<TIF_HRTICK_RESCHED)
- #define _TIF_DEBUG (1<<TIF_DEBUG)
- #define _TIF_IO_BITMAP (1<<TIF_IO_BITMAP)
- #define _TIF_FREEZE (1<<TIF_FREEZE)
-diff --git a/include/asm-x86/thread_info_64.h b/include/asm-x86/thread_info_64.h
-index beae2bf..7f6ee68 100644
---- a/include/asm-x86/thread_info_64.h
-+++ b/include/asm-x86/thread_info_64.h
-@@ -115,6 +115,7 @@ static inline struct thread_info *stack_thread_info(void)
- #define TIF_SECCOMP 8 /* secure computing */
- #define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal */
- #define TIF_MCE_NOTIFY 10 /* notify userspace of an MCE */
-+#define TIF_HRTICK_RESCHED 11 /* reprogram hrtick timer */
- /* 16 free */
- #define TIF_IA32 17 /* 32bit process */
- #define TIF_FORK 18 /* ret_from_fork */
-@@ -133,6 +134,7 @@ static inline struct thread_info *stack_thread_info(void)
- #define _TIF_SECCOMP (1<<TIF_SECCOMP)
- #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
- #define _TIF_MCE_NOTIFY (1<<TIF_MCE_NOTIFY)
-+#define _TIF_HRTICK_RESCHED (1<<TIF_HRTICK_RESCHED)
- #define _TIF_IA32 (1<<TIF_IA32)
- #define _TIF_FORK (1<<TIF_FORK)
- #define _TIF_ABI_PENDING (1<<TIF_ABI_PENDING)
-@@ -146,6 +148,9 @@ static inline struct thread_info *stack_thread_info(void)
- /* work to do on any return to user space */
- #define _TIF_ALLWORK_MASK (0x0000FFFF & ~_TIF_SECCOMP)
+ extern void tsc_init(void);
+@@ -73,8 +58,7 @@ int check_tsc_unstable(void);
+ extern void check_tsc_sync_source(int cpu);
+ extern void check_tsc_sync_target(void);
-+#define _TIF_DO_NOTIFY_MASK \
-+ (_TIF_SIGPENDING|_TIF_SINGLESTEP|_TIF_MCE_NOTIFY|_TIF_HRTICK_RESCHED)
+-#ifdef CONFIG_X86_64
+ extern void tsc_calibrate(void);
+-#endif
++extern int notsc_setup(char *);
+
+ #endif
+diff --git a/include/asm-x86/uaccess_64.h b/include/asm-x86/uaccess_64.h
+index f4ce876..31d7947 100644
+--- a/include/asm-x86/uaccess_64.h
++++ b/include/asm-x86/uaccess_64.h
+@@ -65,6 +65,8 @@ struct exception_table_entry
+ unsigned long insn, fixup;
+ };
+
++extern int fixup_exception(struct pt_regs *regs);
+
- /* flags to check in __switch_to() */
- #define _TIF_WORK_CTXSW (_TIF_DEBUG|_TIF_IO_BITMAP)
+ #define ARCH_HAS_SEARCH_EXTABLE
+
+ /*
+diff --git a/include/asm-x86/unistd_32.h b/include/asm-x86/unistd_32.h
+index 9b15545..8d8f9b5 100644
+--- a/include/asm-x86/unistd_32.h
++++ b/include/asm-x86/unistd_32.h
+@@ -333,8 +333,6 @@
+
+ #ifdef __KERNEL__
+
+-#define NR_syscalls 325
+-
+ #define __ARCH_WANT_IPC_PARSE_VERSION
+ #define __ARCH_WANT_OLD_READDIR
+ #define __ARCH_WANT_OLD_STAT
+diff --git a/include/asm-x86/user_32.h b/include/asm-x86/user_32.h
+index 0e85d2a..ed8b8fc 100644
+--- a/include/asm-x86/user_32.h
++++ b/include/asm-x86/user_32.h
+@@ -75,13 +75,23 @@ struct user_fxsr_struct {
+ * doesn't use the extra segment registers)
+ */
+ struct user_regs_struct {
+- long ebx, ecx, edx, esi, edi, ebp, eax;
+- unsigned short ds, __ds, es, __es;
+- unsigned short fs, __fs, gs, __gs;
+- long orig_eax, eip;
+- unsigned short cs, __cs;
+- long eflags, esp;
+- unsigned short ss, __ss;
++ unsigned long bx;
++ unsigned long cx;
++ unsigned long dx;
++ unsigned long si;
++ unsigned long di;
++ unsigned long bp;
++ unsigned long ax;
++ unsigned long ds;
++ unsigned long es;
++ unsigned long fs;
++ unsigned long gs;
++ unsigned long orig_ax;
++ unsigned long ip;
++ unsigned long cs;
++ unsigned long flags;
++ unsigned long sp;
++ unsigned long ss;
+ };
+
+ /* When the kernel dumps core, it starts by dumping the user struct -
+diff --git a/include/asm-x86/user_64.h b/include/asm-x86/user_64.h
+index 12785c6..a5449d4 100644
+--- a/include/asm-x86/user_64.h
++++ b/include/asm-x86/user_64.h
+@@ -40,13 +40,13 @@
+ * and both the standard and SIMD floating point data can be accessed via
+ * the new ptrace requests. In either case, changes to the FPU environment
+ * will be reflected in the task's state as expected.
+- *
++ *
+ * x86-64 support by Andi Kleen.
+ */
+
+ /* This matches the 64bit FXSAVE format as defined by AMD. It is the same
+ as the 32bit format defined by Intel, except that the selector:offset pairs for
+- data and eip are replaced with flat 64bit pointers. */
++ data and eip are replaced with flat 64bit pointers. */
+ struct user_i387_struct {
+ unsigned short cwd;
+ unsigned short swd;
+@@ -65,13 +65,34 @@ struct user_i387_struct {
+ * Segment register layout in coredumps.
+ */
+ struct user_regs_struct {
+- unsigned long r15,r14,r13,r12,rbp,rbx,r11,r10;
+- unsigned long r9,r8,rax,rcx,rdx,rsi,rdi,orig_rax;
+- unsigned long rip,cs,eflags;
+- unsigned long rsp,ss;
+- unsigned long fs_base, gs_base;
+- unsigned long ds,es,fs,gs;
+-};
++ unsigned long r15;
++ unsigned long r14;
++ unsigned long r13;
++ unsigned long r12;
++ unsigned long bp;
++ unsigned long bx;
++ unsigned long r11;
++ unsigned long r10;
++ unsigned long r9;
++ unsigned long r8;
++ unsigned long ax;
++ unsigned long cx;
++ unsigned long dx;
++ unsigned long si;
++ unsigned long di;
++ unsigned long orig_ax;
++ unsigned long ip;
++ unsigned long cs;
++ unsigned long flags;
++ unsigned long sp;
++ unsigned long ss;
++ unsigned long fs_base;
++ unsigned long gs_base;
++ unsigned long ds;
++ unsigned long es;
++ unsigned long fs;
++ unsigned long gs;
++};
+
+ /* When the kernel dumps core, it starts by dumping the user struct -
+ this will be used by gdb to figure out where the data and stack segments
+@@ -94,7 +115,7 @@ struct user{
+ This is actually the bottom of the stack,
+ the top of the stack is always found in the
+ esp register. */
+- long int signal; /* Signal that caused the core dump. */
++ long int signal; /* Signal that caused the core dump. */
+ int reserved; /* No longer used */
+ int pad1;
+ struct user_pt_regs * u_ar0; /* Used by gdb to help find the values for */
+diff --git a/include/asm-x86/vdso.h b/include/asm-x86/vdso.h
+new file mode 100644
+index 0000000..629bcb6
+--- /dev/null
++++ b/include/asm-x86/vdso.h
+@@ -0,0 +1,28 @@
++#ifndef _ASM_X86_VDSO_H
++#define _ASM_X86_VDSO_H 1
++
++#ifdef CONFIG_X86_64
++extern const char VDSO64_PRELINK[];
++
++/*
++ * Given a pointer to the vDSO image, find the pointer to VDSO64_name
++ * as that symbol is defined in the vDSO sources or linker script.
++ */
++#define VDSO64_SYMBOL(base, name) ({ \
++ extern const char VDSO64_##name[]; \
++ (void *) (VDSO64_##name - VDSO64_PRELINK + (unsigned long) (base)); })
++#endif
++
++#if defined CONFIG_X86_32 || defined CONFIG_COMPAT
++extern const char VDSO32_PRELINK[];
++
++/*
++ * Given a pointer to the vDSO image, find the pointer to VDSO32_name
++ * as that symbol is defined in the vDSO sources or linker script.
++ */
++#define VDSO32_SYMBOL(base, name) ({ \
++ extern const char VDSO32_##name[]; \
++ (void *) (VDSO32_##name - VDSO32_PRELINK + (unsigned long) (base)); })
++#endif
++
++#endif /* asm-x86/vdso.h */
+diff --git a/include/asm-x86/vsyscall.h b/include/asm-x86/vsyscall.h
+index f01c49f..17b3700 100644
+--- a/include/asm-x86/vsyscall.h
++++ b/include/asm-x86/vsyscall.h
+@@ -36,6 +36,8 @@ extern volatile unsigned long __jiffies;
+ extern int vgetcpu_mode;
+ extern struct timezone sys_tz;
+
++extern void map_vsyscall(void);
++
+ #endif /* __KERNEL__ */
+ #endif /* _ASM_X86_64_VSYSCALL_H_ */
+diff --git a/include/asm-x86/vsyscall32.h b/include/asm-x86/vsyscall32.h
+deleted file mode 100644
+index c631c08..0000000
+--- a/include/asm-x86/vsyscall32.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-#ifndef _ASM_VSYSCALL32_H
+-#define _ASM_VSYSCALL32_H 1
+-
+-/* Values need to match arch/x86_64/ia32/vsyscall.lds */
+-
+-#ifdef __ASSEMBLY__
+-#define VSYSCALL32_BASE 0xffffe000
+-#define VSYSCALL32_SYSEXIT (VSYSCALL32_BASE + 0x410)
+-#else
+-#define VSYSCALL32_BASE 0xffffe000UL
+-#define VSYSCALL32_END (VSYSCALL32_BASE + PAGE_SIZE)
+-#define VSYSCALL32_EHDR ((const struct elf32_hdr *) VSYSCALL32_BASE)
+-
+-#define VSYSCALL32_VSYSCALL ((void *)VSYSCALL32_BASE + 0x400)
+-#define VSYSCALL32_SYSEXIT ((void *)VSYSCALL32_BASE + 0x410)
+-#define VSYSCALL32_SIGRETURN ((void __user *)VSYSCALL32_BASE + 0x500)
+-#define VSYSCALL32_RTSIGRETURN ((void __user *)VSYSCALL32_BASE + 0x600)
+-#endif
+-
+-#endif
+diff --git a/include/asm-x86/xor_32.h b/include/asm-x86/xor_32.h
+index 23c86ce..a41ef1b 100644
+--- a/include/asm-x86/xor_32.h
++++ b/include/asm-x86/xor_32.h
+@@ -1,6 +1,4 @@
+ /*
+- * include/asm-i386/xor.h
+- *
+ * Optimized RAID-5 checksumming functions for MMX and SSE.
+ *
+ * This program is free software; you can redistribute it and/or modify
+diff --git a/include/asm-x86/xor_64.h b/include/asm-x86/xor_64.h
+index f942fcc..1eee7fc 100644
+--- a/include/asm-x86/xor_64.h
++++ b/include/asm-x86/xor_64.h
+@@ -1,6 +1,4 @@
+ /*
+- * include/asm-x86_64/xor.h
+- *
+ * Optimized RAID-5 checksumming functions for MMX and SSE.
+ *
+ * This program is free software; you can redistribute it and/or modify
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
new file mode 100644
index 0000000..0edf949
@@ -538035,6 +641417,19 @@
unifdef-y += wait.h
unifdef-y += wanrouter.h
unifdef-y += watchdog.h
+diff --git a/include/linux/acpi_pmtmr.h b/include/linux/acpi_pmtmr.h
+index 1d0ef1a..7e3d285 100644
+--- a/include/linux/acpi_pmtmr.h
++++ b/include/linux/acpi_pmtmr.h
+@@ -25,6 +25,8 @@ static inline u32 acpi_pm_read_early(void)
+ return acpi_pm_read_verified() & ACPI_PM_MASK;
+ }
+
++extern void pmtimer_wait(unsigned);
++
+ #else
+
+ static inline u32 acpi_pm_read_early(void)
diff --git a/include/linux/ata.h b/include/linux/ata.h
index e672e80..78bbaca 100644
--- a/include/linux/ata.h
@@ -539029,6 +642424,60 @@
/* This is listed as optional in ATAPI 2.6, but is (curiously)
* missing from Mt. Fuji, Table 57. It _is_ mentioned in Mt. Fuji
* Table 377 as an MMC command for SCSi devices though... Most ATAPI
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index 107787a..85778a4 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -103,7 +103,7 @@ struct clocksource {
+ #define CLOCK_SOURCE_VALID_FOR_HRES 0x20
+
+ /* simplify initialization of mask field */
+-#define CLOCKSOURCE_MASK(bits) (cycle_t)(bits<64 ? ((1ULL<<bits)-1) : -1)
++#define CLOCKSOURCE_MASK(bits) (cycle_t)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
+
+ /**
+ * clocksource_khz2mult - calculates mult from khz and shift
+@@ -215,6 +215,7 @@ static inline void clocksource_calculate_interval(struct clocksource *c,
+
+ /* used to install a new clocksource */
+ extern int clocksource_register(struct clocksource*);
++extern void clocksource_unregister(struct clocksource*);
+ extern struct clocksource* clocksource_get_next(void);
+ extern void clocksource_change_rating(struct clocksource *cs, int rating);
+ extern void clocksource_resume(void);
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 0e69d2c..d38655f 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -191,6 +191,10 @@ asmlinkage long compat_sys_select(int n, compat_ulong_t __user *inp,
+ compat_ulong_t __user *outp, compat_ulong_t __user *exp,
+ struct compat_timeval __user *tvp);
+
++asmlinkage long compat_sys_wait4(compat_pid_t pid,
++ compat_uint_t *stat_addr, int options,
++ struct compat_rusage *ru);
++
+ #define BITS_PER_COMPAT_LONG (8*sizeof(compat_long_t))
+
+ #define BITS_TO_COMPAT_LONGS(bits) \
+@@ -239,6 +243,17 @@ asmlinkage long compat_sys_migrate_pages(compat_pid_t pid,
+ compat_ulong_t maxnode, const compat_ulong_t __user *old_nodes,
+ const compat_ulong_t __user *new_nodes);
+
++extern int compat_ptrace_request(struct task_struct *child,
++ compat_long_t request,
++ compat_ulong_t addr, compat_ulong_t data);
++
++#ifdef __ARCH_WANT_COMPAT_SYS_PTRACE
++extern long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
++ compat_ulong_t addr, compat_ulong_t data);
++asmlinkage long compat_sys_ptrace(compat_long_t request, compat_long_t pid,
++ compat_long_t addr, compat_long_t data);
++#endif /* __ARCH_WANT_COMPAT_SYS_PTRACE */
++
+ /*
+ * epoll (fs/eventpoll.c) compat bits follow ...
+ */
diff --git a/include/linux/compiler-gcc3.h b/include/linux/compiler-gcc3.h
index 2d8c0f4..e5eb795 100644
--- a/include/linux/compiler-gcc3.h
@@ -539110,6 +642559,29 @@
};
struct cn_ctl_entry {
+diff --git a/include/linux/const.h b/include/linux/const.h
+index 07b300b..c22c707 100644
+--- a/include/linux/const.h
++++ b/include/linux/const.h
+@@ -7,13 +7,18 @@
+ * C code. Therefore we cannot annotate them always with
+ * 'UL' and other type specifiers unilaterally. We
+ * use the following macros to deal with this.
++ *
++ * Similarly, _AT() will cast an expression with a type in C, but
++ * leave it unchanged in asm.
+ */
+
+ #ifdef __ASSEMBLY__
+ #define _AC(X,Y) X
++#define _AT(T,X) X
+ #else
+ #define __AC(X,Y) (X##Y)
+ #define _AC(X,Y) __AC(X,Y)
++#define _AT(T,X) ((T)(X))
+ #endif
+
+ #endif /* !(_LINUX_CONST_H) */
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 92f2029..0be8d65 100644
--- a/include/linux/cpu.h
@@ -539164,6 +642636,21 @@
#define hotcpu_notifier(fn, pri) do { (void)(fn); } while (0)
/* These aren't inline functions due to a GCC bug. */
#define register_hotcpu_notifier(nb) ({ (void)(nb); 0; })
+diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
+index 85bd790..7047f58 100644
+--- a/include/linux/cpumask.h
++++ b/include/linux/cpumask.h
+@@ -218,8 +218,8 @@ int __first_cpu(const cpumask_t *srcp);
+ int __next_cpu(int n, const cpumask_t *srcp);
+ #define next_cpu(n, src) __next_cpu((n), &(src))
+ #else
+-#define first_cpu(src) 0
+-#define next_cpu(n, src) 1
++#define first_cpu(src) ({ (void)(src); 0; })
++#define next_cpu(n, src) ({ (void)(src); 1; })
+ #endif
+
+ #define cpumask_of_cpu(cpu) \
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index f3110eb..5e02d1b 100644
--- a/include/linux/crypto.h
@@ -540571,6 +644058,18 @@
void dma_chan_cleanup(struct kref *kref);
+diff --git a/include/linux/elf.h b/include/linux/elf.h
+index 576e83b..7ceb24d 100644
+--- a/include/linux/elf.h
++++ b/include/linux/elf.h
+@@ -355,6 +355,7 @@ typedef struct elf64_shdr {
+ #define NT_AUXV 6
+ #define NT_PRXFPREG 0x46e62b7f /* copied from gdb5.1/include/elf/common.h */
+ #define NT_PPC_VMX 0x100 /* PowerPC Altivec/VMX registers */
++#define NT_386_TLS 0x200 /* i386 TLS slots (struct user_desc) */
+
+
+ /* Note header in a PT_NOTE section */
diff --git a/include/linux/elfnote.h b/include/linux/elfnote.h
index e831759..278e3ef 100644
--- a/include/linux/elfnote.h
@@ -541610,6 +645109,29 @@
void hid_input_field(struct hid_device *hid, struct hid_field *field, __u8 *data, int interrupt);
void hid_output_report(struct hid_report *report, __u8 *data);
void hid_free_device(struct hid_device *device);
+diff --git a/include/linux/hpet.h b/include/linux/hpet.h
+index 707f7cb..9cd94bf 100644
+--- a/include/linux/hpet.h
++++ b/include/linux/hpet.h
+@@ -64,7 +64,7 @@ struct hpet {
+ */
+
+ #define Tn_INT_ROUTE_CAP_MASK (0xffffffff00000000ULL)
+-#define Tn_INI_ROUTE_CAP_SHIFT (32UL)
++#define Tn_INT_ROUTE_CAP_SHIFT (32UL)
+ #define Tn_FSB_INT_DELCAP_MASK (0x8000UL)
+ #define Tn_FSB_INT_DELCAP_SHIFT (15)
+ #define Tn_FSB_EN_CNF_MASK (0x4000UL)
+@@ -115,9 +115,6 @@ static inline void hpet_reserve_timer(struct hpet_data *hd, int timer)
+ }
+
+ int hpet_alloc(struct hpet_data *);
+-int hpet_register(struct hpet_task *, int);
+-int hpet_unregister(struct hpet_task *);
+-int hpet_control(struct hpet_task *, unsigned int, unsigned long);
+
+ #endif /* __KERNEL__ */
+
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 7a9398e..49067f1 100644
--- a/include/linux/hrtimer.h
@@ -543818,6 +647340,16 @@
/* Functions marked as __devexit may be discarded at kernel link time, depending
on config options. Newer versions of binutils detect references from
retained sections to discarded sections and flag an error. Pointers to
+diff --git a/include/linux/init_ohci1394_dma.h b/include/linux/init_ohci1394_dma.h
+new file mode 100644
+index 0000000..3c03a4b
+--- /dev/null
++++ b/include/linux/init_ohci1394_dma.h
+@@ -0,0 +1,4 @@
++#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
++extern int __initdata init_ohci1394_dma_early;
++extern void __init init_ohci1394_dma_on_all_controllers(void);
++#endif
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index cae35b6..e6b3f70 100644
--- a/include/linux/init_task.h
@@ -543950,6 +647482,24 @@
+}
+
+#endif
+diff --git a/include/linux/ioport.h b/include/linux/ioport.h
+index 6187a85..605d237 100644
+--- a/include/linux/ioport.h
++++ b/include/linux/ioport.h
+@@ -8,6 +8,7 @@
+ #ifndef _LINUX_IOPORT_H
+ #define _LINUX_IOPORT_H
+
++#ifndef __ASSEMBLY__
+ #include <linux/compiler.h>
+ #include <linux/types.h>
+ /*
+@@ -153,4 +154,5 @@ extern struct resource * __devm_request_region(struct device *dev,
+ extern void __devm_release_region(struct device *dev, struct resource *parent,
+ resource_size_t start, resource_size_t n);
+
++#endif /* __ASSEMBLY__ */
+ #endif /* _LINUX_IOPORT_H */
diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
index baf2938..2a3bb1b 100644
--- a/include/linux/ioprio.h
@@ -544227,7 +647777,7 @@
# error You lose.
#endif
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
-index 94bc996..a7283c9 100644
+index 94bc996..ff356b2 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -105,8 +105,8 @@ struct user;
@@ -544241,6 +647791,16 @@
#else
# define might_resched() do { } while (0)
#endif
+@@ -194,6 +194,9 @@ static inline int log_buf_read(int idx) { return 0; }
+ static inline int log_buf_copy(char *dest, int idx, int len) { return 0; }
+ #endif
+
++extern void __attribute__((format(printf, 1, 2)))
++ early_printk(const char *fmt, ...);
++
+ unsigned long int_sqrt(unsigned long);
+
+ extern int printk_ratelimit(void);
diff --git a/include/linux/kobject.h b/include/linux/kobject.h
index 4a0d27f..caa3f41 100644
--- a/include/linux/kobject.h
@@ -544519,6 +648079,34 @@
{ return -EINVAL; }
#endif
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 8189158..6168c0a 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -182,6 +182,15 @@ static inline void kretprobe_assert(struct kretprobe_instance *ri,
+ }
+ }
+
++#ifdef CONFIG_KPROBES_SANITY_TEST
++extern int init_test_probes(void);
++#else
++static inline int init_test_probes(void)
++{
++ return 0;
++}
++#endif /* CONFIG_KPROBES_SANITY_TEST */
++
+ extern spinlock_t kretprobe_lock;
+ extern struct mutex kprobe_mutex;
+ extern int arch_prepare_kprobe(struct kprobe *p);
+@@ -227,6 +236,7 @@ void unregister_kretprobe(struct kretprobe *rp);
+
+ void kprobe_flush_task(struct task_struct *tk);
+ void recycle_rp_inst(struct kretprobe_instance *ri, struct hlist_head *head);
++
+ #else /* CONFIG_KPROBES */
+
+ #define __kprobes /**/
diff --git a/include/linux/kref.h b/include/linux/kref.h
index 6fee353..5d18563 100644
--- a/include/linux/kref.h
@@ -544959,6 +648547,64 @@
static inline unsigned int ac_err_mask(u8 status)
{
if (status & (ATA_BUSY | ATA_DRQ))
+diff --git a/include/linux/linkage.h b/include/linux/linkage.h
+index ff203dd..3faf599 100644
+--- a/include/linux/linkage.h
++++ b/include/linux/linkage.h
+@@ -13,6 +13,10 @@
+ #define asmlinkage CPP_ASMLINKAGE
+ #endif
+
++#ifndef asmregparm
++# define asmregparm
++#endif
++
+ #ifndef prevent_tail_call
+ # define prevent_tail_call(ret) do { } while (0)
+ #endif
+@@ -53,6 +57,10 @@
+ .size name, .-name
+ #endif
+
++/* If symbol 'name' is treated as a subroutine (gets called, and returns)
++ * then please use ENDPROC to mark 'name' as STT_FUNC for the benefit of
++ * static analysis tools such as stack depth analyzer.
++ */
+ #ifndef ENDPROC
+ #define ENDPROC(name) \
+ .type name, @function; \
+diff --git a/include/linux/lockd/bind.h b/include/linux/lockd/bind.h
+index 6f1637c..3d25bcd 100644
+--- a/include/linux/lockd/bind.h
++++ b/include/linux/lockd/bind.h
+@@ -33,9 +33,26 @@ struct nlmsvc_binding {
+ extern struct nlmsvc_binding * nlmsvc_ops;
+
+ /*
++ * Similar to nfs_client_initdata, but without the NFS-specific
++ * rpc_ops field.
++ */
++struct nlmclnt_initdata {
++ const char *hostname;
++ const struct sockaddr *address;
++ size_t addrlen;
++ unsigned short protocol;
++ u32 nfs_version;
++};
++
++/*
+ * Functions exported by the lockd module
+ */
+-extern int nlmclnt_proc(struct inode *, int, struct file_lock *);
++
++extern struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init);
++extern void nlmclnt_done(struct nlm_host *host);
++
++extern int nlmclnt_proc(struct nlm_host *host, int cmd,
++ struct file_lock *fl);
+ extern int lockd_up(int proto);
+ extern void lockd_down(void);
+
diff --git a/include/linux/m41t00.h b/include/linux/m41t00.h
deleted file mode 100644
index b423360..0000000
@@ -545016,7 +648662,7 @@
-
-#endif /* _M41T00_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
-index 1b7b95c..1897ca2 100644
+index 1b7b95c..1bba678 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -12,7 +12,6 @@
@@ -545036,6 +648682,37 @@
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
+@@ -1117,9 +1118,21 @@ static inline void vm_stat_account(struct mm_struct *mm,
+ }
+ #endif /* CONFIG_PROC_FS */
+
+-#ifndef CONFIG_DEBUG_PAGEALLOC
++#ifdef CONFIG_DEBUG_PAGEALLOC
++extern int debug_pagealloc_enabled;
++
++extern void kernel_map_pages(struct page *page, int numpages, int enable);
++
++static inline void enable_debug_pagealloc(void)
++{
++ debug_pagealloc_enabled = 1;
++}
++#else
+ static inline void
+ kernel_map_pages(struct page *page, int numpages, int enable) {}
++static inline void enable_debug_pagealloc(void)
++{
++}
+ #endif
+
+ extern struct vm_area_struct *get_gate_vma(struct task_struct *tsk);
+@@ -1145,6 +1158,7 @@ extern int randomize_va_space;
+ #endif
+
+ const char * arch_vma_name(struct vm_area_struct *vma);
++void print_vma_addr(char *prefix, unsigned long rip);
+
+ struct page *sparse_mem_map_populate(unsigned long pnum, int nid);
+ pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
diff --git a/include/linux/module.h b/include/linux/module.h
index 2cbc0b8..ac481e2 100644
--- a/include/linux/module.h
@@ -546541,6 +650218,262 @@
ret = 1;
spin_unlock_irqrestore(&npinfo->rx_lock, flags);
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 2d15d4a..099ddb4 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -196,28 +196,67 @@ struct nfs_inode {
+ #define NFS_INO_STALE (2) /* possible stale inode */
+ #define NFS_INO_ACL_LRU_SET (3) /* Inode is on the LRU list */
+
+-static inline struct nfs_inode *NFS_I(struct inode *inode)
++static inline struct nfs_inode *NFS_I(const struct inode *inode)
+ {
+ return container_of(inode, struct nfs_inode, vfs_inode);
+ }
+-#define NFS_SB(s) ((struct nfs_server *)(s->s_fs_info))
+
+-#define NFS_FH(inode) (&NFS_I(inode)->fh)
+-#define NFS_SERVER(inode) (NFS_SB(inode->i_sb))
+-#define NFS_CLIENT(inode) (NFS_SERVER(inode)->client)
+-#define NFS_PROTO(inode) (NFS_SERVER(inode)->nfs_client->rpc_ops)
+-#define NFS_COOKIEVERF(inode) (NFS_I(inode)->cookieverf)
+-#define NFS_MINATTRTIMEO(inode) \
+- (S_ISDIR(inode->i_mode)? NFS_SERVER(inode)->acdirmin \
+- : NFS_SERVER(inode)->acregmin)
+-#define NFS_MAXATTRTIMEO(inode) \
+- (S_ISDIR(inode->i_mode)? NFS_SERVER(inode)->acdirmax \
+- : NFS_SERVER(inode)->acregmax)
++static inline struct nfs_server *NFS_SB(const struct super_block *s)
++{
++ return (struct nfs_server *)(s->s_fs_info);
++}
++
++static inline struct nfs_fh *NFS_FH(const struct inode *inode)
++{
++ return &NFS_I(inode)->fh;
++}
++
++static inline struct nfs_server *NFS_SERVER(const struct inode *inode)
++{
++ return NFS_SB(inode->i_sb);
++}
++
++static inline struct rpc_clnt *NFS_CLIENT(const struct inode *inode)
++{
++ return NFS_SERVER(inode)->client;
++}
++
++static inline const struct nfs_rpc_ops *NFS_PROTO(const struct inode *inode)
++{
++ return NFS_SERVER(inode)->nfs_client->rpc_ops;
++}
++
++static inline __be32 *NFS_COOKIEVERF(const struct inode *inode)
++{
++ return NFS_I(inode)->cookieverf;
++}
++
++static inline unsigned NFS_MINATTRTIMEO(const struct inode *inode)
++{
++ struct nfs_server *nfss = NFS_SERVER(inode);
++ return S_ISDIR(inode->i_mode) ? nfss->acdirmin : nfss->acregmin;
++}
+
+-#define NFS_FLAGS(inode) (NFS_I(inode)->flags)
+-#define NFS_STALE(inode) (test_bit(NFS_INO_STALE, &NFS_FLAGS(inode)))
++static inline unsigned NFS_MAXATTRTIMEO(const struct inode *inode)
++{
++ struct nfs_server *nfss = NFS_SERVER(inode);
++ return S_ISDIR(inode->i_mode) ? nfss->acdirmax : nfss->acregmax;
++}
+
+-#define NFS_FILEID(inode) (NFS_I(inode)->fileid)
++static inline int NFS_STALE(const struct inode *inode)
++{
++ return test_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
++}
++
++static inline __u64 NFS_FILEID(const struct inode *inode)
++{
++ return NFS_I(inode)->fileid;
++}
++
++static inline void set_nfs_fileid(struct inode *inode, __u64 fileid)
++{
++ NFS_I(inode)->fileid = fileid;
++}
+
+ static inline void nfs_mark_for_revalidate(struct inode *inode)
+ {
+@@ -237,7 +276,7 @@ static inline int nfs_server_capable(struct inode *inode, int cap)
+
+ static inline int NFS_USE_READDIRPLUS(struct inode *inode)
+ {
+- return test_bit(NFS_INO_ADVISE_RDPLUS, &NFS_FLAGS(inode));
++ return test_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags);
+ }
+
+ static inline void nfs_set_verifier(struct dentry * dentry, unsigned long verf)
+@@ -366,6 +405,7 @@ extern const struct inode_operations nfs3_dir_inode_operations;
+ extern const struct file_operations nfs_dir_operations;
+ extern struct dentry_operations nfs_dentry_operations;
+
++extern void nfs_force_lookup_revalidate(struct inode *dir);
+ extern int nfs_instantiate(struct dentry *dentry, struct nfs_fh *fh, struct nfs_fattr *fattr);
+ extern int nfs_may_open(struct inode *inode, struct rpc_cred *cred, int openflags);
+ extern void nfs_access_zap_cache(struct inode *inode);
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 0cac49b..3423c67 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -3,8 +3,12 @@
+
+ #include <linux/list.h>
+ #include <linux/backing-dev.h>
++#include <linux/wait.h>
++
++#include <asm/atomic.h>
+
+ struct nfs_iostats;
++struct nlm_host;
+
+ /*
+ * The nfs_client identifies our client state to the server.
+@@ -14,20 +18,19 @@ struct nfs_client {
+ int cl_cons_state; /* current construction state (-ve: init error) */
+ #define NFS_CS_READY 0 /* ready to be used */
+ #define NFS_CS_INITING 1 /* busy initialising */
+- int cl_nfsversion; /* NFS protocol version */
+ unsigned long cl_res_state; /* NFS resources state */
+ #define NFS_CS_CALLBACK 1 /* - callback started */
+ #define NFS_CS_IDMAP 2 /* - idmap started */
+ #define NFS_CS_RENEWD 3 /* - renewd started */
+- struct sockaddr_in cl_addr; /* server identifier */
++ struct sockaddr_storage cl_addr; /* server identifier */
++ size_t cl_addrlen;
+ char * cl_hostname; /* hostname of server */
+ struct list_head cl_share_link; /* link in global client list */
+ struct list_head cl_superblocks; /* List of nfs_server structs */
+
+ struct rpc_clnt * cl_rpcclient;
+ const struct nfs_rpc_ops *rpc_ops; /* NFS protocol vector */
+- unsigned long retrans_timeo; /* retransmit timeout */
+- unsigned int retrans_count; /* number of retransmit tries */
++ int cl_proto; /* Network transport protocol */
+
+ #ifdef CONFIG_NFS_V4
+ u64 cl_clientid; /* constant */
+@@ -62,7 +65,7 @@ struct nfs_client {
+ /* Our own IP address, as a null-terminated string.
+ * This is used to generate the clientid, and the callback address.
+ */
+- char cl_ipaddr[16];
++ char cl_ipaddr[48];
+ unsigned char cl_id_uniquifier;
+ #endif
+ };
+@@ -78,6 +81,7 @@ struct nfs_server {
+ struct list_head master_link; /* link in master servers list */
+ struct rpc_clnt * client; /* RPC client handle */
+ struct rpc_clnt * client_acl; /* ACL RPC client handle */
++ struct nlm_host *nlm_host; /* NLM client handle */
+ struct nfs_iostats * io_stats; /* I/O statistics */
+ struct backing_dev_info backing_dev_info;
+ atomic_long_t writeback; /* number of writeback pages */
+@@ -110,6 +114,9 @@ struct nfs_server {
+ filesystem */
+ #endif
+ void (*destroy)(struct nfs_server *);
++
++ atomic_t active; /* Keep trace of any activity to this server */
++ wait_queue_head_t active_wq; /* Wait for any activity to stop */
+ };
+
+ /* Server capabilities */
+diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
+index 30dbcc1..a1676e1 100644
+--- a/include/linux/nfs_page.h
++++ b/include/linux/nfs_page.h
+@@ -83,6 +83,7 @@ extern void nfs_pageio_complete(struct nfs_pageio_descriptor *desc);
+ extern void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *, pgoff_t);
+ extern int nfs_wait_on_request(struct nfs_page *);
+ extern void nfs_unlock_request(struct nfs_page *req);
++extern int nfs_set_page_tag_locked(struct nfs_page *req);
+ extern void nfs_clear_page_tag_locked(struct nfs_page *req);
+
+
+@@ -95,18 +96,6 @@ nfs_lock_request_dontget(struct nfs_page *req)
+ return !test_and_set_bit(PG_BUSY, &req->wb_flags);
+ }
+
+-/*
+- * Lock the page of an asynchronous request and take a reference
+- */
+-static inline int
+-nfs_lock_request(struct nfs_page *req)
+-{
+- if (test_and_set_bit(PG_BUSY, &req->wb_flags))
+- return 0;
+- kref_get(&req->wb_kref);
+- return 1;
+-}
+-
+ /**
+ * nfs_list_add_request - Insert a request into a list
+ * @req: request
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index daab252..f301d0b 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -666,16 +666,17 @@ struct nfs4_rename_res {
+ struct nfs_fattr * new_fattr;
+ };
+
++#define NFS4_SETCLIENTID_NAMELEN (56)
+ struct nfs4_setclientid {
+- const nfs4_verifier * sc_verifier; /* request */
++ const nfs4_verifier * sc_verifier;
+ unsigned int sc_name_len;
+- char sc_name[48]; /* request */
+- u32 sc_prog; /* request */
++ char sc_name[NFS4_SETCLIENTID_NAMELEN];
++ u32 sc_prog;
+ unsigned int sc_netid_len;
+- char sc_netid[4]; /* request */
++ char sc_netid[RPCBIND_MAXNETIDLEN];
+ unsigned int sc_uaddr_len;
+- char sc_uaddr[24]; /* request */
+- u32 sc_cb_ident; /* request */
++ char sc_uaddr[RPCBIND_MAXUADDRLEN];
++ u32 sc_cb_ident;
+ };
+
+ struct nfs4_statfs_arg {
+@@ -773,7 +774,7 @@ struct nfs_access_entry;
+ * RPC procedure vector for NFSv2/NFSv3 demuxing
+ */
+ struct nfs_rpc_ops {
+- int version; /* Protocol version */
++ u32 version; /* Protocol version */
+ struct dentry_operations *dentry_ops;
+ const struct inode_operations *dir_inode_ops;
+ const struct inode_operations *file_inode_ops;
+@@ -816,11 +817,11 @@ struct nfs_rpc_ops {
+ struct nfs_pathconf *);
+ int (*set_capabilities)(struct nfs_server *, struct nfs_fh *);
+ __be32 *(*decode_dirent)(__be32 *, struct nfs_entry *, int plus);
+- void (*read_setup) (struct nfs_read_data *);
++ void (*read_setup) (struct nfs_read_data *, struct rpc_message *);
+ int (*read_done) (struct rpc_task *, struct nfs_read_data *);
+- void (*write_setup) (struct nfs_write_data *, int how);
++ void (*write_setup) (struct nfs_write_data *, struct rpc_message *);
+ int (*write_done) (struct rpc_task *, struct nfs_write_data *);
+- void (*commit_setup) (struct nfs_write_data *, int how);
++ void (*commit_setup) (struct nfs_write_data *, struct rpc_message *);
+ int (*commit_done) (struct rpc_task *, struct nfs_write_data *);
+ int (*file_open) (struct inode *, struct file *);
+ int (*file_release) (struct inode *, struct file *);
diff --git a/include/linux/nl80211.h b/include/linux/nl80211.h
index 538ee1d..9fecf90 100644
--- a/include/linux/nl80211.h
@@ -546787,7 +650720,7 @@
/* PCI Setting Record (Type 0) */
struct hpp_type0 {
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
-index 7f22151..c695313 100644
+index 7f22151..41f6f28 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -1943,6 +1943,7 @@
@@ -546808,17 +650741,24 @@
#define PCI_VENDOR_ID_VITESSE 0x1725
#define PCI_DEVICE_ID_VITESSE_VSC7174 0x7174
-@@ -2078,6 +2082,9 @@
+@@ -2078,6 +2082,16 @@
#define PCI_DEVICE_ID_ALTIMA_AC9100 0x03ea
#define PCI_DEVICE_ID_ALTIMA_AC1003 0x03eb
+#define PCI_VENDOR_ID_BELKIN 0x1799
+#define PCI_DEVICE_ID_BELKIN_F5D7010V7 0x701f
+
++#define PCI_VENDOR_ID_RDC 0x17f3
++#define PCI_DEVICE_ID_RDC_R6020 0x6020
++#define PCI_DEVICE_ID_RDC_R6030 0x6030
++#define PCI_DEVICE_ID_RDC_R6040 0x6040
++#define PCI_DEVICE_ID_RDC_R6060 0x6060
++#define PCI_DEVICE_ID_RDC_R6061 0x6061
++
#define PCI_VENDOR_ID_LENOVO 0x17aa
#define PCI_VENDOR_ID_ARECA 0x17d3
-@@ -2106,6 +2113,8 @@
+@@ -2106,6 +2120,8 @@
#define PCI_DEVICE_ID_HERC_WIN 0x5732
#define PCI_DEVICE_ID_HERC_UNI 0x5832
@@ -546907,6 +650847,41 @@
+#endif /* CONFIG_SMP */
+
+#endif /* __LINUX_PCOUNTER_H */
+diff --git a/include/linux/percpu.h b/include/linux/percpu.h
+index 926adaa..00412bb 100644
+--- a/include/linux/percpu.h
++++ b/include/linux/percpu.h
+@@ -9,6 +9,30 @@
+
+ #include <asm/percpu.h>
+
++#ifndef PER_CPU_ATTRIBUTES
++#define PER_CPU_ATTRIBUTES
++#endif
++
++#ifdef CONFIG_SMP
++#define DEFINE_PER_CPU(type, name) \
++ __attribute__((__section__(".data.percpu"))) \
++ PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
++
++#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
++ __attribute__((__section__(".data.percpu.shared_aligned"))) \
++ PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name \
++ ____cacheline_aligned_in_smp
++#else
++#define DEFINE_PER_CPU(type, name) \
++ PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
++
++#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
++ DEFINE_PER_CPU(type, name)
++#endif
++
++#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
++#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
++
+ /* Enough to cover all DEFINE_PER_CPUs in kernel, including modules. */
+ #ifndef PERCPU_ENOUGH_ROOM
+ #ifdef CONFIG_MODULES
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index 919af93..3276135 100644
--- a/include/linux/pkt_sched.h
@@ -546959,6 +650934,92 @@
#else
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 3ea5750..515bff0 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -129,6 +129,81 @@ int generic_ptrace_pokedata(struct task_struct *tsk, long addr, long data);
+ #define force_successful_syscall_return() do { } while (0)
+ #endif
+
++/*
++ * <asm/ptrace.h> should define the following things inside #ifdef __KERNEL__.
++ *
++ * These do-nothing inlines are used when the arch does not
++ * implement single-step. The kerneldoc comments are here
++ * to document the interface for all arch definitions.
++ */
++
++#ifndef arch_has_single_step
++/**
++ * arch_has_single_step - does this CPU support user-mode single-step?
++ *
++ * If this is defined, then there must be function declarations or
++ * inlines for user_enable_single_step() and user_disable_single_step().
++ * arch_has_single_step() should evaluate to nonzero iff the machine
++ * supports instruction single-step for user mode.
++ * It can be a constant or it can test a CPU feature bit.
++ */
++#define arch_has_single_step() (0)
++
++/**
++ * user_enable_single_step - single-step in user-mode task
++ * @task: either current or a task stopped in %TASK_TRACED
++ *
++ * This can only be called when arch_has_single_step() has returned nonzero.
++ * Set @task so that when it returns to user mode, it will trap after the
++ * next single instruction executes. If arch_has_block_step() is defined,
++ * this must clear the effects of user_enable_block_step() too.
++ */
++static inline void user_enable_single_step(struct task_struct *task)
++{
++ BUG(); /* This can never be called. */
++}
++
++/**
++ * user_disable_single_step - cancel user-mode single-step
++ * @task: either current or a task stopped in %TASK_TRACED
++ *
++ * Clear @task of the effects of user_enable_single_step() and
++ * user_enable_block_step(). This can be called whether or not either
++ * of those was ever called on @task, and even if arch_has_single_step()
++ * returned zero.
++ */
++static inline void user_disable_single_step(struct task_struct *task)
++{
++}
++#endif /* arch_has_single_step */
++
++#ifndef arch_has_block_step
++/**
++ * arch_has_block_step - does this CPU support user-mode block-step?
++ *
++ * If this is defined, then there must be a function declaration or inline
++ * for user_enable_block_step(), and arch_has_single_step() must be defined
++ * too. arch_has_block_step() should evaluate to nonzero iff the machine
++ * supports step-until-branch for user mode. It can be a constant or it
++ * can test a CPU feature bit.
++ */
++#define arch_has_block_step() (0)
++
++/**
++ * user_enable_block_step - step until branch in user-mode task
++ * @task: either current or a task stopped in %TASK_TRACED
++ *
++ * This can only be called when arch_has_block_step() has returned nonzero,
++ * and will never be called when single-instruction stepping is being used.
++ * Set @task so that when it returns to user mode, it will trap after the
++ * next branch or trap taken.
++ */
++static inline void user_enable_block_step(struct task_struct *task)
++{
++ BUG(); /* This can never be called. */
++}
++#endif /* arch_has_block_step */
++
+ #endif
+
+ #endif
diff --git a/include/linux/rcuclassic.h b/include/linux/rcuclassic.h
new file mode 100644
index 0000000..4d66242
@@ -547567,6 +651628,380 @@
+
+#endif /* __KERNEL__ */
+#endif /* __LINUX_RCUPREEMPT_TRACE_H */
+diff --git a/include/linux/regset.h b/include/linux/regset.h
+new file mode 100644
+index 0000000..8abee65
+--- /dev/null
++++ b/include/linux/regset.h
+@@ -0,0 +1,368 @@
++/*
++ * User-mode machine state access
++ *
++ * Copyright (C) 2007 Red Hat, Inc. All rights reserved.
++ *
++ * This copyrighted material is made available to anyone wishing to use,
++ * modify, copy, or redistribute it subject to the terms and conditions
++ * of the GNU General Public License v.2.
++ *
++ * Red Hat Author: Roland McGrath.
++ */
++
++#ifndef _LINUX_REGSET_H
++#define _LINUX_REGSET_H 1
++
++#include <linux/compiler.h>
++#include <linux/types.h>
++#include <linux/uaccess.h>
++struct task_struct;
++struct user_regset;
++
++
++/**
++ * user_regset_active_fn - type of @active function in &struct user_regset
++ * @target: thread being examined
++ * @regset: regset being examined
++ *
++ * Return -%ENODEV if not available on the hardware found.
++ * Return %0 if no interesting state in this thread.
++ * Return >%0 number of @size units of interesting state.
++ * Any get call fetching state beyond that number will
++ * see the default initialization state for this data,
++ * so a caller that knows what the default state is need
++ * not copy it all out.
++ * This call is optional; the pointer is %NULL if there
++ * is no inexpensive check to yield a value < @n.
++ */
++typedef int user_regset_active_fn(struct task_struct *target,
++ const struct user_regset *regset);
++
++/**
++ * user_regset_get_fn - type of @get function in &struct user_regset
++ * @target: thread being examined
++ * @regset: regset being examined
++ * @pos: offset into the regset data to access, in bytes
++ * @count: amount of data to copy, in bytes
++ * @kbuf: if not %NULL, a kernel-space pointer to copy into
++ * @ubuf: if @kbuf is %NULL, a user-space pointer to copy into
++ *
++ * Fetch register values. Return %0 on success; -%EIO or -%ENODEV
++ * are usual failure returns. The @pos and @count values are in
++ * bytes, but must be properly aligned. If @kbuf is non-null, that
++ * buffer is used and @ubuf is ignored. If @kbuf is %NULL, then
++ * ubuf gives a userland pointer to access directly, and an -%EFAULT
++ * return value is possible.
++ */
++typedef int user_regset_get_fn(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ void *kbuf, void __user *ubuf);
++
++/**
++ * user_regset_set_fn - type of @set function in &struct user_regset
++ * @target: thread being examined
++ * @regset: regset being examined
++ * @pos: offset into the regset data to access, in bytes
++ * @count: amount of data to copy, in bytes
++ * @kbuf: if not %NULL, a kernel-space pointer to copy from
++ * @ubuf: if @kbuf is %NULL, a user-space pointer to copy from
++ *
++ * Store register values. Return %0 on success; -%EIO or -%ENODEV
++ * are usual failure returns. The @pos and @count values are in
++ * bytes, but must be properly aligned. If @kbuf is non-null, that
++ * buffer is used and @ubuf is ignored. If @kbuf is %NULL, then
++ * ubuf gives a userland pointer to access directly, and an -%EFAULT
++ * return value is possible.
++ */
++typedef int user_regset_set_fn(struct task_struct *target,
++ const struct user_regset *regset,
++ unsigned int pos, unsigned int count,
++ const void *kbuf, const void __user *ubuf);
++
++/**
++ * user_regset_writeback_fn - type of @writeback function in &struct user_regset
++ * @target: thread being examined
++ * @regset: regset being examined
++ * @immediate: zero if writeback at completion of next context switch is OK
++ *
++ * This call is optional; usually the pointer is %NULL. When
++ * provided, there is some user memory associated with this regset's
++ * hardware, such as memory backing cached register data on register
++ * window machines; the regset's data controls what user memory is
++ * used (e.g. via the stack pointer value).
++ *
++ * Write register data back to user memory. If the @immediate flag
++ * is nonzero, it must be written to the user memory so uaccess or
++ * access_process_vm() can see it when this call returns; if zero,
++ * then it must be written back by the time the task completes a
++ * context switch (as synchronized with wait_task_inactive()).
++ * Return %0 on success or if there was nothing to do, -%EFAULT for
++ * a memory problem (bad stack pointer or whatever), or -%EIO for a
++ * hardware problem.
++ */
++typedef int user_regset_writeback_fn(struct task_struct *target,
++ const struct user_regset *regset,
++ int immediate);
++
++/**
++ * struct user_regset - accessible thread CPU state
++ * @n: Number of slots (registers).
++ * @size: Size in bytes of a slot (register).
++ * @align: Required alignment, in bytes.
++ * @bias: Bias from natural indexing.
++ * @core_note_type: ELF note @n_type value used in core dumps.
++ * @get: Function to fetch values.
++ * @set: Function to store values.
++ * @active: Function to report if regset is active, or %NULL.
++ * @writeback: Function to write data back to user memory, or %NULL.
++ *
++ * This data structure describes a machine resource we call a register set.
++ * This is part of the state of an individual thread, not necessarily
++ * actual CPU registers per se. A register set consists of a number of
++ * similar slots, given by @n. Each slot is @size bytes, and aligned to
++ * @align bytes (which is at least @size).
++ *
++ * These functions must be called only on the current thread or on a
++ * thread that is in %TASK_STOPPED or %TASK_TRACED state, that we are
++ * guaranteed will not be woken up and return to user mode, and that we
++ * have called wait_task_inactive() on. (The target thread always might
++ * wake up for SIGKILL while these functions are working, in which case
++ * that thread's user_regset state might be scrambled.)
++ *
++ * The @pos argument must be aligned according to @align; the @count
++ * argument must be a multiple of @size. These functions are not
++ * responsible for checking for invalid arguments.
++ *
++ * When there is a natural value to use as an index, @bias gives the
++ * difference between the natural index and the slot index for the
++ * register set. For example, x86 GDT segment descriptors form a regset;
++ * the segment selector produces a natural index, but only a subset of
++ * that index space is available as a regset (the TLS slots); subtracting
++ * @bias from a segment selector index value computes the regset slot.
++ *
++ * If nonzero, @core_note_type gives the n_type field (NT_* value)
++ * of the core file note in which this regset's data appears.
++ * NT_PRSTATUS is a special case in that the regset data starts at
++ * offsetof(struct elf_prstatus, pr_reg) into the note data; that is
++ * part of the per-machine ELF formats userland knows about. In
++ * other cases, the core file note contains exactly the whole regset
++ * (@n * @size) and nothing else. The core file note is normally
++ * omitted when there is an @active function and it returns zero.
++ */
++struct user_regset {
++ user_regset_get_fn *get;
++ user_regset_set_fn *set;
++ user_regset_active_fn *active;
++ user_regset_writeback_fn *writeback;
++ unsigned int n;
++ unsigned int size;
++ unsigned int align;
++ unsigned int bias;
++ unsigned int core_note_type;
++};
++
++/**
++ * struct user_regset_view - available regsets
++ * @name: Identifier, e.g. UTS_MACHINE string.
++ * @regsets: Array of @n regsets available in this view.
++ * @n: Number of elements in @regsets.
++ * @e_machine: ELF header @e_machine %EM_* value written in core dumps.
++ * @e_flags: ELF header @e_flags value written in core dumps.
++ * @ei_osabi: ELF header @e_ident[%EI_OSABI] value written in core dumps.
++ *
++ * A regset view is a collection of regsets (&struct user_regset,
++ * above). This describes all the state of a thread that can be seen
++ * from a given architecture/ABI environment. More than one view might
++ * refer to the same &struct user_regset, or more than one regset
++ * might refer to the same machine-specific state in the thread. For
++ * example, a 32-bit thread's state could be examined from the 32-bit
++ * view or from the 64-bit view. Either method reaches the same thread
++ * register state, doing appropriate widening or truncation.
++ */
++struct user_regset_view {
++ const char *name;
++ const struct user_regset *regsets;
++ unsigned int n;
++ u32 e_flags;
++ u16 e_machine;
++ u8 ei_osabi;
++};
++
++/*
++ * This is documented here rather than at the definition sites because its
++ * implementation is machine-dependent but its interface is universal.
++ */
++/**
++ * task_user_regset_view - Return the process's native regset view.
++ * @tsk: a thread of the process in question
++ *
++ * Return the &struct user_regset_view that is native for the given process.
++ * For example, what it would access when it called ptrace().
++ * Throughout the life of the process, this only changes at exec.
++ */
++const struct user_regset_view *task_user_regset_view(struct task_struct *tsk);
++
++
++/*
++ * These are helpers for writing regset get/set functions in arch code.
++ * Because @start_pos and @end_pos are always compile-time constants,
++ * these are inlined into very little code though they look large.
++ *
++ * Use one or more calls sequentially for each chunk of regset data stored
++ * contiguously in memory. Call with constants for @start_pos and @end_pos,
++ * giving the range of byte positions in the regset that data corresponds
++ * to; @end_pos can be -1 if this chunk is at the end of the regset layout.
++ * Each call updates the arguments to point past its chunk.
++ */
++
++static inline int user_regset_copyout(unsigned int *pos, unsigned int *count,
++ void **kbuf,
++ void __user **ubuf, const void *data,
++ const int start_pos, const int end_pos)
++{
++ if (*count == 0)
++ return 0;
++ BUG_ON(*pos < start_pos);
++ if (end_pos < 0 || *pos < end_pos) {
++ unsigned int copy = (end_pos < 0 ? *count
++ : min(*count, end_pos - *pos));
++ data += *pos - start_pos;
++ if (*kbuf) {
++ memcpy(*kbuf, data, copy);
++ *kbuf += copy;
++ } else if (__copy_to_user(*ubuf, data, copy))
++ return -EFAULT;
++ else
++ *ubuf += copy;
++ *pos += copy;
++ *count -= copy;
++ }
++ return 0;
++}
++
++static inline int user_regset_copyin(unsigned int *pos, unsigned int *count,
++ const void **kbuf,
++ const void __user **ubuf, void *data,
++ const int start_pos, const int end_pos)
++{
++ if (*count == 0)
++ return 0;
++ BUG_ON(*pos < start_pos);
++ if (end_pos < 0 || *pos < end_pos) {
++ unsigned int copy = (end_pos < 0 ? *count
++ : min(*count, end_pos - *pos));
++ data += *pos - start_pos;
++ if (*kbuf) {
++ memcpy(data, *kbuf, copy);
++ *kbuf += copy;
++ } else if (__copy_from_user(data, *ubuf, copy))
++ return -EFAULT;
++ else
++ *ubuf += copy;
++ *pos += copy;
++ *count -= copy;
++ }
++ return 0;
++}
++
++/*
++ * These two parallel the two above, but for portions of a regset layout
++ * that always read as all-zero or for which writes are ignored.
++ */
++static inline int user_regset_copyout_zero(unsigned int *pos,
++ unsigned int *count,
++ void **kbuf, void __user **ubuf,
++ const int start_pos,
++ const int end_pos)
++{
++ if (*count == 0)
++ return 0;
++ BUG_ON(*pos < start_pos);
++ if (end_pos < 0 || *pos < end_pos) {
++ unsigned int copy = (end_pos < 0 ? *count
++ : min(*count, end_pos - *pos));
++ if (*kbuf) {
++ memset(*kbuf, 0, copy);
++ *kbuf += copy;
++ } else if (__clear_user(*ubuf, copy))
++ return -EFAULT;
++ else
++ *ubuf += copy;
++ *pos += copy;
++ *count -= copy;
++ }
++ return 0;
++}
++
++static inline int user_regset_copyin_ignore(unsigned int *pos,
++ unsigned int *count,
++ const void **kbuf,
++ const void __user **ubuf,
++ const int start_pos,
++ const int end_pos)
++{
++ if (*count == 0)
++ return 0;
++ BUG_ON(*pos < start_pos);
++ if (end_pos < 0 || *pos < end_pos) {
++ unsigned int copy = (end_pos < 0 ? *count
++ : min(*count, end_pos - *pos));
++ if (*kbuf)
++ *kbuf += copy;
++ else
++ *ubuf += copy;
++ *pos += copy;
++ *count -= copy;
++ }
++ return 0;
++}
++
++/**
++ * copy_regset_to_user - fetch a thread's user_regset data into user memory
++ * @target: thread to be examined
++ * @view: &struct user_regset_view describing user thread machine state
++ * @setno: index in @view->regsets
++ * @offset: offset into the regset data, in bytes
++ * @size: amount of data to copy, in bytes
++ * @data: user-mode pointer to copy into
++ */
++static inline int copy_regset_to_user(struct task_struct *target,
++ const struct user_regset_view *view,
++ unsigned int setno,
++ unsigned int offset, unsigned int size,
++ void __user *data)
++{
++ const struct user_regset *regset = &view->regsets[setno];
++
++ if (!access_ok(VERIFY_WRITE, data, size))
++ return -EIO;
++
++ return regset->get(target, regset, offset, size, NULL, data);
++}
++
++/**
++ * copy_regset_from_user - store into thread's user_regset data from user memory
++ * @target: thread to be examined
++ * @view: &struct user_regset_view describing user thread machine state
++ * @setno: index in @view->regsets
++ * @offset: offset into the regset data, in bytes
++ * @size: amount of data to copy, in bytes
++ * @data: user-mode pointer to copy from
++ */
++static inline int copy_regset_from_user(struct task_struct *target,
++ const struct user_regset_view *view,
++ unsigned int setno,
++ unsigned int offset, unsigned int size,
++ const void __user *data)
++{
++ const struct user_regset *regset = &view->regsets[setno];
++
++ if (!access_ok(VERIFY_READ, data, size))
++ return -EIO;
++
++ return regset->set(target, regset, offset, size, NULL, data);
++}
++
++
++#endif /* <linux/regset.h> */
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index 4e81836..b014f6b 100644
--- a/include/linux/rtnetlink.h
@@ -547765,7 +652200,7 @@
+
#endif /* _LINUX_SCATTERLIST_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
-index cc14656..2d0546e 100644
+index cc14656..9d47976 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -27,6 +27,7 @@
@@ -547982,26 +652417,55 @@
int sched_nr_latency_handler(struct ctl_table *table, int write,
struct file *file, void __user *buffer, size_t *length,
-@@ -1850,7 +1905,18 @@ static inline int need_resched(void)
+@@ -1850,29 +1905,33 @@ static inline int need_resched(void)
* cond_resched_lock() will drop the spinlock before scheduling,
* cond_resched_softirq() will enable bhs before scheduling.
*/
-extern int cond_resched(void);
+-extern int cond_resched_lock(spinlock_t * lock);
+-extern int cond_resched_softirq(void);
+-
+-/*
+- * Does a critical section need to be broken due to another
+- * task waiting?:
+- */
+-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+-# define need_lockbreak(lock) ((lock)->break_lock)
+#ifdef CONFIG_PREEMPT
+static inline int cond_resched(void)
+{
+ return 0;
+}
-+#else
+ #else
+-# define need_lockbreak(lock) 0
+extern int _cond_resched(void);
+static inline int cond_resched(void)
+{
+ return _cond_resched();
+}
+ #endif
++extern int cond_resched_lock(spinlock_t * lock);
++extern int cond_resched_softirq(void);
+
+ /*
+ * Does a critical section need to be broken due to another
+- * task waiting or preemption being signalled:
++ * task waiting?: (technically does not depend on CONFIG_PREEMPT,
++ * but a general need for low latency)
+ */
+-static inline int lock_need_resched(spinlock_t *lock)
++static inline int spin_needbreak(spinlock_t *lock)
+ {
+- if (need_lockbreak(lock) || need_resched())
+- return 1;
++#ifdef CONFIG_PREEMPT
++ return spin_is_contended(lock);
++#else
+ return 0;
+#endif
- extern int cond_resched_lock(spinlock_t * lock);
- extern int cond_resched_softirq(void);
+ }
+ /*
diff --git a/include/linux/security.h b/include/linux/security.h
index ac05083..d249742 100644
--- a/include/linux/security.h
@@ -548224,6 +652688,19 @@
extern void skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to);
extern void skb_split(struct sk_buff *skb,
struct sk_buff *skb1, const u32 len);
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index c25e66b..55232cc 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -78,6 +78,8 @@ int on_each_cpu(void (*func) (void *info), void *info, int retry, int wait);
+ */
+ void smp_prepare_boot_cpu(void);
+
++extern unsigned int setup_max_cpus;
++
+ #else /* !SMP */
+
+ /*
diff --git a/include/linux/smp_lock.h b/include/linux/smp_lock.h
index 58962c5..aab3a4c 100644
--- a/include/linux/smp_lock.h
@@ -548320,6 +652797,58 @@
#define PF_TIPC AF_TIPC
#define PF_BLUETOOTH AF_BLUETOOTH
#define PF_IUCV AF_IUCV
+diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
+index c376f3b..1244497 100644
+--- a/include/linux/spinlock.h
++++ b/include/linux/spinlock.h
+@@ -120,6 +120,12 @@ do { \
+
+ #define spin_is_locked(lock) __raw_spin_is_locked(&(lock)->raw_lock)
+
++#ifdef CONFIG_GENERIC_LOCKBREAK
++#define spin_is_contended(lock) ((lock)->break_lock)
++#else
++#define spin_is_contended(lock) __raw_spin_is_contended(&(lock)->raw_lock)
++#endif
++
+ /**
+ * spin_unlock_wait - wait until the spinlock gets unlocked
+ * @lock: the spinlock in question.
+diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
+index f6a3a95..68d88f7 100644
+--- a/include/linux/spinlock_types.h
++++ b/include/linux/spinlock_types.h
+@@ -19,7 +19,7 @@
+
+ typedef struct {
+ raw_spinlock_t raw_lock;
+-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
++#ifdef CONFIG_GENERIC_LOCKBREAK
+ unsigned int break_lock;
+ #endif
+ #ifdef CONFIG_DEBUG_SPINLOCK
+@@ -35,7 +35,7 @@ typedef struct {
+
+ typedef struct {
+ raw_rwlock_t raw_lock;
+-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
++#ifdef CONFIG_GENERIC_LOCKBREAK
+ unsigned int break_lock;
+ #endif
+ #ifdef CONFIG_DEBUG_SPINLOCK
+diff --git a/include/linux/spinlock_up.h b/include/linux/spinlock_up.h
+index ea54c4c..938234c 100644
+--- a/include/linux/spinlock_up.h
++++ b/include/linux/spinlock_up.h
+@@ -64,6 +64,8 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+ # define __raw_spin_trylock(lock) ({ (void)(lock); 1; })
+ #endif /* DEBUG_SPINLOCK */
+
++#define __raw_spin_is_contended(lock) (((void)(lock), 0))
++
+ #define __raw_read_can_lock(lock) (((void)(lock), 1))
+ #define __raw_write_can_lock(lock) (((void)(lock), 1))
+
diff --git a/include/linux/splice.h b/include/linux/splice.h
index 33e447f..528dcb9 100644
--- a/include/linux/splice.h
@@ -548629,6 +653158,308 @@
# define print_stack_trace(trace, spaces) do { } while (0)
#endif
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index d9d5c5a..3e9addc 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -46,6 +46,7 @@ struct rpc_clnt {
+ cl_autobind : 1;/* use getport() */
+
+ struct rpc_rtt * cl_rtt; /* RTO estimator data */
++ const struct rpc_timeout *cl_timeout; /* Timeout strategy */
+
+ int cl_nodelen; /* nodename length */
+ char cl_nodename[UNX_MAXNODENAME];
+@@ -54,6 +55,7 @@ struct rpc_clnt {
+ struct dentry * cl_dentry; /* inode */
+ struct rpc_clnt * cl_parent; /* Points to parent of clones */
+ struct rpc_rtt cl_rtt_default;
++ struct rpc_timeout cl_timeout_default;
+ struct rpc_program * cl_program;
+ char cl_inline_name[32];
+ };
+@@ -99,7 +101,7 @@ struct rpc_create_args {
+ struct sockaddr *address;
+ size_t addrsize;
+ struct sockaddr *saddress;
+- struct rpc_timeout *timeout;
++ const struct rpc_timeout *timeout;
+ char *servername;
+ struct rpc_program *program;
+ u32 version;
+@@ -123,11 +125,10 @@ void rpc_shutdown_client(struct rpc_clnt *);
+ void rpc_release_client(struct rpc_clnt *);
+
+ int rpcb_register(u32, u32, int, unsigned short, int *);
+-int rpcb_getport_sync(struct sockaddr_in *, __u32, __u32, int);
++int rpcb_getport_sync(struct sockaddr_in *, u32, u32, int);
+ void rpcb_getport_async(struct rpc_task *);
+
+-void rpc_call_setup(struct rpc_task *, struct rpc_message *, int);
+-
++void rpc_call_start(struct rpc_task *);
+ int rpc_call_async(struct rpc_clnt *clnt, struct rpc_message *msg,
+ int flags, const struct rpc_call_ops *tk_ops,
+ void *calldata);
+@@ -142,7 +143,7 @@ void rpc_setbufsize(struct rpc_clnt *, unsigned int, unsigned int);
+ size_t rpc_max_payload(struct rpc_clnt *);
+ void rpc_force_rebind(struct rpc_clnt *);
+ size_t rpc_peeraddr(struct rpc_clnt *, struct sockaddr *, size_t);
+-char * rpc_peeraddr2str(struct rpc_clnt *, enum rpc_display_format_t);
++const char *rpc_peeraddr2str(struct rpc_clnt *, enum rpc_display_format_t);
+
+ #endif /* __KERNEL__ */
+ #endif /* _LINUX_SUNRPC_CLNT_H */
+diff --git a/include/linux/sunrpc/msg_prot.h b/include/linux/sunrpc/msg_prot.h
+index c4beb57..70df4f1 100644
+--- a/include/linux/sunrpc/msg_prot.h
++++ b/include/linux/sunrpc/msg_prot.h
+@@ -152,5 +152,44 @@ typedef __be32 rpc_fraghdr;
+ */
+ #define RPCBIND_MAXNETIDLEN (4u)
+
++/*
++ * Universal addresses are introduced in RFC 1833 and further spelled
++ * out in RFC 3530. RPCBIND_MAXUADDRLEN defines a maximum byte length
++ * of a universal address for use in allocating buffers and character
++ * arrays.
++ *
++ * Quoting RFC 3530, section 2.2:
++ *
++ * For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the
++ * US-ASCII string:
++ *
++ * h1.h2.h3.h4.p1.p2
++ *
++ * The prefix, "h1.h2.h3.h4", is the standard textual form for
++ * representing an IPv4 address, which is always four octets long.
++ * Assuming big-endian ordering, h1, h2, h3, and h4, are respectively,
++ * the first through fourth octets each converted to ASCII-decimal.
++ * Assuming big-endian ordering, p1 and p2 are, respectively, the first
++ * and second octets each converted to ASCII-decimal. For example, if a
++ * host, in big-endian order, has an address of 0x0A010307 and there is
++ * a service listening on, in big endian order, port 0x020F (decimal
++ * 527), then the complete universal address is "10.1.3.7.2.15".
++ *
++ * ...
++ *
++ * For TCP over IPv6 and for UDP over IPv6, the format of r_addr is the
++ * US-ASCII string:
++ *
++ * x1:x2:x3:x4:x5:x6:x7:x8.p1.p2
++ *
++ * The suffix "p1.p2" is the service port, and is computed the same way
++ * as with universal addresses for TCP and UDP over IPv4. The prefix,
++ * "x1:x2:x3:x4:x5:x6:x7:x8", is the standard textual form for
++ * representing an IPv6 address as defined in Section 2.2 of [RFC2373].
++ * Additionally, the two alternative forms specified in Section 2.2 of
++ * [RFC2373] are also acceptable.
++ */
++#define RPCBIND_MAXUADDRLEN (56u)
++
+ #endif /* __KERNEL__ */
+ #endif /* _LINUX_SUNRPC_MSGPROT_H_ */
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index 8ea077d..ce3d1b1 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -56,8 +56,6 @@ struct rpc_task {
+ __u8 tk_garb_retry;
+ __u8 tk_cred_retry;
+
+- unsigned long tk_cookie; /* Cookie for batching tasks */
+-
+ /*
+ * timeout_fn to be executed by timer bottom half
+ * callback to be executed after waking up
+@@ -78,7 +76,6 @@ struct rpc_task {
+ struct timer_list tk_timer; /* kernel timer */
+ unsigned long tk_timeout; /* timeout for rpc_sleep() */
+ unsigned short tk_flags; /* misc flags */
+- unsigned char tk_priority : 2;/* Task priority */
+ unsigned long tk_runstate; /* Task run status */
+ struct workqueue_struct *tk_workqueue; /* Normally rpciod, but could
+ * be any workqueue
+@@ -94,6 +91,9 @@ struct rpc_task {
+ unsigned long tk_start; /* RPC task init timestamp */
+ long tk_rtt; /* round-trip time (jiffies) */
+
++ pid_t tk_owner; /* Process id for batching tasks */
++ unsigned char tk_priority : 2;/* Task priority */
++
+ #ifdef RPC_DEBUG
+ unsigned short tk_pid; /* debugging aid */
+ #endif
+@@ -117,6 +117,15 @@ struct rpc_call_ops {
+ void (*rpc_release)(void *);
+ };
+
++struct rpc_task_setup {
++ struct rpc_task *task;
++ struct rpc_clnt *rpc_client;
++ const struct rpc_message *rpc_message;
++ const struct rpc_call_ops *callback_ops;
++ void *callback_data;
++ unsigned short flags;
++ signed char priority;
++};
+
+ /*
+ * RPC task flags
+@@ -180,10 +189,10 @@ struct rpc_call_ops {
+ * Note: if you change these, you must also change
+ * the task initialization definitions below.
+ */
+-#define RPC_PRIORITY_LOW 0
+-#define RPC_PRIORITY_NORMAL 1
+-#define RPC_PRIORITY_HIGH 2
+-#define RPC_NR_PRIORITY (RPC_PRIORITY_HIGH+1)
++#define RPC_PRIORITY_LOW (-1)
++#define RPC_PRIORITY_NORMAL (0)
++#define RPC_PRIORITY_HIGH (1)
++#define RPC_NR_PRIORITY (1 + RPC_PRIORITY_HIGH - RPC_PRIORITY_LOW)
+
+ /*
+ * RPC synchronization objects
+@@ -191,7 +200,7 @@ struct rpc_call_ops {
+ struct rpc_wait_queue {
+ spinlock_t lock;
+ struct list_head tasks[RPC_NR_PRIORITY]; /* task queue for each priority level */
+- unsigned long cookie; /* cookie of last task serviced */
++ pid_t owner; /* process id of last task serviced */
+ unsigned char maxpriority; /* maximum priority (0 if queue is not a priority queue) */
+ unsigned char priority; /* current priority */
+ unsigned char count; /* # task groups remaining serviced so far */
+@@ -208,41 +217,13 @@ struct rpc_wait_queue {
+ * performance of NFS operations such as read/write.
+ */
+ #define RPC_BATCH_COUNT 16
+-
+-#ifndef RPC_DEBUG
+-# define RPC_WAITQ_INIT(var,qname) { \
+- .lock = __SPIN_LOCK_UNLOCKED(var.lock), \
+- .tasks = { \
+- [0] = LIST_HEAD_INIT(var.tasks[0]), \
+- [1] = LIST_HEAD_INIT(var.tasks[1]), \
+- [2] = LIST_HEAD_INIT(var.tasks[2]), \
+- }, \
+- }
+-#else
+-# define RPC_WAITQ_INIT(var,qname) { \
+- .lock = __SPIN_LOCK_UNLOCKED(var.lock), \
+- .tasks = { \
+- [0] = LIST_HEAD_INIT(var.tasks[0]), \
+- [1] = LIST_HEAD_INIT(var.tasks[1]), \
+- [2] = LIST_HEAD_INIT(var.tasks[2]), \
+- }, \
+- .name = qname, \
+- }
+-#endif
+-# define RPC_WAITQ(var,qname) struct rpc_wait_queue var = RPC_WAITQ_INIT(var,qname)
+-
+ #define RPC_IS_PRIORITY(q) ((q)->maxpriority > 0)
+
+ /*
+ * Function prototypes
+ */
+-struct rpc_task *rpc_new_task(struct rpc_clnt *, int flags,
+- const struct rpc_call_ops *ops, void *data);
+-struct rpc_task *rpc_run_task(struct rpc_clnt *clnt, int flags,
+- const struct rpc_call_ops *ops, void *data);
+-void rpc_init_task(struct rpc_task *task, struct rpc_clnt *clnt,
+- int flags, const struct rpc_call_ops *ops,
+- void *data);
++struct rpc_task *rpc_new_task(const struct rpc_task_setup *);
++struct rpc_task *rpc_run_task(const struct rpc_task_setup *);
+ void rpc_put_task(struct rpc_task *);
+ void rpc_exit_task(struct rpc_task *);
+ void rpc_release_calldata(const struct rpc_call_ops *, void *);
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 30b17b3..b3ff9a8 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -120,7 +120,7 @@ struct rpc_xprt {
+ struct kref kref; /* Reference count */
+ struct rpc_xprt_ops * ops; /* transport methods */
+
+- struct rpc_timeout timeout; /* timeout parms */
++ const struct rpc_timeout *timeout; /* timeout parms */
+ struct sockaddr_storage addr; /* server address */
+ size_t addrlen; /* size of server address */
+ int prot; /* IP protocol */
+@@ -183,7 +183,7 @@ struct rpc_xprt {
+ bklog_u; /* backlog queue utilization */
+ } stat;
+
+- char * address_strings[RPC_DISPLAY_MAX];
++ const char *address_strings[RPC_DISPLAY_MAX];
+ };
+
+ struct xprt_create {
+@@ -191,7 +191,6 @@ struct xprt_create {
+ struct sockaddr * srcaddr; /* optional local address */
+ struct sockaddr * dstaddr; /* remote peer address */
+ size_t addrlen;
+- struct rpc_timeout * timeout; /* optional timeout parameters */
+ };
+
+ struct xprt_class {
+@@ -203,11 +202,6 @@ struct xprt_class {
+ };
+
+ /*
+- * Transport operations used by ULPs
+- */
+-void xprt_set_timeout(struct rpc_timeout *to, unsigned int retr, unsigned long incr);
+-
+-/*
+ * Generic internal transport functions
+ */
+ struct rpc_xprt *xprt_create_transport(struct xprt_create *args);
+@@ -245,7 +239,8 @@ void xprt_adjust_cwnd(struct rpc_task *task, int result);
+ struct rpc_rqst * xprt_lookup_rqst(struct rpc_xprt *xprt, __be32 xid);
+ void xprt_complete_rqst(struct rpc_task *task, int copied);
+ void xprt_release_rqst_cong(struct rpc_task *task);
+-void xprt_disconnect(struct rpc_xprt *xprt);
++void xprt_disconnect_done(struct rpc_xprt *xprt);
++void xprt_force_disconnect(struct rpc_xprt *xprt);
+
+ /*
+ * Reserved bit positions in xprt->state
+@@ -256,6 +251,7 @@ void xprt_disconnect(struct rpc_xprt *xprt);
+ #define XPRT_CLOSE_WAIT (3)
+ #define XPRT_BOUND (4)
+ #define XPRT_BINDING (5)
++#define XPRT_CLOSING (6)
+
+ static inline void xprt_set_connected(struct rpc_xprt *xprt)
+ {
+diff --git a/include/linux/suspend.h b/include/linux/suspend.h
+index 4360e08..40280df 100644
+--- a/include/linux/suspend.h
++++ b/include/linux/suspend.h
+@@ -211,9 +211,6 @@ static inline int hibernate(void) { return -ENOSYS; }
+ #ifdef CONFIG_PM_SLEEP
+ void save_processor_state(void);
+ void restore_processor_state(void);
+-struct saved_context;
+-void __save_processor_state(struct saved_context *ctxt);
+-void __restore_processor_state(struct saved_context *ctxt);
+
+ /* kernel/power/main.c */
+ extern struct blocking_notifier_head pm_chain_head;
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index 4f3838a..2c3ce4c 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -6,6 +6,7 @@
+ #include <linux/mmzone.h>
+ #include <linux/list.h>
+ #include <linux/sched.h>
++#include <linux/pagemap.h>
+
+ #include <asm/atomic.h>
+ #include <asm/page.h>
diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
index 4f5047d..89faebf 100644
--- a/include/linux/sysctl.h
@@ -548742,6 +653573,76 @@
int lost_cnt_hint;
int retransmit_cnt_hint;
+diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
+index 9c4ad75..dfbdfb9 100644
+--- a/include/linux/thread_info.h
++++ b/include/linux/thread_info.h
+@@ -42,27 +42,27 @@ extern long do_no_restart_syscall(struct restart_block *parm);
+
+ static inline void set_ti_thread_flag(struct thread_info *ti, int flag)
+ {
+- set_bit(flag,&ti->flags);
++ set_bit(flag, (unsigned long *)&ti->flags);
+ }
+
+ static inline void clear_ti_thread_flag(struct thread_info *ti, int flag)
+ {
+- clear_bit(flag,&ti->flags);
++ clear_bit(flag, (unsigned long *)&ti->flags);
+ }
+
+ static inline int test_and_set_ti_thread_flag(struct thread_info *ti, int flag)
+ {
+- return test_and_set_bit(flag,&ti->flags);
++ return test_and_set_bit(flag, (unsigned long *)&ti->flags);
+ }
+
+ static inline int test_and_clear_ti_thread_flag(struct thread_info *ti, int flag)
+ {
+- return test_and_clear_bit(flag,&ti->flags);
++ return test_and_clear_bit(flag, (unsigned long *)&ti->flags);
+ }
+
+ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
+ {
+- return test_bit(flag,&ti->flags);
++ return test_bit(flag, (unsigned long *)&ti->flags);
+ }
+
+ #define set_thread_flag(flag) \
+diff --git a/include/linux/tick.h b/include/linux/tick.h
+index f4a1395..0fadf95 100644
+--- a/include/linux/tick.h
++++ b/include/linux/tick.h
+@@ -51,8 +51,10 @@ struct tick_sched {
+ unsigned long idle_jiffies;
+ unsigned long idle_calls;
+ unsigned long idle_sleeps;
++ int idle_active;
+ ktime_t idle_entrytime;
+ ktime_t idle_sleeptime;
++ ktime_t idle_lastupdate;
+ ktime_t sleep_length;
+ unsigned long last_jiffies;
+ unsigned long next_jiffies;
+@@ -103,6 +105,8 @@ extern void tick_nohz_stop_sched_tick(void);
+ extern void tick_nohz_restart_sched_tick(void);
+ extern void tick_nohz_update_jiffies(void);
+ extern ktime_t tick_nohz_get_sleep_length(void);
++extern void tick_nohz_stop_idle(int cpu);
++extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
+ # else
+ static inline void tick_nohz_stop_sched_tick(void) { }
+ static inline void tick_nohz_restart_sched_tick(void) { }
+@@ -113,6 +117,8 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
+
+ return len;
+ }
++static inline void tick_nohz_stop_idle(int cpu) { }
++static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return 0; }
+ # endif /* !NO_HZ */
+
+ #endif
diff --git a/include/linux/tifm.h b/include/linux/tifm.h
index 6b3a318..2096b76 100644
--- a/include/linux/tifm.h
@@ -548755,6 +653656,37 @@
void (*eject)(struct tifm_adapter *fm,
struct tifm_dev *sock);
+diff --git a/include/linux/timer.h b/include/linux/timer.h
+index 78cf899..de0e713 100644
+--- a/include/linux/timer.h
++++ b/include/linux/timer.h
+@@ -5,7 +5,7 @@
+ #include <linux/ktime.h>
+ #include <linux/stddef.h>
+
+-struct tvec_t_base_s;
++struct tvec_base;
+
+ struct timer_list {
+ struct list_head entry;
+@@ -14,7 +14,7 @@ struct timer_list {
+ void (*function)(unsigned long);
+ unsigned long data;
+
+- struct tvec_t_base_s *base;
++ struct tvec_base *base;
+ #ifdef CONFIG_TIMER_STATS
+ void *start_site;
+ char start_comm[16];
+@@ -22,7 +22,7 @@ struct timer_list {
+ #endif
+ };
+
+-extern struct tvec_t_base_s boot_tvec_bases;
++extern struct tvec_base boot_tvec_bases;
+
+ #define TIMER_INITIALIZER(_function, _expires, _data) { \
+ .function = (_function), \
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 47729f1..2352f46 100644
--- a/include/linux/topology.h
@@ -554775,6 +659707,30 @@
unsigned WCE : 1; /* state of disk WCE bit */
unsigned RCD : 1; /* state of disk RCD bit, unused */
unsigned DPOFUA : 1; /* state of disk DPOFUA bit */
+diff --git a/include/xen/page.h b/include/xen/page.h
+index c0c8fcb..031ef22 100644
+--- a/include/xen/page.h
++++ b/include/xen/page.h
+@@ -156,16 +156,16 @@ static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)
+
+ static inline unsigned long long pte_val_ma(pte_t x)
+ {
+- return ((unsigned long long)x.pte_high << 32) | x.pte_low;
++ return x.pte;
+ }
+ #define pmd_val_ma(v) ((v).pmd)
+ #define pud_val_ma(v) ((v).pgd.pgd)
+-#define __pte_ma(x) ((pte_t) { .pte_low = (x), .pte_high = (x)>>32 } )
++#define __pte_ma(x) ((pte_t) { .pte = (x) })
+ #define __pmd_ma(x) ((pmd_t) { (x) } )
+ #else /* !X86_PAE */
+ #define pte_mfn(_pte) ((_pte).pte_low >> PAGE_SHIFT)
+ #define mfn_pte(pfn, prot) __pte_ma(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+-#define pte_val_ma(x) ((x).pte_low)
++#define pte_val_ma(x) ((x).pte)
+ #define pmd_val_ma(v) ((v).pud.pgd.pgd)
+ #define __pte_ma(x) ((pte_t) { (x) } )
+ #endif /* CONFIG_X86_PAE */
diff --git a/init/Kconfig b/init/Kconfig
index b9d11a8..0d0bbf2 100644
--- a/init/Kconfig
@@ -555000,10 +659956,103 @@
}
diff --git a/init/main.c b/init/main.c
-index 80b04b6..f287ca5 100644
+index 80b04b6..cb81ed1 100644
--- a/init/main.c
+++ b/init/main.c
-@@ -607,6 +607,7 @@ asmlinkage void __init start_kernel(void)
+@@ -128,7 +128,7 @@ static char *ramdisk_execute_command;
+
+ #ifdef CONFIG_SMP
+ /* Setup configured maximum number of CPUs to activate */
+-static unsigned int __initdata max_cpus = NR_CPUS;
++unsigned int __initdata setup_max_cpus = NR_CPUS;
+
+ /*
+ * Setup routine for controlling SMP activation
+@@ -146,7 +146,7 @@ static inline void disable_ioapic_setup(void) {};
+
+ static int __init nosmp(char *str)
+ {
+- max_cpus = 0;
++ setup_max_cpus = 0;
+ disable_ioapic_setup();
+ return 0;
+ }
+@@ -155,8 +155,8 @@ early_param("nosmp", nosmp);
+
+ static int __init maxcpus(char *str)
+ {
+- get_option(&str, &max_cpus);
+- if (max_cpus == 0)
++ get_option(&str, &setup_max_cpus);
++ if (setup_max_cpus == 0)
+ disable_ioapic_setup();
+
+ return 0;
+@@ -164,7 +164,7 @@ static int __init maxcpus(char *str)
+
+ early_param("maxcpus", maxcpus);
+ #else
+-#define max_cpus NR_CPUS
++#define setup_max_cpus NR_CPUS
+ #endif
+
+ /*
+@@ -318,6 +318,10 @@ static int __init unknown_bootoption(char *param, char *val)
+ return 0;
+ }
+
++#ifdef CONFIG_DEBUG_PAGEALLOC
++int __read_mostly debug_pagealloc_enabled = 0;
++#endif
++
+ static int __init init_setup(char *str)
+ {
+ unsigned int i;
+@@ -363,7 +367,7 @@ static inline void smp_prepare_cpus(unsigned int maxcpus) { }
+
+ #else
+
+-#ifdef __GENERIC_PER_CPU
++#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
+ unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+
+ EXPORT_SYMBOL(__per_cpu_offset);
+@@ -384,7 +388,7 @@ static void __init setup_per_cpu_areas(void)
+ ptr += size;
+ }
+ }
+-#endif /* !__GENERIC_PER_CPU */
++#endif /* CONFIG_HAVE_SETUP_PER_CPU_AREA */
+
+ /* Called by boot processor to activate the rest. */
+ static void __init smp_init(void)
+@@ -393,7 +397,7 @@ static void __init smp_init(void)
+
+ /* FIXME: This should be done in userspace --RR */
+ for_each_present_cpu(cpu) {
+- if (num_online_cpus() >= max_cpus)
++ if (num_online_cpus() >= setup_max_cpus)
+ break;
+ if (!cpu_online(cpu))
+ cpu_up(cpu);
+@@ -401,7 +405,7 @@ static void __init smp_init(void)
+
+ /* Any cleanup work */
+ printk(KERN_INFO "Brought up %ld CPUs\n", (long)num_online_cpus());
+- smp_cpus_done(max_cpus);
++ smp_cpus_done(setup_max_cpus);
+ }
+
+ #endif
+@@ -552,6 +556,7 @@ asmlinkage void __init start_kernel(void)
+ preempt_disable();
+ build_all_zonelists();
+ page_alloc_init();
++ enable_debug_pagealloc();
+ printk(KERN_NOTICE "Kernel command line: %s\n", boot_command_line);
+ parse_early_param();
+ parse_args("Booting kernel", static_command_line, __start___param,
+@@ -607,6 +612,7 @@ asmlinkage void __init start_kernel(void)
vfs_caches_init_early();
cpuset_init_early();
mem_init();
@@ -555011,6 +660060,15 @@
kmem_cache_init();
setup_per_cpu_pageset();
numa_policy_init();
+@@ -823,7 +829,7 @@ static int __init kernel_init(void * unused)
+ __set_special_pids(1, 1);
+ cad_pid = task_pid(current);
+
+- smp_prepare_cpus(max_cpus);
++ smp_prepare_cpus(setup_max_cpus);
+
+ do_pre_smp_initcalls();
+
diff --git a/kernel/Kconfig.hz b/kernel/Kconfig.hz
index 4af1580..526128a 100644
--- a/kernel/Kconfig.hz
@@ -555047,10 +660105,26 @@
Say N if you are unsure.
-
diff --git a/kernel/Makefile b/kernel/Makefile
-index dfa9695..390d421 100644
+index dfa9695..8885627 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
-@@ -52,11 +52,17 @@ obj-$(CONFIG_DETECT_SOFTLOCKUP) += softlockup.o
+@@ -36,6 +36,7 @@ obj-$(CONFIG_KALLSYMS) += kallsyms.o
+ obj-$(CONFIG_PM) += power/
+ obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
+ obj-$(CONFIG_KEXEC) += kexec.o
++obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
+ obj-$(CONFIG_COMPAT) += compat.o
+ obj-$(CONFIG_CGROUPS) += cgroup.o
+ obj-$(CONFIG_CGROUP_DEBUG) += cgroup_debug.o
+@@ -43,6 +44,7 @@ obj-$(CONFIG_CPUSETS) += cpuset.o
+ obj-$(CONFIG_CGROUP_NS) += ns_cgroup.o
+ obj-$(CONFIG_IKCONFIG) += configs.o
+ obj-$(CONFIG_STOP_MACHINE) += stop_machine.o
++obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o
+ obj-$(CONFIG_AUDIT) += audit.o auditfilter.o
+ obj-$(CONFIG_AUDITSYSCALL) += auditsc.o
+ obj-$(CONFIG_AUDIT_TREE) += audit_tree.o
+@@ -52,11 +54,17 @@ obj-$(CONFIG_DETECT_SOFTLOCKUP) += softlockup.o
obj-$(CONFIG_GENERIC_HARDIRQS) += irq/
obj-$(CONFIG_SECCOMP) += seccomp.o
obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
@@ -555068,6 +660142,60 @@
ifneq ($(CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER),y)
# According to Alan Modra <alan at linuxcare.com.au>, the -fno-omit-frame-pointer is
+diff --git a/kernel/backtracetest.c b/kernel/backtracetest.c
+new file mode 100644
+index 0000000..d1a7605
+--- /dev/null
++++ b/kernel/backtracetest.c
+@@ -0,0 +1,48 @@
++/*
++ * Simple stack backtrace regression test module
++ *
++ * (C) Copyright 2008 Intel Corporation
++ * Author: Arjan van de Ven <arjan at linux.intel.com>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * as published by the Free Software Foundation; version 2
++ * of the License.
++ */
++
++#include <linux/module.h>
++#include <linux/sched.h>
++#include <linux/delay.h>
++
++static struct timer_list backtrace_timer;
++
++static void backtrace_test_timer(unsigned long data)
++{
++ printk("Testing a backtrace from irq context.\n");
++ printk("The following trace is a kernel self test and not a bug!\n");
++ dump_stack();
++}
++static int backtrace_regression_test(void)
++{
++ printk("====[ backtrace testing ]===========\n");
++ printk("Testing a backtrace from process context.\n");
++ printk("The following trace is a kernel self test and not a bug!\n");
++ dump_stack();
++
++ init_timer(&backtrace_timer);
++ backtrace_timer.function = backtrace_test_timer;
++ mod_timer(&backtrace_timer, jiffies + 10);
++
++ msleep(10);
++ printk("====[ end of backtrace testing ]====\n");
++ return 0;
++}
++
++static void exitf(void)
++{
++}
++
++module_init(backtrace_regression_test);
++module_exit(exitf);
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Arjan van de Ven <arjan at linux.intel.com>");
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6b3a0c1..e0d3a4f 100644
--- a/kernel/cpu.c
@@ -555929,6 +661057,103 @@
hrtimer_init_hres(cpu_base);
}
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 1f31422..438a014 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -479,6 +479,9 @@ void free_irq(unsigned int irq, void *dev_id)
+ return;
+ }
+ printk(KERN_ERR "Trying to free already-free IRQ %d\n", irq);
++#ifdef CONFIG_DEBUG_SHIRQ
++ dump_stack();
++#endif
+ spin_unlock_irqrestore(&desc->lock, flags);
+ return;
+ }
+diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
+index 50b81b9..c2f2ccb 100644
+--- a/kernel/irq/proc.c
++++ b/kernel/irq/proc.c
+@@ -75,6 +75,18 @@ static int irq_affinity_write_proc(struct file *file, const char __user *buffer,
+
+ #endif
+
++static int irq_spurious_read(char *page, char **start, off_t off,
++ int count, int *eof, void *data)
++{
++ struct irq_desc *d = &irq_desc[(long) data];
++ return sprintf(page, "count %u\n"
++ "unhandled %u\n"
++ "last_unhandled %u ms\n",
++ d->irq_count,
++ d->irqs_unhandled,
++ jiffies_to_msecs(d->last_unhandled));
++}
++
+ #define MAX_NAMELEN 128
+
+ static int name_unique(unsigned int irq, struct irqaction *new_action)
+@@ -118,6 +130,7 @@ void register_handler_proc(unsigned int irq, struct irqaction *action)
+ void register_irq_proc(unsigned int irq)
+ {
+ char name [MAX_NAMELEN];
++ struct proc_dir_entry *entry;
+
+ if (!root_irq_dir ||
+ (irq_desc[irq].chip == &no_irq_chip) ||
+@@ -132,8 +145,6 @@ void register_irq_proc(unsigned int irq)
+
+ #ifdef CONFIG_SMP
+ {
+- struct proc_dir_entry *entry;
+-
+ /* create /proc/irq/<irq>/smp_affinity */
+ entry = create_proc_entry("smp_affinity", 0600, irq_desc[irq].dir);
+
+@@ -144,6 +155,12 @@ void register_irq_proc(unsigned int irq)
+ }
+ }
+ #endif
++
++ entry = create_proc_entry("spurious", 0444, irq_desc[irq].dir);
++ if (entry) {
++ entry->data = (void *)(long)irq;
++ entry->read_proc = irq_spurious_read;
++ }
+ }
+
+ #undef MAX_NAMELEN
+diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
+index 32b1619..a6b2bc8 100644
+--- a/kernel/irq/spurious.c
++++ b/kernel/irq/spurious.c
+@@ -10,6 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/kallsyms.h>
+ #include <linux/interrupt.h>
++#include <linux/moduleparam.h>
+
+ static int irqfixup __read_mostly;
+
+@@ -225,6 +226,8 @@ int noirqdebug_setup(char *str)
+ }
+
+ __setup("noirqdebug", noirqdebug_setup);
++module_param(noirqdebug, bool, 0644);
++MODULE_PARM_DESC(noirqdebug, "Disable irq lockup detection when true");
+
+ static int __init irqfixup_setup(char *str)
+ {
+@@ -236,6 +239,8 @@ static int __init irqfixup_setup(char *str)
+ }
+
+ __setup("irqfixup", irqfixup_setup);
++module_param(irqfixup, int, 0644);
++MODULE_PARM_DESC("irqfixup", "0: No fixup, 1: irqfixup mode 2: irqpoll mode");
+
+ static int __init irqpoll_setup(char *str)
+ {
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index 2fc2581..7dadc71 100644
--- a/kernel/kallsyms.c
@@ -555968,6 +661193,19 @@
return NULL;
}
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index e3a5d81..d0493ea 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -824,6 +824,8 @@ static int __init init_kprobes(void)
+ if (!err)
+ err = register_die_notifier(&kprobe_exceptions_nb);
+
++ if (!err)
++ init_test_probes();
+ return err;
+ }
+
diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c
index 65daa53..e53bc30 100644
--- a/kernel/ksysfs.c
@@ -556428,7 +661666,7 @@
EXPORT_SYMBOL_GPL(debug_show_held_locks);
diff --git a/kernel/module.c b/kernel/module.c
-index c2e3e2e..f6a4e72 100644
+index c2e3e2e..bd60278 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -47,8 +47,6 @@
@@ -556463,7 +661701,22 @@
}
static inline void add_taint_module(struct module *mod, unsigned flag)
-@@ -498,6 +502,8 @@ static struct module_attribute modinfo_##field = { \
+@@ -426,6 +430,14 @@ static unsigned int find_pcpusec(Elf_Ehdr *hdr,
+ return find_sec(hdr, sechdrs, secstrings, ".data.percpu");
+ }
+
++static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size)
++{
++ int cpu;
++
++ for_each_possible_cpu(cpu)
++ memcpy(pcpudest + per_cpu_offset(cpu), from, size);
++}
++
+ static int percpu_modinit(void)
+ {
+ pcpu_num_used = 2;
+@@ -498,6 +510,8 @@ static struct module_attribute modinfo_##field = { \
MODINFO_ATTR(version);
MODINFO_ATTR(srcversion);
@@ -556472,7 +661725,7 @@
#ifdef CONFIG_MODULE_UNLOAD
/* Init the unload section of the module. */
static void module_unload_init(struct module *mod)
-@@ -539,11 +545,21 @@ static int already_uses(struct module *a, struct module *b)
+@@ -539,11 +553,21 @@ static int already_uses(struct module *a, struct module *b)
static int use_module(struct module *a, struct module *b)
{
struct module_use *use;
@@ -556496,7 +661749,7 @@
return 0;
DEBUGP("Allocating new usage for %s.\n", a->name);
-@@ -721,6 +737,8 @@ sys_delete_module(const char __user *name_user, unsigned int flags)
+@@ -721,6 +745,8 @@ sys_delete_module(const char __user *name_user, unsigned int flags)
mod->exit();
mutex_lock(&module_mutex);
}
@@ -556505,7 +661758,7 @@
free_module(mod);
out:
-@@ -814,7 +832,7 @@ static inline void module_unload_free(struct module *mod)
+@@ -814,7 +840,7 @@ static inline void module_unload_free(struct module *mod)
static inline int use_module(struct module *a, struct module *b)
{
@@ -556514,7 +661767,7 @@
}
static inline void module_unload_init(struct module *mod)
-@@ -1122,7 +1140,7 @@ static void add_notes_attrs(struct module *mod, unsigned int nsect,
+@@ -1122,7 +1148,7 @@ static void add_notes_attrs(struct module *mod, unsigned int nsect,
++loaded;
}
@@ -556523,7 +661776,7 @@
if (!notes_attrs->dir)
goto out;
-@@ -1212,6 +1230,7 @@ void module_remove_modinfo_attrs(struct module *mod)
+@@ -1212,6 +1238,7 @@ void module_remove_modinfo_attrs(struct module *mod)
int mod_sysfs_init(struct module *mod)
{
int err;
@@ -556531,7 +661784,7 @@
if (!module_sysfs_initialized) {
printk(KERN_ERR "%s: module sysfs not initialized\n",
-@@ -1219,15 +1238,25 @@ int mod_sysfs_init(struct module *mod)
+@@ -1219,15 +1246,25 @@ int mod_sysfs_init(struct module *mod)
err = -EINVAL;
goto out;
}
@@ -556562,7 +661815,7 @@
out:
return err;
}
-@@ -1238,12 +1267,7 @@ int mod_sysfs_setup(struct module *mod,
+@@ -1238,12 +1275,7 @@ int mod_sysfs_setup(struct module *mod,
{
int err;
@@ -556576,7 +661829,7 @@
if (!mod->holders_dir) {
err = -ENOMEM;
goto out_unreg;
-@@ -1263,11 +1287,9 @@ int mod_sysfs_setup(struct module *mod,
+@@ -1263,11 +1295,9 @@ int mod_sysfs_setup(struct module *mod,
out_unreg_param:
module_param_sysfs_remove(mod);
out_unreg_holders:
@@ -556589,7 +661842,7 @@
return err;
}
#endif
-@@ -1276,9 +1298,20 @@ static void mod_kobject_remove(struct module *mod)
+@@ -1276,9 +1306,20 @@ static void mod_kobject_remove(struct module *mod)
{
module_remove_modinfo_attrs(mod);
module_param_sysfs_remove(mod);
@@ -556613,7 +661866,7 @@
}
/*
-@@ -1330,7 +1363,7 @@ void *__symbol_get(const char *symbol)
+@@ -1330,7 +1371,7 @@ void *__symbol_get(const char *symbol)
preempt_disable();
value = __find_symbol(symbol, &owner, &crc, 1);
@@ -556622,7 +661875,7 @@
value = 0;
preempt_enable();
-@@ -1884,16 +1917,16 @@ static struct module *load_module(void __user *umod,
+@@ -1884,16 +1925,16 @@ static struct module *load_module(void __user *umod,
/* Now we've moved module, initialize linked lists, etc. */
module_unload_init(mod);
@@ -556642,7 +661895,7 @@
if (strcmp(mod->name, "driverloader") == 0)
add_taint_module(mod, TAINT_PROPRIETARY_MODULE);
-@@ -2023,6 +2056,11 @@ static struct module *load_module(void __user *umod,
+@@ -2023,6 +2064,11 @@ static struct module *load_module(void __user *umod,
printk(KERN_WARNING "%s: Ignoring obsolete parameters\n",
mod->name);
@@ -556654,7 +661907,7 @@
/* Size of section 0 is 0, so this works well if no params */
err = parse_args(mod->name, mod->args,
(struct kernel_param *)
-@@ -2031,7 +2069,7 @@ static struct module *load_module(void __user *umod,
+@@ -2031,7 +2077,7 @@ static struct module *load_module(void __user *umod,
/ sizeof(struct kernel_param),
NULL);
if (err < 0)
@@ -556663,7 +661916,7 @@
err = mod_sysfs_setup(mod,
(struct kernel_param *)
-@@ -2039,7 +2077,7 @@ static struct module *load_module(void __user *umod,
+@@ -2039,7 +2085,7 @@ static struct module *load_module(void __user *umod,
sechdrs[setupindex].sh_size
/ sizeof(struct kernel_param));
if (err < 0)
@@ -556672,7 +661925,7 @@
add_sect_attrs(mod, hdr->e_shnum, secstrings, sechdrs);
add_notes_attrs(mod, hdr->e_shnum, secstrings, sechdrs);
-@@ -2054,9 +2092,13 @@ static struct module *load_module(void __user *umod,
+@@ -2054,9 +2100,13 @@ static struct module *load_module(void __user *umod,
/* Done! */
return mod;
@@ -556687,7 +661940,7 @@
module_unload_free(mod);
module_free(mod, mod->module_init);
free_core:
-@@ -2076,17 +2118,6 @@ static struct module *load_module(void __user *umod,
+@@ -2076,17 +2126,6 @@ static struct module *load_module(void __user *umod,
goto free_hdr;
}
@@ -556705,7 +661958,7 @@
/* This is where the real work happens */
asmlinkage long
sys_init_module(void __user *umod,
-@@ -2111,10 +2142,6 @@ sys_init_module(void __user *umod,
+@@ -2111,10 +2150,6 @@ sys_init_module(void __user *umod,
return PTR_ERR(mod);
}
@@ -556716,7 +661969,7 @@
/* Drop lock so they can recurse */
mutex_unlock(&module_mutex);
-@@ -2133,6 +2160,7 @@ sys_init_module(void __user *umod,
+@@ -2133,6 +2168,7 @@ sys_init_module(void __user *umod,
mutex_lock(&module_mutex);
free_module(mod);
mutex_unlock(&module_mutex);
@@ -556724,7 +661977,7 @@
return ret;
}
-@@ -2147,6 +2175,7 @@ sys_init_module(void __user *umod,
+@@ -2147,6 +2183,7 @@ sys_init_module(void __user *umod,
mod->init_size = 0;
mod->init_text_size = 0;
mutex_unlock(&module_mutex);
@@ -556732,7 +661985,7 @@
return 0;
}
-@@ -2211,14 +2240,13 @@ static const char *get_ksymbol(struct module *mod,
+@@ -2211,14 +2248,13 @@ static const char *get_ksymbol(struct module *mod,
return mod->strtab + mod->symtab[best].st_name;
}
@@ -556754,7 +662007,7 @@
{
struct module *mod;
const char *ret = NULL;
-@@ -2233,8 +2261,13 @@ const char *module_address_lookup(unsigned long addr,
+@@ -2233,8 +2269,13 @@ const char *module_address_lookup(unsigned long addr,
break;
}
}
@@ -556769,7 +662022,7 @@
}
int lookup_module_symbol_name(unsigned long addr, char *symname)
-@@ -2362,21 +2395,30 @@ static void m_stop(struct seq_file *m, void *p)
+@@ -2362,21 +2403,30 @@ static void m_stop(struct seq_file *m, void *p)
mutex_unlock(&module_mutex);
}
@@ -556804,7 +662057,7 @@
buf[bx++] = ')';
}
buf[bx] = '\0';
-@@ -2403,7 +2445,7 @@ static int m_show(struct seq_file *m, void *p)
+@@ -2403,7 +2453,7 @@ static int m_show(struct seq_file *m, void *p)
/* Taints info */
if (mod->taints)
@@ -556813,7 +662066,7 @@
seq_printf(m, "\n");
return 0;
-@@ -2498,97 +2540,12 @@ void print_modules(void)
+@@ -2498,97 +2548,12 @@ void print_modules(void)
printk("Modules linked in:");
list_for_each_entry(mod, &modules, list)
@@ -556914,6 +662167,62 @@
#ifdef CONFIG_MODVERSIONS
/* Generate the signature for struct module here, too, for modversions. */
void struct_module(struct module *mod) { return; }
+diff --git a/kernel/panic.c b/kernel/panic.c
+index da4d6ba..d9e90cf 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -20,6 +20,7 @@
+ #include <linux/kexec.h>
+ #include <linux/debug_locks.h>
+ #include <linux/random.h>
++#include <linux/kallsyms.h>
+
+ int panic_on_oops;
+ int tainted;
+@@ -280,6 +281,13 @@ static int init_oops_id(void)
+ }
+ late_initcall(init_oops_id);
+
++static void print_oops_end_marker(void)
++{
++ init_oops_id();
++ printk(KERN_WARNING "---[ end trace %016llx ]---\n",
++ (unsigned long long)oops_id);
++}
++
+ /*
+ * Called when the architecture exits its oops handler, after printing
+ * everything.
+@@ -287,11 +295,26 @@ late_initcall(init_oops_id);
+ void oops_exit(void)
+ {
+ do_oops_enter_exit();
+- init_oops_id();
+- printk(KERN_WARNING "---[ end trace %016llx ]---\n",
+- (unsigned long long)oops_id);
++ print_oops_end_marker();
+ }
+
++#ifdef WANT_WARN_ON_SLOWPATH
++void warn_on_slowpath(const char *file, int line)
++{
++ char function[KSYM_SYMBOL_LEN];
++ unsigned long caller = (unsigned long) __builtin_return_address(0);
++ sprint_symbol(function, caller);
++
++ printk(KERN_WARNING "------------[ cut here ]------------\n");
++ printk(KERN_WARNING "WARNING: at %s:%d %s()\n", file,
++ line, function);
++ print_modules();
++ dump_stack();
++ print_oops_end_marker();
++}
++EXPORT_SYMBOL(warn_on_slowpath);
++#endif
++
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ /*
+ * Called when gcc's -fstack-protector feature is used, and
diff --git a/kernel/params.c b/kernel/params.c
index 7686417..42fe5e6 100644
--- a/kernel/params.c
@@ -557249,10 +662558,24 @@
extern unsigned long image_size;
extern int in_suspend;
diff --git a/kernel/printk.c b/kernel/printk.c
-index 89011bf..3b7c968 100644
+index 89011bf..58bbec6 100644
--- a/kernel/printk.c
+++ b/kernel/printk.c
-@@ -573,11 +573,6 @@ static int __init printk_time_setup(char *str)
+@@ -36,6 +36,13 @@
+
+ #include <asm/uaccess.h>
+
++/*
++ * Architectures can override it:
++ */
++void __attribute__((weak)) early_printk(const char *fmt, ...)
++{
++}
++
+ #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
+
+ /* printk's without a loglevel use this.. */
+@@ -573,11 +580,6 @@ static int __init printk_time_setup(char *str)
__setup("time", printk_time_setup);
@@ -557264,7 +662587,7 @@
/* Check if we have any console registered that can be called early in boot. */
static int have_callable_console(void)
{
-@@ -628,30 +623,57 @@ asmlinkage int printk(const char *fmt, ...)
+@@ -628,30 +630,57 @@ asmlinkage int printk(const char *fmt, ...)
/* cpu currently holding logbuf_lock */
static volatile unsigned int printk_cpu = UINT_MAX;
@@ -557332,7 +662655,7 @@
/*
* Copy the output into log_buf. If the caller didn't provide
-@@ -680,7 +702,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+@@ -680,7 +709,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
loglev_char = default_message_loglevel
+ '0';
}
@@ -557341,7 +662664,7 @@
nanosec_rem = do_div(t, 1000000000);
tlen = sprintf(tbuf,
"<%c>[%5lu.%06lu] ",
-@@ -744,6 +766,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+@@ -744,6 +773,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
printk_cpu = UINT_MAX;
spin_unlock(&logbuf_lock);
lockdep_on();
@@ -557592,10 +662915,111 @@
entry->proc_fops = &proc_profile_operations;
entry->size = (1+prof_len) * sizeof(atomic_t);
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
-index c25db86..c719bb9 100644
+index c25db86..e6e9b8b 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
-@@ -470,6 +470,8 @@ asmlinkage long sys_ptrace(long request, long pid, long addr, long data)
+@@ -366,12 +366,73 @@ static int ptrace_setsiginfo(struct task_struct *child, siginfo_t __user * data)
+ return error;
+ }
+
++
++#ifdef PTRACE_SINGLESTEP
++#define is_singlestep(request) ((request) == PTRACE_SINGLESTEP)
++#else
++#define is_singlestep(request) 0
++#endif
++
++#ifdef PTRACE_SINGLEBLOCK
++#define is_singleblock(request) ((request) == PTRACE_SINGLEBLOCK)
++#else
++#define is_singleblock(request) 0
++#endif
++
++#ifdef PTRACE_SYSEMU
++#define is_sysemu_singlestep(request) ((request) == PTRACE_SYSEMU_SINGLESTEP)
++#else
++#define is_sysemu_singlestep(request) 0
++#endif
++
++static int ptrace_resume(struct task_struct *child, long request, long data)
++{
++ if (!valid_signal(data))
++ return -EIO;
++
++ if (request == PTRACE_SYSCALL)
++ set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
++ else
++ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
++
++#ifdef TIF_SYSCALL_EMU
++ if (request == PTRACE_SYSEMU || request == PTRACE_SYSEMU_SINGLESTEP)
++ set_tsk_thread_flag(child, TIF_SYSCALL_EMU);
++ else
++ clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
++#endif
++
++ if (is_singleblock(request)) {
++ if (unlikely(!arch_has_block_step()))
++ return -EIO;
++ user_enable_block_step(child);
++ } else if (is_singlestep(request) || is_sysemu_singlestep(request)) {
++ if (unlikely(!arch_has_single_step()))
++ return -EIO;
++ user_enable_single_step(child);
++ }
++ else
++ user_disable_single_step(child);
++
++ child->exit_code = data;
++ wake_up_process(child);
++
++ return 0;
++}
++
+ int ptrace_request(struct task_struct *child, long request,
+ long addr, long data)
+ {
+ int ret = -EIO;
+
+ switch (request) {
++ case PTRACE_PEEKTEXT:
++ case PTRACE_PEEKDATA:
++ return generic_ptrace_peekdata(child, addr, data);
++ case PTRACE_POKETEXT:
++ case PTRACE_POKEDATA:
++ return generic_ptrace_pokedata(child, addr, data);
++
+ #ifdef PTRACE_OLDSETOPTIONS
+ case PTRACE_OLDSETOPTIONS:
+ #endif
+@@ -390,6 +451,26 @@ int ptrace_request(struct task_struct *child, long request,
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = ptrace_detach(child, data);
+ break;
++
++#ifdef PTRACE_SINGLESTEP
++ case PTRACE_SINGLESTEP:
++#endif
++#ifdef PTRACE_SINGLEBLOCK
++ case PTRACE_SINGLEBLOCK:
++#endif
++#ifdef PTRACE_SYSEMU
++ case PTRACE_SYSEMU:
++ case PTRACE_SYSEMU_SINGLESTEP:
++#endif
++ case PTRACE_SYSCALL:
++ case PTRACE_CONT:
++ return ptrace_resume(child, request, data);
++
++ case PTRACE_KILL:
++ if (child->exit_state) /* already dead */
++ return 0;
++ return ptrace_resume(child, request, SIGKILL);
++
+ default:
+ break;
+ }
+@@ -470,6 +551,8 @@ asmlinkage long sys_ptrace(long request, long pid, long addr, long data)
lock_kernel();
if (request == PTRACE_TRACEME) {
ret = ptrace_traceme();
@@ -557604,6 +663028,94 @@
goto out;
}
+@@ -524,3 +607,87 @@ int generic_ptrace_pokedata(struct task_struct *tsk, long addr, long data)
+ copied = access_process_vm(tsk, addr, &data, sizeof(data), 1);
+ return (copied == sizeof(data)) ? 0 : -EIO;
+ }
++
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++int compat_ptrace_request(struct task_struct *child, compat_long_t request,
++ compat_ulong_t addr, compat_ulong_t data)
++{
++ compat_ulong_t __user *datap = compat_ptr(data);
++ compat_ulong_t word;
++ int ret;
++
++ switch (request) {
++ case PTRACE_PEEKTEXT:
++ case PTRACE_PEEKDATA:
++ ret = access_process_vm(child, addr, &word, sizeof(word), 0);
++ if (ret != sizeof(word))
++ ret = -EIO;
++ else
++ ret = put_user(word, datap);
++ break;
++
++ case PTRACE_POKETEXT:
++ case PTRACE_POKEDATA:
++ ret = access_process_vm(child, addr, &data, sizeof(data), 1);
++ ret = (ret != sizeof(data) ? -EIO : 0);
++ break;
++
++ case PTRACE_GETEVENTMSG:
++ ret = put_user((compat_ulong_t) child->ptrace_message, datap);
++ break;
++
++ default:
++ ret = ptrace_request(child, request, addr, data);
++ }
++
++ return ret;
++}
++
++#ifdef __ARCH_WANT_COMPAT_SYS_PTRACE
++asmlinkage long compat_sys_ptrace(compat_long_t request, compat_long_t pid,
++ compat_long_t addr, compat_long_t data)
++{
++ struct task_struct *child;
++ long ret;
++
++ /*
++ * This lock_kernel fixes a subtle race with suid exec
++ */
++ lock_kernel();
++ if (request == PTRACE_TRACEME) {
++ ret = ptrace_traceme();
++ goto out;
++ }
++
++ child = ptrace_get_task_struct(pid);
++ if (IS_ERR(child)) {
++ ret = PTR_ERR(child);
++ goto out;
++ }
++
++ if (request == PTRACE_ATTACH) {
++ ret = ptrace_attach(child);
++ /*
++ * Some architectures need to do book-keeping after
++ * a ptrace attach.
++ */
++ if (!ret)
++ arch_ptrace_attach(child);
++ goto out_put_task_struct;
++ }
++
++ ret = ptrace_check_attach(child, request == PTRACE_KILL);
++ if (!ret)
++ ret = compat_arch_ptrace(child, request, addr, data);
++
++ out_put_task_struct:
++ put_task_struct(child);
++ out:
++ unlock_kernel();
++ return ret;
++}
++#endif /* __ARCH_WANT_COMPAT_SYS_PTRACE */
++
++#endif /* CONFIG_COMPAT */
diff --git a/kernel/rcuclassic.c b/kernel/rcuclassic.c
new file mode 100644
index 0000000..f4ffbd0
@@ -560166,7 +665678,7 @@
static int init_test_thread(int id)
diff --git a/kernel/sched.c b/kernel/sched.c
-index e76b11c..524285e 100644
+index e76b11c..ba4c880 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -22,6 +22,8 @@
@@ -560361,7 +665873,7 @@
#endif /* CONFIG_FAIR_GROUP_SCHED */
-@@ -264,10 +362,56 @@ struct cfs_rq {
+@@ -264,11 +362,57 @@ struct cfs_rq {
/* Real-Time classes' related field in a runqueue: */
struct rt_rq {
struct rt_prio_array active;
@@ -560384,8 +665896,8 @@
+ struct task_group *tg;
+ struct sched_rt_entity *rt_se;
+#endif
- };
-
++};
++
+#ifdef CONFIG_SMP
+
+/*
@@ -560407,9 +665919,9 @@
+ */
+ cpumask_t rto_mask;
+ atomic_t rto_count;
-+};
-+
-+/*
+ };
+
+ /*
+ * By default the system creates a single root-domain with all cpus as
+ * members (mimicking the global state we have today).
+ */
@@ -560417,9 +665929,10 @@
+
+#endif
+
- /*
++/*
* This is the main, per-CPU runqueue data structure.
*
+ * Locking rule: those places that want to lock multiple runqueues
@@ -296,11 +440,15 @@ struct rq {
u64 nr_switches;
@@ -561392,7 +666905,33 @@
/*
* cond_resched_lock() - if a reschedule is pending, drop the given lock,
-@@ -4890,7 +5131,7 @@ out_unlock:
+@@ -4704,19 +4945,15 @@ EXPORT_SYMBOL(cond_resched);
+ */
+ int cond_resched_lock(spinlock_t *lock)
+ {
++ int resched = need_resched() && system_state == SYSTEM_RUNNING;
+ int ret = 0;
+
+- if (need_lockbreak(lock)) {
++ if (spin_needbreak(lock) || resched) {
+ spin_unlock(lock);
+- cpu_relax();
+- ret = 1;
+- spin_lock(lock);
+- }
+- if (need_resched() && system_state == SYSTEM_RUNNING) {
+- spin_release(&lock->dep_map, 1, _THIS_IP_);
+- _raw_spin_unlock(lock);
+- preempt_enable_no_resched();
+- __cond_resched();
++ if (resched && need_resched())
++ __cond_resched();
++ else
++ cpu_relax();
+ ret = 1;
+ spin_lock(lock);
+ }
+@@ -4890,7 +5127,7 @@ out_unlock:
static const char stat_nam[] = "RSDTtZX";
@@ -561401,7 +666940,7 @@
{
unsigned long free = 0;
unsigned state;
-@@ -4920,8 +5161,7 @@ static void show_task(struct task_struct *p)
+@@ -4920,8 +5157,7 @@ static void show_task(struct task_struct *p)
printk(KERN_CONT "%5lu %5d %6d\n", free,
task_pid_nr(p), task_pid_nr(p->real_parent));
@@ -561411,7 +666950,7 @@
}
void show_state_filter(unsigned long state_filter)
-@@ -4943,7 +5183,7 @@ void show_state_filter(unsigned long state_filter)
+@@ -4943,7 +5179,7 @@ void show_state_filter(unsigned long state_filter)
*/
touch_nmi_watchdog();
if (!state_filter || (p->state & state_filter))
@@ -561420,7 +666959,7 @@
} while_each_thread(g, p);
touch_all_softlockup_watchdogs();
-@@ -4992,11 +5232,8 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
+@@ -4992,11 +5228,8 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
spin_unlock_irqrestore(&rq->lock, flags);
/* Set the preempt count _outside_ the spinlocks! */
@@ -561433,7 +666972,7 @@
/*
* The idle tasks have their own, simple scheduling class:
*/
-@@ -5077,7 +5314,13 @@ int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
+@@ -5077,7 +5310,13 @@ int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
goto out;
}
@@ -561448,7 +666987,7 @@
/* Can the task run on the task's current CPU? If so, we're done */
if (cpu_isset(task_cpu(p), new_mask))
goto out;
-@@ -5569,9 +5812,6 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
+@@ -5569,9 +5808,6 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
struct rq *rq;
switch (action) {
@@ -561458,7 +666997,7 @@
case CPU_UP_PREPARE:
case CPU_UP_PREPARE_FROZEN:
-@@ -5590,6 +5830,15 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
+@@ -5590,6 +5826,15 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
case CPU_ONLINE_FROZEN:
/* Strictly unnecessary, as first user will wake it. */
wake_up_process(cpu_rq(cpu)->migration_thread);
@@ -561474,7 +667013,7 @@
break;
#ifdef CONFIG_HOTPLUG_CPU
-@@ -5640,10 +5889,18 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
+@@ -5640,10 +5885,18 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
}
spin_unlock_irq(&rq->lock);
break;
@@ -561496,7 +667035,7 @@
}
return NOTIFY_OK;
}
-@@ -5831,11 +6088,76 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
+@@ -5831,11 +6084,76 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
return 1;
}
@@ -561575,7 +667114,7 @@
{
struct rq *rq = cpu_rq(cpu);
struct sched_domain *tmp;
-@@ -5860,6 +6182,7 @@ static void cpu_attach_domain(struct sched_domain *sd, int cpu)
+@@ -5860,6 +6178,7 @@ static void cpu_attach_domain(struct sched_domain *sd, int cpu)
sched_domain_debug(sd, cpu);
@@ -561583,7 +667122,7 @@
rcu_assign_pointer(rq->sd, sd);
}
-@@ -6228,6 +6551,7 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
+@@ -6228,6 +6547,7 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
static int build_sched_domains(const cpumask_t *cpu_map)
{
int i;
@@ -561591,7 +667130,7 @@
#ifdef CONFIG_NUMA
struct sched_group **sched_group_nodes = NULL;
int sd_allnodes = 0;
-@@ -6244,6 +6568,12 @@ static int build_sched_domains(const cpumask_t *cpu_map)
+@@ -6244,6 +6564,12 @@ static int build_sched_domains(const cpumask_t *cpu_map)
sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
#endif
@@ -561604,7 +667143,7 @@
/*
* Set up domains for cpus specified by the cpu_map.
*/
-@@ -6460,7 +6790,7 @@ static int build_sched_domains(const cpumask_t *cpu_map)
+@@ -6460,7 +6786,7 @@ static int build_sched_domains(const cpumask_t *cpu_map)
#else
sd = &per_cpu(phys_domains, i);
#endif
@@ -561613,7 +667152,7 @@
}
return 0;
-@@ -6518,7 +6848,7 @@ static void detach_destroy_domains(const cpumask_t *cpu_map)
+@@ -6518,7 +6844,7 @@ static void detach_destroy_domains(const cpumask_t *cpu_map)
unregister_sched_domain_sysctl();
for_each_cpu_mask(i, *cpu_map)
@@ -561622,7 +667161,7 @@
synchronize_sched();
arch_destroy_sched_domains(cpu_map);
}
-@@ -6548,6 +6878,8 @@ void partition_sched_domains(int ndoms_new, cpumask_t *doms_new)
+@@ -6548,6 +6874,8 @@ void partition_sched_domains(int ndoms_new, cpumask_t *doms_new)
{
int i, j;
@@ -561631,7 +667170,7 @@
/* always unregister in case we don't destroy any domains */
unregister_sched_domain_sysctl();
-@@ -6588,6 +6920,8 @@ match2:
+@@ -6588,6 +6916,8 @@ match2:
ndoms_cur = ndoms_new;
register_sched_domain_sysctl();
@@ -561640,7 +667179,7 @@
}
#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-@@ -6595,10 +6929,10 @@ static int arch_reinit_sched_domains(void)
+@@ -6595,10 +6925,10 @@ static int arch_reinit_sched_domains(void)
{
int err;
@@ -561653,7 +667192,7 @@
return err;
}
-@@ -6709,12 +7043,12 @@ void __init sched_init_smp(void)
+@@ -6709,12 +7039,12 @@ void __init sched_init_smp(void)
{
cpumask_t non_isolated_cpus;
@@ -561668,7 +667207,7 @@
/* XXX: Theoretical race here - CPU may be hotplugged now */
hotcpu_notifier(update_sched_domains, 0);
-@@ -6722,6 +7056,21 @@ void __init sched_init_smp(void)
+@@ -6722,6 +7052,21 @@ void __init sched_init_smp(void)
if (set_cpus_allowed(current, non_isolated_cpus) < 0)
BUG();
sched_init_granularity();
@@ -561690,7 +667229,7 @@
}
#else
void __init sched_init_smp(void)
-@@ -6746,13 +7095,87 @@ static void init_cfs_rq(struct cfs_rq *cfs_rq, struct rq *rq)
+@@ -6746,13 +7091,87 @@ static void init_cfs_rq(struct cfs_rq *cfs_rq, struct rq *rq)
cfs_rq->min_vruntime = (u64)(-(1LL << 20));
}
@@ -561779,7 +667318,7 @@
struct rq *rq;
rq = cpu_rq(i);
-@@ -6761,52 +7184,39 @@ void __init sched_init(void)
+@@ -6761,52 +7180,39 @@ void __init sched_init(void)
rq->nr_running = 0;
rq->clock = 1;
init_cfs_rq(&rq->cfs, rq);
@@ -561848,7 +667387,7 @@
}
set_load_weight(&init_task);
-@@ -6975,12 +7385,187 @@ void set_curr_task(int cpu, struct task_struct *p)
+@@ -6975,12 +7381,187 @@ void set_curr_task(int cpu, struct task_struct *p)
#ifdef CONFIG_FAIR_GROUP_SCHED
@@ -562036,7 +667575,7 @@
struct rq *rq;
int i;
-@@ -6994,97 +7579,89 @@ struct task_group *sched_create_group(void)
+@@ -6994,97 +7575,89 @@ struct task_group *sched_create_group(void)
tg->se = kzalloc(sizeof(se) * NR_CPUS, GFP_KERNEL);
if (!tg->se)
goto err;
@@ -562172,7 +667711,7 @@
}
/* change task's runqueue when it moves between groups.
-@@ -7100,11 +7677,6 @@ void sched_move_task(struct task_struct *tsk)
+@@ -7100,11 +7673,6 @@ void sched_move_task(struct task_struct *tsk)
rq = task_rq_lock(tsk, &flags);
@@ -562184,7 +667723,7 @@
update_rq_clock(rq);
running = task_current(rq, tsk);
-@@ -7116,7 +7688,7 @@ void sched_move_task(struct task_struct *tsk)
+@@ -7116,7 +7684,7 @@ void sched_move_task(struct task_struct *tsk)
tsk->sched_class->put_prev_task(rq, tsk);
}
@@ -562193,7 +667732,7 @@
if (on_rq) {
if (unlikely(running))
-@@ -7124,53 +7696,82 @@ void sched_move_task(struct task_struct *tsk)
+@@ -7124,53 +7692,82 @@ void sched_move_task(struct task_struct *tsk)
enqueue_task(rq, tsk, 0);
}
@@ -562292,7 +667831,7 @@
return 0;
}
-@@ -7179,6 +7780,31 @@ unsigned long sched_group_shares(struct task_group *tg)
+@@ -7179,6 +7776,31 @@ unsigned long sched_group_shares(struct task_group *tg)
return tg->shares;
}
@@ -562324,7 +667863,7 @@
#endif /* CONFIG_FAIR_GROUP_SCHED */
#ifdef CONFIG_FAIR_CGROUP_SCHED
-@@ -7254,12 +7880,30 @@ static u64 cpu_shares_read_uint(struct cgroup *cgrp, struct cftype *cft)
+@@ -7254,12 +7876,30 @@ static u64 cpu_shares_read_uint(struct cgroup *cgrp, struct cftype *cft)
return (u64) tg->shares;
}
@@ -564336,6 +669875,57 @@
+ .prio_changed = prio_changed_rt,
+ .switched_to = switched_to_rt,
};
+diff --git a/kernel/signal.c b/kernel/signal.c
+index afa4f78..bf49ce6 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -733,13 +733,13 @@ static void print_fatal_signal(struct pt_regs *regs, int signr)
+ current->comm, task_pid_nr(current), signr);
+
+ #if defined(__i386__) && !defined(__arch_um__)
+- printk("code at %08lx: ", regs->eip);
++ printk("code at %08lx: ", regs->ip);
+ {
+ int i;
+ for (i = 0; i < 16; i++) {
+ unsigned char insn;
+
+- __get_user(insn, (unsigned char *)(regs->eip + i));
++ __get_user(insn, (unsigned char *)(regs->ip + i));
+ printk("%02x ", insn);
+ }
+ }
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index bd89bc4..d7837d4 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -3,7 +3,9 @@
+ *
+ * Copyright (C) 1992 Linus Torvalds
+ *
+- * Rewritten. Old one was good in 2.2, but in 2.3 it was immoral. --ANK (990903)
++ * Distribute under GPLv2.
++ *
++ * Rewritten. Old one was good in 2.2, but in 2.3 it was immoral. --ANK (990903)
+ */
+
+ #include <linux/module.h>
+@@ -278,9 +280,14 @@ asmlinkage void do_softirq(void)
+ */
+ void irq_enter(void)
+ {
++#ifdef CONFIG_NO_HZ
++ int cpu = smp_processor_id();
++ if (idle_cpu(cpu) && !in_interrupt())
++ tick_nohz_stop_idle(cpu);
++#endif
+ __irq_enter();
+ #ifdef CONFIG_NO_HZ
+- if (idle_cpu(smp_processor_id()))
++ if (idle_cpu(cpu))
+ tick_nohz_update_jiffies();
+ #endif
+ }
diff --git a/kernel/softlockup.c b/kernel/softlockup.c
index 11df812..c1d7655 100644
--- a/kernel/softlockup.c
@@ -564521,6 +670111,20 @@
case CPU_DEAD:
case CPU_DEAD_FROZEN:
p = per_cpu(watchdog_task, hotcpu);
+diff --git a/kernel/spinlock.c b/kernel/spinlock.c
+index cd72424..ae28c82 100644
+--- a/kernel/spinlock.c
++++ b/kernel/spinlock.c
+@@ -65,8 +65,7 @@ EXPORT_SYMBOL(_write_trylock);
+ * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
+ * not re-enabled during lock-acquire (which the preempt-spin-ops do):
+ */
+-#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP) || \
+- defined(CONFIG_DEBUG_LOCK_ALLOC)
++#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
+
+ void __lockfunc _read_lock(rwlock_t *lock)
+ {
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 319821e..51b5ee5 100644
--- a/kernel/stop_machine.c
@@ -564542,10 +670146,18 @@
return ret;
}
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
-index c68f68d..4bc8e48 100644
+index c68f68d..357b68b 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
-@@ -81,6 +81,7 @@ extern int compat_log;
+@@ -53,6 +53,7 @@
+ #ifdef CONFIG_X86
+ #include <asm/nmi.h>
+ #include <asm/stacktrace.h>
++#include <asm/io.h>
+ #endif
+
+ static int deprecated_sysctl_warning(struct __sysctl_args *args);
+@@ -81,6 +82,7 @@ extern int compat_log;
extern int maps_protect;
extern int sysctl_stat_interval;
extern int audit_argv_kb;
@@ -564553,7 +670165,7 @@
/* Constants used for minimum and maximum */
#ifdef CONFIG_DETECT_SOFTLOCKUP
-@@ -156,8 +157,16 @@ static int proc_dointvec_taint(struct ctl_table *table, int write, struct file *
+@@ -156,8 +158,16 @@ static int proc_dointvec_taint(struct ctl_table *table, int write, struct file *
#endif
static struct ctl_table root_table[];
@@ -564572,7 +670184,7 @@
static struct ctl_table kern_table[];
static struct ctl_table vm_table[];
-@@ -191,14 +200,6 @@ static struct ctl_table root_table[] = {
+@@ -191,14 +201,6 @@ static struct ctl_table root_table[] = {
.mode = 0555,
.child = vm_table,
},
@@ -564587,14 +670199,14 @@
{
.ctl_name = CTL_FS,
.procname = "fs",
-@@ -306,9 +307,43 @@ static struct ctl_table kern_table[] = {
+@@ -306,9 +308,43 @@ static struct ctl_table kern_table[] = {
.procname = "sched_nr_migrate",
.data = &sysctl_sched_nr_migrate,
.maxlen = sizeof(unsigned int),
- .mode = 644,
+ .mode = 0644,
- .proc_handler = &proc_dointvec,
- },
++ .proc_handler = &proc_dointvec,
++ },
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "sched_rt_period_ms",
@@ -564609,8 +670221,8 @@
+ .data = &sysctl_sched_rt_ratio,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
-+ .proc_handler = &proc_dointvec,
-+ },
+ .proc_handler = &proc_dointvec,
+ },
+#if defined(CONFIG_FAIR_GROUP_SCHED) && defined(CONFIG_SMP)
+ {
+ .ctl_name = CTL_UNNUMBERED,
@@ -564632,7 +670244,7 @@
#endif
{
.ctl_name = CTL_UNNUMBERED,
-@@ -382,6 +417,15 @@ static struct ctl_table kern_table[] = {
+@@ -382,6 +418,15 @@ static struct ctl_table kern_table[] = {
.proc_handler = &proc_dointvec_taint,
},
#endif
@@ -564648,7 +670260,22 @@
#ifdef CONFIG_SECURITY_CAPABILITIES
{
.procname = "cap-bound",
-@@ -728,13 +772,40 @@ static struct ctl_table kern_table[] = {
+@@ -683,6 +728,14 @@ static struct ctl_table kern_table[] = {
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
++ {
++ .ctl_name = CTL_UNNUMBERED,
++ .procname = "io_delay_type",
++ .data = &io_delay_type,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
+ #endif
+ #if defined(CONFIG_MMU)
+ {
+@@ -728,13 +781,40 @@ static struct ctl_table kern_table[] = {
.ctl_name = CTL_UNNUMBERED,
.procname = "softlockup_thresh",
.data = &softlockup_thresh,
@@ -564691,14 +670318,14 @@
#endif
#ifdef CONFIG_COMPAT
{
-@@ -1300,12 +1371,27 @@ void sysctl_head_finish(struct ctl_table_header *head)
+@@ -1300,12 +1380,27 @@ void sysctl_head_finish(struct ctl_table_header *head)
spin_unlock(&sysctl_lock);
}
-struct ctl_table_header *sysctl_head_next(struct ctl_table_header *prev)
+static struct list_head *
+lookup_header_list(struct ctl_table_root *root, struct nsproxy *namespaces)
-+{
+ {
+ struct list_head *header_list;
+ header_list = &root->header_list;
+ if (root->lookup)
@@ -564708,7 +670335,7 @@
+
+struct ctl_table_header *__sysctl_head_next(struct nsproxy *namespaces,
+ struct ctl_table_header *prev)
- {
++{
+ struct ctl_table_root *root;
+ struct list_head *header_list;
struct ctl_table_header *head;
@@ -564720,7 +670347,7 @@
tmp = &prev->ctl_entry;
unuse_table(prev);
goto next;
-@@ -1319,14 +1405,38 @@ struct ctl_table_header *sysctl_head_next(struct ctl_table_header *prev)
+@@ -1319,14 +1414,38 @@ struct ctl_table_header *sysctl_head_next(struct ctl_table_header *prev)
spin_unlock(&sysctl_lock);
return head;
next:
@@ -564761,7 +670388,7 @@
#ifdef CONFIG_SYSCTL_SYSCALL
int do_sysctl(int __user *name, int nlen, void __user *oldval, size_t __user *oldlenp,
void __user *newval, size_t newlen)
-@@ -1483,18 +1593,21 @@ static __init int sysctl_init(void)
+@@ -1483,18 +1602,21 @@ static __init int sysctl_init(void)
{
int err;
sysctl_set_parent(NULL, root_table);
@@ -564786,7 +670413,7 @@
*
* The members of the &struct ctl_table structure are used as follows:
*
-@@ -1557,25 +1670,99 @@ core_initcall(sysctl_init);
+@@ -1557,25 +1679,99 @@ core_initcall(sysctl_init);
* This routine returns %NULL on a failure to register, and a pointer
* to the table header on success.
*/
@@ -564899,7 +670526,7 @@
}
/**
-@@ -1604,6 +1791,12 @@ struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
+@@ -1604,6 +1800,12 @@ struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
return NULL;
}
@@ -564912,7 +670539,7 @@
void unregister_sysctl_table(struct ctl_table_header * table)
{
}
-@@ -2662,6 +2855,7 @@ EXPORT_SYMBOL(proc_dostring);
+@@ -2662,6 +2864,7 @@ EXPORT_SYMBOL(proc_dostring);
EXPORT_SYMBOL(proc_doulongvec_minmax);
EXPORT_SYMBOL(proc_doulongvec_ms_jiffies_minmax);
EXPORT_SYMBOL(register_sysctl_table);
@@ -565020,11 +670647,330 @@
}
return error;
}
+diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c
+new file mode 100644
+index 0000000..88cdb10
+--- /dev/null
++++ b/kernel/test_kprobes.c
+@@ -0,0 +1,216 @@
++/*
++ * test_kprobes.c - simple sanity test for *probes
++ *
++ * Copyright IBM Corp. 2008
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it would be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
++ * the GNU General Public License for more details.
++ */
++
++#include <linux/kernel.h>
++#include <linux/kprobes.h>
++#include <linux/random.h>
++
++#define div_factor 3
++
++static u32 rand1, preh_val, posth_val, jph_val;
++static int errors, handler_errors, num_tests;
++
++static noinline u32 kprobe_target(u32 value)
++{
++ /*
++ * gcc ignores noinline on some architectures unless we stuff
++ * sufficient lard into the function. The get_kprobe() here is
++ * just for that.
++ *
++ * NOTE: We aren't concerned about the correctness of get_kprobe()
++ * here; hence, this call is neither under !preempt nor with the
++ * kprobe_mutex held. This is fine(tm)
++ */
++ if (get_kprobe((void *)0xdeadbeef))
++ printk(KERN_INFO "Kprobe smoke test: probe on 0xdeadbeef!\n");
++
++ return (value / div_factor);
++}
++
++static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
++{
++ preh_val = (rand1 / div_factor);
++ return 0;
++}
++
++static void kp_post_handler(struct kprobe *p, struct pt_regs *regs,
++ unsigned long flags)
++{
++ if (preh_val != (rand1 / div_factor)) {
++ handler_errors++;
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "incorrect value in post_handler\n");
++ }
++ posth_val = preh_val + div_factor;
++}
++
++static struct kprobe kp = {
++ .symbol_name = "kprobe_target",
++ .pre_handler = kp_pre_handler,
++ .post_handler = kp_post_handler
++};
++
++static int test_kprobe(void)
++{
++ int ret;
++
++ ret = register_kprobe(&kp);
++ if (ret < 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "register_kprobe returned %d\n", ret);
++ return ret;
++ }
++
++ ret = kprobe_target(rand1);
++ unregister_kprobe(&kp);
++
++ if (preh_val == 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "kprobe pre_handler not called\n");
++ handler_errors++;
++ }
++
++ if (posth_val == 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "kprobe post_handler not called\n");
++ handler_errors++;
++ }
++
++ return 0;
++}
++
++static u32 j_kprobe_target(u32 value)
++{
++ if (value != rand1) {
++ handler_errors++;
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "incorrect value in jprobe handler\n");
++ }
++
++ jph_val = rand1;
++ jprobe_return();
++ return 0;
++}
++
++static struct jprobe jp = {
++ .entry = j_kprobe_target,
++ .kp.symbol_name = "kprobe_target"
++};
++
++static int test_jprobe(void)
++{
++ int ret;
++
++ ret = register_jprobe(&jp);
++ if (ret < 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "register_jprobe returned %d\n", ret);
++ return ret;
++ }
++
++ ret = kprobe_target(rand1);
++ unregister_jprobe(&jp);
++ if (jph_val == 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "jprobe handler not called\n");
++ handler_errors++;
++ }
++
++ return 0;
++}
++
++#ifdef CONFIG_KRETPROBES
++static u32 krph_val;
++
++static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
++{
++ unsigned long ret = regs_return_value(regs);
++
++ if (ret != (rand1 / div_factor)) {
++ handler_errors++;
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "incorrect value in kretprobe handler\n");
++ }
++
++ krph_val = (rand1 / div_factor);
++ return 0;
++}
++
++static struct kretprobe rp = {
++ .handler = return_handler,
++ .kp.symbol_name = "kprobe_target"
++};
++
++static int test_kretprobe(void)
++{
++ int ret;
++
++ ret = register_kretprobe(&rp);
++ if (ret < 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "register_kretprobe returned %d\n", ret);
++ return ret;
++ }
++
++ ret = kprobe_target(rand1);
++ unregister_kretprobe(&rp);
++ if (krph_val == 0) {
++ printk(KERN_ERR "Kprobe smoke test failed: "
++ "kretprobe handler not called\n");
++ handler_errors++;
++ }
++
++ return 0;
++}
++#endif /* CONFIG_KRETPROBES */
++
++int init_test_probes(void)
++{
++ int ret;
++
++ do {
++ rand1 = random32();
++ } while (rand1 <= div_factor);
++
++ printk(KERN_INFO "Kprobe smoke test started\n");
++ num_tests++;
++ ret = test_kprobe();
++ if (ret < 0)
++ errors++;
++
++ num_tests++;
++ ret = test_jprobe();
++ if (ret < 0)
++ errors++;
++
++#ifdef CONFIG_KRETPROBES
++ num_tests++;
++ ret = test_kretprobe();
++ if (ret < 0)
++ errors++;
++#endif /* CONFIG_KRETPROBES */
++
++ if (errors)
++ printk(KERN_ERR "BUG: Kprobe smoke test: %d out of "
++ "%d tests failed\n", errors, num_tests);
++ else if (handler_errors)
++ printk(KERN_ERR "BUG: Kprobe smoke test: %d error(s) "
++ "running handlers\n", handler_errors);
++ else
++ printk(KERN_INFO "Kprobe smoke test passed successfully\n");
++
++ return 0;
++}
+diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
+index 5fb139f..3e59fce 100644
+--- a/kernel/time/clockevents.c
++++ b/kernel/time/clockevents.c
+@@ -41,6 +41,11 @@ unsigned long clockevent_delta2ns(unsigned long latch,
+ {
+ u64 clc = ((u64) latch << evt->shift);
+
++ if (unlikely(!evt->mult)) {
++ evt->mult = 1;
++ WARN_ON(1);
++ }
++
+ do_div(clc, evt->mult);
+ if (clc < 1000)
+ clc = 1000;
+@@ -151,6 +156,14 @@ static void clockevents_notify_released(void)
+ void clockevents_register_device(struct clock_event_device *dev)
+ {
+ BUG_ON(dev->mode != CLOCK_EVT_MODE_UNUSED);
++ /*
++ * A nsec2cyc multiplicator of 0 is invalid and we'd crash
++ * on it, so fix it up and emit a warning:
++ */
++ if (unlikely(!dev->mult)) {
++ dev->mult = 1;
++ WARN_ON(1);
++ }
+
+ spin_lock(&clockevents_lock);
+
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
-index c8a9d13..8d6125a 100644
+index c8a9d13..6e9259a 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
-@@ -441,7 +441,7 @@ static SYSDEV_ATTR(available_clocksource, 0600,
+@@ -142,8 +142,13 @@ static void clocksource_watchdog(unsigned long data)
+ }
+
+ if (!list_empty(&watchdog_list)) {
+- __mod_timer(&watchdog_timer,
+- watchdog_timer.expires + WATCHDOG_INTERVAL);
++ /* Cycle through CPUs to check if the CPUs stay synchronized to
++ * each other. */
++ int next_cpu = next_cpu(raw_smp_processor_id(), cpu_online_map);
++ if (next_cpu >= NR_CPUS)
++ next_cpu = first_cpu(cpu_online_map);
++ watchdog_timer.expires += WATCHDOG_INTERVAL;
++ add_timer_on(&watchdog_timer, next_cpu);
+ }
+ spin_unlock(&watchdog_lock);
+ }
+@@ -165,7 +170,7 @@ static void clocksource_check_watchdog(struct clocksource *cs)
+ if (!started && watchdog) {
+ watchdog_last = watchdog->read();
+ watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
+- add_timer(&watchdog_timer);
++ add_timer_on(&watchdog_timer, first_cpu(cpu_online_map));
+ }
+ } else {
+ if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)
+@@ -175,7 +180,7 @@ static void clocksource_check_watchdog(struct clocksource *cs)
+ if (watchdog)
+ del_timer(&watchdog_timer);
+ watchdog = cs;
+- init_timer(&watchdog_timer);
++ init_timer_deferrable(&watchdog_timer);
+ watchdog_timer.function = clocksource_watchdog;
+
+ /* Reset watchdog cycles */
+@@ -186,7 +191,8 @@ static void clocksource_check_watchdog(struct clocksource *cs)
+ watchdog_last = watchdog->read();
+ watchdog_timer.expires =
+ jiffies + WATCHDOG_INTERVAL;
+- add_timer(&watchdog_timer);
++ add_timer_on(&watchdog_timer,
++ first_cpu(cpu_online_map));
+ }
+ }
+ }
+@@ -331,6 +337,21 @@ void clocksource_change_rating(struct clocksource *cs, int rating)
+ spin_unlock_irqrestore(&clocksource_lock, flags);
+ }
+
++/**
++ * clocksource_unregister - remove a registered clocksource
++ */
++void clocksource_unregister(struct clocksource *cs)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&clocksource_lock, flags);
++ list_del(&cs->list);
++ if (clocksource_override == cs)
++ clocksource_override = NULL;
++ next_clocksource = select_clocksource();
++ spin_unlock_irqrestore(&clocksource_lock, flags);
++}
++
+ #ifdef CONFIG_SYSFS
+ /**
+ * sysfs_show_current_clocksources - sysfs interface for current clocksource
+@@ -441,7 +462,7 @@ static SYSDEV_ATTR(available_clocksource, 0600,
sysfs_show_available_clocksources, NULL);
static struct sysdev_class clocksource_sysclass = {
@@ -565033,19 +670979,150 @@
};
static struct sys_device device_clocksource = {
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index 5b86698..e1bd50c 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -126,9 +126,9 @@ int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
+ /*
+ * Broadcast the event to the cpus, which are set in the mask
+ */
+-int tick_do_broadcast(cpumask_t mask)
++static void tick_do_broadcast(cpumask_t mask)
+ {
+- int ret = 0, cpu = smp_processor_id();
++ int cpu = smp_processor_id();
+ struct tick_device *td;
+
+ /*
+@@ -138,7 +138,6 @@ int tick_do_broadcast(cpumask_t mask)
+ cpu_clear(cpu, mask);
+ td = &per_cpu(tick_cpu_device, cpu);
+ td->evtdev->event_handler(td->evtdev);
+- ret = 1;
+ }
+
+ if (!cpus_empty(mask)) {
+@@ -151,9 +150,7 @@ int tick_do_broadcast(cpumask_t mask)
+ cpu = first_cpu(mask);
+ td = &per_cpu(tick_cpu_device, cpu);
+ td->evtdev->broadcast(mask);
+- ret = 1;
+ }
+- return ret;
+ }
+
+ /*
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index bb13f27..f13f2b7 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -70,8 +70,6 @@ static inline int tick_resume_broadcast_oneshot(struct clock_event_device *bc)
+ * Broadcasting support
+ */
+ #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
+-extern int tick_do_broadcast(cpumask_t mask);
+-
+ extern int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu);
+ extern int tick_check_broadcast_device(struct clock_event_device *dev);
+ extern int tick_is_broadcast_device(struct clock_event_device *dev);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
-index cb89fa8..1a21b6f 100644
+index cb89fa8..63f24b5 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
-@@ -153,6 +153,7 @@ void tick_nohz_update_jiffies(void)
+@@ -9,7 +9,7 @@
+ *
+ * Started by: Thomas Gleixner and Ingo Molnar
+ *
+- * For licencing details see kernel-base/COPYING
++ * Distribute under GPLv2.
+ */
+ #include <linux/cpu.h>
+ #include <linux/err.h>
+@@ -143,6 +143,44 @@ void tick_nohz_update_jiffies(void)
+ local_irq_restore(flags);
+ }
+
++void tick_nohz_stop_idle(int cpu)
++{
++ struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
++
++ if (ts->idle_active) {
++ ktime_t now, delta;
++ now = ktime_get();
++ delta = ktime_sub(now, ts->idle_entrytime);
++ ts->idle_lastupdate = now;
++ ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
++ ts->idle_active = 0;
++ }
++}
++
++static ktime_t tick_nohz_start_idle(int cpu)
++{
++ struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
++ ktime_t now, delta;
++
++ now = ktime_get();
++ if (ts->idle_active) {
++ delta = ktime_sub(now, ts->idle_entrytime);
++ ts->idle_lastupdate = now;
++ ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
++ }
++ ts->idle_entrytime = now;
++ ts->idle_active = 1;
++ return now;
++}
++
++u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time)
++{
++ struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
++
++ *last_update_time = ktime_to_us(ts->idle_lastupdate);
++ return ktime_to_us(ts->idle_sleeptime);
++}
++
+ /**
+ * tick_nohz_stop_sched_tick - stop the idle tick from the idle task
+ *
+@@ -153,14 +191,16 @@ void tick_nohz_update_jiffies(void)
void tick_nohz_stop_sched_tick(void)
{
unsigned long seq, last_jiffies, next_jiffies, delta_jiffies, flags;
+ unsigned long rt_jiffies;
struct tick_sched *ts;
- ktime_t last_update, expires, now, delta;
+- ktime_t last_update, expires, now, delta;
++ ktime_t last_update, expires, now;
struct clock_event_device *dev = __get_cpu_var(tick_cpu_device).evtdev;
-@@ -216,6 +217,10 @@ void tick_nohz_stop_sched_tick(void)
+ int cpu;
+
+ local_irq_save(flags);
+
+ cpu = smp_processor_id();
++ now = tick_nohz_start_idle(cpu);
+ ts = &per_cpu(tick_cpu_sched, cpu);
+
+ /*
+@@ -192,19 +232,7 @@ void tick_nohz_stop_sched_tick(void)
+ }
+ }
+
+- now = ktime_get();
+- /*
+- * When called from irq_exit we need to account the idle sleep time
+- * correctly.
+- */
+- if (ts->tick_stopped) {
+- delta = ktime_sub(now, ts->idle_entrytime);
+- ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
+- }
+-
+- ts->idle_entrytime = now;
+ ts->idle_calls++;
+-
+ /* Read jiffies and the time when jiffies were updated last */
+ do {
+ seq = read_seqbegin(&xtime_lock);
+@@ -216,6 +244,10 @@ void tick_nohz_stop_sched_tick(void)
next_jiffies = get_next_timer_interrupt(last_jiffies);
delta_jiffies = next_jiffies - last_jiffies;
@@ -565056,7 +671133,56 @@
if (rcu_needs_cpu(cpu))
delta_jiffies = 1;
/*
-@@ -509,7 +514,6 @@ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
+@@ -291,7 +323,7 @@ void tick_nohz_stop_sched_tick(void)
+ /* Check, if the timer was already in the past */
+ if (hrtimer_active(&ts->sched_timer))
+ goto out;
+- } else if(!tick_program_event(expires, 0))
++ } else if (!tick_program_event(expires, 0))
+ goto out;
+ /*
+ * We are past the event already. So we crossed a
+@@ -332,23 +364,22 @@ void tick_nohz_restart_sched_tick(void)
+ int cpu = smp_processor_id();
+ struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
+ unsigned long ticks;
+- ktime_t now, delta;
++ ktime_t now;
+
+- if (!ts->tick_stopped)
++ local_irq_disable();
++ tick_nohz_stop_idle(cpu);
++
++ if (!ts->tick_stopped) {
++ local_irq_enable();
+ return;
++ }
+
+ /* Update jiffies first */
+- now = ktime_get();
+-
+- local_irq_disable();
+ select_nohz_load_balancer(0);
++ now = ktime_get();
+ tick_do_update_jiffies64(now);
+ cpu_clear(cpu, nohz_cpu_mask);
+
+- /* Account the idle time */
+- delta = ktime_sub(now, ts->idle_entrytime);
+- ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
+-
+ /*
+ * We stopped the tick in idle. Update process times would miss the
+ * time we slept as update_process_times does only a 1 tick
+@@ -502,14 +533,13 @@ static inline void tick_nohz_switch_to_nohz(void) { }
+ */
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ /*
+- * We rearm the timer until we get disabled by the idle code
++ * We rearm the timer until we get disabled by the idle code.
+ * Called with interrupts disabled and timer->base->cpu_base->lock held.
+ */
+ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
{
struct tick_sched *ts =
container_of(timer, struct tick_sched, sched_timer);
@@ -565064,7 +671190,7 @@
struct pt_regs *regs = get_irq_regs();
ktime_t now = ktime_get();
int cpu = smp_processor_id();
-@@ -547,15 +551,8 @@ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
+@@ -547,15 +577,8 @@ static enum hrtimer_restart tick_sched_timer(struct hrtimer *timer)
touch_softlockup_watchdog();
ts->idle_jiffies++;
}
@@ -565081,10 +671207,80 @@
/* Do not restart, when we are in the idle loop */
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
-index e5e466b..ab46ae8 100644
+index e5e466b..092a236 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
-@@ -335,9 +335,9 @@ static int timekeeping_suspend(struct sys_device *dev, pm_message_t state)
+@@ -82,13 +82,12 @@ static inline s64 __get_nsec_offset(void)
+ }
+
+ /**
+- * __get_realtime_clock_ts - Returns the time of day in a timespec
++ * getnstimeofday - Returns the time of day in a timespec
+ * @ts: pointer to the timespec to be set
+ *
+- * Returns the time of day in a timespec. Used by
+- * do_gettimeofday() and get_realtime_clock_ts().
++ * Returns the time of day in a timespec.
+ */
+-static inline void __get_realtime_clock_ts(struct timespec *ts)
++void getnstimeofday(struct timespec *ts)
+ {
+ unsigned long seq;
+ s64 nsecs;
+@@ -104,30 +103,19 @@ static inline void __get_realtime_clock_ts(struct timespec *ts)
+ timespec_add_ns(ts, nsecs);
+ }
+
+-/**
+- * getnstimeofday - Returns the time of day in a timespec
+- * @ts: pointer to the timespec to be set
+- *
+- * Returns the time of day in a timespec.
+- */
+-void getnstimeofday(struct timespec *ts)
+-{
+- __get_realtime_clock_ts(ts);
+-}
+-
+ EXPORT_SYMBOL(getnstimeofday);
+
+ /**
+ * do_gettimeofday - Returns the time of day in a timeval
+ * @tv: pointer to the timeval to be set
+ *
+- * NOTE: Users should be converted to using get_realtime_clock_ts()
++ * NOTE: Users should be converted to using getnstimeofday()
+ */
+ void do_gettimeofday(struct timeval *tv)
+ {
+ struct timespec now;
+
+- __get_realtime_clock_ts(&now);
++ getnstimeofday(&now);
+ tv->tv_sec = now.tv_sec;
+ tv->tv_usec = now.tv_nsec/1000;
+ }
+@@ -198,7 +186,8 @@ static void change_clocksource(void)
+
+ clock->error = 0;
+ clock->xtime_nsec = 0;
+- clocksource_calculate_interval(clock, NTP_INTERVAL_LENGTH);
++ clocksource_calculate_interval(clock,
++ (unsigned long)(current_tick_length()>>TICK_LENGTH_SHIFT));
+
+ tick_clock_notify();
+
+@@ -255,7 +244,8 @@ void __init timekeeping_init(void)
+ ntp_clear();
+
+ clock = clocksource_get_next();
+- clocksource_calculate_interval(clock, NTP_INTERVAL_LENGTH);
++ clocksource_calculate_interval(clock,
++ (unsigned long)(current_tick_length()>>TICK_LENGTH_SHIFT));
+ clock->cycle_last = clocksource_read(clock);
+
+ xtime.tv_sec = sec;
+@@ -335,9 +325,9 @@ static int timekeeping_suspend(struct sys_device *dev, pm_message_t state)
/* sysfs resume/suspend bits for timekeeping */
static struct sysdev_class timekeeping_sysclass = {
@@ -565095,20 +671291,248 @@
};
static struct sys_device device_timer = {
+diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c
+index c36bb7e..417da8c 100644
+--- a/kernel/time/timer_stats.c
++++ b/kernel/time/timer_stats.c
+@@ -26,7 +26,7 @@
+ * the pid and cmdline from the owner process if applicable.
+ *
+ * Start/stop data collection:
+- * # echo 1[0] >/proc/timer_stats
++ * # echo [1|0] >/proc/timer_stats
+ *
+ * Display the information collected so far:
+ * # cat /proc/timer_stats
diff --git a/kernel/timer.c b/kernel/timer.c
-index 2a00c22..f739dfb 100644
+index 2a00c22..23f7ead 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
-@@ -896,7 +896,7 @@ static void run_timer_softirq(struct softirq_action *h)
+@@ -58,59 +58,57 @@ EXPORT_SYMBOL(jiffies_64);
+ #define TVN_MASK (TVN_SIZE - 1)
+ #define TVR_MASK (TVR_SIZE - 1)
+
+-typedef struct tvec_s {
++struct tvec {
+ struct list_head vec[TVN_SIZE];
+-} tvec_t;
++};
+
+-typedef struct tvec_root_s {
++struct tvec_root {
+ struct list_head vec[TVR_SIZE];
+-} tvec_root_t;
++};
+
+-struct tvec_t_base_s {
++struct tvec_base {
+ spinlock_t lock;
+ struct timer_list *running_timer;
+ unsigned long timer_jiffies;
+- tvec_root_t tv1;
+- tvec_t tv2;
+- tvec_t tv3;
+- tvec_t tv4;
+- tvec_t tv5;
++ struct tvec_root tv1;
++ struct tvec tv2;
++ struct tvec tv3;
++ struct tvec tv4;
++ struct tvec tv5;
+ } ____cacheline_aligned;
+
+-typedef struct tvec_t_base_s tvec_base_t;
+-
+-tvec_base_t boot_tvec_bases;
++struct tvec_base boot_tvec_bases;
+ EXPORT_SYMBOL(boot_tvec_bases);
+-static DEFINE_PER_CPU(tvec_base_t *, tvec_bases) = &boot_tvec_bases;
++static DEFINE_PER_CPU(struct tvec_base *, tvec_bases) = &boot_tvec_bases;
+
+ /*
+- * Note that all tvec_bases is 2 byte aligned and lower bit of
++ * Note that all tvec_bases are 2 byte aligned and lower bit of
+ * base in timer_list is guaranteed to be zero. Use the LSB for
+ * the new flag to indicate whether the timer is deferrable
+ */
+ #define TBASE_DEFERRABLE_FLAG (0x1)
+
+ /* Functions below help us manage 'deferrable' flag */
+-static inline unsigned int tbase_get_deferrable(tvec_base_t *base)
++static inline unsigned int tbase_get_deferrable(struct tvec_base *base)
+ {
+ return ((unsigned int)(unsigned long)base & TBASE_DEFERRABLE_FLAG);
+ }
+
+-static inline tvec_base_t *tbase_get_base(tvec_base_t *base)
++static inline struct tvec_base *tbase_get_base(struct tvec_base *base)
+ {
+- return ((tvec_base_t *)((unsigned long)base & ~TBASE_DEFERRABLE_FLAG));
++ return ((struct tvec_base *)((unsigned long)base & ~TBASE_DEFERRABLE_FLAG));
+ }
+
+ static inline void timer_set_deferrable(struct timer_list *timer)
+ {
+- timer->base = ((tvec_base_t *)((unsigned long)(timer->base) |
++ timer->base = ((struct tvec_base *)((unsigned long)(timer->base) |
+ TBASE_DEFERRABLE_FLAG));
+ }
+
+ static inline void
+-timer_set_base(struct timer_list *timer, tvec_base_t *new_base)
++timer_set_base(struct timer_list *timer, struct tvec_base *new_base)
+ {
+- timer->base = (tvec_base_t *)((unsigned long)(new_base) |
++ timer->base = (struct tvec_base *)((unsigned long)(new_base) |
+ tbase_get_deferrable(timer->base));
+ }
+
+@@ -246,7 +244,7 @@ unsigned long round_jiffies_relative(unsigned long j)
+ EXPORT_SYMBOL_GPL(round_jiffies_relative);
+
+
+-static inline void set_running_timer(tvec_base_t *base,
++static inline void set_running_timer(struct tvec_base *base,
+ struct timer_list *timer)
+ {
+ #ifdef CONFIG_SMP
+@@ -254,7 +252,7 @@ static inline void set_running_timer(tvec_base_t *base,
+ #endif
+ }
+
+-static void internal_add_timer(tvec_base_t *base, struct timer_list *timer)
++static void internal_add_timer(struct tvec_base *base, struct timer_list *timer)
+ {
+ unsigned long expires = timer->expires;
+ unsigned long idx = expires - base->timer_jiffies;
+@@ -371,14 +369,14 @@ static inline void detach_timer(struct timer_list *timer,
+ * possible to set timer->base = NULL and drop the lock: the timer remains
+ * locked.
+ */
+-static tvec_base_t *lock_timer_base(struct timer_list *timer,
++static struct tvec_base *lock_timer_base(struct timer_list *timer,
+ unsigned long *flags)
+ __acquires(timer->base->lock)
+ {
+- tvec_base_t *base;
++ struct tvec_base *base;
+
+ for (;;) {
+- tvec_base_t *prelock_base = timer->base;
++ struct tvec_base *prelock_base = timer->base;
+ base = tbase_get_base(prelock_base);
+ if (likely(base != NULL)) {
+ spin_lock_irqsave(&base->lock, *flags);
+@@ -393,7 +391,7 @@ static tvec_base_t *lock_timer_base(struct timer_list *timer,
+
+ int __mod_timer(struct timer_list *timer, unsigned long expires)
+ {
+- tvec_base_t *base, *new_base;
++ struct tvec_base *base, *new_base;
+ unsigned long flags;
+ int ret = 0;
+
+@@ -445,7 +443,7 @@ EXPORT_SYMBOL(__mod_timer);
+ */
+ void add_timer_on(struct timer_list *timer, int cpu)
+ {
+- tvec_base_t *base = per_cpu(tvec_bases, cpu);
++ struct tvec_base *base = per_cpu(tvec_bases, cpu);
+ unsigned long flags;
+
+ timer_stats_timer_set_start_info(timer);
+@@ -508,7 +506,7 @@ EXPORT_SYMBOL(mod_timer);
+ */
+ int del_timer(struct timer_list *timer)
+ {
+- tvec_base_t *base;
++ struct tvec_base *base;
+ unsigned long flags;
+ int ret = 0;
+
+@@ -539,7 +537,7 @@ EXPORT_SYMBOL(del_timer);
+ */
+ int try_to_del_timer_sync(struct timer_list *timer)
+ {
+- tvec_base_t *base;
++ struct tvec_base *base;
+ unsigned long flags;
+ int ret = -1;
+
+@@ -591,7 +589,7 @@ int del_timer_sync(struct timer_list *timer)
+ EXPORT_SYMBOL(del_timer_sync);
+ #endif
+
+-static int cascade(tvec_base_t *base, tvec_t *tv, int index)
++static int cascade(struct tvec_base *base, struct tvec *tv, int index)
{
- tvec_base_t *base = __get_cpu_var(tvec_bases);
+ /* cascade all the timers from tv up one level */
+ struct timer_list *timer, *tmp;
+@@ -620,7 +618,7 @@ static int cascade(tvec_base_t *base, tvec_t *tv, int index)
+ * This function cascades all vectors and executes all expired timer
+ * vectors.
+ */
+-static inline void __run_timers(tvec_base_t *base)
++static inline void __run_timers(struct tvec_base *base)
+ {
+ struct timer_list *timer;
+
+@@ -657,7 +655,7 @@ static inline void __run_timers(tvec_base_t *base)
+ int preempt_count = preempt_count();
+ fn(data);
+ if (preempt_count != preempt_count()) {
+- printk(KERN_WARNING "huh, entered %p "
++ printk(KERN_ERR "huh, entered %p "
+ "with preempt_count %08x, exited"
+ " with %08x?\n",
+ fn, preempt_count,
+@@ -678,13 +676,13 @@ static inline void __run_timers(tvec_base_t *base)
+ * is used on S/390 to stop all activity when a cpus is idle.
+ * This functions needs to be called disabled.
+ */
+-static unsigned long __next_timer_interrupt(tvec_base_t *base)
++static unsigned long __next_timer_interrupt(struct tvec_base *base)
+ {
+ unsigned long timer_jiffies = base->timer_jiffies;
+ unsigned long expires = timer_jiffies + NEXT_TIMER_MAX_DELTA;
+ int index, slot, array, found = 0;
+ struct timer_list *nte;
+- tvec_t *varray[4];
++ struct tvec *varray[4];
+
+ /* Look for timer events in tv1. */
+ index = slot = timer_jiffies & TVR_MASK;
+@@ -716,7 +714,7 @@ cascade:
+ varray[3] = &base->tv5;
+
+ for (array = 0; array < 4; array++) {
+- tvec_t *varp = varray[array];
++ struct tvec *varp = varray[array];
+
+ index = slot = timer_jiffies & TVN_MASK;
+ do {
+@@ -795,7 +793,7 @@ static unsigned long cmp_next_hrtimer_event(unsigned long now,
+ */
+ unsigned long get_next_timer_interrupt(unsigned long now)
+ {
+- tvec_base_t *base = __get_cpu_var(tvec_bases);
++ struct tvec_base *base = __get_cpu_var(tvec_bases);
+ unsigned long expires;
+
+ spin_lock(&base->lock);
+@@ -894,9 +892,9 @@ static inline void calc_load(unsigned long ticks)
+ */
+ static void run_timer_softirq(struct softirq_action *h)
+ {
+- tvec_base_t *base = __get_cpu_var(tvec_bases);
++ struct tvec_base *base = __get_cpu_var(tvec_bases);
- hrtimer_run_queues();
+ hrtimer_run_pending();
if (time_after_eq(jiffies, base->timer_jiffies))
__run_timers(base);
-@@ -907,6 +907,7 @@ static void run_timer_softirq(struct softirq_action *h)
+@@ -907,6 +905,7 @@ static void run_timer_softirq(struct softirq_action *h)
*/
void run_local_timers(void)
{
@@ -565116,6 +671540,35 @@
raise_softirq(TIMER_SOFTIRQ);
softlockup_tick();
}
+@@ -1222,7 +1221,7 @@ static struct lock_class_key base_lock_keys[NR_CPUS];
+ static int __cpuinit init_timers_cpu(int cpu)
+ {
+ int j;
+- tvec_base_t *base;
++ struct tvec_base *base;
+ static char __cpuinitdata tvec_base_done[NR_CPUS];
+
+ if (!tvec_base_done[cpu]) {
+@@ -1277,7 +1276,7 @@ static int __cpuinit init_timers_cpu(int cpu)
+ }
+
+ #ifdef CONFIG_HOTPLUG_CPU
+-static void migrate_timer_list(tvec_base_t *new_base, struct list_head *head)
++static void migrate_timer_list(struct tvec_base *new_base, struct list_head *head)
+ {
+ struct timer_list *timer;
+
+@@ -1291,8 +1290,8 @@ static void migrate_timer_list(tvec_base_t *new_base, struct list_head *head)
+
+ static void __cpuinit migrate_timers(int cpu)
+ {
+- tvec_base_t *old_base;
+- tvec_base_t *new_base;
++ struct tvec_base *old_base;
++ struct tvec_base *new_base;
+ int i;
+
+ BUG_ON(cpu_online(cpu));
diff --git a/kernel/user.c b/kernel/user.c
index 8320a87..bc1c48d 100644
--- a/kernel/user.c
@@ -565485,7 +671938,7 @@
case CPU_ONLINE:
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
-index a601093..c4ecb29 100644
+index a601093..89f4035 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -79,6 +79,38 @@ config HEADERS_CHECK
@@ -565527,7 +671980,38 @@
config DEBUG_KERNEL
bool "Kernel debugging"
help
-@@ -517,4 +549,18 @@ config FAULT_INJECTION_STACKTRACE_FILTER
+@@ -462,6 +494,30 @@ config RCU_TORTURE_TEST
+ Say M if you want the RCU torture tests to build as a module.
+ Say N if you are unsure.
+
++config KPROBES_SANITY_TEST
++ bool "Kprobes sanity tests"
++ depends on DEBUG_KERNEL
++ depends on KPROBES
++ default n
++ help
++ This option provides for testing basic kprobes functionality on
++ boot. A sample kprobe, jprobe and kretprobe are inserted and
++ verified for functionality.
++
++ Say N if you are unsure.
++
++config BACKTRACE_SELF_TEST
++ tristate "Self test for the backtrace code"
++ depends on DEBUG_KERNEL
++ default n
++ help
++ This option provides a kernel module that can be used to test
++ the kernel stack backtrace code. This option is not useful
++ for distributions or general kernels, but only for kernel
++ developers working on architecture code.
++
++ Say N if you are unsure.
++
+ config LKDTM
+ tristate "Linux Kernel Dump Test Tool Module"
+ depends on DEBUG_KERNEL
+@@ -517,4 +573,46 @@ config FAULT_INJECTION_STACKTRACE_FILTER
help
Provide stacktrace filter for fault-injection capabilities
@@ -565544,6 +672028,34 @@
+ Enable this option if you want to use the LatencyTOP tool
+ to find out which userspace is blocking on what kernel operations.
+
++config PROVIDE_OHCI1394_DMA_INIT
++ bool "Provide code for enabling DMA over FireWire early on boot"
++ depends on PCI && X86
++ help
++ If you want to debug problems which hang or crash the kernel early
++ on boot and the crashing machine has a FireWire port, you can use
++ this feature to remotely access the memory of the crashed machine
++ over FireWire. This employs remote DMA as part of the OHCI1394
++ specification which is now the standard for FireWire controllers.
++
++ With remote DMA, you can monitor the printk buffer remotely using
++ firescope and access all memory below 4GB using fireproxy from gdb.
++ Even controlling a kernel debugger is possible using remote DMA.
++
++ Usage:
++
++ If ohci1394_dma=early is used as boot parameter, it will initialize
++ all OHCI1394 controllers which are found in the PCI config space.
++
++ As all changes to the FireWire bus such as enabling and disabling
++ devices cause a bus reset and thereby disable remote DMA for all
++ devices, be sure to have the cable plugged and FireWire enabled on
++ the debugging host before booting the debug target for debugging.
++
++ This code (~1k) is freed after boot. By then, the firewire stack
++ in charge of the OHCI-1394 controllers should be used instead.
++
++ See Documentation/debugging-via-ohci1394.txt for more information.
+
source "samples/Kconfig"
diff --git a/lib/Makefile b/lib/Makefile
@@ -566977,6 +673489,46 @@
+}
+EXPORT_SYMBOL_GPL(pcounter_free);
+
+diff --git a/lib/rwsem.c b/lib/rwsem.c
+index 7d02700..3e3365e 100644
+--- a/lib/rwsem.c
++++ b/lib/rwsem.c
+@@ -187,7 +187,7 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
+ /*
+ * wait for the read lock to be granted
+ */
+-struct rw_semaphore fastcall __sched *
++asmregparm struct rw_semaphore __sched *
+ rwsem_down_read_failed(struct rw_semaphore *sem)
+ {
+ struct rwsem_waiter waiter;
+@@ -201,7 +201,7 @@ rwsem_down_read_failed(struct rw_semaphore *sem)
+ /*
+ * wait for the write lock to be granted
+ */
+-struct rw_semaphore fastcall __sched *
++asmregparm struct rw_semaphore __sched *
+ rwsem_down_write_failed(struct rw_semaphore *sem)
+ {
+ struct rwsem_waiter waiter;
+@@ -216,7 +216,7 @@ rwsem_down_write_failed(struct rw_semaphore *sem)
+ * handle waking up a waiter on the semaphore
+ * - up_read/up_write has decremented the active part of count if we come here
+ */
+-struct rw_semaphore fastcall *rwsem_wake(struct rw_semaphore *sem)
++asmregparm struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
+ {
+ unsigned long flags;
+
+@@ -236,7 +236,7 @@ struct rw_semaphore fastcall *rwsem_wake(struct rw_semaphore *sem)
+ * - caller incremented waiting part of count and discovered it still negative
+ * - just wake up any readers at the front of the queue
+ */
+-struct rw_semaphore fastcall *rwsem_downgrade_wake(struct rw_semaphore *sem)
++asmregparm struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
+ {
+ unsigned long flags;
+
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
new file mode 100644
index 0000000..acca490
@@ -567290,11 +673842,89 @@
default "1"
config VIRT_TO_BUS
+diff --git a/mm/memory.c b/mm/memory.c
+index 4b0144b..d902d0e 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -513,8 +513,7 @@ again:
+ if (progress >= 32) {
+ progress = 0;
+ if (need_resched() ||
+- need_lockbreak(src_ptl) ||
+- need_lockbreak(dst_ptl))
++ spin_needbreak(src_ptl) || spin_needbreak(dst_ptl))
+ break;
+ }
+ if (pte_none(*src_pte)) {
+@@ -853,7 +852,7 @@ unsigned long unmap_vmas(struct mmu_gather **tlbp,
+ tlb_finish_mmu(*tlbp, tlb_start, start);
+
+ if (need_resched() ||
+- (i_mmap_lock && need_lockbreak(i_mmap_lock))) {
++ (i_mmap_lock && spin_needbreak(i_mmap_lock))) {
+ if (i_mmap_lock) {
+ *tlbp = NULL;
+ goto out;
+@@ -1768,8 +1767,7 @@ again:
+
+ restart_addr = zap_page_range(vma, start_addr,
+ end_addr - start_addr, details);
+- need_break = need_resched() ||
+- need_lockbreak(details->i_mmap_lock);
++ need_break = need_resched() || spin_needbreak(details->i_mmap_lock);
+
+ if (restart_addr >= end_addr) {
+ /* We have now completed this vma: mark it so */
+@@ -2756,3 +2754,34 @@ int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, in
+
+ return buf - old_buf;
+ }
++
++/*
++ * Print the name of a VMA.
++ */
++void print_vma_addr(char *prefix, unsigned long ip)
++{
++ struct mm_struct *mm = current->mm;
++ struct vm_area_struct *vma;
++
++ down_read(&mm->mmap_sem);
++ vma = find_vma(mm, ip);
++ if (vma && vma->vm_file) {
++ struct file *f = vma->vm_file;
++ char *buf = (char *)__get_free_page(GFP_KERNEL);
++ if (buf) {
++ char *p, *s;
++
++ p = d_path(f->f_dentry, f->f_vfsmnt, buf, PAGE_SIZE);
++ if (IS_ERR(p))
++ p = "?";
++ s = strrchr(p, '/');
++ if (s)
++ p = s+1;
++ printk("%s%s[%lx+%lx]", prefix, p,
++ vma->vm_start,
++ vma->vm_end - vma->vm_start);
++ free_page((unsigned long)buf);
++ }
++ }
++ up_read(&current->mm->mmap_sem);
++}
diff --git a/mm/mmap.c b/mm/mmap.c
-index 15678aa..bfa389f 100644
+index 15678aa..d2b6d44 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
-@@ -1620,7 +1620,7 @@ static inline int expand_downwards(struct vm_area_struct *vma,
+@@ -251,7 +251,8 @@ asmlinkage unsigned long sys_brk(unsigned long brk)
+ * not page aligned -Ram Gupta
+ */
+ rlim = current->signal->rlim[RLIMIT_DATA].rlim_cur;
+- if (rlim < RLIM_INFINITY && brk - mm->start_data > rlim)
++ if (rlim < RLIM_INFINITY && (brk - mm->start_brk) +
++ (mm->end_data - mm->start_data) > rlim)
+ goto out;
+
+ newbrk = PAGE_ALIGN(brk);
+@@ -1620,7 +1621,7 @@ static inline int expand_downwards(struct vm_area_struct *vma,
return -ENOMEM;
address &= PAGE_MASK;
@@ -567303,7 +673933,7 @@
if (error)
return error;
-@@ -1941,7 +1941,7 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
+@@ -1941,7 +1942,7 @@ unsigned long do_brk(unsigned long addr, unsigned long len)
if (is_hugepage_only_range(mm, addr, len))
return -EINVAL;
@@ -609380,7 +716010,7 @@
.reroute = nf_ip6_reroute,
.route_key_size = sizeof(struct ip6_rt_info),
diff --git a/net/ipv6/netfilter/Kconfig b/net/ipv6/netfilter/Kconfig
-index 838b8dd..4fc0b02 100644
+index 838b8dd..6cae547 100644
--- a/net/ipv6/netfilter/Kconfig
+++ b/net/ipv6/netfilter/Kconfig
@@ -2,12 +2,13 @@
@@ -609482,7 +716112,7 @@
- tristate "IPv6 Extension Headers Match"
+ tristate '"ipv6header" IPv6 Extension Headers Match'
depends on IP6_NF_IPTABLES
-+ depends on NETFILTER_ADVANCED
++ default m if NETFILTER_ADVANCED=n
help
This module allows one to match packets based upon
the ipv6 extension headers.
@@ -645894,6 +752524,128 @@
out:
return err;
}
+diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
+index 1ea2755..bcd9abd 100644
+--- a/net/sunrpc/auth.c
++++ b/net/sunrpc/auth.c
+@@ -51,6 +51,7 @@ rpcauth_register(const struct rpc_authops *ops)
+ spin_unlock(&rpc_authflavor_lock);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_register);
+
+ int
+ rpcauth_unregister(const struct rpc_authops *ops)
+@@ -68,6 +69,7 @@ rpcauth_unregister(const struct rpc_authops *ops)
+ spin_unlock(&rpc_authflavor_lock);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_unregister);
+
+ struct rpc_auth *
+ rpcauth_create(rpc_authflavor_t pseudoflavor, struct rpc_clnt *clnt)
+@@ -102,6 +104,7 @@ rpcauth_create(rpc_authflavor_t pseudoflavor, struct rpc_clnt *clnt)
+ out:
+ return auth;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_create);
+
+ void
+ rpcauth_release(struct rpc_auth *auth)
+@@ -151,6 +154,7 @@ rpcauth_init_credcache(struct rpc_auth *auth)
+ auth->au_credcache = new;
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_init_credcache);
+
+ /*
+ * Destroy a list of credentials
+@@ -213,6 +217,7 @@ rpcauth_destroy_credcache(struct rpc_auth *auth)
+ kfree(cache);
+ }
+ }
++EXPORT_SYMBOL_GPL(rpcauth_destroy_credcache);
+
+ /*
+ * Remove stale credentials. Avoid sleeping inside the loop.
+@@ -332,6 +337,7 @@ found:
+ out:
+ return cred;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_lookup_credcache);
+
+ struct rpc_cred *
+ rpcauth_lookupcred(struct rpc_auth *auth, int flags)
+@@ -350,6 +356,7 @@ rpcauth_lookupcred(struct rpc_auth *auth, int flags)
+ put_group_info(acred.group_info);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(rpcauth_lookupcred);
+
+ void
+ rpcauth_init_cred(struct rpc_cred *cred, const struct auth_cred *acred,
+@@ -366,7 +373,7 @@ rpcauth_init_cred(struct rpc_cred *cred, const struct auth_cred *acred,
+ #endif
+ cred->cr_uid = acred->uid;
+ }
+-EXPORT_SYMBOL(rpcauth_init_cred);
++EXPORT_SYMBOL_GPL(rpcauth_init_cred);
+
+ struct rpc_cred *
+ rpcauth_bindcred(struct rpc_task *task)
+@@ -378,6 +385,7 @@ rpcauth_bindcred(struct rpc_task *task)
+ .group_info = current->group_info,
+ };
+ struct rpc_cred *ret;
++ sigset_t oldset;
+ int flags = 0;
+
+ dprintk("RPC: %5u looking up %s cred\n",
+@@ -385,7 +393,9 @@ rpcauth_bindcred(struct rpc_task *task)
+ get_group_info(acred.group_info);
+ if (task->tk_flags & RPC_TASK_ROOTCREDS)
+ flags |= RPCAUTH_LOOKUP_ROOTCREDS;
++ rpc_clnt_sigmask(task->tk_client, &oldset);
+ ret = auth->au_ops->lookup_cred(auth, &acred, flags);
++ rpc_clnt_sigunmask(task->tk_client, &oldset);
+ if (!IS_ERR(ret))
+ task->tk_msg.rpc_cred = ret;
+ else
+@@ -435,6 +445,7 @@ need_lock:
+ out_destroy:
+ cred->cr_ops->crdestroy(cred);
+ }
++EXPORT_SYMBOL_GPL(put_rpccred);
+
+ void
+ rpcauth_unbindcred(struct rpc_task *task)
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 1f2d85e..6dac387 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -472,16 +472,15 @@ gss_pipe_upcall(struct file *filp, struct rpc_pipe_msg *msg,
+ char __user *dst, size_t buflen)
+ {
+ char *data = (char *)msg->data + msg->copied;
+- ssize_t mlen = msg->len;
+- ssize_t left;
++ size_t mlen = min(msg->len, buflen);
++ unsigned long left;
+
+- if (mlen > buflen)
+- mlen = buflen;
+ left = copy_to_user(dst, data, mlen);
+- if (left < 0) {
+- msg->errno = left;
+- return left;
++ if (left == mlen) {
++ msg->errno = -EFAULT;
++ return -EFAULT;
+ }
++
+ mlen -= left;
+ msg->copied += mlen;
+ msg->errno = 0;
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 8e05557..73f053d 100644
--- a/net/sunrpc/cache.c
@@ -645914,12 +752666,975 @@
{
struct cache_detail *cd = ((struct handle*)m->private)->cd;
read_unlock(&cd->hash_lock);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 76be83e..924916c 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -30,6 +30,7 @@
+ #include <linux/smp_lock.h>
+ #include <linux/utsname.h>
+ #include <linux/workqueue.h>
++#include <linux/in6.h>
+
+ #include <linux/sunrpc/clnt.h>
+ #include <linux/sunrpc/rpc_pipe_fs.h>
+@@ -121,8 +122,9 @@ rpc_setup_pipedir(struct rpc_clnt *clnt, char *dir_name)
+ }
+ }
+
+-static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, struct rpc_program *program, u32 vers, rpc_authflavor_t flavor)
++static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args, struct rpc_xprt *xprt)
+ {
++ struct rpc_program *program = args->program;
+ struct rpc_version *version;
+ struct rpc_clnt *clnt = NULL;
+ struct rpc_auth *auth;
+@@ -131,13 +133,13 @@ static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, s
+
+ /* sanity check the name before trying to print it */
+ err = -EINVAL;
+- len = strlen(servname);
++ len = strlen(args->servername);
+ if (len > RPC_MAXNETNAMELEN)
+ goto out_no_rpciod;
+ len++;
+
+ dprintk("RPC: creating %s client for %s (xprt %p)\n",
+- program->name, servname, xprt);
++ program->name, args->servername, xprt);
+
+ err = rpciod_up();
+ if (err)
+@@ -145,7 +147,11 @@ static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, s
+ err = -EINVAL;
+ if (!xprt)
+ goto out_no_xprt;
+- if (vers >= program->nrvers || !(version = program->version[vers]))
++
++ if (args->version >= program->nrvers)
++ goto out_err;
++ version = program->version[args->version];
++ if (version == NULL)
+ goto out_err;
+
+ err = -ENOMEM;
+@@ -157,12 +163,12 @@ static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, s
+ clnt->cl_server = clnt->cl_inline_name;
+ if (len > sizeof(clnt->cl_inline_name)) {
+ char *buf = kmalloc(len, GFP_KERNEL);
+- if (buf != 0)
++ if (buf != NULL)
+ clnt->cl_server = buf;
+ else
+ len = sizeof(clnt->cl_inline_name);
+ }
+- strlcpy(clnt->cl_server, servname, len);
++ strlcpy(clnt->cl_server, args->servername, len);
+
+ clnt->cl_xprt = xprt;
+ clnt->cl_procinfo = version->procs;
+@@ -182,8 +188,15 @@ static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, s
+ if (!xprt_bound(clnt->cl_xprt))
+ clnt->cl_autobind = 1;
+
++ clnt->cl_timeout = xprt->timeout;
++ if (args->timeout != NULL) {
++ memcpy(&clnt->cl_timeout_default, args->timeout,
++ sizeof(clnt->cl_timeout_default));
++ clnt->cl_timeout = &clnt->cl_timeout_default;
++ }
++
+ clnt->cl_rtt = &clnt->cl_rtt_default;
+- rpc_init_rtt(&clnt->cl_rtt_default, xprt->timeout.to_initval);
++ rpc_init_rtt(&clnt->cl_rtt_default, clnt->cl_timeout->to_initval);
+
+ kref_init(&clnt->cl_kref);
+
+@@ -191,10 +204,10 @@ static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, s
+ if (err < 0)
+ goto out_no_path;
+
+- auth = rpcauth_create(flavor, clnt);
++ auth = rpcauth_create(args->authflavor, clnt);
+ if (IS_ERR(auth)) {
+ printk(KERN_INFO "RPC: Couldn't create auth handle (flavor %u)\n",
+- flavor);
++ args->authflavor);
+ err = PTR_ERR(auth);
+ goto out_no_auth;
+ }
+@@ -245,9 +258,8 @@ struct rpc_clnt *rpc_create(struct rpc_create_args *args)
+ .srcaddr = args->saddress,
+ .dstaddr = args->address,
+ .addrlen = args->addrsize,
+- .timeout = args->timeout
+ };
+- char servername[20];
++ char servername[48];
+
+ xprt = xprt_create_transport(&xprtargs);
+ if (IS_ERR(xprt))
+@@ -258,13 +270,34 @@ struct rpc_clnt *rpc_create(struct rpc_create_args *args)
+ * up a string representation of the passed-in address.
+ */
+ if (args->servername == NULL) {
+- struct sockaddr_in *addr =
+- (struct sockaddr_in *) args->address;
+- snprintf(servername, sizeof(servername), NIPQUAD_FMT,
+- NIPQUAD(addr->sin_addr.s_addr));
++ servername[0] = '\0';
++ switch (args->address->sa_family) {
++ case AF_INET: {
++ struct sockaddr_in *sin =
++ (struct sockaddr_in *)args->address;
++ snprintf(servername, sizeof(servername), NIPQUAD_FMT,
++ NIPQUAD(sin->sin_addr.s_addr));
++ break;
++ }
++ case AF_INET6: {
++ struct sockaddr_in6 *sin =
++ (struct sockaddr_in6 *)args->address;
++ snprintf(servername, sizeof(servername), NIP6_FMT,
++ NIP6(sin->sin6_addr));
++ break;
++ }
++ default:
++ /* caller wants default server name, but
++ * address family isn't recognized. */
++ return ERR_PTR(-EINVAL);
++ }
+ args->servername = servername;
+ }
+
++ xprt = xprt_create_transport(&xprtargs);
++ if (IS_ERR(xprt))
++ return (struct rpc_clnt *)xprt;
++
+ /*
+ * By default, kernel RPC client connects from a reserved port.
+ * CAP_NET_BIND_SERVICE will not be set for unprivileged requesters,
+@@ -275,8 +308,7 @@ struct rpc_clnt *rpc_create(struct rpc_create_args *args)
+ if (args->flags & RPC_CLNT_CREATE_NONPRIVPORT)
+ xprt->resvport = 0;
+
+- clnt = rpc_new_client(xprt, args->servername, args->program,
+- args->version, args->authflavor);
++ clnt = rpc_new_client(args, xprt);
+ if (IS_ERR(clnt))
+ return clnt;
+
+@@ -322,7 +354,7 @@ rpc_clone_client(struct rpc_clnt *clnt)
+ new->cl_autobind = 0;
+ INIT_LIST_HEAD(&new->cl_tasks);
+ spin_lock_init(&new->cl_lock);
+- rpc_init_rtt(&new->cl_rtt_default, clnt->cl_xprt->timeout.to_initval);
++ rpc_init_rtt(&new->cl_rtt_default, clnt->cl_timeout->to_initval);
+ new->cl_metrics = rpc_alloc_iostats(clnt);
+ if (new->cl_metrics == NULL)
+ goto out_no_stats;
+@@ -345,6 +377,7 @@ out_no_clnt:
+ dprintk("RPC: %s: returned error %d\n", __FUNCTION__, err);
+ return ERR_PTR(err);
+ }
++EXPORT_SYMBOL_GPL(rpc_clone_client);
+
+ /*
+ * Properly shut down an RPC client, terminating all outstanding
+@@ -363,6 +396,7 @@ void rpc_shutdown_client(struct rpc_clnt *clnt)
+
+ rpc_release_client(clnt);
+ }
++EXPORT_SYMBOL_GPL(rpc_shutdown_client);
+
+ /*
+ * Free an RPC client
+@@ -467,6 +501,7 @@ struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *old,
+ out:
+ return clnt;
+ }
++EXPORT_SYMBOL_GPL(rpc_bind_new_program);
+
+ /*
+ * Default callback for async RPC calls
+@@ -498,12 +533,12 @@ static void rpc_save_sigmask(sigset_t *oldset, int intr)
+ sigprocmask(SIG_BLOCK, &sigmask, oldset);
+ }
+
+-static inline void rpc_task_sigmask(struct rpc_task *task, sigset_t *oldset)
++static void rpc_task_sigmask(struct rpc_task *task, sigset_t *oldset)
+ {
+ rpc_save_sigmask(oldset, !RPC_TASK_UNINTERRUPTIBLE(task));
+ }
+
+-static inline void rpc_restore_sigmask(sigset_t *oldset)
++static void rpc_restore_sigmask(sigset_t *oldset)
+ {
+ sigprocmask(SIG_SETMASK, oldset, NULL);
+ }
+@@ -512,45 +547,49 @@ void rpc_clnt_sigmask(struct rpc_clnt *clnt, sigset_t *oldset)
+ {
+ rpc_save_sigmask(oldset, clnt->cl_intr);
+ }
++EXPORT_SYMBOL_GPL(rpc_clnt_sigmask);
+
+ void rpc_clnt_sigunmask(struct rpc_clnt *clnt, sigset_t *oldset)
+ {
+ rpc_restore_sigmask(oldset);
+ }
++EXPORT_SYMBOL_GPL(rpc_clnt_sigunmask);
+
+-static
+-struct rpc_task *rpc_do_run_task(struct rpc_clnt *clnt,
+- struct rpc_message *msg,
+- int flags,
+- const struct rpc_call_ops *ops,
+- void *data)
++/**
++ * rpc_run_task - Allocate a new RPC task, then run rpc_execute against it
++ * @task_setup_data: pointer to task initialisation data
++ */
++struct rpc_task *rpc_run_task(const struct rpc_task_setup *task_setup_data)
+ {
+ struct rpc_task *task, *ret;
+ sigset_t oldset;
+
+- task = rpc_new_task(clnt, flags, ops, data);
++ task = rpc_new_task(task_setup_data);
+ if (task == NULL) {
+- rpc_release_calldata(ops, data);
+- return ERR_PTR(-ENOMEM);
++ rpc_release_calldata(task_setup_data->callback_ops,
++ task_setup_data->callback_data);
++ ret = ERR_PTR(-ENOMEM);
++ goto out;
+ }
+
+- /* Mask signals on synchronous RPC calls and RPCSEC_GSS upcalls */
+- rpc_task_sigmask(task, &oldset);
+- if (msg != NULL) {
+- rpc_call_setup(task, msg, 0);
+- if (task->tk_status != 0) {
+- ret = ERR_PTR(task->tk_status);
+- rpc_put_task(task);
+- goto out;
+- }
++ if (task->tk_status != 0) {
++ ret = ERR_PTR(task->tk_status);
++ rpc_put_task(task);
++ goto out;
+ }
+ atomic_inc(&task->tk_count);
+- rpc_execute(task);
++ /* Mask signals on synchronous RPC calls and RPCSEC_GSS upcalls */
++ if (!RPC_IS_ASYNC(task)) {
++ rpc_task_sigmask(task, &oldset);
++ rpc_execute(task);
++ rpc_restore_sigmask(&oldset);
++ } else
++ rpc_execute(task);
+ ret = task;
+ out:
+- rpc_restore_sigmask(&oldset);
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(rpc_run_task);
+
+ /**
+ * rpc_call_sync - Perform a synchronous RPC call
+@@ -561,17 +600,24 @@ out:
+ int rpc_call_sync(struct rpc_clnt *clnt, struct rpc_message *msg, int flags)
+ {
+ struct rpc_task *task;
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = clnt,
++ .rpc_message = msg,
++ .callback_ops = &rpc_default_ops,
++ .flags = flags,
++ };
+ int status;
+
+ BUG_ON(flags & RPC_TASK_ASYNC);
+
+- task = rpc_do_run_task(clnt, msg, flags, &rpc_default_ops, NULL);
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ status = task->tk_status;
+ rpc_put_task(task);
+ return status;
+ }
++EXPORT_SYMBOL_GPL(rpc_call_sync);
+
+ /**
+ * rpc_call_async - Perform an asynchronous RPC call
+@@ -586,45 +632,28 @@ rpc_call_async(struct rpc_clnt *clnt, struct rpc_message *msg, int flags,
+ const struct rpc_call_ops *tk_ops, void *data)
+ {
+ struct rpc_task *task;
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = clnt,
++ .rpc_message = msg,
++ .callback_ops = tk_ops,
++ .callback_data = data,
++ .flags = flags|RPC_TASK_ASYNC,
++ };
+
+- task = rpc_do_run_task(clnt, msg, flags|RPC_TASK_ASYNC, tk_ops, data);
++ task = rpc_run_task(&task_setup_data);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ rpc_put_task(task);
+ return 0;
+ }
+-
+-/**
+- * rpc_run_task - Allocate a new RPC task, then run rpc_execute against it
+- * @clnt: pointer to RPC client
+- * @flags: RPC flags
+- * @ops: RPC call ops
+- * @data: user call data
+- */
+-struct rpc_task *rpc_run_task(struct rpc_clnt *clnt, int flags,
+- const struct rpc_call_ops *tk_ops,
+- void *data)
+-{
+- return rpc_do_run_task(clnt, NULL, flags, tk_ops, data);
+-}
+-EXPORT_SYMBOL(rpc_run_task);
++EXPORT_SYMBOL_GPL(rpc_call_async);
+
+ void
+-rpc_call_setup(struct rpc_task *task, struct rpc_message *msg, int flags)
++rpc_call_start(struct rpc_task *task)
+ {
+- task->tk_msg = *msg;
+- task->tk_flags |= flags;
+- /* Bind the user cred */
+- if (task->tk_msg.rpc_cred != NULL)
+- rpcauth_holdcred(task);
+- else
+- rpcauth_bindcred(task);
+-
+- if (task->tk_status == 0)
+- task->tk_action = call_start;
+- else
+- task->tk_action = rpc_exit_task;
++ task->tk_action = call_start;
+ }
++EXPORT_SYMBOL_GPL(rpc_call_start);
+
+ /**
+ * rpc_peeraddr - extract remote peer address from clnt's xprt
+@@ -653,7 +682,8 @@ EXPORT_SYMBOL_GPL(rpc_peeraddr);
+ * @format: address format
+ *
+ */
+-char *rpc_peeraddr2str(struct rpc_clnt *clnt, enum rpc_display_format_t format)
++const char *rpc_peeraddr2str(struct rpc_clnt *clnt,
++ enum rpc_display_format_t format)
+ {
+ struct rpc_xprt *xprt = clnt->cl_xprt;
+
+@@ -671,6 +701,7 @@ rpc_setbufsize(struct rpc_clnt *clnt, unsigned int sndsize, unsigned int rcvsize
+ if (xprt->ops->set_buffer_size)
+ xprt->ops->set_buffer_size(xprt, sndsize, rcvsize);
+ }
++EXPORT_SYMBOL_GPL(rpc_setbufsize);
+
+ /*
+ * Return size of largest payload RPC client can support, in bytes
+@@ -710,6 +741,7 @@ rpc_restart_call(struct rpc_task *task)
+
+ task->tk_action = call_start;
+ }
++EXPORT_SYMBOL_GPL(rpc_restart_call);
+
+ /*
+ * 0. Initial state
+@@ -1137,7 +1169,7 @@ call_status(struct rpc_task *task)
+ case -ETIMEDOUT:
+ task->tk_action = call_timeout;
+ if (task->tk_client->cl_discrtry)
+- xprt_disconnect(task->tk_xprt);
++ xprt_force_disconnect(task->tk_xprt);
+ break;
+ case -ECONNREFUSED:
+ case -ENOTCONN:
+@@ -1260,7 +1292,7 @@ out_retry:
+ req->rq_received = req->rq_private_buf.len = 0;
+ task->tk_status = 0;
+ if (task->tk_client->cl_discrtry)
+- xprt_disconnect(task->tk_xprt);
++ xprt_force_disconnect(task->tk_xprt);
+ }
+
+ /*
+@@ -1517,9 +1549,15 @@ struct rpc_task *rpc_call_null(struct rpc_clnt *clnt, struct rpc_cred *cred, int
+ .rpc_proc = &rpcproc_null,
+ .rpc_cred = cred,
+ };
+- return rpc_do_run_task(clnt, &msg, flags, &rpc_default_ops, NULL);
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = clnt,
++ .rpc_message = &msg,
++ .callback_ops = &rpc_default_ops,
++ .flags = flags,
++ };
++ return rpc_run_task(&task_setup_data);
+ }
+-EXPORT_SYMBOL(rpc_call_null);
++EXPORT_SYMBOL_GPL(rpc_call_null);
+
+ #ifdef RPC_DEBUG
+ void rpc_show_tasks(void)
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index c59f3ca..7e19716 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -76,6 +76,16 @@ rpc_timeout_upcall_queue(struct work_struct *work)
+ rpc_purge_list(rpci, &free_list, destroy_msg, -ETIMEDOUT);
+ }
+
++/**
++ * rpc_queue_upcall
++ * @inode: inode of upcall pipe on which to queue given message
++ * @msg: message to queue
++ *
++ * Call with an @inode created by rpc_mkpipe() to queue an upcall.
++ * A userspace process may then later read the upcall by performing a
++ * read on an open file for this inode. It is up to the caller to
++ * initialize the fields of @msg (other than @msg->list) appropriately.
++ */
+ int
+ rpc_queue_upcall(struct inode *inode, struct rpc_pipe_msg *msg)
+ {
+@@ -103,6 +113,7 @@ out:
+ wake_up(&rpci->waitq);
+ return res;
+ }
++EXPORT_SYMBOL(rpc_queue_upcall);
+
+ static inline void
+ rpc_inode_setowner(struct inode *inode, void *private)
+@@ -512,8 +523,8 @@ rpc_get_inode(struct super_block *sb, int mode)
+ /*
+ * FIXME: This probably has races.
+ */
+-static void
+-rpc_depopulate(struct dentry *parent, int start, int eof)
++static void rpc_depopulate(struct dentry *parent,
++ unsigned long start, unsigned long eof)
+ {
+ struct inode *dir = parent->d_inode;
+ struct list_head *pos, *next;
+@@ -663,7 +674,16 @@ rpc_lookup_negative(char *path, struct nameidata *nd)
+ return dentry;
+ }
+
+-
++/**
++ * rpc_mkdir - Create a new directory in rpc_pipefs
++ * @path: path from the rpc_pipefs root to the new directory
++ * @rpc_clnt: rpc client to associate with this directory
++ *
++ * This creates a directory at the given @path associated with
++ * @rpc_clnt, which will contain a file named "info" with some basic
++ * information about the client, together with any "pipes" that may
++ * later be created using rpc_mkpipe().
++ */
+ struct dentry *
+ rpc_mkdir(char *path, struct rpc_clnt *rpc_client)
+ {
+@@ -699,6 +719,10 @@ err_dput:
+ goto out;
+ }
+
++/**
++ * rpc_rmdir - Remove a directory created with rpc_mkdir()
++ * @dentry: directory to remove
++ */
+ int
+ rpc_rmdir(struct dentry *dentry)
+ {
+@@ -717,6 +741,25 @@ rpc_rmdir(struct dentry *dentry)
+ return error;
+ }
+
++/**
++ * rpc_mkpipe - make an rpc_pipefs file for kernel<->userspace communication
++ * @parent: dentry of directory to create new "pipe" in
++ * @name: name of pipe
++ * @private: private data to associate with the pipe, for the caller's use
++ * @ops: operations defining the behavior of the pipe: upcall, downcall,
++ * release_pipe, and destroy_msg.
++ *
++ * Data is made available for userspace to read by calls to
++ * rpc_queue_upcall(). The actual reads will result in calls to
++ * @ops->upcall, which will be called with the file pointer,
++ * message, and userspace buffer to copy to.
++ *
++ * Writes can come at any time, and do not necessarily have to be
++ * responses to upcalls. They will result in calls to @msg->downcall.
++ *
++ * The @private argument passed here will be available to all these methods
++ * from the file pointer, via RPC_I(file->f_dentry->d_inode)->private.
++ */
+ struct dentry *
+ rpc_mkpipe(struct dentry *parent, const char *name, void *private, struct rpc_pipe_ops *ops, int flags)
+ {
+@@ -763,7 +806,16 @@ err_dput:
+ -ENOMEM);
+ goto out;
+ }
++EXPORT_SYMBOL(rpc_mkpipe);
+
++/**
++ * rpc_unlink - remove a pipe
++ * @dentry: dentry for the pipe, as returned from rpc_mkpipe
++ *
++ * After this call, lookups will no longer find the pipe, and any
++ * attempts to read or write using preexisting opens of the pipe will
++ * return -EPIPE.
++ */
+ int
+ rpc_unlink(struct dentry *dentry)
+ {
+@@ -785,6 +837,7 @@ rpc_unlink(struct dentry *dentry)
+ dput(parent);
+ return error;
+ }
++EXPORT_SYMBOL(rpc_unlink);
+
+ /*
+ * populate the filesystem
+diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
+index a05493a..fa5b8f2 100644
+--- a/net/sunrpc/rpcb_clnt.c
++++ b/net/sunrpc/rpcb_clnt.c
+@@ -55,45 +55,6 @@ enum {
+ #define RPCB_HIGHPROC_4 RPCBPROC_GETSTAT
+
+ /*
+- * r_addr
+- *
+- * Quoting RFC 3530, section 2.2:
+- *
+- * For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the
+- * US-ASCII string:
+- *
+- * h1.h2.h3.h4.p1.p2
+- *
+- * The prefix, "h1.h2.h3.h4", is the standard textual form for
+- * representing an IPv4 address, which is always four octets long.
+- * Assuming big-endian ordering, h1, h2, h3, and h4, are respectively,
+- * the first through fourth octets each converted to ASCII-decimal.
+- * Assuming big-endian ordering, p1 and p2 are, respectively, the first
+- * and second octets each converted to ASCII-decimal. For example, if a
+- * host, in big-endian order, has an address of 0x0A010307 and there is
+- * a service listening on, in big endian order, port 0x020F (decimal
+- * 527), then the complete universal address is "10.1.3.7.2.15".
+- *
+- * ...
+- *
+- * For TCP over IPv6 and for UDP over IPv6, the format of r_addr is the
+- * US-ASCII string:
+- *
+- * x1:x2:x3:x4:x5:x6:x7:x8.p1.p2
+- *
+- * The suffix "p1.p2" is the service port, and is computed the same way
+- * as with universal addresses for TCP and UDP over IPv4. The prefix,
+- * "x1:x2:x3:x4:x5:x6:x7:x8", is the standard textual form for
+- * representing an IPv6 address as defined in Section 2.2 of [RFC2373].
+- * Additionally, the two alternative forms specified in Section 2.2 of
+- * [RFC2373] are also acceptable.
+- *
+- * XXX: Currently this implementation does not explicitly convert the
+- * stored address to US-ASCII on non-ASCII systems.
+- */
+-#define RPCB_MAXADDRLEN (128u)
+-
+-/*
+ * r_owner
+ *
+ * The "owner" is allowed to unset a service in the rpcbind database.
+@@ -112,9 +73,9 @@ struct rpcbind_args {
+ u32 r_vers;
+ u32 r_prot;
+ unsigned short r_port;
+- char * r_netid;
+- char r_addr[RPCB_MAXADDRLEN];
+- char * r_owner;
++ const char * r_netid;
++ const char * r_addr;
++ const char * r_owner;
+ };
+
+ static struct rpc_procinfo rpcb_procedures2[];
+@@ -128,19 +89,6 @@ struct rpcb_info {
+ static struct rpcb_info rpcb_next_version[];
+ static struct rpcb_info rpcb_next_version6[];
+
+-static void rpcb_getport_prepare(struct rpc_task *task, void *calldata)
+-{
+- struct rpcbind_args *map = calldata;
+- struct rpc_xprt *xprt = map->r_xprt;
+- struct rpc_message msg = {
+- .rpc_proc = rpcb_next_version[xprt->bind_index].rpc_proc,
+- .rpc_argp = map,
+- .rpc_resp = &map->r_port,
+- };
+-
+- rpc_call_setup(task, &msg, 0);
+-}
+-
+ static void rpcb_map_release(void *data)
+ {
+ struct rpcbind_args *map = data;
+@@ -150,7 +98,6 @@ static void rpcb_map_release(void *data)
+ }
+
+ static const struct rpc_call_ops rpcb_getport_ops = {
+- .rpc_call_prepare = rpcb_getport_prepare,
+ .rpc_call_done = rpcb_getport_done,
+ .rpc_release = rpcb_map_release,
+ };
+@@ -162,12 +109,13 @@ static void rpcb_wake_rpcbind_waiters(struct rpc_xprt *xprt, int status)
+ }
+
+ static struct rpc_clnt *rpcb_create(char *hostname, struct sockaddr *srvaddr,
+- int proto, int version, int privileged)
++ size_t salen, int proto, u32 version,
++ int privileged)
+ {
+ struct rpc_create_args args = {
+ .protocol = proto,
+ .address = srvaddr,
+- .addrsize = sizeof(struct sockaddr_in),
++ .addrsize = salen,
+ .servername = hostname,
+ .program = &rpcb_program,
+ .version = version,
+@@ -230,7 +178,7 @@ int rpcb_register(u32 prog, u32 vers, int prot, unsigned short port, int *okay)
+ prog, vers, prot, port);
+
+ rpcb_clnt = rpcb_create("localhost", (struct sockaddr *) &sin,
+- XPRT_TRANSPORT_UDP, 2, 1);
++ sizeof(sin), XPRT_TRANSPORT_UDP, 2, 1);
+ if (IS_ERR(rpcb_clnt))
+ return PTR_ERR(rpcb_clnt);
+
+@@ -252,13 +200,15 @@ int rpcb_register(u32 prog, u32 vers, int prot, unsigned short port, int *okay)
+ * @vers: RPC version number to bind
+ * @prot: transport protocol to use to make this request
+ *
++ * Return value is the requested advertised port number,
++ * or a negative errno value.
++ *
+ * Called from outside the RPC client in a synchronous task context.
+ * Uses default timeout parameters specified by underlying transport.
+ *
+- * XXX: Needs to support IPv6, and rpcbind versions 3 and 4
++ * XXX: Needs to support IPv6
+ */
+-int rpcb_getport_sync(struct sockaddr_in *sin, __u32 prog,
+- __u32 vers, int prot)
++int rpcb_getport_sync(struct sockaddr_in *sin, u32 prog, u32 vers, int prot)
+ {
+ struct rpcbind_args map = {
+ .r_prog = prog,
+@@ -272,14 +222,13 @@ int rpcb_getport_sync(struct sockaddr_in *sin, __u32 prog,
+ .rpc_resp = &map.r_port,
+ };
+ struct rpc_clnt *rpcb_clnt;
+- char hostname[40];
+ int status;
+
+ dprintk("RPC: %s(" NIPQUAD_FMT ", %u, %u, %d)\n",
+ __FUNCTION__, NIPQUAD(sin->sin_addr.s_addr), prog, vers, prot);
+
+- sprintf(hostname, NIPQUAD_FMT, NIPQUAD(sin->sin_addr.s_addr));
+- rpcb_clnt = rpcb_create(hostname, (struct sockaddr *)sin, prot, 2, 0);
++ rpcb_clnt = rpcb_create(NULL, (struct sockaddr *)sin,
++ sizeof(*sin), prot, 2, 0);
+ if (IS_ERR(rpcb_clnt))
+ return PTR_ERR(rpcb_clnt);
+
+@@ -295,6 +244,24 @@ int rpcb_getport_sync(struct sockaddr_in *sin, __u32 prog,
+ }
+ EXPORT_SYMBOL_GPL(rpcb_getport_sync);
+
++static struct rpc_task *rpcb_call_async(struct rpc_clnt *rpcb_clnt, struct rpcbind_args *map, int version)
++{
++ struct rpc_message msg = {
++ .rpc_proc = rpcb_next_version[version].rpc_proc,
++ .rpc_argp = map,
++ .rpc_resp = &map->r_port,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = rpcb_clnt,
++ .rpc_message = &msg,
++ .callback_ops = &rpcb_getport_ops,
++ .callback_data = map,
++ .flags = RPC_TASK_ASYNC,
++ };
++
++ return rpc_run_task(&task_setup_data);
++}
++
+ /**
+ * rpcb_getport_async - obtain the port for a given RPC service on a given host
+ * @task: task that is waiting for portmapper request
+@@ -305,12 +272,14 @@ EXPORT_SYMBOL_GPL(rpcb_getport_sync);
+ void rpcb_getport_async(struct rpc_task *task)
+ {
+ struct rpc_clnt *clnt = task->tk_client;
+- int bind_version;
++ u32 bind_version;
+ struct rpc_xprt *xprt = task->tk_xprt;
+ struct rpc_clnt *rpcb_clnt;
+ static struct rpcbind_args *map;
+ struct rpc_task *child;
+- struct sockaddr addr;
++ struct sockaddr_storage addr;
++ struct sockaddr *sap = (struct sockaddr *)&addr;
++ size_t salen;
+ int status;
+ struct rpcb_info *info;
+
+@@ -340,10 +309,10 @@ void rpcb_getport_async(struct rpc_task *task)
+ goto bailout_nofree;
+ }
+
+- rpc_peeraddr(clnt, (void *)&addr, sizeof(addr));
++ salen = rpc_peeraddr(clnt, sap, sizeof(addr));
+
+ /* Don't ever use rpcbind v2 for AF_INET6 requests */
+- switch (addr.sa_family) {
++ switch (sap->sa_family) {
+ case AF_INET:
+ info = rpcb_next_version;
+ break;
+@@ -368,7 +337,7 @@ void rpcb_getport_async(struct rpc_task *task)
+ dprintk("RPC: %5u %s: trying rpcbind version %u\n",
+ task->tk_pid, __FUNCTION__, bind_version);
+
+- rpcb_clnt = rpcb_create(clnt->cl_server, &addr, xprt->prot,
++ rpcb_clnt = rpcb_create(clnt->cl_server, sap, salen, xprt->prot,
+ bind_version, 0);
+ if (IS_ERR(rpcb_clnt)) {
+ status = PTR_ERR(rpcb_clnt);
+@@ -390,12 +359,10 @@ void rpcb_getport_async(struct rpc_task *task)
+ map->r_port = 0;
+ map->r_xprt = xprt_get(xprt);
+ map->r_netid = rpc_peeraddr2str(clnt, RPC_DISPLAY_NETID);
+- memcpy(map->r_addr,
+- rpc_peeraddr2str(rpcb_clnt, RPC_DISPLAY_UNIVERSAL_ADDR),
+- sizeof(map->r_addr));
++ map->r_addr = rpc_peeraddr2str(rpcb_clnt, RPC_DISPLAY_UNIVERSAL_ADDR);
+ map->r_owner = RPCB_OWNER_STRING; /* ignored for GETADDR */
+
+- child = rpc_run_task(rpcb_clnt, RPC_TASK_ASYNC, &rpcb_getport_ops, map);
++ child = rpcb_call_async(rpcb_clnt, map, xprt->bind_index);
+ rpc_release_client(rpcb_clnt);
+ if (IS_ERR(child)) {
+ status = -EIO;
+@@ -518,7 +485,7 @@ static int rpcb_decode_getaddr(struct rpc_rqst *req, __be32 *p,
+ * Simple sanity check. The smallest possible universal
+ * address is an IPv4 address string containing 11 bytes.
+ */
+- if (addr_len < 11 || addr_len > RPCB_MAXADDRLEN)
++ if (addr_len < 11 || addr_len > RPCBIND_MAXUADDRLEN)
+ goto out_err;
+
+ /*
+@@ -569,7 +536,7 @@ out_err:
+ #define RPCB_boolean_sz (1u)
+
+ #define RPCB_netid_sz (1+XDR_QUADLEN(RPCBIND_MAXNETIDLEN))
+-#define RPCB_addr_sz (1+XDR_QUADLEN(RPCB_MAXADDRLEN))
++#define RPCB_addr_sz (1+XDR_QUADLEN(RPCBIND_MAXUADDRLEN))
+ #define RPCB_ownerstring_sz (1+XDR_QUADLEN(RPCB_MAXOWNERLEN))
+
+ #define RPCB_mappingargs_sz RPCB_program_sz+RPCB_version_sz+ \
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
-index c98873f..eed5dd9 100644
+index c98873f..40ce6f6 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
-@@ -811,9 +811,8 @@ EXPORT_SYMBOL_GPL(rpc_free);
- void rpc_init_task(struct rpc_task *task, struct rpc_clnt *clnt, int flags, const struct rpc_call_ops *tk_ops, void *calldata)
+@@ -45,7 +45,7 @@ static void rpc_release_task(struct rpc_task *task);
+ /*
+ * RPC tasks sit here while waiting for conditions to improve.
+ */
+-static RPC_WAITQ(delay_queue, "delayq");
++static struct rpc_wait_queue delay_queue;
+
+ /*
+ * rpciod-related stuff
+@@ -135,7 +135,7 @@ static void __rpc_add_wait_queue_priority(struct rpc_wait_queue *queue, struct r
+ if (unlikely(task->tk_priority > queue->maxpriority))
+ q = &queue->tasks[queue->maxpriority];
+ list_for_each_entry(t, q, u.tk_wait.list) {
+- if (t->tk_cookie == task->tk_cookie) {
++ if (t->tk_owner == task->tk_owner) {
+ list_add_tail(&task->u.tk_wait.list, &t->u.tk_wait.links);
+ return;
+ }
+@@ -208,26 +208,26 @@ static inline void rpc_set_waitqueue_priority(struct rpc_wait_queue *queue, int
+ queue->count = 1 << (priority * 2);
+ }
+
+-static inline void rpc_set_waitqueue_cookie(struct rpc_wait_queue *queue, unsigned long cookie)
++static inline void rpc_set_waitqueue_owner(struct rpc_wait_queue *queue, pid_t pid)
+ {
+- queue->cookie = cookie;
++ queue->owner = pid;
+ queue->nr = RPC_BATCH_COUNT;
+ }
+
+ static inline void rpc_reset_waitqueue_priority(struct rpc_wait_queue *queue)
+ {
+ rpc_set_waitqueue_priority(queue, queue->maxpriority);
+- rpc_set_waitqueue_cookie(queue, 0);
++ rpc_set_waitqueue_owner(queue, 0);
+ }
+
+-static void __rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const char *qname, int maxprio)
++static void __rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const char *qname, unsigned char nr_queues)
+ {
+ int i;
+
+ spin_lock_init(&queue->lock);
+ for (i = 0; i < ARRAY_SIZE(queue->tasks); i++)
+ INIT_LIST_HEAD(&queue->tasks[i]);
+- queue->maxpriority = maxprio;
++ queue->maxpriority = nr_queues - 1;
+ rpc_reset_waitqueue_priority(queue);
+ #ifdef RPC_DEBUG
+ queue->name = qname;
+@@ -236,14 +236,14 @@ static void __rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const c
+
+ void rpc_init_priority_wait_queue(struct rpc_wait_queue *queue, const char *qname)
+ {
+- __rpc_init_priority_wait_queue(queue, qname, RPC_PRIORITY_HIGH);
++ __rpc_init_priority_wait_queue(queue, qname, RPC_NR_PRIORITY);
+ }
+
+ void rpc_init_wait_queue(struct rpc_wait_queue *queue, const char *qname)
+ {
+- __rpc_init_priority_wait_queue(queue, qname, 0);
++ __rpc_init_priority_wait_queue(queue, qname, 1);
+ }
+-EXPORT_SYMBOL(rpc_init_wait_queue);
++EXPORT_SYMBOL_GPL(rpc_init_wait_queue);
+
+ static int rpc_wait_bit_interruptible(void *word)
+ {
+@@ -303,7 +303,7 @@ int __rpc_wait_for_completion_task(struct rpc_task *task, int (*action)(void *))
+ return wait_on_bit(&task->tk_runstate, RPC_TASK_ACTIVE,
+ action, TASK_INTERRUPTIBLE);
+ }
+-EXPORT_SYMBOL(__rpc_wait_for_completion_task);
++EXPORT_SYMBOL_GPL(__rpc_wait_for_completion_task);
+
+ /*
+ * Make an RPC task runnable.
+@@ -373,6 +373,7 @@ void rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
+ __rpc_sleep_on(q, task, action, timer);
+ spin_unlock_bh(&q->lock);
+ }
++EXPORT_SYMBOL_GPL(rpc_sleep_on);
+
+ /**
+ * __rpc_do_wake_up_task - wake up a single rpc_task
+@@ -444,6 +445,7 @@ void rpc_wake_up_task(struct rpc_task *task)
+ }
+ rcu_read_unlock_bh();
+ }
++EXPORT_SYMBOL_GPL(rpc_wake_up_task);
+
+ /*
+ * Wake up the next task on a priority queue.
+@@ -454,12 +456,12 @@ static struct rpc_task * __rpc_wake_up_next_priority(struct rpc_wait_queue *queu
+ struct rpc_task *task;
+
+ /*
+- * Service a batch of tasks from a single cookie.
++ * Service a batch of tasks from a single owner.
+ */
+ q = &queue->tasks[queue->priority];
+ if (!list_empty(q)) {
+ task = list_entry(q->next, struct rpc_task, u.tk_wait.list);
+- if (queue->cookie == task->tk_cookie) {
++ if (queue->owner == task->tk_owner) {
+ if (--queue->nr)
+ goto out;
+ list_move_tail(&task->u.tk_wait.list, q);
+@@ -468,7 +470,7 @@ static struct rpc_task * __rpc_wake_up_next_priority(struct rpc_wait_queue *queu
+ * Check if we need to switch queues.
+ */
+ if (--queue->count)
+- goto new_cookie;
++ goto new_owner;
+ }
+
+ /*
+@@ -490,8 +492,8 @@ static struct rpc_task * __rpc_wake_up_next_priority(struct rpc_wait_queue *queu
+
+ new_queue:
+ rpc_set_waitqueue_priority(queue, (unsigned int)(q - &queue->tasks[0]));
+-new_cookie:
+- rpc_set_waitqueue_cookie(queue, task->tk_cookie);
++new_owner:
++ rpc_set_waitqueue_owner(queue, task->tk_owner);
+ out:
+ __rpc_wake_up_task(task);
+ return task;
+@@ -519,6 +521,7 @@ struct rpc_task * rpc_wake_up_next(struct rpc_wait_queue *queue)
+
+ return task;
+ }
++EXPORT_SYMBOL_GPL(rpc_wake_up_next);
+
+ /**
+ * rpc_wake_up - wake up all rpc_tasks
+@@ -544,6 +547,7 @@ void rpc_wake_up(struct rpc_wait_queue *queue)
+ spin_unlock(&queue->lock);
+ rcu_read_unlock_bh();
+ }
++EXPORT_SYMBOL_GPL(rpc_wake_up);
+
+ /**
+ * rpc_wake_up_status - wake up all rpc_tasks and set their status value.
+@@ -572,6 +576,7 @@ void rpc_wake_up_status(struct rpc_wait_queue *queue, int status)
+ spin_unlock(&queue->lock);
+ rcu_read_unlock_bh();
+ }
++EXPORT_SYMBOL_GPL(rpc_wake_up_status);
+
+ static void __rpc_atrun(struct rpc_task *task)
+ {
+@@ -586,6 +591,7 @@ void rpc_delay(struct rpc_task *task, unsigned long delay)
+ task->tk_timeout = delay;
+ rpc_sleep_on(&delay_queue, task, NULL, __rpc_atrun);
+ }
++EXPORT_SYMBOL_GPL(rpc_delay);
+
+ /*
+ * Helper to call task->tk_ops->rpc_call_prepare
+@@ -614,7 +620,7 @@ void rpc_exit_task(struct rpc_task *task)
+ }
+ }
+ }
+-EXPORT_SYMBOL(rpc_exit_task);
++EXPORT_SYMBOL_GPL(rpc_exit_task);
+
+ void rpc_release_calldata(const struct rpc_call_ops *ops, void *calldata)
+ {
+@@ -808,40 +814,49 @@ EXPORT_SYMBOL_GPL(rpc_free);
+ /*
+ * Creation and deletion of RPC task structures
+ */
+-void rpc_init_task(struct rpc_task *task, struct rpc_clnt *clnt, int flags, const struct rpc_call_ops *tk_ops, void *calldata)
++static void rpc_init_task(struct rpc_task *task, const struct rpc_task_setup *task_setup_data)
{
memset(task, 0, sizeof(*task));
- init_timer(&task->tk_timer);
@@ -645928,13 +753643,544 @@
+ setup_timer(&task->tk_timer, (void (*)(unsigned long))rpc_run_timer,
+ (unsigned long)task);
atomic_set(&task->tk_count, 1);
- task->tk_client = clnt;
- task->tk_flags = flags;
+- task->tk_client = clnt;
+- task->tk_flags = flags;
+- task->tk_ops = tk_ops;
+- if (tk_ops->rpc_call_prepare != NULL)
+- task->tk_action = rpc_prepare_task;
+- task->tk_calldata = calldata;
++ task->tk_flags = task_setup_data->flags;
++ task->tk_ops = task_setup_data->callback_ops;
++ task->tk_calldata = task_setup_data->callback_data;
+ INIT_LIST_HEAD(&task->tk_task);
+
+ /* Initialize retry counters */
+ task->tk_garb_retry = 2;
+ task->tk_cred_retry = 2;
+
+- task->tk_priority = RPC_PRIORITY_NORMAL;
+- task->tk_cookie = (unsigned long)current;
++ task->tk_priority = task_setup_data->priority - RPC_PRIORITY_LOW;
++ task->tk_owner = current->tgid;
+
+ /* Initialize workqueue for async tasks */
+ task->tk_workqueue = rpciod_workqueue;
+
+- if (clnt) {
+- kref_get(&clnt->cl_kref);
+- if (clnt->cl_softrtry)
++ task->tk_client = task_setup_data->rpc_client;
++ if (task->tk_client != NULL) {
++ kref_get(&task->tk_client->cl_kref);
++ if (task->tk_client->cl_softrtry)
+ task->tk_flags |= RPC_TASK_SOFT;
+- if (!clnt->cl_intr)
++ if (!task->tk_client->cl_intr)
+ task->tk_flags |= RPC_TASK_NOINTR;
+ }
+
+- BUG_ON(task->tk_ops == NULL);
++ if (task->tk_ops->rpc_call_prepare != NULL)
++ task->tk_action = rpc_prepare_task;
++
++ if (task_setup_data->rpc_message != NULL) {
++ memcpy(&task->tk_msg, task_setup_data->rpc_message, sizeof(task->tk_msg));
++ /* Bind the user cred */
++ if (task->tk_msg.rpc_cred != NULL)
++ rpcauth_holdcred(task);
++ else
++ rpcauth_bindcred(task);
++ if (task->tk_action == NULL)
++ rpc_call_start(task);
++ }
+
+ /* starting timestamp */
+ task->tk_start = jiffies;
+@@ -866,18 +881,22 @@ static void rpc_free_task(struct rcu_head *rcu)
+ /*
+ * Create a new task for the specified client.
+ */
+-struct rpc_task *rpc_new_task(struct rpc_clnt *clnt, int flags, const struct rpc_call_ops *tk_ops, void *calldata)
++struct rpc_task *rpc_new_task(const struct rpc_task_setup *setup_data)
+ {
+- struct rpc_task *task;
+-
+- task = rpc_alloc_task();
+- if (!task)
+- goto out;
++ struct rpc_task *task = setup_data->task;
++ unsigned short flags = 0;
++
++ if (task == NULL) {
++ task = rpc_alloc_task();
++ if (task == NULL)
++ goto out;
++ flags = RPC_TASK_DYNAMIC;
++ }
+
+- rpc_init_task(task, clnt, flags, tk_ops, calldata);
++ rpc_init_task(task, setup_data);
+
++ task->tk_flags |= flags;
+ dprintk("RPC: allocated task %p\n", task);
+- task->tk_flags |= RPC_TASK_DYNAMIC;
+ out:
+ return task;
+ }
+@@ -903,7 +922,7 @@ void rpc_put_task(struct rpc_task *task)
+ call_rcu_bh(&task->u.tk_rcu, rpc_free_task);
+ rpc_release_calldata(tk_ops, calldata);
+ }
+-EXPORT_SYMBOL(rpc_put_task);
++EXPORT_SYMBOL_GPL(rpc_put_task);
+
+ static void rpc_release_task(struct rpc_task *task)
+ {
+@@ -960,6 +979,7 @@ void rpc_killall_tasks(struct rpc_clnt *clnt)
+ }
+ spin_unlock(&clnt->cl_lock);
+ }
++EXPORT_SYMBOL_GPL(rpc_killall_tasks);
+
+ int rpciod_up(void)
+ {
+@@ -1039,6 +1059,11 @@ rpc_init_mempool(void)
+ goto err_nomem;
+ if (!rpciod_start())
+ goto err_nomem;
++ /*
++ * The following is not strictly a mempool initialisation,
++ * but there is no harm in doing it here
++ */
++ rpc_init_wait_queue(&delay_queue, "delayq");
+ return 0;
+ err_nomem:
+ rpc_destroy_mempool();
+diff --git a/net/sunrpc/socklib.c b/net/sunrpc/socklib.c
+index 97ac45f..a661a3a 100644
+--- a/net/sunrpc/socklib.c
++++ b/net/sunrpc/socklib.c
+@@ -72,7 +72,7 @@ ssize_t xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct
+ struct page **ppage = xdr->pages;
+ unsigned int len, pglen = xdr->page_len;
+ ssize_t copied = 0;
+- int ret;
++ size_t ret;
+
+ len = xdr->head[0].iov_len;
+ if (base < len) {
+diff --git a/net/sunrpc/stats.c b/net/sunrpc/stats.c
+index 4d4f373..74df2d3 100644
+--- a/net/sunrpc/stats.c
++++ b/net/sunrpc/stats.c
+@@ -118,7 +118,7 @@ struct rpc_iostats *rpc_alloc_iostats(struct rpc_clnt *clnt)
+ new = kcalloc(clnt->cl_maxproc, sizeof(struct rpc_iostats), GFP_KERNEL);
+ return new;
+ }
+-EXPORT_SYMBOL(rpc_alloc_iostats);
++EXPORT_SYMBOL_GPL(rpc_alloc_iostats);
+
+ /**
+ * rpc_free_iostats - release an rpc_iostats structure
+@@ -129,7 +129,7 @@ void rpc_free_iostats(struct rpc_iostats *stats)
+ {
+ kfree(stats);
+ }
+-EXPORT_SYMBOL(rpc_free_iostats);
++EXPORT_SYMBOL_GPL(rpc_free_iostats);
+
+ /**
+ * rpc_count_iostats - tally up per-task stats
+@@ -215,7 +215,7 @@ void rpc_print_iostats(struct seq_file *seq, struct rpc_clnt *clnt)
+ metrics->om_execute * MILLISECS_PER_JIFFY);
+ }
+ }
+-EXPORT_SYMBOL(rpc_print_iostats);
++EXPORT_SYMBOL_GPL(rpc_print_iostats);
+
+ /*
+ * Register/unregister RPC proc files
+@@ -241,12 +241,14 @@ rpc_proc_register(struct rpc_stat *statp)
+ {
+ return do_register(statp->program->name, statp, &rpc_proc_fops);
+ }
++EXPORT_SYMBOL_GPL(rpc_proc_register);
+
+ void
+ rpc_proc_unregister(const char *name)
+ {
+ remove_proc_entry(name, proc_net_rpc);
+ }
++EXPORT_SYMBOL_GPL(rpc_proc_unregister);
+
+ struct proc_dir_entry *
+ svc_proc_register(struct svc_stat *statp, const struct file_operations *fops)
+diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
+index 33d89e8..1a7e309 100644
+--- a/net/sunrpc/sunrpc_syms.c
++++ b/net/sunrpc/sunrpc_syms.c
+@@ -22,45 +22,6 @@
+ #include <linux/sunrpc/rpc_pipe_fs.h>
+ #include <linux/sunrpc/xprtsock.h>
+
+-/* RPC scheduler */
+-EXPORT_SYMBOL(rpc_execute);
+-EXPORT_SYMBOL(rpc_init_task);
+-EXPORT_SYMBOL(rpc_sleep_on);
+-EXPORT_SYMBOL(rpc_wake_up_next);
+-EXPORT_SYMBOL(rpc_wake_up_task);
+-EXPORT_SYMBOL(rpc_wake_up_status);
+-
+-/* RPC client functions */
+-EXPORT_SYMBOL(rpc_clone_client);
+-EXPORT_SYMBOL(rpc_bind_new_program);
+-EXPORT_SYMBOL(rpc_shutdown_client);
+-EXPORT_SYMBOL(rpc_killall_tasks);
+-EXPORT_SYMBOL(rpc_call_sync);
+-EXPORT_SYMBOL(rpc_call_async);
+-EXPORT_SYMBOL(rpc_call_setup);
+-EXPORT_SYMBOL(rpc_clnt_sigmask);
+-EXPORT_SYMBOL(rpc_clnt_sigunmask);
+-EXPORT_SYMBOL(rpc_delay);
+-EXPORT_SYMBOL(rpc_restart_call);
+-EXPORT_SYMBOL(rpc_setbufsize);
+-EXPORT_SYMBOL(rpc_unlink);
+-EXPORT_SYMBOL(rpc_wake_up);
+-EXPORT_SYMBOL(rpc_queue_upcall);
+-EXPORT_SYMBOL(rpc_mkpipe);
+-
+-/* Client transport */
+-EXPORT_SYMBOL(xprt_set_timeout);
+-
+-/* Client credential cache */
+-EXPORT_SYMBOL(rpcauth_register);
+-EXPORT_SYMBOL(rpcauth_unregister);
+-EXPORT_SYMBOL(rpcauth_create);
+-EXPORT_SYMBOL(rpcauth_lookupcred);
+-EXPORT_SYMBOL(rpcauth_lookup_credcache);
+-EXPORT_SYMBOL(rpcauth_destroy_credcache);
+-EXPORT_SYMBOL(rpcauth_init_credcache);
+-EXPORT_SYMBOL(put_rpccred);
+-
+ /* RPC server stuff */
+ EXPORT_SYMBOL(svc_create);
+ EXPORT_SYMBOL(svc_create_thread);
+@@ -81,8 +42,6 @@ EXPORT_SYMBOL(svc_set_client);
+
+ /* RPC statistics */
+ #ifdef CONFIG_PROC_FS
+-EXPORT_SYMBOL(rpc_proc_register);
+-EXPORT_SYMBOL(rpc_proc_unregister);
+ EXPORT_SYMBOL(svc_proc_register);
+ EXPORT_SYMBOL(svc_proc_unregister);
+ EXPORT_SYMBOL(svc_seq_show);
+@@ -105,31 +64,6 @@ EXPORT_SYMBOL(qword_get);
+ EXPORT_SYMBOL(svcauth_unix_purge);
+ EXPORT_SYMBOL(unix_domain_find);
+
+-/* Generic XDR */
+-EXPORT_SYMBOL(xdr_encode_string);
+-EXPORT_SYMBOL(xdr_decode_string_inplace);
+-EXPORT_SYMBOL(xdr_decode_netobj);
+-EXPORT_SYMBOL(xdr_encode_netobj);
+-EXPORT_SYMBOL(xdr_encode_pages);
+-EXPORT_SYMBOL(xdr_inline_pages);
+-EXPORT_SYMBOL(xdr_shift_buf);
+-EXPORT_SYMBOL(xdr_encode_word);
+-EXPORT_SYMBOL(xdr_decode_word);
+-EXPORT_SYMBOL(xdr_encode_array2);
+-EXPORT_SYMBOL(xdr_decode_array2);
+-EXPORT_SYMBOL(xdr_buf_from_iov);
+-EXPORT_SYMBOL(xdr_buf_subsegment);
+-EXPORT_SYMBOL(xdr_buf_read_netobj);
+-EXPORT_SYMBOL(read_bytes_from_xdr_buf);
+-
+-/* Debugging symbols */
+-#ifdef RPC_DEBUG
+-EXPORT_SYMBOL(rpc_debug);
+-EXPORT_SYMBOL(nfs_debug);
+-EXPORT_SYMBOL(nfsd_debug);
+-EXPORT_SYMBOL(nlm_debug);
+-#endif
+-
+ extern struct cache_detail ip_map_cache, unix_gid_cache;
+
+ static int __init
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index a4a6bf7..4ad5fbb 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -18,6 +18,7 @@
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
++#include <linux/sched.h>
+
+ #include <linux/sunrpc/types.h>
+ #include <linux/sunrpc/xdr.h>
+diff --git a/net/sunrpc/sysctl.c b/net/sunrpc/sysctl.c
+index 2be714e..bada7de 100644
+--- a/net/sunrpc/sysctl.c
++++ b/net/sunrpc/sysctl.c
+@@ -23,9 +23,16 @@
+ * Declare the debug flags here
+ */
+ unsigned int rpc_debug;
++EXPORT_SYMBOL_GPL(rpc_debug);
++
+ unsigned int nfs_debug;
++EXPORT_SYMBOL_GPL(nfs_debug);
++
+ unsigned int nfsd_debug;
++EXPORT_SYMBOL_GPL(nfsd_debug);
++
+ unsigned int nlm_debug;
++EXPORT_SYMBOL_GPL(nlm_debug);
+
+ #ifdef RPC_DEBUG
+
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index fdc5e6d..5426406 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -28,6 +28,7 @@ xdr_encode_netobj(__be32 *p, const struct xdr_netobj *obj)
+ memcpy(p, obj->data, obj->len);
+ return p + XDR_QUADLEN(obj->len);
+ }
++EXPORT_SYMBOL(xdr_encode_netobj);
+
+ __be32 *
+ xdr_decode_netobj(__be32 *p, struct xdr_netobj *obj)
+@@ -40,6 +41,7 @@ xdr_decode_netobj(__be32 *p, struct xdr_netobj *obj)
+ obj->data = (u8 *) p;
+ return p + XDR_QUADLEN(len);
+ }
++EXPORT_SYMBOL(xdr_decode_netobj);
+
+ /**
+ * xdr_encode_opaque_fixed - Encode fixed length opaque data
+@@ -91,6 +93,7 @@ xdr_encode_string(__be32 *p, const char *string)
+ {
+ return xdr_encode_array(p, string, strlen(string));
+ }
++EXPORT_SYMBOL(xdr_encode_string);
+
+ __be32 *
+ xdr_decode_string_inplace(__be32 *p, char **sp, int *lenp, int maxlen)
+@@ -103,6 +106,7 @@ xdr_decode_string_inplace(__be32 *p, char **sp, int *lenp, int maxlen)
+ *sp = (char *) p;
+ return p + XDR_QUADLEN(len);
+ }
++EXPORT_SYMBOL(xdr_decode_string_inplace);
+
+ void
+ xdr_encode_pages(struct xdr_buf *xdr, struct page **pages, unsigned int base,
+@@ -130,6 +134,7 @@ xdr_encode_pages(struct xdr_buf *xdr, struct page **pages, unsigned int base,
+ xdr->buflen += len;
+ xdr->len += len;
+ }
++EXPORT_SYMBOL(xdr_encode_pages);
+
+ void
+ xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
+@@ -151,7 +156,7 @@ xdr_inline_pages(struct xdr_buf *xdr, unsigned int offset,
+
+ xdr->buflen += len;
+ }
+-
++EXPORT_SYMBOL(xdr_inline_pages);
+
+ /*
+ * Helper routines for doing 'memmove' like operations on a struct xdr_buf
+@@ -418,6 +423,7 @@ xdr_shift_buf(struct xdr_buf *buf, size_t len)
+ {
+ xdr_shrink_bufhead(buf, len);
+ }
++EXPORT_SYMBOL(xdr_shift_buf);
+
+ /**
+ * xdr_init_encode - Initialize a struct xdr_stream for sending data.
+@@ -639,6 +645,7 @@ xdr_buf_from_iov(struct kvec *iov, struct xdr_buf *buf)
+ buf->page_len = 0;
+ buf->buflen = buf->len = iov->iov_len;
+ }
++EXPORT_SYMBOL(xdr_buf_from_iov);
+
+ /* Sets subbuf to the portion of buf of length len beginning base bytes
+ * from the start of buf. Returns -1 if base of length are out of bounds. */
+@@ -687,6 +694,7 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ return -1;
+ return 0;
+ }
++EXPORT_SYMBOL(xdr_buf_subsegment);
+
+ static void __read_bytes_from_xdr_buf(struct xdr_buf *subbuf, void *obj, unsigned int len)
+ {
+@@ -717,6 +725,7 @@ int read_bytes_from_xdr_buf(struct xdr_buf *buf, unsigned int base, void *obj, u
+ __read_bytes_from_xdr_buf(&subbuf, obj, len);
+ return 0;
+ }
++EXPORT_SYMBOL(read_bytes_from_xdr_buf);
+
+ static void __write_bytes_to_xdr_buf(struct xdr_buf *subbuf, void *obj, unsigned int len)
+ {
+@@ -760,6 +769,7 @@ xdr_decode_word(struct xdr_buf *buf, unsigned int base, u32 *obj)
+ *obj = ntohl(raw);
+ return 0;
+ }
++EXPORT_SYMBOL(xdr_decode_word);
+
+ int
+ xdr_encode_word(struct xdr_buf *buf, unsigned int base, u32 obj)
+@@ -768,6 +778,7 @@ xdr_encode_word(struct xdr_buf *buf, unsigned int base, u32 obj)
+
+ return write_bytes_to_xdr_buf(buf, base, &raw, sizeof(obj));
+ }
++EXPORT_SYMBOL(xdr_encode_word);
+
+ /* If the netobj starting offset bytes from the start of xdr_buf is contained
+ * entirely in the head or the tail, set object to point to it; otherwise
+@@ -805,6 +816,7 @@ int xdr_buf_read_netobj(struct xdr_buf *buf, struct xdr_netobj *obj, unsigned in
+ __read_bytes_from_xdr_buf(&subbuf, obj->data, obj->len);
+ return 0;
+ }
++EXPORT_SYMBOL(xdr_buf_read_netobj);
+
+ /* Returns 0 on success, or else a negative error code. */
+ static int
+@@ -1010,6 +1022,7 @@ xdr_decode_array2(struct xdr_buf *buf, unsigned int base,
+
+ return xdr_xcode_array2(buf, base, desc, 0);
+ }
++EXPORT_SYMBOL(xdr_decode_array2);
+
+ int
+ xdr_encode_array2(struct xdr_buf *buf, unsigned int base,
+@@ -1021,6 +1034,7 @@ xdr_encode_array2(struct xdr_buf *buf, unsigned int base,
+
+ return xdr_xcode_array2(buf, base, desc, 1);
+ }
++EXPORT_SYMBOL(xdr_encode_array2);
+
+ int
+ xdr_process_buf(struct xdr_buf *buf, unsigned int offset, unsigned int len,
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
-index cd641c8..fb92f51 100644
+index cd641c8..cfcade9 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
-@@ -1011,9 +1011,8 @@ found:
+@@ -501,9 +501,10 @@ EXPORT_SYMBOL_GPL(xprt_set_retrans_timeout_def);
+ void xprt_set_retrans_timeout_rtt(struct rpc_task *task)
+ {
+ int timer = task->tk_msg.rpc_proc->p_timer;
+- struct rpc_rtt *rtt = task->tk_client->cl_rtt;
++ struct rpc_clnt *clnt = task->tk_client;
++ struct rpc_rtt *rtt = clnt->cl_rtt;
+ struct rpc_rqst *req = task->tk_rqstp;
+- unsigned long max_timeout = req->rq_xprt->timeout.to_maxval;
++ unsigned long max_timeout = clnt->cl_timeout->to_maxval;
+
+ task->tk_timeout = rpc_calc_rto(rtt, timer);
+ task->tk_timeout <<= rpc_ntimeo(rtt, timer) + req->rq_retries;
+@@ -514,7 +515,7 @@ EXPORT_SYMBOL_GPL(xprt_set_retrans_timeout_rtt);
+
+ static void xprt_reset_majortimeo(struct rpc_rqst *req)
+ {
+- struct rpc_timeout *to = &req->rq_xprt->timeout;
++ const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
+
+ req->rq_majortimeo = req->rq_timeout;
+ if (to->to_exponential)
+@@ -534,7 +535,7 @@ static void xprt_reset_majortimeo(struct rpc_rqst *req)
+ int xprt_adjust_timeout(struct rpc_rqst *req)
+ {
+ struct rpc_xprt *xprt = req->rq_xprt;
+- struct rpc_timeout *to = &xprt->timeout;
++ const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
+ int status = 0;
+
+ if (time_before(jiffies, req->rq_majortimeo)) {
+@@ -568,17 +569,17 @@ static void xprt_autoclose(struct work_struct *work)
+ struct rpc_xprt *xprt =
+ container_of(work, struct rpc_xprt, task_cleanup);
+
+- xprt_disconnect(xprt);
+ xprt->ops->close(xprt);
++ clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
+ xprt_release_write(xprt, NULL);
+ }
+
+ /**
+- * xprt_disconnect - mark a transport as disconnected
++ * xprt_disconnect_done - mark a transport as disconnected
+ * @xprt: transport to flag for disconnect
+ *
+ */
+-void xprt_disconnect(struct rpc_xprt *xprt)
++void xprt_disconnect_done(struct rpc_xprt *xprt)
+ {
+ dprintk("RPC: disconnected transport %p\n", xprt);
+ spin_lock_bh(&xprt->transport_lock);
+@@ -586,7 +587,26 @@ void xprt_disconnect(struct rpc_xprt *xprt)
+ xprt_wake_pending_tasks(xprt, -ENOTCONN);
+ spin_unlock_bh(&xprt->transport_lock);
+ }
+-EXPORT_SYMBOL_GPL(xprt_disconnect);
++EXPORT_SYMBOL_GPL(xprt_disconnect_done);
++
++/**
++ * xprt_force_disconnect - force a transport to disconnect
++ * @xprt: transport to disconnect
++ *
++ */
++void xprt_force_disconnect(struct rpc_xprt *xprt)
++{
++ /* Don't race with the test_bit() in xprt_clear_locked() */
++ spin_lock_bh(&xprt->transport_lock);
++ set_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ /* Try to schedule an autoclose RPC call */
++ if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
++ queue_work(rpciod_workqueue, &xprt->task_cleanup);
++ else if (xprt->snd_task != NULL)
++ rpc_wake_up_task(xprt->snd_task);
++ spin_unlock_bh(&xprt->transport_lock);
++}
++EXPORT_SYMBOL_GPL(xprt_force_disconnect);
+
+ static void
+ xprt_init_autodisconnect(unsigned long data)
+@@ -909,7 +929,7 @@ static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
+ {
+ struct rpc_rqst *req = task->tk_rqstp;
+
+- req->rq_timeout = xprt->timeout.to_initval;
++ req->rq_timeout = task->tk_client->cl_timeout->to_initval;
+ req->rq_task = task;
+ req->rq_xprt = xprt;
+ req->rq_buffer = NULL;
+@@ -959,22 +979,6 @@ void xprt_release(struct rpc_task *task)
+ }
+
+ /**
+- * xprt_set_timeout - set constant RPC timeout
+- * @to: RPC timeout parameters to set up
+- * @retr: number of retries
+- * @incr: amount of increase after each retry
+- *
+- */
+-void xprt_set_timeout(struct rpc_timeout *to, unsigned int retr, unsigned long incr)
+-{
+- to->to_initval =
+- to->to_increment = incr;
+- to->to_maxval = to->to_initval + (incr * retr);
+- to->to_retries = retr;
+- to->to_exponential = 0;
+-}
+-
+-/**
+ * xprt_create_transport - create an RPC transport
+ * @args: rpc transport creation arguments
+ *
+@@ -1011,9 +1015,8 @@ found:
INIT_LIST_HEAD(&xprt->free);
INIT_LIST_HEAD(&xprt->recv);
INIT_WORK(&xprt->task_cleanup, xprt_autoclose);
@@ -645947,9 +754193,36 @@
xprt->cwnd = RPC_INITCWND;
xprt->bind_index = 0;
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
-index ee8de7a..1aa1580 100644
+index ee8de7a..e55427f 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -83,7 +83,7 @@ static const char transfertypes[][12] = {
+ */
+
+ static int
+-rpcrdma_convert_iovs(struct xdr_buf *xdrbuf, int pos,
++rpcrdma_convert_iovs(struct xdr_buf *xdrbuf, unsigned int pos,
+ enum rpcrdma_chunktype type, struct rpcrdma_mr_seg *seg, int nsegs)
+ {
+ int len, n = 0, p;
+@@ -169,7 +169,7 @@ rpcrdma_create_chunks(struct rpc_rqst *rqst, struct xdr_buf *target,
+ struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
+ struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(rqst->rq_task->tk_xprt);
+ int nsegs, nchunks = 0;
+- int pos;
++ unsigned int pos;
+ struct rpcrdma_mr_seg *seg = req->rl_segments;
+ struct rpcrdma_read_chunk *cur_rchunk = NULL;
+ struct rpcrdma_write_array *warray = NULL;
+@@ -213,7 +213,7 @@ rpcrdma_create_chunks(struct rpc_rqst *rqst, struct xdr_buf *target,
+ (__be32 *)&cur_rchunk->rc_target.rs_offset,
+ seg->mr_base);
+ dprintk("RPC: %s: read chunk "
+- "elem %d at 0x%llx:0x%x pos %d (%s)\n", __func__,
++ "elem %d at 0x%llx:0x%x pos %u (%s)\n", __func__,
+ seg->mr_len, (unsigned long long)seg->mr_base,
+ seg->mr_rkey, pos, n < nsegs ? "more" : "last");
+ cur_rchunk++;
@@ -380,7 +380,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
headerp->rm_xid = rqst->rq_xid;
headerp->rm_vers = xdr_one;
@@ -645973,11 +754246,265 @@
headerp->rm_body.rm_padded.rm_pempty[0] = xdr_zero;
headerp->rm_body.rm_padded.rm_pempty[1] = xdr_zero;
headerp->rm_body.rm_padded.rm_pempty[2] = xdr_zero;
+@@ -552,7 +552,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
+ * RDMA'd by server. See map at rpcrdma_create_chunks()! :-)
+ */
+ static int
+-rpcrdma_count_chunks(struct rpcrdma_rep *rep, int max, int wrchunk, __be32 **iptrp)
++rpcrdma_count_chunks(struct rpcrdma_rep *rep, unsigned int max, int wrchunk, __be32 **iptrp)
+ {
+ unsigned int i, total_len;
+ struct rpcrdma_write_chunk *cur_wchunk;
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 6f2112d..02c522c 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -212,12 +212,16 @@ xprt_rdma_format_addresses(struct rpc_xprt *xprt)
+ static void
+ xprt_rdma_free_addresses(struct rpc_xprt *xprt)
+ {
+- kfree(xprt->address_strings[RPC_DISPLAY_ADDR]);
+- kfree(xprt->address_strings[RPC_DISPLAY_PORT]);
+- kfree(xprt->address_strings[RPC_DISPLAY_ALL]);
+- kfree(xprt->address_strings[RPC_DISPLAY_HEX_ADDR]);
+- kfree(xprt->address_strings[RPC_DISPLAY_HEX_PORT]);
+- kfree(xprt->address_strings[RPC_DISPLAY_UNIVERSAL_ADDR]);
++ unsigned int i;
++
++ for (i = 0; i < RPC_DISPLAY_MAX; i++)
++ switch (i) {
++ case RPC_DISPLAY_PROTO:
++ case RPC_DISPLAY_NETID:
++ continue;
++ default:
++ kfree(xprt->address_strings[i]);
++ }
+ }
+
+ static void
+@@ -289,6 +293,11 @@ xprt_rdma_destroy(struct rpc_xprt *xprt)
+ module_put(THIS_MODULE);
+ }
+
++static const struct rpc_timeout xprt_rdma_default_timeout = {
++ .to_initval = 60 * HZ,
++ .to_maxval = 60 * HZ,
++};
++
+ /**
+ * xprt_setup_rdma - Set up transport to use RDMA
+ *
+@@ -327,7 +336,7 @@ xprt_setup_rdma(struct xprt_create *args)
+ }
+
+ /* 60 second timeout, no retries */
+- xprt_set_timeout(&xprt->timeout, 0, 60UL * HZ);
++ xprt->timeout = &xprt_rdma_default_timeout;
+ xprt->bind_timeout = (60U * HZ);
+ xprt->connect_timeout = (60U * HZ);
+ xprt->reestablish_timeout = (5U * HZ);
+@@ -449,7 +458,7 @@ xprt_rdma_close(struct rpc_xprt *xprt)
+ struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
+
+ dprintk("RPC: %s: closing\n", __func__);
+- xprt_disconnect(xprt);
++ xprt_disconnect_done(xprt);
+ (void) rpcrdma_ep_disconnect(&r_xprt->rx_ep, &r_xprt->rx_ia);
+ }
+
+@@ -682,7 +691,7 @@ xprt_rdma_send_request(struct rpc_task *task)
+ }
+
+ if (rpcrdma_ep_post(&r_xprt->rx_ia, &r_xprt->rx_ep, req)) {
+- xprt_disconnect(xprt);
++ xprt_disconnect_done(xprt);
+ return -ENOTCONN; /* implies disconnect */
+ }
+
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 44b0fb9..ffbf22a 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -522,7 +522,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+ struct rpcrdma_create_data_internal *cdata)
+ {
+ struct ib_device_attr devattr;
+- int rc;
++ int rc, err;
+
+ rc = ib_query_device(ia->ri_id->device, &devattr);
+ if (rc) {
+@@ -648,8 +648,10 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+ return 0;
+
+ out2:
+- if (ib_destroy_cq(ep->rep_cq))
+- ;
++ err = ib_destroy_cq(ep->rep_cq);
++ if (err)
++ dprintk("RPC: %s: ib_destroy_cq returned %i\n",
++ __func__, err);
+ out1:
+ return rc;
+ }
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
-index 2f630a5..6fa52f4 100644
+index 2f630a5..30e7ac2 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
-@@ -838,8 +838,12 @@ static void xs_udp_data_ready(struct sock *sk, int len)
+@@ -280,7 +280,9 @@ static inline struct sockaddr_in6 *xs_addr_in6(struct rpc_xprt *xprt)
+ return (struct sockaddr_in6 *) &xprt->addr;
+ }
+
+-static void xs_format_ipv4_peer_addresses(struct rpc_xprt *xprt)
++static void xs_format_ipv4_peer_addresses(struct rpc_xprt *xprt,
++ const char *protocol,
++ const char *netid)
+ {
+ struct sockaddr_in *addr = xs_addr_in(xprt);
+ char *buf;
+@@ -299,21 +301,14 @@ static void xs_format_ipv4_peer_addresses(struct rpc_xprt *xprt)
+ }
+ xprt->address_strings[RPC_DISPLAY_PORT] = buf;
+
+- buf = kzalloc(8, GFP_KERNEL);
+- if (buf) {
+- if (xprt->prot == IPPROTO_UDP)
+- snprintf(buf, 8, "udp");
+- else
+- snprintf(buf, 8, "tcp");
+- }
+- xprt->address_strings[RPC_DISPLAY_PROTO] = buf;
++ xprt->address_strings[RPC_DISPLAY_PROTO] = protocol;
+
+ buf = kzalloc(48, GFP_KERNEL);
+ if (buf) {
+ snprintf(buf, 48, "addr="NIPQUAD_FMT" port=%u proto=%s",
+ NIPQUAD(addr->sin_addr.s_addr),
+ ntohs(addr->sin_port),
+- xprt->prot == IPPROTO_UDP ? "udp" : "tcp");
++ protocol);
+ }
+ xprt->address_strings[RPC_DISPLAY_ALL] = buf;
+
+@@ -340,12 +335,12 @@ static void xs_format_ipv4_peer_addresses(struct rpc_xprt *xprt)
+ }
+ xprt->address_strings[RPC_DISPLAY_UNIVERSAL_ADDR] = buf;
+
+- xprt->address_strings[RPC_DISPLAY_NETID] =
+- kstrdup(xprt->prot == IPPROTO_UDP ?
+- RPCBIND_NETID_UDP : RPCBIND_NETID_TCP, GFP_KERNEL);
++ xprt->address_strings[RPC_DISPLAY_NETID] = netid;
+ }
+
+-static void xs_format_ipv6_peer_addresses(struct rpc_xprt *xprt)
++static void xs_format_ipv6_peer_addresses(struct rpc_xprt *xprt,
++ const char *protocol,
++ const char *netid)
+ {
+ struct sockaddr_in6 *addr = xs_addr_in6(xprt);
+ char *buf;
+@@ -364,21 +359,14 @@ static void xs_format_ipv6_peer_addresses(struct rpc_xprt *xprt)
+ }
+ xprt->address_strings[RPC_DISPLAY_PORT] = buf;
+
+- buf = kzalloc(8, GFP_KERNEL);
+- if (buf) {
+- if (xprt->prot == IPPROTO_UDP)
+- snprintf(buf, 8, "udp");
+- else
+- snprintf(buf, 8, "tcp");
+- }
+- xprt->address_strings[RPC_DISPLAY_PROTO] = buf;
++ xprt->address_strings[RPC_DISPLAY_PROTO] = protocol;
+
+ buf = kzalloc(64, GFP_KERNEL);
+ if (buf) {
+ snprintf(buf, 64, "addr="NIP6_FMT" port=%u proto=%s",
+ NIP6(addr->sin6_addr),
+ ntohs(addr->sin6_port),
+- xprt->prot == IPPROTO_UDP ? "udp" : "tcp");
++ protocol);
+ }
+ xprt->address_strings[RPC_DISPLAY_ALL] = buf;
+
+@@ -405,17 +393,21 @@ static void xs_format_ipv6_peer_addresses(struct rpc_xprt *xprt)
+ }
+ xprt->address_strings[RPC_DISPLAY_UNIVERSAL_ADDR] = buf;
+
+- xprt->address_strings[RPC_DISPLAY_NETID] =
+- kstrdup(xprt->prot == IPPROTO_UDP ?
+- RPCBIND_NETID_UDP6 : RPCBIND_NETID_TCP6, GFP_KERNEL);
++ xprt->address_strings[RPC_DISPLAY_NETID] = netid;
+ }
+
+ static void xs_free_peer_addresses(struct rpc_xprt *xprt)
+ {
+- int i;
++ unsigned int i;
+
+ for (i = 0; i < RPC_DISPLAY_MAX; i++)
+- kfree(xprt->address_strings[i]);
++ switch (i) {
++ case RPC_DISPLAY_PROTO:
++ case RPC_DISPLAY_NETID:
++ continue;
++ default:
++ kfree(xprt->address_strings[i]);
++ }
+ }
+
+ #define XS_SENDMSG_FLAGS (MSG_DONTWAIT | MSG_NOSIGNAL)
+@@ -614,6 +606,22 @@ static int xs_udp_send_request(struct rpc_task *task)
+ return status;
+ }
+
++/**
++ * xs_tcp_shutdown - gracefully shut down a TCP socket
++ * @xprt: transport
++ *
++ * Initiates a graceful shutdown of the TCP socket by calling the
++ * equivalent of shutdown(SHUT_WR);
++ */
++static void xs_tcp_shutdown(struct rpc_xprt *xprt)
++{
++ struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
++ struct socket *sock = transport->sock;
++
++ if (sock != NULL)
++ kernel_sock_shutdown(sock, SHUT_WR);
++}
++
+ static inline void xs_encode_tcp_record_marker(struct xdr_buf *buf)
+ {
+ u32 reclen = buf->len - sizeof(rpc_fraghdr);
+@@ -691,7 +699,7 @@ static int xs_tcp_send_request(struct rpc_task *task)
+ default:
+ dprintk("RPC: sendmsg returned unrecognized error %d\n",
+ -status);
+- xprt_disconnect(xprt);
++ xs_tcp_shutdown(xprt);
+ break;
+ }
+
+@@ -759,7 +767,9 @@ static void xs_close(struct rpc_xprt *xprt)
+ clear_close_wait:
+ smp_mb__before_clear_bit();
+ clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ clear_bit(XPRT_CLOSING, &xprt->state);
+ smp_mb__after_clear_bit();
++ xprt_disconnect_done(xprt);
+ }
+
+ /**
+@@ -775,7 +785,6 @@ static void xs_destroy(struct rpc_xprt *xprt)
+
+ cancel_rearming_delayed_work(&transport->connect_worker);
+
+- xprt_disconnect(xprt);
+ xs_close(xprt);
+ xs_free_peer_addresses(xprt);
+ kfree(xprt->slot);
+@@ -838,8 +847,12 @@ static void xs_udp_data_ready(struct sock *sk, int len)
copied = repsize;
/* Suck it into the iovec, verify checksum if not done by hw. */
@@ -645991,6 +754518,318 @@
/* Something worked... */
dst_confirm(skb->dst);
+@@ -882,7 +895,7 @@ static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_rea
+ /* Sanity check of the record length */
+ if (unlikely(transport->tcp_reclen < 4)) {
+ dprintk("RPC: invalid TCP record fragment length\n");
+- xprt_disconnect(xprt);
++ xprt_force_disconnect(xprt);
+ return;
+ }
+ dprintk("RPC: reading TCP record fragment of length %d\n",
+@@ -1109,21 +1122,44 @@ static void xs_tcp_state_change(struct sock *sk)
+ transport->tcp_flags =
+ TCP_RCV_COPY_FRAGHDR | TCP_RCV_COPY_XID;
+
+- xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
+ xprt_wake_pending_tasks(xprt, 0);
+ }
+ spin_unlock_bh(&xprt->transport_lock);
+ break;
+- case TCP_SYN_SENT:
+- case TCP_SYN_RECV:
++ case TCP_FIN_WAIT1:
++ /* The client initiated a shutdown of the socket */
++ xprt->reestablish_timeout = 0;
++ set_bit(XPRT_CLOSING, &xprt->state);
++ smp_mb__before_clear_bit();
++ clear_bit(XPRT_CONNECTED, &xprt->state);
++ clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ smp_mb__after_clear_bit();
+ break;
+ case TCP_CLOSE_WAIT:
+- /* Try to schedule an autoclose RPC calls */
+- set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+- if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+- queue_work(rpciod_workqueue, &xprt->task_cleanup);
+- default:
+- xprt_disconnect(xprt);
++ /* The server initiated a shutdown of the socket */
++ set_bit(XPRT_CLOSING, &xprt->state);
++ xprt_force_disconnect(xprt);
++ case TCP_SYN_SENT:
++ case TCP_CLOSING:
++ /*
++ * If the server closed down the connection, make sure that
++ * we back off before reconnecting
++ */
++ if (xprt->reestablish_timeout < XS_TCP_INIT_REEST_TO)
++ xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
++ break;
++ case TCP_LAST_ACK:
++ smp_mb__before_clear_bit();
++ clear_bit(XPRT_CONNECTED, &xprt->state);
++ smp_mb__after_clear_bit();
++ break;
++ case TCP_CLOSE:
++ smp_mb__before_clear_bit();
++ clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ clear_bit(XPRT_CLOSING, &xprt->state);
++ smp_mb__after_clear_bit();
++ /* Mark transport as closed and wake up all pending tasks */
++ xprt_disconnect_done(xprt);
+ }
+ out:
+ read_unlock(&sk->sk_callback_lock);
+@@ -1275,34 +1311,53 @@ static void xs_set_port(struct rpc_xprt *xprt, unsigned short port)
+ }
+ }
+
++static unsigned short xs_get_srcport(struct sock_xprt *transport, struct socket *sock)
++{
++ unsigned short port = transport->port;
++
++ if (port == 0 && transport->xprt.resvport)
++ port = xs_get_random_port();
++ return port;
++}
++
++static unsigned short xs_next_srcport(struct sock_xprt *transport, struct socket *sock, unsigned short port)
++{
++ if (transport->port != 0)
++ transport->port = 0;
++ if (!transport->xprt.resvport)
++ return 0;
++ if (port <= xprt_min_resvport || port > xprt_max_resvport)
++ return xprt_max_resvport;
++ return --port;
++}
++
+ static int xs_bind4(struct sock_xprt *transport, struct socket *sock)
+ {
+ struct sockaddr_in myaddr = {
+ .sin_family = AF_INET,
+ };
+ struct sockaddr_in *sa;
+- int err;
+- unsigned short port = transport->port;
++ int err, nloop = 0;
++ unsigned short port = xs_get_srcport(transport, sock);
++ unsigned short last;
+
+- if (!transport->xprt.resvport)
+- port = 0;
+ sa = (struct sockaddr_in *)&transport->addr;
+ myaddr.sin_addr = sa->sin_addr;
+ do {
+ myaddr.sin_port = htons(port);
+ err = kernel_bind(sock, (struct sockaddr *) &myaddr,
+ sizeof(myaddr));
+- if (!transport->xprt.resvport)
++ if (port == 0)
+ break;
+ if (err == 0) {
+ transport->port = port;
+ break;
+ }
+- if (port <= xprt_min_resvport)
+- port = xprt_max_resvport;
+- else
+- port--;
+- } while (err == -EADDRINUSE && port != transport->port);
++ last = port;
++ port = xs_next_srcport(transport, sock, port);
++ if (port > last)
++ nloop++;
++ } while (err == -EADDRINUSE && nloop != 2);
+ dprintk("RPC: %s "NIPQUAD_FMT":%u: %s (%d)\n",
+ __FUNCTION__, NIPQUAD(myaddr.sin_addr),
+ port, err ? "failed" : "ok", err);
+@@ -1315,28 +1370,27 @@ static int xs_bind6(struct sock_xprt *transport, struct socket *sock)
+ .sin6_family = AF_INET6,
+ };
+ struct sockaddr_in6 *sa;
+- int err;
+- unsigned short port = transport->port;
++ int err, nloop = 0;
++ unsigned short port = xs_get_srcport(transport, sock);
++ unsigned short last;
+
+- if (!transport->xprt.resvport)
+- port = 0;
+ sa = (struct sockaddr_in6 *)&transport->addr;
+ myaddr.sin6_addr = sa->sin6_addr;
+ do {
+ myaddr.sin6_port = htons(port);
+ err = kernel_bind(sock, (struct sockaddr *) &myaddr,
+ sizeof(myaddr));
+- if (!transport->xprt.resvport)
++ if (port == 0)
+ break;
+ if (err == 0) {
+ transport->port = port;
+ break;
+ }
+- if (port <= xprt_min_resvport)
+- port = xprt_max_resvport;
+- else
+- port--;
+- } while (err == -EADDRINUSE && port != transport->port);
++ last = port;
++ port = xs_next_srcport(transport, sock, port);
++ if (port > last)
++ nloop++;
++ } while (err == -EADDRINUSE && nloop != 2);
+ dprintk("RPC: xs_bind6 "NIP6_FMT":%u: %s (%d)\n",
+ NIP6(myaddr.sin6_addr), port, err ? "failed" : "ok", err);
+ return err;
+@@ -1598,8 +1652,7 @@ static void xs_tcp_connect_worker4(struct work_struct *work)
+ break;
+ default:
+ /* get rid of existing socket, and retry */
+- xs_close(xprt);
+- break;
++ xs_tcp_shutdown(xprt);
+ }
+ }
+ out:
+@@ -1658,8 +1711,7 @@ static void xs_tcp_connect_worker6(struct work_struct *work)
+ break;
+ default:
+ /* get rid of existing socket, and retry */
+- xs_close(xprt);
+- break;
++ xs_tcp_shutdown(xprt);
+ }
+ }
+ out:
+@@ -1706,6 +1758,19 @@ static void xs_connect(struct rpc_task *task)
+ }
+ }
+
++static void xs_tcp_connect(struct rpc_task *task)
++{
++ struct rpc_xprt *xprt = task->tk_xprt;
++
++ /* Initiate graceful shutdown of the socket if not already done */
++ if (test_bit(XPRT_CONNECTED, &xprt->state))
++ xs_tcp_shutdown(xprt);
++ /* Exit if we need to wait for socket shutdown to complete */
++ if (test_bit(XPRT_CLOSING, &xprt->state))
++ return;
++ xs_connect(task);
++}
++
+ /**
+ * xs_udp_print_stats - display UDP socket-specifc stats
+ * @xprt: rpc_xprt struct containing statistics
+@@ -1776,12 +1841,12 @@ static struct rpc_xprt_ops xs_tcp_ops = {
+ .release_xprt = xs_tcp_release_xprt,
+ .rpcbind = rpcb_getport_async,
+ .set_port = xs_set_port,
+- .connect = xs_connect,
++ .connect = xs_tcp_connect,
+ .buf_alloc = rpc_malloc,
+ .buf_free = rpc_free,
+ .send_request = xs_tcp_send_request,
+ .set_retrans_timeout = xprt_set_retrans_timeout_def,
+- .close = xs_close,
++ .close = xs_tcp_shutdown,
+ .destroy = xs_destroy,
+ .print_stats = xs_tcp_print_stats,
+ };
+@@ -1818,11 +1883,17 @@ static struct rpc_xprt *xs_setup_xprt(struct xprt_create *args,
+ xprt->addrlen = args->addrlen;
+ if (args->srcaddr)
+ memcpy(&new->addr, args->srcaddr, args->addrlen);
+- new->port = xs_get_random_port();
+
+ return xprt;
+ }
+
++static const struct rpc_timeout xs_udp_default_timeout = {
++ .to_initval = 5 * HZ,
++ .to_maxval = 30 * HZ,
++ .to_increment = 5 * HZ,
++ .to_retries = 5,
++};
++
+ /**
+ * xs_setup_udp - Set up transport to use a UDP socket
+ * @args: rpc transport creation arguments
+@@ -1851,10 +1922,7 @@ static struct rpc_xprt *xs_setup_udp(struct xprt_create *args)
+
+ xprt->ops = &xs_udp_ops;
+
+- if (args->timeout)
+- xprt->timeout = *args->timeout;
+- else
+- xprt_set_timeout(&xprt->timeout, 5, 5 * HZ);
++ xprt->timeout = &xs_udp_default_timeout;
+
+ switch (addr->sa_family) {
+ case AF_INET:
+@@ -1863,7 +1931,7 @@ static struct rpc_xprt *xs_setup_udp(struct xprt_create *args)
+
+ INIT_DELAYED_WORK(&transport->connect_worker,
+ xs_udp_connect_worker4);
+- xs_format_ipv4_peer_addresses(xprt);
++ xs_format_ipv4_peer_addresses(xprt, "udp", RPCBIND_NETID_UDP);
+ break;
+ case AF_INET6:
+ if (((struct sockaddr_in6 *)addr)->sin6_port != htons(0))
+@@ -1871,7 +1939,7 @@ static struct rpc_xprt *xs_setup_udp(struct xprt_create *args)
+
+ INIT_DELAYED_WORK(&transport->connect_worker,
+ xs_udp_connect_worker6);
+- xs_format_ipv6_peer_addresses(xprt);
++ xs_format_ipv6_peer_addresses(xprt, "udp", RPCBIND_NETID_UDP6);
+ break;
+ default:
+ kfree(xprt);
+@@ -1889,6 +1957,12 @@ static struct rpc_xprt *xs_setup_udp(struct xprt_create *args)
+ return ERR_PTR(-EINVAL);
+ }
+
++static const struct rpc_timeout xs_tcp_default_timeout = {
++ .to_initval = 60 * HZ,
++ .to_maxval = 60 * HZ,
++ .to_retries = 2,
++};
++
+ /**
+ * xs_setup_tcp - Set up transport to use a TCP socket
+ * @args: rpc transport creation arguments
+@@ -1915,11 +1989,7 @@ static struct rpc_xprt *xs_setup_tcp(struct xprt_create *args)
+ xprt->idle_timeout = XS_IDLE_DISC_TO;
+
+ xprt->ops = &xs_tcp_ops;
+-
+- if (args->timeout)
+- xprt->timeout = *args->timeout;
+- else
+- xprt_set_timeout(&xprt->timeout, 2, 60 * HZ);
++ xprt->timeout = &xs_tcp_default_timeout;
+
+ switch (addr->sa_family) {
+ case AF_INET:
+@@ -1927,14 +1997,14 @@ static struct rpc_xprt *xs_setup_tcp(struct xprt_create *args)
+ xprt_set_bound(xprt);
+
+ INIT_DELAYED_WORK(&transport->connect_worker, xs_tcp_connect_worker4);
+- xs_format_ipv4_peer_addresses(xprt);
++ xs_format_ipv4_peer_addresses(xprt, "tcp", RPCBIND_NETID_TCP);
+ break;
+ case AF_INET6:
+ if (((struct sockaddr_in6 *)addr)->sin6_port != htons(0))
+ xprt_set_bound(xprt);
+
+ INIT_DELAYED_WORK(&transport->connect_worker, xs_tcp_connect_worker6);
+- xs_format_ipv6_peer_addresses(xprt);
++ xs_format_ipv6_peer_addresses(xprt, "tcp", RPCBIND_NETID_TCP6);
+ break;
+ default:
+ kfree(xprt);
diff --git a/net/sysctl_net.c b/net/sysctl_net.c
index cd4eafb..665e856 100644
--- a/net/sysctl_net.c
@@ -655434,6 +764273,27 @@
int irqstatus, status;
spin_lock(&waveartist_lock);
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index b4a38a3..4bb9764 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -711,11 +711,13 @@ static void snd_intel8x0_setup_periods(struct intel8x0 *chip, struct ichdev *ich
+ static void fill_nocache(void *buf, int size, int nocache)
+ {
+ size = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+- change_page_attr(virt_to_page(buf), size, nocache ? PAGE_KERNEL_NOCACHE : PAGE_KERNEL);
+- global_flush_tlb();
++ if (nocache)
++ set_pages_uc(virt_to_page(buf), size);
++ else
++ set_pages_wb(virt_to_page(buf), size);
+ }
+ #else
+-#define fill_nocache(buf,size,nocache)
++#define fill_nocache(buf, size, nocache) do { ; } while (0)
+ #endif
+
+ /*
diff --git a/sound/ppc/keywest.c b/sound/ppc/keywest.c
index 272ae38..bb7d744 100644
--- a/sound/ppc/keywest.c
Modified: dists/trunk/linux-2.6/debian/patches/series/1~experimental.1
==============================================================================
--- dists/trunk/linux-2.6/debian/patches/series/1~experimental.1 (original)
+++ dists/trunk/linux-2.6/debian/patches/series/1~experimental.1 Wed Jan 30 21:52:05 2008
@@ -1,4 +1,4 @@
-+ bugfix/all/patch-2.6.24-git7
++ bugfix/all/patch-2.6.24-git8
+ debian/version.patch
+ debian/kernelvariables.patch
+ debian/doc-build-parallel.patch